\section{Reproducibility}
\subsection{Experimental Setup}
\paragraph{Computing Infrastructure}
For all of our experiments, we used a computation cluster with 4 NVIDIA Tesla V100 GPUs, 32GB of GPU memory, and 256GB of RAM.
\paragraph{Implementation}
We used Python 3.7, PyTorch 1.4.0, and Transformers 2.8.0 for all our experiments. We obtain our datasets from the citations specified in the main paper, and link to the repositories of all libraries we use.
\paragraph{Hyperparameter Search}
For our hyperparameter searches, we perform a uniformly random search over learning rate and batch size, with ranges specified in Table~\ref{tab:ceparams}, optimizing for development accuracy. We find the optimal learning rate and batch size to be $1e-5$ and $80$, respectively.
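The search procedure can be sketched as follows. This is a minimal sketch, not our actual training code: \texttt{evaluate} is a hypothetical callback that trains a model with the given settings and returns development accuracy.

```python
import random

def random_search(evaluate, trials=30, seed=0):
    """Uniform random search over learning rate and batch size.
    `evaluate(lr, bs)` is a stand-in for a full training run that
    returns development accuracy for that configuration."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = rng.uniform(1e-6, 9e-4)   # learning rate drawn from the tuned range
        bs = rng.randint(8, 120)       # batch size drawn from the tuned range
        acc = evaluate(lr, bs)
        if best is None or acc > best[0]:
            best = (acc, lr, bs)
    return best  # (dev accuracy, learning rate, batch size)
```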
\paragraph{Evaluation}
For query matching, we use scikit-learn\footnote{\url{https://scikit-learn.org/stable/}} to calculate the accuracy. For end-to-end performance, we use the MLQA evaluation script to obtain the F1 score of the results.\footnote{\url{https://github.com/facebookresearch/MLQA}}
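The token-overlap F1 computed by the MLQA-style evaluation can be sketched as follows. This is a simplified illustration: the real script also applies language-specific answer normalization (punctuation, articles, whitespace handling) before comparing tokens.

```python
from collections import Counter

def token_f1(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer string,
    in the style of the SQuAD/MLQA evaluation scripts (simplified:
    lowercasing and whitespace tokenization only)."""
    pred_toks = prediction.lower().split()
    gold_toks = gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```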
\paragraph{Datasets}
We use the sentences in each dataset as-is, and rely on the pretrained tokenizer for each model to perform preprocessing.
\subsection{Model Training}
\paragraph{Query Paraphrase Dataset}
We found the optimal training combination of the PAWS-X and QQP datasets by training XLM-R classifiers on (PAWS-X, QQP) training-set compositions of $(100\%, 0\%)$, $(75\%, 25\%)$, and $(50\%, 50\%)$ -- where the stated PAWS-X share always comprises the entirety of the PAWS-X dataset -- and observing performance on matching multilingual XQuAD queries.
We shuffle the examples in the training set, and restrict the input examples to being (English, LRL) pairs.
We perform a hyperparameter search as specified in Table~\ref{tab:ceparams} for each dataset composition, and report the test results in Table~\ref{tab:xxencomp}.
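A sketch of how such compositions can be constructed. This is a hypothetical helper, not our actual data pipeline: the fraction is the share of PAWS-X examples in the mixed training set, with all of PAWS-X always included and QQP subsampled to fill the remainder.

```python
import random

def mix_datasets(pawsx, qqp, pawsx_frac, seed=0):
    """Mix PAWS-X and QQP so that PAWS-X makes up `pawsx_frac` of the
    final training set. All of PAWS-X is kept; QQP is subsampled to
    fill the remaining share, and the result is shuffled."""
    if pawsx_frac >= 1.0:
        n_qqp = 0
    else:
        n_qqp = round(len(pawsx) * (1 - pawsx_frac) / pawsx_frac)
    rng = random.Random(seed)
    mixed = list(pawsx) + rng.sample(list(qqp), min(n_qqp, len(qqp)))
    rng.shuffle(mixed)
    return mixed
```

For the $(75\%, 25\%)$ composition, a PAWS-X set of 30 examples would be padded with 10 QQP examples, yielding a 40-example training set that is 75\% PAWS-X.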
\begin{table}[h]
\centering
\begin{tabular}{c|c}
(PAWS-X, QQP) & XQuAD \\\hline
$(100\%, 0\%)$ & 0.847 \\
$(75\%, 25\%)$ & \textbf{0.985}\\
$(50\%, 50\%)$ & 0.979
\end{tabular}
\caption{\label{tab:xxencomp}\textbf{XLM-R Query Paraphrase Performance On Different Query Compositions.} The performance of XLM-Roberta on matching XQuAD test queries when finetuned on different training set compositions of PAWS-X and QQP.}
\end{table}
\subsection{Cross Encoder}
We start with the pretrained \texttt{xlm-roberta-large} checkpoint in Huggingface's transformers\footnote{\url{https://github.com/huggingface/transformers}} library and perform a hyperparameter search with the parameters specified in Table~\ref{tab:ceparams}, using a modified version of Huggingface's text classification training pipeline for GLUE.
This cross encoder is used in all RM-MIPS methods; in particular, in the RM-MIPS (mUSE), RM-MIPS (LASER), and RM-MIPS (XLM-R) rows of the tables in the main paper.
\begin{table}[h]
\small
\centering
\begin{tabular}{ll}
\toprule
\textsc{Model Parameters} & \textsc{Value/Range} \\
\midrule
\textbf{Fixed Parameters} & {} \\
\midrule
Model & XLM-Roberta Large \\
Num Epochs & 3 \\
Dropout & 0.1 \\
Optimizer & Adam \\
Learning Rate Schedule & Linear Decay \\
Max Sequence Length & 128 \\
\midrule
\textbf{Tuned Parameters} & {} \\
\midrule
Batch Size & [8, 120] \\
Learning Rate & [$1e-6$, $9e-4$] \\
\midrule
\textbf{Extra Info} & {} \\
\midrule
Model Size (\# params) & $550M$ \\
Vocab Size & 250,002 \\
Trials & 30 \\
\bottomrule
\end{tabular}
\caption{\label{tab:ceparams}
\textbf{Cross Encoder Hyperparameter Selection and Tuning Ranges.} The hyperparameters we fixed and the ranges we searched over for XLM-Roberta Large on the query paraphrase detection datasets.
}
\end{table}
\section{Full Results Breakdowns}
\subsection{LRL$\rightarrow$HRL Results}
See Tables~\ref{tab:mkqatoen} and~\ref{tab:xquadtoen} for the non-aggregated per-language LRL$\rightarrow$HRL performance of each method on MKQA and XQuAD, respectively.
\subsection{LRL$\rightarrow$HRL$\rightarrow$LRL Results}
See Tables~\ref{tab:mkqaendtoend} and~\ref{tab:xquadendtoend} for the non-aggregated per-language LRL$\rightarrow$HRL$\rightarrow$LRL performance of each method on MKQA and XQuAD, respectively.
\begin{table*}[htbp]
\centerline{
\begin{small}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c}
\toprule
& ar & $\text{zh}_\text{cn}$ & da & de & es & fi & fr & he & $\text{zh}_\text{hk}$ & hu & it & ja & km \\
\midrule
NMT + MIPS & 69.2 & 48.0 & 89.8 & 87.5 & 86.5 & 76.0 & 87.6 & 74.3 & 42.5 & 79.1 & 86.6 & 62.0 & 45.4 \\
mUSE & 80.0 & 83.2 & 51.7 & 90.9 & 91.7 & 37.6 & 91.5 & 33.5 & 80.8 & 40.7 & 91.6 & 80.0 & 35.6 \\
LASER & 81.5 & 62.8 & 88.6 & 52.0 & 79.9 & 81.6 & 78.5 & 85.5 & 64.0 & 69.1 & 80.4 & 39.3 & 40.2 \\
Single Encoder (XLM-R) & 58.0 & 76.3 & 84.8 & 74.6 & 73.3 & 65.5 & 74.1 & 67.8 & 77.0 & 66.9 & 69.0 & 71.4 & 59.0 \\
RM-MIPS (mUSE) & 77.6 & 81.2 & 77.2 & 88.8 & 88.9 & 59.9 & 88.8 & 44.1 & 81.2 & 64.1 & 88.4 & 81.2 & 50.6 \\
RM-MIPS (LASER) & 77.2 & 77.7 & 89.2 & 66.9 & 84.7 & 84.8 & 84.4 & 83.3 & 78.1 & 77.5 & 84.7 & 64.2 & 48.2 \\
RM-MIPS (XLM-R) & 72.6 & 80.7 & 90.1 & 86.8 & 87.0 & 82.7 & 87.2 & 80.6 & 81.0 & 81.4 & 85.5 & 79.5 & 72.4 \\
\midrule \midrule
& ko & ms & nl & no & pl & pt & ru & sv & th & tr & $\text{zh}_\text{tw}$ & vi \\
\midrule
NMT + MIPS & 54.2 & 86.0 & 88.8 & 87.2 & 81.9 & 87.4 & 81.9 & 87.2 & 75.0 & 79.6 & 39.7 & 76.0 \\
mUSE & 73.7 & 87.6 & 92.0 & 50.3 & 84.9 & 93.3 & 87.2 & 50.3 & 88.6 & 87.0 & 73.2 & 38.6 \\
LASER & 68.6 & 92.5 & 93.1 & 92.4 & 73.7 & 85.2 & 78.1 & 92.8 & 62.1 & 75.2 & 57.9 & 79.9 \\
Single Encoder (XLM-R) & 72.3 & 76.4 & 79.1 & 81.3 & 70.6 & 65.7 & 78.8 & 83.8 & 79.4 & 68.7 & 71.2 & 78.6 \\
RM-MIPS (mUSE) & 74.7 & 89.9 & 90.9 & 75.6 & 87.3 & 89.8 & 87.1 & 76.0 & 86.8 & 86.6 & 75.0 & 64.4 \\
RM-MIPS (LASER) & 73.1 & 89.5 & 90.2 & 89.7 & 81.9 & 86.7 & 84.3 & 89.8 & 77.0 & 82.3 & 72.5 & 84.0 \\
RM-MIPS (XLM-R) & 75.2 & 89.0 & 89.8 & 88.8 & 85.6 & 85.5 & 86.1 & 90.0 & 85.4 & 83.6 & 75.5 & 85.9 \\
\bottomrule
\end{tabular}
\end{small}}
\caption{\label{tab:mkqatoen}\textbf{MKQA + Natural Questions Per-Language LRL$\rightarrow$HRL Results.} The accuracy scores for each method on query matching.}
\end{table*}
\begin{table*}[htbp]
\centering
\begin{small}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c}
\toprule
& ar & de & el & es & hi & ru & th & tr & vi & zh \\
\midrule
NMT + MIPS & 71.7 & 90.8 & 86.7 & 95.2 & 79.9 & 85.7 & 67.4 & 82.9 & 74.8 & 41.8 \\
mUSE & 87.4 & 96.4 & 7.5 & 98.1 & 3.4 & 93.2 & 91.6 & 94.1 & 17.8 & 90.3 \\
LASER & 61.7 & 33.1 & 3.7 & 86.2 & 28.6 & 70.4 & 24.2 & 65.3 & 64.7 & 29.2 \\
Single Encoder (XLM-R) & 66.8 & 85.1 & 81.7 & 87.8 & 77.6 & 85.0 & 76.6 & 81.9 & 89.4 & 82.3 \\
RM-MIPS (mUSE) & 90.4 & 96.3 & 14.8 & 97.3 & 10.1 & 93.2 & 92.6 & 95.7 & 39.3 & 91.0 \\
RM-MIPS (LASER) & 81.6 & 59.9 & 11.1 & 95.5 & 59.1 & 88.3 & 55.5 & 89.0 & 85.7 & 66.2 \\
RM-MIPS (XLM-R) & 86.6 & 94.2 & 94.1 & 95.5 & 92.0 & 93.0 & 90.7 & 92.5 & 92.1 & 90.8 \\
\bottomrule
\end{tabular}
\end{small}
\caption{\label{tab:xquadtoen}\textbf{XQuAD + SQuAD Per-Language LRL$\rightarrow$HRL Results.} The accuracy scores for each method on query matching.}
\end{table*}
\newpage
\begin{table*}[htbp]
\centerline{
\begin{small}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c}
\toprule
& ar & $\text{zh}_\text{cn}$ & da & de & es & fi & fr & he & $\text{zh}_\text{hk}$ & hu & it & ja & km \\
\midrule
NMT + MIPS & 60.0 & 41.7 & 85.8 & 83.8 & 82.4 & 72.0 & 83.7 & 63.3 & 41.2 & 74.5 & 82.5 & 60.1 & 44.8 \\
mUSE & 68.6 & 62.7 & 50.1 & 87.2 & 87.4 & 37.2 & 87.5 & 31.9 & 68.7 & 40.0 & 87.2 & 74.9 & 35.0 \\
LASER & 70.1 & 49.5 & 84.6 & 50.8 & 76.3 & 77.3 & 75.3 & 72.8 & 56.2 & 65.0 & 76.8 & 39.1 & 38.1 \\
Single Encoder (XLM-R) & 50.9 & 57.5 & 81.0 & 71.7 & 70.2 & 62.0 & 70.9 & 58.6 & 65.8 & 63.1 & 66.0 & 68.0 & 54.9 \\
RM-MIPS (mUSE) & 66.9 & 61.3 & 74.4 & 85.2 & 84.8 & 58.0 & 84.9 & 39.9 & 68.8 & 61.5 & 84.1 & 75.8 & 46.0 \\
RM-MIPS (LASER) & 66.7 & 59.0 & 85.0 & 64.6 & 80.7 & 80.3 & 80.6 & 71.0 & 66.3 & 72.7 & 80.6 & 61.7 & 45.3 \\
RM-MIPS (XLM-R) & 62.8 & 60.8 & 85.9 & 83.3 & 83.1 & 78.4 & 83.3 & 68.7 & 68.6 & 76.6 & 81.5 & 74.4 & 64.4 \\
\midrule\midrule
& ko & ms & nl & no & pl & pt & ru & sv & th & tr & $\text{zh}_\text{tw}$ & vi & \\
\midrule
NMT + MIPS & 47.5 & 81.1 & 85.3 & 80.2 & 77.6 & 83.3 & 72.6 & 84.1 & 62.9 & 74.7 & 35.2 & 70.6 & \\
mUSE & 63.0 & 82.7 & 88.5 & 48.4 & 80.4 & 88.9 & 77.2 & 49.4 & 72.2 & 81.7 & 55.6 & 37.7 & \\
LASER & 59.1 & 87.4 & 89.7 & 85.1 & 70.0 & 81.2 & 69.4 & 89.5 & 53.7 & 70.7 & 45.7 & 74.4 & \\
Single Encoder (XLM-R) & 62.5 & 72.2 & 76.0 & 75.2 & 67.0 & 62.5 & 70.1 & 80.8 & 66.3 & 64.6 & 54.1 & 73.1 & \\
RM-MIPS (mUSE) & 64.2 & 84.8 & 87.3 & 70.6 & 82.5 & 85.3 & 77.1 & 73.9 & 70.7 & 81.2 & 56.6 & 61.3 & \\
RM-MIPS (LASER) & 63.1 & 84.4 & 86.7 & 81.8 & 77.3 & 82.4 & 74.7 & 86.6 & 64.3 & 77.1 & 55.1 & 78.2 & \\
RM-MIPS (XLM-R) & 64.7 & 84.0 & 86.3 & 81.6 & 81.0 & 81.3 & 76.3 & 86.9 & 69.9 & 78.6 & 56.8 & 79.9 & \\
\bottomrule
\end{tabular}
\end{small}}
\caption{\label{tab:mkqaendtoend}\textbf{MKQA + Natural Questions Per-Language LRL$\rightarrow$HRL$\rightarrow$LRL WikiData Results.} The F1 scores for end-to-end performance of each method on every language when using WikiData translation.}
\end{table*}
\begin{table*}[htbp]
\centering
\begin{small}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c}
\toprule
& ar & de & el & es & hi & ru & th & tr & vi & zh \\
\midrule
NMT + MIPS & 35.3 & 55.5 & 39.2 & 68.2 & 32.9 & 30.7 & 17.8 & 42.1 & 45.6 & 19.0 \\
mUSE & 40.8 & 58.2 & 4.4 & 70.0 & 1.6 & 33.4 & 23.4 & 47.0 & 11.8 & 33.6 \\
LASER & 29.9 & 22.7 & 1.5 & 61.8 & 10.8 & 24.2 & 6.4 & 33.0 & 38.6 & 12.7 \\
Single Encoder (XLM-R) & 31.3 & 52.9 & 37.3 & 63.9 & 30.9 & 30.1 & 18.6 & 42.0 & 52.7 & 30.6 \\
RM-MIPS (mUSE) & 42.6 & 58.1 & 7.8 & 69.6 & 4.2 & 33.4 & 23.2 & 47.5 & 26.1 & 33.8 \\
RM-MIPS (LASER) & 38.3 & 38.2 & 5.7 & 68.3 & 22.9 & 31.1 & 13.7 & 44.5 & 50.7 & 26.3 \\
RM-MIPS (XLM-R) & 40.9 & 57.3 & 42.1 & 68.7 & 36.7 & 33.0 & 22.9 & 45.7 & 54.5 & 33.6 \\
\bottomrule
\end{tabular}
\end{small}
\caption{\label{tab:xquadendtoend}\textbf{XQuAD + SQuAD Per-Language LRL$\rightarrow$HRL$\rightarrow$LRL NMT Results.} The F1 scores for end-to-end performance of each method on every language when using NMT translation.}
\end{table*}
\section{Re-Ranked Multilingual Maximal Inner Product Search}
\vspace*{-.1cm}
\label{sec:methods}
For the first stage of the XLP task, our goal is to find an equivalent English query for an LRL query: ``Query Matching''.
Competing approaches include Single Encoders and Cross Encoders, described further in Section~\ref{sec:query-matching}.
Single Encoders embed queries independently into a latent vector space, meaning each query $q_{EN}$ from the English database $Q_{EN}$ can be pre-embedded offline.
At inference time, the low resource query $q_{LRL}$ is embedded, then maximal inner product search (MIPS) finds the approximate closest query $q_{EN}$ among all $Q_{EN}$ by cosine similarity.
By comparison, Cross Encoders leverage cross-attention between $q_{LRL}$ and each candidate match $q_{EN}$, requiring $O(|Q_{EN}|)$ forward passes at inference time to find the best paraphrase.
While usually more accurate, this is computationally infeasible for a large set of candidates.
We propose a method that combines both Single Encoders and Cross Encoders, which we refer to as Reranked Multilingual Maximal Inner Product Search (RM-MIPS).
The process, shown in Figure~\ref{fig:xxen_diagram}, first uses a multilingual sentence embedder with MIPS to isolate the top-k candidate similar queries, then uses the cross encoder to rerank the candidate paraphrases.
This approach reflects the Retrieve and Read paradigm common in OR-QA, but applies it to a multilingual setting for semantic similarity search.
The model first queries the English database using the Multilingual Single Encoder $\mathrm{SE}(q_i) = z_{i}$ to obtain the $k$-nearest English query neighbors $\mathcal{N}_{q_{LRL}} \subseteq Q_{EN}$ to the given query $q_{LRL}$ by cosine similarity.
\vspace*{-.1cm}
$$\mathcal{N}_{q_{\scaleto{LRL}{3pt}}} = \argmax_{\{q_1, ..., q_k\} \subseteq Q_{\scaleto{EN}{4pt}}} \sum_{i=1}^{k} \mathrm{sim}(z_{\scaleto{LRL}{4pt}}, z_i)$$
\vspace*{-.1cm}
Then, it uses the Multilingual Cross Encoder $\mathrm{CE}(q_1, q_2)$ to score the remaining set of queries $\mathcal{N}_{q_{\scaleto{LRL}{3pt}}}$ to obtain the final prediction.
\vspace*{-.1cm}
$$\mathrm{\textbf{RM-MIPS}}(q_{\scaleto{LRL}{4pt}}) = \argmax_{q_{\scaleto{EN}{4pt}} \in \mathcal{N}_{q_{\scaleto{LRL}{3pt}}}} \mathrm{CE}(q_{\scaleto{EN}{4pt}}, q_{\scaleto{LRL}{4pt}})$$
\vspace*{-.1cm}
RM-MIPS($q_{\scaleto{LRL}{4pt}}$) proposes an equivalent English query $q_{\scaleto{EN}{4pt}}$, whose English answer can be pulled directly from the database.
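The two stages above can be sketched in plain Python. This is a minimal illustration, not our implementation: stage 1 computes exact cosine-similarity top-$k$ (real systems use approximate MIPS over pre-embedded queries), and \texttt{cross\_score} is a hypothetical stand-in for $\mathrm{CE}(q_{EN}, q_{LRL})$.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rm_mips(q_lrl_vec, en_vecs, en_queries, cross_score, k=5):
    """RM-MIPS sketch. Stage 1: retrieve the k nearest English queries
    by cosine similarity over single-encoder embeddings (the MIPS step).
    Stage 2: rerank the k candidates with a cross-encoder score;
    `cross_score(q_en)` stands in for CE(q_EN, q_LRL)."""
    ranked = sorted(range(len(en_queries)),
                    key=lambda i: cosine(q_lrl_vec, en_vecs[i]),
                    reverse=True)
    topk = ranked[:k]                                   # stage 1: MIPS top-k
    best = max(topk, key=lambda i: cross_score(en_queries[i]))  # stage 2: rerank
    return en_queries[best]
```

Note that the cross encoder is only invoked $k$ times per incoming query, rather than $|Q_{EN}|$ times.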
\begin{table}[!htp]
\centering
\begin{scriptsize}
\begin{tabular}{l|c|c}
\toprule
& XQuAD & MKQA \\
\midrule
High & es, de, ru, zh & de, es, fr, it, ja, pl, pt, ru, zh\_cn \\
Medium & ar, tr, vi & ar, da, fi, he, hu, ko, nl, no, sv, tr, vi \\
Low & el, hi, th & km, ms, th, zh\_hk, zh\_tw \\
\bottomrule
\end{tabular}
\end{scriptsize}
\caption{\label{tab:lang_groups}We evaluate cross-lingual pivot methods by language groups, divided into high, medium, and low resource according to Wikipedia coverage \citep{wu-dredze-2020-languages}. Note that due to greater language diversity, MKQA contains lower resource languages than XQuAD.}
\end{table}
\vspace*{-.1cm}
\section{Conclusion}
\vspace*{-.1cm}
\label{sec:conclusion}
In conclusion, we formulate a cross-lingual open-retrieval question answering task that is more faithful to the constraints and challenges faced by practitioners expanding their systems' capabilities beyond English.
Leveraging access to a large English training set, our method of query retrieval followed by reranking greatly outperforms strong baseline methods.
Our analysis compares multiple methods of leveraging this English expertise and concludes that our two-stage approach transfers better to lower resource languages and is more robust in the presence of extensive distractor data and query distribution misalignment.
Circumventing retrieval, this approach offers fast online or offline answer generation to hundreds of languages straight off-the-shelf, without any additional training data for the target language.
We hope this analysis will promote creative methods in multilingual knowledge transfer, and the cross-lingual pivots task will encourage researchers to pursue problem formulations better informed by the needs of existing systems.
In particular, leveraging location- and culture-specific query knowledge bases with cross-lingual pivots across many languages is an exciting extension of this work.
\section{Experiments}
\vspace*{-.1cm}
\label{sec:experiments}
We compare systems that leverage an English QA database to answer questions in lower resource languages.
Figure~\ref{fig:overview} illustrates a cross-lingual pivot (XLP), where the task is to map an incoming query from a low resource language to a query in the high resource language database (LRL $\rightarrow$ HRL, discussed in \ref{sec:query-matching}), and then a high resource language answer to a low resource language answer (HRL $\rightarrow$ LRL, discussed in \ref{sec:answer-gen}).
\input{tables/combined_mkqa_macro}
\vspace*{-.1cm}
\input{tables/combined_xquad_macro}
\vspace*{-.1cm}
\subsection{Datasets}
\vspace*{-.1cm}
We provide an overview of the question answering and paraphrase datasets relevant to our study.
\subsubsection{Question Answering}
\vspace*{-.1cm}
To assess cross-lingual pivots, we consider multilingual OR-QA evaluation sets that (a) contain a diverse set of language families, and (b) have ``parallel'' questions across all of these languages.
The latter property affords us the opportunity to change the distributional overlap and analyze its effect (\ref{sec:answer-dropout}).
\vspace*{-.1cm}
\paragraph{XQuAD}
\citet{artetxe2019cross} human-translate $\sim$1.2k SQuAD examples \citep{rajpurkar2016squad} into 10 other languages. We use all of SQuAD (100k+) as the associated English database, such that only $\sim$1\% of database queries are represented in the LRL evaluation set.
\vspace*{-.1cm}
\paragraph{MKQA} \citet{longpre2020mkqa} human-translate 10k examples from the Natural Questions \citep{kwiatkowski2019natural} dataset into 25 other languages. We use the rest of the Open Natural Questions training set (84k) as the associated English database, such that only $\sim$10.6\% of the database queries are represented in the LRL evaluation set\footnote{The Open Natural Questions train set can be found here: \url{https://github.com/google-research-datasets/natural-questions/tree/master/nq_open}}.
\subsubsection{Paraphrase Detection}
\vspace*{-.1cm}
\label{sec:paraphrase-datasets}
To detect paraphrases between LRL queries and HRL queries we train multilingual sentence embedding models with a mix of the following paraphrase datasets.
\vspace*{-.1cm}
\paragraph{PAWS-X}
\citet{yang2019paws} machine-translate $\sim$49k examples from the PAWS \citep{zhang2019paws} dataset into six other languages.
This dataset provides both positive and negative paraphrase examples.
\vspace*{-.1cm}
\paragraph{Quora Question Pairs}
\citet{sharma2019natural} provide English question pair examples from Quora; we use the 384k examples from the training split of \citet{wang2017bilateral}.
This dataset provides both positive and negative examples of English paraphrases.
\subsection{Query Matching Baselines: LRL Query $\rightarrow$ HRL Query}
\vspace*{-.1cm}
\label{sec:query-matching}
We consider a combination of translation techniques and cross-lingual sentence encoders to find semantically equivalent queries across languages.
We select from pretrained models which report strong results on similar multilingual tasks, or finetune representations for our task using publicly available paraphrase datasets (\ref{sec:paraphrase-datasets}).
Each finetuned model receives basic hyperparameter tuning over the learning rate and the ratio of training data from PAWS-X and QQP.\footnote{We used an optimal learning rate of 1e-5, and training data ratio of 75\% PAWS-X and 25\% QQP.}
\vspace*{-.1cm}
\paragraph{NMT + MIPS}
\label{sec:nmt-mips}
We use a many-to-many, Transformer-based \citep{vaswani2017attention}, encoder-decoder neural machine translation system, trained on the OPUS multilingual corpus covering 100 languages \citep{zhang2020improving}.
To match the translation to an English query, we use the Universal Sentence Encoder (USE) \citep{cer2018universal} to perform maximal inner product search (MIPS).
\vspace*{-.1cm}
\paragraph{Pretrained Single Encoders}
We consider pre-trained multilingual sentence encoders for sentence retrieval.
We explore mUSE\footnote{mUSE was only trained on the following 16 languages: ar, zh\_cn, zh\_tw, en, fr, de, it, ja, ko, da, pl, pt, es, th, tr, ru.} \citep{yang2019multilingual}, LASER \citep{artetxe2019massively}, and m-SentenceBERT \citep{reimers2019sentence}.
\vspace*{-.1cm}
\paragraph{Finetuned Single Encoders}
We finetune transformer encoders to embed sentences, per \citet{reimers2019sentence}.
We use the softmax loss over the combination of $[x; y; |x-y|]$ from \citet{conneau2017supervised} and mean pool over the final encoder representations to obtain the final sentence representation.
We use XLM-R Large as the base encoder \citep{conneau2019unsupervised}.
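The pooling and feature construction can be sketched in plain Python. This is a minimal illustration of the two operations only; in the actual model they are applied to XLM-R's final-layer token states.

```python
def mean_pool(token_vecs, mask):
    """Mean-pool encoder token states over non-padding positions to get
    a fixed-size sentence vector (mask[i] is 1 for real tokens, 0 for
    padding)."""
    n = sum(mask)
    dim = len(token_vecs[0])
    return [sum(v[d] * m for v, m in zip(token_vecs, mask)) / n
            for d in range(dim)]

def pair_features(u, v):
    """Classification features [u; v; |u - v|] from Conneau et al.
    (2017), fed to a softmax classifier during finetuning."""
    return u + v + [abs(a - b) for a, b in zip(u, v)]
```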
\vspace*{-.1cm}
\paragraph{Cross Encoders}
We finetune XLM-R Large \citep{conneau2019unsupervised} which is pretrained using the multilingual masked language modelling (MLM) objective.\footnote{\label{huggingface}We use the pretrained Transformer encoder implementations in the Huggingface library \citep{Wolf2019HuggingFacesTS}.}
For classification, a pair of sentences is given as input, taking advantage of cross-attention between the sentences.
\subsection{Answer Translation: HRL Answer $\rightarrow$ LRL Answer}
\vspace*{-.1cm}
\label{sec:answer-gen}
Once we have found an English (HRL) query using RM-MIPS, or one of our ``Query Matching'' baselines, we can use the English database to look up the English answer.
Our final step is to generate an equivalent answer in the target (LRL) language.
We explore straightforward methods of answer generation, including basic neural machine translation (NMT), and WikiData entity translation.
\vspace*{-.1cm}
\paragraph{Machine Translation}
For NMT we use our many-to-many neural machine translation as described in Section~\ref{sec:nmt-mips}.
\input{figures/distractor_drop}
\vspace*{-.1cm}
\paragraph{WikiData Entity Translation}
We propose our WikiData entity translation method for QA datasets with primarily entity type answers that would likely appear in the WikiData knowledge graph \citep{10.1145/2629489}.\footnote{\url{https://www.wikidata.org}}
This method uses a named entity recognizer (NER) with a WikiData entity linker to find an entity \citep{spacy2}.\footnote{\url{https://github.com/explosion/spaCy}}
We train our own entity linker on the public WikiData entity dump according to spaCy's instructions.
If a WikiData entity is found, its structured metadata often contains the equivalent term in the target language, localized to the relevant script/alphabet.
For our implementation, when a WikiData entity is not found, or its translation is not available in the target language, we simply return the English answer.
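The lookup-with-fallback logic can be sketched as follows. The helpers here are hypothetical stand-ins: \texttt{link\_entity} represents the spaCy NER + WikiData entity linker, and \texttt{labels} represents the store of per-language labels from the WikiData dump.

```python
def translate_answer(answer_en, target_lang, link_entity, labels):
    """WikiData answer translation sketch: link the English answer
    string to an entity, look up its target-language label, and fall
    back to the English string when the entity or label is missing."""
    qid = link_entity(answer_en)          # e.g. a WikiData id like "Q90"
    if qid is None:
        return answer_en                  # no entity found: keep English
    # Missing target-language label: also fall back to English.
    return labels.get(qid, {}).get(target_lang, answer_en)
```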
For XQuAD end-to-end experiments we find straightforward machine translation works best, whereas for MKQA, which contains more short, entity-type answers, we find WikiData Entity Translation works best.
We report results using these simple methods and leave more sophisticated combinations or improvements to future work.
\section{Task: Cross-Lingual Pivots}
\vspace*{-.1cm}
\label{sec:task}
The Open-Retrieval Question Answering (ORQA) task evaluates models' ability to answer information-seeking questions.
In a multilingual setting, the task is to produce answers in the same language as the query.
In some cases, queries may only find answers, or sufficient evidence, in a different language, due to \textit{informational asymmetries} \citep{miniwatts2011, https://doi.org/10.1002/asi.21577}.
To address this, \citet{asai2020xor} propose Cross-Lingual Open-Retrieval Question Answering (XORQA), similar to the Cross-Lingual Information Retrieval (CLIR) task, where a model needs to leverage intermediary information found in other languages, in order to serve an answer in the target language.
In practice, this intermediary language tends to be English, which has the most ample resources and training data.
Building on these tasks, we believe there are other benefits to pivoting through high resource languages that have so far been overlooked, consequently limiting research that could more rapidly improve non-English QA.
These two benefits are (I) large query-answer databases have already been collected in English, both in academia \citep{joshi2017triviaqa} and in industry \citep{kwiatkowski2019natural}, and (II) it is often very expensive and challenging to replicate robust retrieval and passage reranking stacks in new languages \citep{fluhr1999multilingual, swapnil2014, manpreet2018}.~\footnote{While it is straightforward to adapt question answering ``reader" modules with zero-shot learning \citep{charlet-etal-2020-cross}, retrieval can be quite challenging. Not only is the underlying document index costly to expand and maintain for a new language \citep{swapnil2014}, but supervision signals collected in the target language are particularly important for dense retrieval and reranking systems which both serve as bottlenecks to downstream multilingual QA \citep{karpukhin2020dense}.
Additionally, real-world QA agents typically require human curated, language-specific infrastructure for retrieval, such as regular expressions, custom tokenization rules, and curated website blocklists.}
As a result, the English capabilities of question answering systems typically exceed those for non-English languages by large margins \citep{lewis2019mlqa, longpre2020mkqa, clark2020tydi}.
We would note that prior work suggests that even without access to an English query-answer database, translation methods with an English document index and retrieval outperform LRL retrieval for open-retrieval QA (see the end-to-end \textsc{XOR-Full} results in \citet{asai2020xor}).
This demonstrates the persistent weakness of non-English retrieval, and motivates alternative approaches such as cross-lingual pivots.
To remedy this disparity, we believe attending to these two considerations would yield a more realistic task setup.
We propose the Cross-Lingual Pivots (XLPs) task, a variant of XORQA which assumes access to a query-answer database in English, as shown in Figure~\ref{fig:overview}.
Like multilingual ORQA, or XORQA, the task is to produce an answer $\hat{a}_{LRL}$ in the same ``Target'' language as question $q_{LRL}$, evaluated by Exact Match or F1 token-overlap with the real answer $a_{LRL}$.
Instead of assuming access to an LRL document index or retrieval system (usually provided by the datasets), we assume access to an English database $D_{HRL}$ that simply maps English queries to their English answer text.
Leveraging this database, and circumventing LRL retrieval, we believe progress in this task will greatly accelerate multilingual capabilities of real question answering assistants.
\section{Introduction}
\vspace*{-.1cm}
\label{sec:introduction}
Open-Retrieval Question Answering (ORQA) has seen extensive progress in English, with English systems significantly outperforming those in lower resource languages (LRLs).
This advantage is largely driven by the scale of labelled data and open source retrieval tools that exist predominantly for higher resource languages (HRLs) --- usually English.
\begin{figure}[t]
\centering
\hspace{-2mm}\includegraphics[width=0.4625\textwidth]{figures/diagram.png}
\caption{\label{fig:overview} \textbf{Cross-Lingual Pivots (XLP):} We introduce the ``Cross-Lingual Pivots'' task, formulated as a solution to multilingual question answering that circumvents document retrieval in low resource languages (LRL).
To answer LRL queries, approaches may leverage a question-answer system or database in a high resource language (HRL), such as English.}
\end{figure}
To remedy this discrepancy, recent work leverages English supervision to improve multilingual systems, either by simple translation or zero-shot transfer \citep{asai2018multilingual, cui2019cross, charlet-etal-2020-cross}.
While these approaches have helped generalize reading comprehension models to new languages, they are of limited practical use without reliable information retrieval in the target language, which they often implicitly assume.
\begin{figure*}[ht]
\centering
\hspace{-2mm}\includegraphics[width=\textwidth]{figures/xxen_diagram.png}
\caption{\label{fig:xxen_diagram} \textbf{Reranked Multilingual Maximal Inner Product Search (RM-MIPS):} For the Cross-Lingual Pivots task, we propose an approach that maps the LRL query to a semantically equivalent HRL query, finds the appropriate HRL answer, then uses knowledge graph or machine translation to map the answer back to the target LRL.
Specifically, the first stage (in blue) uses multilingual single encoders for fast maximal inner product search (MIPS), and the second stage (in red) reranks the top k candidates using a more expressive multilingual cross-encoder that takes in the concatenation of the LRL query and candidate HRL query.}
\end{figure*}
In practice, we believe this assumption can be challenging to meet.
A new document index can be expensive to collect and maintain, and an effective retrieval stack typically requires language-specific labelled data, tokenization tools, manual heuristics, and curated domain blocklists \citep{fluhr1999multilingual, swapnil2014, manpreet2018}.
Consequently, we discard the common assumption of robust non-English document retrieval, for a more realistic one: that there exists a high-quality English database of query-answer string pairs.
We introduce and motivate the Cross-Lingual Pivots (\textbf{XLP}) task (Section~\ref{sec:task}), which we contend will accelerate progress in LRL question answering by reflecting these practical considerations.
This pivot task is similar to ``translate test'' and ``MT-in-the-middle'' paradigms \citep{10.3115/974147.974149, zitouni-florian-2008-mention, schneider-etal-2013-supersense} except for the availability of the high-resource language database, which allows for more sophisticated pivot approaches.
Figure~\ref{fig:overview} illustrates a generalized version of an XLP, where LRL queries may seek knowledge from any HRL with its own database.
For this task we combine and compare state-of-the-art methods in machine translation (``translate test'') and cross-lingual semantic similarity, in order to map LRL queries to English, and then English answers back to the LRL target language.
In particular we examine how these methods are affected by certain factors: (a) whether the language is high, medium or low resource, (b) the magnitude of data in the HRL database, and (c) the degree of query distribution alignment between languages (i.e., the number of LRL queries that have matches in the HRL database).
Lastly we propose a new approach to this task, motivated by recent dense nearest neighbor (kNN) models in English which achieve strong results in QA by simply searching for similar questions in the training set (or database in our case) \citep{lewis2020question}.
We leverage nearest neighbor semantic similarity search followed by cross-encoder reranking (see Figure~\ref{fig:xxen_diagram}), and refer to the technique as Reranked Multilingual Maximal Inner Product Search (\textbf{RM-MIPS}).
Not only does this approach significantly improve upon ``Translate Test'' (the most common pivot technique) and state-of-the-art paraphrase detection baselines, but our analysis also demonstrates that it is more robust to lower resource languages, query distribution misalignment, and the size of the English database.
By circumventing document retrieval and task-specific supervision signals, this straightforward approach offers reliable answer generation to any language (present in pretraining) off-the-shelf.
Furthermore, it can be re-purposed to obtain reliable training data in the target language, with fewer annotation artifacts, and is complementary to a standard end-to-end question answering system.
We hope this analysis complements existing multilingual approaches, and facilitates adoption of more practical (but effective) methods to improve knowledge transfer from English into other languages.
We summarize our contributions as:
\begin{itemize}
\item \textsc{XLP}: A more realistic task setup for practically expanding Multilingual OR-QA to lower resource languages.
\item Comprehensive analysis of factors affecting XLP: (I) approach types (translation, paraphrasing), (II) language types, (III) database characteristics, and (IV) query distribution alignment.
\item \textsc{RM-MIPS}: A flexible approach to XLP that beats strong (or state-of-the-art) baselines.
\end{itemize}
\section{Related Work}
\vspace*{-.1cm}
\label{sec:related-work}
\paragraph{Cross-Lingual Modeling}
Multilingual BERT \citep{devlin2019bert}, XLM \citep{lample2019cross}, and XLM-R \citep{conneau2019unsupervised} use masked language modeling (MLM) to share embeddings across languages.
\citet{artetxe2019massively} introduce LASER, a language-agnostic sentence embedder trained using many-to-many machine translation.
\citet{yang2019multilingual} extend \citet{cer2018universal} in a multilingual setting by following \citet{chidambaram2019learning} to train a multi-task dual-encoder model (mUSE).
These multilingual encoders are often used for semantic similarity tasks.
\citet{reimers2019sentence} propose finetuning pooled BERT token representations (Sentence-BERT), and \citet{reimers2020making} extend with knowledge distillation to encourage vector similarity among translations.
Other methods improve multilingual transfer via language alignment \citep{roy2020lareqa, mulcaire2019polyglot, schuster2019cross} or combining machine translation with multilingual encoders \citep{fang2020filter, cui2019cross}.
\vspace*{-.1cm}
\paragraph{Multilingual Question Answering}
Efforts to explore multilingual question answering include MLQA \citep{lewis2019mlqa}, XQuAD \citep{artetxe2019cross}, MKQA \citep{longpre2020mkqa}, TyDi \citep{clark2020tydi} and XORQA \citep{asai2020xor}.
Prior work in multilingual QA achieves strong results combining neural machine translation and multilingual representations via \textbf{Translate-Test}, \textbf{Translate-Train}, or \textbf{Zero Shot} approaches \citep{asai2018multilingual, cui2019cross, charlet-etal-2020-cross}.
This work focuses on \textit{extracting} the answer from a multilingual passage \citep{cui2019cross, asai2018multilingual}, assuming passages are provided.
\vspace*{-.1cm}
\paragraph{Improving Low Resource With High Resource}
Efforts to improve performance on low-resource languages usually explore language alignment or transfer learning.
\citet{chung2017supervised} find supervised and unsupervised improvements in transfer learning when finetuning from a language specific model, and \citet{lee2019cross} leverage a GAN-inspired discriminator \citep{goodfellow2014generative} to enforce language-agnostic representations.
Aligning vector spaces of text representations in existing models \citep{conneau2017word, schuster2019cross, mikolov2013exploiting} remains a promising direction.
Leveraging high resource data has also been studied in sequence labeling \citep{xie2018neural, plank2018distant, schuster2019cross} and machine translation \citep{johnson2017google,zhang2020improving}.
\vspace*{-.1cm}
\paragraph{Paraphrase Detection}
The paraphrase detection task determines whether two sentences are semantically equivalent.
Popular paraphrase datasets include Quora Question Pairs \citep{sharma2019natural}, MRPC \citep{dolan2005automatically}, and STS-B \citep{Cer_2017}.
The adversarially constructed PAWS dataset \citep{zhang2019paws} was translated into 6 languages, offering a multilingual option, PAWS-X \citep{yang2019paws}.
In a multilingual setting, an auxiliary paraphrase detection (or nearest neighbour) component, over a datastore of training examples, has been shown to greatly improve performance for neural machine translation \citep{khandelwal2020nearest}.
\section{Results}
\vspace*{-.1cm}
\label{sec:results}
\subsection{End-To-End Results}
\vspace*{-.1cm}
We benchmark the performance of the cross-lingual pivot methods on XQuAD and MKQA.
To simulate a realistic setting, we add all the English questions from SQuAD to the English database used in the XQuAD experiments.
Similarly we add all of Natural Questions queries (not just those aligned across languages) in the MKQA experiments.
For each experiment we group the languages into high, medium, and low resource, as shown in Table~\ref{tab:lang_groups}, according to \citet{wu-dredze-2020-languages}.
Tables~\ref{tab:mkqa} and~\ref{tab:xquad} present the mean performance by language group, for query matching (LRL $\rightarrow$ HRL) and for end-to-end results (LRL $\rightarrow$ HRL $\rightarrow$ LRL), i.e., query matching and answer translation in sequence.
Among the models, RM-MIPS typically outperforms the baselines, particularly on lower resource languages.
We find the reranking component in particular offers significant improvements over the non-reranked sentence encoding approaches in low resource settings, where we believe sentence embeddings are most inconsistent in their performance.
For instance, RM-MIPS (LASER) outperforms LASER by $5.7\%$ on the lowest-resource E2E MKQA task, and $4.0\%$ across all languages.
The margins are even larger between RM-MIPS (mUSE) and mUSE, as well as between RM-MIPS (XLM-R) and XLM-R.
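The two-stage retrieve-then-rerank idea can be sketched as follows. This is a minimal illustration, not the paper's implementation; `encode` and `rerank_score` are hypothetical stand-ins for a multilingual sentence encoder (e.g., LASER, mUSE, or XLM-R) and a paraphrase classifier.

```python
import numpy as np

def mips_topk(query_vec, db_vecs, k=10):
    """Stage 1: (exact) maximum inner product search over the English database."""
    scores = db_vecs @ query_vec
    return np.argsort(-scores)[:k]          # k best matches, best first

def rm_mips(query, db_texts, db_vecs, encode, rerank_score, k=10):
    """Stage 2: rerank the k MIPS candidates with a paraphrase classifier
    and return the best-matching English query."""
    cands = mips_topk(encode(query), db_vecs, k)
    best = max(cands, key=lambda i: rerank_score(query, db_texts[i]))
    return db_texts[best]
```

The matched English query can then be answered by an English OR-QA system, with the answer translated back into the source language.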
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth]{figures/p80_ad_combined.png}
\caption{\label{fig:dropout} \textbf{Effects of Query Alignment on MKQA end-to-end Performance:} At a target precision of 80\%, the end-to-end Malay (left) and Spanish (right) recall are plotted for each degree of query alignment.
The query alignment axis indicates the percentage of 10k queries with parallel matches retained in the English database.}
\end{figure*}
\vspace*{-.1cm}
For certain high resource languages, mUSE performs particularly strongly, and for XQuAD languages, LASER performs poorly.
Accordingly, the choice of sentence encoder (and its language proportions in pretraining) is important in optimizing for the cross-lingual pivot task.
The modularity of RM-MIPS offers this flexibility, as the first-stage multilingual encoder can be swapped out: we present results for LASER, mUSE, and XLM-R.
Comparing query matching accuracy (left) and end-to-end F1 (right) in Tables~\ref{tab:mkqa} and ~\ref{tab:xquad} measures the performance drop due to answer translation (HRL $\rightarrow$ LRL, see section~\ref{sec:answer-gen} for details).
We see this drop is quite small for MKQA as compared to XQuAD.
Similarly, the ``Perfect LRL $\rightarrow$ HRL" row measures the Answer Translation stage on all queries, showing that XQuAD's machine-translated answers score much lower than MKQA's Wikidata-translated answers.
This observation indicates that (a) Wikidata translation is particularly strong, and (b) cross-lingual pivot techniques are particularly useful for datasets with frequent entity, date, or numeric-style answers, that can be translated with Wikidata, as seen in MKQA.
Another potential factor in the performance difference between MKQA and XQuAD is that MKQA contains naturally occurring questions, whereas XQuAD does not.
Despite the lower mean end-to-end performance for XQuAD, this cross-lingual pivot can still be used alongside traditional methods, and can be calibrated for high precision/low coverage by abstaining from answering questions that are not Wikidata-translatable.
One other notable advantage of paraphrase-based pivot approaches is that no LRL-specific annotated training data is required.
A question answering system in the target language requires in-language annotated data, or an NMT system from English.
Traditional NMT ``translate test" or ``MT-in-the-middle" \citep{asai2018multilingual, 10.3115/974147.974149, schneider-etal-2013-supersense} approaches also require annotated parallel data to train.
RM-MIPS and our other paraphrase baselines observe monolingual corpora at pre-training time, and only select language pairs during fine-tuning (those present in PAWS-X), and yet these models still perform well on XLP even for non-PAWS-X languages.
\subsection{Database Size}
\vspace*{-.1cm}
To understand the impact of database size on the query matching process, we assemble a larger database with MSMARCO (800k), SQuAD (100k), and Open-NaturalQuestions (90k).
Note that none of the models are explicitly tuned to MKQA, and since MSMARCO and Open-NQ comprise natural user queries (from the same or similar distribution), we believe these are challenging ``distractors".
In Figure~\ref{fig:distractor} we plot accuracy of the most performant models from Tables~\ref{tab:mkqa} and ~\ref{tab:xquad} on each of the high, medium, and low resource language groups over different sizes of database on MKQA.
We report the initial stage query matching (LRL $\rightarrow$ HRL) to isolate individual model matching performance.
We observe that RM-MIPS degrades less quickly with database size than competing methods, and that it degrades less with the resourcefulness of the language group.
\subsection{Query Alignment}
\vspace*{-.1cm}
\label{sec:answer-dropout}
In some cases, incoming LRL queries may not have a corresponding semantic match in the HRL database.
To assess the impact of this, we vary the percentage of queries that have a corresponding match by dropping out their parallel example in the English database (in increments of 10\%).
In Figure~\ref{fig:dropout} we report the median end-to-end recall scores over five different random seeds, at each level of query alignment (x-axis).
At each level of answer query alignment we recompute a No Answer confidence threshold for a target precision of 80\%.
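The threshold-calibration step can be sketched as follows: given held-out confidence scores with binary correctness labels, choose the smallest confidence threshold at which the answered subset reaches the target precision. This is an illustrative recipe, not the exact evaluation code; the function name is ours.

```python
import numpy as np

def threshold_for_precision(scores, correct, target=0.80):
    """Smallest confidence threshold at which the answered subset of
    queries reaches the target precision (None if unattainable)."""
    scores = np.asarray(scores, float)
    order = np.argsort(-scores)
    hits = np.asarray(correct, float)[order]
    # precision among the i+1 highest-confidence (answered) queries
    precision = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    ok = np.where(precision >= target)[0]
    if len(ok) == 0:
        return None
    return scores[order][ok[-1]]    # answer as many queries as possible
```

Queries scoring below the returned threshold are assigned No Answer; coverage is then the fraction of queries at or above it.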
Due to computational constraints, we select one low resource (Malay) and one high resource language (Spanish) to report results on.
We find that, even calibrated for high precision (a target of 80\%), the cross-lingual pivot methods maintain coverage proportional to, and occasionally exceeding, the degree of query alignment.
RM-MIPS methods in particular can \textit{outperform} proportional coverage to alignment (the dotted black line on the diagonal) by sourcing answers from similar queries in the database to those dropped out.
Consequently, a practitioner can maintain high precision and respectable recall by selecting a threshold for any degree of query misalignment observed in their test distribution.
The primary limitation of RM-MIPS, or other pivot-oriented approaches, is that their performance is bounded by the degree of query alignment.
However, QA systems still fail to replicate their English answer coverage in LRLs \citep{longpre2020mkqa}, and so we expect pivot techniques to remain essential until this gap narrows completely.
\section{Introduction}
\label{sec1}
Phosphorene, a novel promising 2D material, has recently attracted much attention owing to its anisotropic bandstructure~\cite{Liu14,Gomez16,Koenig14}. It is a bilayer puckered honeycomb lattice of black phosphorus with a peculiar bandstructure exhibiting Dirac cones in the bulk. Because of its bandstructure, phosphorene has been studied in many theoretical works, particularly in the context of transport studies~\cite{Ghosh17,Wang15,Linder17,Zare17}. Compared to the transition metal dichalcogenide materials, phosphorene has a high charge carrier mobility ($\sim 100~\text{cm}^2/\text{Vs}$) at room temperature~\cite{Liu14}, making it favorable for electronic applications. Moreover, zigzag phosphorene nanoribbons (ZPNR) exhibit two quasi-flat edge states, which are completely isolated from the bulk states~\cite{Ezawa14,Taghizadeh15,Ma16}, in contrast to the other 2D hexagonal lattice structures such as graphene~\cite{Castro09} and silicene~\cite{Shakouri15}. The nature of these isolated edge states originating from a large hopping parameter between two out-of-plane zigzag chains has been discussed in Ref.~\cite{Ezawa14}. Furthermore, a recent study addressed the Ruderman-Kittel-Kasuya-Yosida (RKKY) exchange interaction in ZPNRs. It found two different characteristic periods of the RKKY interaction mediating the magnetic interaction between impurities~\cite{Islam18}.
\par
Motivated by theoretical predictions~\cite{Yazyev10,Honecker10,Feldner11} and experimental
confirmations~\cite{Magda14} of edge magnetism in zigzag graphene nanoribbons, edge magnetism has also been explored in
phosphorene in Ref.~\cite{Zhu14}. This first study found that ZPNRs display a magnetic state at the edge in the absence of a Peierls distortion. However, edge magnetism vanished in the fully relaxed structure. On the other hand, the authors of Ref.~\cite{Du15} have shown that edge magnetism of ZPNRs can survive even with structural relaxation.
In another paper, the authors consider tilted black phosphorene nanoribbons (TPNRs) exposed to an external electric field. They found that the magnetic ground state can be switched by an electric field from antiferromagnetic (AFM) to ferromagnetic (FM)~\cite{Farooq16}. Furthermore, a quantum Monte Carlo calculation demonstrated a high Curie temperature for edge magnetism of ZPNR~\cite{Yang16}.
However, previous studies have mainly considered small unit cells focusing on commensurate FM and AFM states. Magnetic states such as spiral phases or incommensurate phases have not been analyzed. Furthermore, the effect of strain and disorder on the magnetic states still remains unclear.
\par
In this work, using a tight-binding (TB) Hubbard model, we numerically study the edge magnetism of ZPNRs using static mean-field theory (MFT) and dynamical mean-field theory (DMFT). Although QMC must be considered superior to our mean-field approaches, our (D)MFT is much faster and thus makes it possible to analyze large unit cells and incommensurate magnetic phases.
Furthermore, mean-field theories have proven to at least qualitatively, sometimes even quantitatively, correctly describe magnetism in hexagonal 2D systems\cite{Marcin2020}, although being numerically less expensive. A similar combination of techniques has been used to analyze edge magnetism in zigzag graphene nanoribbons~\cite{Palacios07} and nanodots~\cite{Bhowmick08}.
\par
In this paper, we demonstrate the existence of an incommensurate magnetic phase at the edge of ZPNR for weak interaction strengths $U_{c1}\lesssim U\lesssim U_{c2}$.
With increasing interaction strength, this incommensurate magnetic phase undergoes a phase transition into the ferromagnetic or antiferromagnetic phase at $U_{c2}$, which has been reported by previous studies.
We show that the difference in the ground state energies of these two states is exponentially small, making it easy to switch between both states. Furthermore, to gain more insight into the realization of magnetism in ZPNRs at weak interaction strengths, we analyze the effects of strain and defects on the magnetic state.
Such perturbations of the material are ubiquitous in 2D materials~\cite{Gui08,Wang14,Yue12,Tabatabaei13,Rodin14,Elahi15,Fazileh16}. Moreover, studies on strain in nonmagnetic phosphorene show some intriguing features: A first-principles study predicted a semiconductor-semimetal-metal transition under perpendicular compression~\cite{Rodin14}. In Ref.~\cite{Elahi15}, the emergence of a peculiar Dirac-shaped dispersion under tensile strain along the zigzag edge is proposed. In another work, it was shown that tensile or in-plane strain, together with spin-orbit interaction, gives rise to a topological phase transition~\cite{Fazileh16}. However, the only study that analyzes the effect of strain on ZPNR magnetism is found in Ref.~\cite{Du15}. It predicts that at a critical compressive strain along the zigzag edge (about 5$\%$), the ground state changes from an AFM semiconductor to a nonmagnetic metal.
Thus, we here address the effect of strain and Anderson-type disorder on the magnetic properties of ZPNRs and find that, while the IC phase is very sensitive to strain and disappears quickly, it is robust against Anderson-type disorder. We also notice that the second critical point $U_{c2}$ shifts to larger values. Thus, one can predict that the AFM/FM magnetic phase disappears under large strain.
\par
The paper is organized as follows: In Sec.\ \ref{sec2}, we introduce the theoretical model and formalism used in the numerical calculations. In Sec.\ \ref{sec3}, we discuss the results obtained. Finally, we summarize and conclude in Sec.\ \ref{sec4}.
\begin{figure}[b]
\includegraphics[width=1\columnwidth]{fig1.png}
\caption
A schematic view of the ZPNR: The yellow area corresponds to the unit cell, which consists of four atoms. The black arrows show the hoppings up to the fifth nearest-neighbor hopping included in our TB model. The red (blue) circles indicate the upper (lower) layers. The black box is the ribbon unit cell in the y-direction. In the text, $N_y$ refers to the number of ribbon unit cells in the y-direction. The width of the unit cell is specified by $N_x$, which includes $N=4\times N_x$ phosphorus atoms.}
\label{fig1}
\end{figure}
\section{MODEL AND FORMALISM}
\label{sec2}
In order to study the magnetic properties of ZPNRs, we use the following tight-binding model
\begin{equation}
H=\sum_{ij,\sigma}t_{ij}c_{i\sigma}^{\dagger}c_{j\sigma}+U\, \sum_i \left(n_{i,\uparrow}-\frac{1}{2}\right)\,\left(n_{i,\downarrow} -\frac{1}{2}\right),
\label{eq1}
\end{equation}
where the first summation runs up to the fifth nearest neighbor, and $t_{ij}$ is the hopping integral proposed in Ref.~\cite{Rudenko14}. These hopping parameters are $t_1 = -1.220~eV$, $t_2 = 3.665~eV$, $t_3 = -0.205~eV$, $t_4 = -0.105~eV$, and $t_5 = -0.055~eV$, which are shown by arrows in Fig.~\ref{fig1}. Furthermore, we include a local density-density interaction. Thus, this model corresponds to a one-band Hubbard model. To tackle the interaction, we use static and dynamical mean-field theory. At the static mean-field theory (MFT) level, all quantum fluctuations are neglected, and the SU(2) spin symmetry must be broken artificially in order to capture the formation of local moments. An extension of the MFT that accounts for local moment formation is the dynamical mean-field theory (DMFT). The DMFT approximation accounts for temporal fluctuations and thus includes local charge fluctuations beyond static MFT. Indeed, the accuracy of DMFT in predicting the critical point, $U_c$, of the honeycomb lattice has been reported recently in Refs.~\cite{Marcin2020,Thu2020}.
\subsection{Static mean-field theory (MFT)}
Evaluating the Coulomb interaction term in the mean-field approximation leads to two potential terms, a direct and an exchange term, which must be determined self-consistently. In the case of the Hubbard model in the collinear approximation, only the direct potential term is nonzero, and one obtains
$$
U\,\sum_i \Bigl( \langle n_{i,\uparrow} \rangle n_{i,\downarrow} + n_{i,\uparrow}\langle n_{i,\downarrow}\rangle
- \langle n_{i,\uparrow}\rangle \langle n_{i,\downarrow}\rangle-\frac{n_{i,\uparrow}+ n_{i,\downarrow}}{2}
+ \frac{1}{4} \Bigr)
$$
where $n_{i\sigma}=c_{i\sigma}^{\dagger}c_{i\sigma}$ is the number operator and $\langle n_{i\sigma}\rangle$ is the average electron occupation number for spin-down $(\downarrow)$ and spin-up $(\uparrow)$ electrons on lattice site $i$. We focus on the undoped ZPNR with exactly one electron per lattice site, i.e., we work with the half-filled Hubbard model.
\par
To calculate the magnetic ground state of the Hamiltonian in Eq.~(\ref{eq1}), we start with a few initial, specific or random, configurations for the average electron occupation numbers $\langle n_{i\sigma}\rangle$. Then, by diagonalizing the Hamiltonian, we calculate updated electron occupation numbers. This procedure is repeated until the convergence criterion, chosen as $\eta=10^{-8}$, is met for the average electron occupation numbers. This self-consistent solution provides the local magnetization $m_i^z=(n_{i\uparrow}-n_{i\downarrow})/2$ on each site. Finally, the energies of different states are compared to find the ground state.
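The self-consistency procedure can be sketched for a generic hopping matrix at half filling. This is a toy illustration, not our production code: the phosphorene geometry and ribbon size are omitted, and the function name is ours.

```python
import numpy as np

def mft_scf(H0, U, n_up, n_dn, eta=1e-8, max_iter=500):
    """Static mean-field self-consistency loop at half filling.
    H0: Hermitian hopping matrix; n_up, n_dn: initial occupation guesses."""
    n_up = np.asarray(n_up, float)
    n_dn = np.asarray(n_dn, float)
    n_fill = H0.shape[0] // 2               # electrons per spin at half filling
    for _ in range(max_iter):
        new = []
        for n_other in (n_dn, n_up):        # H_sigma depends on <n_{-sigma}>
            H = H0 + U * np.diag(n_other - 0.5)
            _, vecs = np.linalg.eigh(H)     # diagonalize the MF Hamiltonian
            new.append(np.sum(np.abs(vecs[:, :n_fill]) ** 2, axis=1))
        err = max(np.abs(new[0] - n_up).max(), np.abs(new[1] - n_dn).max())
        n_up, n_dn = new
        if err < eta:                       # convergence criterion
            break
    return n_up, n_dn
```

For a symmetry-broken initial guess, the loop converges to a local-moment solution, from which $m_i^z=(n_{i\uparrow}-n_{i\downarrow})/2$ follows directly.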
\subsection{Dynamical mean-field theory (DMFT)}
A recent study has shown that the transition to the magnetic state in graphene is captured remarkably well by the inclusion of local charge fluctuations~\cite{Marcin2020} in the framework of a single-site dynamical mean-field theory~\cite{Georges1996}.
Thus, to go beyond the static MFT and to include local fluctuations, we also use the real-space dynamical mean-field theory (DMFT) to obtain a magnetic solution of the ZPNR.
As in Ref.~\cite{Thu2020}, each atom of an $8\times 48$ cluster is mapped onto its own quantum impurity model by calculating the Green's function and the local hybridization function,
\begin{eqnarray}
\bf{G}_{ij}(\omega)&=&(\omega-\bf{H}-\bf{\Sigma}(\omega))_{ij}^{-1}\\
\bf{\Delta}_{i}(\omega)&=&\bf{G}_{ii}^{-1}(\omega)+\bf{\Sigma}_{ii}(\omega),
\end{eqnarray}
where $i$ and $j$ are indices for the positions of the atoms, $\bf{H}$ is the matrix of the tight-binding Hamiltonian on the finite lattice, $\bf{\Sigma}(\omega)$ is the diagonal matrix of the atomic self-energies, $\bf{\Sigma}_{ij}(\omega)=\bf{\Sigma}_{ii}(\omega)\delta_{ij}$, where $\bf{\Sigma}_{ii}$ is the self-energy of atom $i$, $\bf{G}_{ij}$ is the Green's function matrix, and $\bf{\Delta}_i$ is the hybridization function of atom $i$. The hybridization function together with the local interaction strength completely defines the quantum impurity model needed in DMFT, which makes it possible to calculate magnetic states in large clusters~\cite{PhysRevB.89.155134,PhysRevB.92.075103,Marcin2020,Thu2020}.
The quantum impurity model is then solved using the numerical renormalization group (NRG)\cite{RevModPhys.47.773,RevModPhys.80.395}, which can calculate dynamical correlation functions and self-energies with high accuracy\cite{PhysRevB.74.245114}.
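Numerically, the two displayed relations amount to one matrix inversion per frequency. The sketch below evaluates them verbatim for a finite hopping matrix and a diagonal self-energy at a single (complex) frequency; function and variable names are ours, and the full DMFT loop (impurity solver, self-energy update) is omitted.

```python
import numpy as np

def local_hybridization(omega, H, sigma):
    """G = (omega*I - H - Sigma)^{-1};  Delta_i = 1/G_ii + Sigma_ii,
    following the displayed equations. sigma holds the diagonal
    self-energies Sigma_ii(omega) of all atoms."""
    N = H.shape[0]
    G = np.linalg.inv(omega * np.eye(N) - H - np.diag(sigma))
    g_loc = np.diag(G)                  # local Green's functions G_ii
    return g_loc, 1.0 / g_loc + sigma   # hybridization functions Delta_i
```

In practice this is evaluated on a frequency grid, and each $\bf{\Delta}_i(\omega)$ is handed to the impurity solver of atom $i$.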
\begin{figure}
\includegraphics[width=1\columnwidth]{fig2.pdf}
\caption
Evolution of the single-particle gap (a) and the edge magnetization (b) as a function of the Hubbard interaction $U/|t_1|$. Results for different ZPNR widths $N_x=7,8,9$ are plotted with different symbols. In panel (a), the inset shows an exponential curve-fitting of the gap evolution as a function of $1/N_x$ at $U/|t_1|=0.8$. In panel (b), the inset shows the evolution of the maximum magnetization $m^z_{\rm max}$ versus the Hubbard interaction. Three different regimes are labeled and highlighted as: (I) nonmagnetic, (II) gapped-IC, and (III) gapped-AFM (or gapped-FM) regions. The ribbon length is fixed at $N_y=120$, and the periodic boundary condition is implemented in the $y$-direction.}
\label{fig2}
\end{figure}
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{fig3.png}
\caption
Three different magnetic spatial configurations for $U/|t_1|=0.3$ (left panel) and $1.0$ (middle and right panels) of the Hubbard interaction. The width and length of the ribbons are $N_x=7$ and $N_y=120$. Since we here use a very long ribbon, we only show a portion of the ribbon in the y-direction. The blue and red circles display the two different local spin directions. We call the magnetic configuration in the left, middle, and right panel as IC, AFM, and FM phases.}
\label{fig3}
\end{figure}
\begin{figure*}
\includegraphics[width=2\columnwidth]{fig4.pdf}
\caption
The left panels present the local magnetic modulation $m_i^z$ at one edge of the ZPNRs, and the right panels give their corresponding Fourier transformation for different Hubbard interactions $U/|t_1|$. The lattice parameters are the same as in Fig.~\ref{fig2}.}
\label{fig4}
\end{figure*}
\section{RESULTS AND DISCUSSION}
\label{sec3}
In this section, we present the numerical results obtained by static and dynamical MFT. The ZPNR geometry is shown in Fig.~\ref{fig1}. The geometry is specified by two parameters, $N_x$ and $N_y$, which are the width and the length of the cluster. We use open boundaries in the x-direction. A single ZPNR unit-cell is shown as a black box in Fig.~\ref{fig1}. For our static MFT calculations, we use $N_y=120$ unit-cells in y-direction and apply periodic boundary conditions. Thus, our calculation includes $N=4\times N_x\times N_y$ phosphorus atoms. The ribbon width plays an essential role in the creation of the edge states~\cite{Taghizadeh15}, and it has been shown that the ribbon width must be larger than about $3~nm$ for stable edge magnetism, which corresponds to $N_x=7$ in this work.
\subsection{The pristine ZPNRs}
We first consider a finite ribbon cluster. Later, we exploit translation symmetry and extend our study to an infinite ribbon. We focus here on the single-particle gap and the edge magnetization in our analysis, which are two practical observables to understand the magnetic features of ZPNRs. The single-particle gap is here defined as one half of the charge
gap, $\Delta_{\rm sp} = (E_{n-1} - 2\, E_{n} + E_{n+1})/2$, where $E_n$ is the ground-state energy in the sector with $n$ electrons. The edge magnetization is defined as $m^z=\frac{1}{N_{\rm edge}}\sum_{i\in {\rm edge}}^{N_{\rm edge}}\, |\langle m_i^z \rangle|$. The temperature is set to zero.
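Both definitions are straightforward to evaluate once the sector energies and local moments are known; a minimal sketch (function names are ours):

```python
def single_particle_gap(e_nm1, e_n, e_np1):
    """Half the charge gap, from the ground-state energies of the
    (n-1)-, n-, and (n+1)-electron sectors."""
    return (e_nm1 - 2.0 * e_n + e_np1) / 2.0

def edge_magnetization(mz_edge):
    """Mean absolute local moment over the edge sites; finite for
    both FM and AFM edge patterns."""
    return sum(abs(m) for m in mz_edge) / len(mz_edge)
```

Note that the absolute value makes $m^z$ a useful order parameter even for a staggered (AFM) edge pattern, whose signed average would vanish.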
\par
Figure~\ref{fig2} shows the evolution of the single-particle gap (a) and the edge magnetizations (b) as a function of the Hubbard interaction $U/|t_1|$, calculated by static MFT. The gap is zero for interaction strengths $U<U_{c1}\simeq0.2|t_1|$. At this point, the edge magnetism starts to appear. For $0.2\lesssim U/|t_1|\lesssim0.6$, the magnetization at the edge is not homogeneous. Precisely at the critical point, $U_{c1}$, the magnetization pattern of one edge is an antiferromagnetic state, whose existence has been reported in Ref.~\cite{Du15}. Further increasing the interaction strength, the magnetic state becomes an incommensurate (IC) antiferromagnetic state (see the left panel in Fig.~\ref{fig3} and Fig.~\ref{fig4}). A more detailed analysis is given in the next section.
As can be seen from Fig.~\ref{fig2}(a), the gap starts to increase from the first critical point $U_{c1}\simeq0.2|t_1|$. Surprisingly, the band gap forms a cusp, and decreases for stronger interaction strengths until it reaches the second critical point $U_{c2}\simeq0.6|t_1|$. Beyond $U_{c2}\simeq0.6|t_1|$, the gap shows a linear growth with the Hubbard interaction. The magnetic configuration in this region is illustrated in the middle panel of Fig.~\ref{fig3}, and we refer to it as the AFM phase.
While the magnetization along each edge is homogeneous, the magnetizations of the two edges are exactly opposite.
This configuration has also been predicted by DFT~\cite{Zhu14} and QMC~\cite{Yang16}. However, besides this AFM phase, we here find another magnetic solution illustrated in the right panel of Fig.~\ref{fig3}. In this magnetic configuration, both edges are ferromagnetically aligned, and interestingly its gap and magnetization behavior are almost the same as in the AFM case.
Comparing the ground state energies of the AFM and the FM states, we find an exponentially small energy difference of $\mathcal{O}(10^{-6})$ (see Table~\ref{tab_EG}). It is worth mentioning that a proper initial guess is necessary to find the AFM or the FM states, while finding the IC phase does not require such an initial guess. We highlight and label the ZPNR phases in Fig.~\ref{fig2} as follows: (I) nonmagnetic, (II) gapped-IC, and (III) gapped-AFM (or gapped-FM) regions.
\par
It is intriguing to see that an FM state in a half-filled Hubbard model has a slightly lower ground state energy than the AFM state. However, we might explain this ferromagnetic state by using the argumentation of Stoner ferromagnetism: The wavefunctions of the edge states of the left and the right edges have some overlap with each other if the width of the ZPNR is finite. This overlap will lead to an additional positive energy contribution in the case of an AFM state, which can be prevented by ferromagnetically aligning both edges. Thus, the ferromagnetic state has slightly lower energy. The situation could be similar to graphene zigzag ribbons, where a sharp semiconductor (AF) to metallic (FM) transition occurs by varying the ribbon width, which is seen experimentally~\cite{Magda14} and theoretically~\cite{Chen17}.
\begin{table}[b]
\centering
\begin{tabular}{ |c|c|c|c| }
\hline
$U/|t_1|$ & 0.8 & 0.9 & 1.0 \\
\hline
$E_{\rm FM}$/N [eV] & -3.1342389394 & -3.0761689920 & -3.0181486070\\
\hline
$E_{\rm AF}$/N [eV] & -3.1342358713 & -3.0761659724 & -3.0181456815\\
\hline
\end{tabular}
\caption{Ground-state energy per atom for the gapped-AFM and the gapped-FM phase for three interaction strengths.}
\label{tab_EG}
\end{table}
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{fig5.pdf}
\caption
The position of the maximum amplitude in the frequency domain as a function of the Hubbard interaction $U/|t_1|$ in Fig.~\ref{fig4}.}
\label{fig5}
\end{figure}
\begin{figure}[b]
\includegraphics[width=1\columnwidth]{fig6.pdf} \\[0mm]
\caption
Top and middle panels show the spectral function, extracted by unfolding the energy spectrum of the extended unit cell with $N_y=120$. The bottom panel shows the evolution of the single-particle gap. Regions are as follows: (I) nonmagnetic, (II) gapped-IC, and (III) gapped-AFM. The width of the ribbon is fixed at $N_x=7$.}
\label{fig6}
\end{figure}
\par
Figure~\ref{fig2}(b) shows the edge magnetization $m^z$. To calculate the edge magnetization, we only consider lattice sites along the border of the zigzag edge. The general behavior is consistent with the gap evolution. A small magnetization appears at the first critical point $U_{c1}\simeq0.2|t_1|$ and increases very slowly with $U$ until the second critical point $U_{c2}\simeq0.6|t_1|$. Beyond $U_{c2}$, the edge magnetization $m^z$ saturates. We note that our results for phase (III) are in agreement with the quantum Monte Carlo results reported in Ref.~\cite{Yang16}, which predicted long-range order for $U>0.5~eV$ at zero temperature. The inset of Fig.~\ref{fig2}(b) shows the maximum of the edge magnetization $m^z_{\rm max}$. Interestingly, it captures both the first and the second critical point, consistent with the gap evolution.
\par
To obtain more insight into the effect of the ribbon's width on the gap and the magnetization, we present in Fig.~\ref{fig2} data for different widths as a comparison. One can see that the gap does not depend on the ribbon width for $N_x=7,8,9$, for which the data collapse on top of each other. However, for ribbon widths $N_x<7$, we find that the gap depends on the width. The inset in Fig.~\ref{fig2}(a) displays the gap evolution with the inverse ribbon width, fitted by an exponential curve. We find that the edge magnetization, $m^z$, also collapses onto a single curve for $N_x>6$. In particular, all data show the same saturation value in region-(III). However, we note that the QMC calculation~\cite{Yang16} at room temperature has shown that the magnetization decreases with the ribbon width.
\par
Let us now analyze the IC phase in more detail by accessing the real-space data of the local magnetization, $m_i^z$, which reveal how an antiferromagnetic state at one edge changes into a ferromagnetic state with increasing interaction strength. The local magnetization and its Fourier transformation (FT) are shown in Fig.~\ref{fig4} for different interaction strengths. For small $U/|t_1|=0.25$, the local magnetization pattern is antiferromagnetic along one edge, as also demonstrated by the FT with a single peak at $\nu_{\rm max}=\pi$. By increasing $U$, one can see how the local magnetization starts to change: the single peak in the FT splits into two, which move away from $\pi$. Finally, for Hubbard interactions $U>U_{c2}$, the maximum in the FT occurs at $\nu_{\rm max}=0$, which signals a fully aligned ferromagnetic state along one edge.
In Fig.~\ref{fig5}, we show the position of the maximum in the FT, $\nu_{\rm max}/\pi$, plotted as a function of $U/|t_1|$. It can be seen how the maximum decreases from $1$ to $0$ within the IC phase.
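The extraction of $\nu_{\rm max}$ from an edge profile amounts to locating the dominant peak of a discrete Fourier transform. An illustrative sketch (not the analysis code used for the figures; the convention below assumes an even-length profile):

```python
import numpy as np

def nu_max(mz_edge):
    """Position of the dominant Fourier peak of the edge profile m_i^z,
    in units of pi (0 = FM, 1 = AFM, in between = IC)."""
    amp = np.abs(np.fft.rfft(np.asarray(mz_edge, float)))
    # rfft bins of an even-length signal span nu = 0 ... pi uniformly
    nu = np.linspace(0.0, np.pi, len(amp))
    return nu[np.argmax(amp)] / np.pi
```

A staggered pattern gives $\nu_{\rm max}/\pi=1$, a uniform pattern gives $0$, and an incommensurate modulation lands in between.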
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{mz_dmft.pdf} \\[0mm]
\includegraphics[width=1\columnwidth]{band_dmft.eps}
\caption
DMFT calculations of edge magnetization (top panel) as a function of Hubbard interaction $U/|t_1|$ for a ZPNR width $N_x=8$. Two different regimes are labeled and highlighted as: (I) nonmagnetic, and (II) AFM gapped (or FM gapped ) regions. Bottom panels show the spectral functions $A(E)$ for three different Hubbard interactions. The ribbon length is fixed at $N_y=48$ and the periodic boundary condition is implemented in the $y$-direction.}
\label{fig6b}
\end{figure}
\begin{figure}[b]
\includegraphics[width=1\columnwidth]{fig7.pdf}
\caption
Same as Fig.\ref{fig6}, but for the FM case. Regions are as follows: (I) nonmagnetic, (II) gapped-IC, and (III) gapped-FM. Red and blue lines in the spectral functions correspond to different spin-directions.}
\label{fig7}
\end{figure}
\par
Now, we use the translational symmetry of the lattice and probe the magnetic features of an infinite ribbon. To this end, we focus on the energy dispersion of an infinite ribbon. For a given wavenumber $k$ and spin $\sigma$, the mean-field Hamiltonian has $N$ states $\Psi_{k\sigma}(x)$ with energy $\epsilon_{k\sigma}(x)$. By implementing the iterative self-consistent technique~\cite{Palacios07} in the Brillouin zone (BZ), we can recover the magnetization configuration in ZPNRs. It is worth mentioning that the magnetic unit-cell in the IC phase is much larger than the lattice unit-cell. Thus, we use an unfolding technique to extract the spectral function. Detailed information about the unfolding is given in Appendix~\ref{Unfold}.
\par
In Fig.~\ref{fig6}, the spectral function of the edge states and the corresponding gap evolution are illustrated. We show that the quasi-flat bands of the edges are isolated from the bulk bands, which is the most prominent feature of ZPNRs. It is known that the hopping term $t_4$ is responsible for the dispersion of these flat bands~\cite{Ezawa14}. We also note that these quasi-flat modes are almost doubly degenerate. As shown in Fig.~\ref{fig6}, in the nonmagnetic region ($\bar{U}\equiv U/|t_1|=0.2$) these bands are degenerate at the BZ boundaries and split toward the BZ center. This splitting becomes smaller for wider ribbons, which may be explained as follows: increasing the width of the ZPNR reduces the interaction between the two edges, leading to a decrease of the edge splitting at the BZ center. When entering the magnetic phase ($\bar{U}\gtrsim0.2$), the bands split at the points $kb=\pm\pi/2$ at the Fermi energy. With increasing interaction strength, the gap size increases, and spectral weight is shifted above the Fermi energy at $k=0$ and below the Fermi energy at $kb=\pm\pi$. This shift of spectral weight causes the single-particle gap to decrease before entering the commensurate phase (region (III)). Finally, the spectral weight above and below the Fermi energy forms two quasiparticle bands for $U>0.6$. In region (III), with commensurate (here, gapped-AFM) edge magnetism, one band is shifted toward higher energies, and both bands are separated. The gap is clearly visible in this phase and can be read off directly from the energy dispersion. The gap evolution is thereby similar to that in Fig.~\ref{fig2}, which was calculated directly from the energy: the gap opens when entering the IC phase, reaches a maximum within the IC phase, decreases toward the AFM phase, and finally increases linearly in the AFM phase.
\par
To further validate our MFT results, we will now show DMFT results. As mentioned above, DMFT has been shown to predict the critical point in graphene adequately when compared with lattice QMC. We here use DMFT for a cluster with $N_x=8$ and $N_y=48$. Our results are summarized in Fig.~\ref{fig6b}. For weak interaction strengths, $U/\vert t_1\vert<0.7$, we find a nonmagnetic solution. We note that any (even a magnetic) initial guess for these interaction strengths converges to the same nonmagnetic state. Furthermore, we find a stable magnetic state at the edges of the ZPNR for $U/\vert t_1\vert>1.4$, which corresponds to phase (III) in MFT. As with MFT, we can find a stable AFM and a stable FM state. As in graphene, the local fluctuations included by DMFT shift the critical point to stronger interaction strengths compared to static MFT.
More interesting is the question of the existence of the IC phase. For interaction strengths $U/\vert t_1\vert<1.4$, we do not find a converged magnetic solution. However, for $0.7<U/\vert t_1\vert<1.4$, we find a small magnetization at the edges of the ZPNR, which does not vanish when iterating the DMFT calculation. If we start the DMFT in this regime with an inhomogeneous magnetic state, the magnetization configuration changes in each iteration without completely vanishing, but we cannot find a converged solution. We note that finding an incommensurate state with DMFT requires a large cluster and an appropriate initial guess. Thus, we interpret these DMFT calculations as an attempt to stabilize an incommensurate state. However, as we cannot find a converged solution, we cannot calculate further properties of this phase.
\par
An advantage of DMFT over static MFT is that spectral functions can readily be calculated and include lifetime effects due to correlations. The spectral functions calculated by DMFT are shown in Fig.~\ref{fig6b}. In contrast to static MFT, DMFT already includes modifications of the spectral function in the nonmagnetic phase (I). For $U/\vert t_1\vert=0.8$ (converged nonmagnetic solution), we find some (blurred) spectral weight below the quasiparticle band at $k=0$. This spectral weight should correspond to the splitting of the quasiparticle band at the center of the BZ, which is smeared out by correlations. Furthermore, we find some spectral weight above the quasiparticle band at $kb=\pm\pi$. With increasing interaction strength, spectral weight is transferred, in particular, from the quasiparticle band at the Fermi energy to energies below it, slowly forming a second band. At the same time, spectral weight moves to higher energies at $kb=\pm \pi$. These two processes finally form two bands, clearly visible for $U/\vert t_1\vert=1.6$, with a gap between them. While in static MFT the formation of a gap takes place upon entering the AFM phase (III), in DMFT this separation already starts in the nonmagnetic phase due to local fluctuations.
\par
Finally, we want to examine the above-mentioned gapped-FM phase, which coexists with the gapped-AFM phase, in more detail. We repeat the previous calculations using a proper initial guess to obtain the FM phase. The results are depicted in Fig.~\ref{fig7}. We note that when both edges are ferromagnetically aligned, we can also find the IC phase.
The gap evolution reveals a small {\em dome} in region (II): the gap opens at the first critical point, $U_{c1}$, reaches a maximum, and then decreases when approaching the second critical point, $U_{c2}$. In region (III), the gap increases linearly with the Hubbard interaction strength. It is interesting to note that the energy dispersions of the AFM state (Fig.~\ref{fig6}) and the FM state (Fig.~\ref{fig7}) are almost identical. However, while in the AFM state all edge modes are spin degenerate, in the FM state the edge modes above the Fermi energy have a definite spin direction and the edge modes below the Fermi energy exhibit the opposite spin direction. Furthermore, there is a slight additional splitting of the edge modes around $k=0$ in the FM state, which is absent in the AFM state. This small splitting is responsible for the energy difference between the AFM and the FM state.
\subsection{Strain effects}
Next, we want to study the impact of strain and disorder on the magnetic state.
To study strain effects, we follow the approach developed in Refs.~\cite{Jiang15, Mohammadi16}. We
will focus here on tensile strain normal to the phosphorene plane~\cite{Huang14}. Under
axial strain, following the Harrison relation~\cite{Harrison04}, the strain-modified hopping parameters in the linear regime can be written as $t_i'\approx t_i(1-2\alpha_x^i\epsilon_x-2\alpha_y^i\epsilon_y-2\alpha_z^i\epsilon_z)$, where the $\alpha_j^i$ are coefficients related to the structure of phosphorene and $\epsilon_j$ is the strain in the $j$-direction.
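In code, the linear Harrison rule is a simple rescaling of each hopping parameter. The sketch below is our own illustration; the $\alpha$ coefficients used in the test are placeholders, not the fitted phosphorene values:

```python
def strained_hoppings(t, alpha, eps):
    """Apply the linear Harrison rule t_i' ~ t_i (1 - 2 sum_j alpha_j^i eps_j).
    t     : list of unstrained hoppings t_i
    alpha : per-hopping coefficient triples (alpha_x^i, alpha_y^i, alpha_z^i)
    eps   : strain components (eps_x, eps_y, eps_z)"""
    return [ti * (1.0 - 2.0 * sum(a * e for a, e in zip(ai, eps)))
            for ti, ai in zip(t, alpha)]
```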
\par
Before exploring strain effects on the magnetic features, we briefly comment on the energy dispersion under strain.
Figure~\ref{fig8} presents the energy dispersion for three different strengths of tensile strain,
$\epsilon_z=0.0\%,~10\%,~20\%$, in the absence of the Hubbard interaction. It can be seen that tensile strain has
a significant impact on the band structure: the edge modes are split, accompanied by a compression of the bulk bands. Even for a strain of $\epsilon_z=20\%$, the degeneracy of the edge modes at the BZ boundaries survives, while
one of the split levels crosses the Fermi energy at $k=0$. We also note that for a strain of $\epsilon_z=10\%$,
the bulk band is flattened, analogous to the strain-induced Landau-level effects in graphene~\cite{Chang12,Vozmediano13,Roy13,Yang17}.
\begin{figure}
\includegraphics[width=1\columnwidth]{fig8.pdf}
\caption{
Energy dispersion of a ZPNR for three different strengths of tensile strain, $\varepsilon_z=0.0\%,~10\%,~20\%$, in the absence of the Hubbard interaction $U$. The two quasi-flat edge bands, isolated from the bulk, are colored in gold. The width of the ribbon is the same as in Fig.~\ref{fig4}. The horizontal red line marks the Fermi level.}
\label{fig8}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{fig9.pdf}
\caption{
Panels (a) and (c) show the gap and edge magnetization for different ribbon widths at a fixed tensile strain $\varepsilon_z=10\%$. In panels (b) and (d), we fix the ribbon width at $N_x=7$ and show three different strengths of tensile strain, $\varepsilon_z=0.0\%,~10\%,~20\%$. The inset in panel (d) shows a \textit{log}-\textit{log} plot of the evolution of $m^{z}$ as a function of the tensile strain at $U/|t_1|=0.8$ (the line is a power-law fit).}
\label{fig9}
\end{figure}
\par
We now explore the evolution of the gap and the magnetization as a function of the Hubbard interaction in the presence of tensile
strain. Figures~\ref{fig9}(a) and (c) show the results for different ribbon widths at fixed strain $\epsilon_z=10\%$.
Figures~\ref{fig9}(b) and (d) show the results for three different strengths of tensile strain, $\epsilon_z=0.0\%,~10\%,~20\%$, at a fixed ribbon width $N_x=7$. One profound effect of tensile strain is the destruction of the intermediate IC phase: for $\epsilon_z=20\%$, the IC phase has almost vanished. We furthermore notice that the second critical point, $U_{c2}$, shifts to a larger value. Thus, tensile strain has a tremendous impact on edge magnetism. This can be understood as follows: under tensile strain, the $t_2$ and $t_4$ hopping parameters change more strongly than the others. As shown in Fig.~\ref{fig8}, tensile strain splits the edge modes and increases their bandwidth. The increased bandwidth of these modes requires a larger interaction strength to stabilize the AFM phase. Furthermore, as mentioned earlier, the $t_4$ hopping term is important for the shape of the edge modes and thus plays an essential role in stabilizing the IC phase.
Extrapolating the magnetization $m^z$ to larger values of tensile strain for $U/|t_1|=0.8$, shown in the inset of Fig.~\ref{fig9}(d),
we find that the magnetism should vanish at about $\epsilon_z=50\%$, which is much higher than the first-principles prediction of Ref.~\cite{Du15}. We note that a strain of about $\epsilon_z=50\%$ is already large enough to destroy the whole structure of the edge modes. Thus, at the MFT level, we conclude that a strain-driven magnetic-to-nonmagnetic transition is not feasible.
\begin{figure}[b]
\includegraphics[width=1\columnwidth]{fig10.pdf}
\caption{
Evolution of the single-particle gap (upper panel) and the edge magnetization (lower panel) as a function of the Hubbard interaction $U$, for the clean and disordered cases $w/|t_1|=0.0,~0.3$. The width of the ribbon is the same as in Fig.~\ref{fig4}.}
\label{fig11}
\end{figure}
\subsection{Disorder effects}
To study the effects of disorder, we include in the Hamiltonian an additional term along the edges, $H_w=\sum_{i,\sigma}w_in_{i,\sigma}$, corresponding to nonmagnetic disorder. Here, $w_i$ is the strength of the disorder at site $i$, chosen randomly in the interval $[-w/2,w/2]$. Because the translational symmetry along the $y$-direction is broken, we solve for the ground state in a finite cluster with $N_y=120$ and periodic boundary conditions. To be independent of any particular disorder configuration, we average over $100$ different realizations. The influence of the edge disorder on the magnetic phases is presented in Fig.~\ref{fig11}.
It can be seen that both the gap and the edge magnetization are robust against disorder. We have not found any deviation in the saturation value of the edge magnetization in the gapped-AFM (FM) phase. Moreover, the evolution of the gap also indicates the existence of the gapped-IC phase in the disordered system. This is in contrast to the effect of strain in the preceding section. However, as mentioned before, the hopping parameters $t_2$ and $t_4$ play an essential role in stabilizing the edge states and their corresponding magnetic features. Thus, introducing Anderson-type disorder does not destabilize these states.
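The averaging over disorder realizations can be sketched as follows; `observable` is a hypothetical stand-in for a full ground-state calculation (e.g., the gap or the edge magnetization) at one disorder configuration:

```python
import numpy as np

def disorder_average(observable, w, n_edge_sites, n_realizations=100, seed=0):
    """Average `observable` over random edge-disorder realizations, with
    on-site energies w_i drawn uniformly from [-w/2, w/2]."""
    rng = np.random.default_rng(seed)
    samples = [observable(rng.uniform(-w / 2, w / 2, size=n_edge_sites))
               for _ in range(n_realizations)]
    return np.mean(samples), np.std(samples)
```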
\section{SUMMARY}
\label{sec4}
We have investigated the edge-state magnetic properties of black phosphorene nanoribbons using a tight-binding model with an electron-electron Hubbard interaction $U$.
Our aim was to explore the magnetic features of large clusters of black phosphorene nanoribbons, for which numerically expensive techniques such as density functional theory and quantum Monte Carlo are not feasible. Thus, to study the model, we have used a combination of static and dynamical mean-field theory (DMFT).
While our calculations for large $U$ are in agreement with previous results, we find an {\em incommensurate} magnetic phase for weak interactions. Performing a detailed Fourier analysis of the magnetization evolution in the {\em incommensurate} (IC) phase, we find a second critical interaction, $U_{c2}$, at which the IC phase changes into an antiferromagnetic (AFM) or ferromagnetic (FM) phase.
Finally, we have analyzed the influence of strain and disorder on the magnetic properties. Our results show that while the IC phase is robust to Anderson-type disorder, it is fragile against strain.
\begin{acknowledgments}
This work was supported by the Paris//Seine excellence initiative. J. Vahedi is also partially supported by Iran Science Elites Federation, Grant No.11/66332. R.P. is supported by JSPS, KAKENHI Grant No. JP18K03511.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
Given a probe image, Person ReID aims to identify the person of interest in a large gallery image database
collected from different cameras.
Person ReID is conducted by estimating the visual similarities between persons;
the gallery images are ranked in descending order of similarity as the re-identification result.
This task has extensive applications in intelligent surveillance. For instance,
it can be used to search for criminal suspects or missing persons in a large surveillance camera network
efficiently and effectively.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{fig/challenge3}
\caption{Several retrieval results for a converged ReID model.
Each row represents one visual case. The first column (P) is the probe image. Columns A--J show the retrieved gallery images, where green boxes denote true positives and red boxes denote false negatives.
From these visual cases, we observe various challenges in the Person ReID task, which can be summarized into two aspects:
\textbf{1).} scale variance of clues;
\textbf{2).} reliability of each clue.
To name specific examples:
\textbf{1).} detection misalignment (Row 1, Column D) causes large scale and shift variance;
\textbf{2).} a false alarm of the detector (Row 2, Column F) produces a partial human body, which serves as a distractor in the test set;
\textbf{3).} extremely confusing identity pairs (Row 2, Column A) require scrutiny of multi-scale clues, including not only the global color attribute but also tiny details like bags and shoes;
\textbf{4).} global appearance like color and body structure helps, but is influenced by camera parameters and human pose, \eg, the color distortion caused by camera settings shown in Row 3;
\textbf{5).} facial attributes like wearing glasses are useful in Row 2; however, the face becomes blurred or partially occluded in Row 3, so its importance should be reduced;
\textbf{6).} local parts of different scales are not always reliable, \eg, misalignment causes the missing shoes (Row 2, Column C),
a bicycle suddenly appearing in the background (Row 4, Column E), and the appearance change of a bag caused by different poses (Row 2, Column E).
}
\label{fig:challenge}
\end{figure}
As illustrated in Fig.~\ref{fig:challenge},
Person ReID faces many challenges, which can be summarized into two aspects:
\textbf{1).} The scale of the clues used to distinguish persons changes dramatically.
\textbf{2).} None of the clues is absolutely reliable for distinguishing persons.
To give specific examples:
\textbf{1).} Detection misalignment (Row 1, Column D) causes large scale and shift variance;
\textbf{2).} a false alarm of the detector (Row 2, Column F) produces a partial human body, which serves as a distractor in the test set;
\textbf{3).} the existence of extremely similar persons (Row 2, Column A) requires careful scrutiny of multi-scale clues, including not only the global color attribute but also tiny details like bags and shoes, and intelligent inference upon them;
\textbf{4).} global appearance like color and body structure helps, but is influenced by camera parameters and human pose, \eg, the color distortion caused by camera settings in Row 3;
\textbf{5).} facial attributes like wearing glasses are useful in Row 2; however, the face becomes blurred or partially occluded in Row 3, and the importance of the face in this circumstance should be reduced;
\textbf{6).} local parts of different scales are not always reliable, \eg, misalignment causes the missing shoes (Row 2, Column C),
a bicycle emerges in the background (Row 4, Column E), and the appearance of a bag changes with pose (Row 2, Column E).
We argue that all of these challenges can be handled when the model is:
\textbf{1).} capable of discovering global-scale and multiple local-scale clues for distinguishing one person from another;
\textbf{2).} able to evaluate the reliability and ID-relatedness of each clue, and finally combine the clues into a discriminative feature.
To achieve these goals, we propose two improvements:
\textbf{1).} Multi-scale Feature Learning (MSFL), a model structure with the capacity to utilize different scales and their combinations and to intelligently fuse the various scales.
\textbf{2).} To fully exploit and realize this potential, we guide the model with a Multi-scale Gradient Regularizer (MSGR), which implicitly considers all possible perturbations at the beginning of training and gradually emphasizes adversarial perturbations as the model approaches convergence. In this way, the model converges to a better optimum, learns to ignore irrelevant factors, and focuses on the combination of informative ID-related factors.
\section{Related Work}
\subsection{Pyramid Structure}
Due to the success of deep learning, CNNs have emerged as general-purpose feature extractors for a wide range of visual recognition tasks.
However, the features from a single convolution layer are insufficient for many tasks that require multi-scale clues for inference.
The importance of multi-scale feature learning and of pyramid network design has been recognized recently,
and some recent works \cite{cai2017higher,hariharan2015hypercolumns,long2015fully,xie2015holistically,you2018structurally} investigate the effectiveness of exploiting features from different convolution layers within a CNN.
For example, Hariharan et al. \cite{hariharan2015hypercolumns} considered the feature maps from all convolution layers, allowing finer-grained resolution for localization tasks.
Long et al. \cite{long2015fully} combined the finer-level and higher-level semantic feature from different convolution layers for better segmentation.
Xie et al. \cite{xie2015holistically} proposed a holistically-nested framework where the side outputs are added after lower convolution layers to provide deep supervision for edge detection.
Cai et al. \cite{cai2017higher} concatenated the activation maps from multiple convolution layers to model the interaction of part features for fine-grained recognition.
However, simply concatenating multi-scale features may fail to capture the importance, reliability, level of semantic abstraction, and ID-relatedness of features at different scales, yielding an inferior model.
Admittedly, the concatenation of hierarchical features incorporates information of different spatial resolutions. However, it also introduces large semantic gaps caused by the different depths.
The high-resolution, low-semantic maps may harm the representational capacity for the overall task.
For this reason, the Single Shot Detector (SSD) \cite{liu2016ssd} foregoes reusing low-level features and instead builds the pyramid starting from high up in the network (e.g., conv4\_3 of VGG nets \cite{simonyan2014very}) and then adds several new layers. However, the SSD-style pyramid misses the opportunity to reuse the higher-resolution maps of the feature hierarchy.
To create a feature pyramid that has strong semantics at all scales, many novel neural network architectures have been proposed, including U-Net \cite{ronneberger2015u} and SharpMask \cite{pinheiro2016learning} for segmentation, Recombinator networks \cite{honari2016recombinator} for face detection, and Stacked Hourglass networks \cite{newell2016stacked} for keypoint estimation. Ghiasi et al. \cite{ghiasi2016laplacian} present a Laplacian pyramid representation for FCNs to progressively refine segmentation.
These methods adopt architectures with pyramidal shapes and exploit lateral/skip connections that associate low-level feature maps with high-semantic features. Different from these works, we further improve the representational power of each feature according to its pyramid level.
\subsection{Adversarial Learning}
The idea of adversarial learning comes from adversarial training \cite{goodfellow2014explaining},
where a mixture of normal and adversarially generated examples is used during training, in the hope of increasing robustness against adversarial examples. However, as suggested by \cite{ross2018improving}, injecting adversarial noise in such an anisotropic, brute-force way harms the model performance
on the original validation set,
since adversarially generated examples are not naturally interpretable images.
Adversarial learning originated from regularizing techniques that reduce overfitting, where an implicit or explicit player turns against the optimization goal.
For example, Bengio et al.\cite{bengio2013estimating} and Gulcehre et al.\cite{gulcehre2016noisy} add noise in the ReLU and Sigmoid activation functions respectively.
Szegedy et al. \cite{szegedy2016rethinking} propose the label-smoothing regularization technique to minimize the distance between the model distribution and the uniform distribution.
The metric-learning community has recently started to explore this learning paradigm. For example,
deep adversarial metric learning \cite{duan2018deep} simultaneously learns to generate synthetic hard negatives from observed easy negative samples and to discriminate the feature embedding in an adversarial manner.
Adversarial metric learning \cite{Chen2018adv} simultaneously trains the hard-negative generator and the feature embedding in an adversarial manner.
Energy confusion regularization \cite{chen2019energy} seeks to confuse the learned model by enlarging the intra-class variance of all positive samples.
All of these works design an adversarial target but optimize it in a one-step, alternating-update fashion.
Different from existing work, we design a differentiable gradient regularizer that implicitly generates multiple adversarial perturbations at the same time and is jointly optimized with the original target.
\section{Proposed Method}
Generally, representation learning aims at learning a compact representation
from which downstream tasks benefit.
As a special case of representation learning, the training pipeline of Person ReID is also divided into a feature extraction stage, $\boldsymbol{v} = f_{\theta} (\boldsymbol{x})$, and a loss computation stage, $\cL (\boldsymbol{v}, y)$.
In this section,
we present our novel modifications to the current pipeline, which equip the model with the ability to extract global-scale and multiple local-scale features in the first stage
and guide the model to evaluate the reliability and ID-relatedness of each feature in the second stage.
\subsection{Multi-scale Feature Learning} \label{sec:msfl}
Prevailing Person ReID methods use one-size-fits-all high-level embeddings from a deep convolutional network for all cases.
Specifically, the coarse-resolution, high-semantic embeddings from the last layer are leveraged to retrieve the images in the gallery database.
This may limit their accuracy on difficult examples caused by variance of resolution, scale, pose, and viewpoint, detector misalignment, and extremely similar identities, as shown in Fig.~\ref{fig:challenge}.
As analyzed in Sec.~\ref{sec:intro}, it is important to empower the model to extract features of all scales.
To achieve this goal, ideally we would want to reason jointly across multiple scales of semantic abstraction.
However, simply concatenating high- and low-level embeddings suffers from severe performance degradation, as verified by \cite{huang2017multi}.
There are two reasons why naively combining low-level features hurts performance:
\textbf{1).} Early layers lack semantically abstract features. In a neural network,
the high-resolution features of low levels (shape and color) usually lack representational power, and
the high-semantic features of high levels (objects and their parts) tend to lose information about fine spatial details.
Since the features of early layers lack representational power,
attaching them to the final features prematurely will likely yield unsatisfactorily high error rates.
\textbf{2).} Direct deep supervision on low-level features alters the internal representation and makes it premature.
Modern neural networks abstract semantic concepts level by level; intermediate supervision forces the early features to be optimized for the short term rather than for the final layers. This may improve the discriminativeness of shallow features, but it collapses information required to generate high-quality features in later layers. This effect becomes more pronounced when the first classifier is attached to an earlier layer, as shown in \cite{huang2017multi}.
To remedy these issues,
we present a novel Person ReID architecture that
effectively solves the two problems by
\textbf{1).} creating a feature pyramid that has strong semantics at all scales, which naturally leverages the pyramidal shape of a CNN's feature hierarchy, and
\textbf{2).} inserting more blocks according to the abstraction level of the features, which
postpones the abstraction of shallow features and mitigates the effect of deep supervision.
The overview of the proposed network architecture is shown in Fig.~\ref{fig:net}.
The architecture first extracts a feature hierarchy and encodes the image into a global context embedding via a bottom-up path,
then gradually propagates information across different scales via a top-down pathway and lateral connections,
and finally combines features of different scales via Multi-scale Feature Fusion (MSFF).
There are two main considerations for the proposed network architecture:
\textbf{Cross-scale information propagation (CSIP)}
The clues for discriminating persons are imbalanced across the features of different scales and isolated from each other.
To overcome these problems, we combine low- and high-level features and allow information to communicate and rebalance across scales via the top-down pathway and lateral connections.
The first step is extracting the feature hierarchy. For convenience of description, we define a \textit{stage} of a network as the combination of layers that produce feature maps of the same size.
We choose the output of the last layer of each stage as our reference set of feature maps, which we enrich to create the feature pyramid. This choice is natural since the deepest layer of each stage should have the strongest features.
Specifically, for the ResNet backbone~\cite{he2016deep}, we use the feature activations of each stage's last residual block (conv2, conv3, conv4, conv5), denoted as $\{C_2, C_3, C_4, C_5\}$, with strides of $\{4, 8, 16, 32\}$ \wrt the input image.
We do not include conv1 in the pyramid due to its large memory footprint.
Then, the top-down pathway upsamples the high-level, strong-semantic features.
These features are then enhanced with features from the bottom-up pathway via lateral connections.
Each lateral connection merges feature maps of the same spatial size from the bottom-up and top-down pathways. The bottom-up feature map is of lower-level semantics, but its activations are of higher resolution and more accurately localized.
In detail, we use nearest-neighbor upsampling to increase the spatial resolution by a factor of 2,
and append a $3\times 3$ convolution to reduce the aliasing effect of upsampling.
The lateral connection consists of a $1\times 1$ convolutional layer that normalizes the channel dimension to 512.
The outputs of the lateral connection and the upsampling module are merged by element-wise addition.
The final set of feature maps is called $\{P_2, P_3, P_4, P_5\}$, corresponding to $\{C_2, C_3, C_4, C_5\}$, which are respectively of the same spatial sizes.
The result is a feature pyramid that has rich semantics at all levels, where the details of the low levels are conditionally decoded according to the global context propagated from the high-semantic embeddings.
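Assuming, for brevity of illustration, that the $3\times 3$ anti-aliasing convolutions are omitted and each $1\times 1$ lateral convolution is a plain per-pixel channel projection, the top-down merging described above can be sketched as:

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral(c, w):
    """1x1 convolution as a per-pixel projection to the 512-dim pyramid width."""
    return np.einsum('oc,chw->ohw', w, c)

def top_down(cs, ws):
    """Build [P2..P5] from bottom-up maps cs = [C2..C5] (coarsest last):
    start from C5, then repeatedly upsample and add the lateral branch."""
    ps = [lateral(cs[-1], ws[-1])]
    for c, w in zip(reversed(cs[:-1]), reversed(ws[:-1])):
        ps.append(upsample2(ps[-1]) + lateral(c, w))
    return ps[::-1]
```

The sketch reproduces the shape bookkeeping only: each $P_l$ has 512 channels and the spatial size of its $C_l$.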
\textbf{Multi-scale feature fusion (MSFF)}
Although low-level embeddings contain rich information about shape, color, and texture, they are not abstract enough to discriminate different persons.
Thus, we stack more bottleneck blocks onto the shallow layers to refine the feature representation, and finally integrate the information of different scales via concatenation.
To be specific, we further abstract the features $\{P_2, P_3, P_4\}$ by 3, 2, and 1 bottleneck blocks respectively; the final feature set is $\{F_2, F_3, F_4, P_5\}$.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{fig/net2}
\caption{Overview of the proposed method.}
\label{fig:net}
\end{figure*}
\subsection{Multi-scale Gradient Regularizer}
It is one thing that
all clues across different scales must be extracted and considered jointly by MSFL;
it is a more important thing that
some clues should be allowed to be partially altered or even missing.
To achieve the latter without increasing the computational burden at test time,
we introduce a differentiable Multi-scale Gradient Regularizer (MSGR),
which is applied in the training stage to help the model extract a more ID-related and discriminative embedding.
Ross et al. \cite{ross2018improving} demonstrate that input gradient regularization helps the model be robust to adversarial attacks
while being more naturally interpretable.
Inspired by \cite{ross2018improving}, we expect that MSGR applied to low-level feature maps can interpretably alter the ID-related color/context information,
and MSGR applied to high-level feature maps can interpretably change the ID-related semantic part information.
Although the experiments show that the effect is more complicated and entangled, we describe the method from simple to complex.
We first describe the single-scale gradient regularizer, which is naturally derived from the worst-case perturbation and ultimately becomes a regularizer in the final loss function.
Denote by $\boldsymbol{x}$ the input images and by $\mTheta$ the model parameters. For simplicity, we write the final loss as $\mathcal{L}(\boldsymbol{x}; \mTheta)$, which encapsulates the feature extraction process.
The idea can be formalized as follows.
Instead of solving $\min_{\mTheta} \cL ( \boldsymbol{x}; {\mTheta})$,
we would ideally like to solve the following problem
in order to build a model that is robust against any perturbation $\boldsymbol{\epsilon}$.
The perturbation is observed to be random in the beginning and to become ID-related as the model converges.
\begin{equation}
\label{eq:inner}
\min _{\boldsymbol{\theta}} \max _{\boldsymbol{\epsilon} :\|\boldsymbol{\epsilon}\|_{p} \leq \sigma} \mathcal{L}(\boldsymbol{x}+\boldsymbol{\epsilon} ; \boldsymbol{\theta})
\end{equation}
The norm constraint in Eq.~\ref{eq:inner} implies that we only require our model to be robust against certain small perturbations (which do not change the ID completely).
In general, this problem is difficult to solve explicitly due to its non-convex nature \wrt $\boldsymbol \epsilon$ and $\mTheta$.
We propose to solve it via a first-order Taylor expansion at the point $\boldsymbol{x}$.
The inner problem then becomes
\begin{equation}
\label{eq:taylor}
\max _{\boldsymbol \epsilon} \mathcal{L}(\boldsymbol{x})+\nabla_{\boldsymbol{x}} \mathcal{L}^{T} \boldsymbol{\epsilon} \quad \text { s.t. } \quad\|\boldsymbol{\epsilon}\|_{p} \leq \sigma
\end{equation}
This problem is trivially linear, and hence convex \wrt $\boldsymbol{\epsilon}$. We can obtain a closed form solution by Lagrangian multiplier method, see Appendix~\ref{sec:solving} for details. This yields
\begin{equation}
\label{eq:epsilon}
\epsilon=\sigma \operatorname{sign}(\nabla \mathcal{L})\left(\frac{|\nabla \mathcal{L}|}{\|\nabla \mathcal{L}\|_{p^{*}}}\right)^{\frac{1}{p-1}}
\end{equation}
where $p^*$ is the dual of $p$, \ie, $\frac{1}{p^{*}}+\frac{1}{p}=1$.
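As a quick numerical sanity check on Eq.~\ref{eq:epsilon}, the following sketch (NumPy; the gradient vector and budget $\sigma$ are synthetic) verifies that, for $p=2$, the closed-form perturbation attains the objective value $\sigma\|\nabla\mathcal{L}\|_{2}$ and is not beaten by random feasible perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=50)   # synthetic stand-in for the input gradient
sigma = 0.1               # perturbation budget

# Closed-form maximizer for p = 2 (so p* = 2 and 1/(p-1) = 1):
# eps* = sigma * sign(g) * (|g| / ||g||_2)^(1/(p-1)) = sigma * g / ||g||_2
eps_star = sigma * np.sign(g) * (np.abs(g) / np.linalg.norm(g)) ** (1 / (2 - 1))

# Its objective value <g, eps*> equals the induced penalty sigma * ||g||_2.
assert np.isclose(g @ eps_star, sigma * np.linalg.norm(g))

# No random perturbation on the constraint boundary does better.
for _ in range(1000):
    eps = rng.normal(size=50)
    eps = sigma * eps / np.linalg.norm(eps)   # project onto the boundary
    assert g @ eps <= g @ eps_star + 1e-12
```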
Substituting the optimal $\boldsymbol{\epsilon}$ back into the original optimization problem Eq.~\ref{eq:inner}, we see that the influence of perturbations can be formulated as a regularization term:
\begin{equation}
\min_{ \mTheta} \mathcal{L}(\boldsymbol{x}; \mTheta)
+\sigma\left\|\nabla_{\boldsymbol{x}} \mathcal{L}\right\|_{p^{*}}
\label{eq:loss}
\end{equation}
There are multiple ways to implement the regularization term:
\textbf{1).} Imitate adversarial training methods: decompose the training procedure into two stages, first
calculating the adversarial perturbation by Eq.~\ref{eq:epsilon},
and then feeding the adversarial input into the ordinary training procedure, minimizing the loss function by gradient descent on $\mTheta$.
\textbf{2).} Summarize the influence of perturbations into a regularization term, as shown in Eq.~\ref{eq:loss}, and let the gradient of $\left\|\nabla_{\boldsymbol{x}} \mathcal{L}\right\|_{p^{*}}$ \wrt $\mTheta$ flow back to $\mTheta$.
Empirically, \textbf{2).} is superior to \textbf{1).}, which is consistent with the adversarial attack and defense literature, \ie, adversarial training methods usually harm model performance, while gradient regularizers help the model converge to a flatter local optimum. Thus, we directly optimize the regularization term,
leveraging the automatic differentiation provided by modern deep learning frameworks.
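As an illustrative sketch of option \textbf{2).}, the penalty $\|\nabla_{\boldsymbol{x}}\mathcal{L}\|_{p^*}$ can be implemented in PyTorch via double backpropagation; the toy model, shapes and hyper-parameters below are placeholders rather than our actual training configuration:

```python
import torch
import torch.nn as nn

def loss_with_grad_reg(model, x, y, criterion, sigma=1e-2, p_star=2):
    """Classification loss plus sigma * ||d loss / d x||_{p*} (option 2)."""
    x = x.clone().requires_grad_(True)   # leaf input, so we can differentiate w.r.t. it
    loss = criterion(model(x), y)
    # create_graph=True keeps the graph, so the penalty is itself differentiable
    # w.r.t. the model parameters and its gradient flows back into them.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.flatten(1).norm(p=p_star, dim=1).mean()
    return loss + sigma * penalty

# Toy usage on a tiny classifier; shapes and the model are illustrative only.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.randn(4, 3, 8, 8)
y = torch.randint(0, 10, (4,))
total = loss_with_grad_reg(model, x, y, nn.CrossEntropyLoss())
total.backward()   # parameter gradients include the regularizer's contribution
```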
For the case $p=\infty$, the induced regularization term $\|\nabla \cL\|_1$ penalizes the gradient in an anisotropic way,
whereas for $p=1$, the induced $\|\nabla \cL\|_\infty$ penalizes the gradient in only one direction.
Empirically, experiments show that $p=2$ is more appropriate than both cases above.
Finally, for the multi-scale gradient regularizer, denote $\mTheta =\{\boldsymbol{\theta}_1, ... ,\boldsymbol{\theta}_K \}$ as the model parameters;
$\boldsymbol v = f_K( \cdots (f_1(\boldsymbol x; \boldsymbol{\theta}_1)) \cdots ; \boldsymbol{\theta}_K)$ as the features hierarchically extracted by a $K$-stage neural network;
and $\mathcal{L}(\boldsymbol{v} ; \mTheta)$ as the loss function.
For the MSFL based on the ResNet backbone, there are $K=4$ scales of features.
Instead of solving $\min_{{\mTheta}} \mathcal{L}(\boldsymbol{v} ; \mTheta)$,
ideally we would like to solve the following problem if we try to build a robust model against small perturbations
$\{\boldsymbol \epsilon_1, ... ,\boldsymbol \epsilon_K \}$
on the features of any scale, including perturbations of texture, color, and semantically abstracted attributes such as pose, gender, race and clothing style.
\begin{equation}
\label{eq:joint}
\min _{\mTheta}
\max _{\boldsymbol{\epsilon}_k :\|\boldsymbol{\epsilon}_k\|_{p} \leq \sigma, \forall k}
\mathcal{L}\left(
f_K ( \cdots (f_1(\boldsymbol x)+\boldsymbol \epsilon_1 ) \cdots )+\boldsymbol \epsilon_K
; {\mTheta} \right)
\end{equation}
Since the nested hierarchical function can be approximated by a first-order Taylor expansion level by level, the inner problem simplifies to
\begin{equation}
\begin{aligned}
\max_{\forall i \in \{1,...,K\}, \|\boldsymbol{\epsilon}_i\|_{p} \leq \sigma}
& \cL(f(\boldsymbol{x})) \\
& + (\nabla_{f_K} \cL)^T \cdots (\nabla_{\boldsymbol{x}} f_1)^T \cdot \boldsymbol{\epsilon}_1 \\
& + \cdots + (\nabla_{f_K} \cL)^T \boldsymbol{\epsilon}_K
\end{aligned}
\end{equation}
Applying the chain rule, $ (\nabla_{f_K} \cL)^T \cdots (\nabla_{\boldsymbol{x}} f_1)^T \cdot \boldsymbol{\epsilon}_1$ simplifies to $(\nabla_{\boldsymbol x} \cL)^T \boldsymbol{\epsilon}_1$. Similarly, the loss with the multi-scale gradient regularizer is
\begin{equation}
\min_{ \mTheta} \mathcal{L}(\boldsymbol{x}; \mTheta)
+\sigma\left\|\nabla_{\boldsymbol{x}} \mathcal{L}\right\|_{p^{*}}
+ \cdots
+ \sigma\left\|\nabla_{{f_K}} \mathcal{L}\right\|_{p^{*}}
\end{equation}
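A minimal PyTorch sketch of this multi-scale regularizer follows; the two-stage toy backbone stands in for the ResNet stages, and the function and parameter names (e.g.\ \texttt{msgr\_loss}, \texttt{reg\_idx}) are hypothetical rather than our actual code:

```python
import torch
import torch.nn as nn

def msgr_loss(stages, head, criterion, x, y, sigma=1e-2, reg_idx=(0, 1)):
    """Loss plus sigma * (||d loss/d x||_2 + sum_k ||d loss/d f_k||_2).

    `stages` is a list of blocks f_1..f_K standing in for backbone stages;
    `reg_idx` selects which stage outputs are regularized.
    """
    x = x.clone().requires_grad_(True)
    feats, h = [x], x
    for f in stages:              # record every intermediate feature map
        h = f(h)
        feats.append(h)
    loss = criterion(head(h), y)
    targets = [feats[0]] + [feats[i + 1] for i in reg_idx]
    grads = torch.autograd.grad(loss, targets, create_graph=True)
    penalty = sum(g.flatten(1).norm(p=2, dim=1).mean() for g in grads)
    return loss + sigma * penalty

# Illustrative two-stage "backbone" plus a linear head.
stages = [nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()),
          nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())]
head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8 * 8, 5))
x, y = torch.randn(2, 3, 8, 8), torch.randint(0, 5, (2,))
total = msgr_loss(stages, head, nn.CrossEntropyLoss(), x, y)
total.backward()
```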
\section{Experiments} \label{sec:exp}
\subsection{Implementation Details}
We follow the settings of the widely used open-source implementation of
\cite{zhou2019omni}.
To ensure a fair comparison, we do not apply tricks such as Last Stride, Label Smoothing \cite{szegedy2016rethinking} and LR Warmup mentioned in \cite{luo2019bag}.
It is worth mentioning that although Luo \etal \cite{luo2019bag} achieve astonishing performance with a ResNet50 backbone \cite{he2016deep}, the dimension of their feature embedding is 2048, and the Rank-1 accuracy on Market1501 drops from $94.5\%$ to $93.0\%$ if the dimension is reduced to $512$.
During the training stage, our implementation details include:
\textbf{1).} We initialize the ResNet50 with pretrained parameters on ImageNet.
To reduce the feature dimension to $512$, which is the same as \cite{zhou2019omni}, we append BN-FC-ReLU layers after global average pooling of ResNet50,
and change the output dimension of classifier to the number of identities in the training dataset.
\textbf{2).} To constitute a training batch, $P$ identities and $K$ images per person are sampled randomly. The batch size equals $B = P \times K$. This approach has shown very good performance in similarity-based ranking and avoids the need to generate a combinatorial number of exemplar pairs.
During a training epoch, each identity is selected in its batch in turn, and the remaining $P-1$ batch identities are sampled at random. We set $P=8$ and $K=4$.
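A minimal sketch of this $P \times K$ batch construction (pure Python; the identity counts are synthetic and the helper name is hypothetical) could look as follows:

```python
import random
from collections import defaultdict

def pk_batches(labels, P=8, K=4, seed=0):
    """Yield index batches with P identities and K images per identity.

    Each identity anchors one batch per epoch in turn; the other P-1
    identities are drawn at random, and identities with fewer than K
    images are sampled with replacement.
    """
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, pid in enumerate(labels):
        by_id[pid].append(idx)
    ids = list(by_id)
    for anchor in ids:
        others = rng.sample([i for i in ids if i != anchor], P - 1)
        batch = []
        for pid in [anchor] + others:
            pool = by_id[pid]
            picks = rng.sample(pool, K) if len(pool) >= K else rng.choices(pool, k=K)
            batch.extend(picks)
        yield batch

labels = [i // 6 for i in range(60)]   # 10 synthetic identities, 6 images each
batches = list(pk_batches(labels))
```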
\textbf{3).} For image preprocessing and augmentation, we resize each image to $288 \times 144$ pixels, randomly crop it to a $256 \times 128$ rectangle, and flip it horizontally with probability 0.5. Following ImageNet practice, each image is first divided by 255 to normalize it to the $[0,1]$ range, then the channel means $[0.485, 0.456, 0.406]$ are subtracted and the result is divided by the channel standard deviations $[0.229, 0.224, 0.225]$. These statistics are computed from the natural images in ImageNet.
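The crop/flip/normalization arithmetic above can be sketched as follows (NumPy; the resize step is assumed to have already produced a $288 \times 144$ image, and the function name is illustrative):

```python
import numpy as np

# ImageNet channel statistics (RGB) used for normalization.
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def preprocess(img_uint8, rng):
    """Random 256x128 crop, random horizontal flip, then normalization.

    `img_uint8` is an HxWx3 uint8 image assumed already resized to 288x144.
    """
    h0, w0 = img_uint8.shape[:2]
    top = rng.integers(0, h0 - 256 + 1)
    left = rng.integers(0, w0 - 128 + 1)
    crop = img_uint8[top:top + 256, left:left + 128].astype(np.float32)
    if rng.random() < 0.5:
        crop = crop[:, ::-1]                  # horizontal flip
    return (crop / 255.0 - MEAN) / STD        # scale to [0,1], then standardize

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(288, 144, 3), dtype=np.uint8)
out = preprocess(img, rng)
```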
\textbf{4).} Adam is adopted to optimize the model. The initial learning rate is 0.00035 and decreases by a factor of 0.1 at the 40th and 70th epochs, respectively. In total there are 120 training epochs. We train the model on two NVIDIA TITAN XP GPUs with PyTorch as the platform, using float16 training and BatchNorm synchronized across GPUs as default strategies.
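This schedule is a standard step decay and can be sketched, e.g., with PyTorch's \texttt{MultiStepLR} (the linear layer below is only a stand-in for the model):

```python
import torch

model = torch.nn.Linear(512, 751)    # stand-in for the actual ReID model
opt = torch.optim.Adam(model.parameters(), lr=3.5e-4)
# Decay the learning rate by 0.1 at epochs 40 and 70, over 120 epochs total.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[40, 70], gamma=0.1)

lrs = []
for epoch in range(120):
    opt.step()                       # placeholder for one epoch of training
    sched.step()
    lrs.append(opt.param_groups[0]["lr"])
```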
\subsection{Dataset and Protocol}
We focus on four widely-used large datasets: CUHK03 \cite{li2014deepreid}, Market1501 \cite{zheng2015scalable}, DukeMTMC-ReID \cite{zheng2017unlabeled}, and MSMT17 \cite{wei2018person}.
The \textbf{CUHK03} dataset \cite{li2014deepreid} contains 13,164 images of 1,360 identities. It provides both bounding boxes detected by deformable part models (DPMs) and manually labeled bounding boxes.
We adopt the new training/testing protocol proposed by Zhong \etal \cite{zhong2017reranking}, which is similar to that of Market-1501. The new protocol splits the dataset into a training set and a testing set, which consist of 767 identities and 700 identities respectively.
In testing, one image from each camera is randomly selected as the query for each identity, and the rest of the images constitute the gallery set.
Compared to the traditional protocol that splits the dataset into training set with 1,160 identities and testing set with 100 identities,
the new protocol has two advantages:
\textbf{1).} For each identity, there are multiple ground truths in the gallery, which is more consistent with practical application scenario.
\textbf{2).} Evenly dividing the dataset into training set and testing set at once helps avoid repeating training and testing multiple times.
The \textbf{Market1501} dataset \cite{zheng2015scalable} contains 32,668 images of 1,501 labeled
persons captured from six camera views. There are 751 identities in the training set and 750 identities in the testing set.
In the original study on this dataset, the authors also use mAP as the metric to evaluate the overall quality of the predicted rank list.
\textbf{DukeMTMC-ReID}\cite{zheng2017unlabeled} is a subset of DukeMTMC\cite{ristani2016MTMC}
and contains 36,411 images of 1,812 identities captured by eight high-resolution cameras.
The pedestrian images are cropped by hand-drawn bounding boxes. It consists of 16,522 training images of 702 identities, 2,228 query images and 17,661 gallery images of the other 702 identities.
\textbf{MSMT17} \cite{wei2018person} is currently the largest Person ReID
dataset, containing 126,441 images of 4,101 identities across 15 cameras. It is composed of a training set containing 32,621 bounding boxes of 1,041 identities and a test set including 93,820 bounding boxes of 3,060 identities. From the test set, 11,659 images are used as query images and the other 82,161 bounding boxes are used as gallery images. This challenging dataset has more complex scenes and backgrounds, \eg, indoor and outdoor scenes, than the others.
To evaluate the performance of the proposed methods and compare with other ReID methods, we report two common evaluation metrics: the cumulative matching characteristics (CMC) at rank-1
and mean average precision (mAP) on the above four benchmarks, following the common settings \cite{zheng2015scalable,wei2018person}.
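For reference, a simplified computation of CMC rank-1 and mAP from a query--gallery distance matrix is sketched below; it omits the per-camera filtering of the full protocol, and the function name is illustrative:

```python
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """CMC rank-1 and mAP from a query-by-gallery distance matrix.

    Simplified: assumes every query identity appears in the gallery and
    omits the usual removal of same-camera, same-identity gallery images.
    """
    g_ids = np.asarray(g_ids)
    r1_hits, aps = [], []
    for i, qid in enumerate(q_ids):
        order = np.argsort(dist[i])          # gallery sorted by distance
        matches = g_ids[order] == qid
        r1_hits.append(bool(matches[0]))     # top-1 correct?
        hit_pos = np.flatnonzero(matches)
        precisions = (np.arange(len(hit_pos)) + 1) / (hit_pos + 1)
        aps.append(precisions.mean())        # average precision for this query
    return float(np.mean(r1_hits)), float(np.mean(aps))

# Sanity check: a perfect ranking yields rank-1 = mAP = 1.
dist = np.array([[0.1, 0.9], [0.9, 0.1]])
r1, mAP = rank1_and_map(dist, q_ids=[0, 1], g_ids=[0, 1])
```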
\subsection{Visualization of Learned Features}
To understand how MSFL+MSGR help learn discriminative features, we visualize the activation values of high-level feature maps $P_5$ to investigate which semantic parts the network focuses on. Following \cite{zhou2019omni}, the activation maps are computed as the sum of absolute-valued feature maps along the channel dimension followed by a spatial $L_2$ normalization.
Fig. \ref{fig:activation} compares the activation maps of the ResNet50 baseline and
the model with MSFL+MSGR.
It is clear that MSFL+MSGR captures the local discriminative patterns of a person, such as clothing logos, bags (Column 3), shoes (Column 9) and a reliably clear frontal face (Column 4), to distinguish one person from another.
In contrast, the baseline model over-concentrates on less informative regions like the background.
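The activation maps described above can be computed as follows (PyTorch sketch; the feature tensor is synthetic):

```python
import torch

def activation_map(feat):
    """(N, C, H, W) features -> (N, H, W) maps: channel-wise sum of
    absolute values, followed by spatial L2 normalization."""
    a = feat.abs().sum(dim=1)
    norms = a.flatten(1).norm(p=2, dim=1).clamp_min(1e-12)
    return a / norms.view(-1, 1, 1)

feat = torch.randn(2, 512, 16, 8)    # synthetic stand-in for P5 features
amap = activation_map(feat)
```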
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{fig/activation}
\caption{
First line contains original images. Second line and third line contain the activation map of ResNet50 baseline and MSFL+MSGR respectively.
These images indicate that MSFL+MSGR detects subtle details like bags (Column 3), shoes (Column 9) and reliably clear frontal face (Column 4), to help discriminate visually similar persons.}
\label{fig:activation}
\end{figure}
To understand how MSFL helps collect multi-scale discriminative clues, we visualize the features $C_2, P_2, F_2, C_5, P_5$ in the hierarchy. As shown in Fig.~\ref{fig:unet}, due to its lack of representative power, $C_2$ may over-concentrate on parts or background with salient color (Row 1 Column 2 and Row 4 Column 2) or on
edges and context without semantic meaning. With the help of global context from the top-down path, $P_2$ focuses on discriminative person parts.
Meanwhile, $P_5$ attends to multiple discriminative parts, which is helpful for robust ReID.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig/unet-plevel}
\caption{
Each row contains a visual case. The first column is the original image; the second to the last columns correspond to the activation maps of $C_2, P_2, F_2, C_5, P_5$ of MSFL+MSGR, respectively.
Due to its lack of representative power, $C_2$ may over-concentrate on parts or background with salient color (Row 1 Column 2 and Row 4 Column 2) or on
edges and context without semantic meaning. With the help of global context from the top-down path, $P_2$ focuses on discriminative person parts.
Meanwhile, $P_5$ attends to multiple discriminative parts, which is helpful for robust ReID.
}
\label{fig:unet}
\end{figure}
To conclude, the qualitative results demonstrate that our multi-scale design, aggregation methods and regularization
enable the model to identify subtle differences between visually similar persons -- a vital requirement for accurate ReID.
\subsection{Ablation Study}
We conduct ablation study on Market1501 dataset.
Comparing (a) with (b) in Tab.~\ref{tab:msfl}, we verify that directly fusing features at the deep stages can slightly improve the performance; however, directly incorporating the shallow features (Tab.~\ref{tab:msfl}(c)) hurts the performance.
The reasons are discussed in Sec.~\ref{sec:msfl}, \ie, low-level features lack semantic abstraction, and direct deep supervision on low-level features alters the internal representation and makes it premature.
On the contrary, CSIP (Tab.~\ref{tab:msfl}(e)) allows the shallow features to communicate with the global context and be conditionally decoded, thus improving rank-1 from 90.4\% to 91.5\%.
CSIP consists of two complementary modules, \ie, the top-down pathway and the lateral connection. Lacking the top-down pathway hurts the performance (Tab.~\ref{tab:msfl}(a) and (c)),
and lacking the lateral connection fails to improve the performance (Tab.~\ref{tab:msfl}(a) and (d)).
Meanwhile, MSFF (Tab.~\ref{tab:msfl}(f)) empowers the shallow features to abstract their detailed information with more complex transformations, further improving rank-1 to 92.3\%.
\begin{table}
\centering
\caption{Ablation Study on Multi-Scale Feature Learning (MSFL)}
\label{tab:msfl}
\scalebox{0.85}{
\begin{tabular} {lccccc}
\toprule
Methods & lateral& top-down & MSFF & rank-1 & mAP \\ \midrule
(a) ResNet50 &&& & 90.4 & 78.3 \\ \midrule
(b) direct fuse $C_4$ and $C_5$ & $\surd$ && & 90.7 & 78.6 \\
(c) direct fuse from $C_2$ to $C_5$ & $\surd$ && & 87.1 & 75.9\\ \midrule
(d) fuse from $P_2$ to $P_5$, w.o. lateral & & $\surd$& & 90.3 & 78.3 \\
(e) fuse from $P_2$ to $P_5$, with lateral & $\surd$ & $\surd$& & \textbf{91.5} & \textbf{79.8} \\
(f) fuse from $F_2$ to $P_5$ & $\surd$ & $\surd$ &$\surd$& \textbf{92.3} & \textbf{81.9} \\ \bottomrule
\end{tabular}
}
\end{table}
Our design of MSFL modifies the network architecture with negligible computational overhead.
The Last Stride=1 trick \cite{sun2018beyond} (Tab.~\ref{tab:complex}(b)) is widely used in the Person ReID community and increases the FLOPs from 2.6G to 4.1G.
This trick removes the last spatial downsampling operation after stage 3 of the backbone network to enlarge the feature map, with the hope of improving performance through increased spatial resolution.
We do not need to apply this trick, since MSFL maintains a roughly equivalent amount of parameters and FLOPs while being superior to the Last Stride=1 trick.
\begin{table}
\centering
\caption{Complexity of MSFL}
\label{tab:complex}
\scalebox{0.85}{
\begin{tabular} {lcccc}
\toprule
Methods & params (M)& flops (G) & rank-1 & mAP \\ \midrule
(a) ResNet50 baseline & 24.56& 2.6& 90.4 & 78.3 \\ \midrule
(b) ResNet50, Last Stride=1 & 24.56 &4.1& 91.3 & 79.8 \\
(c) fuse from $F_2$ to $P_5$ & 27 & 5.9 & \textbf{92.3} & \textbf{81.9}\\
\bottomrule
\end{tabular}
}
\end{table}
As for MSGR, we first determine that the $L_2$-norm is the most beneficial for the regularization term (Tab.~\ref{tab:MSGR}(a)--(d)), which matches our hypothesis that a model with smooth gradients of fewer extreme values is capable of extracting balanced ID-related information, and is thus more robust and accurate.
The improvement from the Gradient Regularizer is stable (Tab.~\ref{tab:MSGR}(c), (e) and (f)) and not sensitive to the choice of $\sigma$ over a wide range.
Thus, we fix $\sigma = 1e-2$ in the later experiments.
Meanwhile, by regularizing the loss \wrt the outputs of the middle stages to be smooth,
the \textbf{multi-scale} Gradient Regularizer further improves the performance.
The largest improvement occurs when it is applied to the input and early stages,
possibly because the regularization on the input gradient already encourages the outputs of the middle stages to be smooth, \ie, $(\nabla_{\boldsymbol x} \cL)^T = (\nabla_{C_K} \cL)^T \cdots (\nabla_{\boldsymbol{x}} f_1)^T$.
However, applying MSGR to the middle outputs still improves the performance, since different stages are responsible for different levels of abstraction. In the later experiments, we adopt MSGR on the outputs of stages 1 and 2 by default.
\begin{table}
\centering
\caption{Ablation Study on Multi-scale Gradient Regularizer (MSGR)}
\label{tab:MSGR}
\scalebox{0.95}{
\begin{tabular} {lcccc}
\toprule
Methods & norm & $\sigma$ & rank-1 & mAP \\ \midrule
(a) r50 baseline &/ &/ & 90.4 & 78.3 \\ \midrule
(b) $\|\nabla_{\boldsymbol{x}} \cL \|$ & 1 & 1e-2 & 85.1 & 70.1 \\
(c) $\|\nabla_{\boldsymbol{x}} \cL \|$ & 2 & 1e-2 & \textbf{91.7} & \textbf{80.1} \\
(d) $\|\nabla_{\boldsymbol{x}} \cL \|$ & $\infty$ & 1e-2 & 88.2 & 72.2 \\
(e) $\|\nabla_{\boldsymbol{x}} \cL \|$ & 2 & 1e-1 & {90.7} & 79.4 \\
(f) $\|\nabla_{\boldsymbol{x}} \cL \|$ & 2 & 1e-3 & 91.2 & {80.0} \\
(g) $\|\nabla_{C_2} \cL \|$ & 2 &1e-2 & 92.1 & 80.4 \\
(h) $\|\nabla_{C_5} \cL \|$ & 2 &1e-2 & 90.2 & 78.3 \\ \midrule
(i) $\|\nabla_{\boldsymbol{x}} \cL\| + \|\nabla_{C_2} \cL \|$ &2&1e-2& 92.5 & \textbf{80.3} \\
(j) $\|\nabla_{\boldsymbol{x}} \cL\| + \sum_{k=2}^{3} \|\nabla_{C_k} \cL \|$ &2&1e-2& \textbf{92.7} & 80.1\\
(k) $\|\nabla_{\boldsymbol{x}} \cL\| + \sum_{k=2}^{4} \|\nabla_{C_k} \cL \|$ &2&1e-2& {92.5} & 80.1 \\
(l) $\|\nabla_{\boldsymbol{x}} \cL\| + \sum_{k=2}^{5} \|\nabla_{C_k} \cL \|$ &2&1e-2& {92.4} & 79.8 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Comparison with the state-of-the-art methods}
\begin{table*}[htbp]
\centering
\caption{Comparison with the state-of-the-art results on four large benchmarks.
The CMC scores (\%) at rank-1 and mAP are listed.
To show the effectiveness of each component, we gradually stack them to the ResNet50 baseline.}
\scalebox{1.15}{
\begin{tabular}{lrrrrrrrr}
\toprule
& \multicolumn{2}{c}{Market1501} & \multicolumn{2}{c}{Detected CUHK03} & \multicolumn{2}{c}{DukeReID} & \multicolumn{2}{c}{MSMT17} \\
& \multicolumn{1}{l}{Rank-1} & \multicolumn{1}{l}{mAP} & \multicolumn{1}{l}{Rank-1} & \multicolumn{1}{l}{mAP} & \multicolumn{1}{l}{Rank-1} & \multicolumn{1}{l}{mAP} & \multicolumn{1}{l}{Rank-1} & \multicolumn{1}{l}{mAP} \\ \midrule
PAN & 82.2 & 63.3 & 36.3 & 34 & 71.6 & 51.5 & & \\
SVDNet \cite{sun2017svdnet} & 82.3 & 62.1 & 57.1 & 54.2 & 76.7 & 56.8 & & \\
PDC \cite{su2017pose} & 84.1 & 63.4 & & & & & 58 & 29.7 \\
HAP2S \cite{yu2018hard} & 84.6 & 69.4 & & & 75.9 & 60.6 & & \\
DPFL \cite{chen2017person} & 88.6 & 72.6 & 40.7 & 37 & 79.2 & 60.6 & & \\
DaRe\cite{huang2017multi} & 86.4 & 69.3 & 55.1 & 51.3 & 74.5 & 56.3 & & \\
DaRe+RE & 89 & 76 & 63.3 & 59 & 80.2 & 64.5 & & \\
PNGAN \cite{qian2018pose} & 89.4 & 72.6 & & & 73.6 & 53.2 & & \\
GLAD \cite{wei2017glad} & 89.9 & 73.9 & & & & & 61.4 & 34 \\
KPM \cite{shen2018end} & 90.1 & 75.3 & & & 80.3 & 63.2 & & \\
MLFN \cite{chang2018multi} & 90 & 74.3 & 52.8 & 47.8 & 81 & 62.8 & & \\
DuATM \cite{si2018dual} & 91.4 & 76.6 & & & 81.8 & 64.6 & & \\
Bilinear \cite{suh2018part} & 91.7 & 79.6 & & & 84.4 & 69.3 & & \\
G2G \cite{shen2018deep} & 92.7 & 82.5 & & & 80.7 & 66.4 & & \\
DeepCRF \cite{chen2018group} & 93.5 & 81.6 & & & 84.9 & 69.5 & & \\
PCB+RPP \cite{sun2018beyond} & 93.8 & 81.6 & 63.7 & 57.5 & 83.3 & 69.2 & & \\
SGGNN \cite{shen2018person} & 92.3 & 82.8 & & & 81.1 & 68.2 & & \\
Auto-ReID+RE \cite{quan2019autoreid} & \textbf{95.4} & \textbf{94.2} & \textbf{73.3} & \textbf{69.3} & \textbf{91.4} & \textbf{89.2} & \textbf{78.2} & \textbf{52.5} \\
Mancs \cite{wang2018mancs} & 93.1 & 82.3 & 65.5 & 60.5 & 84.9 & 71.8 & & \\
OSNet \cite{zhou2019omni} & 93.6 & 81 & 57.1 & 54.2 & 84.7 & 68.6 & 71 & 43.5 \\
OSNet+RE & 94.8 & 84.9 & 72.3 & 67.8 & 88.6 & 73.5 & 78.7 & 52.9 \\ \midrule
ResNet50 & 90.4 & 78.3 & 63.4 & 58.3 & 82.9 & 70.6 & 6.7 & 38.9 \\
+CSIP & 91.5 & 79.3 & 65.6 & 60.4 & 84.1 & 74.5 & 69.3 & 40.2 \\
+MSFF & 92.3 & 81.9 & 65.1 & 61.2 & 85.7 & 75.4 & 70.1 & 40.8 \\
+MSGR & \textbf{93.7} & \textbf{83.6} & \textbf{67.7} & \textbf{63.3} & \textbf{86.8} & \textbf{76.9} & \textbf{71.3} & \textbf{45.3} \\
+RE & 94.4 & 89.5 & 71.3 & 68.6 & 89.6 & 89 & 78.4 & 51.7 \\ \bottomrule
\end{tabular}
}
\label{tab:mktabl} \label{tab:cuhk03} \label{tab:msmt} \label{tab:duke}
\end{table*}
On the Market1501 dataset (Tab.~\ref{tab:mktabl}), by gradually stacking each component onto the ResNet50 model,
we improve rank-1 from 90.4\% to 93.7\%, outperforming most of the state-of-the-art methods, and reach 94.4\% when applying re-ranking.
It is worth mentioning that we build our model upon a strong baseline and improve the performance steadily without exhaustive hyper-parameter tuning.
On the CUHK03 dataset (Tab.~\ref{tab:cuhk03}), due to misalignment, the performance on detected CUHK03 is worse than on labeled CUHK03.
Compared to Market1501, the improvement brought by MSFF is relatively small, but further applying MSGR improves rank-1 on labeled CUHK03 to 71.6\%.
We presume that the scarcity of training data under the new CUHK03 protocol causes the optimization difficulty of the MSFF block,
while MSGR is an appropriate regularizer that helps smooth the loss landscape and reduces the optimization difficulty.
On DukeMTMC-ReID (Tab.~\ref{tab:duke}), the effect of each method is similar to that on Market1501.
MSFL+MSGR achieves a rank-1 of 86.8\%, superior to most state-of-the-art models,
and even as competitive as Auto-ReID, which applies NAS to automatically discover the best architecture for the ReID task.
On MSMT17 (Tab.~\ref{tab:msmt}), despite the complex scenes and diverse backgrounds, MSFL+MSGR improves rank-1 to 71.3\%.
\section{Conclusion}
Person ReID faces many challenges, which we summarize in two aspects,
\ie, the dramatic scale variance and the unreliability of each clue for distinguishing persons.
To address these problems, this paper proposes
a model with the potential to discover global-scale and multiple local-scale clues,
and
introduces adversarial learning to encourage the model to mine subtle clues while determining the ID-relatedness of each clue via the multi-scale gradient regularizer.
It is worth mentioning that the architecture and regularizer designed from our experience may not be optimal.
In future work, we may apply the ideas of AutoML to discover superior model architectures and loss functions tailored to the Person ReID task.
\appendices
\section*{Acknowledgment}
This work was partially supported by
\section{Solving $\epsilon$ in Eq.~\ref{eq:taylor}} \label{sec:solving}
We need to solve for $\boldsymbol{\epsilon}$ in Eq.~\ref{eq:taylor}, which is equivalent to
\begin{equation}
\max _{\epsilon} \nabla_{\boldsymbol{x}} \mathcal{L}^{T} \boldsymbol{\epsilon} \quad \text { s.t. } \quad\|\boldsymbol{\epsilon}\|_{p} \leq \sigma
\end{equation}
The optimum is achieved when $\|\boldsymbol{\epsilon}\|_{p}=\sigma$; otherwise, we could increase the norm of $\boldsymbol{\epsilon}$ and thereby increase the objective value. Thus, we are set to solve
\begin{equation}
\max _{\epsilon} \nabla_{\boldsymbol{x}} \mathcal{L}^{T} \boldsymbol{\epsilon} \quad \text { s.t. } \quad\|\boldsymbol{\epsilon}\|_{p}=\sigma
\end{equation}
Introducing a Lagrangian multiplier $\lambda$, we have
\begin{align}
\nabla_\epsilon \nabla_{\boldsymbol{x}} \mathcal{L}^{T} \boldsymbol{\epsilon} &=\lambda \nabla_\epsilon \| \epsilon \|_p \\
\nabla_{\boldsymbol{x}} \mathcal{L} &=\lambda \frac{\boldsymbol{\epsilon}^{p-1}}{p\left(\sum_{i} \boldsymbol{\epsilon}_{i}^{p}\right)^{1-\frac{1}{p}}} \\
\nabla_{\boldsymbol{x}} \mathcal{L} &=\frac{\lambda}{p}\left(\frac{\boldsymbol{\epsilon}}{\sigma}\right)^{p-1} \label{eq:comb1} \\
|\nabla_{\boldsymbol{x}} \mathcal{L}|^{\frac{p}{p-1}} &=\left(\frac{\lambda}{p}\right)^{\frac{p}{p-1}}\left(\frac{\boldsymbol{\epsilon}}{\sigma}\right)^{p}
\end{align}
Summing the elements of the vector on both sides, we have
\begin{align}
\|\nabla \mathcal{L}\|_{p^{*}}^{p^{*}} &=\left(\frac{\lambda}{p}\right)^{p^{*}} * 1 \\
\left(\frac{\lambda}{p}\right) &=\|\nabla \mathcal{L}\|_{p^{*}} \label{eq:comb2}
\end{align}
Combining Eq.~\ref{eq:comb1} and Eq.~\ref{eq:comb2}, it is easy to see that
\begin{equation}
\epsilon=\sigma \operatorname{sign}(\nabla \mathcal{L})\left(\frac{|\nabla \mathcal{L}|}{\|\nabla \mathcal{L}\|_{p^{*}}}\right)^{\frac{1}{p-1}}
\end{equation}
{
\bibliographystyle{ieee}
\section{Introduction}
In bouncing cosmological models, the present expansion of the Universe is assumed to have followed a preceding collapsing phase. The cosmological bounce which bridges these two stages might be caused by either classical or quantum effects \cite{brand}.
The present expansion phase of the Universe might also eventually recollapse to a ``big crunch". A cyclic Universe might result if
this is subsequently followed by a bounce.
In such a bounce scenario, it is of interest to ask what happens to any population of black holes present. In our Milky Way galaxy, there are billions of stellar-mass black holes, and it is believed that there is also a multitude
of supermassive black holes at the centers of other galaxies. It is also conceivable that there are ``primordial'' black holes, which formed in the early Universe, which might potentially contribute to any dark matter present \cite{cks}.
In a complete cosmological collapse, it is
expected that all black holes will merge once the Universe becomes sufficiently compressed. This would also occur in the cosmological bounce model if the matter density at the bounce is high enough. In this merging, if the black holes are distributed randomly with a range of possible masses,
it might be expected that
merging would occur in a hierarchical manner, in which increasingly larger horizons form around multiple black holes, whereby the characteristic mass of the black holes increases.
In time the filling factor, $F$, of the black holes (which represents their ratio of size to spacing) will likely reach unity, with the horizons of the individual black holes disappearing completely. Alternatively, in the mathematical idealisation of an exactly regular lattice distribution of black holes (with identical masses), it might be expected that this merger would occur instantaneously at a particular epoch without any preceding hierarchical merging. In either case, there would be a transition in
which in some appropriate sense the whole Universe turns into a black hole.
In Carr and Coley \cite{cc} (henceforward referred to here as $CC$) the merging of a population of black holes at a cosmological bounce was discussed. In $CC$ it was
assumed that all of the black holes have the same mass $M$ and the volume filling factor $F$ is of order $(R_S/L)^3$, where $R_S= 2GM/c^2$ is the Schwarzschild ``radius''
or, rather, the Schwarzschild characteristic ``scale'' (where we will generally set $G=c=1$ hereafter and $L$ is the separation of the black holes).
Indeed, $CC$ determined when the value of $F$ for black holes formed in the previous phase (referred to as `pre-crunch black holes' (PCBHs))
would continue to be less than unity at the cosmological bounce, thereby guaranteeing their persistence into the subsequent phase of expansion.
$CC$ determined the region in which
the PCBHs that presently contribute to the dark matter survive,
and computed the minimum possible mass of any black holes formed during the actual cosmological bounce itself (referred to as `big crunch black holes' (BCBHs)). Although similar to primordial black holes (PBHs), these black holes would form immediately prior to rather than just after the big bounce/bang.
$CC$ assumed that the Universe bounces at a density $\rho_B$, which may be of order the Planck density
but, in principle, $\rho_B$ could be much less. Now, a
spherical region of mass $M$ forms a black hole
when it falls within its Schwarzschild radius, which corresponds to a density
$\rho_{BH} \sim 10^{18} \left({M}/{M_{\odot}}\right)^{-2} \mathrm{g \, cm}^{-3}$.
A BCBH (which forms during the bounce) has a density $\rho_{BH}$ that is
necessarily greater than the cosmological density at the bounce, and we
obtain a {lower} limit on the black hole mass,
$M_{\mathrm{min}} \sim \left({\rho_P}/{\rho_B} \right)^{1/2} M_P$,
where $M_P$ is the Planck mass.
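This mass limit can be checked numerically; the constants below are rough order-of-magnitude cgs figures chosen for illustration, consistent with the heuristic formulas in the text:

```python
import math

# Rough order-of-magnitude constants in cgs units, for illustration only.
RHO_P = 5.2e93    # Planck density, g/cm^3
M_P = 2.2e-5      # Planck mass, g
M_SUN = 2.0e33    # solar mass, g

def m_min_grams(rho_bounce):
    """Minimum BCBH mass, M_min ~ (rho_P / rho_B)^(1/2) * M_P."""
    return math.sqrt(RHO_P / rho_bounce) * M_P

def rho_bh(m_grams):
    """Density of a black hole of mass M: ~1e18 (M/M_sun)^-2 g/cm^3."""
    return 1e18 * (m_grams / M_SUN) ** -2

# Consistency check: for a bounce at rho_B = 1e18 g/cm^3, both expressions
# agree that the minimum mass is of order a solar mass.
m = m_min_grams(1e18)
```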
This limit also determines when pre-existing PCBHs (which form before the bounce)
might lose their individual identities by merging with other PCBHs.
If $F_B$ represents the fraction of the Universe's density in PCBHs at the instant of the bounce, then the PCBHs merge when the average distance between the black holes at the bounce, $R_{\mathrm{sep}}$,
becomes less than the size of each individual black hole,
$R_{\mathrm{sep}} \lesssim 2GM/c^2$. The merging condition
$M \gtrsim F_B^{-1/2} M_{\mathrm{min}}$
can then be interpreted as a minimum on the fraction of the Universe contained in the black holes.
Therefore, a range of black hole masses in which BCBHs can form but PCBHs do not merge is obtained.
The fraction $F_B$ is constant in a matter-dominated epoch, but decreases during collapse in a radiation-dominated epoch; however,
the fraction of the material content of the Universe in black holes can still be computed ($CC$).
There is a variety of constraints on the quantity and mass of non-evaporating black holes (i.e., those with mass larger than $M \sim 10^{15}$g) from
lensing, dynamics and astrophysics
~\cite{cks}. Constraints on evaporating black holes
are not relevant for PCBHs, but they might be for BCBHs. In
particular, an important dynamical constraint is obtained from large-scale structure formation due to the fluctuations in the black hole number density \cite{Meszaros}.
The analysis of $CC$ may be sufficient in some situations.
However, it cannot be reliable in general, since each black hole no longer has the Schwarzschild radius when the black holes get sufficiently close together (and it is
certainly not valid as $F \rightarrow 1$).
Indeed, the very
definition of a black hole in a cosmological background is questionable.
Since the notion of an event horizon cannot be used (e.g., there may be no spatial infinity), often the behavior of the so-called apparent horizon \cite{AshtekarKrishnan}
is studied. But, in the cosmological context, the apparent horizon of a black hole
can be different from the Schwarzschild value. As a simple example, if the background universe has a non-vanishing cosmological constant, $\Lambda$, the apparent horizon will depend on both $M$ and $\Lambda$ and is smaller than $R_S$, even when assuming spherical symmetry. In addition, the
assumption of spherical symmetry in the case of the apparent horizon
is violated when
the filling factor approaches $1$. Indeed, in the idealisation that the black holes
are configured in a perfect lattice, they tend to ``cubes'' rather than ``spheres''.
The considerations of
$CC$ were essentially heuristic and not based on any rigorous computations. In a subsequent paper \cite{bounce} (henceforward referred to as $CCC$ here),
utilizing earlier work of Clifton {\it et al.}~\cite{clifton2},
exact solutions describing a regular black hole lattice in a cosmological background dominated by a dynamical scalar field at the bounce were derived.
In particular, the question of whether black holes can persist in a Universe that collapses and then subsequently bounces into a new phase of expansion
was studied
by investigating the maximal number of black holes that can keep their individual identity through the bounce \cite{bounce}.
\newpage
\subsection{Black hole lattice models}
A set of models of interest are the black hole lattice (BHL) models \cite{bruneton,num0} in which the matter content of the universe is
divided into discrete cells, each of which is then modeled as a black hole.
The resulting spacetimes are explicitly highly inhomogeneous on small scales,
but they are spatially homogeneous and isotropic on large scales \cite{DurkClif}.
These spacetime models can be used to characterize
the coarse-graining of spacetime \cite{rvdh}, and do not necessarily behave dynamically in precisely the same way as the {\it exactly} spatially homogeneous and isotropic
Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) solutions.
A property of these particular solutions is that the number of black holes ($N$),
which are often assumed to be spaced regularly, is finite and reasonably small. For example, in a closed model, $N$ has the possible values $5, 8, 16, 24, 120, 600$.
(However, $N$ could be larger if the assumption of uniformity is dropped and could be arbitrarily large in a flat universe).
The associated cell size required to model the Universe at the present time is consequently $N^{-1/3}R_{PH} \gtrsim 600$~Mpc.
This, of course, does not model the actual situation in our Universe, in which the total number of supermassive black holes is perhaps of the order of $10^{10}$ (e.g., one per galaxy) and stellar-mass black holes might number of the order of $10^{20}$.
Recently, the effects that mass-clustering has on the scale and characteristics of cosmology within the class of BHL models
were studied \cite{DurkClif}.
Although these models are too simplistic to model the complex structures in our actual Universe, they do equip us with a means to investigate some aspects of coarse-graining and the averaged expansion \cite{rvdh}.
The BHL models were generalized
to include clusters of masses, and consequently allow for the dynamical effects of cosmological structures to be investigated in a well-defined and non-perturbative manner. The dependence of the resulting cosmology on the total amount of mass in each of the clusters, and on the energy of the interactions within and between clusters,
was studied.
The positions of the shared horizons that surround groups of sufficiently close black holes were
investigated.
It was found that a common apparent horizon around the clustered black holes would
appear when the black holes become sufficiently close together (in addition to the apparent horizon of each
individual black hole).
\subsubsection{CCC BHL bounce model}
$CCC$ \cite{bounce} obtained a class of exact ``time-evolving'' solutions to Einstein's constraint equations, modeling the spacetime geometry at the surface of minimum cosmological expansion.
These models contain a regular BHL and a scalar field at the maximal collapse.
The associated initial data can then be utilized to uniquely determine the evolution in both the expansion and collapse directions, and in particular the dynamical evolution of the hyperspherical cosmological models;
consequently they constitute a well-defined class of cosmological models.
The initial data can itself be used to determine the locations of the black hole apparent horizons, and utilized to compute the distance between neighbouring black holes, which then determines the fraction of space filled by black holes at the bounce in terms of the maximal energy density attained there.
These models clearly illustrate the fact that
it is possible for black holes to persist through a cosmological bounce.
In particular, $CCC$ considered hyperspherical cosmological models.
It was determined that the model parameter, $\kappa$, is constrained to lie within a particular range of values if the black holes are to have positive mass and the value of the scalar field is to be sufficiently large to dynamically ensure a bounce.
Black holes in scalar field cosmologies with spatial flatness (and solutions with $V \neq 0$), and
the bounce-merger conditions for persistent black holes through a cosmological bounce in higher-dimensional spaces,
were also considered \cite{bounce}.
Apparently, there are
difficulties producing physical bouncing cosmological models containing non-merging black holes of the type discussed in $CCC$ in spatial dimensions greater than five.
There continues to be interest in seeking time-evolving solutions in models in which a multiple number of distinct black holes persist through a non-time-symmetric bounce.
Returning to the question of the possible maximal fraction of space filled by PCBHs or the number of BCBHs that can form as the minimum of expansion is approached,
$CCC$ were able to accurately compute $F$ for the BHL cosmological models. Indeed, it was found that there exist solutions in which the universe bounces before $F$ reaches unity. Therefore, $CCC$ obtained the important result that {\it there do exist exact solutions in which black holes
persist through a bounce.} In addition, the expression found for $F$ at the bounce is not too dissimilar from the heuristic result found in the $CC$ analysis for $F \ll 1$.
Although the $CCC$ models are not formally valid for large values of $\kappa$,
there is an indication from $CCC$ that as $\kappa$ increases, $F$ approaches but never exceeds unity at the bounce. As $F \rightarrow 1$, the majority of the universe is in the black holes rather than ``in the Friedmann region''. This is not dissimilar to the case of a black hole in a positive curvature Friedmann geometry, in which $M$ approaches zero as the size of the black hole tends to that of the background. Therefore, it is not implausible that the filling factor $F$ can never reach $1$.
One of the primary motivations of the current work is to investigate whether something may prevent the filling factor ever reaching $1$ in the bouncing models under investigation. Indeed,
perhaps the size of the individual black hole becomes smaller than the standard Schwarzschild value or the effective black hole mass approaches zero.
In \cite{bounce} the area method was used to compute the apparent horizon. But this method is not reliable, particularly for larger values of $\kappa$. Therefore, we shall
utilize the notion of a geometric horizon here.
\newpage
\subsection{Geometric horizon}
The event horizon of a black hole spacetime in General Relativity
(GR) is defined as the boundary of the (non-empty) complement of the
causal past of future null infinity (that is, the region from which signals originating in the
interior can never escape). Note that the event horizon cannot be determined locally: the global
behaviour of the spacetime must be known \cite{AshtekarKrishnan}.
However, to examine the interaction of realistic black holes with their environment,
in which the black holes are dynamical, a local characterization that
does not necessarily depend on the event horizon is needed.
As a result, when considering time-dependent situations
rather than
stationary black holes, an event horizon is often replaced in practical applications by an apparent horizon, which is defined as the locus of the vanishing expansion of a null geodesic
congruence starting within a trapped surface $S$ (with a spherical topology) \cite{RPenrose}.
Indeed, in numerical studies of collapse,
the apparent horizon is a more
practical surface to track \cite{JThornburg}. A
related concept is that of marginally outer trapped surfaces (MOTS),
which are defined to be two-dimensional (2D) surfaces for which
the expansion of the outgoing null vector normal to the
surfaces is zero. Assuming a smooth dynamical evolution, these
2D surfaces can be combined to construct a 3D surface
known as a marginally trapped tube \cite{BoothFairhurst}.
Unlike the event horizon, the apparent horizon is quasi-local. But it is
intrinsically foliation-dependent, which can lead to ambiguities
and implementation difficulties.
As a result, it is of great import to define other surfaces that are characterized in an
invariant manner.
In the case of a stationary black hole spacetime, knowledge of the Killing vector field
acting as the null generator on the event horizon then allows the horizon to be characterized
locally. This implies that there are
scalar polynomial (curvature) invariants (SPIs) which are zero
on the stationary horizon \cite{PLB}.
In particular, a special
set of SPIs vanish on the horizon since the curvature tensor and its covariant derivatives must be
of type {\bf II/D} relative to the alignment classification \cite{CMPP} there.
It has subsequently been conjectured that a dynamical black hole admits a quasi-local hyper-surface,
called a geometric horizon (GH), on which the curvature tensor (and its first covariant derivative) are
algebraically special \cite{PLB}.
Indeed, there are examples of time-evolving black hole spacetimes in which a
GH is defined (for example, in the case of
spherical symmetry).
We shall therefore utilize the concept of a GH, which can
be characterized invariantly by the vanishing of a defining set of scalar curvature invariants \cite{PLB},
as an alternative method to using an apparent horizon. In particular, we shall use the GH to characterize the horizon in the BHL models under study here.
\newpage
\subsection{Overview}
We wish to study whether black holes can persist in a collapsing universe that subsequently bounces into a new expansionary phase.
In particular, we are interested in the black hole
number density as the minimum of expansion is approached and whether it is
possible that
the filling factor may be prevented from ever reaching unity. For example, we wish to pursue the
possibility that the size of each individual black hole is smaller than the usual Schwarzschild value.
Therefore, we seek an alternative method for determining the black hole horizon,
and we shall utilize the notion of a GH, which can be simply determined by curvature invariants. This necessitates
looking for time-evolving multiple black hole solutions.
We shall follow
\cite{bounce} and consider a scalar field cosmology to model the matter content of such a universe close to the bounce, and seek solutions representing a dynamical network of black holes.
While the phantom
scalar field bounce model considered here (in which the scalar potential is set to zero -- see later) is not perhaps the most physically realistic
matter model, it does provide exact bounce solutions in which persistence occurs and
which can be primarily studied analytically (hence providing some basic underlying
understanding). More realistic bounce models can be considered,
based on an effective field theory (which, e.g., does not suffer from a scalar field instability problem due to a violation of the energy conditions), including models which include a cosmological
constant or a non-minimally coupled scalar field \cite{Bronnikov}. Such models will still lead to a well-posed initial value problem (which can be defined in an analogous manner to that
done here) that will lead to bouncing cosmologies with similar persistence properties,
but these models can only be studied numerically.
In more detail,
we shall begin by presenting and reviewing the class of
black hole lattice models in a hyperspherical cosmology.
We derive exact time-evolving solutions
of instantaneously-static models, by employing
perturbative solutions of the constraint equation
that can then be utilized to develop exact 4-dimensional (4D) dynamical solutions of the Einstein field equations (EFEs). Although
these solutions are very idealistic, they do model a possible realistic
scenario in which a number of distinct black holes persist through a cosmological
bounce.
We focus on the particular case of
eight regularly-spaced black holes with equal masses, and we consider the relevant case of when the model parameter (see section 3) $\kappa > 1$.
We then compute the invariants for these exact solutions necessary to determine the conditions for a GH explicitly. We conclude with a summary and a
discussion of the results.
\newpage
\section{BHL models: Constraint and evolution Eqs.}
We will display the EFEs for a radiation fluid and a scalar field, appropriate for investigating the spacetime geometry in the vicinity of a bounce.
We will first present the constraint and evolution equations in a general form and discuss how a bouncing cosmological model with black holes can be determined as an initial value problem, where the appropriate initial data is given at the bounce.
Similarly to normal early-universe FLRW cosmology, the scalar field is anticipated to dynamically dominate at early times but to subsequently become negligible in comparison to the energy-momentum in radiation and dust at later times.
This implies that, when a regular lattice of black holes together with a scalar field is considered, the
subsequent late-time dynamics of the spacetime should approach that of one simply consisting of an array of black holes. This is pertinent because there is a time-symmetric initial value problem for a BHL
in the case of a scalar field with a negative coupling constant in which
the black holes
move relative to each other as the space evolves away from the initial time-symmetric surface. Indeed, due to the time symmetry on the initial surface, the constraint equations can be formulated in a linear form, thereby allowing for the initial data for an arbitrary number of black holes to be constructed by addition.
This initial data problem was studied in \cite{clifton2,DurkClif}. In general, the models can be investigated by separating the governing EFEs into sets of constraint and evolution equations relative to a $1 + 3$ decomposition. The intrinsic geometry and the extrinsic curvature of an initial hyper-surface then constitute sufficient initial data to ensure a unique evolution. Therefore, if the constraint equations can be solved at some initial time, then there is sufficient data for the geometry of the complete spacetime to be determined.
In the next subsection we closely follow the approach of \cite{bounce} and we shall repeat the relevant equations therein, in order for the current paper to
be self-contained.
\subsection{Field equations and time-symmetric initial data}
The EFEs for radiation and a scalar field are given by:
\begin{equation}
\label{fe}
G_{ab} = \mu T^{\varphi}_{ab} \, + \, T^{\gamma}_{ab} \, ,
\end{equation}
where $\mu$ is a negative coupling constant (necessary to produce a bounce) and units are selected
so that $8 \pi G = c=1$. $T^{\varphi}_{ab}$ is the energy-momentum tensor of the scalar field:
\begin{equation}
T^{\varphi}_{ab} = \nabla_a \varphi \nabla_b \varphi -\frac{1}{2} g_{ab} \nabla_c \varphi \nabla^c \varphi - g_{ab} V \, ,
\end{equation}
where $V \equiv V(\varphi)$ is the self interaction potential of the scalar field. $T^{\gamma}_{ab}$ is the energy-momentum tensor of a separately conserved radiative matter field. The contracted second Bianchi identity then implies the evolution Eq. for the scalar field:
\begin{equation}
\label{boxvarphi}
\nabla^a \nabla_a \varphi = \frac{dV}{d\varphi} \, ,
\end{equation}
where $\nabla^a \nabla_a$ is the covariant d'Alembertian operator.
Choosing a time-like vector field, $u^a$, we can then decompose $T^{\varphi}_{ab}$ into an effective energy density, $\rho^{\varphi}$, an isotropic pressure, $p^{\varphi}$, and momentum density, $j_a^{\varphi}$:
\begin{eqnarray}
\label{rho}
\rho^{\varphi} &\equiv& u^a u^b T^{\varphi}_{ab} = \frac{1}{2} \dot{\varphi}^2 + \frac{1}{2} D^a \varphi D_a \varphi +V(\varphi)\, \\
\label{p} p^{\varphi} &\equiv& \frac{1}{3} h^{ab} T^{\varphi}_{ab} = \frac{1}{2} \dot{\varphi}^2 - \frac{1}{6} D^a \varphi D_a \varphi -V(\varphi) \,\\
\label{j}
j_a^{\varphi} &\equiv& - h_a^{\phantom{a} b} u^c T^{\varphi}_{bc} = - \dot{\varphi} \, D_a \varphi \, ,
\end{eqnarray}
where an ``over-dot'' denotes $u^a \nabla_a$ and $h_{ab} \equiv g_{ab} +u_a u_b$
has been introduced, so that $D_a \equiv h_a^{\phantom{a} b} \nabla_b$ denotes the projected covariant derivative orthogonal to $u^a$. In addition, the effective energy density and effective isotropic pressure of a radiation perfect fluid are given by:
\begin{eqnarray}
\rho^{\gamma} &\equiv& u^a u^b T^{\gamma}_{ab} \, , \qquad p^{\gamma} \equiv \frac{1}{3} h^{ab} T^{\gamma}_{ab} \, ,
\end{eqnarray}
where $p^{\gamma} = \frac{1}{3} \rho^{\gamma}$ and $j_a^{\gamma} \equiv - h_a^{\phantom{a} b} u^c T^{\gamma}_{bc} = 0$.
From the embedding Eqs. and the EFEs (\ref{fe}), the Hamiltonian and momentum constraint Eqs. for the spacetime can be written as:
\begin{eqnarray}
\label{hc}
&&\mathcal{R} + K^2 - K_{ab} K^{ab} = 2 u^a u^b G_{ab} = 2 \mu \rho^{\varphi} + 2 \rho^{\gamma} \, \\
\label{mc}
&&D_b K^b_{\phantom{b} a} - D_a K = - h^b_{\phantom{b} a} u^c G_{bc} = \mu j_a^{\varphi} \, ,
\end{eqnarray}
where $K_{ab} \equiv - h_a^{\phantom{a} c} h_b^{\phantom{b} d} \nabla_{(c} u_{d)}$ is the extrinsic curvature of the initial hyper-surface, $K \equiv K^a_{\phantom{a} a}$ and
(assuming an irrotational $u^a$)
$\mathcal{R}$ denotes the Ricci curvature scalar of this intrinsic 3-surface. These Eqs. must necessarily be satisfied on this initial hyper-surface in order for there to be a solution of the EFEs.
We can now decompose the conservation Eqs. for the scalar field and radiation.
The constraint and evolution Eqs. for the radiation fluid are, respectively,
\begin{eqnarray}
\label{radcon}
D_a \rho^{\gamma} + 4 \rho^{\gamma} \dot{u}_a =0 \, , \qquad
\dot{\rho}^{\gamma} = \frac{4}{3} K \rho^{\gamma}
\end{eqnarray}
where $u^a$ has been chosen to be comoving with the radiation fluid. Defining
\begin{equation}
\label{varphivar}
\psi_a \equiv D_a \varphi \, ,
\end{equation}
the propagation Eq. (\ref{boxvarphi}) for the scalar field can then be decomposed into evolution and constraint Eqs. relative to $u^a$:
\begin{eqnarray}
\label{evo1}
\dot{\Pi} &=& D_a \psi^a+ K \Pi + \dot{u}_a \psi^a - V^{\prime}(\varphi) \, \\
\dot{\psi}_a &=& D_a \Pi + \dot{u}_a \Pi + u_a \dot{u}^b \psi_b + K_a^{\phantom{a} b}\psi_b \, \\
\dot{\varphi} &=& \Pi \, ,
\label{evo3}
\end{eqnarray}
where the last Eq. serves to define $\Pi$. Consequently, Eq.~(\ref{varphivar}) is the one remaining constraint Eq., while Eqs.~(\ref{evo1})-(\ref{evo3}) provide the evolution Eqs. for $\Pi$, $\psi_a$ and $\varphi$. We have thus
completed an appropriate initial value formulation.
We now investigate the time-symmetric case in which the initial data satisfies $K_{ab}=0$. From Eqs.~(\ref{j}) and (\ref{mc}), we immediately obtain
\begin{equation}
\dot{\varphi} D_a \varphi =0 \, ,
\end{equation}
and hence either $\dot{\varphi}=0$ or $D_a \varphi=0$ on the initial hyper-surface. If both are satisfied simultaneously, then the evolution Eqs. for the scalar field imply that $\varphi$ vanishes everywhere, which corresponds to the trivial case in which there is, in fact, no scalar field present. The case in which $\dot{\varphi}$ vanishes was studied in \cite{bounce}.
We shall investigate
the remaining case (of most interest for cosmology) in which $D_a \varphi=0$ henceforward in this paper.
If $D_a \varphi =0$, it follows that on the initial hyper-surface $\varphi$ is constant, and from Eqs.~(\ref{rho}) and (\ref{p}) we then obtain $\rho^{\varphi} = \frac{1}{2} \Pi^2 +V$ and $p^{\varphi} = \frac{1}{2} \Pi^2-V$. The time-symmetry on the initial hyper-surface then implies that $\dot{\rho}^{\varphi}= \Pi (\dot{\Pi} +V^{\prime})=0$ and $\dot{p}^{\varphi} = \Pi (\dot{\Pi} -V^{\prime})=0$. Since a non-vanishing $\Pi$ is required in order for a non-vacuum solution to exist, we then have that $\dot{\Pi}=V^{\prime}=0$ (consistent with Eq. (\ref{evo1})). Therefore, in this case we obtain a solution in which initially $\varphi$ is spatially homogeneous, but $\dot{\varphi}$ is spatially inhomogeneous and $\ddot{\varphi}$ is zero. It then follows that $V$ is constant and of the form of a cosmological constant.
To produce a solution to the Einstein-scalar equations, it
remains for the Hamiltonian constraint to be satisfied. When $D_a \varphi = V =0$,
Eq. (\ref{hc}) can be written as:
\begin{equation}
\label{constraintR}
\mathcal{R} = \mu \Pi^2 + 2 \rho^{\gamma} \, .
\end{equation}
Since $\Pi$ does not appear in the scalar field constraint Eq., it can consequently be freely specified. A solution for the geometry of an initial hyper-surface satisfying Eq.~(\ref{constraintR}) for a given value of $\Pi$ then leads to a complete solution to all of the constraint Eqs. and consequently to the full initial data necessary to determine the entire subsequent evolution.
In order to obtain explicit solutions with constant conformal curvature, we take the line-element for the time-symmetric initial hyper-surface to be:
\begin{equation}
\label{le}
ds^2 = \Omega^4 d\tilde{s}^2 \, ,
\end{equation}
where $d\tilde{s}^2$ is the metric for a 3-space of constant curvature. We have
\begin{equation}
\label{conformal}
\mathcal{R} = \frac{\tilde{\mathcal{R}}}{\Omega^4} - \frac{8}{\Omega^5} \tilde{D}^2 \Omega \, ,
\end{equation}
where $\tilde{\mathcal{R}}$ is the constant Ricci curvature scalar of the conformal space and $\tilde{D}$ is the covariant derivative on it. Combining these Eqs. then yields
\begin{equation}
\label{constraint}
\tilde{D}^2 \Omega - \frac{\tilde{\mathcal{R}}}{8} \Omega= -\frac{(\mu \Pi^2 + 2 \rho^{\gamma})}{8} \Omega^5 \, .
\end{equation}
For a vacuum spacetime, in which $\mu\Pi^2 + 2 \rho^{\gamma}=0$, this Eq. is linear in $\Omega$, and we can seek a multi-black hole solution by superposition \cite{DurkClif}. Eq.~(\ref{constraint}) can be made
linear in $\Omega$
by an appropriate choice for $\Pi$, which can be then exploited to obtain a solution for black holes in a hyperspherical cosmology. From the EFEs, it is clear that a bounce can only be obtained when the
scalar field is massless with a negative energy density
(i.e., when $\mu <0$ and $V=0$).
\newpage
\section{Black holes in a hyperspherical cosmology}
\label{sec:sphere}
We consider a hyperspherical cosmological model, which has a mathematically simpler initial value problem formulation. The conformal line-element in Eq. (\ref{le})
($\tilde{h^0}_{\alpha \beta}$) is taken to be:
\begin{equation}
\label{hs}
d\tilde{s}^2 = dr^2 + \sin^2 r \left( d\theta^2 + \sin^2 \theta \, d \phi^2 \right) \, ,
\end{equation}
where the Ricci scalar is $\tilde{\mathcal{R}}=6$ and the Hamiltonian constraint (\ref{constraint}) can be written:
\begin{equation}
\tilde{D}^2 \Omega = \left( \frac{3}{4} -\frac{1}{8} (\mu \Pi^2 +2 \rho^{\gamma}) \Omega^4 \right) \Omega \, .
\label{ham}
\end{equation}
We select the value of $\Pi$ on the initial hyper-surface as
\begin{equation}
\label{c}
\Pi^2 = \frac{8}{\mu \Omega^4} \left( \frac{3}{4} - \kappa \right) -\frac{2 \rho^{\gamma}}{\mu} \, ,
\end{equation}
where $\kappa$ is an arbitrary constant (as defined below in
(\ref{helmholtz}), and which constitutes the essential parameter of the BHL and consequently the resulting cosmological model),
so that the constraint Eq. becomes
\begin{equation}
\label{helmholtz}
\tilde{D}^2 \Omega = \kappa \, \Omega \, ,
\end{equation}
where $\kappa > 3/4$ if both $\mu <0$ and $\rho^{\gamma}=0$. This equation is, in fact, the Helmholtz Eq. on a 3-sphere, for which solutions are known. And since it is linear in $\Omega$, multi-black hole solutions can be constructed by superposition. If $\kappa = 3/4$, then the sum of the energy densities of the scalar field and the radiation field is zero on the initial hyper-surface.
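The algebra behind this reduction is straightforward and can be checked directly. The following sketch (a supplementary illustration using sympy, not part of the original derivation) substitutes the choice of $\Pi^2$ in Eq.~(\ref{c}) into the right-hand side of Eq.~(\ref{ham}) and confirms that it collapses to $\kappa \, \Omega$:

```python
import sympy as sp

Omega, kappa, mu, rho_g = sp.symbols('Omega kappa mu rho_gamma')

# The choice of Pi^2 made on the initial hyper-surface (Eq. (c)).
Pi_sq = (8/(mu*Omega**4))*(sp.Rational(3, 4) - kappa) - 2*rho_g/mu

# Right-hand side of the Hamiltonian constraint (Eq. (ham)) with this choice.
rhs = (sp.Rational(3, 4) - sp.Rational(1, 8)*(mu*Pi_sq + 2*rho_g)*Omega**4)*Omega

# The coupling mu and the radiation density drop out entirely.
print(sp.simplify(rhs - kappa*Omega))  # 0
```

Note that both $\mu$ and $\rho^{\gamma}$ cancel identically, which is why $\Pi$ can absorb them into the single parameter $\kappa$.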
\paragraph{Solutions for the conformal factor:}
There are solutions to Eq. (\ref{helmholtz}) of the form:
\begin{equation}
\label{omi}
\Omega_i (r) = \alpha_i \, \frac{\cos (\sqrt{1- \kappa} \, r)}{\sin r} + \gamma_i \, \frac{\sin (\sqrt{1- \kappa} \, r)}{\sin r} \, ,
\end{equation}
where $\alpha_i$ and $\gamma_i$ are constants (and implicitly we assume that $\kappa < 1$ here; the $\kappa > 1$ case is discussed later).
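As a consistency check, one can verify (away from the source at $r=0$) that this profile satisfies Eq.~(\ref{helmholtz}), using the fact that for a purely radial function on the round 3-sphere the Laplacian reduces to $\Omega'' + 2\cot (r)\, \Omega'$. A sympy sketch (the spot-check point and parameter values are arbitrary choices, not from the original analysis):

```python
import sympy as sp

r, kappa, alpha, gamma = sp.symbols('r kappa alpha gamma')
s = sp.sqrt(1 - kappa)

# General single-source profile Omega_i(r) on the 3-sphere.
Omega = (alpha*sp.cos(s*r) + gamma*sp.sin(s*r)) / sp.sin(r)

# Radial Laplacian on the round 3-sphere minus kappa*Omega.
residual = sp.diff(Omega, r, 2) + 2*sp.cot(r)*sp.diff(Omega, r) - kappa*Omega

# Numerical spot-check at a generic point in the kappa < 1 branch.
val = residual.subs({r: sp.Rational(11, 10), kappa: sp.Rational(1, 2),
                     alpha: 1, gamma: sp.Rational(3, 10)}).evalf()
print(abs(val) < 1e-12)  # True
```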
The location of the source is taken as $r=0$,
where these solutions diverge. The geometry is smooth at all other points if the first derivative of $\Omega$ is single-valued. Applying this condition at $r=\pi$ implies that $\gamma_i = - \alpha_i \cot ( \sqrt{1- \kappa}\, \pi)$, and hence
\begin{equation}
\label{cont111}
\Omega_i(r) = \alpha_i \frac{\sin (\sqrt{1-\kappa} \, (\pi-r))}{\sin (\sqrt{1-\kappa} \, \pi) \sin r} \, .
\end{equation}
This Eq. gives the contribution to the conformal factor due to a single point-like source at $r=0$. For $N$ different sources, located at arbitrary positions on the hypersphere, the corresponding solution can be written as
\begin{equation}
\label{cont222}
\Omega (r, \theta, \phi) = \sum_{i=1}^N \alpha_i \frac{\sin (\sqrt{1-\kappa} \, (\pi-r_i))}{\sin (\sqrt{1-\kappa} \, \pi) \sin r_i} \, ,
\end{equation}
where $r_i$ is defined to be the radial coordinate in Eq.~(\ref{hs}) after rotating the hypersphere (and hence the location of the $i$th source is $r_i=0$).
The bare mass of each of the mass sources is obtained by comparison with terms that occur in a time-symmetric slice in the exact Schwarzschild spacetime \cite{DurkClif}.
The {\em{proper mass}} of each individual black hole is consequently obtained by a rotation of the coordinates so that the targeted black hole is centered at $r=0$, and by a
comparison with the leading-order terms in the line element to that of the exact Schwarzschild metric in the limit as $r \rightarrow 0$.
In the limit $r_i \rightarrow 0$, the line-element constructed from the conformal factor of Eq.~(\ref{cont222}) becomes:
\begin{equation}
ds^2 \rightarrow \left( \frac{\alpha_i}{r_i} +A_i \right)^4 \left( dr_i^2 +r_i^2 (d\theta^2 + \sin^2 \theta \, d \phi^2 ) \right) \, ,
\end{equation}
where
\begin{equation}
A_i \equiv - \frac{\alpha_i \sqrt{1-\kappa}}{ \tan (\sqrt{1-\kappa} \, \pi)} + \sum_{j\neq i} \alpha_j \frac{\sin (\sqrt{1-\kappa} \, (\pi-r_{ij}))}{\sin (\sqrt{1-\kappa} \, \pi) \sin r_{ij}} \,
\end{equation}
and $r_{ij}$ is the coordinate distance to the $j$th source from the one located at $r_i=0$.
(In the above we take the limit as $r_{ij} \rightarrow r_i$ for degenerate points; e.g.,
the antipodal point in the 8BH example below.) A new radial coordinate is defined by $r^{\prime}_i \equiv \alpha_i^2 / r_i$, so that, to leading order,
\begin{equation}
ds^2 \rightarrow \left( 1 + \frac{4\alpha_i A_i}{r_i^{\prime}} \right) \left( dr^{\prime 2}_i +r_i^{\prime 2} (d\theta^2 + \sin^2 \theta \, d \phi^2 ) \right) \, .
\end{equation}
When compared with the exact Schwarzschild solution as $r \rightarrow \infty$,
\begin{equation}
\label{schwarz}
ds^2 \rightarrow \left( 1 + \frac{2 m}{r} \right) \left( dr^2 + r^2 (d\theta^2 + \sin^2 \theta d \phi^2) \right) \, ,
\end{equation}
we then obtain the proper mass of the $i$th source:
\begin{equation}
m_i = 2 \alpha_i A_i = - \frac{2 \alpha_i^2 \sqrt{1-\kappa}}{ \tan (\sqrt{1-\kappa} \pi)} + 2 \sum_{j\neq i} \alpha_i \alpha_j \frac{\sin (\sqrt{1-\kappa} \, (\pi-r_{ij}))}{\sin (\sqrt{1-\kappa} \, \pi) \sin r_{ij}} \, .
\end{equation}
We remark that the mass of each individual source depends on each $\alpha_j$ and the position of every other source. This mass will be assumed to be positive, which constrains the value of the constant parameter $\kappa$.
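The self-interaction term in $A_i$ (and hence in $m_i$) can be recovered by expanding the single-source conformal factor of Eq.~(\ref{cont111}) about $r=0$: the $1/r$ coefficient gives $\alpha_i$ and the constant term gives $-\alpha_i \sqrt{1-\kappa}/\tan(\sqrt{1-\kappa}\,\pi)$. A sympy sketch of this expansion (our own check; the parameter values in the numerical comparison are arbitrary):

```python
import sympy as sp

r, kappa, alpha = sp.symbols('r kappa alpha', positive=True)
s = sp.sqrt(1 - kappa)

# Single-source conformal factor (Eq. (cont111)).
Omega_i = alpha * sp.sin(s*(sp.pi - r)) / (sp.sin(s*sp.pi) * sp.sin(r))

# Laurent expansion about r = 0, keeping the 1/r and constant terms.
expansion = sp.series(Omega_i, r, 0, 1).removeO()
target = alpha/r - alpha*s/sp.tan(s*sp.pi)

# The two expressions agree numerically at a generic point.
val = (expansion - target).subs({kappa: sp.Rational(1, 2), alpha: 1,
                                 r: sp.Rational(1, 10)}).evalf()
print(abs(val) < 1e-12)  # True
```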
In the above, we note that $r_{j}$ is defined by
\begin{equation} \label{cosrj}
\cos r_j = \cos r_{Ij} \cos r + a_j\sin r_{Ij} \sin r,
\end{equation}
for $j \neq I$, where the coordinate $r$ measures the radial distance from the $I$th black hole and $r_{Ij}$ is the distance from this black hole to the $j$th black hole. (Note that, close to $r=0$, which is centred on the $I$th black hole, we can expand in powers of $r$.) In addition, $a_k$ is defined by
\begin{equation} \label{ak}
a_k \equiv \left[ \cos \theta_k \cos \theta - \cos\left(\phi_k-\phi\right)\sin \theta_k \sin \theta\right],
\end{equation}
and $\left(r_k, \theta_k, \phi_k \right)$ indicates the location of the $k$th black hole.
\newpage
\subsection{Eight regularly-spaced black hole masses}
We next study a specific configuration of masses. We consider eight identical-mass, equally spaced black hole (8BH) sources ($BH_1 - BH_8$) on a 3-sphere, whose positions are displayed in Table \ref{table1}. In this table, ($r$, $\theta$, $\phi$) are hyperspherical polar coordinates on the 3-sphere.
In particular, we note that
$\frac{\partial r_{1}}{\partial{r}} = - \frac{\partial r_{2}}{\partial{r}} =1$,
$\frac{\partial r_{1}}{\partial{\Phi}} = \frac{\partial r_{2}}{\partial{\Phi}} =
0$ and
$\frac{\partial r_{\ell}}{\partial{\chi}} \equiv -S\,[a_{\ell} \cos(r), \frac{\partial a_{\ell}}{\partial{\Phi}} \sin(r)]$, where $\chi = \{r, \Phi\}$, $\Phi = \{\theta, \phi\}$, $\ell = 3,\dots,8$, and $S(r) \equiv (1-a_{\ell}^2 \sin^2 (r))^{-1/2}$, so that
$\frac{\partial r_{\bar{\ell}+1}}{\partial{\chi}}=-\frac{\partial r_{\bar{\ell}}}{\partial{\chi}}$ for $\bar{\ell} = 3,5,7$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\bf{BH} & \bf{($r_k$, $\theta_k$, $\phi_k$)} & $r_{1j}$ & $r_i$ & $a_{\ell}$ \\
\hline
$1$ & $\left(0, \frac{\pi}{2}, \frac{\pi}{2}\right)$ & -- & $r$ & -- \\
$2$ & $\left(\pi, \frac{\pi}{2}, \frac{\pi}{2}\right)$ & $\pi$ & $\pi - r$ & -- \\
$3$ & $\left( \frac{\pi}{2}, 0, \frac{\pi}{2}\right)$ & $\pi/2$ & $\cos(r_{\ell}) = a_{\ell} \sin(r)$ & $\cos\theta$ \\
$4$ & $\left( \frac{\pi}{2}, \pi, \frac{\pi}{2}\right)$ & $\pi/2$ & $"$ & $-a_3$ \\
$5$ & $\left( \frac{\pi}{2}, \frac{\pi}{2}, 0\right)$ & $\pi/2$ & $"$ & $-\sin\theta \cos\phi$ \\
$6$ & $\left( \frac{\pi}{2}, \frac{\pi}{2}, \pi\right)$ & $\pi/2$ & $"$ & $-a_5$ \\
$7$ & $\left( \frac{\pi}{2}, \frac{\pi}{2}, \frac{\pi}{2}\right)$ & $\pi/2$ & $"$ & $-\sin\theta \sin\phi$ \\
$8$ & $\left( \frac{\pi}{2}, \frac{\pi}{2},\frac{3 \pi}{2} \right)$ & $\pi/2$ & $"$ & $-a_7$\\
\hline
\end{tabular}
\end{center}
\caption{The positions of eight regularly arranged points on a 3-sphere in the 4D Euclidean embedding space, in hyperspherical polar coordinates, where $I=1$, the indices $i$ and $k$ range from 1 to 8, $j$ from 2 to 8, and $\ell$ from 3 to 8.
All definitions and
additional details are described in the text.}
\label{table1}
\end{table}
\newpage
CCC qualitatively studied
the interesting question of whether the horizons of neighbouring black holes can touch \cite{bounce}. This would signal whether the black holes
can retain their individual identity as the Universe bounces or whether they merge.
In \cite{bounce} the apparent horizon of each source
was located by determining the area of a sphere of constant coordinate $r$ centred on the location of the black hole \cite{DurkClif} (also see the Appendix).
When the value of $r$ at the apparent horizon is greater than one half of the distance between the individual black hole sources, we regard these black holes as having merged.
The location of the apparent horizon for different values of $\kappa$ was displayed in Fig. 3 in CCC \cite{bounce}. In particular,
the horizons of two adjacent black holes touch at the intersection of the two lines. This
formally occurs for $\kappa \simeq 5.1$; i.e., if
$\kappa$ is greater than this value, then the black holes are expected to merge prior to the expansion minimum.
CCC determined whether the scale factor of the cosmological region has a positive second time derivative, corresponding to a bouncing cosmology.
Assuming that the cosmological region is approximately but not exactly (geometrically) spherical, it was found to be plausible to infer that a cosmological bounce occurs when $\kappa \gtrsim 1.2$ \cite{bounce}. In addition,
when $0 < \kappa \lesssim 2.2$ the proper mass of each individual black hole is positive. However, the value of $m$ decreases as $\kappa$ increases: at $\kappa \simeq 2.2$ it vanishes, and it continues to decrease (becoming negative) as $\kappa$ subsequently increases. Therefore, in the case of positive mass black holes, we necessarily have $\kappa \lesssim 2.2$.
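For the 8BH configuration this sign change is easy to reproduce numerically. The sketch below (our own illustration, not taken from \cite{bounce}) evaluates the proper mass for equal sources with $\alpha_i = 1$, using $r_{ij} = \pi/2$ for the six nearest sources and the finite antipodal limit $\sin(s(\pi - r))/\sin r \rightarrow s$ as $r \rightarrow \pi$, with $s = \sqrt{1-\kappa}$; complex arithmetic supplies the analytic continuation to $\kappa > 1$:

```python
import cmath

# Proper mass m(kappa) of one of the eight equal-mass sources (all alpha_i = 1),
# from the expression for m_i above.  The six nearest sources sit at r_ij = pi/2
# and the antipodal one at r_ij = pi, for which the finite limit
# sin(s*(pi - r))/sin(r) -> s as r -> pi is used, with s = sqrt(1 - kappa).
def proper_mass(kappa, alpha=1.0):
    s = cmath.sqrt(1 - kappa)
    self_term = -s / cmath.tan(s * cmath.pi)
    antipodal = s / cmath.sin(s * cmath.pi)
    nearest = 6 * cmath.sin(s * cmath.pi / 2) / cmath.sin(s * cmath.pi)
    # The result is real up to rounding; discard the tiny imaginary part.
    return (2 * alpha**2 * (self_term + antipodal + nearest)).real

for kappa in (1.5, 2.0, 2.2, 2.3):
    print(kappa, proper_mass(kappa))
```

The computed mass is positive at $\kappa = 2.0$ and changes sign between $\kappa = 2.2$ and $\kappa = 2.3$, consistent with the bound quoted above.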
By considering the positions of the horizons, the expansion of the cosmological scale factor and the property that the black hole masses are all positive, we thus have the following physical bounds for the model parameter $\kappa$:
\begin{equation}
\label{cond8}
1.2 \lesssim \kappa \lesssim 2.2 \, .
\end{equation}
If $2.2 \lesssim \kappa$, then the black holes have negative mass. If $\kappa \lesssim 1.2$, the magnitude of the scalar field is then not sufficiently large to ensure a bounce in which the universe emerges into a subsequent expanding phase. Writing this in terms of the radius of the apparent horizon, we have that
\begin{equation}
\label{8bounds}
0.51 \lesssim \Delta r_{\rm h} \lesssim 0.82 \, ,
\end{equation}
where $\Delta r_{\rm h}$ represents the fractional distance on the conformal hypersphere that the horizon extends towards the midpoint between adjacent black holes. Therefore,
for all physical values of $\kappa$ we have that $\Delta r_{\rm h} < 1$.
\subsubsection{Comments}
Formally, the analysis presented in CCC and summarized above is for the case $\kappa < 1$. But, as noted above, for $\kappa < 1$ there is no bounce
(i.e., such models are not physical), while for $\kappa > 1$ the models with $\kappa \gtrsim 2.2$ have negative masses
(again not physical);
hence, strictly speaking, there is no black hole merger for any solution in the physical regime. We need to consider the case $\kappa > 1$ explicitly.
We recall that
bounces may occur for $\kappa$ greater than $\simeq 2.2$, but they will correspond to cosmological models with negative mass black holes. For the most part, here we will assume that negative mass solutions are not physical.
We note that much of the usual intuition breaks down in the BHL models studied here, which
can lead to difficulties in physical interpretation.
For example, the vacuum solution on a conformal hypersphere with one point mass does not exist; hence the model cannot have a single BH limit (where all $\alpha_j\rightarrow 0$ except $\alpha_1=\alpha$). Moreover,
the masses can, in principle, be negative. Most importantly, perhaps, the Schwarzschild characteristic scale $R_S$ is not necessarily relevant; in particular, $m\sim 1$ may not be a good choice in the analysis regardless. From above, $m\equiv \alpha^2 B(\kappa)$, where $B$ depends on $\kappa$ (and where we assume a value of $\kappa$ such that $m>0$). Choosing $m$ effectively fixes $\alpha$; alternatively, we could fix $\alpha$ so that $m$ is derived. However, as noted earlier, in our qualitative analysis here we shall keep $m=1$ in order to effect a more straightforward
comparison with the results of CCC \cite{bounce}.
\subsubsection{$\kappa > 1$}
In particular, we need to consider the case $\kappa > 1$ separately and carefully.
If we define $K \equiv \sqrt{\kappa -1}$, then the physical region defined by (\ref{cond8}) can be rewritten as:
\begin{equation}
\label{cond8K}
0.45 \lesssim K \lesssim 1.09 \, .
\end{equation}
Again, the lower bound ensures a bounce and the upper bound ensures that
each black hole has a positive proper mass. Two problems are immediate. First, the assumption of spherical symmetry is even more problematic in the
case $\kappa > 1$. Second, we note that the region $\kappa > 2.2$ is unphysical; in particular, the critical merger value
$\kappa \simeq 5.1$ obtained in CCC is not of physical interest.
However, we are interested in the
formal question of what happens as $K$ increases (i.e., is there a critical value
$K_c \sim \sqrt{5.1 -1}$ above which the individual black holes merge?).
\newpage
\subsection{The conformal factor for the case $ \kappa >1$}
Let us consider the case $ \kappa >1$.
We first define:
$$K \equiv \sqrt{\kappa -1}.$$ We note that the earlier expressions are all regular at $\kappa = 1$, and for $\kappa > 1$ the trigonometric functions therein merely turn into hyperbolic ones.
We obtain the conformal factor for a single point-like source at $r=0$
(which diverges at $r=0$, the location of the source).
To obtain a smooth geometry at all other points, we require that the first derivative of $\Omega$ is single-valued at $r=\pi$. Hence we obtain the exact solution
\begin{equation}
\label{cont333}
\Omega_i(r) = \alpha_i \frac{\sinh (K (\pi-r))}{\sinh (K \pi) \sin r} \, .
\end{equation}
In the case of $N$ sources located at arbitrary positions on the hypersphere, the corresponding solution can be written as
\begin{equation}
\label{cont}
\Omega (r, \theta, \phi) = \sum_{i=1}^N \alpha_i \frac{\sinh (K (\pi-r_i))}{\sinh (K \pi) \sin r_i} \, ,
\end{equation}
where $r_i$ (identified by Eqs. \eqref{cosrj} and \eqref{ak}) is the radial coordinate, after rotating the hypersphere so that the $i$th source is positioned at $r_i=0$.
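As an illustrative numerical sketch (not part of the derivation), the conformal factor \eqref{cont} can be evaluated directly once the rotated distances $r_i$ are known; the function below takes those distances as given, since the rotation of Eqs. \eqref{cosrj} and \eqref{ak} is assumed to have been performed already.

```python
import math

def conformal_factor(K, alphas, rs):
    """Omega of Eq. (cont) for N point sources on the hypersphere (kappa > 1).

    K      : sqrt(kappa - 1)
    alphas : source strengths alpha_i
    rs     : hyperspherical distances r_i of the field point from each source
             (assumed precomputed by rotating each source to r = 0)
    """
    return sum(a * math.sinh(K * (math.pi - r)) / (math.sinh(K * math.pi) * math.sin(r))
               for a, r in zip(alphas, rs))
```

As expected, $\Omega$ diverges like $1/r_i$ as the field point approaches a source, reproducing the point-source behaviour noted above.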
\subsubsection{Proper mass of sources}
Using the same procedure as described earlier (or as in \cite{bounce}), we obtain
the proper mass of the $i$th source as:
\begin{equation}
m_i = - \frac{2 \alpha_i^2 K}{ \tanh (K \pi)} + 2 \sum_{j\neq i} \alpha_i \alpha_j \frac{\sinh (K (\pi-r_{ij}))}{\sinh (K \pi) \sin r_{ij}} \, .
\end{equation}
We remark that the mass of each individual source depends on every $\alpha_j$ and the position of every other black hole. As noted above, this mass can be positive or negative, depending on the value of the constants $\kappa$, $\alpha_i$ and $r_{ij}$.
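As a consistency check (a sketch, not part of the analysis), the pairwise sum above can be evaluated numerically; the antipodal separation $r_{ij}=\pi$ is handled by the limit $\sinh(K(\pi-r))/(\sinh(K\pi)\sin r) \to K/\sinh(K\pi)$.

```python
import math

def pair_term(K, r):
    # sinh(K(pi - r)) / (sinh(K pi) sin r); the antipodal limit r -> pi gives K / sinh(K pi)
    if abs(r - math.pi) < 1e-9:
        return K / math.sinh(K * math.pi)
    return math.sinh(K * (math.pi - r)) / (math.sinh(K * math.pi) * math.sin(r))

def proper_mass(K, alpha_i, others):
    """m_i for the i-th source; `others` is a list of (alpha_j, r_ij) pairs."""
    m = -2.0 * alpha_i**2 * K / math.tanh(K * math.pi)
    for alpha_j, r in others:
        m += 2.0 * alpha_i * alpha_j * pair_term(K, r)
    return m
```

For the regular configuration with $\alpha_j=\alpha$, six neighbours at $r_{ij}=\pi/2$ and one antipodal source at $r_{ij}=\pi$, this reproduces the closed-form expression for $m$ given in the next subsection.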
\subsubsection{8BH configuration: case $ \kappa >1$}
We consider the 8BH case with eight regularly-spaced masses and assume that
$\alpha_i = \alpha$, $m_i = m$. We then obtain:
\begin{equation}
m= \frac{4 {\alpha}^2 \sinh(\frac{K\pi}{2})}{\sinh(K\pi)}
\left[3 - K \sinh(\frac{K\pi}{2}) \right] \, .
\end{equation}
We have that $m=0$ when
\begin{equation}
3 - K \sinh \left[\frac{K\pi}{2} \right] = 0 \, ,
\end{equation}
which holds when $K \simeq 1.09$ (corresponding to $\kappa \simeq 2.2$).
Notice that for $K \gtrsim 1.09$, $-m {\alpha}^{-2}$ is positive and increases as $K$ increases.
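The zero-mass threshold can be located numerically; a simple bisection (an independent numerical sketch) gives $\kappa \simeq 2.21$, consistent with the quoted $\kappa \simeq 2.2$.

```python
import math

def mass_factor(K):
    # m is proportional to 3 - K sinh(K pi / 2); m = 0 at the root
    return 3.0 - K * math.sinh(K * math.pi / 2.0)

lo, hi = 0.5, 2.0          # mass_factor(0.5) > 0, mass_factor(2.0) < 0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mass_factor(mid) > 0:
        lo = mid
    else:
        hi = mid
K_c = 0.5 * (lo + hi)      # zero-mass threshold in K
kappa_c = K_c**2 + 1.0     # kappa = K^2 + 1
```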
\newpage
\section{Evolution of instantaneously-static models}
We next present perturbative solutions of the constraint Eq. (\ref{constraint}) that can then be utilized to construct exact time evolving solutions of the EFEs. The
resulting 4D spacetimes model physical realisations in which multiple distinct black holes persist through a cosmological bounce.
In order to study the complete dynamics of the models it is necessary to solve the full EFEs and not simply the conservation Eqs. and the constraint Eqs.
We choose coordinates in order that the full 4D spacetime is represented by the metric line-element:
\begin{equation}
\label{dynmetric}
ds^2 = -A^2(t,x^{\gamma}) dt^2 + h_{\alpha \beta} (t,x^{\gamma}) dx^{\alpha} dx^{\beta}.
\end{equation}
An evolving solution would correspond to such a metric which satisfies both the constraint and evolution Eqs. For a dynamical model that bounces at the time-symmetric surface at $t=t_0$, the 3-metric is:
\begin{equation}
\label{hypmetric}
h_{\alpha \beta} (t_0,x^{\gamma}) dx^{\alpha} dx^{\beta} = \Omega^4(x^{\gamma})d\tilde{s}^2
\equiv \Omega^4 \tilde{h}_{\alpha \beta} dx^{\alpha} dx^{\beta}
\, .
\end{equation}
An exact {\em{spacetime}} can then be obtained by solving the evolution Eqs.
\begin{eqnarray}
\mathcal{L}_t h_{\alpha \beta} &=& -2 A K_{\alpha \beta}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{L}_t K_{\alpha \beta} &=& - D_{\alpha} D_{\beta} A + A \left( \mathcal{R}_{\alpha \beta} - 2 K_{\alpha \gamma} K^{\gamma}_{\phantom{\gamma} \beta} + K \, K_{\alpha \beta} \right) \nonumber \\ &&+ \frac{A}{2} \left( p^{\varphi} - \rho^{\varphi} - 2 \Pi^{\varphi}_{\alpha \beta} - \frac{2}{3}\rho^{\gamma} \right) h_{\alpha \beta} \, ,
\end{eqnarray}
where $t^{a} \equiv A u^{a}$, and $\Pi^{\varphi}_{a b} = (h_a^{\phantom{a} c} h_b^{\phantom{b} d} - \frac{1}{3} h_{ab} h^{cd}) T^{\varphi}_{cd}$ is the effective anisotropic stress of the scalar field. Because the initial value problem is well-posed,
the specification of suitable initial values for both the metric and the extrinsic curvature guarantees a unique time evolving solution.
We will next obtain approximate solutions.
\subsection{Perturbative time-evolving solutions}
Assume that the metric functions ($A$, $h_{\alpha\beta}$ in (\ref{dynmetric})) are smooth at the (symmetric) bounce at $t=0$ and can be expanded in powers of $t$. We also expand $T^\varphi_{ab}$, given by Eq. (2), in powers of $t$, and define the time-like vector field $\frac{1}{A}\frac{\partial}{\partial t}$ so that a time derivative in the 1+3 split is defined by $\dot{\chi}=\frac{1}{A}\frac{\partial\chi}{\partial t}$.
At $t=0$, we have that
\begin{equation}
K_{\alpha\beta}=0, \hspace{2mm} \tilde{D}_a\varphi=0, \hspace{2mm} \dot{\Pi}=0, \hspace{2mm} V^{\prime}=0.
\end{equation}
In addition, we recall that
\begin{equation}
\tilde{D}^2\Omega=\kappa\Omega
\end{equation}
and (at $t$=0)
\begin{equation}
\Pi^2=\dot{\varphi}^2=\frac{\lambda^2}{\Omega^4} \hspace{2mm} ; \hspace{2mm} \lambda^2 \equiv \frac{8}{\mu}\left(\frac{3}{4}-\kappa\right)
\end{equation}
so that $\kappa>\frac{3}{4}$ for $\mu=-1$. Also
\begin{equation}
h^0_{\alpha\beta}\equiv h_{\alpha\beta}\left(t=0,x^\gamma\right)=\Omega^4\tilde{h}^0_{\alpha\beta},
\end{equation}
where
\begin{equation}
\tilde{h}^0_{\alpha\beta}=\mathrm{diag}\left[1, \sin^2r, \sin^2r \sin^2\theta\right].
\end{equation}
From $K_{\alpha\beta}=0$ and the EFE, we find that by a translation of $r$ and $t$ we can always set $A=1$, and all linear terms in $h_{\alpha\beta}$ vanish. Hence we have that:
\begin{equation}
h_{\alpha\beta}=h^0_{\alpha\beta}+h^2_{\alpha\beta}t^2+o(t^3),
\end{equation}
where $h^0_{\alpha\beta}$ is defined by (49) and (50). We only need $h_{\alpha\beta}$ to second order to calculate the Riemann tensor at $t=0$, but we need the metric to third order in order to compute the covariant derivative of the Riemann tensor at $t=0$. In the absence of radiation (i.e., a scalar field source only), the bounce can be assumed to be symmetric and hence the next contributions to the metric will be $o(t^4)$. But radiation can be included at $o(t^3)$ self-consistently to obtain an evolving non-symmetric (about the bounce) solution.
We can compute the inverse metric, where $g^{00}=-1$, and
\begin{equation}
h^{\alpha\beta}=h^{\alpha\beta}_0+o(t^2),
\end{equation}
where
\begin{equation}
h^{\alpha\beta}_0=\Omega^{-4}\mathrm{diag}\left[1,\frac{1}{\sin^2r},\frac{1}{\sin^2r\sin^2\theta}\right]\equiv\Omega^{-4}\tilde{h}^{\alpha\beta}_0,
\end{equation}
and we only need $h^{\alpha\beta}$ to zeroth-order, $h^{\alpha\beta}_0,$ for the computations here.
The field Eqs. (44) and (45) (for $K_{\alpha\beta}=-h_\alpha^\gamma h_\beta^\delta\nabla_\gamma u_\delta,$ $\mathcal{L}_th_{\alpha\beta}=-2K_{\alpha\beta}$) at $t=0$ then yield
\begin{equation}
K_{\alpha\beta,t}=-h^2_{\alpha\beta}.
\end{equation}
Now, we can explicitly compute the non-trivial connection coefficients to lowest order:
\begin{equation}
\Gamma^0_{\alpha\beta}=h^2_{\alpha\beta}t, \hspace{2mm} \Gamma^\alpha_{0\beta}=\Omega^{-4}h^2_{\alpha\beta}t, \hspace{2mm} \Gamma^\alpha_{\beta\gamma}=\tilde{\Gamma}^\alpha_{\beta\gamma} + o(t^2),
\end{equation}
and similarly for the partial derivatives of the connection coefficients. We can then compute the components of the Riemann tensor. In particular,
from the Eqs. (45) we find that
\begin{equation}
h^2_{\alpha\beta}=-R^0_{\alpha\beta}
\end{equation}
(where the 3D Ricci tensor $R^0_{\alpha\beta}$ is calculated explicitly from $h^0_{\alpha\beta}$ and contains derivatives of $\Omega$, which can be simplified using (47)).
Expansions for $R_{\alpha\beta}, R_{\alpha\beta\gamma\delta}, W_{\alpha\beta\gamma\delta}$, and their first covariant derivatives in terms of $R^0_{\alpha\beta}, R^0_{\alpha\beta\gamma\delta}, W^0_{\alpha\beta\gamma\delta}$, and higher order terms (higher powers of $t$) in terms of $\Omega$ and its derivatives, can be explicitly computed. In particular,
\begin{align}
\begin{split}
&R_{\alpha\beta\mu\gamma} =\tilde{R}_{\alpha\beta\mu\gamma}+o(t^2), \\
&R^0_{\beta\mu\gamma} \sim o(t), \\
&R^0_{\beta0\gamma} =-\frac{1}{2}h^2_{\beta\gamma}=\frac{1}{2}R^0_{\beta\gamma}+o(t^2),
\end{split}
\end{align}
using Eq. (56). We write:
$$\varphi=\phi_0+\Phi_1(r)t+o(t^3),$$
where $\phi_0=\mathrm{const}$, there is no $o(t^2)$ term since the acceleration is zero, and from (46):
\begin{align}
\begin{split}
\Phi_1(r)&=\frac{\lambda}{\Omega^2}\equiv\Phi(r), \\
V(\varphi)&=0+V_1(r)t+V_2(r)t^2+o(t^3),
\end{split}
\end{align}
where $V_0=0$ since $V^{\prime}=0$ at $t=0$. We note that $\rho^{\varphi}=0$ at $t=0$, consistent with Eqs. (2) and (3) above. The potential $V(\varphi)$ can be determined from the conservation Eq. (2) for the scalar field (if $\phi_0\neq0$, $V_2=0$, else if $\phi_0=0,$ $V_2\neq0$).
We can compute $T_{ab}$ explicitly, where $T_{ab}=T_{ab}^\varphi$ is defined by (2). Note that (dropping the index 1 on $\Phi_1(r)$)
\begin{equation}
\begin{split}
\dot{\varphi} &=\Phi+o(t^2), \\
D_a\varphi \equiv \psi_a &=t\Phi_{,\alpha}+o(t^3), \\
\dot{\Pi} &=o(t);
\end{split}
\end{equation}
we then have that
\begin{equation}
\begin{split}
\rho^\varphi = p^\varphi &=\frac{1}{2}\Phi^2, \\
D^a \varphi D_a \varphi &\sim o(t^2), \\
\nabla_\alpha \varphi \nabla_\beta \varphi &= \Phi_{,\alpha} \Phi_{,\beta} t^2 + o(t^2), \\
\nabla^a \nabla_a \varphi &= -\Phi^2 + o(t^2).
\end{split}
\end{equation}
Therefore,
\begin{equation}
\begin{split}
T_{00} &= \frac{1}{2} \Phi^2 + o(t), \\
T_{0\alpha} &= \Phi\Phi_{,\alpha} t + o(t^2), \\
T_{\alpha\beta} &= \frac{1}{2} h^0_{\alpha\beta} \Phi^2 + o(t), \\
T = g^{ab} T_{ab} &= \Phi^2 + o(t).
\end{split}
\end{equation}
We can obtain the higher order terms for $V$ by perturbatively solving the conservation Eq. (3) (to lowest order $\dot{\varphi} = \Phi(r) + \chi(r)t^2$; $V(\varphi)=\overline{V}_2 \varphi^2$; yielding Eqs. for $\chi$ and $\overline{V}_2$). In particular, Eq. (3) implies that $\phi_0 V_2 = 0$.
Explicitly, the 4D Ricci tensor at $t=0$ can be written as:
\begin{equation}
^4R_{00} = \prescript{3}{}{R}, \hspace{2mm} ^4R_{0\alpha} = 0, \hspace{2mm} ^4R_{\alpha\beta} = 0, \hspace{2mm} ^4R = - \prescript{3}{}{R}.
\end{equation}
The field Eqs.
\begin{equation}
R_{ab} - \frac{1}{2} g_{ab} R = -T_{ab}
\end{equation}
yield $^4R = \prescript{4}{}{T}$ $\left(\prescript{4}{}{T} = - \prescript{3}{}{R}\right)$. Clearly, using Eqs. (59) -- (61), the EFEs are satisfied. Note that $T_{ab;c} g^{bc} = 0$ to lowest order.
\subsubsection{Curvature invariants}
We can use the tensor $T_{ab}$ to compute curvature invariants. Note that, to lowest order, $T = \Phi^2,$ $T_{ab}T^{ab} = \Phi^4$. Defining the trace-free tensor
\begin{equation}
S_{ab} = T_{ab} - \frac{1}{4}g_{ab}T,
\end{equation}
which is related to the trace-free Ricci tensor, we have that
\begin{equation}
S_{00} = \frac{3}{4}\Phi^2, \hspace{2mm} S_{\alpha\beta}=\frac{1}{4}h_{\alpha\beta}\Phi^2; \hspace{2mm} S=0
\end{equation}
and
\begin{equation}
\begin{split}
\tensor{S}{_a^b} \tensor{S}{_b^a} & = \frac{3}{4}\Phi^4, \\
\tensor{S}{_a^b} \tensor{S}{_b^c} \tensor{S}{_c^a} &= -\frac{3}{8} \Phi^6, \\
\tensor{S}{_a^b} \tensor{S}{_b^c} \tensor{S}{_c^d} \tensor{S}{_d^a} &= \frac{21}{64} \Phi^8,
\end{split}
\end{equation}
and the type {\bf II/D} condition \cite{PLB} becomes
\begin{equation}
\label{DT}
^4D_T = {a_0}^2\Phi^{24} = 0,
\end{equation}
where ${a_0}^2$ is a positive non-zero constant.
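The traces quoted above follow from the diagonal mixed form $\tensor{S}{_a^b} = \Phi^2\,\mathrm{diag}(-\frac{3}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4})$; a short exact-arithmetic check (with $\Phi=1$):

```python
from fractions import Fraction as F

# mixed components S_a^b = diag(-3/4, 1/4, 1/4, 1/4) (taking Phi = 1)
S = [F(-3, 4), F(1, 4), F(1, 4), F(1, 4)]

tr1 = sum(S)                 # trace-free: S = 0
tr2 = sum(s**2 for s in S)   # S_a^b S_b^a
tr3 = sum(s**3 for s in S)   # cubic trace
tr4 = sum(s**4 for s in S)   # quartic trace
```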
In addition, computing $T_{ab;c}$ to $o(t^0)$:
\begin{align}
\begin{split}
T_{00;c} &= \Phi\Phi_{,\gamma}\delta^\gamma_c, \\
T_{0\alpha;c} &= \Phi\Phi_{,\alpha}\delta^0_c, \\
T_{\alpha\beta;0} &= 0, \\
T_{\alpha\beta;\delta} &= h^0_{\alpha\beta} \Phi\Phi_{,\delta}
\end{split}
\end{align}
(noting that the higher order terms in $T_{\alpha\beta;0}$ contain appropriate terms involving the potential), so that:
\begin{equation}
T_{ab;c}T^{ab;c} = 6 \Phi^2\Phi_{,\alpha} \Phi_{,\beta} h_0^{\alpha\beta}
\end{equation}
and the type {\bf II/D} condition for the covariant derivative of $T_{ab}$ (or the trace-free $S_{ab}$), $\prescript{4}{}{D}_{\nabla T} =0$, becomes
\begin{equation}
\label{DDT}
\Phi^2\Phi_{,\alpha}\Phi_{,\beta}h^{\alpha\beta}_0 = 0.
\end{equation}
Note that all contractions of the Weyl tensor are proportional to $\left(\Phi^2\right)^{n_1}$, and all contractions of the covariant derivative of the Weyl tensor are proportional to $\left(\Phi^2\Phi_{,\alpha}\Phi^{,\alpha}\right)^{n_2}$, for positive integers $n_1$ and $n_2$. The GH is identified through Eqs. (\ref{DT}) and (\ref{DDT}).
Therefore, effectively the type {\bf II/D} condition for $T_{ab}$ yields
\begin{equation}
\Phi=0,
\end{equation}
and for $\nabla T_{ab}$ (in the spherical limit):
\begin{equation}
\frac{d\Phi}{dr}=0.
\end{equation}
The first condition (67) or (71) enables the proper masses to be identified. The second type {\bf II/D} condition (70), $\Omega_{,\alpha} \Omega^{,\alpha} = 0$, allows us to identify the GH. This condition is analogous to the condition $\partial_r\left(E^{11}\right)=0$ in \cite{DurkClif} using the so-called Weyl-tensor method (see the Appendix). Note that at $t=0$, $E_{\mu\nu}=\prescript{3}{}{R}_{\mu\nu}$ and $H_{\mu\nu}=0$ (the electric and magnetic parts of the Weyl tensor, respectively). The type {\bf II/D} condition for the Weyl tensor is $I^3=\lambda J^2$, where $I\equiv\frac{1}{2}\left(E_{ab}E^{ab}-H_{ab}H^{ab} + iE_{ab}H^{ab}\right)$ (and similarly for $J$) \cite{kramer}.
The analysis can be repeated in flat space and higher dimensions (see Appendix); however, we can only use the GH as above to identify the black hole horizons in higher dimensions.
\newpage
\subsection{Example: hyperspherical black holes} As an example, for the hyperspherical black holes cosmology studied earlier, $\Omega(r,\theta,\phi)$ is given by Eq. (26). Note that
for small $r$ $(j \neq I)$:
\begin{equation}
r_j = r_{Ij} - a_j r + o(r^2).
\end{equation}
From earlier, $\prescript{4}{}{D}_T \propto \Phi^{24}$ and $\Phi=\frac{\lambda}{\Omega^2}$. As in \cite{DurkClif}, the expansion of $\Omega$ is used to identify the proper masses of the black holes. From $\prescript{4}{}{D}_{\nabla T} = 0$, we obtain $\Omega_{,\alpha}\Omega_{,\beta} h^{\alpha\beta}_0 = 0$, from which we determine the GH. Thus, we find that
\begin{equation}
\Omega(r,\theta,\phi) = \frac{\alpha_I}{r} + \frac{m_I}{2\alpha_{I}} + r\left\{ \left(\frac{\kappa}{2} - \frac{1}{3}\right) \alpha_I + C_I \right\} +o(r^2),
\end{equation}
where the proper masses $m_I$ are defined by
\begin{equation}
\frac{m_I}{2\alpha_I} = \frac{-\alpha_I \sqrt{1-\kappa}}{{\tan(\sqrt{1-\kappa}~\pi)}} + B_I,
\end{equation}
and where
\begin{align}
B_I &\equiv \sum_{j \neq I}B_{I_j} = \sum_{j \neq I} \frac{\alpha_j}{\sin r_{Ij}}\left[\cos \left(\sqrt{1-\kappa}~r_{Ij}\right) - \cot \left(\sqrt{1-\kappa}~\pi\right) \sin \left(\sqrt{1-\kappa}~r_{Ij}\right)\right], \\
C_I &= \sum_{j \neq I} a_j \left[B_{I_j} \cot r_{Ij} - B_{I_j} \sqrt{1-\kappa}~\cot \left(\sqrt{1-\kappa}~r_{Ij}\right) + \frac{\alpha_j \sqrt{1-\kappa}}{\sin r_{Ij} \sin\left(\sqrt{1-\kappa}~r_{Ij}\right)}\right].
\end{align}
Note that (for small $r$):
\begin{equation}
\Omega_{,r} = \frac{-\alpha_I}{r^2} + \left\{ (3\kappa-2)\frac{\alpha_I}{6} + C_I \right\},
\end{equation}
and the GH, $r_{gh}$, is defined when this vanishes:
\begin{equation} \label{78}
r_{gh} = \left[ \frac{(3\kappa-2)}{6} + \frac{C_I}{\alpha_I} \right]^{-\frac{1}{2}}.
\end{equation}
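As a rough numerical illustration of Eq. \eqref{78} (a sketch; note that the resulting values of $r_{gh}$ are not small, so this small-$r$ estimate is only indicative), $r_{gh}$ decreases both with $\kappa$ and with $C_I/\alpha_I$:

```python
def r_gh(kappa, C_over_alpha=0.0):
    # small-r estimate of the GH radius, Eq. (78)
    return ((3.0 * kappa - 2.0) / 6.0 + C_over_alpha) ** -0.5
```

For example, $\kappa=\frac{15}{16}$ and $C_I\approx 0$ give $r_{gh}=\sqrt{96/13}\approx 2.7$, far outside the small-$r$ regime.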
\subsubsection{8BH case}
First we consider the case $\kappa < 1$ under the spherical assumption, with equal masses $m_i = m$ ($\alpha_i = \alpha$), where we shall take $m=1$ as in CCC. We also assume a single cell (where the antipodal points $BH_1$ and $BH_2$ are not identified, and there are restrictions on $\theta, \phi$ within this single cell). Note that we need to consider the case $\kappa > 1$ separately and carefully (see later); in particular, in this latter case we need to relax the assumption of spherical symmetry and we need to utilize the GH.
In this specific example we consider a regular 8BH lattice and explicitly determine the proper masses and location of the GH and compare them with results using the Weyl tensor method (see the Appendix). The qualitative agreement further motivates the use of the GH to identify the horizon of the black holes.
We note that
Eq. \eqref{78} is only useful heuristically. We have assumed that $\frac{3}{4} < \kappa < 1$ here and looked at the spherically symmetric approximation for small $r$. Eq. \eqref{78} is not valid for larger $r$, and so cannot be used to estimate $r_{gh}$ (but we can adjust the relative size by decreasing $m$ (or $\alpha$)).
To try to obtain analytical results to confirm the qualitative picture we
can choose $\kappa = \frac{15}{16}$ $\left(\sqrt{1-\kappa} = \frac{1}{4}\right)$.
Within the approximation here,
$$\frac{C_1}{\alpha} = c^2\sin\theta\sin\phi,$$
where $c^2 \equiv (1-\kappa)\cos(\sqrt{1-\kappa}~\pi)\sin^{-2}(\sqrt{1-\kappa}~\pi) > 0$ for $\frac{3}{4}<\kappa<1$. We can consider the range of values $0<\theta<\frac{\pi}{2}$, $0<\phi<\frac{\pi}{2}$ so that $C_1$ is positive (other ranges can be obtained by symmetries for the symmetric 8BH configuration).
For $\frac{C_1}{\alpha}\approx 0$, $r_{gh}^{-2} \cong \frac{3\kappa -2}{6}$, so that qualitatively as $\kappa$ increases $r_{gh}$ decreases, which looks promising. Note that $C_1>0$, so that $r_{gh}$ always decreases relative to the ``pure'' spherically symmetric case. Also note that as $\alpha$ decreases, $r_{gh}$ decreases.
If the black holes merge, they do so first along the lines joining the centres of the black holes: Let $^jC_1$ be the value of $C_1$ along the lines joining $BH_1$ and $BH_j$, where
$$^j C_1 = \alpha c^2(1,0,0,0,0,1,-1).$$
Therefore, we can estimate the effect of $C_1$ in these critical directions.
We also need to redo the computations for $r$ not small (e.g., $r\simeq \frac{1}{2}r_{ij} \sim \frac{\pi}{4}$), and especially without any approximation for $r$ (to find all roots of $\Omega_{,\alpha}\Omega^{,\alpha} = 0$). We note that
\begin{equation}
\sinh\left(nK\pi\right) = \frac{1}{2}e^{nK\pi}\left(1-e^{-2nK\pi}\right),
\end{equation}
and we can neglect the second term (i.e.,
$\sinh\left(nK\pi\right) \simeq \frac{1}{2}e^{nK\pi} \simeq \cosh\left(nK\pi\right)$),
since in the large $K$ approximation $e^{-2nK\pi} < 10^{-2}$ for $nK \gtrsim \frac{3}{4}$.
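A one-line check of the size of the neglected relative term $e^{-2nK\pi}$ (a numerical aside):

```python
import math

def lka_rel_error(nK):
    # relative size of the term neglected in sinh(n K pi) ~ (1/2) e^{n K pi}
    return math.exp(-2.0 * math.pi * nK)
```

The neglected term is below $1\%$ for $nK \gtrsim 3/4$, and below $0.2\%$ already at $nK=1$.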
\newpage
\section{8BH: The Case $\kappa>1$}
We assume a single cell 8BH configuration. As noted above, for our qualitative analysis here we shall keep $m=1$ in order to be able to compare with the results of CCC \cite{bounce}. We take $r=0$ to be centred on BH$_1$ ($I=1$). We consider equal masses ($m_i=m=1$) and equal $\alpha_i=\alpha$, where all $r_{Ij}=\frac{\pi}{2}$, except $r_{I2}=\pi$.
The physically viable range for $K$ is $0.45 \lesssim K \lesssim 1.09$.
We shall assume that $K\lesssim 1.09$ ($\kappa \lesssim 2.2$) in order for the masses to be positive. Strictly speaking, the analysis in CCC is not valid for $\kappa \gtrsim 2.2$; however, we can formally investigate what happens for large $\kappa$ here. Also, formally there is a bounce for $K\gtrsim 0.45$ ($\kappa \gtrsim 1.2$).
Note that it was found in the analysis of CCC that the black holes do not merge in this physically acceptable range.
The single-cell assumption implies that the ranges for $\theta$, $\phi$ may be restricted in order for the region (and the GH in particular) to lie within the interior of the cell.
We shall choose BH locations and ranges for the angular values below appropriate to the applications (and appeal to the symmetries of the configuration otherwise). We are most interested in the directions (angular values) joining two black holes.
The analysis in the case $\kappa<1$ discussed above is really only applicable for small $r$. The spherically symmetric limit and the use of an apparent horizon are even less appropriate in the large $K$ approximation (LKA); indeed, the GH will be angular dependent. Therefore, we
need to redo the analysis in the case $\kappa>1$.
We also have that
\begin{align}
\Omega = &\sum^8_{i=1}\alpha_i \frac{\sinh \left(K(\pi-r_i)\right)}{\sinh (K\pi) \sin r_i}
\end{align}
and hence, setting $\alpha_i \equiv \alpha$,
\begin{align}\label{star}
\alpha^{-1} \sinh (K\pi) \hspace{1mm} \Omega &= \frac{\sinh\left[K(\pi-r)\right]}{\sin r} + \frac{\sinh(Kr)}{\sin r} + \sum^8_{\ell=3} \frac{\sinh (K(\pi-r_\ell))}{\sin r_\ell}\\ \nonumber
&\equiv F(r) + G(r,\theta,\phi).
\end{align}
We have that
\begin{align}\label{daggar1}
\Theta &\equiv \alpha^{-2}\sinh^2(K\pi) \left[\Omega_{,i}\Omega_{,j}\right] h^{ij} = \left[ F_{,r} + G_{,r} \right]^2 + \frac{1}{\sin^2 r}\left[ G_{,\theta}\right]^2 + \frac{1}{\sin^2r \sin^2\theta} \left[G_{,\phi}\right]^2,
\end{align}
where
\begin{align}
F(r) &\equiv \frac{1}{\sin r}\left[\sinh (K\pi) \cosh (Kr) - \cosh (K\pi) \sinh(Kr) + \sinh (Kr)\right],\\ \nonumber
A(r) &\equiv \left[F(r)\right]_{,r} = \frac{-\cos r}{\sin^2r}\left[\sinh(K\pi)\cosh(Kr)-\cosh(K\pi)\sinh(Kr) + \sinh(Kr)\right]\\
& \hspace{17mm} + \frac{K}{\sin r} \left[\sinh (K\pi) \sinh(Kr) - \cosh(K\pi)\cosh(Kr) + \cosh(Kr)\right].
\end{align}
We rewrite \eqref{star} and \eqref{daggar1} using $\ell=3,\dots,8$ as:
\begin{align}
&\alpha^{-1}\sinh(K\pi)\Omega = F(r) + G(r,\theta,\phi),\\
&G(r,\theta,\phi)=\sum_{\ell=3}^{8}\frac{\sinh(K(\pi-r_{\ell}))}{\sin r_{\ell}} \equiv \sum^8_{\ell=3}G_{\ell}.
\end{align}
We note that
\begin{equation}
F(r)=\frac{1}{\sin r}\left[ \sinh(K\pi)\cosh(Kr) + \sinh(Kr)(1-\cosh(K\pi))\right],
\end{equation}
where $F_{,r}\equiv A(r)$ has no angular ($\theta, \phi$) dependence.\\
For each $\ell$ and $\chi \equiv [r,\theta,\phi]$, we have that
\begin{align}
\tensor{G}{^\ell_{,\chi}} = \frac{\partial r_{\ell}}{\partial{\chi}}&\left\{ \frac{-\cos(r_\ell)}{\sin^2(r_\ell)} \left[\sinh(K\pi)\cosh(Kr_\ell) - \cosh(K\pi)\sinh(Kr_\ell)\right]\right. \nonumber \\
&+ \left. \frac{K}{\sin (r_\ell)} \left( \sinh(K\pi)\sinh(Kr_\ell)-\cosh(K\pi)\cosh(Kr_\ell)\right)\right\}.
\end{align}
We define
\begin{equation}
G = \sum^8_{\ell =3}G_{\ell}=\sum_{\bar\ell=3,5,7}\big(G_{\bar\ell}+G_{\bar\ell +1}\big) \equiv \sum^{\bar3}_{L=\bar1} G_L,
\end{equation}
where each $G_L(r_L) \equiv G_{\bar\ell} + G_{\bar\ell +1}$;
$L=\bar1$ $(\bar\ell=3)$, $L=\bar2$ $(\bar\ell=5)$, $L=\bar3$ $(\bar\ell = 7)$.
Now, for each antipodal pair $\bar\ell$ and $\bar\ell+1$, by a direct computation we have that $r_{\bar\ell+1}=\pi-r_{\bar\ell}$.
We note that explicitly
\begin{equation}
A(r) = -\frac{1}{\sin r} \bigg[ \cot r \big(\sinh K(\pi-r) + \sinh(Kr)\big) + K\big(\cosh K(\pi-r) - \cosh (Kr)\big)\bigg],
\end{equation}
so that immediately we have that $A(\pi-r)=-A(r)$ (and hence that $A(\frac{\pi}{2})=0$).
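The antisymmetry $A(\pi-r)=-A(r)$ (and hence $A(\frac{\pi}{2})=0$) is easily verified numerically; the sketch below implements $A(r)$ as written above.

```python
import math

def A(K, r):
    # F_{,r}: radial derivative of the pair profile F(r)
    s = math.sinh(K * (math.pi - r)) + math.sinh(K * r)
    c = math.cosh(K * (math.pi - r)) - math.cosh(K * r)
    return -((math.cos(r) / math.sin(r)) * s + K * c) / math.sin(r)
```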
Hence we obtain the exact result:
\begin{equation}
\big(G_L\big)_{,\chi} = \big(G_L(r_L)\big)_{,r_L}\frac{\partial r_L}{\partial\chi}; ~~G_{,\chi} = \sum^{\bar3}_{L=\bar1}S_L \tensor{b}{_L^\chi}A(r_L),
\end{equation}
where $A(r)$ is defined above, $\cos r_L = a_L\sin r$, and
$$S_L \equiv -\frac{1}{\big(1-a_L^2\sin^2r\big)^\frac{1}{2}},$$
\[ b^\chi_L \equiv \begin{cases}
a_L\cos r & \chi=r\\
\frac{\partial a_L}{\partial \chi} \sin r & \chi=\theta,\phi
\end{cases}.
\]
We recall that $\Theta$ is given by \eqref{daggar1}.
\subsubsection{Approximation method}
Let $r=r_0+x$ (for some $r_0$; e.g., $r_0=r_{gh}$), where $x$ is small (so we can expand about $x=0$). By definition,
\begin{equation}
\cos r_{\bar i} \equiv \bar A_{\bar i}\sin r = \bar A_{\bar i} \sin r_0 + \bar A_{\bar i} \cos (r_0) x \equiv \bar A + \bar B x, \nonumber
\end{equation}
and so
\begin{equation} \label{doubledaggar}
r_{\bar i} = \arccos(\bar A + \bar B x) = \arccos(\bar A) - \frac{\bar B}{\sqrt{1-\bar A ^2}}x \equiv R_{\bar i} + P_{\bar i} x + o(x^2),
\end{equation}
where
\begin{equation} \label{doublestar}
R_{\bar i} \equiv \arccos \big(\bar A_{\bar i} \sin r_0 \big); ~~
P_{\bar i} \equiv \frac{- \bar A_{\bar i} \cos r_{0}}{\sqrt{1-\bar A_{\bar i}^2 \sin^2r_0}}.
\end{equation}
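A quick check of the linearization \eqref{doubledaggar}--\eqref{doublestar} against the exact $\arccos$ (a numerical sketch with arbitrary illustrative sample values):

```python
import math

a_bar, r0, x = 0.6, 0.5, 1e-4   # sample values (illustrative only)

R = math.acos(a_bar * math.sin(r0))                                       # R_i
P = -a_bar * math.cos(r0) / math.sqrt(1.0 - (a_bar * math.sin(r0))**2)    # P_i

exact  = math.acos(a_bar * math.sin(r0 + x))
linear = R + P * x
```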
Note that for small $x$:
\begin{equation}
\begin{split}
&A\big(\alpha +\beta x\big) = A\big(\alpha\big) \\
&+ \beta x\left\{\frac{\cos\alpha}{\sin^2\alpha}\bigg[\big[\tan\alpha+2\cot\alpha\big] \big(\sinh K(\pi-\alpha)+\sinh K\alpha\big)-K\big(\cosh K\alpha-\cosh K(\pi-\alpha)\big)\bigg]\right. \\
&+\left. \frac{K}{\sin \alpha}\bigg[ -\cot \alpha\big(-\cosh K(\pi-\alpha)+\cosh K\alpha \big)+K\big(\sinh K(\pi-\alpha) + \sinh K\alpha\big)\bigg]\right\}.
\end{split} \label{dagger}
\end{equation}
\noindent
Here we assume that all $a_L$ are positive and $0\leq\theta\leq\frac{\pi}{2}$, $0\leq\phi\leq\frac{\pi}{2}$, to ensure that $r_L < \frac{\pi}{2}$ (other regions are obtained by ``symmetry arguments'').
There are 3 ``pairs'': $ L = \bar 1, \bar 2, \bar 3$, where
$\cos r_{\bar L} = a_{\bar L} \sin r.$
For the 3 antipodal pairs we take
\begin{align}
0&< r\leq \frac{\pi}{4} \, ,\\
0&< a_{\bar L}\sin r < \frac{1}{\sqrt 2} \, ,\\
0&< \cos r_{\bar L} <\frac{1}{\sqrt 2} \, ,\\
\frac{\pi}{4} &< r_{\bar L} < \frac{\pi}{2} \, .
\end{align}
For example, we take
\begin{align}
&a_{\bar 1}=\cos \theta \hspace{3mm} (= a_3),\\
&a_{\bar 2}=\sin\theta\cos\phi \hspace{3mm} (=-a_5),\\
&a_{\bar 3}=\sin\theta\sin\phi \hspace{3mm} (=-a_7),
\end{align}
\vspace{0.3cm}
\noindent so that
\begin{table}[h!]
\begin{center}
\label{tab:table1}
\begin{tabular}{r|c|c|c}
& $a_{L}$ & $\frac{\partial a_L}{\partial \theta}$ & $\frac{\partial a_L}{\partial \phi}$\\
\hline
$\bar 1$ & $\cos\theta$ & $-\sin\theta$ & $0$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 2$ & $\sin\theta\cos\phi$ & $\cos\theta\cos\phi$ & $-\sin\theta\sin\phi$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 3$ & $\sin\theta\sin\phi$ & $\cos\theta\sin\phi$ & $\sin\theta\cos\phi$ \\
\end{tabular}
\end{center}
\end{table}\\
\vspace{0.3cm}
\noindent
In a specific example we may change coordinates (i.e., the choice of $a_{\bar 1},$ $a_{\bar 2},$ $a_{\bar 3}$, depending on whether BH$_3$/BH$_4$, BH$_5$/BH$_6$, BH$_7$/BH$_8$ are chosen) to ensure that $0<\theta<\frac{\pi}{2}$ and $0<\phi<\frac{\pi}{2}$ and that all terms are thus well-defined in that specific example.
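Putting the pieces together, $\Theta$ from \eqref{daggar1} can be evaluated numerically. The sketch below uses the direction cosines of the table above and the round-hypersphere inverse-metric factors $1/\sin^2 r$ and $1/(\sin^2 r\,\sin^2\theta)$; it is an illustrative implementation, not a substitute for the analytic treatment.

```python
import math

def A(K, r):
    # F_{,r}, the radial derivative of the pair profile F(r)
    s = math.sinh(K * (math.pi - r)) + math.sinh(K * r)
    c = math.cosh(K * (math.pi - r)) - math.cosh(K * r)
    return -((math.cos(r) / math.sin(r)) * s + K * c) / math.sin(r)

def Theta(K, r, th, ph):
    # direction cosines a_L and their theta/phi derivatives (see the table)
    aL  = [math.cos(th), math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph)]
    daT = [-math.sin(th), math.cos(th) * math.cos(ph), math.cos(th) * math.sin(ph)]
    daP = [0.0, -math.sin(th) * math.sin(ph), math.sin(th) * math.cos(ph)]
    rL  = [math.acos(a * math.sin(r)) for a in aL]
    SL  = [-1.0 / math.sqrt(1.0 - (a * math.sin(r))**2) for a in aL]
    Gr = sum(SL[i] * (aL[i] * math.cos(r)) * A(K, rL[i]) for i in range(3))
    Gt = sum(SL[i] * (daT[i] * math.sin(r)) * A(K, rL[i]) for i in range(3))
    Gp = sum(SL[i] * (daP[i] * math.sin(r)) * A(K, rL[i]) for i in range(3))
    return ((A(K, r) + Gr) ** 2
            + (Gt / math.sin(r)) ** 2
            + (Gp / (math.sin(r) * math.sin(th))) ** 2)
```

$\Theta$ is manifestly non-negative, and the configuration's symmetry under $\phi \to \frac{\pi}{2}-\phi$ is reproduced numerically (evaluate slightly away from $\theta=0$, where the $\phi$ term degenerates to $0/0$).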
\subsubsection{Large K Approximation}
Note that in most calculations with $K \gtrsim 1$ $(0<a_{L}<1)$, $r<\frac{\pi}{4}$, $\frac{\pi}{4}<r_L<\frac{\pi}{2}$, and we can utilize a ``large $K$'' approximation (LKA):
$$\sinh nK\pi \simeq \cosh nK\pi \simeq \frac{1}{2}e^{nK\pi},$$
where the amplitude of the neglected terms varies depending on $n$ (and $K$), but in most applications of interest with $K > 1$ the neglected terms are less than $1\%$ relative to the dominant terms. Note that special care must be taken over which terms to neglect in the case $r\sim\frac{\pi}{4}$, where various terms with exponential factors such as $e^{K\pi/2}$ and $e^{K(\pi-2r)}$ are comparable. In such cases analytic computations for precise values of $\theta$ and $\phi$ are preferable. For example, within the LKA and for $\chi=r$ we obtain
$$G_{,r}=\sum^{\bar 3}_{L=\bar 1} \frac{-a_L\cos r}{(1-a_L^2\sin^2 r)^{\frac{1}{2}}}A\left(r_L\right),$$
where
\begin{align} \nonumber
A\left(r_L\right)&=\frac{-\cos r_L}{\sin^2r_L}\bigg[\sinh K(\pi-r_L) + \sinh Kr_L\bigg]\\ \nonumber
& + \frac{K}{\sin r_L}\bigg[ -\cosh K(\pi-r_L) + \cosh Kr_L\bigg]\\
&\simeq \frac{-\frac{1}{2}e^{K(\pi - r_L)}}{\sin r_L}\bigg[\big(K+\cot r_L\big) + o\left(e^{-K(\pi -2r_L)}\right)\bigg].
\end{align}
Therefore, in the LKA the correction terms can be neglected. Indeed, for $r<\frac{\pi}{4}$ and $r_L>\frac{\pi}{4}$,
$$\frac{A\left(r_L\right)}{A(r)} \sim e^{K\left(r-r_L\right)}.$$
For $r\simeq r_0 < \frac{\pi}{4}$, $r_L \simeq \arccos(\bar A_L \sin r_0)$ ($|\bar A_L|<1$), so that $r_L>\frac{\pi}{4}>r_0$ (i.e., $r-r_L<0$), and as $r_0$ decreases $r_L-r$ increases. Hence the terms $A(r_L)$ become increasingly negligible relative to $A(r)$.
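The exponential suppression $A(r_L)/A(r)\sim e^{K(r-r_L)}$ can be checked numerically (an illustrative sketch; the prefactors contribute an $O(1)$ correction to the ratio):

```python
import math

def A(K, r):
    # F_{,r}, the radial derivative of the pair profile F(r)
    s = math.sinh(K * (math.pi - r)) + math.sinh(K * r)
    c = math.cosh(K * (math.pi - r)) - math.cosh(K * r)
    return -((math.cos(r) / math.sin(r)) * s + K * c) / math.sin(r)

r, a_L = 0.5, 0.8                      # sample direction (illustrative)
r_L = math.acos(a_L * math.sin(r))     # here r_L > pi/4 > r

def ratio(K):
    return A(K, r_L) / A(K, r)
```

The ratio decays with increasing $K$, and its logarithmic slope in $K$ approaches $r-r_L$, as claimed.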
As noted above, and as we shall see below, $r\sim \frac{\pi}{4}$ must be treated separately.
We also note that for small $r$, the $\frac{1}{\sin r}$ terms in \eqref{daggar1} and the $\sin r$ factors in $G_{,\theta}$ and $G_{,\phi}$ cancel and there are
consequently no ``degeneracies'' (i.e., the $r\to 0$ limit of Eq. \eqref{daggar1} is well-defined). However, the computations here are not valid for small $r$, and we utilize the qualitative analysis described earlier.
In fact, in almost all cases of interest we can show that the $G_{,\theta}$ and $G_{,\phi}$ terms in \eqref{daggar1} can be neglected.
\subsubsection{Comments}
We have obtained an analytical expression for $\Theta$. We now wish to examine the roots, $r=r_{gh}$, of $\Theta=0$. We can study this for a particular value for $K$. Since the physical range for $K$ is $0.45<K \lesssim 1.09$, interesting values for $K$ may include $K=\frac{2}{\pi}\sim 0.63$, $K=1$, and $K=\frac{4}{\pi}$.
In the following applications we can consider the LKA approximation (where appropriate).
We might also consider particular ranges for $r$. For example, we considered the small $r$ limit earlier (where $r_{gh}$ decreases with increasing $K$). However, $r_{gh}$ is not expected to be very small. We can do detailed approximate calculations for $r \simeq \frac{\pi}{4}$.
Numerical plots are often useful.
It is also of interest to study $r_{gh}$ for particular and important angular values (e.g., $\theta=0$ and $\theta=\frac{\pi}{2}$), or for a range of small angular values (of $\theta$, for example, about $\theta=0$ for $\phi=\frac{\pi}{2}$).
\newpage
\subsection{The case $\theta=0$}
Here we have that $a_{\bar 1}=1$, $a_{\bar 2}=a_{\bar 3}=0$,
\begin{align}
b^r_{\bar 1} &= \cos r,\\
b^r_{\bar 2} &= b^r_{\bar 3}=0,\\
b^{\phi}_{L}&=0 \hspace{3mm}\text{ (all $L$),}\\
b^{\theta}_{\bar 1}&=0, \hspace{3mm} b^{\theta}_{\bar 2, \bar 3} = \sin r(\cos\phi, \sin\phi),
\end{align}
and
\begin{equation}
G_{,r} = \sum^{\bar 3}_{L =\bar 1}\frac{-a_L\cos r}{(1-a_L^2\sin^2 r)^{\frac{1}{2}}}A(r_L) = - A(r_{\bar 1}).
\end{equation}
We have that
\begin{align}
G_{,\phi} &=0,\\
G_{,\theta} &= S_{\bar 2}b_{\bar 2}^{\theta}A(r_{\bar 2}) + S_{\bar 3}b_{\bar 3}^{\theta}A(r_{\bar 3}),
\end{align}
so that
\begin{equation}
\frac{1}{\sin r}G_{,\theta}=-\cos\phi A(r_{\bar 2})-\sin\phi A(r_{\bar 3})=0,
\end{equation}
since $\cos r_L=a_L\sin r = 0$, so that $r_L=\frac{\pi}{2}$ ($L=\bar 2, \bar 3$), and $A\left(\frac{\pi}{2}\right)=0$.
Hence, on $\theta=0$, we obtain
\begin{equation}
G_{,\theta}=G_{,\phi}=0; ~~\Theta=\left[F_{,r}+G_{,r}\right]^2.
\end{equation}
This implies that there is no influence from BH$_5$--BH$_8$ on $\Theta$ on
$\theta = 0$. We also note that in this case the distance from BH$_1$ to BH$_2$ is $\pi$
(with halfway point $\frac{\pi}{2}$).
That is, for $\theta=0$, and using $r_{\bar 1}=\frac{\pi}{2}-r$, we have that
\begin{align} \label{E9}
\sqrt{\Theta} &= F_{,r} + G_{,r} = A(r) - A(\frac{\pi}{2}-r)\\ \nonumber
&=-\frac{1}{\sin r} \left[\frac{\cos r}{\sin r}\big[\sinh K(\pi-r) + \sinh Kr\big] + K\big[\cosh K(\pi-r)-\cosh Kr\big]\right]\\ \nonumber
&+\frac{1}{\cos r}\bigg\{\tan r\left[\sinh K\left(\frac{\pi}{2}+r\right)+\sinh K\left(\frac{\pi}{2}-r\right)\right]\\ \nonumber
&+K\left[\cosh K\left(\frac{\pi}{2}+r\right) - \cosh K\left(\frac{\pi}{2}-r\right)\right]\bigg\}.
\end{align}
We note that there is no remaining angular dependence in this expression.
We also remark that $r=r_{gh}=\frac{\pi}{4}$ is a solution of $\Theta=0$ for all $K$. We demonstrate in Figure \ref{fig1} that this is the only root.
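Since $A(r)-A(\frac{\pi}{2}-r)$ is antisymmetric about $r=\frac{\pi}{4}$, the root at $\frac{\pi}{4}$ is exact; a simple sign scan (Python; an illustrative sketch of ours, using the exact expression for $A$) confirms the behaviour near this root:

```python
import numpy as np

def A(r, K):
    S = np.sinh(K * (np.pi - r)) + np.sinh(K * r)
    C = -np.cosh(K * (np.pi - r)) + np.cosh(K * r)
    return (-np.cos(r) / np.sin(r)**2) * S + (K / np.sin(r)) * C

def f(r, K):
    """sqrt(Theta) on theta = 0, i.e. A(r) - A(pi/2 - r)."""
    return A(r, K) - A(np.pi / 2 - r, K)

for K in (2 / np.pi, 1.0, 4 / np.pi):
    r = np.linspace(0.05, np.pi / 2 - 0.05, 4001)
    s = np.sign(f(r, K))
    crossings = int(np.sum(s[:-1] * s[1:] < 0))
    print(K, f(np.pi / 4, K), crossings)    # root at pi/4; sign changes on the scan
```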
Note that $\cos r_{\bar 1}=a_{\bar 1}\sin r = \sin r$, so that for $r=\frac{\pi}{4}-x$, $r_{\bar 1} = \frac{\pi}{4}+x$ and we can do a small $x$ approximation close to $r\cong\frac{\pi}{4}$. We find that in this case
\begin{equation}\label{doubledaggar}
\Theta^{\frac{1}{2}}
\cong -\frac{1}{\sqrt2} e^{\frac{3}{4}K\pi}\bigg\{2(3+2K+K^2)x - (7+8K+2K^2)x^2\bigg\},
\end{equation}
\newpage
\begin{figure}[h!]
\includegraphics[width=\linewidth]{fig1.png}
\caption{The roots of
$A(r) - A(\frac{\pi}{2}-r)$
in Eq.
(\ref{E9}).}
\label{fig1}
\end{figure}
\noindent where all relevant terms are retained (even in the LKA, since several terms have comparable exponential factors). Of course, $\Theta=0$ when $x=0$, which corresponds to $r_{gh}=\frac{\pi}{4}$, consistent with the case $\theta=0$ in which the distance from BH$_1$ to BH$_2$ is $\pi$. (But this implies that we should also study the case $\theta\neq0$.) If $x\neq 0$, we find that at $r_{gh}$,
\begin{equation}
x=\frac{2\big(3+2K+K^2\big)}{7+8K+2K^2}.
\end{equation}
We note that as $K$ increases, the value for $x$ decreases (consistent with the earlier comments), but for $K$ in the physical range this gives rise to a nonsensical result (e.g., $r_{gh}$ is larger than the cell size), and one for which the approximation scheme breaks down. For modest values of $K$, Eq. \eqref{doubledaggar} receives corrections of the order of
$$e^{-\frac{K\pi}{2}}\cdot2\big(3-2K+K^2\big)x,$$
which is always positive and leads to a small increase in the derived value of $r_{gh}$. We conclude that this analysis is inconsistent for $x\neq0$ (in particular, all approximations break down), and a value of $r_{gh}$ other than $r_{gh}=\frac{\pi}{4}$ is not possible. An attempt to analyze the case $r=\frac{\pi}{2}-x$ ($\frac{\pi}{2}$ is the half way point between BH$_1$ and BH$_2$) for small $x$ leads to inconsistencies.
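Evaluating this expression across the physical range of $K$ makes the breakdown explicit, since the resulting $x$ is of order unity rather than small (a quick check, Python):

```python
import numpy as np

def x_candidate(K):
    """Root of the quadratic truncation: x = 2(3 + 2K + K^2)/(7 + 8K + 2K^2)."""
    return 2.0 * (3.0 + 2.0 * K + K**2) / (7.0 + 8.0 * K + 2.0 * K**2)

for K in (0.45, 2 / np.pi, 1.0, 4 / np.pi):
    print(K, x_candidate(K))    # all of order unity, never << 1
```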
\newpage
\subsection{The case $\phi=\frac{\pi}{2}$} Since $\phi=\frac{\pi}{2}$ always in this case, we choose BH$_7$ ($\frac{\pi}{2},\frac{\pi}{2},\frac{\pi}{2}$) so that $a_{\bar 3}=a_7=-\sin\theta\sin\phi$ (and the distance from BH$_1$ to BH$_3$ and BH$_7$ is $\frac{\pi}{2}$). Recall that $A(\pi-r)=-A(r)$ and $A(\frac{\pi}{2})=0$. In this case, we thus have that:
\begin{table}[h!]
\begin{center}
\label{tab:table1}
\begin{tabular}{r|c|c|c|c|c|c}
& $a_{L}$ & $r_L$ & $\frac{b_L^r}{\cos r}$ & $\frac{b_L^{\theta}}{\sin r}$ & $\frac{b_L^{\phi}}{\sin r}$ & $S_L$\\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
\hline
$\bar 1$ & $\cos\theta$ & $r_{\bar 1}$ & $\cos\theta$ & $-\sin\theta$ & $0$ & $S_{\bar 1}$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 2$ & $0$ & $\frac{\pi}{2}$ & $0$ & $0$ & $-\sin\theta$ & $-1$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 3$ & $-\sin\theta$ & $r_{\bar 3}$ & $-\sin\theta$ & $-\cos\theta$ & $0$ & $S_{\bar3}$ \\
\end{tabular}
\end{center}
\end{table}\\
where $S_L=-\big(1-a_L^2\sin^2r\big)^{\frac{1}{2}}$ and $\cos r_L=a_L\sin r$, with $a_{\bar 3}=-\tan\theta\, a_{\bar 1}$.
From $G_{,\chi}=\sum_{L=\bar 1}^{\bar 3}S_L b_L^{\chi} A(r_L)$, we then obtain
\begin{align} \label{eqnG1}
G_{,r}&=\frac{-\cos\theta\cos r}{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}\left[A(r_{\bar 1})+\frac{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}{\left(1-\sin^2\theta\sin^2 r\right)^\frac{1}{2}}\cdot (-\tan\theta) A\left(r_{\bar 3}\right)\right],\\
G_{,\theta}&=\frac{\sin\theta\sin r}{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}\left[ A(r_{\bar 1})+\frac{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}{\left(1-\sin^2\theta\sin^2 r\right)^\frac{1}{2}}\cdot (-\cot\theta) A\left(r_{\bar 3}\right)\right], \label{eqnG11}
\end{align}
and $G_{,\phi}=0$. Note that for $\theta=\frac{\pi}{4}$ ($r_{\bar 3}=\pi-r_{\bar 1}$) we have that
\begin{align}
G_{,r}&=\frac{-2\cos\theta\cos r}{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}\left[ A\left(r_{\bar 1}\right)\right],\\
G_{,\theta}&=\frac{2\sin\theta\sin r}{\left(1-\cos^2\theta\sin^2 r\right)^\frac{1}{2}}\left[ A\left(r_{\bar 1}\right)\right].
\end{align}
Now, $\cos r_{\bar 1}=\cos\theta\sin r$, $\cos r_{\bar 3}=-\sin\theta\sin r$. Writing $r=r_0+x$, from \eqref{dagger} we have that
\begin{align}
r_{\bar 1}&\cong R_{\bar 1} + P_{\bar 1}x; \qquad R_{\bar 1} = \arccos(\cos\theta\sin r_0), \qquad P_{\bar 1}=\frac{-\cos\theta\cos r_0}{\sqrt{1-\cos^2\theta\sin^2 r_0}},\\
r_{\bar 3}&\cong R_{\bar 3} + P_{\bar 3}x;\qquad R_{\bar 3}=\arccos(-\sin\theta\sin r_0), \quad P_{\bar 3}=\frac{\sin\theta\cos r_0}{\sqrt{1-\sin^2\theta\sin^2 r_0}}.
\end{align}
The solution for $\Theta=0$ when $\theta=0$ is $r_0=r_{gh}=\frac{\pi}{4}$ from earlier.
For small $\theta$, we write $\theta=\delta\theta$. We can neglect $G_{,\theta}$ and so $\sqrt{\Theta}\cong F_{,r} + G_{,r}$. We then find that
\begin{equation}
G_{,r}=-\left[ A\left(r_{\bar 1}\right)-\frac{\delta\theta}{\sqrt{2}} A\left(r_{\bar 3}\right)\right],
\end{equation}
where $r_{\bar 1}=\frac{\pi}{4}-x$ and $r_{\bar 3}=\frac{\pi}{2} + \frac{1}{\sqrt2}\delta\theta$.
From Eq. \eqref{dagger}, we have that for $\alpha=\frac{\pi}{4}$ (so that $\bar r = \frac{\pi}{4} + \beta x$, where $\beta=1$ and $\beta=-1$ correspond to $r$ and $r_{\bar 1}$, respectively):
\begin{align}
A\left(\frac{\pi}{4} + \beta x\right)= & A\left(\frac{\pi}{4}\right) \\ \nonumber
& +\beta x \sqrt{2}\left\{\left(3+K^2\right)\left(\sinh \frac{K\pi}{4} + \sinh \frac{3K\pi}{4}\right) + 2K\left(\cosh \frac{3K\pi}{4} - \cosh \frac{K\pi}{4}\right)\right\}.
\end{align}
And $r_{\bar 3}=\frac{\pi}{2} + \frac{1}{\sqrt2}\delta\theta$, so that from \eqref{dagger} with $\alpha = \frac{\pi}{2}$ and $\beta x \equiv \frac{1}{\sqrt2} \delta\theta$, we find that
\begin{equation}
A\left(r_{\bar 3}\right) \cong \left\{\sqrt2(1+K)\sinh\frac{K\pi}{2}\right\}\delta\theta, \nonumber
\end{equation}
since $A\left(\frac{\pi}{2}\right)=0$.
Now,
\begin{align}
\Theta^{\frac{1}{2}} = F_{,r} + G_{,r} &= A(r) - \left[A\left(r_{\bar 1}\right) - \frac{\delta\theta}{\sqrt2} A\left(r_{\bar 3}\right)\right] \nonumber \\
&= 2\sqrt2\cdot I(K)x + (1+K)\sinh \frac{K\pi}{2}\left(\delta\theta\right)^2
\end{align}
(as expected -- due to the choice of $r_0=\frac{\pi}{4}$ -- the $O(x^0)$ terms vanish), where
\begin{align}
I(K) \equiv \left(\sinh \frac{K\pi}{4} + \sinh \frac{3K\pi}{4}\right)\left(3+K^2\right) + 2K\left(\cosh \frac{3K\pi}{4} - \cosh \frac{K\pi}{4}\right).
\end{align}
We note that ($\cosh \frac{3K\pi}{4} - \cosh \frac{K\pi}{4}$) is always positive (e.g., $\sim \frac{11}{4}$ for $K=\frac{2}{\pi}$ and $\sim 20$ for $K=\frac{4}{\pi}$) and thus $I(K)>0$. In the LKA,
\begin{equation}
I(K) \sim \frac{1}{2}\left(3+2K+K^2\right)e^{\frac{3K\pi}{4}},
\end{equation}
which increases with $K$! Hence $\Theta^\frac{1}{2}=0$ implies that
\begin{equation}
x_{gh} = \frac{-(1+K)\sinh \frac{K\pi}{2} (\delta\theta)^2}{2\sqrt2 \cdot I(K)},
\end{equation}
which is always negative. Therefore, as $\delta\theta$ increases, $x_{gh}$ is negative and decreases. (Note that for $K=\frac{2}{\pi}$, $x_{gh}\simeq - \frac{1}{20}(\delta\theta)^2$, and for $K=\frac{4}{\pi}$, $x_{gh} \simeq -\frac{1}{50}(\delta\theta)^2$.) Therefore, {\em{around}} $\theta=0$:
$$r_{gh}=\frac{\pi}{4}+x_{gh},$$
and the GH decreases below the critical value $\frac{\pi}{4}$: as $\delta\theta$ grows, $x_{gh}$ remains negative and $|x_{gh}|$ increases.
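The coefficient $x_{gh}/(\delta\theta)^2$ can be evaluated directly from the expressions above. The sketch below (Python; illustrative) checks the sign and the trend with $K$, as well as the accuracy of the LKA form of $I(K)$; the printed magnitudes are of the same order as the rough estimates quoted above:

```python
import numpy as np

def I_exact(K):
    """I(K) as defined above."""
    return ((np.sinh(K * np.pi / 4) + np.sinh(3 * K * np.pi / 4)) * (3 + K**2)
            + 2 * K * (np.cosh(3 * K * np.pi / 4) - np.cosh(K * np.pi / 4)))

def I_lka(K):
    """LKA form: (1/2)(3 + 2K + K^2) e^{3 K pi / 4}."""
    return 0.5 * (3 + 2 * K + K**2) * np.exp(3 * K * np.pi / 4)

def xgh_coeff(K):
    """x_gh / (delta theta)^2 from the expression above."""
    return -(1 + K) * np.sinh(K * np.pi / 2) / (2 * np.sqrt(2) * I_exact(K))

for K in (2 / np.pi, 4 / np.pi):
    print(K, xgh_coeff(K))                     # negative; magnitude decreases with K
print(I_lka(4 / np.pi) / I_exact(4 / np.pi))   # close to 1
```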
\newpage
\subsection{The case $\theta=\frac{\pi}{2}$}
The distance from BH$_1$ to BH$_5$ and BH$_7$ (i.e., we take $\bar 2=5$, $\bar 3=7$, and $\bar 1=3$) is $\frac{\pi}{2}$. Using the definitions for $S_L$, $b_L^\chi$, and $\cos r_L=a_L\sin r$ (and the expression for $G_{,\chi}$) from earlier, we have that
\begin{table}[h!]
\begin{center}
\label{tab:table2}
\begin{tabular}{r|c|c|c|c|c|c|c}
& $a_{L}$ & $\theta=\frac{\pi}{2}$ & $r_L$ & $S_L$ & $\frac{b_L^r}{\cos r}$ & $\frac{b_L^{\theta}}{\sin r}$ & $\frac{b_L^{\phi}}{\sin r}$\\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
\hline
$\bar 1$ & $\cos\theta$ & $0$ & $\frac{\pi}{2}$ & $-1$ & $0$ & $-1$ & $0$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 2$ & $-\sin\theta\cos\phi$ & $-\cos\phi$ & $r_{\bar 2}$ & $S_{\bar 2}$ & $-\cos\phi$ & $0$ & $\sin\phi$ \\
\hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm} & \hspace{2mm}\\
$\bar 3$ & $-\sin\theta\sin\phi$ & $-\sin\phi$ & $r_{\bar 3}$ & $S_{\bar 3}$ & $-\sin\phi$ & $0$ & $-\sin\phi$ \\
\end{tabular}
\end{center}
\end{table}\\
Noting that $A\left(\frac{\pi}{2}\right)=0$ and assuming $0<\phi<\frac{\pi}{2}$, we obtain $G_{,\theta}=0$ and
\begin{align}
G_{,r} &= \frac{\cos r\cos\phi}{\sqrt{1-\cos^2\phi\sin^2 r}}\left(A\left(r_{\bar 2}\right) + \frac{\left(1-\cos^2\phi\sin^2 r\right)^{\frac{1}{2}}}{\left(1-\sin^2\phi\sin^2 r\right)^{\frac{1}{2}}} \tan\phi \cdot A\left(r_{\bar 3}\right)\right),\\
G_{,\phi} &= \frac{-\sin r\sin\phi}{\sqrt{1-\cos^2\phi\sin^2 r}}\left(A\left(r_{\bar 2}\right)-\frac{\left(1-\cos^2\phi\sin^2 r\right)^{\frac{1}{2}}}{\left(1-\sin^2\phi\sin^2 r\right)^{\frac{1}{2}}} A\left(r_{\bar 3}\right)\right).
\end{align}
If $\phi=\frac{\pi}{4}$ (as well as $\theta=\frac{\pi}{2}$), then $r_{\bar 3}=r_{\bar 2}$ where $\cos r_{\bar 2} = -\frac{1}{\sqrt 2}\sin r$. We then obtain
\begin{align} \label{eqnG2}
G_{,r} &= \frac{\frac{1}{\sqrt 2}\cos r}{\sqrt{1-\frac{1}{2}\sin^2 r}}\bigg[ A\left(r_{\bar 2}\right) + A\left(r_{\bar 3}\right)\bigg] = \frac{\sqrt{2}\cos r}{\sqrt{1-\frac{1}{2}\sin^2 r}}A\left(r_{\bar 2}\right),\\
G_{,\phi} &= \frac{-\frac{1}{\sqrt{2}}\sin r}{\sqrt{1-\frac{1}{2}\sin^2 r}}\bigg[ A\left(r_{\bar 2}\right) - A\left(r_{\bar 3}\right)\bigg] = 0.
\end{align}
Hence
\begin{equation}
\sqrt{\Theta}(K,r) \equiv F_{,r} + G_{,r}. \label{eqnroot}
\end{equation}
The roots of Eq. \eqref{eqnroot} (for which $r_{gh}$ is always less than $\frac{\pi}{4}$) are displayed in Figure \ref{fig2}.
Doing an expansion for $\phi$ around $\frac{\pi}{4}$ (i.e., $\phi=\frac{\pi}{4}+\delta\phi$), and neglecting $G_{,\phi}$ (which is very small), we find that $r_{\bar 3}-r_{\bar 2} \sim O(\delta\phi)$ and $\Theta^{\frac{1}{2}} \cong \bar{A}(r,K) + \beta(r,K)\,\delta\phi$, where the linear correction $\beta$ is non-zero (and of either sign) for general $r$ and $K$.
\newpage
\begin{figure}[h]
\includegraphics[width=\linewidth]{fig2.png}
\caption{The roots of \eqref{eqnroot} for $F_{,r} + G_{,r}$.}
\label{fig2}
\end{figure}
\newpage
\section{Conclusions}
In order to study whether black holes can persist in a Universe that undergoes a cosmological bounce, we have investigated what happens to the number density of black holes as the minimum of expansion is approached and whether it is
possible that the filling factor, $F$, never reaches $1$.
To this end we have considered the class of
black hole lattice models in a hyperspherical cosmology which
undergo a dynamical bounce due to a scalar field \cite{bounce}.
We have derived time evolving solutions
of instantaneously-static models
perturbatively, and we have utilized the notion of a GH, which can be determined simply from curvature invariants, to characterize the black hole horizons.
We have focused on the particular case of
eight regularly-spaced black holes with equal masses and considered the more relevant case of $\kappa > 1$. Indeed, we have explicitly
computed the invariants for these exact solutions necessary to determine the conditions for the existence of a GH.
\subsubsection{Summary of $\kappa > 1$ case}
We have derived analytical expressions for $\Theta$, where the root of $\Theta=0$ defines the GH, $r_{gh}$. We are most interested in the physical values $0.45 < K \lesssim 1.09$ (in which the black holes were found not to merge in the analysis of CCC), but we also considered qualitative results in the small-$r$ analysis
and for larger values of $K$ in the LKA.
We considered the specific case $\theta=0$ in detail. We obtained the exact expression for $\Theta=0$ in Eq. \eqref{E9}, which has the exact solution $r_{gh}=\frac{\pi}{4}$ for all values of $K$, which is much less than half the distance $\left(\frac{\pi}{2}\right)$ between BH$_1$ and BH$_2$ on $\theta=0$.
We then studied the case $\phi=\frac{\pi}{2}$ (where half the distance from BH$_1$ to BH$_3$ and BH$_7$ is $\frac{\pi}{4}$). In this case $G_{,\phi}=0$ and $G_{,\theta}$ and $G_{,r}$ are given by Eqs. \eqref{eqnG1} and \eqref{eqnG11}. We then considered the subcase of $\theta=0+\delta\theta$, and found that the GH always decreases from its value of $\frac{\pi}{4}$ at $\theta=0$ as $|\delta\theta|$ increases.
Finally, we considered the case $\theta=\frac{\pi}{2}$ ($G_{,\theta}=0$, and $G_{,r}$ and $G_{,\phi}$ given by Eqs. \eqref{eqnG2} and \eqref{eqnG22}), and briefly discussed the subcase $\phi=\frac{\pi}{4} + \delta\phi$.
We conclude that the black holes do not merge either before or at the bounce in these models.
\subsubsection{Discussion}
If we assume that the area of a black hole horizon in a contracting cosmology
is increasing,
then during contraction the distance between the black holes, $r_{sep}$ (which is $\frac{\pi}{2}$ at the bounce at $t=0$ in our models), is decreasing, and hence if $F$ at $t=0$ is less than unity, then necessarily $F<1$ before the bounce.
In addition, the Schwarzschild mass of a black hole (and hence the black hole horizon)
increases as $\kappa$ increases, and so there is an upper limit on $\kappa$ in order
for $F<1$ at the beginning of the collapse in the calculations; i.e., $\kappa$ cannot be
arbitrarily large.
For an equally spaced BHL with equal separation, $r_{sep}$, suppose that $r_{crit}$ is the separation value at the bounce such that all of the black holes merge at
or before the bounce. That is, all black holes will merge if $r_{sep} < r_{crit} $, and none will merge if $r_{sep} > r_{crit}$. If we then have a distribution of black holes with an arbitrary (random) spacing
$r_{sep}$, there will be hierarchical merging depending on the relative distances between the black holes. Let the average distance between the black holes in this distribution be $r_{av}$, and suppose that $r_{av} < r_{crit}$. Then it is reasonable to expect that black holes with $r_{sep}$ less than or approximately equal to $r_{av}$ will merge, and that black holes with $r_{sep}$ sufficiently greater than $r_{crit}$ will not. That is, $r_{crit}$ is a characteristic scale above which (on average, in some sense) the black holes in the random distribution will persist.
Therefore, it might be expected that in general the results
for randomly distributed black holes would be qualitatively the same as for
the case of equally spaced BHL discussed above; i.e., for $r_{sep} > r_{av}> r_{crit} $, there are
black holes that will not merge before the bounce.
\newpage
\section{Appendix: horizons for vacuum black holes}
Following \cite{DurkClif}, the vacuum constraint Eqs. are:
\begin{equation}
\mathcal{R} + K^2 - K_{ij}K^{ij} = 0, ~~~
D_j(K_i^{\,\,j} - \delta_i^{\,\,j}K) = 0 \, ,
\end{equation}
where
\begin{equation}
\label{eq:ext}
K_{ij} \equiv -\frac{1}{2}\partial_t h_{ij} \, ,
\end{equation}
and $\partial /\partial t$ is a vector orthogonal to the initial hyper-surface, which is chosen to be symmetric relative to a reversal of time.
This means that this surface is static at this instant, so that the extrinsic curvature vanishes, the second constraint above is automatically satisfied, and we have that $\mathcal{R} = 0$. If we now rescale the metric conformally (i.e., $h_{ij}=\Omega^4 \tilde{h}_{ij}$), then
\begin{equation}
\mathcal{R} = \Omega^{-4}\tilde{\mathcal{R}} - 8 \Omega^{-5} \tilde{D}^2 \Omega = 0 \, \label{eqnG22},
\end{equation}
where $\tilde{\mathcal{R}}$ is the Ricci curvature scalar of the conformal 3-space and $\tilde{D}_i$ denotes covariant differentiation on it. If we now choose $\tilde{h}_{ij}$ to be the round 3-sphere metric, so that the physical line element is
$ds^2 = \Omega^4[d\xi^2 + \sin^2\xi ( d\theta^2 + \sin^2\theta d\phi^2)]$,
then $\tilde{\mathcal{R}}=6$ and both of the Eqs. above are satisfied at the moment of time-reversal, and $\Omega$ satisfies
\begin{equation}
\label{eq:linear}
\tilde{D}^2 \Omega = \frac{3}{4} \Omega \, .
\end{equation}
This equation is linear in $\Omega$, and admits solutions built from terms of the form $\Omega \propto 1/\sin(\xi/2)$, where each such term corresponds to an additional black hole positioned at a different location on the conformal 3-sphere. That is,
additional terms in the conformal factor $\Omega$ can be included by rotating the 3-sphere by an arbitrary angle and then adding a new term of the form $1/\sin(\xi/2)$ in the resulting coordinates, thereby generating by summation the conformal factor $\Omega$ representing multiple black holes:
\begin{equation} \label{psi0}
\Omega(\xi, \theta, \phi) = \sum_{i=1}^{N} \frac{\sqrt{\tilde{m}_i}}{2f_i(\xi, \theta, \phi)} \, ,
\end{equation}
where $N$ represents the total number of black holes, the $\tilde{m}_i$ are arbitrary constants, and
\begin{equation}
f_i = \sin\left({\frac{1}{2}\arccos(h_i)}\right) \, ,
\end{equation}
where the $h_i$ are defined by
\begin{equation}\label{eq:functions}
h_i = w_i \cos{\xi} + x_i \sin{\xi}\cos{\theta} + y_i \sin{\xi}\sin{\theta}\cos{\phi} + z_i \sin{\xi}\sin{\theta}\sin{\phi} \, ,
\end{equation}
for a divergent term at $(w_i, x_i, y_i, z_i)$ (which represents the position in the initial data of a point-like mass).
We can consequently construct the (exact) initial data for a bouncing cosmology containing $N$ distinct black holes \cite{DurkClif}. Assuming that these black holes are the same distance from each of their closest neighbours,
they form a perfect regular lattice.
The conformal 3-sphere could then be tiled with
(one of six possible) regular polyhedra by positioning a Schwarzschild (black hole) mass at the centre of each \cite{clifton2}.
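As a concrete illustration (our own example; the specific lattice below is an assumption, not taken from \cite{DurkClif}), the conformal factor of Eq. \eqref{psi0} can be assembled for eight equal masses placed at the points $\pm e_a$ along the four coordinate axes, and it inherits the discrete symmetries of the lattice:

```python
import numpy as np

# eight unit 4-vectors (w, x, y, z) = +-e_a (an assumed illustrative lattice)
POSITIONS = [s * np.eye(4)[a] for a in range(4) for s in (1.0, -1.0)]

def Omega(xi, theta, phi, m=1.0):
    """Conformal factor: sum of sqrt(m_i)/(2 f_i), f_i = sin(arccos(h_i)/2)."""
    n = np.array([np.cos(xi),
                  np.sin(xi) * np.cos(theta),
                  np.sin(xi) * np.sin(theta) * np.cos(phi),
                  np.sin(xi) * np.sin(theta) * np.sin(phi)])
    total = 0.0
    for p in POSITIONS:
        h = float(np.dot(p, n))      # h_i: inner product with the source position
        total += np.sqrt(m) / (2.0 * np.sin(0.5 * np.arccos(h)))
    return total

print(Omega(0.4, 0.9, 1.3))                                   # positive everywhere
print(Omega(0.4, 0.9, 1.3) - Omega(np.pi - 0.4, 0.9, 1.3))    # ~0: lattice symmetry
```

The factor diverges at each lattice site (e.g., as $\xi\to 0$ for the mass at $(1,0,0,0)$), as expected for point-like sources.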
These models can be generalised by relaxing the assumption of a perfect equidistant spacing.
Let us review the
identification of the locations of the apparent black hole horizons in the
cosmologies \cite{DurkClif}
(which are used to determine when the horizons of nearby black holes remain distinct). We note that in the investigation of the clustering of two adjacent black holes, as the objects approach each other an additional apparent horizon may appear which subsumes them both; when this happens the individual black holes retain their own horizons, but also acquire a new shared horizon.
\subsection{Locating apparent horizons}
Apparent horizons are 2-dimensional marginally outer trapped surfaces (MOTS), defined mathematically by the vanishing of the expansion of the (outwardly directed) null normal to the surface; i.e., $\nabla_\mu k^{\mu} =0$. The position of apparent horizons can be obtained by the following two different methods.
\paragraph{The Area Method:} In the case of time symmetry, for all MOTS the (outward-pointing) light-like normal vector can be decomposed in terms of a time-like vector $u^{\mu}$ and a space-like vector $e_1^{\mu}$ (orthogonal to $u^{\mu}$), such that
\begin{equation} \label{exhor}
\nabla_{\mu} k^{\mu} = \frac{1}{\sqrt{2}} \nabla_{\mu} u^{\mu}+\frac{1}{\sqrt{2}} \nabla_{\mu} e_1^{\mu} = 0 \, .
\end{equation}
The vanishing of the extrinsic curvature of the initial data
then implies that $\nabla_{\mu} u^{\mu}=0$ and $\nabla_{\mu} e_1^{\mu} =0$, so that $e_1^{\mu}$ is orthogonal to that apparent horizon, which is
consequently an extremal (minimal) closed surface in the 3-space.
If the gravitational field near each black hole is approximately spherically symmetric,
then the position of the apparent horizon can be estimated by determining the value of $\xi$ that minimizes
\begin{equation}
\label{eq:area}
A(\xi) = \int_0^{2\pi} \int_0^{\pi} \Omega^4 \sin^2(\xi) \sin(\theta) d\theta d\phi \, .
\end{equation}
This method is straightforward from a computational point of view. However, it is only reliable when the black hole horizon is approximately spherically symmetric (which is not the case when the black holes are close together).
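As a simple worked example of the area method (our own illustration, not from the cited works): for two equal masses at the antipodal points $\xi=0$ and $\xi=\pi$, the conformal factor reduces to $\Omega = \frac{\sqrt{m}}{2}\left[\frac{1}{\sin(\xi/2)}+\frac{1}{\cos(\xi/2)}\right]$, and by symmetry the extremal surface of $A(\xi)$ sits at $\xi=\frac{\pi}{2}$:

```python
import numpy as np

def Omega(xi, m=1.0):
    """Two equal masses at the antipodal points xi = 0 and xi = pi."""
    return 0.5 * np.sqrt(m) * (1.0 / np.sin(xi / 2) + 1.0 / np.cos(xi / 2))

def area(xi, m=1.0):
    """A(xi) up to the constant 4*pi factor: Omega^4 sin^2(xi)."""
    return Omega(xi, m)**4 * np.sin(xi)**2

xi = np.linspace(0.2, np.pi - 0.2, 1001)
xi_min = xi[np.argmin(area(xi))]
print(xi_min)    # minimal surface at the symmetry point xi = pi/2
```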
\paragraph{The Weyl Tensor Method:} In the case of extrinsically flat initial data, the Ricci identities and the Gauss Eq. can be used to obtain \cite{DurkClif,clifton2}:
\begin{equation} \label{ER}
E_{\mu\nu} = \mathcal{R}_{\mu\nu} \, ,
\end{equation}
where $E_{\mu \nu}$ is the electric part of the Weyl tensor relative to the 4-velocity $u^{\mu}$, and $\mathcal{R}_{\mu\nu}$ is the 3-dimensional Ricci tensor. $\mathcal{R}_{\mu\nu}$ can then be calculated explicitly, and utilizing the Bianchi identities and Eq. (\ref{exhor}), the location of the apparent horizon
can now be identified by the simple necessary condition $\mathbf{e}_1(E^{11}) =0$ along locally rotationally symmetric curves \cite{DurkClif}
(where we choose coordinates so that the frame derivative $\mathbf{e}_1 = \Omega^{-2} \partial_{\xi}$ points along these curves and $ E^{11}$
is a frame component of the electric Weyl tensor)
or, equivalently,
\begin{equation} \label{eq:weyl}
\frac{1}{\Omega^2}\frac{\partial}{\partial \xi} (\Omega^{-4} {\mathcal R}_{\xi \xi}) = 0 \, ;
\end{equation}
i.e., the MOTS are found at points where $\Omega^{-4} {\mathcal R}_{\xi \xi}$ is extremized.
The area method gives a reliable estimate for the position of the shared apparent horizon, at least for small parameter values \cite{DurkClif,clifton2}.
The Weyl tensor method (when the locally rotationally symmetric curves intersect the horizon at points parameterised by $\xi$)
is expected to be more accurate than the area method, particularly when the horizon is not spherical. In the applications in this paper the
Weyl tensor method appears to be related to identifying the GH, and it could be speculated that this might be the case in a wider context.
\newpage
\paragraph{Acknowledgments:}
I would like to thank T. Clifton and B. Carr for helpful discussions, and Nick Layden for help with the figures. Financial support from NSERC of Canada is gratefully acknowledged.
\section{Introduction and Paper Contributions}
\vspace{-0.20cm}
{Sensor and actuator selection problems (SASPs)---which amount to \textit{(i)} finding optimal geographic placements or time-varying selections of sensors or actuators (SAs) (but not necessarily simultaneously), while \textit{(ii)} optimizing state estimation or control metrics---have become prevalent research topics in numerous science and engineering fields.} For instance, in power network applications, sensor selection corresponds to the placement of phasor measurement units for the purpose of power system monitoring \citep{qi2015optimal}, while actuator selection corresponds to the placement of energy storage devices to ensure system controllability \citep{Pequito2013}. Both problems are crucial to achieving more economical and robust power system operation while ensuring that certain minimal requirements, such as allowable tolerances on dynamic estimation error and voltage/frequency deviation, are met.
{Extensive studies have been carried out in the literature to address SASPs for particularly \textit{linear} dynamic systems---see \citep{SUMMERS20143784,Dhingra2014,Taha2018,Nugroho2019124} for some notable references.}
Unlike SASPs in linear(ized) systems, which yield feasible or optimal SA configurations for specific operating points, SASPs for \textit{nonlinear} dynamic systems (NDSs) offer SA selections that are applicable over much larger operating regions, or even globally---this is demonstrated in our prior work through a simple example \citep{Nugroho2019sap}.
Indeed, SASPs for NDSs had not received attention comparable to that given to SASPs for linear systems until recently.
{In \citep{qi2015optimal}, the concept of \textit{empirical} Gramian to quantify the observability of NDSs is considered jointly with the sensor selection problem (SSP) to determine the combination of sensors that maximizes the logarithmic determinant of the empirical Gramian matrix. The joint problem of sensor selection and state observation is investigated in \citep{Haber2017}. In particular, a new method for reconstructing the initial states of NDSs while simultaneously selecting sensors for a given observation window is presented. A different approach is pursued in \citep{Bopardikar2019} where the authors introduce a randomized algorithm for dealing with the SSP and develop theoretical bounds for the eigenvalues and condition number of the observability Gramian. Recently, a method for selecting the control nodes and designing control actions for NDSs, based on an open-loop predictive control framework, is proposed in \cite{Haber2021}.} In spite of these efforts, their applicability to SASPs in unstable NDSs, along with their ability to couple SA selection with estimation- or control-theoretic metrics, unfortunately remains unclear.
The objective of this paper is to find the minimal SA combination for stable/unstable NDSs that ensures stabilization of the estimation error (via minimal sensor selection) and of the closed-loop system through feedback control (via minimal actuator selection). The proposed algorithms developed herein are based on the following observations. First, numerous observer and controller designs for NDSs have been developed in the past two decades. By using Lyapunov stability theory, and given that the nonlinearities in NDSs belong to certain function sets such as \textit{bounded Jacobian} \citep{Phanomchoeng2010b}, \textit{Lipschitz continuous} \citep{Phanomchoeng2010}, \textit{one-sided Lipschitz} \citep{Abbaszadeh2010}, \textit{quadratically inner-bounded} \citep{Abbaszadeh2010}, and \textit{quadratically bounded} \citep{guo2017decentralized}, the procedure to compute a stabilizing observer/controller gain matrix can be posed as a convex semidefinite program (SDP).
{Second, physical states in many practical NDS models are almost always bounded.
Hence, it can be shown or safely assumed that the corresponding nonlinearities satisfy the properties of the aforementioned function sets in confined state-space regions---regions that are much larger than operating regions of linearization points.}
Without loss of generality, this paper focuses on SASPs for Lipschitz NDSs in which the vector-valued nonlinearity in the system dynamics satisfies the Lipschitz continuity assumption. The proposed method can be easily extended for other types of non-Lipschitz nonlinearities.
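For intuition on the role of the Lipschitz constant (a toy illustration of ours, not the method of this paper): $\gamma_l$ can be lower-bounded by sampling difference quotients of the nonlinearity, and since a smaller certified constant permits fewer activated SAs, any slack in the analytic bound is costly:

```python
import numpy as np

def f(x):
    """Toy nonlinearity (our own choice): f(x) = sin(x), true gamma_l = 1."""
    return np.sin(x)

def lipschitz_lower_bound(fun, grid):
    """Largest difference quotient |f(x) - f(y)| / |x - y| over grid pairs."""
    x = np.asarray(grid)
    fx = fun(x)
    dx = np.abs(x[:, None] - x[None, :])
    df = np.abs(fx[:, None] - fx[None, :])
    mask = dx > 0
    return float(np.max(df[mask] / dx[mask]))

gamma_est = lipschitz_lower_bound(f, np.linspace(-0.5, 0.5, 201))
print(gamma_est)    # approaches (and never exceeds) the true constant 1
```

Restricting the state to a confined region, as discussed above, is what allows such a small constant to be certified in the first place.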
{It is assumed throughout the paper that the inputs and outputs of the NDSs form linear relations with the nonlinear dynamics.
It is also worth noting that we do \textit{not} address the simultaneous selection of SAs. That is, we only consider the minimal selection of sensors with observer design or actuators with controller design.
A preliminary version of this work appeared in \citep{Nugroho2019CDC}, where the SSP for Lipschitz NDSs is studied and solved via a general purpose solver \citep{Lofberg2004}. The key contributions of this paper are summarized as follows:}
\vspace{-0.2cm}
\begin{itemize}[leftmargin=*]
\setlength\itemsep{-0.3em}
\item We present a novel generalized framework to address SASPs in NDSs by: \textit{(i)} parameterizing NDSs' nonlinearities based on the corresponding function sets, \textit{(ii)} formulating the SASPs using a plethora of observer and/or controller designs, and \textit{(iii)} reformulating the resulting SASPs into convex mixed-integer SDPs (MISDPs).
\item Particularly for Lipschitz NDSs with Lipschitz constant $\gamma_l$, we formalize NDS observability and controllability (coined \textit{$\gamma_l$-observability} and \textit{$\gamma_l$-controllability}, respectively) as SDP feasibility problems, and show that it is crucial to obtain the smallest Lipschitz constant in order to activate fewer SAs in SASPs. {In particular, we theoretically investigate the relationship between the Lipschitz constant of the system and the number of activated SAs}.
\item {A new customized branch-and-bound (BnB) algorithm for MISDPs to solve SASPs is proposed. This BnB algorithm, referred to as \textit{structure-exploiting} BnB (SE-BnB), utilizes heuristics and exploits problem structure to efficiently find optimal and suboptimal solutions for SASPs, owing to the reduced number of constraints and/or variables of the corresponding SDP problems when finding upper and lower bounds at each node of the BnB tree.}
\item {We showcase the computational advantages of the proposed BnB routines for solving SSP in comparison with our implementation of the standard BnB algorithm and the $\ell_1$-norm relaxation technique \citep{Candes2008}.}
\end{itemize}
\vspace{-0.2cm}
What distinguishes our approach to SASPs for NDSs from other contemporary methods is \textit{(a)} its ability to find an optimal placement of SAs together with a stabilizing estimation/control gain matrix, and \textit{(b)} its flexibility to incorporate estimation/control metrics for continuous-time/discrete-time NDSs with different classes of nonlinearity. The paper's organization and notation are as follows.
{Section \ref{sec:model_problem} formalizes the NDSs model. Section \ref{sec:ssp} investigates the relation between Lipschitz constant and $\gamma_l$-observability while Section \ref{sec:obs_ssp_sensor} constructs the SSP for Lipschitz NDSs. Next, Section \ref{sec:sol_appr} focuses on transforming SSP into a convex MISDP form and provides a detailed description for the SE-BnB algorithm. In Section \ref{sec:misc}, some discussions related to actuator selection problem, solving SASPs
beyond Lipschitz NDSs, and robust SSP are provided. Finally, Section \ref{sec:num_test} presents some numerical results and Section \ref{sec:conclusion} concludes the paper.}
\noindent {\textbf{Notation.}} \hspace{0.5cm} The notation $\boldsymbol 1$ represents a matrix of appropriate dimension with all elements equal to 1.
The notations $\mathbb{R}^n$ and $\mathbb{R}^{p\times q}$ denote the sets of column vectors with $n$ elements and matrices of size $p$-by-$q$ with elements in $\mathbb{R}$.
Given a matrix $\boldsymbol A$, its entry in the $i$-th row and $j$-th column is denoted by $A_{ij}$. The operator $\mathrm{Blkdiag}(\cdot)$ constructs a block diagonal matrix, $\mathrm{Diag}(\cdot)$ constructs a diagonal matrix from a vector, $\mathrm{vec}(\cdot)$ constructs a vector by stacking the columns of a matrix, $\otimes$ denotes the Kronecker product, and $\odot$ denotes the Hadamard product.
The symbol $*$ is used to represent symmetric entries in symmetric matrices.
The set $\mathbb{I}(n)$ is defined as $\mathbb{I}(n) := \{i\in\mathbb{N}\,|\, 1\leq i \leq n\}$, which is usually used to represent the set of indices.
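The operators above can be illustrated with a small numpy sketch (the helper `blkdiag` is an ad hoc implementation of $\mathrm{Blkdiag}(\cdot)$, written here only for illustration):

```python
import numpy as np

def blkdiag(*mats):
    """Blkdiag(.): stack the given matrices along the diagonal."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0]])

D = blkdiag(A, B)             # Blkdiag(A, B): a 3x3 block-diagonal matrix
d = np.diag([1.0, 2.0])       # Diag(.): diagonal matrix from a vector
v = A.flatten(order="F")      # vec(A): stack columns -> [1, 3, 2, 4]
K = np.kron(A, B)             # Kronecker product
H = A * A                     # Hadamard product (elementwise)
```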
\setlength{\textfloatsep}{10pt}
\begin{table*}
\vspace{-0.05cm}
\begin{center}
{ \centering \scriptsize
\caption{Dynamic models of some nonlinear dynamic systems arising in electric power grids, highway traffic, combustion networks, and other networked applications.}\label{tab:nonlinear_models_cps}
\vspace{-0.0cm}
\renewcommand{\arraystretch}{1}
\begin{tabular}{c|c|c}
\toprule \midrule
\textbf{Type of Systems} & \textbf{Nonlinear Dynamics Model} & \textbf{Description} \\ \midrule
\makecell{\textit{Electric power}\\ \textit{grids}\citep{bergen2000power}} & \makecell{$\dot{\delta_i}=\omega_i-\omega_0\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;$\\ $\hspace{-.25cm}\dot{\omega}_i=\frac{\omega_0}{2H_i}\bigg(\hspace{-0.05cm}P_{\textrm{m}i}-\sum\limits_{j=1 }^{N} \hspace{-0.05cm}E_iE_j\hspace{-0.05cm}\left(\bar{G}_{ij}\cos(\delta_i-\delta_j)+\bar{B}_{ij}\sin(\delta_i-\delta_j)\right)-\frac{K_{\textrm{D}i}}{\omega_0}(\omega_i-\omega_0)\hspace{-0.05cm}\bigg)$} & \makecell{\nth{2}-order swing equation \\ $\boldsymbol{x_i} := [\delta_i\;\;\omega_i]^{\top}$ \\
$\delta_i$: rotor angle,
$\omega_i$: rotor speed} \\ \midrule
\textit{Highway traffic} \citep{nugroho2018journal} & $\dot{\rho}_i = \frac{v_f}{l}\left(\rho_{i-1}-\rho_i+\hat{\rho}_j-\alpha(k)\check{\rho}_k\right) -\delta \left(\rho_{i-1}^2-\rho_i^2+\hat{\rho}_j^2-\alpha(k)\check{\rho}_k^2\right)$ & \makecell{Free-flow condition \\ $\boldsymbol{x_i} := [\rho_i]$, $\rho_i$: traffic density} \\ \midrule
\makecell{\textit{Combustion} \\ \textit{networks}\citep{Perini2012}} & $\dot{\chi}_i = \sum\limits_{j = 1}^{n_r}(\beta_{ji}-\alpha_{ji})\Bigg(\hspace{-0.05cm}d^{(f)}_j\prod\limits^n_{k=1}\chi_k^{\alpha_{jk}}-d^{(b)}_j\prod\limits^n_{k=1}\chi_k^{\beta_{jk}}\hspace{-0.05cm}\Bigg)$ & \makecell{$\boldsymbol{x_i} := [\chi_i]$ \\ $\chi_i$: chemical concentration} \\ \midrule
\makecell{\textit{Oscillator} \\ \textit{synchronization} \citep{Kuramoto1975ebm}} & $\dot{\theta}_i = \omega_i + \sum\limits_{j = 1}^{N}K_{ij}\sin(\theta_j-\theta_i)$ & \makecell{Kuramoto model \\ $\boldsymbol{x_i} := [\theta_i]$, $\theta_i$: oscillator phase} \\ \midrule
\textit{Epidemic outbreaks}\citep{LAJMANOVICH1976221} & $\dot{p}_i = -\delta p_i + \sum\limits_{j = 1}^{N}a_{ij}\beta p_j(1-p_i)$ & \makecell{$\boldsymbol{x_i} := [p_i]$, $p_i\in[0,1]$ \\ $p_i$: infection probability} \\ \midrule
\makecell{\textit{Mass-spring-damper} \\ \textit{systems}} & \makecell{$m_i \ddot{x}_i = d_i\dot{x}_{i-1} -(d_i+d_{i+1})\dot{x}_{i} +d_{i+1}\dot{x}_{i+1} + k_i{x}_{i-1} -(k_i+k_{i+1}){x}_{i} +k_{i+1}{x}_{i+1}$ \\ $+ a_i{x}_{i-1}^3 -(a_i+a_{i+1}){x}_{i}^3 +a_{i+1}{x}_{i+1}^3\qquad\qquad\qquad\qquad\qquad\qquad\quad$ } & \makecell{$\boldsymbol{x_i} := [x_i]$ \\ $x_i$: mass relative position
} \\ \midrule
\bottomrule
\end{tabular}}
\end{center}
\vspace{-0.6cm}
\end{table*}
\setlength{\floatsep}{10pt}
\vspace{-0.35cm}
\section{System Description and Preliminaries}\label{sec:model_problem}
{Consider a continuous-time networked NDS comprised of $N$ subsystems (also referred to as \textit{nodes}) modeled as}
\begin{align}
\dot{\boldsymbol x}(t) &= \mA \boldsymbol x (t) + \boldsymbol G\boldsymbol f(\boldsymbol x) + \mB \boldsymbol u (t),\quad \boldsymbol y(t) = \mC \boldsymbol x (t).\label{eq:gen_dynamic_systems}
\end{align}
{Numerous NDSs---see Table \ref{tab:nonlinear_models_cps} for some notable examples---can be represented in the form of \eqref{eq:gen_dynamic_systems}}.
In this model, the global state $\boldsymbol x\in \mathbfcal{X}\subset\mathbb{R}^{n_x}$ is the concatenation of the $N$ nodal states $\boldsymbol {x_i}\in \mathbfcal{X}_{\boldsymbol i}\subset\mathbb{R}^{n_{x_i}}$. Likewise, each subsystem $\boldsymbol i$ has $n_{u_i}$ inputs and $n_{y_i}$ measurements such that $\boldsymbol {u_i}\in \mathbfcal{U}_{\boldsymbol i}\subset\mathbb{R}^{n_{u_i}}$ and $\boldsymbol {y_i}\in \mathbb{R}^{n_{y_i}}$, where $\boldsymbol {u_i}$ and $\boldsymbol {y_i}$ denote the nodal input and measurement vectors, and $\mathbfcal{X}$ and $\mathbfcal{U}$ represent the operating region and admissible inputs for NDS \eqref{eq:gen_dynamic_systems}. The global input and output vectors are $\boldsymbol u\in \mathbfcal{U}\subset\mathbb{R}^{n_u}$ and $\boldsymbol y\in \mathbb{R}^{n_y}$.
Matrices $\boldsymbol A$, $\mB$, $\mC$, and $\mG$ are all assumed to have appropriate dimensions; in particular, $\mB$ and $\mC$ represent the input-to-state and state-to-output mappings and are assumed to possess the following structure: $\mB = \mathrm{Blkdiag}\left(\mB_{1},\hdots,\mB_N\right)$, $\mC = \mathrm{Blkdiag}\left(\mC_{1},\hdots,\mC_N\right)$.
Matrix $\boldsymbol A$ expresses the \textit{linear} dynamics of the system as well as the interaction between subsystems, while matrix $\boldsymbol G$ describes the distribution of the nonlinear mapping $\boldsymbol f:\mathbb{R}^{n_x}\rightarrow \mathbb{R}^{n_g}$ that may capture any linear and {nonlinear} phenomena in the system.
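As a concrete instance of \eqref{eq:gen_dynamic_systems}, the Kuramoto model from Table \ref{tab:nonlinear_models_cps} can be cast in this form with $\boldsymbol A = \boldsymbol O$, $\boldsymbol G = \boldsymbol I$, $\boldsymbol B = \boldsymbol O$, and the coupling (together with the natural frequencies) absorbed into $\boldsymbol f$; a minimal numpy sketch, with entirely hypothetical parameter values:

```python
import numpy as np

# Kuramoto oscillators cast in the canonical form
#   x_dot = A x + G f(x) + B u
# with A = 0, G = I, B = 0 and the coupling inside f (one illustrative choice).
N = 3
omega = np.array([1.0, 1.2, 0.8])          # natural frequencies (hypothetical)
K = 0.5 * (np.ones((N, N)) - np.eye(N))    # uniform symmetric coupling (hypothetical)

A = np.zeros((N, N))
G = np.eye(N)
B = np.zeros((N, 1))

def f(theta):
    # f_i(theta) = omega_i + sum_j K_ij * sin(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return omega + np.sum(K * np.sin(diff), axis=1)

def x_dot(theta, u=np.zeros(1)):
    return A @ theta + G @ f(theta) + B @ u
```

Here the natural-frequency term is folded into $\boldsymbol f$ for brevity; it could equally be modeled as a constant input acting through $\boldsymbol B\boldsymbol u$.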
The ultimate objective of this work is to search for optimal and suboptimal SA configurations or locations for NDSs (equivalently, the nonzero columns of $\boldsymbol B$ and nonzero rows of $\boldsymbol C$)
while satisfying user-defined constraints,
which generally may include closed-loop system stability, convergence of the estimation error dynamics, and actuator/sensor constraints.
It is worth mentioning that this paper emphasizes solving SASPs for continuous-time NDSs with Lipschitz nonlinearities since \textit{(i)} the Lipschitz class is one of the most widely used function sets in both observer and controller designs---for example, see \cite{Phanomchoeng2010,nugroho2018journal}---and \textit{(ii)} the proposed methodology to solve SASPs can be directly extended to the other classes of nonlinearities mentioned previously as well as to discrete-time NDSs with additional estimation/control objectives.
To proceed, the following assumption is adopted throughout the paper.
\vspace{-0.1cm}
\begin{asmp}\label{def:lip}
The mapping $\boldsymbol f :\mathbb{R}^{n_x}\rightarrow \mbb{R}^{n_g}$ is locally Lipschitz continuous in $\mathbfcal{X}$ such that for any $\boldsymbol x,\hat{\boldsymbol x}\in \mathbfcal{X}$
\begin{align}
\@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol f(\boldsymbol x)-\boldsymbol f(\hat{\boldsymbol x})}_2 \leq \gamma_l \@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol x - \hat{\boldsymbol x}}_2,\label{eq:lip_def}
\end{align}
where $\gamma_{l} \geq 0$ is the corresponding Lipschitz constant.
\end{asmp}
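A crude empirical lower bound on $\gamma_l$ can be obtained by sampling pairs of points and taking the largest ratio in \eqref{eq:lip_def}; the sketch below is a sanity check only (it never overestimates the true constant), not a substitute for certified computation:

```python
import numpy as np

def lipschitz_lower_bound(f, dim, box=(-1.0, 1.0), n_pairs=20000, seed=0):
    """Empirical lower bound on the Lipschitz constant of f over a box,
    via max ||f(x) - f(x')||_2 / ||x - x'||_2 over random sample pairs."""
    rng = np.random.default_rng(seed)
    lo, hi = box
    X = rng.uniform(lo, hi, size=(n_pairs, dim))
    Y = rng.uniform(lo, hi, size=(n_pairs, dim))
    num = np.linalg.norm(f(X) - f(Y), axis=1)
    den = np.linalg.norm(X - Y, axis=1)
    mask = den > 1e-12                 # avoid division by near-zero distances
    return float(np.max(num[mask] / den[mask]))

# Elementwise sin has Lipschitz constant exactly 1 on any box,
# so the estimate should approach 1 from below.
gamma_hat = lipschitz_lower_bound(np.sin, dim=2)
```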
\vspace{-0.1cm}
\noindent
{Readers are referred to \citep{nugroho2020nonlinear} for scalable numerical methods to compute Lipschitz constants---including the ones corresponding to the other function sets.
In the next section, we formalize the notion of observability for Lipschitz NDSs which is crucial in the context of SSP.}
\vspace{-0.35cm}
\section{Observability for Lipschitz NDS}\label{sec:ssp}
In contrast to linear systems, quantifying observability for nonlinear systems is considerably more difficult and less straightforward.
{To that end, in this section we focus on the concept of observability for NDS \eqref{eq:gen_dynamic_systems} with regard to the solvability of the corresponding SDPs and the Lipschitz constant.} To proceed, consider an NDS in the form of \eqref{eq:gen_dynamic_systems}.
A standard Luenberger-like observer for NDS \eqref{eq:gen_dynamic_systems} can then be constructed as follows
\vspace{-0.05cm}
\begin{subequations}\label{eq:obs_dynamic_systems_sasp}
\begin{align}
\hspace{-0.15cm}\dot{\hat{\boldsymbol x}}(t) &= \mA \hat{\boldsymbol x} (t) + \boldsymbol G\boldsymbol f(\hat{\boldsymbol x}) + \mB \boldsymbol u (t) + \boldsymbol L (\boldsymbol y(t)-\hat{\boldsymbol y}(t))\\
\hspace{-0.15cm}\hat{\boldsymbol y}(t) &= \mC \hat{\boldsymbol x} (t),
\end{align}
\end{subequations}
where in \eqref{eq:obs_dynamic_systems_sasp}, $\boldsymbol L\in\mbb{R}^{n_x\times n_y}$ is the observer gain matrix, $\hat{\boldsymbol x}$ is the estimated state, and $\hat{\boldsymbol y}$ is the estimated output. The objective here is to compute $\boldsymbol L$ such that the estimation error $\boldsymbol e(t):= \boldsymbol x(t)-\hat{ \boldsymbol x}(t)$ converges asymptotically to zero. The next definition quantifies observability for NDS \eqref{eq:gen_dynamic_systems}, posed as an SDP feasibility problem, for a given $\gamma_{l}$.
\vspace{-0.1cm}
\begin{mydef}\label{def:lip_observability}
NDS \eqref{eq:gen_dynamic_systems} is said to be $\gamma_l$-observable if and only if there exist $\boldsymbol P\in \mbb{S}^{n_x}_{++}$, $\boldsymbol Y\in \mbb{R}^{n_x\times n_y}$, and $\epsilon\in\mbb{R}_{++}$ such that the following linear matrix inequality (LMI) is feasible
\begin{align}
\hspace{-0.6cm}\bmat{
\boldsymbol A ^{\top}\boldsymbol P + \boldsymbol P\boldsymbol A - \boldsymbol C ^{\top}\boldsymbol Y ^{\top} -\boldsymbol Y\boldsymbol C +\epsilon\gamma_l^2 \boldsymbol I & * \\
\boldsymbol G ^{\top}\boldsymbol P & -\epsilon \boldsymbol I} & \prec 0. \label{eq:LMI_lip_obs}
\end{align}
\end{mydef}
\vspace{-0.05cm}
LMI \eqref{eq:LMI_lip_obs} originates from \citep{Phanomchoeng2010} for the special case $\boldsymbol G = \boldsymbol I$, and it can be proven that \eqref{eq:LMI_lip_obs} is both necessary and sufficient for the existence of a stabilizing matrix $\boldsymbol L$ for the estimation error dynamics, provided that $V(\boldsymbol e) = \boldsymbol e(t)^{\top}\mP \boldsymbol e(t)$ is the Lyapunov function candidate and $\epsilon \in \mbb{R}_{++}$. Once the LMI is solved, $\boldsymbol L$ can be recovered as $\mP^{-1}\mY$.
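Given candidate values of $\boldsymbol P$, $\boldsymbol Y$, $\epsilon$ (e.g., returned by an SDP solver, which we deliberately do not invoke here), the LMI \eqref{eq:LMI_lip_obs} can be verified numerically and the gain recovered as $\boldsymbol L = \boldsymbol P^{-1}\boldsymbol Y$; a minimal numpy sketch with hypothetical data:

```python
import numpy as np

def check_gamma_l_observability(A, G, C, P, Y, eps, gamma_l, tol=1e-9):
    """Verify the gamma_l-observability LMI: the block matrix M below must be
    negative definite, with P positive definite and eps > 0."""
    n = A.shape[0]
    top = A.T @ P + P @ A - C.T @ Y.T - Y @ C + eps * gamma_l**2 * np.eye(n)
    M = np.block([[top, P @ G],
                  [G.T @ P, -eps * np.eye(G.shape[1])]])
    ok = (np.max(np.linalg.eigvalsh(M)) < -tol
          and np.min(np.linalg.eigvalsh(P)) > tol and eps > 0)
    L = np.linalg.solve(P, Y)          # recover the observer gain L = P^{-1} Y
    return ok, L

A = np.diag([-2.0, -3.0])              # stable linear part (hypothetical)
G = 0.1 * np.eye(2)
C = np.eye(2)                          # full-state measurement
ok, L = check_gamma_l_observability(A, G, C, P=np.eye(2), Y=np.eye(2),
                                    eps=1.0, gamma_l=0.5)
```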
The advantages of utilizing Definition \ref{def:lip_observability} to quantify observability are twofold: first, if there exists any solution to \eqref{eq:LMI_lip_obs}, then one immediately obtains a stabilizing gain matrix $\boldsymbol L$; second, observability holds across operating points in $\mathbfcal{X}$. This is in contrast to other notions of observability for NDSs, such as the empirical observability Gramian, which only quantifies observability based on the local behavior of the system around certain operating points and, even after observability is determined, still requires designing a state estimator to estimate the actual states.
At this point, we consider the following question regarding the feasibility of the SDP that corresponds to $\gamma_l$-observability for NDS \eqref{eq:gen_dynamic_systems}:
\vspace{0.1cm}
\noindent \textbf{Q1}: \textit{What role does constant $\gamma_l$ have on $\gamma_l$-observability?}
\vspace{0.1cm}
\noindent To answer \textbf{Q1}, we need to further study the LMI given in \eqref{eq:LMI_lip_obs}. {For the sake of simplicity, let us assume that $\epsilon > 0$ is a constant. Then, realize that $\boldsymbol Y\boldsymbol C$ can be expressed as}
\vspace{-0.05cm}
\begin{align*}
\boldsymbol Y\boldsymbol C = \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} Y_{ij}\check{\boldsymbol C}_{ij},
\end{align*}
\vspace{-0.05cm}
where $\check{\boldsymbol C}_{ij}\in \mbb{R}^{n_x\times n_x}$ is constructed row-wise as
\begin{align*}
\mathrm{row}\left(\check{\boldsymbol C}_{ij}\right)_k =
\begin{cases}
\left\{[C_{jl}]\right\}^{n_x}_{l = 1}\in \mbb{R}^{1\times n_x}, &\;\text{if}\;k = i\\
\boldsymbol 0_{1\times n_x}, &\;\text{otherwise},\\
\end{cases}
\end{align*}
that is, the $i$-th row of $\check{\boldsymbol C}_{ij}$ equals the $j$-th row of $\boldsymbol C$ and all other rows are zero.
Now let $\check{\boldsymbol C}_{S,ij}:= \check{\boldsymbol C}_{ij} + \check{\boldsymbol C}_{ij}^\top$ for each $i,j$.
By using the above expression, the term $\boldsymbol Y\boldsymbol C + \boldsymbol C ^{\top}\boldsymbol Y ^{\top} $ can be expressed as
\vspace{-0.05cm}
\begin{align}
\boldsymbol Y\boldsymbol C + \boldsymbol C ^{\top}\boldsymbol Y ^{\top}
&= \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} Y_{ij}\check{\boldsymbol C}_{ij} + \left(\sum_{i=1}^{n_x}\sum_{j=1}^{n_y} Y_{ij}\check{\boldsymbol C}_{ij}\hspace{-0.05cm}\right)^{\hspace{-0.1cm}\top} \nonumber \\
&= \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} Y_{ij}\check{\boldsymbol C}_{S,ij}
= \sum_{k=1}^{n_xn_y} y_{v,k}\bar{\boldsymbol C}_k, \label{eq:CY_vec_LMI}
\end{align}
in which $\boldsymbol y_v := [y_{v,1}\,\,y_{v,2}\,\,\cdots\,\,y_{v,n_xn_y}]^\top = \mathrm{vec}(\boldsymbol Y)$ and $\bar{\boldsymbol C}_k\in\mbb{S}^{n_x}$ for each $k\in\mbb{I}(n_xn_y)$ is associated with the corresponding $\check{\boldsymbol C}_{S,ij}$.
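The expansion \eqref{eq:CY_vec_LMI} is easy to verify numerically; in the sketch below, each $\check{\boldsymbol C}_{ij}$ is built so that its $i$-th row holds the $j$-th row of $\boldsymbol C$, and the symmetrized matrices $\check{\boldsymbol C}_{S,ij}$ recombine to $\boldsymbol Y\boldsymbol C + \boldsymbol C^{\top}\boldsymbol Y^{\top}$ (random data, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny = 4, 2
Y = rng.standard_normal((nx, ny))
C = rng.standard_normal((ny, nx))

# C_check[i, j] has its i-th row equal to the j-th row of C, zeros elsewhere.
C_check = np.zeros((nx, ny, nx, nx))
for i in range(nx):
    for j in range(ny):
        C_check[i, j, i, :] = C[j, :]

# Y C + C^T Y^T as a linear combination of the symmetrized basis matrices.
lhs = Y @ C + C.T @ Y.T
rhs = sum(Y[i, j] * (C_check[i, j] + C_check[i, j].T)
          for i in range(nx) for j in range(ny))
```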
Using \eqref{eq:CY_vec_LMI}, LMI \eqref{eq:LMI_lip_obs} along with $\boldsymbol P \succ 0$ can be written in the following compact form
\begin{subequations}\label{eq:obs_LMI_ref}
\vspace{-0.05cm}
\begin{align}
\mathcal{A}(\boldsymbol P, \boldsymbol y_v) + \mathcal{A}_0 \succ 0,\label{eq:obs_LMI_ref_1}
\end{align}
in which $\mathcal{A}: \mbb{S}^{n_x}\times \mbb{R}^{n_xn_y}\rightarrow \mbb{S}^{2n_x}\times\mbb{S}^{n_x}$ is constructed as $\mathcal{A}(\boldsymbol P, \boldsymbol y_v) := \mathcal{A}_{P}(\boldsymbol P) + \mathcal{A}_{Y}({\boldsymbol y_v})$ and
\begin{align}
\mathcal{A}_{P}(\boldsymbol P) &:= \mathrm{Blkdiag}\left(-\bmat{\boldsymbol A ^{\top}\boldsymbol P + \boldsymbol P\boldsymbol A & * \\ \boldsymbol G ^{\top}\boldsymbol P & \boldsymbol O}, \boldsymbol P\right)\label{eq:obs_LMI_ref_2}\\
\mathcal{A}_{Y}({\boldsymbol y_v}) &:= \mathrm{Blkdiag}\left(\bmat{\sum_{i=1}^{n_xn_y} y_{v,i}\bar{\boldsymbol C}_i & *\\ \boldsymbol O&\boldsymbol O},\boldsymbol O\right)\label{eq:obs_LMI_ref_3}\\
\mathcal{A}_0 &:= \mathrm{Blkdiag}\left(\bmat{-\epsilon\gamma_l^2 \boldsymbol I & *\\ \boldsymbol O&\epsilon \boldsymbol I},\boldsymbol O\right).\label{eq:obs_LMI_ref_4}
\end{align}
\end{subequations}
Using the expression provided in \eqref{eq:obs_LMI_ref}, we now present the first theoretical result of the paper.
\vspace{-0.15cm}
\begin{theorem}[Larger Lipschitz Constant Reduces Observability]\label{thm:lip_obs_lmi}
There exist $\boldsymbol P\in \mbb{S}^{n_x}$ and $\boldsymbol y_v\in \mbb{R}^{n_xn_y}$ such that $\mathcal{A}(\boldsymbol P, \boldsymbol y_v) + \mathcal{A}_0\succ 0$ if and only if there is no $\boldsymbol Z\in \mbb{S}^{3n_x}$ in the form of
\begin{subequations}\label{eq:lip_obs_lmi_thm}
\begin{align}
\boldsymbol Z = \mathrm{Blkdiag}\left(\bmat{\boldsymbol Z_1& * \\ \boldsymbol Z_2^\top & \boldsymbol Z_3}, \boldsymbol Z_4\right),\label{eq:lip_obs_lmi_thm_1}
\end{align}
such that $\boldsymbol Z \succeq 0$ and $\boldsymbol Z \neq 0$ satisfying
\begin{align}
&\boldsymbol Z_4 = \boldsymbol Z_1\boldsymbol A^\top + \boldsymbol A\boldsymbol Z_1 + \boldsymbol G\boldsymbol Z_2^\top + \boldsymbol Z_2\boldsymbol G^\top \label{eq:lip_obs_lmi_thm_2}\\
&\sum_{j=1}^{n_x}\sum_{k=1}^{n_x}\bar{C}_{i,jk} Z_{1,jk} = 0,\;\;\forall i\in\mbb{I}(n_xn_y) \label{eq:lip_obs_lmi_thm_3}\\
&\mathrm{tr}(\boldsymbol Z_3) \leq \gamma_l^2 \,\mathrm{tr}(\boldsymbol Z_1). \label{eq:lip_obs_lmi_thm_4}
\end{align}
\end{subequations}
\end{theorem}
\vspace{-0.1cm}
\noindent The proof of the above theorem is presented in \ref{appdx:B}. Problem \eqref{eq:lip_obs_lmi_thm} is referred to throughout the section as the \textit{alternative problem} for \eqref{eq:obs_LMI_ref_1}. To answer \textbf{Q1}, we need to look into inequality \eqref{eq:lip_obs_lmi_thm_4}.
Realize that the term $\mathrm{tr}(\boldsymbol Z_3)$ is lower bounded by zero since $\boldsymbol Z\succeq 0$. Consequently, we get
\vspace{-0.05cm}
\begin{align}
0 \leq \mathrm{tr}(\boldsymbol Z_3) &\leq \gamma_l^2 \,\mathrm{tr}(\boldsymbol Z_1). \label{eq:lip_obs_lmi_infeasibility}
\end{align}
It is seen from \eqref{eq:lip_obs_lmi_infeasibility} that the feasible set of \eqref{eq:lip_obs_lmi_thm} expands as the value of $\gamma_l$ increases.
In the particular case $\gamma_l = 0$, the term $\mathrm{tr}(\boldsymbol Z_3)$ is forced to equal zero; since the eigenvalues of $\boldsymbol Z_3$ cannot be negative, they must all be zero. On the other hand, if $\gamma_l$ is considerably large, then there is little restriction on the upper bound of $\mathrm{tr}(\boldsymbol Z_3)$. Hence, it is important to obtain a small Lipschitz constant $\gamma_l$, since this narrows the feasible set of the alternative problem \eqref{eq:lip_obs_lmi_thm}.
{This finding is in accordance with empirical observations from the literature: SDP \eqref{eq:LMI_lip_obs} has no solution if $\gamma_l$ is selected to be sufficiently large---such a large Lipschitz constant is referred to as \textit{conservative}. This provides an answer to \textbf{Q1}.
The next section introduces the SSP for Lipschitz NDSs and presents some properties useful for the development of a more efficient BnB algorithm.}
\vspace{-0.35cm}
\section{Sensor Selection and NDS Observability}\label{sec:obs_ssp_sensor}
{In this section, we wish to answer the question below:}
\vspace{0.05cm}
\noindent \textbf{Q2}: \textit{What is the impact of the number of placed sensors on $\gamma_l$-observability?}
\vspace{0.05cm}
The above question may seem trivial since it is expected that dynamic systems exhibit a \textit{higher degree of observability} as more sensors are placed or utilized. The objective of this section is to establish this conjecture from the $\gamma_l$-observability standpoint. To begin with, we formally construct the SSP for NDS \eqref{eq:gen_dynamic_systems} as follows. Let $\gamma_i\in\{0,1\}$ be a binary variable indicating whether the sensor on subsystem $i$ is activated; that is, $\gamma_i = 1$ if the sensor measuring subsystem $i$ is activated and $\gamma_i = 0$ otherwise. These variables are collected into $\boldsymbol \gamma := \left[\gamma_1\,\,\gamma_2\,\,\cdots\,\,\gamma_N\right]^{\top}$.
The following conventions are adopted in this section.
\vspace{-0.1cm}
{\begin{mydef}~\label{def:setS}
Let $\mathcal{S}_\gamma$ be an $N$-tuple representing the selection of sensors, i.e., $\mathcal{S}_\gamma := (\gamma_1,\gamma_2,\ldots,\gamma_N)$.
The set of active sensors is constructed as $\mathcal{S}_{\gamma}^{(a)} := \left\{\gamma \in \mathcal{S}_\gamma\,|\,\gamma =1\right\}$.
Let $\bar{\mathcal{S}}_\gamma := (\gamma_1\boldsymbol 1_{n_{y_1}},\gamma_2\boldsymbol 1_{n_{y_2}},\ldots,\gamma_N\boldsymbol 1_{n_{y_N}})$ where $\mathrm{card}(\bar{\mathcal{S}}_\gamma) = n_y$. The selection of sensors in matrix form can be written as $\boldsymbol \Gamma := \mathrm{Blkdiag}\big(\gamma_1\boldsymbol I_{n_{y_1}},\gamma_2\boldsymbol I_{n_{y_2}},\hdots,\gamma_N\boldsymbol I_{n_{y_N}}\big)$.
\end{mydef}}
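Definition \ref{def:setS} is straightforward to realize in code; a small numpy sketch with hypothetical per-subsystem measurement counts:

```python
import numpy as np

def sensor_selection_matrix(gamma, n_y):
    """Gamma = Blkdiag(gamma_1 * I_{n_y1}, ..., gamma_N * I_{n_yN})."""
    total = sum(n_y)
    Gamma = np.zeros((total, total))
    r = 0
    for g, n in zip(gamma, n_y):
        Gamma[r:r + n, r:r + n] = g * np.eye(n)
        r += n
    return Gamma

gamma = [1, 0, 1]        # sensors on subsystems 1 and 3 are active
n_y = [2, 1, 2]          # per-subsystem measurement counts (hypothetical)
Gamma = sensor_selection_matrix(gamma, n_y)
active = [i + 1 for i, g in enumerate(gamma) if g == 1]   # indices in S_gamma^(a)
```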
\vspace{-0.1cm}
By using the above definition, NDS \eqref{eq:gen_dynamic_systems} together with sensor selection can be conveniently expressed as
\vspace{-0.0cm}
\begin{subequations}\label{eq:gen_dynamic_systems_sen_sel}
\begin{align}
\dot{\boldsymbol x}(t) &= \mA \boldsymbol x (t) + \boldsymbol G\boldsymbol f(\boldsymbol x) + \mB\boldsymbol u(t)\\
\boldsymbol y(t) &= \boldsymbol \Gamma\mC \boldsymbol x (t).
\end{align}
\end{subequations}
Additionally, it is beneficial to consider a set $\mathcal{G}_\gamma\subseteq \{0,1\}^N$ representing logistic constraints and sensor availability, so that $\boldsymbol \gamma \in \mathcal{G}_\gamma$ may be imposed.
{This in turn allows a simplified, high-level formulation of a general SSP}
\vspace{-0.05cm}
\begin{align*}
\mathbf{(P1)}\;\;\; \minimize \;\;\;& \boldsymbol c^{\top} \boldsymbol \gamma + \mathrm{EstimationObjective} \\
\subjectto \;\;\;& \eqref{eq:gen_dynamic_systems_sen_sel},\; \boldsymbol \gamma\in \mathcal{G}_\gamma,\; \mathrm{EstimationConstraints}.
\end{align*}
\vspace{-0.05cm}
The objectives of \textbf{P1} are threefold: \textit{(i)} performing state estimation for NDS \eqref{eq:gen_dynamic_systems_sen_sel} while \textit{(ii)} utilizing the smallest possible number of sensors (or satisfying a given constraint over the library of available sensors) and \textit{(iii)} optimizing a specific estimation metric. Vector $\boldsymbol c\in\mbb{R}^N_{+}$ in the objective function of \textbf{P1} assigns a weight to each sensor variable $\gamma_i$. Given \textbf{P1}, the SSP for Lipschitz NDSs without an estimation objective (estimation objectives are considered in Section~\ref{sec:misc}) can be constructed by incorporating \eqref{eq:gen_dynamic_systems_sen_sel} into LMI \eqref{eq:LMI_lip_obs}. The resulting problem is given below.
\vspace{-0.05cm}
\begin{subequations}\label{eq:sen_sel_obs}
\begin{align}
&\hspace*{-0.2cm}\mathbf{(P2)}\; \minimize_{ \boldsymbol P, \boldsymbol Y, \epsilon, \boldsymbol \gamma}\;\; \boldsymbol c^{\top} \boldsymbol \gamma \\
&\hspace*{-0.2cm}\subjectto \;\; \begin{bmatrix}
\boldsymbol A ^{\top}\boldsymbol P + \boldsymbol P\boldsymbol A +\epsilon\gamma_l^2\boldsymbol I & \\
- \boldsymbol Y\boldsymbol \Gamma \boldsymbol C- \boldsymbol C ^{\top}\boldsymbol \Gamma\boldsymbol Y ^{\top}& *\\
\boldsymbol G ^{\top}\boldsymbol P & -\epsilon \boldsymbol I \end{bmatrix} \preceq 0 \label{eq:sen_sel_obs_1} \\
&\quad \quad\;\;\boldsymbol P\succ 0, \;\epsilon > 0,\; \boldsymbol \gamma\in \mathcal{G}_\gamma,\;
\boldsymbol \gamma \in \{0,1\}^N.\label{eq:sen_sel_obs_2}\vspace*{-0.05cm}
\end{align}
\end{subequations}
\vspace{-0.05cm}
In \textbf{P2}, the objective is to minimize the number of activated sensors while \textit{(a)} finding a stabilizing observer gain matrix $\mL$ and \textit{(b)} satisfying the sensors' logistic constraints. As $\boldsymbol \Gamma$ is a binary matrix variable, \textbf{P2} is a nonconvex optimization problem with \textit{mixed-integer bilinear matrix inequalities} (MIBMIs) because of the $\boldsymbol Y \boldsymbol \Gamma$ term. Our approach to address this problem is given in the next section. {Following \eqref{eq:CY_vec_LMI}, the term $ \boldsymbol Y\boldsymbol \Gamma \boldsymbol C+\boldsymbol C ^{\top}\boldsymbol \Gamma\boldsymbol Y ^{\top}$ is equal to
\vspace{-0.05cm}
\begin{align}
\sum_{j=1}^{n_y} \gamma_j\left(\sum_{i=1}^{n_x} Y_{ij}\check{\boldsymbol C}_{S,ij}\right) = \sum_{k=1}^{n_xn_y} \bar{\gamma}_{k} y_{v,k}\bar{\boldsymbol C}_k, \label{eq:CGammaY_vec_LMI}
\end{align} }
\vspace{-0.05cm}
\noindent where each $\bar{\gamma}_{k}$ corresponds to ${\gamma}_{j}\in \bar{\mathcal{S}}_\gamma$. As such we may write $\bar{\gamma}_{k}\triangleleft{\gamma}_{j}$. For the sake of analysis, let us assume that $\boldsymbol\gamma$ is fixed. From \eqref{eq:CGammaY_vec_LMI}, define $\mathcal{A}_{Y,\Gamma}({\boldsymbol y_v})$ as follows
\begin{subequations}
\begin{align}
\hspace{-0.3cm}\mathcal{A}_{Y,\Gamma}({\boldsymbol y_v}) &:= \mathrm{Blkdiag}\left(\bmat{\sum_{k=1}^{n_xn_y}\bar{\gamma}_{k} y_{v,k}\bar{\boldsymbol C}_k & *\\ \boldsymbol O&\boldsymbol O},\boldsymbol O\right), \label{eq:obs_LMI_ref_gamma}
\end{align}
such that, by having $\mathcal{A}_{\Gamma}(\boldsymbol P, \boldsymbol y_v) := \mathcal{A}_{P}(\boldsymbol P) + \mathcal{A}_{Y,\Gamma}({\boldsymbol y_v})$, matrix inequality \eqref{eq:obs_LMI_ref_1} now becomes
\begin{align}
\hspace{-0.4cm}\mathcal{A}_{\Gamma}(\boldsymbol P, \boldsymbol y_v) + \mathcal{A}_0 = \mathcal{A}_{P}(\boldsymbol P) + \mathcal{A}_{Y,\Gamma}({\boldsymbol y_v})+ \mathcal{A}_0 \succ 0.\label{eq:obs_LMI_ref_gamma_problem}
\end{align}
\end{subequations}
The next result is established due to Theorem \ref{thm:lip_obs_lmi}.
\vspace{-0.05cm}
\begin{theorem}\label{thm:lip_obs_lmi_sensor}
Consider a certain sensor configuration $\mathcal{S}_\gamma$. There exist $\boldsymbol P\in \mbb{S}^{n_x}$ and $\boldsymbol y_v\in \mbb{R}^{n_xn_y}$ satisfying $\mathcal{A}_{\Gamma}(\boldsymbol P, \boldsymbol y_v) + \mathcal{A}_0\succ 0$ if and only if there is no $\boldsymbol Z\in \mbb{S}^{3n_x}$ provided in the form of \eqref{eq:lip_obs_lmi_thm_1}
such that $\boldsymbol Z \succeq 0$ and $\boldsymbol Z \neq 0$ satisfying \eqref{eq:lip_obs_lmi_thm_2}, \eqref{eq:lip_obs_lmi_thm_4}, and $\boldsymbol Z_1\in \mathcal{Z}\big(\mathcal{S}_{\gamma}^{(a)}\big)$ where
\begin{align}
\hspace{-0.2cm} \mathcal{Z}\big(\mathcal{S}_{\gamma}^{(a)}\big) := &\Bigg\{\vphantom{\Bigg\vert}\boldsymbol Z_1\in \mbb{S}^{n_x}\,\Bigg\vert\,
\sum_{j=1}^{n_x}\sum_{k=1}^{n_x}\bar{\gamma}_{i}\bar{C}_{i,jk} Z_{1,jk} = 0, \nonumber \\
\hspace{-0.2cm} &\quad \;\hspace*{-0.2cm}\;\forall i\in\mbb{I}(n_xn_y)\;\wedge\;\bar{\gamma}_{i}\triangleleft{\gamma}_{m},\;\gamma_m\in\mathcal{S}_{\gamma}^{(a)}\vphantom{\Bigg\vert}\Bigg\}.\label{eq:lip_obs_lmi_sensor}
\end{align}
\end{theorem}
\vspace{-0.1cm}
{It follows from Theorem \ref{thm:lip_obs_lmi_sensor} (see \ref{appdx:C} for the proof) that including more sensors shrinks the feasible set of the alternative problem.} By shrinking this set, one can expect the constraints to become inconsistent, rendering the alternative problem infeasible and consequently yielding a feasible solution for \eqref{eq:obs_LMI_ref_gamma_problem}. This provides an answer to \textbf{Q2}. {From Theorem \ref{thm:lip_obs_lmi_sensor}, the following additional result is obtained.}
\vspace{-0.1cm}
\begin{mycor}\label{cor:lip_obs_lmi_sensor}
Suppose $\mathcal{S}_{\gamma,1}$ and $\mathcal{S}_{\gamma,2}$ are two distinct sensor configurations such that $\mathcal{S}_{\gamma,1}^{(a)}\hspace{-0.05cm}\subset \mathcal{S}_{\gamma,2}^{(a)}$. If \eqref{eq:obs_LMI_ref_gamma_problem} has no solution for $\mathcal{S}_{\gamma,2}$, then it also has no solution for $\mathcal{S}_{\gamma,1}$.
\end{mycor}
\vspace{-0.1cm}
\noindent The above corollary essentially states that if the system is not $\gamma_l$-observable for a particular sensor configuration $\mathcal{S}_{\gamma,2}$, then it is also not $\gamma_l$-observable for any sensor configuration $\mathcal{S}_{\gamma,1}$ with fewer sensors such that $\mathcal{S}_{\gamma,1}^{(a)}\hspace{-0.05cm}\subset \mathcal{S}_{\gamma,2}^{(a)}$.
The proof of Corollary \ref{cor:lip_obs_lmi_sensor} follows directly from the fact that $\mathcal{S}_{\gamma,1}^{(a)}\subset \mathcal{S}_{\gamma,2}^{(a)}$ implies $\mathcal{Z}\big(\mathcal{S}_{\gamma,2}^{(a)}\big) \subset\mathcal{Z}\big(\mathcal{S}_{\gamma,1}^{(a)}\big)$: since the alternative problem with constraint set $\mathcal{Z}\big(\mathcal{S}_{\gamma,2}^{(a)}\big)$ has a nonempty feasible set, the feasible set is also nonempty for any $\mathcal{S}_{\gamma,1}$, as $\mathcal{Z}\big(\mathcal{S}_{\gamma,1}^{(a)}\big)$ is less constrained.
By contraposition, it is also true that the system is $\gamma_l$-observable for any sensor configuration $\mathcal{S}_{\gamma,2}$ provided that it is $\gamma_l$-observable for some sensor configuration $\mathcal{S}_{\gamma,1}$ such that $\mathcal{S}_{\gamma,1}^{(a)}\hspace{-0.05cm}\subset \mathcal{S}_{\gamma,2}^{(a)}$.
Now, a natural question is whether there is a relation between the value of the Lipschitz constant $\gamma_l$ and the number of activated sensors needed to achieve $\gamma_l$-observability in the SSP. Consider an SSP where the objective is to achieve $\gamma_l$-observability with the minimum number of activated sensors. From the previous results, we know that both decreasing $\gamma_l$ and incorporating more sensors shrink the feasible set of the alternative problem. Since we want to use as few sensors as possible, we must instead rely on the smallest available Lipschitz constant $\gamma_l$: if the alternative problem can be made infeasible through \eqref{eq:lip_obs_lmi_infeasibility} by reducing the Lipschitz constant, then fewer activated sensors suffice to achieve $\gamma_l$-observability.
In the next section, we discuss our approach to solve SSP formulated in \textbf{P2}.
\vspace{-0.25cm}
\section{Solving the SSP Through Customized BnB}\label{sec:sol_appr}
\vspace{-0.25cm}
\subsection{From MIBMI to Convex MISDP}\label{ssec:mibmi_to_misdp}
Our approach involves transforming the SSP to a convex MISDP.
Specifically, our prior work \citep{Nugroho2019CDC} reformulates the SSP \textbf{P2} into a convex MISDP using McCormick's relaxation \citep{mccormick1976computability}. This is not the only method to convert an MIBMI into an MISDP: the big-M method, which is popular in disjunctive programming, can also be employed---see \citep{Nugroho2019124}.
McCormick's reformulation is performed by defining a new matrix variable $\boldsymbol M := \boldsymbol Y \boldsymbol \Gamma$ where $\boldsymbol M\in\mbb{R}^{n_x\times n_y}$, given that $\boldsymbol Y$ is bounded such that $\barbelow{\boldsymbol Y} \leq \boldsymbol Y \leq \bar{\boldsymbol Y}$, while big-M assumes that $-L \boldsymbol 1 \leq \boldsymbol Y \leq L \boldsymbol 1$ for a sufficiently large constant $L > 0$. For convenience, we consider $\boldsymbol Y \in B(\boldsymbol Y)$ where $B(\boldsymbol Y)$ is defined as
\begin{align*}
B(\boldsymbol Y) := \begin{cases}
\barbelow{\boldsymbol Y} \leq \boldsymbol Y \leq \bar{\boldsymbol Y}, &\;\text{for McCormick},\\
-L \boldsymbol 1 \leq \boldsymbol Y \leq L \boldsymbol 1, &\;\text{for Big-M}.\\
\end{cases}
\end{align*}
It is apparent that the bounds on $\boldsymbol Y$ are determined by a single constant $L$ for big-M and by the matrices $\barbelow{\boldsymbol Y},\bar{\boldsymbol Y}$ for McCormick, making big-M a special case of McCormick's relaxation.
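A useful property of McCormick's relaxation of $M_{ij} = Y_{ij}\gamma_j$ is that it is exact whenever $\gamma_j$ is binary: at $\gamma_j = 0$ the envelope inequalities force $M_{ij} = 0$, and at $\gamma_j = 1$ they force $M_{ij} = Y_{ij}$. A quick scalar check of this tightness (a sketch with hypothetical bounds, not part of the solver):

```python
def mccormick_box(Y, y_lo, y_hi):
    """McCormick envelope bounds on M = Y * gamma with Y in [y_lo, y_hi],
    evaluated at gamma in {0, 1}. Returns {gamma: (lower, upper)} on M;
    at binary gamma the envelope collapses to a single point."""
    out = {}
    for gamma in (0, 1):
        # Standard McCormick inequalities for the bilinear term Y * gamma:
        lower = max(y_lo * gamma, Y + y_hi * (gamma - 1))
        upper = min(y_hi * gamma, Y + y_lo * (gamma - 1))
        out[gamma] = (lower, upper)
    return out

bounds = mccormick_box(Y=0.7, y_lo=-2.0, y_hi=3.0)
```

At fractional $\gamma_j$ (as encountered in BnB relaxations) the envelope is a proper interval, which is precisely what the continuous relaxation exploits.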
The resulting problem can now be written as
\vspace{-0.0cm}
\begin{subequations}\label{eq:sen_sel_obs_const}
\begin{align}
&\hspace*{-0.2cm}(\mathbf{P3})\; \minimize_{ \boldsymbol P, \boldsymbol Y, \boldsymbol M, \epsilon, \boldsymbol \gamma}\;\; \boldsymbol c^{\top} \boldsymbol \gamma \\
&\hspace*{-0.2cm}\subjectto \;\; \begin{bmatrix}
\boldsymbol A ^{\top}\boldsymbol P + \boldsymbol P\boldsymbol A +\epsilon\gamma_l^2\boldsymbol I & \\
- \boldsymbol M \boldsymbol C- \boldsymbol C ^{\top}\boldsymbol M^{\top}& *\\
\boldsymbol G ^{\top}\boldsymbol P & -\epsilon \boldsymbol I \end{bmatrix} \preceq 0 \label{eq:sen_sel_obs_const_1} \\
&\quad \quad \quad \quad\;\;\boldsymbol M = \boldsymbol Y \boldsymbol \Gamma, \;\boldsymbol Y \in B(\boldsymbol Y),\;\eqref{eq:sen_sel_obs_2}.\label{eq:sen_sel_obs_const_2}\vspace*{-0.05cm}
\end{align}
\end{subequations}
Without loss of generality, it is assumed throughout the section that the number of variables in $\boldsymbol \gamma$ is equal to the number of rows in $\boldsymbol C$.
By applying big-M or McCormick's relaxation, the nonconvex MISDP \textbf{P3} can be converted to an equivalent convex MISDP given in the next theorem.
\vspace{-0.1cm}
\begin{theorem}\label{thm:misdp}
Problem \textbf{P3} is equivalent to
\begin{subequations}\label{eq:sen_sel_misdp_mccormick}
\vspace*{-0.00cm}
\begin{align}
\hspace*{-0.1cm}\mathbf{(P4)}\; \minimize_{\boldsymbol v}\;\; & \boldsymbol c^{\top}_v \boldsymbol v \label{eq:sen_sel_misdp_mccormick1}\\
\subjectto \;\; & \mathcal{L}_v(\boldsymbol v) + \mathcal{C}_v\succeq 0 , \label{eq:sen_sel_misdp_mccormick2}\\
\;\;& v_i\in \{0,1\},\;\;\forall i\in\mbb{I}_v(\boldsymbol \gamma), \label{eq:sen_sel_misdp_mccormick3}
\end{align}
where $\mathcal{L}_v(\cdot)$ denotes LMI terms, $\mathcal{C}_v$ denotes the constant terms, and vector $\boldsymbol v\in\mbb{R}^{n_v}$ populates all decision variables in the following fashion
\begin{align}
\boldsymbol v := \bmat{\mathrm{vec}(\boldsymbol P)^{\top}\;\;\epsilon\;\;\mathrm{vec}(\boldsymbol M)^{\top}\;\;\mathrm{vec}(\boldsymbol Y)^{\top}\;\;\boldsymbol \gamma^{\top}}^{\top}, \label{eq:vector_v}
\end{align}
\end{subequations}
with $n_v = {\bar{n}_x+1+2n_xn_y+n_y}, \bar{n}_x := \tfrac{1}{2}n_x(n_x+1)$.
\end{theorem}
{The proof of Theorem \ref{thm:misdp} is detailed in Appendix~\ref{appdx:Z}}.
The notation $\mbb{I}_v(\boldsymbol \gamma)$ in \eqref{eq:sen_sel_misdp_mccormick3} is useful for indicating the index $i$ of $\boldsymbol v$ such that $v_i = \gamma_j$ for each corresponding $j\in\mbb{I}(\boldsymbol \gamma)$; similar notations are also used to label any variable in $\boldsymbol v$ that corresponds to $\boldsymbol P$, $\boldsymbol Y$, $\boldsymbol M$, and $\epsilon$.
Specifically, the inequality $\boldsymbol \Xi \,\boldsymbol \nu \leq \boldsymbol \varphi$ embedded in LMI \eqref{eq:sen_sel_misdp_mccormick2} encodes McCormick's envelope for the bilinear equality $M_{ij} = Y_{ij}\gamma_j$, expressed as
\begin{align}
\bmat{1&-1&-\barbelow{Y}_{ij}\\ -1&1&\bar{Y}_{ij}\\1&0&-\bar{Y}_{ij}\\-1&0&\barbelow{Y}_{ij}}\bmat{M_{ij}\\Y_{ij}\\\gamma_j} \leq \bmat{-\barbelow{Y}_{ij} \\ \bar{Y}_{ij}\\0\\0}, \label{eq:mccormick_envelope}
\end{align}
which, since $\gamma_j$ is binary, is equivalent to the original bilinear constraint (and similarly for big-M).
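To see why the envelope is exact for binary $\gamma_j$, the following sketch (Python/numpy, with purely illustrative bounds $\barbelow{Y}_{ij}=-2$ and $\bar{Y}_{ij}=3$) computes the interval of $M_{ij}$ values admitted by the standard McCormick inequalities and confirms that it collapses to the single point $Y_{ij}\gamma_j$ whenever $\gamma_j\in\{0,1\}$:

```python
import numpy as np

def mccormick_feasible_M(Y, g, Ylo=-2.0, Yhi=3.0):
    """Interval of M values satisfying the standard McCormick envelope
    for M = Y*g with Y in [Ylo, Yhi] and g in [0, 1]."""
    lower = max(Ylo * g, Y + Yhi * g - Yhi)   # M >= ...
    upper = min(Yhi * g, Y + Ylo * g - Ylo)   # M <= ...
    return lower, upper

# For binary g the envelope is tight: it pins M to exactly Y*g.
for g in (0.0, 1.0):
    for Y in np.linspace(-2.0, 3.0, 11):
        lo, hi = mccormick_feasible_M(Y, g)
        assert abs(lo - Y * g) < 1e-12 and abs(hi - Y * g) < 1e-12
```

For fractional $g\in(0,1)$ the same interval is strictly wider, which is what the continuous relaxation in the BnB nodes exploits.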
Now that we have a convex MISDP, problem \textbf{P4} can be solved using one of the aforementioned techniques. In the next section, we develop a customized BnB algorithm that can be used to compute both optimal and suboptimal solutions for \textbf{P4}.
\vspace{-0.25cm}
\subsection{Structure-Exploiting BnB Algorithm }\label{ssec:bnb_misdp}
A natural way to solve such an MISDP is through a BnB algorithm. In a general-purpose BnB algorithm, an SDP is solved at every node, where the SDP is obtained by relaxing the integer variables to their convex counterparts. Thus, the complexity of a BnB algorithm stems from the difficulty of solving the SDPs and from the number of SDPs that need to be solved.
In practical situations, finding a relatively good solution in a much shorter time is often more desirable than finding an optimal one.
{Although some general-purpose BnB solvers for MISDPs are available,
these conventional solvers cannot be configured to exploit the structure of the problem at hand in order to reduce the complexity of solving the SSP.}
\setlength{\textfloatsep}{5pt}
{\begin{algorithm}[t]
\caption{\text{SE-BnB Algorithm}}\label{alg:BnB-SSP}
\DontPrintSemicolon
\textbf{input:} $\mathcal{F}$, $\epsilon_f$, $\mathrm{maxBranch}$\;
\textbf{initialize:} $\mathcal{C} = \emptyset$, $\mathcal{W} = \emptyset$, $\mathrm{LB},\mathrm{UB} = \infty$, $k = 0$, $\boldsymbol v^* = \boldsymbol 0$\;
\textbf{process root node:} $\mathcal{T}\leftarrow\mathrm{RelaxSol}(\mathcal{F})$\;
\textbf{update:} $\mathrm{LB}\leftarrow \boldsymbol c_v^{\top}\mathcal{T}(2)$,
$\mathcal{C}\leftarrow\mathcal{C}\cup\{\mathcal{T}\}$\;
\While{$ \mathrm{UB}-\mathrm{LB}> \epsilon_f$ {\bf and} $k < \mathrm{maxBranch}$\label{alg:1_loop}}{
$\tilde{\mathcal{T}} = \argmin_{\mathcal{T}_k\in \mathcal{C}} \boldsymbol c_v^{\top}\mathcal{T}_k(2)$,
$\mathcal{C}\leftarrow\mathcal{C}\setminus\{\tilde{\mathcal{T}}\}$, $\mathrm{LB}\leftarrow \boldsymbol c_v^{\top}\tilde{\mathcal{T}}(2)$\;
\If{$\tilde{\mathcal{T}}(2)_i\in \{0,1\},\;\forall i\in\mbb{I}_v(\boldsymbol \gamma)$}{
$\tilde{\mathcal{T}}(3)\leftarrow\tilde{\mathcal{T}}(2)$\;
}\Else{
$\{\tilde{\mathcal{T}}(3),\mathcal{W}\}\leftarrow\mathrm{FeasSol}(\tilde{\mathcal{T}},\mathcal{W})$\;
\textbf{find:} $i\in\mbb{I}_v(\boldsymbol\gamma)$ such that $\tilde{\mathcal{T}}(2)_i\notin \{0,1\}$ and $\tilde{\mathcal{T}}(2)_i = \min_{j\in\mbb{I}_v(\boldsymbol\gamma)} \@ifstar{\oldabs}{\oldabs*}{\tilde{\mathcal{T}}(2)_j-\tfrac{1}{2}}$\;
$\mathcal{F}_L:= \{\boldsymbol v\in \tilde{\mathcal{T}}(1)\,\vert \,v_i\leq 0\}$ \;
\If{$\forall\,\boldsymbol \gamma\in \{0,1\}^{n_y}, \boldsymbol \gamma \in \mathbfcal{I}(\mathcal{F}_L)$, then $\boldsymbol \gamma \notin \mathcal{W}$\label{step:1a}}{ $\mathcal{T}_L\leftarrow\mathrm{RelaxSol}(\mathcal{F}_L)$, $\mathcal{C}\leftarrow\mathcal{C}\cup\{\mathcal{T}_L\}$}
$\mathcal{F}_R:= \{\boldsymbol v\in \tilde{\mathcal{T}}(1)\,\vert\, v_i\geq 1\}$\;
\If{$\forall\,\boldsymbol \gamma\in \{0,1\}^{n_y}, \boldsymbol \gamma \in \mathbfcal{I}(\mathcal{F}_R)$, then $\boldsymbol \gamma \notin \mathcal{W}$\label{step:1b}}{$\mathcal{T}_R\leftarrow\mathrm{RelaxSol}(\mathcal{F}_R)$, $\mathcal{C}\leftarrow\mathcal{C}\cup\{\mathcal{T}_R\}$}
$k\leftarrow k+1$\;
}
\If{$\boldsymbol c_v^{\top}\tilde{\mathcal{T}}(3) < \mathrm{UB}$}{$\mathrm{UB}\leftarrow\boldsymbol c_v^{\top}\tilde{\mathcal{T}}(3)$, $\boldsymbol v^* \leftarrow \tilde{\mathcal{T}}(3)$}
\ForEach{$\mathcal{T}_j\in \mathcal{C}$}{
\If{$\boldsymbol c_v^{\top}\mathcal{T}_j(2) > \mathrm{UB}$}{%
$\mathcal{C}\leftarrow \mathcal{C}\setminus \{\mathcal{T}_j\}$\;
}
}
}
\textbf{output:} $\mathrm{LB}$, $\mathrm{UB}$, $\boldsymbol v^*$\;
\end{algorithm}
}
To that end, we present a customized BnB algorithm that can compute optimal and suboptimal solutions for \textbf{P4} more efficiently, owing to \textit{(a)} its ability to exploit the problem structure and \textit{(b)} heuristics that quickly find good suboptimal solutions.
To proceed, define
\begingroup
\allowdisplaybreaks
\begin{align*}
\mathcal{F} &:= \{\boldsymbol v\,\vert\,\mathcal{L}_v(\boldsymbol v) + \mathcal{C}_v\succeq 0,\;v_i\in \{0,1\},\;\forall i\in\mbb{I}_v(\boldsymbol \gamma)\} \\
\mathcal{R} &:= \{\boldsymbol v\,\vert\,\mathcal{L}_v(\boldsymbol v) + \mathcal{C}_v\succeq 0,\;v_i\in [0,1],\;\forall i\in\mbb{I}_v(\boldsymbol \gamma)\},
\end{align*}
\endgroup
i.e., $\mathcal{F}$ denotes the feasible set for \textbf{P4} whereas $\mathcal{R}$ is its convex relaxation.
Note that we have $\mathcal{F} \subseteq\mathcal{R}$ because in $\mathcal{R}$, the integer constraints are relaxed.
The proposed BnB algorithm solves two convex problems at each BnB node so that a good suboptimal solution can be found early: the convex relaxation of \textbf{P4}, to obtain a \textit{local} lower bound, and \textbf{P4} with a fixed SA combination, to obtain a \textit{global} upper bound.
Within this section, each node $i$ is represented by a $3$-tuple $\mathcal{T}_i := \left(\mathcal{F}_i,\barbelow{\boldsymbol v},\bar{\boldsymbol v}\right)$, where
\begin{align*}
\barbelow{\boldsymbol v} = \argmin_{\boldsymbol v\in\mathcal{R}_i}\boldsymbol c_v^{\top}{\boldsymbol v},\;\quad\;\bar{\boldsymbol v}\in \{\boldsymbol v\;\vert\; \boldsymbol v\in\mathcal{F}_i\},
\end{align*}
such that $\mathcal{F}_i = \mathcal{T}_i(1)$, $\barbelow{\boldsymbol v} = \mathcal{T}_i(2)$, and $\bar{\boldsymbol v} = \mathcal{T}_i(3)$.
We also use an abstract set $\mathcal{I}(\cdot)\subseteq \mbb{R}^{n_y}$, where $\mathcal{I}(\mathcal{T}_i) \subset \mathcal{T}_i(1)$, to represent the restrictions (or \textit{integer cuts}) on the integer variables $v_i$, $i\in \mbb{I}_v(\boldsymbol \gamma)$, which are added during the branching process of the BnB algorithm.
To stop the algorithm when a suboptimal solution is sought, two stopping criteria characterized by $\epsilon_f\in\mbb{R}_+$ and $\mathrm{maxBranch}\in\mbb{N}_+$ are employed. Let $\mathrm{LB}$ and $\mathrm{UB}$ be the best known lower and upper bounds at the end of the algorithm. When the optimality gap $\mathrm{UB}-\mathrm{LB}$ is zero, the solution is optimal; if $\mathrm{UB}-\mathrm{LB} \leq \epsilon_f$, the solution is regarded as $\epsilon_f$-optimal; otherwise, the solution is only suboptimal. The quantity $\mathrm{maxBranch}$, sometimes abbreviated as $\mathrm{mBr}$, determines the maximum allowable number of branchings. These BnB routines, referred to as \textit{structure-exploiting BnB} (SE-BnB), are detailed in Algorithm \ref{alg:BnB-SSP}. The algorithmic function $\mathrm{RelaxSol}(\cdot)$ solves the relaxed problem $\barbelow{f} = \min_{\boldsymbol v\in\mathcal{R}}\boldsymbol c_v^{\top}{\boldsymbol v}$ for a given $\mathcal{F}$, where $\barbelow{\boldsymbol v}$ is the minimizer, and wraps the results into $\mathcal{T}$.
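For intuition on how the node tuples and bounds interact, the following self-contained sketch runs the same branch-bound-prune loop on a toy problem in which the per-node SDP relaxation is replaced by an LP solved with SciPy; the cost vector, the coverage constraint, and the instance itself are illustrative stand-ins, not the paper's formulation:

```python
from scipy.optimize import linprog

# Toy stand-in for the BnB loop: minimize a "sensor" cost
# 3*v1 + 2*v2 + 4*v3 subject to a hypothetical coverage constraint
# 2*v1 + 3*v2 + 4*v3 >= 5 with v binary; the per-node relaxation
# is an LP here rather than an SDP.
c = [3.0, 2.0, 4.0]
A_ub, b_ub = [[-2.0, -3.0, -4.0]], [-5.0]

def relax_sol(bounds):
    """RelaxSol analogue: solve the continuous relaxation on a node."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x if res.success else None

def bnb():
    nodes = [tuple((0.0, 1.0) for _ in c)]    # root node: all v_i in [0, 1]
    ub, best = float("inf"), None
    while nodes:
        bounds = nodes.pop()
        v = relax_sol(list(bounds))
        if v is None:
            continue                           # infeasible node: prune
        lb = sum(ci * vi for ci, vi in zip(c, v))
        if lb >= ub:
            continue                           # bound: cannot improve the incumbent
        frac = [i for i, vi in enumerate(v) if 1e-6 < vi < 1 - 1e-6]
        if not frac:
            ub, best = lb, [int(vi + 0.5) for vi in v]  # integral: new incumbent
            continue
        i = min(frac, key=lambda j: abs(v[j] - 0.5))    # most fractional variable
        for fix in (0.0, 1.0):                 # branch v_i <= 0 and v_i >= 1
            child = list(bounds)
            child[i] = (fix, fix)
            nodes.append(tuple(child))
    return ub, best
```

On this toy instance the loop returns the optimal cost $5$ with $\boldsymbol v = (1,1,0)$; the most-fractional branching rule mirrors the \textbf{find} step of Algorithm \ref{alg:BnB-SSP}.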
In addition, Algorithm \ref{alg:BnB-SSP} uses a heuristic procedure for efficiently finding a feasible solution for \textbf{P4} at each node, if such a solution exists. This heuristic generates a random combination of sensors $\boldsymbol \gamma_c$. To do so, we define the set $\mathcal{W}$ as
\vspace{-0.05cm}
\begin{align*}
\mathcal{W} := \left\{\boldsymbol\gamma\in\{0,1\}^{n_y}\,\Big\vert\,\bmat{\boldsymbol w^{\top}\,\,\boldsymbol\gamma^{\top}}^{\top}\notin \mathcal{F},\;\forall \boldsymbol w\right\},
\end{align*}
i.e., the set of combinations of sensors that are infeasible for \textbf{P4}.
By exploiting the property established in Corollary \ref{cor:lip_obs_lmi_sensor}, we can effectively generate a candidate sensor combination $\boldsymbol \gamma_c$ that is not known to be infeasible for \textbf{P4}.
This heuristic, embedded in the function $\mathrm{FeasSol}(\cdot)$, is presented in Algorithm \ref{algo_feassol}.
This idea is implemented in Steps \ref{step:1a} and \ref{step:1b} of Algorithm \ref{alg:BnB-SSP}.
Next, some modifications that make the SE-BnB algorithm more efficient are discussed.
\setlength{\floatsep}{5pt}
{
\begin{algorithm}[t]
\caption{$\mathrm{FeasSol}$}\label{algo_feassol}
\DontPrintSemicolon
\textbf{input:} $\tilde{\mathcal{T}}$, $\mathcal{W}$\;
\textbf{candidate generation:} generate a random sensor combination $\boldsymbol \gamma_c \in \{0,1\}^{n_y}$ such that $\boldsymbol \gamma_c\notin \mathcal{W}$ and
$\boldsymbol \gamma_c \in \mathcal{I}(\tilde{\mathcal{T}}(1))$\;
\textbf{solve:} $\bar{f} = \min_{\boldsymbol v\in\tilde{\mathcal{T}}(1),\,v_i = \gamma_{c,i},\,\forall i\in\mbb{I}_v(\boldsymbol\gamma)}\boldsymbol c_v^{\top}{\boldsymbol v}$ with minimizer $\bar{\boldsymbol v}$\;
\If{$\bar{f} = \infty$}{
$\mathcal{W}\leftarrow \mathcal{W}\cup \{\boldsymbol\gamma_c\}$
}
\textbf{output:} $\bar{\boldsymbol v}$, $\mathcal{W}$
\end{algorithm}}
\vspace{-0.2cm}
\begin{itemize}[leftmargin=*]
\setlength\itemsep{-0.3em}
\item In $\mathrm{RelaxSol}(\cdot)$, the relaxed \textbf{P4} is solved. Note that an integer cut is appended at every node (apart from the root node), constraining a particular relaxed integer variable, say $v_i$ for $i\in\mbb{I}_v(\boldsymbol\gamma)$, to be $v_i \leq 0$ or $v_i \geq 1$. As $v_i\in[0,1]$, the only feasible value is $v_i = 0$ or $v_i = 1$. In either case, the resulting McCormick's envelope (or big-M) yields the \textit{best} relaxation, which makes \eqref{eq:mccormick_envelope} redundant. At this instance, constraint \eqref{eq:mccormick_envelope} can be discarded and replaced with $M_{ij} = 0$ or $M_{ij} = Y_{ij}$, depending on the value of $v_i$.
\item Recall that the SE-BnB algorithm attempts to find a feasible solution at each node by fixing $\boldsymbol \gamma$ and solving \textbf{P4}---performed in $\mathrm{FeasSol}(\cdot)$. Since \textbf{P4} is equivalent to \textbf{P3} and \textbf{P3} becomes an SDP when $\boldsymbol \gamma$ is fixed, one can solve \textbf{P3} to find a feasible solution. Furthermore, given a fixed $\boldsymbol \gamma$, the equality constraint $\boldsymbol M = \boldsymbol Y \boldsymbol \Gamma$ becomes redundant, and hence this constraint, as well as the variable $\boldsymbol M$, can be removed from \textbf{P3}. Moreover, when a certain variable $\gamma_{i} = 0$, the corresponding row of $\boldsymbol C$ and column of $\boldsymbol Y$ become redundant and can also be removed to obtain a smaller number of variables (essentially, only the rows of $\boldsymbol C$ and columns of $\boldsymbol Y$ associated with activated sensors are considered).
\end{itemize}
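The column/row elimination in the second item above can be sanity-checked numerically. The sketch below (Python/numpy, with illustrative dimensions and a hypothetical sensor combination) verifies that dropping the columns of $\boldsymbol Y$ and rows of $\boldsymbol C$ indexed by deactivated sensors leaves the product $\boldsymbol Y\boldsymbol\Gamma\boldsymbol C$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y = 4, 6                       # illustrative dimensions
Y = rng.standard_normal((n_x, n_y))
C = rng.standard_normal((n_y, n_x))
gamma = np.array([1, 0, 1, 0, 0, 1])  # a hypothetical fixed sensor combination

# Full product Y * Gamma * C with Gamma = diag(gamma)
full = Y @ np.diag(gamma) @ C

# Reduced product: keep only columns of Y / rows of C with gamma_i = 1
active = gamma.astype(bool)
reduced = Y[:, active] @ C[active, :]

assert np.allclose(full, reduced)
```

The reduced SDP thus carries only the decision variables attached to activated sensors, which is the source of the per-node complexity savings.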
\vspace{-0.2cm}
The aforementioned procedures enable the SE-BnB to exploit the structure of the SSP; as a result, the complexity of solving the SDPs required to find bounds at each node of the BnB tree is lowered.
\vspace{-0.35cm}
\section{Some Important Extensions}\label{sec:misc}
\vspace{-0.25cm}
\subsection{Controllability and Actuator Selection}\label{ssec:actuator_controllability}
In this section, NDS controllability and the actuator selection problem (ASP) are discussed, in which a similar concept is considered to quantify controllability for a given $\gamma_l$.
\vspace{-0.10cm}
\begin{mydef}\label{def:lip_controllability}
NDS \eqref{eq:gen_dynamic_systems} is said to be $\gamma_l$-controllable if and only if there exist $\boldsymbol Q\in \mbb{S}^{n_x}_{++}$, $\boldsymbol X\in \mbb{R}^{n_u\times n_x}$, and $\sigma\in\mbb{R}_{++}$ such that the LMI below is feasible
\begin{align}
\small \hspace{-0.3cm} \bmat{
\boldsymbol Q\boldsymbol A ^{\top} + \boldsymbol A\boldsymbol Q - \boldsymbol X ^{\top}\boldsymbol B ^{\top} -\boldsymbol B\boldsymbol X +\sigma \boldsymbol G\boldsymbol G^\top & * \\
\boldsymbol Q & -\frac{\sigma}{\gamma_l^2} \boldsymbol I} & \prec 0. \label{eq:LMI_lip_con}
\end{align}
\end{mydef}
\vspace{-0.1cm}
If the NDS is $\gamma_l$-controllable, then from the above LMI, the stabilizing controller gain matrix is given as $\boldsymbol K = \boldsymbol X\boldsymbol Q^{-1}$ with control action $\boldsymbol u(t)= -\boldsymbol K \boldsymbol x(t)$ \citep{Yadegar2018}. It is worth noting that, unlike \eqref{eq:LMI_lip_obs}, LMI \eqref{eq:LMI_lip_con} is only a sufficient condition. A question similar to \textbf{Q1} now emerges: \textit{what role does $\gamma_l$ play in $\gamma_l$-controllability?} By performing an analysis similar to that in Section \ref{sec:ssp} (not shown here due to space constraints), it is revealed that if $\gamma_l$ is sufficiently small, then there exists at least one solution for LMI \eqref{eq:LMI_lip_con} ({if $\sigma$ is fixed}).
Next, the relation between actuators and NDS controllability is discussed.
Let $\pi_i\in\{0,1\}$ be a binary variable representing the activation or deactivation of actuator/control node on each subsystem $i$ such that $\pi_i = 1$ if control node at $i$ is activated and $\pi_i = 0$ otherwise. For compactness, let us define $\boldsymbol \pi := \left[\pi_1\,\,\pi_2\,\,\cdots\,\,\pi_N\right]^{\top}$ and $\boldsymbol\Pi$ as
$$\boldsymbol \Pi := \mathrm{Blkdiag}\big(\pi_1\boldsymbol I_{n_{u_1}},\pi_2\boldsymbol I_{n_{u_2}},\hdots,\pi_N\boldsymbol I_{n_{u_N}}\big).$$
As such, ASP for NDS \eqref{eq:gen_dynamic_systems} can be expressed as
\vspace{-0.05cm}
\begin{align}
\dot{\boldsymbol x}(t)= \mA \boldsymbol x (t) + \boldsymbol G\boldsymbol f(\boldsymbol x) + \mB\boldsymbol \Pi\boldsymbol u(t),\quad\boldsymbol y(t) = \mC \boldsymbol x (t), \label{eq:gen_dynamic_systems_act_sel}
\end{align}
from which a simplified high-level formulation of the ASP follows:
\vspace{-0.05cm}
\begin{align*}
(\mathbf{P5})\;\;\; \minimize \;\;\;& \boldsymbol c^{\top} \boldsymbol \pi+ \mathrm{ControlObjective} \\
\subjectto \;\;\;& \eqref{eq:gen_dynamic_systems_act_sel},\; \boldsymbol \pi\in \mathcal{G}_\pi,\; \mathrm{ControlConstraints}.
\end{align*}
The goals in the above are threefold: \textit{(i)} performing feedback stabilizing control on NDS \eqref{eq:gen_dynamic_systems_act_sel}, \textit{(ii)} utilizing the smallest possible number of actuators (or satisfying a given constraint over the library of actuators), and \textit{(iii)} optimizing a specific control metric. In \textbf{P5}, vector $\boldsymbol c\in\mbb{R}^N_{+}$ assigns a weight to each actuator $\pi_i$, whereas $\mathcal{G}_\pi\subseteq \{0,1\}^N$ represents the actuators' logistic constraints.
The ASP for Lipschitz NDSs can be formulated as
\begin{subequations}\label{eq:act_sel_con}
\begin{align}
&\hspace*{-0.3cm}(\mathbf{P6})\; \minimize_{ \boldsymbol Q, \boldsymbol X, \epsilon, \boldsymbol \pi}\;\; \boldsymbol c^{\top} \boldsymbol \pi\\
&\hspace*{-0.3cm}\subjectto \;\; \begin{bmatrix}
\boldsymbol Q\boldsymbol A ^{\top} + \boldsymbol A\boldsymbol Q +\sigma \boldsymbol G\boldsymbol G^\top & \\
- \boldsymbol X ^{\top}\boldsymbol\Pi\boldsymbol B ^{\top} -\boldsymbol B\boldsymbol\Pi\boldsymbol X & *\\
\boldsymbol Q & -\frac{\sigma}{\gamma_l^2} \boldsymbol I \end{bmatrix} \preceq 0 \label{eq:act_sel_con_1} \\
&\quad \quad\;\;\boldsymbol Q\succ 0, \;\sigma > 0,\; \boldsymbol \pi\in \mathcal{G}_\pi,\;
\boldsymbol \pi \in \{0,1\}^N.\label{eq:act_sel_con_2}\vspace*{-0.05cm}
\end{align}
\end{subequations}
In \textbf{P6}, the objective is to minimize the number of activated actuators while \textit{(a)} finding a stabilizing controller gain matrix $\mK$ and \textit{(b)} satisfying the actuators' logistic constraints.
Notice that \textbf{P6} is also a nonconvex optimization problem with a MIBMI constraint.
Further analysis shows that, in general, in order to employ fewer actuators, one should obtain the smallest possible Lipschitz constant.
\vspace{-0.25cm}
\subsection{NDS Parameterization for Non-Lipschitz Systems}\label{ssec:NDS_parameterization}
Note that many observer/controller designs for function classes other than Lipschitz continuous---such as one-sided Lipschitz, quadratically inner-bounded, bounded Jacobian, and quadratically bounded; see \citep{Phanomchoeng2010b,Abbaszadeh2010,guo2017decentralized} for notable examples---can also be considered and utilized to construct SASPs.
For example, the nonlinearities in a NDS may satisfy one-sided Lipschitz (OSL) and quadratically inner-bounded (QIB) conditions---see Definition \ref{def:osl_qib}.
\vspace{-0.1cm}
\begin{mydef}[OSL \& QIB]\label{def:osl_qib}
The mapping $\boldsymbol f :\mathbb{R}^{n_x}\rightarrow \mathbb{R}^{n_g}$ in \eqref{eq:gen_dynamic_systems} is locally OSL in $\mathbfcal{X}$ if for any $\boldsymbol x, \hat{\boldsymbol x}\in \mathbfcal{X}$ it holds that
\begin{subequations}\label{eq:osl_qib_def}
\vspace{-0.1cm}
\begin{align*}
\langle \boldsymbol G(\boldsymbol f(\boldsymbol x)-\boldsymbol f(\hat{\boldsymbol x})),\boldsymbol x-\hat{\boldsymbol x}\rangle\leq \gamma_s \@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol x - \hat{\boldsymbol x}}_2^{2},
\end{align*}
for $\gamma_s\in\mbb{R}$ and QIB in $\mathbfcal{X}$ if for any $\boldsymbol x, \hat{\boldsymbol x}\in \mathbfcal{X}$ it holds that
\begin{align*}
&\langle\boldsymbol G(\boldsymbol f(\boldsymbol x)-\boldsymbol f(\hat{\boldsymbol x})),\boldsymbol G(\boldsymbol f(\boldsymbol x)-\boldsymbol f(\hat{\boldsymbol x}))\rangle \leq \label{eq:qib_def}\\ &\qquad\gamma_{q1} \@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol x - \hat{\boldsymbol x}}_2^{2} +\gamma_{q2}\langle\boldsymbol G(\boldsymbol f(\boldsymbol x)-\boldsymbol f(\hat{\boldsymbol x})),\boldsymbol x-\hat{\boldsymbol x}\rangle, \nonumber
\end{align*}
for $\gamma_{q1},\gamma_{q2} \in \mbb{R}$.
\end{subequations}
\end{mydef}
\vspace{-0.1cm}
In Definition \ref{def:osl_qib}, the notation $\langle\cdot,\cdot\rangle$ denotes the standard inner product.
By considering OSL and QIB, the parameterization of NDSs entails finding the constants $\gamma_s$, $\gamma_{q1}$, and $\gamma_{q2}$. These constants can be computed either analytically or numerically via stochastic point-based methods or deterministic interval-based methods. If a numerical approach is pursued, the computation of these constants reduces to solving global optimization problems. Our work in \cite{nugroho2020nonlinear} investigates NDS parameterization using interval-based global maximization.
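As a concrete illustration of the point-based route, the sketch below (Python/numpy) estimates $\gamma_s$ and $\gamma_{q1}$ (taking $\gamma_{q2}=0$ for simplicity) for the toy scalar nonlinearity $f(x)=\sin(x)$ with $\boldsymbol G = \boldsymbol I$ by maximizing the defining ratios over random sample pairs; unlike the interval-based method of \cite{nugroho2020nonlinear}, sampling only yields under-estimates of the true constants:

```python
import numpy as np

# Point-based (sampling) estimates of the OSL constant gamma_s and a QIB
# constant gamma_q1 (with gamma_q2 = 0) for the toy scalar nonlinearity
# f(x) = sin(x) with G = I on the region X = [-pi, pi].
rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, size=10000)
xh = rng.uniform(-np.pi, np.pi, size=10000)

df = np.sin(x) - np.sin(xh)   # f(x) - f(x_hat)
dx = x - xh                   # x - x_hat

gamma_s = np.max((df * dx) / dx**2)   # OSL ratio <f(x)-f(xh), x-xh>/||x-xh||^2
gamma_q1 = np.max(df**2 / dx**2)      # QIB ratio ||f(x)-f(xh)||^2/||x-xh||^2

# For f = sin, both constants are at most 1 (attained near x = 0).
assert gamma_s <= 1.0 + 1e-9 and gamma_q1 <= 1.0 + 1e-9
```

The same two-ratio maximization carries over verbatim to vector-valued $\boldsymbol f$ by replacing the scalar products with inner products and Euclidean norms.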
\vspace{-0.25cm}
\subsection{From OSL and QIB to Robust Selection}\label{ssec:SASP_robustness}
Now we demonstrate how the proposed methodology can be easily extended to solve SASPs for other classes of nonlinearities.
Suppose that $\boldsymbol f(\cdot)$ in NDS \eqref{eq:gen_dynamic_systems} is both OSL and QIB with constants $\gamma_s$, $\gamma_{q1}$, and $\gamma_{q2}$. Using the observer design presented in \citep{zhang2012full}, the corresponding SSP can be formed as
\vspace{-0.05cm}
\begin{subequations}\label{eq:sen_sel_obs_osl_qib}
\begin{align}
&\hspace*{-0.2cm}\; \minimize_{ \boldsymbol P, \boldsymbol Y, \epsilon_1, \epsilon_2, \boldsymbol \gamma}\;\; \boldsymbol c^{\top} \boldsymbol \gamma \\
&\hspace*{-0.2cm}\subjectto \;\; \begin{bmatrix}
\boldsymbol A^\top \bm P + \bm P\boldsymbol A + \epsilon_1\gamma_s\boldsymbol I \\+\epsilon_2\gamma_{q1}\boldsymbol I- \boldsymbol Y\boldsymbol \Gamma \boldsymbol C- \boldsymbol C ^{\top}\boldsymbol \Gamma\boldsymbol Y ^{\top} &* \\
\boldsymbol G^{\top}\boldsymbol P+\dfrac{\gamma_{q2}\epsilon_2-\epsilon_1}{2}\boldsymbol I & -\epsilon_2 \boldsymbol I
\end{bmatrix} \preceq 0 \label{eq:sen_sel_obs_osl_qib1} \\
&\quad \quad \quad \quad\;\;\boldsymbol P\succ 0, \;\epsilon_1,\epsilon_2 > 0,\; \boldsymbol \gamma\in \mathcal{G}_\gamma,\;
\boldsymbol \gamma \in \{0,1\}^N,\label{eq:sen_sel_obs_osl_qib2}\vspace*{-0.05cm}
\end{align}
\end{subequations}
while the SSP for discrete-time NDSs is formulated as \citep{Zhang2012anote}
\vspace{-0.05cm}
\begin{subequations}\label{eq:sen_sel_obs_osl_qib_discrete}
\begin{align}
&\hspace*{-0.2cm}\; \minimize_{ \boldsymbol P, \boldsymbol Y, \epsilon_1, \epsilon_2, \boldsymbol \gamma}\;\; \boldsymbol c^{\top} \boldsymbol \gamma \\
&\hspace*{-0.0cm}\subjectto \;\; \\
& \begin{bmatrix}
-\boldsymbol P + \epsilon_1\gamma_s\boldsymbol I + \epsilon_2\gamma_{q1}\boldsymbol I & * & *\\
\boldsymbol P \boldsymbol A- \boldsymbol Y \boldsymbol \Gamma \boldsymbol C+\dfrac{\gamma_{q2}\epsilon_2-\epsilon_1}{2}\boldsymbol I & \boldsymbol P-\epsilon_2\boldsymbol I & *
\\
\boldsymbol P \boldsymbol A- \boldsymbol Y \boldsymbol \Gamma \boldsymbol C & \boldsymbol O & -\boldsymbol P
\end{bmatrix} \preceq 0 \label{eq:sen_sel_obs_osl_qib_discrete1} \\
&\;\;\boldsymbol P\succ 0, \;\epsilon_1,\epsilon_2 > 0,\; \boldsymbol \gamma\in \mathcal{G}_\gamma,\;
\boldsymbol \gamma \in \{0,1\}^N.\label{eq:sen_sel_obs_osl_qib_discrete2}\vspace*{-0.05cm}
\end{align}
\end{subequations}
Notice that the problem described in \eqref{eq:sen_sel_obs_osl_qib} possesses a nonconvexity similar to that in \textbf{P2}.
In addition to the above, the proposed sensor selection framework can also accommodate robustness in the objective. For instance, by considering the Lipschitz condition, a robust SSP with the $\mathcal{L}_{\infty}$ metric can be constructed as \citep{nugroho2018journal}
\vspace{-0.05cm}
\begin{subequations}\label{eq:l_inf_theorem}
\begin{align}
&\minimize_{\boldsymbol P, \boldsymbol Y, \epsilon, \alpha, \mu_{0,1,2}} \quad \mu_0\mu_1 + \mu_2 \label{eq:l_inf_theorem_0}\\
&\subjectto \nonumber \\
&\bmat{ \mA^{\top}\mP + \mP\mA -\mC^{\top}\boldsymbol \Gamma\mY^{\top} \\ -\mY\boldsymbol \Gamma\mC+\alpha\mP+\epsilon\gamma_l^2\mI&*&*\\
\mP & -\epsilon\mI&*\\
\boldsymbol {B_{\mathrm{w}}}^{\top}\mP-\boldsymbol {D_{\mathrm{w}}}^{\top}\mY^{\top}&\mO&-\alpha\mu_0\mI} \preceq 0 \label{eq:l_inf_theorem_1}\\
&\bmat{-\mP & * & * \\
\mO & -\mu_2\mI & *\\
\mZ & \mO & -\mu_1\mI}\preceq 0, \label{eq:l_inf_theorem_2}\\
&\;\;\boldsymbol P\succ 0, \;\epsilon,\mu_0,\mu_1,\mu_2 \geq 0,\;\alpha > 0,\; \boldsymbol \gamma\in \mathcal{G}_\gamma,\;
\boldsymbol \gamma \in \{0,1\}^N,\label{eq:l_inf_theorem_3}\vspace*{-0.05cm}
\end{align}
\end{subequations}
where matrices $\boldsymbol {B_{\mathrm{w}}}$ and $\boldsymbol {D_{\mathrm{w}}}$ represent how the disturbances are distributed, $\boldsymbol Z$ is the performance matrix with $\boldsymbol z(t) = \boldsymbol Z \boldsymbol e(t)$, and $\boldsymbol e(t)$ is the estimation error.
The objective herein is to select the best sensor combination, given a fixed number of activated sensors, that minimizes the worst-case disturbance attenuation $\mu = \sqrt{\mu_0\mu_1 + \mu_2}$. If the problem described in \eqref{eq:l_inf_theorem} is solved, then it is ensured that $\limsup_{t\rightarrow\infty} \@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol z(t)}_2\leq \mu \@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol w(t)}_{\mathcal{L}_{\infty}}$
where $\boldsymbol w(t)$ denotes the disturbance vector \citep{nugroho2018journal}.
Provided that the variable $\alpha$ and one of $\mu_0$ or $\mu_1$ are fixed, this problem, along with the ones described in \eqref{eq:sen_sel_obs_osl_qib} and \eqref{eq:sen_sel_obs_osl_qib_discrete}, shares a similar nonconvexity with \textbf{P2}; therefore, by using the proposed methodology, they can all be solved via the SE-BnB algorithm.
Next, we focus on numerically testing the SE-BnB algorithm in solving the SSP for an NDS satisfying the Lipschitz condition.
\begin{figure}
\vspace{-0.1cm}
\centering
\subfloat[\label{fig:assessment_1a}]{\includegraphics[keepaspectratio=true,scale=0.76]{assessment_1_time}}{}\vspace{0.05cm}
\subfloat[\label{fig:assessment_1b}]{\includegraphics[keepaspectratio=true,scale=0.76]{assessment_1_cost}}{}\vspace{-0.1cm}\hspace{-0.0cm}\vspace{-0.2cm}
\caption{Numerical experiment results for the network of nonlinear unstable nodes to find suboptimal solutions via SE-BnB and S-BnB: (a) computational time for different maximum allowable numbers of branchings $\mathrm{mBr}$ and (b) the resulting sensor costs. Solid lines correspond to SE-BnB, dashed lines to S-BnB. To account for the randomness of the heuristics in both SE-BnB and S-BnB, the numerical test is performed three times for each value of $N$ and the results shown correspond to the average values.}
\label{fig:assessment_1}\vspace{-0.00cm}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:assessment_4a}]{\includegraphics[keepaspectratio=true,scale=0.76]{assessment_4_time}}{}\vspace{0.1cm}\vspace{-0.05cm}
\subfloat[\label{fig:assessment_4b}]{\includegraphics[keepaspectratio=true,scale=0.76]{assessment_4_cost}}{}\vspace{-0.2cm}\hspace{-0.0cm}\vspace{-0.00cm}
\caption{Comparison between SE-BnB and the $\ell_1$-norm relaxation in searching for suboptimal solutions: (a) computational time and (b) the resulting sensor costs. To account for the randomness of the heuristics in SE-BnB, the numerical test is performed three times and the results shown correspond to the average values.}
\label{fig:assessment_4}\vspace{0.1cm}
\end{figure}
\vspace{-0.25cm}
\section{Numerical Assessment}\label{sec:num_test}
{In this section, we demonstrate the proposed methodology for solving the SSP for an NDS on a network of nonlinear unstable nodes. The simulations are performed using MATLAB R2019a, where
the YALMIP \citep{Lofberg2004} optimization package with the MOSEK \citep{Andersen2000} solver is used to solve the corresponding SDPs.} The network of nonlinear unstable nodes considered here is adapted from \citep{Motee2008}, which is originally a network of linear systems. To obtain nonlinear dynamics, a sinusoidal function is introduced on each node such that each node $i$ has the following dynamics
\begin{align}
\hspace*{-0.2cm}\dot{\boldsymbol x}_i = \bmat{\zeta_{1i} &1 \\1 & \zeta_{2i}}\boldsymbol x_i + \bmat{0\\ \beta_i}\sin(x_{2i}) + \sum_{j\neq i}e^{\alpha(i,j)}\boldsymbol x_j + \bmat{0\\ 1}u_i, \label{eq:nnus_dynamics}
\end{align}
where the constants $\zeta_{1i}$, $\zeta_{2i}$, and $\beta_i$ are randomly generated within $[-2,2]$, $[-2,2]$, and $[-1,1]$, respectively, for each $i$. The coupling between nodes $i$ and $j$ is determined by a function $\alpha(i,j)$ of their Euclidean distance, with the node positions randomly generated inside a box of size $5\times 5$. It can be verified that the Lipschitz constant for this system is $\gamma_l = \sqrt{N}$, where $N$ is the number of nodes. For simplicity, the weighting vector is set to $\boldsymbol c = \boldsymbol 1$, and $\boldsymbol Y$ is bounded such that $-10^3 \leq Y_{i,j} \leq 10^3$ for each $i,j$. From \eqref{eq:nnus_dynamics}, each node $i$ can have at most two measurements---hence, for $N$ subsystems, there are at most $2N$ measurements.
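For reproducibility, the construction of this test network can be sketched as follows (Python/numpy rather than the MATLAB used in the experiments); the choice $\alpha(i,j) = -\,\mathrm{dist}(i,j)$ is an assumption, since the text only specifies that $\alpha$ is a function of the Euclidean distance between nodes:

```python
import numpy as np

# Sketch of the randomly generated test network in the node dynamics above.
# ASSUMPTION: alpha(i, j) is taken as the negative Euclidean distance
# between randomly placed nodes, so couplings decay with distance.
rng = np.random.default_rng(2)
N = 10
pos = rng.uniform(0.0, 5.0, size=(N, 2))   # node positions in a 5 x 5 box
zeta = rng.uniform(-2.0, 2.0, size=(N, 2))
beta = rng.uniform(-1.0, 1.0, size=N)

A = np.zeros((2 * N, 2 * N))
for i in range(N):
    A[2*i:2*i+2, 2*i:2*i+2] = [[zeta[i, 0], 1.0], [1.0, zeta[i, 1]]]
    for j in range(N):
        if j != i:
            a_ij = -np.linalg.norm(pos[i] - pos[j])   # assumed coupling exponent
            A[2*i:2*i+2, 2*j:2*j+2] += np.exp(a_ij) * np.eye(2)

def g(x):
    """Stacked nonlinearity: beta_i * sin(x_{2i}) entering state 2 of node i."""
    out = np.zeros(2 * N)
    out[1::2] = beta * np.sin(x[1::2])
    return out

# Sampled check that the nonlinearity respects the stated bound sqrt(N)
x, xh = rng.standard_normal(2 * N), rng.standard_normal(2 * N)
ratio = np.linalg.norm(g(x) - g(xh)) / np.linalg.norm(x - xh)
assert ratio <= np.sqrt(N)
```

The final check confirms that the sampled Lipschitz ratio of the stacked sinusoidal nonlinearity stays below the stated constant $\gamma_l = \sqrt{N}$.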
\vspace{-0.25cm}
\subsection{SE-BnB and Standard BnB}
In the first case of this numerical assessment, we evaluate the performance of the SE-BnB algorithm in finding suboptimal solutions for the SSP. The optimality-gap tolerance is set to $10^{-4}$ with an integer tolerance of $10^{-6}$, while the maximum number of allowable branchings is restricted. We also invoke a logistic constraint such that at least $20\%$ of the available measurements are equipped with sensors. We test the new SE-BnB algorithm with various numbers of nodes and different values of $\mathrm{maxBranch}$ (or $\mathrm{mBr}$), and compare the results against our own implementation of the standard BnB algorithm (S-BnB)---which takes MISDP \textbf{P4} as-is while still utilizing the heuristic discussed in Section \ref{ssec:bnb_misdp}. The results of this numerical test are illustrated in Fig. \ref{fig:assessment_1}. It can be seen that, overall, the SE-BnB requires less computational time than the S-BnB. This is because, in the SE-BnB, the SDPs solved to obtain the local lower bound and global upper bound at each BnB node have reduced complexity compared to the S-BnB.
Fig. \ref{fig:assessment_1b} also shows that this reduction in computational time does not affect the resulting sensor costs, since the costs are determined solely by the number of explored nodes and the heuristics used to find global upper bounds. Both SE-BnB and S-BnB yield similar sensor costs. These costs decrease, despite some irregularities, as more branches are explored---the irregularities are attributed to the heuristics used in finding global upper bounds.
\vspace{-0.25cm}
\subsection{SE-BnB and $\ell_1$-Norm Relaxation}
Next, we assess the ability of our SE-BnB algorithm to find good suboptimal solutions relative to the $\ell_1$-norm relaxation developed in \citep{Candes2008} for a class of combinatorial optimization problems. {This technique is popular in SASPs for linear systems---for example, see \citep{Argha2017}.}
This particular approach works as follows. First, note that each $\gamma_i$ determines the values of $\mathrm{col}\left(\boldsymbol Y\right)_i$: $\mathrm{col}\left(\boldsymbol Y\right)_i$ can be nonzero only if $\gamma_i = 1$ and is zero otherwise. As such, the integer variable $\boldsymbol \gamma$ can be removed from the optimization variables and, consequently, the SSP boils down to minimizing the number of nonzero columns of $\boldsymbol Y$, i.e., $\sum_{i=1}^{n_y} \@ifstar{\oldnorm}{\oldnorm*}{\mathrm{col}\left(\boldsymbol Y\right)_i}_{\ell_0}$. Since the $\ell_0$-norm is nonconvex, it is replaced by the $\ell_1$-norm, yielding the following relaxed problem
\begin{subequations}\label{eq:sen_sel_obs_l1_norm}
\vspace*{-0.2cm}
\begin{align}
\hspace*{-0.3cm} (\mathbf{P7})\; \minimize_{ \boldsymbol P, \boldsymbol Y, \epsilon, \boldsymbol \gamma}\;\; & \sum_{i=1}^{n_y}c_iw_i^{(k)}\@ifstar{\oldnorm}{\oldnorm*}{\mathrm{col}\left(\boldsymbol Y\right)_i}_{\ell_1} \label{eq:sen_sel_obs_l1_norm1}\\
\hspace*{-0.3cm} \subjectto \;\; &\eqref{eq:LMI_lip_obs}, \;\boldsymbol P\succ 0, \;\epsilon > 0,\label{eq:sen_sel_obs_l1_norm3}
\vspace*{-0.25cm}
\end{align}
\end{subequations}
where $w_i^{(k)}$ is the weighting factor for column $i$ at iteration $k$. When $\boldsymbol w^{(k)}$ is fixed, \textbf{P7} becomes a convex SDP. The weighting factor is updated at each iteration $k$ using the solution from the previous iteration through the following update rule \citep{Candes2008}
\begin{align}
w_i^{(k+1)} := \frac{1}{\@ifstar{\oldnorm}{\oldnorm*}{\mathrm{col}\left(\boldsymbol Y\right)_i^{(k)}}_{\ell_1} + \kappa},\; \forall i=1,2,\hdots,n_y,\label{eq:weight_update}
\end{align}
where $\kappa > 0$ is a relatively small constant. Algorithm \ref{alg:L1-SSP}, adapted from \citep{Candes2008}, provides the detailed steps.
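The reweighting loop of Algorithm \ref{alg:L1-SSP} can be illustrated on a toy problem. The sketch below is a deliberate simplification: the inner SDP \textbf{P7} is replaced by a weighted lasso solved with plain ISTA, and the unknown is a sparse vector rather than the columns of $\boldsymbol Y$; only the structure of the loop (solve, then reweight via \eqref{eq:weight_update}) carries over.

```python
import numpy as np

def weighted_lasso_ista(A, b, w, lam=0.05, iters=500):
    """Stand-in for the inner convex problem: min_x 0.5||Ax-b||^2 + lam*sum_i w_i|x_i|."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))   # gradient step on the quadratic part
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step * w, 0.0)  # weighted soft-threshold
    return x

def reweighted_l1(A, b, kappa=1e-4, outer=5):
    """Iterative reweighting: solve, then update w as in the update rule above."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_lasso_ista(A, b, w)
        w = 1.0 / (np.abs(x) + kappa)        # w_i^{(k+1)} = 1 / (|x_i^{(k)}| + kappa)
    return x
```

The reweighting drives small entries toward zero (their penalty grows) while leaving large entries nearly unpenalized, which is why the iteration better approximates the $\ell_0$ objective than a single $\ell_1$ solve.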
In this numerical experiment, both algorithms are configured so that the objective is to find a solution with at most $N-1$ activated sensors. The $\ell_1$-norm relaxation is run with $\epsilon_w = 0.1$ and a maximum of $500$ iterations. Two values of $\kappa$ are considered, corresponding to two scenarios for this approach: $\ell_1$-norm relaxation-1 uses $\kappa = 10^{-4}$, while $\ell_1$-norm relaxation-2 uses $\kappa = 10^{-5}$.
The corresponding results are illustrated in Fig. \ref{fig:assessment_4}. In particular, Fig. \ref{fig:assessment_4b} indicates that our SE-BnB algorithm is able to find suboptimal sensor combinations with $N-1$ activated sensors while the $\ell_1$-norm relaxation fails to find sensor combinations under such prescribed limitation.
On the matter of computational time, as shown in Fig. \ref{fig:assessment_4a}, the SE-BnB algorithm requires less computational time compared to the $\ell_1$-norm relaxation. {This demonstrates an advantage of the proposed SE-BnB algorithm in finding suboptimal solutions for SSP over the $\ell_1$-norm relaxation technique, at least for the network of nonlinear unstable nodes described in \eqref{eq:nnus_dynamics}.}
\vspace{0.05cm}
\setlength{\textfloatsep}{5pt}
{\small \begin{algorithm}[t]
\caption{\text{Iterative $\ell_1$-Norm Relaxation for SSP}}\label{alg:L1-SSP}
\DontPrintSemicolon
\textbf{input:} $\epsilon_w$, $\kappa$, $\mathrm{maxIter}$\;
\textbf{initialize:} $k = 0$, $\boldsymbol w^{(1)} = \boldsymbol 1$ \;
\Do{$\@ifstar{\oldabs}{\oldabs*}{\@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol w^{(k+1)}}_{2}-\@ifstar{\oldnorm}{\oldnorm*}{\boldsymbol w^{(k)}}_{2}} > \epsilon_w$ {\bf and} $k < \mathrm{maxIter}$ \label{alg:stop}}{
$k\leftarrow k + 1$\;
solve \textbf{P7} in \eqref{eq:sen_sel_obs_l1_norm}, obtain $\boldsymbol Y^{(k)}$\;
\ForEach{$i\in\mbb{I}(n_y)$}{
update $w_i^{(k+1)}$ from $\boldsymbol Y^{(k)}$ using \eqref{eq:weight_update} \;
}
}
\textbf{find:} a feasible solution of LMI \eqref{eq:sen_sel_obs_l1_norm3} where $\boldsymbol Y$ corresponds to the nonzero columns of $\boldsymbol Y^{(k)}$ from the last iteration\;
\textbf{output:} $\boldsymbol P$, $\boldsymbol Y$\;
\end{algorithm}
}
\setlength{\floatsep}{5pt}
\subsection{Corroborating the Theory}
Finally, we employ the SE-BnB algorithm to find optimal solutions for the SSP under different values of the Lipschitz constant.
This numerical test is intended to find an empirical relation between the value of the Lipschitz constant and the number of activated sensors.
For this purpose, we consider the so-called Lipschitz multiplier $\eta > 0$ such that the Lipschitz constant for $N$ subsystems is given by $\gamma_l = \eta \sqrt{N}$. Fig. \ref{fig:assessment_3} illustrates the corresponding results. It can be observed that increasing the Lipschitz constant (equivalently, increasing $\eta$) raises the number of activated sensors; this is indicated by the nondecreasing sensor costs as $\eta$ increases. This result corroborates the theoretical foundation given in Section \ref{sec:obs_ssp_sensor} and Theorem \ref{thm:lip_obs_lmi}: in the context of the SSP, increasing the Lipschitz constant shrinks the feasible space of the problem, hence requiring more sensors to stabilize the estimation error dynamics.
\vspace{-0.35cm}
\section{Paper Summary and Limitations}\label{sec:conclusion}
\textcolor{black}{
A novel general framework for dealing with SASPs for NDSs is presented. Our approach builds upon SDP formulations for observer/controller designs developed in the literature for various classes of NDSs. Specifically, this paper focuses on addressing SASPs for Lipschitz NDSs, and our investigations show that a smaller Lipschitz constant allows fewer SAs to achieve stabilization for control/estimation purposes. A customized BnB algorithm, referred to as SE-BnB, which exploits the problem structure and utilizes new heuristics to efficiently obtain optimal and suboptimal solutions for SASPs, is proposed. The main advantage of our framework over other approaches from the literature is its capability to obtain the optimal SA combination by means of a BnB algorithm for stable/unstable NDSs that ensures stability for estimation and/or stabilization/tracking purposes.}
The algorithms presented in this paper come with limitations. First, this paper is not concerned with scaling the problem to extremely large-scale systems; this is a consequence of using SDPs and LMIs, which still have serious scalability issues, in addition to the worst-case complexity of BnB routines. Second, this paper does \textit{not} address the simultaneous selection of SAs and assumes that all SAs operate under ideal conditions (e.g., no saturation effects). To that end, our future work will focus on \textit{(i)} investigating approaches to solving SASPs other than BnB, such as the generalized Benders decomposition \cite{Zhang2016} and the branch-and-cut algorithm \cite{Kobayashi2020}, \textit{(ii)} extending the proposed approach to robust SASPs as well as the simultaneous selection of SAs through output feedback control and observer-based control policies, and \textit{(iii)} considering the effect of SA saturation in SASPs.
\begin{figure}
\centering
{\includegraphics[keepaspectratio=true,scale=0.76]{assessment_3_cost}}\vspace{-0.1cm}\vspace{-0.15cm}
\caption{Optimal sensor costs computed through the SE-BnB for different Lipschitz multiplier $\eta$ where the Lipschitz constant is given by $\gamma_l = \eta\sqrt{N}$.}
\label{fig:assessment_3}\vspace{-0.00cm}
\end{figure}
\vspace{-0.25cm}
\section*{Acknowledgments}
This material is based upon work supported by the National Science Foundation (NSF) under Grants 1728629, 1917164, 2151571, and 2152450. We gratefully acknowledge NSF's support and the thorough comments and suggestions made by the editor and the reviewers.
\vspace{-0.25cm}
\bibliographystyle{elsarticle-num}
\section{Introduction}~\label{sec-intro}
Conditional neural text generation models are expected to generate human-readable text that accurately describes source-side information~\citep{sutskever2014sequence,vinyals2015show}.~\blfootnote{$^*$Work done while the authors were at the University of Tokyo.} At training time, these models are trained in a supervised fashion by learning to predict ground-truth symbols~\citep{williams1989learning}, and at inference time, they usually generate text in a left-to-right fashion. These text generation models are known to suffer from a variety of generation errors. For example, the models repeat tokens unnecessarily and drop informative tokens; we refer to these two error types as the {\it repeating error} and the {\it dropping error}, respectively. Can we design a training framework to explicitly suppress a specific type of generation error?
We focus on the reinforcement learning (RL) framework proposed by \citet{ranzato2016sequence}, which allows us to incorporate a reward function that penalizes erroneous generation during training. The RL framework has recently been applied to many text generation tasks, and prior work has studied designing task-oriented reward functions~\citep{wu2016google,zhang2017sentence,rennie2017self}. More recent work~\citep{dai2017towards,gu2018neural} has proposed training a discriminative reward function, or simply a discriminator, in generative adversarial network frameworks. However, these reward functions are not designed to deal with a specific type of generation error.
\input{figure/overview}
To answer our research question, we propose a new RL framework that suppresses an arbitrary type of generation error. \figref{fig-overview} illustrates an overview of our method. We first train a reward function that discriminates between references and sentences containing the targeted type of errors. To train such a discriminator, we introduce artificially-generated negative examples created by injecting the targeted type of errors into the references (\figref{fig-overview} (a)). The trained discriminator is expected to capture the targeted errors, and following \citet{ranzato2016sequence}, a text generation model learns to generate text while suppressing these errors (\figref{fig-overview} (b)). In this paper, we take two error types as examples: the repeating and dropping errors. We show that our method can suppress these errors and improve generation performance on two translation and two captioning tasks.
Our contributions are three-fold:
\begin{itemize}
\item We propose a novel RL framework that suppresses text generation errors by specifically targeting an error type. We show that our method can suppress two types of generation errors: the repeating and dropping errors.
\item Our method can be combined with existing RL frameworks that use other reward functions such as GLEU, to further boost generation performance.
\item Our analyses show that the proposed method works more effectively as the training set becomes smaller, a regime in which more generation errors appear.
\end{itemize}
\section{Neural text generation}\label{sec-generation}
\subsection{Maximum likelihood training}
In conditional neural text generation models, an encoder encodes source-side information $s$ (e.g. a source sentence in machine translation~\citep{sutskever2014sequence} and an image in image captioning~\citep{vinyals2015show}) into an intermediate representation $h^s$, and then a decoder is trained to generate a reference $r = (r_1, r_2, \cdots, r_n)$ conditioning on $h^s$. To train the model in a supervised fashion, we follow the maximum likelihood estimation (MLE) objective:
\begin{eqnarray}
L_{\textrm{MLE}} = -\sum_{j=1}^{n} \log p (r_j | r_{< j}, s).
\label{eq-mle}
\end{eqnarray}
In this paper, we focus on two generation tasks: machine translation and image captioning.
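The MLE objective \equref{eq-mle} is simply the summed negative log-likelihood of the reference tokens. A minimal sketch, with illustrative token ids and log-probabilities:

```python
import math

def mle_loss(log_probs, reference):
    # log_probs[j][v] = log p(v | r_<j, s); reference = list of ground-truth token ids
    return -sum(log_probs[j][r_j] for j, r_j in enumerate(reference))
```

For a uniform model over a vocabulary of size $V$, each reference token contributes $\log V$ to the loss, regardless of which token it is.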
At inference time, the text generation model generates a target-side sentence $t = (t_1, t_2, \cdots, t_{n'})$. When generating the $j$-th token, the model follows the categorical distribution $p(t_j | t_{<j}, s)$ to generate a token. To generate better sentences, we may use a variety of decoding techniques such as beam search.
\subsection{Reinforcement learning}\label{subsec-rl}
We next describe the RL framework proposed in \citet{ranzato2016sequence}. When training the text generation model in the RL framework, we consider the generation model as an agent, the categorical distribution at $j$-th decoding step $p(t_j | s, t_{<j})$ as a policy, where $t_j$ is the $j$-th token the model generates, and choosing a token from the categorical distribution as an action. After generating a complete target sentence $t = (t_1, t_2, \cdots, t_m)$, REINFORCE~\citep{williams1992simple} is used to define the loss function as follows:
\begin{equation}
\begin{split}
L_{\textrm{RL}} = -\sum_{j=1}^{m} & \{ \log p(t_j | s, t_{<j}) \\
& (R(s, t) - b (s, t_j)) \},
\label{eq-rl}
\end{split}
\end{equation}
where $R$ is a reward function and $b$ is a baseline network~\citep{sutton2018reinforcement} to reduce the variance of the gradients.\footnote{Following \citet{ranzato2016sequence}, we simply use a linear regression model as the baseline network.} However, using only \equref{eq-rl} makes training unstable~\cite{wu2016google}. Thus, the following joint loss function is used:
\begin{eqnarray}
\lambda_{\text{MIXED}} \ L_{\textrm{MLE}} + (1 - \lambda_{\text{MIXED}}) \ L_{\textrm{RL}},
\label{eq-mixed}
\end{eqnarray}
where $\lambda_{\text{MIXED}}$ is a hyper-parameter to control the strength of the two signals.
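The REINFORCE loss \equref{eq-rl} with a per-step baseline, and the joint objective \equref{eq-mixed}, can be sketched as follows (scalar values are illustrative; in practice the baseline is a learned regressor):

```python
def rl_loss(log_probs_taken, reward, baselines):
    # log_probs_taken[j] = log p(t_j | s, t_<j) for the sampled token t_j;
    # the sentence-level reward is compared against a per-step baseline.
    return -sum(lp * (reward - b) for lp, b in zip(log_probs_taken, baselines))

def mixed_loss(l_mle, l_rl, lam_mixed):
    # joint objective: lam * L_MLE + (1 - lam) * L_RL
    return lam_mixed * l_mle + (1.0 - lam_mixed) * l_rl
```

A reward above the baseline increases the log-probability of the sampled tokens; a reward below it decreases them.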
\subsection{Generation errors}
Neural text generation models suffer from various types of errors, such as repeating errors, dropping errors, grammatical errors, and misordering. Among them, the repeating and dropping errors have received particular attention in previous work~\citep{mi2016coverage, malaviya2018sparse,holtzman2020curious,welleck2020neural}. As a preliminary experiment, we trained a neural machine translation model with the MLE objective on the WAT'15 Workshop on Japanese-to-English translation task~\citep{nakazawa2016aspec} and manually checked $100$ examples randomly sampled from the translation results on the test dataset. We found that $17$ sentences exhibit the repeating error, where some tokens are repeated unnecessarily, and $13$ sentences exhibit the dropping error, where some tokens are dropped relative to their reference sentences.\footnote{These $100$ translation examples are sampled from the RNMT result in \secref{subsec-results-dis}.} We also observed that the model trained with the widely-used RL approach of \citet{wu2016google} improves BLEU scores~\citep{papineni2002bleu} but does not necessarily suppress the above two types of errors.\footnote{Details will be shown in \secref{subsec-results-gleu}.} Based on these observations, we aim to improve the generation performance of text generation models by suppressing these two types of errors.
\section{Approach}
We propose to suppress the targeted type of errors by using the RL framework. Our main focus is to design $R$ in \equref{eq-rl} so that it gives a negative reward to generated text if it contains the targeted type of errors, and a positive reward otherwise. We use a discriminator to instantiate such an $R$ function. We train the discriminator by taking references as positive examples and erroneous sentences as negative examples. We prepare these negative examples by artificially injecting the targeted type of errors into the references. In the remainder of this section, we describe the negative examples in \secref{subsec-ane} and the discriminator in \secref{subsec-dis}.
\subsection{Artificial negative examples}\label{subsec-ane}
We would like the discriminator to focus specifically on one error type. This requires negative examples that always contain the targeted type of error. We create such negative examples directly from references by using an error-generating function $e = C_{type}(r)$, where $e$ is a sentence containing an error of a specific {\it type} and $r$ is a reference. We design $C_{type}$ depending on the error type we focus on. We call $e$ an artificial negative example (ANE).
In this study, we design two types of ANEs to deal with the repeating and dropping errors. \tabref{tab-examples-ane} shows a reference sentence and its negative examples of the two types.\footnote{While the tokens are split at the word level in this example, we apply our method at the subword level in our experiments.} In the following, we describe the design of the ANEs.
\begin{description}[style=unboxed,leftmargin=0cm]
\item[Artificial repeating sentence]
We create artificial negative examples for the repeating errors by modifying reference sentences to repeat their tokens. Since repeated tokens do not always appear consecutively in model-generated sentences, we propose the following $C_{type}$ function: given a reference $r$ of length $n$, $C_{repeat}(r)$ returns an artificial repeating sentence by duplicating $i$ consecutive tokens, starting from the $j$-th token, at $k$ randomly-selected positions. We randomly choose $i$ from $(1, 2, \cdots, m_{\textrm{rep}})$, $j$ from $(1, 2, \cdots, n-i+1)$, and $k$ from $(1, 2, \cdots, n_{\textrm{rep}})$ for each example, where $m_{\textrm{rep}}$ is the maximum number of consecutive tokens and $n_{\textrm{rep}}$ is the maximum number of duplications. The $k$ random positions are selected from $(1, 2, \cdots, j-1, j+i-1, \cdots, n-1, n)$ so as not to break the original consecutive tokens. In \tabref{tab-examples-ane}, the three consecutive tokens ``by the company'' ($i=3$) starting from the $5$-th token ($j=5$) are repeated twice ($k=2$), and the tokens are inserted after the $7$-th token, ``company,'' and the $9$-th token, ``keeps,'' of the reference. We set the hyper-parameters $(m_{\textrm{rep}},n_{\textrm{rep}}) = (4,4)$. We set $m_{\textrm{rep}} = n$ instead if $n$ is smaller than $m_{\textrm{rep}}$.
\item[Artificial dropping sentence]
We create artificial negative examples for the dropping errors by modifying reference sentences to drop consecutive tokens. Given a reference $r$ of length $n$, $C_{drop}(r)$ returns an artificial dropping sentence by dropping $i$ consecutive tokens starting from the $j$-th token. We randomly choose $i$ from $(1, 2, \cdots, m_{\textrm{drop}})$ and $j$ from $(1, 2, \cdots, n-i+1)$ for each example, where $m_{\textrm{drop}}$ is the maximum number of consecutive tokens. In the example of \tabref{tab-examples-ane}, the three consecutive tokens ($i=3$) starting from the $5$-th token ($j=5$) are dropped. We set the hyper-parameter $m_{\textrm{drop}} = 4$, which allows us to drop $i$ randomly chosen consecutive tokens ($i = 1, 2, \cdots, 4$). We set $m_{\textrm{drop}} = n$ instead if $n$ is smaller than $m_{\textrm{drop}}$ and larger than $2$. $C_{drop}$ returns an end-of-sentence token if $n$ equals $1$.
\end{description}
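The two error-generating functions above can be sketched as follows. This is a hypothetical implementation of $C_{repeat}$ and $C_{drop}$: indices are 0-based, and the handling of very short sentences reflects our reading of the description.

```python
import random

def c_repeat(r, m_rep=4, n_rep=4, rng=random):
    # Duplicate i consecutive tokens (starting at j) at k insertion slots
    # chosen outside the original span, so the span itself stays consecutive.
    n = len(r)
    i = rng.randint(1, min(m_rep, n))
    j = rng.randint(0, n - i)
    k = rng.randint(1, n_rep)
    span = r[j:j + i]
    slots = [p for p in range(n + 1) if p <= j or p >= j + i]
    out = list(r)
    for p in sorted(rng.sample(slots, min(k, len(slots))), reverse=True):
        out[p:p] = span
    return out

def c_drop(r, m_drop=4, rng=random):
    # Drop i consecutive tokens starting at j; a 1-token sentence becomes </s>.
    n = len(r)
    if n == 1:
        return ["</s>"]
    i = rng.randint(1, min(m_drop, n))
    j = rng.randint(0, n - i)
    return r[:j] + r[j + i:]
```

In training, these functions are called on the fly for every reference in a mini-batch, so each epoch sees a fresh set of negative examples.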
\input{table/examples-ane}
\subsection{Discriminator}\label{subsec-dis}
We train the discriminator with references and their artificial negative examples. Our discriminator is a binary classifier that takes as input a pair of a source $s$ and its target sentence (either $r$ or $e = C_{type}(r)$) and outputs a real value in [0, 1]. We minimize the following loss function:
\begin{equation}
\begin{split}
L_{\textrm{DIS}}
= & -\mathbb{E}_{s, r}
[\log \text{D}(s, r)]\\
& - \ \mathbb{E}_{s, e}
[\log (1 - \text{D}(s, e))],
\label{eq-dis}
\end{split}
\end{equation}
where $\text{D}$ is a discriminator. During the discriminator training, we call $C_{type}(r)$ every time we process $r$ in a mini-batch. Once the discriminator training is finished, we use the trained discriminator $\text{D}$ in the RL framework by freezing its parameters, so that $R(s, t)$ can output a reward value for a generated sentence $t$ to train the text generation models.
Our discriminator $\text{D}$ consists of two encoders: a source-side encoder and a target-side encoder. The source-side encoder encodes the source-side information $s$ into a fixed-size vector $h^s$. The target-side encoder takes $h^s$ and encodes the target sentence $t = (t_1, t_2, \cdots, t_n)$ into a sequence of representations $H^t = (h^t_1, h^t_2, \cdots, h^t_n)$. Once $H^t$ is calculated, we compute $\hat{h}^t = \mathrm{maxpool}(H^t)$. The discriminator finally obtains the output $y$ as:
\begin{equation*}
y = f_{\text{sigmoid}}(W_o f_{\text{ReLU}}(W_h \hat{h}^t + b_h) + b_o),
\end{equation*}
where $W_h \in \mathbb{R}^{d_h/2 \times d_h}$, $b_h \in \mathbb{R}^{d_h/2}$, $W_o \in \mathbb{R}^{1 \times d_h/2}$, and ${b_o \in \mathbb{R}^1}$ are learnable parameters and $d_h$ is the dimension of $h^t$. $f_{\text{ReLU}}$ is the rectified linear unit (ReLU), and $f_{\text{sigmoid}}$ is the logistic sigmoid function.
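The pooling and output layers can be sketched as follows; this is a NumPy mock-up with externally supplied weights, and the encoders that produce $H^t$ are omitted.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_head(H_t, W_h, b_h, W_o, b_o):
    # H_t: (n, d_h) target-side encoder states; element-wise max-pool over time
    h_hat = H_t.max(axis=0)                               # \hat{h}^t, shape (d_h,)
    return sigmoid(W_o @ relu(W_h @ h_hat + b_h) + b_o)   # y in (0, 1)
```

The max-pool makes the score invariant to where in the sentence the error signal appears, which suits repetitions and drops that can occur at any position.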
\section{Experimental settings}
\subsection{Datasets}
In machine translation, we conducted experiments on Japanese-to-English (Ja-En) and German-to-English (De-En) tasks. For the Ja-En task, we used Asian scientific paper excerpt corpus (ASPEC) from the WAT'15 and used \texttt{train-1.txt} and \texttt{train-2.txt} for training, \texttt{dev.txt} for development, and \texttt{test.txt} for testing, following~\citet{hashimoto2019accelerated}. Both Japanese and English sentences were preprocessed as recommended in WAT'15.\footnote{\url{http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2015/baseline/dataPreparationJE.html}.} For the De-En task, we used the datasets provided by WMT'16\footnote{\url{http://www.statmt.org/wmt16/translation-task.html}.} and used all parallel corpora (\texttt{Europarl v7}, \texttt{Common Crawl corpus}, and \texttt{News Commentary v11}) for training, NewsTest2013 for development, and NewsTest2014 for testing.
In image captioning, we conducted experiments on two datasets: MS COCO~\citep{lin2014microsoft} and Flickr30K~\citep{plummer2015flickr30k}. For training, developing, and testing, we followed the splits provided by \citet{karpathy2015deep}.
We used {\tt SentencePiece}~\cite{kudo2018sentencepiece} for tokenizing sentences and building vocabularies. We empirically chose vocabulary sizes of $16,000$ for each language in the Ja-En translation task, $16,000$ for both languages in the De-En translation task because these languages share an alphabet~\citep{sennrich2016neural}, and $2,000$ for the MS COCO and Flickr30K captioning tasks. The vocabulary also contains three special tokens: {\tt <s>} for the beginning of a sentence, {\tt </s>} for the end of a sentence, and {\tt <unk>} for out-of-vocabulary tokens. \tabref{tab-datasets} shows the dataset statistics. For training in machine translation, we removed empty sentences and sentence pairs whose longer side exceeds $80$ tokens from the training dataset.
\input{table/datasets}
\subsection{Models}
\subsubsection{Machine translation}
\paragraph{Text generation model}
For the translation tasks, we used two types of translation models: an RNN-based NMT (RNMT) model~\revise{\citep{bahdanau2015neural}} and the Transformer~\citep{vaswani2017attention}.
Our RNMT is an attention-based NMT model with a 2-layer bi-directional Long Short-Term Memory (LSTM)~\citep{hochreiter1997long} encoder and a 2-layer uni-directional LSTM decoder.
\revise{We used the attention mechanism proposed by \citet{bahdanau2015neural}.}
The hidden state size and the embedding size were set to $512$.
Our Transformer consists of a $6$-layer encoder and a $6$-layer decoder. The hidden state size and the number of heads were set to $512$ and $8$, respectively. To add positional information, we used a positional encoding technique with sine and cosine functions proposed by \citet{vaswani2017attention}.
\paragraph{Discriminator}
For the discriminator, we used a 2-layer uni-directional LSTM for the source-side and target-side encoders with $d_h = 512$.
\subsubsection{Image captioning}
\paragraph{Text generation model}
For the image captioning tasks, we used a simple show-and-tell model~\citep{vinyals2015show}. We used ResNet-152~\citep{he2016deep} pre-trained on an ImageNet classification dataset~\citep{russakovsky2015imagenet} as an encoder by freezing its model parameters to only extract features, and a $512$-dimensional $1$-layer uni-directional LSTM as a decoder. Once an image feature $f^i \in \mathbb{R}^{2048}$ is extracted, the decoder state $h^t_0$ is initialized as $h^t_0 = W_i f^i + b_i$, where $W_i \in \mathbb{R}^{512 \times 2048}$ is a weight matrix and $b_i \in \mathbb{R}^{512}$ is a bias vector.
\paragraph{Discriminator}
For the discriminator, we project the feature vector $f^i$ into the fixed-size vector $h^s$ as $h^s = W_D f^i + b_D$, where a weight matrix $W_D \in \mathbb{R}^{512 \times 2048}$ and a bias vector $b_D \in \mathbb{R}^{512}$ are learnable parameters. We used a uni-directional LSTM as the target-side encoder with $d_h = 512$.
\subsection{Training strategies}
For all the experiments, we used a single GPU of \texttt{NVIDIA GeForce GTX 1080 Ti}.
\subsubsection{Training discriminators}
We used Adam~\citep{kingma2015adam} with the initial learning rate of $1.0 \times 10^{-3}$ and weight decay with the rate of $1.0 \times 10^{-6}$. We checked its accuracy on the development dataset every $1,000$ iterations, halved the learning rate when the accuracy went worse, and finished the training when the learning rate was halved for five times. We chose the best models based on the best accuracy on the development dataset. For the development dataset, we created one negative example for each reference sentence.
\subsubsection{Training text generation models}
The training of the text generation models consists of two steps: a pre-training step with the MLE loss \equref{eq-mle}, and a reinforcement learning step that fine-tunes the pre-trained model with the joint loss \equref{eq-mixed}, following \citet{wu2016google}.
In the pre-training step, we used Adam with the initial learning rate of $1.0 \times 10^{-3}$ and weight decay with the rate of $1.0 \times 10^{-6}$ unless stated otherwise. Each mini-batch contains $128$ examples. We checked its perplexity on the development dataset every $1,000$ iterations and halved the learning rate if the perplexity went worse. We finished the training when we halved the learning rate for five times. For the Transformer, we used AdamW~\citep{loshchilov2019decoupled} with weight decay with the rate of $1.0 \times 10^{-4}$. When using AdamW, we scheduled the learning rate as follows:
\begin{align}
\pushleft{lr =} & \nonumber\\
\begin{cases}
lr_{\text{ini}} + step \times \frac{lr_{\text{max}} - lr_{\text{ini}}}{N_{\text{wm}}} & \text{if}\ step \leq N_{\text{wm}} \\
lr_{\text{max}} \times \eta(step) & \text{otherwise}
\end{cases},
\label{eq:cosine}
\end{align}
where
\begin{equation}
\eta(step) = 0.5 + 0.5 \times \text{cos}(\pi \times \frac{step - N_{\text{wm}}}{N_\text{wm} \times N_{\text{cs}}}). \nonumber
\end{equation}
This scheduling consists of two steps: a linear warmup step from $lr_{\text{ini}}$ to $lr_{\text{max}}$ for the first $N_{\text{wm}}$ iterations, and a cosine annealing step from $lr_{\text{max}}$ to $0$ for $N_{\text{wm}} \times N_{\text{cs}}$ iterations.
We used empirically tuned hyper-parameters as $(lr_{\text{ini}}, lr_{\text{max}}, N_{\text{wm}}, N_{\text{cs}}) = (5.0 \times 10^{-6}, 5.0 \times 10^{-4}, 4,000, 24)$. Each mini-batch contains $512$ examples for the Transformer.
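The warmup-plus-cosine schedule in \equref{eq:cosine} can be sketched as follows, with defaults set to the hyper-parameters reported above:

```python
import math

def lr_schedule(step, lr_ini=5.0e-6, lr_max=5.0e-4, n_wm=4000, n_cs=24):
    if step <= n_wm:  # linear warmup from lr_ini to lr_max over n_wm iterations
        return lr_ini + step * (lr_max - lr_ini) / n_wm
    # cosine annealing from lr_max toward 0 over n_wm * n_cs iterations
    t = (step - n_wm) / (n_wm * n_cs)
    return lr_max * (0.5 + 0.5 * math.cos(math.pi * t))
```

The schedule peaks exactly at the end of warmup and decays smoothly to zero at step $N_{\text{wm}} (1 + N_{\text{cs}})$.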
In the pre-training step, we clipped the gradients~\citep{pascanu2013difficulty} with a value of $1.0$ and used the label-smoothing technique~\citep{szegedy2016rethinking} with the rate of $0.1$. We chose the best models based on the lowest perplexity on the development dataset.
In the reinforcement learning step, we used stochastic gradient descent with momentum. \revise{For the translation tasks, we tuned the learning rate per model with a momentum rate of $0.9$. For the captioning tasks, we consistently used a learning rate of $5.0 \times 10^{-2}$ with a momentum rate of $0.9$. In this step, we took $20,000$ iterations for fine-tuning the parameters of the generation model.} We continued to use the same gradient clipping and label-smoothing techniques. Each mini-batch contains $64$ examples. In particular, we carefully tuned $\lambda_{\text{mixed}}$ in \equref{eq-mixed}.\footnote{More concretely, we searched the values in a coarse-to-fine fashion in \{0.5, 0.3, 0.1, $7.5 \times 10^{-2}$, $5.0 \times 10^{-2}$, $2.5 \times 10^{-2}$, $1.0 \times 10^{-2}$, $7.5 \times 10^{-3}$, $5.0 \times 10^{-3}$, $2.5 \times 10^{-3}$, $1.0 \times 10^{-3}$\}.} We report results generated by beam search with a beam width of $10$.
\subsection{Evaluation metrics}
One goal in this study is to suppress the repeating and dropping errors. To quantitatively evaluate these errors in the model's output, we used REP and DROP scores~\citep{malaviya2018sparse}. In this section, we first describe these two metrics in detail, then introduce task-specific metrics.
\subsubsection{REP score}
The REP score calculates how many $n$-gram repetitions are included in a model-generated sentence $t$, given its reference $r$, as follows:
\begin{equation*}
\text{REP}(r, t) = \frac{\sigma(t,r)}{ \sum_{w \in V} r(ww) + \sum_{s \in V^{n}_r} r(s)},
\end{equation*}
where
\begin{eqnarray}
\sigma(t,r) = \lambda_2 \underset{\hspace{0.2cm} s \in V^{n}_r, t(s) \geq 2}{\sum} \hspace{-0.25cm} \max \{0, t(s) - r(s)\} \nonumber \\
+ \lambda_1 \sum_{w \in V} \max \{0, t(ww) - r(ww)\}.
\label{eq-rep}
\end{eqnarray}
$V^{n}_r$ is the set of all the $n$-grams included in the reference. $r(ww)$ and $r(s)$ show the frequency of consecutive $1$-gram $w$ and $n$-gram $s$ of the reference, respectively, and $t(ww)$ and $t(s)$ show those of the machine-generated sentence $t$. $\lambda_1$ and $\lambda_2$ are hyper-parameters.
The REP score is defined for each $n$-gram size separately; in this study, we propose an extended REP (eREP) score that evaluates consecutive $1$-grams and $n$-grams ($n=2,3,4$) together.
The eREP score is calculated as follows:
\begin{equation*}
\text{eREP}(r, t) = \frac{\sigma(t,r)}{ \sum_{w \in V} t(ww) + \sum_{s \in V^{n}_t} t(s)},
\end{equation*}
where
\begin{eqnarray*}
\sigma(t,r) = \sum^{4}_{n=2} \lambda_n\underset{\hspace{0.2cm} s \in V^{n}_t, t(s) \geq 2}{\sum} \hspace{-0.25cm} \max \{0, t(s) - r(s)\} \nonumber \\
+ \lambda_1 \sum_{w \in V} \max \{0, t(ww) - r(ww)\}.
\end{eqnarray*}
$V^{n}_t$ is the set of all the $n$-grams included in the machine-generated sentence $t$.
We weight the repetitions of $n$-grams equally ($\lambda_n=1$).
For the eREP score, lower is better.
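A sketch of the eREP computation follows. It reflects our reading of the formulas above: the unigram term counts immediate repetitions $ww$, the $n$-gram terms count $n$-grams occurring at least twice in $t$ beyond their reference frequency, and the denominator sums the counts of all $n$-grams of $t$ for $n=2,3,4$; all $\lambda$ weights are set to $1$.

```python
from collections import Counter

def _ngrams(toks, n):
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def _adjacent_repeats(toks):
    # counts t(ww): positions where a token immediately repeats itself
    return Counter(toks[i] for i in range(len(toks) - 1) if toks[i] == toks[i + 1])

def erep(r, t, lams=(1.0, 1.0, 1.0, 1.0)):
    rr, tr = _adjacent_repeats(r), _adjacent_repeats(t)
    num = lams[0] * sum(max(0, c - rr[w]) for w, c in tr.items())
    den = sum(tr.values())
    for n in (2, 3, 4):
        rn, tn = _ngrams(r, n), _ngrams(t, n)
        num += lams[n - 1] * sum(max(0, c - rn[s]) for s, c in tn.items() if c >= 2)
        den += sum(tn.values())
    return num / den if den else 0.0
```

A hypothesis identical to the reference scores $0$, and each spurious repetition adds to the numerator.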
\subsubsection{DROP score}
In the machine translation tasks, the DROP score calculates how many source tokens are {\it not} covered in model-generated sentences. A word alignment tool is used to identify which source tokens are aligned with its reference, and calculates the ratio of those not aligned with the generated text (hypothesis). In other words, this aims at evaluating how well the source information is covered, and the score is defined as follows:
\begin{equation*}
\text{DROP}(c_{\text{ref}}, c_{\text{hyp}}) = 1 - \frac{1}{|c_{\text{ref}}|} \sum_{i \in c_{\text{ref}}} in(i),
\end{equation*}
where $c_{\text{ref}}$ and $c_{\text{hyp}}$ represent the sets of source tokens' indices in the source-reference and source-hypothesis alignments, respectively. $in(i)$ is a function that returns $1$ if an index $i$ is included in $c_{\text{hyp}}$ and $0$ otherwise. For the DROP score, lower is better, as with the eREP score.
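Given the two alignment index sets, the DROP score itself is a one-liner; extracting $c_{\text{ref}}$ and $c_{\text{hyp}}$ from a word alignment tool is omitted here.

```python
def drop_score(c_ref, c_hyp):
    # fraction of reference-aligned source-token indices that the hypothesis fails to cover
    return 1.0 - sum(1 for i in c_ref if i in c_hyp) / len(c_ref)
```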
In the captioning tasks, however, the DROP score is not applicable because there is no source sentence. In the experiments, we instead used ROUGE$_L$~\citep{chen2015microsoft} to evaluate how many dropping errors are found in the model's output. For ROUGE$_L$, higher is better, contrary to the DROP score. In a preliminary experiment, to check the relationship between the DROP score and ROUGE$_L$, we calculated the Pearson correlation coefficient between the two on the RNMT's WAT'15 Ja-En result. We confirmed a moderate negative correlation ($-0.430$), which suggests that ROUGE$_L$ can be used in place of the DROP score in the captioning tasks.
\input{table/results-translation}
\input{table/results-captioning}
\input{table/results-discriminator}
\input{table/examples-wat15}
\subsubsection{Task-specific metrics}
We used BLEU and METEOR~\citep{malaviya2018sparse} for the translation tasks, and BLEU and CIDEr~\citep{vedantam2015cider} for the captioning tasks. For the captioning tasks, we used the publicly available code\footnote{\url{https://github.com/tylin/coco-caption}.} to calculate the BLEU and CIDEr scores. Note that for the captioning tasks, we report BLEU-{1,2,3,4}.
\subsection{Model configurations}
\begin{description}[style=unboxed,leftmargin=0cm]
\item[-- MLE] is the baseline model trained by the MLE loss in \equref{eq-mle}.
\item[-- RL-D$_{{\rm REP}}$, RL-D$_{{\rm DROP}}$] are our proposed models trained by the RL framework in \equref{eq-mixed} with our proposed discriminator.
The text generation model parameters are initialized by the MLE baseline. RL-D$_{{\rm REP}}$\ and RL-D$_{{\rm DROP}}$\ are trained to suppress the repeating and dropping errors with the discriminator D$_{{\rm REP}}$\ and D$_{{\rm DROP}}$, respectively.
\end{description}
\section{Results}\label{sec-results}
\revise{
We report the scores in [0, 100]. The symbol~$\dagger$ follows a score if a system produces a significant improvement over MLE\ ($p < 0.05$). In our experiments, statistical significance was assessed by the paired bootstrap resampling method~\citep{koehn2004statistical}.
}
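The significance test can be sketched as follows; this is a minimal paired-bootstrap implementation in the spirit of \citet{koehn2004statistical}, with hypothetical per-sentence scores, not the authors' evaluation code.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    """Fraction of bootstrap resamples in which system A beats system B.
    scores_a/scores_b: per-sentence metric scores on the same test set."""
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample sentences with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples

# A dominates B sentence-by-sentence here, so A wins essentially every resample;
# significance at p < 0.05 corresponds to a win fraction above 0.95.
a = [0.60, 0.70, 0.65, 0.80, 0.55, 0.72, 0.68, 0.75]
b = [0.50, 0.55, 0.60, 0.62, 0.50, 0.58, 0.57, 0.60]
p_win = paired_bootstrap(a, b)
```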
\subsection{Can we suppress the targeted errors?}\label{subsec-results-dis}
In this section, we show the results of using the discriminator to suppress the repeating and dropping errors. For the RL-D$_{{\rm REP}}$\ and RL-D$_{{\rm DROP}}$\ models, we chose the best models based on the development-set score of the metric we aim to improve (e.g., the eREP score for RL-D$_{{\rm REP}}$).
We first show the accuracy of our discriminators, because the discriminator plays the key role in our method. \tabref{tab-results-discriminator} reports the binary classification accuracy on the development datasets. D$_{{\rm REP}}$\ achieves over $96$\% accuracy in all the tasks, while the accuracy of D$_{{\rm DROP}}$\ is not as high as that of D$_{{\rm REP}}$. A possible reason is that identifying dropped tokens in a sentence is a more difficult task than identifying $n$-gram repetitions.
\tabref{tab-results-translation} shows the main results on the translation tasks. RL-D$_{{\rm REP}}$\ consistently improves the eREP scores, except for the MLE-based Transformer in the De-En task, which has much less room for improvement. RL-D$_{{\rm DROP}}$\ also consistently improves the DROP scores in both the RNMT and Transformer models. Receiving rewards from the discriminators, namely D$_{{\rm REP}}$\ and D$_{{\rm DROP}}$, the text generation models learn to suppress the repeating and dropping errors\revise{, and this answers the research question posed in \secref{sec-intro}}. We can also see that RL-D$_{{\rm REP}}$\ tends to increase the BLEU and that RL-D$_{{\rm DROP}}$\ tends to increase the METEOR, \revise{though some exceptions exist}. For the former, we consider that reducing the amount of repetition with RL-D$_{{\rm REP}}$\ leads to higher $n$-gram precision and thus higher BLEU scores. For the latter, this is presumably because METEOR is based on unigram precision and recall and puts more weight on recall than on precision; therefore, suppressing the dropping errors and restoring dropped tokens with RL-D$_{{\rm DROP}}$\ contribute to the improvements in METEOR.
\tabref{tab-results-captioning} shows the results on the image captioning tasks. Again, we can see that RL-D$_{{\rm REP}}$\ significantly improves the eREP scores, leading to better BLEU-\{1,2,3,4\} scores in both the COCO and Flickr30K tasks. On the other hand, RL-D$_{{\rm DROP}}$\ does not achieve significant improvements on ROUGE$_L$. One possible reason is the diversity of text in the captioning tasks~\citep{dai2017towards}. In other words, identifying whether a dropping error occurs in a sentence is an easy task for the discriminator, but generating the missing tokens is difficult for the generation model.
We also observed that the balancing factor $\lambda_{\text{mixed}}$ is preferred to be smaller for RL-D$_{{\rm REP}}$\ than for RL-D$_{{\rm DROP}}$. This indicates that the generation model can rely more on the reward from D$_{{\rm REP}}$\ than on that from D$_{{\rm DROP}}$, which explains why RL-D$_{{\rm REP}}$\ produces more of the expected results than RL-D$_{{\rm DROP}}$\ in the captioning tasks.\footnote{For example, we took $5.0 \times 10^{-3}$ and $5.0 \times 10^{-3}$ for RL-D$_{{\rm REP}}$, and $0.1$ and $5.0 \times 10^{-2}$ for RL-D$_{{\rm DROP}}$\ on the COCO and Flickr30K tasks, respectively.}
We show generation examples of the RNMT-based models in the WAT'15 Ja-En translation task in \tabref{tab-examples-ane}. In Example (A), RL-D$_{{\rm REP}}$\ successfully suppresses the repeated tokens found in MLE\ and generates a non-repetitive sentence. In Example (B), RL-D$_{{\rm DROP}}$\ satisfactorily restores the tokens dropped by MLE\ and generates a more informative sentence.
\input{table/results-translation-gleu}
\input{table/results-captioning-gleu}
\input{figure/scale}
\subsection{Can we incorporate an off-the-shelf reward function?}\label{subsec-results-gleu}
In \secref{subsec-results-dis}, we have shown that our proposed discriminator can suppress its targeted type of errors. We now investigate how existing reward functions work in combination with our discriminators, aiming to further improve generation performance. Taking GLEU~\citep{wu2016google} as an example of an existing reward, we propose the following joint reward function:
\begin{eqnarray}
R(s, t) & = & \lambda_{\text{RL}} R'(s, t) \nonumber \\
& & + (1 - \lambda_{\text{RL}}) GLEU(t, r),
\label{eq-reward}
\end{eqnarray}
where $R'$ is one of our reward functions and $GLEU$ is the GLEU score. The GLEU is known to be effective in improving the BLEU~\citep{wu2016google}. $GLEU(t, r)$ calculates the minimum of the generated sentence $t$'s $n$-gram precision and recall against the reference $r$. $\lambda_{\text{RL}}$ is a hyper-parameter that controls the strength of the two signals. In this section, RL-GLEU\ denotes the model that uses only the GLEU as the reward, and RL-GLEU-D$_{{\rm REP}}$\ (or RL-GLEU-D$_{{\rm DROP}}$) denotes the model that uses both the GLEU and the corresponding discriminator. We chose the best models based on the BLEU score on the development datasets.
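A minimal sketch of the joint reward in \equref{eq-reward}, assuming a simple sentence-level GLEU (minimum of $n$-gram precision and recall for $n\le4$); the tokenization and the discriminator reward $R'$ are stand-ins for illustration, not the authors' implementation.

```python
from collections import Counter

def ngram_counts(tokens, max_n=4):
    """Counts of all n-grams up to length max_n."""
    c = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            c[tuple(tokens[i:i + n])] += 1
    return c

def gleu(hyp, ref, max_n=4):
    """Sentence-level GLEU: min of n-gram precision and recall (n <= max_n)."""
    h, r = ngram_counts(hyp, max_n), ngram_counts(ref, max_n)
    overlap = sum((h & r).values())        # clipped n-gram matches
    precision = overlap / max(sum(h.values()), 1)
    recall = overlap / max(sum(r.values()), 1)
    return min(precision, recall)

def joint_reward(r_disc, hyp, ref, lam=0.5):
    """Joint reward: lam * R' + (1 - lam) * GLEU(t, r)."""
    return lam * r_disc + (1.0 - lam) * gleu(hyp, ref)

ref = "the cat sat on the mat".split()
assert gleu(ref, ref) == 1.0       # identical sentences score 1
assert gleu(["dog"], ref) == 0.0   # disjoint sentences score 0
```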
\tabref{tab-results-translation-gleu} shows the results on the translation tasks. In the Ja-En task, RL-GLEU\ contributes to improvements on the BLEU, DROP, and METEOR scores. Since the GLEU computes recall against the reference, RL-GLEU\ consequently generated more (informative) tokens and improved those scores. This is why RL-GLEU-D$_{{\rm DROP}}$\ improves the DROP less this time. RL-GLEU-D$_{{\rm REP}}$\ further improves the eREP, especially when RL-GLEU\ deteriorates the eREP score. In the De-En task, the Transformer-based RL-GLEU-D$_{{\rm REP}}$\ and RL-GLEU-D$_{{\rm DROP}}$\ show almost the same results as RL-GLEU. This might be due to the large number of training examples; we will discuss the effect of training data size in \secref{subsec-results-scale}.
\tabref{tab-results-captioning-gleu} shows the results on the image captioning tasks. In both the COCO and Flickr30K tasks, RL-GLEU\ greatly improves the BLEU and the CIDEr, but generates more repetitions as the eREP indicates. RL-GLEU-D$_{{\rm REP}}$\ achieves further improvements on the BLEU scores by suppressing the repeating errors. RL-GLEU-D$_{{\rm DROP}}$, on the other hand, does not boost the performance of RL-GLEU. We consider that it is difficult to suppress the dropping errors even when using both the GLEU and the discriminator's signals as we discussed in \secref{subsec-results-dis}.
\input{table/results-translation-additional}
\revise{
\subsection{Comparison with related studies}
In the field of machine translation, two types of studies share similarities with our research direction: (i) methods based on coverage and (ii) methods based on Generative Adversarial Networks (GANs). The coverage-based methods~\citep{mi2016coverage,tu2017context,malaviya2018sparse} use the attention history to reduce the amount of repeating and dropping errors. The GAN-based methods~\citep{gu2018neural,yang2018improving} utilize adversarial training with a discriminator to train the generation model to generate more natural sentences. In this section, we conduct further experiments to compare these methods with ours. Throughout this section, we focus on the RNMT model.
}
\revise{
\subsubsection{Comparison with the coverage-based method}
We first conduct experiments with a coverage-based model. We decided to use the coverage vector~\citep{tu2017context}, which stores the attention history and is used in the decoding process to more effectively utilize untranslated source words. We used the neural-network-based coverage, which models the coverage vector with RNNs, and set the coverage dimension to $10$ following \citet{tu2017context}. Although \citet{tu2017context} uses a Gated Recurrent Unit (GRU), we used an LSTM and empirically confirmed that it also works well. We refer to this model as CovVec.
}
\revise{
\tabref{tab-results-translation-additional} shows the results. In both translation tasks, we can see that CovVec successfully improves the eREP and DROP scores over the MLE\ results, as expected. In comparison with \tabref{tab-results-translation}, the eREP scores are close to those of RL-D$_{{\rm REP}}$\ ($2.53$ in the Ja-En task and $0.78$ in the De-En task), while the DROP scores are reasonably higher than those of RL-D$_{{\rm DROP}}$\ ($14.63$ in the Ja-En task and $3.59$ in the De-En task). However, CovVec does not produce any significant improvements on the BLEU and METEOR scores in our experiments, whereas RL-D$_{{\rm REP}}$\ in the Ja-En task achieved an improvement on BLEU. We consider that this improvement comes from the fact that, by fine-tuning the parameters of MLE, our model can concentrate on suppressing the targeted type of errors in erroneous sentences while keeping the same outputs for other, non-problematic sentences. When we checked the translations of RL-D$_{{\rm REP}}$\ and MLE, we found that RL-D$_{{\rm REP}}$\ tends to generate almost the same translations as MLE\ for some source sentences, supporting our view.
}
\revise{
\subsubsection{Comparison with the GAN-based method}
We next conduct experiments of the GAN-based model. We decided to use the Gumbel-Greedy Decoding (GGD)~\citep{gu2018neural} that bridges the generation model and the discriminator by using the Gumbel-Softmax estimator~\citep{jang2017categorical}. The hyper-parameters ($N_g$, $N_d$) in \citep{gu2018neural} were set to ($1$, $1$). We refer to this model as GGD.
}
\revise{
The results are shown in \tabref{tab-results-translation-additional}. For the BLEU and METEOR, GGD consistently improves both scores over the MLE\ results. For the eREP and DROP scores, GGD harms the eREP but improves the DROP in the Ja-En task, while it improves both scores in the De-En task. This indicates that the generation model of GGD does not always learn to suppress both repeating and dropping errors. One possible reason is that the type of errors the discriminator deals with can differ, since the frequency of each error type in machine-generated sentences changes depending on the task and the generation model.
}
\subsection{Varying training size}\label{subsec-results-scale}
We have confirmed that our proposed method works effectively to suppress the targeted type of errors on the WAT'15 Ja-En translation and the COCO and Flickr30K captioning tasks, but is less effective in the WMT'16 De-En task, especially during joint training with the GLEU. One presumable reason is that the training dataset in the De-En translation task is large enough to train the language model well and to suppress the targeted errors caused by data sparseness. We hypothesize that our method is more effective when the training dataset is small; in such a situation, the language model would not be trained well and would produce more errors.
To verify this hypothesis, we conducted experiments by varying the training dataset size exponentially (1/1, 1/2, 1/4, 1/8) on the WMT'16 De-En translation task. For simplicity, we changed only the training dataset size, and set the same hyper-parameters used in \secref{subsec-results-gleu}. \figref{fig-scale} shows the results. Note that the 1/1 case is the same as that in \secref{subsec-results-gleu}. \figref{fig-scale} (a), (b), (d) show that the results meet our expectation; the smaller the training dataset size is, the more effective our strategy is in the eREP, DROP, and BLEU scores. In \figref{fig-scale} (c), by contrast, RL-GLEU-D$_{{\rm REP}}$\ is less effective because the Transformer language model might be strong enough to suppress most of the repetitions even with the smallest training dataset.
\section{Related work}\label{sec-results-discussion}
Training neural text generation models with a discriminator has recently been studied in generative adversarial network based methods~\citep{dai2017towards,gu2018neural,yang2018unsupervised}.
\revise{
These works utilize the generative adversarial framework~\citep{goodfellow2014generative,arjovsky2017wasserstein} to train the sentence generation model to generate more natural, human-like sentences.
}
Although our framework can also be considered a generator--discriminator framework, two points differentiate ours from the GAN-based methods. One is that our generator can focus on suppressing a targeted type of errors, as the discriminator learns to discriminate artificially-generated erroneous sentences from references. The other is that our artificial negative examples definitely contain errors, while the machine-generated sentences in the GAN-based methods can be correct sentences, as they are sampled from the generation model.
\revise{
There are several studies that use negative examples for training a sentence generation model. To focus on the repetitions, \citet{welleck2020neural} proposed the unlikelihood training with the training objective that decreases the probability of generating the token which has appeared in the previous context.
\citet{he2020negative} collected the negative examples from the model generations and trained the model not to generate them.
One advantage of ours is that our discriminator is trained independently of the sentence generation model, as our method does not require an extra step of collecting negative examples from the sentence generation model.
}
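For concreteness, the unlikelihood objective of \citet{welleck2020neural} can be sketched as follows; this plain-Python version, which uses previously generated target tokens as negative candidates, is our illustrative simplification, not their released code.

```python
import math

def mle_plus_unlikelihood(step_probs, target_ids, eps=1e-12):
    """MLE loss plus a token-level unlikelihood term (simplified sketch).
    step_probs[t]: dict token_id -> model probability at decoding step t.
    Negative candidates at step t are the target tokens seen before step t;
    the term -log(1 - p(neg)) penalizes mass on would-be repetitions."""
    loss = 0.0
    for t, probs in enumerate(step_probs):
        loss += -math.log(probs.get(target_ids[t], eps))           # likelihood
        for neg in set(target_ids[:t]) - {target_ids[t]}:          # unlikelihood
            loss += -math.log(max(1.0 - probs.get(neg, 0.0), eps))
    return loss

target = [1, 2]
probs_ok  = [{1: 0.9}, {1: 0.05, 2: 0.90}]   # little mass on the repeat candidate
probs_rep = [{1: 0.9}, {1: 0.50, 2: 0.45}]   # half the mass on repeating token 1
assert mle_plus_unlikelihood(probs_rep, target) > mle_plus_unlikelihood(probs_ok, target)
```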
Coverage-based methods in machine translation have also aimed at suppressing the repeating and dropping errors by focusing on coverage information about source words~\citep{mi2016coverage,tu2017context,malaviya2018sparse}.
\revise{
While these methods deal with the same types of errors that we focused on in our experiments, our method is not limited to these error types: we can apply it to other error types by designing suitable artificial negative examples.
}
Automatic post-editing aims at editing machine-translated sentences to fix errors, writing style, and so on~\citep{freitag2019ape,gu2019levenshtein}. The task is similar to ours in that both deal with how to make model-generated sentences better. However, there are two significant differences. One is that they use a machine translation model and a post-editing model independently, while we consistently train one model to generate less erroneous sentences. The other is that our model explicitly focuses on one specific type of generation error of interest, which is not feasible in the previous work.
\section{Conclusion}
We have proposed a reinforcement learning based method that suppresses a targeted type of errors. Our method uses a discriminator that captures a specific type of errors, and the discriminator is trained with artificially-generated negative examples. Our experimental results have shown that the method can suppress repeating and dropping errors.
In future work, it would be interesting to explore applying our method to source-side unannotated data.
\section{Introduction}
Continuous-variable~(CV) systems play an important role in quantum information theory~\cite{Braunstein:2005aa,Ferraro:2005ns,CV2007:aa,Andersen:2010ng,Adesso:2014pm,Ruppert:2014aa,Ruppert:2019aa}. Gaussian states~\cite{Ferraro:2005ns}, for example, form the basic ingredients in key discussions of CV quantum information processing, notably for the study of secure quantum key distribution protocols~\cite{Lorenz:2004aa, Lance:2005aa,Scarani:2009cq,Weedbrook:2012ag}. These are quantum states described by Gaussian quasiprobability distributions~\cite{Wigner:1932aa,Cahill:1969qd}, which include the set of squeezed coherent states. The primary engines that generate these states are Gaussian processes, which are quantum processes that are also representable by a Gaussian quasidistribution. Gaussian quantum processes have been widely studied, especially in the context of channel capacity and quantum communication~\cite{Holevo:1999aa,Eisert:2007aa,Holevo:2007aa,Smith:2011aa,Lupo:2011aa,Holevo:2012aa,Siudzinska:2019aa}.
Proper characterization of Gaussian quantum processes is crucial to ensure that Gaussian resources are reliably generated and utilized. Techniques in multiparameter estimation are commonly sought tools for this purpose, but very often they are used primarily to investigate the quantum Fisher information~\cite{Braunstein:1994aa,Gross:2020aa,Kull:2020aa,DD:2020aa,Safranek:2016aa,Rosanna:2018aa} that bounds the mean squared-error of the estimated parameters. This requires optimal output-state measurements that are technically challenging to achieve in practice~\cite{Oh:2019aa}.
In this work, we shall explore a highly feasible route to optimal Gaussian process tomography that is much more accessible in experiments using coherent input states~\cite{Rahimi-Keshari:2011aa} that can be readily prepared with a well-controlled laser source. To this end, we search for a computationally efficient set of input states that lead to near-optimal precision given a fixed measurement acting on the output states. We shall consider heterodyne detection~\cite{Arthurs:1965al,Yuen:1982hh,Arthurs:1988aa,Martens:1990al,Martens:1991aa,Raymer:1994aj,Trifonov:2001up,Werner:2004as} as the output-state measurement for the exclusive advantage of its tomographic performance in reconstructing Gaussian states~\cite{Rehacek:2015qp,Muller:2016da,Teo:2017aa} over homodyne detection~\cite{Yuen:1983ba,Abbas:1983ak, Schumaker:1984qm}, both of which essentially constitute the typical CV measurements that can be carried out in practice. Another key departure from previous work is that generic Gaussian processes shall be considered in our study, rather than just their subclasses.
The mean squared-error~(MSE) for all the parameters characterizing the unknown Gaussian process is adopted as the figure of merit for the reconstruction quality. To analyze the MSE for general Gaussian processes with large data samples, we shall derive its asymptotic formulas by extending methods previously developed for quantum states~\cite{Teo:2017aa,Zhu:2014aa,Teo:2015qs}. Next, without imposing the trace-preserving~(TP) constraint, we construct a convenient set of input coherent states that minimize the MSE Cauchy--Schwarz \emph{upper bound} for the unknown Gaussian process. We demonstrate that such states give an MSE that is almost identical to the optimal value provided by the best nonadaptive set of input states obtainable only with the complete knowledge about the process of interest. This near-optimality turns even reasonably low-energy coherent states into formidable resources for reconstructing Gaussian processes. Such an input set is ``geometrical'' since the phase-space arrangement of these coherent states is predetermined by only the output-state measurements employed and nothing else. Furthermore, we show numerically that for \emph{arbitrary} completely-positive-trace-preserving~(CPTP) Gaussian processes of parameter ranges typically considered in experiments (to be specified more concretely in Sec.~\ref{sec:results}), the non-TP geometrical strategy emerges as the optimal nonadaptive strategy by asymptotically outperforming the best TP strategy so long as the process displacement components are not very large.
After some background introduction to the general formalism of Gaussian processes in Sec.~\ref{sec:bkgd}, Sec.~\ref{sec:mse} shall be devoted to the explanation and derivation of the MSE formulas for both TP and non-TP reconstruction methods. With the aid of these formulas, Sec.~\ref{sec:geom} then proceeds with the construction of geometrical input states. Finally, Sec.~\ref{sec:results} compares the geometrical strategy with existing common nonadaptive input-state strategies for realistic CPTP Gaussian processes.
\section{Characterization of Gaussian processes}
\label{sec:bkgd}
A physical quantum process $\Phi$ transforms an input state $\rho_\textsc{in}$ into the output state $\rho_\textsc{out}=\Phi[\rho_\textsc{in}]$. A standard operational description for the quantum process $\Phi$ makes use of the \emph{Choi-Jamio\l kowski} formalism, which essentially states that all information about $\Phi$ is encoded into a positive operator ($\rho_\Phi$). Additionally, we say that $\Phi$ is Gaussian if it possesses a two-mode Gaussian quasidistribution. In this case, it is convenient to represent $\Phi$ by its Husimi~Q function
\begin{equation}
Q_\Phi=\exp\,(-\rvec{Z}^\dag \dyadic{A}\, \rvec{Z}+\rvec{B}^\dag\rvec{Z}+c_0)\,,
\end{equation}
which is defined by a complex matrix $\dyadic{A}$, a complex column $\rvec{B}$ (both in the computational basis) and a real constant $c_0$. Here $\rvec{Z}=\TP{(\rvec{z}_1\,\,\rvec{z}_2)}$ consolidates the complex variables labeling the process input~[$\rvec{z}_1=\TP{(z_1\,\,z_1^*)}$] and output~[$\rvec{z}_2=\TP{(z_2\,\,z_2^*)}$] modes.
\emph{Gaussian process tomography} pertains to the characterization of any given unknown $\rho_\Phi$ on the premise that its $Q_\Phi$ is Gaussian. The connection between $\rho_\Phi$ and $Q_\Phi$ is made by \emph{heterodyne measurements}~\cite{Arthurs:1965al,Yuen:1982hh,Arthurs:1988aa,Martens:1990al,Martens:1991aa,Raymer:1994aj,Trifonov:2001up,Werner:2004as} that sample the overcomplete set of coherent states $\{\ket{z}\bra{z}\}$ to probe the output state $\rho_\textsc{out}=\mathrm{tr}_1\{\TP{\rho}_\textsc{in}\otimes 1\, \rho_\Phi\}$, where the transposition is defined for the Fock basis in which all matrices are written in this article. Apart from directly recovering the Q-function parameters, these measurements are also known to give a smaller MSE for characterizing covariance matrices of Gaussian and broad classes of non-Gaussian quantum states compared to its homodyne counterpart~\cite{Rehacek:2015qp,Muller:2016da,Teo:2017aa}. Another reason for this choice of measurements is that when coherent input states $\{\ket{\alpha}\bra{\alpha}\}$ are used, the heterodyne measurement is equivalent to a direct sampling of the process Q~function, as $Q_\Phi(\alpha,\alpha^*,z,z^*)=\opinner{\alpha}{\rho_\textsc{out}}{\alpha}$.
For a completely-positive (CP) $\Phi$ ($\rho_\Phi\geq0$), $\dyadic{A}$, $\rvec{B}$ and $c_0$ are constrained such that $Q_\Phi$ is positive and square-integrable. One way to identify these constraints systematically is by reverting to the real phase-space representation: $Q_\Phi=\exp\,(-\TP{\rvec{R}} \dyadic{A}'\, \rvec{R}+\TP{\rvec{B}'}\rvec{R}+c_0)$ with $\rvec{R}=\TP{(x_1\,\,p_1\,\,x_2\,\,p_2)}$. This is done by recognizing that the transformations $\rvec{Z}=\dyadic{U}\,\rvec{R}$, $\dyadic{A}'=\dyadic{U}^\dag\dyadic{A}\,\dyadic{U}$ and $\rvec{B}'=\dyadic{U}^\dag\,\rvec{B}$ are exacted with the unitary matrix $\dyadic{U}=\dyadic{1}\otimes \dyadic{U}_0$ and $\dyadic{U}_0=\begin{pmatrix}1 & \I\\1 &-\I\end{pmatrix}/\sqrt{2}$. It is now clear that the conditions $\dyadic{A}'\geq0$ and $\dyadic{A}\geq0$ are necessary for $Q_\Phi$ to be real and square-integrable. These give a total of 15 independent real parameters, that is 10 from $\dyadic{A}$, 4 from $\rvec{B}$, and $c_0$. We may parametrize $\dyadic{A}$ and $\rvec{B}$ as
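The basis change between the complex and real phase-space representations can be checked numerically; the following numpy sketch (ours, for illustration) verifies that $\dyadic{U}=\dyadic{1}\otimes\dyadic{U}_0$ is unitary, so that $\dyadic{A}\geq0$ and $\dyadic{A}'=\dyadic{U}^\dag\dyadic{A}\,\dyadic{U}\geq0$ are equivalent.

```python
import numpy as np

# Single-mode block of the basis change: (z, z*)^T = U0 (x, p)^T
U0 = np.array([[1, 1j], [1, -1j]]) / np.sqrt(2)
U = np.kron(np.eye(2), U0)   # two-mode version, U = 1 (x) U0

assert np.allclose(U0 @ U0.conj().T, np.eye(2))   # U0 is unitary
assert np.allclose(U @ U.conj().T, np.eye(4))

# A' = U^dag A U is a unitary similarity, so it preserves Hermiticity and
# the spectrum: A >= 0 if and only if A' >= 0.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M @ M.conj().T                     # a random positive-semidefinite A
Ap = U.conj().T @ A @ U
assert np.allclose(np.linalg.eigvalsh(Ap), np.linalg.eigvalsh(A))
```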
\begin{align}
\dyadic{A}=&\,\begin{pmatrix}
\dyadic{A}_1 & \dyadic{A}_2\\
\dyadic{A}^\dag_2 & \dyadic{A}_3
\end{pmatrix}\,,\,\,\dyadic{A}_1=\begin{pmatrix}
a_1/2 & -c_1^*\\
-c_1 & a_1/2
\end{pmatrix}\,,\nonumber\\
\dyadic{A}_2=&\,\dfrac{1}{2}\begin{pmatrix}
g_2 & g_1^*\\
g_1 & g_2^*
\end{pmatrix}\,,\,\,\dyadic{A}_3=\begin{pmatrix}
a_2/2 & -c_2^*\\
-c_2 & a_2/2
\end{pmatrix}\,,\nonumber\\
\rvec{B}=&\,\begin{pmatrix}
\rvec{b}_1\\
\rvec{b}_2
\end{pmatrix}\,,\,\,\rvec{b}_1=\begin{pmatrix}
b_1\\
b_1^*
\end{pmatrix}\,,\,\,\rvec{b}_2=\begin{pmatrix}
b_2\\
b_2^*
\end{pmatrix}\,.
\label{eq:Gauss_param}
\end{align}
A useful $\Phi$ in quantum information theory is typically also TP $(\tr{\rho_\textsc{in}}=1=\tr{\rho_\textsc{out}})$. Under this constraint, for an invertible $\dyadic{A}_3$, it is shown in Appendix~\ref{app:TP} that 6 of the 15 real parameters are fixed by the rest inasmuch as
\begin{align}
\dyadic{A}_1=&\,\,\dyadic{A}_2\,\dyadic{A}_3^{-1}\,\dyadic{A}_2^\dag\qquad\qquad\qquad\qquad\,\,\,\,\,\, (\text{3 parameters})\,,\nonumber\\
\rvec{b}_1=&\,\,\dyadic{A}_2\,\dyadic{A}_3^{-1}\rvec{b}_2\qquad\qquad\qquad\qquad\,\,\,\,\,\,\,\, (\text{2 parameters})\,,\nonumber\\
c_0=&\,\log(2\sqrt{\DET{\dyadic{A}_3}})-\dfrac{1}{4}\rvec{b}^\dag_2\,\dyadic{A}_3^{-1}\rvec{b}_2\quad\! (\text{1 parameter})\,.
\label{eq:TP_constr}
\end{align}
As a simple example, if we consider a beam splitter that transforms a pair of input mode operators $a$ and $b$ into the pair of output operators $c=a\cos\theta+b\sin\theta$ and $d=-a\sin\theta+b\cos\theta$, then the relevant Choi-Jamio{\l}kowski operator for \emph{a single output mode} ($c$) clearly describes a CPTP process and possesses the Q~function $Q_\Phi=\exp(-|z_1|^2(\cos\theta)^2-|z_2|^2+z_1z_2\cos\theta+z^*_1z^*_2\cos\theta)$~\cite{Wang:2013aa}. In this case, a consistency check gives $\rvec{b}_2=\rvec{0}$, $\dyadic{A}_3=\dyadic{1}/2$, $\dyadic{A}_2=-(\cos\theta)\,\dyadic{\sigma}_x/2$, $\dyadic{A}_1=\dyadic{A}_2\,\dyadic{A}_3^{-1}\dyadic{A}_2^\dag=(\cos\theta)^2\,\dyadic{1}/2$, $\rvec{b}_1=\rvec{0}$ and $c_0=0$, with $\dyadic{\sigma}_x$ being the usual Pauli $x$ matrix in the standard basis.
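The beam-splitter consistency check can be reproduced numerically; the following numpy sketch (ours) confirms that the stated parameters satisfy the TP constraints of Eq.~\eqref{eq:TP_constr}.

```python
import numpy as np

theta = 0.7                       # arbitrary beam-splitter angle
c = np.cos(theta)
sigma_x = np.array([[0, 1], [1, 0]])

# Q-function parameters read off from Q_Phi for the beam splitter
A3 = np.eye(2) / 2
A2 = -c * sigma_x / 2
A1 = c**2 * np.eye(2) / 2
b2 = np.zeros(2)

# TP constraints of Eq. (TP_constr):
A3_inv = np.linalg.inv(A3)
assert np.allclose(A1, A2 @ A3_inv @ A2.conj().T)       # A1 = A2 A3^{-1} A2^dag
assert np.allclose(A2 @ A3_inv @ b2, np.zeros(2))       # b1 = A2 A3^{-1} b2 = 0
c0 = np.log(2 * np.sqrt(np.linalg.det(A3))) - b2 @ A3_inv @ b2 / 4
assert np.isclose(c0, 0.0)                              # matches the example
```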
If the TP constraint is absent from the process reconstruction, the parameter $c_0$ is not estimable since any experimental data can only recover $\rho_\Phi$ uniquely up to a constant multiple~\cite{Bongioanni:2010aa,Teo:2020aa}, such that $\Phi$ may only be fully characterized up to its operator trace. Therefore, the complete characterization of a general single-mode Gaussian $\Phi$ requires 14 real recoverable parameters:
\begin{equation}
\rvec{x}=\TP{(a_1\,\,a_2\,\,b_{1,\mathrm{r}}\,\,b_{1,\mathrm{i}}\,\,b_{2,\mathrm{r}}\,\,b_{2,\mathrm{i}}\,\,c_{1,\mathrm{r}}\,\,c_{1,\mathrm{i}}\,\,c_{2,\mathrm{r}}\,\,c_{2,\mathrm{i}}\,\,g_{1,\mathrm{r}}\,\,g_{1,\mathrm{i}}\,\,g_{2,\mathrm{r}}\,\,g_{2,\mathrm{i}})}\,,
\label{eq:param_nonTP}
\end{equation}
where the subscripts r and i denote the real and imaginary parts of a complex parameter. Formally, the coherent-state sampling measurements by heterodyning gather raw data sampled from the Q~function $Q_\textsc{out}(\rvec{x};z_2\equiv z,z^*_2\equiv z^*)$ of $\rho_\textsc{out}$ (originating from a given $\rho_\textsc{in}$) that encodes $\rvec{x}$. Numerical techniques are then used to obtain the estimator $\widehat{\rvec{x}}$ (distinguished from the true parameter by a caret).
In this work, we focus on studying the accuracy of $\widehat{\rvec{x}}$ for a given unknown $\rvec{x}$. This may be quantified by the MSE~$\overline{(\widehat{\rvec{x}}-\rvec{x})^2}$, where the overline denotes an average over all possible data of a fixed total sample size. For an analytical study, we shall investigate the asymptotic expression of the MSE that applies to typical tomography situations involving large datasets.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Fig1}
\caption{\label{fig:gauss_opt_tomo}Schematic diagram of (Gaussian) process characterization. A set of $J$ input coherent states of amplitudes $\{\alpha_j\}$, lying in the phase-space box region $-L\leq\alpha_{j,\mathrm{r}},\alpha_{j,\mathrm{i}}\leq L$, are fed to the unknown Gaussian process $\Phi$. Heterodyne measurements, implemented through simultaneous measurements of the position ($X$) and momentum ($P$) quadratures, are performed $N$ times on each of the respective output states. The collected data from all $J$ output states are processed on the discretized phase space of $K=M^2$ bins, after which the Q-function estimator $\widehat{Q}_\Phi$ is reconstructed from the binned data.}
\end{figure}
\section{Mean squared-error formulas}
\label{sec:mse}
\subsection{Relaxation of the TP constraint}
\label{subsec:mse_nonTP}
We first investigate the case where coherent states are used as input states for characterizing an unknown, generally non-TP Gaussian $\Phi$. Given a $\rho_\textsc{in}=\ket{\alpha}\bra{\alpha}$, the output Gaussian Q-function~$Q_\textsc{out}$ can be written in the form
\begin{align}
Q_\textsc{out}=&\,\E{-\TP{\rvec{v}}\rvec{x}'}\,,\nonumber\\
\frac{\TP{\rvec{v}}}{2}=&\,\Big(\frac{|\alpha|^2}{2}\,\,\,\,\frac{|z|^2}{2}\,\,\,\,\alpha_\mathrm{r}\,\,\,\,\alpha_\mathrm{i}\,\,\,\,z_\mathrm{r}\,\,\,\,-z_\mathrm{i}\,\,\,\,(\alpha^2)_\mathrm{r}\,\,\,\,(\alpha^2)_\mathrm{i}\,\,\,\,(z^2)_\mathrm{r}\,\,\,\,-(z^2)_\mathrm{i}\nonumber\\
&\,\,\,\,\,(\alpha z^*)_\mathrm{r}\,\,\,\,(\alpha z^*)_\mathrm{i}\,\,\,\,(\alpha^*z^*)_\mathrm{r}\,\,\,\,(\alpha^*z^*)_\mathrm{i}\,\,\,\,\frac{1}{2}\Big)\,,
\label{eq:Qout_coh}
\end{align}
where $\rvec{x}'=\TP{(\rvec{x}\,\,\,c_0)}$ carries the non-estimable $c_0$. In Gaussian process tomography, $\rvec{x}$ may be extracted from a discretized system of equations governed by \eqref{eq:Qout_coh}. The latter is established by first sending $J$ input coherent states $\{\ket{\alpha_j}\bra{\alpha_j}\}^J_{j=1}$, next performing heterodyne measurements on all corresponding output states $\rho^{(j)}_\textsc{out}$, and finally binning the collected data into an $M\times M$ phase-space grid. If the number of phase-space bins $K=M^2$ is large, the final reconstructed $\widehat{\rvec{x}}$ ($\widehat{Q}_\textsc{out}$) should approximate the actual $\rvec{x}$ ($Q_\textsc{out}$) efficiently. The entire flow of Gaussian-process characterization is concisely illustrated in Fig.~\ref{fig:gauss_opt_tomo}.
For a sufficiently large $J$, we can extract $\rvec{x}=\widetilde{\dyadic{V}^-}\rvec{u}$ by inverting the exponent of Eq.~\eqref{eq:Qout_coh} after taking the logarithm on both sides---the \emph{logarithmic inversion}~(LI) procedure. Here $\widetilde{\dyadic{V}^-}$ is the matrix of the first 14 rows of the \emph{left-pseudoinverse} $\dyadic{V}^-$ $(\dyadic{V}^-\dyadic{V}=\dyadic{1})$ that is defined for the \mbox{$JK\times15$} matrix $\dyadic{V}=(\rvec{v}_{\alpha_1,z_1}\,\,\ldots\,\,\rvec{v}_{\alpha_1,z_K}\,\,\rvec{v}_{\alpha_2,z_1}\,\,\ldots\,\,\rvec{v}_{\alpha_2,z_K}\,\,\ldots\,\,\rvec{v}_{\alpha_J,z_1}\,\,\rvec{v}_{\alpha_J,z_K})^\textsc{t}$ and \mbox{$JK\times1$} column $\rvec{u}=(-\log p_{11}\,\,\ldots\,\, -\log p_{1K}\,\, -\log p_{21}\,\,\ldots\,\, -\log p_{2K}\,\,\ldots\,\, -\log p_{J1}\,\,\ldots\,\, -\log p_{JK})^\textsc{t}$ acquired from an \mbox{$M\times M$} phase-space grid. Each probability $p_{jk}$ is proportional to $Q^{(j)}_\textsc{out}(\rvec{x};z_k,z^*_k)$ up to proper normalization as a consequence of binning.
For LI to be successful, the system $\rvec{u}=\dyadic{V}\rvec{x}'$ must be informationally complete~(IC), that is, there exists a $\dyadic{V}^-$ that is uniquely given by $\dyadic{V}^{-}=(\dyadic{V}^\dag\dyadic{V})^{-1}\dyadic{V}^\dag$. This implies that measurement data collected with such a set of input states uniquely characterize the unknown Gaussian process. In equivalent linear-algebraic terms, an IC set of linearly independent input states gives rise to an invertible \emph{Gram matrix} $\dyadic{G}=\dyadic{V}^\dag\dyadic{V}$ if $J\geq6$. To understand why this is the case, we observe that as 6 out of the 15 terms in $\rvec{v}$ do not depend on $z$, when $J<6$, there naturally exists at least one null right eigenvector $\rvec{e}$ for $\dyadic{V}$ of the form $\rvec{e}=\TP{(e_1\,\,\,0\,\,\,e_2\,\,\,e_3\,\,\,0\,\,\,0\,\,\,e_4\,\,\,e_5\,\,\,0\,\,\,0\,\,\,0\,\,\,0\,\,\,0\,\,\,0\,\,\,e_6)}$, where the 6-dimensional $(e_1\,\,\,e_2\,\,\,e_3\,\,\,e_4\,\,\,e_5\,\,\,e_6)$ is orthogonal to $(|\alpha_j|^2\,\,\,2\alpha_{j,\mathrm{r}}\,\,\,2\alpha_{j,\mathrm{i}}\,\,\,2(\alpha_j^2)_\mathrm{r}\,\,\,2(\alpha_j^2)_\mathrm{i}\,\,\,1)$ for any amplitude $\alpha_j$. This observation is therefore consistent with the alternative arguments in~\cite{Wang:2013aa}.
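The LI procedure reduces to a linear least-squares problem once the rows $\rvec{v}$ are assembled; the following numpy sketch (ours, noiseless for simplicity) builds $\dyadic{V}$ from Eq.~\eqref{eq:Qout_coh} for random amplitudes and bin centers, checks informational completeness with $J\geq6$, and recovers $\rvec{x}'$ exactly by the left-pseudoinverse.

```python
import numpy as np

def v_row(alpha, z):
    """One row of V: the exponent of Eq. (Qout_coh), -log Q_out = v . x'."""
    a2, z2 = alpha**2, z**2
    az, asz = alpha * np.conj(z), np.conj(alpha) * np.conj(z)
    return np.array([abs(alpha)**2, abs(z)**2,
                     2 * alpha.real, 2 * alpha.imag, 2 * z.real, -2 * z.imag,
                     2 * a2.real, 2 * a2.imag, 2 * z2.real, -2 * z2.imag,
                     2 * az.real, 2 * az.imag, 2 * asz.real, 2 * asz.imag,
                     1.0])

rng = np.random.default_rng(1)
alphas = rng.normal(size=8) + 1j * rng.normal(size=8)   # J = 8 input states (J >= 6)
zs = rng.normal(size=25) + 1j * rng.normal(size=25)     # K = 25 phase-space bins
V = np.array([v_row(a, z) for a in alphas for z in zs])  # JK x 15 matrix

assert np.linalg.matrix_rank(V) == 15    # informationally complete

x_true = rng.normal(size=15)             # 14 process parameters plus c0
u = V @ x_true                           # noiseless -log Q-function samples
x_hat = np.linalg.pinv(V) @ u            # left-pseudoinverse (LI) recovery
assert np.allclose(x_hat, x_true)
```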
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Fig2}
\caption{\label{fig:th_sim_coh}The resource performance of four simulated tomography scenarios on a random Gaussian process using various numbers of randomly-chosen input coherent states and a phase-space grid of $K=400$ as an illustration. All MSEs are computed over all the 14 estimable parameters, and averaged over 100 experiments and 50 random sets of input states for each $J$ and $N$. Dashed curves are results obtained from the analytical formula in Eq.~\eqref{eq:asymp_cohIN_hetOUT} that asymptotically approximates the MSE based on LI reconstruction. The general trend is consistent with the physical understanding that the accuracy of $\widehat{\rvec{x}}$ improves when $J$ and $L$ are large.}
\end{figure}
In realistic scenarios, the log-probability column $\rvec{u}$ is to be replaced by the column of relative log-frequencies $\widehat{\rvec{u}}=(-\log \nu_{jk})$ that reflects the physical relative photodetection counts. Note that $\sum_k\nu_{jk}=1$, and $\nu_{jk}\rightarrow p_{jk}$ as $N\gg1$ in a statistically consistent setting. As a consequence, the LI procedure that now handles these noisy data $\nu_{jk}$ should be modified. As the counts are noisy with statistical fluctuation, for any finite number of sampling copies $N$ per input state, there very likely exist entries in $\widehat{\rvec{u}}$ that are infinite ($\nu_{jk}=0$ for some $j$ and $k$), especially when the corresponding Q-function magnitudes are small. To cope with statistical noise in LI, one may consider only finite entries of $\widehat{\rvec{u}}$. After some statistical reasoning (see Appendix~\ref{app:asymp_log}), we obtain the asymptotic expression
\begin{align}
\mathrm{MSE}\equiv&\,\overline{(\widehat{\rvec{x}}-\rvec{x})^2}=\dfrac{1}{N}\mathrm{Tr}\Big\{\widetilde{\dyadic{V}^-}^\dag\widetilde{\dyadic{V}^-}\,\dyadic{Y}\Big\}\,,\nonumber\\
Y_{jk,j'k'}=&\,\delta_{j,j'}[1-(1-\widetilde{p}_{jk})^N][1-(1-\widetilde{p}_{jk'})^N]\left(\dfrac{\delta_{k,k'}}{\widetilde{p}_{jk}}-1\right)\,,
\label{eq:asymp_cohIN_hetOUT}
\end{align}
where we note that $\sum_k\widetilde{p}_{jk}=1$ are the normalized true probabilities related to $p_{jk}$ through $\widetilde{p}_{jk}=p_{jk}/\sum_kp_{jk}$. Figure~\ref{fig:th_sim_coh} illustrates the positive match between the theoretical expression in \eqref{eq:asymp_cohIN_hetOUT} and simulation results for a given Gaussian process. The real and imaginary parts of the complex input coherent-state amplitude $\alpha_j=\alpha_{j,\mathrm{r}}+\I\,\alpha_{j,\mathrm{i}}$ are chosen from the closed interval [$-L,L$], where the influence of $L$ on the characterization quality of $\Phi$ is explored. We assume that $\gamma_j\equiv\sum_kp_{jk}$ are known with small statistical fluctuation up to a scalar multiple.
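The asymptotic expression \eqref{eq:asymp_cohIN_hetOUT} is straightforward to evaluate numerically. A sketch, assuming a real design matrix \texttt{V} as before and a $(J,K)$ array of normalized true probabilities with rows summing to one (both hypothetical stand-ins):

```python
import numpy as np

def asymptotic_mse(V, p_tilde, N):
    """Evaluate the asymptotic LI mean squared error of Eq. (asymp_cohIN_hetOUT).

    V       : (J*K, 15) design matrix (hypothetical stand-in),
    p_tilde : (J, K) normalized true probabilities, each row summing to 1,
    N       : number of sampling copies per input state.
    """
    J, K = p_tilde.shape
    Vm = np.linalg.pinv(V)[:14]   # tilde{V^-}: first 14 rows of the pseudoinverse
    Y = np.zeros((J * K, J * K))  # block-diagonal in j via the delta_{j,j'} factor
    for j in range(J):
        p = p_tilde[j]
        f = 1.0 - (1.0 - p) ** N  # probability that bin (j,k) is ever populated
        Y[j*K:(j+1)*K, j*K:(j+1)*K] = np.outer(f, f) * (np.diag(1.0 / p) - 1.0)
    return np.trace(Vm.T @ Vm @ Y) / N
```

For large $N$ the factors $f\rightarrow1$ and the MSE scales as $1/N$, consistent with the dashed curves of Fig.~\ref{fig:th_sim_coh}.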
We stress that the LI estimator $\widehat{\rvec{x}}$ introduced here, while useful as a formalism for an analytical grasp of the actual characterization problem, usually does not lead to a physical process estimator $\widehat{\Phi}$, since the inversion procedure pays no attention to the positivity requirement for the estimated complex $\widehat{\dyadic{A}}$ matrix. Numerically, it is possible to enforce such a positivity constraint in LI, in which case the resulting estimator will have some statistical bias and an MSE that deviates slightly from the expression in \eqref{eq:asymp_cohIN_hetOUT}. One may also choose to perform LI followed by a projection onto the real and positive $\dyadic{A}'$-space in the real phase-space representation, as previously discussed in Sec.~\ref{sec:bkgd}, to obtain a sufficiently good physical estimator $\widehat{\rvec{x}}$. Supposing that the estimated real matrix $\widehat{\dyadic{A}'}=\dyadic{U}_\text{diag}\,\dyadic{D}\,\dyadic{U}^\dag_\text{diag}$ is diagonalized by the unitary $\dyadic{U}_\text{diag}$, this projection is done through the map $\widehat{\dyadic{A}'}\mapsto\widehat{\dyadic{A}}'_\text{physical}=\mathrm{Tr}\big\{\dyadic{D}\big\}\dyadic{U}_\text{diag}\,\dyadic{D}_+\dyadic{U}^\dag_\text{diag}/\Tr{\dyadic{D}_+}$, where $\dyadic{D}_+$ is essentially the diagonal matrix $\dyadic{D}$ with all negative eigenvalues set to zero.
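The projection map just described has a compact numerical form. A sketch for a Hermitian estimate with at least one positive eigenvalue (function name is our own):

```python
import numpy as np

def project_physical(A_est):
    """Project a Hermitian estimate onto the positive cone, preserving its trace:
    A' -> Tr{D} U D_+ U^dag / Tr{D_+}, where D_+ is the eigenvalue matrix with
    negative entries set to zero. Assumes at least one positive eigenvalue."""
    evals, U = np.linalg.eigh(A_est)    # A_est = U diag(evals) U^dag
    d_plus = np.clip(evals, 0.0, None)  # zero out negative eigenvalues
    scale = evals.sum() / d_plus.sum()  # restore the original trace
    return (U * (scale * d_plus)) @ U.conj().T
```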
While LI with positivity constraint and the projection method give highly similar estimators for sufficiently large $N$, the scaling behaviors in $JN$ for both methods generally vary. To put things on firmer statistical grounds, more meaningful estimators, such as the \emph{maximum-likelihood}~(ML) estimators, should be considered. In the context of non-TP process characterization, ML asymptotically gives very similar reconstructions to LI under the physical process constraints. Section~\ref{subsec:ML_TP} provides an explanation regarding this connection and presents a recipe for the ML reconstruction prescription.
\subsection{Imposition of the TP constraint}
\label{subsec:ML_TP}
If the unknown Gaussian process $\Phi$ is TP, then the constraints specified in \eqref{eq:TP_constr} dictate that 9 parameters are enough to characterize $\Phi$:
\begin{equation}
\rvec{x}=\TP{(a_2\,\,b_{2,\mathrm{r}}\,\,b_{2,\mathrm{i}}\,\,c_{2,\mathrm{r}}\,\,c_{2,\mathrm{i}}\,\,g_{1,\mathrm{r}}\,\,g_{1,\mathrm{i}}\,\,g_{2,\mathrm{r}}\,\,g_{2,\mathrm{i}})}\,.
\label{eq:param_TP}
\end{equation}
We note that in this case, $\dyadic{A}\geq0$ so long as $\dyadic{A}_3>0$, since we may write
\begin{equation}
\dyadic{A}\,\widehat{=}\begin{pmatrix}
\dyadic{A}_2\,\dyadic{A}_3^{-1/2}\\
\dyadic{A}_3^{1/2}
\end{pmatrix}\begin{pmatrix}
\dyadic{A}_3^{-1/2}\dyadic{A}_2^\dag\quad\dyadic{A}_3^{1/2}
\end{pmatrix}
\end{equation}
with well-defined matrix square-roots. This also implies that $\dyadic{A}$ is rank-2 and that $a_2^2-4|c_2|^2>0$ is the only necessary and sufficient positivity condition for a CPTP Gaussian $\Phi$ as no other constraints are imposed on $\dyadic{A}_2$ and $\rvec{b}_2$.
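The factorization above can be verified numerically: stacking $\dyadic{A}_2\dyadic{A}_3^{-1/2}$ on $\dyadic{A}_3^{1/2}$ and forming $\dyadic{M}\dyadic{M}^\dag$ produces a positive semidefinite, rank-2 matrix whose lower-right block reproduces $\dyadic{A}_3$. A sketch with $2\times2$ blocks (the block sizes here are our assumption for illustration):

```python
import numpy as np

def matrix_sqrt(M):
    """Principal square root of a positive-definite Hermitian matrix."""
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(w)) @ U.conj().T

def assemble_A(A2, A3):
    """Build A = M M^dag with M = (A2 A3^{-1/2}; A3^{1/2}), which is
    positive semidefinite and rank-2 whenever A3 > 0."""
    S = matrix_sqrt(A3)
    M = np.vstack([A2 @ np.linalg.inv(S), S])
    return M @ M.conj().T
```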
Because of the nonlinear dependence on the 9 parameters in the exponent of the output Q~function in accordance with \eqref{eq:TP_constr}, LI is no longer applicable as it only works with the highly specific form of the output Q~function stated in \eqref{eq:Qout_coh}. Instead, statistical methods are usually a more favorable option to infer $\rvec{x}$ from the collected data. A popular method is to maximize the log-likelihood function $\log \mathcal{L}=\sum_{jk}\nu_{jk}\log(p_{jk}/\sum_{j'k'}p_{j'k'})$ that takes the multinomial form when each output state is measured with $N$ sampling copies of heterodyne detection independently, subject to the positivity constraint of $\dyadic{A}_3\geq\dyadic{0}$. The log-likelihood $\log \mathcal{L}$ may in general be a nonconvex function of the TP Gaussian-process parameters, so standard numerical techniques might be needed to search for its global maximum for optimal accuracy.
We emphasize that the ML scheme may be applied to any tomographic situation, which evidently includes the characterization of non-TP processes. In this context, with respect to the variable probabilities $p'_{jk}=\exp(-\TP{\rvec{v}_{jk}}\rvec{x}')$, we consider only those $\nu_{jk}>0$ in
\begin{equation}
\log \mathcal{L}=-\sum_{jk}\nu_{jk}\,\TP{\rvec{v}_{jk}}\rvec{x}'-\left(\sum_{j'}\gamma_{j'}\right)\log\left(\sum_{jk}\E{-\TP{\rvec{v}_{jk}}\rvec{x}'}\right)\,,
\end{equation}
where we again assume that the $\gamma_j$s can be determined through calibration procedures up to a multiplicative constant and are not part of the statistical consideration. Maximizing $\log \mathcal{L}$ involves scaling its gradient
\begin{equation}
\frac{\updelta \log \mathcal{L}}{\updelta\rvec{x}'}=\sum_{jk}\left(-\nu_{jk}+ \dfrac{\mu\,p'_{jk}}{\sum_{j'k'}p'_{j'k'}}\right)\TP{\rvec{v}_{jk}}
\end{equation}
for the parameter $\rvec{x}'$ and $\mu=\sum_{j'}\gamma_{j'}$. If the solution to $\mu p'_{jk}/\sum_{j'k'}p'_{j'k'}=\nu_{jk}$ exists under the physical constraints of the parameter estimator $\widehat{\rvec{x}}$ for which the corresponding process estimator $\widehat{\Phi}$ remains a CP process [namely $\dyadic{A}\geq0$ as stated in \eqref{eq:Gauss_param}], then the peak of $\log \mathcal{L}$ can be reached by the maximization; in this situation, the ML and LI schemes are equivalent, as $p'_{jk}=\nu_{jk}$. In the hypothetical event that the $\nu_{jk}$ are completely noiseless, this solution uniquely maximizes the log-likelihood. For finite $N$, satisfying the constraints of $\rvec{x}$ almost surely leads to $p'_{jk}\neq\nu_{jk}$. Nevertheless, for sufficiently large $N$, the MSEs obtained with both schemes are typically not too far from each other in the absence of other sources of external systematic errors.
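As a numerical illustration, plain gradient ascent on this log-likelihood already recovers the measured frequencies in the noiseless limit. The sketch below ignores the positivity constraint and uses a fixed step size; a real reconstruction would employ a constrained solver. All names are our own:

```python
import numpy as np

def ml_gradient_ascent(V, nu, mu, steps=20000, lr=2e-2):
    """Maximize log L = -sum nu v^T x' - mu log sum exp(-v^T x') by
    following the gradient sum(mu p'/sum p' - nu) v stated in the text.

    V  : (n, d) rows v_{jk};  nu : (n,) relative frequencies;
    mu : total normalization sum_j gamma_j. Unconstrained sketch only.
    """
    x = np.zeros(V.shape[1])
    for _ in range(steps):
        p = np.exp(-V @ x)                    # unnormalized model probabilities p'
        grad = V.T @ (mu * p / p.sum() - nu)  # gradient of log L w.r.t. x'
        x += lr * grad                        # ascent step
    return x
```

Since $\log\mathcal{L}$ here is a negated log-sum-exp plus a linear term, it is concave, so a small fixed step size suffices for this illustration.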
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Fig3}
\caption{\label{fig:th_sim_coh_TP}The resource performance of four simulated tomography scenarios on a random Gaussian TP process of general specifications identical to those of Fig.~\ref{fig:th_sim_coh}, where 50 simulated experiments and 20 random sets of input coherent states are used to average the MSE for the 9 CPTP parameters. Here, $J=3$ turns out to be the minimum number of input coherent states to fully determine an unknown TP Gaussian process~\cite{Wang:2013aa}. Dashed curves are results obtained from the analytical formula in Eq.~\eqref{eq:asymp_cohIN_hetOUT_TP}, which approximate the respective simulated ML MSEs well for large $JN$.}
\end{figure}
The asymptotic MSE expression for the maximum-likelihood~(ML) estimator $\widehat{\rvec{x}}_\textsc{ml}$ for $\rvec{x}$ may be approximately calculated by assuming that $N\gg1$ per output state is large enough so that $\updelta\rvec{x}\equiv\rvec{x}-\widehat{\rvec{x}}_\textsc{ml}$ is typically small. Similar to the treatment presented in Sec.~\ref{subsec:mse_nonTP}, we can define a $JK\times9$ matrix $\dyadic{V}_\textsc{tp}$ such that its $(j,k)$th row is equal to the $1\times9$ row
\begin{equation}
\TP{\rvec{v}}_\textsc{tp}=(\sbra{\dyadic{M}^\dag_{1,jk}}\dyadic{E}_1\,\,\,\,\sbra{\dyadic{M}_{2,jk}+\dyadic{M}^\dag_{2,jk}}\dyadic{E}_2\,\,\,\,\sbra{\dyadic{M}^\dag_{3,jk}}\dyadic{E}_3)\,,
\end{equation}
where the definitions of all auxiliary matrices are given in Appendix~\ref{app:asymp_log}. The principle of small variations thus states that $\dyadic{V}_\textsc{tp}\,\updelta\rvec{x}=\updelta\rvec{u}$. It follows that the asymptotic MSE for estimating the 9 independent CPTP parameters using the constrained ML method is approximately
\begin{equation}
\mathrm{MSE}\equiv\overline{(\widehat{\rvec{x}}_\textsc{ml}-\rvec{x})^2}\approx\dfrac{1}{N}\Tr{{\dyadic{V}^-_\textsc{tp}}^\dag\dyadic{V}^-_\textsc{tp}\,\dyadic{Y}}\,,
\label{eq:asymp_cohIN_hetOUT_TP}
\end{equation}
where $\dyadic{Y}$ is as specified in Eq.~\eqref{eq:asymp_cohIN_hetOUT}.
The ML estimator $\widehat{\rvec{x}}_\textsc{ml}$ is to be constrained by the positivity of $\dyadic{A}_3$ and is typically a biased estimator. In general, there could still be a small difference between the ML MSE and the right-hand side of Eq.~\eqref{eq:asymp_cohIN_hetOUT_TP}, which is obtained from operator derivatives that assume the existence of open sets without parameter boundary constraints. Barring this technical issue, Fig.~\ref{fig:th_sim_coh_TP} shows that \eqref{eq:asymp_cohIN_hetOUT_TP} can still serve as a reasonably accurate estimate for the actual MSE.
\section{Geometrical set of input coherent states}
\label{sec:geom}
The LI procedure discussed in Sec.~\ref{subsec:mse_nonTP} hinges on the existence of $\dyadic{V}^{-}$. This is again synonymous with having a $\dyadic{V}$ with no null right eigenvectors, thereby ruling out sets of coherent states with identical amplitudes $|\alpha_j|\equiv|\alpha|$ as candidates for LI, since they result in at least one such null eigenvector, namely $\rvec{e}\propto\TP{(-1/|\alpha|^2\,\,\,0\,\,\,\ldots\,\,\,0\,\,\,1)}$, for any $J$. Therefore, choices that include the set of coherent states with complex amplitudes that form a ring of radius $r$ in phase space---$\alpha_j=r\,\E{2\pi\I\,j/J}$ for $r>0$ and $1\leq j\leq J$---are non-IC and result in failure of the LI scheme. In the regime of ML estimation, these symmetric sets of coherent states give a convex set of estimated parameters that are consistent with the ML probabilities obtained from the measurement data. One therefore cannot obtain a unique parameter reconstruction with such input states.
A general observation from Figs.~\ref{fig:th_sim_coh} and \ref{fig:th_sim_coh_TP} is that input coherent states with lower phase-space energies ($L=1$ for instance) also tend to give larger average MSE values. This is because a set of low-energy coherent states is typically closely packed in phase space, and its Gram matrix $\dyadic{G}$ can at times be ill-conditioned, that is, its smallest eigenvalue can be very close to zero. Colloquially, low-energy coherent states are not very linearly independent. However, with appropriate optimization strategies, low-energy input coherent states can still characterize any unknown Gaussian process with significantly higher accuracy than random low-energy input states, thereby allowing us to maximally utilize these energy-efficient resources.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{Fig4}
\caption{\label{fig:geom_states}(a,b,c)~Regarding the phase-space arrangement (represented by $x,p$ coordinate markers) of the input coherent states, the actual optimal states that collectively minimize the upper bound of \eqref{eq:asymp_cohIN_hetOUT} are geometrically positioned as far apart from each other as possible, such that some states are located in the interior of the finite-energy boundaries defined by $-L\leq x,p\leq L$. (d)~The minimum upper bound of \eqref{eq:asymp_cohIN_hetOUT} monotonically decreases with the number of input states $J$ in such geometrical sets, as expected. Optimality saturates already at $L=2$, beyond which further increasing the laser intensity becomes moot. For low-energy applications, $L=1$ is sufficient for precise Gaussian-process tomography.}
\end{figure}
Regardless of whether or not the TP constraint is imposed when reconstructing $\Phi$, the best performance of any Gaussian-process characterization is ultimately tied to the optimal value of the figure of merit in question. For our case, this is quantified by the minimum of the MSE. The truly optimal set of input states that minimizes the MSE according to either Eq.~\eqref{eq:asymp_cohIN_hetOUT} or \eqref{eq:asymp_cohIN_hetOUT_TP} requires the knowledge of the unknown Gaussian process. Such a set is therefore operationally unobtainable.
We introduce a solution to approximately minimize the MSE without such knowledge by first noting that since $\dyadic{Y}\geq0$, a variant of the Cauchy--Schwarz inequality for positive matrices reads
\begin{equation}
\mathrm{MSE}\leq\dfrac{1}{N}\mathrm{Tr}\Big\{\widetilde{\dyadic{V}^-}^\dag\widetilde{\dyadic{V}^-}\Big\}\Tr{\dyadic{Y}}\approx JN\,\mathrm{Tr}\Big\{\widetilde{\dyadic{V}^-}^\dag\widetilde{\dyadic{V}^-}\Big\}\,,
\end{equation}
where the final approximation is valid for sufficiently large $M$, so that $\widetilde{p}_{jk}\ll1$ and $Y_{jk,j'k'}\approx N^2 \widetilde{p}_{jk}\delta_{j,j'}\delta_{k,k'}$. Under this approximation, it is clear that the upper bound of the asymptotic MSE is independent of $\Phi$ and is therefore a purely geometrical term that solely depends on the collective phase-space arrangement of the input coherent states. Therefore, such \emph{geometrical} sets of input states that minimize the MSE Cauchy--Schwarz upper bound can be universally defined for any Gaussian process since they are independent of the measurement data and the unknown process.
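Two facts here, that equal-amplitude rings make $\dyadic{V}$ rank-deficient and that the upper-bound objective depends only on the input-state geometry, can be illustrated numerically. The 9-component feature row below is a toy stand-in for the 15-component $\rvec{v}_{\alpha,z}$; only its six $z$-independent entries follow the text, while the three $z$-dependent ones are invented for illustration:

```python
import numpy as np
from itertools import product

def design_matrix(alphas, zs):
    """Rows v_{alpha,z} for every (input amplitude, phase-space point) pair.
    Toy 9-component feature row: the six z-independent entries follow the
    text; the three z-dependent ones are invented for illustration."""
    rows = []
    for a, z in product(alphas, zs):
        ar, ai, zr, zi = a.real, a.imag, z.real, z.imag
        a2 = a * a
        rows.append([abs(a)**2, 2*ar, 2*ai, 2*a2.real, 2*a2.imag, 1.0,
                     ar*zr, ai*zi, ai*zr - ar*zi])
    return np.array(rows)

def geometric_cost(V):
    """Process-independent objective Tr{V^- dag V^-} = sum_i 1/s_i^2."""
    s = np.linalg.svd(V, compute_uv=False)
    return float(np.sum(1.0 / s**2))
```

Minimizing `geometric_cost` over the candidate amplitudes (for a fixed phase-space grid) is then a purely geometric optimization, needing no knowledge of the process.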
Figure~\ref{fig:geom_states} illustrates some desirable properties of geometrical input states. In particular, when $J=6$ (minimal case), the geometrical set is unique up to a collective rotation, whereas arrangements can vary for $J>6$, at times with the possibility of two or more coherent states being very close to each other owing to overcomplete redundancy. The recipe for deriving such geometrical sets of input states does not apply to the TP version of the asymptotic MSE in Eq.~\eqref{eq:asymp_cohIN_hetOUT_TP}, since the TP constraint tangles the 9 independent Gaussian parameters in a highly nonlinear way that cannot be cleanly separated from the phase-space variables ($\dyadic{V}_\textsc{tp}$ depends on these parameters).
As we shall see in Sec.~\ref{sec:results}, for heterodyne detection, such a geometrical set of coherent states on average gives a nearly-optimal MSE when the TP constraint is lifted. Moreover, these special input states can also beat optimal input states that minimize the MSE when trace preservation is imposed for certain classes of CPTP Gaussian processes.
\section{Performance on CPTP Gaussian processes}
\label{sec:results}
For a given CPTP Gaussian process, both its Q-function parameters $\dyadic{A}$ and $\rvec{B}$ are related to another set of parameters (refer to Appendix~\ref{app:phys_cptp}) according to the maps
\begin{align}
\dyadic{A}=&\,\lim_{t\rightarrow\infty}\,\dfrac{1}{2}\,\dyadic{U}\left[(\dyadic{1}+\TP{\dyadic{X}})\,\dyadic{\Sigma}_t\,(\dyadic{1}+\dyadic{X})+\dyadic{0}\oplus\dyadic{Y}+\dyadic{1}/2\right]^{-1}\dyadic{U}^\dag\,,\nonumber\\
\rvec{B}=&\,2\,\dyadic{A}\,\dyadic{U}\rvec{\mu}_0\,,
\label{eq:phys_cptp}
\end{align}
where $\dyadic{U}$ is the unitary matrix defined in Sec.~\ref{sec:bkgd}. The matrices $\dyadic{X}$ and $\dyadic{Y}$ effect the general transformations $\rvec{\mu}\rightarrow\dyadic{X}\rvec{\mu}+\rvec{\mu}_0$ and $\dyadic{\Sigma}_\textsc{w}\rightarrow\dyadic{\Sigma}_\textsc{w}'=\TP{\dyadic{X}}\,\dyadic{\Sigma}_\textsc{w}\,\dyadic{X}+\dyadic{Y}$ on the mean ($\rvec{\mu}$) and covariance ($\dyadic{\Sigma}_\textsc{w}$) of the \emph{Wigner function} describing an input Gaussian state, and
\begin{equation}
\dyadic{\Sigma}_t=\dfrac{1}{2}\,\begin{pmatrix}
\cosh t & 0 & \sinh t & 0 \\
0 & \cosh t & 0 & -\sinh t \\
\sinh t & 0 & \cosh t & 0 \\
0 & -\sinh t & 0 & \cosh t
\end{pmatrix}\,.
\label{eq:SIGMA0}
\end{equation}
The matrices $\dyadic{X}$ and $\dyadic{Y}$ may then alternatively be understood as functions of a complete set of relevant operations, namely phase shift~($\phi$), displacement~($x_0,p_0$), squeezing~($r,\theta$), losses~($\chi<1$) and amplifications~($\chi>1$), and couplings to a Gaussian reservoir~($n_\textsc{t},a_\textsc{t},\theta_\textsc{t}$):
\begin{align}
\dyadic{X}=&\,\chi\dyadic{S}(r,\theta)\,\dyadic{R}(\phi)\,,\nonumber\\
\dyadic{Y} =&\, |1-\chi|^2 \dyadic{1}/2 + \frac{n_\textsc{t}}{2}\,\TP{\dyadic{R}(\theta_\textsc{t})}\begin{pmatrix}
1+a_\textsc{t} & 0\\
0 & 1-a_\textsc{t}
\end{pmatrix} \dyadic{R}(\theta_\textsc{t}) \,,\nonumber\\
\dyadic{R}(\phi)=&\,\begin{pmatrix}
\cos\phi & \sin\phi\\
-\sin\phi & \cos\phi
\end{pmatrix}\,,\nonumber\\
\dyadic{S}(r,\theta)=&\,\,\TP{\dyadic{R}(\theta)}\begin{pmatrix}
\E{r} & 0\\
0 & \E{-r}
\end{pmatrix}\dyadic{R}(\theta)\,,\nonumber\\
\rvec{\mu}_0=&\,\begin{pmatrix}
x_0\\
p_0
\end{pmatrix}\,.
\label{eq:decomposition}
\end{align}
It should be noted that although the decomposition in \eqref{eq:decomposition} preserves the symplectic character of the transformed covariance~(see Appendix~\ref{app:phys_cptp}), it is not unique. Different Gaussian operations applied in various orders can achieve the same physical effect, which is why we reconstruct the more generally applicable parameters $\dyadic{A}$ and $\rvec{B}$ instead of those in (\ref{eq:decomposition}), even though the latter are directly tied to actual experimental configurations.
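The operations in \eqref{eq:decomposition} translate directly into code; a sketch (function and parameter names are our own):

```python
import numpy as np

def R(phi):
    """Rotation matrix R(phi) of Eq. (decomposition)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s], [-s, c]])

def S(r, theta):
    """Squeezing matrix S(r, theta) = R(theta)^T diag(e^r, e^-r) R(theta)."""
    return R(theta).T @ np.diag([np.exp(r), np.exp(-r)]) @ R(theta)

def XY(phi, r, theta, chi, n_t, a_t, theta_t):
    """Transformation pair (X, Y) acting on the Wigner mean and covariance."""
    X = chi * S(r, theta) @ R(phi)
    noise = np.diag([1.0 + a_t, 1.0 - a_t])
    Y = abs(1.0 - chi)**2 * np.eye(2) / 2.0 \
        + 0.5 * n_t * R(theta_t).T @ noise @ R(theta_t)
    return X, Y
```

For the identity channel ($\phi=r=0$, $\chi=1$, $n_\textsc{t}=0$) this gives $\dyadic{X}=\dyadic{1}$ and $\dyadic{Y}=\dyadic{0}$, as it should.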
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Fig5}
\caption{\label{fig:perf_TP}Performances of all five input-state strategies in reconstructing the 9 independent parameters specified in Eq.~\eqref{eq:param_TP} for various groups of unknown Gaussian CPTP processes with heterodyne detection. The MSE is properly scaled for comparison convenience with Fig.~\ref{fig:perf_nonTP}. We fixed $K=400$, $J=6$ and $L=1$ to simulate the conditions of minimal and low-energy input coherent states that are ideal for feasible tomography experiments. An average over all processes \emph{within} each group (with additional averaging over 10 random sets of input states for RML) is carried out to construct the respective $1/3$-$\sigma$ error region for the group, where the fractional $\sigma$ value is so chosen for proper illustration in the vertical logarithmic scale.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Fig6}
\caption{\label{fig:perf_nonTP}Performances of all three non-TP input-state strategies in reconstructing all 14 parameters [see Eq.~\eqref{eq:param_nonTP}] for various groups of unknown Gaussian CPTP processes with heterodyne detection. The MSE is properly scaled for comparison convenience with Fig.~\ref{fig:perf_TP} of the same values chosen for $J$, $K$ and $L$. All error regions plotted here represent 1/3-$\sigma$ standard deviation.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{Fig7}
\caption{\label{fig:small_large}Performances of all input-state strategies for Gp.~3 random Gaussian CPTP processes with heterodyne detection. (a,c)~Small process displacements refer to those in the range $-0.5\leq x_0,p_0\leq 0.5$, and (b,d)~large displacements refer to those in $-4\leq x_0,p_0\leq 4$. All other figure specifications otherwise conform to those in Figs.~\ref{fig:perf_TP} and \ref{fig:perf_nonTP}.}
\end{figure}
The main role of the decomposition in \eqref{eq:decomposition} is to ensure that the randomly generated processes used in our numerical experiments are those that can be found in an experimental setting. The physical parameter ranges may be fixed in the following way. The phase $\phi$ induced by the rotation operation $\dyadic{R}(\phi)$ can be completely arbitrary, $\phi\in[0,2\pi)$, and the squeezing phase is taken from $\theta\in[0,\pi/2]$. The squeezing strength $r$ may be reasonably fixed to the range $r\in[0,1/3]$, corresponding to squeezing between zero and approximately 6~dB, a level achievable in current experiments \cite{PhysRevA.90.060302,APLPhotonics.5.036104}. Displacements in optical experiments are a consequence of interaction with an external field, and may also be arbitrary. They also stand out from all the other parameters because they do not transform the covariance matrix of the input state and can be well estimated by vacuum probe states. For displacements to be comparable in strength with the other operations carried out by the Gaussian process, we shall consider the ranges $x_0,p_0\in[-2,2]$, corresponding to displacement energies slightly higher than those of squeezing. The gain of the channel $\chi$ represents both loss and amplification. Loss, caused by stray reflections, detector inefficiencies and mode mismatch, is an ever-present phenomenon in quantum optical experiments, but can often be curtailed, leading to transmission rates above 0.9 \cite{APLPhotonics.4.060902}. Amplification can be a result of nonlinear processes, but more often it arises as a consequence of feed-forward with non-unit gain \cite{PhysRevLett.96.163602, IEEEquantumelectronics.9.1519}. Like displacement, it can in principle be arbitrary, but since strong amplification necessitates noise levels that destroy non-classical features of quantum states \cite{PhysRevA.90.010301}, it is usually kept low. As a conservative choice, we pick $\chi\in(0.1,1.5)$ to run all numerical experiments.
The last three terms collectively characterize the added noise, which has a purely detrimental effect---adding one unit of vacuum noise to both quadratures is generally sufficient to extinguish any quantum properties of the state. The asymmetry coefficient $a_\textsc{t} \in [-1,1]$ and phase $\theta_\textsc{t} \in [0,\pi/2]$ of the added noise cover the practical ranges. The added-noise coupling $n_\textsc{t}\in[0,1]$ was chosen such that it describes both quantum and classical channels.
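For concreteness, the sampling ranges above can be collected into a single random draw per process; a sketch (the dictionary keys are our own naming):

```python
import numpy as np

def sample_process_params(rng):
    """Draw one random CPTP Gaussian process within the ranges of the text."""
    return {
        "phi":     rng.uniform(0.0, 2.0 * np.pi),  # arbitrary phase shift
        "r":       rng.uniform(0.0, 1.0 / 3.0),    # squeezing strength
        "theta":   rng.uniform(0.0, np.pi / 2.0),  # squeezing phase
        "x0":      rng.uniform(-2.0, 2.0),         # displacement (x)
        "p0":      rng.uniform(-2.0, 2.0),         # displacement (p)
        "chi":     rng.uniform(0.1, 1.5),          # gain (loss/amplification)
        "n_t":     rng.uniform(0.0, 1.0),          # added-noise coupling
        "a_t":     rng.uniform(-1.0, 1.0),         # noise asymmetry
        "theta_t": rng.uniform(0.0, np.pi / 2.0),  # noise phase
    }
```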
Three groups of Gaussian CPTP processes are considered here, whose physical parameters are tabulated in Tab.~\ref{tab:gps}. Monte Carlo simulations with these groups of processes are performed using heterodyne measurements, and ML reconstructions are carried out in the original matrix parametrization ($\dyadic{A},\rvec{B}$) for convenience, the specific structure of which depends on whether the TP constraint is imposed or not. Three operational input-state strategies, namely the random strategy with the TP constraint [RML~(TP)], the random strategy without it [RML~(non-TP)], and the geometrical strategy [GML~(non-TP)], are evaluated by averaging the MSEs for both the 9 independent CPTP parameters and all 14 Gaussian parameters (normalized with the respective number of parameters) over all processes in each group. For benchmarking, the non-operational best strategies that respectively minimize the asymptotic MSEs in \eqref{eq:asymp_cohIN_hetOUT} [BML~(non-TP)] and \eqref{eq:asymp_cohIN_hetOUT_TP} [BML~(TP)] are also plotted. Figures~\ref{fig:perf_TP} and \ref{fig:perf_nonTP} show the performances of all five strategies. For all tested Gaussian processes within the defined physical ranges~[Figs.~\ref{fig:perf_TP}(a,b,c) and Figs.~\ref{fig:perf_nonTP}(a,b,c)], GML is the optimal choice for efficient process-parameter reconstruction with low-energy coherent states ($L=1$).
\begin{table}[t]
\begin{tabular}{rlcccccccccl}
&no.& {$\phi$} & $r$ & $\theta$ & $x_0$ & $p_0$ & $\chi$ & $n_\textsc{t}$ & $a_\textsc{t}$ & $\theta_\textsc{t}$&\\
\cline{2-11}\\[-5ex]
\cline{2-11}\\[-3ex]
\begin{rotate}{90}\!\!\!\!{\bf Gp.~1}\end{rotate}\quad\,&$\bm{1}$& $0$ & $0$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$&(idle)\\[1ex]
\cline{2-11}\\[-3ex]
& $\bm{1}$ & $*$ & $0$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$&(phase shifter)\\
&$\bm{2}$ & $0$ & $*$ & $*$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$&(squeezer)\\
&$\bm{3}$ & $0$ & $0$ & $0$ & $*$ & $*$ & $1$ & $0$ & $0$ & $0$&(displacer)\\
\begin{rotate}{90}\!\!\!\!{\bf Gp.~2}\end{rotate}\quad\,&$\bm{4}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $*$ & $0$ & $0$ & $0$&(gain)\\
&$\bm{5}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1$ & $*$ & $0$ & $0$&(symmetric noise)\\
&$\bm{6}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $1$ & $*$ & $*$ & $*$&(asymmetric noise)\\[1ex]
\cline{2-11}\\[-3ex]
\begin{rotate}{90}\!\!\!\!{\bf Gp.~3}\end{rotate}\quad\,&$\bm{1}$-$\bm{10}$& $*$ & $*$ & $*$ & $*$ & $*$ & $*$ & $*$ & $*$ & $*$&(arbitrary)\\[1ex]
\cline{2-11}\\[-5ex]
\cline{2-11}
\end{tabular}
\caption{\label{tab:gps}Physical parameters characterizing the three groups of CPTP processes invoked in the simulations. The wildcard~$*$ denotes a randomly generated value within the corresponding interval for the parameter, as stated in Sec.~\ref{sec:results}. Gp.~1 consists of the single idle process ($\Phi[\rho]=\rho$), and Gp.~2 contains 6 random processes that each represents one basic type of Gaussian operation or noise character (examples 5 and 6 respectively coincide with additive symmetric and asymmetric noise arising from a thermal bath). Finally, Gp.~3 consists of 10 completely arbitrary processes.}
\end{table}
Interesting dynamics reveal themselves for the completely random CPTP processes in Gp.~3, where we find that, again, GML beats all strategies when all parameters are arbitrarily chosen within the displacement ranges $-2\leq x_0,p_0\leq2$. For the tested random processes, Figs.~\ref{fig:perf_TP}(c) and \ref{fig:perf_nonTP}(c) highlight rather comparable performances between GML and the optimal BML~(TP). However, as shown in Fig.~\ref{fig:small_large}, if we enlarge the displacement ranges, we find that the best TP input-state strategy can outperform the rest. On the other hand, when these ranges are reduced, GML reconstructs process parameters with much better accuracies than for the default ranges of Figs.~\ref{fig:perf_TP}(c) and \ref{fig:perf_nonTP}(c). This leads us to conjecture that the additional reconstruction bias introduced by the TP constraint, on top of that from the CP constraint, helps single out estimators near the true CPTP Gaussian process when the latter performs large displacing operations. By contrast, processes whose second-moment manipulations dominate their first-moment displacements are still better characterized with non-TP input-state strategies, which appear to be more robust against noise; in this case, GML is the optimal choice.
\section{Conclusion}
There have been many studies related to achieving the quantum limits of parameter estimation. We have taken a different route and investigated an experimentally feasible and tomographically efficient way to characterize Gaussian quantum processes. Using heterodyne measurements and input coherent states, we introduced a simple strategy of constructing geometrical sets of input coherent states that effectively optimizes the mean squared error of the process parameters. These geometrical input states are demonstrated to outperform the best nonadaptive input-state strategy in terms of the mean squared error for typical CPTP processes that do not carry out large displacement operations. This permits us to utilize coherent states of low energies as sufficient and convenient resources to achieve very low mean squared errors in the reconstructed process parameters.
We also observe that if the unknown Gaussian process has large displacing features on input states, input-state strategies that impose the trace-preserving constraint give, on average, more accurate process estimators than the geometrical strategy where this constraint is relaxed. In the course of acquiring these results, we also obtained asymptotic analytical expressions for the process-parameter mean squared error that, to the authors' knowledge, were not previously discussed in the quantum process tomography literature.
The next natural step would be to investigate the extent of enhancement when the input coherent states are squeezed. Preliminary studies show that the asymptotic mean squared-error expressions with heterodyning become exceedingly complicated, such that there is currently no straightforward recipe for constructing the geometrical sets of input states discussed here. Whether there exist output-state measurements, other than heterodyning, that are more compatible with squeezed input states in probing Gaussian processes is an interesting open question.
\acknowledgments{Y.S.T., S.S. and H.J. acknowledge support by the National Research Foundation of Korea (NRF) (Grant Nos. NRF-2019R1A6A1A10073437, NRF-2018K2A9A1A06069933, NRF-2019M3E4A1080074 and NRF-2020R1A2C1008609); K.P. and P.M. were supported by project 19-19722J of the Grant Agency of Czech Republic (GA\v{C}R).}
\section*{Introduction}
Under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT)~\cite{NPT1970} and other treaties against nuclear proliferation, the International Atomic Energy Agency (IAEA) is entrusted with verifying all nuclear material under the control of States that have safeguards agreements in force with the Agency.
International safeguards include all the technical measures put in place by the IAEA to verify that each State Party complies with the aforementioned agreements.
Most safeguards methods are based on the assay of nuclear materials through measurements of ionizing radiation emitted by the sample. Safeguards systems are undergoing an important modernization effort~\cite{safechal}.
One example of such efforts is the development of passive gamma emission tomography (PGET) systems for the inspection of spent fuel in water pools~\cite{honkamaa2014prototype,white2018application,PGET}. Traditionally, an inspection of spent fuel in pools was performed using the FORK detector, which encompasses, in its standard version, one ionization chamber to measure gamma rays and two neutron-sensitive fission chambers. The FORK detector, however, exhibits low sensitivity to the diversion of fuel pins, being able to detect only the diversion of 50\% or more of the fuel pins in the assembly under inspection~\cite{forkDet}.
A PGET system was recently developed to increase the sensitivity to a variety of fuel diversion scenarios~\cite{PGET}. The software to analyze PGET images can be improved~\cite{iaea2019challenge} to reduce user intervention and to perform faster and more reliable monitoring of spent fuel, thereby implementing the ultimate safeguards objective, \textit{i.e.}, the prompt detection of nuclear material diverted from peaceful uses.
The PGET of spent fuel poses some unique challenges, compared to industrial or medical tomographic imaging, which are due to the high activity of the sources (a single-pin activity is of the order of $10^{13}$~Bq) and the high self-attenuation of the fuel pins. While the former can be mitigated using collimated detectors, the latter needs to be addressed by using specific imaging algorithms. Previous work has shown that the reconstruction quality could be significantly improved by applying an attenuation correction~\cite{1930-8337_2020_2_317}. In this work, we further corrected for the gamma-ray down-scattering in the energy window of interest.
We have implemented and integrated computational methods based on proximal gradient, physics-informed Monte Carlo sampling, and machine learning to (1) improve the PGET image quality, (2) automatically identify missing pins, and (3) quantify the pin activities. As we will show, the automated identification of missing pins relies heavily on the image quality, and therefore the three tasks are inherently intertwined.
\section*{Results}
\subsection*{Simulated Sinograms}
We simulated the measurement of six fuel pin configurations in a mock-up water-water energetic reactor (VVER) fuel assembly using MCNP 6.2~\cite{goorley2013initial}, as shown in Fig.~\ref{fig: pin_distri}. In Cases 2 and 3, we mimicked the scenario where fuel pins were missing. In Cases 4 and 5, we used depleted uranium to replace cobalt because it resulted in the highest attenuation among different potential replacement materials~\cite{di2019neutron}, which would allow us to examine the algorithm's capability of distinguishing replaced pins. In Case 6, we explored whether the algorithm is able to render an accurate image of a space-dependent activity distribution. Details of the simulated PGET model are included in the ``Methods-Simulation Methods'' section.
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig1-crop.pdf}
\caption{Actual pin distributions in the simulated fuel assemblies (ground truth). Pin activity is proportional to the brightness. In Case 1, the mock-up fuel assembly hosted 331 ${}^{60}$Co pins of 8.879~g/cm${}^3$ density, which emitted 1.17 and 1.33~MeV gamma rays. In Cases 2 and 3, 10\% of the ${}^{60}$Co pins were removed, either at the center or uniformly. In Cases 4 and 5, 10\% of the ${}^{60}$Co pins were replaced by depleted uranium pins of the same size but higher density (10.4~g/cm${}^3$). In Case 6, we decreased the activities of the middle pins by 80\% and of the bottom pins by 50\%.}
\label{fig: pin_distri}
\end{figure}
The detection system we simulated is the so-called PGET prototype device as described by~\cite{white2018application}. The detection unit consisted of two collimated CdTe detector arrays on opposite sides of the fuel assembly, each encompassing 91 detectors. The detector arrays rotated and scanned the fuel bundle in steps of 1$\degree$, which generated a $182\times360$ sinogram, as shown in Fig.~\ref{fig:sinos}. We simulated $5\times 10^9$ NPS (number of particle histories) in each case and detected the photons in the 700-1500~keV energy window.
A maximum of approximately 15,000 counts per pixel was achieved, comparable to the counts in an actual PGET measurement.
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.5\textwidth]{SciRep_pics/Fig2.pdf}
\caption{Simulated sinogram of the fully-loaded bundle. The color scale represents the detected photon counts. The periodicity of the sinogram resulted from the hexagonal symmetry of the simulated fuel assembly.}
\label{fig:sinos}
\end{figure}
\subsection*{Inverse Approach}
We have developed a systematic approach that allows us to reconstruct high-quality images from individual sinograms, identify the pin locations, and quantify the pin activity levels. The image of fuel pins was reconstructed by solving a linear inverse problem. The region of interest was divided into $182 \times 182$ pixels. We calculated the detector array response to a source of unit activity inside each pixel, forming the system response matrix. For an unknown fuel assembly, the measured sinogram is a linear combination of the calculated responses, with the coefficients being the source strength of each pixel. In this way, the image reconstruction problem was converted into a linear inverse problem, with both the simulated sinogram and the calculated response matrix as inputs.
First, we performed a simple image reconstruction to obtain an initial estimate of the pin locations and pin radius. We calculated the system response matrix using a deterministic ray-tracing method, assuming that the scattering of gamma rays can be neglected~\cite{fanginmm2020}. The reconstructed image of Case~1 is shown in Fig.~\ref{fig: init}a. Inner pixels are brighter than the outer ones on the image, indicating an overestimation of the activity of the inner pins. This is because the contribution of scattered photons to the system response cannot be neglected for inner pixels, due to the heavy attenuation of unscattered photons. Nevertheless, using this image, we can determine the centers of all possible pins as the centroids of the bright regions on the reconstructed image. In this step, we allow some slack in pin identification, since a more accurate reconstruction is performed later. The identified pins are shown in Fig.~\ref{fig: init}b. Horizontal and vertical line profiles passing through the center of each pin were extracted from the image, and we fitted a Gaussian to the average of all profiles, shown in Fig.~\ref{fig:init_gaus_fit}. The FWHM (full width at half maximum) of the fitted Gaussian was 0.769~cm, which was taken to be the pin diameter.
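The pin-size estimate from the averaged line profile can be sketched in a few lines of numpy. This is a minimal illustration, not the code used in this work: the paper fits a Gaussian by least squares, whereas the sketch below uses the weighted second moment of the profile as a shortcut for $\sigma$ (for a Gaussian, $\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma$); the 0.09~cm pixel spacing is a hypothetical value for the demo.

```python
import numpy as np

def estimate_fwhm(profile, pixel_size):
    """Moment-based FWHM estimate of a roughly Gaussian line profile.

    For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma (about 2.355*sigma), so an
    estimate of sigma from the weighted second moment yields the FWHM.
    """
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                        # crude background removal
    x = np.arange(p.size) * pixel_size
    mean = np.sum(x * p) / np.sum(p)
    var = np.sum((x - mean) ** 2 * p) / np.sum(p)
    return 2.0 * np.sqrt(2.0 * np.log(2.0) * var)

# Synthetic averaged profile: 11 pixels, hypothetical 0.09 cm spacing
x = np.arange(11) * 0.09
sigma_true = 0.15                          # cm
profile = np.exp(-(x - x.mean()) ** 2 / (2 * sigma_true ** 2))
fwhm = estimate_fwhm(profile, pixel_size=0.09)
```

The moment estimate is slightly biased low by the finite profile window, which is acceptable for the rough initial pin-size guess used here.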
\begin{figure}[!htb]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=0.4\linewidth]{SciRep_pics/Fig3-crop.pdf}
\caption{Image reconstruction and pin identification using the simple response matrix in Case 1.}
\label{fig: init}
\end{figure}
\begin{figure}[!htb]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.23\linewidth]{SciRep_pics/Fig4.pdf}
\caption{Gaussian fit of the average line profile.{ FWHM of the fitted Gaussian is shown as the arrow. The 11 pixels around the pin center shown as the red points were used in the fitting. The coefficient of determination $R^2$ of the fit was 0.9991.}}
\label{fig:init_gaus_fit}
\end{figure}
With the pin locations and pin radius known, we could create a material map of the fuel assembly. We then implemented an accelerated Monte Carlo algorithm to calculate the system response matrix, accounting for both the absorption and scattering interactions of photons in the fuel assembly. The images reconstructed using the new response matrix are shown in Fig.~\ref{fig: inv_recon}. The images were then fed to a convolutional neural network to perform pin identification. The results are shown in Fig.~\ref{fig: inv_pid}. For Cases 1 to 5, we achieved 100\% classification accuracy, due to the high quality of the reconstructed images. We created a histogram of pin activities, as shown in Fig.~\ref{fig: inv_act}. In Cases 1-5, the standard deviation of the activity estimation ranged from 3\% to 6\%, and the mean of the pin activity distribution deviated from the ground truth by less than 4\%. In Case 6, we successfully identified all pins, except those with the lowest activity level (Group 1), shown in Fig.~\ref{fig: inv_pid}f and Fig.~\ref{fig: inv_act}f. The activity of pins in Groups~2 and~3 was accurately reproduced, with a relative error below 1\%. The inverse approach is detailed in the ``Methods-Image Reconstruction Methods'' section.
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig5-crop.pdf}
\caption{Image reconstructed using the inverse approach.{ Good contrast between the pin-present region and pin-absent region was achieved. No severe artifacts were observed on the reconstructed images.}}
\label{fig: inv_recon}
\end{figure}
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig6-crop.pdf}
\caption{Pin identifications based on the inverse approach. Identified pins are shown as red circles.{ For Cases 1 to 5, we achieved 100\% classification accuracy; for Case 6, we achieved 100\% classification accuracy for the pins of medium and high activity level.}}
\label{fig: inv_pid}
\end{figure}
\section*{Discussion}
The image reconstruction problem can be formulated into different linear inverse problems, depending on the selected model of the observation noise. As detailed in ``Inverse Problem Approach'', we formulated two inverse problems with the Gaussian noise model and the Poisson noise model, which can be solved using FISTA (fast iterative shrinkage-thresholding algorithm)~\cite{FISTA} and the PIDAL (Poisson image deconvolution by augmented Lagrangian) algorithm~\cite{Bioucas2010}, respectively. We applied both FISTA and PIDAL to perform the reconstruction in Case~1 and compared them in Fig.~\ref{fig:fistavspista}. We assessed the image quality quantitatively using the mean squared error (MSE) and structural similarity (SSIM)~\cite{wang2004image}. The MSE is the mean squared difference between the reconstructed image and the ground truth, and the SSIM measures the similarity between the two. Lower MSE and higher SSIM mean better reconstruction. As shown in Table~\ref{table:fistavspista}, the image reconstructed with FISTA resulted in a lower MSE and higher SSIM than PIDAL, and PIDAL took approximately 20\% longer to run than FISTA. Therefore, we used FISTA as the main image reconstruction algorithm.
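Both metrics are straightforward to compute with numpy. Note that the standard SSIM of Wang et al. is averaged over local sliding windows; the sketch below is a simplified single-window (global) variant for illustration only, and the function names are mine, not those of an established library.

```python
import numpy as np

def mse(img, ref):
    """Mean squared difference between a reconstruction and the ground truth."""
    return float(np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2))

def global_ssim(img, ref, dynamic_range=1.0):
    """Single-window SSIM (no sliding window), simplified from Wang et al. 2004."""
    x, y = np.asarray(img, float), np.asarray(ref, float)
    c1 = (0.01 * dynamic_range) ** 2       # stabilizing constants
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
truth = rng.random((32, 32))
noisy = truth + 0.1 * rng.standard_normal((32, 32))
```

A perfect reconstruction gives MSE $= 0$ and SSIM $= 1$; added noise lowers the SSIM and raises the MSE, matching the "lower MSE, higher SSIM is better" convention used in Table~\ref{table:fistavspista}.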
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig7-crop.pdf}
\caption{Activity distribution based on the inverse approach.{ The pin activity was estimated from each image by summing up the pixel values inside the present pins.} The ground truth is shown as the red line.}
\label{fig: inv_act}
\end{figure}
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.5\linewidth]{SciRep_pics/Fig8-crop.pdf}
\caption{Comparison of FISTA and PIDAL reconstruction{ to the ground truth. The two methods led to visually similar results, though the PIDAL reconstruction resulted in a sparser image, compared to FISTA}.}
\label{fig:fistavspista}
\end{figure}
\begin{table}[!htbp]
\captionsetup{font=footnotesize}
\centering
\caption{Comparison of MSE, SSIM, and computation time using FISTA and PIDAL.{ The computation time was measured on an Intel Core i9-7920X CPU.}}\label{table:fistavspista}
\resizebox{.35\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Method & {MSE} & {SSIM} & Computation time (s) \\ \hline
PIDAL & 42.05 & 0.52 & 231.68 \\ \hline
FISTA & 41.45 & 0.59 & 183.57 \\ \hline
\end{tabular}}
\end{table}
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig9-crop.pdf}
\caption{Image reconstructed using FBP. Severe image blurring occurred at the center of Cases 1, 3, and 5.}
\label{fig: fbp_recon}
\end{figure}
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig10-crop.pdf}
\caption{Pin identifications based on FBP. Identified pins are shown as red circles. Mis-classification of fuel pins appeared mostly at the center of Cases 1, 3, and 5, where the hexagonal symmetry of the pin distribution was broken.}
\label{fig: fbp_pid}
\end{figure}
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=\linewidth]{SciRep_pics/Fig11-crop.pdf}
\caption{Activity distribution based on FBP. The pin activity was estimated by multiplying a normalization constant by the sum of all pixels inside the pin. The ground truth is shown as the red line.}
\label{fig: fbp_act}
\end{figure}
For comparison, we have also implemented the traditional FBP method to reconstruct the image, which is detailed in the ``Methods-Image Reconstruction Methods'' section. We then input these images to the neural network for pin identification, and estimated the pin activity levels. We compared the performance of the inverse (FISTA) and FBP approaches in terms of image quality, accuracy of pin identification, and activity quantification. Fig.~\ref{fig: fbp_recon} shows the images reconstructed using FBP. Compared to Fig.~\ref{fig: fbp_recon}, the images in Fig.~\ref{fig: inv_recon} demonstrated higher contrast and fewer reconstruction artifacts. The blurring at the center was significantly reduced using the inverse approach. We calculated the MSE and SSIM for both sets of images, shown in Table~\ref{table:msessim}. We achieved significantly lower MSE and higher SSIM by using the inverse approach, which resulted in lower overall mis-classification rates in pin identification. The pin identification results based on the FBP reconstruction are shown in Fig.~\ref{fig: fbp_pid}. In Fig.~\ref{fig: fbp_pid}a, Fig.~\ref{fig: fbp_pid}c, and Fig.~\ref{fig: fbp_pid}e, the blurring led to inaccurate pin localization around the center of the FBP-reconstructed images, and the hexagonal structure was destroyed. In contrast, the hexagonal structure was accurately reproduced using the inverse problem approach, as shown in Fig.~\ref{fig: inv_pid}a-e. Hence, given the same sinogram data, the inverse approach is expected to better distinguish pins missing from the fuel assembly and to result in lower false alarm rates, compared to FBP. In Case~6, we achieved the best MSE and SSIM, due to the smallest average pixel intensity among all cases. Both FBP and the inverse approach led to relatively high mis-classification rates due to the low activity of the pins in the middle of the assembly.
However, we do not expect such a large variation of pin activities in an actual fuel assembly~\cite{jardine2005radiochemical}.
\begin{table}[!htbp]
\captionsetup{font=footnotesize}
\centering
\caption{Comparison of image quality and mis-classification rates for the six simulated cases reconstructed using traditional FBP and the inverse (INV) approach. FP: false positive{ (missing pins mis-classified as present)} rate; FN: false negative{ (present pins mis-classified as missing)} rate.}\label{table:msessim}
\resizebox{.5\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Case} & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c|}{SSIM} & \multicolumn{4}{c|}{Mis-classification rate} \\ \cline{2-9}
& \multirow{2}{*}{FBP} & \multirow{2}{*}{INV} & \multirow{2}{*}{FBP} & \multirow{2}{*}{INV} & \multicolumn{2}{c|}{FBP} & \multicolumn{2}{c|}{INV} \\ \cline{6-9}
& & & & & FP & FN & FP & FN \\ \hline
1 & 96.62 & 41.45 & 0.16 & 0.59 & 0 & 1.51\% & 0 & 0 \\ \hline
2 & 97.01 & 41.76 & 0.16 & 0.59 & 0.68\% & 0 & 0 & 0 \\ \hline
3 & 95.32 & 40.69 & 0.16 & 0.56 & 0.34\% & 0 & 0 & 0 \\ \hline
4 & 96.63 & 38.43 & 0.16 & 0.64 & 0 & 0 & 0 & 0 \\ \hline
5 & 88.70 & 37.38 & 0.17 & 0.60 & 0 & 3.40\% & 0 & 0 \\ \hline
6 & 83.35 & 36.33 & 0.21 & 0.63 & 0 & 35.03\% & 0 & 27.55\% \\ \hline
\end{tabular}}
\end{table}
We estimated the activity of each pin by summing the pixels on the reconstructed images and created a histogram of pin activities for each case, shown in Fig.~\ref{fig: inv_act} and Fig.~\ref{fig: fbp_act}. It should be noted that for the FBP reconstruction, there is no definite relationship between the pixel sum and the absolute activity. An appropriate normalization constant must be applied to convert the pixel sum into activity, which is not always possible in an actual inspection. In this case, we calculated the ratio between the mean of the pixel sum distribution and the true activity for Cases 2-5. We then used the average of these ratios as the normalization constant and applied it to all cases to convert the pixel sum to absolute activity. In contrast, normalization is not needed for the inverse reconstruction because the response matrix has already been normalized per unit source particle. Compared to Fig.~\ref{fig: fbp_act}, the pin activity distributions in Fig.~\ref{fig: inv_act} were narrower. We calculated the relative error with respect to the ground truth and the standard deviation of the pin activity, as shown in Table~\ref{table:act_esti}. We obtained smaller standard deviations by using the inverse approach in all cases. The relative error was negligible compared to the ground truth, except for the low-activity pins in Group 1 of Case 6.
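The per-pin activity estimate, summing reconstructed pixel values inside each pin's disk, can be sketched as follows. This is a toy illustration under assumed inputs (hypothetical pin centers and radius in pixel units), not the analysis code of this work:

```python
import numpy as np

def pin_activities(image, centers, radius):
    """Sum reconstructed pixel values inside a disk of the given radius
    around each pin center; one sum per pin."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    acts = []
    for cy, cx in centers:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        acts.append(image[mask].sum())
    return np.array(acts)

# Toy reconstruction with two uniform pins of activity ratio 2:1
image = np.zeros((50, 50))
yy, xx = np.mgrid[0:50, 0:50]
image[(yy - 10) ** 2 + (xx - 10) ** 2 <= 9] = 2.0   # pin at (10, 10)
image[(yy - 30) ** 2 + (xx - 30) ** 2 <= 9] = 1.0   # pin at (30, 30)
acts = pin_activities(image, centers=[(10, 10), (30, 30)], radius=3)
```

For the inverse reconstruction, these sums are already in absolute units (the response matrix is normalized per source particle); for FBP, the normalization constant described above must still be applied.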
\begin{table}[!htbp]
\captionsetup{font=footnotesize}
\centering
\caption{Comparison of activity estimations. The mean NPS per pin is the mean of the activity distribution; the relative error is the relative difference between the mean activity and the ground truth; the standard deviation refers to the standard deviation of the pin activity distribution. For FBP in Group 1 of Case 6, no data are shown because no present pins were identified.}\label{table:act_esti}
\resizebox{.6\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Case}} & \multicolumn{3}{c|}{{Mean} NPS per pin ($\times 10^7$)} & \multicolumn{2}{c|}{Relative Error (\%)} & \multicolumn{2}{c|}{Standard deviation (\%)} \\ \cline{3-9}
\multicolumn{2}{|c|}{} & Truth & FBP & INV & FBP & INV & FBP & INV \\ \hline
\multicolumn{2}{|c|}{1} & 1.51 & 1.57 & 1.57 & 3.91 & 3.74 & 6.16 & 4.36 \\ \hline
\multicolumn{2}{|c|}{2} & 1.70 & 1.72 & 1.74 & 1.39 & 2.21 & 7.57 & 3.44 \\ \hline
\multicolumn{2}{|c|}{3} & 1.70 & 1.71 & 1.73 & -0.68 & 2.02 & 8.54 & 5.09 \\ \hline
\multicolumn{2}{|c|}{4} & 1.70 & 1.72 & 1.73 & 1.12 & 1.46 & 7.71 & 3.33 \\ \hline
\multicolumn{2}{|c|}{5} & 1.70 & 1.65 & 1.66 & -2.91 & -2.25 & 15.75 & 6.85 \\ \hline
\multirow{3}{*}{6} & Group 1 & 0.29 & {N.A.} & 0.39 & {N.A.} & 35.73 &{N.A.} & 14.22 \\ \cline{2-9}
& Group 2 & 0.72 & 0.75 & 0.72 & 4.69 & 0.54 & 8.25 & 4.64 \\ \cline{2-9}
& Group 3 & 1.44 & 1.42 & 1.43 & -1.25 & -0.59 & 10.68 & 3.70 \\ \hline
\end{tabular}}
\end{table}
Compared to a $\mathrm{UO}_2$ fuel rod, the cobalt rod simulated in this work resulted in less attenuation because of its lower density and atomic number. Nevertheless, this work describes a general approach for correcting the gamma-ray attenuation and down-scattering in the energy window of interest, and it can be easily adapted to $\mathrm{UO}_2$ fuel assemblies by updating the atomic composition and gamma-ray source term in the simulation.
In this work, we assumed that the detector performance is the same in the simulation of the sinogram and of the response matrix, which is not true for the real PGET device. The correction for varying detector performances across the detectors, either at the hardware or software level, is crucial to obtain an accurate estimation of the pin activity. When identifying drifting detectors in post-processing, e.g., detectors whose response is anomalously different from that of neighbouring detectors, a simple approach is to neglect their response. In a preliminary analysis on simulated data, we replaced the response of an increasing number of detectors in the sinogram with null arrays. We found that if the number of ``neglected'' detectors is below ten in the PGET setup, the reconstructed image is negligibly affected by not including their response. Future work will be needed to study the robustness of the software suite as a function of a systematic bias in the detector performance.
\section*{Conclusion}
In this work, we have implemented a complete software suite that is able to reconstruct cross-sectional images of mock-up fuel assemblies acquired by a simulated PGET system, identify missing fuel pins, and estimate fuel pin activities based on the reconstructed image. We have developed a linear forward model that accounts for the scattering of gamma rays in the assembly to accurately characterize the response matrix of the PGET system. The image reconstruction was formulated as a linear inverse problem by modeling the observation noise as Gaussian, which was solved using FISTA.
The reconstructed image was fed to a convolutional neural network to automatically identify the present pins and determine their centroids. Compared to the FBP approach, the inverse problem approach resulted in over 50\% lower MSE and 200\% higher SSIM, and consequently lower mis-classification rates in pin identification in all cases. Based on the pin identification results, we estimated the pin activity by summing up the pixel values around the centroid inside the pin radius on the image. Compared to FBP, the inverse approach resulted in smaller standard deviations of the pin activity in all cases, with negligible bias with respect to the ground truth. The proposed inverse approach to reconstruct a fuel pin cross section, identify fuel pins, and calculate their activity took approximately 8 minutes to run on an Intel Core i9-7920X CPU without parallelization. We are currently improving the algorithm to allow automatic classification of different activity groups in a real fuel assembly.
\section*{Methods}
\subsection*{Monte Carlo Simulation of the PGET based on MCNP }
Fig.~\ref{fig:PGET} shows the cross-sectional view of the MCNP model of Case 1. In Cases 4 and 5, depleted uranium pins were used, where Co was replaced by 0.20\%-enriched UO${}_{2}$. We used the F4 tally as the detector response model in the MCNP simulation and the response matrix calculation. The F4 tally estimates the average photon flux in the detector cell by summing the track lengths of all particles in the 700-1500~keV energy window~\cite{mcnp2003general}. The absolute activity measurement can be obtained using the proposed method as long as the simulated quantity in the response matrix is consistent with the simulated response to an unknown inspected fuel bundle. When applying the proposed technique to experimental data, one would want to validate the simulated detector response against the measured one, to properly incorporate the specific detector's properties, such as its energy resolution and time response, in the simulated model.
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.5\linewidth]{SciRep_pics/Fig12.pdf}
\caption{The cross-sectional view of the MCNP model in Case 1. The hexagonal fuel assembly contains 331 ${}^{60}$Co rods and is submerged in a water cylinder of 33 cm diameter. Each fuel pin is simulated as a ${}^{60}$Co cylinder of 7~mm diameter, coated with 1~mm thick aluminum cladding. Gamma rays emitted by the ${}^{60}$Co rods are detected by two collimated CdTe detector arrays located on the two sides of the fuel assembly, each encompassing 91 detectors. The tungsten collimator between the detector array and the fuel assembly is 10~cm long and has an aperture of 1.5~mm. Each CdTe detector is closely attached to the collimator and is of size $0.175\times0.35\times0.35{\text{ cm}}^{3}$. The distance between two neighbouring detectors is 4~mm. The bottom array is shifted to the left by 2~mm, with respect to the top one~\cite{honkamaa2014prototype,white2018application}. The detector arrays rotate clockwise and scan the fuel bundle in steps of 1$\degree$, which generates a $182\times360$ sinogram.}
\label{fig:PGET}
\end{figure}
\subsection*{Image Reconstruction Methods}
In this section, we describe in detail the two image reconstruction methods implemented in this work: the FBP and linear inverse problem method. We applied both methods to reconstruct images from the sinograms and compared their performance.
\subsubsection*{Filtered Back-Projection}\label{sec:fbp}
Fig.~\ref{fig:fbp_geometry} shows the tomography measurement of the fuel assembly at angle $\theta$. The FBP method relies on two approximations: first, we neglect the attenuation and scattering of gamma rays in the system; second, for pins at different locations, we assume that the geometric efficiencies are the same. Under these approximations, the sinogram $\boldsymbol{\phi}(\rho,\theta)$ can be simplified as the integral of the source distribution $\mathbf{s}(x,y)$ along the red line passing through the detector center, \textit{i.e.},
\begin{equation}
\begin{aligned}
\boldsymbol{\phi}(\rho,\theta) = \iint_{\mathbb{R}^2} \mathbf{s}(x,y) \delta(x \cos\theta + y \sin\theta-\rho) dxdy.
\end{aligned}\label{eq:fbp}
\end{equation}
Eq.~\eqref{eq:fbp} is the standard forward model in X-ray tomography imaging. The classical inverse operation from the sinogram to the source is the filtered back-projection, which is also a standard imaging algorithm in X-ray imaging~\cite{prince2006medical} and will not be detailed here.
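A miniature parallel-beam FBP can be sketched with numpy alone: discretize Eq.~\eqref{eq:fbp} by nearest-neighbour binning for the forward projection, then apply a simple ramp filter in the frequency domain and backproject. This is an illustrative toy under those simplifications, not the FBP implementation used in this work:

```python
import numpy as np

def forward_project(img, thetas, n_rho):
    """Discrete line-integral sinogram of Eq. (fbp): no attenuation,
    nearest-neighbour binning of rho = x cos(theta) + y sin(theta)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n] - c
    sino = np.zeros((n_rho, len(thetas)))
    for k, t in enumerate(thetas):
        rho = x * np.cos(t) + y * np.sin(t)
        idx = np.round(rho + (n_rho - 1) / 2.0).astype(int)
        np.add.at(sino[:, k], idx.ravel(), img.ravel())
    return sino

def fbp(sino, thetas, n):
    """Ramp-filter each projection in the FFT domain, then backproject."""
    n_rho = sino.shape[0]
    ramp = np.abs(np.fft.fftfreq(n_rho))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp[:, None],
                                   axis=0))
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for k, t in enumerate(thetas):
        rho = x * np.cos(t) + y * np.sin(t)
        idx = np.clip(np.round(rho + (n_rho - 1) / 2.0).astype(int),
                      0, n_rho - 1)
        recon += filtered[idx, k]
    return recon * np.pi / len(thetas)

# Point source phantom: reconstruction should peak at the source location
img = np.zeros((33, 33))
img[10, 20] = 1.0
thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
recon = fbp(forward_project(img, thetas, 49), thetas, 33)
```

The attenuation-free, equal-efficiency assumptions stated above are exactly what make this linear model (and hence FBP) only approximate for the heavily self-attenuating fuel assembly.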
\begin{figure}[!htb]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.4\linewidth]{SciRep_pics/Fig13.png}
\caption{{The tomography measurement of the fuel assembly at an angle $\theta$. The counts of detector $(\rho,\theta)$ can be approximated by the integral of the source distribution $s(x, y)$ along the red line that passes through the detector center.}}
\label{fig:fbp_geometry}
\end{figure}
\subsubsection*{Inverse Problem Approach}
The inverse reconstruction approach is based on the conversion of the image reconstruction problem into a linear inverse problem. Let the vectorized image of the fuel assembly be $\mathbf{s}$, which contains $N$ unknown pixel values, and the vectorized simulated sinogram of size $M=65,520$ ($182 \times 360$) be $\boldsymbol{\phi}$. The inverse approach relies on the assumption that $\boldsymbol{\phi}$ is a linear function of $\mathbf{s}$:
\begin{equation}
\begin{aligned}
\boldsymbol{\phi} = \mathbf{A}\mathbf{s} + \mathbf{n},
\end{aligned}\label{eq:inverse1}
\end{equation}
where $\mathbf{A}$ is the system response matrix of size $M \times N$, and $\mathbf{n}$ models random observation noise, assumed to be isotropic Gaussian distributed. Physically, the $i$-th column of the response matrix $\mathbf{A}$ is the vectorized sinogram corresponding to a pin distribution with unit activity in pixel $i$ but zero elsewhere. Accurate determination of the system response matrix is crucial to obtain a high-quality image and avoid systematic bias in pin identification and activity estimation.
The analytical derivation of the response matrix is discussed in the next section.
Given the simulated sinogram and the system response matrix $\mathbf{A}$, we can reconstruct the image by solving the following optimization problem:
\begin{equation}\label{eq:FISTA}
\hat{\mathbf{s}} = \operatorname*{arg\,min}_{{s_i\geq0, \forall i}} \quad \left[\frac{1}{2}\|\boldsymbol{\phi} -\mathbf{A}\mathbf{s}\|_2^2 + \lambda \|\mathbf{s}\|_1 \right] = \operatorname*{arg\,min}_{{s_i\geq0, \forall i}} \quad \left[\frac{1}{2}(\boldsymbol{\phi} -\mathbf{A}\mathbf{s})^T(\boldsymbol{\phi} -\mathbf{A}\mathbf{s}) + \lambda \sum_{i=1}^N|\mathbf{s}_i| \right]
\end{equation}
where the first term is the data-fidelity term assuming Gaussian noise, and the second term is a regularization term acknowledging that the fuel pin is sparsely distributed. The regularization parameter $\lambda$ is chosen based on the noise level. To solve Eq.~\eqref{eq:FISTA}, we have implemented the fast iterative shrinkage-thresholding algorithm (FISTA)~\cite{FISTA}.
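The FISTA iteration for the nonnegative $\ell_1$-regularized objective of Eq.~\eqref{eq:FISTA} can be sketched as follows (the standard Beck-Teboulle scheme, where the proximal step for $\lambda\|\mathbf{s}\|_1$ plus the nonnegativity constraint reduces to a one-sided soft threshold; function names and the toy problem sizes are mine):

```python
import numpy as np

def fista_nonneg_lasso(A, phi, lam, n_iter=1000):
    """FISTA for min_{s >= 0} 0.5*||phi - A s||_2^2 + lam*||s||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    z, t = s.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - phi)
        # prox of lam*||.||_1 + indicator(s >= 0): one-sided soft threshold
        s_new = np.maximum(z - (grad + lam) / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = s_new + (t - 1.0) / t_new * (s_new - s)   # momentum step
        s, t = s_new, t_new
    return s

# Toy recovery: a sparse nonnegative "image" from a random forward operator
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
s_true = np.zeros(40)
s_true[[3, 17, 25]] = [1.0, 2.0, 0.5]
s_hat = fista_nonneg_lasso(A, A @ s_true, lam=0.01)
```

With noiseless data and a small $\lambda$, the sparse source vector is recovered nearly exactly; in the reconstruction of this work, $\lambda$ is instead chosen according to the noise level.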
As an alternative, we can model the observation noise by Poisson noise and the data-fidelity term is changed accordingly~\cite{Bioucas2010,lefkimmiatis2013poisson} as follows
\begin{equation}\label{eq:PISTA}
\hat{\mathbf{s}} = \operatorname*{arg\,min}_{{s_i\geq0, \forall i}} \quad \left[\sum_{j=1}^{M}([\mathbf{A}\mathbf{s}]_j - \boldsymbol{\phi}_j \log[\mathbf{A}\mathbf{s}]_j) + \lambda \|\mathbf{s}\|_1 \right].
\end{equation}
To solve Eq.~\eqref{eq:PISTA}, we used the PIDAL algorithmic structure described in \cite{Bioucas2010}.
\subsubsection*{Computation of the Response Matrix}
The calculation of the system response matrix $\mathbf{A}$ involves the calculation of detector response to each pixel, with each response being one column in the matrix. For inner pixels, the contribution from scattered photons is non-negligible due to the high attenuation. To account for this, we have implemented an algorithm that combines stochastic photon transport with deterministic collimator-detector modeling to accurately calculate the system response matrix.
For real fuel assemblies, no prior information on the pin distribution and pin radius will be given. It is therefore necessary to first estimate these quantities before running the Monte Carlo simulation. As discussed in the ``Results-Inverse Approach'' section, this is done by performing a rough image reconstruction using our previously developed model, where a uniform attenuation map is used and no scattering effect is considered~\cite{fanginmm2020}. Based on the reconstructed image, one can determine the pin locations and pin radius, and create a material map, where the material is assumed to be cobalt inside the pins and water outside.
Next, we simulate the travel of photons inside the fuel assembly using Monte Carlo. Two sets of grids are generated. The fine grid has a pixel size of $0.5\times0.5$~mm${}^{2}$, while the coarse grid has a pixel size of $2\times2$~mm${}^{2}$. We first discretize the region of interest on the fine grid and calculate the response to the pixels whose center points are inside a pin. Then we calculate the response to a pixel on the coarse grid by summing the responses to the corresponding $4\times 4$ pixels on the fine grid. Based on the pin identification result in the previous step, the potential pin-present region is pixelated into 5,329 coarse pixels. We iterate over all these pixels and form the response matrix. In this way, we are able to reduce the pixelization error introduced during the discretization of the pins, without increasing the final response matrix size.
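The fine-to-coarse aggregation of per-pixel responses is a simple block sum, which can be written as one reshape in numpy. This is an illustrative sketch with hypothetical array shapes (sinogram length $M$, an $n_f \times n_f$ fine grid), not the production code:

```python
import numpy as np

def coarsen_responses(fine_resp, factor=4):
    """Sum fine-grid responses into coarse pixels.

    fine_resp: shape (M, nf, nf), the length-M sinogram response of each
    fine-grid pixel laid out on an nf x nf grid (nf divisible by factor).
    Returns shape (M, nf//factor, nf//factor): each coarse-pixel response
    is the sum over its factor x factor block of fine responses.
    """
    M, nf, _ = fine_resp.shape
    nc = nf // factor
    return fine_resp.reshape(M, nc, factor, nc, factor).sum(axis=(2, 4))

# Tiny check: uniform unit responses on an 8x8 fine grid, 4x4 blocks
fine = np.ones((2, 8, 8))
coarse = coarsen_responses(fine)
```

Summing (rather than averaging) the block preserves the total response per unit activity, so the coarse response matrix stays normalized per source particle.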
The response to each fine pixel whose center point is inside a pin is calculated in the following way. A photon $(\vec{r_0}, \vec{\Omega_0}, E_0, W_0)$ is created, with the initial position $\vec{r_0}$ uniformly sampled inside the pixel, the initial moving direction $\vec{\Omega_0}$ uniformly sampled in $4\pi$ space, the initial energy $E_0$ sampled from the source energy spectrum, and the initial weight $W_0$ set to 1. We track the photon inside the PGET using the delta-tracking algorithm~\cite{woodcock1965techniques} and determine the position where the photon interacts with the medium. We force the interaction to be Compton scattering by multiplying the photon weight by the probability that the interaction is Compton scattering,
\begin{equation}
W_{i} = W_{i-1} \frac{\mu_{\text{sc}}(\vec{r}_{i-1}, E_{i-1})}{\mu(\vec{r}_{i-1},E_{i-1})},
\end{equation}
where $\mu_{\text{sc}}(\vec{r}_{i-1}, E_{i-1})$ and ${\mu(\vec{r}_{i-1}, E_{i-1})}$ are the Compton scattering attenuation coefficient and the total attenuation coefficient at the $i$-th interaction site, respectively. A scattering angle is sampled using Kahn's rejection algorithm~\cite{kahn1954applications}, and the photon energy $E_{i}$ and moving direction $\vec{\Omega_i}$ are updated accordingly. This process is repeated until the photon escapes the fuel assembly or its energy falls below the detection threshold. A copy of the photon is saved at its creation site and at each interaction site to a sub-projection map.
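The core of delta-tracking can be sketched in a few lines: tentative flight distances are sampled from a constant majorant cross-section $\mu_{\max}$, and a tentative collision at $\vec{r}$ is accepted as a real one with probability $\mu(\vec{r})/\mu_{\max}$. This is a minimal sketch under assumed interfaces ($\mu$ given as a Python callable, no assembly geometry or energy dependence), not the tracking code of this work:

```python
import numpy as np

def delta_track(mu_of_r, mu_max, r0, omega, rng, r_max=1e9):
    """Woodcock delta-tracking: sample the next real-collision site along
    direction omega, given a position-dependent total coefficient mu(r)
    bounded above by the majorant mu_max."""
    r = np.array(r0, dtype=float)
    while True:
        r = r + omega * rng.exponential(1.0 / mu_max)  # tentative flight
        if np.linalg.norm(r - r0) > r_max:
            return None                                # escaped the geometry
        if rng.random() < mu_of_r(r) / mu_max:
            return r                                   # real collision

# Sanity check: in a homogeneous medium with mu(r) = mu, the sampled
# free path is exponential with mean 1/mu, regardless of mu_max
rng = np.random.default_rng(1)
mu = 0.5
paths = [np.linalg.norm(delta_track(lambda r: mu, 2.0, np.zeros(3),
                                    np.array([1.0, 0.0, 0.0]), rng))
         for _ in range(20000)]
```

The appeal of delta-tracking is that no ray-surface intersections are needed when crossing pin/water boundaries, only point evaluations of $\mu(\vec{r})$.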
Once the sub-projection map is obtained, we apply the convolutional forced detection (CFD) technique to calculate the detector response. CFD is a widely used variance reduction technique for X-ray downscatter simulation in single photon emission computed tomography (SPECT), which has been shown to be 50--100 times faster than the conventional forced detection technique~\cite{hutton2011review} and several thousand times faster than full-3D Monte Carlo simulation~\cite{beekman2001efficient}. Fig.~\ref{fig:cfd} illustrates its principle. In CFD, the photon is forced to travel perpendicularly to the detector plane upon its creation or each interaction with the medium, effectively simulating all possible spatial distributions of particle interactions and the consequent detection events. For an unscattered photon, the contribution to the F4 tally in the detector cell is given by
\begin{equation}\label{eq:unsccfd}
W\times \frac{\Delta\Omega}{4\pi}\times e^{-\int\mu(\vec{r},E) ds} \times \frac{T_l}{V}
\end{equation}
and for a scattered photon, the contribution is
\begin{equation}\label{eq:sccfd}
W \times \frac{\sigma (\theta) \Delta\Omega}{2\pi \int_{0}^{\pi}\sigma (\theta')\sin(\theta')d\theta' } \times e^{-\int\mu(\vec{r},E(\theta)) ds} \times \frac{T_l}{V}
\end{equation}
In Eq.~\eqref{eq:unsccfd} and \eqref{eq:sccfd}, $W$ is the photon weight, $\Delta\Omega$ is the solid angle subtended by the detector in steradians, $\sigma(\theta)$ is the differential cross-section for Compton scattering, $\mu(\vec{r}, E)$ is the total attenuation coefficient along the CFD path, and the last term stands for the contribution to the F4 tally, \textit{i.e.}, the average track length in the detector cell normalized by the detector volume.
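The two tally contributions translate directly into small functions; a hedged sketch with illustrative names, where the attenuation line integral and the angular cross-section values are assumed to be precomputed.

```python
import math

def cfd_unscattered(w, d_omega, atten_integral, track_len, det_vol):
    """W * dOmega/(4 pi) * exp(-int mu ds) * T_l / V (unscattered photon)."""
    return w * d_omega / (4.0 * math.pi) * math.exp(-atten_integral) * track_len / det_vol

def cfd_scattered(w, sigma_theta, sigma_norm, d_omega, atten_integral, track_len, det_vol):
    """W * sigma(theta) dOmega / (2 pi * int sigma(theta') sin(theta') dtheta')
    * exp(-int mu ds) * T_l / V (scattered photon); sigma_norm is the angular
    normalization integral in the denominator."""
    return (w * sigma_theta * d_omega / (2.0 * math.pi * sigma_norm)
            * math.exp(-atten_integral) * track_len / det_vol)
```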
\begin{figure}[!htbp]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=.5\linewidth]{SciRep_pics/Fig14.pdf}
\caption{Illustration of the CFD technique. A photon, shown as the gray disk, is created on the left and travels in the medium along the solid arrow. The disk radius represents the photon energy. Upon its creation and at each interaction with the medium, the photon is forced to be scattered along the dashed arrow and to be detected by the detector with a certain probability.}
\label{fig:cfd}
\end{figure}
Due to the non-ideal collimation, a photon will contribute counts not only to the detector in the same column, but also to the neighbouring ones. This phenomenon is characterized by the depth-dependent point spread function (PSF) of the collimator, \textit{i.e.}, the photon count distribution on the detection plane when a point source is imaged. PSF depends on the vertical distance $z$ between the photon and the detector array~\cite{de2001acceleration}:
\begin{equation}
\begin{gathered}
PSF_z(x) = \frac{2\sqrt{\ln 2}}{\sqrt{\pi}\,\mathrm{FWHM}(z)}\exp\left(\frac{-4\ln 2\, x^2}{\mathrm{FWHM}(z)^2}\right), \quad \mathrm{FWHM}(z) = b + \frac{b}{l_{\text{eff}}}z, \quad l_{\text{eff}} = l - \frac{2}{\mu}
\end{gathered}
\end{equation}
where $b$ is the collimator aperture, $l$ is the collimator length, $\mu$ is the total attenuation coefficient in the collimator, and $\mathrm{FWHM}(z)$ is the depth-dependent width of the collimator PSF.
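A direct transcription of this PSF into code looks as follows (parameter values are placeholders in consistent length units):

```python
import math

def collimator_psf(x, z, b, l, mu):
    """Gaussian PSF with FWHM(z) = b + b*z/l_eff and l_eff = l - 2/mu,
    normalized to unit area over x."""
    l_eff = l - 2.0 / mu
    fwhm = b + b * z / l_eff
    norm = 2.0 * math.sqrt(math.log(2.0)) / (math.sqrt(math.pi) * fwhm)
    return norm * math.exp(-4.0 * math.log(2.0) * x ** 2 / fwhm ** 2)
```

Writing the Gaussian in terms of FWHM rather than $\sigma$ makes the half-maximum property explicit: the PSF drops to half its peak value at $x = \mathrm{FWHM}/2$.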
To calculate the response to a pin at an angle $\theta$, the photons saved in the sub-projection map are first rotated by an angle $-\theta$. We then sum the weights of the photons in the same pixel of the sub-projection map and calculate their contribution to the detector response based on Eq.~\eqref{eq:unsccfd} or \eqref{eq:sccfd}, which creates a projection map. We then convolve each row of the projection map with the PSF and sum over all rows to obtain the response at this angle. Next, we increment the angle $\theta$ and calculate the response at the next angle. In this way, photon tracking in the fuel assembly is done only once, which saves computation time.
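This rotate-bin-convolve-sum step can be sketched with NumPy (a toy version; the binning edges, PSF kernel, and all names are illustrative):

```python
import numpy as np

def response_at_angle(xy, weights, theta, edges, psf_kernel):
    """Rotate stored photon positions by -theta, bin their weights into a
    projection map, convolve each row with the PSF kernel, and sum the rows."""
    c, s = np.cos(-theta), np.sin(-theta)
    rot = xy @ np.array([[c, -s], [s, c]]).T        # rotate all points by -theta
    proj, _, _ = np.histogram2d(rot[:, 1], rot[:, 0],
                                bins=[edges, edges], weights=weights)
    rows = [np.convolve(row, psf_kernel, mode="same") for row in proj]
    return np.sum(rows, axis=0)                      # response vs. detector position

xy = np.array([[0.1, 0.1]])                          # one stored photon
resp = response_at_angle(xy, np.array([2.0]), 0.3,
                         np.linspace(-1.0, 1.0, 5), np.array([1.0]))
```

With a trivial (delta) kernel, the summed response conserves the total photon weight, which is a useful sanity check when swapping in a real PSF kernel.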
We simulated the travel of 6,400 photons to create the sub-projection map for one pixel. It took approximately 4 minutes to calculate the response matrix containing 5,329 pixels using CFD on an Intel Core i9-7920X CPU.
\subsection*{Pin Localization}
Images reconstructed using both approaches are input to a convolutional neural network, referred to as U-net~\cite{ronneberger2015u}, to perform pin identification. The experimental data provided by the IAEA within the framework of a technology open challenge~\cite{iaea2019challenge} were used for training the U-net. The training data consisted of the FBP images reconstructed from the measured sinograms of mock-up assemblies of 12 different geometries with a variable number of ${}^{60}$Co pins; examples of the reconstructed images of these assemblies can be found in~\cite{iaea2019challenge}. The U-net was trained to minimize a binary cross-entropy loss between the predicted pin masks and the ground truth. For data augmentation, we provided four randomly rotated copies of each image as input to the network. Data augmentation allowed us to generate new training instances from existing ones, artificially boosting the size of the training set. The network was trained for 10,000 epochs, \textit{i.e.}, complete iterations over the whole dataset, using a learning rate of $10^{-5}$ and the Adam optimizer~\cite{kingma2014adam}. We found $10^{-5}$ to be the learning rate at which the slope of the cross-entropy loss is maximized, allowing the optimizer to reach the minimum. Training was stopped early when the loss reached its minimum value.
The trained U-net can extract visible structures from the input image and output a mask based on the extracted features. Bright regions on the mask with areas above a certain threshold are identified as present pins, and their centroids are calculated as the coordinates of the center of each pin.
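The mask post-processing (thresholding, area filtering, centroid extraction) can be sketched as follows; the flood fill is a simple stand-in for a library call such as scipy.ndimage.label, and all names and thresholds are illustrative.

```python
import numpy as np
from collections import deque

def pin_centers(mask, thresh=0.5, min_area=3):
    """Threshold a predicted mask, find 4-connected bright regions, discard
    regions below min_area, and return the centroid of each kept region."""
    binary = mask > thresh
    seen = np.zeros_like(binary, dtype=bool)
    centers = []
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                q, pix = deque([(i, j)]), []
                seen[i, j] = True
                while q:                      # BFS flood fill of one region
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(pix) >= min_area:      # area threshold rejects speckle
                    ys, xs = zip(*pix)
                    centers.append((sum(ys) / len(pix), sum(xs) / len(pix)))
    return centers
```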
\section{Introduction}
The carbon isotopes have been an important subject in nuclear cluster physics as they manifest a
rich variety of cluster phenomena. The Hoyle state (the $0_2^+$ state of $^{12}$C) exhibits one of
the most interesting clustering aspects, the Bose-Einstein condensate (BEC) of three $\alpha$
particles~\cite{Tohsaki2001a}. The structure of the Hoyle state and its analogous states in
neighboring nuclei has been one of the major topics in recent decades
~\cite{Funaki2003,Wakasa2007,Chernykh2007,Kanada-Enyo2007,Funaki2008,Itoh2011,Epelbaum2011,
Epelbaum2012,Freer2012,Fukuoka2013,Ohtsubo2013,Carlson2015,Zhou2016}. In the highly
excited region of carbon isotopes, a different type of clustering, the linear-chain of alpha
particles, has also been intensively discussed
~\cite{Itagaki2001,Maruhn2010,Suhara2010,Baba2014,Ebran2014,Freer2014,Baba2016,Fritsch2016,
Baba2017,Yamaguchi2017,Li2017,Baba2018,Marevic2019,Liu2020}. Among the carbon isotopes,
$^{13}{\rm C}$ is of particular importance as the system composed of three $\alpha$ particles
(bosons) plus a valence nucleon (fermion). The Hoyle analog state in $^{13}{\rm C}$, the BEC of
three $\alpha$ particles with a neutron as an impurity, and the possible formation of the
linear-chain structure assisted by a valence neutron have been main interests for this
nucleus~\cite{Milin2002,Freer2011,Furutachi2011,Suhara2011,Wheldon2012,Yamada2015,Ebran2017,
Chiba2020,Inaba2020}.
Recently, apart from these studies, Bijker et al.~\cite{Bijker2019} proposed a different
interpretation of the structure of $^{13}{\rm C}$ based on symmetry arguments. They applied the
algebraic cluster model (ACM)~\cite{Bijker2000, DellaRocca2017, Bijker2020} to $^{13}{\rm C}$, and
assumed triangular $D'_{3h}$ symmetry of the $3\alpha$ clusters accompanied by a valence
neutron. The intrinsic states were classified into three representations of the symmetry group, and
each of them forms a rotational band and exhibits a unique spectrum and transition strengths. Based
on a comparison with available experimental data, they argued that many of the observed ground and
excited states can be assigned to these bands, and hence, $^{13}{\rm C}$ has triangular $D'_{3h}$
symmetry. This offers an interesting insight into the structure of carbon isotopes and contradicts
the BEC interpretation of the Hoyle state and its analog states. However, the model is based
purely on symmetry concepts, and the deviation from triangular symmetry, which must occur in
reality, is neglected. Therefore, the symmetry behind the spectrum of $^{13}{\rm C}$, and the
deviation from it, should be tested by microscopic models without any assumption about the nuclear shape.
The real-time evolution method (REM) recently proposed by Imai et al.~\cite{Imai2019} is one of the
microscopic cluster models that can examine the shape of nuclei without any assumption about the
symmetry. It generates basis wave functions with various cluster configurations by using the
equation-of-motion (EOM) of the Gaussian wave packets. A benchmark calculation showed that REM
precisely describes the 3$\alpha$ system including the Hoyle state. Therefore, a natural idea is to
extend the method to non-$N\alpha$ systems~\cite{Zhou2020}. It is noted that REM superposes a massive
number of basis wave functions to describe cluster systems, and it does not introduce any
assumption about the symmetry of the nuclear shape or the cluster configurations. Therefore, REM is a
suitable nuclear model to test if there exists any symmetry in the spectrum of
$^{13}{\rm C}$. Thus, the aim of this work is two-fold. The first is the extension and benchmark of
REM for non-$N\alpha$ system, and the second is the verification of the triangular $D'_{3h}$
symmetry in the spectrum of $^{13}{\rm C}$.
We organize this paper as follows. In the next section, the framework of REM for the 3$\alpha$+$n$
system is briefly explained. In Section III, the numerical results are presented. Compared to
a previous study which used the same Hamiltonian, REM yields deeper binding energies for all bound
states and describes the 3$\alpha$+$n$ system more accurately. Based on the $B(E2)$ strengths, we
propose an assignment of the rotational bands and discuss the internal structure of the band member
states to examine the triangular $D'_{3h}$ symmetry. It is shown that the ground band member states
have the same intrinsic structure which has triangular arrangement of three alpha
particles. However, it is found that many excited states deviate from a rigid body and fluctuate
around the triangular shape. Finally, in the last section, we summarize this work.
\section{Theoretical framework}
In this section, we explain the Hamiltonian and framework of the real-time evolution method for the
3$\alpha$+$n$ system. The Hamiltonian used in this study is given as,
\begin{align}
\hat{H} = \sum_{i=1}^{13} \hat{t}_i + \sum_{i<j}^{13}\hat{v}_N(r_{ij}) +
\sum_{i<j}^{13}\hat{v}_C(r_{ij}) - \hat{t}_{cm},
\label{eq:ham}
\end{align}
where $\hat{t}_i$ is the kinetic energy of the $i$th nucleon and $\hat{t}_{cm}$ is the
center-of-mass kinetic energy. The $\hat{v}_{N}$ and $\hat{v}_{C}$ denote the effective
nucleon-nucleon and Coulomb interactions, respectively. For the central part of the nucleon-nucleon
interaction, we used Volkov No.~2 force~\cite{Volkov1965} with the exchange parameters, $W=0.4$,
$B=H=0.125$ and $M=0.6$. The G3RS force~\cite{Yamaguchi1979} is used for the spin-orbit part with
two choices of the strength, $u_{ls}=1000$ and 2000 MeV. The latter value $u_{ls}=2000$ MeV was also
used by Furutachi et al.~\cite{Furutachi2011}, and we also adopt the same strength for the sake of
comparison. However, as shown later, it does not reproduce the correct ordering of the ground band
member states, and hence, we also applied the weaker strength $u_{ls}=1000$ MeV for a better description
of the ground band.
As the basis wave function to describe the $3\alpha+n$ system, we employ the Brink-Bloch wave
function~\cite{Brink1966} which consists of three $\alpha$ clusters with $(0s)^4$ configuration
coupled with a valence neutron,
\begin{align}
\Phi(\bm Z_1,...,\bm Z_4) &= \mathcal A
\Set{\Phi_\alpha(\bm Z_1)\Phi_\alpha(\bm Z_2)\Phi_\alpha(\bm Z_3)\Phi_n(\bm Z_4)},
\label{eq:brink1}\\
\Phi_\alpha(\bm Z) &= \mathcal A
\Set{\phi(\bm r_1,\bm Z)\chi_{p\uparrow}\cdots\phi(\bm r_4,\bm Z)\chi_{n\downarrow}},\\
\Phi_n(\bm Z) &=\phi(\bm r,\bm Z)\chi_{n\uparrow},\label{eq:brink2}\\
\phi(\bm r,\bm Z) &= \left(\frac{2\nu}{\pi}\right)^{3/4}\exp
\set{-\nu\left(\bm r- \bm Z\right)^2},
\end{align}
where $\Phi_\alpha(\bm Z)$ and $\Phi_n(\bm Z)$ denote the wave packets describing the $\alpha$
cluster and the valence neutron located at $\bm Z$, respectively. In this study, we fix the valence
neutron spin to up in the intrinsic frame without loss of generality. The set of three-dimensional
vectors $\bm Z_1,...,\bm Z_4$ is complex valued and describes the positions and momenta of the
$3\alpha+n$ clusters in phase space. The size parameter $\nu=1/2b^2$ of the $\alpha$ particle is
fixed with $b=1.46$ fm, which reproduces the observed size of an $\alpha$ particle. The same size
parameter is also used to describe the valence neutron.
In the REM framework, we use the equation-of-motion to generate the basis wave functions with
various configurations of clusters. From the time-dependent variational principle,
\begin{align}
\delta\int dt\frac{\langle\Phi(\bm Z_1,...,\bm Z_4)|i\hbar\;d/dt-\hat{H}|
\Phi(\bm Z_1,...,\bm Z_4)\rangle}{\langle\Phi(\bm Z_1,...,\bm Z_4)|
\Phi(\bm Z_1,...,\bm Z_4)\rangle}=0,
\end{align}
one obtains the equation-of-motion (EOM) for the $3\alpha+n$ cluster centroids
$\bm Z_1,...,\bm Z_4$,
\begin{align}
i\hbar&\sum_{j=1}^4\sum_{\sigma=x,y,z} C_{i\rho j\sigma}\frac{dZ_{j\sigma}}{dt} =
\frac{\partial \mathcal H_{int}}{\partial Z_{i\rho}^*}, \label{eq:eom}\\
\mathcal H_{int}&\equiv\frac{\langle\Phi(\bm Z_1,...,\bm Z_4)|\hat{H}|
\Phi(\bm Z_1,...,\bm Z_4)\rangle}{\langle\Phi(\bm Z_1,...,\bm Z_4)|
\Phi(\bm Z_1,...,\bm Z_4)\rangle},\\
C_{i\rho j\sigma}&\equiv\frac{\partial^2\text{ln}\langle\Phi(\bm Z_1,...,\bm Z_4)|
\Phi(\bm Z_1,...,\bm Z_4)\rangle}{\partial Z^*_{i\rho}\partial Z_{j\sigma}}.
\end{align}
By solving this EOM from an arbitrary initial wave function, a set of the vectors
$\bm Z_1(t),...,\bm Z_4(t)$ is obtained as a function of the real-time $t$, which
defines the basis wave function $\Phi(\bm Z_1(t),...,\bm Z_4(t))$ at each time.
When we solve the EOM, we add an external field $V_d$ to the Hamiltonian,
\begin{align}
V_d&=v_d\sum_i f(|{\rm Re}\bm Z_i-\bm R_{c.m.}|),\\
f(x)&=(x-a)^2\theta(x-a),\\
\bm R_{c.m.} &=\frac{4}{13}\sum_{i=1}^3{\rm Re}\bm Z_i + \frac{1}{13}{\rm Re}\bm Z_4,
\label{eq:wall}
\end{align}
where $\theta(x-a)$ is the step function with $a=10$ fm and $v_d=1.5$ $\text{MeV/fm}^2$. This
external field reflects constituent particles at the distance $a$ to prevent them from escaping far away.
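Numerically, one step of this EOM reduces to a linear solve for the centroid velocities: given the metric matrix $C$ and the gradient of the intrinsic energy, one solves $i\hbar\, C\, \dot{Z} = \partial\mathcal{H}_{int}/\partial Z^*$ and advances $Z$. Below is a toy Euler-step sketch (identity metric, constant gradient, hypothetical names); a real REM calculation evaluates $C$ and the gradient from the Brink wave function and uses a higher-order integrator.

```python
import numpy as np

def rem_step(Z, C, grad, dt, hbar=1.0):
    """One explicit Euler step of i*hbar * C @ dZ/dt = dH/dZ*."""
    dZdt = np.linalg.solve(1j * hbar * C, grad)
    return Z + dt * dZdt

Z = np.zeros(12, dtype=complex)       # 4 centroids x 3 Cartesian components
C = np.eye(12, dtype=complex)         # toy metric matrix
grad = np.ones(12, dtype=complex)     # toy gradient dH/dZ*
Z_next = rem_step(Z, C, grad, dt=0.01)
```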
Once we obtain a set of basis wave functions, we perform the generator coordinate method (GCM)
calculation by superposing them after the projection of the parity and angular momentum,
\begin{align}
\Psi^{J\pi}_M=\int_0^{T_{\text{max}}}dt \sum^J_{K=-J}\hat{P}_{MK}^{J\pi}f_K(t)
\Phi(\bm Z_1(t),...,\bm Z_4(t)),\label{eq:gcmwf}
\end{align}
where $\hat{P}^{J\pi}_{MK}$ is the parity and the angular momentum projection operator,
\begin{align}
\hat{P}^{J\pi}_{MK}=\frac{2J+1}{8\pi^2}\int
d\Omega\mathcal{D}_{MK}^{J*}(\Omega)\hat{R}(\Omega)\frac{1+\pi \hat{P}_x}{2}.
\end{align}
In the practical calculation, the integral in Eq.~(\ref{eq:gcmwf}) is discretized as,
\begin{align}
\Psi^{J\pi}_M = \sum_{iK}\hat{P}^{J\pi}_{MK}f_{iK}\Phi_i, \label{eq:gcmwf2}
\end{align}
where $\Phi_i$ is an abbreviation for $\Phi(\bm Z_1(t),...,\bm Z_4(t))$. The amplitude $f_{iK}$ and
eigenenergy are determined by solving the Hill-Wheeler equation \cite{Hill1953,Griffin1957}.
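The Hill-Wheeler equation is a generalized eigenvalue problem $Hf = ENf$, where $H$ and $N$ are the Hamiltonian and norm kernels between the projected basis states. A minimal sketch via Löwdin orthogonalization on toy $2\times 2$ kernels (equivalent to scipy.linalg.eigh(H, N); all matrices are illustrative):

```python
import numpy as np

def hill_wheeler(H, N, cutoff=1e-10):
    """Solve H f = E N f; eigenvectors of N below `cutoff` (near-linearly
    dependent basis states) are discarded before orthogonalization."""
    vals, vecs = np.linalg.eigh(N)
    keep = vals > cutoff
    X = vecs[:, keep] / np.sqrt(vals[keep])   # N^{-1/2} on the kept subspace
    E, g = np.linalg.eigh(X.conj().T @ H @ X)
    return E, X @ g                           # energies and amplitudes f

H = np.array([[1.0, 0.5], [0.5, 2.0]])        # toy Hamiltonian kernel
N = np.array([[1.0, 0.2], [0.2, 1.0]])        # toy norm (overlap) kernel
E, f = hill_wheeler(H, N)
```

The cutoff on the norm eigenvalues is what keeps the superposition of many nearly linearly dependent basis states numerically stable.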
\section{Results and discussion}
\subsection{Time evolution of the $\bm{3\alpha}$+$\bm n$ system} The numerical calculations were
performed according to the following procedure. First, using pure imaginary-time $\tau=it$ in
Eq.~(\ref{eq:eom}), we calculate the minimum intrinsic energy, which is found to be $-83.1$
MeV. Then, we generate the wave functions with the intrinsic excitation energy $E^*_{\rm int}$ using
the same equation. We have tested several excitation energies and used $E^*_{\rm int}=30$ MeV in
this work as it gives the best convergence of the GCM calculation. Using these wave functions
as the initial condition at $t=0$, we calculate the time evolution of the $3\alpha$+$n$ system.
The total propagation time was set to 10,000 fm/c, and the wave functions are recorded at every 33
fm/c. Consequently, an ensemble of 300 wave functions is generated. By using different initial
wave functions at $t=0$, we generated two ensembles, which we call sets 1 and 2.
\begin{figure}[htb!]
\includegraphics[width=0.7\hsize]{contour.eps}
\caption{The snapshots of the intrinsic density distributions obtained by the
real-time evolution. The top (bottom) panels show the wave functions from the ensemble
set 1 (set 2). } \label{fig:contour}
\end{figure}
\begin{figure}[hbt!]
\includegraphics[width=0.6\hsize]{conv.eps}
\caption{The energies and radii of the $1/2_1^-$ and $5/2_1^+$ states obtained from sets
1 and 2 as functions of the total propagation time $T_\text{max}$. The strength
of the spin-orbit potential $u_{ls}=2000$ MeV was adopted. The result for the
$1/2_1^-$ state obtained in Ref.~\cite{Furutachi2011} is denoted by blue lines.}
\label{fig:conv}
\end{figure}
\begin{figure*}[hbt!]
\includegraphics[width=\hsize]{level.eps}
\caption{The energy spectrum of $^{13}$C calculated by using the strength of the spin-orbit
potential $u_{ls}$=2000 and 1000 MeV. The energy is measured relative to the 3$\alpha+n$
threshold. The spectrum is compared with that obtained by Furutachi et~al.~\cite{Furutachi2011}
using the same Hamiltonian with $u_{ls}$=2000 MeV.
Experimental data are taken from Ref. \cite{Ajzenberg-Selove1991}. }
\label{fig:level}
\end{figure*}
Several snapshots of the wave functions from these ensembles are shown in
Fig.~\ref{fig:contour}. Note that the wave functions of sets 1 and 2 at $t=0$ fm/c have different
momenta of the clusters, although they have almost the same spatial distributions. Consequently,
sets 1 and 2 show different time evolutions. We also note that various nuclear shapes
with different cluster configurations naturally emerge from the EOM. In some cases the 3$\alpha$
particles are close to each other and the valence neutron is apart from them. In other
cases, 2$\alpha$ particles and the valence neutron are close to each other, and an $\alpha$ particle
is apart from the others, describing $^9\text{Be}^*+\alpha$-like configurations. In this manner, the
ensembles of the basis wave functions were prepared without any assumption of spatial symmetry.
\subsection{The calculated full spectrum}
The generated wave functions are superposed to diagonalize the Hamiltonian. To confirm the
convergence of the calculation, Fig.~\ref{fig:conv} shows the energies and radii of the
$1/2_1^-$ and $5/2_1^+$ states, which are the lowest negative- and positive-parity states, as
functions of the propagation time $T_\text{max}$. The energy and radius of the ground state
($1/2^-_1$ state) show fast convergence, and both sets reach almost identical values. Thus, the
obtained GCM wave functions are well converged, independent of the initial wave functions. The figure
also shows that REM yields approximately 1 MeV deeper binding energy of the $1/2^-_1$ state
than the previous study by Furutachi et al.~\cite{Furutachi2011} who used the same Hamiltonian. This
clearly shows that REM can describe the $3\alpha$+$n$ system more accurately. It is interesting
to note that REM gives a larger radius for the ground state despite the deeper binding energy. This
means that REM yields a more stretched and long-ranged wave function. It is also noted that
good convergence of the $5/2^+_1$ state was achieved by using the same ensembles.
The left half of Fig.~\ref{fig:level} compares the full spectrum obtained by REM and the
negative-parity states calculated by Furutachi et al.~\cite{Furutachi2011}. Because the two
calculations use the same Hamiltonian, a deeper binding energy means a better description of the
bound states below the neutron threshold. Obviously, the present calculation gives deeper energies
for all the negative-parity states below the threshold ($1/2_1^-$, $3/2_1^-$ and $5/2_1^-$). It
also gives a deeper binding energy for the $7/2^-_1$ state located just above the threshold, for
which the bound-state approximation may still be valid. Thus, REM offers a better description of the
bound states than ordinary GCM calculations.
However, the situation is different for the negative-parity resonances above the neutron threshold,
to which the variational principle is not applicable and for which the bound-state approximation
does not guarantee energy convergence. In fact, the two calculations disagree on the highly excited
negative-parity states. It is noted that the model space of REM is much larger than that of the GCM
by Furutachi et al.~\cite{Furutachi2011}. As a result, we found that most of the negative-parity
resonances couple with the non-resonant continuum, which makes it difficult to distinguish resonant
solutions from the many non-resonant ones. Therefore, we have not shown the negative-parity states
above the neutron threshold in Fig.~\ref{fig:level}. In contrast, although we cannot clearly tell
the reason, we found that the coupling is not strong for the positive-parity states, and stable
solutions are obtained, which are plotted as resonances in the figure.
The spectrum obtained with the spin-orbit strength $u_{ls}=2000$ MeV does not reproduce the order of
the ground band spectrum. It underestimates the excitation energy of the $5/2^-_1$ state and the
spectrum deviates from the observed rotational pattern. This may affect the assignment of the
rotational bands and the discussion of the intrinsic shape. Therefore, we performed an additional
calculation using the weaker spin-orbit strength $u_{ls}=1000$ MeV to check the interaction dependence
of the spectrum. As seen in Fig.~\ref{fig:level}, the weaker spin-orbit strength yields the correct
order of the ground band member states ($1/2^-_1$, $3/2^-_1$, $5/2^-_1$ and $7/2^-_1$ states),
although it still overestimates the moment-of-inertia of the ground band. The side effect of the
weaker spin-orbit interaction is the overestimation of the excitation energies of the
positive-parity states. This may be due to the overestimation of the $^{9}{\rm Be}$+$\alpha$
threshold energy. If we measure them relative to the $^{9}{\rm Be}$+$\alpha$ threshold, the
excitation energies of many positive-parity states get closer to the observed values. This implies
that many positive-parity states have $^{9}{\rm Be}$+$\alpha$ structure~\cite{Milin2002,Freer2011}.
\subsection{Band assignment and shape of intrinsic states}
\begin{figure*}[tb!]
\includegraphics[width=0.9\hsize]{band.eps}
\caption{The band-assignment based on the calculated $E2$ transition strengths compared with that
from the algebraic cluster model (ACM)~\cite{Bijker2019} and the experimental assignment which was
also tentatively proposed in Ref.~\cite{Bijker2019}. The filled (open) symbols show the positive-parity
(negative-parity) states.} \label{fig:band}
\end{figure*}
\begin{table}[thb]
\caption{The calculated intra- and inter-band $E2$ transition probabilities in units of $e^2\rm
fm^4$. Only transitions larger than the Weisskopf estimate ($1\ {\rm W.U.}=1.8\ e^2{\rm fm^4}$) are
shown. The numbers in parentheses are the experimental values.}\label{tab:be2}
\begin{ruledtabular}
\begin{tabular}{llll}
band $K^{\pi}_i \rightarrow K^{\pi}_f$ & $J_i$ & $J_f$ & $B(E2;J_i\rightarrow J_f)$ \\\hline
$1/2^- \rightarrow 1/2^-$ & $1/2^-_1$ & $3/2^-_1$ & 17.4 (12.7)\\
& & $5/2^-_1$ & 17.1 (16.9)\\
& $3/2^-_1$ & $5/2^-_1$ & 2.4 \\
& & $7/2^-_1$ & 17.8 \\
& $5/2^-_1$ & $7/2^-_1$ & 2.0 \\
$5/2^+ \rightarrow 5/2^+$ & $5/2^+_1$ & $7/2^+_1$ & 13.8 \rule{0pt}{10pt}\\
& & $9/2^+_1$ & 10.9 \\
& $7/2^+_1$ & $9/2^+_1$ & 12.0 \\
$7/2^+ \rightarrow 7/2^+$ & $7/2^+_2$ & $9/2^+_3$ & 12.9 \rule{0pt}{10pt}\\
$1/2^+ \rightarrow 1/2^+$ & $1/2^+_1$ & $3/2^+_1$ & 16.7 \rule{0pt}{10pt}\\
& & $5/2^+_2$ & 20.1 \\
& $3/2^+_1$ & $5/2^+_2$ & 5.0 \\
& & $7/2^+_4$ & 7.6 \\
& $5/2^+_2$ & $9/2^+_2$ & 9.9 \\
$3/2^+ \rightarrow 3/2^+$ & $3/2^+_2$ & $5/2^+_3$ & 10.0 \rule{0pt}{10pt}\\
& & $7/2^+_3$ & 8.7 \\
& $5/2^+_3$ & $7/2^+_3$ & 9.8 \\\hline
$5/2^+ \rightarrow 7/2^+$ & $7/2^+_1$ & $7/2^+_2$ & 4.3 \rule{0pt}{10pt}\\
$1/2^+ \rightarrow 5/2^+$ & $1/2^+_1$ & $5/2^+_1$ & 6.7 (9.0)\rule{0pt}{10pt}\\
& $5/2^+_2$ & $5/2^+_1$ & 3.6 \\
& & $7/2^+_1$ & 4.3 \\
& & $9/2^+_1$ & 3.3 \\
& $9/2^+_2$ & $7/2^+_1$ & 2.2 \\
$1/2^+ \rightarrow 3/2^+$ & $3/2^+_1$ & $7/2^+_3$ & 2.3 \rule{0pt}{10pt}\\
& $7/2^+_4$ & $5/2^+_3$ & 3.0 \\
$3/2^+ \rightarrow 5/2^+$ & $3/2^+_2$ & $5/2^+_1$ & 2.7 \rule{0pt}{10pt}
\end{tabular}
\end{ruledtabular}
\end{table}
Figure~\ref{fig:band} presents the band assignment determined from the calculated $E2$ transition
strengths listed in Tab.~\ref{tab:be2} and compares it with those from the experiment and the ACM
calculation. The band assignment of the REM results is unambiguous as the intra-band $E2$
transitions are clearly stronger than the inter-band transitions.
The $K^\pi=1/2^-$ band is built on the $1/2^-_1$ ground state. The intra-band $E2$ transition
strengths are reasonably described and comparable with the experimental data for the
$1/2^-_1\rightarrow 3/2^-_1$ and $1/2^-_1\rightarrow 5/2^-_1$ transitions. Experimentally, the
ground band terminates at the $9/2^-_{1}$ state, but we could not identify the corresponding state
in our calculation. This may be due to the high excitation energy of this state, which causes
strong coupling with the continuum and makes it difficult to isolate this state within the
bound-state approximation.
For the positive-parity states, we have assigned four rotational bands; $K^\pi=5/2^+$, $7/2^+$,
$1/2^+$ and $3/2^+$ which are built on the $5/2^+_1$, $7/2^+_2$, $1/2^+_1$ and $3/2^+_2$ states,
respectively. Experimentally, the $E2$ transition strength for the $1/2^+_1 \rightarrow 5/2^+_1$
transition has already been measured (9.0 $e^2{\rm fm^4}$)~\cite{Ajzenberg-Selove1991} and our
calculation gives a comparable value (6.7 $e^2{\rm fm^4}$). However, no other $B(E2)$ data are
available, and the positive-parity band assignment has not been firmly established by
experiments.
In Ref.~\cite{Bijker2019}, based on ACM which assumes the $3\alpha+n$ cluster structure with
triangular symmetry, the authors proposed a band assignment (Fig.~\ref{fig:band} right panel). They
proposed the $K^\pi=1/2^-$, $5/2^+$ and $7/2^+$ bands which share the same intrinsic structure, and
the $K^\pi=1/2^+$ and $1/2^-$ bands with different structure. They also tentatively classified the
observed states into the rotational bands as shown in the middle panel of Fig.~\ref{fig:band}. In
addition to these four bands, they also pointed out the possible existence of a pair of the
$K^\pi=3/2^\pm$ bands at approximately $E_x=10$ MeV. Interestingly, the global structure of the four
bands, $K^\pi=1/2^-$, $5/2^+$, $7/2^+$ and $1/2^+$, qualitatively agrees with the REM results,
although there exist several differences; for example, the order of the bands is different and
several bands are missing. It is also noted that REM shows quantitatively better agreement with
the experiment.
Since the REM calculation does not assume any spatial symmetry, it is interesting to investigate
whether the triangular symmetry exists behind these rotational spectra. In general, the wave function
of REM is a superposition of many basis wave functions with different configurations, and hence, we
need some measure to evaluate its intrinsic structure. For this purpose, we introduce the overlap
between the REM wave function and the basis wave functions defined as,
\begin{align}
O_i = \sum_{KK'} \braket{\Psi^{J\pi}_M|P^{J\pi}_{MK}\Phi_i}B^{-1}_{KK'}
\braket{P^{J\pi}_{MK'}\Phi_i|\Psi^{J\pi}_M},\label{eq:ovlp}
\end{align}
where $B^{-1}$ is the inverse matrix of $B$ which is the overlap of projected basis wave functions,
\begin{align}
B_{KK'} = \braket{P^{J\pi}_{MK}\Phi_i|P^{J\pi}_{MK'}\Phi_i}.
\end{align}
Note that the REM wave function $\Psi_{M}^{J\pi}$ is a superposition of the $\Phi_i$
[Eq.~(\ref{eq:gcmwf2})]. Therefore, if the overlap $O_i$ is large, $\Psi^{J\pi}_M$ can be
approximated by a single basis wave function $\Phi_i$.
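In a toy model where the projected components $P_{MK}^{J\pi}\Phi_i$ are explicit vectors, the overlap measure reduces to $O_i = v B^{-1} v^\dagger$ with $v_K = \braket{\Psi|P_{MK}^{J\pi}\Phi_i}$; a sketch (all vectors and names are illustrative):

```python
import numpy as np

def overlap_measure(v, B):
    """O_i = sum_{KK'} v_K (B^{-1})_{KK'} v_{K'}^*, v_K = <Psi | P_K Phi_i>."""
    return float(np.real(v @ np.linalg.solve(B, v.conj())))

A = np.array([[1.0, 0.3],                  # columns play the role of P_K Phi_i
              [0.0, 1.0],
              [0.0, 0.2]])
psi = A @ np.array([0.5, 0.7])             # a state inside the projected space
psi = psi / np.linalg.norm(psi)            # normalize, as the REM states are
B = A.conj().T @ A                         # B_{KK'} = <P_K Phi_i | P_K' Phi_i>
v = psi.conj() @ A                         # v_K = <Psi | P_K Phi_i>
O = overlap_measure(v, B)                  # 1.0: Psi lies fully in the span
```

Since $A B^{-1} A^\dagger$ is the projector onto the span of the projected components, $O_i=1$ exactly when $\Psi$ lies in that span, and $O_i=0$ when it is orthogonal to it.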
\begin{table}[thb]
\caption{The calculated overlaps for each state, defined by Eq.~(\ref{eq:ovlp}). The columns
denoted by $O(1/2^-)$ and $O(1/2^+)$ show the overlap between REM wave function and the basis wave
function which is most dominant in the $1/2_1^-$ and $1/2_1^+$ states,
respectively.}\label{tab:ovlp}
\begin{ruledtabular}
\begin{tabular}{llllll}
\multicolumn{3}{c}{$K^\pi=1/2^-$}&\multicolumn{3}{c}{$K^\pi=1/2^+$}\\
$J^\pi$ & $O(1/2^-_1)$ & $O(1/2^+_1)$ &$J^\pi$ & $O(1/2^-_1)$ & $O(1/2^+_1)$ \\
\cline{1-3}\cline{4-6}
$1/2^-_1$ & 0.83 & 0.12 & $1/2^+_1$ & 0.14 & 0.58 \rule{0pt}{10pt}\\
$3/2^-_1$ & 0.83 & 0.16 & $3/2^+_1$ & 0.18 & 0.56 \\
$5/2^-_1$ & 0.73 & 0.06 & $5/2^+_2$ & 0.25 & 0.56 \\
$7/2^-_1$ & 0.76 & 0.13 & $7/2^+_4$ & 0.35 & 0.25 \\
& & & $9/2^+_2$ & 0.35 & 0.45 \\\hline
\multicolumn{3}{c}{$K^\pi=5/2^+$}&\multicolumn{3}{c}{$K^\pi=7/2^+$}\rule{0pt}{10pt}\\
$J^\pi$ & $O(1/2^-_1)$ & $O(1/2^+_1)$ &$J^\pi$ & $O(1/2^-_1)$ & $O(1/2^+_1)$ \\
\cline{1-3}\cline{4-6}
$5/2^+_1$ & 0.50 & 0.45 & $7/2^+_2$ & 0.74 & 0.15 \rule{0pt}{10pt}\\
$7/2^+_1$ & 0.54 & 0.42 & $9/2^+_3$ & 0.55 & 0.19 \\
$9/2^+_1$ & 0.58 & 0.43 & & & \\\hline
\multicolumn{3}{c}{$K^\pi=3/2^+$}\rule{0pt}{10pt}\\
$J^\pi$ & $O(1/2^-_1)$ & $O(1/2^+_1)$ \\\cline{1-3}
$3/2^+_2$ & 0.45 & 0.40 \rule{0pt}{10pt}\\
$5/2^+_3$ & 0.46 & 0.26 \\
$7/2^+_3$ & 0.35 & 0.28
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[tb!]
\includegraphics[width=0.5\hsize]{ovdens.eps}
\caption{Panel (a): the density distribution of the basis wave functions which have the maximum
overlap with the $1/2^-_1$ state. Panel (b): Same as panel (a) but for the $1/2^+_1$
state. Contours show the density of $3\alpha$ particles and red boxes show the position of the
wave packet centroid for the valence neutron. }
\label{fig:ovdens}
\end{figure}
The calculated overlaps are summarized in Tab.~\ref{tab:ovlp}. The ground state has the maximum
overlap, which is as large as 0.83, with the basis wave function shown in Fig.~\ref{fig:ovdens}
(a). Note that the density distribution clearly shows the triangular configuration of the $3\alpha$
particles with a valence neutron, where the side lengths of the triangle are 3.31, 3.30 and 3.02
fm. Furthermore, we found that all the member states of the ground band have large overlaps, no less
than 0.70 with the same basis wave function. Therefore, we consider that the ground band is
reasonably interpreted as the rotational band having a common intrinsic structure with a triangular
symmetry as asserted by Bijker et al.~\cite{Bijker2019}. They also argued that the $K^\pi=5/2^+$ and
$7/2^+$ bands have the same intrinsic structure and are classified as the ``ground band''. Indeed,
we found that these bands have sizable overlaps with the same basis wave function shown in
Fig.~\ref{fig:ovdens} (a). However, our results show a deviation from a rigid shape. The
magnitudes of the overlaps between these bands and the basis wave function shown in
Fig.~\ref{fig:ovdens} (a) are reduced to less than 0.60, except for the $7/2^+_2$ state. Furthermore,
these bands have sizable overlaps with other configurations. For example, the $K^\pi=5/2^+$ band
has a large overlap with the dominant basis wave function of the $1/2^+_1$ state, discussed
below. Thus, the $K^\pi=5/2^+$ and $7/2^+$ bands look similar to the $K^\pi=1/2^-$ band, but their
deviation from the rigid shape is not small.
In Ref.~\cite{Bijker2019}, the $K^\pi=1/2^+$ band was assigned as a rotational band which also has a
triangular arrangement of $3\alpha$ particles but has the valence neutron in a different
single-particle orbit. In the present calculation, we also found that the band-head state
(the $1/2^+_1$ state) has the maximum overlap with a different basis wave function, whose density
distribution is shown in Fig.~\ref{fig:ovdens} (b), but has small overlap with the dominant
configuration of the ground band [Fig.~\ref{fig:ovdens} (a)]. The position of the wave
packet of the valence neutron clearly differs from that in the dominant configuration of the ground
band, and the $\alpha$ particles deviate from an equilateral triangular arrangement, with side
lengths of 3.55, 3.51, and 2.67 fm. This confirms that the $K^\pi=1/2^+$ band has a different
intrinsic structure. However, we again note that the magnitude of the maximum overlap is not as
large as that of the ground band, and the member states of this band show an increasing admixture
of other contributions as the excitation energy and angular momentum increase. Finally, we found
the strongest admixture of various configurations in the $K^\pi=3/2^+$ band, which is a candidate
for the band proposed in Refs.~\cite{Milin2002,Bijker2019}. This may be due to the high
excitation energy of this band.
In short, the REM calculation confirmed that the ground band can be interpreted as a
rigid-body rotational band which manifests the triangular symmetry. It also shows that the ACM
appears to explain the general trend of the excited bands. However, we found that all the
excited bands have a non-negligible admixture of other configurations and deviate from the
rigid-body interpretation. One signature of this mixing is the non-negligible $E2$ transitions
between the bands with different intrinsic structures. Therefore, the experimental data
for these transitions will provide important insight into the cluster structure of $^{13}{\rm
C}$.
\section{Summary}
In summary, we have investigated the structure of the $3\alpha+n$ system by extending the REM
framework. As a benchmark calculation for the $3\alpha+n$ system, REM reproduced the ground- and
excited-state energies well, where we adopted the same Hamiltonian as in the previous study for
comparison. It was also demonstrated that REM accurately describes the wave functions, which
yields deeper binding energies.
We have also discussed the rotational band assignment and investigated whether the bands manifest
the triangular symmetry. The proposed band assignment qualitatively explains the observed data,
although the order of several bands disagrees and the $K^\pi=1/2^-$ band is missing in the present
result. From the analysis of the overlap with the basis wave functions, it was found that the
ground band can be regarded as a rigid-body rotational band which manifests the triangular
symmetry. We also have seen that the $D'_{3h}$ symmetry approximately explains the general nature
of the excited bands. However, all the excited bands have a non-negligible admixture of other
configurations without symmetry and deviate from the rigid-body interpretation, because of their
high excitation energies and angular momenta. The non-negligible $E2$ transitions between
different bands are a signature of this configuration mixing, and we expect that the experimental
data for these transitions will provide important information about the underlying
symmetry behind the observed spectrum.
\begin{acknowledgements}
The authors acknowledge fruitful discussions with Dr.~Funaki and Dr.~Kawabata. This work was
supported by JSPS KAKENHI Grant No.~19K03859, the collaborative research program 2020 at the
Hokkaido University Information Initiative Center, and the COREnet program at the RCNP,
Osaka University.
\end{acknowledgements}
\section{Introduction}
Let $m$ be a positive integer and let $\Omega$ be a finite set.
The {\it $m$-closure} $G\m$ of $G\le\sym(\Omega)$ is
the largest permutation group on $\Omega$ having
the same orbits as $G$ in
its induced action on the Cartesian product~$\Omega^m$.
Wielandt \cite[Theorems~5.8 and 5.12]{Wielandt1969} showed that
\qtnl{100120p}
G^{(1)}\ge G^{(2)}\ge\cdots\ge G^{(m)}=G^{(m+1)}=\cdots =G,
\eqtn
for some $m<|\Omega|$.
(Since the stabilizer in $G$ of all but one point is always trivial,
$G^{(n-1)}=G$ where $n = | \Omega |$; see Theorem~\ref{281219a}.)
In this sense, the $m$-closure can be considered as a natural
approximation of~$G$.
Here we study the closures of
solvable groups; for the nonsolvable case,
see~\cite{LPS1988,PS1992,XuGLP2011}.
The $1$-closure of $G$ is the direct product of symmetric
groups $\sym(\Delta)$, where~$\Delta$ runs over the orbits of~$G$.
Thus the $1$-closure of a solvable group is solvable if and only
if each of its orbits has cardinality at most~$4$.
The case of $2$-closure is more interesting.
The $2$-closure of every (solvable) $2$-transitive group
$G\le\sym(\Omega)$ is $\sym(\Omega)$; other examples of
solvable~$G$ and nonsolvable $G^{(2)}$ appear in~\cite{Skresanov2019}.
But, as shown by Wielandt~\cite{Wielandt1969},
each of the classes of finite $p$-groups and groups of odd order
is closed with respect to taking the $2$-closure.
Currently, no characterization of solvable groups having
solvable $2$-closure is known.
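For small degrees, the $2$-closure can be computed by brute force directly from the definition:
compute the orbits of $G$ on $\Omega^2$ and keep every permutation of $\Omega$ that preserves each
of them. The following Python sketch (helper names are ours, not from the text; permutations are
encoded as image tuples) illustrates the remark above that the $2$-closure of a solvable
$2$-transitive group is the full symmetric group, using $G=\AGL(1,5)$ of order $20$:

```python
from itertools import permutations, product

def generate(gens, n):
    """Close a set of permutations (as image tuples) under composition."""
    group, frontier = {tuple(range(n))}, list(gens)
    while frontier:
        g = frontier.pop()
        if g not in group:
            group.add(g)
            frontier.extend(tuple(s[g[i]] for i in range(n)) for s in gens)
    return group

def m_closure(group, n, m):
    """Largest subgroup of Sym(n) with the same orbits on m-tuples."""
    oid = {}                                    # tuple -> orbit representative
    for x in product(range(n), repeat=m):
        if x not in oid:
            orb = {tuple(g[i] for i in x) for g in group}
            oid.update(dict.fromkeys(orb, x))
    return {h for h in permutations(range(n))
            if all(oid[tuple(h[i] for i in x)] == oid[x]
                   for x in product(range(n), repeat=m))}

# AGL(1, 5): generated by x -> x + 1 and x -> 2x (mod 5); solvable of
# order 20 and 2-transitive, so its 2-closure should be all of Sym(5)
G = generate([(1, 2, 3, 4, 0), (0, 2, 4, 1, 3)], 5)
G2 = m_closure(G, 5, 2)
assert len(G) == 20 and len(G2) == 120
```

Since $G$ is $2$-transitive, its only $2$-orbits are the diagonal and its complement, and every
permutation of $\Omega$ preserves both.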
Seress~\cite{Sere1996} observed that if $G$ is a primitive solvable group, then
$G^{(5)}=G$; so the $5$-closure of a primitive solvable group is
solvable. Our main result is the following stronger statement.
\thrml{241219a}
The $3$-closure of a solvable permutation group is solvable.
\ethrm
Theorem~\ref{241219a} follows from
Theorems~\ref{220520d}, \ref{110519c}, and~\ref{301219b}.
The corollary below is an immediate consequence of
Theorem~\ref{241219a} and the chain of inclusions~\eqref{100120p}.
\crllrl{241219b}
For every integer $m\ge 3$, the $m$-closure of a solvable
permutation group is solvable.
\ecrllr
We briefly outline the structure of our proof.
In Section~\ref{090120c} we recall the
basic theory of the closure of permutation groups,
as developed by Wielandt~\cite{Wielandt1969}.
In Section~\ref{090120b} we
deduce that the $m$-closure of the
direct (respectively, imprimitive wreath) product of two permutation
groups is isomorphic to a subgroup of the direct
(respectively, imprimitive wreath) product of their $m$-closures,
and prove (with some natural constraints) that the same holds true for
the primitive wreath product.
Thus, in Theorem~\ref{220520d}, we reduce the proof of
Theorem~\ref{241219a} from an arbitrary solvable permutation group $G$
to a linearly primitive group: a point stabilizer $G_0$ of
such a group is a primitive linear group over a finite field.
A natural dichotomy arises in our treatment of linearly primitive groups.
If the point stabilizer $G_0$ has a faithful regular orbit,
then Wielandt's theory shows that the $3$-closure of $G$ is solvable (Corollary~\ref{060519b}).
Otherwise, we use results from~\cite{Yang2010,Yang2011,Yang2016} to obtain
in Theorem~\ref{110519c}
an explicit list of pairs $(d,p)$ for which
$G\le\AGL(d,p)$.
In Section~\ref{090120h} we complete the proof, relying on computer
calculations.
Namely, following Short's approach~\cite{Short1992},
we find for each pair $(d,p)$ a set $\cH$ of linearly primitive subgroups of $\GL(d,p)$ containing
a conjugate of each maximal solvable primitive subgroup of $\GL(d,p);$ in particular, $G_0$ is a subgroup
of some $H\in\cH$. If $G$ is $2$-transitive, then $G^{(3)}$ is solvable (Lemma~\ref{220520b}). In the remaining cases, $G^{(3)}=G$ (and so solvable) because $H^{(2)}=H$ for every $H\in\cH$. Verification of the latter is based on sufficient conditions given in Corollary~\ref{060519b} and Lemma~\ref{220520c}.
\section{Wielandt's theory}\label{090120c}
Let $G\le\sym(\Omega)$ and let $m$ be a positive integer. The $m$-orbits of
$G$ are the orbits of componentwise action of~$G$ on the Cartesian
product~$\Omega^m$ of~$\Omega$; the set of all such orbits is denoted
by $\orb_m(G)$.
\xmpll{100620a}
We describe the $m$-orbits of $G=\sym(\Omega)$.
Given an $m$-tuple $\alpha=(\alpha_1,\ldots,\alpha_m)$ of~$\Omega^m$,
let $\pi(\alpha)$ be the partition of
$I=\{1,\ldots,m\}$ such that the elements $i$ and $j$ belong to the
same class of~$\pi(\alpha)$ if and only if $\alpha_i=\alpha_j$.
If $s\in\orb_m(G)$, then
$\pi(s) :=\pi(\alpha)$ does not depend on the choice of $\alpha\in s$.
If $m\le|\Omega|$, then the mapping $s\mapsto \pi(s)$ establishes a
one-to-one correspondence between the $m$-orbits of $G$ and the
partitions of~$I;$ in particular, $|\orb_m(G)|\ge m+2$ for all $m\ge 3$.
\exmpl
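The correspondence $s\mapsto\pi(s)$ is easy to verify by enumeration for small parameters. The
following Python sketch (helper names are ours) computes the $3$-orbits of $\sym(\Omega)$ for
$|\Omega|=4$, checks that each orbit carries a single partition pattern, and confirms that there
are exactly $B_3=5$ orbits (the Bell number), in line with $|\orb_m(G)|\ge m+2$ for $m=3$:

```python
from itertools import permutations, product

n, m = 4, 3                                   # |Omega| = 4 >= m = 3
sym = list(permutations(range(n)))            # Sym(Omega), as image tuples

def pattern(x):
    # encodes pi(x): positions i and j lie in the same class iff x[i] == x[j]
    return tuple(x.index(v) for v in x)

orbits = {frozenset(tuple(g[i] for i in x) for g in sym)
          for x in product(range(n), repeat=m)}

pats = set()
for orb in orbits:
    classes = {pattern(x) for x in orb}
    assert len(classes) == 1                  # pi(s) is well defined on each orbit
    pats |= classes
assert len(pats) == len(orbits) == 5          # Bell number B_3 = 5
assert len(orbits) >= m + 2
```

Distinct orbits carry distinct patterns here because $m\le|\Omega|$, exactly as in the example.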
A permutation group $H$ on $\Omega$ is {\it $m$-equivalent} to~$G$ if
$$
\orb_m(H)=\orb_m(G).
$$
Obviously, $m$-equivalence is an
equivalence relation on the set of permutation groups on~$\Omega$.
If $m\ge 2$, then two $m$-equivalent groups share some properties,
such as primitivity, or $2$-transitivity;
see \cite[Theorems~4.8 and~4.10]{Wielandt1969}.
The following criterion for $m$-equivalence can
easily be deduced from \cite[Theorem~4.7]{Wielandt1969}.
\lmml{07052020a}
Let $G$ and $H$ be permutation groups on $\Omega$ and assume that $G\le H$.
Then $H$ is $m$-equivalent
to $G$ if and only if, for every $\alpha\in\Omega^m$ and every $h\in H$,
there exists $g\in G$ such that $\alpha^h=\alpha^g$.
\elmm
Wielandt \cite[Theorem~4.3 and Lem\-ma~4.12]{Wielandt1969} established
the following.
\thrml{100120a}
Let $m\geq2$ be an integer and let $G$ and $H$ be $m$-equivalent permutation groups
on~$\Omega$. The following hold:
\nmrt
\tm{i} $G$ and $H$ are $(m-1)$-equivalent;
\tm{ii} $G_\alpha$ and $H_\alpha$ are $(m-1)$-equivalent for all $\alpha\in\Omega$.
\enmrt
\ethrm
The definition of $m$-equivalence implies that the
{\em $m$-closure} of $G$ is the largest group in
the class of $m$-equivalent groups containing~$G$. In particular,
$G$ and $H$ are $m$-equivalent if and only if $G\m=H\m$.
If $G\m=G$ then $G$ is {\it $m$-closed}.
Note that $m$-closure is a
closure
operator: namely,
$G\le G\m$, $G\m=(G\m)\m$, and $G\le H$ implies $G\m\le H\m$.
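These three properties can be checked numerically for small degrees (a sanity check, not a proof).
The sketch below, with our own helper names and permutations as image tuples, does so for $m=2$,
$G$ the cyclic group generated by a $4$-cycle, and $H=\sym(4)$:

```python
from itertools import permutations, product

def generate(gens, n):
    """Close a set of permutations (as image tuples) under composition."""
    group, frontier = {tuple(range(n))}, list(gens)
    while frontier:
        g = frontier.pop()
        if g not in group:
            group.add(g)
            frontier.extend(tuple(s[g[i]] for i in range(n)) for s in gens)
    return group

def m_closure(group, n, m):
    """Largest subgroup of Sym(n) with the same orbits on m-tuples."""
    oid = {}
    for x in product(range(n), repeat=m):
        if x not in oid:
            orb = {tuple(g[i] for i in x) for g in group}
            oid.update(dict.fromkeys(orb, x))
    return {h for h in permutations(range(n))
            if all(oid[tuple(h[i] for i in x)] == oid[x]
                   for x in product(range(n), repeat=m))}

n, m = 4, 2
G = generate([(1, 2, 3, 0)], n)               # cyclic group of order 4
H = set(permutations(range(n)))               # Sym(4)
cG = m_closure(G, n, m)

assert G <= cG                                # G <= G^(m)
assert m_closure(cG, n, m) == cG              # (G^(m))^(m) = G^(m)
assert cG <= m_closure(H, n, m)               # G <= H implies G^(m) <= H^(m)
```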
\thrml{281219a}
Let $m\ge 2$ be an integer.
If a point stabilizer of a permutation group $G$ is $(m-1)$-closed,
then $G$ is $m$-closed.
\ethrm
\proof Let $G\le\sym(\Omega)$ and let $\alpha\in\Omega$.
Since $G$ and $H:=G\m$ are $m$-equivalent, $G_\alpha$
and $H_\alpha$ are $(m-1)$-equivalent (Theorem~\ref{100120a}(ii)).
Since $G_\alpha$ is $(m-1)$-closed,
$$
H_\alpha\le (H_\alpha)^{(m-1)}=(G_\alpha)^{(m-1)}=G_\alpha.
$$
But
$G_\alpha\le H_\alpha$, so $G_\alpha=H_\alpha$. Furthermore,
Theorem~\ref{100120a}(i) implies that $G$ and $H$ are $1$-equivalent.
Therefore $\alpha^G=\alpha^H$.
Hence
$$
|G|=|\alpha^G|\cdot|G_\alpha|=|\alpha^H|\cdot|H_\alpha|=|H|.
$$
Thus $G=G\m$ because $G\le H$.
\eprf\medskip
A permutation group is {\it partly regular} if it has a faithful regular orbit. Clearly, every subgroup of a partly regular group is partly regular.
\crllrl{060519b}
Let $G\le\sym(\Omega)$ and let $m\ge 2$ be an integer.
If an $(m-1)$-point stabilizer of $G$ is partly regular, then $G$
is $(m+1)$-closed.
\ecrllr
\proof If $G$ has a partly regular $(m-1)$-point stabilizer, then $G$ has
an $m$-point stabilizer which is trivial and so $m$-closed.
The claim follows from Theorem~\ref{281219a}.
\eprf
\section{Closures of permutation groups}\label{090120b}
We now study the $m$-closure operator under
standard operations in permutation group theory;
compare with similar results for $m=2$ in~\cite{EP01}.
\thrml{241219c}
Let $K\le\sym(\Gamma)$, let $L\le\sym(\Delta)$, and let $K\times L$ act
on the disjoint union $\Gamma\cup\Delta$. For every integer $m\ge 1$,
$$
(K\times L)\m\le K\m \times L\m.
$$
\ethrm
\proof
Observe that $K\times L$ and $H:=(K\times L)\m$ are
$1$-equivalent. Hence the sets $\Gamma$ and $\Delta$ are
invariant under~$H$. It follows that
$K=(K\times L)^{\Gamma}$ is $m$-equivalent to~$H^{\Gamma}$,
the permutation group induced by the action of $H$ on $\Gamma$,
and
$L=(K\times L)^{\Delta}$ is $m$-equivalent to $H^{\Delta}$.
In particular, $H^\Gamma\le K\m$ and $H^\Delta\le L\m$. Thus
$$
(K\times L)\m=H\le H^{\Gamma}\times H^{\Delta}\le K\m \times L\m,
$$
as required.
\eprf\medskip
An analogue of Theorem~\ref{241219c} exists for the direct
product of permutation groups acting on the Cartesian product of
their underlying sets, but it is not needed here.
The following is a consequence of~\cite[Lemma~2.5]{KaluK1976}.
\thrml{241219d}
Let $K\wr L$ be the imprimitive wreath product of permutation
groups $K$ and $L$. For every integer $m\ge 2$,
$$
(K\wr L)\m=K\m \wr L\m.
$$
\ethrm
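For $m=2$ the equality can be confirmed by brute force in a tiny case. In the sketch below (helper
names are ours, with permutations as image tuples) we take $K=L=\sym(2)$, so that $K\wr L$ is the
group of order $8$ preserving the blocks $\{0,1\}$ and $\{2,3\}$ of $\Omega=\{0,1,2,3\}$; since
$\sym(2)$ is trivially $2$-closed, the theorem predicts that $K\wr L$ is $2$-closed as well:

```python
from itertools import permutations, product

def generate(gens, n):
    """Close a set of permutations (as image tuples) under composition."""
    group, frontier = {tuple(range(n))}, list(gens)
    while frontier:
        g = frontier.pop()
        if g not in group:
            group.add(g)
            frontier.extend(tuple(s[g[i]] for i in range(n)) for s in gens)
    return group

def m_closure(group, n, m):
    """Largest subgroup of Sym(n) with the same orbits on m-tuples."""
    oid = {}
    for x in product(range(n), repeat=m):
        if x not in oid:
            orb = {tuple(g[i] for i in x) for g in group}
            oid.update(dict.fromkeys(orb, x))
    return {h for h in permutations(range(n))
            if all(oid[tuple(h[i] for i in x)] == oid[x]
                   for x in product(range(n), repeat=m))}

# Sym(2) wr Sym(2) on the blocks {0, 1} and {2, 3}: the swap inside each
# block together with the block exchange (0 2)(1 3)
W = generate([(1, 0, 2, 3), (0, 1, 3, 2), (2, 3, 0, 1)], 4)
assert len(W) == 8
assert m_closure(W, 4, 2) == W                # W^(2) = Sym(2)^(2) wr Sym(2)^(2) = W
```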
The case of the wreath product in {\it product action} is more subtle.
Recall, for example from \cite[Lemma~2.7A]{DM}, that
$K\uparrow L$,
the wreath product in product action of permutation groups $K$ and $L$,
is primitive
if and only if $K$ is primitive and nonregular, and $L$ is transitive and
nontrivial.
For the remainder of the paper,
we assume that
$K\uparrow L$ {\it is primitive} and so label this construction as
the {\it primitive wreath product}.
Even with this assumption, $(K\uparrow L)\m$ is not
always a subgroup of
$K\m \uparrow L\m$: consider for example $m=2$, $K=\sym(4)$, and $L=\alt(3)$.
But we obtain the following.
\thrml{241219d1}
Let $K\uparrow L$ be the primitive wreath product of permutation
groups~$K$ and $L$ of degrees $r$ and $d$, respectively.
Assume that $m\ge 3$ is an integer such that $m\le r$,
and also $m\le d$ unless $d=2$. Then
$$
(K\uparrow L)\m\le K\m \uparrow L\m.
$$
\ethrm
\proof Let $K\le\sym(\Gamma)$ and $L\le\sym(\Delta)$, where
$\Gamma$ and $\Delta$ are sets of cardinality $r$ and~$d$, respectively.
Without loss of generality, we assume that $\Delta=\{1,\ldots,d\}$.
Thus $G:=K\uparrow L$ acts on the Cartesian product
$$
\Omega=\underbrace{\Gamma\times \cdots\times \Gamma}_{d \ \,\text{copies}}.
$$
In what follows, an $m$-tuple $x\in\Omega^m$ is considered as an $m\times d$ matrix $(x_{ij})$ with $x_{ij}\in \Gamma$; the $j$th column of $x$ is denoted by~$x_{*j}$. Note that $K^d$ acts on $x$ by permuting elements inside columns, whereas $L$ permutes the columns.
As observed in \cite[Proof of Proposition~3.1]{EP01},
$G$ is contained in the $2$-closed group
$\sym(\Gamma)\uparrow\sym(\Delta)$. It follows that $H:=G\m$,
being $2$-equivalent to~$G$, is also contained in
$\sym(\Gamma)\uparrow\sym(\Delta)$. Therefore every
permutation of~$H$ can be written in the form
\qtnl{070120a}
h=(h_1,\ldots,h_d;\ov h),\qquad h_1,\ldots,h_d\in\sym(\Gamma),\ \ov h\in\sym(\Delta).
\eqtn
Let $\ov H = \{ \ov h \;:\; h\in H\}$ be the permutation group
induced by the action of~$H$ on~$\Delta$.
\noindent
As a critical step in our proof, we establish the following.
\medskip
\noindent
{\bf Claim.} {\it $H\le \sym(\Gamma)\uparrow L\m$.}\medskip
\proof Without loss of generality, we may assume that $d\ge 3$.
It suffices to show that $\ov H\le L\m$.
Equivalently (cf.\ Lemma~\ref{07052020a}), we show that, for every
$\alpha \in\Delta^m$ and every $\ov h\in \ov H$,
there exists $\ov g\in L$ such that
\qtnl{200220a}
\alpha^{\ov h}=\alpha^{\ov g}.
\eqtn
Since by assumption $K \uparrow L$ is primitive,
$L$ is transitive
and a subgroup of $\ov{H}$. Consequently, $\ov H=L\ov{H}_d$, where
$\ov{H}_d$ is the stabilizer in~$\ov H$ of the point $d\in \Delta$.
Thus we may assume that the element $\ov h$ in
\eqref{200220a} belongs to $\ov{H}_d$.
Let $\alpha=(\alpha_1,\ldots,\alpha_m)\in\Delta^m$,
where $\{\alpha_1,\ldots,\alpha_m\}=\{j_1,\ldots,j_t\}$
and $1\le j_1<\cdots<j_t\le m$.
Let $\pi(\alpha)$ be the partition of
$\{1,\ldots,m\}$ into the classes $\{\ell:\ \alpha_\ell=j_i\}$ where
$i=1,\ldots,t$ and $t=|\pi(\alpha)|$.
Note that the number of $m$-orbits of $\sym(\Gamma)$ is at least $m+2$, because $|\Gamma|=r\ge m \ge 3$ (see Example \ref{100620a}). Thus there are pairwise distinct $m$-orbits $s_0,s_1,\ldots,s_t$ such that $\pi(s_i)\ne\pi(\alpha)$ for all~$i$.
Denote by $\myx(\alpha)$ the set of all tuples $x\in\Omega^m$ such that
\qtnl{120621a}
\pi(x_{*j_1})=\pi(s_1),\quad \pi(x_{*j_2})=\pi(s_2),
\quad\ldots,\quad
\pi(x_{*j_t})=\pi(s_t),
\eqtn
\qtnl{120621b}
\pi(x_{*d})=\pi(\alpha),\quad\pi(x_{*i})=\pi(s_0),\quad i\not\in\{j_1,\ldots,j_t,d\}.
\eqtn
Of course, the choice of $s_0,\ldots,s_t$ is arbitrary.
However, if $\beta$ is in the $m$-orbit of $\sym(\Delta)$ containing $\alpha$, then $\pi(\beta)=\pi(\alpha)$, so we can use the same $s_0,\ldots,s_t$ to define $\myx(\beta)$.
Assume that $\myx(\alpha)=\myx(\beta)$ for some $\beta\in\Delta^m$.
Condition~\eqref{120621b} ensures that
$$
\pi(\alpha)=\pi(\beta)\qaq \{\alpha_1,\ldots,\alpha_{m-1}\}=
\{\beta_1,\ldots,\beta_{m-1}\}.
$$
In particular, $\alpha_i=\alpha_j$ if and only
if $\beta_i=\beta_j$ for all $1\le i,j\le m$.
Condition~\eqref{120621a} yields $\alpha=\beta$.
Thus
\qtnl{270520d}
\myx(\alpha)\cap \myx(\beta)\neq\varnothing\quad
\text{if and only if}\quad \alpha=\beta.
\eqtn
Let $h\in H$ be as in~\eqref{070120a} and let $\ov{h}\in\ov{H}_d$.
In view of our definitions, given
indices $i,j\in\{1,\ldots,m\}$, the column $j$ of a
tuple $x\in \myx(\alpha)$ belongs to~$s_i$ if and only if the
column $j^{\ov h}$ of the tuple $x^h$ belongs to~$s_i$. Therefore
\qtnl{190220a}
\myx(\alpha)^h=\myx(\alpha^{\ov h}).
\eqtn
In particular, the preimage of $\ov{H}_d$ in $H$ acts on the
set $\myx(\alpha)$ for each $\alpha\in\Delta^m$. Since $G\le H$,
\eqref{190220a} holds for every $g\in G$
with $\ov{g}\in L_d$, where $L_d$ is the stabilizer in $L$ of the
point $d\in\Delta$.
Since $G$ and $H$ are $m$-equivalent and $G\le H$,
Lemma~\ref{07052020a} implies that for
every $x\in \myx(\alpha)$ and every $h\in H$ there
exists $g\in G$ such that $x^h=x^g$.
But $\ov h\in\ov{H}_d$; so, in accord
with the first equality in~\eqref{120621b}, $\ov{g}\in L_{d}$. In view
of \eqref{190220a}, this implies that
$$
x^g=x^h\in \myx(\alpha^{\ov h})\cap \myx(\alpha^{\ov g}).
$$
Now \eqref{270520d} yields $\alpha^{\ov{h}}=\alpha^{\ov{g}}$,
which proves \eqref{200220a}. \eprf\medskip
We return to the proof of the theorem.
It remains to verify that for every $h\in H$ and every $\alpha\in\Gamma^m$,
\qtnl{270520e}
\alpha^{h_j}\in \alpha^{K}\mbox{ for }j=1,\ldots,d,
\eqtn
where $h_j$ is defined by \eqref{070120a}.
Without loss of generality, we prove~\eqref{270520e} for $j=1$ only.
Take distinct $m$-orbits $s_0$ and $s_1$ of $\sym(\Gamma)$ such
that $\alpha\in s_1$. Choose a tuple $x\in\Omega^m$ satisfying
the conditions
\qtnl{060620i}
x_{*1}=\alpha\qaq x_{*2},\ldots,x_{*d}\in s_0.
\eqtn
Our claim implies
that $L\le \ov H\le L\m$, so
$L$ and $\ov H$ are $m$-equivalent and have the
same orbits. Hence there exists $\ov g\in L$ such that
$\ov h\ov g\in\sym(\Delta)$ fixes $1\in\Delta$. So
$$
g=(1,\ldots,1;\ov g) \in G
$$
and, by \eqref{060620i}, the only column of the tuple~$x^{hg}$ belonging to $s_1$ is the
first one. Thus
$$
(x^{hg})_{*1}=\alpha^{h_1}.
$$
Since $x^H\in\orb_m(H)=\orb_m(G)$, there exists $f\in G$ such
that $x^{hg}=x^f$; in particular, $\ov f\in\sym(\Delta)$ fixes $1\in\Delta$.
Consequently,
$$
\alpha^{h_1}=(x^{hg})_{*1}=(x^f)_{*1}=\alpha^{f_1}\in \alpha^{K},
$$
as required.\eprf\medskip
If $m=3$, then the hypothesis of Theorem~\ref{241219d1} does not impose any
restrictions on the degrees $r$ and $d$ of the groups $K$ and $L$ because
the primitivity of
$K\uparrow L$ implies that
$r\ge3$ and $d\ge2$.
Therefore the following holds.
\crllrl{211220a}
Let $K\uparrow L$ be the primitive wreath product of permutation
groups~$K$ and $L$. Then
$$
(K\uparrow L)\3\le K\3 \uparrow L\3.
$$
\ecrllr
\section{Reducing primitive solvable groups to
linearly primitive groups}\label{280520b}
We summarize the well-known structure of a
primitive solvable group; for a proof, see
for example \cite[Chap.\ 1, Theorem~7]{Su76}.
\thrml{140311c}
Let $G\le\sym(\Omega)$ be a primitive solvable permutation group.
Now $\Omega$ has cardinality $p^d$
for a prime $p$ and integer $d\ge 1$ and
can be identified with a $d$-dimensional vector space $V$ over $\GF(p)$.
Moreover, $G\le\AGL(d,p)$,
and the stabilizer $H$ in $G$ of the zero vector is an irreducible
subgroup of $\GL(d,p)$.
\ethrm
We establish Theorem \ref{241219a} for two classes of groups.
\lmml{220520a}
If $G\leq\AGaL(1,p^d)$, then $G^{(3)}$ is solvable.
\elmm
\proof
A point stabilizer $\GaL(1,p^d)$
of $\AGaL(1,p^d)$
is $2$-closed by \cite[Proposition 3.1.1]{XuGLP2011}.
Theorem~\ref{281219a} implies that $\AGaL(1,p^d)$ is $3$-closed.
Thus
$$
G^{(3)}\le \AGaL(1,p^d)^{(3)}=\AGaL(1,p^d),
$$
and so $G^{(3)}$ is solvable.
\eprf
\lmml{220520b}
If $G$ is a $2$-transitive solvable group, then $G^{(3)}$ is solvable.
\elmm
\proof
By a theorem of Huppert \cite[Theorem~6.9]{ManzWolf1993},
$G\le\AGaL(1,p^d)$, or
\qtnl{311219a}
p^d\in\{3^2,5^2,7^2, 11^2,23^2,3^4\}.
\eqtn
The first case is settled by Lemma~\ref{220520a}.
In the second case, we consider a point stabilizer $G_\alpha$ and use the {\tt TwoClosure} command
in {\sf GAP} \cite{gap}
and the IRREDSOL package \cite{Hoef2017}
to establish that $(G_\alpha)^{(2)}$ is solvable for
those $p^d$ satisfying~\eqref{311219a}.
Since $(G^{(3)})_\alpha$ and $G_\alpha$ are $2$-equivalent,
$$
(G^{(3)})_\alpha\le (G_\alpha)^{(2)},
$$
and so $(G^{(3)})_\alpha$ is solvable.
Hence $G^{(3)}$, an
extension of an elementary abelian group (of order $p^d$) by
$(G^{(3)})_\alpha$, is solvable.\eprf\medskip
The following lemma gives a sufficient condition for
a primitive solvable group to be $3$-closed.
\lmml{220520c}
In the notation of Theorem~$\ref{140311c}$,
if there exists a nonzero $\alpha \in V$ such that the restriction of
$H$ to $\alpha^H$ is $2$-closed, then $G$ is $3$-closed.
\elmm
\proof
Since $H$ is an irreducible linear group,
the orbit $\Delta=\alpha^H$ contains a basis of $V$.
It follows that $H\cong H^\Delta$,
the restriction of $H$ to $\Delta$.
On the other hand, $G^{(3)}$ and $G$ are $2$-equivalent
by Theorem~\ref{100120a}(i). Consequently, $G^{(3)}$ is also a
primitive subgroup of $\AGL(d,p)$ \cite[Theorem~1.4]{XuGLP2011}.
Hence the stabilizer $L$ in $G^{(3)}$ of the zero vector acts
faithfully on~$\Delta$. Thus
\qtnl{070620a}
H\cong H^\Delta\qaq L\cong L^\Delta.
\eqtn
Since $H$ and $L$ are point stabilizers of $3$-equivalent groups, they
are $2$-equivalent by Theorem~\ref{100120a}(ii). Therefore
\qtnl{070620b}
L\le H^{(2)}.
\eqtn
Following \cite[Lemma~2.1(iii)]{PonomarenkoV2020}, we verify that
\qtnl{070620c}
(H^{(2)})^\Delta\le (H^\Delta)^{(2)}.
\eqtn
By hypothesis $(H^\Delta)^{(2)}=H^\Delta$.
This,
together with \eqref{070620a}, \eqref{070620b}, and \eqref{070620c},
implies that
$$
L\cong L^\Delta\le (H^{(2)})^\Delta\le (H^\Delta)^{(2)}=H^\Delta\cong H.
$$
Since $H\le L$, this yields $H=L$. Thus $G=G^{(3)}$.
\eprf
\medskip
If the point stabilizer $H$ in Theorem~\ref{140311c} is primitive
as a linear group, then $G$
is {\it linearly primitive}, otherwise it is {\it linearly imprimitive}.
As is well-known, the latter case reduces to the primitive wreath product;
see for example~\cite[Proposition~4.1]{EP01}.
\lmml{160311a}
Every linearly imprimitive solvable permutation group $G$ is
isomorphic to a subgroup of a primitive wreath product of two
solvable permutation groups of degrees smaller than the degree of~$G$.
\elmm
We now reduce the proof of Theorem \ref{241219a} to linearly primitive groups.
\thrml{220520d}
A counterexample of minimal degree to the statement
of Theorem~$\ref{241219a}$ is linearly primitive.
\ethrm
\proof
We consider separately the cases where the permutation group $G$ is
intransitive, imprimitive, or linearly imprimitive.\medskip
{\bf Case 1:} $G$ is intransitive. Now $G$ is a subdirect product
of (solvable) constituents, say $K$ and $L$, and their degrees are
less than that of $G$. Therefore their $3$-closures are solvable.
By Theorem~\ref{241219c} so is $(K\times L)^{(3)}$. Thus
$$
G^{(3)}\le (K\times L)^{(3)}
$$
is also solvable.\medskip
{\bf Case 2:} $G$ is imprimitive. Now $G$ can be identified with a
subgroup of the imprimitive wreath product $K\wr L$, where $K$ and $L$
are solvable permutation groups, and their degrees are less than
that of~$G$. Therefore their $3$-closures are solvable. By
Theorem~\ref{241219d} so is $(K\wr L)^{(3)}$. Thus
$$
G^{(3)}\le (K\wr L)^{(3)}
$$
is also solvable.\medskip
{\bf Case 3:} $G$ is primitive, but linearly imprimitive.
Now, by Lemma~\ref{160311a}, it can be identified with a subgroup of
the primitive wreath product $K\uparrow L$, where $K$ and~$L$ are
solvable permutation groups, and their degrees are less than
that of~$G$. Therefore their $3$-closures are solvable.
By Corollary~\ref{211220a} so is $(K\uparrow L)^{(3)}$.
Thus
$$
G^{(3)}\le (K\uparrow L)^{(3)}
$$
is also solvable.
\eprf
\section{Linearly primitive solvable groups: background theory}\label{280520c}
Let $G$ be a linearly primitive solvable permutation group with
point stabilizer $G_0$. Often essential information about $G_0$
can be obtained from a maximal solvable primitive
linear group $H$ containing $G_0$.
Basic information on the structure of $H$
is collected in the following theorem; for its proof, see for example
\cite[Lemma~2.2]{Sere1996}.
\thrml{060519a}
Let $H\le\GL(d,p)$ be a maximal solvable primitive group.
It has a series $1<U\le F\le A\le H$ satisfying the following:
\begin{enumerate}
\item[(i)]
$U$ is the unique maximal abelian normal subgroup of $H$, the linear span of $U$ in {\em Mat}$(d,p)$ is $\GF(p^a)$ where $a$ divides~$d$, and $U$ is cyclic of order $p^a-1;$
\item[(ii)]
$F=\Fit(C_H(U))$ is the Fitting subgroup of the centralizer $C_H(U)$, and
$|F/U|=e^2$ where $d = ae$ and
each prime divisor of $e$ divides $p^a-1;$
\item[(iii)]
$A=C_H(U)$ and $A/F$ is isomorphic to a completely reducible subgroup
of the direct product $\prod_{i=1}^m\Sp(2n_i,p_i)$ where the $p_i$
and $n_i$ are defined by the
prime power decomposition $e=\prod_{i=1}^mp_i^{n_i};$
\item [(iv)]
$H/A$ is isomorphic to a subgroup of $\aut(\GF(p^a))$ and so $|H/A|$ divides $a$.
\end{enumerate}
\ethrm
Let $G$ be a linearly primitive solvable permutation group.
We fix an embedding of
a point stabilizer $G_0$ of $G$ into a specific
maximal solvable primitive linear group $H$.
We call the integers $p$, $d$, $a$, $e$
defined in Theorem~\ref{060519a} for $H$
the {\it parameters} of~$G$.
Although the parameters of $G$ depend on the choice of $H$,
the monotonicity of the $m$-closure operator guarantees that our
results are independent of it.
\thrml{110519c}
The $3$-closure of a linearly primitive solvable permutation group is solvable,
except possibly for those groups whose parameters are
listed in columns~$2$--$5$ of Table~$\ref{t2}$.
\ethrm
\proof Let $G$ be a linearly primitive solvable permutation group with
parameters $p$, $d$, $a$, $e$, and let $H\le\GL(d,p)$ be
a maximal solvable primitive group containing a point
stabilizer $G_0$ of~$G$.
If $e=1$, then $G\le\AGL(1,p^d)$, and
the result follows
by Lemma~\ref{220520a}. So we may assume that $e>1$.
If $H$ is partly regular, then so is $G_0\le H$;
thus $G$ is $3$-closed by Corollary~\ref{060519b} (take $m=2$).
If $H$ is not partly regular, then its parameters
are listed in \cite[Corollary~3.2]{Yang2016};
columns~$2$--$5$ of Table~$\ref{t2}$ are
taken from~\cite[Table~2]{Yang2016}.~\eprf\medskip
The data in \cite[Table~2]{Yang2016}
was obtained using
\cite[Theorem~4.1]{Yang2011} which states the following: if $H$
is not partly regular, then
\qtnl{301219a}
e=2,\ 3,\ 4,\ 8,\ 9,\ 16.
\eqtn
Hence $e = r^k$ where $2\leq r \leq 3$ and $1\leq k \leq 4$.
Let $b$ be the least positive integer with $p^b\equiv1\pmod{r^c}$ where $c=2$ for $r=2$ and $c=1$ otherwise. We denote the general orthogonal group of degree $2k$ over the field of order $q$ by $O^\varepsilon(2k,q)$ where $\varepsilon\in\{+,-\}$ depends on the Witt index of the corresponding quadratic form. The {\em $r$-radical} of a group is its largest normal $r$-subgroup.
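Here $b$ is simply the multiplicative order of $p$ modulo $r^c$, which can be computed directly.
A minimal Python sketch (the function name is ours; it assumes $p$ is coprime to $r$):

```python
def least_b(p, r):
    """Least b >= 1 with p**b = 1 (mod r**c), where c = 2 if r = 2 else 1."""
    c = 2 if r == 2 else 1
    modulus = r ** c
    b, x = 1, p % modulus
    while x != 1:
        x = (x * p) % modulus
        b += 1
    return b

assert least_b(3, 2) == 2   # 3^2 = 9 = 1 (mod 4)
assert least_b(5, 2) == 1   # 5 = 1 (mod 4)
assert least_b(2, 3) == 2   # 2^2 = 4 = 1 (mod 3)
assert least_b(7, 3) == 1   # 7 = 1 (mod 3)
```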
\lmml{211220b} We retain the notation of Theorem~$\ref{060519a}$ where $e=r^k$.
\begin{enumerate}
\item[(i)]
$F$ is the central product of $U$ and an extraspecial group $E$
of order $r^{2k+1};$ if $b \,|\, a$, then all such subgroups
$F$ are conjugate in $\GL(d,p)$,
else $r=b=2$, $a$ is odd, and there are two conjugacy classes of such subgroups;
\item[(ii)]
$A/F$ is a maximal solvable subgroup of $N/F$, where $N=N_L(F)$ and $L=C_{\GL(d,p)}(U)\cong\GL(e,p^a);$ if $b \,|\, a$, then $N/F\cong\Sp(2k,r)$,
else $N/F$ is isomorphic to one of $O^\varepsilon(2k,2)$ depending on the conjugacy class of $F;$
\item[(iii)]
the $r$-radical of $A/F$ has order at most $2$ and is trivial if $b \,|\, a$.
\end{enumerate}
\elmm
\proof Item (i) is well known; see for example
\cite[Theorem~2.4.7]{Short1992}
and subsequent remarks.
By \cite[Theorems~2.5.31, 2.5.34 and~2.4.12]{Short1992},
$A/F$ is isomorphic to a maximal solvable subgroup $M$ of $S=N/F$,
where $S=\Sp(2k,r)$ if $b\,|\,a$ and $S$ is one of
$O^\varepsilon(2k,2)$ otherwise; this proves (ii).
Moreover, $M$ fixes no nonzero isotropic subspace of the
natural $\GF(r)$-module of~$S$.
Therefore $M$ is not contained in any parabolic subgroup of $S$,
so the $r$-radical of $M$ is either trivial or has order $2$,
and the latter is possible only if $M$ is an orthogonal group.~\eprf\medskip
The following lemma can be established using {\sc
Magma} \cite{magma}. Note that $O^+(2,2)$,
$\Sp(2,2)\cong O^-(2,2)$, $\Sp(2,3)$, and $O^+(4,3)$ are solvable.
\lmml{spo}
Let $M$ be a maximal solvable subgroup of
$S \leq \GL(d, r)$ and let the $r$-radical of $M$ satisfy the
order conditions of Lemma~{\em\ref{211220b}(iii)}.
\nmrt
\item[(i)] If $S=\Sp(2,2),\,\Sp(2,3),\,O^\varepsilon(2,2),\,O^+(4,3)$,
then $M=S$.
\item[(ii)] If $S=O^-(4,2)$, then $M$ is conjugate to
a subgroup isomorphic to either $5:4$ of order $20$,
or $S_3\times S_2$ of order $12$.
\item[(iii)] If $S=\Sp(4,2)$, then $M$ is conjugate to
a subgroup isomorphic to either
$O^+(4,2)\cong S_3\wr S_2$ of order $72$, or the normalizer
of a Sylow $5$-subgroup
of $\Sp(4,2)$ which is isomorphic to $5:4$ and has order $20$.
\item[(iv)] If $S=\Sp(4,3)$, then $M$ is conjugate to
one of the following:
\begin{itemize}
\item the normalizer of a Sylow $5$-subgroup of $\Sp(4,3)$ which
is isomorphic to $D_{20}.2$ and has order $40$;
\item a subgroup isomorphic to $2^{1+4}:S_3$ of order $192$;
\item a subgroup isomorphic to $2^{1+4}:D_{10}$ of order $320$;
\item a subgroup isomorphic to $\Sp(2,3)\wr S_2$ of order $1152$.
\end{itemize}
\item[(v)] If $S=O^+(6,2)$, then $M$ is conjugate to one of the following:
\begin{itemize}
\item
the normalizer of a Sylow $3$-subgroup of $O^+(6,2)$
which is isomorphic to
$O^+(4,2)\times O^+(2,2)\cong (S_3\wr S_2)\times S_2$
and has order $144$;
\item
the normalizer of a Sylow $5$-subgroup of $O^+(6,2)$ which
is isomorphic to $(5:4)\times S_3$ and has order $120$;
\item
the normalizer of a Sylow $7$-subgroup of $O^+(6,2)$
which is isomorphic to $7:6$ and has order $42$.
\end{itemize}
\item[(vi)] If $S=O^-(6,2)$, then $M$ is conjugate to one of the following:
\begin{itemize}
\item a subgroup isomorphic to $3^{1+2}:(2.S_4)$ of order $1296$;
\item a subgroup isomorphic to $3^3:(S_4\times S_2)$ of order $1296$;
\item the normalizer of a Sylow $5$-subgroup of $O^-(6,2)$ which is
isomorphic to $(5:4)\times S_2$ and has order $40$.
\end{itemize}
\item[(vii)] If $S=\Sp(6,2)$, then $M$ is conjugate to
one of the following:
\begin{itemize}
\item a subgroup isomorphic to $3^{1+2}:(2.S_4)$ of order $1296$;
\item a subgroup isomorphic to $3^3:(S_4\times S_2)$ of order $1296$;
\item the normalizer of a Sylow $5$-subgroup of $\Sp(6,2)$ which
is isomorphic to $(5:4)\times S_3$ and has order $120$;
\item the normalizer of a Sylow $7$-subgroup of $\Sp(6,2)$ which
is isomorphic to $7:6$ and has order $42$.
\end{itemize}
\enmrt
\elmm
In Table~\ref{t1},
we summarize the orders of the maximal solvable subgroups listed in
Lemma \ref{spo}.
\begin{table}[h]
\begin{center}
\caption{The orders of certain maximal solvable subgroups}\label{t1}
\begin{tabular}{|c|c|l|}
\hline
$e$ & $S$ & $|M|$ \\
\hline
$9$ & $\Sp(4,3)$ & $40, 192, 320, 1152$ \\
\hline
$8$ & $\Sp(6,2)$ & $42,120,1296$ \\
& $O^+(6,2)$ & $42, 120, 144$ \\
& $O^-(6,2)$ & $40, 1296$ \\ \hline
$4$ & $\Sp(4,2)$ & $20,72$ \\
& $O^+(4,2)$ & $72$ \\
& $O^-(4,2)$ & $12,20$\\
\hline
$3$ & $\Sp(2,3)$ & $24$ \\
\hline
$2$ & $\Sp(2,2)$ & $6$ \\
& $O^+(2,2)$ & $2$ \\
& $O^-(2,2)$ & $6$ \\
\hline
\end{tabular}
\end{center}
\end{table}
To state the next lemma we introduce some additional notation.
Let $I_m$ be the identity $m\times m$ matrix, and let $\otimes$
denote the Kronecker product of matrices. If $y$ and $z$ are
$k\times k$ and $m\times m$ matrices respectively,
then $y\otimes I_m$ commutes with $I_k\otimes z$.
For $G\leq \GL(k,p)$ let $G\otimes I_m$ be the
subgroup $\{g\otimes I_m\mid g\in G\}$ of $\GL(k\cdot m,p)$.
Clearly $G\cong G\otimes I_m$.
For
positive integers $k,m$ there exists a natural embedding
\qtnl{GLextfield}
\GL(k,p^m):\langle \varphi\rangle \leq \GL(k\cdot m,p),
\eqtn
where $\varphi$ is a field automorphism of $\GL(k,p^m)$.
The uniqueness of a field of order $p^m$ implies that all
such embeddings are conjugate in $\GL(k\cdot m,p)$. Below we
assume that we fix such an embedding and so realise
$\GL(k,p^m):\langle \varphi\rangle$ as a subgroup of $\GL(k\cdot m,p)$.
\lmml{Kron} We retain the notation of Theorem~$\ref{060519a}$ where $e=r^k$.
\nmrt
\item[(i)] If $b$ divides $a$, then there exists a
maximal solvable primitive subgroup of $\GL(e,p^b)$ with
generators $x_1,\ldots,x_l$, and matrices $t\in \GL(a/b,p^b)$ of
order $p^a - 1$
and $s\in \GL(a/b,p^b)$ of order $a/b$ satisfying $t^s=t^{p^b}$,
such that the subgroup
$$\langle t\otimes I_e, s\otimes I_e,
I_{a/b} \otimes x_1,\ldots, I_{a/b} \otimes x_l\rangle
\leq \GL(d/b,p^b):\langle
s\otimes I_e\rangle\leq \GL(d,p)$$ is conjugate to a normal
subgroup of $H$ containing~$A$ and of index dividing~$b$.\smallskip
\item[(ii)] If $b$ does not divide $a$, then there exists a
maximal solvable primitive subgroup of $\GL(e,p)$ with
generators $x_1,\ldots,x_l$,
and matrices
$t\in \GL(a,p)$ of order $p^a-1$ and
$s\in \GL(a,p)$ of order $a$ satisfying $t^s=t^p$,
such that $H$ is conjugate to the subgroup
$$\langle t\otimes I_e,s\otimes I_e, I_a\otimes x_1,\ldots,
I_a \otimes x_l
\rangle\leq\GL(d,p).$$
\enmrt
\elmm
\proof (i) By Lemma~\ref{211220b}(i), $F=U\circ E$, where $E$ is an extraspecial group of order $r^{2k+1}$; moreover, $F$ is unique up to conjugation in $\GL(d,p)$.
By \cite[2.5.14]{Short1992}, $U=\langle z\otimes I_e \rangle$ where
$z$ is a Singer cycle of $\GL(a,p)$.
By \cite[Theorem~2.5.15]{Short1992}, $L=C_{\GL(d,p)}(U)\cong \GL(e,p^a)$,
and $N_{\GL(d,p)}(U)=\GL(e,p^a):\langle \psi\rangle$,
where $\psi$ is a field automorphism of order $a$. Identifying $L$ with $\GL(e,p^a)$, we have the series of subgroups
$$1<U<F<A\le N_L(F)\le L=\GL(e,p^a).$$
Furthermore, $N_L(F)\cong\Sp(2k,r)$ (cf. Lemma~\ref{211220b}(ii)).\medskip
Since $b$ divides $a$, there is an embedding $\GL(a/b,p^b)\leq \GL(a,p)$ and we may choose it so that $z$ lies in its image. Let $t$ be the
preimage of $z$ under this embedding. By \cite[Lemma~2.7]{PTV2004},
there exists $s\in\GL(a/b,p^b)$ of order $a/b$ such that $t^s=t^{p^b}$.\medskip
The embeddings $\GF(p)\le \GF(p^b)\le \GF(p^a)$ yield embeddings
$$\GL(e,p^a)\le \GL(e\cdot (a/b),p^b)\le \GL(d,p).$$
In particular, $A=C_{H}(U)$ is a subgroup of $\GL(e\cdot (a/b),p^b)$.\medskip
By Lemma~\ref{211220b}(i), $L_1=\GL(e,p^b)$ contains a subgroup
$F_1=U_1\circ E$,
where $U_1=Z(L_1)$ and $E$ is extraspecial of order $r^{2k+1}$.
Hence we have the series of subgroups
$$1<U_1<F_1<A_1\le N_{L_1}(F_1)\le L_1=\GL(e,p^b).$$
It follows from Lemma~\ref{211220b}(ii) that $N_{L_1}(F_1)\cong\Sp(2k,r)\cong N_L(F)$. Since $U= \langle t\otimes I_e \rangle$ commutes with all matrices from $I_{a/b}\otimes N_{L_1}(F_1)$,
$$N_{\GL(e\cdot (a/b), p^b)}(F)=\langle U, I_{a/b}\otimes N_{L_1}(F_1)\rangle.$$ Therefore $A\le\langle U, I_{a/b}\otimes N_{L_1}(F_1)\rangle$ and
there exists $A_1=\langle x_1,\ldots,x_l\rangle\le L_1$ such that $A=U\circ (I_{a/b}\otimes A_1)$.
Since $H/A$ is cyclic of order dividing $a$ and $s\otimes I_e$ commutes with all matrices from $I_{a/b}\otimes A_1$,
it follows that $\langle A, s\otimes I_e\rangle$ is conjugate to a normal subgroup of $H$ of index dividing $b$, as required.\medskip
It remains to show that $A_1$ is primitive in $\GL(e,p^b)$. By way of contradiction, assume that there is a proper $\GF(p^b)$-subspace $\ov W$ of the natural $\GF(p^b)A_1$-module $\ov V$ with $\ov V=\bigoplus_{\ov g\in A_1}\ov W^g$. Consider an embedding of $\GL(e,p^b)$ into $\GL(e,p^a)$ such that $A=A_1\circ U$.
Therefore
$$
\wt V=\GF(p^a)\otimes_{\GF(p^b)}\ov{V}=\bigoplus_{g\in A}\wt{W}^g,
$$
where $\wt V$ is the natural $\GF(p^a)A$-module
and $\wt W=\GF(p^a)\otimes_{\GF(p^b)}\overline{W}$. Hence $A$ is imprimitive in $\GL(e,p^a)$.
Since $U=\langle z\otimes I_e \rangle$ where $z$ is a Singer cycle of $\GL(a,p)$, the $\GF(p^a)$-subspaces $\wt W^g$ are $U$-invariant for all $g\in A$. Therefore
$$
V=\GF(p^a)\otimes_{\GF(p)}\wt{V}=\bigoplus_{g\in A}{W}^g,
$$
where $V$ is the natural $\GF(p)A$-module and $W=\GF(p^a)\otimes_{\GF(p)}\wt{W}$.
Now $H=\langle A,x\rangle,$ where $x$ induces a field automorphism
$\psi$ of $U$ (cf.\ Theorem~\ref{060519a}(iv)).
Since $W^g$ is $\psi$-invariant for each $g\in A$,
$$V=\bigoplus_{h\in H}W^h,$$ which contradicts the primitivity of
$H$ in $\GL(d,p)$.\medskip
(ii) We put $b=1$ and
choose an extraspecial subgroup $F_1\le L_1=\GL(e,p)$
such that $N_{L_1}(F_1)\cong N_L(F)$
(cf.\ Lemma~\ref{211220b});
the choice depends on which of two conjugacy classes contains $F$.
The rest
of the proof is similar to that of~(i). \eprf
\begin{comment}
by~\cite[Theorem~4.1]{Yang2011}. Moreover, according to~\cite[Theorem~3.1]{Yang2016} the parameters of~$G$ in these cases must occur in Table~\ref{t1}.\footnote{In the notation of ~\cite{Yang2016}, we have $b=1$ and $|W|=p^a$.}
\begin{table}[h]\label{110519i}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$e$ & $p^a$ \\
\hline
$16$ & $a=1$, $p\le 11$, or $a\ge 2$, $p^a\le 4$\\
\hline
$9$ & $a=1$, $p\le 23$, or $a\ge 2$, $p^a\le 4$ \\
\hline
$8$ & $a=1$, $p\le 23$, or $a\ge 2$, $p^a\le 4$ \\
\hline
$4$ & $a=1$, $p\le 73$, or $a\ge 2$, $p^a\le 9$ \\
\hline
$3$ & $a=1$, $p\le 37$, or $a\ge 2$, $p^a\le 8$ \\
\hline
$2$ & $a=1$, $p\le 29$, or $a\ge 2$, $p^a\le 9$ \\
\hline
\end{tabular}
\end{center}
\vspace*{.5cm}
\caption{Exceptional parameters of linearly primitive solvable permutation groups (the first approximation).}\label{ttt1}
\end{table}
\vspace{-5mm}
In fact, not for every pair $(p,a)$ from Table~\ref{t1}, there exists a linearly primitive solvable permutation group with these parameters. This is because each prime divisor of~$e$ must divide $p^a-1$, Theorem~\ref{060519a}(P2). Taking this into account, we find that the parameters of $G$ occur in the columns~$2$--$5$ of Table~\ref{t2}.\eprf
\end{comment}
\section{Primitive solvable linear groups: computations}\label{090120h}
To complete the proof of Theorem~\ref{241219a}, it suffices to establish
the following.
\thrml{301219b}
Let $G$ be a linearly primitive solvable permutation group with
exceptional parameters listed in columns~$2$--$5$
of Table~$\ref{t2}$. Then $G^{(3)}$ is solvable.
\ethrm
\proof Let $G_0$ be a point stabilizer of~$G$ with underlying vector
space $V$. By the monotonicity of
the $3$-closure operator, we may assume that $H=G_0$ is a maximal
solvable primitive linear group.
Our proof is computational; more details are given in Section \ref{add-details}.
For each choice of the parameters $p$, $d$, $a$, $e$,
we compute a list $\cH=\cH(p,d,a,e)$ of
solvable primitive subgroups $H$ of $\GL(d, p)$,
ensuring that $\cH$ includes representatives of all conjugacy
classes of such {\it maximal} solvable subgroups
for specified $a$ and $e$.
For every $H\in\cH$, we search for nonzero $\alpha \in V$ such
that one of the following conditions is satisfied:
\nmrt
\tm{A} $\alpha^H$ is a regular orbit of $H$,
so Corollary~\ref{060519b} can be applied;
\tm{B} the restriction of $H$ to $\alpha^H$ is $2$-closed,
so Lemma \ref{220520c} can be applied.
\enmrt
Of course, (A) implies (B).
If we find such $\alpha$, then $G$ is $3$-closed
by Corollary~\ref{060519b} and Lemma~\ref{220520c}, respectively; so $G^{(3)}=G$ is solvable. For those
groups~$H$ where no such $\alpha$ is found,
we verify that $H$ acts transitively on the nonzero vectors of~$V$.
In this case, $G^{(3)}$ is solvable by Lemma~\ref{220520b}.
Our results are summarized in
Table~\ref{t2}.
The $7$th column lists the numbers of groups $H\in\cH$
for which conditions (A) or (B) are satisfied;
we indicate (A) or~(B) by writing ``partly regular'' or
``$2$-closed constituent'', respectively; we write ``transitive'' if $H\in\cH$ acts transitively on the nonzero vectors of~$V$. Detailed results, including the {\sf GAP} procedures used, generators of
$H$, vectors~$\alpha$, and certificates for (A) and (B),
are available at~\cite{Comp}.\eprf
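Condition (A) amounts to checking that $|\alpha^H|=|H|$. As a hedged illustration only (a toy cyclic subgroup of $\GL(2,3)$, not one of the groups from Table~\ref{t2}), a regular-orbit search can be sketched in Python:

```python
from itertools import product

P = 3  # toy example over GF(3); not a group from the tables

def mat_mul(a, b):
    # product of two 2x2 matrices over GF(P)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

def close_group(gens):
    # enumerate the generated matrix group by breadth-first closure
    identity = ((1, 0), (0, 1))
    group, frontier = {identity}, [identity]
    while frontier:
        new = []
        for g in frontier:
            for s in gens:
                h = mat_mul(g, s)
                if h not in group:
                    group.add(h)
                    new.append(h)
        frontier = new
    return group

def orbit(group, v):
    # the orbit of the row vector v under the right action v -> v * g
    return {tuple(sum(v[i] * g[i][j] for i in range(2)) % P for j in range(2))
            for g in group}

# cyclic group generated by an order-4 rotation in GL(2,3)
H = close_group([((0, 1), (2, 0))])
# condition (A): alpha has a regular orbit iff |orbit(alpha)| == |H|
regular = [v for v in product(range(P), repeat=2)
           if v != (0, 0) and len(orbit(H, v)) == len(H)]
print(len(H), len(regular))  # here every nonzero vector lies in a regular orbit
```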
\subsection{Constructing $\cH$}\label{add-details}
The construction of the list $\cH$ for given parameters $p$, $d$, $a$, $e$
naturally divides into two cases I and II; the $6$th column of Table \ref{t2} indicates which of them is applied to the stated parameters $p$, $d$, $a$, $e$.
\subsubsection*{\bf Case I}
Where possible, using the {\sf GAP} package IRREDSOL,
we constructed the list ${\cL}$ of all solvable primitive
subgroups of $\GL(d,p)$.
By Theorem~\ref{060519a}, every group $H\in\cH$ has
order $(p^a-1)\,e^2\,s(H)\,a'$, where $s(H)=|M|$ is the order of
a maximal solvable subgroup $M\cong A/F$ of the corresponding linear group
$S$ from Lemma~\ref{spo}, and $a'$ is a divisor of~$a$. Table~\ref{t1} lists the
possible values of~$s(H)$ for all relevant values of~$e$.
By filtering ${\cL}$ with respect to the possible orders,
we obtain $\cH$.
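The order filter can be sketched as follows (Python; the $s$ values passed in are hypothetical placeholders, to be read off from Table~\ref{t1} for the relevant $e$):

```python
# Candidate orders |H| = (p^a - 1) * e^2 * s * a', where a' divides a
# (sketch of the Case I filter; the s values are hypothetical placeholders).
def candidate_orders(p, a, e, s_values):
    divisors = [x for x in range(1, a + 1) if a % x == 0]
    return sorted({(p**a - 1) * e**2 * s * ap
                   for s in s_values for ap in divisors})

# e.g. p = 3, a = 1, e = 4 with placeholder s values {20, 72}
print(candidate_orders(3, 1, 4, [20, 72]))  # [640, 2304]
```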
\subsubsection*{\bf Case II}
We use auxiliary results from Section~\ref{280520c} and \cite[Section~2.5]{Short1992} together with computations
in {\sc Magma} to construct~$\cH$.
Recall that $e=r^k$ where $p\neq r\in\{2,3\}$, $d=ae$, and $b$ is the least positive integer with $p^b\equiv1\pmod{r^c}$ where $c=2$ for $r=2$ and $c=1$ otherwise.
Since $r\in\{2,3\}$, we deduce that $b\leq 2$.
For each remaining set of parameters, we proceed as follows.
\begin{enumerate}
\item[1.]
If $b$ divides $a$ and IRREDSOL contains the solvable primitive
subgroups of $\GL(e,p^b)$, or $b$ does not divide $a$ and IRREDSOL contains
the solvable primitive subgroups of $\GL(e,p)$, then
we construct a list $\cH_0$ by
extracting from the relevant output those groups containing
an extraspecial subgroup of order $r^{2k+1}$ and proceed to Step~5.
\item[2.] Otherwise, using Holt's implementation in
{\sc Magma} of the algorithm
of \cite{HoltCRD}, we construct in $\GL(e,p^b)$ extraspecial subgroups $E$ of order $r^{2k+1}$ (one if~$b$ divides $a$, and two if not) and the corresponding subgroups $F=E\circ U$, where $U=Z(\GL(e,p^b))$, and normalizers $N=N_{\GL(e,p^b)}(F)$, where $N/F\cong\Sp(2k,r)$ or $O^\varepsilon(2k,2)$ (cf. items (i) and (ii) of Lemma~\ref{211220b}).
\item[3.] Lemma~\ref{211220b}(iii) implies that $A/F$ is a maximal solvable subgroup of $N/F$ such that the $r$-radical of $A/F$
is trivial if $N/F\cong\Sp(2k,r)$ and otherwise has order at most $2$.
Using standard tools in {\sc Magma}, we construct the list $\cL$ consisting of all maximal solvable subgroups of $\Sp(2k,r)$ with trivial $r$-radical (if $b$ divides $a$), or of all maximal solvable subgroups of $O^+(2k,2)$
and $O^-(2k,2)$ with $r$-radical of order at most $2$ (if $b$ does not divide $a$).
\item[4.] Following \cite[Theorems~2.5.35 and~2.5.37]{Short1992}, for each subgroup in $\cL$, we produce generators of its complete preimage in $N$.
Thus we obtain the list $\cH_0$ containing up to conjugation all maximal solvable primitive
subgroups of $\GL(e,p^b)$ if $b$ divides $a$, and an equivalent list
in $\GL(e,p)$ if not.
\item[5.] Suppose $b$ divides $a$.
By Lemma \ref{Kron}(i), every maximal solvable primitive
subgroup $H$ of $\GL(d,p)$ contains up to conjugation
an appropriate normal subgroup $H_1$
of index $b \leq 2$ in $H$.
We construct
$t,s$ as in Lemma \ref{Kron}(i).
For $H_0 \leq \GL(e,p^b)$ in $\cH_0$, define
$$H_1=\langle t\otimes I_e,s\otimes I_e,I_{a/b}\otimes H_0\rangle \leq\GL(d,p).$$
If $b=1$, then we take as $\cH$ the set consisting of $H := H_1$
for every $H_0 \in \cH_0$.
If $b=2$, then let $F=\Fit(H_1)$ and define
$$L=N_{\GL(d,p)}(F)\cong F:(\Sp(2k,r):\Z_a).$$
Now we take as $\cH$ the set
consisting of $H := N_L(H_1)$ for every $H_0 \in \cH_0$.
Suppose $b$ does not divide $a$.
We construct $t,s$ as in Lem\-ma~\ref{Kron}(ii),
and take as $\cH$ the set
$$\{\langle t\otimes I_e,s\otimes I_e,I_{a}\otimes
H_0\rangle\mid H_0\in\cH_0\}.$$ The same lemma guarantees that up to conjugation all maximal solvable primitive subgroups of $\GL(d,p)$ are in~$\cH$.
\end{enumerate}
\subsection{Processing $\cH$}
We discuss briefly how we process each $H \in \cH$.
For given $\alpha \in V$, condition (A) is readily checked. Since the {\tt TwoClosure} command in {\sf GAP} is time consuming,
we use the {\sf GAP} package COCO2P~\cite{KlinCOCO2P} to verify (B).
Namely, we compute the automorphism group~$\wt H$ of the coherent configuration associated with the restriction of $H$ to $\alpha^H$
using the COCO2P commands {\tt ColorGraph} and {\tt AutomorphismGroup}. Now $H$ is $2$-closed if and only if $H=\wt H$ \mbox{\cite[Corollary~2.2.18]{CP2019}}.
It was sometimes infeasible to compute all $H$-orbits on $V$,
so we randomly selected vectors $\alpha$
until we found one which satisfies either (A) or (B).
Hence, in principle, some cases resolved by (B) could also
be resolved by (A).
\medskip
{\small
\begin{longtable}{|r|r|r|r|r|r|l|}
\caption{Results for groups with exceptional parameters\label{t2}} \\
\hline
No. & $e$ & $p$ & $d$ & $a$ & case & results \\
\hline
$1$ & $16$ & $3$ & $16$ & $1$ & II &$780$, partly regular \\
& & & & & & $21$, $2$-closed constituent \\
\hline
$2$ & $16$ & $5$ & $16$ & $1$ & II &$1085$, partly regular \\
\hline
\hline
\hline
$3$ & $9$ & $2$ & $18$ & $2$ & I & $31$, $2$-closed constituent \\
\hline
$4$ & $9$ & $7$ & $9$ & $1$ & II & $44$, partly regular \\
\hline
$5$ & $9$ & $13$ & $9$ & $1$ & II & $44$, partly regular \\
\hline
$6$ & $9$ & $2$ & $36$ & $4$ & II & 7, partly regular \\
\hline
$7$ & $9$ & $19$ & $9$ & $1$ & II & $44$, partly regular \\
\hline
$8$ & $9$ & $5$ & $18$ & $2$ & II & $44$, partly regular \\
\hline
\hline
\hline
$9$ & $8$ & $3$ & $8$ & $1$ & I & $6$, $2$-closed constituent\\
\hline
$10$ & $8$ & $5$ & $8$ & $1$ & I & $4$, $2$-closed constituent \\
\hline
$11$ & $8$ & $7$ & $8$ & $1$ & II & $30$, partly regular \\
& & & & & & $1$, $2$-closed constituent \\
\hline
$12$ & $8$ & $3$ & $16$ & $2$ & II & $63$, partly regular \\
\hline
$13$ & $8$ & $11$ & $8$ & $1$ & II & $122$, partly regular \\
\hline
$14$ & $8$ & $13$ & $8$ & $1$ & II & $63$, partly regular\\
\hline
$15$ & $8$ & $17$ & $8$ & $1$ & II & $63$, partly regular \\
\hline
$16$ & $8$ & $19$ & $8$ & $1$ & II & $123$, partly regular \\
\hline
$17$ & $8$ & $5$ & $16$ & $2$ & II & $4$, partly regular\\
\hline
$18$ & $8$ & $3$ & $24$ & $3$ & II & $6$, partly regular \\
\hline
\hline
\hline
$19$ & $4$ & $3$ & $4$ & $1$ & I & $3$, $2$-closed constituent \\
\hline
$20$ & $4$ & $5$ & $4$ & $1$ & I & $2$, $2$-closed constituent\\
\hline
$21$ & $4$ & $7$ & $4$ & $1$ & I & $13$, $2$-closed constituent \\
\hline
$22$ & $4$ & $3$ & $8$ & $2$ & I & $22$, $2$-closed constituent \\
\hline
$23$ & $4$ & $11$ & $4$ & $1$ & I & $3$, $2$-closed constituent \\
\hline
$24$ & $4$ & $13$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$25$ & $4$ & $17$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$26$ & $4$ & $19$ & $4$ & $1$ & I & $7$, partly regular \\
\hline
$27$ & $4$ & $23$ & $4$ & $1$ & I & $11$, partly regular \\
& & & & & & $1$, $2$-closed constituent \\
\hline
$28$ & $4$ & $5$ & $8$ & $2$ & I & $41$, partly regular \\
& & & & & & $21$, $2$-closed constituent \\
\hline
$29$ & $4$ & $3$ & $12$ & $3$ & I & $9$, partly regular\\
\hline
$30$ & $4$ & $29$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$31$ & $4$ & $31$ & $4$ & $1$ & I & $16$, partly regular \\
\hline
$32$ & $4$ & $37$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$33$ & $4$ & $41$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$34$ & $4$ & $43$ & $4$ & $1$ & I & $6$, $2$-closed constituent \\
\hline
$35$ & $4$ & $47$ & $4$ & $1$ & I & $24$, partly regular \\
& & & & & & $2$, $2$-closed constituent \\
\hline
$36$ & $4$ & $7$ & $8$ & $2$ & I & $11$, partly regular \\
& & & & & & $17$, $2$-closed constituent\\
\hline
$37$ & $4$ & $53$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$38$ & $4$ & $59$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
& & & & & & $1$, partly regular \\
\hline
$39$ & $4$ & $61$ & $4$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$40$ & $4$ & $67$ & $4$ & $1$ & II & $2$, partly regular \\
\hline
$41$ & $4$ & $71$ & $4$ & $1$ & II & $2$, partly regular \\
\hline
$42$ & $4$ & $3$ & $16$ & $4$ & II & $2$, partly regular \\
\hline
$43$ & $4$ & $11$ & $8$ & $2$ & II & $2$, partly regular \\
\hline
$44$ & $4$ & $5$ & $12$ & $3$ & II & $2$, partly regular \\
\hline
$45$ & $4$ & $13$ & $8$ & $2$ & II & $2$, partly regular\\
\hline
$46$ & $4$ & $3$ & $20$ & $5$ & II & $3$, partly regular \\
\hline
\hline
\hline
$47$ & $3$ & $2$ & $6$ & $2$ & I & $2$, $2$-closed constituent \\
\hline
$48$ & $3$ & $7$ & $3$ & $1$ & I & $1$, $2$-closed constituent \\
\hline
$49$ & $3$ & $13$ & $3$ & $1$ & I & $1$, $2$-closed constituent \\
\hline
$50$ & $3$ & $2$ & $12$ & $4$ & I & $3$, $2$-closed constituent \\
\hline
$51$ & $3$ & $19$ & $3$ & $1$ & I & $1$, $2$-closed constituent \\
\hline
$52$ & $3$ & $5$ & $6$ & $2$ & I & $4$, $2$-closed constituent \\
\hline
$53$ & $3$ & $7$ & $6$ & $2$ & I & $5$, partly regular \\
\hline
$54$ & $3$ & $2$ & $18$ & $6$ & I & $7$, partly regular \\
& & & & & & $11$, $2$-closed constituent \\
\hline
$55$ & $3$ & $11$ & $6$ & $2$ & I & $4$, partly regular \\
\hline
$56$ & $3$ & $13$ & $6$ & $2$ & I & $4$, partly regular \\
\hline
$57$ & $3$ & $2$ & $24$ & $8$ & II & $1$, partly regular \\
\hline
$58$ & $3$ & $17$ & $6$ & $2$ & II & $1$, partly regular \\
\hline
$59$ & $3$ & $7$ & $9$ & $3$ & II & $1$, partly regular \\
\hline
$60$ & $3$ & $19$ & $6$ & $2$ & II & $1$, partly regular \\
\hline
\hline
\hline
$61$ & $2$ & $3$ & $2$ & $1$ & I & $2$, transitive \\
\hline
$62$ & $2$ & $5$ & $2$ & $1$ & I & $1$, transitive \\
\hline
$63$ & $2$ & $7$ & $2$ & $1$ & I & $5$, $2$-closed constituent \\
\hline
$64$ & $2$ & $3$ & $4$ & $2$ & I & $6$, $2$-closed constituent\\
\hline
$65$ & $2$ & $11$ & $2$ & $1$ & I & $3$, $2$-closed constituent \\
\hline
$66$ & $2$ & $13$ & $2$ & $1$ & I & $2$, $2$-closed constituent \\
\hline
$67$ & $2$ & $17$ & $2$ & $1$ & I & $1$, $2$-closed constituent \\
\hline
$68$ & $2$ & $19$ & $2$ & $1$ & I & $3$, $2$-closed constituent \\
\hline
$69$ & $2$ & $23$ & $2$ & $1$ & I & $7$, $2$-closed constituent \\
\hline
$70$ & $2$ & $5$ & $4$ & $2$ & I & $20$, $2$-closed constituent \\
\hline
$71$ & $2$ & $3$ & $6$ & $3$ & I & $3$, partly regular \\
& & & & & & $1$, $2$-closed constituent \\
\hline
$72$ & $2$ & $29$ & $2$ & $1$ & I & $1$, $2$-closed constituent \\
\hline
$73$ & $2$ & $7$ & $4$ & $2$ & I & $12$, $2$-closed constituent \\
\hline
$74$ & $2$ & $3$ & $8$ & $4$ & I & $30$, $2$-closed constituent \\
\hline
$75$ & $2$ & $11$ & $4$ & $2$ & I & $14$, $2$-closed constituent \\
\hline
$76$ & $2$ & $5$ & $6$ & $3$ & I & $2$, $2$-closed constituent \\
\hline
$77$ & $2$ & $13$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$78$ & $2$ & $3$ & $10$ & $5$ & I & $4$, partly regular \\
\hline
$79$ & $2$ & $17$ & $4$ & $2$ & I & $8$, $2$-closed constituent \\
\hline
$80$ & $2$ & $7$ & $6$ & $3$ & I & $22$, partly regular \\
\hline
$81$ & $2$ & $19$ & $4$ & $2$ & I & $19$, $2$-closed constituent \\
\hline
$82$ & $2$ & $23$ & $4$ & $2$ & I & $12$, partly regular \\
\hline
$83$ & $2$ & $5$ & $8$ & $4$ & I & $27$, partly regular \\
\hline
$84$ & $2$ & $3$ & $12$ & $6$ & I & $17$, partly regular \\
\hline
$85$ & $2$ & $29$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$86$ & $2$ & $31$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$87$ & $2$ & $11$ & $6$ & $3$ & I & $7$, partly regular \\
\hline
$88$ & $2$ & $37$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$89$ & $2$ & $41$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$90$ & $2$ & $43$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$91$ & $2$ & $3$ & $14$ & $7$ & I & $4$, partly regular \\
\hline
$92$ & $2$ & $13$ & $6$ & $3$ & I & $5$, partly regular \\
\hline
$93$ & $2$ & $47$ & $4$ & $2$ & I & $8$, partly regular \\
& & & & & & $1$, $2$-closed constituent \\
\hline
$94$ & $2$ & $7$ & $8$ & $4$ & I & $23$, partly regular \\
\hline
$95$ & $2$ & $53$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$96$ & $2$ & $5$ & $10$ & $5$ & I & $2$, partly regular \\
\hline
$97$ & $2$ & $59$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$98$ & $2$ & $61$ & $4$ & $2$ & I & $8$, partly regular \\
\hline
$99$ & $2$ & $67$ & $4$ & $2$ & II & $1$, partly regular \\
\hline
$100$ & $2$ & $17$ & $6$ & $3$ & II & $1$, partly regular \\
\hline
$101$ & $2$ & $71$ & $4$ & $2$ & II & $4$, partly regular \\
\hline
$102$ & $2$ & $73$ & $4$ & $2$ & II & $1$, partly regular \\
\hline
\end{longtable}
}
\section{Introduction}
Timely prediction of in-hospital mortality within intensive care units (ICU) is beneficial \cite{sharma2017mortality,johnson2017real} for practitioners to tailor care and allow for earlier interventions to prevent deterioration \cite{delahanty2019development, meyer2018machine}. Electronic Health Record (EHR) data consist of information relating to patient encounters with a health system, such as demographics, disease diagnoses, vital signs, and medications, among others \cite{jensen2012mining, glicksberg2018next}, and are often used for machine learning (ML) predictions for different tasks in the biomedical domain, including mortality prediction \cite{rajkomar2018scalable,shickel2017deep,glicksberg2018automated}. The inherent complexity of EHR data often requires advanced modeling frameworks to gain robust performance for these tasks. A common modeling approach for EHR research is a 2-dimensional convolutional neural network (CNN) with one dimension as time and the other as clinical features \cite{zhang2017hcnn,kim2019deep,cheng2016risk}. In healthcare-related CNN models, various medical features are normally concatenated to be directly used as inputs and create embeddings \cite{miotto2016deep,de2020phe2vec,Landi2020}. This form of feature representation can be powerful, but it disregards the graphical structure and interconnectivity between medical concepts \cite{choi2019graph,choi2018mime}, which can affect CNN performance, especially since EHR data are often sparse due to missingness \cite{cheng2016risk}.
In this work, we propose a Heterogeneous Graph Model (HGM) to create a patient embedding vector, which better accounts for missingness in data for training a CNN model. The HGM model captures the relationships between different medical concept types (e.g., diagnoses and lab tests) due to its graphical structure. This relational representation facilitates capturing more complex patient patterns and encoding similarities.
\section{Methodology}
\subsection{Dataset}
We conduct our experiments on de-identified EHR data from MIMIC-III \cite{johnson2016mimic}. This data set contains various clinical data relating to patient admissions to the ICU, such as demographics, lab test results, and disease diagnoses. We collected data for 5,956 patients, extracting lab tests every hour from admission. There are a total of 409 unique lab tests and 3,387 unique disease diagnoses observed. We bin the lab test events into 6, 12, 24, and 48 hours prior to patient death or discharge from the ICU. From these data, we perform 10-fold cross-validated mortality prediction.
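As an illustration of the binning step (toy timestamps; the variable names are ours, not from the MIMIC-III schema):

```python
import numpy as np

# Bin lab events by hours before the ICU endpoint (death or discharge);
# windows of 6, 12, 24, and 48 hours, as described above. Toy data only.
endpoint = 100.0                                 # hypothetical endpoint (hours)
event_times = np.array([55.0, 90.5, 96.2, 99.9]) # hypothetical lab event times
hours_before = endpoint - event_times
for window in (6, 12, 24, 48):
    in_window = event_times[hours_before <= window]
    print(window, len(in_window))
```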
\subsection{Convolutional Neural Network Model}
\label{cnn}
CNNs are often used, and perform well, on image processing tasks \cite{krizhevsky2012imagenet} due to their inherent feature extraction and abstraction ability, which increases accuracy for classification tasks. Several studies have also demonstrated encouraging successes in using CNNs for EHR analyses. In this work, we use a standard CNN model as the baseline.
Since CNNs typically require two-dimensional inputs, we treat time as the horizontal dimension and medical events as the vertical dimension. For the time dimension, we record every event in one-hour binned increments with respect to the patient's death or discharge time. In this model, the vertical dimension is constructed by concatenating two medical event vectors: lab tests and diagnoses. Every entry of the lab test vector records the value of a specific lab test by hour. For the diagnosis vector, the $i$-th entry is 1 if the $i$-th diagnosis is observed, and 0 otherwise. We treat mortality prediction as binary classification, for which we use a softmax layer with two dimensions and cross-entropy loss.
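The input assembly described above can be sketched as follows (NumPy; toy observations, using the 409 lab tests and 3,387 diagnoses of the dataset as dimensions):

```python
import numpy as np

# Assemble the 2D CNN input: time on the horizontal axis, concatenated
# lab-test and diagnosis features on the vertical axis. Toy observations.
T, N_LAB, N_DX = 48, 409, 3387

lab_values = np.zeros((N_LAB, T))   # value of each lab test per hour bin
diagnoses = np.zeros(N_DX)          # 1 if the i-th diagnosis is observed

lab_values[5, 10] = 2.3             # e.g. lab test 5 measured at hour bin 10
diagnoses[[17, 250]] = 1            # e.g. two observed diagnoses

# diagnoses are static per admission, so tile them across the time dimension
x = np.vstack([lab_values, np.tile(diagnoses[:, None], (1, T))])
print(x.shape)  # (3796, 48)
```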
\begin{figure*}[h]
\centering
\includegraphics[scale=0.34]{schematic2.png}
\caption{(A) A graphical representation of the HGM for p: patient, i: lab test, and d: diagnosis data. (B) Each graph node in (A) has a corresponding vector representation. These representations are projected into a shared space with the TransE method, and the projection is optimized via skip-gram optimization so that the embedding retains the relations present in the original data. Finally, the vectors are concatenated and fed into the CNN model for mortality prediction.}
\label{fig:combine_model}
\end{figure*}
\subsection{Heterogeneous Graph Model}
The features used in the baseline CNN model are essentially raw data concatenated together, a representation that does not consider the relationships between medical concepts. We use an HGM to capture these inherent relationships by creating three different types of nodes: patient, lab test, and diagnosis. These node types are connected by two relation types, tested and diagnosed, which can be represented as two triples:
\begin{align*}
& Patient \xrightarrow{tested} Lab :\{patient, tested, lab\}\\
& Patient \xrightarrow{diagnosed} Diagnosis:\{patient, diagnosed, diagnosis\}
\end{align*}
The tested relationship indicates whether a specific lab test was given to a patient at a specific time; the diagnosed relationship indicates whether a patient was diagnosed with a disease.
To represent the lab test and diagnosis node types, we use multi-hot encoding vectors $X_i \in \{0,1\}^{409}$ and $X_d \in \{0,1\}^{3387}$, where an entry of 1 indicates that the corresponding lab test was performed or the corresponding diagnosis was given.
\subsubsection{Node Embeddings}
To capture the relations between different medical events related to a patient, we utilize the TransE model~\cite{bordes2013translating} to project the different types of nodes into the same latent space, treating connected nodes as a similar group and disconnected nodes as a dissimilar group.
The TransE model uses a set of 1) projection matrices and 2) relation vectors. After initialization, projections and translations are optimized end-to-end. Heterogeneous nodes $X_p, X_i, X_d$ are projected into a shared latent space with trainable projection matrices $W_{p},W_{i},W_{d}$ using the nonlinear mappings:
\begin{equation}
\begin{split}
c_{p}&=\sigma(W_{p}\cdot X_{p})\\
c_{i}&=\sigma(W_{i}\cdot X_{i})\\
c_{d}&=\sigma(W_{d}\cdot X_{d})
\end{split}
\end{equation}
where $\sigma$ is a non-linear activation function and $c_{p},c_{i},c_{d}$ are the latent representations of each node type. Although the data types $X_{p},X_{i},X_{d}$ have different dimensions, all node types are projected into the same latent space. We then apply translation operations to link these different types of nodes:
\begin{equation}
\begin{split}
c_{p}&=c_{i}+r_{ip}\\
c_{d}&=c_{p}+r_{pd}\\
\end{split}
\end{equation}
where $r_{ip}$ and $r_{pd}$ are the relation vectors connecting lab tests to patients and patients to diagnoses, respectively. The patient representation $c_{p}$ in both equations is obtained with the same projection matrix $W_p$.
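A minimal NumPy sketch of the projection and translation step (random, untrained weights and a hypothetical latent dimension of 64; in the model these parameters are learned end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

D = 64                          # hypothetical shared latent dimension
n_p, n_i, n_d = 477, 409, 3387  # patient, lab-test, diagnosis input dims

# trainable parameters (random stand-ins here, scaled to keep values tame)
W_p, W_i, W_d = (rng.normal(size=(D, n)) * 0.05 for n in (n_p, n_i, n_d))
r_ip, r_pd = rng.normal(size=D), rng.normal(size=D)  # relation vectors

X_p = rng.normal(size=n_p)                    # patient feature vector
X_i = (rng.random(n_i) < 0.05).astype(float)  # multi-hot lab-test vector
X_d = (rng.random(n_d) < 0.01).astype(float)  # multi-hot diagnosis vector

# projections into the shared latent space
c_p, c_i, c_d = sigmoid(W_p @ X_p), sigmoid(W_i @ X_i), sigmoid(W_d @ X_d)

# TransE translation residuals; training drives these toward zero
res_ip = np.linalg.norm(c_p - (c_i + r_ip))
res_pd = np.linalg.norm(c_d - (c_p + r_pd))
print(c_p.shape, float(res_ip) >= 0, float(res_pd) >= 0)
```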
\subsubsection{Optimization Model}
To train the HGM, we apply a skip-gram optimization model~\cite{dong2017metapath2vec}, which, after the projection and translation operations, increases the proximity between embedding points whose corresponding graph nodes are often connected:
\begin{equation}
\mathrm{max}\sum_{u\in V}\sum_{t\in T_{V}}logPr(N_{t}(u)|u)
\label{hetero}
\end{equation}
where $N_{t}(u)$ are the neighborhood vertices of center node $u$, and $t\in T_{V}$ is the node type. Here, we learn the node embeddings by maximizing the probability of correctly predicting the patient node's associated lab tests and diagnoses. The prediction probability is modeled as a softmax function:
\begin{equation}
Pr(c_{t}|f(u))=\frac{e^{\vec{c}_{t}\cdot \vec{u}}}{Z_{u}}
\label{soft_max}
\end{equation}
where $\vec{u}$ is the latent representation of patient $u$, $\vec{c}_{t}$ is the latent representation of lab and diagnosis neighbors of node $u$, and $\vec{c}_{t}\cdot \vec{u}$ is the inner product of the two embedding vectors representing their similarity. $Z_{u}$ is the normalization term $Z_{u} = \sum_{v\in V}e^{\vec{v}_{t}\cdot \vec{u}}$ that is a sum over all vertices $V$, each of which is represented as $\vec{v}_{t}$ including all node types. Therefore, equation \ref{hetero} is simplified to:
\begin{equation}
\mathcal{L}_{s}=-\sum_{t\in T}\sum_{u\in V}\Big[\sum_{c_{t}\in N_{t}(u)}\vec{c_{t}}\cdot \vec{u}-logZ_{u}\Big]
\label{sim_loss}
\end{equation}
Numerical computation of $Z_{u}$ is intractable for large-scale graphs, so we adopt a negative sampling strategy~\cite{mikolov2013distributed} to approximate the normalization factor. The final optimization objective is:
\begin{align}
\mathcal{L}_{s}&=-\sum_{t\in T}\sum_{u\in V}\Big[\sum_{c_{t}\in N_{t}(u)}\log\sigma(\vec{c_{t}}\cdot \vec{u})+
\sum_{j=1}^{\mathbb{K}}E_{c_{j}\sim P_{v}(c_{j})}\log\sigma(-\vec{c_{j}}\cdot \vec{u})\Big]
\label{SGNN_loss}
\end{align}
where $\sigma(x)=\frac{1}{1+\exp(-x)}$ is the sigmoid function, $\mathbb{K}$ is the number of negative samples, and $P_{v}(c_{j})$ is the negative sampling distribution.\\
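For illustration, the negative-sampling objective above can be evaluated for a single center node in a few lines. This is a minimal NumPy sketch; the function name and array shapes are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_neg_loss(u_vec, pos_vecs, neg_vecs):
    """Negative-sampling skip-gram loss for one center node.

    u_vec    : (d,) embedding of the center (patient) node
    pos_vecs : (P, d) embeddings of sampled neighbors (diagnoses, lab tests)
    neg_vecs : (K, d) embeddings of K negative samples
    """
    pos_term = np.log(sigmoid(pos_vecs @ u_vec)).sum()
    neg_term = np.log(sigmoid(-neg_vecs @ u_vec)).sum()
    return -(pos_term + neg_term)
```

The loss is small when the center embedding aligns with its sampled neighbors and anti-aligns with the negative samples.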
For training the HGM, we perform heterogeneous neighborhood sampling over one-hop connectivity and pick the $Patient$ node as the center node, since it has one-hop connections to both $Diagnoses$ and $Lab\_test$ nodes. Specifically, for one training center $Patient$ node, we uniformly sample 10 directly connected $Diagnoses$ nodes and 10 directly connected $Lab\_test$ nodes. From these 10 sampled $Diagnoses$ nodes, we sample another 10 $Patient$ nodes, one connected to each of the sampled $Diagnoses$ nodes. In this way, we connect the center patient node with other similar $Patient$ nodes through their common diagnoses. We also sample the patient node of the next hour corresponding to the center $Patient$ node. For negative sampling~\cite{mikolov2013distributed}, we sample uniformly from all $Diagnoses$ and $Lab\_test$ nodes that do not have one-hop connections with the center training patient node. We then project these different nodes into the same latent space through the TransE model. After unifying the embeddings for the different node types, each concept is represented as a point in a Euclidean space, in which we can measure the similarity between any two vectors using the dot product.
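The sampling procedure above can be sketched as follows. The adjacency dictionaries and function name are hypothetical placeholders standing in for the EHR graph described in the paper:

```python
import random

def sample_heterogeneous_neighborhood(center, patient2diag, patient2lab,
                                      diag2patient, k=10, seed=0):
    """Sample a heterogeneous context for one center Patient node:
    k one-hop Diagnoses nodes, k one-hop Lab_test nodes, and one similar
    Patient node per sampled diagnosis (patients sharing that diagnosis)."""
    rng = random.Random(seed)
    diags = rng.choices(patient2diag[center], k=k)   # with replacement
    labs = rng.choices(patient2lab[center], k=k)
    patients = []
    for d in diags:
        # similar patients reached through a common diagnosis
        candidates = [p for p in diag2patient[d] if p != center]
        if candidates:
            patients.append(rng.choice(candidates))
    return diags, labs, patients
```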
\subsection{HGM Embeddings with CNN Model}
The HGM embedding vector encodes not only a patient's information but also their relations with diagnoses, lab tests, and subsequent lab test results in time. The patient node is represented as a vector $X_p \in \mathbb{R}^{477}$ containing the numerical lab test values averaged at that time step. Within every hour, we concatenate the resulting embedding vectors along the vertical feature dimension of the baseline CNN input to form a final feature vector, and use these new features as the CNN input to predict mortality. In addition, since we encode time as a relation type, we can infer the embedding vector of time steps with missing data from the information of the previous hour. We visualize this procedure in Fig 1.
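A minimal sketch of this feature-fusion step, assuming an hourly lab-feature matrix and a hypothetical embedding dimension $d$ (the function name and shapes are illustrative, not the paper's code):

```python
import numpy as np

def fuse_features(raw_labs, hgm_embed):
    """Concatenate raw lab features with HGM patient embeddings per hour.

    raw_labs  : (T, 477) hourly-averaged lab measurements
    hgm_embed : (T, d)   HGM patient embedding for each hour
    returns   : (T, 477 + d) fused CNN input features
    """
    assert raw_labs.shape[0] == hgm_embed.shape[0], "hour axes must match"
    return np.concatenate([raw_labs, hgm_embed], axis=1)
```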
\section{Experiments}
We aim to predict mortality 6, 12, 24, and 48 hours prior to death and/or discharge. The CNN model is used for prediction as introduced in section \ref{cnn}. We compare three different scenarios to test the impact of adding HGM embedding vectors as additional features to the framework:\\
\begin{itemize}
\item HGM: Use the HGM patient embedding of the raw lab and diagnosis data.
\item CNN: Use the raw lab test features.
\item HGM+CNN: Concatenate the HGM patient embedding vector and the raw lab test feature vector.
\end{itemize}
In this work, we use the AUROC and AUPRC scores as the primary performance metrics. We tabulate the results in Table~\ref{tab:accur} and show the evaluation AUROC and AUPRC curves for these tasks in Fig.~\ref{fig:roc_curve}.
\begin{table}[t]
\centering
\caption{Mortality prediction AUROC evaluation. Mean values from 10-fold cross validation with standard deviation for confidence intervals.}
\begin{tabular}{p{2cm}p{2cm}p{2cm}p{1.5cm}}
\toprule
\multicolumn{1}{l}{\multirow{2}[0]{*}{Hours prior to death}} & \multicolumn{3}{c}{Models} \\
\cmidrule{2-4}
& HGM & CNN & HGM+CNN \\
\midrule
6 & 0.714$\pm{0.02}$ & 0.782$\pm{0.01}$ &\textbf{0.800}$\pm{0.01}$ \\
12 & 0.715$\pm{0.03}$ & 0.771$\pm{0.02}$ & \textbf{0.791}$\pm{0.02}$ \\
24 & 0.653$\pm{0.03}$ &0.775$\pm{0.01}$ & \textbf{0.796}$\pm{0.01}$ \\
48 &0.641$\pm{0.03}$ &0.767$\pm{0.01}$ & \textbf{0.771}$\pm{0.01}$\\
\bottomrule
\end{tabular}%
\label{tab:accur}%
\end{table}
\begin{table}[t]
\centering
\caption{Mortality prediction AUPRC evaluation. Mean values from 10-fold cross validation with standard deviation for confidence intervals.}
\begin{tabular}{p{2cm}p{2cm}p{2cm}p{1.5cm}}
\toprule
\multicolumn{1}{l}{\multirow{2}[0]{*}{Hours prior to death}} & \multicolumn{3}{c}{Models} \\
\cmidrule{2-4}
& HGM & CNN & HGM+CNN \\
\midrule
6 & 0.557$\pm{0.02}$ & 0.590$\pm{0.01}$ &\textbf{0.601}$\pm{0.01}$ \\
12 & 0.559$\pm{0.02}$ & 0.577$\pm{0.02}$ & \textbf{0.600}$\pm{0.01}$ \\
24 & 0.578$\pm{0.02}$ &0.589$\pm{0.01}$ & \textbf{0.604}$\pm{0.01}$ \\
48 &0.567$\pm{0.03}$ &0.585$\pm{0.02}$ & \textbf{0.617}$\pm{0.02}$\\
\bottomrule
\end{tabular}%
\label{tab:auprc}%
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{AUC_PR.png}
\caption{Evaluation of AUROC and AUPRC curves for HGM, CNN, and HGM+CNN models}
\label{fig:roc_curve}
\end{figure}
The testing results show that HGM+CNN outperforms both the basic HGM and CNN models, indicating that the additional information from the HGM patient embeddings increases the accuracy of predicting in-patient mortality. The prediction accuracy across the different horizons prior to death and/or discharge does not vary by much, indicating that the different time windows do not have a major impact on the result for this particular task and modeling strategy. The prediction accuracy of the CNN model drops by 1\% in the case of six hours prior to death and/or discharge, but not in the other two models, indicating that the embedding features from the HGM model are slightly more robust than the raw data.
\section{Discussion and Conclusion}
In this work, we propose a method to incorporate the patient embedding vector from an HGM model into a CNN model in order to provide more information via the interconnectivity between different clinical concepts. We assess the value of this implementation on the task of predicting mortality from EHR data. The results of our experiments show the superior performance of adding the additional patient embedding vector, pretrained from the HGM model, compared to using pure raw features as the input to the CNN model. In one aspect, this is because the HGM embedding vector captures additional relational information between different medical concepts, thus providing additional information to the CNN model.
Furthermore, we observe that concatenating the HGM embedding vector with diagnosis feature vectors does not increase the accuracy versus using the concatenation of raw lab test and diagnosis feature vectors. This finding indicates that the raw lab test feature vector provides unique information for the CNN to utilize. At the same time, it indicates that the embedded patient vector from the HGM model may lose some information from the raw lab test features in the process of projecting these data into a low-dimensional latent space. By concatenating all feature vectors, we aim to preserve the information from the different data sources, which helps to achieve higher mortality prediction accuracy. We hope the findings from this work can be expanded in future directions that add more EHR node types and time components, on a variety of other important health-related predictive tasks.
\section*{Author Contributions}
TW designed the study and performed the analyses. TW and BSG wrote the manuscript. TW, HH, AA, YD, and BSG evaluated the results and edited the manuscript. YD, AA, and BSG supervised the project.
\section*{Acknowledgements}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The dynamics of single-species fast-moving quantum particles is described by the relativistic quantum Boltzmann equation:
\begin{align}\label{RQBE0}
\begin{split}
p^{\mu}\partial_{\mu}F = p^0\partial_t F + p\cdot\nabla_xF = C(F,F,F,F),
\end{split}
\end{align}
where $F(x,p,t)$ is a momentum distribution function on the phase point $(x,p)\in\mathbb{T}^3\times\mathbb{R}^3$ at time $t\in[0,\infty)$.
The relativistic quantum collision operator is given by
\begin{multline}\label{C}
C(F_1,F_2,F_3,F_4)=\int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0}W(p,q|p',q')\cr
\times\big[F_1(p')F_2(q')(1+\tau F_3(p))(1+\tau F_4(q))
-(1+\tau F_1(p'))(1+\tau F_2(q'))F_3(p)F_4(q)\big],
\end{multline}
where $\tau=+1$ for bosons, and $\tau=-1$ for fermions.
When the functions $F_1$, $F_2$, $F_3$ and $F_4$ are identical, we denote $C(F,F,F,F)$ as $C(F)$. The transition rate $W(p,q|p',q')$ is defined as
\begin{align}\label{W}
W(p,q|p',q') = \frac{c}{2}s\sigma(g,\theta)\delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}),
\end{align}
where $\sigma(g,\theta)$ denotes the differential cross-section which describes the collisions between particles. The $4$-dimensional Dirac-delta function implies the conservation laws of momentum and energy. The precise definitions of $s$ and $\sigma(g,\theta)$ are given in Section \ref{sec:notations}.
\subsection{Motivations and a brief history}
Owing to recent advances in relativistic quantum theory, there has been increasing interest in the relativistic quantum Boltzmann equation in various fields of physics and engineering.
Li et al. \cite{li2008recent} announced several experimental and theoretical results about heavy-ion and neutron interactions using the relativistic quantum Boltzmann equation.
In \cite{buss2012transport}, Buss et al. described several quantum interaction experiments, such as pion-nucleus interactions, heavy-ion reactions, electron-nucleus collisions, and neutrino-nucleus interactions, using an equation called the Giessen Boltzmann-Uehling-Uhlenbeck (GiBUU) transport model.
Since the speed of electrons in graphene is close to that of light, Lapitski developed a lattice Boltzmann method to simulate the stream of electrons in graphene in \cite{MR3389279}.
We also mention some other experiments on the relativistic quantum Boltzmann model \cite{kim2016introduction,li1997equation,succi2002lattice}.
Despite such modern advances in physics and engineering, there have been very few mathematical studies on the relativistic quantum Boltzmann equation. Akama \cite{akama1970relativistic} considered several types of relativistic quantum kinetic models, such as the Boltzmann, the Fokker-Planck, and the Landau equations; this was the first appearance of the relativistic quantum Boltzmann equation.
The unique determination of the relativistic quantum equilibrium satisfying the $H$-theorem and the conservation laws was considered by Escobedo et al. in \cite{MR1958975,MR2145021}.
To the best of the authors' knowledge, the mathematical theory on the existence of solutions to the equation has not been studied yet.
Regarding the Newtonian non-quantum Boltzmann equation, a brief list of the classic literature includes \cite{MR1857879,MR2997586,MR1313028,MR1307620,MR0258399,MR2525118,MR1379589,MR1908664,MR2000470,MR2095473,MR1942465,MR1014927,MR3040372,MR2534787,MR2227952,MR3157048,MR0135535,MR33674,harris2004introduction,huang1987statistical,MR0479206,MR2683475,spohn2012large}.
The mathematical studies on the relativistic non-quantum Boltzmann equation are also scarce. In 1940, Lichnerowicz and Marrot suggested the first relativistic Boltzmann equation in \cite{MR4796}.
In \cite{MR933458}, Dudy\'{n}ski and Ekiel-Je\.{z}ewska studied the linearized relativistic Boltzmann equation and obtained estimates for the linear term in the hard and soft potential cases. The same authors also established the global existence of mild solutions in \cite{MR1151987}.
Glassey and Strauss obtained some estimates on the derivatives of the collision map between pre- and post-collisional momenta, including the upper bound on the average of $\partial p' /\partial p$ and $\partial q' /\partial p$, in \cite{MR1105532}. Very recently, Chapman et al. \cite{2006.02540} provided analytic and numerical evidence that the lower bound for the Jacobian determinant of the collision map $p\mapsto p'$ or $q'$ in the \textit{center-of-momentum} frame is zero.
In the \textit{nearby-equilibrium} regime, Glassey and Strauss \cite{MR1211782} established a unique global solution in $\mathbb{T}^3$ for the hard potential case, and it was extended to the whole space case via the fourteen-moment compensating function in \cite{MR1321370}.
For the soft potential case, the global existence and the asymptotic behavior were established in the torus in \cite{MR2728733} and extended to the whole space in \cite{MR2911100}.
In \cite{MR2891870}, Guo and Strain obtained the stability of the relativistic Vlasov-Maxwell-Boltzmann system via the use of two different coordinate representations for the post-collisional momenta in order to resolve the issue of the momentum singularity.
In \cite{duan2017relativistic}, Duan and Yu provided the global existence of solutions in the weighted $L^\infty$ framework. Recently, Wang \cite{wang2018global} showed the global wellposedness of the relativistic Boltzmann equation with large amplitude initial data. In the case of Coulombic interaction, Strain and Guo \cite{MR2100057} constructed a unique global-in-time classical solution for the relativistic Landau-Maxwell system. The reduction of the relativistic collision operator using the \textit{center-of-momentum} frame can be found in \cite{MR1958975,MR2765751}.
Andr\'easson et al. \cite{MR2102321} proved the finite-time blowup of the gain term when the initial data starts with the characteristic ball. The uniform $L^1$ stability of mild solutions was established in \cite{MR2982812} when the initial data is sufficiently small and decays exponentially fast. In \cite{MR3880739}, Jang and Yun proved a regularizing estimate of the gain term. For the spatially homogeneous case, Strain and Yun obtained useful inequalities about the relativistic pre- and post-collisional velocities and proved the existence of solutions in \cite{MR3166961}. The uniform $L^{\infty}$ bounds of the solution were established in \cite{jang2019propagation} and the $L^p$ bounds in \cite{Jang-Yun-Lp}.
For the Newtonian limit of the relativistic particles, we refer to \cite{MR2679588}.
The mathematical studies on the non-relativistic quantum Boltzmann equation are also limited. We would like to mention that Nordheim in $1928$ and Uehling and Uhlenbeck in $1933$ discussed the non-relativistic quantum Boltzmann equation in \cite{kikuchi1930kinetische} and \cite{uehling1933transport}, respectively.
Benedetto et al. \cite{MR2301288,MR2357423} showed the rigorous validity of the quantum Boltzmann equation from the $N$-body Schr\"{o}dinger equation in the weak coupling regime. In the spatially homogeneous case for Fermi-Dirac particles, Lu classified the quantum equilibria in \cite{MR1861208}. In \cite{MR2029003}, Lu and Wennberg established the strong stability in $L^1$ and proved the convergence of the solution to the equilibrium. A weak solution to the equation was constructed in the soft potential case \cite{MR2264618,MR2433484}.
For the spatially homogeneous case, several studies on bosons can be found in \cite{MR3493188,MR3215584,MR3906275,MR1751703,MR2096049,MR2157856,MR3038680,MR3217534,MR3451497,MR2811470}. For general mathematical and physical reviews, we refer to \cite{MR2997586,MR1313028,MR1307620,MR1898707,MR0258399,MR2525118,MR1379589,MR1908664,MR2000470,MR2095473,MR2366140,MR1942465}.
\subsection{Notations}\label{sec:notations} Before introducing our main results, we define several notations for the relativistic quantities. First of all, we remark that we normalize all physical constants to $1$ throughout the paper, including the speed of light $c$. We denote the energy-momentum $4$-vector by $p^{\mu}$ and usually write it as $p^{\mu}=(p^0,p^1,p^2,p^3)$. The energy-momentum $4$-vector with the lower index is obtained via the Minkowski metric $p_{\mu}=\eta_{\mu \nu}p^{\nu}$, where the Minkowski metric is given by $\eta_{\mu \nu} =\mathrm{diag}(-1,1,1,1)$.
The inner product of energy-momentum $4$-vectors $p^{\mu}$ and $q_{\mu}$ is defined via the Minkowski metric:
\begin{align*}
p^{\mu}q_{\mu} = \eta_{\mu \nu}p^{\mu}q^{\nu} = -p^0q^0 +\sum_{i=1}^3p^iq^i.
\end{align*}
The energy of a relativistic particle is given by $p^0=\sqrt{1+|p|^2}$. Then we can see that the inner product of an energy-momentum $4$-vector with itself is $p^{\mu}p_{\mu}=-1$. We note that the inner product of energy-momentum $4$-vectors is Lorentz invariant: $p^{\mu}q_{\mu}=(\Lambda p)^{\mu}(\Lambda q)_{\mu}$, where $\Lambda$ is a Lorentz transform which will be defined below.
We define the relative energy $s$ between two energy-momentum 4-vectors $p^\mu$ and $q^\mu$ as
\begin{align}\label{s}
s(p^{\mu},q^{\mu}) = -(p^{\mu}+q^{\mu})(p_{\mu}+q_{\mu}) = -2p^{\mu}q_{\mu}+2,
\end{align}
and the relative momentum $g$ between them as
\begin{align}\label{g}
g(p^{\mu},q^{\mu}) = \sqrt{(p^{\mu}-q^{\mu})(p_{\mu}-q_{\mu})} = \sqrt{2(-p^{\mu}q_{\mu}-1)}.
\end{align}
Note that $s=g^2+4$. Similarly, we define the relative momenta between $p^{\mu}$ and the post-collisional momenta $p'^\mu$ and $q'^\mu$ as $\bar{g}= g(p^{\mu},p'^{\mu})$ and $\tilde{g}= g(p^{\mu},q'^{\mu})$, respectively. Also, we define $\bar{s}= s(p^{\mu},p'^{\mu})$ and $\tilde{s}= s(p^{\mu},q'^{\mu})$, so that
$\bar{s}=\bar{g}^2+4$ and $\tilde{s}=\tilde{g}^2+4$.
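These invariants are easy to check numerically. The sketch below (illustrative, in units with $m=c=1$; the helper names are our own) computes $s$ and $g$ from two momenta via the definitions above and confirms $s=g^2+4$:

```python
import numpy as np

def energy(p):
    """p^0 = sqrt(1 + |p|^2) in units with m = c = 1."""
    p = np.asarray(p, float)
    return np.sqrt(1.0 + p @ p)

def minkowski(pmu, qmu):
    """p^mu q_mu with metric diag(-1, 1, 1, 1)."""
    return -pmu[0] * qmu[0] + pmu[1:] @ qmu[1:]

def s_and_g(p, q):
    """Relative energy s = -(p+q)^mu (p+q)_mu and
    relative momentum g = sqrt(2(-p^mu q_mu - 1))."""
    pmu = np.array([energy(p), *np.asarray(p, float)])
    qmu = np.array([energy(q), *np.asarray(q, float)])
    s = -minkowski(pmu + qmu, pmu + qmu)
    g = np.sqrt(2.0 * (-minkowski(pmu, qmu) - 1.0))
    return s, g
```

Since both momenta lie on the mass shell ($p^{\mu}p_{\mu}=-1$), the two definitions are consistent and $s\geq 4$ always holds.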
\subsection{Conservation laws and the Boltzmann $H$-theorem}
The conservation law of the pre-post collisional energy-momentum $4$-vectors is given by
\begin{align}\label{conserv}
p^{\mu}+q^{\mu}=p'^{\mu}+q'^{\mu}.
\end{align}
The inner product with itself gives
\begin{align*}
(p^{\mu}+q^{\mu})(p_{\mu}+q_{\mu}) = (p'^{\mu}+q'^{\mu})(p'_{\mu}+q'_{\mu}).
\end{align*}
Using $p^{\mu}p_{\mu}=p'^{\mu}p'_{\mu}=q^{\mu}q_{\mu}=q'^{\mu}q'_{\mu}=-1$, we have $p^{\mu}q_{\mu}=p'^{\mu}q'_{\mu}$. Similarly, we can have $p^{\mu}p'_{\mu}=q^{\mu}q'_{\mu}$ and $p^{\mu}q'_{\mu}=p'^{\mu}q_{\mu}$, which implies
\begin{align}\label{gsymme}
\begin{split}
g&=g(p^{\mu},q^{\mu})=g(p'^{\mu},q'^{\mu}), \cr
\bar{g}&= g(p^{\mu},p'^{\mu}) =g(q^{\mu},q'^{\mu}), \cr
\tilde{g}&= g(p^{\mu},q'^{\mu}) =g(p'^{\mu},q^{\mu}).
\end{split}
\end{align}
It then turns out \cite[Proposition 2.7]{Jang-Yun-Lp} that \eqref{conserv} further implies the following \textit{Pythagorean}-type relation:
\begin{align}\label{g triangle}
g^2=\bar{g}^2+\tilde{g}^2.
\end{align}
The scattering angle $\theta$ in the kernel $\sigma(g,\theta)$ in \eqref{W} is defined by
\begin{align}\label{cos}
\cos\theta = \frac{(p^{\mu}-q^{\mu})(p'_{\mu}-q'_{\mu})}{g^2}.
\end{align}
By \eqref{gsymme} and \eqref{g triangle}, $\cos \theta$ can also be written in the following form \cite[page 277, (A.11)]{MR635279}:
\begin{align}\label{cosa}
\cos\theta= 1-2\frac{\bar{g}^2}{g^2}.
\end{align}
Now we introduce the \textit{center-of-momentum} framework. The relativistic pre-post collisional momenta $p$, $q$ and $p'$, $q'$ satisfying the conservation law \eqref{conserv} can be expressed as
\begin{equation}\label{com}\begin{split}
p'&=\frac{p+q}{2}+\frac{g}{2}\left(w+(\gamma-1)(p+q)\frac{(p+q)\cdot w}{|p+q|^2}\right), \cr
q'&=\frac{p+q}{2}-\frac{g}{2}\left(w+(\gamma-1)(p+q)\frac{(p+q)\cdot w}{|p+q|^2}\right),
\end{split}\end{equation}
where $\gamma=(p^0+q^0)/\sqrt{s}$, and $w$ denotes a unit vector on the sphere $\mathbb{S}^2$:
\begin{align}\label{w}
w = (\sin\theta\cos\phi,\sin\theta\sin\phi, \cos\theta),
\end{align}
with $\theta \in[0,\pi]$ and $\phi \in[0,2\pi]$. The post-collisional energies $p'^0$ and $q'^0$ are given by
\begin{align*}
p'^0&=\frac{p^0+q^0}{2}+\frac{g}{2\sqrt{s}}(p+q)\cdot w, \cr
q'^0&=\frac{p^0+q^0}{2}-\frac{g}{2\sqrt{s}}(p+q)\cdot w.
\end{align*}
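The conservation laws encoded in this parametrization can be verified numerically. The following sketch (illustrative, units with $m=c=1$; names are our own) constructs $p'$ and $q'$ and checks momentum and energy conservation, the invariance of $g$, and the Pythagorean relation $g^2=\bar{g}^2+\tilde{g}^2$:

```python
import numpy as np

def post_collisional(p, q, theta, phi):
    """Center-of-momentum parametrization of the post-collisional momenta
    (sign chosen so that p' and q' stay on the mass shell)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p0, q0 = np.sqrt(1 + p @ p), np.sqrt(1 + q @ q)
    s = (p0 + q0) ** 2 - (p + q) @ (p + q)   # relative energy, s = g^2 + 4
    g = np.sqrt(s - 4.0)                      # relative momentum
    gamma = (p0 + q0) / np.sqrt(s)
    w = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    pq = p + q
    shift = 0.5 * g * (w + (gamma - 1.0) * pq * (pq @ w) / (pq @ pq))
    return pq / 2 + shift, pq / 2 - shift
```

Momentum conservation $p'+q'=p+q$ is built into the formulas; the non-trivial checks are energy conservation (i.e., that both outputs remain on the mass shell) and the invariance of $g$.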
Dividing each side of \eqref{RQBE0} by $p^0$, we can rewrite the relativistic quantum Boltzmann equation as follows:
\begin{align}\label{RQBE}
\begin{split}
\partial_tF+\hat{p}\cdot\nabla_x F&=Q(F,F,F,F), \cr
F(x,p,0)&=F_0(x,p).
\end{split}
\end{align}
Note that $Q(F,F,F,F)=\frac{1}{p^0}C(F)$. We generally denote the normalized momentum as $\hat{p}$ where
\begin{align*}
\hat{p}=\frac{p}{p^0}=\frac{p}{\sqrt{1+|p|^2}}.
\end{align*}
In this framework, the relativistic quantum collision operator is reduced to the following form \cite{MR2765751}:
\begin{multline}\label{Q}
Q(F_1,F_2,F_3,F_4)=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw ~ v_{\o}(p^{\mu},q^{\mu}) ~ \sigma(g,\theta)\\\times \big[F_1(p')F_2(q')(1+\tau F_3(p))(1+\tau F_4(q))
-(1+\tau F_1(p'))(1+\tau F_2(q'))F_3(p)F_4(q)\big],
\end{multline}
where the M$\o$ller velocity $v_{\o}$ is given by
\begin{align*}
v_{\o}(p^{\mu},q^{\mu}) =\sqrt{\bigg|\frac{p}{p^0}-\frac{q}{q^0}\bigg|^2-\bigg|\frac{p}{p^0}\times \frac{q}{q^0}\bigg|^2}= \frac{g\sqrt{s}}{2p^0q^0}.
\end{align*}
One of the properties that the relativistic quantum collision operator satisfies is the following identity
\begin{align*}
\int_{\mathbb{R}^3}dp ~Q(F,F,F,F)\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right) =0 ,
\end{align*}
which implies the conservation laws of the total mass, momentum and energy:
\begin{align}\label{NPE0}
\frac{d}{dt}\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~F\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right) = 0.
\end{align}
The $H$-theorem for the relativistic quantum Boltzmann equation is established in \cite{MR1958975} as
\begin{align*}
\frac{d}{dt}\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~ \big[F\ln F -\tau^{-1} (1+\tau F)\ln(1+\tau F)\big] \leq 0 .
\end{align*}
\subsection{Global equilibria}\label{sec:globaleq} It has been shown in \cite{MR2145021} that the global equilibria for the relativistic quantum Boltzmann equation \eqref{RQBE0} have the form
$$\mathcal{F}(p)=\frac{1}{e^{\nu(p)}-\tau },\text{ for } \nu(p)=ap^0+b\cdot p+c,$$ for some constants $a\in\mathbb{R}$, $b\in\mathbb{R}^3$, and $c\in\mathbb{R}$. In this paper, we rescale the problem and consider the situation where the macroscopic mean velocity vanishes: $b=0$. Then we can define the global equilibrium $m(p)$ in the form
\begin{align} \label{m}
m(p)= \frac{1}{e^{ap^0+c}-\tau },
\end{align}
where $a>0$ and $c\geq -a$ in the case of bosons, and $a>0$ and $c\in\mathbb{R}$ in the case of fermions. In this paper, we only consider $a>0$ and $c>-a$ in the case of bosons in order to exclude a possible blow-up of the equilibrium. We choose and fix the constants $a$ and $c$ and consider initial distributions $F_0$ near the equilibrium $m(p)$. We will then prove that a particle distribution $F$ whose initial distribution $F_0$ is sufficiently close to the relativistic quantum global equilibrium $m(p)$ converges to $m(p)$ in the $H^N_x L^2_v$ sense for $N\ge 3$.
We also denote the non-quantum relativistic equilibrium $J(p)$ as
\begin{align} \label{J}
J(p)= e^{-ap^0}.
\end{align}
\subsection{Spaces}
Now we define some notations for norms and inner products which are frequently used throughout this paper. The constant $C$ is used generically, and its value can change from line to line. In particular, when we want to indicate the dependence on $a$, we write $C_{a}$.
We define the standard $L^2$ norm as
\begin{align*}
\|f\|_{L^2_p}=\left(\int_{\mathbb{R}^3}dp~|f(p)|^2 \right)^{\frac{1}{2}}, \quad \|f\|_{L^2_{x,p}}=\left(\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~|f(x,p)|^2 \right)^{\frac{1}{2}},
\end{align*}
and we define the weighted $L^2$ norm as\begin{align*}
\|f\|_{\nu}=\left(\int_{\mathbb{R}^3}dp~\nu(p)|f(p)|^2 \right)^{\frac{1}{2}}, \quad \|f\|_{x,\nu}=\left(\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~ \nu(p)|f(x,p)|^2 \right)^{\frac{1}{2}}.
\end{align*}
The standard $L^2$ inner product is given by
\begin{align*}
\langle f,g \rangle_{L^2_p}&= \int_{\mathbb{R}^3}dp~f(p)g(p) , \quad
\langle f,g \rangle_{L^2_{x,p}}=\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f(x,p)g(x,p).
\end{align*}
We use the multi-index notation
\begin{align*}
\alpha=(\alpha_0,\alpha_1,\alpha_2,\alpha_3),
\end{align*}
to simplify the differential operator:
\begin{align*}
\partial^{\alpha}=\partial_{t}^{\alpha_0}\partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\partial_{x_3}^{\alpha_3}.
\end{align*}
For brevity, for the frequently used function $m+\tau m^2$ we often write the variable only once at the end:
\begin{align*}
(m+\tau m^2)(p)=m(p)+\tau (m(p))^2.
\end{align*}
We define the higher-order energy norm as follows:
\begin{align*}
\mathcal{E}(f(t))=\frac{1}{2}\sum_{|\alpha|\leq N}\|\partial^{\alpha}f(t)\|^2_{L^2_{x,p}} + \int_0^t\sum_{|\alpha|\leq N}\|\partial^{\alpha}f(s)\|^2_{x,\nu} ds .
\end{align*}
\subsection{Hypothesis on the collision kernel}
We assume that the differential cross section $\sigma(g,\theta)$ satisfies the hard potential assumption with an angular cut-off as in \cite{MR933458,dudynski2007relativistic}:
\begin{align}\label{sigma}
\sigma(g,\theta) = g \sin\theta.
\end{align}This assumption is analogous to the standard hard-sphere assumption for the Newtonian Boltzmann equation; if $|p+q|$ or $|p-q|$ is sufficiently small, or if $|p-q|$ is much larger than $|p^0-q^0|$, then the kernel behaves like the Newtonian hard-sphere kernel.
\subsection{Main results}
In this paper, we prove the existence of a unique global-in-time classical solution of the relativistic quantum Boltzmann equation near the global equilibrium. Before we state our main theorem, we introduce a reformulation of the relativistic quantum Boltzmann equation via a special linearization that is adapted to the relativistic quantum case.
In the non-quantum case, the standard decomposition of the perturbed solution $f$ near the global equilibrium $\mu$ is $F=\mu+\sqrt{\mu}f$ for $\mu=e^{-|p|^2/2}$ or $\mu=e^{-p^0}$ in the Newtonian and the relativistic cases, respectively.
However, in the quantum case, the previous decomposition $F=\mu+\sqrt{\mu}f$ does not guarantee the non-negativity of $\langle Lf, f\rangle_{L^2_{v}}$ as in Lemma \ref{null space}. Thus, inspired by previous work on quantum kinetic models \cite{MR4096124,van1982generalized,MR1773932,MR2902121}, we choose the following decomposition of $F$:
\[F(x,p,t)= m(p)+\sqrt{m(p)+\tau m^2(p)}f(x,p,t),\]
where the global equilibrium $m(p)$ is defined as in \eqref{m}:
\begin{align*}
m(p)= \frac{1}{e^{ap^0+c}-\tau }.
\end{align*}
We plug the decomposition into \eqref{RQBE} and divide each side of the equation by $\sqrt{m+\tau m^2}$ to have
\begin{align}\label{pert1}
\begin{split}
\partial_tf+\hat{p}\cdot\nabla_xf +Lf&= \Gamma(f)+T(f), \cr
f(x,p,0) &= f_0(x,p).
\end{split}
\end{align}
The linear term $Lf$ is given by $$Lf=\nu(p)f+K_1f-K_2f,$$ where $\nu(p)$ is the collision frequency of a relativistic quantum particle, and $K_1$ and $K_2$ are compact operators. The right-hand side of \eqref{pert1} consists of the nonlinear terms $\Gamma(f)$ and $T(f)$, where $\Gamma(f)$ collects all the second-order nonlinear terms and $T(f)$ collects all the third-order nonlinear terms. We can easily check that the fourth-order nonlinear terms disappear by cancellation. In the linearization process, we observe that the collision operator does not satisfy the quad-linearity $Q(k+h,f,f,f)= Q(k,f,f,f)+Q(h,f,f,f)$. For more details of the linearization process, see Section \ref{sec:linearization}. We are now ready to state our main theorem.
\begin{theorem}\label{Main Theorem}
Let $N\geq 3$. Suppose that the initial data $F_0$ satisfies
\begin{align*}
\left\{\begin{array}{ll} 0 \leq F_0(x,p) \leq 1 \quad \mbox{for fermions,} \\ 0 \leq F_0(x,p) \qquad \hspace{3mm} \mbox{for bosons,} \end{array} \right.
\end{align*}
and the global equilibrium $m(p)$ shares the same total mass, momentum and energy with the initial data:
\begin{align}\label{assumption}
\int_{\mathbb{T}^3\times \mathbb{R}^3} dxdp~ F_0(x,p)\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right) = \int_{\mathbb{T}^3\times \mathbb{R}^3} dxdp~m(p)\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right).
\end{align}
Then there exist $\delta>0$ and $C>0$ such that if $\mathcal{E}(f_0)\leq \delta$ then there exists a unique global-in-time solution of \eqref{pert1} such that
\begin{enumerate}
\item The distribution function $F(x,p,t)$ has the following bounds:
\begin{align*}
\left\{\begin{array}{ll} 0 \leq F(x,p,t) \leq 1 \quad \mbox{for fermions,} \\ 0 \leq F(x,p,t) \qquad \hspace{3mm} \mbox{for bosons.} \end{array} \right.
\end{align*}
\item The energy norm is bounded globally in time:
\[\sup_{t\in\mathbb{R}^+}\mathcal{E}(f(t))\leq C\mathcal{E}(f_0).\]
\item There exists a uniform constant $\epsilon>0$ such that the perturbation decays exponentially:
\[\sum_{|\alpha|\leq N}\|\partial^{\alpha}f(t)\|^2_{L^2_{x,p}}\leq Ce^{-\epsilon t}.\]
\item Let $f$ and $\bar{f}$ be the solutions with the initial data $f_0$ and $\bar{f}_0$, respectively. Then there exists a positive constant $\delta>0$ such that
\[\|f-\bar{f} \|_{L^2_{x,p}} \leq e^{-\delta t}\|f_0-\bar{f}_0 \|_{L^2_{x,p}}. \]
\end{enumerate}
\end{theorem}
To prove the main theorem, we need a coercivity estimate of the linear operator $L$. The linear operator $L$ satisfies the following dissipation property:
\begin{align*}
\langle Lf, f\rangle_{L^2_{v}} &\geq \delta \|(I-P)f\|_{\nu}^2,
\end{align*}
for some positive constant $\delta>0$. The macroscopic projection $Pf$ denotes the orthogonal projection in $L^2_p$ onto the subspace spanned by the following $5$ basis functions:
\begin{align}\label{basis1}
\left\{\sqrt{m+\tau m^2},p_1\sqrt{m+\tau m^2},p_2\sqrt{m+\tau m^2},p_3\sqrt{m+\tau m^2},p^0\sqrt{m+\tau m^2}\right\}.
\end{align}
The $5$-dimensional subspace spanned by the basis above constitutes the kernel of $L$.
It is then crucial to obtain upper-bound estimates of the nonlinear terms $\Gamma$ and $T$ in order to construct the solution. Some parts of the nonlinear terms can be estimated by a simple change of variables and the H\"{o}lder inequality. However, for some second-order nonlinear terms whose integrands contain the product of $f(p')$ (or $f(q')$) and $f(p)$, we need to take a change of variables $p\mapsto p'$ (or $p\mapsto q'$), and non-trivial difficulties occur. Different from the non-relativistic case, a uniform positive lower bound for the Jacobian of the change of variables $|\partial p' /\partial p|$ (or $|\partial q' /\partial p|$) does not exist, as shown in \cite{2006.02540}, and hence we need another way to deal with this difficulty. One way is to carry out the estimates of the second-order nonlinear terms by lifting the $dq$ integral to an energy-momentum $4$-vector integral $dq^{\mu}$ with an additional Dirac-delta function. This technique was introduced in \cite{MR635279, Jang2016, jang2019propagation}.
We then reduce the integral by evaluating the Dirac-delta function. In this process, there appears a singularity of $1/\bar{g}$ and an exponential growth in the $p$ variable from $\exp(p^0-p'^0)$. We explain this in more detail in Section \ref{sec:singbarg}.
Once we obtain the estimates of the linear terms and the nonlinear terms, we define an iteration scheme to construct the local-in-time solution as
\begin{align}\label{itera}
(\partial_t +\hat{p}\cdot \nabla_x)F^{n+1}=Q(F^n,F^n,F^{n+1},F^n).
\end{align}
Regarding the case of fermions, the solution $F$ has to be bounded by $1$ from above. Therefore, we also need to prove that the function $F^{n+1}$ is bounded in the closed interval $[0,1]$ at each step of the iteration, based on the induction hypothesis that $F^n$ lies in $[0,1]$. We then obtain that the collision operator $Q(F^n,F^n,F^{n+1},F^n)$ is well-defined.
On the right-hand side of \eqref{itera}, note that we place $F^{n+1}$ in the $p$-variable position of the collision operator $Q$ (i.e., as the third input of $Q(\cdot,\cdot,\cdot,\cdot)$). This is due to the specific structure of the nonlinear operator $Q$ from \eqref{Q}. Once we place $F^{n+1}$ as the third input of $Q$, we can rewrite \eqref{itera} in a clearer form:
\begin{align}\label{itera2}
\{\partial_t +\hat{p}\cdot \nabla_x-\tau G(F^n)+R(F^n)\}F^{n+1}=G(F^n),
\end{align}where $G$ and $R$ are now defined as
\begin{align*}
G(F_1,F_2,F_4)&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)F_1(p')F_2(q')(1+\tau F_4(q)), \cr
R(F_1,F_2,F_4)&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)(1+\tau F_1(p'))(1+\tau F_2(q'))F_4(q).
\end{align*}
Also, we can show that the bounds $0 \leq F^n \leq 1$ imply $G(F^n)\geq 0$ and $R(F^n)\geq 0$, which guarantees the boundedness of $F^{n+1}$.
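To see this propagation of bounds, one may argue along characteristics; the following is only a sketch. Writing \eqref{itera2} along the characteristic line $x(s)=x-(t-s)\hat{p}$, the Duhamel formula gives
\begin{align*}
F^{n+1}(t)=e^{-\int_0^t(R(F^n)-\tau G(F^n))\,ds}F_0+\int_0^t e^{-\int_s^t(R(F^n)-\tau G(F^n))\,d\varsigma}G(F^n)(s)\,ds\geq 0,
\end{align*}
since $G(F^n)\geq 0$. In the fermionic case $\tau=-1$, the function $W=1-F^{n+1}$ satisfies $\{\partial_t+\hat{p}\cdot\nabla_x+G(F^n)+R(F^n)\}W=R(F^n)\geq 0$, so the same argument yields $W\geq 0$, i.e., $F^{n+1}\leq 1$.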
Then the additional linearization $F^{n+1}=m+\sqrt{m+\tau m^2}f^{n+1}$ creates the dissipation term $\nu(p)f^{n+1}$ on the left-hand side of \eqref{itera2}, which yields the exponential decay of the perturbation $f^{n+1}$ in the $L^2$ sense. By an induction argument, we obtain the uniform boundedness of the energy locally in time.
Then the standard way of extending the local existence to a global one is to compensate for the degeneracy of the dissipation of the linear part $L$. Similarly to the Newtonian case, we substitute $f=(I-P)f+Pf$ on each side of \eqref{pert1}, where $Pf$ is defined by the orthogonal projection onto \eqref{basis1}.
Then the expansion of the linear term $(\partial_t+\hat{p}\cdot\nabla_x)Pf$ yields a linear combination with respect to the $14$ basis elements. Thus we can achieve the following coercivity estimate:
\begin{align*}
\sum_{|\alpha|\leq N}\langle L\partial^{\alpha}f, \partial^{\alpha}f\rangle_{L^2_{x,p}} &\geq \delta \sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{x,\nu}^2,
\end{align*}
for some positive constant $\delta>0$. This full coercivity estimate enables the extension of the local-in-time solution to a global-in-time solution.
\subsection{Main difficulties and our strategy}\label{sec:novelty}
In this subsection, we present the difficulties that arise in the estimates of the nonlinear terms. In the relativistic quantum case, there appear new types of nonlinear terms involving both pre- and post-collisional momenta at the same time, such as $f(p)f(p')$ and $f(p)f(q')$. In general, such terms have been expected to appear only in the non-cutoff Boltzmann theory in the non-quantum case. In the non-cutoff Boltzmann theory, the change of variables $q\mapsto p'$ or $q'$ and the cancellation lemma have been considered crucial for understanding the fractional diffusive behavior \cite{MR2784329,Jang2016}.
In the non-relativistic case, these new nonlinear terms can be handled because $|\partial p'/\partial p|$ and $|\partial q'/\partial q|$ are bounded from below, as in \cite{MR1857879,MR2525118}. However, in the relativistic case under the \textit{center-of-momentum} frame, such a positive uniform lower-bound of the Jacobian of the collision map does not exist \cite{2006.02540}.
A remedy for this issue was introduced in \cite{MR635279,MR1321370,MR2728733} in the \textit{center-of-momentum} frame \eqref{com}, where the linear term $K_2$, which exhibits the same difficulty, was computed. Namely, the authors lift the $dq$ integral to the energy-momentum four-vector integral $dq^{\mu}$ by imposing an additional Dirac-delta function, and they take a change of variables via a suitable Lorentz transform. In this paper, we follow a similar technique to deal with the nonlinearity of the terms $\Gamma(f)$ and $T(f)$. In this direction, however, we still encounter several additional difficulties for the nonlinear term involving $f(p)f(p')$, as below. Let us first bound the nonlinear term $\Gamma_{2,1}$ involving $f(p)f(p')$ as
\begin{multline}\label{gamma21o}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|
\leq \int_{\mathbb{R}^3}\frac{dp}{p^0}\int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta) \cr
\times \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})J(q^0)J(p'^0/2)|f(p)| |h(p')| |\eta(p)|,
\end{multline}
and the integral part with respect to the measure $dqdq'$ as
\begin{align}\label{rela B}
B=\int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})J(q^0).
\end{align}
We now present the three main difficulties that arise in the estimates of the nonlinear terms.
\subsubsection{The Lorentz-invariant measure $\frac{dp}{p^0}$ and the nonlinear structure of $s\sigma(g,\theta)$}
The first difficulty arises from the Lorentz-invariant measure $\frac{dp}{p^0}$, which contains the reciprocal of the energy, and from the nonlinear structure of the differential cross section $s\sigma(g,\theta)$ with respect to the collision variables $p$ and $q$. A good way to remove the energy denominators is to lift the $dq$ and $dq'$ integrals to energy-momentum $4$-vector integrals $dq^{\mu}$ and $dq'^{\mu}$ by introducing extra Dirac-delta and unit step functions as
\begin{multline*}
B=\int_{\mathbb{R}^4}dq^{\mu}\int_{\mathbb{R}^4}dq'^{\mu} s\sigma(g,\theta) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}) J(q^0) u(q^0)u(q'^0)\delta(q^{\mu}q_{\mu}+1)\delta(q'^{\mu}q'_{\mu}+1).
\end{multline*}
If we simply eliminate the $dq'^{\mu}$ integral by computing the Dirac-delta function $\delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})$, the remaining integral becomes highly complicated to deal with. Thus, motivated by de Groot et al. \cite{MR635279}, we instead apply the symmetric change of variables $\bar{q}^{\mu}=q^{\mu}+q'^{\mu}$ and $\bar{q}'^{\mu}=q^{\mu}-q'^{\mu}$.
Despite the nonlinear structure of $s$, $g$ and $\cos\theta$ in \eqref{s}, \eqref{g} and \eqref{cos}, respectively, we can represent them in terms that depend only on the variables $p^{\mu}$, $p'^{\mu}$, $\bar{q}^{\mu}$ and $\bar{q}'^{\mu}$ (see Lemma \ref{g comp}) as follows:
\begin{align*}
g_c^2=\bar{g}^2-\frac{1}{2}(p^{\mu}+p'^{\mu})(\bar{q}_{\mu}-p_{\mu}-p'_{\mu}), \quad s_c=g_c^2+4, \quad \cos\theta_c= 1-2\frac{\bar{g}^2}{g_c^2}.
\end{align*}
Then $B$ takes the following form
\begin{align*}
B&=\frac{1}{4}\int_{\mathbb{R}^4\times \mathbb{R}^4}d\Theta(\bar{q}^{\mu},\bar{q}'^{\mu}) s_c\sigma(g_c,\theta_c) \delta^{(4)}(p^{\mu}-p'^{\mu}+\bar{q}'^{\mu}) J\left(\frac{\bar{q}^0+\bar{q}'^0}{2}\right),
\end{align*}
where
\begin{align*}
d\Theta(\bar{q}^{\mu},\bar{q}'^{\mu})&= d\bar{q}^{\mu}d\bar{q}'^{\mu}u(\bar{q}^0)u(\bar{s}-4)\delta((\bar{q}^{\mu}\bar{q}_{\mu}+\bar{q}'^{\mu}\bar{q}'_{\mu})+4)\delta(\bar{q}^{\mu}\bar{q}'_{\mu}).
\end{align*}
By substituting $\bar{q}'^{\mu}=p'^{\mu}-p^{\mu}$, we now reduce the $4$-dimensional Dirac-delta function $\delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})$ in the energy-momentum $4$-vectors and obtain the following more explicit form of $B$:
\begin{multline}\label{B2}
B=\frac{1}{4}\int_{\mathbb{R}^4}d\bar{q}^{\mu}u(\bar{q}^0)u(\bar{s}-4)\delta(\bar{q}^{\mu}\bar{q}_{\mu}+(p'^{\mu}-p^{\mu})(p'_{\mu}-p_{\mu})+4)\cr
\times \delta(\bar{q}^{\mu}(p'_{\mu}-p_{\mu})) s_c\sigma(g_c,\theta_c) J\left(\frac{\bar{q}^0+p'^0-p^0}{2}\right).
\end{multline}
\subsubsection{Singularity with respect to $\bar{g}$}\label{sec:singbarg}
Another difficulty arises from the singularity in the relative momentum $\bar{g}$, which occurs in the reduction of the second Dirac-delta function. We first remark that the second Dirac-delta function of \eqref{B2} consists of an inner product between energy-momentum $4$-vectors. Motivated by the explicit form of the Lorentz transform by Strain \cite{MR2728733}, we apply the Lorentz transform $\Lambda$ which converts $p'_{\mu}-p_{\mu}$ to $(0,0,0,\bar{g})$ (i.e., $\Lambda(p'_{\mu}-p_{\mu})=(0,0,0,\bar{g})$). Then we obtain
\begin{align*}
\delta(\bar{q}^{\mu}(p'_{\mu}-p_{\mu})) = \delta(\Lambda \bar{q}^{\mu}\Lambda(p'_{\mu}-p_{\mu})) = \delta(\bar{q}^3\bar{g}) = \frac{1}{\bar{g}}\delta(\bar{q}^3),
\end{align*}
where we used that the inner product is invariant under the Lorentz transform and that $\delta(ax)=\frac{1}{|a|}\delta(x)$. We can now see that the $\bar{g}$ in the Dirac-delta function comes out as $1/\bar{g}$ and creates an additional singularity. However, motivated by \eqref{cosa}, we use the half-angle formula \cite[page 277, (A.11)]{MR635279} for $\cos\theta$ and observe that
\[\sin^2(\theta/2) = \frac{1-\cos\theta}{2}=\frac{\bar{g}^2}{g^2}.\]
Combining this with the assumption on the differential cross section, we obtain
\begin{align*}
\sigma(g,\theta)=g\sin\theta = 2g\sin\frac{\theta}{2}\cos\frac{\theta}{2} = 2g\sqrt{\frac{1-\cos\theta}{2}}\cos\frac{\theta}{2} = 2\bar{g} \cos\frac{\theta}{2}.
\end{align*}
This allows us to eliminate the singularity of $\bar{g}$.
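Concretely, after the reduction of the Dirac-delta function, the singular factor and the cross section combine as
\begin{align*}
\frac{1}{\bar{g}}\,\sigma(g_c,\theta_c)=\frac{1}{\bar{g}}\cdot 2\bar{g}\cos\frac{\theta_c}{2}=2\cos\frac{\theta_c}{2}\leq 2,
\end{align*}
so that no singularity in $\bar{g}$ remains in the reduced integral.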
\subsubsection{Exponential growth}The last difficulty concerns the last multiplier on the right-hand side of \eqref{B2}. Writing $J\left((\bar{q}^0+p'^0-p^0)/2\right)=J(\bar{q}^0/2)J\left((p'^0-p^0)/2\right)$, the factor $J(\bar{q}^0/2)$ remains inside the $d\bar{q}^0$ integral, while $J\left((p'^0-p^0)/2\right)$ produces an exponential decay in $p'^0$ and, at the same time, an exponential growth in $p^0$. The decay in $p'^0$ is beneficial for the upper-bound estimate, but the exponential growth in $p^0$ is problematic even in \eqref{gamma21o}.
However, we prove that the remaining part of the right-hand side of \eqref{B2} contains the following exponentially decaying factor:
\begin{align*}
\exp\left(-\sqrt{\frac{(p^0+p'^0)^2}{4}-\frac{|p\times p'|^2}{\bar{g}^2}}\right).
\end{align*}
Then, using the estimates in \cite[Lemma 3.1, (iii) and (iv)]{MR1211782}, namely
\begin{align*}
\frac{(p^0+p'^0)^2}{4}-\frac{|p\times p'|^2}{\bar{g}^2} = |p-p'|^2\frac{\bar{g}^2+4}{4\bar{g}^2}
\geq \max\left\{\frac{\bar{g}^2}{4}+1,\frac{1}{4}|p-p'|^2\right\},
\end{align*}
we obtain an exponential decay of $\exp\left(-|p-p'|/2\right)$. Since the difference of the energies $|p'^0-p^0|$ can be bounded by the difference of the momenta $|p'-p|$, the exponential growth is absorbed by the exponential decay as
\begin{align*}
J\left(\frac{p'^0-p^0}{2}\right)\exp\left(-\sqrt{\frac{(p^0+p'^0)^2}{4}-\frac{|p\times p'|^2}{\bar{g}^2}}\right) \leq e^{-\frac{1}{2}(p'^0-p^0)}e^{-\frac{1}{2}|p-p'|} \leq 1.
\end{align*}
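The bound $|p'^0-p^0|\leq |p'-p|$ used here can be verified directly:
\begin{align*}
|p'^0-p^0| = \frac{\big||p'|^2-|p|^2\big|}{p'^0+p^0} = \frac{(|p'|+|p|)\,\big||p'|-|p|\big|}{p'^0+p^0} \leq \big||p'|-|p|\big| \leq |p'-p|,
\end{align*}
since $p^0=\sqrt{1+|p|^2}\geq |p|$.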
Then we obtain properly weighted $L^2$ bounds for the nonlinear terms.
\subsection{Outline of the paper}
This paper is organized as follows.
In Section \ref{sec:linearization}, we linearize the collision operator of the relativistic quantum Boltzmann equation near a global equilibrium. In Section \ref{sec:nonlinear}, we establish several estimates on the linear and the nonlinear terms. Section \ref{sec:localintime} is devoted to constructing the unique local-in-time classical solution. In the last section, we prove the coercivity estimate and establish the global-in-time classical solution.
\section{Linearization of the relativistic quantum Boltzmann equation}\label{sec:linearization}
In this section, we introduce the reformulation of the equation \eqref{RQBE0} via the linearization of the relativistic quantum Boltzmann equation near the global equilibrium:
\begin{align*}
m(p)= \frac{1}{e^{ap^0+c}-\tau }.
\end{align*}
\begin{proposition}\label{linearization}
If we substitute $F=m+\sqrt{m+\tau m^2}f$ in \eqref{RQBE}, then we have
\begin{align*}
\partial_t f + \hat{p}\cdot \nabla_x f +Lf &=\Gamma(f) +T(f)
\end{align*}
where the linear term $Lf$ is decomposed in the following form:
\begin{align*}
Lf&= \nu(p)f +K_1f - K_2f,
\end{align*}
where the collision frequency $\nu(p)$ is given by
\begin{align}\label{nu}
\nu(p) &=\frac{1}{1+\tau m(p)}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)m(q)(1+\tau m(p'))(1+\tau m(q')),
\end{align}
and the compact operators $K_1$ and $K_2$ are defined by
\begin{align}\label{Kf}
\begin{split}
K_1f(p)&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \sqrt{m+\tau m^2(p')}\sqrt{m+\tau m^2(q')}f(q), \cr
K_2f(p)&=2 \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)\sqrt{m+\tau m^2(q)}\sqrt{m+\tau m^2(q')} f(p').
\end{split}
\end{align}
The nonlinear terms $\Gamma(f)$ and $T(f)$ are represented as follows:
\begin{align*}
\Gamma(f)&= \sum_{i=1}^6\Gamma_i(f,f), \quad T(f) = \sum_{i=1}^4T_i(f,f,f).
\end{align*}
The precise definitions of the nonlinear terms are given at the end of the proof.
\end{proposition}
\begin{proof}
We substitute $F=m+\sqrt{m+\tau m^2}f$ in \eqref{RQBE} to have
\begin{align*}
\sqrt{m+\tau m^2}\partial_t f + \sqrt{m+\tau m^2} \hat{p}\cdot \nabla_x f = Q(m+\sqrt{m+\tau m^2}f).
\end{align*}
Dividing each side by $\sqrt{m+\tau m^2}$ gives an equation for the perturbation $f$:
\begin{align}\label{pertf2}
\partial_t f + \hat{p}\cdot \nabla_x f &= \frac{1}{\sqrt{m+\tau m^2}}Q(m+\sqrt{m+\tau m^2}f).
\end{align}
To decompose the right-hand side into linear and nonlinear terms, we first define the zeroth-order-in-$f$ term $Q_0$:
\begin{align*}
Q_0=Q(m,m,m,m).
\end{align*}
As we can see in \eqref{Q}, the quantum collision operator includes $(1+\tau F)$ terms. Because of these terms, $Q$ does not enjoy the quad-linear property; in other words, $Q(k+h,f,f,f)\neq Q(k,f,f,f)+Q(h,f,f,f)$. So we define the following four first-order-in-$f$ terms $Q_1$, $Q_2$, $Q_3$ and $Q_4$:
\begin{align*}
Q_1=Q(m+\sqrt{m+\tau m^2}f,m,m,m)-Q_0, \cr Q_2=Q(m,m+\sqrt{m+\tau m^2}f,m,m)-Q_0, \cr
Q_3=Q(m,m,m+\sqrt{m+\tau m^2}f,m)-Q_0, \cr Q_4=Q(m,m,m,m+\sqrt{m+\tau m^2}f)-Q_0.
\end{align*}
In view of this notation, the collection of the zeroth and the first-order terms in $f$ can be written as
\begin{align*}
\frac{1}{\sqrt{m+\tau m^2}} \left( \sum_{i=1}^4 Q_i +Q_0 \right).
\end{align*}
Thus we divide the right-hand side of \eqref{pertf2} into the zeroth- and first-order terms and the remaining higher-order terms as
\begin{align*}
\frac{1}{\sqrt{m+\tau m^2}}Q(m+\sqrt{m+\tau m^2}f) = -Lf + \Gamma(f),
\end{align*}
where
\begin{align*}
Lf=-\frac{1}{\sqrt{m+\tau m^2}}\sum_{i=1}^4 Q_i-\frac{1}{\sqrt{m+\tau m^2}}Q_0,
\end{align*}
and
\begin{align*}
\Gamma(f)=\frac{1}{\sqrt{m+\tau m^2}}Q(m+\sqrt{m+\tau m^2}f)+Lf.
\end{align*}
We first calculate the linear part $Lf$. By definition of $Q_0$, we have
\begin{multline*}
Q_0=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) \bigg[m(p')m(q')(1+\tau m(p))(1+\tau m(q))\cr
-(1+\tau m(p'))(1+\tau m(q'))m(p)m(q) \bigg] .
\end{multline*}
We observe from the conservation of energy
\[p^0+q^0=p'^0+q'^0,\]
that
\begin{align}\label{equal}
m(p')m(q')(1+\tau m(p))(1+\tau m(q)) =(1+\tau m(p'))(1+\tau m(q')) m(p)m(q),
\end{align}
which gives
\[Q_0=0.\]
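For the reader's convenience, we sketch why \eqref{equal} holds. Dividing both sides by $(1+\tau m(p))(1+\tau m(q))(1+\tau m(p'))(1+\tau m(q'))$ and using $\frac{m(p)}{1+\tau m(p)}=e^{-(ap^0+c)}$, the identity \eqref{equal} is equivalent to
\begin{align*}
e^{-a(p'^0+q'^0)-2c}=e^{-a(p^0+q^0)-2c},
\end{align*}
which holds by the conservation of energy.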
For the first-order linear term, an explicit computation gives
\begin{multline*}
Q_1=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) \bigg[\sqrt{m+\tau m^2(p')}m(q')(1+\tau m(p))(1+\tau m(q)) \cr
-\tau \sqrt{m+\tau m^2(p')}(1+\tau m(q'))m(p)m(q)\bigg]f(p').
\end{multline*}
Applying \eqref{equal} to the first-order term $Q_1$, we have
\begin{multline*}
Q_1=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
\quad \times \left(\frac{\sqrt{m+\tau m^2(p')}}{m(p')}-\tau \frac{\sqrt{m+\tau m^2(p')}}{1+\tau m(p')}\right)f(p'),
\end{multline*}
which is equal to
\begin{align*}
Q_1&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \frac{f(p')}{\sqrt{m+\tau m^2(p')}}.
\end{align*}
Similar computations for $Q_2$, $Q_3$ and $Q_4$, combined with the above, yield
\begin{multline}\label{Lf}
\begin{split}
Lf= \frac{1}{\sqrt{m+\tau m^2(p)}} \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+ \frac{f(q)}{\sqrt{m+\tau m^2(q)}}-\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg).
\end{split}
\end{multline}
We can easily see that the first term of $Lf$ is equal to $\nu f$. We define the second term of $Lf$ as $K_1 f$:
\begin{multline}\label{K1}
K_1f=\frac{m(p)}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \frac{m(q)}{\sqrt{m+\tau m^2(q)}}\\\times(1+\tau m(p'))(1+\tau m(q'))f(q).
\end{multline}
We observe
\begin{align} \label{m/m-m^2}
\frac{m(p)}{\sqrt{m+\tau m^2(p)}}= e^{-\frac{1}{2}(ap^0+c)}.
\end{align}
Combining with the energy conservation law gives
\begin{align}\label{equal2}
\frac{m(p)}{\sqrt{m+\tau m^2(p)}}\frac{m(q)}{\sqrt{m+\tau m^2(q)}}=\frac{m(p')}{\sqrt{m+\tau m^2(p')}}\frac{m(q')}{\sqrt{m+\tau m^2(q')}}.
\end{align}
Applying it to \eqref{K1}, we have
\begin{align*}
K_1f&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \sqrt{m+\tau m^2(p')}\sqrt{m+\tau m^2(q')}f(q).
\end{align*}
We define the collection of the third and the fourth term of $Lf$ in \eqref{Lf} as $ - K_2f$:
\begin{multline}\label{K2feq}
K_2f=\frac{ m(p)}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)m(q)\frac{1+\tau m(p')}{\sqrt{m+\tau m^2(p')}}(1+\tau m(q'))f(p') \cr
\quad+\frac{m(p)}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)m(q)(1+\tau m(p'))\frac{1+\tau m(q')}{\sqrt{m+\tau m^2(q')}}f(q').
\end{multline}
We call the first line of $K_2f$ in \eqref{K2feq} $K_{2,1}f$ and the second line $K_{2,2}f$.
We then write the $dw$ integral of $K_{2,2}$ in the spherical coordinates $w\mapsto(\phi,\theta)$ as in \eqref{w}:
\begin{multline*}
K_{2,2}f=\frac{m(p)}{\sqrt{m+\tau m^2(p)}} \int_{\mathbb{R}^3}dq \int_{0}^{2\pi}d\phi\int_{0}^{\pi}\sin\theta ~d\theta~ v_{\o}\sigma(g,\theta)\\\times m(q)(1+\tau m(p'))\frac{1+\tau m(q')}{\sqrt{m+\tau m^2(q')}}f(q').
\end{multline*}
Then we apply the change of variables $\theta \rightarrow \pi-\theta$ and $\phi\rightarrow \pi+\phi$. This change of variables exchanges the roles of $p'$ and $q'$, since $w$ in \eqref{w} changes into $-w$. Thus we have
\begin{multline*}
K_{2,2}f=\frac{m(p)}{\sqrt{m+\tau m^2(p)}} \int_{\mathbb{R}^3}dq \int_{\pi}^{3\pi}d\phi\int_{\pi}^{0}\sin\theta ~(-d\theta) \cr
\times v_{\o}\sigma(g,\pi-\theta)m(q)(1+\tau m(q'))\frac{1+\tau m(p')}{\sqrt{m+\tau m^2(p')}}f(p').
\end{multline*}
By the assumption on the differential cross section $\sigma(g,\theta)$ in \eqref{sigma}, we have $\sigma(g,\theta)=g\sin\theta =\sigma(g,\pi-\theta)$. This further gives
\begin{align*}
K_{2,2}f&=\frac{m(p)}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw ~ v_{\o}\sigma(g,\theta)m(q)(1+\tau m(q'))\frac{1+\tau m(p')}{\sqrt{m+\tau m^2(p')}}f(p').
\end{align*}
This shows that $K_{2,2}=K_{2,1}$. Thus we have
\begin{align}\label{K2}
\begin{split}
K_2f&=2 K_{2,1}f\\
&=2\frac{ m(p)}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)m(q)\frac{1+\tau m(p')}{\sqrt{m+\tau m^2(p')}}(1+\tau m(q'))f(p').
\end{split}
\end{align}
Similarly, we observe that
\begin{align} \label{1-m/m-m^2}
\frac{1+\tau m(p)}{\sqrt{m+\tau m^2(p)}}= e^{\frac{1}{2}(ap^0+c)}.
\end{align}
Combining with \eqref{m/m-m^2}, we have
\begin{align}\label{equal3}
\frac{m(p)}{\sqrt{m+\tau m^2(p)}}\frac{1+\tau m(p')}{\sqrt{m+\tau m^2(p')}}=\frac{1+\tau m(q)}{\sqrt{m+\tau m^2(q)}}\frac{m(q')}{\sqrt{m+\tau m^2(q')}}.
\end{align}
Substituting it in \eqref{K2}, we have
\begin{align*}
K_2f&=2\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)\sqrt{m+\tau m^2(q)}\sqrt{m+\tau m^2(q')} f(p').
\end{align*}
This completes the derivation of the linear terms $\nu f$, $K_1f$ and $K_2f$.
Now we consider the nonlinear terms. Since the quantum collision operator is not quad-linear (recall that $Q(k+h,f,f,f)\neq Q(k,f,f,f)+Q(h,f,f,f)$), the second-order nonlinear terms are complicated to represent. Thus, we first observe one of the second-order nonlinear terms, involving $f(p')$ and $f(q')$:
\begin{multline*}
\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw ~ \frac{v_{\o}\sigma(g,\theta)}{\sqrt{m+\tau m^2(p)}}\\\times \big[\sqrt{m+\tau m^2(p')}f(p')\sqrt{m+\tau m^2(q')}f(q')(1+\tau m(p))(1+\tau m(q)) \cr
-\sqrt{m+\tau m^2(p')}f(p')\sqrt{m+\tau m^2(q')}f(q')m(p)m(q)\big].
\end{multline*}There are six second-order nonlinear terms of this kind; the number six comes from the number of ways of choosing two variables among the four variables $p$, $q$, $p'$, and $q'$. We represent all six second-order nonlinear terms as follows:
\begin{align}\label{Gamma}
\begin{split}
\Gamma(f,h) &= \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw \frac{v_{\o}\sigma(g,\theta)}{\sqrt{m+\tau m^2(p)}} \cr
&\times \bigg[(m(p')m(q')-(1+\tau m(p'))(1+\tau m(q')))\sqrt{m+\tau m^2(p)}f(p)\sqrt{m+\tau m^2(q)}h(q) \cr
&\quad +\tau (m(q')(1+\tau m(q))-m(q)(1+\tau m(q')))\sqrt{m+\tau m^2(p)}f(p)\sqrt{m+\tau m^2(p')}h(p') \cr
&\quad +\tau (m(q')(1+\tau m(p))-m(p)(1+\tau m(q')))\sqrt{m+\tau m^2(q)}f(q)\sqrt{m+\tau m^2(p')}h(p') \cr
&\quad +\tau (m(p')(1+\tau m(q))-m(q)(1+\tau m(p')))\sqrt{m+\tau m^2(p)}f(p)\sqrt{m+\tau m^2(q')}h(q') \cr
&\quad +\tau (m(p')(1+\tau m(p))-m(p)(1+\tau m(p')))\sqrt{m+\tau m^2(q)}f(q)\sqrt{m+\tau m^2(q')}h(q') \cr
&\quad +((1+\tau m(p))(1+\tau m(q))-m(p)m(q))\sqrt{m+\tau m^2(p')}f(p')\sqrt{m+\tau m^2(q')}h(q')\bigg]\cr
&=\Gamma_1(f,h)+\Gamma_2(f,h)+\Gamma_3(f,h)+\Gamma_4(f,h)+\Gamma_5(f,h)+\Gamma_6(f,h).
\end{split}
\end{align}
When $f=h$, we also write
\begin{align*}
\Gamma(f) &= \sum_{1 \leq i \leq 6}\Gamma_i(f,f).
\end{align*}
Lastly, we consider the following third-order nonlinear terms involving $f(p')$, $f(q')$ and $f(p)$:
\begin{multline*}
\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw ~ v_{\o} \sigma(g,\theta)\big[\tau \sqrt{m+\tau m^2(p')}f(p')\sqrt{m+\tau m^2(q')}f(q')f(p)(1+\tau m(q)) \cr
-\sqrt{m+\tau m^2(p')}f(p')\sqrt{m+\tau m^2(q')}f(q')f(p)m(q)\big].
\end{multline*}
Similarly, we represent all of the third-order nonlinear terms ($4$ nonlinear terms) as
\begin{align}\label{T}
\begin{split}
T_1(f,h,\eta)&= -\tau \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) f(p)\sqrt{m+\tau m^2(q)}h(q)\sqrt{m+\tau m^2(p')}\eta(p'), \cr
T_2(f,h,\eta)&= -\tau \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) f(p)\sqrt{m+\tau m^2(q)}h(q)\sqrt{m+\tau m^2(q')}\eta(q'), \cr
T_3(f,h,\eta)&= \tau \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) f(p)\sqrt{m+\tau m^2(p')}h(p')\sqrt{m+\tau m^2(q')}\eta(q'), \cr
T_4(f,h,\eta)&= \tau \frac{1}{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta) \sqrt{m+\tau m^2(q)}f(q) \cr
&\quad \times \sqrt{m+\tau m^2(p')}h(p')\sqrt{m+\tau m^2(q')}\eta(q').
\end{split}
\end{align}
Similarly, we define
\begin{align*}
T(f)&= \sum_{1 \leq i \leq 4}T_i(f,f,f).
\end{align*}
We can easily check that the fourth-order nonlinear terms cancel.
\end{proof}
By Proposition \ref{linearization}, the substitution $F=m+\sqrt{m+\tau m^2}f$ in \eqref{RQBE} yields the linearized equation for the relativistic quantum Boltzmann model \eqref{RQBE} as follows:
\begin{align}\label{pertf}
\begin{split}
\partial_tf+\hat{p}\cdot\nabla_xf +Lf&= \Gamma(f)+T(f), \cr
f(x,p,0) &= f_0(x,p),
\end{split}
\end{align}
where $f_0(x,p) = (F_0(x,p)-m)/\sqrt{m+\tau m^2}$. Then the conservation laws \eqref{NPE0} can be written in the following form:
\begin{align}\label{consf}
\begin{split}
\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f(x,p,t)\sqrt{m+\tau m^2} &= \int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f_0(x,p)\sqrt{m+\tau m^2}, \cr
\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f(x,p,t)p\sqrt{m+\tau m^2} &= \int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f_0(x,p)p\sqrt{m+\tau m^2}, \cr
\int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f(x,p,t)p^0\sqrt{m+\tau m^2} &= \int_{\mathbb{T}^3}dx\int_{\mathbb{R}^3}dp~f_0(x,p)p^0\sqrt{m+\tau m^2}.
\end{split}
\end{align}
Now we state several useful properties of the linear term $Lf$.
\begin{lemma}\label{symmetric Lf} For any smooth function $\phi$, we have
\begin{align*}
&\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg)\phi(p) \cr
&=\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg)\phi(q) \cr
&=\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg)(-\phi(p')) \cr
&=\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg)(-\phi(q')).
\end{align*}
\end{lemma}
\begin{proof}
Note that $m(p)m(q)(1+\tau m(p'))(1+\tau m(q'))$ is invariant under the change of variables $(p,q) \leftrightarrow (p',q')$ from \eqref{equal}. Thus, by the change of variables $p \leftrightarrow q$ and $(p,q) \leftrightarrow (p',q')$ introduced in \cite{MR1379589}, we have the desired results.
\end{proof}
\begin{lemma}\label{null space} We have the following properties for the linear operator $L$:
\begin{enumerate}
\item $L$ is a symmetric operator: $\langle Lf, g \rangle_{L^2_p}=\langle f, Lg \rangle_{L^2_p}$.
\item $Lf=0$ if and only if $f=Pf$.
\end{enumerate}
where $Pf$ is defined as the orthogonal projection onto the subspace of $L^2_p$ spanned by the following $5$-dimensional basis:
\begin{align*}
\left\{\sqrt{m+\tau m^2},p_1\sqrt{m+\tau m^2},p_2\sqrt{m+\tau m^2},p_3\sqrt{m+\tau m^2},p^0\sqrt{m+\tau m^2}\right\}.
\end{align*}
\end{lemma}
\begin{proof}
(1) We take an inner product of $g$ with $Lf$ of \eqref{Lf} to have
\begin{align*}
\int_{\mathbb{R}^3}dp~ g Lf &= \int_{\mathbb{R}^3}dp\frac{g(p)}{\sqrt{m+\tau m^2(p)}} \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
& \times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg).
\end{align*}
By Lemma \ref{symmetric Lf}, we substitute
\[\phi(p)=\frac{1}{\sqrt{m+\tau m^2(p)}}g(p) \]
and obtain
\begin{align*}
\int_{\mathbb{R}^3}dp~ gLf &= \frac{1}{4}\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \left(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\right) \cr
&\times \left(\frac{g(p)}{\sqrt{m+\tau m^2(p)}}+\frac{g(q)}{\sqrt{m+\tau m^2(q)}} -\frac{g(p')}{\sqrt{m+\tau m^2(p')}}-\frac{g(q')}{\sqrt{m+\tau m^2(q')}}\right).
\end{align*}
This proves the symmetry of $L$. \newline
(2) Substituting $g=f$ in the equation above, we have
\begin{align*}
\int_{\mathbb{R}^3}dp~ fLf &= \frac{1}{4}\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) m(p)m(q)(1+\tau m(p'))(1+\tau m(q')) \cr
&\times \bigg(\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}} -\frac{f(p')}{\sqrt{m+\tau m^2(p')}}
-\frac{f(q')}{\sqrt{m+\tau m^2(q')}}\bigg)^2 \cr
& \geq 0 .
\end{align*}
Therefore $Lf =0$ implies
\begin{align*}
\frac{f(p)}{\sqrt{m+\tau m^2(p)}}+\frac{f(q)}{\sqrt{m+\tau m^2(q)}}= \frac{f(p')}{\sqrt{m+\tau m^2(p')}}
+\frac{f(q')}{\sqrt{m+\tau m^2(q')}},
\end{align*}
which is satisfied if and only if $f=Pf$.
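Indeed, setting $\phi=f/\sqrt{m+\tau m^2}$, the relation above states that $\phi$ is a collision invariant, and the standard characterization of collision invariants then forces
\begin{align*}
\phi(p)=c_0+c_1\cdot p+c_2p^0
\end{align*}
for some constants $c_0,c_2\in\mathbb{R}$ and $c_1\in\mathbb{R}^3$, which is precisely the statement $f=Pf$.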
Moreover, $L(Pf)=0$, which gives the desired result.
\end{proof}
\section{Estimates of the linear and the nonlinear terms}\label{sec:nonlinear}
This section is devoted to proving several estimates of the linear and the nonlinear terms. In particular, we emphasize that the estimates of the nonlinear term $\Gamma_2$ involve the main difficulties mentioned in Section \ref{sec:novelty}. Before turning to these, we present some useful properties that are commonly used throughout this paper.
\begin{lemma}\label{mJ} There exist positive constants $C_1>0$ and $C_2>0$ such that
\begin{align*}
C_1J(p^0)\leq m(p) \leq C_2 J(p^0).
\end{align*}
\end{lemma}
\begin{proof}
In the fermionic case, $C_1= 1/(1+e^c)$ and $C_2=e^{-c}$ give the desired estimates:
\begin{align*}
\frac{1}{e^c+1}\frac{1}{e^{ap^0}} \leq \frac{1}{e^{ap^0+c}+1} \leq \frac{1}{e^{ap^0+c}}.
\end{align*}
In the case of bosons, we choose $C_1=e^{-c}$ and $C_2=1/(e^c-e^{-a})$. Since $c>-a$, we have $C_2<\infty$, and we get$$
\frac{1}{e^{ap^0+c}} \leq \frac{1}{e^{ap^0+c}-1} \leq \frac{1}{e^c-e^{-a}}\frac{1}{e^{ap^0}} .
$$
\end{proof}
By this lemma, the relativistic quantum equilibrium $m(p)$ can be treated like the non-quantum relativistic equilibrium $J(p^0)$. Now we present the estimates for the collision frequency $\nu(p)$ and the operators $K_1$ and $K_2$.
\begin{lemma} There exists a positive constant $C>0$ such that
\begin{align*}
\frac{1}{C}(p^0)^{\frac{1}{2}} \leq \nu(p) \leq C(p^0)^{\frac{1}{2}}.
\end{align*}
Moreover, the integral operators $K_1$ and $K_2$ are compact on $L^2_p$.
\end{lemma}
\begin{proof}
By Lemma \ref{mJ},
\begin{align}\label{1+m}
1+\tau m(p) = \frac{e^{ap^0+c}}{e^{ap^0+c}-\tau} = \frac{e^cm(p)}{J(p^0)} \leq C.
\end{align}
Then by \eqref{1+m}, we have the upper bound of $\nu$ as
\begin{align*}
\nu(p) &\leq C \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)m(q).
\end{align*}
Similarly we have
\begin{align}\label{m-m^2}
\sqrt{m+\tau m^2(p)}= \frac{e^{\frac{1}{2}(ap^0+c)}}{e^{ap^0+c}-\tau } \leq C e^{\frac{1}{2}ap^0} = CJ(p^0/2),
\end{align}
which yields
\begin{align*}
|K_1f(p)|&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(p'^0/2)J(q'^0/2)|f(q)|, \text{ and}\\
|K_2f(p)|&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)J(q^0/2)J(q'^0/2) |f(p')|.
\end{align*}
We note that the upper bounds of $\nu$ and $K_i$ are identical to those of the non-quantum relativistic linear terms in \cite{MR933458}. Therefore we obtain the desired results.
\end{proof}
\begin{lemma}\label{coercivity} The linear operator $L$ satisfies the following dissipation property for some positive constant $\delta$:
\[ \langle Lf, f\rangle_{L^2_{p}} \geq \delta \|(I-P)f\|_{\nu}^2.\]
\end{lemma}
\begin{proof} Since $K_1$ and $K_2$ are compact and $\nu$ satisfies the previous lemma, we obtain the desired results by following the proof of Lemma 3.4 of \cite{MR2728733}.
\end{proof}
\subsection{Estimates of the second-order nonlinear terms}
In this subsection, we establish the estimates on the second-order nonlinear terms.
\begin{lemma}\label{nonlin ff}
We have
\begin{align*}
\big|\langle \Gamma(f,h) ,\eta \rangle_{L^2_p} \big| \leq C \left(\|f\|_{L^2_p}\|h\|_{\nu}+\|f\|_{\nu}\|h\|_{L^2_p}\right)\|\eta\|_{\nu}.
\end{align*}
\end{lemma}
\begin{proof}
As we can see in \eqref{Gamma}, there are six second-order nonlinear terms. By the change of variables $p' \leftrightarrow q'$, we can see that $\Gamma_2(f,h)=\Gamma_4(f,h)$ and $\Gamma_3(f,h)=\Gamma_5(f,h)$. Thus we write the proof in three parts. Firstly, we prove the estimates of $\Gamma_1(f,h)$ and $\Gamma_6(f,h)$, since they are similar to the non-quantum relativistic case. Secondly, we present the estimate of $\Gamma_3(f,h)$. Lastly, we prove the most difficult part, the estimate of $\Gamma_2(f,h)$, for which we need several techniques to deal with the difficulties that arise from the relativistic integrals.\newline
{\bf (1) Estimates of $\Gamma_1$ and $\Gamma_6$:} We first consider the $\Gamma_1(f,h)$ term. By \eqref{1+m}, \eqref{m-m^2}, and Lemma \ref{mJ}, we have
\begin{align}\label{gamma1}
\big| \langle \Gamma_1(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw ~ v_{\o}\sigma(g,\theta) J(q^0/2)|f(p)| |h(q)| |\eta(p)|.
\end{align}
By the H\"{o}lder inequality, we have
\begin{align*}
\big| \langle \Gamma_1(f,h) ,\eta \rangle_{L^2_p} \big|& \leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{S}^2}dw \left(\int_{\mathbb{R}^3}dq~v_{\o}^2\sigma^2(g,\theta)J(q^0)\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^3}dq|h(q)|^2\right)^{\frac{1}{2}} |f(p)| |\eta(p)|.
\end{align*}
For the first $dq$ integral, we recall the definition of $v_{\o}$ and $\sigma(g,\theta)$:
\begin{align*}
v_{\o}= \frac{g\sqrt{s}}{p^0q^0}, \qquad \sigma(g,\theta)=g\sin\theta.
\end{align*}
By an explicit computation, we have
\begin{align*}
s&=2+2p^0q^0-2p\cdot q
~\leq~ 2+2p^0q^0+2|p||q|
~\leq~ 4p^0q^0,
\end{align*}
where the last inequality follows from $p^0q^0=\sqrt{(1+|p|^2)(1+|q|^2)}\geq 1+|p||q|$,
and
\begin{align*}
g=\sqrt{s-4} \leq \sqrt{s} \leq 2\sqrt{p^0q^0}.
\end{align*}
Applying the above inequalities, we have
\begin{align*}
\int_{\mathbb{R}^3}dq~v_{\o}^2\sigma^2(g,\theta)J(q^0) \leq Cp^0.
\end{align*}
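In more detail, the bounds $s\leq 4p^0q^0$ and $g\leq 2\sqrt{p^0q^0}$ established above give
\begin{align*}
v_{\o}^2\sigma^2(g,\theta)=\frac{g^2s}{(p^0q^0)^2}\,g^2\sin^2\theta \leq \frac{g^4s}{(p^0q^0)^2} \leq 64\,p^0q^0,
\end{align*}
and the moment $\int_{\mathbb{R}^3}dq~q^0J(q^0)$ is finite, which yields the bound $Cp^0$.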
We apply the H\"{o}lder inequality again to obtain the following estimate:
\begin{align*}
\big| \langle \Gamma_1(f,h) ,\eta \rangle_{L^2_p} \big|& \leq C \|h\|_{L^2_p}\int_{\mathbb{R}^3}dp \int_{\mathbb{S}^2}dw~ (p^0)^{\frac{1}{2}} |f(p)| |\eta(p)| \leq C\|f\|_{\nu} \|h\|_{L^2_p} \|\eta\|_{\nu}.
\end{align*}
Now we consider the $\Gamma_6(f,h)$ term. We first use the energy conservation law $p^0+q^0=p'^0+q'^0$ to obtain
\begin{align*}
\sqrt{m+\tau m^2(p')}\sqrt{m+\tau m^2(q')}\leq CJ(p'^0/2)J(q'^0/2) = CJ(p^0/2)J(q^0/2),
\end{align*}
which gives
\begin{align}\label{gamma6}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0/2)|f(p')| |h(q')| |\eta(p)|.
\end{align}
Applying the H\"{o}lder inequality yields
\begin{align*}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{S}^2}dw\left(\int_{\mathbb{R}^3}dq ~ v_{\o}\sigma^2(g,\theta) J(q^0)\right)^{\frac{1}{2}}\cr
&\quad \times \left(\int_{\mathbb{R}^3}dq ~ v_{\o}|f(p')|^2|h(q')|^2\right)^{\frac{1}{2}}|\eta(p)|.
\end{align*}
Similarly we use $s \leq 4p^0q^0 $ and $g \leq 2(p^0)^{\frac{1}{2}}(q^0)^{\frac{1}{2}}$ to have
\begin{align*}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{S}^2}dw~(p^0)^{\frac{1}{2}}\left(\int_{\mathbb{R}^3}dq~ v_{\o}|f(p')|^2|h(q')|^2\right)^{\frac{1}{2}}|\eta(p)|.
\end{align*}
By the H\"{o}lder inequality for the $dpdw$ integral, we have
\begin{align*}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big| &\leq C\|\eta\|_{\nu} \left(\int_{\mathbb{R}^3}dp\int_{\mathbb{S}^2}dw \ (p^0)^{\frac{1}{2}}\int_{\mathbb{R}^3}dq v_{\o}|f(p')|^2|h(q')|^2\right)^{\frac{1}{2}}.
\end{align*}
The energy conservation law implies
\begin{align*}
(p^0)^{1/2}\leq (p'^0+q'^0)^{1/2} \leq (p'^0)^{1/2}+(q'^0)^{1/2},
\end{align*}
which yields
\begin{align*}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big| &\leq C\|\eta\|_{\nu} \left(\int_{\mathbb{S}^2}dw\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq~ v_{\o}\left((p'^0)^{\frac{1}{2}}+(q'^0)^{\frac{1}{2}}\right) |f(p')|^2|h(q')|^2\right)^{\frac{1}{2}}.
\end{align*}
Then we apply the pre-post collisional change of variables $(p,q) \leftrightarrow (p',q')$, whose Jacobian is given in \cite{MR1105532}:
\begin{align}\label{pq,p'q'}
\frac{dpdq}{p^0q^0} =\frac{dp'dq'}{p'^0q'^0},
\end{align}
and recall from \eqref{gsymme} that we already have
\begin{align*}
s(p^{\mu},q^{\mu}) = s(p'^{\mu},q'^{\mu}), \qquad g(p^{\mu},q^{\mu}) = g(p'^{\mu},q'^{\mu}).
\end{align*}
Thus we have
\begin{multline*}
\int_{\mathbb{S}^2}dw\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq~ v_{\o}\left((p'^0)^{\frac{1}{2}}+(q'^0)^{\frac{1}{2}}\right) |f(p')|^2|h(q')|^2 \cr
= \int_{\mathbb{S}^2}dw\int_{\mathbb{R}^3}dp'\int_{\mathbb{R}^3}dq'~\frac{g(p'^{\mu},q'^{\mu})\sqrt{s(p'^{\mu},q'^{\mu})}}{p'^0q'^0}\left( (p'^0)^{\frac{1}{2}}+(q'^0)^{\frac{1}{2}}\right) |f(p')|^2|h(q')|^2.
\end{multline*}
Finally by $v_{\o}(p'^{\mu},q'^{\mu}) \leq C$ and the H\"{o}lder inequality, we obtain the desired results:
\begin{align*}
\big| \langle \Gamma_6(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \left(\|f\|_{L^2_p}\|h\|_{\nu}+\|f\|_{\nu}\|h\|_{L^2_p}\right)\|\eta\|_{\nu}.
\end{align*}
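Here, after bounding $v_{\o}(p'^{\mu},q'^{\mu})\leq C$, the remaining $dp'dq'$ integral satisfies
\begin{align*}
\int_{\mathbb{R}^3}dp'\int_{\mathbb{R}^3}dq'\left((p'^0)^{\frac{1}{2}}+(q'^0)^{\frac{1}{2}}\right)|f(p')|^2|h(q')|^2 \leq C\left(\|f\|_{\nu}^2\|h\|_{L^2_p}^2+\|f\|_{L^2_p}^2\|h\|_{\nu}^2\right),
\end{align*}
and the elementary inequality $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$ produces the sum of products above.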
\newline
{\bf (2) Estimates of $\Gamma_3$:}
We split the $\Gamma_3$ term into two parts:
\begin{align*}
\Gamma_3(f,h) &= \frac{\tau }{\sqrt{m+\tau m^2(p)}}\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \{m(q')(1+\tau m(p))-m(p)(1+\tau m(q'))\} \cr
& \quad \times \sqrt{m+\tau m^2(q)}f(q)\sqrt{m+\tau m^2(p')}h(p') \cr
&= \Gamma_{3,1}(f,h)+ \Gamma_{3,2}(f,h).
\end{align*}
By $1+\tau m(p)\leq C$ and $1+\tau m(q')\le C$, we obtain
\begin{align*}
|\Gamma_{3,1}(f,h)| &\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \frac{J(q'^0)J(q^0/2)J(p'^0/2)}{J(p^0/2)}|f(q)| |h(p')| \cr
&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0)J(q'^0/2) |f(q)| |h(p')|,
\end{align*}
and
\begin{align*}
|\Gamma_{3,2}(f,h)| &\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(p^0/2)J(q^0/2)J(p'^0/2)|f(q)| |h(p')|,
\end{align*}
where we used $J(p'^0/2)J(q'^0/2)=J(p^0/2)J(q^0/2)$ for the exponential term. In the case of $\Gamma_{3,2}$, there is exponential decay in the two pre-collisional momentum variables through $J(p^0/2)J(q^0/2)$. Thus we can obtain exponential decay with respect to all the pre- and post-collisional momentum variables by the energy conservation:
\begin{align*}
J(p^0/2)J(q^0/2)=J(p^0/4)J(q^0/4)J(p'^0/4)J(q'^0/4).
\end{align*}
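This identity follows by writing $J(y)=e^{-y}$ and splitting $e^{-(p^0+q^0)/2}=e^{-(p^0+q^0)/4}e^{-(p'^0+q'^0)/4}$ via the energy conservation $p^0+q^0=p'^0+q'^0$.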
Thus, the estimate of $\Gamma_{3,2}$ can be absorbed into that of $\Gamma_{3,1}$, and without loss of generality we only consider the estimate of $\Gamma_{3,1}$:
\begin{align*}
\big| \langle \Gamma_{3,1}(f,h) ,\eta \rangle_{L^2_p} \big|
\leq C\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0)J(q'^0/2)|f(q)| |h(p')| |\eta(p)| .
\end{align*}
To separate the pre-collisional and post-collisional variables, we apply the H\"{o}lder inequality as follows:
\begin{align*}
\big| \langle \Gamma_{3,1}(f,h) ,\eta \rangle_{L^2_p} \big| &\leq C \left(\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0)J(q'^0/2)|f(q)|^2|\eta(p)|^2 \right)^{\frac{1}{2}} \cr
&\times \left(\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0)J(q'^0/2)|h(p')|^2 \right)^{\frac{1}{2}} \cr
&= II_1 \times II_2.
\end{align*}
Using $s \leq 4p^0q^0 $ and $g \leq 2(p^0)^{\frac{1}{2}}(q^0)^{\frac{1}{2}}$, we have
\begin{align*}
(II_1)^2 &\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ (p^0)^{\frac{1}{2}}(q^0)^{\frac{1}{2}} J(q^0)J(q'^0/2)|f(q)|^2|\eta(p)|^2 .
\end{align*}
Since $(q^0)^{\frac{1}{2}}J(q^0)$ and $J(q'^0/2)$ are bounded by constants, we obtain
\begin{align*}
(II_1)^2&\leq C\int_{\mathbb{R}^3}dq~ |f(q)|^2\int_{\mathbb{R}^3} dp~ (p^0)^{\frac{1}{2}}|\eta(p)|^2 \leq C\|f\|_{L^2_p}^2\|\eta\|_{\nu}^2.
\end{align*}
For $II_2$, we apply the pre-post momentum change of variables $(p,q) \leftrightarrow (p',q')$ similarly to the estimate of $\Gamma_6(f,h)$. Note that $v_{\o} = \frac{g\sqrt{s}}{p^0q^0}$ is invariant under the change of variables, and we obtain
\begin{align*}
(II_2)^2&=\int_{\mathbb{R}^3}dp'\int_{\mathbb{R}^3}dq'\int_{\mathbb{S}^2}dw~ \frac{g(p'^{\mu},q'^{\mu})\sqrt{s(p'^{\mu},q'^{\mu})}}{p'^0q'^0}\sigma(g(p'^{\mu},q'^{\mu}),\theta) J(q^0)J(q'^0/2)|h(p')|^2.
\end{align*}
Similarly, we use $g(p'^{\mu},q'^{\mu}) \leq 2(p'^0)^{1/2}(q'^0)^{1/2} $ and $s(p'^{\mu},q'^{\mu}) \leq 4p'^0q'^0$ to have
\begin{align*}
(II_2)^2&\leq\int_{\mathbb{R}^3}dp'\int_{\mathbb{R}^3}dq'\int_{\mathbb{S}^2}dw~ (p'^0)^{\frac{1}{2}}(q'^0)^{\frac{1}{2}} J(q^0)J(q'^0/2)|h(p')|^2 \cr
&\leq\int_{\mathbb{S}^2}dw \int_{\mathbb{R}^3}dq'~(q'^0)^{\frac{1}{2}} J(q'^0/2) \int_{\mathbb{R}^3}dp'~(p'^0)^{\frac{1}{2}}|h(p')|^2 \cr
&\leq C\|h\|_{\nu}^2.
\end{align*}
Combining the estimates of $II_1$ and $II_2$ yields
\begin{align*}
\big| \langle \Gamma_{3,1}(f,h) ,\eta \rangle_{L^2_p} \big| &\leq C\|f\|_{L^2_p}\|h\|_{\nu}\|\eta\|_{\nu}.
\end{align*}
\newline
{\bf (3) Estimates of $\Gamma_2$:}
We first separate $\Gamma_2$ into two terms:
\begin{align*}
\Gamma_2(f,h) &= -\tau \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \big[m(q)(1+\tau m(q'))-m(q')(1+\tau m(q))\big] \cr
& \quad \times \sqrt{m+\tau m^2(p')} f(p)h(p') \cr
& =\Gamma_{2,1}(f,h)-\Gamma_{2,2}(f,h).
\end{align*}
Using $1+\tau m(q) \leq C$ and $1+\tau m(q') \leq C$, we estimate each term as follows:
\begin{align*}
\big|\Gamma_{2,1}(f,h)\big| &\leq C \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0)J(p'^0/2) |f(p)||h(p')|,
\end{align*}
and
\begin{align*}
\big|\Gamma_{2,2}(f,h)\big| &\leq C \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q'^0)J(p'^0/2) |f(p)||h(p')|.
\end{align*}
Since $\Gamma_{2,2}$ has exponential decay with respect to the two post-collisional variables $p'$ and $q'$, we can obtain decay for all pre- and post-collisional momentum variables by using the energy conservation:
\begin{align*}
J(p^0/2)J(q^0/2)=J(p^0/4)J(q^0/4)J(p'^0/4)J(q'^0/4).
\end{align*}
Thus the estimate of $\Gamma_{2,2}$ can be absorbed into that of $\Gamma_{2,1}$, and we only consider the $\Gamma_{2,1}$ term. However, unlike the previous cases, it is difficult to separate $f$, $h$ and $\eta$ using the H\"{o}lder inequality. We therefore proceed by lifting the $dw$ integral to a $dp'dq'$ integral, imposing a $4$-dimensional Dirac-delta function:
\begin{multline}\label{gamma21}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|
\leq C \int_{\mathbb{R}^3}\frac{dp}{p^0}\int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta) \cr
\times \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})J(q^0)J(p'^0/2)|f(p)| |h(p')| |\eta(p)|.
\end{multline}
Then we focus on the following estimate:
\begin{align}\label{B}
B&=\int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu})J(q^0).
\end{align}
We also lift the $dq$ and $dq'$ integrals to the relativistic energy-momentum $4$-vector integrals $dq^{\mu}$ and $dq'^{\mu}$, imposing Dirac-delta functions and unit step functions:
\begin{multline}\label{B1}
B=\int_{\mathbb{R}^4}dq^{\mu}\int_{\mathbb{R}^4}dq'^{\mu} s\sigma(g,\theta) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}) J(q^0) u(q^0)u(q'^0)\delta(q^{\mu}q_{\mu}+1)\delta(q'^{\mu}q'_{\mu}+1),
\end{multline}
where the unit step function $u(x)$ is defined to be $1$ when $x\geq 0$ and $0$ when $x<0$. Note that $B$ is now an integral over $dq^{\mu}$ and $dq'^{\mu}$, while $s$ and $g$ are functions of $p^{\mu}$ and $q^{\mu}$.
Thus it is convenient to split off the $q^{\mu}$ and $q'^{\mu}$ parts of $g$ as in the lemma below.
Before we proceed further, we first provide a preliminary lemma on the properties of $g$, $\bar{g}$ and $\tilde{g}$.
\begin{lemma}[\cite{Jang-Yun-Lp, Jang2016, MR2728733}] \label{g comp} Define $g,$ $\bar{g}$ and $\tilde{g}$ as \eqref{gsymme}. Then we have
\begin{align*}
&(1) ~ g^2=\bar{g}^2+\tilde{g}^2, \cr
&(2) ~ \bar{g}^2=-\frac{1}{2}(p^{\mu}+q'^{\mu})(q_{\mu}+p'_{\mu}-p_{\mu}-q'_{\mu}) , \cr
&(3) ~ \tilde{g}^2=-\frac{1}{2}(p^{\mu}+p'^{\mu})(q_{\mu}+q'_{\mu}-p_{\mu}-p'_{\mu}).
\end{align*}
\end{lemma}
\begin{proof} The proofs can be found in \cite{Jang-Yun-Lp, MR2728733}, but for the readers' convenience, we present them here. \newline
(1) By definition of $g$ in \eqref{gsymme}, we have
\begin{align*}
-g^2+\bar{g}^2+\tilde{g}^2
&= -2+2p^{\mu}q_{\mu} -2p^{\mu}p'_{\mu}-2p^{\mu}q'_{\mu}.
\end{align*}
Since $p^{\mu}$ is an energy-momentum $4$-vector, we have $p^{\mu}p_{\mu}=-1$, which gives
\begin{align*}
-g^2+\bar{g}^2+\tilde{g}^2&= 2p^{\mu}p_{\mu}+2p^{\mu}q_{\mu} -2p^{\mu}p'_{\mu}-2p^{\mu}q'_{\mu} \cr
&=2p^{\mu}(p_{\mu}+q_{\mu} -p'_{\mu}-q'_{\mu}).
\end{align*}
Then the conservation law of the energy-momentum $4$-vector leads to the desired result. \newline
(2)
We denote the right-hand side as $R$ and expand as follows:
\begin{align*}
R=
-\frac{1}{2}(p^{\mu}q_{\mu}+p^{\mu}p'_{\mu}+q'^{\mu}q_{\mu}+q'^{\mu}p'_{\mu})+\frac{1}{2}(p^{\mu}p_{\mu}+2p^{\mu}q'_{\mu}+q'^{\mu}q'_{\mu}).
\end{align*}
We recall from \eqref{gsymme} that $p^{\mu}q_{\mu}=p'^{\mu}q'_{\mu}$ and $p^{\mu}p'_{\mu}=q^{\mu}q'_{\mu}$ to have
\begin{align*}
R=-\frac{1}{2}(2p^{\mu}q_{\mu}+2p^{\mu}p'_{\mu})+\frac{1}{2}(-2+2p^{\mu}q'_{\mu})
=-p^{\mu}q_{\mu}+p^{\mu}q'_{\mu}-p^{\mu}p'_{\mu}-1.
\end{align*}
Using $p^{\mu}p_{\mu}=-1$, we have
\begin{align*}
R=p^{\mu}(-q_{\mu}+q'_{\mu}-p'_{\mu}+p_{\mu}).
\end{align*}
Then the conservation law of the energy-momentum $4$-vector yields
\begin{align*}
R=2p^{\mu}(-p'_{\mu}+p_{\mu}) = -2p^{\mu}p'_{\mu}-2 .
\end{align*}
By the definition of $\bar{g}$, we obtain the desired result. \newline
(3) The proof can be obtained in the same way as that of (2), so we omit it.
\end{proof}
By Lemma \ref{g comp} (1) and (3) above, we use the following representation of $g$:
\begin{align*}
g^2=\bar{g}^2-\frac{1}{2}(p^{\mu}+p'^{\mu})(q_{\mu}+q'_{\mu}-p_{\mu}-p'_{\mu}).
\end{align*}
Now we return to the estimate of $B$ and apply the following change of variables
\begin{align}\label{changofv}
\bar{q}^{\mu}=q^{\mu}+q'^{\mu}, \qquad \bar{q}'^{\mu}=q^{\mu}-q'^{\mu}.
\end{align}
Then the reverse relation can be written as
\begin{align*}
q^{\mu}=\frac{1}{2}(\bar{q}^{\mu}+\bar{q}'^{\mu}),\qquad q'^{\mu}=\frac{1}{2}(\bar{q}^{\mu}-\bar{q}'^{\mu}),
\end{align*}
and the Jacobian is given by
\begin{align*}
\frac{\partial (q^{\mu},q'^{\mu})}{\partial(\bar{q}^{\mu},\bar{q}'^{\mu})} = \frac{1}{16}.
\end{align*}
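Indeed, in each of the four components the inverse relation acts through the matrix $\left(\begin{smallmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{smallmatrix}\right)$, whose determinant is $-1/2$; taking the product over the four components gives the Jacobian $(1/2)^4=1/16$ in absolute value.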
In the new variables, the quantities $g$, $s$ and $\theta$, which we denote by $g_c$, $s_c$ and $\theta_c$, are expressed as
\begin{align}\label{gs}
g_c^2=\bar{g}^2-\frac{1}{2}(p^{\mu}+p'^{\mu})(\bar{q}_{\mu}-p_{\mu}-p'_{\mu}), \quad s_c=g_c^2+4, \quad \cos\theta_c= 1-2\frac{\bar{g}^2}{g_c^2}.
\end{align}
To calculate the Dirac-delta functions and the unit step functions in \eqref{B1}, we use the following identities.
First we use $\delta(x)\delta(y)=2\delta(x+y)\delta(x-y)$ to obtain
\begin{align*}
\delta(q^{\mu}q_{\mu}+1)\delta(q'^{\mu}q'_{\mu}+1) &= 2\delta(q^{\mu}q_{\mu}+q'^{\mu}q'_{\mu}+2)\delta(q^{\mu}q_{\mu}-q'^{\mu}q'_{\mu}) \cr
&=4\delta((\bar{q}^{\mu}\bar{q}_{\mu}+\bar{q}'^{\mu}\bar{q}'_{\mu})+4)\delta(\bar{q}^{\mu}\bar{q}'_{\mu}).
\end{align*}
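Here, in the second equality, we used the relations $q^{\mu}=\frac{1}{2}(\bar{q}^{\mu}+\bar{q}'^{\mu})$ and $q'^{\mu}=\frac{1}{2}(\bar{q}^{\mu}-\bar{q}'^{\mu})$, which give
\begin{align*}
q^{\mu}q_{\mu}+q'^{\mu}q'_{\mu}=\frac{1}{2}\left(\bar{q}^{\mu}\bar{q}_{\mu}+\bar{q}'^{\mu}\bar{q}'_{\mu}\right),\qquad q^{\mu}q_{\mu}-q'^{\mu}q'_{\mu}=\bar{q}^{\mu}\bar{q}'_{\mu},
\end{align*}
together with the scaling identity $\delta(x/2)=2\delta(x)$.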
Note that $q^0\geq 0$ and $q'^0\geq 0$ are equivalent to $q^0+q'^0\geq 0$ and $q^0q'^0 \geq 0 $. Also, $q^0q'^0 \geq 0 $ is equivalent to $\bar{g}^2=2q^0q'^0-2q\cdot q'-2 \geq 0$ under the assumption that $q^{\mu}q_{\mu}+1 = 0 $ and $q'^{\mu}q'_{\mu}+1=0$. Thus we have
\begin{align*}
u(q^0)u(q'^0)\delta(q^{\mu}q_{\mu}+1)\delta(q'^{\mu}q'_{\mu}+1) &=
u(\bar{q}^0)u(\bar{s}-4)4\delta((\bar{q}^{\mu}\bar{q}_{\mu}+\bar{q}'^{\mu}\bar{q}'_{\mu})+4)\delta(\bar{q}^{\mu}\bar{q}'_{\mu}).
\end{align*}
Thus we have
\begin{align*}
B&=\frac{1}{4}\int_{\mathbb{R}^4\times \mathbb{R}^4}d\Theta(\bar{q}^{\mu},\bar{q}'^{\mu}) s_c\sigma(g_c,\theta_c) \delta^{(4)}(p^{\mu}-p'^{\mu}+\bar{q}'^{\mu}) J\left(\frac{\bar{q}^0+\bar{q}'^0}{2}\right),
\end{align*}
where
\begin{align*}
d\Theta(\bar{q}^{\mu},\bar{q}'^{\mu})&= d\bar{q}^{\mu}d\bar{q}'^{\mu}u(\bar{q}^0)u(\bar{s}-4)\delta((\bar{q}^{\mu}\bar{q}_{\mu}+\bar{q}'^{\mu}\bar{q}'_{\mu})+4)\delta(\bar{q}^{\mu}\bar{q}'_{\mu}).
\end{align*}
Now we substitute $\bar{q}'^{\mu}=p'^{\mu}-p^{\mu}$ and use the $4$-dimensional Dirac-delta function to eliminate the $4$-dimensional $d\bar{q}'^{\mu}$ integration as follows:
\begin{align*}
B&=\frac{1}{4}\int_{\mathbb{R}^4}d\Theta(\bar{q}^{\mu}) s_c\sigma(g_c,\theta_c) J\left(\frac{\bar{q}^0+p'^0-p^0}{2}\right),
\end{align*}
where
\begin{align*}
d\Theta(\bar{q}^{\mu})&= d\bar{q}^{\mu}u(\bar{q}^0)u(\bar{s}-4)\delta(\bar{q}^{\mu}\bar{q}_{\mu}+(p'^{\mu}-p^{\mu})(p'_{\mu}-p_{\mu})+4)\delta(\bar{q}^{\mu}(p'_{\mu}-p_{\mu})).
\end{align*}
To remove one more Dirac-delta function, we compute
\begin{align*}
u(\bar{q}^0)\delta(\bar{q}^{\mu}\bar{q}_{\mu}+(p'^{\mu}-p^{\mu})(p'_{\mu}-p_{\mu})+4)
&= u(\bar{q}^0)\delta(\bar{q}^{\mu}\bar{q}_{\mu}-2p^{\mu}p'_{\mu}+2) \cr
&= u(\bar{q}^0)\delta(-(\bar{q}^0)^2+|\bar{q}|^2+\bar{s}) \cr
&= \frac{\delta(\bar{q}^0-\sqrt{|\bar{q}|^2+\bar{s}})}{2\sqrt{|\bar{q}|^2+\bar{s}}}.
\end{align*}
Then the $d\bar{q}^0$ integral with the delta function above is reduced as follows:
\begin{align*}
B&=\frac{1}{4}J\left(\frac{p'^0-p^0}{2}\right)\int_{\mathbb{R}^3}\frac{d\bar{q}}{\bar{q}^0} s_c\sigma(g_c,\theta_c) J\left(\frac{\bar{q}^0}{2}\right)u(\bar{s}-4)\delta(\bar{q}^{\mu}(p'_{\mu}-p_{\mu})),
\end{align*}
where $\bar{q}^0$ is defined by $\sqrt{|\bar{q}|^2+\bar{s}}$. Since $\bar{s}-4=\bar{g}^2\geq0$, the last unit step function is always equal to $1$. In order to make the final Dirac-delta function even simpler, we consider the Lorentz transform satisfying
\begin{align*}
\Lambda(p_{\mu}+p'_{\mu}) &= (\sqrt{\bar{s}},0,0,0), \qquad \Lambda(p_{\mu}-p'_{\mu}) = (0,0,0,-\bar{g}).
\end{align*}
We can see in \cite{MR2728733} that the Lorentz transform satisfying the relation above is uniquely determined in the following form:
\begin{align}\label{Lorentz}
\Lambda &= \left[ {\begin{array}{cccc}
\frac{p^0+p'^0}{\sqrt{\bar{s}}} & -\frac{p_1+p'_1}{\sqrt{\bar{s}}} & -\frac{p_2+p'_2}{\sqrt{\bar{s}}} & -\frac{p_3+p'_3}{\sqrt{\bar{s}}} \\
\Lambda^{01} & \Lambda^{11} & \Lambda^{21} & \Lambda^{31} \\
0 & \frac{(p \times p')_1}{|p\times p'|} & \frac{(p \times p')_2}{|p\times p'|} & \frac{(p \times p')_3}{|p\times p'|} \\
\frac{p^0-p'^0}{\bar{g}} & -\frac{p_1-p'_1}{\bar{g}} & -\frac{p_2-p'_2}{\bar{g}} & -\frac{p_3-p'_3}{\bar{g}}
\end{array} } \right],
\end{align}
where
\begin{align*}
\Lambda^{i1} = \frac{2(p_i\{p^0+p'^0p^{\mu}p'_{\mu}\}+p'_i\{p'^0+p^0p^{\mu}p'_{\mu}\})}{\bar{g}\sqrt{\bar{s}}|p\times p'|}.
\end{align*}
Then the exponential part can also be expressed by carrying out the Lorentz transform as follows:
\begin{align*}
J\left(\bar{q}^0/2\right) = \exp\left(-\bar{q}_0/2\right) = \exp\left(-\bar{q}^{\mu}U_{\mu}/2\right)= \exp\left(-\Lambda\bar{q}^{\mu}\Lambda U_{\mu}/2\right),
\end{align*}
where we used the following four-vectors:
\begin{align*}
U^{\mu} = (1,0,0,0), \qquad U_{\mu} = (-1,0,0,0).
\end{align*}
Applying this Lorentz transform yields
\begin{align*}
B&= \frac{1}{4}J\left(\frac{p'^0-p^0}{2}\right)\int_{\mathbb{R}^3}\frac{d\bar{q}}{\bar{q}^0} s_{\Lambda}\sigma(g_{\Lambda},\theta_{\Lambda}) \exp\left(-\Lambda\bar{q}^{\mu}\Lambda U_{\mu}/2\right)\delta(\Lambda \bar{q}^{\mu}\Lambda(p'_{\mu}-p_{\mu})),
\end{align*}
where $g_{\Lambda}$ is written via the Lorentz transform:
\begin{align*}
g_{\Lambda}^2&=\bar{g}^2-\frac{1}{2}\Lambda(p^{\mu}+p'^{\mu})\Lambda(\bar{q}_{\mu}-p_{\mu}-p'_{\mu}) \cr
&=\bar{g}^2+\frac{1}{2}(\sqrt{\bar{s}},0,0,0)(\Lambda\bar{q}_{\mu}-(\sqrt{\bar{s}},0,0,0)).
\end{align*}
We can also represent $s_\Lambda$ and $\cos\theta_\Lambda$ in terms of $\bar{g}$ and $g_\Lambda$ as
\begin{align*}
s_{\Lambda}=g_{\Lambda}^2+4 \quad \text{and}\quad \cos\theta_{\Lambda}= 1-2\frac{\bar{g}^2}{g_{\Lambda}^2}.
\end{align*}
We now apply the change of variables $\Lambda\bar{q}^{\mu}\rightarrow \bar{q}^{\mu}$, relabeling the transformed variable as $\bar{q}^{\mu}$. Since $d\bar{q}/\bar{q}^0$ is Lorentz invariant, we have
\begin{align*}
B&= \frac{1}{4}J\left(\frac{p'^0-p^0}{2}\right)\int_{\mathbb{R}^3}\frac{d\bar{q}}{\bar{q}^0} s_{\lambda}\sigma(g_{\lambda},\theta_{\lambda}) \exp\left(-\bar{q}^{\mu}\Lambda U_{\mu}/2\right)\delta(\bar{q}^3\bar{g}),
\end{align*}
where
\begin{align}\label{gs2}
g_{\lambda}^2&=\bar{g}^2+\frac{1}{2}\sqrt{\bar{s}}(\bar{q}^0-\sqrt{\bar{s}}), \quad s_{\lambda}=g_{\lambda}^2+4, \quad \cos\theta_{\lambda}= 1-2\frac{\bar{g}^2}{g_{\lambda}^2}.
\end{align}
Now we use the spherical coordinates for the variable $\bar{q}$:
\begin{align*}
\bar{q} = |\bar{q}|(\sin\psi\cos\phi,\sin\psi\sin\phi, \cos\psi).
\end{align*}
Then $B$ is equal to
\begin{align*}
\frac{1}{4}J\left(\frac{p'^0-p^0}{2}\right)\int_{0}^{2\pi}d\phi \int_{0}^{\pi}d\psi \sin\psi \int_{0}^{\infty}\frac{|\bar{q}|^2d|\bar{q}|}{\bar{q}^0} s_{\lambda}\sigma(g_{\lambda},\theta_{\lambda}) \exp\left(-\bar{q}^{\mu}\Lambda U_{\mu}/2\right)\delta(|\bar{q}|\cos\psi\bar{g}).
\end{align*}
Using $\delta(ax)=(1/|a|)\delta(x)$ and substituting $\psi= \pi/2$ reduces the $d\psi$ integral with $\delta(\cos\psi)$ as follows:
\begin{align}\label{spher}
B&= \frac{1}{4}J\left(\frac{p'^0-p^0}{2}\right)\int_{0}^{2\pi}d\phi \int_{0}^{\infty}\frac{|\bar{q}|d|\bar{q}|}{\bar{g}\bar{q}^0} s_{\lambda}\sigma(g_{\lambda},\theta_{\lambda}) \exp\left(-\bar{q}^{\mu}\Lambda U_{\mu}/2\right)|_{\psi=\pi/2}.
\end{align}
We first consider the scattering kernel $s_{\lambda}\sigma(g_{\lambda},\theta_{\lambda})$. The half-angle formula from \eqref{gs2} gives
\begin{align*}
\cos\theta_{\lambda}=1-2\sin^2\frac{\theta_{\lambda}}{2}= 1-2\frac{\bar{g}^2}{g_{\lambda}^2},
\end{align*}
which implies
\begin{align}\label{half angle}
\sin\frac{\theta_{\lambda}}{2} =\frac{\bar{g}}{g_{\lambda}},
\end{align}
for $0 \leq \theta_{\lambda} \leq 2\pi$. Thus we have
\begin{align*}
\sigma(g_{\lambda},\theta_{\lambda}) = g_{\lambda}\sin\theta_{\lambda} = 2g_{\lambda}\sin\frac{\theta_{\lambda}}{2}\cos\frac{\theta_{\lambda}}{2} = 2\bar{g}\cos\frac{\theta_{\lambda}}{2} \leq 2\bar{g}.
\end{align*}
From \eqref{gs2} and the definition of $\bar{q}^0$, we have
\begin{align*}
s_{\lambda}=g_{\lambda}^2+4=\bar{g}^2+4+\frac{1}{2}\sqrt{\bar{s}}(\bar{q}^0-\sqrt{\bar{s}}) = \bar{s}+\frac{1}{2}\sqrt{\bar{s}}(\sqrt{|\bar{q}|^2+\bar{s}}-\sqrt{\bar{s}}).
\end{align*}
Then we consider the exponential part of \eqref{spher}. Using the Lorentz transform \eqref{Lorentz}, we have
\begin{align*}
\Lambda U_{\mu}=\left( \frac{p^0+p'^0}{\sqrt{\bar{s}}}, \frac{2 |p \times p'|}{\bar{g}\sqrt{\bar{s}}} , 0 , \frac{p^0-p'^0}{\bar{g}} \right).
\end{align*}
Combining this with the spherical representation of $\bar{q}$, we calculate
\begin{align*}
\bar{q}^{\mu}\Lambda U_{\mu}|_{\psi=\pi/2} &= (\sqrt{|\bar{q}|^2+\bar{s}},|\bar{q}| \cos\phi,|\bar{q}| \sin\phi,0 )\left( \frac{p^0+p'^0}{\sqrt{\bar{s}}}, \frac{2 |p \times p'|}{\bar{g}\sqrt{\bar{s}}} , 0 , \frac{p^0-p'^0}{\bar{g}} \right) \cr
&=-\sqrt{|\bar{q}|^2+\bar{s}}\frac{p^0+p'^0}{\sqrt{\bar{s}}} + |\bar{q}| \cos\phi\frac{2 |p \times p'|}{\bar{g}\sqrt{\bar{s}}}.
\end{align*}
Thus, \eqref{spher} is bounded by
\begin{align*}
B&\leq \frac{1}{2}J\left(\frac{p'^0-p^0}{2}\right)\int_{0}^{2\pi}d\phi \int_{0}^{\infty}\frac{|\bar{q}|d|\bar{q}|}{\sqrt{|\bar{q}|^2+\bar{s}}} s_{\lambda} \exp\left(-\sqrt{|\bar{q}|^2+\bar{s}}\frac{p^0+p'^0}{2\sqrt{\bar{s}}} + |\bar{q}| \cos\phi\frac{ |p \times p'|}{\bar{g}\sqrt{\bar{s}}}\right).
\end{align*}
We apply the change of variables $|\bar{q}|=\sqrt{\bar{s}}y$ to have
\begin{align*}
B&\leq \frac{1}{2}J\left(\frac{p'^0-p^0}{2}\right)\int_{0}^{2\pi}d\phi \int_{0}^{\infty}\frac{\sqrt{\bar{s}}ydy}{\sqrt{y^2+1}} s_{\lambda} \exp\left(-\sqrt{y^2+1}\frac{p^0+p'^0}{2} + y \cos\phi\frac{ |p \times p'|}{\bar{g}}\right),
\end{align*}
where
\begin{align*}
s_{\lambda}=\bar{s}+\frac{1}{2}\sqrt{\bar{s}}(\sqrt{|\bar{q}|^2+\bar{s}}-\sqrt{\bar{s}}) = \frac{\bar{s}}{2}(1+\sqrt{y^2+1}).
\end{align*}
So we have
\begin{multline*}
B\leq \frac{\bar{s}^{3/2}}{2}J\left(\frac{p'^0-p^0}{2}\right) \int_{0}^{\infty}\frac{ydy}{\sqrt{y^2+1}} \frac{1+\sqrt{y^2+1}}{2} \cr
\times \exp\left(-\sqrt{y^2+1}\frac{p^0+p'^0}{2}\right)\int_{0}^{2\pi}d\phi \exp\left(y \cos\phi\frac{ |p \times p'|}{\bar{g}}\right).
\end{multline*}
For notational simplicity, we denote
\begin{align}\label{Rrdef}
R= \frac{p^0+p'^0}{2}, \qquad r= \frac{ |p \times p'|}{\bar{g}},
\end{align}
then we can write
\begin{align*}
B\leq \frac{\bar{s}^{3/2}\pi}{2}J\left(\frac{p'^0-p^0}{2}\right) \int_{0}^{\infty}dy \left(\frac{y}{\sqrt{y^2+1}}+y\right)\exp\left(-R\sqrt{y^2+1}\right)I_0(ry),
\end{align*}
where $I_0$ denotes the modified Bessel function of the first kind:
\begin{align*}
I_0(y) = \frac{1}{\pi}\int_0^{\pi} e^{y\cos\phi} d\phi.
\end{align*}
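Since $\cos(2\pi-\phi)=\cos\phi$, this definition gives
\begin{align*}
\int_0^{2\pi}e^{y\cos\phi}\,d\phi = 2\int_0^{\pi}e^{y\cos\phi}\,d\phi = 2\pi I_0(y),
\end{align*}
which is the identity used to collect the factor $\pi$ in the bound for $B$ above.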
The following formulas involving the Bessel function above can be found in \cite{MR3307944} and \cite{MR1211782} for $R> r\geq0 $:
\begin{align}\label{Bessel comp}
\begin{split}
I&= \int_{0}^{\infty} \frac{y}{\sqrt{y^2+1}}e^{-R\sqrt{y^2+1}} I_0\left(ry\right) dy = \frac{e^{-\sqrt{R^2-r^2}}}{\sqrt{R^2-r^2}}, \cr
II&= \int_{0}^{\infty} y e^{-R\sqrt{y^2+1}} I_0\left(ry\right) dy = \frac{R}{R^2-r^2}\left(1+ \frac{1}{\sqrt{R^2-r^2}}\right)e^{-\sqrt{R^2-r^2}}.
\end{split}
\end{align}
Therefore, we have
\begin{align}\label{B3}
B&\leq \frac{\bar{s}^{3/2}\pi}{2}J\left(\frac{p'^0-p^0}{2}\right) \left[\frac{1}{\sqrt{R^2-r^2}}+\frac{R}{R^2-r^2}+\frac{R}{(R^2-r^2)^{3/2}}\right]e^{-\sqrt{R^2-r^2}}.
\end{align}
We can also find a useful estimate of $R^2-r^2$ in \cite{MR1211782}:
\begin{align}\label{Rr}
R^2-r^2
= |p-p'|^2\frac{\bar{g}^2+4}{4\bar{g}^2}
\geq \max\left\{\frac{\bar{g}^2}{4}+1,\frac{1}{4}|p-p'|^2\right\},
\end{align}
which leads to
\begin{align*}
J\left(\frac{p'^0-p^0}{2}\right)e^{-\sqrt{R^2-r^2}} \leq e^{-\frac{1}{2}(p'^0-p^0)}e^{-\frac{1}{2}|p-p'|} \leq 1.
\end{align*}
In the last inequality, we used
\begin{align*}
|p'^0-p^0| = \big| \sqrt{1+|p'|^2}-\sqrt{1+|p|^2}\big| =
\frac{|(|p'|-|p|)(|p'|+|p|)|}{\sqrt{1+|p'|^2}+\sqrt{1+|p|^2}}
\leq |p'-p|.
\end{align*}
The estimate \eqref{Rr} also implies $R^2-r^2 \geq \frac{\bar{g}^2}{4}+1=\frac{\bar{g}^2+4}{4}=\frac{\bar{s}}{4}\geq 1$, so that
\begin{align*}
\sqrt{R^2-r^2} &\geq 1,\qquad \frac{1}{R^2-r^2} \leq \frac{4}{\bar{s}}.
\end{align*}
Thus the three terms inside the large bracket of \eqref{B3} are bounded as follows:
\begin{align*}
\frac{1}{\sqrt{R^2-r^2}}+\frac{R}{R^2-r^2}+\frac{R}{(R^2-r^2)^{3/2}} \leq \frac{3R}{R^2-r^2} \leq 6\frac{p^0+p'^0}{\bar{s}},
\end{align*}
where we used $R\geq\sqrt{R^2-r^2}\geq 1$.
Combining the estimates above yields
\begin{align}\label{B esti}
B&\leq C \bar{s}^{1/2}(p^0+p'^0).
\end{align}
Now, substituting this into \eqref{gamma21}, we return to the estimate of $\Gamma_{2,1}$:
\begin{align*}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}\frac{dp}{p^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\bar{s}^{\frac{1}{2}}(p^0+p'^0)J(p'^0/2)|f(p)||h(p')||\eta(p)|.
\end{align*}
We use $\bar{s}\leq 4 p^0p'^0$ to have
\begin{align*}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dp'\left( \sqrt{\frac{p^0}{p'^0}}+\sqrt{\frac{p'^0}{p^0}}\right)J(p'^0/2)|f(p)||h(p')||\eta(p)|.
\end{align*}
Using the boundedness $1\leq p^0$ and $\sqrt{p'^0}J(p'^0/4)\leq C$ yields
\begin{align*}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dp'(p^0)^{\frac{1}{2}}J(p'^0/4)|f(p)||h(p')||\eta(p)|.
\end{align*}
Finally, we apply the H\"{o}lder inequality and obtain
\begin{align*}
\big| \langle \Gamma_{2,1}(f,h) ,\eta \rangle_{L^2_p} \big|&\leq C \|h\|_{L^2_p} \int_{\mathbb{R}^3}dp~(p^0)^{\frac{1}{2}}|f(p)||\eta(p)| \cr
&\leq C \|f\|_{\nu}\|h\|_{L^2_p}\|\eta\|_{\nu}.
\end{align*}
\end{proof}
Regarding the linear terms $K_1f$ and $K_2f$ in \eqref{Kf}, we have the following estimates.
\begin{lemma}\label{K esti} For $i=1,2$, the compact operator $K_i$ satisfies the following estimate:
\begin{align*}
\big|\langle K_if , h \rangle_{L^2_p}\big|&\leq C \|f \|_{\nu}\|h\|_{\nu}.
\end{align*}
\end{lemma}
\begin{proof}
Recall the definition of $K_1$ in \eqref{Kf}. Using the energy conservation law, we can replace the post-collisional variables by pre-collisional variables in the exponential term:
\begin{align*}
|K_1f|
&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(p^0/2)J(q^0/2)|f(q)|.
\end{align*}
Then by $v_{\o}\leq C$, $g\leq 2(p^0)^{1/2}(q^0)^{1/2}$, and the H\"{o}lder inequality, we have
\begin{align*}
\big| \langle K_1f , h \rangle_{L^2_p} \big|&\leq C\|f\|_{L^2_p}\|h\|_{L^2_p}.
\end{align*}
For $K_2$, by the simple boundedness $\sqrt{m+\tau m^2(p)}\leq CJ(p^0/2)$, we have
\begin{align*}
|K_2f|&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0/2)J(q'^0/2) |f(p')| .
\end{align*}
Then we take the inner product with $h$ to obtain
\begin{align*}
\big|\langle K_2f , h \rangle_{L^2_p}\big|&\leq C
\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0/2)J(q'^0/2) |f(p')||h(p)| .
\end{align*}
If we substitute $h(q')=J(q'^0/2)$ into the bound \eqref{gamma6} for $\Gamma_6$, then we get
\begin{align*}
\big| \langle \Gamma_6(f,J) ,h \rangle_{L^2_p} \big|&\leq C \int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw \ v_{\o}\sigma(g,\theta) J(q^0/2)J(q'^0/2)|f(p')||h(p)|.
\end{align*}
Thus, the estimate of $\Gamma_6(f,h)$ gives
\begin{align*}
\big| \langle K_2f , h \rangle_{L^2_p} \big|&\leq C\|f\|_{\nu}\|h\|_{\nu}.
\end{align*}
\end{proof}
\subsection{Estimates of the third-order nonlinear terms}
In this subsection, we provide the upper-bound estimates for the third-order nonlinear terms.
\begin{lemma}\label{nonlin fff} We have
\begin{align*}
\big|\langle T(f,h,\eta) ,\xi \rangle_{L^2_p} \big| \leq C\left( \|f\|_{L^2_p}\|h\|_{L^2_p}\|\eta\|_{\nu} +\|f\|_{L^2_p}\|h\|_{\nu}\|\eta\|_{L^2_p} +\|f\|_{\nu }\|h\|_{L^2_p}\|\eta\|_{L^2_p} \right)\|\xi\|_{\nu}.
\end{align*}
\end{lemma}
\begin{proof}
Recall the definition of the third-order nonlinear terms in \eqref{T}. Applying the change of variables $\phi \rightarrow \pi-\phi$ and $\theta\rightarrow \pi+\theta$, we have $T_1(f,h,\eta)=T_2(f,h,\eta)$. Thus we only consider the proofs for $T_1$, $T_3$ and $T_4$. \newline
{\bf (1) Estimates of $T_1$: } Using the boundedness $\sqrt{m+\tau m^2(p)}\leq CJ(p^0/2)$, we have
\begin{align*}
|T_1(f,h,\eta)|
&\leq C \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)J(p'^0/2)J(q^0/2)|f(p)||h(q)||\eta(p')|.
\end{align*}
We take the inner product with $\xi$ to have
\begin{multline*}
\big| \langle T_1(f,h,\eta), \xi \rangle_{L^2_p} \big| \cr
\leq C \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(p'^0/2)J(q^0/2) |f(p)||h(q)||\eta(p')||\xi(p)|.
\end{multline*}
Using the H\"{o}lder inequality, we disunite the integrand as follows:
\begin{align*}
\big| \langle T_1(f,h,\eta), \xi \rangle_{L^2_p} \big| &\leq C \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(p'^0/2)J(q^0/2) |f(p)|^2|h(q)|^2\right)^{\frac{1}{2}} \cr
&\times \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(p'^0/2)J(q^0/2)|\eta(p')|^2|\xi(p)|^2\right)^{\frac{1}{2}} \cr
&= T_{11}\times T_{12}.
\end{align*}
We first consider $T_{11}$. Using $v_{\o} \leq C$ and $g\leq 2 \sqrt{p^0q^0}$, we have
\begin{align*}
(T_{11})^2 &\leq \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~(p^0)^{\frac{1}{2}}(q^0)^{\frac{1}{2}}J(p'^0/2)J(q^0/2) |h(q)|^2|f(p)|^2 \cr
&\leq C \int_{\mathbb{R}^3} dq|h(q)|^2 \int_{\mathbb{R}^3}dp(p^0)^{\frac{1}{2}}|f(p)|^2 \cr
&\leq C \|f\|_{\nu}^2\|h\|_{L^2_{p}}^2,
\end{align*}
where we used $(q^0)^{1/2}J(q^0/2)\leq C$. To estimate $T_{12}$, we rewrite $(T_{12})^2$ in the form of \eqref{gamma21}:
\begin{multline*}
(T_{12})^2=\int_{\mathbb{R}^3}\frac{dp}{p^0} \int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta)J(p'^0/2)J(q^0/2)\\\times |\eta(p')|^2|\xi(p)|^2\delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}).
\end{multline*}
Then we find that the $dqdq'$ integral has a form similar to \eqref{B}:
\begin{align*}
B'&= \int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta)J(q^0/2) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}).
\end{align*}
The only difference is that $J(q^0)$ is replaced by $J(q^0/2)$. Thus the exponential factor $J((p'^0-p^0)/2)$ in \eqref{B3} is changed to $J((p'^0-p^0)/4)$. But since $R$ and $r$ in \eqref{Rrdef} are also changed to $R/2$ and $r/2$, we still have
\begin{align*}
J\left(\frac{p'^0-p^0}{4}\right)e^{-\sqrt{R^2-r^2}/2} \leq e^{-\frac{1}{4}(p'^0-p^0)}e^{-\frac{1}{4}|p-p'|} \leq 1.
\end{align*}
Thus we can apply the estimate in \eqref{B esti}:
\begin{align*}
B'&\leq C \bar{s}^{1/2}(p^0+p'^0),
\end{align*}
which gives
\begin{align}\label{T_12}
(T_{12})^2 \leq C \int_{\mathbb{R}^3}\frac{dp}{p^0} \int_{\mathbb{R}^3}\frac{dp'}{p'^0}\bar{s}^{1/2}(p^0+p'^0)J(p'^0/2)|\eta(p')|^2|\xi(p)|^2.
\end{align}
Using $\bar{s} \leq C p^0p'^0$ and $(p'^0)^{1/2}J(p'^0/2)\leq C$, we have
\begin{align*}
(T_{12})^2 \leq C \int_{\mathbb{R}^3}dp(p^0)^{1/2}|\xi(p)|^2 \int_{\mathbb{R}^3}dp'|\eta(p')|^2 \leq C\|\eta\|_{L^2_p}^2\|\xi\|_{\nu}^2 .
\end{align*}
Combining the estimates of $T_{11}$ and $T_{12}$ yields
\begin{align*}
\big| \langle T_1(f,h,\eta), \xi \rangle_{L^2_p} \big| \leq C \|f\|_{\nu}\|h\|_{L^2_{p}}\|\eta\|_{L^2_p}\|\xi\|_{\nu}.
\end{align*}
{\bf (2) Estimates of $T_3$: } Using $\sqrt{m+\tau m^2(p)}\leq J(p^0/2)$, we estimate $T_3$ as follows:
\begin{align*}
|T_3(f,h,\eta)|
&\leq \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta)J(p'^0/2)J(q'^0/2)|f(p)||h(p')||\eta(q')|.
\end{align*}
Via the H\"{o}lder inequality, we can split the variables as follows:
\begin{align*}
\big| \langle T_3(f,h,\eta), \xi \rangle_{L^2_p} \big| &\leq C \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(p'^0/2)J(q'^0/2) |f(p)|^2|h(p')|^2\right)^{\frac{1}{2}} \cr
&\times \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(p'^0/2)J(q'^0/2)|\xi(p)|^2|\eta(q')|^2\right)^{\frac{1}{2}}\cr
&= T_{31} \times T_{32}.
\end{align*}Via the change of variables $p' \leftrightarrow q'$, we can see that $T_{31}$ and $T_{32}$ have identical forms. Thus we only consider the $T_{31}$ part here. Similarly to the estimates of $\Gamma_2$, we rewrite $(T_{31})^2$ as follows:
\begin{multline*}
\int_{\mathbb{R}^3}\frac{dp}{p^0} \int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dp'}{p'^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta)J(p'^0/2)J(q'^0/2)|f(p)|^2|h(p')|^2\delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}).
\end{multline*}
Then we note that the energy conservation law and the exponential decay with respect to the two post-collisional momenta together imply decay with respect to all pre- and post-collisional momenta:
\begin{align*}
J(p'^0/2)J(q'^0/2)=J(p^0/4)J(q^0/4)J(p'^0/4)J(q'^0/4).
\end{align*}
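This identity can be checked directly. Assuming the exponential profile $J(x)=e^{-x}$ (any weight satisfying $J(a)J(b)=J(a+b)$ works in the same way), the energy conservation law $p^0+q^0=p'^0+q'^0$ gives
\begin{align*}
J(p'^0/2)J(q'^0/2)=e^{-\frac{p'^0+q'^0}{2}}=e^{-\frac{p^0+q^0}{4}}e^{-\frac{p'^0+q'^0}{4}}=J(p^0/4)J(q^0/4)J(p'^0/4)J(q'^0/4).
\end{align*}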
Then, we use the boundedness of $B$ in \eqref{B esti} as
\begin{align*}
B&= \int_{\mathbb{R}^3}\frac{dq}{q^0}\int_{\mathbb{R}^3}\frac{dq'}{q'^0} s\sigma(g,\theta)J(q^0/4) \delta^{(4)}(p^{\mu}+q^{\mu}-p'^{\mu}-q'^{\mu}) \leq C \bar{s}^{1/2}(p^0+p'^0),
\end{align*}
which yields
\begin{align*}
(T_{31})^2 \leq C \int_{\mathbb{R}^3}\frac{dp}{p^0} \int_{\mathbb{R}^3}\frac{dp'}{p'^0}\bar{s}^{1/2}(p^0+p'^0)J(p^0/4)J(p'^0/4)|f(p)|^2|h(p')|^2.
\end{align*}
Using $\bar{s} \leq C p^0p'^0$ and $(p^0)^{1/2}J(p^0/4)\leq C$, we have
\begin{align*}
(T_{31})^2 \leq C \int_{\mathbb{R}^3}dp|f(p)|^2 \int_{\mathbb{R}^3}dp'|h(p')|^2 \leq C\|f\|_{L^2_p}^2\|h\|_{L^2_p}^2 .
\end{align*}
Since $T_{32}$ has the same form as $T_{31}$, we have
\begin{align*}
\big| \langle T_3(f,h,\eta), \xi \rangle_{L^2_p} \big| \leq C \|f\|_{L^2_p} \|h\|_{L^2_p} \|\eta\|_{L^2_p} \| \xi \|_{L^2_p}.
\end{align*}
{\bf (3) Estimates of $T_4$: }
Recall the definition of $T_4$ in \eqref{T}. Using Lemma \ref{mJ}, we have $\frac{1}{C}J(p^0/2) \leq \sqrt{m+\tau m^2(p)} \leq CJ(p^0/2)$, which yields
\begin{align*}
|T_4(f,h,\eta)|&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) \frac{J(q^0/2)J(p'^0/2)J(q'^0/2)}{J(p^0/2)} |f(q)||h(p')||\eta(q')|.
\end{align*}
Using the energy conservation law $J(p'^0/2)J(q'^0/2)=J(p^0/2)J(q^0/2)$, we have
\begin{align*}
|T_4(f,h,\eta)|&\leq C\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ v_{\o}\sigma(g,\theta) J(q^0) |f(q)||h(p')||\eta(q')|.
\end{align*}
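The cancellation here can be made explicit. Assuming again the exponential profile $J(x)=e^{-x}$, the conservation identity $J(p'^0/2)J(q'^0/2)=J(p^0/2)J(q^0/2)$ gives
\begin{align*}
\frac{J(q^0/2)J(p'^0/2)J(q'^0/2)}{J(p^0/2)}=\frac{J(q^0/2)J(p^0/2)J(q^0/2)}{J(p^0/2)}=J(q^0/2)^2=J(q^0).
\end{align*}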
Then we take the inner product with $\xi$ to have
\begin{align*}
\big| \langle T_4(f,h,\eta), \xi \rangle_{L^2_p} \big|
&\leq C \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(q^0) |f(q)||h(p')||\eta(q')||\xi(p)|.
\end{align*}
Via the H\"{o}lder inequality, we divide the variables into two pre-collisional variables and two post-collisional variables as follows:
\begin{align*}
\big| \langle T_4(f,h,\eta), \xi \rangle_{L^2_p} \big|
&\leq C \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(q^0) |f(q)|^2|\xi(p)|^2\right)^{\frac{1}{2}} \cr
&\times \left(\int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)J(q^0) |h(p')|^2|\eta(q')|^2\right)^{\frac{1}{2}} \cr
&= T_{41} \times T_{42}.
\end{align*}
For the estimates of $T_{41}$, we use $v_{\o}\sigma(g,\theta) \leq C (p^0)^{1/2}(q^0)^{1/2}$ to have
\begin{align*}
(T_{41})^2&\leq C \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~(p^0)^{\frac{1}{2}}(q^0)^{\frac{1}{2}}J(q^0) |f(q)|^2|\xi(p)|^2
\leq C \|f\|_{L^2_p}^2\|\xi\|_{\nu}^2,
\end{align*}
where we used $(q^0)^{\frac{1}{2}}J(q^0) \leq C$.
Similarly, we have
\begin{align*}
(T_{42})^2 &\leq C \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~ (p^0)^{\frac{1}{2}} |h(p')|^2|\eta(q')|^2.
\end{align*}
We use $(p^0)^{1/2}\leq (p'^0)^{1/2}+(q'^0)^{1/2}$ to have
\begin{align*}
(T_{42})^2 &\leq C \int_{\mathbb{R}^3}dp \int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~\left((p'^0)^{\frac{1}{2}}+(q'^0)^{\frac{1}{2}}\right) |h(p')|^2|\eta(q')|^2.
\end{align*}
Then applying the pre-post collisional change of variables $(p,q) \leftrightarrow (p',q')$ as in \eqref{pq,p'q'} yields
\begin{align*}
T_{42} &\leq C \left(\| h \|_{L^2_p} \| \eta \|_{\nu} + \| h \|_{\nu} \| \eta \|_{L^2_p} \right).
\end{align*}
Combining the estimates of $T_{41}$ and $T_{42}$ yields
\begin{align*}
\big| \langle T_4(f,h,\eta), \xi \rangle_{L^2_p} \big|&\leq C \left(\| h \|_{L^2_p} \| \eta \|_{\nu} + \| h \|_{\nu} \| \eta \|_{L^2_p} \right)\|f\|_{L^2_p}\|\xi\|_{\nu}.
\end{align*}
\end{proof}
\section{Local existence}\label{sec:localintime}
In this section, we construct the local-in-time classical solution. Here we briefly describe a main difference between the quantum Boltzmann theory and the Newtonian one.
Following the statistical description of the quantum Boltzmann equation in \cite{MR0258399}, we note that the term $(1-F(p))$ in the collision operator for fermions is the probability that a fermion is placed at the momentum $p$ after a collision. Thus the factor $(1-F(p))$ has to be non-negative. Then, in each step of the iteration scheme in the proof of the local existence, the collision operator depending on $F^{n+1}$ is well-defined only when $F^{n+1}$ lies in the interval $[0,1]$ in the case of fermions. Thus we need to prove the boundedness of the solution $F^{n+1}$ in each iteration step as well.
\begin{theorem}\label{Local}
Let $N\geq 3$. Suppose that the initial data $F_0$ satisfies
\begin{align*}
\left\{\begin{array}{ll} 0 \leq F_0(x,p)=m(p)+\sqrt{m(p)-m^2(p)}f_0(x,p) \leq 1, \quad \mbox{for fermion} \\ 0 \leq F_0(x,p)=m(p)+\sqrt{m(p)+m^2(p)}f_0(x,p), \quad \hspace{6mm} \mbox{for boson} , \end{array} \right.
\end{align*}
Then there exist $M_0>0$ and $T_*>0$ such that if $\mathcal{E}(f_0)\leq \frac{M_0}{2}$, then there exists a unique local-in-time solution $f(x,p,t)$ of \eqref{pertf} satisfying
\begin{enumerate}
\item The higher order energy is uniformly bounded:
\[\sup_{0 \leq t\leq T_*}\mathcal{E}(f(t))\leq M_0.\]
\item The distribution function is bounded in $t\in[0,T_*]$:
\begin{align*}
\left\{\begin{array}{ll} 0\leq F(x,p,t)=m(p)+\sqrt{m(p)-m^2(p)}f(x,p,t) \leq 1, \quad \mbox{for fermions} \\ 0 \leq F(x,p,t)=m(p)+\sqrt{m(p)+m^2(p)}f(x,p,t), \quad \hspace{6mm} \mbox{for bosons} , \end{array} \right.
\end{align*}
\item The higher order energy norm $\mathcal{E}(f(t))$ is continuous in $t\in[0,T_*]$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first take an iteration scheme as follows:
\begin{align}\label{iter}
(\partial_t +\hat{p}\cdot \nabla_x)F^{n+1}=Q(F^n,F^n,F^{n+1},F^n),
\end{align}
with $F^0(x,p,t)=F_0(x,p)$. Note that $F^{n+1}$ in the operator $Q$ occupies the slot of the $p$ variable. We proceed by induction. Let us assume that
\begin{align}\label{assume}
\left\{\begin{array}{ll}0 \leq F^n(x,p,t) \leq 1 \quad \mbox{for fermions,} \\ 0 \leq F^n(x,p,t) \qquad \hspace{3mm} \mbox{for bosons,} \end{array} \ \text{and}\ \sup_{0 \leq t\leq T_*}\mathcal{E}(f^n(t))\leq M_0. \right .
\end{align}
Note that the collision operator $Q(F^n,F^n,F^{n+1},F^n)$ is well-defined when $F^{n+1}$ is in the interval $[0,1]$. We define
\begin{align*}
G(F)&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)F(p')F(q')(1+\tau F(q)), \cr
R(F)&=\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}dw~v_{\o}\sigma(g,\theta)(1+\tau F(p'))(1+\tau F(q'))F(q),
\end{align*}
to have
\begin{align*}
Q(F^n,F^n,F^{n+1},F^n)=G(F^n)(1+\tau F^{n+1}(p))-R(F^n)F^{n+1}(p).
\end{align*}
Then we rewrite \eqref{iter} as follows:
\begin{align}\label{RG}
(\partial_t +\hat{p}\cdot \nabla_x-\tau G(F^n)+R(F^n))F^{n+1}=G(F^n).
\end{align}
In the case of fermions, we additionally consider the upper bound of $F^{n+1}$. We observe that the induction hypothesis $(\ref{assume})_1$ implies
\begin{align*}
0\leq G(F^n), \quad 0 \leq R(F^n).
\end{align*}
Since $R(F^n)$ is non-negative, we have
\begin{align*}
(\partial_t +\hat{p}\cdot \nabla_x+G(F^n)+R(F^n))F^{n+1}\leq G(F^n)+R(F^n).
\end{align*}The associated ODE for the particle characteristic trajectory is given by $dX(s)/ds = \hat{p}(s)$ where $X(s)=X(s;t,x,p)$.
We integrate over the particle path to obtain
\begin{multline*}
F^{n+1}(X(t),p,t)\leq e^{-\int_0^t(G(F^n)+R(F^n))d\tau}F_0(X(0),p) \cr
+\int_0^t e^{-\int_s^t(G(F^n)+R(F^n))d\tau}(G(F^n)+R(F^n))(X(s),p,s)ds.
\end{multline*} We observe from
\begin{align*}
\frac{d}{ds} \left\{e^{-\int_s^t(G(F^n)+R(F^n))d\tau}\right\} = e^{-\int_s^t(G(F^n)+R(F^n))d\tau}(G(F^n)+R(F^n))(X(s),p,s),
\end{align*}
\iffalse
\begin{multline*}
F^{n+1}(x,p,t)\leq e^{-\int_0^t(G(F^n)+R(F^n))d\tau}F_0(x-\hat{p}t,p) \cr
+\int_0^t e^{-\int_s^t(G(F^n)+R(F^n))(x-\hat{p}(t-\tau),p,\tau)d\tau}(G(F^n)+R(F^n))(x-\hat{p}(t-s),p,s)ds.
\end{multline*}
We observe from
\begin{multline*}
\frac{d}{ds} \left\{e^{-\int_s^t(G(F^n)+R(F^n))(x-\hat{p}(t-\tau),p,\tau)d\tau}\right\} \cr
= e^{-\int_s^t(G(F^n)+R(F^n))(x-\hat{p}(t-\tau),p,\tau)d\tau}(G(F^n)+R(F^n))(x-\hat{p}(t-s),p,s)
\end{multline*}
\fi
that
\begin{align*}
F^{n+1}(x,p,t) &\leq e^{-\int_0^t(G(F^n)+R(F^n))d\tau}F_0(x-\hat{p}t,p)+1-e^{-\int_0^t(G(F^n)+R(F^n))d\tau} \cr
&=e^{-\int_0^t(G(F^n)+R(F^n))d\tau}(F_0(x-\hat{p}t,p)-1)+1 \cr
&\leq 1,
\end{align*}
where we used the boundedness of the initial data $F_0 \leq 1 $.
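In the estimate above, the Duhamel integral was evaluated exactly by integrating the derivative identity over $s\in[0,t]$:
\begin{multline*}
\int_0^t e^{-\int_s^t(G(F^n)+R(F^n))d\tau}(G(F^n)+R(F^n))(X(s),p,s)ds=\left[e^{-\int_s^t(G(F^n)+R(F^n))d\tau}\right]_{s=0}^{s=t} \cr
=1-e^{-\int_0^t(G(F^n)+R(F^n))d\tau}.
\end{multline*}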
We now consider the lower bound of $F^{n+1}$. Using $G(F^n)\geq 0$ in \eqref{RG}, we have
\begin{align*}
(\partial_t +\hat{p}\cdot \nabla_x-\tau G(F^n)+R(F^n))F^{n+1}\geq 0,
\end{align*}
which yields
\begin{align*}
F^{n+1}(x,p,t) \geq e^{-\int_0^t(-\tau G(F^n)+R(F^n))d\tau}F_0(x-\hat{p}t,p) \geq 0,
\end{align*}
where we used the boundedness of the initial data $F_0\geq 0 $. Now the collision operator of \eqref{iter} is well-defined.
Substituting $F^{n+1}=m+\sqrt{m+\tau m^2}f^{n+1}$ into \eqref{iter} gives
\begin{align*}
(\partial_t +\hat{p}\cdot \nabla_x+\nu)f^{n+1}=Kf^n+\Gamma(f^n,f^{n+1}),
\end{align*}
where $K=K_2-K_1$ and
\begin{align*}
\Gamma(f^n,f^{n+1})&= \sum_{i=1,2,4}\Gamma_i(f^{n+1},f^n)+\sum_{i=3,5,6}\Gamma_i(f^n,f^n) \cr
&\quad +\sum_{i=1,2,3}T_i(f^{n+1},f^n,f^n)+T_4(f^n,f^n,f^n).
\end{align*}
We take $\partial^{\alpha}$ on each side to have
\begin{align*}
\partial_t\partial^{\alpha}f^{n+1}+\hat{p}\cdot\nabla_x\partial^{\alpha}f^{n+1}+\partial^{\alpha}(\nu f^{n+1})&=\partial^{\alpha}Kf^n+\partial^{\alpha}\Gamma(f^n,f^{n+1}).
\end{align*}
We take the $L^2_{x,p}$ inner product with $\partial^{\alpha}f^{n+1}$, then the nonlinear estimates Lemma \ref{nonlin ff} and Lemma \ref{nonlin fff} with the induction hypothesis $(\ref{assume})_2$ yield
\begin{align*}
(1-C\sqrt{M_0}-CM_0-CT_*^2M_0 -CM_0^2-CM_0^3)\sup_{0\leq t \leq T_*}\mathcal{E}_{n+1}(t) \leq \frac{M_0}{2}.
\end{align*}
For sufficiently small $T_*$ and $M_0$, we have
\begin{align*}
\sup_{0\leq t \leq T_*}\mathcal{E}_{n+1}(t) \leq M_0.
\end{align*}
Taking the limit as $n\rightarrow \infty$ gives a local-in-time classical solution.
The remaining proof is standard as in \cite{MR1908664,MR2000470} and we omit it.
\end{proof}
\section{Global existence}
In this section, we extend the local solution constructed in Theorem \ref{Local} to a global solution. For this we first recover the full coercivity estimates of the linear operator $L$.
\subsection{Coercivity estimate}
Recall the definition of $Pf$ in Lemma \ref{null space}. Since $Pf$ is the orthogonal projection onto the subspace of $L^2_p$ spanned by the following basis,
\begin{align*}
\left\{\sqrt{m+\tau m^2},p_1\sqrt{m+\tau m^2},p_2\sqrt{m+\tau m^2},p_3\sqrt{m+\tau m^2},p^0\sqrt{m+\tau m^2}\right\},
\end{align*}
$Pf$ can be written as follows by the Gram-Schmidt process:
\begin{align*}
Pf=\mathcal{A}\sqrt{m+\tau m^2}+\mathcal{B}\cdot p\sqrt{m+\tau m^2}+\mathcal{C}p^0\sqrt{m+\tau m^2},
\end{align*}
where $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are given by
\begin{align*}
\mathcal{A} &= \frac{1}{\lambda}\int_{\mathbb{R}^3}dp~f\sqrt{m+\tau m^2}-\frac{\lambda_0}{\lambda}\frac{1}{\lambda_{00}-\frac{\lambda_0^2}{\lambda}}\left(\int_{\mathbb{R}^3}dp~fp^0\sqrt{m+\tau m^2}-\frac{\lambda_0}{\lambda}\int_{\mathbb{R}^3}dp~f\sqrt{m+\tau m^2}\right), \cr
\mathcal{B}_i&= \frac{1}{\lambda_i}\int_{\mathbb{R}^3}dp~fp^i\sqrt{m+\tau m^2},\cr
\mathcal{C}&= \frac{1}{\lambda_{00}-\frac{\lambda_0^2}{\lambda}}\left(\int_{\mathbb{R}^3}dp~fp^0\sqrt{m+\tau m^2}-\frac{\lambda_0}{\lambda}\int_{\mathbb{R}^3}dp~f\sqrt{m+\tau m^2}\right),
\end{align*}
and
\begin{align*}
\lambda &= \int_{\mathbb{R}^3}dp~m(p)+\tau m^2(p), \qquad \hspace{6mm}
\lambda_i = \int_{\mathbb{R}^3}dp~(p^i)^2(m(p)+\tau m^2(p)), \cr
\lambda_0 &= \int_{\mathbb{R}^3}dp~p^0(m(p)+\tau m^2(p)), \qquad
\lambda_{00} = \int_{\mathbb{R}^3}dp~(p^0)^2(m(p)+\tau m^2(p)),
\end{align*}
for $i=1,2,3$. Since $Pf$ has the exponential decay $\sqrt{m+\tau m^2}$, we have
\begin{align}\label{PfABC}
\sum_{|\alpha|\leq N}\| \partial^{\alpha} Pf \|_{x,\nu} &\leq C\sum_{|\alpha|\leq N} \left(\| \partial^{\alpha}\mathcal{A} \|_{L^2_x}+\| \partial^{\alpha}\mathcal{B} \|_{L^2_x}+\| \partial^{\alpha}\mathcal{C} \|_{L^2_x}\right).
\end{align}
Now we substitute $f= Pf+(I-P)f$ in the perturbation equation \eqref{pertf} to have
\begin{align*}
(\partial_t+\hat{p}\cdot \nabla_x)(Pf) = -(\partial_t+\hat{p}\cdot \nabla_x)((I-P)f)-L(I-P)f+\Gamma(f)+T(f),
\end{align*}
where we used $L(Pf)=0$ by Lemma \ref{null space}. We expand the left-hand side as follows:
\begin{align*}
\left(\partial_t\mathcal{A} +\sum_{i=1}^3\partial_{x_i}\mathcal{A}\frac{p_i}{p^0} + \sum_{i=1}^3(\partial_t\mathcal{B}_i+\partial_{x_i}\mathcal{C})p_i+\sum_{1 \leq i,j \leq 3}\partial_{x_i}\mathcal{B}_j\frac{p_ip_j}{p^0} + \partial_t\mathcal{C}p^0 \right) \sqrt{m+\tau m^2},
\end{align*}
which is a linear combination of the following $14$ basis elements:
\begin{align}\label{basis}
\left\{\sqrt{m+\tau m^2},\quad \frac{p_i}{p^0}\sqrt{m+\tau m^2},\quad p_i\sqrt{m+\tau m^2},\quad \frac{p_ip_j}{p^0}\sqrt{m+\tau m^2},\quad p^0\sqrt{m+\tau m^2} \right\},
\end{align}
for $1\leq i,j \leq 3$. To denote the right-hand side, we define
\begin{align}\label{l,h}
l&=-(\partial_t+\hat{p}\cdot \nabla_x+L)((I-P)f), \qquad
h=\Gamma(f)+T(f).
\end{align}
By expanding $l$ and $h$ with respect to the $14$-basis elements in \eqref{basis}, we obtain the following macro-micro system:
\begin{align}\label{system}
\begin{split}
\partial_t\mathcal{A}&= l_a+h_a,\cr
\partial_{x_i}\mathcal{A}&=l_i+h_i, \cr \partial_t\mathcal{B}_i+\partial_{x_i}\mathcal{C}&= l_{bci}+h_{bci},\cr
\partial_{x_i}\mathcal{B}_j+\partial_{x_j}\mathcal{B}_i &= l_{ij}+h_{ij},\cr
\partial_t\mathcal{C}&= l_c+h_c,
\end{split}
\end{align}
for $i,j=1,2,3$. On the right-hand side, $l_a, l_i, l_{bci}, l_{ij}, l_c$ and $h_a, h_i, h_{bci}, h_{ij}, h_c$ are the expansion coefficients of $l$ and $h$ with respect to the basis in \eqref{basis}, respectively. We define the sums of the coefficients of $l$ and $h$ as follows:
\begin{align*}
\tilde{l} &= l_a + l_c + \sum_{1\leq i \leq 3}\left(l_i + l_{bci}\right) + \sum_{1\leq i,j \leq 3}l_{ij}, \cr
\tilde{h} &= h_a + h_c + \sum_{1\leq i \leq 3}\left(h_i + h_{bci}\right) + \sum_{1\leq i,j \leq 3}h_{ij}.
\end{align*}
Note that the macro-micro system \eqref{system} is identical to the macro-micro system (98)-(102) of the relativistic Landau-Maxwell model in \cite{MR2100057} when the electromagnetic field vanishes ($E=B=0$). However, we provide the full proof here for the reader's convenience.
We first observe the following property of $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$: the conservation laws of $f$ in \eqref{consf} can be written in the following form.
\begin{lemma}\label{ABC0} We have
\begin{align*}
\int_{\mathbb{T}^3}\mathcal{A}(x,t)dx= \int_{\mathbb{T}^3}\mathcal{B}(x,t)dx= \int_{\mathbb{T}^3}\mathcal{C}(x,t)dx= 0.
\end{align*}
\end{lemma}
\begin{proof}
The conservation laws in \eqref{NPE0} imply
\begin{align*}
\int_{\mathbb{T}^3\times \mathbb{R}^3} dxdp~(F-F_0)\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right) &= 0.
\end{align*}
Combining with the assumption \eqref{assumption}, we have
\begin{align*}
\int_{\mathbb{T}^3\times \mathbb{R}^3} dxdp~f\sqrt{m+\tau m^2}\left( \begin{array}{c} 1 \cr p^{\mu} \end{array}\right) = 0.
\end{align*}
This gives the desired results.
\end{proof}
We now establish the estimates of $\mathcal{A}, \mathcal{B}, \mathcal{C}$. The proof of the following lemma is motivated by the proof of the estimate of (108) in \cite{MR2100057}.
\begin{lemma}\label{ABC} Let $N \geq 3$. We have
\begin{align*}
\sum_{|\alpha| \leq N}\left\{\|\partial^{\alpha}\mathcal{A}\|_{L^2_x}+\|\partial^{\alpha}\mathcal{B}\|_{L^2_x}+\|\partial^{\alpha}\mathcal{C}\|_{L^2_x}\right\} \leq C\sum_{|\alpha|\leq N-1}\left( \|\partial^{\alpha}\tilde{l}\|_{L^2_x}+\|\partial^{\alpha}\tilde{h}\|_{L^2_x}\right).
\end{align*}
\end{lemma}
\begin{proof}
We first prove the estimate of $\mathcal{A}$. Taking $\partial^{\alpha}$ on the first equation of \eqref{system} gives
\begin{align*}
\partial^{\alpha}\partial_t\mathcal{A}&= \partial^{\alpha}l_a+\partial^{\alpha}h_a.
\end{align*}
We take the inner product with $\partial^{\alpha}\partial_t\mathcal{A}$ and apply the H\"{o}lder inequality to have
\begin{align*}
\|\partial^{\alpha}\partial_t \mathcal{A}\|_{L_{x}^2}^2 \leq C\left(\|\partial^{\alpha}l_a\|_{L^2_x}\|\partial^{\alpha}\partial_t \mathcal{A}\|_{L_{x}^2}+\|\partial^{\alpha}h_a\|_{L^2_x}\|\partial^{\alpha}\partial_t \mathcal{A}\|_{L_{x}^2}\right).
\end{align*}
Dividing each side by $\|\partial^{\alpha}\partial_t \mathcal{A}\|_{L_{x}^2}$, we have
\begin{align*}
\|\partial^{\alpha}\partial_t \mathcal{A}\|_{L_{x}^2} \leq C(\|\partial^{\alpha}l_a\|_{L^2_x}+\|\partial^{\alpha}h_a\|_{L^2_x}).
\end{align*}
Similarly we take $\partial^{\alpha}$ on the second equation of \eqref{system} to have
\begin{align*}
\partial^{\alpha}\partial_{x_i}\mathcal{A}&=\partial^{\alpha}l_i+\partial^{\alpha}h_i.
\end{align*}
Multiplying $\partial^{\alpha}\partial_{x_i}\mathcal{A}$ on each side and applying the H\"{o}lder inequality, we have
\begin{align*}
\|\partial^{\alpha}\nabla_x\mathcal{A}\|_{L_{x}^2}\leq C (\|\partial^{\alpha}l_i\|_{L^2_x}+\|\partial^{\alpha}h_i\|_{L^2_x}).
\end{align*}
We apply the Poincar\'{e} inequality on $\mathcal{A}$ to have
\begin{align*}
\|\mathcal{A}\|_{L_{x}^2}- \left\|\frac{1}{|\mathbb{T}^3|}\int_{\mathbb{T}^3}\mathcal{A}(x)dx\right\|_{L_{x}^2} \leq C \|\nabla_x \mathcal{A}\|_{L^2_x}.
\end{align*}
Then the conservation law for $\mathcal{A}$ in Lemma \ref{ABC0} implies
\begin{align*}
\|\mathcal{A}\|_{L_{x}^2}\leq C \|\nabla_x \mathcal{A}\|_{L^2_x}.
\end{align*}
This completes the estimate of $\mathcal{A}$.
Since $\mathcal{B}$ and $\mathcal{C}$ are coupled through the third equation of \eqref{system}, we consider the fourth and the fifth equations of \eqref{system} first. For notational brevity, we use $\partial_j$ to denote $\partial_{x_j}$. Using the fourth equation of \eqref{system}, we calculate
\begin{align*}
\triangle \mathcal{B}_i = \sum_{1\leq j\leq 3}\partial_{jj}\mathcal{B}_i &= \sum_{j \neq i} \partial_{jj} \mathcal{B}_i +\partial_{ii} \mathcal{B}_i \cr
&= \sum_{j \neq i} \left(\partial_{j}l_{ij} + \partial_{j}h_{ij}-\partial_{ji} \mathcal{B}_j\right) + \frac{1}{2}(\partial_{i}l_{ii} + \partial_{i}h_{ii}).
\end{align*}
We substitute the fourth equation for $i=j$ case to have
\begin{align*}
\triangle \mathcal{B}_i &= \sum_{j \neq i} \left(\partial_{j}l_{ij} + \partial_{j}h_{ij}-\frac{1}{2}(\partial_{i}l_{jj} +\partial_{i}h_{jj})\right) + \frac{1}{2}(\partial_{i}l_{ii} + \partial_{i}h_{ii}).
\end{align*}
Taking $\partial^{\alpha}$ and multiplying $\partial^{\alpha}\mathcal{B}_i$ on each side yield
\begin{align}\label{Bx}
\|\partial^{\alpha}\nabla_x \mathcal{B}\|_{L^2_x} \leq C \sum_{1 \leq i,j \leq 3} \left(\|\partial^{\alpha}l_{ij}\|_{L^2_x}+\|\partial^{\alpha}h_{ij}\|_{L^2_x}\right).
\end{align}
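We remark that \eqref{Bx} follows by a standard elliptic argument, which we sketch here: testing the equation for $\triangle\partial^{\alpha}\mathcal{B}_i$ against $\partial^{\alpha}\mathcal{B}_i$, integrating by parts on $\mathbb{T}^3$ (no boundary terms), and moving one spatial derivative from the right-hand side onto $\partial^{\alpha}\mathcal{B}_i$, we obtain
\begin{align*}
\|\nabla_x\partial^{\alpha}\mathcal{B}_i\|_{L^2_x}^2=-\int_{\mathbb{T}^3}\triangle\partial^{\alpha}\mathcal{B}_i\,\partial^{\alpha}\mathcal{B}_i\,dx \leq C\sum_{1\leq k,j\leq 3}\left(\|\partial^{\alpha}l_{kj}\|_{L^2_x}+\|\partial^{\alpha}h_{kj}\|_{L^2_x}\right)\|\nabla_x\partial^{\alpha}\mathcal{B}_i\|_{L^2_x},
\end{align*}
and dividing by $\|\nabla_x\partial^{\alpha}\mathcal{B}_i\|_{L^2_x}$ gives the claimed bound.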
We take $\partial^{\alpha}$ on the fifth equation of \eqref{system} to have
\begin{align}\label{tC}
\|\partial^{\alpha}\partial_t \mathcal{C}\|_{L_{x}^2} \leq C\left(\|\partial^{\alpha}l_c\|_{L^2_x}+\|\partial^{\alpha}h_c\|_{L^2_x}\right).
\end{align}
Now we use the third equation of \eqref{system}. We consider the estimate of $\mathcal{B}$ when there is more than one temporal derivative. For this, we take the temporal derivative $\partial_t^{n}$ for $1\leq n\leq N-1 $ on the third equation of \eqref{system} to have
\begin{align*}
\partial_t^{n}\partial_t\mathcal{B}_i&= \partial_t^{n}l_{bci}+\partial_t^{n}h_{bci}-\partial_t^{n}\partial_{x_i}\mathcal{C}.
\end{align*}
Then the estimate \eqref{tC} gives
\begin{align}\label{Bt}
\|\partial_t^{n+1}\mathcal{B}_i\|_{L^2_x}& \leq \|\partial_t^{n}l_{bci}\|_{L^2_x}+\|\partial_t^{n}h_{bci}\|_{L^2_x} + C\left(\|\partial_{x_i}\partial_t^{n-1}l_c\|_{L^2_x}+\|\partial_{x_i}\partial_t^{n-1}h_c\|_{L^2_x}\right).
\end{align}
For the estimate of $\mathcal{B}$ when there are fewer than two temporal derivatives, we apply the Poincar\'{e} inequality on $\mathcal{B}$ and $\partial_t\mathcal{B}$ to have
\begin{align*}
\|\partial_t^n\mathcal{B}_i\|_{L^2_x}-\left\|\frac{1}{|\mathbb{T}^3|}\partial_t^n\int_{\mathbb{T}^3} \mathcal{B}_i dx\right\|_{L^2_x} &\leq \|\nabla_x \partial_t^n \mathcal{B}_i\|_{L^2_x},
\end{align*}
for $n=0,1$. Then the conservation law in Lemma \ref{ABC0} yields
\begin{align*}
\|\partial_t^n\mathcal{B}_i\|_{L^2_x}&\leq \|\nabla_x \partial_t^n \mathcal{B}_i\|_{L^2_x}.
\end{align*}
Combining with \eqref{Bx} and \eqref{Bt}, we conclude that
\begin{align}\label{BT}
\sum_{|\alpha| \leq N}\|\partial^{\alpha}\mathcal{B}\|_{L^2_x} \leq C\sum_{|\alpha|\leq N-1}\left( \|\partial^{\alpha}\tilde{l}\|_{L^2_x}+\|\partial^{\alpha}\tilde{h}\|_{L^2_x}\right).
\end{align}
For the estimate of $\mathcal{C}$, we take $\partial^{\alpha}$ on the third equation of \eqref{system} to have
\begin{align*}
\partial^{\alpha}\partial_{x_i}\mathcal{C}&= \partial^{\alpha}l_{bci}+\partial^{\alpha}h_{bci}-\partial^{\alpha}\partial_t\mathcal{B}_i.
\end{align*}
Applying \eqref{BT} yields
\begin{align}\label{xC}
\|\partial^{\alpha}\partial_{x_i}\mathcal{C}\|_{L^2_x}&\leq C\sum_{|\beta|= |\alpha|}\left( \|\partial^{\beta}\tilde{l}\|_{L^2_x}+\|\partial^{\beta}\tilde{h}\|_{L^2_x}\right).
\end{align}
Then the Poincar\'{e} inequality with Lemma \ref{ABC0} gives
\begin{align*}
\|\mathcal{C}\|_{L_{x}^2}\leq C \|\nabla_x \mathcal{C}\|_{L^2_x}.
\end{align*}
Combining with the estimates \eqref{tC} and \eqref{xC}, we derive the desired result.
\end{proof}
\begin{lemma}\label{lh esti} Let $N\geq 3$. Suppose that
\begin{align*}
\sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{L^2_{x,p}}^2 \leq M_0 .
\end{align*}
Then we have
\begin{align*}
&(1) \ \sum_{|\alpha|\leq N-1} \|\partial^{\alpha}\tilde{l}\|_{L^2_x} \leq C \sum_{|\alpha|\leq N}\|(I-P)\partial^{\alpha}f\|_{x,\nu}, \cr
&(2) \ \sum_{|\alpha|\leq N} \|\partial^{\alpha}\tilde{h}\|_{L^2_x} \leq C(\sqrt{M_0}+M_0) \sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{x,\nu}.
\end{align*}
\end{lemma}
\begin{proof}
For the Newtonian Boltzmann equation and the relativistic Landau-Maxwell system, analogous proofs can be found in \cite{MR2000470} and \cite{MR2100057}, respectively. The difference now is that the estimate of the nonlinear term $h$ includes the third-order nonlinear terms. \newline
(1) We denote the $14$-basis of \eqref{basis} as $\{e_i\}_{1\leq i \leq 14}$, and let $\{e_i^*\}_{1\leq i \leq 14}$ be the corresponding orthonormal basis. Then the orthonormal basis can be written by a linear combination of the original basis $\{e_i\}_{1\leq i \leq 14}$ as follows:
\begin{align*}
e_i^* = \sum_{j=1}^{14} C_{ij}e_j,
\end{align*}
for $i=1,\cdots,14$. We consider the orthonormal expansion of $l$ as follows:
\begin{align*}
l= \sum_{i=1}^{14}\langle l,e_i^*\rangle_{L^2_p}e_i^* = \sum_{i=1}^{14}\left\langle l, \sum_{j=1}^{14} C_{ij}e_j\right\rangle_{L^2_p} \sum_{k=1}^{14} C_{ik}e_k.
\end{align*}
Then the coefficient of $e_k$ can be read as follows:
\begin{align*}
\sum_{1\leq i,j\leq 14} C_{ij}C_{ik}\langle l, e_j\rangle_{L^2_p},
\end{align*}
which correspond to $l_a, l_i, l_{bci}, l_{ij}$, and $l_c$.
By the definition of $l$ in \eqref{l,h} and the linear operator $L$ in Proposition \ref{linearization}, we can write $l$ as
\begin{align*}
l&=-(\partial_t+\hat{p}\cdot \nabla_x+\nu+K_1-K_2)((I-P)f).
\end{align*}
For $|\alpha|=N-1$, we have
\begin{multline*}
\bigg\|\int_{\mathbb{R}^3} dp~\partial^{\alpha} l \cdot e_i(p) \bigg\|_{L^2_x} \cr
\leq \bigg\|\int_{\mathbb{R}^3} dp~\bigg(\sum_{|\beta|= N}(1+\hat{p})(I-P)\partial^{\beta}f + (\nu+K_1-K_2)(I-P)\partial^{\alpha}f\bigg) \cdot e_i(p) \bigg\|_{L^2_x}.
\end{multline*}
We use the H\"{o}lder inequality on the $(1+\hat{p})(I-P)\partial^{\beta}f$ term and the $\nu(I-P)\partial^{\alpha}f$ term. Then we have
\begin{align*}
\int_{\mathbb{R}^3} dp~ \bigg(\sum_{|\beta|= N}(1+\hat{p})(I-P)\partial^{\beta}f + \nu(I-P)\partial^{\alpha}f\bigg) \cdot e_i(p) \leq \sum_{|\alpha|\leq N}\|(I-P)\partial^{\alpha}f \|_{L^2_{x,p}} ,
\end{align*}
where we used the exponential decay $\sqrt{m+\tau m^2}$ of the basis $e_i(p)$ in \eqref{basis} that implies
\begin{align*}
\int_{\mathbb{R}^3}\left((1+\hat{p})^2+\nu(p)\right)e_i^2(p) dp \leq C.
\end{align*}
From the estimate of the compact operator $K_j$ from Lemma \ref{K esti}, we also have
\begin{align*}
\langle K_jf , e_i \rangle_{L^2_p}&\leq C \|f \|_{\nu},
\end{align*}
for $j=1,2$ and $i=1,\cdots,14$. The estimates above together yield
\begin{align*}
\bigg\|\int_{\mathbb{R}^3}\partial^{\alpha} l \cdot e_i(p) dp \bigg\|_{L^2_x} & \leq C \sum_{|\alpha| = N}\|(I-P)\partial^{\alpha}f \|_{L^2_{x,p}}+\|(I-P)\partial^{\alpha}f \|_{x,\nu} \cr
& \leq C \sum_{|\alpha| \leq N}\|(I-P)\partial^{\alpha}f \|_{x,\nu}.
\end{align*}
(2) In the same manner, the coefficient of $e_k$ in the expansion of $h$ can be written as
\begin{align*}
\sum_{1\leq i,j\leq 14} C_{ij}C_{ik}\langle h, e_j\rangle_{L^2_p}.
\end{align*}
For the second-order nonlinear terms, by Lemma \ref{nonlin ff}, we have
\begin{align*}
\big| \langle \partial^{\alpha} \Gamma(f,f) ,e_i \rangle_{L^2_{x,p}} \big|
\leq C \sum_{|\alpha_1|+|\alpha_2|\leq|\alpha|}\|\partial^{\alpha_1}f\|_{L^2_{x,p}}\|\partial^{\alpha_2}f\|_{x,\nu}
\leq C \sqrt{M_0} \sum_{|\alpha_1|\leq|\alpha|} \|\partial^{\alpha_1}f\|_{x,\nu}.
\end{align*}
For the third-order nonlinear term, Lemma \ref{nonlin fff} yields
\begin{align*}
\big| \langle \partial^{\alpha} T(f,f,f) ,e_i \rangle \big|
&\leq C \sum_{|\alpha_1|+|\alpha_2|+|\alpha_3|\leq|\alpha|}\|\partial^{\alpha_1}f\|_{L^2_p}\|\partial^{\alpha_2}f\|_{L^2_p}\|\partial^{\alpha_3}f\|_{\nu}.
\end{align*}
Then the Sobolev embedding $H^2(\mathbb{T}^3)\subset\subset L^{\infty}(\mathbb{T}^3)$ implies
\begin{align*}
\bigg\|\int_{\mathbb{R}^3}\partial^{\alpha} T(f,f,f) \cdot e_i(p) dp \bigg\|_{L^2_x}
&\leq CM_0\sum_{|\alpha_1|\leq|\alpha|}\|\partial^{\alpha_1}f\|_{x,\nu}.
\end{align*}
So we obtain the desired results.
\end{proof}
We now have all the estimates to recover the full coercivity. We combine \eqref{PfABC} with Lemma \ref{ABC} and Lemma \ref{lh esti} to have
\begin{align*}
\sum_{|\alpha|\leq N}\| \partial^{\alpha} Pf \|_{x,\nu}
&\leq \sum_{|\alpha|\leq N-1} \left( \| \partial^{\alpha}l \|_{L^2_x}+\| \partial^{\alpha}h \|_{L^2_x} \right)\cr
&\leq C\sum_{|\alpha|\leq N}\left(\|(I-P)\partial^{\alpha}f\|_{x,\nu}+(\sqrt{M_0}+M_0)\|\partial^{\alpha}f\|_{x,\nu}\right).
\end{align*}
Thus, for a sufficiently small $M_0$, there exists $\delta>0$ such that
\begin{align}\label{full coer}
\sum_{|\alpha|\leq N}\langle L\partial^{\alpha}f, \partial^{\alpha}f\rangle_{L^2_{x,p}} &\geq \delta \sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{x,\nu}^2,
\end{align}by Lemma \ref{coercivity}.
\subsection{Global existence} We now have all the ingredients for the proof of Theorem \ref{Main Theorem}. Extending the local solution constructed in Theorem \ref{Local} to a global solution is standard as in \cite{MR2000470,MR2095473}. We only sketch the proof here. Recall that substituting $F=m+\sqrt{m+\tau m^2}f$ into \eqref{RQBE} yields the following linearized equation:
\begin{align*}
\partial_tf+\hat{p}\cdot\nabla_xf +Lf&= \Gamma(f)+T(f), \cr
f(x,p,0) &= f_0(x,p).
\end{align*}
We take $\partial^{\alpha}$ on each side to have
\begin{align*}
\partial_t\partial^{\alpha}f+\hat{p}\cdot\nabla_x\partial^{\alpha}f+L\partial^{\alpha}f&= \partial^{\alpha}\Gamma(f)+\partial^{\alpha}T(f).
\end{align*}
Taking the inner product with $\partial^{\alpha}f $ yields
\begin{align*}
\frac{1}{2}\frac{d}{dt}\| \partial^{\alpha}f\|_{L^2_{x,p}}^2+ \langle L\partial^{\alpha}f, \partial^{\alpha}f\rangle_{L^2_{x,p}}&= \langle \partial^{\alpha}f, \partial^{\alpha}\Gamma(f) \rangle_{L^2_{x,p}}+\langle \partial^{\alpha}f, \partial^{\alpha}T(f) \rangle_{L^2_{x,p}}.
\end{align*}
Applying the full coercivity estimate \eqref{full coer}, we have
\begin{align*}
\frac{1}{2}\frac{d}{dt}\| \partial^{\alpha}f\|_{L^2_{x,p}}^2+ \delta \| \partial^{\alpha}f\|_{x,\nu}^2&\leq \langle \partial^{\alpha}f, \partial^{\alpha}\Gamma(f) \rangle_{L^2_{x,p}}+\langle \partial^{\alpha}f, \partial^{\alpha}T(f) \rangle_{L^2_{x,p}}.
\end{align*}
The estimate of the second-order nonlinear terms in Lemma \ref{nonlin ff} gives
\begin{align*}
\langle \partial^{\alpha}f, \partial^{\alpha}\Gamma(f) \rangle_{L^2_{x,p}} \leq C \sum_{|\alpha_1|+|\alpha_2| \leq N} \int_{\mathbb{T}^3}dx ~\left(\|\partial^{\alpha_1}f\|_{L^2_p}\|\partial^{\alpha_2}f\|_{\nu}+\|\partial^{\alpha_1}f\|_{\nu}\|\partial^{\alpha_2}f\|_{L^2_p}\right)\|\partial^{\alpha}f\|_{\nu}.
\end{align*}
Without loss of generality, we assume that $|\alpha_1| \leq |\alpha_2|$. Since $N\geq 3$, we have $|\alpha_1|+2 \leq N $. Thus the Sobolev embedding $H^2(\mathbb{T}^3)\subset\subset L^{\infty}(\mathbb{T}^3)$ implies
\begin{align*}
\sum_{|\alpha| \leq N}\langle \partial^{\alpha}f, \partial^{\alpha}\Gamma(f) \rangle_{L^2_{x,p}} &\leq C \sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{L^2_{x,p}}\sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{x,\nu}\sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{x,\nu}\cr
&\leq C\sqrt{\mathcal{E}(t)}\sum_{|\alpha| \leq N} \|\partial^{\alpha}f \|_{x,\nu}^2.
\end{align*}
For the estimate of the third-order nonlinear terms, we apply Lemma \ref{nonlin fff} to have
\begin{multline*}
\langle \partial^{\alpha}f, \partial^{\alpha}T(f) \rangle_{L^2_{x,p}} \leq C \sum_{|\alpha_1|+|\alpha_2|+|\alpha_3| \leq N} \int_{\mathbb{T}^3}dx ~ \big( \|\partial^{\alpha_1}f\|_{L^2_p}\|\partial^{\alpha_2}f\|_{L^2_p}\|\partial^{\alpha_3}f\|_{\nu} \cr
+\|\partial^{\alpha_1}f\|_{L^2_p}\|\partial^{\alpha_2}f\|_{\nu}\|\partial^{\alpha_3}f\|_{L^2_p} +\|\partial^{\alpha_1}f\|_{\nu }\|\partial^{\alpha_2}f\|_{L^2_p}\|\partial^{\alpha_3}f\|_{L^2_p} \big)\|\partial^{\alpha}f\|_{\nu}.
\end{multline*}
Similarly, we assume that $|\alpha_1|,|\alpha_2| \leq |\alpha_3|$.
Since $N\geq 3$, we have $|\alpha_1|+2\leq N$ and $|\alpha_2|+2 \leq N $. By the Sobolev embedding $H^2(\mathbb{T}^3)\subset\subset L^{\infty}(\mathbb{T}^3)$, we have
\begin{align*}
\sum_{|\alpha| \leq N}\langle \partial^{\alpha}f, \partial^{\alpha}T(f) \rangle_{L^2_{x,p}} &\leq C \sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{L^2_{x,p}}\sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{L^2_{x,p}}\sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{x,\nu}\sum_{|\alpha| \leq N}\|\partial^{\alpha}f\|_{x,\nu}\cr
&\leq C\mathcal{E}(t)\sum_{|\alpha| \leq N} \|\partial^{\alpha}f \|_{x,\nu}^2.
\end{align*}
Combining these estimates, we conclude that
\begin{align*}
\frac{1}{2}\frac{d}{dt}\sum_{|\alpha| \leq N}\| \partial^{\alpha}f\|_{L^2_{x,p}}^2 + \delta \sum_{|\alpha| \leq N} \|\partial^{\alpha}f \|_{x,\nu}^2 &\leq C \left(\sqrt{\mathcal{E}(t)}+\mathcal{E}(t)\right) \sum_{|\alpha| \leq N} \|\partial^{\alpha}f \|_{x,\nu}^2.
\end{align*}
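For the reader's convenience, the closing absorption step behind this estimate can be sketched as follows (our gloss of the standard argument, under the assumption that $\mathcal{E}(t)$ controls $\sum_{|\alpha|\leq N}\|\partial^{\alpha}f\|_{L^2_{x,p}}^2$): if $\sup_{0\leq s\leq t}\mathcal{E}(s)\leq \epsilon$ with $\epsilon$ so small that $C(\sqrt{\epsilon}+\epsilon)\leq \delta/2$, then
\begin{align*}
\frac{1}{2}\frac{d}{dt}\sum_{|\alpha| \leq N}\| \partial^{\alpha}f\|_{L^2_{x,p}}^2 + \frac{\delta}{2} \sum_{|\alpha| \leq N} \|\partial^{\alpha}f \|_{x,\nu}^2 \leq 0~,
\end{align*}
so that the smallness of the initial energy propagates in time and the local-in-time solution extends globally.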
The remaining proof can be established by the standard continuity argument as in \cite{MR2000470,MR2095473}. This completes the proof.
\noindent {\bf Acknowledgement:}
J. W. Jang is supported by CRC 1060 \textit{The mathematics of emergent effects} at the
University of Bonn funded through the German Science Foundation (DFG).
S.-B. Yun is supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1801-02.
\section{Introduction}
The use of Fourier methods in astronomical imaging is mainly related to radio interferometry \citep{richard2017interferometry}. However, in the last three decades, this approach has also been utilized in the case of solar hard X-ray telescopes that have been conceived in order to provide spatial Fourier components of the photon flux emitted via either bremsstrahlung or thermal processes during solar flares \citep{enlighten1658,krucker2020spectrometer}. These Fourier components, named {\em{visibilities}}, are sampled by the hard X-ray instrument in the two-dimensional Fourier space, named the $(u,{\mbox{v}})$-plane, in a sparse way, according to a geometry depending on the instrument design. For instance, the {\em{Reuven Ramaty High Energy Spectroscopic Imager (RHESSI)}} relies on the use of a set of nine rotating modulation collimators (RMCs) whose Full Width at Half Maximum (FWHM) is logarithmically spaced between $2.3$ and $183$ arcsec \citep{2002SoPh}. Each RMC measures visibilities on a circle of points in the $(u,{\mbox{v}})$-space with a spatial frequency that corresponds to its angular resolution and a position angle that varies according to the spacecraft rotation (see Figure \ref{figure:fig-1}, left panel). On the other hand, the {\em{Spectrometer/Telescope for Imaging X-rays (STIX)}} on-board {\em{Solar Orbiter}} is based on the Moir\'e pattern technology \citep{STIX1,2019A&A...624A.130M} and its $30$ collimators sample the $(u,{\mbox{v}})$-plane over a set of six spirals for a FWHM resolution coarser than $7$ arcsec (see Figure \ref{figure:fig-1}, right panel).
Image reconstruction methods in solar hard X-ray astronomy rely on procedures that allow some sort of interpolation/extrapolation in the $(u,{\mbox{v}})$-space in order to recover information in between the sampled frequencies, for reducing the imaging artifacts and, outside the sampling domain, for obtaining super-resolution effects. Most methods accomplish these objectives by imposing constraints in the image domain, either by optimizing parameters associated with predefined image shapes via comparison with observations \citep{Aschwanden,sciacchitano2018identification}, or by minimizing regularization functionals that combine a fitting term with a stability term \citep{felix2017compressed,duval2018solar,massa2020mem_ge}.
However, the most straightforward approach to interpolation/extrapolation in visibility-based imaging is probably the one implemented in the uv$\_$smooth method \citep{009HAMassoneRDXI}, which is inspired by standard gridding approaches utilized in radio-astronomy. In particular, uv$\_$smooth starts from the observation that the coverage of the $(u,{\mbox{v}})$-plane offered by hard X-ray instruments is much sparser than that typical of radio astronomy and therefore utilizes spline interpolation at spatial frequencies smaller than the largest sampled frequencies and soft-thresholding on the image to reduce the ringing effects due to a naive and unconstrained Fourier transform inversion procedure \citep{daubechies2004iterative,Massone1}. This approach can exploit Fast Fourier Transform (FFT) in the inversion process and is characterized by a satisfactory reliability when reconstructing extended sources \citep{guo2013specific,guo2012determination,guo2012properties,caspi2015hard}; however, several applications \citep{dennis2019remarkably,bonettini2014accelerated} showed that uv$\_$smooth does not work properly when it is applied to visibility sets characterized by significant oscillations in the $(u,{\mbox{v}})$-plane. This misbehavior is essentially due to the fact that the interpolation algorithm utilized in uv$\_$smooth is not optimal and often misses the oscillating frequency information related to very narrow or well-separated sources (or, in the case of {\em{RHESSI}}, associated to the use of detectors with fine grids in the observation process).
The present paper proposes an enhanced release of uv$\_$smooth, based on the use of an advanced approach to interpolation in the frequency domain. Specifically, this approach relies on the use of Variably Scaled Kernels (VSKs), which are able to include {\em{a priori}} information in the interpolation process \citep{Bozzini1,vskmpi}. This additional knowledge is implicitly put into the kernel via a {\em{scaling function}} that determines the accuracy of the approximation process and that is linked to a first coarse reconstruction of the sought image. As far as the practical implementation of the VSK setting is concerned, in this study we considered the Mat\'ern $C^0$ kernel, which takes advantage of a low regularity degree and of a better numerical stability \citep{Matern}.
The plan of the paper is as follows. Section 2 illustrates the interpolation process based on VSKs. Section 3 describes the overall image reconstruction approach relying on the use of interpolation in the $(u,{\mbox{v}})$-plane and of the soft-thresholding technique applied for image reconstruction. Section 4 contains some validation tests performed against both synthetic {\em{STIX}} visibilities and experimental {\em{RHESSI}} observations. Our conclusions are offered in Section 5.
\begin{figure}
\centering
\includegraphics[scale=0.2]{./FigPaper/visrhessi} \hskip 1.2cm
\includegraphics[scale=0.2]{./FigPaper/visstix}
\caption{The sampling of the $(u,v)$ plane provided by {\em{RHESSI}} (left panel) and {\em{STIX}} (right panel).}
\label{figure:fig-1}
\end{figure}
\section{Interpolation in the Fourier domain}
Visibility-based hard X-ray telescopes provide experimental measurements of the Fourier transform of the incoming photon flux at specific points of the spatial frequency plane. We denote by ${\bf{f}}$ the vector whose components are the discretized values of the incoming flux, by ${\bf{F}}$ the discretized Fourier transform, by $ \{ {\bf u}_i=(u_i,v_i) \}_{i=1}^{n}$ the set of sampled points in the $(u,{\mbox{v}})$-plane, by ${\bf{V}}$ the vector whose $n$ components are the observed visibilities and by $\chi$ the binary mask returning $1$ at the frequencies $\{{\bf{u}}_i\}_{i=1}^{n}$ and zero elsewhere. Then, the image formation model in this framework can be approximated by
\begin{equation}\label{b1}
{\bf{V}} = \chi \cdot {\bf{F}} {\bf{f}}~,
\end{equation}
where the symbol $\cdot$ denotes the entry-wise product. The uv$\_$smooth code incorporated in the SSW tree and validated in the case of {\em{RHESSI}} visibilities addresses equation (\ref{b1}) by means of an interpolation/extrapolation procedure in which the interpolation step is carried out via an algorithm based on spline functions and the extrapolation step is realized by means of a soft-thresholding scheme \citep{daubechies2004iterative,Massone1}. In the present paper we want to generalize the interpolation step of uv$\_$smooth by means of a more sophisticated numerical technique, in order to improve the performance of uv$\_$smooth, particularly when the visibility oscillations are significant.
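As an illustration, the forward model (\ref{b1}) amounts to sampling a 2-D FFT at the masked frequencies. The following is a minimal Python sketch; the image size, the random mask, and all names are illustrative stand-ins, not the SSW/uv$\_$smooth implementation:

```python
import numpy as np

def forward_model(f, mask):
    """Visibilities V = chi . (F f): FFT the flux image, keep the masked entries."""
    F_f = np.fft.fft2(f)     # discretized Fourier transform F f
    return F_f[mask]         # the binary mask chi selects the n sampled frequencies

rng = np.random.default_rng(0)
f = rng.random((64, 64))                         # toy photon-flux image
mask = np.zeros((64, 64), dtype=bool)
mask[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = True  # sparse, STIX-like coverage

V = forward_model(f, mask)   # one complex visibility per sampled frequency
```

In this notation the reconstruction task is to recover ${\bf{f}}$ from the few entries of ${\bf{V}}$, which is why interpolation/extrapolation of the missing frequencies is needed.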
In general, any interpolation approach seeks a function, namely $P$, that matches the given measurements at their corresponding locations. Thus an interpolant of the visibilities is constructed in such a way that
\begin{equation}\label{b1-1}
P(\boldsymbol{u}_i)=\boldsymbol{V}_i, \quad i=1,\ldots,n.
\end{equation}
Typically, any interpolating function is of the form
\begin{equation}\label{b2}
P({\bf{u}}) = \sum_{k=1}^{n} a_k b_k({\bf{u}})~,
\end{equation}
where $\{b_1({\bf{u}}),\ldots,b_n({\bf{u}})\}$ is a set of appropriate basis functions and ${\bf{u}}$ is a vector in the interpolation domain. A possible choice for these basis functions is represented by the so-called Radial Basis Functions (RBFs), see e.g. \citep{Fasshauer}, which have the property that
\begin{equation}\label{b3}
b_k({\bf{u}}) = \phi(\|{\bf{u}}-{\bf{u}}_k\|),~~~~k=1,\ldots,n~,
\end{equation}
where $\phi$ is a specific RBF.
In order to incorporate possible prior information into the interpolation process, Variably Scaled Kernels (VSKs) provide a specific implementation of RBFs in which
\begin{equation}\label{b4}
b_k({\bf{u}}) = \phi(\|({\bf{u}},\psi({\bf{u}}))- ({\bf{u}}_k,\psi({\bf{u}}_k))\|),
~~~k=1,\ldots,n~,
\end{equation}
and where $\psi$ is the so-called scaling function encoding such prior information on the emitting source ${\bf{f}}$. Therefore, once the functions $\phi$ and $\psi$ are chosen, by imposing the interpolation conditions (\ref{b1-1}) the interpolation problem is reduced to the solution of the linear system
\begin{equation}\label{b5}
K{\bf{a}} = {\bf{V}}~,
\end{equation}
where ${\bf{a}}=(a_1,\ldots,a_n)^T$, ${\bf{V}}=(V_1,\ldots,V_n)^T$ and $K_{ij}=\phi(\|({\bf{u}}_i,\psi({\bf{u}}_i))- ({\bf{u}}_j,\psi({\bf{u}}_j))\|)$, $i,j=1,\ldots,n$. Once system (\ref{b5}) is solved, the computed vector ${\bf{a}}$ is used to evaluate the interpolating function $P({\bf{u}})$ on the $N$ points $\{\bar{{\bf{u}}}_1,\ldots,\bar{{\bf{u}}}_N\}$ of a regular mesh of the $(u,{\mbox{v}})$-plane, with $N \gg n$. This provides the visibility surface ${\bf{\overline{V}}}$ such that
\begin{equation}\label{bb5}
{\bf{\overline{V}}}_k = P(\bar{{\bf u}}_k) = \sum_{i=1}^n a_i \phi(\|({\bf{{\overline{u}}}}_k,\psi({\bf{{\overline{u}}}}_k))- ({\bf{u}}_i,\psi({\bf{u}}_i))\|),
~~~k=1,\ldots,N~.
\end{equation}
Equation (\ref{bb5}) implies that, after interpolation, the reconstruction problem for visibility-based imaging becomes
\begin{equation}\label{bbb5}
{\bf{\overline{V}}} = {\bf{\overline{F}}} {\bf{\overline{f}}}~,
\end{equation}
where ${\bf{\overline{F}}}$ is the $N \times N$ discretized Fourier transform and ${\bf{\overline{f}}}$ is the $N \times 1$ vector to reconstruct.
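To make the construction concrete, the following Python sketch implements equations (\ref{b4})--(\ref{bb5}) with the Mat\'ern $C^0$ kernel $\phi(r)={\rm e}^{-r}$; the scaling function $\psi$ and the test visibilities are synthetic stand-ins, not the back-projected or CLEAN-component maps used in the next section:

```python
import numpy as np

# Matern C^0 kernel phi(r) = exp(-r) and a toy scaling function psi.
phi = lambda r: np.exp(-r)
psi = lambda u: np.exp(-np.sum(u**2, axis=-1))

def vsk_matrix(U, W):
    """Kernel matrix between point sets U (m,2) and W (n,2),
    with each point augmented to (u, psi(u)) before taking distances."""
    Ua = np.column_stack([U, psi(U)])   # map to R^3: (u, psi(u))
    Wa = np.column_stack([W, psi(W)])
    r = np.linalg.norm(Ua[:, None, :] - Wa[None, :, :], axis=-1)
    return phi(r)

rng = np.random.default_rng(1)
U = rng.uniform(-1, 1, (30, 2))                  # sampled (u,v) points
V = np.sin(3 * U[:, 0]) * np.cos(3 * U[:, 1])    # toy visibility values

a = np.linalg.solve(vsk_matrix(U, U), V)         # solve K a = V, Eq. (6)

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 32),
                            np.linspace(-1, 1, 32)), -1).reshape(-1, 2)
V_bar = vsk_matrix(grid, U) @ a                  # evaluate P on the regular mesh, Eq. (7)
```

Since ${\bf{a}}=K^{-1}{\bf{V}}$, the interpolant reproduces the data at the nodes exactly; the prior enters only through the way $\psi$ deforms the distances between the augmented points.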
Two comments are relevant to conclude this section. First, from a technical viewpoint, the choice of $\phi$ and $\psi$ should guarantee the numerical stability of system (\ref{b5}). Moreover, at a more general level, VSK approaches map the original measured data into a higher-dimensional space and can therefore be considered as a feature augmentation strategy. It follows that the definition of the scaling function plays a crucial role in the final outcome of this approach, and the idea is to select it so that it mimics the samples, as shown in \citep{vskmpi,vskjump,romani}.
\section{Image reconstruction}
The implementation of an image reconstruction process relying on the interpolation procedure described in the previous section needs the definition of a pipeline made of the following steps:
\begin{enumerate}
\item Construction of the matrix $K$. This step requires the choice of the function $\phi$ generating the RBFs and of the scaling function $\psi$, which entails accounting for some prior information on the source image. As far as $\phi$ is concerned, we have chosen the Mat\'ern $C^0$ function
\begin{equation}\label{b3-1}
\phi(\|{\bf{u}}-{\bf{u}}_k\|) = {\rm e}^{-\|{\bf{u}}-{\bf{u}}_k\|}~.
\end{equation}
As for $\psi$, in this study we have implemented two possible choices, based on coarse estimates of the X-ray source to reconstruct:
\begin{itemize}
\item We have applied the inverse Discrete Fourier Transform to the visibility set and used the Fourier projection of the corresponding back-projected map as the scaling function.
\item We have applied CLEAN to the visibility set and used the Fourier projection of the map of the CLEAN components as the scaling function.
\end{itemize}
\item Solution of equation (\ref{b5}). This is a square and rather well-conditioned linear system, and therefore standard numerical schemes for computing $K^{-1}$ work properly in the case of input data characterized by large signal-to-noise ratios. When the data statistics are low, the system is solved by means of the equally standard Tikhonov method \citep{2003A&A...405..325M}.
\item Reconstruction of the image ${\bf{f}}$. To this aim we have implemented a soft-thresholding approach based on the projected Landweber iterative scheme \citep{1996JOSAA..13.1516P,piana1997projected}
\begin{equation}\label{c1}
{\bf{{\overline{f}}}}^{(k+1)} = {\cal{P}}_+[{\bf{\overline{f}}}^{(k)} + {\bf{\overline{F}}}^T({\bf{\overline{V}}} -
{\bf{\overline{F}}}{\bf{\overline{f}}}^{(k)})]~,
\end{equation}
where ${\cal{P}}_+$ pixel-wise imposes a positivity constraint.
In the present implementation we have assumed the initialization ${\bf{\overline{f}}}^{(0)}=0$ and a stopping rule that relies on a check on the $\chi^2$ values \citep{Massone1}.
\end{enumerate}
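A minimal Python sketch of the projected Landweber step (\ref{c1}) follows, with a unitary FFT standing in for ${\bf{\overline{F}}}$, a pixel-wise positivity projection for ${\cal{P}}_+$, and a fixed iteration count in place of the $\chi^2$ stopping rule; the source and all sizes are toy assumptions:

```python
import numpy as np

def F(x):  return np.fft.fft2(x, norm="ortho")        # discretized Fourier transform
def Ft(y): return np.fft.ifft2(y, norm="ortho").real  # its adjoint, restricted to real images

rng = np.random.default_rng(2)
truth = np.clip(rng.normal(size=(32, 32)), 0, None)   # nonnegative toy source
V_bar = F(truth)                                      # interpolated visibility surface

f = np.zeros((32, 32))                                # initialization f^(0) = 0
for _ in range(50):
    # f^(k+1) = P_+[ f^(k) + F^T (V_bar - F f^(k)) ], positivity via clipping
    f = np.clip(f + Ft(V_bar - F(f)), 0, None)
```

With noiseless, fully sampled data, as here, the iteration recovers the source; in the actual pipeline ${\bf{\overline{V}}}$ is the interpolated surface and the $\chi^2$ check decides when to stop.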
The main advantages of this scheme are essentially two. First, the positivity constraint induces super-resolution effects, since it allows extrapolating the frequency information outside the support of the interpolated visibility surface \citep{1996JOSAA..13.1516P}. Second, the implementation of the iterative scheme is made computationally effective by the use of an FFT routine performing the required forward and backward Fourier transformation. We also point out that a well-established weakness of CLEAN is the fact that the determination of the reconstructed CLEAN map from the map of the CLEAN components is typically realized by means of convolution with an idealized point spread function whose FWHM is chosen by means of totally heuristic considerations. Choosing $\psi$ as the map of the CLEAN components is a way to exploit it in a completely objective way, within the framework of an automatic image reconstruction method.
\section{Applications to the reconstruction of flaring sources}
In this section we discuss the effectiveness of this enhanced release of uv$\_$smooth for visibility-based image reconstruction by considering tests on both synthetic simulations obtained by means of the {\em{STIX}} simulation software and experimental {\em{RHESSI}} observations.
\subsection{STIX simulated visibilities}
We simulated four {\em{STIX}} configurations with an overall incident flux of $10^4$ photons cm$^{-2}$ s$^{-1}$ (see Figure \ref{figmap}, first column). The first two configurations (Configuration 1 and Configuration 2) consisted of two foot-points with centers located at two different positions along the main diagonal. The third and fourth configurations (Configuration 3 and Configuration 4) mimic two flaring loops, one at the center of the field-of-view and the other one off-center (refer to Tables \ref{tab1}--\ref{tab4} for details on the parameters of the four considered configurations).
Using the {\em{STIX}} simulation software we generated 25 realizations of synthetic {\em{STIX}} visibilities for each configuration. Then, Figure \ref{figmap} shows the results provided by the original version of uv$\_$smooth and by the two enhanced versions of the algorithm when the scaling functions are based upon the back-projected map (uv$\_$smooth$\_$BP) and the map of the CLEAN components (uv$\_$smooth$\_$CC). In Tables \ref{tab1}--\ref{tab4} the corresponding values of the reconstructed parameters are compared with the ones of the ground-truths, where for each parameter we have given the average value with respect to the 25 realizations and the corresponding standard deviation.
The CPU times employed to obtain the reconstructions are shown in Table \ref{cpu}. Tests have been carried out on an Intel(R) Core(TM) i7 CPU 4712MQ 2.13 GHz processor.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/2pointL_1_uv_vsk_mapC}\vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/2pointH_1_uv_vsk_mapC} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/loop_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/loop_1_uv_vsk_mapC} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/loop1_1_realmap}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_map}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_vsk_map}
\includegraphics[scale=0.11]{./FigPaper/loop1_1_uv_vsk_mapC}
\caption{Reconstruction of four synthetic flaring configurations using simulated {\em{STIX}} visibilities. First column: ground-truth configurations. Second column: reconstructions provided by uv$\_$smooth. Third column: reconstructions obtained by using VSK-based interpolation when $\psi$ is the back projection (uv$\_$smooth$\_$BP). Fourth column: reconstructions obtained by using VSK-based interpolation when $\psi$ is the map of the CLEAN components (uv$\_$smooth$\_$CC). The ground-truth and reconstruction parameter values are in Tables 1--4.}
\label{figmap}
\end{figure}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 1. The foot-point centers are denoted as $(x_p,y_p)$ while the flux is measured as photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lcccc}
\hline
\hline
& \multicolumn{4}{c}{First Peak} \\
\hline
&\hskip 0.1cm $x_p$ & \hskip 0.1cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & -8.0 & -8.0 & 11.0 & 6.58 \\
uv$\_$smooth & -6.0 $\pm$ 0.6 & -5.0 $\pm$ 0.4 & 11.2 $\pm$ 0.3 & 5.08 $\pm$ 0.13\\
uv$\_$smooth$\_$BP & -6.3 $\pm$ 0.4 & -6.2 $\pm$ 0.4 & 11.5 $\pm$ 0.4 & 5.53 $\pm$ 0.13 \\
\smallskip
uv$\_$smooth$\_$CC & -6.4 $\pm$ 0.5 & -6.0 $\pm$ 0.5 & 11.6 $\pm$ 0.4 & 5.52 $\pm$ 0.18 \\
\hline
& \multicolumn{4}{c}{Second Peak}\\
\hline
&\hskip 0.1cm $x_p$ & \hskip 0.1cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & 8.0 & 8.0 & 11.0 & 3.21 \\
uv$\_$smooth & 8.0 $\pm$ 0.5 & 6.4 $\pm$ 0.5 & 10.7 $\pm$ 0.5 & 2.42 $\pm$ 0.12\\
uv$\_$smooth$\_$BP & 8.1 $\pm$ 0.4 & 8.3 $\pm$ 0.6 & 11.5 $\pm$ 0.5 & 2.70 $\pm$ 0.13 \\
\smallskip
uv$\_$smooth$\_$CC & 7.9 $\pm$ 0.6 & 6.9 $\pm$ 0.3 & 12.3 $\pm$ 0.8 & 2.77 $\pm$ 0.14 \\
\end{tabular}
\begin{tabular}{lc}
\hline
& Total Flux ($\times 10^3$) \\
\hline
Simulated & 10.00 \\
uv$\_$smooth & 9.27 $\pm$ 0.18\\
uv$\_$smooth$\_$BP & 9.86 $\pm$ 0.23\\
\smallskip
uv$\_$smooth$\_$CC & 10.10 $\pm$ 0.19\\
\hline
\hline
\end{tabular}
\label{tab1}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 2. The foot-point centers are denoted as $(x_p,y_p)$ while the flux is measured as photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lcccc}
\hline
\hline
& \multicolumn{4}{c}{First Peak} \\
\hline
&\hskip 0.2cm $x_p$ & \hskip 0.2cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & -24.0 & -24.0 & 11.0 & 6.51 \\
uv$\_$smooth & -7.5 $\pm$ 0.5 & -10.6 $\pm$ 0.4 & 12.3 $\pm$ 0.1 & 8.23 $\pm$ 0.19 \\
uv$\_$smooth$\_$BP & -21.9 $\pm$ 0.2 & -21.7 $\pm$ 0.4 & 10.8 $\pm$ 0.2 & 5.26 $\pm$ 0.11 \\
\smallskip
uv$\_$smooth$\_$CC & -21.8 $\pm$ 0.3 & -21.7 $\pm$ 0.4 & 11.4 $\pm$ 0.4 & 5.27 $\pm$ 0.12 \\
\hline
& \multicolumn{4}{c}{Second Peak}\\
\hline
&\hskip 0.2cm $x_p$ & \hskip 0.2cm $y_p$& FWHM & FLUX ($\times 10^3$) \\
Simulated & 24.0 & 24.0 & 11.0 & 3.25 \\
uv$\_$smooth & 8.5 $\pm$ 0.5 & -9.4 $\pm$ 0.8 & 12.9 $\pm$ 0.1 & 8.94 $\pm$ 0.26 \\
uv$\_$smooth$\_$BP& 24.0 $\pm$ 0.2 & 24.4 $\pm$ 0.5 & 10.9 $\pm$ 0.2 & 2.50 $\pm$ 0.12 \\
\smallskip
uv$\_$smooth$\_$CC & 23.3 $\pm$ 0.3 & 24.6 $\pm$ 0.8 & 10.9 $\pm$ 0.7 & 2.35 $\pm$ 0.14 \\
\end{tabular}
\begin{tabular}{lc}
\hline
& Total Flux ($\times 10^3$) \\
\hline
Simulated & 10.00 \\
uv$\_$smooth & 9.88 $\pm$ 0.34 \\
uv$\_$smooth$\_$BP & 10.55 $\pm$ 0.30 \\
\smallskip
uv$\_$smooth$\_$CC & 12.37 $\pm$ 0.28 \\
\hline
\hline
\end{tabular}
\label{tab2}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 3. The position of the pixel with maximum intensity is denoted as $(x_p,y_p)$. The flux units are photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lccc}
\hline
\hline
&$x_p$ & $y_p$ & Total Flux ($\times 10^3$) \\
\hline
Simulated & 0.0 & 0.0 & 10.00 \\
uv$\_$smooth & 0.7 $\pm$ 0.6 & 1.3 $\pm$ 0.5 & 9.71 $\pm$ 0.27 \\
uv$\_$smooth$\_$BP & -0.5 $\pm$ 0.6 & 1.4 $\pm$ 0.6 & 10.55 $\pm$ 0.02\\
\smallskip
uv$\_$smooth$\_$CC & -0.6 $\pm$ 0.5 & 0.6 $\pm$ 0.4 & 10.55 $\pm$ 0.02\\
\hline
\hline
\end{tabular}
\label{tab3}
\end{table}
\begin{table}[ht]
\caption{Results for the reconstruction of Configuration 4. The position of the pixel with maximum intensity is denoted as $(x_p,y_p)$. The flux units are photon cm$^{-2}$ s$^{-1}$.}
\begin{tabular}{lccc}
\hline
\hline
&$x_p$ & $y_p$& Total Flux ($\times 10^3$) \\
\hline
Simulated & 18.0 & 18.0 & 10.00 \\
uv$\_$smooth & 15.4 $\pm$ 0.8 & 11.4 $\pm$ 0.7 & 10.59 $\pm$ 0.01 \\
uv$\_$smooth$\_$BP & 16.6 $\pm$ 0.7 & 15.8 $\pm$ 0.8 & 10.59 $\pm$ 0.02\\
\smallskip
uv$\_$smooth$\_$CC & 17.0 $\pm$ 0.7 & 16.8 $\pm$ 0.8 & 10.59 $\pm$ 0.01\\
\hline
\hline
\end{tabular}
\label{tab4}
\end{table}
\begin{table}[ht]
\caption{CPU burden (in seconds) employed by the three reconstruction algorithms, averaged over the data corresponding to the four configurations.}
\begin{tabular}{lc}
\hline
\hline
& CPU times \\
\hline
uv$\_$smooth & 0.18 \\
uv$\_$smooth$\_$BP & 4.24 \\
\smallskip
uv$\_$smooth$\_$CC & 6.78 \\
\hline
\hline
\end{tabular}
\label{cpu}
\end{table}
\subsection{RHESSI observations}
On Saturday, May 3, 2014 the GOES $1-8$ \AA{} passband instrument recorded nine C-class flares originating from three different active regions. In particular, in the time interval between 15:54:00 UT and 16:13:40 UT {\em{RHESSI}} observed a C$1.7$ event whose flaring shape in the $3-6$ keV energy channel evolved from a double foot-point to a narrow ribbon-like configuration. We have tested the effectiveness of this enhanced approach to interpolation in the $(u,{\mbox{v}})$-plane by considering five time intervals in that range, each one of $1$ minute duration. First, we focused on the visibility bag recorded at 16:07:04 UT by the combinations of {\em{RHESSI}} detectors $3$ through $9$, $2$ through $9$ and $1$ through $9$, respectively. Figures \ref{fig_rhessi_maydet} and \ref{fig_rhessi_vis_dets} respectively compare the reconstructions and the corresponding visibility fits provided by uv$\_$smooth, uv$\_$smooth$\_$BP and uv$\_$smooth$\_$CC with the reconstructions and the fits given by CLEAN when the map of the CLEAN components is convolved {\em{a posteriori}} with an idealized PSF with CLEAN beamwidth factor equal to $2$ (as done for the generation of the {\em{RHESSI}} image archive) and pixel dimension equal to $1$ arcsec in the 3 through 9 detector configuration and equal to $0.5$ arcsec for the other two combinations of detectors. The $\chi^2$ values of the four reconstruction methods are reported in Table \ref{tab:my_label_chi}. Then, in Figures \ref{fig_rhessi_may} and \ref{fig_rhessi_may_vis} we fixed the configuration based on detectors $3$ through $9$ and compared the reconstructions provided by the same four imaging methods as in Figure \ref{fig_rhessi_maydet}, together with the corresponding fits of the experimental measurements, in the case of five time intervals between 16:08:04 and 16:12:04 UT. The $\chi^2$ values predicted by the four reconstruction methods with respect to the observations are contained in Table \ref{tab:my_label}.
\section{Comments and conclusions}
Enhancing visibility interpolation is particularly crucial in the case of the {\em{STIX}} image reconstruction problem, where observations are linked to a set of $30$ visibilities and, correspondingly, the sparsity of the sampling in the $(u,{\mbox{v}})$-plane is pronounced. As a confirmation of this, comparison with the four ground-truth configurations considered in the simulations of Figure \ref{figmap} shows that the use of VSKs provides more accurate estimates of the imaging parameters; this is particularly true in the case of Configurations 2 and 4, which produce stronger oscillations in the visibility domain and for which the need for powerful interpolation is more pressing. The computational times reported in Table \ref{cpu} show that VSK interpolation increases the burden but keeps the reconstruction times competitive with those of most hard X-ray imaging methods.
In the case of {\em{RHESSI}} observations, the use of finer grids increases the spatial resolution but, at the same time, introduces high-resolution artifacts. However, also in this case we can notice an improvement brought by the use of VSKs with respect to standard uv$\_$smooth, i.e., the progressive fragmentation of the reconstructed sources is less significant, particularly when detectors $2$ through $9$ are used. In most cases, uv$\_$smooth$\_$BP and uv$\_$smooth$\_$CC guarantee a good trade-off between reconstruction accuracy and data fitting: imaging artifacts are less numerous and less pronounced than for standard uv$\_$smooth, while the $\chi^2$ values are either comparable to or smaller than those corresponding to the CLEAN reconstructions. Further, comparison between uv$\_$smooth$\_$CC and CLEAN shows that the former method can be interpreted as a user-independent way to exploit the CLEAN component map. Therefore, uv$\_$smooth$\_$CC completes the overall CLEAN process, keeping the highly reliable step providing the CLEAN components and replacing the more heuristic one, represented by the convolution with an idealized PSF, with a totally automatic process based on feature augmentation.
\begin{acknowledgements}
The authors acknowledge the financial contribution from the agreement ASI-INAF n.2018-16-HH.0. This research has been accomplished within the Rete ITaliana di Approssimazione (RITA). This is the first paper that AMM and MP submit after Richard Schwartz passed away, on Saturday December 12 2020. In these difficult times for the whole of humanity, Richard's death has represented a further reason for sadness and grief for the {\em{RHESSI}} and {\em{STIX}} communities. AMM and MP acknowledge that Richard's intellectual guidance is and will always remain an unforgettable milestone for their current and future scientific activity.
\end{acknowledgements}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9vskb}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det3_9vskC}
\includegraphics[scale=0.11]{./FigPaper/figd1} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9vskb_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det2_9vskC_new}
\includegraphics[scale=0.11]{./FigPaper/figd2} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9vskb_new}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_det1_9vskC_new}
\includegraphics[scale=0.11]{./FigPaper/figd3}
\caption{Reconstruction of the flare observed by RHESSI on May 3, 2014 at 16:07:04 UT. From left to right, the columns contain the reconstructions via uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom the three rows indicate the reconstructions obtained using {\em{RHESSI}} detectors 3 through 9, 2 through 9 and 1 through 9, respectively.
}
\label{fig_rhessi_maydet}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS001_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd1} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS011_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd2} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_vskB_c}
\includegraphics[scale=0.128]{./FigPaper/3May_rhessi_VIS111_vskC_c}
\includegraphics[scale=0.128]{./FigPaper/vfigd3}
\caption{Comparison between predicted and measured visibilities for the flare observed by RHESSI on May 3 2014 at 16:07:04 UT. From left to right, the columns contain the fits corresponding to uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows correspond to using detector configurations from 3 through 9, from 2 through 9, and from 1 through 9, respectively.
}
\label{fig_rhessi_vis_dets}
\end{figure}
\begin{table}[ht]
\begin{tabular}{ccccc}
\hline
\hline
detectors & uv$\_$smooth & uv$\_$smooth$\_$BP & uv$\_$smooth$\_$CC & CLEAN \\
\hline
3--9 & 1.05 & 1.02 & 0.98 & 7.17 \\
2--9 & 1.20 & 0.96 & 0.93 & 4.57 \\
1--9 & 1.07 & 1.08 & 1.19 & 3.95 \\
\hline
\hline
\end{tabular}
\caption{$\chi^2$ values predicted by the four reconstruction methods applied to the {\em{RHESSI}} visibilities observed on May 3 2014 at 16:07:04 UT. The values are computed with respect to the visibilities measured by detectors 3 through 9, 2 through 9, and 1 through 9, respectively.}
\label{tab:my_label_chi}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_1vskC}
\includegraphics[scale=0.11]{./FigPaper/fig1} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_2vskC}
\includegraphics[scale=0.11]{./FigPaper/fig2} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_3vskC}
\includegraphics[scale=0.11]{./FigPaper/fig3} \vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_4vskC}
\includegraphics[scale=0.11]{./FigPaper/fig4}
\vskip 0.1cm
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5vskB}
\includegraphics[scale=0.11]{./FigPaper/3Mayrhessi_001_5vskC}
\includegraphics[scale=0.11]{./FigPaper/fig5} \caption{Reconstruction of the flare observed by RHESSI on May 3 2014. From left to right, the columns contain the reconstructions obtained by uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows denote the evolution of the flare shape in five time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min).
}
\label{fig_rhessi_may}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_1vskC}
\includegraphics[scale=0.128]{./FigPaper/fig1a}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_2vskC}
\includegraphics[scale=0.128]{./FigPaper/fig2a} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_3vskC}
\includegraphics[scale=0.128]{./FigPaper/fig3a} \vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_4vskC}
\includegraphics[scale=0.128]{./FigPaper/fig4a}
\vskip 0.1cm
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5vskB}
\includegraphics[scale=0.128]{./FigPaper/3Mayrhessi_VIS001_5vskC}
\includegraphics[scale=0.128]{./FigPaper/fig5a}
\caption{Comparison between predicted and measured visibilities for the flare observed by RHESSI on May 3 2014. From left to right, the columns contain the fits corresponding to uv$\_$smooth, uv$\_$smooth$\_$BP, uv$\_$smooth$\_$CC and CLEAN. From top to bottom, the rows correspond to the evolution of the flare shape in five time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min).
}
\label{fig_rhessi_may_vis}
\end{figure}
\begin{table}[ht]
\centering
\begin{tabular}{ccccc}
\hline
\hline
& uv$\_$smooth & uv$\_$smooth$\_$BP & uv$\_$smooth$\_$CC & CLEAN \\
\hline
$t_1$ & 1.13 & 1.10 & 1.07 & 3.70 \\
$t_2$ & 1.80 & 1.73 & 1.75 & 1.71 \\
$t_3$ & 2.25 & 2.18 & 1.96 & 1.42 \\
$t_4$ & 2.75 & 2.14 & 2.06 & 1.90 \\
$t_5$ & 3.01 & 2.97 & 2.70 & 1.80\\
\hline
\hline
\end{tabular}
\caption{$\chi^2$ values associated with the four reconstruction methods applied to the {\em{RHESSI}} visibilities observed on May 3 2014, in the five time intervals from 16:08:04 through 16:12:04 UT (integration time: $1$ min). The values are computed with respect to the visibilities measured by detectors 3 through 9.}
\label{tab:my_label}
\end{table}
\bibliographystyle{aa}
\section{Introduction}
Let us consider the bounded derived category of coherent sheaves $D^b(S)$ for the weighted projective surface
$S = \mathbf{P}(1,1,2)$.
It is generated by coherent sheaves $\mathcal{O}_S(-2), \mathcal{O}_S(-1), \mathcal{O}_S$, where
$\mathcal{O}_S(-1)$ is a reflexive sheaf of rank $1$ which is not invertible.
But there is a locally free sheaf $F_S$ of rank $2$ on $S$ sitting in an exact sequence
\[
0 \to \mathcal{O}_S(-1) \to F_S \to \mathcal{O}_S(-1) \to 0
\]
such that $\mathcal{O}_S(-2),F_S,\mathcal{O}_S$ generate $D^b(S)$, and such that
$\text{End}(F_S) \cong k[t]/(t^2)$ and $\text{Ext}^i(F_S,F_S) \cong 0$ for $i > 0$.
In this way we obtain a semi-orthogonal decomposition
\[
D^b(S) = \langle \mathcal{O}_S(-2),F_S,\mathcal{O}_S \rangle \cong \langle D^b(k), D^b(k[t]/(t^2)), D^b(k) \rangle.
\]
This kind of phenomenon is greatly generalized in \cite{KKS} to surfaces with cyclic quotient singularities,
and to $3$-dimensional varieties in \cite{KPS}.
But the generalizations in dimension $3$ are mostly concerned with varieties having hypersurface singularities,
especially ordinary double points.
In this article we consider higher dimensional varieties with toric singularities, taking a weighted projective space as an example.
The purpose of this short article is to provide an example showing that a similar construction of a semi-orthogonal decomposition
of the bounded derived category is possible for a higher dimensional variety with a cyclic quotient singularity.
This work was inspired by talks of Professors Martin Kalck and Evgeny Shinder at the Zoom conference
\lq\lq Categories and Birational Geometry'' held in December 2020
(http://www.mathnet.ru/php/conference.\break phtml?confid=1835\&option\_lang=eng).
The author would like to thank them for their comments on the first version of this article.
\begin{Thm}
Let $X = \mathbf{P}(1,1,1,3)$ be a weighted projective space of dimension $3$.
Then there exist locally free sheaves $F$ and $G$ of ranks $3$ and $6$, respectively,
which satisfy the following conditions:
(1) $\text{Ext}^i(F \oplus G,F \oplus G) \cong 0$ for $i > 0$.
(2) There is a semi-orthogonal decomposition
\[
D^b(X) = \langle F \oplus G,\mathcal{O}_X, \mathcal{O}_X(3) \rangle \cong \langle D^b(R_X), D^b(k), D^b(k) \rangle
\]
where $R_X = \text{End}(F \oplus G)$ with $\dim R_X = 45$.
\end{Thm}
\section{Preliminaries}
We use the following notation.
$D^b(X)$ (resp. $D^-(X)$) denotes the bounded (resp. bounded from above)
derived category of coherent sheaves on a variety $X$.
$D^b(R)$ (resp. $D^-(R)$) denotes the bounded (resp. bounded from above)
derived category of finitely generated right $R$-modules for an associative $k$-algebra $R$.
We write
\[
\text{Hom}^*(A,B) := H^*(\mathbf{R}\text{Hom}(A,B)) = \bigoplus_i \text{Hom}(A,B[i])[-i].
\]
We work over $k = \mathbf{C}$.
\vskip 1pc
We recall some definitions from \cite{Bondal}.
Let $A$ be a $k$-linear triangulated category.
A set of objects $\{a_i\} \subset A$ is said to {\em generate} $A$ when the following holds: for $a \in A$,
if $\text{Hom}(a_i,a[j]) = 0$ for all $i$ and $j$, then $a \cong 0$.
We say that $A$ has a {\em semi-orthogonal decomposition} into triangulated full subcategories $B_1,\dots,B_n$,
and we write
\[
A = \langle B_1,\dots,B_n \rangle
\]
if (1) $\text{Hom}(b_i,b_j) = 0$ for all $b_i \in B_i$ and $b_j \in B_j$ such that $i > j$, and
(2) $A$ is the smallest triangulated subcategory which contains all $B_i$.
When $B_i$ is generated by $b_i$ for each $i$, then we also write
\[
A = \langle b_1,\dots,b_n \rangle.
\]
Let $i_*: B \to A$ be the inclusion of a triangulated full subcategory.
$B$ is said to be {\em left (resp. right) admissible} if $i_*$ has a left adjoint $i^*$ (resp. a right adjoint $i^!$).
The {\em left (resp. right) semi-orthogonal complement} ${}^{\perp}B$ (resp. $B^{\perp}$) is
the full subcategory of $A$ defined by
\[
\begin{split}
&{}^{\perp}B = \{a \in A \mid \text{Hom}(a,b) = 0\,\,\, \forall b \in B\}, \\
&B^{\perp} = \{a \in A \mid \text{Hom}(b,a) = 0\,\,\, \forall b \in B\}.
\end{split}
\]
If $B$ is a left (resp. right) admissible subcategory, then
we have a semi-orthogonal decomposition
\[
\begin{split}
A = \langle B, {}^{\perp}B \rangle \quad (\text{resp. }= \langle B^{\perp}, B \rangle).
\end{split}
\]
Indeed we have the following distinguished triangles for any $a \in A$:
\[
\begin{split}
&c \to a \to i_*i^*a \to c[1], \\
&d[-1] \to i_*i^!a \to a \to d,
\end{split}
\]
such that we have $i_*i^*a \in B$, $c \in {}^{\perp}B$, $i_*i^!a \in B$ and $d \in B^{\perp}$.
\vskip 1pc
We recall a version of the tilting theory in \cite{TU}.
Let $X$ be a projective variety and let $P$ be a {\em perfect complex}, i.e., an object in $D^b(X)$ which is locally isomorphic to a
bounded complex of locally free sheaves.
$P$ is said to be {\em tilting} if $\text{Hom}(P,P[i]) = 0$ for $i \ne 0$.
The endomorphism algebra $R = \text{End}(P)$ is an associative algebra.
\begin{Lem}[\cite{TU}~Lemma 3.3]
Let $P \in D^b(X)$ be a tilting object, and let $R = \text{End}(P)$.
Let $\Phi: D^-(X) \to D^-(R)$ and $\Psi: D^-(R) \to D^-(X)$ be functors defined by
$\Phi(\bullet) = \mathbf{R}\text{Hom}(P,\bullet)$
and $\Psi(\bullet) = \bullet \otimes^{\mathbf{L}}_R P$.
Let $A^-$ be the essential image of the functor $\Psi$, and let $A^b = A^- \cap D^b(X)$.
Then $\Psi$ is a left adjoint functor of $\Phi$, and
$\Phi$ induces an equivalence of triangulated categories $A^- \cong D^-(R)$.
Moreover, $\Phi(D^b(X)) = D^b(R)$, and $\Phi$ induces an equivalence $A^b \cong D^b(R)$.
\end{Lem}
\begin{proof}
\cite{TU}~Lemma 3.3 treats the case that $A^- = D^-(X)$.
But the same proof works in our situation.
\end{proof}
\begin{Cor}
Let $P_1, \dots, P_n \in D^b(X)$ be tilting objects, and let $R_i = \text{End}(P_i)$.
Assume that $\{P_i\}$ generates $D^-(X)$ and that $\text{Hom}^*(P_i,P_j) = 0$ for $i > j$.
Then there is a semi-orthogonal decomposition
\[
D^b(X) = \langle P_1,\dots,P_n \rangle \cong \langle D^b(R_1), \dots, D^b(R_n) \rangle.
\]
\end{Cor}
\begin{proof}
We have functors $\Phi_i: D^-(X) \to D^-(R_i)$ and
$\Psi_i: D^-(R_i) \to D^-(X)$ as in the lemma.
Let $A^-_i$ be the essential image of the functor $\Psi_i$, and let $A^b_i = A^-_i \cap D^b(X)$.
We have $A^b_i \cong D^b(R_i)$ by $\Phi_i$.
$\Phi_n$ is a right adjoint functor of $\Psi_n$, hence $A^-_n$ is a right admissible subcategory.
Thus we have a semi-orthogonal decomposition
\[
D^-(X) = \langle D^-_n, A^-_n \rangle
\]
with $D^-_n = (A^-_n)^{\perp}$.
$\Phi_{n-1}$ induces a right adjoint functor $D^-_n \to D^-(R_{n-1})$ of the functor
$\Psi_{n-1}: D^-(R_{n-1}) \to A^-_{n-1} \subset D^-_n$.
Hence we have a semi-orthogonal decomposition
\[
D^-_n = \langle D^-_{n-1}, A^-_{n-1} \rangle
\]
with $D^-_{n-1} = (A^-_{n-1})^{\perp}$.
In this way, we obtain a
semi-orthogonal decomposition
\[
D^-(X) = \langle A^-_1, \dots, A^-_n \rangle.
\]
By taking the intersection with $D^b(X)$, we obtain our claim using the last part of the lemma.
\end{proof}
\section{Proof}
Let $X = \mathbf{P}(1,1,1,3)$ be a weighted projective space of dimension $3$.
It is the projective cone over $\mathbf{P}^2$ with respect to $\mathcal{O}_{\mathbf{P}^2}(3)$.
$X$ has a Gorenstein isolated quotient singularity of type $\frac 13(1,1,1)$ at $P = [0:0:0:1]$.
We have $K_X \cong \mathcal{O}_X(-6)$.
Let $Z \cong \mathbf{P}^2$ be the hyperplane section of $X$ at infinity.
We have $Z \in \vert \mathcal{O}_X(3) \vert$ and
$N_{Z/X} = \mathcal{O}_Z(3)$.
Let $F$ and $G$ be locally free sheaves of ranks $3$ and $6$ on $X$ defined by natural exact sequences:
\[
\begin{split}
&0 \to F \to \mathcal{O}_X^3 \to \mathcal{O}_Z(1) \to 0, \\
&0 \to G \to \mathcal{O}_X^6 \to \mathcal{O}_Z(2) \to 0.
\end{split}
\]
\begin{Lem}
$F,G,\mathcal{O}_X,\mathcal{O}_X(3)$ generate $D^-(X)$.
\end{Lem}
\begin{proof}
Let $D$ be the smallest full subcategory of $D^b(X)$ which contains $F,G, \mathcal{O}_X,\mathcal{O}_X(3)$
and is closed under shifts and cone constructions.
Then $\mathcal{O}_Z(1),\mathcal{O}_Z(2),\mathcal{O}_Z(3)$ are contained in $D$.
Hence $D^b(Z) \subset D$.
It follows that $\mathcal{O}_X(3m) \in D$ for all $m$.
For any non-zero object $a \in D^-(X)$, if $i$ is the largest integer such that $H^i(a) \ne 0$, then
there is a non-zero morphism $\mathcal{O}_X(3m)[-i] \to a$ for some integer $m$.
Therefore $D^-(X)$ is generated by $F,G, \mathcal{O}_X,\mathcal{O}_X(3)$.
\end{proof}
\begin{Lem}
(1) $\text{Hom}^*(\mathcal{O}_X,\mathcal{O}_X) \cong k$.
(2) $\text{Hom}^*(\mathcal{O}_X,\mathcal{O}_Z(1)) \cong k^3$.
(3) $\text{Hom}^*(\mathcal{O}_X,\mathcal{O}_Z(2)) \cong k^6$.
(4) $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_X) \cong k^6[-1]$.
(5) $\text{Hom}^*(\mathcal{O}_Z(2),\mathcal{O}_X) \cong k^3[-1]$.
(6) $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \cong k \oplus k^{10}[-1]$.
(7) $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_Z(2)) \cong k^3 \oplus k^{15}[-1]$.
(8) $\text{Hom}^*(\mathcal{O}_Z(2),\mathcal{O}_Z(1)) \cong k^6[-1]$.
\end{Lem}
\begin{proof}
(1), (2), (3) are obvious.
(4) $\text{Hom}^i(\mathcal{O}_Z(1),\mathcal{O}_X) \cong \text{Hom}^{3-i}(\mathcal{O}_X,\mathcal{O}_Z(-5))^*
\cong H^{3-i}(Z,\mathcal{O}_Z(-5))^* \cong H^{2-(3-i)}(Z,\mathcal{O}_Z(2))$.
(5) $\text{Hom}^i(\mathcal{O}_Z(2),\mathcal{O}_X) \cong \text{Hom}^{3-i}(\mathcal{O}_X,\mathcal{O}_Z(-4))^*
\cong H^{3-i}(Z,\mathcal{O}_Z(-4))^* \cong H^{2-(3-i)}(Z,\mathcal{O}_Z(1))$.
(6) $\mathcal{H}om^*(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \cong \mathcal{O}_Z \oplus N_{Z/X}[-1]
\cong \mathcal{O}_Z \oplus \mathcal{O}_Z(3)[-1]$, and
$\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \cong H^0(Z,\mathcal{H}om^*(\mathcal{O}_Z(1),\mathcal{O}_Z(1)))
\cong k \oplus k^{10}[-1]$.
(7), (8) as well as (6) follow from an exact sequence $0 \to \mathcal{O}_X(-2) \to \mathcal{O}_X(1) \to \mathcal{O}_Z(1) \to 0$ with
$\text{Hom}^*(\mathcal{O}_X(1), \mathcal{O}_Z(1)) \cong k$,
$\text{Hom}^*(\mathcal{O}_X(-2), \mathcal{O}_Z(1)) \cong k^{10}$,
$\text{Hom}^*(\mathcal{O}_X(1), \mathcal{O}_Z(2)) \cong k^3$,
$\text{Hom}^*(\mathcal{O}_X(-2), \mathcal{O}_Z(2)) \cong k^{15}$,
$\text{Hom}^*(\mathcal{O}_X(1), \mathcal{O}_Z) \cong 0$,
$\text{Hom}^*(\mathcal{O}_X(-2), \mathcal{O}_Z) \cong k^6$.
\end{proof}
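The dimension counts above all reduce to $h^0(\mathbf{P}^2,\mathcal{O}(d)) = \binom{d+2}{2}$. A quick numerical cross-check of the quoted values (Python; illustrative only, not part of the proof):

```python
from math import comb

def h0_P2(d):
    """dim H^0(P^2, O(d)): binom(d+2, 2) for d >= 0, and 0 for d < 0."""
    return comb(d + 2, 2) if d >= 0 else 0

assert h0_P2(1) == 3    # (2): Hom(O_X, O_Z(1)) = k^3
assert h0_P2(2) == 6    # (3): Hom(O_X, O_Z(2)) = k^6
assert h0_P2(3) == 10   # (6): the k^10 summand, h^0(O_Z(3))
assert h0_P2(4) == 15   # (7): the k^15 summand, h^0(O_Z(4))
```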
\begin{Lem}
(1) $\text{Hom}^*(\mathcal{O}_X,F) \cong 0$.
(2) $\text{Hom}^*(\mathcal{O}_X,G) \cong 0$.
(3) $\text{Hom}^*(\mathcal{O}_X(3),F) \cong 0$.
(4) $\text{Hom}^*(\mathcal{O}_X(3),G) \cong 0$.
\end{Lem}
\begin{proof}
(1) and (2) follow from $\text{Hom}^*(\mathcal{O}_X,\mathcal{O}_X^3) \cong \text{Hom}^*(\mathcal{O}_X,\mathcal{O}_Z(1))$ and
$\text{Hom}^*(\mathcal{O}_X,\mathcal{O}_X^6) \cong \text{Hom}^*(\mathcal{O}_X,\mathcal{O}_Z(2))$.
(3) and (4) follow from
$\text{Hom}^*(\mathcal{O}_X(3),\mathcal{O}_X) \cong \text{Hom}^*(\mathcal{O}_X(3),\mathcal{O}_Z(1))
\cong \text{Hom}^*(\mathcal{O}_X(3),\mathcal{O}_Z(2)) \cong 0$.
\end{proof}
\begin{Lem}
(1) $\text{Hom}^*(F,\mathcal{O}_X) \cong k^9$.
(2) $\text{Hom}^*(F,\mathcal{O}_Z(1)) \cong k^{18}$.
(3) $\text{Hom}^*(F,\mathcal{O}_Z(2)) \cong k^{30}$.
(4) $\text{Hom}^*(G,\mathcal{O}_X) \cong k^9$.
(5) $\text{Hom}^*(G,\mathcal{O}_Z(1)) \cong k^{24}$.
(6) $\text{Hom}^*(G,\mathcal{O}_Z(2)) \cong k^{45}$.
\end{Lem}
\begin{proof}
(1) follows from $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_X) \cong k^6[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^3,\mathcal{O}_X) \cong k^3$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_X^3,\mathcal{O}_X) \to \text{Hom}(F,\mathcal{O}_X) \to
\text{Ext}^1(\mathcal{O}_Z(1),\mathcal{O}_X)
\to 0.
\]
(2) follows from $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \cong k \oplus k^{10}[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^3,\mathcal{O}_Z(1)) \cong k^9$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \to \text{Hom}(\mathcal{O}_X^3,\mathcal{O}_Z(1))
\to \text{Hom}(F,\mathcal{O}_Z(1)) \to \text{Ext}^1(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) \to 0.
\]
(3) follows from $\text{Hom}^*(\mathcal{O}_Z(1),\mathcal{O}_Z(2)) \cong k^3 \oplus k^{15}[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^3,\mathcal{O}_Z(2)) \cong k^{18}$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_Z(1),\mathcal{O}_Z(2)) \to \text{Hom}(\mathcal{O}_X^3,\mathcal{O}_Z(2))
\to \text{Hom}(F,\mathcal{O}_Z(2)) \to \text{Ext}^1(\mathcal{O}_Z(1),\mathcal{O}_Z(2)) \to 0.
\]
(4) follows from $\text{Hom}^*(\mathcal{O}_Z(2),\mathcal{O}_X) \cong k^3[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^6,\mathcal{O}_X) \cong k^6$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_X^6,\mathcal{O}_X) \to \text{Hom}(G,\mathcal{O}_X) \to
\text{Ext}^1(\mathcal{O}_Z(2),\mathcal{O}_X)
\to 0.
\]
(5) follows from $\text{Hom}^*(\mathcal{O}_Z(2),\mathcal{O}_Z(1)) \cong k^6[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^6,\mathcal{O}_Z(1)) \cong k^{18}$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_X^6,\mathcal{O}_Z(1)) \to \text{Hom}(G,\mathcal{O}_Z(1)) \to
\text{Ext}^1(\mathcal{O}_Z(2),\mathcal{O}_Z(1)) \to 0.
\]
(6) follows from $\text{Hom}^*(\mathcal{O}_Z(2),\mathcal{O}_Z(2)) \cong k \oplus k^{10}[-1]$ and
$\text{Hom}^*(\mathcal{O}_X^6,\mathcal{O}_Z(2)) \cong k^{36}$ with an exact sequence
\[
0 \to \text{Hom}(\mathcal{O}_Z(2),\mathcal{O}_Z(2)) \to \text{Hom}(\mathcal{O}_X^6,\mathcal{O}_Z(2)) \to
\text{Hom}(G,\mathcal{O}_Z(2)) \to \text{Ext}^1(\mathcal{O}_Z(2),\mathcal{O}_Z(2)) \to 0.
\]
\end{proof}
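The bookkeeping in the six exact sequences can be checked mechanically: a short exact sequence $0 \to A \to C \to D \to 0$ gives $\dim C = \dim A + \dim D$, while a four-term exact sequence $0 \to A \to B \to C \to D \to 0$ gives $\dim C = \dim B - \dim A + \dim D$. A quick check (Python; illustrative only):

```python
def dim_from_short(dim_a, dim_d):
    """0 -> A -> C -> D -> 0  =>  dim C = dim A + dim D."""
    return dim_a + dim_d

def dim_from_four(dim_a, dim_b, dim_d):
    """0 -> A -> B -> C -> D -> 0  =>  dim C = dim B - dim A + dim D."""
    return dim_b - dim_a + dim_d

assert dim_from_short(3, 6) == 9        # (1) Hom(F, O_X)
assert dim_from_four(1, 9, 10) == 18    # (2) Hom(F, O_Z(1))
assert dim_from_four(3, 18, 15) == 30   # (3) Hom(F, O_Z(2))
assert dim_from_short(6, 3) == 9        # (4) Hom(G, O_X)
assert dim_from_short(18, 6) == 24      # (5) Hom(G, O_Z(1))
assert dim_from_four(1, 36, 10) == 45   # (6) Hom(G, O_Z(2))
```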
\begin{Prop}
(1) $\text{Hom}^*(F,F) \cong k^9$.
(2) $\text{Hom}^*(G,G) \cong k^9$.
(3) $\text{Hom}^*(F,G) \cong k^{24}$.
(4) $\text{Hom}^*(G,F) \cong k^3$.
\end{Prop}
\begin{proof}
(1) It is sufficient to prove that the natural homomorphism
$\text{Hom}(F,\mathcal{O}_X^3) \to \text{Hom}(F,\mathcal{O}_Z(1))$
is surjective.
We have a commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X^3,\mathcal{O}_X^3) @>>> \text{Hom}(F,\mathcal{O}_X^3) @>>>
\text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_X^3) @>>> 0 \\
@V{\cong}VV @VVV @VVV \\
\text{Hom}(\mathcal{O}_X^3,\mathcal{O}_Z(1)) @>>> \text{Hom}(F,\mathcal{O}_Z(1)) @>>>
\text{Hom}^1(\mathcal{O}_Z(1), \mathcal{O}_Z(1)) @>>> 0
\end{CD}
\]
Since the left vertical arrow is bijective, it is sufficient to prove that the right vertical arrow is surjective.
We have another commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X(-2),\mathcal{O}_X^3) @>{\cong}>> \text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_X^3) \\
@VVV @VVV \\
\text{Hom}(\mathcal{O}_X(-2), \mathcal{O}_Z(1)) @>>> \text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_Z(1)) @>>> 0
\end{CD}
\]
The left vertical arrow is surjective, hence we have our claim.
(2) It is sufficient to prove that the natural homomorphism
$\text{Hom}(G,\mathcal{O}_X^6) \to \text{Hom}(G,\mathcal{O}_Z(2))$
is surjective.
We have a commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X^6,\mathcal{O}_X^6) @>>> \text{Hom}(G,\mathcal{O}_X^6) @>>>
\text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_X^6)
@>>> 0 \\
@V{\cong}VV @VVV @VVV \\
\text{Hom}(\mathcal{O}_X^6,\mathcal{O}_Z(2)) @>>> \text{Hom}(G,\mathcal{O}_Z(2)) @>>>
\text{Hom}^1(\mathcal{O}_Z(2), \mathcal{O}_Z(2))
@>>> 0
\end{CD}
\]
Since the left vertical arrow is bijective, it is sufficient to prove that the right vertical arrow is surjective.
We have another commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X(-1),\mathcal{O}_X^6) @>{\cong}>> \text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_X^6) \\
@VVV @VVV \\
\text{Hom}(\mathcal{O}_X(-1), \mathcal{O}_Z(2)) @>>> \text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_Z(2)) @>>> 0
\end{CD}
\]
The left vertical arrow is surjective, hence we have our claim.
(3) It is sufficient to prove that the natural homomorphism
$\text{Hom}(F,\mathcal{O}_X^6) \to \text{Hom}(F,\mathcal{O}_Z(2))$
is surjective.
We have a commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X^3,\mathcal{O}_X^6) @>>> \text{Hom}(F,\mathcal{O}_X^6) @>>>
\text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_X^6) @>>> 0 \\
@V{\cong}VV @VVV @VVV \\
\text{Hom}(\mathcal{O}_X^3,\mathcal{O}_Z(2)) @>>> \text{Hom}(F,\mathcal{O}_Z(2)) @>>>
\text{Hom}^1(\mathcal{O}_Z(1), \mathcal{O}_Z(2)) @>>> 0
\end{CD}
\]
Since the left vertical arrow is bijective, it is sufficient to prove that the right vertical arrow is surjective.
We have another commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X(-2),\mathcal{O}_X^6) @>{\cong}>> \text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_X^6) \\
@VVV @VVV \\
\text{Hom}(\mathcal{O}_X(-2), \mathcal{O}_Z(2)) @>>> \text{Hom}^1(\mathcal{O}_Z(1),\mathcal{O}_Z(2)) @>>> 0
\end{CD}
\]
The left vertical arrow is surjective, hence we have our claim.
(4) It is sufficient to prove that the natural homomorphism
$\text{Hom}(G,\mathcal{O}_X^3) \to \text{Hom}(G,\mathcal{O}_Z(1))$
is surjective.
We have a commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X^6,\mathcal{O}_X^3) @>>> \text{Hom}(G,\mathcal{O}_X^3) @>>>
\text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_X^3)
@>>> 0 \\
@V{\cong}VV @VVV @VVV \\
\text{Hom}(\mathcal{O}_X^6,\mathcal{O}_Z(1)) @>>> \text{Hom}(G,\mathcal{O}_Z(1)) @>>>
\text{Hom}^1(\mathcal{O}_Z(2), \mathcal{O}_Z(1))
@>>> 0
\end{CD}
\]
Since the left vertical arrow is bijective, it is sufficient to prove that the right vertical arrow is surjective.
We have another commutative diagram:
\[
\begin{CD}
\text{Hom}(\mathcal{O}_X(-1),\mathcal{O}_X^6) @>{\cong}>> \text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_X^6) \\
@VVV @VVV \\
\text{Hom}(\mathcal{O}_X(-1), \mathcal{O}_Z(2)) @>>> \text{Hom}^1(\mathcal{O}_Z(2),\mathcal{O}_Z(2))
@>>> 0
\end{CD}
\]
The left vertical arrow is surjective, hence we have our claim.
\end{proof}
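Numerically, the dimensions in the proposition arise as kernels $\dim\text{Hom}(E,\mathcal{O}_X^r) - \dim\text{Hom}(E,\mathcal{O}_Z(d))$, and they sum to $\dim R_X = 45 = 3^2 + 6^2$ as in the theorem. A quick check (Python; illustrative only):

```python
# Hom(E, O_X) = k^9 for E = F, G, so Hom(E, O_X^r) has dimension 9r;
# subtracting Hom(E, O_Z(d)) (computed in the previous lemma) gives Hom(E, E').
dims = {
    ("F", "F"): 3 * 9 - 18,  # Hom(F, O_X^3) - Hom(F, O_Z(1))
    ("F", "G"): 6 * 9 - 30,  # Hom(F, O_X^6) - Hom(F, O_Z(2))
    ("G", "F"): 3 * 9 - 24,  # Hom(G, O_X^3) - Hom(G, O_Z(1))
    ("G", "G"): 6 * 9 - 45,  # Hom(G, O_X^6) - Hom(G, O_Z(2))
}
assert dims == {("F", "F"): 9, ("F", "G"): 24, ("G", "F"): 3, ("G", "G"): 9}
assert sum(dims.values()) == 45 == 3**2 + 6**2  # dim R_X; ranks of F and G
```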
\begin{Rem}
(1) There is a smooth Deligne-Mumford stack $\tilde X$ with a projection (a \lq\lq non-commutative crepant resolution'')
$\pi: \tilde X \to X = \mathbf{P}(1,1,1,3)$ which is an isomorphism over the smooth locus of $X$.
By \cite{stack} \S5, $\tilde X$ has a full exceptional collection of length $6$ consisting of invertible sheaves:
\[
D^b(\tilde X) = \langle \mathcal{O}_{\tilde X}, \mathcal{O}_{\tilde X}(1), \mathcal{O}_{\tilde X}(2),
\mathcal{O}_{\tilde X}(3), \mathcal{O}_{\tilde X}(4), \mathcal{O}_{\tilde X}(5) \rangle.
\]
\vskip 1pc
(2) The sheaf of $\mathcal{O}_X$-algebras $\mathcal{E}nd(F)$ has fibers isomorphic to the matrix algebra
$M(3,k)$,
but it does not seem to be isomorphic to $M(3,\mathcal{O}_X)$.
We have $45 = 3^2 + 6^2$.
But we do not know whether there are simpler tilting bundles.
\vskip 1pc
(3) The locally free sheaf $F_S$ in \S1 satisfies the following commutative diagram:
\[
\begin{CD}
@. @. 0 @. 0 \\
@. @. @VVV @VVV \\
0 @>>> \mathcal{O}_S(-1) @>>> F_S @>>> \mathcal{O}_S(-1) @>>> 0 \\
@. @V=VV @VVV @VVV \\
0 @>>> \mathcal{O}_S(-1) @>>> \mathcal{O}_S^2 @>>> \mathcal{O}_S(1) @>>> 0 \\
@. @. @VVV @VVV \\
@. @. \mathcal{O}_C(1) @>=>> \mathcal{O}_C(1) \\
@. @. @VVV @VVV \\
@. @. 0 @. 0
\end{CD}
\]
where $\mathbf{P}^1 \cong C \subset S$ is a curve at infinity.
Similarly $F$ and $G$ satisfy the following:
\[
\begin{CD}
@. @. @. 0 @. 0 \\
@. @. @. @VVV @VVV \\
0 @>>> \mathcal{O}_X(-2) @>>> \mathcal{O}_X(-1)^3 @>>> F @>>> \mathcal{O}_X(-2) @>>> 0 \\
@. @V=VV @V=VV @VVV @VVV \\
0 @>>> \mathcal{O}_X(-2) @>>> \mathcal{O}_X(-1)^3 @>>> \mathcal{O}_X^3 @>>> \mathcal{O}_X(1)
@>>> 0 \\
@. @. @. @VVV @VVV \\
@. @. @. \mathcal{O}_Z(1) @>=>> \mathcal{O}_Z(1) \\
@. @. @. @VVV @VVV \\
@. @. @. 0 @. 0
\end{CD}
\]
and
\[
\begin{CD}
@. @. @. 0 @. 0 \\
@. @. @. @VVV @VVV \\
0 @>>> \mathcal{O}_X(-2)^3 @>>> \mathcal{O}_X(-1)^8 @>>> G @>>> \mathcal{O}_X(-1) @>>> 0 \\
@. @V=VV @V=VV @VVV @VVV \\
0 @>>> \mathcal{O}_X(-2)^3 @>>> \mathcal{O}_X(-1)^8 @>>> \mathcal{O}_X^6 @>>> \mathcal{O}_X(2)
@>>> 0 \\
@. @. @. @VVV @VVV \\
@. @. @. \mathcal{O}_Z(2) @>=>> \mathcal{O}_Z(2) \\
@. @. @. @VVV @VVV \\
@. @. @. 0 @. 0
\end{CD}
\]
These sheaves are not iterated extensions (\lq\lq non-commutative deformations'')
of a collection $(\mathcal{O}_X(-2), \mathcal{O}_X(-1))$, but higher extensions (cf. \cite{multi}, \cite{ODP}).
(4) Martin Kalck noticed in a private communication that the above calculations
on the $45$-dimensional algebra
match up, at least numerically, with a computation in cluster theory and higher Auslander-Reiten sequences.
\end{Rem}
\section{\romannumeral1. Introduction}
For most integrated optoelectronic devices that function on the basis of heterojunctions, the sharpness of the interfaces is a key parameter determining device quality \cite{1}. The fabrication success rate, consistency and working life of heterojunction devices are all related to the surface roughness of the component thin films, so controlling this roughness is one of the key factors in optimizing heterojunction devices. But surface morphology is a complex phenomenon: the morphology of a growing thin film is determined by many physical factors (non-uniform distribution in flux density, energy and momentum of the incident species, kinetic energy, momentum, growth temperature, growth rate, sticking coefficient, diffusion, the Ehrlich-Schwoebel barrier (ESB) \cite{2,3,4}, and so on) and physical effects (the random distribution effect, the anisotropic growth rate effect, the surface energy effect, the strain effect, the shadowing effect \cite{5}, the quantum size effect \cite{6,7}, the steering effect \cite{8}, etc.). Optimizing all of these factors is the quite time-consuming route usually taken to achieve an ultra-smooth surface, but it may not work for many complex compound functional materials.
In fact, the applications of thin-film surface morphologies fall into two categories. One category utilizes rough surfaces, which can be achieved by many methods such as surface etching, glancing-angle deposition \cite{5}, and various other surface nanostructure fabrication techniques. The other category utilizes atomic-scale smooth surfaces or interfaces, which cannot always be obtained because not every material grows in a layer-by-layer mode. For example, to fabricate sandwich-type Josephson junctions from two YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO) layers with an insulating layer between them, the middle layer must be both insulating and of a uniform thickness around the superconducting coherence length of YBCO (13.5${\pm}$1.5\AA{} along the a,b-axis) \cite{9}, which means at least one YBCO layer and the insulating layer must have atomic-scale smooth surfaces. However, YBCO thin films do not grow in a layer-by-layer mode, so high-temperature superconducting computers based on YBCO have been unavailable so far. Therefore, a universally applicable method of altering the thin-film growth mode from island-like (Volmer-Weber) growth to a 2-dimensional (layer-by-layer or step-flow) mode would not only open up new research fields that rely on atomic-scale smooth thin films, but also enable the industrial applications of important functional materials and thus greatly promote new technology developments, which would be of epoch-making significance.
\begin{figure}
\includegraphics[width=8cm]{fig1.eps}
\caption{Schematic of EW in optically thinner medium. The horizontal blue lines represent the isoamplitude planes, and the vertical red lines the isophase planes. Light refraction at the vacuum-substrate interfaces is ignored for simplicity.}
\end{figure}
To achieve an atomic-scale smooth surface of a thin-film material that intrinsically grows in an island-like way, its growth mode must be altered to a 2D manner. A basic line of thought is this: since surface roughness is a kind of disorder (surface entropy) caused to a large degree by random surface diffusion, we need to infuse negative entropy into the surface to break the randomness of adatom diffusion on the growing nuclei or islands and thereby decrease the roughness. In this article, we propose a method of using evanescent waves (EW) to enhance the downward interlayer diffusion and suppress the upward interlayer diffusion, thus smoothing the surface of a growing thin film. The article is organized as follows. First, we introduce the main idea of the modulating effect of EW on a growing thin film. Then we derive the formulas for the optical force exerted on adatoms by EW and for the lowered part of the diffusion barrier. Next, to illustrate the feasibility of the method with an example, the diffusion barriers without EW and with EW of different amplitudes are simulated for an Ag thin film, and, to visualize the modulating effect on its surface morphology, kinetic Monte Carlo simulations are performed based on the quasi-minimum energy paths (quasi-MEPs) and ESBs of the Ag(100) surface obtained from DFT calculations. Finally, the ubiquity of the method is discussed. The significant enhancement of downward interlayer diffusion by EW theoretically verifies the feasibility of this method, and experimental tests are suggested using time-resolved x-ray scattering (trXRS) \cite{10} and grazing-incidence x-ray photon correlation spectroscopy (GIXPCS) \cite{11}.
\section{\romannumeral2. Principles}
It is well known that total reflection produces an evanescent wave field in the optically thinner medium \cite{12}. As depicted in Fig.~1, when a laser beam is incident onto a thin film from the substrate side at an incident angle $\theta$ larger than the critical angle $\theta_{c}$, total reflection occurs at the thin film-vacuum interface and an EW field is generated on the vacuum side near the thin-film surface. The electric field of the EW parallel to the reflection plane is a strong gradient field which can polarize atoms and exert a net attractive force pulling the polarized atoms down toward the interface. Therefore we can use this attractive force to enhance the downward diffusion and suppress the upward diffusion of adatoms on the nuclei or islands of a growing thin film, which should result in a smoother surface.
According to fundamental optics principles, the electric field of the EW can be written as
\begin{equation}
E_{EW} = E_0\, e^{-kz\sqrt{\sin^2\theta - \epsilon_1/\epsilon_2}}\, e^{i(\omega t - kx\sin\theta)}
\end{equation}
where $\omega$ is the angular frequency, $k$ is the wave number of the incident wave, $\theta$ denotes the incident angle, and $\epsilon_1$, $\epsilon_2$ represent the dielectric constants of media 1 and 2, respectively. The direction of the electric field depends on its polarization state.
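As a quick numerical illustration of this decay (Python; the wavelength, indices, and angle below are assumed sample values, not parameters from this work), the $1/e$ decay length of the field above is $1/\big(k\sqrt{\sin^2\theta - \epsilon_1/\epsilon_2}\big)$:

```python
import math

def decay_length(wavelength_vac, n2, n1, theta_deg):
    """1/e decay length of the EW field amplitude:
    kappa = k * sqrt(sin^2(theta) - eps1/eps2), with k = 2*pi*n2/lambda0
    and eps_i = n_i^2 for non-magnetic media."""
    theta = math.radians(theta_deg)
    k = 2 * math.pi * n2 / wavelength_vac
    arg = math.sin(theta) ** 2 - (n1 / n2) ** 2
    if arg <= 0:
        raise ValueError("below the critical angle: no total reflection")
    return 1.0 / (k * math.sqrt(arg))

# illustrative numbers (assumed): 532 nm light, film index n2 = 1.5,
# vacuum n1 = 1.0, incidence at 50 degrees (critical angle ~ 41.8 degrees)
d = decay_length(532e-9, 1.5, 1.0, 50.0)
print(f"decay length ~ {d * 1e9:.0f} nm")
```

The decay length is of order a hundred nanometers for these values, so the field gradient near the surface is substantial.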
Based on Lorentz effective field theory \cite{13,14,15}, the local electric field at a position $\boldsymbol{r_i}$ occupied by an atom can be obtained by summing the external electric field at that site and the fields induced by the dipole oscillators around the atom, which can be expressed as \cite{13,16}
\begin{equation}
\boldsymbol{E_{loc}}(\boldsymbol{r_i},\omega) = \boldsymbol{E_{ex}}(\boldsymbol{r_i},\omega) + \sum_{i \neq j} \boldsymbol{E_{dip}}(\boldsymbol{r_i},\boldsymbol{r_j},\omega)
\end{equation}
\begin{equation}
\boldsymbol{E_{dip}}(\boldsymbol{r_i},\boldsymbol{r_j},\omega) = \frac{3(\boldsymbol{p}(\boldsymbol{r_j},\omega) \cdot \boldsymbol{r_{ij}} )\boldsymbol{r_{ij}}-r_{ij}^2 \boldsymbol{p}(\boldsymbol{r_j},\omega) }{4 \pi \epsilon_0 r_{ij}^5}
\end{equation}
where $\boldsymbol{E_{ex}}(\boldsymbol{r_i},\omega)$ is the external electric field at position $\boldsymbol{r_i}$, $\boldsymbol{p}(\boldsymbol{r_j},\omega)$ denotes the dipole moment at position $\boldsymbol{r_{j}}$, and $\boldsymbol{E_{dip}}(\boldsymbol{r_i},\boldsymbol{r_j},\omega)$ represents the electric field at $\boldsymbol{r_i}$ induced by the dipole oscillator at $\boldsymbol{r_j}$. Here the retardation effect is ignored because of the small separation $r_{ij}$ between two adatoms, i.e., the quasi-static approximation is adopted.
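A minimal numeric sketch of the dipole-field expression above (quasi-static, SI units; purely illustrative):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dipole_field(p, r_i, r_j):
    """Quasi-static field at r_i from a point dipole p located at r_j:
    E = (3 (p . r) r - r^2 p) / (4 pi eps0 r^5), with r = r_i - r_j."""
    p = np.asarray(p, float)
    r = np.asarray(r_i, float) - np.asarray(r_j, float)
    rn = np.linalg.norm(r)
    return (3 * np.dot(p, r) * r - rn**2 * p) / (4 * np.pi * EPS0 * rn**5)
```

On the dipole axis this reduces to the familiar $E = 2p/(4\pi\epsilon_0 d^3)$, which is a convenient sanity check of the implementation.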
Let $\alpha(\boldsymbol{r_i},\omega)$ be the polarizability of the adatom at $\boldsymbol{r_i}$; the dipole moment $\boldsymbol{p}(\boldsymbol{r_i},\omega)$ at that position is then
\begin{equation}
\boldsymbol{p}(\boldsymbol{r_i},\omega) = \alpha(\boldsymbol{r_i},\omega) \boldsymbol{E_{loc}}(\boldsymbol{r_i},\omega)
\end{equation}
\begin{equation}
\alpha(\boldsymbol{r_i},\omega) = \alpha_0(\boldsymbol{r_i},\omega)/[1-(2/3)ik_0^3\alpha_0(\boldsymbol{r_i},\omega)]
\end{equation}
where $k_0 = \omega/c$ and $\alpha_0(\boldsymbol{r_i},\omega)$ is given by the Clausius-Mossotti relation:
\begin{equation}
\alpha_0(\boldsymbol{r_i},\omega) = 3 a^3 \epsilon_0 \frac{\epsilon(\omega)-1}{\epsilon(\omega)+2}
\end{equation}
where $a$ represents the size of an adatom, so that $1/a^3$ plays the role of the atomic number density in the Clausius-Mossotti relation.
To obtain self-consistent solutions for $\boldsymbol{p}(\boldsymbol{r_i},\omega)$ for dielectric grains of arbitrary shape, an iterative method can be employed. Then, using the Maxwell stress tensor method, we can obtain the optical force on the atom at position $\boldsymbol{r_i}$ \cite{12}
\begin{equation}
\boldsymbol{F_{opti}}(\boldsymbol{r_i},\omega) = [\boldsymbol{p}(\boldsymbol{r_i},\omega)\cdot \nabla]\boldsymbol{E_{ex}}(\boldsymbol{r_i},\omega)
\end{equation}
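As a rough order-of-magnitude sketch (not the paper's calculation): assuming a real static polarizability of the Clausius-Mossotti form above and the exponentially decaying EW amplitude, the standard time-averaged gradient force $\langle F_z\rangle = \tfrac14\,\mathrm{Re}(\alpha)\,\partial_z|E|^2$ pulls the adatom toward the surface. All numbers below are assumed illustrative values, not values from this work:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def alpha_cm(a, eps):
    """Static Clausius-Mossotti polarizability: 3 a^3 eps0 (eps-1)/(eps+2)."""
    return 3 * a**3 * EPS0 * (eps - 1) / (eps + 2)

def gradient_force(alpha, E0, kappa, z):
    """Time-averaged gradient force for |E(z)| = E0 * exp(-kappa * z):
    F_z = (alpha/4) d|E|^2/dz = -(alpha/2) kappa E0^2 exp(-2 kappa z).
    The negative sign means the adatom is pulled toward the surface (z = 0)."""
    return -0.5 * alpha * kappa * E0**2 * math.exp(-2 * kappa * z)

# assumed illustrative values: adatom size a = 0.3 nm, eps = 5,
# field amplitude 1e8 V/m, decay constant kappa = 1/(150 nm)
alpha = alpha_cm(0.3e-9, 5.0)
F = gradient_force(alpha, 1e8, 1 / 150e-9, 0.0)
print(f"F_z ~ {F:.1e} N (attractive)")
```

Even at these modest field strengths the force is directed downward everywhere in the decay region, which is the qualitative behavior the modulation scheme relies on.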
\begin{comment}
Simple cubic lattice is chosen to simplify the calculations. Atom at site $\mu=a(k_{\mu},l_{\mu},m_{\mu})$ receives the electric field induced by polarization of another atom at site $ \nu=a(k_{\nu},l_{\nu},m_{\nu})$, which can be represented by
\begin{equation}
E_{dip,z}^{\mu \nu} = \Gamma^{\mu \nu } p_{\nu }
\end{equation}
\begin{equation}
\Gamma^{\mu \nu} = \frac{1}{4\pi \epsilon_0 a^3} \frac{2(m_{\mu}-m_{\nu})^2-(k_{\mu}-k_{\nu})^2-(l_{\mu}-l_{\nu})^2}{((m_{\mu}-m_{\nu})^2+(k_{\mu}-k_{\nu})^2+(l_{\mu}-l_{\nu})^2)^{5/2}}
\end{equation}
The total electric field at site $ \mu$ produced by polarizations of all other atoms is
\begin{equation}
E_{dip,z}^{\mu} = \sum_{\nu}^{'} \Gamma^{\mu \nu} p_{\nu}
\end{equation}
The local field at this site is
\begin{equation}
E_{loc,z}^{\mu}=E_{dip,z}^{\mu} +E_{ex,z}^{\mu}
\end{equation}
where $E_{ex,z}^{\mu}$ represents the external field along z-axis at $\mu $ site.
Let $\alpha(\omega)$ be the polarizability of each atom in the z-direction, we have
\begin{equation}
p_{\mu} = \alpha(\omega) E_{loc,z}^{\mu}
\end{equation}
Previous studies calculated the polarizations of atoms near surface. However, the influence of surface roughness was ignored. In order to take this point into account, a thin film-on-substrate model of with L monolayers and N lattice points is constructed, and the last Q monolayers are the thin-film lattice. The polarizabilities of substrate atoms and thin film atoms are represented by $ \alpha_1 (\omega)$, $ \alpha_2 (\omega)$, respectively. If there are no atoms in the lattice, the polarizability is set to zero. Thus, the matrix equation can be established
\begin{equation}
\boldsymbol{M} \boldsymbol{P}=\boldsymbol{A}\boldsymbol{E}
\end{equation}
where $\boldsymbol{M} $ is a $L\times L $ matrix, and its element is $ M_{\mu \nu} = \delta_{\mu \nu } + (1-\delta_{\mu \nu})\alpha_{\mu} \Gamma^{\mu \nu}$. $\boldsymbol{A}$ is a diagonal matrix with the elements of $A_{\nu\mu}=\alpha_{\mu} \delta_{\nu \mu}$. $\boldsymbol{P}$ and $\boldsymbol{E}$ are $L$-dimensional vectors, and their elements are $P_{\mu} = p_{\mu} $ and $E_{\mu} = E_{ex,z}^{\mu} $.\cite{10} The polarization intensity of atoms at each lattice point can be obtained by solving the matrix equation.
Then, using Maxwell tensor method, the gradient force (GF) of atom at site $\mu $ can be obtained \cite{12}
\begin{equation}
\boldsymbol{F^{\mu}_{grad}} = (\boldsymbol{p_{\mu}}\cdot \nabla)\boldsymbol{ E^{\mu}_{ex,z}}
\end{equation}
\end{comment}
\begin{figure}
\includegraphics[width=6cm]{fig2.eps}
\caption{Bottom: Schematic of interlayer diffusion in the exchange process, enhanced by the gradient force $\vec{F}_z$ and the scattering force $\vec{F}_x$ arising from the EW. $\Delta\boldsymbol{\vec{r}}$ indicates the moving direction of the squeezed atom in the lower layer. Upper: The z dependence of $F_z$ and $F_x$ for Ag adatoms on a $SrTiO_3$ substrate with a 50 W, 440 nm blue laser focused down to a 10 $\mu$m spot size.}
\end{figure}
As illustrated in the bottom part of Fig.2, where Ag adatoms on $SrTiO_3$ are used as an example, the optical force includes two components. One is the gradient force (GF), whose direction points to the interface, pulling adatoms downwards and thus favoring downward interlayer diffusion. Because the light is scattered by the adatoms and loses energy during propagation, the EW also generates a scattering force (SF) along its propagation direction \cite{17}. The upper part of Fig.2 shows the z-dependence of the GF and SF, calculated using real data of Ag adatoms on a $SrTiO_3$ substrate [18,19] with a 50 W, 440 nm blue laser focused down to a 10 $\mu$m spot size. The calculations show that the optical forces are much larger than gravity, and similar z-dependence curves also exist for other kinds of metal adatoms, as shown in Fig.1S of the Supplemental Material.
The surface morphology of a growing thin film is determined by many atomic processes, mainly including deposition, intralayer diffusion, interlayer diffusion, nucleation, and re-evaporation. Among them, interlayer diffusion plays the most important role in shaping surface morphology, and it proceeds in two ways. In the hopping mechanism, diffusing atoms hop over the step to the neighboring layer; in the exchange mechanism, atoms at the step squeeze into the lower layer, displacing their nearest-neighbor atoms there. Because a hopping adatom must pass through a configuration of lower coordination number (CN), which corresponds to a higher diffusion barrier, the exchange process is usually more favorable than the hopping process, as supported by many studies \cite{20,21,22}. Therefore, at least for the growth of metal thin films, the exchange mechanism is the dominant process in interlayer diffusion. Consequently, this process can represent real interlayer diffusion for verifying the effectiveness of the modulating effect of EW on thin-film growth.
We first discuss adatom diffusion in the absence of EW. In a diffusion process, an adatom diffuses to a nearest-neighbor position with diffusion rate $v$. According to Boltzmann statistics and the Arrhenius formula, the diffusion rate $v$ can be expressed as
\begin{equation}
v= v_0 \, \exp\left(- \frac{E_{d}}{k_{B}T}\right)
\end{equation}
where $k_{B}$ is the Boltzmann constant, $v_0$ is the vibrational frequency, $T$ is the temperature, and $E_{d}$ is the diffusion energy barrier.
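The Arrhenius rate above makes the exponential sensitivity to the barrier concrete. As a small sketch (the attempt frequency is an assumed typical value, not a result from the paper), lowering the calculated 0.54 eV Ag barrier by the 26\% quoted later in the text raises the rate by roughly two to three orders of magnitude at 273 K:

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_rate(E_d, T, v0=1e13):
    """Arrhenius diffusion rate; v0 is an assumed typical attempt frequency."""
    return v0 * np.exp(-E_d / (kB * T))

# Rate enhancement from a 26% reduction of a 0.54 eV barrier at 273 K
ratio = diffusion_rate(0.54 * (1 - 0.26), 273.0) / diffusion_rate(0.54, 273.0)
```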
With an EW field applied, the gradient force of the EW pulls the adatoms on step edges to squeeze into the lower layer more readily, while in upward diffusion the adatoms must do extra work against the GF; therefore downward diffusion is promoted and upward diffusion is suppressed. Because the light frequency is much higher than the vibrational frequency of the adatoms, which is about $10^{12}$ to $10^{13}\,s^{-1}$ \cite{1,23}, the optical force during a diffusion move can be averaged into an invariable local optical force field (LOFF) $\epsilon_{loc}(\boldsymbol{r}) $. Setting the zero of potential energy at the initial site of the diffusing adatom, we can calculate the diffusion barriers via $ab$-$initio$ simulations; in the case with EW, the LOFF potential energy is appended to $E_{d}$:
\begin{equation}
E^{\prime}(\boldsymbol{r}) = E_d(\boldsymbol{r}) + \epsilon_{loc}(\boldsymbol{r})
\end{equation}
where $\boldsymbol{r}$ represents the atomic position, and $\epsilon_{loc}(\boldsymbol{r}) $ can be obtained from our theoretical model with atomic diffusion path
\begin{equation}
\epsilon_{loc}(\boldsymbol{r}) = -\int_{Path(\boldsymbol{r})}^{} \left\langle \boldsymbol{F^{\mu}_{opti}} \right\rangle \boldsymbol{dr}
\end{equation}
Therefore, the diffusion rate can be rewritten as
\begin{equation}
v^{\prime}= v_0 \, \exp\left[- \frac{E^{\prime}(\boldsymbol{r})_{peak}}{k_{B}T}\right]
\end{equation}
where $E^{\prime}(\boldsymbol{r})_{peak}$ is the peak of the energy curve (the energy of the initial state is set to zero), i.e., the new diffusion barrier with the EW field applied. Eq.(9) and Eq.(10) are valid for calculating both the ESBs and the normal intralayer-diffusion barriers.
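The procedure in Eqs.(9)-(11) can be sketched numerically: integrate the (time-averaged) optical force along the diffusion path to get the LOFF potential, add it to the bare energy profile, and read off the new barrier as the peak of the shifted curve. The barrier profile and force component below are toy assumptions, not DFT results from the paper.

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

s = np.linspace(0.0, 1.0, 201)           # reaction coordinate along the MEP
E_d = 0.54 * np.sin(np.pi * s) ** 2      # assumed bare barrier profile, eV
F_along = np.full_like(s, 0.05)          # assumed force component along path, eV/unit

# Eq.(10): eps_loc(r) = -integral of <F> . dr along the path (cumulative trapezoid)
eps_loc = -np.concatenate(([0.0], np.cumsum(
    0.5 * (F_along[1:] + F_along[:-1]) * np.diff(s))))

E_mod = E_d + eps_loc                    # Eq.(9): shifted energy landscape
barrier = E_mod.max() - E_mod[0]         # E'(r)_peak, initial state at zero
rate = 1e13 * np.exp(-barrier / (kB * 300.0))   # Eq.(11), assumed v0 and T
```

With a force pulling the adatom forward along the path, the peak of the shifted curve is lower than the bare 0.54 eV barrier, as the derivation requires.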
\begin{figure}
\includegraphics[width=9cm]{fig3.eps}
\caption{Interlayer diffusion barriers of Ag adatoms on the (100) surface without and with EWs arising from incident light of different wave amplitudes $E_0$. To make the difference more distinct, large EW amplitudes ranging from $1.00\times 10^{11} $ V/m to $2.00\times 10^{11}$ V/m are used in the calculations. }
\end{figure}
\section{\romannumeral3. Numerical experiments}
To demonstrate the modulating effect of EW on adatom diffusion with an example, the diffusion barriers without EW and with EW of different amplitudes were simulated for the growth of Ag thin films. To further simplify the simulations without losing generality, we use the dominant exchange process as the representative of real interlayer diffusion to examine the enhancing effect of EW on interlayer diffusion. Therefore, no other processes, such as hopping across steps, edge diffusion, or detachment from island edges, were included in the simulations (according to the physical picture of the EW-adatom interaction, the EW field also enhances the net downward crossing-edge diffusion, as explained in the following section). Our numerical experiments combine the above theoretical formulas with DFT simulations performed using the plane wave-based Vienna $ab$ $initio$ simulation package (VASP) \cite{24} to find the MEPs of diffusion. We adopted the Perdew-Burke-Ernzerhof form of the generalized gradient approximation for the exchange-correlation functional \cite{25}. Kohn-Sham wave functions are described by a plane-wave basis set with a cut-off energy of 400 eV. The thickness of the vacuum layer is set to 17 angstrom to eliminate the influence of the periodic pattern in the direction perpendicular to the surface. The diffusion paths and barriers of adatoms are evaluated using the Nudged Elastic Band (NEB) method \cite{26,27}.
We selected a 3$\times$3 supercell of Ag(100) films with a K-mesh of 3$\times$3$\times$1. The initial sites of adatoms were set on the upper monolayer (ML), and their interlayer diffusion from the upper ML to the adjacent lower ML was simulated. Based on the diffusion path from the DFT calculations, the potential energy landscape modified by the EW effect was obtained from Eq.(9) and Eq.(10), as depicted in Fig.3. It is obvious that the EW field lowers the downward interlayer diffusion barrier of adatoms, which favors downward interlayer diffusion in the dominant exchange process; the larger the incident light wave amplitude $E_0$ in Eq.(1), the lower the barrier.
The calculated diffusion barrier without EW is 0.54 eV, which is consistent with previous studies (0.52 eV) \cite{21}. With EW applied, the ESB decreases significantly; for example, an incident beam of $E_0 =2.00 \times 10^{11}$ V/m lowers the ESB of interlayer diffusion along the direction of the SF by $26\%$, as Fig.3 shows. More details on the variations of the barriers can be found in Table 1S of the Supplemental Material. More significantly, the energy of the final state is lowered further compared with the no-EW situation, which means the probability of diffusing from lower layers to upper layers tends to zero, further enhancing the two-dimensional growth behavior. Although the GF also shrinks the interlayer spacing and alters the MEPs by reshaping the surface potential landscape, it is fortunately too small to change the MEPs noticeably, which makes it reasonable to take the MEPs calculated without EW as the quasi-MEPs under EW in the calculations based on Eq.(9) and Eq.(10).
\begin{figure}
\includegraphics[width=8cm]{fig4.eps}
\caption{Surface morphology of an Ag(100) thin film simulated using Kinetic Monte Carlo with 30 jumping diffusion steps: (a) without EW; (b) with both the GF and SF of EW at $E_0$=1.00$\times$$10^{11}$ V/m; (c) with both the GF and SF of EW and (d) with only the GF of EW at $E_0$ =2.00$\times$$10^{11}$ V/m. The simulated thin-film area is 100$\times$100 lattice points. The RMS roughnesses are listed in the respective titles.}
\end{figure}
To visualize the modulating effect of EW, we carried out Kinetic Monte Carlo simulations of Ag(100) thin-film growth based on the results of the quasi-MEPs. Fig.4 illustrates the simulation results using a simplified model that captures the main physics, in which 50000 deposited adatoms land on an area of 100$\times$100 lattice points and are then forced to diffuse for 30 jumping steps; when adatoms diffuse across the boundaries, a periodic boundary condition is used, i.e., adatoms diffusing out of the simulated deposition area on one side re-enter it from the other side. The striking contrast between the roughnesses of films with and without EW demonstrates its effectiveness. The details of the simulations can be found in the Supplemental Material. It is noteworthy that the unidirectional nature of the SF leads to an asymmetry of diffusion and thus affects the smoothness of the thin film. Compared with the combined effect of the GF and SF of EW, the GF alone works better for smoothing surfaces, as shown in Fig.4(d). The SF can be cancelled using two symmetrically incident beams in practical experiments.
\section{\romannumeral4. Discussion}
Because an EW field can polarize any adatom and thus exert a gradient force that enhances downward interlayer diffusion, it follows from this principle that the approach works universally for any material. This conclusion is logically sound, although it has not yet been verified by comprehensive simulations or real experiments on the growth of complex compound thin films. One can imagine that at any delicate near-balance between downward and upward interlayer diffusion, even a small force can break the original quasi-balance. This should hold not only for the exchange process but also for the hopping process across steps: although the GF slightly increases the ESB for adatoms hopping downward, it lowers the energy of the system with adatoms at the lower layers by more, since the upward displacement of an adatom in a downward crossing-step hopping process is much smaller than its downward displacement (as shown in Fig.2); thus upward-diffusing adatoms at lower layers have to overcome a larger barrier with EW applied. As a result, the EW-caused reduction in the downward hopping rate can be over-compensated by the increased suppression of upward diffusion. Therefore, the net downward interlayer diffusion is increased by EW in the hopping process as well. Besides, in our model simulations, only 30 jumping diffusion steps already produce obvious improvements in surface morphology. In real cases, the attempt frequency of around 1$\times$$10^{13}$ Hz can generate many more jumping steps even within 0.1 second, which would make the actual smoothing effect of EW much better than in our model calculations.
\section{\romannumeral5. Conclusion}
In conclusion, a theory of the modulating effect of evanescent waves on thin-film growth is established, with formulas derived for the optical force and the lowered diffusion barrier of adatoms in the EW field. Applying the theory to a thin film-on-substrate model, using silver growth as an example, demonstrates an increased net downward interlayer diffusion rate of adatoms. The simulations illustrate that EW can smooth the surface of a growing thin film. To the best of our knowledge, this modulation method for enhancing surface flatness has not been tried hitherto. Experiments combining trXRS with GIXPCS are suggested to test this modulating effect.
\begin{acknowledgments}
This research is supported by Chinese Academy of Sciences under agreement CAS No. 11U153210485 and by NSFC of China under Contract No. Y611U21. TCC is supported by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Materials Science and Engineering, under Grant No. DE-FG02-07ER46383.
\end{acknowledgments}
\section{Kinetic Monte Carlo}
To visualize and compare the surface morphologies of growing thin films under different growth conditions, without and with the effect of evanescent waves (EW), we performed Kinetic Monte Carlo model simulations. Considering that the growth process of complex materials (such as $YBa_2Cu_3O_{7-\delta}$) is too complicated, and detailed knowledge of their growth units and of the binding energies of various kinks, steps, and edge dents is still not fully available, we chose the simple silver thin film as an example to simplify the simulations while grasping the main physics. In fact, even for Ag thin films, completeness of the simulations in every real detail has to be sacrificed, although we use parameters of silver that are as realistic as possible.
Without loss of generality, the solid-on-solid model \cite{1}, in which no overhangs are allowed on the simulated surface, was used in the Kinetic Monte Carlo simulations. Since the purpose of these simulations is to visualize the modulating effect of EW on thin-film growth rather than to pursue completeness, the simplified model includes only three procedures: (1) adsorption of gas-phase atoms; (2) diffusion of adatoms; (3) nucleation of adatoms and formation of the thin film. The pseudocode of this method is listed in Algorithm 1, in which the surface morphology of the thin film is denoted by a 2D array $\boldsymbol{S}$. The parameters used in the Kinetic Monte Carlo simulations are shown in Table 1; the barrier values were calculated from the DFT results and Eq.(10) in the main text.
The randomness of interlayer diffusion is broken by the EW field in our simulation code. To also weaken the randomness of the adatoms' intralayer diffusion, the differences between the intralayer diffusion of adatoms around the step edges of nuclei or islands with in-plane bonding steps (CN=1 and 2), step kinks (CN=3), step-edge dents (CN=4), and so on would have to be taken into account in the simulation code as well; this would make the code overly complicated while contributing little to clarifying the essential physics. We therefore ignored the variation of in-plane bonding of adatoms with different numbers of neighbors in our simulations, which results in the fractal-like morphologies around the islands shown in Fig.4. In the simulated growth, the code forces bonded adatoms to diffuse in the same way as isolated adatoms, which consequently leads to a very small percentage of adatoms leaving their more stable (bonded) positions on the surface; this is the small cost of using simple code.
\begin{table*}[h]
\centering
\begin{ruledtabular}
\begin{tabular}{lcdr}
\textrm{\textbf{Algorithm 1:} Kinetic Monte Carlo}\\
\colrule
\textbf{Input:}\\
\qquad\qquad Intralayer diffusion barriers in different directions: \\
\qquad\qquad\qquad$bn1$ (along the direction of the scattering force)\\
\qquad\qquad\qquad$bn2$ (perpendicular to the direction of the scattering force) \\
\qquad\qquad\qquad$bn3$ (against the direction of the scattering force) \\
\qquad\qquad ES barriers in different directions:\\
\qquad\qquad\qquad $bes1$ (along the direction of the scattering force)\\
\qquad\qquad\qquad$bes2$ (perpendicular to the direction of the scattering force) \\
\qquad\qquad\qquad $bes3$ (against the direction of the scattering force) \\
\qquad\qquad Number of particles: $N$ \\
\qquad\qquad Number of diffusion steps per adatom: $l$\\
\qquad\qquad Size of the lattice: $L$\\
\qquad\qquad Temperature of substrate: $T$.\\
\textbf{Output:} \\
\qquad \qquad Surface morphology of thin film: $\boldsymbol{S}$. (Type of $\boldsymbol{S}$ is $L\times L$ 2d array)\\
\\
\qquad \textbf{for} $i$ in $range(0,N)$ \textbf{do:}\\
\qquad \qquad Randomly select one site $(x,y)$ on the lattice.\\
\qquad \qquad Let the atom be adsorbed on the site $(x,y)$; $\boldsymbol{S}$ gets updated. \qquad \emph{\# Adsorption of atoms}\\
\qquad \qquad \textbf{for} $j$ in $range(0,l)$ \textbf{do:}\\
\qquad \qquad \qquad Investigate all nearest-neighbor sites of the adatom at the current site: \\
\qquad \qquad \qquad \textbf{if} Upward diffusion would occur from current site to one targeted site \textbf{then:}\\
\qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $+\infty$\\
\\
\emph{\# Here, "$+\infty$" means that diffusion from a lower layer to an upper layer is forbidden. In practice this kind of diffusion}\\
\emph{\# is infrequent, and even rarer under the EW effect; therefore this approximation will not affect the accuracy of the results.} \\
\\
\qquad \qquad \qquad \textbf{else if} Intralayer diffusion would occur from current site to one targeted site \textbf{then:}\\
\qquad \qquad \qquad \qquad \textbf{if} Diffusion and scattering force are in the same direction \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bn1$\\
\qquad \qquad \qquad \qquad \textbf{else if} the direction of diffusion is perpendicular to that of scattering force \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bn2$\\
\qquad \qquad \qquad \qquad \textbf{else if} diffusion and scattering force are in the opposite direction \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bn3$\\
\qquad \qquad \qquad \qquad \textbf{end if}\\
\qquad \qquad \qquad \textbf{else if} Downward diffusion would occur from current site to one targeted site \textbf{then:}\\
\qquad \qquad \qquad \qquad \textbf{if} Diffusion and scattering force are in the same direction \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bes1$\\
\qquad \qquad \qquad \qquad \textbf{else if} the direction of diffusion is perpendicular to that of scattering force \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bes2$\\
\qquad \qquad \qquad \qquad \textbf{else if} diffusion and scattering force are in the opposite direction \textbf{then:}\\
\qquad \qquad \qquad \qquad \qquad Do: Energy barrier from current site to the targeted site $\leftarrow$ $bes3$\\
\qquad \qquad \qquad \qquad \textbf{end if}\\
\qquad \qquad \qquad \textbf{end if}\\
\qquad \qquad \qquad Determine the direction of diffusion based on the barriers to the nearest sites, Eq.(11) in the main text, and the\\
\qquad\qquad\qquad direction of the last diffusion.\\
\qquad \qquad \qquad Move the adatom to the targeted site; $\boldsymbol{S}$ gets updated \qquad \emph{\# Diffusion}\\
\qquad\qquad\qquad \textbf{if} The adatom collides with another nucleated adatom \textbf{then:}\\
\qquad\qquad\qquad\qquad \textbf{break} \qquad \qquad\emph{\# nucleation}\\
\qquad\qquad\qquad \textbf{end if}\\
\qquad \qquad \textbf{end for}\\
\qquad \textbf{end for}\\
\qquad\\
\qquad Output the surface morphology $\boldsymbol{S}$.
\end{tabular}
\end{ruledtabular}
\end{table*}
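Algorithm 1 can be sketched compactly in code. The sketch below follows the pseudocode's structure (solid-on-solid height array, upward moves forbidden, direction chosen by Boltzmann weights of the barriers), with the barrier values taken from the $E_0 = 2.00\times10^{11}$ V/m row of Table 1. The lattice size and particle number are scaled down so it runs quickly, and collision-triggered nucleation is simplified: an adatom stops only when every neighboring move is upward.

```python
import numpy as np

rng = np.random.default_rng(0)
kB, T = 8.617e-5, 273.0                   # eV/K, K
# Barriers per move direction (dx, dy); (1,0) is along the scattering force.
bn = {(1, 0): 0.420, (0, 1): 0.510, (0, -1): 0.510, (-1, 0): 0.600}   # intralayer
bes = {(1, 0): 0.402, (0, 1): 0.517, (0, -1): 0.517, (-1, 0): 0.630}  # downward (ES)

L, N, steps = 20, 200, 30                 # scaled-down lattice, atoms, diffusion steps
S = np.zeros((L, L), dtype=int)           # surface height array

for _ in range(N):
    x, y = rng.integers(0, L, size=2)     # deposition at a random site
    S[x, y] += 1
    for _ in range(steps):
        moves, weights = [], []
        for (dx, dy) in bn:
            nx, ny = (x + dx) % L, (y + dy) % L      # periodic boundaries
            dh = S[nx, ny] - (S[x, y] - 1)           # target height vs. adatom's base
            if dh > 0:
                continue                              # upward: barrier -> +inf
            barrier = bn[(dx, dy)] if dh == 0 else bes[(dx, dy)]
            moves.append((nx, ny))
            weights.append(np.exp(-barrier / (kB * T)))
        if not moves:
            break                                     # trapped: treated as nucleated
        w = np.array(weights) / sum(weights)
        nx, ny = moves[rng.choice(len(moves), p=w)]
        S[x, y] -= 1
        S[nx, ny] += 1                                # move the adatom
        x, y = nx, ny

roughness = S.std()                                   # RMS roughness of the surface
```

The asymmetry between $bn1$/$bes1$ and $bn3$/$bes3$ reproduces the directional bias of the scattering force discussed in the main text.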
\begin{table*}[h]
\begin{ruledtabular}
\begin{tabular}{ccccccc}
\textrm{Electric field amplitude ($V/m$)}&
\textrm{$bn1$}&
\textrm{$bn2$}&
\textrm{$bn3$}&
\textrm{$bes1$}&
\textrm{$bes2$}&
\textrm{$bes3$}\\
\colrule
$2.00\times10^{11}$&0.420&0.510&0.600&0.402&0.517&0.630\\
$1.75\times10^{11}$&0.426&0.496&0.566&0.428&0.523&0.616\\
$1.50\times10^{11}$&0.432&0.484&0.536&0.460&0.529&0.598\\
$1.25\times10^{11}$&0.437&0.473&0.509&0.487&0.534&0.582\\
$1.00\times10^{11}$&0.447&0.470&0.493&0.508&0.539&0.569\\
Without EW&0.451 &0.451 & 0.451& 0.544 & 0.544 & 0.544\\
\end{tabular}
\end{ruledtabular}
\begin{flushleft}
Table 1. Parameters used to generate Fig.4. All barriers in this table are in $eV$. \\
Other input parameters: $N$ was set to 50000; $T$ was $273$ K; $L$ was set to 100; $l$ was set to 30.
\end{flushleft}
\end{table*}
\section{Universality}
In order to clarify the universality of the modulating effect of EW, we also calculated the cases of different adatoms, as shown in Fig.1S. We have not found any reported research demonstrating that the hopping process is dominant in interlayer diffusion, at least for elemental metal thin films. This can be understood from the fact that the hopping process across steps passes through a configuration of lower coordination number (CN) than the exchange process, and thus has a higher energy barrier. Even in materials where interlayer diffusion is dominated by the hopping process, the energy of adatoms in the growth system would be lowered by the evanescent wave field. Therefore, as a general principle, the downward interlayer diffusion must be enhanced rather than undermined by the applied EW field, whether in the hopping process or in the exchange process.
\begin{figure*}
\includegraphics[width=17cm]{fig1S}
\begin{flushleft}
Fig.1S. Evanescent wave-induced gradient force ($F_{grad}$, marked by solid squares) and scattering force ($F_{scat}$, marked by solid circles) exerted on different metal adatoms in the unit of their respective gravities (G).
\end{flushleft}
\end{figure*}
\section*{Theoretical Model}
We model the quantum state received from two strong thermal point sources of equal intensity by a linear interferometer with two telescopes in the paraxial regime. We assume the two point sources are incoherent, which is reasonable for astronomical observation because radiating particles from distant astronomical objects should not have any correlation \cite{goodman1985statistical}. We assume the positions of the two point sources can be described in one dimension as $X_1$ and $X_2$, as shown in Fig.~\ref{general_setting}. The two point sources are assumed to be monochromatic and can be described by the canonical annihilation and creation operators $c_1$, $c_1^\dagger$ and $c_2$, $c_2^\dagger$. The two modes $a_1$, $a_1^\dagger$ and $a_2$, $a_2^\dagger$ of the interferometer in the image plane receive the state from the sources with phases $\phi_1$ and $\phi_2$ due to the difference in light path length, which contains information on the position of the sources. We explicitly derive the relation between $\phi_1$, $\phi_2$ and the parameters of the settings (detailed in Appendix B) as
\begin{equation}
\phi_i=kB\frac{X_i}{s_0},\quad i=1,2,
\end{equation}
where $B$ is the length of the baseline, $k$ is the wavevector of the light, and $s_0$ is the longitudinal distance to the source plane.
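The phase relation above is easy to evaluate in numbers. The values below (baseline, wavelength, source distance, and transverse position) are purely illustrative assumptions chosen to give a phase of order $2\pi$, not parameters from the paper.

```python
import numpy as np

wavelength = 500e-9            # m (assumed)
k = 2 * np.pi / wavelength     # wavevector of the light
B = 100.0                      # baseline length, m (assumed)
s0 = 9.5e15                    # longitudinal distance, ~1 light year (assumed)
X = 1.0e8                      # transverse source position, m (assumed)

phi = k * B * X / s0           # phi_i = k B X_i / s0
```

Because $s_0$ is astronomically large, only a long baseline $B$ (rather than a single-telescope aperture) makes $\phi$ vary appreciably with $X$, which is the interferometric advantage discussed below.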
Similar to the derivation in Ref. \cite{lupo2016ultimate}, the states received in the interferometer modes $a_1$ and $a_2$ are an attenuated version of the source modes $c_1$ and $c_2$:
\begin{equation}
c_i\rightarrow \sqrt{\eta}a_1+\sqrt{\eta}e^{i\phi_i}a_2+\sqrt{1-2\eta}\,v_i,\quad i=1,2,
\end{equation}
where $v_i$ are auxiliary environmental modes and $\eta$ is the attenuation ratio. Starting from the thermal states of the source $c_1$ and $c_2$, we derive the states received by the interferometer (detailed in Appendix C) as
\begin{align}\label{eq:state}
&\rho=\frac{1}{(\pi\eta\bar{N})^2}\int_{C^2}d^2\alpha_1d^2\alpha_2\exp \left(-\frac{\abs{\alpha_1}^2+\abs{\alpha_2}^2}{\eta\bar{N}}\right)\nonumber\\
&\times\left[\ket{\alpha_1+\alpha_2}\bra{\alpha_1+\alpha_2}_{a_1}\right.\\
&\otimes\ket{\alpha_1e^{-i\phi_1}+\alpha_2e^{-i\phi_2}}\bra{\alpha_1e^{-i\phi_1}+\alpha_2e^{-i\phi_2}}_{a_2}\left.\right], \nonumber
\end{align}
where $\bar{N}$ represents the strength of each source and $\ket{\alpha_1+\alpha_2}$ and ${\ket{\alpha_1e^{-i\phi_1}+\alpha_2e^{-i\phi_2}}}$ are the coherent states of the two interferometer modes $a_1$ and $a_2$.
We confirm the derived state is still a Gaussian state in Appendix C. Gaussian states are completely characterized by their mean displacement ${\lambda_\mu=\operatorname{Tr}\left[\rho \mathbf{a}_{\mu}\right]}$, where $\mathbf{a}=[a_1,a_1^\dagger,a_2,a_2^\dagger]$, and covariance matrix ${\Sigma_{\mu \nu} =\frac{1}{2} \operatorname{Tr}\left[\rho\left(\tilde{\mathbf{a}}_{\mu} \tilde{\mathbf{a}}_{\nu}+\tilde{\mathbf{a}}_{\nu} \tilde{\mathbf{a}}_{\mu}\right)\right]}$, with ${\tilde{\mathbf{a}}_{\mu}=\mathbf{a}_{\mu}-\lambda_{\mu}}$ \cite{braunstein2005quantum,weedbrook2012gaussian}. The mean displacement $\lambda_\mu$ and covariance matrix $\Sigma$ of $\rho$ are given by
\begin{equation}
\begin{aligned}
&\lambda_\mu=0,\quad \text{for}\,\,\forall \mu, \\
&\Sigma=\left[\begin{matrix}
0 & p & 0 & q\\
p & 0 & q^* & 0\\
0 & q^* & 0 & p\\
q & 0 & p & 0
\label{eq:Sigma}
\end{matrix}\right],
\end{aligned}
\end{equation}
where ${p=2\eta\bar{N}+\frac{1}{2}}$ and ${q=(e^{i\phi_1}+e^{i\phi_2})\eta \bar{N}}$.
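The covariance matrix above can be built directly and its interpretation checked: in the ordering $(a_1, a_1^\dagger, a_2, a_2^\dagger)$, the diagonal-block entry gives $\langle a_i^\dagger a_i\rangle = p - \tfrac{1}{2} = 2\eta\bar{N}$, i.e., each telescope mode collects $\eta\bar{N}$ photons from each of the two sources. The parameter values below are illustrative assumptions.

```python
import numpy as np

eta, Nbar = 0.1, 5.0                 # attenuation and source strength (assumed)
phi1, phi2 = 0.3, 0.7                # phases (assumed)

p = 2 * eta * Nbar + 0.5
q = (np.exp(1j * phi1) + np.exp(1j * phi2)) * eta * Nbar

# Covariance matrix Sigma in the ordering (a1, a1^dag, a2, a2^dag)
Sigma = np.array([[0, p, 0, q],
                  [p, 0, np.conj(q), 0],
                  [0, np.conj(q), 0, p],
                  [q, 0, p, 0]])

n_per_mode = Sigma[0, 1].real - 0.5  # <a1^dag a1>
```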
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.8\columnwidth]{general_setting2.eps}
\caption{Schematic of the setup for estimating the position of two strong thermal point sources $c_1$ and $c_2$ at positions $X_1$ and $X_2$, respectively. The light from the two sources is collected with a two-mode interferometer. States received by the two interferometer modes differ by the phases $\phi_1$ and $\phi_2$ due to the difference in path length. }
\label{general_setting}
\end{center}
\end{figure}
\section*{Fundamental Sensitivity Limit}
We now consider the fundamental limit of resolving two point sources with two telescopes. For two point sources, the resolution is reflected in the sensitivity of measuring the centroid ${\theta_1=\frac{1}{2}(X_1+X_2)}$ and the separation between the two sources ${\theta_2=X_1-X_2}$. The sensitivity of estimating $\theta_1$ and $\theta_2$ is bounded by the quantum Fisher information (QFI) $F$: ${\Sigma_{\vec{\theta}}\geq F^{-1}}$, with its $(\mu,\nu)$ element ${[\Sigma_{\vec{\theta}}]_{\mu \nu}=\mathbb{E}\left[(\theta_\mu-\check{\theta}_\mu)(\theta_\nu-\check{\theta}_\nu)\right]}$, where $\check{\theta}_\mu$ is the unbiased estimator of the $\mu$-th unknown parameter. This sensitivity limit given by the QFI is the quantum Cram\'{e}r-Rao bound (QCRB) \cite{helstrom1976quantum}. The matrix element $F_{ij}$ of the QFI of a Gaussian state has been derived as a closed-form expression in terms of $\lambda_\mu$ and $\Sigma$ in Refs.~\cite{monras2013phase,gao2014bounds}:
\begin{equation}\label{QFI_formula}
F_{i j}=\frac{1}{2} \mathfrak{M}_{\alpha \beta, \mu \nu}^{-1} \partial_{j} \Sigma_{\alpha \beta} \partial_{i} \Sigma_{\mu \nu}+\Sigma_{\mu \nu}^{-1} \partial_{j} \lambda_{\mu} \partial_{i} \lambda_{\nu},
\end{equation}
where ${\mathfrak{M}=\Sigma \otimes \Sigma+\frac{1}{4} \Omega \otimes \Omega}$, with ${\Omega=\bigoplus_{k=1}^{n} i \sigma_{y}}$ where $\sigma_{y}$ is the Pauli $y$ matrix, $\partial_j$ is the derivative over the $j$-th unknown parameter, and repeated indices imply summation.
The quantum Fisher information for the separation $\theta_2$ is then given by
\begin{align}\label{F22}
F_{22}&=-\frac{k^2B^2}{s_0^2}\frac{\eta \bar{N}(1+3\eta\bar{N}+\eta\bar{N}\cos(\phi_1-\phi_2))}{-1-2\eta \bar{N}(2+\eta\bar{N})+2\eta^2\bar{N}^2\cos(\phi_1-\phi_2)}\nonumber\\
&\xlongequal[]{\theta_2\rightarrow0}\frac{k^2B^2}{s_0^2}\eta\bar{N}.
\end{align}
We emphasize that when the separation between the two point sources tends to zero, i.e.,~${\theta_2\rightarrow 0}$, the quantum Fisher information tends to a constant. This implies that there is actually no resolution limit for resolving two strong thermal point sources. Notice the QFI here is proportional to $B^2$, where $B$ is the baseline of an interferometer. Compared with a single lens, where the QFI is proportional to $D^2$ \cite{tsang2016quantum}, where $D$ is the diameter of a single lens, an interferometer has much larger QFI since $B\gg D$.
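The closed form for $F_{22}$ above and its $\theta_2\rightarrow 0$ limit can be checked numerically. Working in units where $kB/s_0 = 1$ (so $\phi_1 - \phi_2$ equals $\theta_2$ in those units), and with illustrative values of $\eta$ and $\bar{N}$:

```python
import numpy as np

def F22(dphi, eta, Nbar):
    """Closed-form QFI for the separation, in units of (kB/s0)^2 = 1."""
    eN = eta * Nbar
    num = eN * (1 + 3 * eN + eN * np.cos(dphi))
    den = -1 - 2 * eN * (2 + eN) + 2 * eN**2 * np.cos(dphi)
    return -num / den

eta, Nbar = 0.1, 2.0                 # assumed values
limit = F22(1e-8, eta, Nbar)         # dphi -> 0 corresponds to theta_2 -> 0
```

As the text states, the limit equals $\eta\bar{N}$ (times $k^2B^2/s_0^2$), a nonzero constant, confirming that the QFI does not vanish at zero separation.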
The quantum Fisher information $F_{11}$ for estimating the centroid, $\theta_1$, also tends to a constant as the separation ${\theta_2\rightarrow 0}$, as detailed in Appendix D; we discuss this result after analyzing the QFI for the separation. We plot the values of $F_{22}$ as a function of the separation $\theta_2$ for different source strengths $\bar{N}$ in Fig.~\ref{QFI_N_}. The QFI shows periodicity over $\theta_2$ with period $2\pi s_0/(kB)$, which is roughly the conventional resolution limit of interferometry. The periodicity is due to the fact that the position information of the sources is encoded in the phase $e^{i\phi_j}$. Adding $2\pi$ to $\phi_j$ does not affect the state described in Eq.~\ref{eq:state} and hence cannot be distinguished by any measurement. The setup considered in our model can only distinguish $\phi_1-\phi_2\propto X_1-X_2$ up to an integer number of $2\pi$. We can solve this problem by having detectors at more than two positions. This is different from direct imaging, where varying the positions of the sources will always affect the received states and hence there is no periodicity in the QFI \cite{lupo2016ultimate,nair2016far}. We observe that for the intermediate values of $\theta_2$ within a period, the QFI decreases with increasing source strength $\bar{N}$; this is in contrast to the limit of a weak thermal source, in which the QFI is a constant versus separation $\theta_2$ \cite{lupo2020quantum}. As pointed out by Ref.~\cite{nair2016far} in the single lens case, this is a net result of multiphoton events. Note that Fig.~\ref{QFI_N_} is a plot of the QFI \textit{per photon}, which decreases for some values of $\theta_2$ as source strength increases, but a stronger source still has a larger total QFI of estimating $\theta_2$ given its larger photon number. We have verified this and found $\partial F_{22}/\partial( \eta \bar{N})$ is always positive for all possible parameters.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1\columnwidth]{QFI_N.eps}
\caption{The quantum Fisher information $F_{22}$ for estimating the separation, in units of $\eta \bar{N} k^2 B^2/s_0^2$, as a function of the separation $\theta_2$, for different source strengths $\bar{N}$.}
\label{QFI_N_}
\end{center}
\end{figure}
We now consider the measurement strategy that actually achieves this limit. The positive operator-valued measure (POVM) that saturates the QCRB is given by the eigenbasis of the symmetric logarithmic derivative (SLD) \cite{braunstein1994statistical,paris2009quantum}. For a Gaussian state, the SLD has been derived in terms of its mean displacement $\lambda_\mu$ and covariance matrix $\Sigma$ \cite{monras2013phase,gao2014bounds}:
\begin{equation}\label{SLD_formula}
\mathcal{L}_{i}=\frac{1}{2} \mathfrak{M}_{\gamma \kappa, \alpha \beta}^{-1}\left(\partial_{i} \Sigma_{\alpha \beta}\right)\left(a_{\gamma} a_{\kappa}-\Sigma_{\gamma \kappa}\right),
\end{equation}
where $a_i$ is the mode operator and we sum over repeated indices.
The SLD for estimating the separation $\theta_2$ is:
\begin{equation}
\mathcal{L}_{\theta_2}=2l_1a^\dagger_1a_1+2l_1a_2^\dagger a_2+2l_2a_1a_2^\dagger+2l_2^*a_1^\dagger a_2+C_{\theta_2},
\end{equation}
where
\begin{align}
&C_{\theta_2}=-\eta\bar{N}[8l_1+2l_2(e^{i\phi_1}+e^{i\phi_2})+2l_2^*(e^{-i\phi_1}+e^{-i\phi_2})],\nonumber\\
&l_1=\frac{kB}{s_0}\frac{(1+4\eta\bar{N})\cot \frac{\phi_1-\phi_2}{2}}{-4[1+2\eta\bar{N}(2+\eta\bar{N})]+8\eta^2\bar{N}^2\cos(\phi_1-\phi_2)},\\
&l_2=-\frac{kB}{s_0}\nonumber\\
&\;\;\times\frac{e^{-\frac{1}{2}i(\phi_1+\phi_2)}(1+3\eta\bar{N}+\eta\bar{N}\cos(\phi_1-\phi_2))\csc\frac{\phi_1-\phi_2}{2}}{4[-1-2\eta\bar{N}(2+\eta\bar{N})+2\eta^2\bar{N}^2\cos(\phi_1-\phi_2)]}.\nonumber
\end{align}
To find the eigenbasis of the SLD, we diagonalize $\mathcal{L}_{\theta_2}$. Defining ${d_1=\frac{1}{\sqrt{2}}(a_1+e^{i\delta}a_2)}$, ${d_2=\frac{1}{\sqrt{2}}(a_1-e^{i\delta}a_2)}$ and dropping the constant terms, we have
\begin{align}
\mathcal{L}_{\theta_2}=&(2l_1+l_2e^{i\delta}+l_2^*e^{-i\delta})d_1^\dagger d_1\nonumber\\
&+(2l_1-l_2e^{i\delta}-l_2^*e^{-i\delta})d_2^\dagger d_2\\
&+(l_2e^{i\delta}-l_2^*e^{-i\delta})d_1^\dagger d_2-(l_2e^{i\delta}-l_2^*e^{-i\delta})d_2^\dagger d_1.\nonumber
\end{align}
We can choose ${l_2e^{i\delta}-l_2^*e^{-i\delta}=0}$ or equivalently ${\delta=\frac{1}{2}(\phi_1+\phi_2)}$, which means the SLD has the Fock basis of $d_1$, $d_2$ as its eigenbasis. Thus, the optimal POVM for estimating $\theta_2$ is $\{\ket{m,n}_d\bra{m,n}_{d}\}_{\{m,n\}}$, with ${d_1^\dagger d_1\ket{m,n}_d=m\ket{m,n}_d}$ and ${d_2^\dagger d_2\ket{m,n}_d=n\ket{m,n}_d}$.
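As a numerical sanity check (our own sketch, not part of the paper's derivation; it uses the expression for $l_2$ above with $kB/s_0$ set to 1 and arbitrary illustrative parameter values), one can verify that the choice ${\delta=\frac{1}{2}(\phi_1+\phi_2)}$ indeed makes $l_2e^{i\delta}-l_2^*e^{-i\delta}$ vanish:

```python
import numpy as np

# Illustrative parameter values (k B / s0 set to 1)
phi1, phi2 = 0.7, 0.2
eta_N = 0.3          # eta * Nbar
kB_s0 = 1.0          # k B / s0

# l2 as given in the text (csc x = 1/sin x)
num = (np.exp(-0.5j * (phi1 + phi2))
       * (1 + 3 * eta_N + eta_N * np.cos(phi1 - phi2))
       / np.sin((phi1 - phi2) / 2))
den = 4 * (-1 - 2 * eta_N * (2 + eta_N)
           + 2 * eta_N**2 * np.cos(phi1 - phi2))
l2 = -kB_s0 * num / den

# With delta = (phi1 + phi2)/2 the phase of l2 cancels, so l2 e^{i delta}
# is real and the off-diagonal coefficient vanishes
delta = 0.5 * (phi1 + phi2)
cross = l2 * np.exp(1j * delta) - np.conj(l2) * np.exp(-1j * delta)
assert abs(cross) < 1e-12
```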
As shown in Fig.~\ref{general_setting}, we can implement the above POVM by combining the states of the two modes of the two telescopes on a beam splitter, adding a fixed phase delay $\delta$, corresponding to the optimal delay found above, to one of the arms, and performing photon-number-resolved detection in both output ports. This setup is the same as that of Ref.~\cite{lupo2020quantum}, except for the photon-number-resolved detection. More specifically, for the weak thermal source discussed in Ref.~\cite{lupo2020quantum} and in Appendix E, the quantum state $\rho$
received by the two telescopes in modes $a_1$ and $a_2$ lives in a Hilbert space spanned by the Fock basis $\{\ket{m,n}\}$ with the constraint $m+n\le 1$ -- no such constraint is present for a strong thermal source. To implement the POVM found above, the state in each temporal mode is measured and projected onto one of the Fock states $\ket{m,n}_d$ of the $d_{1,2}$ modes. Data are accumulated to find the probability $P(m,n)$ of each outcome. The probability distribution is then fit to its theoretical prediction to obtain the unknown centroid or separation. The theoretical prediction for the first few $P(m,n)$ is given in Fig.~\ref{Pmn}. As a function of $\phi_2-\phi_1$, the probability distribution $P(m,n)$ is symmetric with respect to $\phi_2-\phi_1=2\pi$, so for some separations there exists an ambiguity in the estimation -- this is resolved through measurement of the centroid $\theta_1\propto\phi_1+\phi_2$, discussed below. In practice, the detectors may be able to distinguish only the first few Fock states of low photon number. As shown in Fig.~\ref{FI}, even if only Fock states $\ket{m,n}_d$ with ${m\leq M}$, ${n\leq N}$ can be distinguished, the FI still retains a substantial fraction of the QFI, which implies that we can achieve a large part of the sensitivity predicted in the ideal case. In particular, we emphasize that even if we only distinguish the presence or absence of a photon, i.e., $M=N=1$, superresolution is still achieved, as indicated by the finite FI in this case when the separation $\theta_2$ goes to zero.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1\columnwidth]{Pmn2.eps}
\caption{The probability $P(m,n)$ of projecting the state onto $\ket{m,n}_d$ with $\eta \bar{N}=0.1$ as a function of $\phi_2-\phi_1$.}
\label{Pmn}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=1\columnwidth]{click2.eps}
\caption{Fisher information for photon-number detection that distinguishes only the Fock states $\ket{m,n}$ with $m\leq M$, $n\leq N$. Events with greater photon number are ignored. }
\label{FI}
\end{center}
\end{figure}
We estimate the improvement our method provides compared to the conventional imaging method based on the Van Cittert-Zernike theorem, using parameters similar to present-day interferometer arrays, in Appendix \ref{compareconventional}. We consider an observation made at wavelength ${\lambda=5}$~mm with longest baseline ${B=10}$~km. The resolution of the conventional method is then ${\lambda/B=5\times 10^{-7}~\mathrm{rad}\approx 0.1''}$. When the angular separation of the two point sources is ${\theta_2/s_0=0.01''}$ and ${\eta \bar{N}=0.01}$, the Fisher information of our optimal measurement is larger than that of the conventional method by a factor of roughly 30. If we assume the mean square error of estimating the angular separation $\theta_2/s_0$ scales with the number of samples $n$ as ${\Delta (\theta_2/s_0)^2\propto 1/n}$, this implies that our optimal measurement can shorten the observation time by a factor of 30 while achieving the same sensitivity.
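The quoted resolution figure follows from a simple unit conversion; a trivial check:

```python
import math

wavelength = 5e-3          # 5 mm, in meters
baseline = 1e4             # 10 km, in meters

theta = wavelength / baseline                 # conventional resolution, radians
arcsec = theta * (180.0 / math.pi) * 3600.0   # radians -> arcseconds

assert abs(theta - 5e-7) < 1e-18
assert abs(arcsec - 0.1) < 0.005              # ~0.1''
```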
The phase delay $\delta$ depends on the centroid $\theta_1$ of the two point sources, since ${\theta_1\propto\phi_1+\phi_2}$, and thus the scheme requires accurate measurement of the centroid. The derivation for the estimation of the centroid is detailed in Appendix D; there it can be seen that a similar measurement strategy, which also depends on the centroid, is optimal. This circularity can be overcome: in conventional imaging there is no fundamental limitation on the accuracy of estimating the centroid, so other imaging methods can be used, or suboptimal strategies such as a random phase scheme can be constructed to determine the centroid. Nevertheless, misalignment of the centroid must be taken into account. We now show how the superresolution predicted by the QFI is affected by a deviation of $\delta$ from $\frac{1}{2}(\phi_1+\phi_2)$. We write the deviation as $c=\frac{1}{2}(\phi_1+\phi_2)-\delta$. We write the state $\ket{m,n}_d$ in the Fock basis of modes $a_1$, $a_2$:
\begin{equation}
\begin{aligned}
\ket{m,n}_d&=(d_1^\dagger)^m(d_2^\dagger)^n\ket{0}\\
&=2^{-\frac{m+n}{2}}(a_1^\dagger+e^{-i\delta}a_2^\dagger)^m(a_1^\dagger-e^{-i\delta}a_2^\dagger)^n\ket{0}\\
&=2^{-\frac{m+n}{2}}\sum_{j,k}C_m^jC_n^k(-1)^ke^{-i(j+k)\delta}\\
&\quad\quad\quad\quad\quad\quad\times(a_1^\dagger)^{m+n-j-k}(a_2^\dagger)^{j+k}\ket{0},
\end{aligned}
\end{equation}
where $C_m^j=\frac{m!}{j!(m-j)!}$. We then evaluate
\begin{equation}
\begin{aligned}
&f(m,n,\alpha_1,\alpha_2)\\
&= \bra{m,n}_d\left(\ket{\alpha_1+\alpha_2}\otimes\ket{\alpha_1 e^{-i\phi_1}+\alpha_2 e^{-i\phi_2}}\right)\\
&=2^{-\frac{m+n}{2}}\sum_{j,k}C_m^j C_n^k(-1)^ke^{i(j+k)\delta}\\
&\quad\quad\quad\times e^{-\frac{1}{2}|\alpha_1+\alpha_2|^2-\frac{1}{2}|\alpha_1e^{-i\phi_1}+\alpha_2e^{-i\phi_2}|^2}\\
&\quad\quad\quad\times(\alpha_1+\alpha_2)^{m+n-j-k}(\alpha_1 e^{-i\phi_1}+\alpha_2 e^{-i\phi_2})^{j+k}.
\end{aligned}
\end{equation}
The probability of getting outcome $\ket{m,n}_d\bra{m,n}_d$ is given by
\begin{equation}\begin{aligned}
P_d(m,n)&=\frac{1}{(\pi\bar{N})^2}\int_{C^2}d^2\alpha_1d^2\alpha_2\\
&\times\exp \left(-\frac{\abs{\alpha_1}^2+\abs{\alpha_2}^2}{\bar{N}}\right) |f(m,n,\alpha_1,\alpha_2)|^2.
\end{aligned}\end{equation}
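As an illustration of $P_d(m,n)$ above (our own numerical sketch, not from the paper: we sample the thermal Gaussian $P$ function by Monte Carlo and write the Fock-state normalization $1/(m!\,n!)$ explicitly), for a weak source the probabilities truncated at small photon numbers should sum to nearly one:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

Nbar = 0.05                  # mean photon number per source mode
phi1, phi2 = 0.4, 1.1
delta = 0.5 * (phi1 + phi2)  # optimal phase delay
M = 6                        # Fock-space truncation (m, n < M)
nsamp = 20000

# Thermal P function: alpha_1, alpha_2 are complex Gaussians with E|alpha|^2 = Nbar
a1 = rng.normal(0, np.sqrt(Nbar / 2), nsamp) + 1j * rng.normal(0, np.sqrt(Nbar / 2), nsamp)
a2 = rng.normal(0, np.sqrt(Nbar / 2), nsamp) + 1j * rng.normal(0, np.sqrt(Nbar / 2), nsamp)

# Coherent amplitudes in the telescope modes a_1, a_2 and in the d_1, d_2 modes
x = a1 + a2
y = a1 * np.exp(-1j * phi1) + a2 * np.exp(-1j * phi2)
mu1 = (x + np.exp(1j * delta) * y) / np.sqrt(2)
mu2 = (x - np.exp(1j * delta) * y) / np.sqrt(2)

# For fixed alpha's, the photon numbers in d_1, d_2 are Poissonian;
# averaging over the P function gives P_d(m, n)
w = np.exp(-(np.abs(mu1) ** 2 + np.abs(mu2) ** 2))
P = np.array([[np.mean(w * np.abs(mu1) ** (2 * m) * np.abs(mu2) ** (2 * n))
               / (factorial(m) * factorial(n)) for n in range(M)]
              for m in range(M)])

assert (P >= 0).all()
assert abs(P.sum() - 1) < 1e-3  # truncation captures almost all probability
```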
The Fisher information of estimating the separation is calculated as
\begin{equation}\begin{aligned}
FI=\sum_{m,n=0}^\infty \frac{(\partial P_d(m,n)/\partial \theta_2)^2}{P_d(m,n)}.
\end{aligned}\end{equation}
This calculation is intractable both analytically and numerically, so we make the following approximations. First, we keep only the contribution of $\ket{m,n}_d\bra{m,n}_d$ with $m\leq 3$, $n\leq 3$. As pointed out above, keeping only the first few elements of the POVM can still achieve superresolution; i.e., the Fisher information tends to a constant when the separation $\theta_2\rightarrow 0$. Second, we perform the integration over the phase and amplitude of $\alpha_1$, $\alpha_2$ separately and define a cutoff for the amplitude integration:
\begin{equation}\begin{aligned}
P_d(m,n)&\approx \frac{1}{(\pi\bar{N})^2} \int_0^bd|\alpha_1|\int_0^bd|\alpha_2||\alpha_1||\alpha_2|\\
&\times\exp \left(-\frac{\abs{\alpha_1}^2+\abs{\alpha_2}^2}{\bar{N}}\right)g(m,n,|\alpha_1|,|\alpha_2|),
\end{aligned}\end{equation}
\begin{equation}\begin{aligned}
g(m,n,|\alpha_1|,|\alpha_2|)&=\int_0^{2\pi}d\beta_1\int_0^{2\pi}d\beta_2\, \\
&\times |f(m,n,|\alpha_1|e^{i\beta_1},|\alpha_2|e^{i\beta_2})|^2,
\end{aligned}\end{equation}
where $\alpha_1=|\alpha_1|e^{i\beta_1}$, $\alpha_2=|\alpha_2|e^{i\beta_2}$, and $b$ is a finite cutoff for the amplitude integral, introduced for convenience in the numerical calculation. For a fixed value of $\bar{N}$, the FI tends to a constant value as $b$ increases, as shown in Fig.~\ref{misalignment_combine}(a), which validates the cutoff.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=2\columnwidth]{FI_combine.eps}
\caption{(a) Approximate Fisher information as a function of the integration range $[0,b]$. Other parameters are chosen as $\bar{N}=0.01$; $m,n\leq 3$; $c=10^{-6}$; and $\theta_2=10^{-3}$. (b) Approximate Fisher information as a function of separation $\theta_2$ for fixed misalignment $c=10^{-3}$. Other parameters are chosen as $\bar{N}=0.01$ and $m,n\leq 3$. (c) Approximate Fisher information as a function of misalignment $c$ for fixed separation $\theta_2=10^{-3}$. Other parameters are chosen as $\bar{N}=0.01$ and $m,n\leq 3$.}
\label{misalignment_combine}
\end{center}
\end{figure*}
We plot the FI as a function of the separation $\theta_2$ with fixed misalignment $c$ in Fig.~\ref{misalignment_combine}(b). It is clear that, with a nonzero misalignment, the FI vanishes as the separation tends to zero, and superresolution cannot be achieved. We also plot the FI as a function of the misalignment $c$ with fixed separation $\theta_2$ in Fig.~\ref{misalignment_combine}(c). We observe that increasing the misalignment significantly degrades the FI; from the figure, the threshold is roughly $c\approx \theta_2$.
We emphasize that even though the FI is no longer constant as $\theta_2\rightarrow 0$ in the presence of misalignment, it is still possible to benefit from our measurement if the misalignment $c$ is small enough compared to the separation $\theta_2$. For example, when $c=10^{-3}$ and $\theta_2/(s_0/kB)=10^{-1}$, the FI of our measurement approaches $\eta\bar{N}k^2B^2/s_0^2$ [from Fig.~\ref{misalignment_combine}(b)], while the FI of the conventional method is only about $2\times 10^{-3}$ of this value (from Fig.~\ref{misalignment_compare}). Thus, when working below the resolution limit of the conventional method ($\theta_2<s_0/kB$), our method can still significantly outperform the conventional one if the misalignment is not too large. This behavior is similar to what was found for a single lens in the presence of misalignment \cite{tsang2016quantum}: as long as the misalignment is small enough, the FI of estimating the separation is better than that of direct imaging.
\textit{Conclusion}---In summary, we have used quantum estimation theory to determine the fundamental limit of resolving two identical thermal point sources of any strength. The results show that, unlike the conventional imaging method based on the Van Cittert-Zernike theorem, a properly designed measurement scheme can achieve a resolution not limited by the longest baseline. We find that a measurement scheme using a beam splitter and photon-number-resolving detection can achieve the resolution given by the quantum Cram\'{e}r-Rao bound. This work can be extended to several other cases, such as resolving two point sources of unequal strength \cite{vrehavcek2017multiparameter,vrehavcek2018optimal}, estimating separation in three dimensions similarly to Refs.~\cite{yu2018quantum,lupo2020quantum}, and imaging a general extended source similarly to Refs.~\cite{zhou2019modern,tsang2019quantum}. Although we are unable to find an analytical solution for the case of multiple sources and detectors, it is at least possible to numerically calculate the QFI and SLD following a procedure similar to that briefly discussed in Appendix \ref{multiple}. As in single-lens imaging with noisy detectors \cite{lupo2020subwavelength,llen2020resolution}, we expect the signal-to-noise ratio to limit the resolution. We hope our results inspire further discussion along these lines.
\section*{Acknowledgements}
We thank Offir Cohen, Andrew Jordan, Eric Chitambar, Paul Kwiat, John D. Monnier, Shayan Mookherjea, Michael G. Raymer, Brian J. Smith, Robert Czupryniak, John Steinmetz and Jing Yang for helpful discussions.
This work was supported by the multi-university National Science Foundation Grant No.~1936321 -- QII-TAQS: Quantum-Enhanced Telescopy.
\onecolumngrid
\section{Introduction}\label{sec:intro}
In quantum computing, we use the Hilbert space of a quantum system to encode and process information. Interactions with the environment lead to errors, and an important challenge is to protect the encoded information from them. One of the main goals of the theory of quantum error correction (QEC) is to identify the subalgebra of correctable operators associated to an error model, and to construct the recovery map that undoes the errors.\footnote{For completeness, we have included a review of the theory of operator algebra error correction in appendix \ref{app:errorcorrection}. See also \cite{kribs2005unified,beny2007quantum,beny2007generalization}.}
In local many-body quantum systems, to every subregion of space $A$ we associate an algebra of observables $\mathcal{A}_A$ that includes the identity operator.\footnote{In interacting relativistic theories, we associate an algebra to the causal development of every ball.} A manifestation of the principle of locality is that if the region $C$ is inside $A$ we have the inclusion of algebras $\mathcal{A}_C\subseteq \mathcal{A}_A$. On a lattice, the algebra $\mathcal{A}_A$ factors as $\mathcal{A}_A=\mathcal{A}_C\otimes \mathcal{A}_R$ for some $\mathcal{A}_R$ called the relative commutant of $\mathcal{A}_C$ in $\mathcal{A}_A$.\footnote{The relative commutant of $\mathcal{A}_C$ in $\mathcal{A}_A$ is the set of all operators in $\mathcal{A}_A$ that commute with every operator in $\mathcal{A}_C$. The commutant of an algebra $\mathcal{A}$, which we denote by $\mathcal{A}'$, is the set of all operators in the Hilbert space that commute with all operators in $\mathcal{A}$.} The relative commutant is the algebra associated to the region $A\cap C'$. Any such inclusion is trivially an exact quantum error correction code in the following sense: the physical operators are $\mathcal{A}_A$ and the logical operators are encoded in the subalgebra $\mathcal{A}_C$. The errors act on the relative commutant $\mathcal{A}_R$, and by locality, the errors do not disturb the encoded information because $[a,V_r]=0$ for all $a\in \mathcal{A}_C$ and any error $V_r\in \mathcal{A}_R$; see figure \ref{fig0u}.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{fig0u.pdf}
\caption{The local algebra of region $C$ is a subalgebra of the algebra of a larger region. Any error $V_r$ that acts on the relative commutant $\mathcal{A}_R$ does not disturb the encoded information in $\mathcal{A}_C$.}
\label{fig0u}
\end{figure}
Let us apply a unitary rotation in $\mathcal{A}_A$. We obtain a new algebra inclusion $U\mathcal{A}_C U^\dagger\subset U\mathcal{A}_A U^\dagger$ and a new error correction code; however, the unitary can obscure locality. In fact, every algebra inclusion is an exact quantum error correction code and, if finite dimensional, can be trivialized by a choice of unitary on $A$. Intuitively, this means that there is a hidden notion of locality in the inclusion of any subalgebra $\mathcal{A}^C\subset \mathcal{A}$.\footnote{With an abuse of notation, we have denoted a general subalgebra that includes the identity operator as $\mathcal{A}^C$ because, in this work, the upper index $C$ in $\mathcal{A}^C$ will stand for ``correctable subalgebra''.} Consider a finite dimensional matrix algebra with a trivial center (the observable algebra of a qudit). If the subalgebra $\mathcal{A}^C$ also has a trivial center there exists a unitary $U$ in $\mathcal{A}$ such that $U\mathcal{A} U^\dagger =U\mathcal{A}^CU^\dagger\otimes \mathcal{A}^R$ where $\mathcal{A}^R$ is the relative commutant of $U\mathcal{A}^CU^\dagger$ in $U\mathcal{A} U^\dagger$. If $\mathcal{A}^C$ is a subalgebra with a non-trivial center $Z(\mathcal{A}^C)$ then up to the choice of a unitary the algebra $\mathcal{A}$ factors as the direct sum $\oplus_q \mathcal{A}_C^{(q)}\otimes \mathcal{A}_R^{(q)}$ and $\mathcal{A}^C=\oplus_q \mathcal{A}_C^{(q)}\otimes \mathbb{I}_R^{(q)}$. To visualize this structure we use the diagrams in figure \ref{fig3u}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig3u.pdf}
\caption{(a) If $\mathcal{A}^C$ with trivial center is a subalgebra of a finite dimension algebra $\mathcal{A}$, then we have the tensor product factorization $\mathcal{A}=\mathcal{A}^C\otimes \mathcal{A}^R$. (b) If $\mathcal{A}^C$ has a non-trivial center $Z(\mathcal{A}^C)$, we modify the diagram to represent the center as a blue stripe. (c) The center is part of both $\mathcal{A}^C$ and the relative commutant.}
\label{fig3u}
\end{figure}
In this work, we argue that inclusions of algebras that share the identity operator appear naturally in the renormalization group (RG) and holography; however, in these cases the inclusions are not due to any obvious locality principle.
There are two parts to this work.
In the first part, in section \ref{sec:realRG}, we argue that the real-space RG can be modeled as an approximate error correction code that encodes the long-distance operators in the algebra of the short-distance operators. In this picture, the short distance local perturbations are the errors and the long-distance operators (or a subset of them) are the correctable operators. This is closely related to modeling the holographic map as a quantum error correction code \cite{almheiriharlowdong2015bulk,Harlow2017,faulkner2020holographic}.
The connection between the RG and error correction can be seen even in classical systems \cite{Furuya:2021lgx}. The intuition is that exciting a long-range degree of freedom requires acting on a macroscopically large number of short-distance degrees of freedom. The disturbance caused by a local short-distance error cannot alter long-distance modes. Under the RG, local ultraviolet (UV) operators become exponentially weak in the infrared (IR). Deep in the IR, the UV errors are negligible, and in fact, there is no need to actively correct for them. Low-energy states of a gapped system do not have excitations at distances much larger than the correlation length. To make the connection concrete, we focus on real-space RG in systems near critical points, where long-range modes of arbitrary wavelength are excited.
As a concrete model of real-space RG that applies to quantum systems near a critical point, in section \ref{sec:realRG} we consider the multi-scale entanglement renormalization ansatz (MERA) tensor network for lattice models.
MERA has found many applications in the study of quantum field theory (QFT) and gravitational theories in AdS/CFT correspondence \cite{evenbly2011tensor,swingle2012entanglement}. To our knowledge, the connection between MERA and error correction codes was first discussed in \cite{kim2017entanglement}. This connection was extended to continuous MERA (cMERA) in \cite{Furuya:2021lgx}.
The error correction property of MERA is similar to the holographic map modeled as an error correction code, with the difference that in a general RG flow we do not have the complementary recovery property.\footnote{See figure \ref{fig5u} for complementary recovery in holography. Note that, even in holography, complementary recovery is an approximate notion. It is known to fail in situations where the code subspace is large \cite{hayden2019learning,akers2019large}.}
Holography suggests that complementary recovery has to emerge in a special class of theories that have a large number of local degrees of freedom (large $N$) and are strongly interacting (large gap). We discuss the role of large $N$ and large gap in complementary recovery.
Motivated by the connection between the RG and error correction, in the second part of this work in section \ref{sec:QFT}, we study the operator algebra error correction for an arbitrary von Neumann algebra as a mathematical framework for error correction in continuum quantum field theory (QFT). The error map is modeled by a normal unital completely positive (CP) map $\Phi:\mathcal{A}\to \mathcal{B}$; see figure \ref{fig4u}. When the whole algebra $\mathcal{B}$ is correctable and the error map has no kernel the recovery map is unique and given by the Petz dual of the error map. It isometrically embeds $\mathcal{B}$ in $\mathcal{A}$.
More generally, we consider the setup where only a subalgebra $\mathcal{B}^C$ of the logical operators $\mathcal{B}$ is correctable.\footnote{For instance, in holography, this situation arises when the reconstructable wedge is smaller than the entanglement wedge.} Then, the recovery map restricted to the correctable operators is still the Petz dual of the error map. Any unital CP map that projects $\mathcal{B}$ down to $\mathcal{B}^C$ (i.e. any conditional expectation $\mathcal{E}_B:\mathcal{B}\to\mathcal{B}^C$) can be used to redefine the error such that its full image is correctable. Such conditional expectations exist if the inclusion $\mathcal{B}^C\subset \mathcal{B}$ has finite index \cite{longo1995nets}.
\begin{figure}[t]
\centering
\includegraphics[width=0.25\linewidth]{fig4u.pdf}
\caption{We encode the algebra $\mathcal{B}$ in the physical algebra $\mathcal{A}$. If the correctable subalgebra $\mathcal{B}^C\subset \mathcal{B}$ is strictly smaller than $\mathcal{B}$, we use a conditional expectation $\mathcal{E}_B$ to project $\mathcal{B}$ down to $\mathcal{B}^C$. Absorbing $\mathcal{E}_B$ into the error map $\Phi$, we are back to the case where the whole algebra is correctable.}
\label{fig4u}
\end{figure}
For completeness, in the appendices we have included a self-contained review of the mathematical and information-theoretic background needed for the second part of this work. In appendix \ref{sec:CPmaps&duals}, we review information-theoretic concepts such as completely positive (CP) maps and their duals. Appendix \ref{sec:GNS} discusses the GNS Hilbert space, which has two advantages: 1) Linear maps on the algebra (superoperators) correspond to linear operators on the GNS Hilbert space, which simplifies the study of error correction. 2) The GNS Hilbert space can be constructed for any quantum system (von Neumann algebra), including the local algebras of quantum field theory (QFT) that we are ultimately interested in.
We show that requiring the dual of a CP map to remain CP leads to two natural notions of dual maps: 1) the dual map of Accardi and Cecchini, which we call the $\rho$-dual map, and 2) the Petz dual map.
Both of these maps play an important role in error correction.
The Petz dual map can be understood as the dual with respect to an alternate inner product that has already found several applications in QFT in the discussion of Rindler positivity \cite{casini2011wedge,hartman2017averaged}. While our discussion applies to any quantum system, to help readers less familiar with von Neumann algebras we mostly use the more familiar notation of finite quantum systems.
In appendix \ref{app:errorcorrection}, we review the Heisenberg picture of quantum error correction.
We say a subalgebra $\mathcal{B}^C$ is correctable if there exists a recovery map $\mathcal{R}:\mathcal{B}^C\to \mathcal{A}$ such that $\Phi(\mathcal{R}(c))=c$ for all $c\in \mathcal{B}^C$. We call the constraint $\Phi\circ \mathcal{R}=\text{id}$ the error correction equation. The recovery map is non-unique because any $\mathcal{R}+\mathcal{X}$ satisfies the error correction equation as long as $\Phi(\mathcal{X}(c))=0$; in other words, the recovery is non-unique when the kernel of the error map is non-trivial. Another source of non-uniqueness comes from the fact that the error correction equation defines the recovery map only from $\mathcal{B}^C$ to $\mathcal{A}$: any extension of the domain of $\mathcal{R}$ from $\mathcal{B}^C$ to $\mathcal{B}$ can also be called a recovery map. We denote the range of the recovery map by $\mathcal{A}^C\equiv \mathcal{R}(\mathcal{B}^C)$; it is a subalgebra of the physical operators. The recovery map is an isometric embedding of the correctable algebra in $\mathcal{A}$.
Conditional expectations are unital CP maps that project an algebra to a subalgebra that includes the identity. In finite dimension, there is a one-to-one correspondence between conditional expectations $\mathcal{E}_{\sigma}$ and unnormalized states $\sigma=\oplus_q \mathbb{I}^q_1\otimes \sigma_2^q$ on the relative commutant of $\mathcal{A}^C$ in $\mathcal{A}$.\footnote{For examples and a more detailed discussion of conditional expectations see appendix \ref{sec:examplesoofCPmaps}.} All the density matrices that are preserved under a conditional expectation $\mathcal{E}_\sigma$ take the separable form $\rho=\oplus_q p_q \rho_1^q\otimes \sigma_2^q$. In exact error correction, $\mathcal{R}\circ \Phi$ is a conditional expectation and its invariant states are the correctable states.
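As a finite-dimensional illustration (our own sketch for a single factorized sector $\mathcal{A}=\mathcal{A}_1\otimes\mathcal{A}_2$), the conditional expectation associated with a state $\sigma_2$ on the relative commutant acts as $\mathcal{E}_\sigma(a\otimes b)=\operatorname{tr}(\sigma_2 b)\,a\otimes\mathbb{I}$; it is unital, idempotent, and the density matrices it preserves are those of the form $\rho_1\otimes\sigma_2$:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 2, 3

def random_density(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho)

sigma2 = random_density(d2)

def cond_exp(X):
    """E_sigma(X) = tr_2[(I x sigma2) X] x I, projecting A1 x A2 onto A1 x I."""
    X4 = X.reshape(d1, d2, d1, d2)
    # weighted partial trace over factor 2: sum_{jk} sigma2[k,j] X4[i,j,l,k]
    X1 = np.einsum('kj,ijlk->il', sigma2, X4)
    return np.kron(X1, np.eye(d2))

I = np.eye(d1 * d2)
X = rng.normal(size=(d1 * d2, d1 * d2)) + 1j * rng.normal(size=(d1 * d2, d1 * d2))

assert np.allclose(cond_exp(I), I)                        # unital
assert np.allclose(cond_exp(cond_exp(X)), cond_exp(X))    # projection onto A1 x I

# A state rho1 x sigma2 is invariant: tr(rho E(X)) = tr(rho X) for all X
rho = np.kron(random_density(d1), sigma2)
assert np.isclose(np.trace(rho @ cond_exp(X)), np.trace(rho @ X))
```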
The von Neumann entropy of a correctable state splits into two terms
\begin{eqnarray}\label{entropysplit}
S(\oplus_q p_q \rho_1^q\otimes \sigma_2^q)=H(p)+\sum_q p_q (S(\rho^q_1)+S(\sigma_2^q))=S(\rho_1)+\sum_q p_q S(\sigma_2^q)\ .
\end{eqnarray}
Note that the second term is a property of the correctable subalgebra and not the correctable state.
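The entropy splitting in (\ref{entropysplit}) is easy to verify numerically; a minimal sketch (our own) with two superselection sectors built from random $2\times 2$ factors:

```python
import numpy as np

rng = np.random.default_rng(2)

def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def random_density(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho)

p = np.array([0.3, 0.7])
rho1 = [random_density(2) for _ in p]   # rho_1^q
sig2 = [random_density(2) for _ in p]   # sigma_2^q

# Block-diagonal correctable state: rho = (+)_q p_q rho_1^q x sigma_2^q
rho = np.zeros((8, 8), dtype=complex)
rho[:4, :4] = p[0] * np.kron(rho1[0], sig2[0])
rho[4:, 4:] = p[1] * np.kron(rho1[1], sig2[1])

Hp = float(-(p * np.log(p)).sum())
lhs = vn_entropy(rho)
rhs = Hp + sum(p[q] * (vn_entropy(rho1[q]) + vn_entropy(sig2[q])) for q in range(2))
assert abs(lhs - rhs) < 1e-8
```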
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{fig5u.pdf}
\caption{The subsystem error correction code in holography satisfies complementary recovery.}
\label{fig5u}
\end{figure}
In holography, the boundary algebra is our physical algebra and the bulk is the code algebra. An isometry $W$ encodes the bulk Hilbert space on the boundary. In the Heisenberg picture, the map $\alpha(a)=W^\dagger a W$ sends the boundary operators to the bulk, respecting the complementary recovery property: the boundary operators supported on region $A$ go to bulk operators localized in $B$, and the operators supported on the complementary region $A'$ go to those in $B'$; see figure \ref{fig5u}.
Bulk operators localized in region $B$ are protected against the erasure of $A'$.
The error map is $\Phi=\alpha\circ \text{tr}_{A'}$ and its Petz dual is the recovery map $\mathcal{R}:\mathcal{B}\to \mathcal{A}$. The complementary recovery implies that the composite map $\mathcal{R}\circ \Phi$ is a conditional expectation.\footnote{A similar observation was made in \cite{faulkner2020holographic}.} In holography, the second term on the right-hand-side of (\ref{entropysplit}) is argued to be similar to the contribution of the area operator to the holographic entanglement \cite{Harlow2017}.
\section{Real-space RG as an error correction code}\label{sec:realRG}
\subsection{Conventional theory of QEC}
We start this section with a quick review of the conventional approach to quantum error correction.\footnote{For completeness, in appendix \ref{app:errorcorrection} we have included a formal derivation of these results using operator algebra quantum error correction, which we generalize to arbitrary von Neumann algebras in section \ref{sec:QFT}.}
In the Schr\"{o}dinger picture of error correction, consider an encoding isometry $W:\mathcal{K}_B\to \mathcal{H}_A$ from the code Hilbert space $\mathcal{K}_B$ to the physical Hilbert space $\mathcal{H}_A$ and a decoding co-isometry $W^\dagger$. The projection operator $P_C=WW^\dagger$ projects onto a subspace of $\mathcal{H}_A$ called the code subspace, because it is isomorphic to $\mathcal{K}_B$. Throughout this work, we use the following notation: we denote an irreducible representation of an algebra $\mathcal{B}$ by $\mathcal{K}_B$, and a reducible representation (such as the GNS representation) of $\mathcal{B}$ by $\mathcal{H}_B$. In finite dimensional matrix algebras, we have $\mathcal{H}_B=\mathcal{K}_B\otimes \mathcal{K}_{B'}$.
A collection of error operators $V_r$ corrupt the physical states and a collection of recovery operators $R_r$ correct the errors; see figure \ref{fig6u}. In the simple case where the errors $V_r$ are unitary operators we can undo the error using the correction operators $R_r=V_r^\dagger$. Even when the error is not unitary the correction operator is still made out of the conjugate of the error; see appendix \ref{app:errorcorrection}.
For general errors $V_r$, the necessary and sufficient condition for recovery to be possible is the {\it Knill-Laflamme condition}\footnote{The physical intuition behind the Knill-Laflamme condition can be seen by defining a set of basis states $\{ | C_i \rangle \}$ in the code subspace $P_C \mathcal{H}_A$. Then,
\begin{equation}
P_C V_r^{\dagger} V_s P_C = \sum_{ij} | C_i \rangle \langle C_i | V_r^{\dagger} V_s |C_j \rangle \langle C_j | = \sum_{ij} \langle C_i | V_r^{\dagger} V_s | C_j \rangle | C_i \rangle \langle C_j | .
\end{equation}
We satisfy the Knill-Laflamme condition if $\langle C_i | V_r^{\dagger} V_s | C_j \rangle = \lambda_{rs} \delta_{ij}$. This condition implies that two orthogonal code vectors $\ket{C_i}$ and $\ket{C_j}$ remain orthogonal after the action of the error operators, ensuring that distinguishable states remain distinguishable despite the errors. } \cite{knill-laflamme1997qec}
\begin{eqnarray}\label{schrodingererror}
P_CV_r^\dagger V_sP_C\propto P_C\ .
\end{eqnarray}
When this condition is satisfied, the recovery operators are $R_r\propto P_CV_r^\dagger$.
For example, consider the 3-qutrit code where the code Hilbert space $\mathcal{K}_B$ is a single qutrit spanned by $\ket{i}$ with $i=0,1,2$ that is mapped by an isometry $W$ to the subspace $\ket{\bar{i}}=W\ket{i}$:
\begin{eqnarray}
&&\ket{\bar{0}}=\frac{1}{\sqrt{3}}(\ket{000}+\ket{111}+\ket{222})\nonumber\\
&&\ket{\bar{1}}=\frac{1}{\sqrt{3}}(\ket{012}+\ket{120}+\ket{201})\nonumber\\
&&\ket{\bar{2}}=\frac{1}{\sqrt{3}}(\ket{021}+\ket{102}+\ket{210})\ .
\end{eqnarray}
An error $V_3$ that occurs on the third qutrit can be corrected using $R_3\propto P_C V_3^\dagger$ because
\begin{eqnarray}
W^\dagger R_3 V_3W\ket{i}\propto \ket{i}
\end{eqnarray}
where we have used (\ref{schrodingererror}).
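The 3-qutrit code above can be checked numerically. The following numpy sketch (an illustration we add here, not part of the formal development) builds the encoding isometry $W$, verifies the condition (\ref{schrodingererror}) for single-qutrit operators acting on the third qutrit, and corrects a random unitary error with $R_3\propto P_C V_3^\dagger$:

```python
import numpy as np

d = 3  # qutrit dimension

def ket(*digits):
    """Computational basis vector |q1 q2 q3> in C^27 (first factor most significant)."""
    v = np.zeros(d ** len(digits))
    idx = 0
    for q in digits:
        idx = idx * d + q
    v[idx] = 1.0
    return v

# Encoding isometry W: C^3 -> C^27; columns are the codewords |0bar>, |1bar>, |2bar>.
W = np.stack([
    (ket(0, 0, 0) + ket(1, 1, 1) + ket(2, 2, 2)) / np.sqrt(3),
    (ket(0, 1, 2) + ket(1, 2, 0) + ket(2, 0, 1)) / np.sqrt(3),
    (ket(0, 2, 1) + ket(1, 0, 2) + ket(2, 1, 0)) / np.sqrt(3),
], axis=1)
P_C = W @ W.conj().T  # projector onto the code subspace

# Knill-Laflamme check: for single-qutrit errors on the third qutrit,
# W^dag a' W must be proportional to the 3x3 identity.
X = np.roll(np.eye(d), 1, axis=0)                    # qutrit shift
Z = np.diag(np.exp(2j * np.pi * np.arange(d) / d))   # qutrit clock
for err in (X, Z, X @ Z):
    M = W.conj().T @ np.kron(np.eye(d**2), err) @ W
    assert np.allclose(M, M[0, 0] * np.eye(d))

# Correct a random unitary error V3 on the third qutrit with R3 = P_C V3^dag.
rng = np.random.default_rng(0)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V3 = np.kron(np.eye(d**2), np.linalg.qr(G)[0])
R3 = P_C @ V3.conj().T
print(np.allclose(W.conj().T @ R3 @ V3 @ W, np.eye(d)))  # True
```

For a unitary error the check reduces to $W^\dagger P_C V_3^\dagger V_3 W=W^\dagger WW^\dagger W=\mathbb{I}$, which is what the final line confirms.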
It is convenient to absorb the encoding isometry $W$ in the definition of the errors and the decoding co-isometry $W^\dagger$ in the definition of the recovery operators
\begin{eqnarray}
&&\tilde{V}_r=V_rW,\qquad \tilde{R}_r= W^\dagger R_r\ .
\end{eqnarray}
See figure \ref{fig6u} (a) and (b). There exists a unitary $U$ and a factorization of the Hilbert space $\mathcal{H}_A=\mathcal{K}_A\otimes \mathcal{K}_{A'}$ such that
\begin{eqnarray}\label{unitcode}
U\ket{\bar{i}}=\ket{i}_A\ket{\chi}_{A'}
\end{eqnarray}
for some state $\ket{\chi}_{A'}$. The unitary trivializes the encoding such that the information is encoded in $A$ and the errors act on $A'$. The error correction is guaranteed by the locality property $[a,V_r]=0$ for all $a$ acting on $A$ and error $V_r$ acting on $A'$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig6u.pdf}
\caption{(a) Error correction in the Schr\"{o}dinger picture. The isometry $W$ is the encoding and $W^\dagger$ is the decoding. The errors are $V_r$ and the correction operators are $R_r$. (b) We can absorb the $W$ and $W^\dagger$ in the definition of the errors and the correction operators. (c) Error correction in the Heisenberg picture. The order of operations is reversed. Both the error map $\Phi$ and the recovery map $\mathcal{R}$ are unital completely positive maps. (d) The encoding $\iota$ and decoding $\alpha$ can be absorbed in the definition of the error and the recovery maps.}
\label{fig6u}
\end{figure}
In the Heisenberg picture of error correction, we have the algebra of code operators $\mathcal{B}$ and that of the physical operators $\mathcal{A}$. An error correction code is a collection of four CP maps $(\iota,\mathcal{R},\Phi,\alpha)$, where $\iota:\mathcal{B}\to \mathcal{A}$ is an isometric embedding of $\mathcal{B}$ in $\mathcal{A}$ and $\alpha:\mathcal{A}\to \mathcal{B}$ undoes it. The recovery map is $\mathcal{R}:\mathcal{A}\to \mathcal{A}$ and the error map $\Phi:\mathcal{A}\to \mathcal{A}$ is unital. These maps have the Kraus representation
\begin{eqnarray}\label{Krausforms}
&&\alpha(a)=W^\dagger a W,\qquad \iota(b)=WbW^\dagger\nonumber\\
&&\Phi(a)=\sum_r V_r^\dagger a V_r,\qquad \mathcal{R}(a)=\sum_r R_r^\dagger a R_r\ .
\end{eqnarray}
We have an error correction if for all the code operators $b\in \mathcal{B}$ we have
\begin{eqnarray}
\alpha\circ\Phi\circ\mathcal{R}\circ\iota(b)=b\ .
\end{eqnarray}
See figure \ref{fig6u} (c).
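For the 3-qutrit code, the Heisenberg-picture composition can be checked directly. The sketch below (our illustration, using a single Kraus operator for each map rather than the general sums above) verifies $\alpha\circ\Phi\circ\mathcal{R}\circ\iota(b)=b$ for a random unitary error on the third qutrit:

```python
import numpy as np

# 3-qutrit code isometry W: C^3 -> C^27 (columns are the codewords).
codewords = [[(0, 0, 0), (1, 1, 1), (2, 2, 2)],
             [(0, 1, 2), (1, 2, 0), (2, 0, 1)],
             [(0, 2, 1), (1, 0, 2), (2, 1, 0)]]
W = np.zeros((27, 3))
for j, terms in enumerate(codewords):
    for (q1, q2, q3) in terms:
        W[9*q1 + 3*q2 + q3, j] = 1 / np.sqrt(3)
P_C = W @ W.conj().T

# A single unitary error V3 on the third qutrit and its correction R3.
rng = np.random.default_rng(1)
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
V3 = np.kron(np.eye(9), np.linalg.qr(G)[0])
R3 = P_C @ V3.conj().T

# Heisenberg-picture maps, cf. the Kraus forms above (one Kraus operator each).
iota  = lambda b: W @ b @ W.conj().T        # encoding:  B -> A
R     = lambda a: R3.conj().T @ a @ R3      # recovery superoperator
Phi   = lambda a: V3.conj().T @ a @ V3      # error superoperator
alpha = lambda a: W.conj().T @ a @ W        # decoding:  A -> B

b = rng.normal(size=(3, 3))                 # an arbitrary code operator
print(np.allclose(alpha(Phi(R(iota(b)))), b))  # True
```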
The error correction condition above implies the Knill-Laflamme condition in (\ref{schrodingererror}) as a special case, but it is more general.
To simplify the notation, it is often convenient to absorb $\iota$ in the definition of the recovery map and $\alpha$ in the definition of the error map. In this way, an error correction code is a doublet $(\mathcal{R},\Phi)$ where $\Phi:\mathcal{A}\to \mathcal{B}$ is the error and $\mathcal{R}:\mathcal{B}\to \mathcal{A}$ is the recovery map; see figure \ref{fig6u} (d):
\begin{eqnarray}
&&\Phi(a)=\sum_r \tilde{V}_r^\dagger a \tilde{V}_r,\nonumber\\
&&\mathcal{R}(b)=\sum_r \tilde{R}_r^\dagger b \tilde{R}_r .
\end{eqnarray}
The map $\mathcal{R}\circ \Phi:\mathcal{A}\to \mathcal{A}^C$ projects the physical operators to the subalgebra of correctable operators $\mathcal{A}^C$.
These operators are invariant under the action of $\mathcal{R}\circ\Phi$.
A special error channel relevant to the RG flow and holography is erasure.
In finite dimensional matrix algebras, the erasure is the error map that acts as\footnote{In the Schr\"{o}dinger picture, the erasure channel acts on the density matrices according to $\mathcal{E}_{\sigma'}^*(\rho_{AA'})=\rho_A\otimes \sigma'$.}\footnote{This is the simplest example of a conditional expectation that preserves the states $\rho\otimes \sigma'$.}
\begin{eqnarray}
\mathcal{E}_{\sigma'}(a\otimes a')=(a\otimes \mathbb{I}')\text{tr}(\sigma' a')\ .
\end{eqnarray}
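In finite dimensions the erasure map is easy to implement, and it exhibits the conditional-expectation property quoted in the footnote. A small numpy sketch (our illustration, with arbitrarily chosen dimensions $d_A=2$, $d_{A'}=3$):

```python
import numpy as np

dA, dAp = 2, 3  # dimensions of A and of the erased factor A'
rng = np.random.default_rng(2)

# A faithful state sigma' on A'.
G = rng.normal(size=(dAp, dAp)) + 1j * rng.normal(size=(dAp, dAp))
sigmap = G @ G.conj().T
sigmap /= np.trace(sigmap)

def erasure(X):
    """E_{sigma'}(a (x) a') = (a (x) I') tr(sigma' a'), extended by linearity."""
    X4 = X.reshape(dA, dAp, dA, dAp)
    a = np.einsum('pq,iqjp->ij', sigmap, X4)  # contract A' indices against sigma'
    return np.kron(a, np.eye(dAp))

I = np.eye(dA * dAp)
X = rng.normal(size=I.shape) + 1j * rng.normal(size=I.shape)
a = rng.normal(size=(dA, dA))
ap = rng.normal(size=(dAp, dAp))

assert np.allclose(erasure(I), I)                     # unital
assert np.allclose(erasure(erasure(X)), erasure(X))   # conditional expectation
assert np.allclose(erasure(np.kron(a, ap)),
                   np.kron(a, np.eye(dAp)) * np.trace(sigmap @ ap))
```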
Any operator $a'\in \mathcal{A}'$ is an error and the necessary and sufficient condition for recovery similar to (\ref{schrodingererror}) is
\begin{eqnarray}\label{erasureerror}
\forall a'\in \mathcal{A}':\qquad P_C a'P_C\propto P_C\ .
\end{eqnarray}
This is equivalent to the statement that for any operator $b$ there exists an operator $\mathcal{R}(b)$ acting on subsystem $A$ such that\footnote{We prove this for an error correction code in a general von Neumann algebra in section \ref{sec:QFT}. For a proof in finite dimensional matrix algebras see, for instance, theorem 3.1 of \cite{Harlow2017}.}
\begin{eqnarray}\label{mathcalRi}
\mathcal{R}(b)W\ket{i}=W b \ket{i},\qquad \mathcal{R}(b^\dagger)W\ket{i}=W b^\dagger \ket{i}\ .
\end{eqnarray}
Since $P_C[\mathcal{R}(b),a']P_C=0$ any error $V'_r$ supported on $A'$ satisfies
\begin{eqnarray}
\mathcal{R}(b)V'_r W=V'_r W b\ .
\end{eqnarray}
Defining the errors $\tilde{V}'_r=V'_rW$ we have
\begin{eqnarray}
\Phi(\mathcal{R}(b))=\sum_r (\tilde{V}_r')^\dagger \mathcal{R}(b) \tilde{V}'_r=b
\end{eqnarray}
which is the error correction condition in the Heisenberg picture.
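For the 3-qutrit code with erasure of the third qutrit, the operator $\mathcal{R}(b)$ in (\ref{mathcalRi}) can be found concretely by solving the linear equation $\mathcal{R}(b)W=Wb$ with $\mathcal{R}(b)$ supported on the first two qutrits. A numpy sketch (our illustration; the logical shift operator is chosen as an example of $b$):

```python
import numpy as np

# 3-qutrit code: erase qutrit 3, reconstruct a logical operator on qutrits 1-2.
codewords = [[(0, 0, 0), (1, 1, 1), (2, 2, 2)],
             [(0, 1, 2), (1, 2, 0), (2, 0, 1)],
             [(0, 2, 1), (1, 0, 2), (2, 1, 0)]]
W = np.zeros((27, 3))
for j, terms in enumerate(codewords):
    for (q1, q2, q3) in terms:
        W[9*q1 + 3*q2 + q3, j] = 1 / np.sqrt(3)

b = np.roll(np.eye(3), 1, axis=0)  # logical shift operator on the code qutrit

# Solve (M (x) I_3) W = W b for a 9x9 matrix M acting on qutrits 1-2 only.
# Rows of A index qutrits 1-2; columns combine the (qutrit 3, logical) indices.
A = W.reshape(9, 3, 3).reshape(9, 9)
B = (W @ b).reshape(9, 3, 3).reshape(9, 9)
M = B @ np.linalg.pinv(A)
Rb = np.kron(M, np.eye(3))

assert np.allclose(Rb @ W, W @ b)  # R(b) W = W b with R(b) supported on A
# Any error V' on qutrit 3 then satisfies R(b) V' W = V' W b.
rng = np.random.default_rng(3)
Vp = np.kron(np.eye(9), rng.normal(size=(3, 3)))
assert np.allclose(Rb @ Vp @ W, Vp @ W @ b)
```

Since $[\mathcal{R}(b)\,,\,\mathbb{I}\otimes a']=0$ by construction, the second assertion follows from the first, matching the argument above.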
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig7u.pdf}
\caption{The subsystem error correction in (a) the Schr\"{o}dinger picture (b) the Heisenberg picture.}
\label{fig7u}
\end{figure}
To see how the Heisenberg-picture error correction goes beyond equation (\ref{schrodingererror}), we consider subsystem error correction. This is the setup where both the physical Hilbert space and the code Hilbert space admit tensor product forms, respectively $\mathcal{H}_A=\mathcal{K}_A\otimes \mathcal{K}_{A'}$ and $\mathcal{H}_B=\mathcal{K}_B\otimes \mathcal{K}_{B'}$. The goal is to encode the operators $b$ supported on $B$ in the physical Hilbert space such that they are protected against the erasure of $A'$. In this case, the necessary and sufficient condition generalizes the Knill-Laflamme condition in (\ref{schrodingererror}) to
\begin{eqnarray}\label{subsystemcond}
W^\dagger a'W\in \mathcal{B}'\ .
\end{eqnarray}
This is to be compared with the condition in (\ref{erasureerror}) that can be written as
\begin{eqnarray}
W^\dagger a' W=\lambda \mathbb{I}\ .
\end{eqnarray}
It is a standard result in quantum error correction that (\ref{subsystemcond}) is equivalent to the existence of a map $\mathcal{R}:\mathcal{B}\to \mathcal{A}$ such that
\begin{eqnarray}
\mathcal{R}(b)W=W b\ .
\end{eqnarray}
We provide a proof of this for any von Neumann algebra in section \ref{sec:QFT}.
Since $P_C[\mathcal{R}(b),a']P_C=0$ for any error $\tilde{V}'_r=V'_r W$ we have
\begin{eqnarray}\label{erroreq}
\mathcal{R}(b)\tilde{V}'_r=\tilde{V}'_r b
\end{eqnarray}
or equivalently $\Phi(\mathcal{R}(b))=b$; see figure \ref{fig7u}.
\subsection{Entanglement renormalization}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig8u.pdf}
\caption{A layer of MERA is an isometry $W_s:\mathcal{H}_{s+1}\to \mathcal{H}_s$ that is comprised of two layers: the coarse-graining isometries $V$ and the local disentangling unitaries $U$.}
\label{fig8u}
\end{figure}
As an explicit example of the connection between the real-space renormalization and the quantum error correction codes we consider a MERA tensor network. A MERA is a sequence of increasingly coarse-grained lattices $\{\mathcal{L}_0,\mathcal{L}_1,\cdots, \mathcal{L}_n\}$ and their corresponding Hilbert spaces $\{\mathcal{H}_0,\mathcal{H}_1,\cdots , \mathcal{H}_n\}$. The Hilbert space $\mathcal{H}_s$ describes the states of the theory at length scale $l_s$ and $l_0<l_1<\cdots <l_n$. The states of $\mathcal{H}_0$ are deep in the UV, and the states of $\mathcal{H}_n$ are in the IR. At each site of every lattice $\mathcal{L}_s$ we have a local Hilbert space that we take to be a qudit for simplicity. A sequence of isometries $W_s:\mathcal{H}_{s+1}\to \mathcal{H}_{s}$ embeds $\mathcal{H}_{s+1}$ into the Hilbert space of less coarse-grained states $\mathcal{H}_{s}$. In the standard MERA, each such isometry is comprised of a layer of local coarse-graining isometries $V$ followed by a layer of disentangling unitaries $U$; see figure \ref{fig8u}.
The hierarchical structure of correlations in MERA allows for states with long-range correlations. The isometries $W_s$ can be understood as maps that prepare the states $W_1W_2\cdots W_n\ket{\Psi_n}$ with long-range correlations. Below, we summarize the argument presented in \cite{kim2017entanglement} for the error correction properties of MERA.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{fig9u.pdf}
\caption{(a) One step of RG for a $9$-local operator $a_s$ turns it into a $6$-local operator acting on $\mathcal{H}_{s+1}$. (b) The support of operators supported on a few sites fluctuates but remains almost constant. (c) The support of $k$-local operators with $k\gg 1$ shrinks under the RG. For instance, the support of a $k$-local operator shrinks to at most $\floor{k/2}+2$ sites. In general, the expectation is that the support of operators shrinks by the coarse-graining factor, except for some boundary effects that become important when the operator has support on an $O(1)$ number of sites.}
\label{fig9u}
\end{figure}
In the Heisenberg picture, MERA is a renormalization map for the operators: $\mathcal{A}_s\to \mathcal{A}_{s+1}$ where $\mathcal{A}_s$ is the algebra of observables of the Hilbert space $\mathcal{H}_s$;
see figure \ref{fig9u}:
\begin{eqnarray}
\alpha(a_s)=W_s^\dagger a_s W_s\ .
\end{eqnarray}
The most important property of MERA for us is that it shrinks the support of local operators in the following sense: if $a_s$ is supported on $k$ adjacent sites with $k\gg 1$ on $\mathcal{L}_s$ and the isometries cut down the number of sites by a factor $\gamma>1$ then the operator $\alpha(a_s)$ is supported on approximately $k/\gamma$ sites of $\mathcal{L}_{s+1}$ \cite{swingle2012entanglement}; see figure \ref{fig9u}.
This is not exactly true because of boundary effects. For instance, for the MERA in figure \ref{fig9u}, for any $k$-local operator $a$ the support of $\alpha(a)$ is at most $\floor{k/2}+2$ sites.
For $k=O(1)$ the support of the operator remains almost the same.\footnote{It can fluctuate up and down but it can never grow much.}
In higher dimensions, the number of sites in a region scales like the volume of the region, while the number of sites at the boundary scales like its area. It is therefore natural to expect that the volume term in the support of $a$ shrinks by $\gamma$, up to potential area corrections.
A UV operator $a_0$ supported on a region $A_0$ is mapped under the RG flow to an operator $a_s$ whose support we define to be $A_s$. After $s$ layers of RG the linear size of $A_s$ is of order $\gamma^{-s}|A_0|$. When $s$ becomes comparable to $\log|A_0|$ the support of the operator reaches a few sites. At this scale, the second stage of the RG flow starts. As we flow further into the IR, the operator remains local on a few sites; however, its norm falls exponentially fast. This is because, in the Heisenberg picture, the RG flow map is a quantum channel and hence a contraction: its eigenvalues have norm smaller than one; see appendix \ref{sec:fixed}. The operators that are invariant under the RG flow survive deep in the IR, forming a subalgebra of exactly correctable operators. These are the eigenoperators with eigenvalue one. All the other operators decay exponentially fast, with the exponent set by $h_{min}=-\log|\lambda|$ where $\lambda$ is the largest eigenvalue of the RG channel with norm less than one \cite{kim2017entanglement}.\footnote{In principle, there can be eigenoperators whose eigenvalues are a phase $e^{i\theta}$. If such operators exist, under the RG flow they will show recurrences. We expect a generic RG flow to not have such recurrences.}
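A minimal toy model of the second stage is a unital qubit channel applied once per RG step. The sketch below (our illustration, with a depolarizing-type channel standing in for the RG map) exhibits the spectrum of the channel: the identity is the unique eigenoperator with eigenvalue one, and every traceless operator decays as $e^{-sh}$ with $h=-\log\lambda$:

```python
import numpy as np

# One RG step as a unital quantum channel on a single qubit (Heisenberg picture):
# Phi(a) = lam * a + (1 - lam) * tr(a)/2 * I.  The identity is the unique
# eigen-operator with eigenvalue 1; all traceless operators decay as lam^s.
lam = 0.6
I2 = np.eye(2)

def Phi(a):
    return lam * a + (1 - lam) * np.trace(a) / 2 * I2

# The superoperator as a 4x4 matrix acting on vectorized operators.
S = np.column_stack([Phi(e.reshape(2, 2)).reshape(4) for e in np.eye(4)])
eigs = np.sort(np.abs(np.linalg.eigvals(S)))[::-1]
print(eigs)  # approximately [1, 0.6, 0.6, 0.6]: a contraction with one fixed point

# A traceless "UV" operator decays like e^{-s h} with h = -log(lam).
X = np.array([[0., 1.], [1., 0.]])
a, s = X, 25
for _ in range(s):
    a = Phi(a)
assert np.allclose(a, lam**s * X)
```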
We split the ultra-violet lattice $\mathcal{L}_0$ into a simply connected region $A_0$ and the complement $A'_0$.
The RG flow respects locality in the sense that operators supported on $A_0$ are mapped to operators supported on $A_s$. Therefore, the UV errors $a'_0$ localized on $A'_0$ do not disturb the IR operators $a_s$ in $A_s$: $[\alpha^s(a'_0),a_s]=0$. This is a trivial subsystem error correction code. As we flow further into the IR the support $A_s$ shrinks until it reaches a few sites. At this point, the support of the operator no longer shrinks; instead, under the RG flow the norm of the operator drops exponentially fast. If there are $s$ layers of coarse-graining between the IR and the UV states, a UV operator supported on a region of size $|A_0|$ becomes a local operator with a norm that is suppressed by $e^{-(s-\log |A_0|)}$; for a precise statement see lemma 3 in \cite{kim2017entanglement}. Deep in the IR ($s-\log |A_0|\gg 1$) the UV perturbations are vanishingly small. They do not disturb the IR physics; see figure \ref{fig:figMERAu}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figMERAu1.pdf}
\includegraphics[width=0.45\textwidth]{figMERAu2.pdf}
\caption{(a) Any UV errors $a'_0$ (red star) supported on $A'_0$ do not disturb the IR operators that were originally supported on $A_0$ before the RG flow. The black dot, labeled as the encoded data, represents $a_s$. (b) In the figure, there are $s$ layers between the UV and the IR, where the encoded data sits. The size of the support of UV operators shrinks as $\log|A_0|$, though the size drawn in the figure is schematic. }
\label{fig:figMERAu}
\end{figure}
\subsection{Real-space RG in QFT}
In this section, we generalize the connection between MERA and error correction to the RG flow of continuous Poincar\'{e}-invariant QFT. It was shown in \cite{Furuya:2021lgx} that in continuous MERA (cMERA) \cite{haegeman2013entanglement}, the RG flow of massive free fields is an approximate quantum error correction code. We comment on the emergence of complementary recovery in holographic codes.
The canonical quantization of a QFT that is a perturbation of massive free fields uses the constant-time field operator $\varphi(x)$ and its conjugate momentum $\pi(x)$. For simplicity, we set the mass scale to one. As instructed by cMERA \cite{zou2019magic}, to model the RG flow, we deform the Hamiltonian by adding the irrelevant operator $e^{2s}\partial_i\pi(x)\partial_i \pi(x)$, where the index $i$ runs over spatial directions only and the summation over $i$ is implicit. This term acts as an effective cut-off at the length scale $e^s$. For real test functions $f^\pm(x)$ on space, we define the annihilation operators $a(f)=\int d^{d-1}x \: (f^-(x)+i f^+(x)) a(x)$. Under the RG flow this operator renormalizes to $a_s(f_s)$, where $a_s$ is the annihilation operator at scale $e^s$ and the test function $f_s$ is \cite{Furuya:2021lgx}
\begin{eqnarray}\label{RGfpm}
f^\pm_s(x)=(1-e^{2s}\nabla^2)^{\pm 1/4} f^\pm(x)\ .
\end{eqnarray}
Deep in the UV ($s\to -\infty$) the functions $f^\pm$ are supported on a region $A$. For smooth enough $f^\pm$, as long as $s\ll \log|A|$ the term $e^{2s}\nabla^2f^\pm$ in (\ref{RGfpm}) is smaller than $f^\pm$ and the renormalization of the field is negligible. This is analogous to stage one of the RG flow of the operators in MERA. Here, the support does not change, but the cut-off length grows exponentially fast. The cut-off length is analogous to a single site in MERA (the lattice spacing); therefore, the support of $f$ in units of the cut-off length shrinks exponentially fast.
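The renormalization (\ref{RGfpm}) of the test function can be illustrated numerically by applying $(1+e^{2s}k^2)^{\pm 1/4}$ in Fourier space. In the sketch below (our illustration, with a Gaussian test function of width $|A|=10$ in arbitrary units) the change in $f$ is negligible for $s\ll \log|A|$ and becomes $O(1)$ once $e^s\sim|A|$:

```python
import numpy as np

# Renormalize a test function: f_s = (1 - e^{2s} d^2/dx^2)^{1/4} f, applied in
# Fourier space where -d^2/dx^2 -> k^2.
L, n = 200.0, 4096
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
size_A = 10.0
f = np.exp(-x**2 / (2 * size_A**2))  # smooth f supported on a region |A| ~ 10

def renormalize(f, s):
    fk = np.fft.fft(f)
    return np.real(np.fft.ifft((1 + np.exp(2*s) * k**2) ** 0.25 * fk))

def rel_change(s):
    return np.linalg.norm(renormalize(f, s) - f) / np.linalg.norm(f)

# Stage one (s << log|A|): the renormalization of f is negligible.
# Once e^s ~ |A|, the correction becomes O(1).
print(rel_change(-2.0), rel_change(np.log(size_A) + 2.0))
```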
The support of the operator, in units of the cut-off, shrinks until $e^s\sim |A|$ at which point the operator is supported on a region of cut-off length, and the second stage starts. In the second stage, the second term on the right-hand-side of (\ref{RGfpm}) is no longer negligible. It was shown in \cite{Furuya:2021lgx} that for large $s$ the projection of the UV coherent operators to the code subspace becomes approximately proportional to the projection to the code subspace:
\begin{eqnarray}
P_C e^{a_s^\dagger(f_s)-a_s(f^*_s)}P_C\simeq P_C
\end{eqnarray}
which is the Knill-Laflamme condition for approximate error correction.
More generally, we can directly analyze the spectrum of the RG quantum channel.
Deep in the IR, the eigen-operators of the RG quantum channel with the largest eigenvalues are the conformal primaries of the IR fixed point \cite{vidal2007entanglement,vidal2008class}
\begin{eqnarray}
e^{-s\mathcal{D}}(a_h)=e^{-s h } a_h
\end{eqnarray}
where we have defined the superoperator $\mathcal{D}$ that generates the RG flow from the unit length scale to $e^s$. Here, $h\geq 0$ is the scaling dimension of the eigen-operator. The norm of a non-identity operator decays fast with scale. This implies that any local perturbation in the UV becomes exponentially weak in the IR. The only UV operators that survive the RG flow to low energies are supported on a macroscopically large number of degrees of freedom.\footnote{In principle, it is plausible that the RG map has invariant local eigen-operators. Such operators would have vanishing conformal dimensions.} The parameter $h_{min}(s-\log |A|)$, where $h_{min}$ is the dimension of the lightest primary, controls how well this error correction code works.
Quantum error correction makes a surprising appearance in quantum gravity and the AdS/CFT duality \cite{almheiriharlowdong2015bulk}.
The discovery of the Ryu-Takayanagi (RT) formula in holography led to an understanding of the duality at the level of subregion density matrices \cite{nishioka2009holographic,czech2012gravity}. It revealed that the map that encodes the bulk operators in the Hilbert space of the boundary theory defines an error correction code. These error correction properties have been used to develop toy models of holography using finite dimensional quantum systems \cite{Pastawski-preskill2015toymodel}. It was recently shown that the Petz map gives a reconstruction of the bulk operators in terms of the boundary observables \cite{chen2020entanglement}. See \cite{penington2019replica} for a recent discussion of the Petz map in the reconstruction of operators behind the horizon of a black hole.
At first look, it appears that the approximate error correction in RG is unrelated to the exact error correction realized in holography, because making the error correction above exact requires the conformal dimension of the lightest primary to go to infinity. The holographic QEC code has the complementary recovery property, which means that the operators supported on $A_0$ are mapped to those in $A_s$ and the operators on the complementary region $A'_s$ are encoded in those on the complementary region $A'_0$.\footnote{We will use the Latin letters $A$ and $A'$ to refer to a region and its complement, and $\mathcal{A}_A$ and $\mathcal{A}_{A'}$ to refer to their corresponding algebras of operators. Note that in the presence of conserved charges $\mathcal{A}_{A'}\neq \mathcal{A}'_A$. This happens because the local algebras have non-trivial centers. We assume periodic boundary conditions so that both $A'$ and its complement $A$ can be chosen to be simply connected.}
In general, the approximate QEC in RG does not have complementary recovery. This property has to emerge in holographic theories.
The connection with holography becomes clearer when we consider an RG with two groups of primaries: light primaries with conformal dimensions $h_L\ll \Delta$ and heavy primaries with $h_H\geq \Delta$ for some large parameter $\Delta$. If we choose our code subspace to be the theory at length scale $e^s l$, with $s=\log|A|+\epsilon$ and $l$ some fixed length scale, then any noise $\mathcal{O}_H(A)$ caused by integrals of heavy operators supported on $A$ can be corrected as long as $\epsilon \Delta\gg 1$. As the gap $\Delta$ goes to infinity, the error correction becomes exact and we obtain complementary recovery. Note that there is no need for a recovery map, as the errors simply do not perturb the code subspace. The commutator between the heavy UV operators on $A$ and any local IR operator $a_{IR}(x)$ vanishes simply because their correlation function vanishes: $\braket{\mathcal{O}_H(A,l)a_{IR}(e^sl)}\simeq e^{-\Delta(s-\log |A|)}$.
In holography, we can correct for the erasure of region $A$. The error operators include the light operators supported on $A$ in addition to the heavy operators.
As opposed to the heavy operators, the light operators on $A$ have non-vanishing correlations with the IR operators. To argue that their effect is correctable in the IR we need a new mechanism specific to holographic theories. Such a mechanism is provided in theories with $N\times N$ matrix degrees of freedom at large $N$. The light primaries are $k$-trace operators of the form $\text{tr}(X_1)\cdots \text{tr}(X_k)$ with dimension $O(N^0)$. The heavy operators have large dimension $O(N^2)$, which sets the size of the gap $\Delta$ in holography. It follows from the large $N$ factorization that the commutators of light operators are $1/N$-suppressed.\footnote{We thank Venkatesa Chandrasekaran for insightful conversations about the role of large $N$ in error correction.} A small commutator is sufficient for the effect of light operators in $A$ to be correctable in the IR.
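Large $N$ factorization itself is easy to see numerically in a zero-dimensional toy model. The sketch below (our illustration, with a single Gaussian Hermitian matrix standing in for a matrix degree of freedom) shows that the connected correlator of $\text{tr}(X^2)$ is suppressed by $1/N^2$ relative to the disconnected piece:

```python
import numpy as np

# Large-N factorization: for an N x N Gaussian Hermitian random matrix, the
# connected correlator of the single-trace operator tr(X^2) is suppressed
# relative to the disconnected piece by 1/N^2.
def trace_ratio(N, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    vals = np.empty(samples)
    for i in range(samples):
        G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        X = (G + G.conj().T) / 2            # Hermitian matrix model variable
        vals[i] = np.trace(X @ X).real
    return vals.var() / vals.mean() ** 2    # connected / disconnected

r10, r40 = trace_ratio(10), trace_ratio(40)
print(r10, r40)  # the ratio falls like 1/N^2
assert r40 < r10 / 4
```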
\section{Error correction in arbitrary von Neumann algebra}\label{sec:QFT}
The local algebra of quantum field theory is different from the matrix algebras in two important ways: 1) It has no irreducible representations.
2) It does not admit a trace. We need to generalize our discussion of error correction to the GNS Hilbert space to include the local algebra of QFT.\footnote{See appendix \ref{sec:GNS} for a review of the GNS Hilbert space.}
In part two of this work, we generalize the formalism of operator algebra error correction to arbitrary von Neumann algebras. To help the reader, we have included a self-contained review of the mathematical background needed for this section in the appendices \ref{sec:CPmaps&duals} and \ref{sec:GNS} that we refer to frequently in the text. Appendix \ref{app:errorcorrection} reviews the theory of operator algebra error correction.
To define the code and the physical GNS Hilbert spaces we need a state $\rho_B$ of $\mathcal{B}$.\footnote{A state is a normal positive functional of the algebra. When the algebra has a trace, it is a density matrix. See appendix \ref{sec:GNS} for more information.} After the action of the error map this state becomes $\rho_A=\Phi^*(\rho_B)$.\footnote{In the Schr\"{o}dinger picture, the error map corresponds to a quantum channel $\Phi^*$ that sends the states of $\mathcal{B}$ to those of $\mathcal{A}$.} We will choose $\rho_B$ to be full rank (a faithful state).
If the error map has a kernel, the state $\rho_A$ is no longer faithful. This means that the errors have permanently erased some information, and no state will be fully correctable. One way to deal with this is to define a projection to the kernel of the error map and use it explicitly in the recovery map; the recovery map will then no longer be unital. Another approach is to enlarge the algebra $\mathcal{B}$ by including more degrees of freedom until the extended error map has trivial kernel.
Physically, an error occurs because of the interaction with some environment degrees of freedom. If there is a kernel for the error map $\Phi:\mathcal{A}\to \mathcal{B}$, it is because the information has left $\mathcal{B}$ and entered the environment. If we add to $\mathcal{B}$ the degrees of freedom of the environment that contain the information that has left $\mathcal{B}$, the extended error map will have a trivial kernel.\footnote{In the extreme case where we include the whole environment in $\mathcal{B}$ the error map is a simple unitary rotation, and completely correctable.}
In the real-space RG in QFT, and in holography, the kernel of the error map is empty. This is because the state $\rho_A$ (the vacuum state of short-distance theory in QFT or the boundary state in holography restricted to a region $A$) is faithful. In this section, in generalizing our discussion of error correction to an arbitrary von Neumann algebra, we will focus on the case where the kernel of the error map is empty.
To get oriented, let us start with matrix algebras. In finite dimensional systems, the GNS Hilbert space of a full rank density matrix $\rho_A$ is a double copy Hilbert spaces $\mathcal{H}_{\rho_A}\equiv \mathcal{K}_A\otimes \mathcal{K}_{A'}$ with a distinguished vector $\ket{\rho_A^{1/2}}\in \mathcal{H}_{\rho_A}$ whose density matrix on both $A$ and $A'$ is equal to $\rho_A$; see appendix \ref{sec:GNS}. Such a vector is called cyclic and separating.
Given a state $\rho_B$, an arbitrary error map $\Phi:\mathcal{A}\to \mathcal{B}$ is represented in the GNS Hilbert space as a contraction $F:\mathcal{H}_{\rho_A}\to \mathcal{H}_{\rho_B}$\footnote{A contraction is an operator with $\|F\|_\infty\leq 1$.}.
We assume that the state $\rho_A$ is also full rank; therefore, the purification of $\rho_A$ is cyclic and separating. There is a one-to-one correspondence between the linear operators in the GNS Hilbert space and the linear superoperators on the algebra; see appendix \ref{sec:supervsoperator}.
The operator $F^\dagger$ corresponds to the super-operator $\Phi'_\rho:\mathcal{B}'\to \mathcal{A}'$ that we call the $\rho$-dual map and the operator $J_A F^\dagger J_B$ corresponds to the Petz dual map $\Phi^P_\rho:\mathcal{B}\to \mathcal{A}$ (see section \ref{sec:petz}).
Here, $J_A$ and $J_B$ are the modular conjugation operators corresponding to $\ket{\rho_A^{1/2}}$ and $\ket{\rho_B^{1/2}}$, respectively.
In the special case where $F$ is a co-isometry, we call the problem of solving for the recovery map a {\it reconstruction problem}. Both real-space RG and holography are reconstruction problems. In theorem \ref{theoremPetz}, we show that any error correction problem where the whole image of the error map is correctable is a reconstruction problem.
In von Neumann algebras, the analog of the Knill-Laflamme condition for exact error correction is the condition $F^\dagger J_B=J_AF^\dagger$ that we refer to as the {\it Takesaki condition}.\footnote{In the remainder of this work, we often denote isometries like $F^\dagger$ with letter $W$.}
Appendix \ref{app:QECintuition} explains intuitively why the Takesaki condition is necessary and sufficient for exact quantum error correction.
\section{Recovery map in von Neumann algebras}
Consider a unital normal CP error map $\Phi:\mathcal{A}\to \mathcal{B}$ between two von Neumann algebras.
The Kraus representation $\Phi(a)=\sum_rV_r^\dagger a V_r$ of a CP map generalizes to infinite dimensions.\footnote{In matrix algebras, the Kraus operators were maps from $\mathcal{K}_A$ to $\mathcal{K}_B$, where $\mathcal{K}_A$ and $\mathcal{K}_B$ were the irreducible representations of the algebras $\mathcal{A}$ and $\mathcal{B}$. A general von Neumann algebra does not admit an irreducible representation. As we discuss in appendix \ref{app:Krausinf}, the generalization of the Kraus representation to an arbitrary von Neumann algebra is in terms of the Kraus operators $V_r:\mathcal{H}_{\rho_B}\to \mathcal{H}_{\rho_A}$.}
A recovery map is an isometric embedding of the correctable von Neumann subalgebra $\mathcal{B}^C\subset\mathcal{B}$ into $\mathcal{A}$.\footnote{A recovery map satisfies $\mathcal{R}(c)V_r=V_r c$ for all $c\in \mathcal{B}^C$.
Therefore, $\mathcal{R}(c_1)\mathcal{R}(c_2)V_r\ket{\rho_A^{1/2}}=\mathcal{R}(c_1c_2)V_r\ket{\rho_A^{1/2}}$. Since we assumed that the kernel of $\Phi$ is empty, the union of the ranges of all $V_r$ covers the whole Hilbert space, and we find that a recovery map is multiplicative: $\mathcal{R}(c_1c_2)=\mathcal{R}(c_1)\mathcal{R}(c_2)$. Since it is CP, it is an isometric embedding.}
The CP map $\Phi$ corresponds to a contraction $F:\mathcal{H}_A\to \mathcal{H}_B$:\footnote{We simplify our notation from $\mathcal{H}_{\rho_A}$ to $\mathcal{H}_A$.}
\begin{eqnarray}
\Phi(a)\ket{\rho_B^{1/2}}=Fa\ket{\rho_A^{1/2}}
\end{eqnarray}
and if the whole algebra $\mathcal{B}$ is correctable a recovery map corresponds to an isometry $W:\mathcal{H}_B\to \mathcal{H}_A$. Below, we collect all the theorems we need to generalize our discussion of error correction to arbitrary von Neumann algebras.
We start with the definition of the $\rho$-dual of $\Phi$ and its properties.
\begin{theorem}[$\rho$-dual map: proposition 3.1 \cite{accardi1982conditional}]\label{thm:rhodual}
Let $\Phi:\mathcal{A}\to\mathcal{B}$ be a positive map between von Neumann algebras. Let $\rho_B$ and $\rho_A=\rho_B\circ \Phi$ be faithful states of $\mathcal{B}$ and $\mathcal{A}$. Denote by $\ket{\rho_A^{1/2}}$ and $\ket{\rho_B^{1/2}}$ the cyclic and separating vectors that represent $\rho_A$ and $\rho_B$ in their corresponding Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$. There exists a unique normal positive linear map between the commutants $\Phi'_\rho:\mathcal{B}'\to \mathcal{A}'$ defined by
\begin{eqnarray}
\braket{\Phi'_\rho(b')\rho_A^{1/2}|a \rho_A^{1/2}}=\braket{b'\rho_B^{1/2}|\Phi(a)\rho_B^{1/2}},\qquad \forall a\in \mathcal{A}, b'\in \mathcal{B}'\ .
\end{eqnarray}
If $\Phi$ is CP so is $\Phi'_\rho$, and if $\Phi$ is unital $\Phi'_\rho$ is unital and faithful.
\end{theorem}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\linewidth]{fig16u.pdf}
\caption{The figure shows the $\rho$-dual map of $\Phi$ determined by the cyclic and separating vectors $\ket{\rho_A^{1/2}}$ and $\ket{\rho_B^{1/2}}$ as in theorem \ref{thm:rhodual}. The composition of $\mathcal{J}_B$, $\Phi'_{\rho}$, and $\mathcal{J}_A$ gives the Petz dual map constructed in theorem \ref{theoremPetz}.}
\label{fig16u}
\end{figure}
First, consider the case where the whole algebra $\mathcal{B}$ is correctable. This means that there exists a recovery map $\mathcal{R}:\mathcal{B}\to \mathcal{A}$ that isometrically embeds $\mathcal{B}$ in $\mathcal{A}$
\begin{eqnarray}
\mathcal{R}(b)\ket{\rho_A^{1/2}}=W b \ket{\rho_B^{1/2}}
\end{eqnarray}
with $W:\mathcal{H}_B\to \mathcal{H}_A$ an isometry. The map $\Phi\circ\mathcal{R}=\text{id}$, and $\mathcal{R}\circ\Phi: \mathcal{A}\to \mathcal{R}(\mathcal{B})\equiv \mathcal{A}^C\subset \mathcal{A}$ is a conditional expectation that preserves the faithful state $\rho_A$.
Theorem \ref{thmTakesaki2} tells us that the necessary and sufficient condition for the existence of such a conditional expectation is $J_AW=WJ_B$, which we call the Takesaki condition. We use this property in the next theorem to establish that the recovery map is the Petz dual of the error map; see figure \ref{fig16u}:
\begin{theorem}[Petz dual]\label{theoremPetz}
Let $\Phi:\mathcal{A}\to\mathcal{B}$ be a unital completely positive map between von Neumann algebras. Let $\rho_B$ and $\rho_A=\rho_B\circ \Phi$ be faithful states. Denote by $\ket{\rho_A^{1/2}}$ and $\ket{\rho_B^{1/2}}$ the cyclic and separating vectors that represent $\rho_A$ and $\rho_B$ in their corresponding Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$.
If there exists a normal faithful representation $\mathcal{R}:\mathcal{B}\to \mathcal{A}$ that satisfies $\Phi\circ \mathcal{R}=\text{id}$, it is the {\it Petz dual} of the error map
\begin{eqnarray}
\mathcal{R}=\Phi^P_\rho\equiv \mathcal{J}_A\circ \Phi'_\rho\circ \mathcal{J}_B
\end{eqnarray}
where $\mathcal{J}_A:\mathcal{A}'\to \mathcal{A}$ and $\mathcal{J}_B:\mathcal{B}\to \mathcal{B}'$ are the modular conjugation maps corresponding to $\ket{\rho_A^{1/2}}$ and $\ket{\rho_B^{1/2}}$, respectively.
\end{theorem}
{\bf Proof:}
The superoperator $\Phi$ is unital and CP; therefore, it corresponds to a contraction $F:\mathcal{H}_A\to \mathcal{H}_B$. First, we prove that if the whole algebra $\mathcal{B}$ is correctable, then $F$ is a co-isometry.
The image of the recovery map $\mathcal{A}^C\equiv \mathcal{R}(\mathcal{B})$ is a subalgebra of $\mathcal{A}$. The composite map $\mathcal{E}=\mathcal{R}\circ \Phi:\mathcal{A}\to \mathcal{A}^C$ is unital, CP and preserves every operator in $\mathcal{A}^C$, hence it is a conditional expectation. The operator corresponding to this conditional expectation is a projection to the range of $W$: $WW^\dagger$. Therefore,
\begin{eqnarray}
\mathcal{R}\circ \Phi(a)\ket{\rho_A^{1/2}}=WFa\ket{\rho_A^{1/2}}=WW^\dagger a\ket{\rho_A^{1/2}}\ .
\end{eqnarray}
Since $\ket{\rho_A^{1/2}}$ is cyclic and separating we have $WF=WW^\dagger$ or equivalently $F=W^\dagger$ is a co-isometry.
Since this conditional expectation preserves $\rho_A$ we have the Takesaki condition $J_AW=WJ_B$.
Now, consider the Petz dual map $\Phi^P_\rho(b)$. We check that it satisfies the recovery equation
\begin{eqnarray}
\Phi\circ \Phi^P_\rho(b)\ket{\rho_B^{1/2}}=W^\dagger J_A W J_B b \ket{\rho_B^{1/2}}=b \ket{\rho_B^{1/2}}
\end{eqnarray}
where we have used the Takesaki condition for $\rho_A$. Since $\ket{\rho_B^{1/2}}$ is cyclic and separating, this implies that $\Phi\circ \Phi^P_\rho(b)=b$ for all $b\in \mathcal{B}$. In the absence of a kernel for the error map, this is the unique recovery map from $\mathcal{B}$ to $\mathcal{A}^C$. $\Box$
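For the reader's convenience, the chain of equalities in the recovery equation is a routine check: it uses only $F=W^\dagger$, the Takesaki condition $J_AW=WJ_B$, the isometry property $W^\dagger W=1$, and $J_B^2=1$:
\begin{eqnarray}
W^\dagger J_A W J_B\, b\ket{\rho_B^{1/2}}=W^\dagger W J_B J_B\, b\ket{\rho_B^{1/2}}=b\ket{\rho_B^{1/2}}\ .
\end{eqnarray}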
Next, consider the reconstruction problem where only a proper subalgebra $\mathcal{B}^C\subset \mathcal{B}$ is correctable. The Hilbert space $\mathcal{H}_B$ is a representation of $\mathcal{B}^C$ but the vector $\ket{\rho^{1/2}_B}$ is no longer a cyclic and separating vector for $\mathcal{B}^C$. We can use the theorem below to show that the recovery map is dual to $\Phi(a')=W^\dagger a' W\in (\mathcal{B}^C)'$:
\begin{theorem}[Reconstruction maps: theorem 1 of \cite{faulkner2020holographic}]\label{theoremerror}
Let $W:\mathcal{H}_B\to \mathcal{H}_A$ be an isometry in between Hilbert spaces that represent von Neumann algebras $\mathcal{B}$ and $\mathcal{A}$, respectively. The following two statements are equivalent:
\begin{enumerate}
\item For all $a\in \mathcal{A}$ we have $\alpha(a)=W^\dagger aW\in \mathcal{B}$.
\item There exists a normal isometric embedding (injective $*$-homomorphism) $\alpha':\mathcal{B}'\to \mathcal{A}'$ such that $\alpha'(b')W=Wb'$ for all $b'\in\mathcal{B}'$.
\end{enumerate}
When there exists a vector $W\ket{\rho_B^{1/2}}$ that is cyclic and separating for $\mathcal{A}$, $\alpha$ is faithful and the map $\alpha'$ is the unique $\rho$-dual and is unital.
\end{theorem}
The recovery map satisfies statement (2), therefore it is dual to the map $W^\dagger a'W\in(\mathcal{B}^C)'$, which we call $\Phi$ with an abuse of notation.
The map $\Phi$ acts as $\Phi:\mathcal{A}\to \mathcal{B}$ and $\Phi:\mathcal{A}'\to (\mathcal{B}^C)'$. Since $\mathcal{B}^C$ is smaller than $\mathcal{B}$ we do not have complementary recovery.
We cannot combine $\Phi:\mathcal{A}\to \mathcal{B}$ and $\mathcal{R}:\mathcal{B}^C\to \mathcal{A}$ to get a conditional expectation.
A simple solution is to look for conditional expectations that project from $\mathcal{B}$ to $\mathcal{B}^C$. As we review in appendix \ref{app:errorcorrection}, in finite dimensions, there is a one-to-one correspondence between the conditional expectations from $\mathcal{B}$ to $\mathcal{B}^C$ and the states on the relative commutant of $\mathcal{B}^C$ in $\mathcal{B}$. With any conditional expectation $\mathcal{E}_B:\mathcal{B}\to \mathcal{B}^C$ we can redefine the error map to $\Phi\to \mathcal{E}_B\circ \Phi$. We are back to the case where the whole image of the error map is correctable, and the recovery map is the Petz dual of the new error map.
If the inclusion $\mathcal{B}^C\subset \mathcal{B}$ has finite index, there always exists a conditional expectation from $\mathcal{B}$ to $\mathcal{B}^C$.
Any von Neumann subalgebra $\mathcal{B}^C$ is a direct integral of factors: $\mathcal{B}^C=\int_q^{\oplus} \mathcal{C}^q$.\footnote{A factor is a von Neumann algebra with trivial center.} Roughly speaking, the index $[\mathcal{B}:\mathcal{C}^q]$ of a subfactor is a measure of how many times the algebra $\mathcal{C}^q$ fits inside $\mathcal{B}$, and when there exist no conditional expectations from $\mathcal{B}$ to $\mathcal{C}^q$ this index is defined to be infinite. When the index is finite there are conditional expectations $\mathcal{E}^q:\mathcal{B}\to \mathcal{C}^q$ \cite{kosaki1986extension}. If the inclusions of all the $\mathcal{C}^q$ in $\mathcal{B}$ have finite index, the direct integral of the $\mathcal{E}^q$ is a conditional expectation $\mathcal{E}:\mathcal{B}\to \mathcal{B}^C$.
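As an elementary finite-dimensional illustration (ours, not taken from the text): for the inclusion of matrix algebras $M_n\otimes 1\subset M_n\otimes M_k$, the map
\begin{eqnarray}
\mathcal{E}=\text{id}\otimes \tau\ ,\qquad \tau(x)=\frac{1}{k}\,\text{tr}(x)\ ,
\end{eqnarray}
is a conditional expectation onto $M_n\otimes 1$, and the index of this inclusion is $k^2$.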
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{fig17u.pdf}
\caption{(a) Given a correctable state $\rho_A$ we can construct the conditional expectation that projects $\mathcal{B}$ to the invariant subalgebra $\mathcal{B}^I$ of $\Phi\circ \Phi^P_\rho$. (b) The Petz map $\Phi^P_\rho$ plays the role of the recovery map sending the operators in $\mathcal{B}^I$ to the subalgebra $\mathcal{A}^I$ that commutes with all errors. This is the von Neumann algebra generalization of the condition $[c,V_r^\dagger V_s]=0$ for the operators in the correctable subalgebra.}
\label{fig17u}
\end{figure}
The correctable subalgebra is the subalgebra of operators that commute with $V_r^\dagger V_s$.\footnote{See appendix \ref{app:errorcorrection}.} We would like to generalize this to arbitrary von Neumann algebras.
If there exist no correctable states, the correctable subalgebra is empty. Therefore, we consider
the case where we have an error map $\Phi:\mathcal{A}\to \mathcal{B}$ and a state $\rho_A$ that is correctable. We follow a strategy similar to the passive error correction in appendix \ref{passiveQEC}. The map $\Phi\circ\Phi^P_\rho:\mathcal{B}\to \mathcal{B}$ is unital and CP. We consider the conditional expectation that projects to its invariant subalgebra, which we denote by $\mathcal{B}^I$:
\begin{eqnarray}
\mathcal{E}_B=\lim_{N\to \infty}\frac{1}{N}\sum_{n=1}^N(\Phi\circ \Phi^P_\rho)^n\ .
\end{eqnarray}
This is an error correction code for the correctable algebra $\mathcal{B}^I$ with the recovery map $\Phi^P_\rho$ because for all $c\in \mathcal{B}^I$ we have $\Phi\circ \Phi_\rho^P(c)=c$. The range of the recovery map is a subalgebra in $\mathcal{A}$ that we denote by $\mathcal{A}^I=\Phi^P_\rho(\mathcal{B}^I)$.
The map $\mathcal{E}_A=\Phi^P_\rho\circ \mathcal{E}_B\circ \Phi$ is a conditional expectation from $\mathcal{A}$ down to $\mathcal{A}^I$; see figure \ref{fig17u}.
We can redefine the error map to $\mathcal{E}_B\circ \Phi:\mathcal{A}\to \mathcal{B}^I$. We are back to the standard case above, and the recovery map is once again the Petz dual of the error map.
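As a toy numerical illustration of the Cesàro construction of $\mathcal{E}_B$ above (ours, not from the paper; the choice of dephasing map and all names are illustrative), one can check in finite dimensions that averaging the iterates of a unital CP map projects onto its invariant subalgebra:

```python
import numpy as np

# Toy check: for the unital CP dephasing map Phi(b) = (1-p) b + p Z b Z
# on 2x2 matrices, the Cesaro average E_B = lim_N (1/N) sum_{n=1}^N Phi^n
# projects onto the invariant subalgebra B^I, here the diagonal matrices.

Z = np.diag([1.0, -1.0])
p = 0.3  # arbitrary dephasing strength, 0 < p < 1

def phi(b):
    """Unital, completely positive dephasing map."""
    return (1.0 - p) * b + p * Z @ b @ Z

def cesaro(b, N=1000):
    """Approximate E_B(b) = (1/N) * sum_{n=1}^{N} phi^n(b)."""
    total = np.zeros_like(b)
    x = b
    for _ in range(N):
        x = phi(x)
        total = total + x
    return total / N

b = np.array([[1.0, 1.0], [1.0, 1.0]])
Eb = cesaro(b)
# Off-diagonal entries are damped by a factor (1-2p) per application, so the
# average converges to the diagonal "pinching" of b: a conditional expectation.
print(np.round(Eb, 4))
```

Here the invariant subalgebra of the dephasing map is the diagonal matrices, and the computed $\mathcal{E}_B$ is the pinching map, which is a unital conditional expectation onto $\mathcal{B}^I$.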
\section{Discussion}\label{sec:discussion}
In summary, we argued that the renormalization group is an approximate error correction code. This is similar to modeling the holographic map as a subsystem error correction code, with the difference that we do not have complementary recovery. We discussed how complementary recovery emerges in a theory with large $N$ and a large gap.
We studied operator algebra quantum error correction for an arbitrary von Neumann algebra. If the error map has a kernel, some information is irreversibly lost. In real-space RG, the vacuum vector of a QFT is cyclic and separating, which implies that the kernel of the RG map is trivial.
In von Neumann algebras, the analog of the Knill-Laflamme condition for exact error correction is the Takesaki condition. When recovery is possible, the recovery map is the Petz dual of the error map.
If the kernel of the error map is nontrivial (we do not have a cyclic and separating vector), the composition of the recovery map and the error map $\mathcal{R}\circ\Phi:\mathcal{A}\to \mathcal{A}^C$ is still a CP map that preserves every operator in $\mathcal{A}^C$, but it is no longer unital. In the language of von Neumann algebras, such a map is an operator-valued weight: an unbounded, unnormalized positive map with dense domain in $\mathcal{A}_+$ (the positive operators of $\mathcal{A}$) that satisfies the bi-module property.\footnote{See appendix \ref{sec:CPmaps&duals} for a discussion of the bi-module property.} There exists a bijection between the set of operator-valued weights from $\mathcal{A}$ to $\mathcal{A}^C$ and those from $(\mathcal{A}^C)'$ to $\mathcal{A}'$ \cite{connes1980spatial}. The study of operator-valued weights could shed light on the problem of reconstruction in the absence of a faithful state.
Consider the AdS$_{d+1}$/CFT$_d$ correspondence in $d>1$ and a simply connected region $A$.
In time-reversal symmetric geometries, the Ryu-Takayanagi (RT) surface is the codimension-two surface in the bulk that is anchored on the boundary of $A$, is homologous to $A$, and has minimal area; see figure \ref{fig19u}. Denote by $B$ the region in the bulk between the RT surface and $A$. Consider the map $\mathcal{R}$ that encodes the algebra $\mathcal{B}$ of the bulk on the boundary (the bulk reconstruction map). We choose the error map to be $\Phi=\alpha\circ\text{tr}_{A'}$ where $\alpha(\cdot)=W^\dagger (\cdot) W$ and $W:\mathcal{H}_{bulk}\to \mathcal{H}_{boundary}$ is the encoding isometry. All the bulk operators $b\in \mathcal{B}$ satisfy the error correction condition $\Phi(\mathcal{R}(b))=b$ and the recovery map $\mathcal{R}$ is an isometric embedding. The holographic map from the boundary algebra to the bulk algebra has no kernel because both the bulk and boundary vectors are cyclic and separating with respect to their corresponding algebras. We have complementary recovery and the whole bulk algebra $\mathcal{B}$ is reconstructable. The reconstruction map $\mathcal{R}$ is the Petz dual of the holographic map $\Phi$.
A similar observation was discussed in a recent paper \cite{faulkner2020holographic}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{fig19u.pdf}
\caption{A time slice of anti-de Sitter space with $\mathcal{A}$ the algebra of a region $A$ on the boundary and $\mathcal{B}$ the algebra of the bulk region that is in between $A$ and the Ryu-Takayanagi surface of $A$. The CP map $\Phi$ maps the boundary local algebra to the bulk, whereas $\mathcal{R}$ reconstructs the bulk operators on the boundary.}
\label{fig19u}
\end{figure}
Given a $\rho$-preserving conditional expectation, we can define a measure of the information lost under the conditional expectation \cite{entanglemententropy_2020}. This leads to entropic uncertainty relations that play an important role in the derivation of the Ryu-Takayanagi formula in holography \cite{Harlow2017,faulkner2020holographic}. It has been argued that complementary recovery fails in some situations in holography \cite{akers2019large}. This brings the holographic reconstruction problem closer to the real-space RG.
Finally, we make the following observation: in AdS$_2$/CFT$_1$ the bulk reconstruction map cannot be a conditional expectation, because there exist no conditional expectations from a type I algebra (the boundary theory is $0+1$-dimensional) to a type III von Neumann algebra (the bulk theory is a $1+1$-dimensional QFT). We believe that the resolution of this seeming paradox is that the bulk and boundary relative entropies match only up to $1/N$ corrections. The error correction properties of the holographic map are only approximate. A related observation is that we can define CP maps between $*$-closed subspaces of observables (operator systems). This generalization can be helpful in moving away from exact error correction in holography.
\section*{Acknowledgements}
We would like to thank Roy Araiza, Venkatesa Chandrasekaran, Shawn X. Cui, Mudassir Moosa, Kwing Lam Leung, Thomas Sinclair, and Edward Witten for valuable discussions. This work was supported, in part, by a grant-in-aid (PHY-1911298) from the National Science Foundation. NL would like to thank the Institute for Advanced Study for their hospitality during his visit where part of this work was completed.
\section{Introduction}
The study of derived categories is an important subject in various areas of mathematics, such as ring theory, representation theory, algebraic geometry and mathematical physics.
In the representation theory of algebras, since the equivalences of derived categories preserve many homological properties, it is a natural problem to determine the derived equivalence class of a given algebra.
It is a well-known result (\cite{Ri89}) that derived equivalences are controlled by tilting objects.
Hence the problem above is reduced to finding all tilting objects for an algebra.
Recently, mutation theory has been intensely studied in the representation theory of algebras.
Mutation is an operation to construct a new object from an original one by exchanging direct summands.
As a typical example, for a symmetric algebra, mutations of tilting objects are again tilting; these are given by Okuyama--Rickard complexes.
Unfortunately, for an arbitrary algebra, the class of tilting objects is not necessarily closed under mutation.
Aihara--Iyama (\cite{AI12}) show that mutations of silting objects are always silting objects, and hence iterated mutation produces infinitely many silting objects from a given one.
Silting objects were introduced by Keller--Vossieck (\cite{KV88}) as a generalization of tilting objects in order to study bounded $t$-structures on derived categories.
We may expect silting connectedness, that is, any two silting objects are obtained from each other by iterated mutation.
However, Aihara--Grant--Iyama and recently Dugas (\cite{Du21}) give examples of algebras which do not satisfy silting connectedness.
In \cite{Ai13}, Aihara introduced the notion of silting-discrete algebras, which gives a reasonable class of finite dimensional algebras satisfying silting connectedness.
A finite dimensional algebra is called a \emph{silting-discrete algebra} if for each positive integer $d$, the set of isomorphism classes of basic $d$-term silting objects of the bounded homotopy category of finitely generated projective modules is finite.
Silting-discrete algebras enjoy nice properties: bounded $t$-structures correspond bijectively to silting objects (\cite{KY14,AMY19}), and hence the stability space (in the sense of Bridgeland) of the bounded derived category is contractible (\cite{PSZ18,AMY19}).
As mentioned above, for any algebra, mutations of tilting objects are not necessarily tilting.
However, for a self-injective algebra, Chan--Koenig--Liu (\cite{CKL15}) introduce the notion of $\nu$-stable mutation and show that $\nu$-stable mutations of tilting objects are also tilting, where $\nu$ is a Nakayama functor.
It is shown (\cite{AM17}) that tilting-discrete self-injective algebras, which are a tilting analog of silting-discrete algebras, satisfy a property that any two tilting objects are obtained from each other by iterated $\nu$-stable mutation.
In this paper, we discuss a unification of silting-discrete algebras and tilting-discrete self-injective algebras.
Moreover, we give examples of tilting-discrete self-injective algebras that are not silting-discrete.
Let $A$ be a finite dimensional algebra and $\mathcal{T}:=\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ the bounded homotopy category of finitely generated projective $A$-modules.
For a triangle auto-equivalence $\nu$ on $\mathcal{T}$,
we introduce the notion of $\nu$-stable silting-discrete algebras, that is, algebras with finitely many $d$-term $\nu$-stable silting objects of $\mathcal{T}$ for each $d>0$.
Remark that $\nu$-stable silting objects of $\mathcal{T}$ are a generalization of tilting objects for self-injective algebras (see Proposition \ref{prop:selfinjective-tilting}).
The following theorem is one of our main results, which is an analog of \cite[Theorem 1.2]{AM17}.
\begin{theorem}\label{thmA}
Let $A$ be a finite dimensional algebra and $\mathcal{T}:=\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
Assume that $\mathcal{T}$ admits a triangle auto-equivalence $\nu$.
Then the following statements are equivalent:
\begin{itemize}
\item[(1)] $A$ is $\nu$-stable silting-discrete.
\item[(2)] For each object $M$ obtained by a finite sequence of minimal $\nu$-stable mutations from $A$, the set of isomorphism classes of basic $\nu$-stable silting objects $N$ of $\mathcal{T}$ satisfying $M\ge N\ge M[1]$ is a finite set, where $X\ge Y$ means $\operatorname{Hom}\nolimits_{\mathcal{T}}(X,Y[i])=0$ for all $i>0$.
\end{itemize}
\end{theorem}
Remark that Theorem \ref{thmA} extends to the case of triangulated categories (see Theorem \ref{thm:equiv-nu-silting-discrete}).
For a symmetric algebra, all silting objects are tilting objects.
Hence tilting-discrete symmetric algebras are silting-discrete.
This result is generalized to weakly symmetric algebras as follows.
\begin{theorem}[Theorem \ref{thm:weakly-symmetric-silting-discrete}]\label{thmB}
Let $A$ be a weakly symmetric algebra and $\mathcal{T}:=\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
Let $\nu$ be a Nakayama functor.
Then the following statements are equivalent:
\begin{itemize}
\item[(1)] $A$ is silting-discrete.
\item[(2)] $A$ is $\nu$-stable silting-discrete.
\item[(3)] $A$ is tilting-discrete.
\end{itemize}
In this case, all silting objects are tilting.
\end{theorem}
Independently of the present work, the same result is obtained by August--Dugas \cite{AD21}.
In \cite{AM17}, it is shown that preprojective algebras of Dynkin type are tilting-discrete self-injective algebras.
As an application of Theorem \ref{thmB}, we show that if $A$ is the preprojective algebra of one of the Dynkin diagrams $\mathbf{D}_{2n}$ $(n\ge 2)$, $\mathbf{E}_{7}$ and $\mathbf{E}_{8}$, then it is silting-discrete.
However, we do not know whether each tilting-discrete self-injective algebra is silting-discrete.
Now we propose a natural question.
\begin{question}
Is a tilting-discrete self-injective algebra always silting-discrete?
\end{question}
One of the aims of this paper is to give two counterexamples to the question above.
The first counterexample is as follows.
\begin{theorem}[Theorem \ref{thm:local-tilting}]
Let $A$ be a basic connected non-semisimple self-injective algebra over an algebraically closed field and let $\nu$ be its Nakayama functor. Assume that $A$ is $\nu$-cyclic.
Then there exists a self-injective algebra $\widetilde{A}$ such that
\begin{itemize}
\item it is not silting-discrete,
\item $\{ \widetilde{A}[i]\mid i\in\mathbb{Z} \}$ coincides with the set of isomorphism classes of all basic tilting objects for $\widetilde{A}$.
In particular, $\widetilde{A}$ is tilting-discrete.
\end{itemize}
\end{theorem}
The second counterexample is as follows.
Let $n,m$ be positive integers and let $K$ be an algebraically closed field.
We denote by $A_{n,m}$ the stable Auslander algebra of a self-injective Nakayama $K$-algebra with $m$ simple modules (up to isomorphism) and Loewy length $n$.
It is known that $A_{n,m}$ is always a self-injective algebra.
\begin{theorem}[Theorem \ref{thm:counterexample-question}]
Let $n,m\ge 5$ be integers with $\gcd(n-1,m)=1$.
Assume that $n$ is odd and $m$ is not divisible by the characteristic of $K$.
Then $A_{n,m}$ is a tilting-discrete algebra but not silting-discrete.
\end{theorem}
\subsection*{Notation}
Let $K$ be a field and $\mathbb{D}:=\operatorname{Hom}\nolimits_{K}(-,K)$.
Throughout this paper, $\mathcal{T}$ is a $K$-linear Hom-finite Krull-Schmidt triangulated category with shift functor $[1]$.
For an object $M$ of $\mathcal{T}$, we denote by $\mathsf{add}\hspace{.01in}(M)$ the smallest full subcategory of $\mathcal{T}$ which contains $M$ and which is closed under taking finite direct sums and direct summands, and by $\mathsf{thick}\hspace{.01in} M$ the smallest triangulated full subcategory of $\mathcal{T}$ which contains $M$ and which is closed under taking direct summands.
For full subcategories $\mathcal{X},\mathcal{Y}$ of $\mathcal{T}$, we define $\mathcal{X}\ast\mathcal{Y}$ as the full subcategory of $\mathcal{T}$ consisting of $T\in\mathcal{T}$ which admits a triangle $X\to T\to Y\to X[1]$ with $X\in\mathcal{X}$ and $Y\in\mathcal{Y}$.
\section{$\nu$-stable silting theory}
In this section, we introduce $\nu$-stable silting mutation theory, which unifies silting mutation theory (\cite{AI12}) and tilting mutation theory of self-injective algebras (\cite{CKL15}, \cite{AM17}), where $\nu$ is a triangle auto-equivalence.
Let $\mathcal{T}$ be a $K$-linear Hom-finite Krull-Schmidt triangulated category with shift functor $[1]$.
Assume that $\mathcal{T}$ has a triangle auto-equivalence $\nu:\mathcal{T} \to \mathcal{T}$.
\subsection{$\nu$-stable objects}
In this subsection, we recall the notion of $\nu$-stable objects, which plays an important role in this paper.
\begin{definition}
An object $M$ of $\mathcal{T}$ is said to be \emph{$\nu$-stable} if $\nu M \cong M$ holds.
\end{definition}
Let $M$ be a basic $\nu$-stable object of $\mathcal{T}$.
We decompose $M$ as $M=\oplus_{i\in I}M_{i}$, where $M_{i}$ is indecomposable.
Then for each $i\in I$, there exists a unique $j\in I$ such that $\nu M_{i} \cong M_{j}$, because $\nu$ preserves indecomposability.
Define a permutation $v_{M}:I\to I$ by $\nu M_{i}\cong M_{v_{M}(i)}$.
Now we introduce two classes of $\nu$-stable objects.
\begin{definition}
Let $M$ be a basic $\nu$-stable object of $\mathcal{T}$.
\begin{itemize}
\item[(1)] We call $M$ a \emph{weakly symmetric $\nu$-stable object} if $v_{M}$ is the identity map.
\item[(2)] We call $M$ a \emph{symmetric $\nu$-stable object} if the restriction $\nu|_{\mathsf{add}\hspace{.01in} M}$ is functorially isomorphic to the identity functor.
\end{itemize}
\end{definition}
For simplicity, we omit the word ``$\nu$-stable'' in (weakly) symmetric $\nu$-stable objects.
Note that all symmetric objects are weakly symmetric.
Moreover, if $M$ is a symmetric object, then each object of $\mathsf{thick}\hspace{.01in} M$ is $\nu$-stable.
Under the condition that $\nu$ is a Serre functor (i.e., there exists a bifunctorial isomorphism
\begin{align}\label{seq:serre}
\operatorname{Hom}\nolimits_{\mathcal{T}}(X,Y)\cong\mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(Y,\nu X)
\end{align}
for each $X,Y\in\mathcal{T}$), we obtain the following result.
\begin{proposition}\label{prop:object-algebra}
Assume that $\nu$ is a Serre functor.
If $M$ is a basic $\nu$-stable $($respectively, weakly symmetric, symmetric$)$ object, then $\operatorname{End}\nolimits_{\mathcal{T}}(M)$ is a self-injective $($respectively, weakly symmetric, symmetric$)$ algebra.
\end{proposition}
\begin{proof}
Let $M$ be a $\nu$-stable object of $\mathcal{T}$.
By \eqref{seq:serre}, we have $\operatorname{End}\nolimits_{\mathcal{T}}(M)\cong \mathbb{D}\operatorname{End}\nolimits_{\mathcal{T}}(M)$ as a left $\operatorname{End}\nolimits_{\mathcal{T}}(M)$-module.
Hence $\operatorname{End}\nolimits_{\mathcal{T}}(M)$ is self-injective.
Next, we assume that $M$ is weakly symmetric. Let $M_{i}$ be an indecomposable direct summand of $M$.
Then we obtain $\operatorname{Hom}\nolimits_{\mathcal{T}}(M_{i},M)\cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M,\nu M_{i})\cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M_{v_{M}(i)})\cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M_{i})$ as a left $\operatorname{End}\nolimits_{\mathcal{T}}(M)$-module.
Therefore $\operatorname{End}\nolimits_{\mathcal{T}}(M)$ is weakly symmetric.
Finally, we assume that $M$ is symmetric.
Since $\nu|_{\mathsf{add}\hspace{.01in} M}$ is functorially isomorphic to the identity functor, we have $\operatorname{End}\nolimits_{\mathcal{T}}(M)\cong \mathbb{D}\operatorname{End}\nolimits_{\mathcal{T}}(M)$ as an $\operatorname{End}\nolimits_{\mathcal{T}}(M)$-$\operatorname{End}\nolimits_{\mathcal{T}}(M)$-bimodule.
Consequently, $\operatorname{End}\nolimits_{\mathcal{T}}(M)$ is symmetric.
\end{proof}
\subsection{$\nu$-stable silting objects}
We start this subsection with recalling the definition of silting objects.
\begin{definition}
An object $M$ of $\mathcal{T}$ is called a \emph{silting} (respectively, \emph{tilting}) \emph{object} of $\mathcal{T}$ if $\mathcal{T}=\mathsf{thick}\hspace{.01in} M$ and $\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M[i])=0$ for all $i>0$ (respectively, $i\neq 0$).
We denote by $\mbox{\rm silt}\hspace{.01in} \mathcal{T}$ (respectively, $\mbox{\rm tilt}\hspace{.01in}\mathcal{T}$, $\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$) the set of isomorphism classes of basic silting (respectively, tilting, $\nu$-stable silting) objects of $\mathcal{T}$.
\end{definition}
Recall the partial order on $\mbox{\rm silt}\hspace{.01in}\mathcal{T}$.
For objects $M,N$ of $\mathcal{T}$, we write $M \ge N$ if $\operatorname{Hom}\nolimits_{\mathcal{T}}(M,N[i])=0$ for all $i>0$.
Then $(\mbox{\rm silt}\hspace{.01in}\mathcal{T}, \ge)$ is a partially ordered set by \cite[Theorem 2.11]{AI12}.
Moreover, by the restriction, $\ge$ gives a partial order on $\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$.
For each $M\in\mbox{\rm silt}\hspace{.01in}\mathcal{T}$ and $d\in\mathbb{Z}_{\ge 0}$, let $\ssilt{(d+1)_{M}}\mathcal{T} :=\{ N\in \mbox{\rm silt}\hspace{.01in}\mathcal{T} \mid M \ge N \ge M[d]\}$.
Note that, for $M,N\in \mbox{\rm silt}\hspace{.01in}\mathcal{T}$, $M\geq N \geq M[d]$ if and only if $N\in \mathsf{add}\hspace{.01in} M\ast\mathsf{add}\hspace{.01in} M[1]\ast \cdots \ast\mathsf{add}\hspace{.01in} M[d]$ (for example, see \cite[Lemma 3.6]{AMY19}).
In \cite{ANR13} and also \cite[Theorem A.4]{Ai13}, it is shown that, for a finite dimensional self-injective algebra $A$ over an algebraically closed field,
all $\nu$-stable silting objects of the bounded homotopy category $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ are tilting objects, where $\nu:=\mathbb{D}\operatorname{Hom}\nolimits_{A}(-,A)$ is a Serre functor.
Moreover the converse also holds.
We discuss an analog of their result.
\begin{proposition}\label{prop:nu-silting-tilting}
Assume that $\nu$ is a Serre functor.
Then all $\nu$-stable silting objects of $\mathcal{T}$ are tilting.
\end{proposition}
\begin{proof}
Let $M$ be a $\nu$-stable silting object of $\mathcal{T}$.
It is enough to show that $\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M[i])=0$ for all $i<0$.
For each integer $i$, we have isomorphisms
\begin{align}
\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M[i])\cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M[i],\nu M)\cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M[i],M) \cong \mathbb{D}\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M[-i]). \notag
\end{align}
Since $M$ is silting, we obtain $\operatorname{Hom}\nolimits_{\mathcal{T}}(M,M[i])=0$ for each negative integer $i$.
\end{proof}
Note that the converse of Proposition \ref{prop:nu-silting-tilting} does not necessarily hold.
Indeed, we give a characterization of algebras for which all tilting objects are $\nu$-stable silting.
By the characterization, non-semisimple hereditary algebras have a tilting object which is not $\nu$-stable.
Recall that, by \cite[Corollary 3.9]{Che11}, a finite dimensional algebra $A$ is an Iwanaga--Gorenstein algebra if and only if the bounded homotopy category $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ has a Serre functor $\nu$.
\begin{proposition}\label{prop:selfinjective-tilting}
Let $A$ be a finite dimensional Iwanaga--Gorenstein algebra over an algebraically closed field and $\nu$ the Serre functor.
Then the following statements are equivalent.
\begin{itemize}
\item[(1)] $A$ is self-injective.
\item[(2)] All tilting objects of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ are $\nu$-stable.
\item[(3)] $A$ is a $\nu$-stable object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
\item[(4)] $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ has a $\nu$-stable silting object.
\end{itemize}
\end{proposition}
\begin{proof}
(1)$\Rightarrow$(2) follows from \cite[Theorem A.4]{Ai13}.
(2)$\Rightarrow$(3)$\Rightarrow$(4) is clear.
We show (4)$\Rightarrow$(1). Let $T$ be a $\nu$-stable silting object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
Then $B:=\operatorname{End}\nolimits_{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)}(T)$ is a self-injective algebra by Proposition \ref{prop:object-algebra}.
Since $T$ is a tilting object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ by Proposition \ref{prop:nu-silting-tilting}, $B$ is derived equivalent to $A$.
Hence the assertion follows from \cite[Theorem 2.1]{ANR13}.
\end{proof}
As an application of Proposition \ref{prop:nu-silting-tilting}, we have the following corollary.
\begin{corollary}
Assume that $\nu$ is a Serre functor.
Let $M$ be a symmetric object of $\mathcal{T}$.
Then all silting objects of $\mathsf{thick}\hspace{.01in} M$ are tilting objects of $\mathsf{thick}\hspace{.01in} M$.
\end{corollary}
\begin{proof}
Since $M$ is symmetric, all objects in $\mathsf{thick}\hspace{.01in} M$ are $\nu$-stable.
Hence the assertion follows from Proposition \ref{prop:nu-silting-tilting}.
\end{proof}
\subsection{$\nu$-stable mutations}
Let us start this subsection by recalling the notion of $\nu$-stable mutations.
For basic results on mutations of silting objects, we refer to \cite{AI12}.
In this subsection, we do not necessarily assume that $\nu$ is a Serre functor.
Recall the definition of minimal left approximations.
Let $f: X\to Z$ be a morphism.
We say that $f$ is \emph{left minimal} if each $h\in \operatorname{End}\nolimits_{\mathcal{T}}(Z)$ with $hf=f$ is an isomorphism.
Let $N$ be an object of $\mathcal{T}$.
We call $f$ a \emph{left $\mathsf{add}\hspace{.01in} N$-approximation} of $X$ if $Z\in\mathsf{add}\hspace{.01in} N$ and $\operatorname{Hom}\nolimits_{\mathcal{T}}(f,N)$ is surjective.
A left $\mathsf{add}\hspace{.01in} N$-approximation $f$ is said to be \emph{minimal} if it is left minimal.
Dually, we define a right minimal morphism, a right $\mathsf{add}\hspace{.01in} N$-approximation and a minimal right $\mathsf{add}\hspace{.01in} N$-approximation.
We collect some results for approximations.
The following lemma is a basic result in mutation theory.
\begin{lemma}\label{lem:approximation-result}
Let $N$ be an object of $\mathcal{T}$ with $\operatorname{Hom}\nolimits_{\mathcal{T}}(N,N[1])=0$.
Let
\begin{align}
X\xrightarrow{f}Y\xrightarrow{g}Z\to X[1] \notag
\end{align}
be a non-split triangle.
Then the following statements are equivalent.
\begin{itemize}
\item[(1)] $X$ is indecomposable, $f$ is a minimal left $\mathsf{add}\hspace{.01in} N$-approximation and $\operatorname{Hom}\nolimits_{\mathcal{T}}(N,X[1])=0$.
\item[(2)] $Z$ is indecomposable, $g$ is a minimal right $\mathsf{add}\hspace{.01in} N$-approximation and $\operatorname{Hom}\nolimits_{\mathcal{T}}(Z,N[1])=0$.
\end{itemize}
\end{lemma}
We have an easy observation for left minimal approximations.
\begin{lemma}\label{lem:isom-triang}
Let $N$ be an object of $\mathcal{T}$. Let $X\xrightarrow{f}Y\to Z\to X[1]$ and $X'\xrightarrow{f'}Y'\to Z'\to X'[1]$ be triangles with $f,f'$ minimal left $\mathsf{add}\hspace{.01in} N$-approximations.
For an isomorphism $\varphi:X\to X'$, there exist isomorphisms $\varphi':Y\to Y'$ and $\varphi'':Z\to Z'$ such that the following diagram commutes:
\begin{align}
\xymatrix{
X\ar[r]^-{f}\ar[d]_-{\varphi}^-{\cong}&Y\ar[r]\ar[d]_-{\varphi'}^-{\cong}&Z\ar[r]\ar[d]_-{\varphi''}^-{\cong}&X[1]\ar[d]^-{\cong}\\
X'\ar[r]^-{f'}&Y'\ar[r]&Z'\ar[r]& X'[1].
}\notag
\end{align}
Moreover, if $N$ and $X$ are $\nu$-stable, then so are $Y$ and $Z$.
\end{lemma}
\begin{proof}
The first assertion follows from basic properties of triangulated categories and minimal left approximations.
We show the second assertion. We assume that $X$ and $N$ are $\nu$-stable.
Since $\nu$ is a triangle auto-equivalence, the morphism $\nu f$ is also a minimal left $\mathsf{add}\hspace{.01in} N$-approximation.
Hence the second assertion follows from the first assertion.
\end{proof}
Let $M$ be a basic object of $\mathcal{T}$ with $M=X\oplus N$.
Take a minimal left $\mathsf{add}\hspace{.01in} N$-approximation $f: X\to Y$ and a triangle
\begin{align}
X\xrightarrow{f}Y \to Z \to X[1].\notag
\end{align}
Then $\mu_{X}(M):=Z\oplus N$ is called a (left) mutation of $M$ with respect to $X$.
Moreover, the mutation $\mu_{X}(M)$ is said to be \emph{irreducible} if $X$ is indecomposable.
Mutations of silting objects have the following nice property.
\begin{proposition}\cite[Theorem 2.31 and Proposition 2.33]{AI12}\label{prop:silting-mutation}
Let $M=X\oplus N$ be a basic silting object.
Then $\mu_{X}(M)$ is also basic silting.
Moreover, if $X\neq 0$, then $M>\mu_{X}(M)$.
\end{proposition}
In the following, we introduce the notion of $\nu$-stable mutations, which is an analog of mutations of tilting objects for self-injective algebras (see \cite[\S5]{CKL15}).
We call $\mu_{X}(M)$ a \emph{$\nu$-stable mutation} if $M$ and $X$ are $\nu$-stable. Note that if $M=X\oplus N$ is $\nu$-stable, then $X$ is $\nu$-stable if and only if $N$ is $\nu$-stable. By Lemma \ref{lem:isom-triang} and Proposition \ref{prop:silting-mutation}, we have the following result.
\begin{proposition}\label{prop:nu-silting-mutation}
Let $M=X\oplus N$ be a basic $\nu$-stable silting object with $X$ a $\nu$-stable object.
Then $\mu_{X}(M)$ is also a basic $\nu$-stable silting object.
\end{proposition}
\begin{proof}
Since $X$ and $N$ are $\nu$-stable, so is $\mu_{X}(M)$ by Lemma \ref{lem:isom-triang}.
Thus the assertion follows from Proposition \ref{prop:silting-mutation}.
\end{proof}
Now we define irreducible $\nu$-stable mutations.
\begin{definition}
Let $M$ be a $\nu$-stable object.
\begin{itemize}
\item[(1)] A non-zero $\nu$-stable direct summand $X$ of $M$ is said to be \emph{minimal} if there exists no non-zero proper $\nu$-stable direct summand $X'$ of $X$.
\item[(2)] Assume that $M$ is basic. If $X$ is a minimal $\nu$-stable direct summand of $M$, then we call $\mu_{X}(M)$ an \emph{irreducible $\nu$-stable mutation} of $M$ with respect to $X$.
\end{itemize}
\end{definition}
Let $M=X\oplus N$ be a weakly symmetric object. Since each indecomposable direct summand of $M$ is $\nu$-stable, we obtain that $X$ is minimal $\nu$-stable if and only if it is indecomposable. Thus irreducible mutations coincide with irreducible $\nu$-stable mutations.
\begin{proposition}\label{prop:wekaly-symmetric-mutation}
Each $\nu$-stable mutation of a weakly symmetric silting object is also weakly symmetric silting.
\end{proposition}
\begin{proof}
Let $M=X\oplus N$ be a weakly symmetric silting object and take a triangle $X\xrightarrow{f}Y\to Z\to X[1]$ with $f$ a minimal left $\mathsf{add}\hspace{.01in} N$-approximation.
By Proposition \ref{prop:silting-mutation}, $\mu_{X}(M):=Z\oplus N$ is a silting object. Thus it is enough to show that $\mu_{X}(M)$ is weakly symmetric.
We decompose $X$ as $X=\oplus_{i\in I}X_{i}$, where $X_{i}$ is indecomposable.
For each $i\in I$, take a minimal left $\mathsf{add}\hspace{.01in} N$-approximation $f_{i}:X_{i}\to Y_{i}$ and a triangle $X_{i}\xrightarrow{f_{i}}Y_{i}\to Z_{i}\to X_{i}[1]$.
By Lemma \ref{lem:approximation-result}, $Z_{i}$ is indecomposable, and by Lemma \ref{lem:isom-triang}, it is $\nu$-stable.
Hence $\oplus_{i\in I} Z_{i}$ is weakly symmetric.
On the other hand, since $\oplus_{i\in I}f_{i}$ is a minimal left $\mathsf{add}\hspace{.01in} N$-approximation, it follows from Lemma \ref{lem:isom-triang} that $Z\cong\oplus_{i\in I} Z_{i}$ and hence $\mu_{X}(M)$ is weakly symmetric.
\end{proof}
In the rest of this subsection, we study combinatorial properties for $\nu$-stable mutations.
The following lemma plays an important role in this section.
\begin{lemma}\cite[Propositions 2.24 and 2.36]{AI12}\label{lem:int-mutation}
Fix an integer $d\ge 1$. Let $M$ be a basic silting object and $N\in \mathsf{add}\hspace{.01in} M \ast \mathsf{add}\hspace{.01in} M[1]\ast\cdots \ast \mathsf{add}\hspace{.01in} M[d]$.
Then the following statements hold.
\begin{itemize}
\item[(1)] For each $l\in [1,d]$, there exists a triangle
\begin{align}
M_{l}'\xrightarrow{f}N\xrightarrow{g}M_{l}''\xrightarrow{h}M_{l}'[1] \notag
\end{align}
such that $f$ is a minimal right $(\mathsf{add}\hspace{.01in} M\ast \cdots \ast \mathsf{add}\hspace{.01in} M[l-1])$-approximation, $g$ is a minimal left $(\mathsf{add}\hspace{.01in} M[l]\ast \cdots \ast \mathsf{add}\hspace{.01in} M[d])$-approximation and $h$ lies in the radical of $\mathcal{T}$.
\item[(2)] If $N\in \ssilt{(d+1)_{M}}\mathcal{T}\setminus \ssilt{d_{M}}\mathcal{T}$, then $M_{d}''\neq 0$.
Moreover, we have $M>\mu_{X}(M)\ge N$ for each basic non-zero direct summand $X$ of $M_{d}''[-d]$.
\end{itemize}
\end{lemma}
The lemma above induces the following properties of $\nu$-stable mutations.
\begin{proposition}\label{prop:local-property-nu-mutation}
Let $M,N$ be basic silting objects with $M>N$.
Assume that one of the following two conditions is satisfied:
\begin{itemize}
\item[(a)] $M$ and $N$ are $\nu$-stable.
\item[(b)] $M$ is weakly symmetric.
\end{itemize}
Then the following statements hold.
\begin{itemize}
\item[(1)] There is a minimal $\nu$-stable direct summand $X$ of $M$ such that $M>\mu_{X}(M)\ge N$.
\item[(2)] If the set
\begin{align}
\mbox{\rm silt}\hspace{.01in}^{\nu}[N,M]:=\{ L\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T} \mid N\le L\le M\} \notag
\end{align}
is finite, then $N$ can be obtained from $M$ by iterated irreducible $\nu$-stable mutation.
In particular, if $\textnormal{(b)}$ is satisfied, then all objects in $\mbox{\rm silt}\hspace{.01in}^{\nu}[N,M]$ are weakly symmetric.
\end{itemize}
\end{proposition}
\begin{proof}
(1) Let $M>N$ be basic silting objects satisfying (a) or (b).
By \cite[Proposition 2.23]{AI12}, we have $N\in \ssilt{(d+1)_{M}}\mathcal{T}\setminus \ssilt{d_{M}}\mathcal{T}$ for some $d\ge 1$.
Then it follows from Lemma \ref{lem:int-mutation} that there exists a minimal left $\mathsf{add}\hspace{.01in} M[d]$-approximation $g:N\to M_{d}''$ with $M_{d}''\neq 0$.
We show that $M_{d}''$ is $\nu$-stable.
If (a) is satisfied, then the assertion follows from Lemma \ref{lem:isom-triang}.
On the other hand, if (b) is satisfied, then the assertion follows from the fact that each indecomposable direct summand of $M$ is $\nu$-stable.
Hence for both cases, $M_{d}''$ is $\nu$-stable.
Taking a minimal $\nu$-stable direct summand $X$ of $M''_{d}[-d]$, we have $M>\mu_{X}(M)\ge N$ by Lemma \ref{lem:int-mutation}(2).
(2) If (a) (respectively, (b)) is satisfied, then $\mu_{X}(M)$ in (1) is also $\nu$-stable (respectively, weakly symmetric) by Proposition \ref{prop:nu-silting-mutation} (respectively, Proposition \ref{prop:wekaly-symmetric-mutation}).
By repeated use of (1), we have a sequence of irreducible $\nu$-stable mutations
\begin{align}
M>L_{1}>L_{2}>\cdots (\ge N) \notag
\end{align}
in $\mbox{\rm silt}\hspace{.01in}^{\nu}[N,M]$.
Since the set $\mbox{\rm silt}\hspace{.01in}^{\nu}[N,M]$ is finite, there exists an integer $n>0$ such that $L_{n}=N$.
Hence we have the assertion.
\end{proof}
As an analog of \cite[Theorem 2.35]{AI12}, we compare the Hasse quiver of $(\mbox{\rm silt}\hspace{.01in}^{\nu} \mathcal{T},\ge)$ and the mutation quiver $Q(\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T})=(Q_{0},Q_{1})$ defined as
\begin{align}
&Q_{0}:=\mbox{\rm silt${}^\nu$}\hspace{.01in}\mathcal{T}, \notag\\
&Q_{1}:=\{ M \to N \mid \textnormal{$\mu_{X}(M)=N$ for some minimal $\nu$-stable $X$}\}. \notag
\end{align}
\begin{proposition}
Let $M,N$ be basic $\nu$-stable silting objects of $\mathcal{T}$.
Then the following statements are equivalent:
\begin{itemize}
\item[(1)] $N$ is an irreducible $\nu$-stable mutation of $M$.
\item[(2)] $M>N$ and there exists no $L\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$ satisfying $M>L>N$.
\end{itemize}
In particular, $Q(\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T})$ is isomorphic, as a quiver, to the Hasse quiver of $(\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}, \ge)$.
\end{proposition}
\begin{proof}
The proof is the same as in \cite[Theorem 2.35]{AI12} and \cite[Theorem 5.11]{CKL15}.
\end{proof}
\subsection{$\nu$-stable silting-discrete triangulated category}
In this subsection, we introduce the notion of $\nu$-stable silting-discrete triangulated categories, which gives a unification of silting-discrete triangulated categories and tilting-discrete bounded homotopy categories of finitely generated projective modules for self-injective algebras.
Recall the definition of silting-discrete triangulated categories, which was introduced in \cite{Ai13}.
A triangulated category $\mathcal{T}$ with a silting object is said to be \emph{silting-discrete} if for each $M\in \mbox{\rm silt}\hspace{.01in}\mathcal{T}$ and $d\in \mathbb{Z}_{\ge 0}$, the set
\begin{align}
\ssilt{(d+1)_{M}}\mathcal{T}:=\{ N\in \mbox{\rm silt}\hspace{.01in}\mathcal{T} \mid M \ge N \ge M[d]\} \notag
\end{align}
is finite.
Moreover, a finite dimensional algebra is called a \emph{silting-discrete algebra} if the bounded homotopy category $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is silting-discrete.
Similarly, we define \emph{tilting-discrete triangulated categories} and \emph{tilting-discrete algebras}.
Now we introduce the notion of $\nu$-stable silting-discrete triangulated categories.
\begin{definition}
\begin{itemize}
\item[(1)] Assume that $\mathcal{T}$ has a $\nu$-stable silting object.
A triangulated category $\mathcal{T}$ is said to be \emph{$\nu$-stable silting-discrete} if for each $M\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$ and $d\in \mathbb{Z}_{\ge 0}$, the set
\begin{align}
\ssilt{(d+1)_{M}}^{\nu}\mathcal{T} :=\{ N\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T} \mid M \ge N \ge M[d]\} \notag
\end{align}
is finite. Note that $\ssilt{1_{M}}^{\nu}\mathcal{T}=\{ M \}/\cong$.
\item[(2)] Let $A$ be a finite dimensional algebra and $\nu:\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)\to \mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ a triangle auto-equivalence.
We call $A$ a \emph{$\nu$-stable silting-discrete algebra} if $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is $\nu$-stable silting-discrete.
\end{itemize}
\end{definition}
The following example shows that $\nu$-stable silting-discrete triangulated categories unify silting-discrete triangulated categories and tilting-discrete self-injective algebras.
\begin{example}
\begin{itemize}
\item[(1)] Assume that $\nu$ is functorially isomorphic to the identity functor.
Then $\nu$-stable silting objects are exactly silting objects.
Moreover, $\nu$-stable silting-discrete triangulated categories coincide with silting-discrete triangulated categories.
\item[(2)] Let $A$ be a self-injective algebra. Then $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ has a Serre functor $\nu$.
By Proposition \ref{prop:selfinjective-tilting}, all tilting objects are $\nu$-stable silting objects.
Hence $A$ is $\nu$-stable silting-discrete if and only if it is tilting-discrete.
\end{itemize}
\end{example}
As a generalization of \cite[Proposition 2.14]{AH06} and \cite[Corollary 2.43]{AI12}, we provide an example of $\nu$-stable silting-discrete triangulated categories which plays an important role in the next section.
Let $M=\oplus_{i\in I}M_{i}$ be a basic $\nu$-stable object of $\mathcal{T}$, where each $M_{i}$ is indecomposable.
Define a permutation $v_{M}:I\to I$ by $\nu M_{i}\cong M_{v_{M}(i)}$.
We call $M$ a \emph{$\nu$-cyclic object} if $v_{M}$ acts transitively on $I$.
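As a minimal illustration of this definition (our own example, not taken from the cited references), consider a local symmetric algebra such as $A=K[x]/(x^{n})$ with $n\ge 2$:

```latex
% Our own illustrative example: A = K[x]/(x^n), n >= 2, is a local
% symmetric algebra, so A \cong \mathbb{D}A as bimodules and hence
\nu A \;=\; \mathbb{D}\operatorname{Hom}_{A}(A,A) \;\cong\; \mathbb{D}A \;\cong\; A.
% Since A_A is indecomposable, the index set I is a singleton and the
% permutation v_A trivially acts transitively on I; thus A is a
% \nu-cyclic silting object of K^b(proj A).
```

By Proposition \ref{prop:cyclic-silting}, it then follows that every $\nu$-stable silting object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is a shift of $A$.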
\begin{proposition}\label{prop:cyclic-silting}
Let $A$ be a $\nu$-cyclic silting object of $\mathcal{T}$.
Then we have
\begin{align}
\mbox{\rm silt}\hspace{.01in}^{\nu}(\mathcal{T})=\{ A[i]\mid i\in \mathbb{Z}\}. \notag
\end{align}
In particular, $\mathcal{T}$ is $\nu$-stable silting-discrete.
\end{proposition}
\begin{proof}
Let $M\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$.
By \cite[Proposition 2.23]{AI12}, there exist integers $m_{1}\le m_{2}$ such that $M\in \mathsf{add}\hspace{.01in} A[m_{1}]\ast \mathsf{add}\hspace{.01in} A[m_{1}+1]\ast\cdots\ast \mathsf{add}\hspace{.01in} A[m_{2}]$, $M\notin \mathsf{add}\hspace{.01in} A[m_{1}+1]\ast\cdots \ast\mathsf{add}\hspace{.01in} A[m_{2}]$ and $M\notin \mathsf{add}\hspace{.01in} A[m_{1}]\ast\cdots\ast\mathsf{add}\hspace{.01in} A[m_{2}-1]$.
Let $L:=M[-m_{1}]$ and $l:=m_{2}-m_{1}$.
Suppose $l \neq 0$.
By Lemma \ref{lem:int-mutation}, there exist two triangles
\begin{align}
&A'\xrightarrow{f} L\to L''\to A'[1],\notag\\
&L'\to L\xrightarrow{g} A''[l]\to L'[1]\notag
\end{align}
such that $A',A''\in \mathsf{add}\hspace{.01in} A$ are non-zero, $f$ is a minimal right $\mathsf{add}\hspace{.01in} A$-approximation and $g$ is a minimal left $\mathsf{add}\hspace{.01in} A[l]$-approximation.
Then it follows from \cite[Lemma 2.25]{AI12} that $\mathsf{add}\hspace{.01in} A'\cap \mathsf{add}\hspace{.01in} A''=\{ 0\}$.
On the other hand, by Lemma \ref{lem:isom-triang}, we have $\nu A'\cong A'$ and $\nu A''\cong A''$.
Since $A$ is $\nu$-cyclic, we obtain $\mathsf{add}\hspace{.01in} A'=\mathsf{add}\hspace{.01in} A=\mathsf{add}\hspace{.01in} A''$, a contradiction.
This implies $l=0$ and hence $M=A[m_{1}]$.
\end{proof}
The following proposition gives one of the nice properties of $\nu$-stable silting-discrete triangulated categories.
\begin{proposition}
Assume that $\mathcal{T}$ is $\nu$-stable silting-discrete.
If $M,N\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$ with $M> N$, then $N$ can be obtained from $M$ by iterated irreducible $\nu$-stable mutation.
\end{proposition}
\begin{proof}
By \cite[Proposition 2.23]{AI12}, there exists an integer $d$ such that $M> N\ge M[d+1]$.
Since $\mathcal{T}$ is $\nu$-stable silting-discrete, the set $\ssilt{d_{M}}^{\nu}\mathcal{T}$ is finite.
Hence the assertion follows from Proposition \ref{prop:local-property-nu-mutation}(2).
\end{proof}
Next, following \cite[Theorem 3.8]{Ai13} and \cite[Theorem 2.4]{AM17}, we give a characterization of $\nu$-stable silting-discrete triangulated categories.
\begin{theorem}\label{thm:equiv-nu-silting-discrete}
Let $\mathcal{T}$ be a triangulated category with a $\nu$-stable silting object.
Then the following statements are equivalent.
\begin{itemize}
\item[(1)] $\mathcal{T}$ is $\nu$-stable silting-discrete.
\item[(2)] $\mathcal{T}$ admits a $\nu$-stable silting object $A$ such that, for each integer $d\ge 0$, the set $\ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$ is finite.
\item[(3)] Fix any basic $\nu$-stable silting object $A$. For each object $M$ obtained by a finite sequence of irreducible $\nu$-stable mutations from $A$, the set $\ssilt{2_{M}}^{\nu}\mathcal{T}$ is finite.
\end{itemize}
\end{theorem}
For the convenience of the reader, we give a proof of Theorem \ref{thm:equiv-nu-silting-discrete}.
We need the following lemma.
\begin{lemma}\label{lem:technical-lem}
Fix a basic $\nu$-stable silting object $A$ of $\mathcal{T}$ and an integer $d\ge 1$.
Let $M\in \ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$.
Then the following statements hold.
\begin{itemize}
\item[(1)] Let $A'\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ with $A'\neq A[1]$ and $A'\ge M \ge A[d]$.
If $M$ is not in $\ssilt{d_{A'}}^{\nu}\mathcal{T}$, then there exists an irreducible $\nu$-stable mutation $A''$ of $A'$ such that $A'>A''\ge \{ M,A[1] \}$.
\item[(2)] If $\ssilt{2_{A}}^{\nu}\mathcal{T}$ is a finite set, then there exists $N\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ such that $M\in \ssilt{d_{N}}^{\nu}\mathcal{T}$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) By our assumption, $M\in \ssilt{(d+1)_{A'}}^{\nu}\mathcal{T}\setminus \ssilt{d_{A'}}^{\nu}\mathcal{T}$.
Then it follows from Lemma \ref{lem:int-mutation} that there exists a triangle
\begin{align}
M'\to M \xrightarrow{f} P'[d]\xrightarrow{f'}M'[1] \notag
\end{align}
such that $M'\in \mathsf{add}\hspace{.01in} A'\ast \cdots \ast \mathsf{add}\hspace{.01in} A'[d-1]$, $0\neq P'\in \mathsf{add}\hspace{.01in} A'$, $f$ is a minimal left $\mathsf{add}\hspace{.01in} A'[d]$-approximation of $M$ and $f'$ belongs to the radical of $\mathcal{T}$.
By Lemma \ref{lem:isom-triang}, $P'$ is $\nu$-stable.
Moreover, by a similar argument, $A[1]\in \ssilt{2_{A'}}^{\nu}\mathcal{T}\setminus \ssilt{1_{A'}}^{\nu}\mathcal{T}$ induces a triangle
\begin{align}
Q'\xrightarrow{g'}R'\xrightarrow{g''} A[1]\xrightarrow{g} Q'[1]\notag
\end{align}
such that $R',Q'\in \mathsf{add}\hspace{.01in} A'$ are $\nu$-stable, $g$ is a minimal left $\mathsf{add}\hspace{.01in} A'[1]$-approximation of $A[1]$ and $g'$ is in the radical of $\mathcal{T}$.
Since $A$ and $A'$ are silting, so is $Q'\oplus R'$.
This implies $\mathsf{add}\hspace{.01in} A'=\mathsf{add}\hspace{.01in}(Q'\oplus R')$.
On the other hand, it follows from \cite[Lemma 2.25]{AI12} that $\mathsf{add}\hspace{.01in} P'\cap \mathsf{add}\hspace{.01in} R'=\{ 0\}$.
Hence we obtain $P'\in \mathsf{add}\hspace{.01in} Q'$.
Take a minimal $\nu$-stable direct summand $X$ of $P'$.
By Lemma \ref{lem:int-mutation}(2), $A'> \mu_{X}(A') \geq\{ M,A[1]\}$.
Hence, putting $A'':=\mu_{X}(A')$, we have the assertion.
(2) If $M\in \ssilt{d_{A}}^{\nu}\mathcal{T}$, then there is nothing to prove.
In the following, we assume $M\notin \ssilt{d_{A}}^{\nu}\mathcal{T}$.
By (1), there exists an irreducible $\nu$-stable mutation $A'$ of $A$ such that $A>A'\ge \{ M,A[1] \}$.
Hence $A'\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ and $M\in \ssilt{(d+1)_{A'}}^{\nu}\mathcal{T}$.
If $M\in \ssilt{d_{A'}}^{\nu}\mathcal{T}$, then we obtain the desired result.
We assume $M\notin \ssilt{d_{A'}}^{\nu}\mathcal{T}$.
If $A'=A[1]$, then $M\in \ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$ implies $\operatorname{Hom}\nolimits(M,A[d])=0$ and hence $M\in \ssilt{d_{A'}}^{\nu}\mathcal{T}$, a contradiction.
Therefore we assume $A'\neq A[1]$.
By (1), there exists an irreducible $\nu$-stable mutation $A''$ of $A'$ such that $A'>A''\ge \{ M, A[1]\}$.
Thus, we obtain $A''\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ and $M\in\ssilt{(d+1)_{A''}}^{\nu}\mathcal{T}$.
By repeated use of this argument, we have a sequence of irreducible $\nu$-stable mutations in $\ssilt{2_{A}}^{\nu}\mathcal{T}$.
Since $\ssilt{2_{A}}^{\nu}\mathcal{T}$ is finite, this procedure stops after a finite number of steps.
Therefore, there exists $N\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ such that $M\in \ssilt{d_{N}}^{\nu}\mathcal{T}$.
This finishes the proof.
\end{proof}
Now we are ready to prove Theorem \ref{thm:equiv-nu-silting-discrete}.
\begin{proof}[Proof of Theorem \ref{thm:equiv-nu-silting-discrete}]
(1)$\Rightarrow$(2) and (1)$\Rightarrow$(3) clearly hold.
(2)$\Rightarrow$(1): The proof is the same as in \cite[Proposition 3.8]{Ai13}.
(3)$\Rightarrow$(2): We show that $\ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$ is finite for all $d\ge 0$.
If $d\le 1$, then this is clear.
Let $d\ge 2$ and $M\in \ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$.
By assumption, the set $\ssilt{2_{A}}^{\nu}\mathcal{T}$ is finite.
Thus it follows from Lemma \ref{lem:technical-lem}(2) that there exists $A_{1}\in \ssilt{2_{A}}^{\nu}\mathcal{T}$ such that $M\in\ssilt{d_{A_{1}}}^{\nu}\mathcal{T}$.
Since $\mbox{\rm silt}\hspace{.01in}^{\nu}[A_{1},A]$ is a finite set, it follows from Proposition \ref{prop:local-property-nu-mutation}(2) that $A_{1}$ is obtained by a finite sequence of irreducible $\nu$-stable mutations from $A$.
Hence $\ssilt{2_{A_{1}}}^{\nu}\mathcal{T}$ is also a finite set.
By repeated use of this argument, we have
\begin{align}
\displaystyle \ssilt{(d+1)_{A_{0}}}^{\nu}\mathcal{T}\subseteq \bigcup_{A_{1}\in\ssilt{2_{A_{0}}}^{\nu}\mathcal{T}}\bigcup_{A_{2}\in\ssilt{2_{A_{1}}}^{\nu}\mathcal{T}}\cdots\bigcup_{A_{d-1}\in\ssilt{2_{A_{d-2}}}^{\nu}\mathcal{T}}\ssilt{2_{A_{d-1}}}^{\nu}\mathcal{T}, \notag
\end{align}
where $A_{0}:=A$.
By construction, the set $\ssilt{2_{A_{i}}}^{\nu}\mathcal{T}$ is finite for each $i\ge 0$.
This implies that the set $\ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$ is finite.
\end{proof}
We can recover \cite[Theorem 1.2]{AM17}.
\begin{corollary}\label{cor:AM17-thm}
Let $A$ be a finite dimensional algebra $($respectively, a finite dimensional self-injective algebra$)$ and $\nu$ the identity functor $($respectively, a Serre functor$)$.
Then the following conditions are equivalent.
\begin{itemize}
\item[(1)] $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is silting-discrete $($respectively, tilting-discrete$)$.
\item[(2)] For each basic silting $($respectively, tilting$)$ object $M$ obtained by a finite sequence of irreducible $\nu$-stable mutations from $A$, the set $\ssilt{2_{M}}^{\nu}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is finite.
\end{itemize}
\end{corollary}
As an application, we give an example of $\nu$-stable silting-discrete triangulated categories.
\begin{example}
Let $A$ be a representation-finite self-injective algebra over an algebraically closed field and $\nu$ a Serre functor.
Then $\ssilt{2_{A}}^{\nu}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is a finite set.
Since the class of representation-finite self-injective algebras is derived invariant, $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ is $\nu$-stable silting-discrete, and hence tilting-discrete.
\end{example}
The following theorem is one of the main results of this paper.
\begin{theorem}\label{thm:weakly-symmetric-silting-discrete}
Assume that $\mathcal{T}$ admits a weakly symmetric silting object.
\begin{itemize}
\item[(1)] The following statements are equivalent:
\begin{itemize}
\item[(a)] $\mathcal{T}$ is silting-discrete.
\item[(b)] $\mathcal{T}$ is $\nu$-stable silting-discrete.
\end{itemize}
In this case, all silting objects are weakly symmetric.
\item[(2)] Moreover, if $\nu$ is a Serre functor, then the following statement is also equivalent to $(a)$ and $(b)$:
\begin{itemize}
\item[(c)] $\mathcal{T}$ is tilting-discrete.
\end{itemize}
In this case, all silting objects are tilting.
\end{itemize}
\end{theorem}
\begin{proof}
(1) (a)$\Rightarrow$(b) is clear.
We prove (b)$\Rightarrow$(a).
We show $\mbox{\rm silt}\hspace{.01in}\mathcal{T}=\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$.
Let $M\in \mbox{\rm silt}\hspace{.01in}\mathcal{T}$.
Take a basic weakly symmetric silting object $A$ of $\mathcal{T}$.
Due to \cite[Proposition 2.23]{AI12}, there exists an integer $n>0$ such that $A[-n]\ge M\ge A[n]$.
Then $A':=A[-n]$ is clearly a weakly symmetric silting object, and $\ssilt{(2n+1)_{A'}}^{\nu}\mathcal{T}$ is finite by (b).
Therefore it follows from Proposition \ref{prop:local-property-nu-mutation}(2) that $M$ is weakly symmetric.
In particular, $M\in \mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$.
Hence we obtain $\mbox{\rm silt}\hspace{.01in}\mathcal{T}=\mbox{\rm silt}\hspace{.01in}^{\nu}\mathcal{T}$.
This implies that, for each $d\ge 0$, the set $\ssilt{(d+1)_{A}}\mathcal{T}=\ssilt{(d+1)_{A}}^{\nu}\mathcal{T}$ is a finite set by (b).
Applying Theorem \ref{thm:equiv-nu-silting-discrete} to the case where $\nu=\mathrm{id}$, we conclude that $\mathcal{T}$ is silting-discrete.
(2) Assume that $\nu$ is a Serre functor.
By Proposition \ref{prop:nu-silting-tilting}, (a)$\Rightarrow$(c)$\Rightarrow$(b) holds.
Hence the assertion follows from (1).
\end{proof}
As an application of Theorem \ref{thm:weakly-symmetric-silting-discrete}, we have the following result.
\begin{corollary}\label{cor:weakly-symmetric-silting-discrete}
Let $A$ be a weakly symmetric algebra.
Then $A$ is silting-discrete if and only if it is tilting-discrete.
\end{corollary}
\begin{proof}
Clearly, $A$ is a weakly symmetric silting object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
On the other hand, since a weakly symmetric algebra is an Iwanaga--Gorenstein algebra, $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ admits a Serre functor $\nu$.
Therefore the assertion follows from Theorem \ref{thm:weakly-symmetric-silting-discrete}(2).
\end{proof}
We give concrete examples for Corollary \ref{cor:weakly-symmetric-silting-discrete}.
\begin{example}\label{ex:preprojective}
Let $A$ be the preprojective algebra of one of Dynkin diagrams $\mathbf{D}_{2n}$, $\mathbf{E}_{7}$ and $\mathbf{E}_{8}$.
Then $A$ is weakly symmetric (see \cite{BBK02}) and tilting-discrete by \cite[Theorem 1.3]{AM17}.
By Corollary \ref{cor:weakly-symmetric-silting-discrete}, $A$ is silting-discrete.
\end{example}
Note that Example \ref{ex:preprojective} can also be deduced from \cite[Theorem 1.1]{AM17}, because $\Delta=\Delta^{\mathrm{f}}$ holds for $\Delta=\mathbf{D}_{2n}$, $\mathbf{E}_{7}$ or $\mathbf{E}_{8}$.
\section{The first example: trivial tilting-discrete case}
Let $A$ be a basic connected non-semisimple self-injective algebra over an algebraically closed field $K$ and let $\nu=\nu_{A}:=\mathbb{D}\operatorname{Hom}\nolimits_{A}(-,A)$ be the Nakayama functor. Note that $\nu$ is a Serre functor on $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
Our aim in this section is to construct a self-injective algebra $\widetilde{A}$ satisfying the following two properties:
\begin{itemize}
\item $\mbox{\rm tilt}\hspace{.01in}{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A})}=\{ A[i]\mid i\in \mathbb{Z}\}$. In particular, $\widetilde{A}$ is tilting-discrete.
\item $\widetilde{A}$ is not silting-discrete.
\end{itemize}
Let $Q=(Q_{0},Q_{1})$ be a finite quiver, where $Q_{0}$ is the vertex set and $Q_{1}$ is the arrow set of $Q$.
We denote by $KQ_{l}$ the subspace of $KQ$ generated by all paths of length $l$.
Define a new quiver $\widetilde{Q}=(\widetilde{Q}_{0},\widetilde{Q}_{1})$ as $\widetilde{Q}_{0}:=Q_{0}$ and $\widetilde{Q}_{1}:=Q_{1}^{+}\coprod Q_{1}^{-}$, where $Q_{1}^{+}:=\{ a^{+}\mid a\in Q_{1}\}$ and $Q_{1}^{-}:=\{ a^{-}\mid a\in Q_{1}\}$.
The correspondences $a\mapsto a^{\pm}$ induce $K$-linear isomorphisms $(-)^{\pm}:KQ_{1}\to KQ_{1}^{\pm}$ and moreover, they are extended to $K$-linear isomorphisms $(-)^{\pm}:\oplus_{l\ge 1}KQ_{l}\to\oplus_{l\ge 1} KQ_{l}^{\pm}$.
Assume that $A=KQ/I$ is self-injective, where $I$ is an admissible ideal of $KQ$.
Let $I^{d}:=\langle a^{+}b^{-}, a^{-}b^{+}\mid a,b\in Q_{1}\rangle_{K\widetilde{Q}}$ and $I^{c}:=\oplus_{i\in Q_{0}}K(p_{i}^{+}-p_{i}^{-})$, where $p_{i}\notin I$ is a path in $KQ$ of maximal length starting from $i\in Q_{0}$.
Define a subspace $\widetilde{I}$ of $K\widetilde{Q}$ by $\widetilde{I}:=I^{+}+I^{-}+I^{d}+I^{c}$.
Then we have the following lemma.
\begin{lemma}
The following statements hold.
\begin{itemize}
\item[(1)] The subspace $\widetilde{I}$ is a two-sided ideal of $K\widetilde{Q}$.
\item[(2)] If $\operatorname{soc}\nolimits P(i)\subset \operatorname{rad}\nolimits^{2}A$ for each $i\in Q_{0}$, then $\widetilde{I}$ is admissible.
\end{itemize}
\end{lemma}
\begin{proof}
(1) We can easily check that $I^{+}+I^{-}+I^{d}$ is a two-sided ideal and that $\widetilde{I}$ is a right ideal.
To complete the proof, we show $a^{\pm}(p_{i}^{+}-p_{i}^{-})\in \widetilde{I}$, that is, $a^{+}p_{i}^{+}, a^{-}p_{i}^{-}\in \widetilde{I}$ for all $i\in Q_{0}$ and $a\in Q_{1}$.
Thus it suffices to show that $ap_{i}\in I$.
Indeed, if it is true, then $a^{\pm}p_{i}^{\pm}=(ap_{i})^{\pm}\in I^{\pm}$.
Suppose to the contrary that $ap_{i}\notin I$.
If $a$ is a loop, this contradicts the maximality of the length of $p_{i}$.
On the other hand, if $a:h\to i$ is not a loop, then $ap_{i}\in \operatorname{soc}\nolimits P(h)$.
This implies $\operatorname{soc}\nolimits P(i)=\operatorname{soc}\nolimits P(h)$, which contradicts the self-injectivity of $A$.
(2) For each $i\in Q_{0}$, the length of $p_{i}$ is at least two by our assumption.
Since $I$ is an admissible ideal, we have the assertion.
\end{proof}
The following theorem is one of the main results of this paper.
\begin{theorem}\label{thm:local-tilting}
Assume that $\operatorname{soc}\nolimits P(i)\subset \operatorname{rad}\nolimits^{2}A$ for each $i\in Q_{0}$.
Let $\widetilde{A}:=K\widetilde{Q}/\widetilde{I}$.
Then the following statements hold.
\begin{itemize}
\item[(1)] $\widetilde{A}$ is a basic self-injective algebra.
\item[(2)] $\widetilde{A}$ is not silting-discrete.
\item[(3)] $A$ is a $\nu_{A}$-cyclic object in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ if and only if $\widetilde{A}$ is a $\nu_{\widetilde{A}}$-cyclic object in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A})$.
\item[(4)] If the equivalent conditions in \textup{(3)} are satisfied, then we have
\begin{align}
\mbox{\rm tilt}\hspace{.01in} \mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A}) =\{ \widetilde{A}[i]\mid i\in \mathbb{Z}\}.\notag
\end{align}
In particular, $\widetilde{A}$ is a tilting-discrete algebra.
\end{itemize}
\end{theorem}
By \cite[Theorem 3.2]{AIR14}, we can translate results of two-term silting theory into results of $\tau$-tilting theory, and vice versa.
Recall that a finite dimensional algebra $\Lambda$ is $\tau$-tilting finite if and only if $\ssilt{2_{\Lambda}}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \Lambda)$ is a finite set. Moreover, it follows from \cite[Corollary 1.9]{DIRRT} that if $\Lambda$ is $\tau$-tilting finite, then all of its factor algebras are also $\tau$-tilting finite.
\begin{proof}
The statements (1) and (3) will be proved in a more general setting in the rest of this section.
(2) Note that $Q$ is a Dynkin quiver if and only if $KQ$ is $\tau$-tilting finite (for example, see \cite[Theorem 2.6]{Ad16}).
Since $\widetilde{A}$ admits the path algebra of the Kronecker quiver as a factor algebra, the set $\ssilt{2_{\widetilde{A}}}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A})$ is not finite.
Hence $\widetilde{A}$ is not a silting-discrete algebra.
(4) By the assumption, $\widetilde{A}$ is a $\nu_{\widetilde{A}}$-cyclic object in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A})$.
Thus the assertion follows from Propositions \ref{prop:selfinjective-tilting} and \ref{prop:cyclic-silting}.
\end{proof}
Before proving Theorem \ref{thm:local-tilting}(1) and (3), we give an example of $\widetilde{A}$.
\begin{example}
Let $A=KQ/I$ be a self-injective Nakayama algebra, where $Q=(\xymatrix{1\ar@<0.5ex>[r]^-{a}&2\ar@<0.5ex>[l]^-{b}})$ and $I=\langle abab,baba \rangle$.
Then $\widetilde{A}=K\widetilde{Q}/\widetilde{I}$ is given by
\begin{align}
\widetilde{Q}=\xymatrix{1\ar@<3ex>[r]^-{a^{-}}\ar@<0.5ex>[r]^-{a^{+}}&2\ar@<0.5ex>[l]^-{b^{+}}\ar@<3ex>[l]^-{b^{-}}} \notag
\end{align}
and
\begin{align}
\widetilde{I}=\langle a^{+}b^{+}a^{+}b^{+}, b^{+}a^{+}b^{+}a^{+}, a^{-}b^{-}a^{-}b^{-},b^{-}a^{-}b^{-}a^{-}, &a^{+}b^{-},a^{-}b^{+}, b^{+}a^{-},b^{-}a^{+},\notag\\
&a^{+}b^{+}a^{+}-a^{-}b^{-}a^{-}, b^{+}a^{+}b^{+}-b^{-}a^{-}b^{-}\rangle.\notag
\end{align}
Thus we obtain
\begin{align}
A_{A}=\begin{smallmatrix}1\\2\\1\\2\end{smallmatrix}\oplus\begin{smallmatrix}2\\1\\2\\1\end{smallmatrix},\hspace{5mm}
\widetilde{A}_{\widetilde{A}}=\begin{smallmatrix}&1&\\2&&2\\1&&1\\&2&\end{smallmatrix}\oplus\begin{smallmatrix}&2&\\1&&1\\2&&2\\&1&\end{smallmatrix}.\notag
\end{align}
Since $\widetilde{Q}$ contains a pair of parallel arrows, $\widetilde{A}$ is not silting-discrete.
On the other hand, we can easily check from the Loewy structures above that $\nu_{\widetilde{A}}$ interchanges $P(1)$ and $P(2)$; hence $\widetilde{A}$ is a $\nu_{\widetilde{A}}$-cyclic object of $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} \widetilde{A})$ and is therefore a tilting-discrete algebra.
\end{example}
From now on, we provide a proof of Theorem \ref{thm:local-tilting}(1) and (3) in a more general setting.
We start with recalling some properties of basic self-injective algebras.
For details, see \cite[Chapter IV]{SY11}.
A basic self-injective $K$-algebra $\Lambda$ admits a non-degenerate associative $K$-bilinear form $(-,-)_{\Lambda}:\Lambda\times \Lambda\to K$.
This property induces an algebra automorphism $\mathrm{v}_{\Lambda}:\Lambda\to \Lambda$ with $(\mathrm{v}_{\Lambda}(a),b)_{\Lambda}=(b,a)_{\Lambda}$ for all $a,b\in \Lambda$.
We call $\mathrm{v}_{\Lambda}$ a \emph{Nakayama automorphism}.
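For orientation, we recall a standard special case (stated here as a reminder; it is not used in the argument below): for a symmetric algebra the form $(-,-)_{\Lambda}$ may be chosen symmetric, and then the Nakayama automorphism is trivial:

```latex
% Standard special case: if \Lambda is symmetric, choose (-,-)_\Lambda with
% (a,b)_\Lambda = (b,a)_\Lambda for all a,b. Then
(\mathrm{v}_{\Lambda}(a),b)_{\Lambda} \;=\; (b,a)_{\Lambda} \;=\; (a,b)_{\Lambda}
\quad\text{for all } a,b\in\Lambda,
% so (\mathrm{v}_\Lambda(a)-a, b)_\Lambda = 0 for all b. By non-degeneracy,
% \mathrm{v}_\Lambda = \mathrm{id}_\Lambda, and hence every subspace of
% \Lambda is \mathrm{v}_\Lambda-stable.
```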
A subspace $\Gamma$ of $\Lambda$ is said to be \emph{$\mathrm{v}_{\Lambda}$-stable} if $\mathrm{v}_{\Lambda}(\Gamma)= \Gamma$.
Let $J$ be a two-sided ideal of $\Lambda$ and $\Gamma$ a subalgebra with $J\subset \Gamma$.
In our convention, the identity of $\Gamma$ coincides with that of $\Lambda$.
Define vector spaces $J',J''$ by
\begin{align}
&J':=\{ \lambda\in \Lambda \mid (J,\lambda)_{\Lambda}=0\},\notag\\
&J'':=\{ \lambda\in \Lambda\mid (\Gamma,\lambda)_{\Lambda}=0\}. \notag
\end{align}
Since $J$ is a two-sided ideal, we can easily check that $J'$ is a two-sided ideal of $\Lambda$.
The following lemma plays an important role in this section.
\begin{lemma}\label{lem:vect-bilinear}
Let $(-,-):=(-,-)_{\Lambda}$ be a non-degenerate associative $K$-bilinear form and $\mathrm{v}:=\mathrm{v}_{\Lambda}$ the Nakayama automorphism associated with $(-,-)$.
Then the restriction $(-,-)_{\Gamma}:=(-,-)|_{\Gamma\times\Gamma}$ is a $K$-bilinear form.
Moreover, if $J'\subseteq J$, then the following statements hold.
\begin{itemize}
\item[(1)] $J''$ is a vector subspace of $\Gamma$.
\item[(2)] If $\Gamma$ is $\mathrm{v}$-stable, then so is $J''$ and the following statements are equivalent for $\gamma\in \Gamma$.
\begin{itemize}
\item[(a)] $(\gamma,-)_{\Gamma}=0$.
\item[(b)] $\gamma\in J''$.
\item[(c)] $(-,\gamma)_{\Gamma}=0$.
\end{itemize}
In particular, the induced $K$-bilinear form $(-,-)_{\Gamma/J''}:\Gamma/J''\times\Gamma/J''\to K$ is non-degenerate.
\end{itemize}
\end{lemma}
\begin{proof}
Since the former assertion clearly holds, we show the latter assertion.
In the following, we assume $J'\subseteq J$.
(1) By the assumption, we have $J''\subseteq J'\subseteq J\subseteq \Gamma$ as $K$-vector spaces.
(2) Assume that $\Gamma$ is $\mathrm{v}$-stable.
First, we show that $J''$ is $\mathrm{v}$-stable.
Let $j\in J''$.
Then $(\Gamma,j)=0$.
Since $\Gamma$ is $\mathrm{v}$-stable, we have $(\Gamma, \mathrm{v}^{\pm}(j))=(\mathrm{v}^{\mp}(\Gamma),j)=(\Gamma,j)=0$.
Hence $\mathrm{v}^{\pm}(j)\in J''$.
This implies that $J''$ is $\mathrm{v}$-stable.
Next, we prove that the conditions (a), (b) and (c) are equivalent to each other.
(a)$\Rightarrow$(b): By (a), we have $(\Gamma,\mathrm{v}^{-}(\gamma))=(\gamma,\Gamma)=0$.
Thus $\mathrm{v}^{-}(\gamma)\in J''$.
Since $J''$ is $\mathrm{v}$-stable, we obtain $\gamma\in J''$.
(b)$\Rightarrow$(c): This follows from the definition of $J''$.
(c)$\Rightarrow$(a): Since $\Gamma$ is $\mathrm{v}$-stable, we obtain $(\gamma,\Gamma)_{\Gamma}=(\gamma,\Gamma)=(\mathrm{v}(\Gamma),\gamma)=0$.
\end{proof}
By the lemma above, we can construct a self-injective algebra as follows.
\begin{proposition}\label{prop:const-frob}
Keep the notation in Lemma \ref{lem:vect-bilinear}.
Assume that $\Gamma$ is $\mathrm{v}$-stable and $J'\subseteq J$.
Then the following statements hold.
\begin{itemize}
\item[(1)] $J''$ is a two-sided ideal of $\Gamma$.
\item[(2)] $\Gamma/J''$ is a Frobenius algebra, and hence a self-injective algebra.
\end{itemize}
\end{proposition}
\begin{proof}
(1) It is enough to show that $\gamma j,\ j\gamma' \in J''$ for each $j\in J''$ and $\gamma,\gamma'\in \Gamma$.
By the associativity, we obtain $(-,\gamma j)=((-)\gamma,j)$ and
\begin{align}
(-,j\gamma')=((-)j,\gamma')=(\mathrm{v}(\gamma'),(-)j)=(\mathrm{v}(\gamma')(-),j).\notag
\end{align}
By $j\in J''$, we have $(-,\gamma j)_{\Gamma}=0$ and $(-,j\gamma')_{\Gamma}=0$, where the second equation follows from the fact that $\Gamma$ is $\mathrm{v}$-stable.
Hence $\gamma j, j\gamma'\in J''$.
(2) By Lemma \ref{lem:vect-bilinear}(2), $(-,-)_{\Gamma/J''}$ is a non-degenerate associative $K$-bilinear form.
Hence $\Gamma/J''$ is a Frobenius algebra by \cite{Nak39,Nak41} (see also \cite[Theorem IV.2.1]{SY11}).
\end{proof}
In the following, by using Proposition \ref{prop:const-frob}, we prove Theorem \ref{thm:local-tilting}(1).
For a basic connected non-semisimple self-injective $K$-algebra $A=KQ/I$, let $\Lambda:=A\times A$ and $J:=\operatorname{rad}\nolimits \Lambda=\operatorname{rad}\nolimits A\times \operatorname{rad}\nolimits A$.
Then $\Lambda$ is also a self-injective algebra with $((a_{1},b_{1}),(a_{2},b_{2}))_{\Lambda}=(a_{1},a_{2})_{A}+(b_{1},b_{2})_{A}$ for all $(a_{1},b_{1}),(a_{2},b_{2})\in \Lambda$ and $\mathrm{v}_{\Lambda}=\mathrm{v}_{A}\times \mathrm{v}_{A}$ (see for example \cite[Chapter IV]{SY11}).
Define a subset $\Gamma$ by
\begin{align}
\Gamma:=\{ (a,a')\in \Lambda\mid a-a'\in \operatorname{rad}\nolimits A \}.\notag
\end{align}
Then $\Gamma$ is a subalgebra of $\Lambda$ with $1_{\Gamma}=1_{\Lambda}$ and $J\subset \Gamma$.
Let $J':=\{ (a,a')\in\Lambda\mid (J,(a,a'))_{\Lambda}=0\}$ and $J'':=\{ (a,a')\in\Lambda\mid (\Gamma,(a,a'))_{\Lambda}=0\}$.
The following lemma shows that $\Gamma$ and $J'$ satisfy the assumptions of Proposition \ref{prop:const-frob}.
\begin{lemma}\label{lem:key-thm32-1}
Under the notation above, the following statements hold.
\begin{itemize}
\item[(1)] $\Gamma$ is $\mathrm{v}_{\Lambda}$-stable.
\item[(2)] $J'=\operatorname{soc}\nolimits \Lambda=\operatorname{soc}\nolimits A\times \operatorname{soc}\nolimits A$ and $J''=\{ (s,-s)\mid s\in \operatorname{soc}\nolimits A \}$.
\item[(3)] $\Gamma/J''$ is a self-injective algebra.
\item[(4)] $A$ is a $\nu_{A}$-cyclic object in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$ if and only if $\Gamma/J''$ is a $\nu_{\Gamma/J''}$-cyclic object in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} (\Gamma/J''))$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) This follows from the fact that $a-a'\in\operatorname{rad}\nolimits A$ if and only if $\mathrm{v}_{A}(a)-\mathrm{v}_{A}(a')\in\operatorname{rad}\nolimits A$.
(2) First we show $J'=\operatorname{soc}\nolimits \Lambda$.
Let $\lambda\in \Lambda$.
By the associativity of $(-,-)_{\Lambda}$, $\lambda\in J'$ if and only if $(-,r\lambda)_{\Lambda}=0$ for all $r\in\operatorname{rad}\nolimits\Lambda$.
Since $(-,-)_{\Lambda}$ is non-degenerate, we have $(-,r\lambda)_{\Lambda}=0$ if and only if $r\lambda=0$.
By $\operatorname{soc}\nolimits \Lambda_{\Lambda}=\operatorname{soc}\nolimits_{\Lambda}\Lambda$, $\lambda\in J'$ if and only if $\lambda\in \operatorname{soc}\nolimits \Lambda$.
Next, we show $J''=\{ (s,-s)\mid s\in\operatorname{soc}\nolimits A\}$.
Let $(s,s')\in J'$.
Then $(s,s')\in J''$ if and only if $((a,a'),(s,s'))_{\Lambda}=0$ for each $(a,a')\in \Gamma$.
By $a-a'\in\operatorname{rad}\nolimits A$, there exists $r\in \operatorname{rad}\nolimits A$ such that $a'=a+r$.
Thus we obtain
\begin{align}
0=((a,a'),(s,s'))_{\Lambda}=(a,s)_{A}+(a',s')_{A}=(a,s+s')_{A}+(r,s')_{A}=(a,s+s')_{A}+(1_{A},rs')_{A}.\notag
\end{align}
By $\operatorname{soc}\nolimits A_{A}=\operatorname{soc}\nolimits {}_{A}A$, we have $rs'=0$ and hence $(a,s+s')_{A}=0$.
Thus $(s,s')\in J''$ if and only if $(a,s+s')_{A}=0$ for each $a\in A$.
Since $(-,-)_{A}$ is non-degenerate, $(-,s+s')_{A}=0$ if and only if $s+s'=0$.
Hence $J''=\{ (s,-s)\mid s\in \operatorname{soc}\nolimits A \}$.
(3) By (1), $\Gamma$ is $\mathrm{v}_{\Lambda}$-stable.
Since $A$ is a connected non-semisimple self-injective algebra, we have $\operatorname{soc}\nolimits A \subset \operatorname{rad}\nolimits A$.
Hence $J'\subset J$ by (2). Thus the assertion follows from Proposition \ref{prop:const-frob}(2).
(4) Since $K$ is an algebraically closed field, for primitive idempotents $e,f\in A$ we have $eA\cong fA$ if and only if $e-f\in \operatorname{rad}\nolimits A$.
Let $e_{i}$ be the primitive idempotent of $A$ corresponding to a vertex $i\in Q_{0}$.
Then we have $\nu_{A}(e_{i}A)\cong e_{v_{A}(i)}A$, where $v_{A}:Q_{0}\to Q_{0}$ is a Nakayama permutation of $A$.
On the other hand, we obtain $\nu_{A}(e_{i}A)\cong \mathrm{v}_{A}(e_{i})A$ (see \cite[Corollary IV.3.14]{SY11}).
Hence $\mathrm{v}_{A}(e_{i})A\cong e_{v_{A}(i)}A$.
Thus there exists $r\in \operatorname{rad}\nolimits A$ such that $\mathrm{v}_{A}(e_{i})=e_{v_{A}(i)}+r$.
Since
\begin{align}
\mathrm{v}_{\Gamma/J''}((e_{i},e_{i})+J'')
&=(\mathrm{v}_{A}(e_{i}),\mathrm{v}_{A}(e_{i}))+J''=(e_{v_{A}(i)}+r,e_{v_{A}(i)}+r)+J''\notag\\
&=((e_{v_{A}(i)},e_{v_{A}(i)})+J'')+((r,r)+J''),\notag
\end{align}
we have $\mathrm{v}_{\Gamma/J''}((e_{i},e_{i})+J'')-((e_{v_{A}(i)},e_{v_{A}(i)})+J'')\in \operatorname{rad}\nolimits (\Gamma/J'')$.
This implies $A$ is $\nu_{A}$-cyclic if and only if $\Gamma/J''$ is $\nu_{\Gamma/J''}$-cyclic.
\end{proof}
Comparing $\widetilde{A}$ and $\Gamma/J''$, we complete the proof of Theorem \ref{thm:local-tilting}.
\begin{proposition}\label{prop:key-thm32}
We have an algebra isomorphism $\varphi:\widetilde{A}\to\Gamma/J''$.
\end{proposition}
\begin{proof}
First, we construct an algebra homomorphism $\psi:K\widetilde{Q}\to \Gamma$ which is surjective.
Decompose the identity $1_{A}$ in $A$ as $1_{A}=\sum_{i\in Q_{0}}e_{i}$, where $e_{i}\in A$ is the primitive idempotent corresponding to a vertex $i\in Q_{0}$.
Let $\psi_{0}:\widetilde{Q}_{0}\to \Gamma$ be the map defined by $\psi_{0}(i):=(e_{i},e_{i})$ for $i\in \widetilde{Q}_{0}$, and $\psi_{1}:\widetilde{Q}_{1}\to \Gamma$ the map defined by $\psi_{1}(\alpha^{+}):=(\alpha,0)$ and $\psi_{1}(\alpha^{-}):=(0,\alpha)$ for $\alpha\in Q_{1}$.
Then we can easily check that
\begin{itemize}
\item[(i)] $1_{\Gamma}=\sum_{i\in\widetilde{Q}_{0}}\psi_{0}(i)$ and $\psi_{0}(i)\psi_{0}(j)=\begin{cases}\psi_{0}(i)&(i=j)\\0&(i\neq j)\end{cases}$,
\item[(ii)] for each $\alpha:i\to j$ in $Q_{1}$, $\psi_{1}(\alpha^{\pm})=\psi_{0}(i)\psi_{1}(\alpha^{\pm})\psi_{0}(j)$.
\end{itemize}
By \cite[Theorem II.1.8]{ASS06}, there exists an algebra homomorphism $\psi:K\widetilde{Q}\to \Gamma$ that extends $\psi_{0}$ and $\psi_{1}$. Note that $\psi$ is surjective.
Composing $\psi$ with the natural surjection $\Gamma\to \Gamma/J''$, we obtain a surjective map $\varphi:K\widetilde{Q}\to \Gamma/J''$.
Next, we show $\varphi(\widetilde{I})=0$.
Since $\psi(I^{\pm})=0$ and $\psi(I^{d})=0$, it is enough to show that $\psi(p_{i}^{+}-p_{i}^{-})\in J''$, or equivalently $\varphi(p_{i}^{+}-p_{i}^{-})=0$, where $p_{i}\notin I$ is a path in $KQ$ of maximal length starting from $i\in Q_{0}$.
Note that $p_{i}\in \operatorname{soc}\nolimits A$.
By Lemma \ref{lem:key-thm32-1}(2), we have
\begin{align}
\psi(p_{i}^{+}-p_{i}^{-})=\psi(p_{i}^{+})-\psi(p_{i}^{-})=(p_{i},-p_{i})\in J''.\notag
\end{align}
This implies $\varphi(p_{i}^{+}-p_{i}^{-})=0$ and hence $\varphi(\widetilde{I})=0$.
Thus we obtain a surjective map $\varphi:\widetilde{A}\to \Gamma/J''$.
Finally, we check $\dim_{K}\widetilde{A}=\dim_{K}(\Gamma/J'')$.
Since $\varphi: \widetilde{A}\to \Gamma/J''$ is surjective, we have only to show $\dim_{K}\widetilde{A}\leq \dim_{K}(\Gamma/J'')$.
Define a $K$-linear map $\varphi':\Gamma\to K\widetilde{Q}$ by $\varphi'(e_{i},e_{i})=e_{i}$ for $e_{i}\in A$ and $\varphi'(r,0)=r^{+}$ and $\varphi'(0,r)=r^{-}$ for $r\in \operatorname{rad}\nolimits A$.
Since $\varphi'(s,-s)=s^{+}-s^{-}\in I^{c}$ for each $s\in \operatorname{soc}\nolimits A$, we have $\varphi'(J'')\subset \widetilde{I}$.
Thus we obtain a $K$-linear map $\varphi':\Gamma/J'' \to \widetilde{A}$ which is surjective.
This finishes the proof.
\end{proof}
Now we are ready to prove Theorem \ref{thm:local-tilting}(1) and (3).
\begin{proof}[Proof of Theorem \ref{thm:local-tilting}]
By Proposition \ref{prop:key-thm32}, $\widetilde{A}$ is isomorphic to $\Gamma/J''$.
Hence the statements (1) and (3) follow from Lemma \ref{lem:key-thm32-1}.
\end{proof}
\section{The second example: non-trivial tilting-discrete case}
In this section, we give further examples of tilting-discrete algebras that are not silting-discrete.
For integers $i\leq j$, let $[i,j]:=\{ i,i+1,\ldots,j-1,j \}$.
Let $n,m$ be positive integers.
Define a quiver $\mathbb{T}_{n,m}:=(\mathbb{T}_{0},\mathbb{T}_{1})$, where $\mathbb{T}_{0}$ is the vertex set and $\mathbb{T}_{1}$ is the arrow set, as follows:
\begin{itemize}
\item $\mathbb{T}_{0}:=\{ (i,r)\mid i\in [1,n],\ r\in \mathbb{Z}/m\mathbb{Z} \}$,
\item $\mathbb{T}_{1}:=\{ a_{i,r}: (i,r)\to (i+1,r)\mid i\in [1,n-1],\ r\in \mathbb{Z}/m\mathbb{Z} \}$\\[3pt]
\hspace{15mm}$\coprod\{ b_{i,r}: (i,r)\to (i-1,r+1) \mid i\in [2,n],\ r\in \mathbb{Z}/m\mathbb{Z} \}$.
\end{itemize}
For example, $\mathbb{T}_{5,5}$ is given by the following quiver:
\begin{align}
\xymatrix@C=2mm@R=4mm{
(5,3)\ar[rd]^-{b_{5,3}}&&(5,4)\ar[rd]^-{b_{5,4}}&&(5,0)\ar[rd]^-{b_{5,0}}&&(5,1)\ar[rd]^-{b_{5,1}}&&(5,2)\ar[rd]^-{b_{5,2}}&&(5,3)\\
&(4,4)\ar[ru]^-{a_{4,4}}\ar[rd]^-{b_{4,4}}&&(4,0)\ar[ru]^-{a_{4,0}}\ar[rd]^-{b_{4,0}}&&(4,1)\ar[ru]^-{a_{4,1}}\ar[rd]^-{b_{4,1}}&&(4,2)\ar[ru]^-{a_{4,2}}\ar[rd]^-{b_{4,2}}&&(4,3)\ar[ru]^-{a_{4,3}}\ar[rd]^-{b_{4,3}}\\
(3,4)\ar[ru]^-{a_{3,4}}\ar[rd]^-{b_{3,4}}&&(3,0)\ar[ru]^-{a_{3,0}}\ar[rd]^-{b_{3,0}}&&(3,1)\ar[ru]^-{a_{3,1}}\ar[rd]^-{b_{3,1}}&&(3,2)\ar[ru]^-{a_{3,2}}\ar[rd]^-{b_{3,2}}&&(3,3)\ar[ru]^-{a_{3,3}}\ar[rd]^-{b_{3,3}}&&(3,4)\\
&(2,0)\ar[ru]^-{a_{2,0}}\ar[rd]^-{b_{2,0}}&&(2,1)\ar[ru]^-{a_{2,1}}\ar[rd]^-{b_{2,1}}&&(2,2)\ar[ru]^-{a_{2,2}}\ar[rd]^-{b_{2,2}}&&(2,3)\ar[ru]^-{a_{2,3}}\ar[rd]^-{b_{2,3}}&&(2,4)\ar[ru]^-{a_{2,4}}\ar[rd]^-{b_{2,4}}\\
(1,0)\ar[ru]^-{a_{1,0}}&&(1,1)\ar[ru]^-{a_{1,1}}&&(1,2)\ar[ru]^-{a_{1,2}}&&(1,3)\ar[ru]^-{a_{1,3}}&&(1,4)\ar[ru]^-{a_{1,4}}&&(1,0)
}\notag
\end{align}
We define a self-injective algebra $A_{n,m}$, which plays a crucial role in this section.
Let $K$ be an algebraically closed field.
Formally, put $a_{0,r}=a_{n,r}=b_{1,r}=b_{n+1,r}=0$ for all $r\in \mathbb{Z}/m\mathbb{Z}$.
Then $A_{n,m}$ is a bound quiver algebra $K\mathbb{T}_{n,m}/I$, where $I$ is the two-sided ideal generated by $a_{i,r}b_{i+1,r}-b_{i,r}a_{i-1,r+1}$ for all $i\in[1,n]$ and $r\in \mathbb{Z}/m\mathbb{Z}$.
By definition, $A_{n,1}$ is isomorphic to the preprojective algebra of the Dynkin diagram $\mathbf{A}_{n}$, and in general, $A_{n,m}$ is isomorphic to the stable Auslander algebra of a self-injective Nakayama algebra with $m$ simple modules (up to isomorphism) and Loewy length $n$.
Hence $A_{n,m}$ is a self-injective algebra (see \cite{AR73, Bu98}).
Under certain conditions, the algebra $A_{n,m}$ has the desired property.
Namely, the following theorem is our main result.
\begin{theorem}\label{thm:counterexample-question}
Let $n,m\ge 5$ be integers with $\gcd(n-1,m)=1$.
Assume that $n$ is odd and $m$ is not divisible by the characteristic of $K$.
Then $A_{n,m}$ is a tilting-discrete algebra but not silting-discrete.
\end{theorem}
Note that $A_{1,1}$ and $A_{2,1}$ are silting-discrete by direct calculation.
Since $A_{n,m}$ is a self-injective algebra, the Nakayama functor $\nu:=\mathbb{D}\operatorname{Hom}\nolimits_{A_{n,m}}(-,A_{n,m})$ is a Serre functor in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})$.
By Proposition \ref{prop:selfinjective-tilting}, all tilting objects are exactly $\nu$-stable silting objects.
In the following, let $\ssilt{2}A_{n,m}:=\ssilt{2_{A_{n,m}}}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})$ and $\ttilt{2}A_{n,m}:=\ssilt{2_{A_{n,m}}}^{\nu}\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})$.
To show Theorem \ref{thm:counterexample-question}, we need the following two propositions.
\begin{proposition}\label{prop:card-silting-tilting}
Let $n,m$ be positive integers.
Then the following statements hold.
\begin{itemize}
\item[(1)] Assume $n,m\ge 5$. Then $\ssilt{2}A_{n,m}$ is not finite. In particular, $A_{n,m}$ is not silting-discrete.
\item[(2)] Assume that $\gcd(n-1,m)=1$ and $m$ is not divisible by the characteristic of $K$. Then $\ttilt{2}A_{n,m}$ is finite.
\end{itemize}
\end{proposition}
\begin{proposition}\label{prop:derived-class}
Assume that $\gcd(n-1,m)=1$ and $n$ is an odd number.
If $T$ is a tilting object given by iterated irreducible $\nu$-stable mutation from $A_{n,m}$, then the endomorphism algebra $\operatorname{End}\nolimits_{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})}(T)$ is isomorphic to $A_{n,m}$.
\end{proposition}
Before proving the propositions above, we give a proof of Theorem \ref{thm:counterexample-question} by using them.
\begin{proof}[Proof of Theorem \ref{thm:counterexample-question}]
By Proposition \ref{prop:card-silting-tilting}(1), $A_{n,m}$ is not silting-discrete.
We have only to show that $A_{n,m}$ is tilting-discrete.
Let $T$ be a tilting object given by iterated irreducible $\nu$-stable mutation from $A_{n,m}$ and put $A_{T}:=\operatorname{End}\nolimits_{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})}(T)$.
Since $A_{T}$ is isomorphic to $A_{n,m}$ by Proposition \ref{prop:derived-class}, we have $\ttilt{2}A_{T}\cong \ttilt{2}A_{n,m}$. Hence by Proposition \ref{prop:card-silting-tilting}(2), $\ttilt{2}A_{T}$ is finite.
This implies that $A_{n,m}$ is tilting-discrete by Corollary \ref{cor:AM17-thm}.
The proof is complete.
\end{proof}
As an application, we have the following result, which is an analog of \cite[Corollary 1.4]{AM17}.
\begin{corollary}
Let $n,m$ be positive integers with $\gcd(n-1,m)=1$.
Assume that $n$ is odd and $m$ is not divisible by the characteristic of $K$.
For each tilting object $T\in \mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})$, the endomorphism algebra $\operatorname{End}\nolimits_{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})}(T)$ is Morita equivalent to $A_{n,m}$.
In particular, the derived equivalence class coincides with the Morita equivalence class.
\end{corollary}
\begin{proof}
By Theorem \ref{thm:counterexample-question}, $A_{n,m}$ is tilting-discrete.
Due to Proposition \ref{prop:local-property-nu-mutation}, each basic tilting object (up to shift) is given by iterated irreducible $\nu$-stable mutation from $A_{n,m}$.
The assertion follows from Proposition \ref{prop:derived-class}.
\end{proof}
In the rest of this section, we prove Propositions \ref{prop:card-silting-tilting} and \ref{prop:derived-class}.
Let $n,m$ be positive integers.
For a vertex $x\in \mathbb{T}_{0}$, let $P(x):=e_{x} A_{n,m}$, $I(x):=\mathbb{D}(A_{n,m}e_{x})$, and $S(x)=P(x)/\operatorname{rad}\nolimits P(x)\cong \operatorname{soc}\nolimits I(x)$.
Formally put $P(0,r)=P(n+1,r)=0$ for all $r\in \mathbb{Z}/m\mathbb{Z}$.
We identify an element of $\operatorname{Hom}\nolimits_{A_{n,m}}(P(i,r),P(j,s))$ with an element of $e_{(j,s)}A_{n,m}e_{(i,r)}$.
For two-term objects $U=(U_{1}\xrightarrow{d_{U}}U_{0})$ and $V=(V_{1}\xrightarrow{d_V}V_{0})$ in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})$, we denote by $(\varphi_{1},\varphi_{0})$ a morphism in $\operatorname{Hom}\nolimits_{\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A_{n,m})}(U,V)$, that is, the following diagram commutes:
\begin{align}
\xymatrix{
U_{1}\ar[r]^-{d_{U}}\ar[d]^-{\varphi_{1}}&U_{0}\ar[d]^-{\varphi_{0}}\;\\
V_{1}\ar[r]^-{d_{V}}&V_{0}.
}\notag
\end{align}
\subsection{Combinatorial properties of $A_{n,m}$}
In this subsection, we collect combinatorial properties of $A_{n,m}$.
Fix $(i,r)\in \mathbb{T}_0$ and let $w$ be a path starting from $(i,r)$.
For the sake of simplicity, we frequently write down a path without indices, e.g., $a_{i,r}a_{i+1, r}b_{i+2,r}a_{i+1,r+1}=:aaba=:a^2ba$.
Then we can regard a path $w$ as a word $\mathbf{w}$ in the letters ``$a$'' and ``$b$''.
We denote by $a(w)$ (respectively, $b(w)$) the number of ``$a$'' (respectively, ``$b$'') in the word $\mathbf{w}$.
Note that $a(w)=b(w)=0$ if and only if $w=e_{i,r}$.
Then, by the definition of the two-sided ideal $I$, we obtain the following properties.
\begin{lemma}\label{lem:comb-property}
Under the notation above, the following statements hold.
\begin{itemize}
\item[(1)] $w=w'\neq 0$ in $A_{n,m}$ if and only if $\left(a(w),b(w)\right)=\left(a(w'),b(w')\right)\in [0,n-i] \times [0, i-1]$.
\item[(2)] $\{a^{s}b^{t}\mid (s,t)\in [0, n-i] \times [0,i-1]\}$ gives a $K$-basis of $P(i,r)$.
In particular, we have $\dim_{K} P(i,r)=i(n-i+1)$ and $\dim_{K}A_{n,m}=m \dim_{K}A_{n,1}$.
\item[(3)] $P(i,r)\cong I(n-i+1,r+i-1)$. In particular, $\nu P(i,r)\cong P(n-i+1, r+i-n)$.
\end{itemize}
\end{lemma}
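As a quick sanity check of Lemma \ref{lem:comb-property}(2), consider the case $n=5$ pictured above:
\begin{align}
\dim_{K}A_{5,1}=\sum_{i=1}^{5}i(5-i+1)=5+8+9+8+5=35,
\qquad \dim_{K}A_{5,m}=35m.\notag
\end{align}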
\subsection{Proof of Proposition \ref{prop:card-silting-tilting}(1)}
In this subsection, we give a proof of Proposition \ref{prop:card-silting-tilting}(1).
\begin{proof}[Proof of Proposition \ref{prop:card-silting-tilting}(1)]
Assume $n, m\ge 5$.
Let $e:=e_{4,r-1}+e_{2,r}+e_{3,r}+e_{4,r}+e_{2,r+1}$ for $r\in \mathbb{Z}/m\mathbb{Z}$.
Then $eA_{n,m}e$ is isomorphic to the path algebra of a Euclidean quiver of type $\widetilde{\mathbf{D}}_{4}$.
Hence the set of isomorphism classes of two-term silting objects in $\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} eA_{n,m}e)$ is not finite (for example see \cite[Theorem 2.6]{Ad16}).
By \cite[Proposition 2.4]{Ad16} and \cite[Corollary 2.9]{DIJ19}, this implies that $\ssilt{2}A_{n,m}$ is not finite.
This finishes the proof.
\end{proof}
\subsection{$\nu$-stability and $\psi$-stability}
Define an algebra automorphism $\psi:A_{n,m}\to A_{n,m}$ as
\begin{align}
e_{i,r}&\mapsto e_{i,r+1},\notag\\
a_{i,r}&\mapsto a_{i,r+1},\notag\\
b_{i,r}&\mapsto b_{i,r+1}.\notag
\end{align}
Then $\psi$ induces an auto-equivalence $\psi:\mod A_{n,m} \to \mod A_{n,m}$ defined by $\psi(M)a:=M\psi(a)$ for each $M\in\mod A_{n,m}$ and $a\in A_{n,m}$.
By definition, we have $\psi (S(i,r))\cong S(i,r-1)$ and $\psi (P(i,r))\cong P(i,r-1)$.
\begin{lemma}\label{lem:two-stable-silt}
Assume $\gcd(n-1,m)=1$.
Then each $\nu$-stable silting object is $\psi$-stable.
In particular, $\ttilt{2}A_{n,m}$ is a subset of $\ssilt{2}^{\psi}A_{n,m}$.
\end{lemma}
\begin{proof}
For each $(i,r)\in \mathbb{T}_{0}$, we obtain $\nu^{2}P(i,r)\cong \nu P(n-i+1, r+i-n)\cong P(i, r-(n-1))$.
By $\gcd(n-1,m)=1$, there exists an integer $s>0$ such that $\nu^{s}P(i,r)\cong P(i,r-1)\cong\psi (P(i,r))$.
This implies that for any $M\in\ssilt{2}A_{n,m}$, the $g$-vector of $\nu^{s}M$ coincides with that of $\psi(M)$.
By \cite[Theorem 5.5]{AIR14}, we obtain $\nu^{s}M\cong \psi (M)$.
Therefore all $\nu$-stable silting objects are $\psi$-stable.
\end{proof}
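The exponent $s$ in the proof above can be made explicit: choosing $t>0$ with $t(n-1)\equiv 1 \pmod{m}$, which exists by $\gcd(n-1,m)=1$, and setting $s:=2t$, we obtain
\begin{align}
\nu^{s}P(i,r)=\nu^{2t}P(i,r)\cong P(i,\,r-t(n-1))\cong P(i,r-1).\notag
\end{align}
For instance, for $n=m=5$ we may take $t=4$ and $s=8$.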
\subsection{The algebra $A_{n,m}$ as a skew group algebra}
Recall the definition of skew group algebras; for details, see \cite{RR85}.
Let $A$ be a finite dimensional $K$-algebra and $\operatorname{Aut}\nolimits(A)$ the group of $K$-algebra automorphisms of $A$.
Let $G$ be a finite subgroup of $\operatorname{Aut}\nolimits(A)$ such that the characteristic of $K$ does not divide the order of $G$ (i.e., the group algebra $KG$ is semisimple).
The \emph{skew group algebra} $A\ast G$ is defined as follows: as a $K$-vector space, $A\ast G =A\otimes_{K}KG$ and the multiplication is given by $(a\otimes g)\cdot(a'\otimes g')=a g(a')\otimes gg'$, where $a,a'\in A$ and $g,g'\in G$.
We write $a\ast g$ instead of $a\otimes g$.
Then $A\ast G$ becomes a finite dimensional $K$-algebra with dimension $|G|\dim_{K}A$.
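As a quick check with the multiplication rule above, $1_{A}\ast 1_{G}$ is the identity of $A\ast G$: for all $a\in A$ and $g\in G$,
\begin{align}
(a\ast g)\cdot(1_{A}\ast 1_{G})=a\,g(1_{A})\ast g=a\ast g,
\qquad
(1_{A}\ast 1_{G})\cdot(a\ast g)=1_{A}\,a\ast g=a\ast g.\notag
\end{align}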
In this subsection, we assume that $m$ is not divisible by the characteristic of $K$.
Let $A_{n}:=A_{n,1}$.
For simplicity, put $e_{i}:=e_{i,0}$, $a_{i}:=a_{i,0}$ and $b_{i}:=b_{i,0}$ in $A_{n}$.
Let $G_{m}=\langle g \rangle$ be a cyclic group of order $m$ acting on $A_{n}$ as
\begin{align}
&g(e_{i}):= e_{i},\notag \\
&g(a_{i}):=a_{i},\notag\\
&g(b_{i}):=\zeta b_{i},\notag
\end{align}
where $\zeta\in K$ is an $m$-th primitive root of unity.
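Note that this assignment indeed defines an action by algebra automorphisms: each generating relation of $A_{n}$ is sent to a scalar multiple of itself,
\begin{align}
g(a_{i}b_{i+1}-b_{i}a_{i-1})=\zeta a_{i}b_{i+1}-\zeta b_{i}a_{i-1}=\zeta(a_{i}b_{i+1}-b_{i}a_{i-1}),\notag
\end{align}
so $g$ preserves the defining ideal, and $g^{m}=\mathrm{id}$ since $\zeta^{m}=1$.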
First we show that there exists a $K$-algebra isomorphism $\varphi: A_{n,m}\to A_{n}\ast G_{m}$.
Define a $K$-algebra homomorphism
\begin{align}
\phi: K\langle e_{i,r}, a_{j,s}, b_{k,t}\mid 1\le i \le n,\ 1\le j\le n-1,\ 2\le k\le n,\ r,s,t\in \mathbb{Z}/m\mathbb{Z} \rangle \to A_{n}\ast G_{m} \notag
\end{align}
by setting
\begin{align}
&e_{i,r}\mapsto \dfrac{1}{m}\sum_{p=0}^{m-1}\zeta^{pr}e_{i}\ast g^{p},\notag\\
&a_{j,s}\mapsto \dfrac{1}{m}\sum_{p=0}^{m-1}\zeta^{ps}a_{j}\ast g^{p},\notag\\
&b_{k,t}\mapsto \dfrac{1}{m}\sum_{p=0}^{m-1}\zeta^{p(t+1)}b_{k}\ast g^{p}.\notag
\end{align}
Put $a_{0,r}=a_{n,r}=b_{1,r}=b_{n+1,r}=0$.
Then we obtain the following equations
\begin{itemize}
\item $\phi(\sum e_{i,r})=1$,
\item $\phi(e_{i,r}e_{i',r'})=\delta_{i,i'}\delta_{r,r'}\phi(e_{i,r})$,
\item $\phi(e_{i,r}a_{j,s})=\delta_{i,j}\delta_{r,s}\phi(a_{j,s})$,
\item $\phi(a_{j,s}e_{i,r})=\delta_{j+1,i}\delta_{s,r}\phi(a_{j,s})$,
\item $\phi(e_{i,r}b_{k,t})=\delta_{i,k}\delta_{r,t}\phi(b_{k,t})$,
\item $\phi(b_{k,t}e_{i,r})=\delta_{k-1,i}\delta_{t+1,r}\phi(b_{k,t})$,
\item $\phi(a_{i,r}b_{i+1,r})=\phi(b_{i,r}a_{i-1,r+1})$,
\end{itemize}
where $\delta$ is the Kronecker delta.
Therefore, $\phi$ induces a $K$-algebra homomorphism $\varphi: A_{n,m}\to A_{n}\ast G_{m}$.
Since
\begin{align}
\left[\begin{smallmatrix}
1 & 1 &\cdots & 1\\
1 & \zeta & \cdots & \zeta^{m-1}\\
1 & \zeta^{2} & \cdots & \zeta^{2(m-1)}\\
\vdots &\vdots & & \vdots\\
1 & \zeta^{m-1}&\cdots & \zeta^{(m-1)(m-1)}\\
\end{smallmatrix}\right]
\left[\begin{smallmatrix}
e_{i}\ast 1 & a_{j}\ast 1 & b_{k}\ast 1\\
e_{i}\ast g & a_{j}\ast g & b_{k}\ast g\\
e_{i}\ast g^{2} & a_{j}\ast g^{2} & b_{k}\ast g^2\\
\vdots&\vdots&\vdots\\
e_{i}\ast g^{m-1}& a_{j}\ast g^{m-1} & b_{k}\ast g^{m-1}\\
\end{smallmatrix}\right]=m
\left[\begin{smallmatrix}
\varphi(e_{i,0}) & \varphi(a_{j,0}) & \varphi(b_{k,m-1})\\
\varphi(e_{i,1})& \varphi(a_{j,1}) & \varphi(b_{k,0})\\
\varphi(e_{i,2})& \varphi(a_{j,2}) & \varphi(b_{k,1})\\
\vdots&\vdots&\vdots\\
\varphi(e_{i,m-1})& \varphi(a_{j,m-1})& \varphi(b_{k,m-2})
\end{smallmatrix}\right],\notag
\end{align}
we obtain that $\phi$ and $\varphi$ are surjective.
By $\dim_{K}A_{n,m}=\dim_{K}(A_{n}\ast G_{m})$, the map $\varphi$ is an isomorphism.
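To spell out the surjectivity step: the left factor above is the Vandermonde matrix of the pairwise distinct scalars $1,\zeta,\ldots,\zeta^{m-1}$, whose determinant
\begin{align}
\prod_{0\le p<q\le m-1}(\zeta^{q}-\zeta^{p})\notag
\end{align}
is non-zero, so each $e_{i}\ast g^{p}$, $a_{j}\ast g^{p}$ and $b_{k}\ast g^{p}$ is a $K$-linear combination of the images $\varphi(e_{i,r})$, $\varphi(a_{j,s})$ and $\varphi(b_{k,t})$; hence the image of $\varphi$ contains a generating set of $A_{n}\ast G_{m}$.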
Next, following \cite{HZ16}, we compare two-term silting objects of $A_{n}$ with those of $A_{n}\ast G_{m}$.
Let $\mathbb{X}$ be the character group of $G_{m}$.
Since $G_{m}$ is a cyclic group, we have $G_{m}\cong\mathbb{X}=\langle \chi \rangle$, where $\chi(g^{s})=\zeta^s$ for each $s\in \mathbb{Z}$.
Then $\mathbb{X}$ acts on $A_{n}\ast G_{m}$ as
\begin{align}
\chi(a \ast g):=\chi(g)a \ast g =\zeta a\ast g \notag
\end{align}
for each $a\in A_{n}$.
It is easy to check that the following diagram commutes:
\begin{align}
\xymatrix{
A_{n,m}\ar[r]^-{\varphi}\ar[d]^-{\psi}&A_{n}\ast G_{m}\ar[d]^-{\chi}\;\\
A_{n,m}\ar[r]^-{\varphi}&A_{n}\ast G_{m}.
}\notag
\end{align}
Since $\chi$ induces an auto-equivalence $\chi$ on $\mod (A_{n}\ast G_{m})$, we have the following lemma.
\begin{lemma}\label{lem:silt-mesh-skewgroup}
Assume that $m$ is not divisible by the characteristic of $K$.
Then there exist isomorphisms
\begin{align}
\ssilt{2}^{g}A_{n}\cong \ssilt{2}^{\chi}(A_{n}\ast G_{m}) \cong \ssilt{2}^{\psi}A_{n,m}. \notag
\end{align}
\end{lemma}
\begin{proof}
Since $G_{m}$ is solvable, we have an isomorphism $\ssilt{2}^{g}A_{n}\to \ssilt{2}^{\chi}(A_{n}\ast G_{m})$ by \cite[Theorem 1.2]{HZ16}.
Moreover, the commutative diagram above induces an isomorphism $\ssilt{2}^{\chi}(A_{n}\ast G_{m})\cong \ssilt{2}^{\psi}A_{n,m}$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:card-silting-tilting}(2)}
Now we are ready to prove Proposition \ref{prop:card-silting-tilting}(2).
\begin{proof}[Proof of Proposition \ref{prop:card-silting-tilting}(2)]
We show that $\ttilt{2}A_{n,m}$ is a finite set.
Since $\gcd(n-1,m)=1$, the set $\ttilt{2}A_{n,m}$ is a subset of $\ssilt{2}^{\psi}A_{n,m}$ by Lemma \ref{lem:two-stable-silt}.
Moreover, since $m$ is not divisible by the characteristic of $K$, we have $\ssilt{2}^{\psi}A_{n,m}\cong \ssilt{2}^{g}A_{n}$ by Lemma \ref{lem:silt-mesh-skewgroup}.
Since $A_{n}$ is isomorphic to the preprojective algebra of the Dynkin diagram $\mathbf{A}_{n}$, the set $\ssilt{2}A_{n}$ is finite by \cite[Theorem 0.1]{Miz14}.
Hence $\ssilt{2}^{g}A_{n}$ is also a finite set.
This finishes the proof.
\end{proof}
\subsection{Proof of Proposition \ref{prop:derived-class}}
In this subsection, we prove Proposition \ref{prop:derived-class}.
Assume that $\gcd(n-1,m)=1$ and $n$ is an odd number.
Let $A:=A_{n,m}$ and $\mathcal{T}:=\mathsf{K}^{\rm b}(\mathsf{proj}\hspace{.01in} A)$.
For each $(\ell,s)\in \mathbb{T}_{0}$, we denote by $\mathcal{O}(\ell,s)$ the $\nu$-orbit of $P(\ell,s)$.
Since $\gcd(n-1,m)=1$, we obtain
\begin{align}
\mathcal{O}_{\ell}:=\mathcal{O}(\ell,s)=\{(\ell,r)\mid r\in \mathbb{Z}/m\mathbb{Z}\}\cup\{(n-\ell+1,r)\mid r\in \mathbb{Z}/m\mathbb{Z}\}. \notag
\end{align}
Without loss of generality, we may assume $\ell \in [1,\frac{n+1}{2}]$.
Then $X_\ell:=\underset{(i,r)\in \mathcal{O}_{\ell}}{\bigoplus} P(i,r)$ is a minimal $\nu$-stable object of $A$.
Thus we have an irreducible $\nu$-stable mutation $\mu_{X_{\ell}}(A)=\underset{(i,r)\in \mathbb{T}_{0}}{\bigoplus}T(i,r)$, where $T(i,r)$ is a two-term object defined as
\begin{align}
T(i,r)=
\begin{cases}
\overset{\mathrm{-1st}}{P(i,r+1)}\overset{\left[\begin{smallmatrix} a_{i-1,r+1} \\ -b_{i+1,r}\end{smallmatrix}\right]}{\longrightarrow}\overset{\hspace{7mm}\mathrm{0th}}{P(i-1,r+1)\oplus P(i+1,r)} &(i,r)\in \mathcal{O}_{\ell}\\[3pt]
\overset{\mathrm{0th}}{P(i,r)} &(i,r)\notin \mathcal{O}_{\ell}.
\end{cases}\notag
\end{align}
Since the morphism $\left[\begin{smallmatrix} a_{i-1,r+1} \\ -b_{i+1,r}\end{smallmatrix}\right]:P(i,r+1)\to P(i-1,r+1)\oplus P(i+1,r)$ induces a minimal left $\mathsf{add}\hspace{.01in}(A/P(i,r+1))$-approximation in $\mathcal{T}$, we obtain that the mapping cone $T(i,r)$ is indecomposable by Lemma \ref{lem:approximation-result}.
For each $(i,r)\in\mathbb{T}_{0}$, we define two morphisms $x_{i,r}:T(i+1,r)\to T(i,r)$ and $y_{i,r}:T(i-1,r+1)\to T(i,r)$ as follows:
\begin{itemize}
\item If $(i,r)\in \mathcal{O}_{\ell}$, then $x_{i,r}$ and $y_{i,r}$ are given by the following diagrams respectively:
\begin{align}
\xymatrix{
0\ar[r]\ar[d]&P(i+1,r)\ar[d]^-{\left[\begin{smallmatrix} 0 \\ \mathrm{id}\end{smallmatrix}\right]}\\
P(i,r+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(i-1,r+1)\\\oplus\\ P(i+1,r)
\end{matrix}}}\hspace{15mm}
\xymatrix{
0\ar[r]\ar[d]&P(i-1,r+1)\ar[d]^-{\left[\begin{smallmatrix} \mathrm{id} \\ 0\end{smallmatrix}\right]}\\
P(i,r+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(i-1,r+1)\\\oplus\\ P(i+1,r)
\end{matrix}}}\notag
\end{align}
\item If $(i+1, r)\in \mathcal{O}_\ell$, then $x_{i,r}$ is given by the following diagram:
\begin{align}
\xymatrix{
P(i+1,r+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]&{\begin{matrix}P(i,r+1)\\\oplus\\ P(i+2,r)
\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} a_{i,r}b_{i+1,r} & a_{i,r}a_{i+1,r}\end{smallmatrix}\right]}\\
0\ar[r]&P(i,r)
}\notag
\end{align}
\item If $(i-1, r+1)\in \mathcal{O}_\ell$, then $y_{i,r}$ is given by the following diagram:
\begin{align}
\xymatrix{
P(i-1,r+2)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]&{\begin{matrix}P(i-2,r+2)\\\oplus\\ P(i,r+1)\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} b_{i,r}b_{i-1,r+1}& b_{i,r}a_{i-1,r+1}\end{smallmatrix}\right]}\\
0\ar[r]&P(i,r)
}\notag
\end{align}
\item If otherwise, then $x_{i,r}$ and $y_{i,r}$ are given by the following diagrams respectively:
\begin{align}
\xymatrix{
0\ar[r]\ar[d]&P(i+1,r)\ar[d]^-{a_{i,r}}\\
0\ar[r]&P(i,r)}
\hspace{15mm}
\xymatrix{
0\ar[r]\ar[d]&P(i-1,r+1)\ar[d]^-{b_{i,r}}\\
0\ar[r]&P(i,r)}
\notag
\end{align}
\end{itemize}
Note that $x_{i,r}\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i+1,r),T(i,r))$ and $y_{i,r}\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i-1,r+1),T(i,r))$.
Moreover, if $i\in [1,n-1]$ (respectively, $i\in [2,n]$), then $x_{i,r}$ (respectively, $y_{i,r}$) is non-zero in $\mathcal{T}$.
We collect properties of two consecutive morphisms.
\begin{lemma}\label{lem:con-rule}
Fix a vertex $(i,r)\in \mathbb{T}_{0}$.
Then the following equalities hold.
\begin{itemize}
\item[(1)] For $i\in [1, n-2]$, we have
\begin{align}
x_{i,r}x_{i+1,r}=
\begin{cases}
(0, \left[\begin{smallmatrix}0&0\\a_{i+1,r}b_{i+2,r}&a_{i+1,r}a_{i+2,r}\end{smallmatrix}\right])&((i,r)\in\mathcal{O}_{\ell}, (i+2,r)\in\mathcal{O}_{\ell})\\
(0, \left[\begin{smallmatrix}0\\a_{i+1,r}\end{smallmatrix}\right])&((i,r)\in\mathcal{O}_{\ell}, (i+2,r)\notin\mathcal{O}_{\ell})\\
(0, \left[\begin{smallmatrix}a_{i,r}a_{i+1,r}b_{i+2,r}&a_{i,r}a_{i+1,r}a_{i+2,r}\end{smallmatrix}\right])&((i,r)\notin\mathcal{O}_{\ell}, (i+2,r)\in\mathcal{O}_{\ell})\\
(0, a_{i,r}a_{i+1,r})&((i,r)\notin\mathcal{O}_{\ell}, (i+2,r)\notin\mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
\item[(2)] For $i\in [3, n]$, we have
\begin{align}
y_{i,r}y_{i-1,r+1}=
\begin{cases}
(0, \left[\begin{smallmatrix}b_{i-1,r+1}b_{i-2,r+2}&b_{i-1,r+1}a_{i-2,r+2}\\0&0\end{smallmatrix}\right])&((i,r)\in\mathcal{O}_{\ell}, (i-2,r+2)\in\mathcal{O}_{\ell})\\[5pt]
(0, \left[\begin{smallmatrix}b_{i-1,r+1}\\0\end{smallmatrix}\right])&((i,r)\in\mathcal{O}_{\ell}, (i-2,r+2)\notin\mathcal{O}_{\ell})\\
(0, \left[\begin{smallmatrix}b_{i,r}b_{i-1,r+1}b_{i-2,r+2}&b_{i,r}b_{i-1,r+1}a_{i-2,r+2}\end{smallmatrix}\right])&((i,r)\notin\mathcal{O}_{\ell}, (i-2,r+2)\in\mathcal{O}_{\ell})\\
(0, b_{i,r}b_{i-1,r+1})&((i,r)\notin\mathcal{O}_{\ell}, (i-2,r+2)\notin\mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
\item[(3)] For each $1\le i \le n-1$, we have
\begin{align}
x_{i,r}y_{i+1,r}=
\begin{cases}
(0, a_{i,r}b_{i+1,r})&((i,r)\notin\mathcal{O}_{\ell})\\
(0,\left[\begin{smallmatrix}
0&0\\
b_{i+1,r}b_{i,r+1}&b_{i+1,r}a_{i,r+1}
\end{smallmatrix}\right])&((i,r)\in\mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
In particular, if $i=1$, then $x_{i,r}y_{i+1,r}=0$.
\item[(4)] For each $2\le i\le n$, we have
\begin{align}
y_{i,r}x_{i-1,r+1}=
\begin{cases}
(0, b_{i,r}a_{i-1,r+1})&((i,r)\notin \mathcal{O}_{\ell})\\
(0, \left[\begin{smallmatrix}
a_{i-1,r+1}b_{i,r+1}&a_{i-1,r+1}a_{i,r+1}\\
0&0
\end{smallmatrix}\right])&((i,r)\in \mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
In particular, if $i=n$, then $y_{i,r}x_{i-1,r+1}=0$.
\end{itemize}
\end{lemma}
\begin{proof}
This follows from direct calculations.
\end{proof}
Combining Lemma \ref{lem:con-rule}(3) and (4), we have the following result which will induce commutative relations in $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$.
\begin{lemma}\label{lem:rel-rule}
For each $(i,r)\in \mathbb{T}_{0}$, we have $x_{i,r}y_{i+1,r}-y_{i,r}x_{i-1,r+1}=0$ in $\mathcal{T}$, where $x_{0,r}=x_{n,r}=y_{1,r}=y_{n+1,r}=0$ for all $r\in \mathbb{Z}/m\mathbb{Z}$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:con-rule}(3) and (4), we have
\begin{align}
x_{i,r}y_{i+1,r}-y_{i,r}x_{i-1,r+1}=
\begin{cases}
(0,a_{i,r}b_{i+1,r}-b_{i,r}a_{i-1,r+1})&((i,r)\notin\mathcal{O}_{\ell})\;\\
\left(0, \left[\begin{smallmatrix} -a_{i-1,r+1}b_{i,r+1} & -a_{i-1, r+1}a_{i,r+1}\\ b_{i+1,r}b_{i,r+1}& b_{i+1,r}a_{i,r+1}\end{smallmatrix}\right]\right)&((i,r)\in\mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
In the case where $(i,r)\notin\mathcal{O}_{\ell}$, we obtain $x_{i,r}y_{i+1,r}-y_{i,r}x_{i-1,r+1}=0$ because $a_{i,r}b_{i+1,r}-b_{i,r}a_{i-1,r+1}=0$ in $A$.
On the other hand, assume $(i,r)\in\mathcal{O}_{\ell}$.
By definition, we have
\begin{align}
&\begin{bmatrix}
-b_{i,r+1} & -a_{i,r+1}
\end{bmatrix}
\begin{bmatrix}
a_{i-1,r+2}\\-b_{i+1,r+1}
\end{bmatrix}
=0,\notag\\
&\begin{bmatrix}
a_{i-1,r+1}\\ -b_{i+1,r}
\end{bmatrix}
\begin{bmatrix}
-b_{i,r+1} & -a_{i,r+1}
\end{bmatrix}
=\begin{bmatrix} -a_{i-1,r+1}b_{i,r+1} & -a_{i-1, r+1}a_{i,r+1}\\ b_{i+1,r}b_{i,r+1}& b_{i+1,r}a_{i,r+1}\end{bmatrix}.\notag
\end{align}
Hence $x_{i,r}y_{i+1,r}-y_{i,r}x_{i-1,r+1}=0$ in $\mathcal{T}$.
\end{proof}
From now on, we often regard $\operatorname{Hom}\nolimits_{\mathcal{T}}(T(i,r),T(j,s))$ as a subset of $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ in a natural way.
Then we define subsets $\mathsf{X}, \mathsf{Y}$ of $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ as
\begin{align}
\mathsf{X}:=\{x_{i,r}\mid i\in [1,n-1],\ r\in \mathbb{Z}/m\mathbb{Z} \},\ \mathsf{Y}:=\{y_{i,r}\mid i\in [2,n],\ r\in \mathbb{Z}/m\mathbb{Z} \},\notag
\end{align}
and let $\mathsf{Z}:=\mathsf{X}\coprod \mathsf{Y}$.
In order to determine the Gabriel quiver of $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$, we need the following lemma.
\begin{lemma}\label{lem:local-str}
The following statements hold.
\begin{itemize}
\item[(1)] We have
\begin{align}
\operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r), T(j,s))=&\operatorname{Hom}\nolimits_{\mathcal{T}}(T(i-1,r),T(j,s))x_{i-1,r}\notag\\
&+\operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(j,s))y_{i+1,r-1}.\notag
\end{align}
In particular, $\mathsf{Z}$ generates $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ as a $K$-algebra.
\item[(2)] We have $x_{i-1,r}\notin \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(i-1,r))y_{i+1,r-1}$.
Moreover,
\begin{align}
x_{i-1,r}\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r),T(i-1,r))\setminus \operatorname{rad}\nolimits^{2}_{\mathcal{T}}(T(i,r),T(i-1,r)).\notag
\end{align}
\item[(3)] We have $y_{i+1,r-1}\notin \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i-1,r),T(i+1,r-1))x_{i-1,r}$.
Moreover,
\begin{align}
y_{i+1,r-1}\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r),T(i+1,r-1))\setminus \operatorname{rad}\nolimits^{2}_{\mathcal{T}}(T(i,r),T(i+1,r-1)).\notag
\end{align}
\item[(4)] We have
\begin{align}
\frac{\operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r),T(j,s))}{\operatorname{rad}\nolimits^{2}_{\mathcal{T}}(T(i,r),T(j,s))}=
\begin{cases}
\langle \overline{x_{j,s}} \rangle_{K} &\textnormal{if $(j,s)=(i-1,r)$}\\
\langle \overline{y_{j,s}} \rangle_{K} &\textnormal{if $(j,s)=(i+1,r-1)$}\\
0 & \textnormal{if otherwise}.
\end{cases}\notag
\end{align}
\end{itemize}
\end{lemma}
\begin{proof}
(1) Since $x_{i-1,r}$ and $y_{i+1,r-1}$ are in the radical of $\mathcal{T}$, it is enough to show that for each non-isomorphism $\varphi: T(i,r)\to T(j,s)$, there exist morphisms $g':T(i-1,r)\to T(j,s)$ and $g'':T(i+1,r-1)\to T(j,s)$ such that $\varphi=g'x_{i-1,r}+g''y_{i+1,r-1}$.
Consider the following four cases:
\begin{itemize}
\item[(a)] $(i,r)\in \mathcal{O}_\ell$, $(j,s)\not\in \mathcal{O}_\ell$.
\item[(b)] $(i,r)\in \mathcal{O}_\ell$, $(j,s)\in \mathcal{O}_\ell$.
\item[(c)] $(i,r)\not\in \mathcal{O}_\ell$, $(j,s)\in \mathcal{O}_\ell$.
\item[(d)] $(i,r)\not\in \mathcal{O}_\ell$, $(j,s)\not\in \mathcal{O}_\ell$.
\end{itemize}
\underline{Case (a)}: Assume $\varphi=(0,\left[\begin{smallmatrix} f_{1} & f_{2} \end{smallmatrix}\right])$ with $f_{1}: P(i-1,r+1)\to P(j,s)$ and $f_{2}: P(i+1,r)\to P(j,s)$.
By commutative relations of $A$, we can write $f_{1}=g_{1}b_{i,r}+\mathbf{a}$ and $f_{2}=g_{2}a_{i,r}+\mathbf{b}$, where $g_{1},g_{2}\in \operatorname{Hom}\nolimits_{A}(P(i,r),P(j,s))$, $\mathbf{a}\in\langle a^{k}\mid k\in \mathbb{Z}\rangle_{K}$ and $\mathbf{b}\in \langle b^{k}\mid k\in\mathbb{Z}\rangle_{K}$.
Since
\begin{align}
0=\begin{bmatrix} f_{1}&f_{2}\end{bmatrix}\begin{bmatrix} a_{i-1,r+1}\\-b_{i+1,r}\end{bmatrix}=f_{1}a_{i-1,r+1}-f_{2}b_{i+1,r}, \notag
\end{align}
we have $(g_{1}-g_{2})a_{i,r}b_{i+1,r}=0$, $\mathbf{a}=0$ and $\mathbf{b}=0$.
Assume $g_{1}-g_{2}\neq 0$ and let $g_{1}-g_{2}=ha^{p}b^{q}$, where $h$ is an invertible element in $e_{j,s}Ae_{j,s}$.
Then we have $ha^{p}b^{q}a_{i,r}b_{i+1,r}=0$, and hence $a^{p+1}b^{q+1}=0$.
By Lemma \ref{lem:comb-property}(1), we obtain $p=n-j$ or $q=j-1$.
If $p=n-j$, then $f_{2}=g_{2}a_{i,r}=(g_{1}-ha^{p}b^{q})a_{i,r}=g_{1}a_{i,r}-ha^{p+1}b^{q}=g_{1}a_{i,r}$.
Since we can take $g_{1}$ as $g_{1}=g'_{1}b_{i+1,r-1}+g''_{1}a_{i-1,r}$ for some $g'_{1}\in \operatorname{Hom}\nolimits_{A}(P(i+1,r-1),P(j,s))$ and $g''_{1}\in \operatorname{Hom}\nolimits_{A}(P(i-1,r),P(j,s))$, we have
$\left[\begin{smallmatrix} f_{1}&f_{2}\end{smallmatrix}\right]=\left[\begin{smallmatrix}g'_{1}&g''_{1}\end{smallmatrix}\right]\left[\begin{smallmatrix}b_{i+1,r-1}b_{i,r}&b_{i+1,r-1}a_{i,r}\\a_{i-1,r}b_{i,r}&a_{i-1,r}a_{i,r}\end{smallmatrix}\right]$.
Hence $\varphi=(0,g''_{1})x_{i-1,r}+(0,g'_{1})y_{i+1,r-1}$.
For the remaining cases, the assertion follows by a similar argument.
\underline{Case (b)}: Assume $\varphi=(\varphi_{1},\varphi_{0})$ with
\begin{align}
\varphi_{0}=\begin{bmatrix}
f_{11} & f_{12}\\
f_{21} & f_{22}\\
\end{bmatrix},\notag
\end{align}
where $f_{11}:P(i-1,r+1)\to P(j-1,s+1)$, $f_{12}:P(i+1,r)\to P(j-1,s+1)$, $f_{21}:P(i-1,r+1)\to P(j+1,s)$, and $f_{22}:P(i+1,r)\to P(j+1,s)$.
Then we can write $\varphi_{1}$ as $\varphi_{1}=g'a_{i-1,r+1}+g''b_{i+1,r}$ for some $g'\in \operatorname{Hom}\nolimits_{A}(P(i-1,r+1),P(j,s+1))$ and $g''\in \operatorname{Hom}\nolimits_{A}(P(i+1,r),P(j,s+1))$.
Let $\varphi'_{0}:=\varphi_{0}-\left[\begin{smallmatrix}a_{j-1,s+1}\\-b_{j+1,s}\end{smallmatrix}\right]\left[\begin{smallmatrix}g'&-g''\end{smallmatrix}\right]$.
Since $\varphi'_{0}\left[\begin{smallmatrix}a_{i-1,r+1}\\-b_{i+1,r}\end{smallmatrix}\right]=0$ holds, we have a morphism $(0,\varphi'_{0})$ in $\mathcal{T}$.
By direct calculation, $(\varphi_{1},\varphi_{0})$ is homotopic to $(0,\varphi'_{0})$.
Hence we may always assume $\varphi_{1}=0$.
Therefore we have
\begin{align}
\varphi=y_{j,s}(0, \left[\begin{smallmatrix}f_{11}&f_{12}\end{smallmatrix}\right])+x_{j,s}(0, \left[\begin{smallmatrix}f_{21}&f_{22}\end{smallmatrix}\right]).\notag
\end{align}
Since $(j+1,s),(j-1,s+1)\notin\mathcal{O}_{\ell}$, the assertion follows from case (a).
\underline{Case (c)}: Assume $\varphi =\left(0,\left[\begin{smallmatrix}f_{1}\\f_{2}\end{smallmatrix}\right]\right)$ with $f_{1}:P(i,r)\to P(j-1,s+1)$ and $f_{2}:P(i,r)\to P(j+1,s)$.
We may assume $f_{1}=a^{p}b^{q}\neq 0$ and $f_{2}=a^{p'}b^{q'}\neq 0$ for some $p,q,p',q'\ge 0$.
If $(i+1,r-1),(i-1,r)\in \mathcal{O}_{\ell}$, then we have $\left(0,\left[\begin{smallmatrix}f_{1}\\0\end{smallmatrix}\right]\right)=\psi_{1}y_{i+1,r-1}$ and $\left(0,\left[\begin{smallmatrix}0\\f_{2}\end{smallmatrix}\right]\right)=\psi_{2}x_{i-1,r}$, where $\psi_{1}:T(i+1,r-1)\to T(j,s)$ and $\psi_{2}:T(i-1,r)\to T(j,s)$ are given by the following diagrams:
\begin{align}
\xymatrix{
P(i+1,r)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]^-{a^{p}b^{q}}&{\begin{matrix}P(i,r)\\\oplus\\ P(i+2,r-1)\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} f_{1}&0 \\ 0&a^{p}b^{q}\end{smallmatrix}\right]}\\
P(j,s+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(j-1,s+1)\\\oplus\\ P(j+1,s)\end{matrix}}
}\hspace{10mm}
\xymatrix{
P(i-1,r+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]^-{a^{p'}b^{q'}}&{\begin{matrix}P(i-2,r+1)\\\oplus\\ P(i,r)\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} a^{p'}b^{q'}&0 \\ 0&f_{2}\end{smallmatrix}\right]}\\
P(j,s+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(j-1,s+1)\\\oplus\\ P(j+1,s)\end{matrix}}
}\notag
\end{align}
If $(i+1,r-1)\in \mathcal{O}_{\ell}$ and $(i-1,r)\notin \mathcal{O}_{\ell}$, then we have $\left(0,\left[\begin{smallmatrix}f_{1}\\0\end{smallmatrix}\right]\right)=\psi_{1}y_{i+1,r-1}$ and
\begin{align}
\left(0,\left[\begin{smallmatrix}0\\f_{2}\end{smallmatrix}\right]\right)=
\begin{cases}
\ \left(0,\left[\begin{smallmatrix}0\\a^{p'-1}b^{q'}\end{smallmatrix}\right]\right)x_{i-1,r} &(p'>0)\\
\ \psi_{3}y_{i+1,r-1} &(p'=0),
\end{cases}\notag
\end{align}
where $\psi_{3}$ is given by the following morphism:
\begin{align}
\xymatrix{
P(i+1,r)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]^-{-ab^{q'-1}}&{\begin{matrix}P(i,r)\\\oplus\\ P(i+2,r-1)\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} 0&a^{2}b^{q'-2} \\ f_{2}&0\end{smallmatrix}\right]}\\
P(j,s+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(j-1,s+1)\\\oplus\\ P(j+1,s)\end{matrix}}
}\notag
\end{align}
Note that if $p'=0$, then $q'\ge 2$ by $(j,s+1),(i+1,r-1)\in \mathcal{O}_{\ell}$.
For $(i+1,r-1)\notin\mathcal{O}_{\ell}$ and $(i-1,r)\in \mathcal{O}_{\ell}$, we obtain the assertion by a similar argument.
If $(i+1,r-1), (i-1,r)\notin \mathcal{O}_{\ell}$, then we have
\begin{align}
\left(0, \left[\begin{smallmatrix}f_{1}\\0\end{smallmatrix}\right]\right)=
\begin{cases}
\left(0, \left[\begin{smallmatrix}a^{p-1}\\0\end{smallmatrix}\right]\right)x_{i-1,r}& (q= 0)\\
\left(0, \left[\begin{smallmatrix}a^{p}b^{q-1}\\0\end{smallmatrix}\right]\right)y_{i+1,r-1}& (q\neq 0)
\end{cases}\notag
\end{align}
and
\begin{align}
\left(0, \left[\begin{smallmatrix}0\\f_{2}\end{smallmatrix}\right]\right)=
\begin{cases}
\left(0, \left[\begin{smallmatrix}0\\a^{p'-1}\end{smallmatrix}\right]\right)x_{i-1,r}& (q'= 0)\\[5pt]
\left(0, \left[\begin{smallmatrix}0\\a^{p'}b^{q'-1}\end{smallmatrix}\right]\right)y_{i+1,r-1}& (q'\neq 0)
\end{cases}\notag
\end{align}
\underline{Case (d)}: Assume $\varphi =(0,f)$ with non-zero $f:P(i,r)\to P(j,s)$.
We may assume $f=a^{p}b^{q}$ for some $p,q\ge 0$.
If $q\neq 0$, then we have
\begin{align}
\varphi=
\begin{cases}
(0,a^{p}b^{q-1})y_{i+1,r-1} & ((i+1,r-1)\notin \mathcal{O}_{\ell})\\
(0,[\begin{smallmatrix}f&a^{p+1}b^{q-1}\end{smallmatrix}])y_{i+1,r-1} &((i+1,r-1)\in \mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
On the other hand, if $q=0$, then we have
\begin{align}
\varphi=
\begin{cases}
(0,a^{p-1})x_{i-1,r} &((i-1,r)\notin \mathcal{O}_{\ell})\\
(0,[\begin{smallmatrix}a^{p-1}b&f\end{smallmatrix}])x_{i-1,r} &((i-1,r)\in \mathcal{O}_{\ell}).
\end{cases}\notag
\end{align}
Therefore we have the assertion.
(2) First we show $x_{i-1,r}\notin \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(i-1,r))y_{i+1,r-1}$.
Suppose to the contrary that $x_{i-1,r}\in \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(i-1,r))y_{i+1,r-1}$.
Then there exists a non-isomorphism $f:T(i+1,r-1)\to T(i-1,r)$ such that $x_{i-1,r}=fy_{i+1,r-1}$.
By repeated use of (1) and Lemma \ref{lem:rel-rule}, we can write $f=gx_{i,r-1}+\mathbf{y}$ with $g\in \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i,r-1),T(i-1,r))$ and $\mathbf{y}\in \langle \mathsf{Y} \rangle_{K}$.
Comparing the domain and codomain of $f$, we have $\mathbf{y}=0$.
By Lemma \ref{lem:rel-rule},
\begin{align}
x_{i-1,r}=fy_{i+1,r-1}=gx_{i,r-1}y_{i+1,r-1}=gy_{i,r-1}x_{i-1,r}.\notag
\end{align}
This implies $(\mathrm{id}-gy_{i,r-1})x_{i-1,r}=0$.
Since $\mathrm{id}-gy_{i,r-1}\in \operatorname{End}\nolimits_{\mathcal{T}}(T(i-1,r))$ is invertible, we have $x_{i-1,r}=0$, a contradiction.
Next we show $x_{i-1,r}\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r),T(i-1,r))\setminus \operatorname{rad}\nolimits^{2}_{\mathcal{T}}(T(i,r),T(i-1,r))$.
Suppose to the contrary that $x_{i-1,r}\in \operatorname{rad}\nolimits_{\mathcal{T}}^{2}(T(i,r),T(i-1,r))$.
We can write $x_{i-1,r}=\sum_{k}f_{k}g_{k}$ with $f_{k}, g_{k}$ radical morphisms.
By (1), we obtain $g_{k}=g'_{k}x_{i-1,r}+g''_{k}y_{i+1,r-1}$ for some morphisms $g'_{k}, g''_{k}$.
Thus $(\mathrm{id}-\sum_{k}f_{k}g'_{k})x_{i-1,r}=\sum_{k}f_{k}g''_{k}y_{i+1,r-1}$.
Since $f_{k}g'_{k}$ is a radical morphism, we have $x_{i-1,r}\in \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1), T(i-1,r))y_{i+1,r-1}$, a contradiction.
(3) By an argument similar to (2), we have the assertion.
(4) Let $f\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i,r),T(j,s))$.
By (1), we have $f=f'x_{i-1,r}+f''y_{i+1,r-1}$ for some $f'\in \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i-1,r),T(j,s))$ and $f''\in \operatorname{Hom}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(j,s))$.
Note that $x_{i-1,r}$ and $y_{i+1,r-1}$ belong to the radical of $\mathcal{T}$.
If $(j,s)$ is neither $(i-1,r)$ nor $(i+1,r-1)$, then $f'$ and $f''$ are in the radical of $\mathcal{T}$.
Hence $f\in \operatorname{rad}\nolimits_{\mathcal{T}}^{2}(T(i,r),T(j,s))$.
Assume $(j,s)=(i-1,r)$. Then $f''\in \operatorname{rad}\nolimits_{\mathcal{T}}(T(i+1,r-1),T(j,s))$.
Hence the assertion follows from (2).
For $(j,s)=(i+1,r-1)$, the proof is similar.
\end{proof}
By Lemma \ref{lem:local-str}(4), the Gabriel quiver of $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ is isomorphic to $\mathbb{T}_{n,m}$.
Hence there exists a surjective map $\Phi: K\mathbb{T}_{n,m}\to \operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ given by
\begin{align}
(i,r)&\mapsto T(i,r)\notag\\
a_{i,r}&\mapsto x_{i,r}\notag\\
b_{i,r}&\mapsto y_{i,r}.\notag
\end{align}
Moreover, by Lemma \ref{lem:rel-rule}, we have $I\subset \ker\Phi$, where $I$ is the two-sided ideal generated by $a_{i,r}b_{i+1,r}-b_{i,r}a_{i-1,r+1}$ for all $i\in[1,n]$ and $r\in \mathbb{Z}/m\mathbb{Z}$.
To complete the proof of Proposition \ref{prop:derived-class}, we compare the dimension of $\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$ with that of $A$.
Let $f=z_{1}z_{2}\cdots z_{k}$ be a morphism in $\mathcal{T}$ with $z_{1},z_{2},\ldots, z_{k}\in \mathsf{Z}$.
For the sake of simplicity, we often write a morphism without indices, e.g., $x_{i,r}x_{i+1, r}y_{i+2,r}y_{i+1,r+1}=:xxyy=:x^{2}y^{2}$.
Then we can regard the morphism $f$ as a word $\mathbf{f}$ in ``$x$'' and ``$y$''.
We denote by $x(f)$ (respectively, $y(f)$) the number of ``$x$'' (respectively, ``$y$'') in the word $\mathbf{f}$.
Note that $x(f)=y(f)=0$ if and only if $f=\mathrm{id}$.
\begin{lemma}\label{lem:dim-end-mut}
Keep the notation above.
Fix $(i,r)\in\mathbb{T}_{0}$ and assume that the codomain of $f$ is $T(i,r)$.
Then the following statements hold.
\begin{itemize}
\item[(1)] $f\neq 0$ if and only if $(x(f),y(f))\in [0,n-i]\times [0,i-1]$.
\item[(2)] $\mathbb{B}:=\{ x^{p}y^{q}\mid (p,q)\in [0,n-i]\times [0,i-1]\}$ forms a basis of $\operatorname{Hom}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A),T(i,r))$.
\item[(3)] $\dim_{K}\operatorname{Hom}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A),T(i,r))=\dim_{K}P(i,r)$.
\end{itemize}
\end{lemma}
\begin{proof}
Fix a vertex $(i,r)\in\mathbb{T}_{0}$.
(1) Let $f=z_{1}z_{2}\cdots z_{k}$ be a morphism in $\mathcal{T}$ with $z_{1},z_{2},\ldots, z_{k}\in \mathsf{Z}$ and codomain $T(i,r)$.
First we claim that if $x(f)>n-i$, then $f=0$.
For $y(f)=0$, this is clear.
In the following, we assume $y(f)\in [1,i-1]$.
By repeated use of Lemma \ref{lem:rel-rule}, we may assume that the first $n-i+2$ letters of $f$ are given by $f=x^{n-i}yx\cdots$.
Since it follows from Lemma \ref{lem:con-rule}(4) that
\begin{align}
x^{n-i}yx=x_{i,r}\cdots x_{n-1,r}y_{n,r}x_{n-1,r+1}=0, \notag
\end{align}
we have $f=0$.
Similarly, we obtain that if $y(f)>i-1$, then $f=0$.
Hence the ``if'' part follows.
Next we prove the ``only if'' part.
Let $p:=x(f)\in [0,n-i]$ and $q:=y(f)\in [0,i-1]$.
By repeatedly using Lemma \ref{lem:rel-rule}, we can write $f=x^{p}y^{q}$.
It is enough to show $x^{n-i}y^{i-1}\neq 0$.
Indeed, assume that this is the case.
If $x^{p}y^{q}=0$ holds for some $p\in [0,n-i]$ and $q\in [0,i-1]$, then we have $x^{n-i}y^{i-1}=0$, a contradiction.
We show $x^{n-i}y^{i-1}\neq 0$.
By the symmetry of the quiver, we may assume $i\in [1,\frac{n+1}{2}]$.
If $i\neq \ell$ (or equivalently $(i,r)\notin\mathcal{O}_{\ell}$ and $(n-i+1,r+i-1)\notin\mathcal{O}_{\ell}$), then we have $x^{n-i}y^{i-1}=(0,a^{n-i}b^{i-1})$.
By Lemma \ref{lem:comb-property}(1), we obtain $a^{n-i}b^{i-1}\neq 0$, and hence $x^{n-i}y^{i-1}\neq 0$.
On the other hand, if $i=\ell$ (or equivalently $(i,r)\in\mathcal{O}_{\ell}$ and $(n-i+1,r+i-1)\in\mathcal{O}_{\ell}$), then we have a commutative diagram
\begin{align}
\xymatrix{
P(n-i+1,r+i)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}\ar[d]^{0}&{\begin{matrix}P(n-i,r+i)\\\oplus\\ P(n-i+2,r+i-1)\end{matrix}}\ar[d]^-{\left[\begin{smallmatrix} 0 & 0\\ \alpha& 0\end{smallmatrix}\right]}\\
P(i,r+1)\ar[r]^-{\left[\begin{smallmatrix} a \\ -b\end{smallmatrix}\right]}&{\begin{matrix}P(i-1,r+1)\\\oplus\\ P(i+1,r)\end{matrix}}\\
}\notag
\end{align}
where $\alpha:= a_{i+1,r}\cdots a_{n-1,r}b_{n,r}\cdots b_{n-i+1,r+i-1}$.
Suppose to the contrary that $x^{n-i}y^{i-1}=0$, that is, there exists a morphism $\left[\begin{smallmatrix}h_{1}&h_{2}\end{smallmatrix}\right]: P(n-i,r+i)\oplus P(n-i+2,r+i-1)\to P(i,r+1)$ such that
\begin{align}
h_{1}a_{n-i,r+i}-h_{2}b_{n-i+2,r+i-1}=0\label{seq:rel1}\\
\left[\begin{smallmatrix}a_{i-1,r+1}\\-b_{i+1,r}\end{smallmatrix}\right]\left[\begin{smallmatrix}h_{1}&h_{2}\end{smallmatrix}\right]=\left[\begin{smallmatrix}0&0\\\alpha&0\end{smallmatrix}\right]\label{seq:rel2}.
\end{align}
By Lemma \ref{lem:comb-property}, we can write
\begin{align}
&h_{1}=k_{1}a_{i,r+1}\cdots a_{n-2,r+1}b_{n-1,r+1}\cdots b_{n-i+2,r+i-2}b_{n-i+1,r+i-1}\notag\\
&h_{2}=k_{2}a_{i,r+1}\cdots a_{n-2,r+1}b_{n-1,r+1}\cdots b_{n-i+2,r+i-2}a_{n-i+1,r+i-1}\notag
\end{align}
with $k_{1},k_{2}\in K$.
By \eqref{seq:rel1}, we have
\begin{align}
(k_{1}-k_{2})a_{i,r+1}\cdots a_{n-1,r+1}b_{n,r+1}\cdots b_{n-i+2,r+i-1}=0.\notag
\end{align}
Thus $k_{1}=k_{2}$ holds.
On the other hand, by \eqref{seq:rel2}, we obtain
\begin{align}
k_{2}a_{i-1,r+1}\cdots a_{n-1,r+1}b_{n,r+1}\cdots b_{n-i+3,r+i-2}=a_{i-1,r+1}h_{2}=0.\notag
\end{align}
Hence $k_{1}=k_{2}=0$.
This implies $0=-b_{i+1,r}h_{1}=\alpha\neq 0$, a contradiction.
(2) We show that $\mathbb{B}$ gives a basis of the $K$-vector space
\begin{align}
\displaystyle \operatorname{Hom}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A),T(i,r))=\bigoplus_{(j,s)\in\mathbb{T}_{0}}\operatorname{Hom}\nolimits_{\mathcal{T}}(T(j,s), T(i,r)).\notag
\end{align}
By Lemma \ref{lem:rel-rule} and (1), $\mathbb{B}$ generates $\operatorname{Hom}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A),T(i,r))$.
In the following, we show that $\mathbb{B}$ is linearly independent.
Suppose to the contrary that $\mathbb{B}$ is not linearly independent.
Then there are
$\emptyset \ne V\subset [0,n-i]\times[0,i-1]$ and $\alpha_{v}\in K\setminus\{0\}$ ($\forall v\in V$) such that
\begin{align}
\sum_{(p,q)\in V} \alpha_{(p,q)}\cdot x^{p}y^{q}=0. \notag
\end{align}
We may assume that $x^{p}y^{q}:T(j,s)\to T(i,r)$ for each $(p,q)\in V$.
Then we can choose $(p_{0}, q_{0})\in V$ such that $p_{0}<p$ and $q_{0}<q$ hold for each $(p,q)\in V\setminus\{(p_{0},q_{0})\}$.
Hence, by Lemma \ref{lem:rel-rule}, we can write
\begin{align}
\sum_{(p,q)\in V} \alpha_{(p,q)}\cdot x^{p}y^{q}=\alpha_{(p_{0},q_{0})}x^{p_{0}}y^{q_{0}} \left(1+z\right)\ \left(z\in \operatorname{rad}\nolimits_{\mathcal{T}}\left(T(i,r),T(i,r)\right)\right). \notag
\end{align}
This means $x^{p_{0}}y^{q_{0}}=0$.
In particular, we have
\begin{align}
x^{n-i}y^{i-1}=x^{n-i-p_{0}}y^{i-1-q_{0}}x^{p_{0}}y^{q_{0}}=0, \notag
\end{align}
which contradicts (1).
(3) This follows from (2) and Lemma \ref{lem:comb-property}(2).
\end{proof}
Now we are ready to prove Proposition \ref{prop:derived-class}.
\begin{proof}[Proof of Proposition \ref{prop:derived-class}]
By Lemmas \ref{lem:local-str}(4) and \ref{lem:dim-end-mut}, there exists a surjective map $\Phi:K\mathbb{T}_{n,m}\to \operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$, which induces a surjective map $A\to \operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$.
Moreover, it follows from Lemma \ref{lem:dim-end-mut} that $\dim_{K}A=\dim_{K}\operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$.
Hence we obtain $A\cong \operatorname{End}\nolimits_{\mathcal{T}}(\mu_{X_{\ell}}(A))$.
\end{proof}
\subsection*{Acknowledgements}
The authors are deeply grateful to Osamu Iyama for informing them of Abe--Hoshino's work and suggesting an abstract construction of $\widetilde{A}$.
\section{Introduction}
While recent advances in deep learning have yielded a significant performance boost on most computer vision tasks, this success depends heavily on the availability of large amounts of well-annotated
training data. As the cost of acquiring data labels remains high, domain adaptation approaches have been proposed as an alternative: the main idea is to exploit the unlabeled data within the domain of interest together with annotated data from a different yet related domain. Yet, because learning from the new domain may suffer from the distribution mismatch between the two domains, it is necessary to adapt the model learned on the labelled {\em source} domain to the actual {\em target} domain, as pictured in Fig.~\ref{fig:USDA}.
With the recent progress in deep learning, a significant performance boost of visual categorization systems over the previous state of the art was observed.
In parallel, it was shown that features extracted from
the activation layers of these deep networks can be re-purposed for novel tasks or domains~\cite{DonahueICML14DeCAFActivationFeature}, even when the new task/domain differs from the one originally used to train the model. This is because
deep neural networks learn more abstract and more robust representations: they encode category-level information and remove, to a certain extent, the domain bias \cite{BengioPAMI13RepresentationLearningReviewNewPerspectives,YosinskiNIPS14HowTransferableDNN}.
Hence, these representations are more transferable to new tasks/domains, because they disentangle the factors of variation in the underlying data samples while grouping the samples hierarchically according to their relatedness to invariant factors.
These image representations, in general obtained by
training the model in a fully supervised manner on large-scale annotated datasets, in particular
ImageNet~\cite{RussakovskyIJCV15ImagenetVisualRecognChallenge}, can therefore be directly used to build stronger baselines for domain adaptation methods.
Indeed, simply training a linear classifier on such representations obtained from activation layers \cite{DonahueICML14DeCAFActivationFeature}, with no further adaptation to the target set, yields in general significantly better results than most shallow
DA models trained with the previously used handcrafted representations, typically bags of visual words (BOV) \cite{CsurkaECCVWS04VisualCategorizationBagsKeypoints}.
In Fig.~\ref{fig:BovAlexNet} we illustrate this using the AlexNet architecture \cite{KrizhevskyNIPS12ImagenetDeepCNN}; representations obtained with deeper models \cite{SimonyanX14VeryDeepConvolutionalNetworks,HeCVPR16DeepResidualLearning,SzegedyCVPR15GoingDeeperWithConvolutions} provide even better performance and generalization capacity~\cite{CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}.
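To make this baseline concrete, the following sketch trains a source-only classifier on frozen features and applies it, with no adaptation, to a shifted target domain. It is illustrative only: synthetic Gaussian vectors stand in for real activation-layer features, and a nearest-class-mean rule stands in for the NN classifier of Fig.~\ref{fig:BovAlexNet}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for deep features (e.g. activations of an
# ImageNet-pretrained backbone); a real pipeline would extract these
# from the frozen network instead.
def features(n, class_shift, domain_shift=0.0, dim=64):
    return rng.normal(class_shift + domain_shift, 1.0, size=(n, dim))

# Labeled source domain with two classes.
X_src = np.vstack([features(100, 0.0), features(100, 1.0)])
y_src = np.repeat([0, 1], 100)

# Target domain: same classes, but a global distribution shift.
X_tgt = np.vstack([features(100, 0.0, 0.3), features(100, 1.0, 0.3)])
y_tgt = np.repeat([0, 1], 100)

# Nearest-class-mean classifier fit on the source, applied as-is.
means = np.stack([X_src[y_src == c].mean(axis=0) for c in (0, 1)])
d2 = ((X_tgt[:, None, :] - means[None, :, :]) ** 2).sum(-1)
acc = (d2.argmin(axis=1) == y_tgt).mean()
```

With discriminative features, such a source-only classifier already copes with a moderate shift, which is the effect the figure illustrates.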
\begin{figure}[ttt]
\begin{center}
\includegraphics[width=0.48\textwidth]{USDA.png}
\caption{Domain adaptation is a machine learning technique where knowledge from a labeled source domain is leveraged to learn a model for an unlabeled target domain. It is assumed that there is a distribution mismatch between domains but the task (\emph{e.g.\,} class labels) is shared between domains.}
\vspace{-0.4cm}
\label{fig:USDA}
\end{center}
\end{figure}
While directly using these models trained on the source already provides relatively good results on the target
datasets, especially when the domain shift is moderate, for more challenging problems, \emph{e.g.\,} adaptation between images and paintings, drawings, clip art or
sketches \cite{CastrejonCVPR16LearningAlignedCrossModal,LiCVPR17DeeperBroaderArtierDG,CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}, even a classifier trained with such deep features has difficulty handling the domain differences. Therefore, alternative solutions that directly address the domain shift remain necessary.
In what follows, we first discuss and compare different strategies for exploiting deep architectures for domain adaptation. Then, we provide an overview of recent trends in deep visual domain adaptation. Finally, we describe
a few strategies, orthogonal to the design of deep DA architectures, that can be applied to improve those models.
\begin{figure}[ttt]
\begin{center}
\includegraphics[width=0.5\textwidth]{BovAlexNet.jpg}
\caption{Left: nearest neighbor (NN) classification results with AlexNet \cite{KrizhevskyNIPS12ImagenetDeepCNN} features, without any adaptation, on the Office+Caltech \cite{GongCVPR12GeodesicFlowKernel} dataset outperform by a large margin classical shallow DA methods using
the SURF-BOV features originally provided with these datasets. Right: Amazon (A) and Webcam (W) data from the Office 31 \cite{SaenkoECCV10AdaptingVisualCategoryModels} benchmark, clustered
with SURF-BOV and with AlexNet features. We
can see that the two domains are much better clustered with deep features than with SURF-BOV.}
\label{fig:BovAlexNet}
\vspace{-0.4cm}
\end{center}
\end{figure}
\section{Deep learning strategies}
\label{sec:DeepStrategies}
There are several ways to exploit deep models to handle
the domain mismatch between the source and the target set,
which can be grouped into four main categories: 1) shallow methods using deep features, 2) fine-tuned deep architectures, 3) shallow methods using fine-tuned deep features, and 4) deep domain adaptation models.
\myparagraph{Shallow DA methods using deep features}
We mentioned above that using a pre-trained deep model as a feature extractor to represent the images and
training a classifier on the source already provides a strong baseline. However, we can go a step further by incorporating these representations into traditional DA methods such as
\cite{GongCVPR12GeodesicFlowKernel,LongCVPR14TransferJointMatchingDA,FarajidavarBMVC14AdaptiveTransductiveTransfer,FernandoPRL15JointSubspaceLearningUDA,BaktashmotlaghBC17LearningDomainInvariantEmbeddingsByMatchingDistributions,CourtyPAMI17OptimalTransportDA}. As shown in
~\cite{DonahueICML14DeCAFActivationFeature,TommasiBC17ADeeperLookAtDatasetBias,SunAAAI16ReturnFrustratinglyEasyDA,CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}, to cite a few examples, using such DA methods with deep features yields a further performance improvement on the target data. Nevertheless, it was observed that
the contribution of the deep features is much more significant than the contribution of the various DA methods. Indeed, as Fig.~\ref{fig:BovAlexNet} illustrates,
the gain obtained with any DA method over the BOV baseline is small compared to the gain of deep features over BOV, both for the baseline and for any DA method.
\myparagraph{Training deep architectures on the source}
The second solution is to train or fine-tune a deep network on the source domain and directly use the resulting model to
predict the class labels of the target instances. While in this case there is no adaptation to the target, as
also illustrated in Fig.~\ref{fig:DeepStrategies}, we observe better performance (or equal, if ImageNet is the source) not only compared with the baseline (a classifier trained on features from the backbone pretrained on ImageNet), but also compared with the previous strategy (shallow DA applied to the corresponding image representations). The explanation is that the deep model disregards, to a certain extent, the appearance variation by focusing on high-level semantics, and is therefore able to partially overcome the domain gap. However, if the domain difference between the source and the target is large, fine-tuning the model on the source can also overfit it to the source \cite{ChopraICMLWS13DLIDInterpolatingBetweenDomains,SunAAAI16ReturnFrustratinglyEasyDA}, and it is therefore important to correctly select the layers to be fine-tuned \cite{ChuTASKCV16BestPracticesForFineTuning,CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}.
\begin{figure}[ttt]
\begin{center}
\includegraphics[width=0.48\textwidth]{DvsS_DAN2.png}
\caption{We compare several strategies on the
LandMarkDA dataset \cite{CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}
using shallow (SDAN) and deep (DDAN) discrepancy-based networks \cite{CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy} built with GoogleNet \cite{SzegedyCVPR15GoingDeeperWithConvolutions} as backbone. No adaptation (NA) means that only the classifier layer was trained, in contrast to fine-tuning the model on the source (FT). SDAN is trained with deep features from the ImageNet pre-trained network (SDAN) or from the fine-tuned network (FT+SDAN). We can see that FT+SDAN yields results close to DDAN, which performs the best.}
\label{fig:DeepStrategies}
\vspace{-0.4cm}
\end{center}
\end{figure}
\begin{figure*}[ttt]
\begin{center}
\includegraphics[width=0.92\textwidth]{ShallowVSDeep.png}
\caption{Left: classical DA methods where the image representations are fixed and the domain alignment and source classifier are learned in this feature space. Right: deep DA architecture where image representations, source classifier and domain alignment are all learned jointly in an end-to-end manner. The parameters of the source and target models can be partially or fully shared. }
\label{fig:ShallowvsDeeps}
\vspace{-0.4cm}
\end{center}
\end{figure*}
\myparagraph{Shallow methods using fine-tuned deep features}
Note that the two strategies mentioned above are orthogonal, and they can be combined to take advantage of both. This is done by first fine-tuning the model on the source set and then using the features extracted with this model
in a shallow DA method to
decrease the discrepancy between the source and target distributions. In addition to boosting the performance (see Fig.~\ref{fig:DeepStrategies}), further advantages of this strategy are that it does not require tailoring the network architecture for DA, and that the fine-tuning on the source can be done in advance, even before seeing the target set.
In Fig.~\ref{fig:DeepStrategies} we compare these strategies using a shallow method (a single-layer perceptron on top of the pre-extracted features) and a deep end-to-end architecture, where in both cases we use the same discrepancy (kernelized MMD~\cite{BorgwardtBI06IntegratingKernelMaximumMeanDiscrepancy,LongICML15LearningTransferableFeaturesDAN})
and cross-entropy loss. We can see that using a shallow method with deep features extracted from the fine-tuned model indeed combines the advantages of fine-tuning with those of domain adaptation, and yields results close to the deep Siamese discriminative network designed for domain adaptation. Similar behaviour was observed when comparing DeepCORAL \cite{SunTASKCV16DeepCORALCorrelationAlignment} with CORAL \cite{SunAAAI16ReturnFrustratinglyEasyDA} using features extracted from the pre-trained and fine-tuned networks. Note nevertheless that in both cases a relatively simple deep DA method was considered; as will be discussed in the next sections, these deep models can be further improved in various ways.
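The shallow-method-on-deep-features recipe can be made concrete with a CORAL-style second-order alignment of pre-extracted (e.g. fine-tuned) features. The following is a minimal NumPy sketch under our own naming and regularization choices, not the implementation of any cited method:

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-5):
    """CORAL-style alignment of pre-extracted deep features: whiten the
    source features, then re-color them with the target covariance so
    that second-order statistics of the two domains match."""
    def msqrt(C, inv=False):
        # symmetric (inverse) matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        w = np.clip(w, eps, None)
        d = 1.0 / np.sqrt(w) if inv else np.sqrt(w)
        return (V * d) @ V.T
    d = Xs.shape[1]
    Xs_c = Xs - Xs.mean(axis=0)
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(d)   # regularized covariances
    Ct = np.cov(Xt - Xt.mean(axis=0), rowvar=False) + eps * np.eye(d)
    # whiten with the source covariance, re-color with the target one
    return Xs_c @ msqrt(Cs, inv=True) @ msqrt(Ct) + Xt.mean(axis=0)
```

A source classifier trained on the aligned features can then be applied to the target features directly.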
\section{Deep DA Models}
Historical shallow DA methods include data re-weighting, metric learning, subspace representations or distribution matching
(for more details see the surveys \cite{GopalanB15DomainAdaptationVisualRecognition,CsurkaBC17AComprehensiveSurveyDAForVisualApplications}). As discussed above, these methods assume that the image representations are fixed (handcrafted or pre-extracted from a deep model) and the adaptation model uses these features as input (see the left image in Fig.~\ref{fig:ShallowvsDeeps}).
Amongst the most popular shallow DA approaches, a set of methods focuses on aligning the marginal distributions of the source and target sets. These methods learn either a linear projection or a more complex feature transformation such that, in the new space, the discrepancy between the domains is significantly decreased. Then, thanks to the domain alignment, the classifier trained on the labeled source set in the projected space can be directly applied to the target set.
It is therefore not surprising that amongst the first deep DA models we find the generalization of this pipeline, as illustrated in Fig.~\ref{fig:ShallowvsDeeps}(right) where the deep representation is jointly learned with the source classifier and domain alignment in an end-to-end manner.
These first solutions were
followed by a large number of different deep DA methods and architectures that can be grouped according to different criteria (see also \cite{WangNC18DeepVisualDASurvey}). In what follows, we recall some of the main trends.
\myparagraph{Discriminative models}
These models, inspired by classical DA methods, have a Siamese architecture~\cite{Bromley93IJPRAISignatureVerificationTimeDelayNN} with two streams, one for the source set and one for the target set. The two streams can share their weights entirely, partially, or not at all, and in general both branches are initialized from the corresponding backbone (\emph{e.g.\,} VGG \cite{SimonyanX14VeryDeepConvolutionalNetworks},
ResNet \cite{HeCVPR16DeepResidualLearning} or GoogleNet \cite{SzegedyCVPR15GoingDeeperWithConvolutions}), trained on the source set most often using the cross-entropy classification loss.
The Siamese network is then trained with the same cross-entropy loss applied only to the source stream, together with a domain alignment loss defined on both source and target features. This loss uses either the last activation layer before the soft-max prediction
\cite{GhifaryPRICAI14DomainAdaptiveNN} or it can be applied to several activation layers \cite{LongICML15LearningTransferableFeaturesDAN}.
\begin{figure*}[ttt]
\begin{center}
\includegraphics[width=0.98\textwidth]{ObjDetDA.png}
\caption{Domain Adaptive Faster R-CNN model \cite{ChenCVPR18FasterRCNNObjDetWild} aiming to adapt the detector trained on the source for a new domain.
The domain shift is tackled in an adversarial training manner with GRL \cite{GaninJMLR16DomainAdversarialNN} layers at two levels, the image level and the instance level. A consistency regularizer is incorporated
between these two domain classifiers to learn a domain-invariant region proposal network (RPN). (Image Courtesy to Yuhua Chen).}
\label{fig:objdet}
\vspace{-0.4cm}
\end{center}
\end{figure*}
The domain alignment can be achieved by
minimizing the feature distribution discrepancy, or by using an adversarial loss to increase domain confusion.
To minimize the distribution discrepancy,
most often the Kernelized MMD loss is used \cite{GhifaryPRICAI14DomainAdaptiveNN,LongICML15LearningTransferableFeaturesDAN}, but amongst the alternative losses proposed, we can mention the Central Moment Discrepancy \cite{ZellingerICLR17CentralMomentDiscrepancyDA},
CORAL loss \cite{SunTASKCV16DeepCORALCorrelationAlignment},
or Wasserstein distance \cite{ShenAAAI18WassersteinDistanceGuidedRepresentationLearningDA,BalajiICCV19NormalizedWassersteinDistanceAdvLearnDA}.
Note that the Wasserstein distance is used also to minimize
the global transportation cost in optimal transport based DA methods \cite{CourtyPAMI17OptimalTransportDA,DamodaranECCV18DeepJDOTOptimalTransportUDA,XuCVPR20ReliableWeightedOptimalTransportUDA}, however, these are asymmetric models transporting the source data towards the target samples instead of projecting both sets into a
common latent space.
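For concreteness, the (biased) empirical estimate of the squared kernelized MMD behind such discrepancy losses can be sketched in a few lines of NumPy. This is an illustrative sketch with an RBF kernel whose bandwidth is a free parameter, not the implementation of any cited method:

```python
import numpy as np

def mmd2_rbf(Xs, Xt, sigma=1.0):
    """Biased empirical estimate of the squared kernelized MMD between
    source and target feature batches, using a Gaussian (RBF) kernel."""
    def kernel(A, B):
        # pairwise squared distances, then the RBF kernel matrix
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return (kernel(Xs, Xs).mean() + kernel(Xt, Xt).mean()
            - 2 * kernel(Xs, Xt).mean())
```

In a deep DA model this quantity is computed on mini-batch activations and added to the source classification loss.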
On the other hand, domain confusion can be achieved either with adversarial losses such as GAN loss
~\cite{GoodfellowNIPS14GenerativeAdversarialNets,TzengCVPR17AdversarialDiscriminativeADDA,VolpiCVPR18AdversarialFeatureAugmentationUDA} and domain confusion loss \cite{TzengICCV15SimultaneousDeepTL,GebruICCV17FineGrainedRecognitionWildMultiTaskDA}, or by using a domain classifier with a gradient reversal layer (GRL)~\cite{GaninJMLR16DomainAdversarialNN,PeiAAAI18MultiAdversarialDA}. Note however that the latter can also be formulated as a min-max loss, implemented by integrating a simple binary domain classifier and a GRL layer into a standard deep architecture; the GRL acts as the identity during the forward pass and reverses the gradient during backpropagation. This simple but
quite powerful solution became extremely popular for applying DA to problems beyond image classification, in particular object detection \cite{ChenCVPR18FasterRCNNObjDetWild,SaitoCVPR19StrongWeakDistributionAlignmentObjDet,ZhuCVPR19AdaptingObjDetSelectiveCrossDomainAlignment,HeICCV19MultiAdversarialFasterRCNNUnrestrictedObjDet,XuCVPR20ExploringCategoricalRegularizationDAObjDet} (see also Fig.~\ref{fig:objdet}), semantic image segmentation \cite{HoffmanX16FCNsInTheWildPixelLevelAdversarialDA,TsaiCVPR18LearningToAdaptStructuredOutputSpaceSemSegm} and video action recognition \cite{LiNIPS18UnsupervisedLearningViewInvariantActionRepr,MunroICCVWS19MultiModalDAFineGrainedActionRecognition}.
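Conceptually, the GRL can be sketched in isolation. The following is a minimal, framework-free illustration of the forward/backward behaviour; real implementations hook into an autograd engine (e.g. as a custom backward function):

```python
import numpy as np

class GradReverse:
    """Minimal gradient reversal layer (GRL): identity in the forward
    pass, while during backpropagation the incoming gradient is negated
    (and optionally scaled by lambda) before reaching the feature
    extractor, turning the domain classifier's minimization into a
    maximization for the features."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                        # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient
```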
\myparagraph{Class-conditional distribution alignment}
To overcome the drawback that aligning marginal distributions without explicitly taking the task into account might lead to sub-optimal solutions, several approaches were proposed.
Amongst them are methods that try to align class-conditional distributions by minimizing the marginals of features and class predictions jointly \cite{LongICML17DeepTLJointAdaptationNetworks}, or that exploit discriminative information conveyed in the classifier predictions to assist adversarial adaptation
\cite{LongNIPS18ConditionalAdversarialDomainAdaptation}.
Instead, \cite{ZhangICML19BridgingTheoryAlgorithmDA}
proposes the Margin Disparity Discrepancy loss, defined on the scoring function, and uses adversarial learning to optimize it.
\cite{SaitoCVPR18MaximumClassifierDiscrepancyUDA,SaitoICLR18AdversarialDropoutRegularization} propose to minimize the disagreement of task-specific decision boundaries on target examples while aligning features across domains. \cite{KangCVPR19ContrastiveAdaptationNetworkUDA} explicitly models the intra-class and inter-class domain discrepancies, where the intra-class domain discrepancy is minimized to avoid misalignment and the inter-class domain discrepancy is
maximized to enhance the model's generalization ability.
Assuming access to at least a small set of labeled target samples, \cite{KoniuszCVPR17DomainAdaptationMixtureAlignmentsSecondHigherOrderScatterTensors} proposed to align higher-order scatter statistics between
domain-specific and class-specific representations.
\myparagraph{Network parameter adaptation} The above methods generally keep the same architecture with the same weights for both the source and target streams, which essentially aims at learning domain-invariant features. In contrast, several approaches were proposed whose goal is to specialize the streams for their respective domains by adapting the parameters of the target stream. As such, \cite{RozantsevPAMI18BeyondSharingWeightsDeepDA,RozantsevCVPR18ResidualParameterTransferDeepDA} explicitly model the domain shift by learning meta-parameters that transform the weights and biases of each layer of the network from the source stream to the target one.
Instead, \cite{BermudezChaconICLR20DomainAdaptiveMultibranchNetworks} considers a multi-stream architecture with non-shared parameters, where learnable gates at multiple levels
allow the network to find, for each domain, a corresponding weighted aggregation of these parallel streams.
\begin{figure*}[ttt]
\begin{center}
\includegraphics[width=0.9\textwidth]{StyleTransf.png}
\caption{Left: Paired image style transfer \cite{GatysNIPS15TextureSynthesisCNN}, where the model takes the content of the source image (first column) and the style of the target image (second column) to generate a target-like source image (third column). Note that these images inherit the labels from the source while they look more like the target images.
Right: Unpaired image-to-image (I2I) transfer, where the model learns to directly synthesize target-like images (night, rainy, {\em etc.}) for a source input and/or source-like images (day, sunny, {\em etc.}) for a target image, without the need for an explicit style image.}
\label{fig:styleTransfer}
\vspace{-0.4cm}
\end{center}
\end{figure*}
\myparagraph{Domain specific batch normalization}
\cite{CarlucciICCV17AutoDIALDomainAlignmentLayers,LiPR18AdaptiveBatchNormalizationForPracticalDomainAdaptation,ChangCVPR19DomainSpecificBatchNormalizationUDA} have shown that domain specific batch normalization is equivalent to projecting the source and target feature distributions to a reference distribution through feature standardization.
Hence this yields a simple yet efficient solution for
minimizing the gap between domains. \cite{CuiCVPR20TowardsDiscriminabilityDiversityBatchNuclearNormMaximization} proposes
batch nuclear-norm maximization to simultaneously enhance the discriminability and diversity of predicted scores. \cite{ManciniCVPR19UnifyingPredictiveContinuousDAthroughGraphs} applied domain-specific batch normalization
layers in the context of graph-based predictive DA.
\cite{PerrettCVPR19DDLSTMDualDomainLSTMActionRecogn} proposes the DDLSTM architecture for action recognition, which performs cross-contaminated recurrent batch normalization for both single-layer and multi-layer LSTM architectures.
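The core operation behind these methods is easy to sketch: each domain's batch is standardized with its own statistics, projecting both feature distributions onto a common zero-mean, unit-variance reference. A simplified NumPy illustration (training-mode batch statistics only, without the learnable scale and shift of a full batch-normalization layer):

```python
import numpy as np

def domain_specific_bn(x, eps=1e-5):
    """Standardize a batch of features from ONE domain with its own
    batch statistics. Applying this separately to source and target
    batches aligns their first- and second-order moments per feature."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)
```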
\myparagraph{Encoder–decoder reconstruction}
Early deep auto-encoder frameworks proposed for DA in NLP
\cite{GlorotICML11DASentimentClassification}
rely on the feedforward stacked denoising
autoencoders \cite{VincentICML08ExtractingDenoisingAutoencoders}, where a multi-layer neural network reconstructs the input data from partial random corruptions with backpropagation. \cite{ChenICML12MarginalizedDenoisingAutoencodersDA} has shown that such a model can be trained efficiently by marginalizing out the noise, which leads to a closed-form solution for the transformations between layers.
\cite{CsurkaTASKCV16UnsupervisedDAInstanceDenoising} extended this unsupervised network to a supervised one by jointly learning the domain invariance with the cross-domain classifier while keeping the network solvable in a single forward pass.
In contrast to these models that act on pre-extracted features, more recent reconstruction models train the encoders/decoders end-to-end. As such, \cite{GhifaryECCV16DeepReconstructionClassificationNetworkDA} combines the standard CNN for source label prediction
with a deconvolutional network~\cite{ZeilerCVPR10DeconvolutionalNetworks} for target data reconstruction by alternating between
unsupervised and supervised training.
\cite{BousmalisNIPS16DomainSeparationNetworks} integrates both domain-specific and shared encoders, and the model includes a reconstruction loss for a shared decoder that relies on both the domain-specific and shared representations.
\myparagraph{Transfer domain style}
In many cases the shift between domains is strongly related to changes in image appearance, such as day to night, seasonal changes, or synthetic to real. An even stronger domain shift can be observed when adapting between images that exhibit different artistic styles, such as paintings, cartoons and sketches~\cite{CastrejonCVPR16LearningAlignedCrossModal,LiCVPR17DeeperBroaderArtierDG,CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}. To explicitly
account for such stylistic domain shifts, a set of papers proposed to use image-to-image (I2I) style transfer methods \cite{GatysNIPS15TextureSynthesisCNN,HuangICCV17ArbitraryStyleTransferRealTimeAdaptiveInstanceNormalization,LiECCV18ClosedFormImageStylization} to generate a set of {\em target-like} source images, and have shown that this new set is suitable for training a model for the target set \cite{CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy,ThomasACCV19ArtisticObjectRecognitionUnsupervisedStyleAdaptation}. The main reason why this works is that the synthesized images inherit
the semantic content of the source, and hence its labels,
while their appearance is more similar to the target style (see examples in Figure \ref{fig:styleTransfer}(Left)).
Training a model with this set not only outperforms the model trained with the original source set, but the resulting model is also easier to further adapt to the target set \cite{CsurkaTASKCV17DiscrepancyBasedNetworksUnsupervisedDAComparativeStudy}.
Another set of methods seek to learn how to translate
between domains without using paired input-output examples but instead assuming there is some underlying appearance shift between the domains (\emph{e.g.\,} day to night, sunny to rainy, synthetic to real). For example, \cite{YooECCV16PixelLevelDomainTransfer,BousmalisCVPR17UnsupervisedPixelLevelDAGAN,TaigmanICLR17UnsupervisedCrossDomainImageGeneration} train the network
to synthesize target-like and/or source-like images (see Figure \ref{fig:styleTransfer}(Right)) in general by relying on a Generative Adversarial Networks (GANs) \cite{GoodfellowNIPS14GenerativeAdversarialNets},
where an adversarial loss forces the generated fake (target-like) images to be indistinguishable from real (target) photos. A pair of GANs, each corresponding to one of the domains, is considered in \cite{LiuNIPS16CoupledGenerativeAdversarialNetworks}, where the model maps the input noise vector to paired images drawn from the two distributions that share the same labels.
This work was extended in
\cite{LiuNIPS17UnsupervisedI2ITranslationNetworks} with Variational Auto-Encoders (VAE), where
the image reconstruction, image translation, and the cycle-reconstruction are jointly optimized.
\cite{ZhuICCV17UnpairedI2ICycleConsistentAdversarialNetworks} proposes to learn a mapping between source and target domains using an adversarial GAN loss while imposing a cycle consistent loss, \emph{i.e.\,} the target-like source image mapped back to source style should match the original source image.
\cite{HoffmanICML18CyCADACycleConsistentAdversarialDA}
combined cycle consistency between
input and stylized images with task-specific semantic
consistency, and extended the method to semantic segmentation (see Figure \ref{fig:cycada}). Transferring the target image style to generate synthetic source images is at the core of many DA methods for semantic segmentation
\cite{MurezCVPR18ImageToImageTranslationDA,SankaranarayananCVPR18LearningSyntheticDataDomainShiftSemSegm,WuECCV18DCANDualChannelWiseAlignmentNetworksUDA,ChangCVPR19AllAboutStructureDASemSeg,YangCVPR20PhaseConsistentEcologicalDA}. GAN-like DA models combined with similarity preserving constraints were often used for adapting
cross-domain person re-identification models \cite{BakECCV18DomainUDAASynthesisReId,DengCVPR18ImageImageDAwithPreservedSelfSimilarityReID,ChenICCV19InstanceGuidedContextRenderingDAReID}.
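The cycle-consistency idea at the heart of these unpaired translation models reduces to an L1 penalty on a round-trip translation. The following is an illustrative stand-in, where `G_st` and `G_ts` are placeholders for the two learned translation mappings:

```python
import numpy as np

def cycle_consistency_loss(x_source, G_st, G_ts):
    """L1 cycle-consistency penalty: a source image translated to the
    target style (G_st) and back to the source style (G_ts) should
    match the original source image."""
    return float(np.mean(np.abs(G_ts(G_st(x_source)) - x_source)))
```

The same penalty is applied symmetrically to target images translated to the source style and back.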
\begin{figure*}[ttt]
\begin{center}
\includegraphics[width=0.9\textwidth]{Cycada.png}
\caption{CyCADA \cite{HoffmanICML18CyCADACycleConsistentAdversarialDA} combines pixel-level and feature-level adaptation, where both structural and semantic consistency are enforced. The former is ensured by an L1 penalty on the reconstruction
error between the source image and the image reconstructed from the target-like source. To ensure the latter, a semantic consistency loss is used that forces the segmentation of the target-like source image to match
the source predictions. (Image Courtesy to Judy Hoffman).}
\label{fig:cycada}
\vspace{-0.4cm}
\end{center}
\end{figure*}
\section{Orthogonal improvement strategies}
In addition to the specifically tailored deep DA architectures, several
machine learning strategies can be used with the above models to further improve their performance. While in some cases such methods were used as the main DA solution, we discuss them here separately, as in general these ideas can easily be combined with most of the above-mentioned DA models.
\myparagraph{Pseudo-labeling the target data} One of the most used such techniques is self-supervised learning with pseudo-labeled target data, sometimes referred to as self-labeling or self-training. The underlying assumption is that the labeling is correct for at least a subset of the target samples, and hence the model can rely on them to improve itself.
In this way the model acts as if it was a semi-supervised DA model, except that instead of having ground-truth target labels, these labels come from a pseudo-labeling process.
As not all predictions are correct, often pseudo-labeling confidence scores are computed and used to select which pseudo-labeled samples should be retained for training.
Typical approaches to obtain pseudo-labels include using
the softmax predictions \cite{SaitoICML17AsymmetricTriTrainingUDA,DengTCSVT20RethinkingTripletLossDA}, using distance to class prototypes \cite{CsurkaTASKCV14DomainDomainSpecificClassMeans,PanCVPR19TransferrablePrototypicalNetworksUDA},
clustering \cite{KangCVPR19ContrastiveAdaptationNetworkUDA,SharmaWACV20UnsupervisedMetaDAFashionRetrieval},
label propagation on the joint source-target nearest neighbour graph \cite{TommasiICCV13FrustratinglyEasyDA,SenerNIPS16LearningTransferrableRepresentationsUDA},
via augmented anchors
\cite{ZhangECCV20LabelPropagationAugmentedAnchorsUDA}, or even
considering a teacher classifier, built as an implicit ensemble of source classifiers \cite{DengICCV19ClusterAlignmentTeacherUDA}.
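The confidence-based selection step shared by these approaches can be sketched as follows. This is a minimal NumPy illustration using a simple max-probability criterion; the threshold value is an arbitrary choice of ours:

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Select confidently pseudo-labeled target samples from a batch of
    softmax outputs: keep a sample only if its maximum class probability
    exceeds the threshold. Returns the retained labels and indices."""
    conf = probs.max(axis=1)       # confidence score per sample
    labels = probs.argmax(axis=1)  # predicted (pseudo) label per sample
    keep = conf >= threshold
    return labels[keep], np.flatnonzero(keep)
```

The retained (sample, pseudo-label) pairs are then treated as labeled data in the next training round.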
Self-supervising deep DA models with pseudo-labeled target samples is also a popular strategy used to adapt
tasks beyond image classification. For example,
\cite{SharmaWACV20UnsupervisedMetaDAFashionRetrieval} proposed several strategies to pseudo-label fashion products across datasets and use them to solve the meta-domain gap occurring between consumer and shop fashion images.
\cite{GeX20StructuredDAOnlineRelationRegularizationReID} proposed a DA framework with online relation regularization for person re-identification that uses target pseudo labels to improve the target-domain encoder trained via a joint cross-domain labeling system. \cite{LiCVPR19BidirectionalLearningDASemSegm} used predicted labels with high confidence in a
bidirectional learning framework for semantic segmentation, where the image translation model and the segmentation adaptation model are learned alternatively. \cite{WangCVPR20DifferentialTreatmentStuffThingsDASemSegm} combines the self-supervised learning strategy
with a framework where the model is disentangled into a ``things'' and a ``stuff'' segmentation network.
\myparagraph{Curriculum learning}
To minimize the impact of noisy pseudo-labels during
alignment, curriculum learning-based \cite{BengioACMMM09CurriculumLearning} approaches have
been explored. The simplest and most used curriculum learning scenario in DA is to first consider the
most confident target samples for the alignment and to include the less confident ones at later stages of the training. Pseudo-labeling confidence
scores are typically determined using the image classifiers
\cite{RoyCVPR19UDAFeatureWhiteningConsensusLoss,ZhangCVPR18CollaborativeAdversarialUDA}, similarity to neighbours \cite{SenerNIPS16LearningTransferrableRepresentationsUDA,TommasiICCV13FrustratinglyEasyDA} or to class
prototypes \cite{ChenCVPR19ProgressiveFeatureAlignmentUDA, CsurkaTASKCV14DomainDomainSpecificClassMeans}. After each epoch, \cite{ZhangCVPR18CollaborativeAdversarialUDA} increases the training set with new target samples that are both highly confident and domain uninformative. To improve the confidence of pseudo-labels, \cite{RoyCVPR19UDAFeatureWhiteningConsensusLoss} relies on the consensus of image transformations, whereas \cite{SaitoICML17AsymmetricTriTrainingUDA} considers the agreement between multiple classifiers. \cite{ShuAAAI19TransferableCurriculudmWeaklySupervisedDA} proposes a weakly-supervised DA framework that
alternates between quantifying the transferability of source examples based on their contributions to the target task and progressively integrating from easy to
hard examples.
\cite{KangCVPR19ContrastiveAdaptationNetworkUDA} considers target clusters initialized by the source cluster centers and assigns target samples to them. At each epoch, target elements that are far from their affiliated cluster are first discarded, and then clusters with too few assigned target samples are also discarded.
Curriculum learning-based DA methods that progressively include harder and harder pseudo-labeled target data were also used for cross-domain person re-identification \cite{FanX17UnsupervisedPersonReIDClusteringFineTuning,ZhangICCV19SelfTrainingWithProgressiveAugmentation,FuICCV19SelfSimilarityGroupingDAPersonReID} and image segmentation \cite{ZouECCV18UnsupervisedDASemSegmClassBalancedSelfTraining,DuICCV19SSFDANSeparatedSemanticFeatureDASemSegm,PanCVPR20UnsupervisedIntraDASemSegmSelfSupervision}.
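The confidence-based scheduling shared by these curricula can be sketched as follows. This is an illustrative NumPy helper; the linear schedule and the minimum fraction are our own choices, not taken from any cited paper:

```python
import numpy as np

def curriculum_subset(conf, epoch, total_epochs, min_frac=0.2):
    """Confidence-based curriculum: start training on the most confident
    fraction of pseudo-labeled target samples and grow the subset
    linearly with the epoch until all samples are included.
    Returns the indices of the retained samples."""
    frac = min_frac + (1.0 - min_frac) * epoch / max(total_epochs - 1, 1)
    k = max(1, int(round(frac * len(conf))))
    return np.argsort(-conf)[:k]   # indices of the k most confident samples
```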
\myparagraph{Conditional entropy minimization}
Widely used to improve the performance of semi-supervised learning, conditional entropy minimization
in the target domain is another way to improve
decision boundaries of the model \cite{CarlucciICCV17AutoDIALDomainAlignmentLayers,SaitoICML17AsymmetricTriTrainingUDA,ShuICLR18ADIRTTApproachUDA,LongNIPS18ConditionalAdversarialDomainAdaptation}.
The Minimax Entropy loss
\cite{SaitoICCV19SemiSupervisedDAMinimaxEntropy} is a variant where an adversarial learning maximizes the conditional entropy of unlabeled target data with respect to the classifier and minimizes it with respect to the feature encoder. Similarly, \cite{VuCVPR19ADVENTAdversarialEntropyMinimizationDASemSegm} proposes an adversarial loss for entropy minimization used to bridge the domain gap between synthetic to real semantic segmentation adaptation.
\cite{RoyCVPR19UDAFeatureWhiteningConsensusLoss} proposes the Min-Entropy Consensus that merges both the entropy and the consistency loss into a single unified function.
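The quantity being minimized in these methods is just the mean Shannon entropy of the target predictions; a minimal sketch:

```python
import numpy as np

def conditional_entropy(probs, eps=1e-12):
    """Mean Shannon entropy of a batch of softmax outputs (rows sum
    to 1). Minimizing this on unlabeled target data pushes the
    predictions away from the decision boundary, sharpening them
    towards one class per sample."""
    p = np.clip(probs, eps, 1.0)   # avoid log(0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))
```

Confident (near one-hot) predictions give an entropy near zero, while uniform predictions attain the maximum $\log C$ for $C$ classes.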
\myparagraph{Self-ensemble learning}
The main idea of self-ensemble learning is
to train the neural network under small perturbations,
such as different augmentations, dropout and various kinds of noise, while forcing the network
to make consistent predictions for the target samples.
In this spirit,
\cite{KurmiBMVC19CurriculumBasedDropoutDiscriminatorDA} proposed a Monte Carlo dropout-based ensemble discriminator
that gradually increases the variance of
the sample-based distribution. \cite{FrenchICLR18SelfEnsemblingForVisualDA} extended the idea of learning with a mean teacher network \cite{TarvainenNIPS17MeanTeachersBetterRoleModelsSSL} to domain adaptation, considering separate
paths for the source and target sets and sampling independent batches, which makes the batch normalization domain-specific during the training process.
\cite{DengICCV19ClusterAlignmentTeacherUDA}
builds a teacher classifier to provide pseudo-labels, which are used by a class-conditional clustering loss that forces features from the same class to concentrate together, and by a conditional feature-matching loss that aligns the clusters from different domains.
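The mean-teacher component underlying several of these methods reduces to an exponential moving average (EMA) of the student weights across training steps; a minimal sketch, using a dictionary of arrays as a stand-in for real network parameters:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher update: the teacher's weights are an exponential
    moving average of the student's weights, yielding a temporally
    ensembled model whose predictions serve as consistency targets."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k]
            for k in teacher}
```

The teacher is never updated by gradient descent; only the student is trained, and a consistency loss ties the student's target predictions to the teacher's.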
\section{Introduction}
\setlength{\parindent}{0cm}
In Chapter 2 of the author's thesis \cite[Ch. 2]{1}, the orthogonal derivative of a function of one variable was defined. Just as that definition used orthogonal polynomials in one variable, we can try to approximate a partial derivative of a function of several variables by using orthogonal polynomials in several variables.
In this paper we will consider the two-variable case. Similarly, we will extend the definition in \cite[Ch. 3]{1} of the fractional orthogonal derivative to the two-variable case.
\\[3mm]
The contents of this paper are as follows.
\\[2mm]
In section 2 we apply the method of \cite[Ch. 2]{1} to the two-variable case. We want to approximate $\frac{\partial^n f(x,y)}{\partial x^{n-k} \partial y^k}$. Only in the trivial case of a direct product of two orthogonality measures in one variable can we use fully orthogonal polynomials in two variables (again direct products) for this purpose. For other orthogonality measures on $\mathbb{R}^2$ we have to work with biorthogonal polynomials. The most obvious regions for the supports of these measures are the triangular region and the disk. We will only treat the triangular region, with
an orthogonality measure already considered by Appell, yielding biorthogonal polynomials expressed in terms of Appell hypergeometric functions.
In section 3 we apply the theory of \cite[Ch. 3]{1} to the two-variable case, by
which we obtain a two-dimensional fractional orthogonal derivative.
In section 4 we briefly treat the trivial case of the fractional Jacobi derivative with weight function on the square region.
In section 5 we treat the fractional biorthogonal derivative with weight function on the triangular region.
In the appendix we give an overview of the functions we need for the derivation of the formulas for the fractional orthogonal derivatives. We treat the $F_1$, $F_2$, $F_3$ Appell functions, the $H_2$ Horn function and the $F_P$, $F_Q$ and the $F_{PR}$ functions of Olsson.
\section{The two-dimensional orthogonal derivatives}
In \cite[Ch. 2]{1} we derived a formula for the orthogonal derivative
\[
f^{(n)}(x)=\dfrac{k_{n}n!}{h_{n}}\lim\limits_{\delta \downarrow 0}\dfrac{1}{\delta ^{n}}
\int\nolimits_{\RR}f(x+\delta u)p_{n}(u)d\mu(u)
\]
where the $p_n$ are orthogonal polynomials. This formula can be considered, for instance, for Jacobi, Laguerre or Hermite polynomials. We can also use discrete orthogonal polynomials, for example the Hahn polynomials.
In the two-variable case we will work with Taylor series, as we did in the one-dimensional case, but now there is more freedom. Write the Taylor expansion of a function $f$ around $(x,y)$ as
\begin{equation}
f(x+\delta u,y+\delta v)
=\sum\limits_{m=0}^n\delta^m\sum\limits_{l=0}^m\dfrac{\partial^m}{\partial x^{m-l}\partial y^{l}}f(x,y)
\dfrac{u^{m-l}v^{l}}{(m-l)!\,l!}+o(\delta^n)
\label{4.2.1}
\end{equation}
where $\delta$ is small. We want to approximate the partial derivative $\frac{\partial^n}{\partial x^{n-k} \partial y^k} f(x,y)$. For this purpose we look for a polynomial $p_{n,k}(u,v)$ of degree $n$ such that, when both sides of \eqref{4.2.1} are multiplied by $p_{n,k}(u,v)$ and next both sides are integrated over $\mathbb{R}^2$ with respect to a suitable orthogonality measure $d\mu(u,v)$, all terms in the double summation except for the term with $(m,l)=(n,k)$ become zero. So first we want
\[
\int\nolimits_{\RR}\int\nolimits_{\RR}p_{n,k}(u,v)u^{m-l}v^{l}d\mu(u,v)=0 \qquad \text{if} \ \ m<n
\]
This forces $p_{n,k}$ to be in the ($n+1$)-dimensional space ${\cal V}_n$ of orthogonal polynomials of degree $n$ with respect to the measure $\mu$. Next we require that
\begin{equation}
\int\nolimits_{\RR}\int\nolimits_{\RR}p_{n,k}(u,v)u^{n-l}v^ld\mu(u,v)=0
\qquad \text{if} \ \ k\neq l
\label{4.2.3}
\end{equation}
This can be equivalently written as
\[
\int_{\mathbb{R}^2} p_{n,k}(u,v) q_{m,l}(u,v) d\mu(u,v)=0
\qquad \text{if} \ \ (m,l)\neq (n,k)
\]
where the polynomials $q_{n,l}$ are elements of ${\cal V}_n$ such that $q_{n,l}(u,v)=u^{n-l}v^l$ + polynomial of degree less than $n$. The polynomials $q_{n,l}$ form the so-called {\em monomial} or {\em monic} basis of ${\cal V}_n$. The above condition on $p_{n,k}\in {\cal V}_n$ now defines the polynomials $p_{n,k}$ uniquely, up to a constant factor, as the basis of ${\cal V}_n$ which is {\em biorthogonal} to the monomial basis. Then we get from \eqref{4.2.1} and the properties of $p_{n,k}$ that
\begin{equation}
\dfrac{\partial^n}{\partial x^{n-k}\partial y^{k}}f(x,y)=
(n-k)!\,k!\lim\limits_{\delta \downarrow 0}\dfrac{1}{\delta^n}
\dfrac{\int\nolimits_{\RR}\int\nolimits_{\RR}f(x+\delta u,y+\delta v)p_{n,k}(u,v)d\mu(u,v)}
{\int\nolimits_{\RR}\int\nolimits_{\RR}p_{n,k}(u,v)u^{n-k}v^k d\mu(u,v)}
\label{4.2.4}
\end{equation}
under some conditions on $f$, for instance that $f$ is a $C^n$ function on some neighbourhood of $(x,y)$ and, if $\mu$ has unbounded support, that $|f(u,v)|$ tends sufficiently fast to $0$ as $(u,v)$ tends to $\infty$. More generally, we call the right-hand side of \eqref{4.2.4} a {\em two-dimensional partial orthogonal derivative} of $f$ at $(x,y)$ if the limit exists. But we will in particular be interested in the numerator on the right-hand side (after the limit sign) for fixed $\delta$. We will only treat the square and the triangular region. For the disk see \cite[Section 5.2.2 with $d=2$]{6}, \cite[Chapter 3]{154}.
\subsection{The two-dimensional Jacobi derivative with weight function on the square region}
An easy example of \eqref{4.2.4} can be obtained by letting $d\mu(x,y)=d\mu_1(x)\,d\mu_2(y)$ and $p_{n,k}(x,y)=r_{n-k}(x)s_k(y)$ with the $r_n$ and $s_n$ one-variable orthogonal polynomials for the orthogonality measures $\mu_1$ and $\mu_2$, respectively. Then the polynomials $p_{n,k}$ are two-variable orthogonal polynomials for the orthogonality measure $\mu$, while also $p_{n,k}(x,y)=C\,x^{n-k}y^k$ + polynomial of degree less than $n$. So the $p_{n,k}$ form the monomial basis; since this basis is also orthogonal, it is in particular biorthogonal to the monomial basis, by which the $p_{n,k}$ are the polynomials desired in \eqref{4.2.4}. Now the denominator after the limit sign on the right-hand side of \eqref{4.2.4} factorizes as the product
\[
\int_{\mathbb{R}} r_{n-k}(u) u^{n-k} d\mu_1(u) \times
\int_{\mathbb{R}} s_k(v) v^k d\mu_2(v)
=\frac{h_{n-k}'}{k_{n-k}'} \frac{h_{k}''}{k_{k}''}
\]
by \cite[(2.2.4)]{1}, where the accented and double-accented $h_n$'s and $k_n$'s are related in the obvious way, following \cite[(2.2.3)]{1}, to the orthogonal polynomials $r_n$ and $s_n$, respectively. Specialization of \eqref{4.2.4} gives:
\begin{equation}
\dfrac{\partial ^{m+n}}{\partial x^{m}\partial y^{n}}f(x,y)=
\dfrac{k_{m}\,m!}{h_{m}}\dfrac{k_{n}\,n!}{h_{n}}\lim\limits_{\delta \downarrow 0}\dfrac{1}{\delta ^{m+n}}\int\nolimits_{-1}^{1}\int\nolimits_{-1}^{1}f(x+\delta u,y+\delta v)\, p_{m}(u)q_{n}(v)\, d\mu_1(u)
\,d\mu_2(v)
\label{4.2.5}
\end{equation}
This is a very trivial case. If the measures $\mu_1$ and $\mu_2$ are supported on the segment $[-1,1]$, then the polynomials $p_{n,k}$ are orthogonal on the square $[-1,1]\times[-1,1]$. For example, take the product of the Jacobi measures $d\mu_1(x)=(1-x)^\alpha(1+x)^\beta dx$ and $d\mu_2(y)=(1-y)^\gamma(1+y)^\delta dy$. Substitution in \eqref{4.2.5} finally gives
\begin{multline*}
\dfrac{\partial ^{n+k}}{\partial x^{n}\partial y^{k}}f(x,y)=\lim\limits_{\delta \rightarrow 0}\dfrac{1}{\delta^{n+k}}
\dfrac{k_{n,\alpha ,\beta }\,n!\ k_{k,\gamma,\delta }\,k!}{h_{n,\alpha ,\beta }\ h_{k,\gamma ,\delta }} \\
\times\int\nolimits_{-1}^{1}\int\nolimits_{-1}^{1}f(x+\delta u,y+\delta v) P_{n}^{(\alpha ,\beta) }
(u)P_{k}^{(\gamma,\delta)}(v)(1-u)^{\alpha }(1+u)^{\beta }(1-v) ^{\gamma }(1+v) ^{\delta}\,du\,dv
\end{multline*}
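As an illustrative numerical check (a sketch, not part of the source), one can evaluate the right-hand side of this formula in Python for the Legendre case $\alpha=\beta=\gamma=\delta=0$, where the constant $k_n\,n!/h_n$ equals $(2n+1)!/(2^{n+1}n!)$; for a polynomial $f$ the limit is in fact attained at any fixed $\delta$:

```python
import math
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.integrate import dblquad

def P(n, x):
    """Legendre polynomial P_n(x)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legval(x, c)

def C(n):
    """Constant k_n n!/h_n for Legendre polynomials: (2n+1)!/(2^(n+1) n!)."""
    return math.factorial(2 * n + 1) / (2 ** (n + 1) * math.factorial(n))

def orth_partial(f, x, y, m, n, delta):
    """Approximate d^{m+n} f / (dx^m dy^n) by the orthogonal-derivative integral."""
    num, _ = dblquad(lambda v, u: f(x + delta * u, y + delta * v) * P(m, u) * P(n, v),
                     -1, 1, -1, 1)
    return C(m) * C(n) * num / delta ** (m + n)

f = lambda x, y: x ** 2 * y ** 2
val = orth_partial(f, 0.3, 0.7, 1, 1, delta=0.5)   # d^2 f/(dx dy) = 4xy = 0.84 here
```

For this quadratic $f$ all higher mixed odd moments vanish, so the result is exact up to quadrature error even at $\delta=0.5$.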
\subsection{The two-dimensional biorthogonal derivative with weight function on the triangular region}
The triangular region of ${\RR}^{2}$ is defined as $T^{2}:=\left\{(x,y) :0\leq x,y \le 1 \wedge x+y\le 1\right\}$,
on which we consider the Jacobi weight function defined as:
\[
W_{\alpha ,\beta ,\gamma }(x,y):=\dfrac{1}{B(\alpha +1,\beta +1,\gamma +1)}x^{\alpha }
y^{\beta}(1-x-y)^{\gamma }\qquad \alpha ,\beta ,\gamma >-1
\]
which is normalized in such a way that its integral over $T^{2}$ is $1$. The function $B(x,y,z)$ is defined as an extension of the Beta function
\[
B(x,y,z):=\dfrac{\Gamma(x)\Gamma(y)\Gamma(z)}{\Gamma(x+y+z)}
\]
It is easy to prove
\[
B(x,y,z)=B(x,y+z)B(y,z)=B(y,x+z)B(x,z)=B(z,x+y)B(x,y)
\]
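The factorization of the extended Beta function is easy to check numerically; the following sketch (illustrative only, using the standard-library gamma function) compares all three factorizations:

```python
from math import gamma

def B2(x, y):
    """Ordinary Beta function B(x, y)."""
    return gamma(x) * gamma(y) / gamma(x + y)

def B3(x, y, z):
    """Extended Beta function B(x,y,z) = Gamma(x)Gamma(y)Gamma(z)/Gamma(x+y+z)."""
    return gamma(x) * gamma(y) * gamma(z) / gamma(x + y + z)

x, y, z = 1.5, 2.25, 0.75
# the three factorizations all agree with B3(x, y, z)
vals = [B2(x, y + z) * B2(y, z),
        B2(y, x + z) * B2(x, z),
        B2(z, x + y) * B2(x, y)]
```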
Let
\[
\left\langle f,g\right\rangle _{\alpha ,\beta ,\gamma}=\int\nolimits_{T^{2}}f(x,y) g(x,y)
W_{\alpha,\beta ,\gamma }(x,y)\;dx\;dy
\]
Classical orthogonal polynomials of one variable all satisfy a Rodrigues formula. To obtain something similar in connection with the triangular region, Appell
considered the following analogue of the Rodrigues formula \cite[Section 2.4]{6}:
\begin{equation}
U_{k,n}^{\alpha ,\beta ,\gamma }(x,y) =
\big[x^\alpha y^\beta (1-x-y)^\gamma \big] ^{-1}\dfrac{\partial ^{n}}
{\partial x^{k}\partial y^{n-k}}\left[ x^{k+\alpha }y^{n-k+\beta }(1-x-y)^{n+\gamma } \right]
\qquad 0\leq k\leq n
\label{4.2.15}
\end{equation}
The set $\{U_{k,n}^{\alpha ,\beta ,\gamma }(x,y):0\leq k\leq n\}$ is a basis of
$\mathcal{V}_n^2 (W_{\alpha,\beta,\gamma})$, where $\mathcal{V}_n^2$ is the space of two-variable orthogonal polynomials of degree $n$. See \cite[Prop.~2.4.3]{6}.
To write $U_{k,n}^{\alpha ,\beta ,\gamma}(x,y)$ as an $F_2$ Appell function we can use \cite[5.13(1)]{28}. There follows:
\begin{multline*}
U_{k,n}^{\alpha ,\beta ,\gamma }(x,y) =(\alpha+1)_k(\beta+1)_{n-k}(1-x-y)^n \\
F_2\left( -\gamma -n;-k,-n+k;\alpha +1,\beta +1;\dfrac{x}{x+y-1},\dfrac{y}{x+y-1}\right)
\end{multline*}
From this expression we see that the $U_{k,n}^{\alpha ,\beta ,\gamma }(x,y)$ are polynomials of degree at most $n$. Using \cite[5.11(8)]{28} we get the much simpler form
\begin{multline}
U_{k,n}^{\alpha ,\beta ,\gamma }(x,y)
=(\alpha+1)_k(\beta+1)_{n-k}(1-x-y)^{-\gamma} \\
F_2(-\gamma -n;\alpha +1+k,\beta +1+n-k;\alpha +1,\beta+1;x,y)
\label{4.2.16}
\end{multline}
This basis is not orthogonal. It can be shown that $\left\langle U_{k,n}^{\alpha ,\beta,\gamma },U_{j,n}^{\alpha ,\beta ,\gamma }\right\rangle _{\alpha ,\beta,\gamma }\neq 0$ for $j\neq k$. If $n\neq m$ then $\left\langle U_{k,n}^{\alpha ,\beta,\gamma },U_{j,m}^{\alpha ,\beta,\gamma }\right\rangle _{\alpha ,\beta,\gamma }= 0$.
Let $V_{m,n}^{\alpha ,\beta ,\gamma }$ be defined with $0\leq m\leq n$ by
\begin{multline}
V_{m,n}^{\alpha ,\beta ,\gamma }(x,y):= \\
=\sum\limits_{i=0}^{m}\sum\limits_{j=0}^{n}(-1) ^{m+n+i+j}\dbinom{m}{i}\dbinom{n}{j}
\dfrac{(\alpha+1)_{m}(\beta +1)_n(\alpha+\beta+\gamma+2) _{m+n+i+j}}
{(\alpha+1)_{i}(\beta+1)_{j}(\alpha+\beta+\gamma+2)_{2m+2n}}x^{i}y^{j}
\end{multline}
This function can be written as an $F_2$ Appell function.
\begin{multline}
V_{m,n}^{\alpha ,\beta ,\gamma }(x,y) =(-1)^{n+m}
\dfrac{(\alpha+1)_{m}(\beta+1)_{n}(\alpha+\beta+\gamma+2) _{m+n}}
{(\alpha+\beta+\gamma+2) _{2n+2m}}\\
F_2(\alpha+\beta+\gamma+2+m+n;-m,-n;\alpha+1,\beta+1;x,y)
\label{4.2.17}
\end{multline}
Then $V_{m,n}$ is the orthogonal projection of $x^{m}y^{n}$ in $L^{2}(T^{2},W_{\alpha,\beta,\gamma})$ onto the space $\mathcal{V}_{m+n}$ of orthogonal polynomials of degree $m+n$ (so the $V_{m,n}$ form a monomial basis), and the two families of polynomials are biorthogonal with
\begin{equation}
\left\langle U_{k,n}^{\alpha ,\beta ,\gamma },V_{j,n-j}^{\alpha,\beta ,\gamma }\right\rangle _{\alpha ,\beta ,\gamma }=(-1)^n
\dfrac{(\alpha+1)_k(\beta+1)_{n-k}(\gamma+1)_n k!\,(n-k)!}
{(\alpha+\beta+\gamma+3)_{2n}}\delta _{j,k}\qquad 0\leq j,k\leq n
\label{4.2.17a}
\end{equation}
We can write this last inner product more generally by replacing the parameter $n-j$ by $m-j$. There follows
\[
\left\langle U_{k,n}^{\alpha ,\beta ,\gamma },V_{j,m-j}^{\alpha ,\beta ,\gamma }\right\rangle _{\alpha ,\beta ,\gamma }=0\qquad m\neq n
\]
The basis of polynomials $V_{m,n}^{\alpha,\beta,\gamma}$ as well as the above biorthogonality was already given by Appell for $\gamma=0$ and by Fackerell \& Littler \cite{71} for the general case. If we compare with the notation for the general case at the beginning of Section 2 then we see that $U_{k,n}^{\alpha,\beta,\gamma}=p_{n,n-k}$ and $V_{l,m-l}^{\alpha,\beta,\gamma}=q_{m,m-l}$. Hence, for the partial orthogonal derivative \eqref{4.2.4} we have to use the $U$-basis in the present case.
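The biorthogonality relation can be verified symbolically for small degree. The following sympy sketch (an illustration; the helper names are ours, and integer parameters are chosen so that all integrals are exact rationals) checks \eqref{4.2.17a} for $n=1$:

```python
import sympy as sp

x, y = sp.symbols('x y')
al, be, ga = 0, 1, 2          # integer Jacobi parameters keep everything exact
n = 1

def U(k, n):
    """Appell's Rodrigues-type polynomial U_{k,n}^{al,be,ga}."""
    inner = x**(k + al) * y**(n - k + be) * (1 - x - y)**(n + ga)
    d = inner
    if k:
        d = sp.diff(d, x, k)
    if n - k:
        d = sp.diff(d, y, n - k)
    return sp.cancel(d / (x**al * y**be * (1 - x - y)**ga))

def V(m, n):
    """The biorthogonal partner V_{m,n}^{al,be,ga} as a double sum."""
    s = al + be + ga + 2
    return sum((-1)**(m + n + i + j) * sp.binomial(m, i) * sp.binomial(n, j)
               * sp.rf(al + 1, m) * sp.rf(be + 1, n) * sp.rf(s, m + n + i + j)
               / (sp.rf(al + 1, i) * sp.rf(be + 1, j) * sp.rf(s, 2 * m + 2 * n))
               * x**i * y**j
               for i in range(m + 1) for j in range(n + 1))

def ip(f, g):
    """Inner product over the triangle with the normalized weight W_{al,be,ga}."""
    norm = sp.gamma(al + 1) * sp.gamma(be + 1) * sp.gamma(ga + 1) / sp.gamma(al + be + ga + 3)
    w = x**al * y**be * (1 - x - y)**ga / norm
    return sp.integrate(sp.integrate(sp.expand(f * g * w), (y, 0, 1 - x)), (x, 0, 1))

def rhs(k, j, n):
    """Right-hand side of the biorthogonality relation."""
    val = ((-1)**n * sp.rf(al + 1, k) * sp.rf(be + 1, n - k) * sp.rf(ga + 1, n)
           * sp.factorial(k) * sp.factorial(n - k) / sp.rf(al + be + ga + 3, 2 * n))
    return val if j == k else 0

checks = [ip(U(k, n), V(j, n - j)) == rhs(k, j, n)
          for k in range(n + 1) for j in range(n + 1)]
```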
\
Using the triangular region $T^2=\left\{(x,y) :0\leq x,y,x+y<1\right\}$ we get for the denominator of \eqref{4.2.4}
\begin{align}
I(m,l) \nonumber
&=\iint\nolimits_{0\leq u,v,u+v<1}U_{k,n}^{\alpha ,\beta ,\gamma }(u,v)
u^{m-l} v^l W_{\alpha,\beta,\gamma}(u,v) du dv= \\ \nonumber
&=\big[B(\alpha +1,\beta +1,\gamma +1)\big]^{-1} \\
&\qquad \qquad \times\int\nolimits_{0}^{1}v^{l}\int\nolimits_{0}^{1-v}u^{m-l}\dfrac{\partial ^{k}}
{\partial u^{k}}\dfrac{\partial^{n-k}}{\partial v^{n-k}}\left[u^{k+\alpha }v^{n-k+\beta }
(1-u-v)^{n+\gamma } \right]\,du\,dv
\label{4.2.4.3b}
\end{align}
By integration by parts with respect to $u$ we see that $I\left( m,l\right)=0$ unless $l+k\leq m$. By integration by parts with respect to $v$ we see that $I(m,l) =0$ unless $n\leq l+k$. Together with $m\leq n$ there follows: $m=n$ and $l=n-k$. Substitution in \eqref{4.2.4.3b} gives
\begin{align}
I(n,n-k)
&B(\alpha +1,\beta +1,\gamma +1)= \nonumber \\
&=\int\nolimits_{0}^{1}v^{n-k}\int\nolimits_{0}^{1-v}u^{k}\dfrac{\partial ^{k}}
{\partial u^{k}}\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[ u^{k+\alpha }v^{n-k+\beta }(1-u-v)^{n+\gamma } \right]\,du\,dv
\label{4.2.4.3c}
\end{align}
Let
\[
I_{1}=\int\nolimits_{0}^{1-v}u^{k}\dfrac{\partial ^{k}}{\partial u^{k}}
\left[\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[ u^{k+\alpha }v^{n-k+\beta }(1-u-v)^{n+\gamma } \right]\right]\,du
\]
After $k$-fold integration by parts there remains
\[
I_{1}=(-1)^k\Gamma(k+1)\int\nolimits_{0}^{1-v}u^{k+\alpha}\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[ v^{n-k+\beta }(1-u-v) ^{n+\gamma }\right] du
\]
Substitution in \eqref{4.2.4.3c} gives
\begin{align}
I(n,n-k)
&\big[(-1)^k\Gamma(k+1)\big]^{-1} B(\alpha +1,\beta +1,\gamma +1)= \nonumber \\
&=\int\nolimits_{0}^{1}v^{n-k}\int\nolimits_{0}^{1-v}u^{k+\alpha}
\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[v^{n-k+\beta }(1-u-v)^{n+\gamma } \right]\,du\,dv \nonumber \\
&=\int\nolimits_{0}^{1}u^{k+\alpha}\int\nolimits_{0}^{1-u}v^{n-k}
\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[v^{n-k+\beta }(1-u-v)^{n+\gamma } \right]\,dv\,du
\label{4.2.4.7}
\end{align}
Let
\[
I_{2}=\int\nolimits_{0}^{1-u}v^{n-k}
\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[v^{n-k+\beta }(1-u-v)^{n+\gamma } \right]\,dv
\]
After $(n-k)$-fold integration by parts we get
\[
I_{2}=(-1)^{n-k}\Gamma(n-k+1)\int\nolimits_{0}^{1-u}v^{n-k+\beta}(1-u-v)^{n+\gamma}dv
\]
The integral is a Beta function.
\[
\int\nolimits_{0}^{1-u}v^{n-k+\beta}(1-u-v)^{n+\gamma}dv=
(1-u)^{2n-k+\beta +\gamma +1}B(n-k+\beta +1,n+\gamma +1)
\]
with convergence conditions: $\beta,\gamma>-1$. So for $I_2$ we get
\[
I_{2}=(-1)^{n-k}\Gamma(n-k+1)(1-u)^{2n-k+\beta +\gamma +1}B(n-k+\beta +1,n+\gamma +1)
\]
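The Beta-integral evaluation used here can be checked numerically. In the sketch below (illustrative only) $b$ and $c$ stand for $n-k+\beta+1$ and $n+\gamma+1$, so that the exponent of $(1-u)$ is $b+c-1$:

```python
from math import gamma
from scipy.integrate import quad

def B2(p, q):
    """Ordinary Beta function."""
    return gamma(p) * gamma(q) / gamma(p + q)

# check: int_0^{1-u} v^{b-1} (1-u-v)^{c-1} dv = (1-u)^{b+c-1} B(b, c)
u, b, c = 0.3, 2.5, 1.5
lhs, _ = quad(lambda v: v**(b - 1) * (1 - u - v)**(c - 1), 0, 1 - u)
rhs = (1 - u)**(b + c - 1) * B2(b, c)
```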
Substitution in \eqref{4.2.4.7} gives
\begin{multline*}
I(n,n-k)\big[(-1)^n\Gamma(k+1)\Gamma(n-k+1)\big]^{-1} B(\alpha +1,\beta +1,\gamma +1)= \\
=B(n-k+\beta +1,n+\gamma +1)
\int\nolimits_{0}^{1}u^{k+\alpha}(1-u)^{2n-k+\beta +\gamma +1}du
\end{multline*}
The integral is again a Beta function. So we get
\begin{align*}
I(n,n-k)\big[(-1)^n
&\Gamma(k+1)\Gamma(n-k+1)\big]^{-1}B(\alpha +1,\beta +1,\gamma +1)= \\
&=B(n-k+\beta +1,n+\gamma +1)B(k+\alpha+1,2n-k+\beta+\gamma+2) \\
&=B(k+\alpha +1,n-k+\beta+1,n+\gamma +1)
\end{align*}
with convergence conditions: $\Re{\alpha},\Re{\beta},\Re{\gamma}>-1$. After some manipulations we get
\[
I(n,n-k)=(-1)^n\dfrac{(\alpha+1)_k(\beta+1)_{n-k}(\gamma+1)_n k!\,(n-k)!}
{(\alpha+\beta+\gamma+3)_{2n}}
\]
and this is equal to the constant in \eqref{4.2.17a}. This also follows from the fact that
\begin{align*}
I(n,n-k)
&=\left\langle U_{k,n},\,u^k\,v^{n-k}\right\rangle \\
&=\left\langle U_{k,n},\,V_{k,n-k}+\text{polynomial of degree less than $n$}\right\rangle \\
&=\left\langle U_{k,n},\,V_{k,n-k}\right\rangle +\left\langle U_{k,n},\text{polynomial of degree less than $n$}\right\rangle \\
&=\left\langle U_{k,n},\,V_{k,n-k}\right\rangle
\end{align*}
Substitution in \eqref{4.2.4} gives
\begin{multline}
\dfrac{\partial ^{n}}{\partial x^{k}\partial y^{n-k}}f(x,y) =
\dfrac{(-1)^n}{B(\alpha+1+k,\beta+1+n-k,\gamma+1+n)} \\
\lim\limits_{\delta \downarrow 0}\dfrac{1}{\delta ^{n}}\iint\nolimits_{0\leq u,v,u+v<1}
f(x+\delta u,y+\delta v)
U_{k,n}^{\alpha ,\beta ,\gamma }(u,v) u^{\alpha }v^{\beta}(1-u-v) ^{\gamma }\,dv\,du
\label{4.6.14a}
\end{multline}
with convergence conditions: $\alpha,\beta,\gamma>-1$. Using \eqref{4.2.16} gives
\begin{multline*}
\dfrac{\partial ^{n}}{\partial x^{k}\partial y^{n-k}}f(x,y) =
\dfrac{(-1)^n\Gamma(\alpha+\beta+\gamma+3+2n)}{\Gamma(\alpha+1)\Gamma(\beta+1)
\Gamma(\gamma+1+n)} \\
\lim\limits_{\delta \downarrow 0}\dfrac{1}{\delta ^{n}}
\iint\nolimits_{0\leq u,v,u+v<1}f(x+\delta u,y+\delta v)
F_2\left(
\begin{array}{c}
-\gamma -n;\alpha +1+k,\beta +1+n-k \\
\alpha +1,\beta+1%
\end{array}%
;
u,v\right)
u^{\alpha }v^{\beta}\,dv\,du
\end{multline*}
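As a numerical illustration of this derivative formula (a sketch of ours, not from the source): for $\alpha=\beta=\gamma=0$, $n=k=1$ the Rodrigues formula gives $U_{1,1}^{0,0,0}(u,v)=1-2u-v$ and $B(2,1,2)=1/24$, and a small $\delta$ recovers $\partial f/\partial x$:

```python
from math import gamma
from scipy.integrate import dblquad

# U_{1,1}^{0,0,0}(u,v) = (d/du)[u v (1-u-v)] / v = 1 - 2u - v  (Rodrigues formula)
f = lambda x, y: x**2 * y
x0, y0, delta = 0.4, 0.5, 1e-3

num, _ = dblquad(lambda v, u: f(x0 + delta * u, y0 + delta * v) * (1 - 2 * u - v),
                 0, 1, 0, lambda u: 1 - u)          # integrate over the triangle T^2

B3 = gamma(2) * gamma(1) * gamma(2) / gamma(5)      # B(2,1,2) = 1/24
approx = -num / (B3 * delta)                        # (-1)^1 / B(2,1,2) * (1/delta) * integral
# approx ~ df/dx at (x0, y0), i.e. 2*x0*y0
```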
\section{The two-dimensional fractional orthogonal derivative}
We have seen that for the orthogonal derivative in two dimensions there are several choices of region and orthogonality measure for which explicit results can be obtained. For the fractional derivative in two dimensions we have the same possibilities. We first treat the trivial case of the fractional orthogonal derivative with weight function on the square, and then the fractional biorthogonal derivative with weight function on the triangular region.
Analogous to the Weyl fractional integral in one dimension we define the Weyl fractional integral in two dimensions as:
\[
W^{-\mu ,-\nu }[f](x,y):=\dfrac{1}{\Gamma(\mu)\Gamma(\nu) }
\int\nolimits_{x}^{\infty}\int\nolimits_{y}^{\infty }f(U,V)(U-x)^{\mu -1}(V-y)^{\nu -1}dVdU
\]
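This definition can be tested numerically on $f(U,V)=e^{-U-V}$, for which each one-dimensional Weyl integral reproduces the exponential, so that $W^{-\mu,-\nu}[f](x,y)=e^{-x-y}$. The sketch below (illustrative only, with a finite cutoff replacing the infinite upper limits) is one way to do this:

```python
from math import gamma, exp
from scipy.integrate import dblquad

def weyl(f, x, y, mu, nu, cutoff=40.0):
    """Two-dimensional Weyl fractional integral W^{-mu,-nu}[f](x, y),
    with the infinite upper limits truncated at x+cutoff, y+cutoff."""
    val, _ = dblquad(
        lambda V, U: f(U, V) * (U - x)**(mu - 1) * (V - y)**(nu - 1),
        x, x + cutoff, lambda U: y, lambda U: y + cutoff)
    return val / (gamma(mu) * gamma(nu))

res = weyl(lambda U, V: exp(-U - V), 0.5, 1.0, mu=1.5, nu=2.5)
# res ~ exp(-(0.5 + 1.0))
```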
The fractional partial derivative will be given by
\begin{multline}
W^{\mu,\nu}[f](x,y)=(-1)^m\dfrac{\partial^m}{\partial x^{m-l}\partial y^l}
W^{\mu -m+l,\,\nu -l}[f](x,y) \\
=\dfrac{(-1)^m}{\Gamma(m-l-\mu)\Gamma(l-\nu)}
\dfrac{\partial ^m}{\partial x^{m-l}\partial y^l}
\left[\int\nolimits_{x}^{\infty}\int\nolimits_{y}^{\infty }f(U,V)(U-x)^{m-l-\mu -1}
(V-y)^{l-\nu -1}dVdU\right]
\label{4.3.1.4}
\end{multline}
Analogous to \cite[(3.3.1)]{1} and in view of \eqref{4.2.4} we can write
\[
W^{\mu ,\nu }[f](x,y) =\lim\limits_{\delta \downarrow 0}W^{\mu,\nu,m,l}_\delta [f](x,y)
\]
where
\begin{align}
W^{\mu,\nu,m,l}_\delta [f](x,y):&=(-1)^mD^{m,l}_\delta \big[ W^{\mu-m+l,\nu-l}[f]\big](x,y) \nonumber \\
&=(-1)^mW^{\mu-m+l,\nu-l}\big[ D^{m,l}_\delta [f]\big](x,y)
\label{6.4.3}
\end{align}
with
\begin{equation}
D^{m,l}_\delta [g](x,y):=\dfrac{(m-l)!\,l!}{\delta^m}\dfrac{\int\nolimits_{\RR}\int\nolimits_{\RR}g(x+\delta u,y+\delta v)p_{m,l}(u,v)d\mu(u,v)}
{\int\nolimits_{\RR}\int\nolimits_{\RR}p_{m,l}(u,v)u^{m-l}v^l d\mu(u,v)}
\label{6.4.4}
\end{equation}
After application of \eqref{6.4.3} and \eqref{6.4.4} to \eqref{4.3.1.4} we obtain
\begin{multline*}
W^{\mu,\nu,m,l}_\delta [f](x,y)=\dfrac{(-1)^m}{\Gamma(m-l-\mu)\Gamma(l-\nu)}
\dfrac{(m-l)!\,l!}{\int\nolimits_{\RR}\int\nolimits_{\RR}p_{m,l}(u,v)u^{m-l}v^l d\mu(u,v)}
\dfrac{1}{\delta^m} \\
\times\int\nolimits_{\RR}\int\nolimits_{\RR}\left[\int\nolimits_{x+\delta u}^{\infty}
\int\nolimits_{y+\delta v}^{\infty }f(U,V)(U-x-\delta u)^{m-l-\mu -1}(V-y-\delta v)^{l-\nu -1}dVdU \right] \\
\times p_{m,l}(u,v) d\mu(u,v)
\end{multline*}
With $U=x+\delta s$ and $V=y+\delta t$ there follows
\begin{multline*}
W^{\mu,\nu,m,l}_\delta [f](x,y)=\dfrac{(-1)^m}{\Gamma(m-l-\mu)\Gamma(l-\nu)}
\dfrac{(m-l)!\,l!}{\int\nolimits_{\RR}\int\nolimits_{\RR}p_{m,l}(u,v)u^{m-l}v^l d\mu(u,v)}
\dfrac{1}{\delta^{\mu+\nu}} \\
\times\int\nolimits_{\RR}\int\nolimits_{\RR}\left[\int\nolimits_{u}^{\infty}\int\nolimits_{v}^{\infty }f(x+\delta s,y+\delta t)(s-u)^{m-l-\mu -1}(t-v)^{l-\nu -1}dtds \right]
p_{m,l}(u,v) d\mu(u,v)
\end{multline*}
Interchange of the inner double integral with the outer double integral gives
\begin{multline}
W^{\mu,\nu,m,l}_\delta [f](x,y)=\dfrac{(-1)^m}{\Gamma(m-l-\mu)\Gamma(l-\nu)}
\dfrac{(m-l)!\,l!}
{\int\nolimits_{\RR}\int\nolimits_{\RR}p_{m,l}(u,v)u^{m-l}v^l d\mu(u,v)}
\dfrac{1}{\delta^{\mu+\nu}} \\
\times\int\nolimits_{\RR}\int\nolimits_{\RR}f(x+\delta s,y+\delta t)
\left[\int\nolimits_{u=-\infty}^s\int\nolimits_{v=-\infty}^t (s-u)^{m-l-\mu-1}(t-v)^{l-\nu-1} p_{m,l} (u,v)d\mu(u,v)\right]dsdt
\label{6.4.5}
\end{multline}
In the next two sections we will write \eqref{6.4.5} in more explicit form for the cases of the square and the triangular region.
\section{The two-dimensional fractional Jacobi derivative with weight function on the square}
Let $d\mu(x,y)=d\mu_1(x)d\mu_2(y)$ and $p_{n,k}(x,y)=r_{n-k}(x)s_k(y)$ as in the beginning of section 2.1. Then by \eqref{4.2.5} formula \eqref{6.4.5} takes the form
\begin{multline*}
W^{\mu,\nu,m,l}_\delta [f](x,y)=\dfrac{(-1)^m(m-l)!\ l!\ k'_{m-l}\ k''_l}{\Gamma(m-l-\mu)\Gamma(l-\nu)h'_{m-l}\ h''_l}
\dfrac{1}{\delta^{\mu+\nu}}
\int\nolimits_{\RR}\int\nolimits_{\RR}f(x+\delta s,y+\delta t) \\
\times\left[\int\nolimits_{u=-\infty}^s(s-u)^{m-l-\mu-1}r_{m-l}(u)d\mu_1(u)\right]
\left[\int\nolimits_{v=-\infty}^t(t-v)^{l-\nu-1}s_l(v)d\mu_2(v)\right]dsdt
\end{multline*}
Now specialize to $d\mu_1(x)=(1-x)^\alpha(1+x)^\beta dx$ on $[-1,1]$ for $r_n(x)=P^{(\alpha,\beta)}_n(x)$ and $d\mu_2(y)=(1-y)^\gamma(1+y)^\delta\, dy$ on $[-1,1]$ for $s_n(y)=P^{(\gamma,\delta)}_n(y)$. Then we get for the fractional Jacobi derivative
\begin{multline}
W^{\mu,\nu,m,l}_\delta [f](x,y)
=\dfrac{(-1)^m(m-l)!\ l!\ k'_{m-l}\ k''_l}{\Gamma(m-l-\mu)\Gamma(l-\nu)\ \delta^{\mu+\nu}\ h'_{m-l}\ h''_l} \\
\times \int\nolimits_{-1}^1\int\nolimits_{-1}^1f(x+\delta s,y+\delta t)
\left[\int\nolimits_{-\infty}^s(s-u)^{m-l-\mu-1}P^{(\alpha,\beta)}_{m-l}(u)(1-u)^\alpha(1+u)^\beta du\right] \\
\times \left[\int\nolimits_{-\infty}^t(t-v)^{l-\nu-1}P^{(\gamma,\delta)}_l(v)(1-v)^\gamma(1+v)^\delta\ dv\right] dsdt \qquad
\label{4.3.1.10}
\end{multline}
We now look at the two-dimensional analogue of \cite[(3.3.6)]{1}. There we had transformed the 2-fold integral into a sum of two single integrals, where the integrands are a product of the function $f(x+\delta s)$ with a factor depending on a convolution integral of an orthogonal polynomial. In the two-dimensional case we can transform the $4$-fold integral in such a manner that there results a sum of four $2$-fold integrals with a product of the function $f(x+\delta s,y+\delta t)$ and a factor depending on the $2$-fold convolution integral of the orthogonal polynomials. In the special case of a product of Jacobi polynomials we can use the same computation as in the one-dimensional case. We give here only the result.
\begin{align*}
W^{\mu ,\nu ,m+n,n}_\delta[f](x,y)
&\left[(-1)^{m+n}\dfrac{k_{m}m!}{h_{m}}\dfrac{k_{n}n!}{h_{n}}\dfrac{1}{\delta^{\mu +\nu }}
\dfrac{1}{\Gamma(m-\mu)\Gamma(n-\nu) }\right] ^{-1}= \\
&=\int\nolimits_{-1}^{1}\int\nolimits_{-1}^{1}f(x+\delta s,y+\delta t)J_{1}(s,m,\alpha ,\beta ,\mu )
J_{1}(t,n,\gamma ,\delta ,\nu ) \,dt\,ds+ \\
&+\int\nolimits_{1}^{\infty}\int\nolimits_{-1}^{1}f(x+\delta s,y+\delta t)J_{2}(s,m,\alpha ,\beta ,\mu ) J_{1}(t,n,\gamma ,\delta ,\nu )\,dt\,ds+ \\
&+\int\nolimits_{-1}^{1}\int\nolimits_{1}^{\infty}f(x+\delta s,y+\delta t)J_{1}(s,m,\alpha ,\beta ,\mu ) J_{2}(t,n,\gamma ,\delta ,\nu ) \,dt\,ds+ \\
&+\int\nolimits_{1}^{\infty}\int\nolimits_{1}^{\infty}f(x+\delta s,y+\delta t)
J_{2}(s,m,\alpha ,\beta ,\mu )J_{2}(t,n,\gamma ,\delta ,\nu ) \,dt\,ds
\end{align*}
The integrals $J_{1}$ and $J_{2}$ are known from \cite[(3.5.9) and (3.5.10)]{1}. We obtain
\begin{multline*}
J_{1}(\xi ,\lambda ,\alpha ,\beta ,\nu) =(-1) ^{\lambda }\dfrac{\Gamma(\lambda +\beta +1)
\Gamma(\lambda-\nu)}{2^{\lambda-\nu }\lambda !\,\Gamma(\lambda-\nu+\beta+1)}
(1-\xi)^{\lambda+\alpha-\nu}(1+\xi)^{\lambda +\beta -\nu } \\
\hyp21{-\nu ,2\lambda -\nu +\alpha +\beta +1}{\lambda -\nu+\beta +1}{\dfrac{1+\xi }{2}}
\end{multline*}
\begin{multline*}
J_{2}(\xi ,\lambda ,\alpha ,\beta ,\nu) =(-1) ^{\lambda }\dfrac{2^{\lambda +\alpha +\beta +1}}
{(\xi+1)^{\nu +1}}\dfrac{\Gamma(\lambda -\nu)}{\Gamma(-\nu) \lambda !}
\dfrac{\Gamma(\lambda +\alpha +1)\Gamma(\lambda +\beta +1)}{\Gamma(2\lambda +\alpha+\beta +2) } \\
\hyp21{\nu +1,\lambda +\beta +1}{2\lambda +\alpha +\beta +2}{\dfrac{2}{\xi +1}}
\end{multline*}
\section{The fractional biorthogonal derivative with weight function on the triangular region}
To develop a formula for the fractional biorthogonal derivative on the triangular region, we use \eqref{6.4.5} for the triangular region $T^2$.
\begin{multline}
W^{\mu,\nu,n,n-k}_\delta [f](x,y)\left[ \dfrac{h_{1}(\alpha ,\beta ,\gamma,k,n)}
{\Gamma(k-\mu)\Gamma(n-k-\nu)}\dfrac{1}{\delta ^{\mu+\nu}}\right] ^{-1}=
\int\nolimits_{\RR}\int\nolimits_{\RR}f(x+\delta s,y+\delta t) \\
\int\nolimits_{\RR}\int\nolimits_{\RR}f(x+\delta s,y+\delta t) \\
\times\left[\iint\nolimits_{T^2 \cap \big( (-\infty,s]\times(-\infty,t] \big)} (s-u)^{k-\mu-1}(t-v)^{n-k-\nu-1} U^{\alpha,\beta,\gamma}_{k,n}(u,v)u^\alpha v^\beta (1-u-v)^\gamma \,du\,dv\right]ds\,dt
\label{4.9.4a}
\end{multline}
with
\begin{equation}
h_{1}(\alpha ,\beta ,\gamma ,k,n)=\dfrac{1}{B(\alpha+1+k,\beta+1+n-k,\gamma+1+n)}
\label{4.9.4}
\end{equation}
We see that the integration domain $T^2\cap((-\infty,s]\times(-\infty,t])$ of the inner double integral in \eqref{4.9.4a} depends on the position of the point $(s,t)$ with respect to the triangle $T^2$. Clearly $T^2\cap((-\infty,s]\times(-\infty,t])$ is empty if $(s,t)$ is outside the
first quadrant, while the first quadrant in the $(s,t)$ plane can be split up
as the union of five regions with disjoint interiors as in Figure \ref{Figure 6.1a} such that the corresponding regions $T^2\cap((-\infty,s]\times(-\infty,t])$ inside $T^2$ in the
$(u,v)$ plane are given by Figure \ref{Figure 6.1}.
\begin{figure}[ht]
\centering
\includegraphics[height=4cm]{plot1extra}
\caption{The five possible regions of the position of the point $(s,t)$.}
\label{Figure 6.1a}
\end{figure}
\begin{figure}[ht]
\includegraphics[height=3.5cm,width=16.8cm]{plot1}
\caption{The five possibilities of the position of the point $(s,t)$ with respect to the $u,v$ axes.}
\label{Figure 6.1}
\end{figure}
$\text{I. }$ The point $(s,t)$ lies inside the triangle with $0<s<1,0<t<1-s$. The appropriate part of the integral for $W$ becomes
\[
I=\int_{s=0}^{1}\int_{t=0}^{1-s}\left(\int_{v=0}^{t}\int_{u=0}^{s}(.)\,du\,dv\right) \,dt\,ds
\]
$\text{II. }$ The point $(s,t)$ lies outside the triangle with
$1<s<\infty ,1<t<\infty$. The appropriate part of the integral for $W$ becomes
\[
I=\int_{s=1}^{\infty }\int_{t=1}^{\infty }\left(\int_{v=0}^{1}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds
\]
$\text{III. }$ The point $(s,t)$ lies outside the triangle with
$0<s<1<t<\infty$. The appropriate part of the integral for $W$ becomes
\[
I=\int_{t=1}^{\infty }\int_{s=0}^{1}\left(\int_{u=0}^{s}\int_{v=0}^{1-u}(.) \,dv\,du\right) \,ds\,dt
\]
$\text{IV. }$ The point $(s,t)$ lies outside the triangle with $0<t<1<s<\infty$. The appropriate part of the integral for $W$ becomes
\[
I=\int_{s=1}^{\infty }\int_{t=0}^{1}\left(\int_{v=0}^{t}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds
\]
$\text{V. }$ The point $(s,t)$ lies outside the triangle with
$0<s<1,\ 1-s<t<1$. We divide the region of integration in two parts. The appropriate part of the integral for $W$ becomes
\[
I=\int_{s=0}^{1}\int_{t=1-s}^{1}\left(\int_{v=0}^{1-s}\int_{u=0}^{s}(.)\,du\,dv\right)\,dt\,ds
+\int_{s=0}^{1}\int_{t=1-s}^{1}\left( \int_{v=1-s}^{t}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds
\]
For the fractional derivative we get (the Roman numerals refer to the cases in Figure \ref{Figure 6.1})
\begin{align}
W^{\mu,\nu,n,n-k}_\delta
&[f](x,y)\left[ \dfrac{ h_{1}(\alpha ,\beta ,\gamma ,k,n) }{\Gamma(k-\mu)\Gamma(n-k-\nu) }
\dfrac{1}{\delta ^{\nu +\mu }}\right] ^{-1}= \nonumber \\
&=\int_{s=0}^{1}\int_{t=0}^{1-s}f(x+\delta s,y+\delta t)
\left( \int_{v=0}^{t}\int_{u=0}^{s}\text{ \ }(.)\,du\,dv\right) \,dt\,ds+\qquad \qquad \quad\ \text{(I)} \nonumber \\
&+\int_{s=1}^{\infty }\int_{t=1}^{\infty }f(x+\delta s,y+\delta t)
\left(\int_{v=0}^{1}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds+\qquad \qquad \qquad\ \text{(II)} \nonumber \\
&+\int_{t=1}^{\infty }\int_{s=0}^{1}\text{ }f(x+\delta s,y+\delta t) \left(\int_{u=0}^{s}\int_{v=0}^{1-u}(.)\,dv\,du\right) \,ds\,dt+\qquad \qquad \quad\ \ \text{(III)} \nonumber \\
&+\int_{s=1}^{\infty }\int_{t=0}^{1}f(x+\delta s,y+\delta t)
\left(\int_{v=0}^{t}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds+\qquad \qquad \qquad \text{ (IV)} \nonumber \\
&+\int_{s=0}^{1}\int_{t=1-s}^{1}f(x+\delta s,y+\delta t)
\left( \int_{v=0}^{1-s}\int_{u=0}^{s}(.)\,du\,dv\right) \,dt\,ds+\qquad \qquad\quad\ \text{ (V)} \nonumber \\ \nonumber
&\quad\quad\quad +\int_{s=0}^{1}\int_{t=1-s}^{1}f(x+\delta s,y+\delta t) \left(\int_{v=1-s}^{t}\int_{u=0}^{1-v}(.)\,du\,dv\right) \,dt\,ds \\
\label{4.9.11}
\end{align}
The abbreviation $(.)$ stands for
\begin{equation}
(.)=(s-u)^{k-\mu -1}(t-v)^{n-k-\nu-1}U_{k,n}^{\alpha ,\beta ,\gamma }(u,v) u^{\alpha}v^{\beta }(1-u-v) ^{\gamma }
\label{4.9.12b}
\end{equation}
For $U_{k,n}^{\alpha ,\beta ,\gamma}(u,v) $ we repeat formulas \eqref{4.2.15} and \eqref{4.2.16}:
\begin{equation}
U_{k,n}^{\alpha ,\beta ,\gamma }(u,v) =\dfrac{1}{u^{\alpha }v^{\beta }(1-u-v) ^{\gamma }}
\dfrac{\partial ^{n}}{\partial u^{k}\partial v^{n-k}}\left[ u^{k+\alpha }v^{n-k+\beta }
(1-u-v)^{n+\gamma } \right]
\label{4.9.12a}
\end{equation}
\begin{multline*}
\qquad\qquad \ U_{k,n}^{\alpha ,\beta ,\gamma }(x,y)
=(\alpha+1)_k(\beta+1)_{n-k}(1-x-y)^{-\gamma} \\
F_2(-\gamma -n;\alpha +1+k,\beta +1+n-k;\alpha +1,\beta+1;x,y)
\end{multline*}
for $0\leq k\leq n$. We can use both formulas to compute the fractional derivatives. The results will of course be the same, but \eqref{4.9.12a} leads to some nicer intermediate results, so we continue with \eqref{4.9.12a}.
\
For the integrals inside the brackets in \eqref{4.9.11} we can use \eqref{4.9.12b} and \eqref{4.9.12a} and write in general
\begin{multline*}
\iint_{V}(.)\,du\,dv= \\
=\iint_{V}\dfrac{\partial ^{k}}{\partial u^{k}}\dfrac{\partial ^{n-k}}
{\partial v^{n-k}}\left[ u^{k+\alpha}v^{n-k+\beta }(1-u-v)^{n+\gamma } \right](s-u)^{k-\mu -1}
(t-v)^{n-k-\nu-1}\,du\,dv
\end{multline*}
After $k$-fold integration by parts with respect to the variable $u$ we get
\begin{multline*}
\iint_{V}(.)\,du\,dv\left[(-1)^{k}\dfrac{\Gamma(k-\mu)}{\Gamma(-\mu)}\right]^{-1}= \\
=\iint\nolimits_{V}(s-u) ^{-\mu -1}\dfrac{\partial ^{n-k}}{\partial v^{n-k}}\left[ u^{k+\alpha }v^{n-k+\beta }(1-u-v)^{n+\gamma } \right](t-v)^{n-k-\nu -1}\,du\,dv
\end{multline*}
After $(n-k) $-fold integration by parts with respect to the variable $v$ we get
\begin{multline*}
\iint_{V}(.) \,du\,dv\left[ (-1) ^{n}\dfrac{\Gamma(k-\mu)}{\Gamma(-\mu) }
\dfrac{\Gamma(n-k-\nu)}{\Gamma(-\nu)}\right] ^{-1}= \\
=\iint\nolimits_{V}u^{k+\alpha }(s-u) ^{-\mu -1}(1-u-v) ^{n+\gamma }v^{n-k+\beta }
(t-v) ^{-\nu -1}\,du\,dv
\end{multline*}
Combination of \eqref{4.9.4} and \eqref{4.9.11} gives
\begin{align*}
W^{\mu,\nu,n,n-k}_\delta[f](x,y)
&\left[ \dfrac{1}{\Gamma(-\mu) \Gamma(-\nu)}
\dfrac{1}{B(k+\alpha+1,n+\gamma+1,n-k+\beta +1)}\dfrac{1}{\delta ^{\nu +\mu }}\right] ^{-1}= \\
&=\int_{0}^{1}\int_{0}^{1-s} f(x+\delta s,y+\delta t) I_{1}(s,t) \,dt\,ds+\qquad \qquad \text{ \ \ \ \ (I)} \nonumber \\
&+\int_{1}^{\infty }\int_{1}^{\infty } f(x+\delta s,y+\delta t) I_{2}(s,t) \,dt\,ds+\qquad \qquad\qquad \text{(II)} \\
&+\int_{1}^{\infty }\int_{0}^{1} f(x+\delta s,y+\delta t) I_{3}(s,t) \,ds\,dt+\qquad \qquad \qquad \text{(III)} \\
&+\int_{1}^{\infty }\int_{0}^{1}\text{ } f(x+\delta s,y+\delta t) I_{4}(s,t) \,dt\,ds+\qquad \qquad \qquad \text{(IV)} \\
&+\int_{0}^{1}\int_{1-s}^{1}\text{ } f(x+\delta s,y+\delta t) I_{5}(s,t) \,dt\,ds+ \qquad \qquad \qquad \text{(V)} \\
&\qquad \qquad \qquad +\int_{0}^{1}\int_{1-s}^{1} f(x+\delta s,y+\delta t) I_{6}(s,t)\,dt\,ds
\label{4.9.16}
\end{align*}
Putting $a=k+\alpha +1$, $b=n-k+\beta+1$, $c=-\mu$, $d=-\nu$ and $e=-n-\gamma$ we get for the functions $I_{1,\dots,6}(s,t)$
\begin{equation}
I_{1}(s,t)=\int_{0}^{t}\left(\int_{0}^{s}u^{a-1}(s-u)^{c-1}(1-u-v)^{-e}du\right)v^{b-1}(t-v) ^{d-1}dv
\label{4.9.17a}
\end{equation}
with region for $s$ and $t$:$\qquad 0<s<1,0<t<1-s$
\begin{equation}
I_{2}(s,t)=\int_{0}^{1}\left(\int_{0}^{1-v}u^{a-1}(s-u)^{c-1}(1-u-v)^{-e}du\right) v^{b-1}(t-v)^{d-1}dv
\label{4.9.17c}
\end{equation}
with region for $s$ and $t$:$\qquad 1<s<\infty ,1<t<\infty$
\begin{equation}
I_{3}(s,t) =\int_{0}^{s}\left( \int_{0}^{1-u}v^{b-1}(t-v)^{d-1}(1-u-v)^{-e}dv \right) u^{a-1}(s-u)^{c-1}du
\label{4.9.17b}
\end{equation}
with region for $s$ and $t$:$\qquad 0<s<1<t<\infty$
\begin{equation}
I_{4}(s,t)=\int_{0}^{t}\left(\int_{0}^{1-v}u^{a-1}(s-u)^{c-1}(1-u-v)^{-e}du\right) v^{b-1}(t-v)^{d-1}dv
\label{4.9.17d}
\end{equation}
with region for $s$ and $t$:$\qquad 0<t<1<s<\infty$
\begin{equation}
I_{5}(s,t)=\int_{0}^{1-s}\left(\int_{0}^{s}u^{a-1}(s-u)^{c-1}(1-u-v)^{-e}du\right) v^{b-1}(t-v)^{d-1}dv
\label{4.9.17e}
\end{equation}
with region for $s$ and $t$:$\qquad 0<s<1,\ 1-s<t<1$
\begin{equation}
I_{6}(s,t)=\int_{1-s}^{t}\left(\int_{0}^{1-v}u^{a-1}(s-u)^{c-1}(1-u-v)^{-e}du\right) v^{b-1}(t-v) ^{d-1}dv
\label{4.9.17f}
\end{equation}
with region for $s$ and $t$:$\qquad 0<s<1,\ 1-s<t<1$
We will treat the evaluation of the integrals $I_{1,\dots, 6}$ in the next subsections.
\subsection{The integral $I_1(s,t)$}
For the computation of the integral $I_{1}(s,t)$ we write the integral \eqref{4.9.17a} as
\begin{equation}
I_{1}(s,t)=s^{a+c-1}t^{b+d-1}\int_{0}^{1}\int_{0}^{1}u^{a-1}v^{b-1}(1-u)^{c-1}(1-v)^{d-1}( 1-su-tv) ^{-e}\,du\,dv
\label{4.9.19}
\end{equation}
The integrals are convergent if $0<\Re(a)<\Re(a+c)$ and $0<\Re(b)<\Re(b+d)$. Formula \eqref{4.9.19} is the integral representation for the $F_2$ Appell function \cite[5.8.1.(2)]{28}. We get
\[
I_{1}(s,t) =B(a,c)B(b,d)s^{a+c-1}t^{b+d-1}F_2( e;a,b;a+c,b+d;s,t)
\]
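This evaluation can be checked numerically by truncating the double power series of $F_2$. The sketch below (illustrative only, with parameter values chosen inside the convergence region $|s|+|t|<1$) compares series and integral:

```python
from math import gamma, factorial
from scipy.integrate import dblquad

def poch(q, m):
    """Pochhammer symbol (q)_m."""
    return gamma(q + m) / gamma(q)

def F2(e, a, b, c1, c2, x, y, N=60):
    """Truncated double series for the Appell F2 function (|x| + |y| < 1)."""
    return sum(poch(e, m + n) * poch(a, m) * poch(b, n)
               / (poch(c1, m) * poch(c2, n) * factorial(m) * factorial(n))
               * x**m * y**n
               for m in range(N) for n in range(N - m))

a, b, c, d, e = 1.2, 1.3, 1.5, 1.1, 0.7
s, t = 0.3, 0.4
lhs, _ = dblquad(lambda v, u: u**(a - 1) * v**(b - 1) * (1 - u)**(c - 1)
                              * (1 - v)**(d - 1) * (1 - s * u - t * v)**(-e),
                 0, 1, 0, 1)
B2 = lambda p, q: gamma(p) * gamma(q) / gamma(p + q)
rhs = B2(a, c) * B2(b, d) * F2(e, a, b, a + c, b + d, s, t)
```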
\subsection{The integral $I_2(s,t)$}
For the computation of the integral $I_{2}(s,t)$ we write the integral \eqref{4.9.17c} as
\begin{equation}
I_{2}(s,t)=s^{c-1}t^{d-1}\int_{0}^{1}\int_{0}^{1-v}u^{a-1}v^{b-1}(1-u-v)
^{-e}\left( 1-\dfrac{1}{s}u\right) ^{c-1}\left( 1-\dfrac{1}{t}v\right)^{d-1}\,du\,dv
\label{4.9.23}
\end{equation}
The integrals are convergent if $\Re(a),\Re(b),\Re(1-e)>0$. Formula \eqref{4.9.23} is the integral representation for the $F_3$ Appell function \cite[5.8.1.(3)]{28}. We get
\begin{equation}
I_{2}(s,t)=B(a,b,1-e)s^{c-1}t^{d-1}
F_3\left( a,b;1-c,1-d;a+b+1-e;\dfrac{1}{s},\dfrac{1}{t}\right)
\label{4.9.24}
\end{equation}
\subsection{The integral $I_3(s,t)$}
For the computation of the integral $I_{3}(s,t)$ we write the integral \eqref{4.9.17b} as
\begin{multline}
I_{3}(s,t)=s^{a+c-1}t^{d-1} \\
\int_{0}^{1}\int_{0}^{1}u^{a-1}v^{b-1}(1-u)^{c-1}(1-v)^{-e}(1-su) ^{b-e}
\left( 1-\dfrac{1}{t}v+\dfrac{s}{t}uv\right) ^{d-1}\,du\,dv
\label{4.9.21}
\end{multline}
Formula \eqref{4.9.21} is an integral representation of the $H_2$ function \cite[(3.4)]{7}. We get
\begin{equation}
I_{3}(s,t)=B(a,c)B(b,1-e)
s^{a+c-1}t^{d-1}H_2\left( e-b,a,1-d,b;a+c;s,-\dfrac{1}{t}\right)
\label{4.9.22}
\end{equation}
\subsection{The integral $I_4(s,t)$}
For $I_{4}(s,t)$ we look at $I_{2}(s,t)$. Interchanging in $I_{2}(s,t)$ the parameters $a$ and $b$, $c$ and $d$ and the variables $s$ and $t$ we get $I_{4}(s,t)$. So from \eqref{4.9.22} we get directly
\[
I_{4}(s,t)=B(a,1-e)B(b,d)
t^{b+d-1}s^{c-1}H_2\left( e-a,b,1-c,a;b+d;t,-\dfrac{1}{s}\right)
\]
\subsection{The integral $I_5(s,t)$}
For $I_{5}(s,t)$ we write the inner integral of \eqref{4.9.17e} as
\[
\int_{0}^{s}u^{a-1}(s-u)^{c-1}(1-v-u)^{-e}du=B(a,c)s^{a+c-1}(1-v) ^{-e}
\hyp21{a,e}{a+c}{\dfrac{s}{1-v}}
\]
with convergence conditions $\Re(a),\Re(c),\Re(c-e)>0$ and $\dfrac{s}{1-v}\leq 1$. Substitution in \eqref{4.9.17e} and replacing $v$ by $(1-s)v$ gives
\begin{multline}
I_{5}(s,t) =B(a,c)s^{a+c-1}t^{d-1}(1-s)^{b} \\
\times\int_{0}^{1}v^{b-1}\big( 1-(1-s)v\big)^{-e}
\left( 1-\dfrac{1-s}{t}v\right) ^{d-1}
\hyp21{a,e}{a+c}{\dfrac{s}{1-(1-s)v}}dv
\label{4.9.27}
\end{multline}
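The inner ${}_2F_1$ evaluation used above can be checked with scipy's `hyp2f1`; the following sketch (illustrative only) compares both sides at sample parameter values:

```python
from math import gamma
from scipy.integrate import quad
from scipy.special import hyp2f1

a, c, e = 1.3, 1.6, 0.4
s, v = 0.5, 0.2                 # chosen so that s/(1-v) < 1
B2 = lambda p, q: gamma(p) * gamma(q) / gamma(p + q)

# int_0^s u^{a-1}(s-u)^{c-1}(1-v-u)^{-e} du
lhs, _ = quad(lambda u: u**(a - 1) * (s - u)**(c - 1) * (1 - v - u)**(-e), 0, s)
rhs = B2(a, c) * s**(a + c - 1) * (1 - v)**(-e) * hyp2f1(a, e, a + c, s / (1 - v))
```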
Because the computation of the integral is very long we only give the result from \cite[Section 6.5.5]{1}.
\begin{align}
I_{5}(s,t)&
=B(c,e-c)B(b,c-e+1)s^{a-1}t^{d-1}(1-s)^{b+c-e} \nonumber \\
& \qquad \qquad \qquad \qquad \qquad \qquad \times
F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right) + \nonumber \\
&+B(a,c-e)B(b,d)s^{a+c-1}t^{b+d-1}F_P(e,b,a,b+d,a+c;t,s) - \nonumber \\
&-\dfrac{B(a,c-e)}{d}s^{a+c-e-1}t^{b-1}(s+t-1)^{d} \nonumber \\
& \qquad \times
\widetilde{F_3}\left(
\begin{array}{c}
1,1-b;e-a-c+1,d \\
d+1%
\end{array}%
;%
\begin{array}{c}
e \\
e-c+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\label{4.9.57}
\end{align}
with convergence conditions for the $F_P$ function: $\left\vert t\right\vert <1$, $\left\vert s-1\right\vert <1$. The convergence conditions of the $F_3$ function are $\left\vert \dfrac{s-1}{s}\right\vert <1$ and $\left\vert \dfrac{1-s}{t}\right\vert<1$. This gives a region of convergence $\dfrac{1}{2}<s<1$. But after analytic continuation (see \eqref{4.P.20}) this function is well-defined for $\dfrac{s-1}{s}<1$ and $\dfrac{1-s}{t}<1$. For the extended $F_3$ function analytic continuation gives convergence on the same region as for the $F_3$ function (see Appendix B.2). This includes the $(s,t)$ domain on which the integral $I_5(s,t)$ is defined. For the parameters we have the convergence conditions: $0<\Re(a),\Re(b),\Re(c),\Re(d),\Re(e)<\Re(c)+1$.
\subsection{The integral $I_6(s,t)$}
For $I_6(s,t)$ we write the inner integral of \eqref{4.9.17f} as
\[
\int_{0}^{1-v}u^{a-1}(s-u)^{c-1}(1-v-u)^{-e}du= B(a,1-e)s^{c-1}(1-v)^{a-e}\hyp21{a,1-c}{a-e+1}{\dfrac{1-v}{s}}
\]
with convergence conditions $\Re(a),\Re(1-e)>0$ and $\dfrac{1-v}{s}<1$.
Because the computation of this integral is also very long we only give the result from \cite[Section 6.5.6]{1}.
\begin{align}
I_{6}(s,t)
&=\dfrac{B(a,c-e)}{d}s^{a+c-e-1}t^{b-1}(s+t-1)^{d} \nonumber \\
&\qquad\qquad \times
\widetilde{ F_3}\left(
\begin{array}{c}
1,1-b;e-a-c+1,d \\
d+1%
\end{array}%
;%
\begin{array}{c}
e \\
e-c+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right) +\nonumber \\
&+B(1-e,e-c)B(d,1-e+c)s^{a-1}t^{b-1}(s+t-1)^{c+d-e} \nonumber \\
&\qquad\qquad \times F_3\left(
\begin{array}{c}
1-a,1-b;c,d \\
c+d-e+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\label{4.9.58}
\end{align}
with convergence conditions: $\Re(a),\Re(d),\Re(1-e),\Re(c-e+1)>0$.
\subsection{The summation of the integrals $I_5(s,t)$ and $I_6(s,t)$}
When adding the functions $I_{5}(s,t)$ \eqref{4.9.57} and $I_{6}(s,t)$ \eqref{4.9.58}
the extended $F_3$ functions will be eliminated. We get
\begin{align}
I_{5}(s,t)+I_{6}(s,t)
&=B(a,c-e)B(b,d)s^{a+c-1}t^{b+d-1}F_P(e,b,a,b+d,a+c;t,s)+ \nonumber \\
&+B(c,e-c)B(b,c-e+1)s^{a-1}(1-s)^{b+c-e}t^{d-1} \nonumber \\
&\qquad \qquad \qquad \qquad \qquad F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right) + \nonumber \\
&+B(1-e,e-c)B(d,1-e+c)s^{a-1}t^{b-1}(s+t-1) ^{c+d-e} \nonumber \\
&\qquad \qquad \qquad \qquad F_3\left(
\begin{array}{c}
1-a,1-b;c,d \\
c+d-e+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\label{4.9.59}
\end{align}
The power series of the functions $F_P$ \eqref{4.B.5} and $F_3$ \eqref{4.P.20} are convergent in the region $0<s<1,\ 1-s<t<1$. For the first $F_3$ function we use the analytic continuation over the whole $(s,t)$ region. For the parameters we get the convergence conditions: $0<\Re(a),\Re(b),\Re(c),\Re(d)$ and $\Re(e)<1$.
From \cite[p 441]{11} we get
\begin{align}
F_3\left(
\begin{array}{c}
a_0,b_1;b_2,c_1 \\
c_2%
\end{array}%
;x_{1},x_{2}\right)
&=\dfrac{\Gamma (c_2) \Gamma(c_2-b_1-c_1) }{\Gamma( c_2-b_1)
\Gamma(c_2-c_1) }(x_{1}) ^{-a_0}(x_{2})^{-b_1} \nonumber \\
&F_Q\left(b_2+c_1-c_2+1,b_2,c_1;1-a_0+b_2,c_1-b_1+1;\dfrac{1}{x_{1}},%
\dfrac{1}{x_{2}}\right) + \nonumber \\
&+\dfrac{\Gamma (c_2) \Gamma(b_1+c_1-c_2) }{\Gamma(b_1) \Gamma (c_1) }(x_{2})^{1-c_2}
(1-x_{2}) ^{c_2-b_1-c_1} \nonumber \\
&\qquad\quad F_3\left(
\begin{array}{c}
a_0,1-b_1;b_2,1-c_1 \\
1-b_1-c_1+c_2%
\end{array}%
;\dfrac{x_{1}(x_{2}-1)}{x_{2}},1-x_{2}\right)
\label{4.9.60}
\end{align}
After application of \eqref{4.9.60} to the second $F_3$ function of \eqref{4.9.59} we get
\begin{align*}
I_{5}(s,t)+I_{6}(s,t)
&=B(a,c-e)B(b,d)s^{a+c-1}t^{b+d-1}F_P( e,b,a,b+d,a+c;t,s) + \\
&+B(1-e,e-c)B(d,c-e+b)(s+t-1)^{a+b+c+d-e-2} \\
&\qquad \qquad \qquad \qquad \qquad \qquad
F_Q\left( e,c,d;a+c,b+d;\dfrac{s}{s+t-1},\dfrac{t}{s+t-1}\right) + \\
&+\Gamma (e-c)\Gamma (c-e+1)\left[\dfrac{\Gamma (b)\Gamma (c) }{\Gamma(e)\Gamma(c+b-e+1) }+
\dfrac{\Gamma(1-e)\Gamma(e-b-c) }{\Gamma(1-c)\Gamma(1-b) }\right] \\
&\qquad \qquad \qquad \qquad s^{a-1}(1-s) ^{b+c-e}t^{d-1}F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\end{align*}
The function $F_Q$ is defined in \eqref{4.B.5a}. After some simplification of the terms within the square brackets we get
\begin{align*}
I_{5}(s,t)+I_{6}(s,t)
&=B(a,c-e)B(b,d)s^{a+c-1}t^{b+d-1}F_P( e,b,a,b+d,a+c;t,s) + \\
&+B(1-e,e-c)B(d,c-e+b)s^{a+c-1}t^{b+d-1}(s+t-1) ^{-e} \\
&\qquad \qquad \qquad \qquad \qquad \qquad
F_Q\left( e,c,d;a+c,b+d;\dfrac{s}{s+t-1},\dfrac{t}{s+t-1}\right) + \\
&+B(e-b-c,c)B(1-e,b)s^{a-1}\left( 1-s\right) ^{b+c-e}t^{d-1} \\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\end{align*}
The power series of the $F_P$ and the $F_Q$ functions are convergent in the region $\{0<s<1 \wedge 1-s<t<1\}$. For the $F_3$ function we use an analytical continuation so that we obtain convergence in the correct region. For the parameters we get the convergence conditions: $0<\Re(a,b,c,d,e)<1$.
\subsection{Summary of the integrals $I_{1..6}(s,t)$}
We give a summary of the functions $I_{1..6}(s,t)$
\begin{equation}
I_{1}(s,t)=B(a,c)B(b,d)s^{a+c-1}t^{b+d-1}F_2(e;a,b;a+c,b+d;s,t)
\label{4.I78.a}
\end{equation}
with $0<s<1,0<t<1-s$ and $0<\Re(a)<\Re(a+c)$, $0<\Re(b)<\Re(b+d)$.
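As a numerical sanity check for \eqref{4.I78.a} (a sketch only: the integrand $u^{a-1}v^{b-1}(s-u)^{c-1}(t-v)^{d-1}(1-u-v)^{-e}$ on $0<u<s$, $0<v<t$ is assumed from the derivation above, and the parameter values are arbitrary samples satisfying the stated conditions), the closed form can be compared with a midpoint-rule evaluation of the double integral:

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n = Gamma(a+n)/Gamma(a)
    return gamma(a + n) / gamma(a)

def beta(x, y):
    # Euler Beta function B(x,y)
    return gamma(x) * gamma(y) / gamma(x + y)

def F2(a, b1, b2, c1, c2, x, y, N=80):
    # truncated double power series of the Appell F_2 function
    return sum(poch(a, i + j) * poch(b1, i) * poch(b2, j)
               / (poch(c1, i) * poch(c2, j) * gamma(i + 1) * gamma(j + 1))
               * x**i * y**j
               for i in range(N) for j in range(N - i))

# arbitrary sample parameters with 0 < Re(a) < Re(a+c), 0 < Re(b) < Re(b+d)
a, b, c, d, e = 1.2, 1.1, 1.3, 1.4, 0.5
s, t = 0.3, 0.4                      # 0 < s < 1, 0 < t < 1 - s

closed = (beta(a, c) * beta(b, d) * s**(a + c - 1) * t**(b + d - 1)
          * F2(e, a, b, a + c, b + d, s, t))

# midpoint rule for the (assumed) double integral of
# u^{a-1} v^{b-1} (s-u)^{c-1} (t-v)^{d-1} (1-u-v)^{-e} over 0<u<s, 0<v<t
n = 400
hu, hv = s / n, t / n
us = [(k + 0.5) * hu for k in range(n)]
vs = [(l + 0.5) * hv for l in range(n)]
fu = [u**(a - 1) * (s - u)**(c - 1) for u in us]
fv = [v**(b - 1) * (t - v)**(d - 1) for v in vs]
quad = hu * hv * sum(fu[k] * fv[l] * (1 - us[k] - vs[l])**(-e)
                     for k in range(n) for l in range(n))

assert abs(quad - closed) / abs(closed) < 1e-2
```

The agreement of the two values supports the closed form on this sample point; it is of course not a proof.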
\[
I_{2}(s,t)=B(a,b,1-e)s^{c-1}t^{d-1}
F_3\left(a,b;1-c,1-d;a+b+1-e;\dfrac{1}{s},\dfrac{1}{t}\right)
\]
with $1<s<\infty ,1<t<\infty$ and $\Re(a),\Re(b),\Re(1-e)>0$.
\begin{equation}
I_{3}(s,t)=B(a,c)B(b,1-e)s^{a+c-1}t^{d-1}
H_2\left( e-b,a,1-d,b;a+c;s,-\dfrac{1}{t}\right)
\label{4.I78.c}
\end{equation}
with $0<s<1<t<\infty$ and $\Re(a),\Re(b),\Re(c)>0$.
\begin{equation}
I_{4}(s,t)=B(a,1-e)B(b,d)t^{b+d-1}s^{c-1}
H_2\left( e-a,b,1-c,a;b+d;t,-\dfrac{1}{s}\right)
\label{4.I78.d}
\end{equation}
with $0<t<1<s<\infty$ and $\Re(a),\Re(b),\Re(d)>0$.
\begin{align}
I_{5}(s,t)+I_{6}(s,t)
&=B(a,c-e)B(b,d)s^{a+c-1}t^{b+d-1}F_P(e,b,a,b+d,a+c;t,s) + \nonumber \\
&+B(c,e-c)B(b,c-e+1)s^{a-1}(1-s) ^{b+c-e}t^{d-1} \nonumber \\
&\qquad \qquad \qquad \qquad \qquad
F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
c+b-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right) + \nonumber \\
&+B(1-e,e-c)B(d,1-e+c)t^{b-1}s^{a-1}(s+t-1) ^{c+d-e} \nonumber \\
&\qquad \qquad \qquad \qquad
F_3\left(
\begin{array}{c}
1-a,1-b;c,d \\
c+d-e+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\label{4.I78.e}
\end{align}
with $0<s<1,\ 1-s<t<1$ and $0<\Re(a),\Re(b),\Re(c),\Re(d),\Re(e)<1$.
\\[3mm]
The functions $F_2$, $F_3$, $H_2$ and $F_P$ are all five-parametric double-hypergeometric functions which are solutions of a system of two partial differential equations of second order \cite{9}. This system is associated with the Appell function $F_2(a,b_1,b_2,c_1,c_2;x,y)$ \cite[5.9(10)]{28}. Note that the orthogonal polynomials
\eqref{4.2.16} and \eqref{4.2.17} used for the fractional orthogonal derivative are also expressed in terms of $F_2$ functions.
\subsection{Remarks}
\begin{remark}\
It is well-known \cite[5.9(10)]{28} that the $F_2(a_0;b_1,b_2;c_1,c_2;x,y)$ function is a solution of the following system of partial differential equations
\begin{equation}
x(1-x) r-xys+\Big[ c_1-(a_0+b_1+1) x \Big] p-b_1yq-a_0b_1F_2(x,y) =0
\label{4.9.75a}
\end{equation}
\begin{equation}
y(1-y) t-xys+\Big[ c_2-(a_0+b_2+1) y \Big] q-b_2xp-a_0b_2F_2(x,y) =0
\label{4.9.75b}
\end{equation}
with
\[
p=\dfrac{\partial F_2(x,y) }{\partial x} \ \ \
q=\dfrac{\partial F_2(x,y)}{\partial y} \ \ \
r=\dfrac{\partial ^{2}F_2(x,y) }{\partial x^{2}} \ \ \
s=\dfrac{\partial^{2}F_2(x,y)}{\partial x \partial y} \ \ \
t=\dfrac{\partial^{2}F_2(x,y)}{\partial y^{2}}
\]
Olsson treats all 36 explicit solutions of \eqref{4.9.75a} and \eqref{4.9.75b} \cite{9}. Using the substitutions $a_0=e-a-b-c-d+2,\ b_1=1-c,\ b_2=1-d,\ c_1=2-a-c,\ c_2=2-b-d$ and $x\rightarrow s,\ y\rightarrow t$ he gives among others the following solutions of \eqref{4.9.75a} and \eqref{4.9.75b}
\begin{equation}
s^{a+c-1}t^{b+d-1}F_2\left(
\begin{array}{c}
e;a,b \\ a+c,b+d%
\end{array}%
;s,t\right)
\label{4.9.78a}
\end{equation}
\begin{equation}
s^{a+c-1}F_2\left(
\begin{array}{c}
e-b-d+1;a,1-d \\
a+c,2-b-d%
\end{array}%
;s,t\right)
\label{4.9.78b}
\end{equation}
\begin{equation}
s^{c-1}t^{d-1}F_3\left(
\begin{array}{c}
a,b;1-c,1-d \\
a+b+1-e%
\end{array}%
;\dfrac{1}{s},\dfrac{1}{t}\right)
\label{4.9.78c}
\end{equation}
\begin{equation}
t^{b+d-1}F_P(e-c-a+1,b,1-c,b+d,2-a-c;t,s)
\label{4.9.78d}
\end{equation}
\begin{equation}
s^{a-1}t^{d-1}(1-s) ^{b+c-e}F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\label{4.9.78e}
\end{equation}
\begin{equation}
s^{a-1}t^{b-1}(1-s-t) ^{c+d-e}F_3\left(
\begin{array}{c}
1-a,1-b;c,d \\
c+d-e+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\label{4.9.78f}
\end{equation}
\begin{equation}
s^{a+c-1}t^{d-1}
H_2\left( e-b,a,1-d,b;a+c;s,-\dfrac{1}{t}\right)
\label{4.9.78g}
\end{equation}
\
For the integral $I_{1}(s,t)$ we see that the function $s^{a+c-1}t^{b+d-1}F_2\left(
\begin{array}{c}
e;a,b \\
a+c,b+d%
\end{array}%
;s,t\right) $ equals the solution \eqref{4.9.78a}.
\
For the integral $I_{2}(s,t)$ we see that the function $s^{c-1}t^{d-1}F_3\left( a,b;1-c,1-d;a+b+1-e;\dfrac{1}{s},\dfrac{1}{t}\right)$ equals the solution \eqref{4.9.78c}.
\
For the integral $I_{3}(s,t)$ we see that the function $s^{a+c-1}t^{d-1}H_2\left( e-b,a,1-d,b;a+c;s,-\dfrac{1}{t}\right)$ equals the solution \eqref{4.9.78g}.
\
To compute $I_{4}(s,t)$ we can interchange in $I_{3}(s,t)$ the parameters $a$ and $b$, $c$ and $d$ and the variables $s$ and $t$. Then we see that \eqref{4.I78.d} is a solution of \eqref{4.9.75a} and \eqref{4.9.75b}.
\
For the integral $I_{5}(s,t)+I_{6}(s,t)$ we have the functions
\begin{equation}
L_{1}(s,t) =s^{a+c-1}t^{b+d-1}F_P(e,b,a,b+d,a+c;t,s)
\label{4.9.81a}
\end{equation}
\[
L_{2}(s,t) =s^{a-1}(1-s)^{b+c-e}t^{d-1}F_3\left(
\begin{array}{c}
1-a,b;c,1-d \\
b+c-e+1%
\end{array}%
;\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\]
\[
L_{3}\left(s,t\right) =s^{a-1}t^{b-1}(s+t-1) ^{c+d-e}F_3\left(
\begin{array}{c}
1-a,1-b;c,d \\
c+d-e+1%
\end{array}%
;\dfrac{s+t-1}{s},\dfrac{s+t-1}{t}\right)
\]
For the function $L_{1}(s,t)$ we use the solution \cite[(26)]{9} and interchange $b_1\longleftrightarrow b_2,c_1\longleftrightarrow c_2$ and $x_{1}\longleftrightarrow
x_{2}$. Then the solution becomes
\[
\left( x_{2}\right) ^{1-c_2}F_P(a_0-c_2+1,b_1,b_2-c_2+1,c_1,2-c_2;x_{2},x_{1})
\]
Application to \eqref{4.9.81a} gives as result
\[
t^{b+d-1}F_P(e-c-a+1,b,1-c,b+d,2-a-c;t,s)
\]
This function is equal to \eqref{4.9.78d}. So $L_{1}(s,t)$ is a solution of \eqref{4.9.75a} and \eqref{4.9.75b}.
\
The function $L_{2}(s,t)$ equals the solution \eqref{4.9.78e}.
\
The function $L_{3}(s,t)$ equals the solution \eqref{4.9.78f}.
\end{remark}
\begin{remark}
It is clear that the inner double integrals in \eqref{4.9.11} are in fact double integral transformations which transform the solutions of the $F_2$ equations \eqref{4.9.75a} and \eqref{4.9.75b} into solutions of the $F_2$ equations with other parameters.
\end{remark}
\begin{remark}\
When looking at Figure \ref{Figure 6.1} case Va we can consider the integral when the arguments are on the boundaries. Then we distinguish four possibilities.
\underline{I.\qquad $s=1$.}
For $s=1$ it can be shown that $I_{5}(1,t)+I_{6}(1,t)=I_{4}(1,t)$.
\\[4mm]
\underline{II.\qquad $t=1$.}
For $t=1$ it can be shown that $I_{5}(s,1)+I_{6}(s,1)=I_{3}(s,1)$.
\\[4mm]
\underline{III.\qquad $s=t=1$.}
From \eqref{4.9.27} it follows that $I_{5}(1,t)=0$ and $I_{6}(1,t)=I_{4}(1,t)$. For $s=t=1$ it can be shown that $I_{6}( 1,1)=I_{2}(1,1)=I_{3}(1,1)$.
\\[4mm]
\underline{IV.\qquad $s+t=1$.}
For $s+t=1$ it can be shown that $I_{5}(s,1-s)+I_{6}(s,1-s)=I_{1}(s,1-s)$.
\end{remark}
\begin{remark}\
When looking at case V in Figure \ref{Figure 6.1} we see directly that when interchanging $a\leftrightarrow b$, $c\leftrightarrow d$, $s\leftrightarrow t$ and the integration variables $u\leftrightarrow v$ we get
\begin{equation}
I_{5}(a,b,c,d,e;s,t)+I_{6}(a,b,c,d,e;s,t) =I_{5}(b,a,d,c,e;t,s) +I_{6}(b,a,d,c,e;t,s)
\label{4.9.124}
\end{equation}
From the symmetry it follows that $I_5(s,t)+I_6(s,t)$ can be written as half the sum of the two sides of \eqref{4.9.124}.
Looking at equation \eqref{4.I78.e} we see that only the second $F_3$ function on the right-hand side is symmetric in the parameters and the arguments. Now write \eqref{4.9.124} as
\begin{align*}
I_{5}(a,b,c,d,e;s,t)+I_{6}(a,b,c,d,e;s,t)
&=\lambda\big( I_{5}(a,b,c,d,e;s,t)+I_{6}(a,b,c,d,e;s,t) \big) +\\
&+(1-\lambda)\big( I_{5}(b,a,d,c,e;t,s) +I_{6}(b,a,d,c,e;t,s) \big)
\end{align*}
Now we can choose $\lambda$ so that the second $F_3$ functions are eliminated. The result is
\begin{align*}
I_{5}(s,t)+I_{6}(s,t)
&=A_{1}\;s^{a+c-1}t^{b+d-1}F_P( e,b,a,b+d,a+c;t,s)+ \\
&+A_{2}\;s^{a+c-1}t^{b+d-1}F_P( e,a,b,a+c,b+d;s,t)+ \\
&+A_{3}\;t^{b-1}(1-t)^{a+d-e}s^{c-1}
F_3\left(
\begin{array}{c}
d,a;1-b,1-c \\
a+d-e+1%
\end{array};\dfrac{t-1}{t},\dfrac{1-t}{s}\right)+ \\
&+A_{4}\;s^{a-1}(1-s)^{b+c-e}t^{d-1}
F_3\left(
\begin{array}{c}
c,b;1-a,1-d \\
b+c-e+1%
\end{array};\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\end{align*}
with
\[
A_1=\dfrac{\Gamma(a)\Gamma(b)\Gamma(e)\Gamma(c-d)\Gamma(d-c+1)\Gamma(1-e)}
{\Gamma(a+c-e)\Gamma( b+d) \Gamma(1-d)\Gamma(e-c+1)}
\]
\[
A_2=\dfrac{\Gamma(a)\Gamma(b)\Gamma(e)\Gamma(d-c)\Gamma(c-d+1)\Gamma(1-e)}
{\Gamma( b+d-e)\Gamma(a+c)\Gamma(1-c)\Gamma(e-d+1)}
\]
\[
A_3=\dfrac{\Gamma(a)\Gamma(d)\Gamma(c-d)\Gamma(d-c+1)\Gamma(1-e)}
{\Gamma(c)\Gamma(1-c)\Gamma(a+d-e+1)} \qquad
\]
\[
A_4=\dfrac{\Gamma(b)\Gamma(c)\Gamma(d-c)\Gamma(c-d+1)\Gamma(1-e)}
{\Gamma(d)\Gamma(1-d)\Gamma(c+b-e+1)} \qquad
\]
Application of Olsson \cite[(26)]{9} gives
\begin{align*}
I_{5}(s,t)+I_{6}(s,t)
&=A_{1}\;s^{a+c-1}t^{b+d-1}(1-t)^{-e}F_P\left( e,d,a,b+d,a+c;\dfrac{t}{t-1},\dfrac{s}{1-t}\right)+ \\
&+A_{3}\;s^{c-1}t^{b-1}(1-t)^{a+d-e}
F_3\left(
\begin{array}{c}
d,a;1-b,1-c \\
a+d-e+1%
\end{array};\dfrac{t-1}{t},\dfrac{1-t}{s}\right)+ \\
&+A_{2}\;s^{a+c-1}t^{b+d-1}(1-s)^{-e}F_P\left( e,c,b,a+c,b+d;\dfrac{s}{s-1},\dfrac{t}{1-s}\right)+ \\
&+A_{4}\;s^{a-1}t^{d-1}(1-s)^{b+c-e}
F_3\left(
\begin{array}{c}
c,b;1-a,1-d \\
b+c-e+1%
\end{array};\dfrac{s-1}{s},\dfrac{1-s}{t}\right)
\end{align*}
Note that the arguments of the $F_P$ functions are the inverses of the arguments of the $F_3$ functions.
\end{remark}
\
\
\textbf{\Large Appendices: Hypergeometric functions of two variables}
\
When deriving formulas for the two-dimensional fractional orthogonal derivatives we encounter various hypergeometric functions of two variables, some of which are beyond the familiar Appell and Horn hypergeometric functions. Here we give an overview of the functions we need in this thesis.
\begin{appendices}
\section{The Pochhammer symbol}
We have already frequently used the Pochhammer symbol, which we defined for
$a\in\mathbb{C}$ by
\[
(a)_i:=a(a+1)\ldots(a+i-1)=\frac{\Gamma(a+i)}{\Gamma(a)}\quad(i=0,1,\ldots),
\]
and more generally by
\[
(a)_b:=\frac{\Gamma(a+b)}{\Gamma(a)}\quad(b\in\mathbb{C}).
\]
We will use this second definition in particular for $b\in\mathbb{Z}$. Then
\[
(a)_{-i}=\frac{(-1)^i}{(1-a)_i}\quad(i\in\mathbb{Z}).
\]
Further properties for $i,j\in\mathbb{Z}$ are:
\[
\Gamma(a+i)=\Gamma(a)(a) _{i}
\]
\[
\Gamma(a-i) =(-1) ^{i}\dfrac{\Gamma(a)}{(1-a) _{i}}
\]
\[
(a+i)_j=\dfrac{(a)_{i+j}}{(a)_i}=\dfrac{(a+j)_i (a)_j}{(a)_i}
\]
\[
(a-i)_{j}=(-1)^{i}(a)_{j-i}(1-a)_i=(-1)^{j}\dfrac{(1-a)_{i}}{(1-a)_{i-j}}
=(a)_j\dfrac{(1-a)_i}{(1-a-j)_i}
\]
There are many more; see for example \cite[Appendix I]{17}.
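These identities are easy to verify numerically. The following sketch implements $(a)_n$ for $n\in\mathbb{Z}$ directly as $\Gamma(a+n)/\Gamma(a)$ and checks the formulas above at arbitrary sample values:

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n = Gamma(a+n)/Gamma(a), also valid for n < 0
    return gamma(a + n) / gamma(a)

a, i, j = 1.7, 4, 3
# (a)_{-i} = (-1)^i / (1-a)_i
assert abs(poch(a, -i) - (-1)**i / poch(1 - a, i)) < 1e-9
# Gamma(a-i) = (-1)^i Gamma(a) / (1-a)_i
assert abs(gamma(a - i) - (-1)**i * gamma(a) / poch(1 - a, i)) < 1e-9
# (a+i)_j = (a)_{i+j} / (a)_i
assert abs(poch(a + i, j) - poch(a, i + j) / poch(a, i)) < 1e-9
# (a-i)_j = (-1)^i (a)_{j-i} (1-a)_i
assert abs(poch(a - i, j) - (-1)**i * poch(a, j - i) * poch(1 - a, i)) < 1e-9
```

The same `poch` helper is reused in the numerical sketches for the double hypergeometric series below.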
\section{Definitions}
For the basics of hypergeometric functions of two variables see for instance
\cite[Ch. 9]{15}, \cite[sections 5.7-5.12]{28}, \cite[Ch. 1]{16}, \cite[Ch. 8]{17}.
Appell \cite{5}, \cite{8} defines four functions $F_1, F_2, F_3, F_4$ in two variables which are generalizations of the \,$_{2}F_1$\, hypergeometric function. We use only the first three functions \cite[5.7]{28}.
\[
F_1(a;b_1,b_2;c;x,y) =\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty }
\dfrac{(a) _{i+j}(b_1)_{i}(b_2)_{j}}{(c) _{i+j}}\dfrac{1}{i!\,j!}x^{i}y^{j}
\]
with convergence region:
$\left\{\left\vert x\right\vert <1\wedge \left\vert y\right\vert <1\right\}$.
\[
F_2(a;b_1,b_2;c_1,c_2;x,y)=\sum\limits_{i=0}^{\infty }\sum\limits_{j=0}^{\infty }
\dfrac{(a)_{i+j}(b_1)_{i}(b_2)_{j}}{(c_1) _{i}(c_2)_{j}}\dfrac{1}{i!\,j!}x^{i}y^{j}
\]
This power series has as convergence region: $\{\vert x \vert +\vert y \vert <1\}$. From the known transformations of the $F_2$ function \cite[5.11. (6),(7),(8)]{28} and from the union of the four partially overlapping regions of convergence (or from the integral representation of the $F_2$ function \cite[5.8(2)]{28}) we get a region of unique analytical continuation $\left\{ x<1\wedge y<1\wedge x+y<1\right\}$.
\begin{equation}
F_3(a_{1},a_{2};b_1,b_2;c;x,y)=\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty }
\dfrac{(a_{1})_{i}(a_{2})_{j}(b_1)_{i}(b_2) _{j}}{(c) _{i+j}}\dfrac{1}{i!\,j!}x^{i}y^{j}
\label{4.P.20}
\end{equation}
with convergence region: $\left\{\vert x\vert <1\wedge \vert y\vert <1\right\}$. With the integral representation of the $F_3$ function \cite[5.8(3)]{28}\footnote{Note that the power of the factor $(1-u-v)$ should be $\gamma-\beta-\beta'.$} we get the region of unique analytical continuation $\{x<1 \wedge y<1\}$.
Another integral representation is
\begin{multline}
F_3(a_{1},a_{2};b_1,b_2;c;x,y)=
\dfrac{\Gamma(c)}{\Gamma(a_1)\Gamma(a_2)\Gamma(c-a_1-a_2)} \\
\times\int_0^1\int_0^1 u^{a_1-1}v^{a_2-1}(1-u)^{c-a_2-a_1-1}(1-v)^{c-a_2-1}
(1-y\,v)^{-b_2}(1-x\,u+x\,u\,v)^{-b_1}dudv
\label{4.P.20a}
\end{multline}
with $\Re(a_1,a_2,c-a_2-a_1,c-a_2)>0$.
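The representation \eqref{4.P.20a} can be checked numerically. In the following sketch the parameter values are arbitrary samples satisfying the stated conditions (chosen large enough that the integrand stays smooth at the endpoints, so that a simple midpoint rule suffices):

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n
    return gamma(a + n) / gamma(a)

def F3(a1, a2, b1, b2, c, x, y, N=40):
    # truncated double power series (4.P.20) of the Appell F_3 function
    return sum(poch(a1, i) * poch(a2, j) * poch(b1, i) * poch(b2, j)
               / (poch(c, i + j) * gamma(i + 1) * gamma(j + 1))
               * x**i * y**j
               for i in range(N) for j in range(N))

# arbitrary samples with Re(a1), Re(a2), Re(c-a1-a2), Re(c-a2) > 0
a1, a2, b1, b2, c = 2.2, 2.1, 0.6, 0.7, 6.5
x, y = 0.3, 0.25

def integrand(u, v):
    # integrand of (4.P.20a)
    return (u**(a1 - 1) * v**(a2 - 1) * (1 - u)**(c - a2 - a1 - 1)
            * (1 - v)**(c - a2 - 1) * (1 - y * v)**(-b2)
            * (1 - x * u + x * u * v)**(-b1))

pref = gamma(c) / (gamma(a1) * gamma(a2) * gamma(c - a1 - a2))
n = 300
h = 1.0 / n
quad = pref * h * h * sum(integrand((k + 0.5) * h, (l + 0.5) * h)
                          for k in range(n) for l in range(n))

assert abs(quad - F3(a1, a2, b1, b2, c, x, y)) < 1e-3
```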
We define the {\em extended} $F_3$ function as
\begin{equation}
\widetilde{F_3}\left(\begin{array}{c}a_{1},a_{2},b_1,b_2 \\
c\end{array};\begin{array}{c}d_{1} \\
d_{2}\end{array};x,y\right)
=\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty }\dfrac{(a_{1})_{i}(a_{2})_{j}(b_1) _{i}(b_2)_{j}}{(c)_{i+j}}\dfrac{(d_1)_i}{(d_2)_i}\dfrac{1}{i!\,j!}x^{i}y^{j}
\label{4.P.21}
\end{equation}
This extended $F_3$ function is of order three (see below) and can be written as a triple integral
\begin{multline*}
\widetilde{F_3}\left(\begin{array}{c}a_{1},a_{2},b_1,b_2 \\
c\end{array};\begin{array}{c}d_{1} \\
d_{2}\end{array};x,y\right)
=\dfrac{\Gamma(c)}{\Gamma(a_1)\Gamma(a_2)\Gamma(c-a_1-a_2)}
\dfrac{\Gamma(d_2)}{\Gamma(d_1)\Gamma(d_2-d_1)} \\
\qquad\qquad\times\int_0^1\int_0^1\int_0^1 u^{a_1-1}v^{a_2-1}w^{d_1-1}
(1-u)^{c-a_1-1}(1-v)^{c-a_1-a_2-1}(1-w)^{d_2-d_1-1} \\
\times(1-yv+yuv)^{-b_2}(1-x\,u\,w)^{-b_1}
dudvdw
\end{multline*}
Then we have the convergence conditions for the parameters: $\Re(c)>\Re(a_1)>0,\ \Re(c-a_1)>\Re(a_2)>0,\ \Re(d_2)>\Re(d_1)$. For the arguments we have the analytical continuation to the region $\{x<1 \wedge y<1\}$.
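Since $(c)_{i+j}=(c)_i(c+i)_j$, the inner $j$-sum of \eqref{4.P.21} is a Gauss $_2F_1$. The following sketch (with arbitrary sample parameters) checks this resummation and the reduction to the ordinary $F_3$ for $d_1=d_2$:

```python
from math import gamma

def poch(a, n):
    return gamma(a + n) / gamma(a)

def F3ext(a1, a2, b1, b2, c, d1, d2, x, y, N=40):
    # truncated double series (4.P.21) of the extended F_3 function
    return sum(poch(a1, i) * poch(a2, j) * poch(b1, i) * poch(b2, j)
               * poch(d1, i)
               / (poch(c, i + j) * poch(d2, i) * gamma(i + 1) * gamma(j + 1))
               * x**i * y**j
               for i in range(N) for j in range(N))

def hyp2f1(a, b, c, z, N=40):
    # truncated Gauss 2F1 series
    return sum(poch(a, i) * poch(b, i) / (poch(c, i) * gamma(i + 1)) * z**i
               for i in range(N))

a1, a2, b1, b2, c, d1, d2 = 1.3, 0.8, 0.6, 0.9, 2.5, 1.1, 1.7
x, y = 0.3, 0.25

# resummation using (c)_{i+j} = (c)_i (c+i)_j: the inner j-sum is a 2F1
resummed = sum(poch(a1, i) * poch(b1, i) * poch(d1, i)
               / (poch(c, i) * poch(d2, i) * gamma(i + 1)) * x**i
               * hyp2f1(a2, b2, c + i, y)
               for i in range(40))

assert abs(F3ext(a1, a2, b1, b2, c, d1, d2, x, y) - resummed) < 1e-10

# with d1 = d2 the extra factor (d1)_i/(d2)_i equals 1,
# so the ordinary F_3 returns, independently of the common value
assert abs(F3ext(a1, a2, b1, b2, c, 1.4, 1.4, x, y)
           - F3ext(a1, a2, b1, b2, c, 2.0, 2.0, x, y)) < 1e-10
```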
Horn \cite{12} more generally calls a double power series
\[
\sum\limits_{i,j=0}^{\infty }A\left( i,j\right) x^{i}y^{j}
\]
{\em hypergeometric} if the two quotients $\dfrac{A(i+1,j) }{A(i,j) }$ and $ \dfrac{A(i,j+1) }
{A(i,j) }$ are rational functions of $i$ and $j$. Write the two rational functions as quotients of polynomials in $i$ and $j$ without common factors.
\
\qquad $\dfrac{A(i+1,j)}{A(i,j)}=\dfrac{F(i,j) }{F^{\prime }(i,j) }\qquad $and\qquad
$\dfrac{A(i,j+1)}{A(i,j) }=\dfrac{G(i,j)}{G^{\prime }(i,j)}$
\
In addition it is assumed that $F'(i,j)$ contains the factor $i+1$ and $G'(i,j)$ the
factor $j+1$. Then the highest degree in $i,j$ of the four polynomials $F,F^{\prime},G,G^{\prime }$ is called the \emph{order} of the hypergeometric series. Horn also gives a rule (see \cite[Section 5.7.2]{28}) for determining the region of convergence of the power series from the two quotients above. Horn then classifies the hypergeometric functions of order two, giving a list of 34 such functions. The first four items in the list are the Appell functions $F_1,\ F_2,\ F_3$ and $F_{4}$.
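Horn's criterion is easy to check concretely. For the coefficient $A(i,j)$ of the $F_1$ series above, the quotient $A(i+1,j)/A(i,j)=\dfrac{(a+i+j)(b_1+i)}{(c+i+j)(i+1)}$ has numerator and denominator of degree two, so $F_1$ has order two. A small numerical sketch (sample parameters arbitrary):

```python
from math import gamma

def poch(a, n):
    return gamma(a + n) / gamma(a)

def A(a, b1, b2, c, i, j):
    # coefficient A(i,j) of the Appell F_1 double series
    return (poch(a, i + j) * poch(b1, i) * poch(b2, j)
            / (poch(c, i + j) * gamma(i + 1) * gamma(j + 1)))

a, b1, b2, c = 0.9, 1.3, 0.7, 2.1
for i in range(5):
    for j in range(5):
        # A(i+1,j)/A(i,j) = (a+i+j)(b1+i) / ((c+i+j)(i+1)):
        # degree two over degree two, and the denominator contains
        # the required factor i+1, so F_1 has order two
        ratio = A(a, b1, b2, c, i + 1, j) / A(a, b1, b2, c, i, j)
        assert abs(ratio - (a + i + j) * (b1 + i)
                   / ((c + i + j) * (i + 1))) < 1e-12
```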
\
Number nine in the list of Horn is the $H_2$ function.
\begin{equation}
H_2(a,b_1,b_2,c_1,c_2;x,y)=\sum\limits_{i=0}^{\infty}
\sum\limits_{j=0}^{\infty }\dfrac{(a)_{i-j}(b_1)_{i}(b_2)_{j}(c_1)_{j}}{(c_2) _{i}}
\dfrac{1}{i!\,j!}x^{i}y^{j}
\label{4.P.1}
\end{equation}
with convergence condition (determined by Horn's rule):
\begin{equation}
\left\{|x|<1,|y|<(|x|+1)^{-1}\right\}
\label{4.P.1a}
\end{equation}
In \cite[Section 3]{7} Diekema and Koornwinder showed that
\[
\{ (x<0\wedge (x-1) y<1)\vee (0 \leq x<1\wedge y>-1)\}
\]
is a region in $\mathbb{R}^2$ where $H_2$ has unique analytical continuation.
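The signed Pochhammer index in \eqref{4.P.1} is conveniently handled with $(a)_{i-j}=\Gamma(a+i-j)/\Gamma(a)$. The following sketch (sample parameters arbitrary) checks the two single-variable reductions that follow directly from the definition:

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n, also for n < 0: (a)_{-k} = (-1)^k/(1-a)_k
    return gamma(a + n) / gamma(a)

def H2(a, b1, b2, c1, c2, x, y, N=50):
    # truncated double series (4.P.1) of Horn's H_2 function
    return sum(poch(a, i - j) * poch(b1, i) * poch(b2, j) * poch(c1, j)
               / (poch(c2, i) * gamma(i + 1) * gamma(j + 1)) * x**i * y**j
               for i in range(N) for j in range(N))

def hyp2f1(a, b, c, z, N=80):
    # truncated Gauss 2F1 series
    return sum(poch(a, i) * poch(b, i) / (poch(c, i) * gamma(i + 1)) * z**i
               for i in range(N))

a, b1, b2, c1, c2 = 0.7, 0.9, 1.2, 0.8, 1.6
x, y = 0.4, 0.3          # inside |x| < 1, |y| < (|x|+1)^{-1}

# y = 0: only j = 0 survives, so H_2 reduces to 2F1(a, b1; c2; x)
assert abs(H2(a, b1, b2, c1, c2, x, 0.0) - hyp2f1(a, b1, c2, x)) < 1e-10
# x = 0: with (a)_{-j} = (-1)^j/(1-a)_j we get 2F1(b2, c1; 1-a; -y)
assert abs(H2(a, b1, b2, c1, c2, 0.0, y) - hyp2f1(b2, c1, 1 - a, -y)) < 1e-10
```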
\begin{figure}[ht]
\centering
\parbox{5cm}{
\includegraphics[width=5cm]{Figure1}
\caption{Convergence region of the $H_2$ function}
\label{fig:1}}
\qquad
\begin{minipage}{5cm}
\includegraphics[width=5cm]{Figure2}
\caption{Region of analytical continuation of $H_2$}
\label{fig:2}
\end{minipage}
\end{figure}
In \cite[Section 5.7]{28}, for each function in Horn's list a system of two partial differential equations is given which has this function as a solution. This follows a method already indicated by Horn in \cite{14}, \cite{12}.
A comprehensive list of the solutions of the system of partial differential equations \eqref{4.9.75a}, \eqref{4.9.75b} for the $F_2$ function was given by Olsson \cite[p.~1289, Table I]{9}. Apart from $F_2,\ F_3$ and $H_2$ there are solutions expressed in terms of functions $F_P,\ F_Q$ and $F_R$ which do not occur in Horn's list \cite{11}. Here $F_P$ and $F_Q$ have order three, while $F_R$ is not even of hypergeometric type. In his paper he also mentioned an $F_{PR}$ function \cite[(57)]{9}.
\
Olsson defines his $F_P$ function as:
\begin{equation}
F_P(a,b_1,b_2,c_1,c_2;x,y)=y^{-a}\sum\limits_{i=0}^{\infty }\sum\limits_{j=0}^{\infty }
\dfrac{(a)_{i+j}(a-c_2+1)_{i+j}(b_1)_{i}}{(a+b_2-c_2+1)_{i+j}(c_1)_{i}}\dfrac{1}{i!\,j!}
\left( \dfrac{x}{y}\right) ^{i}\left( \dfrac{y-1}{y}\right) ^{j}
\label{4.P.11}
\end{equation}
with convergence region: $\{|xy^{-1}|+|1-y^{-1}|<1\}$. In \cite{11} (in a different notation) he gives
\begin{equation}
F_P(a,b_1,b_2,c_1,c_2;x,y)=
y^{-a}\sum\limits_{i=0}^{\infty }\sum\limits_{j=0}^{\infty }
\dfrac{(a)_{i+j}(a-c_2+1)_i(b_2)_j}{(a+b_2-c_2+1)_{i+j}}\dfrac{(b_1)_i}{(c_1)_i}\dfrac{1}{i!\,j!}
x^{i}(1-y) ^{j}
\label{4.B.5}
\end{equation}
with convergence region $\{|x|<1,|y-1|<1\}$. In \cite[Theorem 4.1]{7} we showed that
$\{x<1\wedge y>0\}$ is a region in $\mathbb{R}^2$ where $F_P$ has unique analytical continuation.
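The series \eqref{4.B.5} is straightforward to evaluate numerically. As a consistency check (sample parameters arbitrary, with $|x|<1$, $|y-1|<1$), the following sketch compares it with its resummation via $(a)_{i+j}=(a)_i(a+i)_j$, in which the inner $j$-sum is a Gauss $_2F_1$:

```python
from math import gamma

def poch(a, n):
    return gamma(a + n) / gamma(a)

def FP(a, b1, b2, c1, c2, x, y, N=40):
    # truncated double series (4.B.5) of Olsson's F_P function
    g = a + b2 - c2 + 1
    return y**(-a) * sum(
        poch(a, i + j) * poch(a - c2 + 1, i) * poch(b2, j) * poch(b1, i)
        / (poch(g, i + j) * poch(c1, i) * gamma(i + 1) * gamma(j + 1))
        * x**i * (1 - y)**j
        for i in range(N) for j in range(N))

def hyp2f1(a, b, c, z, N=40):
    # truncated Gauss 2F1 series
    return sum(poch(a, i) * poch(b, i) / (poch(c, i) * gamma(i + 1)) * z**i
               for i in range(N))

a, b1, b2, c1, c2 = 0.8, 0.7, 1.1, 1.3, 0.9
x, y = 0.2, 0.9          # inside |x| < 1, |y-1| < 1

# resummation via (a)_{i+j} = (a)_i (a+i)_j (and the same split for g):
# the inner j-sum of (4.B.5) is 2F1(a+i, b2; g+i; 1-y)
g = a + b2 - c2 + 1
resummed = y**(-a) * sum(
    poch(a, i) * poch(a - c2 + 1, i) * poch(b1, i)
    / (poch(g, i) * poch(c1, i) * gamma(i + 1)) * x**i
    * hyp2f1(a + i, b2, g + i, 1 - y)
    for i in range(40))

assert abs(FP(a, b1, b2, c1, c2, x, y) - resummed) < 1e-10
```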
\
Olsson \cite{11}, \cite{9} defined his $F_Q$ function as
\begin{multline}
F_Q( a,b_1,b_2,c_1,c_2;x,y)=x^{-b_1}y^{b_1-a} \\
\times\sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty }\dfrac{(b_1+c_2-b_2-a)_{i-j}
(b_1)_i(b_1-c_1+1)_i}{(b_1-a+1)_{i-j}(b_1+c_2-a)_{i-j}}
\dfrac{1}{i!\,j!}\left( \dfrac{y}{x}\right)^i\left(\dfrac{1-y}{y}\right)^j
\label{4.B.5a}
\end{multline}
where the convergence conditions (by Horn's rule) are the same as for the function
$H_2\left(1-\dfrac{1}{y},-\dfrac{y}{x}\right)$ \cite[(3)]{9}. So the convergence conditions following \eqref{4.P.1a} are:
\[
\left\{(x,y) \in \mathbb{R}^{2}\ |\ |y-1|<|y|,\ |y-1|+|y|<|x|\right\}
\]
\begin{figure}[ht]
\centering
\includegraphics[width=5cm]{Figure4}
\caption{Convergence region of the $F_Q$ function}
\label{fig:3}
\end{figure}
Olsson gives formula \cite[(56)]{9}
\begin{multline}
F_P(a,b_1,b_2;c_1,c_2;x,y) =
F_{PR}(a,b_1,b_2;c_1,c_2;x,y)+ \\
+\dfrac{\Gamma(a+b_2-c_2+1)\Gamma(a+b_1-c_1-b_2)\Gamma(c_1)}
{\Gamma(a)\Gamma(b_1)\Gamma(a-c_2+1)}
x^{b_1-c_1}y^{-b_2}(1-x)^{c_1-b_1+b_2-a} \\
\times F_3\left(
\begin{array}{c}
1-b_1,b_2;c_1-b_1,b_2-c_2+1 \\
c_1-b_1+b_2-a+1%
\end{array}%
;\dfrac{x-1}{x},\dfrac{1-x}{y}\right)
\label{4.B.6}
\end{multline}
where $F_{PR}$ is the part of the function $F_P$ which is regular at $(1,1)$. For the function $F_{PR}$ he gave the formula \cite[(57)]{9}
\begin{multline}
F_{PR}(a,b_1,b_2;c_1,c_2;x,y)=
\dfrac{\Gamma(a+b_2-c_2+1)\Gamma(c_1-b_1+b_2-a)\Gamma(c_1)}
{\Gamma(a)\Gamma(c_1-b_1+b_2-c_2+1)\Gamma(c_1+b_2-a)} \\
\qquad\qquad\qquad\qquad \times\sum\limits_{i=0}^{\infty }\sum\limits_{j=0}^{\infty}
\dfrac{(a-c_2+1)_i(b_1)_i(b_2)_j}{(a+b_1-c_1-b_2+1)_i}\dfrac{1}{i!\,j!}(1-x)^i(1-y)^j\\
\times\hyp32{b_2-c_2+1,c_1-b_1+b_2-a-i,c_1-a-j}{c_1-b_1+b_2-c_2+1,c_1+b_2-a}{1}
\label{4.B.6a}
\end{multline}
where the $_3 F_2$ function is absolutely convergent if $\Re(a)>0$. He then referred to the paper \cite[(12)]{93} in which the convergence region of the double summation is given as
\[
|x-1|+|y-1|<1
\]
The region of convergence of the $F_3$ function is:
\[
\left\{\left|\dfrac{x-1}{x}\right|<1 \wedge \left|\dfrac{1-x}{y} \right|<1\right\}
\]
The point $(1,1)$ is inside both regions.
\begin{figure}[ht]
\centering
\parbox{5cm}{
\includegraphics[width=5cm]{Figure8}
\caption{Convergence region of the $F_{PR}$ function}
\label{fig:5}}
\qquad
\begin{minipage}{5cm}
\includegraphics[width=6cm]{Figure7}
\caption{Convergence region of the $F_3$ function}
\label{fig:4}
\end{minipage}
\end{figure}
\end{appendices}
\
\
\section{Introduction}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/transpose.pdf}
\end{center}
\caption{A schematic diagram of TransPose. \textbf{Below:} The inference pipeline. \textbf{Above:} Dependency areas for each predicted keypoint location. In this example, the person's left ankle is occluded by a dog. \textbf{\emph{Which exact image clues does the model use to infer the occluded joint?}} The attention map (red box) gives \emph{fine-grained evidence beyond intuition}: the pose estimator relies heavily on the image clues around the left ankle, the left upper leg, and the joints on the right leg to estimate the location of the occluded left ankle.}\vspace*{-0.1in}
\label{beginning fig}
\end{figure}
Deep convolutional neural networks have achieved impressive performance in the field of human pose estimation.
DeepPose~\cite{toshev2014deeppose} is the early classic method, directly regressing the numerical coordinate locations of keypoints.
Afterwards, fully convolutional networks~\cite{wei2016convolutional,long2015fully, newell2016stacked,yang2017learning,chen2018cascaded,papandreou2018PersonLabPP,xiao2018simple,sun2019hrnet} became the mainstream by predicting keypoint heatmaps, which \emph{implicitly} learn spatial dependencies between body parts.
Yet most prior works treat the deep CNN as a powerful black-box predictor and focus on improving the network structure; what exactly happens inside the models, or how they capture the spatial relationships between body parts, remains unclear.
However, from both the scientific and the practical standpoint, the interpretability of the model can help practitioners understand how the model associates structural variables to reach its final predictions and how a pose estimator handles various input images.
It can also help model developers with debugging, decision-making, and further improving the design.
For existing pose estimators, some issues make it challenging to figure out their decision processes.
\emph{(1) Deepness}. The CNN-based models, such as~\cite{wei2016convolutional,newell2016stacked,xiao2018simple,sun2019hrnet}, are usually very deep non-linear models, which hinders the interpretation of the function of each layer.
\emph{(2) Implicit relationships}. The global spatial relationships between body parts are implicitly encoded within the neuron activations and the weights of CNNs. It is not easy to decouple such relationships from large amounts of weights and activations in neural networks.
Moreover, solely visualizing the intermediate features, which have a large number of channels (e.g.\ 256 or 512 in the SimpleBaseline architecture~\cite{xiao2018simple}), provides little meaningful explanation.
\emph{(3) Limited working memory when inferring various images}. The desired explanations for the model predictions should be image-specific and fine-grained.
When inferring images, however, the \emph{static} convolution kernels are limited in their ability to represent variables due to the limited working memory~\cite{graves2014neural,graves2016hybrid, hochreiter1997long}.
So it is difficult for CNNs to capture image-specific dependencies, since their parameters are content-independent while the input image contents vary.
\emph{(4) Lack of tools.} Although there are already many visualization techniques based on gradient or attribution~\cite{erhan2009visualizing, zeiler2014visualizing, simonyan2013deep, selvaraju2017grad, fong2017interpretable, olah2017feature, zhou2016learning, bach2015pixel}, most of them focus on image classification rather than localization.
They aim to reveal class-specific input patterns or saliency maps rather than to explain the relationships between structural variables (\emph{e.g.}, the locations of keypoints). So far, how to develop explainable pose estimators remains challenging.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/attention_and_convolution.pdf}
\end{center}
\caption{CNN vs. Attention. \textbf{Left:} The receptive field enlarges in deeper convolutional layers. \textbf{Right:} A single self-attention layer can capture the pairwise relationship between any pair of locations.}
\label{conv_vs_attention}\vspace*{-0.1in}
\end{figure}
In this work, we aim to build a human pose estimator that can explicitly capture and reveal the image-specific spatial dependencies between keypoints, as shown in Fig.~\ref{beginning fig}. Due to the poor scaling property of convolution~\cite{ramachandran2019stand}, we argue that convolution has advantages in extracting low-level features, but deeply stacking convolutions at a high level to enlarge the receptive field is not an efficient way to capture global dependencies, and such depth increases the difficulty of interpreting CNN predictions. The Transformer architecture~\cite{vaswani2017attention} has a natural advantage over CNNs in drawing pairwise or higher-order interactions. As shown in Fig.~\ref{conv_vs_attention}, attention layers enable the model to capture interactions between any pair of locations, and the attention map acts as an immediate memory that stores these dependencies.
Based on these considerations, we propose a novel model called \emph{TransPose}, which uses convolutions to extract features at the low level and a Transformer to capture global dependencies at the high level. In detail, we flatten the feature maps as input to the Transformer and recover its output into 2D-structured heatmaps. In this design, the last attention layer in the Transformer acts as an \emph{aggregator}, which collects contributions from all image locations via attention scores and finally forms the maximum positions in the heatmaps. This type of keypoint localization via Transformer establishes a connection with the interpretability of Activation Maximization~\cite{erhan2009visualizing, simonyan2013deep} and extends it to the localization task. The resulting attention scores indicate which concrete image clues contribute significantly to the predicted locations. With such evidence, we can further analyze the behaviors of the model by examining the influence of different experimental variables. In summary, our contributions are as follows:
\begin{itemize}
\item We introduce Transformer for human pose estimation to predict heatmap-based keypoints positions, which can efficiently capture the spatial relationships between human body parts. \vspace*{-0.05in}
\item We demonstrate that our keypoint localization approach based on Transformer conforms to the interpretability of Activation Maximization~\cite{erhan2009visualizing, simonyan2013deep}. Qualitative analysis reveals the dependencies beyond intuition, which are image-specific and fine-grained.\vspace*{-0.05in}
\item TransPose models achieve competitive performance against state-of-the-art CNN-based models, with fewer parameters and faster speed. TransPose achieves 75.8 AP and 75.0 AP on the COCO validation set and test-dev set, with 73$\%$ fewer parameters and a 1.4$\times$ speedup over HRNet-W48. In addition, our model transfers very well to the MPII benchmark.
\end{itemize}
\section{Related Work}
\subsection{Human Pose Estimation}
Deep CNNs have achieved great success in human pose estimation. The inductive biases of the vanilla convolution kernel~\cite{lecun1998gradient, krizhevsky2012imagenet} are locality and translation equivariance, which prove efficient for extracting low-level image features. For human pose estimation, capturing global dependencies is crucial~\cite{ramakrishna2014pose,tompson2014joint,wei2016convolutional,papandreou2018PersonLabPP}, but the local nature of convolution makes it impossible to capture long-range interactions. A typical but brute-force solution is to enlarge the receptive field, \emph{e.g.} by downsampling the resolution, increasing the depth, or expanding the kernel size. Further, sophisticated strategies have been proposed, such as multi-scale fusion~\cite{newell2016stacked, pfister2015flowing, yang2017learning, chen2018cascaded, sun2019hrnet, chu2017multi, cheng2020higher}, stacking~\cite{wei2016convolutional,xiao2018simple, newell2016stacked}, or high-resolution representations~\cite{sun2019hrnet}; meanwhile, many successful architectures have emerged, such as CPM~\cite{wei2016convolutional}, Hourglass Network~\cite{newell2016stacked}, FPN~\cite{yang2017learning}, CPN~\cite{chen2018cascaded}, SimpleBaseline~\cite{xiao2018simple}, HRNet~\cite{sun2019hrnet}, RSN~\cite{cai2020learning}, and even automated architectures~\cite{yang2019pose,gong2020autopose,mcnally2020evopose2d, cheng2020scalenas, zhang2020efficientpose}. But as architectures become more complex, it is more challenging yet more imperative than ever to seek the interpretability of human pose estimation models. In contrast, our model can estimate human pose in an efficient and explicit way.
\subsection{Explainability}
Explainability means a better understanding for humans of how the model makes predictions. As surveyed by~\cite{samek2019explainable}, many works define the goal of explanation as determining what inputs are the most relevant to the prediction, which is also \emph{the goal we seek in this paper}. \cite{erhan2009visualizing, li2015heterogeneous} perform gradient descent in the input space to find out what input patterns can maximize a given unit. \cite{simonyan2013deep, fang2017rmpe} further consider generating image-specific class saliency maps. \cite{zeiler2014visualizing} uses DeConvNet to generate feature activities that show what convolutional layers have learned. Some pose estimation methods~\cite{li2015heterogeneous,zhang2018occluded} visualize the feature maps by choosing specific neurons or channels, but the results fail to reveal the spatial relationships between parts. \cite{Tang_2019_CVPR} estimates the probability distributions and mutual information between keypoints, yet only reveals statistical information rather than image-specific explanations. There are also works like Network Dissection~\cite{bau2017network}, Feature Visualization~\cite{olah2017feature}, Excitation Backprop~\cite{zhang2016top}, the LRP attribution method~\cite{bach2015pixel}, CAM~\cite{zhou2016learning}, and Grad-CAM~\cite{selvaraju2017grad}, which aim to explain the prediction of a CNN classifier or visualize the saliency area that significantly affects the class. Different from most prior works, we aim to reveal the fine-grained spatial dependencies between the body joint variables in the structural skeleton. Moreover, our model can directly exploit the attention patterns to holistically explain its predictions without the help of external tools. We also notice a recent paper~\cite{chefer2020transformer} that develops an LRP-based~\cite{bach2015pixel} method to compute relevance to explain the predictions of Transformers.
It applies this to the ViT model~\cite{dosovitskiy2020an} to visualize class-specific relevance maps, showing reasonable results. Unlike their goal, we focus on revealing which clues contribute to visual keypoint localization, and the attentions in our model provide clear evidence for its predictions.
It is worth noting that some works, such as CoordConv~\cite{liu2018an} and Zero Padding~\cite{islam2020how}, explain how a neural network predicts positions and stores position information by designing proxy tasks. We also conduct experiments to investigate the importance of the position embedding for predicting locations and its generalization to unseen input scales.
\subsection{Transformer}
The Transformer was proposed by Vaswani \emph{et al.}~\cite{vaswani2017attention} for the neural machine translation (NMT) task~\cite{sutskever2014sequence}. Large Transformer-based models such as BERT~\cite{devlin2018bert} and GPT-2~\cite{radford2019language} are often pre-trained on large amounts of data and then fine-tuned on smaller datasets. Recently, Vision Transformers and attention-augmented layers have emerged as new choices for vision tasks~\cite{parmar2018image, ramachandran2019stand, bello2019attention, dosovitskiy2020an,touvron2020deit, carion2020detr, chen2020pre, dai2020up, zhu2020deformable, wang2020end}. DETR~\cite{carion2020detr} directly predicts a set of object instances by introducing object queries. ViT~\cite{dosovitskiy2020an} pre-trains a pure Transformer on large data and then fine-tunes it on ImageNet for image classification. DeiT~\cite{touvron2020deit} introduces a distillation token to learn knowledge from a teacher. There are also works~\cite{epipolartransformers2020he, handtransformer:huang2020hand, metro:lin2021end} applying Transformers to 3D pose estimation. \cite{epipolartransformers2020he} fuses features from multi-view images via an attention mechanism. \cite{handtransformer:huang2020hand, metro:lin2021end} output 1D sequences composed of the joint/vertex coordinates of a pose. Unlike them, we use a Transformer to predict 2D heatmaps that represent the spatial distributions of keypoints for the 2D human pose estimation problem.
\section{Method}
Our goal is to build a model that can explicitly capture global dependencies between human body parts.
We first describe the model architecture. Then we show how it exploits self-attention to capture global interactions and establish a connection between our method and the principle of Activation Maximization.
\subsection{Architecture}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\linewidth]{figures/architecture-3.pdf}
\end{center}
\caption{The architecture. First, feature maps are extracted by a CNN backbone and flattened into a sequence. Next, the Transformer encoder layers iteratively capture dependencies across the sequence via query-key-value attention. Then, a simple head predicts the keypoint heatmaps. The attention maps in the Transformer can reveal which dependencies (regions or joints) significantly contribute to the activation-maximum positions in the predicted keypoint heatmaps.}\vspace*{-0.1in}
\label{architecture}
\end{figure*}
As illustrated in Fig.~\ref{architecture}, the TransPose model consists of three components: a CNN backbone to extract low-level image features; a Transformer encoder to capture long-range spatial interactions between feature vectors across locations; and a head to predict the keypoint heatmaps.
{\bf Backbone.} Many common CNNs can serve as the backbone. For fair comparison, we choose two typical CNN architectures: ResNet~\cite{he2016deep} and HRNet~\cite{sun2019hrnet}. We retain only the initial parts of the original ImageNet-pretrained CNNs to extract features from images. We name them ResNet-S and HRNet-S; their parameter counts are only about 5.5\% and 25\% of the original CNNs.
{\bf Transformer.} We follow the standard Transformer architecture~\cite{vaswani2017attention} as closely as possible, and only the encoder is employed: we believe that the pure heatmap prediction task is essentially an encoding task, which compresses the original image information into a compact positional representation of keypoints. Given an input image $I\in\mathbb{R}^{ 3\times H_I \times W_I}$, we assume the CNN backbone outputs a 2D spatially structured image feature $\mathbf{X}_f\in \mathbb{R}^{ d\times H \times W}$, whose feature dimension has been reduced to $d$ by a 1$\times$1 convolution. The image feature map is then flattened into a sequence $\mathbf{X}\in \mathbb{R}^{ L\times d}$, \emph{i.e.}, $L$ $d$-dimensional feature vectors with $L=H\times W$, which passes through $N$ attention layers and feed-forward networks (FFNs).
{\bf Head.} A head is attached to the Transformer encoder output $\mathbf{E}\in\mathbb{R}^{L\times d}$ to predict $K$ types of keypoint heatmaps $P\in\mathbb{R}^{ K\times H^* \times W^*}$, where $H^*,W^*=H_I/4,W_I/4$ by default. We first reshape $\mathbf{E}$ back to the $\mathbb{R}^{ d\times H \times W}$ shape. Then we mainly use a 1$\times$1 convolution to reduce the channel dimension of $\mathbf{E}$ from $d$ to $K$. If $H,W$ are not equal to $H^*,W^*$, an additional bilinear interpolation or a 4$\times$4 transposed convolution performs the upsampling before the 1$\times$1 convolution. Note that a 1$\times$1 convolution is exactly equivalent to a \emph{position-wise linear} transformation layer.
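As a minimal illustration (toy dimensions only, not the authors' implementation; the names \texttt{X\_f}, \texttt{W1}, and \texttt{heat} are hypothetical), the flattening of the feature map into the sequence and the equivalence between the head's 1$\times$1 convolution and a position-wise linear layer can be sketched as:

```python
# Toy sketch: flatten a d x H x W feature map into an L x d sequence
# (L = H * W), then apply a 1x1 convolution, which acts as the same
# K x d linear map at every sequence position.
d, H, W = 4, 3, 2
L = H * W                  # sequence length

# feature map X_f[c][y][x] -> sequence X[i][c] with i = y * W + x
X_f = [[[c * H * W + y * W + x for x in range(W)] for y in range(H)]
       for c in range(d)]
X = [[X_f[c][i // W][i % W] for c in range(d)] for i in range(L)]

# 1x1 conv with weights W1[k][c] == position-wise linear layer
K = 2                      # number of keypoint types
W1 = [[1.0 if k == c % K else 0.0 for c in range(d)] for k in range(K)]
heat = [[sum(W1[k][c] * X[i][c] for c in range(d)) for k in range(K)]
        for i in range(L)]  # one K-vector of heatmap values per location
```

Reshaping \texttt{heat} back to $K\times H\times W$ recovers the heatmap layout; the per-location weights \texttt{W1} are shared across all $L$ positions, which is exactly what makes the 1$\times$1 convolution position-wise.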
\subsection{Resolution Settings}
Because the computational complexity of each self-attention layer is $O\left((HW)^{2} \cdot d\right)$, we restrict the attention layers to operate at a resolution with an $r\times$ downsampling rate w.r.t. the original input, \emph{i.e.}, $H,W =H_I /r,W_I /r$. In common human pose estimation architectures~\cite{wei2016convolutional, newell2016stacked, xiao2018simple, sun2019hrnet}, $32\times$ downsampling is usually adopted as a standard setting to obtain a very low-resolution map containing global information. In contrast, we adopt $r=8$ and $r=4$ for ResNet-S and HRNet-S, respectively, which yields a better trade-off between the memory footprint of the attention layers and the loss of detailed information. As a result, our model directly captures long-range interactions at a higher resolution while preserving fine-grained local feature information.
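The quadratic cost can be made concrete with a quick back-of-the-envelope calculation (a sketch that counts only the $L\times L$ attention scores of one layer and ignores projections and constant factors; \texttt{attn\_cost} is an illustrative helper, not part of the model):

```python
# Compare self-attention sequence length and ~O(L^2 * d) cost for
# different downsampling rates r at the 256 x 192 input resolution.
def attn_cost(H_I, W_I, r, d):
    H, W = H_I // r, W_I // r
    L = H * W
    return L, L * L * d   # sequence length, rough attention cost

for r in (4, 8, 32):
    L, cost = attn_cost(256, 192, r, 256)
    print(f"r={r}: L={L}, ~cost={cost:,}")
```

At $r=8$ the sequence length is 768 and at $r=4$ it is 3072 (the lengths used by TP-R and TP-H), versus only 48 at the conventional $32\times$ downsampling, which is why the choice of $r$ dominates the memory footprint.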
\begin{table*}\scriptsize
\begin{center}
\renewcommand{\arraystretch}{0.85}
\setlength{\tabcolsep}{2.0mm}
\begin{tabular}{l|ccc|cccc|c}
\toprule
\textbf{Model Name} & \textbf{Backbone} &\textbf{Downsampling for Attention} & \textbf{Upsampling} &\#\textbf{Layers} &\textbf{Heads}& \textbf{d }& \textbf{h }& \#\textbf{Params} \\
\midrule
TransPose-R-A3* & ResNet-Small* & 1/8 & Bilinear Interpolation &3 & 8 & 256 & 512 &5.0M \\
TransPose-R-A3 & ResNet-Small & 1/8 &Deconvolution &3 & 8 & 256 & 1024 &5.2M \\
TransPose-R-A4 & ResNet-Small & 1/8 &Deconvolution &4 & 8 & 256 & 1024 &6.0M \\
TransPose-H-S & HRNet-Small-W32 & 1/4 & None& 4& 1 & 64 & 128 & 8.0M \\
TransPose-H-A4 & HRNet-Small-W48 & 1/4 &None& 4& 1 & 96 & 192 & 17.3M \\
TransPose-H-A6 & HRNet-Small-W48 & 1/4 &None& 6& 1 & 96 & 192 & 17.5M \\
\bottomrule
\end{tabular}
\end{center}
\caption{Architecture configurations for different TransPose models. More details about the backbones are described in supplementary.}
\label{architecture configurations}
\end{table*}
\begin{table*}\footnotesize
\begin{center}
\setlength{\tabcolsep}{4.5mm}
\renewcommand{\arraystretch}{0.9}
\begin{tabular}{l|c|ll|l|c|l}
\toprule[0.11em]
\textbf{Method} &\textbf{Input Size} &\textbf{AP} & \textbf{AR}& \#\textbf{Params} &\textbf{FLOPs} & \textbf{FPS} \\
\midrule
SimpleBaseline-Res50~\cite{xiao2018simple} &256$\times$192 &70.4& 76.3 &34.0M &8.9G & 114\\
SimpleBaseline-Res101~\cite{xiao2018simple} &256$\times$192 &71.4& 76.3 &53.0M &12.4G & 92\\
SimpleBaseline-Res152~\cite{xiao2018simple} &256$\times$192 &72.0& 77.8 &68.6M &35.3G & 62\\
\hline
TransPose-R-A3* & 256$\times$192 & 71.5 & 76.9&5.0M ({\color{black}$\downarrow$85\%}) &5.4G & 137 ({\color{black}$\uparrow$20\%})\\
TransPose-R-A3 & 256$\times$192 & 71.7 & 77.1&5.2M ({\color{black}$\downarrow$85\%}) &8.0G & 141 ({\color{black}$\uparrow$23\%})\\
TransPose-R-A4 & 256$\times$192 & \textbf{72.6} & \textbf{78.0}& 6.0M ({\color{black}$\downarrow$82\%}) &8.9G &138 ({\color{black}$\uparrow$21\%})\\
\midrule
HRNet-W32~\cite{sun2019hrnet} &256$\times$192 &74.4& 79.8 &28.5M &7.2G & 28\\
HRNet-W48~\cite{sun2019hrnet} &256$\times$192 &75.1& 80.4 &63.6M &14.6G & 27\\
\hline
TransPose-H-S &256$\times$192 & 74.2 & 78.0& 8.0M ({\color{black}$\downarrow$72\%})& 10.2G & 45 ({\color{black}$\uparrow$61\%})\\
TransPose-H-A4 &256$\times$192 & 75.3 & 80.3& 17.3M ({\color{black}$\downarrow$73\%})& 17.5G & 41 ({\color{black}$\uparrow$52\%})\\
TransPose-H-A6 &256$\times$192 & \textbf{75.8} & \textbf{80.8}& 17.5M ($\downarrow$73\%)& 21.8G & 38 ({\color{black}$\uparrow$41\%})\\
\bottomrule[0.1em]
\end{tabular}
\end{center}
\caption{Results on COCO validation set, all provided with the same detected human boxes. TransPose-R-* and TransPose-H-* achieve competitive results to SimpleBaseline and HRNet, with fewer parameters and faster speeds. The reported FLOPs of SimpleBaseline and HRNet only include the convolution and linear layers.}\vspace*{-0.1in}
\label{state-of-the-art}
\end{table*}
\subsection{Attentions are the Dependencies of Localized Keypoints}
\label{attention mechanism}
{\bf Self-attention mechanism.} The core mechanism of the Transformer~\cite{vaswani2017attention} is multi-head self-attention. It first projects an input sequence $\mathbf{X} \in \mathbb{R}^{ L\times d}$ into queries $\mathbf{Q}\in \mathbb{R}^{ L\times d}$, keys $\mathbf{K}\in \mathbb{R}^{ L\times d}$, and values $\mathbf{V}\in \mathbb{R}^{ L\times d}$ by three matrices $\mathbf{W}_q,\mathbf{W}_k,\mathbf{W}_v \in \mathbb{R}^{d\times d}$. Then, the attention score matrix\footnote{Here we consider single-head self-attention. For multi-head self-attention, the attention matrix is the average of the attention maps over all heads.} $\mathbf{A}\in\mathbb{R}^{ L\times L}$ is computed by:
\begin{equation}
\mathbf{A}=\operatorname{softmax}\left(\frac{\mathbf{Q} \mathbf{K}^\top}{\sqrt{d}}\right).
\end{equation}
Each query $\boldsymbol{q}_i\in\mathbb{R}^d$ of the token $\boldsymbol{x}_i\in \mathbb{R}^d$ (i.e., the feature vector at location $i$) computes similarities with all keys to obtain a weight vector $\mathbf{w}_i=\mathbf{A}_{i,:} \in \mathbb{R}^{1\times L}$, which determines how much dependency is drawn from each token in the input sequence. An increment is then computed as a linear combination of the rows of the value matrix $\mathbf{V}$, weighted by the corresponding entries of $\mathbf{w}_i$, and added to $\boldsymbol{x}_i$. In this way, the attention maps can be seen as \emph{dynamic weights}, determined by the specific image content, that reweight the information flow in the forward propagation.
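A dependency-free sketch of the single-head computation defined above (toy identity projection matrices for clarity; \texttt{softmax}, \texttt{matmul}, and \texttt{self\_attention} are illustrative helpers, not the authors' code):

```python
import math

def softmax(row):
    # numerically stable softmax over one row of scores
    m = max(row)
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def self_attention(X, Wq, Wk, Wv):
    # X: L x d sequence; Wq, Wk, Wv: d x d projection matrices
    d = len(X[0])
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    scores = [[sum(q * k for q, k in zip(Q[i], K[j])) / math.sqrt(d)
               for j in range(len(X))] for i in range(len(X))]
    A = [softmax(row) for row in scores]   # L x L attention map
    return A, matmul(A, V)                 # weights and aggregated values
```

Each row $\mathbf{A}_{i,:}$ sums to 1, so the output at position $i$ is a convex combination of the value vectors, weighted by content similarity.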
Self-attention captures and reveals how much contribution the predictions aggregate from each image location. Such contributions can be reflected by the gradient~\cite{simonyan2013deep, bach2015pixel, selvaraju2017grad}. Therefore, we concretely analyze how $\boldsymbol{x}_j$ at image/sequence location $j$ affects the activation $\boldsymbol{h}_i$ at location $i$ of the predicted keypoint heatmaps, by computing the derivative of $\boldsymbol{h}_i\in \mathbb{R}^{K}$ ($K$ types of keypoints) w.r.t. $\boldsymbol{x}_j$ at location $j$ of the input sequence of the last attention layer. We further regard $G:=\frac{\partial \boldsymbol{h}_i}{\partial \boldsymbol{x}_j}$ as a function of a given attention score $\mathbf{A}_{i,j}$ and obtain:
\begin{equation}\label{grad}
\begin{aligned}
G\left(\mathbf{A}_{i,j} \right)
&\approx\mathbf{A}_{i,j}\cdot\mathbf{W}_f
\cdot\mathbf{W}_v^\top+\mathbf{W}_f=\mathbf{A}_{i,j}\cdot\mathbf{K} + \mathbf{B}
\end{aligned}
\end{equation}
where $\mathbf{K},\mathbf{B}\in\mathbb{R}^{K\times d}$ are \emph{static weights} (fixed at inference time) shared across all image locations. The derivation of Eq.~\ref{grad} is given in the supplementary. We can see that $G$ is approximately linear in $\mathbf{A}_{i,j}$, \emph{i.e.}, the degree to which a location contributes to the prediction $\boldsymbol{h}_i$ directly depends on its attention score.
In particular, the last attention layer acts as \emph{an aggregator}: it collects contributions from all image locations according to the attentions and forms the maximum activations in the predicted keypoint heatmaps. Although the layers in the FFN and the head cannot be ignored, they are \emph{position-wise}: they approximately linearly transform the contributions from all locations by the same transformation, without changing their relative proportions.
\label{paper::grad}\label{paper::definition}
{\bf The activation-maximum positions are the keypoint locations.} The interpretability of Activation Maximization (AM)~\cite{erhan2009visualizing,simonyan2013deep} lies in the fact that the input region which maximizes a given neuron activation can explain what this activated neuron is looking for.
In this task, the learning target of TransPose is for the neuron activation $h_{i^*}$ at location $i^*$ of the heatmap to be maximally activated, where $i^*$ is the ground-truth location of a keypoint:
\begin{equation}
\quad\theta^*=\arg\max_{\theta}h_{i^*}(\theta,I).
\end{equation}
Assuming the model has been optimized to parameters $\theta^*$ and it predicts the location of a particular keypoint as $i$ (the maximum position in a heatmap), why the model makes such a prediction can be explained by the set of locations $\mathbf{J}$ whose elements $j$ have high attention scores ($\geq\delta$) with $i$: these are the dependencies that significantly contribute to the prediction. They can be found by:
\begin{equation}
\mathbf{J}=\left\lbrace j| \mathbf{A}_{i,j}\left( \theta^*,I\right) \geq\delta\right\rbrace,
\end{equation}
where $\mathbf{A}\in\mathbb{R}^{ L\times L}$ is the attention map of the last attention layer and is itself a function of $\theta^*$ and $I$, \emph{i.e.}, $\mathbf{A}=\mathbf{A}\left( \theta^*,I\right)$. Given an image $I$ and a query location $i$, $\mathbf{A}_{i,:}$ can reveal which dependencies a predicted location $i$ most relies on; we define it as the \textbf{\emph{dependency area}}. $\mathbf{A}_{:,j}$ can reveal which area a location $j$ mostly affects; we define it as the \emph{affected area}.
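Both areas can be read directly off the last attention map. A minimal sketch with a toy $3\times3$ map (\texttt{dependency\_area} and \texttt{affected\_area} are illustrative helpers; $\delta$ is the visualization threshold, e.g. 0.00075 in our figures):

```python
# Given the last-layer attention map A (L x L), the dependency area of a
# predicted location i is the set of columns j with A[i][j] >= delta,
# and the affected area of a location j is the set of rows i with
# A[i][j] >= delta.
def dependency_area(A, i, delta):
    return [j for j, a in enumerate(A[i]) if a >= delta]      # row A_{i,:}

def affected_area(A, j, delta):
    return [i for i, row in enumerate(A) if row[j] >= delta]  # column A_{:,j}

A = [[0.60, 0.30, 0.10],
     [0.20, 0.70, 0.10],
     [0.05, 0.05, 0.90]]
```

For example, with $\delta=0.25$, location 0 in this toy map depends on locations 0 and 1, and location 1 affects locations 0 and 1.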
Traditional CNN-based methods also use heatmap activations as keypoint locations, but one cannot directly find explainable patterns for their predictions due to the depth and high non-linearity of deep CNNs. AM-based methods~\cite{erhan2009visualizing, li2015heterogeneous, zeiler2014visualizing, simonyan2013deep} may provide insights, but they require extra optimization to learn the explainable patterns that the convolutional kernels prefer to look for. Different from them, we extend AM to heatmap-based localization via the Transformer, and we need no extra optimization because the optimization has been implicitly accomplished during training, i.e., $\mathbf{A}=\mathbf{A}\left( \theta^*,I\right)$. The defined \textbf{\emph{dependency area}} is the pattern we seek, which shows image-specific and keypoint-specific dependencies.
\section{Experiments}
{\bf Datasets.} We evaluate our models on the COCO~\cite{lin2014microsoft} and MPII~\cite{andriluka2014} datasets. COCO contains 200k in-the-wild images and 250k person instances; train2017 consists of 57k images and 150k person instances, val2017 contains 5k images, and test-dev2017 consists of 20k images. In Sec.~\ref{transfer-mpii}, we report experiments on MPII~\cite{andriluka2014}. We adopt the standard evaluation metrics of these benchmarks.
{\bf Technical details.} We follow the top-down human pose estimation paradigm. The training samples are cropped images containing a single person, resized to $256\times192$ resolution. We use the same training strategies, data augmentation, and detected person boxes as~\cite{sun2019hrnet}. We also adopt the coordinate decoding strategy proposed by~\cite{zhang2020distribution} to reduce the quantisation error when decoding from downscaled heatmaps. The feed-forward layers are trained with 0.1 dropout and the ReLU activation function. We name the models based on ResNet-S and HRNet-S \emph{TransPose-R} and \emph{TransPose-H}, abbreviated as \emph{\textbf{TP-R}} and \emph{\textbf{TP-H}}. The architecture details are reported in Tab.~\ref{architecture configurations}. We use the Adam optimizer for all models; training runs for 230 epochs for TP-R and 240 for TP-H, with cosine annealing learning rate decay. The learning rates for the TP-R-A4 and TP-H-A6 models decay from 0.0001 to 0.00001, and we recommend this schedule for all models. Considering compatibility with the backbone and memory consumption, we adjust the hyperparameters of the Transformer encoder to keep the model capacity moderate. In addition, we use a 2D sine position embedding as the default position embedding; it is described in the supplementary.
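The decay from 0.0001 to 0.00001 can be sketched with the standard cosine annealing rule (an assumed form for illustration; the exact schedule may differ in details such as warm-up, and \texttt{cosine\_lr} is a hypothetical helper):

```python
import math

# Cosine annealing from lr_max down to lr_min over total_epochs.
def cosine_lr(epoch, total_epochs, lr_max=1e-4, lr_min=1e-5):
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * epoch / total_epochs))
```

The schedule starts at \texttt{lr\_max}, decreases slowly at first, drops fastest mid-training, and flattens out at \texttt{lr\_min} by the final epoch (230 for TP-R).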
\subsection{Results on COCO keypoint detection task}
We compare TransPose with SimpleBaseline, HRNet, and DARK~\cite{zhang2020distribution}. Specifically, we trained DARK-Res50 on our machines using the official code with TransPose-R-A4's data augmentation, achieving 72.0 AP; with exactly the same data augmentation and the long training schedule of TransPose-R-A4, we obtain 72.1 AP (+0.1 AP). The other results shown in Tab.~\ref{state-of-the-art} come from the original papers. We test all models on a single NVIDIA 2080Ti GPU under the same experimental conditions to compute the average FPS. At the 256$\times$192 input resolution, TransPose-R-A4 and TransPose-H-A6 clearly outperform SimpleBaseline-Res152 (+0.6 AP)~\cite{xiao2018simple}, HRNet-W48 (+0.7 AP)~\cite{sun2019hrnet}, and DARK-HRNet~\cite{zhang2020distribution} (+0.2 AP), with significantly fewer model parameters and faster speeds. Tab.~\ref{coco-test} shows the results on the COCO test set.
\begin{table}\scriptsize
\centering
\label{table:coco_test_dev}
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{0.2mm}
\begin{tabular}{c|c|ccc|ccccc}
\toprule[0.1em]
\textbf{Method} & \textbf{Input size} & \#\textbf{Params} & \textbf{FLOPs}& \textbf{FPS}&
$\textbf{AP}$ & $\textbf{AP}_{\text{0.5}}$ & $\textbf{AP}_{\text{0.75}}$ & $\textbf{AP}_{\text{M}}$ & $\textbf{AP}_{\text{L}}$ \\%& $\textbf{AR}$\\
\midrule
G-RMI~\cite{papandreou2017towards} & 353$\times$257 & 42.6M & 57G&- &64.9 & 85.5&71.3&62.3&70.0\\%&69.7\\
Integral~\cite{sun2018integral} & 256$\times$256 &45.0M &11.0G&- &67.8 & 88.2&74.8&63.9&74.0\\%&-\\
CPN~\cite{chen2018cascaded}& 384$\times$288&58.8M&29.2G&-
& 72.1 & 91.4&80.0&68.7&77.2\\%&78.5\\
RMPE~\cite{fang2017rmpe} & 320$\times$256 &28.1M &26.7G&-
&72.3 & 89.2&79.1&68.0&78.6\\%&-\\
SimpleBaseline~\cite{xiao2018simple} &384$\times$288 &68.6M & 35.6G&-
&73.7 & 91.9&81.1&70.3&80.0\\%&79.0\\
%
%
%
HRNet-W32~\cite{sun2019hrnet} & 384$\times$288 &28.5M& 16.0G&26&74.9&92.5&82.8&71.3&80.9\\%&80.1\\
HRNet-W48~\cite{sun2019hrnet} & 256$\times$192 &63.6M&14.6G&27& 74.2&92.4&82.4&70.9&79.7\\%&79.5\\
HRNet-W48~\cite{sun2019hrnet} & 384$\times$288 &63.6M&32.9G&25& 75.5&92.5&83.3&71.9&81.5\\%&80.5\\
DarkPose~\cite{zhang2020distribution} & 384$\times$288 &63.6M&32.9G&25& 76.2&92.5&83.6&72.5&82.4\\%&81.1\\
\midrule
\textbf{TransPose-H-S} & 256$\times$192 &8.0M&10.2G&45& 73.4&91.6&81.1&70.1&79.3\\%&78.6\\
\textbf{TransPose-H-A4} & 256$\times$192 &17.3M&17.5G&41& 74.7&91.9&82.2&71.4&80.7\\%&79.9\\
\textbf{TransPose-H-A6} & 256$\times$192 &17.5M&21.8G&38& 75.0&92.2&82.3&71.3&81.1\\%&80.1\\
\bottomrule[0.1em]
\end{tabular}
\caption{Comparisons with state-of-the-art CNN-based models on the COCO test-dev set. Tested at the smaller input resolution of 256$\times$192, our models achieve performances comparable with the others.}\vspace*{-0.01in}
\label{coco-test}
\end{table}
\label{position embedding}
\begin{table}[h]\footnotesize
\begin{center}
\setlength{\tabcolsep}{2mm}
\renewcommand{\arraystretch}{0.6}
\begin{tabular}{c|ccc}
\toprule[0.1em]
\textbf{Position Embedding} & \#\textbf{Params} & \textbf{FLOPs} &\textbf{AP} \\
\midrule
\ding{55}& 4.999M & 7.975G& 70.4 \\
Learnable & 5.195M & 7.976G& 70.9 \\
2D Sine (Fixed)& 5.195M&7.976G&71.7 \\
\bottomrule[0.1em]
\end{tabular}
\end{center}
\caption{Results for different position embedding schemes for TransPose models. The input size is $256\times192$.}\vspace*{-0.01in}
\label{position embedding tab}
\end{table}
\subsection{Transfer to MPII benchmark}
\label{transfer-mpii}
Typical pose estimation methods train and evaluate their models separately on COCO and MPII~\cite{andriluka2014}. Motivated by the success of pre-training in NLP and the recent ViT~\cite{dosovitskiy2020an}, we transfer our pre-trained models to MPII. We replace the final layer of the pre-trained TransPose model with a uniformly initialized $d \times 16$ linear layer for MPII. When fine-tuning, the learning rates for the pre-trained and final layers are 1$e$-5 and 1$e$-4, with decay.
\begin{figure}
\hspace{-0.5em} \vspace{-0.02in}
\includegraphics[width=1\linewidth]{figures/mpii6.eps}
\vspace*{-0.02in}
\caption{Performances on validation set when fine-tuning models (listed in Tab.~\ref{table::mpii-comparisons}) with different epochs on MPII training set.} \vspace{-0.01in}
\label{figure::mpii-finetune}
\end{figure}
\begin{table}\scriptsize
\centering
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{0.9mm}
\begin{tabular}{c|ccllc}
\toprule[0.1em]
\textbf{Models} &\textbf{ Strategy} &\textbf{ Epochs} &\textbf{ Mean@0.5}&\textbf{ Mean@0.1}&\#\textbf{Params}\\
\midrule
\multirow{2}{*}{DARK-HRNet~\cite{zhang2020distribution}} & $\circlearrowleft$ & 210 & 90.6 & 42.0 & 28.5M\\
& $\Rightarrow$ & 100 & 92.0 (+1.4) & 43.6 (+1.6) & 28.5M\\
\midrule
\multirow{2}{*}{TransPose-R-A4} &$\circlearrowleft$ & 230 & 89.3 & 38.6 & 6.0M \\
& $\Rightarrow\uparrow$ & 100 & 92.0 (\textbf{+2.7}) & 44.1 (\textbf{+5.5}) & 6.0M\\
\midrule
\multirow{2}{*}{TransPose-H-A6} & $\circlearrowleft$ & 230 & 90.3 & 41.6 & 17.5M\\
& $\Rightarrow$ & 100 & \textbf{92.3} (+2.0) & \textbf{44.4} (+2.8) & 17.5M\\
\bottomrule[0.1em]
\end{tabular}
\caption{Fine-tuning and full-training performances on the MPII validation set. $\circlearrowleft$ means full-training on MPII without COCO pre-training. $\Rightarrow$ means transferring the pretrained model and fine-tuning on MPII; adding $\uparrow$ means fine-tuning on MPII at input resolution 384$\times$384, otherwise 256$\times$256.}\vspace*{-0.1in}
\label{table::mpii-comparisons}
\end{table}
For comparison, we fine-tune the pre-trained DARK-HRNet on MPII with the same settings, and also train these models on MPII with standard full-training settings. As shown in Tab.~\ref{table::mpii-comparisons} and Fig.~\ref{figure::mpii-finetune}, the results are interesting: even with longer full-training epochs, the fully trained models perform worse than the fine-tuned ones; and even with a larger model capacity (28.5M), the improvement brought by pre-training DARK-HRNet (+1.4) is smaller than that brought by pre-training TransPose (+2.0). With 256$\times$256 input resolution and fine-tuning on the MPII train and val sets, the best result on the MPII test set, yielded by TransPose-H-A6, is 93.5\% accuracy, as shown in Tab.~\ref{table::mpii-test}. These results show that pre-training and fine-tuning can significantly reduce training costs and improve performance, particularly for pre-trained TransPose models.
\begin{table}\scriptsize
\centering
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{1mm}
\begin{tabular}{c|ccc}
\toprule[0.1em]
\textbf{Method} & \textbf{Input size} & \textbf{Training Data} & \textbf{Mean@0.5}\\
\midrule
Belagiannis \& Zisserman, FG'17 ~\cite{belagiannis2017recurrent}& 248$\times$248& COCO+MPII$\dagger$ &88.1 \\
Su et al., arXiv'19~\cite{su2019cascade} & 384$\times$384 & HSSK+MPII$\ddagger$ & 93.9 \\
Bulat et al., FG'20~\cite{bulat2020toward} & 256$\times$256 & HSSK+MPII$\ddagger$ & 94.1 \\
Bin et al., ECCV'20~\cite{bin2020adversarial} & 384$\times$384 & HSSK+MPII$\ddagger$ & 94.1 \\
\midrule
Ours (TransPose-H-A6) & 256$\times$256 & COCO+MPII$\dagger$ & 93.5 \\
\bottomrule[0.1em]
\end{tabular}
\caption{Results on MPII benchmark test set. $\dagger$ means pre-training on COCO dataset and fine-tuning on MPII dataset. $\ddagger$ means training both on MPII and HSSK datasets.}\vspace*{-0.1in}
\label{table::mpii-test}
\end{table}
{\bf Discussion.} The pre-training and fine-tuning for Transformer-based models have shown favorable results in NLP~\cite{devlin2018bert,radford2019language} and recent vision models~\cite{dosovitskiy2020an,chen2020pre,dai2020up}.
Our initial results on MPII also suggest that training Transformer-based models on large-scale pose-related data may be a promising way to learn powerful and robust representation for human pose estimation and its downstream tasks.
\subsection{Ablations}
{\bf The importance of position embedding.} Without a position embedding, the 2D spatial structure information is lost in the Transformer. To explore its importance, we conduct experiments on TransPose-R-A3 models with three position embedding strategies: 2D sine position embedding, learnable position embedding, and no position embedding. As expected, the models with a position embedding perform better, particularly with the 2D sine position embedding, as shown in Tab.~\ref{position embedding tab}. Interestingly, however, TransPose without any position embedding loses only 1.3 AP, which suggests that the explicit 2D structure becomes less critical. See the supplementary for more details.
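The 2D sine position embedding can be sketched as follows (an assumption for illustration: the common DETR-style construction, where half the channels encode the row index and half the column index, each with interleaved sine/cosine terms; our exact variant is given in the supplementary, and \texttt{sine\_embed\_1d} is a hypothetical helper):

```python
import math

# 1D sinusoidal embedding of a scalar position into num_feats channels.
def sine_embed_1d(pos, num_feats, temperature=10000.0):
    out = []
    for i in range(num_feats // 2):
        freq = temperature ** (2 * i / num_feats)
        out += [math.sin(pos / freq), math.cos(pos / freq)]
    return out

# One d-dimensional embedding per location of an H x W grid: the first
# d/2 channels encode the row (y) index, the last d/2 the column (x).
def position_embedding_2d(H, W, d):
    half = d // 2
    return [sine_embed_1d(i // W, half) + sine_embed_1d(i % W, half)
            for i in range(H * W)]
```

Because the embedding is a fixed function of the grid coordinates, it can be recomputed for any $H\times W$, which is relevant to the resolution-generalization experiments below.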
{\bf Scaling the size of the Transformer encoder.} We study how performance scales with the size of the Transformer encoder, as shown in Tab.~\ref{layers}. For TransPose-R models, as the number of layers increases to 6, the performance improvements gradually saturate or even degenerate. We have not observed such a phenomenon for TransPose-H models: scaling the Transformer clearly improves the performance of TransPose-H.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{figures/generalization.pdf}
\end{center}\vspace*{-0.15in}
\caption{Performances on unseen input resolutions. TransPose models w/ Position Embedding generalize better.} \vspace{-0.1in}
\label{generalization_fig}
\end{figure}
\label{unseen}
{\bf Position embedding helps generalization to unseen input resolutions.} The top-down paradigm scales all cropped images to a fixed size. But even with a fixed input size, or in the bottom-up paradigm, the body size in the input varies, so robustness to different scales is important. We therefore design an extreme experiment to test generalization: we evaluate SimpleBaseline-Res50-Dark and TransPose-R-A3 models on unseen 128$\times$96, 384$\times$288, and 512$\times$388 input resolutions, while all models have only been trained at the 256$\times$192 size. Interestingly, the results in Fig.~\ref{generalization_fig} show that SimpleBaseline and TransPose-R without a position embedding collapse markedly on unseen resolutions, particularly 128$\times$96, while TransPose-R with a learnable or 2D sine position embedding generalizes significantly better, especially with the 2D sine position embedding.
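The resolution dependence of the sequence length is one reason a fixed-size learnable embedding table does not transfer directly (a sketch, assuming the $r=8$ downsampling rate of TP-R; \texttt{seq\_len} is an illustrative helper):

```python
# Sequence length as a function of the input resolution at rate r = 8.
# A learnable embedding table is sized for the training length, whereas
# a 2D sine embedding can simply be recomputed for any grid.
def seq_len(H_I, W_I, r=8):
    return (H_I // r) * (W_I // r)

train_len = seq_len(256, 192)   # length the learnable table was sized for
for H_I, W_I in [(128, 96), (384, 288), (512, 388)]:
    L = seq_len(H_I, W_I)
    print(f"{H_I}x{W_I}: L={L}, matches training length: {L == train_len}")
```

None of the unseen test resolutions matches the training sequence length of 768, so the fixed embedding must be interpolated or otherwise adapted, while the functional 2D sine embedding extends naturally.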
{\bf Discussion.} We mainly trained our models at the 256$\times$192 input size, which yields sequence lengths of 768 and 3072 for the Transformers in the TP-R and TP-H models, respectively. Higher input resolutions such as 384$\times$288 would bring prohibitively expensive computational costs in the self-attention layers of our current models due to the quadratic complexity.
\begin{table}\scriptsize
\centering
\renewcommand{\arraystretch}{0.6}
\setlength{\tabcolsep}{1mm}
\begin{tabular}{ccccccccc}
\toprule[0.1em]
\textbf{Model} & \#\textbf{Layers} & $d$ & $h$ & \#\textbf{Params} & \textbf{FLOPs}& \textbf{FPS}& \textbf{AP} &\textbf{ AR}\\
\midrule
\multirow{5}{*}{TransPose-R}&\cellcolor[gray]{0.99}2& 256& 1024& 4.4M & 7.0G & 174 & \cellcolor[gray]{0.99}69.6 & \cellcolor[gray]{0.99}75.0\\
&\cellcolor[gray]{0.96}3& 256& 1024 & 5.2M & 8.0G & 141 & \cellcolor[gray]{0.96}71.7 & \cellcolor[gray]{0.96}77.1 \\
&\cellcolor[gray]{0.93}4& 256& 1024 & 6.0M & 8.9G & 138 & \cellcolor[gray]{0.87}72.6 & \cellcolor[gray]{0.87}78.0 \\
&\cellcolor[gray]{0.90}5& 256& 1024 & 6.8M & 9.9G & 126 & \cellcolor[gray]{0.93}72.2 & \cellcolor[gray]{0.93}77.6 \\
&\cellcolor[gray]{0.87}6& 256& 1024 & 7.6M & 10.8G & 109 & \cellcolor[gray]{0.93}72.2 & \cellcolor[gray]{0.93}77.5 \\
\midrule
\multirow{5}{*}{TransPose-H}&4& \cellcolor{cyan!5}64& \cellcolor{cyan!5}128 & 17.0M & 14.6G & - & \cellcolor{cyan!5}75.1 & \cellcolor{cyan!5}80.1 \\
&4& \cellcolor{cyan!10}192& \cellcolor{cyan!10}384 & 18.5M & 27.0G & - & \cellcolor{cyan!10}75.4 & \cellcolor{cyan!10}80.5 \\
&\cellcolor{blue!5}4& 96& 192 & 17.3M & 17.5G & 41 & \cellcolor{blue!5}75.3 & \cellcolor{blue!5}80.3 \\
&\cellcolor{blue!10}5& 96& 192 & 17.4M & 19.7G & 40 & \cellcolor{blue!10}75.6 & \cellcolor{blue!10}80.6 \\
&\cellcolor{blue!15}6& 96& 192 & 17.5M & 21.8G & 38 & \cellcolor{blue!15}75.8 & \cellcolor{blue!15}80.8 \\
\bottomrule[0.1em]
\end{tabular}
\caption{Ablation study on the size of Transformer Encoder. \#Layers, $d$ and $h$ are the number of encoder layers, the dimensions $d$, and the number of hidden units of FFN. }\vspace*{-0.01in}
\label{layers}
\end{table}
\begin{figure*}[t]
\centering
\hspace{-0.5em}
\subfigure[\textbf{TP-R-A4:} predicted keypoints and their \emph{dependency areas} for input \textbf{A}.]{
\includegraphics[width=0.48\linewidth]{figures_pdf/final_attention_map_249_T-R-A4.pdf}
\label{tpr-a}
}\hspace{1.em}\vspace{-0.01in}
\subfigure[\textbf{TP-H-A4:} predicted keypoints and their \emph{dependency areas} for input \textbf{A}.]{
\includegraphics[width=0.48\linewidth]{figures_pdf/final_attention_map_249_T-H-A4.pdf}
\label{tph-a}
}
\quad
\hspace{-0.5em}
\subfigure[\textbf{TP-R-A4:} predicted keypoints and their \emph{dependency areas} for input \textbf{B}.]{
\includegraphics[width=0.48\linewidth]{figures_pdf/final_attention_map_53_T-R-A4.pdf}
\label{tpr-b}
}\hspace{1.em}\vspace{-0.01in}
\subfigure[\textbf{TP-H-A4:} predicted keypoints and their \emph{dependency areas} for input \textbf{B}.]{
\includegraphics[width=0.48\linewidth]{figures_pdf/final_attention_map_53_T-H-A4.pdf}
\label{tph-b}
}
\caption{Predicted locations and dependency areas for different types of keypoints by different models: TP-R-A4 (left column) and TP-H-A4 (right column). In each sub-figure, the first image is the original input plotted with the predicted skeleton. The other maps visualize the defined dependency area ($A_{i,:}$) of the attention matrix in the last layer with a threshold value (0.00075). The predicted location of a keypoint is annotated with a white pentagram ($\star$) in each sub-map. Redder areas indicate higher attention scores.}\vspace*{-0.1in}
\label{last-attention-dependency}
\end{figure*}
\begin{figure*}
\centering
\hspace{-1em}
\subfigure[\textbf{TP-R-A4:} predictions and dependency areas for \textbf{Input C}.]{\includegraphics[width=0.49\linewidth]{figures_pdf/attention_map_image_dependency_transposer_thres_00008.pdf}}\hspace{1em}\label{da_tpr}
\subfigure[\textbf{TP-H-A4:} predictions and dependency areas for \textbf{Input C}.]{\includegraphics[width=0.49\linewidth]{figures_pdf/attention_map_image_dependency_transposeh_thres_00008.pdf}}\label{da_tph}
\caption{\textbf{Dependency areas} for particular positions in the different attention layers, visualized with the same method as Fig.~\ref{last-attention-dependency}.}\hspace{1em}\vspace*{-0.1in}
\label{all-attention-dependency}
\end{figure*}
\subsection{Qualitative Analysis}
\label{Explainability Analysis}
The hyperparameter configuration of a TransPose model might affect its behavior in unknown ways. In this section, we choose \emph{trained models, types of predicted keypoints, depths of attention layers, and input images} as controlled variables to observe the model behaviors.
{\bf The dependency preferences differ between models with different CNN extractors.} To compare the ResNet-S and HRNet-S based models, we use the trained models TP-\textbf{R}-A4 and TP-\textbf{H}-A4 as exemplars. As illustrated in Fig.~\ref{last-attention-dependency}, we choose two typical inputs A and B and visualize the \emph{dependency areas} defined in Sec.~\ref{paper::definition}. We find that although TP-R-A4 and TP-H-A4 predict exactly the same keypoint locations, TP-\textbf{H}-A4 exploits clues from multiple longer-range joints to predict keypoints, whereas TP-\textbf{R}-A4 prefers to attend to local image cues around the target joint. This characteristic is further confirmed by the visualized affected areas in the supplementary material, in which keypoints have larger and non-local affected areas in TP-\textbf{H}-A4. Although such results are not what one might commonly expect, they suggest that: 1) a pose estimator uses global information from long-range joints to localize a particular joint; 2) HRNet-S is better than ResNet-S at capturing long-range dependency relationships (probably due to its multi-scale fusion scheme).
{\bf Dependencies and influences vary across keypoint types.} For keypoints on the head, localization mainly relies on visual clues from the head itself, but TP-\textbf{H}-A4 also associates them with the shoulders and arm joints. Notably, the dependencies for predicting wrists, elbows, knees, or ankles differ markedly between the two models: TP-\textbf{R}-A4 depends on local clues on the same side, while TP-\textbf{H}-A4 exploits more clues from the joints on the symmetrical side. As shown in Fig.~\ref{tph-a}, Fig.~\ref{tph-b}, and Fig.~\ref{all-attention-dependency}, we further observe that a pose estimator may gather strong clues from many parts to predict a target keypoint. This can explain why the model can still accurately predict the location of an occluded keypoint: an occluded keypoint with an ambiguous location has less impact on the other predictions, or a larger uncertain area to rely on (e.g. the occluded left ankle -- last map of Fig.~\ref{tpr-b} or Fig.~\ref{tph-b}).
{\bf Attention gradually focuses on more fine-grained dependencies as depth increases.} Observing all of the attention layers (the first, second, and third rows of Fig.~\ref{all-attention-dependency}), we are surprised to find that \emph{even without intermediate supervision on the GT locations}, TP-H-A4 can still attend to the accurate locations of joints in the early attention layers, albeit with more global cues. For both models, as depth increases, the predictions gradually depend on more fine-grained image clues around local parts or keypoint positions (Fig.~\ref{all-attention-dependency}).
{\bf Image-specific dependencies and statistical commonalities for a single model.} Unlike the static relationships encoded in the weights of a CNN after training, the attention maps are dynamic with respect to the input. As shown in Fig.~\ref{tpr-a} and Fig.~\ref{tpr-b}, despite the statistical commonalities in the dependency relationships for the predicted keypoints (similar behaviors for most common images), the fine-grained dependencies change slightly according to the image context. In the presence of occlusion or invisibility in a given image, such as input B (Fig.~\ref{tpr-b}), the model can still localize the partially obscured keypoint by looking for more significant image clues, and it reduces reliance on the invisible keypoint when predicting the others. Future work may exploit such attention patterns for parts-to-whole association or for aggregating relevant features in 3D pose estimation or action recognition.
\section{Conclusion}
We explored a model -- TransPose -- that introduces Transformer into human pose estimation. The attention layers enable the model to capture global spatial dependencies efficiently and explicitly, and we show that such heatmap-based localization by Transformer shares the idea of Activation Maximization.
With lightweight architectures, TransPose matches state-of-the-art CNN-based counterparts on COCO and gains significant improvements on MPII when fine-tuned with small training costs. Furthermore, we validate the importance of position embedding. Our qualitative analysis reveals how the model's behavior varies with layer depth, keypoint type, trained model, and input image, giving insight into how the model handles special cases such as occlusion.
\textbf{Acknowledgment}.
This work was supported by the National Natural Science Foundation of China (61773117 and 62006041).
{
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{Intro}
Let $k$ be a perfect field of characteristic $p>0$ and $K$ a totally
ramified degree $e$ field extension of $W(k)[1/p]$.
Fix an algebraic closure $\o{K}$ of $K$,
denote its $p$-adic completion by $\mathbf{C}$, and use $\O_{\mathbf{C}}$ to denote its ring of integers.
Let $X$ be a smooth proper formal scheme over $\O_K$
with (rigid analytic) geometric generic fiber $X_{\bar \eta}$. Write $X_n : = X \times_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
Starting from \cite{FontaineMessing} and \cite{Katovanishingcycles}, much effort has been devoted to investigating the relationship between the crystalline cohomology (and other variants) and the \'etale cohomology attached to $X$.
When $e=1 $, it is proved by Fontaine--Messing (\cite{FontaineMessing}) and Kato (\cite{Katovanishingcycles})
that if $X$ is a proper\footnote{Projective in Kato's paper.} smooth scheme over $\O_K = W(k)$,
then $ {\rm H}^i _{\cris}(X_n / W_n (k) )$ admits a Fontaine--Laffaille module structure
when $i \leq p -1$ and the functor $T_{\cris}$ on the category of Fontaine--Laffaille modules (from Fontaine--Laffaille theory)
satisfies $T_{\cris}({\rm H} ^i_{\cris} (X_n /W_n(k)))\simeq {\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ as $G_K$-modules when $i \leq p-2$.
When $e> 1$, a more complicated base ring has to be introduced. Fix a uniformizer $\pi $ of $K$ and let $ E= E(u)\in W(k)[u]$ be the Eisenstein polynomial of $\pi$.
Let $S$ be the $p$-adic completion of the PD envelope of $W(k)[u]$ for the ideal $(E) $.
Note that $S$ admits:
\begin{itemize}
\item a Frobenius action $\varphi: S\to S$ which extends the Frobenius $\varphi$ on $W(k)$ and satisfies $\varphi (u)= u ^p$;
\item a filtration $\Fil ^i S $ which is the $p$-complete $i$-th PD ideal; and
\item a monodromy operator $N : S \to S$ given by $N(f(u)) = -u\,\frac{df}{du}$.
\end{itemize}
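For illustration (this specific choice of $K$ and $E$ is only an example), take $K = \mathbb{Q}_p(p^{1/e})$ with uniformizer $\pi = p^{1/e}$, so that one may choose $E(u) = u^e - p$. Writing $E^{[i]} \coloneqq E^i/i!$, the ring $S$ is the $p$-adic completion of $W(k)[u][E^{[i]} : i \geq 1]$, the filtration $\Fil^i S$ is ($p$-completely) generated by $\{E^{[j]}\}_{j \geq i}$, and the operators act by
\[
\varphi(u^j) = u^{pj}, \qquad N(u^j) = -j\, u^j, \qquad N(E^{[i]}) = -e u^e\, E^{[i-1]}.
\]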
In \cite{Bre98}, Breuil introduced the notion of a \emph{Breuil module} to describe the structure of ${\rm H} ^i _{\cris} (X_n / S_n)$,
and constructed a functor $T_{\st, \star}$ from the category of Breuil modules to the category of $\ensuremath{\mathbb{Z}}_p$-representations of $G_K$.
Here, a Breuil module is a datum consisting of a finite $S$-module $\mathcal{M}$ together with a one-step filtration $\Fil^h\mathcal{M}\subset \mathcal{M}$,
a ``divided Frobenius'' $\varphi_h: \Fil^h\mathcal{M} \to \mathcal{M}$, and a monodromy operator $N : \mathcal{M} \to \mathcal{M}$ which satisfies some conditions given in \S \ref{subsec-Breuilmodules}.
Following ideas of Breuil, Caruso proved the following.
\begin{theorem}[\cite{CarusoInvent}]\label{thm-caruso}
Let $X$ be a proper semi-stable scheme over $\O _K$. Then its log-crystalline cohomology ${\rm H} ^i_{\rm log\text{-}crys} (X_n /S_n)$ has a Breuil module structure
and $T_{\st, \star }({\rm H} ^i_{\rm log\text{-}crys} (X_n /S_n))\simeq {\rm H}^i _{\et} (X_{\bar \eta} , \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) (i)$
as $G_K$-modules, for $ e(i+1) < p -1$ when $n > 1$ and for $ei <p-1$ when $n =1$.
\end{theorem}
As new cohomology theories have been introduced in \cite{BMS1}, \cite{BMS2} and \cite{BS19},
it is natural to ask whether these new cohomology theories can recover the aforementioned results due to
Fontaine--Messing, Breuil, and Caruso, and hopefully even improve these results.
In this paper, we use these new cohomology theories, in particular, prismatic cohomology and derived de Rham cohomology,
to study torsion crystalline cohomology, torsion \'etale cohomology, and their relationship. We obtain the following result:
\begin{theorem}\label{Thm-intro-1}
Let $X$ be a smooth proper formal scheme over $\O_K$
with geometric generic fiber $X_{\bar \eta}$, and let $i$ be an integer satisfying $ei < p-1$.
Then ${\rm H} ^i _{\cris} (X_n / S_n)$ has a structure of Breuil modules and
$T_{\st, \star}\left({\rm H} ^i _{\cris} (X_n /S _n) \right)\simeq {\rm H} ^i _{\et}(X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})(i)$ as $\ensuremath{\mathbb{Z}}_p [G_K ]$-modules.
\end{theorem}
Here the additional data of the Breuil module structure is roughly given by the following:
\begin{itemize}
\item the filtration is given by the cohomology of the PD powers of a natural PD ideal sheaf $\mathcal I_{\cris}$
on the crystalline site ${\rm H}^i_{\cris}(X_n/S_n, \mathcal I ^{[h]}_{\cris})$;
\item the monodromy operator $N$ is a disguise of the connection given by the crystal nature of crystalline cohomology; and
\item the divided Frobenius is induced by a natural map of (quasi-)syntomic sheaves.
\end{itemize}
From now on, when we talk about ${\rm H} ^i _{\cris} (X_n / S_n)$, we always implicitly regard it as carrying these additional data.
\begin{remark}
\label{rem-better-than-caruso}
\leavevmode
\begin{enumerate}
\item
Let us highlight the difference between Caruso's results and our theorem above.
\begin{enumerate}
\item The $X$ in our theorem is a \emph{smooth} proper \emph{formal} scheme over $\O_K$,
whereas the $X$ in \cite{CarusoInvent} is a semi-stable $\O_K$-model of a smooth proper $K$-variety.
\item Our restriction on $e$ and $i$ is $ei < p -1$ for any $n$ while the restriction in \cite{CarusoInvent} is $ei < p-1$ for $n=1$ and $e(i +1) < p-1$ for $n > 1$.
\end{enumerate}
\item We actually use another functor, $T_S$, to relate torsion crystalline and \'{e}tale cohomology in the above theorem.
But $T_S$ and $T_{\st, \star}$ are essentially the same; see \S \ref{subsec-two-T}.
\end{enumerate}
\end{remark}
Now let us discuss the strategy of this paper to see how prismatic cohomology and (derived) de Rham cohomology come into the picture.
Let $\mathfrak{S}= W(k)[\![u]\!]$ equipped with the Frobenius morphism $\varphi$ extending (arithmetic) Frobenius $\varphi$ on $W(k)$ and $\varphi (u) = u ^p$.
Then $(\mathfrak{S} , (E))$ is the so-called Breuil--Kisin prism.
Classically, an \emph{(\'etale) Kisin module} of height $h$ is a finite $u$-torsion free $\mathfrak{S}$-module $\mathfrak{M}$, together with a semi-linear map
$\varphi_\mathfrak{M} \colon \mathfrak{M} \to \mathfrak{M}$ so that the cokernel of $1\otimes\varphi_\mathfrak{M} \colon \mathfrak{S} \otimes_{\varphi, \mathfrak{S}}\mathfrak{M} \to \mathfrak{M}$ is killed by $E^h$.
By definition, $\varphi ^* \mathfrak{M} \coloneqq \mathfrak{S} \otimes_{\varphi, \mathfrak{S}} \mathfrak{M}$ admits a \emph{Breuil--Kisin (BK) filtration}
$\Fil^h \varphi ^* \mathfrak{M} \coloneqq (1\otimes \varphi_\mathfrak{M}) ^{-1} (E^h \mathfrak{M})$, which plays an important technical role later.
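As a minimal sanity check, consider the unit object $\mathfrak{M} = \mathfrak{S}$ with $\varphi_\mathfrak{M} = \varphi$: the linearization $1 \otimes \varphi_\mathfrak{M} \colon \varphi^*\mathfrak{S} \simeq \mathfrak{S} \to \mathfrak{S}$ is the identity, so $\mathfrak{M}$ has height $h$ for every $h \geq 0$, and
\[
\Fil^h \varphi^*\mathfrak{M} = (1 \otimes \varphi_\mathfrak{M})^{-1}(E^h \mathfrak{S}) = E^h \mathfrak{S},
\]
i.e.~the BK filtration recovers the $E$-adic filtration on $\mathfrak{S}$ in this case.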
It is well-known that the theory of Kisin modules is a powerful tool in \emph{abstract} integral $p$-adic Hodge theory:
the study of $\ensuremath{\mathbb{Z}}_p$-lattices in crystalline (semi-stable) representations and their mod $p^n$ representations,
which can be seen as the arithmetic counterparts of
${\rm H} ^i _{\et} (X_{\bar \eta} , \ensuremath{\mathbb{Z}}_p)$ and ${\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$.
The relationships between Kisin modules, Galois representations, and Breuil modules
are also known in the abstract theory.
In particular, the functor $\underline \mathcal{M} \colon \mathfrak{M} \mapsto \underline \mathcal{M} (\mathfrak{M}) \coloneqq S \otimes_{\varphi, \mathfrak{S}}\mathfrak{M}$
sends a Kisin module $\mathfrak{M}$ of height $h \leq p-1$ to a Breuil module (without $N$-structures) where
\[\Fil^h \underline\mathcal{M} (\mathfrak{M}) \coloneqq \{x \in \underline\mathcal{M}(\mathfrak{M})| (1\otimes \varphi_{\mathfrak{M}}) (x) \in \Fil^h S \otimes_{\mathfrak{S}}\mathfrak{M} \}\subset \underline \mathcal{M}(\mathfrak{M})\]
and $\varphi_h \colon \Fil^h \underline\mathcal{M} (\mathfrak{M}) \overset {1\otimes \varphi_{\mathfrak{M}} }{\longrightarrow} \Fil^h S \otimes _{\mathfrak{S}} \mathfrak{M} \overset{\varphi_h \otimes 1}\longrightarrow S \otimes_{\varphi, \mathfrak{S}}\mathfrak{M} = \underline \mathcal{M} (\mathfrak{M})$
where $\varphi_h \colon \Fil ^h S \to S$ is defined by $\varphi_h (x) = \dfrac{\varphi (x) }{p ^h}$.
See \S \ref{subsec-Breuilmodules} for more details.
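For instance, for the unit Kisin module $\mathfrak{M} = \mathfrak{S}$ with $\varphi_\mathfrak{M} = \varphi$, one checks directly from these formulas that
\[
\underline\mathcal{M}(\mathfrak{S}) = S, \qquad \Fil^h \underline\mathcal{M}(\mathfrak{S}) = \Fil^h S, \qquad \varphi_h(x) = \frac{\varphi(x)}{p^h},
\]
so the functor $\underline\mathcal{M}$ sends the unit Kisin module to the ``unit'' Breuil module $(S, \Fil^h S, \varphi_h)$.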
It turns out that prismatic cohomology ${\rm H} ^i _{\Prism} (X /\mathfrak{S})$ gives geometric realizations of Kisin modules,
in the sense that ${\rm H}^i_{\Prism} (X/\mathfrak{S})$ modulo its $u^\infty$-torsion submodule is an \'etale Kisin module of height $i$
(see \S \ref{subsec-Galos rep and Kisin}, \S \ref{subsec-prism-is-kisin} and the discussion below for more details).
Suggested by the functor $\underline\mathcal{M}$ in the abstract theory,
one naturally expects the following comparison between Breuil--Kisin prismatic cohomology and crystalline cohomology:
\begin{equation}
\label{eqn-1}
R\Gamma _{\Prism} (X/\mathfrak{S}) \otimes^{\mathbb L}_{\mathfrak{S}, \varphi} S \simeq R\Gamma_{\cris} (X/ S).
\end{equation}
This comparison, as was pointed out to us by Koshikawa, follows from \cite[Theorem 5.2]{BS19} and base change of prismatic cohomology.
Inspired by the above discussion, we show in this paper the following comparison result:
\begin{theorem}[{see \Cref{comparing pris and crys} and \Cref{global comparison}}]
\label{comparison in introduction}
Let $(A,I)$ be a bounded prism, and let $X$ be a smooth proper ($p$-adic) formal scheme over $\Spf(A/I)$.
Then we have a functorial isomorphism
\[
\mathrm{R\Gamma}_{\Prism}(X/A) \otimes^{\mathbb L}_{A, \varphi_A} A \otimes^{\mathbb L}_A \dR_{(A/I)/A}^\wedge
\cong \mathrm{R\Gamma}(X, \dR_{-/A}^\wedge),
\]
which is compatible with base change in the prism $(A,I)$.
\end{theorem}
Here $\dR_{-/A}^\wedge$ denotes the (relative to $A$) $p$-adic derived de Rham complex introduced by Illusie in \cite[Chapter VIII]{Ill72}
and studied extensively by Bhatt in \cite{Bha12}.
In fact, when $A/I$ is $p$-torsion free, this is known due to \cite[Theorem 5.2]{BS19}.
Our proof follows closely the proof of crystalline comparison in \cite{BS19}.
As a consequence, the above gives several comparison results, all of which were known due to work of
Bhatt, Morrow, and Scholze \cite{BMS1}, \cite{BMS2}, \cite{BS19}.
\begin{example}
By \cite[Theorem 3.27]{Bha12}, we know that when $A/I$ is $p$-torsion free, the derived de Rham
complex appearing above is given by certain crystalline cohomology.
With this being said, we can explain what the above comparison gives in concrete situations.
\begin{enumerate}
\item BMS2/Breuil--Kisin prism: when $(A,I) = (\mathfrak{S}, (E))$, then the above comparison becomes \Cref{eqn-1} which,
as mentioned above, was obtained in \cite{BS19}.
As a consequence, we see that Breuil's crystalline cohomology groups ${\rm H}^i_{\cris}(X/S)$ are finitely presented $S$-modules,
see \Cref{fp proposition}.
To the best of our knowledge, the coherence of $S$ is unknown, and we are unaware of any other means of showing that these cohomology groups
are finitely presented.
We thank Bhatt for pointing out this application to us.
\item BMS1: when $(A,I) = (A_{\inf}, \ker(\theta))$ is the perfect prism associated with $\mathcal{O}_{\mathbf{C}}$, then the above comparison
says
\[
\mathrm{R\Gamma}_{\Prism}(X/A_{\inf}) \otimes^{\mathbb L}_{A_{\inf}, \varphi} A_{\inf} \otimes^{\mathbb L}_{A_{\inf}} A_{\cris}
\cong \mathrm{R\Gamma_{\cris}}(X/A_{\inf}).
\]
Recall that \cite[Theorem 17.2]{BS19} states that the first base change of the left hand side gives the $A_{\inf}$-cohomology theory constructed
in \cite{BMS1}. Then our comparison here becomes the one established by \cite[Theorem 1.8.(iii)]{BMS1} (see also \cite{Yao19}).
\item PD prism: suppose $I \subset A$ admits a PD structure $\gamma$. Then our comparison implies
\[
\mathrm{R\Gamma}_{\Prism}(X/A) \otimes^{\mathbb L}_{A, \varphi_A} A
\cong \mathrm{R\Gamma_{\cris}}(X/(A, I, \gamma)).
\]
When $I = (p)$, then the above is nothing but the crystalline comparison established in \cite[Theorem 1.8.(1)]{BS19}.
Notice that the left hand side does not depend on the choice of $\gamma$; consequently, neither does the right hand side.
Another class of potentially interesting PD prisms consists of $(W(S), V(1))$ for any bounded $p$-complete ring $S$.
\item de Rham comparison: there is a natural map $\gr^0 \colon \dR_{R/A}^\wedge \to R^\wedge$ given by ``quotienting out'' the first Hodge filtration.
Our comparison result above, after composing with this further base change, gives
\[
\mathrm{R\Gamma}_{\Prism}(X/A) \otimes^{\mathbb L}_{A, \varphi_A} A \otimes^{\mathbb L}_A A/I
\cong \mathrm{R\Gamma_{dR}}(X/(A/I))^\wedge;
\]
here, we have used \cite[Proposition 3.11]{GL20} to identify the right hand side under this base change.
This is the de Rham comparison given by \cite[Theorem 1.8.(3)]{BS19}.
\end{enumerate}
\end{example}
In (1)-(3) above, the crystalline comparison \cite[Theorem 5.2]{BS19} also yields comparison isomorphisms.
Note that there are then at least two comparison isomorphisms in the above discussion,
and we have claimed that they give rise to commutative diagrams, which might worry some readers.
To reassure these readers, we establish the following rigidity of the $p$-adic derived de Rham cohomology theory.
\begin{theorem}[{see \Cref{functorial endomorphism theorem} and \Cref{functorial endo remark}}]
Let $(A,I)$ be a prism such that $A/I$ is $p$-torsion free.
Then the functor $R \mapsto \dR_{R/A}^\wedge$ from the category of smooth $(A/I)$-algebras to $\mathrm{CAlg}(D(\dR_{(A/I)/A}^\wedge))$
has no nontrivial automorphism.
A similar statement holds for the functor $R \mapsto \dR_{R/(A/I)}^\wedge$.
\end{theorem}
Therefore, whenever one has a diagram of functorial comparisons between various cohomology theories and the $p$-adic derived de Rham
cohomology, the diagram is always forced to be commutative.
Our method of proving such rigidity is largely inspired by \cite[Sections 10.3 and 10.4]{BLM18} and \cite[Section 18]{BS19}.
In view of the rigidity aspects of $p$-adic derived de Rham complexes,
we would like to mention a recent result of Mondal \cite{Mon20}:
roughly speaking, there is a \emph{unique} deformation of de Rham cohomology
from characteristic $p$ to Artinian local rings, given by crystalline cohomology (cf.~\cite[Theorem 10.1.2]{BLM18} for the case of deformation over $\mathbb{Z}_p$).
Let us mention that a recent collaboration between Mondal and the first named author \cite{LM21} computes
endomorphisms of $p$-adic derived de Rham cohomology in various $p$-adic settings.
Next, we discuss the compatibility of additional structures on the two sides compared in \Cref{comparison in introduction}, most notably the Frobenius action
and the filtrations.
In \S \ref{Frobenii} we define a natural Frobenius action on the $p$-adic derived de Rham complex, assuming the base ring $A$ is a $p$-torsion free
$\delta$-ring.
The right hand side is therefore equipped with a Frobenius action.
The left hand side admits a Frobenius action as well,
obtained by extending the Frobenius action on prismatic cohomology, since $A \to \dR_{(A/I)/A}^\wedge$ is compatible with the Frobenii on both.
The Frobenii on the two sides of \Cref{comparison in introduction} agree when $A$ is $p$-torsion free; see \Cref{compatible with Frobenii}.
Let us remark that these $p$-torsion free conditions can most likely be relaxed, at the cost of extra work in developing
a theory of ``derived $\delta$-rings''; we expect the above results to hold verbatim.
The comparison of filtrations is our main new contribution to this theory, and it is quite involved. Let us rewrite the comparison:
\[
\varphi^*\mathrm{R\Gamma}_{\Prism}(X/A) \otimes^{\mathbb L}_A \dR_{(A/I)/A}^\wedge
\cong \mathrm{R\Gamma}(X, \dR_{-/A}^\wedge).
\]
There are three natural filtrations here:
\begin{itemize}
\item the Nygaard filtration $\Fil^{\bullet}_{\mathrm{N}}(\Prism^{(1)}_{-/A})$ on $\varphi^*\mathrm{R\Gamma}_{\Prism}(X/A)$, see~\cite[Section 15]{BS19};
\item the $I$-adic filtration on $A$; and
\item the Hodge filtration $\Fil^{\bullet}_{\mathrm{H}}(\dR_{-/A}^\wedge)$ on $\dR_{(A/I)/A}^\wedge$ and $\mathrm{R\Gamma}(X, \dR_{-/A}^\wedge)$.
\end{itemize}
They are related in the following fashion.
\begin{theorem}[{see \Cref{smooth H and N filtration}}]
Let $(A,I)$ be a prism such that $A/I$ is $p$-torsion free, and let $X$ be a smooth proper ($p$-adic) formal scheme over $\Spf(A/I)$.
\begin{enumerate}
\item The isomorphism in \Cref{comparison in introduction} refines to a filtered isomorphism:
\[
\bigg(\mathrm{R\Gamma}(X, \Fil^{\bullet}_{\mathrm{N}}(\Prism^{(1)}_{-/A}))\bigg)
\widehat{\otimes}^{\mathbb L}_{(A, I^{\bullet})} (\mathcal{A}, \mathcal{I}^{[\bullet]})
\cong \mathrm{R\Gamma}(X, \Fil^{\bullet}_{\mathrm{H}}(\dR_{-/A}^\wedge)),
\]
where the left hand side denotes the $p$-complete derived tensor of filtered objects over the filtered ring $(A, I^{\bullet})$
provided by the lax symmetric monoidal structure on the filtered derived infinity category.
In particular, we obtain a graded isomorphism between graded algebras:
\[
\bigg(\gr^*_{\mathrm{N}}\mathrm{R\Gamma}(X, \Prism^{(1)}_{-/A}) \widehat{\otimes}^{\mathbb L}_{\Sym^*_{A/I}(I/I^2)} \Gamma^*_{A/I}(I/I^2)\bigg)
\cong \bigg(\gr^*_{\mathrm{H}}\mathrm{R\Gamma}(X, \dR_{-/A}^\wedge)\bigg).
\]
\item The isomorphism in \Cref{comparison in introduction} induces natural isomorphisms:
\[
\mathrm{R\Gamma}(X, \Prism^{(1)}_{-/A}/\Fil^i_{\mathrm{N}}) \cong
\mathrm{R\Gamma}(X, \dR_{-/A}^\wedge/\Fil^i_{\mathrm{H}})
\]
for all $i \leq p$.
\end{enumerate}
Moreover these isomorphisms are functorial in $X$ and $A$.
\end{theorem}
Here $\mathcal{A}$ denotes the $p$-adic PD envelope of $A \twoheadrightarrow A/I$, and $\mathcal{I}^{\bullet}$ denotes the
filtration of PD powers of the ideal $\ker(\mathcal{A} \twoheadrightarrow A/I)$.
For a (somewhat) concrete description of the filtration on $\varphi^*\mathrm{R\Gamma}_{\Prism}(X/A) \otimes^{\mathbb L}_A \dR_{(A/I)/A}^\wedge$
appearing in the above Theorem, we refer readers to \cite[Construction 3.9]{GL20}.
Note that by combining aforesaid result of Bhatt \cite[Theorem 3.27]{Bha12} and a classical result of Illusie \cite[Corollaire VIII.2.2.8]{Ill72},
there is a natural filtered isomorphism $(\dR_{(A/I)/A}^\wedge, \Fil^{\bullet}_{\mathrm{H}}) \cong (\mathcal{A}, \mathcal{I}^{\bullet})$.
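In the Breuil--Kisin case $(A, I) = (\mathfrak{S}, (E))$, these objects recover the ring from the beginning of the introduction: the $p$-adic PD envelope of $\mathfrak{S} \twoheadrightarrow \O_K$ is Breuil's ring $S$, so that
\[
(\mathcal{A}, \mathcal{I}^{\bullet}) = (S, \Fil^{\bullet} S) \cong (\dR^{\wedge}_{\O_K/\mathfrak{S}}, \Fil^{\bullet}_{\mathrm{H}}).
\]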
In \cite[Section 2]{Ill20}, a comparison of the Nygaard and Hodge filtrations
is established for the crystalline base prism $(A,I) = (W(k), (p))$; in particular,
the ring $A/I$ there is entirely $p$-torsion.
It seems reasonable to expect the comparison of filtrations to hold for general
base prisms.
Both Bhatt and Illusie have sketched to us an approach
that resolves the base prism by prisms $(A,I)$ such that $A/I$ is $p$-torsion free,
reducing the general comparison to our theorem above.
We choose to not pursue that direction further in this paper, as our final
application only uses the comparison when the base is the Breuil--Kisin prism.
With the above general preparation, we are ready to show $\underline{\mathcal{M}} ({\rm H} ^i _{\Prism} (X/\mathfrak{S}))\simeq {\rm H}^i _{\cris} (X/S) $ (when ${\rm H}^i_{\Prism} (X/\mathfrak{S})$ is $u$-torsion free).
In order to treat $p ^n$-torsion cohomologies in Theorem \ref{Thm-intro-1},
we consider the derived mod $p^n$ variants of the aforementioned cohomology theories.
For example, we denote the $p ^n $-torsion prismatic cohomology as $R\Gamma_{\Prism} (X_n/ A_n) : = R\Gamma_{\Prism} (X/ A)\otimes ^{\mathbb L}_ {\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
As pointed out in Warning \ref{Warning-not-intrinsic},
such $p ^n$-torsion prismatic cohomology does \emph{not} depend only on $X_n = X \times_{\ensuremath{\mathbb{Z}}_p} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
But it is enough for our purpose to understand the $p^n$-torsion crystalline cohomology
${\rm H}^i _{\cris} (X_n / S_n)$ and its relation with \'etale cohomology ${\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$.
Note that the cohomology groups of $R\Gamma _{\Prism} (X_n /\mathfrak{S}_n)$ do fit in our setting of \emph{generalized} Kisin module $\mathfrak{M} $ of height $h$
(discussed in \S \ref{subsec-generalized-Kisin}),
i.e.~a finitely generated $\mathfrak{S}$-module $\mathfrak{M}$ together with a $\varphi_\mathfrak{S}$-semi-linear map $\varphi_\mathfrak{M} : \mathfrak{M} \to \mathfrak{M}$
and an $\mathfrak{S}$-linear map $\psi: \mathfrak{M} \to \varphi ^* \mathfrak{M}$ so that $ \psi\circ (1 \otimes \varphi _\mathfrak{M}) = E^h \id_{\varphi^* \mathfrak{M}} $ and $(1 \otimes \varphi _\mathfrak{M}) \circ \psi = E^h \id_{\mathfrak{M}}$.
A generalized Kisin module is a natural extension of the classical (\'etale) Kisin module discussed above, allowing $u$-torsion.
In particular, an \'etale Kisin module $\mathfrak{M}$ of height $h $ is a generalized Kisin module of height $h$ without $u$-torsion, where $\psi $ is defined by $\mathfrak{M} \simeq E ^h \mathfrak{M} \overset {(1\otimes \varphi_\mathfrak{M})^{-1}} {\simeq }\Fil^{h}_{\BK} \varphi ^*\mathfrak{M}\subset \varphi ^*\mathfrak{M} $;
similarly, the BK filtration extends to generalized Kisin modules by defining
$\Fil^h_{\BK} \varphi^*\mathfrak{M} \coloneqq \mathrm{Im}(\psi: \mathfrak{M} \to \varphi ^* \mathfrak{M} )$.
Most importantly, ${\rm H} ^i _{\Prism} (X_n / \mathfrak{S} _n)$ is a generalized Kisin module of height $i$,
and the BK filtration on $ \varphi ^* {\rm H} ^i_{\Prism} (X_n/ \mathfrak{S}_n)$ exactly matches with the image of
the Nygaard filtration ${\rm H} ^ i _{\qsyn} (X , \Fil^i_{\rm N} \Prism^{(1)}_n) \to {\rm H}^i_{\qsyn} (X, \Prism^{(1)}_n)$ where
$\Fil^i_{\rm N} \Prism^{(1)}_n = \Fil^i_{\rm N} \Prism^{(1)}_{-/ \mathfrak{S}} \otimes ^{{\mathbb L}}_{\ensuremath{\mathbb{Z}} }\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} $ and $ \Prism^{(1)}_n = \Prism^{(1)}_{-/ \mathfrak{S}} \otimes ^{{\mathbb L}}_{\ensuremath{\mathbb{Z}} }\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} $,
see Proposition \ref{prop-height} and Corollary \ref{cor-BK-Nygaard}.
One can apply many methods in the study of \'etale Kisin modules to treat ${\rm H}^i _{\Prism} (X_n / \mathfrak{S}_n)$ as well.
As a consequence, we prove the following:
\begin{theorem}\label{thm-intro-3}
Let $A = (\mathfrak{S}, E)$ be the Breuil--Kisin prism and write $\mathfrak{M} ^i _n : = {\rm H} ^i _\Prism (X_n / A_n)$.
Let $i \leq p-2$ be an integer.
Then
${\rm H}^{i}_{\cris} (X_n /S_n)$ has a Breuil module structure if and only if $\mathfrak{M}^j_n$ has no $u$-torsion for $j = i , i +1$. In this scenario, we have $\underline{\mathcal{M}} (\mathfrak{M} ^ i _ n)\simeq {\rm H}^{i}_{\cris} (X_n /S_n)$ and $T_S ({\rm H}^{i}_{\cris} (X_n /S_n))\simeq {\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) (i)$ as $G_K$-modules.
\end{theorem}
Finally, by using Caruso's \Cref{thm-caruso} for $n =1$, we can show that $\mathfrak{M}_n ^{i +1}$ has no $u$-torsion if $ei < p -1$,
hence deducing \Cref{Thm-intro-1}.
To end this introduction, let us report what we,
a year after writing this paper,
now know slightly beyond the case of $e \cdot i < p-1$.
Recall Breuil asked \cite[Question 4.1]{BreuilIntegral} whether,
assuming $i < p-1$,
it is true that ${\rm H}^i_{\cris}(X_n/S_n)$
always supports a Breuil module structure
with associated Galois representation given by
${\rm H}^i_{\et}(X_{\bar{\eta}}, \mathbb{Z}/p^n\ensuremath{\mathbb{Z}})$.
In view of our \Cref{thm-intro-3}, what Breuil asked for is really
some module theoretic structure of prismatic cohomology of $X$:
whether the prismatic cohomology always has no $u$-torsion
when $i < p$.
In a sequel to this paper, among other things,
we study $u^\infty$-torsions in prismatic cohomology.
We obtain some results in the boundary case: that is $e \cdot i = p-1$.
Let us restrict ourselves further to the two extrema of the boundary case,
and our relevant findings are summarized below:
\begin{itemize}
\item When $i = 1$, Breuil's question amounts to the vanishing of $u$-torsion
in the first and second prismatic cohomology.
The first prismatic cohomology is always $u$-torsion free.
The second prismatic cohomology having $u$-torsion is shown to be equivalent to
the failure of $X$ to admit an Albanese abelian (formal) scheme,
by which we mean a map $X \to A$ whose central and
generic fibres are both the Albanese map.
This was studied by Raynaud \cite{Ray79} and we extend some of his results
using this prismatic perspective.
We generalize a construction in \cite[Subsection 2.1]{BMS1} to produce
counterexamples to Breuil's question with $e = p-1 > 1$.
In fact, the module structure of ${\rm H}^1_{\cris}(X_n/S_n)$ in this example
is so pathological that it cannot possibly
support a Breuil module structure.
In particular, in hindsight, our \Cref{Thm-intro-1} is sharp.
\item When $e=1$, this is what Fontaine--Messing \cite{FontaineMessing}
and Kato \cite{Katovanishingcycles} studied.
In the boundary case $i = p-1$, we show that the Galois representation $\rho^{p-1}$
attached to the Fontaine--Laffaille structure on the $(p-1)$-st crystalline cohomology
is not far from the $(p-1)$-st \'{e}tale cohomology of the geometric generic fibre.
\end{itemize}
The case of $e \cdot i > p-1$ remains mysterious to us.
We believe the first step of investigation would be a better understanding of
$u^\infty$-torsions in prismatic cohomology, extending our results so far.
We arrange our paper as follows: after collecting rudiments on prismatic cohomology and derived de Rham cohomology in \S 2,
we establish our comparison isomorphism between the two cohomologies in \S 3 together with Frobenius structures.
We devote efforts to discuss various filtrations in \S 4 and establish a filtered comparison.
We remark that the theory in \S 2--\S 4 accommodates quite general classes of prisms, which opens the possibility of developing,
for example, Breuil--Caruso theory over more general base rings.
We hope to report the generalization in this direction in future work.
Starting from \S 5,
we restrict ourselves to the Breuil--Kisin prism $(\mathfrak{S}, (E))$ and focus on structures of torsion prismatic cohomology and torsion crystalline cohomology for proper smooth formal scheme $X$ over $\O_K$.
In \S 5 we construct a connection $\nabla$ on derived de Rham cohomology and hence on crystalline cohomology.
\S 6 recalls the classical theory of Kisin modules, Breuil modules, the functors to Galois representations, and
the functor $\underline \mathcal{M}$ connecting Kisin modules and Breuil modules.
Finally, \S 7 assembles all the previous preparations to prove Theorem \ref{thm-intro-3}.
\subsection*{Acknowledgement} We would like to thank Bhargav Bhatt, Bryden Cais, Luc Illusie,
Teruhisa Koshikawa, Shubhodip Mondal, Deepam Patel, and Emanuel Reinecke,
for very useful discussions and communications during the course of preparing this paper.
The first named author especially would like to thank Bhargav Bhatt for so many helpful discussions
and suggestions concerning this project.
The influence of Bhatt's work and comments on the first half of this paper should be obvious to readers.
We are also grateful to Illusie for stimulating discussions and his encouragement.
\section{Preliminaries}
Starting with this section through \Cref{section filtrations}, unless stated otherwise, all completions and (completed) tensor products are derived.
\subsection{Transversal prisms}
\begin{lemma}
\label{oriented transversal prism lemma}
Let $(A,I)$ be an oriented prism with $I = (d)$. The following are equivalent:
\begin{enumerate}
\item the sequence $(p, d)$ is Koszul regular;
\item the sequence $(p, d)$ is regular;
\item the morphism $\mathbb{Z}_p[\![T]\!] \to A$ sending $T$ to $d$
is flat.
\end{enumerate}
\end{lemma}
\begin{proof}
(3) implies (1): flatness implies that the derived tensor product
$A \otimes_{\mathbb{Z}_p[\![T]\!]} \mathbb{Z}_p[\![T]\!]/(p, T)$, which is computed by
the Koszul complex $\mathrm{Kos}(A; p, d)$, is discrete.
(1) implies (2): (1) implies that the $p$-torsion in $A$ is uniquely $d$-divisible,
and that $A/p$ has no $d$-torsion.
On the other hand, we know the $p$-torsion in $A$ is derived $d$-complete,
hence it must vanish. Therefore $(p,d)$ is a regular sequence.
(2) implies (3): it suffices to show that for any prime ideal
$\mathfrak{p} \subset \mathbb{Z}_p[\![T]\!]$ the derived tensor
$A \otimes_{\mathbb{Z}_p[\![T]\!]} \mathbb{Z}_p[\![T]\!]/\mathfrak{p}$ is discrete.
When $\mathfrak{p}$ is the unique maximal ideal, this follows immediately from (2).
When $\mathfrak{p} = (p)$, discreteness follows since (2) implies that $A$ is $p$-torsionfree.
So we only have to deal with the remaining height $1$ primes, which by Weierstrass preparation
are generated by a distinguished polynomial of the form
\[
f = T^n + p \cdot (\text{lower order terms}),
\]
and we need to show that $A$ is $f$-torsionfree.
Suppose $a \in A$ is $f$-torsion; reducing modulo $p$ we see that $\bar{a} \in A/p$
is $d^n$-torsion, so (2) implies that $\bar{a} = 0 \in A/p$.
Therefore the $f$-torsion in $A$ is divisible by $p$.
As (2) also implies that $A$ is $p$-torsionfree, we see that the $f$-torsion in $A$
is uniquely $p$-divisible.
Since $A$ is derived $p$-complete, we see that $A$ must in fact be $f$-torsionfree.
\end{proof}
We can globalize to non-oriented prisms $(A,I)$.
The following easily follows from~\Cref{oriented transversal prism lemma}.
\begin{lemma}
\label{transversal prism lemma}
Let $(A, I)$ be a prism. The following are equivalent:
\begin{enumerate}
\item there is a $(p, I)$-completely faithfully flat cover by an oriented prism
$(A', I A')$, which satisfies the equivalent conditions
in~\Cref{oriented transversal prism lemma};
\item the ideal $I$ is $p$-completely regular;
\item Zariski locally $(p, I)$ is a regular sequence;
\item the natural morphism
$\Spf(A) \to [\Spf(\mathbb{Z}_p[\![T]\!])/(\mathbb{G}_m)_{\mathbb{Z}_p}]$
classified by $I$ is flat.
\end{enumerate}
\end{lemma}
Let us explain the morphism in (4) above: Zariski locally $I$ is generated
by a nonzerodivisor $d$, hence Zariski locally we get a map
$\Spf(A) \to \Spf(\mathbb{Z}_p[\![T]\!])$,
and on overlap these generators differ by a unit in $A$, hence globally
we have a morphism to the quotient stack.
Alternatively, we can understand this map as the composition of
the universal map $\Spf(A) \to \Sigma$
introduced by Drinfeld~\cite[Section 1.2]{Dr20},
and $\Sigma \to [\Spf(\mathbb{Z}_p[\![T]\!])/(\mathbb{G}_m)_{\mathbb{Z}_p}]$
induced by $W_{\mathrm{prim}} \to \Spf(\mathbb{Z}_p[\![T]\!])$
sending a Witt vector $(x_0, x_1, \ldots)$ to $x_0$.
\begin{definition}
A prism $(A, I)$ is said to be \emph{transversal}
if it satisfies the equivalent conditions in~\Cref{transversal prism lemma}.
\end{definition}
For the remainder of this subsection, let us assume that $(A,I)$ is a transversal prism.
Denote the $p$-completed PD envelope of $A \twoheadrightarrow A/I$ by $\mathcal{A}$, and
denote the kernel of $\mathcal{A} \twoheadrightarrow A/I$ by $\mathcal{I}$.
\begin{example}
Let us list some examples of transversal prisms.
\begin{enumerate}
\item The universal oriented prism is transversal.
\item The Breuil--Kisin prism~\cite[Example 1.3.(3)]{BS19} is transversal.
We have $A = \mathfrak{S}$, and $\mathcal{A}$ is the ring denoted by $S$
in the classical literature concerning Breuil modules.
\item Let $\mathbf{C}$ be an algebraically closed complete non-Archimedean field extension of
$\mathbb{Q}_p$. Then the perfect prism associated with $\mathcal{O}_\mathbf{C}$ is transversal.
We have $A = A_{\mathrm{inf}}$ and $\mathcal{A} = A_{\mathrm{crys}}$.
\end{enumerate}
\end{example}
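For instance, transversality in case (2) can be checked directly: writing $\mathfrak{S} = W(k)[\![u]\!]$ and $I = (E(u))$ with $E(u)$ an Eisenstein polynomial of degree $e$, we have
\[
\mathfrak{S}/p \cong k[\![u]\!] \quad \text{and} \quad E(u) \equiv u^e \pmod{p},
\]
and $u^e$ is a nonzerodivisor in the domain $k[\![u]\!]$, so $(p, E(u))$ is a regular sequence.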
Although $\mathcal{A}$ is usually not flat over $A$,
it has $p$-completely finite Tor dimension.
In the next subsection we shall see that this is a general phenomenon
relating the derived de Rham complex to the regularity of $I$.
\begin{lemma}
\label{calA Tor dim 1}
Let $(A,I)$ be a transversal prism.
Then $A \to \mathcal{A}$ has $p$-complete amplitude
in $[-1,0]$; in particular, $p$-complete base change along $A \to \mathcal{A}$
commutes with taking totalizations in $D^{\geq 0}(A)$.
\end{lemma}
\begin{proof}
It suffices to check the statement Zariski locally on $\Spf(A)$,
hence we may assume the prism to be oriented, say $I = (d)$.
Then we may base change to $A/p$.
So we need to check that given an $\mathbb{F}_p$-algebra $R$,
and a nonzerodivisor $d \in R$, the divided power algebra
$S = D_R(d)$ has Tor amplitude in $[-1,0]$ over $R$.
This follows from the fact that $d^p = 0$ in $S$ and
$S$ is a free $R/(d^p)$-module.
The commutation of tensor and totalization now follows from~\cite[Lemma 4.20]{BS19}.
\end{proof}
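Let us record, as a sketch, the standard presentation behind the last step of the proof above. One has $D_R(d) \cong R\langle x \rangle/(x - d)$, where $R\langle x \rangle$ denotes the divided power polynomial algebra, and in characteristic $p$ there is a presentation
\[
R\langle x \rangle \cong R[x, y_1, y_2, \ldots]/(x^p, y_1^p, y_2^p, \ldots), \qquad y_i = \gamma_{p^i}(x).
\]
Substituting $x \mapsto d$ yields
\[
S \cong (R/d^p)[y_1, y_2, \ldots]/(y_1^p, y_2^p, \ldots),
\]
which is visibly free over $R/(d^p)$; and since $d^p$ is a nonzerodivisor in $R$, the quotient $R/(d^p)$ has Tor amplitude in $[-1,0]$ over $R$.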
\subsection{Envelopes and derived de Rham cohomology}
Let $(A,I)$ be a bounded prism.
In this subsection we review derived de Rham complex of simplicial $A$-algebras relative to $A$.
First we want to spell out explicitly the process of freely adjoining divided powers
or delta powers of elements mentioned in \cite[Subsections 2.5--2.6 and Section 3]{BS19}.
\begin{construction}
\label{definition of envelopes}
(0) Recall that $I$ is locally generated by a nonzerodivisor in $A$.
Let $\{A_i\}$ be an affine open cover of $\Spf(A)$ such that $I \cdot A_i = (d_i)$ with $d_i \in I$.
There is an $A$-algebra $A[I \cdot x]$ obtained by glueing the $A_i[x_i]$'s via
$x_i = \frac{d_i}{d_j}x_j$; it admits a surjection $A[I \cdot x] \twoheadrightarrow A$
obtained by glueing the maps $x_i \mapsto d_i$.
Alternatively one may directly define
\[
A[I \cdot x] \coloneqq \bigoplus_{n \geq 0} I^n
\]
with the surjection onto $A$ given on each factor by the natural inclusion $I^n \subseteq A$.
It can also be seen as the ring of functions on the total space of the line bundle $I^{-1}$
on $\Spec(A)$.
Similarly there is a $\delta$-$A$-algebra $A\{ I \cdot x\}$
obtained by glueing the $A_i\{x_i\}$'s via
$A_i\{x_i\} \otimes_{A_i} A_{ij} \xrightarrow{x_i \mapsto \frac{d_i}{d_j}x_j}
A_j\{x_j\} \otimes_{A_j} A_{ij}$,
together with a surjection $A\{I \cdot x\} \twoheadrightarrow A$ obtained by glueing the maps
$x_i \mapsto d_i$.
Alternatively one may directly define
\[
A\{ I \cdot x\} \coloneqq \bigotimes_A^{m \geq 1} \big( \bigoplus_{n \geq 0} (\delta^m(I))^n \big)
\]
with the evident surjection being the natural map on each tensor factor.
This can also be seen as the ring of functions on the total space of an infinite rank vector bundle
on $\Spec(A)$.
Note that the above construction can be generalized to the case where $I$ is replaced
by a line bundle $\mathcal{L}$ on $\Spec(A)$.
In particular one can make sense of $A\{I^{-1} \cdot x\}$ and $A\{\varphi(I) \cdot x\}$.
We remark that there is a natural map $A\{x\} \to A\{I^{-1} \cdot y\}$ obtained by glueing
the maps $x \mapsto d_i y_i$, which we abbreviate as $x \mapsto y$.
(1) Let $B$ be an $A$-algebra, let $f_1, \ldots, f_r$ be a finite set of elements in $B$.
The \emph{simplicial $B$-algebra obtained by freely adjoining divided powers of $f_i$}
is denoted
by $B \llangle f_i \rrangle$ and
defined to be the derived tensor of the following:
\[
\xymatrix{
\mathbb{Z}[x_1, \ldots, x_r] \ar[r]^-{x_i \mapsto f_i} \ar[d] & B \\
D_{\mathbb{Z}[x_1, \ldots, x_r]} (x_1, \ldots, x_r). &
}
\]
The \emph{simplicial $A$-algebra obtained by freely adjoining divided powers of $I$}
is denoted by $A \llangle I \rrangle$ and defined to be the derived tensor of the following:
\[
\xymatrix{
A[I \cdot x] \ar[r]^-{x_i \mapsto d_i} \ar[d] & A \\
D_{A[I \cdot x]}(\ker(A[I \cdot x] \twoheadrightarrow A)), &
}
\]
alternatively one may define it as the glueing of the simplicial $A$-algebras
$A_i \otimes_{x \mapsto d_i, A[x]} D_{A[x]}(x)$.
The \emph{simplicial $B$-algebra obtained by freely adjoining divided powers of $I, f_i$},
denoted by $B \llangle I, f_i \rrangle$,
is defined as the derived tensor of the above two algebras over $A$.
(2) Let $B$ be a $\delta$-$A$-algebra, let $f_1, \ldots, f_r$ be a finite set of elements in $B$.
We define $B\{\frac{f_i}{p}\}$ as the derived pushout of the following diagram of simplicial algebras:
\[
\xymatrix{
A\{x_1, \ldots, x_r\} \ar[r]^-{x_i \mapsto f_i} \ar[d]^-{x_i \mapsto p \cdot y_i} & B \\
A\{y_1, \ldots, y_r\}. &
}
\]
We define $A\{\frac{\varphi(I)}{p}\}$ as the derived pushout of the following:
\[
\xymatrix{
A\{I \cdot x\} \ar[r]^-{\varphi} \ar[d]^{x \mapsto p \cdot y} & A \\
A\{I \cdot y\}, &
}
\]
alternatively one may define it as the glueing of the simplicial $\delta$-$A$-algebras
$A_i\{\frac{\varphi_A(d_i)}{p}\}$.
Analogously $B\{\frac{\varphi(I)}{p}, \frac{f_i}{p}\}$ is defined as derived tensoring the above
two algebras over $A$.
(3) Given a sequence $(f_1, \ldots, f_r)$ of elements inside a ring $B$,
we use the notation
\[
\dR_{B}(f_1, \ldots, f_r)^\wedge \coloneqq \dR_{\mathrm{Kos}(B; f_1, \ldots, f_r)/B}^\wedge
\]
to denote the derived $p$-completed
derived de Rham complex of $\mathrm{Kos}(B; f_1, \ldots, f_r)$,
viewed as a simplicial $B$-algebra, over $B$.
Similarly when $B$ is an $A$-algebra, we denote
\[
\dR_{B}(I)^\wedge \coloneqq \dR_{(B \otimes_A (A/I))/B}^\wedge,
\]
and
\[
\dR_{B}(I, f_i)^\wedge \coloneqq \dR_{\big(\mathrm{Kos}(B; f_1, \ldots, f_r) \otimes_A (A/I)\big)/B}^\wedge.
\]
Let $J$ be an ideal inside $B$, then we denote
\[
\dR_B(J)^\wedge \coloneqq \dR_{(B/J)/B}^\wedge.
\]
Here all completions are derived $p$-completions.
\end{construction}
\begin{remark}\label{rem-2.7}
(1) Let $B = A\{x\}^\wedge$; note that $x$ is $(p,I)$-completely regular relative to $A$.
Using \cite[Proposition 3.13]{BS19}, we can get a $B$-algebra $C \coloneqq B\{\frac{x}{I}\}^\wedge$
which is locally (on $\Spf(A)$ as one needs to trivialize the line bundle $I$) given by
$C = A\{y\}^\wedge$ together with $B$-algebra structure $x \mapsto d \cdot y$
where $d$ is the local generator of $I$.
One checks immediately that, in our notation here, we have $C \cong A\{I^{-1} \cdot y\}^\wedge$
with $B$-structure given by ($(p,I)$-completion of) $x \mapsto y$.
In fact, after examining the proof of \cite[Proposition 3.13]{BS19}, one finds that
in the situation described in loc.~cit., the algebra $B\{\frac{J}{I}\}^\wedge$
is the derived $(p,I)$-complete pushout of the following diagram:
\[
\xymatrix{
A\{x_1, \ldots, x_r\} \ar[r] \ar[d]^-{x_i \mapsto y_i} & B \\
A\{I^{-1} \cdot y_i\}. &
}
\]
(2) We warn readers that when $J = (f_1, \ldots, f_r)$ is an ideal inside $B$,
the two simplicial $B$-algebras $\dR_B(J)^\wedge$ and $\dR_B(f_1, \ldots, f_r)^\wedge$
are usually different.
These two agree when $(f_i)$ is a $p$-completely Koszul regular sequence.
\end{remark}
Below we shall see the relation between the derived de Rham complex, divided power envelopes,
and prismatic envelopes, which follows directly from \cite[Subsection 2.5]{BS19}.
\begin{lemma}
\label{three envelopes}
(1) Let $B$ be an $A$-algebra, let $\{f_1, \ldots, f_r\}$ be a finite set of elements of $B$.
Then we have the following identification of derived $p$-complete simplicial $B$-algebras:
\[
\dR_B(f_1, \ldots, f_r)^\wedge \cong
B \llangle f_i \rrangle^\wedge.
\]
Similarly we have an identification:
\[
\dR_B(I)^\wedge \cong
B \llangle I \rrangle^\wedge.
\]
(2) Let $B$ be a $\delta$-$A$-algebra, let $\{f_1, \ldots, f_r\}$ be a finite set of elements of $B$.
Then we have the following identification of derived $p$-complete simplicial $B$-algebras:
\[
B \llangle f_i \rrangle^\wedge \cong
B\{\frac{\varphi(f_i)}{p}\}^\wedge.
\]
Similarly we have an identification:
\[
B \llangle I \rrangle^\wedge \cong
B\{\frac{\varphi(I)}{p}\}^\wedge.
\]
\end{lemma}
\begin{proof}
By the base change property of their constructions, we reduce ourselves to
the case where $B = A\{x_1, \ldots, x_r\}$ with $f_i = x_i$.
Again by base change we may assume $A$ is the initial oriented prism;
in particular it is flat over $\mathbb{Z}_p$ and $I = (d)$ is generated by a nonzerodivisor.
So we may focus on the case concerning a finite set of elements of $B$,
and we may further reduce to the case where the set is a singleton.
Now the identification in (1) follows from (the limit version of) \cite[Theorem 3.27]{Bha12}
and \cite[Th\'{e}or\`{e}me V.2.3.2]{crystal2}.
The identification in (2) follows from \cite[Lemma 2.36]{BS19}.
\end{proof}
We deduce a consequence concerning the Tor amplitude of $\dR_A(I)^\wedge$ over $A$,
generalizing \Cref{calA Tor dim 1}.
\begin{lemma}
\label{dR Tor dim 1}
Let $(A,I)$ be a prism.
Then $A \to \dR_A(I)^\wedge$ has $p$-complete amplitude
in $[-1,0]$; in particular, $p$-complete base change along $A \to \dR_A(I)^\wedge$
commutes with taking totalizations in $D^{\geq 0}(A)$.
\end{lemma}
\begin{proof}
We may check this statement locally on $\Spf(A)$, hence we may assume $I = (d)$.
Next, by base change, we may assume $A$ to be the initial oriented prism,
in particular we may assume it to be transversal.
Using \Cref{three envelopes} (1), we now see that
$\dR_A(I)^\wedge$ is the $p$-completed divided power envelope $D_A(I)^\wedge$.
This reduces the Lemma to \Cref{calA Tor dim 1}.
\end{proof}
We also have a prototype base change formula which will be used in the next section
to establish a general comparison.
\begin{lemma}
\label{naive base change}
Let $(A,I)$ be a prism, denote the composition
$A\{x\} \xrightarrow{\varphi_{A}, x \mapsto \varphi(z)} A\{z\} \to \dR_{A\{z\}}(I)^\wedge$ by $f$.
Then we have a base change formula:
\[
A\{I^{-1} \cdot x\} \widehat{\otimes}_{A\{x\}, f} \dR_{A\{z\}}(I)^\wedge
\cong \dR_{A\{z\}}(I,z)^\wedge,
\]
\end{lemma}
Here the completion on the left hand side is derived $p$-completion.
As $\varphi(I) = (p)$ inside $\pi_0(\dR_{A\{z\}}(I)^\wedge)$, it is the same
as derived $(p,I)$-completion when viewed as an $A$-complex
via $\varphi_A \colon A \to \dR_A(I)^\wedge$.
\begin{proof}
Note that by \Cref{three envelopes} we have identifications
$\dR_{A\{z\}}(I)^\wedge \cong A\{z\}\{\frac{\varphi(I)}{p}\}^\wedge$
as $p$-complete simplicial $A\{z\}$-algebras.
Similarly we can identify $\dR_{A\{z\}}(I,z)^\wedge$ with
$A\{z\}\{\frac{\varphi(z)}{p}, \frac{\varphi(I)}{p}\}^\wedge$.
Now we look at the following diagram
\[
\xymatrix{
A\{x\} \ar[r] \ar[d] & A\{z\} \ar[r] \ar[d] & A\{z\}\{\frac{\varphi(I)}{p}\} \ar[d] \\
A\{I^{-1} \cdot x\} \ar[r] & A\{z\}\{\frac{\varphi(z)}{\varphi(I)}\} \ar[r] & A\{z\}\{\frac{\varphi(z)}{p}, \frac{\varphi(I)}{p}\}.
}
\]
The left square is a pushout diagram by definition.
Hence it suffices to show the right square, after derived $p$-completion,
is also a pushout diagram of $p$-complete simplicial $A\{z\}$-algebras.
To that end, we may work Zariski locally on $A$, so we can assume $I = (d)$
is generated by one element.
This square is the base change of the same diagram when $A$ is the
initial oriented prism, so we have reduced ourselves to that case.
Now every ring in sight is discrete, and the $p$-completed square is a pushout diagram
because $\varphi(d)$ and $p$ differ by a unit inside
$A\{\frac{\varphi(d)}{p}\}^\wedge \cong D_A(d)^\wedge$.
\end{proof}
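To make the final step explicit: since $d$ is a distinguished element, $\delta(d)$ is a unit in $A$, and inside $D_A(d)^\wedge$ we have
\[
\varphi(d) = d^p + p \cdot \delta(d) = p \cdot \big( (p-1)! \cdot \gamma_p(d) + \delta(d) \big).
\]
As $\gamma_p(d)$ lies in the PD ideal and is topologically nilpotent, the factor $(p-1)! \cdot \gamma_p(d) + \delta(d)$ is a unit, so $\varphi(d)$ and $p$ indeed differ by a unit.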
In \cite[Proposition 3.25]{Bha12}, for any $p$-complete $A$-algebra $B$,
Bhatt constructed a natural map
\[
\mathcal{C}\mathrm{omp}_{B/A} \colon \dR_{B/A}^\wedge \to \mathrm{R\Gamma_{crys}}(B/A).
\]
Here the right hand side denotes the $p$-complete crystalline cohomology
defined using PD thickenings of $B$ relative to $(A, (p), \gamma)$
with $\gamma_i(p) = p^i/i!$.
This natural map is functorial in $A \to B$ and agrees with Berthelot's
de Rham--crystalline comparison \cite[Th\'{e}or\`{e}me IV.2.3.2]{crystal2}
when $A \to B$ is formally smooth (viewed as a map of $p$-adic algebras).
It is shown that when both $A$ and $B$ are flat over $\mathbb{Z}_p$ and
$A \to B$ is $p$-completely locally complete intersection, then the natural map
above is an isomorphism \cite[Theorem 3.27]{Bha12}.
For our purpose we shall be interested in the situation where $B$ is formally smooth
over $A/I$; in this situation we cannot invoke the theorem of loc.~cit.\ to conclude that the natural
map is an isomorphism.
In fact, when $B = A/I$ the left hand side is $\dR_A(I)^\wedge$ and the right hand side
is the classical $p$-adic completion of the PD envelope of $A$ along $I$ (compatible with the natural PD structure on $(A, (p))$),
denoted $\mathcal{A}$\footnote{This notation agrees with the previous subsection, where we assumed $(A,I)$ to be a transversal prism.}.
These two need not be the same, e.g.~take $A = \mathbb{Z}_p$ and $I = (p)$,
then $\dR_A(I)^\wedge = \mathbb{Z}_p\llangle T \rrangle^\wedge/(T-p)$ but $\mathcal{A} = \mathbb{Z}_p$.
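To see the discrepancy concretely, write $\gamma_i(T) = T^i/i!$.
Since $p$ already admits divided powers in $\mathbb{Z}_p$, the change of variable $T \mapsto T + p$ identifies the left hand side with the $p$-completion of $\mathbb{Z}_p[\gamma_i(T) : i \geq 1]/(T)$, in which
\[
p \cdot \gamma_p(T) = T \cdot \gamma_{p-1}(T) = 0 \quad \text{while} \quad \gamma_p(T) \neq 0;
\]
one can check that this $p$-torsion class survives completion, so $\dR_A(I)^\wedge$ has nonzero $p$-torsion whereas $\mathcal{A} = \mathbb{Z}_p$ does not.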
However this turns out to be the only problem.
\begin{proposition}
\label{dR and crys}
Let $B$ be an $A/I$-algebra.
(1) If $B$ is formally smooth over $A/I$, then we have a natural identification
\[
\mathrm{R\Gamma_{crys}}(B/A) \cong \mathrm{R\Gamma_{crys}}(B/\mathcal{A}),
\]
where the right hand side is the usual crystalline cohomology of $\Spf(B)$ over
the PD base $\mathcal{A}$.
(2) There is a natural map
\[
\mathrm{Comp}_{B/A} \colon \dR_{B/A}^\wedge \widehat\otimes_{\dR_A(I)^\wedge} \mathcal{A}
\to \mathrm{R\Gamma_{crys}}(B/\mathcal{A}),
\]
which is functorial in $A/I \to B$.
(3) If $B$ is formally smooth over $A/I$, then the above is an isomorphism.
\end{proposition}
\begin{proof}
(1) is an easy consequence of the fact that $B$ is an $A/I$-algebra.
In fact we only need $A/I \to B$ to be a local complete intersection.
Indeed we use the \v{C}ech--Alexander complex to compute both crystalline cohomologies,
and one reduces to the following:
Let $P$ be a polynomial $A$-algebra with a surjection $P \twoheadrightarrow B$ of $A$-algebras,
then there is a naturally induced surjection $P \otimes_A \mathcal{A} \twoheadrightarrow B$ of $\mathcal{A}$-algebras,
and we have an identification of PD envelopes
\[
D_{(A,(p), \gamma)}(P \twoheadrightarrow B) = D_{(\mathcal{A},\mathcal{I}, \gamma)}(P \otimes_A \mathcal{A} \twoheadrightarrow B).
\]
(2) The functoriality of Bhatt's $\mathcal{C}\mathrm{omp}_{B/A}$ asserts that the map
is compatible with the natural map $\dR_A(I)^\wedge \to \mathcal{A}$,
hence we get our natural map $\mathrm{Comp}_{B/A}$.
(3) Choose a formal lift $\tilde{B}$ over $A$ (note that $A$ is $(p,I)$-complete).
By the functoriality of Bhatt's
$\mathcal{C}\mathrm{omp}_{B/A}$, we get the following commutative diagram:
\[
\xymatrix{
\dR_{\tilde{B}/A}^\wedge \widehat\otimes_A \mathcal{A} \ar[r] \ar[d] &
\mathrm{R\Gamma_{crys}}(\tilde{B}/A) \widehat\otimes_A \mathcal{A} \ar[d] \\
\dR_{B/A}^\wedge \widehat\otimes_{\dR_A(I)^\wedge} \mathcal{A} \ar[r] &
\mathrm{R\Gamma_{crys}}(B/\mathcal{A}).
}
\]
The top horizontal arrow is an isomorphism by Berthelot's
de Rham-crystalline comparison.
The left vertical arrow is an isomorphism by the K\"{u}nneth formula of derived de Rham
complex: $\dR_{\tilde{B}/A}^\wedge \widehat\otimes_A \dR_A(I)^\wedge \cong \dR_{B/A}^\wedge$.
The right vertical arrow is an isomorphism by base change formula of crystalline cohomology.
Therefore we conclude that the bottom horizontal arrow, which is our $\mathrm{Comp}_{B/A}$,
must also be an isomorphism.
\end{proof}
The above proposition and Bhatt's results discussed before suggest that the derived de Rham
complex is a substitute for crystalline cohomology.
Inspired by this philosophy, let us show below that the derived de Rham complex only
``depends on the reduction mod $p$ of the input algebra''.
We first need to introduce some notation: denote the $p$-adic derived de Rham complex
$\dR_{\mathbb{F}_p/\mathbb{Z}_p}^\wedge$ by $D$.
Bhatt's result implies that the natural map $\mathbb{Z}_p \to D$ admits a retraction $D \to \mathbb{Z}_p$.
In \Cref{example computing Frobenius on dR} (1) below,
one finds a detailed description of $D$.
\begin{remark}
In fact, one can show that $D$ is the $p$-complete PD envelope of $\mathbb{Z}_p$ along the ideal $(p)$.
Moreover under this identification one can easily see that the retraction above is unique,
and is given by the fact that there is a unique PD structure on $(\mathbb{Z}_p, (p))$ (as $\mathbb{Z}_p$ has no $p$-torsion).
Notice that when taking PD envelope, one has to fix a PD base ring,
and we always take it to be the trivial PD ring $(\mathbb{Z}_p, (0), \gamma_{\mathrm{triv}})$
when we say PD envelope without mentioning a PD base ring.
\end{remark}
\begin{proposition}
\label{dependence on special fiber}
Let $R$ be a ring with its derived $p$-completion $R^\wedge$,
let $B$ be a simplicial $R$-algebra. Then there is a natural isomorphism:
\[
\dR_{\mathrm{Kos}(B; p)/R}^\wedge \widehat{\otimes}_{D} \mathbb{Z}_p
\cong \dR_{B/R}^\wedge
\]
which is functorial in $R \to B$.
\end{proposition}
Here the map $D \to \dR_{\mathrm{Kos}(B; p)/R}^\wedge$ is induced by the following natural diagram
\[
\xymatrix{
\mathbb{F}_p \ar[r] & \mathrm{Kos}(B;p) = B \otimes_{\mathbb{Z}} \mathbb{F}_p \\
\mathbb{Z} \ar[r] \ar[u] & R. \ar[u]
}
\]
\begin{proof}
This follows from the K\"{u}nneth formula of derived de Rham complex:
\[
\dR_{\mathrm{Kos}(B; p)/R}^\wedge \cong \dR_{B/R}^\wedge \widehat{\otimes}_R \dR_R(p)^\wedge,
\]
and the base change formula $\dR_R(p)^\wedge \cong D \widehat{\otimes}_{\mathbb{Z}_p} R^\wedge$
as $\mathrm{Kos}(R; p) = R \otimes_\mathbb{Z} \mathbb{F}_p$.
\end{proof}
\subsection{Frobenii}
\label{Frobenii}
Let $A$ be a $p$-torsionfree $\delta$-ring.
Using \Cref{dependence on special fiber} we can define a Frobenius action on $\dR_{B/A}^\wedge$
which is functorial in $(A, \varphi_A)$ and the $A$-algebra $B$.
\begin{construction}
\label{Frobenius construction}
Let $A$ be a $p$-torsionfree $\delta$-ring and $B$ a simplicial $A$-algebra.
Recall there is a functorial endomorphism on simplicial $\mathbb{F}_p$-algebras given by left Kan extending the usual Frobenius
on polynomial $\mathbb{F}_p$-algebras, see \cite[Construction 2.2.6]{DAG13}.
For discrete $\mathbb{F}_p$-algebras, it is just the usual Frobenius.
We may view $B/p = B \otimes_{A} A/p$; using the fact that $\varphi_A$ on $A$ is a lift of the Frobenius on $A/p$, we get the following commutative diagram:
\[
\xymatrix{
B/p \ar[r]^-{\varphi_{B/p}} & B/p \\
A \ar[r]^-{\varphi_A} \ar[u] & A \ar[u],
}
\]
it induces a Frobenius map $\tilde{\varphi} \colon \dR_{\mathrm{Kos}(B;p)/A}^\wedge \to \dR_{\mathrm{Kos}(B;p)/A}^\wedge$ which is functorial in $(A \to B, \varphi_A)$.
The analogous diagram for $\mathbb{Z} \to \mathbb{F}_p$ (where $A = B = \mathbb{Z}_p$) induces the identity on $D$, hence we have a commutative diagram:
\[
\xymatrix{
\dR_{\mathrm{Kos}(B;p)/A}^\wedge \ar[rr]^-{\tilde{\varphi}} & & \dR_{\mathrm{Kos}(B;p)/A}^\wedge \\
& D. \ar[lu] \ar[ru] &
}
\]
Finally we define a Frobenius map $\varphi_{B/A} \colon \dR_{\mathrm{Kos}(B; p)/A}^\wedge \widehat{\otimes}_{D} \mathbb{Z}_p \cong \dR_{B/A}^\wedge
\xrightarrow{\tilde{\varphi} \widehat{\otimes}_{\mathrm{id}_D} {\mathrm{id}_{\mathbb{Z}_p}}} \dR_{B/A}^\wedge$
which is functorial in $(A \to B, \varphi_A)$.
\end{construction}
\begin{remark}
(1)
It is conceivable that the above works for general $\delta$-rings.
In private communication we learned from Bhatt that a $\delta$-structure on a ring $A$ is equivalent to specifying a commutative diagram as follows:
\[
\xymatrix{
A/p \ar[r]^-{\varphi_{A/p}} & A/p \\
A \ar[r]^-{\varphi_A} \ar[u] & A \ar[u],
}
\]
note that here $A/p$ is a simplicial $\mathbb{F}_p$-algebra that has nontrivial $\pi_1$ when $A$ is not $p$-torsionfree.
Hence for any simplicial $A$-algebra $B$,
one can also define a Frobenius on $\dR_{B/A}^\wedge$ as above.
However we do not work out the full story here, as we do not need such generality
for our intended applications later.
(2) By letting $n \to \infty$ in
\cite[Proposition 3.47]{Bha12}, one gets another construction of Frobenius on
$\dR_{A/\mathbb{Z}_p}^\wedge$ for any $\mathbb{Z}_p$-algebra $A$.
However later on we shall see in \Cref{functorial endo remark} that there is only one
Frobenius that is functorial enough (in a suitable sense) on $p$-completed
derived de Rham complexes when the base algebra is a
$p$-torsionfree $\delta$-algebra.
In particular, our construction above agrees with Bhatt's whenever both are defined
(i.e.~when the base is $\mathbb{Z}_p$).
\end{remark}
Let us work out some examples.
\begin{example}
\label{example computing Frobenius on dR}
(1) As an illustrative example, let us contemplate the case $A = \mathbb{Z}_p$ and $B = \mathbb{F}_p$.
We have a derived pushout square of rings:
\[
\xymatrix{
\mathbb{Z}_p \ar[r] & B \\
\mathbb{Z}_p[T] \ar[u]^-{T \mapsto p} \ar[r]^-{T \mapsto 0} & A \ar[u],
}
\]
moreover the bottom map is a map of $\delta$-rings if we give
$\mathbb{Z}_p[T]$ a $\delta$-structure with $\varphi(T) = T^p$.
Then we get a pushout diagram of derived de Rham complex which says
$D \cong \dR_{\mathbb{Z}_p/\mathbb{Z}_p[T]}^\wedge \widehat{\otimes}_{\mathbb{Z}_p[T]} \mathbb{Z}_p$.
The latter is the same as
$\mathbb{Z}_p\llangle T \rrangle^\wedge/(T)$ where we have used the fact that
$p$ has divided powers in $\mathbb{Z}_p$ (hence adjoining divided powers of
$T - p$ is the same as adjoining divided powers of $T$).
It is easy to see that the Frobenius defined on
$\dR_{\mathbb{Z}_p/\mathbb{Z}_p[T]}^\wedge \cong
\mathbb{Z}_p\llangle T \rrangle^\wedge$ is induced by $T \mapsto T^p$
because it has to be compatible with the Frobenius on $\mathbb{Z}_p[T]$.
Therefore the induced Frobenius on $\dR_{B/A}^\wedge$
is \emph{not} the identity.
This might be surprising, as one would na\"{i}vely think that the Frobenius
on the pair $(\mathbb{Z}_p, \mathbb{F}_p)$ is the identity, hence must induce
the identity on the derived de Rham complex.
However the Frobenius on
$\mathbb{F}_p \otimes_{\mathbb{Z}_p} \mathbb{F}_p$ is \emph{not}
the identity (as Frobenius always kills cohomology classes in negative degrees,
see \cite[Remark 2.2.7]{DAG13}),
and it is this Frobenius that induces a map on the derived de Rham complex.
On a related note, Bhatt has pointed out to us that the identity map
is also \emph{not} a lift of Frobenius on
$D \cong \mathbb{Z}_p\llangle T \rrangle^\wedge/(T)$.
(2) Let $J \subset A$ be an ideal which is Zariski locally on $\Spec(A)$ a
colimit of ideals generated by a $p$-completely regular sequence.
Then by \Cref{three envelopes} (1), we have an identification:
$\dR_{A}(J)^\wedge \cong D_A(J)^\wedge$.
Since the Frobenius map obtained is compatible with $\varphi_A$ and $D_A(J)^\wedge$
is $p$-torsionfree, we see that this pins down the Frobenius on $\dR_A(J)^\wedge$:
any $\gamma_n(f)$ with $f \in J$ must be sent to $\frac{\varphi_A(f)^n}{n!}$.
Note that $f^p$ is divisible by $p$ in $D_A(J)^\wedge$, hence
$\varphi_A(f)$ is divisible by $p$ in $D_A(J)^\wedge$.
(3) Let $A$ be $p$-complete, and let $B = A \langle X^{1/p^{\infty}} \rangle$.
Since $A \to B$ is relatively perfect
modulo $p$, there is a unique lift of Frobenius $\varphi_B$ on $B$ covering the Frobenius
on $A$ and it is given by $\varphi_B(X^i) = X^{i \cdot p}$.
By \cite[Proposition 3.4.(1)]{GL20}, we see the natural map to $0$-th
graded piece of Hodge filtration induces an isomorphism
$\dR_{B/A}^\wedge \cong B$.
Applying the functoriality of the \Cref{Frobenius construction} to the map of
triples: $(A \to B, \varphi_A) \to (B \to B, \varphi_B)$, we see that the Frobenius
on $\dR_{B/A}^\wedge \cong B$ must be $\varphi_B$.
\end{example}
When the map $A \to B$ is a surjection with good regularity properties,
we saw in \Cref{three envelopes} that one can express $\dR_{B/A}^\wedge$
in terms of prismatic envelopes. Since prismatic envelopes are $\delta$-rings, they possess a Frobenius map by design.
We can use this to give an alternative construction of the Frobenius for derived de Rham cohomology of certain regular $A$-algebras relative to $A$.
To that end, we need to first establish a sheaf property for derived de Rham cohomology.
\begin{proposition}
\label{dR sheaf}
Let $S$ be an $R$-algebra.
Assume:
\begin{itemize}
\item the cotangent complex
$\mathbb{L}_{S/R} \in D(S)$ has $p$-completely Tor amplitude in $[-1,0]$;
\item the (relative to $R/p$) Frobenius twist of $S/p$ is in $D^{\geq -m}(\mathbb{F}_p)$.
\end{itemize}
Consider the category $\mathcal{C}$ consisting of triangles
$R \to P \to S$ with $P$ an ind-polynomial $R$-algebra,
equipped with the indiscrete topology.
Let $\dR_{S/-}^\wedge$ be the sheaf that associates with any triangle
$R \to P \to S$ the complex $\dR_{S/P}^\wedge$.
Then we have:
\begin{enumerate}
\item For any $R \to P \to S$, the complex $\dR_{S/P}^\wedge$ lies in $D^{\geq -m}(R)$.
\item The natural map $\dR_{S/R}^\wedge \to \lim_{\mathcal{C}} \dR_{S/P}^\wedge$
is an isomorphism.
\item For any $R \to P \to S$ with $P \twoheadrightarrow S$ surjective, the natural
map
$\dR_{S/R}^\wedge \to \lim_{\Delta} \dR_{S/P_{\bullet}}^\wedge$
is an isomorphism.
Here $P_n \coloneqq P^{\otimes_R {(n+1)}}$ for any $[n] \in \Delta$,
with induced maps $P_n \twoheadrightarrow S$.
\end{enumerate}
\end{proposition}
\begin{proof}
We shall prove this by reduction modulo $p$.
Hence we may assume $R$ and $S$ are simplicial $\mathbb{F}_p$-algebras.
For (1) we use the conjugate filtration on the derived de Rham complex.
Since $\mathbb{L}_{S/R}$ has Tor amplitude in $[-1,0]$, so does $\mathbb{L}_{S^{(1,P)}/P}$,
where $S^{(1,P)}$ is the (relative to $P$) Frobenius twist of $S$.
This estimate shows that the graded pieces of the conjugate filtration
have Tor amplitude at least $0$ over $S^{(1,P)}$.
Since $S^{(1)}$ is assumed to be in $D^{\geq -m}(\mathbb{F}_p)$ and the relative Frobenius
for $P$ is flat,
we see that all the graded pieces of the conjugate filtration live in
$D^{\geq -m}(R)$.
Note that $P \to S$ is surjective if and only if
$R \to P \to S$ is weakly final in $\mathcal{C}$.
Since these $\dR_{S/P}^\wedge$ are cohomologically uniformly bounded below,
\cite[Lecture V, Lemma 4.3]{BhaNotes18}
(see also~\cite[\href{https://stacks.math.columbia.edu/tag/07JM}{Tag 07JM}]{stacks-project}) reduces (2) to (3).
Lastly to show (3) we appeal to the conjugate filtration again.
Since the graded pieces of the conjugate filtration are
cohomologically uniformly bounded below by our proof of (1) above,
it suffices to show
$\mathbb{L}_{S^{(1)}/R} \to \lim_{\Delta} \mathbb{L}_{S^{(1,\bullet)}/P_{\bullet}}$
is an isomorphism, where $S^{(1,n)}$ is the (relative to $P_n$)
Frobenius twist of $S$.
This follows easily from the fact that $\lim_{\Delta} \mathbb{L}_{P_{\bullet}/R} \cong 0$.
\end{proof}
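For reference, the conjugate filtration invoked repeatedly above is the increasing exhaustive filtration on $\dR_{S/R}$, for simplicial $\mathbb{F}_p$-algebras $R \to S$, whose graded pieces are computed by the derived Cartier isomorphism of \cite{Bha12}:
\[
\mathrm{gr}^{\mathrm{conj}}_i \dR_{S/R} \cong \big( \wedge^i \mathbb{L}_{S^{(1)}/R} \big)[-i],
\]
viewed as modules over the Frobenius twist $S^{(1)}$; the amplitude estimates in the proof above are read off from these graded pieces.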
The above Proposition gives us a way to describe the Frobenius action of the $p$-completed
derived de Rham complex in more cases than those listed in
\Cref{example computing Frobenius on dR}.
\begin{proposition}
\label{another Frobenius}
Let $A$ be a $p$-torsionfree $p$-complete $\delta$-algebra,
and let $I \subset A$ be an ideal which is Zariski locally on $\Spec(A)$ generated by
a $p$-completely regular element.
Let $B$ be a $p$-completely smooth $A/I$-algebra.
Then we have:
\begin{enumerate}
\item For any $(p, I)$-completely
ind-polynomial $A$-algebra $P$ with a surjection $P \twoheadrightarrow B$,
the kernel $J$ is Zariski locally on $\Spf(P)$ a colimit of ideals generated by a $p$-completely
regular sequence.
\item For any such $A \to P \to B$ as in (1), the complex $\dR_{B/P}^\wedge$ is an
ordinary algebra.
\item For any $(p, I)$-completely
free $\delta$-$A$-algebra $F$ with a surjection $F \twoheadrightarrow B$,
there is a unique $\delta$-algebra structure on $\dR_{B/F}^\wedge$
compatible with that on $F$.
With this $\delta$-structure, we have an identification:
\[
\dR_{B/F}^\wedge \cong F\{\frac{\varphi_F(J)}{p}\}^\wedge.
\]
\item Consider the category $\mathcal{C}$ of all triples
$A \to F \twoheadrightarrow B$ as in (3), we have
\[
\dR_{B/A}^\wedge \cong \lim_{\mathcal{C}} \dR_{B/F}^\wedge.
\]
In fact, it suffices to take the limit over the \v{C}ech nerve of one such $F \twoheadrightarrow B$.
Together with (3) we get a natural Frobenius action on $\dR_{B/A}^\wedge$.
\item The Frobenius on $\dR_{B/A}^\wedge$ obtained in (4)
agrees with the one in \Cref{Frobenius construction}.
\end{enumerate}
\end{proposition}
The notation $F\{\frac{\varphi_F(J)}{p}\}^\wedge$ is defined analogously to
\cite[Corollary 3.14]{BS19}.
Using the fact that $J$ is Zariski locally given by an ind-$p$-completely regular ideal,
we may define $F\{\frac{\varphi_F(J)}{p}\}^\wedge$ as the gluing of the colimits
of $F\{\frac{\varphi_F(f_i)}{p}\}^\wedge$, where $(f_i)$ is the ind-regular sequence
generating $J$ on a Zariski open.
\begin{proof}
(1) follows easily from the fact that $B$ is formally smooth over $A/I$ and $I$ is
Zariski locally generated by a $p$-completely regular element.
(2) follows from the argument of \Cref{dR sheaf} (1).
Indeed we set $R = A$ and $S = B$. The Frobenius twist of $B/p$ is smooth over
$A/(\varphi_A(I),p) = A/(I^p,p)$, where the equality of ideals holds because
$\varphi_A(i) = i^p + p\delta(i)$ for $i \in I$; the latter quotient is an ordinary algebra.
Hence in our situation, we have $m=0$ in the condition of \Cref{dR sheaf}.
This shows that $\dR_{B/P}^\wedge$ is in $D^{\geq 0}$.
Using the conjugate filtration again, it is easy to see that the $p$-completed derived de Rham
complex of any surjection must be in $D^{\leq 0}$.
Hence $\dR_{B/P}^\wedge$ must in fact be an ordinary algebra.
(3) essentially follows from (1) and \Cref{three envelopes}.
Indeed, by the description of $J$ in (1), we see that $\dR_{B/F}^\wedge \cong D_F(J)^\wedge$.
Since $J$ is Zariski locally an ind-$p$-completely regular ideal, we see that
$D_F(J)^\wedge$ is $p$-torsionfree, hence having a $\delta$-structure is equivalent
to having a lift of Frobenius.
The argument in \Cref{example computing Frobenius on dR} (2) tells us
that there is at most one Frobenius structure on it compatible with that on $F$.
Lastly \Cref{three envelopes} shows that we can put a $\delta$-structure on
it by identifying
\[
\dR_{B/F}^\wedge \cong D_F(J)^\wedge \cong F\{\frac{\varphi_F(J)}{p}\}^\wedge.
\]
(4) follows from \Cref{dR sheaf} (2)-(3).
As for (5), it suffices to notice that for any of these $A \to F \twoheadrightarrow B$
the two Frobenii defined on $\dR_{B/F}^\wedge$ agree and they are
both functorial in $A \to F \twoheadrightarrow B$.
\end{proof}
The following is similar to \Cref{dR sheaf}, and will be used in the next section.
\begin{proposition}
\label{lim dR}
Let $(A,I)$ be a bounded prism. Let $R$ be a formally smooth $A/I$-algebra.
Consider $\mathcal{C}$ the category of all triples $A \to P \twoheadrightarrow R$
where $P$ is a $p$-completed polynomial algebra over $A$.
Associated with such a triple is the following diagram:
\[
\xymatrix{
A \ar[r] \ar[d] & P \ar[r] \ar[d] & F \ar[d] \\
A/I \ar[r] & R \ar[r] & S,
}
\]
where $F$ is the $p$-completed free $\delta$-$A$-algebra associated with $P$, and
$S$ is the $p$-completed tensor product $S \coloneqq R \widehat{\otimes}_{P} F$.
Then we have:
\begin{enumerate}
\item Choose an object $A \to P \twoheadrightarrow R$, consider the $n$-th self-fiber product
$A \to P^n \coloneqq P^{\hat{\otimes}_A n} \twoheadrightarrow R$ for any positive integer $n$.
Then the associated $p$-completed free $\delta$-$A$-algebra is $F^n \coloneqq F^{\hat{\otimes}_A n}$,
and we have
\[
R \widehat{\otimes}_{P^n} F^n \cong S^{\hat{\otimes}_R n},
\]
which we shall denote by $S^n$ below.
\item Choose an object $A \to P \twoheadrightarrow R$, then the natural map
\[
\dR_{R/A}^\wedge \rightarrow \lim_{[n] \in \Delta} \dR_{S^n/F^n}^\wedge
\]
is an isomorphism.
\item The natural map
\[
\dR_{R/A}^\wedge \rightarrow \lim_{\mathcal{C}} \dR_{S/F}^\wedge
\]
is an isomorphism.
\end{enumerate}
\end{proposition}
Notice that we do not need to assume $A$ to be $p$-torsionfree here.
\begin{proof}
For (1): if $P$ is obtained by $p$-completely adjoining a set $T$ of variables, then $F$ is obtained by $p$-completely adjoining
the set $\coprod_{\mathbb{N}} T$ of variables, where $t$ in the $i$-th component represents $\delta^i(x_t)$.
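Concretely, in the single-variable case this is the standard description of the free $\delta$-ring (cf.~\cite[Remark 2.7]{BS19}): if $P$ is obtained by $p$-completely adjoining one variable $x$, then
\[
F = A[x_0, x_1, x_2, \ldots]^{\wedge} \quad \text{with } x_i \text{ representing } \delta^i(x),
\]
and the corresponding lift of Frobenius is determined by $\varphi(x_i) = x_i^p + p\,x_{i+1}$.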
The statement on fiber product and the associated $F^n$ is clear.
As for the statement about $S^n$, just notice that we have the following pushout diagrams:
\[
\xymatrix{
P^n \ar[r] \ar[d] & R^n \coloneqq R^{\hat{\otimes}_A n} \ar[d] \ar[r] & R \ar[d] \\
F^n \ar[r] & S^{\hat{\otimes}_A n} \ar[r] & S^n \coloneqq S^{\hat{\otimes}_R n}.
}
\]
To prove (2), we may reduce modulo $p$.
Note that $A \to F$ and $R \to S$ are $p$-completely faithfully flat.
In a similar manner to the proof of \Cref{dR sheaf} (3),
using the conjugate filtration, the distinguished triangle of cotangent complexes,
and fpqc descent of the cotangent complex (see \cite[Theorem 3.1]{BMS2}), one can show this natural map
is an isomorphism.
(3) follows from (2) in the same way as how \Cref{dR sheaf} (2) follows from \Cref{dR sheaf} (3).
\end{proof}
\begin{remark}
\label{another Frobenius on limit dR}
Similarly to \Cref{another Frobenius}, assume $A$ is $p$-torsionfree;
then the $\dR_{S/F}^\wedge$ appearing above are discrete rings, and we can equip
them with a natural $\delta$-structure.
By the same proof as in \Cref{another Frobenius}, the induced Frobenius on $\dR_{R/A}^\wedge$
agrees with the one provided by \Cref{Frobenius construction}.
\end{remark}
Later on we shall see in \Cref{functorial endo remark} (1) that
if $(A,I)$ is a transversal prism, then there is only one
Frobenius in a strong sense. So all these different constructions must give rise to the same map.
\subsection{Na\"{i}ve comparison}
Consider the composition $f \colon A \xrightarrow{\varphi_A} A \to \mathcal{A}$; it induces a morphism of prisms, which we still denote by
$f \colon (A,I) \to (\mathcal{A},(p))$.
Let $\mathcal{X}$ be a $p$-completely smooth affine formal scheme over $\Spf(A/I)$.
Now by the base change formula for prismatic cohomology~\cite[Theorem 1.8.(5)]{BS19}, we have
\[
\mathrm{R\Gamma}_{\Prism}(\mathcal{X}/A) \widehat{\otimes}_{A,f} \mathcal{A} \cong
\mathrm{R\Gamma}_{\Prism}(\mathcal{Y}/\mathcal{A}),
\]
where $\mathcal{Y} = \mathcal{X} \times_{\mathrm{Spf}(A/I), f} \mathrm{Spec}(\mathcal{A}/p)$.
Then the crystalline comparison of prismatic cohomology~\cite[Theorem 1.8.(1)]{BS19} gives us
\[
\label{naive comparison}
\tag{\epsdice{1}}
\varphi_{\mathcal{A}}^*(\mathrm{R\Gamma}_{\Prism}(\mathcal{X}/A)
\widehat{\otimes}_{A,f} \mathcal{A}) \cong
\varphi_{\mathcal{A}}^*(\mathrm{R\Gamma}_{\Prism}(\mathcal{Y}/\mathcal{A})) \cong
\mathrm{R\Gamma_{crys}}(\mathcal{Y}/\mathcal{A}) \cong
\varphi_{\mathcal{A}}^*(\mathrm{R\Gamma_{crys}}(\mathcal{X}/\mathcal{A})).
\]
Here the last isomorphism comes from the following commutative diagram
\[
\xymatrix{
\mathcal{A} \ar[d]_{\varphi_{\mathcal{A}}} \ar@{->>}[r] & A/(I,p) \ar[d]^{f} \\
\mathcal{A} \ar@{->>}[r] & \mathcal{A}/p.
}
\]
In the following, we aim to obtain a Frobenius descent of the isomorphism obtained in~\epsdice{1}, see~\Cref{Frobenius pullback is naive}.
\section{Comparing prismatic and derived de Rham cohomology}
Let $(A,I)$ be a bounded prism.
Let $X$ be a $p$-adic formal scheme which is formally smooth over $\Spf(A/I)$.
In this section we shall establish a functorial comparison between the prismatic cohomology
$\mathrm{R\Gamma}_{\Prism}(X/A)$ and the derived de Rham cohomology
$\dR_{X/A}^{\wedge}$.
\subsection{The comparison}
In the beginning of this subsection we need to comment on an error in the construction of
\v{C}ech--Alexander complex in \cite[Construction 4.16]{BS19}.
We learned this subtlety from Bhatt, who was informed of it by Koshikawa.
The issue is as follows, with notation as in loc.~cit.: suppose $D \rightarrow D/ID \leftarrow R$
is an object in $(R/A)_{\Prism}$; then one needs to exhibit a morphism $(B\{\frac{J}{I}\}^\wedge \to D)$
in $(R/A)_{\Prism}$.
The argument went along the following lines: by the universal property it suffices to exhibit a map
$B \to D$ sending $J$ into $ID$, which amounts to filling in the following dotted arrow
(of $\delta$-rings)
\[
\xymatrix{
R \ar[r] & D/ID \\
B \ar[u] \ar@{-->}[r] & D \ar[u]
}
\]
that makes the diagram commutative.
At first sight this seems easy: as $B$ is a free $\delta$-ring on a set of variables, we just lift the
images of those variables under $B \to R \to D/ID$ to $D$ to get a map of $\delta$-rings.
But a general lift need not make the above diagram commute on the $\delta$'s of those variables.
Below we describe a fix that we learned from Bhatt.
Recall that the forgetful functor from $\delta$-$A$-algebras to $A$-algebras admits a left adjoint,
see \cite[Remark 2.7]{BS19}.
One checks the following easily:
\begin{itemize}
\item given a derived $(p,I)$-completed polynomial $A$-algebra $P$ which is freely
generated by a set of variables, applying this left adjoint yields the
derived $(p,I)$-completed free $\delta$-$A$-algebra
$F$ generated by the same set of variables;
\item this left adjoint commutes with completed tensor products.
\end{itemize}
In particular the natural map $P \to F$ is $(p,I)$-completely ind-smooth.
\begin{construction}[{\v{C}ech--Alexander complex for prismatic cohomology}]
\label{Cech--Alexander complex}
Let $R$ be a $p$-completely smooth $A/I$-algebra.
Let $P$ be a derived $(p,I)$-completed polynomial $A$-algebra along with a surjection $P \twoheadrightarrow R$,
and let $J$ be the kernel.
Associated with the triple $A \to P \twoheadrightarrow R$ is a $\delta$-$A$-algebra
$F\{\frac{JF}{I}\}^\wedge$, obtained by applying \cite[Corollary 3.14]{BS19}.
We make three claims about this construction.
\begin{claim}
\label{claim about Cech--Alexander construction}
\leavevmode
\begin{enumerate}
\item The $\delta$-$A$-algebra $F\{\frac{JF}{I}\}^\wedge$ is naturally an object in $(R/A)_{\Prism}$;
\item as such, it is weakly initial in $(R/A)_{\Prism}$; and
\item if there is a set of triples $A \to P_i \twoheadrightarrow R$, then the coproduct of the
associated $F_i\{\frac{J_iF_i}{I}\}^\wedge$ in $(R/A)_{\Prism}$ is
given by the $\delta$-$A$-algebra associated with the triple
$A \to \widehat{\otimes}_A P_i \twoheadrightarrow R$, where the second map is the completed
tensor product of the maps $P_i \twoheadrightarrow R$.
\end{enumerate}
\end{claim}
Let us postpone the verification of these claims and continue with the construction.
At this point we may simply follow the rest of \cite[Construction 4.16]{BS19}.
Form the derived $(p,I)$-completed \v{C}ech nerve $P^{\bullet}$ of $A \to P$,
and let $J^{\bullet} \subset P^{\bullet}$ be the kernel of the augmentation map $P^{\bullet} \to P \to R$.
By the first claim above,
we get a cosimplicial object $\left(F^{\bullet}\{\frac{J^\bullet F^\bullet}{I}\}^\wedge\right)$ in $(R/A)_{\Prism}$.
The third claim above shows that this is the \v{C}ech nerve of $F\{\frac{JF}{I}\}^\wedge$ in $(R/A)_{\Prism}$,
and according to the second claim the object $F\{\frac{JF}{I}\}^\wedge$ covers the final object of the topos
$\mathrm{Shv}((R/A)_{\Prism})$.
Therefore $\Prism_{R/A}$ is computed by $F^{\bullet}\{\frac{J^\bullet F^\bullet}{I}\}^\wedge$.
This construction commutes with base change of the prism $(A,I)$.
When $(A,I)$ is fixed, this construction can be carried out in a way which is strictly functorial in $R$,
by setting $P$ to be the completed polynomial $A$-algebra generated by the underlying set of $R$.
\end{construction}
\begin{proof}[Proof of {\Cref{claim about Cech--Alexander construction}}]
Proof of (1): form the following pushout diagram:
\[
\xymatrix{
R \ar[r] & S \\
P \ar[u] \ar[r] & F \ar[u].
}
\]
Denote $F\{\frac{JF}{I}\}^\wedge$ by $C^0$; by its defining property there is a natural map
$S \cong F/JF \to C^0/IC^0$. Hence $C^0$ gives rise to a diagram $(C^0 \to C^0/IC^0 \leftarrow S \leftarrow R)$,
which is an object in $(R/A)_{\Prism}$.
Proof of (2) and (3): this follows from chasing through universal properties.
Let $(D \to D/ID \leftarrow R)$ be an object in $(R/A)_{\Prism}$; we have the following chain of equivalences:
\begin{align*}
F\{\frac{JF}{I}\}^\wedge \to D \text{ in } (R/A)_{\Prism} \iff
\text{ a map of $\delta$-$A$-algebras } F \to D \text{ such that } JF \text{ is mapped into } ID \\
\iff \text{ a map of $A$-algebras } P \to D \text{ such that } J \text{ is mapped into } ID.
\end{align*}
It is easy to see that the last statement is equivalent to filling in the following dotted arrow
\[
\xymatrix{
R \ar[r] & D/ID \\
P \ar[u] \ar@{-->}[r] & D \ar[u]
}
\]
as $A$-algebras, making the diagram commutative.
Note that there is no requirement coming from the $\delta$-ring structure here.
Now one checks the claims (2) and (3) easily.
\end{proof}
With the above preparatory discussion, we are ready to compare prismatic cohomology
and derived de Rham cohomology.
The key computation we need is the following.
\begin{lemma}[Comparing prismatic and PD envelopes for regular sequences]
\label{envelopes for regular sequence}
Let $B$ be a $(p,I)$-completely flat $\delta$-$A$-algebra, let $f_1, \ldots, f_r \in B$
be a $(p,I)$-completely regular sequence.
Write $J = (I, f_1, \ldots, f_r) \subset B$.
Then we have a natural identification of $p$-completely flat $\dR_{A}(I)^\wedge$-algebras:
\[
B\{\frac{J}{I}\}^{\wedge} \widehat{\otimes}_{B,\varphi_B} B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge
\cong \dR_{B}(J)^\wedge.
\]
\end{lemma}
Here $B\{\frac{J}{I}\}^{\wedge}$ is as in~\cite[Proposition 3.13]{BS19};
it is $(p,I)$-completely flat over $A$.
Let us clarify the various completions involved on the left hand side.
First we take the derived $(p,I)$-completed tensor product,
then the derived $(p,\varphi(I))$-completed tensor product,
which is the same as the derived $p$-completed tensor product since $\varphi(I) = (p)$ in
$\pi_0(\dR_A(I)^\wedge)$.
\begin{proof}
Recall that, as in the proof of \cite[Proposition 3.13]{BS19} and as explained in Remark \ref{rem-2.7}, $B\{\frac{J}{I}\}^{\wedge}$
is constructed as the $p$-completed pushout of the following diagram:
\[
\xymatrix{
A\{x_1, \ldots, x_r\} \ar[r]^-{x_i \mapsto f_i} \ar[d] & B \\
A\{x_1, \ldots, x_r\}\{\frac{x_i}{I}\}. &
}
\]
The left hand side in this Lemma is therefore given by pushing out the above diagram
further along $f_B \colon B \xrightarrow{\varphi_B} B \to B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge$.
The composition
$A\{x_1, \ldots, x_r\} \xrightarrow{x_i \mapsto f_i} B \xrightarrow{f_B}
B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge$ can now be factored as
$A\{x_1, \ldots, x_r\} \xrightarrow{\varphi_A, x_i \mapsto \varphi(z_i)} A\{z_1, \ldots, z_r\}
\to \dR_{A\{z_1, \ldots, z_r\}}(I)^\wedge \to B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge$,
where in the last map $z_i$ is sent to $f_i \otimes 1$.
Hence the left hand side becomes the $p$-completed
outer pushout of the following diagram with solid arrows:
\[
\xymatrix{
A\{x_1, \ldots, x_r\} \ar[r] \ar[d] & \dR_{A\{z_1, \ldots, z_r\}}(I)^\wedge
\ar[r] \ar@{.>}[d] & B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge \ar@{.>}[d] \\
A\{x_1, \ldots, x_r\}\{\frac{x_i}{I}\} \ar@{.>}[r] & \dR_{A\{z_1, \ldots, z_r\}}(I, z_1, \ldots z_r) ^\wedge
\ar@{.>}[r] & \dR_{B \widehat{\otimes}_{A} \dR_{A}(I)^\wedge}(f_i \otimes 1)^\wedge
\cong \dR_{B}(J)^\wedge.
}
\]
Using the (multi-variable version of)~\Cref{naive base change}, we see that the left square above
is a $p$-completed pushout.
The base change property of the derived de Rham complex now shows that the right square above
is also a $p$-completed pushout.
Here the isomorphism at the bottom right corner follows from the fact that
$(I, f_1, \ldots, f_r)$ is a Koszul regular sequence in $B$.
\end{proof}
Just like how~\cite[Proposition 3.13]{BS19} implies~\cite[Corollary 3.14]{BS19},
our~\Cref{envelopes for regular sequence} above gives us the following.
\begin{lemma}
\label{weakly initial comparison}
Let $R$ be a $p$-completely smooth $A/I$-algebra.
Let $P$ be a $p$-completed polynomial algebra over $A$,
and let $P \twoheadrightarrow R$ be a surjection of $A$-algebras
with kernel $J$.
Consider the following diagram:
\[
\xymatrix{
A/I \ar[r] & R \ar[r] & S \\
A \ar[r] \ar[u] & P \ar[r] \ar[u] & F \ar[u] \\
}
\]
where $F$ is the $p$-completed free $\delta$-$A$-algebra associated with $P$, and
$S$ is the $p$-completed tensor product $S \coloneqq R \widehat{\otimes}_{P} F$.
Then we have a natural identification of $p$-completely flat $\dR_{A}(I)^\wedge$-algebras:
\[
F\{\frac{J \cdot F}{I}\}^{\wedge} \widehat{\otimes}_{F,\varphi_F} F \widehat{\otimes}_{A} \dR_{A}(I)^\wedge
\cong \dR_{S/F}^\wedge.
\]
\end{lemma}
\begin{proof}
Zariski locally on $\Spf(P)$ and $\Spf(F)$, the kernels $J$ and $J \cdot F$
are colimits of the form considered in \Cref{envelopes for regular sequence}.
Also note that $F/(J \cdot F) \cong S$; by definition we have $\dR_F(J \cdot F)^\wedge \cong \dR_{S/F}^\wedge$.
Since the formation of the $p$-complete derived de Rham complex commutes with taking
$p$-complete colimits (of the algebra over $A$) and satisfies descent for $p$-completely flat covers,
we may glue the local isomorphisms obtained in
\Cref{envelopes for regular sequence} and take colimits to get our identification.
\end{proof}
Using this comparison of prismatic envelope and derived de Rham complex, we get a comparison
between prismatic and derived de Rham cohomology as follows.
\begin{theorem}
\label{comparing pris and crys}
Let $(A,I)$ be a bounded prism.
For any $p$-completely smooth $A/I$-algebra $R$, there is a natural isomorphism
in $\mathrm{CAlg}(A)$:
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \cong
\dR_{R/A}^\wedge,
\]
which is functorial in $A/I \to R$ and satisfies base change in $(A,I)$.
\end{theorem}
Let us emphasize again that when $(A, I)$ is transversal, this follows from \cite[Theorem 5.2]{BS19}.
\begin{proof}
Let us first construct the desired natural morphism
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \rightarrow
\dR_{R/A}^\wedge.
\]
Given any triple $A \to P \twoheadrightarrow R$ as in the setting of \Cref{weakly initial comparison},
we have a natural morphism
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \rightarrow
F\{\frac{J \cdot F}{I}\}^{\wedge} \widehat{\otimes}_{F,\varphi_F} F \widehat{\otimes}_{A}
\dR_A(I)^\wedge \cong \dR_{S/F}^\wedge,
\]
which is functorial in $A \to P \twoheadrightarrow R$.
By \Cref{lim dR} (3), the limit of the right hand side over
all triples $A \to P \twoheadrightarrow R$ is precisely $\dR_{R/A}^\wedge$,
hence we get the desired natural morphism.
It is functorial in $A/I \to R$ and satisfies base change in $(A,I)$.
Now we need to show that the natural arrow constructed above is an isomorphism.
Let us make a few reductions.
It suffices to check that this is an isomorphism
after a faithfully flat cover, and since both sides commute
with base change in $A$, we may Zariski localize on $A$;
hence we may first reduce to the case where $A$ is oriented, i.e.~$I = (d)$.
Observe that both sides are the left Kan extension of their restriction
to the category of polynomial $A/I$-algebras,
so it suffices to show that
the above arrow is a natural isomorphism for algebras of the form
$R = A/I[X_1, \ldots, X_n]^{\wedge}$,
which is the base change of $p$-complete polynomial algebras
over the universal oriented prism.
Hence we can reduce further to the case where $A$ is the universal oriented prism;
in particular we may assume that $(A,I)$ is transversal and that $\varphi_A$ is flat.
We now prove the statement under these assumptions.
Choose a $(p,I)$-completed polynomial $A$-algebra $P$
with a surjection of $A$-algebras
$P \twoheadrightarrow R$, and form the cosimplicial object
$\left(F^{\bullet}\{\frac{J^\bullet F^\bullet}{I}\}^\wedge\right)$ in $(R/A)_{\Prism}$
computing $\Prism_{R/A}$ as in \Cref{Cech--Alexander complex}.
Notice that we have an identification of cosimplicial $(p,I)$-complete algebras
$A \xrightarrow{\simeq} F^{\bullet}$.
Since we have reduced ourselves to the case where $(A,I)$ is transversal and $\varphi_A$ is flat,
using~\Cref{calA Tor dim 1}, the natural morphism considered above gives rise to the following identification
\begin{align*}
\label{long chain of identifications comparing prism and dR}
\tag{\epsdice{2}}
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \cong
\lim_{\Delta} \left( (F^{\bullet}\{ \frac{F^{\bullet} \cdot J^{\bullet}}{I} \}^{\wedge})
\widehat{\otimes}_{A,\varphi_A} A \widehat{\otimes}_A \dR_A(I)^\wedge
\right)
\\
\cong \lim_{\Delta} \left((F^{\bullet}\{ \frac{F^{\bullet} \cdot J^{\bullet}}{I} \}^{\wedge})
\widehat{\otimes}_{F^{\bullet}, \varphi_{F^{\bullet}}}
F^{\bullet} \widehat{\otimes}_A \dR_A(I)^\wedge \right)
\cong \lim_{[n] \in \Delta} \dR_{S^{\hat{\otimes}_R n}/F^n}^\wedge \cong
\dR_{R/A}^\wedge
\end{align*}
as desired.
Let us comment on the identifications above.
Here we have used the cosimplicial replacement
$(A, \varphi_A) \xrightarrow{\simeq} (F^{\bullet}, \varphi_{F^{\bullet}})$ in the second identification.
The second-to-last identification is provided by \Cref{weakly initial comparison},
and the last identification is because of \Cref{lim dR}.
\end{proof}
\begin{remark}
\label{compatible with Frobenii}
In this paper we have only defined Frobenius action on $\dR_{-/A}^\wedge$ under the assumption
of $A$ being a $p$-torsionfree $\delta$-ring.
Now suppose $(A,I)$ is a $p$-torsionfree prism,
by \Cref{another Frobenius on limit dR}, we see that
the chain of identifications in (\epsdice{2})
is compatible with Frobenius.
Consequently, the identification in \Cref{comparing pris and crys} is compatible with Frobenius
in a functorial manner.
We expect, however, that one can remove the $p$-torsionfree condition with additional work
developing a framework of ``derived $\delta$-rings''.
Since the primary interest of this paper is in the case of $p$-torsionfree prisms,
we choose not to pursue that level of generality here.
\end{remark}
Let us now deduce two consequences of \Cref{comparing pris and crys}.
\begin{corollary}
\label{comparying pris and crys for transversal prisms}
Let $(A,I)$ be a bounded prism.
For any $p$-completely smooth $A/I$-algebra $R$, there is a natural isomorphism
in $\mathrm{CAlg}(A)$:
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \mathcal{A} \cong
\mathrm{R\Gamma_{crys}}(R/\mathcal{A}),
\]
which is functorial in $A/I \to R$ and satisfies base change in $(A,I)$.
\end{corollary}
\begin{proof}
This follows from \Cref{comparing pris and crys}: simply base change along the morphism
$\dR_A(I)^\wedge \to \mathcal{A}$, and by \Cref{dR and crys} we have
\[
\dR_{R/A}^\wedge \widehat{\otimes}_{\dR_A(I)^\wedge} \mathcal{A} \cong \mathrm{R\Gamma_{crys}}(R/\mathcal{A}).
\]
\end{proof}
\begin{remark}
\label{Frobenius pullback is naive}
By a diagram chase, one verifies that
the following diagram of isomorphisms:
\[
\xymatrix{
\varphi_{\mathcal{A}}^*\left(\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \mathcal{A}\right) \ar[r]^-{\alpha} \ar[d]^-{\beta} &
\varphi_{\mathcal{A}}^*\left(\mathrm{R\Gamma_{crys}}(R/\mathcal{A})\right) \ar[d]^-{\epsilon} \\
\varphi_{\mathcal{A}}^*\left(\mathrm{R\Gamma_{\Prism}}((R \widehat{\otimes}_{A, \varphi_A} \mathcal{A})/\mathcal{A}) \right) \ar[r]^-{\gamma} &
\mathrm{R\Gamma_{crys}}(\left(R/p \otimes_{A/(p, I), \varphi_A} A/(p, I)\right)/\mathcal{A})
}
\]
is commutative, since all comparisons here are expressed in terms of various explicit
envelopes.
Here these arrows are given by:
\begin{enumerate}
\item $\alpha$ is Frobenius pullback of the arrow in~\Cref{comparying pris and crys for transversal prisms};
\item $\beta$ is the base change of prisms $\varphi_A \colon (A,I) \to (\mathcal{A},p)$;
\item $\gamma$ is the crystalline comparison for crystalline prisms~\cite[Theorem 5.2]{BS19};
\item $\epsilon$ is the base change of crystalline cohomology.
\end{enumerate}
\end{remark}
A bounded prism $(A,I)$ is called a \emph{PD prism}, if there is a PD structure $\gamma$ on $I$, compatible with the canonical one on $(p)$.
\begin{corollary}
\label{comparying pris and crys for PD prisms}
Let $(A,I,\gamma)$ be a bounded PD prism.
Then for any $p$-completely smooth $A/I$-algebra $R$, there is a natural isomorphism
in $\mathrm{CAlg}(A)$:
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\cong \mathrm{R\Gamma_{crys}}(R/(A,I,\gamma)),
\]
which is functorial in $A/I \to R$ and satisfies base change in $(A,I)$.
\end{corollary}
Here $\mathrm{R\Gamma_{crys}}(-/(A,I,\gamma))$ denotes the crystalline cohomology
with respect to the $p$-adic PD base $(A,I,\gamma)$.
\begin{proof}
The additional PD structure gives us a section $\mathcal{A} \to A$, which makes the composition
\[
A \to \dR_A(I)^\wedge \to \mathcal{A} \to A
\]
the identity.
Taking the functorial isomorphism in \Cref{comparying pris and crys for transversal prisms}
and base changing further along $\mathcal{A} \to A$ gives us
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A \cong
\mathrm{R\Gamma_{crys}}(R/\mathcal{A}) \widehat{\otimes}_{\mathcal{A}} A,
\]
and the latter is naturally isomorphic to $\mathrm{R\Gamma_{crys}}(R/(A,I,\gamma))$
due to base change in crystalline cohomology theory.
\end{proof}
\begin{remark}
(1) Any derived $p$-complete $\delta$-ring $A$ with bounded $p$-torsion together with the ideal $(p)$
is a PD prism. In this situation, our \Cref{comparying pris and crys for PD prisms}
is simply the crystalline comparison in \cite[Theorem 1.8.(1)]{BS19}.
(2) The left hand side of this comparison does not depend on the PD structure $\gamma$ on $I$,
whereas the right hand side \emph{a priori} does.
Therefore this comparison tells us that the right hand side also does not depend on the
PD structure $\gamma$.
\end{remark}
We can ``globalize'' these comparisons to general quasi-compact quasi-separated smooth formal schemes
over $\Spf(A/I)$.
\begin{theorem}
\label{global comparison}
Let $(A,I)$ be a bounded prism.
Let $X \to \Spf(A/I)$ be a quasi-compact quasi-separated smooth morphism of formal schemes.
Then we have natural isomorphisms in $\mathrm{CAlg}(A)$:
\[
\mathrm{R\Gamma_{\Prism}}(X/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \cong
\mathrm{R\Gamma}(X, \dR_{-/A}^\wedge);
\]
and
\[
\mathrm{R\Gamma_{\Prism}}(X/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \mathcal{A} \cong
\mathrm{R\Gamma_{crys}}(X/\mathcal{A}).
\]
If $(A,I, \gamma)$ is a PD prism, then we have a natural isomorphism in $\mathrm{CAlg}(A)$:
\[
\mathrm{R\Gamma_{\Prism}}(X/A) \widehat{\otimes}_{A,\varphi_A} A
\cong \mathrm{R\Gamma_{crys}}(X/(A,I,\gamma)).
\]
All the isomorphisms above satisfy base change in $(A,I)$.
Moreover, if $X$ is also proper over $\Spf(A/I)$, then all the completed tensor products above
may be replaced by tensor products.
\end{theorem}
\begin{proof}
Since $X$ is assumed to be quasi-compact and quasi-separated, these cohomologies
are computed by a finite limit of the corresponding cohomologies of affine opens of $X$.
Because completed tensor products commute with finite limits, the comparisons here follow from
\Cref{comparing pris and crys}, \Cref{comparying pris and crys for transversal prisms},
and \Cref{comparying pris and crys for PD prisms}.
To justify replacing the completed tensor products with ordinary tensor products, just note that $\mathrm{R\Gamma_{\Prism}}(X/A)$
is a perfect complex of $A$-modules for smooth proper $X \to \Spf(A/I)$,
see the last sentence of \cite[Theorem 1.8]{BS19}.
\end{proof}
\subsection{Functorial endomorphisms of derived de Rham complex}
Throughout this subsection, we assume $(A,I)$ to be a transversal prism,
in particular we have $\dR_A(I)^\wedge \cong \mathcal{A}$
and $\dR_{R/A}^\wedge \cong \mathrm{R\Gamma_{crys}}(R/\mathcal{A})$
where $R$ is any $p$-adic formally smooth $A/I$-algebra, see \Cref{dR and crys}.
In this subsection, we aim to understand all
functorial endomorphisms of the derived de Rham complex functor
under this transversality assumption.
In particular we shall see that the functorial isomorphism
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \dR_A(I)^\wedge \rightarrow
\dR_{R/A}^\wedge,
\]
appearing in~\Cref{comparing pris and crys} is unique
if we assume $(A,I)$ to be a transversal prism.
In order to show this, we need to first extend the natural isomorphism
to a larger class of $A/I$-algebras.
\begin{construction}[{cf.~\cite[Construction 7.6]{BS19} and~\cite[Example 5.12]{BMS2}}]
\label{derived prismatic construction}
Fix a bounded prism $(A,I)$, consider the functor
$R \mapsto \dR_{R/A}^{\wedge}$
on $p$-completely smooth $A/I$-algebras $R$ valued in the category of commutative
algebras in the $\infty$-category of $p$-complete objects in $\mathcal{D}(A)$.
Left Kan extend it to all derived $p$-complete simplicial $A/I$-algebras;
the result is nothing but the $p$-adic derived de Rham complex relative to $A$,
still denoted by $\dR_{R/A}^{\wedge}$.
Let us record some properties of this construction:
\begin{enumerate}
\item Since $R$ is an $A/I$-algebra, the $\dR_{R/A}^{\wedge}$ is naturally a
$\dR_{(A/I)/A}^{\wedge}$-algebra.
Hence we may actually view the functor as taking values in the category
$\mathrm{CAlg}(\dR_A(I)^\wedge)$.
\item The formation of $\dR_{R/A}^{\wedge}$ commutes with base change in $A$.
\item Below we shall see that, following the reasoning of \cite[Theorem 3.1 and Example 5.12]{BMS2},
the association $R \mapsto \dR_{R/A}^{\wedge}$ defines
a sheaf on the relative quasisyntomic site $\mathrm{qSyn}_{A/I}$.
\item By left Kan extending the natural isomorphism obtained in \Cref{comparing pris and crys},
we get an isomorphism of sheaves:
\[
\Prism^{(1)}_{R/A} \widehat{\otimes}_A \dR_A(I)^\wedge
\cong \dR_{R/A}^{\wedge},
\]
which is compatible with base change in $A$.
Here $\Prism^{(1)}_{R/A} \coloneqq \Prism_{R/A} \widehat{\otimes}_{A,\varphi_A} A$
is the Frobenius pullback of the derived prismatic cohomology.
\item Moreover if we assume that $(A,I)$ is a transversal prism,
then for any $R$ which is large quasisyntomic over $A/I$,
the value $\dR_{R/A}^{\wedge}$ is $p$-completely flat over $\mathcal{A}$
and lives in cohomological degree $0$.
\end{enumerate}
Let us justify the claim (3) above.
\begin{proposition}
\label{sheaf property on relative to A dR}
The association $R \mapsto \dR_{R/A}^{\wedge}$ defines
a sheaf on the relative quasisyntomic site $\mathrm{qSyn}_{A/I}$.
\end{proposition}
\begin{proof}
Let $R \to S$ be a quasisyntomic cover of objects in $\mathrm{qSyn}_{A/I}$,
with \v{C}ech nerve $S^{\bullet}$.
Our task is to show $\dR_{R/A}^{\wedge} = \lim_{\Delta} \dR_{S^{\bullet}/A}^{\wedge}$.
Since both sides are $p$-complete, we may check this after derived reduction modulo $p$.
Below we shall always use $-/p$ to denote derived reduction modulo $p$.
Now we follow closely the argument in \cite[Example 5.12]{BMS2}, correcting a typo thereof.
First there is a functorial exhaustive increasing $\mathbb{N}$-index filtration, i.e.~the conjugate filtration,
on $\dR_{R/A}/p \cong \dR_{(R/p)/(A/p)}$
with graded pieces given by $\big(\wedge^i_{(R/p)^{(1)}}\mathbb{L}_{(R/p)^{(1)}/(A/p)}\big)[-i]$
(and similarly for $\dR_{(S^{\bullet}/p)/(A/p)}^{\wedge}$).
Here $(-/p)^{(1)}$ denotes base change along the Frobenius on $A/p$;
loc.~cit.~contains a typo omitting this Frobenius twist.
For a discussion of $p$-complete derived de Rham complex and conjugate filtration
in the realm of animated rings, we refer the reader to \cite[pp.~33--35]{KP21}.
We stare at the following diagram (with its $S^{\bullet}$ analogs in mind):
\[
\xymatrix{
R \ar[r] & R/p \ar[r]^-{\varphi_{A/p}} & (R/p)^{(1)} \\
A/I \ar[u] \ar[r] & (A/I)/p \ar[u] \ar[r]^-{\varphi_{A/p}} & ((A/I)/p)^{(1)} \cong (A/I^p)/p \ar[u] \\
A \ar[u] \ar[r] & A/p \ar[u] \ar[r] ^-{\varphi_{A/p}} & A/p. \ar[u]
}
\]
Note that every square above is Cartesian.
Base change property of cotangent complex implies that
these graded pieces $\big(\wedge^i_{(R/p)^{(1)}}\mathbb{L}_{(R/p)^{(1)}/(A/p)}\big)[-i]$
(and their $S^{\bullet}$ analogs) can be identified with:
\begin{enumerate}
\item either $\wedge^i_R \mathbb{L}_{R/A}[-i] \otimes_{R} \varphi_{A/p,*}(R/p)^{(1)}$;
\item or $\wedge^i_R \mathbb{L}_{R/A}[-i] \otimes_{A} \varphi_{A/p,*}(A/p)$,
\end{enumerate}
where the $A$-module (resp.~$R$-module)
structure on $\varphi_{A/p,*}(A/p)$ (resp.~$\varphi_{A/p,*}(R/p)^{(1)}$) is given by
the top and bottom row of the above diagram.
The identification (1) above implies that all these graded pieces live in $D^{\geq -1}$.
Indeed, $\wedge^i_R \mathbb{L}_{R/A}[-i]$ as an $R$-module has Tor-amplitude in $[0,i]$,
and $(R/p)^{(1)}$ is flat over $((A/I)/p)^{(1)} \cong (A/I^p)/p$ which lives in $[-1,0]$,
so $(R/p)^{(1)}$ lives in $D^{\geq -1}$.
Similar statements for $S^{\bullet}$ hold as well.
Hence we are reduced to checking that these graded pieces satisfy the descent property;
here we are using the reasoning of \cite[last sentence of Example 5.12]{BMS2}.
Now using the identification (2) above, we are reduced to flat descent for
``tensored'' wedge powers of cotangent complex, see \cite[Proposition 3.2]{LM21}
(which is itself a generalization of \cite[Theorem 3.1]{BMS2}).
\end{proof}
Recall that an $A/I$-algebra $R$ is called \emph{large quasisyntomic over $A/I$} (see~\cite[Definition 15.1]{BS19})
if
\begin{itemize}
\item $A/I \to R$ is quasisyntomic; and
\item there is a surjection
$A/I\langle X_j^{1/p^\infty} \mid j \in J \rangle \twoheadrightarrow R$
where $J$ is a set.
\end{itemize}
\end{construction}
The following is inspired by~\cite[Sections 10.3 and 10.4]{BLM18},
and our proof is a modification of the proof thereof.
\begin{theorem}
\label{functorial endomorphism theorem}
Let $(A,I)$ be a transversal prism, and assume that $\Spf(A/I)$ is connected.
Then
\begin{enumerate}
\item The mapping space
\[
{\rm End}_{{\rm Shv}({\qsyn}_{A/I},{\rm CAlg}(\mathcal{A}))}\left(\dR_{-/A}^\wedge, \dR_{-/A}^\wedge \right)
\]
has contractible components, indexed by a submonoid of $\mathbb{N}$.
In particular,
the automorphism space has a single contractible component, given by the identity.
\item The automorphism space
\[
\mathrm{Aut}_{{\rm Shv}({\qsyn}_{A/I}, {\rm CAlg}(A/I))}\left(\dR_{-/(A/I)}^\wedge, \dR_{-/(A/I)}^\wedge \right)
\]
has a single contractible component, given by the identity.
\end{enumerate}
\end{theorem}
Since $\mathcal{A}/p \to A/(I,p)$ is a locally nilpotent thickening, we get that
$\Spf(\mathcal{A})$ is also connected.
In particular, the only idempotents in $\mathcal{A}$ are $0$ and $1$.
It is easy to see that the statements concerning automorphism spaces for these functors
hold true without the connectedness assumption, since on each connected component
the automorphism must be the identity.
\begin{proof}
The assertion that all components are contractible
follows from the fact that, on the basis of algebras that are large quasisyntomic over $A/I$,
the sheaves $\dR_{-/A}^\wedge$ and $\dR_{-/(A/I)}^\wedge$ are discrete.
All we need to check is that there are not many functorial endomorphisms (resp.~automorphisms)
for these two sheaves.
Since (2) follows from the same proof as that of (1), let us only
present the proof of (1) here.
To simplify notation, let us denote the set of functorial endomorphisms
by $\mathrm{End}(\dR_{-/A}^\wedge)$.
By restriction, any functorial endomorphism induces a functorial endomorphism
of the functor restricted to the subcategory of $A/I$-algebras of the form
$A/I\langle X_h^{1/p^\infty} \mid h \in H \rangle$ for some set $H$.
We denote the latter monoid space by $\mathrm{End}(\dR_{-/A}^\wedge |_{\mathrm{perf}})$;
all of its components are likewise contractible, by the same reasoning.
By definition there is a natural map
\[
\mathrm{res} \colon \mathrm{End}(\dR_{-/A}^\wedge) \to \mathrm{End}(\dR_{-/A}^\wedge |_{\mathrm{perf}})
\]
of monoids.
Now we make the following three claims:
\begin{itemize}
\item the natural map $\mathrm{res}$ is injective;
\item the monoid $\mathrm{End}(\dR_{-/A}^\wedge |_{\mathrm{perf}})$
is a submonoid of $\mathbb{Z}$; and
\item the image of $\mathrm{res}$ is contained in $\mathbb{N}$.
\end{itemize}
Below let us show the map $\mathrm{res}$ is injective.
In other words, we need to show that any functorial endomorphism of $\dR_{-/A}^\wedge$
is determined by its restriction to the algebras of the form
$A/I\langle X_h^{1/p^\infty} \mid h \in H \rangle$ for some set $H$.
To see this, notice that $\mathrm{qSyn}_{A/I}$ has a basis given by
algebras that are large quasisyntomic over $A/I$.
Any such algebra $S$, by definition, admits a surjection
from an algebra of the form $A/I\langle X_l^{1/p^\infty} \mid l \in L \rangle$
for some set $L$.
By choosing a set of generators $\{f_j \mid j \in J\}$ of the kernel,
we may form a surjection (cf.~\cite[proof of Proposition 7.10]{BS19})
\[
S' \coloneqq A/I\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle/(Y_j - f_j \mid j \in J)^\wedge \twoheadrightarrow S,
\quad Y_j^{1/p^m} \mapsto 0.
\]
This induces a surjection of shifted cotangent complexes:
$\mathbb{L}_{S'/A}[-1] \twoheadrightarrow \mathbb{L}_{S/A}[-1]$, therefore it induces
a surjection of $p$-adic derived de Rham complexes:
$\dR_{S'/A}^\wedge \twoheadrightarrow \dR_{S/A}^\wedge$.
For any such $S'$, we have
\[
\dR_{S'/A}^\wedge \cong D_{\mathcal{A}\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle} (Y_j - f_j \mid j \in J)^\wedge,
\]
i.e.~$p$-completely adjoining divided powers of $Y_j - f_j$ for all $j \in J$
to $\mathcal{A}\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle$.
Since $\mathcal{A}$ is $p$-torsionfree, any endomorphism of $\dR_{S'/A}^\wedge$
is determined by its restriction to $\mathcal{A}\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle$.
Lastly, we know that applying the functor $\dR_{-/A}^\wedge$ to the map
\[
A/I\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle \to S'
\]
exactly induces the natural map
\[
\mathcal{A}\langle X_l^{1/p^\infty}, Y_j^{1/p^\infty} \mid l \in L, j \in J \rangle \to \dR_{S'/A}^\wedge.
\]
Therefore we know that any functorial endomorphism of $\dR_{-/A}^\wedge$ must be determined by its restriction
to algebras of the form $A/I\langle X_h^{1/p^\infty} \mid h \in H \rangle$.
Next, let us try to understand $\mathrm{End}(\dR_{-/A}^\wedge |_{\mathrm{perf}})$
and show that it is a submonoid of $\mathbb{Z}$.
Consider a functorial endomorphism $f$.
It is determined by its restriction to the one-variable ``perfect'' $A/I$-algebra
$R = A/I\langle X^{1/p^\infty} \rangle$.
We know $\dR_{R/A}^\wedge \cong \mathcal{A}\langle X^{1/p^\infty} \rangle$.
Suppose $f(X) = \sum_{i \in \mathbb{N}[1/p]} a_i X^i \in \mathcal{A}\langle X^{1/p^\infty} \rangle$.
Consider the map $R \to S \coloneqq A/I\langle Y^{1/p^\infty}, Z^{1/p^\infty} \rangle$
sending $X^i \mapsto Y^i Z^i$.
This map induces the corresponding map
$\mathcal{A}\langle X^{1/p^\infty} \rangle \to \mathcal{A}\langle Y^{1/p^\infty}, Z^{1/p^\infty} \rangle$
which also sends $X^i \mapsto Y^i Z^i$.
Now the functoriality of the endomorphism tells us that $f(YZ) = f(Y) \cdot f(Z)$.
We immediately get
$a_i^2 = a_i$ and $a_i \cdot a_j = 0$ for any pair of distinct indices
$i, j \in \mathbb{N}[1/p]$.
By the connectedness assumption on $\Spf(A/I)$, we see there is at most one index $i \in \mathbb{N}[1/p]$
with nonzero $a_i$, and for this index $a_i = 1$.
To see there is at least one nonzero $a_i$, we use the map
$R \to A/I$ given by $X^i \mapsto 1$ for all $i \in \mathbb{N}[1/p]$.
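Spelling out the coefficient comparison behind these relations: with $f(X) = \sum_i a_i X^i$ as above, functoriality along $X \mapsto YZ$ and multiplicativity of $f$ give
\[
\sum_{i} a_i Y^i Z^i = f(YZ) = f(Y) \cdot f(Z) = \Big(\sum_i a_i Y^i\Big)\Big(\sum_j a_j Z^j\Big) = \sum_{i,j} a_i a_j Y^i Z^j,
\]
so comparing the coefficients of $Y^i Z^i$ yields $a_i^2 = a_i$, and comparing those of $Y^i Z^j$ with $i \neq j$ yields $a_i a_j = 0$; functoriality along $X^i \mapsto 1$ gives $\sum_i a_i = f(1) = 1$, so not all $a_i$ vanish.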
We want to show that the index $i \in \mathbb{N}[1/p]$ obtained in the previous paragraph, defining the functorial
endomorphism $f$, must in fact lie in $p^{\mathbb{Z}}$.
Assume $i = \frac{\ell}{p^N}$ where $\ell$ is an integer coprime to $p$.
Now we contemplate the map
$R \to S$ given by $X \mapsto \lim_n(Y^{1/p^n} + Z^{1/p^n})^{p^n}$;
it induces a map on $\dR_{-/A}^\wedge$ with the image of $X$
given by the same formula.
Functoriality of $f$ implies that we have
\[
\Big(\lim_n(Y^{1/p^n} + Z^{1/p^n})^{p^{n-N}}\Big)^\ell = \lim_n\big(Y^{\ell/p^{n+N}} + Z^{\ell/p^{n+N}}\big)^{p^n}.
\]
Reduction modulo $p$ tells us that
\[
(Y^{1/p^N} + Z^{1/p^N})^\ell = Y^{\ell/p^N} + Z^{\ell/p^N} \in
\mathbb{F}_p[Y^{1/p^\infty},Z^{1/p^\infty}],
\]
forcing $\ell = 1$.
Therefore we see that $\mathrm{End}(\dR_{-/A}^\wedge |_{\mathrm{perf}}) \subset p^{\mathbb{Z}}$;
identifying $X \mapsto X^{p^i}$ with $i \in \mathbb{Z}$, it is a submonoid of $\mathbb{Z}$.
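To see why the displayed identity in $\mathbb{F}_p[Y^{1/p^\infty},Z^{1/p^\infty}]$ forces $\ell = 1$, expand the left hand side:
\[
(Y^{1/p^N} + Z^{1/p^N})^\ell = \sum_{k=0}^{\ell} \binom{\ell}{k} Y^{k/p^N} Z^{(\ell-k)/p^N}.
\]
The monomials appearing here are pairwise distinct, and $\binom{\ell}{1} = \ell$ is invertible modulo $p$ since $\ell$ is coprime to $p$; hence the cross terms vanish precisely when there are none, i.e.~when $\ell = 1$.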
Finally let us prove the image of $\mathrm{res}$ lands in $p^{\mathbb{N}}$.
We want to rule out negative powers of $p$.
To that end consider $R \to R/(X)$, which induces the map of $p$-adic derived de Rham complex:
\[
\widetilde{R} \coloneqq \mathcal{A}\langle X^{1/p^\infty} \rangle \rightarrow
\widetilde{S} \coloneqq D_{\mathcal{A}\langle X^{1/p^\infty}\rangle}(X)^\wedge.
\]
Here the latter denotes the $p$-complete PD envelope of the former along the ideal $(X)$,
and the displayed arrow is the natural map.
Take a positive integer $j$; we need to argue that $X \mapsto X^{1/p^j}$
on $\widetilde{R}$ does not extend to an endomorphism of $\widetilde{S}$.
Suppose otherwise; then the extended endomorphism of $\widetilde{S}$
must send $X^p$ to $X^{p^{1-j}}$. But $X^p = p!\,\gamma_p(X)$ is divisible by $p$ in $\widetilde{S}$,
whereas $X^{p^{1-j}} = X^{1/p^{j-1}}$ is not (here we use the fact that $j > 0$),
hence we get a contradiction.
The only invertible element in the additive monoid $\mathbb{N}$ is $0$,
corresponding to $X \mapsto X^{p^0} = X$,
hence the only functorial automorphism of $\dR_{-/A}^\wedge$ is the identity.
\end{proof}
\begin{remark}
\label{functorial endo remark}
Let $(A,I)$ be a transversal prism. Then
\begin{enumerate}
\item By the same argument, there are not many functorial homomorphisms
from $\varphi_A^* \dR_{-/A}^\wedge$ to $\dR_{-/A}^\wedge$:
they are likewise determined by their restriction to $R = A/I\langle X^{1/p^\infty} \rangle$.
If we require the restriction to send $X$ to $X^p$, then there is a unique one,
given by the Frobenius constructed in \Cref{Frobenii}.
Therefore, in a strong sense, there is a unique Frobenius.
\item Due to the previous remark, we see that the comparison in \Cref{comparing pris and crys}
must be compatible with Frobenius.
\item It is unclear which positive integers $i$, corresponding to $X \mapsto X^{p^i}$,
can occur as functorial endomorphisms.
When $A/(p,I)$ contains an element transcendental over $\mathbb{F}_p$, none of these
can occur.
This can be seen by considering the map $R \to R/(X - a)$ for some lift $a$ of the transcendental element $\bar{a} \in A/(p,I)$.
\end{enumerate}
\end{remark}
Consequently we obtain the following uniqueness of the functorial comparison established
in~\Cref{comparing pris and crys}; readers should compare with~\cite[Section 18]{BS19}.
\begin{corollary}
\label{uniqueness of comparison}
Fix a transversal prism $(A,I)$.
There is a unique natural isomorphism of $p$-complete
commutative algebra objects in $\mathcal{D}(\mathcal{A})$:
\[
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \mathcal{A} \rightarrow
\mathrm{R\Gamma_{crys}}(R/A),
\]
which is functorial in the $p$-completely
smooth $A/I$-algebra $R$.
\end{corollary}
\begin{proof}
The existence part is given by~\Cref{comparing pris and crys};
we need to show uniqueness.
Suppose there are two such functorial isomorphisms.
Composing one with the inverse of the other,
we get a natural automorphism of the functor $\dR_{-/A} \cong \mathrm{R\Gamma_{crys}}(-/A)$
on $p$-completely smooth $A/I$-algebras.
By left Kan extension, this will induce a natural automorphism
of the functor $\dR_{-/A}$ on quasisyntomic $A/I$-algebras.
We conclude by~\Cref{functorial endomorphism theorem} that this automorphism must be identity.
\end{proof}
\begin{corollary}
\label{match with crys comparison in BMS1}
Let $\mathbf{C}$ be an algebraically closed complete non-Archimedean field extension of $\mathbb{Q}_p$,
and let $(A,I)$ be the associated perfect prism (denoted as $(A_{\inf}, \ker(\theta))$
in literature).
Then the comparison in~\Cref{comparing pris and crys} is compatible
with the crystalline comparison over $\mathcal A = A_{\mathrm{crys}}$
of the $A\Omega$-theory obtained in~\cite{BMS1}.
Concretely, the following diagram of isomorphisms is commutative:
\[
\xymatrix{
\mathrm{R\Gamma_{\Prism}}(R/A) \widehat{\otimes}_{A,\varphi_A} A
\widehat{\otimes}_A \mathcal{A} \ar[r] \ar[d] &
\mathrm{R\Gamma_{crys}}(R/A) \ar[d] \\
A\Omega(R) \widehat{\otimes}_A \mathcal{A} \ar[r] & \mathrm{R\Gamma_{crys}}((R/p)/A),
}
\]
where the left vertical arrow is given by~\cite[Theorem 17.2]{BS19}
and the bottom horizontal arrow is given by~\cite[Theorem 12.1]{BMS1} or \cite{Yao19}.
\end{corollary}
\begin{proof}
This follows from the uniqueness statement in~\Cref{uniqueness of comparison}.
\end{proof}
Both sides of the isomorphism obtained in~\Cref{comparing pris and crys},
after completed base change along $\mathcal{A} \to \mathcal{A}/\mathcal{I} \cong A/I$,
are naturally isomorphic to $\dR_{R/(A/I)}^{\wedge}$.
For the left hand side this follows from the de Rham comparison
of (Frobenius pullback of) the prismatic cohomology:
\[
\Prism^{(1)}_{R/A} \widehat{\otimes}_A \mathcal{A} \widehat{\otimes}_{\mathcal{A}} A/I
\cong \Prism^{(1)}_{R/A} \widehat{\otimes}_A A/I
\cong \dR_{R/(A/I)}^{\wedge},
\]
where the last isomorphism follows from~\cite[Theorem 6.4 or Corollary 15.4]{BS19}.
For the right hand side this is just the base change of the derived de Rham complex
(or the base change of crystalline cohomology together with
the comparison of de Rham and crystalline cohomology for smooth morphisms).
We observe that an argument similar to the one above forces these natural isomorphisms
to be compatible with each other.
\begin{corollary}
\label{compatibility when modulo I}
Let $(A,I)$ be a transversal prism.
The following triangle of natural isomorphisms
\[
\xymatrix{
& \dR_{R/(A/I)}^{\wedge} & \\
\Prism^{(1)}_{R/A} \widehat{\otimes}_A \mathcal{A} \widehat{\otimes}_{\mathcal{A}} A/I \ar[ru]^{\cong} \ar[rr]^{\cong} & &
\dR_{R/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} A/I \ar[lu]_{\cong}
}
\]
is a commutative diagram.
\end{corollary}
\begin{proof}
Observe that all three natural isomorphisms are functorial in $R$;
hence going around the triangle produces a functorial automorphism of
$\dR_{R/(A/I)}^{\wedge}$.
Now we argue as in the proof of \Cref{uniqueness of comparison}: using \Cref{functorial endomorphism theorem} we may conclude that this functorial automorphism
must be identity. Hence the above diagram must commute functorially.
\end{proof}
\section{Filtrations}
\label{section filtrations}
Throughout this section,
we assume $(A,I)$ to be a transversal prism and let $(\mathcal{A}, \mathcal{I})$ be the $p$-adic PD envelope of $A$ along $I$.
By~\Cref{comparing pris and crys}, for any $p$-completely
smooth $A/I$-algebra $R$ we have a functorial isomorphism:
\[
\varphi^*(\mathrm{R\Gamma_{\Prism}}(R/A))
\widehat{\otimes}_A \mathcal{A} \cong
\dR_{R/A}^{\wedge}.
\]
All objects involved here carry interesting filtrations: the
Nygaard filtration on $\varphi^*(\mathrm{R\Gamma_{\Prism}}(R/A))$, the $I$-adic filtration
on $A$, the PD ideal filtration $\mathcal{I}^{[\bullet]}$ on $\mathcal{A}$,
and the Hodge filtration on $\dR_{R/A}^{\wedge}$.
In this section, we discuss how these filtrations are related.
Unless otherwise specified,
we shall use $R$ to denote a general $A/I$-algebra, and $S$ to denote
an algebra that is large quasisyntomic over $A/I$ (see the discussion
right after \Cref{derived prismatic construction}).
Let us briefly remind readers how these filtrations are defined and their properties.
\subsection{Hodge filtration on $\dR_{R/A}^{\wedge}$}
Recall that $\mathrm{R\Gamma_{crys}}(R/A)$ is the
cohomology of the structure sheaf $\mathcal{O}_{\mathrm{crys}}$
on the (absolute) crystalline site $(R/A)_{\mathrm{crys}}$.
The crystalline structure sheaf admits a natural surjection to the Zariski structure sheaf,
whose kernel is an ideal sheaf $\mathcal{I}_{\mathrm{crys}}$ admitting divided powers.
Concretely, given a PD thickening $(U,T)$, where $U$ is a $p$-adic formal $\Spf(A)$-scheme
equipped with an $\Spf(A)$-map $U \to \mathrm{Spf}(R)$ and $U \hookrightarrow T$ is a
$p$-completely nilpotent PD thickening, we have
$\mathcal{O}_{\mathrm{crys}}|_{(U,T)} = \mathcal{O}_T$
and $\mathcal{I}_{\mathrm{crys}}|_{(U,T)} =
\ker(\mathcal{O}_T \twoheadrightarrow \mathcal{O}_U)$, which is a PD ideal sheaf
inside $\mathcal{O}_{\mathrm{crys}}$.
For any integer $r \geq 0$, we get a natural filtration on $\mathrm{R\Gamma_{crys}}(R/A)$
given by $\mathrm{R\Gamma_{crys}}(R/A, \mathcal{I}^{[r]}_{\mathrm{crys}})$.
Results of Bhatt~\cite[Section 3.3]{Bha12} and Illusie~\cite[Section VIII.2]{Ill72}
allow us to understand this natural filtration in terms of the $p$-adic
derived de Rham complex and its Hodge filtration.
\begin{theorem}[{see~\cite[Proposition 3.25 and Theorem 3.27]{Bha12} and~\cite[Corollaire VIII.2.2.8]{Ill72}}, {see also~\cite[Theorem 3.4.(4)]{GL20}}]
\label{Illusie--Bhatt}
Let $R$ be a $p$-completely locally complete intersection $A/I$-algebra.
Then there is a natural identification of filtered $\mathbb{E}_{\infty}$-$\mathcal{A}$-algebras:
\[
(\dR_{R/A}^{\wedge}, \Fil^r_{\mathrm{H}})
\xrightarrow{\cong}
(\mathrm{R\Gamma_{crys}}(R/A),\mathrm{R\Gamma_{crys}}(R/A, \mathcal{I}^{[r]}_{\mathrm{crys}})).
\]
\end{theorem}
Here $\Fil^{\bullet}_{\mathrm{H}}$ denotes the (derived $p$-completed)
Hodge filtration on $\dR_{R/A}^{\wedge}$,
whose graded pieces are given by
\[
\gr^*_{\mathrm{H}}(\dR_{R/A}^{\wedge}) \cong \Gamma^*_{R}({\mathbb{L}}_{R/A}^{\wedge}[-1]),
\]
where $\Gamma^*$ denotes the derived divided power algebra construction and
${\mathbb{L}}_{R/A}^{\wedge}$ denotes the derived $p$-completed cotangent complex
of $R$ over $A$.
The composite $A \to A/I \to R$ now gives us a triangle relating various $p$-completed
cotangent complexes:
\[
R \widehat{\otimes}_{A/I} {I/I^2}[1] \cong R \widehat{\otimes}_{A/I} {\mathbb{L}}_{(A/I)/A}^{\wedge}
\to {\mathbb{L}}_{R/A}^{\wedge} \to {\mathbb{L}}_{R/(A/I)}^{\wedge},
\]
where the (shifted) map $R \widehat{\otimes}_{A/I} {I/I^2} \to {\mathbb{L}}_{R/A}^{\wedge}[-1]$
comes from the $\mathcal{A}$-algebra structure on $\dR_{R/A}^{\wedge}$.
Indeed, by the multiplicativity of Hodge filtrations, the fact that
$I/I^2 \cong \mathcal{I}/\mathcal{I}^{[2]} \cong \gr^1_{\mathrm{H}}(\dR_{(A/I)/A}^{\wedge})$
naturally sits inside $\gr^1_{\mathrm{H}}(\dR_{R/A}^{\wedge})$
gives rise to a map
\[
\gr^0_{\mathrm{H}}(\dR_{R/A}^{\wedge}) \widehat{\otimes}_{\gr^0_{\mathrm{H}}(\dR_{(A/I)/A}^{\wedge})} \gr^1_{\mathrm{H}}(\dR_{(A/I)/A}^{\wedge})
\to \gr^1_{\mathrm{H}}(\dR_{R/A}^{\wedge}),
\]
which is identified with the shifted map $R \widehat{\otimes}_{A/I} {I/I^2} \to {\mathbb{L}}_{R/A}^{\wedge}[-1]$.
The above discussion naturally extends to all $A/I$-algebras via left Kan extension.
We restrict ourselves to those algebras that are quasisyntomic over $A/I$ so that everything in sight
is a sheaf with respect to the quasisyntomic topology.
Recall that a basis of the quasisyntomic site is given by algebras that are large quasisyntomic
over $A/I$ (see~\cite[Definition 15.1]{BS19}).
Below we shall show that, on this basis,
all these sheaves have values living in cohomological degree zero.
The proof is inspired by~\cite[Subsection 12.5]{BS19}.
\begin{lemma}
\label{lives in cohdeg 0 in char p}
Let $B$ be an $\mathbb{F}_p$-algebra and let $S$ be a $B$-algebra which is
relatively semiperfect and such that $\mathbb{L}_{S/B}[-1]$ is a flat $S$-module.
Then $\dR_{S/B}$ and all of its Hodge filtrations live in cohomological degree $0$.
\end{lemma}
\begin{proof}
Using the conjugate filtration and Cartier isomorphism, we see that
$\dR_{S/B}$ (being its $0$-th Hodge filtration) lives in degree $0$.
On the other hand, we also know that the graded pieces of the Hodge
filtrations are given by divided powers $\Gamma^*_S({\mathbb{L}}_{S/B}[-1])$,
hence all the graded pieces live in degree $0$ as well.
In order to prove the statement about the Hodge filtrations, we
need to show that the natural map
$\dR_{S/B} \to \dR_{S/B}/\Fil^r_{\mathrm{H}}$ is surjective
(note that both sides live in degree $0$ by the preceding two sentences).
To this end, we proceed by mimicking~\cite[proof of Theorem 12.2]{BS19}.
First we may replace $B$ by the relative perfection of $S$, as the relevant
cotangent complexes ${\mathbb{L}}_{S/B}$ and ${\mathbb{L}}_{S^{(1)}/B}$ are unchanged.
Hence we may assume $B \to S$ is a surjection, as $S/B$ is assumed to be
relatively semiperfect.
Next, by choosing a surjection $\mathbb{F}_p[X_b \mid b \in B] \twoheadrightarrow B$
and base changing along the faithfully flat map
$\mathbb{F}_p[X_b \mid b \in B] \to \mathbb{F}_p[X_b^{1/p^\infty} \mid b \in B]$,
we may further assume that $B$ is semiperfect
(as surjectivity of a map can be tested after faithfully flat base change).
In particular, any element in the kernel of $B \to S$ admits compatible $p$-power
roots in $B$.
Now if the kernel is generated by a regular sequence, then the map
$\dR_{S/B} \to \dR_{S/B}/\Fil^r_{\mathrm{H}}$ is identified with
$D_B(S) \to D_B(S)/J^{[r]}$, where $D_B(S)$ denotes the PD envelope
and $J^{[r]}$ is the $r$-th divided power ideal of $J = \ker(D_B(S) \twoheadrightarrow S)$.
Therefore $\dR_{S/B} \to \dR_{S/B}/\Fil^r_{\mathrm{H}}$
is surjective by this concrete description.
Lastly, given any such surjection $B \twoheadrightarrow S$,
let $I$ denote the underlying set of its kernel.
Then we look at the surjection of $B$-algebras
\[
\widetilde{S} \coloneqq B[X_i^{1/p^\infty} \mid i \in I]/(X_i \mid i \in I) \twoheadrightarrow S,
\]
where each $X_i^{1/p^m}$ is sent to the image in $S$ of a chosen compatible
$p$-power root of the corresponding element $f_i \in I$.
We have that the induced map ${\mathbb{L}}_{\widetilde{S}/B}[-1] \to {\mathbb{L}}_{S/B}[-1]$ sends
$X_i$ to $f_i$, hence is a surjection.
Therefore we get that the map
$\gr^*_{\mathrm{H}}(\dR_{\widetilde{S}/B}) \to \gr^*_{\mathrm{H}}(\dR_{S/B})$
is a surjection.
Since $\widetilde{S}$ is a quotient of a relatively perfect algebra over $B$
by an ind-regular sequence,
applying (a filtered colimit of) what we proved in the previous paragraph, we get that
$\dR_{\widetilde{S}/B} \to \dR_{\widetilde{S}/B}/\Fil^r_{\mathrm{H}}$
is also a surjection.
Looking at the following commutative diagram
\[
\xymatrix{
\dR_{\widetilde{S}/B} \ar[r] \ar@{->>}[d] & \dR_{S/B} \ar[d] \\
\dR_{\widetilde{S}/B}/\Fil^r_{\mathrm{H}} \ar@{->>}[r] & \dR_{S/B}/\Fil^r_{\mathrm{H}},
}
\]
we conclude that the right arrow must be surjective, which is what we need to show.
\end{proof}
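To illustrate the regular-sequence case used in the proof, consider the simplest one-variable instance (our own example): $B = \mathbb{F}_p[x^{1/p^\infty}]$, which is semiperfect, and $S = B/(x)$, whose kernel is generated by the regular element $x$ admitting compatible $p$-power roots. The identification in the proof then reads
\[
\dR_{S/B} \cong D_B(S), \qquad \Fil^r_{\mathrm{H}}(\dR_{S/B}) \cong J^{[r]}
\quad \text{with } J = \ker\big(D_B(S) \twoheadrightarrow S\big),
\]
where $D_B(S)$ is the PD envelope of $B$ along $(x)$; both sides visibly live in degree $0$, and each $D_B(S) \to D_B(S)/J^{[r]}$ is surjective.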
\begin{lemma}
\label{Hodge filtrations on dR}
Let $S$ be a large quasisyntomic over $A/I$ algebra.
Then all of the Hodge filtrations on $\dR_{S/A}^{\wedge}$ and $\dR_{S/(A/I)}^{\wedge}$
are given by submodules, equivalently all the filtrations and their graded pieces
are cohomologically supported in degree $0$.
Moreover the Hodge filtrations of $\dR_{S/(A/I)}^{\wedge}$ are $p$-completely
flat over $A/I$.
\end{lemma}
\begin{proof}
After derived reduction modulo $p$, we see that the first claim follows from~\Cref{lives in cohdeg 0 in char p}.
We also see that $\dR_{S/(A/I)}^{\wedge} \to \dR_{S/(A/I)}^{\wedge}/\Fil^r_{\mathrm{H}}$
is surjective.
The $p$-complete flatness of $\dR_{S/(A/I)}^{\wedge}$
and of its Hodge filtrations now follows from the $p$-complete flatness
of $\dR_{S/(A/I)}^{\wedge}$ and of the graded pieces of its Hodge filtrations.
Using the conjugate filtration and the Cartier isomorphism, both $p$-complete
flatness statements follow from the fact that ${\mathbb{L}}_{S/(A/I)}^{\wedge}[-1]$ is $p$-completely
flat over $S$ and $S$ is $p$-completely flat over $A/I$
(as $S$ is large quasisyntomic over $A/I$).
\end{proof}
Since $\dR_{R/A}^{\wedge}$ is naturally an $\mathcal{A}$-algebra for any $A/I$-algebra $R$,
the filtration on $\mathcal{A}$ by the divided powers of $\mathcal{I}$ gives rise
to another functorial decreasing filtration on $\dR_{R/A}^{\wedge}$:
\[
\Fil^r_{\mathcal{I}}(\dR_{R/A}^{\wedge}) \coloneqq
\dR_{R/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{I}^{[r]}.
\]
We caution readers that this is \emph{not} the $\mathcal{I}$-adic filtration,
as we are using divided powers of $\mathcal{I}$ instead of symmetric powers.
A basic understanding of these filtrations is given by the following:
\begin{lemma}
\label{calIfil in deg 0}
All of these $\Fil^r_{\mathcal{I}}(\dR_{R/A}^{\wedge})$ are quasisyntomic sheaves,
whose values on large quasisyntomic over $A/I$ algebras are supported in degree $0$.
The graded pieces are given by
\[
\gr^r_{\mathcal{I}} \cong
\dR_{R/(A/I)}^{\wedge} \widehat{\otimes}_{A/I} \mathcal{I}^{[r]}/\mathcal{I}^{[r+1]}.
\]
\end{lemma}
\begin{proof}
The statement about graded pieces follows from the following chain of identifications
\[
\gr^r_{\mathcal{I}} \cong \dR_{R/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{I}^{[r]}/\mathcal{I}^{[r+1]}
\cong \dR_{R/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{A}/\mathcal{I}
\widehat{\otimes}_{\mathcal{A}/\mathcal{I}} \mathcal{I}^{[r]}/\mathcal{I}^{[r+1]}
\cong \dR_{R/(A/I)}^{\wedge} \widehat{\otimes}_{A/I} \mathcal{I}^{[r]}/\mathcal{I}^{[r+1]},
\]
where the last identification comes from
$\dR_{R/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} A/I \cong \dR_{R/(A/I)}^{\wedge}$
(c.f.~\cite[Proposition 3.11]{GL20})
and $\mathcal{A}/\mathcal{I} \cong A/I$.
In particular, these graded pieces are given by $\dR_{R/(A/I)}^{\wedge}$ twisted by
a rank $1$ locally free sheaf on $\Spf(A/I)$, hence are quasisyntomic sheaves themselves.
Since $\dR_{R/A}^{\wedge}$ and all these graded pieces are quasisyntomic sheaves,
each $\Fil^r_{\mathcal{I}}$ is also a quasisyntomic sheaf.
If $S$ is large quasisyntomic over $A/I$, then $\dR_{S/A}^{\wedge}$ and all these graded pieces
are supported in cohomological degree $0$ by~\Cref{Hodge filtrations on dR}.
By induction, in order to show the filtrations are in degree $0$, it suffices to show
$\dR_{S/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{I}^{[r]}
\to \dR_{S/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{I}^{[r]}/\mathcal{I}^{[r+1]}$
is surjective for any $r$, which follows from the right exactness of $p$-complete tensor.
\end{proof}
The filtration $\Fil^{\bullet}_{\mathcal{I}}(\dR_{R/A}^{\wedge})$ is a disguise of the Katz--Oda filtration
$\Fil^{\bullet}_{\mathrm{KO}}(\dR_{C/A})$
discussed in~\cite{GL20}, applied to the triple $(A \to B \to C) = (A \to A/I \to R)$.
More precisely, we have
\[
\Fil^{i}_{\mathcal{I}}\dR_{R/A}^{\wedge} \cong \big(\Fil^{i}_{\mathrm{KO}}(\dR_{R/A})\big)^\wedge.
\]
We refer readers to Subsection 3.2 of loc.~cit.~for a general discussion of additional
structures on the derived de Rham complex of
$A \to C$ when it factorizes through $A \to B \to C$.
Let $R$ be an $A/I$-algebra.
By $p$-completing the double filtrations obtained in \cite[Construction 3.12]{GL20},
we see that $\dR_{R/A}^\wedge$ can be naturally equipped with a decreasing filtration indexed by
$\mathbb{N} \times \mathbb{N}$:
\[
\Fil^{i,j}(\dR_{R/A}^\wedge) \coloneqq \bigg(\Fil^i_{\mathrm{KO}}\Fil^j_{\mathrm{H}}(\dR_{R/A})\bigg)^\wedge.
\]
The following proposition describes $\Fil^{i,j}(\dR_{R/A}^\wedge)$
and explains its relation to
the two systems of filtrations $\Fil^{\bullet}_{\mathrm{H}} \dR_{R/A}^{\wedge}$ and
$\Fil^{\bullet}_{\mathcal{I}} \dR_{R/A}^{\wedge}$.
\begin{proposition}
\label{general KO filtration properties}
Let $R$ be an $A/I$-algebra.
Then:
\begin{enumerate}
\item For any $j$, we have an identification $\Fil^{0,j}(\dR_{R/A}^\wedge) \cong \Fil^j_{\mathrm{H}}(\dR_{R/A}^\wedge)$.
\item For each pair $0 \leq j \leq i$, we have an identification
\[
\Fil^{i,j}(\dR_{R/A}^\wedge) \cong \Fil^i_{\mathcal{I}}(\dR_{R/A}^\wedge).
\]
\item For each pair $0 \leq i \leq j$, we have a natural identification
\[
\mathrm{Cone}\bigg(\Fil^{i+1, j}(\dR_{R/A}^\wedge) \to \Fil^{i,j}(\dR_{R/A}^\wedge)\bigg) \cong
\Fil^{j-i}_{\mathrm{H}} \dR_{R/(A/I)}^\wedge \widehat{\otimes}_{A/I} \Gamma^i_{A/I}(I/I^2).
\]
Moreover this identification is compatible with
\[
\xymatrix{
\mathrm{Cone}\bigg(\Fil^{i+1, j}(\dR_{R/A}^\wedge) \to \Fil^{i,j}(\dR_{R/A}^\wedge)\bigg) \ar[d] \ar[r]^-{\cong} &
\Fil^{j-i}_{\mathrm{H}} \dR_{R/(A/I)}^\wedge \widehat{\otimes}_{A/I} \Gamma^i_{A/I}(I/I^2) \ar[d] \\
\mathrm{Cone}\bigg(\Fil^{i+1, 0}(\dR_{R/A}^\wedge) \to \Fil^{i,0}(\dR_{R/A}^\wedge)\bigg) \ar[r]^-\cong &
\dR_{R/(A/I)}^\wedge \widehat{\otimes}_{A/I} \Gamma^i_{A/I}(I/I^2).
}
\]
\item The association $R \mapsto \Fil^{i,j}(\dR_{R/A}^\wedge)$ defines a sheaf on the quasisyntomic site of $A/I$
for any pair $(i,j)$.
\end{enumerate}
\end{proposition}
\begin{proof}
For (1): this follows from \cite[Construction 3.12]{GL20}:
$\Fil^{0,j}$ is the $p$-completed $j$-th filtration on
$\dR_{R/A} \otimes_{\dR_A(I)} \Fil^0_{\mathrm{H}}(\dR_A(I))
\cong \dR_{R/A}$.
Since this is a filtered isomorphism, we see that this is nothing but the
$p$-completed $j$-th Hodge filtration on $\dR_{R/A}$,
hence it is $\Fil^j_{\mathrm{H}}(\dR_{R/A}^\wedge)$.
For (2): this follows from \cite[Construction 3.9]{GL20}.
Indeed, the inequality $j \leq i$ implies that the $\Fil^j$ of each term
appearing in \cite[Construction 3.9]{GL20} is the whole term.
Hence the colimit just gives $\dR_{R/A} \otimes_{\dR_A(I)} \Fil^i_{\mathrm{H}}(\dR_A(I))$
back.
After $p$-completing, we see that by definition we have
$\Fil^{i,j}(\dR_{R/A}^\wedge) \cong \Fil^i_{\mathcal{I}}(\dR_{R/A}^\wedge)$.
(3) follows from $p$-completing \cite[Proposition 3.13.(1)]{GL20}.
For (4): first we claim the associations $R \mapsto \Fil^m_{\mathrm{H}}(\dR_{R/A}^\wedge)$
and $R \mapsto \Fil^n_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge)$ define sheaves
for all $m$ and $n$.
For $m=0$ this is \Cref{sheaf property on relative to A dR},
and for $n=0$ this is \cite[Example 5.12]{BMS2}.
Induction on $m$ and $n$ reduces us to showing the sheaf property of graded pieces,
which are given by $\wedge^i_R \mathbb{L}_{R/A}^{\wedge}[-i]$
and $\wedge^i_R \mathbb{L}_{R/(A/I)}^{\wedge}[-i]$.
$p$-Completing \cite[Theorem 3.1]{BMS2} gives the desired sheaf property of these graded pieces.
Fix a natural number $j$; then by (1) we see that $\Fil^{0,j}$
is a quasisyntomic sheaf.
Each graded piece with respect to $i$ is, by (2) and (3), also a sheaf.
Therefore, by increasing induction on $i$, each $\Fil^{i,j}$ defines a sheaf.
\end{proof}
To understand these sheaves more concretely, we look at their value on the basis of
large quasisyntomic over $A/I$-algebras.
\begin{proposition}
\label{qsyn KO filtration properties}
Let $S$ be a large quasisyntomic over $A/I$ algebra.
Then:
\begin{enumerate}
\item For any pair $(i,j) \in \mathbb{N} \times \mathbb{N}$, the complex $\Fil^{i,j}(\dR_{S/A}^\wedge)$
is concentrated in degree $0$, and the natural map $\Fil^{i,j}(\dR_{S/A}^\wedge) \to \dR_{S/A}^\wedge$
is injective.
\item For any $j$, the natural map
\[
\Fil^j_{\mathrm{H}}(\dR_{S/A}^\wedge) \to \Fil^j_{\mathrm{H}}(\dR_{S/(A/I)}^\wedge)
\]
is surjective.
\item For each pair $0 \leq i \leq j$, we have an equality:
\[
\Fil^{i,j}(\dR_{S/A}^{\wedge})
= \sum_{r = i}^j \left(\Fil^{j-r}_{\mathrm{H}} \dR_{S/A}^{\wedge} \cdot \mathcal{I}^{[r]}\right),
\]
where $\Fil^{j-r}_{\mathrm{H}} \dR_{S/A}^{\wedge} \cdot \mathcal{I}^{[r]}$ denotes the image of
$\Fil^{j-r}_{\mathrm{H}} \dR_{S/A}^{\wedge} \widehat{\otimes}_{\mathcal{A}} \mathcal{I}^{[r]}
\to \dR_{S/A}^\wedge$,
and the sum is inside the algebra $\dR_{S/A}^{\wedge}$.
\item We have another description:
\[
\Fil^{i,j}(\dR_{S/A}^{\wedge}) = \left(\Fil^j_{\mathrm{H}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^i_{\mathcal{I}} \dR_{S/A}^{\wedge}\right),
\]
where the intersection happens inside the algebra $\dR_{S/A}^{\wedge}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Proof of (1): we argue by decreasing induction on $i$.
When $j \leq i$, by \Cref{general KO filtration properties} (2) we see that
$\Fil^{i,j}(\dR_{S/A}^\wedge) \cong \Fil^i_{\mathcal{I}}(\dR_{S/A}^\wedge)$,
which is concentrated in degree $0$ by \Cref{calIfil in deg 0}.
By \Cref{general KO filtration properties} (3), the graded pieces with respect to $i$ are all concentrated in degree $0$ by \Cref{Hodge filtrations on dR}.
This in turn implies that:
\begin{itemize}
\item All of $\Fil^{i,j}(\dR_{S/A}^\wedge)$ are in degree $0$ for any $(i,j)$; and
\item We have short exact sequences:
\[
0 \to \Fil^{i+1,j}(\dR_{S/A}^\wedge) \to \Fil^{i,j}(\dR_{S/A}^\wedge)
\to \Fil^{j-i}_{\mathrm{H}} \dR_{S/(A/I)}^\wedge \widehat{\otimes}_{A/I} \Gamma^i_{A/I}(I/I^2) \to 0.
\]
\end{itemize}
In particular $\Fil^{i+1,j}(\dR_{S/A}^\wedge) \to \Fil^{i,j}(\dR_{S/A}^\wedge)$
is injective.
Using \Cref{general KO filtration properties} (1) and \Cref{Hodge filtrations on dR},
we see that the map $\Fil^{0,j}(\dR_{S/A}^\wedge) \cong \Fil^j_{\mathrm{H}}(\dR_{S/A}^\wedge) \to \dR_{S/A}^\wedge$ is also injective.
Therefore the composition $\Fil^{i,j}(\dR_{S/A}^\wedge) \to \dR_{S/A}^\wedge$
is injective as well for any $(i,j)$.
(2) follows from the short exact sequence obtained in the previous
paragraph by specializing to $i = 0$.
(3) follows from the combination of (2), \Cref{general KO filtration properties} (3),
and the fact that $p$-completed tensor is right exact.
For (4): first notice that this is true for $i = 0$, due to
\Cref{general KO filtration properties} (1).
Next let us look at the commutative diagram in \Cref{general KO filtration properties} (3).
Since the right vertical map is an injection, we see that the map
\[
\Fil^{i,j}(\dR_{S/A}^\wedge)/\Fil^{i+1, j}(\dR_{S/A}^\wedge)
\to \Fil^{i,0}(\dR_{S/A}^\wedge)/\Fil^{i+1, 0}(\dR_{S/A}^\wedge)
\]
is injective.
Therefore, by \Cref{general KO filtration properties} (2), we know that
\[
\Fil^{i+1, j}(\dR_{S/A}^\wedge) = \left(\Fil^{i,j}(\dR_{S/A}^\wedge)\right) \cap
\left(\Fil^{i+1}_{\mathcal{I}}\dR_{S/A}^\wedge\right).
\]
By increasing induction on $i$, we may assume
\[
\Fil^{i,j}(\dR_{S/A}^\wedge) =
\left(\Fil^j_{\mathrm{H}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^i_{\mathcal{I}} \dR_{S/A}^{\wedge}\right).
\]
Hence we have
\[
\Fil^{i+1, j}(\dR_{S/A}^\wedge) =
\left(\Fil^j_{\mathrm{H}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^i_{\mathcal{I}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^{i+1}_{\mathcal{I}}(\dR_{S/A}^\wedge)\right)
= \left(\Fil^j_{\mathrm{H}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^{i+1}_{\mathcal{I}}\dR_{S/A}^\wedge\right).
\]
\end{proof}
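As a quick illustration of (3) and (4): for $(i,j) = (1,2)$ they read
\[
\Fil^{1,2}(\dR_{S/A}^{\wedge})
= \Fil^{1}_{\mathrm{H}} \dR_{S/A}^{\wedge} \cdot \mathcal{I} + \dR_{S/A}^{\wedge} \cdot \mathcal{I}^{[2]}
= \left(\Fil^2_{\mathrm{H}} \dR_{S/A}^{\wedge}\right) \cap
\left(\Fil^1_{\mathcal{I}} \dR_{S/A}^{\wedge}\right),
\]
with all terms regarded as submodules of $\dR_{S/A}^{\wedge}$ as in the statement.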
Let us draw a table to summarize these filtrations on $\dR_{R/A}^{\wedge}$:
\[
\xymatrix{
& \vdots \ar@{-}[d] & R & & {\mathbb{L}}_{R/(A/I)}^{\wedge}[-1] & &
(\wedge^2_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-2] & \cdots \\
\ar@{-}[r] & \vdots \ar@{-}[d] \ar@{-}[l] \ar@{-}[r] & \ar@{-}[l] \ar@{-}[r] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \\
A/I & \vdots \ar@{-}[d] & M_0 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] & M_0 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_0 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
& \vdots \ar@{-}[d] & \ar@{-}[l] \ar@{.}[ld] \ar@{-}[r] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \\
\mathcal{I}/\mathcal{I}^{[2]} & \vdots \ar@{-}[d] & M_1 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] &
M_1 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_1 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
& \vdots \ar@{-}[d] & \ar@{-}[l] \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{.}[ld] \ar@{-}[r] & \\
\mathcal{I}^{[2]}/\mathcal{I}^{[3]} & \vdots \ar@{-}[d] & M_2 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] & M_2 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_2 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
\vdots & & \vdots & & \vdots & & \vdots & \\
}
\]
In the diagram above, $M_i=\mathcal{I}^{[i]}/\mathcal{I}^{[i+1]}$, and $N_j=(\wedge^j_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-j]$, for $i,j\in {\mathbb{N}}$.
Here the rows indicate graded pieces of the filtration $\Fil^r_{\mathcal{I}}$,
and each term in the $i$-th row indicates a graded piece of the induced filtration on
$\dR_{R/(A/I)}^{\wedge} \widehat{\otimes}_{A/I} \Gamma^i_{A/I}(I/I^2)$.
The skewed dotted lines indicate the Hodge filtration on $\dR_{R/A}^{\wedge}$
(given by the terms below the dotted lines).
See also \cite[p.10]{GL20}.
As a consequence we get a structural result on the graded algebra associated with
the Hodge filtration on $\dR_{R/A}^{\wedge}$.
\begin{lemma}
\label{increasing filtration on Hodge graded}
There is a functorial increasing exhaustive filtration $\Fil^v_i$ on the graded algebra
$\gr^*_{\mathrm{H}}(\dR_{R/A}^{\wedge})$ by
graded-$\left(\gr^*_{\mathcal{I}} \mathcal{A} \cong \Gamma^*_{A/I}(I/I^2)\right)$-submodules
with graded pieces given by
\[
\gr^v_i\left(\gr^*_{\mathrm{H}}(\dR_{R/A}^{\wedge})\right) \cong
(\wedge^i_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-i] \widehat{\otimes}_{A/I} \Gamma^*_{A/I}(I/I^2).
\]
Here $(\wedge^i_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-i]$ has degree $i$
and the above is a graded isomorphism.
\end{lemma}
We refer to this filtration $\Fil^v_i$ on $\gr^*_{\mathrm{H}}(\dR_{R/A}^{\wedge})$
as the \emph{vertical filtration} from now on,
c.f.~\cite[Construction 3.14]{GL20}.
The name reflects the fact that $\Fil^v_i$ is literally the filtration given by the vertical columns
in the table preceding this Lemma.
\begin{proof}
Using the above table, one can see this directly.
Equivalently, we may use
\[
\gr^*_{\mathrm{H}}(\dR_{R/A}^{\wedge}) \cong
\left(\Gamma^*_R({\mathbb{L}}_{R/A}^{\wedge}[-1])\right)^{\wedge},
\]
and the triangle
\[
R \widehat{\otimes}_{A/I} I/I^2 \to {\mathbb{L}}_{R/A}^{\wedge}[-1] \to {\mathbb{L}}_{R/(A/I)}^{\wedge}[-1].
\]
\end{proof}
\begin{remark}
\label{flatness of dR}
Let $(A,I)$ be a general bounded prism, and let $S$ be a large quasisyntomic over $A/I$ algebra.
Combining \Cref{comparing pris and crys}, \Cref{derived prismatic construction} (4),
and \cite[Theorem 15.2.(1)]{BS19}, we can see that $\dR_{S/A}^\wedge$ is $p$-completely flat over $\dR_A(I)^\wedge$.
The argument below was suggested to us by Bhatt.
Using the conjugate filtration and the same argument as in \Cref{increasing filtration on Hodge graded},
we can give an alternative proof of this fact.
Indeed, we can check this after reduction mod $p$, hence we shall assume $A$ is $p$-torsion.
Next we want to appeal to the conjugate filtrations on both algebras:
we have the following pushout diagram:
\[
\xymatrix{
A \ar[r] & A/I^p \ar[r] & R^{(1)} \\
A \ar[u]^-{\varphi_A} \ar[r] & A/I \ar[r] \ar[u] & R \ar[u].
}
\]
There is a similar functorial increasing exhaustive filtration on the graded
algebra of the conjugate filtered $\dR_{S/A}$,
with graded pieces given by
$(\wedge^i_{R^{(1)}} {\mathbb{L}}_{R^{(1)}/(A/I^p)})[-i] {\otimes}_{A/I^p} \Gamma^*_{A/I^p}(I^p/I^{2p})$.
It is flat over $\Gamma^*_{A/I^p}(I^p/I^{2p})$, which is the conjugate graded algebra
of $\dR_A(I)$.
Lastly, we conclude by recalling that a module with an increasing exhaustive filtration
over an algebra with an increasing exhaustive filtration is flat
provided that its associated graded module is flat over the associated graded algebra.
\end{remark}
\subsection{Nygaard filtration}
\label{subsec-NygaardFil}
Recall in~\cite[Section 15]{BS19}, there is a natural decreasing filtration
of quasisyntomic subsheaves on $\Prism^{(1)}_{-/A}$ called the Nygaard filtration with the following properties:
\begin{theorem}[{see~\cite[Theorem 15.2 and 15.3]{BS19} and proof therein}]
\label{Nygaard filtration}
Let $S$ be a large quasisyntomic over $A/I$ algebra. Then
\begin{enumerate}
\item The Nygaard filtrations $\Fil^{\bullet}_{\mathrm{N}}$ on $\Prism^{(1)}_{S/A}$
are given by $p$-completely flat $A$-submodules inside $\Prism^{(1)}_{S/A}$.
\item We have an identification of algebras $\Prism^{(1)}_{S/A}/I \cong \dR_{S/(A/I)}^{\wedge}$,
under which the image of Nygaard filtration becomes the Hodge filtration.
\item For each $i \geq 0$, we have a short exact sequence:
\[
0 \to \Fil^{i}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I \to \Fil^{i+1}_{\mathrm{N}}\Prism^{(1)}_{S/A} \to
\Fil^{i+1}_{\mathrm{H}} \dR_{S/(A/I)}^{\wedge} \to 0.
\]
\end{enumerate}
\end{theorem}
Let $R$ be a general quasisyntomic $A/I$-algebra.
On $\Prism^{(1)}_{R/A}$ there is also an $I$-adic filtration
$\Fil^r_{I}\Prism^{(1)}_{R/A} \coloneqq \Prism^{(1)}_{R/A} \otimes_A I^r$.
By~\Cref{Nygaard filtration} (2), we identify the graded pieces as
\[
\gr^r_{I} \cong \Prism^{(1)}_{R/A}/I \otimes_{A/I} I^r/I^{r+1}
\cong \dR_{R/(A/I)}^{\wedge} \otimes_{A/I} \Sym_{A/I}^r(I/I^2).
\]
The $I$-adic filtration and the Nygaard filtration are related by the following.
For any $(i,j) \in \mathbb{N} \times \mathbb{N}$, we define
\[
\Fil^{i,j} \Prism^{(1)}_{R/A} \coloneqq \Fil^{j-i}_{\mathrm{N}}\Prism^{(1)}_{R/A} \otimes_A I^i,
\]
where we adopt the convention that
$\Fil^{l}_{\mathrm{N}}\Prism^{(1)}_{R/A} = \Prism^{(1)}_{R/A}$ if $l \leq 0$.
One checks easily that this defines a decreasing filtration on $\Prism^{(1)}_{R/A}$
indexed by $\mathbb{N} \times \mathbb{N}$.
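To spell out the easy check for, say, a large quasisyntomic over $A/I$ algebra $S$ (where by \Cref{Nygaard filtration} (1) all terms are honest submodules of $\Prism^{(1)}_{S/A}$): since the Nygaard filtration is decreasing, we have
\[
\Fil^{i,j+1}\Prism^{(1)}_{S/A} = \Fil^{j+1-i}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I^i
\subset \Fil^{j-i}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I^i = \Fil^{i,j}\Prism^{(1)}_{S/A},
\]
and since $\Fil^{l}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I \subset \Fil^{l+1}_{\mathrm{N}}\Prism^{(1)}_{S/A}$ by \Cref{Nygaard filtration} (3), we also have
\[
\Fil^{i+1,j}\Prism^{(1)}_{S/A} = \left(\Fil^{j-i-1}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I\right) \otimes_A I^{i}
\subset \Fil^{j-i}_{\mathrm{N}}\Prism^{(1)}_{S/A} \otimes_A I^{i} = \Fil^{i,j}\Prism^{(1)}_{S/A};
\]
here we use that $I$ is an invertible $A$-module, so tensoring with $I^i$ preserves injections.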
This filtration behaves very similarly to the filtration $\Fil^{i,j}(\dR_{R/A}^\wedge)$ studied in the previous subsection.
The following is the analogue of \Cref{general KO filtration properties}.
\begin{proposition}
\label{general KON filtration properties}
Let $R$ be an $A/I$-algebra. Then:
\begin{enumerate}
\item For any $j$, we have $\Fil^{0,j}\Prism^{(1)}_{R/A} \cong \Fil^j_{\mathrm{N}}\Prism^{(1)}_{R/A}$.
\item For each pair $0 \leq j \leq i$, we have
\[
\Fil^{i,j}\Prism^{(1)}_{R/A} \cong \Fil^i_{I}\Prism^{(1)}_{R/A}.
\]
\item For each pair $0 \leq i \leq j$, we have a
natural identification
\[
\mathrm{Cone}\bigg(\Fil^{i+1,j}\Prism^{(1)}_{R/A} \to \Fil^{i,j}\Prism^{(1)}_{R/A}\bigg) \cong
\Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge) \otimes_{A/I} \Sym_{A/I}^i(I/I^2).
\]
Moreover these identifications fit in the following commutative diagram:
\[
\xymatrix{
\mathrm{Cone}\bigg(\Fil^{i+1,j}\Prism^{(1)}_{R/A} \to \Fil^{i,j}\Prism^{(1)}_{R/A}\bigg) \ar[r]^-{\cong} \ar[d] &
\Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge) \otimes_{A/I} \Sym_{A/I}^i(I/I^2) \ar[d] \\
\mathrm{Cone}\bigg(\Fil^{i+1,0}\Prism^{(1)}_{R/A} \to \Fil^{i,0}\Prism^{(1)}_{R/A}\bigg) \ar[r]^-{\cong} &
\dR_{R/(A/I)}^\wedge \otimes_{A/I} \Sym_{A/I}^i(I/I^2)
}.
\]
\item The association $R \mapsto \Fil^{i,j}\Prism^{(1)}_{R/A}$ defines
a sheaf on $\mathrm{qSyn}_{A/I}$ for any $(i,j)$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) and (2) follow from the definition. (3) follows from \Cref{Nygaard filtration} (3). (4) follows from (3).
\end{proof}
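Let us record how the identification in (3) unwinds (for large quasisyntomic algebras, say, where \Cref{Nygaard filtration} applies directly). Since $I$ is an invertible $A$-module, tensoring with $I^i$ is exact, so
\[
\mathrm{Cone}\bigg(\Fil^{i+1,j}\Prism^{(1)}_{R/A} \to \Fil^{i,j}\Prism^{(1)}_{R/A}\bigg)
\cong
\mathrm{Cone}\bigg(\Fil^{j-i-1}_{\mathrm{N}}\Prism^{(1)}_{R/A} \otimes_A I \to \Fil^{j-i}_{\mathrm{N}}\Prism^{(1)}_{R/A}\bigg)
\otimes_A I^{i}.
\]
By \Cref{Nygaard filtration} (3), the cone on the right hand side is identified with $\Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge)$; since the latter is a module over $A/I$, we get
\[
\Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge) \otimes_A I^{i}
\cong \Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge) \otimes_{A/I} I^{i}/I^{i+1}
\cong \Fil^{j-i}_{\mathrm{H}}(\dR_{R/(A/I)}^\wedge) \otimes_{A/I} \Sym^i_{A/I}(I/I^2).
\]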
\begin{proposition}
\label{qsyn KON filtration properties}
Let $S$ be a large quasisyntomic over $A/I$ algebra.
Then:
\begin{enumerate}
\item We have an equality:
\[
\Fil^{i,j}\Prism^{(1)}_{S/A}
= \sum_{r = i}^j \left(\Fil^{j-r}_{\mathrm{N}}(\Prism^{(1)}_{S/A}) \cdot I^{r}\right),
\]
where the sum is inside the algebra $\Prism^{(1)}_{S/A}$.
\item We have another equality:
\[
\Fil^{i,j}\Prism^{(1)}_{S/A} = \left(\Fil^j_{\mathrm{N}} \Prism^{(1)}_{S/A}\right) \cap
\left(\Fil^i_{I} \Prism^{(1)}_{S/A}\right),
\]
where the intersection happens inside the algebra $\Prism^{(1)}_{S/A}$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof is similar to \Cref{qsyn KO filtration properties} (3) and (4).
Notice that $\Fil^j_{\mathrm{N}} \Prism^{(1)}_{S/A} \to \Fil^j_{\mathrm{H}}(\dR_{S/(A/I)}^\wedge)$ is surjective
by \Cref{Nygaard filtration} (2).
\end{proof}
We can express all these structures on $\Prism^{(1)}_{R/A}$
in the following diagram, similar to the one drawn
in the previous subsection.
One observes that the only difference is that divided powers of $I/I^2$ are replaced by symmetric powers of $I/I^2$.
\[
\xymatrix{
& \vdots \ar@{-}[d] & R & & {\mathbb{L}}_{R/(A/I)}^{\wedge}[-1] & &
(\wedge^2_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-2] & \cdots \\
\ar@{-}[r] & \vdots \ar@{-}[d] \ar@{-}[l] \ar@{-}[r] & \ar@{-}[l] \ar@{-}[r] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \\
A/I & \vdots \ar@{-}[d] & M_0 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] & M_0 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_0 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
& \vdots \ar@{-}[d] & \ar@{-}[l] \ar@{.}[ld] \ar@{-}[r] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \\
I/I^2 & \vdots \ar@{-}[d] & M_1 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] &
M_1 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_1 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
& \vdots \ar@{-}[d] & \ar@{-}[l] \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{-}[r] \ar@{.}[ld] & \ar@{-}[r] & \ar@{.}[ld] \ar@{-}[r] & \\
I^2/I^3 & \vdots \ar@{-}[d] & M_2 \widehat{\otimes}_{A/I} N_0 & \ar@{.}[ld] & M_2 \widehat{\otimes}_{A/I} N_1 & \ar@{.}[ld] & M_2 \widehat{\otimes}_{A/I} N_2 & \ar@{.}[ld] \cdots \\
\vdots & & \vdots & & \vdots & & \vdots & \\
}
\]
Here the rows indicate graded pieces of the filtration $\Fil^r_{I}$,
and each term in a given row indicates a graded piece of the Hodge filtration on
$\dR_{R/(A/I)}^{\wedge}$.
The skewed dotted lines indicate the Nygaard filtration on $\Prism^{(1)}_{R/A}$
(given by the terms below the dotted lines).
Also as a consequence we get a structural result on the graded algebra associated with
the Nygaard filtration on $\Prism^{(1)}_{R/A}$.
\begin{lemma}
\label{increasing filtration on Nygaard graded}
There is a functorial increasing exhaustive filtration $\Fil^v_i$ on the graded algebra
$\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A})$ by
graded-$\left(\gr^*_{I} A \cong \Sym^*_{A/I}(I/I^2)\right)$-submodules
with graded pieces given by
\[
\gr^v_i\left(\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A})\right) \cong
(\wedge^i_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-i] \widehat{\otimes}_{A/I} \Sym^*_{A/I}(I/I^2).
\]
Here $(\wedge^i_R {\mathbb{L}}_{R/(A/I)})^{\wedge}[-i]$ has degree $i$ and the above is a graded isomorphism.
\end{lemma}
From now on we also refer to this filtration $\Fil^v_i$ on $\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A})$
as the \emph{vertical filtration}.
\begin{proof}
This follows from~\Cref{Nygaard filtration} (3); see also the proof of \Cref{increasing filtration on Hodge graded}.
\end{proof}
\subsection{Promote to filtered map}
Recall that we use $(A,I)$ to denote a transversal prism.
For the rest of this section, we shall use $(B,J)$ to denote a general bounded prism.
By~\Cref{comparing pris and crys}, we have a map
$\Prism^{(1)}_{R/B} \to \dR_{R/B}^{\wedge}$
functorial in $B/J \to R$.
The goal of this subsection is to show that this map can be
promoted to a filtered map where the left hand side is equipped with the Nygaard filtration
and the right hand side is equipped with the Hodge filtration.
Our plan is:
\begin{itemize}
\item show certain rigidity of the map being filtered;
\item show the map is filtered when the base prism $(A,I)$ is transversal;
\item show the map is filtered when the algebra $R$ is large quasisyntomic over $B/J$
of a particular type; and
\item show the map is functorially filtered when $R$ is a $p$-completely smooth $B/J$-algebra,
and hence finish the argument by left Kan extension.
\end{itemize}
Fix a natural number $i$.
The main diagram that we shall stare at in this subsection is:
\[
\label{NygHdg diagram}
\tag{\epsdice{3}}
\xymatrix{
\Fil^i_{\mathrm{N}} \ar[r] \ar@{.>}[d]^-{g_i} & \Prism^{(1)} \ar[r] \ar[d] & Q_{1,i} \ar@{.>}[d]^-{f_i} \\
\Fil^i_{\mathrm{H}} \ar[r] & \dR \ar[r] & Q_{2,i},
}
\]
viewed as a commutative diagram of sheaves on $\qsyn_{B/J}$.
Here $Q_{1,i}$ and $Q_{2,i}$ are the cones of the natural maps, so both rows are distinguished triangles
of quasisyntomic sheaves.
All the solid arrows are defined: for instance, the middle vertical arrow is given by \Cref{comparing pris and crys}.
Our main task is to show that one can fill in the dotted arrows $f_i$ and $g_i$ making the diagram commute.
We first need a few lemmas to illustrate that the situation is pretty rigid and there is at most one choice of
these dotted arrows.
\begin{lemma}
\label{cohomology estimate}
Let $S$ be a large quasisyntomic over $B/J$ algebra. Then the values of
\[
\Fil^i_{\mathrm{N}},~\Prism^{(1)},~Q_{1,i},~\text{and } Q_{2,i}
\]
at $S$ are concentrated in cohomological degree $0$.
\end{lemma}
\begin{proof}
The first three follow from their definitions; see \cite[Subsection 15.1]{BS19}.
The claim for $Q_{2,i}$ follows from the fact that $\mathbb{L}_{S/B}^\wedge[-1]$ lives in cohomological degree $0$.
\end{proof}
\begin{lemma}
\label{rigidity of the situation}
Let $S$ be a large quasisyntomic over $B/J$ algebra. Then,
\begin{enumerate}
\item there is at most one choice of $f_i$ making the right square of~\ref{NygHdg diagram} commute;
\item if $S \to T$ is a morphism of large quasisyntomic over $B/J$ algebras,
and suppose $f_i$ are defined on both of them, then the diagram
\[
\xymatrix{
Q_{1,i}(S) \ar[r] \ar[d]^{f_i(S)} & Q_{1,i}(T) \ar[d]^{f_i(T)} \\
Q_{2,i}(S) \ar[r] & Q_{2,i}(T)
}
\]
is commutative;
\item the existence of $f_i(S)$ is equivalent to the existence of $g_i(S)$ making the left square of~\ref{NygHdg diagram}
commute;
\item the map $g_i(S)$, if it exists, must be unique.
\end{enumerate}
\end{lemma}
\begin{proof}
(1): suppose there are two such choices, and take their difference. Since precomposing this difference with
$\Prism^{(1)} \to Q_{1,i}$ yields the zero map $\Prism^{(1)} \xrightarrow{0} Q_{2,i}$ due to commutativity,
the difference must factor through $\Fil^i_{\mathrm{N}}[1]$.
But $\mathrm{Hom}(\Fil^i_{\mathrm{N}}[1], Q_{2,i}) = \{0\}$ by the cohomological considerations in \Cref{cohomology estimate}.
Hence the difference must be zero.
(2): the argument is similar to (1). The difference of the two arrows from $Q_{1,i}(S)$ to $Q_{2,i}(T)$ will again factor
through $\Fil^i_{\mathrm{N}}(S)[1]$, hence must again be zero.
(3): apply the axiom TR3, noting that the two rows of~\ref{NygHdg diagram} are exact triangles.
(4): similar to (1). The difference of two possible $g_i(S)$'s factors through an arrow
$\Fil^i_{\mathrm{N}}(S) \to Q_{2,i}(S)[-1]$, which is again zero by cohomological considerations.
\end{proof}
After knowing the rigidity of our situation, we shall start proving the existence of $f_i$
following the plan outlined above.
\begin{proposition}
Let $(A,I)$ be a transversal prism, and let
$S = A/I \langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle/(f_1, \ldots, f_r)$
where $(f_i)$ is a $p$-completely regular sequence.
We have $\Fil^i_{\mathrm{N}} \Prism^{(1)}_{S/A} \subset \Fil^i_{\mathrm{H}}
\dR_{S/A}^{\wedge}$.
\end{proposition}
\begin{proof}
When $i = 0$, there is nothing to prove.
When $i = 1$, the triangle in Corollary \ref{compatibility when modulo I} gives us
a commutative diagram
\[
\xymatrix{
& S & \\
\Prism^{(1)}_{S/A} \ar@{->>}[ru] \ar[rr] & &
\dR_{S/A}^{\wedge}, \ar@{->>}[lu]
}
\]
since the kernels of these two surjections define the first
Nygaard and Hodge filtrations respectively, we see the containment for $i = 1$.
For general $i$, we argue by induction. Let us look at the induced map
$g \colon \Fil^i_{\mathrm{N}} \Prism^{(1)}_{S/A} \to \dR_{S/A}^{\wedge}/\Fil^i_{\mathrm{H}}$.
We first notice that, by induction and the containment $I \subset \mathcal{I}$,
the submodule $I \cdot \Fil^{i-1}_{\mathrm{N}} \Prism^{(1)}_{S/A}$
is sent to zero under $g$.
By multiplicativity and the containment for $i = 1$,
we have that $\Sym^i(\Fil^1_{\mathrm{N}} \Prism^{(1)}_{S/A})$
is also sent to zero under $g$.
Now we use~\Cref{Nygaard filtration} (3) to see that
$\Fil^i_{\mathrm{N}}/I \cdot \Fil^{i-1}_{\mathrm{N}}$ is identified with
$\Fil^i_{\mathrm{H}} \dR_{S/(A/I)}^{\wedge}$
and the image of $\Sym^i(\Fil^1_{\mathrm{N}} \Prism^{(1)}_{S/A})$ becomes
$\Sym^i(\Fil^1_{\mathrm{H}} \dR_{S/(A/I)}^{\wedge})$, so we get an induced map
$\bar{g} \colon \left(
\Fil^i_{\mathrm{H}} \dR_{S/(A/I)}^{\wedge}/\Sym^i(\Fil^1_{\mathrm{H}} \dR_{S/(A/I)}^{\wedge})
\right) \to \dR_{S/A}^{\wedge}/\Fil^i_{\mathrm{H}}$.
But the $p$-power torsion submodule of the source of this map is $p$-adically dense,
while the target is $p$-torsionfree and $p$-adically complete,
so the map $\bar{g}$ must in fact be zero.
This proves the containment $\Fil^i_{\mathrm{N}} \subset \Fil^i_{\mathrm{H}}$ as claimed.
\end{proof}
The following is inspired by \cite[Subsection 12.4]{BS19}.
\begin{proposition}
\label{filtered map for transversal}
Let $(A,I)$ be a transversal prism, then for any $p$-completely
smooth $A/I$-algebra $R$, the map $\Prism^{(1)}_{R/A} \to \dR_{R/A}^{\wedge}$
can be promoted to a map of filtered algebras.
Moreover this lift is functorial in the $A/I$-algebra $R$, hence left Kan extends
to all animated $A/I$-algebras.
\end{proposition}
\begin{proof}
For any surjection $A/I\langle X_1, \ldots, X_n\rangle \to R$,
the ring
\[
\tilde{R} = A/I\langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle
\otimes_{A/I\langle X_1, \ldots, X_n\rangle} R
\]
is large quasisyntomic over $A/I$ and Zariski locally of the form considered in the previous proposition.
Therefore the map $\Prism^{(1)}_{\tilde{R}/A} \to \dR_{\tilde{R}/A}^{\wedge}$
is canonically filtered: existence Zariski locally follows from the previous proposition,
while Zariski gluing as well as uniqueness is provided by \Cref{rigidity of the situation}.
The same applies to all terms of the \v{C}ech nerve $\tilde{R}^{\bullet}$ of
$R \to \tilde{R}$.
The filtered cosimplicial rings $\Prism^{(1)}_{\tilde{R}^{\bullet}/A}$
and $\dR_{\tilde{R}^{\bullet}/A}^{\wedge}$ compute the filtered rings
$\Prism^{(1)}_{\tilde{R}/A}$ and $\dR_{\tilde{R}/A}^{\wedge}$ respectively.
By \Cref{rigidity of the situation}, we get a map of filtered cosimplicial rings.
This construction is independent of the choice of the surjection
$A/I\langle X_1, \ldots, X_n\rangle \to R$: adding extra variables to the $X_i$,
one gets a square of maps between filtered cosimplicial algebras,
and we use \Cref{rigidity of the situation} to see that the maps commute for each term
associated with $[m] \in \Delta$.
Since the category of such surjections admits pairwise coproducts, it is sifted.
The naturality in $R$ follows from exactly the same argument.
This way we get the desired functorial map.
\end{proof}
Now we turn to the general situation where the base prism $(B,J)$ is not necessarily
transversal.
We bootstrap the previous two propositions.
\begin{proposition}
Let $S = B/J \langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle/(f_1, \ldots, f_r)$
where $(f_i)$ is a $p$-completely regular sequence.
Then there exist maps $f_i(S)$ and $g_i(S)$ making the diagram~\ref{NygHdg diagram} commute.
\end{proposition}
\begin{proof}
We shall utilize our knowledge of the case where the base prism is transversal.
Without loss of generality, let us assume $(B,J) = (B, (d))$ is oriented:
Zariski locally it is oriented, and the locally defined $f_i(S)$ and $g_i(S)$
will necessarily glue due to \Cref{rigidity of the situation}.
Following a private communication with Illusie, let us define a transversal prism $(A, (a))$ together with a surjection
of prisms $(A, (a)) \twoheadrightarrow (B, (d))$ as follows.
Let
\[
A \coloneqq \mathbb{Z}_p\{x_{b}; b \in B\}\{\delta(x_d)^{-1}\}^\wedge_{(x_d, p)}
\]
be given by adjoining to $\mathbb{Z}_p$ a free $\delta$-variable corresponding to each element of $B$,
together with an inverse of $\delta$ of the variable corresponding to $d \in B$,
completed in the end with respect to $(p, x_d)$.
Denote $a \coloneqq x_d$.
Then $(A, (a))$ is a transversal prism, and there is an evident surjection of prisms
$(A, (a)) \twoheadrightarrow (B, (d))$.
Consider the surjection $A/a \langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle \to
B/J \langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle$
and lift the elements $f_i$ to $\tilde{f}_i$.
Let
\[
\tilde{S} \coloneqq \mathrm{Kos}(A/a \langle X_1^{1/p^\infty}, \ldots, X_n^{1/p^\infty} \rangle; \tilde{f}_1,
\ldots, \tilde{f}_r).
\]
We have $S = \tilde{S} \otimes^{{\mathbb L}}_{(A/a)} B/d$.
We know the analogous map for $\tilde{S}/(A/a)$ exists, thanks to \Cref{filtered map for transversal}.
Since both Nygaard and Hodge filtrations satisfy base change, we may base change the maps
for $\tilde{S}/(A/a)$ to obtain our desired maps for $S/(B/d)$.
\end{proof}
Following the same reasoning as in the proof of \Cref{filtered map for transversal},
one obtains the following:
\begin{proposition}
\label{filtered map for general}
For any $p$-completely smooth $B/J$-algebra $R$,
the map $\Prism^{(1)}_{R/B} \to \dR_{R/B}^{\wedge}$
can be promoted to a map of filtered algebras.
Moreover this lift is functorial in the $B/J$-algebra $R$, hence left Kan extends
to all animated $B/J$-algebras.
\end{proposition}
\begin{proof}
The argument is exactly the same as in \Cref{filtered map for transversal};
note that \Cref{rigidity of the situation} applies to a general bounded base prism
$(B,J)$.
\end{proof}
\begin{remark}
Fix a bounded base prism $(B,J)$.
Any such functorial filtered map is determined by its effect on $p$-complete
polynomial algebras, by left Kan extension.
Then by quasisyntomic descent, such a functorial filtered map is determined
by its effect on a basis of $\qsyn_{B/J}$, such as the full subcategory
generated by large quasisyntomic over $B/J$ algebras.
Therefore \Cref{rigidity of the situation} implies that there is
at most one such functorial filtered map.
Combining with the above proposition, we have both existence and uniqueness
of it.
\end{remark}
\begin{remark}
\label{weak compatibility with base change}
Following the way these filtered maps are constructed, we have certain
compatibility with base change:
Let $R$ be an animated $B/J$-algebra, let $(B,J) \to (C, JC)$
be a map of bounded prisms, and denote $R' \coloneqq R \otimes^{{\mathbb L}}_{B/J} C/JC$;
then the filtered map for $R'/C$ arises as the filtered map for $R/B$
base changed along $B \to C$.
Indeed it suffices to prove this when $R/(B/J)$ is $p$-completely smooth.
Then one simply notices that a surjection $B/J\langle X_1, \ldots, X_n\rangle \to R$,
base changed along $B \to C$, gives rise to a surjection
$C/JC\langle X_1, \ldots, X_n\rangle \to R'$.
\end{remark}
\subsection{Comparing Hodge and Nygaard filtrations}
We again use $(A,I)$ to denote a transversal prism, and use
$(B,J)$ to denote a general bounded prism.
All sheaves referred to in this subsection are viewed as objects
in $\mathrm{Shv}(\qsyn_{A/I})$ or $\mathrm{Shv}(\qsyn_{B/J})$
depending on the context.
Combining \Cref{comparing pris and crys} and \Cref{filtered map for general},
we get a natural map of sheaves of filtered rings:
\[
(\Prism^{(1)}_{-/B}, \Fil^{\bullet}_{\mathrm{N}}) \widehat{\otimes}_{(B, J^{\bullet})}
(\dR_{(B/J)/B}^\wedge, \Fil^{\bullet}_{\mathrm{H}})
\longrightarrow (\dR_{-/B}^\wedge, \Fil^{\bullet}_{\mathrm{H}}),
\]
which is an isomorphism on the underlying sheaf of rings.
Our objective in this subsection is to show that the above map is an isomorphism of sheaves
of filtered rings.
Our plan is again to first understand the case of a transversal base prism,
then bootstrap to general base prisms.
Let us begin by discussing the case of a transversal base prism.
\begin{theorem}
\label{lqsyn H and N filtration}
Let $S$ be a large quasisyntomic over $A/I$ algebra.
\begin{enumerate}
\item The map $\Prism^{(1)}_{S/A} \to \dR_{S/A}^{\wedge}$ is injective.
\item We have
\[
\Fil^r_I \Prism^{(1)}_{S/A} = \left( \Fil^r_{\mathcal{I}} \dR_{S/A}^{\wedge} \right)
\cap \left( \Prism^{(1)}_{S/A} \right).
\]
\item We have
\[
\Fil^i_{\mathrm{N}} \Prism^{(1)}_{S/A} = \left( \Fil^i_{\mathrm{H}} \dR_{S/A}^{\wedge} \right)
\cap \left( \Prism^{(1)}_{S/A} \right).
\]
\item For any $i$, the natural map
$\Prism^{(1)}_{S/A}/\Fil^i_{\mathrm{N}} \to \dR_{S/A}^{\wedge}/\Fil^i_{\mathrm{H}}$
is an injection of $p$-torsionfree modules,
whose cokernel is $(i-1)!$-torsion.
Hence multiplying by $(i-1)!$ gives a natural map backward, and composing
the two maps in either direction is the same as multiplying by $(i-1)!$.
In particular, the natural map
\[
\Prism^{(1)}_{S/A}/\Fil^i_{\mathrm{N}} \to \dR_{S/A}^{\wedge}/\Fil^i_{\mathrm{H}}
\]
is an isomorphism for any $i \leq p$.
\item The induced map
$\gr^*_{\mathrm{N}}\Prism^{(1)}_{S/A} \to \gr^*_{\mathrm{H}}\dR_{S/A}^{\wedge}$
is compatible with the vertical filtrations on both sides, and the induced map
on the graded pieces of the vertical filtrations
$(\wedge^i_S {\mathbb{L}}_{S/(A/I)})^{\wedge}[-i] \widehat{\otimes}_{A/I} \Sym^*_{A/I}(I/I^2)
\to (\wedge^i_S {\mathbb{L}}_{S/(A/I)})^{\wedge}[-i] \widehat{\otimes}_{A/I} \Gamma^*_{A/I}(I/I^2)$
is given by $\id \otimes_{A/I} \left(\gr^*_I A \to \gr^*_{\mathcal{I}} \mathcal{A} \right)$.
\end{enumerate}
\end{theorem}
Considering the case $S = A/I$ suggests that our estimate in (4) is sharp.
Before the proof,
let us remark that $p$-completely tensoring over $A$ with an $\mathcal{A}$-module is the same
as $(p,I)$-completely tensoring.
This is because $I^p \mathcal{A} \subset p \mathcal{A}$.
\begin{proof}
(1): the map is given by $(p,I)$-completely tensoring the inclusion $A \hookrightarrow \mathcal{A}$
with $\Prism^{(1)}_{S/A}$ over $A$.
Since $\Prism^{(1)}_{S/A}$ is $(p,I)$-completely flat over $A$, see \Cref{flatness of dR}, we get the injectivity
of $\Prism^{(1)}_{S/A} \to \dR_{S/A}^{\wedge}$.
(2): clearly $I^r \Prism^{(1)}_{S/A}$ is contained in
$\mathcal{I}^{[r]} \dR_{S/A}^{\wedge}$. To check the asserted equality,
it suffices to show the induced map
$\Prism^{(1)}_{S/A}/I^r \to \dR_{S/A}^{\wedge}/\mathcal{I}^{[r]}$ is injective.
But this map is given by $(p,I)$-completely tensoring $\Prism^{(1)}_{S/A}$ with
the inclusion $A/I^r \hookrightarrow \mathcal{A}/\mathcal{I}^{[r]}$ over $A$,
so we get the desired injectivity again by the $(p,I)$-complete flatness of
$\Prism^{(1)}_{S/A}$ over $A$.
(3): it suffices to show that the induced map
$\tilde{g} \colon \Prism^{(1)}_{S/A}/\Fil^i_{\mathrm{N}} \to \dR_{S/A}^{\wedge}/\Fil^i_{\mathrm{H}}$
is injective.
The $I$-adic and $\mathcal{I}^{[\bullet]}$-adic filtrations on the two sides induce
maps of graded pieces
\[
\dR_{S/(A/I)}^{\wedge}/\Fil^j_{\mathrm{H}} \widehat{\otimes}_{A/I} I^{i-j}/I^{i-j+1} \to
\dR_{S/(A/I)}^{\wedge}/\Fil^j_{\mathrm{H}} \widehat{\otimes}_{A/I} \mathcal{I}^{[i-j]}/\mathcal{I}^{[i-j+1]}.
\]
Here we have used \Cref{general KO filtration properties} (3) and \Cref{general KON filtration properties} (3).
We conclude that the map $\tilde{g}$ is injective as $\dR_{S/(A/I)}^{\wedge}/\Fil^j_{\mathrm{H}}$
is $p$-completely flat over $A/I$ for any $j$ and the natural map
$I^{i-j}/I^{i-j+1} \to \mathcal{I}^{[i-j]}/\mathcal{I}^{[i-j+1]}$ is injective.
(4): injectivity follows from the previous paragraph.
Let $S = A/I \langle X_l^{1/p^\infty} \mid l \in L \rangle /M $,
with each element $m \in M$ corresponding to a series $f_m$.
Below we shall not distinguish $m$ and $f_m$.
Consider
\[
S' = A/I \langle X_l^{1/p^\infty}, Y_m^{1/p^\infty} \mid l \in L, m \in M \rangle /(Y_m - f_m; m \in M)
\eqqcolon \widetilde{S}/(Y_m - f_m; m \in M).
\]
There is a surjection $S' \twoheadrightarrow S$ of $A/I$-algebras, sending powers of $Y_m$ to $0$.
This induces a surjection on $\mathbb{L}_{-/A}^\wedge$, hence also a surjection on $\dR_{-/A}^\wedge$.
Therefore it suffices to prove the statement for $S'$.
Now we know that $\dR_{S'/A}^\wedge$ is given by $p$-completely adjoining divided powers of
$I$ and of the elements $Y_m - f_m$ to $\widetilde{S}$,
and the $i$-th Hodge filtration is given by the ideal $p$-completely generated by the
divided monomials of degree at least $i$.
Since the image of $\Prism^{(1)}_{S'/A}$ already contains
$\widetilde{S}$,
it suffices to show that $(i-1)!$ times each divided monomial of degree less than $i$
lies in $\widetilde{S}$, which follows from the definition.
(5): since the generating factor $(\wedge^i_S {\mathbb{L}}_{S/(A/I)})^{\wedge}[-i]$
of both vertical filtrations comes from the $i$-th graded piece of the Hodge filtration
on $\dR_{S/(A/I)}^{\wedge}$ (via modulo $I$ and $\mathcal{I}$ respectively),
our statement follows from the commutative triangle in~\Cref{compatibility when modulo I}.
\end{proof}
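Let us make the sharpness assertion preceding the proof explicit. Take $S = A/I$, and assume, as we may Zariski locally, that $(A,I) = (A,(d))$ is oriented. Repeatedly applying the short exact sequence of \Cref{Nygaard filtration} (3), whose third term vanishes here, gives $\Fil^i_{\mathrm{N}} \cong I^i$ inside $\Prism^{(1)}_{S/A} \cong A$, while $\dR_{S/A}^{\wedge} \cong \mathcal{A}$ with $\Fil^i_{\mathrm{H}} = \mathcal{I}^{[i]}$. The map in (4) then becomes
\[
A/I^i \longrightarrow \mathcal{A}/\mathcal{I}^{[i]},
\]
whose cokernel is generated by the classes of the divided powers $\gamma_r(d) = d^r/r!$ for $1 \leq r \leq i-1$; each such class is killed by $r!$, which divides $(i-1)!$. In particular, when $i \leq p$ the integer $(i-1)!$ is a $p$-adic unit, so the cokernel vanishes, recovering the last assertion of (4); for larger $i$, the class of $\gamma_{i-1}(d)$ indicates that the bound $(i-1)!$ cannot be improved in general.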
The above statements can be immediately extended to our desired statement
via several reduction steps.
\begin{corollary}
\label{affine H and N filtration}
Let $R$ be a $B/J$-algebra.
The natural map of filtered algebras:
\[
(\Prism^{(1)}_{R/B}, \Fil^{\bullet}_{\mathrm{N}}) \widehat{\otimes}_{(B, J^{\bullet})}
(\dR_{(B/J)/B}^\wedge, \Fil^{\bullet}_{\mathrm{H}})
\longrightarrow (\dR_{R/B}^\wedge, \Fil^{\bullet}_{\mathrm{H}}),
\]
is a filtered isomorphism.
In particular, the filtrations on the left hand side define quasisyntomic sheaves.
\end{corollary}
We refer the reader to \cite[3.8-3.10]{GL20} for a discussion of the filtration on the tensor product
of filtered modules over a filtered algebra. Here we use $\widehat{\otimes}$ to mean
the derived $p$-completion of \cite[Construction 3.9]{GL20}.
\begin{proof}
We make a few reduction steps.
First of all, both sides are left Kan extended from the case of $p$-complete
polynomial algebras, therefore it suffices to show the map is a filtered isomorphism
when $R = B/J \langle X_1, \ldots, X_n\rangle$.
Secondly, it suffices to prove the statement Zariski locally on $\Spf(B/J)$,
hence we may assume $(B,J)$ is oriented.
Now we look at the universal map from universal oriented prism $(A^{\mathrm{univ}}, I) \to (B,J)$.
Let $R^{\mathrm{univ}}$ be the corresponding $p$-complete polynomial algebra over
the reduction of the universal oriented prism.
By \Cref{weak compatibility with base change}, one sees that the filtered map
for $R$ is the base change of the analogous map for $R^{\mathrm{univ}}$.
Therefore we are finally reduced to the case where the base prism $(A,I)$ is transversal
and $R$ is a $p$-completely smooth $A/I$-algebra.
Since the map of underlying algebras is an isomorphism by \Cref{comparing pris and crys}, it suffices
to show that the induced map of graded algebras is an isomorphism.
By derived $p$-completing \cite[Lemma 3.10]{GL20}, we see that the graded algebra of the left hand side
becomes
\[
\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A}) \widehat{\otimes}_{\Sym^*_{A/I}(I/I^2)} \Gamma^*_{A/I}(I/I^2).
\]
Now we invoke the vertical filtrations on graded algebras of both sides, see \Cref{increasing filtration on Hodge graded}
and \Cref{increasing filtration on Nygaard graded}.
The vertical filtration on $\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A})$ induces an increasing filtration
by $(-) \widehat{\otimes}_{\Sym^*_{A/I}(I/I^2)} \Gamma^*_{A/I}(I/I^2)$,
and our morphism induces identifications
\[
\gr^v_i\bigg(\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A}) \widehat{\otimes}_{\Sym^*_{A/I}(I/I^2)} \Gamma^*_{A/I}(I/I^2)\bigg)
\cong \gr^v_i\bigg(\gr^*_{\mathrm{H}}(\dR_{R/A}^\wedge)\bigg)
\]
for all $i$.
Here we have used \Cref{lqsyn H and N filtration} (5).
Since these vertical filtrations are increasing, exhaustive, and uniformly bounded below by $0$,
we conclude that the natural map
\[
\gr^*_{\mathrm{N}}(\Prism^{(1)}_{R/A}) \widehat{\otimes}_{\Sym^*_{A/I}(I/I^2)} \Gamma^*_{A/I}(I/I^2) \longrightarrow
\gr^*_{\mathrm{H}}(\dR_{R/A}^\wedge)
\]
is also an isomorphism.
\end{proof}
In particular, we can specialize to the case of quasi-compact quasi-separated smooth formal schemes over $\Spf(B/J)$.
\begin{corollary}[{c.f.~\cite[Theorem 2.9]{Ill20}}]
\label{smooth H and N filtration}
Let $X$ be a quasi-compact quasi-separated smooth formal scheme over $\Spf(B/J)$.
Then we have a natural filtered isomorphism:
\[
\bigg(\mathrm{R\Gamma}(X, \Fil^{\bullet}_{\mathrm{N}}(\Prism^{(1)}_{-/B}))\bigg)
\widehat{\otimes}_{(B, J^{\bullet})}
(\dR_{(B/J)/B}^\wedge, \Fil^{\bullet}_{\mathrm{H}})
\xrightarrow{\cong} \mathrm{R\Gamma}(X, \Fil^{\bullet}_{\mathrm{H}}(\dR_{-/B}^\wedge));
\]
they are furthermore naturally filtered isomorphic to
$\mathrm{R\Gamma_{crys}}(X, \mathcal{I}^{[\bullet]}_{\mathrm{crys}})$
if $(B,J)$ is transversal.
Similarly whenever $i \leq p$, we also have natural isomorphisms
\[
\mathrm{R\Gamma}(X, \Prism^{(1)}_{-/A}/\Fil^i_{\mathrm{N}}) \xrightarrow{\cong}
\mathrm{R\Gamma}(X, \dR_{-/A}^\wedge/\Fil^i_{\mathrm{H}});
\]
which are furthermore naturally isomorphic to
$\mathrm{R\Gamma_{crys}}(X, \mathcal{O}_{\mathrm{crys}}/\mathcal{I}^{i}_{\mathrm{crys}})$
if $(B,J)$ is transversal.
These isomorphisms are functorial in $X$, and satisfy the base change property
as in \Cref{weak compatibility with base change}.
\end{corollary}
\begin{proof}
These functorial isomorphisms are provided by \Cref{affine H and N filtration}
and \Cref{lqsyn H and N filtration} (4) respectively.
The furthermore statement, in the case of a transversal base prism, follows from \Cref{Illusie--Bhatt}.
\end{proof}
\begin{remark}
\label{rem-N and H fil}
Let us return to the case of a transversal base prism.
A posteriori, the filtration on the left hand side of \Cref{affine H and N filtration} is a quasisyntomic sheaf,
hence we may define it as the unfolding of its restriction to the basis of large quasisyntomic $A/I$-algebras.
Also a posteriori, we know the value on such an algebra $S$ must be concentrated in cohomological degree $0$;
therefore it has to be the image of the augmentation map
\[
\Fil^i(\Prism^{(1)}_{S/A} \widehat{\otimes}_{\mathbb{Z}_p} \mathcal{A}) \to \dR_{S/A}^\wedge,
\]
where the filtration on the left hand side is given by the usual Day convolution.
This implies an equality
\[
\Fil^n_{\mathrm{H}}(\dR_{S/A}^\wedge) = \sum_{i = 0}^n
\bigg(\Fil^i_{\mathrm{N}}(\Prism^{(1)}_{S/A}) \cdot \mathcal{I}^{[n-i]} \bigg),
\]
which also follows from combining \Cref{general KO filtration properties} (1), \Cref{qsyn KO filtration properties} (3),
and \Cref{Nygaard filtration} (2).
Therefore, for any $0 \leq r \leq p-1$, we see that the Frobenius on the derived de Rham complex, when restricted
to the $r$-th Hodge filtration,
\[
\Fil^r_{\mathrm{H}}(\dR_{S/A}^\wedge) \xrightarrow{\varphi} \dR_{S/A}^\wedge
\]
factors through multiplication by $p^r$.
Since, for a large quasisyntomic $A/I$-algebra $S$, the complex $\dR_{S/A}^\wedge$
is $p$-completely flat over $\mathcal{A}$ (see \Cref{flatness of dR}), which is $p$-torsionfree,
we may uniquely divide the restriction of $\varphi$ by $p^r$.
By unfolding, this gives rise to divided Frobenii as maps of sheaves on $\mathrm{qSyn}_{A/I}$:
\[
\varphi_r \colon \Fil^r_{\mathrm{H}}(\dR_{-/A}^\wedge) \rightarrow \dR_{-/A}^\wedge.
\]
By definition, they also satisfy $\varphi_r \mid_{\Fil^{r+1}_{\mathrm{H}}} = p \varphi_{r+1}$ when $r \leq p-2$.
Following the same argument as in \Cref{functorial endomorphism theorem},
see also \Cref{functorial endo remark}, such a functorial
divided Frobenius is unique for each $0 \leq r \leq p-1$.
When $(A,I)$ is the Breuil--Kisin prism, this gives rise to an alternative definition of the divided Frobenii appearing in
\cite[p.~10]{Bre98}.
\end{remark}
\section{Connection on $\dR_{-/\mathfrak{S}}^\wedge$ and structure of torsion crystalline cohomology}
From this section onward, we focus on the Breuil--Kisin prism $A=(\mathfrak{S}, E)$ and crystalline cohomology over $S= \dR^\wedge_{\O_K/\mathfrak{S}}$.
Let $k$ be a perfect field of characteristic $p$,
and let $K$ be a finite totally ramified extension of $K_0 = W(k)[\frac 1 p]$ with a fixed uniformizer $\pi\in \O_K$. Fix an algebraic closure $\overline{K}$ of $K$ and let $\mathbf{C}$ be the $p$-adic completion of $\overline{K}$. Write $G_K : = \Gal (\overline{K} / K)$ and $e=[K:K_0]$.
Let $E= E(u)\in W(k)[u]$ be the Eisenstein polynomial of $\pi$ with constant term $a_0 p$, and recall that $\mathfrak{S} : = W(k)[\![u]\!]$ is equipped with a Frobenius
$\varphi$ which naturally extends that on $W(k)$ via $\varphi (u) = u ^p$.
Pick $\pi_n \in \O_{\overline{K} }$ so that $\pi_{0} = \pi$ and $\pi_{n +1}^p = \pi_n$.
Then $\underline{\pi}: = (\pi_n)_{n \geq 0}\in \O_{\mathbf{C}}^\flat$.
We embed $\mathfrak{S} \hookrightarrow A_{\inf}$ via $u \mapsto [\underline{\pi}]$ which is a map of prisms.
Let $K_\infty : = \bigcup\limits_{n = 0}^\infty K(\pi_n)$ and $G_\infty : = \Gal (\overline{K} / K_\infty)$.
It is clear that the embedding $\mathfrak{S} \subset A_{\inf}$ is compatible with $G_\infty$-actions.
We extend $\varphi$ from $\mathfrak{S}$ to $S$ and let $\Fil^m S$ be the $p$-adic closure of the ideal generated by $\gamma_i (E): = \frac{E^i}{i!}, i \geq m$.
We embed $S \hookrightarrow A_{\cris}$ also via $u \mapsto [\underline{\pi}]$.
For $m \leq p-1$, we have $\varphi(\Fil^m S)\subset p ^m S$.
We set $\varphi_m : = \frac{\varphi}{p ^m} :\Fil ^m S \to S$.
Similar notation also applies to $A_{\cris}$. Write $c_1: = \frac{\varphi (E)}{p} \in S^\times$.
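Let us indicate why $\varphi(\Fil^m S) \subset p^m S$ for $m \leq p-1$: note that $\varphi(E) \equiv E^p \pmod{p}$ in $\mathfrak{S}$ and $E^p = p!\,\gamma_p(E) \in pS$, so $\varphi(E) \in pS$. For the generators $\gamma_i(E)$ of $\Fil^m S$ with $m \leq i \leq p-1$, the asserted containment is then the computation
\[
\varphi(\gamma_i(E)) = \frac{\varphi(E)^i}{i!} \in p^i S \subseteq p^m S,
\]
since $i!$ is a unit in $\ensuremath{\mathbb{Z}}_p$ for $i \leq p-1$; for the generators with $i \geq p$, one checks similarly that $v_p(p^i/i!) \geq m$.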
Finally, there exists a $W(k)$-linear derivation $\nabla_S : S \to S$ given by $\nabla_S(f(u))= f'(u)$.
For $n \geq 1$, if $M$ is a $\ensuremath{\mathbb{Z}}_p$-module then we always use $M_n$ to denote $M / p ^n M$.
Similar notation applies to ($p$-adic formal) schemes: i.e., $X_n : = X \times_{\Spf(\ensuremath{\mathbb{Z}}_p)} \Spec (\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$. Write $W= W(k)$ and reserve $\gamma_i (\cdot)$ for the $i$-th divided power.
\subsection{Connection on $\dR_{-/\mathfrak{S}}^\wedge$}
Under the philosophy that derived de Rham cohomology behaves a lot like crystalline cohomology,
one expects there to be a connection on $\dR_{-/\mathfrak{S}}^\wedge$.
We explain it in this section.
\begin{lemma}
\label{decompleting the base}
Let $R$ be an $\mathcal{O}_K$-algebra. Then the natural morphism
$\dR_{R/W[u]}^\wedge \to \dR_{R/\mathfrak{S}}^\wedge$ is an isomorphism,
where $R$ is regarded as an $\mathfrak{S}$-algebra and a $W[u]$-algebra via
$W[u] \to \mathfrak{S} \to \mathcal{O}_K \to R$.
\end{lemma}
\begin{proof}
Just notice the following $p$-complete pushout diagram:
\[
\xymatrix{
\mathfrak{S} \ar[r] & \mathcal{O}_K \ar[r] & R \\
W[u] \ar[r] \ar[u] & \mathcal{O}_K \ar[u] \ar[r] & R \ar[u],
}
\]
and appeal to the $p$-completely base change formula of derived de Rham complexes
to get
\[
\dR_{R/W[u]}^\wedge \widehat{\otimes}_{W[u]} \mathfrak{S} \xrightarrow{\cong} \dR_{R/\mathfrak{S}}^\wedge.
\]
Next we observe that $\dR_{R/W[u]}^\wedge$ is a module over $S = \dR_{\mathcal{O}_K/W[u]}^\wedge$
and that $S \widehat{\otimes}_{W[u]} \mathfrak{S} = S$, hence the base change on the left hand side
gives back $\dR_{R/W[u]}^\wedge$.
\end{proof}
\begin{construction}[{see also \cite{KO68}}]
\label{connection construction}
For any $W[u]$-algebra $R$,
by ($p$-completely) applying \cite[Lemma 3.13.(4)]{GL20} to the triple $W \to W[u] \to R$,
we see that there is a functorial triangle in filtered derived $\infty$-category of $W$-modules:
\[
\dR_{R/W[u]}^\wedge \widehat{\otimes}_{W[u]} \Omega^1_{W[u]/W}[-1] \to \dR_{R/W}^\wedge
\to \dR_{R/W[u]}^\wedge.
\]
Here $\Omega^1_{W[u]/W}[-1]$ is placed entirely in filtration degree $1$.
By choosing the generator $du \in \Omega^1_{W[u]/W}$, the above becomes
\[
\dR_{R/W}^\wedge \to \dR_{R/W[u]}^\wedge \xrightarrow{\nabla} \dR_{R/W[u]}^\wedge(-1),
\]
where $(-1)$ indicates the shift of filtrations:
$\Fil^i(\dR_{R/W[u]}^\wedge(-1)) = \Fil^{i-1}_{\mathrm{H}}(\dR_{R/W[u]}^\wedge)$.
When $R$ is smooth over $W[u]$, the map $\nabla$ is given by the Lie derivative
with respect to $\frac{\partial}{\partial u}$:
\[
\nabla(\omega) = \mathcal{L}_{\frac{\partial}{\partial u}}(\omega).
\]
\end{construction}
\begin{lemma}
\label{connection on ddR}
Let $R$ be an $\mathcal{O}_K$-algebra. Then we have a functorial
triangle in the filtered derived $\infty$-category:
\[
\dR_{R/W}^\wedge \to \dR_{R/\mathfrak{S}}^\wedge \xrightarrow{\nabla} \dR_{R/\mathfrak{S}}^\wedge(-1).
\]
Moreover we have
\[
p u^{p-1} \cdot \varphi \circ \nabla = \nabla \circ \varphi,
\]
where $\varphi \colon \dR_{R/\mathfrak{S}}^\wedge \to \dR_{R/\mathfrak{S}}^\wedge$ is the Frobenius
defined in \Cref{Frobenii}.
\end{lemma}
\begin{proof}
The first statement follows from \Cref{connection construction} and \Cref{decompleting the base}.
To check the equality, by left Kan extension it suffices to check
it for polynomial algebras.
Then by quasisyntomic descent, it suffices to check the equality for
large quasisyntomic $\mathcal{O}_K$-algebras.
Following the proof of \Cref{uniqueness of comparison},
we are reduced to showing the equality for algebras of the form
\[
R = \mathcal{O}_K \langle X_i^{1/p^\infty}, Y_j^{1/p^\infty} \mid i \in I, j \in J \rangle/(Y_j - f_j \mid j \in J) \eqqcolon \widetilde{R}/(Y_j - f_j \mid j \in J).
\]
Now the map $\widetilde{R} \to R$ induces a map between $\dR_{-/\mathfrak{S}}^\wedge$ given by
\[
S \langle X_i^{1/p^\infty}, Y_j^{1/p^\infty} \mid i \in I, j \in J \rangle \eqqcolon T
\to D_T(Y_j - f_j; j \in J)^\wedge.
\]
Here $S$ is the $p$-complete PD envelope of $\mathfrak{S}$ along $(E)$ and the latter denotes $p$-completely adjoining divided powers of $(Y_j - f_j)$ in $T$.
Since $D_T(Y_j - f_j; j \in J)^\wedge$ is $p$-complete and $p$-torsionfree, it suffices
to check the identity on $T$.
On $T$, the Frobenius $\varphi$ acts by sending variables $X, Y, u$ to their $p$-th power,
and $\nabla$ acts via $\frac{\partial}{\partial u}$.
Finally we are reduced to checking the equality
\[
p u^{p-1} \cdot \varphi\left(\frac{\partial}{\partial u} (F(u, \underline{X}, \underline{Y}))\right) = \frac{\partial}{\partial u} \left(\varphi( F(u, \underline{X}, \underline{Y})) \right),
\]
for any $F(u, \underline{X}, \underline{Y}) \in T$.
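Indeed, both sides of this identity are additive and $p$-adically continuous in $F$, so it suffices to check it on a monomial $F = u^n \underline{X}^{\alpha} \underline{Y}^{\beta}$ (with $n \in \mathbb{Z}_{\geq 0}$ and $\alpha, \beta$ multi-indices with entries in $\mathbb{Z}[\frac{1}{p}]_{\geq 0}$), where it reads
\[
p u^{p-1} \cdot \varphi\Big( n u^{n-1} \underline{X}^{\alpha} \underline{Y}^{\beta} \Big)
= p n\, u^{p-1} \cdot u^{p(n-1)} \underline{X}^{p\alpha} \underline{Y}^{p\beta}
= p n\, u^{pn-1} \underline{X}^{p\alpha} \underline{Y}^{p\beta}
= \frac{\partial}{\partial u}\Big( u^{pn} \underline{X}^{p\alpha} \underline{Y}^{p\beta} \Big).
\]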
\end{proof}
Consequently, for any $\mathcal{O}_K$-algebra $R$, we always have a long exact sequence:
\[
\label{LES of connection}
\tag{\epsdice{4}}
\ldots \to {\rm H}^i(\dR_{R/W}^\wedge) \to {\rm H}^i(\dR_{R/\mathfrak{S}}^\wedge) \xrightarrow{\nabla} {\rm H}^i(\dR_{R/\mathfrak{S}}^\wedge(-1))
\xrightarrow{+1} \ldots
\]
and its $r$-th filtration analogues for all $r \in \mathbb{N}$.
In special situations, these will break into short exact sequences.
Let us introduce some more notation.
Let $L$ be a perfectoid field extension of $K$ containing all $p$-power roots of $\pi$.
For instance, $L$ could be the $p$-adic completion of $K_{\infty}$, or $\mathbf{C}$.
Let $A_{\inf}(L) \coloneqq W(\mathcal{O}_L^\flat)$ be Fontaine's $A_{\inf}$ ring associated with
$L$, and recall there is a natural map $\theta \colon A_{\inf}(L) \to \mathcal{O}_L$.
Fixing a compatible system of $p$-power roots of $\pi$, we obtain a map $\mathfrak{S} \to A_{\inf}(L)$
with $u \mapsto [\underline{\pi}]$ compatible with $\theta$ and the inclusion $\mathcal{O}_K \to \mathcal{O}_L$.
\begin{proposition}
\label{prop-connection for perfectoid}
With notation as above. Let $R$ be a quasisyntomic $\mathcal{O}_L$-algebra.
Then we have
\begin{enumerate}
\item The natural map $\dR_{R/W}^\wedge \to \dR_{R/A_{\inf}(L)}^\wedge$ is a filtered isomorphism.
\item The sequence \epsdice{4} and its $r$-th filtration analogues break into short exact sequences:
\[
0 \to {\rm H}^i(\Fil^r_{\mathrm{H}}\dR_{R/W}^\wedge) \to {\rm H}^i(\Fil^r_{\mathrm{H}}\dR_{R/\mathfrak{S}}^\wedge)
\xrightarrow{\nabla} {\rm H}^i(\Fil^{r-1}_{\mathrm{H}}\dR_{R/\mathfrak{S}}^\wedge) \to 0,
\]
for all $i$ and $r$.
In particular $\dR_{R/\mathfrak{S}}^\wedge \xrightarrow{\nabla} \dR_{R/\mathfrak{S}}^\wedge(-1)$
is surjective on each ${\rm H}^i$, and
\[
{\rm H}^i(\dR_{R/W}^\wedge) = {\rm H}^i(\dR_{R/\mathfrak{S}}^\wedge)^{\nabla = 0}.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
(1) is \cite[Theorem 3.4.(2)]{GL20}.
As for (2), it suffices to show that the maps
${\rm H}^i(\Fil^r_{\mathrm{H}}\dR_{R/W}^\wedge) \to {\rm H}^i(\Fil^r_{\mathrm{H}}\dR_{R/\mathfrak{S}}^\wedge)$
are injective for all $i$ and $r$.
By functoriality, we have maps of filtered algebras
\[
\dR_{R/W}^\wedge \to \dR_{R/\mathfrak{S}}^\wedge \to \dR_{R/A_{\inf}(L)}^\wedge
\]
whose composition is a filtered isomorphism by (1).
Therefore, since the composition is a filtered isomorphism, the first morphism induces injections at the level of cohomology.
This explains why the long exact sequence \epsdice{4} breaks into short exact sequences.
The last statement follows by taking $r = 0$.
\end{proof}
\subsection{Structures of torsion crystalline cohomology}
\label{subsection-stru of crys}
Let $X$ be a proper smooth formal scheme over $\O_K$.
Let us summarize the structures on ${\rm H} ^i _{\cris} (X/S) \coloneqq {\rm H} ^i_{\cris} ( X/S , \O_{\cris} )$ constructed in the previous sections.
By Corollary \ref{affine H and N filtration} and Theorem \ref{Illusie--Bhatt}, we obtain the following commutative diagram.
\begin{equation}\label{dig-summary of comp}
\begin{split}
\xymatrix{ R\Gamma_{\qsyn} (X, \Prism^{(1)}_{-/\mathfrak{S}})\ar[r] & R\Gamma_{\cris} (X/S, \O_{\cris}) \ar@{=}[r] & S \otimes^{\mathbb L}_{\varphi, \mathfrak{S}} R\Gamma_\Prism ( X /\mathfrak{S}) \\ R\Gamma_{\qsyn} (X, \Fil^m_{\rm N} \Prism^{(1)}_{-/ \mathfrak{S}})\ar[u] \ar[r] & R\Gamma_{\cris} (X/S, \mathcal I^{[m]}_{\cris})\ar[u] & }
\end{split}
\end{equation}
Here the second isomorphism in the first row follows from the canonical isomorphism $R\Gamma_{\qsyn} ( X, \Prism_{-/ \mathfrak{S}}) \simeq R\Gamma_\Prism (X/\mathfrak{S} ) $ and the fact that $\varphi : \mathfrak{S} \to \mathfrak{S}$ is flat.
For $m \leq p-1$, Remark \ref{rem-N and H fil} allows us to define a $\varphi$-semilinear map $\varphi_m : {\rm H}^ i _{\cris} (X/S , \mathcal I^{[m]}_{\cris}) \to {\rm H} ^i_{\cris} (X/S)$ so that the following diagram commutes for $m +1 \leq p-1$:
$$\xymatrix{{\rm H}^ i _{\cris} (X/S , \mathcal I^{[m]}_{\cris}) \ar[r]^-{\varphi_{m}} & {\rm H} ^i_{\cris} (X/S) \\ {\rm H}^ i _{\cris} (X/S , \mathcal I^{[m+1]}_{\cris})\ar[u]\ar[ur]^{p\varphi_{m +1}} & }$$
We abbreviate the above diagram as $\varphi_{m}|_{{\rm H}^ i _{\cris} (X/S , \mathcal I^{[m+1]}_{\cris})} = p \varphi_{m +1}$. It is also clear that for any $s \in \Fil^m S$ and $x\in {\rm H}^ i _{\cris} (X/S ) $
we have \[\varphi_m (sx)= (c_1)^{-m }\varphi_m (s) \varphi_m(E(u)^m x).\]
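Indeed, unwinding the definitions (and using the relation $\varphi(E) = p c_1$): for $s \in \Fil^m S$ we have $\varphi(s) \in p^m S$, so
\[
\varphi_m(sx) = \frac{\varphi(s)\,\varphi(x)}{p^m} = \varphi_m(s)\,\varphi(x),
\qquad
\varphi_m(E(u)^m x) = \frac{\varphi(E)^m}{p^m}\,\varphi(x) = c_1^m\,\varphi(x),
\]
and combining the two displays gives the stated formula.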
Finally, the previous subsection constructs a connection $\nabla: {\rm H} ^i_{\cris} (X/S) \to {\rm H} ^i_{\cris} (X/S) $.
By Proposition \ref{prop-connection for perfectoid} and Lemma \ref{connection on ddR}, we conclude that
\begin{enumerate}
\item $\nabla: {\rm H} ^i_{\cris} (X/S) \to {\rm H} ^i_{\cris} (X/S)$ is a $W(k)$-linear derivation satisfying the Leibniz rule \[\nabla (f(u)x) = f'(u) x + f(u) \nabla (x). \]
\item (Griffiths transversality) $\nabla$ carries ${\rm H} ^i _{\cris} (X/ S, \mathcal I^{[m]}_{\cris})$ into ${\rm H} ^i _{\cris} (X/ S, \mathcal I^{[m-1]}_{\cris})$.
\item The following diagram commutes: $$
\xymatrix@C=3em{ {\rm H} ^i _{\cris} (X/S , \mathcal I_{\cris} ^{[m]})\ar[d]_{E(u) \nabla} \ar[r]^-{\varphi_m } & {\rm H} ^i _{\cris} (X/S) \ar[d]^{c_1 \nabla}\\
{\rm H} ^i _{\cris} (X/S , \mathcal I_{\cris} ^{[m]}) \ar[r]^-{ u ^{p-1} \varphi_m } &{\rm H} ^i _{\cris} (X/S) }
$$
\end{enumerate}
The last diagram follows from the identity $pu^{p-1} \varphi \circ \nabla= \nabla \circ \varphi$ of Lemma \ref{connection on ddR} and the equality $\varphi (E) = p c_1$.
Now consider the $p^n$-torsion crystalline cohomology ${\rm H} ^i_{\cris} (X_n /S_ n)$ together with the filtration ${\rm H} ^i _{\cris} (X_n /S_n, \mathcal I ^{[m]}_{\cris})$. We claim that ${\rm H} ^i_{\cris} (X_n /S_n)$ admits all the above structures, namely $\varphi_m: {\rm H}^i _{\cris} (X_n/ S_n, \mathcal I ^{[m]}_{\cris}) \to {\rm H} ^i_{\cris} (X_n /S_n) $ for $m \leq p-1$ and $\nabla : {\rm H} ^i_{\cris} (X_n /S_n) \to {\rm H} ^i_{\cris} (X_n /S_n) $, satisfying all the above properties.
To see this, note that $R\Gamma_{\cris} ( X_n /S_n, \mathcal I ^{[m]}_{\cris})\simeq R\Gamma _{\cris} (X/S, \mathcal I ^ {[m]}_{\cris}) \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n\ensuremath{\mathbb{Z}}$,
where $\mathcal I ^{[0]}_{\cris}= \O_{\cris}$; then all the above properties follow by applying $\otimes^{{\mathbb L}}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$, except for Diagram \eqref{dig-summary of comp}, which requires torsion quasisyntomic cohomology. For this, we define the following torsion cohomologies:
For $m \geq 0$, set $R\Gamma _{\dR} (X_n/\mathfrak{S}_n, \Fil^m_{\rm H} ): = R\Gamma_{\dR} (X/\mathfrak{S} , \Fil^m _{\mathrm{H}}) \otimes ^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} $, $ R\Gamma_{\qsyn} (X_n / \mathfrak{S}_n, \Fil^m_{\rm N} \Prism^{(1)}): = R\Gamma_{\qsyn}(X/ \mathfrak{S} , \Fil^m_{\rm N}\Prism^{(1)}_{-/ \mathfrak{S}}) \otimes^{\mathbb L}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} $, and finally $R\Gamma_{\Prism} (X_n/\mathfrak{S}_n): = R\Gamma _\Prism( X/ \mathfrak{S}) \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
Then the derived mod-$p^n$ version of Diagram \eqref{dig-summary of comp} still holds, obtained by applying derived reduction modulo $p^n$ to the original diagram.
\subsection{Galois action on torsion crystalline cohomology}\label{subsection-GK-action} Keep the notation as above.
Set $\mathcal X$ to be the base change of $X$ to $\Spf \O_{\mathbf{C}}$ and $\mathcal X _n : = \mathcal{X} \otimes_{\ensuremath{\mathbb{Z}}} {\ensuremath{\mathbb{Z}}/ p ^n\ensuremath{\mathbb{Z}}}$. Then ${\rm H} ^i_{\cris} ({\mathcal X}_n /S_n)$ has an $S$-linear $G_K$-action, where we declare the $G_K$-action on $S$ to be trivial. Note that ${\rm H} ^i_{\cris} ({\mathcal X}_n /A_{\cris, n})$ also has an $A_{\cris}$-semilinear $G_K$-action, induced by the $G_K$-actions on ${\mathcal X}$ and $A_{\cris}$. By Proposition \ref{prop-connection for perfectoid} and its proof, we see that the natural maps
$W(k) \to \mathfrak{S} \to A_{\inf}$ induce the following commutative diagram:
$$\xymatrix{ {\rm H} ^i_{\cris} ({\mathcal X}_n / W_n (k)) \ar[rd]_\sim^\beta \ar@{^{(}->}[r]^\alpha & {\rm H} ^i_{\cris} ({\mathcal X}_n / S_n) \ar[d]^\iota & {\rm H} ^i _{\cris} (X_n/ S_n)\ar@{_{(}->}[l]_{\tilde \alpha} \ar@{^{(}->}[d]^{\tilde \iota}\\ & {\rm H} ^i _{\cris} ({\mathcal X}_n/ A_{\cris, n})\ar@{=}[r] & {\rm H}^i_{\cris} (X_n / S_n) \otimes_{S_n} A_{\cris, n} . }$$
Note that the second row is an isomorphism because $ {\mathcal X} = X \times_{\Spec(S)} \Spec (A_{\cris})$ and $A_{\cris, n}$ is flat over $S_n$. Thus $\tilde \iota$ is an injection, and so is $\tilde \alpha$. We also note that $\alpha$ and $\beta$ are both compatible with the $G_K$-actions, because both maps $ W(k) \to \mathfrak{S}$ and $W(k)\to A_{\inf}$ are $G_K$-equivariant; but $\iota$ is not, as $\mathfrak{S} \subset A_{\inf}$ is only stable under the $G_\infty$-action. It is also clear that ${\rm H} ^i _{\cris} (X_n /S_n) \subset ( {\rm H} ^i _{\cris} ({\mathcal X}_n /S_n))^{G_K}$ via $\tilde \alpha$, and that $\tilde \alpha$ is compatible with the connections on both sides. Now we claim that the $G_K$-action on ${\rm H} ^i _{\cris} ( {\mathcal X}_n / A_{\cris, n})$ is given by the following formula: for any $\sigma \in G_K$ and
any $x \otimes a \in {\rm H} ^i _{\cris} (X_n /S_n)\otimes_S A_{\cris}\simeq {\rm H} ^i_{\cris} ({\mathcal X}_n / A_{\cris, n})$,
\begin{equation}\label{eqn-action-3}
\sigma(x \otimes a)= \sum_{i=0}^\infty \nabla^i (x) \otimes \gamma_i
\left ( \sigma ([\underline{\pi}])- [\underline{\pi}] \right ) \sigma (a) .
\end{equation}
To see this, for any $x\in \mathcal{M} ^i : = {\rm H} ^i _{\cris} (X_n /S_n)$, set
$$ x^\nabla : = \sum_{m = 0} ^\infty \nabla^m (x)\, \gamma_m ([\underline{\pi}] -u) \in {\rm H}^i _{\cris} ({\mathcal X}_n /S_n). $$
Then we immediately see that $x^\nabla \in {\rm H} ^ i _{\cris} ({\mathcal X}_n/ W_n (k)) = {\rm H} ^ i _{\cris} ({\mathcal X}_n/ S_n) ^{\nabla= 0}$. Now we claim that ${\rm H} ^i _{\cris} ({\mathcal X}_n/W_n (k))$ is generated by $\{x ^\nabla \mid x \in {\rm H} ^i _{\cris} (X_n /S_n)\}$ as an $A_{\cris}$-module. If so, then \eqref{eqn-action-3} follows from the fact that $\beta$ is $G_K$-equivariant and from the construction of $x^\nabla$ (note that both $x$ and $u$ are $G_K$-invariant).
To prove the claim, for any $y \in {\rm H} ^i _{\cris} ({\mathcal X}_n /W_n (k))$, suppose that $\beta (y) = \sum_j a_j \tilde\iota (x_j)$ with $a_j \in A_{\cris} $ and $x_j \in {\rm H}^i _{\cris} (X_n /S_n)$. Then we see that $y ^\nabla : = \sum_j a_j x_j ^\nabla \in {\rm H} ^i _{\cris} ({\mathcal X}_n /W_n (k))$. It suffices to check that $y = y ^\nabla$. Since $\beta$ is an isomorphism, it suffices to show that $\beta (y) = \beta (y ^\nabla)$. This follows from the fact that $\beta(x^\nabla) = \iota (x)$ for $x \in {\rm H} ^i _{\cris} (X_n /S_n)$, as $ \iota ([\underline{\pi}]- u)= [\underline{\pi}]- [\underline{\pi}]= 0$.
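The vanishing $\nabla(x^\nabla) = 0$ used above is a telescoping computation: writing $t := [\underline{\pi}] - u$, so that $\nabla(t) = -1$ and hence $\nabla(\gamma_m(t)) = -\gamma_{m-1}(t)$ for $m \geq 1$, the Leibniz rule gives
\[
\nabla(x^\nabla) = \sum_{m = 0}^\infty \nabla^{m+1}(x)\,\gamma_m(t) - \sum_{m = 1}^\infty \nabla^m(x)\,\gamma_{m-1}(t) = 0,
\]
as the two sums cancel term by term after reindexing.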
\section{Torsion Kisin modules, Breuil modules and associated Galois representations} In this section, we set up the theory of generalized torsion Kisin modules, which extends the theory of Kisin modules discussed, for example, in \cite[\S2]{liu-Fontaine}. The key point for generalized Kisin modules is that they may have $u$-torsion, and they reduce to classical torsion Kisin modules after quotienting by the $u$-torsion.
\subsection{(Generalized) Kisin modules}\label{subsec-generalized-Kisin} Let $(\mathfrak{S} , E(u))$ be the Breuil--Kisin prism over $\O_K$ with $d = E(u )= E$ the Eisenstein polynomial of fixed uniformizer $\pi\in \O_K$.
A $\varphi$-module $\mathfrak{M}$ over $\mathfrak{S}$ is an $\mathfrak{S}$-module $\mathfrak{M}$ together with a $\varphi_\mathfrak{S}$-semilinear map $\varphi_\mathfrak{M}: \mathfrak{M} \to \mathfrak{M}$. Write $ \varphi^*\mathfrak{M} = \mathfrak{S} \otimes_{\varphi, \mathfrak{S}} \mathfrak{M}$. Note that $1 \otimes \varphi_\mathfrak{M} : \varphi ^* \mathfrak{M} \to \mathfrak{M}$ is an $\mathfrak{S}$-linear map.
A \emph{(generalized) Kisin module $\mathfrak{M}$ of height $h$} is a $\varphi$-module $\mathfrak{M}$ of finite $\mathfrak{S}$-type such that there exists an $\mathfrak{S}$-linear map $\psi: \mathfrak{M} \to \varphi ^* \mathfrak{M}$ with $ \psi \circ (1 \otimes \varphi) = E^h \id_{\varphi^*\mathfrak{M}}$ and $(1\otimes \varphi) \circ \psi = E^h\id_{\mathfrak{M}}$.
Maps between generalized Kisin modules are $\mathfrak{S}$-linear maps compatible with $\varphi$ and $\psi$. We denote by $\Mod^{\varphi, h}_{\mathfrak{S}}$ the category of (generalized) Kisin modules of height $h$.
In \cite{liu-Fontaine}, a \emph{Kisin module $\mathfrak{M}$ of height $h$} is defined to be an \emph{\'etale} $\varphi$-module $\mathfrak{M}$ of finite $\mathfrak{S}$-type such that
$\coker(1 \otimes \varphi) $ is killed by $E^h$. Here \'etale $\varphi$-module means that the natural map $\mathfrak{M} \to \wh{\mathfrak{S} [\frac 1 u]} \otimes_{\mathfrak{S}} \mathfrak{M}$ is injective. Since $E(u)$ is a unit in $\wh{\mathfrak{S} [\frac 1 u]}$,
we easily see that the \'etale assumption implies that $(1\otimes \varphi) : \varphi^* \mathfrak{M} \to \mathfrak{M}$ is injective.
The existence and uniqueness of $\psi: \mathfrak{M} \to \varphi ^*\mathfrak{M}$, as required in the definition of (generalized) Kisin modules of height $h$, then follow.
That is, a Kisin module of height $h$ in the classical sense is a (generalized) Kisin module of height $h$. So in the following, we drop ``generalized" when we mention objects of $\Mod^{\varphi, h}_{\mathfrak{S}}$.
If we need to emphasize that $\mathfrak{M}$ is also a Kisin module of height $h$ in the classical sense, we will mention that it is \'etale.
\begin{lemma}\label{lem-etale}
\leavevmode
\begin{enumerate}
\item $\Mod^{\varphi, h}_\mathfrak{S}$ is an abelian category.
\item $\mathfrak{M}$ is \'etale if and only if $\mathfrak{M}$ has no $u$-torsion.
\item $\mathfrak{M} [\frac 1 p]$ is finite $\mathfrak{S} [\frac 1 p]$-free.
\end{enumerate}
\end{lemma}
\begin{proof} (1) is easy to check because $\varphi: \mathfrak{S} \to \mathfrak{S}$ is faithfully flat. (2) It is clear from the definition that if $\mathfrak{M} $ is \'etale then it has no $u$-torsion. Conversely,
let $\mathfrak{M}[p ^\infty]: = \{ x \in \mathfrak{M} \mid p ^n x = 0 \text{ for some } n > 0\}$ and $\mathfrak{M}' : = \mathfrak{M}/ \mathfrak{M}[p ^\infty]$. We get a short exact sequence
$0 \to \mathfrak{M}[p ^\infty ] \to \mathfrak{M} \to \mathfrak{M} ' \to 0$. It is clear that both $\mathfrak{M} [p ^\infty]$ and $\mathfrak{M}'$ are objects in $\Mod_\mathfrak{S}^{\varphi , h}$,
and if $\mathfrak{M}$ has no $u$-torsion then neither $\mathfrak{M}[p ^\infty]$ nor $\mathfrak{M}'$ has $u$-torsion. Since $\mathfrak{M}[p ^\infty]$ is killed by some power of $p$, we have $\mathfrak{M}[p ^\infty]\otimes_{\mathfrak{S}} \wh {\mathfrak{S} [\frac 1 u]}= \mathfrak{M}[p^\infty][\frac 1 u ]$. So $\mathfrak{M}[p ^\infty]$ has no $u$-torsion if and only if $\mathfrak{M} [p ^\infty]$ is \'etale. Now $\mathfrak{M}'$ has no $p$-torsion, and we claim that $\mathfrak{M}'[\frac 1 p]$ is finite $\mathfrak{S} [\frac 1 p]$-free, which will imply (3) and the \'etaleness of $\mathfrak{M}'$. By \cite[\S1.2.1]{Fontaine}, $\mathfrak{M}' [\frac 1 p]\simeq \bigoplus \mathfrak{S}[\frac 1 p]/ P_i^{a_i} $ with each $P_i \in W(k)[u]$ monic irreducible and $P_i \equiv u ^{b_i} \bmod p$, or $P_i = 0$. Without loss of generality, we may assume that $P_i \neq 0$ and show that such an $\mathfrak{M}'$ cannot exist when $\mathfrak{M}' \in \Mod_\mathfrak{S}^{\varphi, h}$. Consider the wedge product $\mathfrak{N} $ of $\mathfrak{M}'[\frac 1 p]$; then $\mathfrak{N}\simeq \mathfrak{S} [\frac 1 p]/ f$ with $f = \prod P_i ^{a_i}$, and write $\varphi ^* : = (1 \otimes \varphi)$. We also obtain $\varphi ^*: \varphi ^* \mathfrak{N} \to \mathfrak{N} $ and $\psi : \mathfrak{N} \to \varphi^* \mathfrak{N}$ so that $\psi \circ \varphi ^* = E(u)^h \id_{\varphi^* \mathfrak{N}}$ and $\varphi ^* \circ \psi = E(u)^h \id_{\mathfrak{N}}$ for some $h$. Since $\varphi^* \mathfrak{N} \simeq \mathfrak{S}[\frac 1 p ]/ \varphi (f)$, we can write the above maps explicitly as $$ \mathfrak{S}[\frac 1 p ]/ \varphi (f) \overset {\varphi^*} {\to} \mathfrak{S}[\frac 1 p ]/ f \overset {\psi}{\to} \mathfrak{S}[\frac 1 p ]/ \varphi (f).$$
Write $x = \varphi ^*(1)$ and $y = \psi(1)$. We have $\varphi (f) x = f z'$ and $f y = \varphi (f) w'$ for some $z', w' \in \mathfrak{S}[\frac 1 p]$. The conditions $\psi \circ \varphi ^* = E(u)^h \id_{\varphi^* \mathfrak{N}}$ and $\varphi ^* \circ \psi = E(u)^h \id_{\mathfrak{N}}$ imply that $\varphi (f) E(u)^h = f z$ and $f E(u)^h = \varphi(f) w$ with $z, w \in \mathfrak{S} [\frac 1 p]$. So $E(u)^{2h} = zw$. Since $E(u)$ is an Eisenstein polynomial, $z = z_0 E(u) ^l$ with $z_0$ a unit in $\mathfrak{S} [\frac 1 p]$. Then $\varphi (f) = z_0 f E(u) ^{l-h}$. We easily see that $z_0 \in\mathfrak{S}^\times $, as both $f$ and $E(u)$ are monic. Reducing both sides modulo $p$ then shows $l -h > 0$. Let $b_0= f(0)$ be the constant term of $f(u)$. Then $\varphi (f) (0) = \varphi (b_0) = z_0 (0)\, b_0\, E(0)^{l-h}$. Comparing $p$-adic valuations on both sides (note $v_p(\varphi(b_0)) = v_p(b_0)$ while $v_p(E(0)^{l-h}) = l - h > 0$), we see that $b_0 = 0$. Then we may write $f= u^m g$ with $g(0) \neq 0$. But then we have $u ^{pm-m} \varphi (g) = z_0 g E(u)^{l-h} $, which is impossible by comparing constant terms on both sides.
In summary, such an $\mathfrak{M}'$ cannot exist, and $\mathfrak{M} ' [\frac 1 p]$ is finite $\mathfrak{S}[\frac 1 p]$-free.
\end{proof}
Let $\mathfrak{M}$ be a Kisin module of height $h$ and set $\mathfrak{M} [u ^{\infty}]: =\{x \in \mathfrak{M} \mid u ^l x = 0 \text{ for some } l\}$. It is clear that both $(1 \otimes \varphi_\mathfrak{M})(\varphi ^*\mathfrak{M}[u ^\infty]) \subset \mathfrak{M}[u ^\infty]$ and $\psi(\mathfrak{M} [u ^\infty]) \subset \varphi ^*\mathfrak{M}[u ^\infty]$. The above lemma shows that $\mathfrak{M}[u ^\infty] \subset \mathfrak{M} [p ^\infty] $ and that $\mathfrak{M}/ \mathfrak{M}[u ^\infty]$ is \'etale.
\begin{lemma}\label{lem-easy} The following is a short exact sequence in $\Mod^{\varphi, h}_{\mathfrak{S}}$:
$$ 0 \to \mathfrak{M} [u ^\infty] \to \mathfrak{M} \to \mathfrak{M}/ \mathfrak{M}[u ^\infty] \to 0 $$
with $\mathfrak{M} / \mathfrak{M}[u ^\infty]$ being \'etale.
\end{lemma}
It turns out that \'etale Kisin modules enjoy many nice properties. Let $\Mod_{\mathfrak{S} , \tor}^{\varphi , h}$ denote the full subcategory of $\Mod_{\mathfrak{S} }^{\varphi , h}$ whose objects $\mathfrak{M}$ are torsion, i.e., killed by $p ^n$ for some $n$.
The following lemma is part of \cite[Proposition 2.3.2]{liu-Fontaine}.
\begin{lemma}\label{lem-recall}
The following statements are equivalent for a torsion Kisin module $\mathfrak{M}\in \Mod_{\mathfrak{S} , \tor}^{\varphi , h}$:
\begin{enumerate}
\item $\mathfrak{M}$ is \'etale.
\item $\mathfrak{M}$ can be written as a successive extension of modules $\mathfrak{M}_i$ such that $\mathfrak{M}_i \in \Mod_{\mathfrak{S} , \tor}^{\varphi , h}$ and $\mathfrak{M}_i$ is finite ${k[\![u]\!]}$-free.
\item $\mathfrak{M} = \mathfrak{N}/ \mathfrak{N}'$ where $\mathfrak{N}' \subset \mathfrak{N}$ are Kisin modules of height $h$ and $\mathfrak{N}'$ and $\mathfrak{N}$ are finite free $\mathfrak{S}$-modules.
\end{enumerate}
\end{lemma}
\begin{corollary}\label{cor-devissage} Given an \'etale Kisin module $\mathfrak{M}\in \Mod^{\varphi, h}_\mathfrak{S} $, there exists an \'etale Kisin module $\mathfrak{M} _n \in \Mod^{\varphi, h}_\mathfrak{S}$ killed by $p ^n $ satisfying $(\mathfrak{M} /p ^n \mathfrak{M}) [\frac 1 u] = \mathfrak{M}_n [\frac 1 u]$ and $\mathfrak{M} = \varprojlim_n \mathfrak{M}_n$.
\end{corollary}
\begin{proof}Let $M = \mathfrak{M}\otimes_{\mathfrak{S}} \wh {\mathfrak{S}[\frac 1 u]} $. Consider the exact sequence
$0 \to p ^n M \to M \overset q \to M / p ^n M\to 0 $. Since $\mathfrak{M}$ is \'etale, the natural map $\mathfrak{M}\to M$ is injective. Set $\mathfrak{M}_n = q(\mathfrak{M})\subset M / p ^n M$. It is easy to check that $\mathfrak{M}_n [\frac 1 u] = (\mathfrak{M} / p ^n \mathfrak{M}) [\frac 1 u] = M / p ^n M$, that $\mathfrak{M}_n$ has no $u$-torsion, and that $\mathfrak{M} = \varprojlim_n \mathfrak{M}_n$ (since $\mathfrak{M}$ is $p$-adically closed in $M$). It remains to check that $\mathfrak{M}_n$ has height $h$; this was proved in \cite[Proposition B 1.3.5]{Fontaine}.
\end{proof}
In general, the category of \'etale Kisin modules is not abelian, but it is abelian under suitable restrictions. Given $\mathfrak{M} \in \Mod_{\mathfrak{S} , \tor}^{\varphi , h}$, let $M = \mathfrak{M}[u ^\infty, p]: = \{ x\in \mathfrak{M}[u ^\infty] \mid p x = 0\}$.
\begin{lemma}\label{lem-control-torsion} If $e h < p-1$ then $M = 0$, and if $eh < 2( p -1) $ then $M$ is either $0$ or a direct sum of copies of $k$. \end{lemma}
\begin{proof}
By the discussion above, $\psi$ induces a map $\psi : M \to \varphi ^*M $ satisfying $\psi \circ (1 \otimes \varphi) = E(u) ^h\id_{\varphi ^*M } $; since $M$ is killed by $p$ and $E(u) \equiv u^e \bmod p$, this means $\psi \circ (1 \otimes \varphi) = u ^{eh}\id_{\varphi^*M}$. We can write $M = \bigoplus_{j=1}^m k [\![u]\!]/ u ^{a_j}$ with $a_j \geq 1$, and then $\varphi ^*M\simeq \bigoplus_{j=1}^m k [\![u]\!]/ u ^{pa_j}$. Set $a = \max_j \{ a_j\}$ and let $x \in \varphi ^*M$ be such that $u^{pa} x = 0 $ but $u ^{pa -1} x \not = 0$. We conclude that $u^{eh} x \in \psi (M)$. Since $u^a M = \{0\}$ and $\psi$ is ${k[\![u]\!]}$-linear, we get $u ^{a+ eh} x =0 $. This forces $a+ eh \geq pa$, that is, $a \leq \frac{eh}{p-1}$. Hence such an $a$ cannot exist if $eh <p -1$, and if $eh< 2(p-1) $ then $a \leq 1$. This proves the lemma.
\end{proof}
\begin{proposition}\label{prop-abelian} If $eh< p -1$ then $\Mod_\mathfrak{S}^{\varphi, h}$ is an abelian category.
\end{proposition}
\begin{proof} By Lemma \ref{lem-control-torsion}, $\mathfrak{M}[u ^\infty, p] = 0$ and hence $\mathfrak{M}[u ^\infty] = 0$ for every object $\mathfrak{M}$; that is, every object is \'etale.
\end{proof}
\begin{example} Let $E(u)= u -p$ (so $e = 1$), $\mathfrak{M}= k \simeq {k[\![u]\!]}/ u$ and $\varphi (1)= 1$. Let $\psi : {k[\![u]\!]}/ u \to {k[\![u]\!]}/ u ^p$ be given by $\psi(1) = u ^{p-1}$.
Then $\mathfrak{M} \in \Mod_{\mathfrak{S}, \tor}^{\varphi, p -1}$.
\end{example}
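One can check directly (we spell this out for clarity) that the height condition holds here with $h = p-1$: since $E(u) = u - p \equiv u \bmod p$, on the generator $1 \otimes 1$ of $\varphi^* \mathfrak{M} \simeq {k[\![u]\!]}/u^p$ we have
\[
\psi\circ(1\otimes \varphi)(1 \otimes 1) = \psi(1) = u^{p-1} = E(u)^{p-1} \cdot (1 \otimes 1) \quad \text{in } {k[\![u]\!]}/u^p .
\]
Here $eh = p-1$ and $\mathfrak{M}[u^\infty, p] = \mathfrak{M} = k \neq 0$, so this example shows that the bound $eh < p-1$ in Lemma \ref{lem-control-torsion} is sharp.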
Let $\mathfrak{M}\in \Mod_{\mathfrak{S}} ^{\varphi, h}$.
Define \emph{Breuil--Kisin filtration} on $\varphi ^* \mathfrak{M} $ by
\[
\Fil_{\BK} ^h\varphi^* \mathfrak{M} \coloneqq \mathrm{Im}(\psi \colon \mathfrak{M} \to \varphi^*\mathfrak{M}).
\]
If $\mathfrak{M}$ is \'etale, then $\psi$ is injective as explained above, and we have an identification
\begin{equation}\label{eqn-BK-filtration}
\Fil_{\BK} ^h\varphi^* \mathfrak{M} \cong \{ x \in \varphi ^* \mathfrak{M} | (1\otimes \varphi) (x) \in E(u)^h \mathfrak{M}\}
\end{equation}
of submodules in $\varphi^* \mathfrak{M}$.
Since this is the only filtration considered for Kisin modules in this section, we drop $\BK$ from the notation for the rest of this section. Finally, there is a $\varphi_\mathfrak{S}$-semi-linear map
$\varphi : = \varphi \otimes \varphi: \varphi^*\mathfrak{M} \to \varphi ^* \mathfrak{M}$.
It is clear that $\varphi (\Fil^ i \varphi^* \mathfrak{M}) \subset \varphi (E(u)) ^i \varphi^* \mathfrak{M}$.
If $\mathfrak{M}$ is \'{e}tale, then we define
$\varphi _i : \Fil ^ i \varphi ^* \mathfrak{M} \to \varphi ^* \mathfrak{M} $ via $$\varphi _i(x) : = \frac {\varphi (x)}{\varphi (a_0^{-1}E(u)^i)},$$ where $a_0 p = E(0)$.
\begin{lemma}\label{lem-filh-exact} Suppose that $0 \to \mathfrak{M}' \to \mathfrak{M} \to \mathfrak{M} '' \to 0$ is an exact sequence inside $\Mod_\mathfrak{S} ^{\varphi, h}$ and all modules are \'etale. Then the following sequence is exact:
$$ 0 \to \Fil ^h \varphi ^*\mathfrak{M}' \to \Fil ^h \varphi ^*\mathfrak{M} \to\Fil ^h \varphi ^* \mathfrak{M} '' \to 0$$
\end{lemma}
\begin{proof}
This follows easily from the fact that $1 \otimes \varphi : \Fil^h \varphi^* \mathfrak{M} \to E(u)^h \mathfrak{M}$ is bijective.
\end{proof}
\begin{remark} The analogue of the above lemma fails in general for $\Fil^i$ with $i < h$, or if the modules are not \'etale.
\end{remark}
\subsection{Galois representation attached to \'etale Kisin modules}
\label{subsec-Galos rep and Kisin}
Recall that we fix $\pi_n \in \overline K$ with $\pi_0= \pi$ and $\pi_{n+1}^p = \pi_n$, so that $\underline \pi : = (\pi _n) \in \O_{\mathbf{C}} ^\flat$; set $K _\infty : = \bigcup_{n \geq 0} K (\pi _n)$ and $G_\infty : = \Gal (\overline{K}/ K_\infty)$. We embed $\mathfrak{S} \to A_{\inf}$ via $u \mapsto [\underline \pi]$. This embedding is compatible with $\varphi$, but not with the $G_K$-action. We have $\mathfrak{S} \subset A_{\inf}^{G_\infty}$.
For a Kisin module $\mathfrak{M} \in \Mod_\mathfrak{S} ^{\varphi, h}$, we can associate a representation of $G_\infty$ via
$$T_\mathfrak{S}(\mathfrak{M}): = \left(\mathfrak{M} \otimes_\mathfrak{S} W(\mathbf{C}^\flat)\right)^{\varphi =1}=\left(\mathfrak{M}/\mathfrak{M} [u ^\infty] \otimes_\mathfrak{S} W(\mathbf{C}^\flat)\right)^{\varphi =1} . $$
So the Galois representation attached to $\mathfrak{M}$ is insensitive to the $u$-torsion part because $\frac 1 u \in W(\mathbf{C} ^\flat)$. It is well-known that $T_\mathfrak{S}$ is exact and that there exists a $W(\mathbf{C}^\flat)$-linear isomorphism
$$\mathfrak{M} \otimes_\mathfrak{S} W(\mathbf{C}^\flat) \simeq T_\mathfrak{S} (\mathfrak{M})\otimes_{\ensuremath{\mathbb{Z}}_p} W(\mathbf{C}^\flat), $$
which is compatible with $\varphi$ and $G_\infty$-actions.
For many purposes, we define another variant $T^h_\mathfrak{S}$ of $T_\mathfrak{S}$: For an \'etale $\mathfrak{M}\in \Mod_{\mathfrak{S}}^{\varphi, h}$, we can naturally extend $\varphi _h : \Fil ^h \varphi ^* \mathfrak{M} \to \varphi ^* \mathfrak{M}$ to $\varphi _h : \Fil ^h \varphi ^* \mathfrak{M} \otimes_ \mathfrak{S} A_{\inf} \to \varphi ^* \mathfrak{M} \otimes_\mathfrak{S} A_{\inf}$.
$$T^h_\mathfrak{S}(\mathfrak{M}): = \left(\Fil^h \varphi ^* \mathfrak{M} \otimes_\mathfrak{S} A_{\inf}\right)^{\varphi_h =1} = \{ x\in \Fil^h \varphi^* \mathfrak{M}\otimes_\mathfrak{S} A_{\inf} \mid \varphi(x) = \varphi (a_0^{-1}E(u)^h)x\}.$$
\begin{lemma}\label{lem-twistm} Assume that $\mathfrak{M} \in \Mod_{\mathfrak{S}} ^{\varphi, h}$ is \'etale. Then
\begin{enumerate}
\item $T^h _\mathfrak{S} (\mathfrak{M}) \simeq T_\mathfrak{S}(\mathfrak{M}) (h )$.
\item The following sequence is short exact
$$\xymatrix{0 \ar[r]& T^h_\mathfrak{S} (\mathfrak{M}) \ar[r]& \Fil^h \varphi ^* \mathfrak{M} \otimes_\mathfrak{S} A_{\inf}\ar[r]^- {\varphi_h -1} & \varphi ^* \mathfrak{M} \otimes _\mathfrak{S} A_{\inf}\ar[r] & 0}$$
\end{enumerate}
\end{lemma}
\begin{proof}First it is clear that $T_\mathfrak{S} (\mathfrak{M}) = (\varphi ^* \mathfrak{M} \otimes_{\mathfrak{S}} W(\mathbf{C}^\flat)) ^{\varphi =1}$ because $\varphi $ on $W(\mathbf{C}^\flat)$ is bijective.
Recall that $pa_0= E(0)$. Let $\underline{\varepsilon} = (\zeta_{p ^n})_{n \geq 0}\in \O_\mathbf{C} ^\flat$ with $\zeta_{p ^n}$ satisfying $\zeta_1 =1$, $\zeta_{p^n}^p = \zeta_{p ^{n -1}}$ and $\zeta_p \not = 1$.
By Example 3.2.3 in \cite{liu-notelattice}, there exists a nonzero ${\mathfrak t} \in A_{\inf} $ so that ${\mathfrak t} \not \equiv 0 \bmod p$, $\varphi({\mathfrak t}) = a_0^{-1} E(u) {\mathfrak t}$ and $t \coloneqq \log[\underline \varepsilon]= \mathfrak c \varphi({\mathfrak t})$ with $\mathfrak c = \prod\limits_{n = 1}^\infty \varphi^n(\frac{a_0^{-1}E(u)}{p})\in A_{\cris}^*$. Write $\beta = \varphi ({\mathfrak t})$. Consider the map $\iota : T^h _\mathfrak{S} (\mathfrak{M}) \to T_\mathfrak{S}( \mathfrak{M})$ given by $x \mapsto \frac{x}{\beta^h }$ for $x \in \Fil ^h \varphi ^* \mathfrak{M} \otimes_\mathfrak{S} A_{\inf}$. Since $\varphi (\beta)= \varphi (a_0^{-1}E(u)) \beta$, and $\beta \in W(\mathbf{C}^\flat)$ is invertible as ${\mathfrak t} \not \equiv 0 \bmod p$, the map $\iota$ makes sense. Note that $\mathfrak c \in (A_{\cris}) ^{G_\infty} $, so $g (\beta)/ \beta = g (t)/ t$ is the cyclotomic character for any $g \in G_\infty$. Hence $\iota: T^h _\mathfrak{S} (\mathfrak{M}) \to T _\mathfrak{S} (\mathfrak{M}) (h)$ is compatible with the $G_\infty$-actions. We claim that $T^h _\mathfrak{S}$ is an exact functor. If so, since $T_\mathfrak{S}$ is also exact, to show that $\iota$ is an isomorphism we may reduce to the case that $\mathfrak{M}$ is killed by $p$ by Corollary \ref{cor-devissage}. In this case, $\mathfrak{M}$ is finite ${k[\![u]\!]}$-free. Pick a basis $e_1 , \dots, e_d$ of $\mathfrak{M}$; then $\varphi_\mathfrak{M} (e_1, \dots , e_d)= (e_1, \dots , e_d)A$ for a ${k[\![u]\!]}$-matrix $A$ such that there exists a ${k[\![u]\!]}$-matrix $B$ satisfying $AB = BA = (a_0^{-1}E(u)) ^hI_d$. Let us still regard the $e_i$ as a basis of $\varphi ^*\mathfrak{M} $. Then it is easy to check that $(e_1, \dots , e_d) B$ is a basis of $\Fil ^h \varphi ^* \mathfrak{M}$.
Now for any $x = \sum _i e_i \otimes a_i \in \varphi ^*\mathfrak{M} \otimes _ {k[\![u]\!]} \mathbf{C}^\flat$, the equation $\varphi (x) = x$ is equivalent to $\varphi (X) = \varphi(A)^{-1} X$ where $X = (a_1, \dots , a_d)^T$.
The latter gives $\varphi (\beta^h X ) = \varphi (a_0^{-h}E(u)^h A^{-1}) (\beta^h X)= \varphi (B) (\beta^h X)$,
which implies that $Y = \beta^h X $ lies in
$(\O _{\mathbf{C}}^ \flat)^d$; that is, $y = \beta^h x \in \varphi ^*\mathfrak{M} \otimes_{{k[\![u]\!]}} \O_{\mathbf{C}}^\flat$. Furthermore, consider $Z= B^{-1} \beta^h X$: since
$\varphi (Z)= \varphi( a_0^{-h}B^{-1} A^{-1} E(u)^h ) B Z = B Z$, we conclude that $Z$ has all entries in $\O_{\mathbf{C}}^\flat$. Then $\beta^h x = (e_1, \dots , e_d) B Z$ lies inside $\Fil ^h \varphi ^* \mathfrak{M}\otimes_{{k[\![u]\!]}}\O_{\mathbf{C}}^\flat$. This proves that $\iota$ is surjective. Since $\iota$ is clearly injective, we conclude that $\iota$ is an isomorphism.
Now we prove the claim that $T^h_\mathfrak{S}$ is exact. For this, it suffices to show that $\varphi _h -1$ is surjective, and we once again reduce to the case that $\mathfrak{M}$ is killed by $p$. Writing a ${k[\![u]\!]}$-basis of $\mathfrak{M}$ as above, we need to solve the equation $\varphi (X) -BX = Y$ for any $Y = (a_1, \dots , a_d)^T $ with $a_i \in \O_{\mathbf{C}}^\flat$. Since $\mathbf{C}^\flat$ is algebraically closed, a solution $X$ exists with entries in $\mathbf{C}^{\flat}$. It is easy to compare the valuations of the entries via the equation $\varphi (X) = BX + Y$ to show that all entries of $X$ must lie in $\O_{\mathbf{C}}^\flat$.
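Let us spell out this valuation comparison for clarity. Let $v$ be a valuation on $\mathbf{C}^\flat$ with $v \geq 0$ precisely on $\O_{\mathbf{C}}^\flat$. If $c := \min_j v(x_j) < 0$, pick $j$ with $v(x_j) = c$; comparing the $j$-th entries of $\varphi(X) = BX + Y$ gives
\[
p\, c = v(\varphi(x_j)) = v\bigl( (BX + Y)_j \bigr) \geq \min(c, 0) = c,
\]
since the entries of $B$ and $Y$ have valuation $\geq 0$. This is impossible for $c < 0$, so all entries of $X$ lie in $\O_{\mathbf{C}}^\flat$.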
\end{proof}
\subsection{Torsion Breuil modules} \label{subsec-Breuilmodules} We fix $0 \leq h \leq p-2$ for this subsection. Recall that $S= \mathcal A$ is the $p$-adically completed PD-envelope
of $\theta: \mathfrak{S} \twoheadrightarrow \O_K, u\mapsto \pi$, and for $i\ge 1$
write $\Fil^i S\subseteq S$ for the (closure of the) ideal generated by $\{ \gamma_n (E) = E^n/n!\}_{n\ge i}$.
For $i \le p-1$, one has $\varphi(\Fil^i S) \subseteq p^i S$,
so we may define $\varphi_i:\Fil^i S\rightarrow S$ as $\varphi_i \coloneqq p^{-i}\varphi$. We have $c_1 : = \varphi(E(u))/p \in S^\times$.
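To justify the asserted divisibility (a standard computation which we make explicit), note that $\varphi(E) = p c_1$, so on the generators of $\Fil^i S$,
\[
\varphi(\gamma_n(E)) = \frac{\varphi(E)^n}{n!} = \frac{p^n}{n!}\, c_1^n, \qquad
v_p\Bigl(\frac{p^n}{n!}\Bigr) = n - v_p(n!) = \frac{n(p-2) + s_p(n)}{p-1}
\]
by Legendre's formula, where $s_p(n)$ denotes the sum of the $p$-adic digits of $n$. For $i \leq n \leq p-1$ we have $v_p(n!) = 0$, so this valuation is $n \geq i$; for $n \geq p$ it is at least $\frac{p(p-2)+1}{p-1} = p-1 \geq i$. Hence $\varphi(\Fil^i S) \subseteq p^i S$ for $i \leq p-1$.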
Let $'\textnormal{Mod}^{\varphi, h }_{S}$ denote the category
whose objects are triples $(\mathcal M, \Fil^h \mathcal{M}, \varphi_h)$, consisting of
\begin{enumerate}
\item an $S$-module $\mathcal{M}$;
\item an $S$-submodule $\Fil^h \mathcal{M} \subset \mathcal{M} $ containing $\Fil ^h S \cdot \mathcal{M} $;
\item a $\varphi$-semi-linear map $\varphi_h : \Fil ^h \mathcal{M} \to \mathcal{M} $ such that for all $s \in \Fil^h S$ and $x\in \mathcal{M}$
we have $$\varphi_h (sx)= (c_1)^{-h }\varphi_h (s) \varphi_h(E(u)^hx);$$
\item $\varphi_h (\Fil ^h \mathcal{M})$ generates $\mathcal{M}$ as an $S$-module.
\end{enumerate}
Morphisms are given by $S$-linear maps preserving $\Fil^h $'s and commuting with
$\varphi_h$. A sequence is defined to be \emph{short exact}
if it is short exact as a sequence of $S$-modules and induces a
short exact sequence on $\Fil^h$'s. Let $\textnormal{Mod}^{\varphi, h }_{S, \tor}$ denote the full subcategory of $'\textnormal{Mod}^{\varphi, h }_{S}$ consisting of those $\mathcal{M}$ that are killed by a power of $p$ and can be written as a successive extension of modules $\mathcal{M}_i$ in $'\textnormal{Mod}^{\varphi, h }_{S}$ with each $\mathcal{M}_i \simeq \bigoplus S_1 $, where $S_n : = S/ p ^n S$.
For each object $\mathcal{M} \in \textnormal{Mod}^{\varphi, h }_{S, \tor}$, we can extend $\varphi_h$ and $\Fil^h$ to $A_{\cris} \otimes_ S \mathcal{M}$ as follows:
Since $A_{\cris}/ p ^nA_{\cris}$ is faithfully flat over $S/ p ^n$ by \cite[Lem 5.6]{Cais-Liu-BK-crys},
the map $A_{\cris} \otimes_S \Fil^h \mathcal{M} \to A_{\cris} \otimes_S \mathcal{M} $ is injective, so we can define $\Fil^h (A_{\cris} \otimes _S \mathcal{M}): = A_{\cris} \otimes _S \Fil ^h \mathcal{M}$, and then $\varphi_h $ extends to $A_{\cris} \otimes_ S \mathcal{M}$.
This allows us to define a representation of $G_\infty$ via
$$ T_S (\mathcal{M} ) \coloneqq (\Fil ^h (A_{\cris} \otimes _S \mathcal{M}))^{\varphi_h =1}. $$
Now let us recall the relation between classical torsion Kisin modules and objects in $\Mod_{S, \tor}^{\varphi, h}$, and their relationship to torsion Galois representations. Let $\Mod_{\mathfrak{S}, \tor\et}^{\varphi , h}$ denote the category of \'etale torsion Kisin modules of height $h$. In this subsection, all torsion Kisin modules are \'etale torsion Kisin modules, i.e., $\mathfrak{M}$ is $u$-torsion free. For each such $\mathfrak{M}$, we construct an object $\mathcal{M} \in \Mod^{\varphi, h}_{S, \tor}$ as follows: $\mathcal{M}: = S \otimes_{\varphi, \mathfrak{S}} \mathfrak{M}$ and
$$\Fil ^h \mathcal{M}: = \{x \in \mathcal{M} | (1\otimes \varphi) (x) \in \Fil ^h S \otimes_{\mathfrak{S}}\mathfrak{M}\}; $$
and $\varphi_h : \Fil ^h \mathcal{M} \to \mathcal{M}$ is defined as the composite of the following maps
\begin{equation*}
\xymatrix{
{\Fil ^h \mathcal{M}} \ar[r]^-{1\otimes \varphi_{\mathfrak{M}}} & {\Fil ^h S \otimes_{\mathfrak{S}}\mathfrak{M} }
\ar[r]^-{\varphi_h \otimes 1} & S\otimes_{\varphi,\mathfrak{S}} \mathfrak{M} = \mathcal{M}
} .
\end{equation*}
We write $\u \mathcal{M}(\mathfrak{M})$ for the object $\mathcal{M} \in \Mod^{\varphi , h}_{S, \tor}$ built from a Kisin module $\mathfrak{M}\in \Mod_{\mathfrak{S}, \tor\et}^{\varphi , h}$ as above. Note that $A_{\cris} \otimes_S \u \mathcal{M}(\mathfrak{M}) = A_{\cris} \otimes_{\varphi, \mathfrak{S}}\mathfrak{M}$.
\begin{proposition}\label{prop-Galois-compatible} The above functor induces an exact equivalence between $\Mod^{\varphi, h}_{\mathfrak{S}, \tor\et }$ and $\textnormal{Mod}^{\varphi, h}_{S, \tor}$. Furthermore, there exists a short exact sequence
\begin{equation}\label{eqn-exact_S}
\xymatrix{ 0 \ar[r] & T_S (\mathcal{M}) \ar[r] & A_{\cris} \otimes_S \Fil^h \mathcal{M} \ar[r]^-{\varphi_h-1} & A_{\cris}\otimes_S \mathcal{M}\ar[r]& 0 }
\end{equation}
and an isomorphism of $G_\infty $-representations
$$T_S (\u \mathcal{M} (\mathfrak{M})) \simeq T_\mathfrak{S}(\mathfrak{M})(h).$$
\end{proposition}
\begin{proof} The equivalence, together with exactness, is \cite[Thm 2.2.1]{CarusoLiu}, which builds on results of Breuil and Kisin (see \cite[Proposition 3.3.1]{LiuT-CofBreuil}). Consider an exact sequence in $\textnormal{Mod}^{\varphi, h}_{S, \tor}$,
$$ 0 \to \mathcal{M}'' \to \mathcal{M} \to \mathcal{M} ' \to 0. $$
Then we have the following diagram
$$ \xymatrix{ 0 \ar[r] & T_S (\mathcal{M}'' ) \ar[r] \ar[d] & T_S (\mathcal{M}) \ar[r]\ar[d] & T_S (\mathcal{M}')\ar[d]\ar[r] & 0 \\0 \ar[r] & A_{\cris} \otimes_S \Fil ^h \mathcal{M} '' \ar[r]\ar[d]^{\varphi_h -1} & A_{\cris} \otimes_S \Fil ^h \mathcal{M} \ar[r]\ar[d]^{\varphi_h -1} & A_{\cris} \otimes_S \Fil ^h \mathcal{M} '\ar[r]\ar[d]^{\varphi_h -1} & 0 \\ 0 \ar[r] & A_{\cris} \otimes_S \mathcal{M} '' \ar[r] & A_{\cris} \otimes_S \mathcal{M} \ar[r] & A_{\cris} \otimes_S \mathcal{M} '\ar[r] & 0 \\}$$
By the definition of exactness in $\textnormal{Mod}^{\varphi, h}_{S, \tor}$ and since $A_{\cris} / p ^n$ is flat over $S/ p ^n$,
we see that the last two rows of the above diagram are exact.
So to show that $\varphi_h -1$ is surjective on $A_{\cris}\otimes_S \mathcal{M}$, we may reduce to the situation where $\mathcal{M}$ is killed by $p$.
Also, the surjectivity of $\varphi_h -1$ implies, via the above diagram, that the functor $T_S$ is exact.
So let us first accept that $\varphi_h-1$ is surjective and postpone the proof to the end.
Now let us construct a natural map $\iota : T^h _\mathfrak{S} (\mathfrak{M}) \to T_ S (\u \mathcal{M} (\mathfrak{M}))$. Write $\mathcal{M} : = \u\mathcal{M}(\mathfrak{M})$.
It is clear that $\Fil ^h \varphi ^* \mathfrak{M} \subset \Fil ^h \u \mathcal{M} (\mathfrak{M}) $ compatibly with the injection $\varphi ^*\mathfrak{M} \hookrightarrow \mathcal{M}$. But the map $\varphi_h$ defined on Kisin modules is slightly different from that on Breuil modules. By chasing definitions, we see that for any $x \in \Fil^h \varphi^* \mathfrak{M} $, $\varphi_{h , \mathcal{M}} (x) =(a_0^{-1} c _1)^h \varphi _{h , \varphi^*\mathfrak{M}}(x)$. Recall $\mathfrak c = \prod\limits_{n = 1}^\infty \varphi^n(\frac{a_0^{-1}E(u)}{p})\in A_{\cris}^*$ from the proof of Lemma \ref{lem-twistm}.
Since $\varphi (\mathfrak c)\, a_0 ^{-1}c_1 = \mathfrak c$, the map $ \iota: A_{\inf} \otimes _\mathfrak{S} \Fil^h \varphi^* \mathfrak{M} \to A_{\cris} \otimes_S \mathcal{M}$, $\iota(x)= \mathfrak c^h x$, induces a map $\iota: T^h _\mathfrak{S} (\mathfrak{M}) \to T_S (\mathcal{M})$.
To show that $\iota$ is an isomorphism, since $T_\mathfrak{S}$, $T_S$ and $\u\mathcal{M}$ are all exact,
we may reduce to the case that $\mathfrak{M}$ is killed by $p$, in which case $\mathfrak{M}$ is finite ${k[\![u]\!]}$-free. By the same argument as in Lemma \ref{lem-twistm}, there exists a basis $e_1 , \dots , e_d$ of $\varphi ^*\mathfrak{M}$ so that $\Fil ^h\varphi ^* \mathfrak{M}$ has basis $(e_1, \dots, e_d) B$, $\varphi (e_1, \dots , e_d)= (e_1, \dots, e_d)\varphi (A)$ and $AB = B A = (a_0^{-1} E(u))^h I_d$. So any $x\in T^h _\mathfrak{S}(\mathfrak{M})$ corresponds to a solution of $\varphi (X)= B X$. Since $\mathcal{M} = \u \mathcal{M} (\mathfrak{M})$, it is straightforward to compute that $\mathcal{M}$ also has $S_1$-basis $e_1, \dots , e_d$, and that $\Fil ^h \mathcal{M}$ is generated by $(e_1, \dots , e_d) B$ and $\Fil^p S_1 \cdot \mathcal{M} $. Note that $a_0^{-1}c_1 \equiv 1 \mod (p, \Fil^p S) $. So $T_S (\mathcal{M})$ corresponds to solutions of
$\varphi (X) = B X \mod \Fil ^p A_{\cris, 1}$, where $A_{\cris ,1 }= A_{\cris}/ p A_{\cris}$. Now it suffices to show that the following map is bijective:
$$\{ X \mid \varphi (X) = B X, \ x_i \in \O_{\mathbf{C}}^\flat \} \longrightarrow \{X \mid \varphi (X) = B X \mod \Fil ^p A_{\cris, 1},\ x_i\in A_{\cris, 1 }\}.$$
Let $v$ denote the valuation on $\O^\flat_{\mathbf{C}}$ normalized by $v(u ^e)=1$. Suppose that $X$ is in the kernel; then $X \in E(u)^p (\O^\flat_{\mathbf{C}})^d$, so $v(x_j)\geq p$ for all $j$.
Let $x_i$ be the entry with least valuation. Note that $v(\varphi (x_j)) = p v(x_j)$ for any $j$ and $A \varphi (X) = u^{eh} X$. The least possible valuation of the left side is $p v(x_i)$, while that of the right side is
$h + v (x_i)$. This is impossible when $v (x_i)\geq p $ because $h \leq p-2$. So $X= 0$.
Indeed, if $v(x_i)\geq 2$ then the same argument shows that $X= 0$. That is, if $X_1, X_2$ are two solutions on the left side and $X_1 \equiv X_2 \mod E(u)^2$, then $X_1 = X_2$.
Conversely, let $Z$ be a vector with entries in $A_{\cris, 1 }$ so that $\varphi (Z) = B Z \mod \Fil ^p A_{\cris ,1}$. Then there exists $Z_0$ with entries in $\O ^{\flat}_{\mathbf{C}}$ so that
$\varphi (Z_0) = B Z_0 + E(u)^pC$, where $C$ is a vector with entries in $\O^\flat_{\mathbf{C}}$. Note that $E(u)^p = E(u)^{p-h} BA$. So we may write
$\varphi(Z_0) = B (Z_0+ E(u)^{p-h}AC)$. Let $Z_1= Z_0 + E(u)^{p-h} AC$. Then $\varphi (Z_1) = BZ_1 + u ^{pe(p-h)}C_1$ with $C_1 = - \varphi(AC) $. Note that $pe (p-h)> pe> h e$, so we can write
$BZ_1 + u ^{pe(p-h)}C_1= B (Z_1 + u^{\alpha}AC_1)$ with $\alpha = pe(p-h)-eh$. Set $Z_2= Z_1 + u ^\alpha AC_1$; then $\varphi(Z_2) = BZ_2 + u ^{p\alpha} C_2$. Continuing in this way, we see that the $Z_n$ converge in $\O^\flat_{\mathbf{C}}$ to some $Z'$ with $\varphi(Z') = B Z'$ and $Z' \equiv Z_0 \mod E(u)^{p-h} $. This settles the bijectivity of the above map and shows that $\iota$ is an isomorphism.
It remains to show that $\varphi_h -1: \Fil^h \mathcal{M} \otimes_S A_{\cris} \to \mathcal{M} \otimes_S A_{\cris}$ is surjective, and we may assume that $\mathcal{M} = \u\mathcal{M} (\mathfrak{M})$ with $\mathfrak{M}$ killed by $p$. Note that $\mathcal{M} \otimes_S A_{\cris} = \varphi^* \mathfrak{M} \otimes_{{k[\![u]\!]}} {\mathcal O}^\flat_{\mathbf{C}} + \varphi^* \mathfrak{M} \otimes_{{k[\![u]\!]}} \Fil ^pA_{\cris, 1}.$ By Lemma \ref{lem-twistm} (2), it suffices to show that for $y = m \otimes a$ with $m \in \varphi^* \mathfrak{M}$ and $a\in \Fil ^p A_{\cris, 1}$ there exists $x \in \Fil^h \mathcal{M} \otimes_S A_{\cris} $ so that $\varphi_h (x) - x = y$. Since $\varphi_h (a') = 0 $ for any
$a' \in \Fil ^p A_{\cris, 1}$, we may take $x = -y$.
\end{proof}
\begin{remark}\label{rem-compatible} Combining the isomorphisms $T_\mathfrak{S} (\mathfrak{M})(h) \to T ^h_\mathfrak{S} (\mathfrak{M}) \to T_S (\u \mathcal{M} (\mathfrak{M}))$, $x \mapsto \beta^h x \mapsto (\beta \mathfrak c)^h x= t ^h x $, we obtain an isomorphism $\eta: T_\mathfrak{S}(\mathfrak{M}) (h) \simeq T_S (\u \mathcal{M}(\mathfrak{M}))$.
This isomorphism is natural in the following sense: Suppose that $\mathfrak{M} \otimes_\mathfrak{S} A_{\inf}$ has a $G_K$-action which is semi-linear over the $G_K$-action on $A_{\inf}$ and commutes with $\varphi_{\mathfrak{M}}$. Then this $G_K$-action induces a $G_K$-action on $\u\mathcal{M}(\mathfrak{M}) \otimes_S A_{\cris}$ compatible with $\Fil ^ h$ and $\varphi$. Then both $T_\mathfrak{S}(\mathfrak{M})(h)$ and $T_S(\mathcal{M})$ carry $G_K$-actions, and $\eta$ is a $G_K$-equivariant isomorphism.
\end{remark}
Regard both $S$ and $\mathfrak{S}$ as subrings of $K_0 [\![u]\!]$. Define $I^+ S = S \cap u K_0 [\![u]\!]$ and $I^+ = u \mathfrak{S}$. Clearly we have a natural map $q: \mathfrak{M}/ I ^+ \to \underline{\mathcal{M}}(\mathfrak{M}) / I ^+ S$. By d\'evissage to the situation that $\mathfrak{M}$ is killed by $p$, we obtain
\begin{corollary}\label{cor-length-compare}
Let $\mathfrak{M}\in \Mod_{\mathfrak{S}, \tor\et}^{\varphi , h}$. Then we have
\[
\textnormal{length}_{W(k)} (\underline{\mathcal{M}} (\mathfrak{M}) / I ^+ S) = \textnormal{length}_{W(k)} (\mathfrak{M} / u \mathfrak{M}) = \textnormal{length}_{\ensuremath{\mathbb{Z}}} (T_S (\underline{\mathcal{M}} (\mathfrak{M})))= \textnormal{length}_{\ensuremath{\mathbb{Z}}} (T_\mathfrak{S}(\mathfrak{M})).
\]
\end{corollary}
Now let us add one extra structure to $\Mod_{S, \tor}^{\varphi, h }$ to make $T_S (\mathcal{M})$ a $G_K$-representation.
Let $\Mod_{S, \tor}^{\varphi, h, \nabla}$ denote the category of objects $(\mathcal{M}, \Fil ^h \mathcal{M}, \varphi _h, \nabla) $ where
\begin{enumerate}
\item $(\mathcal{M}, \Fil ^h \mathcal{M}, \varphi _h)$ is an object in $\Mod_{S, \tor}^{\varphi, h }$
\item $\nabla: \mathcal{M} \to \mathcal{M}$ is a connection satisfying the following:
\begin{enumerate}
\item $E(u) \nabla(\Fil^h \mathcal{M} )\subset \Fil^h \mathcal{M} $.
\item the following diagram commutes:
\begin{equation}
\begin{split}
\xymatrix{ \Fil^h \mathcal{M}\ar[d]_{E(u) \nabla} \ar[r]^-{\varphi_h } & \mathcal{M} \ar[d]^{c_1 \nabla}\\
\Fil ^h \mathcal{M} \ar[r]^-{ u ^{p-1} \varphi_h } &\mathcal{M} }
\end{split}
\end{equation}
\end{enumerate}
\end{enumerate}
Let us explain the relationship between objects in $\Mod_{S, \tor}^{\varphi, h, \nabla} $ and the Breuil modules studied in the work of Breuil and Caruso. Let
$N_S : S \to S$ be the $W(k)$-linear derivation with $N_S(u)= u$. An object $\mathcal{M} $ in $\Mod_{S, \tor}^{\varphi, h }$ is called a \emph{Breuil module} if $\mathcal{M}$ admits a
$W(k)$-linear morphism $N: \mathcal{M} \to \mathcal{M} $ such that:
\begin{enumerate}
\item for all $s\in S $ and $x\in \mathcal{M}$, $N(sx)=N_S(s) x + s N(x)$.
\item $E(u) N(\Fil ^h \mathcal{M} )\subset \Fil ^h \mathcal{M} $.
\item the following diagram commutes:
\begin{equation}\label{eqn-N-diagram}
\begin{split}
\xymatrix{ \Fil^h \mathcal{M}\ar[d]_{E(u) N} \ar[r]^-{\varphi_h } & \mathcal{M}\ar[d]^{c_1 N}\\
\Fil ^h \mathcal{M} \ar[r]^-{\varphi_h } &\mathcal{M} }
\end{split}
\end{equation}
\end{enumerate}
\begin{remark} Breuil and Caruso use the convention $N_S(u )= -u$. In fact, using $N_S(u)= u$ makes almost no difference to the entire theory, except that the sign in formula \eqref{eqn-action-N} changes compared with the similar formula \cite[(5.1.1)]{LiuT-CofBreuil}.
\end{remark}
Let $\Mod_{S, \tor}^{\varphi, h, N} $ denote the category of Breuil modules. There is a natural functor $\Mod_{S, \tor}^{\varphi, h, \nabla } \to \Mod_{S, \tor}^{\varphi, h, N}$ defined by $N _\mathcal{M} = u\nabla$. It is easy to chase the diagrams to see that this functor makes sense. So we also call objects in $\textnormal{Mod}_{S, \tor}^{\varphi, h, \nabla}$ Breuil modules.
Now we can define a $G_K$-action on $\mathcal{M} \otimes_S A_{\cris}$ as follows:
for any $\sigma \in
G_K$,
any $x \otimes a \in \mathcal{M}\otimes_S A_{\cris}$,
define
\begin{equation}\label{eqn-action}
\sigma(x \otimes a)= \sum_{i=0}^\infty \nabla^i (x) \otimes \gamma_i
\left ( \sigma ([\underline{\pi}])- [\underline{\pi}] \right ) \sigma (a) .
\end{equation}
We can also define a $G_K$-action on $\mathcal{M} \otimes_S A_{\cris}$ as in \cite[\S5.1]{liu-notelattice}:
for any $\sigma \in
G_K$, recall $\underline{\varepsilon}(\sigma)= \frac{\sigma ([\underline{\pi}])}{[\underline{\pi}]}\in A_{\inf}$. For
any $x \otimes a \in \mathcal{M}\otimes_S A_{\cris}$,
define
\begin{equation}\label{eqn-action-N}
\sigma(x \otimes a)= \sum_{i=0}^\infty N^i (x) \otimes \gamma_i
(\log(\underline{\varepsilon}(\sigma))) \sigma(a).
\end{equation}
where $\gamma _i(x)= \frac{x^i}{i!} $ is the standard divided power. We claim that \eqref{eqn-action} and \eqref{eqn-action-N} are the same formula. Let us postpone the proof to \S \ref{subsec-two-equations}, as it is a long combinatorial calculation.
Note that if $\sigma \in G_\infty$, then $\log(\underline{\varepsilon}(\sigma))= 0$ and
$\sigma (x \otimes a)= x \otimes \sigma(a)$. Thus the $G_K$-action defined
above (if it is well defined) is compatible with the natural
$G_\infty$-action on $\mathcal{M} \otimes_S A_{\cris}$.
\begin{lemma}\label{lem-action-from_N}
The above formula gives a well-defined $A_{\cris}$-semi-linear $G_K$-action on $\mathcal{M}
\otimes_S A_{\cris}$ which is compatible with $\Fil^h (\mathcal{M} \otimes_S A_{\cris})$ and $\varphi_h$.
\end{lemma}
\begin{proof}The proof of \cite[\S5.1]{liu-notelattice} essentially applies here. It is standard to check that \eqref{eqn-action} is a well-defined map, that it is an $A_{\cris}$-semi-linear action on $\mathcal{M} \otimes _S A_{\cris}$ compatible with the $G_K$-action on $A_{\cris}$, and that $G_\infty$ acts trivially on $\mathcal{M} \otimes 1$. It is clear that $\log(\underline{\varepsilon} (\sigma))\in \Fil^1 A_{\cris} $. So, since $E(u) N(\Fil^h \mathcal{M} )\subset \Fil ^h \mathcal{M} $, we see that
$\sigma (\Fil ^h (\mathcal{M} \otimes _S A_{\cris})) \subset \Fil^h (\mathcal{M} \otimes _S A_{\cris})$. The only thing left to check is that $\varphi_h$ commutes with the $G_K$-action, which reduces to checking the following: writing $a = - \log (\underline{\varepsilon} (\sigma))$ and picking $x \in \Fil ^h \mathcal{M}$, we have
$$\varphi_h ( \gamma_i(a) \otimes N^i (x))= \gamma _i (a) \otimes N^i (\varphi_h (x)). $$
It is clear that $\varphi(a)= p a $. So $\varphi (\gamma_i (a))= \gamma_i (a) c_1^{-i} \varphi (E(u)^i)$. So the above equality reduces to checking that
$c_1^{-i} \varphi_h (E(u)^i N^i (x)) = N^i(\varphi_h (x))$, and this can be checked by induction on $i$; the case $i=1$ is exactly the commutativity of diagram \eqref{eqn-N-diagram}.
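For clarity, the divided-power identity used above unwinds as follows: since $\varphi(a) = pa$ and $\varphi(E(u)) = p c_1$,
\[
\varphi(\gamma_i(a)) = \frac{\varphi(a)^i}{i!} = p^i\, \gamma_i(a) = \gamma_i(a)\, c_1^{-i} (p c_1)^i = \gamma_i(a)\, c_1^{-i} \varphi\bigl(E(u)^i\bigr).
\]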
\end{proof}
\begin{corollary} Let $\mathcal{M} \in \textnormal{Mod} ^{\varphi,h, N} _{S , \tor} $ be a Breuil module.
Then $T_{S}(\mathcal{M} )$ (as a $G_\infty$-representation) extends to a $G_K$-representation.
\end{corollary}
To summarize this section, we return to the situation of \S\ref{subsection-stru of crys}, where $\mathcal{M}^i : = {\rm H}^i_{\cris} (X_n /S_n)$ is shown to admit the structures $\Fil^i\mathcal{M} ^i = {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}),$ $\varphi_i: \Fil^i \mathcal{M}^i \to \mathcal{M}^i $ and $\nabla : \mathcal{M}^i \to \mathcal{M}^i $. Obviously, our axioms for $\textnormal{Mod}^{\varphi, h, \nabla}_{S, \tor}$ are aimed at describing these structures on ${\rm H}^i_{\cris} (X_n /S_n)$.
\begin{definition}\label{def-Breuil-modules}
For $i \leq p-2$, we say that ${\rm H}^i_{\cris} (X_n /S_n)$ is a Breuil module if the quadruple
$$ \left ({\rm H}^i_{\cris} (X_n /S_n), {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}), \varphi_i, \nabla \right )$$
constructed in \S\ref{subsection-stru of crys} is an object in $\Mod_{S, \tor}^{\varphi, i , \nabla}$, which is equivalent to the triple
\[
\left ({\rm H}^i_{\cris} (X_n /S_n), {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}), \varphi_i \right )
\]
being an object in $\Mod_{S, \tor}^{\varphi, i}$.
\end{definition}
Our main theorem shows that ${\rm H} ^i _{\cris} (X_n /S_n)$, together with these structures, is indeed a Breuil module when $ei < p -1$.
\section{Torsion cohomology and comparison with \'etale cohomology}
In this section, we combine our previous preparations to understand the structure of torsion crystalline cohomology and its relationship with \'etale cohomology via torsion prismatic cohomology. In the end, we
show that if $ei < p -1$ then the $p ^n$-torsion crystalline cohomology ${\rm H} ^i_{\cris} (X_n /S_n)$ has the structure of a torsion Breuil module, which compares to ${\rm H} ^i_{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ via $T_S$, where $X_{\bar \eta}$ is the geometric generic fiber of $X$.
\subsection{Prismatic cohomology and (generalized) Kisin modules}\label{subsec-prism-is-kisin}
Let $(A,I)$ be any prism. As in the end of \S\ref{subsection-stru of crys},
for any $n \geq 1$, we define torsion prismatic cohomology $\mathrm {R\Gamma}_\Prism (X_n/A_n): = \mathrm{R\Gamma} _{\Prism}(X/A, \O _{\Prism}/ p ^n\O _{\Prism} ) = R\Gamma _{\Prism} (X/ A) \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}}\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$. We have
$R\Gamma_\Prism (X_n / A_n) \simeq R\Gamma _{\qsyn}(X, \Prism_{-/A}/p ^n )\simeq R\Gamma _{\qsyn} (X,\Prism_{-/A} ) \otimes^{\mathbb L}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
\begin{warning}\label{Warning-not-intrinsic}
We warn readers that the notation $\mathrm {R\Gamma}_\Prism (X_n/A_n)$ is misleading, as it might suggest that this cohomology
theory only depends on the mod $p^n$ reduction of $X$ which is not true.
See \cite[Remark 2.4]{BMS1} for a counterexample.
\end{warning}
\begin{proposition}
\label{prop-height}
Assume that $(A,I)$ is transversal and $\varphi: A \to A$ is flat. Then ${\rm H} ^i_\Prism(X_n/ A_n)$ has height $i$.
\end{proposition}
\begin{proof} We follow the idea of
\cite[Corollary 15.5]{BS19}, which proves that ${\rm H} ^i _\Prism (X/A)$ has height $i$.
Examining the proof, it suffices to show that $\varphi ^* R\Gamma_{\Prism} (X_n / A_n ) \simeq L \eta_I R\Gamma_{\Prism} (X_n /A_n)$ when $X = \Spf (R)$ is an affine smooth $p$-adic formal scheme over $A/I$.
By Theorem 15.3 of \emph{loc.~cit}, we have
$\varphi ^* R\Gamma_{\Prism} (X / A ) \simeq L \eta_I R\Gamma_{\Prism} (X /A)$. Since $\varphi: A \to A $ is flat, it suffices to show that
\begin{equation}\label{eqn-eta mod p n}
\left (L \eta_I R\Gamma_{\Prism} (X /A) \right ) \otimes^{{\mathbb L}} _{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} \simeq L \eta_I \left (R\Gamma_{\Prism} (X /A) \otimes^{{\mathbb L}}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} \right ).
\end{equation}
Now we may apply \cite[Lemma 5.16]{Bhatt-special} to the above by $g = p ^n$ and $f =d$. So we need to check that ${\rm H}^* (R\Gamma_{\Prism} (X /A) \otimes^{{\mathbb L}} _A A/d)$ has no $p^n$-torsion. This follows from the Hodge--Tate comparison
\[
{\rm H}^i (R\Gamma_{\Prism} (X /A) \otimes^{{\mathbb L}} _A A/I)\simeq \Omega^i _{X/ (A/I)}\{i\}.
\]
\end{proof}
\begin{corollary}\label{cor-pris is Kisin}
For $n \in \mathbb{N} \cup \{\infty\}$, the $\varphi$-module ${\rm H} ^i _{\Prism} (X_n /\mathfrak{S}_n)$ is an object of $\Mod_{\mathfrak{S}} ^{\varphi, i}$, i.e.,
a (generalized) Kisin module of height $i$ and $T_\mathfrak{S} ({\rm H} ^i_{\Prism} (X_n / \mathfrak{S}_n))\simeq {\rm H} ^i _{\et} (X _{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$.
\end{corollary}
\begin{proof}
It suffices to prove that $T_\mathfrak{S} ({\rm H} ^i_{\Prism} (X_n / \mathfrak{S}_n))\simeq {\rm H} ^i _{\et} (X _{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$. Write $\mathfrak{M} ^i _n : = {\rm H}^i _{\Prism} (X_n / \mathfrak{S}_n )$, ${\mathcal X} \coloneqq \Spf\O_{\mathbf{C}} \times_{\Spf \O_K} X$.
For $n \not = \infty$, by \cite[Theorem 1.8 (4) (5)]{BS19}, we have
\[ {\rm H}^i_{\et} (X_{\bar\eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) \simeq \left ({\rm H} ^i(\mathrm{R\Gamma}_{\Prism}({\mathcal X}/ A_{\inf})/ p ^n) [\frac{1}{E(u)}] \right)^{\varphi =1}= ( \mathfrak{M}^i_n\otimes _\mathfrak{S} W_n (\O_{\mathbf{C}}^\flat) [\frac 1 u]) ^{\varphi =1}= (\mathfrak{M}^i _n\otimes _\mathfrak{S} W_n (\mathbf{C}^\flat) ) ^{\varphi =1}, \]
which is just $ T_\mathfrak{S} (\mathfrak{M} ^i _n)$. The case $n =\infty$ follows easily by taking inverse limits.
\end{proof}
\begin{remark}\label{rem-G-compatible} The $G_\infty$-action on $T_\mathfrak{S} (\mathfrak{M}^i _n)$ discussed in \S \ref{subsec-Galos rep and Kisin} naturally extends to a $G_K$-action via the isomorphism $\mathfrak{M}^i_n \otimes_\mathfrak{S} A_{\inf} \simeq {\rm H} ^i _{\Prism} ({\mathcal X}/ A_{\inf})$, whose target admits a natural $G_K$-action commuting with $\varphi$. In this way $T_\mathfrak{S} ({\rm H} ^i_{\Prism} (X_n / \mathfrak{S}_n))\simeq {\rm H} ^i _{\et} (X _{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ is an isomorphism of $G_K$-representations.
\end{remark}
Let $X_k : = X \times_{\Spf(\O_K)} \Spf(k)$ be the closed fiber of $X$.
\begin{lemma}\label{lem-length of both fibers} If $\Len_{W(k)} {\rm H} ^i_{\cris}(X_k/ W_n (k)) = \Len_{\ensuremath{\mathbb{Z}}} {\rm H} ^i_{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ then $\mathfrak{M}^j _n$ has no $u$-torsion for $j = i , i +1$.
\end{lemma}
\begin{proof} We claim that $R\Gamma _\Prism (X_n / \mathfrak{S}_n) \otimes^{\mathbb L}_{\mathfrak{S}} W(k) \simeq R\Gamma _{\cris} (X_k /W_n (k))$. To see this, first note that $(\mathfrak{S}, E)\to (W(k), p)$, given by reduction mod $u$, is a map of prisms. So \cite[Theorem 1.8 (5)]{BS19} proves that $R\Gamma _\Prism (X / \mathfrak{S}) \otimes^{\mathbb L}_{\mathfrak{S}} W(k) \simeq R\Gamma _{\Prism} (X_k /W (k))$. Then Theorem 1.8 (1) of \emph{loc.~cit.} shows that $R\Gamma _\Prism (X / \mathfrak{S}) \otimes^{\mathbb L}_{\mathfrak{S}} W(k) \simeq R\Gamma _{\cris} (X_k /W (k))$. The claim then follows by applying $\otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n\ensuremath{\mathbb{Z}}$ to both sides.
The claim immediately yields the exact sequence
\begin{equation}\label{eqn-exact seq for length}
0 \to \mathfrak{M} ^i _n / u \mathfrak{M} ^i_n \to {\rm H}^i _{\cris} (X_k / W_n (k)) \to \mathfrak{M}^{i +1}_n [u]\to 0. \end{equation}
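For the reader's convenience, here is a sketch of the standard base-change argument behind this sequence, using the identification $W_n(k) = \mathfrak{S}_n/u$:
```latex
% Resolve W(k) = \mathfrak{S}/u by 0 \to \mathfrak{S} \xrightarrow{u} \mathfrak{S} \to W(k) \to 0.
% For any complex C of \mathfrak{S}-modules this gives, for every i, an exact sequence
0 \longrightarrow {\rm H}^i(C)/u\,{\rm H}^i(C)
  \longrightarrow {\rm H}^i\bigl(C \otimes^{\mathbb L}_{\mathfrak{S}} W(k)\bigr)
  \longrightarrow {\rm H}^{i+1}(C)[u]
  \longrightarrow 0.
% Taking C = R\Gamma_\Prism(X_n/\mathfrak{S}_n) and using the claim
% C \otimes^{\mathbb L}_{\mathfrak{S}} W(k) \simeq R\Gamma_{\cris}(X_k/W_n(k))
% recovers \eqref{eqn-exact seq for length}.
```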
So $\Len_{W(k)} \mathfrak{M} ^i _n / u \mathfrak{M} ^i_n \leq \Len _{W(k)} {\rm H}^i _{\cris} (X_k / W_n (k)) $. On the other hand, consider the exact sequence in Lemma \ref{lem-easy} with $\mathfrak{M}: = \mathfrak{M} ^i _n$
$$ 0 \to \mathfrak{M} [u ^\infty ] \to \mathfrak{M} \to \mathfrak{M}/ \mathfrak{M}[u ^\infty] \to 0 $$
Write $\mathfrak{M}^{\et}: = \mathfrak{M}/ \mathfrak{M}[u ^\infty]$. Since $\mathfrak{M}^{\et}$ has no $u$-torsion, the above exact sequence remains exact after reduction modulo $u$. So we have
$\Len _{W(k)} (\mathfrak{M}^{\et}/ u \mathfrak{M} ^{\et}) \leq \Len_{W(k)} \mathfrak{M}/ u \mathfrak{M}$, with equality only when $\mathfrak{M}[u ^\infty] = \{0\}$. Since $T_\mathfrak{S} (\mathfrak{M}) = T_\mathfrak{S} (\mathfrak{M} ^{\et})$, and $T_\mathfrak{S} (\mathfrak{M})\simeq {\rm H}^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ by Corollary \ref{cor-pris is Kisin}, Corollary \ref{cor-length-compare} yields the inequalities
\[\Len_{\ensuremath{\mathbb{Z}}} {\rm H}^i_{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})= \Len_{W(k)} (\mathfrak{M} ^{\et}/ u \mathfrak{M} ^{\et})\leq \Len _{W(k)} (\mathfrak{M} / u \mathfrak{M}). \]
Combining with the exact sequence \eqref{eqn-exact seq for length}, we conclude that \[ \Len_{\ensuremath{\mathbb{Z}}} {\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) \leq \Len_{W(k)} {\rm H}^i _{\cris} (X_k/ W_n(k)),\]
with equality only if all the above inequalities are equalities, in which case $\mathfrak{M}^{i}_n$ and $\mathfrak{M}^{i+1}_n$ have no $u$-torsion.
\end{proof}
\subsection{Nygaard filtration and Breuil--Kisin filtration} By Corollary \ref{cor-pris is Kisin}, $\mathfrak{M}_n ^i: = {\rm H} ^i_{\Prism} (X_n / \mathfrak{S} _n)$ is a Kisin module of height $i$. Then $\varphi^* \mathfrak{M}^i_n \simeq {\rm H}^i_{\qsyn} (X, \Prism^{(1)}_{-/\mathfrak{S}} \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})$ admits two filtrations: Breuil--Kisin filtration defined in \eqref{eqn-BK-filtration} and Nygaard filtration ${\rm H}^i _{\qsyn} (X , \Fil^i_{\rm N} \Prism^{(1)}_{-/\mathfrak{S}} \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}}{\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}})$. The aim of this subsection is to compare these two filtrations.
This theme can be put in a more general setting for a bounded prism $(A, I)$. Recall that in \cite[\S 15]{BS19} the authors studied
$\Prism_{-/A}$ and $\Prism^{(1)}_{-/A}: = A \wh \otimes^{{\mathbb L}}_{\varphi, A} \Prism_{-/A} $ as sheaves on $\mathrm{qSyn}_{A/I}$.
Also constructed in \emph{loc.~cit.}~is the so-called Nygaard filtration $\Fil^j_{\rm N} \Prism^{(1)} _{-/A}$, also discussed in \S \ref{subsec-NygaardFil}.
For any $n \in \mathbb{N} \cup \{\infty\}$, set $\Prism^{(1)}_n : = \Prism^{(1)}_{-/A}\otimes^{{\mathbb L}} _\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}/ p^n \ensuremath{\mathbb{Z}}$ and
$\Fil^j _{\rm N} \Prism^{(1)}_n : = \Fil^j_{\rm N} \Prism^{(1)} _{-/A} \otimes^{{\mathbb L}}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
Here and below, we adopt the convention that $n = \infty$ means we do not perform any base change.
\begin{lemma}
\label{general property of Nygaard filtration}
Let $(A,I)$ be a bounded prism.
Let $X$ be a smooth ($p$-adic) formal scheme over $\Spf(A/I)$ of relative dimension $n$.
Then we have:
\begin{enumerate}
\item The Nygaard filtration $\mathrm{R\Gamma}(X_{\qsyn}, \Fil^{\bullet}_{\mathrm{N}})$ on $\mathrm{R\Gamma}(X_{\qsyn}, \Prism^{(1)}_{-/A})$
is complete.
\item The natural map
\[
\Fil^i_{\mathrm{N}} \otimes_{A} I^j \to \Fil^{i+j}_{\mathrm{N}}
\]
of quasisyntomic sheaves induces a morphism
\[
{\rm H}^l(X_{\qsyn}, \Fil^i_{\mathrm{N}}) \otimes_{A} I^j \rightarrow {\rm H}^l(X_{\qsyn}, \Fil^{i+j}_{\mathrm{N}})
\]
which is an isomorphism when $l \leq i$ and an injection when $l = i+1$.
When $i \geq n$ this map induces an isomorphism
\[
\mathrm{R\Gamma}(X_{\qsyn}, \Fil^n_{\mathrm{N}}) \otimes_{A} I^j
\cong \mathrm{R\Gamma}(X_{\qsyn}, \Fil^{n+j}_{\mathrm{N}}).
\]
\item The natural map
\[
\varphi \colon \Fil^i_{\mathrm{N}} \to \Prism_{-/A} \otimes_{A} I^i
\]
induces a map on cohomology
\[
{\rm H}^l(X_{\qsyn}, \Fil^i_{\mathrm{N}}) \to {\rm H}^l(X_{\qsyn}, \Prism_{-/A}) \otimes_{A} I^i
\]
which is an isomorphism when $l \leq i$
and injective when $l = i+1$.
\end{enumerate}
Moreover their derived mod $p^m$ counterparts hold true as well.
\end{lemma}
We thank Bhargav for pointing out statement (3) above, which we did not realize could be proved so easily.
This significantly simplifies an earlier draft.
\begin{proof}
(1) follows from (2). Indeed, (2) implies that for $i \geq n$ the Nygaard filtration on
$\mathrm{R\Gamma}(X_{\qsyn}, \Fil^i_{\mathrm{N}})$ is simply the $I$-adic filtration,
hence it is complete.
(2) follows from the following exact triangle of quasisyntomic sheaves:
\[
\Fil^i_{\mathrm{N}} \otimes_{A} I \to \Fil^{i+1}_{\mathrm{N}} \to
\Fil^{i+1}_{\mathrm{H}}\dR_{-/(A/I)}^\wedge.
\]
Observe that
\[
\mathrm{R\Gamma}(X_{\qsyn},\Fil^{l}_{\mathrm{H}}\dR_{-/(A/I)}^\wedge)
\cong \mathrm{R\Gamma}(X, \Fil^{l}_{\mathrm{H}}\dR_{-/(A/I)}^\wedge)
\]
lives in $D^{\geq l}(A/I)$,
and vanishes when $l > n$.
An easy induction gives what we want.
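For completeness, here is a sketch of the induction for (2), in the case $j=1$ (the general case follows by iterating):
```latex
% The triangle \Fil^i_{\mathrm{N}} \otimes_A I \to \Fil^{i+1}_{\mathrm{N}}
%   \to \Fil^{i+1}_{\mathrm{H}}\dR_{-/(A/I)}^\wedge =: \mathrm{gr}
% gives a long exact sequence
\cdots \to {\rm H}^{l-1}(X_{\qsyn}, \mathrm{gr})
  \to {\rm H}^{l}(X_{\qsyn}, \Fil^{i}_{\mathrm{N}}) \otimes_A I
  \to {\rm H}^{l}(X_{\qsyn}, \Fil^{i+1}_{\mathrm{N}})
  \to {\rm H}^{l}(X_{\qsyn}, \mathrm{gr}) \to \cdots
% Since \mathrm{gr} lives in D^{\geq i+1}(A/I), both outer terms vanish for
% l \leq i (isomorphism), and the leftmost term vanishes for l = i+1 (injection).
% When i \geq n the term \mathrm{gr} vanishes entirely, giving the isomorphism
% of complexes.
```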
As for (3): we look at the map of filtered complexes
\[
\mathrm{R\Gamma}(X_{\qsyn}, \Fil^i_{\mathrm{N}}) \xrightarrow{\varphi} \mathrm{R\Gamma}(X_{\qsyn}, \Prism_{-/A} \otimes_{A} I^i)
\]
where the former is equipped with Nygaard filtration $\mathrm{R\Gamma}(X_{\qsyn}, \Fil^{i+\ast}_{\mathrm{N}})$
and the latter is equipped with $I$-adic filtration $\mathrm{R\Gamma}(X_{\qsyn}, \Prism_{-/A} \otimes_{A} I^{i+\ast})$.
Notice that both filtrations are complete.
Now \cite[Theorem 15.2.(2)]{BS19} implies that the cone of the map on $(i+\ast)$-th graded pieces lives in $D^{>(i+\ast)}(A/I)$.
Hence we conclude that the cone of $\varphi$ lives in $D^{> i}(A)$.
Therefore the induced maps of degree at most $i$ cohomology groups are isomorphisms,
and the induced map in degree $i+1$ is injective.
Their derived mod $p^m$ counterparts are proved in exactly the same way.
\end{proof}
Now let us return to the situation of the Breuil--Kisin prism $A= \mathfrak{S}$.
Recall that $\Prism^{(1)}_n : = \Prism^{(1)}_{-/\mathfrak{S}} \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$ and $\Fil^i_{\rm N }\Prism^{(1)}_n : = \Fil ^i_{\rm N }\Prism^{(1)}_{-/\mathfrak{S}} \otimes^{\mathbb L}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$.
Recall that $\mathfrak{M}^i_n \coloneqq {\rm H} ^i_{\Prism} (X_n / \mathfrak{S}_n)$
and recall that the Breuil--Kisin filtration on $\varphi^*\mathfrak{M}^i _n \cong {\rm H} ^i_{\qsyn} (X, \Prism^{(1)}_n)$ is defined as the image of
$\psi \colon \mathfrak{M}^i_n \to \varphi^* \mathfrak{M} ^i _n$.
\begin{corollary}
\label{quasi-filtration identification}
For any $i \in \mathbb{N}$ and any $n \in \mathbb{N} \cup \{\infty\}$, there is a functorial commutative diagram:
\[
\xymatrix{
{\rm H}^i_{\qsyn} (X, \Fil ^i_{\rm N } \Prism ^{(1)}_n) \ar[rr]^-{\varphi_i} \ar[rd] && \mathfrak{M} ^i_n \ar[dl]^-{\psi}\\
& \varphi^* \mathfrak{M} ^i_n &
}
\]
with $\varphi_i$ an isomorphism.
\end{corollary}
\begin{proof}
First let us justify the existence of the functorial commutative diagram.
We may work with affine formal schemes $Y = \Spf (R)$.
In this case, by the proof of \cite[Theorem 15.3 and Corollary 15.5]{BS19},
we see that $\psi$ is constructed via the right-lower corner of the following diagram
\[\xymatrix{ \tau^{\leq i}R\Gamma_{\qsyn}(Y, \Fil ^i_{\rm N} \Prism ^{(1)})\ar[d] \ar@{-->}[rr] & & \tau^{\leq i } R\Gamma_{\qsyn} (Y, \Prism) \otimes (E)^i \ar[ld] ^{\psi} \ar[d]\\\tau ^{\leq i} R\Gamma_{\qsyn} (Y, \Prism^{(1)}) \ar[r]^{\sim} \ar@/_1pc/[rr]_{\varphi} & \tau^{\leq i} L\eta_E R\Gamma_{\qsyn} (Y, \Prism)\ar[r] & \tau^{\leq i} R\Gamma_{\qsyn} (Y, \Prism)} \]
Here the top row is the $\tau^{\leq i}$-truncation of the morphism
\[
\mathrm{R\Gamma}(X_{\qsyn}, \Fil^i_{\mathrm{N}}) \xrightarrow{\varphi} \mathrm{R\Gamma}(X_{\qsyn}, \Prism_{-/A} \otimes_{A} I^i)
\]
appearing in Lemma \ref{general property of Nygaard filtration}.
Taking derived mod $p^n$ gives the desired functorial commutative diagram.
By \Cref{general property of Nygaard filtration} (3) we know
that $\varphi_i$ is an isomorphism.
\end{proof}
\begin{remark}
In the context of filtered derived infinity categories, a filtration is nothing but an arrow.
Hence one could define two ``quasi-filtrations''\footnote{This terminology is suggested by S.~Mondal.}:
one being the Breuil--Kisin quasi-filtration: $\mathfrak{M}^i_n \xrightarrow{\psi} \varphi^* \mathfrak{M} ^i _n$;
another being the $i$-th Nygaard quasi-filtration: ${\rm H}^i_{\qsyn} (X, \Fil ^i_{\rm N } \Prism ^{(1)}_n) \to \varphi^* \mathfrak{M} ^i _n$.
Then the above is saying that these two quasi-filtrations are canonically identified via $\varphi_i$.
\end{remark}
Let us name the map
\[
\iota^{i,j}_n \colon {\rm H}^i_{\qsyn} (X, \Fil ^j_{\rm N } \Prism ^{(1)}_n) \to \Fil^j_{\BK} {\rm H} ^i_{\qsyn} (X, \Prism^{(1)}_n)
\]
for any pair of natural numbers $(i,j)$ and any $n \in \mathbb{N} \cup \{\infty\}$.
When $i \leq j$, we can describe the image of $\iota^{i,j}_n$ as follows.
\begin{corollary}\label{cor-BK-Nygaard}
Let $i \leq j$.
Then we have an identification
\[
\mathrm{Im}(\iota^{i,j}_n) \cong \mathrm{Im}(\psi \colon \mathfrak{M}^i_n \to \varphi^* \mathfrak{M}^i_n) \cdot E^{j-i}.
\]
In particular, setting $\widetilde{\mathfrak{M}^i_n} \coloneqq \mathfrak{M}^i_n/\mathfrak{M}^i_n[u^\infty]$ and $\widetilde{\varphi^* \mathfrak{M}^i_n} \coloneqq \varphi^*\mathfrak{M}^i_n/\varphi^*\mathfrak{M}^i_n[u^\infty]$,
we have an identification
\[
\mathrm{Im}(\widetilde{\iota}^{i,j}_n \colon {\rm H}^i_{\qsyn} (X, \Fil ^j_{\rm N } \Prism ^{(1)}_n) \to \widetilde{\varphi^* \mathfrak{M}^i_n})
\cong \{x \in \widetilde{\varphi ^* \mathfrak{M}^i_n} \mid (1\otimes \varphi) (x) \in E(u)^{j} \widetilde{\mathfrak{M}^i_n}\}.
\]
\]
\end{corollary}
\begin{proof}
The first statement follows from combining \Cref{general property of Nygaard filtration} (2) and \Cref{quasi-filtration identification}.
The second statement follows from the first statement and the fact that $\mathfrak{M}^i_n$ has height $i$.
\end{proof}
Below we make a preliminary investigation of what happens without assuming $i \leq j$.
\begin{proposition}
\label{prop-fil-finite-difference}
Let $A= \mathfrak{S}$ be the Breuil--Kisin prism.
For any triple $(i,j,n)$, the kernel and cokernel of $\iota^{i,j}_n$ above are finite.
\end{proposition}
\begin{proof}
Note that the kernel and cokernel of $\iota^{i,j}_n$ are finitely generated modules over $\mathfrak{S}/(p^n)$.
We have a containment
\[
E(u)^j \cdot \Prism ^{(1)} \subset \Fil^j _N \Prism ^{(1)} \subset \Prism ^{(1)}
\]
of sheaves on $\mathrm{qSyn}_{A/I}$.
This shows that the map $\iota^{i,j}_n$ admits a section up to multiplication by $E(u)^j$,
therefore the kernel and cokernel of $\iota^{i,j}_n$ are annihilated by $E(u)^j$.
If $n \in \mathbb{N}$, the kernel and cokernel of $\iota^{i,j}_n$ are finitely generated modules over $\mathfrak{S}/(p^n, E(u)^j)$, hence finite.
If $n = \infty$, denoting the map by $\iota^{i,j}$, we make the following
\begin{claim}
\label{Nygaard-BK-fil-same-inverting-p}
The map $\iota^{i,j} \colon {\rm H}^i_{\qsyn} (X, \Fil^j _N \Prism ^{(1)})[1/p] \to
\Fil^j_{\BK} \varphi^* \mathfrak{M}^i[1/p]$ is an isomorphism.
\end{claim}
Granting this claim, the kernel and cokernel of $\iota^{i,j}$ are finitely generated modules over $\mathfrak{S}/(E(u)^j)$ annihilated by a power
of $p$, hence finite.
\end{proof}
\begin{proof}[{Proof of \Cref{Nygaard-BK-fil-same-inverting-p}}]
First let us show the injectivity, which is the same as injectivity of
\[
{\rm H}^i_{\qsyn} (X, \Fil^j _N \Prism ^{(1)})[1/p] \to {\rm H}^i_{\qsyn} (X, \Prism ^{(1)})[1/p].
\]
To this end, we use the filtration $\Fil^{i,j}$ discussed in \Cref{subsec-NygaardFil}.
We claim a slightly stronger statement: the maps
\[
{\rm H}^m_{\qsyn} (X, \Fil^{i,j} \Prism ^{(1)})[1/p] \to
{\rm H}^m_{\qsyn} (X, \Fil^{i,0} \Prism ^{(1)})[1/p]
\]
are injective for all $i \geq 0$.
The case of $i \geq j$ is trivial due to \Cref{general KON filtration properties} (2).
For the remaining $i$, we perform descending induction on $i$.
By the five lemma and \Cref{general KON filtration properties} (3), it suffices to know that the maps
\[
{\rm H}^m (X, \Fil^{j-i}_{\mathrm{H}} \dR_{X/\mathcal{O}_K})[1/p] \to
{\rm H}^m (X, \dR_{X/\mathcal{O}_K})[1/p]
\]
are injective.
This injectivity is equivalent to the degeneration of the Hodge-to-de Rham spectral sequence for the rigid space $X_K$, which is a result
due to Scholze~\cite[Theorem 1.8]{Sch13}.
Next we show surjectivity by induction on $j$, the case of $j = 0$ being trivial. All we need to show is that the induced map
\[
\mathrm{Coker}\bigg({\rm H}^i_{\qsyn} (X, \Fil^{j+1} _N \Prism ^{(1)})[1/p]
\to {\rm H}^i_{\qsyn} (X, \Fil^j _N \Prism ^{(1)})[1/p]\bigg)
\xrightarrow{\overline{\varphi}} \frac{E(u)^j \mathfrak{M}^i}{E(u)^{j+1} \mathfrak{M}^i}[1/p]
\]
is injective.
By the injectivity of $\iota^{i,j}[1/p]$ proved in the previous paragraph,
we can rewrite the left hand side as ${\rm H}^i_{\qsyn} (X, \gr^j _N \Prism ^{(1)})[1/p]$.
Recall that $\mathfrak{M}^i[1/p]$ is finite free over $\mathfrak{S}[1/p]$ (see~\Cref{lem-etale} (3)),
therefore the right hand side can be rewritten as
${\rm H}^i_{\qsyn}(X, \overline{\Prism})[1/p]\{j\}$, the $j$-th Breuil--Kisin twist
of the $i$-th Hodge--Tate cohomology of $X_K$.
By \cite[Theorem 15.2]{BS19}, we can identify the left hand side
further as the $j$-th conjugate filtration of the right hand side.
Now it follows from the degeneration of Hodge--Tate spectral sequence~\cite[Theorem 13.3]{BMS1} that $\overline{\varphi}$ is always injective.
\end{proof}
Below we exhibit an example illustrating the necessity of the $i \leq j$ assumption in \Cref{cor-BK-Nygaard}.
\begin{example}[{see~\cite[Section 4]{Li20}}]
Let $K$ be a ramified quadratic extension of $\mathbb{Q}_p$ and let $G$ be a lift of $\alpha_p$ over $\mathcal{O}_K$.
Denote the classifying stack of $G$ by $BG$.
Below we summarize the previous study of various cohomologies of $BG$ documented in \cite[4.6-4.10]{Li20}, following the notation there.
\begin{enumerate}
\item The Breuil--Kisin prismatic cohomology ring of $BG$ is given by
\[
{\rm H}^*_{\Prism}(BG/\mathfrak{S}) \cong \mathfrak{S}[\widetilde{u}]/(p \cdot \widetilde{u})
\]
where $\widetilde{u}$ has degree $2$.
\item The Hodge--Tate spectral sequence does not degenerate at the $E_2$-page, but degenerates at the $E_3$-page, giving rise
to short exact sequences:
\[
0 \to {\rm H}^{i+1}(BG, \wedge^{i-1} \mathbb{L}_{BG/\mathcal{O}_K}) \simeq \mathbb{F}_p \to {\rm H}^{2i}_{\mathrm{HT}}(BG/\mathcal{O}_K) \simeq \mathcal{O}_K/(p)
\to {\rm H}^{i}(BG, \wedge^{i} \mathbb{L}_{BG/\mathcal{O}_K}) \simeq \mathbb{F}_p \to 0
\]
for all $i > 0$.
\item The Hodge-to-de Rham spectral sequence does not degenerate at the $E_1$-page, but degenerates at the $E_2$-page, giving rise
to short exact sequences:
\[
0 \to {\rm H}^{2i-1}(BG, \mathbb{L}_{BG/\mathcal{O}_K}) \simeq \mathbb{F}_p \to {\rm H}^{2i}_{\mathrm{dR}}(BG/\mathcal{O}_K) \simeq \mathcal{O}_K/(p)
\to {\rm H}^{2i}(BG, \mathcal{O}_{BG}) \simeq \mathbb{F}_p \to 0
\]
for all $i > 0$.
\end{enumerate}
By \cite[Theorem 15.2]{BS19}, we have the following commutative diagram:
\[
\xymatrix{
\mathrm{R\Gamma}_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)}) \ar[rr]^-{\varphi} \ar[d] & & \mathrm{R\Gamma}_{\qsyn}(BG/\mathfrak{S}, \Prism) \ar[d] \\
\mathrm{R\Gamma_{dR}}(BG/\mathcal{O}_K) \ar[r] & \mathrm{R\Gamma}(BG, \mathcal{O}_{BG}) \ar[r] & \mathrm{R\Gamma_{HT}}(BG/\mathcal{O}_K)
}
\]
where $\varphi$ is the Frobenius on prismatic cohomology, the vertical maps are derived reduction modulo $E(u)$, and the two arrows
on the bottom row are the natural arrows appearing in the Hodge-to-de Rham and Hodge--Tate spectral sequences respectively.
Looking at the degree $2$ cohomology together with (2) and (3) above, we see that $\varphi$ on ${\rm H}^2_{\Prism}(BG/\mathfrak{S})$
is given by, up to a unit in $\mathfrak{S}/p$, multiplication by $u \in \mathfrak{S}/p$.
Since $\varphi$ is a map of $\mathbb{E}_{\infty}$-algebras, using (1) we see that $\varphi$ on ${\rm H}^4_{\Prism}(BG/\mathfrak{S})$
is given by, up to a unit in $\mathfrak{S}/p$, multiplication by $u^2 = E(u) \in \mathfrak{S}/p$.
In particular, we see that $\Fil^1_{\BK} {\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)})={\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)}) $ is the whole cohomology group.
On the other hand, we claim that the map
\[
{\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Fil^1_{\mathrm{N}}\Prism^{(1)}) \to {\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)})
\]
is not surjective.
Indeed we have a long exact sequence coming from the exact triangle $\Fil^1_{\mathrm{N}}\Prism^{(1)} \to \Prism^{(1)} \to \mathcal{O}_{BG}$,
with the second arrow being derived reduction modulo $E(u)$ followed by projection modulo the first Hodge filtration.
Hence (3) above shows that the cokernel has length exactly $1$.
This shows that $BG$ is a smooth proper stack counterexample for $(i,j,n) = (4,1,\infty)$.
Since all these cohomology groups are $p$-torsion, we see that this also provides a stacky
counterexample for $(i,j,n) = (3,1,1)$.
Finally let us use an approximation of $BG$ to get a smooth proper scheme counterexample.
By \cite[Subsection 4.3]{Li20} there is a smooth projective fourfold $X$ over $\mathcal{O}_K$ together with a map
$f \colon X \to BG$ such that the induced pullback map of Hodge cohomology is injective when total degree is no larger than $4$.
By functoriality of the formation of Breuil--Kisin filtrations,
we know that
\[
\mathrm{Im}\bigg(f^* \colon {\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)}) \to {\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Prism^{(1)})\bigg)
\subset \Fil^1_{\BK} {\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Prism^{(1)}).
\]
Lastly we claim that $f^*(\widetilde{u}^2) \in {\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Prism^{(1)})$ is not in the image of ${\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Fil^1_{\mathrm{N}}\Prism^{(1)})$.
To see this it suffices to compare the two exact sequences:
\[
\xymatrix{
{\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Fil^1_{\mathrm{N}}\Prism^{(1)}) \ar[r] \ar[d] & {\rm H}^4_{\qsyn}(BG/\mathfrak{S}, \Prism^{(1)}) \ar[r] \ar[d] & {\rm H}^4_{\qsyn}(BG, \mathcal{O}_{BG}) \ar[d]^-{f^*} \\
{\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Fil^1_{\mathrm{N}}\Prism^{(1)}) \ar[r] & {\rm H}^4_{\qsyn}(X/\mathfrak{S}, \Prism^{(1)}) \ar[r] & {\rm H}^4_{\qsyn}(X, \mathcal{O}_{X})
}
\]
and invoke the fact that $f^*$ is injective by our choice of $X$.
This gives us a smooth projective fourfold over $\mathcal{O}_K$ violating the conclusion of \Cref{cor-BK-Nygaard}
for $(i,j,n) = (4,1,\infty)$ or $(i,j,n) = (3,1,1)$.
\end{example}
\subsection{Torsion crystalline cohomology}
Now we are ready to discuss the structure of ${\rm H}^i_{\cris} (X_n / S_n)$ via prismatic cohomology.
First, we provide an application of the comparison
$\mathrm{R\Gamma}_{\Prism}(\mathcal{X}/\mathfrak{S}) \otimes_{\mathfrak{S}, \varphi} S
\cong \mathrm{R\Gamma}_{\cris}(\mathcal{X}/S)$,
which concerns the module structure of the cohomology of the latter.
We need some preparations.
\begin{lemma}
\label{coherence lemma}
The rings $S/p^n$ are coherent for all $n \in \mathbb{N}$.
\end{lemma}
We do not know if the ring $S$ itself is coherent.
\begin{proof}
We make an induction on $n$.
The starting case is $n=1$: since $S$ is obtained from $\mathfrak{S}$ by $p$-completely
adjoining the divided powers of the Eisenstein polynomial $E(u)$,
we see that $S/p$ is obtained by adjoining divided powers of $E(u) \equiv u^e$
to $\mathfrak{S}/p = k[\![u]\!]$.
It is well-known that the result is
$S/p \cong k[u]/u^{pe} \otimes_k k[u_1, u_2, \ldots]/(u_i^p)$
where $u_i$ is the image of the $p^i$-th divided powers of $E(u)$.
One checks this explicit algebra is coherent by noting
that any finitely generated ideal is generated by polynomials
involving only finitely many variables.
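To illustrate, here is a sketch of why the stated relations hold, using only the standard divided power identity $\gamma_m(x)\gamma_{m'}(x) = \binom{m+m'}{m}\gamma_{m+m'}(x)$:
```latex
% In S/p one has E(u) \equiv u^e, hence
u^{pe} \equiv E(u)^p = p!\,\gamma_p(E(u)) \equiv 0 \pmod p,
% which accounts for the factor k[u]/u^{pe}. Similarly, with u_i = \gamma_{p^i}(E(u)),
u_i^p = \gamma_{p^i}(E(u))^p
      = \frac{(p^{i+1})!}{(p^i!)^p}\,\gamma_{p^{i+1}}(E(u))
      \equiv 0 \pmod p,
% since the integer (p^{i+1})!/(p^i!)^p equals p! times a multinomial coefficient,
% hence is divisible by p; this accounts for the relations u_i^p = 0.
```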
Now we do the induction, which largely relies on \cite[Lemma 3.26]{BMS1}.
Indeed the cited lemma reduces us to showing that the ideal $(p^n)/(p^{n+1})$
in $S/p^{n+1}$, when viewed as an $S/p$-module, is finitely presented.
But in fact $S$ is $p$-torsion free, hence the ideal
$(p^n)/(p^{n+1})$ is free of rank one as an $S/p$-module, generated by $p^n$.
\end{proof}
\begin{lemma}
\label{lem-comparison}
Suppose that $C^\bullet$ is a perfect
$\mathfrak{S}_n$-complex.
Then there exists an exact sequence of $S$-modules
\begin{equation*}
\xymatrix{
0 \ar[r] & {\rm H}^i (C^\bullet) \otimes_{\mathfrak{S}} S
\ar[r] & {\rm H}^i( C ^\bullet \otimes^{\mathbb L} _\mathfrak{S} S )
\ar[r] & \Tor_1^{\mathfrak{S}}( {\rm H}^{i+1} (C^\bullet), S ) \ar[r] & 0.
}
\end{equation*}
Moreover, $S$ has Tor-amplitude $1$ over $\mathfrak{S}$ and the functor $M \mapsto \Tor_1^{\mathfrak{S}}(M, S)$ is left exact.
\end{lemma}
\begin{proof}
For the first claim, see the argument before the proof of Theorem 5.4 in \cite{Cais-Liu-BK-crys} and replace $A_{\inf}$ (resp.~$A_{\cris}$) there by $\mathfrak{S}$ (resp.~$S$).
The fact that $S$ has Tor-amplitude $1$ over $\mathfrak{S}$ follows from the Auslander--Buchsbaum formula
and torsion-freeness of $S$.
\end{proof}
\begin{proposition}
\label{Tor1 is finitely presented prop}
Let $M$ be a finitely generated Kisin module, then
$\mathrm{Tor}_1^{\mathfrak{S}}(M, \varphi_* S)$
is a finitely presented $S$-module.
\end{proposition}
\begin{proof}
Denote $N \coloneqq M[u^\infty]$, the maximal finite length $\mathfrak{S}$-submodule
of $M$.
We first show $\mathrm{Tor}_1^{\mathfrak{S}}(N, \varphi_* S) \to \mathrm{Tor}_1^{\mathfrak{S}}(M, \varphi_* S)$
is an isomorphism.
Since $S$ has Tor-amplitude $1$ over $\mathfrak{S}$ by \Cref{lem-comparison},
it suffices to show the vanishing of $\mathrm{Tor}_1^{\mathfrak{S}}(M/N, \varphi_* S)$.
Since $M/N$ is an \'{e}tale Kisin module, we have a short exact sequence
\[
0 \to (M/N)_{\mathrm{tor}} \to M/N \to (M/N)_{\mathrm{tf}} \to 0,
\]
where $(M/N)_{\mathrm{tor}}$ is a successive extension of copies of $k[\![u]\!] = \mathfrak{S}/p$
(as $M/N$ is \'{e}tale), and $(M/N)_{\mathrm{tf}}$ is torsion-free.
Next observe that both of these structures are preserved under
base change along the Frobenius on $\mathfrak{S}$.
Therefore it suffices to show $\mathrm{Tor}_1^{\mathfrak{S}}(-, S)=0$
whenever the input $\mathfrak{S}$-module is $\mathfrak{S}/p$
or torsion-free. The former case follows from the fact that $S$
has no $p$-torsion.
For the latter, consider the reflexive hull $M'^{\vee \vee}$ of the input module
$M'$; we have $M' \subset M'^{\vee \vee}$, and $M'^{\vee \vee}$ is finite free since $\mathfrak{S}$
is a regular Noetherian ring of dimension $2$.
Finally the desired vanishing of $\mathrm{Tor}_1^{\mathfrak{S}}(M', S)$
follows from the left exactness of $\mathrm{Tor}_1$ against $S$ over $\mathfrak{S}$:
see \Cref{lem-comparison}.
It thus suffices to show that $\mathrm{Tor}_1^{\mathfrak{S}}(N', S)$ is finitely presented
for any finite length $\mathfrak{S}$-module $N'$,
which is the content of the next lemma.
\end{proof}
\begin{lemma}
Let $N$ be a finite length $\mathfrak{S}$-module, then
$N \otimes_{\mathfrak{S}} S$ and
$\mathrm{Tor}_1^{\mathfrak{S}}(N, S)$ are finitely presented $S$-modules.
\end{lemma}
\begin{proof}
When $N = k \cong \mathfrak{S}/(p,u)$, the statement for
$k \otimes_{\mathfrak{S}} S = S/(p,u) = (S/p)/u$ and
$\mathrm{Tor}_1^{\mathfrak{S}}(k, S) \cong (S/p)[u]$ follows from the fact that
$S/p$ is coherent (\Cref{coherence lemma}) and \cite[\href{https://stacks.math.columbia.edu/tag/05CW}{Tag 05CW (3)}]{stacks-project}.
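As a sketch of this base case computation, which uses only that $S$ is $p$-torsion free:
```latex
% From 0 \to \mathfrak{S} \xrightarrow{p} \mathfrak{S} \to \mathfrak{S}/p \to 0
% we get \Tor_1^{\mathfrak{S}}(\mathfrak{S}/p, S) = S[p] = 0.
% Then 0 \to \mathfrak{S}/p \xrightarrow{u} \mathfrak{S}/p \to k \to 0 yields
0 = \Tor_1^{\mathfrak{S}}(\mathfrak{S}/p, S) \to \Tor_1^{\mathfrak{S}}(k, S)
  \to S/p \xrightarrow{\;u\;} S/p \to k \otimes_{\mathfrak{S}} S \to 0,
% whence \Tor_1^{\mathfrak{S}}(k, S) \cong (S/p)[u], the u-torsion of S/p.
```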
Next we make an induction on the length of $N$.
By considering $N \twoheadrightarrow N/(p,u) \simeq k^{\oplus r}$,
we have a short exact sequence
$0 \to N' \to N \to k \to 0$, which induces a long exact sequence:
\[
0 \to \mathrm{Tor}_1^{\mathfrak{S}}(N', S) \to \mathrm{Tor}_1^{\mathfrak{S}}(N, S) \to
\mathrm{Tor}_1^{\mathfrak{S}}(k, S) \to N' \otimes_{\mathfrak{S}} S \to N \otimes_{\mathfrak{S}} S
\to k \otimes_{\mathfrak{S}} S \to 0.
\]
The induction hypotheses imply that all terms except
$N \otimes_{\mathfrak{S}} S$ and
$\mathrm{Tor}_1^{\mathfrak{S}}(N, S)$ are finitely presented $S$-modules.
Note that the finite length assumption implies all modules involved are $S/p^m$-modules
for some sufficiently large integer $m$.
The coherence of $S/p^m$ (\Cref{coherence lemma}) and \cite[\href{https://stacks.math.columbia.edu/tag/05CW}{Tag 05CW (3)}]{stacks-project}
show that the boundary map $\mathrm{Tor}_1^{\mathfrak{S}}(k, S) \to N' \otimes_{\mathfrak{S}} S$
has finitely presented kernel and cokernel.
Now we use
\cite[\href{https://stacks.math.columbia.edu/tag/0519}{Tag 0519}]{stacks-project}
to finish the proof.
\end{proof}
\begin{proposition}
\label{fp proposition}
Let $\mathcal{X}$ be a smooth proper $p$-adic formal scheme over
$\Spf(\mathcal{O}_K)$.
The $S/p^n$-module ${\rm H}^i_{\cris}(\mathcal{X}_n/S_n)$
is finitely presented, for any integer $i$ and
any $n \in \mathbb{N} \cup \{\infty\}$.
\end{proposition}
Let us stress again that this already follows from \cite[Theorem 5.2]{BS19}.
\begin{proof}
The case of finite $n$ follows from \Cref{coherence lemma}:
the prismatic cohomology complex is a perfect complex, hence the comparison
\cite[Theorem 5.2]{BS19} or \Cref{comparing pris and crys}
shows the crystalline cohomology complex is also perfect
over the coherent ring $S/p^n$.
Therefore all of its cohomology modules are finitely presented as $S/p^n$-modules.
Now we turn to the case $n = \infty$.
By \Cref{lem-comparison} there is a short exact sequence:
\[
0 \to
{\rm H}^i_{\Prism}(\mathcal{X}/\mathfrak{S}) \otimes_{\mathfrak{S}, \varphi} S
\to {\rm H}^i_{\cris}(\mathcal{X}/S) \to
\mathrm{Tor}_1^{\mathfrak{S}}({\rm H}^{i+1}_{\Prism}(\mathcal{X}/\mathfrak{S}), \varphi_* S) \to 0.
\]
Since the prismatic cohomology complex is perfect and the ring $\mathfrak{S}$
is Noetherian, we know the term ${\rm H}^i_{\Prism}(\mathcal{X}/\mathfrak{S}) \otimes_{\mathfrak{S}, \varphi} S$
is finitely presented.
Using \cite[\href{https://stacks.math.columbia.edu/tag/0519}{Tag 0519}]{stacks-project}
we are reduced to showing the term
$\mathrm{Tor}_1^{\mathfrak{S}}({\rm H}^{i+1}_{\Prism}(\mathcal{X}/\mathfrak{S}), \varphi_* S)$
is finitely presented.
This in turn follows from \Cref{Tor1 is finitely presented prop} and
the fact that ${\rm H}^{i+1}_{\Prism}(\mathcal{X}/\mathfrak{S})$ is a Kisin module:
see \Cref{cor-pris is Kisin}.
\end{proof}
Now we turn to the main result of our paper,
which concerns the Breuil-module structure of the crystalline cohomology.
Write $\mathfrak{M}^j_n: = {\rm H}^j_\Prism ( X_n/ \mathfrak{S}_n)$ and $\mathcal{M}^j_n: = {\rm H}^j_{\cris} (X_n /S_n)$.
\begin{lemma}
\label{lemma-mod I+S}
The following sequence
\[
0 \to \mathfrak{M}^i _n / u \mathfrak{M} ^i _n \to \mathcal{M}^i _n / I^+S \to \Tor_1^{\mathfrak{S}}( \mathfrak{M}^{i +1}_n, \varphi_* S )/ I^+ S \to 0
\]
is exact.
\end{lemma}
\begin{proof}
By the derived mod $p ^n$ version of \Cref{global comparison}, we have $S \otimes_{\varphi, \mathfrak{S}}^{\mathbb L} R\Gamma_\Prism (X_n/ \mathfrak{S}_n )\simeq R\Gamma _{\cris} (X_n / S_n)$. So Lemma \ref{lem-comparison} yields an exact sequence
\begin{equation}\label{eqn-tensor-tor-2}
\xymatrix{0 \ar[r] & S \otimes_{\varphi, \mathfrak{S} }\mathfrak{M}^i_n \ar[r] & \mathcal{M} ^i_n \ar[r] & \Tor _1^\mathfrak{S} (\mathfrak{M}^{i +1}_n, \varphi_* S) \ar[r] & 0}
\end{equation}
as $\varphi$ on $\mathfrak{S}$ is finite flat.
We only need to show that the above exact sequence remains left exact after reduction modulo $I^+S$. To see this, note that
$R\Gamma_{\cris} (X_k / W_n (k)) \simeq R\Gamma _ \Prism (X_n / \mathfrak{S}_n) \otimes^{\mathbb L} _{\mathfrak{S}} W(k) \simeq R\Gamma_{\cris} (X_n /S_n) \otimes^{\mathbb L}_{S} W(k )$, where in the last identification we use the fact that the Frobenius on $W(k)$ is an isomorphism.
Using the exact sequence \eqref{eqn-exact seq for length}, we obtain the following commutative diagram
\[ \xymatrix{ & S/I ^+S \otimes_{\varphi, \mathfrak{S} }\mathfrak{M}^i_n \ar[d]^\wr \ar[r] & \mathcal{M} ^i_n/ I^+S \ar[d]\ar[r] & \Tor _1^\mathfrak{S} (\mathfrak{M}^{i +1}_n, \varphi_*S)/ I ^+S \ar[r]
\ar[d] & 0 \\ 0 \ar[r] & \mathfrak{M}^i_n/ u \mathfrak{M} ^i _n \ar[r] & {\rm H}^i_{\cris} (X_k/ W_n (k)) \ar[r]& \mathfrak{M}^{i+1}_n [u]\ar[r] & 0. }\]
Since the left column is an isomorphism, we conclude that the top row is left exact as desired.
\end{proof}
Recall from \Cref{def-Breuil-modules} that ${\rm H} ^i_{\cris} (X_n / S_n)$ is defined to be a Breuil module if the quadruple
\[
\left ({\rm H}^i_{\cris} (X_n /S_n), {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}), \varphi_i, \nabla \right)
\]
constructed in \S\ref{subsection-stru of crys} is an object of
$\Mod_{S, \tor}^{\varphi, i , \nabla}$.
This condition is equivalent to the triple
\[
\left ({\rm H}^i_{\cris} (X_n /S_n), {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}), \varphi_i \right )
\]
being an object of $\Mod_{S, \tor}^{\varphi, i}$.
\begin{theorem} \label{thm-main-1}
Let $n \in \mathbb{N}$ and assume $i\leq p-2$. Then $ {\rm H}^j_\Prism(X_n/ \mathfrak{S}_n)$ has no $u$-torsion for $j = i ,i +1$ if and only if
${\rm H}^i _ {\cris}(X_n /S_n)$ is a Breuil module.
When that happens we have $\underline \mathcal{M} ({\rm H}^i_\Prism(X_n/ \mathfrak{S}_n)) \simeq {\rm H}^i _ {\cris}(X_n /S_n)$ inside $\textnormal{Mod}^{\varphi, i}_{S, \tor}$.
\end{theorem}
\begin{proof} Write $\mathfrak{M}^j_n: = {\rm H}^j_\Prism ( X_n/ \mathfrak{S}_n)$. Suppose that it has no $u$-torsion for $j = i,i +1$.
So $\mathfrak{M} ^i _n$ is an \'etale Kisin module of height $i$ by Proposition \ref{prop-height}.
By the discussion of \S \ref{subsec-Breuilmodules}, we know $\mathcal{M}^i_n : = \underline \mathcal{M} (\mathfrak{M} ^i_n)$ is an object of $\text {Mod}^{\varphi, i}_{S, \tor}$.
By the derived mod $p ^n$ version of \Cref{global comparison}, we have $S \otimes_{\varphi, \mathfrak{S}}^{\mathbb L} R\Gamma_\Prism (X_n/ \mathfrak{S}_n )\simeq R\Gamma _{\cris} (X_n / S_n)$. So Lemma \ref{lem-comparison} yields
\begin{equation}\label{eqn-tensor-tor}
\xymatrix{0 \ar[r] & S \otimes_{\varphi, \mathfrak{S} }{\rm H} ^i_\Prism ( X_n /\mathfrak{S}_n) \ar[r] & {\rm H} ^i_ {\cris} (X_n / S_n) \ar[r] & \Tor _1^\mathfrak{S} ({\rm H} ^{i +1}_\Prism ( X_n /\mathfrak{S}_n), \varphi_* S_n) \ar[r] & 0. }
\end{equation}
Our assumption that $ \mathfrak{M}^{i+1}_n $ has no $u$-torsion gives an isomorphism $\iota: S \otimes_{\varphi, \mathfrak{S}}{\rm H}^i_\Prism(X_n/ \mathfrak{S}_n) \simeq {\rm H} ^i_ {\cris} (X_n / S_n) $.
Now we claim that $\iota$ induces a natural map $\iota^i: \Fil ^i \underline \mathcal{M} (\mathfrak{M}^i _n) \to {\rm H} ^i_{\cris} (X_n /S_n, \mathcal I_{\cris}^{[i]})$
and both the source and target are natural submodules inside ${\rm H} ^i_ {\cris} (X_n / S_n)$.
In particular, $\iota^i$ is an injection.
To see this, we note that $\iota$ is induced by the natural map $ \varphi^*\mathfrak{M}^i_n \to \mathcal{M}^i_n$, which we still denote by $\iota$.
By Theorem \ref{lqsyn H and N filtration}, we have the following commutative diagram
\[ \xymatrix{ {\rm H}^{i-1}_{\qsyn} (X_n, \Prism^{(1)}_{-/ \mathfrak{S}}/ \Fil^i_{\rm N} \Prism^{(1)}_ {-/\mathfrak{S}} )\ar[r]^-{\alpha}\ar[d]^\wr & {\rm H} ^i_{\qsyn} (X_n, \Fil^i_{\rm N} \Prism ^{(1)}_{-/ \mathfrak{S}}) \ar[r]^-{\beta}\ar[d]& {\rm H} ^i _{\qsyn}(X_n, \Prism ^{(1)}_{-/\mathfrak{S}} )\ar[d] ^\iota \ar[r]& \cdots \\ {\rm H}^{i-1}_{\qsyn} (X_n , \dR^\wedge_{-/\mathfrak{S}}/ \Fil^i_{\rm H} \dR ^\wedge _{-/ \mathfrak{S}} )\ar[r]^-{\alpha'} & {\rm H} ^i_{\qsyn} (X_n, \Fil^i_{\rm H} \dR^\wedge _{-/ \mathfrak{S}})\ar[r] ^-{\beta'} & {\rm H} ^i_{\qsyn} (X_n, \dR^\wedge_{-/\mathfrak{S}} )\ar[r] &\cdots }\]
with both rows being exact.
By Theorem \ref{lqsyn H and N filtration} (4), the left column is an isomorphism. As $\mathfrak{M} ^i_n$ is assumed to have no $u$-torsion,
\Cref{cor-BK-Nygaard} shows that $\beta$ is an injection. Thus $\alpha$ and hence $\alpha'$ are zero maps. So $\beta'$ is an injection. Therefore, Theorem \ref{Illusie--Bhatt} gives the following commutative diagram
\[ \xymatrix{\Fil^i_{\BK}\varphi ^* \mathfrak{M}^i_n \ar@{^{(}->}[r]\ar[d] & \varphi^* \mathfrak{M}^i_n \ar[d] ^\iota\\ {\rm H}^i_{\cris} (X_n /S_n, \mathcal I ^{[i]}_{\cris}) \ar@{^{(}->}[r]& {\rm H} ^i_{\cris} (X_n /S_n). }\]
Since $\iota: \underline{\mathcal{M}} (\mathfrak{M}^i_n) \xrightarrow{\simeq} \mathcal{M} ^i_n = {\rm H} ^i_{\cris} (X_n /S_n) $ is an isomorphism and
$\Fil^i \underline{\mathcal{M}} (\mathfrak{M} ^i _n)$ is the $S$-submodule of $\mathcal{M} ^i_n$ generated by the image of $\Fil ^i \varphi^* \mathfrak{M} ^i _n$ and $\Fil^i S \cdot \mathcal{M} ^i _n$,
we see $\Fil^i \underline{\mathcal{M}} (\mathfrak{M} ^i _n) \subset {\rm H}^i_{\cris} (X_n /S_n, \mathcal I ^{[i]}_{\cris})$ via $\iota$. This shows that $\iota$ induces an injection
$\iota^i : \Fil^i \underline{\mathcal{M}} (\mathfrak{M} ^i _n) \hookrightarrow {\rm H}^i_{\cris} (X_n /S_n, \mathcal I ^{[i]}_{\cris})$.
Next we claim that $\iota^i$ is an isomorphism.
After faithfully flat base changing along $S_n \to A_{\cris, n}: = A_{\cris}/ p ^n$, we are now working with $\mathcal{X} \coloneqq X_{\O_{\mathbf{C}}}$.
Now we need some facts about the sheaf $\ensuremath{\mathbb{Z}}_p (h)$ on $\mathcal{X}_{\qsyn}$ defined in \cite[\S 7.4]{BMS2}.
First according to \cite[Theorem 10.1]{BMS2}, we have $\ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}} (h) \simeq \tau^{\leq h } {\rm R}\psi_* (\ensuremath{\mathbb{Z}}/ p^n \ensuremath{\mathbb{Z}} (h)) $ where
$\psi : (\mathcal{X}_{\mathbf{C}})_{\et } \to \mathcal{X}_{\et}$ is the natural map of \'etale sites.
By \cite[Theorem F]{AMMN20}, when $h \leq p -2$, we have
\[
\ensuremath{\mathbb{Z}}_p (h) \simeq \fib (\varphi_h -1: \Fil ^h_{\rm H} \dR^\wedge_{-/\ensuremath{\mathbb{Z}}_p} \to \dR^\wedge_{-/\ensuremath{\mathbb{Z}}_p}).
\]
Now \Cref{prop-connection for perfectoid} implies
\[
\ensuremath{\mathbb{Z}}_p (h) \simeq \fib (\varphi_h -1: \Fil ^h_{\rm H} \dR^\wedge_{-/A_{\inf}} \to \dR^\wedge_{-/A_{\inf}}).
\]
Since ${\rm fib}$ commutes with derived mod $p^n$, we may apply $\otimes^{{\mathbb L}}_{\ensuremath{\mathbb{Z}}} \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}$ to this equation.
Finally by Theorem \ref{Illusie--Bhatt}, we get the following exact sequence for $i \leq h \leq p-2$:
\begin{equation}\label{eqn-etale-Fil-2}
\cdots \to {\rm H} ^{i -1} _{\cris} (\mathcal{X}_n/ A_{\cris, n}) \to {\rm H}^i_ {\et} (\mathcal{X}_{\mathbf{C}} , \ensuremath{\mathbb{Z}}/ p^n \ensuremath{\mathbb{Z}} (h)) \to {\rm H} ^ i _{\cris} (\mathcal{X}_n/ A_{\cris, n}, \mathcal I_{\cris} ^{[h]}) \overset{\varphi_h -1}{\longrightarrow} {\rm H} ^i _{\cris} (\mathcal{X}_n / A_{\cris, n }).
\end{equation}
By Equation \eqref{eqn-etale-Fil-2} and Proposition \ref{prop-Galois-compatible}, we obtain the following commutative diagram:
$$\xymatrix{ 0 \ar[r] & T_S (\underline{\mathcal{M}}(\mathfrak{M}^i_n)) \ar[d]^\alpha \ar[r] & A_{\cris}\otimes_S \Fil^i \underline \mathcal{M} (\mathfrak{M}^i_n) \ar[d]^-{1 \otimes \iota^i} \ar[r] & A_{\cris} \otimes_S \underline{\mathcal{M}} (\mathfrak{M}^i_n ) \ar[d]^\wr \ar[r] & 0 \\
& {\rm H}^i_{\et} (\mathcal{X}_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})(i) \ar[r] ^- s & {\rm H}^i _{\cris} (\mathcal{X}_n/ A_{\cris, n}, \mathcal I_{\cris}^{[i]}) \ar[r] & {\rm H}^i _{\cris} (\mathcal{X}_n/ A_{\cris, n}) & }$$
with both rows being exact.
Since $1 \otimes \iota^i$ is an injection, we see that the map $\alpha$ is also injective.
Then $\alpha$ must be an isomorphism because $T_S (\underline \mathcal{M} (\mathfrak{M} ^i_n)) \simeq T_\mathfrak{S} (\mathfrak{M}^i_n) (i) \simeq {\rm H} ^i _{\et} (\mathcal X_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})(i)$
due to Proposition \ref{prop-Galois-compatible} and \cite[Theorem 1.8.(4)]{BS19}.
Therefore $s$ is also injective. Now by the snake lemma, we see that $\coker (1 \otimes \iota^i)= 0$ as required.
Conversely, assume that $\mathcal{M} ^i_n : = {\rm H} ^i _{\cris} ( X_n / S_n )$ is an object in $\textnormal{Mod}^{\varphi, i}_{S, \tor}$ with $\Fil^i \mathcal{M}^i _n = {\rm H} ^i _{\cris} (X_n /S_n , \mathcal I_{\cris}^{[i]})$.
As before, we consider the base change $\mathcal{X} \coloneqq X_{\O_{\mathbf{C}}}$ and we still have a commutative diagram
$$\xymatrix{ 0 \ar[r] & T_S ({\mathcal{M}^i_n}) \ar[d]^\alpha \ar[r] & A_{\cris}\otimes_S \Fil^i \mathcal{M}^i _n \ar[d]^\wr \ar[r] & A_{\cris} \otimes_S {\mathcal{M}^i _n } \ar[d]^\wr \ar[r] & 0 \\ & {\rm H}^i_{\et} (\mathcal{X}_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p^n \ensuremath{\mathbb{Z}})(i) \ar[r] ^-s & {\rm H}^i _{\cris} (\mathcal{X}_n / A_{\cris, n}, \mathcal I_{\cris}^{[i]}) \ar[r] & {\rm H}^i _{\cris} (\mathcal{X}_n / A_{\cris, n}). & }$$
The difference here is that the middle column is now an isomorphism,
whereas the first column $\alpha$ is not known to be an isomorphism.
First it is easy to see that $\alpha$ is an injection by chasing the diagram.
Now by Corollary \ref{cor-length-compare}, we have an inequality
\[
\Len_ {W(k)} (\mathcal{M} ^ i_n/I^+S) =\Len _{\ensuremath{\mathbb{Z}}} (T_S (\mathcal{M}^i _n))\leq \Len _{\ensuremath{\mathbb{Z}}} ({\rm H} ^i _{\et} (\mathcal{X}_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})).
\]
On the other hand, by the proof of Lemma \ref{lem-length of both fibers} and Lemma \ref{lemma-mod I+S}, we see that
\[
\Len _{\ensuremath{\mathbb{Z}}} ({\rm H} ^i _{\et} (\mathcal{X}_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})) \leq \Len _{W(k)} (\mathfrak{M} ^i_n / u\mathfrak{M}^i_n )\leq \Len_ {W(k)} (\mathcal{M} ^ i_n/I^+S).
\]
Combining the above two inequalities, we see
\[
\Len _{\ensuremath{\mathbb{Z}}} ({\rm H} ^i _{\et} (\mathcal{X}_{\mathbf{C}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})) = \Len _{W(k)} (\mathfrak{M} ^i_n / u\mathfrak{M}^i_n ) = \Len_ {W(k)} (\mathcal{M} ^ i_n/I^+S).
\]
Now the proof of Lemma \ref{lem-length of both fibers} implies that $\mathfrak{M} ^i _n$ has no $u$-torsion.
By the length equality, the injection
$\mathfrak{M} ^i_n/ u \mathfrak{M} ^i_n \hookrightarrow \mathcal{M}^i _n/ I ^+ S$ in \Cref{lemma-mod I+S} is in fact an isomorphism,
and hence $\Tor_1^{\mathfrak{S}} (\mathfrak{M}^{i+1}_n, \varphi_* S)/I^+S = 0$.
It is easy to see that $\Tor_1^{\mathfrak{S}} (\mathfrak{M}^{i+1}_n, \varphi_* S)$ is a finitely generated $S$-module, so applying Nakayama's lemma
yields
$\Tor_1^{\mathfrak{S}} (\mathfrak{M}^{i+1}_n, \varphi_* S) = 0$. Therefore $\mathfrak{M} ^{i+1} _n$ has no $u$-torsion by the following claim.
Claim: If $\mathfrak{M}$ is a $p^n$-torsion $\mathfrak{S}$-module and $\Tor_1^{\mathfrak{S}} (\mathfrak{M} ,\varphi_* S)= 0$ then $\mathfrak{M}$ has no $u$-torsion.
To prove this, we first note that
$\Tor_1^{\mathfrak{S}}(-, \varphi_* S)$ is a left exact functor by \Cref{lem-comparison}.
Secondly note that $\mathfrak{M}$ has no $u$-torsion if and only if it has no $(u,p)$-torsion.
Let $\mathfrak{M}' \subset \mathfrak{M}$ be the submodule of $(u,p)$-torsion elements in $\mathfrak{M}$.
The above discussion implies that $\Tor_1^{\mathfrak{S}} (\mathfrak{M}', \varphi_* S) = 0$.
Now by definition, we have $\mathfrak{M}' \cong \oplus_{\Lambda} k$ as an $\mathfrak{S}$-module, where $\Lambda$ is an indexing set.
One computes directly that
\[
\Tor_1^{\mathfrak{S}} (\mathfrak{M}', \varphi_* S) = \oplus_{\Lambda} \Tor_1^{\mathfrak{S}}(\mathfrak{S}/(p,u), \varphi_* S) =
\oplus_{\Lambda} \Tor_1^{\mathfrak{S}}(\mathfrak{S}/(p,u^p), S) = \oplus_{\Lambda} \ker(S/p \xrightarrow{\cdot u^p} S/p).
\]
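For the reader's convenience, the two-step d\'evissage behind the displayed computation can be spelled out as follows (using that $p$ is a nonzerodivisor on $S$):

```latex
% Step 1: from 0 -> \mathfrak{S} --p--> \mathfrak{S} -> \mathfrak{S}/p -> 0,
% we get \Tor_1^{\mathfrak{S}}(\mathfrak{S}/p, S) \cong S[p] = 0.
% Step 2: applying -\otimes_{\mathfrak{S}} S to
% 0 -> \mathfrak{S}/p --u^p--> \mathfrak{S}/p -> \mathfrak{S}/(p, u^p) -> 0,
% the long exact sequence of Tor reads
\[
0 = \Tor_1^{\mathfrak{S}}(\mathfrak{S}/p, S) \to \Tor_1^{\mathfrak{S}}(\mathfrak{S}/(p, u^p), S)
\to S/p \xrightarrow{\cdot u^p} S/p,
\]
% whence
\[
\Tor_1^{\mathfrak{S}}(\mathfrak{S}/(p, u^p), S) \cong \ker \left( S/p \xrightarrow{\cdot u^p} S/p \right).
\]
```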
Since $\ker(S/p \xrightarrow{\cdot u^p} S/p)$ is nonzero ($u^{pe} = 0$ in $S/p$), the above computation implies
$\Lambda = \emptyset$, as claimed.
\end{proof}
\begin{corollary} \label{cor-small-Breuil}If $ei < p -1$ then ${\rm H} ^{j}_\Prism (X_n / \mathfrak{S}_n)$ has no $u$-torsion for $j =i , i+1$,
and ${\rm H}^i_{\cris} (X_n / S_n)$ is a Breuil module.
\end{corollary}
\begin{proof} By Lemma \ref{lem-control-torsion} and Proposition \ref{prop-height},
we know that ${\rm H}^{i}_\Prism (X_n / \mathfrak{S}_n)$ has no $u$-torsion.
To show that ${\rm H} ^{i +1}_\Prism (X_n / \mathfrak{S}_n)$ has no $u$-torsion, we first consider the case that $n =1$.
The main theorem of \cite{CarusoInvent} shows that ${\rm H}^i_{\cris} (X_1/S_1)$ is a Breuil module when $ei < p -1$.
Then \Cref{thm-main-1} shows that ${\rm H} ^{i+1}_\Prism (X_1/ \mathfrak{S}_1)$ has no $u$-torsion.
Let us now prove by induction on $n$ that $\mathfrak{M}^{i +1}_n: = {\rm H}^{i +1}_{\Prism} (X_n /\mathfrak{S}_n ) $ has no $u$-torsion.
We use the long exact sequence relating these modules:
\[ \cdots \to \mathfrak{M}^{i}_{n -1}\overset f \longrightarrow \mathfrak{M} ^{i+1} _1 \to \mathfrak{M}^{i+1}_n \to \mathfrak{M}^{i+1}_{n -1}\to \cdots .\]
By induction, we may assume that $\mathfrak{M}^{i+1}_{n -1}$ has no $u$-torsion. It suffices to prove that
$\mathfrak{M} ^{i+1}_1/ f (\mathfrak{M}^{i}_{n -1})$ has no $u$-torsion.
To that end, write $\mathfrak N \coloneqq f(\mathfrak{M} ^{i }_{n-1})$ which has height $i$,
$\mathfrak{M} \coloneqq \mathfrak{M} ^{i+1}_1$ which has height $i+1$ and
$\mathfrak L \coloneqq \mathfrak{M} ^{i+1}_1 / \mathfrak N$.
By construction we have the following exact sequence
\[
0 \to \mathfrak N \overset f \longrightarrow \mathfrak{M} \overset{g}{\longrightarrow} \mathfrak{L} \to 0.
\]
Let $\mathfrak{M}' = g ^{-1} (\mathfrak{L}[u ^\infty])$. Then we obtain two exact sequences
\[
0 \to \mathfrak{N } \to \mathfrak{M} ' \to \mathfrak{L}[u ^\infty] \to 0 \text{ ~and~ }
0 \to \mathfrak{M}' \to \mathfrak{M} \to \mathfrak L / \mathfrak{L}[u ^\infty] \to 0 .
\]
All terms of the second sequence are \'etale Kisin modules. Since $\mathfrak{M}$ has height $i+1$,
we conclude that both $\mathfrak{M} '$ and $\mathfrak L / \mathfrak{L}[u ^\infty] $ have height $i+1$.
Both $\mathfrak N$ and $\mathfrak{M}'$ are \'etale and hence finite free over ${k[\![u]\!]}$.
This allows us to choose a basis $e_1, \dots , e_d$ of $\mathfrak N$ and a basis $ e'_1 , \dots , e'_d$ of $\mathfrak{M}'$ so that $(e_1 , \dots, e_d ) = (e'_1, \dots , e'_d)\Lambda$, where $\Lambda= \mathrm{diag}(u^{a_1}, \dots, u^{a_d})$ is the diagonal matrix with
$u^{a_j}$ on the main diagonal, ordered so that $a_1 \leq a_2 \leq \dots \leq a_d$.
Let $A$ and $A'$ be the matrices of Frobenius for the corresponding bases. We easily see the following relation
\[
\Lambda A = A' \varphi (\Lambda).
\]
Hence the last column of $A'\varphi(\Lambda)$ is divisible by $u ^{pa_d}$.
Consequently, the last column of $A$ is divisible by $u^{(p-1) a_d}$.
Since $\mathfrak N$ has height $i$, there exists a matrix $B$ with entries in ${k[\![u]\!]}$ so that $AB = BA = u ^{ei} I_d$.
In particular, the $(d,d)$-entry of $BA$ equals $u^{ei}$ and is divisible by $u^{(p-1)a_d}$, so $(p-1)a_d \leq ei < p-1$,
which forces $a_d =0$. Thus $\Lambda = I_d$, and hence a posteriori $\mathfrak L$ has no $u$-torsion, as desired.
\end{proof}
\begin{remark}
Let $T$ be the largest integer satisfying $T \cdot e < p-1$, and let $n \in \mathbb{N}$.
It is a result of Min \cite[Lemma 5.1]{Min20} that ${\rm H}^i_{\Prism}(X/\mathfrak{S})$ has no $u$-torsion when
$0 \leq i \leq T+1$.
By a similar argument, one can also show that ${\rm H}^i _\Prism (X_n/\mathfrak{S}_n)$ has no $u$-torsion for $0 \leq i \leq T$.
The slight improvement along this direction in \Cref{cor-small-Breuil} is the statement that ${\rm H}^{T+1} _\Prism (X_n/\mathfrak{S}_n)$
is also $u$-torsion free, which implies Min's result.
As far as we can tell, Min's strategy does not give $u$-torsion freeness of ${\rm H}^{T+1} _\Prism (X_n/\mathfrak{S}_n)$.
\end{remark}
\begin{proposition}\label{prop-connection to etale via TS}
Let $i \leq p-2$ be an integer.
Suppose that ${\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}) \to {\rm H}^i_{\cris} (X_n /S_n) \eqqcolon \mathcal{M}^i_n$
is injective, and denote its image as $\Fil^i\mathcal{M} ^i_n$.
Assume furthermore that $\mathcal{M}^i_n$ together with
\[
(\Fil^i\mathcal{M} ^i_n = {\rm H} ^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris}), \varphi_i, \nabla)
\]
is an object of $\textnormal{Mod}^{\varphi,i, \nabla}_{S, \tor}$.
Then $T_S (\mathcal{M} ^i_n)\simeq {\rm H}^i_{\et} (X_{\bar{\eta}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})(i)$ as $G_K$-representations.
\end{proposition}
\begin{proof} Theorem \ref{thm-main-1} together with Proposition \ref{prop-Galois-compatible} have already shown the following isomorphisms
\[ T_S ({\rm H}^i_{\cris} (X_n /S_n)) \overset {\iota_1}{\simeq} T_\mathfrak{S} ({\rm H}^i _{\Prism} (X_n / \mathfrak{S}_n)) (i) \overset {\iota_2}{\simeq}
{\rm H}^i_{\et} (X_{\bar{\eta}}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) (i).\]
The main point is to check that these two isomorphisms $\iota_1 , \iota_2 $ are compatible with $G_K$-actions.
Let $\mathcal{X} \coloneqq X_{\mathcal{O}_\mathbf{C}}$.
First we have $A_{ \inf} \otimes_{\mathfrak{S}} \mathfrak{M} ^i_n \simeq {\rm H}^i _{\Prism} (\mathcal{X}_n/ A_{\inf, n})$, which admits a natural $G_K$-action.
Since $A_{\inf}$ is a perfect prism, \cite[Theorem 1.8.(4)]{BS19} proves that $T_\mathfrak{S} (\mathfrak{M}^i _n)= ({\rm H}^i_\Prism (\mathcal{X}_n/ A_{\inf, n}))^{\varphi=1 } \simeq {\rm H}^i_{\et} (X_{\bar{\eta}} , \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}}) $ is compatible with the $G_K$-actions, as explained in Remark \ref{rem-G-compatible}.
This shows that $\iota_2$ is compatible with $G_K$-actions.
Now Theorem \ref{global comparison} shows that the comparison isomorphism
\[
\bar \iota: {\rm H}^i_{\Prism} (\mathcal{X}_n/ A_{\inf, n} ) \otimes_{A_{\inf},\varphi } A_{\cris}\simeq {\rm H}^i_{\cris} (\mathcal{X}_n / A_{\cris, n})
\]
is functorial.
So $\bar \iota$ is compatible with the natural $G_K$-actions on both sides.
Also $\bar \iota$ is compatible with the isomorphism
$\iota: \underline\mathcal{M} (\mathfrak{M}^i_n)\simeq {\rm H}^i_{\cris} (X_n/ S_n)$.
Applying Remark \ref{rem-compatible} then shows that
\[
\iota_1: T_S ({\rm H}^i_{\cris} (X_n /S_n)) {\simeq} T_\mathfrak{S} ({\rm H}^i _{\Prism} (X_n / \mathfrak{S}_n)) (i)
\]
is compatible with $G_K$-actions \emph{if} we define the $G_K$-action on ${\rm H} ^i _ {\cris} (X_n / S_n) \otimes_S A_{\cris}$ via the identification
${\rm H} ^i _ {\cris} (X_n / S_n) \otimes_S A_{\cris}= {\rm H} ^i_{\cris} (\mathcal{X}_n / A_{\cris, n})$.
Recall that the $G_K$-action on ${\rm H} ^i _ {\cris} (X_n / S_n) \otimes_S A_{\cris} $ is defined via Formula \eqref{eqn-action},
and we have shown in \S \ref{subsection-GK-action} that these two $G_K$-actions are the same.
This proves that $\iota_1$ is also compatible with $G_K$-actions.
\end{proof}
At the end of this subsection, we explain how our results are related to Fontaine--Messing theory in \cite{FontaineMessing}
(see also \cite{Katovanishingcycles})
for a proper smooth \emph{formal} scheme $X$ over $W(k)$.
For any $n \geq 1$, the scheme $X_n$ is smooth and proper over $\Spec(W_n (k))$.
So when $0 \leq j \leq i \leq p -1$, the triple $ M^i: = ({\rm H}^i_{\cris}(X_n / W_n (k)), {\rm H} ^i _{\cris}(X_n / W_n (k) , \mathcal I ^{[j]}_{\cris} ), \varphi_j)$ is known to be a Fontaine--Laffaille datum.
Now let $i \leq p-2$; one wants to show that $T_{\cris} (M^i)\simeq {\rm H} ^i _{\et} (X_{\bar \eta}, \ensuremath{\mathbb{Z}}/ p ^n \ensuremath{\mathbb{Z}})(i)$.
We recall the construction of $T_{\cris} (M^i)$ in the following: write $\Fil^j M ^i : = {\rm H}_{\cris}^i (X_n / W_n (k), \mathcal I ^{[j]}_{\cris})$ and let \[\Fil ^i (A_{\cris}\otimes_{W(k)} M ^i) = \sum^i_{j = 0} \Fil^{j} A_{\cris} \otimes_{W(k)}\Fil^{i -j} M ^i \subset A_{\cris} \otimes_{W(k)} M ^i. \]Then one can define
$\varphi_i : \Fil^i (A_{\cris} \otimes _{W(k)} M ^i) \to A_{\cris} \otimes_{W(k)} M ^i$ by $\varphi_i : = \sum\limits_{j = 0}^i \varphi_j|_{\Fil^j A_{\cris}} \otimes \varphi_{i -j}|_{\Fil^{i-j} M^i}$. Now
$T_{\cris} (M^i): = (\Fil ^i (A_{\cris} \otimes M^i))^{\varphi_i=1} $.
Let $\mathcal{M} ^i : = {\rm H}^i _{\cris} (X_n/S_n)$ which is an object of $\Mod^{\varphi, i, \nabla}_{S, \tor}$ by \Cref{cor-small-Breuil}.
It is clear that the base change map $\iota: S \otimes_ {W(k)} M ^i \to \mathcal{M} ^i$ is an isomorphism as $W(k) \to S$ is flat. Define
$\Fil^i (S \otimes_{W(k)}M^i) : = \sum\limits_{j = 0}^i \Fil^j S \otimes_{W(k)} \Fil^{i -j } M ^i\subset S \otimes_{W(k)} M ^i$. Since $\Fil^j M^i$ is a direct summand of $\Fil^{j -1} M ^i$,
the natural map $\Fil^i (S \otimes_{W(k)} M ^i) \to {\rm H}^i _{\cris} (X_n /S_n, \mathcal I^{[i]}_{\cris})$ induced by $\iota$ is injective. Therefore, we obtain the following commutative diagram
\[\xymatrix{ 0 \ar[r]& T_{\cris} (M^i)\ar[r]\ar@{^{(}->}[d]& \Fil^i (A_{\cris} \otimes_{W(k)} M^i) \ar[r]^-{\varphi_i-1}\ar@{^{(}->}[d] & A_{\cris} \otimes_{W(k)} M ^i \ar[d] ^ \wr \\ 0 \ar[r] & T_S (\mathcal{M}^i) \ar[r] & \Fil^i (A_{\cris} \otimes_{S} \mathcal{M}^i) \ar[r]^-{\varphi_i -1} & A_{\cris} \otimes_{S} \mathcal{M} ^i.}\]
It is well-known from Fontaine--Laffaille theory that $\Len_{\ensuremath{\mathbb{Z}}} T_{\cris} (M^i) = \Len_{W(k)} M^i$.
By Corollary \ref{cor-length-compare}, we know $\Len_{\ensuremath{\mathbb{Z}}} (T_S(\mathcal{M}^i)) = \Len_{W(k)} (\mathcal{M}^i/ I^+ S) = \Len_{W(k)} M^i$. Therefore, the left column must be bijective. By Proposition \ref{prop-connection to etale via TS}, it remains to check that the isomorphism $T_{\cris} (M^i) \to T_S(\mathcal{M}^i)$ is compatible with $G_K$-actions.
Since the $G_K$-action on $T_S(\mathcal{M}^i)$ is the $G_K$-action on $A_{\cris} \otimes_S \mathcal{M}^i$ via \eqref{eqn-action},
it suffices to show that $M^i \subset (\mathcal{M} ^i)^{\nabla = 0}$,
which follows from Proposition \ref{prop-connection for perfectoid} (1).
\begin{corollary}\label{cor-FM} Fontaine--Messing theory in \cite{FontaineMessing} and \cite{Katovanishingcycles} accommodates $X$ being a proper smooth formal scheme over $W(k)$.
\end{corollary}
\section{Some calculations on $T_S$}
\subsection{Identification of \eqref{eqn-action-N} and \eqref{eqn-action}}\label{subsec-two-equations}
In this subsection, we show that the actions \eqref{eqn-action-N} and \eqref{eqn-action} are the same.
\begin{lemma}If we write $N^n = \sum\limits_{i = 1}^n A_{i, n} u ^i \nabla^i$, then $A_{i, n+1} = A_{i-1, n} + i A_{i , n}$ and $A_{1, n}= A_ { n , n}=1$.
\end{lemma}
\begin{proof} An easy induction on $n$, using $N = u \nabla$.
\end{proof}
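For small $n$ the recurrence can be checked by hand, using only $\nabla(u) = 1$, i.e.\ the operator identity $\nabla u = 1 + u \nabla$:

```latex
\[
N^2 = u\nabla\, u\nabla = u(1 + u\nabla)\nabla = u\nabla + u^2\nabla^2,
\qquad
N^3 = N(u\nabla + u^2\nabla^2) = u\nabla + 3u^2\nabla^2 + u^3\nabla^3,
\]
% so A_{1,2} = A_{2,2} = 1 and A_{2,3} = A_{1,2} + 2 A_{2,2} = 3,
% in accordance with the recursive formula.
```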
Recall that $\gamma_i (t)$ denotes the $i$th divided power of $t$.
\begin{lemma} $\sum_{n \geq i} A_{i, n} \gamma_n (t) = \gamma_i (e^t -1). $
\end{lemma}
\begin{proof}
It suffices to show that the Taylor expansions in $t$ of both sides are equal. It is clear that the coefficients of $t^i$, the first nonzero term, coincide on both sides. If we write
$\gamma_i (e^t -1) =\sum_{n \geq i} B_{i, n} \gamma_n (t)$, then it suffices to show that $B_{i, n}$ satisfies the recursive formula $B_{i, n+ 1} = B _{i- 1, n} + i B_ {i , n }$ for $n \geq i$. Note that
\[\gamma_i (e^t -1)= \frac {1}{i !}\left (\sum_{m = 0}^i {i \choose m} (-1)^{i-m} e^{m t} \right ). \]
Therefore, $B_{i, n}= \frac{1}{i !} \left (\sum \limits_{m = 0}^i {i \choose m} (-1)^{i-m} m ^n \right )$. So checking that $ B _{i-1 , n }+ i B_{i, n} = B_{i , n+1}$ is equivalent to checking that
\[ \frac{1}{(i -1) !}\left ( \sum_{m = 0} ^{i -1} {i-1 \choose m} (-1) ^{i-1 -m} m ^n + \sum_{m = 0} ^{i } {i \choose m} (-1) ^{i -m} m ^n \right ) = \frac{1}{i!} \sum_{m = 0} ^{i } {i \choose m} (-1) ^{i-m} m ^{n +1}. \]
This follows from the identity $i \left ( {i \choose m} - {i-1 \choose m} \right ) = {i \choose m } m . $
\end{proof}
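As a quick sanity check of the lemma for $i = 2$: expanding $\gamma_2 (e^t -1) = \frac{1}{2}(e^{2t} - 2e^t + 1)$ in powers of $t$ gives

```latex
\[
B_{2, n} = \tfrac{1}{2}(2^n - 2) = 2^{n-1} - 1, \qquad n \geq 2,
\]
% so B_{2,2} = 1, B_{2,3} = 3, B_{2,4} = 7; since B_{1,n} = 1 for n >= 1,
% this matches the recursive formula, e.g. B_{2,4} = B_{1,3} + 2 B_{2,3} = 1 + 6.
```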
Now by the above lemmas, $$ \sum _{n =0}^\infty N ^n (x) \gamma_n (\log (\underline{\epsilon}(\sigma))) = \sum _{n =0}^\infty \nabla ^n (x) u ^n\gamma_n ( e^{\log(\underline{\epsilon} (\sigma))}-1 ) = \sum _{n =0}^\infty \nabla ^n (x) \gamma_n (u ( \underline{\epsilon} (\sigma)-1 )) = \sum _{n =0}^\infty \nabla ^n (x) \gamma_n (\sigma (u)-u ). $$
This proves that \eqref{eqn-action-N} and \eqref{eqn-action} are the same.
\subsection{$T_S$ and $T_{\st, \star}$}\label{subsec-two-T}
In this subsection, we explain that our functor $T_S$ and the functor $T_{\st , \star}$ used in \cite{CarusoInvent} are the same.
For this purpose, we review the period ring $\widehat A_{\st}$ from \cite{CarusoInvent}.
Let $\wh A_{\st}= A_{\cris} \langle X \rangle$ be the $p$-adic completion of the divided power polynomial algebra over $A_{\cris}$.
We extend the Frobenius $\varphi$ and the filtration of $A_{\cris}$ to $\widehat A_{\st}$ as follows: let $\varphi (X) = (1+ X)^p -1$ and
\[
\Fil ^i \wh A_{\st} \coloneqq \left \{\sum_{j = 0} ^\infty a_j \gamma_j (X) : a_j \in \Fil^{\max \{i -j, 0\} } A_{\cris}, \ a_j \to 0 \ \ p\text{-adically as } j \to \infty \right \}.
\]
It is easy to see that we can define $\varphi_r: \Fil ^r \wh A_{\st} \to \wh A_{\st} $ similarly to that of $A_{\cris}$. To extend the $G_K$-action to $\wh A_{\st}$, for any $g \in G_K$, recall that $\underline{\varepsilon} (g) = \frac{ g ([\underline{\pi}])}{[\underline{\pi}]} \in A_{\inf}$ defined before \eqref{eqn-action-N}. Set $g (X)= \underline{\varepsilon} (g) X+ \underline{\varepsilon} (g)-1$. Finally, define an $A_{\cris}$-linear monodromy operator by setting $N(X)= -(1 + X)$. We embed $S $ inside $\wh A_{\st }$ via $u \mapsto [\underline{\pi}] (1+ X)^{-1}$. At this point, we have two embeddings $S\to \wh A_{\st}$: $\iota_1: S\hookrightarrow A_{\cris} \subset \wh A_{\st } $ via $ u \mapsto [\underline{\pi}]\in A_{\inf}$, and $\iota_2: S \hookrightarrow \wh A_{\st} $ via $u \mapsto [\underline{\pi}] (1+X)^{-1}$. We will use both embeddings in the following. Notice that there is an $A_{\cris}$-linear projection $q: \wh A_{\st} \to A_{\cris}$ sending $\gamma_i (X) \mapsto 0 $. It is easy to check that $q$ is compatible with the filtration, Frobenius, $G_K$-actions, and \emph{both} embeddings $\iota_i : S \hookrightarrow \wh A_{\st}$. Set $\beta: = \log (1+X)\in \wh A_{\st}$.
\begin{remark} Breuil--Caruso's theory sets $N(1+X) = 1+X$. Our convention differs by a $-1$ sign, to fit our convention that $\nabla (u) = 1$. The two settings agree up to changing some signs.
\end{remark}
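Two identities for $\beta = \log (1+X)$ will be used below; with the conventions $\varphi (X) = (1+X)^p -1$ and $N(X) = -(1+X)$ just introduced, they follow formally:

```latex
\[
\varphi(\beta) = \log(1 + \varphi(X)) = \log((1+X)^p) = p\log(1+X) = p\beta,
\qquad
N(\beta) = \frac{N(X)}{1+X} = -1.
\]
% Note also that \beta lies in \Fil^1 of \wh A_{\st}, as X does.
```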
Given a Breuil module $\mathcal{M} \in \Mod _{S , \tor}^{\varphi, N , h}$, we extend the filtration, $\varphi_h$, monodromy, and $G_K$-action to $\wh A_{\st} \otimes_{\iota_2, S } \mathcal{M}$ as follows:
\[ \Fil ^h (\wh A_{\st} \otimes_{\iota_2, S } \mathcal{M}) = \wh A_{\st} \otimes_{\iota_2, S } \Fil ^h \mathcal{M}+ \Fil ^h \wh A_{\st} \otimes _{\iota_2, S} \mathcal{M}. \]
For $a\otimes m \in \wh A_{\st} \otimes_{\iota_2, S} \Fil ^h \mathcal{M}$, set $\varphi _h (a\otimes m) = \varphi (a) \otimes \varphi _h (m )$, and for $a \otimes m \in \Fil ^h \wh A_{\st} \otimes _{\iota_2, S} \mathcal{M}$, set $\varphi _h (a\otimes m) = \varphi_h (a) \otimes \varphi_h (E^h m).$ It is easy to check that these definitions of $\varphi_h$ agree on the intersection, so that $\varphi_h$ extends to $\Fil^h(\wh A_{\st} \otimes_{\iota_2, S} \mathcal{M})$.
We extend the $G_K$-action from $\wh A_{\st}$ to $\wh A_{\st} \otimes_{\iota _2 , S} \mathcal{M} $ by acting trivially on $\mathcal{M}$, and set $N (a \otimes m) = N(a) \otimes m + a \otimes N(m)$ for all $a \in \wh A_{\st}, m \in \mathcal{M}$.
Now set
$$T _{\st, \star} (\mathcal{M}): = (\Fil ^h (\wh A_{\st} \otimes_{\iota_2, S }\mathcal{M}))^{\varphi_h =1 , N = 0}. $$
\begin{proposition} There is an isomorphism $T_ S (\mathcal{M})\simeq T_{\st , \star} (\mathcal{M})$ as $G_K$-representations.
\end{proposition}
\begin{proof}
For $m \in \mathcal{M} \subset \wh A_{\st} \otimes_{\iota_2, S} \mathcal{M}$, set
$m ^\nabla \coloneqq \sum\limits_{i = 0} ^\infty N^i (m) \gamma_i(\beta)$ and $\mathcal{M} ^{\nabla} = \{ m ^\nabla | m \in \mathcal{M} \} \subset \wh A_{\st} \otimes_{\iota_2, S} \mathcal{M}.$ To understand the map $\alpha : \mathcal{M} \to \mathcal{M}^\nabla$, consider the following diagram induced by $q: \wh A_{\st} \twoheadrightarrow A_{\cris}$:
\[\xymatrix{ & \mathcal{M} ^\nabla \ar[ldd]^{\alpha'} \ar@{^{(}->}[dr] ^j & \\ \mathcal{M} \ar@{=}[d] \ar@{^{(}->}[rr] \ar@{->>}[ur] ^\alpha & & \wh A_{\st} \otimes_{\iota_2, S} \mathcal{M} \ar@{->>}[d] ^{q} \\ \mathcal{M} \ar@{^{(}->}[rr] & & A_{\cris} \otimes_{ S} \mathcal{M}.}\]
Here $\alpha': \mathcal{M}^\nabla \to q(\mathcal{M})= \mathcal{M}$ is the map induced by $q$. By the definition of $\alpha$, it is easy to show that $\alpha $ and $\alpha '$ are bijective. Also, $\alpha $ is an isomorphism of $S$-modules in the sense that $\alpha (\iota _2 (s) m ) = \iota _1(s) \alpha (m) $ for $s\in S $ and $m \in \mathcal{M}$. Using that $N$ satisfies Griffiths transversality and diagram \eqref{eqn-N-diagram}, together with the facts that $\beta \in \Fil^1 \wh A_{\st}$ and $\varphi (\beta) = p \beta$, a similar argument to Lemma \ref{lem-action-from_N} (replacing $a$ with $\beta$) shows that for any $m \in \Fil ^h \mathcal{M}$ we have $m ^ \nabla\in \Fil ^h (\wh A_{\st} \otimes_{\iota_2 , S} \mathcal{M})$ and
$\varphi_h (m ^\nabla) = (\varphi_h (m))^\nabla$. In summary, $\alpha: \mathcal{M} \to \mathcal{M}^\nabla$ is an isomorphism in $\Mod_{S, \tor} ^{\varphi, h}$, and the injections $\mathcal{M} \overset \alpha \simeq \mathcal{M} ^\nabla \subset \wh A_{\st} \otimes_{\iota_2 , S} \mathcal{M}$ are compatible with filtrations and $\varphi_h$.
Now consider the natural map $\tilde j: A_{\cris} \otimes_S \mathcal{M} ^\nabla \to \wh A_{\st}\otimes_{\iota_2 , S} \mathcal{M}$ induced by the inclusion $j : \mathcal{M}^\nabla \subset \wh A_{\st} \otimes_{\iota_2 , S} \mathcal{M}$. Since $q \circ \tilde j $ is an isomorphism (as $A_{\cris}\otimes_S (q\circ \alpha)$ is an isomorphism), we conclude that
$ A_{\cris} \otimes_S \mathcal{M} ^\nabla $ is an $A_{\cris}$-submodule of $\wh A_{\st} \otimes_{\iota_2 , S} \mathcal{M}$, compatibly with the filtration and $\varphi _h$.
Using that $N(\beta) = -1$, we easily see that $\mathcal{M}^\nabla \subset (\wh A_{\st} \otimes_{\iota_2, S} \mathcal{M}) ^{N = 0}$. In particular, we have an injection $\tilde j: A_{\cris}\otimes_ S \mathcal{M} ^ \nabla \to (\wh A_{\st} \otimes_{\iota_2, S} \mathcal{M}) ^{N = 0}$ compatible with the filtration and $\varphi_h$.
Therefore $\tilde j$ induces an injection
\[ T_S(\mathcal{M}) = (\Fil ^h (A _{\cris} \otimes_S \mathcal{M}) )^{\varphi _h =1} \overset \alpha \simeq ( \Fil ^h (A_{\cris}\otimes_S \mathcal{M} ^\nabla) ) ^{\varphi _h =1} \subset (\Fil ^h (\wh A_{\st} \otimes_{\iota_2, S} \mathcal{M}) )^{\varphi _h =1 , N = 0}= T_{\st, \star} (\mathcal{M}).\]
To see that this injection is an isomorphism, we can d\'evissage to the case where $\mathcal{M}$ is killed by $p$, because both $T_S$ and $T_{\st , \star}$ are exact functors (\cite[Corollary 2.3.10]{CarusoInvent}). In this case, it is also well-known that $\dim_{{\mathbb F}_p } T_{\st, \star}(\mathcal{M})= \rank _{S_1} \mathcal{M} = \dim_{{\mathbb F}_p } T_S(\mathcal{M})$. This establishes the isomorphism
$T_S (\mathcal{M}) \simeq T_{\st , \star} (\mathcal{M})$. Finally, we check that this isomorphism is compatible with $G_K$-actions. Note that $T_S(\mathcal{M})$ carries the $G_K$-action via \eqref{eqn-action-N}, while $T_{\st , \star}(\mathcal{M})$ carries the $G_K$-action coming from that on $ \wh A_{\st} \otimes_{\iota_2 , S }\mathcal{M} $ with the trivial $G_K$-action on $\mathcal{M}$. We have to show that the $G_K$-action on $A_{\cris}\otimes_{S}\mathcal{M}^\nabla$, as a subspace of $\wh A_{\st} \otimes_{\iota_2 , S }\mathcal{M}$, is the same as the one defined in \eqref{eqn-action-N}. But this easily follows from the formula $m ^\nabla = \sum\limits_{i = 0} ^\infty N^i (m) \gamma_i(\beta)$ and $g (\beta) = \log (\underline \varepsilon(g)) + \beta$.
\end{proof}
\bibliographystyle{amsalpha}
\newenvironment{nomenclature}{%
\section*{NOMENCLATURE}
\list{}{\leftmargin 1in}%
}%
{\endlist\par\addvspace{12pt}}
\title{Modeling, Vibration Control, and Trajectory Tracking of a Kinematically Constrained Planar Hybrid Cable-Driven Parallel Robot}
\author{Ronghuai Qi\thanks{Corresponding author.}\\
\affiliation{
Department of Mechanical and \\Mechatronics Engineering,\\
University of Waterloo,\\
Waterloo, ON N2L 3G1, Canada\\
e-mail: ronghuai.qi@uwaterloo.ca
}
}
\author{Amir Khajepour\\
\affiliation{
Department of Mechanical and \\Mechatronics Engineering,\\
University of Waterloo,\\
Waterloo, ON N2L 3G1, Canada\\
e-mail: a.khajepour@uwaterloo.ca
}
}
\author{William W. Melek\\
\affiliation{
Department of Mechanical and \\Mechatronics Engineering,\\
University of Waterloo,\\
Waterloo, ON N2L 3G1, Canada\\
e-mail: william.melek@uwaterloo.ca
}
}
\begin{document}
\maketitle
\begin{abstract}
{\it This paper presents a kinematically constrained planar hybrid cable-driven parallel robot (HCDPR) for warehousing applications, as well as other potential applications such as rehabilitation. The proposed HCDPR can harness the strengths and benefits of serial and cable-driven parallel robots. Based on this robotic platform, the goal of this paper is to develop an integrated control system that reduces vibrations and improves the trajectory accuracy and performance of the HCDPR. This includes deriving the kinematic and dynamic equations, proposing solutions for redundancy resolution and stiffness optimization, and developing two motion and vibration control strategies (controllers I and II). Finally, different case studies are conducted to evaluate the control performance, and the results show that controller II achieves the goal better.}
\end{abstract}
\begin{nomenclature}
\entry{${m_j}$}{Mass of the $j$th $(j=1,2)$ link of the robot arm.}
\entry{${m_m}$}{Mass of the mobile platform.}
\entry{${I_m}$}{Moment of inertia of the mobile platform.}
\entry{${I_j}$}{Moment of inertia of the $j$th $(j=1,2)$ link of the robot arm.}
\entry{${l_j}$}{Length of the $j$th $(j=1,2)$ link of the robot arm.}
\entry{${l_{cj}}$}{Length between the joint $j$ $(j=1,2)$ and the center of mass of link $j$ of the robot arm.}
\entry{${{\bf{p}}_e}$}{Vector of positions and orientation of the end-effector.}
\entry{${\bf{q}}$}{Vector of generalized coordinates.}
\entry{${\bf{\dot q}}$}{First order time-derivative of ${\bf{q}}$.}
\entry{${\bf{\ddot q}}$}{Second order time-derivative of ${\bf{q}}$.}
\entry{${{\bf{p}}_m}$}{Vector of positions and orientation of the mobile platform.}
\entry{${\bf{\tau }}$}{Vector of generalized forces.}
\entry{${\bf{T}}$}{Vector of cable tensions.}
\entry{${{\bf{F}}_e}$}{External forces applied to the end-effector or the mobile platform.}
\entry{${{\bf{M}}_e}$}{External moment applied to the end-effector or the mobile platform.}
\entry{${g}$}{Gravitational acceleration.}
\entry{${{\bf{A}}}$}{Structure matrix.}
\entry{${{\bf{L}}_i}$}{The position vector from the $i$th cable anchor point on the robot static frame to the $i$th cable anchor point on the mobile platform.}
\entry{${{\bf{\hat L}}_i}$}{Unit position vector of the $i$th cable.}
\entry{${\bf{L}}$}{Cable length vector.}
\entry{${\bf{L_0}}$}{Vector of unstretched cable lengths.}
\entry{${{\bf{T}}_i}$}{Vector of the $i$th cable tension.}
\entry{${\bf{K}}$}{Stiffness matrix.}
\entry{${{\bf{K}}_T}$}{Stiffness matrix as a product of the cable tensions.}
\entry{${{\bf{K}}_k}$}{Stiffness matrix as a product of the cable stiffness.}
\entry{${{\bf{K}}_c}$}{Cable stiffness matrix.}
\entry{$E_i$}{Elastic modulus of the $i$th cable.}
\entry{$A_{ci}$}{Cross section of the $i$th cable.}
\entry{${{\bf{J}}_e}$}{Jacobian matrix.}
\end{nomenclature}
\section{Introduction}
Serial manipulators are one of the most common types of industrial robots; they consist of a base, a series of links connected by actuator-driven joints, and an end-effector. Usually, they have 6 DOFs and offer high positioning accuracy. They are widely used in industrial applications; however, they have some key limitations, such as high motion inertia and a limited workspace envelope \cite{Wei2015}. {For example, the KUKA KR 60 HA is a 6-DOF serial manipulator with a high payload capacity ($60\;\rm{kg}$) and a repeatability of $\pm 0.05\;{\rm{mm}}$, but its maximum reach is only $2033\;{\rm{mm}}$ \cite{A.Dudarev2016}}. CDPRs are another important type of industrial robot. Their configurations usually resemble parallel manipulators (e.g., the Stewart platform \cite{D.Stewart1965}). The NIST RoboCrane \cite{Albus1989,Albus1992} is a typical 6-DOF CDPR designed around the idea of the Stewart platform; its unique feature is the use of six cables instead of linear actuators. In these robots, rigid links are replaced with cables, which reduces the robot weight, since cables are nearly massless, and eliminates the use of revolute joints. These features allow the mobile platform to reach high accelerations in large workspaces. For instance, some existing CDPRs were designed and analyzed in~\cite{Hiller2005,S.H.Yeo2013,Khajepour2015,Mendez2014}. However, they are not without drawbacks, such as low accuracy and high vibrations, which limit their applications \cite{Oh2005}. To overcome the aforementioned shortcomings of serial and cable-driven parallel robots while aggregating their advantages, one approach is to combine the two types into a hybrid cable-driven parallel robot (HCDPR).
Some research and applications have been developed along these lines. Albus~\cite{J.S.Albus1989} developed a cable-driven manipulator in which a robot arm was fixed upside down to the bottom of the RoboCrane robot's platform~\cite{Albus1989,Albus1992} for lifting a load. Cable-driven camera systems are another type of cable-driven robot (cameras affixed to CDPRs) used for applications such as overhead filming~\cite{T.L.VINCENT2008}. Gouttefarde~\cite{Gouttefarde2016} developed a CDPR (called CoGiRo CSPR) with an onboard Yaskawa-Motoman SIA20 robot arm for contactless and interacting applications (e.g., spray painting and metal cutting), but this system still suffers from the low stiffness of the CDPR, which results in vibrations.
However, the literature shows that existing research and applications prefer to affix a robot arm upside down to the bottom of a CDPR's platform \cite{T.Arai1999,H.Osumi2000,M.Bamdad2015,M.Gouttefarde2017,J.S.Albus1989,J.S.Albus2003}, or mainly control the cable robot while treating the serial robot as a manipulation tool or an end-effector rather than as part of the whole system \cite{J.S.Albus1989,J.S.Albus2003}. When a serial robot is mounted on a mobile platform, the two constitute a new coupled system. Controlling only the mobile platform (i.e., treating the serial robot as a manipulation tool) or only the serial robot may not guarantee the position accuracy of the end-effector. For applications that use such a system, the main goal is to control the end-effector of the serial robot (e.g., its trajectories and vibrations) in order to effectively accomplish tasks such as pick-and-place. Another major challenge in the utilization of these systems is maintaining appropriate cable tensions and stiffness. {\color{black}The stiffness of CDPRs is an important issue because the driving cables are flexible, which reduces the overall stiffness of the robot (compared to rigid cables) and produces undesired vibrations. When a CDPR moves, the driving cables should maintain sufficient tension to reduce vibrations, i.e., to keep enough stiffness in the robot \cite{Behzadipour2006}. Regarding the stiffness problem, some research has been conducted: Behzadipour and Khajepour \cite{Behzadipour2006} proposed an equivalent four-spring model to express the stiffness matrices of a CDPR and verified it with a simulation example. Azadi et al.~\cite{Azadi2009} introduced variable stiffness elements using antagonistic forces.
Gosselin \cite{Gosselin1990} analyzed the stiffness mapping of parallel manipulators by considering the internal forces; conversely, Griffis and Duffy \cite{Griffis1993} modeled the global stiffness of a class of simple compliant couplings without considering the internal forces.} {For an HCDPR, the moving robot arm also generates reaction forces acting on the mobile platform, resulting in platform vibrations. Hence, it is challenging to minimize the vibrations and increase the position accuracy of the end-effector simultaneously.} To the best of our knowledge, few studies address the modeling and control problems of flexible HCDPRs; in particular, when redundancy and stiffness optimization are introduced, the control of trajectories and vibrations becomes even more challenging. Although the research in \cite{Gouttefarde2016} showed a CDPR carrying a robot arm for painting large surfaces, vibrations were evident and large in their demonstration.
To solve the aforementioned problems in HCDPRs, the goal of this paper is to develop an integrated control system for the HCDPR to reduce vibrations and improve accuracy and performance. To achieve this goal, the following tasks are pursued: 1) derive analytical kinematic and dynamic equations for the HCDPR; 2) propose solutions for redundancy resolution and optimization of stiffness; 3) develop motion and vibration control methods for the HCDPR; and 4) conduct simulations to validate the effectiveness of the control methods proposed in Step 3). Additionally, the main contributions are as follows:
\begin{enumerate}
\item A kinematically constrained planar HCDPR is proposed to harness the strengths and benefits of serial and cable-driven parallel robots. Detailed kinematic and dynamic equations are derived for this robot. An equivalent dynamic modeling (EDM) method is proposed to derive the dynamic equations of the HCDPR; among its advantages, it provides an effective solution for different configurations of HCDPRs.
\item Based on the configuration and models of the HCDPR, the redundancy resolution and stiffness maximization algorithms are proposed.
\item Control strategies are designed for the HCDPR system to reduce vibrations and trajectory tracking errors. Compared to the existing study in~\cite{R.Qi2019j3,R.Qi2019j4}, this paper emphasizes the reaction performance between the mobile platform and the robot arm as well as trajectory tracking of the end-effector.
\end{enumerate}
Furthermore, the e-commerce explosion in recent years~\cite{Hamedthesis2018} has stimulated the growth of automated warehousing solutions. By 2024, the global automated material handling equipment market is predicted to reach no less than US\$50.0 billion, with a CAGR of $8\%$~\cite{MarketEngine2018}. This increase in automated warehousing applications offers a unique opportunity for the development of cable-driven robots, and this paper provides a valid solution for them.
The rest of the paper is organized as follows: system modeling for the HCDPR is proposed in~\autoref{sec:J5_SystemModeling}. In~\autoref{sec:J5_ControlDesign}, the methods for redundancy resolution, optimization of stiffness, and controller design are proposed. Simulation results are evaluated in~\autoref{sec:J5_NumericalResults}. Finally, in~\autoref{sec:J5_Conclusions}, conclusions are summarized.
\section{Modeling of the Kinematically Constrained Planar HCDPR}\label{sec:J5_SystemModeling}
\subsection{HCDPR Configuration}\label{subsec:J5_HCDPRConfig}
CDPRs are very useful in industry (e.g., warehousing), but they have some limitations (e.g., accuracy and vibrations). Serial robots have higher accuracy but are limited in terms of their workspace envelope. In this paper, the CDPR designed and studied in~\cite{Khajepour2015,Mendez2014,Rushton2016,R.Qi2018j2} is used for the development of the HCDPR and its integrated controller, as well as for the evaluation and validation of the results. The proposed planar HCDPR consists of a 2-DOF robot arm (in the mechanical model shown in~\autoref{fig:J5_MechModel}), a 6-DOF rigid mobile platform, twelve cables, and four servo motors. The actuators drive the cables to move the mobile platform. The robot arm is fixed on the mobile platform and moves with it. The twelve cables form four sets: two sets of four cables on the top and two sets of two cables on the bottom. Each set of cables is controlled by one motor. In addition, the top actuators control the upper cable lengths and the bottom actuators control the lower cable tensions. The upper cables also restrict the orientation of the mobile platform, i.e., they impose the kinematic constraints. The HCDPR parameters are shown in~\autoref{fig:J5_PlanarModel} and~\autoref{table:J5_HCDPRParameters}. The eight top cables and four bottom cables are simplified into four cables and two cables, respectively. The inertial coordinate frame $O\left\{ {{x_0},{y_0},{z_0}} \right\}$ is located at the center of the static fixture.
\begin{figure}[!t]\centering
\includegraphics[width=85mm]{figures/J5_MechModel.pdf}
\caption{Mechanical model of the kinematically constrained planar HCDPR.}\label{fig:J5_MechModel}
\end{figure}
\subsection{HCDPR Kinematics}\label{subsec:J5_HCDPRKinematics}
The kinematics of the HCDPR includes forward kinematics and inverse kinematics. In this paper, analytical solutions of the kinematics will be derived for the planar HCDPR shown in~\autoref{fig:J5_PlanarModel}.
In \autoref{fig:J5_PlanarModel}, the planar HCDPR has 5 DOFs (the mobile platform has 3 DOFs and the robot arm has 2 DOFs), where point $P_m(x_m,z_m,\theta_m)$ is located at the center of mass of the mobile platform (indicating the degrees of freedom of the mobile platform); point $P_1(x_1,z_1)$ is located at the first joint of the attached robot arm; point $P_{c1}(x_{c1},z_{c1})$ represents the center of mass of robot link 1; point $P_2(x_2,z_2)$ is located at the second joint of the robot arm; point $P_{c2}(x_{c2},z_{c2})$ denotes the center of mass of robot link 2; and point $P_e(x_e,z_e,q_e)$ represents the positions and orientation of the robot end-effector. In addition, ${x_m}$ and ${z_m}$ denote the positions of the mobile platform (the center of mass) in the X-direction and Z-direction, respectively, with respect to the inertial coordinate frame $\{O\}$. {\color{black}${\theta_m}$ indicates the orientation of the mobile platform with respect to the $OY_0$-axis}, ${\color{black}\theta_{1}}$ and ${\color{black}\theta_{2}}$ are the relative angles of the robot arms as shown in \autoref{fig:J5_PlanarModel}. Other parameters of the HCDPR are also shown in~\autoref{fig:J5_PlanarModel} and~\autoref{table:J5_HCDPRParameters}.
\begin{figure}[!t]\centering
\includegraphics[width=86mm]{figures/J5_PlanarModel.pdf}
\caption{Configuration of the HCDPR. (a) planar HCDPR; (b) enlarged figure of the mobile platform with the robot arm.}\label{fig:J5_PlanarModel}
\end{figure}
The forward kinematics is derived from the given vector of generalized coordinates (joint variables) ${\bf{q}} = {[{x_m},{z_m},{\theta _m},{\color{black}\theta_{1}},{\color{black}\theta_{2}}]^T} \in {\mathbb{R}^5}$, finding the vector of positions and orientation ${{\bf{p}}_e} = {[{x_e},{z_e},{q_e}]^T} \in {\mathbb{R}^3}$ and the cable length vector ${\bf{L}} = {[{L_1},{L_2}, \cdots ,{L_6}]^T} \in {\mathbb{R}^6}$ as follows:
\begin{align}
{P_1}:&\left\{ \begin{array}{l}
{x_1} = {x_m} + {l_m}\cos \left( {{\bar \theta_m} } \right)\\
{z_1} = {z_m} + {l_m}\sin \left( {{\bar \theta_m} } \right)
\end{array} \right.\label{eq:J5_1}
\end{align}
\begin{align}
{P_{c1}}:&\left\{ \begin{array}{l}
{x_{c1}} = {x_1} + {l_{c1}}\cos ({\bar \theta_m} + {\theta_{1}})\\
{z_{c1}} = {z_1} + {l_{c1}}\sin ({\bar \theta_m} + {\theta_{1}})
\end{array} \right.\label{eq:J5_2}
\end{align}
\begin{align}
{P_2}:&\left\{ \begin{array}{l}
{x_2} = {x_1} + {l_1}\cos ({\bar \theta_m} + {\theta_{1}})\\
{z_2} = {z_1} + {l_1}\sin ({\bar \theta_m} + {\theta_{1}})
\end{array} \right.\label{eq:J5_3}
\end{align}
\begin{align}
{P_{c2}}:&\left\{ \begin{array}{l}
{x_{c2}} = {x_2} + {l_{c2}}\cos ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})\\
{z_{c2}} = {z_2} + {l_{c2}}\sin ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})
\end{array} \right.\label{eq:J5_4}
\end{align}
where ${\bar \theta_m} = {\theta_m} + \pi /2$, $(x_1,z_1)$, $(x_{c1},z_{c1})$, $(x_2,z_2)$, and $(x_{c2},z_{c2})$ represent the positions of joint 4, the center of mass of link 1 of the robot arm, joint 5, and the center of mass of link 2 of the robot arm, respectively.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Parameters of the HCDPR}
\centering
\label{table:J5_HCDPRParameters}
\centering
\begin{tabular}{c c c c}
\hline\hline \\[-3mm]
Symbol & Values & Symbol & Values \\[1.6ex] \hline
$l_{a}$ & $0.440$ \si{\metre} & $l_{bd}$ & $0.055$ \si{\metre} \\
$l_{b}$ & $0.268$ \si{\metre} & $l_{g}$ & $0.086$ \si{\metre} \\
$l_{c}$ & $0.105$ \si{\metre} & $l_{h}$ & $0.105$ \si{\metre} \\
$l_{d}$ & $0.412$ \si{\metre} & $l_{m}$ & $0.052$ \si{\metre} \\
$l_{e}$ & $3.000$ \si{\metre} & $l_{1}$ & $0.305$ \si{\metre} \\
$l_{f}$ & $1.000$ \si{\metre} & $l_{2}$ & $0.305$ \si{\metre} \\
$l_{c1}$ & $0.1525$ \si{\metre} & $l_{c2}$ & $0.1525$ \si{\metre} \\
$m_m$ & \SI{30}{\kilogram} & $I_m$ & $0.83$ \si{\kilogram\metre\squared}\\
$m_1$ & \SI{10}{\kilogram} & $I_1$ & $0.18$ \si{\kilogram\metre\squared}\\
$m_2$ & \SI{10}{\kilogram} & $I_2$ & $0.18$ \si{\kilogram\metre\squared}\\
$T_{min}$ & \SI{40}{\newton} & $T_{max}$ & \SI{2000}{\newton}\\
$K_{s}$ & \SI{1.1e4}{\newton} & $g$ & $9.810$ \si[per-mode=symbol]{\metre\per\second\squared}\\
\hline\hline
\end{tabular}
\end{table}
The end-effector positions and orientation $(x_e,z_e,q_e)$ are described as
\begin{align}
{P_e}:\left\{ \begin{array}{l}
{x_e} = {x_2} + {l_2}\cos ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})\\
{z_e} = {z_2} + {l_2}\sin ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})\\
{q_e} = {\bar \theta_m} + {\theta_{1}} + {\theta_{2}}
\end{array} \right..
\label{eq:J5_5}
\end{align}
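The forward-position equations above can be sketched numerically in Python. This is an illustrative sketch of ours, not the authors' code; the function name is ours, and the default link lengths are taken from the HCDPR parameter table.

```python
import math

def forward_positions(x_m, z_m, th_m, th_1, th_2,
                      l_m=0.052, l_1=0.305, l_2=0.305):
    """Joint positions and end-effector pose of the planar HCDPR arm."""
    tb = th_m + math.pi / 2                      # bar(theta)_m = theta_m + pi/2
    x1 = x_m + l_m * math.cos(tb)                # joint 1 position P_1
    z1 = z_m + l_m * math.sin(tb)
    x2 = x1 + l_1 * math.cos(tb + th_1)          # joint 2 position P_2
    z2 = z1 + l_1 * math.sin(tb + th_1)
    xe = x2 + l_2 * math.cos(tb + th_1 + th_2)   # end-effector pose P_e
    ze = z2 + l_2 * math.sin(tb + th_1 + th_2)
    qe = tb + th_1 + th_2
    return (x1, z1), (x2, z2), (xe, ze, qe)
```

With the platform at the origin and all angles zero, the arm points straight up and the end-effector sits at height $l_m + l_1 + l_2$.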
The velocities of the center of mass of link 1 and link 2 are calculated as
\begin{align}
&\left\{ \begin{array}{l}
{{\dot x}_{c1}} = {{\dot x}_m} - {l_{c1}}({{\dot \theta}_m} + {{\dot \theta}_{1}})\sin ({\bar \theta_m} + {\theta_{1}})\\
{\qquad\;}- {l_m}{{\dot \theta}_m}\sin \left( {{\bar \theta_m} } \right)\\
{{\dot z}_{c1}} = {{\dot z}_m} + {l_{c1}}({{\dot \theta}_m} + {{\dot \theta}_{1}})\cos ({\bar \theta_m} + {\theta_{1}})\\
{\qquad\;}+ {l_m}{{\dot \theta}_m}\cos \left( {{\bar \theta_m} } \right)\\
{v_{c1}} = {\left( {\dot x_{c1}^2 + \dot z_{c1}^2} \right)^{1/2}}
\end{array} \right. \label{eq:J5_6}\\
&\left\{ \begin{array}{l}
{{\dot x}_{c2}} = {{\dot x}_m} - {l_1}({{\dot \theta}_m} + {{\dot \theta}_{1}})\sin ({\bar \theta_m} + {\theta_{1}})\\
{\qquad\;}- {l_m}{{\dot \theta}_m}\sin({\bar \theta_m} ) - {l_{c2}}({{\dot \theta}_m} + {{\dot \theta}_{1}} + {{\dot \theta}_{2}})\\
{\qquad\;}\sin ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})\\
{{\dot z}_{c2}} = {{\dot z}_m} + {l_1}({{\dot \theta}_m} + {{\dot \theta}_{1}})\cos ({\bar \theta_m} + {\theta_{1}})\\
{\qquad\;}+ {l_m}{{\dot \theta}_m}\cos({\bar \theta_m} ) + {l_{c2}}({{\dot \theta}_m} + {{\dot \theta}_{1}} + {{\dot \theta}_{2}})\\
{\qquad\;}\cos ({\bar \theta_m} + {\theta_{1}} + {\theta_{2}})\\
{v_{c2}} = {\left( {\dot x_{c2}^2 + \dot z_{c2}^2} \right)^{1/2}}
\end{array} \right. \label{eq:J5_7}
\end{align}
where $({\dot x_{c1}},{\dot z_{c1}})$ and $({\dot x_{c2}},{\dot z_{c2}})$ represent the velocities of the centers of mass of link 1 and link 2 in the X-direction and Z-direction, respectively. ${v_{c1}}$ and ${v_{c2}}$ denote the corresponding speed magnitudes.
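The analytical velocity of the center of mass of link 1 can be verified against a finite-difference derivative of its position. A minimal sketch of ours (function names, state values, and defaults are illustrative; link parameters from the HCDPR parameter table):

```python
import math

def p_c1(x_m, z_m, th_m, th_1, l_m=0.052, l_c1=0.1525):
    """Position of the center of mass of link 1."""
    tb = th_m + math.pi / 2
    return (x_m + l_m * math.cos(tb) + l_c1 * math.cos(tb + th_1),
            z_m + l_m * math.sin(tb) + l_c1 * math.sin(tb + th_1))

def v_c1(th_m, th_1, dx_m, dz_m, dth_m, dth_1, l_m=0.052, l_c1=0.1525):
    """Analytical velocity of the center of mass of link 1."""
    tb = th_m + math.pi / 2
    dx = dx_m - l_c1 * (dth_m + dth_1) * math.sin(tb + th_1) - l_m * dth_m * math.sin(tb)
    dz = dz_m + l_c1 * (dth_m + dth_1) * math.cos(tb + th_1) + l_m * dth_m * math.cos(tb)
    return dx, dz
```

Perturbing the state along the joint rates and differencing the position reproduces the analytical velocity to first order in the step size.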
The Jacobian matrix ${{\bf{J}}_e}$ is calculated as
\begin{align}
{{\bf{J}}_e} = \frac{{d{{\bf{P}}_e}}}{{d{\bf{q}}}} = \left[\begin{matrix}
1&0&\begin{array}{l}
- {l_1}{\mkern 1mu} \sin \left( {{\bar \theta_m} + {\theta_{1}}} \right) - {l_m}{\mkern 1mu} \sin \left( {{\bar \theta_m} } \right)\\
- {l_2}{\mkern 1mu} \sin \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)
\end{array}\\
0&1&\begin{array}{l}
{l_1}{\mkern 1mu} \cos \left( {{\bar \theta_m} + {\theta_{1}}} \right) + {l_m}{\mkern 1mu} \cos \left( {{\bar \theta_m} } \right)\\
+ {l_2}{\mkern 1mu} \cos \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)
\end{array}\\
0&0&1
\end{matrix}\right.\nonumber\\
\left.\begin{matrix}
\begin{array}{l}
- {l_1}{\mkern 1mu} \sin \left( {{\bar \theta_m} + {\theta_{1}}} \right)\\
- {l_2}{\mkern 1mu} \sin \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)
\end{array}&{ - {l_2}{\mkern 1mu} \sin \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)}\\
\begin{array}{l}
{l_1}{\mkern 1mu} \cos \left( {{\bar \theta_m} + {\theta_{1}}} \right)\\
+ {l_2}{\mkern 1mu} \cos \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)
\end{array}&{{l_2}{\mkern 1mu} \cos \left( {{\bar \theta_m} + {\theta_{1}} + {\theta_{2}}} \right)}\\
1&1
\end{matrix}\right].
\label{eq:J5_8}
\end{align}
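The analytical Jacobian can be cross-checked column by column against finite differences of the end-effector pose. A sketch of ours (names and defaults illustrative; link lengths from the HCDPR parameter table):

```python
import math

def end_effector(q, l_m=0.052, l_1=0.305, l_2=0.305):
    """p_e = (x_e, z_e, q_e) from the forward kinematics."""
    x_m, z_m, th_m, th_1, th_2 = q
    tb = th_m + math.pi / 2
    xe = x_m + l_m * math.cos(tb) + l_1 * math.cos(tb + th_1) + l_2 * math.cos(tb + th_1 + th_2)
    ze = z_m + l_m * math.sin(tb) + l_1 * math.sin(tb + th_1) + l_2 * math.sin(tb + th_1 + th_2)
    return [xe, ze, tb + th_1 + th_2]

def jacobian(q, l_m=0.052, l_1=0.305, l_2=0.305):
    """Analytical 3x5 Jacobian J_e = d p_e / d q."""
    _, _, th_m, th_1, th_2 = q
    tb = th_m + math.pi / 2
    s1, c1 = math.sin(tb + th_1), math.cos(tb + th_1)
    s12, c12 = math.sin(tb + th_1 + th_2), math.cos(tb + th_1 + th_2)
    sm, cm = math.sin(tb), math.cos(tb)
    return [
        [1, 0, -l_1 * s1 - l_m * sm - l_2 * s12, -l_1 * s1 - l_2 * s12, -l_2 * s12],
        [0, 1,  l_1 * c1 + l_m * cm + l_2 * c12,  l_1 * c1 + l_2 * c12,  l_2 * c12],
        [0, 0, 1, 1, 1],
    ]
```

Each analytical entry agrees with a one-sided finite difference of the forward map to within the truncation error of the step.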
For the cable lengths ${\bf{L}}$, the corresponding position vectors shown in~\autoref{fig:J5_PlanarModel} are first computed as
\begin{align}
\left\{ \begin{array}{l}
{{\bf{p}}_{m1e}} = {\begin{bmatrix}
{{l_e}/2 - {l_g}}&0&{{l_f}/2}
\end{bmatrix}^T}\\
{{\bf{p}}_{m2e}} = {\begin{bmatrix}
{{l_e}/2}&0&{{l_f}/2 - {l_h}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m3e}} = {\begin{bmatrix}
{{l_e}/2}&0&{ - {l_f}/2}
\end{bmatrix}^T}\\
{{\bf{p}}_{m4e}} = {\begin{bmatrix}
{ - {l_e}/2}&0&{ - {l_f}/2}
\end{bmatrix}^T}\\
{{\bf{p}}_{m5e}} = {\begin{bmatrix}
{ - {l_e}/2}&0&{{l_f}/2 - {l_h}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m6e}} = {\begin{bmatrix}
{ - {l_e}/2 + {l_g}}&0&{{l_f}/2}
\end{bmatrix}^T}
\end{array} \right.
\label{eq:J5_9}
\end{align}
and
\begin{align}
\left\{ \begin{array}{l}
{{\bf{p}}_{m1}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{{l_b}/2}&0&{{l_m}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m2}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{{l_a}/2}&0&{{l_m} - {l_c}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m3}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{{l_d}/2}&0&{{l_m} - {l_{bd}}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m4}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{ - {l_d}/2}&0&{{l_m} - {l_{bd}}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m5}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{ - {l_a}/2}&0&{{l_m} - {l_c}}
\end{bmatrix}^T}\\
{{\bf{p}}_{m6}} = {\begin{bmatrix}
{{\color{black}x_m}}&0&{{\color{black}z_m}}
\end{bmatrix}^T} + {\bf{R}_y}({{\color{black}\theta_m}}){\begin{bmatrix}
{ - {l_b}/2}&0&{{l_m}}
\end{bmatrix}^T}
\end{array} \right.
\label{eq:J5_10}
\end{align}
where ${{\bf{p}}_{mie}}\;\left( {i = 1,2, \cdots ,6} \right)$ and ${{\bf{p}}_{mi}}\;\left( {i = 1,2, \cdots ,6} \right)$ represent the position vectors of the $i$th cable anchor point on the robot static frame and the $i$th cable anchor point on the mobile platform, respectively. ${\bf{R}_y}({{\color{black}\theta_m}})$ is the rotation matrix along the Y-axis (moving frame).
Then, the cable position vector is calculated as
\begin{align}
{{\bf{L}}_i} = {{\bf{p}}_{mie}} - {{\bf{p}}_{mi}}\quad \quad i = 1,2, \cdots ,6.
\label{eq:J5_11}
\end{align}
Finally, the cable length vector is computed as
\begin{align}
{\bf{L}} &= {\begin{bmatrix}
{{L_1}}&{{L_2}}&{{L_3}}&{{L_4}}&{{L_5}}&{{L_6}}
\end{bmatrix}^T}\nonumber\\
&= {\begin{bmatrix}
{\left\| {{{\bf{L}}_1}} \right\|}&{\left\| {{{\bf{L}}_2}} \right\|}&{\left\| {{{\bf{L}}_3}} \right\|}&{\left\| {{{\bf{L}}_4}} \right\|}&{\left\| {{{\bf{L}}_5}} \right\|}&{\left\| {{{\bf{L}}_6}} \right\|}
\end{bmatrix}^T}.
\label{eq:J5_12}
\end{align}
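The anchor-point and cable-length computation can be sketched as follows. This is our illustrative code: the anchor coordinates are hard-coded from the HCDPR parameter table, and the sign convention taken for the planar rotation $\bf{R}_y$ is an assumption.

```python
import math

# Geometric parameters (from the HCDPR parameter table).
l_a, l_b, l_c, l_d = 0.440, 0.268, 0.105, 0.412
l_e, l_f, l_g, l_h = 3.000, 1.000, 0.086, 0.105
l_m, l_bd = 0.052, 0.055

# Frame-side anchor points p_mie and platform-side offsets p_mi, in (x, z).
P_FRAME = [( l_e/2 - l_g,  l_f/2), ( l_e/2, l_f/2 - l_h), ( l_e/2, -l_f/2),
           (-l_e/2, -l_f/2), (-l_e/2, l_f/2 - l_h), (-l_e/2 + l_g, l_f/2)]
P_PLAT  = [( l_b/2, l_m), ( l_a/2, l_m - l_c), ( l_d/2, l_m - l_bd),
           (-l_d/2, l_m - l_bd), (-l_a/2, l_m - l_c), (-l_b/2, l_m)]

def cable_lengths(x_m, z_m, th_m):
    """L_i = ||p_mie - p_mi||, with the platform offsets rotated by R_y(th_m)."""
    c, s = math.cos(th_m), math.sin(th_m)
    out = []
    for (fx, fz), (px, pz) in zip(P_FRAME, P_PLAT):
        # assumed planar rotation about Y: (x, z) -> (c*x + s*z, -s*x + c*z)
        ax = x_m + c * px + s * pz
        az = z_m - s * px + c * pz
        out.append(math.hypot(fx - ax, fz - az))
    return out
```

At the centered pose ($x_m = 0$, $\theta_m = 0$) the left/right cable pairs come out mirror-symmetric, as the geometry requires.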
The inverse kinematics is calculated from the given vector of positions and orientation ${{\mathbf{p}}_e} = {\begin{bmatrix}
{{x_e}}&{{z_e}}&{{q_e}}
\end{bmatrix}^T} \in {\mathbb{R}^3}$ and the cable length vector ${\mathbf{L}} = {\begin{bmatrix}
{{L_1}}&{{L_2}}& \cdots &{{L_6}}
\end{bmatrix}^T}$ to find the vector of joint variables ${\mathbf{q}} = \begin{bmatrix}
{{\color{black}x_m}}&{{\color{black}z_m}}& \cdots &{{\color{black}\theta_{2}}} \end{bmatrix}^T$ as follows.
Suppose the cable lengths ${L_1}$ and ${L_6}$ are given and the kinematic constraints are applied (i.e., ${L_1} = {L_2}$ and ${L_5} = {L_6}$, which implies ${{\color{black}\theta_m}} = 0$). Then, the solutions of ${\color{black}x_m}$ and ${\color{black}z_m}$ are computed as
\begin{align}
\left\{ \begin{array}{l}
{\color{black}x_m} = \frac{{L_1^2 - L_6^2}}{{2({l_b} - {l_e} + 2{l_g})}}\\
{\color{black}z_m} = \frac{{{l_f}}}{2} - {l_c} + {l_m} \pm \frac{1}{{2\left( {{l_b} - {l_e} + 2{l_g}} \right)}}\\
\left( {\left( {{L_1} - {L_6} + {l_b} - {l_e} + 2{l_g}} \right)\left( {{L_6} - {L_1} + {l_b} - {l_e} + 2{l_g}} \right)} \right.\\
{\left. {\left( {{L_1} + {L_6} + {l_b} - {l_e} + 2{l_g}} \right)\left( {{L_1} + {L_6} - {l_b} + {l_e} - 2{l_g}} \right)} \right)^{\frac{1}{2}}}.
\end{array} \right.
\label{eq:J5_13}
\end{align}
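The $x_m$ component of this closed-form inverse can be round-trip checked against cable lengths generated directly from the anchor geometry (for $\theta_m = 0$). A sketch of ours, checking only $x_m$; the test pose values are illustrative and the parameters come from the HCDPR parameter table.

```python
import math

# Geometric parameters (from the HCDPR parameter table).
l_b, l_e, l_g, l_f, l_m = 0.268, 3.000, 0.086, 1.000, 0.052

def cable_16(x_m, z_m):
    """L_1 and L_6 for theta_m = 0, straight from the anchor geometry."""
    v = l_f / 2 - z_m - l_m          # common vertical offset to the top anchors
    a = l_e / 2 - l_g - l_b / 2      # half horizontal span between anchor pairs
    return math.hypot(a - x_m, v), math.hypot(a + x_m, v)

def x_m_from_cables(L1, L6):
    """Closed-form inverse solution for x_m."""
    return (L1 ** 2 - L6 ** 2) / (2 * (l_b - l_e + 2 * l_g))
```

Since $L_1^2 - L_6^2 = 2 x_m (l_b - l_e + 2 l_g)$ for this geometry, the inversion is exact up to rounding.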
Substituting ${\color{black}x_m}$, ${\color{black}z_m}$, and ${\color{black}\theta_m}$ into~\eqref{eq:J5_1}, we can find $(x_1, z_1)$. Other terms are calculated as
\begin{align}
\left\{ \begin{array}{l}
{r_{1e}}: = {\left( {{{\left( {{x_e} - {x_1}} \right)}^2} + {{\left( {{z_e} - {z_1}} \right)}^2}} \right)^{1/2}}\\
\phi : = {\rm{atan2}}\left( {{z_e} - {z_1},{x_e} - {x_1}} \right)\\
\alpha : = {\cos ^{ - 1}}\left( {\frac{{r_{1e}^2 + l_1^2 - l_2^2}}{{2{r_{1e}}{l_1}}}} \right)\\
\beta : = {\cos ^{ - 1}}\left( {\frac{{l_1^2 + l_2^2 - r_{1e}^2}}{{2{l_1}{l_2}}}} \right).
\end{array} \right.
\label{eq:J5_15}
\end{align}
Finally, two solutions for ${\color{black}\theta_{1}}$ and ${\color{black}\theta_{2}}$ are computed (using \eqref{eq:J5_15}) as
\begin{align}
\left\{ \begin{array}{l}
{\color{black}\theta_{1}} = \phi \mp \alpha - {\bar \theta_m}\\
{\color{black}\theta_{2}} = \pm \left( {\pi - \beta } \right)
\end{array} \right.
\label{eq:J5_16}
\end{align}
where \eqref{eq:J5_15} and \eqref{eq:J5_16} hold whether or not the cable kinematic constraints are applied.
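The two-link inverse kinematics above can be sketched and round-trip checked in Python. This is our illustrative code; subtracting $\bar\theta_m = \theta_m + \pi/2$ keeps the result consistent with the forward kinematics, and the `elbow` flag (our name) selects one of the two solution branches.

```python
import math

def arm_ik(x_e, z_e, x_1, z_1, th_m=0.0, l_1=0.305, l_2=0.305, elbow=1):
    """Two-link inverse kinematics via the r_1e / phi / alpha / beta terms."""
    r = math.hypot(x_e - x_1, z_e - z_1)
    phi = math.atan2(z_e - z_1, x_e - x_1)
    alpha = math.acos((r * r + l_1 * l_1 - l_2 * l_2) / (2.0 * r * l_1))
    beta = math.acos((l_1 * l_1 + l_2 * l_2 - r * r) / (2.0 * l_1 * l_2))
    th_bar = th_m + math.pi / 2
    th_1 = phi - elbow * alpha - th_bar   # elbow = +1 or -1: the two branches
    th_2 = elbow * (math.pi - beta)
    return th_1, th_2
```

Feeding either branch back through the forward kinematics recovers the commanded end-effector position.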
\subsection{HCDPR Dynamics}\label{subsec:J5_HCDPRDynamics}
In this paper, a method called equivalent dynamic modeling (EDM) is used to derive the dynamic equations of the HCDPR via the following steps:
1) map the cable tensions ($n$ cables) to the equivalent joint forces/torques ($k$ DOFs) for the cable-driven robot;
2) derive the equivalent $m$-DOF robot dynamic equations (the attached robot arm has $(m-k)$ DOFs), which include the equivalent joint forces/torques ($k$ DOFs);
3) express the corresponding terms of the equivalent joint forces/torques (in the equivalent $m$-DOF robot dynamic equations obtained in Step 2) in terms of the $n$ cable tensions. Then, the analytical dynamic model of the HCDPR is obtained. The EDM method offers advantages in deriving dynamic equations for HCDPRs; for example, it provides an effective solution for different configurations of HCDPRs. In this paper, the EDM method is applied to the planar HCDPR shown in \autoref{fig:J5_PlanarModel} as follows:
1) An equivalent three-spring driven model, shown in Appendix~\ref{appendix:J5_EquivalentModel}, is developed. A cable-tension transformation equation ${{\bf{\tau }}_m} = - {\bf{AT}}$ is established and proved, where ${{\mathbf{\tau }}_m}: = {\left[ {{\tau _x},{\tau _z},{\tau _\theta }} \right]^T} \in {\mathbb{R}^3}$, ${\mathbf{A}} \in {\mathbb{R}^{3 \times n}},$ and ${\mathbf{T}}: = {\left[ {{T_1},{T_2}, \cdots ,{T_n}} \right]^T} \in {\mathbb{R}^n}$ represent the equivalent joint forces/torques applied to the mobile platform, the structure matrix, and the cable tensions, respectively.
2) The kinetic and potential energy are calculated as
\begin{align}
{K_E} = &{}\frac{1}{2}{m_m}\dot {\color{black}x}_m^2 + \frac{1}{2}{m_m}\dot {\color{black}z}_m^2 + \frac{1}{2}{I_m}\dot {\color{black}\theta}_m^2 + \frac{1}{2}{m_1}v_{c1}^2 \nonumber\\
&{}+ \frac{1}{2}{I_1}{\left( {{{\dot \theta}_m} + {{\dot \theta}_{1}}} \right)^2} + \frac{1}{2}{m_2}v_{c2}^2 + \frac{1}{2}{I_2}{\left( {{{\dot \theta}_m} + {{\dot \theta}_{1}} + {{\dot \theta}_{2}}} \right)^2}
\label{eq:J5_17}
\end{align}
and
\begin{align}
{P_E} = &{}{m_m}g{\color{black}z_m} + {m_1}g{z_{c1}} + {m_2}g{z_{c2}} + \frac{1}{2}{k_x}{\left( {{\color{black}x_m} - {\color{black}x_{m0}}} \right)^2} \nonumber\\
&{}+ \frac{1}{2}{k_z}{\left( {{\color{black}z_m} - {\color{black}z_{m0}}} \right)^2} + \frac{1}{2}{k_\theta }{\left( {{{\color{black}\theta_m}} - {\color{black}\theta_{m0}}} \right)^2}
\label{eq:J5_18}
\end{align}
where ${k_x}$, ${k_z}$, and ${k_\theta }$ come from the equivalent three-spring driven model shown in Appendix~\ref{appendix:J5_EquivalentModel} and represent the corresponding spring constants based on Hooke's law. The expression $\frac{1}{2}{k_x}{\left( {{\color{black}x_m} - {\color{black}x_{m0}}} \right)^2} + \frac{1}{2}{k_z}{\left( {{\color{black}z_m} - {\color{black}z_{m0}}} \right)^2} + \frac{1}{2}{k_\theta }{\left( {{{\color{black}\theta_m}} - {\color{black}\theta_{m0}}} \right)^2}$ represents the spring potential energy of the equivalent joints.
Then, Lagrange's equation is written as
\begin{align}
\frac{d}{{dt}}\left( {\frac{{\partial ({K_E} - {P_E})}}{{\partial {{\dot q}_j}}}} \right) - \frac{{\partial ({K_E} - {P_E})}}{{\partial {q_j}}} = {\tau _j}\quad j = 1,2, \cdots ,5.
\label{eq:J5_19}
\end{align}
The dynamic equation is computed as
\begin{align}
{\mathbf{M}}\left( {\mathbf{q}} \right){\mathbf{\ddot q + C}}\left( {{\mathbf{q}},{\mathbf{\dot q}}} \right){\mathbf{\dot q + G}}\left( {\mathbf{q}} \right) + {{\mathbf{P}}_{vs}}\left( {\mathbf{q}} \right)={\mathbf{\tau}}
\label{eq:J5_20}
\end{align}
where $\mathbf{q} \in {\mathbb{R}^{5}}$, ${\mathbf{\dot q}} \in \mathbb{R}{^{5}}$, and ${\mathbf{\ddot q}} \in \mathbb{R}{^{5}}$, represent the vectors of generalized coordinates, velocities, and accelerations, respectively. ${\mathbf{M}}({\mathbf{q}}) \in \mathbb{R}{^{5 \times 5}}$, $\mathbf{C}({{{\mathbf{q}}},{\mathbf{\dot q}}}) \in \mathbb{R}{^{5 \times 5}}$, $\mathbf{G}(\mathbf{q}) \in {\mathbb{R}^{5}}$, and ${\mathbf{P}}_{vs}(\mathbf{q}) \in {\mathbb{R}^{5}}$ denote the inertia matrix, Coriolis and centripetal matrix, vector of gravitational force, and vector of elastic force, respectively. ${\mathbf{\tau}} \in {\mathbb{R}^{5}}$ represents the vector of generalized force. ${\mathbf{M}}({\mathbf{q}}), \mathbf{C}({{{\mathbf{q}}},{\mathbf{\dot q}}}),\mathbf{G}(\mathbf{q})$, and ${{\mathbf{P}}_{vs}}(\mathbf{q})$ are provided in Appendix~\ref{appendix:J5_HCDPR_Drivations}.
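This generalized form lends itself directly to simulation. A minimal sketch of ours, assuming NumPy; the callables `M`, `C`, `G`, and `Pvs` are placeholders standing in for the matrices given in the appendix:

```python
import numpy as np

def step(q, qd, tau, M, C, G, Pvs, dt=1e-3):
    """One semi-implicit Euler step of the dynamic equation:
    qdd = M(q)^{-1} (tau - C(q, qd) qd - G(q) - Pvs(q))."""
    qdd = np.linalg.solve(M(q), tau - C(q, qd) @ qd - G(q) - Pvs(q))
    qd_next = qd + dt * qdd          # integrate acceleration first
    q_next = q + dt * qd_next        # then position, using the new velocity
    return q_next, qd_next
```

With identity inertia and all other terms zero, one step with unit generalized forces produces velocity $\Delta t$ and displacement $\Delta t^2$, as expected.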
When an external force ${{\bf{F}}_e}$ and moment ${{\bf{M}}_e}$ are applied to the end-effector, the dynamic equation can be rewritten as
\begin{align}
{\bf{M}}\left( {\bf{q}} \right){\bf{\ddot q + C}}\left( {{\bf{q}},{\bf{\dot q}}} \right){\bf{\dot q + G}}\left( {\bf{q}} \right) + {{\bf{P}}_{vs}}\left( {\bf{q}} \right){\bf{ + J}}_e^T{\begin{bmatrix}
{{{\bf{F}}_e}}&{{{\bf{M}}_e}}
\end{bmatrix}^T}={\bf{\tau}}
\label{eq:J5_21}
\end{align}
where ${{\bf{J}}_e}$ is the Jacobian matrix. Here, \eqref{eq:J5_20} or \eqref{eq:J5_21} is the equivalent HCDPR dynamic equation.
3) For the planar HCDPR, \eqref{eq:J5_21} and~\eqref{eq:J5_appendix_A_4} are rearranged as
\begin{align}
&{\bf{M}}\left( {\bf{q}} \right){\bf{\ddot q + C}}\left( {{\bf{q}},{\bf{\dot q}}} \right){\bf{\dot q + G}}\left( {\bf{q}} \right) + {\bf{J}}_e^T{\begin{bmatrix}
{{{\bf{F}}_e}}&{{{\bf{M}}_e}}
\end{bmatrix}^T} \nonumber\\
&={\bf{\tau }} - {{\bf{P}}_{vs}}\left( {\bf{q}} \right) = \begin{bmatrix}
{{\tau _1} - {P_{vs1}}}\\
{{\tau _2} - {P_{vs2}}}\\
{{\tau _3} - {P_{vs3}}}\\
{{\tau _4}}\\
{{\tau _5}}
\end{bmatrix}.
\label{eq:J5_22}
\end{align}
Using \eqref{eq:J5_10}, \eqref{eq:J5_11}, and \eqref{eq:J5_12}, and supposing ${{\bf{\hat L}}_i} = \frac{{{{\bf{L}}_i}}}{{\left\| {{{\bf{L}}_i}} \right\|}} \in {\mathbb{R}^3}$ and ${{\bf{r}}_i} = \left( {{{\bf{p}}_{mi}} - {{\bf{p}}_m}} \right)$, where $i = 1,2, \cdots ,6$, the matrix $\bf{A}$ is defined as
\begin{align}
{\bf{A}} = \begin{bmatrix}
{{{\hat L}_{1x}}}& \cdots &{{{\hat L}_{6x}}}\\
{{{\hat L}_{1z}}}& \cdots &{{{\hat L}_{6z}}}\\
{{{\begin{bmatrix}
{{r_{1x}}}\\{{r_{1z}}}
\end{bmatrix}}} \times {{\begin{bmatrix}
{{{\hat L}_{1x}}}\\{{{\hat L}_{1z}}}
\end{bmatrix}}}}& \cdots &{{{\begin{bmatrix}
{{r_{6x}}}\\{{r_{6z}}}
\end{bmatrix}}} \times {{\begin{bmatrix}
{{{\hat L}_{6x}}}\\{{{\hat L}_{6z}}}
\end{bmatrix}}}}
\end{bmatrix}.
\label{eq:J5_23}
\end{align}
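Assembling this planar structure matrix from anchor points can be sketched as below. This is our illustrative code; the scalar two-dimensional cross product follows the notation above (a moment taken about the $+Y$ axis would carry the opposite sign).

```python
import math

def structure_matrix(anchors_frame, anchors_plat, x_m, z_m):
    """Planar 3 x n structure matrix A: unit cable directions in the first
    two rows, and the scalar 2-D cross product r_i x L_hat_i in the third."""
    rows = [[], [], []]
    for (fx, fz), (ax, az) in zip(anchors_frame, anchors_plat):
        lx, lz = fx - ax, fz - az          # cable vector L_i = p_mie - p_mi
        n = math.hypot(lx, lz)
        lx, lz = lx / n, lz / n            # unit vector L_hat_i
        rx, rz = ax - x_m, az - z_m        # moment arm r_i = p_mi - p_m
        rows[0].append(lx)
        rows[1].append(lz)
        rows[2].append(rx * lz - rz * lx)  # 2-D cross product (moment row)
    return rows
```

For a cable pulling along its own moment arm, the moment entry vanishes, which is a convenient sanity check.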
Combining~\eqref{eq:J5_appendix_A_8} and \eqref{eq:J5_23}, $\bf{AT}$ is the force and moment (applied at the center of mass of the mobile platform) produced by the flexible cables. Based on Lagrange's equation \eqref{eq:J5_19}, ${\tau _j}\;(j = 1,2, \ldots ,5)$ denotes the generalized force/torque applied to the dynamic system at joint $j$ to drive link $j$. In this specific HCDPR, however, the mobile platform is driven by six cables, i.e., there is no direct input (force/torque) applied to the center of mass of the mobile platform, so ${\tau _1} = {\tau _2} = {\tau _3} = 0$. From \eqref{eq:J5_22} and~\eqref{eq:J5_appendix_A_10}, we get
\begin{align}
&{\begin{bmatrix}
{{\tau _1} - {P_{vs1}}}&{{\tau _2} - {P_{vs2}}}&{{\tau _3} - {P_{vs3}}}
\end{bmatrix}^T} \nonumber\\
&= {\begin{bmatrix}
{ - {P_{vs1}}}&{ - {P_{vs2}}}&{ - {P_{vs3}}}
\end{bmatrix}^T} = - {{\bf{\tau }}_m} = {\bf{AT}}.
\label{eq:J5_24}
\end{align}
Then, by combining \eqref{eq:J5_22} and \eqref{eq:J5_24}, the dynamic equation of HCDPR is described as
\begin{align}
{\bf{M}}\left( {\bf{q}} \right){\bf{\ddot q + C}}\left( {{\bf{q}},{\bf{\dot q}}} \right){\bf{\dot q + G}}\left( {\bf{q}} \right) + {\bf{J}}_e^T{\begin{bmatrix}
{{{\bf{F}}_e}}&{{{\bf{M}}_e}}
\end{bmatrix}^T}{\bf{ = }}\begin{bmatrix}
{{\bf{AT}}}\\
{{\tau _4}}\\
{{\tau _5}}
\end{bmatrix}.
\label{eq:J5_25}
\end{align}
In addition, consider the configuration and constraints of HCDPR shown in~\autoref{fig:J5_PlanarModel}. The upper cables and lower cables are based on position control and force control, respectively. Then, the cable tensions $\bf{T}$ shown in \eqref{eq:J5_25} are calculated as
\begin{align}
\left\{ \begin{array}{l}
{T_1} = \frac{{{K_s}}}{{{L_{01}}}}\left( {{L_1} - {L_{01}}} \right)\\
{T_2} = \frac{{{K_s}}}{{{L_{02}}}}\left( {{L_2} - {L_{02}}} \right)\\
{T_3} = {T_3}\\
{T_4} = {T_4}\\
{T_5} = \frac{{{K_s}}}{{{L_{05}}}}\left( {{L_5} - {L_{05}}} \right)\\
{T_6} = \frac{{{K_s}}}{{{L_{06}}}}\left( {{L_6} - {L_{06}}} \right)
\end{array} \right.
\label{eq:J5_26}
\end{align}
where the unstretched cable lengths satisfy ${L_{01}} = {L_{02}}$ and ${L_{05}} = {L_{06}}$ (because of the kinematic constraints), ${L_i}$ $(i = 1,2, \cdots, 6)$ are the current cable lengths, and ${K_s}$ is the specific stiffness. Hence, suppose the inputs of the real HCDPR are ${\bf{u}} = {\begin{bmatrix}
{{L_{01}}}&{{T_3}}&{{T_4}}&{{L_{06}}}
\end{bmatrix}^T}$ and ${{\bf{\tau }}_{45}} = {\begin{bmatrix}
{{\tau _4}}&{{\tau _5}}
\end{bmatrix}^T}$ (torques applied to the revolute joint on the robot arm). Also, the outputs are assumed to be ${\bf{q}} = {\begin{bmatrix}
{{\color{black}x_m}}&{{\color{black}z_m}}&{{{\color{black}\theta_m}}}&{{\color{black}\theta_{1}}}&{{\color{black}\theta_{2}}}
\end{bmatrix}^T}$, where ${\color{black}x_m}$ and ${\color{black}z_m}$ are the equivalent prismatic joints on the mobile platform; ${\color{black}\theta_m}$ is the equivalent revolute joint on the mobile platform; and ${\color{black}\theta_{1}}$ and ${\color{black}\theta_{2}}$ are the revolute joints on the robot arm.
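The cable tension model of \eqref{eq:J5_26} can be sketched numerically as follows. This Python fragment is illustrative only (the paper's implementation uses MATLAB), and all numeric values, including the stiffness $K_s$ and the cable lengths, are hypothetical:

```python
import numpy as np

def cable_tensions(L, L0, T3, T4, Ks):
    """Spring model of eq. (26): position-controlled cables 1, 2, 5, 6
    follow T_i = (Ks / L0_i) * (L_i - L0_i); cables 3, 4 are
    force-controlled, so their tensions are commanded directly."""
    T = np.empty(6)
    for i in (0, 1, 4, 5):                  # cables 1, 2, 5, 6 (0-indexed)
        T[i] = Ks / L0[i] * (L[i] - L0[i])
    T[2], T[3] = T3, T4                     # directly commanded tensions
    return T

# Illustrative numbers only (not taken from the paper):
L0 = np.array([1.30, 1.30, np.nan, np.nan, 1.40, 1.40])  # unstretched, m
L  = np.array([1.35, 1.35, np.nan, np.nan, 1.45, 1.45])  # current, m
T  = cable_tensions(L, L0, T3=50.0, T4=50.0, Ks=8.0e4)
```

The `nan` entries for cables 3 and 4 emphasize that their lengths play no role in the tension model; only their commanded forces do.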
\section{Control Design}
\label{sec:J5_ControlDesign}
Based on the system modeling in~\autoref{sec:J5_SystemModeling}, the redundancy resolution, the stiffness optimization problem, and the controller design are developed in this section to address the vibration suppression and trajectory tracking problems.
\subsection{Redundancy Resolution}\label{subsec:J5_RedundancyResolution}
For the HCDPR shown in \autoref{fig:J5_PlanarModel}, the robot is redundantly actuated (i.e., six cables drive the 3-DOF mobile platform). Supposing an external force and moment ${\left[ {{{\bf{F}}_e},{{\bf{M}}_e}} \right]^T}$ are applied to the center of mass of the mobile platform, we get
\begin{align}
{\mathbf{T}} = {{\mathbf{A}}^ + }\left( {{m_m}{{\begin{bmatrix}
0&0&g
\end{bmatrix}}^T} + {{\left[ {{{\mathbf{F}}_e},{{\mathbf{M}}_e}} \right]}^T}} \right) \in {\mathbb{R}^6}.
\label{eq:J5_3-1}
\end{align}
By supposing the wrench vector ${{\mathbf{W}}_m}: = {m_m}{\begin{bmatrix}
0&0&g
\end{bmatrix}^T} + {\left[ {{{\mathbf{F}}_e},{{\mathbf{M}}_e}} \right]^T} \in {\mathbb{R}^3}$, then
\begin{align}
{\bf{T}} = {{\bf{A}}^T}{\left( {{\bf{A}}{{\bf{A}}^T}} \right)^{ - 1}}{{\bf{W}}_m}.
\label{eq:J5_3-2}
\end{align}
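The minimum-norm solution \eqref{eq:J5_3-2} can be sketched numerically as follows. The structure matrix and wrench below are hypothetical placeholders, not the paper's values, and Python is used only for illustration:

```python
import numpy as np

# Hypothetical 3x6 structure matrix A (unit cable directions and moment
# arms); the values are illustrative, not taken from the paper.
A = np.array([[ 0.6, -0.6,  0.8, -0.8,  0.5,  -0.5],
              [ 0.8,  0.8,  0.6,  0.6,  0.87,  0.87],
              [ 0.1, -0.1, -0.2,  0.2,  0.05, -0.05]])

W_m = np.array([0.0, 150.0 * 9.81, 0.0])   # wrench: gravity only, m_m = 150 kg

# Minimum-norm solution of A T = W_m, eq. (3-2): T = A^T (A A^T)^{-1} W_m
T = A.T @ np.linalg.solve(A @ A.T, W_m)
```

Forming `np.linalg.solve(A @ A.T, W_m)` avoids explicitly inverting $\bf{A}{\bf{A}}^T$, which is both cheaper and numerically safer than computing the inverse.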
In \eqref{eq:J5_3-2}, some elements of $\bf{T}$ (i.e., cable tensions) might be negative. However, cables can only pull, so negative tensions cannot drive the mobile platform in the real system. The redundancy resolution of the cable tensions $\bf{T}$ can be formulated as
\begin{align}
{\bf{T}} = {{\bf{A}}^T}{\left( {{\bf{A}}{{\bf{A}}^T}} \right)^{ - 1}}{{\bf{W}}_m} + {\rm{Null}}\left( {\bf{A}} \right){\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T}
\label{eq:J5_3-3}
\end{align}
where ${\mathbf{T}} = {\begin{bmatrix}
{{T_1}}&{{T_2}}& \cdots &{{T_6}}
\end{bmatrix}^T} \in {\mathbb{R}^6}$, ${\rm{Null}}\left( {\mathbf{A}} \right)$ represents the null space of the structure matrix $\bf{A}$ (calculated using \eqref{eq:J5_23}), and ${\lambda _1},{\lambda _2},{\lambda _3} \in \mathbb{R}$ are three arbitrary values. In \eqref{eq:J5_3-3}, ${\rm{Null}}\left( {\bf{A}} \right){\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T}$ belongs to the null space of $\bf{A}$, since it can be described as ${\bf{A}}\left( {{\rm{Null}}\left( {\bf{A}} \right){{\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}}^T}} \right) = \left( {{\bf{A}}{\rm{Null}}\left( {\bf{A}} \right)} \right){\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T} = {\bf{0}}{\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T} = {\bf{0}}$. The term ${\rm{Null}}\left( {\bf{A}} \right){\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T}$ denotes antagonistic cable tensions. The cable tensions $\bf{T}$ increase if all the antagonistic cable tensions are positive. Hence, ${\lambda _1},{\lambda _2},{\lambda _3}$ can be selected to ensure that all the cable tensions remain positive.
By supposing ${{\bf{T}}_A}: = {{\bf{A}}^T}{\left( {{\bf{A}}{{\bf{A}}^T}} \right)^{ - 1}}{{\bf{W}}_m}$ and ${{\bf{N}}_A}: = {\rm{Null}}\left( {\bf{A}} \right)$, then \eqref{eq:J5_3-3} is rearranged as
\begin{align}
{\bf{T}} = {{\bf{T}}_A} + {{\bf{N}}_A}{\begin{bmatrix}
{{\lambda _1}}&{{\lambda _2}}&{{\lambda _3}}
\end{bmatrix}^T}
\label{eq:J5_3-4}
\end{align}
where ${{\mathbf{T}}_A} = {\begin{bmatrix}
{{T_{A1}}}&{{T_{A2}}}& \cdots &{{T_{A6}}}
\end{bmatrix}^T} \in {\mathbb{R}^6}$ and ${{\mathbf{N}}_A} = \begin{bmatrix}
{{N_{A11}}}&{{N_{A12}}}&{{N_{A13}}} \\
{{N_{A21}}}&{{N_{A22}}}&{{N_{A23}}} \\
{{N_{A31}}}&{{N_{A32}}}&{{N_{A33}}} \\
{{N_{A41}}}&{{N_{A42}}}&{{N_{A43}}} \\
{{N_{A51}}}&{{N_{A52}}}&{{N_{A53}}} \\
{{N_{A61}}}&{{N_{A62}}}&{{N_{A63}}}
\end{bmatrix} \in {\mathbb{R}^{6 \times 3}}$. Eq. \eqref{eq:J5_3-4} is introduced to combine the stiffness maximization method and constraints in order to optimize cable tensions.
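The decomposition \eqref{eq:J5_3-4} can be sketched numerically by taking the null-space basis $\mathbf{N}_A$ from the SVD of $\bf{A}$. The matrix values below are hypothetical and Python is used only for illustration:

```python
import numpy as np

def tension_distribution(A, W_m, lam):
    """Eq. (3-4): T = T_A + N_A @ lam. T_A is the minimum-norm particular
    solution of A T = W_m and N_A is an orthonormal basis of Null(A)
    taken from the right singular vectors of A."""
    T_A = A.T @ np.linalg.solve(A @ A.T, W_m)   # particular solution, eq. (3-2)
    _, _, Vt = np.linalg.svd(A)
    N_A = Vt[A.shape[0]:].T                     # 6x3 null-space basis
    return T_A + N_A @ np.asarray(lam)

# Hypothetical structure matrix and wrench (values are not from the paper):
A = np.array([[ 0.6, -0.6,  0.8, -0.8,  0.5,  -0.5],
              [ 0.8,  0.8,  0.6,  0.6,  0.87,  0.87],
              [ 0.1, -0.1, -0.2,  0.2,  0.05, -0.05]])
W_m = np.array([0.0, 1471.5, 0.0])              # m_m * g for m_m = 150 kg

# The antagonistic term changes individual tensions but not the wrench.
T = tension_distribution(A, W_m, lam=[200.0, -50.0, 10.0])
```

Because $\mathbf{A}\mathbf{N}_A = \mathbf{0}$, any choice of $\lambda_1,\lambda_2,\lambda_3$ leaves the platform wrench unchanged, which is exactly the freedom exploited to keep all tensions positive.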
\subsection{Maximizing Stiffness of the HCDPR}\label{subsec:J5_MaxStiffness}
To calculate the stiffness matrix ${\bf{K}}$ of a static cable-driven robot, suppose an external force and moment ${\left[ {{{\bf{F}}_e},{{\bf{M}}_e}} \right]^T}$ are applied to the center of mass of the mobile platform. The stiffness matrix ${\bf{K}}$ is then computed as
\begin{align}
{\bf{K}}:=&{} \frac{{d\left( {{m_m}{{\begin{bmatrix}
0&0&g
\end{bmatrix}}^T} + {{\left[ {{{\bf{F}}_e},{{\bf{M}}_e}} \right]}^T}} \right)}}{{d{{\bf{p}}_m}}} = \frac{{d({\bf{AT}})}}{{d{{\bf{p}}_m}}}\nonumber\\
=&{} \frac{{d{\bf{A}}}}{{d{{\bf{p}}_m}}}{\bf{T}} + {\bf{A}}\frac{{d{\bf{T}}}}{{d{{\bf{p}}_m}}}
=\frac{{d{\bf{A}}}}{{d{{\bf{p}}_m}}}{\bf{T}} + {\bf{A}}\left( {\frac{{d{\bf{T}}}}{{d{\bf{L}}}}} \right)\left( {\frac{{d{\bf{L}}}}{{d{{\bf{p}}_m}}}} \right) \nonumber\\
= &{}\frac{{d{\bf{A}}}}{{d{{\bf{p}}_m}}}{\bf{T}} + {\bf{A}}{{\bf{K}}_c}{{\bf{A}}^T} = :{{\bf{K}}_T} + {{\bf{K}}_k}
\label{eq:J5_3-5}
\end{align}
where ${{\bf{p}}_m}$, ${\bf T}$, and ${\bf L}$ represent the position and orientation of the center of mass of the mobile platform, the cable tension vector, and the cable length vector, respectively. Matrices ${{\bf{K}}_T}$ and ${{\bf{K}}_k}$ arise from the cable tensions and the cable stiffness, respectively, where ${{\mathbf{K}}_c} = \frac{{d{\mathbf{T}}}}{{d{\mathbf{L}}}} = {\rm{diag}}\left( {{k_1},{k_2}, \cdots ,{k_i}, \cdots ,{k_n}} \right) \in {\mathbb{R}^{n \times n}}$ and ${k_i}$ is the stiffness coefficient of the $i$th cable. Expanding \eqref{eq:J5_3-5} in terms of the kinematic parameters ${L_i}$, ${{\mathbf{\hat L}}_i}$, and ${{\mathbf{r}}_i}$, the matrices ${{\mathbf{K}}_T}$ and ${{\mathbf{K}}_k}$ can be described as~\cite{Mendez2014,Behzadipour2006}
\begin{align}
{{\bf{K}}_T} = \sum\limits_{i = 1}^n {\frac{{{T_i}}}{{{L_i}}}\begin{bmatrix}
{{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right){{\left[ {{{\bf{r}}_i} \times } \right]}^T}}\\
{\left[ {{{\bf{r}}_i} \times } \right]\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right)}&
\begin{matrix}
\left[ {{{\bf{r}}_i} \times } \right]\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right){{\left[ {{{\bf{r}}_i} \times } \right]}^T} \\
- \left[ {{{{\bf{\hat L}}}_i} \times } \right]{{\left[ {{{\bf{r}}_i} \times } \right]}^T}
\end{matrix}
\end{bmatrix}}
\label{eq:J5_3-6}
\end{align}
and
\begin{align}
{{\bf{K}}_k} = \sum\limits_{i = 1}^n {{k_i}\begin{bmatrix}
{{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T{{\left[ {{{\bf{r}}_i} \times } \right]}^T}}\\
{\left[ {{{\bf{r}}_i} \times } \right]{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{\left[ {{{\bf{r}}_i} \times } \right]{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T{{\left[ {{{\bf{r}}_i} \times } \right]}^T}}
\end{bmatrix}}
\label{eq:J5_3-7}
\end{align}
where ${{\bf{r}}_i} = \begin{bmatrix}
{{r_{ix}}}\\
{{r_{iy}}}\\
{{r_{iz}}}
\end{bmatrix}$, $\left[ {{{\bf{r}}_i} \times } \right] = \begin{bmatrix}
0&{ - {r_{iz}}}&{{r_{iy}}}\\
{{r_{iz}}}&0&{ - {r_{ix}}}\\
{ - {r_{iy}}}&{{r_{ix}}}&0
\end{bmatrix}$, ${{\bf{\hat L}}_i} = \begin{bmatrix}
{{{\hat L}_{ix}}}\\
{{{\hat L}_{iy}}}\\
{{{\hat L}_{iz}}}
\end{bmatrix}$, and $\left[ {{{{\bf{\hat L}}}_i} \times } \right] = \begin{bmatrix}
0&{ - {{\hat L}_{iz}}}&{{{\hat L}_{iy}}}\\
{{{\hat L}_{iz}}}&0&{ - {{\hat L}_{ix}}}\\
{ - {{\hat L}_{iy}}}&{{{\hat L}_{ix}}}&0
\end{bmatrix}$. Here, $\left[ {{{\bf{r}}_i} \times } \right]$ denotes the cross-product operator, ${k_i}$ is the $i$th cable stiffness, and $\bf{I}$ is the identity matrix. Eqs.~\eqref{eq:J5_3-6} and \eqref{eq:J5_3-7} are equivalent to the results of the four-spring model proposed by Behzadipour and Khajepour~\cite{Behzadipour2006}, who also proved that a static cable-driven robot is stable if the stiffness matrix ${\bf{K}}$ is positive definite (a sufficient condition).
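The block structure of \eqref{eq:J5_3-7} can be sketched numerically, one cable at a time, as follows. The cable direction, anchor offset, and stiffness value are hypothetical, and Python is used only for illustration:

```python
import numpy as np

def skew(r):
    """Cross-product operator [r x] used in eqs. (3-6) and (3-7)."""
    rx, ry, rz = r
    return np.array([[0.0, -rz,  ry],
                     [ rz, 0.0, -rx],
                     [-ry,  rx, 0.0]])

def Kk_cable(k_i, L_hat, r):
    """One cable's contribution to K_k in eq. (3-7): a symmetric 6x6
    block built from the unit direction L_hat and anchor offset r."""
    P = np.outer(L_hat, L_hat)                 # L_hat L_hat^T (rank one)
    Rx = skew(r)
    top = np.hstack([P, P @ Rx.T])
    bot = np.hstack([Rx @ P, Rx @ P @ Rx.T])
    return k_i * np.vstack([top, bot])

# Hypothetical cable geometry and stiffness k_i = Ks / L0_i (made-up values):
L_hat = np.array([0.6, 0.0, 0.8])              # unit cable direction
r = np.array([0.4, 0.0, 0.1])                  # anchor offset from the CoM
K1 = Kk_cable(6.0e4, L_hat, r)
```

Each block factors as $k_i \begin{bmatrix} \mathbf{I} \\ [\mathbf{r}_i \times] \end{bmatrix} \hat{\mathbf{L}}_i \hat{\mathbf{L}}_i^T \begin{bmatrix} \mathbf{I} \\ [\mathbf{r}_i \times] \end{bmatrix}^T$, so it is symmetric positive semidefinite, consistent with the stability condition on $\bf{K}$.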
The stiffness matrices in \eqref{eq:J5_3-6} and \eqref{eq:J5_3-7} apply to cable-driven robots in 3D. For the HCDPR in this paper, since the four upper cables are utilized for position control and the two lower cables are used to set cable tensions, the specific stiffness matrices can be rearranged as
\begin{align}
{{\bf{K}}_T} = \sum\limits_{i = 1}^6 {\frac{{{T_i}}}{{{L_i}}}} \begin{bmatrix}
{{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right){{\left[ {{{\bf{r}}_i} \times } \right]}^T}}\\
{\left[ {{{\bf{r}}_i} \times } \right]\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right)}&
\begin{matrix}
\left[ {{{\bf{r}}_i} \times } \right]\left( {{\bf{I}} - {{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T} \right){{\left[ {{{\bf{r}}_i} \times } \right]}^T} \\
- \left[ {{{{\bf{\hat L}}}_i} \times } \right]{{\left[ {{{\bf{r}}_i} \times } \right]^T}}
\end{matrix}
\end{bmatrix}
\label{eq:J5_3-8}
\end{align}
and
\begin{align}
{{\bf{K}}_k} = \sum\limits_{i \in \left\{ {1,2,5,6} \right\}} {{k_i}} \begin{bmatrix}
{{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T{{\left[ {{{\bf{r}}_i} \times } \right]}^T}}\\
{\left[ {{{\bf{r}}_i} \times } \right]{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T}&{\left[ {{{\bf{r}}_i} \times } \right]{{{\bf{\hat L}}}_i}{\bf{\hat L}}_i^T{{\left[ {{{\bf{r}}_i} \times } \right]}^T}}
\end{bmatrix}
\label{eq:J5_3-9}
\end{align}
where ${r_{iy}}$ and ${\hat L_{iy}}$ equal zero, and ${k_i} = {K_s}/{L_{0i}}$. In addition, the elements of ${{\bf{K}}_k}$ cannot be controlled, because they are determined by the properties of the cables. Hence, the stiffness of the HCDPR can be changed by optimizing ${{\bf{K}}_T}$.
Then, the stiffness of the HCDPR is maximized by the following approach:
1) When the kinematic constraints (${L_{01}} = {L_{02}}$ and ${L_{05}} = {L_{06}}$) are applied, set the cable tensions as ${T_1} = {T_2}$ and ${T_5} = {T_6}$. By combining ${k_i} = {K_s}/{L_{0i}}$ and \eqref{eq:J5_26}, we get
\begin{align}
\left\{ \begin{array}{l}
{\lambda _1} = \frac{{{b_2}{c_1} - {b_1}{c_2}}}{{{a_1}{b_2} - {a_2}{b_1}}}\\
{\lambda _2} = \frac{{{a_1}{c_2} - {a_2}{c_1}}}{{{a_1}{b_2} - {a_2}{b_1}}}
\end{array} \right.
\label{eq:J5_3-10}
\end{align}
where
\begin{align}
\left\{ \begin{array}{l}
{a_1} = {N_{A11}} - {N_{A21}}\\
{b_1} = {N_{A12}} - {N_{A22}}\\
{c_1} = {k_1}\left( {{L_1} - {L_2}} \right) + {T_{A2}} - {T_{A1}} + \left( {{N_{A23}} - {N_{A13}}} \right){\lambda _3}\\
{a_2} = {N_{A51}} - {N_{A61}}\\
{b_2} = {N_{A52}} - {N_{A62}}\\
{c_2} = {k_5}\left( {{L_5} - {L_6}} \right) + {T_{A6}} - {T_{A5}} + \left( {{N_{A63}} - {N_{A53}}} \right){\lambda _3}.
\end{array} \right.
\end{align}
Hence, there is only one variable ${\lambda _3}$ to optimize, such that:
\begin{align}
{\bf{T}} = {{\bf{T}}_A} + {{\bf{N}}_A}{\begin{bmatrix}
{\frac{{{b_2}{c_1} - {b_1}{c_2}}}{{{a_1}{b_2} - {a_2}{b_1}}}}&{\frac{{{a_1}{c_2} - {a_2}{c_1}}}{{{a_1}{b_2} - {a_2}{b_1}}}}&{{\lambda _3}}
\end{bmatrix}^T}.
\label{eq:J5_3-11}
\end{align}
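Eq.~\eqref{eq:J5_3-10} is simply the Cramer's-rule solution of the 2$\times$2 linear system $a_1\lambda_1 + b_1\lambda_2 = c_1$, $a_2\lambda_1 + b_2\lambda_2 = c_2$, and can be sketched as follows. The coefficients below are arbitrary placeholders (in the actual robot they depend on $\mathbf{N}_A$, $\mathbf{T}_A$, the cable lengths, and $\lambda_3$):

```python
def lambda12(a1, b1, c1, a2, b2, c2):
    """Closed form of eq. (3-10): solve the 2x2 linear system
    a1*l1 + b1*l2 = c1 and a2*l1 + b2*l2 = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1          # must be nonzero
    l1 = (b2 * c1 - b1 * c2) / det
    l2 = (a1 * c2 - a2 * c1) / det
    return l1, l2

# Arbitrary illustrative coefficients:
l1, l2 = lambda12(1.0, 2.0, 5.0, 3.0, 4.0, 6.0)
```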
2) Maximizing any diagonal element of the stiffness matrix $\bf{K}$ in \eqref{eq:J5_3-5} provides a unique solution for the cable tensions~\cite{G.Meunier2009} and satisfies ${\bf{K}} = g({\lambda _3})$, where $g( \cdot )$ is a monotonic nondecreasing function. Hence, maximizing the stiffness $\bf{K}$ and maximizing ${\lambda _3}$ are equivalent: the maximum ${\lambda _3}$ maximizes the HCDPR's stiffness $\bf{K}$ in the ${\color{black}x_m}$, ${\color{black}z_m}$, and ${\color{black}\theta_m}$ directions (i.e., along the X-axis, along the Z-axis, and in rotation about the Y-axis). Then, we have
\begin{align}
{\mathbf{T}} = {\lambda _3}{{\mathbf{D}}_A} + {{\mathbf{E}}_A},\quad \quad {{\mathbf{D}}_A},{{\mathbf{E}}_A} \in {\mathbb{R}^6}
\label{eq:J5_3-12}
\end{align}
where the vectors ${{\mathbf{D}}_A}$ and ${{\mathbf{E}}_A}$ are calculated using \eqref{eq:J5_appendix_B_1} and \eqref{eq:J5_appendix_B_2} in Appendix~\ref{appendix:J5_SomeTermsStiffness}. The solution of \eqref{eq:J5_3-12} for ${\lambda _3}$ can be described as
\begin{align}
{\lambda _3} = \frac{1}{{{D_{Ai}}}}{T_i} - \frac{{{E_{Ai}}}}{{{D_{Ai}}}},\quad \quad i = 1,2, \cdots ,6.
\label{eq:J5_3-13}
\end{align}
3) The objective function is defined as
\begin{subequations}
\begin{align}
{{\rm{maximize}}}\quad&{{\lambda _3}} \\
{\rm{subject{~}to}}\quad&{\bf{T}} = {\lambda _3}{{\bf{D}}_A} + {{\bf{E}}_A}\\
&{0 \le {T_{i\min }} \le {T_i} \le {T_{i\max }},\quad \quad i = 1,2, \cdots ,6}
\end{align}
\label{eq:J5_3-14}
\end{subequations}
where ${T_i},\;{T_{i\min }},$ and ${T_{i\max }}$ represent the $i$th cable tension, the minimum allowable tension, and the maximum allowable tension, respectively. Eq.~\eqref{eq:J5_3-14} can be solved easily using solvers such as CVX~\cite{cvx} to find the optimal value of ${\lambda _3}$. After ${\lambda _3}$ is obtained, the corresponding optimal cable tensions ${\bf{T}}$ are calculated using \eqref{eq:J5_3-12}. Compared to the method of stiffness maximization in the softest direction in~\cite{H.Jamshidifar2017}, \eqref{eq:J5_3-14} provides a simpler yet effective approach. In this research, the above algorithm (used to calculate ${\bf{T}}$ from \eqref{eq:J5_3-12}) is combined with the controller design to meet the control objective while simultaneously satisfying the required stiffness along each motion axis.
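Since \eqref{eq:J5_3-14} has a single decision variable, it can also be solved in closed form by intersecting the intervals implied by each box constraint, instead of calling a general-purpose solver such as CVX. The sketch below illustrates this; the vectors $\mathbf{D}_A$, $\mathbf{E}_A$ and the tension limits are hypothetical, and Python is used only for illustration:

```python
import numpy as np

def max_lambda3(D, E, T_min, T_max):
    """Solve eq. (3-14) in closed form: maximize lam3 subject to
    T_min <= lam3 * D + E <= T_max elementwise. Each constraint restricts
    lam3 to an interval; the answer is the top of their intersection."""
    lo, hi = -np.inf, np.inf
    for d, e, tmin, tmax in zip(D, E, T_min, T_max):
        if d > 0:
            lo, hi = max(lo, (tmin - e) / d), min(hi, (tmax - e) / d)
        elif d < 0:
            lo, hi = max(lo, (tmax - e) / d), min(hi, (tmin - e) / d)
        elif not (tmin <= e <= tmax):
            return None                      # infeasible constraint
    return hi if lo <= hi else None          # maximal feasible lam3

# Illustrative coefficients (D_A and E_A are not taken from the paper):
D = np.array([ 1.0, 1.0, -0.5, -0.5, 0.8, 0.8])
E = np.array([100., 100., 300., 300., 120., 120.])
lam3 = max_lambda3(D, E, T_min=np.full(6, 10.0), T_max=np.full(6, 400.0))
T = lam3 * D + E                             # optimal tensions, eq. (3-12)
```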
\subsection{Control Strategies}\label{subsec:J5_ControlStrategies}
For the proposed configuration of the HCDPR shown in~\autoref{fig:J5_PlanarModel}, the four upper cables, the two lower cables, and the 2-DOF robot arm are based on position control, force control, and torque control, respectively; i.e., their corresponding inputs are positions (cable lengths), forces, and joint torques. Furthermore, the elastic cables reduce the overall stiffness of the robot, so vibrations become a serious problem for precise control~\cite{Behzadipour2006,G.Meunier2009,M.Rushton2016Acc}. Another major problem is maintaining cable tensions to keep the stiffness of the robot sufficiently large. As mentioned above, the goal of this paper is to develop an integrated control system for the HCDPR to reduce vibrations and improve motion accuracy and performance. To achieve this goal, different controllers can be designed, such as PID, LQR, and feed-forward controllers. In this study, PID-based control strategies are designed to control the motion of the HCDPR.
In addition, for the actual HCDPR system, since the driving cables are flexible, the positions of the mobile platform and the actual cable lengths cannot be computed directly from the measurements of the encoders (embedded in the corresponding actuators). In this case, the upper cables are treated as rigid bodies with the given cable lengths while the lower cable tensions are optimized. Furthermore, each cable is fitted with a force sensor that provides the tension magnitude to the feedback control system to ensure that the HCDPR has the desired stiffness. The control strategy includes tracking the motions of the mobile platform and the robot arm as well as optimizing the cable tensions to satisfy the required stiffness of the robot.
\begin{figure*}[!t]\centering
\includegraphics[width=130mm]{figures/J5_ControlStructure.pdf}
\caption{Control structures of the HCDPR. (a) The desired inputs are cable lengths and robot arm joint variables; (b) The desired inputs are joint variables.}\label{fig:J5_ControlStructure}
\end{figure*}
Based on the above method, the proposed control structures of the HCDPR are shown in~\autoref{fig:J5_ControlStructure}. \autoref{fig:J5_ControlStructure}(a) shows the desired inputs being cable lengths $({\color{black}{L_{1}},{L_{6}}})$ and robot arm joint variables $({\color{black}\theta_{1}},{\color{black}\theta_{2}})$. In this case, the goal is to control the rigid HCDPR for the desired $({\color{black}{L_{1}},{L_{6}}})$ using PID controller I and the desired $({\color{black}\theta_{1}},{\color{black}\theta_{2}})$ using PID controller II, respectively. \autoref{fig:J5_ControlStructure}(b) represents the desired inputs as joint variables $\mathbf{q}=({\color{black}x_m},{\color{black}z_m},{{\color{black}\theta_m}},{\color{black}\theta_{1}},{\color{black}\theta_{2}})$. In this case, suppose $({\color{black}x_m},{\color{black}z_m},{{\color{black}\theta_m}})$ (i.e., ${P_m}({x_m},{z_m},{\color{black}\theta_m})$) is given (e.g., using external cameras to track the trajectories). In the control scheme, the corresponding PID controllers continuously calculate errors as the difference between the desired and actual values. The controllers’ outputs are then used to command the cables and the robot arm actuators to drive the HCDPR. Also, the optimal cable tensions are obtained using \eqref{eq:J5_3-12}.
The developed HCDPR system with position controllers is shown in \autoref{fig:J5_ControlStructure}(a) and \autoref{fig:J5_ControlStructure}(b). The system consists of the system dynamics in~\autoref{sec:J5_SystemModeling}, the redundancy resolution derived in~\autoref{subsec:J5_RedundancyResolution}, and the stiffness maximization approach proposed in~\autoref{subsec:J5_MaxStiffness}.
Define an error vector $\mathbf{e}(t)$ for the above controllers as
\begin{align}
\mathbf{e}(t) =
\left\{\begin{array}{l}
[L_1(t) \;\; L_6(t)]^T- [{\tilde L_1}(t)\;\; {\tilde L_6}(t)]^T \;\;\;\: {\rm{PID{~}controller{~}I}}\\
[\theta_1(t) \;\; \theta_2(t)]^T- [{\tilde \theta_1}(t)\;\; {\tilde \theta_2}(t)]^T \quad {\rm{PID{~}controller{~}II}}\\
\mathbf{q}(t) - \tilde {\mathbf{q}}(t)\\
\end{array} \right.
\label{eq:J5_3-15}
\end{align}
where $\tilde{( \cdot )}$ denotes actual values. Based on the diagram shown in~\autoref{fig:J5_ControlStructure}, the control law is designed as
\begin{align}
\begin{bmatrix}
\mathbf{u}_m(t)\\
\mathbf{u}_a(t)
\end{bmatrix}
= {K_p}\mathbf{e}(t) + {K_i}\int_0^t {\mathbf{e}(t)dt} + {K_d}\frac{{d\mathbf{e}(t)}}{{dt}}
\label{eq:J5_3-16}
\end{align}
where ${K_p}$, ${K_i}$, and ${K_d}$ are the proportional, integral, and derivative gains, respectively, and $\mathbf{u}_m$ and $\mathbf{u}_a$ represent the control inputs to the mobile platform and the robot arm, respectively.
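A discrete-time realization of the control law \eqref{eq:J5_3-16} can be sketched as follows. Python is used only for illustration (the paper's simulations are in MATLAB); the sample time and the error value are hypothetical, while the gains are those of PID controller I in Case 1 below:

```python
class PID:
    """Discrete form of the control law in eq. (3-16), one gain set per
    loop. The integral uses rectangular integration and the derivative
    is a backward difference over the sample time dt."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, e):
        self.integral += e * self.dt
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * de

# Gains of PID controller I (cable-length loop); dt is a hypothetical 1 ms.
ctrl = PID(Kp=2e2, Ki=10.0, Kd=0.0, dt=1e-3)
u = ctrl.update(e=0.01)   # control input for a 1 cm cable-length error
```

In the full scheme, one such controller drives $\mathbf{u}_m$ from the cable-length (or pose) errors and another drives $\mathbf{u}_a$ from the arm joint errors.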
\section{Numerical Results and Discussion}
\label{sec:J5_NumericalResults}
To evaluate the control performance of the design in~\autoref{sec:J5_ControlDesign}, the following cases are studied. All the scenarios are implemented using MATLAB 2019a (The MathWorks, Inc.) on a Windows 7 x64 desktop PC (Intel Core i7-4770, 3.4 GHz CPU and 8 GB RAM), and the initial condition is ${{\bf{q}}_0} = {[{\rm{0}},{\rm{0}},0,0,0]^T}$.
\begin{itemize}
\item Case 1: Control the CDPR by Given Cable Lengths $({\color{black}{L_{1}},{L_{6}}})$
\end{itemize}\par
In the first case, assume ${\color{black}{L_{1}},{L_{6}}},{\color{black}\theta_{1}}$, and ${\color{black}\theta_{2}}$ are obtained from the actuator encoders. PID controller I (with parameters $K_p = 2 \times 10^2, K_i = 10, {~}{\rm{and}}{~} K_d = 0$) is applied to control the upper cables, where ${{\color{black}L_{1}}} = 1.35~\rm{m}$ and ${{\color{black}L_{6}}} = 1.35~\rm{m}$. Meanwhile, PID controller II (with parameters $K_p = 6 \times 10^2, K_i = 20, {~}{\rm{and}}{~} K_d = 1 \times 10^2$) is applied to control the robot arm, and the desired joint variables $[{\color{black}\theta_{1}},{\color{black}\theta_{2}}]$ are equal to $[0,0]$, i.e., the robot arm is to be kept stationary. The results in Figure~\ref{subfig:J5_Case_1_a} and Figure~\ref{subfig:J5_Case_1_b} show that the vibrations are not damped out using these PID controllers.
\begin{figure}[h]
\centering
\subfigure[]{\label{subfig:J5_Case_1_a}\includegraphics[width=42mm]{figures/J5_1_L16_q45_Response.pdf}}
\subfigure[]{\label{subfig:J5_Case_1_b}\includegraphics[width=42mm]{figures/J5_1_q123_endeffector.pdf}}\\
\caption{Responses of Case 1. (a) Errors between the desired inputs and the actual outputs and (b) trajectories of the center of mass of the mobile platform and the end-effector.}
\label{fig:J5_Case_1}
\end{figure}
\begin{itemize}
\item Case 2: Control the CDPR by Given ${P_m}({x_m},{z_m},{\color{black}\theta_m})$
\end{itemize}\par
In this case, suppose measurements of ${P_m}({x_m},{z_m},{\color{black}\theta_m})$ (i.e., $({\color{black}x_m},{\color{black}z_m},{{\color{black}\theta_m}})$) are available (e.g., vision-based feedback). When the PID controller is applied ($K_p = 5 \times 10^5, K_i = 3.5 \times 10^7, {~}{\rm{and}}{~} K_d = 1.1 \times 10^4$), the desired positions of the mobile platform $[{\color{black}x_m},{\color{black}z_m},{{\color{black}\theta_m}},{\color{black}\theta_{1}},{\color{black}\theta_{2}}]$ are set to $[{{2 \times 10^{-3}}},{{4 \times 10^{-3}}},0,0,0]$. The corresponding results are shown in Figure~\ref{subfig:J5_Case_2_a}, Figure~\ref{subfig:J5_Case_2_b}, and Figure~\ref{subfig:J5_Case_2_c}. The results show that the errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$ go to zero very quickly (in about 0.3 seconds), and the dynamic inputs (cable lengths, cable tensions, and robot arm joint torques) applied to the HCDPR stabilize quickly. In this case, the upper cable tensions settle at the set point in less than 0.3 seconds. In addition, the vibrations are well controlled when this PID controller is applied.
In summary, based on the results of Cases 1 and 2, when only the desired ${{\color{black}L_{1}}}$, ${{\color{black}L_{6}}}$, ${\color{black}\theta_{1}}$, and ${\color{black}\theta_{2}}$ are given, the vibrations in the actual positions of all degrees of freedom still need to be damped out (or controlled better).
\begin{figure}[!t]
\centering
\subfigure[]{\label{subfig:J5_Case_2_a}\includegraphics[width=42mm]{figures/J5_2_StateResponse.pdf}}
\subfigure[]{\label{subfig:J5_Case_2_b}\includegraphics[width=42mm]{figures/J5_2_Inputs.pdf}}\\
\subfigure[]{\label{subfig:J5_Case_2_c}\includegraphics[width=84mm]{figures/J5_2_Trajectory.pdf}}\\
\caption{Responses of Case 2. (a) Errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$, (b) the dynamic inputs (cable lengths, cable tensions, and robot arm joint torques) applied to the HCDPR and the states of the upper cable tensions, and (c) trajectories of the center of mass of the mobile platform and the robot arm end-effector.}
\label{fig:J5_Case_2}
\end{figure}
\begin{itemize}
\item Case 3(a): The Mobile Platform is Fixed and the Robot Arm is Moving
\end{itemize}\par
In this case, the mobile platform is fixed (the cable lengths $({\color{black}{L_{1}},{L_{6}}})$ are given) and the robot arm is moving. Also, PID controller I ($K_p = 2 \times 10^2, K_i = 10, {~}{\rm{and}}{~} K_d = 0$) is applied to control the upper cables and PID controller II ($K_p = 6 \times 10^2, K_i = 20, {~}{\rm{and}}{~} K_d = 1 \times 10^2$) is applied to the robot arm. The desired trajectories are defined as
\begin{align}
\left\{ \begin{array}{l}
{{\color{black}L_{1}}} = {{\color{black}L_{6}}} = 1.35\\
{\color{black}\theta_{1}} = 0.1t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{\color{black}\theta_{2}} = 0.1t,\quad t \in \left[ {0,{t_{\max }}} \right]
\end{array} \right.
\label{eq:J5_4-1}
\end{align}
where $t$ and ${t_{\max }}$ are the current and maximum running time.
The corresponding results are shown in Figure~\ref{subfig:J5_Case_3a_a} and Figure~\ref{subfig:J5_Case_3a_b}. The results show that the tracking errors are not acceptable and the vibrations are not suppressed, with nearly sustained oscillations in cables $L_1$ and $L_6$.
\begin{figure}[!t]
\centering
\subfigure[]{\label{subfig:J5_Case_3a_a}\includegraphics[width=42mm]{figures/J5_3a_L16_q45_Response.pdf}}
\subfigure[]{\label{subfig:J5_Case_3a_b}\includegraphics[width=42mm]{figures/J5_3a_q123_endeffector.pdf}}\\
\caption{Responses of Case 3(a). (a) Errors between the desired inputs and the actual outputs and (b) trajectories of the center of mass of the mobile platform and the end-effector.}
\label{fig:J5_Case_3a}
\end{figure}
\begin{itemize}
\item Case 3(b): The Robot Arm is Fixed and the Mobile Platform is Moving
\end{itemize}\par
In this case, the robot arm is fixed and the mobile platform is moving. As in Case 3(a), PID controller I ($K_p = 2 \times 10^2, K_i = 10, {~}{\rm{and}}{~} K_d = 0$) is applied to control the upper cables and PID controller II ($K_p = 6 \times 10^2, K_i = 20, {~}{\rm{and}}{~} K_d = 1 \times 10^2$) is applied to the robot arm. The desired trajectories are given by
\begin{align}
\left\{ \begin{array}{l}
{{\color{black}L_{1}}} = 1.35 - 0.01t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{{\color{black}L_{6}}} = 1.35 + 0.01t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{\color{black}\theta_{1}} = 0\\
{\color{black}\theta_{2}} = 0
\end{array} \right.
\label{eq:J5_4-2}
\end{align}
where $t$ and ${t_{\max }}$ are the current and maximum running time.
In this case, the results are shown in Figure~\ref{subfig:J5_Case_3b_a} and Figure~\ref{subfig:J5_Case_3b_b}. The results again show that tracking errors are not satisfactory and vibrations are not damped out using the two PID controllers.
\begin{figure}[!t]
\centering
\subfigure[]{\label{subfig:J5_Case_3b_a}\includegraphics[width=42mm]{figures/J5_3b_L16_q45_Response.pdf}}
\subfigure[]{\label{subfig:J5_Case_3b_b}\includegraphics[width=42mm]{figures/J5_3b_q123_endeffector.pdf}}\\
\caption{Responses of Case 3(b). (a) Errors between the desired inputs and the actual outputs and (b) trajectories of the center of mass of the mobile platform and the end-effector.}
\label{fig:J5_Case_3b}
\end{figure}
\begin{itemize}
\item Case 4(a): The Mobile Platform is Fixed and the Robot Arm is Moving
\end{itemize}\par
In this case, the mobile platform is fixed and the robot arm is moving, i.e., the robot arm moves from one point to another. When the PID controller is applied ($K_p = 5 \times 10^5, K_i = 3.5 \times 10^7, {~}{\rm{and}}{~} K_d = 1.1 \times 10^4$), the desired trajectories of the mobile platform are described as
\begin{align}
\left\{ \begin{array}{l}
{\color{black}x_m} = 0\\
{\color{black}z_m} = 0\\
{{\color{black}\theta_m}} = 0\\
{\color{black}\theta_{1}} = t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{\color{black}\theta_{2}} = - t,\quad t \in \left[ {0,{t_{\max }}} \right]
\end{array} \right.
\label{eq:J5_4-3}
\end{align}
where $t$ and ${t_{\max }}$ are the current and maximum running time.
The corresponding results are shown in Figure~\ref{subfig:J5_Case_4a_a}, Figure~\ref{subfig:J5_Case_4a_b}, and Figure~\ref{subfig:J5_Case_4a_c}. The results show that the errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$ go to zero very quickly, and the dynamic inputs applied to the HCDPR are quick to stabilize. Moreover, although the mobile platform remains stationary and only the robot arm moves from one point to another in the joint coordinate frame, the robot arm motion still generates reaction forces/moments which in turn create oscillations on the mobile platform. The states of the upper cable tensions are stabilized in less than 0.2 seconds. Because of the action of the PID controller, vibrations of the HCDPR are well controlled in this case.
\begin{figure}[!t]
\centering
\subfigure[]{\label{subfig:J5_Case_4a_a}\includegraphics[width=42mm]{figures/J5_4a_StateResponse.pdf}}
\subfigure[]{\label{subfig:J5_Case_4a_b}\includegraphics[width=42mm]{figures/J5_4a_Inputs.pdf}}\\
\subfigure[]{\label{subfig:J5_Case_4a_c}\includegraphics[width=84mm]{figures/J5_4a_Trajectory.pdf}}\\
\caption{Responses of Case 4(a). (a) Errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$, (b) the dynamic inputs (cable lengths, cable tensions, and arm joint torques) applied to the HCDPR and the states of the upper cable tensions, and (c) trajectories of the center of mass of the mobile platform and the end-effector.}
\label{fig:J5_Case_4a}
\end{figure}
\begin{itemize}
\item Case 4(b): The Robot Arm is Fixed and the Mobile Platform is Moving
\end{itemize}\par
In this case, the robot arm is fixed and the mobile platform is moving. When the PID controller is applied ($K_p = 5 \times 10^5, K_i = 3.5 \times 10^7, {~}{\rm{and}}{~} K_d = 1.1 \times 10^4$), the desired trajectories of the mobile platform are as follows
\begin{align}
\left\{ \begin{array}{l}
{\color{black}x_m} = - 0.1t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{\color{black}z_m} = - 0.05t,\quad t \in \left[ {0,{t_{\max }}} \right]\\
{{\color{black}\theta_m}} = 0\\
{\color{black}\theta_{1}} = 0\\
{\color{black}\theta_{2}} = 0
\end{array} \right.
\label{eq:J5_4-4}
\end{align}
where $t$ and ${t_{\max }}$ are the current and maximum running time.
\begin{figure}[!t]
\centering
\subfigure[]{\label{subfig:J5_Case_4b_a}\includegraphics[width=42mm]{figures/J5_4b_StateResponse.pdf}}
\subfigure[]{\label{subfig:J5_Case_4b_b}\includegraphics[width=42mm]{figures/J5_4b_Inputs.pdf}}\\
\subfigure[]{\label{subfig:J5_Case_4b_c}\includegraphics[width=84mm]{figures/J5_4b_Trajectory.pdf}}\\
\caption{Responses of Case 4(b). (a) Errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$, (b) the dynamic inputs (cable lengths, cable tensions, and robot arm joint torques) applied to the HCDPR and the states of the upper cable tensions, and (c) trajectories of the center of mass of the mobile platform and the end-effector.}
\label{fig:J5_Case_4b}
\end{figure}
The results are shown in Figure~\ref{subfig:J5_Case_4b_a}, Figure~\ref{subfig:J5_Case_4b_b}, and Figure~\ref{subfig:J5_Case_4b_c}. The results show that the errors between the desired input ${\bf{q}}$ and the actual output ${\bf{\tilde q}}$ go to zero in about 0.25 seconds. The dynamic inputs (cable lengths, cable tensions, and robot arm joint torques) applied to the HCDPR are also quick to stabilize. Meanwhile, cable tensions $T_3$ and $T_4$ are always positive since the algorithm for maximizing the stiffness of HCDPR is applied. In this case, the upper cable tensions reach the set values in about 0.2 seconds. The tracking trajectory errors of the center of mass of the mobile platform shown in Figure~\ref{subfig:J5_Case_4b_c} are very small. In addition, when the proposed PID controller is implemented, the vibrations of the HCDPR are well controlled.
In summary, redundancy resolution and stiffness optimization methods for the HCDPR were introduced, and PID-based controllers were designed for position control of the HCDPR system. The performance of the HCDPR using the position PID controllers was analyzed via different scenarios: when the positions/orientations of the mobile platform and the end-effector positions of the rigid robot arm (or joint variables) are given, trajectory tracking and vibration suppression are handled well.
\section{Conclusions}
\label{sec:J5_Conclusions}
This paper proposed a kinematically constrained planar HCDPR which can harness the strengths and benefits of serial and cable-driven parallel robots. Based on this HCDPR, kinematics, dynamics, redundancy resolution, and stiffness maximization algorithms were developed. Controllers (I and II) were also designed to address trajectory tracking and vibration suppression problems. Control performance was analyzed using different scenarios, and the results showed that Controller II achieves the goal better. In addition, compared to the existing research, this paper examined the reaction performance, i.e., the cases where the mobile platform was fixed and the robot arm was moving, and where the robot arm was fixed and the mobile platform was moving, as well as the trajectory tracking of the end-effector; both results were satisfactory.
\section*{Acknowledgment}
The authors would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section*{Appendix A: HCDPR Derivations} \label{appendix:J5_HCDPR_Drivations}
The terms in \eqref{eq:J5_20} are computed as follows:
${\bf{M}}({\bf{q}}) = \begin{bmatrix}
{{M_{11}}}&{{M_{12}}}&{{M_{13}}}&{{M_{14}}}&{{M_{15}}}\\
{{M_{21}}}&{{M_{22}}}&{{M_{23}}}&{{M_{24}}}&{{M_{25}}}\\
{{M_{31}}}&{{M_{32}}}&{{M_{33}}}&{{M_{34}}}&{{M_{35}}}\\
{{M_{41}}}&{{M_{42}}}&{{M_{43}}}&{{M_{44}}}&{{M_{45}}}\\
{{M_{51}}}&{{M_{52}}}&{{M_{53}}}&{{M_{54}}}&{{M_{55}}}
\end{bmatrix}$, ${\bf{C}}( {{\bf{q,\dot q}}} ) = \begin{bmatrix}
{{C_{11}}}&{{C_{12}}}&{{C_{13}}}&{{C_{14}}}&{{C_{15}}}\\
{{C_{21}}}&{{C_{22}}}&{{C_{23}}}&{{C_{24}}}&{{C_{25}}}\\
{{C_{31}}}&{{C_{32}}}&{{C_{33}}}&{{C_{34}}}&{{C_{35}}}\\
{{C_{41}}}&{{C_{42}}}&{{C_{43}}}&{{C_{44}}}&{{C_{45}}}\\
{{C_{51}}}&{{C_{52}}}&{{C_{53}}}&{{C_{54}}}&{{C_{55}}}
\end{bmatrix}$, ${\bf{G}}( {\bf{q}} ) = {\begin{bmatrix}
{{G_1}}\\{{G_2}}\\{{G_3}}\\{{G_4}}\\{{G_5}}
\end{bmatrix}}$, and ${{\bf{P}}_{vs}}( {\bf{q}} ) = {\begin{bmatrix}
{{P_{vs1}}}&{{P_{vs2}}}&{{P_{vs3}}}&0&0
\end{bmatrix}^T}$, in which ${M_{11}} = {m_1} + {m_2} + {m_m}$, ${M_{21}} = 0$, ${M_{31}} = - {l_m} {m_1} \sin ( {{{\color{black}\theta_m}}} ) - {l_m} {m_2} \sin ( {{{\color{black}\theta_m}}} ) - {l_{c2}} {m_2} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) - {l_1} {m_2} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}}} )
- {l_{c1}} {m_1} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}}} )$, ${M_{41}} = - {l_{c2}} {m_2} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) - {l_1} {m_2} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}}} ) - {l_{c1}} {m_1} \sin ( {{{\color{black}\theta_m}} + {\color{black}\theta_{1}}} )$, $
{M_{51}} = - {l_{c2}} {m_2} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$, ${M_{12}} = 0$, ${M_{22}} = {m_1} + {m_2} + {m_m}$, $
{M_{32}} = {l_m} {m_1} \cos ( {{\color{black}\theta_m}} ) + {l_m} {m_2} \cos ( {{\color{black}\theta_m}} ) + {l_{c2}} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )$, $
{M_{42}} = {l_{c2}} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )$, $
{M_{52}} = {l_{c2}} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$, ${M_{13}} = - {m_2} ({l_1} \sin ( {{\color{black}\theta_m} +{\color{black}\theta_{1}}} ) + {l_m} \sin ( {{\color{black}\theta_m}} ) + {l_{c2}} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} +{\color{black}\theta_{2}}} )) - {m_1} ( { {l_{c1}} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_m} \sin ( {{\color{black}\theta_m}} )} )$, ${M_{23}} = {m_2} ({l_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_m} \cos ( {{\color{black}\theta_m}} ) + {l_{c2}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )) + {m_1} ( {l_{c1}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_m} \cos ( {{\color{black}\theta_m}} ))$, ${M_{33}} = {I_1} + {I_2} + {I_m} + {l_1}^2 {m_2} + {l_{c1}}^2 {m_1} + {l_{c2}}^2 {m_2} + {l_m}^2 {m_1} + {l_m}^2 {m_2} + 2 {l_{c2}} {l_m} {m_2} \cos ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )
+ 2 {l_1} {l_{c2}} {m_2} \cos ( {{\color{black}\theta_{2}}} ) + 2 {l_1} {l_m} {m_2} \cos ( {{\color{black}\theta_{1}}} ) + 2 {l_{c1}} {l_m} {m_1} \cos ( {{\color{black}\theta_{1}}} )$, ${M_{43}} = {m_2} {l_1}^2 + 2 {m_2} \cos ( {{\color{black}\theta_{2}}} ) {l_1} {l_{c2}} + {l_m} {m_2} \cos ( {{\color{black}\theta_{1}}} ) {l_1} + {m_1} {l_{c1}}^2 + {l_m} {m_1} \cos ( {{\color{black}\theta_{1}}} ) {l_{c1}} + {m_2} {l_{c2}}^2 + {l_m} {m_2} \cos ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) {l_{c2}} + {I_1} + {I_2}$, ${M_{53}} = {I_2} + {l_{c2}}^2 {m_2} + {l_{c2}} {l_m} {m_2} \cos ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {l_{c2}} {m_2} \cos ( {{\color{black}\theta_{2}}} )$, ${M_{14}} = - {m_2} ( { {l_1} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c2}} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )} ) - {l_{c1}} {m_1} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )$, ${M_{24}} = {m_2} ({l_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c2}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )) + {l_{c1}} {m_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )$, ${M_{34}} = {m_2} {l_1}^2 + 2 {m_2} \cos ( {{\color{black}\theta_{2}}} ) {l_1} {l_{c2}} + {l_m} {m_2} \cos ( {{\color{black}\theta_{1}}} ) {l_1} + {m_1} {l_{c1}}^2 + {l_m} {m_1} \cos ( {{\color{black}\theta_{1}}} ) {l_{c1}} + {m_2} {l_{c2}}^2 + {l_m} {m_2} \cos ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) {l_{c2}} + {I_1} + {I_2}$, ${M_{44}} = {m_2} {l_1}^2 + 2 {m_2} \cos ( {{\color{black}\theta_{2}}} ) {l_1} {l_{c2}} + {m_1} {l_{c1}}^2 + {m_2} {l_{c2}}^2 + {I_1} + {I_2}$, ${M_{54}} = {m_2} {l_{c2}}^2 + {l_1} {m_2} \cos ( {{\color{black}\theta_{2}}} ) {l_{c2}} + {I_2}$, ${M_{15}} = - {l_{c2}} {m_2} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$, ${M_{25}} = 
{l_{c2}} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$, ${M_{35}} = {I_2} + {l_{c2}}^2 {m_2} + {l_{c2}} {l_m} {m_2} \cos ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {l_{c2}} {m_2} \cos ( {{\color{black}\theta_{2}}} )$, ${M_{45}} = {m_2} {l_{c2}}^2 + {l_1} {m_2} \cos ( {{\color{black}\theta_{2}}} ) {l_{c2}} + {I_2}$, ${M_{55}} = {l_{c2}}^2 {m_2} + {I_2}$, ${C_{11}} = 0$, ${C_{12}} = 0$, ${C_{13}} = - {{\dot \theta}_m} ({m_2} ({l_1} \cos ( {\color{black}\theta_m} + {\color{black}\theta_{1}}) + {l_m} \cos ({{\color{black}\theta_m}} ) + {l_{c2}} \cos ({\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}})) + {m_1} ({l_{c1}} \cos ({\color{black}\theta_m} + {\color{black}\theta_{1}}) + {l_m} \cos ( {{\color{black}\theta_m}} )))$, ${C_{14}} = - (2 {{\dot \theta}_m} + {{\dot \theta}_{1}}) ( {l_{c2}} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {m_2} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ))$, $
{C_{15}} = - {l_{c2}} {m_2} \cos ({\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}) (2 {{\dot \theta}_m} + 2 {{\dot \theta}_{1}} + {{\dot \theta}_{2}})$, ${C_{21}} = 0$, ${C_{22}} = 0$, $
{C_{23}} = - {{\dot \theta}_m} ({m_2} ({l_1} \sin ({\color{black}\theta_m} + {\color{black}\theta_{1}}) + {l_m} \sin ({{\color{black}\theta_m}} ) + {l_{c2}} \sin( {\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}})) + {m_1} ({l_{c1}} \sin ({\color{black}\theta_m} + {\color{black}\theta_{1}}) + {l_m} \sin ( {{\color{black}\theta_m}} )))$, ${C_{24}} = - ({2 {{\dot \theta}_m} + {{\dot \theta}_{1}}} ) ({l_{c2}} {m_2} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} {m_2} \sin ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \sin ({\color{black}\theta_m} + {\color{black}\theta_{1}}))$, $
{C_{25}} = - {l_{c2}} {m_2} \sin ({\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}) (2 {{\dot \theta}_m} + 2 {{\dot \theta}_{1}} + {{\dot \theta}_{2}})$, ${C_{31}} = 0$, ${C_{32}} = 0$, ${C_{33}} = 0$, $
{C_{34}} = - {l_m} (2 {{\dot \theta}_m} + {{\dot \theta}_{1}}) ({l_1} {m_2} \sin ( {{\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \sin ({{\color{black}\theta_{1}}}) + {l_{c2}} {m_2} \sin ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ))$, $
{C_{35}} = - {l_{c2}} {m_2} ( {{l_m} \sin ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} \sin ( {{\color{black}\theta_{2}}} )} ) ( {2 {{\dot \theta}_m} + 2 {{\dot \theta}_{1}} + {{\dot \theta}_{2}}} )$,
${C_{41}} = 0$, ${C_{42}} = 0$, ${C_{43}} = {l_m} {{\dot \theta}_m} ( {{l_1} {m_2} \sin ( {{\color{black}\theta_{1}}} ) + {l_{c1}} {m_1} \sin ( {{\color{black}\theta_{1}}} ) + {l_{c2}} {m_2} \sin ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )} )$, $
{C_{44}} = 0$, ${C_{45}} = - {l_1} {l_{c2}} {m_2} \sin ( {{\color{black}\theta_{2}}} ) ( {2 {{\dot \theta}_m} + 2 {{\dot \theta}_{1}} + {{\dot \theta}_{2}}} )$, ${C_{51}} = 0$, ${C_{52}} = 0$, $
{C_{53}} = {l_{c2}} {m_2} {{\dot \theta}_m} ( {{l_m} \sin ( {{\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {l_1} \sin ( {{\color{black}\theta_{2}}} )} )$, ${C_{54}} = {l_1} {l_{c2}} {m_2} \sin ( {{\color{black}\theta_{2}}} ) ( {2 {{\dot \theta}_m} + {{\dot \theta}_{1}}} )$, ${C_{55}} = 0,$
${G_1} = 0$, ${G_2} = ( {{m_1} + {m_2} + {m_m}} )g$, $
{G_3} = {m_1}g{l_m} \cos ( {{\color{black}\theta_m}} ) + {m_2}g{l_m} \cos ( {{\color{black}\theta_m}} ) + {m_2}g{l_{c2}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} ) + {m_2}g{l_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )
+ {m_1}g{l_{c1}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} )$, $
{G_4} = {m_2}g{l_1} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {m_1} g{l_{c1}}\cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}}} ) + {m_2}g{l_{c2}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$, $
{G_5} = {m_2}g {l_{c2}} \cos ( {{\color{black}\theta_m} + {\color{black}\theta_{1}} + {\color{black}\theta_{2}}} )$,
${P_{vs1}} = {k_x}( {{\color{black}x_m} - {\color{black}x_{m0}}} )$, ${P_{vs2}} = {k_z}( {{\color{black}z_m} - {\color{black}z_{m0}}} )$, ${P_{vs3}} = {k_\theta }( {{\color{black}\theta_m} - {\color{black}\theta_{m0}}} )$, ${P_{vs4}} = 0$, ${P_{vs5}} = 0.$
\section*{Appendix B: Equivalent Cable-Driven Model}\label{appendix:J5_EquivalentModel}
\begin{theorem}
Assume an external force and moment ${\left[ {{{\mathbf{F}}_e},{{\mathbf{M}}_e}} \right]^T} \in {\mathbb{R}^3}$ are applied to the mobile platform of a 2D CDPR (shown in \autoref{fig:J5_EquivalentModel}). Then the equation ${{\mathbf{\tau }}_m} = - {\mathbf{AT}}$ is satisfied, where ${{\mathbf{\tau }}_m}: = {\left[ {{\tau _x},{\tau _z},{\tau _\theta }} \right]^T} \in {\mathbb{R}^3},$ ${\mathbf{A}} \in {\mathbb{R}^{3 \times n}},$ and ${\mathbf{T}}: = {\left[ {{T_1},{T_2}, \cdots ,{T_n}} \right]^T} \in {\mathbb{R}^n}$ represent the equivalent joint forces/torques applied to the mobile platform, the structure matrix, and the cable tensions, respectively. In \autoref{fig:J5_EquivalentModel}(b), ${\tau _x},{\tau _z},$ and ${\tau _\theta }$ are assumed to be always parallel to the axes $O{X_0}$, $O{Z_0}$, and $O{Y_0}$, respectively.
\end{theorem}
\begin{figure}[h]\centering
\includegraphics[width=90mm]{figures/J5_EquivalentModel.pdf}
\caption{An equivalent three-spring driven model for a 2D flexible CDPR. (a) A 2D flexible CDPR; (b) an equivalent three-spring driven model.}\label{fig:J5_EquivalentModel}
\end{figure}
\begin{proof}
Suppose an external force and moment ${\left[ {{{\mathbf{F}}_e},{{\mathbf{M}}_e}} \right]^T} \in {\mathbb{R}^3}$ are applied to the mobile platform (as shown in \autoref{fig:J5_EquivalentModel}(a) and \autoref{fig:J5_EquivalentModel}(b)) and generate the same position and orientation accelerations ${\left[ {{{\ddot x}_m},{{\ddot z}_m},{{\ddot \theta }_m}} \right]^T}$. Using the Newton-Euler formula, the following equations can be derived.
For the model shown in \autoref{fig:J5_EquivalentModel}(a), we have
\begin{align}
\sum\limits_{i = 1}^n {{{\bf{T}}_i}} + {{\bf{F}}_e} + \begin{bmatrix}
0\\
{{m_m}g}
\end{bmatrix} &= \begin{bmatrix}
{{m_m}{{\ddot x}_m}}\\
{{m_m}{{\ddot z}_m}}
\end{bmatrix}\nonumber\\
- \sum\limits_{i = 1}^n {\left( {{{{\bf{\hat L}}}_i}{T_i}} \right)} &= \begin{bmatrix}
{{m_m}{{\ddot x}_m}}\\
{{m_m}{{\ddot z}_m}}
\end{bmatrix} - \begin{bmatrix}
0\\
{{m_m}g}
\end{bmatrix} - {{\bf{F}}_e}
\label{eq:J5_appendix_A_1}
\end{align}
where ${{\bf{\hat L}}_i}$ denotes the unit cable vector. Furthermore,
\begin{align}
\sum\limits_{i = 1}^n {\left( {{{\bf{r}}_i} \times {{\bf{T}}_i}} \right)} + {{\bf{M}}_e} &= {I_m}{{\ddot \theta }_m}\nonumber\\
- \sum\limits_{i = 1}^n {\left( {\left( {{{\bf{r}}_i} \times {{{\bf{\hat L}}}_i}} \right){T_i}} \right)} &= {I_m}{{\ddot \theta }_m} - {{\bf{M}}_e}
\label{eq:J5_appendix_A_2}
\end{align}
Combining \eqref{eq:J5_appendix_A_1} and \eqref{eq:J5_appendix_A_2}, we get
\begin{align}
- \sum\limits_{i = 1}^n {\left\{ {\begin{bmatrix}
{{{{\bf{\hat L}}}_i}}\\
{{{\bf{r}}_i} \times {{{\bf{\hat L}}}_i}}
\end{bmatrix}{T_i}} \right\}} = \begin{bmatrix}
{{m_m}{{\ddot x}_m}}\\
{{m_m}{{\ddot z}_m}}\\
{{I_m}{{\ddot \theta }_m}}
\end{bmatrix} - \begin{bmatrix}
0\\
{{m_m}g}\\
0
\end{bmatrix} - \begin{bmatrix}
{{{\bf{F}}_e}}\\
{{{\bf{M}}_e}}
\end{bmatrix}
\label{eq:J5_appendix_A_3}
\end{align}
For the model shown in \autoref{fig:J5_EquivalentModel}(b), we also have
\begin{align}
\begin{bmatrix}
{{\tau _x}}\\
{{\tau _z}}
\end{bmatrix} &= \begin{bmatrix}
{{m_m}{{\ddot x}_m}}\\
{{m_m}{{\ddot z}_m}}
\end{bmatrix} - \begin{bmatrix}
0\\
{{m_m}g}
\end{bmatrix} - {{\bf{F}}_e}
\label{eq:J5_appendix_A_4}
\end{align}
and
\begin{align}
{\tau _\theta } &= {I_m}{{\ddot \theta }_m} - {{\bf{M}}_e}
\label{eq:J5_appendix_A_5}
\end{align}
Combining \eqref{eq:J5_appendix_A_4} and \eqref{eq:J5_appendix_A_5}, we get
\begin{align}
\begin{bmatrix}
{{\tau _x}}\\
{{\tau _z}}\\
{{\tau _\theta }}
\end{bmatrix} = \begin{bmatrix}
{{m_m}{{\ddot x}_m}}\\
{{m_m}{{\ddot z}_m}}\\
{{I_m}{{\ddot \theta }_m}}
\end{bmatrix} - \begin{bmatrix}
0\\
{{m_m}g}\\
0
\end{bmatrix} - \begin{bmatrix}
{{{\bf{F}}_e}}\\
{{{\bf{M}}_e}}
\end{bmatrix}
\label{eq:J5_appendix_A_6}
\end{align}
Clearly, the right sides of \eqref{eq:J5_appendix_A_3} and \eqref{eq:J5_appendix_A_6} are equal, so
\begin{align}
\begin{bmatrix}
{{\tau _x}}\\
{{\tau _z}}\\
{{\tau _\theta }}
\end{bmatrix} = - \sum\limits_{i = 1}^n {\left\{ {\begin{bmatrix}
{{{{\bf{\hat L}}}_i}}\\
{{{\bf{r}}_i} \times {{{\bf{\hat L}}}_i}}
\end{bmatrix}{T_i}} \right\}}
\label{eq:J5_appendix_A_7}
\end{align}
Eq. \eqref{eq:J5_appendix_A_7} is expanded as
\begin{align}
{{\bf{\tau }}_m}= - \underbrace {\begin{bmatrix}
{{{{\bf{\hat L}}}_1}}&{{{{\bf{\hat L}}}_2}}& \cdots &{{{{\bf{\hat L}}}_i}}& \cdots &{{{{\bf{\hat L}}}_n}}\\
{{{\bf{r}}_1} \times {{{\bf{\hat L}}}_1}}&{{{\bf{r}}_2} \times {{{\bf{\hat L}}}_2}}& \cdots &{{{\bf{r}}_i} \times {{{\bf{\hat L}}}_i}}& \cdots &{{{\bf{r}}_n} \times {{{\bf{\hat L}}}_n}}
\end{bmatrix}}_A{\bf{T}}
\label{eq:J5_appendix_A_8}
\end{align}
where ${{\mathbf{\hat L}}_i} = {\left[ {{{{\mathbf{\hat L}}}_{ix}},{{{\mathbf{\hat L}}}_{iz}}} \right]^T} \in {\mathbb{R}^2}$, ${{\mathbf{r}}_i} = {\left[ {{{\mathbf{r}}_{ix}},{{\mathbf{r}}_{iz}}} \right]^T} \in {\mathbb{R}^2}$, and ${\bf{T}}={\begin{bmatrix} {{T_1}}&{{T_2}}& \cdots &{{T_i}}& \cdots &{{T_n}} \end{bmatrix}^T} \in {\mathbb{R}^n}$. Hence, we get
\begin{align}
{{\bf{\tau }}_m} = - {\bf{AT}}
\label{eq:J5_appendix_A_9}
\end{align}
where $\bf{A}$ represents a structure matrix, determined by the position and orientation of the mobile platform.
Furthermore, ${{\bf{\tau }}_m}$ satisfies ${{\bf{\tau }}_m} = {\left[ {{\tau _x},{\tau _z},{\tau _\theta }} \right]^T} = {\left[ {{k_x}\left( {{x_m} - {x_{m0}}} \right),{k_z}\left( {{z_m} - {z_{m0}}} \right),{k_\theta }\left( {{\theta _m} - {\theta _{m0}}} \right)} \right]^T}$, where ${k_x},{k_z},{k_\theta } \in \mathbb{R}$ denote equivalent spring constants (along the X-axis, along the Z-axis, and about the Y-axis, respectively), and $\left( {{x_m},{z_m},{\theta _m}} \right)$ and $\left( {{x_{m0}},{z_{m0}},{\theta _{m0}}} \right)$ represent the current and initial positions and orientation of the mobile platform, respectively. For the six-cable HCDPR shown in \autoref{fig:J5_PlanarModel}, we then also get
\begin{align}
{\bf{AT}} = - {{\bf{\tau }}_m}
= - {\left[ {{k_x}\left( {{x_m} - {x_{m0}}} \right),{k_z}\left( {{z_m} - {z_{m0}}} \right),{k_\theta }\left( {{\theta _m} - {\theta _{m0}}} \right)} \right]^T}
\label{eq:J5_appendix_A_10}
\end{align}
Besides, suppose ${T_i} = \left\{ {\begin{array}{*{20}{l}}
{{k_i}({L_i} - {L_{0i}})}\\
{{T_i}}
\end{array}} \right.\quad \begin{array}{*{20}{c}}
{{\rm{input}}\;{\rm{is}}\;{\rm{the}}\;i{\rm{th}}\;{\rm{cable}}\;{\rm{length}}\;}\\
{{\rm{ input}}\;{\rm{is}}\;{\rm{the}}\;i{\rm{th}}\;{\rm{cable}}\;{\rm{tension}}}
\end{array}$. It is clear that \eqref{eq:J5_appendix_A_9} applies to cable position (cable length) control, force (cable tension) control, and hybrid cable position/force control.
\end{proof}
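As an illustration of \eqref{eq:J5_appendix_A_8}, the structure matrix of a planar CDPR can be assembled column by column. The four-cable geometry below is a toy symmetric example (not the HCDPR of the paper), and $\hat{\mathbf{L}}_i$ is taken here to point from the platform attachment toward the frame anchor — a direction-convention assumption for this sketch.

```python
import numpy as np

def structure_matrix(anchors, attachments):
    """Build A for a planar CDPR: column i stacks the unit cable vector
    Lhat_i and the scalar 2D cross product r_i x Lhat_i."""
    cols = []
    for a, r in zip(anchors, attachments):
        L = a - r
        Lhat = L / np.linalg.norm(L)
        cross = r[0] * Lhat[1] - r[1] * Lhat[0]   # r_i x Lhat_i in 2D
        cols.append(np.array([Lhat[0], Lhat[1], cross]))
    return np.column_stack(cols)

# Toy example: four cables anchored at the corners of a 2 m x 2 m frame,
# platform attachment points at +/- 0.1 m around the origin.
anchors = [np.array(p) for p in [(-1, 1), (1, 1), (-1, -1), (1, -1)]]
attach = [np.array(p) for p in [(-0.1, 0.1), (0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]]
A = structure_matrix(anchors, attach)
T = np.full(4, 10.0)                 # equal 10 N cable tensions
tau_m = -A @ T
print(A.shape, tau_m)
```

For this symmetric pose with equal tensions the cable wrenches cancel, so the equivalent joint forces/torques $\boldsymbol{\tau}_m$ come out as zero, as expected.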
\section*{Appendix C: Derivations of the Maximizing Stiffness of the HCDPR}\label{appendix:J5_SomeTermsStiffness}
\begin{align}
{{\bf{D}}_A} = {\begin{bmatrix}
{{D_{A1}}}&{{D_{A2}}}&{{D_{A3}}}&{{D_{A4}}}&{{D_{A5}}}&{{D_{A6}}}
\end{bmatrix}^T}
\label{eq:J5_appendix_B_1}
\end{align}
where
\[\begin{array}{l}
{D_{A1}} ={N}_{A13}\\
{\quad\quad\;\;} - \frac{{{N_{A12}}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A11}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{D_{A2}} ={N}_{A23}\\
{\quad\quad\;\;} - \frac{{{N}_{A22}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A21}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{D_{A3}} ={N}_{A33}\\
{\quad\quad\;\;} - \frac{{{N}_{A32}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A31}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{D_{A4}} ={N}_{A43}\\
{\quad\quad\;\;} - \frac{{{N}_{A42}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A41}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{D_{A5}} ={N}_{A53}\\
{\quad\quad\;\;} - \frac{{{N}_{A52}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A51}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{D_{A6}} ={N}_{A63}\\
{\quad\quad\;\;} - \frac{{{N}_{A62}\left( {\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}\\
{\quad\quad\;\;}+ \frac{{{N}_{A61}\left( {\left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A53} - {N}_{A63}} \right) - \left( {{N}_{A13} - {N}_{A23}} \right)\left( {{N}_{A52} - {N}_{A62}} \right)} \right)}}{{\left( {{N}_{A11} - {N}_{A21}} \right)\left( {{N}_{A52} - {N}_{A62}} \right) - \left( {{N}_{A12} - {N}_{A22}} \right)\left( {{N}_{A51} - {N}_{A61}} \right)}}
\end{array}\]
\begin{align}
{{\bf{E}}_A} = {\begin{bmatrix}
{{E_{A1}}}&{{E_{A2}}}&{{E_{A3}}}&{{E_{A4}}}&{{E_{A5}}}&{{E_{A6}}}
\end{bmatrix}^T}
\label{eq:J5_appendix_B_2}
\end{align}
where
\[\begin{array}{l}
{E_{A1}} = {T_A}_1 + \frac{{N_A}{{_1}_2}({{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})})}{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_1}_2}({{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}}))}}{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_1}_1}{( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})})}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{({N_A}{{_1}_1}({{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{E_{A2}} = {T_A}_2 + \frac{{N_A}{{_2}_2}({( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_2}_2}({( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_2}_1}({( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{{N_A}{{_2}_1}({( {{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
\end{array}\]
\[\begin{array}{l}
{E_{A3}} = {T_A}_3 + \frac{{N_A}{{_3}_2}( {( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_3}_2}({( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_3}_1}({( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})({{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{{N_A}{{_3}_1}({( {{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{E_{A4}} = {T_A}_4 + \frac{{N_A}{{_4}_2}( {( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_4}_2}({( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_4}_1}( {( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{{N_A}{{_4}_1}({( {{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{E_{A5}} = {T_A}_5 + \frac{{N_A}{{_5}_2}( {( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_5}_2}({( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_5}_1}( {( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{{N_A}{{_5}_1}({( {{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{E_{A6}} = {T_A}_6 + \frac{{N_A}{{_6}_2}( {( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_6}_2}({( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}- \frac{{N_A}{{_6}_1}( {( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{T_A}_6 - {T_A}_5 + {k_5}( {{L_5} - {L_6}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}\\
{\quad\quad\;\;}+ \frac{{N_A}{{_6}_1}({( {{N_A}{{_5}_2} - {N_A}{{_6}_2}})( {{T_A}_2 - {T_A}_1 + {k_1}( {{L_1} - {L_2}})}))}}
{{( {{N_A}{{_1}_1} - {N_A}{{_2}_1}})( {{N_A}{{_5}_2} - {N_A}{{_6}_2}}) - ( {{N_A}{{_1}_2} - {N_A}{{_2}_2}})( {{N_A}{{_5}_1} - {N_A}{{_6}_1}})}}
\end{array}\]
\bibliographystyle{asmems4}
\chapter{Miscellaneous definitions and facts}
\label{Miscellaneous_Facts}
In this Appendix, we list a number of useful
definitions and facts that we often refer to in
various chapters.
For an operator $X$, the \emph{trace norm}, the
\emph{Hilbert-Schmidt norm} and
the \emph{operator norm} are defined respectively
in terms of $|X| = \sqrt{X^\dagger X}$:
\begin{align*}
\|X\|_1 &= \Tr |X|, \\
\|X\|_2 &= \sqrt{\Tr |X|^2}, \\
\|X\|_{\infty} &= \lambda_{\max}(|X|),
\end{align*}
where $\lambda_{\max}(X)$ is the largest eigenvalue of $X$.
\begin{lemma}[Cf.~\cite{bhatia97}]
For any operator $X$,
\begin{align}
\|X\|_1 \leq \sqrt{d} \|X\|_2 \leq d \|X\|_{\infty},
\end{align}
where $d$ equals the rank of $X$.
\qed
\end{lemma}
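A quick numerical sanity check of this chain of inequalities, computing the three Schatten norms defined above from the singular values of a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# The singular values of X are the eigenvalues of |X| = sqrt(X^dagger X).
s = np.linalg.svd(X, compute_uv=False)
trace_norm = s.sum()                # ||X||_1
hs_norm = np.sqrt((s ** 2).sum())   # ||X||_2
op_norm = s.max()                   # ||X||_inf
d = np.linalg.matrix_rank(X)        # a random Gaussian matrix is full rank

assert trace_norm <= np.sqrt(d) * hs_norm + 1e-9
assert np.sqrt(d) * hs_norm <= d * op_norm + 1e-9
print(trace_norm, hs_norm, op_norm)
```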
\begin{lemma}[Cf.~\cite{bhatia97}]
\label{norm1_trace}
For any self-adjoint operator $X$,
\begin{align*}
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\norm{X}_1=\max_{-\openone \leq Q \leq \openone}\Tr(QX).
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[Cf.~\cite{bhatia97}]
\label{T_norm1_inequality}
For \aw{any self-adjoint operator $X$ and any operator $T$,}
\begin{align*}
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\norm{TXT^{\dagger}}_1 \leq \norm{T}_{\infty}^2 \norm{X}_1.
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[Cf.~Bhatia~\cite{bhatia97}]
\label{lemma:norm inequality}
For operators $A$, $B$, and $C$, and for any Schatten $p$-norm with $p \in [1,\infty]$, the following holds:
\begin{align*}
\norm{ABC}_p \leq \norm{A}_{\infty} \norm{B}_p \norm{C}_{\infty}.
\end{align*}
\end{lemma}
\begin{lemma}[Hoeffding's inequality, Cf.~\cite{DemboZeitouni}]
\label{Hoeffding's inequality}
Let $X_1, X_2,\ldots,X_n$ be independent random variables with $a_i \leq X_i \leq b_i$, and define their empirical mean as $\overline{X}=\frac{X_1+\ldots+X_n}{n}$. Then, for any $t>0$,
\begin{align*}
\Pr\left\{\overline{X}-\mathbb{E}(\overline{X}) \geq t\right\}
&\leq \exp\left(-\frac{2n^2t^2}{\sum_{i=1}^n(b_i-a_i)^2}\right),\\
\Pr\left\{\overline{X}-\mathbb{E}(\overline{X}) \leq -t\right\}
&\leq \exp\left(-\frac{2n^2t^2}{\sum_{i=1}^n(b_i-a_i)^2}\right).
\end{align*}
\end{lemma}
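For Bernoulli variables the upper tail can be computed exactly, so the bound can be checked numerically. The following sketch (an illustration, not part of the original text; standard library only) compares the exact binomial tail against the Hoeffding bound for variables in $[0,1]$:

```python
from math import comb, exp, ceil

def binom_upper_tail(n, p, t):
    """Exact P(mean - p >= t) for n i.i.d. Bernoulli(p) variables."""
    k0 = ceil(n * (p + t))
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

def hoeffding_bound(n, t):
    """Hoeffding bound exp(-2 n t^2), here with a_i = 0 and b_i = 1."""
    return exp(-2 * n * t**2)

for n in (10, 50, 200):
    for t in (0.05, 0.1, 0.2):
        assert binom_upper_tail(n, 0.5, t) <= hoeffding_bound(n, t)
```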
The fidelity of two states is defined as
\begin{align*}
F(\rho,\sigma)= \Tr \sqrt{\sigma^{\frac{1}{2}} \rho \sigma^{\frac{1}{2}}}.
\end{align*}
When one of the arguments is pure, say $\sigma = \ketbra{\psi}{\psi}$, this reduces to
\begin{align*}
F(\rho,\ketbra{\psi}{\psi})
=\sqrt{\Tr (\rho \ketbra{\psi}{\psi})}
=\sqrt{\bra{\psi}\rho\ket{\psi}}.
\end{align*}
\begin{lemma}
\label{lemma:FvdG}
The fidelity is related to the trace norm as follows \cite{Fuchs1999}:
\begin{align*}
1- F(\rho,\sigma) \leq \frac{1}{2}\|\rho-\sigma\|_1 \leq \sqrt{1-F(\rho,\sigma)^2} =: P(\rho,\sigma),
\end{align*}
where $P(\rho,\sigma)$ is the so-called purified distance,
or Bhattacharyya distance, between quantum states.
\qed
\end{lemma}
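These inequalities are easy to probe numerically. The sketch below (an illustration assuming NumPy; not part of the original text) computes $F(\rho,\sigma)=\Tr\sqrt{\sigma^{1/2}\rho\,\sigma^{1/2}}$ and the trace distance for the pure states $\ketbra{0}{0}$ and $\ketbra{+}{+}$, for which the right-hand inequality is tight:

```python
import numpy as np

def sqrtm_psd(M):
    """Square root of a positive semidefinite matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = Tr sqrt(sqrt(sigma) rho sqrt(sigma))."""
    s = sqrtm_psd(sigma)
    w = np.linalg.eigvalsh(s @ rho @ s)
    return float(np.sqrt(np.clip(w, 0, None)).sum())

def trace_distance(rho, sigma):
    return 0.5 * float(np.abs(np.linalg.eigvalsh(rho - sigma)).sum())

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
sigma = 0.5 * np.ones((2, 2), dtype=complex)      # |+><+|
F, T = fidelity(rho, sigma), trace_distance(rho, sigma)
assert 1 - F <= T + 1e-9 and T <= np.sqrt(1 - F**2) + 1e-9
```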
\begin{lemma}[{Pinsker's inequality, cf.~\cite{Schumacher2002}}]
\label{lemma:Pinsker}
The trace norm and relative entropy are related by
\begin{align*}
\|\rho-\sigma\|_1 \leq \sqrt{2 \ln 2 S(\rho\|\sigma)}.
\end{align*} \qed
\end{lemma}
\begin{lemma}[{Uhlmann~\cite{UHLMANN1976}}]
\label{lemma:Uhlmann1}
Let $\rho^A$ and $\sigma^A$ be two quantum states with fidelity $F(\rho^A,\sigma^A)$. Let $\rho^{AB}$ and $\sigma^{AC}$ be purifications of these two states, then there exists an isometry $V:{B\to C} $ such that
\begin{align*}
\phantom{========:}
F\left( (\openone_A \otimes V^{B\to C})\rho^{AB}(\openone_A \otimes V^{B\to C})^{\dagger},\sigma^{AC} \right)
= F(\rho^A,\sigma^A).
\phantom{========:} \blacksquare
\end{align*}
\end{lemma}
A consequence of this, due to \cite[Lemma~2.2]{Devetak2008_1}, is as follows.
\begin{lemma}
\label{lemma:Uhlmann2}
Let $\rho^A$ and $\sigma^A$ be two quantum states with trace distance
$\frac12 \|\rho^A-\sigma^A\|_1 \leq \epsilon$, and
let $\rho^{AB}$ and $\sigma^{AC}$ be purifications of these two states.
Then there exists an isometry $V:{B\to C}$ such that
\begin{align*}
\phantom{=========:}
\left\| (\openone_A \otimes V^{B\to C})\rho^{AB} (\openone_A \otimes V^{B\to C})^{\dagger}
\! \! -\! \sigma^{AC} \right\|_1 \!
\leq\! \sqrt{\epsilon(2-\epsilon)} \,.
\phantom{=========} \blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[{Fannes~\cite{Fannes1973}; Audenaert~\cite{Audenaert2007}}]
\label{Fannes-Audenaert inequality}
Let $\rho$ and $\sigma$ be two states on Hilbert space
$A$ with trace distance
$\frac12\|\rho-\sigma\|_1 \leq \epsilon$, then
\begin{align*}
|S(\rho)-S(\sigma)| \leq \epsilon\log |A| + h(\epsilon),
\end{align*}
where $h(\epsilon)=-\epsilon \log \epsilon -(1-\epsilon)\log (1-\epsilon)$ \aw{is the binary entropy}.
\end{lemma}
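A quick numerical check of the Fannes-Audenaert bound (an illustration assuming NumPy; the two diagonal qubit states are chosen only as an example):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def h(e):
    """Binary entropy."""
    return 0.0 if e <= 0 or e >= 1 else float(-e*np.log2(e) - (1-e)*np.log2(1-e))

rho, sigma = np.diag([0.7, 0.3]), np.diag([0.5, 0.5])
eps = 0.5 * float(np.abs(np.linalg.eigvalsh(rho - sigma)).sum())  # trace distance = 0.2
bound = eps * np.log2(2) + h(eps)                                 # eps*log|A| + h(eps)
assert abs(entropy(rho) - entropy(sigma)) <= bound + 1e-12
```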
\aw{There is also an extension} of the Fannes inequality for the conditional entropy;
this lemma is very useful \aw{especially} when the dimension of the
system \aw{conditioned on} is unbounded.
\begin{lemma}[{Alicki-Fannes~\cite{Alicki2004}; Winter~\cite{Winter2016}}]
\label{AFW lemma}
Let $\rho$ and $\sigma$ be two states on a bipartite Hilbert space
$A\otimes B$ with trace distance $\frac12\|\rho-\sigma\|_1 \leq \epsilon$, then
\begin{align*}
|S(A|B)_{\rho}\!-\!S(A|B)_{\sigma}| \leq 2\epsilon \log |A| \!+\! (1+\epsilon)h\left(\frac{\epsilon}{1+\epsilon}\right)\!.
\end{align*}\qed
\end{lemma}
\begin{lemma}
\label{full_support_lemma}
Let $\rho$ be a state with full support on the Hilbert space \aw{$A$}, i.e.~it
\aw{has positive minimum} eigenvalue $\lambda_{\min}$, and
let $\ket{\psi}^{AR}$ be a purification of $\rho$ on the Hilbert space \aw{$A \otimes R$}.
Then any purification of another state $\sigma$ on \aw{$A$} is of the form
\begin{align*}
(\openone_A \otimes T) \ket{\psi}^{AR},
\end{align*}
where $T$ is an operator acting on system $R$ with $\| T \|_{\infty} \le \frac{1}{\sqrt{\lambda_{\min}}}$.
\begin{proof}
Let $\rho=\sum_i \lambda_i \ketbra{e_i}{e_i}$ and $\sigma=\sum_j \mu_j \ketbra{f_j}{f_j}$ be spectral decompositions of the states. A purification of $\rho$ is $\ket{\psi}^{AR}=\sum_i \sqrt{\lambda_i} \ket{e_i} \ket{i}$. Define $\ket{\phi}^{AR}=\sum_j \sqrt{\mu_j} \ket{f_j} \ket{j}$.
Any purification of the state $\sigma$ is of the form
$(\openone_A \otimes V) \ket{\phi}^{AR}$ where $V$ is an isometry acting on system $R$.
Write the eigenbasis $\set{\ket{f_j}}$ as linear combinations of the eigenbasis $\set{\ket{e_i}}$, that is, $\ket{f_j}=\sum_i\alpha_{ij}\ket{e_i}$. Then, we have $\ket{\phi}^{AR}=\sum_{i,j} \sqrt{\mu_j} \alpha_{ij} \ket{e_i} \ket{j}$. Define the operator $P=\sum_{jk} p_{jk} \ketbra{j}{k}$ where $p_{jk}=\alpha_{kj} \sqrt{\frac{\mu_j}{\lambda_k}}$. It is immediate to see that
\begin{align*}
\ket{\phi}^{AR}=(\openone_A \otimes P) \ket{\psi}^{AR}.
\end{align*}
Thus, we have $(\openone_A \otimes V) \ket{\phi}^{AR} = (\openone_A \otimes VP) \ket{\psi}^{AR}$.
Defining $T=VP$, we then have
\begin{align*}
\lambda_{\max} (T^{\dagger}T)&=\lambda_{\max} (P^{\dagger}P)\\
&\leq \Tr (P^{\dagger}P) \\
&=\sum_{j,k}|p_{jk}|^2 \\
&= \sum_{j,k}\frac{|\alpha_{kj}|^2\mu_j}{\lambda_k} \\
&\leq \frac{1}{\lambda_{\min}},
\end{align*}
where the last inequality follows \aw{from the orthonormality of} the basis $\set{\ket{f_j}}$.
\end{proof}
\end{lemma}
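The operator $P$ constructed in the proof can be verified numerically. The sketch below (an illustration assuming NumPy; the spectra and the random eigenbasis are hypothetical example data) checks both $(\openone_A \otimes P)\ket{\psi}=\ket{\phi}$ and the norm bound:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.8, 0.2])                    # spectrum of rho (full support)
mu = np.array([0.6, 0.4])                     # spectrum of sigma
# random unitary whose columns are the eigenbasis {|f_j>} of sigma;
# {|e_i>} is taken to be the computational basis
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
alpha = Q                                     # alpha[i, j] = <e_i|f_j>

e = np.eye(2)
psi = sum(np.sqrt(lam[i]) * np.kron(e[i], e[i]) for i in range(2))        # |psi>^{AR}
phi = sum(np.sqrt(mu[j]) * np.kron(alpha[:, j], e[j]) for j in range(2))  # |phi>^{AR}

# P = sum_{jk} p_{jk} |j><k| with p_{jk} = alpha_{kj} sqrt(mu_j / lam_k)
P = np.array([[alpha[k, j] * np.sqrt(mu[j] / lam[k]) for k in range(2)]
              for j in range(2)])
assert np.allclose(np.kron(np.eye(2), P) @ psi, phi)      # (1 (x) P)|psi> = |phi>
assert np.linalg.svd(P, compute_uv=False)[0] <= 1 / np.sqrt(lam.min()) + 1e-9
```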
\begin{lemma}[Gentle Operator Lemma \cite{winter1999_2,Ogawa2007,wilde_2013}]
\label{Gentle Operator Lemma}
Let $\rho$ be a quantum state with diagonalization $\rho=\sum_j p_j \pi_j$, and let $\Lambda$ be an operator bounded as $0 \leq \Lambda \leq I$. If $\rho$ passes the test $\Lambda$ with probability at least $1-\epsilon$, i.e.~$\Tr(\rho \Lambda) \geq 1- \epsilon$, then
\begin{align*}
\sum_j p_j\norm{\pi_j-\sqrt{\Lambda}\pi_j \sqrt{\Lambda}}_1 \leq 2\sqrt{\epsilon}.
\end{align*}
\end{lemma}
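A numerical sanity check of the lemma (an illustration assuming NumPy; the spectrum of $\rho$ and the operator $\Lambda$ are hypothetical example data, with $\Lambda$ chosen in a random eigenbasis so that it does not commute with $\rho$):

```python
import numpy as np

rng = np.random.default_rng(2)

def trace_norm(X):
    return float(np.linalg.svd(X, compute_uv=False).sum())

p = np.array([0.6, 0.3, 0.1])              # spectrum of rho = sum_j p_j |j><j|
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
u = rng.uniform(0.7, 1.0, size=3)          # eigenvalues of Lambda, so 0 <= Lambda <= I
Lam = (U * u) @ U.T
sL = (U * np.sqrt(u)) @ U.T                # sqrt(Lambda)

eps = 1.0 - float(p @ np.diag(Lam))        # Tr(rho Lambda) = 1 - eps
lhs = sum(p[j] * trace_norm(np.outer(np.eye(3)[j], np.eye(3)[j])
                            - sL @ np.outer(np.eye(3)[j], np.eye(3)[j]) @ sL)
          for j in range(3))
assert lhs <= 2 * np.sqrt(eps) + 1e-9      # Gentle Operator Lemma bound
```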
\begin{definition}
Let $\rho_1,\ldots,\rho_n$ be quantum states on a $d$-dimensional Hilbert space $\mathcal{H}$ with diagonalizations $\rho_i=\sum_j p_{ij} \pi_{ij}$ and one-dimensional projectors $\pi_{ij}$. For $\alpha >0$ and $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ define the set of entropy typical sequences as
\begin{align}
\mathcal{T}_{\alpha,\rho^n }^n=\left\{j^n=j_1 j_2 \ldots j_n : \abs{\,\sum_{i=1}^n \left( -\log p_{i j_i}-S(\rho_i) \right) } \leq \alpha \sqrt{n}\right\}. \nonumber
\end{align}
Define the entropy typical projector of $\rho^n$ with constant $\alpha$ as
\begin{align*}
\Pi^n_{\alpha ,\rho^n }=\sum_{j^n \in \mathcal{T}_{\alpha, \rho^n}^n} \pi_{1j_1} \otimes \cdots \otimes \pi_{nj_n}.
\end{align*}
\end{definition}
\begin{lemma}[Cf.~\cite{csiszar_korner_2011}]\label{lemma:typicality properties }
There is a constant $0<\beta \leq \max \set{(\log 3)^2,(\log d)^2}$ such that the entropy typical projector has the following properties for any $\alpha >0$, $n>0$ and arbitrary state $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$:
\begin{align*}
\Tr \left(\rho^n \Pi^n_{\alpha ,\rho^n }\right) &\geq 1-\frac{\beta}{\alpha^2}, \\
2^{-\sum_{i=1}^n S(\rho_i)-\alpha \sqrt{n}} \Pi^n_{\alpha,\rho^n}
&\leq \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}
\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}\Pi^n_{\alpha,\rho^n},\quad \text{and} \\
\left(1-\frac{\beta}{\alpha^2}\right) 2^{ \sum_{i=1}^n S(\rho_i)-\alpha \sqrt{n}}
&\leq \Tr \left(\Pi^n_{\alpha ,\rho^n }\right)
\leq 2^{\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}.
\end{align*}
\end{lemma}
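For a source of commuting (diagonal) states, the entropy typical projector reduces to the classical typical set, and the first and third properties can be verified directly. The sketch below (an illustration, standard library only; note that instead of the universal constant $\beta$ of the lemma it uses the per-copy variance of $-\log p$, which suffices here by Chebyshev's inequality) checks an i.i.d. binary example:

```python
from itertools import product
from math import log2, sqrt, prod

p = [0.7, 0.3]                         # diagonal (classical) qubit state, i.i.d. copies
S = -sum(q * log2(q) for q in p)       # von Neumann entropy = Shannon entropy here
n, alpha = 10, 2.0

typical = [js for js in product(range(2), repeat=n)
           if abs(sum(-log2(p[j]) for j in js) - n * S) <= alpha * sqrt(n)]
prob = sum(prod(p[j] for j in js) for js in typical)   # Tr(rho^n Pi)
var = sum(q * (-log2(q) - S) ** 2 for q in p)          # per-copy variance of -log p

assert prob >= 1 - var / alpha**2                      # high-probability property
assert len(typical) <= 2 ** (n * S + alpha * sqrt(n))  # cardinality upper bound
```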
\chapter{Distributed compression of correlated classical-quantum sources}
\label{chap:cqSW}
In this chapter, we resume the investigation of the problem of independent
local compression of correlated quantum sources, the classical case
of which is covered by the celebrated Slepian-Wolf theorem.
We focus specifically on classical-quantum (cq) sources, for which one edge
of the rate region, corresponding to the compression of the classical
part, using the quantum part as side information at the decoder,
was previously determined by Devetak and Winter [Phys. Rev. A 68, 042301 (2003)].
Whereas the Devetak-Winter protocol attains a rate-sum equal to the von
Neumann entropy of the joint source, here we show that the full rate
region is much more complex, due to the partially quantum nature of
the source. In particular, in the opposite case of compressing the
quantum part of the source, using the classical part as side information
at the decoder, typically the rate sum is strictly larger
than the von Neumann entropy of the total source.
We determine the full rate region in the
\textit{generic} case, showing that, apart from the Devetak-Winter
point, all other points in the achievable region
have a rate sum strictly larger than the joint entropy. We can interpret
the difference as the price paid for the quantum encoder being ignorant
of the classical side information.
In the general case, we give an achievable rate region, via protocols
that are built on the decoupling principle, and the protocols of quantum
state merging and quantum state redistribution.
Our achievable region is matched almost by a single-letter converse,
which however still involves asymptotic errors and an unbounded
auxiliary system.
This chapter is based on the papers in \cite{ZK_cqSW_ISIT_2019,ZK_cqSW_2018}.
\section{The source and the compression model}\label{sec:Source and compression model}
\label{introduction}
The Slepian-Wolf problem of two sources
correlated in a known way, but subject to separate, local compression \cite{Slepian1973}
has proved to provide a unifying principle for much of Shannon
theory, giving rise to natural information theoretic interpretations
of entropy and conditional entropy, and exhibiting deep
connections with error correction, channel capacities and
mutual information (cf.~\cite{csiszar_korner_2011}).
The quantum case has been investigated for two decades, starting with the
second author's PhD thesis \cite{Winter1999} \aw{and subsequently in
\cite{Devetak2003}}, up to the systematic study \cite{Ahn2006},
and while we still do not have a complete understanding of the rate region,
it has become clear that the problem is of much higher
complexity than the classical case. The quantum Slepian-Wolf
problem, and specifically quantum data compression with side
information at the decoder, has resulted in many fundamental
advances in quantum information theory, including the protocols
of quantum state merging \cite{Horodecki2007,Abeyesinghe2009} and quantum state
redistribution \cite{Devetak2008_2},
which have given operational meaning to the conditional von
Neumann entropy, the mutual information and the conditional quantum mutual
information, respectively.
A variety of resource models and different tasks have been
considered over the years: The source and its recovery was
either modelled as an ensemble of pure states (following
Schumacher \cite{Schumacher1995}), or as a pure state between the encoders and a
reference system; the communication resource required was
either counted in qubits communicated, in addition either
allowing or disallowing entanglement, or it was counted in
ebits shared between the agents, but with free classical
communication. While this latter model has led to the most
complete picture of the general rate region, in the present
chapter we will go back to the original idea \cite{Schumacher1995,Winter1999}
of quantifying the communication, counted in qubits, between the
encoders and the decoder.
\bigskip
\textbf{Source model.}
The source model we shall consider is a hybrid classical-quantum one,
with two agents, Alice and Bob, whose task is to compress the
classical and quantum parts of the source, respectively. They then send their
shares to a decoder, \aw{Debbie}, who has to reconstruct the classical
information with high probability and the quantum information with
high (average) fidelity.
In detail, the source is characterised by a classical source, i.e.~a probability
distribution $p(x)$ on a discrete (in fact: finite) alphabet $\mathcal{X}$
which is observed by Alice, and a family of quantum states $\rho_x$
on a quantum system $B$, given by a Hilbert space of finite dimension $|B|$.
To define the problem of independent local compression (and
decompression) of such a correlated \aw{classical-quantum} source, we
shall consider purifications $\psi_x^{BR}$ of the $\rho_x$,
i.e.~$\rho_x^B = \Tr_R \psi_x^{RB}$. Thus the source can be described
compactly by the cq-state
\[
\omega^{XBR} = \sum_{x \in \mathcal{X}} p(x) \ketbra{x}{x}^X \otimes \ketbra{\psi_x}{\psi_x}^{BR}.
\]
We will be interested in the information theoretic limit of
many copies of $\omega$, i.e.
\begin{align*}
\omega^{X^n B^n R^n}
&= \left(\omega^{XBR}\right)^{\otimes n} \\
&= \sum_{x^n \in \mathcal{X}^n} p(x^n) \ketbra{x^n}{x^n}^{X^n}
\otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \! \!\!\!,
\end{align*}
where we use the notation
\begin{align*}
x^n &= x_1 x_2 \ldots x_n, \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n}, \\
p(x^n) &= p(x_1) p(x_2) \cdots p(x_n), \text{ and} \\
\ket{\psi_{x^n}} &= \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{align*}
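The many-copy source state can be built explicitly for small examples. The sketch below (an illustration assuming NumPy; the distribution and purifications are hypothetical example data) constructs $\omega^{XBR}$ with qubit systems $B$ and $R$ and tensors it $n$ times:

```python
import numpy as np

p = np.array([0.5, 0.5])
psi_x = [np.array([1.0, 0.0, 0.0, 0.0]),             # |psi_0> = |0>|0> on B R
         np.array([0.8**0.5, 0.0, 0.0, 0.2**0.5])]   # purifies rho_1 = diag(0.8, 0.2)

omega1 = sum(p[x] * np.kron(np.outer(np.eye(2)[x], np.eye(2)[x]),
                            np.outer(psi_x[x], psi_x[x])) for x in range(2))
n = 3
omega_n = omega1
for _ in range(n - 1):
    omega_n = np.kron(omega_n, omega1)               # (omega^{XBR})^{(x) n}

dim = 2 * 4                                          # |X| * |BR| per copy
assert abs(np.trace(omega_n) - 1.0) < 1e-9
assert omega_n.shape == (dim**n, dim**n)
```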
Alice and Bob, receiving their respective
parts of the source, separately encode these using the most general allowed
quantum operations; the compressed quantum information, living on
a certain number of qubits, is passed to the decoder who has to
output, again acting with a quantum operation, an element of $\mathcal{X}^n$
and a state on $B^n$, in such a way as to attain a low error probability
for $x^n$ and a high-fidelity approximation of the conditional quantum
source state, $\psi_{x^n}^{B^nR^n}$.
We consider two models: unassisted and entanglement-assisted, which we
describe formally in the following
(see Figs.~\ref{fig:una} and \ref{fig:ea}).
\medskip
\textbf{Unassisted model.}
With probability $p(x^n)$, the source provides Alice and Bob respectively
with states $\ket{x^n}^{X^n}$ and $\ket{\psi_{x^n}}^{B^nR^n}$.
Alice and Bob then perform their respective
encoding operations $\mathcal{E}_X:X^n \longrightarrow C_X$ and
$\mathcal{E}_B:B^n \longrightarrow C_B$,
\aw{respectively,} which are quantum operations, i.e.~completely positive and trace preserving (CPTP)
maps. \aw{Of course, as functions they act on the operators (density matrices) over
the respective input and output Hilbert spaces. But as there is no risk of confusion,
we will simply write the Hilbert spaces when
denoting a CPTP map. Note that since $X$ is a classical random variable, $\mathcal{E}_X$
is entirely described by a cq-channel.}
We call $R_X=\frac1n \log|C_X|$ and $R_B=\frac1n \log|C_B|$
\aw{the} quantum rates of the compression protocol.
Since Alice and Bob are required to act independently, the joint encoding operation
is $\mathcal{E}_X \otimes \mathcal{E}_B$.
The systems $C_X$ and $C_B$ are then sent to \aw{Debbie} who performs
a decoding operation \aw{$\mathcal{D}:C_X C_B \longrightarrow \hat{X}^n\hat{B}^n$}.
$\hat{X}^n$ and $\hat{B}^n$ are output systems whose Hilbert spaces are isomorphic to $X^n$ and $B^n$, respectively.
We define the \aw{extended source state}
\begin{align}\label{eq: extended source model}
&\omega^{X^n {X'}^n B^n R^n} \nonumber \\
\quad &= \left( \omega^{X{X'}BR}\right)^{\otimes n} \nonumber \\
\quad &=\!\!\!\! \! \!\sum_{x^n \in \mathcal{X}^n} \!\!\!\!p(x^n) \!\ketbra{x^n}{x^n}^{X^n} \!\!\!\otimes \ketbra{x^n}{x^n}^{{X'}^n}
\! \!\!\! \otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \!\!\!\!,
\end{align}
and say the encoding-decoding
scheme has average fidelity $1-\epsilon$ if
\begin{align}
\label{F_QCSW_unassisted1}
\overline{F} := F\left(\omega^{X^n {X'}^n B^n R^n },\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n }
\right)
\geq 1-\epsilon,
\end{align}
where
\[\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n }\!\!\!\!=\!\left(\mathcal{D} \circ (\mathcal{E}_X \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{{X'}^n R^n}\right) \omega^{X^n {X'}^n B^n R^n}\!\!\!, \]
and ${\operatorname{id}}_{{X'}^n R^n}$ is the identity (ideal) channel acting on ${X'}^n R^n$.
By the above fidelity definition and the linearity of CPTP maps,
the average fidelity defined in (\ref{F_QCSW_unassisted1}) \aw{can be expressed equivalently as}
\begin{align}
\label{F_QCSW_unassisted2}
\overline{F} \!\!= \!\!\!\!\!\sum_{x^n \in \mathcal{X}^n } \!\!\!p(x^n) F\! \left( \ketbra{x^n}{x^n}^{X^n}\!\!\! \!\otimes\! \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \!\!\!\!, \xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\right)\nonumber
\end{align}
where
\begin{align*}
&\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\!\!\!\!= \\
&\quad \!(\mathcal{D} \circ (\mathcal{E}_X \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{R^n}) \ketbra{x^n}{x^n}^{X^n} \!\!\!\otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n}\!\!\!\! .
\end{align*}
We say that $(R_X,R_B)$ is an (asymptotically) achievable rate pair if
there exist codes $(\mathcal{E}_X,\mathcal{E}_B,\mathcal{D})$ as above
for every $n$, with fidelity $\overline{F}$ converging to $1$,
and classical and quantum rates converging to $R_X$ and $R_B$, respectively.
\aw{The rate region is the set of all achievable rate pairs, as a subset of $\mathbb{R}_{\geq 0}^2$.}
\begin{figure}[!t]
\centering
\includegraphics[scale=.4]{unassisted_model.jpg}
\caption{\aw{Circuit diagram of} the unassisted model. Dotted lines are used to
demarcate domains controlled by the different participants.
The solid lines represent quantum information \aw{registers}.}
\label{fig:una}
\end{figure}
It is shown \aw{by Devetak and Winter} in \cite[Theorem 1]{Devetak2003} and \cite[Corollary IV.13]{Winter1999} that the \aw{rate pair
\begin{equation}
\label{eq:DW}
(R_X,R_B) = (S(X|B),S(B))
\end{equation}
is achievable and optimal. The optimality is two-fold: first, the rate sum
achieved, $R_X+R_B=S(XB)$, is minimal; and secondly, even with unlimited $R_B$,
$R_X \geq S(X|B)$. This shows that the Devetak-Winter point is an extreme point
of the rate region. Interestingly,} Alice can achieve the rate $S(X|B)$ using only classical
communication. However, we \aw{will} prove the converse theorems considering
a quantum channel for Alice, which are obviously stronger statements.
In Theorem \ref{theorem: generic full rate region}, we show that our system model is equivalent
to the model considered in \cite{Devetak2003,Winter1999}, which implies the achievability and
optimality of this rate pair in our system model.
\aw{We remark that in \cite{Devetak2003}, the rate $R_B=S(B)$ was not explicitly
discussed, but it is clear that it can always be achieved by Schumacher's quantum
data compression \cite{Schumacher1995}, introducing an arbitrarily small additional error.}
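The Devetak-Winter point is easy to evaluate for a concrete source. The sketch below (an illustration assuming NumPy; the ensemble $\{\ketbra{0}{0}, \ketbra{+}{+}\}$ with uniform prior is hypothetical example data) computes $(R_X, R_B) = (S(X|B), S(B))$:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

p = [0.5, 0.5]
rho_x = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # |0><0|
         0.5 * np.ones((2, 2))]                # |+><+|

S_B = entropy(p[0] * rho_x[0] + p[1] * rho_x[1])
# For a cq state, S(XB) = H(X) + sum_x p(x) S(rho_x); the rho_x here are pure.
S_XB = -sum(px * np.log2(px) for px in p)
R_X, R_B = S_XB - S_B, S_B                     # Devetak-Winter point (S(X|B), S(B))
assert R_X >= 0 and abs((R_X + R_B) - S_XB) < 1e-12
```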
\medskip
\textbf{Entanglement-assisted model.}
This model \aw{generalizes the unassisted model, and it is basically the same,}
except that we let Bob and \aw{Debbie} share entanglement \aw{and use it in encoding and decoding, respectively.
In addition, we take care of any possible entanglement that is produced in the process.
Consequently, while Alice's encoding $\mathcal{E}_X:X^n \longrightarrow C_X$ remains the same,
Bob's encoding and the decoding map now act as
$\mathcal{E}_B:B^n B_0 \longrightarrow C_B B_0'$ and
$\mathcal{D}:C_X C_B D_0 \longrightarrow \hat{X}^n\hat{B}^n D_0'$, respectively,
where $B_0$ and $D_0$ are $K$-dimensional quantum registers of Bob and \aw{Debbie},
respectively, designated to hold the initially shared entangled state, and $B_0'$ and $D_0'$
are $L$-dimensional registers for the entanglement produced by the protocol.
Ideally, both initial and final entanglement are given by maximally
entangled states $\Phi_K$ and $\Phi_L$, respectively.}
Correspondingly, we say \aw{that} the encoding-decoding scheme has average fidelity $1-\epsilon$ if
\begin{align}
\label{F_QCSW_assisted}
\overline{F} &:= F\left(\omega^{X^n {X'}^n B^n R^n }\otimes \Phi_L^{B_0'D_0'},
\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n B_0' D_0'} \right) \nonumber\\
&\geq 1-\epsilon,
\end{align}
where
\begin{align*}
\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n B_0' D_0'}
\!\!&=\!\left(\!\mathcal{D} \circ (\mathcal{E}_X \!\otimes \mathcal{E}_{BB_0} \!\otimes\! {\operatorname{id}}_{D_0}\!) \!\otimes\! {\operatorname{id}}_{{X'}^n R^n}\!\right)\\
& \quad \quad \quad \quad \quad \omega^{X^n {X'}^n B^n R^n }\otimes \Phi_L^{B_0'D_0'}.
\end{align*}
We call $E=\frac{1}{n}(\log K - \log L)$ the entanglement rate of the scheme.
The CPTP map $\mathcal{E}_{B}$ takes the input systems $B^nB_0$ to the compressed system
$C_B$ \aw{plus Bob's share of the output entanglement, $B_0'$.}
\aw{Debbie} applies the decoding operation $\mathcal{D}$ on the received systems
$C_XC_B$ and \aw{her part of the initial} entanglement $D_0$,
to produce an output state on systems $\hat{X}^n \hat{B}^n$ \aw{plus her share of the output
entanglement, $D_0'$}.
As in the unassisted model, $\hat{X}^n$ and $\hat{B}^n$ are output systems whose Hilbert spaces are isomorphic to $X^n$ and $B^n$, respectively.
We say $(R_X, R_B, E)$ is an (asymptotically) achievable rate triple if for all $n$
there exist entanglement-assisted codes as before, such that the
fidelity $\overline{F}$ converges to $1$, and
the classical, quantum and entanglement rates converge to
$R_X$, $R_B$ and $E$, respectively.
\aw{The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}_{\geq 0}^2\times\mathbb{R}$. In the following we will be mostly
interested in the projection of this region onto the first two coordinates,
$R_X$ and $R_B$, corresponding to unlimited entanglement assistance.}
\medskip
\aw{It is a simple consequence of the time sharing principle that the rate regions,
both for the unassisted and the entanglement-assisted model, are closed convex regions.
Furthermore, since one can always waste rate, the rate regions are open to the ``upper right''.
This means that the task of characterizing the rate regions boils down to describing
the lower boundary, which can be achieved by convex inequalities. In the Slepian-Wolf
problem, they are in fact linear inequalities, and we will find analogues of these
in the present investigation.}
\medskip
Stinespring's dilation theorem \aw{\cite{Stinespring1955}} states that any CPTP map can be built
from the basic operations of isometry and reduction to a subsystem by
tracing out the environment system.
Thus, the encoders and the decoder are without loss of generality isometries
\begin{align*}
U_X : {X^n} &\longrightarrow {C_X W_X}, \\
U_B : {B^n B_0} &\longrightarrow {C_B B_0' W_B}, \\
V : {C_X C_B D_0} &\longrightarrow {\hat{X}^n \hat{B}^n D_0' W_D},
\end{align*}
\aw{where the new systems $W_X$, $W_B$ and $W_D$ are the environment systems
of Alice, Bob and \aw{Debbie}, respectively. They simply remain locally in
possession of the respective party.}
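The passage to isometries can be made concrete for a single channel. The sketch below (an illustration assuming NumPy; the amplitude-damping channel is a standard example, not taken from the text) builds the Stinespring isometry $V=\sum_k K_k \otimes \ket{k}$ from Kraus operators and checks that tracing out the environment recovers the channel:

```python
import numpy as np

# Kraus operators of the amplitude damping channel with decay gamma
g = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])

# Stinespring isometry V = sum_k K_k (x) |k>_env : B -> B (x) E
V = np.kron(K0, np.array([[1.0], [0.0]])) + np.kron(K1, np.array([[0.0], [1.0]]))
assert np.allclose(V.T @ V, np.eye(2))               # V is an isometry

rho = np.array([[0.5, 0.5], [0.5, 0.5]])             # |+><+|
big = V @ rho @ V.T                                  # state on B (x) E
# partial trace over E reproduces the channel output sum_k K_k rho K_k^dag
out = big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(out, K0 @ rho @ K0.T + K1 @ rho @ K1.T)
```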
\begin{figure}[!t]
\centering
\includegraphics[scale=.4]{assisted_model.jpg}
\caption{\aw{Circuit diagram of} the entanglement-assisted model. Dotted lines are used to
demarcate domains controlled by the different participants.
The solid lines represent quantum information \aw{registers}.}
\label{fig:ea}
\end{figure}
The following lemma states that for a code of block length $n$ and error $\epsilon$,
the environment parts of the encoding and decoding isometries, i.e.~$W_X$, $W_B$
and $W_D$, as well as the entanglement output registers $B_0'$ and $D_0'$, are decoupled from
the reference $R^n$, conditioned on $X^n$.
This lemma plays a crucial role in the proofs of converse theorems.
\begin{lemma}(Decoupling condition)
\label{decoupling condition}
For a code of block length $n$ and error $\epsilon$ in the entanglement-assisted model,
let $W_X$, $W_B$ and $W_D$ be the environments of Alice's and Bob's encoding and of
Debbie's decoding isometries, respectively. Then,
\[
I(W_XW_BW_D B_0'D_0':\hat{X}^n\hat{B}^nR^n|{X'}^n)_\xi \leq n \delta(n,\epsilon) ,
\]
where $\delta(n,\epsilon) = 4\sqrt{6\epsilon}\log(|X| |B|) + \frac2n h(\sqrt{6\epsilon})$,
\aw{with the binary entropy $h(\epsilon)=-\epsilon \log \epsilon -(1-\epsilon)\log (1-\epsilon)$;}
the conditional mutual information is with respect to the state
\begin{align*}
&\xi^{{X'}^n \hat{X}^n \hat{B}^n B_0' D_0' W_XW_BW_D R^{n}} \\
&\quad \quad \quad =\left(V \circ (U_X \otimes U_{B} \otimes \openone_{D_0}) \otimes \openone_{{X'}^n R^n}\right) \\
&\quad \quad \quad \quad \quad \quad(\omega^{X^n {X'}^n B^n R^n } \otimes \Phi_K^{B_0D_0}) \\
&\quad \quad \quad \quad\quad \quad \quad\left(V \circ (U_X \otimes U_{B} \otimes \openone_{D_0}) \otimes \openone_{{X'}^n R^n}\right)^{\dagger}.
\end{align*}
\end{lemma}
\begin{proof}
We show that the fidelity criterion (\ref{F_QCSW_assisted})
implies that given $x^n$, the environments $W_X$, $W_B$ and $W_D$ of Alice's, Bob's and \aw{Debbie}'s isometries
are decoupled from the rest of the output systems.
The parties share $n$ copies of the state $\omega^{X^{\prime} X B R}$, where
Alice and Bob have access to systems $X^n$ and $B^n$, respectively, and ${X'}^n$ and $R^n$ are the reference systems.
Alice and Bob apply the following isometries to encode their systems, respectively:
\begin{align*}
U_{X}:{X^n} &\longrightarrow {C_X W_X}, \\
U_{B}:{B^n B_0} &\longrightarrow {C_B B_0' W_B},
\end{align*}
where Alice and Bob send respectively their compressed information $C_X$ and $C_B$ to \aw{Debbie}
and keep the environment parts $W_X$ and $W_B$ of their respective isometries for themselves.
\aw{Debbie} applies \aw{the} decoding isometry \aw{$V:{C_X C_B D_0} \longrightarrow {\hat{X}^n \hat{B}^n D_0' W_D}$
to the systems $C_XC_B$ and her part of the entanglement $D_0$,
to generate the output systems $\hat{X}^n \hat{B}^n D_0'$, with $W_D$ the environment of her isometry.
This leads to the following final state after decoding:
\begin{align*}
&\xi^{X'^n \hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n}\\
& =\! \!\!\!\sum_{x^n} \! p(x^n)\! \ketbra{x^n}^{X'^n} \!\!\! \!\! \otimes \! \ketbra{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n} \!\!\!\!,
\end{align*}
where
\begin{align*}
\ket{\xi_{x^n}}&^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n}
\!\! \\
&= V^{{C_XC_BD_0 \to \hat{X}^n\hat{B}^nD_0'W_D}}\\
& \quad \quad \big( U_X^{X^n \to C_XW_X}\!\!\ket{x^n}^{X^n}
\!\otimes U_B^{B^nB_0 \to C_BB_0'W_B}\\
&\quad \quad\quad \quad\quad \quad\quad\quad \quad\quad (\ket{\psi_{x^n}}^{B^nR^n}\!\ket{\Phi_K}^{B_0D_0}) \bigr).
\end{align*}
}
The fidelity defined in \aw{Eq.}~(\ref{F_QCSW_assisted}) is \aw{now} bounded as follows:
\begin{align}
\label{eq-B1}
\overline{F}
&= F\left(\omega^{X'^n X^n B^n R^n} \otimes \Phi_L^{B_0'D_0'},
\xi^{X'^n \hat{X}^n \hat{B}^n B_0' D_0' R^n} \right) \nonumber \\
&\leq F\left(\omega^{X'^n X^n B^n R^n}, \xi^{X'^n \hat{X}^n \hat{B}^n R^n} \right) \nonumber \\
&= \!\!\! \!\sum_{x^n \in \mathcal{X}^n} \!\!\! p(x^n) \!F\!\left(\!\ketbra{x^n}{x^n}^{X^n}\!\!\!\! \otimes\! \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n}\!\!\!\!,
\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\!\! \right) \nonumber \\
&= \!\!\!\! \sum_{x^n} p(x^n)\! \sqrt{\!\bra{x^n}\bra{\psi_{x^n}}^{B^n\! R^n} \!
\xi_{x^n}^{\hat{X}^n\hat{B}^nR^n} \!\ket{x^n}\!\ket{\psi_{x^n}}^{B^n\! R^n}\!} \nonumber \\
&\leq \sum_{x^n} p(x^n) \sqrt{\| \xi_{x^n}^{\hat{X}^n\hat{B}^n R^n} \|},
\end{align}
where in the first line $\xi^{X'^n \hat{X}^n \hat{B}^n B_0' D_0' R^n} =\left(\mathcal{D} \circ ({\operatorname{id}}_{X^nD_0} \otimes \mathcal{E}_{B}) \otimes {\operatorname{id}}_{X'^n R^n} \right) \omega^{X^n X'^n B^n R^n } \otimes \Phi_K^{B_0D_0}$.
The \aw{inequality in the second line} is due to the monotonicity of fidelity under partial trace,
and \aw{$\|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|$ denotes the operator norm, which in this case
of a positive semidefinite operator is the maximum eigenvalue of $\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}$.
Now, consider the Schmidt decomposition of the state
$\ket{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_DR^n}$ with respect to the partition
$\hat{X}^n \hat{B}^n R^n$ : $B_0' D_0'W_X W_B W_D$, i.e.
\begin{align*}
&\ket{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_DR^n}\\
& \quad =\!\! \sum_{i} \!\!\sqrt{\lambda_{x^n}(i)}\ket{v_{x^n}(i)}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\ket{w_{x^n}(i)}^{B_0'\!D_0' \!W_X\! W_B \!W_D}\!\!\!.
\end{align*}}
High average fidelity $\overline{F} \geq 1-\epsilon$ implies that \emph{on average}
the above states are approximately product states. In other words, the two subsystems are
nearly decoupled on average:
\begin{align}
\label{eq:almost-pure}
\sum_{x^n}& p(x^n) \!F\!\left(\! \ketbra{\xi_{x^n}}{\xi_{x^n}},
\xi_{x^n}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\!\!\otimes \xi_{x^n}^{B_0'D_0' W_X W_B W_D} \right) \nonumber\\
&=\! \sum_{x^n} \!p(x^n)\! \sqrt{\bra{\xi_{x^n}}
{\xi_{x^n}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\otimes \xi_{x^n}^{B_0'\! D_0'\! W_X \!W_B\! W_D}
\!\! \ket{\xi_{x^n}}}} \nonumber\\
&= \sum_{x^n} p(x^n) \sum_i \lambda_{x^n}(i)^{\frac32} \nonumber\\
&\geq \sum_{x^n} p(x^n) \|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|^{\frac32} \nonumber\\
&\geq \left( \sum_{x^n} p(x^n) \sqrt{\|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|} \right)^{3} \nonumber\\
&\geq (1-\epsilon)^3
\geq 1 - 3\epsilon,
\end{align}
where in the first line $\ketbra{\xi_{x^n}}$ is a state on systems ${\hat{X}^n \!\hat{B}^n \!B_0'\!D_0'\! W_X\!W_B W_D\!R^n}$. The \aw{inequality in the fifth line} follows from the convexity of $x^3$ for $x \geq 0$,
and \aw{in the sixth line we have used Eq.}~(\ref{eq-B1}).
Based on the relation between fidelity and trace distance \aw{(Lemma \ref{lemma:FvdG}),
we thus obtain for the product ensemble
\begin{align*}
& \zeta^{X'^n \hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n} \\
& \quad:= \sum_{x^n}\! p(x^n)\!\ketbra{x^n}^{X'^n}\!\!\! \!\otimes \xi_{x^n}^{\hat{X}^n \hat{B}^n R^n} \!\!\!\otimes \xi_{x^n}^{B_0'D_0' W_X W_B W_D}\! \!\!,
\end{align*}
that
\begin{align*}
&\| \xi - \zeta \|_1\\
&= \sum_{x^n} p(x^n)\\
&\>\> \norm{\! \ketbra{\xi_{x^n}\! }{\xi_{x^n}\! }^{\! \hat{X}^n \! \hat{B}^n\! B_0' \! D_0' \! W_X\! W_B \! W_D\! R^n}
\! \! \! \! \! \!-\! \xi_{x^n}^{\hat{X}^n\! \hat{B}^n\! R^n} \! \! \!\! \! \otimes\! \xi_{x^n}^{B_0'\! D_0' \! W_X\! W_B \! W_D\! } \! }_{\! 1} \\
& \leq 2\sqrt{6\epsilon}.
\end{align*}}
By \aw{the Alicki-Fannes inequality (Lemma \ref{AFW lemma}), this implies
\begin{align}
\label{decoupling_I}
& I(\hat{X}^n\hat{B}^nR^n : B_0'D_0'W_XW_B W_D | {X'}^n)_\xi \nonumber \\
&= S(\hat{X}^n \hat{B}^n R^n | {X'}^n)_\xi \nonumber \\
& \quad \quad \quad \quad - S(\hat{X}^n \hat{B}^n R^n | {X'}^n B_0'D_0' W_XW_B W_D)_\xi \nonumber \\
&\leq 2\sqrt{6\epsilon} \log(|X|^n |B|^n |R|^n) + 2 h(\sqrt{6\epsilon}) \nonumber \\
&\leq 2\sqrt{6\epsilon} \log(|X|^{2n} |B|^{2n}) + 2 h(\sqrt{6\epsilon}) \nonumber\\
& =: n \delta(n,\epsilon),
\end{align}
where we note in the second line that
$S(\hat{X}^n \hat{B}^n R^n|X'^n B_0'D_0' W_XW_B W_D)_\zeta
= S(\hat{X}^n \hat{B}^n R^n)_\zeta = S(\hat{X}^n \hat{B}^n R^n)_\xi$,
and in the fourth line that we can without loss of generality assume $|R| \leq |X| |B|$,
since that is the maximum possible dimension of the support of $\omega^R$.}
\end{proof}
\section{Quantum data compression with classical side information}
\label{sec:seiteninformation}
In this section, we assume that Alice sends her information to \aw{Debbie}
at rate $R_X=\log \abs{\mathcal{X}}$ such that \aw{Debbie} can decode it
perfectly, and we ask how much Bob can compress his system given that
the decoder has access to classical side information $X^n$.
\aw{This problem is a special case of the \emph{classical-quantum Slepian-Wolf problem}} (CQSW problem),
and we call it quantum data compression with classical side information at the decoder,
in analogy to the problem of classical data compression
with quantum side information at the decoder which is addressed
in \cite{Devetak2003,Winter1999}. Note that we do not discuss the
compression and decompression of the classical part at all, and that the
decoder \aw{may} depend directly on $x^n$.
\aw{Of course, by Shannon's data compression theorem \cite{Shannon1948}, $X$ can always be
compressed to a rate $R_X = H(X)$, introducing an arbitrarily small
error probability}.
We know from the previous section that Bob's encoder, in the entanglement-assisted model, is without loss of generality
an isometry \aw{$U \equiv U_B:{B^n B_0} \longrightarrow {C W B_0'}$,
taking $B^n$ and Bob's part of the entanglement $B_0$ to systems
$C\otimes W \otimes B_0'$, where $C \equiv C_B$ is the compressed
information of rate $R_B=\frac{1}{n}\log |C|$; $W \equiv W_B$ is the environment
of Bob's encoding CPTP map, and $B_0'$ is the register carrying Bob's share of
the output entanglement
(in this section, we drop subscript $B$ from $C_B$ and $W_B$).
Having access to side information $X^n$, \aw{Debbie} applies the decoding isometry
$V:X^n C D_0 \to \hat{X}^n \hat{B}^n W_D D_0'$ to generate
the output systems $\hat{X}^n \hat{B}^n$ and entanglement share
$D_0'$, where $W_D$ is the environment of the isometry.}
We call this encoding-decoding scheme a side information code of
block length $n$ and error $\epsilon$ for the entanglement-assisted model if the average fidelity
(\ref{F_QCSW_assisted}) is at least $1-\epsilon$.
Similarly, we define a side information code for the unassisted model by removing the corresponding systems of entanglement in the encoding and decoding isometries,
that is systems $B_0$, $B'_0$, $D_0$ and $D'_0$.
\medskip
To state our lower bound on the necessary compression rate, we introduce the
following quantity, which emerges naturally from the converse proof.
\begin{definition}
\label{I_delta}
For the state $\omega^{XBR} = \sum_x p(x) \ketbra{x}{x}^X \otimes \ketbra{\psi_x}{\psi_x}^{BR}$
and $\delta \geq 0$, define
\begin{align*}
&I_\delta(\omega) := \sup_{{\mathcal{T}}} I(X:W)_\sigma
\\
&\quad \quad \quad \text{ s.t. } {\mathcal{T}}:B\rightarrow W \text{ CPTP with }
I(R:W|X)_\sigma \leq \delta,
\end{align*}
where the mutual informations are understood with respect to the
state $\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega$ \aw{and $W$ ranges over arbitrary
finite dimensional quantum systems}.
Furthermore, let
\(
\widetilde{I}_0 := \lim_{\delta \searrow 0} I_\delta = \inf_{\delta>0} I_\delta.
\)
\end{definition}
Note that the system $W$ is not restricted in any way, which is
the reason why in this definition we have a supremum and an infimum, rather
than a maximum and a minimum.
(It is a simple consequence of compactness of the domain of optimisation,
together with the continuity of
the mutual information, that if we were to impose a bound on the dimension of $W$
in the above definition, the supremum in $I_\delta$ would be attained, and
for the infimum in $\widetilde{I}_0$, it would hold that $\widetilde{I}_0 = I_0$.)
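To make Definition~\ref{I_delta} concrete, the following numerical sketch evaluates the two mutual informations for two candidate channels; the toy source, the channel choices, and all identifiers are ours, not part of the paper's formalism. The constant channel is feasible at $\delta = 0$ with $I(X:W)=0$ (so $I_0 \geq 0$ trivially), while the identity channel ($W=B$) attains $I(X:W)=I(X:B)$ but is feasible only for $\delta \geq I(R:B|X)$.

```python
import numpy as np

def vn(rho, tol=1e-12):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log2(ev)))

def red(psi, keep):
    """Reduce a pure two-qubit state on B (x) R; keep=0 -> B, keep=1 -> R."""
    m = psi.reshape(2, 2)
    return m @ m.conj().T if keep == 0 else m.T @ m.conj()

# Toy source (ours): p(0) = p(1) = 1/2, |psi_0> = |00>,
# |psi_1> = cos(t)|00> + sin(t)|11> on B (x) R.
t = np.pi / 8
p = [0.5, 0.5]
psis = [np.array([1.0, 0, 0, 0]),
        np.array([np.cos(t), 0, 0, np.sin(t)])]

# Constant channel T(rho) = |0><0|: I(R:W|X) = 0 and I(X:W) = 0,
# so the pair (delta, I) = (0, 0) is always feasible.

# Identity channel (W = B): given X = x the state of WR is the pure
# |psi_x>, hence I(R:W|X=x) = 2 S(psi_x^B).
SW_X = sum(pi * vn(red(ps, 0)) for pi, ps in zip(p, psis))  # S(B|X)
I_RW_X = 2 * SW_X                       # constraint value I(R:W|X)
rhoW = sum(pi * red(ps, 0) for pi, ps in zip(p, psis))
I_XW = vn(rhoW) - SW_X                  # objective value I(X:W) = I(X:B)
print(I_RW_X, I_XW)
```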
\begin{lemma}
\label{lemma:I-delta}
The function $I_{\delta}(\omega)$ introduced in Definition~\ref{I_delta},
has the following properties:
\begin{enumerate}
\item It is a non-decreasing function of $\delta$.
\item It is a concave function of $\delta$.
\item It is continuous for $\delta > 0$.
\item For any two states
$\omega_1^{X_1B_1R_1}$ and $\omega_2^{X_2B_2R_2}$ and for $\delta,\delta_1,\delta_2 \geq 0$
\[
I_\delta(\omega_1\otimes\omega_2)
= \max_{\delta_1+\delta_2= \delta} \left (I_{\delta_1}(\omega_1) + I_{\delta_2}(\omega_2)\right).
\]
\item $I_{n\delta}(\omega^{\otimes n}) = n I_\delta(\omega)$.
\item $I_0$ and $\widetilde{I}_0$ are additive:
\begin{align*}
I_0(\omega_1\otimes\omega_2) &= I_0(\omega_1) + I_0(\omega_2)
\quad \text{and} \\ \quad
\widetilde{I}_0(\omega_1\otimes\omega_2) &= \widetilde{I}_0(\omega_1) + \widetilde{I}_0(\omega_2).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
1) The non-decrease with $\delta$ is evident from the definition.
2) For concavity, consider $\delta_1,\delta_2\geq 0$, $0<p<1$,
and let $\delta = p\delta_1+(1-p)\delta_2$.
Let furthermore channels ${\mathcal{T}}_i:B\rightarrow W_i$ be given ($i=1,2$) such that for the
states $\sigma_i^{XW_iR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}}_i)\omega$, $I(R:W_i|X)_{\sigma_i} \leq \delta_i$.
\par
Now define $W := W_1 \oplus W_2$, so that $W_1$ and $W_2$ can be considered
mutually orthogonal subspaces of $W$, and define the new channel
${\mathcal{T}} := p{\mathcal{T}}_1 + (1-p){\mathcal{T}}_2:B\rightarrow W$. By the chain rule for the mutual
information, one can check that w.r.t.~$\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega$,
\begin{align*}
I(R:W|X)_\sigma &= p I(R:W_1|X)_{\sigma_1} + (1-p) I(R:W_2|X)_{\sigma_2} \\
&\leq p\delta_1+(1-p)\delta_2 \\
&= \delta,
\end{align*}
and likewise
\[
I(X:W)_\sigma = p I(X:W_1)_{\sigma_1} + (1-p) I(X:W_2)_{\sigma_2}.
\]
Hence, $I_\delta \geq p I(X:W_1)_{\sigma_1} + (1-p) I(X:W_2)_{\sigma_2}$; by maximizing over
the channels, the concavity follows.
3) Properties 1 and 2 imply that it is continuous for $\delta > 0$.
4) First, we prove that
$I_\delta(\omega_1\otimes\omega_2) \leq \max_{\delta_1+\delta_2= \delta} \left( I_{\delta_1}(\omega_1) + I_{\delta_2}(\omega_2) \right)$;
the other direction \aw{of the inequality is trivial from the definition}.
Let ${\mathcal{T}}:B_1 B_2 \to W$ be a CPTP map such that
\begin{align}
\label{eq1}
\delta & \geq I(W:R_1R_2|X_1X_2) \nonumber \\
&= I(W:R_1|X_1X_2)+I(W:R_2|X_1R_1X_2) \\ \nonumber
&= I(WX_2:R_1|X_1)+I(WX_1R_1:R_2|X_2),
\end{align}
where the first equality is due to the chain rule, and the second equality is due to the independence of $\omega_1$ and $\omega_2$.
We now define the new systems $W_1:=WX_2$ and $W_2:=WX_1R_1$. \aw{Then we have,}
\begin{align}
\label{eq2}
I(W:X_1 X_2) &= I(W:X_2)+I(W:X_1|X_2) \\ \nonumber
&= I(W:X_2)+I(WX_2:X_1)\\ \nonumber
&\leq I(\underbrace{WX_1R_1}_{W_2}:X_2)+I(\underbrace{WX_2}_{W_1}:X_1),
\end{align}
where the second equality is due to the independence of $X_1$ and $X_2$.
The inequality follows \aw{from data processing}.
From \aw{Eq.~(\ref{eq1})} we know that $I(W_1:R_1|X_1)\leq \delta_1$ and $I(W_2:R_2|X_2)\leq \delta_2$
for some $\delta_1+\delta_2= \delta$. Thereby, from \aw{Eq.~(\ref{eq2})} we obtain
\begin{align*}
I_\delta(\omega_1 \otimes \omega_2) &\leq I_{\delta_1}(\omega_1 )+I_{\delta_2}(\omega_2 )\\
&\leq \max_{\delta_1+\delta_2=\delta} I_{\delta_1}(\omega_1 )+I_{\delta_2}(\omega_2 ).
\end{align*}
5) \aw{Now, the multi-copy additivity follows easily from property 4:}
Iterating property 4 of the lemma, we have
\[
I_{n\delta}(\omega^{\otimes n}) = \max_{\delta_1+\ldots+\delta_n=n\delta} \left( I_{\delta_1}(\omega)+\ldots+I_{\delta_n}(\omega) \right).
\]
\aw{Here, the right hand side is clearly $\geq n I_\delta(\omega)$ since we can choose all
$\delta_i = \delta$. By the concavity of $I_{\delta}(\omega)$ in $\delta$, on the other hand,
we have for any $\delta_1+\ldots+\delta_n=n\delta$ that
\[
\frac{1}{n}(I_{\delta_1}(\omega)+\ldots+I_{\delta_n}(\omega)) \leq I_{\delta}(\omega),
\]
so the maximum is attained at $\delta_i=\delta$ for all $i=1,\ldots,n$}.
6) Property 4 of the lemma also implies that $I_0$ and $\widetilde{I}_0$ are additive; for $\widetilde{I}_0$ one additionally takes the limit $\delta \searrow 0$.
\end{proof}
\begin{remark}
There is a curious resemblance of our function
$I_\delta$ with the so-called \emph{information bottleneck function} introduced
by Tishby \emph{et al.}~\cite{info-bottleneck}, whose generalization to
quantum information theory has recently been discussed \cite{Salek-QIB,Hirche-QIB}.
Indeed, the concavity and additivity properties of the two functions are proved
by the same principles, although it is not evident to us what, if any,
information-theoretic link exists between $I_\delta$ and the information bottleneck.
\end{remark}
\subsection{Converse bound}\label{subsec: Converse bound}
In this subsection, we use the properties of the function $I_{\delta}(\omega)$ (Lemma~\ref{lemma:I-delta}) to prove
a lower bound on Bob's quantum communication rate.
\begin{theorem}
\label{converse_QCSW}
In the entanglement-assisted model, consider any side information code of block length $n$ and error $\epsilon$.
Then, Bob's quantum communication rate is lower bounded as
\[
R_B \geq \frac12 \left( S(B)+S(B|X) - I_{\delta(n,\epsilon)} - \delta(n,\epsilon) \right),
\]
where $\delta(n,\epsilon) = 4\sqrt{6\epsilon}\log(|X| |B|) +\frac2n h(\sqrt{6\epsilon})$.
Any asymptotically achievable rate $R_B$ is consequently lower bounded as
\[
R_B \geq\frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right).
\]
\end{theorem}
\begin{proof}
\aw{As already discussed in the introduction to this section,}
the encoder of Bob is without loss of generality
an isometry $\aw{U:{B^n B_0} \longrightarrow {C W B_0'}}$.
The existence of a high-fidelity
decoder using $X^n$ as side information
implies that systems $W B_0'$ are decoupled from
system $R^n$ conditional on \aw{$X^n$; indeed, by Lemma \ref{decoupling condition},}
$I(R^n:WB_0'|X'^n) \leq n \delta(n,\epsilon)$.
The first part of the converse reasoning is as follows:
\begin{align*}
nR_B = \log |C|
&\geq S(C) \\
&\geq S(CW B_0')-S(W B_0') \\
&= S(B^n)+S(B_0)-S(W B_0'),
\end{align*}
\aw{where the second inequality is a version of subadditivity, and
the equality in the last line holds because the encoding isometry \aw{$U$}
does not change the entropy; furthermore, $B^n$ and $B_0$ are initially
independent.}
Moreover, the decoder can be dilated to an isometry $\aw{V}: X^n C D_0 \longrightarrow \hat{X}^n \hat{B}^n D_0' W_D$,
where $W_D$ and $D_0'$ are the environment of \aw{Debbie}'s decoding operation and
\aw{the output of \aw{Debbie}'s entanglement, respectively.}
Using the decoupling condition of Lemma \ref{decoupling condition} once more, we have
\begin{align*}
nR_B+S(D_0)&= \log |C| + S(D_0) \\
&\geq S(C)+S(D_0) \\
&\geq S(C D_0) \\
&\geq S(X^nCD_0|{X'}^n) \\
&= S(\hat{X}^n\hat{B}^n D_0' W_D|{X'}^n) \\
&= S(W B_0' R^n|{X'}^n) \\
&\geq S(R^n|{X'}^n)+S(WB_0'|{X'}^n) - n \delta(n,\epsilon) \\
&= S(B^n|X^n)+S(WB_0'|{X'}^n) - n \delta(n,\epsilon),
\end{align*}
\aw{where the third and fourth line are by subadditivity of the entropy;
the fifth line follows because the decoding isometry $V$ does not change the entropy.
The sixth line holds because for any given $x^n$
the overall state of the systems $\hat{X}^n\hat{B}^n B_0'D_0' W W_DR^n$ is pure.
The penultimate line is due to the decoupling condition (Lemma \ref{decoupling condition}),
and the last line follows because for a given $x^n$ the overall state
of the systems $B^nR^n$ is pure.}
Adding these two relations and dividing by $2n$, we obtain
\begin{align*}
R_B \geq \frac{1}{2} \left( S(B)+S(B|X) \right) - \frac{1}{2n} I({X'}^n:WB_0') - \frac{1}{2}\delta(n,\epsilon),
\end{align*}
where the terms $S(B_0)$ and $S(D_0)$ cancel each other because
$B_0$ and $D_0$ are $K$-dimensional quantum registers with maximally
entangled states $\Phi_K$.
In the above \aw{inequality, the mutual information on the right hand side}
is bounded as
\begin{align*}
I({X'}^n:WB_0') \leq I_{n\delta(n,\epsilon)}({\omega^{\otimes n}}) = nI_{\delta(n,\epsilon)}({\omega}).
\end{align*}
\aw{To see this, define the CPTP map $\mathcal{T}:B^n \longrightarrow \widetilde{W}:= WB_0'$ as
$\mathcal{T}(\rho):= \Tr_{CD_0} (U\otimes\openone)(\rho\otimes\Phi_K^{B_0D_0})(U\otimes\openone)^\dagger$.
Then we have $I(R^n:\widetilde{W}|{X'}^n) \leq n\delta(n,\epsilon)$, and hence
the above inequality follows directly from Definition \ref{I_delta}.}
The second statement of the theorem follows because
$\delta(n,\epsilon)$ tends to zero as \aw{$n \rightarrow \infty$ and $\epsilon \rightarrow 0$.}
\end{proof}
\begin{remark}
\label{rem:example}
Notice that the term $\frac{1}{n} I({X'}^n:WB_0')$ is not necessarily small.
For example, suppose \aw{that the source is of the form
$\ket{\psi_x}^{BR} = \ket{\psi_x}^{B'R} \otimes \ket{\psi_x}^{B''}$ for all $x$};
clearly it is possible to perform the coding task by
coding only $B'$ and trashing $B''$ (i.e.~putting it into $W$), because
by having access to $x$ the decoder can reproduce $\psi_x^{B''}$ locally. In
this setting, $\frac{1}{n} I({X'}^n:WB_0')$ typically does not go
to zero because ${B''}^n$ ends up in $W$.
\end{remark}
\subsection{Achievable rates}\label{subsec:Achievable rates}
In this subsection, we provide achievable rates \aw{both for the unassisted and entanglement-assisted} model.
\begin{theorem}
\label{State_merging_rate}
\aw{In the unassisted model, there exists a sequence of side information codes that
compress Bob's system $B^n$} at the asymptotic qubit rate
\begin{align*}
R_B = \frac{1}{2}\left( S(B)+S(B|X) \right).
\end{align*}
\end{theorem}
\begin{proof}
We recall that in a side information code, Bob aims to send his system $B^n$ to Debbie while she has access to side information system $X^n$ as explained at the beginning of this section.
We can use the fully quantum Slepian-Wolf protocol (FQSW), also called coherent state merging protocol
(\cite{Abeyesinghe2009}, Section 7), as a subprotocol since it considers the entanglement
fidelity as the decodability criterion, which is more stringent than
the average fidelity defined in (\ref{F_QCSW_unassisted1}).
Namely, let
\[
\ket{\Omega}^{X X^{\prime} B R}
=\sum_{x \in \mathcal{X}} \sqrt{p(x)} \ket {x}^{X} \ket {x}^{X'} \ket{\psi_{x}}^{B R}
\]
be the source in \aw{the} FQSW problem, where $B$ is the system to be compressed, $X$ is the side information at the decoder, $R$ and $X'$ are the reference systems. Bob applies the corresponding encoding map of the FQSW protocol ${\mathcal{E}}_B: B^n \longrightarrow C$ and sends system $C$ to Debbie who then applies the decoding map of the FQSW protocol ${\mathcal{D}}: X^n C \longrightarrow X^n \hat{B}^n$ to her side information system $X^n$ and the compressed information $C$ to reconstruct system $\hat{B}^n$. These encoding and decoding operations preserve the entanglement fidelity $F_e$ which is the decodability criterion of the FQSW problem:
\begin{align*}
\label{F_QCSW}
F_e &= F\left( \Omega^{X^n {X'}^n B^n R^n}, \left( \mathcal{D} \circ ({\operatorname{id}}_{X^n} \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{{X'}^n R^n} \right) \Omega^{X^n {X'}^n B^n R^n} \right) \\
&\leq F\left( \omega^{X^n {X'}^n B^n R^n}, \left( \mathcal{D} \circ ({\operatorname{id}}_{X^n} \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{{X'}^n R^n} \right) \omega^{X^n {X'}^n B^n R^n} \right) \\
&= \overline{F},
\end{align*}
where the inequality is due to the monotonicity of fidelity
under \aw{CPTP maps, namely the projective measurement on system $X'$ in the computational
basis $\{ \ketbra{x}{x}\}$}.
Therefore, if an encoding-decoding scheme attains an entanglement fidelity for
the FQSW problem going to $1$, then it will have the average fidelity for
the CQSW problem going to $1$ as well. Hence, the FQSW rate
\begin{align*}
R_B= \frac{1}{2}I(B:X^{\prime} R)_{\Omega}=\frac{1}{2}(S(B)_{\omega}+S(B|X)_{\omega}),
\end{align*}
\aw{is achievable.}
\end{proof}
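As a hedged numerical illustration of the rate formula, the snippet below evaluates $\frac12(S(B)+S(B|X))$, together with the entanglement rate $\frac12 I(X:B)$ distilled by coherent state merging, for a small two-symbol toy source; the source and all identifiers are our own example, not from the paper.

```python
import numpy as np

def vn(rho, tol=1e-12):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log2(ev)))

# Toy source (ours): X uniform on {0,1}, |psi_0> = |00>,
# |psi_1> = cos(t)|00> + sin(t)|11> on the two qubits B (x) R.
t = np.pi / 8
p = [0.5, 0.5]
psis = [np.array([1.0, 0, 0, 0]),
        np.array([np.cos(t), 0, 0, np.sin(t)])]

def rho_B(psi):
    m = psi.reshape(2, 2)        # indices (B, R)
    return m @ m.conj().T        # trace out R

S_B_given_X = sum(pi * vn(rho_B(ps)) for pi, ps in zip(p, psis))
S_B = vn(sum(pi * rho_B(ps) for pi, ps in zip(p, psis)))
R_B = 0.5 * (S_B + S_B_given_X)          # qubit rate of the merging code
E_distilled = 0.5 * (S_B - S_B_given_X)  # = (1/2) I(X:B) ebits
print(R_B, E_distilled)
```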
\begin{remark}
Notice that for the source considered at the end of the previous subsection
\aw{in Remark \ref{rem:example}}, where
$\ket{\psi_x}^{BR} = \ket{\psi_x}^{B'R} \otimes \ket {\psi_x}^{B''}$
for all $x$, we can achieve a rate strictly smaller than the rate
stated in the above theorem. The reason is that $R$ is only
entangled with $B'$, so clearly it is possible to perform the coding task by
coding only $B'$ and trashing $B''$ because by having access to $x$
the decoder can reproduce the state $\psi_x^{B''}$ locally.
Thereby, the rate $\frac{1}{2} (S(B')+S(B'|X))$ is achievable by
applying coherent state merging as above.
\end{remark}
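The gap described in the remark can be checked numerically. The following sketch builds a hypothetical product source $\ket{\psi_x}^{BR} = \ket{\psi_x}^{B'R} \otimes \ket{b_x}^{B''}$ (all states and identifiers are our own choices) and compares the full rate $\frac12(S(B)+S(B|X))$ against the reduced rate $\frac12(S(B')+S(B'|X))$:

```python
import numpy as np

def vn(rho, tol=1e-12):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log2(ev)))

# Hypothetical product source (ours): B' part has psi_x^{B'} below;
# B'' carries |0> for x = 0 and |+> for x = 1.
t = np.pi / 8
p = [0.5, 0.5]
rhoBp = [np.diag([1.0, 0.0]),
         np.diag([np.cos(t)**2, np.sin(t)**2])]     # psi_x^{B'}
b = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]

# B'' is pure given x, so S(B|X) = S(B'|X).
S_B_given_X = sum(pi * vn(r) for pi, r in zip(p, rhoBp))
rho_full = sum(pi * np.kron(r, np.outer(bx, bx))
               for pi, r, bx in zip(p, rhoBp, b))
rate_full = 0.5 * (vn(rho_full) + S_B_given_X)

rho_Bp = sum(pi * r for pi, r in zip(p, rhoBp))
rate_reduced = 0.5 * (vn(rho_Bp) + S_B_given_X)
print(rate_reduced, rate_full)
```

With these choices the reduced rate is strictly smaller, consistent with the remark.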
\medskip
The previous observation shows that, in general, the rate $\frac12(S(B)+S(B|X))$
from Theorem \ref{State_merging_rate} is not optimal. Looking for a systematic
way of obtaining better rates, we arrive at the following result in the entanglement-assisted
model.
\begin{theorem}
\label{QSR_achievability}
\aw{In the entanglement-assisted model, there exists a sequence of side information codes}
with the following asymptotic entanglement and qubit rates:
\begin{align*}
E&=\frac{1}{2}\left(I(C:W)_{\sigma}-I(C:X)_{\sigma}\right)
\quad \text{and}\quad\\
R_B&= \frac{1}{2}\left(S(B)_{\omega}+S(B|X)_{\omega}-I(X:W)_{\sigma} \right), \nonumber
\end{align*}
where $C$ and $W$ are, respectively, the system and environment of an
isometry $V:{B\rightarrow CW}$ on $\omega^{XBR}$
producing the state $\sigma^{XCWR} = (\openone_{XR}\otimes V)\omega^{XBR} (\openone_{XR}\otimes V)^{\dagger}$, such that $I(W:R|X)_{\sigma}=0$.
\end{theorem}
\begin{proof}
Notice that there is always an isometry $V:{B\rightarrow CW}$
with $I(W:R|X)_{\sigma}=0$; a trivial example is the isometry $V:{B\rightarrow B W}$, where $W$ is a trivial system in the state $\ketbra{0}^W$.
First, Bob applies \aw{the} isometry $V$ to each copy \aw{of the $n$ systems $B_1,\ldots,B_n$}:
\begin{align*}
&\sigma^{X X^{\prime} CWR} \\
&\quad \quad =\!\! (V^{B \to C W} \!\!\otimes\! \openone_{X X^{\prime}R})
\omega^{X X^{\prime} BR}
(V^{B \to C W} \!\!\otimes\! \openone_{X X^{\prime}R})^\dagger \\
&\quad \quad = \sum_x p(x) \ketbra{x}^{X}\otimes \ketbra{x}^{X^{\prime}}\otimes \ketbra{\phi_{x}}^{CWR}.
\end{align*}
Now consider the following source state, from which the state $\sigma^{X X^{\prime} CWR}$ is obtained by applying a projective measurement on system $X'$ in the computational
basis $\{ \ketbra{x}{x}\}$:
\[
\ket{\Sigma}^{X X^{\prime} CW R}
=\sum_{x \in \mathcal{X}} \sqrt{p(x)} \ket {x}^{X} \ket {x}^{X'} \ket{\phi_{x}}^{CW R}.
\]
For this source, suppose Bob and \aw{Debbie} hold
the $CW$ and $X$ systems, respectively, and Bob wishes to send system $C$ to \aw{Debbie} while keeping
$W$ for himself.
For many copies of the above state, \aw{the} parties can apply \aw{the} quantum state redistribution
(QSR) protocol \cite{Yard2009,Oppenheim2008} for transmitting $C$, having access to system $W$ as
side information at the encoder and to $X$ as side information at the decoder.
According to this protocol, Bob needs exactly the rate of
$R_B = \frac{1}{2}I(C:X^{\prime} R |X)_{\Sigma}
= \frac{1}{2}(S(B)_{\omega}+S(B|X)_{\omega}-I(X:W)_{\sigma})$
qubits of communication.
\aw{The protocol requires the rate of $\frac{1}{2} I(C:W)_{\Sigma}=\frac{1}{2} I(C:W)_{\sigma}$
ebits of entanglement shared between the encoder and decoder, and at the end of the protocol
the rate of $\frac{1}{2} I(C:X)_{\Sigma}=\frac{1}{2} I(C:X)_{\sigma}$ ebits of entanglement is
distilled between the encoder and the decoder (see equations (1) and (2) in \cite{Yard2009}).}
This protocol \aw{attains high} fidelity for \aw{the} state $\Sigma^{X^n {X'}^n C^n W^n R^n }$,
and consequently for \aw{the} state $\sigma^{X^n {X'}^n C^n W^n R^n }$ due to \aw{the} monotonicity
of fidelity under \aw{CPTP maps}:
\begin{align} \label{F_QSR}
1-\epsilon &\leq F\left( {\Sigma}^{X^n {X'}^n C^n W^n R^n} \otimes \Phi_L^{B_0'D_0'},
{\hat{\Sigma}}^{\hat{X}^n {X'}^n \hat{C}^n \hat{W}^n R^n B'_0 D'_0} \right) \nonumber \\
&\leq F\left( {\sigma}^{X^n {X'}^n C^n W^n R^n} \otimes \Phi_L^{B_0'D_0'},
{\hat{\sigma}}^{\hat{X}^n {X'}^n \hat{C}^n \hat{W}^n R^n B'_0 D'_0} \right),
\end{align}
where
\begin{align*}
&{\hat{\Sigma}}^{\hat{X}^n {X'}^n \hat{C}^n \hat{W}^n R^n B'_0 D'_0} \\
&\quad = \left( \mathcal{D} \circ ({\operatorname{id}}_{X^n D_0} \otimes \mathcal{E}_{CW B_0}) \otimes {\operatorname{id}}_{{X'}^n R^n} \right)
\left( \Sigma^{X^n {X'}^n C^n W^n R^n} \otimes \Phi_K^{B_0 D_0} \right),
\end{align*}
and
\begin{align*}
&{\hat{\sigma}}^{\hat{X}^n {X'}^n \hat{C}^n \hat{W}^n R^n B'_0 D'_0} \\
&\quad = \left( \mathcal{D} \circ ({\operatorname{id}}_{X^n D_0} \otimes \mathcal{E}_{CW B_0}) \otimes {\operatorname{id}}_{{X'}^n R^n} \right)
\left( \sigma^{X^n {X'}^n C^n W^n R^n} \otimes \Phi_K^{B_0 D_0} \right),
\end{align*}
and $\mathcal{E}_{CW B_0}$ and $\mathcal{D}$ are respectively the
encoding and decoding operations of the QSR protocol.
The condition $I(W:R|X)_{\sigma}=0$ implies that for every $x$ the systems $W$ and $R$ are decoupled:
\begin{align*}\label{Decoupling_0}
\phi_{x}^{WR}=\phi_{x}^W \otimes \phi_{x}^R.
\end{align*}
By Uhlmann's theorem \cite{UHLMANN1976,Jozsa1994_2},
there exist isometries $V_{x}:{C\rightarrow VB}$ for all $x \in \mathcal{X}$, such that
\[
(\openone\otimes V_{x}^{C\rightarrow VB}) \ket{\phi_{x}}^{CWR}
=\ket{\nu_{x}}^{VW} \otimes \ket{\psi_{x}}^{BR}.
\]
After applying the decoding operation $\mathcal{D}$ of QSR,
\aw{Debbie} applies the isometry $V_{x}:{C\rightarrow VB}$ for each $x$,
which does not change the fidelity (\ref{F_QSR}).
By tracing out the unwanted systems $V^n W^n$,
due to the monotonicity of \aw{the} fidelity under partial trace,
the fidelity defined in (\ref{F_QCSW_assisted}) \aw{will go to $1$} in this encoding-decoding scheme.
\end{proof}
\begin{remark}
In Theorem \ref{QSR_achievability}, the smallest achievable rate,
when unlimited entanglement is available,
is equal to $\frac{1}{2}(S(B)+S(B|X)-I_0)$.
This rate resembles the converse bound
$R_B \geq \frac{1}{2}(S(B)+S(B|X)-\widetilde{I}_0)$,
except \aw{that $\widetilde{I}_0 \geq I_0$}.
In the definition of $\widetilde{I}_0$, it is unclear whether the limit of $\delta$
going to 0 can be taken directly, because there is no dimension bound on
the systems $C$ and $W$, so compactness cannot be used to prove that
$\widetilde{I}_0$ and $I_0$ are equal.
\end{remark}
\aw{\begin{remark}
Looking again at the entanglement rate in Theorem \ref{QSR_achievability},
$E=\frac{1}{2}\left(I(C:W)_{\sigma}-I(C:X)_{\sigma}\right)$, we reflect that
there may easily be situations where $E\leq 0$, meaning that no entanglement is
consumed, and in fact no initial entanglement is necessary. In this case,
the theorem improves the rate of Theorem \ref{State_merging_rate} by the
amount $\frac12 I(X:W)$.
This motivates the definition of the following variant of $I_0$,
\begin{align*}
I_{0-}(\omega) :=& \sup I(X:W) \text{ s.t. } I(R:W|X)=0,\\
& I(C:W)-I(C:X) \leq 0,
\end{align*}
where the supremum is over all isometries $V:B\rightarrow CW$.
\par
As a corollary to these considerations, in the unassisted model
the rate $\frac{1}{2}\left(S(B)+S(B|X)-I_{0-} \right)$ is achievable.
\end{remark}}
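For the trivial isometry $V:B\to BW$ with $W$ in the fixed state $\ket{0}$, one has $I(C:W)=0$, $I(C:X)=I(X:B)$ and $I(X:W)=0$, so $E=-\frac12 I(X:B)\leq 0$ and the qubit rate reduces to that of Theorem \ref{State_merging_rate}. A hedged numerical check of this special case, on the same toy source as above (all identifiers are ours):

```python
import numpy as np

def vn(rho, tol=1e-12):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log2(ev)))

# Toy source (ours); trivial isometry: C = B, W = |0>.
t = np.pi / 8
p = [0.5, 0.5]
rhoB_x = [np.diag([1.0, 0.0]),
          np.diag([np.cos(t)**2, np.sin(t)**2])]

S_B_given_X = sum(pi * vn(r) for pi, r in zip(p, rhoB_x))
S_B = vn(sum(pi * r for pi, r in zip(p, rhoB_x)))
I_XB = S_B - S_B_given_X            # I(C:X) for the trivial isometry
I_CW = 0.0                          # W is a fixed pure state
E = 0.5 * (I_CW - I_XB)             # entanglement rate of the theorem
R_B = 0.5 * (S_B + S_B_given_X)     # I(X:W) = 0 here
print(E, R_B)
```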
\subsection{Optimal compression rate for generic sources}
\label{sec:generic side info}
In this subsection, we find the optimal compression rate for
\emph{generic} sources, by which we mean any source except for a
submanifold of lower dimension within the set of all sources.
Concretely, we will consider sources where there is at least one $x$
for which the reduced state $\psi_x^B= \Tr_R \ketbra{\psi_x}{\psi_x}^{BR}$
has full support on $B$.
In this setting, coherent state merging as a subprotocol gives the optimal
compression rate, so not only does the protocol not use any initial
entanglement, but some entanglement is distilled at the end of the protocol.
\begin{theorem}
\label{theorem:generic optimal rate}
In both the unassisted and entanglement-assisted models, for any side information code of a generic source, the asymptotic compression rate $R_B$ of Bob is lower bounded as
\begin{align*}
R_B \geq \frac{1}{2}\left(S(B)+S(B|X)\right),
\end{align*}
so the protocol of Theorem \ref{State_merging_rate} has optimal rate for a generic source.
Moreover, in that protocol no prior entanglement is needed and
a rate $\frac{1}{2}I(X:B)$ ebits of entanglement
is distilled between the encoder and decoder.
\end{theorem}
\begin{proof}
The converse bound of Theorem \ref{converse_QCSW} states that
\aw{the} asymptotic quantum communication rate of Bob is lower bounded as
\begin{align}
R_B \geq \frac{1}{2}\left(S(B)+S(B|X)-\widetilde{I}_0 \right), \nonumber
\end{align}
where $\widetilde{I}_0$ \aw{comes from} Definition \ref{I_delta}. We will show that for generic sources, $\widetilde{I}_0 = I_{0}= 0$.
Moreover, Theorem \ref{State_merging_rate} states that using coherent state
merging, the asymptotic qubit rate of $\frac{1}{2}(S(B)+S(B|X))$ is achievable,
that no prior entanglement is required and a rate of
$\frac{1}{2}I(X:B)$ ebits of entanglement is distilled between the encoder and the decoder.
We show that for any CPTP map ${\mathcal{T}}:B\to W$ that acts on a generic $\omega^{XBR}$ and produces the
state $\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega^{XBR}$ with $I(R:W|X)_{\sigma}\leq \delta$
for $\delta \geq 0$, the quantum mutual information satisfies
$I(X:W)_{\sigma} \leq \delta' \log |X| +2h (\frac12\delta')$, where $\delta'$ is defined in
\aw{Eq.~(\ref{phix_phi0_distance}) below}.
Thus, we obtain
\[
\widetilde{I}_0 = \lim_{\delta \searrow 0} I_\delta = 0.
\]
\aw{To show this claim, we proceed as follows.} From $I(R:W|X)_{\sigma}\leq \delta$ we have
\begin{align*}
I(R:W|X=x)_\sigma \leq \frac{\delta}{p(x)} \quad \quad \forall x \in \mathcal{X},
\end{align*}
\aw{so} by Pinsker's inequality \cite{Schumacher2002} we obtain
\begin{align*}
\left\| \phi_x^{W R}- \phi_x^{W} \otimes \phi_x^{R} \right\|_1 \leq \sqrt{\frac{2 \delta \ln 2}{p(x)}} \quad \quad \forall x \in \mathcal{X}.
\end{align*}
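The Pinsker step can be sanity-checked numerically. The snippet below (random states; all helper names are ours) verifies $\|\rho^{WR}-\rho^W\otimes\rho^R\|_1 \leq \sqrt{2 \ln 2 \, I(W:R)}$, with the mutual information computed in bits:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_rho(d):
    """Random full-rank density matrix of dimension d."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = g @ g.conj().T
    return r / np.trace(r).real

def log2m(rho, tol=1e-12):
    """Matrix base-2 logarithm via eigendecomposition."""
    ev, U = np.linalg.eigh(rho)
    ev = np.clip(ev, tol, None)
    return U @ np.diag(np.log2(ev)) @ U.conj().T

def rel_ent(rho, sigma):
    """Quantum relative entropy D(rho||sigma) in bits."""
    return float(np.trace(rho @ (log2m(rho) - log2m(sigma))).real)

def ptrace(rho, d1, d2, keep):
    """Partial trace of a (d1*d2)-dim state; keep=0 -> first factor."""
    r = rho.reshape(d1, d2, d1, d2)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijil->jl', r)

d = 2
rho = rand_rho(d * d)                        # joint state on W (x) R
prod = np.kron(ptrace(rho, d, d, 0), ptrace(rho, d, d, 1))
I = rel_ent(rho, prod)                       # mutual information I(W:R)
tdist = np.sum(np.abs(np.linalg.eigvalsh(rho - prod)))  # trace norm
print(tdist, np.sqrt(2 * np.log(2) * I))
```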
By Uhlmann's theorem (Lemma~\ref{lemma:Uhlmann1} and Lemma~\ref{lemma:Uhlmann2}), there exists an isometry $V_x:{C \to BV}$ such that
\begin{align}\label{eq3}
&\left\| (V_x \otimes \openone_{WR}) \phi_x^{CW R} (V_x \otimes \openone_{WR})^{\dagger}
- \theta_x^{WV} \otimes \psi_x^{BR} \right\|_1 \nonumber \\
& \quad \quad \quad \quad \quad \quad \leq \sqrt{\sqrt{ \frac{\delta\ln 2}{2p(x)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(x)}}\right)},
\end{align}
where $\theta_x^{WV}$ is a purification of $\phi_x^{W}$.
Since the source is generic, by definition there is an $x$, say $x=0$,
for which $\psi_0^B$ has full rank on
$\mathcal{H}_B$, i.e.~$\lambda_0:=\lambda_{\min}(\psi_0^B)>0$.
By Lemma \ref{full_support_lemma} in Appendix \ref{Miscellaneous_Facts},
for any $\ket{\psi_x}^{BR}$ there is an operator $T_x$ acting on the reference system such that
\begin{align*}
\ket{\psi_x}^{BR} = (\openone_B \otimes T_x) \ket{\psi_0}^{BR}. \nonumber
\end{align*}
Using this fact, we show that the decoding isometry $V_0$ in \aw{Eq.~(\ref{eq3})} works for all states:
\begin{align*}
& \left\| (V_0 \otimes \openone_{WR}) \phi_x^{CWR} (V_0^{\dagger} \otimes \openone_{WR})
- \theta_0^{WV} \otimes \psi_x^{BR} \right\|_1 \\
&= \big\| (V_0 \otimes \openone_{WR}) (\openone_{CW} \otimes T_x) \phi_0^{CWR} (\openone_{CW} \otimes T_x)^{\dagger} (V_0^{\dagger}\otimes \openone_{WR}) \\
&\quad\quad - \theta_0^{WV} \otimes (\openone_B \otimes T_x)\psi_0^{BR}(\openone_B \otimes T_x)^{\dagger} \big\|_1 \\
&= \big\| (\openone_{BVW}\otimes T_x) \left( (V_0\otimes\openone_{WR}) \phi_0^{CWR} (V_0^{\dagger}\otimes\openone_{WR})
- \theta_0^{WV} \otimes \psi_0^{BR} \right) (\openone_{BVW}\otimes T_x^{\dagger}) \big\|_1 \\
&\leq \norm{\openone_{BVW} \otimes T_x}_{\infty}^2
\norm{ (V_0 \otimes \openone_{WR}) \phi_0^{CWR} (V_0^{\dagger} \otimes \openone_{WR}) - \theta_0^{WV} \otimes \psi_0^{BR}}_1 \\
&\leq \frac{1}{\lambda_0} \sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)},
\end{align*}
where the \aw{last two} inequalities follow from Lemma \ref{T_norm1_inequality}
and Lemma \ref{full_support_lemma}, respectively.
By tracing out the systems $VBR$ in the above chain of inequalities, we get
\begin{align}
\label{phix_phi0_distance}
\norm{\phi_x^{W}- \phi_0^{W}}_1
\leq \frac{1}{\lambda_0}\sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)} \aw{=: \delta'}.
\end{align}
Thus, by triangle inequality we obtain
\begin{align}
\label{phi_phi0_distance}
& \norm{\underbrace{\sum_x p(x) \ketbra{x}{x}^X \otimes \phi_x^{W}}_{\sigma^{XW}}
- \underbrace{\sum_x p(x) \ketbra{x}{x}^X \otimes \phi_0^{W}}_{\aw{=:}\sigma_0^{XW}}}_1 \nonumber \\
&\quad \quad \quad \quad\quad \quad \leq \sum_x p(x) \norm{ \phi_x^{W}- \phi_0^{W}}_1 \nonumber \\
&\quad \quad\quad \quad\quad \quad\leq \frac{1}{\lambda_0}\sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}}
\left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)} = \delta'.
\end{align}
By applying \aw{the Alicki-Fannes inequality in the form of Lemma \ref{AFW lemma},} to
\aw{Eq.}~(\ref{phi_phi0_distance}), we have
\begin{align*}
I(X:W)_\sigma &= S(X)_{\sigma} - S(X|W)_{\sigma} \\
&= S(X|W)_{\sigma_0} - S(X|W)_{\sigma} \\
&\leq \delta' \log |X| + 2h\left(\frac12\delta'\right),
\end{align*}
where the second equality holds because $X$ and $W$ are independent in $\sigma_0$, so $S(X|W)_{\sigma_0} = S(X)_{\sigma}$,
and the right \aw{hand side} of the above inequality vanishes for \aw{$\delta\rightarrow 0$}.
\end{proof}
\section{Towards the full rate region}\label{sec: full problem}
In this section, we consider the full rate region of the distributed compression of
a \aw{classical-quantum} source.
\begin{theorem}
\label{unknown.theorem}
In the unassisted model, for distributed compression of a \aw{classical-quantum} source,
the rate pairs satisfying the following inequalities are achievable:
\begin{equation}\begin{split}
\label{eq:inner}
R_X &\geq S(X|B), \\
R_B &\geq\frac{1}{2}\left(S(B)+S(B|X)\right), \\
R_X+2R_B &\geq S(B)+S(XB).
\end{split}\end{equation}
\end{theorem}
\begin{proof}
From the Devetak-Winter code, Eq.~(\ref{eq:DW}), and
the code based on state merging, Theorem \ref{State_merging_rate}, two rate
points in the unassisted (and hence also in the unlimited entanglement-assisted)
rate region are:
\begin{align*}
(R_X,R_B) &= (S(X|B),S(B)), \\
(R_X,R_B) &= \left(S(X),\frac12(S(B)+S(B|X))\right).
\end{align*}
Their upper-right convex closure is hence an inner bound to the rate region,
depicted schematically in Fig.~\ref{fig:inner}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{inner-bound.jpg}
\caption{\aw{The region of all pairs $(R_X,R_B)$ satisfying the three conditions of
Eq.~(\ref{eq:inner}); it is the upper-right convex closure of the
Devetak-Winter (DW) and the merging (M) point.
All of these points are achievable in the unassisted model}.}
\label{fig:inner}
\end{figure}
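As a quick consistency check of Theorem \ref{unknown.theorem}, the sketch below (toy source and helper names are our own) evaluates the Devetak-Winter and merging corner points and verifies that both satisfy the three inequalities of Eq.~(\ref{eq:inner}), using $S(XB) = H(X) + S(B|X)$ for a cq state:

```python
import numpy as np

def vn(rho, tol=1e-12):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > tol]
    return float(-np.sum(ev * np.log2(ev)))

# Toy cq source (ours): X uniform, states psi_x^B as below.
p = np.array([0.5, 0.5])
rhoB_x = [np.diag([1.0, 0.0]), np.diag([0.85, 0.15])]

H_X = float(-np.sum(p * np.log2(p)))
S_B_given_X = sum(pi * vn(r) for pi, r in zip(p, rhoB_x))
S_B = vn(sum(pi * r for pi, r in zip(p, rhoB_x)))
S_XB = H_X + S_B_given_X          # joint entropy of a cq state
S_X_given_B = S_XB - S_B

def in_region(RX, RB, tol=1e-9):
    """Check the three inequalities of the inner bound."""
    return (RX >= S_X_given_B - tol
            and RB >= 0.5 * (S_B + S_B_given_X) - tol
            and RX + 2 * RB >= S_B + S_XB - tol)

dw = (S_X_given_B, S_B)                       # Devetak-Winter point
m = (H_X, 0.5 * (S_B + S_B_given_X))          # merging point
print(in_region(*dw), in_region(*m))
```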
\aw{For generic sources we find \aw{that this is in fact the} rate region.
However, in general, we only present some outer bounds and inner bounds (achievable rates),
which show the rate region to be much more complicated than the rate region of \aw{the}
classical Slepian-Wolf problem.}
\aw{\subsection{General converse bounds}
\label{sec:Converse Bounds in General}
For distributed compression of a \aw{classical-quantum} source
in general, we start with a general converse bound.
\begin{theorem}
\label{theorem:full converse}
In the entanglement-assisted model, the asymptotic rate pairs for distributed compression of a
\aw{classical-quantum} source are lower bounded as
\begin{equation}\begin{split}
\label{eq:general-converse}
R_X &\geq S(X|B), \\
R_B &\geq \frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right), \\
R_X + 2R_B &\geq S(B)+S(BX)-\widetilde{I}_0.\\
\end{split}\end{equation}
In the unassisted model, in addition to the above lower bounds,
the asymptotic rate pairs are bounded as
\[
R_X + R_B \geq S(XB).
\]
\end{theorem}
\begin{proof}
The individual lower bounds have been established already:
$R_X\geq S(X|B)$ is from \cite{Devetak2003,Winter1999}, in a slightly different source model.
However, it also holds in our system model if Bob sends his information using
unlimited communication such that \aw{Debbie} can decode it perfectly. Namely, notice
that the fidelity (\ref{F_QCSW_unassisted1}) is more stringent than the decoding
criterion of \cite{Devetak2003,Winter1999}, so any converse bound considering the
decoding criterion of \cite{Devetak2003,Winter1999} is also a converse bound in our system model.
The bound $R_B\geq \frac{1}{2}(S(B)+S(B|X)-\widetilde{I}_0)$ is from
Theorem \ref{theorem:generic optimal rate}. These two bounds hold in the unassisted,
as well as the entanglement-assisted model.
In the unassisted model, the rate sum lower bound $R_X + R_B \geq S(XB)$ has been
argued in \cite{Devetak2003,Winter1999}, too. Indeed, for any distributed compression
scheme for the source, the product encoder ${\mathcal{E}}_X\otimes{\mathcal{E}}_B$ describes a Schumacher compression scheme
with asymptotically high fidelity. Thus, its rate must be asymptotically lower bounded
by the joint entropy of the source, $S(XB)$ \cite{Schumacher1995,Jozsa1994_1,Barnum1996,Winter1999}.
This leaves the bound $R_X + 2R_B \geq S(B)+S(BX)-\widetilde{I}_0$ to be proved in
the entanglement-assisted model, which we tackle now.
The encoders of Alice and Bob are isometries $U_X:{X^n \to C_X W_X}$ and
$U_B:{B^nB_0 \to C_B W_B B_0'}$, respectively. They send their respective compressed systems
$C_X$ and $C_B$ to \aw{Debbie} and keep the environment parts $W_X$ and $W_B$ for themselves.
Then, \aw{Debbie} applies the decoding isometry $V:{C_X C_B D_0 \to \hat{X}^n\hat{B}^n W_D D_0'}$,
where $\hat{X}^n\hat{B}^n$ are the output systems, and $W_D$ and $D_0'$ are the
environment of \aw{Debbie}'s decoding isometry and her output entanglement, respectively.
We first bound the following sum rate:
\begin{align}
\label{eq sum rate}
&nR_X+nR_B+S(D_0) \nonumber \\
&\geq S(C_X)+S(C_B)+S(D_0) \nonumber\\
&\geq S(C_XC_BD_0) \nonumber\\
&= S(\hat{X}^n\hat{B}^n W_D D_0') \nonumber\\
&= S(\hat{X}^n\hat{B}^n) + S(W_D D_0'|\hat{X}^n\hat{B}^n) \nonumber\\
&\geq S(\hat{X}^n\hat{B}^n) + S(W_D D_0'|\hat{X}^n\hat{B}^nX'^n) \nonumber\\
&\geq S(X^n B^n) + S(W_D D_0'|\hat{X}^n\hat{B}^nX'^n) \nonumber \\
&\qquad\qquad - n\sqrt{2\epsilon} \log(|X| |B|) - h(\sqrt{2\epsilon}) \nonumber\\
&\geq S(X^n B^n) + S(W_D D_0'|X'^n) - 2n\delta(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_XW_B B_0'|X'^n) \nonumber \\
&\qquad\qquad - S(R^n\hat{B}^n\hat{X}^n|X'^n) - 2n\delta(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_XW_B B_0'|X'^n) - 2n\delta(n,\epsilon) - n\delta'(n,\epsilon) \nonumber\\
&= S(X^n B^n) + S(W_X|X'^n) + S(W_B B_0'|X'^n) \nonumber \\
&\qquad\qquad - 2n\delta(n,\epsilon) - n\delta'(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_B B_0'|X'^n) - 2n\delta(n,\epsilon)-n\delta'(n,\epsilon),
\end{align}
where the third line is by subadditivity, and the equality in the fourth line follows because
the decoding isometry $V$ does not change the entropy. Then, in the fifth and sixth lines
we use the chain rule and strong subadditivity of entropy.
The inequality in the seventh line follows from the decodability of the systems $X^nB^n$:
the fidelity criterion (\ref{F_QCSW_assisted}) implies that the output state on systems
$\hat{X}^n\hat{B}^n$ is $2\sqrt{2\epsilon}$-close to the original state $X^nB^n$ in trace norm;
then apply \aw{the} Fannes inequality (\aw{Lemma} \ref{Fannes-Audenaert inequality}).
The eighth line follows from the decoupling condition (Lemma \ref{decoupling condition}),
which implies that
$I(W_D D_0':\hat{X}^n\hat{B}^n|{X'}^n) \leq n\delta(n,\epsilon)
= 4n\sqrt{6\epsilon} \log(|X| |B|) + 2 h(\sqrt{6\epsilon})$.
In the ninth line, we use that for any given $x^n$, the overall state of
$W_XW_B W_D B_0' D_0'R^n\hat{B}^n\hat{X}^n$ is pure, and invoke subadditivity.
In the tenth line, we use the decoding fidelity (\ref{F_QCSW_assisted}) once
more, saying that the output state on systems
$\hat{X}^n\hat{B}^nR^n{X'}^n$ is $2\sqrt{2\epsilon}$-close to the original
state $X^nB^nR^n{X'}^n$ in trace norm;
then apply the Alicki-Fannes inequality (Lemma~\ref{AFW lemma}) in the following equation. Notice that, given $x^n$, the state on the systems $X^nB^nR^n$ is pure, therefore $S(X^nB^nR^n|{X'}^n)=0$, and we obtain:
\begin{align}\label{eq: delta'}
& \abs{S(\hat{X}^n\hat{B}^nR^n|{X'}^n)-S(X^nB^nR^n|{X'}^n)} \nonumber \\
&\quad= S(\hat{X}^n\hat{B}^nR^n|{X'}^n) \nonumber\\
&\quad\leq 2n \sqrt{2\epsilon} \log |X| |B| |R| + (1+\sqrt{2\epsilon})\,h\!\left(\frac{\sqrt{2\epsilon}}{1+\sqrt{2\epsilon}}\right) \nonumber\\
&\quad\leq 4n \sqrt{2\epsilon} \log |X| |B| + (1+\sqrt{2\epsilon})\,h\!\left(\frac{\sqrt{2\epsilon}}{1+\sqrt{2\epsilon}}\right) \nonumber\\
&\quad=: \delta'(n,\epsilon),
\end{align}
where in the penultimate line, we can without loss of generality assume $|R| \leq |X| |B|$.
The equality in the eleventh line of Eq.~(\ref{eq sum rate}) follows because for
a given $x^n$ the encoded states of Alice and Bob are independent.
Moreover, we bound $R_B$ as follows:
\begin{align}
\label{eq RB lower bound}
nR_B &\geq S(C_B) \nonumber\\
&\geq S(C_B|W_BB_0') \nonumber\\
&= S(C_BW_BB_0') - S(W_BB_0') \nonumber\\
&= S(B^nB_0) - S(W_BB_0') \nonumber\\
&= S(B^n) + S(B_0) - S(W_BB_0').
\end{align}
Adding \aw{Eqs.}~(\ref{eq sum rate}) and (\ref{eq RB lower bound}), and after
cancellation of $S(B_0)=S(D_0)$, we get
\begin{align}
\label{eq: lower bound R_X+2R_B}
R_X &+ 2R_B \nonumber \\
&\geq S(B) + S(X B) - \frac{1}{n}I(X'^n : W_B B_0') - 2\delta(n,\epsilon) - \delta'(n,\epsilon) \nonumber\\
&\geq S(B) + S(X B) - \frac{1}{n}I_{n\delta(n,\epsilon)}({\omega^{\otimes n}}) - 2\delta(n,\epsilon) - \delta'(n,\epsilon) \nonumber\\
&= S(B) + S(X B) - I_{\delta(n,\epsilon)}({\omega}) - 2\delta(n,\epsilon) - \delta'(n,\epsilon),
\end{align}
where given that $I(R^n:B_0'W_B|X'^n) \leq \delta(n,\epsilon)$, which we have from the
decoupling condition (Lemma \ref{decoupling condition}), the second inequality follows directly
from Definition \ref{I_delta}, just as in the proof of Theorem \ref{converse_QCSW}.
The equality in the last line follows from Lemma \ref{lemma:I-delta}.
In the limit of $n\rightarrow\infty$ and $\epsilon\rightarrow 0$, we have
$\delta(n,\epsilon) \rightarrow 0$ and $\delta'(n,\epsilon) \rightarrow 0$, and so $I_{\delta(n,\epsilon)}$ \aw{converges}
to $\widetilde{I}_0$.
\end{proof}}
\subsection{General achievability bounds}
\label{sec:Achievability Bounds in General}
For general, non-generic sources, the achievability bounds of Theorem \ref{unknown.theorem}
and the outer bounds of Theorem \ref{theorem:full converse} do not match. Here we present several more
general achievability results that go somewhat towards filling in the unknown
area in between, without, however, resolving the question completely.
\begin{theorem}
\label{thm:achieve}
In the entanglement-assisted model, for distributed compression of a \aw{classical-quantum} source,
\aw{all rate pairs satisfying} the following inequalities are achievable: \aw{with $\alpha=\frac{2I(X:B)}{I(X:B)+I_0}$},
\begin{equation}\begin{split}
\label{eq:funny-region}
R_X &\geq S(X|B), \\
R_B &\geq \frac{1}{2}\left( S(B)+S(B|X)-I_0 \right), \\
R_X +\alpha R_B &\geq S(X|B)+ \alpha S(B).
\end{split}\end{equation}
\aw{More generally,} for any auxiliary random variable $Y$ such that \aw{$Y$--$X$--$B$}
is a Markov chain, \aw{all the following rate pairs (and hence also their upper-right convex closure)}
are achievable:
\begin{align*}
R_X &= I(X:Y) + S(X|BY) = S(X|B) + I(Y:B), \\
R_B &= \frac{1}{2}(S(B) + S(B|Y) - I(Y:W))\\
&= S(B)-\frac{1}{2}\left(I(Y:B)+I(Y:W)\right),
\end{align*}
where $C$ and $W$ are the system and environment of an \aw{isometry $V:{B\rightarrow CW}$
with $I(W:R|Y)=0$}.
\end{theorem}
\begin{proof}
\aw{The region described by Eq.~(\ref{eq:funny-region}) is precisely the upper-right convex
closure of the two corner points $(S(X|B),S(B))$ and $(S(X),\frac{1}{2}(S(B)+S(B|X)-I_0))$. Their
achievability} follows from \aw{Theorems} \ref{theorem: generic full rate region} and
\ref{QSR_achievability}.
To show the \aw{second} statement, Alice and \aw{Debbie} (the receiver) use the Reverse Shannon Theorem to simulate the channel
taking $X$ to $Y$ in i.i.d.~fashion, which costs $I(X:Y)$ bits of classical communication \cite{Bennett2002}.
Now we are in a familiar situation: Bob has to encode $B^n$ with side information $Y^n$ at the decoder,
which can be done \aw{at the rate $\frac{1}{2}(S(B)+S(B|Y)-I(Y:W))$, by the quantum state redistribution
protocol of Theorem \ref{QSR_achievability}}.
Then Alice has to send some more information to allow the receiver to decode $X^n$ which is an instance of
classical compression of $X$ with quantum side information $BY$ that is already at the decoder,
hence costing another $S(X|BY)$ bits in communication, \aw{by the Devetak-Winter protocol \cite{Devetak2003,Winter1999}}.
For $Y=X$, \aw{we recover the rate point $\left(S(X),\frac{1}{2}(S(B)+S(B|X)-I_0)\right)$,
and for $Y=\emptyset$ we recover $\left(S(X|B),S(B)\right)$.}
\end{proof}
\medskip
In Fig.~\ref{fig:full}, we show the situation for a general source, depicting
the most important inner and outer bounds on the rate region in the entanglement-assisted
model.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{complicated-diagram.jpg}
\caption{\aw{General outer (converse) bound, in red, and inner (achievable) bounds, in black,
on the entanglement-assisted rate region, assuming unlimited entanglement.
In general, our achievable points, the one from Devetak-Winter (DW), and the ones
using merging (M) and quantum state redistribution (QSR) are no longer on the
boundary of the outer bound. The achievable region is potentially slightly larger than
the upper-right convex closure of the points DW and QSR, connected by a solid black
straight line; indeed, the second part of Theorem \ref{thm:achieve} allows us to
interpolate between DW and QSR along the black dashed curve}.}
\label{fig:full}
\end{figure}
\subsection{Rate region for generic sources}
\label{sec:generic full}
In this subsection, we find the complete rate region for generic sources, generalizing
the insight of Theorem \ref{theorem:generic optimal rate} for the subproblem
of quantum compression with classical side information at the decoder.
\begin{theorem}
\label{theorem: generic full rate region}
\aw{In both the unassisted and the entanglement-assisted models, for a generic classical-quantum source, in particular one where there is
an $x$ such that $\psi_x^B$ has full support}, the optimal asymptotic rate region for distributed
compression is \aw{the} set of rate pairs satisfying
\begin{align*}
R_X &\geq S(X|B), \\
R_B &\geq\frac{1}{2}\left(S(B)+S(B|X)\right),\\
R_X+2R_B &\geq S(B)+S(XB).
\end{align*}
Moreover, \aw{there are protocols achieving these bounds requiring no prior} entanglement.
\end{theorem}
\aw{\begin{proof}
We have argued the achievability already at the start of this section
(Theorem \ref{unknown.theorem}).
As for the converse, we have shown in Theorem \ref{theorem:generic optimal rate}
that for a generic source, $\widetilde{I}_0=0$, hence the claim follows from
the outer bounds of Theorem \ref{theorem:full converse}.
\end{proof}}
This means that for generic sources, which we recall are the complement of a set of measure
zero, the rate region has the shape of Fig.~\ref{fig:inner}.
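As an illustrative numerical sanity check (not part of the formal development), the corner points of this generic rate region can be verified on a small example. The following Python/NumPy sketch, with toy states and helper names of our own choosing, builds a two-symbol source with full-rank qubit states $\rho_x^B$ and checks that both the Devetak-Winter point and the merging point saturate the sum-rate bound $R_X+2R_B\geq S(B)+S(XB)$.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits, S(rho) = -Tr[rho log2 rho]."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Toy generic cq source: X uniform on {0,1}, full-rank qubit states rho_x^B.
p = np.array([0.5, 0.5])
rho = [np.array([[0.8, 0.1], [0.1, 0.2]]),
       np.array([[0.3, 0.0], [0.0, 0.7]])]

S_B   = vn_entropy(sum(px * r for px, r in zip(p, rho)))
S_BgX = sum(px * vn_entropy(r) for px, r in zip(p, rho))  # S(B|X)
S_X   = vn_entropy(np.diag(p))
S_XB  = S_X + S_BgX        # the cq state is block diagonal in X
S_XgB = S_XB - S_B         # S(X|B)

# Corner points of the generic rate region:
DW = (S_XgB, S_B)                    # Devetak-Winter point
M  = (S_X, 0.5 * (S_B + S_BgX))     # merging point
for RX, RB in (DW, M):
    assert RX >= S_XgB - 1e-9
    assert RB >= 0.5 * (S_B + S_BgX) - 1e-9
    # both corner points saturate R_X + 2 R_B >= S(B) + S(XB)
    assert abs(RX + 2 * RB - (S_B + S_XB)) < 1e-9
```

Both corner points lie exactly on the sum-rate face, consistent with the region being their upper-right convex closure.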
\section{Discussion and open problems}
\label{sec:discuss}
\aw{After seeing no progress for over 15 years in the problem of distributed
compression of quantum sources, we have decided to take a fresh look at the
classical-quantum sources considered in \cite{Devetak2003,Winter1999}. There,
the problem of compressing the classical source using the quantum part as
side information at the decoder was solved; here we analyzed the full
rate region, in particular we were interested in the other extreme of compressing
the quantum source using the classical part as side information at the decoder.
Like in the classical Slepian-Wolf coding, the former problem exhibits no rate loss,
in that the quantum part of the source is compressed to the Schumacher rate,
the local entropy, and the sum rate equals the joint entropy of the source.
Interestingly, this is not the case for the latter problem: clearly, if the
classical side information were available both at the encoder and the decoder,
the optimal compression rate would be the conditional entropy $S(B|X)$, which
would again imply no sum rate loss. However, since the classical side information
is supposed to be present only at the decoder, we have shown that in general
the rate sum is strictly larger, in fact generically by $\frac12 I(X:B)$, and
with this additional rate there is always a coding scheme achieving asymptotically
high fidelity. This additional rate could be called ``the price of ignorance'',
as it corresponds to the absence of the side information at the encoder.
To deal with general classical-quantum sources, we introduced information quantities
$I_0$ and $\widetilde{I}_0$ (Definition \ref{I_delta}), to upper and lower bound the
optimal quantum compression rate as
\begin{align*}
\frac12\left( S(B)+S(B|X)-\widetilde{I}_0 \right) \leq R_B^* \leq \frac12\left( S(B)+S(B|X)-{I}_0 \right),
\end{align*}
when unlimited entanglement is available.
For generic sources, $I_0 = \widetilde{I}_0 = 0$, but in general we do not
understand these quantities very well, and the first set of open problems that
we would like to mention
is about them: is $I_0 = \widetilde{I}_0$ in general, or are there examples of
gaps? How can one calculate either one of these quantities, given that
a priori the auxiliary register $W$ is unbounded? In fact, can one
without loss of generality put a finite bound on the dimension of $W$,
for either optimization problem?
Further open problems concern the need for prior shared entanglement to achieve
the optimal quantum compression rate $R_B^*$. As a matter of fact, it would already
be interesting to know whether the rate $\frac12\left(S(B)+S(B|X)-{I}_0\right)$
requires in general pre-shared entanglement.
The full rate region inherits these features: in the generic case it is simple, and
in fact generated by the optimal codes for the two compression-with-side-information
problems (quantum compression with classical side information, and classical
compression with quantum side information); in general, however, the
picture is very complicated, and we have only been able to give several outer and
inner bounds on the rate region, whose determination remains an open problem.}
\medskip
We also would like to comment on the source model that we consider in this chapter,
and its relation to the classical Slepian-Wolf coding.
Our classical-quantum source is characterised by a classical source,
the random variable $X$, and a quantum source $B$, which is described by a
density matrix $\rho_x^B$, but realized as quantum correlation with a purifying
reference system $R$: $\rho_x^B = \tr_R \ketbra{\psi_x}^{BR}$.
A source code in our sense reproduces the states $\ket{\psi_x}^{BR}$ with high
fidelity on average, which implies that, for any ensemble decomposition
$\rho_x^B = \sum_y p(y|x) \ketbra{\psi_{xy}}^B$, it reproduces the
states $\ket{\psi_{xy}}^B$ with high fidelity on average (with respect to
the ensemble probabilities $p(x)p(y|x)$). If we only demand the latter, there
is no need for the purifying system $R$, and the source can be described compactly by
the cccq-state
\begin{align}\label{eq: ensemble cqSW source}
&\sigma^{X'XYB} = \nonumber \\
&\quad \sum_{x \in \mathcal{X},\, y \in \mathcal{Y}} p(x)p(y|x)\, \ketbra{x}^{X'} \otimes \ketbra{x}^X \otimes \ketbra{y}^Y \otimes \ketbra{\psi_{xy}}^{B},
where $X'$ and $Y$ are reference systems with which the correlation is preserved in a compression protocol. This now includes the well-known classical correlated source considered
by Slepian and Wolf \cite{Slepian1973}, namely if the system $B$ is classical
with orthonormal states $\ket{\psi_{xy}}=\ket{y}$.
In Schumacher's single-source compression problem \cite{Schumacher1995}, both
source models, that is, the ensemble source and the purified source, lead to the same
compression rate. However, when there is side information or more generally in the
distributed setting, different source models, albeit sharing the reduced states on $XB$,
do not lead to the same compression rate \cite{Winter1999}.
Our results provide a clear manifestation of this: recall that the minimum
compression rate of Bob in the Slepian-Wolf setting is $S(B|X)$, with the
ensemble fidelity criterion. On the other hand, if the distributions $p(y|x)$
have pairwise overlapping support, or theorem regarding generic sources
applies, resulting in the strictly larger minimum rate $\frac12(S(B)+S(B|X))$
when the average entanglement fidelity criterion is used. The difference can be
attributed to the harder task of maintaining the entanglement with the reference
system, rather than ``only'' classical correlation.
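The ``price of ignorance'' gap can be made concrete already for a purely classical system $B$: with the ensemble fidelity the rate is $H(Y|X)$, while under the entanglement fidelity the generic rate is $\frac12(H(Y)+H(Y|X))$, exceeding it by exactly $\frac12 I(X:Y)$. A small NumPy illustration, with a toy joint distribution of our choosing:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Toy joint distribution p(x, y) with overlapping conditionals p(y|x).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
H_Y   = shannon(p_xy.sum(axis=0))
H_YgX = shannon(p_xy) - shannon(p_xy.sum(axis=1))  # H(Y|X) = H(XY) - H(X)
I_XY  = H_Y - H_YgX                                # I(X:Y) = I(X:B) here

rate_ensemble = H_YgX                   # ensemble-fidelity rate S(B|X)
rate_purified = 0.5 * (H_Y + H_YgX)     # generic rate (1/2)(S(B) + S(B|X))
gap = rate_purified - rate_ensemble
assert abs(gap - 0.5 * I_XY) < 1e-12    # the gap is exactly (1/2) I(X:B)
```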
More broadly, a quantum source can be defined as a quantum system together with
correlations with a reference system, in our case any state $\rho^{ABR}$. The
compression task is to reproduce this state with high fidelity by coding and decoding
of $A$ and $B$. While this problem is far from understood in the general case,
what we saw here is that the compression rate may depend on the concrete
correlation with the reference system. In the present chapter, we have considered
both a globally purifying quantum system and an ensemble of purifications, and in this
final discussion, implicitly looked at a classical system keeping track of an
ensemble of states subject to a probability distribution.
Finally, we mention that both the model of quantum data compression with classical side information, with the partially purified source of Eq.~(\ref{eq: extended source model}), and the ensemble model defined in Eq.~(\ref{eq: ensemble cqSW source}) are special cases of the model that we consider in the next chapter.
There we define an ensemble extension of the QSR source, namely the ensemble $\{ p(x), \ketbra{\psi_x}^{ACBR}\}$ with corresponding cqqqq-state $\sum_x p(x)\ketbra{x}^{X} \otimes \ketbra{\psi_x}^{ACBR}$, where Alice, who has access to the side information system $C$, wants to compress the system $A$ and send it, via a noiseless quantum channel, to Bob, who has access to the side information system $B$. We let the encoder and decoder share free entanglement and consider two decodability criteria: per-copy fidelity, where the fidelity is preserved for each copy of the source, and block fidelity, where the fidelity is preserved for the whole block of $n$ systems, similar to the fidelity defined in Eq.~(\ref{F_QCSW_unassisted1}).
For the former criterion we find the optimal quantum communication rate and for the latter criterion we find a converse bound and an achievable rate which match up to an asymptotic error and an unbounded auxiliary
system.
Our new results imply that in the compression of the system $B$ with classical side information $X$ at the decoder, in the source model of Eq.~(\ref{eq: extended source model}), the converse bound of Theorem~\ref{converse_QCSW} is optimal in the entanglement-assisted model with per-copy fidelity, i.e. the optimal rate is
\begin{align*}
R_B =\frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right).
\end{align*}
\chapter{Unification of the blind and visible Schumacher compression}
\label{chap: E assisted Schumacher}
In this chapter, we ask how the quantum compression of ensembles of pure states is
affected by the availability of entanglement, and in settings where
the encoder has access to side information.
We find the optimal asymptotic quantum rate and the optimal tradeoff
(rate region) of quantum and entanglement rates.
It turns out that the amount by which the quantum rate beats
the Schumacher limit, the entropy of the source, is precisely half
the entropy of classical information that can be extracted from the
source and side information states without disturbing them at all
(``reversible extraction of classical information'').
In the special case that the encoder has no side information, or that
she has access to the identity of the states, this problem reduces to the known
settings of \textit{blind} and \textit{visible} Schumacher compression, respectively,
albeit here additionally with entanglement assistance.
We comment on connections to previously studied and further rate
tradeoffs when also classical information is considered.
This chapter is based on the papers in \cite{Schumacher_Assisted_arXiv_Z_2019,ZK_Eassisted_ISIT_2019}.
\section{The source model}
The task of data compression of a quantum source, introduced by Schumacher
\cite{Schumacher1995}, marks one of the foundations of quantum
information theory: not only did it provide an information theoretic
interpretation of the von Neumann entropy $S(\rho) = -\Tr\rho\log\rho$
as the minimum compression rate, it also motivated the very concept of the qubit!
In the Schumacher modelling, a source is given by an ensemble
${\mathcal{E}} = \{ p(x), \proj{\psi_x} \}$ of pure states $\psi_x=\proj{\psi_x}\in{\mathcal{S}}(A)$,
$\ket{\psi_x}\in A$, with a Hilbert space $A$ of
finite dimension $|A|<\infty$; ${\mathcal{S}}(A)$ denotes the set of states (density operators).
Furthermore, $x\in{\mathcal{X}}$ ranges over a
discrete alphabet, so that we can describe the source equivalently by
the classical-quantum (cq) state $\omega = \sum_x p(x) \proj{x}^X \otimes \proj{\psi_x}^A$.
While the achievability of the rate $S(A)_\omega = S(\omega^A)$ was
shown in \cite{Schumacher1995,Jozsa1994_1} (see also \cite[Thm.~1.18]{OhyaPetz:entropy}),
the full (weak) converse was established in \cite{Barnum1996}, a simplified
proof being given by M. Horodecki \cite{Horodecki1998}; the strong
converse was proved in \cite{Winter1999}.
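As a quick numerical illustration of the Schumacher rate (a toy example of ours, not taken from the cited works): for an ensemble of two equiprobable, non-orthogonal pure qubit states, $S(A)_\omega$ lies strictly between $0$ and the classical rate $H(X)=1$ bit.

```python
import numpy as np

# Ensemble of two equiprobable, non-orthogonal qubit states |0> and |+>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
p = [0.5, 0.5]

# omega^A = sum_x p(x) |psi_x><psi_x|
omega_A = sum(px * np.outer(v, v) for px, v in zip(p, [ket0, ketp]))

ev = np.linalg.eigvalsh(omega_A)
ev = ev[ev > 1e-12]
S_A = float(-np.sum(ev * np.log2(ev)))  # Schumacher rate, qubits per symbol

# Non-orthogonality allows compression below 1 qubit per source symbol.
assert 0.0 < S_A < 1.0
```

Here $S(A)_\omega\approx 0.60$, i.e. roughly $0.60$ qubits per source symbol suffice asymptotically.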
\medskip
In this chapter, we consider a more comprehensive model, where on the one hand
the sender/encoder of the compressed data (Alice) has access to side
information, namely a pure state $\sigma_x^C$ in addition to the source state
$\psi_x^A$, and on the other hand, she and the receiver/decoder of the compressed
data (Bob) share pure state entanglement in the form of EPR pairs at a
certain rate.
Thus, the source is now an ensemble ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^A\otimes\proj{\sigma_x}^C \}$
of product states, which can be described equivalently by the cqq-state
\begin{align}
\label{eq:source state omega}
\omega^{XAC}=\sum_{x\in \mathcal{X}} p(x) \proj{x}^X\otimes \proj{\psi_x}^A\otimes \proj{\sigma_x}^C.
\end{align}
Yet another equivalent description is via the
random variable $X \in {\mathcal{X}}$, distributed according to $p$, i.e. $\Pr\{X=x\}=p(x)$;
this also makes the pure states $\psi_X$ and $\sigma_X$ random variables.
We will consider the information theoretic limit of
many copies of $\omega$, i.e.~$\omega^{X^n A^n C^n} = \left(\omega^{XAC}\right)^{\otimes n}$:
\[
\omega^{X^n A^n C^n}
= \sum_{x^n \in \mathcal{X}^n} p(x^n)\, \proj{x^n}^{X^n}
\otimes \proj{\psi_{x^n}}^{A^n}
\otimes \proj{\sigma_{x^n}}^{C^n},
\]
using the notation
\begin{align*}
x^n &= x_1 x_2 \ldots x_n,\quad\;
p(x^n) = p(x_1) p(x_2) \cdots p(x_n), \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n},\
\ket{\psi_{x^n}} = \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{align*}
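For concreteness, the cqq-state $\omega^{XAC}$ and its tensor powers can be written out explicitly as density matrices. The following NumPy sketch (with toy states of our choosing) builds $\omega^{XAC}$ and checks the entropy additivity $S(\omega^{\otimes 2})=2S(\omega)$ that underlies the i.i.d. limit.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def proj(v):
    """Rank-one projector |v><v|."""
    return np.outer(v, v.conj())

# Toy cqq source omega^{XAC}: qubit source states |psi_x>^A and
# qubit side-information states |sigma_x>^C.
p = [0.3, 0.7]
psi = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2.0)]
sig = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

omega = sum(px * np.kron(proj(e), np.kron(proj(a), proj(c)))
            for px, e, a, c in zip(p, np.eye(2), psi, sig))

# The i.i.d. source is the tensor power; von Neumann entropy is additive on it.
omega2 = np.kron(omega, omega)
assert abs(np.trace(omega).real - 1.0) < 1e-12
assert abs(vn_entropy(omega2) - 2 * vn_entropy(omega)) < 1e-9
```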
\section{Compression assisted by entanglement}
\label{sec:Compression assisted by entanglement}
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
With probability $p(x^n)$, the source provides Alice
with the state $\psi_{x^n}^{A^n}\otimes\sigma_{x^n}^{C^n}$.
Then, Alice performs her encoding operation
$\mathcal{C}:A^nC^nA_0 \longrightarrow \hat{C}^nC_A$ on the systems $A^n$,
$C^n$ and her part $A_0$ of the entanglement, which is a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map.
(Note that our notation is a slight abuse, which we maintain as it is simpler
while it cannot lead to confusion, since channels really are maps between
the trace class operators on the involved Hilbert spaces.)
The dimension of the compressed system obviously has to be at most that of the
original source, i.e. $|C_A| \leq \abs{A}^n$.
We call $Q=\frac1n \log|C_A|$ and $E=\frac{1}{n}\log K$ the quantum and entanglement
rates of the compression protocol, respectively.
The system $C_A$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:C_A B_0 \longrightarrow \hat{A}^n$ on the system
$C_A$ and his part of entanglement $B_0$.
According to Stinespring's theorem \cite{Stinespring1955}, all these CPTP maps can be
dilated to isometries
$V_A : A^nC^nA_0 \hookrightarrow \hat{C}^n C_A W_A$ and
$V_B : C_A B_0 \hookrightarrow {\hat{A}^n W_B}$,
where the new systems $W_A$ and $W_B$ are the environment systems
of Alice and Bob, respectively.
We say the encoding-decoding scheme has fidelity $1-\epsilon$, or error $\epsilon$, if
\begin{align}
\label{eq:Schumcaher assisted fidelity}
\overline{F} &:= F\left( \omega^{X^n\hat{A}^n\hat{C}^n},\xi^{X^n\hat{A}^n\hat{C}^n} \right) \nonumber\\
&=\sum_{x^n \in \mathcal{X}^n} p(x^n)\, F\!\left(\proj{\psi_{x^n}}^{A^n}
\otimes\proj{\sigma_{x^n}}^{C^n},\, \xi_{x^n}^{\hat{A}^n\hat{C}^n}\right) \\
&\geq 1-\epsilon, \nonumber
\end{align}
where $\xi^{X^n\hat{A}^n\hat{C}^n}=\sum_{x^n}p(x^n)\proj{x^n}^{X^n} \otimes \xi_{x^n}^{\hat{A}^n\hat{C}^n}$
and
$\xi_{x^n}^{\hat{A}^n\hat{C}^n}=(\mathcal{D}\circ\mathcal{C})\left(\proj{\psi_{x^n}}^{A^n} \otimes \proj{\sigma_{x^n}}^{C^n} \otimes\Phi_K^{A_0 B_0}\right)$.
We say that $(E,Q)$ is an (asymptotically) achievable rate pair if there exists
a sequence of codes such that, as $n\rightarrow\infty$, the fidelity converges to $1$, and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
Note that this means that we demand not only that Bob can reconstruct the
source states $\psi_{x^n}$ with high fidelity on average, but that Alice
retains the side information states $\sigma_{x^n}$ as well with high fidelity.
There are two extreme cases of the side information that have been considered
in the literature:
If $C$ is a trivial system, or more generally if the states
$\sigma_x^C$ are all identical, then the aforementioned task is the
entanglement-assisted version of \textit{blind} Schumacher compression. If $C=X$, or more
precisely $\ket{\sigma_x}=\ket{x}$, then Alice has access to classical random variable
$X$, and the task reduces to \textit{visible} Schumacher compression with entanglement assistance.
The blind-visible terminology is originally from \cite{Barnum1996,Horodecki2000}.
\begin{remark}
\label{remark:E=0}
In the case of no entanglement being available, i.e. $E=0$ ($K=1$), the
problem is fully understood: The asymptotic rate $Q=S(A)$
from \cite{Schumacher1995,Jozsa1994_1} is achievable without touching
the side information, and it is optimal, even in the visible case
(which includes all other kinds of side information), by the weak and strong
converses of \cite{Barnum1996,Horodecki1998} and \cite{Winter1999}.
\qed
\end{remark}
\section{Optimal quantum rate}
To formulate the minimum compression rate under unlimited entanglement
assistance, we need the following concept.
\begin{definition}
\label{def:reducibility}
An ensemble of pure states
${\mathcal{E}}=\{p(x),\proj{\psi_x}^A \otimes \proj{\sigma_x}^C \}_{x\in \mathcal{X}}$
is called \emph{reducible} if its states belong to two or more orthogonal subspaces.
Otherwise the ensemble ${\mathcal{E}}$ is called \emph{irreducible}.
%
We apply the same terminology to the source cqq-state $\omega^{X A C}$.
\end{definition}
Notice that a reducible ensemble can be written uniquely as a disjoint union
of irreducible ensembles $\mathcal{E} = \bigcupdot_{y \in \mathcal{Y}} q(y) \mathcal{E}_y$,
with a partition $\mathcal{X} = \bigcupdot_{y\in \mathcal{Y}} \mathcal{X}_y$ and
irreducible ensembles
\begin{align*}
\mathcal{E}_y = \{ p(x|y), \proj{\psi_x}^A \otimes \proj{\sigma_x}^C \}_{x \in \mathcal{X}_y},
\end{align*}
where $q(y)p(x|y)=p(x)$ for $x \in \mathcal{X}_y$ and
$q(y) =\sum_{x \in \mathcal{X}_y} p(x)$.
We define the subspace spanned by the vectors of each irreducible ensemble as
$F_y := \text{span} \{\ket{\psi_x}\otimes\ket{\sigma_x} : x \in \mathcal{X}_y\}$.
The irreducible ensembles $\mathcal{E}_y$ are pairwise orthogonal, i.e.~$F_{y'} \perp F_y$
for all $y' \neq y$.
We may thus introduce the random variable $Y=Y(X)$ taking values in the set
$\mathcal{Y}$ with probability distribution $q(y)$; namely, $Y$ is a deterministic
function of $X$ such that $\Pr\{X\in\mathcal{X}_Y\}=1$.
We define the \textit{modified} source as
\begin{align*}
\omega^{XACY}=\sum_x p(x) \proj{x}^X\otimes \proj{\psi_x}^A \otimes \proj{\sigma_x}^C \otimes \proj{y(x)}^Y,
\end{align*}
with side information systems $CY$.
Because there is an isometry $V:AC \rightarrow ACY$ which acts as
\begin{equation}
\label{eq:iso}
V\ket{\psi_x}^A \otimes \ket{\sigma_x}^C=\ket{\psi_x}^A \otimes \ket{\sigma_x}^C \otimes \ket{y(x)}^Y,
\end{equation}
the extended source $\omega^{XACY}$ is equivalent to the original
source and side information $\omega^{XAC}$ modulo a local operation of Alice.
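The decomposition into irreducible components, and hence the function $y(x)$, can be computed mechanically: two ensemble states belong to the same component exactly when they are linked by a chain of pairwise non-orthogonal states. A small Python sketch (function name and toy vectors are ours):

```python
import numpy as np

def irreducible_components(vectors, tol=1e-12):
    """Label each |psi_x>|sigma_x> with its irreducible component y(x):
    connected components of the non-orthogonality graph."""
    n = len(vectors)
    adj = [[abs(np.vdot(vectors[i], vectors[j])) > tol for j in range(n)]
           for i in range(n)]
    labels = [-1] * n
    y = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack = [s]                      # flood-fill one component
        while stack:
            i = stack.pop()
            if labels[i] != -1:
                continue
            labels[i] = y
            stack.extend(j for j in range(n)
                         if adj[i][j] and labels[j] == -1)
        y += 1
    return labels

v0 = np.array([1.0, 0.0, 0.0, 0.0])
v1 = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)  # overlaps v0
v2 = np.array([0.0, 0.0, 1.0, 0.0])                 # orthogonal to both
assert irreducible_components([v0, v1, v2]) == [0, 0, 1]
```

Orthogonality of the spans $F_y$ follows since every vector of one component is orthogonal to every vector of any other component.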
We first present the optimal asymptotic compression rate in the following
theorem and prove its achievability, but we leave
the converse proof to the end of this section, as it requires introducing
further machinery.
\begin{theorem}
\label{theorem: main}
For the given source $\omega^{XACY}$, the optimal asymptotic compression rate
assisted by unlimited entanglement is
$Q=\frac12 (S(A)+S(A|CY))$.
Furthermore, there is a protocol achieving this communication
rate with entanglement consumption at rate $E=\frac12 (S(A)-S(A|CY))$.
\end{theorem}
\begin{proof}
We first show that this rate is achievable.
Consider the following purification of $\omega^{XACY}$,
\begin{align*}
\ket{\omega}^{XX'ACY}=\sum_x \sqrt{p(x)}\ket{x}^X\ket{x}^{X'}\ket{\psi_x}^A\ket{\sigma_x}^C\ket{y(x)}^Y,
\end{align*}
with side information systems $CY$. This is obtained from
\begin{align*}
\ket{\omega}^{XX'AC}=\sum_x \sqrt{p(x)}\ket{x}^X\ket{x}^{X'}\ket{\psi_x}^A\ket{\sigma_x}^C,
\end{align*}
by Alice applying the isometry $V$ from Eq.~(\ref{eq:iso}).
We apply quantum state redistribution (QSR) \cite{Devetak2008_2,Oppenheim2008}
as a subprotocol, where the objective is for Alice to send to Bob $A^n$,
using $C^nY^n$ as side information, while $(XX')^n$ serves as reference system;
the figure of merit is the fidelity with the original pure state $(\omega^{XX'ACY})^{\otimes n}$.
Denoting the overall encoding-decoding CPTP map
$\Lambda:A^nC^nY^n \rightarrow \hat{A}^n\hat{C}^n\hat{Y}^n$,
QSR gives us the first inequality of the following chain:
\begin{align*}
1-o(1) &\leq F\!\left( \omega^{X^nX'^nA^nC^nY^n}\!\!,
({\operatorname{id}}_{X^nX'^n} \otimes \Lambda) \omega^{X^nX'^nA^nC^nY^n}\! \right) \\
&\leq F\!\left( \omega^{X^nA^nC^nY^n}\!\!,
({\operatorname{id}}_{X^n} \otimes \Lambda) \omega^{X^nA^nC^nY^n}\! \right),
\end{align*}
where the second inequality follows from monotonicity of the fidelity under partial trace.
Thus, the protocol satisfies our fidelity criterion (\ref{eq:Schumcaher assisted fidelity}).
The communication rate we obtain from QSR is
$Q = \frac{1}{2}I(A:XX') = \frac{1}{2} (S(A)+S(A|CY))$.
Furthermore, QSR guarantees entanglement consumption at the rate
$E = \frac12 I(A:CY) = \frac12 (S(A)-S(A|CY))$.
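For completeness, here is the one-line calculation behind the identity $\frac12 I(A:XX') = \frac12(S(A)+S(A|CY))$, using only the purity of $\ket{\omega}^{XX'ACY}$:

```latex
% Purity of |omega>^{XX'ACY} gives S(XX') = S(ACY) and S(AXX') = S(CY), hence
\begin{align*}
I(A:XX') &= S(A) + S(XX') - S(AXX') \\
         &= S(A) + S(ACY) - S(CY)   \\
         &= S(A) + S(A|CY).
\end{align*}
```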
\end{proof}
To prove optimality (the converse), we first need a few preparations.
The following definition is inspired by the ``reversible extraction of classical
information'' in \cite{Barnum2001_2}.
\begin{definition}
\label{def:I_epsilon}
For a source $\omega^{XAC}$ and $\epsilon \geq 0$, define
\begin{align*}
I_\epsilon(\omega) \!\!:= \!\!
\max_{V:AC\rightarrow \hat{A}\hat{C} W \text{ isometry}} \!\! \!\!I(X\!:\!\hat{C}W)_\xi
\text{ s.t. } F(\!\omega^{\!X\!A\!C\!}\!,\xi^{\!X\hat{A}\!\hat{C}\!})\! \geq \!1\!\!-\!\epsilon,
\end{align*}
where
\[
\xi^{X\!\hat{A}\hat{C}W} \!\!\!
=\!(\!\openone_X \otimes V\!) \omega^{XAC}\!(\!\openone_X \otimes V^{\dagger}\!) \!
=\!\sum_x p(x) \proj{x}^X \!\otimes \proj{\xi_x}^{\!\hat{A}\hat{C}W} \!\!\!.
\]
\end{definition}
In this definition, the dimension of the environment is w.l.o.g. bounded as $|W| \leq |A|^2|C|^2$;
hence, the optimisation is of a continuous function over a compact domain, so we have a
maximum rather than a supremum.
\begin{lemma}
\label{lemma:I_epsilon properties}
The function $I_\epsilon(\omega)$ has the following properties:
\begin{enumerate}
\item It is a non-decreasing function of $\epsilon$.
\item It is concave in $\epsilon$.
\item It is continuous for $\epsilon \geq 0$.
\item For any two states $\omega_1^{X_1 A_1 C_1}$ and $\omega_2^{X_2 A_2 C_2}$ and for $\epsilon \geq 0$,
\(
I_{\epsilon}(\omega_1 \otimes \omega_2) \leq I_{\epsilon}(\omega_1) +I_{\epsilon}(\omega_2).
\)
\item For the given source $\omega^{XACY}$, $I_0(\omega) \leq S(CY)$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. The definition of $I_\epsilon(\omega)$ directly implies that it
is a non-decreasing function of $\epsilon$.
2. To prove the concavity, let $V_1:AC \rightarrow \hat{A}\hat{C}W$ and
$V_2:AC \rightarrow \hat{A}\hat{C}W$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$, respectively, which act as
follows:
\[
V_1 \ket{\psi_x}^A\ket{\sigma_x}^C=\ket{\xi_x}^{\hat{A}\hat{C}W} \text{ and }
V_2 \ket{\psi_x}^A\ket{\sigma_x}^C=\ket{\zeta_x}^{\hat{A}\hat{C}W}.
\]
For $0\leq \lambda \leq 1$,
define the isometry $U:AC \rightarrow \hat{A}\hat{C}WRR'$ by letting, for all $x$,
\[
U\!\ket{\psi_x}^A\!\ket{\sigma_x}^C
\!:=\!\sqrt{\!\lambda}\ket{\xi_x}^{\hat{A}\hat{C}W}\!\!\ket{00}^{RR'}\!\!\!+\!\!\sqrt{1\!-\!\lambda}\ket{\zeta_x}^{\hat{A}\hat{C}W}\!\!\ket{11}^{RR'},
\]
where systems $R$ and $R'$ are qubits.
Then, the reduced state on the systems $X\hat{A}\hat{C}$ is
$\tau^{X\hat{A}\hat{C}}=\sum_x p(x) \proj{x}^X\otimes \tau_x^{\hat{A}\hat{C}}$,
where $\tau_x^{\hat{A}\hat{C}}=\lambda \xi_x^{\hat{A}\hat{C}}+(1-\lambda)\zeta_x^{\hat{A}\hat{C}}$;
therefore, the fidelity is bounded as follows:
\begin{align*}
F(\omega^{XAC}\!,\tau^{X\hat{A}\hat{C}})
&= \sum_x p(x) \sqrt{\bra{\psi_x}
\left(\lambda \xi_x^{\hat{A}\hat{C}}+(1-\lambda)\zeta_x^{\hat{A}\hat{C}}\right)
\ket{\psi_x}} \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\geq \lambda \sum_x p(x) \sqrt{\!\bra{\psi_x} \xi_x^{\hat{A}\hat{C}} \ket{\psi_x}\!}
+ (1-\lambda)\sum_x p(x) \sqrt{\!\bra{\psi_x}\zeta_x^{\hat{A}\hat{C}} \ket{\psi_x}\!} \\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\geq 1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right),
\end{align*}
where the second line follows from the concavity of the function $\sqrt{x}$,
and the last line follows by the definition of the isometries $V_1$ and $V_2$.
Now, define $W':=WRR'$ and let $\epsilon=\lambda\epsilon_1 +(1-\lambda)\epsilon_2$.
According to Definition \ref{def:I_epsilon}, we obtain
\begin{align*}
I_\epsilon(\omega) &\geq I(X:\hat{C}W')_{\tau}\\
&= I(X:R)_{\tau}+I(X:\hat{C}W|R)_{\tau}+I(X:R'|\hat{C}WR)_{\tau}\\
&\geq I(X:\hat{C}W|R)_{\tau}
= \lambda I_{\epsilon_1}(\omega)+(1-\lambda)I_{\epsilon_2}(\omega),
\end{align*}
where the second line is the chain rule, the inequality in the third line is due to
the non-negativity of (conditional) quantum mutual information, i.e. strong subadditivity,
and the final equality holds because, conditioned on $R=0$ and $R=1$, the state $\tau$
reduces to $\xi$ and $\zeta$, respectively.
3. The function is non-decreasing and concave for $\epsilon \geq 0 $, so it is continuous
for $\epsilon > 0 $.
The concavity implies furthermore that $I_{\epsilon}$ is lower semi-continuous at
$\epsilon=0$. On the other hand, since the fidelity and mutual information are
both continuous functions of CPTP maps, and the domain of the optimization is a
compact set, we conclude that $I_\epsilon(\omega)$ is also upper semi-continuous at
$\epsilon=0$, so it is continuous at $\epsilon=0$ \cite[Thms.~10.1,~10.2]{Rockafeller}.
4. In the definition of $I_{\epsilon}(\omega_1 \otimes \omega_2)$, let the isometry
$V_0:A_1 C_1 A_2C_2 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2\hat{C}_2 W$ be the
one attaining the maximum which acts on the purified source state with purifying
systems $X_1'$ and $X_2'$ as follows:
\begin{align*}
\ket{\xi}&^{X_1\!X_1'\!X_2\!X_2'\!\hat{A}_1\!\hat{C}_1\!\hat{A}_2\!\hat{C}_2\!W} \\
&\phantom{====}
=(\openone_{X_1\!X_1'\!X_2\!X_2'}\otimes V_0)\ket{\omega_1}^{X_1\!X'_1\!A_1\!C_1}
\ket{\omega_2}^{X_2\!X'_2\!A_2\!C_2}\!\!.
\end{align*}
Now, define the isometry $V_1:A_1 C_1 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2 \hat{C}_2W X_2X'_2$
acting only on the systems $A_1 C_1$ with the output state $\hat{A}_1\hat{C}_1$ and the
environment $W_1:=\hat{A}_2\hat{C}_2 W X_2X'_2$ as follows:
\[
\ket{\xi}^{X_1\!X_1'\!X_2\!X_2'\!\hat{A}_1\!\hat{C}_1\!\hat{A}_2\!\hat{C}_2\!W}
= (\openone_{X_1X_1'}\otimes V_1)\ket{\omega_1}^{X_1X_1'A_1C_1}.
\]
Hence, we obtain
\begin{align*}
F(\omega_1^{X_1\!A_1\!C_1}\!,\xi^{X_1\!\hat{A}_1\!\hat{C}_1})
&\geq F\!\left(\omega_1^{X_1\!A_1\!C_1}\!\otimes\!\omega_2^{X_2\!A_2\!C_2}\!,
\xi^{X_1\!X_2\!\hat{A}_1\!\hat{C}_1\!\hat{A}_2\!\hat{C}_2}\!\right) \\
&\geq 1 - \epsilon,
\end{align*}
where the first inequality is due to monotonicity of the fidelity under
CPTP maps, and the second inequality follows by the definition of $V_0$.
Consider the isometry
$V_2:A_2 C_2 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2 \hat{C}_2W X_1X'_1$
defined in a similar way, with the output state $\hat{A}_2\hat{C}_2$ and
the environment $W_2:=\hat{A}_1\hat{C}_1 W X_1X'_1$.
Therefore, we obtain
\begin{align*}
I_{\epsilon}(\omega_1) +I_{\epsilon}(\omega_2) &\geq I(X_1:\hat{C}_1W_1)+I(X_2:\hat{C}_2W_2)\\
&\geq I(X_1:\hat{C}_1\hat{C}_2W)+I(X_2:\hat{C}_1\hat{C}_2WX_1)\\
&= I(X_1X_2:\hat{C}_1\hat{C}_2W)
= I_{\epsilon}(\omega_1 \otimes \omega_2),
\end{align*}
where the second line is due to data processing.
5. In the definition of $I_0(\omega)$ let $V_0:AC \rightarrow \hat{A}\hat{C}W$ be the isometry attaining the maximum with $F(\omega^{XAC},\xi^{X\hat{A}\hat{C}})=1$. Hence, we obtain
\begin{align*}
I_0(\omega)&= I(X:\hat{C}W)
= I(XY:\hat{C}W) \\
&= I(Y:\hat{C}W)+I(X:\hat{C}W|Y) \\
&\leq S(Y)+I(X:\hat{C}W|Y) \\
&= S(Y)+I(X:W|Y)+I(X:\hat{C}|WY) \\
&\leq S(Y)+I(X:W|Y)+S(C|WY)\\
&\leq S(Y)+I(X:W|Y)+S(C|Y),
\end{align*}
where the first line follows because $Y$ is a function of $X$. The second and fourth
line are due to the chain rule. The third line follows because for the classical
system $Y$ the conditional entropy $S(Y|\hat{C}W)$ is non-negative. The penultimate
line follows because for any $x$ the state on the system $\hat{C}$ is pure. The
last line is due to strong sub-additivity of the entropy. Furthermore, for every
$y$, the ensemble ${\mathcal{E}}_y$ is irreducible; hence, the conditional mutual information
$I(X:W|Y)=0$, as follows from the detailed discussion on page 2028 of \cite{Barnum2001_2}.
Altogether, $I_0(\omega)\leq S(Y)+S(C|Y)=S(CY)$.
\end{proof}
\begin{proof-of}[{of the converse part of Theorem \ref{theorem: main}}]
We start by observing
\[
nQ+S(B_0) \geq S(C_A)+S(B_0)
\geq S(C_A B_0)
= S(\hat{A}^n W_B),
\]
where the second inequality is due to subadditivity of the entropy,
and the equality follows because the decoding isometry $V_B$ does not change
the entropy. Hence, we get
\begin{align}
\label{eq:converse Schumacher assisted 1}
nQ+S(B_0) &\geq S(\hat{A}^n)+S(W_B|\hat{A}^n) \nonumber\\
&\geq S(\hat{A}^n)+S(W_B|\hat{A}^n X^n) \nonumber\\
&\geq S(A^n)+S(W_B|\hat{A}^n X^n)-n \delta(n,\epsilon) \nonumber\\
&= S(A^n) \!+\! S(\hat{A}^nW_B| X^n\!) \!-\! S(\hat{A}^n| X^n)\!-\!n \delta(n,\epsilon)\nonumber\\
&= S(A^n) \!+\! S(\hat{C}^nW_A| X^n) \!-\! S(\hat{A}^n| X^n) \!-\! n \delta(n,\epsilon)\nonumber\\
&\geq S(A^n)+S(\hat{C}^nW_A| X^n)- 3n \delta(n,\epsilon),
\end{align}
where in the first line we use the chain rule, and in the second the fact that conditioning does not increase entropy.
The inequality in the third line follows from the decodability of the system $A^n$:
the fidelity criterion (\ref{eq:Schumcaher assisted fidelity}) implies that the
output state on systems $\hat{A}^n$ is $2\sqrt{2\epsilon}$-close to the
original state $A^n$ in trace norm; then apply the Fannes-Audenaert inequality
\cite{Fannes1973,Audenaert2007} where
$\delta(n,\epsilon)=\sqrt{2\epsilon} \log|A| + \frac1n h(\sqrt{2\epsilon})$.
The equalities in the fourth and the fifth line are due to the chain rule
and the fact that for any $x^n$ the overall state of $\hat{A}^n\hat{C}^nW_AW_B$ is pure.
In the last line, we use the decodability of the systems $X^nA^n$, that is the
output state on systems $X^n\hat{A}^n$ is $2\sqrt{2\epsilon}$-close to the
original states $X^nA^n$ in trace norm, then we apply the Alicki-Fannes
inequality \cite{Alicki2004,Winter2016}.
Moreover, we bound $Q$ as follows:
\begin{align}\label{eq:converse Schumacher assisted 2}
nQ &\geq S(C_A)
\geq S(C_A|\hat{C}^nW_A) \nonumber \\
&= S(A^nC^nA_0) -S(\hat{C}^nW_A) \nonumber \\
&= S(A^nC^nY^n)+S(A_0) -S(\hat{C}^nW_A),
\end{align}
where the first equality follows because the encoding isometry
$V_A:A^nC^nA_0 \rightarrow C_A\hat{C}^nW_A$ does not change the entropy.
Adding Eqs. (\ref{eq:converse Schumacher assisted 1}) and
(\ref{eq:converse Schumacher assisted 2}), we thus obtain
\begin{align}\label{eq:converse Schumacher}
Q &\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n}I(\hat{C}^nW_A:X^n)-\frac{3}{2}\delta(n,\epsilon)\nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n}I(\hat{C}^nW_AW_B:X^n)-\frac{3}{2}\delta(n,\epsilon)\nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n} I_\epsilon(\omega^{\otimes n})
-\frac{3}{2}\delta(n,\epsilon) \nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2} I_\epsilon(\omega)-\frac{3}{2}\delta(n,\epsilon), \nonumber
\end{align}
where the second line is due to data processing. The third line follows from
Definition \ref{def:I_epsilon}. The last line follows from point 4 of Lemma
\ref{lemma:I_epsilon properties}. In the limit of $\epsilon \to 0$ and
$n \to \infty $, the rate is bounded by
\begin{align*}
Q &\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2} I_0(\omega)\\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2}S(CY)\\
&= \frac{1}{2}(S(A)+S(A|CY)),
\end{align*}
where the first line follows from point 3 of Lemma \ref{lemma:I_epsilon properties}
stating that $I_\epsilon(\omega)$ is continuous at $\epsilon=0$.
The second line is due to point 5 of Lemma \ref{lemma:I_epsilon properties}.
\end{proof-of}
\section{Complete rate region}
In this section, we find the complete rate region of achievable rate pairs $(E,Q)$.
\begin{theorem}
\label{theorem:complete rate region}
For the source $\omega^{XACY}$, all asymptotically achievable entanglement and
quantum rate pairs $(E,Q)$ satisfy
\begin{align*}
Q &\geq \frac{1}{2}(S(A)+S(A|CY)),\\
Q+E &\geq S(A).
\end{align*}
Conversely, all the rate pairs satisfying the above inequalities are achievable.
\end{theorem}
\begin{proof}
The first inequality comes from Theorem \ref{theorem: main}.
For the second inequality, consider any code with quantum communication
rate $R$ and entanglement rate $E$. By using an additional communication
rate $E$, Alice and Bob can distribute the entanglement first, and then
apply the given code, converting it into one without preshared
entanglement and communication rate $Q+E$, having exactly the same
fidelity. By Remark \ref{remark:E=0}, $Q+E \geq S(A)$.
As for the achievability, the corner point
$(\frac{1}{2} I(A:CY),\frac{1}{2}(S(A)+S(A|CY)))$ is achievable,
because QSR, which is used as the achievability protocol in
Theorem \ref{theorem: main}, uses $\frac{1}{2} I(A:CY)$ ebits of
entanglement between Alice and Bob.
Furthermore, all the points on the line $Q+E = S(A)$ for
$Q \geq \frac{1}{2}(S(A)+S(A|CY))$ are achievable, because one
ebit can be distributed by sending a qubit.
All other rate pairs are achievable by resource wasting. The rate region is depicted in
Fig.~\ref{fig:E-Q}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth]{Q-E_tradeoff.jpg}
\caption{The optimal rate region of quantum and entanglement rates.}
\label{fig:E-Q}
\end{figure}
\section{Discussion}
First of all, let us look at what our result tells us in the cases of
blind and visible compression.
\begin{corollary}
\label{corollary:blind}
In blind compression (i.e. if $C$ is trivial, or more generally the
states $\sigma_x$ are all identical), the compression of the source
$\omega^{XACY}$ reduces to the entanglement-assisted Schumacher
compression for which Theorem \ref{theorem: main} gives the optimal asymptotic quantum rate
\[
Q = \frac{1}{2}(S(A)+S(A|Y))=S(A)-\frac{1}{2}S(Y).
\]
This implies that if the source is irreducible, then this rate is equal to the
Schumacher limit $S(A)$. In other words, the entanglement does not help the
compression. Moreover, due to Theorem \ref{theorem:complete rate region}, a
rate $\frac{1}{2}S(Y)$ of entanglement is consumed in the compression,
and $E+Q\geq S(A)$ in general.
\qed
\end{corollary}
The blind compression of a source $\omega^{XAY}$ is also considered
in \cite{Barnum2001_2}, but there, instead of entanglement, a noiseless classical
channel was assumed in addition to the quantum channel.
It was shown that the optimal quantum rate assisted with free classical communication
is equal to $S(A)-S(Y)$, while a rate $S(Y)$ of classical communication suffices.
By sending the classical information using dense coding \cite{Bennett1992},
spending $\frac12$ ebit and $\frac12$ qubit per cbit, we can recover the
quantum and entanglement rates of Corollary \ref{corollary:blind}.
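Explicitly, the resource accounting behind this conversion reads as follows (using the dense-coding trade of one cbit for half a qubit plus half an ebit):

```latex
% Protocol of Barnum et al.: S(A)-S(Y) qubits plus S(Y) cbits;
% sending each cbit via dense coding costs 1/2 qubit + 1/2 ebit.
\begin{align*}
\text{qubits:}\quad & \bigl(S(A)-S(Y)\bigr) + \tfrac12 S(Y) = S(A) - \tfrac12 S(Y) = Q,\\
\text{ebits:}\quad  & \tfrac12 S(Y) = E.
\end{align*}
```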
This means that our converse implies the optimality of the quantum rate
from \cite{Barnum2001_2}.
Thus we are motivated to look at a modified compression model where the resources
used are classical communication and entanglement. Namely, we let Alice and Bob
share entanglement at rate $E$ and use classical communication at rate $C$, but
otherwise the objective is the same as in Section \ref{sec:Compression assisted by entanglement};
define the rate region as the set of all asymptotically achievable classical
communication and entanglement rate pairs $(C,E)$ such that the decoding fidelity
asymptotically converges to $1$.
\begin{theorem}
For a source $\omega^{XAY}$, a rate pair $(C,E)$ is achievable if and only if
\begin{align*}
C &\geq 2S(A)-S(Y),\\
E &\geq S(A)-S(Y).
\end{align*}
\end{theorem}
\begin{proof}
We start with the converse.
The first inequality follows from Theorem \ref{theorem: main}, because with
unlimited entanglement shared between Alice and Bob,
$\frac{1}{2}(S(A)+S(A|Y))=S(A)-\frac{1}{2}S(Y)$ qubits of quantum
communication are equivalent to $2S(A)-S(Y)$ bits of classical communication,
due to teleportation \cite{Bennett1993} and dense coding \cite{Bennett1992}.
The second inequality follows from \cite{Barnum2001_2}, because with free
classical communication, the quantum rate is lower bounded by $S(A)-S(Y)$
which, due to teleportation \cite{Bennett1993}, is equivalent to sharing
$S(A)-S(Y)$ ebits when classical communication is for free.
The achievability of the corner point $(2S(A)-S(Y),S(A)-S(Y))$
follows from \cite{Barnum2001_2} because the compression protocol
uses $S(A)-S(Y)$ qubits and $S(Y)$ bits of classical communication;
replacing each qubit by two cbits and one ebit via teleportation \cite{Bennett1993},
this is equivalent to using $S(A)-S(Y)$ ebits of entanglement and
$2(S(A)-S(Y))+S(Y)=2S(A)-S(Y)$ bits of classical communication.
Other rate pairs are achievable by resource wasting.
The rate region is depicted in Fig.~\ref{fig:C-E}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth]{C-E_tradeoff.jpg}
\caption{The optimal rate region of classical and entanglement rates.}
\label{fig:C-E}
\end{figure}
\begin{corollary}
\label{cor:visible}
In the visible case, our compression problem reduces to the visible version of
Schumacher compression with entanglement assistance. In this case,
according to Theorem \ref{theorem: main} the optimal asymptotic quantum
rate is $Q=\frac{1}{2}S(A)$.
Moreover, a rate $E=\frac{1}{2}S(A)$ of entanglement
is consumed in the compression scheme, and $E+Q\geq S(A)$ in general.
\qed
\end{corollary}
We remark that the visible compression assisted by unlimited entanglement is
also a special case of remote state preparation considered in \cite{Bennett2005},
from which we know that the rate $Q=\frac12 S(A)$ is achievable and optimal.
The visible analogue of \cite{Barnum2001_2}, of compression using
qubit and cbit resources, was treated in \cite{Hayden2002}, where the
achievable region was determined as the union of all pairs $(C,Q)$ such
that $Q\geq S(A|Z)$ and $C\geq I(X:Z)$, for any random
variable $Z$ forming a Markov chain $Z$---$X$---$A$. Compare the
complicated boundary of this region with the much simpler one of Corollary \ref{cor:visible},
which consists of two straight lines.
We close by discussing several open questions for future work:
First, the above discussion of compression with different pairs of resources suggests
that an interesting target would be the characterisation of the full triple resource
trade-off region for $Q$, $C$ and $E$ together.
Secondly, we recall that our definition of successful decoding included
preservation of the side information $\sigma_x^C$ with high fidelity. What is the
optimal compression rate $Q$ if the side information does not have to be preserved?
For an example where this change has a dramatic effect on the optimal communication
rate, consider the ensemble ${\mathcal{E}}$ consisting of the three two-qubit states
$\ket{0}^A\ket{0}^C$, $\ket{1}^A\ket{0}^C$ and $\ket{+}^A\ket{+}^C$
(where $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$),
with probabilities $\frac12-t$, $\frac12-t$ and $2t$, respectively.
Note that ${\mathcal{E}}$ is irreducible, hence for $t\approx 0$, we get an optimal
quantum rate of $Q\approx 1$, because $S(A) \approx S(A|C) \approx 1$.
However, by applying a CNOT unitary (with $A$ as control and $C$ as target),
the ensemble is transformed into ${\mathcal{E}}'$ consisting of the states
$\ket{0}^A\ket{0}^{C'}$, $\ket{1}^A\ket{1}^{C'}$ and $\ket{+}^A\ket{+}^{C'}$.
The state of $A$ is not changed, only the side information, which is why we
denote it $C'$. Hence we can apply Theorem \ref{theorem: main} to get a
quantum rate $Q\approx\frac12$, because $S(A) \approx 1$, $S(A|C') \approx 0$.
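As a numerical sanity check of this example (our own sketch, not part of the thesis; the helper names `entropy` and `rate` are ours), one can evaluate both rates at, say, $t=0.01$:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy of a density matrix, in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def rate(probs, pairs):
    """Q = (S(A) + S(A|C))/2 for an irreducible two-qubit ensemble
    (trivial classical part Y), with S(A|C) = S(AC) - S(C)."""
    rho = sum(p * np.outer(np.kron(a, c), np.kron(a, c))
              for p, (a, c) in zip(probs, pairs))
    t4 = rho.reshape(2, 2, 2, 2)            # indices (a, c, a', c')
    rho_A = np.einsum('ikjk->ij', t4)        # trace out C
    rho_C = np.einsum('kikj->ij', t4)        # trace out A
    return 0.5 * (entropy(rho_A) + entropy(rho) - entropy(rho_C))

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)

t = 0.01
probs = [0.5 - t, 0.5 - t, 2 * t]
# Ensemble E: |0>|0>, |1>|0>, |+>|+>
q_before = rate(probs, [(ket0, ket0), (ket1, ket0), (ketp, ketp)])
# Ensemble E' after CNOT (A control, C target): |0>|0>, |1>|1>, |+>|+>
q_after = rate(probs, [(ket0, ket0), (ket1, ket1), (ketp, ketp)])
print(round(q_before, 3), round(q_after, 3))  # ~0.995 vs ~0.535
```

The CNOT leaves the marginal on $A$ and the joint entropy $S(AC)$ unchanged, but raises $S(C')$ to roughly one bit, which is exactly what halves the rate.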
Thirdly, note that the lower bound $Q+E\geq S(A)$ in Theorem \ref{theorem:complete rate region}
holds with a strong converse (see the proof and \cite{Winter1999}).
But does $Q\geq \frac12 (S(A)+S(A|CY))$ hold as a strong converse rate
with unlimited entanglement? Likewise, in the setting of \cite{Barnum2001_2}
with unlimited classical communication, is $Q\geq S(A)-S(Y)$ a strong converse
bound for the quantum rate?
\chapter{Introduction}
\section{Background and motivation}
Information theory studies the transmission, processing, extraction, and utilization of information.
The notion of classical information was first introduced
by Shannon \cite{Shannon1948},
who defined it operationally, as the minimum number of bits needed to communicate the message
produced by a statistical source. This gave meaning to the Shannon entropy $H(X)$ of a source producing a random variable $X$.
The amount of information that two random variables
$X$ and $Y$ have in common was given a meaning through
the mutual information $I(X:Y)$. Operationally it is the rate of communication possible
through a noisy channel taking
$X$ to $Y$.
Quantum Shannon theory is a more general field, which studies information in physical systems governed by the rules of
quantum mechanics, and therefore encompasses classical information theory as a sub-field. It was mathematically founded
by Holevo in 1973 \cite{Holevo1973}
to study the transmission of information
over quantum channels, following the earliest insights into the connection between quantum physics and information theory \cite{Gordon64,Levitin69,Forney63,Stratonovich66}.
Surprisingly, the von Neumann entropy, which is a generalization of the Shannon entropy, was formulated before the latter, in the context of thermodynamics and statistical mechanics, and was not originally meant to carry an informational interpretation. Despite this fact and Holevo's study of classical information on quantum systems \cite{Holevo1973,Holevo1979}, the concept of quantum information
remained obscure until 1995, when Schumacher
showed that the von Neumann entropy
has the operational interpretation of the number of \emph{qubits}
needed to transmit quantum states emitted by a statistical source \cite{Schumacher1995}.
Building on Schumacher's quantitative notion of quantum information, the qubit, and the understanding of its complementary nature to classical information, quantum Shannon theory has been further established over the last three decades by fundamental discoveries ranging from source and channel coding to quantum cryptography, quantum error correction, measures of entanglement, and so on
\cite{decoherence-Shor-1995,err-correction-Schumacher-1996,Schumi1996,err-codes1997,Barnum1998,privacy-coherence-1998,E-assisted-C-capacity1999,Lo-Popescu-1999,fidelities-capacities-2000,On-C-capacity-Holevo-2002,Schumacher2002,Bennett2002,Keydistill-DW-2005,Devetak-capacity-2005}.
In particular, the notion of a quantum source
as a quantum state together with correlations with a reference system, and its compression, led to the discovery of operational meanings for quantum quantities such as the quantum conditional entropy, which, as opposed to its classical counterpart, can take negative values. In this source compression task with side information, called state merging, negative values of the
conditional entropy imply that entanglement is generated by the compression protocol, and it can be used as a resource for future communications \cite{Horodecki2007,SM_nature}.
Other quantum source compression problems
such as quantum state redistribution and visible compression of mixed states gave operational meaning to quantum conditional mutual information and regularized entanglement of purification \cite{Devetak2008_2,Yard2009,visible_Hayashi}, respectively, and they have been used successfully as sub-protocols to accomplish tasks other than data compression \cite{QRST2014}.
Various source models and their compression have been considered throughout these years
and each source appeared to be a distinct case with a unique compression behavior \cite{Devetak2003,Winter1999,KI2001,visible_Hayashi,Horodecki2007,Devetak2008_2,Yard2009}, and the compression of many other source models has been left open \cite{Winter1999,Ahn2006}.
These open questions, and the lack of a source model unifying all these seemingly distinct models, are the underlying motivation for the first part of this thesis, which focuses on the compression of quantum sources.
We specifically solve Schumacher's compression problem when the overall state together with the reference is a general mixed state. When there are side information systems, a general reference system appears to be hard to tackle; therefore, we attack compression problems with classical references, or so-called ensemble sources.
Beyond finding fundamental limits on communication and storage rates, the study of compression and capacity problems has developed tools and quantitative notions, e.g. typical subspaces and entropic quantities, which have been successfully used to deal with and interpret other
quantum effects, such as quantum thermodynamics and quantum coherence \cite{Brandao2013,Weilenmann2016,Winter-Dong-2016}.
In particular, the innate relationship between information theory and thermodynamics has proved that integrated ideas from both fields are fruitful \cite{Jaynes_2_1957,Jaynes1957,Brillouin62,Jaynes82}.
This has been the motivation for the second part of this thesis which focuses on quantum thermodynamics, where we consider a general framework with multiple conserved quantities and apply information theoretic tools to construct charge conserving operations. These explicit operations are extremely helpful to study traditional thermodynamics settings and laws.
Perhaps the most up-to-date and comprehensive review of the fast-growing field of quantum thermodynamics is contained in the collection of essays
in \cite{Thermo_Quantum_Regime}.
Still, some very fundamental questions concerning quantum thermodynamics remained open; several of them are answered in this thesis.
For a non-specialist,
these questions can be formulated as follows. Normally, in both classical and quantum thermodynamics, one deals with large systems interacting with an even larger bath. In addition to energy, the system may be characterized by many macroscopic conserved (on average) quantities, called here charges, such as total electric charge, total dipole moment, angular momentum, magnetization, total spin components, etc. In the quantum case, these quantities may correspond to non-commuting operators. How is it that repeated measurements on systems prepared in the same state yield well-defined average values of these charges? The repeated measurements of equally prepared systems can be mathematically treated by considering tensor product states of many copies of the system. This mathematical construction is used in the thesis to define thermodynamically allowed transformations, which have to conserve all average values of the charges. To any quantum state, we associate a vector whose entries are the expected charge values and the entropy of that state. The set of all these vectors forms the phase diagram of the system, and we show that it characterizes the equivalence classes of states under thermodynamically allowed transformations, which are proven rigorously to correspond to asymptotic unitary transformations that approximately conserve the charges.
Our theory provides a general theoretical framework, but it also leads to predictions of very concrete effects. In particular, we estimate how large a
bath is necessary to attain the second law of thermodynamics and permit a specified
work transformation of a given system. In some situations, the necessary bath extension is relatively small, and then the quantum setting requires an extended phase diagram exhibiting negative entropies. This corresponds to the purely quantum effect that at the end of the process, system and bath are entangled. Obviously, such processes are impossible classically! For a large thermal bath, thermodynamically allowed transformations leave the system and the bath uncorrelated. In that case, the heat capacity of the bath becomes a function of how tightly the second law is attained.
\section{The structure of the thesis}
The remainder of this chapter is dedicated to introducing some notation and preliminary material, which are prerequisites for the subsequent chapters. As mentioned above, the thesis is based on two main themes: part I and part II revolve around quantum source compression and quantum thermodynamics, respectively.
As for the source compression part, we start with chapter~\ref{chap:source-coding}, where we first expand on the notion of a quantum source and continue with a rigorous definition
of the asymptotic source compression task, which
encompasses as special cases all the reviewed
compression problems in the context of asymptotic quantum source compression and unifies them under a common base. Later in the chapter, we define side information and distributed settings for compressing the information. As for the resources available for communication, we consider the noiseless qubit channel and shared entanglement between the parties.
In chapter \ref{chap:mixed state}, we consider the most general source, where the overall state with the reference system is a general mixed state. This model covers all the previously studied models, such as Schumacher's ensemble and pure sources \cite{Schumacher1995} and the ensemble of mixed states \cite{KI2001}. We find the optimal trade-off between the entanglement and quantum communication rates. The optimal rates are in terms of a decomposition of the source introduced in \cite{Hayden2004}, which is a generalization of the well-known decomposition discovered by Koashi and Imoto \cite{KI2002}.
When there are side information systems, or the compression task is distributed, the general models defined in chapter~\ref{chap:source-coding} appear to be very complicated, and even much simpler models
have been left open since the early exploration of compression problems \cite{Winter1999, Ahn2006,Devetak2003}; therefore, in the subsequent chapters we consider special cases where the source states are drawn from an ensemble, that is, the reference system is partly classical.
In chapter~\ref{chap: E assisted Schumacher}, we consider an interpolation between visible and blind Schumacher compression: the encoder has access to a side information system, which reduces to a classical system carrying the identity of the states in the visible scenario, and to a trivial system in the blind scenario. We find the optimal trade-off between the entanglement and quantum rates: depending on whether the ensemble is reducible or not, entanglement consumption either reduces the quantum rate or does not help at all.
Chapter~\ref{chap:cqSW} is about the distributed compression of a hybrid classical-quantum source, which is an extension of the celebrated Slepian-Wolf problem \cite{Slepian1973}. Two important sub-problems of this distributed compression problem are classical data compression with quantum side information (at the decoder), which is addressed in \cite{Winter1999,Devetak2003}, and quantum data compression with classical side information (at the decoder), which is the main focus of this chapter. For a class of generic sources, we show that the compression rate can be strictly larger than the conditional entropy, contrary to the fully classical Slepian-Wolf problem, where the rate in the side information case is always governed by the conditional entropy. However, in general, the quantum compression rate is reduced by half the mutual information between the classical variable and the environment system of the encoder.
Chapter~\ref{chap:QSR ensemble} closes
the first part of the thesis; there we consider the most general ensemble model of pure states, with side information available at both the encoder and the decoder side. When the overall state of the parties and the reference system is pure, the problem is known as quantum state redistribution \cite{Devetak2008_2,Yard2009,Oppenheim2008}.
We find the optimal quantum compression rate and confirm that preserving correlations with a
hybrid classical-quantum reference, which is less stringent than preserving the correlations with the purified reference, can lead to strictly smaller quantum rates. Indeed, this model
includes as special cases the sources considered in chapter~\ref{chap: E assisted Schumacher} and chapter~\ref{chap:cqSW}; however, in the former chapter the figure of merit is block fidelity, whereas in the last two chapters the optimal rates are obtained by considering per-copy fidelity. Considering block fidelity in the last two chapters, we find upper and lower bounds
which would match if the corresponding function defining the bounds is continuous.
The second part of the thesis consists of two chapters. In chapter~\ref{chap:resource theory}, we develop a general resource theory with allowed operations which are thermodynamically meaningful. The objects of this resource theory are quantum states and the allowed operations are those asymptotically commuting with a general set of charges associated with the quantum system. In order to explicitly construct these operations we use tools and notions such as quantum typicality and approximate microcanonical subspace.
Later in chapter~\ref{chap:thermo}, we use the developed operations to study a traditional thermodynamics setting with multiple conserved quantities consisting of a work system, a thermal bath and many batteries to store each charge. We extend the notion of charge-entropy diagram to a diagram with conditional entropy to find out
which transformations are feasible and show that some transformations are feasible only if the final states of the work system and the thermal bath are entangled, i.e. a
purely quantum effect enlarges the set of feasible transformations for the work system.
\bigskip
Finally, the last six chapters are essentially based on the following publications and preprints:
\begin{itemize}
\item
\textbf{Chapter 3:}
\noindent { \cite{ZK_mixed_state_2019} Z. B. Khanian and A. Winter, “General mixed state quantum data compression with and without entanglement assistance,” \emph{pre-print (2019)}, arXiv: 1912.08506.}
\cite{ZK_mixed_state_ISIT_2020} Z. B. Khanian and A. Winter, “General mixed state quantum data compression with and without entanglement assistance,” in: \emph{Proc. IEEE Int. Symp. Inf. Theory (ISIT)}, Los Angeles, CA, USA, pp. 1852-1857, June 2020.
\item
\textbf{Chapter 4:}
\cite{Schumacher_Assisted_arXiv_Z_2019} Z. B. Khanian and A. Winter, “Entanglement-assisted quantum data
compression,” \emph{preprint (2019)}, arXiv: 1901.06346.
\cite{ZK_Eassisted_ISIT_2019} Z. B. Khanian and A. Winter, “Entanglement-assisted quantum data compression,” in: \emph{Proc. IEEE Int. Symp. Inf. Theory (ISIT)}, Paris, France, pp. 1147–1151, July 2019.
\item \textbf{Chapter 5:}
\cite{ZK_cqSW_2018} Z. B. Khanian and A. Winter, “Distributed compression of correlated classical-quantum sources or: the price of ignorance,” \emph{IEEE Trans. Inf. Theory}, vol. 66, no. 9, pp. 5620-5633, Sep 2020. arXiv: 1811.09177.
\cite{ZK_cqSW_ISIT_2019} Z. B. Khanian and A. Winter, “Distributed compression of correlated classical-quantum sources,” in: \emph{Proc. IEEE Int. Symp. Inf. Theory (ISIT)}, Paris, France, pp. 1152-1156, July 2019.
\item \textbf{Chapter 6: }
\cite{QSR_ensemble_full} Z. B. Khanian and A. Winter, “Rate distortion perspective of quantum state redistribution,” \textit{in preparation}.
\cite{ZK_QSR_ensemble_ISIT_2020} Z. B. Khanian and A. Winter, “Quantum state redistribution for ensemble sources,” in: \emph{Proc. IEEE Int. Symp. Inf. Theory (ISIT)}, Los Angeles, CA, USA, pp. 1858-1863, June 2020.
\item \textbf{Chapter 7 and Chapter 8:}
\cite{thermo_ZBK_2020} Z. B. Khanian, M.~N. Bera, A. Riera, M. Lewenstein and A. Winter, “Resource theory of heat and work with non-commuting charges: yet another new foundation of thermodynamics,” \emph{preprint (2020)}, arXiv: 2011.08020.
\end{itemize}
During my PhD, I have also worked on the following thermodynamics project, which is not included in this thesis:
\begin{itemize}
\item \cite{brl19} M. N. Bera, A. Riera, M. Lewenstein, Z. B. Khanian, and
A. Winter, “Thermodynamics as a Consequence of Information Conservation,”
\emph{Quantum}, vol. 3, 2018. arXiv[quant-ph]:1707.01750.
\end{itemize}
\section{Notation and preliminaries}
In this section, we introduce some conventions, notation and facts that we use throughout this thesis.
Quantum systems are associated with (finite dimensional) Hilbert spaces $A$, $B$, etc.,
whose dimensions are denoted $|A|$, $|B|$, respectively.
The state of such a quantum system is entirely characterized by a density operator, say $\rho$: a positive semidefinite operator with trace 1 acting on the associated Hilbert space.
Also, we use the notation $\phi= \ketbra{\phi}{\phi}$ to denote the density
operator of the pure state vector $\ket{\phi}$.
Moreover, a system is called classical if all the states of the system are diagonal in a fixed orthonormal basis.
The evolution of a quantum system is characterized by a quantum channel, i.e.\ a completely positive and trace preserving (CPTP) map, which is a linear map taking operators on a Hilbert space to operators on the same or a different Hilbert space \cite{Stinespring1955}. Since there is no risk of confusion, we denote a CPTP map by its input and output Hilbert spaces; for example, the map $\mathcal{N}:A \longrightarrow B$ takes the input state $\rho$ on $A$ to the output state $\mathcal{N}(\rho)$ on $B$.
Furthermore, according to Stinespring's factorization theorem \cite{Stinespring1955}, if $\mathcal{N}:A \longrightarrow B$ is a CPTP
map, then it can be dilated to the isometry
$U_{\mathcal{N}}: A \hookrightarrow B W$ with $W$ as the environment system such that $\mathcal{N}(\rho)=\Tr_W(U_{\mathcal{N}} \rho U_{\mathcal{N}}^{\dagger})$ where $\Tr_W(\cdot)$ denotes the partial trace on system $W$.
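As an illustrative aside (not part of the formal development), the action of a CPTP map via its Kraus representation, and the trace preservation guaranteed by the completeness relation $\sum_i K_i^{\dagger} K_i = \openone$, can be checked numerically. The following NumPy sketch uses a qubit depolarizing channel with an arbitrarily chosen parameter $p$ as an example:

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    """Apply a CPTP map via its Kraus operators: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Kraus operators of a qubit depolarizing channel (illustrative parameter p).
p = 0.25
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2,
         np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

# Completeness sum_i K_i^dagger K_i = I guarantees trace preservation.
completeness = sum(K.conj().T @ K for K in kraus)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # an arbitrary qubit state
out = apply_channel(kraus, rho)
```

The output `out` remains a valid density operator: Hermitian, positive semidefinite, and of unit trace.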
\medskip
The fidelity, which is a measure of closeness, between two states $\rho$ and $\sigma$ is defined as \cite{Jozsa1994_2}
\begin{align}\label{eq:fidelity_def}
F(\rho, \sigma) := \|\sqrt{\rho}\sqrt{\sigma}\|_1
= \Tr \sqrt{\rho^{\frac{1}{2}} \sigma \rho^{\frac{1}{2}}},
\end{align}
where the trace norm is defined as
\begin{align}\label{eq:norm_def}
\|X\|_1 := \Tr|X| = \Tr\sqrt{X^\dagger X}.
\end{align}
It relates to the trace distance in the following well-known way \cite{Fuchs1999}:
\begin{align}\label{eq:Fidelity_norm_relation}
1-F(\rho,\sigma) \leq \frac12\|\rho-\sigma\|_1 \leq \sqrt{1-F(\rho,\sigma)^2}.
\end{align}
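For concreteness (and purely as an illustration outside the formal development), the fidelity of Eq.~(\ref{eq:fidelity_def}) and the relation (\ref{eq:Fidelity_norm_relation}) can be verified numerically. The NumPy sketch below computes $F$ as the sum of singular values of $\sqrt{\rho}\sqrt{\sigma}$ for two arbitrarily chosen qubit states:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a Hermitian positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1, i.e. the sum of singular values."""
    return np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False).sum()

def trace_distance(rho, sigma):
    """(1/2)||rho - sigma||_1; rho - sigma is Hermitian, so use its eigenvalues."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

# Two arbitrary qubit states chosen for illustration.
rho = np.diag([0.6, 0.4]).astype(complex)
sigma = np.array([[0.55, 0.1], [0.1, 0.45]], dtype=complex)

F = fidelity(rho, sigma)
T = trace_distance(rho, sigma)
# Fuchs-van de Graaf: 1 - F <= T <= sqrt(1 - F^2)
```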
The \emph{von Neumann entropy} of a quantum state $\rho$ on system $A$ is defined as
\begin{align}\label{eq:von Neumann}
S(\rho)_A := - \Tr\rho\log\rho,
\end{align}
where throughout this thesis, $\log$ denotes by default the binary logarithm,
and its inverse function $\exp$, unless otherwise stated, is also to base $2$. $S(\rho)_A$ is also denoted by $S(A)_{\rho}$.
For the diagonalization of $\rho$, i.e.\ $\rho=\sum_x p_x \proj{v_x}$ with orthonormal basis $\{\ket{v_x}\}$, the von Neumann entropy reduces to the \emph{Shannon entropy} $H(X)$ of
a random variable $X$ with probability distribution $p_x$:
\begin{align}\label{eq:Shannon entropy}
H(X):=-\sum_x p_x \log p_x =S(\rho)_A.
\end{align}
The von Neumann entropy is always bounded as follows:
\begin{align}
0\leq S(\rho) \leq \log |A|,
\end{align}
where $|A|$ is the dimension of the underlying Hilbert space of $\rho$. Moreover, $S(\rho)=0$ if and only if $\rho$ is a pure state, and
$S(\rho) = \log |A|$ if and only if it is the maximally mixed state, i.e. $\rho = \frac{\openone}{|A|}$.
\medskip
The mutual information for a state $\rho^{AB}$ on a bipartite Hilbert space $A \otimes B$ is defined as:
\begin{align}
I(A:B):=S(A)_{\rho}+S(B)_{\rho}-S(AB)_{\rho},
\end{align}
which is always non-negative due to sub-additivity of the von Neumann entropy \cite{Araki_Lieb_1970}, and is equal to $0$ if and only if $\rho^{AB}=\rho^{A}\otimes \rho^{B}$, i.e. the state is uncorrelated.
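To make the definitions concrete, the following NumPy sketch (illustrative only) evaluates the von Neumann entropies and the mutual information for a maximally entangled two-qubit state, for which $S(A)=S(B)=1$, $S(AB)=0$, and hence $I(A:B)=2$:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho log2 rho], computed from eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # convention: 0 log 0 = 0
    return float(-(w * np.log2(w)).sum())

# Maximally entangled state |Phi> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(phi, phi)

# Partial trace over B: reshape to indices (a, b, a', b') and trace out b = b'.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

S_A = vn_entropy(rho_A)      # = 1 bit (rho_A is maximally mixed)
S_AB = vn_entropy(rho_AB)    # = 0 (pure state)
I_AB = 2 * S_A - S_AB        # S(B) = S(A) for this symmetric state; I(A:B) = 2
```

Note that for this state the conditional entropy $S(A|B) = S(AB) - S(B) = -1$ is negative, a purely quantum phenomenon discussed below.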
\medskip
Quantum conditional entropy and quantum conditional mutual information, $S(A|B)_{\rho}$ and $I(A:B|C)_{\rho}$,
respectively, are defined in the same way as their classical counterparts:
\begin{align}\label{eq:conditional_def}
S(A|B)_{\rho} &:= S(AB)_\rho-S(B)_{\rho}, \text{ and} \nonumber\\
I(A:B|C)_{\rho} &:= S(A|C)_\rho-S(A|BC)_{\rho} \nonumber \\
&= \!S(AC)_\rho\!+\! S(BC)_\rho\!\!-\!S(ABC)_\rho\!\!-\!S(C)_\rho.
\end{align}
Quantum conditional entropy can be negative; however, it is always non-negative if at least one of the systems $A$ or $B$ is classical.
The Araki-Lieb inequality bounds the conditional entropy as follows \cite{Araki_Lieb_1970}:
\begin{align}\label{eq:Araki_Leib}
-S(A)_{\rho} \leq S(A|B)_{\rho} \leq S(A)_{\rho},
\end{align}
where the inequality on the right-hand side is known as sub-additivity of the entropy.
Quantum conditional mutual information is always non-negative due to strong sub-additivity of the entropy \cite{SSA_1973}:
\begin{align}\label{eq:SSA}
S(A|BC)_{\rho} \leq S(A|C)_{\rho}.
\end{align}
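Strong sub-additivity (\ref{eq:SSA}) can also be probed numerically on random states. The sketch below (illustrative only; the partial-trace helper and the random state are ad hoc choices) checks $S(A|BC)\leq S(A|C)$ for a random full-rank three-qubit density matrix:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ptrace(rho, dims, keep):
    """Partial trace of rho over the subsystems not listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    m = n
    for ax in sorted(set(range(n)) - set(keep), reverse=True):
        t = np.trace(t, axis1=ax, axis2=ax + m)  # trace out one subsystem
        m -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = G @ G.conj().T
rho = rho / np.trace(rho).real     # random full-rank 3-qubit state on A, B, C

dims = [2, 2, 2]
S_ABC = vn_entropy(rho)
S_BC = vn_entropy(ptrace(rho, dims, [1, 2]))
S_AC = vn_entropy(ptrace(rho, dims, [0, 2]))
S_C = vn_entropy(ptrace(rho, dims, [2]))
# Strong sub-additivity: S(A|BC) = S_ABC - S_BC  <=  S(A|C) = S_AC - S_C
```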
The quantum \emph{relative entropy} between two quantum states $\rho$ and $\sigma$ is defined as:
\begin{align}
D(\rho || \sigma):=
\begin{cases}
\Tr (\rho (\log \rho - \log \sigma)) & \text{supp}(\rho) \subseteq \text{supp}(\sigma) \\
\infty & \text{otherwise},
\end{cases}
\end{align}
which is always non-negative. Pinsker's inequality \cite{Schumacher2002} relates the quantum relative entropy and the trace norm by
\begin{align}
\|\rho-\sigma\|_1 \leq \sqrt{2 \ln 2 D(\rho\|\sigma)}.
\end{align}
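As a simple check (again illustrative only), for commuting states the quantum relative entropy reduces to the classical Kullback-Leibler divergence of the eigenvalue distributions, and Pinsker's inequality can be verified directly:

```python
import numpy as np

def kl_bits(p, q):
    """Classical relative entropy D(p||q) in bits; requires supp(p) within supp(q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * (np.log2(p[mask]) - np.log2(q[mask]))).sum())

# Two commuting (diagonal) qubit states; for these, D(rho||sigma) = D(p||q).
p = np.array([0.8, 0.2])
q = np.array([0.5, 0.5])
D = kl_bits(p, q)
T1 = np.abs(p - q).sum()           # ||rho - sigma||_1 for diagonal states
# Pinsker: ||rho - sigma||_1 <= sqrt(2 ln2 * D(rho||sigma))
```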
\chapter{Compression of a general mixed state source}
\label{chap:mixed state}
In this chapter, we consider the most general (finite-dimensional) quantum
mechanical information source, which is given by a quantum system
$A$ that is correlated with a reference system $R$. The task is to
compress $A$ in such a way as to reproduce the joint source state
$\rho^{AR}$ at the decoder with asymptotically high fidelity. This
includes Schumacher's original quantum source coding problem of a
pure state ensemble and that of a single pure entangled state, as
well as general mixed state ensembles.
Here, we determine the optimal compression rate (in qubits per
source system) in terms of the Koashi-Imoto decomposition of the
source into a classical, a quantum, and a redundant part. The same
decomposition yields the optimal rate in the presence of unlimited
entanglement between compressor and decoder, and indeed the full
region of feasible qubit-ebit rate pairs.
This chapter is based on the papers in \cite{ZK_mixed_state_ISIT_2020,ZK_mixed_state_2019}.
\medskip
\section{The source model and the compression task}
\label{sec:The Compression task}
We consider a general mixed state source
$\rho^{AR}$, with $A$ and $R$ as the system to be compressed and the reference system, respectively; in the information-theoretic limit, the source generates
many copies of the state $\rho^{AR}$, i.e.~$\rho^{A^n R^n} = \left(\rho^{AR}\right)^{\otimes n}$.
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
The encoder, Alice, performs the compression operation
$\mathcal{C}:A^n A_0 \longrightarrow M$, a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map, on the system $A^n$ and her part $A_0$ of the entanglement.
Notice that as functions CPTP maps act on the operators (density matrices) over
the respective input and output Hilbert spaces, but as there is no risk of confusion,
we will simply write the Hilbert spaces when denoting a CPTP map.
Alice's encoding operation produces the state $\sigma^{M B_0 R^n}$ with $M$ and $B_0$ as the compressed system of Alice and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M| \leq \abs{A}^n$.
We call $\frac1n \log K$ and $\frac1n \log|M|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
The system $M$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M B_0 \longrightarrow \hat{A}^n$ on the system
$M$ and his part of the entanglement $B_0$.
We say the encoding-decoding scheme has \emph{fidelity} $1-\epsilon$, or \emph{error} $\epsilon$, if
\begin{align}
\label{eq:fidelity criterion}
F\left( \rho^{A^n R^n },\xi^{\hat{A}^n R^n} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n R^n}=\left((\mathcal{D}\circ\mathcal{C})\otimes {\operatorname{id}}_{R^n}\right) \rho^{A^n R^n }$.
Moreover, we say that $(E,Q)$ is an (asymptotically) achievable rate pair if there exists
a sequence of codes such that, as $n \to \infty$, the fidelity converges to $1$ and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}$.
According to Stinespring's theorem \cite{Stinespring1955}, a CPTP map
${\mathcal{T}}: A \longrightarrow \hat{A}$ can be dilated to an isometry $U: A \hookrightarrow \hat{A} E$
with $E$ as an environment system, called an isometric extension of a CPTP map, such that
${\mathcal{T}}(\rho^A)=\Tr_E (U \rho^A U^{\dagger})$.
Therefore, the encoding and decoding operations can in general be viewed as
isometries $U_{{\mathcal{E}}} : A^n A_0 \hookrightarrow M W$ and
$U_{{\mathcal{D}}} : M B_0 \hookrightarrow \hat{A}^n V$, respectively,
with the systems $W$ and $V$ as the environment systems
of Alice and Bob, respectively.
\medskip
We say a source $\omega^{BR}$ is equivalent to a source $\rho^{AR}$ if there are
CPTP maps ${\mathcal{T}}:A \longrightarrow B$ and ${\mathcal{R}}:B \longrightarrow A$ in both directions
taking one to the other:
\begin{align} \label{def: equivalent sources}
\omega^{BR}=({\mathcal{T}} \otimes {\operatorname{id}}_R) \rho^{AR} \text{ and }
\rho^{AR}=({\mathcal{R}} \otimes {\operatorname{id}}_R) \omega^{BR}.
\end{align}
The rate regions of equivalent sources are the same, because any achievable rate pair for
one source is achievable for the other source as well. This follows from the fact that for
any code $({\mathcal{C}},{\mathcal{D}})$ of block length $n$ and error $\epsilon$ for $\rho^{AR}$,
concatenating the encoding and decoding operations with ${\mathcal{T}}$ and ${\mathcal{R}}$, i.e. letting
${\mathcal{C}}'={\mathcal{C}}\circ{\mathcal{R}}^{\otimes n}$ and ${\mathcal{D}}'={\mathcal{T}}^{\otimes n}\circ{\mathcal{D}}$, we get a code
of the same error $\epsilon$ for $\omega^{BR}$. Analogously we can turn a code for
$\omega^{BR}$ into one for $\rho^{AR}$.
\section{The qubit-ebit rate region}
\label{sec:The optimal rate region}
The idea behind the compression of the source $\rho^{AR}$ is based on a decomposition
of this state introduced in \cite{Hayden2004}, which is a generalization of the decomposition
introduced by Koashi and Imoto in \cite{KI2002}. Namely, for any set of quantum states $\{ \rho_x^A\}$,
there is a unique decomposition of the Hilbert space describing
the structure of CPTP maps which preserve the set $\{ \rho_x^A\}$. This idea was generalized
in \cite{Hayden2004} for a general mixed state $\rho^{AR}$ describing the structure of
CPTP maps acting on system $A$ which preserve the overall state $\rho^{AR}$.
This was achieved by showing that any such map preserves the set of all possible
states on system $A$ which can be obtained by measuring system $R$, and
conversely any map preserving the set of all possible
states on system $A$ obtained by measuring system $R$, preserves the state $\rho^{AR}$,
thus reducing the general case to the case of classical-quantum states
\[
\rho^{AY} = \sum_y q(y) \rho_y^A \otimes \proj{y}^Y
= \sum_y \Tr_R \rho^{AR}(\openone_A\otimes M_y^R) \otimes \proj{y}^Y,
\]
which is the ensemble case considered by Koashi and Imoto. As a matter of fact,
looking at the algorithm presented in \cite{KI2002} to compute the decomposition,
it is enough to consider an informationally complete POVM $(M_y)$ on $R$, with
no more than $|R|^2$ many outcomes.
The properties of this decomposition are stated in the following theorem.
\begin{theorem}[\cite{KI2002,Hayden2004}]
\label{thm: KI decomposition}
Associated to the state $\rho^{AR}$, there are Hilbert spaces $C$, $N$ and $Q$
and an isometry $U_{{\text{KI}}}:A \hookrightarrow C N Q$ such that:
\begin{enumerate}
\item The state $\rho^{AR}$ is transformed by $U_{{\text{KI}}}$ as
\begin{equation}
\label{eq:KI state}
(U_{{\text{KI}}}\otimes \openone_R)\rho^{AR} (U_{{\text{KI}}}^{\dagger}\otimes \openone_R)
= \sum_j p_j \proj{j}^{C} \otimes \omega_j^{N} \otimes \rho_j^{Q R}
=:\omega^{C N Q R},
\end{equation}
where the set of vectors $\{ \ket{j}^{C}\}$ form an orthonormal basis for Hilbert space
$C$, and $p_j$ is a probability distribution over $j$. The states $\omega_j^{N}$ and
$\rho_j^{Q R}$ act on the Hilbert spaces $N$ and $Q \otimes R$, respectively.
\item For any CPTP map $\Lambda$ acting on system $A$ which leaves the state $\rho^{AR}$
invariant, that is $(\Lambda \otimes {\operatorname{id}}_R )\rho^{AR}=\rho^{AR}$, every associated
isometric extension $U: A\hookrightarrow A E$ of $\Lambda$ with the environment system
$E$ is of the following form
\begin{equation}
U = (U_{{\text{KI}}}\otimes \openone_E)^{\dagger}
\left( \sum_j \proj{j}^{C} \otimes U_j^{N} \otimes \openone_j^{Q} \right) U_{{\text{KI}}},
\end{equation}
where the isometries $U_j:N \hookrightarrow N E$ satisfy
$\Tr_E [U_j \omega_j U_j^{\dagger}]=\omega_j$ for all $j$.
%
The isometry $U_{{\text{KI}}}$ is unique (up to trivial change of basis of the Hilbert spaces
$C$, $N$ and $Q$). Henceforth, we call the isometry $U_{{\text{KI}}}$ and the state
$\omega^{C N Q R}=\sum_j p_j \proj{j}^{C} \otimes \omega_j^{N} \otimes \rho_j^{Q R}$
the Koashi-Imoto (KI) isometry and KI-decomposition of the state $\rho^{AR}$, respectively.
\item In the particular case of a tripartite system $CNQ$ and a state $\omega^{CNQR}$ already
in Koashi-Imoto form (\ref{eq:KI state}), property 2 says the following:
For any CPTP map $\Lambda$ acting on systems $CNQ$ with
$(\Lambda \otimes {\operatorname{id}}_R )\omega^{CNQR}=\omega^{CNQR}$, every associated
isometric extension $U: CNQ\hookrightarrow CNQ E$ of $\Lambda$ with the environment system
$E$ is of the form
\begin{equation}
U = \sum_j \proj{j}^{C} \otimes U_j^{N} \otimes \openone_j^{Q},
\end{equation}
where the isometries $U_j:N \hookrightarrow N E$ satisfy
$\Tr_E [U_j \omega_j U_j^{\dagger}]=\omega_j$ for all $j$.
\end{enumerate}
\end{theorem}
According to the discussion at the end of Sec. \ref{sec:The Compression task}, the
sources $\rho^{AR}$ and $\omega^{C N Q R}$ are equivalent because there exist the isometry
$U_{{\text{KI}}}$ and a reversal CPTP map ${\mathcal{R}}: C N Q \longrightarrow A$, which reverses the
action of the KI isometry, such that:
\begin{align}
\omega^{C N Q R}&= (U_{{\text{KI}}}\otimes \openone_R)\rho^{AR} (U_{{\text{KI}}}^{\dagger}\otimes \openone_R), \nonumber \\
\rho^{AR}&=({\mathcal{R}} \otimes {\operatorname{id}}_R)\omega^{C N Q R}\nonumber \\
&=(U_{{\text{KI}}}^{\dagger }\otimes \openone_R) \omega^{C N Q R} (U_{{\text{KI}}}\otimes \openone_R)+\Tr [(\openone_{C N Q }-\Pi_{C N Q})\omega^{C N Q}]\sigma,
\end{align}
where $\Pi_{C N Q}=U_{{\text{KI}}}U_{{\text{KI}}}^{\dagger}$ is the projection onto the subspace
$U_{{\text{KI}}}A \subset C \otimes N \otimes Q$, and $\sigma$ is an arbitrary state acting on $A\otimes R$.
Henceforth we assume that the source is $\omega^{C N Q R}$, which is convenient because
our main result is expressed in terms of the systems $C$ and $Q$. Notice that
the source $\omega^{C N Q R}$ is in turn equivalent to $\omega^{C Q R}$,
a fact we will exploit in the proof.
Moreover, since the information in $C$ is classical, we can reduce the
compression rate even more if the sender and receiver share
entanglement, by using dense coding of $j$. In the following
theorem we show the optimal qubit-ebit rate tradeoff for the compression of the source $\rho^{AR}$.
\begin{theorem}
\label{theorem:complete rate region mixed state}
For the compression of the source $\rho^{AR}$, all asymptotically achievable entanglement and
quantum rate pairs $(E,Q)$ satisfy
\begin{align*}
Q &\geq S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega},\\
Q+E &\geq S(CQ)_{\omega},
\end{align*}
where the entropies are with respect to the KI decomposition of the state $\rho^{AR}$, i.e.
the state $\omega^{C N Q R}$.
Conversely, all the rate pairs satisfying the above inequalities are asymptotically achievable.
\end{theorem}
\begin{remark}
This theorem implies that the optimal asymptotic quantum rates for the compression of
the source $\rho^{AR}$ with and without entanglement assistance are
$S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$ and $S(CQ)_{\omega}$ qubits, respectively,
and $\frac{1}{2}S(C)_{\omega}$ ebits of entanglement
are sufficient and necessary in the entanglement assisted case.
\end{remark}
\begin{remark}
If in the compression task the parties were required to preserve the correlations with a purifying reference system, then due to Schumacher compression the optimal qubit rate would be $S(A)_{\rho}=S(CNQ)_{\omega}$. However, Theorem~\ref{theorem:complete rate region mixed state} shows that
the parties can compress more if they are only required to preserve the correlations with a mixed state reference. This gap can be strictly positive if the redundant system $N$ is mixed given the classical information $j$ in system $C$, that is $S(CNQ)_{\omega}-S(CQ)_{\omega}=S(N|CQ)_{\omega}>0$.
\end{remark}
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{Q-E_tradeoff_mixed_state.png}
\caption{The achievable rate region of the entanglement and quantum rates.}
\label{fig:rate region}
\end{figure}
\begin{proof}
We start with the achievability of these rates. The converse proofs need more tools, so we will
leave them to the subsequent sections.
Looking at Fig. \ref{fig:rate region}, it will be enough to
prove the achievability of the corresponding corner points $(E,Q)=(0,S(CQ)_{\omega})$
and $(E,Q)=(\frac{1}{2}S(C)_{\omega},S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega})$
for the unassisted and entanglement assisted cases, respectively. This is because
by definition (and the time-sharing principle) the rate region is convex and
upper-right closed.
Indeed, all the points on the line $Q+E = S(CQ)_{\omega}$ for
$Q \geq S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$ are achievable because one
ebit can be distributed by sending a qubit.
All other rate pairs are achievable by resource wasting.
The rate region is depicted in Fig.~\ref{fig:rate region}.
As we discussed, we can assume that the source is
$(\omega^{C N Q R})^{\otimes n}=\omega^{C^n N^n Q^n R^n}$. To achieve the point $(0,S(CQ)_{\omega})$,
Alice traces out the redundant part $N^n$ of the source, to get the state $\omega^{C^n Q^n R^n}$ and applies Schumacher compression to send the systems $C^n Q^n$ to Bob. Since the
Schumacher compression preserves the purification of the systems $C^n Q^n$, it preserves the state $\omega^{C^n Q^n R^n}$ as well. To be more specific, let $\Lambda_S$ denote the composition of the encoding and decoding operations for the Schumacher compression of the state
$\ket{\omega}^{C^n Q^n R^n {R'}^n}$, where ${R'}^n$ is a purifying reference system to which, of course, the parties do not have access. Schumacher compression guarantees the first fidelity bound below, and hence also the second:
\begin{align}
1-\epsilon
&\leq F \left({\omega}^{C^n Q^n R^n {R'}^n} ,(\Lambda_S \otimes {\operatorname{id}}_{R^n {R'}^n}) {\omega}^{C^n Q^n R^n {R'}^n}\right) \nonumber\\
&\leq F \left({\omega}^{C^n Q^n R^n } ,(\Lambda_S \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n }\right), \nonumber
\end{align}
where the second inequality is due to monotonicity of the fidelity under partial trace.
The rate achieved by this scheme is $S(CQ)_{\omega}$. After applying this scheme,
Bob has access to the systems $\hat{C}^n \hat{Q}^n$, which are correlated with the reference
system $R^n$:
\begin{align*}
\zeta^{\hat{C}^n \hat{Q}^n R^n}=(\Lambda_S \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n }.
\end{align*}
Then, to reconstruct the system $N^n$, Bob applies the CPTP map
$\mathcal{N}:C Q \longrightarrow C N Q$ to each copy, which acts as follows:
\begin{align}
\mathcal{N}(\rho^{CQ})=\sum_j (\proj{j}^{C} \otimes \openone_{Q}) \rho^{CQ} (\proj{j}^{C} \otimes\openone_{Q}) \otimes \omega_j^{N}.\nonumber
\end{align}
This map satisfies the fidelity criterion of Eq.~(\ref{eq:fidelity omega}) because of monotonicity of the fidelity under CPTP maps:
\begin{align}\label{eq:fidelity omega}
1-\epsilon &\leq F \left({\omega}^{C^n Q^n R^n } ,\zeta^{\hat{C}^n \hat{Q}^n R^n}\right) \nonumber\\
& \leq F \left((\mathcal{N}^{\otimes n} \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n } ,(\mathcal{N}^{\otimes n} \otimes {\operatorname{id}}_{R^n } ) \zeta^{\hat{C}^n \hat{Q}^n R^n}\right) \nonumber \\
&= F \left({\omega}^{C^n N^n Q^n R^n } ,\tau^{\hat{C}^n \hat{N}^n \hat{Q}^n R^n}\right).
\end{align}
To achieve the point $(\frac{1}{2}S(C)_{\omega},S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega})$,
Alice applies dense coding to send the classical system $C^n$ to Bob which requires
$\frac{n}{2}S(C)_{\omega}$ ebits of initial entanglement and $\frac{n}{2}S(C)_{\omega}$
qubits \cite{Bennett1992}. When both Alice and Bob have access to system $C^n$,
Alice can send the quantum system $Q^n$ to Bob by applying Schumacher compression, which
requires sending $nS(Q|C)$ qubits to Bob.
Therefore, the overall qubit rate is
$\frac{1}{2}S(C)_{\omega}+S(Q|C)=S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$.
\end{proof}
\section{Converse}
\label{sec:converse}
In this section, we will provide the converse bounds for the qubit rate $Q$ and the sum
rate $Q+E$ of Theorem~\ref{theorem:complete rate region mixed state}.
We obtain these bounds based on the structure of the CPTP maps which preserve the source state
$\omega^{CNQR}$. Namely, according to Theorem~\ref{thm: KI decomposition} the CPTP maps
acting on systems $CNQ$, which preserve the state $\omega^{CNQR}$, act only on the
redundant system $N$. This implies that the environment systems of such CPTP maps are
decoupled from systems $Q R$ given the classical information $j$ in the classical system $C$.
This gives us an insight into the structure of the encoding-decoding maps, which preserve
the overall state \emph{asymptotically} intact.
To proceed with the proof, we first define two functions that emerge in the converse bounds.
Then, we state some important properties of these functions in
Lemma~\ref{lemma:J_epsilon Z_epsilon properties} which we will use to
compute the tight asymptotic converse bounds.
\begin{definition}
\label{def:J_epsilon Z_epsilon}
For the KI decomposition
$\omega^{C N Q R}=\sum_{j} p_j \proj{j}^{C}\otimes \omega_j^{N} \otimes \rho_{j}^{Q R}$
of the state $\rho^{AR}$ and $\epsilon \geq 0$, define
\begin{align*}
J_\epsilon(\omega) &:=
\max I(\hat{N} E:\hat{C}\hat{Q}|C')_\tau
\text{ s.t. } \\
& \quad \quad U:C N Q \rightarrow \hat{C} \hat{N} \hat{Q} E
\text{ is an isometry with }
F( \omega^{C N Q R},\tau^{\hat{C} \hat{N} \hat{Q}R}) \geq 1- \epsilon, \\
Z_\epsilon(\omega) &:=
\max S(\hat{N} E|C')_\tau
\text{ s.t. } \\
& \quad \quad U:C N Q \rightarrow \hat{C} \hat{N} \hat{Q} E
\text{ is an isometry with }
F( \omega^{C N Q R},\tau^{\hat{C} \hat{N} \hat{Q}R}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\omega^{C N Q R C'}
&=\sum_{j} p_j \proj{j}^{C}\otimes \omega_j^{N} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'}, \\
\tau^{\hat{C} \hat{N} \hat{Q} ER C'}
&= (U \otimes \openone_{RC'}) \omega^{C N Q R C'} (U^{\dagger} \otimes \openone_{RC'}), \\
\tau^{\hat{C} \hat{N} \hat{Q} R}
&=\Tr_{E C'} [ \tau^{\hat{C} \hat{N} \hat{Q} ER C'}].
\end{align*}
\end{definition}
In this definition, the dimension of the environment is w.l.o.g. bounded as $|E| \leq (|C||N||Q|)^2$
because the input and output dimensions of the channel are fixed as $|C||N||Q|$;
hence, the optimisation is of a continuous function over a compact domain, so we have a
maximum rather than a supremum.
\begin{lemma}
\label{lemma:J_epsilon Z_epsilon properties}
The functions $Z_\epsilon(\omega)$ and $J_\epsilon(\omega)$ have the following properties:
\begin{enumerate}
\item They are non-decreasing functions of $\epsilon$.
\item They are concave in $\epsilon$.
\item They are continuous for $\epsilon \geq 0$.
\item For any two states $\omega_1^{C_1 N_1 Q_1 R_1}$ and $\omega_2^{C_2 N_2 Q_2 R_2}$ and for $\epsilon \geq 0$,
\begin{align*}
&J_{\epsilon}(\omega_1 \otimes \omega_2) \leq J_{\epsilon}(\omega_1) +J_{\epsilon}(\omega_2),\\
&Z_{\epsilon}(\omega_1 \otimes \omega_2) \leq Z_{\epsilon}(\omega_1) +Z_{\epsilon}(\omega_2).
\end{align*}
\item At $\epsilon=0$, $Z_0(\omega) =S(N|C)_\omega$ and $J_0(\omega) =0$.
\end{enumerate}
\end{lemma}
The proof of this lemma follows in the next section. Now we show how it is
used to prove the converse (optimality) of Theorem \ref{theorem:complete rate region mixed state}.
As a guide to reading the subsequent proof, we remark that in Eqs.~(\ref{eq:converse_Q_1})
and (\ref{eq:converse_Q+E_2}), the environment systems $VW$ of the encoding-decoding
operations appear in the terms $I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)$ and
$S(\hat{N}^n VW | {C'}^n)$, which are bounded by
the functions $J_{\epsilon}(\omega^{\otimes n})$ and $Z_{\epsilon}(\omega^{\otimes n})$,
respectively.
As stated in point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}, these functions
are sub-additive, so we can essentially single-letterize the terms appearing in the converse.
Moreover, from point 3 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}, we know that
these functions are continuous for $\epsilon \geq 0$; therefore, the limit points of these
functions are equal to the values of these functions at $\epsilon=0$.
When the fidelity is equal to 1 ($\epsilon=0$), the structure of the CPTP maps
preserving the state $\omega^{C N Q R}$ in Theorem~\ref{thm: KI decomposition}
implies that $J_{0}(\omega)=0$ and $Z_{0}(\omega)=S(N|C)_{\omega}$, as stated
in point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}. Thereby, we conclude
the converse bounds in Eqs.~(\ref{eq:converse_Q_asymptotics}) and (\ref{eq:converse_Q+E_asymptotics}).
\begin{proof-of}[of Theorem \ref{theorem:complete rate region mixed state} (converse)]
We first obtain the following chain of inequalities by considering the decoding process:
\begin{align}
nQ+S(B_0)&\geq S(M)+S(B_0) \label{eq:converse_decoding_1_1} \\
&\geq S(M B_0) \label{eq:converse_decoding_1_2} \\
&= S(\hat{C}^n \hat{N}^n \hat{Q}^n V) \label{eq:converse_decoding_1_4} \\
&= S(\hat{C}^n\hat{Q}^n) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n) \label{eq:converse_decoding_1_5} \\
&\geq nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n) -n \delta(n,\epsilon) \label{eq:converse_decoding_1_6}\\
&\geq nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n {C'}^n) -n \delta(n,\epsilon) \label{eq:converse_decoding_1_7}\\
&= nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n {C'}^n) - S(\hat{N}^n V| {C'}^n)\nonumber \\
&\quad+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \nonumber\\
&= nS(C Q) - I(\hat{N}^n V : \hat{C}^n \hat{Q}^n| {C'}^n)
+ S(\hat{N}^n V| {C'}^n) -n \delta(n,\epsilon)\nonumber\\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \label{eq:converse_decoding_1}
\end{align}
where Eq.~(\ref{eq:converse_decoding_1_1}) follows because the entropy of a system
is bounded by the logarithm of the dimension of that system;
Eq.~(\ref{eq:converse_decoding_1_2}) is due to sub-additivity of the entropy;
Eq.~(\ref{eq:converse_decoding_1_4}) follows because the decoding isometry
$U_{{\mathcal{D}}}:M B_0 \hookrightarrow \hat{C}^n \hat{N}^n \hat{Q}^n V$ does not change the entropy;
Eq.~(\ref{eq:converse_decoding_1_5}) is due to the chain rule;
Eq.~(\ref{eq:converse_decoding_1_6}) follows from decodability: the
output state on systems $\hat{C}^n \hat{Q}^n$ is $2\sqrt{2\epsilon}$-close
in trace norm to the original state on $C^n Q^n$; the inequality then follows
by applying the Fannes-Audenaert inequality
\cite{Fannes1973,Audenaert2007}, where
$\delta(n,\epsilon)=\sqrt{2\epsilon} \log(|C||Q|) + \frac1n h(\sqrt{2\epsilon})$;
Eq.~(\ref{eq:converse_decoding_1_7}) is due to strong sub-additivity of the entropy,
and system $C'$ is a copy of classical system $C$;
Eq.~(\ref{eq:converse_decoding_1})
follows from data processing inequality where $W$ is the environment system
of the encoding isometry $U_{{\mathcal{E}}}:C^n N^n Q^n A_0 \hookrightarrow M W$.
Moreover, by considering the encoding process, $Q$ is bounded as follows:
\begin{align}
nQ &\geq S(M) \nonumber \\
&\geq S(M|W {C'}^n) \label{eq:converse_encoding_1_1} \\
&= S(M W {C'}^n) -S(W {C'}^n) \label{eq:converse_encoding_1_2} \\
&= S(C^n N^n Q^n A_0 {C'}^n)-S(W {C'}^n) \label{eq:converse_encoding_1_4} \\
&= S(C^n N^n Q^n {C'}^n)+S(A_0)-S(W {C'}^n) \label{eq:converse_encoding_1_5} \\
&= S(C^n N^n Q^n {C'}^n)+S(A_0)-S({C'}^n)-S(W |{C'}^n) \label{eq:converse_encoding_1_6} \\
&= S(C^n N^n Q^n)+S(A_0)-S({C'}^n)-S(W |{C'}^n) \label{eq:converse_encoding_1_7} \\
&= nS(C Q)+nS(N|C Q)+S(A_0)-nS(C')-S(W |{C'}^n) \label{eq:converse_encoding_1_8} \\
&= nS(C Q )+nS(N |C)+S(A_0)-nS(C')-S(W |{C'}^n), \label{eq:converse_encoding_1}
\end{align}
where Eq.~(\ref{eq:converse_encoding_1_1}) is due to sub-additivity of the entropy;
Eq.~(\ref{eq:converse_encoding_1_2}) is due to the chain rule;
Eq.~(\ref{eq:converse_encoding_1_4}) follows because the encoding isometry
$U_{{\mathcal{E}}}:C^n N^n Q^n A_0 \hookrightarrow M W$ does not change the entropy;
Eq.~(\ref{eq:converse_encoding_1_5}) follows because the initial entanglement $A_0$
is independent from the source;
Eq.~(\ref{eq:converse_encoding_1_6}) is due to the chain rule;
Eq.~(\ref{eq:converse_encoding_1_7}) follows because $C'$ is a copy of the system $C$,
so $S(C'|C N Q)=0$;
Eq.~(\ref{eq:converse_encoding_1_8}) is due to the chain rule and the fact that the entropy is additive
for product states;
Eq.~(\ref{eq:converse_encoding_1}) follows because conditional on system $C$
the system $N$ is independent from system $Q$.
Now, we add Eqs.~(\ref{eq:converse_decoding_1}) and
(\ref{eq:converse_encoding_1}); the entanglement terms $S(A_0)$ and $S(B_0)$ cancel out,
and by dividing by $2n$ we obtain
\begin{align}
Q \!&\geq S(C Q)-\frac{1}{2}S(C)\! +\frac{1}{2}S(N |C)\!-\frac{1}{2n} I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n) \nonumber \\
& \quad + \frac{1}{2n}S(\hat{N}^n V| {C'}^n)\!-\frac{1}{2n}S(W |{C'}^n)\!-\frac{1}{2} \delta(n,\epsilon) \nonumber\\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2n} I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n) \nonumber \\
&\quad - \frac{1}{2n}S(\hat{N}^n VW | {C'}^n) -\frac{1}{2} \delta(n,\epsilon) \label{eq:converse_Q_1}\\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2n}J_{\epsilon}(\omega^{\otimes n})-\frac{1}{2n} Z_{\epsilon}(\omega^{\otimes n})- \frac{1}{2} \delta(n,\epsilon) \label{eq:converse_Q_2} \\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2}J_{\epsilon}(\omega)-\frac{1}{2} Z_{\epsilon}(\omega)- \frac{1}{2} \delta(n,\epsilon), \label{eq:converse_Q}
\end{align}
where Eq.~(\ref{eq:converse_Q_1}) follows from strong sub-additivity of the entropy,
$S(\hat{N}^n V| {C'}^n)+S(\hat{N}^n V| W {C'}^n)\geq 0$;
Eq.~(\ref{eq:converse_Q_2}) follows from Definition~\ref{def:J_epsilon Z_epsilon};
Eq.~(\ref{eq:converse_Q}) is due to point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
In the limit of $\epsilon \to 0$ and
$n \to \infty $, the qubit rate is thus bounded by
\begin{align}\label{eq:converse_Q_asymptotics}
Q &\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2}J_{0}(\omega)-\frac{1}{2} Z_{0}(\omega) \nonumber\\
&= S(C Q)-\frac{1}{2}S(C),
\end{align}
where the equality follows from point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
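For a quick sanity check of Eq.~(\ref{eq:converse_Q_asymptotics}), the bound $Q \geq S(CQ)-\frac{1}{2}S(C)$ can be evaluated numerically for a toy source already in Koashi--Imoto form. The following Python sketch (an illustration with hypothetical states, not part of the proof) computes the rate for a uniform two-symbol classical part with pure quantum parts:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy, in bits, of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Toy source in Koashi-Imoto form: omega^{CQ} = sum_j p_j |j><j|^C (x) rho_j^Q
p = np.array([0.5, 0.5])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rhos = [np.outer([1.0, 0.0], [1.0, 0.0]),   # rho_1 = |0><0|
        np.outer(plus, plus)]               # rho_2 = |+><+|

# Block-diagonal joint state omega^{CQ}
omega_CQ = np.zeros((4, 4))
for j, (pj, rho) in enumerate(zip(p, rhos)):
    omega_CQ[2*j:2*j+2, 2*j:2*j+2] = pj * rho

S_CQ = entropy(omega_CQ)          # = H(p) + sum_j p_j S(rho_j)
S_C = entropy(np.diag(p))         # entropy of the classical part

Q_min = S_CQ - 0.5 * S_C          # entanglement-assisted qubit-rate bound
QE_min = S_CQ                     # bound on the rate sum Q + E
print(Q_min, QE_min)              # 0.5 and 1.0 qubits per copy (up to precision)
```

Here the pure quantum parts contribute nothing, so the qubit rate is half the classical entropy, $\frac{1}{2}H(p)$, consistent with the halving of the classical rate in the entanglement-assisted setting.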
Moreover, from Eq.~(\ref{eq:converse_decoding_1}) we have:
\begin{align}
nQ+S(B_0)&= nQ+nE \nonumber \\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \nonumber \\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)-n \delta(n,\epsilon) \label{eq:converse_Q+E_2} \\
&\geq nS(C Q) - J_{\epsilon}(\omega^{\otimes n})-n \delta(n,\epsilon) \label{eq:converse_Q+E_3} \\
&\geq nS(C Q) - nJ_{\epsilon}(\omega)-n \delta(n,\epsilon), \label{eq:converse_Q+E}
\end{align}
where Eq.~(\ref{eq:converse_Q+E_2}) follows because the entropy conditional on a
classical system is non-negative, $S(\hat{N}^n V| {C'}^n) \geq 0$;
Eq.~(\ref{eq:converse_Q+E_3}) follows from Definition~\ref{def:J_epsilon Z_epsilon};
Eq.~(\ref{eq:converse_Q+E}) is due to point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
In the limit of $\epsilon \to 0$ and
$n \to \infty $, we thus obtain the following bound on the rate sum:
\begin{align}
Q+E \geq S(CQ) - J_{0}(\omega)
= S(CQ) \label{eq:converse_Q+E_asymptotics},
\end{align}
where the equality follows from point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
\end{proof-of}
\begin{remark}
Our lower bound on $Q+E$ in Eq. (\ref{eq:converse_Q+E_asymptotics}) reproduces the
result of Koashi and Imoto \cite{KI2001} for the case of a classical-quantum source
$\rho^{AX} = \sum_x p(x) \rho_x^A \otimes \proj{x}^X$. This is because a code with
qubit-ebit rate pair $(Q,E)$ gives rise to a compression code in the sense of
Koashi and Imoto using a rate of qubits $Q+E$ and no prior entanglement, simply by
first distributing the $E$ ebits and then using the entanglement-assisted code.
It is worth noting that conversely, Eq. (\ref{eq:converse_Q+E_asymptotics}) can
be obtained from the Koashi-Imoto result, as follows. Any good code for $\rho^{AR}$
is automatically a good code for the classical-quantum source of mixed states
\[
\rho^{AY} = \sum_y q(y) \rho_y^A \otimes \proj{y}^Y
= \sum_y \Tr_R \rho^{AR}(\openone_A\otimes M_y^R) \otimes \proj{y}^Y,
\]
for any POVM $(M_y)$ on $R$, simply by the monotonicity of the fidelity
under CPTP maps. As discussed before, by choosing an informationally complete
measurement, the KI-decomposition of the ensemble $\{q(y),\rho_y^A\}$ is
identical to that of $\rho^{AR}$ in Theorem \ref{thm: KI decomposition}.
Thus the unassisted qubit compression rates of $\rho^{AY}$ and of $\rho^{AR}$
are lower bounded by the same quantity, the right hand side of Eq.~(\ref{eq:converse_Q+E_asymptotics}).
\end{remark}
\section{Proof of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}}
\label{sec: Proof of Lemma}
\begin{enumerate}
\item The definitions of the functions $J_{\epsilon}(\omega)$ and $Z_{\epsilon}(\omega)$
directly imply that they are non-decreasing functions of $\epsilon$.
\item We first prove the concavity of $Z_{\epsilon}(\omega)$.
Let $U_1:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ and
$U_2:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$, respectively, which act as
follows on the purification $\ket{\omega}^{C N Q R C' R'}$ of the previously
introduced state $\omega^{C N Q R C'}$:
\begin{align*}
&\ket{\tau_1}^{\hat{C} \hat{N} \hat{Q} E R C' R'}
=(U_1 \otimes \openone_{R C' R'}) \ket{\omega}^{C N Q R C' R'}
\quad \text{ and } \\
&\ket{\tau_2}^{\hat{C} \hat{N} \hat{Q} E R C' R'}
=(U_2 \otimes \openone_{R C' R'}) \ket{\omega}^{C N Q R C' R'},
\end{align*}
where $\Tr_{R'}[\proj{\omega}^{C N Q R C' R'}]=\omega^{C N Q R C'}$.
For $0\leq \lambda \leq 1$, define the isometry
$U_0:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E F F'$ which acts as
\begin{equation}
\label{eq: isometry U in convexity}
U_0 := \sqrt{\lambda} U_1 \otimes \ket{11}^{FF'} + \sqrt{1-\lambda} U_2 \otimes \ket{22}^{FF'},
\end{equation}
where systems $F$ and $F'$ are qubits, and
which leads to the state
\begin{align*}
(U_0 \otimes \openone_{R C' R'})& \ket{\omega}^{C N Q R C' R'}\\
&= \sqrt{\lambda}\ket{\tau_1}^{\hat{C} \hat{N} \hat{Q} E R C' R'} \ket{11}^{FF'}
+ \sqrt{1-\lambda}\ket{\tau_2}^{\hat{C} \hat{N} \hat{Q} E R C' R'} \ket{22}^{FF'}.
\end{align*}
Then, $U_0$ defines the state $\tau$, for which the reduced state on the systems
$\hat{C} \hat{N} \hat{Q} R C'$ is
\begin{align} \label{eq: tau in convexity proof}
\tau^{\hat{C} \hat{N} \hat{Q} R C'}
=\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R C'}+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R C'}.
\end{align}
Therefore, the fidelity for the state $\tau$ is bounded as follows:
\begin{align}\label{eq:fidelity in convexity}
F(\omega^{C N Q R} &,\tau^{\hat{C} \hat{N} \hat{Q} R} ) \nonumber \\
&= F(\omega^{C N Q R} ,\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R}
+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber \\
&= F(\lambda \omega^{C N Q R}+(1-\lambda)\omega^{C N Q R},
\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R}
+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber\\
&\geq \lambda F( \omega^{C N Q R},\tau_1^{\hat{C} \hat{N} \hat{Q} R})
+(1-\lambda)F( \omega^{C N Q R},\tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber\\
&\geq 1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right).
\end{align}
The first inequality is due to simultaneous concavity of the fidelity in both
arguments;
the last line follows by the definition of the isometries $U_1$ and $U_2$.
Thus, the isometry $U_0$ yields a fidelity of at least
$1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right) =: 1-\epsilon$.
Now let $E'=E FF'$ denote the environment of the isometry $U_0$ defined above.
According to Definition \ref{def:J_epsilon Z_epsilon}, we obtain
\begin{align}
Z_\epsilon(\omega) &\geq S(\hat{N} E'|C')_{\tau} \nonumber\\
&= S(\hat{N} EFF'|C')_{\tau} \nonumber\\
&= S(F|C')_{\tau}+S(\hat{N} E|F C')_{\tau}+S(F'|\hat{N} EF C')_{\tau} \label{eq:Z_concavity_1}\\
&\geq S(\hat{N}E|FC')_{\tau} \label{eq:Z_concavity_2}\\
&= \lambda S(\hat{N} E|C')_{\tau_1}+(1-\lambda) S(\hat{N} E|C')_{\tau_2}\label{eq:Z_concavity_3}\\
&= \lambda Z_{\epsilon_1}(\omega)+(1-\lambda)Z_{\epsilon_2}(\omega) \label{eq:Z_concavity_4},
\end{align}
where the state $\tau$ in the entropies is given in Eq.~(\ref{eq: tau in convexity proof});
%
Eq.~(\ref{eq:Z_concavity_1}) is due to the chain rule;
%
Eq.~(\ref{eq:Z_concavity_2}) follows because
for the state on systems $\hat{N} EFF' C'$ we have $S(F'|C')+S(F'|\hat{N} E F C')\geq 0$,
which follows from strong sub-additivity of the entropy;
%
Eq.~(\ref{eq:Z_concavity_3}) follows by expanding the conditional entropy on the classical system $F$;
%
Eq.~(\ref{eq:Z_concavity_4}) follows from the definitions of the isometries $U_1$ and $U_2$.
Moreover, let $U_1:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ and
$U_2:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$ in the definition of $J_{\epsilon}(\omega)$, respectively.
Again, define the isometry $U_0$ as in Eq.~(\ref{eq: isometry U in convexity}),
which leads to the bound on the fidelity as in Eq.~(\ref{eq:fidelity in convexity}),
letting $E'=EFF'$ be the environment of the isometry $U_0$.
According to Definition \ref{def:J_epsilon Z_epsilon}, we obtain
\begin{align}
J_\epsilon(\omega) &\geq I(\hat{N} E FF':\hat{C} \hat{Q}|C')_{\tau} \nonumber \\
&\geq I(\hat{N} E F:\hat{C} \hat{Q}|C')_{\tau} \label{eq:concavity_J_1} \\
&= I(F:\hat{C} \hat{Q}|C')_{\tau}+I(\hat{N} E :\hat{C} \hat{Q}|F C')_{\tau} \label{eq:concavity_J_2} \\
&\geq I(\hat{N} E :\hat{C} \hat{Q}|F C')_{\tau} \label{eq:concavity_J_3} \\
&= \lambda I(\hat{N} E :\hat{C} \hat{Q}| C')_{\tau_1}+(1-\lambda) I(\hat{N} E :\hat{C} \hat{Q}| C')_{\tau_2} \label{eq:concavity_J_4}\\
&= \lambda J_{\epsilon_1}(\omega)+(1-\lambda)J_{\epsilon_2}(\omega)\label{eq:concavity_J_5},
\end{align}
where Eq.~(\ref{eq:concavity_J_1}) follows from data processing;
%
Eq.~(\ref{eq:concavity_J_2}) is due to the chain rule for mutual information;
%
Eq.~(\ref{eq:concavity_J_3}) follows from strong sub-additivity of the
entropy, $I(F:\hat{C} \hat{Q}|C')_{\tau} \geq 0$;
%
Eq.~(\ref{eq:concavity_J_4}) is obtained by expanding the conditional mutual
information on the classical system $F$;
%
finally, Eq.~(\ref{eq:concavity_J_5}) follows from the definitions of the isometries $U_1$ and $U_2$.
\item The functions are non-decreasing and concave for $\epsilon \geq 0 $, so they are continuous
for $\epsilon > 0$.
%
The concavity implies furthermore that $J_{\epsilon}$ and $Z_{\epsilon}$ are lower semi-continuous at
$\epsilon=0$. On the other hand, since the fidelity, the conditional entropy and the conditional
mutual information are all continuous functions of CPTP maps, and the domain of both optimizations
is a compact set, we conclude that $J_{\epsilon}(\omega)$ and $Z_{\epsilon}(\omega)$ are also upper
semi-continuous at $\epsilon=0$, so they are continuous at $\epsilon=0$
\cite[Thms.~10.1 and 10.2]{Rockafeller}.
\item We first prove
$Z_{\epsilon}(\omega_1 \otimes \omega_2) \leq Z_{\epsilon}(\omega_1) +Z_{\epsilon}(\omega_2)$.
%
In the definition of $Z_{\epsilon}(\omega_1 \otimes \omega_2)$, let the isometry
$U_0:C_1 N_1 Q_1 C_2 N_2 Q_2 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E$
be the one attaining the maximum, which acts on the following purified source states with purifying
systems $R'_1$ and $R'_2$:
\begin{align}
&\ket{\tau}^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2} \nonumber \\
&=(U_0 \otimes \openone_{R_1 C'_1 R'_1 R_2 C'_2 R'_2})\ket{\omega_1}^{C_1 N_1 Q_1 R_1 C'_1 R'_1}
\otimes \ket{\omega_2}^{C_2 N_2 Q_2 R_2 C'_2 R'_2}. \label{eq:U0-action}
\end{align}
By definition, the fidelity is bounded by
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \geq 1- \epsilon.
\end{align*}
Now, we can define an isometry
$U_1:C_1 N_1 Q_1 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 E_1$
acting only on systems $C_1 N_1 Q_1$, by letting
$U_1 = (U_0 \otimes \openone_{R_2 C_2' R_2'})(\openone_{C_1 N_1 Q_1} \otimes \ket{\omega_2}^{C_2 N_2 Q_2 R_2 C_2' R_2'})$
and with the environment $E_1 := \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$.
It has the property that
$\ket{\tau_1}^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 E_1 R_1 C_1' R_1'}
= (U_1 \otimes \openone_{R_1 C_1' R_1'})\ket{\omega_1}^{C_1 N_1 Q_1 R_1 C_1' R_1'}$
has the same reduced state on $\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1$ as $\tau$ from
Eq.~(\ref{eq:U0-action}).
This isometry preserves the fidelity for $\omega_1$, which follows from monotonicity
of the fidelity under partial trace:
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1},\tau_1^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1})
&= F(\omega_1^{C_1 N_1 Q_1 R_1},\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1}) \\
&\geq F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \\
&\geq 1- \epsilon.
\end{align*}
By the same argument, there is the following isometry
\begin{align*}
U_2:C_2 N_2 Q_2\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1,
\end{align*}
with output system $\hat{C}_2 \hat{N}_2 \hat{Q}_2$ and
environment $E_2:=\hat{C}_1 \hat{N}_1 \hat{Q}_1 E R_1 C'_1 R'_1$, such that
\begin{align*}
F(\omega_2^{C_2 N_2 Q_2 R_2},\tau_2^{\hat{C}_2 \hat{N}_2 \hat{Q}_2 R_2})
&= F(\omega_2^{C_2 N_2 Q_2 R_2},\tau^{\hat{C}_2 \hat{N}_2 \hat{Q}_2 R_2}) \\
&\geq F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \\
&\geq 1- \epsilon.
\end{align*}
%
Therefore, we obtain:
\begin{align}
Z_{\epsilon}(\omega_1) &+Z_{\epsilon}(\omega_2)-Z_{\epsilon}(\omega_1 \otimes \omega_2) \nonumber\\
&\geq
S(\hat{N}_1 E_1|C'_1)_{\tau}+S(\hat{N}_2 E_2|C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E |C'_1 C'_2)_{\tau} \label{eq:Z_additivity_1}\\
&=S(\hat{N}_1 E_1 C'_1)_{\tau}+S(\hat{N}_2 E_2C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E C'_1 C'_2)_{\tau}\nonumber \\
&\quad \quad\quad\quad\quad\quad \quad\quad\quad -S(C'_1)-S(C'_2)+S(C'_1 C'_2) \label{eq:Z_additivity_2}\\
&=S(\hat{N}_1 E_1 C'_1)_{\tau}+S(\hat{N}_2 E_2C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E C'_1 C'_2)_{\tau} \label{eq:Z_additivity_3}\\
&=S(\hat{C}_1\hat{Q}_1 R_1 R'_1)+S(\hat{C}_2\hat{Q}_2 R_2 R'_2)-S(\hat{C}_1\hat{Q}_1 \hat{C}_2\hat{Q}_2 R_1 R'_1 R_2 R'_2) \label{eq:Z_additivity_4}\\
&=I(\hat{C}_1\hat{Q}_1 R_1 R'_1:\hat{C}_2\hat{Q}_2 R_2 R'_2) \nonumber\\
&\geq 0 \label{eq:Z_additivity_5},
\end{align}
where Eq.~(\ref{eq:Z_additivity_1}) is due to Definition~\ref{def:J_epsilon Z_epsilon};
%
Eq.~(\ref{eq:Z_additivity_2}) is due to the chain rule;
%
Eq.~(\ref{eq:Z_additivity_3}) follows because the systems $C'_1$ and $C'_2$ are independent of each other;
%
Eq.~(\ref{eq:Z_additivity_4}) follows because the overall state on systems
$\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2$
is pure;
%
Eq.~(\ref{eq:Z_additivity_5}) is due to sub-additivity of the entropy.
To prove
$J_{\epsilon}(\omega_1 \otimes \omega_2) \leq J_{\epsilon}(\omega_1) +J_{\epsilon}(\omega_2)$,
let the isometry
$U_0:C_1 N_1 Q_1 C_2 N_2 Q_2 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E$
be the one attaining the maximum in the definition of $J_{\epsilon}(\omega_1 \otimes \omega_2)$,
which acts on the following purified source states with purifying
systems $R'_1$ and $R'_2$, as in Eq. (\ref{eq:U0-action}).
By definition, the fidelity is bounded as
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2})
\geq 1- \epsilon.
\end{align*}
Now define
$U_1:C_1 N_1 Q_1\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$
and $U_2:C_2 N_2 Q_2\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1$
as in the above discussion, with the environments
$E_1:=\hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$ and
$E_2:=\hat{C}_1 \hat{N}_1 \hat{Q}_1 E R_1 C'_1 R'_1$, respectively.
Recall that the fidelity for the states $\omega_1$ and $\omega_2$ is at least
$1-\epsilon$, because of the monotonicity of the fidelity under partial trace.
Thus we obtain
\begin{align}
J_{\epsilon}(\omega_1)
&+J_{\epsilon}(\omega_2)-J_{\epsilon}(\omega_1 \otimes \omega_2) \nonumber\\
&\geq I(\hat{N}_1 E_1:\hat{C}_1\hat{Q}_1|C'_1)_\tau+
I(\hat{N}_2 E_2:\hat{C}_2\hat{Q}_2|C'_2)_\tau \nonumber \\
&\quad-I(\hat{N}_1\hat{N}_2 E:\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2|C'_1 C'_2)_\tau
\label{eq:J_additivity_1}\\
&=S(\hat{N}_1 E_1 C'_1)+S(\hat{C}_1\hat{Q}_1C'_1)-S( \hat{C}_1 \hat{N}_1 \hat{Q}_1 E_1 C'_1)-S(C'_1) \nonumber \\
&\quad +S(\hat{N}_2 E_2 C'_2)+S(\hat{C}_2\hat{Q}_2C'_2)-S( \hat{C}_2 \hat{N}_2 \hat{Q}_2 E_2 C'_2)-S(C'_2) \nonumber\\
&\quad \!-\!S(\hat{N}_1\hat{N}_2 E C'_1 C'_2) \!- \!S(\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2 C'_1 C'_2)\! \nonumber \\
&\quad +\!S( \hat{C}_1\hat{N}_1\hat{Q}_1\hat{C}_2 \hat{N}_2\hat{Q}_2E C'_1 C'_2)\!+\! S(C'_1 C'_2) \label{eq:J_additivity_2} \\
&=S(\hat{C}_1 \hat{Q}_1 R_1 R'_1)+S(\hat{C}_1\hat{Q}_1C'_1)-S(R_1 R'_1)-S(C'_1) \nonumber \\
&\quad +S(\hat{C}_2 \hat{Q}_2 R_2 R'_2)+S(\hat{C}_2\hat{Q}_2C'_2)-S( R_2 R'_2)-S(C'_2) \nonumber\\
&\quad \!-\!S(\hat{C}_1 \hat{Q}_1\hat{C}_2 \hat{Q}_2 R_1 R'_1 R_2 R'_2) \!- \!S(\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2 C'_1 C'_2)\! \nonumber \\
&\quad +\!S(R_1 R'_1 R_2 R'_2)\!+\! S(C'_1 C'_2) \label{eq:J_additivity_3}\\
&=I(\hat{C}_1 \hat{Q}_1 R_1 R'_1:\hat{C}_2 \hat{Q}_2 R_2 R'_2)
-I(R_1 R'_1:R_2 R'_2)\nonumber \\
&\quad +I(\hat{C}_1\hat{Q}_1C'_1:\hat{C}_2\hat{Q}_2C'_2)
-I(C'_1:C'_2) \nonumber \\
&\geq I( R_1 R'_1:R_2 R'_2)
-I(R_1 R'_1:R_2 R'_2)
+I(C'_1:C'_2)
-I(C'_1:C'_2) \label{eq:J_additivity_4}\\
&=0, \nonumber
\end{align}
where Eq.~(\ref{eq:J_additivity_1}) is due to Definition~\ref{def:J_epsilon Z_epsilon};
%
in Eq.~(\ref{eq:J_additivity_2}) we expand the mutual informations in terms of entropies;
%
Eq.~(\ref{eq:J_additivity_3}) follows because the overall state on systems
$\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2$
is pure;
%
Eq.~(\ref{eq:J_additivity_4}) is due to data processing.
\item According to Theorem~\ref{thm: KI decomposition} \cite{KI2002,Hayden2004},
any isometry $U:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ acting on the state
$\omega^{C N Q R C'}$ which preserves the reduced state on systems $C N Q R C'$
($C'$ is here considered as part of the reference system) acts as follows:
\begin{align*}
(U \otimes \openone_{RC'}) \omega^{C N Q R C'}(U^{\dagger} \otimes \openone_{RC'})
=\sum_{j} p_j \proj{j}^{C}\otimes U_j \omega_j^{N} U_j^{\dagger} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'},
\end{align*}
where the isometry $U_j: N \rightarrow \hat{N} E$ satisfies
$\Tr_E [U_j \omega_j^{N} U_j^{\dagger}]=\omega_j$.
Therefore, in Definition~\ref{def:J_epsilon Z_epsilon} for $\epsilon=0$, the final state is
\begin{align*}
\tau^{\hat{C} \hat{N} \hat{Q} E R C'}
= \sum_{j} p_j \proj{j}^{C}\otimes U_j \omega_j^{N} U_j^{\dagger} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'}.
\end{align*}
Thus we can directly evaluate
\begin{align*}
Z_0(\omega)=S(\hat{N} E|C')_\tau=S(N |C)_\omega \text{ and }
J_0(\omega)=I(\hat{N} E:\hat{C}\hat{Q}|C')_\tau=0,
\end{align*}
concluding the proof.
\hfill\qedsymbol
\end{enumerate}
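The concavity argument in point 2 above hinges on the simultaneous concavity of the fidelity in both arguments, used in Eq.~(\ref{eq:fidelity in convexity}). The following Python sketch gives a standalone numerical sanity check of concavity in the second argument for randomly generated states; it assumes the root-fidelity convention $F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_1$ and is an illustration only, not part of the proof.

```python
import numpy as np

def psd_sqrt(rho):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    evals, vecs = np.linalg.eigh(rho)
    evals = np.clip(evals, 0.0, None)
    return (vecs * np.sqrt(evals)) @ vecs.conj().T

def fidelity(rho, sigma):
    """Root fidelity F(rho, sigma) = trace norm of sqrt(rho) sqrt(sigma)."""
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False)
    return float(np.sum(s))

def random_state(d, rng):
    """Random full-rank density matrix via a Gaussian Wishart construction."""
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
d, lam = 4, 0.3
omega, tau1, tau2 = (random_state(d, rng) for _ in range(3))

# Concavity in the second argument, as used in the proof:
# F(omega, lam*tau1 + (1-lam)*tau2) >= lam*F(omega,tau1) + (1-lam)*F(omega,tau2)
lhs = fidelity(omega, lam * tau1 + (1 - lam) * tau2)
rhs = lam * fidelity(omega, tau1) + (1 - lam) * fidelity(omega, tau2)
print(lhs, rhs)
```

Running this for many seeds never violates the inequality, as expected from the joint concavity of the root fidelity.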
\section{Discussion}
\label{sec:Discussion}
We have introduced a common framework for all single-source quantum compression
problems, i.e. settings without side information at the encoder or the decoder,
by defining the compression task as the reproduction of a given bipartite state
between the system to be compressed and a reference. That state, which defines
the task, can be completely general, and special instances recover Schumacher's
quantum source compression (in both variants of a pure state ensemble and of
a pure entangled state) \cite{Schumacher1995}
and compression of a mixed state ensemble source in the blind variant
\cite{Horodecki1998,KI2001}.
Our general result gives the optimal quantum compression rate in terms of
qubits per source state, both in the settings without and with entanglement, and
indeed the entire qubit-ebit rate region, reproducing the aforementioned
special cases, along with other previously considered problems \cite{ZK_Eassisted_ISIT_2019}.
Despite the technical difficulties in obtaining it, the end result has a
simple and intuitive interpretation. Namely, the given source $\rho^{AR}$
is equivalent to a source in standard Koashi-Imoto form,
\[
\omega^{CQR} = \sum_j p_j \proj{j}^C \otimes \rho_j^{QR},
\]
so that $j$ has to be compressed as classical information, at rate $S(C)$,
and $Q$ as quantum information, at rate $S(Q|C)$; in the presence of
entanglement, the former rate is halved while the latter is maintained.
Indeed, what our Theorem \ref{theorem:complete rate region mixed state}
shows is that the original source has the same qubit-ebit
rate region as the clean classical-quantum mixed source
\[
\Omega^{CQRR'C'} = \sum_j p_j \proj{j}^C \otimes \proj{\psi_j}^{QRR'} \otimes \proj{j}^{C'},
\]
where $\ket{\psi_j}^{QRR'}$ purifies $\rho_j^{QR}$, and $RR'C'$ is considered
the reference. In $\Omega$, $C$ is indeed a manifestly classical source,
since it is duplicated in the reference system, and conditional on $C$,
$Q$ is a genuinely quantum source since it is purely entangled with the
reference system. As $\Tr_{R'C'} \Omega^{CQRR'C'} = \omega^{CQR}$, any
code and any achievable rates for $\Omega$ are good for $\omega$, and
that is how the achievability of the rate region in Theorem \ref{theorem:complete rate region mixed state}
can be described. The opposite, that a code good for $\omega$ should be
good for $\Omega$, is far from obvious. Indeed, if that were true, it would
not only yield a quick and simple proof of our converse bounds, but would
imply that the rate region of Theorem \ref{theorem:complete rate region mixed state} satisfies a
strong converse! However, as we do not know this reduction to the source $\Omega$,
our converse proceeds via a more complicated, indirect route, and yields only
a weak converse. Whether the strong converse holds, and what the detailed
relation between the sources $\omega^{CQR}$ and $\Omega^{CQRR'C'}$ is,
remain open questions.
\medskip
\chapter{Quantum state redistribution for ensemble sources}
\label{chap:QSR ensemble}
In this chapter, we consider a generalization of the quantum state redistribution task,
where pure multipartite states from an ensemble source are distributed among
an encoder, a decoder and a reference system. The encoder, Alice, has access
to two quantum systems: system $A$ which she compresses and sends to the decoder,
Bob, and the side information system $C$ which she wants to keep at her site.
Bob has access to quantum side information in a system $B$, and wants to decode
the compressed information in such a way as to preserve the correlations with the
reference system on average.
As figures of merit, we consider both block error (which is the usual one
in source coding) and per-copy error (which is more akin to rate-distortion theory),
and find the optimal compression rate for the second criterion,
and achievable and converse bounds for the first. These bounds almost match in general,
up to an asymptotic error and an unbounded auxiliary system; for so-called irreducible
sources they are provably the same.
This chapter is based on the publications \cite{ZK_QSR_ensemble_ISIT_2020,QSR_ensemble_full}.
\section{The source model}
Quantum state redistribution (QSR) is a source compression task where both
encoder and decoder have access to side information systems \cite{Devetak2008_2,Yard2009,Oppenheim2008}.
Namely, Alice, Bob and a reference system share asymptotically many copies of a
pure state $\ket{\psi}^{ACBR}$, where Alice aims to compress the quantum system $A$
and send it to Bob via a noiseless quantum channel, while she has access to a side
information quantum system $C$, and Bob has access to the side information quantum system
$B$. Bob, upon receiving the compressed information, reconstructs system $A$, and the
figure of merit of this task is the entanglement fidelity between the
reconstructed systems and the purifying reference system $R$.
Quantum state redistribution generalizes Schumacher's compression, which
is recovered as the extreme case in which neither encoder nor decoder has any
side information \cite{Schumacher1995}: the source is simply described by a pure state
$\ket{\psi}^{AR}$ shared between the encoder and a reference system.
However, besides this model, Schumacher originally also considered a source
generating an ensemble of pure states, i.e. ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{A} \}$,
and showed that both source models lead to the same optimal compression rate
(cf. Barnum \emph{et al.} \cite{Barnum1996}, as well as \cite{Winter1999}),
namely the von Neumann entropy of the reduced or average state of $A$, respectively.
In the presence of side information systems though, an ensemble model and a purified
source model can lead to different compression rates. An example of this is
the classical-quantum Slepian-Wolf problem considered in \cite{ZK_cqSW_2018,ZK_cqSW_ISIT_2019},
where the compression rate can be strictly smaller than that of the corresponding
purified source.
The general correlated ensemble source ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{AB} \}$ was
considered first in \cite{Winter1999} and then developed in \cite{Devetak2003}
and by Ahn \emph{et al.} \cite{Ahn2006},
with $A$ the system to be compressed and $B$ the side information system at the decoder.
It is an ensemble version of the coherent state merging task introduced
in \cite{family,Abeyesinghe2009}.
In \cite{Devetak2003}, the source is $\ket{\psi_x}^{AB} = \ket{f(x)}^A\ket{\phi_x}^B$.
The optimal compression rate for an irreducible source of product states and for a source
generating Bell states is found in \cite{Ahn2006}; in the general case, however, the problem had
been left open.
In the present chapter, we consider an even more general ensemble source where both
encoder and decoder have access to side information systems, and which thus
constitutes an ensemble generalization of the pure QSR source. More precisely,
we consider a source which is given by an ensemble
${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{ACBR} \}$ of pure states
$\psi_x=\proj{\psi_x}\in{\mathcal{S}}(A \otimes C \otimes B \otimes R)$ with
$\ket{\psi_x}\in A \otimes C \otimes B \otimes R$, where the Hilbert space
$A \otimes C \otimes B \otimes R$ is assumed in this chapter to be of
finite dimension $|A|\cdot|C|\cdot|B|\cdot|R|<\infty$;
${\mathcal{S}}(A \otimes C \otimes B \otimes R)$ denotes the set of states (density operators).
Furthermore, $x\in{\mathcal{X}}$ ranges over a discrete alphabet, so we can describe
the source equivalently by the classical-quantum (cq) state
$\omega^{ACBRX} = \sum_x p(x) \proj{\psi_x}^{ACBR} \otimes \proj{x}^X$.
In this model, $A$ and $C$ are Alice's information to be sent and side
information system, respectively. System $B$ is the side information of Bob,
and $R$ and $X$ are inaccessible reference systems used only to define the task.
The ensemble model of the previous chapter
as well as those models that have been considered in \cite{Winter1999,Ahn2006,ZK_Eassisted_ISIT_2019,ZK_cqSW_2018,ZK_cqSW_ISIT_2019}
are all special cases of the model that we consider here. We find the optimal
compression rate under the per-copy fidelity criterion, and achievable
and converse rates under the block-fidelity criterion which almost match, up to an
asymptotic error and an unbounded auxiliary system. In the generic case
of so-called \emph{irreducible} ensembles, they are provably the same.
\section{The compression task}
\label{sec:The Compression task QSR ensemble}
We consider the information theoretic setting of many copies of the
source $\omega^{ACBRX}$, i.e.~$\omega^{A^nC^nB^nR^nX^n}=(\omega^{ACBRX})^{\otimes n}$:
\[
\omega^{A^nC^nB^nR^nX^n}
= \sum_{x^n \in \mathcal{X}^n} p(x^n) \proj{\psi_{x^n}}^{A^nC^nB^nR^n}
\otimes \proj{x^n}^{X^n},
\]
using the notation
\begin{alignat*}{3}
x^n &= x_1 x_2 \ldots x_n,
&p(x^n) &= p(x_1) p(x_2) \cdots p(x_n), \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n}, \
&\ket{\psi_{x^n}} &= \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{alignat*}
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
Alice, who has access to $A^n$ and the side information system $C^n$, performs the
encoding compression operation ${\mathcal{E}}:A^n C^n A_0 \longrightarrow M \hat{C}^n$
on $A^nC^n$ and her part $A_0$ of the entanglement, which is a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map.
Notice that as functions, CPTP maps act on the operators (density matrices) over
the respective input and output Hilbert spaces, but as there is no risk of confusion,
we will simply write the Hilbert spaces when denoting a CPTP map.
Alice's encoding operation produces the state $\sigma^{M \hat{C}^n B^n B_0 R^n X^n}$
with $M$, $\hat{C}^n$ and $B_0$ as the compressed system of Alice, the reconstructed
side information system of Alice and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M| \leq |A|^n$.
The system $M$ is then sent via a noiseless quantum channel to Bob, who performs
a decoding operation $\mathcal{D}:M B^n B_0 \longrightarrow \hat{A}^n \hat{B}^n$
on the compressed system $M$, his side information $B^n$ and his part of the entanglement
$B_0$, to reconstruct the original systems, now denoted $\hat{A}^n$ and $\hat{B}^n$.
We call
$\frac1n \log|M|$ the \emph{quantum rate} of the compression protocol.
We say an encoding-decoding scheme (or code, for short) has \emph{block fidelity}
$1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion}
F &:= F(\omega^{A^nC^nB^nR^nX^n},\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}) \nonumber \\
&= \sum_{x^n} p(x^n) F\left( \psi_{x^n}^{A^nC^nB^nR^n} \!\!,
\xi_{x^n}^{\hat{A}^n \hat{C}^n \hat{B}^n R^n} \right) \geq 1-\epsilon,
\end{align}
where
\begin{align*}
\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}
&=\sum_{x^n \in \mathcal{X}^n} p(x^n) \xi_{x^n}^{\hat{A}^n \hat{C}^n \hat{B}^n R^n}
\otimes \proj{x^n}^{X^n}\\
&=\left((\mathcal{D}\circ{\mathcal{E}})\otimes {\operatorname{id}}_{R^nX^n}\right) \omega^{A^nC^nB^n R^nX^n }.
\end{align*}
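The second line of Eq.~(\ref{eq:block fidelity criterion}) uses the fact that the fidelity between two cq states with the same classical distribution decomposes into the $p$-weighted average of the conditional fidelities. The following Python snippet checks this identity numerically for randomly generated (hypothetical) states, assuming the root-fidelity convention $F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_1$:

```python
import numpy as np

def psd_sqrt(rho):
    """Matrix square root of a positive semidefinite Hermitian matrix."""
    evals, vecs = np.linalg.eigh(rho)
    return (vecs * np.sqrt(np.clip(evals, 0.0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    """Root fidelity: trace norm of sqrt(rho) sqrt(sigma)."""
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False)
    return float(np.sum(s))

def random_state(d, rng):
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def cq_state(p, blocks):
    """sum_x p(x) |x><x| (x) block_x, as one block-diagonal matrix."""
    d = blocks[0].shape[0]
    out = np.zeros((len(p) * d, len(p) * d), dtype=complex)
    for x, (px, b) in enumerate(zip(p, blocks)):
        out[x*d:(x+1)*d, x*d:(x+1)*d] = px * b
    return out

rng = np.random.default_rng(1)
d, p = 3, np.array([0.2, 0.8])
rhos = [random_state(d, rng) for _ in p]
sigmas = [random_state(d, rng) for _ in p]

# Fidelity of the joint cq states vs. the p-weighted average of block fidelities
joint = fidelity(cq_state(p, rhos), cq_state(p, sigmas))
split = sum(px * fidelity(r, s) for px, r, s in zip(p, rhos, sigmas))
print(joint, split)   # the two values agree up to numerical precision
```

The identity holds exactly because the square root of a block-diagonal state is block diagonal, so the trace norm splits over the classical branches.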
We say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy-error-RD}
\overline{F} &:= \frac{1}{n}\sum_{i=1}^n F(\omega^{A_iC_iB_iR_iX^n},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX^n}) \nonumber\\
&= \sum_{x^n} p(x^n) \frac{1}{n}\sum_{i=1}^n
F\left( \psi_{x_i}^{ACBR} \!\!,
\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right)
\geq 1-\epsilon.
\end{align}
By the monotonicity of the fidelity under the partial trace
(over $X_{[n]\setminus i}$), this implies the easier-to-verify
condition
\begin{align}
\label{eq:per copy fidelity criterion}
\widetilde{F}
:= \frac{1}{n}\sum_{i=1}^n F\left(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}\right)
\geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}=\Tr_{[n]\setminus i}\,\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}$,
and `$\Tr_{[n]\setminus i}$' denotes the partial trace over all systems
with indices in $[n]\setminus i$.
Conversely, Eq. \eqref{eq:per copy fidelity criterion} can be shown to imply
the criterion \eqref{eq:per-copy-error-RD} with $(1-\epsilon)^2 \geq 1-2\epsilon$
in place of $1-\epsilon$ on the right-hand side. Indeed, note that
\begin{align*}
&F\left(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}\right) \\
& \quad = \sum_{x_i} p(x_i) F\left(\psi_{x_i}^{ACBR},
\sum_{x_{[n]\setminus i}} p(x_{[n]\setminus i})
\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right).
\end{align*}
Thus, by the convexity of the square function and Jensen's inequality,
\[\begin{split}
(1-\epsilon)^2
&\leq \left( \frac{1}{n}\sum_{i=1}^n F(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}) \right)^2 \\
&\leq \frac{1}{n}\sum_{i=1}^n \sum_{x^n} p(x^n)
F\left(\psi_{x_i}^{ACBR},\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right),
\end{split}\]
and the last line is the left hand side of Eq. \eqref{eq:per-copy-error-RD}.
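The two ingredients of this estimate, concavity of the fidelity in its second argument and convexity of the square function, can be checked numerically. The following sketch is our illustration, not part of the argument; it uses random density matrices and computes the Uhlmann fidelity via `scipy.linalg.sqrtm`:

```python
# Numerical sanity check (illustration only): the fidelity
# F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 is concave in its
# second argument, and x -> x^2 is convex (Jensen), the two facts used above.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def rand_state(d):
    """Random d-dimensional density matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) in [0, 1]."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s)))) ** 2

d = 4
omega = rand_state(d)
xis = [rand_state(d) for _ in range(5)]
q = rng.dirichlet(np.ones(5))            # random convex weights

mix = sum(w * x for w, x in zip(q, xis))
lhs = fidelity(omega, mix)               # F of the mixture
rhs = sum(w * fidelity(omega, x) for w, x in zip(q, xis))
assert lhs >= rhs - 1e-8                 # concavity in the second argument

fs = np.array([fidelity(omega, x) for x in xis])
assert fs.mean() ** 2 <= (fs ** 2).mean() + 1e-12   # Jensen for the square
```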
Correspondingly, we say $Q_b$ and $Q_c$ are an asymptotically achievable block-error rate
and an asymptotically achievable per-copy-error rate, respectively,
if for all $n$ there exist codes such that the block fidelity and per-copy fidelity converge
to $1$, and the quantum rate converges to $Q_b$ and $Q_c$, respectively.
Because of the relations $\widetilde{F}^2 \leq \overline{F} \leq \widetilde{F}$
demonstrated above, it does not matter which of the two versions of per-copy
fidelity we use.
According to Stinespring's theorem \cite{Stinespring1955}, the encoding and decoding
CPTP maps ${\mathcal{E}}$ and ${\mathcal{D}}$ can be dilated respectively to the isometries
$U_{{\mathcal{E}}}: A^n C^n A_0\hookrightarrow M\hat{C}^n W_n$ and
$U_{{\mathcal{D}}}: M B^n B_0 \hookrightarrow \hat{A}^n \hat{B}^nV_n$,
with $W_n$ and $V_n$ as the environment systems of the encoder and decoder, respectively.
\vspace{-0.2cm}
\section{Main Results}
\label{sec: main results of qsr ensemble}
In Theorem~\ref{thm:main theorem} we obtain the main results of this chapter concerning
optimal (minimum) block-error rate $Q_b^*$ and optimal per-copy-error rate $Q_c^*$.
These rates are expressed in terms of the following single-letter function.
\begin{definition}\label{def:Q_epsilon}
For a state
$\omega^{ACBRX}=\sum_x p(x) \proj{\psi_x}^{ACBR}\otimes \proj{x}^{X}$ and $\epsilon \geq 0$ define:
\begin{align*}
Q(\epsilon) :=&
\inf \frac{1}{2} I(Z:RXX'|B)_{\sigma}
\text{ over CPTP maps } \\
&{\mathcal{E}}_{\epsilon}: AC \rightarrow Z \hat{C} \text{ and } {\mathcal{D}}_{\epsilon}:ZB \rightarrow \hat{A}\hat{B} \text{ s.t.} \\
& F( \omega^{ACBRX},\xi^{\hat{A} \hat{C} \hat{B} RX}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\sigma^{Z\hat{C}BRX}\!
&:=\! ({\mathcal{E}}_{\epsilon}\otimes {\operatorname{id}}_{BRX})\omega^{ACBRX}
\!=\! \sum_x p(x) \sigma_x^{Z\hat{C}BR} \!\otimes \!\proj{x}^{X}\!\!, \\
\xi^{\hat{A}\hat{C}\hat{B}RX}
\!&:=\! ({\mathcal{D}}_{\epsilon} \otimes {\operatorname{id}}_{\hat{C}RX}) \sigma^{Z\hat{C}BRX}
\!=\! \sum_x p(x) \xi_x^{\hat{A}\hat{C}\hat{B}R}\!\otimes\! \proj{x}^{X}\!\!.
\end{align*}
Moreover, define $\widetilde{Q}(0):=\lim_{\epsilon \to 0+} Q(\epsilon)$.
\end{definition}
The function $Q(\epsilon)$ is defined for the specific source $\omega^{ACBRX}$; this dependency is dropped to simplify the notation.
\begin{theorem}
\label{thm:main theorem}
The minimum asymptotically achievable rate with per-copy error is
\begin{align*}
Q_c^*=\widetilde{Q}(0).
\end{align*}
In contrast, the minimum asymptotically achievable rate with block error is bounded
from above and below as follows:
\begin{align*}
\widetilde{Q}(0)
\leq Q_b^* \leq Q(0).
\end{align*}
\end{theorem}
\vspace{-0.2cm}
\begin{proof}
We prove the achievability here and leave the converse proof to the next section.
Let $U_0: AC\hookrightarrow Z \hat{C} W$ and $\widetilde{U}_0: ZB\hookrightarrow \hat{A} \hat{B} V$ be the isometric extensions of the CPTP maps ${\mathcal{E}}_0$ and ${\mathcal{D}}_0$, respectively, in
Definition~\ref{def:Q_epsilon} with fidelity $1$ (i.e. $\epsilon = 0$).
To achieve the block-error rate $Q_b = Q(0)$,
Alice applies the isometry $U_0$,
after which the purified state shared between the parties is
\begin{align*}
\ket{\sigma_0}^{Z\hat{C}WBRXX'}
=\sum_x \sqrt{p(x)} \ket{\sigma_0(x)}^{Z\hat{C}WBR}\otimes \ket{x}^{X}\otimes \ket{x}^{X'}.
\end{align*}
Then the parties apply the QSR protocol to many copies of the above source, where Alice
sends system $Z$ to Bob and systems $\hat{C}$ and $W$ are her side information.
The rate achieved by the QSR protocol is
\begin{align*}
Q_b=\frac{1}{2}I(Z:RXX'|B)_{\sigma_0}.
\end{align*}
After executing the QSR protocol, Bob has $Z^n$,
and the state shared between the parties is
$\hat{\sigma}_0^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n}$,
which satisfies the following entanglement fidelity:
\begin{align}\label{eq: fidelity 3}
F\left( (\sigma_0^{Z\hat{C}WBRXX'})^{\otimes n},
\hat{\sigma}_0^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n} \right) \to 1,
\end{align}
as $n\to\infty$.
Then, Bob applies to each system the CPTP map ${\mathcal{D}}_0:ZB \longrightarrow \hat{A}\hat{B}$.
Due to the monotonicity of the fidelity under CPTP maps, we obtain
from Eq.~(\ref{eq: fidelity 3})
\begin{align}\label{eq: fidelity 4}
\! \! \! \! F\left(\!\! ({\mathcal{D}}_0^{\otimes n}\!\otimes \!{\operatorname{id}})(\sigma_0^{Z\hat{C}BRX})^{\otimes n}\!\!,
({\mathcal{D}}_0^{\otimes n}\!\otimes\! {\operatorname{id}})\hat{\sigma}_0^{Z^n\hat{C}^nB^n R^n X^n } \!\! \right) \!\!\to\!\! 1
\end{align}
as $n \to \infty$,
where the identity channel ${\operatorname{id}}$ acts on systems ${\hat{C}^nR^nX^n}$.
Notice that by the definition of ${\mathcal{D}}_0$,
\begin{align*}
(\omega^{ACBRX})^{\otimes n}
=({\mathcal{D}}_0^{\otimes n}\otimes {\operatorname{id}}_{\hat{C}^nR^nX^n})(\sigma_0^{Z\hat{C}BRX})^{\otimes n}.
\end{align*}
Thus, the block fidelity criterion of Eq.~(\ref{eq:block fidelity criterion}) holds.
Now, let $U_{\epsilon}: AC\hookrightarrow Z \hat{C} W$ and $\widetilde{U}_{\epsilon}: ZB\hookrightarrow \hat{A} \hat{B} V$ be the isometric extensions of the CPTP maps ${\mathcal{E}}_{\epsilon}$ and ${\mathcal{D}}_{\epsilon}$, respectively, in
Definition~\ref{def:Q_epsilon} with fidelity $1-\epsilon$.
To achieve the per-copy-error rate $Q_c^*$, Alice applies the isometry
$U_{\epsilon}$ to each copy of the source.
Then the purified state shared between the parties is
\begin{align*}
\ket{\sigma_{\epsilon}}^{Z\hat{C}WBRXX'}
=\sum_x \sqrt{p(x)} \ket{\sigma_{\epsilon}(x)}^{Z\hat{C}WBR}\otimes \ket{x}^{X}\otimes \ket{x}^{X'}.
\end{align*}
The parties apply the QSR protocol to many copies of the above source where Alice
sends system $Z$ to Bob and systems $\hat{C}$ and $W$ are her side information.
The rate achieved by the QSR protocol is
\begin{align}
Q_c &=\frac{1}{2}I(Z:RXX'|B)_{\sigma_{\epsilon}}. \nonumber
\end{align}
After executing the QSR protocol, Bob has $Z^n$, and the state shared between
the parties is $\hat{\sigma}_{\epsilon}^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n}$,
which satisfies the following
entanglement fidelity:
\begin{align*}
F\left( (\sigma_{\epsilon}^{Z\hat{C}WBRXX'})^{\otimes n},
\hat{\sigma}_{\epsilon}^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n} \right) \to 1
\end{align*}
as $n \to \infty$.
Due to monotonicity of the fidelity under partial trace, we obtain the per-copy fidelity,
\begin{equation}\label{eq: fidelity 1}
F(\sigma_{\epsilon}^{Z\hat{C}BRX},\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i R_i X_i }) \to 1,
\end{equation}
for all $i \in [n]$ and $n \to \infty$.
Then, to each system $i$, Bob applies the CPTP map
${\mathcal{D}}_{\epsilon}$.
We obtain
\begin{align}\label{eq: fidelity 2}
F \! \left(\! ( \! {\mathcal{D}}_{\epsilon} \! \otimes \! {\operatorname{id}}_{\hat{C}RX} \! )\sigma_{\epsilon}^{Z\hat{C}BRX}\!,\!
( \! {\mathcal{D}}_{\epsilon}\otimes {\operatorname{id}}_{\hat{C}RX} \! )\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i \! R_i \! X_i } \! \!\right) \!\to \! 1
\end{align}
for all $i \in [n]$ and $n \to \infty$,
which follows from Eq.~(\ref{eq: fidelity 1}) due to monotonicity of the fidelity
under CPTP maps.
On the other hand, the state
$\xi_{\epsilon}^{\hat{A}\hat{C}\hat{B} RX}
=({\mathcal{D}}_{\epsilon} \otimes {\operatorname{id}}_{\hat{C}RX})\sigma_{\epsilon}^{Z\hat{C}BRX}$
has high fidelity with the original source state, directly from the definition of ${\mathcal{D}}_{\epsilon}$:
\begin{align*}
F(\xi_{\epsilon}^{\hat{A}\hat{C}\hat{B} RX},\omega^{ACBRX}) \to 1.
\end{align*}
Therefore, from the above fidelity and Eq.~(\ref{eq: fidelity 2}) we obtain
\begin{align*}
F\left( \omega^{ACBRX},
({\mathcal{D}}_{\epsilon}\otimes {\operatorname{id}}_{\hat{C}RX})\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i R_i X_i } \right)
\to 1
\end{align*}
for all $i \in [n]$ and $n \to \infty$,
which satisfies the per-copy fidelity criterion in Eq.~(\ref{eq:per copy fidelity criterion}).
\end{proof}
Now, we define a new single-letter function,
which we then use to obtain simplified rates in Lemma~\ref{lemma: lower bound on Q_tilde(0)}
and Corollary~\ref{cor:irreducible}, both of which are proved in \cite{QSR_ensemble_full}.
\begin{definition}
\label{def:K_epsilon}
For a state
$\omega^{ACBRX}=\sum_x p(x) \proj{\psi_x}^{ACBR}\otimes \proj{x}^{X}$ and $\epsilon \geq 0$ define:
\begin{align*}
K_\epsilon(\omega) &:= \sup I(W:X|\hat{C})_{\sigma}
\text{ over isometries } \\
&\phantom{=====}
U: AC \rightarrow Z \hat{C} W \text{ and }
\widetilde{U}:ZB \rightarrow \hat{A}\hat{B}V \text{ s.t.} \\
&\phantom{=====}
F( \omega^{ACBRX},\xi^{\hat{A} \hat{C} \hat{B} RX}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\sigma^{Z\hat{C}WBRX}
&:= (U\otimes \openone_{BRX})\omega^{ACBRX} (U\otimes \openone_{BRX})^{\dagger} \\
& = \sum_x p(x) \proj{\sigma_x}^{Z\hat{C}WBR}\otimes \proj{x}^{X}, \\
\xi^{\hat{A}\hat{C}\hat{B}WVRX}
&:= (\widetilde{U}\otimes \openone_{\hat{C}WRX}) \sigma^{Z\hat{C}WBRX}
(\widetilde{U}\otimes \openone_{\hat{C}WRX})^{\dagger}\\
&= \sum_x p(x) \proj{\xi_x}^{\hat{A}\hat{C}\hat{B}WVR}\otimes \proj{x}^{X}, \\
\xi^{\hat{A}\hat{C}\hat{B}RX}
&:= \Tr_{VW} \xi^{\hat{A}\hat{C}\hat{B}WVRX}.
\end{align*}
Moreover, define $\widetilde{K}_0(\omega):=\lim_{\epsilon \to 0+} K_{\epsilon}(\omega)$.
\end{definition}
\begin{remark}
Definition~\ref{def:K_epsilon} directly implies that $K_{0}(\omega) \leq \widetilde{K}_{0}(\omega)$
because $K_{\epsilon}(\omega)$ is a non-decreasing function of $\epsilon$.
Furthermore, $K_{0}(\omega)$ can be strictly positive: for example, for a source
with trivial system $C$ whose states satisfy $\psi_{x}^{A}\psi_{x'}^{A}=0$ for $x\neq x'$,
i.e. the reduced states have pairwise orthogonal supports, we obtain $K_{0}(\omega)=S(X)$.
This follows because Alice can measure her system and obtain the value of $X$ and
then copy this classical information to the register $W$.
\end{remark}
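As a sanity check of this remark (our illustration, with an arbitrary example distribution $p$), one can compute $I(W:X)$ for the resulting classical state $\sigma^{WX}=\sum_x p(x)\proj{x}^W\otimes\proj{x}^X$ directly and confirm $I(W:X)=S(X)=H(p)$:

```python
# Sketch: Alice measures, learns x, and copies it to W; the joint state on
# W (x) X is then classical and perfectly correlated, so I(W:X) = H(p).
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

p = np.array([0.5, 0.25, 0.25])            # example ensemble probabilities p(x)
d = len(p)
rho_WX = np.zeros((d * d, d * d))
for x in range(d):
    proj = np.zeros((d, d)); proj[x, x] = 1.0
    rho_WX += p[x] * np.kron(proj, proj)   # p(x) |x><x|_W (x) |x><x|_X
rho_W = np.diag(p)                         # both marginals equal diag(p)
rho_X = np.diag(p)

I_WX = vn_entropy(rho_W) + vn_entropy(rho_X) - vn_entropy(rho_WX)
H_p = float(-np.sum(p * np.log2(p)))
assert abs(I_WX - H_p) < 1e-9              # I(W:X) = S(X) = H(p)
```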
\begin{lemma}\label{lemma: lower bound on Q_tilde(0)}
The rate $\widetilde{Q}(0)$ is lower bounded as:
\begin{align*}
\widetilde{Q}(0) &\!\geq\! \frac{1}{2} \left(S(A|B)+S(A|C) \right) \!-\!\frac{1}{2}\widetilde{K}_{0} \\
&\!=\!\frac{1}{2}I(A:RXX'|B)_{\omega}\!-\!\frac{1}{2}\widetilde{K}_{0},
\end{align*}
where the above conditional mutual information is precisely the communication rate
of QSR for the purified source
\begin{align}\label{eq: purified source}
\ket{\omega}^{ACBRXX'}
=\sum_x \sqrt{p(x)} \ket{\psi_x}^{ACBR} \otimes \ket{x}^X \otimes \ket{x}^{X'}.
\end{align}
Moreover, if system $C$ is trivial, then $\widetilde{Q}(0)$ is equal to this lower bound.
\end{lemma}
\begin{definition}[{Barnum~\emph{et~al.}~\cite{Barnum2001_2}}]
\label{def:reducibility QSR ensemble}
An ensemble
${\mathcal{E}}=\{p(x),\proj{\psi_x}^{ACBR} \}_{x\in \mathcal{X}}$ of pure states
is called \emph{reducible} if its states fall into two or more orthogonal subspaces.
Otherwise the ensemble ${\mathcal{E}}$ is called \emph{irreducible}.
%
We apply the same terminology to the source state $\omega^{ACBRX}$.
\end{definition}
\begin{corollary}
\label{cor:irreducible}
For an irreducible source $\omega^{ACBRX}$, $K_0=\widetilde{K}_0=0$. Hence, the
optimal asymptotically achievable per-copy-error rate and block-error rate
are equal and
\begin{align*}
Q^*_c=Q^*_b=\frac{1}{2}\left(S(A|C)+S(A|B) \right).
\end{align*}
\end{corollary}
\section{Converse}
In this section, we first show some properties of the function $Q(\epsilon)$, which we then use to prove the converse of Theorem~\ref{thm:main theorem}.
\begin{lemma}
\label{lemma:Q-convex}
For $0 \leq \epsilon \leq 1$, $Q(\epsilon)$ is a monotonically non-increasing, convex function of $\epsilon$. Consequently, for $0<\epsilon <1$ it is also continuous.
\end{lemma}
\vspace{-0.2cm}
\begin{proof}
The monotonicity directly follows from the definition.
For the convexity, we verify Jensen's inequality; that is, we start with maps
${\mathcal{E}}_1,{\mathcal{D}}_1$ eligible for error $\epsilon_1$ with output state $\xi_1^{\hat{A}\hat{C}\hat{B}RX}$, maps ${\mathcal{E}}_2,{\mathcal{D}}_2$ eligible for error $\epsilon_2$ with output state $\xi_2^{\hat{A}\hat{C}\hat{B}RX}$,
and $0\leq p \leq 1$. By embedding into larger Hilbert spaces if necessary, we
can w.l.o.g. assume that the maps act on the same systems for $i=1,2$.
We define the following two maps:
\vspace{-0.2cm}
\begin{align*}
{\mathcal{E}}(\rho) &:= p {\mathcal{E}}_1(\rho) \otimes \proj{1}^{Z'} + (1-p) {\mathcal{E}}_2(\rho) \otimes \proj{2}^{Z'}, \\
{\mathcal{D}}(\rho) &:= {\mathcal{D}}_1(\bra{1}^{Z'} \rho \ket{1}^{Z'}) + {\mathcal{D}}_2(\bra{2}^{Z'} \rho \ket{2}^{Z'}).
\end{align*}
They evidently realise the output state $\xi^{\hat{A}\hat{C}\hat{B}RX} = p\xi_1^{\hat{A}\hat{C}\hat{B}RX} + (1-p)\xi_2^{\hat{A}\hat{C}\hat{B}RX}$ with the following fidelity:
\begin{align}
F&(\omega^{ACBRX} ,\xi^{\hat{A} \hat{C} \hat{B} RX} ) \nonumber\\
&= F(\omega^{ACBRX} ,p \xi_1^{\hat{A} \hat{C} \hat{B} RX}
+ (1-p) \xi_2^{\hat{A} \hat{C} \hat{B} RX}) \nonumber \\
&\geq p F(\omega^{ACBRX} \!,\!\xi_1^{\hat{A} \hat{C} \hat{B} RX} \!)
\!+\!(1\!-\! p)F(\omega^{ACBRX} \!,\xi_2^{\hat{A} \hat{C} \hat{B} RX} \!) \nonumber\\
&\geq 1-\left( p\epsilon_1 +(1-p)\epsilon_2 \right), \nonumber
\end{align}
where the third line is due to concavity of the fidelity in its second argument.
The last line follows by the definitions of the states $\xi_1$ and $\xi_2$.
Therefore, the maps ${\mathcal{E}}$ and ${\mathcal{D}}$ yield a fidelity of at least
$1-\left( p\epsilon_1 +(1-p)\epsilon_2 \right) =: 1-\epsilon$.
Thus,
\begin{align*}
Q(\epsilon) &\leq \frac{1}{2} I(ZZ':RXX'|B)_{\sigma} \\
&= p\, \frac{1}{2} I(Z:RXX'|B)_{\sigma_1} + (1-p)\, \frac{1}{2} I(Z:RXX'|B)_{\sigma_2},
\end{align*}
where $\sigma$, $\sigma_1$ and $\sigma_2$ are the states produced by the encodings ${\mathcal{E}}$, ${\mathcal{E}}_1$ and ${\mathcal{E}}_2$, respectively; the equality uses that the flag $Z'$ is uncorrelated with $BRXX'$.
Taking the infimum over the maps ${\mathcal{E}}_i,{\mathcal{D}}_i$ shows convexity.
The continuity statement follows from the folklore fact that
any real-valued function that is convex on an interval is continuous on the
interior of that interval.
\end{proof}
\begin{proof-of}[of Theorem \ref{thm:main theorem} (converse)]
We prove the converse for the per-copy fidelity criterion; since block fidelity
$1-\epsilon$ implies per-copy fidelity at least $1-\epsilon$, the same converse
bound then holds for the block fidelity criterion as well.
Consider a block-length-$n$ code with per-copy fidelity $1-\epsilon$. The number
of qubits, $\log|M|$, can be lower bounded as follows, with respect to
the encoded state $({\mathcal{E}}\otimes {\operatorname{id}}_{B_0B^nR^nX^nX'^n})\omega^{A^nC^nB^nR^nX^nX'^n} \otimes \Phi_K^{A_0B_0}$ of the purified source:
\begin{align*}
2\!\log|M| \!&\!\geq 2 S(M) \\
&\geq I(M:R^nX^n X'^n|B^nB_0) \\
&= \! I(\underbrace{MB_0}_{Z}:R^nX^nX'^n|B^n) \!\!-\!\! I(B_0:R^nX^nX'^n|B^n) \\
&= I(Z:R^nX^nX'^n|B^n) \\
&= \sum_{i=1}^n I(Z:R_iX_iX'_i|B^nR_{<i}X_{<i}X'_{<i}) \\
&\quad \quad+ \sum_{i=1}^n I(R_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}:R_iX_iX'_i|B_i) \\
&= \sum_{i=1}^n I(ZR_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}:R_iX_iX'_i|B_i) \\
&\geq \sum_{i=1}^n I(\underbrace{ZB_{[n]\setminus i}}_{Z_i}:R_iX_iX'_i|B_i),
\vspace{-0.3cm}
\end{align*}
where in the first two inequalities we use standard entropy inequalities;
the equality in the third line is due to the chain rule, and the second
conditional mutual information is $0$ because $B_0$ is independent of $B^nR^nX^n{X'}^n$;
the fourth line introduces the register $Z := MB_0$, noting that
the encoding together with the entangled state defines a CPTP map
${\mathcal{E}}_0:A^nC^n \rightarrow Z\hat{C}^n$, via ${\mathcal{E}}_0(\rho) = ({\mathcal{E}} \otimes {\operatorname{id}}_{B_0})(\rho \otimes\Phi_K^{A_0B_0})$;
in the fifth line we use the chain rule iteratively, and each summand of the
second sum introduced there is $0$ because, for all $i$, $R_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}$ is independent of $R_iX_iX'_iB_i$;
in the sixth line we use the chain rule again for each $i$, and the
last line is due to data processing.
Now, for the $i$-th copy of the source $\omega^{A_iC_iB_iR_iX_i}$, we define maps
${\mathcal{E}}_i:A_i C_i \rightarrow Z_i \hat{C}_i$ and ${\mathcal{D}}_i:B_iZ_i \rightarrow \hat{A}_i \hat{B}_i$,
as follows:
\begin{itemize}
\item[${\mathcal{E}}_i$:] Alice tensors her systems $A_iC_i$ with a dummy state $\omega^{\otimes [n]\setminus i}$ and
with $\Phi_K^{A_0B_0}$ (note that all these systems are in her possession).
Then she applies ${\mathcal{E}}:A^nC^nA_0 \rightarrow M \hat{C}^n$, and sends
$Z_i := M B_0 B_{[n]\setminus i}$ to Bob, while keeping $\hat{C}_i$.
All other systems, i.e. $\hat{C}_{[n]\setminus i} R_{[n]\setminus i}X_{[n]\setminus i}$, are trashed.
\item[${\mathcal{D}}_i$:] Bob applies ${\mathcal{D}}$ to $Z_iB_i = M B_0 B^n $ and keeps
$\hat{A}_i \hat{B}_i$, trashing the rest $\hat{A}_{[n]\setminus i}\hat{B}_{[n]\setminus i}$.
\end{itemize}
By definition, the output state
\begin{align*}
\zeta^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}
\!\!= \!({\mathcal{D}}_i \otimes {\operatorname{id}}_{\hat{C}_iR_iX_i}\!)\!\circ\!({\mathcal{E}}_i \otimes {\operatorname{id}}_{B_iR_iX_i}\!)\omega^{A_iC_iB_iR_iX_i}
\end{align*}
equals $\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}$, which has fidelity $1-\epsilon_i$ with the source $\omega^{ACBRX}$; the fidelities of all copies satisfy $ \frac{1}{n} \sum_i (1-\epsilon_i) \geq 1-\epsilon$.
Thus, we obtain, with respect to the states $({\mathcal{E}}_i \otimes {\operatorname{id}}_{B_iR_iX_iX'_i})\omega^{A_iC_iB_iR_iX_iX'_i}$
\[\begin{split}
\frac{1}{n}\log|M| &\geq \frac{1}{n} \sum_{i=1}^n \frac12 I(Z_i:R_iX_iX'_i|B_i)\\
& \geq \frac{1}{n} \sum_{i=1}^n Q(\epsilon_i)
\geq Q\left( \frac{1}{n} \sum_{i=1}^n \epsilon_i\right)
\geq Q(\epsilon),
\end{split}\]
where the first inequality continues the chain of inequalities from before; the
second holds by the definition of $Q(\epsilon_i)$, since the pair
$({\mathcal{E}}_i,{\mathcal{D}}_i)$ achieves fidelity $1-\epsilon_i$; the third follows
by convexity, and the last by monotonicity of $Q(\epsilon)$ (Lemma \ref{lemma:Q-convex}).
By taking the limits $\epsilon \to 0$ and $n \to \infty$, the claim follows.
\end{proof-of}
\vspace{-0.2cm}
\section{Discussion}
\label{sec: discussion qsr ensemble}
We considered a variant of the quantum state redistribution task, where pure
multipartite states from an ensemble are distributed between an encoder,
a decoder and a reference system.
We distinguish two figures of merit for the information processing, per-copy
fidelity and block fidelity, and define the corresponding quantum communication
rates depending on the fidelity criterion, when unlimited entanglement is available.
For the per-copy fidelity criterion, we find that the optimal qubit rate of compression is equal to $\widetilde{Q}(0)$ from Definition~\ref{def:Q_epsilon},
which is bounded from below by the rate of the conventional QSR task minus the limit of the
single-letter non-negative function $\widetilde{K}_0$ from Definition~\ref{def:K_epsilon}:
\begin{align*}
\widetilde{Q}(\!0\!)\!\geq\! \frac{1}{2}\! \left( \!S(A|B)\!+\!S(A|C) \right) \!-\! \frac{1}{2}\! \widetilde{K}_0
\!=\! \frac{1}{2} I(A:RXX'|B)_{\omega} \!-\! \!\frac{1}{2} \widetilde{K}_0,
\end{align*}
where the conditional mutual information is the rate of QSR
for the purified source in Eq.~(\ref{eq: purified source}). This lower bound is tight if system $C$ is trivial (state merging scenario).
For the block fidelity criterion, we have found converse and achievability bounds:
\vspace{-0.2cm}
\begin{align*}
\widetilde{Q}(0)\leq Q_b^* \leq Q(0).
\end{align*}
The two bounds would match if we knew that the function $Q(\epsilon)$ were
continuous at $\epsilon=0$. However, we do not know this; for one thing, one
cannot use compactness to show continuity, because the output system $W$ in
Definition~\ref{def:K_epsilon} is a priori unbounded.
For irreducible sources though, we show here $K_0=\widetilde{K}_0=0$, which implies
that the purified source model and the ensemble model lead to the same compression
rate. For reducible sources the information that the encoder can obtain about the
classical variable of the ensemble, i.e. system $X$, is effectively used as
side information to achieve a smaller compression rate. Thus we reproduce the
result of \cite[Thm.~III.3]{Ahn2006}, which was proven only for irreducible
product state ensembles.
There are other sources for which we know $K_0=\widetilde{K}_0=0$ to hold. First,
the ``generic'' sources in \cite[Thm.~11]{ZK_cqSW_2018}, where it is shown that the
function $\widetilde{I}_0$ vanishes; this function is a special case of the function $\widetilde{K}_0$.
Indeed, the source there is described by an ensemble $\{p(x), \ket{\psi_x}^{AR}\ket{x}^B\}$,
which is always completely reducible, but generically the reduced states $\psi_x^A$ have
pairwise overlapping support, which is the condition under which
vanishing $\widetilde{K}_0$ is shown.
Secondly, the ensemble of four Bell states considered in \cite[Thm.~IV.1]{Ahn2006},
$\{ p(ij), \ket{\Phi_{ij}}^{AB} \}_{i,j=0,1}$,
where the side information system $C$ and the reference system $R$ are trivial;
for this source, the mutual information between Alice's system and the classical system
$X$ is zero, i.e. $I(A:X)=0$. Thus, due to data processing inequality, we have
$I(W:X) \leq I(A:X)=0$. Our main result reproduces the achievable rate
$\frac12 H(p)$, and also the optimality, by very different, and somewhat more
natural methods.
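The claim $I(A:X)=0$ for this ensemble is easy to verify numerically, since every Bell state has a maximally mixed marginal on $A$. A short numpy check (our illustration; the prior $p(ij)$ below is an arbitrary example):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def trace_out_B(rho):
    """Partial trace over the second qubit of a two-qubit density matrix."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

b = 1 / np.sqrt(2)
bells = [b * np.array([1.0, 0, 0, 1.0]), b * np.array([1.0, 0, 0, -1.0]),
         b * np.array([0, 1.0, 1.0, 0]), b * np.array([0, 1.0, -1.0, 0])]
p = np.array([0.4, 0.3, 0.2, 0.1])       # an arbitrary example prior p(ij)

rhoA_x = [trace_out_B(np.outer(v, v)) for v in bells]   # each equals I/2
rhoA = sum(px * r for px, r in zip(p, rhoA_x))

# Holevo-type mutual information I(A:X) = S(A) - sum_x p(x) S(psi_x^A)
I_AX = vn_entropy(rhoA) - sum(px * vn_entropy(r) for px, r in zip(p, rhoA_x))
assert abs(I_AX) < 1e-9

rate = 0.5 * float(-np.sum(p * np.log2(p)))   # the achievable rate (1/2) H(p)
```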
There are other special cases of the source model of this chapter that have been previously
studied in the literature for which $K_0 > 0$ or at least $\widetilde{K}_0 > 0$.
For instance, in the source of \cite{Devetak2003}, where Alice's system is classical
with $A=X$ and system $C$ is trivial, one can observe that $K_0 = S(X)$ holds.
The rate we get is $Q^* = \frac12 S(X|B)$ under either error criterion, half of the
quantity reported in \cite{Devetak2003} because of the free entanglement in our
model, which allows for dense coding.
Furthermore, in the visible variant of Schumacher compression \cite{Barnum1996,Winter1999},
where Alice's side information system is classical with $C=X$, the function takes the value
$K_0 = S(X)$, and the optimal rate is $Q^*=\frac12 S(A)$, again half of the optimal
rate without entanglement, because we can use remote state preparation and dense coding.
A third example is the ensemble $\{\frac13,\ket{\psi_i}^A\ket{\phi_i}^B\}_{i=1}^3$ from
\cite[Sec.~V.A]{Ahn2006}, which is reducible, but where the reduced ensembles
on systems $A$ and $B$ are both irreducible; it is shown there that the optimal
compression rate is strictly smaller than $(S(A)+S(A|B))/2$.
Finally, recall that in our definition of the compression task we have assumed
that the encoder and decoder share free entanglement. This was motivated by
the desire to make a smoother connection to QSR.
However, it is not known whether the pre-shared entanglement is always necessary to achieve
the corresponding quantum rates. There are certainly cases where QSR does not require prior
entanglement, such as when Alice's side information $C$ is trivial, which would carry
over to our setting whenever $K_0=\widetilde{K}_0=0$, for instance for an irreducible
ensemble.
More generally, in future work we plan to consider the trade-off between the quantum
and entanglement rates.
\chapter{Resource theory of charges and entropy}
\label{chap:resource theory}
In this chapter, we consider asymptotically many non-interacting systems with multiple conserved quantities or charges.
We generalize the seminal results of
Sparaciari, Oppenheim and Fritz [\emph{Phys. Rev. A} 96:052112, 2017]
to the case of multiple, in general non-commuting charges.
To this aim, we formulate a resource theory of the thermodynamics of such systems.
To any quantum state, we associate a vector with entries of the expected charge
values and entropy of that state. We call the set of all these vectors the
phase diagram of the system, and show that it characterizes the equivalence
classes of states under asymptotic unitary transformations that approximately
conserve the charges. This chapter is based on the results from \cite{thermo_ZBK_2020}.
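As a toy illustration of the phase diagram just described (our example, not from the text): for a single qubit with two non-commuting charges $A_1=\sigma_z$ and $A_2=\sigma_x$, each state $\rho$ is mapped to the point $(\langle\sigma_z\rangle, \langle\sigma_x\rangle, S(\rho))$:

```python
# Toy example: the point of the phase diagram associated with a qubit state,
# for the (non-commuting) charges sigma_z and sigma_x.
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def phase_point(rho):
    """(<A_1>, <A_2>, S(rho)): one point of the phase diagram."""
    return (float(np.trace(rho @ sz)), float(np.trace(rho @ sx)), vn_entropy(rho))

# Bloch vector (0.2, 0, 0.4): a valid (mixed) qubit state
rho = 0.5 * np.eye(2) + 0.2 * sz + 0.1 * sx
pt = phase_point(rho)   # (<sz>, <sx>, S) = (0.4, 0.2, S(rho))
```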
\section{Resource theory of charges and entropy}
\label{sec:resource-theory}
Resource theory is a rigorous mathematical framework initially developed to characterize the
role of entanglement in quantum information processing tasks. Later the framework was extended
to characterize coherence, non-locality, asymmetry and many more, including quantum Shannon theory itself, see \cite{bcp14,Winter-Dong-2016,c&g16,m&s16,V&S17,msz16,sap16,srb17,g&w19,cpv18,sha19,Vic14,d&a18,Devetak2008_1}.
The resource theory approach applies also to classical theories. In general, resource
theories share the following common features: (1) a well-defined set of resource-free states,
such that any state not belonging to this set carries a non-vanishing amount of resource;
(2) a well-defined set of resource-free operations, also known as allowed operations,
which cannot create or increase the resource in a state. These features allow one to quantify
the resources present in states or operations and to characterize their roles in the
transformations between states or between operations. In particular, they enable one to define
and rigorously bound, or even determine, various resource measures; to determine which states
can be transformed into others using allowed operations; and to establish how the properties
of states may change, and how these changes are bounded, under the allowed operations.
\medskip
A system in our resource theory is a quantum system $Q$ with a finite-dimensional Hilbert space
(denoted $Q$, too, without danger of confusion), together with a
Hamiltonian $H=A_1$ and other quantities (``charges'') $A_2, \ldots, A_c$, all of which are
Hermitian operators that do not necessarily commute with each other. We consider the composition of
$n$ non-interacting systems, where the Hilbert space of the \textit{composite} system $Q^n$ is
the tensor product $Q^{\otimes n} = Q_1 \otimes \cdots \otimes Q_n$ of the Hilbert spaces of
the \textit{individual} systems, and the $j$-th charge of the composite system is the sum of
charges of individual systems as follows,
\begin{equation}
A^{(n)}_j = \sum_{i=1}^{n} \openone^{\otimes (i-1)} \otimes A_j \otimes \openone^{\otimes (n-i)},
\quad j=1,2,\ldots,c.
\end{equation}
For ease of notation, we will write throughout
$A_j^{[Q_i]} = \openone^{\otimes (i-1)} \otimes A_j \otimes \openone^{\otimes (n-i)}$.
We wish to build a resource theory where the objects are states on a quantum system,
which are transformed under thermodynamically meaningful operations.
To any quantum state $\rho$ is assigned the point
$(\und{a},s) = (a_1,\ldots,a_c,s)
= \bigl( \Tr \rho A_1, \ldots, \Tr \rho A_c, S(\rho) \bigr) \in \mathbb{R}^{c+1}$,
which is an element of the \emph{phase diagram}; this diagram was originally introduced,
for $c=1$, as the energy-entropy diagram in \cite{Sparaciari2016}, where it is shown,
for a system with energy as the only conserved quantity, that the diagram is a convex set.
In the case of commuting multiple conserved quantities, the charge-entropy diagram has been
generalised and further investigated in \cite{brl19}.
Note that the set of all these vectors, denoted $\mathcal{P}^{(1)}$, is not in
general convex (unless the quantities commute pairwise).
An example is a qubit system with charges $\sigma_x$, $\sigma_y$ and $\sigma_z$: the
charge values $\tr \rho\sigma_i$ determine the state uniquely, and hence its entropy,
since the Bloch vector depends linearly on the state; as the von Neumann entropy is
well-known to be strictly concave, the resulting set of points cannot be convex.
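To make the non-convexity concrete, here is a small numerical sketch (Python with NumPy; the helper names and chosen states are ours, purely for illustration): the two pure states with Bloch vectors $\pm z$ both give entropy $0$, yet the only state with the averaged charge values $(0,0,0)$ is the maximally mixed state, whose entropy is $\ln 2$; hence the midpoint of their two points does not lie in $\mathcal{P}^{(1)}$.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr[rho ln rho] (natural log)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum()) + 0.0  # + 0.0 normalises -0.0

# The three Pauli charges A_1 = sigma_x, A_2 = sigma_y, A_3 = sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
charges = [sx, sy, sz]

def point(rho):
    """The point (a_1, a_2, a_3, s) in R^4 associated with the state rho."""
    return [float(np.real(np.trace(rho @ A))) for A in charges] + [entropy(rho)]

up = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|, Bloch vector (0,0,+1)
down = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|, Bloch vector (0,0,-1)

p_up, p_down = point(up), point(down)
midpoint = [(x + y) / 2 for x, y in zip(p_up, p_down)]

# The only state with charges (0,0,0) is the maximally mixed state I/2,
# whose entropy is ln 2 > 0; so the midpoint (0,0,0,0) is NOT in P^(1).
mixed = np.eye(2) / 2
print(midpoint)        # [0.0, 0.0, 0.0, 0.0]
print(point(mixed))    # [0.0, 0.0, 0.0, 0.6931...]
```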
Moreover, the set of these points for a composite system with charges $A_1^{(n)}, \ldots, A_c^{(n)}$,
which we denote $\mathcal{P}^{(n)}$, contains, but is not necessarily equal to, $n\mathcal{P}^{(1)}$
(equality does hold for commuting charges). Namely, consider the point
$g=\left(\frac{1}{2}\Tr (\rho_1+\rho_2) A_1, \ldots, \frac{1}{2}\Tr (\rho_1+\rho_2) A_c,
\frac{1}{2}S(\rho_1)+\frac{1}{2}S(\rho_2)\right)$, which does not necessarily
belong to $\mathcal{P}^{(1)}$ but belongs to its convex hull;
however, $2g \in \mathcal{P}^{(2)}$ due to the state $\rho_1 \otimes \rho_2$.
Therefore, we consider the convex hull of the set $\mathcal{P}^{(1)}$ and call it the
\emph{phase diagram} of the system, denoted
\begin{equation}
\overline{\mathcal{P}}
\equiv \overline{\mathcal{P}}^{(1)}
:= \left\{ \left(\sum_i p_i \Tr \rho_i A_1, \ldots, \sum_i p_i\Tr \rho_i A_c, \sum_i p_i S(\rho_i)\right) :
0 \leq p_i \leq 1,\,\sum_i p_i = 1 \right\}.
\end{equation}
The interpretation is that the objects of our resource theory are ensembles of
states $\{p_i,\rho_i\}$, rather than single states.
We define the \emph{zero-entropy diagram} and \emph{max-entropy diagram},
respectively, as the sets
\begin{align*}
\overline{\mathcal{P}}_0^{(1)}
&= \{(\und{a},0): \Tr \rho A_j = a_j \text{ for a state } \rho \}, \\
\overline{\mathcal{P}}_{\max}^{(1)}
&= \left\{\bigl(\und{a},S(\tau(\und{a}))\bigr): \Tr \rho A_j = a_j \text{ for a state } \rho \right\},
\end{align*}
where $\tau(\und{a})$ is the unique state maximising the entropy among all states
with charge values $\Tr \rho A_j = a_j$ for all $j$; it is called the generalized
thermal state, generalized Gibbs state, or also generalized grand canonical state \cite{Liu2007}.
Note that, as a linear image of the compact convex set of states, the zero-entropy diagram is
compact and convex.
We similarly define the set $\mathcal{P}^{(n)}$, the phase diagram $\overline{\mathcal{P}}^{(n)}$,
zero-entropy diagram $\overline{\mathcal{P}}_0^{(n)}$ and max-entropy diagram
$\overline{\mathcal{P}}_{\max}^{(n)}$
for the composition of $n$ systems with charges $A_1^{(n)}, \ldots, A_c^{(n)}$.
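For the simplest case $c=1$, with $A_1=\sigma_z$ on a qubit, these diagrams can be computed explicitly. The following sketch (Python with NumPy; the function names are illustrative assumptions) traces the max-entropy boundary $a \mapsto S(\tau(a))$, checks its concavity, and exhibits the convex-combination structure of the diagram.

```python
import numpy as np

def S_tau(a):
    """Entropy (in nats) of the generalized thermal state tau(a) for the
    single charge A_1 = sigma_z with <sigma_z> = a:
    tau(a) = diag((1+a)/2, (1-a)/2)."""
    h = 0.0
    for q in ((1 + a) / 2, (1 - a) / 2):
        if q > 0:
            h -= q * np.log(q)
    return h

a_grid = np.linspace(-1.0, 1.0, 201)
max_curve = np.array([S_tau(a) for a in a_grid])  # max-entropy diagram
# the zero-entropy diagram is the segment {(a, 0) : -1 <= a <= 1}

# The upper boundary a -> S(tau(a)) is concave:
concave = np.all(max_curve[1:-1] >= 0.5 * (max_curve[:-2] + max_curve[2:]))
print(concave)   # True

# Any point (a, s) with 0 <= s <= S_tau(a) lies in the phase diagram: it is
# the convex combination lam*(a, S_tau(a)) + (1-lam)*(a, 0), lam = s/S_tau(a)
a, s = 0.3, 0.2
lam = s / S_tau(a)
print(0.0 <= lam <= 1.0)   # True
```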
\begin{figure}[!t]
\includegraphics[width=1\textwidth]{phase-diagrams.jpg}
\caption{Schematic of the phase diagrams $\mathcal{P}^{(1)}$, $\mathcal{P}^{(2)}$
and $\overline{\mathcal{P}}$. As seen, $\mathcal{P}^{(1)}$ is not convex, and there is a hole inside the diagram.}
\label{fig:phase-diagrams}
\end{figure}
\begin{lemma}
\label{lemma:phase diagram properties}
For individual and composite systems with charges $A_j$ and $A^{(n)}_j$, respectively,
we have:
\begin{enumerate}
\item $\overline{\mathcal{P}}^{(n)}$, for $n \geq 1$, is a compact and convex
subset of $\mathbb{R}^{c+1}$.
\item $\overline{\mathcal{P}}^{(n)}$, for $n \geq 1$, is the convex hull of the union
$\overline{\mathcal{P}}_{0}^{(n)} \cup \overline{\mathcal{P}}_{\max}^{(n)}$,
of the zero-entropy diagram and the max-entropy diagram.
\item $\overline{\mathcal{P}}^{(n)} = n \overline{\mathcal{P}}^{(1)}$ for all $n \geq 1$.
\item $\mathcal{P}^{(n)}$ is convex for all $n \geq 2$, and indeed
$\mathcal{P}^{(n)} = \overline{\mathcal{P}}^{(n)} = n \overline{\mathcal{P}}^{(1)}$.
\item Every point of $\mathcal{P}^{(n)}$ is realised by a suitable tensor product state
$\rho_1 \otimes \cdots \otimes \rho_n$, for all $n \geq d$.
\item All points $\bigl(\und{a},S(\tau(\und{a}))\bigr) \in \overline{\mathcal{P}}_{\max}$
are extreme points of $\overline{\mathcal{P}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. The phase diagram is convex by definition. Further, $\Tr \rho A_j^{(n)}$ and $S(\rho)$
are continuous functions defined on the set of quantum states which is a compact set;
hence, the set $\mathcal{P}^{(n)}$ is also a compact set. The convex hull of a
finite-dimensional compact set is compact, so the phase diagram is a compact set.
2. Any point in the phase diagram according to the definition is a convex combination of the form
\[(a_1,\ldots,a_c,s)
= \left(\sum_i p_i \Tr(\rho_i A_1), \ldots, \sum_i p_i\Tr(\rho_i A_c),\sum_i p_i S(\rho_i)\right).\]
The point $(a_1,\ldots,a_c,0)$ belongs to $\overline{\mathcal{P}}_{0}^{(1)}$
because the state $\rho=\sum_i p_i \rho_i$ has charge values $a_1, \ldots, a_c$.
Moreover, the state with charge values $a_1, \ldots, a_c$ of maximum entropy is
the generalized thermal state $\tau(\und{a})$, so we have
\begin{align*}
S(\tau(\und{a})) \geq S(\rho) \geq \sum_i p_i S(\rho_i),
\end{align*}
where the second inequality is due to concavity of the entropy. Therefore, any point
$(\und{a},s)$ can be written as a convex combination of the points $(\und{a},0)$ and
$(\und{a},S(\tau(\und{a})))$.
3. Due to item 2, it is enough to show that
$\overline{\mathcal{P}}_{0}^{(n)} = n\overline{\mathcal{P}}_{0}^{(1)}$, and
$\overline{\mathcal{P}}_{\max}^{(n)}= n\overline{\mathcal{P}}_{\max}^{(1)}$.
The former follows from the definition. The latter is due to the fact that the
thermal state for a composite system is the tensor power of the thermal state of
the individual system.
4. Let $\tau(\und{a}) = \sum_i p_i \ketbra{i}{i}$ be the diagonalization of the
generalized thermal state.
For $n \geq 2$, define $\ket{v} = \sum_i \sqrt{p_i} \ket{i}^{\ox n}$. Obviously, the charge
values of the states $\tau(\und{a})^{\ox n}$ and $\ketbra{v}{v}$ are the same, since they
have the same reduced states on the individual systems;
thus, there is a pure state for any point in the zero-entropy diagram of the composite system.
Now, consider the state $\lambda \ketbra{v}{v}+(1-\lambda)\tau(\und{a})^{\ox n}$, which has the
same charge values as $\tau(\und{a})^{\ox n}$ and $\ketbra{v}{v}$.
The entropy $S\bigl(\lambda \ketbra{v}{v}+(1-\lambda)\tau(\und{a})^{\ox n}\bigr)$ is a continuous function
of $\lambda$; hence, for any value $s$ between $0$ and $S(\tau(\und{a})^{\ox n})$, there is a state
with the given values and entropy $s$.
5. For $n\geq d$, it is elementary to see that any state $\rho$ can be decomposed
into a uniform convex combination of $n$ pure states, i.e.
$\rho=\frac{1}{n} \sum_{i=1}^n \ketbra{\psi_i}{\psi_i}$.
Observe that the state $\psi^{n} = \ketbra{\psi_1}{\psi_1} \ox \cdots \ox \ketbra{\psi_n}{\psi_n}$ has the same
charge values as the state $\rho^{\otimes n}$, but as it is pure it has entropy $0$.
Further, consider the thermal state $\tau$ with the same charge values as $\rho$, but
the maximum entropy consistent with them. Now let
$\rho_i := \lambda \ketbra{\psi_i}{\psi_i}+(1-\lambda)\tau$, and
observe that $\rho_\lambda^{n} = \rho_1 \ox \cdots \ox \rho_n$ has the same charge values as
$\psi^{n}$, $\rho^{\otimes n}$ and $\tau^{\ox n}$.
Since the entropy $S(\rho_\lambda^{n})$ is a continuous function of $\lambda$,
thus interpolating smoothly between $0$ and $n S(\tau)$, there is a
tensor product state with the same given charge values and prescribed
entropy $s$ in the said interval.
6. This follows from the strict concavity of the von Neumann entropy $S(\rho)$ as a
function of the state, which imparts strict concavity on $\und{a} \mapsto S(\tau(\und{a}))$.
Indeed, suppose $\bigl(\und{a},S(\tau(\und{a}))\bigr) = \lambda (\und{a}_1,s_1) + (1-\lambda)(\und{a}_2,s_2)$
with $0<\lambda<1$ and points $(\und{a}_i,s_i) \in \overline{\mathcal{P}}$.
If $\und{a}_1 \neq \und{a}_2$, then
\[
S(\tau(\und{a})) = \lambda s_1 + (1-\lambda) s_2
\leq \lambda S(\tau(\und{a}_1)) + (1-\lambda) S(\tau(\und{a}_2))
< S(\tau(\und{a})),
\]
a contradiction; if $\und{a}_1 = \und{a}_2 = \und{a}$, then necessarily
$s_1 = s_2 = S(\tau(\und{a}))$, so the combination is trivial.
\end{proof}
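The interpolation arguments of items 4 and 5 can be checked numerically. The following sketch (Python with NumPy; the chosen state and helper names are illustrative assumptions, with a generic full-rank qubit state standing in for $\tau(\und{a})$) builds the purification-like state $\ket{v}$ of item 4 for $n=2$, verifies that it has the same reduced states, and hence the same charge values, as $\tau^{\otimes 2}$, and bisects the interpolation parameter $\lambda$ to hit a prescribed entropy.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in nats."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log(evals)).sum()) + 0.0  # + 0.0 normalises -0.0

# A full-rank qubit state standing in for the generalized thermal state tau(a)
p = np.array([0.7, 0.3])
tau = np.diag(p)

# |v> = sum_i sqrt(p_i) |i>^{(x) n} for n = 2 copies (item 4)
v = np.zeros(4)
v[0], v[3] = np.sqrt(p[0]), np.sqrt(p[1])  # |00> and |11> components
pure = np.outer(v, v)                      # entropy 0
tau2 = np.kron(tau, tau)                   # entropy 2 S(tau)

# |v><v| has the same single-copy reduced states as tau (x) tau, hence the
# same value of every charge A^{(2)} = A (x) I + I (x) A
red = pure.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out copy 2
print(np.allclose(red, tau))   # True

# Interpolation: S(lam*pure + (1-lam)*tau2) runs continuously from S(tau2)
# down to 0, so every intermediate entropy is attained; bisect for a target.
def S_of(lam):
    return entropy(lam * pure + (1 - lam) * tau2)

target = 0.5 * entropy(tau2)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if S_of(mid) > target else (lo, mid)
print(abs(S_of(lo) - target) < 1e-6)   # True
```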
\medskip
The penultimate point of Lemma \ref{lemma:phase diagram properties} motivates us to define
a resource theory where the objects are sequences of states on composite systems
of $n\rightarrow\infty$ parts.
Inspired by \cite{Sparaciari2016}, the allowed operations in this resource theory
are those that respect basic principles of physics, namely entropy and charge
conservation. We point out right away that ``physics'' in the present context
does not necessarily refer to the fundamental physical laws of nature, but to
any rule that the system under consideration obeys.
It is well-known that the quantum operations preserving entropy for all states are
the unitaries. The class of unitaries that conserve the charges of a system is precisely
the class of those that commute with all charges of that system.
However, it turns out that these constraints are too strong if imposed literally,
when many charges are to be conserved, as it could easily happen that only
trivial unitaries are allowed.
Our way out is to consider the thermodynamic limit and at the same time relax the
allowed operations to approximately entropy and charge conserving ones.
As for the former, we couple the composite system to an ancillary system with corresponding
Hilbert space $\mathcal{K}$ of dimension $2^{o(n)}$; restricting the dimension of
the ancilla ensures that the \emph{average} entropy of an individual system, that is,
the entropy of the composite system divided by $n$, does not change in the limit of large $n$.
Moreover, as for charge conservation, we consider unitaries that preserve the average
charges of an individual system, and we allow unitaries that are \emph{almost} commuting
with the total charges of the composite system and the ancilla. The precise definition
goes as follows:
\begin{definition}
\label{almost-commuting unitaries}
A unitary operation $U$ acting on a composite system coupled to an ancillary system with
Hilbert spaces $\mathcal{H}^{\otimes n}$ and $\mathcal{K}$ of dimension $2^{o(n)}$, respectively,
is called an \emph{almost-commuting unitary} with the total charges of a composite system and an
ancillary system if the operator norm of the normalised commutator for all total charges
vanishes asymptotically for large $n$:
\begin{align*}
& \lim_{n \to \infty} \frac{1}{n} \norm{ [U,A_j^{(n)}+A_j'] }_{\infty} \\
& \qquad = \lim_{n \to \infty} \frac{1}{n} \norm{ U (A_j^{(n)}+A_j') - (A_j^{(n)}+A_j') U }_{\infty}
= 0,
\qquad j=1,\ldots,c,
\end{align*}
where $A_j^{(n)}$ and $A_j'$ are, respectively, the charges of the composite system
and the ancilla, such that $\norm{A_j'}_{\infty} = o(n)$.
\end{definition}
We stress that the definition of almost-commuting unitaries automatically implies that
the ancillary system has a relatively small dimension, and charges with small operator
norm compared to those of the composite system.
The first step in the development of our resource theory is a precise characterisation
of which transformations between sequences of product states are possible using
almost-commuting unitaries.
To do so, we define \textit{asymptotically equivalent} states as follows:
\begin{definition}
\label{Asymptotic equivalence definition}
Two sequences of product states $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ and
$\sigma^n=\sigma_1 \otimes \cdots \otimes \sigma_n$ of a composite system with
charges $A_j^{(n)}$ for $j=1,\ldots,c$, are called \emph{asymptotically equivalent} if
\begin{align*}
\lim_{n \to \infty} \frac{1}{n} \abs{S(\rho^n) - S(\sigma^n)} &= 0, \\
\lim_{n \to \infty} \frac{1}{n} \abs{\Tr \rho^n A_j^{(n)} - \Tr \sigma^n A_j^{(n)}} &= 0 \text{ for } j=1,\ldots,c.
\end{align*}
In other words, two sequences of product states are considered equivalent if their
associated points in the normalised phase diagrams $\frac1n \mathcal{P}^{(n)}$ differ
by a sequence converging to $0$.
\end{definition}
The \emph{asymptotic equivalence theorem} of \cite{Sparaciari2016} characterizes the
feasible state transformations via \textit{exactly} commuting unitaries for systems in
which energy is the only conserved quantity, showing that feasibility is precisely
captured by asymptotic equivalence.
We prove an extension of this theorem to systems with multiple conserved quantities,
by allowing almost-commuting unitaries.
\begin{theorem}[Asymptotic (approximate) Equivalence Theorem]
\label{Asymptotic equivalence theorem}
Let $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ and
$\sigma^n=\sigma_1 \otimes \cdots \otimes \sigma_n$
be two sequences of product states of a composite system with
charges $A_j^{(n)}$ for $j=1,\ldots,c$.
These two states are asymptotically equivalent if and only if
there exists an ancillary quantum system with corresponding Hilbert space $\mathcal{K}$
of dimension $2^{o(n)}$ and an almost-commuting unitary $U$ acting on
$\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ such that
\begin{align*}
\lim_{n \to \infty} \norm{U (\rho^n \otimes \omega') U^{\dagger} - \sigma^n \otimes \omega}_1 = 0,
\end{align*}
where $\omega$ and $\omega'$ are states of the ancillary system, and charges of the ancillary
system are trivial, $A_j' = 0$.
\end{theorem}
\medskip
The \emph{proof} of this theorem is given in Section~\ref{proof-AET}, as it relies on a
number of technical lemmas, among them a novel construction of approximately
microcanonical subspaces (Section~\ref{section: Approximate microcanonical (a.m.c.) subspace}).
\medskip
By grouping the $Q$-systems into blocks of $k$, we do not of course change the
physics of our system, except that now in the asymptotic limit we only consider
$n = k\nu$ copies of $Q$; this is no restriction, since the state $\rho^n$ is
asymptotically equivalent to $\rho^{n+O(1)}$ via almost-commuting unitaries,
according to Definition \ref{almost-commuting unitaries}
and Theorem \ref{Asymptotic equivalence theorem}.
But now we consider $Q^k$ with its charge observables $A_j^{(k)}$ as elementary
systems, which have many more states than the $k$-fold product states we began with.
Yet, Lemma \ref{lemma:phase diagram properties} shows that the phase diagram for
the $k$-copy system is simply the rescaled single-copy phase diagram,
$\overline{\mathcal{P}}^{(k)} = k \overline{\mathcal{P}}^{(1)}$, and indeed
for $k\geq d$, $\mathcal{P}^{(k)} = k \overline{\mathcal{P}}^{(1)}$. This means
that we can extend the equivalence relation of asymptotic equivalence and the
concomitant Asymptotic Equivalence Theorem (AET) \ref{Asymptotic equivalence theorem}
to any sequences of states that factor into product states of blocks $Q^k$, for
any integer $k$, which freedom we shall exploit in our treatment of thermodynamics.
\section{Approximate microcanonical (a.m.c.) subspace}
\label{section: Approximate microcanonical (a.m.c.) subspace}
In this section, we recall the definition of an
approximate microcanonical (a.m.c.) subspace and give a new proof that it exists
for certain explicitly given parameters.
For given charges $A_j$ and average values $v_j$, an a.m.c. subspace is essentially a \textit{common} subspace for the spectral projectors of the $A_j^{(n)}$ with corresponding values close to $n v_j$; that is, a subspace onto which a state projects with high probability if and only if it projects with high probability onto those spectral projectors of the charges. We show in Theorem~\ref{thm:symmetric-micro-exists} that for large enough $n$ such a subspace exists.
An interesting property of an a.m.c. subspace is that any unitary acting on this subspace
is an almost-commuting unitary with charges $A_j^{(n)}$.
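This property can be illustrated numerically in the simplest, commuting case. The sketch below (Python with NumPy; the construction is our own toy example, not the a.m.c. construction of this section) takes the diagonal charge $A=\sigma_z$ and builds a random unitary supported on a spectral window of $A^{(n)}$ of width $w$; its commutator with $A^{(n)}$ then has operator norm at most $w$, so the normalised commutator norm is at most $w/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
sz_diag = np.array([1.0, -1.0])

# Eigenvalues of the total charge A^{(n)} = sum_i sigma_z^{[Q_i]} (diagonal,
# so we can list them directly over the 2^n computational basis states)
diag = np.zeros(2**n)
for i in range(n):
    diag += np.kron(np.kron(np.ones(2**i), sz_diag), np.ones(2**(n - i - 1)))
A = np.diag(diag)

# Subspace spanned by eigenvectors with eigenvalues in a window of width w
w = 4.0
idx = np.where(np.abs(diag) <= w / 2)[0]
k = len(idx)

# A random unitary acting only on that subspace, identity elsewhere
X = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
Q, _ = np.linalg.qr(X)
U = np.eye(2**n, dtype=complex)
U[np.ix_(idx, idx)] = Q

# Since the window has width w, ||[U, A^{(n)}]|| <= w, so the normalised
# commutator norm ||[U, A^{(n)}]|| / n is at most w/n
comm_norm = np.linalg.norm(U @ A - A @ U, ord=2)  # operator norm
print(comm_norm <= w + 1e-9)   # True
```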
\begin{definition}
\label{defi:microcanonical}
An \emph{approximate microcanonical (a.m.c.) subspace}, or more precisely
a \emph{$(\epsilon,\eta,\eta',\delta,\delta')$-approximate microcanonical subspace},
$\cM$ of $\cH^{\ox n}$, with projector $P$,
for charges $A_j$ and values $v_j = \langle A_j \rangle$
is one that consists, in a certain precise sense, of exactly the
states with ``very sharp'' values of all the $A_j^{(n)}$.
Mathematically, the following has to hold:
\begin{enumerate}
\item Every state $\omega$ with support contained in $\cM$ satisfies
$\tr \omega\Pi^\eta_j \geq 1-\delta$ for
all $j$.
\item Conversely, every state $\omega$ on $\cH^{\ox n}$ such that
$\tr \omega\Pi^{\eta'}_j \geq 1-\delta'$ for
all $j$, satisfies $\tr\omega P \geq 1-\epsilon$.
\end{enumerate}
Here, $\Pi^\eta_j := \bigl\{ nv_j-n\eta\Sigma(A_j) \leq A_j^{(n)} \leq nv_j+n\eta\Sigma(A_j) \bigr\}$
is the spectral projector of $A_j^{(n)}$ of values close to $n v_j$,
and $\Sigma(A) = \lambda_{\max}(A)-\lambda_{\min}(A)$ is the spectral diameter
of the Hermitian $A$, i.e.~the diameter of the smallest disc
covering the spectrum of $A$.
\end{definition}
\medskip
\begin{remark}
It is shown in Theorem~3 of~\cite{Halpern2016} that for every $\epsilon > c\delta' > 0$,
$\delta > 0$ and $\eta > \eta' > 0$, and for all sufficiently
large $n$, there exists a nontrivial
$(\epsilon,\eta,\eta',\delta,\delta')$-a.m.c.~subspace.
However, there are two (related) reasons why one might not be completely
satisfied with the argument in~\cite{Halpern2016}: First, the proof uses
a difficult result of Ogata~\cite{Ogata} to reduce the non-commuting
case to the seemingly easier one of commuting observables; while this
is conceptually nice, it makes it harder to perceive the nature
of the constructed subspace. Secondly, despite the fact that the
defining properties of an a.m.c.~subspace are manifestly permutation symmetric
(w.r.t.~permutations of the $n$ subsystems), the resulting construction
does not have this property.
Here we address both these concerns. Indeed, we shall show by
essentially elementary means how to obtain an a.m.c.~subspace
that is by its definition permutation symmetric.
\end{remark}
\begin{theorem}
\label{thm:symmetric-micro-exists}
Under the previous assumptions, for every $\epsilon > 2(n+1)^{3d^2}\delta' > 0$,
$\eta >\eta' > 0$ and $\delta>0$, for all sufficiently large $n$
there exists an approximate microcanonical subspace projector.
In addition, the subspace can be chosen to be stable under permutations
of the $n$ systems: $U^\pi\cM = \cM$, or equivalently $U^\pi P (U^\pi)^\dagger = P$,
for any permutation $\pi\in S_n$ and its unitary action $U^\pi$.
More precisely, given $\eta > \eta' > 0$ and $\epsilon > 0$, there exists an $\alpha > 0$ such
that there is a non-trivial $(\epsilon,\eta,\eta',\delta,\delta')$-a.m.c. subspace
with
\begin{align*}
\delta &= (c+3)(5n)^{5d^2} e^{-\alpha n} \text{ and } \\
\delta' &= \frac{\epsilon}{2(n+1)^{3d^2}} - (c+3)(5n)^{2d^2} e^{-\alpha n}.
\end{align*}
Furthermore, we may choose $\alpha = \frac{(\eta-\eta')^2}{8c^2(d+1)^2}$.
\end{theorem}
\begin{proof}
For $s>0$, partition the state space ${\mathcal{S}}(\cH)$ on $\cH$ into
\begin{align*}
{\mathcal{C}}_s(\underline{v}) &= \bigl\{ \sigma : \forall j\ |\tr\sigma A_j - v_j| \leq s\Sigma(A_j) \bigr\}, \\
\cF_s(\underline{v}) &= \bigl\{ \sigma : \exists j\ |\tr\sigma A_j - v_j| > s\Sigma(A_j) \bigr\}
= {\mathcal{S}}(\cH) \setminus {\mathcal{C}}_s(\underline{v}),
\end{align*}
which are the sets of states with $A_j$-expectation values ``close''
to and ``far'' from $\underline{v}$.
Note that if $\rho \in {\mathcal{C}}_s(\underline{v})$ and
$\sigma \in \cF_t(\underline{v})$, $0 < s < t$, then $\|\rho-\sigma\|_1 \geq t-s$.
Choosing the precise values of $s>\eta'$ and $t<\eta$ later,
we pick a universal distinguisher $(P,P^\perp)$
between ${\mathcal{C}}_s(\underline{v})^{\ox n}$ and $\cF_t(\underline{v})^{\ox n}$,
according to Lemma~\ref{lemma:universal-test} below:
\begin{align}
\label{eq:s-close}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} P^\perp &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}, \\
\label{eq:t-far}
\forall \sigma\in\cF_t(\underline{v})\ \tr\sigma^{\ox n} P &\leq (c+2)(5n)^{2d^2} e^{-\zeta n},
\end{align}
with $\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
Our a.m.c.~subspace will be $\cM := \operatorname{supp} P$;
by Lemma~\ref{lemma:universal-test}, $P$ and likewise $\cM$ are permutation symmetric.
It remains to check the properties of the definition. First, let $\omega$
be supported on $\cM$. Since we are interested in $\tr\omega \Pi_j^\eta$, we may
without loss of generality assume that $\omega$ is permutation symmetric.
Thus, by the ``constrained de Finetti reduction'' (aka ``Postselection
Lemma'')~\cite[Lemma~18]{Duan2016},
\begin{align}
\label{eq:dF}
\omega \leq (n+1)^{3d^2} \int {\rm d}\sigma\,\sigma^{\ox n} F(\omega,\sigma^{\ox n})^2,
\end{align}
with a certain universal probability measure ${\rm d}\sigma$ on ${\mathcal{S}}(\cH)$,
and the fidelity $F(\rho,\sigma) = \|\sqrt{\rho}\sqrt{\sigma}\|_1$ between
states. We need the monotonicity of the fidelity under cptp maps, which we
apply to the test $(P,P^\perp)$:
\[
F(\omega,\sigma^{\ox n})^2 \leq F\bigl( (\tr\sigma^{\ox n}P,1-\tr\sigma^{\ox n}P),(1,0) \bigr)^2
\leq \tr\sigma^{\ox n}P,
\]
which holds because $\tr\omega P = 1$.
Thus,
\begin{align}
\label{eq:dF-P}
\tr\omega(\Pi_j^\eta)^\perp \leq (n+1)^{3d^2} \int {\rm d}\sigma\,\bigl(\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp\bigr)
(\tr\sigma^{\ox n}P).
\end{align}
Now we split the integral on the right hand side of Eq.~(\ref{eq:dF-P}) into two parts,
according to $\sigma\in\cF_t(\underline{v})$ and $\sigma\in{\mathcal{C}}_t(\underline{v})$:
If $\sigma\in\cF_t(\underline{v})$, then by Eq.~(\ref{eq:t-far})
we have
\[
\tr\sigma^{\ox n}P \leq (c+2)(5n)^{2d^2} e^{-\zeta n}.
\]
On the other hand, if $\sigma\in{\mathcal{C}}_t(\underline{v})$, then because of $t < \eta$ we have
\[
\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp \leq 2 e^{-2(\eta-t)^2 n},
\]
which follows from Hoeffding's inequality~\cite{DemboZeitouni}:
Indeed, let $Z_\ell$ be the i.i.d.~random variables obtained by the
measurement of $A_j$ on the state $\sigma$. They take values in
the interval $[\lambda_{\min}(A_j),\lambda_{\max}(A_j)]$, their expectation
values satisfy $\EE Z_\ell = \tr\sigma A_j \in [v_j \pm t\Sigma(A_j)]$, while
\[\begin{split}
\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp &= \Pr\left\{\frac1n\sum_\ell Z_\ell \not\in[v_j \pm \eta\Sigma(A_j)]\right\} \\
&\leq \Pr\left\{\frac1n\sum_\ell Z_\ell \not\in[\tr\sigma A_j \pm (\eta-t)\Sigma(A_j)]\right\},
\end{split}\]
so Hoeffding's inequality applies.
All taken together, we have
\[\begin{split}
\tr\omega(\Pi_j^\eta)^\perp
&\leq (n+1)^{3d^2} \left( (c+2)(5n)^{2d^2} e^{-\zeta n} + 2 e^{-2(\eta-t)^2 n} \right) \\
&\leq (c+3)(5n)^{5d^2} e^{-2(\eta-t)^2 n},
\end{split}\]
because we can choose $t$ such that
\begin{equation}
\label{eq:t}
\eta-t = \frac{t-s}{2c\sqrt{2d^2+1}} \geq \frac{t-s}{4cd}.
\end{equation}
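Explicitly, the choice in Eq.~(\ref{eq:t}) makes the two exponential decay rates coincide,
\[
\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)} = 2\left(\frac{t-s}{2c\sqrt{2d^2+1}}\right)^2 = 2(\eta-t)^2,
\]
and the prefactors combine using $(n+1)^{3d^2} \leq (5n)^{3d^2}$, whence $(n+1)^{3d^2}(5n)^{2d^2} \leq (5n)^{5d^2}$.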
Secondly, let $\omega$ be such that $\tr\omega \Pi_j^\eta \geq 1-\delta'$;
as we are interested in $\tr\omega P$, we may again assume
without loss of generality that $\omega$ is permutation symmetric,
and invoke the constrained de Finetti reduction~\cite[Lemma~18]{Duan2016},
Eq.~(\ref{eq:dF}).
From that we get, much as before,
\[
\tr\omega P^\perp \leq (n+1)^{3d^2} \int {\rm d}\sigma\, (\tr\sigma^{\ox n}P^\perp) F(\omega,\sigma^{\ox n})^2,
\]
and we split the integral on the right hand side into two parts,
depending on $\sigma\in\cF_s(\underline{v})$ or
$\sigma\in{\mathcal{C}}_s(\underline{v})$: In the latter case,
$\tr\sigma^{\ox n}P^\perp \leq (c+2)(5n)^{2d^2} e^{-\zeta n}$,
by Eq.~(\ref{eq:s-close}). In the former case, there exists a $j$ such
that $\tr\sigma A_j = w_j \not\in [v_j \pm s\Sigma(A_j)]$, and so
\[\begin{split}
F(\omega,\sigma^{\ox n})^2
&\leq F\bigl( (1-\delta',\delta'), (\tr\sigma^{\ox n}\Pi_j^{\eta'},1-\tr\sigma^{\ox n}\Pi_j^{\eta'}) \bigr)^2 \\
&\leq \left( \sqrt{\delta'} + \sqrt{\tr\sigma^{\ox n}\Pi_j^{\eta'}} \right)^2 \\
&\leq 2 \delta' + 2 \tr\sigma^{\ox n}\Pi_j^{\eta'} \\
&\leq 2 \delta' + 4 e^{-2(s-\eta')^2 n},
\end{split}\]
the last line again by Hoeffding's inequality; indeed, with the previous notation,
\[\begin{split}
\tr\sigma^{\ox n}\Pi_j^{\eta'} &= \Pr\left\{ \frac1n \sum_\ell Z_\ell \in[v_j \pm \eta'\Sigma(A_j)] \right\} \\
&\leq \Pr\left\{ \frac1n \sum_\ell Z_\ell \not\in[w_j \pm (s-\eta')\Sigma(A_j)] \right\}.
\end{split}\]
All taken together, we get
\[\begin{split}
\tr\omega P^\perp
&\leq (n+1)^{3d^2} \left( (c+2)(5n)^{2d^2} e^{-\zeta n} + 4 e^{-2(s-\eta')^2 n} + 2 \delta' \right) \\
&\leq (n+1)^{3d^2} (c+3)(5n)^{2d^2} e^{-2(s-\eta')^2 n} + 2(n+1)^{3d^2}\delta',
\end{split}\]
because we can choose $s$ such that
\begin{equation}
\label{eq:s}
s-\eta' = \frac{t-s}{2c\sqrt{2d^2+1}} \geq \frac{t-s}{4cd}.
\end{equation}
From eqs.~(\ref{eq:t}) and (\ref{eq:s}) we get by summation
\[
\eta-\eta' = t-s + \frac{t-s}{c\sqrt{2d^2+1}} \leq (t-s)\left( 1+\frac{1}{cd} \right),
\]
from which we obtain
\[
s-\eta' = \eta-t \geq \frac{\eta-\eta'}{4c(d+1)},
\]
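In detail, Eq.~(\ref{eq:t}) together with $t-s \geq \frac{cd}{cd+1}(\eta-\eta')$ gives
\[
\eta-t = \frac{t-s}{2c\sqrt{2d^2+1}}
\geq \frac{cd\,(\eta-\eta')}{2c\sqrt{2d^2+1}\,(cd+1)}
\geq \frac{cd\,(\eta-\eta')}{4c^2 d\,(d+1)}
= \frac{\eta-\eta'}{4c(d+1)},
\]
using $\sqrt{2d^2+1} \leq 2d$ and $cd+1 \leq c(d+1)$ in the second inequality; in particular, $2(\eta-t)^2 \geq \frac{(\eta-\eta')^2}{8c^2(d+1)^2} = \alpha$, as claimed.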
concluding the proof.
\end{proof}
\medskip
\begin{lemma}
\label{lemma:universal-test}
For all $0 < s < t$ there exists $\zeta > 0$, such that
for all $n$ there exists a permutation symmetric
projector $P$ on $\cH^{\ox n}$ with the properties
\begin{align}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} P^\perp &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}, \\
\forall \sigma\in\cF_t(\underline{v})\ \tr\sigma^{\ox n} P &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}.
\end{align}
The constant $\zeta$ may be chosen as
$\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
\end{lemma}
\begin{proof}
We start by showing that there is a POVM $(M,\openone-M)$ with
\begin{align}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} (\openone-M) &\leq c e^{-\frac{(t-s)^2}{2c^2}n}, \\
\forall \sigma\in\cF_t(\underline{v})\ \ \qquad \tr\sigma^{\ox n} M &\leq e^{-\frac{(t-s)^2}{2c^2}n}.
\end{align}
Namely, for each $\ell=1,\ldots,n$ choose $j_\ell \in \{1,\ldots,c\}$ uniformly
at random and measure $A_{j_\ell}$ on the $\ell$-th system. Denote the outcome
by the random variable $Z_\ell^{j_{\ell}}$ and let $Z_\ell^j = 0$ for $j\neq j_\ell$.
Thus, for all $j$, the random variables $Z_\ell^j$ are i.i.d.~with
mean $\EE Z_\ell^j = \frac{1}{c}\tr\rho A_j$, if the measured state is $\rho^{\ox n}$.
Outcome $M$ corresponds to the event
\[
\forall j\ \frac1n \sum_\ell Z_\ell^j \in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right];
\]
outcome $\openone-M$ corresponds to the complementary event
\[
\exists j\ \frac1n \sum_\ell Z_\ell^j \not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right].
\]
We can use Hoeffding's inequality to bound the traces in question.
For $\rho\in {\mathcal{C}}_s(\underline{v})$, we have $\bigl|\EE Z_\ell^j - \frac{1}{c} v_j \bigr| \leq \frac{s}{c}\Sigma(A_j)$
for all $j$, and so:
\[\begin{split}
\tr\rho^{\ox n}(\openone-M) &= \Pr\left\{ \exists j\ \frac1n \sum_\ell Z_\ell^j
\not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \sum_{j=1}^c \Pr\left\{ \frac1n \sum_\ell Z_\ell^j
\not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \sum_{j=1}^c \Pr\left\{ \left| \frac1n \sum_\ell Z_\ell^j - \EE Z_1^j \right|
> \frac{t-s}{2c} \Sigma(A_j) \right\} \\
&\leq c e^{-\frac{(t-s)^2}{2c^2}n}.
\end{split}\]
For $\sigma\in\cF_t(\underline{v})$, there exists a $j$ such that
$\bigl|\EE Z_\ell^j - \frac{1}{c} v_j \bigr| > \frac{t}{c}\Sigma(A_j)$. Thus,
\[\begin{split}
\tr\sigma^{\ox n} M &\leq \Pr\left\{ \frac1n \sum_\ell Z_\ell^j
\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \Pr\left\{ \left| \frac1n \sum_\ell Z_\ell^j - \EE Z_1^j \right|
> \frac{t-s}{2c} \Sigma(A_j) \right\} \\
&\leq e^{-\frac{(t-s)^2}{2c^2}n}.
\end{split}\]
This POVM is, by construction, permutation symmetric, but $M$ is not a
projector. To fix this, choose $\frac{\lambda}{n}$-nets $\cN_C^\lambda$ in ${\mathcal{C}}_s(\underline{v})$
and $\cN_F^\lambda$ in $\cF_t(\underline{v})$, with
$\lambda = e^{-\zeta n}$ and $\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
This means that every state $\rho\in{\mathcal{C}}_s(\underline{v})$
is no farther than $\frac{\lambda}{n}$ in trace distance from a $\rho'\in\cN_C^\lambda$,
so that $\|\rho^{\ox n}-{\rho'}^{\ox n}\|_1 \leq \lambda$, and likewise for $\cF_t(\underline{v})$.
By~\cite[Lemma~III.6]{Hayden2006} (or rather, a minor variation of its proof),
we can find such nets with
$|\cN_C^\lambda|,\ |\cN_F^\lambda| \leq \left( \frac{5n}{\lambda} \right)^{2d^2}$
elements.
Form the two states
\begin{align*}
\Gamma &:= \frac{1}{|\cN_C^\lambda|} \sum_{\rho\in\cN_C^\lambda} \rho^{\ox n}, \\
\Phi &:= \frac{1}{|\cN_F^\lambda|} \sum_{\sigma\in\cN_F^\lambda} \sigma^{\ox n},
\end{align*}
and let
\[
P := \{ \Gamma-\Phi \geq 0 \}
\]
be the Helstrom projector which optimally distinguishes $\Gamma$ from $\Phi$.
But we know already a POVM that distinguishes the two states, hence
$(P,P^\perp=\openone-P)$ cannot be worse:
\[
\tr\Gamma P^\perp + \tr\Phi P \leq \tr\Gamma (\openone-M) + \tr\Phi M
\leq (c+1) e^{-\frac{(t-s)^2}{2c^2}n},
\]
thus for all $\rho\in\cN_C^\lambda$ and $\sigma\in\cN_F^\lambda$,
\[
\tr\rho^{\ox n}P^\perp,\ \tr\sigma^{\ox n}P
\leq (c+1)\left( \frac{5n}{\lambda} \right)^{2d^2} e^{-\frac{(t-s)^2}{2c^2}n}.
\]
So, by the $\lambda$-net property, we find
for all $\rho\in{\mathcal{C}}_s(\underline{v})$ and $\sigma\in\cF_t(\underline{v})$,
\[
\tr\rho^{\ox n}P^\perp,\ \tr\sigma^{\ox n}P
\leq \lambda + (c+1)\left( \frac{5n}{\lambda} \right)^{2d^2} e^{-\frac{(t-s)^2}{2c^2}n}
\leq (c+2)(5n)^{2d^2} e^{-\zeta n},
\]
by our choice of $\lambda$.
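Explicitly, since $\frac{(t-s)^2}{2c^2} = (2d^2+1)\zeta$ and $\lambda = e^{-\zeta n}$,
\[
\left( \frac{5n}{\lambda} \right)^{2d^2} e^{-\frac{(t-s)^2}{2c^2}n}
= (5n)^{2d^2}\, e^{2d^2 \zeta n}\, e^{-(2d^2+1)\zeta n}
= (5n)^{2d^2}\, e^{-\zeta n},
\]
and $\lambda = e^{-\zeta n} \leq (5n)^{2d^2} e^{-\zeta n}$, which together yield the stated bound.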
\end{proof}
\begin{corollary}\label{corollary: a.m.c. projection}
For charges $A_j$, values $v_j = \langle A_j \rangle$ and $n>0$, Theorem~\ref{thm:symmetric-micro-exists} implies that for any $\eta' > 0$ there is an a.m.c. subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ with the following parameters:
\begin{align*}
\eta &=2 \eta',\\
\delta' &=\frac{c+3}{2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}}, \\
\delta &=(c+3)(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}},\\
\epsilon &=2(c+3)(n+1)^{3d^2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}}.
\end{align*}
Moreover, let $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ be a state with $\abs{\frac{1}{n}\Tr(\rho^n A_j^{(n)})- v_j}\leq \frac{1}{2} \eta' \Sigma(A_j)$ for all $j$. Then, $\rho^n$ projects onto the a.m.c. subspace with probability at least $1-\epsilon$:
\[\Tr(\rho^n P) \geq 1- \epsilon.\]
\end{corollary}
\begin{proof}
For simplicity of notation, we drop the subscript $j$ from $A_j$, $v_j$ and $\Pi^{\eta'}_j$, and let $\sum_{l=1}^d E_l \ketbra{l}{l}$ be the spectral decomposition of $A$. Define independent random variables $X_i$ for $i=1,\ldots,n$, taking values in the set $\set{E_1,\ldots,E_d}$ with probabilities $p_i(E_l)=\Tr(\rho_i \ketbra{l}{l} )$.
Furthermore, define the random variable $\overline{X}=\frac{X_1+\ldots+X_n}{n}$, which has expectation value
\begin{align*}
\mathbb{E}(\overline{X})=\frac{1}{n} \Tr(\rho^n A^{(n)}).
\end{align*}
Therefore, we obtain
\begin{align*}
1-&\Tr(\rho^n \Pi^{\eta'})\\
&=\sum_{\substack{l_1,\ldots,l_n: \\ \abs{E_{l_1}+\ldots+E_{l_n}-n v} \geq n \eta' \Sigma(A) }} \bra{l_1} \rho_1 \ket{l_1} \ldots \bra{l_n} \rho_n \ket{l_n} \\
&=\Pr \left( \abs{\overline{X}- v} \geq \eta' \Sigma(A) \right) \\
&=\Pr \left( \overline{X}- \mathbb{E}(\overline{X}) \geq \eta'\Sigma(A)+v - \mathbb{E}(\overline{X}) \>\bigcup \> \overline{X}- \mathbb{E}(\overline{X}) \leq -\eta' \Sigma(A)+v - \mathbb{E}(\overline{X}) \right) \\
&\leq \exp \left(-\frac{2n (\eta'\Sigma(A)+v - \mathbb{E}(\overline{X}))^2 }{(\Sigma(A))^2} \right)+\exp \left(-\frac{2n (\eta'\Sigma(A)-v + \mathbb{E}(\overline{X}))^2 }{(\Sigma(A))^2} \right)\\
& \leq 2 \exp (-\frac{n \eta'^2}{2})\\
& \leq \delta',
\end{align*}
where the second line follows because the random variables $X_1,\ldots,X_n$ are independent, so that $\Pr\{\overline{X}=\frac{E_{l_1}+\ldots+E_{l_n}}{n}\}=\bra{l_1} \rho_1 \ket{l_1} \cdots \bra{l_n} \rho_n \ket{l_n}$. The fourth line is due to Hoeffding's inequality (Lemma~\ref{Hoeffding's inequality}), and the fifth line is due to the assumption $\abs{\mathbb{E}(\overline{X})- v} \leq \frac{1}{2} \eta' \Sigma(A)$. The last line holds because $\frac{1}{2} \geq \frac{1}{8c^2(d+1)^2}$ and $2 \leq \frac{c+3}{2}(5n)^{2d^2}$ for $c, n \geq 1$.
Thus, by the definition of the a.m.c. subspace, $\Tr(\rho^n P) \geq 1- \epsilon$.
\end{proof}
\section{Proof of the AET Theorem~\ref{Asymptotic equivalence theorem}}
\label{proof-AET}
Here, we first prove the following lemma; points 3 and 4 of it will be used to prove the main theorem. Corollary~\ref{corollary: a.m.c. projection} implies that, assuming $\frac{1}{n}\Tr \rho^n A^{(n)}_j \approx \frac{1}{n}\Tr \sigma^n A^{(n)}_j \approx v_j$, the states $\rho^n$ and
$\sigma^n$ project onto the a.m.c. subspace with high probability.
Hence, in Lemma~\ref{lemma: timmed state}, we show that one can find states $\widetilde{\rho}$ and $\widetilde{\sigma}$ with support inside the a.m.c. subspace which are very close to the original states in trace norm, that is, $\widetilde{\rho} \approx \rho^n$ and $\widetilde{\sigma} \approx \sigma^n$, and that there are unitaries $V_1$ and $V_2$ which factorize these states into tensor products of maximally mixed states $\tau$ and $\tau'$ with some other states of very small dimension:
\begin{align*}
V_1\widetilde{\rho}V_1^{\dagger}=\tau \otimes \omega \quad \text{and} \quad
V_2\widetilde{\sigma}V_2^{\dagger}=\tau' \otimes \omega'.
\end{align*}
Further, assuming that the states $\rho^n$ and $\sigma^n$ have very close entropy rates, i.e.
$\frac{1}{n}S(\rho^n) \approx \frac{1}{n}S(\sigma^n)$, one can choose $\tau$ and $\tau'$ to have the same dimension, that is, $\tau=\tau'$. Thus, the two states $\widetilde{\rho}\otimes \omega'$ and $\widetilde{\sigma}\otimes \omega$ have exactly the same spectrum, so there is a unitary acting on the a.m.c. subspace and the ancillary system taking one state to the other. Based on the properties of the a.m.c. subspace, we show that this unitary almost commutes with the charges $A_j^{(n)}$.
\begin{lemma}\label{lemma: timmed state}
Let subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ with projector $P$ be a high probability subspace for state $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$, i.e. $\Tr(\rho^n P) \geq 1- \epsilon$.
Then, for sufficiently large $n$ there is a subspace $\widetilde{\mathcal{M}} \subseteq \mathcal{M}$ with projector $\widetilde{P}$ and state $\widetilde{\rho}$ with support inside $\widetilde{\mathcal{M}}$ such that the following holds:
\begin{enumerate}
\item $\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}) \geq 1- 2\sqrt{\epsilon}-\frac{1}{O(\alpha)}$.
\item $2^{-\sum_{i=1}^n S(\rho_i)-2\alpha \sqrt{n}}\widetilde{P} \leq \widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} \leq 2^{-\sum_{i=1}^n S(\rho_i)+ \alpha \sqrt{n}}\widetilde{P}$.
\item There is a unitary $U$ such that $U\widetilde{\rho}U^{\dagger}=\tau \otimes \omega $
where $\tau$ is a maximally mixed state of dimension $2^{\sum_{i=1}^n S(\rho_i) - O(\alpha \sqrt{n})}$, and $\omega$ is a state of dimension $2^{O(\alpha \sqrt{n })}$.
\item $\norm{\widetilde{\rho}-\rho^n}_1 \leq 2\sqrt{\epsilon}+\frac{1}{O(\alpha)}+2\sqrt{2\sqrt{\epsilon}+\frac{1}{O(\alpha)}} $.
\end{enumerate}
\end{lemma}
\begin{proof}
1. Let $E\geq 0$ and $F\geq 0$ be two positive operators such that $E+F=P \Pi^n_{\alpha ,\rho^n } P$ where all eigenvalues of $F$ are smaller than $2^{-\alpha \sqrt{n}}$, and define $\widetilde{P}$ to be the projection onto the support of $E$.
In other words, $\widetilde{P}$ is the projection onto the part of the support of $P \Pi^n_{\alpha ,\rho^n } P$ with corresponding eigenvalues greater than $2^{-\alpha\sqrt{n}}$. Then, we obtain
\begin{align*}
\Tr&(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P})\\
& \geq \Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} E)\\
&=\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} P\Pi^n_{\alpha,\rho^n}P)-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n P\Pi^n_{\alpha,\rho^n}P)-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n \Pi^n_{\alpha,\rho^n})-\norm{P\rho^n P-\rho^n}_1-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n \Pi^n_{\alpha,\rho^n})-\norm{P\rho^n P-\rho^n}_1-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-2^{-\alpha \sqrt{n}}\\
& \geq 1-\frac{\beta}{\alpha^2}-2\sqrt{\epsilon}-2\frac{\sqrt{\beta}}{\alpha}-2^{-\alpha \sqrt{n}},
\end{align*}
where the first line follows from the fact that $\widetilde{P} \geq E$. The third, fourth and fifth lines are due to H\"{o}lder's inequality. The last line follows from Lemma \ref{lemma:typicality properties } and the gentle operator lemma (Lemma \ref{Gentle Operator Lemma}).
\medskip
2. By the fact that in the typical subspace the eigenvalues of $\rho^n$ are bounded (Lemma \ref{lemma:typicality properties }), we obtain
\begin{align*}
\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} &\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha\sqrt{n}}\widetilde{P} \Pi^n_{\alpha ,\rho^n } \widetilde{P} \\
&\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha\sqrt{n}}\widetilde{P}.
\end{align*}
For the lower bound notice that
\begin{align*}
\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}& \geq 2^{-\sum_{i=1}^n S(\rho_i)-\alpha\sqrt{n}}\widetilde{P} \Pi^n_{\alpha ,\rho^n } \widetilde{P}\\
&=2^{-\sum_{i=1}^n S(\rho_i)-\alpha\sqrt{n}}\widetilde{P}P \Pi^n_{\alpha ,\rho^n } P \widetilde{P} \\
&\geq 2^{-\sum_{i=1}^n S(\rho_i)-2\alpha\sqrt{n}} \widetilde{P},
\end{align*}
where the equality holds because $\widetilde{\mathcal{M}} \subseteq \mathcal{M}$, and therefore $\widetilde{P} P=\widetilde{P}$. The last inequality follows because $\widetilde{P}$ is the projection onto the part of the support of $P \Pi^n_{\alpha ,\rho^n } P$ with eigenvalues greater than $2^{-\alpha\sqrt{n}}$.
\medskip
3. Consider the unnormalized state $\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}$ with support inside $\widetilde{\mathcal{M}}$.
From point 2, we know that all the eigenvalues of this state belong to the interval $[2^{-\sum_{i=1}^n S(\rho_i)-2\alpha \sqrt{n}},2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}]$, which we denote by $[p_{\min},p_{\max}]$. We divide this interval into $b=2^{\floor{5\alpha \sqrt{n}}}$ intervals (bins) of equal length $\Delta p=\frac{p_{\max}-p_{\min}}{b}$. Now, we \textit{trim} the eigenvalues of this unnormalized state in three steps as follows.
\begin{enumerate}[(a)]
\item Each eigenvalue belongs to a bin, i.e.~an interval $[p_k,p_{k+1})$ for some $0 \leq k \leq b -1$ with $p_k=p_{\min}+ k\,\Delta p$; that is, eigenvalue $\lambda_l$ equals $p_k+q_l$ for some $k$ and some $0\leq q_l< \Delta p$. We throw away the $q_l$ part of each eigenvalue $\lambda_l$. The sum of these parts over all eigenvalues is very small:
\begin{align*}
\sum_{l=1}^{|\widetilde{\mathcal{M}}|} q_l \leq \Delta p |\widetilde{\mathcal{M}}| \leq 2^{-2\alpha \sqrt{n}+1},
\end{align*}
where the dimension of the subspace $\widetilde{\mathcal{M}}$ is bounded as $|\widetilde{\mathcal{M}}|\leq 2^{\sum_{i=1}^n S(\rho_i)+2\alpha \sqrt{n}}$ which follows from point 2 of the lemma.
\item We throw away the bins which contain fewer than $2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}}$ eigenvalues. The sum of all the eigenvalues that are thrown away is bounded by
\begin{align*}
2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}} \times 2^{5\alpha \sqrt{n}} \times 2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}} \leq 2^{-4\alpha \sqrt{n}},
\end{align*}
where, on the left-hand side, the first factor bounds the number of eigenvalues in a discarded bin, the second is the number of bins, and the third is the maximum eigenvalue.
\item If a bin, say the $k$th, is not thrown away in the previous step, it contains $M_k$ eigenvalues of the same value, with
\begin{align}\label{eq:bounds of bin size}
2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}} \leq M_k \leq 2^{\sum_{i=1}^n S(\rho_i)+2\alpha \sqrt{n}}.
\end{align}
Let
\begin{align}\label{eq:L}
L= 2^{\floor{\sum_{i=1}^n S(\rho_i) -10 \alpha\sqrt{n}}}
\end{align}
and for the $k$th bin, let $m_{k}$ be the integer such that
\begin{align}\label{M_k_L}
m_{k} L\leq M_k \leq (m_{k}+1) L.
\end{align}
Then, $m_{k}$ is bounded as follows
\begin{align}\label{m_k}
m_{k} \leq 2^{12\alpha\sqrt{n}}.
\end{align}
From the $k$th bin, we keep $m_{k} L$ of the eigenvalues and throw away the rest, of which there are $M_k-m_{k} L \leq L$; the sum of the eigenvalues thrown away in this step is bounded by
\begin{align}
\sum_{k=0}^{b-1} p_{k}(M_k-m_k L) \leq L\sum_{k=0}^{b-1} p_k \leq 2^{-4\alpha\sqrt{n}}. \nonumber
\end{align}
\end{enumerate}
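For step (a) above, the stated bound on the discarded mass follows from $b = 2^{\floor{5\alpha \sqrt{n}}} \geq 2^{5\alpha\sqrt{n}-1}$:
\[
\Delta p \leq \frac{p_{\max}}{b} \leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha\sqrt{n}} \cdot 2^{-5\alpha\sqrt{n}+1},
\quad\text{hence}\quad
\Delta p\, |\widetilde{\mathcal{M}}| \leq 2^{-2\alpha\sqrt{n}+1},
\]
using $|\widetilde{\mathcal{M}}| \leq 2^{\sum_{i=1}^n S(\rho_i)+2\alpha\sqrt{n}}$.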
Therefore, for sufficiently large $n$ the sum of the eigenvalues thrown away in
the last three steps is bounded by
\begin{align}\label{eq:thrown away sum}
2^{-2\alpha\sqrt{n}+1}+2^{-4\alpha\sqrt{n}}+2^{-4\alpha\sqrt{n}}\leq 2^{-\alpha\sqrt{n}}.
\end{align}
The kept eigenvalues of all bins form an unnormalized state of dimension $\sum_{k=0}^{b-1} m_{k} L$, in which each eigenvalue has degeneracy at least $L$. Thus, up to a unitary $U^{\dagger}$, it can be factorized into the tensor product of a maximally mixed state $\tau$ and an unnormalized state $\omega'$, of dimensions $L$ and $\sum_{k=0}^{b-1} m_{k}$, respectively.
From (\ref{m_k}), the dimension of $\omega'$ is bounded by
\begin{align}
\sum_{k=0}^{b-1} m_{k} \leq 2^{12\alpha\sqrt{n}} \times 2^{5 \alpha\sqrt{n}}=2^{17 \alpha\sqrt{n}}. \nonumber
\end{align}
Then, let $\omega =\frac{\omega'}{\Tr(\omega')}$ and define
\begin{align*}
\widetilde{\rho}=U \tau \otimes \omega U^{\dagger}.
\end{align*}
\medskip
4. From points 3 and 1, we obtain
\begin{align}\label{eq: Tr of omega'}
\Tr(\omega') &= \Tr(\tau \otimes \omega') \nonumber\\
&\geq \Tr(\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P})-2^{-\alpha\sqrt{n}} \nonumber\\
&\geq 1-2\sqrt{\epsilon}-2\frac{\sqrt{\beta}}{\alpha}-\frac{\beta}{\alpha^2}-2^{-\alpha \sqrt{n}+1}.
\end{align}
Thereby, we get the following
\begin{align*}
\norm{\widetilde{\rho}-\rho^n}_1& \leq \norm{\widetilde{\rho}-U \tau \otimes \omega' U^{\dagger}}_1+\norm{U \tau \otimes \omega' U^{\dagger}-\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}}_1 \\
&\quad \quad \quad+ \norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1 \\
&\leq 1-\Tr(\omega')+\norm{U \tau \otimes \omega' U^{\dagger}-\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}}_1 \\
&\quad \quad \quad+ \norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1\\
&\leq 1-\Tr(\omega')+2^{-\alpha \sqrt{n}}+\norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1\\
&\leq 1-\Tr(\omega')+2^{-\alpha \sqrt{n}} +2\sqrt{2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}}} \\
&= 2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}+1}+2\sqrt{2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}}},
\end{align*}
where the first line is due to the triangle inequality. The second, third and fourth lines are
due to Eqs.~(\ref{eq: Tr of omega'}) and (\ref{eq:thrown away sum}),
and Lemma~\ref{Gentle Operator Lemma}, respectively.
\end{proof}
\begin{proof-of}[{of Theorem \ref{Asymptotic equivalence theorem}}]
We first prove the \textit{if} part. If there is an almost-commuting unitary $U$ and an ancillary system with the desired properties stated in the theorem, then we obtain
\begin{align}
\frac{1}{n}\abs{S(\rho^n)-S(\sigma^n)}
&\leq \frac{1}{n}\abs{S(\rho^n \otimes \omega')-S(\sigma^n\otimes \omega)}
+\frac{1}{n}\abs{S(\omega')-S(\omega)} \nonumber \\
&\leq \frac{1}{n}\abs{S(\rho^n\otimes \omega')-S(\sigma^n\otimes \omega)}
+\frac{2}{n}\log 2^{o(n)} \nonumber \\
&= \frac{1}{n}\abs{S(U(\rho^n\otimes \omega' )U^{\dagger})-S(\sigma^n\otimes \omega)}
+o(1) \nonumber \\
&\leq \frac{1}{n} o(1) \log (d^{n} \times 2^{ o(n)})
+\frac{1}{n}h\left(o(1)\right)+o(1)\nonumber \\
&= o(1) \nonumber,
\end{align}
where the first line follows from the additivity of the von Neumann entropy and the triangle inequality. The second line is due to the fact that the von Neumann entropy of a state is upper bounded by the logarithm of its dimension.
The penultimate line follows from the continuity of the von Neumann entropy \cite{Fannes1973,Audenaert2007}, where $h(x) = -x \log x - (1 - x) \log(1 - x)$ is the binary entropy function.
Moreover, we obtain
\begin{align}\label{approximate average charge conservation}
\frac{1}{n}&\abs{\Tr(\rho^n A_j^{(n)})-\Tr(\sigma^n A_j^{(n)})} \nonumber\\
&=\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' (A_j^{(n)}+A_j')\right)- \Tr\left( \sigma^n\otimes \omega (A_j^{(n)}+A_j')\right) } \nonumber \\
&\leq\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' (A_j^{(n)}+A_j')\right)- \Tr\left( U\rho^n\otimes \omega'U^{\dagger} (A_j^{(n)}+A_j')\right) }\nonumber\\
&\quad \quad +\frac{1}{n}\abs{\Tr\left( U\rho^n\otimes \omega'U^{\dagger} (A_j^{(n)}+A_j')\right)- \Tr\left( \sigma^n\otimes \omega (A_j^{(n)}+A_j')\right) }\nonumber\\
&=\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' \left(A_j^{(n)}+A_j' -U^{\dagger}(A_j^{(n)}+A_j')U\right)\right) }\nonumber\\
&\quad \quad \quad +\frac{1}{n}\abs{\Tr\left( \left(U\rho^n\otimes \omega'U^{\dagger} - \sigma^n\otimes \omega \right) (A_j^{(n)}+A_j')\right)} \nonumber\\
&\leq \frac{1}{n} \Tr(\rho^n \otimes \omega') \norm{U(A_j^{(n)}+A_j')U^{\dagger} - (A_j^{(n)}+A_j')}_{\infty}\nonumber\\
&\quad \quad \quad + \frac{1}{n} \norm{U\rho^n\otimes \omega'U^{\dagger} - \sigma^n\otimes \omega}_1 \norm{A_j^{(n)}+A_j'}_{\infty} \nonumber\\
& = o(1),
\end{align}
where the second line follows because $A_j'=0$ for all $j$. The third and fifth lines are due to
the triangle inequality and H\"{o}lder's inequality, respectively.
\medskip
Now, we prove the \textit{only if} part. Assume that for the states $\rho^n$ and $\sigma^n$ the following holds:
\begin{align*}
& \frac{1}{n} \abs{S(\rho^n)-S(\sigma^n)} \leq \gamma_n \\
& \frac{1}{n} \abs{ \Tr(A_j^{(n)}\rho^n)-\Tr(A_j^{(n)}\sigma^n)}\leq \gamma'_n \quad \quad j=1,\ldots,c,
\end{align*}
for vanishing $\gamma_n $ and $\gamma'_n $.
According to Theorem \ref{thm:symmetric-micro-exists}, for charges $A_j$, values $ v_j=\frac{1}{n} \Tr(\rho^n A_j^{(n)})$, $\eta'>0$ and any $n>0$, there is an a.m.c. subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ with projector $P$ and the following parameters:
\begin{align*}
&\eta=2 \eta',\\
&\delta'=\frac{c+3}{2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}}, \\
&\delta=(c+3)(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}},\\
&\epsilon=2(c+3)(n+1)^{3d^2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}}.
\end{align*}
Choose $\eta'$ as follows, so that $\delta$, $\delta'$ and $\epsilon$ vanish for large $n$:
\begin{align*}
\eta'=\left\{
\begin{array}{ll}
\frac{\sqrt{8}c(d+1) }{n^{\frac{1}{4}} \Sigma(A)_{\min}} \quad &\text{if} \quad \gamma'_n \leq \frac{1}{n^{\frac{1}{4}}}\\
\frac{\sqrt{8}c(d+1)\gamma'_n}{ \Sigma(A)_{\min}} \quad &\text{if} \quad \gamma'_n > \frac{1}{n^{\frac{1}{4}}}
\end{array}
\right.
\end{align*}
where $\Sigma(A)_{\min}$ is the minimum of the spectral diameters $\Sigma(A_j)$ of the
charges. Since $\frac{1}{n} \Tr(\rho^n A_j^{(n)})= v_j$ and $\abs{\frac{1}{n}\Tr(\sigma^n A_j^{(n)})- v_j} \leq \frac{1}{2}\eta' \Sigma (A_j)$, Corollary~\ref{corollary: a.m.c. projection} implies that the states
$\rho^n$ and $\sigma^n$ project onto this a.m.c. subspace with probability at least $1-\epsilon$:
\begin{align*}
&\Tr(\rho^n P)\geq 1-\epsilon,\\
&\Tr(\sigma^n P)\geq 1-\epsilon.
\end{align*}
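Note that the hypothesis of Corollary~\ref{corollary: a.m.c. projection} is indeed satisfied for $\sigma^n$: in both cases of the definition of $\eta'$,
\[
\frac{1}{2}\eta'\, \Sigma(A_j) \geq \frac{1}{2}\eta'\, \Sigma(A)_{\min}
= \sqrt{2}\, c(d+1) \max\left\{ \gamma'_n,\, n^{-\frac{1}{4}} \right\}
\geq \gamma'_n \geq \abs{\tfrac{1}{n}\Tr(\sigma^n A_j^{(n)})- v_j}.
\]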
Moreover, consider the typical projectors $\Pi^n_{\alpha ,\rho^n }$ and $\Pi^n_{\alpha ,\sigma^n }$ of the states $\rho^n$ and $\sigma^n$, respectively, with $\alpha=n^{\frac{1}{3}}$. Then, points 3 and 4 of Lemma~\ref{lemma: timmed state} imply that there are states $\widetilde{\rho}$ and $\widetilde{\sigma}$ with support inside the a.m.c. subspace $\mathcal{M}$ and unitaries $V_1$ and $V_2$ such that
\begin{align}\label{eq:4 formulas}
&\norm{\widetilde{\rho}-\rho^n}_1 \leq o(1) , \nonumber\\
&\norm{\widetilde{\sigma}-\sigma^n}_1 \leq o(1), \nonumber\\
&V_1\widetilde{\rho}V_1^{\dagger}=\tau \otimes \omega, \nonumber\\
&V_2\widetilde{\sigma}V_2^{\dagger}=\tau' \otimes \omega',
\end{align}
where $\tau$ and $\tau'$ are maximally mixed states; since $\abs{S(\rho^n)-S(\sigma^n)} \leq n \gamma_n$, one may choose their dimension in Eq.~(\ref{eq:L}) to be exactly the same, $L=2^{\floor{\sum_{i=1}^n S(\rho_i)-10 z}}$ with $z=\max \{\alpha \sqrt{n}, n \gamma_n\}$; hence, we obtain $\tau=\tau'$. Then, $\omega$ and $\omega'$ are states with support inside a Hilbert space $\mathcal{K}$ of dimension $2^{O(z)}=2^{o(n)}$.
Then, it is immediate to see that the states $\widetilde{\rho} \otimes \omega'$ and $\widetilde{\sigma} \otimes \omega$ on Hilbert space $\mathcal{M}_t=\mathcal{M} \otimes \mathcal{K}$ have exactly the same spectrum; thus, there is a unitary $\widetilde{U}$ on subspace $\mathcal{M}_t$ such that
\begin{align}\label{eq:exact spectrum states}
\widetilde{U} \widetilde{\rho} \otimes \omega' \widetilde{U}^{\dagger} =\widetilde{\sigma} \otimes \omega.
\end{align}
We extend the unitary $\widetilde{U}$ to $U=\widetilde{U} \oplus \openone_{\mathcal{M}_t^{\perp}}$
acting on $\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ and obtain
\begin{align*}
&\norm{U \rho^n \otimes \omega' U^{\dagger} - \sigma^n \otimes \omega}_1 \\
&\quad \quad \leq
\norm{U \rho^n \otimes \omega' U^{\dagger} - U \widetilde{\rho} \otimes \omega' U^{\dagger}}_1 +\norm{ \sigma^n \otimes \omega-\widetilde{\sigma}\otimes \omega}_1
+\norm{ U \widetilde{\rho} \otimes \omega' U^{\dagger}-\widetilde{\sigma}\otimes \omega}_1 \\
& \quad \quad =\norm{U \rho^n \otimes \omega' U^{\dagger} - U \widetilde{\rho} \otimes \omega' U^{\dagger}}_1 +\norm{ \sigma^n \otimes \omega-\widetilde{\sigma}\otimes \omega}_1 \\
& \quad \quad \leq o(1),
\end{align*}
where the second and last lines are due to Eqs. (\ref{eq:exact spectrum states}) and (\ref{eq:4 formulas}), respectively.
As mentioned before, $\mathcal{M}_t=\mathcal{M} \otimes \mathcal{K}$ is a subspace of $\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ with projector $P_t=P \otimes \openone_{\mathcal{K}}$, where $P$ is the projector onto the a.m.c. subspace. We define the total charges $A_j^t=A_j^{(n)}+A_j'$, let $A_j'=0$ for all $j$, and show that every unitary of the form $U=U_{\mathcal{M}_t} \oplus \openone_{\mathcal{M}_t^{\perp}}$ asymptotically commutes with all total charges:
\begin{align*}
\norm{U A_j^t U^{\dagger}-A_j^t}_{\infty} &= \norm{(P_t+P_t^{\perp})(U A_j^t U^{\dagger}-A_j^t)(P_t+P_t^{\perp})}_{\infty} \\
& \leq \norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}+\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty} \\
& \quad +\norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t^{\perp}}_{\infty}+\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t^{\perp}}_{\infty}\\
& = \norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}+2\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty} \\
& \leq 3 \norm{(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}\\
& = 3 \norm{(U A_j^t U^{\dagger} -n v_j \openone+ nv_j \openone-A_j^t)P_t}_{\infty}\\
& \leq 3 \norm{(U A_j^t U^{\dagger} -n v_j \openone)P_t}_{\infty}+3 \norm{(A_j^t- nv_j \openone)P_t}_{\infty}\\
& = 6 \norm{(A_j^t- nv_j \openone)P_t}_{\infty}\\
& = 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone)\ket{v}}_2\\
& = 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{ (A_j^t-n v_j \openone)(\Pi_j^{\eta} \otimes \openone_{\mathcal{K}} +\openone-\Pi_j^{\eta} \otimes \openone_{\mathcal{K}}) \ket{v}
}_2 \\
& \leq 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) \Pi_j^{\eta} \otimes \openone \ket{v}}_2 \\
& \quad \quad \quad+6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \leq 6n \Sigma (A_j) \eta +6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2,
\end{align*}
where the first line is due to the fact that $P_t+P_t^{\perp}=\openone_{\mathcal{H}^{\ox n}} \ox \openone_{\mathcal{K}}$. The fourth line follows because the compression of the Hermitian operator $U A_j^t U^{\dagger}-A_j^t$ to the subspace $\mathcal{M}_t^{\perp}$ vanishes, as $U$ acts as the identity there.
The fifth line is due to Lemma~\ref{lemma:norm inequality}, and the twelfth line is due to the definition of the a.m.c. subspace. Now, we bound the second term in the above:
\begin{align*}
&6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \quad \quad \quad \leq 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{A_j^t-n v_j \openone }_{\infty} \norm{ (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \quad \quad \quad = 6 \norm{A_j^t-n v_j \openone}_{\infty} \max_{\ket{v} \in \mathcal{M}_t} \sqrt{ \Tr ((\openone-\Pi_j^{\eta} \otimes \openone)\ketbra{v}{v})}\\
& \quad \quad \quad = 6 n\norm{A_j- v_j \openone}_{\infty} \max_{v \in \mathcal{M}} \sqrt{ \Tr ((\openone-\Pi_j^{\eta} )v)}\\
&\quad \quad \quad \leq 6 n\norm{A_j- v_j \openone}_{\infty} \sqrt{ \delta},
\end{align*}
where the first line is due to Lemma~\ref{lemma:norm inequality},
and the last line is by the definition of the a.m.c. subspace.
Thus, for vanishing $\delta$ and $\eta$ we obtain:
\begin{align*}
\frac{1}{n}\norm{U A_j^t U^{\dagger}-A_j^t}_{\infty} \leq o(1),
\end{align*}
concluding the proof.
\end{proof-of}
\section{Discussion}
\label{sec:discussion}
We have considered an asymptotic resource theory with states of tensor product structure
as the objects, and with allowed operations which are thermodynamically meaningful,
namely operations which asymptotically preserve the entropy and the charges of a system.
The allowed operations partition the objects into equivalence classes of
asymptotically interconvertible objects. The basic result on which
our theory is built is that two objects are interconvertible via allowed operations
if and only if they have the same average entropy and average charge values in the
asymptotic limit.
The existence of allowed operations between objects of the same class rests on two pillars.
First, for objects with the same average entropy, there exist ancillary states of sublinear dimension which
can be coupled to the objects to make their spectra asymptotically identical.
Second, objects with the same average charge values are supported on a common subspace of the
charges of the system, with the property that any unitary acting on this subspace almost commutes with the corresponding charges. Therefore, the spectra of the objects of the same class can be modified using small ancillary systems, after which the objects are interconvertible via unitaries that asymptotically preserve the charges of the system.
The notion of a common subspace for different charges, which are Hermitian operators,
was introduced in \cite{Halpern2016} as the approximate microcanonical (a.m.c.) subspace.
In this chapter, for given charges and parameters, we show the existence of an a.m.c. subspace which is by construction permutation-symmetric, a property not guaranteed by the construction in \cite{Halpern2016}.
\chapter{Formulation of quantum source compression problems}
\label{chap:source-coding}
In this chapter, we first expand on the concept of quantum sources, review the associated literature, and mathematically define an asymptotic compression task as a general model which
includes all previously studied asymptotic models.
Then, we introduce quantum compression problems with side information, review
the corresponding literature, and proceed to define the most general asymptotic compression task
with side information.
Finally, we summarize the results that we have obtained on quantum source compression.
\section{What is a quantum source?}
A statistical quantum source is a quantum system together with correlations with a \emph{reference system}.
A criterion for how well a source is reproduced in a communication
task is how well the correlations with
the reference system are preserved. Without correlations, there is no information to transmit,
since a known quantum state without correlations can be reproduced at the
destination without any communication.
A special case is a classical statistical source, which is modeled by a random variable. Since classical information can be copied, a copy of the random variable can always be stored as a reference, and the final processed information is compared with this copy
to analyse the performance of the communication task.
In the classical literature, however, the reference is usually not considered explicitly in the description of
information-theoretic tasks, whereas it is arguably conceptually necessary in quantum
information. This is because it allows us to present the figure of merit quantifying
the decoding error as operationally accessible, for example via the probability of
passing a test in the form of a measurement on the combined source and reference systems. This point
is made eloquently in the early work of Schumacher on quantum information transmission
\cite{Schumi1996,Barnum1998}.
To elaborate more on the reference system, consider the source that Schumacher defined in his 1995 paper \cite{Schumacher1995,Jozsa1994_1} as an ensemble of pure states $\{p(x),\ket{\psi_x}^{A} \}$,
where the source generates the state $\ket{\psi_x}$ with probability $p(x)$. The figure of merit for the encoding-decoding process is to keep the decoded quantum states \emph{on average} very close to the original states with respect to the fidelity, where the average is taken over the probability distribution $p(x)$.
By basic algebra one can show that this is equivalent to preserving the classical-quantum state
$\rho^{AR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^R$, where system $A$ is the quantum system to be compressed and $R$ is the reference system; namely the following fidelity relation holds:
\begin{align*}
\sum_{x} p(x) F(\proj{\psi_x}^A,\xi_x^{\hat{A}})=F(\rho^{AR}, \xi^{\hat{A}R}),
\end{align*}
where $\xi_x^{\hat{A}}$ is the decoded state for the realization $x$ and $\xi^{\hat{A}R}=\sum_x p(x) \xi_x^{\hat{A}} \otimes \proj{x}^R$.
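This equality can be verified directly. A short calculation, assuming the root-fidelity convention $F(\rho,\sigma)=\norm{\sqrt{\rho}\sqrt{\sigma}}_1$ (the equality as stated holds with this convention), exploits the block-diagonal structure of both states with respect to the classical register $R$:
\begin{align*}
F\big(\rho^{AR},\xi^{\hat{A}R}\big)
&= \norm{\sqrt{\rho^{AR}}\sqrt{\xi^{\hat{A}R}}}_1
= \norm{\sum_x p(x)\, \sqrt{\proj{\psi_x}^A}\sqrt{\xi_x^{\hat{A}}} \otimes \proj{x}^R}_1 \\
&= \sum_x p(x)\, \norm{\sqrt{\proj{\psi_x}^A}\sqrt{\xi_x^{\hat{A}}}}_1
= \sum_x p(x)\, F\big(\proj{\psi_x}^A,\xi_x^{\hat{A}}\big),
\end{align*}
since the trace norm of an operator that is block diagonal with respect to $R$ is the sum of the trace norms of its blocks.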
Another source model that Schumacher considered was the purification of the source ensemble,
that is the state $\ket{\psi}^{AR}=\sum_x \sqrt{p(x)}\ket{\psi_x}^{A} \ket{x}^R$, where the
figure of merit for the encoding-decoding process was to preserve the pure state correlations with the
reference system $R$ by maintaining a high fidelity between the decoded state and $\psi$.
He showed that both definitions lead to the same compression rate, namely,
the von Neumann entropy of the source $S(A)_{\rho} = S(\rho^A)$, where
$\rho^A = \Tr_R \rho^{AR}$. Incidentally, the full proof of optimality in the first
model, without any additional restrictions on the encoder, had to wait until \cite{Barnum1996}
(see also \cite{Horodecki1998});
the strong converse, i.e. the optimality of the entropy rate even for constant error
bounded away from $1$, was eventually given in \cite{Winter1999}.
Another example of a quantum source is the mixed state source considered by Horodecki \cite{Horodecki1998} and Barnum \emph{et al.} \cite{Barnum2001}, and finally solved by Koashi and Imoto \cite{KI2001},
where the source is defined as an ensemble of mixed states $\{p(x),\rho_x^{A} \}$. Preserving
these mixed quantum states, on average, in the process of encoding-decoding is
equivalent to preserving the state $\rho^{AR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^R$, that is,
the quantum system $A$ together with its correlations with the classical reference system $R$.
In this thesis, we consider the most general finite-dimensional source in the realm of
quantum mechanics, namely a quantum system $A$ that is correlated with a reference system
$R$ in an arbitrary way, described by the overall state $\rho^{AR}$. In particular, the
reference does not necessarily purify the source, nor is it assumed to be classical.
The ensemble source and the pure source defined by Schumacher are special cases of this model,
where the reference is a classical system in the former and a purifying system in the latter.
The source considered by Koashi and Imoto in \cite{KI2001}, where the reference system
is again classical, is likewise a special case of this model.
Understanding the compression of the source $\rho^{AR}$ is of paramount importance in the
field of quantum information theory and unifies all the models that have been considered
in the literature. Schumacher's pure source model is in a sense the most stringent model:
it requires preserving the correlations with a purifying reference system, which
implies that the correlations with any other reference system are preserved as well, since
the fidelity is non-decreasing under quantum channels.
However, the converse is not necessarily true: if in a compression task the parties are
required to preserve the correlations with a given reference system which does not purify
the source state, they might be able to compress more efficiently compared to the scenario
where the reference system purifies the source. This is exactly what we show in Chapter~\ref{chap:mixed state}:
we characterise the gap precisely depending on the reference system.
\section{Mathematical definition of quantum noiseless compression}
A source compression task consists of an encoder which maps the source to compressed information that is stored or sent to another party. When needed, a decoder maps the compressed information to decoded information; the aim is to preserve the correlations with the reference system and reconstruct a source which is very close to the original in some distance measure. In the quantum realm, the most general encoding and decoding maps are quantum operations, i.e. CPTP maps.
The communication means or quantum storage device is assumed to be an ideal channel acting as the identity on the encoded information, which can be simulated through various resources such as a qubit channel, or shared entanglement combined with classical communication.
The resource counted is the dimension of the Hilbert space of the output of the encoding operation.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{"rho_AR_compression".jpg}
\caption{Circuit diagram of the compression task: the source is composed of $n$ copies of the state $\rho^{AR}$ where
$A^n$ is the system to be compressed and $R^n$ is an inaccessible reference system.
Dotted lines demarcate the domains controlled by the different participants: the reference, the encoder (Alice) and the decoder (Bob). The solid lines represent quantum information registers.
The encoder sends the compressed information, i.e. system $M_n$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_0$ and $B_0$, respectively.
The aim of the compression task is to reconstruct the source at the decoder side, that is the final state $\xi^{\hat{A}^n R^n}$ has the fidelity converging to 1 with the source state
$\rho^{A^nR^n}$; this ensures that the correlations between the reconstructed system $\hat{A}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder and decoder distill entanglement in
their registers $A'_0$ and $B'_0$, respectively.
}
\label{fig: rho AR compression task}
\end{figure}
Throughout the thesis, we consider the information theoretic asymptotic limit of
$n$ copies of a finite dimensional source with state $\rho^{AR}$, i.e.~$\rho^{A^n R^n} = \left(\rho^{AR}\right)^{\otimes n}$ where system $A$ is the system to be compressed
and system $R$ is an inaccessible reference system.
We assume that the encoder, Alice, and the decoder, Bob, share initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
The encoder, Alice, performs the encoding compression operation
$\mathcal{E}:A^n A_0 \longrightarrow M_n A_0'$ on the system $A^n$ and her part $A_0$ of the entanglement, which is a CPTP map.
Alice's encoding operation produces the state $\sigma^{M_n R^n A_0' B_0}$ with $M_n$, $A_0'$ and $B_0$ as the compressed system of Alice, Alice's new entanglement system and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M_n| \leq \abs{A}^n$.
The system $M_n$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M_n B_0 \longrightarrow \hat{A}^n B_0'$ on the system
$M_n$ and his part of the entanglement $B_0$ where $\hat{A}^n$ and $B'_0$ are the reconstructed
source and Bob's new entanglement system.
Ideally the encoder and decoder want to distill entanglement in the form of maximally entangled
state $\Phi_L^{A'_0 B'_0}$ of dimension $L$ in their corresponding registers $A'_0$ and $B'_0$.
We call $\frac1n (\log K-\log L)$ and $\frac1n \log|M_n|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
We say the encoding-decoding scheme has \emph{fidelity} $1-\epsilon$, or \emph{error} $\epsilon$, if
\begin{align}
\label{eq:fidelity criterion no side info task}
F\left( \rho^{A^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n R^n A_0' B_0'} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n R^n A_0' B_0'}=\left((\mathcal{D}\circ\mathcal{E})\otimes {\operatorname{id}}_{R^n}\right) \left(\rho^{A^n R^n } \otimes \Phi_K^{A_0 B_0}\right)$.
Moreover, we say that $(E,Q)$ is an (asymptotically) achievable rate pair if there
exists a sequence of codes (encoders and decoders) such that, as $n \to \infty$, the fidelity converges to $1$, and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
Compression schemes whose error converges to zero are called noiseless compression schemes; these are the schemes we consider throughout the thesis.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
\medskip
A schematic description of the quantum source and its compression is illustrated in Fig.~\ref{fig: rho AR compression task}
where the system to be compressed and the reference are denoted by $A^n$ and $R^n$, respectively.
This compression problem is addressed in Chapter~\ref{chap:mixed state} where we find the
optimal trade-off rate region for the entanglement and quantum rates, that is the pairs $(E,Q)$.
\section{Quantum noiseless compression with side information}\label{sec:side info intro}
In information theory, side information refers to extra information which
is correlated with an information source and is available to the encoder, the decoder or both of them; the parties can exploit this extra information to use fewer resources, for example to reduce the dimension of the compressed information.
Slepian and Wolf were the first to study the compression of a classical source, i.e.\ a random variable, where the decoder has access to another random variable correlated with the source, and showed that the compression rate is equal to the conditional Shannon entropy
\cite{Slepian1973}.
In the \textit{visible} paradigm of source compression, an \emph{encoder} has access to side information, namely the identity of the states from an ensemble generated by a source \cite{Jozsa1994_1,Barnum1996,Horodecki2000,Barnum2001_2,Bennett2005,Hayden2002,visible_Hayashi}.
For example, the source in the visible Schumacher compression \cite{Jozsa1994_1,Barnum1996}
is modeled by a classical-quantum state $\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^C \otimes \proj{x}^{R}$, where system $A$ is the system to be compressed, and
systems $C$ and $R$ are the side information system of the encoder and the reference system, respectively. It is shown that both the visible model and the \textit{blind} model, where the encoder
does not have access to system $C$, lead to the same compression rate, i.e. $S(A)$ \cite{Schumacher1995,Jozsa1994_1,Barnum1996}. This is no longer the case when system $A$ is composed of mixed states: the visible and blind models then lead to different compression rates.
In the visible mixed state compression problem, the source is modeled by many copies of the state $\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$ where system $A$ with mixed states is the system to be compressed, and
systems $C$ and $R$ are the side information system of the encoder and the reference system, respectively. Hayashi showed that the optimal compression rate is equal to the regularized entanglement of purification of the source \cite{visible_Hayashi} which is different from the
blind compression ($C$ is not available to the encoder) rate obtained by Koashi and Imoto \cite{KI2001,KI2002}.
The visible compression of this source when the encoder and decoder share unlimited entanglement is a special case of the remote state preparation considered in \cite{Bennett2005}, and the optimal quantum compression rate is equal to $\frac{1}{2}S(A)$.
Winter in his PhD thesis \cite{Winter1999} generalized the notion of correlated sources and side information at the \emph{decoder} to the quantum setting by modeling it as a multipartite quantum source which generates multipartite quantum states, where different parties have access
to some parts of a source. The first example studied in this context was a hybrid classical-quantum source $\rho^{ABR_1R_2}=\sum_x p(x) \proj{x}^A \otimes \proj{\psi_x}^{BR_1} \otimes \proj{x}^{R_2}$ where an encoder
compresses the classical system $A$, and a decoder aims to reconstruct this system
while having access to quantum side information system $B$ such that the correlations with the
reference systems $R=R_1R_2$ are preserved \cite{Winter1999,Devetak2003}.
This example is one of the earliest attempts to give an operational meaning to the quantum conditional entropy, in analogy to the classical conditional Shannon entropy which characterizes the optimal compression rate of a classical source with classical side information at the decoder side,
a.k.a. fully classical Slepian-Wolf problem \cite{Slepian1973}.
The compression of a purified source with side information at the decoder is known as
state merging or fully quantum Slepian-Wolf (FQSW) and its discovery was an important milestone in the quantum information field which gave an operational meaning to the quantum conditional entropy \cite{Horodecki2007,Abeyesinghe2009}; in this task, a source generates many copies of the state $\ket{\psi}^{ABR}$ where an encoder compresses system $A$ and sends it to a decoder who has access to system $B$ and aims to reconstruct system $A$ while preserving the correlations with the reference system $R$.
Depending on the communication resource considered, namely shared entanglement with free classical communication or quantum communication, the compression rate is equal to $S(A|B)$ ebits or $\frac{1}{2}I(A:R)$ qubits, respectively \cite{Horodecki2007,Abeyesinghe2009}.
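For orientation, the two rates are related by a simple entropy identity, valid for any pure state $\psi^{ABR}$ (using $S(AB)_{\psi}=S(R)_{\psi}$ and $S(AR)_{\psi}=S(B)_{\psi}$ for pure tripartite states):
\begin{align*}
S(A|B)_{\psi} &= S(AB)_{\psi}-S(B)_{\psi} = S(R)_{\psi}-S(B)_{\psi}\\
&= \tfrac{1}{2}\big(S(A)_{\psi}+S(R)_{\psi}-S(B)_{\psi}\big) - \tfrac{1}{2}\big(S(A)_{\psi}+S(B)_{\psi}-S(R)_{\psi}\big)\\
&= \tfrac{1}{2}I(A:R)_{\psi} - \tfrac{1}{2}I(A:B)_{\psi},
\end{align*}
so the ebit cost $S(A|B)$ equals the qubit cost $\frac{1}{2}I(A:R)$ minus $\frac{1}{2}I(A:B)$, the entanglement gained as a byproduct of the fully quantum protocol.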
An ensemble version of FQSW is considered in \cite{Ahn2006} with the source $\rho^{ABR}=
\sum_x p(x) \proj{\psi_x}^{AB}\otimes\proj{x}^R$ and $A$, $B$ and $R$ as the system to be compressed, the side information at the decoder and the reference system, respectively; the optimal quantum compression rate is
found for some special cases, but the problem has been left open in general.
A generalization of state merging, which is known as quantum state redistribution (QSR), is proposed in \cite{Devetak2008_2,Yard2009}, where
both encoder and decoder have access to side information systems. Namely, a source generates
many copies of the state $\ket{\psi}^{ACBR}$, where an encoder compresses system $A$ while having access to side information system $C$ and sends the compressed information to a decoder who has access to system $B$ and aims to reconstruct system $A$ while preserving the correlations with the reference system $R$; in this compression task systems $C$ and $B$ remain at the disposal
of the encoder and decoder, respectively. This gave an operational meaning to the quantum
conditional mutual information, since the optimal quantum compression rate was shown to be
$\frac{1}{2}I(A:R|C)$.
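As a sanity check (a direct specialization, not a new result), the QSR rate recovers the earlier ones:
\begin{align*}
\text{trivial } C:&\quad \tfrac{1}{2}I(A:R|C) = \tfrac{1}{2}I(A:R) \quad \text{(state merging / FQSW)},\\
\text{trivial } B \text{ and } C,\ \psi^{AR} \text{ pure}:&\quad \tfrac{1}{2}I(A:R) = \tfrac{1}{2}\big(S(A)+S(R)-S(AR)\big) = S(A),
\end{align*}
where the last equality uses $S(AR)=0$ and $S(R)=S(A)$ for a pure state $\psi^{AR}$, recovering Schumacher's rate.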
In the remainder of this section, we define mathematically the most general model for
the compression of quantum sources with side information which includes as special cases all the aforementioned side information problems of this section (considering block fidelity defined in Eq.~\ref{eq:block fidelity criterion side info problem}).
\bigskip
We consider a source that generates, in the asymptotic limit,
$n$ copies of a finite dimensional state $\rho^{ACBR}$, i.e.~$\rho^{A^n C^n B^n R^n} = \left(\rho^{ACBR}\right)^{\otimes n}$, and distributes the copies of the systems $AC$, $B$ and $R$ between an encoder, a decoder and an inaccessible reference system, respectively.
We assume that the encoder, Alice, and the decoder, Bob, share initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
The encoder, Alice, performs the encoding compression operation
$\mathcal{E}:A^n C^n A_0 \longrightarrow M_n \hat{C}^n A_0'$ on the system $A^n C^n$ and her part $A_0$ of the entanglement, which is a CPTP map.
Alice's encoding operation produces the state $\sigma^{M_n \hat{C}^n B^n R^n A_0' B_0}$ with $M_n$, $\hat{C}^n$, $A_0'$ and $B_0$ as the compressed system of Alice, a reconstruction of system $C^n$, Alice's new entanglement system and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M_n| \leq \abs{A}^n$.
The system $M_n$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M_n B^n B_0 \longrightarrow \hat{A}^n \hat{B}^n B_0'$ on the compressed information $M_n$, system $B^n$ and his part of the entanglement $B_0$ where $\hat{A}^n$, $\hat{B}^n$ and $B'_0$ are the reconstruction of systems
$A^n$, $B^n$ and Bob's new entanglement system, respectively.
In this task, the side information systems remain at the disposal of their corresponding
parties, that is the encoder and decoder respectively reconstruct systems $C^n$ and $B^n$ after
using them as side information.
Ideally the encoder and decoder want to distill entanglement in the form of maximally entangled
state $\Phi_L^{A'_0 B'_0}$ of dimension $L$ in their corresponding registers $A'_0$ and $B'_0$.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{"rho_ACBR_compression".jpg}
\caption{Circuit diagram of the compression task with side information: the source is composed of $n$ copies of the state $\rho^{ACBR}$ where
$A^n$ is the system to be compressed and $R^n$ is an inaccessible reference system; systems $C^n$ and $B^n$ are the side information available for the encoder and the decoder, respectively.
Dotted lines demarcate the domains controlled by the different participants: the reference, the encoder (Alice) and the decoder (Bob). The solid lines represent quantum information registers.
The encoder sends the compressed information, i.e. system $M_n$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_0$ and $B_0$, respectively.
The aim of the compression task is to reconstruct system $A^n$ at the decoder side while each party reconstructs its own corresponding side information as well, that is the final state $\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n}$ has the fidelity converging to 1 with the source state
$\rho^{A^n C^n B^n R^n}$; this ensures that the correlations between the reconstructed systems $\hat{A}^n \hat{C}^n \hat{B}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder and decoder distill entanglement in
their registers $A'_0$ and $B'_0$, respectively.
}
\label{fig: rho ACBR compression task}
\end{figure}
We call $\frac1n (\log K-\log L)$ and $\frac1n \log|M_n|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
We say the encoding-decoding scheme has \emph{block fidelity} $1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion side info problem}
F\left( \rho^{A^n C^n B^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'}=\left((\mathcal{D}\circ\mathcal{E})\otimes {\operatorname{id}}_{R^n}\right) \left(\rho^{A^n C^n B^n R^n } \otimes \Phi_K^{A_0 B_0}\right)$.
Moreover, we say that $(E_b,Q_b)$ is an (asymptotically) achievable block-error rate pair if there
exists a sequence of codes (encoders and decoders) such that, as $n \to \infty$, the block fidelity converges to $1$, and
the entanglement and quantum rates converge to $E_b$ and $Q_b$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
A schematic description of the source compression task with side information is illustrated in Fig.~\ref{fig: rho ACBR compression task}.
We also consider another figure of merit, which turns out to be easier to evaluate for side information problems;
we say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy-error side information problems}
\frac{1}{n}\sum_{j=1}^n F(\rho^{ACBR},\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}) \geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}=\Tr_{[n]\setminus j}\,\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^n}$, and `$\Tr_{[n]\setminus j}$' denotes the partial trace over all systems
with indices in $[n]\setminus j$.
Similarly, we say that $(E_c,Q_c)$ is an (asymptotically) achievable per-copy-error rate pair if there
exists a sequence of codes (encoders and decoders) such that, as $n \to \infty$, the per-copy fidelity converges to $1$, and
the entanglement and quantum rates converge to $E_c$ and $Q_c$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
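Note that the per-copy criterion is weaker than the block criterion (a one-line observation, included here for clarity): since tracing out the other copies and the entanglement registers is a CPTP map and the fidelity is non-decreasing under CPTP maps, for every $j$
\begin{align*}
F\big(\rho^{ACBR},\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}\big) \geq F\left( \rho^{A^n C^n B^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'} \right),
\end{align*}
so block fidelity $1-\epsilon$ implies per-copy fidelity at least $1-\epsilon$, and every achievable block-error rate pair is also an achievable per-copy-error rate pair.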
\begin{table}[!t]
\renewcommand{\arraystretch}{1.5}
\scriptsize
\begin{center}
\begin{tabular}{ | p{7.5cm} || p{3.5cm} | p{1.5cm} |}
\hline
\small{Source} & \small{$(0,Q_b^*)$} & \small{$(\infty,Q_b^*)$}
\\ \hline\hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1}
$\rho^{AR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^R$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1}
$\ket{\psi}^{AR}=\sum_x \sqrt{p(x)}\ket{\psi_x}^{A} \ket{x}^R$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
&$-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1,Barnum1996}
$\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{KI2001}
$\rho^{AR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^R$}
& \parbox[t]{3cm}%
{$S(CQ)_{\omega}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{visible_Hayashi}
$\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$\lim_{n \to \infty } \frac{E_p((\rho^{AC})^{\otimes n})}{n}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Bennett2005}
$\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$-$}
& \parbox[t]{2cm}{$\frac{1}{2}S(A)$}
\\ \hline
\parbox[t]{8cm}{ \cite{Winter1999,Devetak2003}
$\rho^{ABR_1R_2}=\sum_x p(x) \proj{x}^A \otimes \proj{\psi_x}^{BR_1} \otimes \proj{x}^{R_2}$}
& \parbox[t]{3cm}%
{$S(A|B)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Horodecki2007,Abeyesinghe2009}
$\ket{\psi}^{ABR}$}
& \parbox[t]{3.5cm}%
{$\max \{S(A|B)_{\rho}, \frac{1}{2}I(A:R)\}$ }
& \parbox[t]{2cm}{$\frac{1}{2}I(A:R)$}
\\ \hline
\parbox[t]{8cm}{ \cite{Ahn2006}
$\rho^{ABR}=\sum_x p(x) \proj{\psi_x}^{AB}\otimes\proj{x}^R$ }
& \parbox[t]{3.7cm}%
{solved for specific examples}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Devetak2008_2,Yard2009}
$\ket{\psi}^{ACBR}$}
& \parbox[t]{3.7cm}%
{$\max \{ S(A|B)_{\rho}$,$\frac{1}{2}I(A:R|C)\}$}
& \parbox[t]{2cm}{$\frac{1}{2}I(A:R|C)$} \\ \hline
\end{tabular}
\end{center}
\caption{A summary of the asymptotic source compression problems that have been studied in the literature so far. The rate pairs $(0,Q_b^*)$ and $(\infty,Q_b^*)$ denote the unassisted and entanglement-assisted qubit rates, respectively. Here $E_p(\cdot)$ denotes the entanglement of purification; moreover, $S(CQ)_{\omega}$ is the von Neumann entropy with respect to the Koashi-Imoto decomposition of the source; for more information see Chapter~\ref{chap:mixed state}.}
\label{table-of-sources}
\end{table}
\bigskip
The special cases of this general problem that have been addressed so far are summarized in Table~\ref{table-of-sources}. This general compression problem has a complex nature; for example,
consider the special case of the visible mixed state source of Hayashi \cite{visible_Hayashi} with classical reference $R$ and classical side information $C$ at the encoder, and no side information at the decoder, $B= \emptyset$, i.e. $\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$:
with no entanglement consumption, the optimal block-error rate pair $(0,Q_b^*)$ has $Q_b^*$ equal to the regularized entanglement of purification, whereas with free entanglement the optimal block-error rate pair is $(\infty,\frac{1}{2}S(A))$ \cite{Bennett2005}.
Therefore, it is insightful to first study the pairs $(0,Q_b^*)$ and $(\infty,Q_b^*)$
for some other special cases of the source $\rho^{ACBR}$.
Moreover, as we will show in the subsequent chapters, unlike the classical
scenario where the conditional entropy characterizes the classical compression rate,
for non-pure sources the quantum conditional entropy or mutual information
does not necessarily play a
role, and more complicated functions of the source determine the compression rate.
In section~\ref{sec:Summary of our results in quantum source compression}, we briefly
go through the special cases of the general source $\rho^{ACBR}$ with side information
which we address in this thesis and discuss the challenges of each particular case
in its corresponding chapter.
\section{Distributed noiseless quantum source compression}
This thesis mainly focuses on side information compression problems; however, in Chapter~\ref{chap:cqSW}, aside from a side information problem, we study the distributed compression of correlated classical-quantum sources. This motivates us to define a general distributed compression problem, of which the side information problem of Section~\ref{sec:side info intro} can be considered a special case, since the decoder can use successive decoding: it first decodes the information of one of the encoders and treats it as its own side information, and later decodes the information of the other encoder.
\bigskip
Here we define the problem for two encoders, however, the definition can be easily extended to three or more encoders.
We consider a source that generates, in the asymptotic limit,
$n$ copies of a finite dimensional state $\rho^{A_1C_1 A_2 C_2BR}$, i.e.~$\rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n} = \left(\rho^{A_1C_1A_2C_2BR}\right)^{\otimes n}$, and distributes the copies of the systems $A_1C_1$, $A_2C_2$, $B$ and $R$ between encoder 1, encoder 2, a decoder and an inaccessible reference system, respectively.
We assume that encoder 1, Alice, and encoder 2, Ava, initially share maximally
entangled states $\Phi_{K_1}^{A_{01}B_{01}}$ and $\Phi_{K_2}^{A_{02}B_{02}}$, of dimension $K_1$ and $K_2$ respectively, with the decoder, Bob.
The encoder $i$ ($i=1,2$) performs the encoding compression operation, i.e. the CPTP map
$\mathcal{E}_i:A_i^n C_i^n A_{0i} \longrightarrow M_{i_n} \hat{C}_i^n A_{0i}'$ on the systems $A_i^n C_i^n$ and the entanglement part $A_{0i}$.
The encoding operations are distributed in the sense that each encoder applies her own operation locally without having access to the information of the other encoder.
The dimensions of the compressed systems are without loss of
generality not larger than the dimensions of the
original sources, i.e. $|M_{i_n}| \leq \abs{A_i}^n$.
The systems $M_{i_n}$ ($i=1,2$) are then sent to Bob via a noiseless quantum channel, who performs
the decoding operation $\mathcal{D}:M_{1_n} M_{2_n} B^n B_{01} B_{02} \longrightarrow \hat{A_1}^n \hat{A_2}^n \hat{B}^n B_{01}' B_{02}'$ on the compressed information systems $M_{1_n} M_{2_n}$, system $B^n$ and his parts of the entanglement $B_{01} B_{02}$, where $\hat{A_1}^n \hat{A_2}^n$ and $\hat{B}^n$ are the reconstructions of systems
$A_1^n A_2^n$ and $B^n$, and $B'_{01} B'_{02}$ are Bob's new entanglement systems.
In this task, the systems $C_1^n$, $C_2^n$ and $B^n$ remain at the disposal of their corresponding
parties; that is, the encoders and the decoder reconstruct the systems $C_1^n$, $C_2^n$ and $B^n$, respectively, after
using them as side information.
Ideally the encoder $i$ ($i=1,2$) and the decoder aim to distill entanglement in the form of maximally entangled
state $\Phi_{L_i}^{A'_{0i} B'_{0i}}$ of dimension $L_i$ in their corresponding registers $A'_{0i}$ and $B'_{0i}$, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{"distributed_rho_ACBR_compression".jpg}
\caption{Circuit diagram of the distributed compression task: the source is composed of $n$ copies of the state $\rho^{A_1C_1 A_2C_2 BR}$ where
$A_i^n$ ($i=1,2$) is the system to be compressed and $R^n$ is an inaccessible reference system; systems $C_i^n$ and $B^n$ are the side information available for the encoder $i$ and the decoder, respectively.
Dotted lines demarcate the domains controlled by the different participants: the reference, the encoders, Alice and Ava, and the decoder, Bob. The solid lines represent quantum information registers.
The encoder $i$ sends the compressed information, i.e. system $M_{i_n}$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_{0i}$ and $B_{0i}$, respectively.
The aim of the compression task is to reconstruct systems $A_1^n$ and $A_2^n$ at the decoder side while each party reconstructs its own corresponding side information as well, that is the final state $\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n}$ has fidelity converging to $1$ with the source state
$\rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n}$; this ensures that the correlations between the reconstructed systems $\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder $i$ and the decoder distill entanglement in
their registers $A'_{0i}$ and $B'_{0i}$, respectively.
}
\label{fig: distributed rho ACBR compression task}
\end{figure}
We call $\frac1n (\log K_i-\log L_i)$ and $\frac1n \log|M_{i_n}|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively (for $i=1,2$).
Moreover, we say the encoding-decoding scheme has \emph{block fidelity} $1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion distributed problem}
F\left( \rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n } \otimes \Phi_{L_1}^{A'_{01} B'_{01}} \otimes \Phi_{L_2}^{A'_{02} B'_{02}},\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n A_{01}' B_{01}' A_{02}' B_{02}'} \right)
\geq 1-\epsilon,
\end{align}
where
\begin{align*}
&\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n A_{01}' B_{01}' A_{02}' B_{02}'}=\\
& \quad \quad \quad \quad \left((\mathcal{D}\circ(\mathcal{E}_1 \otimes \mathcal{E}_2))\otimes {\operatorname{id}}_{R^n}\right) \rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n } \otimes \Phi_{K_1}^{A_{01} B_{01}} \otimes \Phi_{K_2}^{A_{02} B_{02}}.
\end{align*}
Moreover, we say that $(E_{b_1},E_{b_2},Q_{b_1},Q_{b_2})$ is an (asymptotically) achievable block-error rate tuple if
there exists a sequence of codes (encoders and decoders) such that the block fidelity converges to $1$, and
the $i$th entanglement and quantum rates converge to $E_{b_i}$ and $Q_{b_i}$ for encoder $i$, respectively.
The rate region is the set of all achievable rate tuples, as a subset of
$\mathbb{R}\times \mathbb{R}\times\mathbb{R}_{\geq 0} \times\mathbb{R}_{\geq 0}$.
A schematic description of the distributed compression task is illustrated in Fig.~\ref{fig: distributed rho ACBR compression task}.
In chapter~\ref{chap:cqSW}, we consider block fidelity, however, the results follow for the per-copy fidelity as well which is defined as follows:
we say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy fidelity criterion distributed problem}
\frac{1}{n} \sum_{j=1}^n F\left( \rho^{A_{1} C_1 A_2 C_2 B R } ,\xi^{\hat{A_{1j}} \hat{C_{1j}} \hat{A_{2j}} \hat{C_{2j}} \hat{B} R } \right)
\geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A_{1j}} \hat{C_{1j}} \hat{A_{2j}} \hat{C_{2j}} \hat{B} R }=\Tr_{[n]\setminus j}\,\xi^{\hat{A_1}^n \hat{C_1}^n\hat{A_2}^n\hat{C_2}^n\hat{B}^nR^n}$, and `$\Tr_{[n]\setminus j}$' denotes the partial trace over all systems
with indices in $[n]\setminus j$.
Similarly, we say that $(E_{c_1},E_{c_2},Q_{c_1},Q_{c_2})$ is an (asymptotically) achievable per-copy-error rate tuple if
there exists a sequence of codes such that the per-copy fidelity converges to $1$, and
the $i$th entanglement and quantum rates converge to $E_{c_i}$ and $Q_{c_i}$ for encoder $i$, respectively.
The rate region is the set of all achievable rate tuples, as a subset of
$\mathbb{R}\times \mathbb{R}\times\mathbb{R}_{\geq 0} \times\mathbb{R}_{\geq 0}$.
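Both error criteria are phrased in terms of the fidelity $F$. Purely as a numerical illustration (not part of the formal development), and assuming the root-fidelity convention $F(\rho,\sigma)=\|\sqrt{\rho}\sqrt{\sigma}\|_1$, the quantity can be computed directly:

```python
import numpy as np

def psd_sqrt(rho):
    """Matrix square root of a positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)      # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.conj().T

def fidelity(rho, sigma):
    """Root fidelity F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1."""
    M = psd_sqrt(rho) @ psd_sqrt(sigma)
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))

maximally_mixed = np.eye(2) / 2
pure_0 = np.diag([1.0, 0.0])
pure_1 = np.diag([0.0, 1.0])

assert abs(fidelity(maximally_mixed, maximally_mixed) - 1.0) < 1e-9
assert fidelity(pure_0, pure_1) < 1e-9                      # orthogonal states
assert abs(fidelity(pure_0, maximally_mixed) - np.sqrt(0.5)) < 1e-9
```

The eigendecomposition-based square root avoids numerical issues with singular density matrices.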
\bigskip
In \cite{Savov_Mscthesis2007,Savov_distributed_2008}, the compression of a pure source $\ket{\psi}^{A_1A_2R}$ without side information is considered ($C_1,C_2,B=\emptyset$). The achievable rate region is the convex hull of various points, where the point corresponding to an encoder is achieved by applying fully quantum Slepian-Wolf (FQSW) compression to that encoder and treating the rest of the systems as a reference.
The converse bounds are in terms of
the multipartite squashed entanglement,
which is a measure of multipartite entanglement.
\section{Summary of our results in quantum source compression and discussion}
\label{sec:Summary of our results in quantum source compression}
In this section, we briefly explain the special cases of the problems defined in the previous sections that we address in this thesis.
Notice that in the subsequent chapters we do not necessarily respect the notation $A$, $C$, $B$ and $R$ for the system to be compressed, the side information at the encoder, the side information at the decoder and the reference system, however, we clearly define the task and specify the notation for the corresponding registers. Moreover, we specify whether the error criterion is block fidelity or per-copy fidelity.
\bigskip
In chapter \ref{chap:mixed state}, we consider the compression of a general mixed state source
$\rho^{AR}$ (no side information) and find the optimal trade-off between the entanglement and quantum rates, i.e. the pair $(E,Q)$.
\medskip
In chapter \ref{chap: E assisted Schumacher}, we unify the visible and blind Schumacher compression by considering an interpolation between them as side information, that is the source
$\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{c_x}^C \otimes \proj{x}^R$ with $A$, $C$ and $R$ as the system to be compressed, the side information at the encoder and the classical reference system, respectively. For this source, we find the optimal trade-off between the block-error entanglement and quantum rate pairs $(E_b,Q_b)$.
\medskip
In chapter \ref{chap:cqSW}, we consider quantum source compression with classical side information with the source $\rho^{AR_1BR_2}=\sum_x p(x) \proj{\psi_x}^{AR_1} \otimes \proj{x}^B \otimes \proj{x}^{R_2}$ and $A$, $B$ and $R=R_1 R_2$ as the system to be compressed, the side information at the decoder and the hybrid classical-quantum reference systems, respectively.
We study the entanglement assisted case $(\infty,Q_b)$ and the unassisted case
$(0,Q_b)$, and then the distributed scenario, considering block fidelity. We find achievable and converse bounds for each scenario and show that the two bounds match for the entanglement assisted quantum block-error rate $Q_b$, up to continuity of a function which appears in the bounds. Finally, considering per-copy fidelity, we find the optimal entanglement assisted quantum per-copy-error rate, i.e. the pair $(\infty,Q_c^*)$.
\medskip
In chapter \ref{chap:QSR ensemble}, we consider an ensemble generalization of the quantum state redistribution (QSR), i.e. the source $\rho^{ACBR}=\sum_x p(x) \proj{\psi_x}^{ACBR_1} \otimes \proj{x}^{R_2}$ with $A$, $C$, $B$ and $R=R_1R_2$ as the system to be compressed, the side information at the encoder, the side information at the decoder and the hybrid classical-quantum reference systems, respectively. We consider the free-entanglement scenario and find the optimal quantum per-copy-error rate, i.e. the pair $(\infty,Q_c^*)$.
With block fidelity, we find achievable and converse bounds which match up to continuity of a function appearing in the bounds.
\medskip
In summary, for a general mixed state we solve the problem when there is no side information; the rate region is expressed in terms of an extension of the decomposition of the source state discovered by Koashi and Imoto in \cite{KI2002}, which was later extended to general mixed states in \cite{Hayden2004}. However, for multipartite states this decomposition does not necessarily preserve the tensor structure over the various systems; this turns out to be the main hurdle in dealing with general mixed-state problems with side information.
This is not an issue for pure or ensemble sources, mainly because the structure of the maps
which preserve these states is well understood. For these sources, the environment systems of the encoding and decoding operations are decoupled from the reconstructed source, given the identity of the state from the ensemble. This property is one of the guiding intuitions behind the converse proofs for the side information problems.
\chapter{Asymptotic thermodynamics of multiple conserved quantities}
\label{chap:thermo}
As a thermodynamic theory, or even as a resource theory in general, the framework of transformations
by almost-commuting unitaries developed in the previous chapter does not appear to be the most fruitful: such transformations are reversible
and induce an equivalence relation among sequences of product states.
In particular, every point $(\und{a},s)$ of the phase diagram $\overline{\mathcal{P}}^{(1)}$
defines an equivalence class, namely of all state sequences with charges and
entropy converging to $\und{a}$ and $s$, respectively.
To make the theory more interesting, and more resembling ordinary thermodynamics,
including irreversibility as expressed in its first and second laws,
we now specialise to a setting considered in many previous papers on the resource theory
of thermodynamics, both with single and with multiple conserved quantities.
Specifically, we consider an asymptotic analogue of the setting proposed
in \cite{Guryanova2016} concerning the interaction of thermal baths with a
quantum system and batteries, where it was shown that the second law constrains
the combination of extractable charge quantities.
In \cite{Guryanova2016}, explicit protocols for state transformations to saturate
the second law are presented, that store each of several commuting charges in its
corresponding battery. However, for the case of non-commuting charges, one battery,
or a so-called reference frame, stores all different types of charges \cite{Halpern2016,Popescu2018}.
Only recently it was shown that reference frames for non-commuting charges
can be constructed, at least under certain conditions, which store the different
charge types in physically separated subsystems \cite{Popescu2019}.
Moreover, the size of the bath required to perform the transformations is not
addressed in these works, as only the limit of an asymptotically large bath was
considered.
We will address these questions in a similar setting but in the asymptotic regime,
where Theorem~\ref{Asymptotic equivalence theorem} provides the necessary and sufficient
condition for physically possible state transformations.
In this new setting, the \emph{asymptotic} second law constrains the combination of
extractable charges; we provide explicit protocols for realising transformations
satisfying the second law, where each battery can store its corresponding type of
work in the general case of non-commuting charges. Furthermore, we determine the
minimum number of thermal baths of a given type that is required to perform a transformation.
\section{System model, batteries and the first law}
\label{subsec:model}
We consider a system being in contact with a bath and suitable batteries,
with a total Hilbert space
$Q=S\otimes B\otimes W_1\otimes \cdots \otimes W_c$,
consisting of many non-interacting subsystems; namely, the work system, the thermal bath and
$c$ battery systems with Hilbert spaces ${S}$, ${B}$ and ${W}_j$ for
$j=1,\ldots,c$, respectively.
We call the $j$-th battery system the $j$-type battery as it is designed to absorb
$j$-type work.
The work system and the thermal bath have, respectively, the charges $A_{S_j}$ and $A_{B_j}$
for all $j$, but the $j$-type battery has only one nontrivial charge $A_{W_j}$, and all
its other charges are zero because it is meant to store only the $j$-th charge.
The total charge is the sum of the charges of the sub-systems $A_j=A_{S_j}+A_{B_j}+A_{W_j}$
for all $j$. Furthermore, for a charge $A$, let $\Sigma(A)=\lambda_{\max}(A)-\lambda_{\min}(A)$
denote the spectral diameter, where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are
the largest and smallest eigenvalues of the charge $A$, respectively.
We assume that the total spectral
diameter of the work system and the thermal bath is bounded by the spectral diameter of the
battery, that is $\Sigma(A_{S_j})+\Sigma(A_{B_j}) \leq \Sigma(A_{W_j})$ for all $j$; this
assumption ensures that the batteries can absorb or release charges for transformations.
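The spectral diameter assumption can be illustrated with a small numerical sketch (the charges below are toy choices, not from the model):

```python
import numpy as np

def spectral_diameter(A):
    """Sigma(A) = lambda_max(A) - lambda_min(A) of a Hermitian charge A."""
    ev = np.linalg.eigvalsh(A)   # eigenvalues in ascending order
    return ev[-1] - ev[0]

# toy charges: Pauli-Z on the work system, Pauli-X on the bath,
# and a battery charge with a deliberately wide spectrum
A_S = np.array([[1.0, 0.0], [0.0, -1.0]])
A_B = np.array([[0.0, 1.0], [1.0, 0.0]])
A_W = np.diag([5.0, -5.0])

# the battery spectrum is wide enough to absorb or release
# whatever charge the system and bath can exchange
assert spectral_diameter(A_S) + spectral_diameter(A_B) <= spectral_diameter(A_W)
```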
As we discussed in the previous chapter, the generalized thermal state $\tau(\und{a})$
is the state that maximizes the entropy subject to the
constraint that the charges $A_j$ have the values $a_j$.
This state is equal to $\frac{1}{Z}e^{-\sum_{j=1}^c \beta_j A_{j}}$ for real
numbers $\beta_j$ called inverse temperatures and chemical potentials;
each of them is a smooth function of charge values $a_1,\ldots,a_c$, and
$Z=\Tr e^{-\sum_{j=1}^c \beta_j A_{j}}$ is the generalized partition function.
Therefore, the generalized thermal state can be equivalently denoted $\tau(\und{\beta})$
as a function of the inverse temperatures, associated uniquely with the charge
values $\und{a}$.
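Purely as a numerical sanity check (the qubit charges and inverse temperatures below are toy assumptions, not part of the model), the generalized thermal state can be constructed directly from its defining formula:

```python
import numpy as np
from scipy.linalg import expm

# toy non-commuting charges on a qubit: sigma_z and sigma_x
A1 = np.array([[1, 0], [0, -1]], dtype=complex)
A2 = np.array([[0, 1], [1, 0]], dtype=complex)
betas = [0.7, 0.3]            # hypothetical inverse temperatures

H = betas[0] * A1 + betas[1] * A2
G = expm(-H)
Z = np.trace(G).real          # generalized partition function
tau = G / Z                   # generalized thermal state tau(beta)

# the charge values a_j = Tr[tau A_j] associated with these temperatures
a = [np.trace(tau @ A).real for A in (A1, A2)]

def entropy(rho):
    """von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

assert abs(np.trace(tau).real - 1.0) < 1e-12
assert 0.0 <= entropy(tau) <= np.log(2) + 1e-12
```

The smooth dependence of $\und{\beta}$ on $\und{a}$ mentioned above corresponds here to varying `betas` and recomputing `a`.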
We assume that the thermal bath is initially in a generalized thermal state
$\tau_b(\und{\beta})$, for globally fixed $\und{\beta}$.
This is because in \cite{Halpern2016} it was argued that these are precisely the
\emph{completely passive} states: by means of almost-commuting unitaries, no energy can be extracted
from them into a battery storing energy, while leaving all the other conserved quantities
unchanged, even when unlimited copies of the state are available.
We assume that the work system with state $\rho_s$ and the thermal bath are initially
uncorrelated, and furthermore that the battery systems can acquire only pure states.
Therefore, the initial state of an \emph{individual} global system $Q$
is assumed to be of the following form,
\begin{equation}
\label{eq:initial composite global}
\rho_{SBW_1\ldots W_c}
= \rho_S \ox \tau(\und{\beta})_B \ox \proj{w_1}_{W_1} \ox \cdots \ox \proj{w_c}_{W_c},
\end{equation}
and the final states we consider are of the form
\begin{equation}
\label{eq:final composite global}
\sigma_{SBW_1\ldots W_c}
= \sigma_{SB} \ox \proj{w_1'}_{W_1} \ox \cdots \ox \proj{w_c'}_{W_c},
\end{equation}
where $\rho_S$ and $\sigma_{SB}$ are states of the system and system-plus-bath, respectively,
and $w_j$ and $w_j'$ label pure states of the $j$-type battery before and after the transformation.
The notation is meant to convey the expectation value of the $j$-type work, i.e. $w_j^{(\prime)}$
is a real number and $\Tr \proj{w_j^{(\prime)}}A_{W_j} = w_j^{(\prime)}$.
The established resource theory of thermodynamics treats the batteries and the bath as
`enablers' of transformations of the system $S$, and we will show first and second laws
that express the essential constraints that any such transformation has to obey.
We start with the batteries. With the notations $\und{W} = W_1\ldots W_c$,
$\ket{\und{w}} = \ket{w_1}\cdots\ket{w_c}$, and $\ket{\und{w}'} = \ket{w_1'}\cdots\ket{w_c'}$,
let us look at a sequence $\rho^n = \rho_{S^n} = \rho_{S_1} \ox\cdots\ox \rho_{S_n}$ of initial
system states, and a sequence $\proj{\und{w}}^n = \proj{\und{w}_1}_{\und{W}_1} \ox\cdots\ox \proj{\und{w}_n}_{\und{W}_n}$
of initial battery states, recalling that the baths are initially all in the same thermal
state, $\tau_{B^n} = \tau(\und{\beta})^{\ox n}$; furthermore a sequence of target states
$\sigma^n = \sigma_{S^nB^n} = \sigma_{S_1B_1} \ox\cdots\ox \sigma_{S_nB_n}$ of the system and bath, and a
sequence $\proj{\und{w}'}^n = \proj{\und{w}_1'}_{\und{W}_1} \ox\cdots\ox \proj{\und{w}_n'}_{\und{W}_n}$
of target states of the batteries.
\begin{definition}
\label{definition:regular}
A sequence of states $\rho^n$ on any system $Q^n$ is called \emph{regular} if
its charge and entropy rates converge, i.e. if
\begin{align*}
a_j &= \lim_{n\rightarrow\infty} \frac1n \Tr \rho^n A_j^{(n)},\ j=1,\ldots,c, \text{ and} \\
s &= \lim_{n\rightarrow\infty} \frac1n S(\rho^n)
\end{align*}
exist. To indicate the dependence on the state sequence,
we write $a_j(\{\rho^n\})$ and $s(\{\rho^n\})$.
\end{definition}
\medskip
According to the AET and the other results of the previous chapter, every point $(\und{a},s)$
in the phase diagram $\overline{\mathcal{P}}^{(1)}$ labels an equivalence class of
regular sequences of product states under transformations by almost-commuting unitaries.
In the rest of the chapter we will essentially focus on regular sequences, so that
we can simply identify them, up to asymptotic equivalence, with a point in the phase
diagram. However, it should be noted that at the expense of clumsier expressions,
most of our expositions can be extended to arbitrary sequences of product states or
block-product states.
\medskip
Now, for regular sequences $\rho_{S^n}$ of initial states of the system and
final states of the system plus bath, $\sigma_{S^nB^n}$, as well as regular
sequences of initial and final battery states, $\proj{\und{w}}^n$ and $\proj{\und{w}'}^n$,
respectively, define the asymptotic rate of $j$-th charge change of the $j$-type battery as
\begin{equation}
\label{eq: W_j definition}
\Delta A_{W_j} := a_j(\{\proj{w_j'}^n\})-a_j(\{\proj{w_j}^n\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\proj{w_j'}^n-\proj{w_j}^n)A_{W_j}^{(n)}.
\end{equation}
Where there is no danger of confusion, we denote this number also as $W_j$,
the $j$-type work extracted (if $W_j < 0$, this means that the work $-W_j$ is done
on system $S$ and bath $B$).
Similarly, we define the asymptotic rate of $j$-th charge change of the work system
and the bath as
\begin{align*}
\Delta A_{S_j} &:= a_j(\{\sigma_{S^n}\})-a_j(\{\rho_{S^n}\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\sigma_{S^n}-\rho_{S^n})A_{S_j}^{(n)}, \\
\Delta A_{B_j} &:= a_j(\{\sigma_{B^n}\})-a_j(\{\tau(\und{\beta})_{B^n}\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\sigma_{B^n}-\tau(\und{\beta})_B^{\ox n})A_{B_j}^{(n)},
\end{align*}
where we denote $\sigma_{S^n} = \tr_{B^n} \sigma_{S^nB^n}$ and likewise
$\sigma_{B^n} = \tr_{S^n} \sigma_{S^nB^n}$.
\begin{theorem}[First Law]
\label{thm:first-law}
Under the above notations, if the regular sequences
$\rho_{S^nB^n\und{W}^n} = \rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \ox \proj{\und{w}}^n$
and $\sigma_{S^nB^n\und{W}^n} = \sigma_{S^nB^n} \ox \proj{\und{w}'}^n$
are equivalent under almost-commuting unitaries, then
\begin{align*}
s(\{\sigma_{S^nB^n}\}) &= s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})) \text{ and} \\
W_j &= -\Delta A_{S_j}-\Delta A_{B_j} \text{ for all } j=1,\ldots,c.
\end{align*}
Conversely, given regular sequences $\rho_{S^n}$ and $\sigma_{S^nB^n}$ of product
states such that
\[
s(\{\sigma_{S^nB^n}\}) = s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})),
\]
and assuming that the spectral diameter of the battery observables $A_{W_j}$ is large enough
(see the discussion at the start of this chapter), then there exist regular sequences of
product states of the $j$-type battery, $\proj{w_j}^n$ and $\proj{w_j'}^n$, for all
$j=1,\ldots,c$, such that
\begin{align}
\label{eq:initial}
\rho_{S^nB^n\und{W}^n} &= \rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \ox \proj{\und{w}}^n \text{ and} \\
\label{eq:final}
\sigma_{S^nB^n\und{W}^n} &= \sigma_{S^nB^n} \ox \proj{\und{w}'}^n
\end{align}
can be transformed into each other by almost-commuting unitaries.
\end{theorem}
\begin{proof}
The first part is by definition, since the almost-commuting unitaries
asymptotically preserve the entropy rate and the work rate of all
charges.
In the other direction, all we have to do is find states $\proj{w_j}$
and $\proj{w_j'}$ of the $j$-type battery $W_j$, such that
$W_j = \Delta A_{W_j} = -\Delta A_{S_j}-\Delta A_{B_j}$, for all $j=1,\ldots,c$.
This is clearly possible if the spectral diameter of $A_{W_j}$ is large enough.
With this, the states in Eqs. (\ref{eq:initial}) and (\ref{eq:final}) have the
same asymptotic entropy and charge rates.
Hence, the claim follows from the AET, Theorem~\ref{Asymptotic equivalence theorem}.
\end{proof}
\begin{remark}\normalfont
The second part of Theorem~\ref{thm:first-law} says that for regular product state
sequences, as long as the initial and final states of the work system and the thermal bath
have asymptotically the same entropy, they can be transformed into one another
because there are always batteries that can absorb or release the necessary charge difference.
Furthermore, we can even fix the initial (or final) state of the batteries and
design the matching final (initial) battery state, assuming that the charge
expectation value of the initial (final) state is far enough from the edge of the
spectrum of $A_{W_j}$.
\end{remark}
For any such states, we say that there is a \emph{work transformation}
taking one to the other, denoted
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$.
This transformation is always feasible, implicitly assuming the
presence of suitable batteries for all $j$-type works to balance the books.
\begin{remark}\normalfont
As a consequence of the previous remark, we now change our point of view of what a
transformation is. Of our complicated $S$-$B$-$\und{W}$ compound, we only
focus on $SB$ and its state, and treat the batteries as implicit. Since we insist
that batteries need to remain in a pure state, which thus factors off and
does not contribute to the entropy, and due to the above first law Theorem \ref{thm:first-law},
we can indeed understand everything that is going on by looking at how
$\rho_{S^nB^n}$ transforms into $\sigma_{S^nB^n}$.
\end{remark}
Note that in this context, it is in a certain sense enough that the initial states
$\rho_{S^n}$ form a regular sequence of product states and that the target states
$\sigma_{S^nB^n}$ form a regular sequence. This is because the first part of
the first law, Theorem \ref{thm:first-law}, only requires regularity, and
since the target state defines a unique point $(\und{a}',s')$ in the phase
diagram, we can find a sequence of product states $\widetilde{\sigma}_{S^nB^n}$ in
its equivalence class, and use the second part of Theorem \ref{thm:first-law}
to realise the work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \widetilde{\sigma}_{S^nB^n}$.
\section{The second law}
\label{subsec:secondlaw}
If the first law in our framework arises from focusing on the system-plus-bath
compound $SB$, while making the batteries implicit, the second law comes about
from trying to understand the action on the work system $S$ alone, through the
concomitant back-action on the bath $B$.
Following \cite{Guryanova2016,Halpern2016}, the second law constrains the different
combinations of commuting conserved quantities that can be extracted from the work
system. We show here that in the asymptotic regime, the second law similarly bounds
the extractable work rate via the rate of free entropy of the system.
The \emph{free entropy} for a system with state $\rho$, charges $A_j$ and inverse temperatures
$\beta_j$ is defined in \cite{Guryanova2016} as
\begin{align}
\label{free entropy}
\widetilde{F}(\rho) = \sum_{j=1}^c \beta_j \Tr \rho A_j - S(\rho).
\end{align}
It is shown in \cite{Guryanova2016} that the generalized thermal state
$\tau(\und{\beta})$ is the state that minimizes the free entropy for
fixed $\beta_j$.
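This minimization property can be checked numerically; in the following sketch the qubit charges and inverse temperatures are illustrative assumptions, and we verify that random states never have smaller free entropy than $\tau(\und{\beta})$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# toy qubit charges (illustrative choice): sigma_z and sigma_x
A1 = np.array([[1, 0], [0, -1]], dtype=complex)
A2 = np.array([[0, 1], [1, 0]], dtype=complex)
betas = np.array([0.7, 0.3])   # hypothetical inverse temperatures

def entropy(rho):
    """von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

def free_entropy(rho):
    """F~(rho) = sum_j beta_j Tr[rho A_j] - S(rho)."""
    charge_part = sum(b * np.trace(rho @ A).real
                      for b, A in zip(betas, (A1, A2)))
    return charge_part - entropy(rho)

# generalized thermal state for these inverse temperatures
G = expm(-(betas[0] * A1 + betas[1] * A2))
tau = G / np.trace(G).real

# random density matrices never beat the thermal state
for _ in range(200):
    X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = X @ X.conj().T
    rho /= np.trace(rho).real
    assert free_entropy(rho) >= free_entropy(tau) - 1e-9
```

The underlying reason is the identity $\widetilde{F}(\rho)-\widetilde{F}(\tau(\und{\beta}))=D(\rho\|\tau(\und{\beta}))\geq 0$, with $D$ the relative entropy.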
For any work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
between regular sequences of states,
we define the asymptotic rate of free entropy change for the work system and the
thermal bath respectively as follows:
\begin{equation}\begin{split}
\label{eq:free entropy rates}
\Delta\widetilde{F}_S
&:= \lim_{n \to \infty} \frac{1}{n}\left(\widetilde{F}(\sigma_{S^n})
-\widetilde{F}(\rho_{S^n}) \right), \\
\Delta\widetilde{F}_B
&:= \lim_{n \to \infty} \frac{1}{n}\left(\widetilde{F}(\sigma_{B^n})
-n \widetilde{F}(\tau_B)\right),
\end{split}\end{equation}
where the free entropy is with respect to the charges of the work system and the thermal
bath with fixed inverse temperatures $\beta_j$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law.jpg}
\end{center}
\caption{State change of the bath for a given work transformation under extraction of
$j$-type work $W_j$, viewed in the phase diagram of the bath $\overline{\cP}_B$.
The blue line represents the tangent hyperplane at the corresponding point
of the generalized thermal state $\tau(\und{\beta})_B$, $R$ is the number of copies
of the elementary baths in the proof of Theorem \ref{asymptotic second law},
and $F$ is the point corresponding to the final state of the bath.}
\label{fig:second-law}
\end{figure}
\begin{theorem}[Second Law]
\label{asymptotic second law}
For any work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
between regular sequences of states, the $j$-type works $W_j$ that are extracted
(and they are necessarily $W_j = -\Delta A_{S_j}-\Delta A_{B_j}$ according
to the first law) are constrained by the rate of free entropy change of the system:
\[
\sum_{j=1}^c \beta_j W_j \leq -\Delta\widetilde{F}_S.
\]
Conversely, for arbitrary regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and any real numbers $W_j$ with
$\sum_{j=1}^c \beta_j W_j < -\Delta\widetilde{F}_S$,
there exists a bath system $B$ and a regular sequence of product states
$\sigma_{S^nB^n}$ with $\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with accompanying extraction of $j$-type work at rate $W_j$.
This is illustrated in Fig.~\ref{fig:second-law}.
\end{theorem}
\begin{proof}
We start with the first statement of the theorem. Consider the global system transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$ by
almost-commuting unitaries.
We use the definition of work (\ref{eq: W_j definition}) and free
entropy (\ref{free entropy}), as well as the first law, Theorem \ref{thm:first-law},
to get
\begin{equation}
\label{work expansion formula}
\begin{split}
\sum_j \beta_j W_j &= -\sum_j \beta_j(\Delta A_{S_j}+\Delta A_{B_j})\\
&= -\Delta\widetilde{F}_S-\Delta\widetilde{F}_B-\Delta s_S -\Delta s_B .
\end{split}
\end{equation}
The second line follows from the definitions in Eqs. (\ref{free entropy}) and (\ref{eq:free entropy rates}), since $\sum_j \beta_j \Delta A_{S_j} = \Delta\widetilde{F}_S + \Delta s_S$, and likewise for the bath.
Now observe that
\begin{align}\label{eq: positive Delta_SB}
\Delta s_S +\Delta s_B
&= \lim_{n \to \infty} \frac1n \bigl(S(\sigma_{S^n})-S(\rho_{S^n})\bigr)
+ \frac1n \bigl(S(\sigma_{B^n})-nS(\tau(\und{\beta})_B)\bigr) \nonumber\\
&\geq \lim_{n \to \infty} \frac1n \bigl(S(\sigma_{{SB}^n})-S(\rho_{S^n})-S(\tau(\und{\beta})_B^{\ox n})\bigr)
= 0,
\end{align}
where the inequality is due to sub-additivity of von Neumann entropy, and the
final equation due to asymptotic entropy conservation.
Further, the generalized thermal state $\tau(\und{\beta})_B$ has
the minimum free entropy \cite{Guryanova2016}, hence $\Delta\widetilde{F}_B \geq 0$.
For the second statement of the theorem, the achievability part of the second law,
we aim to show that there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^n} \ox \xi_{B^n}$,
with a suitable regular sequence of product states $\xi_{B^n}$,
such that the works $W_1,\ldots,W_c$ are extracted.
This will be guaranteed, by the first law, Theorem \ref{thm:first-law},
and the AET, Theorem \ref{Asymptotic equivalence theorem}, if
\begin{equation}\begin{split}
s(\{\xi_{B^n}\}) &= S(\tau(\und{\beta})_B) - \Delta s_S, \\
a_j(\{\xi_{B^n}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \Delta A_{S_j} - W_j \quad \text{for all } j=1,\ldots,c.
\label{eq:state assumptions}
\end{split}\end{equation}
The left hand side here defines a point $(\und{a},s)$ in the charges-entropy
space of the bath, and our task is to show that it lies in the phase diagram,
for which purpose we have to define the bath characteristics suitably.
On the right hand side,
$\bigl(\Tr\tau(\und{\beta})_B A_{B_1}, \ldots, \Tr\tau(\und{\beta})_B A_{B_c}, S(\tau(\und{\beta})_B)\bigr)$
is the point corresponding to the initial state of the bath, which due to
its thermal nature is situated on the upper boundary of the region.
At that point, the region has a unique tangent hyperplane, which has the equation
$\sum_j \beta_j a_j-s = \widetilde{F}(\tau(\und{\beta})_B)$, and the phase diagram
is contained in the half space $\sum_j \beta_j a_j-s \geq \widetilde{F}(\tau(\und{\beta})_B)$,
corresponding to the fact that the free entropy of every state is at least that of the
thermal state. In fact, due to the strict concavity of the entropy, and hence
of the upper boundary of the phase diagram, the phase diagram, with the exception
of the thermal point $\bigl(\Tr \tau(\und{\beta})_B \underline{A_{B}}, S(\tau(\und{\beta})_B)\bigr)$
is contained in the open half space $\sum_j \beta_j a_j-s > \widetilde{F}(\tau(\und{\beta})_B)$.
One of many ways to construct a suitable bath $B$ is as several ($R\gg 1$) non-interacting
copies of an ``elementary bath'' $b$: $B=b^R$ and charges $A_{B_j}=A^{(R)}_{b_j}$, so that
the generalized thermal state of $B$ is $\tau(\und{\beta})_B = \tau(\und{\beta})_b^{\otimes R}$.
We claim that for large enough $R$, the left hand side of Eq. (\ref{eq:state assumptions})
defines a point in the phase diagram of $B$. Indeed, we can express the conditions
in terms of $b$, assuming that we aim for a regular sequence of product states
$\xi_{b^{nR}}$:
\begin{equation}\begin{split}
s(\{\xi_{b^{nR}}\}) &= S(\tau(\und{\beta})_b) - \frac1R \Delta s_S, \\
a_j(\{\xi_{b^{nR}}\}) &= \Tr \tau(\und{\beta})_b A_{b_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\label{eq:R-bath-assumptions}
\end{split}\end{equation}
For all sufficiently large $R$, these points $(\und{a},s)$ are arbitrarily close to
where the bath starts off, at
$(\und{a}_{\und{\beta}},s_{\und{\beta}})
= \bigl(\Tr\tau(\und{\beta})_b A_{b_1}, \ldots, \Tr\tau(\und{\beta})_b A_{b_c}, S(\tau(\und{\beta})_b)\bigr)$,
while always remaining in the open half space
$\sum_j \beta_j a_j-s > \widetilde{F}(\tau(\und{\beta})_b)$. Indeed,
they all lie on a straight line pointing from $(\und{a}_{\und{\beta}},s_{\und{\beta}})$
into the interior of that half space. Hence, for sufficiently large $R$,
$(\und{a},s) \in \overline{\cP}$, the phase diagram of $b$, and by
point 5 of Lemma~\ref{lemma:phase diagram properties} there does indeed exist
a regular sequence of product states corresponding to it.
\end{proof}
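The subadditivity step in Eq.~(\ref{eq: positive Delta_SB}) can be checked numerically on a random bipartite state (a toy sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(rho):
    """von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

def partial_trace(rho, keep, dims):
    """Reduced state of subsystem `keep` (0 or 1) of a bipartite state."""
    d0, d1 = dims
    r = rho.reshape(d0, d1, d0, d1)
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)

# random two-qubit density matrix for the system-bath compound
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho_SB = X @ X.conj().T
rho_SB /= np.trace(rho_SB).real

rho_S = partial_trace(rho_SB, 0, (2, 2))
rho_B = partial_trace(rho_SB, 1, (2, 2))

# subadditivity: S(SB) <= S(S) + S(B)
assert entropy(rho_SB) <= entropy(rho_S) + entropy(rho_B) + 1e-9
```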
\section{Finiteness of the bath: tighter constraints and negative entropy}
\label{subsec:finitebath}
In the previous two sections we have elucidated the traditional statements of
the first and second law of thermodynamics, as emerging in our resource theory.
In particular, the second law is tight, if sufficiently large baths are allowed
to be used.
Here, we specifically look at the second statement (achievability)
of the second law in the presence of an explicitly given, finite bath $B$. It will
turn out that typically, equality in the second law cannot be attained, only
up to a certain loss due to the finiteness of the bath. We also discover
a purely quantum effect whereby the system and the bath remain entangled after
effecting a certain state transformation, allowing quantum engines to perform
tasks impossible classically (i.e. with separable correlations).
The question we want to address is the following refinement of the one answered
in the previous section:
\begin{quote}
Given regular sequences $\rho_{S^n}$ and $\sigma_{S^n}$ of product states,
and numbers $W_j$, are there extensions $\sigma_{S^nB^n}$ of $\sigma_{S^n}$
forming a regular sequence of product states, such that the work
transformation $\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
is feasible, with accompanying extraction of $j$-type work at rate $W_j$?
\end{quote}
To answer it, we need the following \emph{extended phase diagram}.
For a given state $\sigma_S$ of the system $S$, and a bath $B$, define the
following set:
\begin{equation}
\cP^{(1)}_{|\sigma_S} := \left\{ \bigl(\Tr\xi_B A_1^{(B)},\ldots,\Tr\xi_B A_c^{(B)},S(B|S)_\xi\bigr) :
\xi_{SB} \text{ state with } \Tr_B\xi_{SB}=\sigma_S \right\},
\end{equation}
furthermore its $n$-copy version
\begin{align}
&\cP^{(n)}_{|\sigma_{S^n}}
:= \nonumber
\\ & \left\{\! \bigl(\Tr\xi_{B^n} A_1^{(B^n)}\!,\!\ldots\!,\!\Tr\xi_{B^n} A_c^{(B^n)}\!\!,S(B^n|S^n)_\xi\bigr)\! :
\xi_{S^nB^n} \text{ state with } \Tr_{B^n}\xi_{S^nB^n}\!=\sigma_S^{\otimes n} \!\right\}\!.
\end{align}
Finally, define the \emph{conditional entropy phase diagram} as
\begin{align}
& \overline{\cP}_{|s_0} := \overline{\cP}^{(1)}_{|s_0}
:= \nonumber\\
&\quad \left\{ \bigl(\und{a},s\bigr) : a_j = \Tr\xi_B A_j^{(B)},\,
-\min\{s_0,S(\tau(\und{a}))\} \leq s \leq S(\tau(\und{a}))
\text{ for a state } \xi_B \right\},
\end{align}
and likewise its $n$-copy version $\overline{\cP}^{(n)}_{|ns_0}$,
for a number $s_0$ (intended to be an entropy or entropy rate).
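For orientation, in the case of a single charge these bounds can be evaluated in closed form. The following sketch (purely illustrative code of our own, not from the source; it assumes a qubit bath with Hamiltonian $\mathrm{diag}(0,\omega)$, so that the maximum entropy state $\tau(a)$ at mean energy $a$ has binary entropy $S(\tau(a))=h(a/\omega)$, computed here in nats) evaluates the admissible range $-\min\{s_0,S(\tau(a))\} \leq s \leq S(\tau(a))$:

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy of the distribution (p, 1-p) in nats; h(0) = h(1) = 0."""
    p = np.clip(p, 1e-15, 1.0 - 1e-15)
    return float(-p * np.log(p) - (1.0 - p) * np.log(1.0 - p))

def conditional_entropy_range(a, s0, omega=1.0):
    """Admissible conditional entropies (lower, upper) at charge value a
    for a qubit bath with gap omega: the generalized Gibbs state tau(a)
    is diag(1 - a/omega, a/omega), so S(tau(a)) = h(a/omega)."""
    S_max = binary_entropy(a / omega)   # upper boundary S(tau(a))
    return -min(s0, S_max), S_max       # lower boundary is clipped at -s0

lo, hi = conditional_entropy_range(a=0.3, s0=np.log(2))
```

For a qubit, $\log|B| = \log 2$; when $s_0 \geq \log 2$ the lower boundary is $-S(\tau(a))$ throughout, while for smaller $s_0$ it is clipped at $-s_0$, matching the two shapes of the diagram.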
These concepts are illustrated in Fig.~\ref{fig:extended-phase-diagram}.
The relation between the sets, and the name of the latter, are explained in the
following lemma.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{extended-phase-diagram.jpg}
\end{center}
\caption{Schematic of the extended phase diagram $\overline{\cP}_{|s_0}$.
Depending on whether $s_0$ is smaller or larger than
$\log|B|$, the diagram takes either the left hand or the right hand
one of the above shapes.}
\label{fig:extended-phase-diagram}
\end{figure}
\begin{lemma}
\label{lemma:extended-phase-diagram}
With the previous notation, we have:
\begin{enumerate}
\item For all $k$, $\cP^{(k)}_{|\sigma_{S^k}} \subset \overline{\cP}^{(k)}_{|S(\sigma_{S^k})}$,
and the latter is a closed convex set.
\item For all $k$, $\overline{\cP}^{(k)}_{|ks_0} = k \overline{\cP}^{(1)}_{|s_0}$.
\item For a regular sequence $\{\sigma_{S^k}\}$ of product states with entropy rate
$s_0=s(\{\sigma_{S^k}\})$, every point in $\overline{\cP}_{|s_0}$ is arbitrarily well
approximated by points in $\frac1k \cP^{(k)}_{|\sigma_{S^k}}$ for all sufficiently large $k$.
I.e., $\displaystyle{\overline{\cP}_{|s_0} = \lim_{k\to\infty} \frac1k \cP^{(k)}_{|\sigma_{S^k}}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. We only have to convince ourselves that for a state
$\xi_{S^kB^k}$ with $\Tr_{B^k}\xi_{S^kB^k}=\sigma_{S^k}$,
\[
- \min\{S(\sigma_{S^k}),kS(\tau(\und{a}))\} \leq S(B^k|S^k)_\xi \leq k S(\tau(\und{a})),
\]
where $\und{a}=(a_1,\ldots,a_c)$ with $a_i = \frac1k \Tr\xi_{B^k} A_i^{(B^k)}$.
The upper bound follows from subadditivity, since
$S(B^k|S^k)_\xi \leq S(B^k)_\xi \leq k S(\tau(\und{a}))$.
The lower bound consists of two inequalities: first, by purifying $\xi$ to
a state $\ket{\phi} \in S^kB^kR$ and strong subadditivity,
$S(B^k|S^k)_\xi \geq S(B^k|S^kR)_\phi = - S(B^k)_\xi \geq - k S(\tau(\und{a}))$.
Secondly, $S(B^k|S^k)_\xi \geq -S(S^k)_\xi = -S(\sigma_{S^k})$.
2. Follows easily from the definition.
3. It is enough to show that the points of the minimum entropy diagram
\[
\overline{\cP}_{\min|s_0}
:= \left\{\bigl(\und{a},-\min\{s_0,S(\tau(\und{a}))\}\bigr)
: \Tr \xi_B A_j^{(B)} = a_j \text{ for a state } \xi_B \right\}
\]
can be approximated as claimed by an admissible $k$-copy state $\xi_{S^kB^k}$. This
is because the maximum entropy diagram $\overline{\cP}_{\max}^{(k)}$ is realized
by states $\vartheta_{S^kB^k} := \sigma_{S^k}\ox\tau(\und{a})_B^{\ox k}$, and by
interpolating the states, i.e. $\lambda\xi + (1-\lambda)\vartheta$ for $0\leq\lambda\leq 1$,
we can realize the same charge values $\und{a}$ with entropies in the
whole interval $[S(B^k|S^k)_\xi, k S(\tau(\und{a}))]$.
The approximation of $\overline{\cP}_{\min|s_0}$ can be proved by invoking
results from quantum Shannon theory, specifically quantum state merging,
in the form we need here, stated below as a lemma (Lemma \ref{lemma:marge}).
For this, consider a tuple $\und{a} \in \overline{\cP}_0$ and a purification
$\ket{\Psi} \in S^kB^kR^k$ of the state $\vartheta_{S^kB^k} = \sigma_{S^k}\ox\tau(\und{a})_B^{\ox k}$,
which can be chosen in such a way as to be a product state itself:
$\ket{\Psi} = \ket{\Psi_1}_{S_1B_1R_1}\ox\cdots\ox\ket{\Psi_k}_{S_kB_kR_k}$.
Now we distinguish two cases, depending on which of the entropies
$S(\sigma_{S^k})$ and $kS\bigl(\tau(\und{a})_B\bigr)$ is the smaller.
\begin{enumerate}[{(i)}]
\item $S(\sigma_{S^k}) \geq kS\bigl(\tau(\und{a})_B\bigr)$: We shall
construct $\xi_{S^kB^k}$ in such a way that $\xi_{S^k} = \sigma_{S^k}$ and
$\xi_{B^k} \approx \tau\bigl(\und{a}\bigr)_B^{\otimes k}$. To this end, choose
a pure state $\phi_{CR'}$ with entanglement entropy
$S(\phi_C) = \frac1k S(\sigma_{S^k})-S\bigl(\tau(\und{a})_B\bigr) + \frac12 \epsilon$,
and consider the state
$\widetilde{\Psi}^{S^kB^kC^kR^k{R'}^k} = \Psi_{S^kB^kR^k}\ox\phi_{CR'}^{\ox k}$.
Now we apply state merging (Lemma \ref{lemma:marge}) twice to
this state (which is a tensor product of $k$ systems), with a random
rank-one projector $P$ on the combined system $R^k{R'}^k$:
first, by splitting the remaining parties $S^k : B^kC^k$,
and second by splitting them $B^k : S^kC^k$.
By construction, in both bipartitions it is the solitary system
($S^k$ and $B^k$, resp.) that has the smaller entropy by at least $\frac12\epsilon k$,
showing that the post-measurement state $\widetilde{\xi}(P)_{S^kB^kC^k}$ with
high probability approximates the marginals of $\vartheta_{S^kB^k}$ on $S^k$
and on $B^k$ simultaneously.
Choose a typical subspace projector $\Pi$ of $\phi_C^{\ox k}$ with
$\log \rank \Pi \leq S(\sigma_{S^k})-k S\bigl(\tau(\und{a})_B\bigr) + \epsilon k$,
and let
\[
\ket{\xi(P)}_{S^kB^kC^k} := \frac1c (\openone_{S^kB^k}\Pi_{C^k})\ket{\widetilde{\xi}(P)},
\]
with a normalization constant $c$.
Merging and properties of the typical subspace imply that for sufficiently large $k$,
\begin{align}
\label{eq:xiP-S}
\frac12 \left\| \xi(P)_{S^k} - \sigma_{S^k} \right\|_1 &\leq \epsilon, \\
\label{eq:xiP-B}
\frac12 \left\| \xi(P)_{B^k} - \tau(\und{a})_B^{\ox k} \right\|_1 &\leq \epsilon.
\end{align}
Now, we invoke Uhlmann's theorem applied to purifications of $\sigma_{S^k}$ and
of $\xi(P)_{S^kB^k}$, together with the well-known relations between fidelity
and trace norm applied to Eq.~(\ref{eq:xiP-S}),
to obtain a state $\xi_{S^kB^k}$ with $\xi_{S^k} = \sigma_{S^k}$ and
$\frac12 \left\| \xi(P)_{S^kB^k} - \xi_{S^kB^k} \right\|_1 \leq \sqrt{\epsilon(2-\epsilon)}$,
thus by Eq.~(\ref{eq:xiP-B})
\[
\frac12 \left\| \xi_{B^k} - \tau(\und{a})_B^{\ox k} \right\|_1 \leq \epsilon + \sqrt{\epsilon(2-\epsilon)}.
\]
From the latter bound it follows that
\[
\left| \frac1k \tr \xi_{B^k}A_j^{(B^k)} - a_j \right|
\leq \|A_{B_j}\| \left(\epsilon + \sqrt{\epsilon(2-\epsilon)}\right).
\]
It remains to bound the conditional entropy:
\[\begin{split}
\frac1k& S(B^k|S^k)_\xi \\
&= \frac1k S\bigl(\xi_{S^kB^k}\bigr) - \frac1k S(\xi_{S^k}) \\
&\leq \frac1k S\bigl(\xi(P)_{S^kB^k}\bigr) \!- \!\frac1k S(\sigma_{S^k})
\!+\! \left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right) \\
&\leq \frac1k \log \rank\Pi \!-\! \frac1k S(\sigma_{S^k})
\!+\! \left(\! \epsilon \!+ \!\sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left( \!\epsilon \!+\! \sqrt{\!\epsilon(2-\epsilon)\!} \!\right) \\
&\leq \frac1k \left( S(\sigma_{S^k}) - kS\bigl(\tau(\und{a})\bigr) \right)
- \frac1k S(\sigma_{S^k})
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)\\
&\quad \quad \quad \quad \quad \quad \quad + h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) \\
&= -S\bigl(\tau(\und{a})\bigr)
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) ,
\end{split}\]
where in the second line we have used the Fannes inequality on the continuity
of the entropy \cite{Fannes1973,Audenaert2007}, with the binary entropy
$h(x)=-x\log x-(1-x)\log(1-x)$;
in the third line that $\xi(P)_{S^kB^k}$ has rank at most $\rank \Pi$;
and in the fourth line the upper bound on the latter rank by construction.
\item $S(\sigma_{S^k}) < kS\bigl(\tau(\und{a})_B \bigr)$: We shall
construct $\xi_{S^kB^k}$ such that $\xi_{S^k} = \sigma_{S^k}$ and
$\tr\xi_{B^k}A_j^{(B^k)} \approx \tr\tau\bigl(\und{a}\bigr)_B A_{B_j}$ for
all $j=1,\ldots,c$. Here, choose a pure state $\phi_{CR'}$ with entanglement entropy
$S(\phi_C) = \epsilon$, and define
$\widetilde{\Psi}^{S^kB^kC^kR^k{R'}^k} = \Psi_{S^kB^kR^k}\ox\phi_{CR'}^{\ox k}$.
Now we apply state merging (Lemma \ref{lemma:marge}) to
this state (which is a tensor product of $k$ systems), with a random
rank-one projector $P$ on the combined system $R^k{R'}^k$,
by splitting the remaining parties $S^k : B^kC^k$, which ensures that
$S^k$ has the smaller entropy by at least $\epsilon k$,
showing that the post-measurement state $\widetilde{\xi}(P)_{S^kB^kC^k}$ with
high probability approximates the marginal of $\vartheta_{S^kB^k}$ on $S^k$.
Proceed as before with a typical subspace projector $\Pi$ of $\phi_C^{\ox k}$,
such that now
$\log \rank \Pi \leq \epsilon k$,
and let
\(
\ket{\xi(P)}_{S^kB^kC^k} := \frac1c (\openone_{S^kB^k}\Pi_{C^k})\ket{\widetilde{\xi}(P)},
\)
with a normalization constant $c$.
Merging and properties of the typical subspace thus imply that for sufficiently large $k$,
\begin{equation}
\label{eq:xiPt-S}
\frac12 \left\| \xi(P)_{S^k} - \sigma_{S^k} \right\|_1 \leq \epsilon.
\end{equation}
Next we need to look at the charge values of $\xi(P)_{B^k}$.
Note that the expectation $\EE_P \xi(P)_{B^k}$ is approximately
equal to $\EE_P \widetilde{\xi}(P)_{B^k} = \tau(\und{a})_B^{\ox k}$.
It follows from \cite[{Lemma III.5}]{Hayden2006}, that if $k$ is sufficiently
large, then with high probability
\begin{equation}
\label{eq:xiPt-B-A}
\left| \tr \bigl(\xi(P)_{B^k} - \tau(\und{a})_B^{\ox k}\bigr) A_j^{(B^k)} \right| \leq \|A_{B_j}\| \epsilon
\quad \text{for all } j=1,\ldots,c.
\end{equation}
So we just focus on a good instance of $P$, where both
Eqs.~(\ref{eq:xiPt-S}) and (\ref{eq:xiPt-B-A}) hold. Now we proceed as
in the first case to find a state $\xi_{S^kB^k}$ with $\xi_{S^k} = \sigma_{S^k}$ and
$\frac12 \left\| \xi(P)_{S^kB^k} - \xi_{S^kB^k} \right\|_1 \leq \sqrt{\epsilon(2-\epsilon)}$,
using Uhlmann's theorem. Thus, as before we find
\[
\left| \frac1k \tr \xi_{B^k}A_j^{(B^k)} - a_j \right|
\leq \|A_{B_j}\| \left(\epsilon + \sqrt{\epsilon(2-\epsilon)}\right).
\]
Regarding the conditional entropy, we have, quite similarly to before,
\[\begin{split}
\frac1k &S(B^k|S^k)_\xi \\
&= \frac1k S\bigl(\xi_{S^kB^k}\bigr) - \frac1k S(\xi_{S^k}) \\
&\leq \frac1k S\bigl(\xi(P)_{S^kB^k}\bigr) \!- \!\frac1k S(\sigma_{S^k})
\!+\! \left( \!\epsilon \!+ \!\sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \!\right) \\
&\leq \frac1k \log 2^{\epsilon k} - \frac1k S(\sigma_{S^k})
+ \left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) \\
&\leq -\frac1k S(\sigma_{S^k})
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right).
\end{split}\]
\end{enumerate}
Since in both cases the conditional entropy is always
$\geq - \frac1k \min\left\{ S(\sigma_{S^k}),k S\bigl(\tau(\und{a})\bigr) \right\}$,
this concludes the proof.
\end{proof}
\medskip
\begin{lemma}[Quantum state merging \cite{Horodecki2007,SM_nature}]
\label{lemma:marge}
Given a pure product state
$\Psi_{A^nB^nC^n}=(\Psi_1)_{A_1B_1C_1}\ox\cdots\ox(\Psi_n)_{A_nB_nC_n}$,
such that $S(\Psi_{A^n})-S(\Psi_{B^n}) \geq \epsilon n$, consider a Haar
random rank-one projector $P$ on $C^n$. Then, for sufficiently large $n$
it holds except with arbitrarily small probability that the post-measurement state
\[
\psi(P)_{A^nB^n} = \frac{1}{\tr\Psi_{C^n}P}\tr_{C^n}\Psi(\openone_{A^nB^n}\ox P)
\]
satisfies $\frac12\| \psi(P)-\Psi_{A^nB^n}\|_1 \leq \epsilon$.
\hfill$\blacksquare$
\end{lemma}
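To make the measurement in the lemma concrete, here is a small numerical sketch (our own illustrative code, not from the source; at the small dimensions used here the approximation guarantee of the lemma is of course not yet in force). It constructs the post-measurement state $\psi(P)_{A^nB^n}$ for a Haar random rank-one projector $P$ on $C^n$:

```python
import numpy as np

def haar_rank_one_projector(d, rng):
    """Haar random rank-one projector |v><v| on C^d."""
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def post_measurement_state(psi_abc, dims, P):
    """psi(P)_{AB} = Tr_C[ |psi><psi| (1_{AB} x P) ] / Tr[psi_C P]
    for a pure state vector psi_abc on A x B x C, dims = (dA, dB, dC)."""
    dA, dB, dC = dims
    psi = psi_abc.reshape(dA * dB, dC)
    # sum_{c,d} psi[i,c] psi*[j,d] P[d,c]  (unnormalized post-measurement state)
    unnorm = np.einsum('ic,jd,dc->ij', psi, psi.conj(), P)
    return unnorm / np.trace(unnorm).real

rng = np.random.default_rng(0)
dA, dB, dC = 2, 2, 2
psi = rng.standard_normal(dA * dB * dC) + 1j * rng.standard_normal(dA * dB * dC)
psi /= np.linalg.norm(psi)
P = haar_rank_one_projector(dC, rng)
rho = post_measurement_state(psi, (dA, dB, dC), P)
```

At this scale one can only check the structural properties: $\psi(P)$ is a unit-trace, positive semidefinite state on $A^nB^n$ (in fact pure, since $P$ has rank one).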
\medskip
\begin{remark}\normalfont
While we have seen that the upper boundary of the extended phase diagram
$\overline{\cP}^{(k)}_{|S(\sigma_{S^k})}$ is exactly realized by points
in $\cP^{(k)}_{|\sigma_{S^k}}$, namely those corresponding to the tensor product
states $\sigma_{S^k} \ox \tau(\und{a})_B^{\ox k}$,
it seems unlikely that we can achieve the analogue for the lower boundary:
this would entail finding, for every (sufficiently large) $k$, a tensor product
state, or a block tensor product state, $\xi_{S^kB^k}$ with prescribed charge vector
$\und{a}$ on $B^k$, and $S(B^k|S^k)_\xi = -\min\{kS\bigl(\tau(\und{a})\bigr),S(\sigma_{S^k})\}$.
Now, for concreteness, consider the case that
$kS\bigl(\tau(\und{a})\bigr) \leq S(\sigma_{S^k})$, so that the conditional entropy
aimed for is $S(B^k|S^k)_\xi = -k S\bigl(\tau(\und{a})_B\bigr)$, which is the value
of a purification of $\tau(\und{a})_B^{\ox k}$. In particular, it would mean that
$S(\xi_{B^k}) = k S\bigl(\tau(\und{a})_B\bigr)$, and so -- recalling the
charge values and the maximum entropy principle -- it would follow that
$\xi_{B^k} = \tau(\und{a})_B^{\ox k}$.
However, from the equality conditions in strong subadditivity \cite{Hayden2004},
this in turn would imply that $\xi_{S^kB^k}$ is a probabilistic mixture of
purifications of $\tau(\und{a})_B^{\ox k}$ whose restrictions to $S^k$
are pairwise orthogonal. This would clearly put constraints on the spectrum
of $\sigma_{S^k}$ that are not generally met.
In the other case that $kS\bigl(\tau(\und{a})\bigr) > S(\sigma_{S^k})$, the
conditional entropy should be $S(B^k|S^k)_\xi = - S(\sigma_{S^k})$, and
since $\xi_{S^k} = \sigma_{S^k}$, this would necessitate a pure state
$\xi_{S^kB^k}$. Looking at the proof of Lemma \ref{lemma:extended-phase-diagram},
however, we see that it leaves quite a bit of manoeuvring space, so it
may or may not be possible to satisfy all charge constraints
$\tr \xi_{B^k}A_j^{(B^k)} = a_j$ ($j=1,\ldots,c$).
\end{remark}
\medskip
Coming back to our question, if a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$ is
feasible for regular sequences on the left hand side,
then the first law implies that
\begin{align*}
s(\{\sigma_{S^nB^n}\}) &= s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})) \text{ and} \\
W_j &= -\Delta A_{S_j}-\Delta A_{B_j} \\
&= a_j(\{\rho_{S^n}\}) - a_j(\{\sigma_{S^n}\})
+ a_j(\{\tau(\und{\beta})_{B^n}\}) - a_j(\{\sigma_{B^n}\}).
\end{align*}
When $\sigma_{S^n}$ and the $W_j$ are given, this constrains the possible
states $\sigma_{S^nB^n}$ as follows: for each $n$,
\begin{align*}
\frac1n S(B^n|S^n)_\sigma &\approx S(\tau(\und{\beta})) - \Delta s_S, \\
\frac1n \Tr \sigma_{B^n} A_{B_j}^{(n)} &\approx \Tr \tau(\und{\beta})_B A_{B_j}
- \Delta A_{S_j} - W_j,\quad \text{for all } j=1,\ldots,c.
\end{align*}
By Lemma \ref{lemma:extended-phase-diagram}, the left hand sides converge to the
components of a point in $\overline{\cP}_{|s(\{\sigma_{S^n}\})}$, so that a necessary
condition for the feasibility of the work transformation in question is that
\begin{equation}\begin{split}
\label{eq:conditional-point}
(\und{a},t) \in \overline{\cP}_{|s(\{\sigma_{S^n}\})}, \text{ with }
a_j &:= \Tr \tau(\und{\beta})_B A_{B_j} - \Delta A_{S_j} - W_j, \\
t &:= S(\tau(\und{\beta})) - \Delta s_S.
\end{split}\end{equation}
Again by Lemma \ref{lemma:extended-phase-diagram}, this is equivalent to
the tuple $\und{a}$ being contained in the set of joint quantum expectations of the
observables $A_{B_j}$, and
\[
-\min\left\{ s(\{\sigma_{S^n}\}),S\bigl(\tau(\und{a})\bigr) \right\} \leq t \leq S\bigl(\tau(\und{a})\bigr).
\]
The following theorem shows that this is also sufficient, when we allow blockings
of the asymptotically many systems.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law-finite.jpg}
\end{center}
\caption{State change of the bath for a given work transformation under the extraction of
$j$-type work $W_j$, viewed in the extended phase diagram of the bath. The bath
initially is in the thermal state $\tau(\und{\beta})_B$, and the blue line at the
corresponding point in the diagram represents the tangent hyperplane of the
diagram. The final states $\{\sigma_{S^nB^n}\}$ give rise to the point $F$ in
the extended diagram, whose charge values are those of $\{\sigma_{B^n}\}$,
while the entropy is $\frac1n S(B^n|S^n)_\sigma$.}
\label{fig:second-law-finite}
\end{figure}
\begin{theorem}[Second Law with fixed bath]
\label{thm:second-law-finite-bath}
For arbitrary regular sequences $\rho_{S^n}$ and $\sigma_{S^n}$ of product states,
a given bath $B$, and any real numbers $W_j$,
if there exists a regular sequence of block product states
$\sigma_{S^nB^n}$ with $\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with accompanying extraction of $j$-type work at rate $W_j$,
then Eq. (\ref{eq:conditional-point}) defines a point
$(\und{a},t) \in \overline{\cP}_{|s(\{\sigma_{S^n}\})}$.
Conversely, assuming additionally that $\sigma_{S^n} = \sigma_S^{\otimes n}$ is an i.i.d.~state,
if Eq. (\ref{eq:conditional-point}) defines a point
$(\und{a},t) \in \overline{\cP}_{|S(\sigma_S)}^0$ in the interior of the
extended phase diagram, then for every $\epsilon>0$
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with block product states $\sigma_{S^nB^n}$ such that
$\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, and with accompanying
extraction of $j$-type work at rate $W_j\pm\epsilon$.
This is illustrated in Fig.~\ref{fig:second-law-finite}.
\end{theorem}
\begin{proof}
We have already argued the necessity of the condition. It remains to show its
sufficiency.
Using Lemma \ref{lemma:extended-phase-diagram}, this is not hard: namely,
by its point 3, for sufficiently large $k$, $(\und{a},t) \in \overline{\cP}_{|S(\sigma_S)}$
is $\epsilon$-approximated by a point in $\frac1k \cP^{(k)}_{|\sigma_S^{\ox k}}$, i.e. there
exists a $\sigma_{S^kB^k}$ with $\tr_{B^k} \sigma_{S^kB^k} = \sigma_S^{\ox k}$
with $\frac1k S(B^k|S^k)_\sigma \leq t-\epsilon$ and $\frac1k \tr \sigma_{B^k} A_j^{(B^k)} \approx a_j$
for all $j=1,\ldots,c$. By mixing $\sigma$ with a small fraction of
$\bigl(\tau(\und{a})_B\ox\sigma_S\bigr)^{\ox k}$, we can in fact assume that
$\frac1k S(B^k|S^k)_\sigma = t$ while preserving $\frac1k \tr \sigma_{B^k} A_j^{(B^k)} \approx a_j$.
Now our target block product states will be
$\sigma_{S^nB^n} := \bigl(\sigma_{S^kB^k}\bigr)^{\ox \frac{n}{k}}$ for $n$ a multiple of $k$.
By construction, this sequence has the same entropy rate as
the initial regular sequence of product states $\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n}$,
so by the first law, Theorem \ref{thm:first-law}, and the
AET, Theorem \ref{Asymptotic equivalence theorem}, there is indeed a
corresponding work transformation with $j$-type work extracted
equal to $W_j\pm\epsilon$.
\end{proof}
\medskip
\begin{remark}\normalfont
One might object that tensor power target states are not general enough
in Theorem \ref{thm:second-law-finite-bath}, as we
had observed in the previous chapter that such states do not
generate the full phase diagram $\overline{\mathcal{P}}$ of the system $S$.
However, by considering blocks of $\ell$ systems $S^\ell$, we can
apply the theorem to block tensor power target states
$\sigma_{S^n} = \bigl(\sigma_1\ox\cdots\ox\sigma_\ell\bigr)^{\ox \frac{n}{\ell}}$,
and these latter are in fact a rich enough class to exhaust the entire
phase diagram $\overline{\mathcal{P}}$, when $\ell \geq \dim S$
(point 5 of Lemma \ref{lemma:phase diagram properties}).
More generally, we can allow as target \emph{uniformly regular} sequences of
product states $\sigma_{S^n}$, by which we mean the following strengthening
of the condition in Definition \ref{definition:regular}.
Denoting $S_{N+1}^{N+n} := S_{N+1}\ldots S_{N+n}$, we require that for all
$\epsilon > 0$ and uniformly for all $N$, it holds that for sufficiently large $n$,
\[
\left| a_j - \frac1n \Tr \sigma_{S_{N+1}^{N+n}} A_j^{(n)} \right|
\leq \epsilon \text{ for all } j=1,\ldots,c, \text{ and }
\left| s - \frac1n S(\sigma_{S_{N+1}^{N+n}}) \right| \leq \epsilon.
\]
\end{remark}
\section{Tradeoff between thermal bath rate and work extraction}
\label{subsec:bath-rate}
Here we consider a different take on the question of the work deficit due
to finiteness of the bath. Namely, we still consider a given fixed finite
bath system $B$, but now ask which state transformations and associated
generalized works are possible when, for each copy of the system $S$,
$R\geq 0$ copies of $B$ are present. It is clear what this means when
$R$ is an integer, but below we shall give meaning to this rate as a real number.
We start off with the observation that ``large enough bath'' in
Theorem \ref{asymptotic second law} can be taken to mean $B^R$, for the given
elementary bath $B$ and sufficiently large integer $R$.
\begin{theorem}
\label{thm:large-R-2nd-law}
For arbitrary regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and any real numbers $W_j$ with
$\sum_{j=1}^c \beta_j W_j < -\Delta\widetilde{F}_S$,
there exists an integer $R\geq 0$ and a regular sequence of product states
$\sigma_{S^nB^{nR}}$ with $\Tr_{B^{nR}}\sigma_{S^nB^{nR}} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$
with accompanying extraction of $j$-type work at rate $W_j$.
\end{theorem}
\begin{proof}
This was already shown in the achievability part of Theorem \ref{asymptotic second law}.
\end{proof}
\medskip
To give meaning to a rational rate $R = \frac{\ell}{k}$, group the systems of $S^n$,
for $n=\nu k$, into blocks of $k$, which we denote $\widetilde{S}=S^k$,
and consider $\rho_{S^n} \equiv \rho_{\widetilde{S}^\nu}$ as a $\nu$-party state,
and likewise $\sigma_{S^n} \equiv \sigma_{\widetilde{S}^\nu}$. For each
$\widetilde{S}=S^k$ we assume $\ell$ copies of the thermal bath,
$\tau(\und{\beta})_B^{\otimes \ell} = \tau(\und{\beta})_{\widetilde{B}}$,
with $\widetilde{B} = B^\ell$. If $\{\rho_{S^n}\}$ and $\{\sigma_{S^n}\}$ are
regular sequences of product states, then evidently so are
$\{\rho_{\widetilde{S}^\nu}\}$ and $\{\sigma_{\widetilde{S}^\nu}\}$.
Now, for the given sequences $\{\rho_{S^n}\}$ and $\{\sigma_{S^n}\}$ of initial
and final states, respectively, as well as works $W_1,\ldots,W_c$ satisfying
$\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$, $\delta \geq 0$,
we can ask what is the infimum over all rates $R = \frac{\ell}{k}$
such that there is a work transformation
\[
\rho_{S^n} \ox \tau(\und{\beta})_{B^{nR}}
\equiv \rho_{\widetilde{S}^\nu} \ox \tau(\und{\beta})_{\widetilde{B}}^{\ox \nu\ell}
\rightarrow
\sigma_{\widetilde{S}^\nu \widetilde{B}^{\nu\ell}}
\equiv \sigma_{S^nB^{nR}},
\]
where as before the final state is intended to satisfy
$\Tr_{\widetilde{B}^{\nu\ell}} \sigma_{\widetilde{S}^\nu \widetilde{B}^{\nu\ell}} = \sigma_{\widetilde{S}^\nu}$.
We observe that if $S(\rho_{S^n})=S(\sigma_{S^n})$ and $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S$,
then the work transformation is possible without
using any thermal bath, which follows from Eq.~(\ref{work expansion formula}).
That is, the thermal bath is not necessary for extracting work if the entropy of the work
system does not change.
Conversely, the role of the thermal bath is precisely to facilitate changes
of entropy in the work system.
To answer the above question about the minimum bath rate $R^*$,
we first show the following lemma.
\begin{lemma}
Consider regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and real numbers $W_j$, and assume
that for large enough rate $R$ there is a work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$,
with $\sigma_{S^n}$ as the reduced final state on the work system,
in which the works $W_1,\ldots,W_c$ are extracted. Then there is another work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^n}\ox\xi_{B^{nR}}$,
in which the final state of the work system and the thermal bath are uncorrelated,
$\xi_{B^{nR}}$ is a regular sequence of product states,
and the same works $W_1,\ldots,W_c$ are extracted.
\end{lemma}
\begin{proof}
Assuming that $\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$ is a work transformation, the second law implies that $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$ for some $\delta \geq 0$, and we obtain
\begin{equation}\label{eq: coordinates_P_1}
\begin{split}
s(\{\sigma_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S+\frac{\delta'}{R}, \\
a_j(\{\sigma_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\end{split}\end{equation}
for some $0 \leq \delta' \leq \delta$. Here, the first equality is due to the fact that $\Delta\widetilde{F}_B+\Delta s_S +\Delta s_B =\delta$, as seen in Eq.~(\ref{work expansion formula}), together with the positivity of the entropy rate change from Eq.~(\ref{eq: positive Delta_SB}); the second equality follows from the first law, Theorem \ref{thm:first-law},
and the AET, Theorem \ref{Asymptotic equivalence theorem}.
If $R$ is large enough, then due to the convexity of the phase diagram $\overline{\mathcal{P}}_B^{(1)}$ of the thermal bath, the following coordinates belong to the phase diagram as well:
\begin{equation}\label{eq: coordinates_P_2}
\begin{split}
s(\{\xi_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S, \\
a_j(\{\xi_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\end{split}\end{equation}
Therefore, due to points 3 and 5 of Lemma~\ref{lemma:phase diagram properties},
there is a tensor product state $\xi_{B^{nR}}$ with the coordinates of
Eq.~(\ref{eq: coordinates_P_2}) in $\overline{\cP}_B^{(1)}$. Hence the first law,
Theorem~\ref{thm:first-law}, implies that the desired transformation exists, and
works $W_1,\ldots,W_c$ are extracted.
\end{proof}
\medskip
\begin{theorem}
\label{thm:optimal-rate}
For regular sequences of product states, $\rho_{S^n}$ and $\sigma_{S^n}$,
and real numbers $W_j$ satisfying $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$ for some $\delta \geq 0$,
let $R^*$ be the infimum of rates such that there is a work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^n}\ox\xi_{B^{nR}}$
under which works $W_1,\ldots,W_c$ are extracted, and
$\xi_{B^{nR}}$ is a regular sequence of product states.
Then, this infimum $R^*$ is attained for a state $\xi_{B^{nR}}$ on the boundary of the
phase diagram $\overline{\mathcal{P}}_B$ of the thermal bath. Indeed, it is the point
where the line given by Eq.~(\ref{eq:R-bath-assumptions}) intersects the boundary of
the phase diagram; see Fig.~\ref{fig:optimal-rate}.
Equivalently, it is the smallest $R$ such that the point in
Eq.~(\ref{eq:R-bath-assumptions}) is contained in $\overline{\mathcal{P}}_B$.
For $\delta \ll 1$, the minimum rate can be written as
\begin{equation}
\label{eq:rate-vs-heatcapacity}
R \approx -\frac{1}{2\delta} \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(\Delta A_{S_i}+W_i)(\Delta A_{S_j}+W_j),
\end{equation}
where $\Delta A_{S_j} = a_j(\{\sigma_{S^n}\}) - a_j(\{\rho_{S^n}\})$.
\end{theorem}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law-optimalrate.jpg}
\end{center}
\caption{Graphical illustration of $R^*$, the minimum bath rate for a
work transformation $\{\rho_{S^n}\} \rightarrow \{\sigma_{S^n}\}$
satisfying the second law, according to Theorem~\ref{thm:optimal-rate}.
The initial state is the generalized thermal state $\tau(\und{\beta})$, its
corresponding point marked on the upper boundary of the phase diagram. The final
bath states correspond to points on the line denoted $f$, and they
are feasible if and only if they fall into the phase diagram.
Consequently, $F^*$ is the point corresponding to the minimum rate.}
\label{fig:optimal-rate}
\end{figure}
\begin{proof}
The final state of the thermal bath $\xi_{{B}^{ nR}}$ is a tensor product state, so
the first law, Theorem~\ref{thm:first-law},
and the AET, Theorem~\ref{Asymptotic equivalence theorem} imply that
\begin{equation}\label{eq:coordinates}
\begin{split}
s(\{\xi_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S, \\
a_j(\{\xi_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c,
\end{split}\end{equation}
where $\Delta s_{S} = s(\{\sigma_{S^n}\}) - s(\{\rho_{S^n}\})$.
Due to point 3 of Lemma~\ref{lemma:phase diagram properties},
the above coordinates belong to $\overline{\mathcal{P}}_B^{(1)}$.
For $R=R^*$, assume that the above coordinates define a point $(\und{a},s)$ on the boundary
of the phase diagram $\overline{\mathcal{P}}^{(1)}_B$. Then, for $R>R^*$ the point
of Eq.~(\ref{eq:coordinates}) is a convex combination of the points
$(\und{a},s)$ and the corresponding point of the state $\tau(\und{\beta})_B$,
so it belongs to the phase diagram due to its convexity. Therefore, all points with $R>R^*$
are inside the diagram.
To approximate the minimum $R$ for small $\delta$, define the function
$S(\und{a}):=S(\tau(\und{a})_B)$ for $\und{a}=(a_1,\ldots,a_c)$.
Its Taylor expansion around the point $\und{a}^0$ corresponding to the
initial thermal state $\tau(\und{\beta})_B = \tau(\und{a}^0)_B$ of the bath
gives the approximation
\begin{equation}
\label{eq:S-taylor}
S(\und{a}) \approx S(\und{a}^0) + \sum_j \beta_j (a_j-a_j^0)
+ \frac12 \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(a_j-a_j^0) (a_i-a_i^0),
\end{equation}
where we have used the well-known relation $\frac{\partial S}{\partial a_i}=\beta_i$.
From Eq.~(\ref{eq:coordinates}), we obtain
\begin{align*}
S(\und{a})-S(\und{a}^0) &= -\frac{\Delta s_S}{R},\\
a_j-a_j^0 &= \frac{1}{R} (-\Delta A_{S_j}-W_j),
\end{align*}
and by substituting these values in the Taylor approximation (\ref{eq:S-taylor}),
using the definition of the free entropy and of the deficit $\delta$,
we arrive at the claimed Eq. (\ref{eq:rate-vs-heatcapacity}).
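In more detail, inserting these differences into Eq.~(\ref{eq:S-taylor}) and multiplying by $R$ gives
\[
-\Delta s_S = -\sum_j \beta_j (\Delta A_{S_j}+W_j)
+ \frac{1}{2R} \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(\Delta A_{S_i}+W_i)(\Delta A_{S_j}+W_j);
\]
since $\Delta\widetilde{F}_S = \sum_j \beta_j \Delta A_{S_j} - \Delta s_S$ and
$\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$, the terms linear in $\Delta A_{S_j}$
and $W_j$ combine with the left hand side to $-\delta$, so that
\[
-\delta = \frac{1}{2R} \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(\Delta A_{S_i}+W_i)(\Delta A_{S_j}+W_j),
\]
which upon solving for $R$ is Eq.~(\ref{eq:rate-vs-heatcapacity}).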
\end{proof}
\medskip
\begin{remark}\normalfont
For a single charge, $c=1$, which we traditionally interpret as the internal
energy $E$ of a system, Eq.~(\ref{eq:rate-vs-heatcapacity}) takes on the very
simple form
\[
R \approx -\frac{1}{2\delta} \frac{\partial\beta}{\partial E}(\Delta E_S+W)^2.
\]
Here we can use the usual thermodynamic definitions to rewrite
$\frac{\partial\beta}{\partial E} = \frac{\partial\frac{1}{T}}{\partial E} = -\frac{1}{T^2}\frac{1}{C}$,
with the heat capacity $C = \frac{\partial E}{\partial T}$, all derivatives taken with
respect to corresponding Gibbs equilibrium states. Thus,
\begin{equation}
R \approx \frac{1}{T^2}\frac{1}{C}\cdot\frac{1}{2\delta}(\Delta E_S+W)^2,
\end{equation}
yielding a clear operational interpretation of the heat capacity in terms
of the bath rate required to saturate the second law tightly.
For larger numbers of charges, the matrix
$\bigl[ \frac{\partial\beta_j}{\partial a_i} \bigr]_{ij}
= \bigl[ \frac{\partial^2 S}{\partial a_i\partial a_j} \bigr]_{ij}$ is actually
the Hessian of the entropy $S\bigl(\tau(\und{a})_B\bigr)$ with respect to the charges,
and the right-hand side of Eq.~(\ref{eq:rate-vs-heatcapacity}) is $\frac{1}{2\delta}$
times the corresponding quadratic form evaluated on the vector
$(\Delta A_{S_1}+W_1,\ldots,\Delta A_{S_c}+W_c)$. Note that by the strict
concavity of the generalized Gibbs entropy, the Hessian is a negative definite
symmetric matrix, which explains the minus sign in Eq.~(\ref{eq:rate-vs-heatcapacity}).
In the same vein as the single-parameter discussion before, the Hessian matrix can
be read as being composed of generalized heat capacities, which likewise receive
their operational interpretation in terms of the required rate of the bath.
\end{remark}
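The single-charge relations used above, $\frac{\partial S}{\partial E}=\beta$ and $\frac{\partial\beta}{\partial E}=-\frac{1}{T^2}\frac{1}{C}$, are easy to spot-check numerically. The following minimal sketch is not part of the formal development; it assumes a hypothetical two-level bath with gap $\varepsilon=1$ and verifies both relations by finite differences:

```python
import numpy as np

EPS = 1.0  # hypothetical two-level gap, setting the energy scale

def excited_population(beta):
    # Gibbs weight of the excited level of a two-level system
    return np.exp(-beta * EPS) / (1.0 + np.exp(-beta * EPS))

def energy(beta):
    return EPS * excited_population(beta)

def entropy(beta):
    p = excited_population(beta)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

beta0, h = 1.0, 1e-6
dE = (energy(beta0 + h) - energy(beta0 - h)) / (2 * h)   # dE/dbeta
dS = (entropy(beta0 + h) - entropy(beta0 - h)) / (2 * h)  # dS/dbeta

# dS/dE = beta, the relation used in the Taylor expansion of S
assert abs(dS / dE - beta0) < 1e-4

# heat capacity C = dE/dT with T = 1/beta, i.e. C = -beta^2 * dE/dbeta,
# so dbeta/dE = 1/(dE/dbeta) should coincide with -1/(T^2 C)
T = 1.0 / beta0
C = -beta0**2 * dE
assert abs(1.0 / dE - (-1.0 / (T**2 * C))) < 1e-6
```

Both assertions pass, confirming the sign bookkeeping behind the minus sign in the rate formula ($dE/d\beta<0$, hence $C>0$).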
\section{Discussion}
The traditional framework of thermodynamics assumes that a system containing an asymptotically large number of particles interacts with an even larger bath, so that all thermodynamic quantities of interest, e.g.~energy and entropy, can be expressed in terms of average values. The notion of temperature also remains meaningful there, since any exchange of energy hardly drives the considerably larger bath away from equilibrium. Quantum thermodynamics attempts to go beyond this assumption: for instance, the system interacting with a large bath may contain only a few quantum particles. In that case, average quantities no longer suffice to characterize the system, as there may be large quantum fluctuations that cannot be ignored. To address this issue, the resource theory of quantum thermodynamics was developed; it shows that the classical laws are not sufficient to characterize thermodynamic transformations, and that one rather needs many second laws, associated with one-shot free energies based on R\'enyi $\alpha$-relative entropies \cite{Horodecki2011,Brandao2015}. However, this formalism is still not enough to study the situation where a quantum system interacts with a bath of comparable size; there, the very notion of temperature is questionable, as the bath may be driven out of equilibrium by an interaction with the system. To address this, a resource theory based on information conservation was developed \cite{Sparaciari2016, brl19}, which is applicable only in the regime of asymptotically many system-bath composites; this in turn allows one to treat the system and the bath on the same footing.
Here we have developed a resource-theoretic formalism applicable to a more general scenario, in which a system with multiple conserved quantities (i.e.~charges) interacts with a bath, and the system and bath may be of comparable size. These charges need not commute with each other, as allowed by quantum mechanics. Their non-commutativity implies that no (unitary) evolution can strictly conserve all charges simultaneously. We overcome this problem by using the notion of approximate microcanonical ensembles, originally developed in \cite{Halpern2016}; this is an essential requirement and forms the basis of the (approximate) first law of thermodynamics with non-commuting charges. With this, we have developed a resource theory of work and heat for thermodynamics with non-commuting charges. We introduce the charge-entropy diagram, which conceptually captures all the essential aspects of the thermodynamics, and prove an equivalence theorem showing the thermodynamic equivalence of quantum states sharing the same point on the diagram. Using the diagram, we then derive the second law, characterizing state transformations and quantifying thermodynamic resources such as the works corresponding to the different charges. We have also considered the situation where the bath is finite and quantified the rate of state transformations; interestingly, this rate turns out to be directly linked to the generalized heat capacity of the bath. All of this is then extended to the case where the system is (quantum) correlated with the bath: there, the charge-entropy diagram is expressed in terms of the conditional entropy of the bath, which may become negative in the presence of entanglement, and the second law is derived accordingly.
\begin{comment}
We apply the resource theory that we have developed to understand quantum thermodynamics with
multiple conserved quantities. We specifically consider an asymptotic generalization of
the setting proposed in \cite{Guryanova2016} where there are many copies of a global
system consisting of a main system, called a work system, a thermal bath with fixed
temperatures and various batteries to store the charges of the system. Therefore, the
objects and allowed operations of the resource theory translate to quantum states of a
thermodynamics system and thermodynamical transformations, respectively. It is evident
that the allowed operations can transform a state with a tensor product structure to
a state of a general form; however, we show that restricting the final states to the
specific form of tensor product structure does not reduce the generality of the bounds
that we obtain which follows from the fact that for any point of the phase diagram there
is state with tensor product structure.
As discussed in \cite{Guryanova2016}, for a system with multiple charges the free entropy is
a conceptually more meaningful quantity than the free energy which is originally defined when energy is the only conserved quantity of the system. Namely,
the free energy bounds the amount of energy
that can be extracted; however, for a system with multiple charges there are not
various quantities that bound the extraction of individual charges; rather
there is only a bound on the trade-off between the charges that can be extracted which is precisely the free entropy defined with respect to the temperatures of the thermal bath.
We show that indeed this is the case in our scenario as well and formulate the second law: the amount of charge combination that is extracted is bounded by the free entropy change of the work system per number of copies of the work system, i.e., the free entropy rate change.
Conversely, we show that all transformations
with given extracted charge values,
with a combination bounded by the free entropy rate change of the work system, are feasible.
As a result, any amount of a given charge, or the so-called work type, is extractable
providing that sufficient amounts of other charges are injected to the system.
This raises the following fundamental question: for given extractable charge values, with
a combination saturating the second law up to a deficit $\delta$, what is the minimum number
of the thermal baths per number of the copies of the work system.
We define this ratio as the thermal bath rate.
We find that for large thermal bath rates the optimal value is inversely
proportional to the deficit $\delta$, and there is always a corresponding
transformation where the final state of the work system and the thermal bath are uncorrelated.
However, in general this is not true: the minimum rate might be obtained where the final state correlates
the work system and the thermal bath.
This is a purely quantum effect, that certain work transformations are possible
with a smaller size of the thermal bath than would be possible classically; it
relies on work system and bath becoming entangled.
Then, in order to find the optimal rate, we define the conditional entropy phase diagram
which depends on a given conditional state.
\end{comment}
\section*{\Large \sffamily Abstract}
This thesis addresses problems in the field of quantum information theory, specifically quantum Shannon theory. The first part of the thesis opens with concrete definitions of general quantum source models and their compression; each subsequent chapter then addresses the compression of a specific source model as a special case of the initially defined general models. First, we find the optimal compression rate of a general mixed-state source, which includes as special cases all previously studied models, such as Schumacher’s pure and ensemble sources and other mixed-state ensemble models. For an interpolation between the visible and blind Schumacher ensemble models, we find the optimal compression rate region for the entanglement and quantum rates. Next, we comprehensively study the classical-quantum variation of the celebrated Slepian-Wolf problem and find the optimal rates under per-copy fidelity; under block fidelity we find single-letter achievable and converse bounds that match up to the continuity of a function appearing in the bounds. The first part closes with a chapter on the ensemble model of quantum state redistribution, for which we find the optimal compression rate under per-copy fidelity, as well as single-letter achievable and converse bounds matching up to the continuity of a function appearing in the bounds.
The second part of the thesis revolves around an information-theoretic perspective on quantum thermodynamics. We start with a resource-theory point of view of a quantum system with multiple non-commuting charges, where the objects and allowed operations are thermodynamically meaningful; using tools from quantum Shannon theory, we classify the objects and find explicit quantum operations that map objects of the same class to one another. Subsequently, we apply this resource-theory framework to study a traditional thermodynamic setup with multiple non-commuting conserved quantities, consisting of a main system, a thermal bath and batteries to store the various conserved quantities of the system. We state the laws of thermodynamics for this system and show that a purely quantum effect occurs in some of its transformations: certain transformations are feasible only if there are quantum correlations between the final state of the system and the thermal bath.
\newpage
\selectlanguage{catalan}
\section*{\Large \sffamily Resum}
Aquesta tesi aborda problemes en el camp de la teoria de la informaci\'{o} qu\`{a}ntica, espec\'{i}ficament, la teoria qu\`{a}ntica de Shannon. La primera part de la tesi comen\c{c}a amb definicions concretes de models de fonts qu\`{a}ntiques generals i la seva compressi\'{o}, i cada cap\'{i}tol seg\"{u}ent aborda la compressi\'{o} d'un model de font espec\'{i}fic com a cas especial dels models generals definits inicialment. Primer, trobem la taxa de compressi\'{o} \`{o}ptima d'una font d'estats barreja general que inclou com a casos especials tots els models pr\`{e}viament estudiats, com les fonts pures i de col·lectivitats de Schumacher, i altres models de col·lectivitats d'estats barreja. Per a una interpolaci\'{o} entre els models de col·lectivitats visible i cec de Schumacher, trobem la regi\'{o} de compressi\'{o} \`{o}ptima per les taxes d'entrella\c{c}ament i les taxes qu\`{a}ntiques. A continuaci\'{o}, estudiem exhaustivament la variaci\'{o} cl\`{a}ssic-qu\`{a}ntica del fam\'{o}s problema de Slepian-Wolf i trobem les taxes \`{o}ptimes considerant la fidelitat per c\`{o}pia; per la fidelitat de bloc trobem expressions tancades per les fites assolibles i inverses, que coincideixen sota la condici\'{o} que una funci\'{o} que apareix a les dues fites sigui cont\'{i}nua. La primera part de la tesi es tanca amb un cap\'{i}tol sobre el model de col·lectivitats per la redistribuci\'{o} d'estats qu\`{a}ntics, per al qual trobem la taxa de compressi\'{o} \`{o}ptima considerant la fidelitat per c\`{o}pia i les fites assolibles i inverses, que de nou coincideixen sota la condici\'{o} de continu\"{i}tat d'una certa funci\'{o}.
La segona part de la tesi gira al voltant de la termodin\`{a}mica qu\`{a}ntica des de la perspectiva de la teoria de la informaci\'{o}. Comencem amb un punt de vista de la teoria de recursos d'un sistema qu\`{a}ntic amb m\'{u}ltiples c\`{a}rregues que no commuten i amb objectes i operacions permeses que s\'{o}n termodin\`{a}micament significatius; utilitzant eines de la teoria qu\`{a}ntica de Shannon classifiquem els objectes i trobem operacions qu\`{a}ntiques expl\'{i}cites que relacionen els objectes de la mateixa classe entre s\'{i}. Posteriorment, apliquem aquest marc de la teoria de recursos per estudiar una configuraci\'{o} termodin\`{a}mica tradicional amb m\'{u}ltiples quantitats conservades que no commuten, que consta d'un sistema principal, un reservori cal\`{o}ric i bateries per emmagatzemar diverses quantitats conservades del sistema. Enunciem les lleis de la termodin\`{a}mica per a aquest sistema, i mostrem que un efecte purament qu\`{a}ntic t\'{e} lloc en algunes transformacions del sistema, \'{e}s a dir, algunes transformacions nom\'{e}s s\'{o}n factibles si hi ha correlacions qu\`{a}ntiques entre l’estat final del sistema i del reservori cal\`{o}ric.
\newpage
\selectlanguage{spanish}
\section*{\Large \sffamily Resumen}
Esta tesis aborda problemas en el campo de la teor\'{i}a de la informaci\'{o}n cu\'{a}ntica, espec\'{i}ficamente, la teor\'{i}a cu\'{a}ntica de Shannon. La primera parte de la tesis comienza con definiciones concretas de modelos de fuentes cu\'{a}nticas generales y su compresi\'{o}n, y cada cap\'{i}tulo subsiguiente aborda la compresi\'{o}n de un modelo de fuente espec\'{i}fico como casos especiales de los modelos generales definidos inicialmente. Primero, encontramos la tasa de compresi\'{o}n \'{o}ptima de una fuente de estado mixto general que incluye como casos especiales todos los modelos previamente estudiados, como las fuentes pura y colectiva de Schumacher, y otros modelos colectivos de estado mixto. Para una interpolaci\'{o}n entre el modelo colectivo visible y ciego de Schumacher, encontramos la regi\'{o}n de tasa de compresi\'{o}n \'{o}ptima para el entrelazamiento y las tasas cu\'{a}nticas. A continuaci\'{o}n, estudiamos exhaustivamente la variaci\'{o}n cl\'{a}sico-cu\'{a}ntica del c\'{e}lebre problema de Slepian-Wolf y encontramos las tasas \'{o}ptimas considerando la fidelidad por copia; con la fidelidad de bloque encontramos l\'{i}mites alcanzables e inversos que coinciden con la continuidad de una funci\'{o}n que aparece en los l\'{i}mites. La primera parte de la tesis cierra con un cap\'{i}tulo sobre el modelo colectivo de redistribuci\'{o}n de estado cu\'{a}ntico para el cual encontramos la tasa de compresi\'{o}n \'{o}ptima considerando la fidelidad por copia y los l\'{i}mites alcanzables e inversos que coinciden con la continuidad de una funci\'{o}n que aparece en los l\'{i}mites.
La segunda parte de la tesis gira en torno a la termodin\'{a}mica cu\'{a}ntica desde la perspectiva de la teor\'{i}a de la informaci\'{o}n. Comenzamos con un punto de vista de la teor\'{i}a de recursos de un sistema cu\'{a}ntico con m\'{u}ltiples cargas que no conmutan, con objetos y operaciones permitidas que son termodin\'{a}micamente significativos; usando herramientas de la teor\'{i}a cu\'{a}ntica de Shannon clasificamos los objetos y encontramos operaciones cu\'{a}nticas expl\'{i}citas que mapean los objetos de la misma clase entre s\'{i}. Posteriormente, aplicamos este marco de la teor\'{i}a de recursos para estudiar una configuraci\'{o}n termodin\'{a}mica tradicional con m\'{u}ltiples cantidades conservadas que no conmutan, compuesta por un sistema principal, un reservorio cal\'{o}rico y bater\'{i}as para almacenar varias cantidades conservadas del sistema. Enunciamos las leyes de la termodin\'{a}mica para este sistema, y mostramos que ocurre un efecto puramente cu\'{a}ntico en algunas transformaciones del sistema, es decir, algunas transformaciones solo son factibles si existen correlaciones cu\'{a}nticas entre el estado final del sistema y del reservorio cal\'{o}rico.
\selectlanguage{english}
\cleardoublepage
\noindent {\Large \textbf{Acknowledgements}}
\vspace{1cm}
\noindent I express my sincere gratitude to my supervisors Andreas Winter and Maciej Lewenstein for their continuous support and care in any aspect that I could possibly ask for. I started by studying various problems in quantum Shannon theory, and I enjoyed and learned from the immense knowledge of Andreas Winter, who gave me unlimited freedom and time to immerse myself in problems and wrap my head around them; I have been fascinated by his perspective, his scientific discipline and how he approaches science in general, and I feel privileged to have him as my mentor and role model in both academic and personal life. He later introduced me to Maciej Lewenstein at ICFO, where I have learned from his profound knowledge of physics and how to make sense of complicated mathematical notions through physical interpretations without obsessing over equations. I cannot thank Maciej enough for his kindness, his support, and the freedom and time that he gave me.
I am honored and delighted to defend my thesis in front of
the experts of the field
John Calsamiglia, Patrick Hayden and Micha{\l} Horodecki, whose scientific works have
been a source of inspiration and guidance to me.
I have learned a lot and enjoyed discussing problems during my academic visits; specifically, I would like to thank Paul Skrzypczyk and Tony Short at the University of Bristol, Nilanjana Datta in Cambridge, and Masahito Hayashi at Peng Cheng Lab.
Beyond the scientific perspective, I got to enjoy my time at ICFO as a PhD student; it is a great institute for anyone seeking professional academic training, thanks to its organization, generosity and supportive environment. I am indebted to both the academic and the administrative staff. Moreover, thanks to the social environment there, I have made great friendships and enjoyed the fun, especially during the various annual events.
I had a great time and experience at the UAB in our own quantum information group (Giq), my academic home, thanks to the supportive and encouraging atmosphere that has been fostered there. I am indebted to Anna, Emili, John, Ramon and the other Giq members for all their support and care; I have enjoyed the seminars, the time we have spent together during lunch and our annual Cal\c{c}otadas.
I could never have accomplished what I have so far without my background and the training I received in great schools in Iran; I specifically thank my teachers at Sharif University of Technology, where I was exposed to information theory and became fascinated with quantum information for the first time. I am especially grateful to Salman Beigi and Amin Gohari for their teaching, advice, scientific manner, support and recommendations.
Despite the challenges I have faced, I got to enjoy living in Barcelona thanks to the beauty of this city and life-long friendships that I have made here.
I am thankful to my friends Arezou, Hara, Lisa, Marzieh, Susanna and other friends in Giq and ICFO, especially Roger for translating the summary of my thesis into Catalan and Spanish.
During this period, I have gone back to my family all the time. In particular,
I am grateful and indebted to my parents for their love, support and encouragement, and for planting the initial seeds of love and passion for science.
I have had their continuous support throughout my life, and especially in my academic endeavours.
Finally, I cannot express enough my happiness and gratitude to have Farzin as my husband and friend. We started the PhD at the same time, and despite the ups and downs of his own path, he never failed to support and encourage me; he kept my attitude positive and optimistic to overcome the challenges of this path, even when he was facing his own hurdles. I am thankful for his kindness and care, and for all the discussions we had regarding quantum information and PhD life.
\cleardoublepage
\noindent This thesis has been supported by the Spanish MINECO (projects FIS2016-86681-P,
FISICATEAMO FIS2016-79508-P, SEVERO OCHOA No. SEV-2015-0522, FPI
and PID2019-107609GB-I00/AEI/10.13039/501100011033), the FEDER funds,
the Generalitat de Catalunya (projects 2017-SGR-1127 and 2017-SGR-1341, and the CERCA Programme),
ERC AdG OSYRIS, EU FETPRO QUIC,
and the National Science Centre, Poland, Symfonia grant no.~2016/20/W/ST4/00314.
\tableofcontents
\mainmatter
\part{Quantum Source Compression}
\part{Quantum Thermodynamics}
\section*{\Large \sffamily Abstract}
This thesis addresses problems in the field of quantum information theory, specifically, quantum Shannon theory. The first part of the thesis is opened with concrete definitions of general quantum source models and their compression, and each subsequent chapter addresses the compression of a specific source model as a special case of the initially defined general models. First, we find the optimal compression rate of a general mixed state source which includes as special cases all the previously studied models such as Schumacher’s pure and ensemble sources and other mixed state ensemble models. For an interpolation between the visible and blind Schumacher’s ensemble model, we find the optimal compression rate region for the entanglement and quantum rates. Later, we comprehensively study the classical-quantum variation of the celebrated Slepian-Wolf problem and find the optimal rates considering per-copy fidelity; with block fidelity we find single letter achievable and converse bounds which match up to continuity of a function appearing in the bounds. The first part of the thesis is closed with a chapter on the ensemble model of quantum state redistribution for which we find the optimal compression rate considering per-copy fidelity and single-letter achievable and converse bounds matching up to continuity of a function which appears in the bounds.
The second part of the thesis revolves around information theoretical perspective of quantum thermodynamics. We start with a resource theory point of view of a quantum system with multiple non-commuting charges where the objects and allowed operations are thermodynamically meaningful; using tools from quantum Shannon theory we classify the objects and find explicit quantum operations which map the objects of the same class to one another. Subsequently, we apply this resource theory framework to study a traditional thermodynamics setup with multiple non-commuting conserved quantities consisting of a main system, a thermal bath and batteries to store various conserved quantities of the system. We state the laws of the thermodynamics for this system, and show that a purely quantum effect happens in some transformations of the system, that is, some transformations are feasible only if there are quantum correlations between the final state of the system and the thermal bath.
\newpage
\selectlanguage{catalan}
\section*{\Large \sffamily Resum}
Aquesta tesi aborda problemes en el camp de la teoria de la informaci\'{o} qu\`{a}ntica, espec\'{i}ficament, la teoria qu\`{a}ntica de Shannon. La primera part de la tesi comen\chi{c}a amb definicions concretes de models de fonts qu\`{a}ntiques generals i la seva compressi\'{o}, i cada cap\'{i}tol seg\"{u}ent aborda la compressi\'{o} d'un model de font espec\'{i}fic com a casos especials dels models generals definits inicialment. Primer, trobem la taxa de compressi\'{o} \`{o}ptima d'una font d'estats barreja general que inclou com a casos especials tots els models pr\`{e}viament estudiats, com les fonts pures i de co"lectivitats de Schumacher, i altres models de co"lectiuvitats d'estats barreja. Per a una interpolaci\'{o} entre els models de co"lectivitats visible i cec de Schumacher, trobem la regi\'{o} de compressi\'{o} \`{o}ptima per les taxes d'entrella\chi{c}ament i les taxes qu\`{a}ntiques. A continuaci\'{o}, estudiem exhaustivament la variaci\'{o} cl\`{a}ssic-qu\`{a}ntica del fam\'{o}s problema de Slepian-Wolf i trobem les taxes \`{o}ptimes considerant la fidelitat per c\`{o}pia; per la fidelitat de bloc trobem expressions tancades per les fites assolibles i inverses que coincideixen, sota la condici\'{o} de que una funci\'{o} que apareix a les dues fites sigui continua. La primera part de la tesi tanca amb un cap\'{i}tol sobre el model de co"lectivitats per la redistribuci\'{o} d'estats qu\`{a}ntics per al qual trobem la taxa de compressi\'{o} \`{o}ptima considerant la fidelitat per c\`{o}pia i les fites assolibles i inverses, que de nou que coincideixen sota la condici\'{o} de continu\"{i}tat d'una certa funci\'{o}.
La segona part de la tesis gira al voltant de la term\'{o}dinamica qu\`{a}ntica sota de la perspectiva de la teoria de la informaci\'{o}. Comencem amb un punt de vista de la teoria de recursos d'un sistema qu\`{a}ntic amb m\'{u}ltiples c\`{a}rregues que no commuten i amb objectes i operacions permeses que son termodin\`{a}micament significatives; utilitzant eines de la teoria qu\`{a}ntica de Shannon classifiquem els objectes i trobem operacions qu\`{a}ntiques expl\'{i}cites que relacionen els objectes de la mateixa classe entre s\'{i}. Posteriorment, apliquem aquest marc de la teoria de recursos per estudiar una configuraci\'{o} termodin\`{a}mica tradicional amb m\'{u}ltiples quantitats conservades que no commuten que consta d'un sistema principal, un reservori cal\`{o}ric i bateries per emmagatzemar diverses quantitats conservades del sistema. Enunciem les lleis de la termodin\`{a}mica per a aquest sistema, i mostrem que un efecte purament qu\`{a}ntic t\'{e} lloc en algunes transformacions del sistema, \'{e}s a dir, algunes transformacions nom\'{e}s s\'{o}n factibles si hi ha correlacions qu\`{a}ntiques entre l’estat final del sistema i del reservori cal\`{o}ric.
\newpage
\selectlanguage{catalan}
\section*{\Large \sffamily Resumen}
Esta tesis aborda problemas en el campo de la teor\'{i}a de la informaci\'{o}n cu\'{a}ntica, espec\'{i}ficamente, la teor\'{i}a cu\'{a}ntica de Shannon. La primera parte de la tesis comienza con definiciones concretas de modelos de fuentes cu\'{a}nticas generales y su compresi\'{o}n, y cada cap\'{i}tulo subsiguiente aborda la compresi\'{o}n de un modelo de fuente espec\'{i}fico como casos especiales de los modelos generales definidos inicialmente. Primero, encontramos la tasa de compresi\'{o}n \'{o}ptima de una fuente de estado mixto general que incluye como casos especiales todos los modelos previamente estudiados, como las fuentes pura y colectiva de Schumacher, y otros modelos colectivos de estado mixto. Para una interpolaci\'{o}n entre el modelo colectivo visible y ciego de Schumacher, encontramos la regi\'{o}n de tasa de compresi\'{o}n \'{o}ptima para el entrelazamiento y las tasas cu\'{a}nticas. A continuaci\'{o}n, estudiamos exhaustivamente la variaci\'{o}n cl\'{a}sico-cu\'{a}ntica del c\'{e}lebre problema de Slepian-Wolf y encontramos las tasas \'{o}ptimas considerando la fidelidad por copia; con la fidelidad de bloque encontramos l\'{i}mites alcanzables e inversos que coinciden con la continuidad de una funci\'{o}n que aparece en los l\'{i}mites. La primera parte de la tesis cierra con un cap\'{i}tulo sobre el modelo colectivo de redistribuci\'{o}n de estado cu\'{a}ntico para el cual encontramos la tasa de compresi\'{o}n \'{o}ptima considerando la fidelidad por copia y los l\'{i}mites alcanzables e inversos que coinciden con la continuidad de una funci\'{o}n que aparece en los l\'{i}mites.
La segunda parte de la tesis gira en torno a la perspectiva te\'{o}rica de la informaci\'{o}n de la termodin\'{a}mica cu\'{a}ntica. Comenzamos con un punto de vista de la teor\'{i}a de recursos de un sistema cu\'{a}ntico con m\'{u}ltiples cargas no conmutables con objetos y operaciones permitidas que son termodin\'{a}micamente significativas; usando herramientas de la teor\'{i}a cu\'{a}ntica de Shannon clasificamos los objetos y encontramos operaciones cu\'{a}nticas expl\'{i}citas que mapean los objetos de la misma clase entre s\'{i}. Posteriormente, aplicamos este marco de la teor\'{i}a de recursos para estudiar una configuraci\'{o}n termodin\'{a}mica tradicional con m\'{u}ltiples cantidades no conmutables compuesta por un sistema principal, un reservorio cal\'{o}rico y bater\'{i}as para almacenar varias cantidades conservadas del sistema. Enunciamos las leyes de la termodin\'{a}mica para este sistema, y mostramos que ocurre un efecto puramente cu\'{a}ntico en algunas transformaciones del sistema, es decir, algunas transformaciones solo son factibles si existen correlaciones cu\'{a}nticas entre el estado final del sistema y del reservorio cal\'{o}rico.
\selectlanguage{english}
\cleardoublepage
\noindent {\Large \textbf{Acknowledgements}}
\vspace{1cm}
\noindent I express my sincere gratitude to my supervisors Andreas Winter and Maciej Lewenstein for their continuous support and care in any aspect that I could possibly ask for. I started with studying various problems in quantum Shannon theory, and I enjoyed and learned from immense knowledge of Andreas Winter who gave me unlimited freedom and time to submerge myself in problems and wrap my head around them; I have been fascinated to see his perspective, scientific discipline and how he approaches science in general, and I feel privileged to have him as my mentor and role model both in academic and personal life. He later introduced me to Maciej Lewenstein in ICFO where I have learned from his profound knowledge in physics and how to make sense of complicated mathematical notions through physical interpretations without obsessing about equations. I cannot thank Maciej enough for his kindness, support and also the freedom and time that he gave me.
I am honored and delighted to defend my thesis in front of
the experts of the field
John Calsamiglia, Patrick Hayden and Micha{\lambda} Horodecki, whose scientific works have
been a source of inspiration and guidance to me.
I have learned a lot and enjoyed discussing problems during my academic visits that I have had, specifically, I would like to thank Paul Skrzypczyk and Tony Short in the University of Bristol, Nilanjana Datta in Cambridge and Masahito Hayashi in Peng Cheng Lab.
Apart from scientific perspective, I got to enjoy my time in ICFO as a Phd student which is a great institute for anyone seeking professional academic training thanks to the organization, generosity and supportive environment of the institute. I am indebted to both academic and administrative staff. Moreover, thanks to the social environment there, I have made great friendships and enjoyed the fun specifically during various annual events.
I had great time and experience in UAB in our own quantum information group (Giq), where it is my academic home, thanks to the supportive and encouraging atmosphere that has been fostered here. I am indebted to Anna, Emili, John, Ramon and other Giq members for all their support and care; I have enjoyed the seminars, the time we have spent together during lunch and our annual Cal\chi{c}otadas.
I could never accomplish what I have accomplished so far without my background and the training that I have had in great schools in Iran, I specifically thank my teachers in Sharif university of technology where I was exposed to information theory and I got fascinated for the first time about quantum information. I am specifically grateful to Salman Beigi and Amin Gohari for their teaching, advice, scientific manner, support and recommendations.
Despite the challenges I have faced, I got to enjoy living in Barcelona thanks to the beauty of this city and life-long friendships that I have made here.
I am thankful to my friends Arezou, Hara, Lisa, Marzieh, Susanna and other friends in Giq and ICFO, specifically Roger for translating the summary of my thesis to Catalan and Spanish.
During this period, I have gone back to my family time and again. In particular,
I am grateful and indebted to my parents for their love, support and encouragement, and for planting the initial seeds of love and passion for science.
I have had their continuous support throughout my life, and especially in my academic endeavours.
Finally, I cannot express enough my happiness and gratitude to have Farzin as my husband and friend. We started our PhDs at the same time, and despite the ups and downs of his own path, he never failed to support and encourage me; he kept my attitude positive and optimistic in overcoming the challenges of this path, even when he was facing his own hurdles. I am thankful for his kindness and care, and for all the discussions we had about quantum information and PhD life.
\cleardoublepage
\noindent This thesis has been supported by the Spanish MINECO (projects FIS2016-86681-P,
FISICATEAMO FIS2016-79508-P, SEVERO OCHOA No. SEV-2015-0522, FPI
and PID2019-107609GB-I00/AEI/10.13039/501100011033), the FEDER funds, the Generalitat de Catalunya (projects 2017-SGR-1127,
2017-SGR-1341 and the CERCA Programme), ERC AdG OSYRIS, EU FETPRO QUIC,
and the National Science Centre, Poland, Symfonia grant no.~2016/20/
W/ST4/00314.
\tableofcontents
\mainmatter
\chapter{Miscellaneous definitions and facts}
\label{Miscellaneous_Facts}
In this Appendix, we list a number of useful
definitions and facts that we often refer to in
various chapters.
For an operator $X$, the \emph{trace norm}, the
\emph{Hilbert-Schmidt norm} and
the \emph{operator norm} are defined respectively
in terms of $|X| = \sqrt{X^\dagger X}$:
\begin{align*}
\|X\|_1 &= \Tr |X|, \\
\|X\|_2 &= \sqrt{\Tr |X|^2}, \\
\|X\|_{\infty} &= \lambda_{\max}(|X|),
\end{align*}
where $\lambda_{\max}(X)$ is the largest eigenvalue of $X$.
\begin{lemma}[Cf.~\cite{bhatia97}]
For any operator $X$,
\begin{align}
\|X\|_1 \leq \sqrt{d} \|X\|_2 \leq d \|X\|_{\infty},
\end{align}
where $d$ equals the rank of $X$.
\qed
\end{lemma}
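As an added illustration (not part of the cited result), the chain of norm inequalities is straightforward to verify numerically on a random matrix, with $d$ taken as the numerical rank:

```python
# Numerical sanity check of ||X||_1 <= sqrt(d) ||X||_2 <= d ||X||_inf,
# where d = rank(X); illustrative sketch using NumPy.
import numpy as np

rng = np.random.default_rng(0)

def norm_chain_holds(X, tol=1e-9):
    s = np.linalg.svd(X, compute_uv=False)  # singular values = eigenvalues of |X|
    d = np.sum(s > 1e-12)                   # numerical rank
    n1 = s.sum()                            # trace norm
    n2 = np.sqrt((s**2).sum())              # Hilbert-Schmidt norm
    ninf = s.max()                          # operator norm
    return n1 <= np.sqrt(d)*n2 + tol and np.sqrt(d)*n2 <= d*ninf + tol

X = rng.standard_normal((5, 5)) + 1j*rng.standard_normal((5, 5))
print(norm_chain_holds(X))  # True
```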
\begin{lemma}[Cf.~\cite{bhatia97}]
\label{norm1_trace}
For any self-adjoint operator $X$,
\begin{align*}
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\norm{X}_1=\max_{-\openone \leq Q \leq \openone}\Tr(QX).
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[Cf.~\cite{bhatia97}]
\label{T_norm1_inequality}
For \aw{any self-adjoint operator $X$ and any operator $T$,}
\begin{align*}
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\norm{TXT^{\dagger}}_1 \leq \norm{T}_{\infty}^2 \norm{X}_1.
\quad \quad\quad\quad\quad\quad\quad \quad \quad\quad \quad
\blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[Cf.~Bhatia~\cite{bhatia97}]
\label{lemma:norm inequality}
For operators $A$, $B$ and $C$, and any $p \in [1,\infty]$, the following holds for the Schatten $p$-norm:
\begin{align*}
\norm{ABC}_p \leq \norm{A}_{\infty} \norm{B}_p \norm{C}_{\infty}.
\end{align*}
\end{lemma}
\begin{lemma}[Hoeffding's inequality, Cf.~\cite{DemboZeitouni}]
\label{Hoeffding's inequality}
Let $X_1, X_2,\ldots,X_n$ be independent random variables with $a_i \leq X_i \leq b_i$, and define their empirical mean as $\overline{X}=\frac{X_1+\cdots+X_n}{n}$. Then, for any $t>0$,
\begin{align*}
\Pr\left\{\overline{X}-\mathbb{E}(\overline{X}) \geq t\right\}
&\leq \exp\left(-\frac{2n^2t^2}{\sum_{i=1}^n(b_i-a_i)^2}\right),\\
\Pr\left\{\overline{X}-\mathbb{E}(\overline{X}) \leq -t\right\}
&\leq \exp\left(-\frac{2n^2t^2}{\sum_{i=1}^n(b_i-a_i)^2}\right).
\end{align*}
\end{lemma}
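A small Monte Carlo experiment (an added illustration; the parameters are arbitrary) shows the first tail bound in action for uniform random variables, where $a_i=0$, $b_i=1$ and the bound reduces to $\exp(-2nt^2)$:

```python
# Monte Carlo illustration of Hoeffding's inequality for X_i ~ Uniform[0, 1];
# hypothetical parameters chosen for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 50, 0.1, 20000
samples = rng.random((trials, n))   # rows are i.i.d. realizations of X_1..X_n
dev = samples.mean(axis=1) - 0.5    # empirical mean minus its expectation
empirical = np.mean(dev >= t)       # estimated tail probability
bound = np.exp(-2*n*t**2)           # Hoeffding upper bound
print(empirical <= bound)  # True
```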
The fidelity of two states is defined as
\begin{align*}
F(\rho,\sigma)= \Tr \sqrt{\sigma^{\frac{1}{2}} \rho \sigma^{\frac{1}{2}}}.
\end{align*}
When one of the arguments is pure, then
\begin{align*}
F(\rho,\ketbra{\psi}{\psi})
=\sqrt{\Tr (\rho \ketbra{\psi}{\psi})}
=\sqrt{\bra{\psi}\rho\ket{\psi}}.
\end{align*}
\begin{lemma}
\label{lemma:FvdG}
The fidelity is related to the trace norm as follows \cite{Fuchs1999}:
\begin{align*}
1- F(\rho,\sigma) \leq \frac{1}{2}\|\rho-\sigma\|_1 \leq \sqrt{1-F(\rho,\sigma)^2} =: P(\rho,\sigma),
\end{align*}
where $P(\rho,\sigma)$ is the so-called purified distance,
or Bhattacharya distance, between quantum states.
\qed
\end{lemma}
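Both bounds of the lemma can be checked numerically on random states; the sketch below (an added illustration) computes the fidelity and the trace distance directly from eigendecompositions:

```python
# Numerical check of the Fuchs-van de Graaf inequalities relating fidelity
# and trace distance for random 4-dimensional states; illustrative sketch.
import numpy as np

rng = np.random.default_rng(2)

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

def sqrtm_psd(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    s = sqrtm_psd(sigma)   # F = Tr sqrt(sqrt(sigma) rho sqrt(sigma))
    return np.sqrt(np.clip(np.linalg.eigvalsh(s @ rho @ s), 0, None)).sum()

def trace_dist(rho, sigma):
    return 0.5*np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

rho, sigma = rand_state(4), rand_state(4)
F, T = fidelity(rho, sigma), trace_dist(rho, sigma)
ok = (1 - F <= T + 1e-9) and (T <= np.sqrt(max(1 - F**2, 0.0)) + 1e-9)
print(ok)  # True
```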
\begin{lemma}[{Pinsker's inequality, cf.~\cite{Schumacher2002}}]
\label{lemma:Pinsker}
The trace norm and relative entropy are related by
\begin{align*}
\|\rho-\sigma\|_1 \leq \sqrt{2 \ln 2 S(\rho\|\sigma)}.
\end{align*} \qed
\end{lemma}
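Pinsker's inequality can likewise be verified numerically for random full-rank states (an added illustration; logarithms are base 2 to match the conventions used here):

```python
# Numerical check of ||rho - sigma||_1 <= sqrt(2 ln2 S(rho||sigma));
# illustrative sketch with random full-rank states.
import numpy as np

rng = np.random.default_rng(3)

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

def logm2(M):
    w, V = np.linalg.eigh(M)
    return (V * np.log2(w)) @ V.conj().T

rho, sigma = rand_state(3), rand_state(3)
S_rel = np.trace(rho @ (logm2(rho) - logm2(sigma))).real  # S(rho||sigma)
tnorm = np.abs(np.linalg.eigvalsh(rho - sigma)).sum()     # ||rho - sigma||_1
ok = tnorm <= np.sqrt(2*np.log(2)*S_rel) + 1e-9
print(ok)  # True
```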
\begin{lemma}[{Uhlmann~\cite{UHLMANN1976}}]
\label{lemma:Uhlmann1}
Let $\rho^A$ and $\sigma^A$ be two quantum states with fidelity $F(\rho^A,\sigma^A)$. Let $\rho^{AB}$ and $\sigma^{AC}$ be purifications of these two states, then there exists an isometry $V:{B\to C} $ such that
\begin{align*}
\phantom{========:}
F\left( (\openone_A \otimes V^{B\to C})\rho^{AB}(\openone_A \otimes V^{B\to C})^{\dagger},\sigma^{AC} \right)
= F(\rho^A,\sigma^A).
\phantom{========:} \blacksquare
\end{align*}
\end{lemma}
A consequence of this, due to \cite[Lemma~2.2]{Devetak2008_1}, is as follows.
\begin{lemma}
\label{lemma:Uhlmann2}
Let $\rho^A$ and $\sigma^A$ be two quantum states with trace distance
$\frac12 \|\rho^A-\sigma^A\|_1 \leq \epsilon$, and
let $\rho^{AB}$ and $\sigma^{AC}$ be purifications of these two states.
Then there exists an isometry $V:{B\to C}$ such that
\begin{align*}
\phantom{=========:}
\left\| (\openone_A \otimes V^{B\to C})\rho^{AB} (\openone_A \otimes V^{B\to C})^{\dagger}
\! \! -\! \sigma^{AC} \right\|_1 \!
\leq\! \sqrt{\epsilon(2-\epsilon)} \,.
\phantom{=========} \blacksquare
\end{align*}
\end{lemma}
\begin{lemma}[{Fannes~\cite{Fannes1973}; Audenaert~\cite{Audenaert2007}}]
\label{Fannes-Audenaert inequality}
Let $\rho$ and $\sigma$ be two states on Hilbert space
$A$ with trace distance
$\frac12\|\rho-\sigma\|_1 \leq \epsilon$, then
\begin{align*}
|S(\rho)-S(\sigma)| \leq \epsilon\log |A| + h(\epsilon),
\end{align*}
where $h(\epsilon)=-\epsilon \log \epsilon -(1-\epsilon)\log (1-\epsilon)$ \aw{is the binary entropy}.
\end{lemma}
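As an added numerical illustration, the Fannes-Audenaert bound can be checked on two random $4$-dimensional states:

```python
# Numerical check of |S(rho) - S(sigma)| <= eps log|A| + h(eps) for two
# random 4-dimensional states; illustrative sketch.
import numpy as np

rng = np.random.default_rng(4)
d = 4

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log2(w)).sum())

rho, sigma = rand_state(d), rand_state(d)
eps = 0.5*np.abs(np.linalg.eigvalsh(rho - sigma)).sum()  # trace distance
h = -eps*np.log2(eps) - (1-eps)*np.log2(1-eps)           # binary entropy
lhs = abs(entropy(rho) - entropy(sigma))
rhs = eps*np.log2(d) + h
print(lhs <= rhs + 1e-9)  # True
```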
\aw{There is also an extension} of the Fannes inequality for the conditional entropy;
this lemma is very useful \aw{especially} when the dimension of the
system \aw{conditioned on} is unbounded.
\begin{lemma}[{Alicki-Fannes~\cite{Alicki2004}; Winter~\cite{Winter2016}}]
\label{AFW lemma}
Let $\rho$ and $\sigma$ be two states on a bipartite Hilbert space
$A\otimes B$ with trace distance $\frac12\|\rho-\sigma\|_1 \leq \epsilon$, then
\begin{align*}
|S(A|B)_{\rho}\!-\!S(A|B)_{\sigma}| \leq 2\epsilon \log |A| \!+\! (1+\epsilon)h\left(\frac{\epsilon}{1+\epsilon}\right)\!.
\end{align*}\qed
\end{lemma}
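The conditional-entropy version can also be tested numerically; the sketch below (an added illustration, with $S(A|B)=S(AB)-S(B)$ and $|A|=2$, $|B|=3$) implements the required partial trace directly:

```python
# Numerical check of the Alicki-Fannes-Winter bound for the conditional
# entropy S(A|B) = S(AB) - S(B); illustrative sketch.
import numpy as np

rng = np.random.default_rng(5)
dA, dB = 2, 3

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log2(w)).sum())

def reduce_B(rho):
    # partial trace over A for a state on A (x) B, kron ordering
    return rho.reshape(dA, dB, dA, dB).trace(axis1=0, axis2=2)

def cond_ent(rho):
    return entropy(rho) - entropy(reduce_B(rho))

def h(x):
    return -x*np.log2(x) - (1-x)*np.log2(1-x)

rho, sigma = rand_state(dA*dB), rand_state(dA*dB)
eps = 0.5*np.abs(np.linalg.eigvalsh(rho - sigma)).sum()
lhs = abs(cond_ent(rho) - cond_ent(sigma))
rhs = 2*eps*np.log2(dA) + (1+eps)*h(eps/(1+eps))
print(lhs <= rhs + 1e-9)  # True
```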
\begin{lemma}
\label{full_support_lemma}
Let $\rho$ be a state with full support on the Hilbert space \aw{$A$}, i.e.~it
\aw{has positive minimum} eigenvalue $\lambda_{\min}$, and
let $\ket{\psi}^{AR}$ be a purification of $\rho$ on the Hilbert space \aw{$A \otimes R$}.
Then any purification of another state $\sigma$ on \aw{$A$} is of the form
\begin{align*}
(\openone_A \otimes T) \ket{\psi}^{AR},
\end{align*}
where $T$ is an operator acting on system $R$ with $\| T \|_{\infty} \le \frac{1}{\sqrt{\lambda_{\min}}}$.
\begin{proof}
Let $\rho=\sum_i \lambda_i \ketbra{e_i}{e_i}$ and $\sigma=\sum_j \mu_j \ketbra{f_j}{f_j}$ be spectral decompositions of the states. The purification of $\rho$ is $\ket{\psi}^{AR}=\sum_i \sqrt{\lambda_i} \ket{e_i} \ket{i}$. Define $\ket{\phi}^{AR}=\sum_j \sqrt{\mu_j} \ket{f_j} \ket{j}$.
Any purification of the state $\sigma$ is of the form
$(\openone_A \otimes V) \ket{\phi}^{AR}$ where $V$ is an isometry acting on system $R$.
Write each element of the eigenbasis $\set{\ket{f_j}}$ as a linear combination of the eigenbasis $\set{\ket{e_i}}$, that is, $\ket{f_j}=\sum_i\alpha_{ij}\ket{e_i}$. Then, we have $\ket{\phi}^{AR}=\sum_{i,j} \sqrt{\mu_j} \alpha_{ij} \ket{e_i} \ket{j}$. Define the operator $P=\sum_{jk} p_{jk} \ketbra{j}{k}$ where $p_{jk}=\alpha_{kj} \sqrt{\frac{\mu_j}{\lambda_k}}$. It is immediate to see that
\begin{align*}
\ket{\phi}^{AR}=(\openone_A \otimes P) \ket{\psi}^{AR}.
\end{align*}
Thus, we have $(\openone_A \otimes V) \ket{\phi}^{AR} = (\openone_A \otimes VP) \ket{\psi}^{AR}$.
Defining $T=VP$, we then have
\begin{align*}
\lambda_{\max} (T^{\dagger}T)&=\lambda_{\max} (P^{\dagger}P)\\
&\leq \Tr (P^{\dagger}P) \\
&=\sum_{j,k}|p_{jk}|^2 \\
&= \sum_{j,k}\frac{|\alpha_{kj}|^2\mu_j}{\lambda_k} \\
&\leq \frac{1}{\lambda_{\min}},
\end{align*}
where the last inequality follows \aw{from the orthonormality of} the basis $\set{\ket{f_j}}$.
\end{proof}
\end{lemma}
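Since the proof is constructive, the operator $P$ from it (and hence $T=VP$, here with $V=\openone$) can be built and tested numerically; the following sketch (an added illustration with random full-rank states) verifies both $(\openone_A \otimes P)\ket{\psi}=\ket{\phi}$ and the operator-norm bound:

```python
# Constructive check of the purification lemma: build P with
# p_{jk} = alpha_{kj} sqrt(mu_j / lambda_k) and verify both claims.
import numpy as np

rng = np.random.default_rng(6)
d = 3

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

rho, sigma = rand_state(d), rand_state(d)
lam, E = np.linalg.eigh(rho)     # columns of E are the |e_i>
mu, Fv = np.linalg.eigh(sigma)   # columns of Fv are the |f_j>
basis = np.eye(d)
psi = sum(np.sqrt(lam[i])*np.kron(E[:, i], basis[i]) for i in range(d))
phi = sum(np.sqrt(mu[j])*np.kron(Fv[:, j], basis[j]) for j in range(d))
alpha = E.conj().T @ Fv          # alpha[i, j] = <e_i|f_j>
P = np.array([[alpha[k, j]*np.sqrt(mu[j]/lam[k]) for k in range(d)]
              for j in range(d)])
ok1 = np.allclose(np.kron(np.eye(d), P) @ psi, phi)        # (1 (x) P)|psi> = |phi>
ok2 = np.linalg.norm(P, 2) <= 1/np.sqrt(lam.min()) + 1e-9  # ||P|| <= 1/sqrt(lam_min)
print(ok1 and ok2)  # True
```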
\begin{lemma}[Gentle Operator Lemma \cite{winter1999_2,Ogawa2007,wilde_2013}]
\label{Gentle Operator Lemma}
Let $\rho$ be a quantum state with diagonalization $\rho=\sum_j p_j \pi_j$, and let $\Lambda$ be an operator satisfying $0 \leq \Lambda \leq \openone$ and $\Tr(\rho \Lambda) \geq 1- \epsilon$. Then
\begin{align*}
\sum_j p_j\norm{\pi_j-\sqrt{\Lambda}\pi_j \sqrt{\Lambda}}_1 \leq 2\sqrt{\epsilon}.
\end{align*}
\end{lemma}
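This bound, too, can be checked numerically; the sketch below (an added illustration) draws a random state and a random operator $0 \leq \Lambda \leq \openone$:

```python
# Numerical check of the Gentle Operator Lemma:
# sum_j p_j || pi_j - sqrt(L) pi_j sqrt(L) ||_1 <= 2 sqrt(eps).
import numpy as np

rng = np.random.default_rng(8)
d = 4

def rand_state(d):
    A = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

rho = rand_state(d)
p, V = np.linalg.eigh(rho)    # rho = sum_j p_j |v_j><v_j|
G = rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))
Q, _ = np.linalg.qr(G)        # random unitary
u = rng.random(d)             # eigenvalues of Lambda, all in [0, 1]
Lam = (Q * u) @ Q.conj().T
sqrtLam = (Q * np.sqrt(u)) @ Q.conj().T
eps = 1 - np.trace(rho @ Lam).real
total = 0.0
for j in range(d):
    pi = np.outer(V[:, j], V[:, j].conj())
    diff = pi - sqrtLam @ pi @ sqrtLam
    total += p[j]*np.abs(np.linalg.eigvalsh(diff)).sum()
print(total <= 2*np.sqrt(eps) + 1e-9)  # True
```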
\begin{definition}
Let $\rho_1,\ldots,\rho_n$ be quantum states on a $d$-dimensional Hilbert space $\mathcal{H}$ with diagonalizations $\rho_i=\sum_j p_{ij} \pi_{ij}$ and one-dimensional projectors $\pi_{ij}$. For $\alpha >0$ and $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ define the set of entropy typical sequences as
\begin{align}
\mathcal{T}_{\alpha,\rho^n }^n=\left\{j^n=j_1 j_2 \ldots j_n : \abs{\sum_{i=1}^n \left(-\log p_{ i j_i}-S(\rho_i)\right) } \leq \alpha \sqrt{n}\right\}. \nonumber
\end{align}
Define the entropy typical projector of $\rho^n$ with constant $\alpha$ as
\begin{align*}
\Pi^n_{\alpha ,\rho^n }=\sum_{j^n \in \mathcal{T}_{\alpha, \rho^n}^n} \pi_{1j_1} \otimes \cdots \otimes \pi_{nj_n}.
\end{align*}
\end{definition}
\begin{lemma}[Cf.~\cite{csiszar_korner_2011}]\label{lemma:typicality properties }
There is a constant $0<\beta \leq \max \set{(\log 3)^2,(\log d)^2}$ such that the entropy typical projector has the following properties for any $\alpha >0$, $n>0$ and arbitrary state $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$:
\begin{align*}
\Tr \left(\rho^n \Pi^n_{\alpha ,\rho^n }\right) &\geq 1-\frac{\beta}{\alpha^2}, \\
2^{-\sum_{i=1}^n S(\rho_i)-\alpha \sqrt{n}} \Pi^n_{\alpha,\rho^n}
&\leq \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}
\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}\Pi^n_{\alpha,\rho^n},\quad \text{and} \\
\left(1-\frac{\beta}{\alpha^2}\right) 2^{ \sum_{i=1}^n S(\rho_i)-\alpha \sqrt{n}}
&\leq \Tr \left(\Pi^n_{\alpha ,\rho^n }\right)
\leq 2^{\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}.
\end{align*}
\end{lemma}
\chapter{Distributed compression of correlated classical-quantum sources}
\label{chap:cqSW}
In this chapter, we resume the investigation of the problem of independent
local compression of correlated quantum sources, the classical case
of which is covered by the celebrated Slepian-Wolf theorem.
We focus specifically on classical-quantum (cq) sources, for which one edge
of the rate region, corresponding to the compression of the classical
part, using the quantum part as side information at the decoder,
was previously determined by Devetak and Winter [Phys. Rev. A 68, 042301 (2003)].
Whereas the Devetak-Winter protocol attains a rate-sum equal to the von
Neumann entropy of the joint source, here we show that the full rate
region is much more complex, due to the partially quantum nature of
the source. In particular, in the opposite case of compressing the
quantum part of the source, using the classical part as side information
at the decoder, typically the rate sum is strictly larger
than the von Neumann entropy of the total source.
We determine the full rate region in the
\textit{generic} case, showing that, apart from the Devetak-Winter
point, all other points in the achievable region
have a rate sum strictly larger than the joint entropy. We can interpret
the difference as the price paid for the quantum encoder being ignorant
of the classical side information.
In the general case, we give an achievable rate region, via protocols
that are built on the decoupling principle, and the protocols of quantum
state merging and quantum state redistribution.
Our achievable region is almost matched by a single-letter converse,
which however still involves asymptotic errors and an unbounded
auxiliary system.
This chapter is based on the papers \cite{ZK_cqSW_ISIT_2019,ZK_cqSW_2018}.
\section{The source and the compression model}\label{sec:Source and compression model}
\label{introduction}
The Slepian-Wolf problem of two sources
correlated in a known way, but subject to separate, local compression \cite{Slepian1973}
has proved to provide a unifying principle for much of Shannon
theory, giving rise to natural information theoretic interpretations
of entropy and conditional entropy, and exhibiting deep
connections with error correction, channel capacities and
mutual information (cf.~\cite{csiszar_korner_2011}).
The quantum case has been investigated for two decades, starting with the
PhD thesis of Winter \cite{Winter1999} \aw{and subsequently in
\cite{Devetak2003}}, up to the systematic study \cite{Ahn2006},
and while we still do not have a complete understanding of the rate region,
it has become clear that the problem is of much higher
complexity than the classical case. The quantum Slepian-Wolf
problem, and specifically quantum data compression with side
information at the decoder, has resulted in many fundamental
advances in quantum information theory, including the protocols
of quantum state merging \cite{Horodecki2007,Abeyesinghe2009} and quantum state
redistribution \cite{Devetak2008_2},
which have given operational meaning to the conditional von
Neumann entropy, the mutual information and the conditional quantum mutual
information, respectively.
A variety of resource models and different tasks have been
considered over the years: the source and its recovery were
either modelled as an ensemble of pure states (following
Schumacher \cite{Schumacher1995}), or as a pure state between the encoders and a
reference system; the communication resource required was
either counted in qubits communicated, in addition either
allowing or disallowing entanglement, or it was counted in
ebits shared between the agents, but with free classical
communication. While this latter model has led to the most
complete picture of the general rate region, in the present
chapter we will go back to the original idea \cite{Schumacher1995,Winter1999}
of quantifying the communication, counted in qubits, between the
encoders and the decoder.
\bigskip
\textbf{Source model.}
The source model we shall consider is a hybrid classical-quantum one,
with two agents, Alice and Bob, whose task is to compress the
classical and quantum parts of the source, respectively. They then send their
shares to a decoder, \aw{Debbie}, who has to reconstruct the classical
information with high probability and the quantum information with
high (average) fidelity.
In detail, the source is characterised by a classical source, i.e.~a probability
distribution $p(x)$ on a discrete (in fact: finite) alphabet $\mathcal{X}$
which is observed by Alice, and a family of quantum states $\rho_x$
on a quantum system $B$, given by a Hilbert space of finite dimension $|B|$.
To define the problem of independent local compression (and
decompression) of such a correlated \aw{classical-quantum} source, we
shall consider purifications $\psi_x^{BR}$ of the $\rho_x$,
i.e.~$\rho_x^B = \Tr_R \psi_x^{RB}$. Thus the source can be described
compactly by the cq-state
\[
\omega^{XBR} = \sum_{x \in \mathcal{X}} p(x) \ketbra{x}{x}^X \otimes \ketbra{\psi_x}{\psi_x}^{BR}.
\]
We will be interested in the information theoretic limit of
many copies of $\omega$, i.e.
\begin{align*}
\omega^{X^n B^n R^n}
&= \left(\omega^{XBR}\right)^{\otimes n} \\
&= \sum_{x^n \in \mathcal{X}^n} p(x^n) \ketbra{x^n}{x^n}^{X^n}
\otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \! \!\!\!,
\end{align*}
where we use the notation
\begin{align*}
x^n &= x_1 x_2 \ldots x_n, \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n}, \\
p(x^n) &= p(x_1) p(x_2) \cdots p(x_n), \text{ and} \\
\ket{\psi_{x^n}} &= \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{align*}
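As a concrete illustration (with hypothetical numbers, not taken from the text), a two-symbol instance of $\omega^{XBR}$ can be built explicitly, confirming its normalization and the entropy additivity underlying the i.i.d. limit:

```python
# Minimal concrete instance of omega^{XBR}: X is a bit with p = (0.6, 0.4),
# and B, R are qubits with psi_0 = |0>|0> and psi_1 a Bell state.
import numpy as np

p = np.array([0.6, 0.4])
psi = [np.array([1, 0, 0, 0], dtype=complex),              # |0>|0>
       np.array([1, 0, 0, 1], dtype=complex)/np.sqrt(2)]   # (|00>+|11>)/sqrt(2)
omega = sum(p[x]*np.kron(np.outer(np.eye(2)[x], np.eye(2)[x]),
                         np.outer(psi[x], psi[x].conj()))
            for x in range(2))

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log2(w)).sum())

omega2 = np.kron(omega, omega)   # two i.i.d. copies (up to reordering)
print(np.isclose(np.trace(omega).real, 1.0))           # True
print(np.isclose(entropy(omega2), 2*entropy(omega)))   # True
```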
Alice and Bob, receiving their respective
parts of the source, separately encode these using the most general allowed
quantum operations; the compressed quantum information, living on
a certain number of qubits, is passed to the decoder who has to
output, again acting with a quantum operation, an element of $\mathcal{X}^n$
and a state on $B^n$, in such a way as to attain a low error probability
for $x^n$ and a high-fidelity approximation of the conditional quantum
source state, $\psi_{x^n}^{B^nR^n}$.
We consider two models: unassisted and entanglement-assisted, which we
describe formally in the following
(see Figs.~\ref{fig:una} and \ref{fig:ea}).
\medskip
\textbf{Unassisted model.}
With probability $p(x^n)$, the source provides Alice and Bob respectively
with states $\ket{x^n}^{X^n}$ and $\ket{\psi_{x^n}}^{B^nR^n}$.
Alice and Bob then perform their respective
encoding operations $\mathcal{E}_X:X^n \longrightarrow C_X$ and
$\mathcal{E}_B:B^n \longrightarrow C_B$,
\aw{respectively,} which are quantum operations, i.e.~completely positive and trace preserving (CPTP)
maps. \aw{Of course, as functions they act on the operators (density matrices) over
the respective input and output Hilbert spaces. But as there is no risk of confusion,
we will simply write the Hilbert spaces when
denoting a CPTP map. Note that since $X$ is a classical random variable, $\mathcal{E}_X$
is entirely described by a cq-channel.}
We call $R_X=\frac1n \log|C_X|$ and $R_B=\frac1n \log|C_B|$
\aw{the} quantum rates of the compression protocol.
Since Alice and Bob are required to act independently, the joint encoding operation
is $\mathcal{E}_X \otimes \mathcal{E}_B$.
The systems $C_X$ and $C_B$ are then sent to \aw{Debbie} who performs
a decoding operation \aw{$\mathcal{D}:C_X C_B \longrightarrow \hat{X}^n\hat{B}^n$}.
The output systems $\hat{X}^n$ and $\hat{B}^n$ have Hilbert spaces isomorphic to those of $X^{n}$ and $B^{n}$, respectively.
We define the \aw{extended source state}
\begin{align}\label{eq: extended source model}
&\omega^{X^n {X'}^n B^n R^n} \nonumber \\
\quad &= \left( \omega^{X{X'}BR}\right)^{\otimes n} \nonumber \\
\quad &=\!\!\!\! \! \!\sum_{x^n \in \mathcal{X}^n} \!\!\!\!p(x^n) \!\ketbra{x^n}{x^n}^{X^n} \!\!\!\otimes \ketbra{x^n}{x^n}^{{X'}^n}
\! \!\!\! \otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \!\!\!\!,
\end{align}
and say the encoding-decoding
scheme has average fidelity $1-\epsilon$ if
\begin{align}
\label{F_QCSW_unassisted1}
\overline{F} := F\left(\omega^{X^n {X'}^n B^n R^n },\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n }
\right)
\geq 1-\epsilon,
\end{align}
where
\[\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n }\!\!\!\!=\!\left(\mathcal{D} \circ (\mathcal{E}_X \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{{X'}^n R^n}\right) \omega^{X^n {X'}^n B^n R^n}\!\!\!, \]
and ${\operatorname{id}}_{{X'}^n R^n}$ is the identity (ideal) channel acting on ${X'}^n R^n$.
By the above fidelity definition and the linearity of CPTP maps,
the average fidelity defined in (\ref{F_QCSW_unassisted1}) \aw{can be expressed equivalently as}
\begin{align}
\label{F_QCSW_unassisted2}
\overline{F} \!\!= \!\!\!\!\!\sum_{x^n \in \mathcal{X}^n } \!\!\!p(x^n) F\! \left( \ketbra{x^n}{x^n}^{X^n}\!\!\! \!\otimes\! \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n} \!\!\!\!, \xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\right)\nonumber
\end{align}
where
\begin{align*}
&\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\!\!\!\!= \\
&\quad \!(\mathcal{D} \circ (\mathcal{E}_X \otimes \mathcal{E}_B) \otimes {\operatorname{id}}_{R^n}) \ketbra{x^n}{x^n}^{X^n} \!\!\!\otimes \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n}\!\!\!\! .
\end{align*}
We say that $(R_X,R_B)$ is an (asymptotically) achievable rate pair if
there exist codes $(\mathcal{E}_X,\mathcal{E}_B,\mathcal{D})$ as above
for every $n$, with fidelity $\overline{F}$ converging to $1$,
and classical and quantum rates converging to $R_X$ and $R_B$, respectively.
\aw{The rate region is the set of all achievable rate pairs, as a subset of $\mathbb{R}_{\geq 0}^2$.}
\begin{figure}[!t]
\centering
\includegraphics[scale=.4]{unassisted_model.jpg}
\caption{\aw{Circuit diagram of} the unassisted model. Dotted lines are used to
demarcate domains controlled by the different participants.
The solid lines represent quantum information \aw{registers}.}
\label{fig:una}
\end{figure}
It is shown \aw{by Devetak and Winter} in \cite[Theorem 1]{Devetak2003} and \cite[Corollary IV.13]{Winter1999} that the \aw{rate pair
\begin{equation}
\label{eq:DW}
(R_X,R_B) = (S(X|B),S(B))
\end{equation}
is achievable and optimal. The optimality is two-fold; first, the rate sum
achieved, $R_X+R_B=S(XB)$ is minimal, and secondly, even with unlimited $R_B$,
$R_X \geq S(X|B)$. This shows that the Devetak-Winter point is an extreme point
of the rate region. Interestingly,} Alice can achieve the rate $S(X|B)$ using only classical
communication. However, we \aw{will} prove the converse theorems considering
a quantum channel for Alice, which are obviously stronger statements.
In Theorem \ref{theorem: generic full rate region}, we show that our system model is equivalent
to the model considered in \cite{Devetak2003,Winter1999}, which implies the achievability and
optimality of this rate pair in our system model.
\aw{We remark that in \cite{Devetak2003}, the rate $R_B=S(B)$ was not explicitly
discussed, but it is clear that it can always be achieved by Schumacher's quantum
data compression \cite{Schumacher1995}, introducing an arbitrarily small additional error.}
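For intuition, the Devetak-Winter rate pair can be evaluated for a toy cq source (a hypothetical example, not from the cited works), using $S(X|B)=S(XB)-S(B)$; by construction the rate sum equals the joint entropy $S(XB)$:

```python
# Evaluating (R_X, R_B) = (S(X|B), S(B)) for a toy cq source with
# p = (1/2, 1/2), rho_0 = |0><0|, rho_1 = |+><+|; illustrative sketch.
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w*np.log2(w)).sum())

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0])/np.sqrt(2)
rhos = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
probs = [0.5, 0.5]
omega_XB = sum(q*np.kron(np.outer(e, e), r)
               for q, e, r in zip(probs, np.eye(2), rhos))
omega_B = sum(q*r for q, r in zip(probs, rhos))
R_X = entropy(omega_XB) - entropy(omega_B)   # S(X|B), roughly 0.399 here
R_B = entropy(omega_B)                       # S(B),   roughly 0.601 here
print(np.isclose(R_X + R_B, entropy(omega_XB)))  # True
```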
\medskip
\textbf{Entanglement-assisted model.}
This model \aw{generalizes the unassisted model, and it is basically the same,}
except that we let Bob and \aw{Debbie} share entanglement \aw{and use it in encoding and decoding, respectively.
In addition, we take care of any possible entanglement that is produced in the process.
Consequently, while Alice's encoding $\mathcal{E}_X:X^n \longrightarrow C_X$ remains the same,
Bob's encoding and the decoding map now act as
$\mathcal{E}_B:B^n B_0 \longrightarrow C_B B_0'$ and
$\mathcal{D}:C_X C_B D_0 \longrightarrow \hat{X}^n\hat{B}^n D_0'$, respectively,
where $B_0$ and $D_0$ are $K$-dimensional quantum registers of Bob and \aw{Debbie},
respectively, designated to hold the initially shared entangled state, and $B_0'$ and $D_0'$
are $L$-dimensional registers for the entanglement produced by the protocol.
Ideally, both initial and final entanglement are given by maximally
entangled states $\Phi_K$ and $\Phi_L$, respectively.}
Correspondingly, we say \aw{that} the encoding-decoding scheme has average fidelity $1-\epsilon$ if
\begin{align}
\label{F_QCSW_assisted}
\overline{F} &:= F\left(\omega^{X^n {X'}^n B^n R^n }\otimes \Phi_L^{B_0'D_0'},
\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n B_0' D_0'} \right) \nonumber\\
&\geq 1-\epsilon,
\end{align}
where
\begin{align*}
\xi^{ \hat{X}^n {X'}^n \hat{B}^n R^n B_0' D_0'}
\!\!&=\!\left(\!\mathcal{D} \circ (\mathcal{E}_X \!\otimes \mathcal{E}_{BB_0} \!\otimes\! {\operatorname{id}}_{D_0}\!) \!\otimes\! {\operatorname{id}}_{{X'}^n R^n}\!\right)\\
& \quad \quad \quad \quad \quad \omega^{X^n {X'}^n B^n R^n }\otimes \Phi_L^{B_0'D_0'}.
\end{align*}
We call $E=\frac{1}{n}(\log K - \log L)$ the entanglement rate of the scheme.
The CPTP map $\mathcal{E}_{B}$ takes the input systems $B^nB_0$ to the compressed system
$C_B$ \aw{plus Bob's share of the output entanglement, $B_0'$.}
\aw{Debbie} applies the decoding operation $\mathcal{D}$ on the received systems
$C_XC_B$ and \aw{her part of the initial} entanglement $D_0$,
to produce an output state on systems $\hat{X}^n \hat{B}^n$ \aw{plus her share of the output
entanglement, $D_0'$}.
Similar to the unassisted model, the output systems $\hat{X}^n$ and $\hat{B}^n$ have Hilbert spaces isomorphic to those of $X^{n}$ and $B^{n}$, respectively.
We say $(R_X, R_B, E)$ is an (asymptotically) achievable rate triple if for all $n$
there exist entanglement-assisted codes as before, such that the
fidelity $\overline{F}$ converges to $1$, and
the classical, quantum and entanglement rates converge to
$R_X$, $R_B$ and $E$, respectively.
\aw{The rate region is the set of all achievable rate triples, as a subset of
$\mathbb{R}_{\geq 0}^2\times\mathbb{R}$. In the following we will be mostly
interested in the projection of this region onto the first two coordinates,
$R_X$ and $R_B$, corresponding to unlimited entanglement assistance.}
\medskip
\aw{It is a simple consequence of the time sharing principle that the rate regions,
both for the unassisted and the entanglement-assisted model, are closed convex regions.
Furthermore, since one can always waste rate, the rate regions are open to the ``upper right''.
This means that the task of characterizing the rate regions boils down to describing
the lower boundary, which can be achieved by convex inequalities. In the Slepian-Wolf
problem, they are in fact linear inequalities, and we will find analogues of these
in the present investigation.}
\medskip
Stinespring's dilation theorem \aw{\cite{Stinespring1955}} states that any CPTP map can be built
from the basic operations of isometry and reduction to a subsystem by
tracing out the environment system.
Thus, the encoders and the decoder are without loss of generality isometries
\begin{align*}
U_X : {X^n} &\longrightarrow {C_X W_X}, \\
U_B : {B^n B_0} &\longrightarrow {C_B B_0' W_B}, \\
V : {C_X C_B D_0} &\longrightarrow {\hat{X}^n \hat{B}^n D_0' W_D},
\end{align*}
\aw{where the new systems $W_X$, $W_B$ and $W_D$ are the environment systems
of Alice, Bob and \aw{Debbie}, respectively. They simply remain locally in
possession of the respective party.}
\begin{figure}[!t]
\centering
\includegraphics[scale=.4]{assisted_model.jpg}
\caption{\aw{Circuit diagram of} the entanglement-assisted model. Dotted lines are used to
demarcate domains controlled by the different participants.
The solid lines represent quantum information \aw{registers}.}
\label{fig:ea}
\end{figure}
The following lemma states that for a code of block length $n$ and error $\epsilon$,
the environment parts of the encoding and decoding isometries, i.e.~$W_X$, $W_B$
and $W_D$, as well as the entanglement output registers $B_0'$ and $D_0'$, are decoupled from
the reference $R^n$, conditioned on $X^n$.
This lemma plays a crucial role in the proofs of converse theorems.
\begin{lemma}[Decoupling condition]
\label{decoupling condition}
For a code of block length $n$ and error $\epsilon$ in the entanglement-assisted model,
let $W_X$, $W_B$ and $W_D$ be the environments of Alice's and Bob's encoding and of
Debbie's decoding isometries, respectively. Then,
\[
I(W_XW_BW_D B_0'D_0':\hat{X}^n\hat{B}^nR^n|{X'}^n)_\xi \leq n \delta(n,\epsilon) ,
\]
where $\delta(n,\epsilon) = 4\sqrt{6\epsilon}\log(|X| |B|) + \frac2n h(\sqrt{6\epsilon})$,
\aw{with the binary entropy $h(\epsilon)=-\epsilon \log \epsilon -(1-\epsilon)\log (1-\epsilon)$;}
the conditional mutual information is with respect to the state
\begin{align*}
&\xi^{{X'}^n \hat{X}^n \hat{B}^n B_0' D_0' W_XW_BW_D R^{n}} \\
&\quad \quad \quad =\left(V \circ (U_X \otimes U_{B} \otimes \openone_{D_0}) \otimes \openone_{{X'}^n R^n}\right) \\
&\quad \quad \quad \quad \quad \quad(\omega^{X^n {X'}^n B^n R^n } \otimes \Phi_K^{B_0D_0}) \\
&\quad \quad \quad \quad\quad \quad \quad\left(V \circ (U_X \otimes U_{B} \otimes \openone_{D_0}) \otimes \openone_{{X'}^n R^n}\right)^{\dagger}.
\end{align*}
\end{lemma}
\begin{proof}
We show that the fidelity criterion (\ref{F_QCSW_assisted})
implies that given $x^n$, the environments $W_X$, $W_B$ and $W_D$ of Alice's, Bob's and \aw{Debbie}'s isometries
are decoupled from the rest of the output systems.
The parties share $n$ copies of the state $\omega^{X^{\prime} X B R}$, where
Alice and Bob have access to systems $X^n$ and $B^n$, respectively, and ${X'}^n$ and $R^n$ are the reference systems.
Alice and Bob apply the following isometries to encode their systems, respectively:
\begin{align*}
U_{X}:{X^n} &\longrightarrow {C_X W_X}, \\
U_{B}:{B^n B_0} &\longrightarrow {C_B B_0' W_B},
\end{align*}
where Alice and Bob send respectively their compressed information $C_X$ and $C_B$ to \aw{Debbie}
and keep the environment parts $W_X$ and $W_B$ of their respective isometries for themselves.
\aw{Debbie} applies \aw{the} decoding isometry \aw{$V:{C_X C_B D_0} \longrightarrow {\hat{X}^n \hat{B}^n D_0' W_D}$
to the systems $C_XC_B$ and her part of the entanglement $D_0$,
to generate the output systems $\hat{X}^n \hat{B}^n D_0'$, with $W_D$ the environment of her isometry.
This leads to the following final state after decoding:
\begin{align*}
&\xi^{X'^n \hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n}\\
& =\! \!\!\!\sum_{x^n} \! p(x^n)\! \ketbra{x^n}^{X'^n} \!\!\! \!\! \otimes \! \ketbra{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n} \!\!\!\!,
\end{align*}
where
\begin{align*}
\ket{\xi_{x^n}}&^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n}
\!\! \\
&= V^{{C_XC_BD_0 \to \hat{X}^n\hat{B}^nD_0'W_D}}\\
& \quad \quad \big( U_X^{X^n \to C_XW_X}\!\!\ket{x^n}^{X^n}
\!\otimes U_B^{B^nB_0 \to C_BB_0'W_B}\\
&\quad \quad\quad \quad\quad \quad\quad\quad \quad\quad (\ket{\psi_{x^n}}^{B^nR^n}\!\ket{\Phi_K}^{B_0D_0}) \bigr).
\end{align*}
}
The fidelity defined in \aw{Eq.}~(\ref{F_QCSW_assisted}) is \aw{now} bounded as follows:
\begin{align}
\label{eq-B1}
\overline{F}
&= F\left(\omega^{X'^n X^n B^n R^n} \otimes \Phi_L^{B_0'D_0'},
\xi^{X'^n \hat{X}^n \hat{B}^n B_0' D_0' R^n} \right) \nonumber \\
&\leq F\left(\omega^{X'^n X^n B^n R^n}, \xi^{X'^n \hat{X}^n \hat{B}^n R^n} \right) \nonumber \\
&= \!\!\! \!\sum_{x^n \in \mathcal{X}^n} \!\!\! p(x^n) \!F\!\left(\!\ketbra{x^n}{x^n}^{X^n}\!\!\!\! \otimes\! \ketbra{\psi_{x^n}}{\psi_{x^n}}^{B^nR^n}\!\!\!\!,
\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\!\! \right) \nonumber \\
&= \!\!\!\! \sum_{x^n} p(x^n)\! \sqrt{\!\bra{x^n}\bra{\psi_{x^n}}^{B^n\! R^n} \!
\xi_{x^n}^{\hat{X}^n\hat{B}^nR^n} \!\ket{x^n}\!\ket{\psi_{x^n}}^{B^n\! R^n}\!} \nonumber \\
&\leq \sum_{x^n} p(x^n) \sqrt{\| \xi_{x^n}^{\hat{X}^n\hat{B}^n R^n} \|},
\end{align}
where in the first line $\xi^{X'^n \hat{X}^n \hat{B}^n B_0' D_0' R^n} =\left(\mathcal{D} \circ ({\operatorname{id}}_{X^nD_0} \otimes \mathcal{E}_{B}) \otimes {\operatorname{id}}_{X'^n R^n} \right) \omega^{X^n X'^n B^n R^n } \otimes \Phi_K^{B_0D_0}$.
The \aw{inequality in the second line} is due to the monotonicity of fidelity under partial trace,
and \aw{$\|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|$ denotes the operator norm, which in this case
of a positive semidefinite operator is the maximum eigenvalue of $\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}$.
Now, consider the Schmidt decomposition of the state
$\ket{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_DR^n}$ with respect to the partition
$\hat{X}^n \hat{B}^n R^n$ : $B_0' D_0'W_X W_B W_D$, i.e.
\begin{align*}
&\ket{\xi_{x^n}}^{\hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_DR^n}\\
& \quad =\!\! \sum_{i} \!\!\sqrt{\lambda_{x^n}(i)}\ket{v_{x^n}(i)}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\ket{w_{x^n}(i)}^{B_0'\!D_0' \!W_X\! W_B \!W_D}\!\!\!.
\end{align*}}
High average fidelity $\overline{F} \geq 1-\epsilon$ implies that \emph{on average}
the above states are approximately product states. In other words, the two subsystems are
nearly decoupled on average:
\begin{align}
\label{eq:almost-pure}
\sum_{x^n}& p(x^n) \!F\!\left(\! \ketbra{\xi_{x^n}}{\xi_{x^n}},
\xi_{x^n}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\!\!\otimes \xi_{x^n}^{B_0'D_0' W_X W_B W_D} \right) \nonumber\\
&=\! \sum_{x^n} \!p(x^n)\! \sqrt{\bra{\xi_{x^n}}
{\xi_{x^n}^{\hat{X}^n \!\hat{B}^n\! R^n} \!\!\otimes \xi_{x^n}^{B_0'\! D_0'\! W_X \!W_B\! W_D}
\!\! \ket{\xi_{x^n}}}} \nonumber\\
&= \sum_{x^n} p(x^n) \sum_i \lambda_{x^n}(i)^{\frac32} \nonumber\\
&\geq \sum_{x^n} p(x^n) \|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|^{\frac32} \nonumber\\
&\geq \left( \sum_{x^n} p(x^n) \sqrt{\|\xi_{x^n}^{\hat{X}^n \hat{B}^n R^n}\|} \right)^{3} \nonumber\\
&\geq (1-\epsilon)^3
\geq 1 - 3\epsilon,
\end{align}
where in the first line $\ketbra{\xi_{x^n}}$ is a state on systems ${\hat{X}^n \!\hat{B}^n \!B_0'\!D_0'\! W_X\!W_B W_D\!R^n}$. The \aw{inequality in the fifth line} follows from the convexity of $x^3$ for $x \geq 0$,
and \aw{in the sixth line we have used Eq.}~(\ref{eq-B1}).
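As an aside for the reader, the elementary inequality steps in the chain above can be checked numerically. The following stdlib-Python sketch (illustrative only, not part of the proof) verifies, on random Schmidt spectra and ensembles, the bounds $\sum_i \lambda_i^{3/2} \geq \sqrt{\sum_i \lambda_i^3} \geq (\max_i \lambda_i)^{3/2}$, the convexity step, and $(1-\epsilon)^3 \geq 1-3\epsilon$.

```python
import math
import random

# Illustrative sanity check (not part of the proof) of the elementary
# inequality steps behind Eq. (eq:almost-pure):
#   sum_i lam_i^{3/2} >= sqrt(sum_i lam_i^3) >= (max_i lam_i)^{3/2},
#   sum_x p(x) m_x^{3/2} >= (sum_x p(x) sqrt(m_x))^3   (convexity of t^3),
#   (1 - eps)^3 >= 1 - 3*eps  for eps in [0, 1].

random.seed(0)

def rand_dist(n):
    """A random probability vector of length n."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

for _ in range(1000):
    lam = rand_dist(random.randint(1, 8))    # Schmidt coefficients
    a = sum(l ** 1.5 for l in lam)
    b = math.sqrt(sum(l ** 3 for l in lam))
    c = max(lam) ** 1.5                      # operator-norm term ||xi||^{3/2}
    assert a + 1e-12 >= b >= c - 1e-12

for _ in range(1000):
    p = rand_dist(5)                         # ensemble probabilities p(x^n)
    m = [random.random() for _ in range(5)]  # stand-ins for ||xi_{x^n}||
    lhs = sum(pi * mi ** 1.5 for pi, mi in zip(p, m))
    rhs = sum(pi * math.sqrt(mi) for pi, mi in zip(p, m)) ** 3
    assert lhs >= rhs - 1e-12                # convexity of t -> t^3

for k in range(101):
    eps = k / 100
    assert (1 - eps) ** 3 >= 1 - 3 * eps - 1e-12
```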
Based on the relation between fidelity and trace distance \aw{(Lemma \ref{lemma:FvdG}),
we thus obtain for the product ensemble
\begin{align*}
& \zeta^{X'^n \hat{X}^n \hat{B}^n B_0'D_0' W_X W_B W_D R^n} \\
& \quad:= \sum_{x^n}\! p(x^n)\!\ketbra{x^n}^{X'^n}\!\!\! \!\otimes \xi_{x^n}^{\hat{X}^n \hat{B}^n R^n} \!\!\!\otimes \xi_{x^n}^{B_0'D_0' W_X W_B W_D}\! \!\!,
\end{align*}
that
\begin{align*}
&\| \xi - \zeta \|_1\\
&= \sum_{x^n} p(x^n)\\
&\>\> \norm{\! \ketbra{\xi_{x^n}\! }{\xi_{x^n}\! }^{\! \hat{X}^n \! \hat{B}^n\! B_0' \! D_0' \! W_X\! W_B \! W_D\! R^n}
\! \! \! \! \! \!-\! \xi_{x^n}^{\hat{X}^n\! \hat{B}^n\! R^n} \! \! \!\! \! \otimes\! \xi_{x^n}^{B_0'\! D_0' \! W_X\! W_B \! W_D\! } \! }_{\! 1} \\
& \leq 2\sqrt{6\epsilon}.
\end{align*}}
By \aw{the Alicki-Fannes inequality (Lemma \ref{AFW lemma}), this implies
\begin{align}
\label{decoupling_I}
& I(\hat{X}^n\hat{B}^nR^n : B_0'D_0'W_XW_B W_D | {X'}^n)_\xi \nonumber \\
&= S(\hat{X}^n \hat{B}^n R^n | {X'}^n)_\xi \nonumber \\
& \quad \quad \quad \quad - S(\hat{X}^n \hat{B}^n R^n | {X'}^n B_0'D_0' W_XW_B W_D)_\xi \nonumber \\
&\leq 2\sqrt{6\epsilon} \log(|X|^n |B|^n |R|^n) + 2 h(\sqrt{6\epsilon}) \nonumber \\
&\leq 2\sqrt{6\epsilon} \log(|X|^{2n} |B|^{2n}) + 2 h(\sqrt{6\epsilon}) \nonumber\\
& =: n \delta(n,\epsilon),
\end{align}
where we note in the second line that
$S(\hat{X}^n \hat{B}^n R^n|X'^n B_0'D_0' W_XW_B W_D)_\zeta
= S(\hat{X}^n \hat{B}^n R^n)_\zeta = S(\hat{X}^n \hat{B}^n R^n)_\xi$,
and in the fourth line that we can without loss of generality assume $|R| \leq |X| |B|$,
since that is the maximum possible dimension of the support of $\omega^R$.}
\end{proof}
\section{Quantum data compression with classical side information}
\label{sec:seiteninformation}
In this section, we assume that Alice sends her information to \aw{Debbie}
at rate $R_X=\log \abs{\mathcal{X}}$ such that \aw{Debbie} can decode it
perfectly, and we ask how much Bob can compress his system given that
the decoder has access to classical side information $X^n$.
\aw{This problem is a special case of the \emph{classical-quantum Slepian-Wolf problem}} (CQSW problem),
and we call it quantum data compression with classical side information at the decoder,
in analogy to the problem of classical data compression
with quantum side information at the decoder which is addressed
in \cite{Devetak2003,Winter1999}. Note that we do not discuss the
compression and decompression of the classical part at all; the
decoder \aw{may} depend directly on $x^n$.
\aw{Of course, by Shannon's data compression theorem \cite{Shannon1948}, $X$ can always be
compressed to a rate $R_X = H(X)$, introducing an arbitrarily small
error probability}.
We know from the previous section that Bob's encoder, in the entanglement-assisted model, is without loss of generality
an isometry \aw{$U \equiv U_B:{B^n B_0} \longrightarrow {C W B_0'}$,
taking $B^n$ and Bob's part of the entanglement $B_0$ to systems
$C\otimes W \otimes B_0'$, where $C \equiv C_B$ is the compressed
information of rate $R_B=\frac{1}{n}\log |C|$; $W \equiv W_B$ is the environment
of Bob's encoding CPTP map, and $B_0'$ is the register carrying Bob's share of
the output entanglement
(in this section, we drop subscript $B$ from $C_B$ and $W_B$).
Having access to side information $X^n$, \aw{Debbie} applies the decoding isometry
$V:X^n C D_0 \to \hat{X}^n \hat{B}^n W_D D_0'$ to generate
the output systems $\hat{X}^n \hat{B}^n$ and the entanglement share
$D_0'$, where $W_D$ is the environment of the isometry.}
We call this encoding-decoding scheme a side information code of
block length $n$ and error $\epsilon$ for the entanglement-assisted model if the average fidelity
(\ref{F_QCSW_assisted}) is at least $1-\epsilon$.
Similarly, we define a side information code for the unassisted model by removing the corresponding systems of entanglement in the encoding and decoding isometries,
that is systems $B_0$, $B'_0$, $D_0$ and $D'_0$.
\medskip
To state our lower bound on the necessary compression rate, we introduce the
following quantity, which emerges naturally from the converse proof.
\begin{definition}
\label{I_delta}
For the state $\omega^{XBR} = \sum_x p(x) \ketbra{x}{x}^X \otimes \ketbra{\psi_x}{\psi_x}^{BR}$
and $\delta \geq 0$, define
\begin{align*}
&I_\delta(\omega) := \sup_{{\mathcal{T}}} I(X:W)_\sigma
\\
&\quad \quad \quad \text{ s.t. } {\mathcal{T}}:B\rightarrow W \text{ CPTP with }
I(R:W|X)_\sigma \leq \delta,
\end{align*}
where the mutual informations are understood with respect to the
state $\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega$ \aw{and $W$ ranges over arbitrary
finite dimensional quantum systems}.
Furthermore, let
\(
\widetilde{I}_0 := \lim_{\delta \searrow 0} I_\delta = \inf_{\delta>0} I_\delta.
\)
\end{definition}
Note that the system $W$ is not restricted in any way, which is
the reason why in this definition we have a supremum and an infimum, rather
than a maximum and a minimum.
(It is a simple consequence of compactness of the domain of optimisation,
together with the continuity of
the mutual information, that if we were to impose a bound on the dimension of $W$
in the above definition, the supremum in $I_\delta$ would be attained, and
for the infimum in $\widetilde{I}_0$, it would hold that $\widetilde{I}_0 = I_0$.)
\begin{lemma}
\label{lemma:I-delta}
The function $I_{\delta}(\omega)$ introduced in Definition~\ref{I_delta},
has the following properties:
\begin{enumerate}
\item It is a non-decreasing function of $\delta$.
\item It is a concave function of $\delta$.
\item It is continuous for $\delta > 0$.
\item For any two states
$\omega_1^{X_1B_1R_1}$ and $\omega_2^{X_2B_2R_2}$ and for $\delta,\delta_1,\delta_2 \geq 0$
\[
I_\delta(\omega_1\otimes\omega_2)
= \max_{\delta_1+\delta_2= \delta} \left (I_{\delta_1}(\omega_1) + I_{\delta_2}(\omega_2)\right).
\]
\item $I_{n\delta}(\omega^{\otimes n}) = n I_\delta(\omega)$.
\item $I_0$ and $\widetilde{I}_0$ are additive:
\begin{align*}
I_0(\omega_1\otimes\omega_2) &= I_0(\omega_1) + I_0(\omega_2)
\quad \text{and} \\ \quad
\widetilde{I}_0(\omega_1\otimes\omega_2) &= \widetilde{I}_0(\omega_1) + \widetilde{I}_0(\omega_2).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
1) The non-decrease with $\delta$ is evident from the definition.
2) For this consider $\delta_1,\delta_2\geq 0$, $0<p<1$,
and let $\delta = p\delta_1+(1-p)\delta_2$.
Let furthermore channels ${\mathcal{T}}_i:B\rightarrow W_i$ be given ($i=1,2$) such that for the
states $\sigma_i^{XW_iR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}}_i)\omega$, $I(R:W_i|X)_{\sigma_i} \leq \delta_i$.
\par
Now define $W := W_1 \oplus W_2$, so that $W_1$ and $W_2$ can be considered
mutually orthogonal subspaces of $W$, and define the new channel
${\mathcal{T}} := p{\mathcal{T}}_1 + (1-p){\mathcal{T}}_2:B\rightarrow W$. By the chain rule for the mutual
information, one can check that w.r.t.~$\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega$,
\begin{align*}
I(R:W|X)_\sigma &= p I(R:W_1|X)_{\sigma_1} + (1-p) I(R:W_2|X)_{\sigma_2} \\
&\leq p\delta_1+(1-p)\delta_2 \\
&= \delta,
\end{align*}
and likewise
\[
I(X:W)_\sigma = p I(X:W_1)_{\sigma_1} + (1-p) I(X:W_2)_{\sigma_2}.
\]
Hence, $I_\delta \geq p I(X:W_1)_{\sigma_1} + (1-p) I(X:W_2)_{\sigma_2}$; by maximizing over
the channels, the concavity follows.
3) Properties 1 and 2 imply continuity for $\delta > 0$, since a concave function is continuous on the interior of its domain.
4) First, we prove that
$I_\delta(\omega_1\otimes\omega_2) \leq \max_{\delta_1+\delta_2= \delta} \left( I_{\delta_1}(\omega_1) + I_{\delta_2}(\omega_2) \right)$;
the other direction \aw{of the inequality is trivial from the definition}.
Let ${\mathcal{T}}:B_1 B_2 \to W$ be a CPTP map such that
\begin{align}
\label{eq1}
\delta & \geq I(W:R_1R_2|X_1X_2) \nonumber \\
&= I(W:R_1|X_1X_2)+I(W:R_2|X_1R_1X_2) \\ \nonumber
&= I(WX_2:R_1|X_1)+I(WX_1R_1:R_2|X_2),
\end{align}
where the first equality is due to the chain rule, and the second is due to the independence of $\omega_1$ and $\omega_2$.
We now define the new systems $W_1:=WX_2$ and $W_2:=WX_1R_1$. \aw{Then we have,}
\begin{align}
\label{eq2}
I(W:X_1 X_2) &= I(W:X_2)+I(W:X_1|X_2) \\ \nonumber
&= I(W:X_2)+I(WX_2:X_1)\\ \nonumber
&\leq I(\underbrace{WX_1R_1}_{W_2}:X_2)+I(\underbrace{WX_2}_{W_1}:X_1),
\end{align}
where the second equality is due to the independence of $X_1$ and $X_2$.
The inequality follows \aw{from data processing}.
From \aw{Eq.~(\ref{eq1})} we know that $I(W_1:R_1|X_1)\leq \delta_1$ and $I(W_2:R_2|X_2)\leq \delta_2$
for some $\delta_1+\delta_2= \delta$. Thereby, from \aw{Eq.~(\ref{eq2})} we obtain
\begin{align*}
I_\delta(\omega_1 \otimes \omega_2) &\leq I_{\delta_1}(\omega_1 )+I_{\delta_2}(\omega_2 )\\
&\leq \max_{\delta_1+\delta_2=\delta} \left( I_{\delta_1}(\omega_1)+I_{\delta_2}(\omega_2) \right).
\end{align*}
5) \aw{Now, the multi-copy additivity follows easily from property 4:}
By property 4 of the \aw{lemma}, applied inductively, we have
\[
I_{n\delta}(\omega^{\otimes n}) = \max_{\delta_1+\ldots+\delta_n=n\delta} \left( I_{\delta_1}(\omega)+\ldots+I_{\delta_n}(\omega) \right).
\]
\aw{Here, the right hand side is clearly $\geq n I_\delta(\omega)$ since we can choose all
$\delta_i = \delta$. By the concavity of $I_{\delta}(\omega)$ in $\delta$, on the other hand,
we have for any $\delta_1+\ldots+\delta_n=n\delta$ that
\[
\frac{1}{n}(I_{\delta_1}(\omega)+\ldots+I_{\delta_n}(\omega)) \leq I_{\delta}(\omega),
\]
so the maximum is attained at $\delta_i=\delta$ for all $i=1,\ldots,n$}.
6) Property 4 of the \aw{lemma also} implies that $I_0$ and $\widetilde{I}_0$ are additive.
\end{proof}
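The chain-rule manipulation in Eq.~(\ref{eq1}) can be illustrated in a purely classical setting, where all systems are classical random variables and the channel ${\mathcal{T}}$ is replaced by a classical stochastic map (an assumption made only for this sketch). The following stdlib-Python sketch verifies on a random instance that, when $(X_1,R_1)$ is independent of $(X_2,R_2)$, indeed $I(W:R_1R_2|X_1X_2) = I(WX_2:R_1|X_1)+I(WX_1R_1:R_2|X_2)$.

```python
import itertools
import math
import random

# Classical illustration (all systems classical; assumption for this sketch
# only) of the identity in Eq. (eq1), valid when (X1,R1) _|_ (X2,R2):
#   I(W : R1 R2 | X1 X2) = I(W X2 : R1 | X1) + I(W X1 R1 : R2 | X2).

random.seed(0)

def rand_dist(n):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

# independent sources p(x1, r1), p(x2, r2), and a channel p(w | r1, r2)
X, R, W = range(2), range(2), range(3)
p1 = dict(zip(itertools.product(X, R), rand_dist(4)))
p2 = dict(zip(itertools.product(X, R), rand_dist(4)))
chan = {(r1, r2): rand_dist(len(W)) for r1 in R for r2 in R}

# full joint distribution p(x1, r1, x2, r2, w)
joint = {}
for (x1, r1), (x2, r2), w in itertools.product(p1, p2, W):
    joint[(x1, r1, x2, r2, w)] = p1[(x1, r1)] * p2[(x2, r2)] * chan[(r1, r2)][w]

def H(idx):
    """Shannon entropy of the marginal on the coordinate positions in idx."""
    marg = {}
    for k, v in joint.items():
        key = tuple(k[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

def cmi(a, b, c):
    """I(A : B | C) = H(AC) + H(BC) - H(ABC) - H(C)."""
    return H(a + c) + H(b + c) - H(a + b + c) - H(c)

# coordinate positions: 0=X1, 1=R1, 2=X2, 3=R2, 4=W
lhs = cmi((4,), (1, 3), (0, 2))                          # I(W:R1R2|X1X2)
rhs = cmi((4, 2), (1,), (0,)) + cmi((4, 0, 1), (3,), (2,))
assert abs(lhs - rhs) < 1e-9
```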
\begin{remark}
Our function $I_\delta$ bears a curious resemblance to the so-called
\emph{information bottleneck function} introduced
by Tishby \emph{et al.}~\cite{info-bottleneck}, whose generalization to
quantum information theory has recently been discussed \cite{Salek-QIB,Hirche-QIB}.
Indeed, the concavity and additivity properties of the two functions are proved
by the same principles, although it is not evident to us what,
if any, information-theoretic link exists between $I_\delta$ and the information bottleneck.
\end{remark}
\subsection{Converse bound}\label{subsec: Converse bound}
In this subsection, we use the properties of the function $I_{\delta}(\omega)$ (Lemma~\ref{lemma:I-delta}) to prove
a lower bound on Bob's quantum communication rate.
\begin{theorem}
\label{converse_QCSW}
In the entanglement-assisted model, consider any side information code of block length $n$ and error $\epsilon$.
Then, Bob's quantum communication rate is lower bounded as
\[
R_B \geq \frac12 \left( S(B)+S(B|X) - I_{\delta(n,\epsilon)} - \delta(n,\epsilon) \right),
\]
where $\delta(n,\epsilon) = 4\sqrt{6\epsilon}\log(|X| |B|) +\frac2n h(\sqrt{6\epsilon})$.
Any asymptotically achievable rate $R_B$ is consequently lower bounded as
\[
R_B \geq\frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right).
\]
\end{theorem}
\begin{proof}
\aw{As already discussed in the introduction to this section,}
the encoder of Bob is without loss of generality
an isometry $\aw{U:{B^n B_0} \longrightarrow {C W B_0'}}$.
The existence of a high-fidelity
decoder using $X^n$ as side information
implies that systems $W B_0'$ are decoupled from
system $R^n$ conditional on \aw{$X^n$; indeed, by Lemma \ref{decoupling condition},}
$I(R^n:WB_0'|X'^n) \leq n \delta(n,\epsilon)$.
The first part of the converse reasoning is as follows:
\begin{align*}
nR_B = \log |C|
&\geq S(C) \\
&\geq S(CW B_0')-S(W B_0') \\
&= S(B^n)+S(B_0)-S(W B_0'),
\end{align*}
\aw{where the second inequality is a version of subadditivity, and
the equality in the last line holds because the encoding isometry \aw{$U$}
does not change the entropy; furthermore, $B^n$ and $B_0$ are initially
independent.}
Moreover, the decoder can be dilated to an isometry $\aw{V}: X^n C D_0 \longrightarrow \hat{X}^n \hat{B}^n D_0' W_D$,
where $W_D$ and $D_0'$ are the environment of \aw{Debbie}'s decoding operation and
\aw{the output of \aw{Debbie}'s entanglement, respectively.}
Using the decoupling condition of Lemma \ref{decoupling condition} once more, we have
\begin{align*}
nR_B+S(D_0)&= \log |C| + S(D_0) \\
&\geq S(C)+S(D_0) \\
&\geq S(C D_0) \\
&\geq S(X^nCD_0|{X'}^n) \\
&= S(\hat{X}^n\hat{B}^n D_0' W_D|{X'}^n) \\
&= S(W B_0' R^n|{X'}^n) \\
&\geq S(R^n|{X'}^n)\!+S(WB_0'|{X'}^n) \!\! -\! n \delta(n,\epsilon) \\
&= S(B^n|X^n)\!+S(WB_0'|{X'}^n) \!\! -\! n \delta(n,\epsilon),
\end{align*}
\aw{where the third and fourth line are by subadditivity of the entropy;
the fifth line follows because the decoding isometry $V$ does not change the entropy.
The sixth line holds because for any given $x^n$
the overall state of the systems $\hat{X}^n\hat{B}^n B_0'D_0' W W_DR^n$ is pure.
The penultimate line is due to the decoupling condition (Lemma \ref{decoupling condition}),
and the last line follows because for a given $x^n$ the overall state
of the systems $B^nR^n$ is pure.}
Adding these two relations and dividing by $2n$, we obtain
\begin{align*}
R_B \!\geq \! \frac{1}{2} (S(B)\!+\!S(B|X)) \!-\! \frac{1}{2n} I(X'^n\!\!:\!WB_0') \!-\! \frac{1}{2}\delta(n,\epsilon),
\end{align*}
where the terms $S(B_0)$ and $S(D_0)$ cancel each other out because
$B_0$ and $D_0$ are $K$-dimensional quantum registers with maximally
entangled states $\Phi_K$.
In the above \aw{inequality, the mutual information on the right hand side}
is bounded as
\begin{align*}
I(X'^n:WB_0') \leq I_{n\delta(n,\epsilon)}({\omega^{\otimes n}}) = nI_{\delta(n,\epsilon)}({\omega}).
\end{align*}
\aw{To see this, define the CPTP map $\mathcal{T}:B^n \longrightarrow \widetilde{W}:= WB_0'$ as
$\mathcal{T}(\rho):= \Tr_{CD_0} (U\otimes\openone)(\rho\otimes\Phi_K^{B_0D_0})(U\otimes\openone)^\dagger$.
Then we have $I(R^n:\widetilde{W}|{X'}^n) \leq n\delta(n,\epsilon)$, and hence
the above inequality follows directly from Definition \ref{I_delta}.}
The second statement of the theorem follows because
$\delta(n,\epsilon)$ tends to zero as \aw{$n \rightarrow \infty$ and $\epsilon \rightarrow 0$.}
\end{proof}
\begin{remark}
\label{rem:example}
Notice that the term $\frac{1}{n} I({X'}^n:WB_0')$ is not necessarily small.
For example, suppose \aw{that the source is of the form
$\ket{\psi_x}^{BR} = \ket{\psi_x}^{B'R} \otimes \ket{\psi_x}^{B''}$ for all $x$};
clearly it is possible to perform the coding task by
coding only $B'$ and trashing $B''$ (i.e.~putting it into $W$), because
by having access to $x$ the decoder can reproduce $\psi_x^{B''}$ locally. In
this setting, $\frac{1}{n} I({X'}^n:WB_0')$ generally does not go
to zero, because ${B''}^n$ ends up in $W$.
\end{remark}
\subsection{Achievable rates}\label{subsec:Achievable rates}
In this subsection, we provide achievable rates \aw{both for the unassisted and entanglement-assisted} model.
\begin{theorem}
\label{State_merging_rate}
\aw{In the unassisted model, there exists a sequence of side information codes that
compress Bob's system $B^n$} at the asymptotic qubit rate
\begin{align*}
R_B = \frac{1}{2}\left( S(B)+S(B|X) \right).
\end{align*}
\end{theorem}
\begin{proof}
We recall that in a side information code, Bob aims to send his system $B^n$ to Debbie while she has access to the side information system $X^n$, as explained at the beginning of this section.
We can use the fully quantum Slepian-Wolf (FQSW) protocol, also called the coherent state merging
protocol \cite[Section~7]{Abeyesinghe2009}, as a subprotocol, since it uses the entanglement
fidelity as its decodability criterion, which is more stringent than
the average fidelity defined in (\ref{F_QCSW_unassisted1}).
Namely, let
\[
\ket{\Omega}^{X X^{\prime} B R}
=\sum_{x \in \mathcal{X}} \sqrt{p(x)} \ket {x}^{X} \ket {x}^{X'} \ket{\psi_{x}}^{B R}
\]
be the source in \aw{the} FQSW problem, where $B$ is the system to be compressed, $X$ is the side information at the decoder, and $R$ and $X'$ are the reference systems. Bob applies the corresponding FQSW encoding map ${\mathcal{E}}_B: B^n \longrightarrow C$ and sends system $C$ to Debbie, who then applies the FQSW decoding map ${\mathcal{D}}: X^n C \longrightarrow X^n \hat{B}^n$ to her side information system $X^n$ and the compressed information $C$ to reconstruct the system $\hat{B}^n$. These encoding and decoding operations preserve the entanglement fidelity $F_e$, which is the decodability criterion of the FQSW problem:
\begin{align*}
\label{F_QCSW}
F_e &\!
\!=\! \! F \! \! \left(\! \Omega^{X^{\! n}\! {X'}^n\! B^{\! n} \! R^n } \! \! \! ,\! \left(\mathcal{D} \! \circ\! ({\operatorname{id}}_{\! X^n}\! \! \otimes \! \mathcal{E}_{\! B}\! )\! \otimes\! {\operatorname{id}}_{{X'}^n\! R^n}\! \right) \! \Omega^{\! X^{\! n} \! {X'}^{ n} \! B^n\! R^n } \! \! \right) \nonumber \\
&\! \! \! \leq \! \! F \! \! \left(\! \omega^{X^n \! {X'}^n\! B^n \! R^n \! } \! \! ,\! \left(\mathcal{D} \! \circ\! ({\operatorname{id}}_{X^{\! n}}\! \! \otimes \! \mathcal{E}_{\! B}\! ) \! \otimes\! {\operatorname{id}}_{{X'}^n \! R^n}\! \right) \! \omega^{X^{\! n} \! {X'}^n\! B^{\! n} \! R^{n} \! } \! \right) \\
& \! \! = \overline{F},
\end{align*}
where the inequality is due to the monotonicity of fidelity
under \aw{CPTP maps, namely the projective measurement on system $X'$ in the computational
basis $\{ \ketbra{x}{x}\}$}.
Therefore, if an encoding-decoding scheme attains an entanglement fidelity for
the FQSW problem going to $1$, then it will have the average fidelity for
the CQSW problem going to $1$ as well. Hence, the FQSW rate
\begin{align*}
R_B= \frac{1}{2}I(B:X^{\prime} R)_{\Omega}=\frac{1}{2}(S(B)_{\omega}+S(B|X)_{\omega}),
\end{align*}
\aw{is achievable.}
\end{proof}
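To make the rate formula concrete, the following stdlib-Python sketch evaluates $\frac12(S(B)+S(B|X))$ for a toy two-state source of our own devising (an illustration, not an example from the text), chosen so that both reduced states $\psi_x^B$ are diagonal and all entropies reduce to Shannon entropies of the spectra.

```python
import math

# Toy evaluation (illustrative assumption: this particular source is ours,
# not from the text) of the merging rate R_B = (S(B) + S(B|X)) / 2 for
#   p(0) = p(1) = 1/2,  |psi_0>^{BR} = |00>,
#   |psi_1>^{BR} = cos(t)|00> + sin(t)|11>.
# Both reduced states psi_x^B are diagonal, so all entropies below are
# Shannon entropies of the eigenvalue spectra.

def shannon(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

t = math.pi / 5
c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2

# S(B|X) = sum_x p(x) S(psi_x^B), since conditioning is on classical X
S_B_given_X = 0.5 * shannon([1.0]) + 0.5 * shannon([c2, s2])
# S(B) is the entropy of the average state omega^B = (psi_0^B + psi_1^B)/2
S_B = shannon([0.5 * (1 + c2), 0.5 * s2])
rate = 0.5 * (S_B + S_B_given_X)

# sanity: conditioning on X reduces entropy, and the rate sits in between
assert 0 <= S_B_given_X <= S_B <= 1.0
assert 0.5 * S_B <= rate <= S_B
```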
\begin{remark}
Notice that for the source considered at the end of the previous subsection
\aw{in Remark \ref{rem:example}}, where
$\ket{\psi_x}^{BR} = \ket{\psi_x}^{B'R} \otimes \ket {\psi_x}^{B''}$
for all $x$, we can achieve a rate strictly smaller than the rate
stated in the above theorem. The reason is that $R$ is only
entangled with $B'$, so clearly it is possible to perform the coding task by
coding only $B'$ and trashing $B''$ because by having access to $x$
the decoder can reproduce the state $\psi_x^{B''}$ locally.
Therefore, the rate $\frac{1}{2} (S(B')+S(B'|X))$ is achievable by
applying coherent state merging as above.
\end{remark}
\medskip
The previous observation shows that in general, the rate $\frac12(S(B)+S(B|X))$
from Theorem \ref{State_merging_rate} is not optimal. Looking for a systematic
way of obtaining better rates, we arrive at the following result in the entanglement-assisted
model.
\begin{theorem}
\label{QSR_achievability}
\aw{In the entanglement-assisted model, there exists a sequence of side information codes}
with the following asymptotic entanglement and qubit rates:
\begin{align*}
E&=\frac{1}{2}\left(I(C:W)_{\sigma}-I(C:X)_{\sigma}\right)
\quad \text{and}\quad\\
R_B&= \frac{1}{2}\left(S(B)_{\omega}+S(B|X)_{\omega}-I(X:W)_{\sigma} \right), \nonumber
\end{align*}
where $C$ and $W$ are, respectively, the system and environment of an
isometry $V:{B\rightarrow CW}$ on $\omega^{XBR}$
producing the state $\sigma^{XCWR} = (\openone_{XR}\otimes V)\omega^{XBR} (\openone_{XR}\otimes V)^{\dagger}$, such that $I(W:R|X)_{\sigma}=0$.
\end{theorem}
\begin{proof}
Notice that there is always an isometry $V:{B\rightarrow CW}$
with $I(W:R|X)_{\sigma}=0$; a trivial example is the isometry $V:{B\rightarrow B W}$, where $W$ is a trivial system in the state $\ketbra{0}^W$.
First, Bob applies \aw{the} isometry $V$ to each copy \aw{of the $n$ systems $B_1,\ldots,B_n$}:
\begin{align*}
&\sigma^{X X^{\prime} CWR} \\
&\quad \quad =\!\! (V^{B \to C W} \!\!\otimes\! \openone_{X X^{\prime}R})
\omega^{X X^{\prime} BR}
(V^{B \to C W} \!\!\otimes\! \openone_{X X^{\prime}R})^\dagger \\
&\quad \quad = \sum_x p(x) \ketbra{x}^{X}\otimes \ketbra{x}^{X^{\prime}}\otimes \ketbra{\phi_{x}}^{CWR}.
\end{align*}
Now consider the following source state from which the state $\sigma^{X X^{\prime} CWR}$ is obtained by applying projective measurement on system $X'$ in the computational
basis $\{ \ketbra{x}{x}\}$,
\[
\ket{\Sigma}^{X X^{\prime} CW R}
=\sum_{x \in \mathcal{X}} \sqrt{p(x)} \ket {x}^{X} \ket {x}^{X'} \ket{\phi_{x}}^{CW R}.
\]
For this source, suppose that Bob holds the $CW$ systems and \aw{Debbie} holds
the $X$ system, and that Bob wishes to send system $C$ to \aw{Debbie} while keeping
$W$ for himself.
For many copies of the above state, \aw{the} parties can apply \aw{the} quantum state redistribution
(QSR) protocol \cite{Yard2009,Oppenheim2008} for transmitting $C$, having access to system $W$ as
side information at the encoder and to $X$ as side information at the decoder.
According to this protocol, Bob needs exactly the rate of
$R_B = \frac{1}{2}I(C:X^{\prime} R |X)_{\Sigma}
= \frac{1}{2}(S(B)_{\omega}+S(B|X)_{\omega}-I(X:W)_{\sigma})$
qubits of communication.
\aw{The protocol requires the rate of $\frac{1}{2} I(C:W)_{\Sigma}=\frac{1}{2} I(C:W)_{\sigma}$
ebits of entanglement shared between the encoder and decoder, and at the end of the protocol
the rate of $\frac{1}{2} I(C:X)_{\Sigma}=\frac{1}{2} I(C:X)_{\sigma}$ ebits of entanglement is
distilled between the encoder and the decoder (see equations (1) and (2) in \cite{Yard2009}).}
This protocol \aw{attains high} fidelity for \aw{the} state $\Sigma^{X^n {X'}^n C^n W^n R^n }$,
and consequently for \aw{the} state $\sigma^{X^n {X'}^n C^n W^n R^n }$ due to \aw{the} monotonicity
of fidelity under \aw{CPTP maps}:
\begin{align} \label{F_QSR}
1\!\!-\!\epsilon \! &\leq \!F \!\left(\!{\Sigma}^{X^n {X'}^n C^n W^n R^n }\!\!\!\!\otimes\! \Phi_L^{B_0'D_0'}\!\!,{\hat{\Sigma}}^{\hat{X}^n {X'}^n \hat{ C}^n \! \hat{W}^n\! R^n \! B'_0 \! D'_0 }
\right) \nonumber \\
&\leq \!F \!\left(\!{\sigma}^{X^n {X'}^n C^n W^n R^n }\!\!\!\!\otimes\! \Phi_L^{B_0'D_0'}\!\!,{\hat{\sigma}}^{\hat{X}^n {X'}^n \hat{ C}^n \! \hat{W}^n\! R^n \! B'_0 \! D'_0 }
\right)\!\!,
\end{align}
where
\begin{align*}
&{\hat{\Sigma}}^{\hat{X}^n \!{X'}^n \!\hat{ C}^n \! \hat{W}^n\! R^n \! B'_0 \! D'_0 } \\
&\!\! =\!\!\left(\!\mathcal{D} \!\circ\! (\!{\operatorname{id}}_{X^n \!D_0} \!\otimes \!\mathcal{E}_{C\!W \!B_0}\!)\! \otimes\! {\operatorname{id}}_{{X'}^n\! R^n}\!\right)
\!\!\Sigma^{\!X^n\! {X'}^n\! C^n\! W^n \!R^n } \!\!\!\otimes\! \Phi_K^{\!B_0\!D_0}\!\!,
\end{align*}
and
\begin{align*}
&{\hat{\sigma}}^{\hat{X}^n \!{X'}^n \!\hat{ C}^n \! \hat{W}^n\! R^n \! B'_0 \! D'_0 } \\
&\!\! =\!\!\left(\!\mathcal{D} \!\circ\! (\!{\operatorname{id}}_{X^n \!D_0} \!\otimes \!\mathcal{E}_{C\!W \!B_0}\!)\! \otimes\! {\operatorname{id}}_{{X'}^n\! R^n}\!\right)
\!\sigma^{\!X^n\! {X'}^n\! C^n\! W^n \!R^n } \!\!\!\otimes\! \Phi_K^{\!B_0\!D_0}\!\!\!,
\end{align*}
and $\mathcal{E}_{CW B_0}$ and $\mathcal{D}$ are respectively the
encoding and decoding operations of the QSR protocol.
The condition $I(W:R|X)_{\sigma}=0$ implies that for every $x$ the systems $W$ and $R$ are decoupled:
\begin{align*}\label{Decoupling_0}
\phi_{x}^{WR}=\phi_{x}^W \otimes \phi_{x}^R.
\end{align*}
By Uhlmann's theorem \cite{UHLMANN1976,Jozsa1994_2},
there exist isometries $V_{x}:{C\rightarrow VB}$ for all $x \in \mathcal{X}$, such that
\[
(\openone\otimes V_{x}^{C\rightarrow VB}) \ket{\phi_{x}}^{CWR}
=\ket{\nu_{x}}^{VW} \otimes \ket{\psi_{x}}^{BR}.
\]
After applying the decoding operation $\mathcal{D}$ of QSR,
\aw{Debbie} applies the isometry $V_{x}:{C\rightarrow VB}$ for each $x$,
which does not change the fidelity (\ref{F_QSR}).
By tracing out the unwanted systems $V^n W^n$,
due to the monotonicity of \aw{the} fidelity under partial trace,
the fidelity defined in (\ref{F_QCSW_assisted}) \aw{will go to $1$} in this encoding-decoding scheme.
\end{proof}
\begin{remark}
In Theorem \ref{QSR_achievability}, the smallest achievable rate,
when unlimited entanglement is available,
is equal to $\frac{1}{2}(S(B)+S(B|X)-I_0)$.
This rate resembles the converse bound
$R_B \geq \frac{1}{2}(S(B)+S(B|X)-\widetilde{I}_0)$,
except \aw{that $\widetilde{I}_0 \geq I_0$}.
In the definition of $\widetilde{I}_0$, it is unclear whether the
limit of $\delta$ going to 0 can be taken directly: since there is no dimension bound on
the systems $C$ and $W$, compactness cannot be invoked to prove that
$\widetilde{I}_0$ and $I_0$ are equal.
\end{remark}
\aw{\begin{remark}
Looking again at the entanglement rate in Theorem \ref{QSR_achievability},
$E=\frac{1}{2}\left(I(C:W)_{\sigma}-I(C:X)_{\sigma}\right)$, we reflect that
there may easily be situations where $E\leq 0$, meaning that no entanglement is
consumed, and in fact no initial entanglement is necessary. In this case,
the theorem improves the rate of Theorem \ref{State_merging_rate} by the
amount $\frac12 I(X:W)$.
This motivates the definition of the following variant of $I_0$,
\begin{align*}
I_{0-}(\omega) :=& \sup I(X:W) \text{ s.t. } I(R:W|X)=0,\\
& I(C:W)-I(C:X) \leq 0,
\end{align*}
where the supremum is over all isometries $V:B\rightarrow CW$.
\par
As a corollary to these considerations, in the unassisted model
the rate $\frac{1}{2}\left(S(B)+S(B|X)-I_{0-} \right)$ is achievable.
\end{remark}}
\subsection{Optimal compression rate for generic sources}
\label{sec:generic side info}
In this subsection, we find the optimal compression rate for
\emph{generic} sources, by which we mean any source except for a
submanifold of lower dimension within the set of all sources.
Concretely, we will consider sources where there is at least one $x$
for which the reduced state $\psi_x^B= \Tr_R \ketbra{\psi_x}{\psi_x}^{BR}$
has full support on $B$.
In this setting, coherent state merging as a subprotocol gives the optimal
compression rate, so not only does the protocol not use any initial
entanglement, but some entanglement is distilled at the end of the protocol.
\begin{theorem}
\label{theorem:generic optimal rate}
In both the unassisted and entanglement-assisted models, for any side information code of a generic source, the asymptotic compression rate $R_B$ of Bob is lower bounded as
\begin{align*}
R_B \geq \frac{1}{2}\left(S(B)+S(B|X)\right),
\end{align*}
so the protocol of Theorem \ref{State_merging_rate} has optimal rate for a generic source.
Moreover, in that protocol no prior entanglement is needed and
a rate of $\frac{1}{2}I(X:B)$ ebits of entanglement
is distilled between the encoder and decoder.
\end{theorem}
\begin{proof}
The converse bound of Theorem \ref{converse_QCSW} states that
\aw{the} asymptotic quantum communication rate of Bob is lower bounded as
\begin{align}
R_B \geq \frac{1}{2}\left(S(B)+S(B|X)-\widetilde{I}_0 \right), \nonumber
\end{align}
where $\widetilde{I}_0$ \aw{comes from} Definition \ref{I_delta}. We will show that for generic sources, $\widetilde{I}_0 = I_{0}= 0$.
Moreover, Theorem \ref{State_merging_rate} states that using coherent state
merging, the asymptotic qubit rate of $\frac{1}{2}(S(B)+S(B|X))$ is achievable,
that no prior entanglement is required and a rate of
$\frac{1}{2}I(X:B)$ ebits of entanglement is distilled between the encoder and the decoder.
We show that for any CPTP map ${\mathcal{T}}:B\to W$, which acts on a generic $\omega^{XBR}$ and produces
state $\sigma^{XWR} = ({\operatorname{id}}_{XR}\otimes {\mathcal{T}})\omega^{XBR}$ such that $I(R:W|X)_{\sigma}\leq \delta$
for $\delta \geq 0$, the quantum mutual information
$I(X:W)_{\sigma} \leq \delta' \log |X| +2h (\frac12\delta')$ where $\delta'$ is defined in
\aw{Eq.~(\ref{phix_phi0_distance}) below}.
Thus, we obtain
\[
\widetilde{I}_0 = \lim_{\delta \searrow 0} I_\delta = 0.
\]
\aw{To show this claim, we proceed as follows.} From $I(R:W|X)_{\sigma}\leq \delta$ we have
\begin{align*}
I(R:W|X=x)_\sigma \leq \frac{\delta}{p(x)} \quad \quad \forall x \in \mathcal{X},
\end{align*}
\aw{so} by Pinsker's inequality \cite{Schumacher2002} we obtain
\begin{align*}
\left\| \phi_x^{W R}- \phi_x^{W} \otimes \phi_x^{R} \right\|_1 \leq \sqrt{\frac{2 \delta \ln 2}{p(x)}} \quad \quad \forall x \in \mathcal{X}.
\end{align*}
By Uhlmann's theorem (Lemma~\ref{lemma:Uhlmann1} and Lemma~\ref{lemma:Uhlmann2}), there exists an isometry $V_x:{C \to BV}$ such that
\begin{align}\label{eq3}
&\left\| (V_x \otimes \openone_{WR}) \phi_x^{CW R} (V_x \otimes \openone_{WR})^{\dagger}
- \theta_x^{WV} \otimes \psi_x^{BR} \right\|_1 \nonumber \\
& \quad \quad \quad \quad \quad \quad \leq \sqrt{\sqrt{ \frac{\delta\ln 2}{2p(x)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(x)}}\right)},
\end{align}
where $\theta_x^{WV}$ is a purification of $\phi_x^{W}$.
Since the source is generic, by definition there is an $x$, say $x=0$,
for which $\psi_0^B$ has full support on
$\mathcal{H}_B$, i.e.~$\lambda_0:=\lambda_{\min}(\psi_0^B)>0$.
By Lemma \ref{full_support_lemma} in Appendix \ref{Miscellaneous_Facts},
for any $\ket{\psi_x}^{BR}$ there is an operator $T_x$ acting on the reference system such that
\begin{align*}
\ket{\psi_x}^{BR} = (\openone_B \otimes T_x) \ket{\psi_0}^{BR}. \nonumber
\end{align*}
Using this fact, we show that the decoding isometry $V_0$ in \aw{Eq.~(\ref{eq3})} works for all states:
\begin{align*}
& \bigl\| (V_0 \otimes \openone_{WR}) \phi_x^{CWR} (V_0^{\dagger} \otimes \openone_{W R})
- \theta_0^{WV} \otimes \psi_x^{BR} \bigr\|_1 \\
&= \bigl\|\!(\!V_0 \!\otimes \!\openone_{W\!R}) (\!\openone_{C\!W} \!\otimes \!T_x \!) \phi_0^{C\!W\!R} (\openone_{C\!W} \!\otimes \!T_x)^{\dagger} (\!V_0^{\dagger}\otimes \openone_{W \!R}\!) \\
&\quad \quad\quad\quad - \theta_0^{W\!V} \!\otimes \!(\openone_B \otimes T_x)\psi_0^{B\!R}(\openone_B \!\otimes\! T_x)^{\dagger} \bigr\|_1 \\
&= \!\!\bigl\| \!(\!\openone_{BVW}\!\otimes \! T_x\!)(V_0\!\otimes\!\openone_{W\!R}) \phi_0^{C\!W\!R} \!(\!V_0^{\dagger}\!\otimes\!\openone_{W \!R}\!) (\!\openone_{B\!V\!W}\!\otimes \!T_x^{\dagger}\!)
\\
&\quad \quad\quad\quad - (\openone_{BVW}\! \otimes \! T_x)\theta_0^{WV} \otimes\psi_0^{B\!R}(\openone_{B\!V\!W} \!\otimes\! T_x^{\dagger}) \bigr\|_1 \\
&\leq \norm{\openone_{BVW} \otimes T_x}_{\infty}^2 \\
& \quad \quad\quad \norm{ \!(\!V_0 \!\otimes\! \openone_{WR})\! \phi_0^{C\!W\!R} (V_0^{\dagger} \!\otimes\! \openone_{W\!R}) - \theta_0^{W\!V} \!\otimes\!\psi_0^{B\!R}\!}_1 \\
&\leq \frac{1}{\lambda_0} \sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)},
\end{align*}
where the \aw{last two} inequalities follow from Lemma \ref{T_norm1_inequality}
and Lemma \ref{full_support_lemma}, respectively.
By tracing out the systems $VBR$ in the above chain of inequalities, we get
\begin{align}
\label{phix_phi0_distance}
\norm{\phi_x^{W}- \phi_0^{W}}_1
\leq \frac{1}{\lambda_0}\sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}} \left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)} \aw{=: \delta'}.
\end{align}
Thus, by the triangle inequality we obtain
\begin{align}
\label{phi_phi0_distance}
& \norm{\underbrace{\sum_x p(x) \ketbra{x}{x}^X \otimes \phi_x^{W}}_{\sigma^{XW}}
- \underbrace{\sum_x p(x) \ketbra{x}{x}^X \otimes \phi_0^{W}}_{\aw{=:}\sigma_0^{XW}}}_1 \nonumber \\
&\quad \quad \quad \quad\quad \quad \leq \sum_x p(x) \norm{ \phi_x^{W}- \phi_0^{W}}_1 \nonumber \\
&\quad \quad\quad \quad\quad \quad\leq \frac{1}{\lambda_0}\sqrt{\sqrt{ \frac{\delta\ln 2}{2p(0)}}
\left(2-\sqrt{ \frac{\delta\ln 2}{2p(0)}}\right)} = \delta'.
\end{align}
By applying \aw{the Alicki-Fannes inequality in the form of Lemma \ref{AFW lemma},} to
\aw{Eq.}~(\ref{phi_phi0_distance}), we have
\begin{align*}
I(X\!:\!W)_\sigma &\!\!=\!S(\!X\!)_{\sigma}\!\!-\!S(X|W)_{\sigma} \!+\!S(X|W)_{\sigma_0}\!\!-\!\!S(X|W)_{\sigma_0} \\
&= S(X|W)_{\sigma_0}-S(X|W)_{\sigma} \\
&\leq \delta' \log |X| + 2h\left(\frac12\delta'\right),
\end{align*}
and the right \aw{hand side} of the above inequality vanishes for \aw{$\delta\rightarrow 0$}.
\end{proof}
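The Pinsker-type bound invoked in the proof can be checked numerically in the classical (commuting) case. The following stdlib-Python sketch (illustrative only) verifies $\|p-q\|_1 \leq \sqrt{2\ln 2 \cdot D(p\|q)}$, with $D$ in bits, on random pairs of distributions.

```python
import math
import random

# Numerical check (classical case, illustrative only) of the Pinsker-type
# bound used in the proof:  || p - q ||_1 <= sqrt(2 * ln 2 * D(p || q)),
# where D is the relative entropy measured in bits.

random.seed(0)

def rand_dist(n):
    # small offset keeps all entries strictly positive, so D(p||q) is finite
    w = [random.random() + 1e-3 for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

for _ in range(1000):
    n = random.randint(2, 8)
    p, q = rand_dist(n), rand_dist(n)
    tvd = sum(abs(a - b) for a, b in zip(p, q))            # trace distance
    rel = sum(a * math.log2(a / b) for a, b in zip(p, q))  # D(p||q) in bits
    assert tvd <= math.sqrt(2 * math.log(2) * rel) + 1e-12
```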
\section{Towards the full rate region}\label{sec: full problem}
In this section, we consider the full rate region of the distributed compression of
a \aw{classical-quantum} source.
\begin{theorem}
\label{unknown.theorem}
In the unassisted model, for distributed compression of a \aw{classical-quantum} source,
the rate pairs satisfying the following inequalities are achievable:
\begin{equation}\begin{split}
\label{eq:inner}
R_X &\geq S(X|B), \\
R_B &\geq\frac{1}{2}\left(S(B)+S(B|X)\right), \\
R_X+2R_B &\geq S(B)+S(XB).
\end{split}\end{equation}
\end{theorem}
\begin{proof}
From the Devetak-Winter code, Eq.~(\ref{eq:DW}), and
the code based on state merging, Theorem \ref{State_merging_rate}, two rate
points in the unassisted (and hence also in the unlimited entanglement-assisted)
rate region are:
\begin{align*}
(R_X,R_B) &= (S(X|B),S(B)), \\
(R_X,R_B) &= \left(S(X),\frac12(S(B)+S(B|X))\right).
\end{align*}
Their upper-right convex closure is hence an inner bound to the rate region,
depicted schematically in Fig.~\ref{fig:inner}.
\end{proof}
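To make the two corner points concrete, the following minimal Python sketch (assuming numpy; the two-state source with overlap $\cos(\pi/8)$ is a hypothetical example, not taken from the text) computes the Devetak-Winter point $(S(X|B),S(B))$ and the merging point $(S(X),\frac12(S(B)+S(B|X)))$ for a toy classical-quantum source, and checks that both saturate the sum-rate bound $R_X+2R_B \geq S(B)+S(XB)$:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev*np.log2(ev)))

# Hypothetical cq source: uniform X, non-orthogonal pure B-states.
p = np.array([0.5, 0.5])
psi0 = np.array([1.0, 0.0])
theta = np.pi/8
psi1 = np.array([np.cos(theta), np.sin(theta)])
states = [np.outer(v, v) for v in (psi0, psi1)]

# Joint cq state on XB (block diagonal) and its marginals.
d = 2
rho_XB = np.zeros((2*d, 2*d))
for x, s in enumerate(states):
    rho_XB[x*d:(x+1)*d, x*d:(x+1)*d] = p[x]*s
rho_B = sum(p[x]*states[x] for x in range(2))

S_X  = entropy(np.diag(p))
S_B  = entropy(rho_B)
S_XB = entropy(rho_XB)
S_X_given_B = S_XB - S_B        # S(X|B)
S_B_given_X = S_XB - S_X        # S(B|X) = 0 here: pure states given x

DW = (S_X_given_B, S_B)                     # Devetak-Winter point
M  = (S_X, 0.5*(S_B + S_B_given_X))         # merging point
# With I(X:B) > 0, merging saves on R_B at the price of a larger R_X.
assert M[1] < DW[1] and M[0] > DW[0]
# Both points lie on the line R_X + 2 R_B = S(B) + S(XB).
assert abs(DW[0] + 2*DW[1] - (S_B + S_XB)) < 1e-9
assert abs(M[0] + 2*M[1] - (S_B + S_XB)) < 1e-9
```

The line segment between the two points is exactly the sloped facet of the region in Fig.~\ref{fig:inner}.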
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{inner-bound.jpg}
\caption{\aw{The region of all pairs $(R_X,R_B)$ satisfying the three conditions of
Eq.~(\ref{eq:inner}); it is the upper-right convex closure of the
Devetak-Winter (DW) and the merging (M) point.
All of these points are achievable in the unassisted model}.}
\label{fig:inner}
\end{figure}
For generic sources we find that this is in fact the rate region.
However, in general, we only present some outer and inner bounds (achievable rates),
which show the rate region to be much more complicated than that of the
classical Slepian-Wolf problem.
\aw{\subsection{General converse bounds}
\label{sec:Converse Bounds in General}
For distributed compression of a \aw{classical-quantum} source
in general, we start with a general converse bound.
\begin{theorem}
\label{theorem:full converse}
In the entanglement-assisted model, the asymptotic rate pairs for distributed compression of a
\aw{classical-quantum} source are lower bounded as
\begin{equation}\begin{split}
\label{eq:general-converse}
R_X &\geq S(X|B), \\
R_B &\geq \frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right), \\
R_X + 2R_B &\geq S(B)+S(BX)-\widetilde{I}_0.\\
\end{split}\end{equation}
In the unassisted model, in addition to the above lower bounds,
the asymptotic rate pairs are bounded as
\[
R_X + R_B \geq S(XB).
\]
\end{theorem}
\begin{proof}
The individual lower bounds have been established already:
$R_X\geq S(X|B)$ is from \cite{Devetak2003,Winter1999}, in a slightly different source model.
However, it also holds in our system model if Bob sends his information using
unlimited communication such that \aw{Debbie} can decode it perfectly. Namely, notice
that the fidelity (\ref{F_QCSW_unassisted1}) is more stringent than the decoding
criterion of \cite{Devetak2003,Winter1999}, so any converse bound considering the
decoding criterion of \cite{Devetak2003,Winter1999} is also a converse bound in our system model.
The bound $R_B\geq \frac{1}{2}(S(B)+S(B|X)-\widetilde{I}_0)$ is from
Theorem \ref{theorem:generic optimal rate}. These two bounds hold in the unassisted,
as well as the entanglement-assisted model.
In the unassisted model, the rate sum lower bound $R_X + R_B \geq S(XB)$ has been
argued in \cite{Devetak2003,Winter1999}, too. As a matter of fact, for any distributed compression
scheme for the source, ${\mathcal{E}}_X\otimes{\mathcal{E}}_B$ jointly describes a Schumacher compression scheme
with asymptotically high fidelity. Thus, its rate must be asymptotically lower bounded
by the joint entropy of the source, $S(XB)$ \cite{Schumacher1995,Jozsa1994_1,Barnum1996,Winter1999}.
This leaves the bound $R_X + 2R_B \geq S(B)+S(BX)-\widetilde{I}_0$ to be proved in
the entanglement-assisted model, which we tackle now.
The encoders of Alice and Bob are isometries $U_X:{X^n \to C_X W_X}$ and
$U_B:{B^nB_0 \to C_B W_B B_0'}$, respectively. They send their respective compressed systems
$C_X$ and $C_W$ to \aw{Debbie} and keep the environment parts $W_X$ and $W_B$ for themselves.
Then, \aw{Debbie} applies the decoding isometry $V:{C_X C_B D_0 \to \hat{X}^n\hat{B}^n W_D D_0'}$,
where $\hat{X}^n\hat{B}^n D_0'$ are the output systems, and $W_D$ and $D_0'$ are the
environment of \aw{Debbie}'s decoding isometry and her output entanglement, respectively.
We first bound the following sum rate:
\begin{align}
\label{eq sum rate}
&nR_X+nR_B+S(D_0) \nonumber \\
&\geq S(C_X)+S(C_B)+S(D_0) \nonumber\\
&\geq S(C_XC_BD_0) \nonumber\\
&= S(\hat{X}^n\hat{B}^n W_D D_0') \nonumber\\
&= S(\hat{X}^n\hat{B}^n) + S(W_D D_0'|\hat{X}^n\hat{B}^n) \nonumber\\
&\geq S(\hat{X}^n\hat{B}^n) + S(W_D D_0'|\hat{X}^n\hat{B}^nX'^n) \nonumber\\
&\geq S(X^n B^n) + S(W_D D_0'|\hat{X}^n\hat{B}^nX'^n) - n\sqrt{2\epsilon} \log(|X| |B|) - h(\sqrt{2\epsilon}) \nonumber\\
&\geq S(X^n B^n) + S(W_D D_0'|X'^n) - 2n\delta(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_XW_B B_0'|X'^n) - S(R^n\hat{B}^n\hat{X}^n|X'^n) - 2n\delta(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_XW_B B_0'|X'^n) - 2n\delta(n,\epsilon) - n\delta'(n,\epsilon) \nonumber\\
&= S(X^n B^n) + S(W_X|X'^n) + S(W_B B_0'|X'^n) - 2n\delta(n,\epsilon) - n\delta'(n,\epsilon) \nonumber\\
&\geq S(X^n B^n) + S(W_B B_0'|X'^n) - 2n\delta(n,\epsilon) - n\delta'(n,\epsilon),
\end{align}
where the third line is by subadditivity, and the equality in the fourth line follows because
the decoding isometry $V$ does not change the entropy. Then, in the fifth and sixth lines
we use the chain rule and strong subadditivity of entropy.
The inequality in the seventh line follows from the decodability of the systems $X^nB^n$:
the fidelity criterion (\ref{F_QCSW_assisted}) implies that the output state on systems
$\hat{X}^n\hat{B}^n$ is $2\sqrt{2\epsilon}$-close to the original state $X^nB^n$ in trace norm;
then apply \aw{the} Fannes inequality (\aw{Lemma} \ref{Fannes-Audenaert inequality}).
The eighth line follows from the decoupling condition (Lemma \ref{decoupling condition}),
which implies that
$I(W_D D_0':\hat{X}^n\hat{B}^n|{X'}^n) \leq n\delta(n,\epsilon)
= 4n\sqrt{6\epsilon} \log(|X| |B|) + 2 h(\sqrt{6\epsilon})$.
In the ninth line, we use that for any given $x^n$, the overall state of
$W_XW_B W_D B_0' D_0'R^n\hat{B}^n\hat{X}^n$ is pure, and invoking subadditivity.
In the tenth line, we use the decoding fidelity (\ref{F_QCSW_assisted}) once
more, which says that the output state on systems
$\hat{X}^n\hat{B}^nR^n{X'}^n$ is $2\sqrt{2\epsilon}$-close to the original
state $X^nB^nR^n{X'}^n$ in trace norm;
we then apply the Alicki-Fannes inequality (Lemma~\ref{AFW lemma}). Noticing that for a given $x^n$ the state on systems $X^nB^nR^n$ is pure, so that $S(X^nB^nR^n|{X'}^n)=0$, we obtain:
\begin{align}\label{eq: delta'}
& \abs{S(\hat{X}^n\hat{B}^nR^n|{X'}^n)-S(X^nB^nR^n|{X'}^n)} \nonumber \\
&\quad= S(\hat{X}^n\hat{B}^nR^n|{X'}^n) \nonumber\\
&\quad\leq 2n \sqrt{2\epsilon} \log (|X| |B| |R|) + (1+\sqrt{2\epsilon})\,h\!\left(\frac{\sqrt{2\epsilon}}{1+\sqrt{2\epsilon}}\right) \nonumber\\
&\quad\leq 4n \sqrt{2\epsilon} \log (|X| |B|) + (1+\sqrt{2\epsilon})\,h\!\left(\frac{\sqrt{2\epsilon}}{1+\sqrt{2\epsilon}}\right) \nonumber\\
&\quad=: \delta'(n,\epsilon),
\end{align}
where in the penultimate line, we can without loss of generality assume $|R| \leq |X| |B|$.
The last equality in Eq.~(\ref{eq sum rate}) follows because for
a given $x^n$ the encoded states of Alice and Bob are independent.
Moreover, we bound $R_B$ as follows:
\begin{align}
\label{eq RB lower bound}
nR_B &\geq S(C_B) \nonumber\\
&\geq S(C_B|W_BB_0') \nonumber\\
&= S(C_BW_BB_0') - S(W_BB_0') \nonumber\\
&= S(B^nB_0) - S(W_BB_0') \nonumber\\
&= S(B^n) + S(B_0) - S(W_BB_0').
\end{align}
Adding \aw{Eqs.}~(\ref{eq sum rate}) and (\ref{eq RB lower bound}), and after
cancellation of $S(B_0)=S(D_0)$, we get
\begin{align}
\label{eq: lower bound R_X+2R_B}
R_X&+2R_B \nonumber \\
&\geq S(B) + S(X B) - \frac{1}{n}I(X'^n:W_B B_0') - 2\delta(n,\epsilon) - \delta'(n,\epsilon) \nonumber\\
&\geq S(B) + S(X B) - \frac{1}{n}I_{n\delta(n,\epsilon)}(\omega^{\otimes n}) - 2\delta(n,\epsilon) - \delta'(n,\epsilon) \nonumber\\
&= S(B) + S(X B) - I_{\delta(n,\epsilon)}(\omega) - 2\delta(n,\epsilon) - \delta'(n,\epsilon),
\end{align}
where, given that $I(R^n:B_0'W_B|X'^n) \leq n\delta(n,\epsilon)$, which we have from the
decoupling condition (Lemma \ref{decoupling condition}), the second inequality follows directly
from Definition \ref{I_delta}, just as in the proof of Theorem \ref{converse_QCSW}.
The equality in the last line follows from Lemma \ref{lemma:I-delta}.
In the limit of $n\rightarrow\infty$ and $\epsilon\rightarrow 0$, we have
$\delta(n,\epsilon) \rightarrow 0$ and $\delta'(n,\epsilon) \rightarrow 0$, and so $I_{\delta(n,\epsilon)}$ \aw{converges}
to $\widetilde{I}_0$.
\end{proof}}
\subsection{General achievability bounds}
\label{sec:Achievability Bounds in General}
For general, non-generic sources, the achievability bounds of Theorem \ref{unknown.theorem}
and the outer bounds of Theorem \ref{theorem:full converse} do not match. Here we present several more
general achievability results that go somewhat towards filling in the unknown
area in between, without, however, resolving the question completely.
\begin{theorem}
\label{thm:achieve}
In the entanglement-assisted model, for distributed compression of a \aw{classical-quantum} source,
any rate pair satisfying the following inequalities is achievable, where $\alpha=\frac{2I(X:B)}{I(X:B)+I_0}$:
\begin{equation}\begin{split}
\label{eq:funny-region}
R_X &\geq S(X|B), \\
R_B &\geq \frac{1}{2}\left( S(B)+S(B|X)-I_0 \right), \\
R_X +\alpha R_B &\geq S(X|B)+ \alpha S(B).
\end{split}\end{equation}
\aw{More generally,} for any auxiliary random variable $Y$ such that \aw{$Y$--$X$--$B$}
is a Markov chain, \aw{all the following rate pairs (and hence also their upper-right convex closure)}
are achievable:
\begin{align*}
R_X &= I(X:Y) + S(X|BY) = S(X|B) + I(Y:B), \\
R_B &= \frac{1}{2}(S(B) + S(B|Y) - I(Y:W))\\
&= S(B)-\frac{1}{2}\left(I(Y:B)+I(Y:W)\right),
\end{align*}
where $C$ and $W$ are the system and environment of an \aw{isometry $V:{B\rightarrow CW}$
with $I(W:R|Y)=0$}.
\end{theorem}
\begin{proof}
\aw{The region described by Eq.~(\ref{eq:funny-region}) is precisely the upper-right convex
closure of the two corner points $(S(X|B),S(B))$ and $(S(X),\frac{1}{2}(S(B)+S(B|X)-I_0))$. Their
achievability} follows from \aw{Theorems} \ref{theorem: generic full rate region} and
\ref{QSR_achievability}.
We use the following two achievable points to show the
\aw{second} statement:
\begin{align*}
\left(S(X|B),S(B)\right) \quad \text{and} \quad \left(S(X),\frac{1}{2}(S(B)+S(B|X)-I_0)\right).
\end{align*}
Namely, Alice and \aw{Debbie} (the receiver) use the Reverse Shannon Theorem to simulate the channel
taking $X$ to $Y$ in i.i.d.~fashion, which costs $I(X:Y)$ bits of classical communication \cite{Bennett2002}.
Now we are in a familiar situation: Bob has to encode $B^n$ with side information $Y^n$ at the decoder,
which can be done at the rate $\frac{1}{2}(S(B)+S(B|Y)-I(Y:W))$, by the quantum state redistribution
protocol of Theorem \ref{QSR_achievability}.
Then Alice has to send some more information to allow the receiver to decode $X^n$; this is an instance of
classical compression of $X$ with quantum side information $BY$ already present at the decoder,
hence costing another $S(X|BY)$ bits of communication, by the Devetak-Winter protocol \cite{Devetak2003,Winter1999}.
For $Y=X$, \aw{we recover the rate point $\left(S(X),\frac{1}{2}(S(B)+S(B|X)-I_0)\right)$,
and for $Y=\emptyset$ we recover $\left(S(X|B),S(B)\right)$.}
\end{proof}
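The two expressions for $R_X$ in the theorem coincide precisely because $Y$--$X$--$B$ is a Markov chain. The following small numerical check (a sketch assuming numpy; the distribution, channel and states are hypothetical examples) verifies the identity $I(X:Y)+S(X|BY)=S(X|B)+I(Y:B)$ for such a chain:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev*np.log2(ev)))

def bd(blocks):
    """Block-diagonal matrix from a list of equally sized blocks."""
    dd = blocks[0].shape[0]
    out = np.zeros((len(blocks)*dd, len(blocks)*dd))
    for i, b in enumerate(blocks):
        out[i*dd:(i+1)*dd, i*dd:(i+1)*dd] = b
    return out

# Hypothetical Markov chain Y--X--B: p(x), a channel p(y|x), cq states psi_x.
nX, nY = 2, 2
p = np.array([0.7, 0.3])
W = np.array([[0.8, 0.2], [0.3, 0.7]])          # rows: p(y|x)
v0, v1 = np.array([1.0, 0.0]), np.array([np.sqrt(0.5), np.sqrt(0.5)])
psi = [np.outer(v0, v0), np.outer(v1, v1)]      # non-orthogonal B-states

sigma_XYB = bd([p[x]*W[x, y]*psi[x] for x in range(nX) for y in range(nY)])
sigma_XB  = bd([p[x]*psi[x] for x in range(nX)])
sigma_YB  = bd([sum(p[x]*W[x, y]*psi[x] for x in range(nX)) for y in range(nY)])
rho_B = sum(p[x]*psi[x] for x in range(nX))
pY  = p @ W
pXY = np.array([p[x]*W[x, y] for x in range(nX) for y in range(nY)])

I_XY = entropy(np.diag(p)) + entropy(np.diag(pY)) - entropy(np.diag(pXY))
S_X_given_BY = entropy(sigma_XYB) - entropy(sigma_YB)
S_X_given_B  = entropy(sigma_XB) - entropy(rho_B)
I_Y_B = entropy(np.diag(pY)) + entropy(rho_B) - entropy(sigma_YB)

# The two expressions for Alice's rate R_X along the curve coincide.
RX_1 = I_XY + S_X_given_BY
RX_2 = S_X_given_B + I_Y_B
assert abs(RX_1 - RX_2) < 1e-9
```

Varying the channel $p(y|x)$ between the identity ($Y=X$) and a constant channel ($Y=\emptyset$) traces the interpolation between the QSR and DW endpoints; the $R_B$-coordinate additionally requires the optimal splitting isometry $V$, which is not computed here.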
\medskip
In Fig.~\ref{fig:full}, we show the situation for a general source, depicting
the most important inner and outer bounds on the rate region in the entanglement-assisted
model.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{complicated-diagram.jpg}
\caption{\aw{General outer (converse) bound, in red, and inner (achievable) bounds, in black,
on the entanglement-assisted rate region, assuming unlimited entanglement.
In general, our achievable points, the one from Devetak-Winter (DW), and the ones
using merging (M) and quantum state redistribution (QSR) are no longer on the
boundary of the outer bound. The achievable region is potentially slightly larger than
the upper-right convex closure of the points DW and QSR, connected by a solid black
straight line; indeed, the second part of Theorem \ref{thm:achieve} allows us to
interpolate between DW and QSR along the black dashed curve}.}
\label{fig:full}
\end{figure}
\subsection{Rate region for generic sources}
\label{sec:generic full}
In this subsection, we find the complete rate region for generic sources, generalizing
the insight of Theorem \ref{theorem:generic optimal rate} for the subproblem
of quantum compression with classical side information at the decoder.
\begin{theorem}
\label{theorem: generic full rate region}
\aw{In both unassisted and entanglement-assisted models, for a generic classical-quantum source, in particular one where there is
an $x$ such that $\psi_x^B$ has full support}, the optimal asymptotic rate region for distributed
compression is \aw{the} set of rate pairs satisfying
\begin{align*}
R_X &\geq S(X|B), \\
R_B &\geq\frac{1}{2}\left(S(B)+S(B|X)\right),\\
R_X+2R_B &\geq S(B)+S(XB).
\end{align*}
Moreover, \aw{there are protocols achieving these bounds requiring no prior} entanglement.
\end{theorem}
\aw{\begin{proof}
We have argued the achievability already at the start of this section
(Theorem \ref{unknown.theorem}).
As for the converse, we have shown in Theorem \ref{theorem:generic optimal rate}
that for a generic source, $\widetilde{I}_0=0$, hence the claim follows from
the outer bounds of Theorem \ref{theorem:full converse}.
\end{proof}}
This means that for generic sources, which we recall are the complement of a set of measure
zero, the rate region has the shape of Fig.~\ref{fig:inner}.
\section{Discussion and open problems}
\label{sec:discuss}
\aw{After seeing no progress for over 15 years in the problem of distributed
compression of quantum sources, we have decided to take a fresh look at the
classical-quantum sources considered in \cite{Devetak2003,Winter1999}. There,
the problem of compressing the classical source using the quantum part as
side information at the decoder was solved; here we analyzed the full
rate region, in particular we were interested in the other extreme of compressing
the quantum source using the classical part as side information at the decoder.
As in classical Slepian-Wolf coding, the former problem exhibits no rate loss,
in that the quantum part of the source is compressed to the Schumacher rate,
the local entropy, and the sum rate equals the joint entropy of the source.
Interestingly, this is not the case for the latter problem: clearly, if the
classical side information were available both at the encoder and the decoder,
the optimal compression rate would be the conditional entropy $S(B|X)$, which
would again imply no sum rate loss. However, since the classical side information
is supposed to be present only at the decoder, we have shown that in general
the rate sum is strictly larger, in fact generically by $\frac12 I(X:B)$, and
with this additional rate there is always a coding scheme achieving asymptotically
high fidelity. This additional rate could be called ``the price of ignorance'',
as it corresponds to the absence of the side information at the encoder.
To deal with general classical-quantum sources, we introduced information quantities
$I_0$ and $\widetilde{I}_0$ (Definition \ref{I_delta}), to upper and lower bound the
optimal quantum compression rate as
\begin{align*}
\frac12\left( S(B)+S(B|X)-\widetilde{I}_0 \right) \leq R_B^* \leq \frac12\left(S(B)+S(B|X)-I_0\right),
\end{align*}
when unlimited entanglement is available.
For generic sources, $I_0 = \widetilde{I}_0 = 0$, but in general we do not
understand these quantities very well, and the first set of open problems that
we would like to mention
is about them: is $I_0 = \widetilde{I}_0$ in general, or are there examples of
gaps? How can one calculate either one of these quantities, given that
a priori the auxiliary register $W$ is unbounded? In fact, can one
without loss of generality put a finite bound on the dimension of $W$,
for either optimization problem?
Further open problems concern the need for prior shared entanglement to achieve
the optimal quantum compression rate $R_B^*$. As a matter of fact, it would already
be interesting to know whether the rate $\frac12\left(S(B)+S(B|X)-{I}_0\right)$
requires in general pre-shared entanglement.
The full rate region inherits these features: in the generic case it is simple, and
in fact generated by the optimal codes for the two compression-with-side-information
problems (quantum compression with classical side information, and classical
compression with quantum side information); in general, however, the
picture is very complicated, and we have only been able to give several outer and
inner bounds on the rate region, whose determination remains an open problem.}
\medskip
We also would like to comment on the source model that we consider in this chapter,
and its relation to the classical Slepian-Wolf coding.
Our classical-quantum source is characterised by a classical source,
the random variable $X$, and a quantum source $B$, which is described by a
density matrix $\rho_x^B$, but realized as quantum correlation with a purifying
reference system $R$: $\rho_x^B = \tr_R \ketbra{\psi_x}^{BR}$.
A source code in our sense reproduces the states $\ket{\psi_x}^{BR}$ with high
fidelity on average, which implies that, for any ensemble decomposition
$\rho_x^B = \sum_y p(y|x) \ketbra{\psi_{xy}}^B$, it reproduces the
states $\ket{\psi_{xy}}^B$ with high fidelity on average (with respect to
the ensemble probabilities $p(x)p(y|x)$). If we only demand the latter, there
is no need for the purifying system $R$, and the source can be described compactly by
the cccq-state
\begin{align}\label{eq: ensemble cqSW source}
&\sigma^{X'XYB} = \nonumber \\
&\quad \sum_{x \in \mathcal{X},\, y \in \mathcal{Y}} p(x)p(y|x)\, \ketbra{x}^{X'} \otimes \ketbra{x}^X \otimes \ketbra{y}^Y \otimes \ketbra{\psi_{xy}}^{B},
\end{align}
where $X'$ and $Y$ are reference systems with which the correlation is preserved in a compression protocol. This now includes the well-known classical correlated source considered
by Slepian and Wolf \cite{Slepian1973}, namely if the system $B$ is classical
with orthonormal states $\ket{\psi_{xy}}=\ket{y}$.
In Schumacher's single-party compression problem \cite{Schumacher1995}, both
source models, that is, the ensemble source and the purified source, lead to the same
compression rate. However, when there is side information or more generally in the
distributed setting, different source models, albeit sharing the reduced states on $XB$,
do not lead to the same compression rate \cite{Winter1999}.
Our results provide a clear manifestation of this: recall that the minimum
compression rate of Bob in the Slepian-Wolf setting is $S(B|X)$, with the
ensemble fidelity criterion. On the other hand, if the distributions $p(y|x)$
have pairwise overlapping supports, our theorem regarding generic sources
applies, resulting in the strictly larger minimum rate $\frac12(S(B)+S(B|X))$
when the average entanglement fidelity criterion is used. The difference can be
attributed to the harder task of maintaining the entanglement with the reference
system, rather than ``only'' classical correlation.
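The "price of ignorance" $\frac12 I(X:B)$ can be made concrete numerically. The following minimal sketch (assuming numpy; the two mixed, non-commuting qubit states are a hypothetical example) computes both rates and verifies that their gap is exactly half the mutual information:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev*np.log2(ev)))

# Hypothetical source: two mixed, non-commuting qubit states, uniform X.
p = np.array([0.5, 0.5])
rho0 = np.array([[0.9, 0.0], [0.0, 0.1]])
H = np.array([[1.0, 1.0], [1.0, -1.0]])/np.sqrt(2)
rho1 = H @ rho0 @ H            # same spectrum, rotated basis

rho_B = p[0]*rho0 + p[1]*rho1
S_B = entropy(rho_B)
S_B_given_X = p[0]*entropy(rho0) + p[1]*entropy(rho1)
I_XB = S_B - S_B_given_X       # I(X:B) of the cq state

rate_ensemble = S_B_given_X                 # ensemble fidelity criterion
rate_purified = 0.5*(S_B + S_B_given_X)     # generic source, entanglement fidelity
gap = rate_purified - rate_ensemble
assert abs(gap - 0.5*I_XB) < 1e-12
assert gap > 0
```

Since the two states do not commute, $I(X:B)>0$ and the purified-source rate is strictly larger, reflecting the harder task of preserving entanglement with the reference.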
More broadly, a quantum source can be defined as a quantum system together with
correlations with a reference system, in our case any state $\rho^{ABR}$. The
compression task is to reproduce this state with high fidelity by coding and decoding
of $A$ and $B$. While this problem is far from understood in the general case,
what we saw here is that the compression rate may depend on the concrete
correlation with the reference system. In the present chapter, we have considered
both a globally purifying quantum system and an ensemble of purifications, and in this
final discussion, implicitly looked at a classical system keeping track of an
ensemble of states subject to a probability distribution.
Finally, we mention that both models of quantum data compression with classical side information with partially purified source of Eq.~(\ref{eq: extended source model}) and the ensemble model defined in Eq.~(\ref{eq: ensemble cqSW source}) are special cases of the model that we consider in the next chapter.
There we define an ensemble extension of the QSR source, namely the ensemble $\{ p(x), \ketbra{\psi_x}^{ACBR}\}$ with corresponding cqqqq-state $\sum_x p(x)\ketbra{x}^{X} \otimes \ketbra{\psi_x}^{ACBR}$, where Alice, who has access to side information system $C$, wants to compress system $A$ and send it, via a noiseless quantum channel, to Bob, who has access to side information system $B$. We let the encoder and decoder share free entanglement, and consider two decodability criteria: per-copy fidelity and block fidelity. In the former, the fidelity is preserved for each copy of the source, while in the latter it is preserved for the whole block of $n$ systems, similar to the fidelity defined in Eq.~(\ref{F_QCSW_unassisted1}).
For the former criterion we find the optimal quantum communication rate and for the latter criterion we find a converse bound and an achievable rate which match up to an asymptotic error and an unbounded auxiliary
system.
Our new results imply that, in the compression of system $B$ with classical side information $X$ at the decoder for the source model of Eq.~(\ref{eq: extended source model}), the converse bound of Theorem~\ref{converse_QCSW} is tight, i.e.~the following rate is optimal in the entanglement-assisted model with per-copy fidelity:
\begin{align*}
R_B =\frac{1}{2}\left( S(B)+S(B|X)-\widetilde{I}_0 \right).
\end{align*}
\chapter{Compression of a general mixed state source}
\label{chap:mixed state}
In this chapter, we consider the most general (finite-dimensional) quantum
mechanical information source, which is given by a quantum system
$A$ that is correlated with a reference system $R$. The task is to
compress $A$ in such a way as to reproduce the joint source state
$\rho^{AR}$ at the decoder with asymptotically high fidelity. This
includes Schumacher's original quantum source coding problem of a
pure state ensemble and that of a single pure entangled state, as
well as general mixed state ensembles.
Here, we determine the optimal compression rate (in qubits per
source system) in terms of the Koashi-Imoto decomposition of the
source into a classical, a quantum, and a redundant part. The same
decomposition yields the optimal rate in the presence of unlimited
entanglement between compressor and decoder, and indeed the full
region of feasible qubit-ebit rate pairs.
This chapter is based on the papers in \cite{ZK_mixed_state_ISIT_2020,ZK_mixed_state_2019}.
\medskip
\section{The source model and the compression task}
\label{sec:The Compression task}
We consider a general mixed state source
$\rho^{AR}$, with $A$ and $R$ as the system to be compressed and the reference system, respectively, where, in the information-theoretic limit, the source generates
many copies of the state $\rho^{AR}$, i.e.~$\rho^{A^n R^n} = \left(\rho^{AR}\right)^{\otimes n}$.
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
Alice performs the compression operation
$\mathcal{C}:A^n A_0 \longrightarrow M $ on the system $A^n$ and her part $A_0$ of the entanglement; this operation is a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map.
Notice that as functions CPTP maps act on the operators (density matrices) over
the respective input and output Hilbert spaces, but as there is no risk of confusion,
we will simply write the Hilbert spaces when denoting a CPTP map.
Alice's encoding operation produces the state $\sigma^{M B_0 R^n}$ with $M$ and $B_0$ as the compressed system of Alice and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M| \leq \abs{A}^n$.
We call $\frac1n \log K$ and $\frac1n \log|M|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
The system $M$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M B_0 \longrightarrow \hat{A}^n$ on the system
$M$ and his part of the entanglement $B_0$.
We say the encoding-decoding scheme has \emph{fidelity} $1-\epsilon$, or \emph{error} $\epsilon$, if
\begin{align}
\label{eq:fidelity criterion}
F\left( \rho^{A^n R^n },\xi^{\hat{A}^n R^n} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n R^n}=\left((\mathcal{D}\circ\mathcal{C})\otimes {\operatorname{id}}_{R^n}\right) \rho^{A^n R^n }$.
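For concreteness, the fidelity in Eq.~(\ref{eq:fidelity criterion}) can be evaluated numerically. The following sketch (pure numpy; the convention $F=\big(\Tr\sqrt{\sqrt{\rho}\,\xi\sqrt{\rho}}\big)^2$ is one common choice, and the slightly depolarized output $\xi$ is a hypothetical stand-in for an encoded-decoded state):

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    ev, U = np.linalg.eigh(m)
    ev = np.clip(ev, 0.0, None)
    return (U * np.sqrt(ev)) @ U.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2.
    Some references use the square root of this quantity instead."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s)))**2)

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
# A perfect encoding-decoding scheme has fidelity 1.
assert abs(fidelity(rho, rho) - 1.0) < 1e-9

# A slightly depolarized output xi, standing in for ((D o C) (x) id) rho:
xi = 0.95*rho + 0.05*np.eye(2)/2
F = fidelity(rho, xi)
assert 0.9 < F < 1.0
```

A scheme then has error $\epsilon = 1 - F$ in the sense of Eq.~(\ref{eq:fidelity criterion}).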
Moreover, we say that $(E,Q)$ is an (asymptotically) achievable rate pair if for all $n$
there exist codes such that the fidelity converges to $1$, and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}_{\geq 0}\times\mathbb{R}_{\geq 0}$.
According to Stinespring's theorem \cite{Stinespring1955}, a CPTP map
${\mathcal{T}}: A \longrightarrow \hat{A}$ can be dilated to an isometry $U: A \hookrightarrow \hat{A} E$
with $E$ as an environment system, called an isometric extension of a CPTP map, such that
${\mathcal{T}}(\rho^A)=\Tr_E (U \rho^A U^{\dagger})$.
Therefore, the encoding and decoding operations can in general be viewed as
isometries $U_{{\mathcal{E}}} : A^n A_0 \hookrightarrow M W$ and
$U_{{\mathcal{D}}} : M B_0 \hookrightarrow \hat{A}^n V$, respectively,
with the systems $W$ and $V$ as the environment systems
of Alice and Bob, respectively.
\medskip
We say a source $\omega^{BR}$ is equivalent to a source $\rho^{AR}$ if there are
CPTP maps ${\mathcal{T}}:A \longrightarrow B$ and ${\mathcal{R}}:B \longrightarrow A$ in both directions
taking one to the other:
\begin{align} \label{def: equivalent sources}
\omega^{BR}=({\mathcal{T}} \otimes {\operatorname{id}}_R) \rho^{AR} \text{ and }
\rho^{AR}=({\mathcal{R}} \otimes {\operatorname{id}}_R) \omega^{BR}.
\end{align}
The rate regions of equivalent sources are the same, because any achievable rate pair for
one source is achievable for the other source as well. This follows from the fact that for
any code $({\mathcal{C}},{\mathcal{D}})$ of block length $n$ and error $\epsilon$ for $\rho^{AR}$,
concatenating the encoding and decoding operations with ${\mathcal{T}}$ and ${\mathcal{R}}$, i.e. letting
${\mathcal{C}}'={\mathcal{C}}\circ{\mathcal{R}}^{\otimes n}$ and ${\mathcal{D}}'={\mathcal{T}}^{\otimes n}\circ{\mathcal{D}}$, we get a code
of the same error $\epsilon$ for $\omega^{BR}$. Analogously we can turn a code for
$\omega^{BR}$ into one for $\rho^{AR}$.
\section{The qubit-ebit rate region}
\label{sec:The optimal rate region}
The idea behind the compression of the source $\rho^{AR}$ is based on a decomposition
of this state introduced in \cite{Hayden2004}, which is a generalization of the decomposition
introduced by Koashi and Imoto in \cite{KI2002}. Namely, for any set of quantum states $\{ \rho_x\}$,
there is a unique decomposition of the Hilbert space describing
the structure of CPTP maps which preserve the set $\{ \rho_x^A\}$. This idea was generalized
in \cite{Hayden2004} for a general mixed state $\rho^{AR}$ describing the structure of
CPTP maps acting on system $A$ which preserve the overall state $\rho^{AR}$.
This was achieved by showing that any such map preserves the set of all possible
states on system $A$ which can be obtained by measuring system $R$, and
conversely any map preserving the set of all possible
states on system $A$ obtained by measuring system $R$, preserves the state $\rho^{AR}$,
thus reducing the general case to the case of classical-quantum states
\[
\rho^{AY} = \sum_y q(y) \rho_y^A \otimes \proj{y}^Y
= \sum_y \Tr_R \rho^{AR}(\openone_A\otimes M_y^R) \otimes \proj{y}^Y,
\]
which is the ensemble case considered by Koashi and Imoto. As a matter of fact,
looking at the algorithm presented in \cite{KI2002} to compute the decomposition,
it is enough to consider an informationally complete POVM $(M_y)$ on $R$, with
no more than $|R|^2$ many outcomes.
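To illustrate the last remark, the following sketch (assuming numpy; the tetrahedral "SIC" POVM is a standard textbook example, not taken from \cite{KI2002}) constructs an informationally complete POVM on a qubit reference system $R$ with exactly $|R|^2=4$ outcomes:

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Tetrahedral Bloch vectors -> SIC-POVM elements M_y = (I + a_y . sigma)/4.
a = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + v[0]*sx + v[1]*sy + v[2]*sz)/4 for v in a]

# Valid POVM: positive semidefinite elements summing to the identity.
assert np.allclose(sum(povm), I2)
assert all(np.linalg.eigvalsh(M).min() > -1e-12 for M in povm)

# Informationally complete: the 4 elements span all Hermitian 2x2 matrices,
# so outcome statistics determine any qubit state uniquely.
G = np.array([M.reshape(-1) for M in povm])
assert np.linalg.matrix_rank(G) == 4
```

Measuring such a POVM on $R$ produces the classical-quantum states to which the original Koashi-Imoto algorithm applies.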
The properties of this decomposition are stated in the following theorem.
\begin{theorem}[\cite{KI2002,Hayden2004}]
\label{thm: KI decomposition}
Associated to the state $\rho^{AR}$, there are Hilbert spaces $C$, $N$ and $Q$
and an isometry $U_{{\text{KI}}}:A \hookrightarrow C N Q$ such that:
\begin{enumerate}
\item The state $\rho^{AR}$ is transformed by $U_{{\text{KI}}}$ as
\begin{equation}
\label{eq:KI state}
(U_{{\text{KI}}}\otimes \openone_R)\rho^{AR} (U_{{\text{KI}}}^{\dagger}\otimes \openone_R)
= \sum_j p_j \proj{j}^{C} \otimes \omega_j^{N} \otimes \rho_j^{Q R}
=:\omega^{C N Q R},
\end{equation}
where the set of vectors $\{ \ket{j}^{C}\}$ form an orthonormal basis for Hilbert space
$C$, and $p_j$ is a probability distribution over $j$. The states $\omega_j^{N}$ and
$\rho_j^{Q R}$ act on the Hilbert spaces $N$ and $Q \otimes R$, respectively.
\item For any CPTP map $\Lambda$ acting on system $A$ which leaves the state $\rho^{AR}$
invariant, that is $(\Lambda \otimes {\operatorname{id}}_R )\rho^{AR}=\rho^{AR}$, every associated
isometric extension $U: A\hookrightarrow A E$ of $\Lambda$ with the environment system
$E$ is of the following form
\begin{equation}
U = (U_{{\text{KI}}}\otimes \openone_E)^{\dagger}
\left( \sum_j \proj{j}^{C} \otimes U_j^{N} \otimes \openone_j^{Q} \right) U_{{\text{KI}}},
\end{equation}
where the isometries $U_j:N \hookrightarrow N E$ satisfy
$\Tr_E [U_j \omega_j U_j^{\dagger}]=\omega_j$ for all $j$.
%
The isometry $U_{{\text{KI}}}$ is unique (up to trivial change of basis of the Hilbert spaces
$C$, $N$ and $Q$). Henceforth, we call the isometry $U_{{\text{KI}}}$ and the state
$\omega^{C N Q R}=\sum_j p_j \proj{j}^{C} \otimes \omega_j^{N} \otimes \rho_j^{Q R}$
the Koashi-Imoto (KI) isometry and KI-decomposition of the state $\rho^{AR}$, respectively.
\item In the particular case of a tripartite system $CNQ$ and a state $\omega^{CNQR}$ already
in Koashi-Imoto form (\ref{eq:KI state}), property 2 says the following:
For any CPTP map $\Lambda$ acting on systems $CNQ$ with
$(\Lambda \otimes {\operatorname{id}}_R )\omega^{CNQR}=\omega^{CNQR}$, every associated
isometric extension $U: CNQ\hookrightarrow CNQ E$ of $\Lambda$ with the environment system
$E$ is of the form
\begin{equation}
U = \sum_j \proj{j}^{C} \otimes U_j^{N} \otimes \openone_j^{Q},
\end{equation}
where the isometries $U_j:N \hookrightarrow N E$ satisfy
$\Tr_E [U_j \omega_j U_j^{\dagger}]=\omega_j$ for all $j$.
\end{enumerate}
\end{theorem}
According to the discussion at the end of Sec. \ref{sec:The Compression task}, the
sources $\rho^{AR}$ and $\omega^{C N Q R}$ are equivalent because there are the isometry
$U_{{\text{KI}}}$ and the reversal CPTP map ${\mathcal{R}}: C N Q \longrightarrow A$, which reverses the
action of the KI isometry, such that:
\begin{align}
\omega^{C N Q R}&= (U_{{\text{KI}}}\otimes \openone_R)\rho^{AR} (U_{{\text{KI}}}^{\dagger}\otimes \openone_R), \nonumber \\
\rho^{AR}&=({\mathcal{R}} \otimes {\operatorname{id}}_R)\omega^{C N Q R}\nonumber \\
&=(U_{{\text{KI}}}^{\dagger }\otimes \openone_R) \omega^{C N Q R} (U_{{\text{KI}}}\otimes \openone_R)+\Tr [(\openone_{C N Q }-\Pi_{C N Q})\omega^{C N Q}]\sigma,
\end{align}
where $\Pi_{C N Q}=U_{{\text{KI}}}U_{{\text{KI}}}^{\dagger}$ is the projection onto the subspace
$U_{{\text{KI}}}A \subset C \otimes N \otimes Q$, and $\sigma$ is an arbitrary state acting on $A\otimes R$.
Henceforth we assume that the source is $\omega^{C N Q R}$, which is convenient because
our main result is expressed in terms of the systems $C$ and $Q$. Notice that
the source $\omega^{C N Q R}$ is in turn equivalent to $\omega^{C Q R}$,
a fact we will exploit in the proof.
Moreover, since the information in $C$ is classical, we can reduce the
compression rate even more if the sender and receiver share
entanglement, by using dense coding of $j$. In the following
theorem we show the optimal qubit-ebit rate tradeoff for the compression of the source $\rho^{AR}$.
\begin{theorem}
\label{theorem:complete rate region mixed state}
For the compression of the source $\rho^{AR}$, all asymptotically achievable entanglement and
quantum rate pairs $(E,Q)$ satisfy
\begin{align*}
Q &\geq S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega},\\
Q+E &\geq S(CQ)_{\omega},
\end{align*}
where the entropies are with respect to the KI decomposition of the state $\rho^{AR}$, i.e.
the state $\omega^{C N Q R}$.
Conversely, all the rate pairs satisfying the above inequalities are asymptotically achievable.
\end{theorem}
\begin{remark}
This theorem implies that the optimal asymptotic quantum rates for the compression of
the source $\rho^{AR}$ with and without entanglement assistance are
$S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$ and $S(CQ)_{\omega}$ qubits, respectively,
and $\frac{1}{2}S(C)_{\omega}$ ebits of entanglement
are necessary and sufficient in the entanglement-assisted case.
\end{remark}
\begin{remark}
If the parties were required to preserve the correlations with a purifying reference system, then by Schumacher compression the optimal qubit rate would be $S(A)_{\rho}=S(CNQ)_{\omega}$. Theorem~\ref{theorem:complete rate region mixed state}, however, shows that
the parties can compress further when they are only required to preserve the correlations with a mixed state reference. This gap can be strictly positive if the redundant system $N$ is mixed given the classical information $j$ in system $C$, that is, if $S(CNQ)_{\omega}-S(CQ)_{\omega}=S(N|CQ)_{\omega}>0$.
\end{remark}
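As a numerical sanity check of these optimal rates, one can evaluate the relevant entropies for a small source already in KI form. The sketch below is purely illustrative: the two-member ensemble, the particular states, and the use of NumPy are our own assumptions, not part of the source model.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy in bits; eigenvalues below 1e-12 are discarded."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Toy KI-form source omega^{CQ} = sum_j p_j |j><j|^C (x) rho_j^Q
# (hypothetical two-member ensemble, chosen only for illustration).
p = np.array([0.5, 0.5])
rho_0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure qubit state |0><0|
rho_1 = np.eye(2) / 2                       # maximally mixed qubit

omega_CQ = np.zeros((4, 4))
for j, rho_j in enumerate((rho_0, rho_1)):
    proj = np.zeros((2, 2)); proj[j, j] = 1.0
    omega_CQ += p[j] * np.kron(proj, rho_j)

S_C = von_neumann_entropy(np.diag(p))
S_CQ = von_neumann_entropy(omega_CQ)

Q_unassisted = S_CQ              # optimal qubit rate without entanglement
Q_assisted = S_CQ - 0.5 * S_C    # optimal qubit rate with entanglement
E_assisted = 0.5 * S_C           # ebit rate in the assisted case

print(Q_unassisted, Q_assisted, E_assisted)  # 1.5 1.0 0.5
```

For this toy source the assisted protocol saves $\frac{1}{2}S(C)_{\omega}=0.5$ qubits per copy at the price of $0.5$ ebits.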
\begin{figure}[ht]
\includegraphics[width=0.8\textwidth]{Q-E_tradeoff_mixed_state.png}
\caption{The achievable region of entanglement and quantum rate pairs $(E,Q)$ for the compression of the source $\rho^{AR}$.}
\label{fig:rate region}
\end{figure}
\begin{proof}
We start with the achievability of these rates. The converse proofs need more tools, so we will
leave them to the subsequent sections.
Looking at Fig. \ref{fig:rate region}, it will be enough to
prove the achievability of the corresponding corner points $(E,Q)=(0,S(CQ)_{\omega})$
and $(E,Q)=(\frac{1}{2}S(C)_{\omega},S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega})$
for the unassisted and entanglement assisted cases, respectively. This is because
by definition (and the time-sharing principle) the rate region is convex and
upper-right closed.
Indeed, all the points on the line $Q+E = S(CQ)_{\omega}$ for
$Q \geq S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$ are achievable because one
ebit can be distributed by sending a qubit.
All other rate pairs are achievable by resource wasting.
The rate region is depicted in Fig.~\ref{fig:rate region}.
As we discussed, we can assume that the source is
$(\omega^{C N Q R})^{\otimes n}=\omega^{C^n N^n Q^n R^n}$. To achieve the point $(0,S(CQ)_{\omega})$,
Alice traces out the redundant part $N^n$ of the source, to get the state $\omega^{C^n Q^n R^n}$ and applies Schumacher compression to send the systems $C^n Q^n$ to Bob. Since the
Schumacher compression preserves the purification of the systems $C^n Q^n$, it preserves the state $\omega^{C^n Q^n R^n}$ as well. To be more specific, let $\Lambda_S$ denote the composition of the encoding and decoding operations for the Schumacher compression of the state
$\ket{\omega}^{C^n Q^n R^n {R'}^n}$, where the system ${R'}^n$ is a purifying reference system to which, of course, the parties have no access. Schumacher compression guarantees the fidelity bound in the first line of the following display, which then carries over to the second line:
\begin{align}
1-\epsilon
&\leq F \left({\omega}^{C^n Q^n R^n {R'}^n} ,(\Lambda_S \otimes {\operatorname{id}}_{R^n {R'}^n}) {\omega}^{C^n Q^n R^n {R'}^n}\right) \nonumber\\
&\leq F \left({\omega}^{C^n Q^n R^n } ,(\Lambda_S \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n }\right), \nonumber
\end{align}
where the second inequality is due to monotonicity of the fidelity under partial trace.
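This monotonicity of the fidelity under partial trace can be spot-checked numerically. The sketch below is an illustration only: the Uhlmann-fidelity routine, the Ginibre construction of random states, and the two-qubit dimensions are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_density(d):
    """Random density matrix via a Ginibre matrix G: rho = G G^dagger / Tr."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def sqrtm_psd(a):
    """Square root of a Hermitian PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0, None)
    return (v * np.sqrt(w)) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm_psd(rho)
    evals = np.clip(np.linalg.eigvalsh(s @ sigma @ s), 0, None)
    return float(np.sum(np.sqrt(evals)) ** 2)

def partial_trace_B(rho, dA, dB):
    """Trace out the second tensor factor of a bipartite AB state."""
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

dA = dB = 2
rho, sigma = rand_density(dA * dB), rand_density(dA * dB)
F_AB = fidelity(rho, sigma)
F_A = fidelity(partial_trace_B(rho, dA, dB), partial_trace_B(sigma, dA, dB))
print(F_A >= F_AB)  # the fidelity can only grow under partial trace
```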
The rate achieved by this scheme is $S(CQ)_{\omega}$. After applying this scheme,
Bob has access to the systems $\hat{C}^n \hat{Q}^n$, which are correlated with the reference
system $R^n$:
\begin{align*}
\zeta^{\hat{C}^n \hat{Q}^n R^n}=(\Lambda_S \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n }.
\end{align*}
Then, to reconstruct the system $N^n$, Bob applies the CPTP map
$\mathcal{N}:C Q \longrightarrow C N Q$ to each copy, which acts as follows:
\begin{align}
\mathcal{N}(\rho^{CQ})=\sum_j (\proj{j}^{C} \otimes \openone_{Q}) \rho^{CQ} (\proj{j}^{C} \otimes\openone_{Q}) \otimes \omega_j^{N}.\nonumber
\end{align}
This map satisfies the fidelity criterion of Eq.~(\ref{eq:fidelity omega}) because of monotonicity of the fidelity under CPTP maps:
\begin{align}\label{eq:fidelity omega}
1-\epsilon &\leq F \left({\omega}^{C^n Q^n R^n } ,\zeta^{\hat{C}^n \hat{Q}^n R^n}\right) \nonumber\\
& \leq F \left((\mathcal{N}^{\otimes n} \otimes {\operatorname{id}}_{R^n } ){\omega}^{C^n Q^n R^n } ,(\mathcal{N}^{\otimes n} \otimes {\operatorname{id}}_{R^n } ) \zeta^{\hat{C}^n \hat{Q}^n R^n}\right) \nonumber \\
&= F \left({\omega}^{C^n N^n Q^n R^n } ,\tau^{\hat{C}^n \hat{N}^n \hat{Q}^n R^n}\right).
\end{align}
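The action of the reconstruction map $\mathcal{N}$ can be illustrated on a single copy of a toy KI-form state; the dimensions and the particular states $\omega_j^N$, $\rho_j^Q$ below are hypothetical choices made only for this sketch.

```python
import numpy as np

# Reconstruction map N: CQ -> CNQ from the text, sketched for a toy
# two-member KI-form state (all dimensions and states are illustrative).
p = [0.5, 0.5]
rho_Q = [np.array([[1., 0.], [0., 0.]]), np.eye(2) / 2]    # rho_j^Q
omega_N = [np.eye(2) / 2, np.array([[1., 0.], [0., 0.]])]  # omega_j^N

def proj(j):
    P = np.zeros((2, 2)); P[j, j] = 1.0
    return P

# Source restricted to CQ, and its full CNQ version (C x N x Q ordering).
omega_CQ = sum(p[j] * np.kron(proj(j), rho_Q[j]) for j in range(2))
omega_CNQ = sum(p[j] * np.kron(np.kron(proj(j), omega_N[j]), rho_Q[j])
                for j in range(2))

def reconstruct(rho_CQ):
    """N(rho) = sum_j (|j><j| (x) 1_Q) rho (|j><j| (x) 1_Q) (x) omega_j^N,
    with the output reordered to C (x) N (x) Q."""
    out = np.zeros((8, 8))
    for j in range(2):
        K = np.kron(proj(j), np.eye(2))
        block_CQ = K @ rho_CQ @ K                # pinch on the classical C
        # reorder (C x Q) (x) N  ->  C (x) N (x) Q
        t = np.kron(block_CQ, omega_N[j]).reshape(2, 2, 2, 2, 2, 2)
        out += t.transpose(0, 2, 1, 3, 5, 4).reshape(8, 8)
    return out

print(np.allclose(reconstruct(omega_CQ), omega_CNQ))  # True
```

Applying the map to $\omega^{CQ}$ indeed returns $\omega^{CNQ}$: reading the classical register $C$ identifies $j$, so the correct $\omega_j^N$ can be appended.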
To achieve the point $(\frac{1}{2}S(C)_{\omega},S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega})$,
Alice applies dense coding to send the classical system $C^n$ to Bob which requires
$\frac{n}{2}S(C)_{\omega}$ ebits of initial entanglement and $\frac{n}{2}S(C)_{\omega}$
qubits \cite{Bennett1992}. When both Alice and Bob have access to system $C^n$,
Alice can send the quantum system $Q^n$ to Bob by applying Schumacher compression conditioned on the shared classical information, which
requires sending $nS(Q|C)_{\omega}$ qubits.
By the chain rule $S(Q|C)_{\omega}=S(CQ)_{\omega}-S(C)_{\omega}$, the overall qubit rate is therefore
$\frac{1}{2}S(C)_{\omega}+S(Q|C)_{\omega}=S(CQ)_{\omega}-\frac{1}{2}S(C)_{\omega}$.
\end{proof}
\section{Converse}
\label{sec:converse}
In this section, we will provide the converse bounds for the qubit rate $Q$ and the sum
rate $Q+E$ of Theorem~\ref{theorem:complete rate region mixed state}.
We obtain these bounds based on the structure of the CPTP maps which preserve the source state
$\omega^{CNQR}$. Namely, according to Theorem~\ref{thm: KI decomposition}, the CPTP maps
acting on systems $CNQ$ that preserve the state $\omega^{CNQR}$ act only on the
redundant system $N$. This implies that the environment systems of such CPTP maps are
decoupled from systems $Q R$ given the classical information $j$ in the classical system $C$.
This gives us an insight into the structure of the encoding-decoding maps, which preserve
the overall state \emph{asymptotically} intact.
To proceed with the proof, we first define two functions that emerge in the converse bounds.
Then, we state some important properties of these functions in
Lemma~\ref{lemma:J_epsilon Z_epsilon properties} which we will use to
compute the tight asymptotic converse bounds.
\begin{definition}
\label{def:J_epsilon Z_epsilon}
For the KI decomposition
$\omega^{C N Q R}=\sum_{j} p_j \proj{j}^{C}\otimes \omega_j^{N} \otimes \rho_{j}^{Q R}$
of the state $\rho^{AR}$ and $\epsilon \geq 0$, define
\begin{align*}
J_\epsilon(\omega) &:=
\max I(\hat{N} E:\hat{C}\hat{Q}|C')_\tau
\text{ s.t. } \\
& \quad \quad U:C N Q \rightarrow \hat{C} \hat{N} \hat{Q} E
\text{ is an isometry with }
F( \omega^{C N Q R},\tau^{\hat{C} \hat{N} \hat{Q}R}) \geq 1- \epsilon, \\
Z_\epsilon(\omega) &:=
\max S(\hat{N} E|C')_\tau
\text{ s.t. } \\
& \quad \quad U:C N Q \rightarrow \hat{C} \hat{N} \hat{Q} E
\text{ is an isometry with }
F( \omega^{C N Q R},\tau^{\hat{C} \hat{N} \hat{Q}R}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\omega^{C N Q R C'}
&=\sum_{j} p_j \proj{j}^{C}\otimes \omega_j^{N} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'}, \\
\tau^{\hat{C} \hat{N} \hat{Q} ER C'}
&= (U \otimes \openone_{RC'}) \omega^{C N Q R C'} (U^{\dagger} \otimes \openone_{RC'}), \\
\tau^{\hat{C} \hat{N} \hat{Q} R}
&=\Tr_{E C'} [ \tau^{\hat{C} \hat{N} \hat{Q} ER C'}].
\end{align*}
\end{definition}
In this definition, the dimension of the environment is w.l.o.g. bounded as $|E| \leq (|C||N||Q|)^2$
because the input and output dimensions of the channel are fixed as $|C||N||Q|$;
hence, the optimisation is of a continuous function over a compact domain, so we have a
maximum rather than a supremum.
\begin{lemma}
\label{lemma:J_epsilon Z_epsilon properties}
The functions $Z_\epsilon(\omega)$ and $J_\epsilon(\omega)$ have the following properties:
\begin{enumerate}
\item They are non-decreasing functions of $\epsilon$.
\item They are concave in $\epsilon$.
\item They are continuous for $\epsilon \geq 0$.
\item For any two states $\omega_1^{C_1 N_1 Q_1 R_1}$ and $\omega_2^{C_2 N_2 Q_2 R_2}$ and for $\epsilon \geq 0$,
\begin{align*}
&J_{\epsilon}(\omega_1 \otimes \omega_2) \leq J_{\epsilon}(\omega_1) +J_{\epsilon}(\omega_2),\\
&Z_{\epsilon}(\omega_1 \otimes \omega_2) \leq Z_{\epsilon}(\omega_1) +Z_{\epsilon}(\omega_2).
\end{align*}
\item At $\epsilon=0$, $Z_0(\omega) =S(N|C)_\omega$ and $J_0(\omega) =0$.
\end{enumerate}
\end{lemma}
The proof of this lemma follows in the next section. Now we show how it is
used to prove the converse (optimality) of Theorem \ref{theorem:complete rate region mixed state}.
As a guide to reading the subsequent proof, we remark that in Eqs.~(\ref{eq:converse_Q_1})
and (\ref{eq:converse_Q+E_2}), the environment systems $VW$ of the encoding-decoding
operations appear in the terms $I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)$ and
$S(\hat{N}^n VW | {C'}^n)$, which are bounded by
the functions $J_{\epsilon}(\omega^{\otimes n})$ and $Z_{\epsilon}(\omega^{\otimes n})$,
respectively.
As stated in point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}, these functions
are sub-additive, which allows us to single-letterize the terms appearing in the converse.
Moreover, from point 3 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}, we know that
these functions are continuous for $\epsilon \geq 0$; therefore, the limit points of these
functions are equal to the values of these functions at $\epsilon=0$.
When the fidelity is equal to 1 ($\epsilon=0$), the structure of the CPTP maps
preserving the state $\omega^{C N Q R}$ in Theorem~\ref{thm: KI decomposition}
implies that $J_{0}(\omega)=0$ and $Z_{0}(\omega)=S(N|C)_{\omega}$, as stated
in point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}. Thereby, we conclude
the converse bounds in Eqs.~(\ref{eq:converse_Q_asymptotics}) and (\ref{eq:converse_Q+E_asymptotics}).
\begin{proof-of}[of Theorem \ref{theorem:complete rate region mixed state} (converse)]
We first get the following chain of inequalities considering the process of the
decoding of the information:
\begin{align}
nQ+S(B_0)&\geq S(M)+S(B_0) \label{eq:converse_decoding_1_1} \\
&\geq S(M B_0) \label{eq:converse_decoding_1_2} \\
&= S(\hat{C}^n \hat{N}^n \hat{Q}^n V) \label{eq:converse_decoding_1_4} \\
&= S(\hat{C}^n\hat{Q}^n) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n) \label{eq:converse_decoding_1_5} \\
&\geq nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n) -n \delta(n,\epsilon) \label{eq:converse_decoding_1_6}\\
&\geq nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n {C'}^n) -n \delta(n,\epsilon) \label{eq:converse_decoding_1_7}\\
&= nS(C Q) + S(\hat{N}^n V| \hat{C}^n \hat{Q}^n {C'}^n) - S(\hat{N}^n V| {C'}^n)\nonumber \\
&\quad+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \nonumber\\
&= nS(C Q) - I(\hat{N}^n V : \hat{C}^n \hat{Q}^n| {C'}^n)
+ S(\hat{N}^n V| {C'}^n) -n \delta(n,\epsilon)\nonumber\\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \label{eq:converse_decoding_1}
\end{align}
where Eq.~(\ref{eq:converse_decoding_1_1}) follows because the entropy of a system
is bounded by the logarithm of the dimension of that system;
Eq.~(\ref{eq:converse_decoding_1_2}) is due to sub-additivity of the entropy;
Eq.~(\ref{eq:converse_decoding_1_4}) follows because the decoding isometry
$U_{{\mathcal{D}}}:M B_0 \hookrightarrow \hat{C}^n \hat{N}^n \hat{Q}^n V$ does not change the entropy;
Eq.~(\ref{eq:converse_decoding_1_5}) is due to the chain rule;
Eq.~(\ref{eq:converse_decoding_1_6}) follows from the decodability: the
output state on systems $\hat{C}^n \hat{Q}^n$ is $2\sqrt{2\epsilon}$-close
to the original state on $C^n Q^n$ in trace norm; then the inequality follows
by applying the Fannes-Audenaert inequality
\cite{Fannes1973,Audenaert2007}, where
$\delta(n,\epsilon)=\sqrt{2\epsilon} \log(|C||Q|) + \frac1n h(\sqrt{2\epsilon})$;
Eq.~(\ref{eq:converse_decoding_1_7}) is due to strong sub-additivity of the entropy,
and system $C'$ is a copy of classical system $C$;
Eq.~(\ref{eq:converse_decoding_1})
follows from data processing inequality where $W$ is the environment system
of the encoding isometry $U_{{\mathcal{E}}}:C^n N^n Q^n A_0 \hookrightarrow M W$.
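The Fannes--Audenaert inequality behind $\delta(n,\epsilon)$ can likewise be verified numerically on a single system; the random full-rank states and the dimension $d=4$ below are illustrative assumptions made only for this check.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_bits(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def h2(x):
    """Binary entropy in bits."""
    if x <= 0 or x >= 1:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def rand_density(d):
    """Random full-rank density matrix from a Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

d = 4
rho, sigma = rand_density(d), rand_density(d)
# trace distance T = (1/2) ||rho - sigma||_1
T = 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))
lhs = abs(entropy_bits(rho) - entropy_bits(sigma))
rhs = T * np.log2(d - 1) + h2(T)   # Fannes-Audenaert bound
print(lhs <= rhs)  # True: the bound holds for every pair of states
```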
Moreover, considering the process of encoding the information, $Q$ is bounded as follows:
\begin{align}
nQ &\geq S(M) \nonumber \\
&\geq S(M|W {C'}^n) \label{eq:converse_encoding_1_1} \\
&= S(M W {C'}^n) -S(W {C'}^n) \label{eq:converse_encoding_1_2} \\
&= S(C^n N^n Q^n A_0 {C'}^n)-S(W {C'}^n) \label{eq:converse_encoding_1_4} \\
&= S(C^n N^n Q^n {C'}^n)+S(A_0)-S(W {C'}^n) \label{eq:converse_encoding_1_5} \\
&= S(C^n N^n Q^n {C'}^n)+S(A_0)-S({C'}^n)-S(W |{C'}^n) \label{eq:converse_encoding_1_6} \\
&= S(C^n N^n Q^n)+S(A_0)-S({C'}^n)-S(W |{C'}^n) \label{eq:converse_encoding_1_7} \\
&= nS(C Q)+nS(N|C Q)+S(A_0)-nS(C')-S(W |{C'}^n) \label{eq:converse_encoding_1_8} \\
&= nS(C Q )+nS(N |C)+S(A_0)-nS(C')-S(W |{C'}^n), \label{eq:converse_encoding_1}
\end{align}
where Eq.~(\ref{eq:converse_encoding_1_1}) is due to sub-additivity of the entropy;
Eq.~(\ref{eq:converse_encoding_1_2}) is due to the chain rule;
Eq.~(\ref{eq:converse_encoding_1_4}) follows because the encoding isometry
$U_{{\mathcal{E}}}:C^n N^n Q^n A_0 \hookrightarrow M W$ does not change the entropy;
Eq.~(\ref{eq:converse_encoding_1_5}) follows because the initial entanglement $A_0$
is independent from the source;
Eq.~(\ref{eq:converse_encoding_1_6}) is due to the chain rule;
Eq.~(\ref{eq:converse_encoding_1_7}) follows because $C'$ is a copy of the system $C$,
so $S(C'|C N Q)=0$;
Eq.~(\ref{eq:converse_encoding_1_8}) is due to the chain rule and the fact that the entropy is additive
for product states;
Eq.~(\ref{eq:converse_encoding_1}) follows because conditional on system $C$
the system $N$ is independent from system $Q$.
Now, we add Eqs.~(\ref{eq:converse_decoding_1}) and
(\ref{eq:converse_encoding_1}); the entanglement terms $S(A_0)$ and $S(B_0)$ cancel out,
and by dividing by $2n$ we obtain
\begin{align}
Q \!&\geq S(C Q)-\frac{1}{2}S(C)\! +\frac{1}{2}S(N |C)\!-\frac{1}{2n} I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n) \nonumber \\
& \quad + \frac{1}{2n}S(\hat{N}^n V| {C'}^n)\!-\frac{1}{2n}S(W |{C'}^n)\!-\frac{1}{2} \delta(n,\epsilon) \nonumber\\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2n} I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n) \nonumber \\
&\quad - \frac{1}{2n}S(\hat{N}^n VW | {C'}^n) -\frac{1}{2} \delta(n,\epsilon) \label{eq:converse_Q_1}\\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2n}J_{\epsilon}(\omega^{\otimes n})-\frac{1}{2n} Z_{\epsilon}(\omega^{\otimes n})- \frac{1}{2} \delta(n,\epsilon) \label{eq:converse_Q_2} \\
&\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2}J_{\epsilon}(\omega)-\frac{1}{2} Z_{\epsilon}(\omega)- \frac{1}{2} \delta(n,\epsilon), \label{eq:converse_Q}
\end{align}
where Eq.~(\ref{eq:converse_Q_1}) follows from the chain rule and strong sub-additivity of the entropy,
in the form $S(\hat{N}^n V| {C'}^n)+S(\hat{N}^n V| W {C'}^n)\geq 0$;
Eq.~(\ref{eq:converse_Q_2}) follows from Definition~\ref{def:J_epsilon Z_epsilon};
Eq.~(\ref{eq:converse_Q}) is due to point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
In the limit of $\epsilon \to 0$ and
$n \to \infty $, the qubit rate is thus bounded by
\begin{align}\label{eq:converse_Q_asymptotics}
Q &\geq S(C Q)-\frac{1}{2}S(C) +\frac{1}{2}S(N |C)-\frac{1}{2}J_{0}(\omega)-\frac{1}{2} Z_{0}(\omega) \nonumber\\
&= S(C Q)-\frac{1}{2}S(C),
\end{align}
where the equality follows from point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
Moreover, from Eq.~(\ref{eq:converse_decoding_1}) we have:
\begin{align}
nQ+nE &\geq nQ+S(B_0) \nonumber \\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)+ S(\hat{N}^n V| {C'}^n)-n \delta(n,\epsilon) \nonumber \\
&\geq nS(C Q) - I(\hat{N}^n VW : \hat{C}^n \hat{Q}^n| {C'}^n)-n \delta(n,\epsilon) \label{eq:converse_Q+E_2} \\
&\geq nS(C Q) - J_{\epsilon}(\omega^{\otimes n})-n \delta(n,\epsilon) \label{eq:converse_Q+E_3} \\
&\geq nS(C Q) - nJ_{\epsilon}(\omega)-n \delta(n,\epsilon), \label{eq:converse_Q+E}
\end{align}
where Eq.~(\ref{eq:converse_Q+E_2}) follows because the entropy conditioned on a
classical system is non-negative, $S(\hat{N}^n V| {C'}^n) \geq 0$;
Eq.~(\ref{eq:converse_Q+E_3}) follows from Definition~\ref{def:J_epsilon Z_epsilon};
Eq.~(\ref{eq:converse_Q+E}) is due to point 4 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
In the limit of $\epsilon \to 0$ and
$n \to \infty $, we thus obtain the following bound on the rate sum:
\begin{align}
Q+E \geq S(CQ) - J_{0}(\omega)
= S(CQ) \label{eq:converse_Q+E_asymptotics},
\end{align}
where the equality follows from point 5 of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}.
\end{proof-of}
\begin{remark}
Our lower bound on $Q+E$ in Eq. (\ref{eq:converse_Q+E_asymptotics}) reproduces the
result of Koashi and Imoto \cite{KI2001} for the case of a classical-quantum source
$\rho^{AX} = \sum_x p(x) \rho_x^A \otimes \proj{x}^X$. This is because a code with
qubit-ebit rate pair $(Q,E)$ gives rise to a compression code in the sense of
Koashi and Imoto using a rate of qubits $Q+E$ and no prior entanglement, simply by
first distributing $E$ ebits and then using the entanglement assisted code.
It is worth noting that conversely, Eq. (\ref{eq:converse_Q+E_asymptotics}) can
be obtained from the Koashi-Imoto result, as follows. Any good code for $\rho^{AR}$
is automatically a good code for the classical-quantum source of mixed states
\[
\rho^{AY} = \sum_y q(y) \rho_y^A \otimes \proj{y}^Y
= \sum_y \Tr_R \rho^{AR}(\openone_A\otimes M_y^R) \otimes \proj{y}^Y,
\]
for any POVM $(M_y)$ on $R$, simply by the monotonicity of the fidelity
under CPTP maps. As discussed before, by choosing an informationally complete
measurement, the KI-decomposition of the ensemble $\{q(y),\rho_y^A\}$ is
identical to that of $\rho^{AR}$ in Theorem \ref{thm: KI decomposition}.
Thus the unassisted qubit compression rate of $\rho^{AY}$ and of $\rho^{AR}$
are lower bounded by the same quantity, the right hand side of Eq. (\ref{eq:converse_Q+E_asymptotics}).
\end{remark}
\section{Proof of Lemma~\ref{lemma:J_epsilon Z_epsilon properties}}
\label{sec: Proof of Lemma}
\begin{enumerate}
\item The definitions of the functions $J_{\epsilon}(\omega)$ and $Z_{\epsilon}(\omega)$
directly imply that they are non-decreasing functions of $\epsilon$.
\item We first prove the concavity of $Z_{\epsilon}(\omega)$.
Let $U_1:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ and
$U_2:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$, respectively, which act as
follows on the purification $\ket{\omega}^{C N Q R C' R'}$ of the previously
introduced state $\omega^{C N Q R C'}$:
\begin{align*}
&\ket{\tau_1}^{\hat{C} \hat{N} \hat{Q} E R C' R'}
=(U_1 \otimes \openone_{R C' R'}) \ket{\omega}^{C N Q R C' R'}
\quad \text{ and } \\
&\ket{\tau_2}^{\hat{C} \hat{N} \hat{Q} E R C' R'}
=(U_2 \otimes \openone_{R C' R'}) \ket{\omega}^{C N Q R C' R'},
\end{align*}
where $\Tr_{R'}[\proj{\omega}^{C N Q R C' R'}]=\omega^{C N Q R C'}$.
For $0\leq \lambda \leq 1$, define the isometry
$U_0:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E F F'$ which acts as
\begin{equation}
\label{eq: isometry U in convexity}
U_0 := \sqrt{\lambda} U_1 \otimes \ket{11}^{FF'} + \sqrt{1-\lambda} U_2 \otimes \ket{22}^{FF'},
\end{equation}
where systems $F$ and $F'$ are qubits, and
which leads to the state
\begin{align*}
(U_0 \otimes \openone_{R C' R'})& \ket{\omega}^{C N Q R C' R'}\\
&= \sqrt{\lambda}\ket{\tau_1}^{\hat{C} \hat{N} \hat{Q} E R C' R'} \ket{11}^{FF'}
+ \sqrt{1-\lambda}\ket{\tau_2}^{\hat{C} \hat{N} \hat{Q} E R C' R'} \ket{22}^{FF'}.
\end{align*}
Then, $U_0$ defines a state $\tau$, for which the reduced state on the systems
$\hat{C} \hat{N} \hat{Q} R C'$ is
\begin{align} \label{eq: tau in convexity proof}
\tau^{\hat{C} \hat{N} \hat{Q} R C'}
=\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R C'}+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R C'}.
\end{align}
Therefore, the fidelity for the state $\tau$ is bounded as follows:
\begin{align}\label{eq:fidelity in convexity}
F(\omega^{C N Q R} &,\tau^{\hat{C} \hat{N} \hat{Q} R} ) \nonumber \\
&= F(\omega^{C N Q R} ,\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R}
+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber \\
&= F(\lambda \omega^{C N Q R}+(1-\lambda)\omega^{C N Q R},
\lambda \tau_1^{\hat{C} \hat{N} \hat{Q} R}
+ (1-\lambda) \tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber\\
&\geq \lambda F( \omega^{C N Q R},\tau_1^{\hat{C} \hat{N} \hat{Q} R})
+(1-\lambda)F( \omega^{C N Q R},\tau_2^{\hat{C} \hat{N} \hat{Q} R}) \nonumber\\
&\geq 1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right).
\end{align}
The first inequality is due to simultaneous concavity of the fidelity in both
arguments;
the last line follows by the definition of the isometries $U_1$ and $U_2$.
Thus, the isometry $U_0$ yields a fidelity of at least
$1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right) =: 1-\epsilon$.
Now let $E'=E FF'$ denote the environment of the isometry $U_0$ defined above.
According to Definition \ref{def:J_epsilon Z_epsilon}, we obtain
\begin{align}
Z_\epsilon(\omega) &\geq S(\hat{N} E'|C')_{\tau} \nonumber\\
&= S(\hat{N} EFF'|C')_{\tau} \nonumber\\
&= S(F|C')_{\tau}+S(\hat{N} E|F C')_{\tau}+S(F'|\hat{N} EF C')_{\tau} \label{eq:Z_concavity_1}\\
&\geq S(\hat{N}E|FC')_{\tau} \label{eq:Z_concavity_2}\\
&= \lambda S(\hat{N} E|C')_{\tau_1}+(1-\lambda) S(\hat{N} E|C')_{\tau_2}\label{eq:Z_concavity_3}\\
&= \lambda Z_{\epsilon_1}(\omega)+(1-\lambda)Z_{\epsilon_2}(\omega) \label{eq:Z_concavity_4},
\end{align}
where the state $\tau$ in the entropies is given in Eq.~(\ref{eq: tau in convexity proof});
%
Eq.~(\ref{eq:Z_concavity_1}) is due to the chain rule;
%
Eq.~(\ref{eq:Z_concavity_2}) follows because
for the state on systems $\hat{N} EFF' C' $ we have $S(F'|C')+S(F'|\hat{N} E F C')\geq 0$
which follows from strong sub-additivity of the entropy;
%
Eq.~(\ref{eq:Z_concavity_3}) follows by expanding the conditional entropy on the classical system $F$;
%
Eq.~(\ref{eq:Z_concavity_4}) follows from the definitions of the isometries $U_1$ and $U_2$.
Moreover, let $U_1:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ and
$U_2:C N Q \hookrightarrow \hat{C} \hat{N} \hat{Q} E$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$ in the definition of $J_{\epsilon}(\omega)$, respectively.
Again, define the isometry $U_0$ as in Eq.~(\ref{eq: isometry U in convexity}),
which leads to the bound on the fidelity as in Eq.~(\ref{eq:fidelity in convexity}),
letting $E'=EFF'$ be the environment of the isometry $U_0$.
According to Definition \ref{def:J_epsilon Z_epsilon}, we obtain
\begin{align}
J_\epsilon(\omega) &\geq I(\hat{N} E FF':\hat{C} \hat{Q}|C')_{\tau} \nonumber \\
&\geq I(\hat{N} E F:\hat{C} \hat{Q}|C')_{\tau} \label{eq:concavity_J_1} \\
&= I(F:\hat{C} \hat{Q}|C')_{\tau}+I(\hat{N} E :\hat{C} \hat{Q}|F C')_{\tau} \label{eq:concavity_J_2} \\
&\geq I(\hat{N} E :\hat{C} \hat{Q}|F C')_{\tau} \label{eq:concavity_J_3} \\
&= \lambda I(\hat{N} E :\hat{C} \hat{Q}| C')_{\tau_1}+(1-\lambda) I(\hat{N} E :\hat{C} \hat{Q}| C')_{\tau_2} \label{eq:concavity_J_4}\\
&= \lambda J_{\epsilon_1}(\omega)+(1-\lambda)J_{\epsilon_2}(\omega)\label{eq:concavity_J_5},
\end{align}
where Eq.~(\ref{eq:concavity_J_1}) follows from data processing;
%
Eq.~(\ref{eq:concavity_J_2}) is due to the chain rule for mutual information;
%
Eq.~(\ref{eq:concavity_J_3}) follows from strong sub-additivity of the
entropy, $I(F:\hat{C} \hat{Q}|C')_{\tau} \geq 0$;
%
Eq.~(\ref{eq:concavity_J_4}) is obtained by expanding the conditional mutual
information on the classical system $F$;
%
finally, Eq.~(\ref{eq:concavity_J_5}) follows from the definitions of the isometries $U_1$ and $U_2$.
\item The functions are non-decreasing and concave for $\epsilon \geq 0 $, so they are continuous
for $\epsilon > 0$.
%
The concavity implies furthermore that $J_{\epsilon}$ and $Z_{\epsilon}$ are lower semi-continuous at
$\epsilon=0$. On the other hand, since the fidelity, the conditional entropy and the conditional
mutual information are all continuous functions of CPTP maps, and the domain of both optimizations
is a compact set, we conclude that $J_\epsilon(\omega)$ and $Z_{\epsilon}(\omega)$ are also upper
semi-continuous at $\epsilon=0$, so they are continuous at $\epsilon=0$
\cite[Thms.~10.1 and 10.2]{Rockafeller}.
\item We first prove
$Z_{\epsilon}(\omega_1 \otimes \omega_2) \leq Z_{\epsilon}(\omega_1) +Z_{\epsilon}(\omega_2)$.
%
In the definition of $Z_{\epsilon}(\omega_1 \otimes \omega_2)$, let the isometry
$U_0:C_1 N_1 Q_1 C_2 N_2 Q_2 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E$
be the one attaining the maximum, which acts on the following purified source states with purifying
systems $R'_1$ and $R'_2$:
\begin{align}
&\ket{\tau}^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2} \nonumber\\
&\quad=(U_0 \otimes \openone_{R_1 C'_1 R'_1 R_2 C'_2 R'_2})\ket{\omega_1}^{C_1 N_1 Q_1 R_1 C'_1 R'_1}
\otimes \ket{\omega_2}^{C_2 N_2 Q_2 R_2 C'_2 R'_2}. \label{eq:U0-action}
By definition, the fidelity is bounded by
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \geq 1- \epsilon.
\end{align*}
Now, we can define an isometry
$U_1:C_1 N_1 Q_1 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 E_1$
acting only on systems $C_1 N_1 Q_1$, by letting
$U_1 = (U_0 \otimes \openone_{R_2 C_2' R_2'})(\openone_{C_1 N_1 Q_1} \otimes \ket{\omega_2}^{C_2 N_2 Q_2 R_2 C_2' R_2'})$
and with the environment $E_1 := \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$.
It has the property that
$\ket{\tau}^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1 C_1' R_1' E_1}
= (U_1 \otimes \openone_{R_1 C_1' R_1'})\ket{\omega_1}^{C_1 N_1 Q_1 R_1 C_1' R_1'}$
has the same reduced state on $\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1$ as $\tau$ from
Eq. (\ref{eq:U0-action}).
This isometry preserves the fidelity for $\omega_1$, which follows from monotonicity
of the fidelity under partial trace:
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1},\tau_1^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1})
&= F(\omega_1^{C_1 N_1 Q_1 R_1},\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 R_1}) \\
&\geq F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \\
&\geq 1- \epsilon.
\end{align*}
By the same argument, there is the following isometry
\begin{align*}
U_2:C_2 N_2 Q_2\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1,
\end{align*}
with output system $\hat{C}_2 \hat{N}_2 \hat{Q}_2$ and
environment $E_2:=\hat{C}_1 \hat{N}_1 \hat{Q}_1 E R_1 C'_1 R'_1$, such that
\begin{align*}
F(\omega_2^{C_2 N_2 Q_2 R_2},\tau_2^{\hat{C}_2 \hat{N}_2 \hat{Q}_2 R_2})
&= F(\omega_2^{C_2 N_2 Q_2 R_2},\tau^{\hat{C}_2 \hat{N}_2 \hat{Q}_2 R_2}) \\
&\geq F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2}) \\
&\geq 1- \epsilon.
\end{align*}
%
Therefore, we obtain:
\begin{align}
Z_{\epsilon}(\omega_1) &+Z_{\epsilon}(\omega_2)-Z_{\epsilon}(\omega_1 \otimes \omega_2) \nonumber\\
&\geq
S(\hat{N}_1 E_1|C'_1)_{\tau}+S(\hat{N}_2 E_2|C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E |C'_1 C'_2)_{\tau} \label{eq:Z_additivity_1}\\
&=S(\hat{N}_1 E_1 C'_1)_{\tau}+S(\hat{N}_2 E_2C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E C'_1 C'_2)_{\tau}\nonumber \\
&\quad \quad\quad\quad\quad\quad \quad\quad\quad -S(C'_1)-S(C'_2)+S(C'_1 C'_2) \label{eq:Z_additivity_2}\\
&=S(\hat{N}_1 E_1 C'_1)_{\tau}+S(\hat{N}_2 E_2C'_2)_{\tau}-S(\hat{N}_1
\hat{N}_2 E C'_1 C'_2)_{\tau} \label{eq:Z_additivity_3}\\
&=S(\hat{C}_1\hat{Q}_1 R_1 R'_1)+S(\hat{C}_2\hat{Q}_2 R_2 R'_2)-S(\hat{C}_1\hat{Q}_1 \hat{C}_2\hat{Q}_2 R_1 R'_1 R_2 R'_2) \label{eq:Z_additivity_4}\\
&=I(\hat{C}_1\hat{Q}_1 R_1 R'_1:\hat{C}_2\hat{Q}_2 R_2 R'_2) \nonumber\\
&\geq 0 \label{eq:Z_additivity_5},
\end{align}
where Eq.~(\ref{eq:Z_additivity_1}) is due to Definition~\ref{def:J_epsilon Z_epsilon};
%
Eq.~(\ref{eq:Z_additivity_2}) is due to the chain rule;
%
Eq.~(\ref{eq:Z_additivity_3}) follows because the systems $C'_1$ and $C'_2$ are independent from each other;
%
Eq.~(\ref{eq:Z_additivity_4}) follows because the overall state on systems
$\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2$
is pure;
%
Eq.~(\ref{eq:Z_additivity_5}) is due to sub-additivity of the entropy.
To prove
$J_{\epsilon}(\omega_1 \otimes \omega_2) \leq J_{\epsilon}(\omega_1) +J_{\epsilon}(\omega_2)$,
let the isometry
$U_0:C_1 N_1 Q_1 C_2 N_2 Q_2 \hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E$
be the one attaining the maximum in the definition of $J_{\epsilon}(\omega_1 \otimes \omega_2)$,
which acts on the following purified source states with purifying
systems $R'_1$ and $R'_2$, as in Eq. (\ref{eq:U0-action}).
By definition, the fidelity is bounded as
\begin{align*}
F(\omega_1^{C_1 N_1 Q_1 R_1} \otimes \omega_2^{C_2 N_2 Q_2 R_2},
\tau^{\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 R_1 R_2})
\geq 1- \epsilon.
\end{align*}
Now define
$U_1:C_1 N_1 Q_1\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$
and $U_2:C_2 N_2 Q_2\hookrightarrow \hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1$
as in the above discussion, with the environments
$E_1:=\hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_2 C'_2 R'_2$ and
$E_2:=\hat{C}_1 \hat{N}_1 \hat{Q}_1 E R_1 C'_1 R'_1$, respectively.
Recall that the fidelity for the states $\omega_1$ and $\omega_2$ is at least
$1-\epsilon$, because of the monotonicity of the fidelity under partial trace.
Thus we obtain
\begin{align}
J_{\epsilon}(\omega_1)
&+J_{\epsilon}(\omega_2)-J_{\epsilon}(\omega_1 \otimes \omega_2) \nonumber\\
&\geq I(\hat{N}_1 E_1:\hat{C}_1\hat{Q}_1|C'_1)_\tau+
I(\hat{N}_2 E_2:\hat{C}_2\hat{Q}_2|C'_2)_\tau \nonumber \\
&\quad-I(\hat{N}_1\hat{N}_2 E:\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2|C'_1 C'_2)_\tau
\label{eq:J_additivity_1}\\
&=S(\hat{N}_1 E_1 C'_1)+S(\hat{C}_1\hat{Q}_1C'_1)-S( \hat{C}_1 \hat{N}_1 \hat{Q}_1 E_1 C'_1)-S(C'_1) \nonumber \\
&\quad +S(\hat{N}_2 E_2 C'_2)+S(\hat{C}_2\hat{Q}_2C'_2)-S( \hat{C}_2 \hat{N}_2 \hat{Q}_2 E_2 C'_2)-S(C'_2) \nonumber\\
&\quad \!-\!S(\hat{N}_1\hat{N}_2 E C'_1 C'_2) \!- \!S(\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2 C'_1 C'_2)\! \nonumber \\
&\quad +\!S( \hat{C}_1\hat{N}_1\hat{Q}_1\hat{C}_2 \hat{N}_2\hat{Q}_2E C'_1 C'_2)\!+\! S(C'_1 C'_2) \label{eq:J_additivity_2} \\
&=S(\hat{C}_1 \hat{Q}_1 R_1 R'_1)+S(\hat{C}_1\hat{Q}_1C'_1)-S(R_1 R'_1)-S(C'_1) \nonumber \\
&\quad +S(\hat{C}_2 \hat{Q}_2 R_2 R'_2)+S(\hat{C}_2\hat{Q}_2C'_2)-S( R_2 R'_2)-S(C'_2) \nonumber\\
&\quad \!-\!S(\hat{C}_1 \hat{Q}_1\hat{C}_2 \hat{Q}_2 R_1 R'_1 R_2 R'_2) \!- \!S(\hat{C}_1\hat{Q}_1\hat{C}_2\hat{Q}_2 C'_1 C'_2)\! \nonumber \\
&\quad +\!S(R_1 R'_1 R_2 R'_2)\!+\! S(C'_1 C'_2) \label{eq:J_additivity_3}\\
&=I(\hat{C}_1 \hat{Q}_1 R_1 R'_1:\hat{C}_2 \hat{Q}_2 R_2 R'_2)
-I(R_1 R'_1:R_2 R'_2)\nonumber \\
&\quad +I(\hat{C}_1\hat{Q}_1C'_1:\hat{C}_2\hat{Q}_2C'_2)
-I(C'_1:C'_2) \nonumber \\
&\geq I( R_1 R'_1:R_2 R'_2)
-I(R_1 R'_1:R_2 R'_2)
+I(C'_1:C'_2)
-I(C'_1:C'_2) \label{eq:J_additivity_4}\\
&=0, \nonumber
\end{align}
where Eq.~(\ref{eq:J_additivity_1}) is due to Definition~\ref{def:J_epsilon Z_epsilon};
%
in Eq.~(\ref{eq:J_additivity_2}) we expand the mutual informations in terms of entropies;
%
Eq.~(\ref{eq:J_additivity_3}) follows because the overall state on systems
$\hat{C}_1 \hat{N}_1 \hat{Q}_1 \hat{C}_2 \hat{N}_2 \hat{Q}_2 E R_1 C'_1 R'_1 R_2 C'_2 R'_2$
is pure;
%
Eq.~(\ref{eq:J_additivity_4}) is due to data processing.
\item According to Theorem~\ref{thm: KI decomposition} \cite{KI2002,Hayden2004},
any isometry $U:C N Q \rightarrow \hat{C} \hat{N} \hat{Q} E$ acting on the state
$\omega^{C N Q R C'}$ which preserves the reduced state on systems $C N Q R C'$
($C'$ here is considered as a part of the reference system), acts as follows:
\begin{align*}
(U \otimes \openone_{RC'}) \omega^{C N Q R C'}(U^{\dagger} \otimes \openone_{RC'})
=\sum_{j} p_j \proj{j}^{C}\otimes U_j \omega_j^{N} U_j^{\dagger} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'},
\end{align*}
where the isometry $U_j: N \rightarrow \hat{N} E$ satisfies
$\Tr_E [U_j \omega_j^{N} U_j^{\dagger}]=\omega_j$.
Therefore, in Definition~\ref{def:J_epsilon Z_epsilon} for $\epsilon=0$, the final state is
\begin{align*}
\tau^{\hat{C} \hat{N} \hat{Q} E R C'}
= \sum_{j} p_j \proj{j}^{C}\otimes U_j \omega_j^{N} U_j^{\dagger} \otimes \rho_{j}^{Q R} \otimes \proj{j}^{C'}.
\end{align*}
Thus we can directly evaluate
\begin{align*}
Z_0(\omega)=S(\hat{N} E|C')_\tau=S(N |C)_\omega \text{ and }
J_0(\omega)=I(\hat{N} E:\hat{C}\hat{Q}|C')_\tau=0,
\end{align*}
concluding the proof.
\hfill\qedsymbol
\end{enumerate}
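The purity argument behind Eq.~(\ref{eq:J_additivity_3}), namely that for a globally pure state the entropy of any subsystem equals that of its complement, can be sanity-checked numerically. A minimal sketch (the random three-qubit state and the helper routines are illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(7)

def entropy_bits(rho):
    """Von Neumann entropy in bits, dropping numerically-zero eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, dims, keep):
    """Partial trace keeping the subsystems listed in `keep`."""
    t = rho.reshape(dims + dims)
    # Trace out the complement, highest index first so axis labels stay valid
    for sub in sorted(set(range(len(dims))) - set(keep), reverse=True):
        t = np.trace(t, axis1=sub, axis2=sub + t.ndim // 2)
    d = int(np.prod([dims[k] for k in keep]))
    return t.reshape(d, d)

# Random pure state on three qubits (a stand-in for the purified overall state)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())
dims = [2, 2, 2]

# For a pure state, S(subsystem) = S(complement)
s_first = entropy_bits(ptrace(rho, dims, [0]))
s_rest = entropy_bits(ptrace(rho, dims, [1, 2]))
assert abs(s_first - s_rest) < 1e-9
```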
\section{Discussion}
\label{sec:Discussion}
We have introduced a common framework for all single-source quantum compression
problems, i.e. settings without side information at the encoder or the decoder,
by defining the compression task as the reproduction of a given bipartite state
between the system to be compressed and a reference. That state, which defines
the task, can be completely general, and special instances recover Schumacher's
quantum source compression (in both variants of a pure state ensemble and of
a pure entangled state) \cite{Schumacher1995}
and compression of a mixed state ensemble source in the blind variant
\cite{Horodecki1998,KI2001}.
Our general result gives the optimal quantum compression rate in terms of
qubits per source state, both in the settings without and with entanglement, and
indeed the entire qubit-ebit rate region, reproducing the aforementioned
special cases, along with other previously considered problems \cite{ZK_Eassisted_ISIT_2019}.
Despite the technical difficulties in obtaining it, the end result has a
simple and intuitive interpretation. Namely, the given source $\rho^{AR}$
is equivalent to a source in standard Koashi-Imoto form,
\[
\omega^{CQR} = \sum_j p_j \proj{j}^C \otimes \rho_j^{QR},
\]
so that $j$ has to be compressed as classical information, at rate $S(C)$,
and $Q$ as quantum information, at rate $S(Q|C)$; in the presence of
entanglement, the former rate is halved while the latter is maintained.
Indeed, what our Theorem \ref{theorem:complete rate region mixed state}
shows is that the original source has the same qubit-ebit
rate region as the clean classical-quantum mixed source
\[
\Omega^{CQRR'C'} = \sum_j p_j \proj{j}^C \otimes \proj{\psi_j}^{QRR'} \otimes \proj{j}^{C'},
\]
where $\ket{\psi_j}^{QRR'}$ purifies $\rho_j^{QR}$, and $RR'C'$ is considered
the reference. In $\Omega$, $C$ is indeed a manifestly classical source,
since it is duplicated in the reference system, and conditional on $C$,
$Q$ is a genuinely quantum source since it is purely entangled with the
reference system. As $\Tr_{R'C'} \Omega^{CQRR'C'} = \omega^{CQR}$, any
code and any achievable rates for $\Omega$ are good for $\omega$, and
that is how the achievability of the rate region in Theorem \ref{theorem:complete rate region mixed state}
can be described. The opposite, that a code good for $\omega$ should be
good for $\Omega$, is far from obvious. Indeed, if that were true, it would
not only yield a quick and simple proof of our converse bounds, but would
imply that the rate region of Theorem \ref{theorem:complete rate region mixed state} satisfies a
strong converse! However, as we do not know this reduction to the source $\Omega$,
our converse proceeds via a more complicated, indirect route, and yields only
a weak converse. Whether the strong converse holds, and what the detailed
relation between the sources $\omega^{CQR}$ and $\Omega^{CQRR'C'}$ is,
remain open questions.
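As a toy illustration of these two rates, one can evaluate $S(C)$ and $S(Q|C)$ directly for a hypothetical two-branch source in Koashi-Imoto form; the branch probabilities and states below are made up for the sketch and do not correspond to any source discussed above:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits; eigenvalues below tolerance are skipped."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Hypothetical KI-form source: two classical branches j = 0, 1 with p = (1/2, 1/2).
# Branch j carries a pure entangled state cos(t)|00> + sin(t)|11> on Q x R (qubits).
p = np.array([0.5, 0.5])
theta = [np.pi / 8, np.pi / 3]  # made-up branch parameters
psis = [np.array([np.cos(t), 0.0, 0.0, np.sin(t)]) for t in theta]

# Classical rate S(C) = H(p); quantum rate S(Q|C) = sum_j p_j S(Q)_{psi_j},
# since conditioning on the classical register C selects the branch.
S_C = float(-np.sum(p * np.log2(p)))
rho_Q = [np.einsum('ij,kj->ik', psi.reshape(2, 2), psi.reshape(2, 2).conj())
         for psi in psis]  # reduced states on Q, via Tr_R |psi><psi| = M M^dagger
S_Q_given_C = float(np.sum([pj * von_neumann_entropy(r)
                            for pj, r in zip(p, rho_Q)]))

print(f"classical rate S(C)  = {S_C:.4f} bits")
print(f"quantum rate  S(Q|C) = {S_Q_given_C:.4f} qubits")
```

With entanglement assistance, the classical rate $S(C)$ would be halved while the quantum rate $S(Q|C)$ is unchanged, as stated above.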
\medskip
\chapter{Quantum state redistribution for ensemble sources}
\label{chap:QSR ensemble}
In this chapter, we consider a generalization of the quantum state redistribution task,
where pure multipartite states from an ensemble source are distributed among
an encoder, a decoder and a reference system. The encoder, Alice, has access
to two quantum systems: system $A$ which she compresses and sends to the decoder,
Bob, and the side information system $C$ which she wants to keep at her site.
Bob has access to quantum side information in a system $B$, and wants to decode
the compressed information in such a way as to preserve the correlations with the
reference system on average.
As figures of merit, we consider both block error (which is the usual one
in source coding) and per-copy error (which is more akin to rate-distortion theory),
and find the optimal compression rate for the second criterion,
and achievable and converse bounds for the first. These bounds almost match in general,
up to an asymptotic error and an unbounded auxiliary system; for so-called irreducible
sources they are provably the same.
This chapter is based on the publications in \cite{ZK_QSR_ensemble_ISIT_2020,QSR_ensemble_full}.
\section{The source model}
Quantum state redistribution (QSR) is a source compression task where both
encoder and decoder have access to side information systems \cite{Devetak2008_2,Yard2009,Oppenheim2008}.
Namely, Alice, Bob and a reference system share asymptotically many copies of a
pure state $\ket{\psi}^{ACBR}$, where Alice aims to compress the quantum system $A$
and send it to Bob via a noiseless quantum channel, while she has access to a side
information quantum system $C$, and Bob has access to the side information quantum system
$B$. Bob, upon receiving the compressed information, reconstructs system $A$, and the
figure of merit of this task is to preserve the entanglement fidelity between the
reconstructed systems and the purifying reference system $R$.
Quantum state redistribution generalizes Schumacher's compression, which
is recovered as the extreme case in which neither encoder nor decoder has any
side information \cite{Schumacher1995}: the source is simply described by a pure state
$\ket{\psi}^{AR}$ shared between the encoder and a reference system.
However, besides this model, and originally, Schumacher considered a source
generating an ensemble of pure states, i.e. ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{A} \}$,
and showed both source models lead to the same optimal compression rate
(cf. Barnum \emph{et al.} \cite{Barnum1996}, as well as \cite{Winter1999}),
namely the von Neumann entropy of the reduced or average state of $A$, respectively.
In the presence of side information systems though, an ensemble model and a purified
source model can lead to different compression rates. An example of this is
the classical-quantum Slepian-Wolf problem considered in \cite{ZK_cqSW_2018,ZK_cqSW_ISIT_2019},
where the compression rate can be strictly smaller than that of the corresponding
purified source.
The general correlated ensemble source ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{AB} \}$ was
considered first in \cite{Winter1999} and then developed in \cite{Devetak2003}
and by Ahn \emph{et al.} \cite{Ahn2006},
with $A$ the system to be compressed and $B$ the side information system at the decoder.
It is an ensemble version of the coherent state merging task introduced
in \cite{family,Abeyesinghe2009}.
In \cite{Devetak2003}, the source is $\ket{\psi_x}^{AB} = \ket{f(x)}^A\ket{\phi_x}^B$.
The optimal compression rate for an irreducible source of product states and a source
generating Bell states is found in \cite{Ahn2006}; in the general case, however, the problem had
been left open.
In the present chapter, we consider an even more general ensemble source where both
encoder and decoder have access to side information systems, and which thus
constitutes an ensemble generalization of the pure QSR source. More precisely,
we consider a source which is given by an ensemble
${\mathcal{E}} = \{ p(x), \proj{\psi_x}^{ACBR} \}$ of pure states
$\psi_x=\proj{\psi_x}\in{\mathcal{S}}(A \otimes C \otimes B \otimes R)$,
$\ket{\psi_x}\in A \otimes C \otimes B \otimes R$, with a Hilbert space
$A \otimes C \otimes B \otimes R$, which in this chapter we assume to be of
finite dimension $|A|\cdot|C|\cdot|B|\cdot|R|<\infty$;
${\mathcal{S}}(A \otimes C \otimes B \otimes R)$ denotes the set of states (density operators).
Furthermore, $x\in{\mathcal{X}}$ ranges over a discrete alphabet, so we can describe
the source equivalently by the classical-quantum (cq) state
$\omega^{ACBRX} = \sum_x p(x) \proj{\psi_x}^{ACBR} \otimes \proj{x}^X$.
In this model, $A$ and $C$ are Alice's information to be sent and side
information system, respectively. System $B$ is the side information of Bob,
and $R$ and $X$ are inaccessible reference systems used only to define the task.
The ensemble model of the previous chapter
as well as those models that have been considered in \cite{Winter1999,Ahn2006,ZK_Eassisted_ISIT_2019,ZK_cqSW_2018,ZK_cqSW_ISIT_2019}
are all special cases of the model that we consider here. We find the optimal
compression rate under the per-copy fidelity criterion, and achievable
and converse rates under the block-fidelity criterion which almost match, up to an
asymptotic error and an unbounded auxiliary system. In the generic case
of so-called \emph{irreducible} ensembles, they are provably the same.
\section{The compression task}
\label{sec:The Compression task QSR ensemble}
We consider the information theoretic setting of many copies of the
source $\omega^{ACBRX}$, i.e.~$\omega^{A^nC^nB^nR^nX^n}=(\omega^{ACBRX})^{\otimes n}$:
\[
\omega^{A^nC^nB^nR^nX^n}
\!\!\! = \!\!\!\!
\sum_{x^n \in \mathcal{X}^n}\!\!\!\! p(x^n) \proj{\psi_{x^n}}^{A^nC^nB^nR^n}
\!\otimes\! \proj{x^n}^{X^n}\!\!\!\!\!,
\]
using the notation
\begin{alignat*}{3}
x^n &= x_1 x_2 \ldots x_n,
&p(x^n) &= p(x_1) p(x_2) \cdots p(x_n), \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n}, \
&\ket{\psi_{x^n}} &= \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{alignat*}
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
Alice, who has access to $A^n$ and the side information system $C^n$, performs the
encoding compression operation ${\mathcal{E}}:A^n C^n A_0 \longrightarrow M \hat{C}^n$
on $A^nC^n$ and her part $A_0$ of the entanglement, which is a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map.
Notice that as functions, CPTP maps act on the operators (density matrices) over
the respective input and output Hilbert spaces, but as there is no risk of confusion,
we will simply write the Hilbert spaces when denoting a CPTP map.
Alice's encoding operation produces the state $\sigma^{M \hat{C}^n B^n B_0 R^n X^n}$
with $M$, $\hat{C}^n$ and $B_0$ as the compressed system of Alice, the reconstructed
side information system of Alice and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M| \leq \abs{A}^n$.
The system $M$ is then sent via a noiseless quantum channel to Bob, who performs
a decoding operation $\mathcal{D}:M B^n B_0 \longrightarrow \hat{A}^n \hat{B}^n$
on the compressed system $M$, his side information $B^n$ and his part of the entanglement
$B_0$, to reconstruct the original systems, now denoted $\hat{A}^n$ and $\hat{B}^n$.
We call
$\frac1n \log|M|$ the \emph{quantum rate} of the compression protocol.
We say an encoding-decoding scheme (or code, for short) has \emph{block fidelity}
$1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion}
F &:= F(\omega^{A^nC^nB^nR^nX^n},\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}) \nonumber \\
&= \sum_{x^n} p(x^n) F\left( \psi_{x^n}^{A^nC^nB^nR^n} \!\!,
\xi_{x^n}^{\hat{A}^n \hat{C}^n \hat{B}^n R^n} \right) \geq 1-\epsilon,
\end{align}
where
\begin{align*}
\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}
&=\sum_{x^n \in \mathcal{X}^n}\!\!\!\! p(x^n) \xi_{x^n}^{\hat{A}^n \hat{C}^n \hat{B}^n R^n}
\!\otimes\! \proj{x^n}^{X^n}\\
&=\left((\mathcal{D}\circ{\mathcal{E}})\otimes {\operatorname{id}}_{R^nX^n}\right) \omega^{A^nC^nB^n R^nX^n }.
\end{align*}
We say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy-error-RD}
\overline{F} &:= \frac{1}{n}\sum_{i=1}^n F(\omega^{A_iC_iB_iR_iX^n},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX^n}) \nonumber\\
&= \sum_{x^n} p(x^n) \frac{1}{n}\sum_{i=1}^n
F\left( \psi_{x_i}^{ACBR} \!\!,
\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right)
\geq 1-\epsilon.
\end{align}
By the monotonicity of the fidelity under the partial trace
(over $X_{[n]\setminus i}$), this implies the easier to verify
condition
\begin{align}
\label{eq:per copy fidelity criterion}
\widetilde{F}
:= \frac{1}{n}\sum_{i=1}^n F\left(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}\right)
\geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}=\Tr_{[n]\setminus i}\,\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^nX^n}$,
and `$\Tr_{[n]\setminus i}$' denotes the partial trace over all systems
with indices in $[n]\setminus i$.
Conversely, Eq. \eqref{eq:per copy fidelity criterion} can be shown to imply
the criterion \eqref{eq:per-copy-error-RD} with $(1-\epsilon)^2 \geq 1-2\epsilon$
on the right hand side. Indeed, note that
\begin{align*}
&F\left(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}\right) \\
& \quad = \sum_{x_i} p(x_i) F\left(\psi_{x_i}^{ACBR},
\sum_{x_{[n]\setminus i}} p(x_{[n]\setminus i})
\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right).
\end{align*}
Thus, by the convexity of the square function and Jensen's inequality,
\[\begin{split}
(1-\epsilon)^2
&\leq \left( \frac{1}{n}\sum_{i=1}^n F(\omega^{ACBRX},\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}) \right)^2 \\
&\leq \frac{1}{n}\sum_{i=1}^n \sum_{x^n} p(x^n)
F\left(\psi_{x_i}^{ACBR},\xi_{x^n}^{\hat{A}_i \hat{C}_i \hat{B}_i R_i} \right),
\end{split}\]
and the last line is the left hand side of Eq. \eqref{eq:per-copy-error-RD}.
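The convexity-of-the-square step above can be checked numerically. In the following sketch, with made-up random qubit states and taking $F(\psi,\sigma)=\sqrt{\bra{\psi}\sigma\ket{\psi}}$ for a pure first argument, the squared fidelity with a mixture dominates the squared average of the component fidelities, exactly as Jensen's inequality asserts:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_pure(d):
    """Random pure state vector in dimension d (illustrative only)."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def root_fidelity_pure(psi, sigma):
    # F(psi, sigma) = sqrt(<psi|sigma|psi>) when the first argument is pure
    return float(np.sqrt(np.real(np.vdot(psi, sigma @ psi))))

d, n = 4, 6
psi = rand_pure(d)
sigmas = []
for _ in range(n):
    phi = rand_pure(d)
    sigmas.append(np.outer(phi, phi.conj()))
q = rng.dirichlet(np.ones(n))  # random mixing distribution

avg_sigma = sum(qk * sk for qk, sk in zip(q, sigmas))
# F(psi, avg)^2 = sum_k q_k F(psi, sigma_k)^2  >=  (sum_k q_k F(psi, sigma_k))^2
lhs = root_fidelity_pure(psi, avg_sigma) ** 2
rhs = sum(qk * root_fidelity_pure(psi, sk) for qk, sk in zip(q, sigmas)) ** 2
assert lhs >= rhs - 1e-12
```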
Correspondingly, we say $Q_b$ and $Q_c$ are an asymptotically achievable block-error rate
and an asymptotically achievable per-copy-error rate, respectively,
if for all $n$ there exist codes such that the block fidelity, respectively the per-copy fidelity,
converges to $1$, and the quantum rate converges to $Q_b$ and $Q_c$, respectively. Because
of the above demonstrated relations
$\widetilde{F}^2 \leq \overline{F} \leq \widetilde{F}$, it
does not matter which of the two versions of the per-copy fidelity we take.
According to Stinespring's theorem \cite{Stinespring1955}, the encoding and decoding
CPTP maps ${\mathcal{E}}$ and ${\mathcal{D}}$ can be dilated respectively to the isometries
$U_{{\mathcal{E}}}: A^n C^n A_0\hookrightarrow M\hat{C}^n W_n$ and
$U_{{\mathcal{D}}}: M B^n B_0 \hookrightarrow \hat{A}^n \hat{B}^nV_n$,
with $W_n$ and $V_n$ as the environment systems of the encoder and decoder, respectively.
\vspace{-0.2cm}
\section{Main Results}
\label{sec: main results of qsr ensemble}
In Theorem~\ref{thm:main theorem} we obtain the main results of this chapter concerning
optimal (minimum) block-error rate $Q_b^*$ and optimal per-copy-error rate $Q_c^*$.
These rates are expressed in terms of the following single-letter function.
\begin{definition}\label{def:Q_epsilon}
For a state
$\omega^{ACBRX}=\sum_x p(x) \proj{\psi_x}^{ACBR}\otimes \proj{x}^{X}$ and $\epsilon \geq 0$ define:
\begin{align*}
Q(\epsilon) :=&
\inf \frac{1}{2} I(Z:RXX'|B)_{\sigma}
\text{ over CPTP maps } \\
&{\mathcal{E}}_{\epsilon}: AC \rightarrow Z \hat{C} \text{ and } {\mathcal{D}}_{\epsilon}:ZB \rightarrow \hat{A}\hat{B} \text{ s.t.} \\
& F( \omega^{ACBRX},\xi^{\hat{A} \hat{C} \hat{B} RX}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\sigma^{Z\hat{C}BRX}\!
&:=\! ({\mathcal{E}}_{\epsilon}\otimes {\operatorname{id}}_{BRX})\omega^{ACBRX}
\!=\! \sum_x p(x) \sigma_x^{Z\hat{C}BR} \!\otimes \!\proj{x}^{X}\!\!, \\
\xi^{\hat{A}\hat{C}\hat{B}RX}
\!&:=\! ({\mathcal{D}}_{\epsilon} \otimes {\operatorname{id}}_{\hat{C}RX}) \sigma^{Z\hat{C}BRX}
\!=\! \sum_x p(x) \xi_x^{\hat{A}\hat{C}\hat{B}R}\!\otimes\! \proj{x}^{X}\!\!.
\end{align*}
Moreover, define $\widetilde{Q}(0):=\lim_{\epsilon \to 0+} Q(\epsilon)$.
\end{definition}
The function $Q(\epsilon)$ is defined for the specific source $\omega^{ACBRX}$; this dependency is dropped to simplify the notation.
\begin{theorem}
\label{thm:main theorem}
The minimum asymptotically achievable rate with per-copy error is
\begin{align*}
Q_c^*=\widetilde{Q}(0).
\end{align*}
Instead, the minimum asymptotically achievable rate with block error is bounded
from above and below as follows:
\begin{align*}
\widetilde{Q}(0)
\leq Q_b^* \leq Q(0).
\end{align*}
\end{theorem}
\vspace{-0.2cm}
\begin{proof}
We prove the achievability here and leave the converse proof to the next section.
Let $U_0: AC\hookrightarrow Z \hat{C} W$ and $\widetilde{U}_0: ZB\hookrightarrow \hat{A} \hat{B} V$ be respectively the isometric extension of the CPTP maps ${\mathcal{E}}_0$ and ${\mathcal{D}}_0$ in
Definition~\ref{def:Q_epsilon} with fidelity $1$ (i.e. $\epsilon = 0$).
To achieve the block-error rate $Q_b = Q(0)$,
Alice applies the isometry $U_0$,
after which the purified state shared between the parties is
\begin{align*}
\ket{\sigma_0}^{Z\hat{C}WBRXX'}
=\sum_x \sqrt{p(x)} \ket{\sigma_0(x)}^{Z\hat{C}WBR}\otimes \ket{x}^{X}\otimes \ket{x}^{X'}.
\end{align*}
Then the parties apply the QSR protocol to many copies of the above source where Alice
sends system $Z$ to Bob and systems $\hat{C}$ and $W$ are her side information.
The rate achieved by the QSR protocol is
\begin{align*}
Q_b=\frac{1}{2}I(Z:RXX'|B)_{\sigma_0}.
\end{align*}
After executing the QSR protocol, Bob has $Z^n$,
and the state shared between the parties is
$\hat{\sigma}_0^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n}$,
which satisfies the following entanglement fidelity:
\begin{align}\label{eq: fidelity 3}
F\left( (\sigma_0^{Z\hat{C}WBRXX'})^{\otimes n},
\hat{\sigma}_0^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n} \right) \to 1,
\end{align}
as $n\to\infty$.
Then, Bob applies to each system the CPTP map ${\mathcal{D}}_0:ZB \longrightarrow \hat{A}\hat{B}$.
Due to the monotonicity of the fidelity under CPTP maps, we obtain
from Eq.~(\ref{eq: fidelity 3})
\begin{align}\label{eq: fidelity 4}
\! \! \! \! F\left(\!\! ({\mathcal{D}}_0^{\otimes n}\!\otimes \!{\operatorname{id}})(\sigma_0^{Z\hat{C}BRX})^{\otimes n}\!\!,
({\mathcal{D}}_0^{\otimes n}\!\otimes\! {\operatorname{id}})\hat{\sigma}_0^{Z^n\hat{C}^nB^n R^n X^n } \!\! \right) \!\!\to\!\! 1
\end{align}
as $n \to \infty$,
where the identity channel ${\operatorname{id}}$ acts on systems ${\hat{C}^nR^nX^n}$.
Notice that by the definition of ${\mathcal{D}}_0$,
\begin{align*}
(\omega^{ACBRX})^{\otimes n}
=({\mathcal{D}}_0^{\otimes n}\otimes {\operatorname{id}}_{\hat{C}^nR^nX^n})(\sigma_0^{Z\hat{C}BRX})^{\otimes n}.
\end{align*}
Thus, the block fidelity criterion of Eq.~(\ref{eq:block fidelity criterion}) holds.
Now, let $U_{\epsilon}: AC\hookrightarrow Z \hat{C} W$ and $\widetilde{U}_{\epsilon}: ZB\hookrightarrow \hat{A} \hat{B} V$ be respectively the isometric extension of the CPTP maps ${\mathcal{E}}_{\epsilon}$ and ${\mathcal{D}}_{\epsilon}$ in
Definition~\ref{def:Q_epsilon} with fidelity $1-\epsilon$.
To achieve the per-copy-error rate $Q_c^*$,
to each copy of the source Alice
applies the isometry $U_{\epsilon}$.
Then the purified state shared between the parties is
\begin{align*}
\ket{\sigma_{\epsilon}}^{Z\hat{C}WBRXX'}
=\sum_x \sqrt{p(x)} \ket{\sigma_{\epsilon}(x)}^{Z\hat{C}WBR}\otimes \ket{x}^{X}\otimes \ket{x}^{X'}.
\end{align*}
The parties apply the QSR protocol to many copies of the above source where Alice
sends system $Z$ to Bob and systems $\hat{C}$ and $W$ are her side information.
The rate achieved by the QSR protocol is
\begin{align}
Q_c &=\frac{1}{2}I(Z:RXX'|B)_{\sigma_{\epsilon}}. \nonumber
\end{align}
After executing the QSR protocol, Bob has $Z^n$, and the state shared between
the parties is $\hat{\sigma}_{\epsilon}^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n}$,
which satisfies the following
entanglement fidelity:
\begin{align*}
F\left( (\sigma_{\epsilon}^{Z\hat{C}WBRXX'})^{\otimes n},
\hat{\sigma}_{\epsilon}^{Z^n\hat{C}^nW^nB^nR^nX^n{X'}^n} \right) \to 1
\end{align*}
as $n \to \infty$.
Due to monotonicity of the fidelity under partial trace, we obtain the per-copy fidelity,
\begin{equation}\label{eq: fidelity 1}
F(\sigma_{\epsilon}^{Z\hat{C}BRX},\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i R_i X_i }) \to 1,
\end{equation}
for all $i \in [n]$ and $n \to \infty$.
Then, to each system $i$, Bob applies the CPTP map
${\mathcal{D}}_{\epsilon}$.
We obtain
\begin{align}\label{eq: fidelity 2}
F \! \left(\! ( \! {\mathcal{D}}_{\epsilon} \! \otimes \! {\operatorname{id}}_{\hat{C}RX} \! )\sigma_{\epsilon}^{Z\hat{C}BRX}\!,\!
( \! {\mathcal{D}}_{\epsilon}\otimes {\operatorname{id}}_{\hat{C}RX} \! )\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i \! R_i \! X_i } \! \!\right) \!\to \! 1
\end{align}
for all $i \in [n]$ and $n \to \infty$,
which follows from Eq.~(\ref{eq: fidelity 1}) due to monotonicity of the fidelity
under CPTP maps.
On the other hand, the state
$\xi_{\epsilon}^{\hat{A}\hat{C}\hat{B} RX}
=({\mathcal{D}}_{\epsilon} \otimes {\operatorname{id}}_{\hat{C}RX})\sigma_{\epsilon}^{Z\hat{C}BRX}$
has high fidelity with the original source state, directly from the definition of ${\mathcal{D}}_{\epsilon}$:
\begin{align*}
F(\xi_{\epsilon}^{\hat{A}\hat{C}\hat{B} RX},\omega^{ACBRX}) \to 1.
\end{align*}
Therefore, from the above fidelity and Eq.~(\ref{eq: fidelity 2}) we obtain
\begin{align*}
F\left( \omega^{ACBRX},
({\mathcal{D}}_{\epsilon}\otimes {\operatorname{id}}_{\hat{C}RX})\hat{\sigma}_{\epsilon}^{Z_i\hat{C}_i B_i R_i X_i } \right)
\to 1
\end{align*}
for all $i \in [n]$ and $n \to \infty$,
which satisfies the per-copy fidelity criterion in Eq.~(\ref{eq:per copy fidelity criterion}).
\end{proof}
Now, we define a new single-letter function,
which we then use to obtain simplified rates in Lemma~\ref{lemma: lower bound on Q_tilde(0)}
and Corollary~\ref{cor:irreducible}, both of which are proved in \cite{QSR_ensemble_full}.
\begin{definition}
\label{def:K_epsilon}
For a state
$\omega^{ACBRX}=\sum_x p(x) \proj{\psi_x}^{ACBR}\otimes \proj{x}^{X}$ and $\epsilon \geq 0$ define:
\begin{align*}
K_\epsilon(\omega) &:= \sup I(W:X|\hat{C})_{\sigma}
\text{ over isometries } \\
&\phantom{=====}
U: AC \rightarrow Z \hat{C} W \text{ and }
\widetilde{U}:ZB \rightarrow \hat{A}\hat{B}V \text{ s.t.} \\
&\phantom{=====}
F( \omega^{ACBRX},\xi^{\hat{A} \hat{C} \hat{B} RX}) \geq 1- \epsilon,
\end{align*}
where
\begin{align*}
\sigma^{Z\hat{C}WBRX}
&:= (U\otimes \openone_{BRX})\omega^{ACBRX} (U\otimes \openone_{BRX})^{\dagger} \\
& = \sum_x p(x) \proj{\sigma_x}^{Z\hat{C}WBR}\otimes \proj{x}^{X}, \\
\xi^{\hat{A}\hat{C}\hat{B}WVRX}
&:= (\widetilde{U}\otimes \openone_{\hat{C}WRX}) \sigma^{Z\hat{C}WBRX}
(\widetilde{U}\otimes \openone_{\hat{C}WRX})^{\dagger}\\
&= \sum_x p(x) \proj{\xi_x}^{\hat{A}\hat{C}\hat{B}WVR}\otimes \proj{x}^{X}, \\
\xi^{\hat{A}\hat{C}\hat{B}RX}
&:= \Tr_{VW} \xi^{\hat{A}\hat{C}\hat{B}WVRX}.
\end{align*}
Moreover, define $\widetilde{K}_0:=\lim_{\epsilon \to 0+} K_{\epsilon}(\omega)$.
\end{definition}
\begin{remark}
Definition~\ref{def:K_epsilon} directly implies that $K_{0}(\omega) \leq \widetilde{K}_{0}(\omega)$
because $K_{\epsilon}(\omega)$ is a non-decreasing function of $\epsilon$.
Furthermore, $K_{0}(\omega)$ can be strictly positive, for example, for a source
with trivial system $C$ where $\psi_{x}^{A}\psi_{x'}^{A}=0$ holds for $x\neq x'$,
we obtain $K_{0}(\omega)=S(X)$.
This follows because Alice can measure her system and obtain the value of $X$ and
then copy this classical information to the register $W$.
\end{remark}
\begin{lemma}\label{lemma: lower bound on Q_tilde(0)}
The rate $\widetilde{Q}(0)$ is lower bounded as:
\begin{align*}
\widetilde{Q}(0) &\!\geq\! \frac{1}{2} \left(S(A|B)+S(A|C) \right) \!-\!\frac{1}{2}\widetilde{K}_{0} \\
&\!=\!\frac{1}{2}I(A:RXX'|B)_{\omega}\!-\!\frac{1}{2}\widetilde{K}_{0},
\end{align*}
where the above conditional mutual information is precisely the communication rate
of QSR for the purified source
\begin{align}\label{eq: purified source}
\ket{\omega}^{ACBRXX'}
=\sum_x \sqrt{p(x)} \ket{\psi_x}^{ACBR} \otimes \ket{x}^X \otimes \ket{x}^{X'}.
\end{align}
Moreover, if system $C$ is trivial, then $\widetilde{Q}(0)$ is equal to this lower bound.
\end{lemma}
\begin{definition}[{Barnum~\emph{et~al.}~\cite{Barnum2001_2}}]
\label{def:reducibility QSR ensemble}
An ensemble
${\mathcal{E}}=\{p(x),\proj{\psi_x}^{ACBR} \}_{x\in \mathcal{X}}$ of pure states
is called \emph{reducible} if its states fall into two or more orthogonal subspaces.
Otherwise the ensemble ${\mathcal{E}}$ is called \emph{irreducible}.
%
We apply the same terminology to the source state $\omega^{ACBRX}$.
\end{definition}
\begin{corollary}
\label{cor:irreducible}
For an irreducible source $\omega^{ACBRX}$, $K_0=\widetilde{K}_0=0$. Hence, the
optimal asymptotically achievable per-copy-error rate and block-error rate
are equal and
\begin{align*}
Q^*_c=Q^*_b=\frac{1}{2}\left(S(A|C)+S(A|B) \right).
\end{align*}
\end{corollary}
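As a concrete illustration, the rate of Corollary~\ref{cor:irreducible} can be evaluated numerically for a made-up two-state ensemble on qubit systems $A$, $C$ and $B$ (with trivial $R$); random pure states are generically irreducible, so the formula applies:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_bits(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, dims, keep):
    """Partial trace keeping the subsystems listed in `keep`."""
    t = rho.reshape(dims + dims)
    for sub in sorted(set(range(len(dims))) - set(keep), reverse=True):
        t = np.trace(t, axis1=sub, axis2=sub + t.ndim // 2)
    d = int(np.prod([dims[k] for k in keep]))
    return t.reshape(d, d)

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Hypothetical ensemble: two random pure states on qubits A, C, B
dims = [2, 2, 2]            # subsystem order: A, C, B
p = [0.6, 0.4]              # made-up probabilities
omega = sum(px * np.outer(psi, psi.conj())
            for px, psi in zip(p, [rand_pure(8), rand_pure(8)]))

# Q* = (S(A|C) + S(A|B)) / 2, with S(X|Y) = S(XY) - S(Y)
S_A_given_C = (entropy_bits(ptrace(omega, dims, [0, 1]))
               - entropy_bits(ptrace(omega, dims, [1])))
S_A_given_B = (entropy_bits(ptrace(omega, dims, [0, 2]))
               - entropy_bits(ptrace(omega, dims, [2])))
Q_star = 0.5 * (S_A_given_C + S_A_given_B)
print(f"Q* = {Q_star:.4f} qubits per source copy")
```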
\section{Converse}
In this section, we first show some properties of the function $Q(\epsilon)$, which we then use to prove the converse part of Theorem~\ref{thm:main theorem}.
\begin{lemma}
\label{lemma:Q-convex}
For $0 \leq \epsilon \leq 1$, $Q(\epsilon)$ is a monotonically non-increasing, convex function of $\epsilon$. Consequently, for $0<\epsilon <1$ it is also continuous.
\end{lemma}
\vspace{-0.2cm}
\begin{proof}
The monotonicity directly follows from the definition.
For the convexity, we verify Jensen's inequality, that is we start with maps
${\mathcal{E}}_1,{\mathcal{D}}_1$ eligible for error $\epsilon_1$ with the output state $\xi_1^{\hat{A}\hat{C}\hat{B}RX}$, and ${\mathcal{E}}_2,{\mathcal{D}}_2$ eligible for error $\epsilon_2$ with the output state $\xi_2^{\hat{A}\hat{C}\hat{B}RX}$,
and $0\leq p \leq 1$. By embedding into larger Hilbert spaces if necessary, we
can w.l.o.g. assume that the maps act on the same systems for $i=1,2$.
We define the following two maps:
\vspace{-0.2cm}
\begin{align*}
{\mathcal{E}}(\rho) &:= p {\mathcal{E}}_1(\rho) \otimes \proj{1}^{Z'} + (1-p) {\mathcal{E}}_2(\rho) \otimes \proj{2}^{Z'}, \\
{\mathcal{D}}(\rho) &:= {\mathcal{D}}_1(\bra{1}^{Z'} \rho \ket{1}^{Z'}) + {\mathcal{D}}_2(\bra{2}^{Z'} \rho \ket{2}^{Z'}).
\end{align*}
They evidently realise the output state $\xi^{\hat{A}\hat{C}\hat{B}RX} = p\xi_1^{\hat{A}\hat{C}\hat{B}RX} + (1-p)\xi_2^{\hat{A}\hat{C}\hat{B}RX}$ with the following fidelity:
\begin{align}
F&(\omega^{ACBRX} ,\xi^{\hat{A} \hat{C} \hat{B} RX} ) \nonumber\\
&= F(\omega^{ACBRX} ,p \xi_1^{\hat{A} \hat{C} \hat{B} RX}
+ (1-p) \xi_2^{\hat{A} \hat{C} \hat{B} RX}) \nonumber \\
&\geq p F(\omega^{ACBRX} \!,\!\xi_1^{\hat{A} \hat{C} \hat{B} RX} \!)
\!+\!(1\!-\! p)F(\omega^{ACBRX} \!,\xi_2^{\hat{A} \hat{C} \hat{B} RX} \!) \nonumber\\
&\geq 1-\left( p\epsilon_1 +(1-p)\epsilon_2 \right), \nonumber
\end{align}
where the third line is due to simultaneous concavity of the fidelity in both arguments.
The last line follows by the definitions of the states $\xi_1$ and $\xi_2$.
Therefore, the maps ${\mathcal{E}}$ and ${\mathcal{D}}$ yield a fidelity of at least
$1-\left( p\epsilon_1 +(1-p)\epsilon_2 \right) =: 1-\epsilon$.
Thus,
\begin{align*}
Q(\epsilon) &\leq \frac{1}{2} I(ZZ':RXX'|B)_\xi \\
&= \frac{p}{2} I(Z:RXX'|B)_{\xi_1} + \frac{1-p}{2} I(Z:RXX'|B)_{\xi_2},
\end{align*}
and taking the infimum over maps ${\mathcal{E}}_i,{\mathcal{D}}_i$ shows convexity.
The continuity statement follows from a mathematical folklore fact, stating that
any real-valued function that is convex on an interval, is continuous on the
interior of the interval.
\end{proof}
\begin{proof-of}[of Theorem \ref{thm:main theorem} (converse)]
We prove the converse for the per-copy fidelity criterion; since the block fidelity criterion implies the per-copy fidelity criterion, the same converse bound then holds for the block fidelity criterion as well.
Consider a block length $n$ code with per-copy fidelity $1-\epsilon$. The number
of qubits, $\log|M|$, can be lower bounded as follows, with respect to
the encoded state $({\mathcal{E}}\otimes {\operatorname{id}}_{B_0B^nR^nX^nX'^n})\omega^{A^nC^nB^nR^nX^nX'^n} \otimes \Phi_K^{A_0B_0}$ of the purified source:
\begin{align*}
2\!\log|M| \!&\!\geq 2 S(M) \\
&\geq I(M:R^nX^n X'^n|B^nB_0) \\
&= \! I(\underbrace{MB_0}_{Z}:R^nX^nX'^n|B^n) \!\!-\!\! I(B_0:R^nX^nX'^n|B^n) \\
&= I(Z:R^nX^nX'^n|B^n) \\
&= \sum_{i=1}^n I(Z:R_iX_iX'_i|B^nR_{<i}X_{<i}X'_{<i}) \\
&\quad \quad+ \sum_{i=1}^n I(R_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}:R_iX_iX'_i|B_i) \\
&= \sum_{i=1}^n I(ZR_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}:R_iX_iX'_i|B_i) \\
&\geq \sum_{i=1}^n I(\underbrace{ZB_{[n]\setminus i}}_{Z_i}:R_iX_iX'_i|B_i),
\vspace{-0.3cm}
\end{align*}
where in the first two inequalities we use standard entropy inequalities;
the equality in the third line is due to the chain rule, and the second
conditional mutual information is $0$ because $B_0$ is independent of $B^nR^nX^n{X'}^n$;
the fourth line introduces the register $Z:=MB_0$, noting that
the encoding together with the entangled state defines a CPTP map
${\mathcal{E}}_0:A^nC^n \rightarrow Z\hat{C}^n$, via ${\mathcal{E}}_0(\rho) = ({\mathcal{E}} \otimes {\operatorname{id}}_{B_0})(\rho \otimes\Phi_K^{A_0B_0})$;
in the fifth line we use the chain rule iteratively, and each summand of the second term
is $0$ because for all $i$, $R_{<i}X_{<i}X'_{<i}B_{[n]\setminus i}$ is independent of $R_iX_iX'_iB_i$;
in the sixth line we use the chain rule again for each $i$, and the
last line is due to data processing.
Now, for the $i$-th copy of the source $\omega^{A_iC_iB_iR_iX_i}$, we define maps
${\mathcal{E}}_i:A_i C_i \rightarrow Z_i \hat{C}_i$ and ${\mathcal{D}}_i:B_iZ_i \rightarrow \hat{A}_i \hat{B}_i$,
as follows:
\begin{itemize}
\item[${\mathcal{E}}_i$:] Alice tensors her systems $A_i C_i$ with a dummy state $\omega^{\otimes [n]\setminus i}$ and
with $\Phi_K^{A_0B_0}$ (note that all of these systems are in her possession).
Then she applies ${\mathcal{E}}:A^nC^nA_0 \rightarrow M \hat{C}^n$, and sends
$Z_i := M B_0 B_{[n]\setminus i}$ to Bob, while keeping $\hat{C}_i$.
All other systems, i.e. $\hat{C}_{[n]\setminus i} R_{[n]\setminus i}X_{[n]\setminus i}$, are trashed.
\item[${\mathcal{D}}_i$:] Bob applies ${\mathcal{D}}$ to $Z_iB_i = M B_0 B^n $ and keeps
$\hat{A}_i \hat{B}_i$, trashing the rest $\hat{A}_{[n]\setminus i}\hat{B}_{[n]\setminus i}$.
\end{itemize}
By definition, the output state
\begin{align*}
\zeta^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}
\!\!= \!({\mathcal{D}}_i \otimes {\operatorname{id}}_{\hat{A}_i\hat{C}_iR_iX_i}\!)\!\circ\!({\mathcal{E}}_i \otimes {\operatorname{id}}_{B_iR_iX_i}\!)\omega^{A_iC_iB_iR_iX_i}
\end{align*}
equals $\xi^{\hat{A}_i\hat{C}_i\hat{B}_iR_iX_i}$, which has fidelity $1-\epsilon_i$ with the source $\omega^{ACBRX}$; the fidelities of all copies satisfy $ \frac{1}{n} \sum_i (1-\epsilon_i) \geq 1-\epsilon$.
Thus, we obtain, with respect to the states $({\mathcal{E}}_i \otimes {\operatorname{id}}_{B_iR_iX_iX'_i})\omega^{A_iC_iB_iR_iX_iX'_i}$
\[\begin{split}
\frac{1}{n}\log|M| &\geq \frac{1}{n} \sum_{i=1}^n \frac12 I(Z_i:R_iX_iX'_i|B_i)\\
& \geq \frac{1}{n} \sum_{i=1}^n Q(\epsilon_i)
\geq Q\left( \frac{1}{n} \sum_{i=1}^n \epsilon_i\right)
\geq Q(\epsilon),
\end{split}\]
where the first inequality continues from before; the second holds by the definition
of $Q(\epsilon_i)$, since the pair $({\mathcal{E}}_i,{\mathcal{D}}_i)$ achieves fidelity $1-\epsilon_i$;
the third follows by convexity; and the last by monotonicity of $Q(\epsilon)$ (Lemma \ref{lemma:Q-convex}).
By taking the limits $\epsilon \to 0$ and $n \to \infty$, the claim follows.
\end{proof-of}
\vspace{-0.2cm}
\section{Discussion}
\label{sec: discussion qsr ensemble}
We considered a variant of the quantum state redistribution task, where pure
multipartite states from an ensemble are distributed between an encoder,
a decoder and a reference system.
We distinguish two figures of merit for the information processing, per-copy
fidelity and block fidelity, and define the corresponding quantum communication
rates depending on the fidelity criterion, when unlimited entanglement is available.
For the per-copy fidelity criterion, we find that the optimal qubit rate of compression is equal to $\widetilde{Q}(0)$ from Definition~\ref{def:Q_epsilon},
which is bounded from below by the rate of the conventional QSR task minus the limit of the
single-letter non-negative function $\widetilde{K}_0$ from Definition~\ref{def:K_epsilon}:
\begin{align*}
\widetilde{Q}(\!0\!)\!\geq\! \frac{1}{2}\! \left( \!S(A|B)\!+\!S(A|C) \right) \!-\! \frac{1}{2}\! \widetilde{K}_0
\!=\! \frac{1}{2} I(A:RXX'|B)_{\omega} \!-\! \!\frac{1}{2} \widetilde{K}_0,
\end{align*}
where the conditional mutual information is the rate of QSR
for the purified source in Eq.~(\ref{eq: purified source}). This lower bound is tight if system $C$ is trivial (state merging scenario).
For the block fidelity criterion, we have found converse and achievability bounds:
\vspace{-0.2cm}
\begin{align*}
\widetilde{Q}(0)\leq Q_b \leq Q(0).
\end{align*}
The two bounds would match if we knew that the function $Q(\epsilon)$ were
continuous at $\epsilon=0$. However, we do not know this; for one thing, one
cannot use compactness to show continuity, because the output system $W$ in
Definition~\ref{def:K_epsilon} is a priori unbounded.
For irreducible sources though, we show here $K_0=\widetilde{K}_0=0$, which implies
that the purified source model and the ensemble model lead to the same compression
rate. For reducible sources the information that the encoder can obtain about the
classical variable of the ensemble, i.e. system $X$, is effectively used as
side information to achieve a smaller compression rate. Thus we reproduce the
result of \cite[Thm.~III.3]{Ahn2006}, which was proven only for irreducible
product state ensembles.
There are other sources for which we know $K_0=\widetilde{K}_0=0$ to hold. First,
the ``generic'' sources in \cite[Thm.~11]{ZK_cqSW_2018}, where it is shown that the
function $\widetilde{I}_0=0$; this function is a special case of the function $\widetilde{K}_0$.
Indeed, the source there is described by an ensemble $\{p(x), \ket{\psi_x}^{AR}\ket{x}^B\}$,
which is always completely reducible, but generically the reduced states $\psi_x^A$ have
pairwise overlapping support, which is the condition under which
vanishing $\widetilde{K}_0$ is shown.
Secondly, the ensemble of four Bell states considered in \cite[Thm.~IV.1]{Ahn2006},
$\{ p(ij), \ket{\Phi_{ij}}^{AB} \}_{i,j=0,1}$,
where the side information system $C$ and the reference system $R$ are trivial;
for this source, the mutual information between Alice's system and the classical system
$X$ is zero, i.e. $I(A:X)=0$. Thus, due to data processing inequality, we have
$I(W:X) \leq I(A:X)=0$. Our main result reproduces the achievable rate
$\frac12 H(p)$, and also the optimality, by very different, and somewhat more
natural methods.
There are other special cases of the source model of this chapter that have been previously
studied in the literature for which $K_0 > 0$ or at least $\widetilde{K}_0 > 0$.
For instance in the source of \cite{Devetak2003}, where Alice's system is classical
with $A=X$ and system $C$ is trivial, one can observe that $K_0 = S(X)$ holds.
The rate we get is $Q^* = \frac12 S(X|B)$ under either error criterion, half of the
quantity reported in \cite{Devetak2003} because of the free entanglement in our
model, which allows for dense coding.
Furthermore, in the visible variant of Schumacher compression in \cite{Barnum1996,Winter1999},
where Alice's side information system is classical with $C=X$, the function takes the value
$K_0 = S(X)$, and the optimal rate is $Q^*=\frac12 S(A)$, again half of the optimal
rate without entanglement, because we can use remote state preparation and dense coding.
A third example is the ensemble $\{\frac13,\ket{\psi_i}^A\ket{\phi_i}^B\}_{i=1}^3$ from
\cite[Sec.~V.A]{Ahn2006}, which is reducible, but where the reduced ensembles
on systems $A$ and $B$ are both irreducible; it is shown there that the optimal
compression rate is strictly smaller than $(S(A)+S(A|B))/2$.
Finally, recall that in our definition of the compression task we have assumed
that the encoder and decoder share free entanglement. This was motivated by the
desire to make a smoother connection to QSR.
However, it is not known whether the pre-shared entanglement is always necessary to achieve
the corresponding quantum rates. There are certainly cases where QSR does not require prior
entanglement, such as when Alice's side information $C$ is trivial, which would carry
over to our setting whenever $K_0=\widetilde{K}_0=0$, for instance for an irreducible
ensemble.
More generally, in future work we plan to consider the trade-off between the quantum
and entanglement rates.
\chapter{Asymptotic thermodynamics of multiple conserved quantities}
\label{chap:thermo}
As a thermodynamic theory, or even as a resource theory in general, transformations
by almost-commuting unitaries, which we developed in the previous chapter, do not appear to be the most fruitful: they are reversible
and induce an equivalence relation among the sequences of product states.
In particular, every point $(\und{a},s)$ of the phase diagram $\overline{\mathcal{P}}^{(1)}$
defines an equivalence class, namely of all state sequences with charges and
entropy converging to $\und{a}$ and $s$, respectively.
To make the theory more interesting, and more resembling of ordinary thermodynamics,
including the irreversibility expressed in its first and second laws,
we now specialise to a setting considered in many previous papers in the resource theory
of thermodynamics, with either a single or multiple conserved quantities.
Specifically, we consider an asymptotic analogue of the setting proposed
in \cite{Guryanova2016} concerning the interaction of thermal baths with a
quantum system and batteries, where it was shown that the second law constrains
the combination of extractable charge quantities.
In \cite{Guryanova2016}, explicit protocols for state transformations to saturate
the second law are presented, that store each of several commuting charges in its
corresponding battery. However, for the case of non-commuting charges, one battery,
or a so-called reference frame, stores all different types of charges \cite{Halpern2016,Popescu2018}.
Only recently it was shown that reference frames for non-commuting charges
can be constructed, at least under certain conditions, which store the different
charge types in physically separated subsystems \cite{Popescu2019}.
Moreover, the size of the bath required to perform the transformations is not
addressed in these works, as only the limit of an asymptotically large bath was
considered.
We will address these questions in a similar setting but in the asymptotic regime,
where Theorem~\ref{Asymptotic equivalence theorem} provides the necessary and sufficient
condition for physically possible state transformations.
In this new setting, the \emph{asymptotic} second law constrains the combination of
extractable charges; we provide explicit protocols for realising transformations
satisfying the second law, where each battery can store its corresponding type of
work in the general case of non-commuting charges. Furthermore, we determine the
minimum number of thermal baths of a given type that is required to perform a transformation.
\section{System model, batteries and the first law}
\label{subsec:model}
We consider a system being in contact with a bath and suitable batteries,
with a total Hilbert space
$Q=S\otimes B\otimes W_1\otimes \cdots \otimes W_c$,
consisting of many non-interacting subsystems; namely, the work system, the thermal bath and
$c$ battery systems with Hilbert spaces ${S}$, ${B}$ and ${W}_j$ for
$j=1,\ldots,c$, respectively.
We call the $j$-th battery system the $j$-type battery as it is designed to absorb
$j$-type work.
The work system and the thermal bath have respectively the charges $A_{S_j}$ and $A_{B_j}$
for all $j$, but the $j$-type battery has only one nontrivial charge $A_{W_j}$; all
its other charges are zero, because it is meant to store only the $j$-th charge.
The total charge is the sum of the charges of the sub-systems $A_j=A_{S_j}+A_{B_j}+A_{W_j}$
for all $j$. Furthermore, for a charge $A$, let $\Sigma(A)=\lambda_{\max}(A)-\lambda_{\min}(A)$
denote the spectral diameter, where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are
the largest and smallest eigenvalues of the charge $A$, respectively.
We assume that the total spectral
diameter of the work system and the thermal bath is bounded by the spectral diameter of the
battery, that is $\Sigma(A_{S_j})+\Sigma(A_{B_j}) \leq \Sigma(A_{W_j})$ for all $j$; this
assumption ensures that the batteries can absorb or release charges for transformations.
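The battery-capacity assumption above is easy to check numerically. The following minimal sketch (the charge matrices are hypothetical toy choices, not taken from the text) computes the spectral diameter $\Sigma(A)$ directly from the eigenvalues:

```python
import numpy as np

def spectral_diameter(A):
    """Sigma(A) = lambda_max(A) - lambda_min(A) for a Hermitian charge A."""
    ev = np.linalg.eigvalsh(A)  # eigenvalues in ascending order
    return ev[-1] - ev[0]

# Hypothetical toy charges: qubit work system, qubit bath, 4-level battery.
A_S = np.diag([0.0, 1.0])            # work-system charge A_{S_j}
A_B = np.diag([0.0, 0.5])            # bath charge A_{B_j}
A_W = np.diag([0.0, 0.5, 1.0, 1.5])  # j-type battery charge A_{W_j}

# Capacity assumption: Sigma(A_S) + Sigma(A_B) <= Sigma(A_W)
assert spectral_diameter(A_S) + spectral_diameter(A_B) <= spectral_diameter(A_W)
```

With these toy charges the assumption holds with equality, the borderline case in which the battery can just barely absorb the worst-case charge change of system plus bath.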
As we discussed in the previous chapter, the generalized thermal state $\tau(\und{a})$
is the state that maximizes the entropy subject to the
constraint that the charges $A_j$ have the values $a_j$.
This state is equal to $\frac{1}{Z}e^{-\sum_{j=1}^c \beta_j A_{j}}$ for real
numbers $\beta_j$ called inverse temperatures and chemical potentials;
each of them is a smooth function of charge values $a_1,\ldots,a_c$, and
$Z=\Tr e^{-\sum_{j=1}^c \beta_j A_{j}}$ is the generalized partition function.
Therefore, the generalized thermal state can be equivalently denoted $\tau(\und{\beta})$
as a function of the inverse temperatures, associated uniquely with the charge
values $\und{a}$.
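From $\tau(\und{\beta}) = \frac{1}{Z}e^{-\sum_j \beta_j A_j}$ one gets the identity $S(\tau) = \sum_j \beta_j a_j + \log Z$ (entropy in natural units), which can serve as a numerical sanity check. The sketch below uses hypothetical commuting toy charges on a qutrit; none of the specific matrices or values come from the text:

```python
import numpy as np
from scipy.linalg import expm

def thermal_state(charges, betas):
    """Generalized thermal state exp(-sum_j beta_j A_j)/Z and partition function Z."""
    H = sum(b * A for b, A in zip(betas, charges))
    rho_un = expm(-H)
    Z = np.trace(rho_un).real
    return rho_un / Z, Z

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log rho, in natural-log units."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

# Two commuting toy charges on a qutrit (hypothetical example):
A1 = np.diag([0.0, 1.0, 2.0])
A2 = np.diag([0.0, 0.5, 0.25])
betas = [0.7, 0.3]
tau, Z = thermal_state([A1, A2], betas)

a = [np.trace(tau @ A).real for A in (A1, A2)]
# Identity: S(tau) = sum_j beta_j a_j + log Z
lhs = von_neumann_entropy(tau)
rhs = sum(b * aj for b, aj in zip(betas, a)) + np.log(Z)
assert abs(lhs - rhs) < 1e-10
```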
We assume that the thermal bath is initially in a generalized thermal state
$\tau_b(\und{\beta})$, for globally fixed $\und{\beta}$.
This is because in \cite{Halpern2016} it was argued that these are precisely the
\emph{completely passive} states, from which no energy can be extracted into
a battery storing energy, while not changing any of the other conserved quantities,
by means of almost-commuting unitaries, even when unlimited copies of the state are available.
We assume that the work system with state $\rho_s$ and the thermal bath are initially
uncorrelated, and furthermore that the battery systems can acquire only pure states.
Therefore, the initial state of an \emph{individual} global system $Q$
is assumed to be of the following form,
\begin{equation}
\label{eq:initial composite global}
\rho_{SBW_1\ldots W_c}
= \rho_S \ox \tau(\und{\beta})_B \ox \proj{w_1}_{W_1} \ox \cdots \ox \proj{w_c}_{W_c},
\end{equation}
and the final states we consider are of the form
\begin{equation}
\label{eq:final composite global}
\sigma_{SBW_1\ldots W_c}
= \sigma_{SB} \ox \proj{w_1'}_{W_1} \ox \cdots \ox \proj{w_c'}_{W_c},
\end{equation}
where $\rho_S$ and $\sigma_{SB}$ are states of the system and system-plus-bath, respectively,
and $w_j$ and $w_j'$ label pure states of the $j$-type battery before and after the transformation.
The notation is meant to convey the expectation value of the $j$-type work, i.e. $w_j^{(\prime)}$
is a real number and $\Tr \proj{w_j^{(\prime)}}A_{W_j} = w_j^{(\prime)}$.
The established resource theory of thermodynamics treats the batteries and the bath as
`enablers' of transformations of the system $S$, and we will show first and second laws
that express the essential constraints that any such transformation has to obey.
We start with the batteries. With the notations $\und{W} = W_1\ldots W_c$,
$\ket{\und{w}} = \ket{w_1}\cdots\ket{w_c}$, and $\ket{\und{w}'} = \ket{w_1'}\cdots\ket{w_c'}$,
let us look at a sequence $\rho^n = \rho_{S^n} = \rho_{S_1} \ox\cdots\ox \rho_{S_n}$ of initial
system states, and a sequence $\proj{\und{w}}^n = \proj{\und{w}_1}_{\und{W}_1} \ox\cdots\ox \proj{\und{w}_n}_{\und{W}_n}$
of initial battery states, recalling that the baths are initially all in the same thermal
state, $\tau_{B^n} = \tau(\und{\beta})^{\ox n}$; furthermore a sequence of target states
$\sigma^n = \sigma_{S^nB^n} = \sigma_{S_1B_1} \ox\cdots\ox \sigma_{S_nB_n}$ of the system and bath, and a
sequence $\proj{\und{w}'}^n = \proj{\und{w}_1'}_{\und{W}_1} \ox\cdots\ox \proj{\und{w}_n'}_{\und{W}_n}$
of target states of the batteries.
\begin{definition}
\label{definition:regular}
A sequence of states $\rho^n$ on any system $Q^n$ is called \emph{regular} if
its charge and entropy rates converge, i.e. if
\begin{align*}
a_j &= \lim_{n\rightarrow\infty} \frac1n \Tr \rho^n A_j^{(n)},\ j=1,\ldots,c, \text{ and} \\
s &= \lim_{n\rightarrow\infty} \frac1n S(\rho^n)
\end{align*}
exist. To indicate the dependence on the state sequence,
we write $a_j(\{\rho^n\})$ and $s(\{\rho^n\})$.
\end{definition}
\medskip
According to the AET and the other results of the previous chapter, every point $(\und{a},s)$
in the phase diagram $\overline{\mathcal{P}}^{(1)}$ labels an equivalence class of
regular sequences of product states under transformations by almost-commuting unitaries.
In the rest of the chapter we will essentially focus on regular sequences, so that
we can simply identify them, up to asymptotic equivalence, with a point in the phase
diagram. However, it should be noted that at the expense of clumsier expressions,
most of our expositions can be extended to arbitrary sequences of product states or
block-product states.
\medskip
Now, for regular sequences $\rho_{S^n}$ of initial states of the system and
final states of the system plus bath, $\sigma_{S^nB^n}$, as well as regular
sequences of initial and final battery states, $\proj{\und{w}}^n$ and $\proj{\und{w}'}^n$,
respectively, define the asymptotic rate of $j$-th charge change of the $j$-type battery as
\begin{equation}
\label{eq: W_j definition}
\Delta A_{W_j} := a_j(\{\proj{w_j'}^n\})-a_j(\{\proj{w_j}^n\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\proj{w_j'}^n-\proj{w_j}^n)A_{W_j}^{(n)}.
\end{equation}
Where there is no danger of confusion, we denote this number also as $W_j$,
the $j$-type work extracted (if $W_j < 0$, this means that the work $-W_j$ is done
on system $S$ and bath $B$).
Similarly, we define the asymptotic rate of $j$-th charge change of the work system
and the bath as
\begin{align*}
\Delta A_{S_j} &:= a_j(\{\sigma_{S^n}\})-a_j(\{\rho_{S^n}\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\sigma_{S^n}-\rho_{S^n})A_{S_j}^{(n)}, \\
\Delta A_{B_j} &:= a_j(\{\sigma_{B^n}\})-a_j(\{\tau(\und{\beta})_{B^n}\})
= \lim_{n\rightarrow\infty} \frac1n \Tr (\sigma_{B^n}-\tau(\und{\beta})_B^{\ox n})A_{B_j}^{(n)},
\end{align*}
where we denote $\sigma_{S^n} = \tr_{B^n} \sigma_{S^nB^n}$ and likewise
$\sigma_{B^n} = \tr_{S^n} \sigma_{S^nB^n}$.
\begin{theorem}[First Law]
\label{thm:first-law}
Under the above notations, if the regular sequences
$\rho_{S^nB^n\und{W}^n} = \rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \ox \proj{\und{w}}^n$
and $\sigma_{S^nB^n\und{W}^n} = \sigma_{S^nB^n} \ox \proj{\und{w}'}^n$
are equivalent under almost-commuting unitaries, then
\begin{align*}
s(\{\sigma_{S^nB^n}\}) &= s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})) \text{ and} \\
W_j &= -\Delta A_{S_j}-\Delta A_{B_j} \text{ for all } j=1,\ldots,c.
\end{align*}
Conversely, given regular sequences $\rho_{S^n}$ and $\sigma_{S^nB^n}$ of product
states such that
\[
s(\{\sigma_{S^nB^n}\}) = s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})),
\]
and assuming that the spectral diameter of the battery observables $A_{W_j}$ is large enough
(see the discussion at the start of this chapter), then there exist regular sequences of
product states of the $j$-type battery, $\proj{w_j}^n$ and $\proj{w_j'}^n$, for all
$j=1,\ldots,c$, such that
\begin{align}
\label{eq:initial}
\rho_{S^nB^n\und{W}^n} &= \rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \ox \proj{\und{w}}^n \text{ and} \\
\label{eq:final}
\sigma_{S^nB^n\und{W}^n} &= \sigma_{S^nB^n} \ox \proj{\und{w}'}^n
\end{align}
can be transformed into each other by almost-commuting unitaries.
\end{theorem}
\begin{proof}
The first part is by definition, since the almost-commuting unitaries
asymptotically preserve the entropy rate and the work rate of all
charges.
In the other direction, all we have to do is find states $\proj{w_j}$
and $\proj{w_j'}$ of the $j$-type battery $W_j$, such that
$W_j = \Delta A_{W_j} = -\Delta A_{S_j}-\Delta A_{B_j}$, for all $j=1,\ldots,c$.
This is clearly possible if the spectral diameter of $A_{W_j}$ is large enough.
With this, the states in Eqs. (\ref{eq:initial}) and (\ref{eq:final}) have the
same asymptotic entropy and charge rates.
Hence, the claim follows from the AET, Theorem~\ref{Asymptotic equivalence theorem}.
\end{proof}
\begin{remark}\normalfont
The second part of Theorem~\ref{thm:first-law} says that for regular product state
sequences, as long as the initial and final states of the work system and the thermal bath
have asymptotically the same entropy, they can be transformed into one another,
because there are always batteries that can absorb or release the necessary charge difference.
Furthermore, we can even fix the initial (or final) state of the batteries and
design the matching final (initial) battery state, assuming that the charge
expectation value of the initial (final) state is far enough from the edge of the
spectrum of $A_{W_j}$.
\end{remark}
For any such states, we say that there is a \emph{work transformation}
taking one to the other, denoted
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$.
This transformation is always feasible, implicitly assuming the
presence of suitable batteries for all $j$-type works to balance the books.
\begin{remark}\normalfont
As a consequence of the previous remark, we now change our point of view of what a
transformation is. Of our complicated $S$-$B$-$\und{W}$ compound, we only
focus on $SB$ and its state, and treat the batteries as implicit. Since we insist
that batteries need to remain in a pure state, which thus factors off and
does not contribute to the entropy, and due to the above first law Theorem \ref{thm:first-law},
we can indeed understand everything that is going on by looking at how
$\rho_{S^nB^n}$ transforms into $\sigma_{S^nB^n}$.
\end{remark}
Note that in this context, it is in a certain sense enough that the initial states
$\rho_{S^n}$ form a regular sequence of product states and that the target states
$\sigma_{S^nB^n}$ form a regular sequence. This is because the first part of
the first law, Theorem \ref{thm:first-law}, only requires regularity, and
since the target state defines a unique point $(\und{a}',s')$ in the phase
diagram, we can find a sequence of product states $\widetilde{\sigma}_{S^nB^n}$ in
its equivalence class, and use the second part of Theorem \ref{thm:first-law}
to realise the work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \widetilde{\sigma}_{S^nB^n}$.
\section{The second law}
\label{subsec:secondlaw}
If the first law in our framework arises from focusing on the system-plus-bath
compound $SB$, while making the batteries implicit, the second law comes about
from trying to understand the action on the work system $S$ alone, through the
concomitant back-action on the bath $B$.
Following \cite{Guryanova2016,Halpern2016}, the second law constrains the different
combinations of commuting conserved quantities that can be extracted from the work
system. We show here that in the asymptotic regime, the second law similarly bounds
the extractable work rates via the rate of free entropy change of the system.
The \emph{free entropy} for a system with state $\rho$, charges $A_j$ and inverse temperatures
$\beta_j$ is defined in \cite{Guryanova2016} as
\begin{align}
\label{free entropy}
\widetilde{F}(\rho) = \sum_{j=1}^c \beta_j \Tr \rho A_j - S(\rho).
\end{align}
It is shown in \cite{Guryanova2016} that the generalized thermal state
$\tau(\und{\beta})$ is the state that minimizes the free entropy for
fixed $\beta_j$.
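This minimization property can be sanity-checked numerically: $\widetilde{F}(\tau(\und{\beta})) = -\log Z$, and for any state $\rho$ one has $\widetilde{F}(\rho) = \widetilde{F}(\tau) + D(\rho\|\tau) \geq \widetilde{F}(\tau)$. A minimal sketch with hypothetical toy charges (the matrices and inverse temperatures below are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import expm

def entropy(rho):
    """Von Neumann entropy in natural-log units."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def free_entropy(rho, charges, betas):
    """F_tilde(rho) = sum_j beta_j Tr(rho A_j) - S(rho), Eq. (free entropy)."""
    return sum(b * np.trace(rho @ A).real for b, A in zip(betas, charges)) - entropy(rho)

# Hypothetical toy charges on a qutrit:
A1, A2 = np.diag([0.0, 1.0, 2.0]), np.diag([0.0, 0.5, 0.25])
betas = [0.7, 0.3]
H = betas[0] * A1 + betas[1] * A2
tau = expm(-H)
tau /= np.trace(tau).real

F_tau = free_entropy(tau, [A1, A2], betas)
rng = np.random.default_rng(0)
for _ in range(100):
    # random full-rank density matrix
    X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    rho = X @ X.conj().T
    rho /= np.trace(rho).real
    # thermal state minimizes the free entropy
    assert free_entropy(rho, [A1, A2], betas) >= F_tau - 1e-10
```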
For any work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
between regular sequences of states,
we define the asymptotic rate of free entropy change for the work system and the
thermal bath respectively as follows:
\begin{equation}\begin{split}
\label{eq:free entropy rates}
\Delta\widetilde{F}_S
&:= \lim_{n \to \infty} \frac{1}{n}\left(\widetilde{F}(\sigma_{S^n})
-\widetilde{F}(\rho_{S^n}) \right), \\
\Delta\widetilde{F}_B
&:= \lim_{n \to \infty} \frac{1}{n}\left(\widetilde{F}(\sigma_{B^n})
-n \widetilde{F}(\tau_B)\right),
\end{split}\end{equation}
where the free entropy is with respect to the charges of the work system and the thermal
bath with fixed inverse temperatures $\beta_j$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law.jpg}
\end{center}
\caption{State change of the bath for a given work transformation under extraction of
$j$-type work $W_j$, viewed in the phase diagram of the bath $\overline{\cP}_B$.
The blue line represents the tangent hyperplane at the corresponding point
of the generalized thermal state $\tau(\und{\beta})_B$, $R$ is the number of copies
of the elementary baths in the proof of Theorem \ref{asymptotic second law},
and $F$ is the point corresponding to the final state of the bath.}
\label{fig:second-law}
\end{figure}
\begin{theorem}[Second Law]
\label{asymptotic second law}
For any work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
between regular sequences of states, the $j$-type works $W_j$ that are extracted
(and they are necessarily $W_j = -\Delta A_{S_j}-\Delta A_{B_j}$ according
to the first law) are constrained by the rate of free entropy change of the system:
\[
\sum_{j=1}^c \beta_j W_j \leq -\Delta\widetilde{F}_S.
\]
Conversely, for arbitrary regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and any real numbers $W_j$ with
$\sum_{j=1}^c \beta_j W_j < -\Delta\widetilde{F}_S$,
there exists a bath system $B$ and a regular sequence of product states
$\sigma_{S^nB^n}$ with $\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with accompanying extraction of $j$-type work at rate $W_j$.
This is illustrated in Fig.~\ref{fig:second-law}.
\end{theorem}
\begin{proof}
We start with the first statement of the theorem. Consider the global system transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$ by
almost-commuting unitaries.
We use the definition of work (\ref{eq: W_j definition}) and free
entropy (\ref{free entropy}), as well as the first law, Theorem \ref{thm:first-law},
to get
\begin{equation}
\label{work expansion formula}
\begin{split}
\sum_j \beta_j W_j &= -\sum_j \beta_j(\Delta A_{S_j}+\Delta A_{B_j})\\
&= -\Delta\widetilde{F}_S-\Delta\widetilde{F}_B-\Delta s_S -\Delta s_B .
\end{split}
\end{equation}
The second line is due to the definition in Eq. (\ref{eq:free entropy rates}).
Now observe that
\begin{align}\label{eq: positive Delta_SB}
\Delta s_S +\Delta s_B
&= \lim_{n \to \infty} \frac1n \Bigl(S(\sigma_{S^n})-S(\rho_{S^n})
+ S(\sigma_{B^n})-nS(\tau(\und{\beta})_B)\Bigr) \nonumber\\
&\geq \lim_{n \to \infty} \frac1n \bigl(S(\sigma_{S^nB^n})-S(\rho_{S^n})-S(\tau(\und{\beta})_B^{\ox n})\bigr)
= 0,
\end{align}
where the inequality is due to sub-additivity of von Neumann entropy, and the
final equation due to asymptotic entropy conservation.
Further, the generalized thermal state $\tau(\und{\beta})_B$ has
the minimum free entropy \cite{Guryanova2016}, hence $\Delta\widetilde{F}_B \geq 0$.
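The subadditivity step in Eq. (\ref{eq: positive Delta_SB}), $S(SB) \leq S(S) + S(B)$, is easy to verify numerically on random bipartite states. A toy sketch (dimensions and states are arbitrary illustrations):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in natural-log units."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

rng = np.random.default_rng(1)
dS, dB = 2, 3  # toy local dimensions
for _ in range(50):
    # random full-rank state on S x B
    X = rng.normal(size=(dS * dB, dS * dB)) + 1j * rng.normal(size=(dS * dB, dS * dB))
    rho = X @ X.conj().T
    rho /= np.trace(rho).real
    t = rho.reshape(dS, dB, dS, dB)
    rho_S = np.trace(t, axis1=1, axis2=3)  # partial trace over B
    rho_B = np.trace(t, axis1=0, axis2=2)  # partial trace over S
    # subadditivity of von Neumann entropy: S(SB) <= S(S) + S(B)
    assert entropy(rho) <= entropy(rho_S) + entropy(rho_B) + 1e-9
```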
For the second statement of the theorem, the achievability part of the second law,
we aim to show that there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^n} \ox \xi_{B^n}$,
for a suitable regular sequence of product states $\xi_{B^n}$,
such that the works $W_1,\ldots,W_c$ are extracted.
This will be guaranteed, by the first law, Theorem \ref{thm:first-law},
and the AET, Theorem \ref{Asymptotic equivalence theorem}, if
\begin{equation}\begin{split}
s(\{\xi_{B^n}\}) &= S(\tau(\und{\beta})_B) - \Delta s_S, \\
a_j(\{\xi_{B^n}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \Delta A_{S_j} - W_j \quad \text{for all } j=1,\ldots,c.
\label{eq:state assumptions}
\end{split}\end{equation}
The left hand side here defines a point $(\und{a},s)$ in the charges-entropy
space of the bath, and our task is to show that it lies in the phase diagram,
for which purpose we have to define the bath characteristics suitably.
On the right hand side,
$\bigl(\Tr\tau(\und{\beta})_B A_{B_1}, \ldots, \Tr\tau(\und{\beta})_B A_{B_c}, S(\tau(\und{\beta})_B)\bigr)$
is the point corresponding to the initial state of the bath, which due to
its thermal nature is situated on the upper boundary of the region.
At that point, the region has a unique tangent hyperplane, which has the equation
$\sum_j \beta_j a_j-s = \widetilde{F}(\tau(\und{\beta})_B)$, and the phase diagram
is contained in the half space $\sum_j \beta_j a_j-s \geq \widetilde{F}(\tau(\und{\beta})_B)$,
corresponding to the fact that the free entropy of its points is at least that of the
thermal state. In fact, due to the strict concavity of the entropy, and hence
of the upper boundary of the phase diagram, the phase diagram, with the exception
of the thermal point $\bigl(\Tr \tau(\und{\beta})_B \underline{A_{B}}, S(\tau(\und{\beta})_B)\bigr)$
is contained in the open half space $\sum_j \beta_j a_j-s > \widetilde{F}(\tau(\und{\beta})_B)$.
One of many ways to construct a suitable bath $B$ is as several ($R\gg 1$) non-interacting
copies of an ``elementary bath'' $b$: $B=b^R$, with charges $A_{B_j}=A^{(R)}_{b_j}$, so that
the generalized thermal state of $B$ is $\tau(\und{\beta})_B = \tau(\und{\beta})_b^{\otimes R}$.
We claim that for large enough $R$, the left hand side of Eq. (\ref{eq:state assumptions})
defines a point in the phase diagram of $B$. Indeed, we can express the conditions
in terms of $b$, assuming that we aim for a regular sequence of product states
$\xi_{b^{nR}}$:
\begin{equation}\begin{split}
s(\{\xi_{b^{nR}}\}) &= S(\tau(\und{\beta})_b) - \frac1R \Delta s_S, \\
a_j(\{\xi_{b^{nR}}\}) &= \Tr \tau(\und{\beta})_b A_{b_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\label{eq:R-bath-assumptions}
\end{split}\end{equation}
For all sufficiently large $R$, these points $(\und{a},s)$ are arbitrarily close to
where the bath starts off, at
$(\und{a}_{\und{\beta}},s_{\und{\beta}})
= \bigl(\Tr\tau(\und{\beta})_b A_{b_1}, \ldots, \Tr\tau(\und{\beta})_b A_{b_c}, S(\tau(\und{\beta})_b)\bigr)$,
while always remaining in the open half space
$\sum_j \beta_j a_j-s > \widetilde{F}(\tau(\und{\beta})_b)$. Indeed,
they all lie on a straight line pointing from $(\und{a}_{\und{\beta}},s_{\und{\beta}})$
into the interior of that half space. Hence, for sufficiently large $R$,
$(\und{a},s) \in \overline{\cP}$, the phase diagram of $b$, and by
point 5 of Lemma~\ref{lemma:phase diagram properties} there does indeed exist
a regular sequence of product states corresponding to it.
\end{proof}
\section{Finiteness of the bath: tighter constraints and negative entropy}
\label{subsec:finitebath}
In the previous two sections we have elucidated the traditional statements of
the first and second law of thermodynamics, as emerging in our resource theory.
In particular, the second law is tight, if sufficiently large baths are allowed
to be used.
Here, we specifically look at the second statement (achievability)
of the second law in the presence of an explicitly given, finite bath $B$. It will
turn out that typically, equality in the second law cannot be attained, only
up to a certain loss due to the finiteness of the bath. We also discover
a purely quantum effect whereby the system and the bath remain entangled after
effecting a certain state transformation, allowing quantum engines to perform
tasks impossible classically (i.e. with separable correlations).
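The sense in which system and bath can remain entangled with \emph{negative} conditional entropy is illustrated by the simplest example, a maximally entangled qubit pair, for which $S(B|S) = -1$ bit. A minimal sketch (not tied to any specific protocol of this chapter):

```python
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace_B(rho, dS, dB):
    """Trace out the B factor of a state on S x B."""
    return np.trace(rho.reshape(dS, dB, dS, dB), axis1=1, axis2=3)

# Bell state |Phi+> on S (qubit) and B (qubit)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_SB = np.outer(phi, phi)
rho_S = partial_trace_B(rho_SB, 2, 2)

# S(B|S) = S(SB) - S(S): zero for the pure global state, one bit locally
S_cond = entropy_bits(rho_SB) - entropy_bits(rho_S)
assert abs(S_cond - (-1.0)) < 1e-10  # one negative bit of conditional entropy
```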
The question we want to address is the following refinement of the one answered
in the previous section:
\begin{quote}
Given regular sequences $\rho_{S^n}$ and $\sigma_{S^n}$ of product states,
and numbers $W_j$, are there extensions $\sigma_{S^nB^n}$ of $\sigma_{S^n}$
forming a regular sequence of product states, such that the work
transformation $\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
is feasible, with accompanying extraction of $j$-type work at rate $W_j$?
\end{quote}
To answer it, we need the following \emph{extended phase diagram}.
For a given state $\sigma_S$ of the system $S$, and a bath $B$, define
the following set:
\begin{equation}
\cP^{(1)}_{|\sigma_S} := \left\{ \bigl(\Tr\xi_B A_1^{(B)},\ldots,\Tr\xi_B A_c^{(B)},S(B|S)_\xi\bigr) :
\xi_{SB} \text{ state with } \Tr_B\xi_{SB}=\sigma_S \right\},
\end{equation}
furthermore its $n$-copy version
\begin{align}
&\cP^{(n)}_{|\sigma_{S^n}}
:= \nonumber
\\ & \left\{\! \bigl(\Tr\xi_{B^n} A_1^{(B^n)}\!,\!\ldots\!,\!\Tr\xi_{B^n} A_c^{(B^n)}\!\!,S(B^n|S^n)_\xi\bigr)\! :
\xi_{S^nB^n} \text{ state with } \Tr_{B^n}\xi_{S^nB^n}\!=\sigma_S^{\otimes n} \!\right\}\!.
\end{align}
Finally, define the \emph{conditional entropy phase diagram} as
\begin{align}
& \overline{\cP}_{|s_0} := \overline{\cP}^{(1)}_{|s_0}
:= \nonumber\\
&\quad \left\{ \bigl(\und{a},s\bigr) : a_j = \Tr\xi_B A_j^{(B)},\,
-\min\{s_0,S(\tau(\und{a}))\} \leq s \leq S(\tau(\und{a}))
\text{ for a state } \xi_B \right\},
\end{align}
and likewise its $n$-copy version $\overline{\cP}^{(n)}_{|ns_0}$,
for a number $s_0$ (intended to be an entropy or entropy rate).
These concepts are illustrated in Fig.~\ref{fig:extended-phase-diagram}.
The relation between the sets, and the name of the latter, are explained in the
following lemma.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{extended-phase-diagram.jpg}
\end{center}
\caption{Schematic of the extended phase diagram $\overline{\cP}_{|s_0}$.
Depending on the value of $s_0$, whether it is smaller or larger than
$\log|B|$, the diagram acquires either the left hand or the right hand
one of the above shapes.}
\label{fig:extended-phase-diagram}
\end{figure}
\begin{lemma}
\label{lemma:extended-phase-diagram}
With the previous notation, we have:
\begin{enumerate}
\item For all $k$, $\cP^{(k)}_{|\sigma_{S^k}} \subset \overline{\cP}^{(k)}_{|S(\sigma_{S^k})}$,
and the latter is a closed convex set.
\item For all $k$, $\overline{\cP}^{(k)}_{|ks_0} = k \overline{\cP}^{(1)}_{|s_0}$.
\item For a regular sequence $\{\sigma_{S^k}\}$ of product states with entropy rate
$s_0=s(\{\sigma_{S^k}\})$, every point in $\overline{\cP}_{|s_0}$ is arbitrarily well
approximated by points in $\frac1k \cP^{(k)}_{|\sigma_{S^k}}$ for all sufficiently large $k$.
I.e., $\displaystyle{\overline{\cP}_{|s_0} = \lim_{k\to\infty} \frac1k \cP^{(k)}_{|\sigma_{S^k}}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. We only have to convince ourselves that for a state
$\xi_{S^kB^k}$ with $\Tr_{B^k}\xi_{S^kB^k}=\sigma_{S^k}$,
\[
- \min\{S(\sigma_{S^k}),kS(\tau(\und{a}))\} \leq S(B^k|S^k)_\xi \leq k S(\tau(\und{a})),
\]
where $\und{a}=(a_1,\ldots,a_c)$ with $a_i = \frac1k \Tr\xi_{B^k} A_i^{(B^k)}$.
The upper bound follows from subadditivity, since
$S(B^k|S^k)_\xi \leq S(B^k)_\xi \leq k S(\tau(\und{a}))$.
The lower bound consists of two inequalities: first, by purifying $\xi$ to
a state $\ket{\phi} \in S^kB^kR$ and strong subadditivity,
$S(B^k|S^k)_\xi \geq S(B^k|S^kR)_\phi = - S(B^k)_\xi \geq - k S(\tau(\und{a}))$.
Secondly, $S(B^k|S^k)_\xi \geq -S(S^k)_\xi = -S(\sigma_{S^k})$.
2. Follows easily from the definition.
3. It is enough to show that the points of the minimum entropy diagram
\[
\overline{\cP}_{\min|s_0}
:= \left\{\bigl(\und{a},-\min\{s_0,S(\tau(\und{a}))\}\bigr)
: \Tr \xi_B A_j^{(B)} = a_j \text{ for a state } \xi_B \right\}
\]
can be approximated as claimed by an admissible $k$-copy state $\xi_{S^kB^k}$. This
is because the maximum entropy diagram $\overline{\cP}_{\max}^{(k)}$ is realized
by states $\vartheta_{S^kB^k} := \sigma_{S^k}\ox\tau(\und{a})_B^{\ox k}$, and by
interpolating the states, i.e. $\lambda\xi + (1-\lambda)\vartheta$ for $0\leq\lambda\leq 1$,
we can realize the same charge values $\und{a}$ with entropies in the
whole interval $[S(B^k|S^k)_\xi, k S(\tau(\und{a}))]$.
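The interpolation step implicitly uses the continuity of the conditional entropy in the state, together with the intermediate value theorem:
\[
f(\lambda) := S(B^k|S^k)_{\lambda\xi + (1-\lambda)\vartheta}, \qquad
f(0) = S(B^k|S^k)_\vartheta = k\,S(\tau(\und{a})), \qquad
f(1) = S(B^k|S^k)_\xi .
\]
Since $f$ is continuous in $\lambda$, every intermediate value is attained; and as the charge values are linear in the state, they remain equal to $\und{a}$ along the whole mixture.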
The approximation of $\overline{\cP}_{\min|s_0}$ can be proved by invoking
results from quantum Shannon theory, specifically quantum state merging;
the form of it that we need here is stated below as Lemma \ref{lemma:marge}.
For this, consider a tuple $\und{a} \in \overline{\cP}_0$ and a purification
$\ket{\Psi} \in S^kB^kR^k$ of the state $\vartheta_{S^kB^k} = \sigma_{S^k}\ox\tau(\und{a})_B^{\ox k}$,
which can be chosen in such a way as to be a product state itself:
$\ket{\Psi} = \ket{\Psi_1}_{S_1B_1R_1}\ox\cdots\ox\ket{\Psi_k}_{S_kB_kR_k}$.
Now we distinguish two cases, depending on which of the entropies
$S(\sigma_{S^k})$ and $kS\bigl(\tau(\und{a})_B\bigr)$ is the smaller.
\begin{enumerate}[{(i)}]
\item $S(\sigma_{S^k}) \geq kS\bigl(\tau(\und{a})_B\bigr)$: We shall
construct $\xi_{S^kB^k}$ in such a way that $\xi_{S^k} = \sigma_{S^k}$ and
$\xi_{B^k} \approx \tau\bigl(\und{a}\bigr)_B^{\otimes k}$. To this end, choose
a pure state $\phi_{CR'}$ with entanglement entropy
$S(\phi_C) = \frac1k S(\sigma_{S^k})-S\bigl(\tau(\und{a})_B\bigr) + \frac12 \epsilon$,
and consider the state
$\widetilde{\Psi}^{S^kB^kC^kR^k{R'}^k} = \Psi_{S^kB^kR^k}\ox\phi_{CR'}^{\ox k}$.
Now we apply state merging (Lemma \ref{lemma:marge}) twice to
this state (which is a tensor product of $k$ systems), with a random
rank-one projector $P$ on the combined system $R^k{R'}^k$:
first, by splitting the remaining parties $S^k : B^kC^k$,
and second by splitting them $B^k : S^kC^k$.
By construction, in both bipartitions it is the solitary system
($S^k$ and $B^k$, resp.) that has the smaller entropy by at least $\frac12\epsilon k$,
showing that the post-measurement state $\widetilde{\xi}(P)_{S^kB^kC^k}$ with
high probability approximates the marginals of $\vartheta_{S^kB^k}$ on $S^k$
and on $B^k$ simultaneously.
Choose a typical subspace projector $\Pi$ of $\phi_C^{\ox k}$ with
$\log \rank \Pi \leq S(\sigma_{S^k})-k S\bigl(\tau(\und{a})_B\bigr) + \epsilon k$,
and let
\[
\ket{\xi(P)}_{S^kB^kC^k} := \frac1c (\openone_{S^kB^k}\Pi_{C^k})\ket{\widetilde{\xi}(P)},
\]
with a normalization constant $c$.
Merging and properties of the typical subspace imply that for sufficiently large $k$,
\begin{align}
\label{eq:xiP-S}
\frac12 \left\| \xi(P)_{S^k} - \sigma_{S^k} \right\|_1 &\leq \epsilon, \\
\label{eq:xiP-B}
\frac12 \left\| \xi(P)_{B^k} - \tau(\und{a})_B^{\ox k} \right\|_1 &\leq \epsilon.
\end{align}
Now, we invoke Uhlmann's theorem applied to purifications of $\sigma_{S^k}$ and
of $\xi(P)_{S^kB^k}$, together with the well-known relations between fidelity
and trace norm applied to Eq.~(\ref{eq:xiP-S}),
to obtain a state $\xi_{S^kB^k}$ with $\xi_{S^k} = \sigma_{S^k}$ and
$\frac12 \left\| \xi(P)_{S^kB^k} - \xi_{S^kB^k} \right\|_1 \leq \sqrt{\epsilon(2-\epsilon)}$,
thus by Eq.~(\ref{eq:xiP-B})
\[
\frac12 \left\| \xi_{B^k} - \tau(\und{a})_B^{\ox k} \right\|_1 \leq \epsilon + \sqrt{\epsilon(2-\epsilon)}.
\]
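For completeness, the bound $\sqrt{\epsilon(2-\epsilon)}$ follows from the standard relations between fidelity and trace distance: Eq.~(\ref{eq:xiP-S}) gives $F\bigl(\xi(P)_{S^k},\sigma_{S^k}\bigr) \geq 1-\frac12\|\xi(P)_{S^k}-\sigma_{S^k}\|_1 \geq 1-\epsilon$, Uhlmann's theorem lifts this fidelity to the extensions, and then
\[
\frac12 \left\| \xi(P)_{S^kB^k} - \xi_{S^kB^k} \right\|_1
\leq \sqrt{1 - F\bigl(\xi(P)_{S^kB^k},\xi_{S^kB^k}\bigr)^2}
\leq \sqrt{1-(1-\epsilon)^2}
= \sqrt{\epsilon(2-\epsilon)} .
\]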
From the latter bound it follows that
\[
\left| \frac1k \tr \xi_{B^k}A_j^{(B^k)} - a_j \right|
\leq \|A_{B_j}\| \left(\epsilon + \sqrt{\epsilon(2-\epsilon)}\right).
\]
It remains to bound the conditional entropy:
\[\begin{split}
\frac1k& S(B^k|S^k)_\xi \\
&= \frac1k S\bigl(\xi_{S^kB^k}\bigr) - \frac1k S(\xi_{S^k}) \\
&\leq \frac1k S\bigl(\xi(P)_{S^kB^k}\bigr) \!- \!\frac1k S(\sigma_{S^k})
\!+\! \left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right) \\
&\leq \frac1k \log \rank\Pi \!-\! \frac1k S(\sigma_{S^k})
\!+\! \left(\! \epsilon \!+ \!\sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left( \!\epsilon \!+\! \sqrt{\!\epsilon(2-\epsilon)\!} \!\right) \\
&\leq \frac1k \left( S(\sigma_{S^k}) - kS\bigl(\tau(\und{a})\bigr) \right)
- \frac1k S(\sigma_{S^k})
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)\\
&\quad \quad \quad \quad \quad \quad \quad + h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) \\
&= -S\bigl(\tau(\und{a})\bigr)
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) ,
\end{split}\]
where in the second line we have used the Fannes inequality on the continuity
of the entropy \cite{Fannes1973,Audenaert2007}, with the binary entropy
$h(x)=-x\log x-(1-x)\log(1-x)$;
in the third line that $\xi(P)_{S^kB^k}$ has rank at most $\rank \Pi$;
and in the fourth line the upper bound on the latter rank by construction.
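The continuity bound used in the second line is, in the sharp form of Audenaert \cite{Audenaert2007} (any Fannes-type bound suffices here): for states $\rho,\sigma$ on a $d$-dimensional system with $T = \frac12\|\rho-\sigma\|_1 \leq 1-\frac1d$,
\[
\bigl| S(\rho) - S(\sigma) \bigr| \leq T \log(d-1) + h(T),
\]
applied with $d = (|S||B|)^k$ and $T = \epsilon+\sqrt{\epsilon(2-\epsilon)}$, whence the per-copy error is at most $T\log(|S||B|) + \frac1k h(T) \leq T\log(|S||B|) + h(T)$.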
\item $S(\sigma_{S^k}) < kS\bigl(\tau(\und{a})_B \bigr)$: We shall
construct $\xi_{S^kB^k}$ such that $\xi_{S^k} = \sigma_{S^k}$ and
$\tr\xi_{B^k}A_j^{(B^k)} \approx \tr\tau\bigl(\und{a}\bigr)_B A_{B_j}$ for
all $j=1,\ldots,c$. Here, choose a pure state $\phi_{CR'}$ with entanglement entropy
$S(\phi_C) = \epsilon$, and define
$\widetilde{\Psi}^{S^kB^kC^kR^k{R'}^k} = \Psi_{S^kB^kR^k}\ox\phi_{CR'}^{\ox k}$.
Now we apply state merging (Lemma \ref{lemma:marge}) to
this state (which is a tensor product of $k$ systems), with a random
rank-one projector $P$ on the combined system $R^k{R'}^k$,
by splitting the remaining parties $S^k : B^kC^k$, which ensures that
$S^k$ has the smaller entropy by at least $\epsilon k$,
showing that the post-measurement state $\widetilde{\xi}(P)_{S^kB^kC^k}$ with
high probability approximates the marginal of $\vartheta_{S^kB^k}$ on $S^k$.
Proceed as before with a typical subspace projector $\Pi$ of $\phi_C^{\ox k}$
such that
$\log \rank \Pi \leq \epsilon k$,
and let
\(
\ket{\xi(P)}_{S^kB^kC^k} := \frac1c (\openone_{S^kB^k}\Pi_{C^k})\ket{\widetilde{\xi}(P)},
\)
with a normalization constant $c$.
Merging and properties of the typical subspace thus imply that for sufficiently large $k$,
\begin{equation}
\label{eq:xiPt-S}
\frac12 \left\| \xi(P)_{S^k} - \sigma_{S^k} \right\|_1 \leq \epsilon.
\end{equation}
Next we need to look at the charge values of $\xi(P)_{B^k}$.
Note that the expectation $\EE_P \xi(P)_{B^k}$ is approximately
equal to $\EE_P \widetilde{\xi}(P)_{B^k} = \tau(\und{a})_B^{\ox k}$.
It follows from \cite[{Lemma III.5}]{Hayden2006} that if $k$ is sufficiently
large, then with high probability
\begin{equation}
\label{eq:xiPt-B-A}
\left| \tr \bigl(\xi(P)_{B^k} - \tau(\und{a})_B^{\ox k}\bigr) A_j^{(B^k)} \right| \leq \|A_{B_j}\| \epsilon
\quad \text{for all } j=1,\ldots,c.
\end{equation}
So we just focus on a good instance of $P$, where both
Eqs.~(\ref{eq:xiPt-S}) and (\ref{eq:xiPt-B-A}) hold. Now we proceed as
in the first case to find a state $\xi_{S^kB^k}$ with $\xi_{S^k} = \sigma_{S^k}$ and
$\frac12 \left\| \xi(P)_{S^kB^k} - \xi_{S^kB^k} \right\|_1 \leq \sqrt{\epsilon(2-\epsilon)}$,
using Uhlmann's theorem. Thus, as before we find
\[
\left| \frac1k \tr \xi_{B^k}A_j^{(B^k)} - a_j \right|
\leq \|A_{B_j}\| \left(\epsilon + \sqrt{\epsilon(2-\epsilon)}\right).
\]
Regarding the conditional entropy, we have, quite similarly to before,
\[\begin{split}
\frac1k &S(B^k|S^k)_\xi \\
&= \frac1k S\bigl(\xi_{S^kB^k}\bigr) - \frac1k S(\xi_{S^k}) \\
&\leq \frac1k S\bigl(\xi(P)_{S^kB^k}\bigr) \!- \!\frac1k S(\sigma_{S^k})
\!+\! \left( \!\epsilon \!+ \!\sqrt{\!\epsilon(2\!-\!\epsilon)\!} \right)\log(|S||B|)
\!+\! h\left(\! \epsilon \!+\! \sqrt{\!\epsilon(2\!-\!\epsilon)\!} \!\right) \\
&\leq \frac1k \log 2^{\epsilon k} - \frac1k S(\sigma_{S^k})
+ \left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right) \\
&\leq -\frac1k S(\sigma_{S^k})
+ \left( 2\epsilon + \sqrt{\epsilon(2-\epsilon)} \right)\log(|S||B|)
+ h\left( \epsilon + \sqrt{\epsilon(2-\epsilon)} \right).
\end{split}\]
\end{enumerate}
Since in both cases the conditional entropy is always
$\geq - \frac1k \min\left\{ S(\sigma_{S^k}),k S\bigl(\tau(\und{a})\bigr) \right\}$,
this concludes the proof.
\end{proof}
\medskip
\begin{lemma}[Quantum state merging \cite{Horodecki2007,SM_nature}]
\label{lemma:marge}
Given a pure product state
$\Psi_{A^nB^nC^n}=(\Psi_1)_{A_1B_1C_1}\ox\cdots\ox(\Psi_n)_{A_nB_nC_n}$,
such that $S(\Psi_{A^n})-S(\Psi_{B^n}) \geq \epsilon n$, consider a Haar
random rank-one projector $P$ on $C^n$. Then, for sufficiently large $n$
it holds except with arbitrarily small probability that the post-measurement state
\[
\psi(P)_{A^nB^n} = \frac{1}{\tr\Psi_{C^n}P}\tr_{C^n}\Psi(\openone_{A^nB^n}\ox P)
\]
satisfies $\frac12\| \psi(P)_{B^n}-\Psi_{B^n}\|_1 \leq \epsilon$, i.e.\ the marginal on the lower-entropy system $B^n$ is approximately preserved.
\hfill$\blacksquare$
\end{lemma}
\medskip
\begin{remark}\normalfont
While we have seen that the upper boundary of the extended phase diagram
$\overline{\cP}^{(k)}_{|S(\sigma_{S^k})}$ is exactly realized by points
in $\cP^{(k)}_{|\sigma_{S^k}}$, namely those corresponding to the tensor product
states $\sigma_{S^k} \ox \tau(\und{a})_B^{\ox k}$,
it seems unlikely that we can achieve the analogous thing for the lower boundary:
this would entail finding, for every (sufficiently large) $k$, a tensor product
state, or a block tensor product state, $\xi_{S^kB^k}$ with prescribed charge vector
$\und{a}$ on $B^k$, and $S(B^k|S^k)_\xi = -\min\{kS\bigl(\tau(\und{a})\bigr),S(\sigma_{S^k})\}$.
Now, for concreteness, consider the case that
$kS\bigl(\tau(\und{a})\bigr) \leq S(\sigma_{S^k})$, so that the conditional entropy
aimed for is $S(B^k|S^k)_\xi = -k S\bigl(\tau(\und{a})_B\bigr)$, which is the value
of a purification of $\tau(\und{a})_B^{\ox k}$. In particular, it would mean that
$S(\xi_{B^k}) = k S\bigl(\tau(\und{a})_B\bigr)$, and so -- recalling the
charge values and the maximum entropy principle -- it would follow that
$\xi_{B^k} = \tau(\und{a})_B^{\ox k}$.
However, from the equality conditions in strong subadditivity \cite{Hayden2004},
this in turn would imply that $\xi_{S^kB^k}$ is a probabilistic mixture of
purifications of $\tau(\und{a})_B^{\ox k}$ whose restrictions to $S^k$
are pairwise orthogonal. This would clearly put constraints on the spectrum
of $\sigma_{S^k}$ that are not generally met.
In the other case that $kS\bigl(\tau(\und{a})\bigr) > S(\sigma_{S^k})$, the
conditional entropy should be $S(B^k|S^k)_\xi = - S(\sigma_{S^k})$, and
since $\xi_{S^k} = \sigma_{S^k}$, this would necessitate a pure state
$\xi_{S^kB^k}$. Looking at the proof of Lemma \ref{lemma:extended-phase-diagram},
however, we see that it leaves quite a bit of manoeuvring space, so it
may or may not be possible to satisfy all charge constraints
$\tr \xi_{B^k}A_j^{(B^k)} = a_j$ ($j=1,\ldots,c$).
\end{remark}
\medskip
Coming back to our question: if a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$ is
feasible for regular sequences on the left hand side,
then the first law implies that
\begin{align*}
s(\{\sigma_{S^nB^n}\}) &= s(\{\rho_{S^n}\}) + S(\tau(\und{\beta})) \text{ and} \\
W_j &= -\Delta A_{S_j}-\Delta A_{B_j} \\
&= a_j(\{\rho_{S^n}\}) - a_j(\{\sigma_{S^n}\})
+ a_j(\{\tau(\und{\beta})_{B^n}\}) - a_j(\{\sigma_{B^n}\}).
\end{align*}
When $\sigma_{S^n}$ and the $W_j$ are given, this constrains the possible
states $\sigma_{S^nB^n}$ as follows: for each $n$,
\begin{align*}
\frac1n S(B^n|S^n)_\sigma &\approx S(\tau(\und{\beta})) - \Delta s_S, \\
\frac1n \Tr \sigma_{B^n} A_{B_j}^{(n)} &\approx \Tr \tau(\und{\beta})_B A_{B_j}
- \Delta A_{S_j} - W_j,\quad \text{for all } j=1,\ldots,c.
\end{align*}
By Lemma \ref{lemma:extended-phase-diagram}, the left hand sides converge to the
components of a point in $\overline{\cP}_{|s(\{\sigma_{S^n}\})}$; hence a necessary
condition for the feasibility of the work transformation in question is that
\begin{equation}\begin{split}
\label{eq:conditional-point}
(\und{a},t) \in \overline{\cP}_{|s(\{\sigma_{S^n}\})}, \text{ with }
a_j &:= \Tr \tau(\und{\beta})_B A_{B_j} - \Delta A_{S_j} - W_j, \\
t &:= S(\tau(\und{\beta})) - \Delta s_S.
\end{split}\end{equation}
Again by Lemma \ref{lemma:extended-phase-diagram}, this is equivalent to
all $a_j$ being contained in the set of joint quantum expectations of the
observables $A_{B_j}$, and
\[
-\min\left\{ s(\{\sigma_{S^n}\}),S\bigl(\tau(\und{a})\bigr) \right\} \leq t \leq S\bigl(\tau(\und{a})\bigr).
\]
The following theorem shows that this is also sufficient, when we allow blockings
of the asymptotically many systems.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law-finite.jpg}
\end{center}
\caption{State change of the bath for a given work transformation under the extraction of
$j$-type work $W_j$, viewed in the extended phase diagram of the bath. The bath is
initially in the thermal state $\tau(\und{\beta})_B$; the blue line at the
corresponding point represents the tangent hyperplane of the
diagram. The final states $\{\sigma_{S^nB^n}\}$ give rise to the point $F$ in
the extended diagram, whose charge values are those of $\{\sigma_{B^n}\}$,
while the entropy is $\frac1n S(B^n|S^n)_\sigma$.}
\label{fig:second-law-finite}
\end{figure}
\begin{theorem}[Second Law with fixed bath]
\label{thm:second-law-finite-bath}
For arbitrary regular sequences $\rho_{S^n}$ and $\sigma_{S^n}$ of product states,
a given bath $B$, and any real numbers $W_j$,
if there exists a regular sequence of block product states
$\sigma_{S^nB^n}$ with $\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with accompanying extraction of $j$-type work at rate $W_j$,
then Eq. (\ref{eq:conditional-point}) defines a point
$(\und{a},t) \in \overline{\cP}_{|s(\{\sigma_{S^n}\})}$.
Conversely, assuming additionally that $\sigma_{S^n} = \sigma_S^{\otimes n}$ is an i.i.d.~state,
if Eq. (\ref{eq:conditional-point}) defines a point
$(\und{a},t) \in \overline{\cP}_{|S(\sigma_S)}^0$ in the interior of the
extended phase diagram, then for every $\epsilon>0$
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n} \rightarrow \sigma_{S^nB^n}$
with block product states $\sigma_{S^nB^n}$ such that
$\Tr_{B^n}\sigma_{S^nB^n} = \sigma_{S^n}$, and with accompanying
extraction of $j$-type work at rate $W_j\pm\epsilon$.
This is illustrated in Fig.~\ref{fig:second-law-finite}.
\end{theorem}
\begin{proof}
We have already argued the necessity of the condition. It remains to show its
sufficiency.
Using Lemma \ref{lemma:extended-phase-diagram}, this is not hard: namely,
by its point 3, for sufficiently large $k$, the point $(\und{a},t) \in \overline{\cP}_{|S(\sigma_S)}^0$
is $\epsilon$-approximated by $\frac1k \cP^{(k)}_{|\sigma_S^{\ox k}}$, i.e. there
exists a $\sigma_{S^kB^k}$ with $\tr_{B^k} \sigma_{S^kB^k} = \sigma_S^{\ox k}$
with $\frac1k S(B^k|S^k)_\sigma \leq t-\epsilon$ and $\frac1k \tr \sigma_{B^k} A_j^{(B^k)} \approx a_j$
for all $j=1,\ldots,c$. By mixing $\sigma$ with a small fraction of
$\bigl(\tau(\und{a})_B\ox\sigma_S\bigr)^{\ox k}$, we can in fact assume that
$\frac1k S(B^k|S^k)_\sigma = t$ while preserving $\frac1k \tr \sigma_{B^k} A_j^{(B^k)} \approx a_j$.
Now our target block product states will be
$\sigma_{S^nB^n} := \bigl(\sigma_{S^kB^k}\bigr)^{\ox \frac{n}{k}}$ for $n$ a multiple of $k$.
By construction, this sequence has the same entropy rate as
the initial regular sequence of product states $\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox n}$,
so by the first law, Theorem \ref{thm:first-law}, and the
AET, Theorem \ref{Asymptotic equivalence theorem}, there is indeed a
corresponding work transformation with $j$-type work extracted
equal to $W_j\pm\epsilon$.
\end{proof}
\medskip
\begin{remark}\normalfont
One might object that tensor power target states are not general enough
in Theorem \ref{thm:second-law-finite-bath}, as we
had observed in the previous chapter that such states do not
generate the full phase diagram $\overline{\mathcal{P}}$ of the system $S$.
However, by considering blocks of $\ell$ systems $S^\ell$, we can
apply the theorem to block tensor power target states
$\sigma_{S^n} = \bigl(\sigma_1\ox\cdots\ox\sigma_\ell\bigr)^{\ox \frac{n}{\ell}}$,
and these latter are in fact a rich enough class to exhaust the entire
phase diagram $\overline{\mathcal{P}}$, when $\ell \geq \dim S$
(point 5 of Lemma \ref{lemma:phase diagram properties}).
More generally, we can allow as target \emph{uniformly regular} sequences of
product states $\sigma_{S^n}$, by which we mean the following strengthening
of the condition in Definition \ref{definition:regular}.
Denoting $S_{N+1}^{N+n} := S_{N+1}\ldots S_{N+n}$, we require that for all
$\epsilon > 0$ and uniformly for all $N$, it holds that for sufficiently large $n$,
\[
\left| a_j - \frac1n \Tr \sigma_{S_{N+1}^{N+n}} A_j^{(n)} \right|
\leq \epsilon \text{ for all } j=1,\ldots,c, \text{ and }
\left| s - \frac1n S(\sigma_{S_{N+1}^{N+n}}) \right| \leq \epsilon.
\]
\]
\end{remark}
\section{Tradeoff between thermal bath rate and work extraction}
\label{subsec:bath-rate}
Here we consider a different take on the question of the work deficit due
to the finiteness of the bath. Namely, we still consider a given fixed finite
bath system $B$, but now ask which state transformations and associated
generalized works are possible when, for each copy of the subsystem $S$,
$R\geq 0$ copies of $B$ are present. It is clear what this means when
$R$ is an integer, but below we shall give meaning to this rate as a real number.
We start off with the observation that ``large enough bath'' in
Theorem \ref{asymptotic second law} can be taken to mean $B^R$, for the given
elementary bath $B$ and sufficiently large integer $R$.
\begin{theorem}
\label{thm:large-R-2nd-law}
For arbitrary regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and any real numbers $W_j$ with
$\sum_{j=1}^c \beta_j W_j < -\Delta\widetilde{F}_S$,
there exists an integer $R\geq 0$ and a regular sequence of product states
$\sigma_{S^nB^{nR}}$ with $\Tr_{B^{nR}}\sigma_{S^nB^{nR}} = \sigma_{S^n}$, such that
there is a work transformation
$\rho_{S^n} \ox \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$
with accompanying extraction of $j$-type work at rate $W_j$.
\end{theorem}
\begin{proof}
This was already shown in the achievability part of Theorem \ref{asymptotic second law}.
\end{proof}
\medskip
To give meaning to a rational rate $R = \frac{\ell}{k}$, group the systems of $S^n$,
for $n=\nu k$, into blocks of $k$, which we denote $\widetilde{S}=S^k$,
and consider $\rho_{S^n} \equiv \rho_{\widetilde{S}^\nu}$ as a $\nu$-party state,
and likewise $\sigma_{S^n} \equiv \sigma_{\widetilde{S}^\nu}$. For each
$\widetilde{S}=S^k$ we assume $\ell$ copies of the thermal bath,
$\tau(\und{\beta})_B^{\otimes \ell} = \tau(\und{\beta})_{\widetilde{B}}$,
with $\widetilde{B} = B^\ell$. If $\{\rho_{S^n}\}$ and $\{\sigma_{S^n}\}$ are
regular sequences of product states, then evidently so are
$\{\rho_{\widetilde{S}^\nu}\}$ and $\{\sigma_{\widetilde{S}^\nu}\}$.
Now, for the given sequences $\{\rho_{S^n}\}$ and $\{\sigma_{S^n}\}$ of initial
and final states, respectively, as well as works $W_1,\ldots,W_c$ satisfying
$\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$, $\delta \geq 0$,
we can ask what is the infimum over all rates $R = \frac{\ell}{k}$
such that there is a work transformation
\[
\rho_{S^n} \ox \tau(\und{\beta})_{B^{nR}}
\equiv \rho_{\widetilde{S}^\nu} \ox \tau(\und{\beta})_{\widetilde{B}}^{\ox \nu\ell}
\rightarrow
\sigma_{\widetilde{S}^\nu \widetilde{B}^{\nu\ell}}
\equiv \sigma_{S^nB^{nR}},
\]
where as before the final state is intended to satisfy
$\Tr_{\widetilde{B}^{\nu\ell}} \sigma_{\widetilde{S}^\nu \widetilde{B}^{\nu\ell}} = \sigma_{\widetilde{S}^\nu}$.
We observe that if $S(\rho_{S^n})=S(\sigma_{S^n})$ and $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S$,
then the work transformation is possible without
using any thermal bath, which follows from Eq.~(\ref{work expansion formula}).
That is, the thermal bath is not necessary for extracting work if the entropy of the work
system does not change.
Conversely, the role of the thermal bath is precisely to facilitate changes
of entropy in the work system.
To answer the above question about the minimum bath rate $R^*$,
we first show the following lemma.
\begin{lemma}
Consider regular sequences of product states,
$\rho_{S^n}$ and $\sigma_{S^n}$, and real numbers $W_j$, and assume
that for large enough rate $R$ there is a work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$,
with $\sigma_{S^n}$ as the reduced final state on the work system,
and works $W_1,\ldots,W_c$ are extracted. Then there is another work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^n}\ox\xi_{B^{nR}}$,
in which the final state of the work system and the thermal bath are uncorrelated,
$\xi_{B^{nR}}$ is a regular sequence of product states,
and the same works $W_1,\ldots,W_c$ are extracted.
\end{lemma}
\begin{proof}
Assuming that $\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^nB^{nR}}$ is a work transformation, the second law implies that $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$ for some $\delta \geq 0$, and we obtain
\begin{equation}\label{eq: coordinates_P_1}
\begin{split}
s(\{\sigma_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S+\frac{\delta'}{R}, \\
a_j(\{\sigma_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\end{split}\end{equation}
for some $0 \leq \delta' \leq \delta$, where the first equality is due to the fact that $\Delta\widetilde{F}_B+\Delta s_S +\Delta s_B =\delta$, as seen in Eq.~(\ref{work expansion formula}), together with the positivity of the entropy rate change from Eq.~(\ref{eq: positive Delta_SB}); the second equality follows from the first law, Theorem \ref{thm:first-law},
and the AET, Theorem \ref{Asymptotic equivalence theorem}.
If $R$ is large enough then, due to the convexity of the phase diagram $\overline{\mathcal{P}}_B^{(1)}$ of the thermal bath, the following coordinates belong to the phase diagram as well:
\begin{equation}\label{eq: coordinates_P_2}
\begin{split}
s(\{\xi_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S, \\
a_j(\{\xi_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c.
\end{split}\end{equation}
Therefore, due to points 3 and 5 of Lemma~\ref{lemma:phase diagram properties},
there is a tensor product state $\xi_{B^{nR}}$ with the coordinates of
Eq.~(\ref{eq: coordinates_P_2}) in $\overline{\cP}_B^{(1)}$. Hence the first law,
Theorem~\ref{thm:first-law}, implies that the desired transformation exists, and
works $W_1,\ldots,W_c$ are extracted.
\end{proof}
\medskip
\begin{theorem}
\label{thm:optimal-rate}
For regular sequences of product states, $\rho_{S^n}$ and $\sigma_{S^n}$,
and real numbers $W_j$ satisfying $\sum_j\beta_j W_j = -\Delta\widetilde{F}_S-\delta$,
let $R^*$ be the infimum of rates such that there is a work transformation
$\rho_{S^n} \otimes \tau(\und{\beta})_B^{\ox nR} \rightarrow \sigma_{S^n}\ox\xi_{B^{nR}}$
under which works $W_1,\ldots,W_c$ are extracted, and
$\xi_{B^{nR}}$ is a regular sequence of product states.
Then, this minimum $R^*$ is achieved for a state $\xi_{B^{nR}}$ on the boundary of the
phase diagram $\overline{\mathcal{P}}_B$ of the thermal bath. Indeed, it is the point
where the line given by Eq.~(\ref{eq:R-bath-assumptions}) intersects the boundary of
the phase diagram; see Fig.~\ref{fig:optimal-rate}.
Equivalently, it is the smallest $R$ such that the point in
Eq.~(\ref{eq:R-bath-assumptions}) is contained in $\overline{\mathcal{P}}_B$.
For $\delta \ll 1$, the minimum rate can be written as
\begin{equation}
\label{eq:rate-vs-heatcapacity}
R \approx -\frac{1}{2\delta} \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(\Delta A_{S_i}+W_i)(\Delta A_{S_j}+W_j),
\end{equation}
where $\Delta A_{S_j} = a_j(\{\sigma_{S^n}\}) - a_j(\{\rho_{S^n}\})$.
\end{theorem}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=10cm,height=8cm]{second-law-optimalrate.jpg}
\end{center}
\caption{Graphical illustration of $R^*$, the minimum bath rate for a
work transformation $\{\rho_{S^n}\} \rightarrow \{\sigma_{S^n}\}$
satisfying the second law, according to Theorem~\ref{thm:optimal-rate}.
The initial state is the generalized thermal state $\tau(\und{\beta})$, its
corresponding point marked on the upper boundary of the phase diagram. The final
bath states correspond to points on the line denoted $f$, and they
are feasible if and only if they fall within the phase diagram.
Consequently, $F^*$ is the point corresponding to the minimum rate.}
\label{fig:optimal-rate}
\end{figure}
\begin{proof}
The final state of the thermal bath $\xi_{B^{nR}}$ is a tensor product state, so
the first law, Theorem~\ref{thm:first-law},
and the AET, Theorem~\ref{Asymptotic equivalence theorem} imply that
\begin{equation}\label{eq:coordinates}
\begin{split}
s(\{\xi_{B^{nR}}\}) &= S(\tau(\und{\beta})_B) - \frac1R \Delta s_S, \\
a_j(\{\xi_{B^{nR}}\}) &= \Tr \tau(\und{\beta})_B A_{B_j} - \frac1R (\Delta A_{S_j} + W_j)
\quad \text{for all } j=1,\ldots,c,
\end{split}\end{equation}
where $\Delta s_{S} = s(\{\sigma_{S^n}\}) - s(\{\rho_{S^n}\})$.
Due to point 3 of Lemma~\ref{lemma:phase diagram properties},
the above coordinates belong to $\overline{\mathcal{P}}_B^{(1)}$.
For $R=R^*$, assume that the above coordinates give a point $(\und{a},s)$ on the boundary
of the phase diagram $\overline{\mathcal{P}}^{(1)}_B$. Then, for $R>R^*$ the point
of Eq.~(\ref{eq:coordinates}) is a convex combination of the points
$(\und{a},s)$ and the corresponding point of the state $\tau(\und{\beta})_B$,
so it belongs to the phase diagram due to its convexity. Therefore, all points with $R>R^*$
are inside the diagram.
To approximate the minimum $R$ for small $\delta$, define the function
$S(\und{a}):=S(\tau(\und{a})_B)$ for $\und{a}=(a_1,\ldots,a_c)$.
Its Taylor expansion around the point $\und{a}^0$ corresponding to the
initial thermal state $\tau(\und{\beta})_B \equiv \tau(\und{a}^0)_B$ of the bath
gives the approximation
\begin{equation}
\label{eq:S-taylor}
S(\und{a}) \approx S(\und{a}^0) + \sum_j \beta_j (a_j-a_j^0)
+ \frac12 \sum_{ij} \frac{\partial\beta_j}{\partial a_i}(a_j-a_j^0) (a_i-a_i^0),
\end{equation}
where we have used the well-known relation $\frac{\partial S}{\partial a_i}=\beta_i$.
From Eq.~(\ref{eq:coordinates}), we obtain
\begin{align*}
S(\und{a})-S(\und{a}^0) &= -\frac{\Delta s_S}{R},\\
a_j-a_j^0 &= \frac{1}{R} (-\Delta A_{S_j}-W_j),
\end{align*}
and by substituting these values in the Taylor approximation (\ref{eq:S-taylor}),
using the definition of the free entropy and of the deficit $\delta$,
we arrive at the claimed Eq. (\ref{eq:rate-vs-heatcapacity}).
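Explicitly, writing $w_j := \Delta A_{S_j}+W_j$ and assuming the free entropy has the form $\widetilde{F} = \sum_j \beta_j a_j - s$ (consistent with the use of $\Delta\widetilde{F}_S$ above), the substitution reads
\begin{align*}
-\frac{\Delta s_S}{R}
&= -\frac1R \sum_j \beta_j w_j
+ \frac{1}{2R^2} \sum_{ij} \frac{\partial\beta_j}{\partial a_i} w_i w_j , \\
\text{i.e.}\quad
\frac{1}{2R} \sum_{ij} \frac{\partial\beta_j}{\partial a_i} w_i w_j
&= \sum_j \beta_j w_j - \Delta s_S
= \Delta\widetilde{F}_S + \sum_j \beta_j W_j
= -\delta ,
\end{align*}
and solving for $R$ gives Eq.~(\ref{eq:rate-vs-heatcapacity}).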
\end{proof}
\medskip
\begin{remark}\normalfont
For a single charge, $c=1$, which we traditionally interpret as the internal
energy $E$ of a system, Eq.~(\ref{eq:rate-vs-heatcapacity}) takes on the very
simple form
\[
R \approx -\frac{1}{2\delta} \frac{\partial\beta}{\partial E}(\Delta E_S+W)^2.
\]
Here we can use the usual thermodynamic definitions to rewrite
$\frac{\partial\beta}{\partial E} = \frac{\partial\frac{1}{T}}{\partial E} = -\frac{1}{T^2}\frac{1}{C}$,
with the heat capacity $C = \frac{\partial E}{\partial T}$, all derivatives taken with
respect to corresponding Gibbs equilibrium states. Thus,
\begin{equation}
R \approx \frac{1}{T^2}\frac{1}{C}\cdot\frac{1}{2\delta}(\Delta E_S+W)^2,
\end{equation}
resulting in a clear operational interpretation of the heat capacity in terms
of the bath rate required to approach the second law tightly.
For larger numbers of charges, the matrix
$\bigl[ \frac{\partial\beta_j}{\partial a_i} \bigr]_{ij}
= \bigl[ \frac{\partial^2 S}{\partial a_i\partial a_j} \bigr]_{ij}$ is actually
the Hessian of the entropy $S\bigl(\tau(\und{a})_B\bigr)$ with respect to the charges,
and the right hand side of Eq.~(\ref{eq:rate-vs-heatcapacity}) is $-\frac{1}{2\delta}$
times the corresponding quadratic form evaluated on the vector
$(\Delta A_{S_1}+W_1,\ldots,\Delta A_{S_c}+W_c)$. Note that by the strict
concavity of the generalized Gibbs entropy, this is a negative definite
symmetric matrix, thus explaining the minus sign in Eq. (\ref{eq:rate-vs-heatcapacity}).
In the same vein as the single-parameter discussion before, the Hessian matrix can
be read as being composed of generalized heat capacities, which likewise receive
their operational interpretation in terms of the required rate of the bath.
\end{remark}
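As a numerical illustration of the single-charge case above, consider a qubit bath with level splitting $\Delta$; this is a sketch with made-up parameter values, and the model and numbers are our own assumptions, not taken from the analysis above.

```python
import math

GAP = 1.0  # qubit level splitting Delta (illustrative units, k_B = 1)

def mean_energy(T):
    """Thermal mean energy of a two-level system with levels 0 and GAP."""
    return GAP / (math.exp(GAP / T) + 1.0)

def heat_capacity(T):
    """C = dE/dT for the qubit bath (the Schottky form), computed analytically."""
    x = GAP / T
    return x * x * math.exp(x) / (math.exp(x) + 1.0) ** 2

def min_bath_rate(dE_S, W, T, delta):
    """Approximate minimal bath rate R ~ (dE_S + W)^2 / (2 * delta * T^2 * C)."""
    return (dE_S + W) ** 2 / (2.0 * delta * T * T * heat_capacity(T))
```

A finite-difference check confirms the analytic heat capacity, and the rate indeed scales like $1/\delta$, diverging as the second-law deficit vanishes.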
\section{Discussion}
The traditional framework of thermodynamics assumes that a system containing an asymptotically large number of particles interacts with an even larger bath, so that all thermodynamic quantities of interest, e.g.\ energy, entropy, etc., can be expressed in terms of average or mean values. Moreover, the notion of temperature remains meaningful, since the bath is so large that any exchange of energy hardly drives it away from equilibrium. Quantum thermodynamics attempts to go beyond this assumption. For instance, the system interacting with a large bath may consist of only a few quantum particles. In this case, average quantities are not sufficient to characterize the system, as there may be large quantum fluctuations that cannot be ignored. To address this issue, the resource theory of quantum thermodynamics was developed; it shows that the classical laws are not sufficient to characterize thermodynamic transformations, and that one rather needs many second laws, associated with a family of one-shot free energies based on R\'enyi $\alpha$-relative entropies \cite{Horodecki2011,Brandao2015}. However, this formalism is still not enough to study the situation where a quantum system interacts with a bath of comparable size. There, the very notion of temperature is questionable, as the bath may be driven out of equilibrium by an interaction with the system. To address this, a resource theory based on information conservation was developed \cite{Sparaciari2016, brl19}; it is applicable only in the regime where asymptotically many system-bath composites are considered, which in turn also allows one to treat the system and the bath on the same footing.
Here we have developed a resource-theoretic formalism applicable to a more general scenario, in which a system with multiple conserved quantities (i.e.\ charges) interacts with a bath, and system and bath may be of comparable size. These charges need not commute with each other, as allowed by quantum mechanics. This non-commutativity implies that no (unitary) evolution can strictly conserve all charges simultaneously. We overcome this problem by employing the notion of approximate micro-canonical ensembles, initially developed in \cite{Halpern2016}; this is an essential requirement and forms the basis of the (approximate) first law of thermodynamics with non-commuting charges. On this basis, we have developed a resource theory of work and heat for thermodynamics with non-commuting charges. We introduced the charge-entropy diagram, which conceptually captures all the essential aspects of the thermodynamics, and proved an equivalence theorem showing the thermodynamic equivalence of quantum states sharing the same point on the charge-entropy diagram. With the help of the diagram we then derived the second law, characterizing the possible state transformations and quantifying the thermodynamic resources, namely the works corresponding to the different charges. We have also considered the situation where the bath is finite and quantified the required bath rate for state transformations; interestingly, this rate has a direct link with the generalized heat capacity of the bath. All of this was then extended to the case where the system is (quantum) correlated with the bath: there, the charge-entropy diagram is expressed in terms of the conditional entropy of the bath, which may become negative in the presence of entanglement, and using it the second law has been derived.
\begin{comment}
We apply the resource theory that we have developed to understand quantum thermodynamics with
multiple conserved quantities. We specifically consider an asymptotic generalization of
the setting proposed in \cite{Guryanova2016} where there are many copies of a global
system consisting of a main system, called a work system, a thermal bath with fixed
temperatures and various batteries to store the charges of the system. Therefore, the
objects and allowed operations of the resource theory translate to quantum states of a
thermodynamics system and thermodynamical transformations, respectively. It is evident
that the allowed operations can transform a state with a tensor product structure to
a state of a general form; however, we show that restricting the final states to the
specific form of tensor product structure does not reduce the generality of the bounds
that we obtain which follows from the fact that for any point of the phase diagram there
is state with tensor product structure.
As discussed in \cite{Guryanova2016}, for a system with multiple charges the free entropy is
a conceptually more meaningful quantity than the free energy which is originally defined when energy is the only conserved quantity of the system. Namely,
the free energy bounds the amount of energy
that can be extracted; however, for a system with multiple charges there are not
various quantities that bound the extraction of individual charges; rather
there is only a bound on the trade-off between the charges that can be extracted which is precisely the free entropy defined with respect to the temperatures of the thermal bath.
We show that indeed this is the case in our scenario as well and formulate the second law: the amount of charge combination that is extracted is bounded by the free entropy change of the work system per number of copies of the work system, i.e., the free entropy rate change.
Conversely, we show that all transformations
with given extracted charge values,
with a combination bounded by the free entropy rate change of the work system, are feasible.
As a result, any amount of a given charge, or the so-called work type, is extractable
providing that sufficient amounts of other charges are injected to the system.
This raises the following fundamental question: for given extractable charge values, with
a combination saturating the second law up to a deficit $\delta$, what is the minimum number
of the thermal baths per number of the copies of the work system.
We define this ratio as the thermal bath rate.
We find that for large thermal bath rates the optimal value is inversely
proportional to the deficit $\delta$, and there is always a corresponding
transformation where the final state of the work system and the thermal bath are uncorrelated.
However, in general this is not true: the minimum rate might be obtained where the final state correlates
the work system and the thermal bath.
This is a purely quantum effect, that certain work transformations are possible
with a smaller size of the thermal bath than would be possible classically; it
relies on work system and bath becoming entangled.
Then, in order to find the optimal rate, we define the conditional entropy phase diagram
which depends on a given conditional state.
\end{comment}
\chapter{Formulation of quantum source compression problems}
\label{chap:source-coding}
In this chapter, we first expand on the concept of quantum sources and the corresponding literature, and mathematically define an asymptotic compression task as a general model which
includes all previously studied asymptotic models.
Then, we introduce quantum compression problems with side information and review
the literature, and later we proceed with defining the most general asymptotic compression task
with side information.
Finally, at the end of the chapter, we summarize the results that we have obtained on quantum source compression.
\section{What is a quantum source?}
A statistical quantum source is a quantum system together with correlations with a \emph{reference system}.
A natural criterion for how well a source is reproduced in a communication
task is how well the correlations with the reference system are preserved.
Without correlations, the information does not make sense,
because a known quantum state without correlations can be reproduced at the
destination without any communication.
A special case is a classical statistical source, which is modeled by a random variable. Since classical information can be copied, a copy of a random variable can always be stored as a reference, and the final processed information is compared with this copy
to analyse the performance of the communication task.
However, in the classical information theory literature, the reference is not usually considered explicitly in the description of classical
information theory tasks, but arguably it is conceptually necessary in quantum
information. This is because it allows us to present the figure of merit quantifying
the decoding error as operationally accessible, for example via the probability of
passing a test in the form of a measurement on the combined source and reference systems. This point
is made eloquently in the early work of Schumacher on quantum information transmission
\cite{Schumi1996,Barnum1998}.
To elaborate more on the reference system, consider the source that Schumacher defined in his 1995 paper \cite{Schumacher1995,Jozsa1994_1} as an ensemble of pure states $\{p(x),\ket{\psi_x}^{A} \}$,
where the source generates the state $\ket{\psi_x}$ with probability $p(x)$. The figure of merit for the encoding-decoding process is to keep the decoded quantum states \emph{on average} very close to the original states with respect to the fidelity, where the average is taken over the probability distribution $p(x)$.
By basic algebra one can show that this is equivalent to preserving the classical-quantum state
$\rho^{AR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^R$, where system $A$ is the quantum system to be compressed and $R$ is the reference system; namely the following fidelity relation holds:
\begin{align*}
\sum_{x} p(x) F(\proj{\psi_x}^A,\xi_x^{\hat{A}})=F(\rho^{AR}, \xi^{\hat{A}R}),
\end{align*}
where $\xi_x^{\hat{A}}$ is the decoded state for the realization $x$ and $\xi^{\hat{A}R}=\sum_x p(x) \xi_x^{\hat{A}} \otimes \proj{x}^R$.
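This equivalence can be checked numerically. The following sketch (a toy example, assuming the root-fidelity convention $F(\rho,\sigma)=\Tr\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}$, with randomly drawn states and probabilities) verifies that the average ensemble fidelity equals the fidelity of the two classical-quantum block states:

```python
import numpy as np

rng = np.random.default_rng(0)

def psd_sqrt(m):
    # square root of a positive semidefinite (Hermitian) matrix via eigendecomposition
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def root_fidelity(rho, sigma):
    # F(rho, sigma) = Tr |sqrt(rho) sqrt(sigma)|, i.e. the sum of singular values
    return float(np.sum(np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma),
                                      compute_uv=False)))

def rand_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def rand_mixed(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

d, n = 2, 3                               # qubit signals, three ensemble members
p = rng.dirichlet(np.ones(n))             # probabilities p(x)
psis = [rand_pure(d) for _ in range(n)]   # source states |psi_x><psi_x|
xis = [rand_mixed(d) for _ in range(n)]   # decoded states xi_x

def cq_state(states):
    # block-diagonal classical-quantum state  sum_x p(x) states[x] (x) |x><x|
    out = np.zeros((d * n, d * n), dtype=complex)
    for x in range(n):
        out[x*d:(x+1)*d, x*d:(x+1)*d] = p[x] * states[x]
    return out

lhs = sum(p[x] * root_fidelity(psis[x], xis[x]) for x in range(n))
rhs = root_fidelity(cq_state(psis), cq_state(xis))
print(abs(lhs - rhs) < 1e-8)  # the two figures of merit coincide
```

The agreement rests on the block-diagonal structure: the root fidelity is additive over the orthogonal $\proj{x}^R$ blocks.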
Another source model that Schumacher considered was the purification of the source ensemble,
that is the state $\ket{\psi}^{AR}=\sum_x \sqrt{p(x)}\ket{\psi_x}^{A} \ket{x}^R$, where the
figure of merit for the encoding-decoding process was to preserve the pure state correlations with the
reference system $R$ by maintaining a high fidelity between the decoded state and $\psi$.
He showed that both definitions lead to the same compression rate, namely,
the von Neumann entropy of the source $S(A)_{\rho} = S(\rho^A)$, where
$\rho^A = \Tr_R \rho^{AR}$. Incidentally, the full proof of optimality in the first
model, without any additional restrictions on the encoder, had to wait until \cite{Barnum1996}
(see also \cite{Horodecki1998});
the strong converse, i.e. the optimality of the entropy rate even for constant error
bounded away from $1$, was eventually given in \cite{Winter1999}.
Another example of a quantum source is the mixed state source considered by Horodecki \cite{Horodecki1998} and Barnum \emph{et al.} \cite{Barnum2001}, whose compression was finally solved by Koashi and Imoto \cite{KI2001};
here the source is defined as an ensemble of mixed states $\{p(x),\rho_x^{A} \}$. Preserving
these mixed quantum states on average in the encoding-decoding process is
equivalent to preserving the state $\rho^{AR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^R$, that is,
the quantum system $A$ together with its correlation with the classical reference system $R$.
In this thesis, we consider the most general finite-dimensional source in the realm of
quantum mechanics, namely a quantum system $A$ that is correlated with a reference system
$R$ in an arbitrary way, described by the overall state $\rho^{AR}$. In particular, the
reference does not necessarily purify the source, nor is it assumed to be classical.
The ensemble source and the pure source defined by Schumacher are special cases of this model,
where the reference is a classical system in the former and a purifying system in the latter.
So is the source considered by Koashi and Imoto in \cite{KI2001}, where the reference system
is classical, too.
Understanding the compression of the source $\rho^{AR}$ is of paramount importance in the
field of quantum information theory and unifies all the models that have been considered
in the literature. Schumacher's pure source model is, in a sense, the most stringent model,
because it requires preserving the correlations with a purifying reference system, which
implies that the correlations with any other reference system are preserved as well; this follows
from the fact that the fidelity is non-decreasing under quantum channels.
However, the converse is not necessarily true: if in a compression task the parties are
required to preserve the correlations with a given reference system which does not purify
the source state, they might be able to compress more efficiently compared to the scenario
where the reference system purifies the source. This is exactly what we show in Chapter~\ref{chap:mixed state}:
we characterise the gap precisely depending on the reference system.
\section{Mathematical definition of quantum noiseless compression}
A source compression task consists of an encoder which maps the source to compressed information that is stored or sent to another party. When needed, a decoder maps the compressed information to decoded information; the aim is to preserve the correlations with the reference system and reconstruct a source that is very close to the original source in some distance measure. In the quantum realm, the most general encoding and decoding maps that can be performed on the information are quantum operations, i.e. CPTP maps.
The communication medium or quantum storage device is assumed to be an ideal channel acting as the identity on the encoded information; it can be simulated through various resources such as a qubit channel, or shared entanglement together with classical communication.
The resource counted is the dimension of the Hilbert space of the encoding operation's output.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{"rho_AR_compression".jpg}
\caption{Circuit diagram of the compression task: the source is composed of $n$ copies of the state $\rho^{AR}$ where
$A^n$ is the system to be compressed and $R^n$ is an inaccessible reference system.
Dotted lines demarcate the domains controlled by the different participants: the reference, the encoder (Alice) and the decoder (Bob). The solid lines represent quantum information registers.
The encoder sends the compressed information, i.e. system $M_n$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_0$ and $B_0$, respectively.
The aim of the compression task is to reconstruct the source at the decoder side, that is the final state $\xi^{\hat{A}^n R^n}$ has the fidelity converging to 1 with the source state
$\rho^{A^nR^n}$; this ensures that the correlations between the reconstructed system $\hat{A}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder and decoder distill entanglement in
their registers $A'_0$ and $B'_0$, respectively.
}
\label{fig: rho AR compression task}
\end{figure}
Throughout the thesis, we consider the information theoretic asymptotic limit of
$n$ copies of a finite dimensional source with state $\rho^{AR}$, i.e.~$\rho^{A^n R^n} = \left(\rho^{AR}\right)^{\otimes n}$ where system $A$ is the system to be compressed
and system $R$ is an inaccessible reference system.
We assume that the encoder, Alice, and the decoder, Bob, share initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
The encoder, Alice, performs the encoding compression operation
$\mathcal{E}:A^n A_0 \longrightarrow M_n A_0'$ on the system $A^n$ and her part $A_0$ of the entanglement, which is a CPTP map.
Alice's encoding operation produces the state $\sigma^{M_n R^n A_0' B_0}$ with $M_n$, $A_0'$ and $B_0$ as the compressed system of Alice, Alice's new entanglement system and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M_n| \leq \abs{A}^n$.
The system $M_n$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M_n B_0 \longrightarrow \hat{A}^n B_0'$ on the system
$M_n$ and his part of the entanglement $B_0$ where $\hat{A}^n$ and $B'_0$ are the reconstructed
source and Bob's new entanglement system.
Ideally the encoder and decoder want to distill entanglement in the form of maximally entangled
state $\Phi_L^{A'_0 B'_0}$ of dimension $L$ in their corresponding registers $A'_0$ and $B'_0$.
We call $\frac1n (\log K-\log L)$ and $\frac1n \log|M_n|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
We say the encoding-decoding scheme has \emph{fidelity} $1-\epsilon$, or \emph{error} $\epsilon$, if
\begin{align}
\label{eq:fidelity criterion no side info task}
F\left( \rho^{A^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n R^n A_0' B_0'} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n R^n A_0' B_0'}=\left((\mathcal{D}\circ\mathcal{E})\otimes {\operatorname{id}}_{R^n}\right) \rho^{A^n R^n } \otimes \Phi_K^{A_0 B_0}$.
Moreover, we say that $(E,Q)$ is an (asymptotically) achievable rate pair if for all $n$
there exist codes (encoders and decoders) such that the fidelity converges to $1$, and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
Compression schemes where the error converges to zero are called noiseless compression schemes; these are the schemes we consider throughout the thesis.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
\medskip
A schematic description of the quantum source and its compression is illustrated in Fig.~\ref{fig: rho AR compression task}
where the system to be compressed and the reference are denoted by $A^n$ and $R^n$, respectively.
This compression problem is addressed in Chapter~\ref{chap:mixed state}, where we find the
optimal trade-off rate region for the entanglement and quantum rates, that is, the achievable pairs $(E,Q)$.
\section{Quantum noiseless compression with side information}\label{sec:side info intro}
In information theory, side information refers to extra information that
is correlated with the source and is available to the encoder, the decoder, or both; the parties can exploit it to spend fewer resources, for example to reduce the dimension of the compressed information.
Slepian and Wolf were the first to study the compression of a classical source, i.e. a random variable, where the decoder has access to another random variable correlated with the source; they showed that the compression rate is equal to the conditional Shannon entropy
\cite{Slepian1973}.
The \textit{visible} paradigm of source compression comprises
compression problems in which the \emph{encoder} has access to side information, namely the identity of the states in the ensemble generated by the source \cite{Jozsa1994_1,Barnum1996,Horodecki2000,Barnum2001_2,Bennett2005,Hayden2002,visible_Hayashi}.
For example, the source in visible Schumacher compression \cite{Jozsa1994_1,Barnum1996}
is modeled by a classical-quantum state $\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^C \otimes \proj{x}^{R}$, where system $A$ is the system to be compressed, and
systems $C$ and $R$ are the side information system of the encoder and the reference system, respectively. It is shown that the visible model and the \textit{blind} model, where the encoder
does not have access to system $C$, lead to the same compression rate, i.e. $S(A)$ \cite{Schumacher1995,Jozsa1994_1,Barnum1996}. This is no longer the case when system $A$ is composed of mixed states: there, the visible and blind models lead to different compression rates.
In the visible mixed state compression problem, the source is modeled by many copies of the state $\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$ where system $A$ with mixed states is the system to be compressed, and
systems $C$ and $R$ are the side information system of the encoder and the reference system, respectively. Hayashi showed that the optimal compression rate is equal to the regularized entanglement of purification of the source \cite{visible_Hayashi}, which differs from the
blind compression rate ($C$ not available to the encoder) obtained by Koashi and Imoto \cite{KI2001,KI2002}.
The visible compression of this source when the encoder and decoder share unlimited entanglement is a special case of the remote state preparation protocol considered in \cite{Bennett2005}, and the optimal quantum compression rate is equal to $\frac{1}{2}S(A)$.
Winter, in his PhD thesis \cite{Winter1999}, generalized the notion of correlated sources and side information at the \emph{decoder} to the quantum setting by modeling them as a multipartite quantum source generating multipartite quantum states, with different parties having access
to different parts of the source. The first example studied in this context was a hybrid classical-quantum source $\rho^{ABR_1R_2}=\sum_x p(x) \proj{x}^A \otimes \proj{\psi_x}^{BR_1} \otimes \proj{x}^{R_2}$, where an encoder
compresses the classical system $A$, and a decoder aims to reconstruct this system
while having access to quantum side information system $B$ such that the correlations with the
reference systems $R=R_1R_2$ are preserved \cite{Winter1999,Devetak2003}.
This example is one of the earliest attempts to find operational meaning to quantum conditional entropy in analogy to the classical conditional Shannon entropy which characterizes the optimal compression rate of a classical source with classical side information at the decoder side,
a.k.a. fully classical Slepian-Wolf problem \cite{Slepian1973}.
The compression of a purified source with side information at the decoder is known as
state merging or fully quantum Slepian-Wolf (FQSW) and its discovery was an important milestone in the quantum information field which gave an operational meaning to the quantum conditional entropy \cite{Horodecki2007,Abeyesinghe2009}; in this task, a source generates many copies of the state $\ket{\psi}^{ABR}$ where an encoder compresses system $A$ and sends it to a decoder who has access to system $B$ and aims to reconstruct system $A$ while preserving the correlations with the reference system $R$.
Depending on the communication resource considered, namely shared entanglement with free classical communication, or quantum communication, the compression rate is equal to $S(A|B)$ ebits or $\frac{1}{2}I(A:R)$ qubits, respectively \cite{Horodecki2007,Abeyesinghe2009}.
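The ebit rate $S(A|B)$ can be negative, in which case merging not only needs no entanglement but produces it. A minimal numerical sketch (a toy example) with a maximally entangled pair on $AB$ illustrates this:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy in qubits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def trace_out_first(rho_AB, dA, dB):
    # partial trace over the first subsystem, leaving rho_B
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=0, axis2=2)

# maximally entangled Bell state on two qubits A and B
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(phi, phi)

# S(A|B) = S(AB) - S(B)
cond = entropy(rho_AB) - entropy(trace_out_first(rho_AB, 2, 2))
print(round(cond, 6))  # -1.0: one ebit is gained per copy
```

Here $S(AB)=0$ (pure state) while $S(B)=1$ (maximally mixed marginal), so $S(A|B)=-1$.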
An ensemble version of FQSW is considered in \cite{Ahn2006} with the source $\rho^{ABR}=
\sum_x p(x) \proj{\psi_x}^{AB}\otimes\proj{x}^R$ and $A$, $B$ and $R$ as the system to be compressed, the side information at the decoder and the reference system, respectively; the optimal quantum compression rate is
found for some special cases, but the problem has been left open in general.
A generalization of state merging, which is known as quantum state redistribution (QSR), is proposed in \cite{Devetak2008_2,Yard2009}, where
both encoder and decoder have access to side information systems. Namely, a source generates
many copies of the state $\ket{\psi}^{ACBR}$, where an encoder compresses system $A$ while having access to side information system $C$ and sends the compressed information to a decoder who has access to system $B$ and aims to reconstruct system $A$ while preserving the correlations with the reference system $R$; in this compression task systems $C$ and $B$ remain at the disposal
of the encoder and decoder, respectively. This gave an operational meaning to the quantum
conditional mutual information, since the optimal quantum compression rate was shown to be
$\frac{1}{2}I(A:R|C)$.
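The QSR rate can be evaluated directly from the entropies of reduced states. The sketch below (a toy example with a random pure state on four qubits, ordered $A$, $C$, $B$, $R$) computes $\frac12 I(A{:}R|C)=\frac12\bigl(S(AC)+S(CR)-S(C)-S(ACR)\bigr)$ and checks that it is non-negative, as guaranteed by strong subadditivity:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(rho):
    # von Neumann entropy in qubits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def reduced(rho, dims, keep):
    # reduced density matrix on the subsystems listed in `keep`
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=q, axis2=q + rho.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# random pure state on four qubits, ordered A, C, B, R
v = rng.normal(size=16) + 1j * rng.normal(size=16)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())
dims = [2, 2, 2, 2]
A, C, B, R = 0, 1, 2, 3

# I(A:R|C) = S(AC) + S(CR) - S(C) - S(ACR)
cmi = (entropy(reduced(rho, dims, [A, C])) + entropy(reduced(rho, dims, [C, R]))
       - entropy(reduced(rho, dims, [C])) - entropy(reduced(rho, dims, [A, C, R])))
qsr_rate = 0.5 * cmi
print(cmi >= -1e-9)  # strong subadditivity guarantees non-negativity
```

For a qubit system $A$ the rate is further bounded by $\frac12 I(A{:}R|C)\leq S(A)\leq 1$.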
In the remainder of this section, we mathematically define the most general model for
the compression of quantum sources with side information, which includes as special cases all the aforementioned side information problems of this section (with respect to the block fidelity defined in Eq.~\ref{eq:block fidelity criterion side info problem}).
\bigskip
We consider a source that generates, in the asymptotic limit,
$n$ copies of a finite dimensional state $\rho^{ACBR}$, i.e.~$\rho^{A^n C^n B^n R^n} = \left(\rho^{ACBR}\right)^{\otimes n}$, and distributes the copies of the systems $AC$, $B$ and $R$ to an encoder, a decoder and an inaccessible reference system, respectively.
We assume that the encoder, Alice, and the decoder, Bob, share initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
The encoder, Alice, performs the encoding compression operation
$\mathcal{E}:A^n C^n A_0 \longrightarrow M_n \hat{C}^n A_0'$ on the system $A^n C^n$ and her part $A_0$ of the entanglement, which is a CPTP map.
Alice's encoding operation produces the state $\sigma^{M_n \hat{C}^n B^n R^n A_0' B_0}$ with $M_n$, $\hat{C}^n$, $A_0'$ and $B_0$ as the compressed system of Alice, a reconstruction of system $C^n$, Alice's new entanglement system and Bob's part of the entanglement, respectively.
The dimension of the compressed system is without loss of
generality not larger than the dimension of the
original source, i.e. $|M_n| \leq \abs{A}^n$.
The system $M_n$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:M_n B^n B_0 \longrightarrow \hat{A}^n \hat{B}^n B_0'$ on the compressed information $M_n$, system $B^n$ and his part of the entanglement $B_0$ where $\hat{A}^n$, $\hat{B}^n$ and $B'_0$ are the reconstruction of systems
$A^n$, $B^n$ and Bob's new entanglement system, respectively.
In this task, the side information systems remain at the disposal of their corresponding
parties, that is the encoder and decoder respectively reconstruct systems $C^n$ and $B^n$ after
using them as side information.
Ideally the encoder and decoder want to distill entanglement in the form of maximally entangled
state $\Phi_L^{A'_0 B'_0}$ of dimension $L$ in their corresponding registers $A'_0$ and $B'_0$.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{"rho_ACBR_compression".jpg}
\caption{Circuit diagram of the compression task with side information: the source is composed of $n$ copies of the state $\rho^{ACBR}$ where
$A^n$ is the system to be compressed and $R^n$ is an inaccessible reference system; systems $C^n$ and $B^n$ are the side information available for the encoder and the decoder, respectively.
Dotted lines demarcate the domains controlled by the different participants: the reference, the encoder (Alice) and the decoder (Bob). The solid lines represent quantum information registers.
The encoder sends the compressed information, i.e. system $M_n$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_0$ and $B_0$, respectively.
The aim of the compression task is to reconstruct system $A^n$ at the decoder side while each party reconstructs its own corresponding side information as well, that is the final state $\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n}$ has the fidelity converging to 1 with the source state
$\rho^{A^n C^n B^n R^n}$; this ensures that the correlations between the reconstructed systems $\hat{A}^n \hat{C}^n \hat{B}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder and decoder distill entanglement in
their registers $A'_0$ and $B'_0$, respectively.
}
\label{fig: rho ACBR compression task}
\end{figure}
We call $\frac1n (\log K-\log L)$ and $\frac1n \log|M_n|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively.
We say the encoding-decoding scheme has \emph{block fidelity} $1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion side info problem}
F\left( \rho^{A^n C^n B^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'} \right)
\geq 1-\epsilon,
\end{align}
where $\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'}=\left((\mathcal{D}\circ\mathcal{E})\otimes {\operatorname{id}}_{R^n}\right) \rho^{A^n C^n B^n R^n } \otimes \Phi_K^{A_0 B_0}$.
Moreover, we say that $(E_b,Q_b)$ is an (asymptotically) achievable block-error rate pair if for all $n$
there exist codes (encoders and decoders) such that the block fidelity converges to $1$, and
the entanglement and quantum rates converge to $E_b$ and $Q_b$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
A schematic description of the source compression task with side information is illustrated in Fig.~\ref{fig: rho ACBR compression task}.
We also consider another figure of merit, which turns out to be an easier criterion for evaluating side information problems;
we say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy-error side information problems}
\frac{1}{n}\sum_{j=1}^n F(\rho^{ACBR},\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}) \geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}=\Tr_{[n]\setminus j}\,\xi^{\hat{A}^n\hat{C}^n\hat{B}^nR^n}$, and `$\Tr_{[n]\setminus j}$' denotes the partial trace over all systems
with indices in $[n]\setminus j$.
Similarly, we say that $(E_c,Q_c)$ is an (asymptotically) achievable per-copy-error rate pair if for all $n$
there exist codes (encoders and decoders) such that the per-copy fidelity converges to $1$, and
the entanglement and quantum rates converge to $E_c$ and $Q_c$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
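Let us note how the two criteria are related: since the fidelity is non-decreasing under CPTP maps, and tracing out all systems except those of the $j$-th copy is such a map, for every $j$
\begin{align*}
F\left(\rho^{ACBR},\xi^{\hat{A}_j\hat{C}_j\hat{B}_jR_j}\right) \geq F\left( \rho^{A^n C^n B^n R^n } \otimes \Phi_L^{A_0' B_0'},\xi^{\hat{A}^n \hat{C}^n \hat{B}^n R^n A_0' B_0'} \right).
\end{align*}
Averaging over $j$ shows that the per-copy fidelity is at least the block fidelity, so every achievable block-error rate pair is also an achievable per-copy-error rate pair; the per-copy criterion is thus, a priori, the weaker of the two.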
\begin{table}[!t]
\renewcommand{\arraystretch}{1.5}
\scriptsize
\begin{center}
\begin{tabular}{ | p{7.5cm} || p{3.5cm} | p{1.5cm} |}
\hline
\small{Source} & \small{$(0,Q_b^*)$} & \small{$(\infty,Q_b^*)$}
\\ \hline\hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1}
$\rho^{AR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^R$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1}
$\ket{\psi}^{AR}=\sum_x \sqrt{p(x)}\ket{\psi_x}^{A} \ket{x}^R$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
&$-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Schumacher1995,Jozsa1994_1,Barnum1996}
$\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$S(A)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{KI2001}
$\rho^{AR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^R$}
& \parbox[t]{3cm}%
{$S(CQ)_{\omega}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{visible_Hayashi}
$\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$\lim_{n \to \infty } \frac{E_p((\rho^{AC})^{\otimes n})}{n}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Bennett2005}
$\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$}
& \parbox[t]{3cm}%
{$-$}
& \parbox[t]{2cm}{$\frac{1}{2}S(A)$}
\\ \hline
\parbox[t]{8cm}{ \cite{Winter1999,Devetak2003}
$\rho^{ABR_1R_2}=\sum_x p(x) \proj{x}^A \otimes \proj{\psi_x}^{BR_1} \otimes \proj{x}^{R_2}$}
& \parbox[t]{3cm}%
{$S(A|B)_{\rho}$}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Horodecki2007,Abeyesinghe2009}
$\ket{\psi}^{ABR}$}
& \parbox[t]{3.5cm}%
{$\max \{S(A|B)_{\rho}, \frac{1}{2}I(A:R)\}$ }
& \parbox[t]{2cm}{$\frac{1}{2}I(A:R)$}
\\ \hline
\parbox[t]{8cm}{ \cite{Ahn2006}
$\rho^{ABR}=\sum_x p(x) \proj{\psi_x}^{AB}\otimes\proj{x}^R$ }
& \parbox[t]{3.7cm}%
{solved for specific examples}
& $-$ \parbox[t]{2cm}{}
\\ \hline
\parbox[t]{8cm}{ \cite{Devetak2008_2,Yard2009}
$\ket{\psi}^{ACBR}$}
& \parbox[t]{3.7cm}%
{$\max \{ S(A|B)_{\rho}$,$\frac{1}{2}I(A:R|C)\}$}
& \parbox[t]{2cm}{$\frac{1}{2}I(A:R|C)$} \\ \hline
\end{tabular}
\end{center}
\caption{A summary of the asymptotic source compression problems that have been studied in the literature so far. The rate pairs $(0,Q_b^*)$ and $(\infty,Q_b^*)$ denote the unassisted and entanglement-assisted qubit rates, respectively. Here $E_p(\cdot)$ denotes the entanglement of purification; moreover, $S(CQ)_{\omega}$ is the von Neumann entropy with respect to the Koashi-Imoto decomposition of the source. For more information see Chapter~\ref{chap:mixed state}.}
\label{table-of-sources}
\end{table}
\bigskip
The special cases of this general problem that have been addressed so far are summarized in Table~\ref{table-of-sources}. This general compression problem has a complex nature; for example,
consider the special case of the visible mixed state source of Hayashi \cite{visible_Hayashi}, with classical reference $R$, classical side information $C$ at the encoder and no side information at the decoder ($B= \emptyset$), i.e. $\rho^{ACR}=\sum_x p(x) \rho_x^A \otimes \proj{x}^C \otimes \proj{x}^{R}$.
With no entanglement consumption, the optimal block-error quantum rate, i.e. the pair $(0,Q_b^*)$, is equal to the regularized entanglement of purification, whereas with free entanglement the optimal block-error quantum rate is $(\infty,\frac{1}{2}S(A))$ \cite{Bennett2005}.
Therefore, it is insightful to first study the pairs $(0,Q_b^*)$ and $(\infty,Q_b^*)$
for some other special cases of the source $\rho^{ACBR}$.
Moreover, as we will show in the subsequent chapters, unlike the classical
scenario where the conditional entropy characterizes the compression rate,
for non-pure sources the quantum conditional entropy or mutual information
does not necessarily play a
role, and more complicated functions of the source determine the compression rate.
In section~\ref{sec:Summary of our results in quantum source compression}, we briefly
go through the special cases of the general source $\rho^{ACBR}$ with side information
which we address in this thesis and discuss the challenges of each particular case
in its corresponding chapter.
\section{Distributed noiseless quantum source compression}
This thesis mainly focuses on side information compression problems; however, in Chapter~\ref{chap:cqSW}, aside from a side information problem, we study the distributed compression of correlated classical-quantum sources. This motivates us to define a general distributed compression problem, of which the side information problem of section~\ref{sec:side info intro} can be considered a sub-problem: the decoder can use successive decoding, first decoding the information of one encoder and treating it as its own side information, and then decoding the information of the other encoder.
\bigskip
Here we define the problem for two encoders, however, the definition can be easily extended to three or more encoders.
We consider a source that generates, in the asymptotic limit,
$n$ copies of a finite dimensional state $\rho^{A_1C_1 A_2 C_2BR}$, i.e.~$\rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n} = \left(\rho^{A_1C_1A_2C_2BR}\right)^{\otimes n}$, and distributes the copies of the systems $A_1C_1$, $A_2C_2$, $B$ and $R$ to encoder 1, encoder 2, a decoder and an inaccessible reference system, respectively.
We assume that encoder 1, Alice, and encoder 2, Ava, initially share maximally
entangled states $\Phi_{K_1}^{A_{01}B_{01}}$ and $\Phi_{K_2}^{A_{02}B_{02}}$, of dimensions $K_1$ and $K_2$ respectively, with the decoder, Bob.
The encoder $i$ ($i=1,2$) performs the encoding compression operation, i.e. the CPTP map
$\mathcal{E}_i:A_i^n C_i^n A_{0i} \longrightarrow M_{i_n} \hat{C}_i^n A_{0i}'$ on the systems $A_i^n C_i^n$ and the entanglement part $A_{0i}$.
The encoding operations are distributed in the sense that each encoder applies her own operation locally without having access to the information of the other encoder.
Without loss of generality, the dimensions of the compressed
systems are not larger than those of the
original sources, i.e. $|M_{i_n}| \leq \abs{A_i}^n$.
The systems $M_{i_n}$ ($i=1,2$) are then sent via a noiseless quantum channel to Bob, who performs
the decoding operation $\mathcal{D}:M_{1_n} M_{2_n} B^n B_{01} B_{02} \longrightarrow \hat{A_1}^n \hat{A_2}^n \hat{B}^n B_{01}' B_{02}'$ on the compressed information systems $M_{1_n} M_{2_n}$, the system $B^n$ and his parts of the entanglement $B_{01} B_{02}$, where $\hat{A_1}^n \hat{A_2}^n$, $\hat{B}^n$ and $B'_{01} B'_{02}$ are the reconstructions of the systems
$A_1^n A_2^n$, $B^n$ and Bob's new entanglement systems, respectively.
In this task, the systems $C_1^n$, $C_2^n$ and $B^n$ remain at the disposal of their corresponding
parties; that is, the encoders and the decoder reconstruct the systems $C_1^n$, $C_2^n$ and $B^n$, respectively, after
using them as side information.
Ideally, encoder $i$ ($i=1,2$) and the decoder aim to distill entanglement in the form of a maximally entangled
state $\Phi_{L_i}^{A'_{0i} B'_{0i}}$ of dimension $L_i$ in their corresponding registers $A'_{0i}$ and $B'_{0i}$, respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{distributed_rho_ACBR_compression.jpg}
\caption{Circuit diagram of the distributed compression task: the source is composed of $n$ copies of the state $\rho^{A_1C_1 A_2C_2 BR}$ where
$A_i^n$ ($i=1,2$) is the system to be compressed and $R^n$ is an inaccessible reference system; systems $C_i^n$ and $B^n$ are the side information available for the encoder $i$ and the decoder, respectively.
Dotted lines demarcate the domains controlled by the different participants, here the reference, the encoders (Alice and Ava) and the decoder (Bob). The solid lines represent quantum information registers.
The encoder $i$ sends the compressed information, i.e. system $M_{i_n}$, to the decoder through a noiseless quantum channel;
moreover, they share initial entanglement in the registers $A_{0i}$ and $B_{0i}$, respectively.
The aim of the compression task is to reconstruct the systems $A_1^n$ and $A_2^n$ at the decoder side, while each party reconstructs its own corresponding side information as well; that is, the final state $\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n}$ has fidelity converging to 1 with the source state
$\rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n}$; this ensures that the correlations between the reconstructed systems $\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n$ and the
reference system $R^n$ are preserved. Furthermore, the encoder $i$ and the decoder distill entanglement in
their registers $A'_{0i}$ and $B'_{0i}$, respectively.
}
\label{fig: distributed rho ACBR compression task}
\end{figure}
We call $\frac1n \log (K_i/L_i)$ and $\frac1n \log|M_{i_n}|$ the \emph{entanglement
rate} and \emph{quantum
rate} of the compression protocol, respectively (for $i=1,2$).
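As a quick numerical sanity check, both rates can be computed directly from the code parameters. The following is an illustrative sketch, not part of the thesis, assuming the standard net-rate convention $\frac1n(\log K_i - \log L_i)$ for the entanglement rate, so that a negative value signals net entanglement distillation:

```python
import numpy as np

def code_rates(K, L, M, n):
    """Entanglement and quantum rates of one encoder in an n-copy code:
    K, L are the initial and final entanglement dimensions, and M is the
    dimension of the compressed system sent to the decoder."""
    E = (np.log2(K) - np.log2(L)) / n  # net entanglement consumed per copy
    Q = np.log2(M) / n                 # qubits sent per copy
    return E, Q

# Example: initial entanglement of dimension 4, final dimension 2,
# compressed system of dimension 8, on n = 2 copies.
E, Q = code_rates(4, 2, 8, 2)
```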
Moreover, we say the encoding-decoding scheme has \emph{block fidelity} $1-\epsilon$, or \emph{block error} $\epsilon$, if
\begin{align}
\label{eq:block fidelity criterion distributed problem}
F\left( \rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n } \otimes \Phi_{L_1}^{A'_{01} B'_{01}} \otimes \Phi_{L_2}^{A'_{02} B'_{02}},\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n A_{01}' B_{01}' A_{02}' B_{02}'} \right)
\geq 1-\epsilon,
\end{align}
where
\begin{align*}
&\xi^{\hat{A_1}^n \hat{C_1}^n \hat{A_2}^n \hat{C_2}^n \hat{B}^n R^n A_{01}' B_{01}' A_{02}' B_{02}'}=\\
& \quad \quad \quad \quad \left((\mathcal{D}\circ(\mathcal{E}_1 \otimes \mathcal{E}_2))\otimes {\operatorname{id}}_{R^n}\right) \rho^{A_1^n C_1^n A_2^n C_2^n B^n R^n } \otimes \Phi_{K_1}^{A_{01} B_{01}} \otimes \Phi_{K_2}^{A_{02} B_{02}}.
\end{align*}
Moreover, we say that $(E_{b_1},E_{b_2},Q_{b_1},Q_{b_2})$ is an (asymptotically) achievable block-error rate tuple if there exists a sequence of codes (encoders and decoders), one for each $n$, such that the block fidelity converges to $1$ and
the entanglement and quantum rates of encoder $i$ converge to $E_{b_i}$ and $Q_{b_i}$, respectively.
The rate region is the set of all achievable rate tuples, as a subset of
$\mathbb{R}\times \mathbb{R}\times\mathbb{R}_{\geq 0} \times\mathbb{R}_{\geq 0}$.
A schematic description of the distributed compression task is illustrated in Fig.~\ref{fig: distributed rho ACBR compression task}.
In chapter~\ref{chap:cqSW}, we consider block fidelity; however, the results hold for the per-copy fidelity as well, which is defined as follows:
we say a code has \emph{per-copy fidelity} $1-\epsilon$,
or \emph{per-copy error} $\epsilon$, if
\begin{align}
\label{eq:per-copy fidelity criterion distributed problem}
\frac{1}{n} \sum_{j=1}^n F\left( \rho^{A_{1} C_1 A_2 C_2 B R } ,\xi^{\hat{A_{1j}} \hat{C_{1j}} \hat{A_{2j}} \hat{C_{2j}} \hat{B} R } \right)
\geq 1-\epsilon,
\end{align}
where
$\xi^{\hat{A_{1j}} \hat{C_{1j}} \hat{A_{2j}} \hat{C_{2j}} \hat{B} R }=\Tr_{[n]\setminus j}\,\xi^{\hat{A_1}^n \hat{C_1}^n\hat{A_2}^n\hat{C_2}^n\hat{B}^nR^n}$, and `$\Tr_{[n]\setminus j}$' denotes the partial trace over all systems
with indices in $[n]\setminus j$.
Similarly, we say that $(E_{c_1},E_{c_2},Q_{c_1},Q_{c_2})$ is an (asymptotically) achievable per-copy-error rate tuple if there exists a sequence of codes, one for each $n$, such that the per-copy fidelity converges to $1$ and
the entanglement and quantum rates of encoder $i$ converge to $E_{c_i}$ and $Q_{c_i}$, respectively.
The rate region is the set of all achievable rate tuples, as a subset of
$\mathbb{R}\times \mathbb{R}\times\mathbb{R}_{\geq 0} \times\mathbb{R}_{\geq 0}$.
\bigskip
In \cite{Savov_Mscthesis2007,Savov_distributed_2008}, the compression of a pure source $\ket{\psi}^{A_1A_2R}$ without side information is considered ($C_1,C_2,B=\emptyset$). The achievable rate region is the convex hull of various points, where the point corresponding to an encoder is achieved by applying fully quantum Slepian-Wolf (FQSW) compression and treating the rest of the systems as a reference.
The converse bounds are in terms of
the multipartite squashed entanglement,
a measure of multipartite entanglement.
\section{Summary of our results in quantum source compression and discussion}
\label{sec:Summary of our results in quantum source compression}
In this section, we briefly explain the special cases of the problems defined in the previous sections that we address in this thesis.
Notice that in the subsequent chapters we do not necessarily keep the notation $A$, $C$, $B$ and $R$ for the system to be compressed, the side information at the encoder, the side information at the decoder and the reference system; however, we clearly define each task and specify the notation for the corresponding registers. Moreover, we specify whether the error criterion is block fidelity or per-copy fidelity.
\bigskip
In chapter \ref{chap:mixed state}, we consider the compression of a general mixed state source
$\rho^{AR}$ (no side information) and find the optimal trade-off between the entanglement and quantum rates, i.e. the pair $(E,Q)$.
\medskip
In chapter \ref{chap: E assisted Schumacher}, we unify the visible and blind Schumacher compression by considering an interpolation between them as side information, that is the source
$\rho^{ACR}=\sum_x p(x) \proj{\psi_x}^A \otimes \proj{c_x}^C \otimes \proj{x}^R$ with $A$, $C$ and $R$ as the system to be compressed, the side information at the encoder and the classical reference system, respectively. For this source, we find the optimal trade-off between the block-error entanglement and quantum rate pairs $(E_b,Q_b)$.
\medskip
In chapter \ref{chap:cqSW}, we consider quantum source compression with classical side information with the source $\rho^{AR_1BR_2}=\sum_x p(x) \proj{\psi_x}^{AR_1} \otimes \proj{x}^B \otimes \proj{x}^{R_2}$ and $A$, $B$ and $R=R_1 R_2$ as the system to be compressed, the side information at the decoder and the hybrid classical-quantum reference systems, respectively.
We study the entanglement-assisted case $(\infty,Q_b)$, the unassisted case
$(0,Q_b)$ and then the distributed scenario, considering block fidelity. We find achievable and converse bounds for each scenario and show that the two bounds match for the entanglement-assisted quantum block-error rate $Q_b$, up to continuity of a function which appears in the bounds. Finally, considering per-copy fidelity, we find the optimal entanglement-assisted quantum per-copy-error rate, i.e. the pair $(\infty,Q_c^*)$.
\medskip
In chapter \ref{chap:QSR ensemble}, we consider an ensemble generalization of quantum state redistribution (QSR), i.e. the source $\rho^{ACBR}=\sum_x p(x) \proj{\psi_x}^{ACBR_1} \otimes \proj{x}^{R_2}$ with $A$, $C$, $B$ and $R=R_1R_2$ as the system to be compressed, the side information at the encoder, the side information at the decoder and the hybrid classical-quantum reference systems, respectively. We consider the free-entanglement scenario and find the optimal quantum per-copy-error rate, i.e. the pair $(\infty,Q_c^*)$.
With block fidelity, we find achievable and converse bounds which match up to continuity of a function appearing in the bounds.
\medskip
In summary, for a general mixed state we solve the problem when there is no side information; the rate region is in terms of an extension of the decomposition of the source state discovered by Koashi and Imoto in \cite{KI2002} and later extended to general mixed states in \cite{Hayden2004}. However, for multipartite states this decomposition does not necessarily preserve the tensor structure over the various systems; this turns out to be the main hurdle in dealing with general mixed-state problems with side information.
This is not an issue for pure or ensemble sources, mainly because the structure of the maps
which preserve these states is well understood. For these sources, the environment systems of the encoding and decoding operations are decoupled from the reconstructed source given the identity of the state from the ensemble. This property is one of the guiding intuitions behind the converse proofs for the side information problems.
\chapter{Resource theory of charges and entropy}
\label{chap:resource theory}
In this chapter, we consider asymptotically many non-interacting systems with multiple conserved quantities or charges.
We generalize the seminal results of
Sparaciari, Oppenheim and Fritz [\emph{Phys. Rev. A} 96:052112, 2017]
to the case of multiple, in general non-commuting charges.
To this aim we formulate a resource theory of thermodynamics of asymptotically
many non-interacting systems with multiple conserved quantities or charges.
To any quantum state, we associate a vector with entries of the expected charge
values and entropy of that state. We call the set of all these vectors the
phase diagram of the system, and show that it characterizes the equivalence
classes of states under asymptotic unitary transformations that approximately
conserve the charges. This chapter is based on the results from \cite{thermo_ZBK_2020}.
\begin{comment}
Thermodynamics is one of the most successful physical theories and a pillar of modern science and technology.
It was initially developed empirically to describe heat engines, such as the steam engine and
internal combustion engines that powered the industrial revolution of the 18th and 19th centuries.
Later on, it has been founded on statistical mechanics with the assumption that the systems
are composed of a large number of classical particles.
The thermal baths, which the system interacts with, are even larger in size so that the temperature of
the bath effectively does not alter after the interaction. The laws of thermodynamics find their
applications in almost all branches of the exact sciences. The emergence of quantum mechanics
in the last century, and the subsequent achievements in controlling and tuning of an individual or a
finite number of quantum systems, led to the exploration of thermodynamics in the quantum regime.
There the system is made up of a single or moderate number of quantum particles
interacting with a thermal bath. This regime is often termed the \emph{finite-size} regime.
The system may possess nontrivial quantum correlations, such as entanglement among the particles,
and the bath can be finite or comparable in size with the system. In the quantum domain, another
layer of difficulties arises when one considers more than one conserved quantities (charges)
that do not commute with each other, as the simultaneous conservation of all the charges
cannot be guaranteed.
Recent studies of quantum thermodynamics \cite{Gemmer2009,book2} focus on systems of finite
size and the cases where measurements are allowed only once. The latter situation is often called
the single-shot regime. In addition to thermodynamic averages, there one is interested in
values and bounds on fluctuations of thermodynamic quantities. One way to handle these
problems is by the use of various {\it fluctuations theorems} \cite{Jar97,ro99,cht11}.
Another way to deal with these regimes is exactly via the resource theory of thermodynamics
that allows for rigorous treatment of second laws, optimal work extraction problem,
etc (cf. \cite{bho13,h&p13,Brandao2015}, see also \cite{ssp14,abe13,brl17,brl19}).
The resource theory of quantum thermodynamics was recently extended to deal with quantum
and nano-scale engines made up of a finite or a small number of quantum particles, and
two baths at different temperatures \cite{blb19}.
\end{comment}
\begin{comment}
In the present paper, we formulated a resource theory of quantum thermodynamics with
multiple conserved quantities where the system and bath a priori are arbitrary in size.
We adhere to the asymptotic regime where a system of many non-interacting particles with
multiple conserved quantities or charges interacts with a bath. Here the resource-free states
are the generalized Gibbs states (GGS), also known as the complete-passive states, and the
allowed operations are the (average) entropy-preserving operations.
The thermodynamic resource is quantified by the Helmholtz free entropy. Clearly, the
entropy-preserving operations cannot create thermodynamic resource in the resource-free
GGSs.
For any quantum state, we associate a vector with entries of the average charge
values and entropy of that state. We call the set of all these vectors the phase diagram
of a system. The concept of phase diagram in the present sense was originally pioneered
in \cite{Sparaciari2016} for a system with energy as the only conserved quantity of the
system where it has been shown that the phase diagram is a convex set. Generalization of
the seminal results of \cite{Sparaciari2016} to the case of multiple, in general non-commuting
charges are the subject of the present paper. For an individual system with multiple
charges the phase diagram is not necessarily convex, however, interestingly for a composition
of two or more systems, the phase diagram is convex. Moreover, for a composition of large
enough systems, for any point on the phase diagram, there is a state with a tensor
product structure. This implies that from the macroscopic point of view it is enough
to consider states of a composite system with tensor product structure. This is an
important feature when we study a traditional thermodynamics set-up considering only
tensor product states and it does not affect the generality of the laws of thermodynamics
which only depend on the macroscopic properties of a state rather than the state itself.
Given the entropy-preserving operation as the allowed operations, the (generalized)
phase diagram fully characterizes the thermodynamic transformations of the states and
the role of thermodynamic resources in such processes. We further extend our study to
the situations where the system and bath become correlated. In such a case we use the
conditional entropy, in place of entropy, to express the phase diagram and derive the
laws of quantum thermodynamics when the final state exhibits system-bath correlations.
\end{comment}
\begin{comment}
\textcolor{red}{AW: I think the introduction is still too long-winded. My plan is to
start off with thermodynamics, and move to resource theories, without singling
out coherence or any other specifically. Just give examples and so on, and highlight
the general structure. The describe our setting and how it generalises previous
work by Sparaciari and us. And move to the structure of the paper. Most of the
other stuff can be moved to a discussion section on related work...}
In recent years there has been considerable interest in so-called \emph{resource theories (RTs)},
rigorously formulated in \cite{Winter-Dong-2016}. This interest has been nourished
by quantum information science, but in general the RT approach also applies to classical theories.
In general, within a RT we have the following features:
\begin{itemize}
\item One defines a space of states of the considered system, and specifies the
set of {\it resource free states}; this set is often convex.
The rest of the states contains some amount of the resource.
\item One defines a set of {\it allowed operations} that can be performed on the
resource free states without generating resource.
\item The consequences of these definitions are then studied: Usually they allow
to define and rigorously bound or even determine various resource measures,
they allow to determine which states can be transformed to the others using
allowed operations, how the properties of states may change and how these
changes are bounded under allowed operations, etc.
\end{itemize}
A very well-studied and recent example of a resource theory is the theory of quantum
coherence, pioneered in Ref. \cite{bp14}, and formalized in Ref. \cite{Winter-Dong-2016}.
Resource theory of coherence was generalized in many directions; while all these notions
typically agree on the definition of incoherent states as states that are diagonal in
a fixed reference basis, they differ significantly in the definition of the corresponding
free operations (\cite{c&g16,m&s16,V&S17,msz16}, for a review see \cite{sap16}). More recently,
the RT of coherence was applied in distributed scenarios involving two distant parties
that can perform only measurements which do not create coherence and can communicate
their outcomes via a classical channel \cite{srb17}. The RT of coherence is also being
generalized to the case of dynamical coherence \cite{g&w19}. It was realized practically immediately that RTs can be applied to other resources, such as quantum channels \cite{l&w19}, quantum entanglement (see for instance \cite{cpv18,sha19}), quantum non-locality \cite{Vic14}, and even contextuality \cite{d&a18}.
\emph{Resource theories of thermodynamics.} Particularly interesting for the present
paper are RTs of quantum thermodynamics, in which the resource-free states
are, up to certain details, thermal states.
Recent studies of quantum thermodynamics \cite{Gemmer2009,book2} focus on systems of finite
size and the so-called single-shot regime. In addition to thermodynamic averages, there one
is interested in values and bounds on fluctuations of thermodynamic quantities.
One way to handle these problems is by various {\it fluctuations theorems} \cite{Jar97,ro99,cht11}.
Another way to deal with these regimes is exactly via the resource theory of thermodynamics
that allows for rigorous treatment of second laws, optimal work extraction problem,
etc (cf. \cite{bho13,h&p13,Brandao2015}, see also \cite{ssp14,abe13,brl17,brl19}).
RT of quantum thermodynamics was recently extended to deal with nano-engines made up of
a finite or a small number of quantum particles, and two baths at different temperatures \cite{blb10}.
In this paper, we consider asymptotically many non-interacting systems with multiple
conserved quantities or charges, which we call a composite system. For any quantum
state, we associate a vector with entries of the average charge values and entropy of that state.
We call the set of all these vectors the phase diagram of a system. The concept of phase
diagram in the present sense was originally pioneered in \cite{Sparaciari2016} for a system
with energy as the only conserved quantity of the system where it has been shown that
the phase diagram is a convex set.
For an individual system with multiple charges the phase diagram is not necessarily
convex; however, interestingly, for a composition of two or more systems the phase diagram is convex.
Moreover, for a composition of large enough systems, for any point of the phase diagram there is
a state with tensor product structure.
This implies that from the macroscopic point of view it is enough to consider states of a composite system with tensor product structure.
This is an important feature when later in this paper we study a traditional thermodynamics set-up, where considering only tensor product states does not affect the generality of the laws of thermodynamics, which only depend on the macroscopic properties of a state rather than the state itself.
This motivates us to consider an asymptotic resource theory with states of tensor product structure as the objects, and allowed operations which are thermodynamically meaningful, that is, operations which preserve the entropy and charges of a system. However, if we restrict the allowed operations to those exactly preserving entropy and charges for all states, i.e., unitaries that commute with all charges of a system, it is difficult if not impossible to find unitaries which transform one object into another object of the resource theory. Therefore, we consider operations which asymptotically preserve the entropy and charges.
Namely, we consider operations which consist of coupling a composite system with an ancillary system with much smaller dimension compared to the composite system and unitaries which almost
commute, in a precise mathematical sense, with the total charges of the composite system and the ancilla. Coupling the composite system to an ancillary system of small dimension ensures that the entropy exchange between them vanishes asymptotically. Furthermore, allowing almost-commuting unitaries conserves the average charges of the system.
The allowed operations partition the objects into classes of asymptotically equivalent objects
that are interconvertible under allowed operations: the objects are interconvertible via allowed operations if and only if they have the same average entropy and average charge values. %
The existence of the allowed operations between the objects of the same class is based on two pillars:
1. For objects with the same average entropy there are states with sublinear dimension which can be coupled to the objects to make their spectrum asymptotically identical.
2. Objects with the same average charge values project onto a common subspace of the charges of the system which has the property that any unitary acting on this subspace is an almost-commuting unitary with the corresponding charges. Therefore, the spectrum of the objects of the same class can be modified using small ancillary systems and then they are interconvertible via unitaries that asymptotically preserve the charges of the system.
The notion of a common subspace for different charges, which are Hermitian operators, is introduced in \cite{Halpern2016} as approximate microcanonical (a.m.c.) subspace.
In this paper, for given charges and parameters, we show the existence of an a.m.c. subspace which is by construction permutation-symmetric, unlike the subspace constructed in \cite{Halpern2016}.
We apply the resource theory that we have developed to understand quantum thermodynamics with multiple conserved quantities. We specifically consider an asymptotic generalization of the setting proposed in \cite{Guryanova2016} where there are many copies of a global system consisting of a main system, called a work system, a thermal bath with fixed temperatures and various batteries to store the charges of the system. Therefore, the objects and allowed operations of the resource theory translate to quantum states of a thermodynamics system and thermodynamical transformations, respectively. It is evident that the allowed operations can transform a state with a tensor product structure into a state of a general form; however, we show that restricting the final states to the specific form of tensor product structure does not reduce the generality of the bounds that we obtain, which follows from the fact that for any point of the phase diagram there is a state with tensor product structure.
As discussed in \cite{Guryanova2016}, for a system with multiple charges the free entropy is
a conceptually more meaningful quantity than the free energy which is originally defined when energy is the only conserved quantity of the system. Namely,
the free energy bounds the amount of energy
that can be extracted; however, for a system with multiple charges there are no
separate quantities that bound the extraction of the individual charges; rather,
there is only a bound on the trade-off between the charges that can be extracted, which is precisely the free entropy defined with respect to the temperatures of the thermal bath.
We show that indeed this is the case in our scenario as well and formulate the second law: the amount of charge combination that is extracted is bounded by the free entropy change of the work system per number of copies of the work system, i.e., the free entropy rate change.
Conversely, we show that all transformations
with given extracted charge values,
with a combination bounded by the free entropy rate change of the work system, are feasible.
As a result, any amount of a given charge, or the so-called work type, is extractable
providing that sufficient amounts of other charges are injected to the system.
Then an interesting question arises: for given extracted charge values, with a combination saturating the second law with a deficit $\delta$, what is the minimum number of thermal baths per number of copies of the work system?
We define this ratio as the thermal bath rate.
We find that for large thermal bath rates the optimal value is inversely proportional to the deficit $\delta$, and there is always a corresponding transformation where the final state of the work system and the thermal bath are uncorrelated.
However, in general this is not true: the minimum rate might be obtained where the final state of the work system and the thermal bath are correlated.
This is a purely quantum effect that correlations result in smaller rates of the thermal bath.
Then, in order to find the optimal rate, we define the conditional entropy phase diagram
which depends on a given conditional state.
\end{comment}
\begin{comment}
\bigskip
\textcolor{red}{Notation: $\und{a}=(a_1,\ldots,a_c)$, $\und{\beta}=(\beta_1,\ldots,\beta_c)$,
$\und{W}=(W_1,\ldots,W_c)$, etc.
Furthermore: phase diagramme denoted $\cP$, its points denoted $(\und{a},s)$,
partition function normal $Z$, not calligraphic.
Also: equation numbers for most equations!}
\end{comment}
\section{Resource theory of charges and entropy}
\label{sec:resource-theory}
Resource theory is a rigorous mathematical framework initially developed to characterize the
role of entanglement in quantum information processing tasks. Later the framework was extended
to characterize coherence, non-locality, asymmetry and many more, including quantum Shannon theory itself, see \cite{bcp14,Winter-Dong-2016,c&g16,m&s16,V&S17,msz16,sap16,srb17,g&w19,cpv18,sha19,Vic14,d&a18,Devetak2008_1}.
The resource theory approach also applies to classical theories. In general, resource
theories have the following common features: (1) a well-defined set of resource-free states,
such that any state not belonging to this set has a non-vanishing amount of resource;
(2) a well-defined set of resource-free operations, also known as allowed operations,
that cannot create or increase the resource in a state. These allow one to quantify the
resources present in states or operations and to characterize their roles in the transformations
between the states or the operations. In particular, one can define and rigorously
bound, or even determine, various resource measures; determine which states can be
transformed into others using allowed operations; and determine how the properties of states may
change, and how these changes are bounded under the allowed operations.
\medskip
A system in our resource theory is a quantum system $Q$ with a finite-dimensional Hilbert space
(denoted $Q$, too, without danger of confusion), together with a
Hamiltonian $H=A_1$ and other quantities (``charges'') $A_2, \ldots, A_c$, all of which are
Hermitian operators that do not necessarily commute with each other. We consider the composition of
$n$ non-interacting systems, where the Hilbert space of the \textit{composite} system $Q^n$ is
the tensor product $Q^{\otimes n} = Q_1 \otimes \cdots \otimes Q_n$ of the Hilbert spaces of
the \textit{individual} systems, and the $j$-th charge of the composite system is the sum of
charges of individual systems as follows,
\begin{equation}
A^{(n)}_j = \sum_{i=1}^{n} \openone^{\otimes (i-1)} \otimes A_j \otimes \openone^{\otimes (n-i)},
\quad j=1,2,\ldots,c.
\end{equation}
For ease of notation, we will write throughout
$A_j^{[Q_i]} = \openone^{\otimes (i-1)} \otimes A_j \otimes \openone^{\otimes (n-i)}$.
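For concreteness, the total charge operators $A_j^{(n)}$ can be assembled from Kronecker products. The following is an illustrative numerical sketch, not part of the thesis:

```python
import numpy as np

def total_charge(A, n):
    """Total charge A^(n) = sum_i 1 ⊗ ... ⊗ A ⊗ ... ⊗ 1 on n copies,
    with the single-system charge A acting on copy i."""
    d = A.shape[0]
    out = np.zeros((d**n, d**n), dtype=complex)
    for i in range(n):
        term = np.eye(1, dtype=complex)
        for k in range(n):
            term = np.kron(term, A if k == i else np.eye(d))
        out += term
    return out
```

For example, with $A=\sigma_z$ and $n=2$ this reproduces $\sigma_z\otimes\openone + \openone\otimes\sigma_z$, with spectrum $\{-2,0,0,2\}$.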
We wish to build a resource theory where the objects are states on a quantum system,
which are transformed under thermodynamically meaningful operations.
To any quantum state $\rho$ is assigned the point
$(\und{a},s) = (a_1,\ldots,a_c,s)
= \bigl( \Tr \rho A_1, \ldots, \Tr \rho A_c, S(\rho) \bigr) \in \mathbb{R}^{c+1}$,
which is an element in the \emph{phase diagram} that has been originally introduced,
for $c=1$, as energy-entropy diagram in \cite{Sparaciari2016}; there it is shown,
for a system where energy is the only conserved quantity, that the diagram is a convex set.
In the case of commuting multiple conserved quantities, the charge-entropy diagram has been
generalised and further investigated in \cite{brl19}.
Note that the set of all these vectors, denoted $\mathcal{P}^{(1)}$, is not in
general convex (unless the quantities commute pairwise).
An example is a qubit system with charges $\sigma_x$, $\sigma_y$ and $\sigma_z$: the charge values uniquely determine the state, which is a linear function of the $\Tr \rho\sigma_i$, and hence also its entropy; the von Neumann entropy, however, is well known to be strictly concave rather than linear.
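This non-convexity can be checked numerically. In the illustrative sketch below (not part of the thesis), the midpoint of the points of the pure states $\proj{0}$ and $\proj{1}$ is $(0,0,0,0)$, while the only qubit state with all three charge values zero is $\openone/2$, whose point is $(0,0,0,1)$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def entropy(rho):
    # von Neumann entropy S(rho) in bits
    ev = np.linalg.eigvalsh(rho)
    return float(-sum(p * np.log2(p) for p in ev if p > 1e-12))

def point(rho):
    """The vector (Tr rho sx, Tr rho sy, Tr rho sz, S(rho)) in P^(1)."""
    return tuple(np.trace(rho @ s).real for s in (sx, sy, sz)) + (entropy(rho),)
```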
Moreover, the set of these points for a composite system with charges $A_1^{(n)}, \ldots, A_c^{(n)}$,
which we denote $\mathcal{P}^{(n)}$, contains, but is not necessarily equal to, $n\mathcal{P}^{(1)}$
(equality does hold for commuting charges). Namely, consider the point
$g=\left(\frac{1}{2}\Tr (\rho_1+\rho_2) A_1, \ldots, \frac{1}{2}\Tr (\rho_1+\rho_2) A_c,
\frac{1}{2}S(\rho_1)+\frac{1}{2}S(\rho_2)\right)$, which does not necessarily
belong to $\mathcal{P}^{(1)}$ but belongs to its convex hull;
however, $2g \in \mathcal{P}^{(2)}$ due to the state $\rho_1 \otimes \rho_2$.
Therefore, we consider the convex hull of the set $\mathcal{P}^{(1)}$ and call it the
\emph{phase diagram} of the system, denoted
\begin{equation}
\overline{\mathcal{P}}
\equiv \overline{\mathcal{P}}^{(1)}
:= \left\{ \left(\sum_i p_i \Tr \rho_i A_1, \ldots, \sum_i p_i\Tr \rho_i A_c, \sum_i p_i S(\rho_i)\right) :
0 \leq p_i \leq 1,\,\sum_i p_i = 1 \right\}.
\end{equation}
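The role of the tensor product state in the argument above can be made concrete for the qubit with Pauli charges. In this illustrative sketch (not part of the thesis), $\rho_1=\proj{0}$ and $\rho_2=\proj{1}$ give the midpoint $g=(0,0,0,0)$, which no single qubit achieves (zero charge values force $\openone/2$, of entropy $1$), while $\rho_1\otimes\rho_2$ realizes $2g$ on two copies:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

rho1 = np.diag([1.0, 0.0]).astype(complex)  # |0><0|: point (0, 0, +1, 0)
rho2 = np.diag([0.0, 1.0]).astype(complex)  # |1><1|: point (0, 0, -1, 0)

# Two-copy product state realizing 2g = (0, 0, 0, 0):
rho12 = np.kron(rho1, rho2)
charges = [np.trace(rho12 @ (np.kron(s, I2) + np.kron(I2, s))).real
           for s in (sx, sy, sz)]           # total charges A_j^{(2)}
ev = np.linalg.eigvalsh(rho12)
S = float(-sum(p * np.log2(p) for p in ev if p > 1e-12))  # entropy of rho12
```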
The interpretation is that the objects of our resource theory are ensembles of
states $\{p_i,\rho_i\}$, rather than single states.
We define the \emph{zero-entropy diagram} and \emph{max-entropy diagram},
respectively, as the sets
\begin{align*}
\overline{\mathcal{P}}_0^{(1)}
&= \{(\und{a},0): \Tr \rho A_j = a_j \text{ for a state } \rho \}, \\
\overline{\mathcal{P}}_{\max}^{(1)}
&= \left\{\bigl(\und{a},S(\tau(\und{a}))\bigr): \Tr \rho A_j = a_j \text{ for a state } \rho \right\},
\end{align*}
where $\tau(\und{a})$ is the unique state maximising the entropy among all states
with charge values $\Tr \rho A_j = a_j$ for all $j$, which is called generalized
thermal state, or generalized Gibbs state, or also generalized grand canonical state \cite{Liu2007}.
Note that, as a linear image of the compact convex set of states, the zero-entropy diagram is
compact and convex.
We similarly define the set $\mathcal{P}^{(n)}$, the phase diagram $\overline{\mathcal{P}}^{(n)}$,
zero-entropy diagram $\overline{\mathcal{P}}_0^{(n)}$ and max-entropy diagram
$\overline{\mathcal{P}}_{\max}^{(n)}$
for the composition of $n$ systems with charges $A_1^{(n)}, \ldots, A_c^{(n)}$.
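For a single charge ($c=1$), the generalized Gibbs state $\tau(a)$ has the familiar exponential form $e^{-\beta A}/Z$, with $\beta$ fixed by the charge constraint. The following sketch computes it numerically; the helper `tau_of_a` and its bisection range are our own illustrative choices:

```python
import numpy as np

def gibbs(A, beta):
    """Gibbs state exp(-beta * A) / Z for a Hermitian charge A."""
    w, V = np.linalg.eigh(A)
    p = np.exp(-beta * w)
    p /= p.sum()
    return (V * p) @ V.conj().T

def tau_of_a(A, a, lo=-50.0, hi=50.0, iters=200):
    """Bisection on beta so that Tr(tau A) = a; the value a must lie
    strictly between the extreme eigenvalues of A."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        val = np.trace(gibbs(A, mid) @ A).real
        if val > a:
            lo = mid   # Tr(tau A) is decreasing in beta
        else:
            hi = mid
    return gibbs(A, (lo + hi) / 2)

A = np.diag([0.0, 1.0, 2.0])
tau = tau_of_a(A, 0.7)
assert abs(np.trace(tau @ A).real - 0.7) < 1e-6   # charge constraint met
```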
\begin{figure}[!t]
\includegraphics[width=1\textwidth]{phase-diagrams.jpg}
\caption{Schematic of the phase diagrams $\mathcal{P}^{(1)}$, $\mathcal{P}^{(2)}$
and $\overline{\mathcal{P}}$. As seen, $\mathcal{P}^{(1)}$ is not convex, and there is a hole inside the diagram.}
\label{fig:phase-diagrams}
\end{figure}
\begin{lemma}
\label{lemma:phase diagram properties}
For individual and composite systems with charges $A_j$ and $A^{(n)}_j$, respectively,
we have:
\begin{enumerate}
\item $\overline{\mathcal{P}}^{(n)}$, for $n \geq 1$, is a compact and convex
subset of $\mathbb{R}^{c+1}$.
\item $\overline{\mathcal{P}}^{(n)}$, for $n \geq 1$, is the convex hull of the union
$\overline{\mathcal{P}}_{0}^{(n)} \cup \overline{\mathcal{P}}_{\max}^{(n)}$,
of the zero-entropy diagram and the max-entropy diagram.
\item $\overline{\mathcal{P}}^{(n)} = n \overline{\mathcal{P}}^{(1)}$ for all $n \geq 1$.
\item $\mathcal{P}^{(n)}$ is convex for all $n \geq 2$, and indeed
$\mathcal{P}^{(n)} = \overline{\mathcal{P}}^{(n)} = n \overline{\mathcal{P}}^{(1)}$.
\item Every point of $\mathcal{P}^{(n)}$ is realised by a suitable tensor product state
$\rho_1 \otimes \cdots \otimes \rho_n$, for all $n \geq d$.
\item All points $\bigl(\und{a},S(\tau(\und{a}))\bigr) \in \overline{\mathcal{P}}_{\max}$
are extreme points of $\overline{\mathcal{P}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. The phase diagram is convex by definition. Further, $\Tr \rho A_j^{(n)}$ and $S(\rho)$
are continuous functions defined on the set of quantum states which is a compact set;
hence, the set $\mathcal{P}^{(n)}$ is also a compact set. The convex hull of a
compact set in finite dimensions is compact, so the phase diagram is a compact set.
2. Any point in the phase diagram according to the definition is a convex combination of the form
\[(a_1,\ldots,a_c,s)
= \left(\sum_i p_i \Tr(\rho_i A_1), \ldots, \sum_i p_i\Tr(\rho_i A_c),\sum_i p_i S(\rho_i)\right).\]
The point $(a_1,\ldots,a_c,0)$ belongs to $\overline{\mathcal{P}}_{0}^{(1)}$
because the state $\rho=\sum_i p_i \rho_i$ has charge values $a_1, \ldots, a_c$.
Moreover, the state with charge values $a_1, \ldots, a_c$ of maximum entropy is
the generalized thermal state $\tau(\und{a})$, so we have
\begin{align*}
S(\tau(\und{a})) \geq S(\rho) \geq \sum_i p_i S(\rho_i),
\end{align*}
where the second inequality is due to concavity of the entropy. Therefore, any point
$(\und{a},s)$ can be written as the convex combination of the points $(\und{a},0)$ and
$(\und{a},S(\tau(\und{a})))$.
3. Due to item 2, it is enough to show that
$\overline{\mathcal{P}}_{0}^{(n)} = n\overline{\mathcal{P}}_{0}^{(1)}$, and
$\overline{\mathcal{P}}_{\max}^{(n)}= n\overline{\mathcal{P}}_{\max}^{(1)}$.
The former follows from the definition. The latter is due to the fact that the
thermal state for a composite system is the tensor power of the thermal state of
the individual system.
4. Let $\tau(\und{a}) = \sum_i p_i \ketbra{i}{i}$ be the diagonalization of the
generalized thermal state.
For $n \geq 2$, define $\ket{v} = \sum_i \sqrt{p_i} \ket{i}^{\ox n}$. Obviously, the charge
values of the states $\tau(\und{a})^{\ox n}$ and $\ketbra{v}{v}$ are the same, since they
have the same reduced states on the individual systems;
thus, there is a pure state for any point in the zero-entropy diagram of the composite system.
Now, consider the state $\lambda \ketbra{v}{v}+(1-\lambda)\tau(\und{a})^{\ox n}$, which has the
same charge values as $\tau(\und{a})^{\ox n}$ and $\ketbra{v}{v}$.
The entropy $S\bigl(\lambda \ketbra{v}{v}+(1-\lambda)\tau(\und{a})^{\ox n}\bigr)$ is a continuous function
of $\lambda$; hence, for any value $s$ between $0$ and $S(\tau(\und{a})^{\ox n})$, there is a state
with the given values and entropy $s$.
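The key step in item 4, that the pure state $\ket{v} = \sum_i \sqrt{p_i}\ket{i}^{\ox n}$ has the same single-system marginals as $\tau(\und{a})^{\ox n}$, can be checked numerically; the sketch below uses $n = 3$ qubits and an illustrative spectrum:

```python
import numpy as np

def marginal_first(rho, d, n):
    """Reduced state on the first of n d-dimensional systems."""
    t = rho.reshape([d] * (2 * n))
    for _ in range(n - 1):
        # trace the last ket index against the last bra index
        t = np.trace(t, axis1=t.ndim // 2 - 1, axis2=t.ndim - 1)
    return t

d, n = 2, 3
p = np.array([0.7, 0.3])                 # illustrative spectrum of tau
v = np.zeros(d ** n)
for i in range(d):
    e = np.zeros(d); e[i] = 1.0
    ket = e
    for _ in range(n - 1):
        ket = np.kron(ket, e)
    v += np.sqrt(p[i]) * ket             # |v> = sum_i sqrt(p_i) |i>^{ox n}
rho_v = np.outer(v, v)

tau = np.diag(p)
# |v><v| is pure, yet its single-system marginal equals tau, so the charge
# values of |v><v| and tau^{ox n} coincide for any local charges:
assert np.allclose(marginal_first(rho_v, d, n), tau)
```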
5. For $n\geq d$, it is elementary to see that any state $\rho$ can be decomposed
into a uniform convex combination of $n$ pure states, i.e.
$\rho=\frac{1}{n} \sum_{i=1}^n \ketbra{\psi_i}{\psi_i}$.
Observe that the state $\psi^{n} = \ketbra{\psi_1}{\psi_1} \ox \cdots \ox \ketbra{\psi_n}{\psi_n}$ has the same
charge values as the state $\rho^{\otimes n}$, but as it is pure it has entropy $0$.
Further, consider the thermal state $\tau$ with the same charge values as $\rho$, but
the maximum entropy consistent with them. Now let
$\rho_i := \lambda \ketbra{\psi_i}{\psi_i}+(1-\lambda)\tau$, and
observe that $\rho_\lambda^{n} = \rho_1 \ox \cdots \ox \rho_n$ has the same charge values as
$\psi^{n}$, $\rho^{n}$ and $\tau^{\ox n}$.
Since the entropy $S(\rho_\lambda^{n})$ is a continuous function of $\lambda$,
thus interpolating smoothly between $0$ and $n S(\tau)$, there is a
tensor product state with the same given charge values and prescribed
entropy $s$ in the said interval.
6. This follows from the strict concavity of the von Neumann entropy $S(\rho)$ as a
function of the state, which imparts strict concavity on $\und{a} \mapsto S(\tau(\und{a}))$.
\end{proof}
\medskip
The penultimate point of Lemma \ref{lemma:phase diagram properties} motivates us to define
a resource theory where the objects are sequences of states on composite systems
of $n\rightarrow\infty$ parts.
Inspired by \cite{Sparaciari2016}, the allowed operations in this resource theory
are those that respect basic principles of physics, namely entropy and charge
conservation. We point out right here, that ``physics'' in the present context
does not necessarily refer to the fundamental physical laws of nature, but to
any rule that the system under consideration obeys.
It is well-known that quantum operations that preserve entropy for all states are
unitaries. The class of unitaries that conserve charges of a system are precisely
those that commute with all charges of that system.
However, it turns out that these constraints are too strong if imposed literally,
when many charges are to be conserved, as it could easily happen that only
trivial unitaries are allowed.
Our way out is to consider the thermodynamic limit and at the same time relax the
allowed operations to approximately entropy and charge conserving ones.
As for the former, we couple the composite system to an ancillary system with corresponding
Hilbert space $\mathcal{K}$ of dimension $2^{o(n)}$ where restricting the dimension of
the ancilla ensures that the \emph{average} entropy of an individual system, that is,
entropy of the composite system per $n$ does not change in the limit of large $n$.
Moreover, as for charge conservation, we consider unitaries that preserve the average
charges of an individual system, and we allow unitaries that are \emph{almost} commuting
with the total charges of the composite system and the ancilla. The precise definition
goes as follows:
\begin{definition}
\label{almost-commuting unitaries}
A unitary operation $U$ acting on a composite system coupled to an ancillary system with
Hilbert spaces $\mathcal{H}^{\otimes n}$ and $\mathcal{K}$ of dimension $2^{o(n)}$, respectively,
is called an \emph{almost-commuting unitary} with the total charges of a composite system and an
ancillary system if the operator norm of the normalised commutator for all total charges
vanishes asymptotically for large $n$:
\begin{align*}
& \lim_{n \to \infty} \frac{1}{n} \norm{ [U,A_j^{(n)}+A_j']}_{\infty} =\\
& \qquad \lim_{n \to \infty} \frac{1}{n}\norm{ U (A_j^{(n)}+A_j')-(A_j^{(n)}+A_j') U }_{\infty}
= 0
\qquad \text{for } j=1,\ldots,c,
\end{align*}
where $A_j^{(n)}$ and $A_j'$ are respectively the charges of the composite system
and the ancilla, with $\norm{A_j'}_{\infty} = o(n)$.
\end{definition}
We stress that the definition of almost-commuting unitaries automatically implies that
the ancillary system has a relatively small dimension and charges with small operator
norm compared to the composite system.
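As a simple illustration (our own example, not from the construction above), any unitary acting on a single subsystem is almost-commuting in the sense of Definition~\ref{almost-commuting unitaries}, even if it fails to commute with the local charge, since the normalised commutator norm scales as $1/n$:

```python
import numpy as np

def op_norm(M):
    return np.linalg.norm(M, 2)   # operator norm = largest singular value

def total_charge(A, n):
    """A^{(n)} = sum_k 1 x ... x A (k-th slot) x ... x 1."""
    d = A.shape[0]
    out = np.zeros((d ** n, d ** n), dtype=complex)
    for k in range(n):
        term = np.eye(1)
        for j in range(n):
            term = np.kron(term, A if j == k else np.eye(d))
        out += term
    return out

A = np.array([[1.0, 0.0], [0.0, -1.0]])   # sigma_z as the charge
u = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x: [u, A] != 0

for n in [2, 4, 6]:
    An = total_charge(A, n)
    U = np.kron(u, np.eye(2 ** (n - 1)))  # unitary on one subsystem only
    c = op_norm(U @ An - An @ U) / n
    # the normalised commutator norm is ||[u, A]|| / n, vanishing as n grows
    assert np.isclose(c, op_norm(u @ A - A @ u) / n)
```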
The first step in the development of our resource theory is a precise characterisation
of which transformations between sequences of product states are possible using
almost-commuting unitaries.
To do so, we define \textit{asymptotically equivalent} states as follows:
\begin{definition}
\label{Asymptotic equivalence definition}
Two sequences of product states $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ and
$\sigma^n=\sigma_1 \otimes \cdots \otimes \sigma_n$ of a composite system with
charges $A_j^{(n)}$ for $j=1,\ldots,c$, are called \emph{asymptotically equivalent} if
\begin{align*}
\lim_{n \to \infty} \frac{1}{n} \abs{S(\rho^n) - S(\sigma^n)} &= 0, \\
\lim_{n \to \infty} \frac{1}{n} \abs{\Tr \rho^n A_j^{(n)} - \Tr \sigma^n A_j^{(n)}} &= 0 \text{ for } j=1,\ldots,c.
\end{align*}
In other words, two sequences of product states are considered equivalent if their
associated points in the normalised phase diagrams $\frac1n \mathcal{P}^{(n)}$ differ
by a sequence converging to $0$.
\end{definition}
The \emph{asymptotic equivalence theorem} of \cite{Sparaciari2016} characterizes
feasible state transformations via \textit{exactly} commuting unitaries where energy
is the only conserved quantity of a system, showing that it is precisely
given by asymptotic equivalence.
We prove an extension of this theorem for systems with multiple conserved quantities,
by allowing almost-commuting unitaries.
\begin{theorem}[Asymptotic (approximate) Equivalence Theorem]
\label{Asymptotic equivalence theorem}
Let $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ and
$\sigma^n=\sigma_1 \otimes \cdots \otimes \sigma_n$
be two sequences of product states of a composite system with
charges $A_j^{(n)}$ for $j=1,\ldots,c$.
These two states are asymptotically equivalent if and only if
there exist ancillary quantum systems with corresponding Hilbert space $\mathcal{K}$
of dimension $2^{o(n)}$ and an almost-commuting unitary $U$ acting on
$\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ such that
\begin{align*}
\lim_{n \to \infty} \norm{U \rho^n \otimes \omega' U^{\dagger} - \sigma^n \otimes \omega}_1 &= 0,
\end{align*}
where $\omega$ and $\omega'$ are states of the ancillary system, and charges of the ancillary
system are trivial, $A_j' = 0$.
\end{theorem}
\medskip
The \emph{proof} of this theorem is given in Section~\ref{proof-AET}, as it relies on a
number of technical lemmas, among them a novel construction of approximately
microcanonical subspaces (Section~\ref{section: Approximate microcanonical (a.m.c.) subspace}).
\medskip
By grouping the $Q$-systems into blocks of $k$, we do not of course change the
physics of our system, except that now in the asymptotic limit we only consider
$n = k\nu$ copies of $Q$, but the state $\rho^n$ is asymptotically equivalent to
$\rho^{n+O(1)}$ via almost-commuting unitaries according to Definition \ref{almost-commuting unitaries}
and Theorem \ref{Asymptotic equivalence theorem}.
But now we consider $Q^k$ with its charge observables $A_j^{(k)}$ as elementary
systems, which have many more states than the $k$-fold product states we began with.
Yet, Lemma \ref{lemma:phase diagram properties} shows that the phase diagram for
the $k$-copy system is simply the rescaled single-copy phase diagram,
$\overline{\mathcal{P}}^{(k)} = k \overline{\mathcal{P}}^{(1)}$, and indeed
for $k\geq d$, $\mathcal{P}^{(k)} = k \overline{\mathcal{P}}^{(1)}$. This means
that we can extend the equivalence relation of asymptotic equivalence and the
concomitant Asymptotic Equivalence Theorem (AET) \ref{Asymptotic equivalence theorem}
to any sequences of states that factor into product states of blocks $Q^k$, for
any integer $k$, which freedom we shall exploit in our treatment of thermodynamics.
\section{Approximate microcanonical (a.m.c.) subspace}
\label{section: Approximate microcanonical (a.m.c.) subspace}
In this section, we recall the definition of an
approximate microcanonical (a.m.c.) subspace and give a new proof that it exists
for certain explicitly given parameters.
For charges $A_j$ and average values $v_j$, an a.m.c. subspace is essentially a \textit{common} subspace for the spectral projectors of the $A_j^{(n)}$ with corresponding values close to $n v_j$; that is, a subspace onto which a state projects with high probability if and only if it projects with high probability onto the spectral projectors of all the charges. We show in Theorem~\ref{thm:symmetric-micro-exists} that for large enough $n$ such a subspace exists.
An interesting property of an a.m.c. subspace is that any unitary acting on this subspace
is an almost-commuting unitary with charges $A_j^{(n)}$.
\begin{definition}
\label{defi:microcanonical}
An \emph{approximate microcanonical (a.m.c.) subspace}, or more precisely
a \emph{$(\epsilon,\eta,\eta',\delta,\delta')$-approximate microcanonical subspace},
$\cM$ of $\cH^{\ox n}$, with projector $P$,
for charges $A_j$ and values $v_j = \langle A_j \rangle$
is one that consists, in a certain precise sense, of exactly the
states with ``very sharp'' values of all the $A_j^{(n)}$.
Mathematically, the following has to hold:
\begin{enumerate}
\item Every state $\omega$ with support contained in $\cM$ satisfies
$\tr \omega\Pi^\eta_j \geq 1-\delta$ for
all $j$.
\item Conversely, every state $\omega$ on $\cH^{\ox n}$ such that
$\tr \omega\Pi^{\eta'}_j \geq 1-\delta'$ for
all $j$, satisfies $\tr\omega P \geq 1-\epsilon$.
\end{enumerate}
Here, $\Pi^\eta_j := \bigl\{ nv_j-n\eta\Sigma(A_j) \leq A_j^{(n)} \leq nv_j+n\eta\Sigma(A_j) \bigr\}$
is the spectral projector of $A_j^{(n)}$ of values close to $n v_j$,
and $\Sigma(A) = \lambda_{\max}(A)-\lambda_{\min}(A)$ is the spectral diameter
of the Hermitian $A$, i.e.~the diameter of the smallest disc
covering the spectrum of $A$.
\end{definition}
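The spectral projectors $\Pi^\eta_j$ can be built directly from an eigendecomposition; a minimal sketch for a single charge on $n$ qubits, with illustrative parameter values:

```python
import numpy as np

def spectral_window_projector(An, center, halfwidth):
    """Projector onto eigenvectors of An with eigenvalue within
    [center - halfwidth, center + halfwidth]."""
    w, V = np.linalg.eigh(An)
    Vs = V[:, np.abs(w - center) <= halfwidth]
    return Vs @ Vs.conj().T

# Total charge on n qubits for A = diag(0, 1), so Sigma(A) = 1.
n, v, eta = 4, 0.5, 0.3
A = np.diag([0.0, 1.0])
An = np.zeros((2 ** n, 2 ** n))
for k in range(n):
    term = np.eye(1)
    for j in range(n):
        term = np.kron(term, A if j == k else np.eye(2))
    An += term

Pi = spectral_window_projector(An, n * v, n * eta * 1.0)  # Sigma(A) = 1
# Pi is an orthogonal projector commuting with An:
assert np.allclose(Pi @ Pi, Pi)
assert np.allclose(Pi @ An, An @ Pi)
```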
\medskip
\begin{remark}
It is shown in Theorem~3 of~\cite{Halpern2016} that for every $\epsilon > c\delta' > 0$,
$\delta > 0$ and $\eta > \eta' > 0$, and for all sufficiently
large $n$, there exists a nontrivial
$(\epsilon,\eta,\eta',\delta,\delta')$-a.m.c.~subspace.
However, there are two (related) reasons why one might not be completely
satisfied with the argument in~\cite{Halpern2016}: First, the proof uses
a difficult result of Ogata~\cite{Ogata} to reduce the non-commuting
case to the seemingly easier case of commuting observables; while this
is conceptually nice, it makes it harder to perceive the nature
of the constructed subspace. Secondly, despite the fact that the
defining properties of an a.m.c.~subspace are manifestly permutation symmetric
(w.r.t.~permutations of the $n$ subsystems), the resulting construction
does not have this property.
Here we address both these concerns. Indeed, we shall show by
essentially elementary means how to obtain an a.m.c.~subspace
that is by its definition permutation symmetric.
\end{remark}
\begin{theorem}
\label{thm:symmetric-micro-exists}
Under the previous assumptions, for every $\epsilon > 2(n+1)^{3d^2}\delta' > 0$,
$\eta >\eta' > 0$ and $\delta>0$, for all sufficiently large $n$
there exists an approximate microcanonical subspace projector.
In addition, the subspace can be chosen to be stable under permutations
of the $n$ systems: $U^\pi\cM = \cM$, or equivalently $U^\pi P (U^\pi)^\dagger = P$,
for any permutation $\pi\in S_n$ and its unitary action $U^\pi$.
More precisely, given $\eta > \eta' > 0$ and $\epsilon > 0$, there exists an $\alpha > 0$ such
that there is a non-trivial $(\epsilon,\eta,\eta',\delta,\delta')$-a.m.c. subspace
with
\begin{align*}
\delta &= (c+3)(5n)^{5d^2} e^{-\alpha n} \text{ and } \\
\delta' &= \frac{\epsilon}{2(n+1)^{3d^2}} - (c+3)(5n)^{2d^2} e^{-\alpha n}.
\end{align*}
Furthermore, we may choose $\alpha = \frac{(\eta-\eta')^2}{8c^2(d+1)^2}$.
\end{theorem}
\begin{proof}
For $s>0$, partition the state space ${\mathcal{S}}(\cH)$ on $\cH$ into
\begin{align*}
{\mathcal{C}}_s(\underline{v}) &= \bigl\{ \sigma : \forall j\ |\tr\sigma A_j - v_j| \leq s\Sigma(A_j) \bigr\}, \\
\cF_s(\underline{v}) &= \bigl\{ \sigma : \exists j\ |\tr\sigma A_j - v_j| > s\Sigma(A_j) \bigr\}
= {\mathcal{S}}(\cH) \setminus {\mathcal{C}}_s(\underline{v}),
\end{align*}
which are the sets of states with $A_j$-expectation values ``close''
to and ``far'' from $\underline{v}$.
Note that if $\rho \in {\mathcal{C}}_s(\underline{v})$ and
$\sigma \in \cF_t(\underline{v})$, $0 < s < t$, then $\|\rho-\sigma\|_1 \geq t-s$.
Choosing the precise values of $s>\eta'$ and $t<\eta$ later,
we pick a universal distinguisher $(P,P^\perp)$
between ${\mathcal{C}}_s(\underline{v})^{\ox n}$ and $\cF_t(\underline{v})^{\ox n}$,
according to Lemma~\ref{lemma:universal-test} below:
\begin{align}
\label{eq:s-close}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} P^\perp &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}, \\
\label{eq:t-far}
\forall \sigma\in\cF_t(\underline{v})\ \tr\sigma^{\ox n} P &\leq (c+2)(5n)^{2d^2} e^{-\zeta n},
\end{align}
with $\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
Our a.m.c.~subspace will be $\cM := \operatorname{supp} P$;
by Lemma~\ref{lemma:universal-test}, $P$ and likewise $\cM$ are permutation symmetric.
It remains to check the properties of the definition. First, let $\omega$
be supported on $\cM$. Since we are interested in $\tr\omega \Pi_j^\eta$, we may
without loss of generality assume that $\omega$ is permutation symmetric.
Thus, by the ``constrained de Finetti reduction'' (aka ``Postselection
Lemma'')~\cite[Lemma~18]{Duan2016},
\begin{align}
\label{eq:dF}
\omega \leq (n+1)^{3d^2} \int {\rm d}\sigma\,\sigma^{\ox n} F(\omega,\sigma^{\ox n})^2,
\end{align}
with a certain universal probability measure ${\rm d}\sigma$ on ${\mathcal{S}}(\cH)$,
and the fidelity $F(\rho,\sigma) = \|\sqrt{\rho}\sqrt{\sigma}\|_1$ between
states. We need the monotonicity of the fidelity under cptp maps, which we
apply to the test $(P,P^\perp)$:
\[
F(\omega,\sigma^{\ox n})^2 \leq F\bigl( (\tr\sigma^{\ox n}P,1-\tr\sigma^{\ox n}P),(1,0) \bigr)^2
\leq \tr\sigma^{\ox n}P,
\]
which holds because $\tr\omega P = 1$.
Thus,
\begin{align}
\label{eq:dF-P}
\tr\omega(\Pi_j^\eta)^\perp \leq (n+1)^{3d^2} \int {\rm d}\sigma\,\bigl(\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp\bigr)
(\tr\sigma^{\ox n}P).
\end{align}
Now we split the integral on the right hand side of Eq.~(\ref{eq:dF-P}) into two parts,
$\sigma\in{\mathcal{C}}_t(\underline{v})$ and $\sigma\in\cF_t(\underline{v})$:
If $\sigma\in\cF_t(\underline{v})$, then by Eq.~(\ref{eq:t-far})
we have
\[
\tr\sigma^{\ox n}P \leq (c+2)(5n)^{2d^2} e^{-\zeta n}.
\]
On the other hand, if $\sigma\in{\mathcal{C}}_t(\underline{v})$, then because of $t < \eta$ we have
\[
\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp \leq 2 e^{-2(\eta-t)^2 n},
\]
which follows from Hoeffding's inequality~\cite{DemboZeitouni}:
Indeed, let $Z_\ell$ be the i.i.d.~random variables obtained by the
measurement of $A_j$ on the state $\sigma$. They take values in
the interval $[\lambda_{\min}(A_j),\lambda_{\max}(A_j)]$, their expectation
values satisfy $\EE Z_\ell = \tr\sigma A_j \in [v_j \pm t\Sigma(A_j)]$, while
\[\begin{split}
\tr\sigma^{\ox n}(\Pi_j^\eta)^\perp &= \Pr\left\{\frac1n\sum_\ell Z_\ell \not\in[v_j \pm \eta\Sigma(A_j)]\right\} \\
&\leq \Pr\left\{\frac1n\sum_\ell Z_\ell \not\in[\tr\sigma A_j \pm (\eta-t)\Sigma(A_j)]\right\},
\end{split}\]
so Hoeffding's inequality applies.
All taken together, we have
\[\begin{split}
\tr\omega(\Pi_j^\eta)^\perp
&\leq (n+1)^{3d^2} \left( (c+2)(5n)^{2d^2} e^{-\zeta n} + 2 e^{-2(\eta-t)^2 n} \right) \\
&\leq (c+3)(5n)^{5d^2} e^{-2(\eta-t)^2 n},
\end{split}\]
because we can choose $t$ such that
\begin{equation}
\label{eq:t}
\eta-t = \frac{t-s}{2c\sqrt{2d^2+1}} \geq \frac{t-s}{4cd}.
\end{equation}
Secondly, let $\omega$ be such that $\tr\omega \Pi_j^\eta \geq 1-\delta'$;
as we are interested in $\tr\omega P$, we may again assume
without loss of generality that $\omega$ is permutation symmetric,
and invoke the constrained de Finetti reduction~\cite[Lemma~18]{Duan2016},
Eq.~(\ref{eq:dF}).
From that we get, much as before,
\[
\tr\omega P^\perp \leq (n+1)^{3d^2} \int {\rm d}\sigma\, (\tr\sigma^{\ox n}P^\perp) F(\omega,\sigma^{\ox n})^2,
\]
and we split the integral on the right hand side into two parts,
depending on $\sigma\in\cF_s(\underline{v})$ or
$\sigma\in{\mathcal{C}}_s(\underline{v})$: In the latter case,
$\tr\sigma^{\ox n}P^\perp \leq (c+2)(5n)^{2d^2} e^{-\zeta n}$,
by Eq.~(\ref{eq:s-close}). In the former case, there exists a $j$ such
that $\tr\sigma A_j = w_j \not\in [v_j \pm s\Sigma(A_j)]$, and so
\[\begin{split}
F(\omega,\sigma^{\ox n})^2
&\leq F\bigl( (1-\delta',\delta'), (\tr\sigma^{\ox n}\Pi_j^{\eta'},1-\tr\sigma^{\ox n}\Pi_j^{\eta'}) \bigr)^2 \\
&\leq \left( \sqrt{\delta'} + \sqrt{\tr\sigma^{\ox n}\Pi_j^{\eta'}} \right)^2 \\
&\leq 2 \delta' + 2 \tr\sigma^{\ox n}\Pi_j^{\eta'} \\
&\leq 2 \delta' + 4 e^{-2(s-\eta')^2 n},
\end{split}\]
the last line again by Hoeffding's inequality; indeed, with the previous notation,
\[\begin{split}
\tr\sigma^{\ox n}\Pi_j^{\eta'} &= \Pr\left\{ \frac1n \sum_\ell Z_\ell \in[v_j \pm \eta'\Sigma(A_j)] \right\} \\
&\leq \Pr\left\{ \frac1n \sum_\ell Z_\ell \not\in[w_j \pm (s-\eta')\Sigma(A_j)] \right\}.
\end{split}\]
All taken together, we get
\[\begin{split}
\tr\omega P^\perp
&\leq (n+1)^{3d^2} \left( (c+2)(5n)^{2d^2} e^{-\zeta n} + 4 e^{-2(s-\eta')^2 n} + 2 \delta' \right) \\
&\leq (n+1)^{3d^2} (c+3)(5n)^{2d^2} e^{-2(s-\eta')^2 n} + 2(n+1)^{3d^2}\delta',
\end{split}\]
because we can choose $s$ such that
\begin{equation}
\label{eq:s}
s-\eta' = \frac{t-s}{2c\sqrt{2d^2+1}} \geq \frac{t-s}{4cd}.
\end{equation}
From eqs.~(\ref{eq:t}) and (\ref{eq:s}) we get by summation
\[
\eta-\eta' = t-s + \frac{t-s}{c\sqrt{2d^2+1}} \leq (t-s)\left( 1+\frac{1}{cd} \right),
\]
from which we obtain
\[
s-\eta' = \eta-t \geq \frac{\eta-\eta'}{4c(d+1)},
\]
concluding the proof.
\end{proof}
\medskip
\begin{lemma}
\label{lemma:universal-test}
For all $0 < s < t$ there exists $\zeta > 0$, such that
for all $n$ there exists a permutation symmetric
projector $P$ on $\cH^{\ox n}$ with the properties
\begin{align}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} P^\perp &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}, \\
\forall \sigma\in\cF_t(\underline{v})\ \tr\sigma^{\ox n} P &\leq (c+2)(5n)^{2d^2} e^{-\zeta n}.
\end{align}
The constant $\zeta$ may be chosen as
$\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
\end{lemma}
\begin{proof}
We start by showing that there is a POVM $(M,\openone-M)$ with
\begin{align}
\forall \rho\in{\mathcal{C}}_s(\underline{v})\ \tr\rho^{\ox n} (\openone-M) &\leq c e^{-\frac{(t-s)^2}{2c^2}n}, \\
\forall \sigma\in\cF_t(\underline{v})\ \ \qquad \tr\sigma^{\ox n} M &\leq e^{-\frac{(t-s)^2}{2c^2}n}.
\end{align}
Namely, for each $\ell=1,\ldots,n$ choose $j_\ell \in \{1,\ldots,c\}$ uniformly
at random and measure $A_{j_\ell}$ on the $\ell$-th system. Denote the outcome
by the random variable $Z_\ell^{j_{\ell}}$ and let $Z_\ell^j = 0$ for $j\neq j_\ell$.
Thus, for all $j$, the random variables $Z_\ell^j$ are i.i.d.~with
mean $\EE Z_\ell^j = \frac{1}{c}\tr\rho A_j$, if the measured state is $\rho^{\ox n}$.
Outcome $M$ corresponds to the event
\[
\forall j\ \frac1n \sum_\ell Z_\ell^j \in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right];
\]
outcome $\openone-M$ corresponds to the complementary event
\[
\exists j\ \frac1n \sum_\ell Z_\ell^j \not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right].
\]
We can use Hoeffding's inequality to bound the traces in question.\\
For $\rho\in {\mathcal{C}}_s(\underline{v})$, we have $|\EE Z_\ell^j - \frac{1}{c} v_j | \leq \frac{s}{c}\Sigma(A_j)$
for all $j$, and so:
\[\begin{split}
\tr\rho^{\ox n}(\openone-M) &= \Pr\left\{ \exists j\ \frac1n \sum_\ell Z_\ell^j
\not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \sum_{j=1}^c \Pr\left\{ \frac1n \sum_\ell Z_\ell^j
\not\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \sum_{j=1}^c \Pr\left\{ \left| \frac1n \sum_\ell Z_\ell^j - \EE Z_1^j \right|
> \frac{t-s}{2c} \Sigma(A_j) \right\} \\
&\leq c e^{-\frac{(t-s)^2}{2c^2}n}.
\end{split}\]
For $\sigma\in\cF_t(\underline{v})$, there exists a $j$ such that
$|\EE Z_\ell^j - \frac{1}{c} v_j | > \frac{t}{c}\Sigma(A_j)$. Thus,
\[\begin{split}
\tr\sigma^{\ox n} M &\leq \Pr\left\{ \frac1n \sum_\ell Z_\ell^j
\in \frac{1}{c}\left[v_j \pm \frac{s+t}{2}\Sigma(A_j)\right] \right\} \\
&\leq \Pr\left\{ \left| \frac1n \sum_\ell Z_\ell^j - \EE Z_1^j \right|
> \frac{t-s}{2c} \Sigma(A_j) \right\} \\
&\leq e^{-\frac{(t-s)^2}{2c^2}n}.
\end{split}\]
This POVM is, by construction, permutation symmetric, but $M$ is not a
projector. To fix this, choose $\lambda$-nets $\cN_C^\lambda$ in ${\mathcal{C}}_s(\underline{v})$
and $\cN_F^\lambda$ in $\cF_t(\underline{v})$, with
$\lambda = e^{-\zeta n}$ and $\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$.
This means that every state $\rho\in{\mathcal{C}}_s(\underline{v})$
is no farther than $\lambda$ in trace distance from a $\rho'\in\cN_C^\lambda$,
and likewise for $\cF_t(\underline{v})$.
By~\cite[Lemma~III.6]{Hayden2006} (or rather, a minor variation of its proof),
we can find such nets with
$|\cN_C^\lambda|,\ |\cN_F^\lambda| \leq \left( \frac{5n}{\lambda} \right)^{2d^2}$
elements.
Form the two states
\begin{align*}
\Gamma &:= \frac{1}{|\cN_C^\lambda|} \sum_{\rho\in\cN_C^\lambda} \rho^{\ox n}, \\
\Phi &:= \frac{1}{|\cN_F^\lambda|} \sum_{\sigma\in\cN_F^\lambda} \sigma^{\ox n},
\end{align*}
and let
\[
P := \{ \Gamma-\Phi \geq 0 \}
\]
be the Helstrom projector which optimally distinguishes $\Gamma$ from $\Phi$.
But we know already a POVM that distinguishes the two states, hence
$(P,P^\perp=\openone-P)$ cannot be worse:
\[
\tr\Gamma P^\perp + \tr\Phi P \leq \tr\Gamma (\openone-M) + \tr\Phi M
\leq (c+1) e^{-\frac{(t-s)^2}{2c^2}n},
\]
thus for all $\rho\in\cN_C^\lambda$ and $\sigma\in\cN_F^\lambda$,
\[
\tr\rho^{\ox n}P^\perp,\ \tr\sigma^{\ox n}P
\leq (c+1)\left( \frac{5n}{\lambda} \right)^{2d^2} e^{-\frac{(t-s)^2}{2c^2}n}.
\]
So, by the $\lambda$-net property, we find
for all $\rho\in{\mathcal{C}}_s(\underline{v})$ and $\sigma\in\cF_t(\underline{v})$,
\[
\tr\rho^{\ox n}P^\perp,\ \tr\sigma^{\ox n}P
\leq \lambda + (c+1)\left( \frac{5n}{\lambda} \right)^{2d^2} e^{-\frac{(t-s)^2}{2c^2}n}
\leq (c+2)(5n)^{2d^2} e^{-\zeta n},
\]
by our choice of $\lambda$.
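The final estimate, i.e. that the choice $\lambda = e^{-\zeta n}$ with $\zeta = \frac{(t-s)^2}{2c^2(2d^2+1)}$ turns the net bound into $(c+2)(5n)^{2d^2} e^{-\zeta n}$, can be checked numerically (with illustrative parameter values):

```python
import numpy as np

# Check: with lambda = exp(-zeta*n) and zeta = (t-s)^2 / (2 c^2 (2 d^2 + 1)),
#   lambda + (c+1) (5n/lambda)^{2d^2} exp(-(t-s)^2 n / (2c^2))
# is at most (c+2) (5n)^{2d^2} exp(-zeta*n).
c, d, t, s = 2, 2, 0.5, 0.2   # illustrative values
zeta = (t - s) ** 2 / (2 * c ** 2 * (2 * d ** 2 + 1))
for n in [10, 100, 1000]:
    lam = np.exp(-zeta * n)
    lhs = lam + (c + 1) * (5 * n / lam) ** (2 * d ** 2) \
              * np.exp(-(t - s) ** 2 * n / (2 * c ** 2))
    rhs = (c + 2) * (5 * n) ** (2 * d ** 2) * np.exp(-zeta * n)
    assert lhs <= rhs
```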
\end{proof}
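The Helstrom step in the proof above can also be checked numerically: for the projector $P = \{\Gamma - \Phi \geq 0\}$, the total error $\tr\Gamma P^\perp + \tr\Phi P$ equals $1 - \frac{1}{2}\norm{\Gamma-\Phi}_1$, which no POVM can beat. A sketch with illustrative random states:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

d = 6
Gamma, Phi = random_state(d), random_state(d)

# Helstrom projector onto the nonnegative eigenspace of Gamma - Phi
w, V = np.linalg.eigh(Gamma - Phi)
Vp = V[:, w >= 0]
P = Vp @ Vp.conj().T
Pperp = np.eye(d) - P

err = np.trace(Gamma @ Pperp).real + np.trace(Phi @ P).real
# total error of the Helstrom test equals 1 - (1/2)||Gamma - Phi||_1
assert np.isclose(err, 1 - 0.5 * np.abs(w).sum())
```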
\begin{corollary}\label{corollary: a.m.c. projection}
For charges $A_j$, values $v_j = \langle A_j \rangle$ and $n>0$, Theorem~\ref{thm:symmetric-micro-exists} implies that there is an a.m.c. subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ for any $ \eta' > 0$ with the following parameters:
\begin{align*}
\eta &=2 \eta',\\
\delta' &=\frac{c+3}{2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}}, \\
\delta &=(c+3)(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}},\\
\epsilon &=2(c+3)(n+1)^{3d^2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d+1)^2}}.
\end{align*}
Moreover, let $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$ be a state with $\frac{1}{n}\abs{\Tr(\rho^n A_j^{(n)})- v_j}\leq \frac{1}{2} \eta' \Sigma(A_j)$ for all $j$. Then, $\rho^n$ projects onto the a.m.c. subspace with probability at least $1-\epsilon$:
\[\Tr(\rho^n P) \geq 1- \epsilon.\]
\end{corollary}
\begin{proof}
For simplicity of notation we drop the subscript $j$ from $A_j$, $v_j$ and $\Pi^{\eta'}_j$, and let $\sum_{l=1}^d E_l \ketbra{l}{l}$ be the spectral decomposition of $A$. Define independent random variables $X_i$ for $i=1,\ldots,n$ taking values in the set $\set{E_1,\ldots,E_d}$ with probabilities $p_i(E_l)=\Tr(\rho_i \ketbra{l}{l} )$.
Furthermore, define the random variable $\overline{X}=\frac{X_1+\ldots+X_n}{n}$, which has the following expectation value
\begin{align*}
\mathbb{E}(\overline{X})=\frac{1}{n} \Tr(\rho^n A^{(n)}).
\end{align*}
Therefore, we obtain
\begin{align*}
1-&\Tr(\rho^n \Pi^{\eta'})\\
&=\sum_{\substack{l_1,\ldots,l_n: \\ \abs{E_{l_1}+\ldots+E_{l_n}-n v} \geq n \eta' \Sigma(A) }} \bra{l_1} \rho_1 \ket{l_1} \ldots \bra{l_n} \rho_n \ket{l_n} \\
&=\Pr \left( \abs{\overline{X}- v} \geq \eta' \Sigma(A) \right) \\
&=\Pr \left( \overline{X}- \mathbb{E}(\overline{X}) \geq \eta'\Sigma(A)+v - \mathbb{E}(\overline{X}) \>\bigcup \> \overline{X}- \mathbb{E}(\overline{X}) \leq -\eta' \Sigma(A)+v - \mathbb{E}(\overline{X}) \right) \\
&\leq \exp \left(-\frac{2n (\eta'\Sigma(A)+v - \mathbb{E}(\overline{X}))^2 }{(\Sigma(A))^2} \right)+\exp \left(-\frac{2n (\eta'\Sigma(A)-v + \mathbb{E}(\overline{X}))^2 }{(\Sigma(A))^2} \right)\\
& \leq 2 \exp (-\frac{n \eta'^2}{2})\\
& \leq \delta',
\end{align*}
where the second line follows because the random variables $X_1,\ldots,X_n$ are independent and as a result $\Pr\{\overline{X}=\frac{E_{l_1}+\ldots+E_{l_n}}{n}\}=\bra{l_1} \rho_1 \ket{l_1} \cdots \bra{l_n} \rho_n \ket{l_n}$. The fourth line is due to Hoeffding's inequality (Lemma~\ref{Hoeffding's inequality}). The fifth line is due to the assumption $\abs{\mathbb{E}(\overline{X})- v} \leq \frac{1}{2} \eta' \Sigma(A)$.
Thus, by the definition of the a.m.c. subspace, $\Tr(\rho^n P) \geq 1- \epsilon$.
\end{proof}
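The Hoeffding estimate in the corollary's proof can be checked exactly for a single diagonal charge, where $\Tr(\rho^n \Pi^{\eta'})$ reduces to a binomial tail (illustrative parameters, with $\Sigma(A) = 1$):

```python
from math import comb
import numpy as np

# Exact computation of 1 - Tr(rho^n Pi^{eta'}) for the charge A = diag(0, 1)
# on n qubits, compared with the Hoeffding bound 2*exp(-n*eta'^2/2).
n, etap = 60, 0.3
v = 0.5     # target average charge value
p = 0.5     # each rho_i = diag(1-p, p), so E(Xbar) = p = v exactly

# distribution of X_1 + ... + X_n is binomial
probs = np.array([comb(n, k) * p ** k * (1 - p) ** (n - k)
                  for k in range(n + 1)])
ks = np.arange(n + 1)
outside = probs[np.abs(ks - n * v) >= n * etap].sum()

assert outside <= 2 * np.exp(-n * etap ** 2 / 2)
```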
\section{Proof of the AET Theorem~\ref{Asymptotic equivalence theorem}}
\label{proof-AET}
Here, we first prove the following lemma, whose points 3 and 4 will be used to prove the main theorem. Corollary~\ref{corollary: a.m.c. projection} implies that, assuming $\frac{1}{n}\Tr \rho^n A^{(n)}_j \approx \frac{1}{n}\Tr \sigma^n A^{(n)}_j \approx v_j$, the states $\rho^n$ and
$\sigma^n$ project onto the a.m.c. subspace with high probability.
Hence, in Lemma~\ref{lemma: timmed state}, we show that one can find states $\widetilde{\rho}$ and $\widetilde{\sigma}$ with support inside the a.m.c. subspace which are very close to the original states in trace norm, that is, $\widetilde{\rho} \approx \rho^n$ and $\widetilde{\sigma} \approx \sigma^n$, and that there are unitaries $V_1$ and $V_2$ that factorize these states into the tensor product of maximally mixed states $\tau$ and $\tau'$ and some other state of very small dimension:
\begin{align*}
V_1\widetilde{\rho}V_1^{\dagger}=\tau \otimes \omega \quad \text{and} \quad
V_2\widetilde{\sigma}V_2^{\dagger}=\tau' \otimes \omega'.
\end{align*}
Further, assuming that the states $\rho^n$ and $\sigma^n$ have very close entropy rates, i.e.
$\frac{1}{n}S(\rho^n) \approx \frac{1}{n}S(\sigma^n)$, one can find states $\tau$ and $\tau'$ of the same dimension, that is, $\tau=\tau'$. Thus, we observe that the two states $\widetilde{\rho}\otimes \omega'$ and $\widetilde{\sigma}\otimes \omega$ have exactly the same spectrum, so there is a unitary acting on the a.m.c. subspace and the ancillary system taking one state to the other. Based on the properties of the a.m.c. subspace, we show that this unitary is an almost-commuting unitary with the charges $A_j^{(n)}$.
\begin{lemma}\label{lemma: timmed state}
Let subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ with projector $P$ be a high probability subspace for state $\rho^n=\rho_1 \otimes \cdots \otimes \rho_n$, i.e. $\Tr(\rho^n P) \geq 1- \epsilon$.
Then, for sufficiently large $n$ there is a subspace $\widetilde{\mathcal{M}} \subseteq \mathcal{M}$ with projector $\widetilde{P}$ and state $\widetilde{\rho}$ with support inside $\widetilde{\mathcal{M}}$ such that the following holds:
\begin{enumerate}
\item $\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}) \geq 1- 2\sqrt{\epsilon}-\frac{1}{O(\alpha)}$.
\item $2^{-\sum_{i=1}^n S(\rho_i)-2\alpha \sqrt{n}}\widetilde{P} \leq \widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} \leq 2^{-\sum_{i=1}^n S(\rho_i)+ \alpha \sqrt{n}}\widetilde{P}$.
\item There is a unitary $U$ such that $U\widetilde{\rho}U^{\dagger}=\tau \otimes \omega $
where $\tau$ is a maximally mixed state of dimension $2^{\sum_{i=1}^n S(\rho_i) - O(\alpha \sqrt{n})}$, and $\omega$ is a state of dimension $2^{O(\alpha \sqrt{n })}$.
\item $\norm{\widetilde{\rho}-\rho^n}_1 \leq 2\sqrt{\epsilon}+\frac{1}{O(\alpha)}+2\sqrt{2\sqrt{\epsilon}+\frac{1}{O(\alpha)}} $.
\end{enumerate}
\end{lemma}
\begin{proof}
1. Let $E\geq 0$ and $F\geq 0$ be two positive operators such that $E+F=P \Pi^n_{\alpha ,\rho^n } P$ where all eigenvalues of $F$ are smaller than $2^{-\alpha \sqrt{n}}$, and define $\widetilde{P}$ to be the projection onto the support of $E$.
In other words, $\widetilde{P}$ is the projection onto the part of the support of $P \Pi^n_{\alpha ,\rho^n } P$ with eigenvalues greater than $2^{-\alpha\sqrt{n}}$. Then, we obtain
\begin{align*}
\Tr&(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P})\\
& \geq \Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} E)\\
&=\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} P\Pi^n_{\alpha,\rho^n}P)-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n P\Pi^n_{\alpha,\rho^n}P)-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n \Pi^n_{\alpha,\rho^n})-\norm{P\rho^n P-\rho^n}_1-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-\Tr(\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}F)\\
&\geq \Tr(\rho^n \Pi^n_{\alpha,\rho^n})-\norm{P\rho^n P-\rho^n}_1-\norm{\Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n}-\rho^n}_1-2^{-\alpha \sqrt{n}}\\
& \geq 1-\frac{\beta}{\alpha^2}-2\sqrt{\epsilon}-2\frac{\sqrt{\beta}}{\alpha}-2^{-\alpha \sqrt{n}},
\end{align*}
where the first line follows from the fact that $\widetilde{P} \geq E$. The third, fourth and fifth lines are due to H\"{o}lder's inequality. The last line follows from Lemma \ref{lemma:typicality properties } and the gentle operator lemma (Lemma \ref{Gentle Operator Lemma}).
\medskip
2. Since the eigenvalues of $\rho^n$ on the typical subspace are bounded (Lemma \ref{lemma:typicality properties }), we obtain
\begin{align*}
\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} &\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha\sqrt{n}}\widetilde{P} \Pi^n_{\alpha ,\rho^n } \widetilde{P} \\
&\leq 2^{-\sum_{i=1}^n S(\rho_i)+\alpha\sqrt{n}}\widetilde{P}.
\end{align*}
For the lower bound notice that
\begin{align*}
\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}& \geq 2^{-\sum_{i=1}^n S(\rho_i)-\alpha\sqrt{n}}\widetilde{P} \Pi^n_{\alpha ,\rho^n } \widetilde{P}\\
&=2^{-\sum_{i=1}^n S(\rho_i)-\alpha\sqrt{n}}\widetilde{P}P \Pi^n_{\alpha ,\rho^n } P \widetilde{P} \\
&\geq 2^{-\sum_{i=1}^n S(\rho_i)-2\alpha\sqrt{n}} \widetilde{P},
\end{align*}
where the equality holds because $\widetilde{\mathcal{M}} \subseteq \mathcal{M}$, hence $\widetilde{P} P=\widetilde{P}$. The last inequality follows because $\widetilde{P}$ is the projection onto the part of the support of $P \Pi^n_{\alpha ,\rho^n } P$ with eigenvalues greater than $2^{-\alpha\sqrt{n}}$.
\medskip
3. Consider the unnormalized state $\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}$ with support inside $\widetilde{\mathcal{M}}$.
From point 2, we know that all the eigenvalues of this state belong to the interval $[2^{-\sum_{i=1}^n S(\rho_i)-2\alpha \sqrt{n}},2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}}]$, which we denote by $[p_{\min},p_{\max}]$. We divide this interval into $b=2^{\floor{5\alpha \sqrt{n}}}$ intervals (bins) of equal length $\Delta p=\frac{p_{\max}-p_{\min}}{b}$. Now, we \textit{trim} the eigenvalues of this unnormalized state in three steps as follows.\\
\begin{enumerate}[(a)]
\item Each eigenvalue belongs to a bin, which is an interval $[p_k,p_{k+1})$ for some $0 \leq k \leq b -1$ with $p_k=p_{\min}+\Delta p \times k$. Thus, eigenvalue $\lambda_l$ is equal to $p_k+q_l$ for some $k$, with $0\leq q_l< \Delta p$. We throw away the $q_l$ part of each eigenvalue $\lambda_l$. The sum of these parts over all eigenvalues is very small:
\begin{align*}
\sum_{l=1}^{|\widetilde{\mathcal{M}}|} q_l \leq \Delta p |\widetilde{\mathcal{M}}| \leq 2^{-2\alpha \sqrt{n}+1},
\end{align*}
where the dimension of the subspace $\widetilde{\mathcal{M}}$ is bounded as $|\widetilde{\mathcal{M}}|\leq 2^{\sum_{i=1}^n S(\rho_i)+2\alpha \sqrt{n}}$ which follows from point 2 of the lemma.
\item We throw away the bins which contain less than $2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}}$ many eigenvalues. The sum of all the eigenvalues that are thrown away is bounded by
\begin{align*}
2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}} \times 2^{5\alpha \sqrt{n}} \times 2^{-\sum_{i=1}^n S(\rho_i)+\alpha \sqrt{n}} \leq 2^{-4\alpha \sqrt{n}},
\end{align*}
where, on the left-hand side, the first factor bounds the number of eigenvalues in each discarded bin, the second is the number of bins, and the third is the maximum eigenvalue.
\item If a bin, say the $k$th bin, is not thrown away in the previous step, it contains $M_k$ eigenvalues, all of the same value after step (a), with
\begin{align}\label{eq:bounds of bin size}
2^{\sum_{i=1}^n S(\rho_i)-10\alpha \sqrt{n}} \leq M_k \leq 2^{\sum_{i=1}^n S(\rho_i)+2\alpha \sqrt{n}}.
\end{align}
Let
\begin{align}\label{eq:L}
L= 2^{\floor{\sum_{i=1}^n S(\rho_i) -10 \alpha\sqrt{n}}}
\end{align}
and for the $k$th bin, let $m_{k}$ be an integer such that
\begin{align}\label{M_k_L}
m_{k} L\leq M_k \leq (m_{k}+1) L.
\end{align}
Then, $m_{k}$ is bounded as follows
\begin{align}\label{m_k}
m_{k} \leq 2^{12\alpha\sqrt{n}}.
\end{align}
From the $k$th bin, we keep $m_{k} L$ of the eigenvalues and throw away the remaining $M_k-m_{k} L \leq L$ of them; the sum of the eigenvalues thrown away in this step is bounded by
\begin{align}
\sum_{k=0}^{b-1} p_{k}(M_k-m_k L) \leq L\sum_{k=0}^{b-1} p_k \leq 2^{-4\alpha\sqrt{n}}. \nonumber
\end{align}
\end{enumerate}
Therefore, for sufficiently large $n$, the sum of the eigenvalues thrown away in
the three steps above is bounded by
\begin{align}\label{eq:thrown away sum}
2^{-2\alpha\sqrt{n}+1}+2^{-4\alpha\sqrt{n}}+2^{-4\alpha\sqrt{n}}\leq 2^{-\alpha\sqrt{n}}.
\end{align}
The kept eigenvalues of all bins form an $L$-fold degenerate unnormalized state of dimension $\sum_{k=0}^{b-1} m_{k} L$, because the degeneracy of each kept eigenvalue is a multiple of $L$. Thus, up to a unitary $U^{\dagger}$, it can be factorized into the tensor product of a maximally mixed state $\tau$ and an unnormalized state $\omega'$ of dimensions $L$ and $\sum_{k=0}^{b-1} m_{k} $, respectively.
From (\ref{m_k}), the dimension of $\omega'$ is bounded by
\begin{align}
\sum_{k=0}^{b-1} m_{k} \leq 2^{12\alpha\sqrt{n}} \times 2^{5 \alpha\sqrt{n}}=2^{17 \alpha\sqrt{n}}. \nonumber
\end{align}
Then, let $\omega =\frac{\omega'}{\Tr(\omega')}$ and define
\begin{align*}
\widetilde{\rho}=U (\tau \otimes \omega) U^{\dagger}.
\end{align*}
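Although the proof is purely analytic, the three trimming steps (a)--(c) are easy to mimic numerically. The following sketch (illustrative only, not part of the proof; the function name and parameters are our own, and NumPy is assumed) floors each eigenvalue to its bin edge, drops sparsely populated bins, and keeps a multiple of $L$ eigenvalues per surviving bin, returning the trimmed spectrum and the total discarded weight:

```python
import numpy as np

def trim_spectrum(eigs, n_bins, L, min_bin_count):
    """Illustrative trimming of a spectrum, following steps (a)-(c):
    floor each eigenvalue to its bin edge, drop sparsely populated bins,
    and keep only a multiple of L eigenvalues per remaining bin.
    Returns the kept (degenerate) eigenvalues and the discarded weight."""
    p_min, p_max = eigs.min(), eigs.max()
    dp = (p_max - p_min) / n_bins
    # (a) floor each eigenvalue to the left edge of its bin
    idx = np.minimum(((eigs - p_min) // dp).astype(int), n_bins - 1)
    floored = p_min + idx * dp
    discarded = float(np.sum(eigs - floored))
    kept = []
    for k in range(n_bins):
        bin_vals = floored[idx == k]
        M_k = len(bin_vals)
        if M_k < min_bin_count:        # (b) drop sparse bins entirely
            discarded += float(bin_vals.sum())
            continue
        m_k = M_k // L                 # (c) keep m_k * L eigenvalues
        keep = m_k * L
        discarded += float(bin_vals[keep:].sum())
        kept.extend(bin_vals[:keep])
    return np.array(kept), discarded
```

On a generic spectrum, the discarded weight stays a small fraction of the total, mirroring the bound in Eq.~(\ref{eq:thrown away sum}).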
\medskip
4. From points 3 and 1, we obtain
\begin{align}\label{eq: Tr of omega'}
\Tr(\omega') &= \Tr(\tau \otimes \omega') \nonumber\\
&\geq \Tr(\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P})-2^{-\alpha\sqrt{n}} \nonumber\\
&\geq 1-2\sqrt{\epsilon}-2\frac{\sqrt{\beta}}{\alpha}-\frac{\beta}{\alpha^2}-2^{-\alpha \sqrt{n}+1}.
\end{align}
Thereby, we get the following
\begin{align*}
\norm{\widetilde{\rho}-\rho^n}_1& \leq \norm{\widetilde{\rho}-U \tau \otimes \omega' U^{\dagger}}_1+\norm{U \tau \otimes \omega' U^{\dagger}-\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}}_1 \\
&\quad \quad \quad+ \norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1 \\
&\leq 1-\Tr(\omega')+\norm{U \tau \otimes \omega' U^{\dagger}-\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P}}_1 \\
&\quad \quad \quad+ \norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1\\
&\leq 1-\Tr(\omega')+2^{-\alpha \sqrt{n}}+\norm{\widetilde{P} \Pi^n_{\alpha,\rho^n}\rho^n \Pi^n_{\alpha,\rho^n} \widetilde{P} -\rho^n}_1\\
&\leq 1-\Tr(\omega')+2^{-\alpha \sqrt{n}} +2\sqrt{2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}}} \\
&= 2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}+1}+2\sqrt{2\sqrt{\epsilon}+2\frac{\sqrt{\beta}}{\alpha}+\frac{\beta}{\alpha^2}+2^{-\alpha \sqrt{n}}},
\end{align*}
where the first line is due to the triangle inequality. The second, third and fourth lines are
due to Eqs. (\ref{eq: Tr of omega'}) and (\ref{eq:thrown away sum}),
and Lemma \ref{Gentle Operator Lemma}, respectively.
\end{proof}
\begin{proof-of}[{of Theorem \ref{Asymptotic equivalence theorem}}]
We first prove the \textit{if} part. If there is an almost-commuting unitary $U$ and an ancillary system with the desired properties stated in the theorem, then we obtain
\begin{align}
\frac{1}{n}\abs{S(\rho^n)-S(\sigma^n)}
&\leq \frac{1}{n}\abs{S(\rho^n \otimes \omega')-S(\sigma^n\otimes \omega)}
+\frac{1}{n}\abs{S(\omega')-S(\omega)} \nonumber \\
&\leq \frac{1}{n}\abs{S(\rho^n\otimes \omega')-S(\sigma^n\otimes \omega)}
+\frac{2}{n}\log 2^{o(n)} \nonumber \\
&= \frac{1}{n}\abs{S(U(\rho^n\otimes \omega' )U^{\dagger})-S(\sigma^n\otimes \omega)}
+o(1) \nonumber \\
&\leq \frac{1}{n} o(1) \log (d^{n} \times 2^{ o(n)})
+\frac{1}{n}h\left(o(1)\right)+o(1)\nonumber \\
&= o(1) \nonumber,
\end{align}
where the first line follows from the additivity of the von Neumann entropy and the triangle inequality. The second line is due to the fact that the von Neumann entropy of a state is upper bounded by the logarithm of its dimension.
The penultimate line follows from continuity of von Neumann entropy \cite{Fannes1973,Audenaert2007} where $h(x) = -x \log x - (1 - x) \log(1 - x)$ is the binary entropy function.
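For concreteness, we recall the form of the continuity bound being used (the sharpening of Fannes' inequality due to Audenaert \cite{Audenaert2007}): for any states $\rho$ and $\sigma$ on a $D$-dimensional Hilbert space, with $T=\frac{1}{2}\norm{\rho-\sigma}_1$,
\begin{align*}
\abs{S(\rho)-S(\sigma)} \leq T \log (D-1) + h(T).
\end{align*}
Applied with $D=d^{n} \times 2^{o(n)}$ and $T=o(1)$, it yields the penultimate line above.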
Moreover, we obtain
\begin{align}\label{approximate average charge conservation}
\frac{1}{n}&\abs{\Tr(\rho^n A_j^{(n)})-\Tr(\sigma^n A_j^{(n)})} \nonumber\\
&=\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' (A_j^{(n)}+A_j')\right)- \Tr\left( \sigma^n\otimes \omega (A_j^{(n)}+A_j')\right) } \nonumber \\
&\leq\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' (A_j^{(n)}+A_j')\right)- \Tr\left( U\rho^n\otimes \omega'U^{\dagger} (A_j^{(n)}+A_j')\right) }\nonumber\\
&\quad \quad +\frac{1}{n}\abs{\Tr\left( U\rho^n\otimes \omega'U^{\dagger} (A_j^{(n)}+A_j')\right)- \Tr\left( \sigma^n\otimes \omega (A_j^{(n)}+A_j')\right) }\nonumber\\
&=\frac{1}{n}\abs{\Tr\left( \rho^n \otimes \omega' \left(A_j^{(n)}+A_j' -U^{\dagger}(A_j^{(n)}+A_j')U\right)\right) }\nonumber\\
&\quad \quad \quad +\frac{1}{n}\abs{\Tr\left( \left(U\rho^n\otimes \omega'U^{\dagger} - \sigma^n\otimes \omega \right) (A_j^{(n)}+A_j')\right)} \nonumber\\
&\leq \frac{1}{n} \Tr(\rho^n \otimes \omega') \norm{U(A_j^{(n)}+A_j')U^{\dagger} - (A_j^{(n)}+A_j')}_{\infty}\nonumber\\
&\quad \quad \quad + \frac{1}{n} \norm{U\rho^n\otimes \omega'U^{\dagger} - \sigma^n\otimes \omega}_1 \norm{A_j^{(n)}+A_j'}_{\infty} \nonumber\\
& = o(1),
\end{align}
where the second line follows because $A_j'=0$ for all $j$. The third and fifth lines are due to
the triangle inequality and H\"{o}lder's inequality, respectively.
\medskip
Now, we prove the \textit{only if} part. Assume that for the states $\rho^n$ and $\sigma^n$ the following holds:
\begin{align*}
& \frac{1}{n} \abs{S(\rho^n)-S(\sigma^n)} \leq \gamma_n \\
& \frac{1}{n} \abs{ \Tr(A_j^{(n)}\rho^n)-\Tr(A_j^{(n)}\sigma^n)}\leq \gamma'_n \quad \quad j=1,\ldots,c,
\end{align*}
for vanishing $\gamma_n $ and $\gamma'_n $.
According to Theorem \ref{thm:symmetric-micro-exists}, for charges $A_j$, values $ v_j=\frac{1}{n} \Tr(\rho^n A_j^{(n)})$, $\eta'>0$ and any $n>0$, there is an a.m.c. subspace $\mathcal{M}$ of $\mathcal{H}^{\otimes n}$ with projector $P$ and the following parameters:
\begin{align*}
&\eta=2 \eta',\\
&\delta'=\frac{c+3}{2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}}, \\
&\delta=(c+3)(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}},\\
&\epsilon=2(c+3)(n+1)^{3d^2}(5n)^{2d^2} e^{-\frac{n \eta'^2}{8c^2(d
+1)^2}}.
\end{align*}
Choose $\eta'$ as the following such that $\delta$, $\delta'$ and $\epsilon$ vanish for large $n$:
\begin{align*}
\eta'=\left\{
\begin{array}{ll}
\frac{\sqrt{8}c(d+1) }{n^{\frac{1}{4}} \Sigma(A)_{\min}} \quad &\text{if} \quad \gamma'_n \leq \frac{1}{n^{\frac{1}{4}}}\\
\frac{\sqrt{8}c(d+1)\gamma'_n}{ \Sigma(A)_{\min}} \quad &\text{if} \quad \gamma'_n > \frac{1}{n^{\frac{1}{4}}}
\end{array}
\right.
\end{align*}
where $\Sigma(A)_{\min}$ is the minimum among the spectral diameters $\Sigma(A_j)$ of the
charges. Since $\frac{1}{n} \Tr(\rho^n A_j^{(n)})= v_j$ and $\abs{\frac{1}{n}\Tr(\sigma^n A_j^{(n)})- v_j} \leq \frac{1}{2}\eta' \Sigma (A_j)$, Corollary~\ref{corollary: a.m.c. projection} implies that the states
$\rho^n$ and $\sigma^n$ project onto this a.m.c. subspace with probability at least $1-\epsilon$:
\begin{align*}
&\Tr(\rho^n P)\geq 1-\epsilon,\\
&\Tr(\sigma^n P)\geq 1-\epsilon.
\end{align*}
Moreover, consider the typical projectors $\Pi^n_{\alpha ,\rho^n }$ and $\Pi^n_{\alpha ,\sigma^n }$ of states $\rho^n$ and $\sigma^n$, respectively, with $\alpha=n^{\frac{1}{3}}$. Then points 3 and 4 of Lemma~\ref{lemma: timmed state} imply that there are states $\widetilde{\rho}$ and $\widetilde{\sigma}$ with support inside the a.m.c. subspace $\mathcal{M}$ and unitaries $V_1$ and $V_2$ such that
\begin{align}\label{eq:4 formulas}
&\norm{\widetilde{\rho}-\rho^n}_1 \leq o(1) , \nonumber\\
&\norm{\widetilde{\sigma}-\sigma^n}_1 \leq o(1), \nonumber\\
&V_1\widetilde{\rho}V_1^{\dagger}=\tau \otimes \omega, \nonumber\\
&V_2\widetilde{\sigma}V_2^{\dagger}=\tau' \otimes \omega',
\end{align}
where $\tau$ and $\tau'$ are maximally mixed states; since $\abs{S(\rho^n)-S(\sigma^n)} \leq n \gamma_n$, one may choose their common dimension in Eq.~(\ref{eq:L}) to be $L=2^{\floor{\sum_{i=1}^n S(\rho_i)-10 z}}$ with $z=\max \{\alpha \sqrt{n}, n \gamma_n\}$; hence we obtain $\tau=\tau'$. Then, $\omega$ and $\omega'$ are states with support inside a Hilbert space $\mathcal{K}$ of dimension $2^{O(z)}=2^{o(n)}$.
Then, it is immediate to see that the states $\widetilde{\rho} \otimes \omega'$ and $\widetilde{\sigma} \otimes \omega$ on Hilbert space $\mathcal{M}_t=\mathcal{M} \otimes \mathcal{K}$ have exactly the same spectrum; thus, there is a unitary $\widetilde{U}$ on subspace $\mathcal{M}_t$ such that
\begin{align}\label{eq:exact spectrum states}
\widetilde{U} (\widetilde{\rho} \otimes \omega') \widetilde{U}^{\dagger} =\widetilde{\sigma} \otimes \omega.
\end{align}
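The spectral argument here has a simple finite-dimensional analogue. The sketch below (illustrative, not part of the proof; NumPy assumed) builds two density matrices with identical spectra and constructs the unitary relating them by pairing up their eigenbases:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# A random density matrix (Wishart-type construction).
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = g @ g.conj().T
rho /= np.trace(rho).real

# sigma: exactly the same spectrum, but a rotated eigenbasis.
evals, vecs_rho = np.linalg.eigh(rho)
haar, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
sigma = haar @ np.diag(evals) @ haar.conj().T

# The unitary taking rho to sigma pairs eigenvectors of equal eigenvalues
# (eigh returns eigenvalues in ascending order for both states).
_, vecs_sigma = np.linalg.eigh(sigma)
U = vecs_sigma @ vecs_rho.conj().T
```

Then $U\rho U^{\dagger}=\sigma$ up to numerical precision; the proof applies the same idea on the subspace $\mathcal{M}_t$ and then extends the unitary by the identity.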
We extend the unitary $\widetilde{U}$ to $U=\widetilde{U} \oplus \openone_{\mathcal{M}_t^{\perp}}$
acting on $\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ and obtain
\begin{align*}
&\norm{U \rho^n \otimes \omega' U^{\dagger} - \sigma^n \otimes \omega}_1 \\
&\quad \quad \leq
\norm{U \rho^n \otimes \omega' U^{\dagger} - U \widetilde{\rho} \otimes \omega' U^{\dagger}}_1 +\norm{ \sigma^n \otimes \omega-\widetilde{\sigma}\otimes \omega}_1
+\norm{ U \widetilde{\rho} \otimes \omega' U^{\dagger}-\widetilde{\sigma}\otimes \omega}_1 \\
& \quad \quad =\norm{U \rho^n \otimes \omega' U^{\dagger} - U \widetilde{\rho} \otimes \omega' U^{\dagger}}_1 +\norm{ \sigma^n \otimes \omega-\widetilde{\sigma}\otimes \omega}_1 \\
& \quad \quad \leq o(1),
\end{align*}
where the second and last lines are due to Eqs. (\ref{eq:exact spectrum states}) and (\ref{eq:4 formulas}), respectively.
As mentioned before, $\mathcal{M}_t=\mathcal{M} \otimes \mathcal{K}$ is a subspace of $\mathcal{H}^{\otimes n} \otimes \mathcal{K}$ with projector $P_t=P \otimes \openone_{\mathcal{K}}$, where $P$ is the corresponding projector of the a.m.c. subspace. We define the total charges $A_j^t=A_j^{(n)}+A_j'$ with $A_j'=0$ for all $j$, and show that every unitary of the form $U=U_{\mathcal{M}_t} \oplus \openone_{\mathcal{M}_t^{\perp}}$ asymptotically commutes with all total charges:
\begin{align*}
\norm{U A_j^t U^{\dagger}-A_j^t}_{\infty} &= \norm{(P_t+P_t^{\perp})(U A_j^t U^{\dagger}-A_j^t)(P_t+P_t^{\perp})}_{\infty} \\
& \leq \norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}+\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty} \\
& \quad +\norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t^{\perp}}_{\infty}+\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t^{\perp}}_{\infty}\\
& = \norm{P_t(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}+2\norm{P_t^{\perp}(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty} \\
& \leq 3 \norm{(U A_j^t U^{\dagger}-A_j^t)P_t}_{\infty}\\
& = 3 \norm{(U A_j^t U^{\dagger} -n v_j \openone+ nv_j \openone-A_j^t)P_t}_{\infty}\\
& \leq 3 \norm{(U A_j^t U^{\dagger} -n v_j \openone)P_t}_{\infty}+3 \norm{(A_j^t- nv_j \openone)P_t}_{\infty}\\
& = 6 \norm{(A_j^t- nv_j \openone)P_t}_{\infty}\\
& = 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone)\ket{v}}_2\\
& = 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{ (A_j^t-n v_j \openone)(\Pi_j^{\eta} \otimes \openone_{\mathcal{K}} +\openone-\Pi_j^{\eta} \otimes \openone_{\mathcal{K}}) \ket{v}
}_2 \\
& \leq 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) \Pi_j^{\eta} \otimes \openone \ket{v}}_2 \\
& \quad \quad \quad+6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j \openone) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \leq 6n \Sigma (A_j) \eta +6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j I) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2,
\end{align*}
where the first line is due to the fact that $P_t+P_t^{\perp}=\openone_{\mathcal{H}^{\ox n}} \ox \openone_{\mathcal{K}}$. The fourth line follows because $U A_j^t U^{\dagger}-A_j^t$ is a Hermitian operator that vanishes on the subspace $P_t^{\perp}$.
The fifth line is due to Lemma~\ref{lemma:norm inequality}. The twelfth line is due to the definition of the a.m.c. subspace. Now, we bound the second term above:
\begin{align*}
&6 \max_{\ket{v} \in \mathcal{M}_t} \norm{(A_j^t-n v_j I) (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \quad \quad \quad \leq 6 \max_{\ket{v} \in \mathcal{M}_t} \norm{A_j^t-n v_j \openone }_{\infty} \norm{ (\openone-\Pi_j^{\eta} \otimes \openone) \ket{v}}_2 \\
& \quad \quad \quad = 6 \norm{A_j^t-n v_j \openone}_{\infty} \max_{\ket{v} \in \mathcal{M}_t} \sqrt{ \Tr ((\openone-\Pi_j^{\eta} \otimes \openone)\ketbra{v}{v})}\\
& \quad \quad \quad = 6 n\norm{A_j- v_j \openone}_{\infty} \max_{v \in \mathcal{M}} \sqrt{ \Tr ((\openone-\Pi_j^{\eta} )v)}\\
&\quad \quad \quad \leq 6 n\norm{A_j- v_j \openone}_{\infty} \sqrt{ \delta},
\end{align*}
where the first line is due to Lemma~\ref{lemma:norm inequality},
and the last line is by definition of the a.m.c. subspace.
Thus, for vanishing $\delta$ and $\eta$ we obtain:
\begin{align*}
\frac{1}{n}\norm{U A_j^t U^{\dagger}-A_j^t}_{\infty} \leq o(1),
\end{align*}
concluding the proof.
\end{proof-of}
\section{Discussion}
\label{sec:discussion}
We have considered an asymptotic resource theory whose objects are states with
tensor product structure, and whose allowed operations are thermodynamically meaningful,
namely operations which asymptotically preserve the entropy and charges of a system.
The allowed operations partition the objects into classes of asymptotically equivalent
objects that are interconvertible under allowed operations. The basic result on which
our theory is built is that two objects are interconvertible via allowed operations
if and only if they have the same average entropy and average charge values in the
asymptotic limit.
The existence of the allowed operations between the objects of the same class is based on two pillars:
First, for objects with the same average entropy there are states with sublinear dimension which
can be coupled to the objects to make their spectrum asymptotically identical.
Second, objects with the same average charge values project onto a common subspace of the
charges of the system which has the property that any unitary acting on this subspace is an almost-commuting unitary with the corresponding charges. Therefore, the spectrum of the objects of the same class can be modified using small ancillary systems and then they are interconvertible via unitaries that asymptotically preserve the charges of the system.
The notion of a common subspace for different charges, which are Hermitian operators,
was introduced in \cite{Halpern2016} as the approximate microcanonical (a.m.c.) subspace.
In this chapter, for given charges and parameters, we show the existence of an a.m.c. subspace which is by construction permutation-symmetric, a property not guaranteed by the construction in \cite{Halpern2016}.
\chapter{Unification of the blind and visible Schumacher compression}
\label{chap: E assisted Schumacher}
In this chapter, we ask how the quantum compression of ensembles of pure states is
affected by the availability of entanglement, and in settings where
the encoder has access to side information.
We find the optimal asymptotic quantum rate and the optimal tradeoff
(rate region) of quantum and entanglement rates.
It turns out that the amount by which the quantum rate beats
the Schumacher limit, the entropy of the source, is precisely half
the entropy of classical information that can be extracted from the
source and side information states without disturbing them at all
(``reversible extraction of classical information'').
In the special case that the encoder has no side information, or that
she has access to the identity of the states, this problem reduces to the known
settings of \textit{blind} and \textit{visible} Schumacher compression, respectively,
albeit here additionally with entanglement assistance.
We comment on connections to previously studied and further rate
tradeoffs when also classical information is considered.
This chapter is based on the papers in \cite{Schumacher_Assisted_arXiv_Z_2019,ZK_Eassisted_ISIT_2019}.
\section{The source model}
The task of data compression of a quantum source, introduced by Schumacher
\cite{Schumacher1995}, marks one of the foundations of quantum
information theory: not only did it provide an information theoretic
interpretation of the von Neumann entropy $S(\rho) = -\Tr\rho\log\rho$
as the minimum compression rate, it also motivated the very concept of the qubit!
In the Schumacher modelling, a source is given by an ensemble
${\mathcal{E}} = \{ p(x), \proj{\psi_x} \}$ of pure states $\psi_x=\proj{\psi_x}\in{\mathcal{S}}(A)$,
$\ket{\psi_x}\in A$, with a Hilbert space $A$ of
finite dimension $|A|<\infty$; ${\mathcal{S}}(A)$ denotes the set of states (density operators).
Furthermore, $x\in{\mathcal{X}}$ ranges over a
discrete alphabet, so that we can describe the source equivalently by
the classical-quantum (cq) state $\omega = \sum_x p(x) \proj{x}^X \otimes \proj{\psi_x}^A$.
While the achievability of the rate $S(A)_\omega = S(\omega^A)$ was
shown in \cite{Schumacher1995,Jozsa1994_1} (see also \cite[Thm.~1.18]{OhyaPetz:entropy}),
the full (weak) converse was established in \cite{Barnum1996}, a simplified
proof being given by M. Horodecki \cite{Horodecki1998}; the strong
converse was proved in \cite{Winter1999}.
\medskip
In this chapter, we consider a more comprehensive model, where on the one hand
the sender/encoder of the compressed data (Alice) has access to side
information, namely a pure state $\sigma_x^C$ in addition to the source state
$\psi_x^A$, and on the other hand, she and the receiver/decoder of the compressed
data (Bob) share pure state entanglement in the form of EPR pairs at a
certain rate.
Thus, the source is now an ensemble ${\mathcal{E}} = \{ p(x), \proj{\psi_x}^A\otimes\proj{\sigma_x}^C \}$
of product states, which can be described equivalently by the cqq-state
\begin{align}
\label{eq:source state omega}
\omega^{XAC}=\sum_{x\in \mathcal{X}} p(x) \proj{x}^X\otimes \proj{\psi_x}^A\otimes \proj{\sigma_x}^C.
\end{align}
Yet another equivalent description is via the
random variable $X \in {\mathcal{X}}$, distributed according to $p$, i.e. $\Pr\{X=x\}=p(x)$;
this also makes the pure states $\psi_X$ and $\sigma_X$ random variables.
We will consider the information theoretic limit of
many copies of $\omega$, i.e.~$\omega^{X^n A^n C^n} = \left(\omega^{XAC}\right)^{\otimes n}$:
\[
\omega^{X^n A^n C^n}
\!\!\! = \!\!\!\!
\sum_{x^n \in \mathcal{X}^n}\!\!\!\! p(x^n) \proj{x^n}^{X^n}
\!\otimes\! \proj{\psi_{x^n}}^{A^n}
\!\otimes\! \proj{\sigma_{x^n}}^{C^n}\!\!\!\!\!,
\]
using the notation
\begin{align*}
x^n &= x_1 x_2 \ldots x_n,\quad\;
p(x^n) = p(x_1) p(x_2) \cdots p(x_n), \\
\ket{x^n} &= \ket{x_1} \ket{x_2} \cdots \ket{x_n},\
\ket{\psi_{x^n}} = \ket{\psi_{x_1}} \ket{\psi_{x_2}} \cdots \ket{\psi_{x_n}}.
\end{align*}
\section{Compression assisted by entanglement}
\label{sec:Compression assisted by entanglement}
We assume that the encoder, Alice, and the decoder, Bob, have initially a maximally
entangled state $\Phi_K^{A_0B_0}$ on registers $A_0$ and $B_0$ (both of dimension $K$).
With probability $p(x^n)$, the source provides Alice
with the state $\psi_{x^n}^{A^n}\otimes\sigma_{x^n}^{C^n}$.
Then, Alice performs her encoding operation
$\mathcal{C}:A^nC^nA_0 \longrightarrow \hat{C}^nC_A$ on the systems $A^n$,
$C^n$ and her part $A_0$ of the entanglement, which is a quantum channel,
i.e.~a completely positive and trace preserving (CPTP) map.
(Note that our notation is a slight abuse, which we maintain as it is simpler
while it cannot lead to confusions, since channels really are maps between
the trace class operators on the involved Hilbert spaces.)
The dimension of the compressed system obviously has to be at most that of the
original source, i.e. $|C_A| \leq \abs{A}^n$.
We call $Q=\frac1n \log|C_A|$ and $E=\frac{1}{n}\log K$ the quantum and entanglement
rates of the compression protocol, respectively.
The system $C_A$ is then sent to Bob via a noiseless quantum channel, who performs
a decoding operation $\mathcal{D}:C_A B_0 \longrightarrow \hat{A}^n$ on the system
$C_A$ and his part of entanglement $B_0$.
According to Stinespring's theorem \cite{Stinespring1955}, all these CPTP maps can be
dilated to isometries
$V_A : A^nC^nA_0 \hookrightarrow \hat{C}^n C_A W_A$ and
$V_B : C_A B_0 \hookrightarrow {\hat{A}^n W_B}$,
where the new systems $W_A$ and $W_B$ are the environment systems
of Alice and Bob, respectively.
We say the encoding-decoding scheme has fidelity $1-\epsilon$, or error $\epsilon$, if
\begin{align}
\label{eq:Schumcaher assisted fidelity}
\overline{F} &:= F\left( \omega^{X^n\hat{A}^n\hat{C}^n},\xi^{X^n\hat{A}^n\hat{C}^n} \right) \nonumber\\
&=\sum_{x^n \in \mathcal{X}^n}\!\!\! p(x^n)F\!\left(\proj{\psi_{x^n}}^{A^n}\!
\otimes\!\proj{\sigma_{x^n}}^{C^n}\!,\xi_{x^n}^{\hat{A}^n\hat{C}^n}\right) \\
&\geq 1-\epsilon, \nonumber
\end{align}
where $\xi^{X^n\hat{A}^n\hat{C}^n}=\sum_{x^n}p(x^n)\proj{x^n}^{X^n} \otimes \xi_{x^n}^{\hat{A}^n\hat{C}^n}$
and
$\xi_{x^n}^{\hat{A}^n\hat{C}^n}=(\mathcal{D}\circ\mathcal{C})\!\proj{\psi_{x^n}\!}^{A^n} \!\otimes\! \proj{\sigma_{x^n}\!}^{C^n} \!\otimes\!\Phi_K^{A_0\!B_0}\!\!$.
We say that $(E,Q)$ is an (asymptotically) achievable rate pair if for all $n$
there exist codes such that the fidelity converges to $1$, and
the entanglement and quantum rates converge to $E$ and $Q$, respectively.
The rate region is the set of all achievable rate pairs, as a subset of
$\mathbb{R}\times\mathbb{R}_{\geq 0}$.
Note that this means that we demand not only that Bob can reconstruct the
source states $\psi_{x^n}$ with high fidelity on average, but that Alice
retains the side information states $\sigma_{x^n}$ as well with high fidelity.
There are two extreme cases of the side information that have been considered
in the literature:
If $C$ is a trivial system, or more generally if the states
$\sigma_x^C$ are all identical, then the aforementioned task is the
entanglement-assisted version of \textit{blind} Schumacher compression. If $C=X$, or more
precisely $\ket{\sigma_x}=\ket{x}$, then Alice has access to the classical random variable
$X$, and the task reduces to \textit{visible} Schumacher compression with entanglement assistance.
The blind-visible terminology is originally from \cite{Barnum1996,Horodecki2000}.
\begin{remark}
\label{remark:E=0}
In the case of no entanglement being available, i.e. $E=0$ ($K=1$), the
problem is fully understood: The asymptotic rate $Q=S(A)$
from \cite{Schumacher1995,Jozsa1994_1} is achievable without touching
the side information, and it is optimal, even in the visible case
(which includes all other forms of side information), by the weak and strong
converses of \cite{Barnum1996,Horodecki1998} and \cite{Winter1999}.
\qed
\end{remark}
\section{Optimal quantum rate}
To formulate the minimum compression rate under unlimited entanglement
assistance, we need the following concept.
\begin{definition}
\label{def:reducibility}
An ensemble of pure states
${\mathcal{E}}=\{p(x),\proj{\psi_x}^A \otimes \proj{\sigma_x}^C \}_{x\in \mathcal{X}}$
is called \emph{reducible} if its states belong to two or more orthogonal subspaces.
Otherwise the ensemble ${\mathcal{E}}$ is called \emph{irreducible}.
%
We apply the same terminology to the source cqq-state $\omega^{X A C}$.
\end{definition}
Notice that a reducible ensemble can be written uniquely as a disjoint union
of irreducible ensembles $\mathcal{E} = \bigcupdot_{y \in \mathcal{Y}} q(y) \mathcal{E}_y$,
with a partition $\mathcal{X} = \bigcupdot_{y\in \mathcal{Y}} \mathcal{X}_y$ and
irreducible ensembles
\begin{align*}
\mathcal{E}_y = \{ p(x|y), \proj{\psi_x}^A \otimes \proj{\sigma_x}^C \}_{x \in \mathcal{X}_y},
\end{align*}
where $q(y)p(x|y)=p(x)$ for $x \in \mathcal{X}_y$ and
$q(y) =\sum_{x \in \mathcal{X}_y} p(x)$.
We define the subspace spanned by the vectors of each irreducible ensemble as
$F_y := \text{span} \{\ket{\psi_x}\otimes\ket{\sigma_x} : x \in \mathcal{X}_y\}$.
The irreducible ensembles $\mathcal{E}_y$ are pairwise orthogonal, i.e.~$F_{y'} \perp F_y$
for all $y' \neq y$.
We may thus introduce the random variable $Y=Y(X)$ taking values in the set
$\mathcal{Y}$ with probability distribution $q(y)$; namely, $Y$ is a deterministic
function of $X$ such that $\Pr\{X\in\mathcal{X}_Y\}=1$.
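The decomposition into irreducible components can also be phrased algorithmically: two indices lie in the same component exactly when they are linked by a chain of pairwise non-orthogonal state vectors. A small sketch (illustrative only; the function name and tolerance are our own, NumPy assumed):

```python
import numpy as np

def irreducible_components(vectors, tol=1e-12):
    """Partition ensemble indices into irreducible components: x and x'
    are linked when their state vectors are non-orthogonal, and components
    are the connected classes of this relation."""
    n = len(vectors)
    overlap = np.abs(np.array([[np.vdot(v, w) for w in vectors]
                               for v in vectors])) > tol
    # Union-find over the non-orthogonality graph.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if overlap[i, j]:
                parent[find(i)] = find(j)
    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(i)
    return sorted(classes.values())
```

For the vectors $\ket{0}$, $(\ket{0}+\ket{1})/\sqrt{2}$, $\ket{2}$ this returns the partition $\{\{0,1\},\{2\}\}$, matching Definition~\ref{def:reducibility}.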
We define the \textit{modified} source as
\begin{align*}
\omega^{XACY}=\sum_x p(x) \proj{x}^X\otimes \proj{\psi_x}^A \otimes \proj{\sigma_x}^C \otimes \proj{y(x)}^Y,
\end{align*}
with side information systems $CY$.
Because there is an isometry $V:AC \rightarrow ACY$ which acts as
\begin{equation}
\label{eq:iso}
V\ket{\psi_x}^A \otimes \ket{\sigma_x}^C=\ket{\psi_x}^A \otimes \ket{\sigma_x}^C \otimes \ket{y(x)}^Y,
\end{equation}
the extended source $\omega^{XACY}$ is equivalent to the original
source and side information $\omega^{XAC}$ modulo a local operation of Alice.
We first present the optimal asymptotic compression rate in the following
theorem and prove its achievability, but we defer
the converse proof to the end of this section, as it requires introducing
further machinery.
\begin{theorem}
\label{theorem: main}
For the given source $\omega^{XACY}$, the optimal asymptotic compression rate
assisted by unlimited entanglement is
$Q=\frac12 (S(A)+S(A|CY))$.
Furthermore, there is a protocol achieving this communication
rate with entanglement consumption at rate $E=\frac12 (S(A)-S(A|CY))$.
\end{theorem}
\begin{proof}
We first show that this rate is achievable.
Consider the following purification of $\omega^{XACY}$,
\begin{align*}
\ket{\omega}^{XX'ACY}=\sum_x \sqrt{p(x)}\ket{x}^X\ket{x}^{X'}\ket{\psi_x}^A\ket{\sigma_x}^C\ket{y(x)}^Y,
\end{align*}
with side information systems $CY$. This is obtained from
\begin{align*}
\ket{\omega}^{XX'AC}=\sum_x \sqrt{p(x)}\ket{x}^X\ket{x}^{X'}\ket{\psi_x}^A\ket{\sigma_x}^C,
\end{align*}
by Alice applying the isometry $V$ from Eq.~(\ref{eq:iso}).
We apply quantum state redistribution (QSR) \cite{Devetak2008_2,Oppenheim2008}
as a subprotocol, where the objective is for Alice to send to Bob $A^n$,
using $C^nY^n$ as side information, while $(XX')^n$ serves as reference system;
the figure of merit is the fidelity with the original pure state $(\omega^{XX'ACY})^{\otimes n}$.
Denoting the overall encoding-decoding CPTP map
$\Lambda:A^nC^nY^n \rightarrow \hat{A}^n\hat{C}^n\hat{Y}^n$,
QSR gives us the first inequality of the following chain:
\begin{align*}
1-o(1) &\leq F\!\left( \omega^{X^nX'^nA^nC^nY^n}\!\!,
({\operatorname{id}}_{X^nX'^n} \otimes \Lambda) \omega^{X^nX'^nA^nC^nY^n}\! \right) \\
&\leq F\!\left( \omega^{X^nA^nC^nY^n}\!\!,
({\operatorname{id}}_{X^n} \otimes \Lambda) \omega^{X^nA^nC^nY^n}\! \right),
\end{align*}
where the second inequality follows from monotonicity of the fidelity under partial trace.
Thus, the protocol satisfies our fidelity criterion (\ref{eq:Schumcaher assisted fidelity}).
The communication rate we obtain from QSR is
$Q = \frac{1}{2}I(A:XX') = \frac{1}{2} (S(A)+S(A|CY))$.
Furthermore, QSR guarantees entanglement consumption at the rate
$E = \frac12 I(A:CY) = \frac12 (S(A)-S(A|CY))$.
\end{proof}
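As a stand-alone numerical illustration of these rate formulas (not part of the proof), consider a source where the side information perfectly identifies $x$, so that $S(A|CY)=0$ and hence $Q=E=\frac12 S(A)$; a minimal Python sketch, with an ensemble chosen purely for illustration (since $C$ already determines $Y$ here, $S(A|CY)=S(A|C)$):

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Illustrative source: x in {0,1} with p(x) = 1/2; |psi_0> = |0>, |psi_1> = |+> on A,
# side information |sigma_x> = |x> on C, so C identifies x (visible-type case).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
ensemble = [(0.5, ket0, ket0), (0.5, plus, ket1)]

rho_A  = sum(p * np.outer(a, a) for p, a, c in ensemble)
rho_C  = sum(p * np.outer(c, c) for p, a, c in ensemble)
rho_AC = sum(p * np.outer(np.kron(a, c), np.kron(a, c)) for p, a, c in ensemble)

S_A = vn_entropy(rho_A)
S_A_given_C = vn_entropy(rho_AC) - vn_entropy(rho_C)  # here equals S(A|CY)
Q = 0.5 * (S_A + S_A_given_C)   # optimal quantum rate of Theorem 1
E = 0.5 * (S_A - S_A_given_C)   # entanglement consumption
```

Here $S(AC)=S(C)=1$ bit, so the conditional entropy vanishes and $Q=E=\frac12 S(A)\approx 0.30$; note also $Q+E=S(A)$ by construction.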
To prove optimality (the converse), we first need a few preparations.
The following definition is inspired by the ``reversible extraction of classical
information'' in \cite{Barnum2001_2}.
\begin{definition}
\label{def:I_epsilon}
For a source $\omega^{XAC}$ and $\epsilon \geq 0$, define
\begin{align*}
I_\epsilon(\omega) :=
\max_{V:AC\rightarrow \hat{A}\hat{C}W \text{ isometry}} I(X:\hat{C}W)_\xi
\quad \text{s.t. } F(\omega^{XAC},\xi^{X\hat{A}\hat{C}}) \geq 1-\epsilon,
\end{align*}
where
\[
\xi^{X\hat{A}\hat{C}W}
=(\openone_X \otimes V)\, \omega^{XAC} (\openone_X \otimes V^{\dagger})
=\sum_x p(x) \proj{x}^X \otimes \proj{\xi_x}^{\hat{A}\hat{C}W}.
\]
\end{definition}
In this definition, the dimension of the environment is w.l.o.g. bounded as $|W| \leq |A|^2|C|^2$;
hence, the optimisation is of a continuous function over a compact domain, so we have a
maximum rather than a supremum.
\begin{lemma}
\label{lemma:I_epsilon properties}
The function $I_\epsilon(\omega)$ has the following properties:
\begin{enumerate}
\item It is a non-decreasing function of $\epsilon$.
\item It is concave in $\epsilon$.
\item It is continuous for $\epsilon \geq 0$.
\item For any two states $\omega_1^{X_1 A_1 C_1}$ and $\omega_2^{X_2 A_2 C_2}$ and for $\epsilon \geq 0$,
\(
I_{\epsilon}(\omega_1 \otimes \omega_2) \leq I_{\epsilon}(\omega_1) +I_{\epsilon}(\omega_2).
\)
\item For any state $\omega^{XAC}$, $I_0(\omega) \leq S(CY)$.
\end{enumerate}
\end{lemma}
\begin{proof}
1. The definition of $I_\epsilon(\omega)$ directly implies that it
is a non-decreasing function of $\epsilon$.
2. To prove the concavity, let $V_1:AC \rightarrow \hat{A}\hat{C}W$ and
$V_2:AC \rightarrow \hat{A}\hat{C}W$ be the isometries attaining the
maximum for $\epsilon_1$ and $\epsilon_2$, respectively, which act as
follows:
\[
V_1 \ket{\psi_x}^A\ket{\sigma_x}^C=\ket{\xi_x}^{\hat{A}\hat{C}W} \text{ and }
V_2 \ket{\psi_x}^A\ket{\sigma_x}^C=\ket{\zeta_x}^{\hat{A}\hat{C}W}.
\]
For $0\leq \lambda \leq 1$,
define the isometry $U:AC \rightarrow \hat{A}\hat{C}WRR'$ by letting, for all $x$,
\[
U\ket{\psi_x}^A\ket{\sigma_x}^C
:=\sqrt{\lambda}\,\ket{\xi_x}^{\hat{A}\hat{C}W}\ket{00}^{RR'}
+\sqrt{1-\lambda}\,\ket{\zeta_x}^{\hat{A}\hat{C}W}\ket{11}^{RR'},
\]
where systems $R$ and $R'$ are qubits.
Then, the reduced state on the systems $X\hat{A}\hat{C}$ is
$\tau^{X\hat{A}\hat{C}}=\sum_x p(x) \proj{x}^X\otimes \tau_x^{\hat{A}\hat{C}}$,
where $\tau_x^{\hat{A}\hat{C}}=\lambda \xi_x^{\hat{A}\hat{C}}+(1-\lambda)\zeta_x^{\hat{A}\hat{C}}$;
therefore, the fidelity is bounded as follows:
\begin{align*}
F(\omega^{XAC},\tau^{X\hat{A}\hat{C}})
&= \sum_x p(x) \sqrt{\bra{\psi_x}
\left(\lambda \xi_x^{\hat{A}\hat{C}}+(1-\lambda)\zeta_x^{\hat{A}\hat{C}}\right)
\ket{\psi_x}} \\
&\geq \lambda \sum_x p(x) \sqrt{\bra{\psi_x} \xi_x^{\hat{A}\hat{C}} \ket{\psi_x}}
+ (1-\lambda)\sum_x p(x) \sqrt{\bra{\psi_x}\zeta_x^{\hat{A}\hat{C}} \ket{\psi_x}} \\
&\geq 1-\left( \lambda\epsilon_1 +(1-\lambda)\epsilon_2 \right),
\end{align*}
where the second line follows from the concavity of the function $\sqrt{x}$,
and the last line follows by the definition of the isometries $V_1$ and $V_2$.
Now, define $W':=WRR'$ and let $\epsilon=\lambda\epsilon_1 +(1-\lambda)\epsilon_2$.
According to Definition \ref{def:I_epsilon}, we obtain
\begin{align*}
I_\epsilon(\omega) &\geq I(X:\hat{C}W')_{\tau}\\
&= I(X:R)_{\tau}+I(X:\hat{C}W|R)_{\tau}+I(X:R'|\hat{C}WR)_{\tau}\\
&\geq I(X:\hat{C}W|R)_{\tau}
= \lambda I_{\epsilon_1}(\omega)+(1-\lambda)I_{\epsilon_2}(\omega),
\end{align*}
where the third line is due to strong subadditivity of the quantum mutual information.
3. The function is non-decreasing and concave for $\epsilon \geq 0 $, so it is continuous
for $\epsilon > 0 $.
The concavity implies furthermore that $I_{\epsilon}$ is lower semi-continuous at
$\epsilon=0$. On the other hand, since the fidelity and mutual information are
both continuous functions of CPTP maps, and the domain of the optimization is a
compact set, we conclude that $I_\epsilon(\omega)$ is also upper semi-continuous at
$\epsilon=0$, so it is continuous at $\epsilon=0$ \cite[Thms.~10.1,~10.2]{Rockafeller}.
4. In the definition of $I_{\epsilon}(\omega_1 \otimes \omega_2)$, let the isometry
$V_0:A_1 C_1 A_2C_2 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2\hat{C}_2 W$ be the
one attaining the maximum which acts on the purified source state with purifying
systems $X_1'$ and $X_2'$ as follows:
\begin{align*}
\ket{\xi}&^{X_1X_1'X_2X_2'\hat{A}_1\hat{C}_1\hat{A}_2\hat{C}_2W} \\
&\phantom{====}
=(\openone_{X_1X_1'X_2X_2'}\otimes V_0)\ket{\omega_1}^{X_1X'_1A_1C_1}
\ket{\omega_2}^{X_2X'_2A_2C_2}.
\end{align*}
Now, define the isometry $V_1:A_1 C_1 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2 \hat{C}_2W X_2X'_2$
acting only on the systems $A_1 C_1$ with the output state $\hat{A}_1\hat{C}_1$ and the
environment $W_1:=\hat{A}_2\hat{C}_2 W X_2X'_2$ as follows:
\[
\ket{\xi}^{X_1X_1'X_2X_2'\hat{A}_1\hat{C}_1\hat{A}_2\hat{C}_2W}
= (\openone_{X_1X_1'}\otimes V_1)\ket{\omega_1}^{X_1X_1'A_1C_1}.
\]
Hence, we obtain
\begin{align*}
F(\omega_1^{X_1A_1C_1},\xi^{X_1\hat{A}_1\hat{C}_1})
&\geq F\left(\omega_1^{X_1A_1C_1}\otimes\omega_2^{X_2A_2C_2},
\xi^{X_1X_2\hat{A}_1\hat{C}_1\hat{A}_2\hat{C}_2}\right) \\
&\geq 1 - \epsilon,
\end{align*}
where the first inequality is due to monotonicity of the fidelity under
CPTP maps, and the second inequality follows by the definition of $V_0$.
Consider the isometry
$V_2:A_2 C_2 \rightarrow \hat{A}_1\hat{C}_1\hat{A}_2 \hat{C}_2W X_1X'_1$
defined in a similar way, with the output state $\hat{A}_2\hat{C}_2$ and
the environment $W_2:=\hat{A}_1\hat{C}_1 W X_1X'_1$.
Therefore, we obtain
\begin{align*}
I_{\epsilon}(\omega_1) +I_{\epsilon}(\omega_2) &\geq I(X_1:\hat{C}_1W_1)+I(X_2:\hat{C}_2W_2)\\
&\geq I(X_1:\hat{C}_1\hat{C}_2W)+I(X_2:\hat{C}_1\hat{C}_2WX_1)\\
&= I(X_1X_2:\hat{C}_1\hat{C}_2W)
= I_{\epsilon}(\omega_1 \otimes \omega_2),
\end{align*}
where the second line is due to data processing.
5. In the definition of $I_0(\omega)$, let $V_0:AC \rightarrow \hat{A}\hat{C}W$ be the
isometry attaining the maximum with $F(\omega^{XAC},\xi^{X\hat{A}\hat{C}})=1$. Hence, we obtain
\begin{align*}
I_0(\omega)&= I(X:\hat{C}W)
= I(XY:\hat{C}W) \\
&= I(Y:\hat{C}W)+I(X:\hat{C}W|Y) \\
&\leq S(Y)+I(X:\hat{C}W|Y) \\
&= S(Y)+I(X:W|Y)+I(X:\hat{C}|WY) \\
&\leq S(Y)+I(X:W|Y)+S(C|WY)\\
&\leq S(Y)+I(X:W|Y)+S(C|Y),
\end{align*}
where the first line follows because $Y$ is a function of $X$. The second and fourth
line are due to the chain rule. The third line follows because for the classical
system $Y$ the conditional entropy $S(Y|\hat{C}W)$ is non-negative. The penultimate
line follows because for any $x$ the state on the system $\hat{C}$ is pure. The
last line is due to strong sub-additivity of the entropy. Furthermore, for every
$y$, the ensemble ${\mathcal{E}}_y$ is irreducible; hence, the conditional mutual information
$I(X:W|Y)=0$ which follows from the detailed discussion on page 2028 of \cite{Barnum2001_2}.
\end{proof}
\begin{proof-of}[{of the converse part of Theorem \ref{theorem: main}}]
We start by observing
\[
nQ+S(B_0) \geq S(C_A)+S(B_0)
\geq S(C_A B_0)
= S(\hat{A}^n W_B),
\]
where the second inequality is due to subadditivity of the entropy,
and the equality follows because the decoding isometry $V_B$ does not change
the entropy. Hence, we get
\begin{align}
\label{eq:converse Schumacher assisted 1}
nQ+S(B_0) &\geq S(\hat{A}^n)+S(W_B|\hat{A}^n) \nonumber\\
&\geq S(\hat{A}^n)+S(W_B|\hat{A}^n X^n) \nonumber\\
&\geq S(A^n)+S(W_B|\hat{A}^n X^n)-n \delta(n,\epsilon) \nonumber\\
&= S(A^n) \!+\! S(\hat{A}^nW_B| X^n\!) \!-\! S(\hat{A}^n| X^n)\!-\!n \delta(n,\epsilon)\nonumber\\
&= S(A^n) \!+\! S(\hat{C}^nW_A| X^n) \!-\! S(\hat{A}^n| X^n) \!-\! n \delta(n,\epsilon)\nonumber\\
&\geq S(A^n)+S(\hat{C}^nW_A| X^n)- 3n \delta(n,\epsilon),
\end{align}
where in the first and second line we use the chain rule and subadditivity of entropy.
The inequality in the third line follows from the decodability of the system $A^n$:
the fidelity criterion (\ref{eq:Schumcaher assisted fidelity}) implies that the
output state on systems $\hat{A}^n$ is $2\sqrt{2\epsilon}$-close to the
original state $A^n$ in trace norm; then apply the Fannes-Audenaert inequality
\cite{Fannes1973,Audenaert2007} where
$\delta(n,\epsilon)=\sqrt{2\epsilon} \log|A| + \frac1n h(\sqrt{2\epsilon})$.
The equalities in the fourth and the fifth line are due to the chain rule
and the fact that for any $x^n$ the overall state of $\hat{A}^n\hat{C}^nW_AW_B$ is pure.
In the last line, we use the decodability of the systems $X^nA^n$, that is the
output state on systems $X^n\hat{A}^n$ is $2\sqrt{2\epsilon}$-close to the
original states $X^nA^n$ in trace norm, then we apply the Alicki-Fannes
inequality \cite{Alicki2004,Winter2016}.
Moreover, we bound $Q$ as follows:
\begin{align}\label{eq:converse Schumacher assisted 2}
nQ &\geq S(C_A)
\geq S(C_A|\hat{C}^nW_A) \nonumber \\
&= S(A^nC^nA_0) -S(\hat{C}^nW_A) \nonumber \\
&= S(A^nC^nY^n)+S(A_0) -S(\hat{C}^nW_A),
\end{align}
where the first equality follows because the encoding isometry
$V_A:A^nC^nA_0 \rightarrow C_A\hat{C}^nW_A$ does not change the entropy.
Adding Eqs. (\ref{eq:converse Schumacher assisted 1}) and
(\ref{eq:converse Schumacher assisted 2}), we thus obtain
\begin{align}\label{eq:converse Schumacher}
Q &\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n}I(\hat{C}^nW_A:X^n)-\frac{3}{2}\delta(n,\epsilon)\nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n}I(\hat{C}^nW_AW_B:X^n)-\frac{3}{2}\delta(n,\epsilon)\nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2n} I_\epsilon(\omega^{\otimes n})
-\frac{3}{2}\delta(n,\epsilon) \nonumber \\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2} I_\epsilon(\omega)-\frac{3}{2}\delta(n,\epsilon),
\end{align}
where the second line is due to data processing. The third line follows from
Definition \ref{def:I_epsilon}. The last line follows from point 4 of Lemma
\ref{lemma:I_epsilon properties}. In the limit of $\epsilon \to 0$ and
$n \to \infty $, the rate is bounded by
\begin{align*}
Q &\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2} I_0(\omega)\\
&\geq \frac{1}{2}(S(A)+S(ACY))-\frac{1}{2}S(CY)\\
&= \frac{1}{2}(S(A)+S(A|CY)),
\end{align*}
where the first line follows from point 3 of Lemma \ref{lemma:I_epsilon properties}
stating that $I_\epsilon(\omega)$ is continuous at $\epsilon=0$.
The second line is due to point 5 of Lemma \ref{lemma:I_epsilon properties}.
\end{proof-of}
\section{Complete rate region}
In this section, we find the complete rate region of achievable rate pairs $(E,Q)$.
\begin{theorem}
\label{theorem:complete rate region}
For the source $\omega^{XACY}$, all asymptotically achievable entanglement and
quantum rate pairs $(E,Q)$ satisfy
\begin{align*}
Q &\geq \frac{1}{2}(S(A)+S(A|CY)),\\
Q+E &\geq S(A).
\end{align*}
Conversely, all the rate pairs satisfying the above inequalities are achievable.
\end{theorem}
\begin{proof}
The first inequality comes from Theorem \ref{theorem: main}.
For the second inequality, consider any code with quantum communication
rate $Q$ and entanglement rate $E$. By using an additional communication
rate $E$, Alice and Bob can distribute the entanglement first, and then
apply the given code, converting it into one without preshared
entanglement and communication rate $Q+E$, having exactly the same
fidelity. By Remark \ref{remark:E=0}, $Q+E \geq S(A)$.
As for the achievability, the corner point
$(\frac{1}{2} I(A:CY),\frac{1}{2}(S(A)+S(A|CY)))$ is achievable,
because QSR which is used as the achievability protocol in
Theorem \ref{theorem: main} uses $\frac{1}{2} I(A:CY)$ ebits of
entanglement between Alice and Bob.
Furthermore, all the points on the line $Q+E = S(A)$ for
$Q \geq \frac{1}{2}(S(A)+S(A|CY))$ are achievable because one
ebit can be distributed by sending a qubit.
All other rate pairs are achievable by resource wasting. The rate region is depicted in
Fig.~\ref{fig:E-Q}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth]{Q-E_tradeoff.jpg}
\caption{The optimal rate region of quantum and entanglement rates.}
\label{fig:E-Q}
\end{figure}
\section{Discussion}
First of all, let us look at what our results tell us in the cases of
blind and visible compression.
\begin{corollary}
\label{corollary:blind}
In blind compression (i.e. if $C$ is trivial, or more generally the
states $\sigma_x$ are all identical), the compression of the source
$\omega^{XACY}$ reduces to the entanglement-assisted Schumacher
compression for which Theorem \ref{theorem: main} gives the optimal asymptotic quantum rate
\[
Q = \frac{1}{2}(S(A)+S(A|Y))=S(A)-\frac{1}{2}S(Y).
\]
This implies that if the source is irreducible, then this rate is equal to the
Schumacher limit $S(A)$. In other words, the entanglement does not help the
compression. Moreover, due to Theorem \ref{theorem:complete rate region}, a
rate $\frac{1}{2}S(Y)$ of entanglement is consumed in the compression,
and $E+Q\geq S(A)$ in general.
\qed
\end{corollary}
The blind compression of a source $\omega^{XAY}$ is also considered
in \cite{Barnum2001_2}, but there, instead of entanglement, a noiseless classical
channel was assumed in addition to the quantum channel.
It was shown that the optimal quantum rate assisted with free classical communication
is equal to $S(A)-S(Y)$, while a rate $S(Y)$ of classical communication suffices.
By sending the classical information using dense coding \cite{Bennett1992},
spending $\frac12$ ebit and $\frac12$ qubit per cbit, we can recover the
quantum and entanglement rates of Corollary \ref{corollary:blind}.
This means that our converse implies the optimality of the quantum rate
from \cite{Barnum2001_2}.
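Spelled out, the resource bookkeeping behind this conversion reads
\begin{align*}
Q &= \underbrace{S(A)-S(Y)}_{\text{qubits}} + \underbrace{\tfrac12 S(Y)}_{\text{dense coding}}
= S(A)-\tfrac12 S(Y),
&
E &= \tfrac12 S(Y),
\end{align*}
reproducing the rates of Corollary \ref{corollary:blind}.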
Thus we are motivated to look at a modified compression model where the resources
used are classical communication and entanglement. Namely, we let Alice and Bob
share entanglement at rate $E$ and use classical communication at rate $C$, but
otherwise the objective is the same as in Section \ref{sec:Compression assisted by entanglement};
define the rate region as the set of all asymptotic achievable classical
communication and entanglement rate pairs $(C,E)$, such that the decoding fidelity
asymptotically converges to $1$.
\begin{theorem}
For a source $\omega^{XAY}$, a rate pair $(C,E)$ is achievable if and only if
\begin{align*}
C &\geq 2S(A)-S(Y),\\
E &\geq S(A)-S(Y).
\end{align*}
\end{theorem}
\begin{proof}
We start with the converse.
The first inequality follows from Theorem \ref{theorem: main}, because with
unlimited entanglement shared between Alice and Bob,
$\frac{1}{2}(S(A)+S(A|Y))=S(A)-\frac{1}{2}S(Y)$ qubits of quantum
communication is equivalent to $2S(A)-S(Y)$ bits of classical communication
due to teleportation \cite{Bennett1993} and dense coding \cite{Bennett1992}.
The second inequality follows from \cite{Barnum2001_2}, because with free
classical communication, the quantum rate is lower bounded by $S(A)-S(Y)$
which, due to teleportation \cite{Bennett1993}, is equivalent to sharing
$S(A)-S(Y)$ ebits when classical communication is for free.
The achievability of the corner point $(2S(A)-S(Y),S(A)-S(Y))$
follows from \cite{Barnum2001_2} because the compression protocol
uses $S(A)-S(Y)$ qubits and $S(Y)$ bits of classical communication
which is equivalent to using $S(A)-S(Y)$ ebits of entanglement and
$2S(A)-2S(Y)+S(Y)=2S(A)-S(Y)$ bits of classical communication, due to dense coding \cite{Bennett1992}.
Other rate pairs are achievable by resource wasting.
The rate region is depicted in Fig.~\ref{fig:C-E}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\textwidth]{C-E_tradeoff.jpg}
\caption{The optimal rate region of classical and entanglement rates.}
\label{fig:C-E}
\end{figure}
\begin{corollary}
\label{cor:visible}
In the visible case, our compression problem reduces to the visible version of
Schumacher compression with entanglement assistance. In this case,
according to Theorem \ref{theorem: main} the optimal asymptotic quantum
rate is $Q=\frac{1}{2}S(A)$.
Moreover, a rate $E=\frac{1}{2}S(A)$ of entanglement
is consumed in the compression scheme, and $E+Q\geq S(A)$ in general.
\qed
\end{corollary}
We remark that the visible compression assisted by unlimited entanglement is
also a special case of remote state preparation considered in \cite{Bennett2005},
from which we know that the rate $Q=\frac12 S(A)$ is achievable and optimal.
The visible analogue of \cite{Barnum2001_2}, of compression using
qubit and cbit resources, was treated in \cite{Hayden2002}, where the
achievable region was determined as the union of all pairs $(C,Q)$ such
that $Q\geq S(A|Z)$ and $C\geq I(X:Z)$, for any random
variable $Z$ forming a Markov chain $Z$---$X$---$A$. Compared to the
complicated boundary of this region, the boundary in Corollary \ref{cor:visible}
is much simpler, consisting of two straight lines.
We close by discussing several open questions for future work:
First, the final discussion of different pairs of resources to compress suggests
that an interesting target would be the characterisation of the full triple resource
tradeoff region for $Q$, $C$ and $E$ together.
Secondly, we recall that our definition of successful decoding included
preservation of the side information $\sigma_x^C$ with high fidelity. What is the
optimal compression rate $Q$ if the side information does not have to be preserved?
For an example where this change has a dramatic effect on the optimal communication
rate, consider the ensemble ${\mathcal{E}}$ consisting of the three two-qubit states
$\ket{0}^A\ket{0}^C$, $\ket{1}^A\ket{0}^C$ and $\ket{+}^A\ket{+}^C$
(where $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$),
with probabilities $\frac12-t$, $\frac12-t$ and $2t$, respectively.
Note that ${\mathcal{E}}$ is irreducible, hence for $t\approx 0$, we get an optimal
quantum rate of $Q\approx 1$, because $S(A) \approx S(A|C) \approx 1$.
However, by applying a CNOT unitary (with $A$ as control and $C$ as target),
the ensemble is transformed into ${\mathcal{E}}'$ consisting of the states
$\ket{0}^A\ket{0}^{C'}$, $\ket{1}^A\ket{1}^{C'}$ and $\ket{+}^A\ket{+}^{C'}$.
The state of $A$ is not changed, only the side information, which is why we
denote it $C'$. Hence we can apply Theorem \ref{theorem: main} to get a
quantum rate $Q\approx\frac12$, because $S(A) \approx 1$ and $S(A|C') \approx 0$.
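This example is easily checked numerically; a sketch (illustration only) with $t=0.01$, using the CNOT with $A$ as control:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def cond_entropy_A_given_C(rho_AC):
    """S(A|C) = S(AC) - S(C) for a two-qubit state, A the first factor."""
    rho_C = rho_AC.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # trace out A
    return vn_entropy(rho_AC) - vn_entropy(rho_C)

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
t = 0.01
ens = [(0.5 - t, ket0, ket0), (0.5 - t, ket1, ket0), (2 * t, plus, plus)]
rho_AC = sum(p * np.outer(np.kron(a, c), np.kron(a, c)) for p, a, c in ens)

# CNOT with A as control, C as target
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
rho_ACp = CNOT @ rho_AC @ CNOT.T  # transformed ensemble with side information C'

S_before = cond_entropy_A_given_C(rho_AC)    # S(A|C), close to 1 for small t
S_after  = cond_entropy_A_given_C(rho_ACp)   # S(A|C'), close to 0 for small t
```

For $t=0.01$ one finds $S(A|C)\approx 0.99$ before and $S(A|C')\approx 0.07$ after the CNOT, consistent with the limits quoted above.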
Thirdly, note that the lower bound $Q+E\geq S(A)$ in Theorem \ref{theorem:complete rate region}
holds with a strong converse (see the proof and \cite{Winter1999}).
But does $Q\geq \frac12 (S(A)+S(A|CY))$ hold as a strong converse rate
with unlimited entanglement? Likewise, in the setting of \cite{Barnum2001_2}
with unlimited classical communication, is $Q\geq S(A)-S(Y)$ a strong converse
bound for the quantum rate?
\part{Quantum Thermodynamics}
\part{Quantum Source Compression}
{
"timestamp": "2020-12-29T02:26:41",
"yymm": "2012",
"arxiv_id": "2012.14143",
"language": "en",
"url": "https://arxiv.org/abs/2012.14143"
}
\section{Introduction}
\label{sec:intro}
The Nekrasov partition function of four-dimensional $\mathcal{N}=2$ gauge
theories with Lagrangian description can often be exactly evaluated by
supersymmetric localization \cite{Nekrasov:2002qd,Nekrasov:2003rj}.
The result is
written as a sum over fixed points of a torus action on the instanton moduli
space.
For instance, for $U(2)$ gauge theory with $N_f$ fundamental
hypermultiplets, the partition function is evaluated as
\begin{align}
\mathcal{Z}_{U(2)}^{N_f} =
\mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}\Lambda^{b_0(|Y_1|+|Y_2|)}\mathcal{Z}^\text{vec}_{Y_1,Y_2}(a)
\prod_{i=1}^{N_f} \mathcal{Z}^\text{fund}_{Y_1,Y_2}(a,m_i)~,
\label{eq:Nek1}
\end{align}
where $b_0 \equiv 4-N_f$, $\Lambda$ is a dynamical scale, $a$ is the
vacuum expectation value (VEV) of a scalar, $Y_k$ are Young diagrams,
and $|Y_k|$ is the number of boxes in $Y_k$. The sum over $(Y_1,Y_2)$ can be
regarded as a sum over fixed points on the moduli space of $U(2)$
instantons, and
$\mathcal{Z}^\text{vec}_{Y_1,Y_2}$ and
$\mathcal{Z}^\text{fund}_{Y_1,Y_2}$ are respectively the contributions
of the gauge and matter sectors at these fixed points. The prefactor,
$\mathcal{Z}_\text{pert}$, is the perturbative contribution that makes the power series in $\Lambda$ start with $1$.
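To illustrate the combinatorial structure of the sum in \eqref{eq:Nek1}: the fixed points at instanton number $k=|Y_1|+|Y_2|$ are labeled by pairs of Young diagrams. A short Python sketch enumerating these labels (illustration only; it does not compute the weights $\mathcal{Z}^\text{vec}$ or $\mathcal{Z}^\text{fund}$):

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples (Young diagrams)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def young_pairs(k):
    """All pairs (Y1, Y2) of Young diagrams with |Y1| + |Y2| = k."""
    return [(y1, y2)
            for k1 in range(k + 1)
            for y1 in partitions(k1)
            for y2 in partitions(k - k1)]

# numbers of fixed points at instanton numbers k = 0, 1, 2, 3, 4
counts = [len(young_pairs(k)) for k in range(5)]  # [1, 2, 5, 10, 20]
```

The counts $1,2,5,10,20,\dots$ are the coefficients of $\prod_{n\geq 1}(1-q^n)^{-2}$, as expected for $U(2)$.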
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,inner sep=0pt,minimum size=8mm},AD/.style={rectangle,draw=black,fill=red!20,inner sep=0pt,minimum size=8mm},auto]
\node[AD] (1) at (-2,0) {\;$(A_1,D_{4})$\;};
\node[gauge] (2) at (0,0) [shape=circle] {\;$2$\;} edge (1);
\node[AD] (3) at (2,0) {\;$(A_1,D_{4})$\;} edge (2);
\node[flavor] (4) at (0,1.5) {\;1\;} edge (2);
\end{tikzpicture}
\caption{The quiver diagram of the $(A_3,A_3)$ AD theory. The left and right
boxes stand for two copies of $(A_1,D_4)$ theory while the top box
stands for a fundamental hypermultiplet of $SU(2)$. The middle circle
stands for $SU(2)$ gauge group coupled to these ``matters.''}
\label{fig:quiver1}
\end{center}
\end{figure}
Despite the above success for Lagrangian theories, there is a rich class of four-dimensional $\mathcal{N}=2$
gauge theories whose partition functions are still to be evaluated. These
theories involve strongly-coupled CFTs in their matter sector, and therefore
their partition functions cannot be directly evaluated by supersymmetric
localization.
Among other theories, conformal gauge theories coupled to
Argyres-Douglas (AD) theories are of particular
importance in this class, since they provide a new class of
$\mathcal{N}=2$ S-dualities \cite{Buican:2014hfa, DelZotto:2015rca, Cecotti:2015hca, Xie:2016uqq, Xie:2017vaf,
Buican:2017fiq, Xie:2017aqx, Buican:2018ddk}. We call such conformal gauge
theories ``conformally gauged AD theories.''
One of the simplest examples of such theories is the $(A_3,A_3)$ theory
described by the quiver
diagram in Fig.~\ref{fig:quiver1}, where the $(A_1,D_4)$ theory is a
particular AD theory.\footnote{This AD theory is also called $H_2$
theory, $D_2(SU(3))$ theory, and $(A_2,A_2)$ theory in the
literature. The first series of papers on AD theories are
\cite{Argyres:1995jj, Argyres:1995xn, Eguchi:1996vu}.}
The beta function of the gauge coupling vanishes here since the
contributions of the gauge and matter sectors exactly cancel.
While the partition functions of conformally gauged AD theories have
not been evaluated, there exists a series of {\it non-conformally}
gauged AD theories whose
partition functions were evaluated via a generalization
\cite{Bonelli:2011aa, Gaiotto:2012sf} of the AGT
correspondence \cite{Alday:2009aq, Gaiotto:2009ma} (see
\cite{Kanno:2012xt, Nishinaka:2012kn, Rim:2012tf, Kanno:2013vi, Matsuo:2014rba,
Choi:2015idw, Rim:2016msx,
Itoyama:2018wbh, Itoyama:2018gnh,Nishinaka:2019nuy} for
recent developments on this generalization). In particular, for the theory described by the quiver in Fig.~\ref{fig:quiver2}, the partition function was
evaluated as the inner product of so-called ``irregular states'' of
Virasoro algebra. The application of the generalized AGT
correspondence is possible here
since these theories
can be engineered by compactifying the 6d $(2,0)$ $A_1$
theory on a Riemann surface.
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,inner sep=0pt,minimum size=8mm},AD/.style={rectangle,draw=black,fill=red!20,inner sep=0pt,minimum size=8mm},auto]
\node[AD] (1) at (-2,0) {\;$(A_1,D_{2n})$\;};
\node[gauge] (2) at (0,0) [shape=circle] {\;$2$\;} edge (1);
\node[AD] (3) at (2,0) {\;$(A_1,D_{2n})$\;} edge (2);
\end{tikzpicture}
\caption{A typical example of gauge theories studied in the generalized
AGT correspondence, where $n$ is a positive integer. The circle stands for $SU(2)$ gauge group, and each
box stands for an $(A_1,D_{2n})$ theory. The $SU(2)$ vector multiplet
is diagonally gauging an $SU(2)$ sub-group of the $(A_1,D_{2n})$
theories.}
\label{fig:quiver2}
\end{center}
\end{figure}
The purpose of this paper is to propose a way to compute the partition
function of the conformally gauged AD theory
in
Fig.~\ref{fig:quiver1}, using the
generalized AGT correspondence for the non-conformally gauged AD
theory
in Fig.~\ref{fig:quiver2}. Since the former has no known construction
from the
6d (2,0) $A_1$ theory, one cannot directly apply the AGT correspondence
to it.\footnote{The theory shown in Fig.~\ref{fig:quiver1} can be
constructed by compactifying the 6d (2,0) $A_2$ or $A_3$ theory on a Riemann
surface \cite{Xie:2012hs, Beem:2020pry}. This suggests a possibility of studying its partition function via the
higher-rank generalization of the AGT correspondence
\cite{Wyllard:2009hg}. Going in this direction would, however, be
non-trivial and involved since the 2d
side is now a higher-rank Toda theory. In this paper, we take a
different route.} Instead, our strategy is to apply the generalized AGT correspondence to
the latter (with a
small but crucial modification discussed below) and decompose the resulting
partition function as a sum over Young diagrams, as in
\eqref{eq:Nek1}.
While the theory has a strongly-coupled matter sector, such a
decomposition is expected, since the gauge sector of the theory is still
described by a Lagrangian and its path integral is expected to
lead to a sum
over fixed points on the instanton moduli space.
Once such a decomposition is obtained,
one can read off the contribution of the $(A_1,D_{2n})$ theory at each
fixed point, say $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$.
It is then
straightforward to compute the
partition function of the
conformally gauged AD theory shown in Fig.~\ref{fig:quiver1}.
One important point in the above discussion is that the decomposition
of the form \eqref{eq:Nek1} is for $U(2)$ gauge group while the
generalized AGT correspondence is for $SU(2)$
gauge
group. Indeed, for fixed points on the instanton moduli space to be
labeled by $(Y_1,Y_2)$, the gauge group must be $U(2)$ instead of $SU(2)$.
Therefore, to decompose the resulting partition function as a sum
over $(Y_1,Y_2)$,
we need a $U(2)$ version of the generalized AGT
correspondence.
For the original AGT correspondence, such a $U(2)$
version is realized by considering the direct sum of Virasoro and
Heisenberg algebras ($Vir\oplus H$) on the two-dimensional side \cite{Alba:2010qc}. In this
paper, we extend it to the generalized AGT
correspondence, by considering irregular states of $Vir\oplus H$.
Given the $U(2)$ version of the generalized AGT correspondence, one can
easily
decompose the partition function of $U(2)$ gauge theory coupled to two
$(A_1,D_{2n})$ as
\begin{align}
\mathcal{Z}_{U(2)}^{2\times (A_1,D_{2n})} =
\mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}\Lambda^{b_0(|Y_1|+|Y_2|)}\mathcal{Z}^\text{vec}_{Y_1,Y_2}(a)
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}(a,m,\pmb{d},\pmb{u})\tilde{\mathcal{Z}}_{Y_1,Y_2}^{(A_1,D_{2n})}(a,\tilde{m},\tilde{\pmb{d}},\tilde{\pmb{u}})~,
\label{eq:Nek2}
\end{align}
where $m,\pmb{d}=(d_1,\cdots,d_{n-1})$ and $\pmb{u}=(u_1,\cdots,u_{n-1})$ are respectively the mass,
relevant couplings and
VEVs of Coulomb branch operators of the $(A_1,D_{2n})$ theory, and
$b_0\equiv 2/n$ is the coefficient of the one-loop $\beta$ function.
We interpret the sum over $(Y_1,Y_2)$ as a sum over fixed
points on the moduli space of $U(2)$ instantons, and identify
$\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ and
$\tilde{\mathcal{Z}}^{(A_1, D_{2n})}_{Y_1,Y_2}$ as contributions from
the $(A_1,D_{2n})$ theories at each fixed point. The difference between $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ and
$\tilde{\mathcal{Z}}^{(A_1, D_{2n})}_{Y_1,Y_2}$ is interpreted as coming
from how
the $U(1)$ part of the gauge group is coupled to $(A_1,D_{2n})$.
With the above identification of $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$,
we evaluate the partition function of the $(A_3,A_3)$ theory as
follows. We start with the quiver description shown in
Fig.~\ref{fig:quiver1}. When the gauge group in the quiver is $U(2)$, the partition function is
evaluated as
\begin{align}
\mathcal{Z}_{U(2)} = \mathcal{Z}_\text{pert}
\sum_{Y_1,Y_2}q^{|Y_1|+|Y_2|}\mathcal{Z}^\text{vec}_{Y_1,Y_2}(a)\mathcal{Z}^\text{fund}_{Y_1,Y_2}(a,M)
\prod_{i=1}^2\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}(a,m_i,d_i,u_i)~,
\label{eq:Nek3}
\end{align}
where $q$ is the exponential of the marginal gauge coupling, and $M$ is
the mass of the hypermultiplet. The prefactor $\mathcal{Z}_\text{pert}$
is again the perturbative contribution that makes the power series in
$q$ start with $1$. Since
$\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ is already read off from the decomposition
\eqref{eq:Nek2}, one can explicitly compute the above partition
function.
When the gauge group is $SU(2)$, the partition function differs from
\eqref{eq:Nek3} by the contribution of the $U(1)$-part of the gauge group. Indeed, according to a general
discussion in \cite{Alday:2009aq}, the partition function for $SU(2)$
gauge group is expected to be given by
\begin{align}
\mathcal{Z}_{SU(2)} =
\frac{\mathcal{Z}_{U(2)}}{\mathcal{Z}_{U(1)}}~,
\label{eq:U1}
\end{align}
where $\mathcal{Z}_{U(1)}$ is the partition
function of the $U(1)$ part and is called the ``$U(1)$-factor.''
We use \eqref{eq:Nek3} and
\eqref{eq:U1} to show in particular that the S-duality of the $(A_3,A_3)$ theory is in
a peculiar relation to that of $SU(2)$ gauge theory with $4$ fundamental
flavors. It is an interesting open problem to see how this peculiar
relation is connected to a similar relation between the Schur index of
the same pair of theories discussed in \cite{Buican:2019kba}.
The organization of the rest of this paper is the following. In
Sec.~\ref{sec:AGT}, we briefly review the AGT correspondence and its
generalization to AD theories. In Sec.~\ref{sec:V+H}, we consider a
$U(2)$-version of the generalized AGT correspondence, in terms of
irregular states of $Vir\oplus H$. In Sec.~\ref{sec:Nek-AD}, we derive
a formula for $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}$ corresponding to
the gauged $(A_1,D_{2n})$ theory, using the $U(2)$-version of the
generalized AGT correspondence discussed in the previous section. In
Sec.~\ref{sec:A3A3}, we evaluate the partition function of $(A_3,A_3)$
theory using our formula for
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{4})}$. We particularly discuss the
S-duality of the theory in connection to the S-duality of $SU(2)$ gauge
theory with four flavors. In Sec.~\ref{sec:Conclusions}, we conclude and
discuss future directions. There are several
appendices. Sec.~\ref{app:Nek} includes Nekrasov's formulae for
Lagrangian sectors. Sec.~\ref{app:basis} contains the first few examples
of states in a special basis of the highest weight module of $Vir\oplus H$ that we
will use in Sec.~\ref{sec:Nek-AD}. In Sec.~\ref{app:F2}, we explain how
the prepotential of the $(A_3,A_3)$ theory is constrained by the
invariance under \eqref{eq:T-massive} when all the
massive deformations are turned on.
\section{Generalized AGT correspondence}
\label{sec:AGT}
In this section, we briefly review the AGT correspondence
\cite{Alday:2009aq, Gaiotto:2009ma} and its
generalization \cite{Bonelli:2011aa,
Gaiotto:2012sf}.
Suppose that $\mathcal{T}_{\mathcal{C}}$ is a four-dimensional $\mathcal{N}=2$
superconformal field theory obtained by compactifying the 6d $(2,0)\; A_1$ theory on a punctured Riemann surface $\mathcal{C}$. It is known
that punctures on $\mathcal{C}$ can be ``regular'' or ``irregular''
depending on whether a six-dimensional BPS scalar operator has a
simple pole or higher order pole at it. When the BPS operator
has a pole of order $(n+1)$ at an irregular puncture, we say the
puncture is of rank $n$. While this rank can be an integer or half-integer
in general, we only consider integer ranks in this paper.
\begin{figure}
\begin{center}
\begin{tikzpicture}[gauge/.style={circle,draw=black,thick,inner sep=0pt,minimum size=8mm},flavor/.style={rectangle,draw=black,thick,inner sep=0pt,minimum size=8mm},auto]
\node[flavor] (1) at (-8,0) {\;$2$\;};
\node[gauge] (2) at (-6.5,0) [shape=circle] {\;$2$\;} edge (1);
\node[gauge] (3) at (-5,0) [shape=circle] {\;$2$\;} edge (2);
\node (4) at (-3.5,0) {$\cdots$} edge (3);
\node[gauge] (5) at (-2,0) {\;$2$\;} edge (4);
\node[flavor] (6) at (-.5,0) {\;$2$\;} edge (5);
\end{tikzpicture}
\caption{The quiver diagram of the class $\mathcal{S}$ theory
$\mathcal{T}_{\mathcal{C}}$ for $\mathcal{C}$ being a sphere with $(n+2)$
regular punctures. There are $(n-1)$ circles, each of which stands for an $SU(2)$ gauge
group.}
\label{fig:quiver0}
\end{center}
\end{figure}
Let us first focus on the case in which $\mathcal{C}$ has no irregular
puncture. In this case, the AGT
correspondence \cite{Alday:2009aq} implies that the Nekrasov partition function of
$\mathcal{T}_{\mathcal{C}}$ is identical to the conformal block of
Liouville theory on $\mathcal{C}$, where the Liouville charge $Q$ is
related to the $\Omega$-background parameters by
$Q=(\epsilon_1+\epsilon_2)/\sqrt{\epsilon_1\epsilon_2}$. As an example,
let us consider the case of $\mathcal{C}$ being
a sphere with $(n+2)$ regular punctures. The corresponding $\mathcal{T}_{\mathcal{C}}$ is
a linear quiver $SU(2)$ gauge theory as shown in
Fig.~\ref{fig:quiver0}. The AGT correspondence implies
\begin{align}
\mathcal{Z}_{SU(2)}(\vec{a};m_1,\cdots,m_{n+2}) =
\mathcal{F}_{\alpha_{0}}{}^{\alpha_{1}}{}_{\beta_1}{}^{\alpha_2}{}_{\beta_2}{}^{\alpha_3}\cdots
{}_{\beta_{n-1}}{}^{\alpha_{n}}{}_{\alpha_{n+1}}~,
\label{eq:linear-quiver}
\end{align}
where the LHS is the Nekrasov partition function of $\mathcal{T}_{\mathcal{C}}$
with $\vec{a} \equiv(a_1,\cdots,a_{n-1})$ being the VEVs of Coulomb branch operators, and $m_i$ being
the masses of fundamental and bi-fundamental hypermultiplets. The
subscript $SU(2)$ emphasizes that
the gauge group of $\mathcal{T}_{\mathcal{C}}$ is $SU(2)^{n-1}$. The
RHS of \eqref{eq:linear-quiver} is the $(n+2)$-point conformal block of Liouville
theory with $\beta_i$ being intermediate momenta, and $\alpha_{i}$
being external momenta. The 4d and 2d parameters are
related by
\begin{align}
\frac{a_i}{\sqrt{\epsilon_1 \epsilon_2}} &= \beta_i +
\frac{Q}{2}~,\qquad \frac{m_i}{\sqrt{\epsilon_1\epsilon_2}} = \alpha_i + \frac{Q}{2}~.
\end{align}
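As an illustration, for $n=2$ the quiver in Fig.~\ref{fig:quiver0} reduces to a single $SU(2)$ gauge group with four fundamental flavors, and \eqref{eq:linear-quiver} becomes the four-point conformal block on the sphere,
\begin{align}
\mathcal{Z}_{SU(2)}(a_1;m_1,\cdots,m_4) =
\mathcal{F}_{\alpha_0}{}^{\alpha_1}{}_{\beta_1}{}^{\alpha_2}{}_{\alpha_3}~,
\end{align}
with the single intermediate momentum $\beta_1$ related to the Coulomb branch VEV by $a_1 = \beta_1 + Q/2$ in units with $\epsilon_1\epsilon_2=1$. This is precisely the theory whose S-duality will reappear in our discussion of the $(A_3,A_3)$ theory.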
In the rest of this paper,
we rescale dimensionful parameters so that
$\epsilon_1\epsilon_2=1$, as in \cite{Alday:2009aq}, and write things
in terms of the 2d parameters. When we
need to recover the full $\epsilon_i$-dependence, we rescale the 2d
parameters as
\begin{align}
\alpha_i\to \frac{\alpha_i}{\sqrt{\epsilon_1\epsilon_2}}~,\qquad \beta_i\to \frac{\beta_i}{\sqrt{\epsilon_1\epsilon_2}}~,\qquad Q \to
\frac{Q}{\sqrt{\epsilon_1\epsilon_2}}~.
\label{eq:recover}
\end{align}
The AGT correspondence has been generalized to the case in which
$\mathcal{C}$ has irregular punctures \cite{Gaiotto:2009ma,Bonelli:2011aa,
Gaiotto:2012sf}.
The most important difference from the original AGT is that
$\mathcal{T}_{\mathcal{C}}$ generically involves an AD theory \cite{Bonelli:2011aa,
Gaiotto:2012sf}. One simple example is the case in which
$\mathcal{C}$ is a sphere with one irregular puncture of rank $n$ and
one regular puncture. In this case, $\mathcal{T}_{\mathcal{C}}$ is an AD theory called
the $(A_1,D_{2n})$ theory, and its partition function is identified as the two-point function of 2d operators
corresponding to the punctures on $\mathcal{C}$. By the state-operator map, this two-point function
is mapped to the inner product of two states
\begin{align}
\mathcal{Z}_{(A_1,D_{2n})} = \langle a|I^{(n)}\rangle~,
\label{eq:ZAD1}
\end{align}
where $|a\rangle$ is the Virasoro primary state corresponding to the
regular puncture, while $|I^{(n)}\rangle$ is a state corresponding to
the rank-$n$ irregular puncture.
A more interesting situation arises when
$\mathcal{C}$ has two
irregular punctures of rank $n$. In this case, $\mathcal{T}_{\mathcal{C}}$ is no longer
conformal, but is an $SU(2)$ gauge theory coupled to two
copies of $(A_1,D_{2n})$ theories.
The quiver diagram of this gauge theory is shown in
Fig.~\ref{fig:quiver2}.
Here, each $(A_1,D_{2n})$ in the matter sector is
associated with an irregular puncture on $\mathcal{C}$. The
$\beta$-function coefficient of this $SU(2)$ gauge coupling is
evaluated as $\beta_0 = 2/n$, and therefore $\mathcal{T}_{\mathcal{C}}$
is never conformal.
The generalized AGT correspondence then implies that the partition
function of this theory is given by
\begin{align}
\mathcal{Z}_{SU(2)}^{2\times (A_1,D_{2n})} &= \langle I^{(n)}|I^{(n)}\rangle~,
\label{eq:Z2}
\end{align}
where $\langle I^{(n)}|$ and $|I^{(n)}\rangle$ correspond to the two
irregular punctures on $\mathcal{C}$.
In contrast to the state
corresponding to a regular puncture, the state $|I^{(n)}\rangle$ is not a Virasoro primary
but a linear combination of a primary and its
descendants. This
linear combination is determined to be a
simultaneous solution to a set of equations. There are two
characterizations of this set of equations, and here we follow the
characterization proposed in \cite{Gaiotto:2012sf}:\footnote{The
relation between the characterizations in \cite{Buican:2014hfa} and
\cite{Gaiotto:2012sf} was partially studied in appendix B of \cite{Kanno:2013vi}. }
\begin{align}
L_k|I^{(n)}\rangle = \left\{
\begin{array}{l}
\lambda_k|I^{(n)}\rangle \qquad \text{for} \quad n\leq k\leq 2n
\\[2mm]
\left(\lambda_k + \sum_{\ell = 1}^{n-k}\ell\, c_{\ell+k} \frac{\partial
}{\partial c_{\ell}}\right)|I^{(n)}\rangle \qquad \text{for} \quad
0\leq k < n
\\
\end{array}
\right.~,
\label{eq:const}
\end{align}
where $\lambda_k$ are constants fixed by $c_0,\cdots,c_n$
as
\begin{align}
\lambda_k =
\left\{
\begin{array}{l}
-\sum_{\ell=k-n}^n c_\ell c_{k-\ell} \quad \text{for}\quad n<k\leq 2n
\\[2mm]
-\sum_{\ell=0}^{k} c_\ell c_{k-\ell} + (k+1)Qc_k \quad \text{for}\quad k\leq n
\end{array}
\right.~.
\label{eq:lambda}
\end{align}
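For instance, for $n=1$ the formula \eqref{eq:lambda} gives
\begin{align}
\lambda_2 = -c_1^2~,\qquad \lambda_1 = 2(Q-c_0)\,c_1~,\qquad \lambda_0 = c_0(Q-c_0)~,
\end{align}
so that $\lambda_0$ coincides with the conformal weight $\Delta_{c_0} = c_0(Q-c_0)$. This identity, which holds for general $n$, will be used in Sec.~\ref{subsec:ZAD}.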
It was
conjectured in \cite{Gaiotto:2012sf} that $|I^{(n)}\rangle$ satisfying the
above equations exists in a highest weight module of Virasoro
algebra.\footnote{Note that the highest weight of this module is not
necessarily equal to $c_0$. Indeed, the highest weight is given by
$\beta_0$ discussed below.}
Since the constraints \eqref{eq:const} are
differential equations, $|I^{(n)}\rangle$ is not completely fixed by
$c_k$ but depends on the ``boundary
condition'' or the ``asymptotic behavior.'' It was conjectured in \cite{Gaiotto:2012sf} that $|I^{(n)}\rangle$ is
uniquely fixed by specifying $n$ extra
complex parameters in addition to $c_0,\cdots,c_{n-1}$ and $c_n$. These
extra parameters
characterize the asymptotic behavior of $|I^{(n)}\rangle$ in the small
$c_k$ limit for $k=1,\cdots,n$. We denote these
extra parameters by $\beta_0,\cdots,\beta_{n-1}$.\footnote{See
section 3 and appendix B of \cite{Gaiotto:2012sf} for more detail. Some
explicit examples are also shown in \cite{Nishinaka:2019nuy}.}
Then the generalized AGT correspondence implies that $c_1,\cdots,c_{n-1}$ are related
to relevant couplings, $\beta_1,\cdots,\beta_{n-1}$ are related to the
VEVs of Coulomb branch operators, and $c_0$ and $\beta_0$ are related to
mass parameters of $(A_1,D_{2n})$ theory. The Liouville charge $Q$ is
identified with $\epsilon_1+\epsilon_2$.
The irregular puncture of integer rank $n$ can be created by colliding $(n+1)$
regular punctures. As a result, the condition \eqref{eq:const} is
obtained in the colliding limit of Virasoro primary operators
\cite{Gaiotto:2012sf}, which we briefly review here for later use.
Let us first consider the state
\begin{align}
|\phi_n(z_1,\dots,z_n)\rangle \equiv\;\;
:\!\left(\prod^n_{i=1}V_{\alpha_i}^{L}(z_i)\right)V^L_{\alpha_0}(0)\!:
|0\rangle~,
\label{eq:phin}
\end{align}
where $V_{\alpha}^{L}(z)$ is the Virasoro primary vertex operator of
conformal weight $\alpha(Q-\alpha)$, and $:\!XY\!:$ is the normal-ordered
product of $X$ and $Y$.
The action of
$T_{>}(y)\equiv \sum_{k\geq -1} y^{-k-2}L_k$ on this state is expressed as
\begin{align}
\begin{aligned}
T_{>}(y)|\phi_n(z_1,\dots,z_n)\rangle=\sum_{i=0}^{n}\bigg( \frac{\alpha_i(Q-\alpha_i)}{(y-z_i)^2}+\frac{1}{y-z_i}\frac{\partial}{\partial z_i} \bigg)|\phi_n(z_1,\dots,z_n)\rangle~,
\end{aligned}
\label{eq:Tphi}
\end{align}
where $z_0\equiv 0$.
The idea is then to consider a limit $z_i\to 0$ in which the above
action of $T_{>}(y)$ remains well-defined but gives an interesting
result. If we keep $\alpha_i$ finite in the limit,
$|\phi_n(z_1,\cdots,z_n)\rangle$ just reduces to a single primary vertex
operator acting on the vacuum. A more interesting limit is to take
$z_i\to 0$ and $\alpha_i\to \infty$ with
\begin{align}
c_k \equiv \sum_{i=0}^n\alpha_i z_i^k \qquad (k=0,\cdots,n)~,
\label{eq:ck}
\end{align}
kept finite. We call this latter limit the ``colliding limit.'' It is
straightforward to show that $T_{>}(y)$ acts on the state
\begin{align}
|I^{(n)}\rangle \equiv \lim_{\text{colliding
limit}}|\phi_n(z_1,\cdots,z_n)\rangle~,
\label{eq:collided}
\end{align}
as
\begin{align}
T_{>}(y) |I^{(n)}\rangle =
\left(\sum_{k=n}^{2n}\frac{\lambda_k}{y^{k+2}} + \sum_{k=0}^{n-1}
\frac{\lambda_k + \sum_{\ell=1}^{n-k}\ell
c_{\ell+k}\frac{\partial}{\partial c_\ell}}{y^{k+2}} +
\frac{L_{-1}}{y}\right)|I^{(n)}\rangle~,
\end{align}
where $\lambda_i$ are defined by \eqref{eq:lambda}. This implies that
the state \eqref{eq:collided} satisfies \eqref{eq:const}, and therefore is an
irregular state of rank $n$. Note that, since \eqref{eq:phin} is defined
in terms of the normal-ordered product, \eqref{eq:collided}
gives a well-defined state.
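As the simplest example, for $n=1$ the definition \eqref{eq:ck} reads $c_0 = \alpha_0+\alpha_1$ and $c_1 = \alpha_1 z_1$, so the colliding limit sends $z_1\to 0$ and $\alpha_1\to\infty$ with $\alpha_1 z_1$ kept fixed. By \eqref{eq:const} and \eqref{eq:lambda}, the resulting rank-$1$ state satisfies
\begin{align}
L_2|I^{(1)}\rangle = -c_1^2\,|I^{(1)}\rangle~,\qquad
L_1|I^{(1)}\rangle = 2(Q-c_0)c_1\,|I^{(1)}\rangle~,\qquad
L_0|I^{(1)}\rangle = \left(\Delta_{c_0} + c_1\frac{\partial}{\partial c_1}\right)|I^{(1)}\rangle~,
\end{align}
with $\Delta_{c_0} = c_0(Q-c_0)$, while $L_k|I^{(1)}\rangle = 0$ for $k\geq 3$ since the expansion of $T_>(y)$ terminates at $y^{-2n-2}$.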
As mentioned already, the irregular state $|I^{(n)}\rangle$ generally
depends on $n$ extra parameters, $\beta_0,\cdots, \beta_{n-1}$,
corresponding to the ``boundary condition'' of a solution to the
differential equations \eqref{eq:const}. These extra parameters
correspond to inserting screening operators in the product
\eqref{eq:phin}. Since screening operators
commute with the Virasoro algebra, this insertion does not break the
conditions \eqref{eq:const}.
\section{Irregular states in $Vir\oplus H$ modules}
\label{sec:V+H}
As reviewed in the previous section, the generalized AGT correspondence
allows us to evaluate the partition function
of $SU(2)$ gauge theory coupled to two $(A_1,D_{2n})$ theories as in
\eqref{eq:Z2}. In this section, we consider an extension of this generalized AGT
correspondence to the case in which the gauge group is $U(2)$ instead of
$SU(2)$. To that end, we start with the $U(2)$ version of the original
AGT correspondence and consider its colliding limit.
\subsection{$U(2)$ version of the original AGT}
The $U(2)$ version of the original AGT correspondence was studied in the
literature. First, it was found in \cite{Alday:2009aq} that the
Nekrasov partition function $\mathcal{Z}_{U(2)}$ of a $U(2)$ gauge
theory is generally related to that of the $SU(2)$ gauge
theory with the same matter content, $\mathcal{Z}_{SU(2)}$, by
\begin{align}
\mathcal{Z}_{U(2)} = \mathcal{Z}_{SU(2)} \mathcal{Z}_{U(1)}~,
\label{eq:U2}
\end{align}
where $\mathcal{Z}_{U(1)}$ is called the ``$U(1)$ factor'' and is regarded as
the partition function of the $U(1)$ part of the gauge theory.
The AGT
correspondence implies that
$\mathcal{Z}_{SU(2)}$ is identical to a conformal block of the 2d
Liouville CFT.
The 2d interpretation of the $U(1)$ factor was then given in
\cite{Alba:2010qc}; $\mathcal{Z}_{U(1)}$ is identical
to a correlation function of chiral vertex operators for an extra
Heisenberg algebra. To be concrete, let us focus on the linear quiver
gauge theory described by the quiver in Fig.~\ref{fig:quiver0}. As
shown in Eq.~\eqref{eq:linear-quiver}, $\mathcal{Z}_{SU(2)}$ is identified with the $(n+2)$-point conformal block of Liouville theory. On
the other hand, the $U(1)$ factor is identified as
\begin{align}
\mathcal{Z}_{U(1)} = \langle V^H_{\alpha_0}(z_0) \cdots
V^H_{\alpha_{n+1}}(z_{n+1})\rangle ~,
\label{eq:U2AGT}
\end{align}
where $V^H_\alpha(z)\equiv \exp\Big({2(\alpha-Q)
i \sum_{k<0}\frac{a_k}{k}z^{-k}}\Big)\exp\Big({2\alpha
i\sum_{k>0}\frac{a_k}{k}z^{-k}}\Big)$, and the loci $z_k$ of the vertex
operators coincide with those of the Liouville vertex operators in
Eq.~\eqref{eq:linear-quiver}. Here our convention for the Heisenberg
algebra is such that $[a_k,a_\ell] = \frac{k}{2}\delta_{k+\ell,0}$.
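With this convention, the correlator \eqref{eq:U2AGT} can be evaluated by a standard free-field (Baker--Campbell--Hausdorff) computation; assuming radial ordering $|z_1|>|z_2|$, the two-point function is
\begin{align}
\langle V^H_{\alpha_1}(z_1)\, V^H_{\alpha_2}(z_2)\rangle = \left(1-\frac{z_2}{z_1}\right)^{2\alpha_1(Q-\alpha_2)}~,
\end{align}
and, since the relevant commutators are central, the multi-point correlator \eqref{eq:U2AGT} reduces to a product of such pairwise factors in the radially ordered region.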
Combining \eqref{eq:linear-quiver},\eqref{eq:U2} and \eqref{eq:U2AGT}, we see that
$\mathcal{Z}_{U(2)}$ is identified with the correlator of $(n+2)$ chiral
vertex
operators of the form\footnote{To be precise, one can also insert
screening operators in the Liouville sector here.}
\begin{align}
\widehat{V}_{\alpha}(z)\equiv V^H_{\alpha}(z) \otimes V^L_{\alpha}(z)~,
\label{eq:VO}
\end{align}
where $V^L_{\alpha}(z)$ is the Virasoro primary vertex operator of
conformal weight $\alpha(Q-\alpha)$ in the
Liouville CFT.
Thus, the $U(2)$ version of the AGT
correspondence involves the direct sum of Virasoro and
Heisenberg algebras, which we denote by $Vir\oplus H$. Note that $L_k$
and $a_k$ commute with each other, since we consider the direct sum of the two algebras.
The action of $Vir\oplus H$ on $\widehat{V}_\alpha(z)$
is characterized by
\begin{align}
[L_n,\widehat{V}_\alpha(z)] &= \left(z^{n+1}\frac{\partial}{\partial z}+(n+1)\alpha(Q-\alpha) z^n\right)\widehat{V}_\alpha(z)~,\\[5pt]
[a_n,\widehat{V}_\alpha(z)] &= \left\{
\begin{array}{l}
-i\alpha z^n \widehat{V}_\alpha(z)\quad(n<0)\\
i(Q-\alpha)z^n \widehat{V}_\alpha(z)\quad(n>0)
\end{array}
\right.~.
\end{align}
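The second commutator follows directly from the definition of $V^H_\alpha(z)$ and $[a_k,a_\ell]=\frac{k}{2}\delta_{k+\ell,0}$, since $a_n$ acts only on the Heisenberg factor of $\widehat{V}_\alpha(z) = V^H_\alpha(z)\otimes V^L_\alpha(z)$. For example, for $n<0$ only the mode $a_{-n}$ in the annihilation-mode exponential contributes, and
\begin{align}
[a_n, V^H_\alpha(z)] = 2\alpha i\,\frac{[a_n,a_{-n}]}{-n}\,z^{n}\, V^H_\alpha(z) = -i\alpha z^n\, V^H_\alpha(z)~,
\end{align}
while for $n>0$ the creation-mode exponential contributes with coefficient $2(\alpha-Q)i$, giving $i(Q-\alpha)z^n\,V^H_\alpha(z)$.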
\subsection{$U(2)$ version of the generalized AGT}
\label{subsec:U2gAGT}
We now consider the $U(2)$ version of the generalized AGT
correspondence. Our idea is to start with the $U(2)$ version of the
original AGT, and take the same
colliding limit as the one reviewed in the latter half of Sec.~\ref{sec:AGT}.
To that end, we start with the quiver gauge theory
described in Fig.~\ref{fig:quiver0} with $U(2)$ gauge groups. The
partition function \eqref{eq:U2} is then identified with the product of
\eqref{eq:linear-quiver} and \eqref{eq:U2AGT}. Note that the loci of the
Heisenberg vertex
operators in \eqref{eq:U2AGT} coincide with those of the Liouville
vertex operators, which reflects the fact that the 4d gauge couplings of
$U(1)\subset U(2)$ and $SU(2)\subset U(2)$ are identical.
We now take the limit of parameters in which the 4d
theory flows to
the theory described by the quiver in Fig.~\ref{fig:quiver2} with $U(2)$
gauge group. On the 2d side, this corresponds to the colliding limit of
vertex operators \eqref{eq:VO}, and gives rise to an irregular state
$|\widehat{I}^{(n)}\rangle$ of $Vir\oplus H$. The precise definition of
$|\widehat{I}^{(n)}\rangle$ will be given below. The same argument as in
Sec.~\ref{sec:AGT} then leads us to identifying the inner product
\begin{align}
\mathcal{Z}^{2\times (A_1,D_{2n})}_{U(2)} = \langle
\widehat{I}^{(n)}|\widehat{I}^{(n)}\rangle~,
\label{eq:U2gAGT}
\end{align}
as the partition
function of $U(2)$ gauge theory coupled to two $(A_1,D_{2n})$ theories.
Our new irregular state $|\widehat{I}^{(n)}\rangle$ is characterized by
the actions of the Virasoro and Heisenberg algebras on it. These actions can be read off by keeping track of their
actions on $\widehat{V}_{\alpha_k}(z_k) = V^H_{\alpha_k}(z_k)\otimes
V^L_{\alpha_k}(z_k)$ in the colliding limit.
To see this,
let us consider the state
\begin{align}
|\widehat{\phi}_n(z_1,\cdots,z_n)\rangle \equiv \;\;
:\!\left(\prod^n_{i=1}\widehat{V}_{\alpha_i}(z_i)\right)\widehat{V}_{\alpha_0}(0)\!:
|0\rangle~.
\label{eq:tilde-phin}
\end{align}
The action of $T_{>}(y)$ on this state is the same as in Eq.~\eqref{eq:Tphi}.
Similarly, the action of $J_{>}(y)\equiv \sum_{k\geq 1}y^{-k-1}a_k$ is
written as
\begin{align}
J_{>}(y)|\widehat{\phi}_n(z_1,\cdots,z_n)\rangle = \sum_{i=0}^n
\frac{i(Q-\alpha_i)z_i}{y(y-z_i)}|\widehat{\phi}_n(z_1,\cdots,z_n)\rangle~,
\end{align}
where $z_0\equiv 0$.
We now take the colliding limit $z_i\to 0$ and $\alpha_i\to \infty$ with
\eqref{eq:ck} kept fixed. The irregular state
$|\widehat{I}^{(n)}\rangle$ is now defined by
\begin{align}
|\widehat{I}^{(n)}\rangle \equiv \lim_{\text{colliding limit}}|\widehat{\phi}_n(z_1,\cdots,z_n)\rangle~.
\end{align}
It is straightforward to show that $T_{>}(y)$ and $J_{>}(y)$ act on this
state as
\begin{align}
T_{>}(y)|\widehat{I}^{(n)}\rangle &= \left(\sum_{k=n}^{2n}
\frac{\lambda_k}{y^{k+2}}+
\sum_{k=0}^{n-1}\frac{\lambda_k+\sum_{\ell=1}^{n-k}\ell
c_{\ell+k}\frac{\partial}{\partial c_\ell}}{y^{k+2}} +
\frac{L_{-1}}{y}\right)|\widehat{I}^{(n)}\rangle~,
\\
J_{>}(y)|\widehat{I}^{(n)}\rangle &= \sum_{k=1}^{n}\frac{-ic_k}{y^{k+1}}|\widehat{I}^{(n)}\rangle~,
\end{align}
where $\lambda_k$ and $c_k$ are given by
\eqref{eq:lambda} and \eqref{eq:ck}, respectively.
From the above result, we see that $Vir\oplus H$ acts on
$|\widehat{I}^{(n)}\rangle$ as
\begin{align}
L_k|\widehat{I}^{(n)}\rangle &= \left\{
\begin{array}{l}
\lambda_k|\widehat{I}^{(n)}\rangle \qquad \text{for} \quad n\leq k\leq 2n
\\[2mm]
\left(\lambda_k + \sum_{\ell = 1}^{n-k}\ell\, c_{\ell+k} \frac{\partial
}{\partial c_{\ell}}\right)|\widehat{I}^{(n)}\rangle \qquad \text{for} \quad
0\leq k < n
\\
\end{array}
\right.~,
\label{eq:const2}
\\[3mm]
a_k |\widehat{I}^{(n)}\rangle &=
\left\{
\begin{array}{l}
\;\; \;\;0 \qquad \text{for}\quad n<k
\\[2mm]
-ic_k|\widehat{I}^{(n)}\rangle \qquad \text{for}\quad
1\leq k\leq n
\\
\end{array}
\right.~.
\label{eq:const3}
\end{align}
Note here that the above characterization of the irregular state does
not fix the overall normalization, as in the case of the $SU(2)$-version
of the generalized AGT correspondence. This leaves an ambiguity in
the computation of
the perturbative part of the partition function
\eqref{eq:U2gAGT}. However, it turns out that the instanton part of the partition
function can be unambiguously computed, as will be discussed in the following sections.
Note also that $|\widehat{I}^{(n)}\rangle$ is by definition decomposed into
the Virasoro part and the Heisenberg part as $|\hat{I}^{(n)}\rangle =
|I^{(n)}\rangle\otimes |I^{(n)}_H\rangle$, where $|I^{(n)}\rangle$ is
the irregular state of Virasoro algebra that was reviewed in
Sec.~\ref{sec:AGT}. While $|I^{(n)}\rangle$ depends on $n$ extra
parameters in addition to $c_0,\cdots,c_{n-1}$ and $c_n$,
the state $|I^{(n)}_H\rangle$ is uniquely fixed by
$c_1,\cdots,c_{n-1}$ and $c_n$ up to a prefactor. We here write down its explicit
expression:
\begin{align}
|I^{(n)}_H\rangle = \exp\left(-2i\sum_{k=1}^n \frac{c_k}{k} a_{-k}
\right)|0\rangle~,
\label{eq:IH}
\end{align}
where
the prefactor is fixed so that $|I^{(n)}_H\rangle$ reduces to
$|0\rangle$ when $c_k=0$. Since $|I^{(n)}\rangle$ and
$|I^{(n)}_H\rangle$ are respectively in a highest weight module of $Vir$
and $H$, we see that $|\widehat{I}^{(n)}\rangle$ is a state in a highest
weight module of $Vir\oplus H$. Note that
the Heisenberg sector admits no insertion of screening operators, and therefore
\eqref{eq:IH} is the unique expression for $|I^{(n)}_H\rangle$ up to a prefactor. Indeed, the constraints
\eqref{eq:const3} are eigenstate equations, whose solution is fixed (up
to the normalization) by
$\{c_k\}$ without
specifying a ``boundary condition.''
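Indeed, one can verify \eqref{eq:const3} directly on \eqref{eq:IH}: since the commutator $[a_k,a_{-\ell}] = \frac{k}{2}\delta_{k,\ell}$ is a $c$-number and $a_k|0\rangle = 0$ for $k\geq 1$, one finds
\begin{align}
a_k\,|I^{(n)}_H\rangle
= \Big[a_k\,,\; -2i\sum_{\ell=1}^n \frac{c_\ell}{\ell}\,a_{-\ell}\Big]\,|I^{(n)}_H\rangle
= \left\{
\begin{array}{l}
-ic_k\,|I^{(n)}_H\rangle \qquad \text{for}\quad 1\leq k\leq n
\\[2mm]
\;\;0 \qquad \text{for}\quad k> n
\\
\end{array}
\right.~,
\end{align}
in agreement with \eqref{eq:const3}.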
Given the $U(2)$-version of the generalized AGT correspondence
\eqref{eq:U2gAGT}, one can now study the decomposition of
$\mathcal{Z}_{U(2)}^{2\times (A_1,D_{2n})}$ as a sum over pairs of Young
diagrams as in Eq.~\eqref{eq:Nek2}. In the next section, we explicitly
evaluate this decomposition to read off the factor
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}$ in \eqref{eq:Nek2}.
\section{Nekrasov-type formula for AD matter}
\label{sec:Nek-AD}
Here we read off the factor
$\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}$ in \eqref{eq:Nek2} from
the $U(2)$-version of the generalized AGT correspondence
\eqref{eq:U2gAGT}. This factor can be interpreted as the contribution of
the $(A_1,D_{2n})$ theory at the fixed point corresponding to
$(Y_1,Y_2)$ on the $U(2)$ instanton moduli space.
\subsection{Decomposition}
To read off $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}$, we first decompose \eqref{eq:U2gAGT} as a sum over
pairs of Young diagrams, using a convenient basis of highest weight
modules of $Vir\oplus H$ that was found in
\cite{Alba:2010qc}. To describe it, let us denote by
\begin{align}
\mathcal{Z}_{Y_1,Y_2;W_1,W_2}^\text{bifund}(a,b,\alpha)~,
\label{eq:bifund0}
\end{align}
the contribution to the Nekrasov partition function from a
bi-fundamental hypermultiplet of $U(2)\times U(2)$. Here, $(a,b)$
stands for the VEVs of Coulomb branch operators in the vector multiplet
for $SU(2)\times SU(2) \subset U(2)\times U(2)$, and $\alpha$ is a mass parameter. The explicit
expression for \eqref{eq:bifund0} is written in Appendix
\ref{app:Nek}.
It was shown in \cite{Alba:2010qc} that there
exists an orthogonal basis, $|a;Y_1,Y_2\rangle$, of the highest weight module of $Vir \oplus H$ such that
\begin{align}
\frac{\langle a;Y_1,Y_2| V_\alpha(1) |b;W_1,W_2\rangle}{\langle
a|V_\alpha(1)|b\rangle} &=
\mathcal{Z}_{Y_1,Y_2;W_1,W_2}^\text{bifund}(a,b,\alpha)~,
\label{eq:basis}
\end{align}
where $|a\rangle$ is the highest weight state satisfying $L_0|a\rangle =
\Delta(a)|a\rangle$ and $L_n|a\rangle = a_n|a\rangle = 0$ for $n>0$,
$V_\alpha(z)$ denotes the vertex operator $\widehat{V}_\alpha(z)$ defined in \eqref{eq:VO}, and $Y_k$
and $W_k$ are arbitrary Young diagrams.
Note that the conjugate $\langle a;Y_1,Y_2|$ is not the usual
hermitian conjugate of $|a;Y_1,Y_2\rangle$; it is obtained
by expanding $|a;Y_1,Y_2\rangle$ as a linear combination of $L_{-k_1}^{m_1}\cdots L_{-k_p}^{m_p}
a_{-\ell_1}^{n_1}\cdots a_{-\ell_q}^{n_q}|a\rangle$, and then replacing each such
state with $\langle a| L_{k_p}^{m_p}\cdots L_{k_1}^{m_1}
a_{\ell_q}^{n_q}\cdots a_{\ell_1}^{n_1}$ without changing the coefficients.
The first few examples of
$|a;Y_1,Y_2\rangle$ are presented in Appendix \ref{app:basis}.
It was also
shown in \cite{Alba:2010qc} that $|a;Y_1,Y_2\rangle$ is generally a
linear combination of descendants of $|a\rangle$ at level
$(|Y_1|+|Y_2|)$.\footnote{Here, the ``level'' is defined by the sum of
the levels of the Virasoro and Heisenberg parts. For example, the level
of $L_{-k_1}(L_{-k_2})^3a_{-\ell}|a\rangle$ is $k_1+3k_2+\ell$.}
As discussed in \cite{Alba:2010qc},
the condition \eqref{eq:basis} and the fact that
$\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a) =
1/\mathcal{Z}_{Y_1,Y_2;Y_1,Y_2}^\text{bifund}(a,a,0)$ imply
\begin{align}
{\bf 1} =
\sum_{Y_1,Y_2}\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\,|a;Y_1,Y_2\rangle\langle
a;Y_1,Y_2|~,
\label{eq:dec-of-1}
\end{align}
on the highest weight $Vir\oplus H$-module associated with
$|a\rangle$. Note again that $\langle a;Y_1,Y_2|$ is not the usual
conjugate of $|a;Y_1,Y_2\rangle$.
Let us now take $|a\rangle$ to be the highest weight state of the $Vir\oplus H$-module that
includes $|\widehat{I}^{(n)}\rangle$. Then, by inserting \eqref{eq:dec-of-1}
in Eq.~\eqref{eq:U2gAGT}, one obtains the
following decomposition:
\begin{align}
\mathcal{Z}_{U(2)}^{2\times (A_1,D_{2n})} =
\sum_{Y_1,Y_2}\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\,\langle \widehat{I}^{(n)}|a;Y_1,Y_2\rangle\langle
a;Y_1,Y_2|\widehat{I}^{(n)}\rangle~.
\label{eq:decomp1}
\end{align}
We interpret the above expression as a sum over fixed points on
the moduli space of $U(2)$ instantons, and $\langle
a;Y_1,Y_2|\widehat{I}^{(n)}\rangle$ and $\langle
\hat{I}^{(n)}|a;Y_1,Y_2\rangle$ as the contributions of the
$(A_1,D_{2n})$ theories corresponding to $|\hat{I}^{(n)}\rangle$
and $\langle \hat{I}^{(n)}|$, respectively. Note that this
particularly implies that $\langle a;\emptyset,\emptyset|\hat{I}^{(n)}\rangle$ and $\langle
\hat{I}^{(n)}|a;\emptyset,\emptyset\rangle$ are the partition functions of the $(A_1,D_{2n})$ theories
with their flavor symmetries un-gauged. Since $|a;\emptyset,\emptyset\rangle
= |a\rangle$ \cite{Alba:2010qc}, this is indeed consistent with
\eqref{eq:ZAD1}.
\subsection{Identification of $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$}
\label{subsec:ZAD}
Comparing \eqref{eq:decomp1} with \eqref{eq:Nek2}, we see that it is natural to interpret
\begin{align}
\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2} \sim \frac{\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle}{\langle a|\hat{I}^{(n)}\rangle}~,\qquad
\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2} \sim \frac{\langle
\hat{I}^{(n)}|a;Y_1,Y_2\rangle}{\langle \hat{I}^{(n)}|a\rangle}~,
\label{eq:proportionality}
\end{align}
with possible proportionality constants. Note that the denominators in
\eqref{eq:proportionality} are necessary for
$\mathcal{Z}^{(A_1,D_{2n})}_{\emptyset,\emptyset} =
\tilde{\mathcal{Z}}_{\emptyset,\emptyset}^{(A_1,D_{2n})} = 1$.
Here, $|\widehat{I}^{(n)}\rangle$ and $\langle \widehat{I}^{(n)}|$
correspond to two different $(A_1,D_{2n})$ theories. Indeed, as seen
from their colliding-limit derivation, these $(A_1,D_{2n})$ theories
are differently coupled to the $U(1)$-part of the gauge
group. Therefore, $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}$ and
$\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2}$ are not identical.
In the rest of this subsection, we make the relations \eqref{eq:proportionality} more
precise.
To that end, we first focus on the left relation, and read off how the
4d and 2d parameters are related. Note that
$\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle$ on the RHS depends on $(2n+1)$ parameters. Indeed,
$|\hat{I}^{(n)}\rangle$ depends on $n$ extra parameters
$\beta_0,\cdots,\beta_{n-1}$ in addition to
$c_0,\cdots,c_{n}$, as reviewed in
Sec.~\ref{subsec:U2gAGT}. One of these extra parameters fixes the highest weight, $a$, of the
$Vir\oplus H$-module that includes $|\hat{I}^{(n)}\rangle$, and therefore
$\langle a;Y_1,Y_2|\hat{I}^{(n)}\rangle$ is completely fixed by these $(2n+1)$
parameters. On the other hand,
$\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}(a,m,\pmb{d},\pmb{u})$ on the LHS depends only
on $2n$ parameters, $a,m,\pmb{d}=(d_1,\cdots,d_{n-1})$ and
$\pmb{u}=(u_1,\cdots, u_{n-1})$. Here, $a$ and $\pmb{u}$ are the VEVs of
Coulomb branch operators, $\pmb{d}$ are relevant couplings and $m$ is
a mass parameter. Therefore, there is a discrepancy in the number of
parameters between the 2d and 4d sides.
To see this discrepancy more explicitly, let us identify the precise relation between the 2d and 4d parameters. We start with the Seiberg-Witten
(SW) curve of the $(A_1,D_{2n})$
theory \cite{Bonelli:2011aa, Xie:2012hs}
\begin{align}
x^2 = \frac{a^2}{z^2} + \sum_{k=1}^{n-1}\frac{u_k}{z^{n+2-k}} +
\frac{m}{z^{n+2}} + \sum_{k=1}^{n-1}\frac{d_k}{z^{2n+2-k}} +
\frac{1}{z^{2n+2}}~,
\label{eq:curve1}
\end{align}
where the SW 1-form is given by $xdz$.\footnote{Here $a$ is regarded as the mass parameter corresponding to a
flavor $SU(2)$ sub-group of the $(A_1,D_{2n})$ theory.} This curve is
identified, on the 2d side, as the classical limit $\epsilon_i\to 0$ of the following
expression \cite{Alday:2009aq}:
\begin{align}
x^2 = -\frac{\langle a |T(z)|\widehat{I}^{(n)}\rangle}{\langle
a|\widehat{I}^{(n)}\rangle} = -\frac{\Delta (a)}{z^2} + \cdots+ \frac{2c_nc_{n-1}}{z^{2n+1}}
+\frac{c_n^2}{z^{2n+2}}~,
\label{eq:curve2}
\end{align}
up to a change of variables that preserves the SW 1-form. Note that the
RHS of the above equation can be explicitly evaluated via
Eq.~\eqref{eq:const2}.\footnote{Recall here that, to recover the full
$\epsilon_i$-dependence, one needs to perform a replacement
corresponding to \eqref{eq:recover}. For the irregular state
$|I^{(n)}\rangle$, this replacement implies $c_k\to
c_k/\sqrt{\epsilon_1\epsilon_2}$ as seen
from its colliding-limit derivation.}
We here change the
variables in \eqref{eq:curve2} as $z\to (c_n)^{\frac{1}{n}}\,z$ and $x\to (c_n)^{-\frac{1}{n}}\,x$ so that the
coefficient of $1/z^{2n+2}$ is $1$. Comparing the classical limit of the
resulting equation
with \eqref{eq:curve1}, we obtain the relation between the 2d and 4d
parameters.\footnote{Note here that $\Delta(a)$ reduces to $-a^2$ in the classical limit
$\epsilon_1,\epsilon_2\to 0$.} To make this relation simple, let us define on the 2d side
\begin{align}
\gamma_k\equiv \frac{c_k}{(c_n)^{\frac{k}{n}}}~,
\end{align}
for $k=0,\cdots,n-1$. In terms of these variables, the relation between
the 2d and 4d parameters is expressed as
\begin{align}
d_k
&=
\sum_{\ell=n-k}^n \gamma_\ell \gamma_{2n-k-\ell}~,\quad m =
\sum_{\ell= 0}^n\gamma_\ell\gamma_{n-\ell}~,\quad u_k = \sum_{\ell =0}^{n-k} \gamma_\ell \gamma_{n-k-\ell} - \sum_{\ell=1}^{k}\ell \gamma_{\ell +
n-k}\frac{\partial}{\partial \gamma_\ell}\mathcal{F}_{(A_1,D_{2n})}~,
\label{eq:coupling}
\end{align}
where the derivatives
$\partial/\partial \gamma_\ell$ are defined with
$\vec{\gamma}\equiv (\gamma_0,\cdots,\gamma_{n-1})$ and $c_n$ taken
as independent variables, and $\mathcal{F}_{(A_1,D_{2n})}$ is the
classical limit of $\log \langle a|\hat{I}^{(n)}\rangle$.
These expressions imply that, when one takes
$\vec{\gamma}$ and $c_n$ as independent variables, all the 4d
parameters are independent of $c_n$.\footnote{To prove this statement,
one needs to show that $\frac{\partial}{\partial \gamma_\ell}\mathcal{F}_{(A_1,D_{2n})}$ is independent
of $c_n$. This can be shown as follows. As we will show below, $\langle
a;Y_1,Y_2|\widehat{I}^{(n)}\rangle =
(c_n)^{\frac{\Delta_a-\Delta_{c_0}+|Y_1|+|Y_2|}{n}}
f_{Y_1,Y_2}(\vec{\gamma})$ for a function $f_{Y_1,Y_2}(\vec{\gamma})$ independent of $c_n$. Setting $Y_1=Y_2=\emptyset$, we find
$\log \langle a|\widehat{I}^{(n)}\rangle =
\frac{\Delta_a-\Delta_{c_0}}{n}\log c_n + \log
f_{\emptyset,\emptyset}(\vec{\gamma})$. This implies that
$\frac{\partial}{\partial \gamma_\ell}\log\langle a|\widehat{I}^{(n)}\rangle=\frac{\partial}{\partial \gamma_\ell}\log
f_{\emptyset,\emptyset}(\vec{\gamma})$. Since this is independent of
$c_n$, its classical limit $\frac{\partial}{\partial
\gamma_\ell}\mathcal{F}_{(A_1,D_{2n})}$ is also independent of $c_n$ when
written in terms of $\vec{\gamma}$ and $c_n$.}
This reflects the conformal invariance of $(A_1,D_{2n})$, and explains the discrepancy in
the number of parameters between the 2d and 4d sides.
The above discussion implies that, for the left relation in
\eqref{eq:proportionality} to be an equality, the $c_n$-dependence of
the RHS needs to be absorbed into the proportionality factor.
To identify this proportionality factor, let us consider
\begin{align}
n\,c_n\left.\frac{\partial}{\partial c_n}\right|_{\vec{\gamma}}\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle~,
\label{eq:cdc1}
\end{align}
where $\partial/\partial c_n|_{\vec{\gamma}}$ is the derivative with respect to
$c_n$ with $\vec{\gamma} = (\gamma_0,\cdots,\gamma_{n-1})$ kept
fixed. From \eqref{eq:const2}, we see that this is identical to
\begin{align}
\langle a;Y_1,Y_2|(L_0 -\Delta_{c_0}) |\widehat{I}^{(n)}\rangle =
\left(\Delta_a -\Delta_{c_0} +|Y_1|+|Y_2|\right)\langle a;Y_1,Y_2|\hat{I}^{(n)}\rangle~.
\label{eq:cdc2}
\end{align}
The equality between the above two implies that $\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle \sim (c_n)^{\frac{\Delta_a-\Delta_{c_0}
+ |Y_1|+|Y_2|}{n}}$, and
therefore the ratio $(c_n)^{-\frac{|Y_1|+|Y_2|}{n}}\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle/\langle a|\hat{I}^{(n)}\rangle$ is independent of $c_n$ when written in
terms of
$\vec{\gamma}$ and $c_n$.\footnote{Recall here that $\langle
a;\emptyset,\emptyset| = \langle a|$.} This
suggests the following identification
\begin{align}
\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2} =
(\zeta c_n)^{-\frac{|Y_1|+|Y_2|}{n}}\frac{\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle}{\langle a|\hat{I}^{(n)}\rangle}~,
\label{eq:proposal1}
\end{align}
where $\zeta$ is a possible numerical constant independent of all
variables. Note that $\zeta$ above can be absorbed by rescaling the
instanton factor $\Lambda$.
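To make the scaling behavior used above fully explicit, note that equating \eqref{eq:cdc1} with \eqref{eq:cdc2} gives a first-order Euler-type equation for $G_{Y_1,Y_2}\equiv \langle a;Y_1,Y_2|\hat{I}^{(n)}\rangle$ at fixed $\vec{\gamma}$, which integrates immediately:
\begin{align}
n\,c_n\left.\frac{\partial G_{Y_1,Y_2}}{\partial c_n}\right|_{\vec{\gamma}} = \left(\Delta_a-\Delta_{c_0}+|Y_1|+|Y_2|\right)G_{Y_1,Y_2}
\quad\Longrightarrow\quad
G_{Y_1,Y_2} = (c_n)^{\frac{\Delta_a-\Delta_{c_0}+|Y_1|+|Y_2|}{n}}\,f_{Y_1,Y_2}(\vec{\gamma})~,
\end{align}
where the integration ``constant'' $f_{Y_1,Y_2}$ depends only on $\vec{\gamma}$.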
A parallel discussion shows that the parameters of
$\tilde{\mathcal{Z}}_{Y_1,Y_2}(a,\tilde{m},\tilde{\pmb{d}},\tilde{\pmb{u}})$
are related to those of $\langle \hat{I}^{(n)}|a;Y_1,Y_2\rangle$ by
a relation similar to \eqref{eq:coupling}. As $\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle$ depends on $c_0,\cdots,c_n$ and
$\beta_0,\cdots,\beta_{n-1}$, $\langle \hat{I}^{(n)}|a;Y_1,Y_2\rangle$
also depends on $(2n+1)$ parameters, which we denote by
$\tilde{c}_0,\cdots,\tilde{c}_{n}$ and
$\tilde{\beta}_0,\cdots,\tilde{\beta}_{n-1}$.
From the same argument as above, we see that
\begin{align}
\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2} =
(-\zeta\tilde{c}_n^{\,*})^{-\frac{|Y_1|+|Y_2|}{n}}\frac{\langle
\hat{I}^{(n)}|a;Y_1,Y_2\rangle}{\langle \hat{I}^{(n)}|a\rangle}~,
\label{eq:proposal2}
\end{align}
where $\tilde{c}_n^{\,*}$ is the complex conjugate of
$\tilde{c}_n$.\footnote{The opposite sign in the bracket in $(-\tilde{c}_n^*)^{-\frac{|Y_1|+|Y_2|}{n}}$ can be
understood as follows. First recall that $\langle a;Y_1,Y_2|$ in
\eqref{eq:proposal2} is {\it not} the usual conjugate of
$|a;Y_1,Y_2\rangle$, as discussed in \cite{Alba:2010qc}. Indeed,
$\langle a;Y_1,Y_2|$ is obtained by expanding $|a;Y_1,Y_2\rangle$ as a
linear combination of $L_{-k_1}^{m_1}\cdots
L_{-k_p}^{m_p}a_{-\ell_1}^{n_1}\cdots a_{-\ell_q}^{n_q}|a\rangle$, and
then replacing each such state with $\langle a|L_{k_p}^{m_p}\cdots
L_{k_1}^{m_1}a_{\ell_q}^{n_q}\cdots a_{\ell_1}^{n_1}$ without changing
the coefficients. This implies that $\langle
\hat{I}^{(n)}|a;Y_1,Y_2\rangle$ is obtained from $\langle
a;Y_1,Y_2|\hat{I}^{(n)}\rangle$ by replacing
$\langle a|L_{k_p}^{m_p}\cdots
L_{k_1}^{m_1}a_{\ell_q}^{n_q}\cdots
a_{\ell_1}^{n_1}|\hat{I}^{(n)}\rangle
$ with $\langle \hat{I}^{(n)}|L_{-k_p}^{m_p}\cdots
L_{-k_1}^{m_1}a_{-\ell_q}^{n_q}\cdots a_{-\ell_1}^{n_1}|a\rangle$. From \eqref{eq:const2} and \eqref{eq:const3}, we see that this is
equivalent to the replacement
\begin{align}
c_k \longrightarrow -\tilde{c}_k^*~,\qquad Q\to -Q^*
\end{align}
for $k=0,\cdots,n$. In particular, $c_n$ in \eqref{eq:proposal1} is
replaced by $-\tilde{c}_n^*$.}
In the above identifications, $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ and
$\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2}$ are regarded as the contribution
of the $(A_1,D_{2n})$ theory corresponding to $|\widehat{I}^{(n)}\rangle$
and $\langle \widehat{I}^{(n)}|$, respectively. As discussed at the beginning of this section, these two $(A_1,D_{2n})$ theories have different couplings to the $U(1)$
part of the gauge group.
As we will see in Sec.~\ref{subsec:check}, the difference between $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ and
$\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2}$ is an AD counterpart of the difference between the fundamental and
anti-fundamental hypermultiplets of $U(2)$.
\subsection{Identification of $\Lambda$}
The identifications \eqref{eq:proposal1} and \eqref{eq:proposal2} imply that
\eqref{eq:decomp1} is re-expressed as
\begin{align}
\mathcal{Z}^{2\times (A_1,D_{2n})} &=
\mathcal{Z}_\text{pert}\sum_{Y_1,Y_2}(-\zeta^2 c_n\tilde{c}_n^{\,*})^{\frac{|Y_1|+|Y_2|}{n}}\mathcal{Z}^\text{vec}_{Y_1,Y_2}(a)\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_{2n})}(a,m,\pmb{d},\pmb{u})\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2}(a,\tilde{m},\tilde{\pmb{d}},\tilde{\pmb{u}})~,
\label{eq:decomp}
\end{align}
with the perturbative part $\mathcal{Z}_\text{pert}\equiv \langle \hat{I}^{(n)}|a\rangle\langle a|\hat{I}^{(n)}\rangle$.
Comparing \eqref{eq:decomp} with \eqref{eq:Nek2}, we identify the
4d dynamical scale as
\begin{align}
\Lambda^2 = - \zeta^2 c_n\tilde{c}_n^{\,*}~.
\label{eq:Lambda}
\end{align}
Recall that the $(A_1,D_{2n})$ sector is independent of $c_n$ (and $\tilde{c}_n^{\,*}$) as a
result of its conformal invariance. The $U(2)$ gauge coupling, by
contrast, breaks this conformal invariance through the dynamical scale,
and therefore it is natural that $\Lambda$ depends on $c_n$ and
$\tilde{c}_n^{\,*}$.
The identification \eqref{eq:Lambda} is also consistent with the SW
curve. The curve of the theory shown in Fig.~\ref{fig:quiver2} is written as
\cite{Bonelli:2011aa}
\begin{align}
x^2 = \Lambda_0^2 z^{2n-2} + \cdots + \frac{\Lambda_0^2}{z^{2n+2}}~,
\label{eq:curve3}
\end{align}
where $\Lambda_0$ is a dynamical scale that can differ from
$\Lambda$ by a numerical factor, and the ellipsis stands for a Laurent polynomial in $z$ which is less singular
than $z^{2n-2}$ at $z=\infty$ and than $1/z^{2n+2}$ at
$z=0$.\footnote{The SW 1-form is again given by $xdz$.} Now,
by the same argument as around Eq.~\eqref{eq:curve2}, this curve is
identified as
\begin{align}
x^2 = -\frac{\langle \hat{I}^{(n)}| T(z)|\hat{I}^{(n)}\rangle}{\langle
\hat{I}^{(n)}|\hat{I}^{(n)}\rangle} =
(\tilde{c}_n^{\,*})^2z^{2n-2} + \cdots + \frac{(c_n)^2}{z^{2n+2}}~,
\label{eq:curve4}
\end{align}
up to a change of variables that preserves the SW 1-form. After changing variables as $z \to
z\,(-c_n/\tilde{c}_n^{\,*})^{\frac{1}{2n}}$ and $x\to
x\,(-c_n/\tilde{c}_n^{\,*})^{-\frac{1}{2n}}$, the curve \eqref{eq:curve4}
is re-expressed as $x^2 = (-c_n\tilde{c}_n^{\,*})z^{2n-2} + \cdots
+(-c_n\tilde{c}_n^{\,*})/z^{2n+2}$. Comparing this with \eqref{eq:curve3},
we find $\Lambda_0^2= -c_n\tilde{c}_n^{\,*}$, which
coincides with \eqref{eq:Lambda} up to a numerical factor.
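Explicitly, writing $\lambda \equiv -c_n/\tilde{c}_n^{\,*}$, the powers of $\lambda$ in the rescaling $z\to \lambda^{\frac{1}{2n}}z$, $x\to \lambda^{-\frac{1}{2n}}x$ combine in the boundary terms of \eqref{eq:curve4} as
\begin{align}
x^2 = (\tilde{c}_n^{\,*})^2\,\lambda^{\frac{2n-2}{2n}+\frac{1}{n}}\,z^{2n-2}+\cdots+(c_n)^2\,\lambda^{-\frac{2n+2}{2n}+\frac{1}{n}}\,\frac{1}{z^{2n+2}}
= -c_n\tilde{c}_n^{\,*}\,z^{2n-2}+\cdots+\frac{-c_n\tilde{c}_n^{\,*}}{z^{2n+2}}~,
\end{align}
while the SW 1-form $x\,dz$ is invariant since the factors $\lambda^{\mp\frac{1}{2n}}$ cancel.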
\subsection{Consistency check}
\label{subsec:check}
Since the $(A_1,D_2)$
theory is a theory of free hypermultiplets in the doublet of $U(2)$, one
can perform a consistency check of our proposals \eqref{eq:proposal1}
and \eqref{eq:proposal2} by comparing them with Nekrasov's formula
for fundamental and anti-fundamental hypermultiplets.
Let us first consider \eqref{eq:proposal1}.
In the case of $n=1$, the irregular state involved in
\eqref{eq:proposal1} satisfies
\begin{align}
L_2|\hat{I}^{(1)}\rangle &= -c_1^2|\hat{I}^{(1)}\rangle~,
\\[1mm]
L_1|\hat{I}^{(1)}\rangle &= 2(Q-c_0)c_1|\hat{I}^{(1)}\rangle~,
\\
L_0|\hat{I}^{(1)}\rangle &=
\left(\Delta_{c_0}+c_1\frac{\partial}{\partial
c_1}\right)|\hat{I}^{(1)}\rangle~,
\\
a_1|\hat{I}^{(1)}\rangle &= -ic_1|\hat{I}^{(1)}\rangle~,
\end{align}
together with $a_k|\hat{I}^{(1)}\rangle = 0$ for $k\geq 2$.
These equations are enough to compute the ratio of inner products $\langle
a;Y_1,Y_2|\hat{I}^{(1)}\rangle/\langle a|\hat{I}^{(1)}\rangle$. We then find that\footnote{We checked this equality for
$|Y_1|+|Y_2|\leq 6$.}
\begin{align}
\left(-\frac{c_1}{2}\right)^{-|Y_1|-|Y_2|}\frac{\langle a;Y_1,Y_2|\hat{I}^{(1)} \rangle}{\langle
a|\hat{I}^{(1)}\rangle} = \mathcal{Z}^\text{fund}_{Y_1,Y_2}(a,m)~,
\label{eq:check1}
\end{align}
where $\mathcal{Z}^\text{fund}_{Y_1,Y_2}$ is the contribution from a
fundamental hypermultiplet of $U(2)$ as reviewed in Appendix
\ref{app:Nek}.
The mass parameter $m$ is related to $c_0$ by
\begin{align}
m = c_0-\frac{Q}{2}~,
\label{eq:m1}
\end{align}
which coincides with \eqref{eq:coupling} in the classical limit
$\epsilon_i\to 0$.
We see that \eqref{eq:check1} is perfectly consistent with our proposal
\eqref{eq:proposal1} for $\zeta = -1/2$.
We also perform the same computation for $\langle
\hat{I}^{(1)}|a;Y_1,Y_2\rangle/\langle \hat{I}^{(1)}|a\rangle$ to find
that\footnote{We also checked this equality for $|Y_1|+|Y_2|\leq 6$.}
\begin{align}
\left(\frac{\tilde{c}_1^{\,*}}{2}\right)^{-|Y_1|-|Y_2|}\frac{ \langle
\hat{I}^{(1)}|a;Y_1,Y_2\rangle}{\langle \hat{I}^{(1)}|a\rangle} = \mathcal{Z}^\text{anti-fund}_{Y_1,Y_2}(a,\tilde{m})~,
\end{align}
where $\tilde{m}$ is similarly identified as
\begin{align}
\tilde{m} = \left(\tilde{c}_0-\frac{Q}{2}\right)^*~.
\end{align}
This is also in perfect agreement with our proposal \eqref{eq:proposal2} for
$\zeta = -1/2$.
The above two checks suggest that
the difference between $\mathcal{Z}^{(A_1,D_{2n})}_{Y_1,Y_2}$ and
$\tilde{\mathcal{Z}}^{(A_1,D_{2n})}_{Y_1,Y_2}$ can be regarded as the AD counterpart of
the difference between the fundamental and anti-fundamental hypermultiplets.
\section{Application to $(A_3,A_3)$ theory}
\label{sec:A3A3}
In this section, we apply our method to the $(A_3,A_3)$ theory and
compute its partition function. Recall that the $(A_3,A_3)$ theory is
described by the quiver diagram in Fig.~\ref{fig:quiver1}. When the
gauge group is replaced by $U(2)$, the partition function of the theory
is given by
\begin{align}
\mathcal{Z}_{U(2)} = \mathcal{Z}^{U(2)}_\text{pert}\sum_{Y_1,Y_2}q^{|Y_1|+|Y_2|}
\mathcal{Z}_{Y_1,Y_2}^\text{vec}(a)\mathcal{Z}_{Y_1,Y_2}^\text{fund}(a,M)\prod_{i=1}^2 \mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}(a,m_i,d_i,u_i)~,
\label{eq:Z-A3A3}
\end{align}
where $\mathcal{Z}^{(A_1,D_4)}_{Y_1,Y_2}$ is the contribution of
the $(A_1,D_4)$ theory that we have identified in
Eq.~\eqref{eq:proposal1}, and $\mathcal{Z}^{U(2)}_\text{pert}$ is the
perturbative contribution that makes the series in $q$ start with $1$. The parameters $m_i$, $d_i$ and $u_i$ are
respectively a mass parameter, a relevant coupling of dimension $1/2$, and the VEV of
the Coulomb branch operator of dimension $3/2$ in the $i$-th $(A_1,D_4)$
theory. Since the $SU(2)$ gauge coupling is exactly marginal, the above
expression involves the exponentiated marginal gauge coupling $q$
instead of a dynamical scale.
Note that, depending on how the $U(1)$-part of the gauge group
couples to $(A_1,D_4)$, its contribution to the partition function is
$\mathcal{Z}^{(A_1,D_4)}_{Y_1,Y_2}$ or
$\tilde{\mathcal{Z}}^{(A_1,D_4)}_{Y_1,Y_2}$. In this section, we focus on the case
in which both of
the two $(A_1,D_4)$ theories couple to the $U(1)$ in the way
corresponding to $\mathcal{Z}^{(A_1,D_4)}_{Y_1,Y_2}$. Replacing one or
two of $\mathcal{Z}^{(A_1,D_4)}_{Y_1,Y_2}$ with
$\tilde{\mathcal{Z}}^{(A_1,D_4)}_{Y_1,Y_2}$, one would obtain a different
$\mathcal{Z}_{U(2)}$, which is however expected to give the same $\mathcal{Z}_{SU(2)}$
when the $U(1)$ factor $\mathcal{Z}_{U(1)}$ is removed as in \eqref{eq:U1}.
The factor $\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}$ in Eq.~\eqref{eq:Z-A3A3}
is given by
\begin{align}
\mathcal{Z}_{Y_1,Y_2}^{(A_1,D_4)}(a,m,d,u) = \left(-\frac{c_2}{2}\right)^{-\frac{|Y_1|+|Y_2|}{2}}\frac{\langle
a;Y_1,Y_2|\hat{I}^{(2)}\rangle}{\langle a|\hat{I}^{(2)}\rangle}~,
\label{eq:Z-A1D4}
\end{align}
where $m,d$ and $u$ are identified as
\begin{align}
m = 2c_0 +\frac{c_1^2}{c_2}~,\qquad d =
\frac{2c_1}{\sqrt{c_2}}~,\qquad u = \frac{2c_0 c_1}{\sqrt{c_2}}-
\sqrt{c_2}\frac{\partial \mathcal{F}_{(A_1,D_4)}}{\partial c_1}~,
\label{eq:mdu}
\end{align}
with $\mathcal{F}_{(A_1,D_4)}$ being the classical limit of $\log \langle
a|\hat{I}^{(2)}\rangle$. Note that we here set $\zeta = -1/2$ in
\eqref{eq:proposal1} to avoid various numerical factors
in the expressions below. This factor can be generated or absorbed by rescaling $q$ in
the expression \eqref{eq:Z-A3A3}.
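As a small algebraic aside, the first two relations in \eqref{eq:mdu} are easily inverted:
\begin{align}
c_1 = \frac{d}{2}\sqrt{c_2}~,\qquad c_0 = \frac{m}{2}-\frac{d^2}{8}~,
\end{align}
while $c_2$ itself drops out of $\mathcal{Z}^{(A_1,D_4)}_{Y_1,Y_2}$ by the scaling argument of the previous section.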
The irregular state $|\hat{I}^{(2)}\rangle$ is
characterized by
\begin{align}
L_4|\hat{I}^{(2)}\rangle &= -c_2^2|\hat{I}^{(2)}\rangle~,
\label{eq:L4}
\\
L_3|\hat{I}^{(2)}\rangle &= -2c_1c_2|\hat{I}^{(2)}\rangle~,
\\
L_2|\hat{I}^{(2)}\rangle &=
-(c_1^2+c_2(2c_0-3Q))|\hat{I}^{(2)}\rangle~,
\label{eq:L2}
\\
L_1|\hat{I}^{(2)}\rangle &= \left(c_2\frac{\partial}{\partial c_1}
-2c_1(c_0-Q)\right)|\hat{I}^{(2)}\rangle~,
\\
L_0|\hat{I}^{(2)}\rangle &= \left(\Delta_{c_0} + c_1\frac{\partial}{\partial
c_1} + 2c_2\frac{\partial}{\partial c_2}\right)|\hat{I}^{(2)}
\rangle~,
\end{align}
and $a_k|\hat{I}^{(2)}\rangle=-ic_k|\hat{I}^{(2)}\rangle$ for $k=1,2$
together with $a_k|\hat{I}^{(2)}\rangle = 0$ for $k>2$.
To extract the partition function of the $(A_3,A_3)$ theory from
\eqref{eq:Z-A3A3}, one has to remove the contribution of the
$U(1)$-part of the gauge group. As discussed in Sec.~\ref{sec:intro},
this can be done by dividing \eqref{eq:Z-A3A3} by a $U(1)$ factor,
$\mathcal{Z}_{U(1)}$. Therefore the partition function of the
$(A_3,A_3)$ theory is identified as
\begin{align}
\mathcal{Z}_{(A_3,A_3)} =
\frac{\mathcal{Z}_{U(2)}}{\mathcal{Z}_{U(1)}}~.
\label{eq:U2/U1}
\end{align}
While it is beyond the scope of this paper to
determine the $U(1)$ factor, we know that $\mathcal{Z}_{U(1)}$ is independent
of the parameter $a$, since $a$ is the VEV of a
scalar in the $SU(2)$ vector multiplet that is neutral under the
$U(1)$. Below, we use this fact and compute the
classical limit of $\mathcal{Z}_{(A_3,A_3)}$.
\subsection{Prepotential}
\label{subsec:prepotential}
Here we consider the classical limit $\epsilon_i\to 0$, and compute the
prepotential of the $(A_3,A_3)$ theory
\begin{align}
\mathcal{F}^{(A_3,A_3)} \equiv \lim_{\epsilon_i\to 0}
\left(-\epsilon_1\epsilon_2\log \mathcal{Z}_{(A_3,A_3)}\right)~.
\label{eq:F1}
\end{align}
This prepotential splits into the perturbative and instanton parts as $\mathcal{F}^{(A_3,A_3)} = \mathcal{F}_\text{pert}^{(A_3,A_3)} +
\mathcal{F}^{(A_3,A_3)}_\text{inst}$, and we are particularly
interested in the instanton part $\mathcal{F}_\text{inst}^{(A_3,A_3)}$.\footnote{Note that the perturbative part
contains the prepotential of the
$(A_1,D_4)$ theories (with their flavor symmetries ungauged).}
The
instanton part is generally expanded as
\begin{align}
\mathcal{F}^{(A_3,A_3)}_\text{inst} = \sum_{k=1}^\infty \mathcal{F}_k
q^k~.
\label{eq:instA3}
\end{align}
Below, we will compute the coefficients, $\mathcal{F}_k$, in this expansion.
To that end, let us first consider
\begin{align}
\mathcal{F}^{U(2)}\equiv \lim_{\epsilon_i\to 0} \left(-\epsilon_1\epsilon_2\log
\mathcal{Z}_{U(2)}\right)~,
\label{eq:F2}
\end{align}
which is the prepotential of the theory with the gauge group replaced by
$U(2)$. This prepotential also splits into the perturbative part,
${\displaystyle \lim_{\epsilon_i\to 0} (-\epsilon_1\epsilon_2\log \mathcal{Z}_\text{pert}^{U(2)})}$, and
the instanton part
\begin{align}
\mathcal{F}^{U(2)}_\text{inst} \equiv \lim_{\epsilon_i\to
0}\left(-\epsilon_1\epsilon_2\log\frac{\mathcal{Z}^{U(2)}}{\mathcal{Z}^{U(2)}_\text{pert}}\right)~.
\label{eq:instU2}
\end{align}
The instanton part \eqref{eq:instU2} is
identical to $\mathcal{F}^{(A_3,A_3)}_\text{inst}$ up to the
contribution of
the $U(1)$ factor. Our
strategy is to compute $\mathcal{F}^{U(2)}_\text{inst}$ using the
formula \eqref{eq:Z-A3A3}, and then
strip off the $U(1)$ factor to obtain $\mathcal{F}_\text{inst}^{(A_3,A_3)}$.
Note
that the
computation of $\mathcal{Z}_{U(2)}$ via \eqref{eq:Z-A3A3} and \eqref{eq:Z-A1D4} eventually reduces to evaluating
\begin{align}
\langle
a|L_{k_p}^{m_p}\cdots L_{k_1}^{m_1}a_{\ell_q}^{n_q} \cdots
a_{\ell_1}^{n_1}|\hat{I}^{(2)}\rangle~,
\label{eq:fragment}
\end{align}
for positive integers $k_i,m_i,\ell_j$ and $n_j$. Using
\eqref{eq:L4}--\eqref{eq:L2} and the fact that $L_k|\hat{I}^{(2)}\rangle
= a_{\ell}|\hat{I}^{(2)}\rangle = 0$ for $k> 4$ and $\ell> 2$, this
computation further reduces to evaluating
\begin{align}
\langle a|L_1^k|\hat{I}^{(2)}\rangle =
\left(c_2\frac{\partial}{\partial c_1} - 2c_1(c_0-Q)\right)^k
\langle a|\hat{I}^{(2)}\rangle~,
\label{eq:fragment2}
\end{align}
where $\langle a| \hat{I}^{(2)}\rangle$ is
the partition function of the $(A_1,D_4)$ theory (with its
flavor symmetry ungauged). Therefore, to compute $\mathcal{Z}_{U(2)}$ for general
$\Omega$-background parameters, one needs to know how
$\langle a|\widehat{I}^{(2)}\rangle$ depends on $c_1$.\footnote{The $1/c_1$-expansion of
$\langle a|\hat{I}^{(2)}\rangle$ was carefully studied in
\cite{Nishinaka:2019nuy}.} However, in the classical limit $\epsilon_i\to
0$, one can skip this procedure. Indeed, recovering the full
$\epsilon_i$-dependence by $c_k\to
c_k/\sqrt{\epsilon_1\epsilon_2}$, we see that in the classical limit
\begin{align}
\langle a|L_1^k|\hat{I}^{(2)}\rangle = (c_2)^{\frac{k}{2}}(-u)^k
\langle a |\hat{I}^{(2)}\rangle~,
\label{eq:L1k}
\end{align}
where $u$ is defined by Eq.~\eqref{eq:mdu}. Given
\eqref{eq:L4}--\eqref{eq:L2} and \eqref{eq:L1k}, it is straightforward to
compute the classical limit of $\mathcal{Z}_{U(2)}$, and therefore $\mathcal{F}_\text{inst}^{U(2)}$, order by
order in $q$.
We now turn to the $U(1)$ factor. While it is generically non-vanishing,
the contribution from the $U(1)$-factor turns out to vanish when all the
dimensionful parameters in four dimensions, except for $a$ and
$\epsilon_i$, are turned off. Indeed, in the
classical limit, $\mathcal{Z}_{U(1)}\sim
\exp\left(-\frac{1}{\epsilon_1\epsilon_2}\mathcal{F}_{U(1)}\right)$ with
$\mathcal{F}_{U(1)}$ being independent of $\epsilon_i$. When
dimensionful parameters are turned off except for $a$ and $\epsilon_i$,
$\mathcal{F}_{U(1)}$ must be proportional to $a^2$ for dimensional reasons. However, as discussed below \eqref{eq:U2/U1},
$\mathcal{Z}_{U(1)}$ must be independent of $a$. This means that the
proportionality constant is zero so that
$\mathcal{F}_{U(1)}=0$.
Let us now focus on the case in which $M,m_i,d_i$
and $u_i$ in \eqref{eq:Z-A3A3} are all turned off. Then the only
non-vanishing dimensionful parameters are $a$ and $\epsilon_i$. Since the $U(1)$-factor is trivial in
this case, one can identify
\eqref{eq:F1} with \eqref{eq:F2}, and therefore \eqref{eq:instA3} with \eqref{eq:instU2}.
With this identification,
we finally obtain
\begin{align}
\mathcal{F}^{(A_3,A_3)}_\text{inst}(q;a) = \left(\frac{1}{4}q^2 + \frac{13}{128}q^4 +
\frac{23}{384}q^6 + \frac{2701}{65536}q^8 + \cdots \right)a^2~.
\label{eq:F-A3A3}
\end{align}
Remarkably, this expression is closely related to the instanton
part of the prepotential of the $SU(2)$ gauge theory with
four fundamental flavors. Indeed, when all the mass parameters are turned
off, the latter is given by
\begin{align}
\mathcal{F}_\text{inst}^{N_f=4}(q;a) = \left(\frac{1}{2}q +
\frac{13}{64}q^2 + \frac{23}{192}q^3 +
\frac{2701}{32768}q^4 + \cdots \right)a^2~,
\label{eq:F-Nf=4}
\end{align}
as shown in Appendix B.3 of \cite{Alday:2009aq}.
Comparing
\eqref{eq:F-A3A3} and \eqref{eq:F-Nf=4}, we see that
\begin{align}
2\mathcal{F}_\text{inst}^{(A_3,A_3)}(q;a) =
\mathcal{F}_\text{inst}^{N_f=4}(q^2;a)~,
\label{eq:surprise}
\end{align}
at least up to $\mathcal{O}(q^8)$.
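The coefficient matching behind \eqref{eq:surprise} can be verified mechanically with exact rational arithmetic; the following minimal sketch checks it directly from the expansions \eqref{eq:F-A3A3} and \eqref{eq:F-Nf=4}.

```python
from fractions import Fraction as Fr

# a^2-coefficients quoted in eqs. (F-A3A3) and (F-Nf=4), keyed by power of q
f_a3a3 = {2: Fr(1, 4), 4: Fr(13, 128), 6: Fr(23, 384), 8: Fr(2701, 65536)}
f_nf4 = {1: Fr(1, 2), 2: Fr(13, 64), 3: Fr(23, 192), 4: Fr(2701, 32768)}

# 2 F_inst^{(A3,A3)}(q) = F_inst^{Nf=4}(q^2): twice the q^{2k} coefficient
# of the former must equal the q^k coefficient of the latter.
for k, coeff in f_nf4.items():
    assert 2 * f_a3a3[2 * k] == coeff
```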
\subsection{S-duality}
\label{subsec:S-duality}
Here, we show that one can read off the action of the S-duality group on the $(A_3,A_3)$
theory assuming the remarkable identity \eqref{eq:surprise} extends to
the full prepotential. To that end, let us first give a quick
review of the
S-duality of $SU(2)$ gauge theory with four fundamental flavors. When
the mass parameters are turned off, the full prepotential of this theory
is written as
\begin{align}
\mathcal{F}^{N_f=4} = (\log q_\text{IR})a^2~,
\label{eq:IR_Nf=4}
\end{align}
where $q_\text{IR}$ is related to the IR theta angle and gauge
coupling by
\begin{align}
q_\text{IR} = e^{i\theta_\text{IR}- \frac{8\pi^2}{g_\text{IR}^2}}~.
\end{align}
The full prepotential \eqref{eq:IR_Nf=4} is the sum of the instanton
part \eqref{eq:F-Nf=4}
and the perturbative part $(\log q - \log 16)a^2$. This implies that
$q$ and $q_\text{IR}$ are related by \cite{Grimm:2007tm}
\begin{align}
q =
\frac{\theta_2(q_\text{IR})^4}{\theta_3(q_\text{IR})^4}~,
\label{eq:UVIR1}
\end{align}
where we used the convention that $\theta_2(q) =
\sum_{n\in\mathbb{Z}}q^{(n-\frac{1}{2})^2}$ and $\theta_3(q) =
\sum_{n\in \mathbb{Z}}q^{n^2}$. The relation \eqref{eq:UVIR1} implies
that
\begin{align}
\tau_\text{IR} \equiv \frac{1}{\pi i}\log q_\text{IR} =
\frac{\theta_\text{IR}}{\pi} + \frac{8\pi i}{g_\text{IR}^2}
\end{align}
is the modulus of the elliptic curve corresponding to the double cover
of $\mathbb{CP}^1$ with four branch points whose cross-ratio is
$q$. This elliptic curve is identified as the SW-curve of the theory on
the Coulomb branch. The curve has a natural $PSL(2,\mathbb{Z})$-action
generated by
\begin{align}
T: \tau_\text{IR} \to \tau_\text{IR} + 1 ~,\qquad S : \tau_\text{IR} \to
-\frac{1}{\tau_\text{IR}}~.
\label{eq:TS}
\end{align}
From \eqref{eq:UVIR1}, we see that these $T$ and $S$ transformations are
induced by the following changes of the UV gauge couplings:
\begin{align}
T:q\to \frac{q}{q-1}~,\qquad S:q\to 1-q~.
\end{align}
Since the SW-curve is invariant under $T$ and $S$, so is the whole BPS
spectrum on the Coulomb branch. It is then natural to expect that the theory is completely
invariant under this $PSL(2,\mathbb{Z})$.\footnote{When mass parameters
are turned on, they are also permuted by the action of
$PSL(2,\mathbb{Z})$.}
This is the famous S-duality of $SU(2)$ gauge theory with four flavors.
Let us now turn back to the $(A_3,A_3)$ theory. Its quiver
description shown in Fig.~\ref{fig:quiver1} has an obvious similarity
to the $SU(2)$ gauge theory with four flavors; it has the same gauge
group with vanishing $\beta$-function. This similarity has been studied
carefully in \cite{Buican:2014hfa, DelZotto:2015rca, Cecotti:2015hca}
to show that the IR physics of
the $(A_3,A_3)$ theory on its Coulomb branch admits an action of $PSL(2,\mathbb{Z})$ (see \cite{Xie:2016uqq, Xie:2017vaf,
Buican:2017fiq, Xie:2017aqx, Buican:2018ddk} for further studies on this
new class of $\mathcal{N}=2$ S-dualities). The generalization of the
$SO(8)$-triality in this case is carefully discussed in \cite{Cecotti:2015hca}. This
$PSL(2,\mathbb{Z})$-action has only been studied in the IR language
such as the SW curve, the associated Calabi-Yau three-fold, and the
spectrum of BPS states on the Coulomb branch. Here we discuss the action of
$PSL(2,\mathbb{Z})$ on the UV gauge coupling, using the surprising
relation \eqref{eq:surprise}.
To that end, let us define the IR gauge coupling $q_\text{IR}$ of the
$(A_3,A_3)$ theory similarly by
\begin{align}
q_\text{IR} = e^{i\theta_\text{IR} - \frac{8\pi^2}{g_\text{IR}^2}}
\label{eq:qIR-A3A3}
\end{align}
so that the
full prepotential is written as
\begin{align}
\mathcal{F}^{(A_3,A_3)} = (\log
q_\text{IR})a^2~.
\label{eq:qIR-def}
\end{align}
Assuming the relation
\eqref{eq:surprise} extends to the full prepotential, one
obtains\footnote{Given the relation \eqref{eq:surprise} for the
instanton part, this assumption is rather mild. Indeed, it only requires
that the perturbative part $\mathcal{F}_\text{pert} =
\mathcal{F}_\text{cl} + \mathcal{F}_\text{1-loop}$ also satisfies
\begin{align}
2\mathcal{F}^{(A_3,A_3)}_\text{pert}(q;a) =
\mathcal{F}^{N_f=4}_\text{pert}(q^2;a)~.
\label{eq:tobeproven}
\end{align}
Note that, on both sides of the above relation, $\mathcal{F}_\text{pert} = (\log
q + X)a^2$ with $X$ being a constant. Therefore, in proving
\eqref{eq:tobeproven}, all we need to show is the following equality between two constants:
\begin{align}
2X^{(A_3,A_3)} = X^{N_f=4}~.
\label{eq:1-loop-X}
\end{align}
To prove this, one needs to identify the 1-loop part
$\mathcal{F}_\text{1-loop}$ for $(A_3,A_3)$, which we leave for future
work.
}
\begin{align}
2\mathcal{F}^{(A_3,A_3)}(q;a) = \mathcal{F}^{N_f=4}(q^2;a)~,
\label{eq:surprise2}
\end{align}
which implies that $q_\text{IR}$ is related
to $q$ by
\begin{align}
q^2 = \frac{\theta_2(q_\text{IR}^2)^4}{\theta_3(q_\text{IR}^2)^4}~.
\label{eq:modular2}
\end{align}
This relation means that
\begin{align}
\tau_\text{IR} \equiv \frac{2}{\pi i}\log q_\text{IR} =
\frac{2\theta_\text{IR}}{\pi} + \frac{16\pi i}{g_\text{IR}^2}
\label{eq:tau2}
\end{align}
is the modulus of the double cover
of $\mathbb{CP}^1$ with four branch points whose cross-ratio is
$q^2$. The
$T$ and $S$ transformations of the S-duality group are identified as
\begin{align}
T:\tau_\text{IR} \to \tau_\text{IR} + 1~,\qquad S:\tau_\text{IR} \to
-\frac{1}{\tau_\text{IR}}~.
\label{eq:TS3/2}
\end{align}
Note that \eqref{eq:modular2} reveals a non-trivial relation between the IR gauge
coupling $q_\text{IR}$ and the UV gauge coupling $q$ of the $(A_3,A_3)$
theory. Compared to \eqref{eq:UVIR1} for $SU(2)$ gauge theory with four flavors, both
the UV and IR gauge couplings are replaced by their squares here. While the replacement
$q\to q^2$ can be easily understood from the relation
\eqref{eq:surprise2}, the replacement
\begin{align}
q_\text{IR} \to q_\text{IR}^2
\label{eq:qIR2}
\end{align}
is a bit more non-trivial. We see that this replacement is a consequence
of the factor $2$ in front of $\mathcal{F}^{(A_3,A_3)}$ in
\eqref{eq:surprise2}. Note that \eqref{eq:qIR2}
is crucial to have the
correct weak-coupling behavior. Indeed, in the weak coupling limit, quantum
corrections to the IR gauge coupling vanish, and therefore we expect
$q=q_\text{IR}$. We see that \eqref{eq:modular2} correctly reduces to
$q\sim q_\text{IR}$ in the weak coupling limit $q_\text{IR} \to 0$
if \eqref{eq:qIR2} is simultaneously performed with $q\to q^2$.
From the above discussion, we see how $S$ and $T$ act on
the UV gauge coupling of the $(A_3,A_3)$ theory. Indeed, combining \eqref{eq:modular2} and
\eqref{eq:tau2}, we see that \eqref{eq:TS3/2} corresponds
to
\begin{align}
T:\; q^2 \to \frac{q^2}{q^2-1}~,\qquad S:\; q^2\to 1-q^2~.
\label{eq:TS2}
\end{align}
One can explicitly check that
\eqref{eq:F-A3A3} combined with the classical part
$\mathcal{F}^{(A_3,A_3)}_\text{cl} = (\log q)a^2$ is invariant under $T$.
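This check can be automated with an exact series computation. The sketch below composes the instanton series \eqref{eq:F-A3A3} (in the variable $Q=q^2$) with $Q\to Q/(Q-1)$ and verifies invariance of the full $a^2$-coefficient through $\mathcal{O}(q^8)$, up to a $Q$-independent constant (an $a^2$-proportional shift of the IR theta angle).

```python
from fractions import Fraction as Fr

N = 5  # work modulo Q^N, with Q = q^2

def mul(a, b):
    """Multiply two truncated power series (coefficient lists)."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# T acts as Q -> Q' = Q/(Q-1) = -(Q + Q^2 + Q^3 + ...)
Qp = [Fr(0)] + [Fr(-1)] * (N - 1)

# instanton coefficients of F^{(A3,A3)}/a^2 in Q = q^2, from eq. (F-A3A3)
inst = [Fr(0), Fr(1, 4), Fr(13, 128), Fr(23, 384), Fr(2701, 65536)]

# F_inst(Q') by series composition
Fp = [Fr(0)] * N
pw = [Fr(1)] + [Fr(0)] * (N - 1)
for k in range(1, N):
    pw = mul(pw, Qp)
    for i in range(N):
        Fp[i] += inst[k] * pw[i]

# classical part: (1/2) log Q' - (1/2) log Q = const + (1/2) sum_k Q^k / k
log_shift = [Fr(0)] + [Fr(1, 2 * k) for k in range(1, N)]

# invariance up to a Q-independent constant (theta-angle shift)
for k in range(1, N):
    assert Fp[k] + log_shift[k] == inst[k]
```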
\subsection{Peculiarity of $T$}
\label{subsec:peculiarT}
From \eqref{eq:tau2}, we see that our $T$-transformation,
$\tau_\text{IR} \to \tau_\text{IR}+1$, corresponds to
\begin{align}
\theta_\text{IR} \to \theta_\text{IR} + \frac{\pi}{2}~.
\label{eq:T_for_theta}
\end{align}
This is remarkable, since it means that $T$ maps the
monopole of minimal magnetic charge to a dyon that has {\it half} the
electric charge of the fundamental quark. This is not possible if this
``electric charge'' is a charge associated with the unbroken $U(1)\subset SU(2)$ gauge
group on the Coulomb branch, since in that case the minimal electric
charge is that of the fundamental quark. Therefore,
the ``electric charge'' here is not simply associated with the unbroken $U(1)\subset SU(2)$.
As we will argue in Sec.~\ref{sec:Conclusions}, the ``electric charge'' here
is interpreted as a linear combination of the electric charge associated
with $U(1)\subset SU(2)$ and those arising from the $(A_1,D_4)$ theories.
In the rest of this sub-section, we show that the $T$ transformation \eqref{eq:T_for_theta} is also consistent with the SW curve of $(A_3,A_3)$ theory. To that end, first recall that the SW curve of this theory is written as \cite{Buican:2014hfa}
\begin{align}
0 &= x^4 +\mathsf{q}x^2z^2 + z^4 + c_{3,0}x^3 + c_{0,3}z^3 + c_{2,0}x^2
+ mxz + c_{0,2}z^2 + c_{1,0}x + c_{0,1}z + c_{0,0}~,
\label{eq:curve}
\end{align}
where $\mathsf{q}$ is a function of $q_\text{IR}$, $c_{3,0}$ and
$c_{0,3}$ are relevant couplings of dimension $1/2$,
$c_{2,0},\,c_{0,2}$ and $m$ are mass parameters,
and $c_{1,0},\,c_{0,1}$ and $c_{0,0}$ parameterize the Coulomb branch moduli
space. The SW 1-form is given by $xdz$. It was
shown in \cite{Buican:2014hfa} that the above curve and 1-form are invariant
under two transformations $\tilde{S}$
and $\tilde{T}$, which were interpreted as two independent S-dual
transformations. In particular, $\tilde{S}$ acts on the marginal gauge
coupling as\footnote{The $\tilde{T}$-transformation acts on the gauge
coupling as $\tilde{T}: \mathsf{q}\to
\frac{12-2\mathsf{q}}{2+\mathsf{q}}$. Our $T$ and $S$ correspond to
$\tilde{S}$ and $\tilde{T}$ in \cite{Buican:2014hfa}, respectively.}
\begin{align}
\tilde{S}:\; \mathsf{q} \to -\mathsf{q}~.
\label{eq:tildeS}
\end{align}
Below, we show that \eqref{eq:tildeS} is identical to our
$T$-transformation \eqref{eq:T_for_theta}, near a cusp on the
conformal manifold.\footnote{Here, the ``conformal manifold'' is defined
as the space of values of exactly marginal couplings in the theory.}
As shown
in \cite{Buican:2014hfa}, $\mathsf{q}\to \infty$ corresponds to a
weak-coupling cusp on the conformal manifold, where $M\equiv m/\mathsf{q}$
and $u\equiv c_{0,0}/\mathsf{q}$ are respectively
identified as the mass of the fundamental hypermultiplet and the VEV of
the Coulomb branch operator in the vector multiplet, in the quiver
description in Fig.~\ref{fig:quiver1}.\footnote{There are also
other cusps at $\mathsf{q} \to \pm 2$, where the parameters in the SW
curve have different interpretations. In particular, the mass of the
fundamental hypermultiplet is identified with some linear combination of
$m, c_{2,0}$ and $c_{0,2}$.} We focus on this cusp,
and turn on a large mass $M$ for the fundamental hypermultiplet so
that it decouples from the theory in the infrared. When the
hypermultiplet decouples, the theory reduces to a non-conformally gauged
AD theory described in Fig.~\ref{fig:quiver2}. To realize this limit at
the level of the SW curve, one needs to take $\mathsf{q}\to \infty$
simultaneously with $M\to \infty$. Indeed, if we take $M\to \infty$ with
$\mathsf{q}$ kept fixed, some periods of the
curve diverge. It turns out that all the periods of the curve
remain finite when one takes $M\to \infty$ and $\mathsf{q}\to \infty$
with
\begin{align}
\Lambda \equiv \frac{M}{\sqrt{\mathsf{q}}}
\label{eq:Lambda-finite}
\end{align}
kept fixed.
This $\Lambda$ is then identified as
a dynamical scale of the resulting theory. The curve correctly reduces to the curve of the IR theory \eqref{eq:curve3} (for $n=2$) in the limit $M,\mathsf{q}\to \infty$
with \eqref{eq:Lambda-finite} kept finite.
\footnote{One can show this explicitly as follows. As shown in
\cite{Buican:2014hfa}, near the cusp $\mathsf{q}\to\infty$, $C_{3,0} \equiv
\mathsf{q}^{-\frac{1}{4}}c_{3,0}$ and $C_{0,3}\equiv \mathsf{q}^{
-\frac{1}{4}} c_{0,3}$ are identified with the (correctly-normalized) relevant couplings of
dimension $1/2$, $C_{2,0}\equiv \mathsf{q}^{-\frac{1}{2}}c_{2,0}$ and
$C_{0,2}\equiv \mathsf{q}^{-\frac{1}{2}}c_{0,2}$ are mass deformation
parameters, and $C_{1,0}\equiv \mathsf{q}^{-\frac{3}{2}}c_{1,0}$ and
$C_{0,1}\equiv \mathsf{q}^{-\frac{3}{4}}c_{0,1}$ are the VEVs of Coulomb
branch operators, in the $(A_1,D_4)$ sectors. In terms of these
variables, the curve of the $(A_3,A_3)$ theory is written as
$0 = x^4 + \mathsf{q}x^2z^2 + z^4 + \mathsf{q}^{\frac{1}{4}}C_{3,0}x^3
+ \mathsf{q}^{\frac{1}{4}}C_{0,3}z^3 +
\mathsf{q}^{\frac{1}{2}}C_{2,0}x^2 + \mathsf{q}Mxz+
\mathsf{q}^{\frac{1}{2}}C_{0,2}z^2 + \mathsf{q}^{\frac{3}{4}}C_{1,0}x +
\mathsf{q}^{\frac{3}{4}}C_{0,1}z + \mathsf{q}u$. We here define $X\equiv
i(\sqrt{z}x^{\frac{3}{2}} +
\frac{1}{2}\sqrt{\mathsf{q}}\Lambda\sqrt{x/z}),\,Z\equiv \sqrt{z/x}$ and $U
\equiv u - \frac{\mathsf{q}\Lambda^2}{4}$, and take the limit $M,\mathsf{q}\to
\infty$ with $\Lambda,U,C_{i,j}$ and $(X,Z)$ kept finite. Then the
curve reduces to
\begin{align}
X^2 &= \tilde{\Lambda}^2Z^2 + \tilde{\Lambda}^{\frac{3}{2}}C_{0,3}Z +
\tilde{\Lambda}C_{0,2}+ \frac{\tilde{\Lambda}^{\frac{1}{2}}C_{0,1}}{Z}
+ \frac{U}{Z^2} + \frac{\tilde{\Lambda}^{\frac{1}{2}}C_{1,0}}{Z^3} +
\frac{\tilde{\Lambda}C_{2,0}}{Z^4} +
\frac{\tilde{\Lambda}^{\frac{3}{2}}C_{3,0}}{Z^5}+
\frac{\tilde{\Lambda}^2}{Z^6}~,
\end{align}
where $\tilde{\Lambda}\equiv -\Lambda/2$. The SW 1-form is written as
$-\frac{3}{2}iXdZ$ up to exact terms. This curve is
precisely identical to \eqref{eq:curve3} for $n=2$. The 1-form is also
identical up to a prefactor that can be absorbed by rescaling
dimensionful parameters and $X$. This implies that
\eqref{eq:Lambda-finite} is the correct identification of the IR
dynamical scale.
}
Note here that, by standard arguments, the dynamical
scale $\Lambda$ of the mass-deformed theory and the gauge coupling $e^{i\theta_\text{IR} -
\frac{8\pi^2}{g_\text{IR}^2}}$ of the conformal theory are related by
\begin{align}
\left(\frac{\Lambda}{M}\right)^{b_0} = e^{i\theta_\text{IR}
-\frac{8\pi^2}{g_\text{IR}^2}}~,
\label{eq:Lambda-finite2}
\end{align}
where $b_0$ is the coefficient of the one-loop $\beta$-function of the
IR theory. Since $b_0 = 1$ in our case, \eqref{eq:Lambda-finite} and
\eqref{eq:Lambda-finite2} imply that
\begin{align}
\mathsf{q} = e^{-2i\theta_\text{IR} + \frac{16\pi^2}{g_\text{IR}^2}}~.
\end{align}
From this relation, it is now clear that the $\tilde{S}$ transformation \eqref{eq:tildeS}
is precisely identical to our $T$-transformation \eqref{eq:T_for_theta}. Therefore,
our $T$-transformation \eqref{eq:T_for_theta} is perfectly consistent
with the earlier analysis of the S-duality using the SW-curve.
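This identification can be illustrated with a throwaway symbolic check of our own (symbols and sample values are ours): the shift $\theta_\text{IR} \to \theta_\text{IR} + \frac{\pi}{2}$ of our $T$-transformation flips the sign of $\mathsf{q} = e^{-2i\theta_\text{IR} + \frac{16\pi^2}{g_\text{IR}^2}}$, which is exactly the action $\mathsf{q}\to-\mathsf{q}$ of $\tilde{S}$.

```python
import sympy as sp

theta, g = sp.symbols('theta g', positive=True)
# the coupling q = exp(-2i*theta_IR + 16*pi^2/g_IR^2) from the relation above
Q = sp.exp(-2 * sp.I * theta + 16 * sp.pi**2 / g**2)

# theta_IR -> theta_IR + pi/2 flips the sign of q: check the ratio at a sample point
ratio = Q.subs(theta, theta + sp.pi / 2) / Q
val = complex(ratio.subs({theta: sp.Rational(7, 10), g: 2}).evalf())
assert abs(val + 1) < 1e-10
```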
\subsection{Turning on couplings, masses and VEVs}
\label{subsec:massive-def}
In the previous sections, we have identified the $PSL(2,\mathbb{Z})$-action in the case
of vanishing dimensionful parameters except for $\epsilon_i$ and
$a$. This action has been interpreted as corresponding to the symmetry of the
SW-curve studied in \cite{Buican:2014hfa}. Since this symmetry of the
curve extends to the case of generic values of dimensionful parameters, we
expect that the $PSL(2,\mathbb{Z})$-action on the partition function has
a similar extension for non-vanishing relevant couplings, masses and
VEVs of Coulomb branch operators.
In this subsection, we discuss such an extension of the $PSL(2,\mathbb{Z})$-action.
The instanton part of the prepotential for generic values of
dimensionful parameters is expanded as
\begin{align}
\mathcal{F}^{(A_3,A_3)}_\text{inst} &= \sum_{k=-1}^\infty
\mathcal{F}_{2k}(q,M,\{m_i\},\{u_i\},\{d_i\})a^{-2k}~.
\label{eq:massiveF}
\end{align}
Recall that $T$
and $S$ act on the UV gauge coupling as in \eqref{eq:TS2}.
Since $S$ is non-perturbative in $q$, it is hard to understand how
\eqref{eq:massiveF} behaves under $S$ using our order-by-order computation.
We therefore focus on the $T$-transformation below.
In the case of $d_i=m_i=u_i=M=0$, \eqref{eq:massiveF} reduces to
\begin{align}
\mathcal{F}_{-2}(q)a^2~,
\label{eq:genus0}
\end{align}
whose explicit expression is shown in Eq.~\eqref{eq:F-A3A3}.
When combined with the perturbative part, this is
invariant under $T:q^2 \to q^2/(q^2-1)$.
When $d_i,m_i,u_i$ and $M$ are
turned on, \eqref{eq:genus0} receives corrections from $\mathcal{F}_{2k}$ for all $k\geq 0$. We evaluated
these corrections using our formula \eqref{eq:Z-A1D4} to find
that $\mathcal{F}_{-2}$ and $\mathcal{F}_{k>0}$ are all
invariant under
\begin{align}
q&\to \frac{iq}{\sqrt{1-q^2}}~,\quad d_1 \to \frac{d_1+q
d_2}{\sqrt{1-q^2}}~,\quad d_2 \to id_2~,\quad m_2 \to
-m_2~,\quad u_2 \to -iu_2~,
\label{eq:T-massive}
\end{align}
with the other parameters kept fixed.\footnote{Note that
$\mathcal{F}_0$ cannot be evaluated without identifying the
$U(1)$-factor, and therefore we leave
the computation of $\mathcal{F}_0$ for future work. The other terms,
$\mathcal{F}_{-2}$ and $\mathcal{F}_{k>0}$, are not affected by the $U(1)$-factor.}
We checked this invariance to very high order in $q$. The first few terms in the $q$-series
of $\mathcal{F}_{-2},\mathcal{F}_{2}$ and $\mathcal{F}_4$ are shown in appendix
\ref{app:F2}. Given this invariance, we
interpret \eqref{eq:T-massive} as an extension of $q^2\to q^2/(q^2-1)$
to the case of non-vanishing $d_i,m_i,u_i$ and $M$.
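As a quick symbolic sanity check of our own (not part of the derivation), the action of \eqref{eq:T-massive} on $q$ squares to the massless transformation $q^2 \to q^2/(q^2-1)$, and applying it twice sends $q \to -q$, leaving $q^2$ invariant:

```python
import sympy as sp

q = sp.symbols('q')
Tq = sp.I * q / sp.sqrt(1 - q**2)   # action of T on the UV coupling, cf. (eq:T-massive)

# squares to the massless transformation q^2 -> q^2/(q^2 - 1)
assert sp.simplify(Tq**2 - q**2 / (q**2 - 1)) == 0

# applying the map twice sends q -> -q, so q^2 is left invariant
TTq = sp.I * Tq / sp.sqrt(1 - Tq**2)
assert abs(complex(TTq.subs(q, sp.Rational(3, 10)).evalf()) + 0.3) < 1e-12
```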
It turns out that \eqref{eq:T-massive} is consistent with the symmetry
of the SW curve, at least in the weak coupling limit. To see this,
recall that our $T$-transformation
is identified with $\tilde{S}$ discussed in \cite{Buican:2014hfa}. This $\tilde{S}$ induces $\mathsf{q}\to -\mathsf{q}$, as
reviewed already, and also changes the other parameters in the SW curve as
\begin{align}
&c_{3,0} \to -e^{\frac{3\pi i}{4}}c_{3,0}~,\quad c_{0,3}\to
-e^{-\frac{3\pi i}{4}}c_{0,3}~,\quad c_{2,0} \to -ic_{2,0}~,\quad
c_{0,2}\to ic_{0,2}~,
\nonumber\\
&m \to -m~,\quad c_{1,0}\to -e^{\frac{\pi
i}{4}}c_{1,0}~,\quad c_{0,1}\to -e^{-\frac{\pi i}{4}}c_{0,1}~,\quad
c_{0,0}\to -c_{0,0}~.
\label{eq:tilde-S}
\end{align}
It is straightforward to show that the curve \eqref{eq:curve} and the
SW 1-form are
invariant under these transformations.
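The invariance can be verified symbolically. Assuming the curve takes the polynomial form suggested by the expressions above, $0 = x^4 + \mathsf{q}x^2z^2 + z^4 + c_{3,0}x^3 + c_{0,3}z^3 + c_{2,0}x^2 + m xz + c_{0,2}z^2 + c_{1,0}x + c_{0,1}z + c_{0,0}$, and supplementing \eqref{eq:tilde-S} with the coordinate rescaling $x \to e^{-\frac{\pi i}{4}}x$, $z \to e^{\frac{\pi i}{4}}z$ (our choice of phases; the paper's conventions may differ), the curve picks up an overall sign, so its zero locus is invariant:

```python
import sympy as sp

x, z, Q, m = sp.symbols('x z Q m')
c30, c03, c20, c02, c10, c01, c00 = sp.symbols('c30 c03 c20 c02 c10 c01 c00')

# SW curve written in terms of the un-renormalized couplings (our reading of the text)
curve = (x**4 + Q * x**2 * z**2 + z**4 + c30 * x**3 + c03 * z**3
         + c20 * x**2 + m * x * z + c02 * z**2 + c10 * x + c01 * z + c00)

e = sp.exp
tilde_S = {
    Q: -Q, m: -m, c00: -c00,
    c30: -e(3 * sp.pi * sp.I / 4) * c30, c03: -e(-3 * sp.pi * sp.I / 4) * c03,
    c20: -sp.I * c20, c02: sp.I * c02,
    c10: -e(sp.pi * sp.I / 4) * c10, c01: -e(-sp.pi * sp.I / 4) * c01,
    # accompanying coordinate rescaling (an assumption on our part)
    x: e(-sp.pi * sp.I / 4) * x, z: e(sp.pi * sp.I / 4) * z,
}

transformed = curve.subs(tilde_S, simultaneous=True)
# the transformed curve equals minus the original, so the zero locus is invariant
assert sp.powsimp(sp.expand(transformed + curve)) == 0
```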
Since
$q^2 \to q^2/(q^2-1)$ is already identified with $\mathsf{q} \to -\mathsf{q}$,
\eqref{eq:tilde-S} is expected to be equivalent to
\eqref{eq:T-massive}. One can show this equivalence explicitly at the
weak-coupling cusp $\mathsf{q}=\infty$.
As shown in
\cite{Buican:2014hfa}, in the limit $\mathsf{q}\to\infty$, the coefficients $c_{i,j}$ must
be appropriately renormalized so that physical
quantities remain finite. In terms of our $d_i,m_i,u_i,M$ and $u$, such
a renormalization is expressed as
\begin{align}
&c_{3,0} = \mathsf{q}^{\frac{1}{4}}d_1~,\quad c_{0,3} =
\mathsf{q}^{\frac{1}{4}}d_2~,\quad c_{2,0} =
\mathsf{q}^{\frac{1}{2}}m_1~,\quad c_{0,2} =
\mathsf{q}^{\frac{1}{2}}m_2~,\quad m = \mathsf{q}M~,
\nonumber \\
&c_{1,0} = \mathsf{q}^{\frac{3}{4}}u_1~,\quad c_{0,1}= \mathsf{q}^{\frac{3}{4}}u_2~,\quad
c_{0,0} = \mathsf{q} u~.
\label{eq:relation}
\end{align}
From \eqref{eq:relation} and \eqref{eq:tilde-S}, we see that $\tilde{S}$
is equivalent to $\mathsf{q}\to \exp(-\pi i)\mathsf{q}$ combined with
\begin{align}
d_2 \to
id_2~,\qquad m_2 \to -m_2~,\qquad u_2\to -iu_2~,
\end{align}
where $d_1,m_1,u_1,M$ and $u$ are kept fixed. Since $\mathsf{q}\to
\exp(-\pi i)\mathsf{q}$ is identified with $q^2\to q^2/(q^2-1)$, this is identical
to \eqref{eq:T-massive} at the leading
order of $q$.
The reason that we only see the leading-order terms
is that we are taking the weak-coupling limit $\mathsf{q}\to \infty$ here,
corresponding to $q\to 0$.
Thus, we have shown that our \eqref{eq:T-massive}
is equivalent to \eqref{eq:tilde-S} in the weak-coupling
limit.
Note that the invariance of $\mathcal{F}_\text{inst}^{(A_3,A_3)}$ under \eqref{eq:T-massive} strongly constrains
the possible form of $\mathcal{F}_{k> 0}$. In particular, when combined
with the trivial symmetry under exchanging two $(A_1,D_4)$ theories in
the quiver diagram,
imposing this invariance determines $\mathcal{F}_{k>0}$ up to
some $T$-invariant functions for every $k$. We explicitly show this for
$\mathcal{F}_{2}$ in appendix \ref{app:F2}.
Note also that our check of the invariance under \eqref{eq:T-massive}
is only for the instanton part of
the prepotential. For the full prepotential to be invariant under \eqref{eq:T-massive},
the perturbative part must also be invariant by itself. One can
show that the perturbative part is invariant under \eqref{eq:T-massive}
when $q=0$, but its extension to the case of $q\neq0$ is not
straightforward. If \eqref{eq:T-massive} also preserves the perturbative
part for general $q$, it is natural to identify \eqref{eq:T-massive} as the
$T$-transformation for generic values of $d_i,m_i$ and $u_i$. If
the invariance under \eqref{eq:T-massive} does not extend to the perturbative part, \eqref{eq:T-massive}
is a new symmetry that only preserves the instanton part. We leave a
detailed study of these two options for future work.
\section{Conclusions and discussions}
\label{sec:Conclusions}
In this paper, we have proposed a Nekrasov-type formula for the
instanton partition function of four-dimensional $\mathcal{N}=2\;\,
U(2)$ gauge theories coupled to $(A_1,D_{2n})$ Argyres-Douglas
theories, by extending the generalized AGT correspondence to the case of
$U(2)$ gauge group. We have defined irregular states of the
direct sum of Virasoro and Heisenberg algebras, and then identified
the contribution of the $(A_1,D_{2n})$ theory at each fixed point on
the $U(2)$ instanton moduli space, as in
\eqref{eq:proposal1} and \eqref{eq:proposal2}. Here, 4d and 2d
parameters are related by \eqref{eq:coupling}.
As an application, we have evaluated the instanton partition function of the
$(A_3,A_3)$ theory. While this partition function cannot be evaluated
directly via the AGT correspondence, we have computed it using our
formula above.
Our result shows that, when some
parameters are turned off, the instanton part of the prepotential of
the $(A_3,A_3)$ theory is related in a surprising way to that of $SU(2)$ gauge theory with four flavors,
as shown in \eqref{eq:surprise}. In particular, the two prepotentials
are related by the following replacement of the UV gauge coupling:
\begin{align}
q\to q^2~.
\label{eq:qq2}
\end{align}
From this relation, we have read off the
action of the S-duality group on the UV gauge coupling $q$. We have also
discussed its possible extension to dimensionful parameters.
Here, we give a brief comment on our peculiar $T$-transformation. As
shown in Sec.~\ref{subsec:peculiarT}, the $T$-transformation,
$\tau_\text{IR} \to \tau_\text{IR} + 1$, corresponds to
\begin{align}
\theta_\text{IR} \to \theta_\text{IR} + \frac{\pi}{2}~,
\label{eq:T-again}
\end{align}
and therefore maps the monopole of minimal magnetic
charge to a dyon carrying half the electric charge of the
fundamental quark. This
implies that the ``electric charge'' here is not simply the electric charge
associated with the $SU(2)$ vector multiplet in
Fig.~\ref{fig:quiver1}. Instead, this electric charge is
interpreted as a linear combination of the electric charge associated with the $SU(2)$
vector multiplet sector and those associated with the $(A_1,D_4)$ sectors. Indeed, it was shown in \cite{Cecotti:2015hca}
that $PSL(2,\mathbb{Z})$ naturally acts on a modified electro-magnetic
charge lattice where the ``electric charge'' is such a linear
combination of charges arising from the three sectors.\footnote{The ``magnetic charge'' here
is simply associated with the $SU(2)$ vector multiplet without mixing
with those arising from the $(A_1,D_4)$ sectors.} This linear combination naturally arises
when constructing the $(A_3,A_3)$ theory as the IR limit of type IIB
string theory on a Calabi-Yau singularity.
With this modified charge lattice, the minimal
electric charge is now half the charge of the
fundamental quark,\footnote{This can be seen from the discussions
around Eq.~(2.64) of \cite{Cecotti:2015hca},
where $\mathtt{deg}\, X$ and $\mathtt{rank}\, X$ are respectively the
(modified) electric and magnetic
charges naturally acted on by $PSL(2,\mathbb{Z})$. Their discussions
apply to a larger class of theories labeled by $p=2,3,4$ and $6$, and the $(A_3,A_3)$ theory
corresponds to the $p=4$ case. As the authors explain there,
$\mathtt{deg}\, X$ is quantized in units of $1/p = 1/4$ when it is
normalized so that the W-boson has charge $1$. This means that the
minimal value of the modified electric charge, $\mathtt{deg}\, X$, is half the charge of the
fundamental quark.} which is consistent with our $T$-transformation \eqref{eq:T-again}.
There are obviously many interesting open problems. We list some of them below.
\begin{itemize}
\item While we have focused on the $(A_1,D_{k})$ theories for
even $k$, such theories also exist for odd
$k$. Since they also have an $SU(2)$ flavor symmetry that can be
gauged, one can consider the generalization of our
work to this latter class of theories. One difficulty is that
the relevant irregular state in this case
cannot be obtained in a colliding limit. Therefore, it is not
straightforward to derive the action of the Heisenberg algebra on
the irregular state.
\item It is also interesting to generalize our work to $SU(N)$ gauge theories coupled to
AD theories. For that, we need a $U(N)$-version of the
generalized AGT correspondence. A careful study of the $SU(3)$-version of the generalized AGT
correspondence has been carried out in \cite{Kanno:2013vi}.
\item It would be interesting to study the Nekrasov-Shatashvili
limit \cite{Nekrasov:2009rc} of the $\Omega$-deformed $(A_3,A_3)$
theory. In this limit, combining the
results of \cite{Ito:2017iba,Ito:2018hwp,Ito:2019twh,Ito:2020lyu}
with our formula, one can evaluate the deformed prepotential
of the
$(A_3,A_3)$ theory including both the perturbative and instanton parts.
\item Uplifts of the AGT correspondence to five dimensions are known
\cite{Awata:2009ur, Awata:2010yy, Yanagida:2010vz, Taki:2014fva, Mitev:2014isa,
Isachenkov:2014eya, Ohkubo:2015roe, Bourgine:2016vsq,
Pasquetti:2016dyl, Negut:2016dxr}. It would be interesting to
search for a 5d
uplift of our results. That would shed some light on the relation
between the 4d $D_{2n}(SU(2))$ theory and the 5d $\hat{D}_{2n}(SU(2))$
theory \cite{DelZotto:2015rca, Hayashi:2017jze}. More
generally, it would
be interesting to study how our results are phrased in terms
of the $W_{1+\infty}$ algebra and the DIM algebra
along the lines of \cite{Kanno:2011qv, Awata:2011dc, Kanno:2013aha,
Bourgine:2015szm, Mironov:2016yue,
Awata:2016riz,Awata:2016mxc,Awata:2016bdm, Awata:2017cnz,
Bourgine:2017jsi, Bourgine:2017rik,Awata:2017lqa,Sasa:2019rbk}.
\item It was shown in \cite{Buican:2019kba} that the Schur indices of the
$(A_3,A_3)$ theory and $SU(2)$ gauge theory with four flavors are
related by a change of variables involving
\begin{align}
q\to q^2~,
\end{align}
where $q$ is now the superconformal fugacity of the index. Although this
$q$ is different from our $q$ in \eqref{eq:qq2}, it would be
very interesting to study how these two peculiar relations are
connected.
\item Our discussion in the previous bullet suggests that the
Schur index and the prepotential might generally be related. Since the
Schur index is identified as the
character of the associated chiral algebra \cite{Beem:2013sza},
this then suggests a possible connection between the chiral
algebra and the prepotential. Such a connection has already been
suggested
in the application of the ODE/IM correspondence to quantum SW-curves \cite{Ito:2017ypt}.
It would be interesting to study
this connection further.
\item A replacement similar to \eqref{eq:qq2} arises when considering
the Nekrasov partition functions of $SO/Sp$ gauge
theories in connection to $SU$ gauge
theories \cite{Hollands:2010xa, Hollands:2011zc}. This seems to suggest that a $\mathbb{Z}_2$-orbifolding
plays a similar role in our case. It would be interesting to study
this possibility further.
\end{itemize}
\section*{Acknowledgements}
We would like to thank Matthew Buican, Katsushi Ito, Kazunobu Maruyoshi, Hiraku
Nakajima, Satoshi
Nawata, Takuya Okuda, Jaewon Song, Yuji Tachikawa, and Wenbin
Yan for various illuminating discussions. We would like to particularly
thank Kazunobu Maruyoshi for drawing our attention to the reference
\cite{Alba:2010qc} when one of us (T. N.) had a discussion with him during the international
conference ``KEK Theory Workshop 2018.'' T.~N. also thanks KEK
theory group for hosting the wonderful workshop with many
opportunities for discussion. T.~N.'s research is partially supported by
JSPS Grant-in-Aid for Early-Career Scientists 18K13547. The work of T.~U. is supported by Grant-in-Aid for JSPS Research Fellows 19J11212.
{
"timestamp": "2020-12-29T02:24:46",
"yymm": "2012",
"arxiv_id": "2012.14099",
"language": "en",
"url": "https://arxiv.org/abs/2012.14099"
}
\section{Introduction}
We consider the cubic nonlinear Schr\"{o}dinger equation (NLS)
\begin{equation}\label{nlsO}
\begin{aligned}
i \partial_t u &= - \partial_{x}^2 u -\mu \vert u \vert^2 u, \qquad (t,x) \in \mathbb{R} \times {\Omega}
\end{aligned}
\end{equation}
equipped with periodic boundary conditions $\Omega = \mathbb{T}$ or set on the full space $\Omega = \mathbb{R}$. We allow both the defocusing and the focusing case $\mu = \pm 1$. In the last decades, a large variety of numerical schemes has been proposed to approximate the time dynamics of NLS. A particularly attractive class of schemes is splitting methods, where the right-hand side is split into the linear part $T(u) = i \partial_{x}^2 u$ and the nonlinear part $V(u) = i\mu\vert u \vert^2 u$. This splitting approach is commonly used in practical computations as it is easy to implement and preserves the mass in the system, i.e., the $L^2$ norm of the solution. For smooth solutions the error behaviour of splitting methods is nowadays well understood. If the initial value satisfies $u(0) \in H^2$, then the Lie splitting scheme
\begin{equation}\label{LieCl}
\begin{aligned}
u_{\text{Lie}}^{n+1} & = \mathrm{e}^{i \tau \partial_x^2} \left( \mathrm{e}^{i \mu \tau \vert u_{\text{Lie}}^{n} \vert^2} u_{\text{Lie}}^{n} \right), \qquad u_{\text{Lie}}^0 = u(0)
\end{aligned}
\end{equation}
allows for global error estimates at order $\tau$ in $L^2$; see \cite{ESS16,Faou12,Ignat11,Lubich08}. This smoothness requirement, i.e., that $u(0) \in H^2$, stems from the local approximation error. More precisely, one can show that the Lie splitting method \eqref{LieCl} applied to cubic NLS \eqref{nlsO} introduces a local error of type $\tau^2 [T,V](u)$ with the commutator given by (see \cite[Section 4.2]{Lubich08})
\begin{equation}\label{clELie}
\frac{1}{2\mu} [T,V](u) = \overline u \left(\partial_x u\right)^2 + 2u \left(\partial_x u\right)\left(\partial_x \overline u\right) + u^2 \partial_{xx} \overline u.
\end{equation}
From the above relation we readily see that the last term $u^2 \partial_{xx} \overline u$ requires the boundedness of two additional derivatives of the solution, so that global first-order convergence in $L^2$ requires (at least) $H^2$ solutions. Indeed, a classical Lady Windermere argument (see \cite{HNW}) combined with the standard bilinear estimate
\begin{equation*}\label{bi}
\Vert v w \Vert_{H^\sigma} \leq C_\sigma \Vert v \Vert_{H^\sigma}\Vert w \Vert_{H^\sigma}, \qquad \sigma > 1/2
\end{equation*}
allows us to deduce from the local error structure \eqref{clELie} global first-order convergence in $H^\sigma$ for solutions $u(t) \in H^{\sigma + 2}$ for any $\sigma > 1/2$. Using a refined global error analysis, by first proving fractional convergence of the scheme in a suitable higher order Sobolev space (which implies a priori the boundedness of the numerical solution in this space \cite{Lubich08}), one can push down the error analysis to $L^2$ with a global error of order $\tau$ for $H^{2}$ solutions.
One may now wonder what happens for less smooth solutions $u(t) \in H^s$ with $s < 2$. Clearly, we can no longer expect full first-order convergence due to the local error structure \eqref{clELie}, which involves the term $u^2 \partial_{xx} \overline u$. However, one may expect to achieve fractional error estimates with the rate of convergence depending on the regularity of the solution. Indeed, with a refined global error analysis, fractional convergence of order $\tau^{s/2}$ can be established in $L^2$ for solutions in $H^s$ as long as $s>1/2$. The restriction $s>1/2$ is crucial here when estimating the nonlinear terms in the global error by the Sobolev embedding
\begin{equation}\label{sob}
\Vert v w \Vert_{L^2 } \leq C \Vert v \Vert_{L^2}\Vert w \Vert_{H^{r}},\qquad r>\tfrac12.
\end{equation}
The nonlinear $L^2$ estimate \eqref{sob} restricts the class of admissible solutions to Sobolev spaces $u(t) \in H^{s}$ with $s>1/2$: classical arguments based on \eqref{sob} break down for rough solutions. For a long time it was therefore an open question whether one can establish global convergence (with arbitrarily small order of convergence $\tau^{\varepsilon(s)}$, $\varepsilon>0$) of splitting methods for NLS for rough data
\begin{equation}\label{rough}
u(0) \in H^{s}, \qquad 0< s \le 1/2.
\end{equation}
The aim of this paper lies in closing this gap.
In this paper we establish low regularity estimates for the filtered Lie splitting method
\begin{equation}\label{scheme}
\begin{aligned}
u^{n+1} & = \mathrm{e}^{i \tau \partial_x^2} \Pi_{\tau}\left( \mathrm{e}^{i \mu\tau \vert \Pi_{\tau} u^n \vert^2} \Pi_{\tau} u^n \right),
\qquad u^0 & =\Pi_{\tau} u(0)
\end{aligned}
\end{equation}
with the aid of discrete Bourgain spaces; cf. \cite{ORSBourg}. This will allow us for the first time to deal with rough data \eqref{rough}.
Here, the projection operator $ \Pi_{\tau}$ for $\tau>0$ is defined by the Fourier multiplier
\begin{equation}
\label{PiKdef}
\Pi_{\tau} = \chi\left({ -i \partial_x \over\tau^{-\frac12} }\right) = \overline\Pi_{\tau},
\end{equation}
where $\chi$ is the characteristic function $\chi= \mathrm{1}_{[-1, 1]}.$ Note that the projection $\Pi_{\tau}$
is then continuous on $L^p$ for $p \in (1, + \infty)$ with operator norm uniformly bounded for $\tau \in (0, 1]$.
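For concreteness, one step of the filtered Lie splitting \eqref{scheme} can be sketched with a pseudo-spectral discretization on the torus: $\Pi_\tau$ acts as a sharp Fourier cutoff at $|k| \le \tau^{-1/2}$, and the free flow $\mathrm{e}^{i\tau\partial_x^2}$ is diagonal in Fourier space. The following Python sketch is purely illustrative; the grid size, step size and initial datum are our own choices.

```python
import numpy as np

def filtered_lie_step(u, tau, mu=1.0):
    """One step of the filtered Lie splitting on the 2*pi-periodic torus.

    u is an array of N equispaced grid values of u^n; returns u^{n+1}.
    """
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer Fourier modes
    cutoff = np.abs(k) <= tau ** (-0.5)       # symbol of the projection Pi_tau

    def proj(v):                              # sharp Fourier cutoff Pi_tau
        return np.fft.ifft(cutoff * np.fft.fft(v))

    v = proj(u)
    w = proj(np.exp(1j * mu * tau * np.abs(v) ** 2) * v)   # filtered nonlinear flow
    return np.fft.ifft(np.exp(-1j * tau * k ** 2) * np.fft.fft(w))  # free flow

# illustrative usage: 100 steps from a smooth initial datum
N, tau, steps = 256, 0.01, 100
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.cos(x)).astype(complex)
for _ in range(steps):
    u = filtered_lie_step(u, tau)
```

Since the pointwise phase factor has modulus one, the free flow is unitary, and $\Pi_\tau$ is an orthogonal projection, the discrete $L^2$ norm is non-increasing along the iteration.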
Such a filtered splitting was
originally proposed for nonlinear Schr\"{o}dinger equations on the full space by Ignat \cite{Ignat11}; see also Ignat and Zuazua \cite{IZ09}.
For the filtered Lie splitting~\eqref{scheme}, first-order error estimates for $H^2(\mathbb{R}^d)$ solutions were established in \cite{Ignat11} with the aid of discrete Strichartz-type estimates in $L^2(\mathbb{R}^d)$ in dimensions $d \leq 3$. Splitting methods with numerical filters have also been successfully introduced in \cite{Bao} for nonlinear Schr\"odinger equations in the semiclassical regime with attractive interaction in order to numerically suppress the modulation instability.
The main result of this paper is that the filtered Lie splitting \eqref{scheme} indeed allows one to approximate the cubic NLS \eqref{nlsO} with rough initial data \eqref{rough}. More precisely, we show the following.
\begin{theorem}\label{maintheo}
For every $T >0$ and $u_{0} \in H^{s_{0}}$, $s_{0}>0$, let $u \in \mathcal{C}([0, T], H^{s_{0}})$ be the exact solution of \eqref{nlsO} with initial datum $u_{0}$ and denote by $\unum{n}$ the sequence defined by the scheme~\eqref{scheme}. Then, for every $0<s_{1}<s_{0}$, $s_{1} \in [0, 2]$, we have the following error estimate:
there exists $\tau_{0}>0$ and $C_{T}>0$ such that for every step size $\tau \in (0, \tau_{0}]$
\begin{equation}
\label{final1} \|\unum{n}- u(t_{n})\|_{L^2} \leq C_{T} \tau^{s_{1}\over 2}, \quad 0 \leq n\tau \leq T.
\end{equation}
\end{theorem}
With the aid of discrete Strichartz-type estimates (on $\mathbb{R}$) and Bourgain-type estimates (on $\mathbb{T}$), low regularity estimates have recently been established for resonance-based filtered discretisations for rough data \eqref{rough}; see \cite{ORSBourg,ORS19}. Compared to the filtered Lie splitting \eqref{scheme}, resonance-based filtered schemes allow (thanks to their favorable local error structure) improved convergence rates for solutions in $H^s$ with $s>1/4$. However, for very rough data, i.e., in regimes where $0 < s <1/4$, the convergence rate we show here for the filtered Lie splitting \eqref{scheme} coincides with the convergence rate established for the resonance-based schemes. This is due to the fact that at this level of roughness the ``worst'' regularity restrictions stem from the ``space discretization'', which consists in projecting the equation
onto a finite number of Fourier modes, and not from the local error of the time discretization.
Our error estimates are based on discrete Bourgain-type estimates developed in \cite{ORSBourg}. It also seems possible to extend the analysis developed in this paper to higher dimensions and more general nonlinearities. The central task would lie in establishing the corresponding discrete counterparts of the continuous Bourgain estimates given in \cite{Bour93a} in the various cases, as done in \cite{ORSBourg} for the periodic NLS.
Note that the filtered Lie splitting \eqref{scheme} can be seen as a classical Lie splitting discretisation of the projected equation
\begin{equation}
\begin{aligned}\label{nlsP}
i \partial_t u_{\tau} & = - \partial_{x}^2 u_{\tau} - \mu \Pi_{\tau}\left( \vert \Pi_{\tau} u_{\tau} \vert^2 \Pi_{\tau} u_{\tau}\right), \qquad u_{\tau}(0) & = \Pi_{\tau} u(0).
\end{aligned}
\end{equation}
Hence, the idea is to first analyse the difference between the original NLS equation \eqref{nlsO} and its projected counterpart \eqref{nlsP} on the continuous level. This will then allow us to analyse the time discretisation error introduced by the Lie splitting discretisation~\eqref{scheme} applied to the projected equation~\eqref{nlsP}.
\subsection*{Outline of the paper.} In a first step we study the difference between the original NLS equation~\eqref{nlsO} and its projected counterpart \eqref{nlsP}; see Section \ref{sec:errPro}. This will give a bound on
$$
\Vert u(t) - u_{\tau}(t) \Vert_{L^2}.
$$
Then we will analyse the time discretisation error introduced by the Lie splitting discretisation~\eqref{scheme} applied to the projected equation~\eqref{nlsP}; see Section \ref{sec:errSplit}. This will yield a bound on
$$
\Vert u_{\tau}(t_n) - u^n\Vert_{L^2}.
$$
Combining the above two estimates eventually allows us to establish the desired global error estimate on $\Vert u(t_n) - u^n\Vert_{L^2}$
since
$$
\Vert u(t_n) - u^n\Vert_{L^2} \leq \Vert u(t_n) - u_{\tau}(t_n) \Vert_{L^2} + \Vert u_{\tau}(t_n) - u^n\Vert_{L^2}.
$$
We carry out the error analysis on the torus $\mathbb{T}$; however, all estimates also hold for the full space case $\mathbb{R}$.
\subsection*{Notations}
We close this section with some notation that will be used throughout the paper.
For two expressions $a$ and $b$, we write $a \lesssim b$ whenever $a \leq C b$ holds with some constant $C>0$, uniformly in $\tau \in (0, 1]$. We further write $a\sim b$ if $b\lesssim a \lesssim b$. When we want to emphasize that $C$ depends on an additional parameter $\gamma$, we write $a \lesssim_{\gamma} b$.
Further, we denote $\langle \,\cdot\, \rangle = ( 1 + | \cdot |^2)^{1 \over 2}$ and for sequences $(u_{n})_{n \in \mathbb{Z}} \in
X^\mathbb{Z}$ in a Banach space $X$ with norm $\|\cdot \|_{X}$, we use the notation
$$ \|u_{n}\|_{l^p_{\tau}X}=\left( \tau \sum_{n} \|u_{n}\|_{X}^p \right)^{1 \over p}, \quad
\|u_{n}\|_{l^\infty_{\tau}X} = \sup_{n \in \mathbb{Z}} \|u_{n}\|_{X}.$$
\section{Error between the exact and the projected equation}\label{sec:errPro}
In this section we estimate the difference between the solutions of the original NLS equation \eqref{nlsO} and its projected counterpart \eqref{nlsP}.
Such an estimate was already established in \cite{ORSBourg}.
Let us recall the definition of Bourgain spaces. A tempered distribution $u(t,x)$ on $\mathbb{R} \times \mathbb{T}$ belongs to the Bourgain space $X^{s, b}$ if the following norm is finite:
\begin{equation*}
\|u\|_{X^{s, b}}=\left(\int_{\mathbb{R}}\sum_{k \in \mathbb{Z}}\left(1+ |k|\right)^{2s}\left(1+| \sigma+ k^2|\right)^{2b}|\widetilde{u}\left(\sigma, k\right)|^2 \mathrm{d}\sigma\right)^{\frac{1}{2}},
\end{equation*}
where $\widetilde{u}$ is the space-time Fourier transform of $u$:
$$
\widetilde{u}(\sigma, k)= \int_{\mathbb{R}\times \mathbb{T}} \mathrm{e}^{-i \sigma t - i k x} u(t,x) \, \mathrm{d} t \mathrm{d} x.
$$
We shall also use a localized version of this space. For $I \subset \mathbb{R}$ an open interval, we say that $u \in X^{s,b}(I)$ if $\|u\|_{X^{s,b}(I)}< \infty$, where
$$
\|u\|_{X^{s,b}(I)} = \inf\{\|\overline{u} \|_{X^{s,b}}, \, \overline{u}|_{I} = u \}.
$$
When $I= (0, T)$ we will often simply use the notation $X^{s,b}(T)$. We refer, for example, to Lemma~2.1 of \cite{ORSBourg} for some useful properties of these spaces in this setting
(and to \cite{Bour93a} and \cite{Tao06} for more details).
A particularly useful property we will exploit in the following is the embedding $X^{s, b} \subset \mathcal{C}(\mathbb{R}, H^s)$ for $b>1/2$.
Let us first recall the following classical well-posedness result for \eqref{nlsO}.
\begin{theorem}
\label{theoNLS}
For every $T>0$ and $u_{0}\in L^2$, there exists a unique solution $u$ of \eqref{nlsO} such that $u\in \mathcal{C}([0, T], L^2) \cap X^{0, {b}}(T)$ for any $b \in (1/2, 5/8)$. Moreover, if $u_{0} \in H^{s_{0}}$, $s_{0}> 0$, then $ u\in \mathcal{C}([0, T], H^{s_{0}}) \cap X^{s_{0}, b}(T)$.
\end{theorem}
We refer again to \cite{ORSBourg} for a sketch of the proof.
In a similar way, we get for the solution of \eqref{nlsP} the following estimate.
\begin{proposition}
\label{propNLSK}
For $u_{0} \in H^{s_{0}}$, $s_{0} \geq 0$, and $\tau \in (0, 1]$, there exists a unique solution $u_{\tau}$ of \eqref{nlsP} such that $u_{\tau} \in X^{s_{0}, b}(T)$ for $b\in(1/2,5/8)$ and every $T > 0$. Moreover, for every $T>0$, there exists $M_{T}$ such that for every $\tau \in (0, 1]$, we have the estimate
$$
\| u_{\tau} \|_{X^{s_{0}, b}(T)} \leq M_{T}.
$$
Furthermore, uniformly for $\tau \in (0, 1]$, we have for some $C_{T}>0$,
$$
\| u - u_{\tau} \|_{L^\infty((0,T); L^2)} \lesssim \| u - u_{\tau} \|_{X^{0, b}(T)} \leq C_{T} \tau^{s_{0} \over 2}.
$$
\end{proposition}
This statement is proven in \cite{ORSBourg}. It is a special case of Proposition 2.4 and Corollary 2.6 in that article with $K= \tau^{- {1 \over 2}}$.
Moreover, if $s_{0}>0$, we have the following additional property:
\begin{cor}\label{corsb}
Let $b \in (5/8, 1]$ and assume that $s_{0}>0$. Then for every $0 \leq s_1 <s_{0}$, we also have, for every $T > 0$ and uniformly in $\tau$,
$$
\|u_{\tau} \|_{X^{s_1,b}(T)} \leq M_{T}.
$$
\end{cor}
This is proven in Corollary 2.8 of \cite{ORSBourg}.
\section{Error representation of the time discretisation of the projected equation}\label{sec:errSplit}
In this section we derive an estimate on the time discretisation error introduced by the Lie splitting discretisation~\eqref{scheme} applied to the projected equation~\eqref{nlsP}. This will give an estimate on
$$
\Vert u_{\tau}(t_n) - u^n\Vert_{L^2}.
$$
First we rewrite the filtered splitting \eqref{scheme} with the aid of the telescopic identity as follows
\begin{equation}\label{schemT}
\begin{aligned}
u^n = \mathrm{e}^{i n \tau \partial_x^2} \Pi_{\tau} u(0)
+i\mu \tau \sum_{k=0}^{n-1} \mathrm{e}^{i (n-k) \tau \partial_x^2} \Pi_{\tau} \frac{\mathrm{e}^{i \mu \tau \vert \Pi_{\tau} u^k\vert^2}-1}{i \mu \tau} \Pi_{\tau} u^k
\end{aligned}
\end{equation}
(see also \cite{Ignat11}). Next we compare the scheme \eqref{schemT} with the mild solution of \eqref{nlsP}.
Note that the mild solution of \eqref{nlsP} reads with the notation $t_n = n \tau$
\begin{equation}
\begin{aligned}\label{mildiV}
u_{\tau}(t_n) & = \mathrm{e}^{i n \tau \partial_x^2} \Pi_{\tau} u(0)
+i \mu \int_0^{t_n} \mathrm{e}^{i (t_n-s) \partial_x^2} \Pi_{\tau}
\left(\vert \Pi_{\tau} u_{\tau}(s) \vert^2 \Pi_{\tau} u_{\tau}(s)\right) \mathrm{d} s\\
&= \mathrm{e}^{i n \tau \partial_x^2} \Pi_{\tau} u(0)
+i \mu \sum_{k=0}^{n-1} \int_{t_k}^{t_{k+1}} \mathrm{e}^{i (t_n -s) \partial_x^2} \Pi_{\tau}
\left(\vert \Pi_{\tau} u_{\tau}(s) \vert^2 \Pi_{\tau} u_{\tau}(s)\right) \mathrm{d} s\\
&= \mathrm{e}^{i n \tau \partial_x^2} \Pi_{\tau} u(0)
+i \mu \sum_{k=0}^{n-1} \int_{0}^{\tau } \mathrm{e}^{i (t_n-t_k -s) \partial_x^2} \Pi_{\tau}
\left(\vert \Pi_{\tau} u_{\tau}(t_k+s) \vert^2 \Pi_{\tau} u_{\tau}(t_k+s)\right) \mathrm{d} s.
\end{aligned}
\end{equation}
Thus, by taking the difference of \eqref{mildiV} and \eqref{schemT} we obtain that
\begin{equation}\label{global}
u_{\tau}(t_n) - u^n
= i \mu \tau \sum_{k=0}^{n-1} \mathrm{e}^{i (n-k) \tau \partial_x^2}
\left( \Phi_{\mathcal{N}}^\tau(u_{\tau}(t_k)) - \Phi_{\mathcal{N}}^\tau(u^k) \right) + i \mu \sum_{k=0}^{n-1} \mathrm{e}^{i (n-k) \tau \partial_x^2} \mathcal{E}_{\text{loc}}(t_k,\tau,u_{\tau})
\end{equation}
with the nonlinear flow
\begin{equation}\label{numFlow}
\Phi_{\mathcal{N}}^\tau(w) = \Pi_{\tau} \left( \frac{\mathrm{e}^{i\mu \tau \vert \Pi_{\tau} w \vert^2}-1}{i \mu \tau} \Pi_{\tau} w\right)
\end{equation}
and the local error
\begin{equation}\label{localErr}
\begin{aligned}
\mathcal{E}_{\text{loc}}(t_k,&\tau,u_{\tau})
\\& =
\Pi_{\tau} \int_{0}^{\tau } \mathrm{e}^{- i s \partial_x^2}
\left(\vert \Pi_{\tau} u_{\tau}(t_k+s) \vert^2 \Pi_{\tau} u_{\tau}(t_k+s)\right)
- \frac{\mathrm{e}^{i \mu\tau \vert \Pi_{\tau} u_{\tau}(t_k) \vert^2}-1}{i \mu\tau} \Pi_{\tau} u_{\tau}(t_k)\, \mathrm{d} s
\\& =
\Pi_{\tau} \int_{0}^{\tau }\left( \mathrm{e}^{- i s \partial_x^2} -1\right)
\left(\vert \Pi_{\tau} u_{\tau}(t_k+s) \vert^2 \Pi_{\tau} u_{\tau}(t_k+s)\right) \mathrm{d} s
\\&\ \quad +
\Pi_{\tau} \int_{0}^{\tau }
\vert \Pi_{\tau} u_{\tau}(t_k+s) \vert^2 \Pi_{\tau} \Big( u_{\tau}(t_k+s)
- u_{\tau}(t_k) \Big)\mathrm{d} s
\\&\ \quad +
\Pi_{\tau} \int_{0}^{\tau }
\left( \vert \Pi_{\tau} u_{\tau}(t_k+s) \vert^2 -\frac{\mathrm{e}^{i\mu \tau \vert \Pi_{\tau} u_{\tau}(t_k) \vert^2}-1}{i \mu\tau} \right) \Pi_{\tau} u_{\tau}(t_k) \,\mathrm{d} s\\
&
= \mathcal{E}_{1}(t_{k}) + \mathcal{E}_{2}(t_{k}) + \mathcal{E}_{3}(t_{k}),
\end{aligned}
\end{equation}
where by \eqref{mildiV}
\begin{equation}\label{vd}
\begin{aligned}
u_{\tau}(t_k+s) &- u_{\tau}(t_k) \\
&=\left( \mathrm{e}^{i s \partial_x^2} -1\right) u_{\tau}(t_k) +i \mu
\int_0^s \mathrm{e}^{i (s-\xi) \partial_x^2} \Pi_{\tau}\left( \vert \Pi_{\tau} u_{\tau}(t_k+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_k+\xi ) \right) \mathrm{d}\xi.
\end{aligned}
\end{equation}
\section{Discrete Bourgain spaces}
Before performing local error and stability estimates for rough data, we shall recall the main properties of the discrete
Bourgain spaces introduced in \cite{ORSBourg}.
For sequences of functions $(u_{n}(x))_{n \in \mathbb{Z}},$ we define the Fourier transform $\widetilde{u_{n}}(\sigma, k)$ by
$$
\mathcal F_{\tau,x}(u_n)(\sigma,k) =\widetilde{u_{n}} (\sigma, k)= \tau \sum_{m \in \mathbb{Z}} \widehat{u_{m}}(k) \,\mathrm{e}^{i m \tau \sigma}, \quad \widehat{u_{m}}(k)= {1 \over 2\pi} \int_{-\pi}^\pi u_{m}(x) \,\mathrm{e}^{-i k x}\mathrm{d} x.
$$
Parseval's identity then reads
\begin{equation}\label{parseval}
\| \widetilde{u_{n}}\|_{L^2l^2}= \|u_{n}\|_{l^2_{\tau}L^2},
\end{equation}
where
$$
\| \widetilde{u_{n}}\|_{L^2l^2}^2 = \int_{-{\pi \over \tau}}^{\pi\over \tau} \sum_{k \in \mathbb{Z}}
|\widetilde{u_{n}}(\sigma, k)|^2 \mathrm{d} \sigma, \quad
\|u_{n}\|_{l^2_{\tau}L^2}^2 = \tau \sum_{m \in \mathbb{Z}} \int_{-\pi}^\pi |u_{m}(x)|^2 \mathrm{d} x.
$$
We define the discrete Bourgain spaces $X^{s,b}_\tau$ for $s\ge 0$, $b\in\mathbb R$, $\tau>0$ by
\begin{equation}\label{norm2}
\| u_n \|_{X^{s,b}_{\tau}} = \left\| \langle k \rangle^s \langle d_{\tau}(\sigma -k^2) \rangle^b \widetilde{u_n}(\sigma, k) \right\|_{L^2l^2},
\end{equation}
where $d_{\tau}(\sigma)=\frac{\mathrm{e}^{i \tau \sigma} - 1}\tau$.
Note that $d_{\tau}$ is $2\pi/\tau$ periodic and that uniformly in $\tau$, we have $|d_{\tau}(\sigma)| \sim | \sigma |$ for $|\tau \sigma | \leq \pi$.
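Indeed, writing $|\mathrm{e}^{i\theta}-1| = 2|\sin(\theta/2)|$ and using the elementary bounds $\frac{2}{\pi}|x| \leq 2|\sin(x/2)| \leq |x|$ for $|x| \leq \pi$, the equivalence can be made quantitative:
$$
\frac{2}{\pi}\,|\sigma| \leq |d_{\tau}(\sigma)| = \frac{2}{\tau}\left|\sin\left(\frac{\tau\sigma}{2}\right)\right| \leq |\sigma|, \quad |\tau \sigma| \leq \pi.
$$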
For $s \in \mathbb{R}$ and $b> 1/2$, we have that $X^{s,b}_{\tau} \subset l^\infty_{\tau}H^s$:
\begin{equation}\label{sobbourg}
\|u_{n}\|_{l^\infty_{\tau}H^s } \lesssim_{b} \| u_{n}\|_{X^{s,b}_{\tau}}.
\end{equation}
From the properties of the $d_{\tau}$ function, we also have that
\begin{equation}\label{bshift}
\sup_{\delta \in [-4, 4]} \| \mathrm{e}^{i n\tau \delta \partial_{x}^2} u_{n}\|_{X^{s, b}_{\tau}} \lesssim_{b} \|u_{n}\|_{X^{s,b}_{\tau}},
\end{equation}
\begin{equation}\label{shiftt}
\sup_{\delta \in [-4, 4]} \|\mathrm{e}^{i n \tau \delta } u_{n}\|_{X^{s, b}_{\tau}} \lesssim_{b} \|u_{n}\|_{X^{s, b}_{\tau}}
\end{equation}
and that the discrete spaces satisfy the embeddings
\begin{equation}\label{embdisc1}
\|u_{n}\|_{X^{0, b}_{\tau}} \lesssim { 1 \over \tau^{b-b'}} \|u_{n}\|_{X^{0, b'}_\tau}, \quad b \geq b'.
\end{equation}
Some useful more technical properties are gathered in the following lemma; see~\cite[Lemma 3.4]{ORS19}.
\begin{lemma}\label{bourgainfaciled}
For $\eta \in \mathcal{C}^\infty_{c}(\mathbb{R})$ and $\tau\in(0,1]$, we have that
\begin{align}
\label{bourg1} &\| \eta(n \tau) \mathrm{e}^{in \tau \partial_{x}^2} f\|_{X^{s,b}_{\tau}} \lesssim_{\eta, b} \|f\|_{H^s}, \quad s \in \mathbb{R}, \, b \in \mathbb{R}, \, f \in H^s, \\
\label{bourg2} &\| \eta(n \tau) u_{n}\|_{X^{s,b}_{\tau}} \lesssim_{\eta, b} \|u_{n}\|_{X^{s,b}_{\tau}}, \quad s \in \mathbb{R}, \, b \in \mathbb{R} , \, u_{n} \in X^{s,b}_{\tau},\\
\label{bourg3} &\left\| \eta\left(\frac{n\tau}T \right) u_{n} \right\|_{X^{s,b'}_{\tau}} \lesssim_{\eta, b, b'} T^{b-b'} \|u_{n}\|_{X^{s,b}_{\tau}}, \quad s \in \mathbb{R}, -{1 \over 2} <b' \leq b <{ 1 \over 2},\, 0< T = N \tau \leq 1, \, N \geq 1.
\end{align}
In addition, for
$$
U_{n}(x)= \eta(n \tau) \tau \sum_{m=0}^n \mathrm{e}^{i ( n-m ) \tau \partial_{x}^2} u_{m}(x),
$$
we have
\begin{equation}
\label{bourg4}\|U_{n}\|_{X^{s,b}_{\tau}} \lesssim_{\eta, b} \|u_{n}\|_{X^{s, b-1}_{\tau}}, \quad s \in \mathbb{R}, \, b>1/2.
\end{equation}
We stress that all given estimates are uniform in $\tau$.
\end{lemma}
The crucial product property in the analysis of cubic NLS for these discrete spaces is given in the following lemma.
\begin{lemma}\label{prodd}
We have, uniformly for $\tau \in (0, 1]$,
\begin{equation}\label{prodd1}
\|\Pi_{\tau}u_{n} \|_{l^4_{\tau}L^4} \lesssim \|u_{n}\|_{X^{0, {3\over 8}}_{\tau}}.
\end{equation}
This yields for any sequences $(u_{n})$, $(v_{n})$, $(w_{n})$ and for every $s_{1} \geq 0$,
\begin{equation}\label{prodd3}
\| \Pi_{\tau}\left( \Pi_{ \tau} u_{n} \Pi_{ \tau} v_{n} \Pi_{\tau} w_{n} \right) \|_{X^{s_{1}, - { 3 \over 8}}_{\tau}}
\lesssim \|u_{n}\|_{X^{s_{1}, {3 \over 8}}_{\tau}} \|v_{n}\|_{X^{s_{1}, {3 \over 8}}_{\tau}} \|w_{n}\|_{X^{s_{1}, {3 \over 8}}_{\tau}},
\end{equation}
again uniformly for $\tau \in (0, 1]$.
\end{lemma}
\begin{proof}
The estimate \eqref{prodd1} was proven in \cite{ORSBourg} (see Lemma 3.1); we refer to that paper for the proof.
Note that \eqref{prodd1} states that $\Pi_{\tau}$ is a continuous map from $X^{0, {3\over 8}}_{\tau}$
to $l^4_{\tau}L^4$. We can then also deduce by duality that the adjoint $\Pi_{\tau}^*= \Pi_{\tau}$
is continuous from $l^{4\over 3}_{\tau}L^{4 \over 3}$ to $X^{0, -{3\over 8}}_{\tau}$ with the same norm, which means that
\begin{equation}
\label{dualbourg}
\| \Pi_{\tau} u_{n}\|_{X^{0, -{3\over 8}}_{\tau}} \lesssim \|u_{n}\|_{l^{4\over 3}_{\tau}L^{4 \over 3}}.
\end{equation}
To deduce \eqref{prodd3}, we first use that, thanks to \eqref{dualbourg}, we have for every $s_1 \geq 0$
$$
\| \Pi_{\tau}\left( \Pi_{ \tau} u_{n} \Pi_{ \tau} v_{n} \Pi_{\tau} w_{n} \right) \|_{X^{s_{1}, - { 3 \over 8}}_{\tau}}
\lesssim \left\| \langle \partial_{x} \rangle^{s_{1}} \left(\Pi_{ \tau} u_{n} \Pi_{ \tau} v_{n} \Pi_{\tau} w_{n} \right) \right\|_{l^{4 \over 3}_{\tau}L^{4 \over 3}}.
$$
Next, by using the generalized Leibniz rule and the H\"older inequality, we obtain that
$$ \| \Pi_{\tau}\left( \Pi_{ \tau} u_{n} \Pi_{ \tau} v_{n} \Pi_{\tau} w_{n} \right) \|_{X^{s_{1}, - { 3 \over 8}}_{\tau}}
\lesssim \| \langle \partial_{x} \rangle^{s_{1}} \Pi_{\tau} u_{n} \|_{l^4_{\tau}L^4} \| \langle \partial_{x} \rangle^{s_{1}} \Pi_{\tau} v_{n} \|_{l^4_{\tau}L^4} \| \langle \partial_{x} \rangle^{s_{1}} \Pi_{\tau} w_{n} \|_{l^4_{\tau}L^4}
$$
and we conclude by using \eqref{prodd1} again.
\end{proof}
Finally, the estimates of Corollary \ref{corsb} for the continuous solution give the following property
for its restriction to the grid.
\begin{proposition}\label{propunifdisc}
Let $u_{\tau}$ be the solution of \eqref{nlsP} and define the sequence $u_{\tau}^n= u_{\tau}(n\tau + t', x)$. Assume that $u_0 \in H^{s_{0}}$, $s_{0}> 0 $. Then, for every $s_{1}$ such that $0 \leq s_{1} < s_{0}$, we have that
$$
\sup_{t' \in [0, 4 \tau]} \|\eta(n\tau) u^n_{\tau}\|_{X^{s_{1}, {3 \over 8}}_{\tau}} \leq C_{T}.
$$
\end{proposition}
In the following we shall still denote by $u_{\tau}$ a function which belongs globally to $X^{s_{0}, b}(\mathbb{R}\times \mathbb{T})$,
coincides on $[0, T]$ with the solution $u_{\tau}$ of \eqref{nlsP} given by Proposition \ref{propNLSK} and vanishes for $ t \geq 2T$. With this notation,
we get from the previous proposition that
$$
\sup_{t' \in [0, 4 \tau]} \| u_{\tau}(n\tau + t', x) \|_{X^{s_{1}, {3 \over 8}}_{\tau}} \leq C_{T}.
$$
\section{Global error estimates}
Let $e^{k} = u_\tau(t_k)- u^k$ denote the global error for the modified equation and let $\eta$ be a smooth and compactly supported function, which is one on $[-1, 1]$ and supported in $[-2, 2]$.
Using the properties of $\eta$ allows us to rewrite \eqref{global} as
\begin{equation}
\label{eqen}
e^{n}= i \mu \tau \eta(t_{n}) \sum_{k=0}^{n-1} \mathrm{e}^{i (n-k) \tau \partial_x^2} \eta\left( {t_{k} \over T_{1}}\right)
\bigl( \Phi_{\mathcal{N}}^\tau(u_{\tau}(t_k)) - \Phi_{\mathcal{N}}^\tau(u_{\tau}(t_{k})-e^k)\bigr) + \mathcal{R}_{n}
\end{equation}
with
\begin{equation}
\label{Rn}
\mathcal{R}_{n}= i \eta(t_{n})\mu \sum_{k=0}^{n-1} \mathrm{e}^{i (n-k) \tau \partial_x^2} \eta(t_{k}) \mathcal{E}_{\text{loc}}(t_k,\tau,u_{\tau})
\end{equation}
for $0 \leq n \leq N_{1}$, where $N_{1}=\lfloor {T_{1}\over \tau}\rfloor$ with $T_{1} \leq \min(1, T)$.
The benefit of this modification is that the solution of \eqref{eqen} is globally defined, so that we can use global
Bourgain spaces to estimate it.
The aim of this section is the proof of the following estimate on the remainder $ \mathcal{R}_{n}.$
\begin{proposition}
\label{globalerror}
For $b \in (1/2, 5/8)$, $ s_{1} \in [0, 2]$, $0 \leq s_{1}< s_{0}$, we have
$$\|\mathcal{R}_{n}\|_{X^{0, b}_{\tau}}\leq C_{T}\tau^{s_{1} \over 2}.$$
\end{proposition}
\begin{proof}
By using \eqref{bourg4}, we first obtain that
\begin{equation}
\label{estE0} \|\mathcal{R}_{n} \|_{X^{0, b}_{\tau}} \lesssim \tau^{-1}
\| \mathcal{E}_{\text{loc}}(t_n,\tau,u_{\tau})\|_{X^{0, b-1}_{\tau}} \lesssim
\tau^{-1} \| \mathcal{E}_{\text{loc}}(t_n,\tau,u_{\tau})\|_{X^{0, -{3 \over 8}}_{\tau}}.
\end{equation}
Thanks to \eqref{localErr}, we need to estimate $ \tau^{-1} \| \mathcal{E}_{i}\|_{X^{0, -{3 \over 8}}_{\tau}}$, $i=1, \, 2, \, 3$.
To estimate $\mathcal{E}_{1}$, since $\mathrm{e}^{-is \partial_{x}^2}-1$ and $\Pi_{\tau}$ are Fourier multipliers
in the space variable, and since $\Pi_{\tau}$ projects on frequencies less than $\tau^{- {1 \over 2}},$ we observe that
for any function $F(t_{n})$, we have
$$ \sup_{s \in [- \tau, \tau]}\| (\mathrm{e}^{-is \partial_{x}^2}-1) \Pi_{\tau} F(t_{n}) \|_{X^{0, b}_{\tau}}
\lesssim \tau^{s_{1}\over 2} \| F(t_{n}) \|_{X^{s_{1}, b}_{\tau}}$$
for $0 \leq s_{1} \leq 2$.
Therefore, we get that
$$ \tau^{-1} \| \mathcal{E}_{1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \tau^{s_{1} \over 2} \sup_{s \in [0, \tau]}
\left\| \Pi_{\tau}\left(\vert\Pi_{\tau} u_{\tau}(t_n+s) \vert^2 \Pi_{\tau} u_{\tau}(t_n+s)\right)\right\|_{X^{s_{1}, -{3 \over 8}}_{\tau}}.$$
This yields, thanks to Lemma \ref{prodd},
$$ \tau^{-1} \| \mathcal{E}_{1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \tau^{s_{1} \over 2} \sup_{s \in [0, \tau]} \|u_{\tau}(t_{n} + s)\|_{X^{s_{1}, {3 \over 8}}_{\tau}}^3.$$
Therefore, by using Proposition \ref{propunifdisc}, we obtain that
\begin{equation}
\label{estE1} \tau^{-1} \| \mathcal{E}_{1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim C_{T}\tau^{s_{1} \over 2}.
\end{equation}
Next, by using \eqref{localErr} and \eqref{vd}, we get that
$$ \tau^{-1} \| \mathcal{E}_{2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \tau^{-1} \| \mathcal{E}_{2, 1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
+ \tau^{-1} \| \mathcal{E}_{2, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}, $$
where
\begin{align*}
\mathcal{E}_{2, 1}(t_n)
& = \Pi_{\tau} \int_{0}^{\tau }
\vert \Pi_{\tau} u_{\tau}(t_n+s) \vert^2 \Pi_{\tau} \Big(( \mathrm{e}^{is \partial_{x}^2}- 1) u_{\tau}(t_{n})\Big)\mathrm{d} s
, \\
\mathcal{E}_{2, 2}(t_n)
& = i \mu \Pi_{\tau} \int_{0}^{\tau }
\vert \Pi_{\tau} u_{\tau}(t_n+s) \vert^2 \, \Pi_{\tau}\!
\int_0^s \mathrm{e}^{i (s-\xi) \partial_x^2} \Pi_{\tau}\left( \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right) \mathrm{d}\xi\mathrm{d} s.
\end{align*}
By using again \eqref{prodd3}, we obtain that
$$ \tau^{-1} \| \mathcal{E}_{2, 1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \sup_{s \in [0, \tau]}\left( \|u_{\tau}(t_{n} + s)\|_{X^{0, {3 \over 8}}_{\tau}}^2
\| ( \mathrm{e}^{is \partial_{x}^2}- 1) \Pi_{\tau}u_{\tau}(t_{n})\|_{X^{0, {3 \over 8}}_{\tau}}\right).$$
Since, for $s\in [0, \tau]$, we have
$$ \| ( \mathrm{e}^{is \partial_{x}^2}- 1) \Pi_{\tau}u_{\tau}(t_{n})\|_{X^{0, {3 \over 8}}_{\tau}}
\lesssim \tau^{s_{1} \over 2} \|u_{\tau}(t_{n})\|_{X^{s_{1}, {3 \over 8}}_{\tau}}, $$
we get by using again Proposition \ref{propunifdisc} that
$$ \tau^{-1} \| \mathcal{E}_{2, 1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} \tau^{s_{1} \over 2}.$$
In a similar way, we can first obtain thanks to \eqref{prodd3} that
\begin{align*}
\tau^{-1} \| \mathcal{E}_{2, 2}&(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}\\
&\lesssim \sup_{s \in [0, \tau]}\left( \|u_{\tau}(t_{n} + s)\|_{X^{0, {3 \over 8}}_{\tau}}^2
\left\|\int_0^s \mathrm{e}^{i (s-\xi) \partial_x^2} \Pi_{\tau}\left( \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right) \mathrm{d}\xi \right\|_{X^{0, {3 \over 8}}_{\tau}}\right).
\end{align*}
Next, by using successively \eqref{bshift}, \eqref{embdisc1} and \eqref{prodd3}, we write
\begin{align*}
\Bigl\|\int_0^s \mathrm{e}^{i (s-\xi) \partial_x^2} \Pi_{\tau}&\left( \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right) \mathrm{d}\xi \Bigr\|_{X^{0, {3 \over 8}}_{\tau}}\\
&\lesssim \tau \sup_{\xi \in [0, \tau]} \left\| \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right\|_{X^{0, {3 \over 8}}_{\tau}}\\
&\lesssim \tau^{1 \over 4}
\sup_{\xi \in [0, \tau]} \left\| \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \tau^{1 \over 4}
\sup_{\xi \in [0, \tau]} \| u_{\tau}(t_n+\xi ) \|_{X^{0, {3 \over 8}}_{\tau}}^3.
\end{align*}
Consequently, by using again Proposition \ref{propunifdisc}, we find that
$$ \tau^{-1} \| \mathcal{E}_{2, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} \tau^{1 \over 4}$$
and hence that
$$ \tau^{-1} \| \mathcal{E}_{2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} (\tau^{s_{1} \over 2} + \tau^{1 \over 4} ) \leq C_{T} \tau^{s_{1} \over 2}$$
if $s_{1} \leq 1/2$.
For $s_{1} >1/2$, we can estimate $\mathcal{E}_{2, 2}$ in a different way. We just use that
\begin{multline*}
\tau^{-1} \| \mathcal{E}_{2, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \tau^{-1} \| \mathcal{E}_{2, 2}(t_n)\|_{X^{0, 0}_{\tau}}
\\ \lesssim \sup_{s \in [0, \tau]} \|u_{\tau}(t_{n} + s)\|_{l^\infty_{\tau}L^8}^2
\left\|\int_0^s \mathrm{e}^{i (s-\xi) \partial_x^2} \Pi_{\tau}\left( \vert \Pi_{\tau} u_{\tau}(t_n+\xi ) \vert^2 \Pi_{\tau} u_{\tau}(t_n+\xi ) \right) \mathrm{d}\xi \right\|_{l^\infty_{\tau}L^4}
\end{multline*}
and we employ the Sobolev embedding $H^{s_{1}}\subset L^p(\mathbb{T} )$ for every $p$ together with the fact that
$ H^{s_{1}}$ is an algebra to obtain that
$$ \tau^{-1} \| \mathcal{E}_{2, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}} \leq C_{T} \tau.$$
Combining these estimates, we finally get
\begin{equation}
\label{estE2} \tau^{-1} \| \mathcal{E}_{2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} \tau^{s_{1} \over 2}
\end{equation}
for every $0 \leq s_{1} \leq 2.$
It remains to estimate $\mathcal{E}_{3}.$ We can rewrite it as
$$ \mathcal{E}_{3}(t_{n})= \mathcal{E}_{3, 1}(t_{n}) + \mathcal{E}_{3, 2}(t_{n})$$
with
\begin{align*}
\mathcal{E}_{3, 1}(t_{n}) &= \Pi_{\tau} \int_{0}^{\tau }
\left( \vert \Pi_{\tau} u_{\tau}(t_n+s) \vert^2 - \vert \Pi_{\tau} u_{\tau}(t_n) \vert^2 \right) \, \Pi_{\tau} u_{\tau}(t_n) \,\mathrm{d} s \\
& = \Pi_{\tau} \int_{0}^{\tau }
\mbox{Re} \left( (\Pi_{\tau} u_{\tau}(t_n+s) - \Pi_{\tau}u_{\tau}(t_{n} ) )(\overline{ \Pi_{\tau} u_{\tau}(t_n+s)} +\overline{\Pi_{\tau}u_{\tau}(t_{n} )})\right) \, \Pi_{\tau} u_{\tau}(t_n) \,\mathrm{d} s \\
\mathcal{E}_{3, 2}(t_{n}) &= - \tau \Pi_{\tau} \left(
\frac{\mathrm{e}^{i\mu \tau \vert \Pi_{\tau} u_{\tau}(t_n) \vert^2}-1 -i \mu \tau \vert \Pi_{\tau} u_{\tau}(t_n) \vert^2}{i \mu\tau} \Pi_{\tau} u_{\tau}(t_n)
\right).
\end{align*}
By using \eqref{vd} again, $ \mathcal{E}_{3, 1}(t_{n})$ can be estimated in the same way as $ \mathcal{E}_{2}(t_{n})$. We find again that
$$ \tau^{-1} \| \mathcal{E}_{3, 1}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} \tau^{s_{1} \over 2}.$$
For $ \mathcal{E}_{3, 2}$, by using successively \eqref{dualbourg} and the estimate
$$ \left| {\mathrm{e}^{i \mu \tau x}- 1 - i\mu \tau x \over i \mu \tau} \right| \lesssim \tau |x|^2, \quad \forall x \in \mathbb{R},$$
we get that
$$ \tau^{-1} \| \mathcal{E}_{3, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim \left\| \frac{\mathrm{e}^{i\mu \tau \vert \Pi_{\tau} u_{\tau}(t_n) \vert^2}-1 -i \mu \tau \vert \Pi_{\tau} u_{\tau}(t_n) \vert^2}{i \mu\tau} \Pi_{\tau} u_{\tau}(t_n) \right\|_{l^{4\over 3}_{\tau}L^{4\over 3}}
\lesssim \tau \left\| \Pi_{\tau} u_{\tau}(t_n) \right\|_{l^{{20 \over 3}}_{\tau}L^{20 \over 3}}^5.$$
Next, by using successively that $W^{ {1 \over 10}, 4}\subset L^{20 \over 3}(\mathbb{T})$ and \eqref{prodd1}, we get that
$$ \tau^{-1} \| \mathcal{E}_{3, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\lesssim_{T} \tau \left\| \Pi_{\tau} u_{\tau}(t_n) \right\|_{l^4_{\tau}W^{{1 \over 10}, 4} }
\lesssim \tau \left\| \Pi_{\tau} u_{\tau}(t_n) \right\|_{X^{ {1 \over 10}, {3 \over 8}}_{\tau}}.$$
Consequently, if $2 \geq s_{1} >1/10$, we get from Proposition \ref{propunifdisc} that
$$ \tau^{-1} \| \mathcal{E}_{3, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}} \lesssim \tau C_{T} \lesssim \tau^{s_{1} \over 2} C_{T}.$$
If $ s_{1} \leq 1/10$, we can write that
$$ \tau^{-1} \| \mathcal{E}_{3, 2}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}} \lesssim
\tau \left\| \Pi_{\tau} u_{\tau}(t_n) \right\|_{X^{ {1 \over 10}, {3 \over 8}}_{\tau}}
\lesssim \tau^{ {19 \over 20} + {s_{1} \over 2}} C_{T} \lesssim \tau^{{s_{1}\over 2}}C_{T}.$$
Consequently, we have also obtained that
\begin{equation}
\label{estE3}\tau^{-1} \| \mathcal{E}_{3}(t_n)\|_{X^{0, -{3 \over 8}}_{\tau}}
\leq C_{T} \tau^{s_{1} \over 2}.
\end{equation}
We end the proof by gathering \eqref{estE0}, \eqref{estE1}, \eqref{estE2} and \eqref{estE3}.\end{proof}
\section{Proof of Theorem \ref{maintheo}}
We are now in a position to give the proof of Theorem \ref{maintheo}.
We first observe that thanks to Proposition \ref{propNLSK}, we have from the triangle inequality that
\begin{equation}
\label{erreurtotale}
\|u(t_{n})- u^n\|_{L^2} \leq \|u(t_{n}) - u_{\tau}(t_n)\|_{L^2} + \|u_{\tau}(t_{n}) - u^n\|_{L^2} \leq C_{T}\tau^{s_{1} \over 2} + \| e^n\|_{l^\infty_{\tau}L^2},
\end{equation}
where $e^{n}$ solves \eqref{eqen}. To get the error estimates of Theorem \ref{maintheo}, it thus suffices to estimate $ \|e^n\|_{X_{\tau}^{0,b}}$ for some $b\in (1/2, 5/8)$ thanks to \eqref{sobbourg}.
By using equation \eqref{eqen}, Lemma \ref{bourgainfaciled} and the error estimate of Proposition \ref{globalerror}, we get that
\begin{equation}
\label{preuveerreur1}
\|e^n \|_{X^{0, b}_{\tau}} \leq C_T T_{1}^{\varepsilon_{0}} \|\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\|_{X^{0, - { 3 \over 8}}_{\tau}} + C_{T}\tau^{ s_{1}\over 2}.
\end{equation}
Here, we further employed~\eqref{bourg3} with $T_1$ still to be determined and $\varepsilon_0=1-b-\frac38>0$.
Note that, by using \eqref{numFlow}, we can write
\begin{align*}
&\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)=
\Pi_{\tau}\left( \Psi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Psi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\right), \\
&\Psi_\mathcal{N}^\tau(w)=
\frac{\mathrm{e}^{i\mu \tau \vert \Pi_{\tau} w \vert^2}-1}{i \mu \tau} \Pi_{\tau} w,
\end{align*}
where we have, uniformly in $n$ and $\tau$, the pointwise estimate
\begin{equation}
\label{phipointwise}
\left| { \Psi}_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
{ \Psi}_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\right|
\lesssim |\Pi_{\tau}u_{\tau}(t_{n})|^2 | \Pi_{\tau}e^{n}| + |\Pi_{\tau}u_{\tau}(t_{n})| |\Pi_{\tau}e^n|^2 + |\Pi_{\tau}e^n|^3.
\end{equation}
We can then get by using \eqref{dualbourg} that
$$\|\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\|_{X^{0, - { 3 \over 8}}_{\tau}}
\lesssim \|{ \Psi}_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
{ \Psi}_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\|_{l^{4 \over 3}_{\tau}L^{4\over 3}}
$$
and hence, thanks to \eqref{phipointwise}
and H\"older's inequality, that
\begin{multline*}\|\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\|_{X^{0, - { 3 \over 8}}_{\tau}}
\\\lesssim \|\Pi_{\tau}u_{\tau}(t_{n})\|_{l^4_{\tau}L^4}^2 \|\Pi_{\tau} e^n\|_{l^4_{\tau}L^4} + \|\Pi_{\tau}u_{\tau}(t_{n})\|_{l^4_{\tau}L^4}
\|\Pi_{\tau} e^n\|_{l^4_{\tau}L^4}^2 + \|\Pi_{\tau} e^n\|_{l^4_{\tau}L^4}^3.
\end{multline*}
Next, by using again \eqref{prodd1} and Proposition \ref{propunifdisc}, we finally get that
$$ \|\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n}))-
\Phi_{\mathcal{N}}^\tau (u_{\tau}(t_{n})-e^n)\|_{X^{0, - { 3 \over 8}}_{\tau}}
\leq C_{T}( \|e^n \|_{X^{0, {3 \over 8}}_{\tau}} + \|e^n \|_{X^{0, {3 \over 8}}_{\tau}}^2 + \|e^n \|_{X^{0, {3 \over 8}}_{\tau}}^3).$$
This yields by using \eqref{preuveerreur1}
$$ \|e^n \|_{X^{0, b}_{\tau}} \leq C_T T_{1}^{\varepsilon_{0}}
\left( \|e^n \|_{X^{0, {3 \over 8}}_{\tau}} + \|e^n \|_{X^{0, {3 \over 8}}_{\tau}}^2 + \|e^n \|_{X^{0, {3 \over 8}}_{\tau}}^3\right) +
C_{T}\tau^{ s_{1}\over 2}.$$
By choosing $T_{1}$ sufficiently small we thus get that
$$
\|e^n \|_{X^{0, b}_{\tau}} \leq C_{T}\tau^{s_{1} \over 2}.
$$
This proves the desired estimate \eqref{final1} for $ 0 \leq n \leq N_{1}$. We can then iterate the argument in a classical way on $ N_{1} \leq n \leq 2 N_{1}$, and so on, to obtain the final estimate for $ 0 \leq n \leq T/\tau$.
\section{Numerical experiments}
In this section we numerically illustrate our convergence result, Theorem \ref{maintheo}. We examine the convergence order of the filtered splitting methods for both rough and smooth initial data. In Figure~\ref{fig} we solve the periodic Schr\"{o}dinger equation \eqref{nlsO} with the filtered Lie splitting \eqref{scheme} and, respectively, the filtered Strang splitting with initial values
\begin{align*}
u(0) \in H^s\quad \text{with } s = \frac{1}{2}\text{ and } s = 4;
\end{align*}
see, e.g., \cite{KOS19} for details on the construction of rough initial values. We employ a standard Fourier pseudospectral method for the discretization in space and choose $K = 2^{14}$ as the largest Fourier mode, i.e., a spatial mesh size $\Delta x \approx 0.00038$. As a reference solution we use the Strang splitting method with $K$ spatial points and a very small time step size. Then, for each time step size $\tau$, we measure the error between the filtered Lie and filtered Strang splitting methods and the projected reference solution (obtained by applying $\Pi_\tau$) in the corresponding discrete $L^2$ norm. Our numerical findings confirm the convergence rate of order $\mathcal{O}(\tau^{1/4})$ for solutions in $H^{1/2}$ (see Theorem \ref{maintheo}) as well as the expected full order of convergence (order one for Lie and order two for Strang) in the case of smooth solutions.
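For the reader's convenience, a minimal sketch of one possible implementation of the filtered Lie splitting is given below. This is our own illustrative code, not part of the reference implementation: the function name and grid are ours, and $\Pi_\tau$ is realized as a sharp Fourier cut-off at $|k| \leq \tau^{-1/2}$.

```python
import numpy as np

def filtered_lie_splitting(u0, tau, mu, nsteps):
    """One realization of the filtered Lie splitting
    u^{n+1} = exp(i*tau*dxx) Pi_tau( exp(i*mu*tau*|Pi_tau u^n|^2) Pi_tau u^n )
    on a uniform grid of the torus [0, 2*pi)."""
    n = len(u0)
    k = np.fft.fftfreq(n, d=1.0 / n)            # integer wave numbers
    keep = np.abs(k) <= tau ** -0.5             # filter Pi_tau: |k| <= tau^{-1/2}

    def proj(v):                                # Pi_tau as a sharp Fourier cut-off
        vhat = np.fft.fft(v)
        vhat[~keep] = 0.0
        return np.fft.ifft(vhat)

    lin = np.exp(-1j * tau * k ** 2)            # Fourier symbol of exp(i*tau*dxx)
    u = proj(np.asarray(u0, dtype=complex))
    for _ in range(nsteps):
        w = proj(u)
        # nonlinear substep: exact flow of the filtered phase equation
        v = proj(np.exp(1j * mu * tau * np.abs(w) ** 2) * w)
        # linear substep: free Schroedinger flow, exact in Fourier space
        u = np.fft.ifft(lin * np.fft.fft(v))
    return u
```

Since every substep is either unitary or an orthogonal projection, the discrete $L^2$ norm is non-increasing along the iteration, which provides a simple consistency check of the implementation.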
\begin{figure}[h!]
\centering
\includegraphics[width=0.471\linewidth]{SplitNew.eps}
\hfill
\includegraphics[width=0.48\linewidth]{SplitNewSmooth.eps}
\caption{$L^2$ error of the Lie and Strang splitting schemes for rough and smooth initial data. Left: rough initial value $u(0) \in H^{1/2}$; right: smooth initial value $u(0) \in H^4$.}\label{fig}
\end{figure}
\subsection*{Acknowledgements}
{\small
KS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850941).
}
\section{Introduction}
Anomalous behavior of thermodynamic response functions at low temperatures is often a hallmark of strong interparticle correlations in quantum materials.\cite{Coleman2007} Among the many examples of such materials are the cerium- and iron-based superconductors, which develop superconducting order and exhibit unusual temperature dependences of the heat capacity and of the London penetration depth, respectively.\cite{Petrovic2001,Thompson2001,Sarrao2007,Matsuda_Science2012,Matsuda_review2014} These thermodynamic anomalies are likely governed by the system's proximity to an underlying magnetic quantum phase transition, which mediates strong interactions between the constituent quasiparticles. \cite{Sasha, Sachdev,Fernandes2020,Khodas2020}
Correlated insulators, just like the superconductors discussed above, may also exhibit anomalous thermodynamic properties which are not necessarily related to strong exchange interactions between the local magnetic moments. In a recent example, quantum oscillations in magnetization and a low-temperature upturn in the heat capacity have been observed in the correlated narrow-gap semiconductor samarium hexaboride. \cite{Suchitra2017,SmB6HC1} The experimental interest in this material, which dates back to the 1960s,\cite{SmB6,PeterReview} has recently been revived by its unconventional transport properties: below $T^*\simeq 5$K Ohm's law breaks down, so that the bulk develops a gap with respect to the current-carrying excitations while only the surfaces remain metallic. \cite{Kim2014}
Theoretical proposals explaining such behavior focus on the possibility of an inversion of even- and odd-parity bands at the high-symmetry points of the Brillouin zone. \cite{Dzero2010,Takimoto2011,Dzero2012,Alexandrov2013,Dzero2016} As a result of the band inversion, the surfaces of the sample remain metallic even though the bulk is fully insulating.
The model with an inverted band structure can also be used to calculate the quantum oscillations in magnetization. \cite{Knolle2015,Onur2016,FaWang2016,Inti2018,Fu2018,Pal2019HC} A self-consistent theory of the low-temperature upturn in the heat capacity, however, likely requires going beyond a non-interacting low-energy theory. Such an attempt was made by Knolle and Cooper, who formulated a low-temperature theory by first projecting out the double occupation on the $f$-orbitals and then including the interaction terms, which ultimately leads to the formation of excitons and magnetoexcitons. \cite{Knolle2017} It is not \emph{a priori} clear, however, which of the two instabilities, if either, would be the leading one. Furthermore, one may consider a scenario in which superconductivity competes with the excitonic type of instability upon doping these materials with carriers.
In what follows, we address this problem by formulating a low-energy theory with an effective action which includes the short-ranged interactions allowed by the symmetry of the problem. Specifically, we consider a generic two-band model with a hybridization gap as our starting point. One band is assumed to be electron-like, while the other is hole-like. The hybridization depends linearly on momentum, which corresponds to the case of $d$- and $f$-orbital bands, while a parabolic dispersion relation is assumed for both bands. We will not make any specific assumption on the position of the chemical potential at the beginning of the renormalization group flow. Importantly, within our theory the band parameters as well as the hybridization amplitude have been renormalized from their bare values by projecting out the double occupancy on the $f$-orbitals, so that the effective masses for the conduction and valence band quasiparticles are not equal. Consequently, the emerging many-body instabilities can be studied by employing the renormalization group (RG) technique. We find that for an arbitrary ratio between the effective masses of the conduction and valence bands there is a fixed point which corresponds to an instability favoring the formation of magnetic excitons.
This paper is structured as follows. In the next section we provide the details of the model, discuss the relevant approximations and write down the low-energy theory with the two-particle interactions included. In Section III we analyze the low-energy theory using the renormalization group approach and derive the RG flow equations for the corresponding coupling constants in both the particle-hole and particle-particle channels. Section IV is devoted to a discussion of the results and conclusions.
\section{Model}
When discussing materials with partially filled $f$-orbitals, the Anderson lattice model is usually the starting point. Since the contact interaction between the $f$-electrons is the largest energy scale in the problem, in order to formulate the low-energy theory one usually projects out the doubly occupied states on the $f$-orbitals.\cite{Coleman2007} This procedure leads to a renormalization of the parameters in the Anderson model Hamiltonian. The resulting low-energy model Hamiltonian is our starting point.
\subsection{Single-particle action}
We consider the following single particle Hamiltonian:
\begin{equation}\label{Eq1}
\hat{H}_0=\sum\limits_{\mathbf k}{\Psi}_{a}^\dagger(\mathbf k)\left[
\begin{matrix}
\epsilon_c(\mathbf k)\hat{\tau}_0 & \hat{\Phi}_\mathbf k \\
\hat{\Phi}_\mathbf k^\dagger & \epsilon_f(\mathbf k)\hat{\tau}_0
\end{matrix}
\right]_{ab}\Psi_{b}(\mathbf k),
\end{equation}
where ${\Psi}_\mathbf k^\dagger=(c_{\mathbf k\uparrow}^\dagger, ~c_{\mathbf k\downarrow}^\dagger, ~f_{\mathbf k\uparrow}^\dagger, ~f_{\mathbf k\downarrow}^\dagger)$, $\mathbf k=(k_x,k_y,k_z)$ is the momentum, $\hat{\tau}_0$ is a 2$\times$2 unit matrix, $\hat{\Phi}_\mathbf k$ is a 2$\times$2 hybridization matrix to be specified below and $c,f$ are fermionic annihilation operators for the conduction and valence bands. It is convenient to write the single particle dispersion relation as \cite{Fu2018}
\begin{equation}
{\epsilon_c(\mathbf k)=\frac{k^2}{2m_c}+\frac{E_g}{2}+\mu_0, ~\epsilon_f(\mathbf k)=-\frac{k^2}{2m_f}-\frac{E_g}{2}+\mu_0},
\end{equation}
where $E_g<0$ is the energy gap, $\mu_0$ is an energy shift, and it is implicitly assumed that $m_f>m_c$.
It will be convenient to introduce
${m_{\pm}}^{-1}={m_c}^{-1}\pm{m_f}^{-1}$ and $k_F^2=-2m_{+}E_g$. If we now set $\mu_0=-k_F^2/4m_{-}$, it follows that
\begin{equation}\label{eBands}
\begin{split}
\epsilon_c(\mathbf k)&=\frac{k^2-k_F^2}{4m_+}+\frac{k^2-k_F^2}{4m_{-}}\equiv\xi_\mathbf k+\epsilon_\mathbf k, \\
\epsilon_f(\mathbf k)&=\frac{k^2-k_F^2}{4m_{-}}-\frac{k^2-k_F^2}{4m_+}\equiv\xi_\mathbf k-\epsilon_\mathbf k.
\end{split}
\end{equation}
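For instance, for the conduction band one checks directly that $\frac{1}{2m_c}=\frac{1}{4m_+}+\frac{1}{4m_-}$, $\frac{E_g}{2}=-\frac{k_F^2}{4m_+}$ and $\mu_0=-\frac{k_F^2}{4m_-}$, so that
$$
\epsilon_c(\mathbf k)=\frac{k^2-k_F^2}{4m_+}+\frac{k^2-k_F^2}{4m_-},
$$
and the expression for $\epsilon_f(\mathbf k)$ follows analogously.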
The specific form and momentum dependence of the matrices entering (\ref{Eq1}) are determined by the type of the hybridizing orbitals.
Here we consider the fairly standard form corresponding to the hybridization between even- and odd-parity orbitals with an angular momentum transfer of $\Delta l=1$:
\begin{equation}\label{Phik}
\hat{\Phi}_\mathbf k=V\left(\begin{matrix}
k_z & k_x-ik_y \\
k_x+ik_y & -k_z
\end{matrix}
\right).
\end{equation}
With the help of the Dirac matrices, listed in the Appendix A, the single-particle part of the action reads
\begin{equation}\label{L0}
S_0=\int_x{\Psi}^\dagger(x)\left(\frac{\partial}{\partial \tau}-\mu+\xi_{\hat{\mathbf k}}\mathbbm{1}_4+\sum\limits_{a=0}^3\Sigma^ad_{\hat{\mathbf k}}^a\right)\Psi(x),
\end{equation}
where $\mu$ is the chemical potential, $x=(\mathbf r,\tau)$ and
\begin{equation}\label{SigmasDs}
{\begin{split}
&\Sigma^0=\gamma_0, \quad \Sigma^1=\gamma_0\gamma_1, \quad \Sigma^2=\gamma_0\gamma_2,
\quad \Sigma^3=\gamma_0\gamma_3, \\
&d_\mathbf k^0=\epsilon_\mathbf k, \quad d_\mathbf k^1=Vk_x, \quad
d_\mathbf k^2=Vk_y, \quad d_\mathbf k^3=Vk_z.
\end{split}}
\end{equation}
Clearly, when $m_c=m_f$, the term proportional to $\mathbbm{1}_4$ is zero.
\subsection{Interactions}
The most general form of the Lagrangian density describing weak repulsive interactions is \cite{Bitan0,Bitan}
\begin{equation}\label{Lint}
{
\begin{split}
&{\cal L}_{\textrm{int}}=\tilde{g}_1\left({\Psi}^\dagger\Psi\right)^2+\tilde{g}_2\left({\Psi}^\dagger \tau_1\sigma_0\Psi\right)^2+\tilde{g}_3\left({\Psi}^\dagger \tau_2\sigma_0\Psi\right)^2\\&+\tilde{g}_4 \left({\Psi}^\dagger \tau_3\sigma_0\Psi\right)^2
+\tilde{g}_5\left[\left({\Psi}^\dagger \tau_2\sigma_1\Psi\right)^2+\left({\Psi}^\dagger \tau_2\sigma_2\Psi\right)^2\right.\\&\left.+\left({\Psi}^\dagger \tau_2\sigma_3\Psi\right)^2\right]+
\tilde{g}_6\left[\left({\Psi}^\dagger \tau_3\sigma_1\Psi\right)^2+\left({\Psi}^\dagger \tau_3\sigma_2\Psi\right)^2\right.\\&\left.+\left({\Psi}^\dagger \tau_3\sigma_3\Psi\right)^2\right]
+\tilde{g}_7\left[\left({\Psi}^\dagger \tau_0\sigma_1\Psi\right)^2+\left({\Psi}^\dagger \tau_0\sigma_2\Psi\right)^2\right.\\&\left.+\left({\Psi}^\dagger \tau_0\sigma_3\Psi\right)^2\right]+
\tilde{g}_8\left[\left({\Psi}^\dagger \tau_1\sigma_1\Psi\right)^2+\left({\Psi}^\dagger \tau_1\sigma_2\Psi\right)^2\right.\\&\left.+\left({\Psi}^\dagger \tau_1\sigma_3\Psi\right)^2\right].
\end{split}}
\end{equation}
Here ${\vec \tau}$ are Pauli matrices acting in the band space, while ${\vec \sigma}$ are Pauli matrices acting in spin space.
Since ${\cal L}=T-U$, the generic situation corresponds to the case when all coupling constants $\tilde{g}_i$ are negative, i.e., all interactions are assumed to be
repulsive from the outset.
Furthermore, we introduce the basis matrices ${\vec \Gamma}$ according to
\begin{equation}\label{Gammaj}
\begin{split}
&\Gamma^1=\mathbbm{1}_4, ~\Gamma^2=\tau_0\sigma_1, ~ \Gamma^3=\tau_0\sigma_2, ~ \Gamma^4=\tau_0\sigma_3, \\
& \Gamma^5=\tau_1\sigma_0, ~ \Gamma^6=\tau_1\sigma_1, ~ \Gamma^7=\tau_1\sigma_2, ~\Gamma^8=\tau_1\sigma_3, \\
&\Gamma^9=\tau_2\sigma_0, ~ \Gamma^{10}=\tau_2\sigma_1 , ~ \Gamma^{11}=\tau_2\sigma_2, ~ \Gamma^{12}=\tau_2\sigma_3, \\
&\Gamma^{13}=\tau_3\sigma_0, ~\Gamma^{14}=\tau_3\sigma_1, ~\Gamma^{15}=\tau_3\sigma_2, ~ \Gamma^{16}=\tau_3\sigma_3.
\end{split}
\end{equation}
Importantly, each of these matrices satisfies
\begin{equation}\label{basis}
{\left(\Gamma^a\right)^\dagger=\Gamma^a=\left(\Gamma^a\right)^{-1}.}
\end{equation}
Below we will show that not all interaction terms are independent from each other and, as a result, Eq. (\ref{Lint}) can be further simplified.\cite{Bitan0,Bitan,BilayerOskar}
\subsection{Fierz identity}
Thus, we have eight coupling constants $\tilde{g}_j<0$. However, only four of these interaction terms (and the corresponding couplings) are independent.
To prove this, let us employ the following Fierz identity \cite{Bitan0,Bitan,BilayerOskar}
\begin{equation}\label{Fierz}
\begin{split}
&\left({\Psi}^\dagger(x)M\Psi(x)\right)\left({\Psi}^\dagger(y)M\Psi(y)\right)\\&=-\frac{1}{16}\sum\limits_{ab}\textrm{Tr}\left({M}{\Gamma}^a{M}{\Gamma}^b\right)\left[{\Psi}^\dagger(x)\Gamma^b\Psi(y)\right]\\&\times\left[{\Psi}^\dagger(y)\Gamma^a\Psi(x)\right]
\end{split}
\end{equation}
along with the relation
\begin{equation}\label{identity}
\delta_{il}\delta_{kj}=\frac{1}{4}\sum\limits_{a=1}^{16}\Gamma_{ik}^a\Gamma_{jl}^a.
\end{equation}
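Both Eq. (\ref{basis}) and the completeness relation (\ref{identity}) can be confirmed numerically. A minimal NumPy sketch (illustrative only), constructing the sixteen matrices in the order of Eq. (\ref{Gammaj}):

```python
import numpy as np

# Pauli matrices; s[0] is the 2x2 identity
s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

# The sixteen matrices Gamma^a = tau_i (x) sigma_j in the order of Eq. (Gammaj)
G = np.array([np.kron(s[i], s[j]) for i in range(4) for j in range(4)])

# Eq. (basis): each Gamma^a is Hermitian and equal to its own inverse
assert all(np.allclose(g, g.conj().T) and np.allclose(g @ g, np.eye(4))
           for g in G)

# Eq. (identity): delta_il delta_kj = (1/4) sum_a Gamma^a_ik Gamma^a_jl
lhs = np.einsum('il,kj->ikjl', np.eye(4), np.eye(4))
rhs = np.einsum('aik,ajl->ikjl', G, G) / 4
assert np.allclose(lhs, rhs)
```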
Consider now the following vector
\begin{equation}\label{VT}\nonumber
\begin{split}
&{\vec V}=\left\{{\left(\overline{\Psi}\Gamma^{1}\Psi\right)^2},
\left(\overline{\Psi}\Gamma^{2}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{3}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{4}\Psi\right)^2,\right.\\&\left.
{\left(\overline{\Psi}\Gamma^{5}\Psi\right)^2},
\left(\overline{\Psi}\Gamma^{6}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{7}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{8}\Psi\right)^2, \right.\\&\left.
{\left(\overline{\Psi}\Gamma^{9}\Psi\right)^2},\left(\overline{\Psi}\Gamma^{10}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{11}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{12}\Psi\right)^2,\right.\\&\left.{\left(\overline{\Psi}\Gamma^{13}\Psi\right)^2},
\left(\overline{\Psi}\Gamma^{14}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{15}\Psi\right)^2+\left(\overline{\Psi}\Gamma^{16}\Psi\right)^2
\right\}.
\end{split}
\end{equation}
This choice is matched by the following vector of couplings ${\vec g}=(\tilde{g}_1,\tilde{g}_7,\tilde{g}_2,\tilde{g}_8,\tilde{g}_3,\tilde{g}_5,\tilde{g}_4,\tilde{g}_6)$.
Employing (\ref{Fierz}) along with the definition of the vector ${\vec V}$ above, one obtains the following system of linear equations,
$\sum_{j=1}^8 {\cal F}_{ij}V_j=0$, with
\begin{equation}\label{FV}
{\cal F}=\frac{1}{8}
\left(
\begin{matrix}
5 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
3 & 3 & 3 & -1 & 3 & -1 & 3 & -1 \\
1 & 1 & 5 & 1 & -1 & -1 & -1 & -1 \\
3 & -1 & 3 & 3 & -3 & 1 & -3 & 1 \\
1 & 1 & -1 & -1 & 5 & 1 & -1 & -1 \\
3 & -1 & -3 & 1 & 3 & 3 & -3 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 & 5 & 1 \\
3 & -1 & -3 & 1 & -3 & 1 & 3 & 3
\end{matrix}
\right).
\end{equation}
The vector of the eigenvalues of this matrix is
${\vec \lambda}=(1,1,1,1,0,0,0,0)$.
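This spectrum is readily confirmed numerically: the matrix (\ref{FV}) turns out to be a projector, ${\cal F}^2={\cal F}$, so its eigenvalues can only be $0$ or $1$. A minimal NumPy sketch (illustrative only):

```python
import numpy as np

# The Fierz matrix of Eq. (FV)
F = np.array([
    [5,  1,  1,  1,  1,  1,  1,  1],
    [3,  3,  3, -1,  3, -1,  3, -1],
    [1,  1,  5,  1, -1, -1, -1, -1],
    [3, -1,  3,  3, -3,  1, -3,  1],
    [1,  1, -1, -1,  5,  1, -1, -1],
    [3, -1, -3,  1,  3,  3, -3,  1],
    [1,  1, -1, -1, -1, -1,  5,  1],
    [3, -1, -3,  1, -3,  1,  3,  3]]) / 8

# F is a projector, hence its eigenvalues are 0 and 1
assert np.allclose(F @ F, F)
assert np.allclose(np.sort(np.linalg.eigvals(F).real), [0]*4 + [1]*4)

# four zero modes -> four independent coupling constants
assert np.linalg.matrix_rank(F) == 4
```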
Since there are four zero eigenvalues, we have only four independent coupling constants. It will be convenient to keep the interaction terms with couplings $\tilde{g}_1$, $\tilde{g}_2$, $\tilde{g}_3$ and $\tilde{g}_4$. Lastly, with the help of (\ref{FV}) we can express the remaining interaction terms in terms of the independent ones, which amounts to the following redefinition of the coupling constants:
$g_1=\tilde{g}_1-\tilde{g}_5-\tilde{g}_6-2\tilde{g}_7-\tilde{g}_8$, $g_2=\tilde{g}_2+\tilde{g}_5+\tilde{g}_6-\tilde{g}_7-2\tilde{g}_8$,
$g_3=\tilde{g}_3-2\tilde{g}_5+\tilde{g}_6-\tilde{g}_7+\tilde{g}_8$ and $g_4=\tilde{g}_4+\tilde{g}_5-2\tilde{g}_6-\tilde{g}_7+\tilde{g}_8$.
Thus, the interaction part of the Lagrangian density becomes
\begin{equation}\label{LintRed}
{
\begin{split}
{\cal L}_{\textrm{int}}(\mathbf r,\tau)&={g}_1 \left({\Psi}^\dagger\Psi\right)^2+{g}_2 \left({\Psi}^\dagger \tau_1\sigma_0\Psi\right)^2\\&+{g}_3\left({\Psi}^\dagger \tau_2\sigma_0\Psi\right)^2+{g}_4\left({\Psi}^\dagger \tau_3\sigma_0\Psi\right)^2.
\end{split}}
\end{equation}
Note that even though $\tilde{g}_j<0$, the renormalized coupling constants $g_j$ can be either positive or negative.
\section{Renormalization group analysis}
\subsection{Scaling at the tree level}
Each fermionic field is separated into slow ($k<\Lambda/s$) and fast ($\Lambda/s<k<\Lambda$) modes, $\Psi=\Psi_<+\Psi_>$. At the tree level, we need to integrate out the fermions
within the shell of momenta $\Lambda/s<|\mathbf k|<\Lambda$. We have
\begin{equation}\label{L0s}
\begin{split}
S_{0<}&=\int\limits_0^\beta d\tau\int_{|\mathbf k|<\Lambda/s}\frac{d^3\mathbf k}{(2\pi)^3}{\Psi}_{<}^\dagger(\mathbf k,\tau)\left(\frac{\partial}{\partial \tau}-\mu\right.\\&\left.+\xi_\mathbf k\mathbbm{1}_4+\sum\limits_{a}\Sigma^ad_\mathbf k^a\right)\Psi_{<}(\mathbf k,\tau).
\end{split}
\end{equation}
Let us rescale momentum back to its initial region $k'\leq\Lambda$ with $k=k'/s$ and $\tau=s^2\tau'$ and replace the fermionic fields
accordingly to keep the action invariant:
\begin{equation}\label{S0lessS}
{\Psi}(\mathbf k',\tau')=\frac{1}{\zeta}\Psi_{<}(\mathbf k'/s,s^2\tau').
\end{equation}
It follows
\begin{equation}\label{L02}
\begin{split}
S_{0}&=\frac{\zeta^2}{s^3}\int\limits_0^{\beta/s^2} d\tau\int_{|\mathbf k|<\Lambda}\frac{d^3\mathbf k}{(2\pi)^3}{\Psi}^\dagger(\mathbf k,\tau)\left(\frac{\partial}{\partial \tau}-s^2\mu\right.\\&\left.+\Sigma^0d_\mathbf k^0+s\sum\limits_{a=1}^{3}\Sigma^ad_\mathbf k^a\right)\Psi(\mathbf k,\tau).
\end{split}
\end{equation}
Thus, the action remains invariant under the following scale transformation ($s=e^t$):
$T'=s^2T$, $\mu'=s^2\mu$, $V'=sV$, $m_{\pm}'=s^2m_\pm$,
$k_F'=sk_F$, $\zeta=s^{3/2}$,
where $T$ is the temperature and the last relation fixes the field rescaling that keeps the action invariant. Clearly, with respect to the tree-level scaling, the hybridization coupling $V$ is a
relevant variable under the renormalization group flow. However, the hybridization grows slower than the chemical potential.
\subsection{RG equations: particle-hole channel}
We now proceed by expanding the action up to second order in the couplings $g_j$ and integrating out the 'fast' modes.
The effective action in terms of the 'slow' modes is
\begin{equation}\label{Basic}
\begin{split}
&\left\langle e^{-S[\Psi]}\right\rangle_{0>}=e^{-S_0[\Psi_<]-S_{\textrm{int}}[\Psi_<]}\langle e^{-S_{\textrm{int}}[\Psi_<,\Psi_>]}\rangle_{0>}\\&= e^{-S_0[\Psi_<]-\delta S[\Psi_<]},
\end{split}
\end{equation}
where $\langle...\rangle_{0>}$ denotes averaging over the Gaussian action of the fast modes and
the interaction part of the action $S_{\textrm{int}}$ is determined by the Lagrangian density (\ref{LintRed}).
We continue with the computation of the average over the fast modes (\ref{Basic}) using the cumulant expansion. Integrating out the fast modes in the momentum shell $k\in[\Lambda/s,\Lambda]$ and rescaling the resulting correction to the effective action using (\ref{S0lessS}) one may find the corrections to the coupling constants. The details of the calculation are
presented in Appendix B. The resulting flow equations for the four coupling constants are
\begin{equation}\label{FlowEqs}
{\begin{split}
\frac{dg_1}{d\ln s}&=-\frac{m\Lambda}{4\pi^2}\left[2g_1g_4+\eta\left(g_1+g_4\right)^2\right.\\&\left.+\eta\left(g_2-g_3\right)^2\right], \\
\frac{dg_2}{d\ln s}&=\frac{m\Lambda}{2\pi^2}
\left[(1-\eta)g_1g_2+\eta g_1g_3+(1+\eta)g_3g_4\right.\\&\left.-g_2g_3-(2+\eta)g_2g_4-g_2^2\right], \\
\frac{dg_3}{d\ln s}&=\frac{m\Lambda}{2\pi^2}
\left[(1-\eta)g_1g_3+\eta g_1g_2+(1+\eta)g_2g_4\right.\\&\left.-g_2g_3-(2+\eta)g_3g_4-g_3^2\right], \\
\frac{dg_4}{d\ln s}&=-\frac{m\Lambda}{4\pi^2}
\left\{(1+\eta)\left[g_1^2+(g_2-g_3)^2+g_4^2\right]\right.\\&\left.+2\eta g_1g_4\right\}.
\end{split}}
\end{equation}
Here we use $m=m_{+}$ for brevity and $\Lambda$ is the ultraviolet cutoff. Note that the second and third equations are symmetric with respect to $g_2\leftrightarrow g_3$. Lastly, the parameter $\eta$ describes the mismatch between the effective masses, Eq. (\ref{C1C2simp}), so that the case of two bands with equal effective masses corresponds to the limit $\eta\to 0$.
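The flow (\ref{FlowEqs}) is straightforward to integrate numerically. The NumPy sketch below (illustrative only: the initial couplings are arbitrary, the couplings are measured in units of $4\pi^2/m\Lambda$, and $\eta=0.125$ is the value used in Fig. \ref{Fig1}) also makes the $g_2\leftrightarrow g_3$ symmetry explicit:

```python
import numpy as np

def beta(g, eta):
    """RHS of Eq. (FlowEqs); couplings measured in units of 4*pi^2/(m*Lambda)."""
    g1, g2, g3, g4 = g
    return np.array([
        -(2*g1*g4 + eta*(g1 + g4)**2 + eta*(g2 - g3)**2),
        2*((1 - eta)*g1*g2 + eta*g1*g3 + (1 + eta)*g3*g4
           - g2*g3 - (2 + eta)*g2*g4 - g2**2),
        2*((1 - eta)*g1*g3 + eta*g1*g2 + (1 + eta)*g2*g4
           - g2*g3 - (2 + eta)*g3*g4 - g3**2),
        -((1 + eta)*(g1**2 + (g2 - g3)**2 + g4**2) + 2*eta*g1*g4)])

def flow(g, eta, t=5.0, n=5000):
    """Integrate dg/d ln s with a fourth-order Runge-Kutta step."""
    g, h = np.array(g, dtype=float), t / n
    for _ in range(n):
        k1 = beta(g, eta)
        k2 = beta(g + h*k1/2, eta)
        k3 = beta(g + h*k2/2, eta)
        k4 = beta(g + h*k3, eta)
        g = g + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return g

eta, g0 = 0.125, [0.03, 0.02, -0.01, 0.1]      # arbitrary initial couplings
gA = flow(g0, eta)
gB = flow([g0[0], g0[2], g0[1], g0[3]], eta)   # g2 and g3 interchanged

# the g2 <-> g3 symmetry of Eq. (FlowEqs): swapped input -> swapped output
assert np.allclose(gA, gB[[0, 2, 1, 3]])
```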
To analyze Eq. (\ref{FlowEqs}) it will be convenient to work with the coupling ratios\cite{BilayerOskar} $v_1=g_1/g_4$, $v_2=g_2/g_4$ and $v_3=g_3/g_4$. The flow equations in terms of these variables are easily derived from (\ref{FlowEqs}), so we write them compactly as
\begin{equation}\label{Poly}
\begin{split}
g_4\frac{dv_1}{dg_4}&={\cal R}_1(\eta;v_1,v_2,v_3)-v_1, \\
g_4\frac{dv_2}{dg_4}&={\cal R}_2(\eta;v_1,v_2,v_3)-v_2, \\
g_4\frac{dv_3}{dg_4}&={\cal R}_2(\eta;v_1,v_3,v_2)-v_3.
\end{split}
\end{equation}
Since the second and third equations are symmetric with respect to an interchange of $v_2$ and $v_3$, the fixed points can be determined analytically: in order to satisfy both equations simultaneously, we must require $v_2^*=v_3^*=v_\perp^*$. The fixed point values for the first equation are then given by the roots of the following equation:
\begin{equation}\label{EqFP}
\begin{split}
&(v_1^*-1)(v_1^*+1)\left(v_1^*+\frac{\eta}{1+\eta}\right)=0,
\end{split}
\end{equation}
while the fixed point for the remaining two equations is either $v_\perp^*=0$ or $v_\perp^*=(1/4)[(1+\eta)(v_1^*+1)^2-2]$.
Thus, independent of the value of the parameter $\eta$ in Eq. (\ref{EqFP}), there are six fixed points.
Our stability analysis of the flow equations around each fixed point shows that there is only one stable fixed point ("sink"), $\left(-\frac{\eta}{1+\eta},0,0\right)$, when the initial value of the coupling constant $g_4>0$. The remaining five fixed points are all unstable in at least one of the directions in the space
of coupling constants. When the initial value of the coupling $g_4<0$, the flow of the couplings reverses and the stable fixed point becomes a source.
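That $\left(-\eta/(1+\eta),0,0\right)$ is indeed a fixed point of the ratio flow can be checked directly from (\ref{FlowEqs}), using $dv_i/d\ln s=(dg_i/d\ln s-v_i\,dg_4/d\ln s)/g_4$. A minimal NumPy sketch (for self-containedness the $\beta$-functions of (\ref{FlowEqs}) are restated; $\eta$ and $g_4$ match the caption of Fig. \ref{Fig1}):

```python
import numpy as np

def beta(g, eta):
    """RHS of Eq. (FlowEqs); couplings measured in units of 4*pi^2/(m*Lambda)."""
    g1, g2, g3, g4 = g
    return np.array([
        -(2*g1*g4 + eta*(g1 + g4)**2 + eta*(g2 - g3)**2),
        2*((1 - eta)*g1*g2 + eta*g1*g3 + (1 + eta)*g3*g4
           - g2*g3 - (2 + eta)*g2*g4 - g2**2),
        2*((1 - eta)*g1*g3 + eta*g1*g2 + (1 + eta)*g2*g4
           - g2*g3 - (2 + eta)*g3*g4 - g3**2),
        -((1 + eta)*(g1**2 + (g2 - g3)**2 + g4**2) + 2*eta*g1*g4)])

eta, g4 = 0.125, 0.1
v_star = np.array([-eta/(1 + eta), 0.0, 0.0])   # the claimed sink
g = np.append(v_star * g4, g4)
dg = beta(g, eta)

# dv_i/d ln s = (dg_i - v_i dg_4)/g_4 must vanish at the fixed point
dv = (dg[:3] - v_star * dg[3]) / g4
assert np.allclose(dv, 0.0, atol=1e-12)
```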
The resulting RG flow diagram is presented in Fig. \ref{Fig1}.
\begin{figure}[h]
\centering
\includegraphics[width=3.35in]{Fig1-ph.jpg}
\caption{(color online). Renormalization group flow of the coupling constant ratios for the case of small mismatch between the effective masses, $\eta=0.125$.
We find that there are six fixed points overall in this case. Five fixed points (light green circles) are always unstable. The remaining one (solid red circle) is stable when $g_4>0$ and becomes unstable when $g_4<0$. Without loss of generality we chose to limit the presentation to the case $g_2=g_3=g_{2(3)}$ and we also assumed that at the beginning of the RG flow $(m\Lambda/4\pi^2)g_4=0.1$.}
\label{Fig1}
\end{figure}
Given the nature of the materials under discussion, this case is not physically relevant for us. Nevertheless, for completeness, we note that the value of $|g_1/g_4|$ at the stable fixed point equals zero for $\eta=0$ and then
increases with an increase in $\eta$, which means that in the absence of mass anisotropy the stable fixed point is a non-interacting one,
provided $g_4>0$. Lastly, we note that the chemical potential does not affect the flow of the coupling constants as long as $\mu\ll \frac{\Lambda^2}{2m}$ holds.
\subsection{Particle-hole channel susceptibilities} To investigate the leading instability at the stable fixed point in the particle-hole channel we need to analyze the flow of the corresponding susceptibilities. To do that, we modify the action $S\to S+\Delta S$ with\cite{BilayerOskar}
\begin{equation}\label{DeltaS}
\begin{split}
\Delta S_{\textrm{p-h}}&=-\chi_{\textrm{ph}}^{(1)}\int d\tau\int\frac{d^3\mathbf k}{(2\pi)^3} {\Psi}^\dagger(\mathbf k,\tau)\Psi(\mathbf k,\tau)\\&-\sum\limits_{a=2}^{16}\chi_{\textrm{ph}}^{(a)}\int d\tau\int\frac{d^3\mathbf k}{(2\pi)^3} {\Psi}^\dagger(\mathbf k,\tau)\hat{\Gamma}^a\Psi(\mathbf k,\tau).
\end{split}
\end{equation}
Each term here can be written as a sum of two momentum integrals: one with $k\leq\Lambda/s$ and the other with $\Lambda/s\leq k\leq \Lambda$.
The goal is to determine the change of the susceptibilities under the RG flow by perturbation theory in powers of the coupling constants. The flow equations for the susceptibilities are obtained by expanding the exponent (\ref{Basic}) in powers of $S_{\textrm{int}}[\Psi_<,\Psi_>]+\Delta S[\Psi_>]$ and integrating fermions whose momenta
lie in the outer shell $\Lambda/s\leq k\leq \Lambda$.
Thus the part of the action with the susceptibilities becomes
\begin{equation}\label{ReScaleChi2}
\begin{split}
&\Delta S_{\textrm{p-h}}=s^2\int\limits_0^\beta d\tau\int_{|\mathbf k|\leq\Lambda}\frac{d^3\mathbf k}{(2\pi)^3}
\sum\limits_{a=1}^{16}\chi_{\textrm{ph}}^{(a)}\left\{{\Psi}_k^\dagger
\hat{\Gamma}^a\Psi_k\right.\\&\left.+\sum\limits_{\cal S}g_{\cal S}\Pi_{\Gamma^a{\cal S}}{\Psi}_k^\dagger
\hat{\Gamma}^a\Psi_k-\sum\limits_{\cal S}g_{\cal S}{\Psi}_k^\dagger\Upsilon_{\Gamma^a{\cal S}}\Psi_k
\right\},
\end{split}
\end{equation}
where $k=(\mathbf k,\tau)$, the summation is performed over the set ${\cal S}=\{\Gamma^1,\Gamma^5,\Gamma^9,\Gamma^{13}\}$ and we use the following notations
\begin{equation}\label{PiOMYOM}\nonumber
\begin{split}
\Pi_{{\cal U}{\cal S}}&=\int\limits_{-\infty}^\infty\frac{d\omega_n}{2\pi}\int\limits_{{\Lambda/s}\leq p\leq\Lambda}
\textrm{Tr}\left[G_\mathbf p(i\omega_n){\cal U}G_\mathbf p(i\omega_n){\cal S}\right],\\
\Upsilon_{{\cal U}{\cal S}}&=\int\limits_{-\infty}^\infty\frac{d\omega_n}{2\pi}\int\limits_{{\Lambda/s}\leq p\leq\Lambda}{\cal S}G_\mathbf p(i\omega_n){\cal U}G_\mathbf p(i\omega_n){\cal S}.
\end{split}
\end{equation}
After we rescale the momenta and the fermionic fields so that the action recovers its original form, the flow equations for the corresponding susceptibilities read
\begin{equation}\label{Flow4chi}
\begin{split}
&\frac{d\ln\chi_{\textrm{ph}}^{(j)}}{d\ln s}=2, ~(1\leq j\leq 4, ~13\leq j\leq16), \\
&\frac{d\ln\chi_{\textrm{ph}}^{(5)}}{d\ln s}=2+\frac{m\Lambda}{2\pi^2}(1-v_1+3v_2+v_3)g_4, \\
&\frac{d\ln\chi_{\textrm{ph}}^{(6,7,8)}}{d\ln s}=2+\frac{m\Lambda}{2\pi^2}(1-v_1-v_2+v_3)g_4, \\
&\frac{d\ln\chi_{\textrm{ph}}^{(9)}}{d\ln s}=2+\frac{m\Lambda}{2\pi^2}(1-v_1-v_2+3v_3)g_4, \\
&\frac{d\ln\chi_{\textrm{ph}}^{(10,11,12)}}{d\ln s}=2+\frac{m\Lambda}{2\pi^2}(1-v_1+v_2-v_3)g_4.
\end{split}
\end{equation}
Solving the flow equations (\ref{FlowEqs}) numerically around the stable fixed point, we find that when both $v_2(s)$ and $v_3(s)$ approach zero from above, or when $v_3(s)$ approaches zero from above while $v_2(s)$ approaches zero from below, the fastest growing susceptibility corresponds to the order parameter
$\phi_s=\langle{\Psi}_\alpha^\dagger\left((\tau_1\pm i\tau_2){\sigma}_0\right)_{\alpha\beta}\Psi_\beta\rangle$,
which describes the spin-singlet excitonic order. In the opposite case, when both $v_2(s)$ and $v_3(s)$ approach zero from below, the fastest growing susceptibility describes the emergence of the magneto-excitonic order with the order parameter
${\vec \phi}_{t}=\langle{\Psi}_\alpha^\dagger\left((\tau_{1}\pm i\tau_2){\vec \sigma}\right)_{\alpha\beta}\Psi_\beta\rangle$. Thus, we confirm that the leading instabilities in the particle-hole channel are those leading to the formation of an excitonic insulator. It remains to be seen whether the superconducting instability may develop faster or not.
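The channel selection just described follows directly from the exponents in (\ref{Flow4chi}) evaluated near the sink. A short numerical sketch (illustrative only: $\epsilon$ is an arbitrary small offset, while $\eta$ and the value of $g_4$ are those of Fig. \ref{Fig1}, with $g_4$ measured in units of $2\pi^2/m\Lambda$):

```python
import numpy as np

eta = 0.125
v1 = -eta/(1 + eta)      # sink value of v_1 = g_1/g_4
g4 = 0.1                 # g_4 > 0, in units of 2*pi^2/(m*Lambda)

def anomalous(v2, v3):
    """Growth exponents of Eq. (Flow4chi) for the non-trivial channels."""
    return {'chi5':      2 + (1 - v1 + 3*v2 + v3)*g4,
            'chi678':    2 + (1 - v1 - v2 + v3)*g4,
            'chi9':      2 + (1 - v1 - v2 + 3*v3)*g4,
            'chi101112': 2 + (1 - v1 + v2 - v3)*g4}

eps = 1e-3
e_pp = anomalous(+eps, +eps)   # v2, v3 -> 0 from above
e_mp = anomalous(-eps, +eps)   # v2 from below, v3 from above
e_mm = anomalous(-eps, -eps)   # both from below

assert max(e_pp, key=e_pp.get) == 'chi5'                    # singlet excitonic
assert max(e_mp, key=e_mp.get) == 'chi9'                    # singlet excitonic
assert max(e_mm, key=e_mm.get) in ('chi678', 'chi101112')   # magneto-excitonic
```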
\subsection{Particle-particle channel: renormalization group equations} In order to investigate the superconducting instability, the Lagrangian density (\ref{LintRed}) can be recast into the form describing the interactions in the particle-particle channel. This goal can be accomplished with the help of the Fierz identity
\begin{equation}\label{ph2pp}
\begin{split}
&\left({\Psi}^\dagger(x)M\Psi(x)\right)\left({\Psi}^\dagger(x)M\Psi(x)\right)\\&=\frac{1}{16}\sum\limits_{ab}\textrm{Tr}\left({\Gamma}^aM{\Gamma}^b{M}^T\right)\left({\Psi}^\dagger(x)\Gamma^a{\Psi}^*(x)\right)\\&\times\left({\Psi}^T(x)\Gamma^b\Psi(x)\right).
\end{split}
\end{equation}
The fermionic nature of the fields $\Psi$ implies that the only non-vanishing terms are those with $\Gamma^a$ such that
$\Gamma_{ij}^a=-\Gamma_{ji}^a$: this relation holds for only six matrices ($a=3,7,9,10,12,15$). Furthermore, with the help of the Fierz identities we have
\begin{equation}\label{TaDam}
{\begin{split}
&\left[\begin{matrix}
\left({\Psi}^\dagger\Psi\right)^2\\
\left({\Psi}^\dagger \tau_1\sigma_0\Psi\right)^2 \\
\left({\Psi}^\dagger \tau_2\sigma_0\Psi\right)^2 \\
\left({\Psi}^\dagger \tau_3\sigma_0\Psi\right)^2 \\
\end{matrix}
\right]=
\frac{1}{4}\left(\begin{matrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & -1 & -1 & -1 & -1 \\
-1 & 1 & -1 & -1 & -1 & 1 \\
1 & -1 & -1 & -1 & -1 & 1
\end{matrix}
\right)\\&\times
\left[\begin{matrix}
\left({\Psi}^\dagger(x)\Gamma^3{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^3\Psi(x)\right) \\
\left({\Psi}^\dagger(x)\Gamma^7{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^7\Psi(x)\right) \\
\left({\Psi}^\dagger(x)\Gamma^9{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^9\Psi(x)\right) \\
\left({\Psi}^\dagger(x)\Gamma^{10}{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^{10}\Psi(x)\right) \\
\left({\Psi}^\dagger(x)\Gamma^{12}{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^{12}\Psi(x)\right) \\
\left({\Psi}^\dagger(x)\Gamma^{15}{\Psi}^*(x)\right)\left({\Psi}^T(x)\Gamma^{15}\Psi(x)\right)
\end{matrix}
\right].
\end{split}}
\end{equation}
Introducing the following notations:
\begin{equation}\label{SingletTrip}
\begin{split}
&{\cal S}_1(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^3\Psi(x)={\Psi}^T(x)(i\tau_0\sigma_2)\Psi(x), \\
&{\cal S}_2(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^{15}\Psi(x)={\Psi}^T(x)\left(i\tau_3\sigma_2\right)\Psi(x), \\
&{\cal T}_3(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^7\Psi(x)={\Psi}^T(x)\left(i\tau_1\sigma_2\right)\Psi(x), \\
&{\cal T}_4^{(1)}(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^9\Psi(x)={\Psi}^T(x)\left(i\tau_2\sigma_0\right)\Psi(x), \\
& {\cal T}_4^{(2)}(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^{10}\Psi(x)={\Psi}^T(x)\left(i\tau_2\sigma_1\right)\Psi(x), \\
&{\cal T}_4^{(3)}(\mathbf r,\tau)=i{\Psi}^T(x)\Gamma^{12}\Psi(x)={\Psi}^T(x)\left(i\tau_2\sigma_3\right)\Psi(x),
\end{split}
\end{equation}
we can now write down the Lagrangian density describing the interactions in the particle-particle channel:
\begin{equation}\label{Lintpp}
{\begin{split}
{\cal L}_{\textrm{int}}&=\sum\limits_{j=1}^{2}{u}_j{\cal S}_j^\dagger(\mathbf r,\tau){\cal S}_j(\mathbf r,\tau)+
u_3{\cal T}_{3}^\dagger(\mathbf r,\tau){\cal T}_3(\mathbf r,\tau)\\&+
u_4\sum\limits_{m=1}^{3}{\cal T}_{4}^{(m)\dagger}(\mathbf r,\tau){\cal T}_4^{(m)}(\mathbf r,\tau),
\end{split}}
\end{equation}
where the newly introduced (pairing) coupling constants are
$u_1=(g_1+g_2-g_3+g_4)/4$, $u_2=(g_1-g_2+g_3+g_4)/4$,
$u_3=(g_1+g_2+g_3-g_4)/4$ and $u_4=(g_1-g_2-g_3-g_4)/4$.
Thus, just as in the case of the particle-hole channel, we have ended up with four independent couplings. By expanding the operators (\ref{SingletTrip}) we can interpret ${\cal S}_1(\mathbf r,\tau)$ as the pairing operator in the $s$-wave channel, and ${\cal S}_2(\mathbf r,\tau)$ as the pairing operator leading to $s^{\pm}$-wave pairing. The remaining operators account for the pairing in the odd-parity and/or spin-triplet channels.
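The matrix in Eq. (\ref{TaDam}) and the resulting relations for $u_j$ can be checked numerically: only the six antisymmetric $\Gamma^a$ survive, with diagonal coefficients $(1/16)\textrm{Tr}\left(\Gamma^aM\Gamma^aM^T\right)$ for $M\in\{\mathbbm{1}_4,\tau_1\sigma_0,\tau_2\sigma_0,\tau_3\sigma_0\}$. A NumPy sketch (illustrative only; the test values of $g_j$ are arbitrary):

```python
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
G = [np.kron(s[i], s[j]) for i in range(4) for j in range(4)]  # Gamma^1..16

# the six antisymmetric matrices (a = 3, 7, 9, 10, 12, 15)
anti = [2, 6, 8, 9, 11, 14]                       # zero-based indices
assert all(np.allclose(G[a], -G[a].T) for a in anti)
assert all(not np.allclose(G[a], -G[a].T) for a in range(16) if a not in anti)

# coefficients of Eq. (TaDam): (1/16) Tr(Gamma^a M Gamma^a M^T)
M_list = [G[0], np.kron(s[1], s[0]), np.kron(s[2], s[0]), np.kron(s[3], s[0])]
C = np.array([[np.trace(G[a] @ M @ G[a] @ M.T).real / 16
               for a in anti] for M in M_list])
target = np.array([[1,  1,  1,  1,  1,  1],
                   [1,  1, -1, -1, -1, -1],
                   [-1, 1, -1, -1, -1,  1],
                   [1, -1, -1, -1, -1,  1]]) / 4
assert np.allclose(C, target)

# the pairing couplings u_j follow from the columns of this matrix
g = np.array([0.3, -0.2, 0.5, 0.7])               # arbitrary g_1..g_4
u = C.T @ g                                       # one entry per Gamma^a
assert np.isclose(u[0], (g[0] + g[1] - g[2] + g[3]) / 4)   # u_1  (Gamma^3)
assert np.isclose(u[5], (g[0] - g[1] + g[2] + g[3]) / 4)   # u_2  (Gamma^15)
assert np.isclose(u[1], (g[0] + g[1] + g[2] - g[3]) / 4)   # u_3  (Gamma^7)
assert np.isclose(u[2], (g[0] - g[1] - g[2] - g[3]) / 4)   # u_4  (Gamma^9)
```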
The renormalization group equations for the couplings $u_j$ can now be derived following the same procedure used to derive Eqs. (\ref{FlowEqs}).
It is worth mentioning that in this case the first four diagrams in Fig. \ref{FigDiags} give the same contribution (up to a numerical pre-factor) to the effective action and, importantly, only the contribution from diagram (E) contains the mass anisotropy parameter $\eta$. The resulting RG equations read:
\begin{equation}\label{Singlet}
\begin{split}
\frac{du_1}{d\ln s}&=\frac{m\Lambda}{2\pi^2}\left[(u_1-u_2)(u_3+3u_4)-2(1+2\eta)u_1^2\right], \\
\frac{du_2}{d\ln s}&=\frac{m\Lambda}{2\pi^2}\left[(u_2-u_1)(u_3+3u_4)-2(1+2\eta)u_2^2\right], \\
\frac{du_3}{d\ln s}&=\frac{m\Lambda}{4\pi^2}\left[(u_1-u_2)^2+u_3^2-3u_4^2+6u_3u_4\right], \\
\frac{du_4}{d\ln s}&=\frac{m\Lambda}{4\pi^2}\left[(u_1-u_2)^2+(u_3-u_4)^2+4u_4^2\right].
\end{split}
\end{equation}
As mentioned above, these equations have been obtained independently of our earlier calculation; one can readily check that upon expressing the couplings $u_j$ in terms of the coupling constants $g_j$ of the particle-hole channel interactions, we recover the RG equations
(\ref{FlowEqs}).
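This consistency check is easily automated: with $u=Ag$, where $A$ is the linear map defined by the relations below Eq. (\ref{Lintpp}), the chain rule requires $A\,\beta_g(g)=\beta_u(Ag)$ for arbitrary couplings. A NumPy sketch (illustrative only; random test couplings and an arbitrary $\eta$, all couplings in units of $4\pi^2/m\Lambda$):

```python
import numpy as np

rng = np.random.default_rng(7)
eta = 0.3                      # arbitrary mass-mismatch parameter
g = rng.normal(size=4)         # random particle-hole couplings

def beta_g(g):
    """Eq. (FlowEqs); couplings rescaled by m*Lambda/(4*pi^2)."""
    g1, g2, g3, g4 = g
    return np.array([
        -(2*g1*g4 + eta*(g1 + g4)**2 + eta*(g2 - g3)**2),
        2*((1 - eta)*g1*g2 + eta*g1*g3 + (1 + eta)*g3*g4
           - g2*g3 - (2 + eta)*g2*g4 - g2**2),
        2*((1 - eta)*g1*g3 + eta*g1*g2 + (1 + eta)*g2*g4
           - g2*g3 - (2 + eta)*g3*g4 - g3**2),
        -((1 + eta)*(g1**2 + (g2 - g3)**2 + g4**2) + 2*eta*g1*g4)])

def beta_u(u):
    """Eq. (Singlet) in the same units."""
    u1, u2, u3, u4 = u
    return np.array([
        2*((u1 - u2)*(u3 + 3*u4) - 2*(1 + 2*eta)*u1**2),
        2*((u2 - u1)*(u3 + 3*u4) - 2*(1 + 2*eta)*u2**2),
        (u1 - u2)**2 + u3**2 - 3*u4**2 + 6*u3*u4,
        (u1 - u2)**2 + (u3 - u4)**2 + 4*u4**2])

# linear map u = A g from the Fierz transformation
A = np.array([[1,  1, -1,  1], [1, -1,  1,  1],
              [1,  1,  1, -1], [1, -1, -1, -1]]) / 4

# chain rule: A beta_g(g) must equal beta_u(A g)
assert np.allclose(A @ beta_g(g), beta_u(A @ g), atol=1e-12)
```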
The fixed points of equations (\ref{Singlet}) can be found using the same strategy as above. Since the right-hand side of the last equation in (\ref{Singlet}) can be written as a sum of squares, we consider the ratios of the couplings $\lambda_a=u_a/u_4$. We find that, just like in the particle-hole case, there are six fixed points: five unstable ones and one stable ("sink") when the initial value of the coupling $u_4<0$.
The results for the flow of the couplings are presented in Fig. \ref{Fig2}.
The stable fixed point (the "sink") is determined by $\lambda_3^*=1$ and $\lambda_1^*=\lambda_2^*=(1/4)\left[2\lambda_3^*-(\lambda_3^*)^2-5\right]/(1+2\eta)$.
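This can be verified by inserting the quoted values into (\ref{Singlet}) and checking that the ratio flow $d\lambda_a/d\ln s=(du_a/d\ln s-\lambda_a\,du_4/d\ln s)/u_4$ vanishes. A minimal NumPy sketch ($\eta$ and $u_4$ as in the caption of Fig. \ref{Fig2}, couplings in units of $4\pi^2/m\Lambda$):

```python
import numpy as np

def beta_u(u, eta):
    """RHS of Eq. (Singlet); couplings rescaled by m*Lambda/(4*pi^2)."""
    u1, u2, u3, u4 = u
    return np.array([
        2*((u1 - u2)*(u3 + 3*u4) - 2*(1 + 2*eta)*u1**2),
        2*((u2 - u1)*(u3 + 3*u4) - 2*(1 + 2*eta)*u2**2),
        (u1 - u2)**2 + u3**2 - 3*u4**2 + 6*u3*u4,
        (u1 - u2)**2 + (u3 - u4)**2 + 4*u4**2])

eta, u4 = 0.125, -0.1
lam3 = 1.0
lam12 = 0.25*(2*lam3 - lam3**2 - 5)/(1 + 2*eta)   # = -1/(1 + 2*eta)
u = np.array([lam12*u4, lam12*u4, lam3*u4, u4])
du = beta_u(u, eta)

# d lambda_a / d ln s = (du_a - lambda_a du_4)/u_4 must vanish at the sink
lam = u[:3] / u4
assert np.allclose((du[:3] - lam*du[3]) / u4, 0.0, atol=1e-12)
```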
\begin{figure}[h]
\centering
\includegraphics[width=3.15in]{Fig2-pp.pdf}
\caption{Renormalization group flow of the coupling constant ratios in the particle-particle channel for the case of small mismatch between the effective masses, $\eta=0.125$.
We find that there are six fixed points overall in this case. Five fixed points (solid orange circles) are always unstable. The remaining one (solid red circle) is stable when $u_4<0$ and becomes unstable when $u_4>0$. Without loss of generality we chose to limit the presentation to the case $u_1=u_2=u_{s}$ and we also assumed that at the beginning of the RG flow $(m\Lambda/4\pi^2)u_4=-0.1$.}
\label{Fig2}
\end{figure}
\subsection{Particle-particle channel susceptibilities}
To determine the leading channel for the pairing instability, we need to evaluate the corresponding susceptibilities. To this end, we introduce the source terms into the action
\begin{equation}\label{DeltaSpp}
\begin{split}
\Delta S_{\textrm{p-p}}&=-\sum\limits_{i}\Delta_{\Gamma^i}\int_k{\Psi}^T(\mathbf k,\tau)\hat{\Gamma}^i\Psi(\mathbf k,\tau),
\end{split}
\end{equation}
Here the summation is performed over the matrices with $i=3,7,9,10,12,15$ and $k=(\tau,\mathbf k)$. The subsequent calculation is completely analogous to the one above for the susceptibilities in the particle-hole channel. Specifically, after integrating out the fast modes for the renormalization of the source term (\ref{DeltaSpp}) and keeping in mind that $\Gamma_{\nu\mu}^i=-\Gamma_{\mu\nu}^i$, we find
\begin{equation}\label{ReSource}
\begin{split}
&\Delta_{\Gamma^i}(s){\Psi}_<^T(k)\hat{\Gamma}\Psi_<(k)=s^2\Delta_{\Gamma^i}(1){\Psi}^T(k)\hat{\Gamma}^i\Psi(k)\\&-
s^2\Delta_{\Gamma^i}(1)\sum\limits_{\cal S}g_{\cal S}\int\frac{d\omega}{2\pi}\int\limits_{\Lambda/s}^\Lambda\frac{p^2dp}{2\pi^2}\\&\Gamma_{\nu\mu}^{(i)}G_{\mu\gamma}(\mathbf p,i\omega){\cal S}_{\gamma\delta}\Psi_\delta(k)G_{\nu\alpha}(-\mathbf p,-i\omega){\cal S}_{\alpha\beta}\Psi_\beta(k).
\end{split}
\end{equation}
The momentum and frequency integrals appearing here have already been computed (see Eqs. (\ref{pp},\ref{Cleq1}) in Appendix B). The flow equations
for the functions $\Delta_{\Gamma^i}(s)$ are
\begin{equation}\label{FlowDeltaspp}
\begin{split}
&\frac{d\ln\Delta_{\Gamma^3}}{d\ln s}=2-(1+2\eta)(u_1+u_2+u_3-u_4)\frac{m\Lambda}{\pi^2}, \\
&\frac{d\ln\Delta_{\Gamma^{15}}}{d\ln s}=2-(1+2\eta)(u_1+u_2-u_3+u_4)\frac{m\Lambda}{\pi^2}, \\
&\frac{d\ln\Delta_{\Gamma^{a}}}{d\ln s}=2, ~(a=7,9,10,12).
\end{split}
\end{equation}
Since the only stable fixed point exists for $u_4<0$, the fastest divergent susceptibility is clearly determined by the ratio
$(u_1+u_2+u_3-u_4)/(u_1+u_2-u_3+u_4)$.
Numerical analysis of these equations shows that the susceptibility $\Delta_{\Gamma^3}$, corresponding to the singlet $s$-wave pairing, is the one diverging fastest.
Furthermore, we find that while the leading divergence in the particle-particle channel corresponds to the singlet pairing, the strongest divergence overall is still governed by the excitonic instability.
\section{Conclusions}
As recent experimental studies have shown, the materials which may exhibit the physical effects we have discussed are disordered either due to alloying or due to the presence of vacancies in the nominally stoichiometric compounds. This is especially relevant for the excitonic instabilities, which are sensitive to the slightest anisotropy of the underlying band structure, let alone the presence of disorder. Since our results so far ignore the presence of disorder, we cannot claim with certainty that the excitonic instability will still be the leading one in that case. This problem, however, requires a thoughtful and careful treatment and, as such, goes beyond the scope of the present study.
Other avenues for further investigation of problems related to the one discussed here concern the renormalization of the chemical potential, especially when the system has been doped and, as a result, the superconducting instability develops faster than the excitonic one. Lastly, we would like to mention the situation when the $s$-orbital band inverts with the $f$-orbital one, in which case the hybridization matrix element will have $V_\mathbf k\propto k^3$,
so upon integrating out the fast modes it will be the leading factor determining the renormalization of the coupling constants. With this being said, the specific focus of our future studies depends mainly on the appearance of new experimental data.
To conclude, we have studied the problem of weak coupling many-body instabilities in narrow gap semiconductors with odd-parity band inversion. Our study has been mainly motivated by recent experimental and theoretical work addressing thermodynamic properties of samarium hexaboride.
By employing the renormalization group technique we find that the leading instability is towards the formation of an excitonic order. Depending on the microscopic details of the model the leading excitonic instability may or may not break the time-reversal symmetry.
\section{Acknowledgments} We would like to thank Ammar Kirmani for initial collaboration on this project. Important discussions with Bitan Roy, Maxim Khodas and Oskar Vafek are gratefully acknowledged. MD expresses his gratitude to Max Planck Institute for Physics of Complex Systems (MPI-PKS, Dresden, Germany), where a part of this work has been completed, for hospitality. This work was financially supported by the National Science Foundation grant NSF-DMR-2002795 and, in part, by the U.S. Department of Energy, Basic Energy Sciences, grant DE-SC0016481.
\begin{appendix}
\section{Dirac matrices}
We use the following definition of the Dirac matrices
\begin{equation}\label{DMat}
\begin{split}
\gamma_0&=\left(\begin{matrix} \hat{\sigma}_0 & 0 \\ 0 & -\hat{\sigma}_0\end{matrix}\right), \quad
\gamma_1=\left(\begin{matrix} 0 & \hat{\sigma}_x \\ - \hat{\sigma}_x & 0\end{matrix}\right), \\
\gamma_2&=\left(\begin{matrix} 0 & \hat{\sigma}_y \\ -\hat{\sigma}_y & 0 \end{matrix}\right), \quad
\gamma_3=\left(\begin{matrix} 0 & \hat{\sigma}_z \\ - \hat{\sigma}_z & 0 \end{matrix}\right), \\
\gamma_5&=\left(\begin{matrix} 0 & \hat{\sigma}_0 \\ \hat{\sigma}_0 & 0 \end{matrix}\right).
\end{split}
\end{equation}
Here $\hat{\sigma}_0$ is a 2$\times$2 unit matrix and $\hat{\sigma}_{a}$ ($a=x,y,z$) are Pauli matrices.
\section{Renormalization group equations: auxiliary expressions}
\subsection{Cumulant expansion}
To calculate the average entering into equation (\ref{Basic}) we will employ the cumulant expansion. It reads:
\begin{equation}\label{Cum}
{\langle e^{-S_{\textrm{int}}[\Psi_<,\Psi_>]}\rangle_{0>}\approx e^{-\langle S_{\textrm{int}}\rangle+\frac{1}{2}(\langle S_{\textrm{int}}^2\rangle-\langle S_{\textrm{int}}\rangle^2)+...}}
\end{equation}
To avoid the complications arising from the antisymmetrization of the interaction (\ref{LintRed}), we will formally consider the interaction part
of the action for general couplings in the form
\begin{equation}\label{Sintk}
\begin{split}
S_{\textrm{int}}&=\sum\limits_{{\cal S}{\cal T}}\prod\limits_{j=1,2}\int d\mathbf r_j\int d\tau_jU_{{\cal S}{\cal T}}(12)\\&\times\left({\Psi}^\dagger(1){\cal S}{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}\Psi(2)\right),
\end{split}
\end{equation}
where we used the notation
\begin{equation}\label{UST}
\begin{split}
U_{{\cal S}{\cal T}}(12)&=\frac{g_{{\cal S}{\cal T}}}{2}\int d\tau\int d\mathbf r \prod\limits_{j=1}^2\delta(\mathbf r-\mathbf r_j)\delta(\tau-\tau_j)
\end{split}
\end{equation}
and we defined
$\Psi(j)=\Psi_{\alpha_j}(\mathbf r_j,\tau_j)$ ($j=1,2$).
Using these notations, for the correction to the action we find
\begin{widetext}
\begin{equation}\label{Action}
\begin{split}
&\frac{1}{2}(\langle S_{\textrm{int}}^2\rangle-\langle S_{\textrm{int}}\rangle^2)=\frac{1}{2}
\sum\limits_{{\cal S}{\cal S}'}\sum\limits_{{\cal T}{\cal T}'}\sum\limits_{1234}U_{{\cal S}{\cal T}}(12)U_{{\cal S}'{\cal T}'}(34)
\langle\left({\Psi}^\dagger(1){\cal S}{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}\Psi(2)\right)\left({\Psi}^\dagger(3){\cal S}'{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}'\Psi(4)\right)\rangle\\&-
\frac{1}{2}\sum\limits_{{\cal S}{\cal S}'}\sum\limits_{{\cal T}{\cal T}'}\sum\limits_{1234}U_{{\cal S}{\cal T}}(12)U_{{\cal S}'{\cal T}'}(34)
\langle\left({\Psi}^\dagger(1){\cal S}{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}\Psi(2)\right)\rangle\langle\left({\Psi}^\dagger(3){\cal S}'{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}'\Psi(4)\right)\rangle
\end{split}
\end{equation}
\end{widetext}
and each $\Psi=\Psi_<+\Psi_>$. Note that the correction to the action is defined as
\begin{equation}\label{DefineSign}
{\Delta S_{\textrm{int}}=-\frac{1}{2}(\langle S_{\textrm{int}}^2\rangle-\langle S_{\textrm{int}}\rangle^2).}
\end{equation}
There are five different non-zero contributions to (\ref{Action}). The diagram describing the first contribution is shown in Fig. \ref{FigDiags}.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{Fig3-RG-DiagramA}\\
\caption{Diagram containing a single fermionic loop, which appears in the expansion of the effective action (\ref{DefineSign}). The solid lines
represent the single-particle propagators, while the dashed lines represent the interaction (\ref{Sintk}). The momenta of the internal solid lines
lie on the 'fast' momentum shell $\Lambda/s\leq k\leq\Lambda$.}
\label{FigDiagA}
\end{figure}
\begin{widetext}
\paragraph{Diagram A.} Analytical expression for the diagram Fig. \ref{FigDiagA} is given by
\begin{equation}\label{DiagramA}
\begin{split}
&\frac{1}{2}\sum U_{{\cal S}_1{\cal T}_1}(12)U_{{\cal S}_2{\cal T}_2}(34)
\langle\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}_1\Psi(2)\right)\left({\Psi}^\dagger(3){\cal S}_2{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}_2\Psi(4)\right)\rangle_A\\&=-\frac{1}{8}\sum\limits_{{\cal S}_1{\cal T}_1}\sum\limits_{{\cal S}_2{\cal T}_2}
g_{{\cal S}_1{\cal T}_1}g_{{\cal S}_2{\cal T}_2}\int_1\int_2\left\{\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\textrm{Tr}\left[{\cal T}_1G(1-2){\cal S}_2 G(2-1)\right]\left({\Psi}^\dagger(2){\cal T}_2{\Psi}(2)\right)
\right.\\&\left.+\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\textrm{Tr}\left[{\cal T}_1G(1-2){\cal T}_2 G(2-1)\right]\left({\Psi}^\dagger(2){\cal S}_2{\Psi}(2)\right)+
\left({\Psi}^\dagger(1){\cal T}_1{\Psi}(1)\right)
\right.\\&\left.\times\textrm{Tr}\left[{\cal S}_1G(1-2){\cal T}_2 G(2-1)\right]\left({\Psi}^\dagger(2){\cal S}_2{\Psi}(2)\right)+
\left({\Psi}^\dagger(1){\cal T}_1{\Psi}(1)\right)
\textrm{Tr}\left[{\cal S}_1G(1-2){\cal S}_2 G(2-1)\right]\left({\Psi}^\dagger(2){\cal T}_2{\Psi}(2)\right)
\right\}\\&=-\frac{1}{2}\sum\limits_{{\cal S}_1{\cal S}_2}g_{{\cal S}_1}g_{{\cal S}_2}\int_1\int_2\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\textrm{Tr}\left[{\cal S}_1G(1-2){\cal S}_2 G(2-1)\right]\left({\Psi}^\dagger(2){\cal S}_2{\Psi}(2)\right).
\end{split}
\end{equation}
\paragraph{Diagrams B \& C.} The correction to the action from the diagrams ${\bf (B)}$ and ${\bf (C)}$, Fig. \ref{FigDiags}(b,c), reads:
\begin{equation}\label{DiagramB}
\begin{split}
&\frac{1}{2}\sum U_{{\cal S}_1{\cal T}_1}(12)U_{{\cal S}_2{\cal T}_2}(34)
\langle\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}_1\Psi(2)\right)\left({\Psi}^\dagger(3){\cal S}_2{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}_2\Psi(4)\right)\rangle_B\\&=\frac{1}{8}\sum\limits_{{\cal S}_1{\cal T}_1}\sum\limits_{{\cal S}_2{\cal T}_2}
g_{{\cal S}_1{\cal T}_1}g_{{\cal S}_2{\cal T}_2}\int_1\int_2
\times\left\{\left({\Psi}^\dagger(1){\cal S}_1G(1-2){\cal S}_2 G(2-1){\cal T}_1{\Psi}(1)\right)\left({\Psi}^\dagger(2){\cal T}_2{\Psi}(2)\right)\right.\\&\left.+\left({\Psi}^\dagger(1){\cal T}_1G(1-2){\cal S}_2 G(2-1){\cal S}_1{\Psi}(1)\right)\left({\Psi}^\dagger(2){\cal T}_2{\Psi}(2)\right)+
\left({\cal S}_2\leftrightarrow{\cal T}_2\right)\right\}\\&=\frac{1}{2}\sum\limits_{{\cal S}_1{\cal S}_2}g_{{\cal S}_1}g_{{\cal S}_2}\int_1\int_2
\left({\Psi}^\dagger(1){\cal S}_1G(1-2){\cal S}_2 G(2-1){\cal S}_1{\Psi}(1)\right)\left({\Psi}^\dagger(2){\cal S}_2{\Psi}(2)\right).
\end{split}
\end{equation}
Finally, the last two contributions to the action can be described by the two diagrams in Fig. \ref{FigDiags}(d,e).
For the diagram ${\bf (D)}$ we derive the following expression
\begin{equation}\label{DiagramD3}
\begin{split}
&\frac{1}{2}\sum U_{{\cal S}_1{\cal T}_1}(12)U_{{\cal S}_2{\cal T}_2}(34)
\langle\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}_1\Psi(2)\right)\left({\Psi}^\dagger(3){\cal S}_2{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}_2\Psi(4)\right)\rangle_D\\&=\frac{1}{2}\sum\limits_{{\cal S}_1{\cal S}_2}g_{{\cal S}_1}g_{{\cal S}_2}\int_1\int_2
\left(\Psi^\dagger(1){\cal S}_1G(1-2){\cal S}_2\Psi(2)\right)\left(\Psi^\dagger(2){\cal S}_2G(2-1){\cal S}_1\Psi(1)\right).
\end{split}
\end{equation}
Similarly, for the diagram ${\bf (E)}$ we find
\begin{equation}\label{DiagramE}
\begin{split}
&\frac{1}{2}\sum U_{{\cal S}_1{\cal T}_1}(12)U_{{\cal S}_2{\cal T}_2}(34)
\langle\left({\Psi}^\dagger(1){\cal S}_1{\Psi}(1)\right)
\left(\Psi^\dagger(2){\cal T}_1\Psi(2)\right)\left({\Psi}^\dagger(3){\cal S}_2{\Psi}(3)\right)
\left(\Psi^\dagger(4){\cal T}_2\Psi(4)\right)\rangle_E\\&=\frac{1}{4}\sum\limits_{{\cal S}_1{\cal S}_2}g_{{\cal S}_1}g_{{\cal S}_2}\int_1\int_2
\left\{\left(\Psi^\dagger(1){\cal S}_1G(1-2){\cal S}_2\Psi(2)\right)^2+\left(\Psi^\dagger(1){\cal S}_2G(1-2){\cal S}_1\Psi(2)\right)^2\right\}.
\end{split}
\end{equation}
We remind the reader that the momentum integration over the internal fermionic lines is limited to the shell $[\Lambda/s,\Lambda]$.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{Fig4-RG-DiagramB}
~\includegraphics[width=0.38\linewidth]{Fig4-RG-DiagramD}
\caption{The remaining four diagrams in the one-loop approximation, which appear in the expansion of the effective action (\ref{DefineSign}). The solid lines
represent the single-particle propagators, while the dashed lines represent the interaction (\ref{Sintk}). The momenta of the internal solid lines
lie on the `fast' momentum shell $\Lambda/s\leq k\leq\Lambda$.}
\label{FigDiags}
\end{figure}
\end{widetext}
\subsection{Single-particle propagator}
These expressions can now be used to integrate out the fast modes. To do so, we use the expression for the single-particle propagator
\begin{equation}\label{Gkw}
\begin{split}
&G_\mathbf k(i\omega_n)=\left((i\omega_n+\mu-\xi_\mathbf k)\mathbbm{1}_4-\sum\limits_{a}\Sigma^ad_\mathbf k^a\right)^{-1}\\&=-\frac{(i\omega_n+\mu-\xi_\mathbf k)\mathbbm{1}_4+\gamma_0d_\mathbf k^0+\sum\limits_a\gamma_0\gamma_ad_\mathbf k^a}{(\omega_n-i\mu+i\xi_\mathbf k)^2+E_\mathbf k^2},
\end{split}
\end{equation}
where $\omega_n=\pi T(2n+1)$ are Matsubara frequencies and
${E_\mathbf k=\sqrt{(d_\mathbf k^0)^2+(d_\mathbf k^x)^2+(d_\mathbf k^y)^2+(d_\mathbf k^z)^2}}$
is the renormalized single-particle spectrum.
\subsection{Particle-hole channel at $T=0$}
Here we will evaluate the one-loop diagrams on the momentum shell $p\in[\Lambda/s,\Lambda]$.
Recall that in the limit $T\to0$
\begin{equation}\label{sum2int}
T\sum\limits_{\omega_n} \to \int\frac{d\omega}{2\pi}
\end{equation}
We adopt the notation $s=e^{t}$, so that an infinitesimally narrow shell corresponds to $\Lambda/s=\Lambda e^{-\delta t}\approx \Lambda(1-\delta t)$.
Next we consider the expression for the fermionic loop in the particle-hole channel
\begin{equation}\label{Pph}
\hat{\cal P}_1(\Lambda,\mu)=\int\limits_{\Lambda(1-\delta t)}^{\Lambda}\frac{k^2dk}{2\pi^2}\int\frac{d\Omega_\mathbf k}{4\pi}\int\limits_{-\infty}^{\infty}\frac{d\omega}{2\pi}G_\mathbf k(i\omega)\otimes G_\mathbf k(i\omega),
\end{equation}
and here we use the compact notation $G\otimes G\equiv G_{\alpha_1\alpha_2}G_{\alpha_3\alpha_4}$.
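For concreteness, the rank-4 object $G\otimes G$ built from a matrix can be sketched in a few lines of plain Python (the helper name is ours, purely for illustration):

```python
def tensor_product(G):
    """(G x G)_{a1 a2 a3 a4} = G_{a1 a2} * G_{a3 a4} for a square matrix G."""
    n = len(G)
    return [[[[G[a1][a2] * G[a3][a4] for a4 in range(n)]
              for a3 in range(n)]
             for a2 in range(n)]
            for a1 in range(n)]
```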
Integration over frequency
\begin{equation}\label{omegaint}
\begin{split}
&\int\limits_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{(i\omega+\mu-\xi_\mathbf k)^2}{[(\omega+i(\xi_\mathbf k-\mu))^2+E_\mathbf k^2]^2}\\&
=
-\frac{1}{4E_\mathbf k}\left[\vartheta(x_1+1)-\vartheta(x_1-1)\right],
\end{split}
\end{equation}
where $x_1=(\mu-\xi_\mathbf k)/E_\mathbf k$.
It will also be convenient to work with the function $F_1(x)$, which is defined as:
\begin{equation}\label{defF0}
{F_1(x)=\frac{1}{2}\textrm{sign}(1+x)+\frac{1}{2}\textrm{sign}(1-x).}
\end{equation}
Integrating over frequency is then straightforward and yields ($\delta t\ll 1$):
\begin{equation}\label{ph}
\begin{split}
&\hat{\cal P}_1(\Lambda,\mu)=\int\limits_{\Lambda(1-\delta t)}^{\Lambda}\frac{k^2dk}{2\pi^2}\int\frac{d\Omega_\mathbf k}{4\pi}F_1\left(\frac{\mu-\xi_\mathbf k}{E_\mathbf k}\right)\times\\&\left\{-\frac{1}{4E_\mathbf k}\left(\mathbbm{1}_4\otimes\mathbbm{1}_4\right)+\frac{1}{4E_\mathbf k^3}\sum\limits_{ab}d_\mathbf k^a d_\mathbf k^b\left(\Sigma^a\otimes\Sigma^b\right)\right\}.
\end{split}
\end{equation}
To leading order for $\Lambda\gg k_F$, the hybridization term is much smaller than the kinetic energy:
\begin{equation}\label{ph2}
\begin{split}
&\hat{\cal P}_1(\Lambda,\mu)=\int\limits_{\Lambda(1-\delta t)}^{\Lambda}\frac{k^2dk}{8\pi^2}F_1\left(\frac{\mu-\xi_\mathbf k}{E_\mathbf k}\right)\\&\times\left\{
\frac{\epsilon_\mathbf k^2}{E_\mathbf k^3}\left(\Sigma^0\otimes\Sigma^0\right)
-\frac{\mathbbm{1}_4\otimes \mathbbm{1}_4}{E_\mathbf k}\right\}.
\end{split}
\end{equation}
The value of the remaining integral can be estimated by taking $\Lambda\to\infty$ and $\delta t\ll 1$. We obtain
\begin{equation}\label{PT0}
\hat{\cal P}_{1}(\Lambda,\mu)\approx\frac{m_{+}\Lambda}{4\pi^2}F_1\left(-\frac{m_{+}}{m_{-}}\right)
\left\{\gamma_0\otimes\gamma_0-\mathbbm{1}_4\otimes \mathbbm{1}_4\right\}\delta t.
\end{equation}
Since $m_{+}/m_{-}=(m_f-m_c)/(m_f+m_c)$, in the limit $m_f\gg m_c$ it follows that $m_{+}/m_{-}\approx 1-2m_c/m_f$, so that $F_1(-m_{+}/m_{-})\approx 1$.
Note that the pre-factor is proportional to the density of states per spin at the Fermi level for free electrons in three dimensions.
\subsection{Particle-particle channel at $T=0$}
For the computation of the diagrams ${\bf (D)}$ and ${\bf (E)}$ we will also need the following integral ($\delta t\ll 1$):
\begin{equation}\label{pp}
\begin{split}
&\hat{K}_1(\Lambda,\mu)=\int\limits_{\Lambda(1-\delta t)}^{\Lambda}\frac{k^2dk}{2\pi^2}\int\frac{d\Omega_\mathbf k}{4\pi}\\&\times\int\limits_{-\infty}^{\infty}\frac{d\omega}{2\pi}G_\mathbf k(i\omega)\otimes G_{-\mathbf k}(-i\omega)
\end{split}
\end{equation}
Just as in the calculation of the particle-hole loop, we first integrate over $\omega$ and express the
results in terms of the following functions:
\begin{equation}\label{AuxInts}
\begin{split}
&\int\limits_{-\infty}^\infty\frac{d\omega}{2\pi}\frac{1}{[(\omega+i\mu-i\xi_\mathbf k)^2+E_\mathbf k^2][(\omega-i\mu+i\xi_\mathbf k)^2+E_\mathbf k^2]}\\&
=\frac{{\cal C}_1^{(1)}[(\mu-\xi_\mathbf k)/E_\mathbf k]}{4E_\mathbf k^3}, \\
&\int\limits_{-\infty}^\infty\frac{d\omega}{2\pi}\frac{[\omega^2+(\mu-\xi_\mathbf k)^2]}{[(\omega+i\mu-i\xi_\mathbf k)^2+E_\mathbf k^2][(\omega+i\xi_\mathbf k-i\mu)^2+E_\mathbf k^2]}\\&=\frac{{\cal C}_1^{(2)}[(\mu-\xi_\mathbf k)/E_\mathbf k]}{4E_\mathbf k}.
\end{split}
\end{equation}
where the functions ${\cal C}_1^{(1)}$ and ${\cal C}_1^{(2)}$ are defined by
\begin{equation}\label{defF1F2}
\begin{split}
&{\cal C}_1^{(1)}(x)=\frac{x\vartheta(1-x)-\vartheta(x-1)}{x(1-x^2)}, \\
&{\cal C}_1^{(2)}(x)=\frac{x\vartheta(1-x)+(1-2x^2)\vartheta(x-1)}{x(1-x^2)}.
\end{split}
\end{equation}
We find
\begin{equation}\label{Ipp}
\begin{split}
&\hat{K}_1(\Lambda,\mu)=\int\limits_{\Lambda(1-\delta t)}^{\Lambda}\frac{k^2dk}{2\pi^2}
\left\{\frac{\epsilon_\mathbf k^2}{4E_\mathbf k^3}\left(\Sigma^0\otimes\Sigma^0\right){\cal C}_1^{(1)}\left(\frac{\mu-\xi_\mathbf k}{E_\mathbf k}\right)
\right.\\&\left.+\frac{1}{4E_\mathbf k}\left(\mathbbm{1}_4\otimes \mathbbm{1}_4\right){\cal C}_1^{(2)}\left(\frac{\mu-\xi_\mathbf k}{E_\mathbf k}\right)\right\}.
\end{split}
\end{equation}
Just as in our analysis of the particle-hole channel, taking into account $\Lambda^2/2m\mu\gg 1$ and $\delta t\ll 1$, we arrive at the following expression
\begin{equation}\label{Cleq1}
\begin{split}
&\hat{K}_{1}(\Lambda,\mu)\approx\frac{m_{+}\Lambda}{4\pi^2}\left\{\left(\gamma_0\otimes\gamma_0\right){\cal C}_1^{(1)}\left(-\frac{m_{+}}{m_{-}}\right)\right.\\&\left.+\left(\mathbbm{1}_4\otimes \mathbbm{1}_4\right){\cal C}_1^{(2)}\left(-\frac{m_{+}}{m_{-}}\right)\right\}\delta t+O(\delta t^2).
\end{split}
\end{equation}
This expression can be further simplified, since usually $m_f\gg m_c$, so that:
\begin{equation}\label{masses}
\begin{split}
\frac{m_{+}}{m_{-}}&=\frac{m_fm_c}{(m_f+m_c)}\frac{(m_f-m_c)}{m_fm_c}=\frac{m_f-m_c}{m_f+m_c} \leq 1.
\end{split}
\end{equation}
Then it follows that
\begin{equation}\label{C1C2simp}
\begin{split}
&{\cal C}_1^{(1)}\left(-\frac{m_{+}}{m_{-}}\right)\\&={\cal C}_1^{(2)}\left(-\frac{m_{+}}{m_{-}}\right)\\&=\frac{(m_f+m_c)^2}{4m_fm_c}\approx\frac{m_f}{4m_c}\equiv{1+2\eta}.
\end{split}
\end{equation}
We use these results to compute the corrections to the coupling constants.
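As a sanity check on the last identity, one can evaluate ${\cal C}_1^{(1)}$ and ${\cal C}_1^{(2)}$ numerically from their definitions (\ref{defF1F2}) at $x=-m_{+}/m_{-}$. A minimal Python sketch (the function and variable names are ours, and the sample masses are arbitrary):

```python
def theta(y):
    """Heaviside step function; the value at zero is irrelevant here."""
    return 1.0 if y > 0 else 0.0

def C1_1(x):
    # C_1^(1)(x) = [x theta(1-x) - theta(x-1)] / [x (1 - x^2)]
    return (x * theta(1 - x) - theta(x - 1)) / (x * (1 - x**2))

def C1_2(x):
    # C_1^(2)(x) = [x theta(1-x) + (1 - 2 x^2) theta(x-1)] / [x (1 - x^2)]
    return (x * theta(1 - x) + (1 - 2 * x**2) * theta(x - 1)) / (x * (1 - x**2))

m_f, m_c = 50.0, 1.0             # heavy f-band, light conduction band (m_f >> m_c)
x = -(m_f - m_c) / (m_f + m_c)   # x = -m_+/m_-
exact = (m_f + m_c)**2 / (4 * m_f * m_c)
```

For $-1<x<0$ both functions reduce to $1/(1-x^2)=(m_f+m_c)^2/4m_fm_c$, which for these masses is close to the asymptotic value $m_f/4m_c$.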
\end{appendix}
|
{
"timestamp": "2020-12-29T02:24:33",
"yymm": "2012",
"arxiv_id": "2012.14089",
"language": "en",
"url": "https://arxiv.org/abs/2012.14089"
}
|
\section{Introduction}
Myocardial infarction (MI) is myocardial ischemic necrosis caused by coronary artery complications that prevent an adequate blood supply, and it has become one of the leading causes of death and disability worldwide \cite{thygesen2007universal}. The viability of a cardiac segment is an important parameter for assessing cardiac status after MI, for example whether the segment remains functional after revascularization. Delayed-enhancement MRI (DE-MRI), performed several minutes after the injection of a contrast agent, is a method to evaluate the extent of MI and assess viable tissues after the injury. According to the World Health Organization (WHO), cardiovascular diseases are the leading cause of death worldwide, and 85\% of these deaths are due to heart attacks and strokes.
Automatic segmentation of the relevant areas in DE-MRI, such as the myocardial contours, the infarcted area, and the permanent microvascular obstruction area (no-reflow area), could provide measurements such as the absolute volume (mm$^3$) or percentage of the myocardium, which are useful for the quantitative evaluation of MI. However, myocardial infarction segmentation remains a challenging task due to the morphological similarity of the different tissue areas. Recently, deep learning-based methods have achieved state-of-the-art results on various image segmentation tasks and have shown great potential in medical image analysis and clinical applications.
In this paper, we propose a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. We first use a 2D U-Net that focuses on intra-slice information to perform a preliminary segmentation. After that, a 3D U-Net is applied to exploit the volumetric spatial information for a refined segmentation. The entire training procedure of our network is based on the MICCAI 2020 EMIDEC Challenge dataset\footnote{http://emidec.com}~\cite{lalande2020emidec}.
\section{Method}
\subsubsection{Network Architecture}
As the most well-known network structure for medical image segmentation, U-Net \cite{ronneberger2015u} is a classical encoder-decoder segmentation network that achieves state-of-the-art results on many segmentation challenges \cite{heller2020state,isensee2019nnu}. The encoder is similar to a typical classification network and uses convolution-pooling modules to extract increasingly high-level semantic features layer by layer. The decoder then recovers the localization of every voxel and uses the extracted features to classify each pixel. To incorporate multi-scale features and exploit position information, skip connections are constructed between the encoder and decoder at the same stage.
For the segmentation of 3D biomedical images, many 3D segmentation networks \cite{cciccek20163d,milletari2016v} have been proposed to extract volumetric spatial information using 3D convolutions instead of focusing only on intra-slice information. However, for volumes with highly anisotropic voxel spacings, 3D networks may not always outperform 2D networks when the inter-slice correlation is weak \cite{abulnaga2018ischemic}. For example, Case N042 is a 3D MRI volume with an image shape of $166\times270\times7$ and voxel spacings of 1.667\,mm, 1.667\,mm, and 10\,mm along the x, y, and z-axis, respectively. This means the x and y-axis preserve much higher resolution and richer information than the z-axis. Under these circumstances, a pure 3D network that treats the three axes equally may not be the best choice.
To address this problem, we propose a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. As illustrated in Fig.~\ref{Fig1}, our network can be divided into two stages. First, after preprocessing of the input data, the MRI volume is divided into a sequence of slices that are fed to a 2D U-Net to obtain a preliminary segmentation based on intra-slice information. However, the 2D network ignores the correlation between slices, which limits segmentation accuracy, especially for challenging pathological areas that are difficult to distinguish from intra-slice information alone. Therefore, in the second stage, we use a 3D U-Net to exploit the volumetric spatial information and produce a refined segmentation. Specifically, we concatenate the 2D coarse results from the first stage with the input volume as a spatial prior for the input of the 3D U-Net. Finally, after postprocessing steps such as removing scattered voxels, we obtain the final segmentation results.
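The two-stage data flow described above can be sketched as pure shape bookkeeping. This is a hedged illustration only: the helper name is ours, and the channel layout is one plausible convention, not the exact implementation.

```python
def cascade_shapes(h, w, d, num_classes=5):
    """Track tensor shapes through a 2D-then-3D cascade for one (h, w, d) volume."""
    # Stage 1: each of the d short-axis slices passes through the 2D U-Net,
    # producing a per-slice class-probability map.
    stage1_out = [(num_classes, h, w) for _ in range(d)]
    # The coarse per-slice maps are stacked back into a volume and
    # concatenated with the single-channel image as a spatial prior.
    stage2_in = (1 + num_classes, h, w, d)
    # Stage 2: the 3D U-Net refines the segmentation over the whole volume.
    stage2_out = (num_classes, h, w, d)
    return stage1_out, stage2_in, stage2_out
```

For a Case-N042-sized volume this yields seven per-slice maps of shape $(5,166,270)$ and a six-channel input to the 3D stage.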
\begin{figure}[!htb]
\includegraphics[width=12cm]{1.png}
\centering
\caption{The overall architecture of our cascaded convolutional neural network. The blocks and arrows shown in blue and red denote the corresponding structures of the 2D and 3D networks.}
\label{Fig1}
\end{figure}
\subsubsection{Implementation Details}
The entire training procedure of our network is performed on NVIDIA Tesla V100 GPUs using the PyTorch framework, based on the nnU-Net implementation \cite{isensee2019nnu}. During training, we use the Adam optimizer with an initial learning rate of 0.01. Instead of patch-based methods, we use the whole short-axis slice and the whole volume as input to the 2D and 3D networks, respectively. To enhance the attention on foreground voxels, we use the sum of the cross-entropy (CE) loss and the Dice loss \cite{milletari2016v} as the loss function for training our network.
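The combined loss can be illustrated for a single foreground class on flattened predictions. This is a minimal stdlib sketch with our own helper names; real implementations such as nnU-Net operate on multi-class GPU tensors.

```python
import math

def soft_dice(probs, target, eps=1e-6):
    """Soft Dice coefficient between predicted probabilities and binary labels."""
    inter = sum(p * t for p, t in zip(probs, target))
    return (2 * inter + eps) / (sum(probs) + sum(target) + eps)

def ce_plus_dice_loss(probs, target, eps=1e-6):
    """Binary cross-entropy plus (1 - soft Dice), i.e. a CE + Dice objective."""
    ce = -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
              for p, t in zip(probs, target)) / len(probs)
    return ce + (1 - soft_dice(probs, target, eps))
```

A confident correct prediction drives both terms toward zero, while the Dice term keeps small foreground structures from being dominated by the background.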
\subsubsection{Dataset and Evaluation Metrics}
The EMIDEC Challenge dataset consists of delayed-enhancement cardiac MRI with a training set of 100 patients (67 pathological and 33 normal cases) and a testing set of another 50 patients (33 pathological and 17 normal cases). For the training cases, manual annotations are provided with label 0 for background, 1 for cavity, 2 for normal myocardium, 3 for myocardial infarction, and 4 for no-reflow.
For the evaluation of segmentation results, the clinical metrics include the average errors in the volume of the left-ventricular myocardium and in the volume and percentage of MI and no-reflow, while the geometrical metrics include the average Dice coefficient for the different areas and the Hausdorff distance for the myocardium.
\section{Experiments and Results}
In total, 100 scans with published labels are available to train our network, while the remaining 50 scans are reserved for the final evaluation. We perform 5-fold cross-validation by randomly shuffling the cases and splitting the training dataset into 5 fixed folds of 20 MR scans each, using 4 folds for training and the remaining one for testing. In this way, we can evaluate our method more comprehensively.
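A fixed random split of this kind can be reproduced in a few lines (a sketch only; the seed and helper name are ours, not the split actually used in the experiments):

```python
import random

def make_folds(case_ids, k=5, seed=42):
    """Shuffle cases once with a fixed seed, then deal them into k disjoint folds."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::k] for i in range(k)]
```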
Table \ref{Table1} and Table \ref{Table2} present the cross-validation results of the 2D coarse segmentation output and the final segmentation output, respectively. The evaluation of the clinical and geometrical metrics is based on the official code\footnote{https://github.com/EMIDEC-Challenge/Evaluation-metrics}. The results show that the 3D U-Net makes use of the volumetric spatial information and improves the segmentation.
\begin{table}[]
\caption{Quantitative 5-fold cross-validation results of 2D coarse segmentation output.} \label{Table1}
\centering
\setlength{\tabcolsep}{2.5mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccccc}
\hline
Targets & Metrics & fold 0 & fold 1 & fold 2 & fold 3 & fold 4 \\ \hline
\multirow{3}{*}{Myocardium} & Dice(\%) & 83.98 & 85.29 & 85.94 & 85.83 & 85.59 \\
& VolDif(mm$^3$) & 10906.43 & 6384.88 & 6012.93 & 5423.57 & 11629.66 \\
& HSD(mm) & 17.01 & 13.77 & 13.68 & 12.60 & 12.65 \\ \hline
\multirow{3}{*}{Infarction} & Dice(\%) & 44.39 & 53.12 & 48.10 & 50.55 & 66.34 \\
& VolDif(mm$^3$) & 9883.17 & 4821.30 & 3449.09 & 4986.66 & 6621.52 \\
& Ratio(\%) & 7.10 & 4.39 & 2.96 & 4.68 & 5.00 \\ \hline
\multirow{3}{*}{NoReflow} & Dice(\%) & 65.26 & 63.84 & 70.61 & 60.24 & 66.67 \\
& VolDif(mm$^3$) & 2703.34 & 775.07 & 480.32 & 703.7 & 443.86 \\
& Ratio(\%) & 1.68 & 0.68 & 0.37 & 0.67 & 0.32 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\caption{Quantitative 5-fold cross-validation results of our final segmentation output.} \label{Table2}
\centering
\setlength{\tabcolsep}{2.5mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccccc}
\hline
Targets & Metrics & fold 0 & fold 1 & fold 2 & fold 3 & fold 4 \\ \hline
\multirow{3}{*}{Myocardium} & Dice(\%) & 86.66 & 86.46 & 87.87 & 87.61 & 87.13 \\
& VolDif(mm$^3$) & 8680.23 & 5405.52 & 6087.88 & 4880.5 & 7317.76 \\
& HSD(mm) & 15.88 & 14.12 & 12.96 & 13.43 & 13.79 \\ \hline
\multirow{3}{*}{Infarction} & Dice(\%) & 61.44 & 72.08 & 81.51 & 68.48 & 76.87 \\
& VolDif(mm$^3$) & 6536.55 & 3233.94 & 3514.97 & 4091.74 & 3520.3 \\
& Ratio(\%) & 4.67 & 2.91 & 2.85 & 3.96 & 2.64 \\ \hline
\multirow{3}{*}{NoReflow} & Dice(\%) & 68.47 & 68.33 & 79.67 & 65.12 & 73.48 \\
& VolDif(mm$^3$) & 2158.34 & 712.36 & 451.84 & 620.93 & 649.98 \\
& Ratio(\%) & 1.37 & 0.65 & 0.35 & 0.61 & 0.46 \\ \hline
\end{tabular}
\end{table}
For our final segmentation results, the network performs well on myocardium segmentation, with an average Dice score of 0.8715. However, for the more challenging segmentation of pathological areas, the average Dice score is only 0.7208 for infarction and 0.7101 for no-reflow. The variance in performance across folds is small, which indicates the robustness of our method. Fig.~\ref{Fig2} shows two samples of our segmentation results and the corresponding ground truth from the validation set of our own split. The segmentation results closely approximate the ground truth.
\begin{figure}[!htb]
\includegraphics[width=12cm]{2.png}
\centering
\caption{Two samples of our segmentation results. The columns from left to right are the image, ground truth, and prediction. (cavity in red, myocardium in green, infarction in blue, no-reflow in yellow)}
\label{Fig2}
\end{figure}
In the inference stage, we obtain the final prediction for the testing set by ensembling the segmentation results of the five fold models using majority voting. The evaluation results of our method on the EMIDEC testing set are presented in Table \ref{Table3}. The average Dice scores are very similar to our cross-validation results (even higher on some metrics), which indicates that our method is stable for the myocardial infarction segmentation task.
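Per-voxel majority voting over the fold models can be sketched as follows (a hedged illustration with our own names; here ties simply fall to the first label encountered, which need not match the exact tie-breaking used in practice):

```python
from collections import Counter

def majority_vote(fold_predictions):
    """Combine equally sized flat label maps, one per fold model, voxel by voxel."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*fold_predictions)]
```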
\begin{table}[]
\caption{Evaluation results of our method on the EMIDEC test set.} \label{Table3}
\centering
\setlength{\tabcolsep}{4mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccc}
\hline
Targets & Dice(\%) & VolDif(mm$^3$) & HSD(mm) & ratio(\%) \\ \hline
Myocardium & 87.86 & 9258.24 & 13.01 & - \\
Infarction & 71.24 & 3117.88 & - & 2.38 \\
NoReflow & 78.51 & 634.69 & - & 0.38 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, we proposed a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. The network consists of a 2D U-Net that focuses on intra-slice information to perform a preliminary segmentation and a 3D U-Net that uses the volumetric spatial information to produce a refined segmentation. Our method is trained and validated on the MICCAI 2020 EMIDEC challenge dataset. On the test set, our ensembled model achieved average Dice scores of 0.8786, 0.7124, and 0.7851 for the myocardium, infarction, and no-reflow areas, respectively, ranking first in the segmentation challenge.
\bibliographystyle{splncs04}
\section{Introduction}
Myocardial infarction (MI) is a myocardial ischemic necrosis caused by coronary artery complications that cannot provide enough blood and has become one of the leading causes of death and disability worldwide \cite{thygesen2007universal}. The viability of the cardiac segment is an important parameter to assess the cardiac status after MI, such as whether the segment is functional after the revascularization. Delayed-enhancement MRI (DE-MRI) performed several minutes after the injection is a method to evaluate the extent of MI and assess viable tissues after the injury. According to the World Health Organization (WHO), cardiovascular diseases are the first one cause of death worldwide and 85\% of the deaths are due to heart attacks and strokes.
Automatic segmentation of the different relevant areas from DE-MRI, such as myocardial contours, the infarcted area and the permanent microvascular obstruction area (no-reflow area) could provide useful information like the absolute value (mm$^3$) or percentage of the myocardium, which can provide useful information for quantitative evaluation of MI. However, myocardial infarction segmentation is still a challenging task due to the morphological similarity. Recently, deep learning-based methods have achieved state-of-the-art results for various image segmentation tasks and shown great potential in medical image analysis and clinical applications.
In this paper, we propose a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. We first use a 2D U-Net to focus on the intra-slice information and perform a preliminary segmentation. After that, a 3D U-Net is applied to utilize the volumetric spatial information for a subtle segmentation. All the training procedure of our network are based on MICCAI 2020 EMIDEC Challenge dataset\footnote{http://emidec.com} \cite{lalande2020emidec} .
\section{Method}
\subsubsection{Network Architecture}
As the most well-known network structure for medical image segmentation, U-Net \cite{ronneberger2015u} is a classical encoder-decoder segmentation network and achieves state-of-the-art results on many segmentation challenges \cite{heller2020state,isensee2019nnu}. The encoder is similar with the typical classification network and uses convolution-pooling module to extract more high-level semantic features layer by layer. Then the decoder recovers the localization for every voxel and utilizes the extracted feature information for the classification of each pixel. To incorporate multi-scale features and employ the position information, skip connections are constructed between the encoder and decoder in the same stage.
For the segmentation of 3D biomedical images, many 3D segmentation networks like \cite{cciccek20163d,milletari2016v} are proposed to extract volumetric spatial information using 3D convolutions instead of just focusing on intra-slice information. However, for some volumes with highly anisotropic voxel spacings, 3D networks may not always outperform 2D networks when the inter-slice correlation information is not rich \cite{abulnaga2018ischemic}. For example, Case N042 is a 3D MRI volume with an image shape of 166*270*7 and voxel spacing of 1.667mm*1.667mm*10mm on x, y, and z-axis, respectively. This means x and y-axis preserve much higher resolution and richer information than the z-axis. Under this circumstance, using pure 3D network that treats the three axes equally may not be the best choice.
To issue this problem, we propose a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. As illustrated in Fig.\ref{Fig1}, our network can be mainly divided into two stages. Firstly, after the preprocessing of input data, the MRI volume is divided into a sequence of slices for input of 2D U-Net to obtain a preliminary segmentation based on the intra-slice information. However, the results of 2D network ignore the correlation between slices, which would lead to limited segmentation accuracy, especially for challenging pathological areas that are difficult to distinguish only based on intra-slice information. Therefore, in the second stage, we use a 3D U-Net to utilize the volumetric spatial information and make a subtle segmentation. Specifically, we concatenate the 2D coarse results in the first stage with the input volume for the input of 3D U-Net in the second stage as a spatial prior. In the end, after the postprocessing like removing the scattered voxels, we get the final segmentation results.
\begin{figure}[!htb]
\includegraphics[width=12cm]{1.png}
\centering
\caption{The overall architecture of our cascaded convolutional neural network. The blocks and arrows viewed in blue and red denote the corresponding structure for 2D and 3D networks. }
\label{Fig1}
\end{figure}
\subsubsection{Implementation Details}
All the training procedure of our network is performed on NVIDIA Tesla V100 GPUs using the Pytorch framework based on the nnU-Net implementation \cite{isensee2019nnu}. During training, we use Adam optimizer with an initial learning rate of 0.01. Instead of patch-based methods, we use the whole short-axis slice and whole volume for the input of the 2D and 3D networks. To enhance the attention of foreground voxels, we use the summation of cross-entropy (CE) loss and dice loss \cite{milletari2016v} as the loss function for the training of our network.
\subsubsection{Dataset and Evaluation Metrics}
The EMIDEC Challenge dataset consists of delayed-enhancement cardiac MRI with a training set of 100 patients including 67 pathological cases and 33 normal cases, and a testing set of another 50 patients including 33 pathological cases and 17 normal cases. For training cases, manual annotations are provided with 0 for background, 1 for cavity, 2 for normal myocardium, 3 for myocardial infarction, and 4 for no-reflow.
For evaluation of segmentation results, clinical metrics include the average errors for the volume of the myocardium of the left ventricle, the volume and the percentage of MI and no-reflow and geometrical metrics include the average Dice coefficient for the different areas and Hausdorff distance for the myocardium.
\section{Experiments and Results}
There are totally 100 scans with published labels to train our network, while the other 50 scans remained for final evaluation. We make random 5-fold cross validation by randomly shuffling the sequence of cases and splitting the training dataset into 5 fixed folds with 20 MR scans in each fold, using 4 folds for training and the other one for testing. In this way, we can make a more comprehensive evaluation of our method.
Table \ref{Table1} and Table \ref{Table2} respectively represent the cross-validation results of our 2D coarse segmentation output and final segmentation output. The evaluation of clinical and geometrical metrics is based on the official code\footnote{https://github.com/EMIDEC-Challenge/Evaluation-metrics}. From the result, we can see that the application of 3D U-Net can make use of the volumetric spatial information and improve the segmentation result.
\begin{table}[]
\caption{Quantitative 5-fold cross-validation results of 2D coarse segmentation output.} \label{Table1}
\centering
\setlength{\tabcolsep}{2.5mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccccc}
\hline
Targets & Metrics & fold 0 & fold 1 & fold 2 & fold 3 & fold 4 \\ \hline
\multirow{3}{*}{Myocardium} & Dice(\%) & 83.98 & 85.29 & 85.94 & 85.83 & 85.59 \\
& VolDif(mm$^3$) & 10906.43 & 6384.88 & 6012.93 & 5423.57 & 11629.66 \\
& HSD(mm) & 17.01 & 13.77 & 13.68 & 12.60 & 12.65 \\ \hline
\multirow{3}{*}{Infarction} & Dice(\%) & 44.39 & 53.12 & 48.10 & 50.55 & 66.34 \\
& VolDif(mm$^3$) & 9883.17 & 4821.30 & 3449.09 & 4986.66 & 6621.52 \\
& Ratio(\%) & 7.10 & 4.39 & 2.96 & 4.68 & 5.00 \\ \hline
\multirow{3}{*}{NoReflow} & Dice(\%) & 65.26 & 63.84 & 70.61 & 60.24 & 66.67 \\
& VolDif(mm$^3$) & 2703.34 & 775.07 & 480.32 & 703.7 & 443.86 \\
& Ratio(\%) & 1.68 & 0.68 & 0.37 & 0.67 & 0.32 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\caption{Quantitative 5-fold cross-validation results of our final segmentation output.} \label{Table2}
\centering
\setlength{\tabcolsep}{2.5mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccccc}
\hline
Targets & Metrics & fold 0 & fold 1 & fold 2 & fold 3 & fold 4 \\ \hline
\multirow{3}{*}{Myocardium} & Dice(\%) & 86.66 & 86.46 & 87.87 & 87.61 & 87.13 \\
& VolDif(mm$^3$) & 8680.23 & 5405.52 & 6087.88 & 4880.5 & 7317.76 \\
& HSD(mm) & 15.88 & 14.12 & 12.96 & 13.43 & 13.79 \\ \hline
\multirow{3}{*}{Infarction} & Dice(\%) & 61.44 & 72.08 & 81.51 & 68.48 & 76.87 \\
& VolDif(mm$^3$) & 6536.55 & 3233.94 & 3514.97 & 4091.74 & 3520.3 \\
& Ratio(\%) & 4.67 & 2.91 & 2.85 & 3.96 & 2.64 \\ \hline
\multirow{3}{*}{NoReflow} & Dice(\%) & 68.47 & 68.33 & 79.67 & 65.12 & 73.48 \\
& VolDif(mm$^3$) & 2158.34 & 712.36 & 451.84 & 620.93 & 649.98 \\
& Ratio(\%) & 1.37 & 0.65 & 0.35 & 0.61 & 0.46 \\ \hline
\end{tabular}
\end{table}
For our final segmentation results, the network performs well on myocardium segmentation, with an average dice score of 0.8715. However, for more challenging segmentation of pathological areas, the average dice score is only 0.7208 and 0.7101 for infarction and no-reflow. Also, the performance variance is very small, which indicates the robustness of our method. Fig.\ref{Fig2} illustrates two samples of our segmentation results and corresponding ground truth in the validation set of our own split. We can see that the segmentation results closely approximate the ground truth.
\begin{figure}[!htb]
\includegraphics[width=12cm]{2.png}
\centering
\caption{Two samples of our segmentation results. The columns from left to right are the image, ground truth, and prediction. (cavity in red, myocardium in green, infarction in blue, no-reflow in yellow)}
\label{Fig2}
\end{figure}
In the inference stage, we obtain the final prediction on the test set by ensembling the segmentation results of each fold using majority voting. The evaluation results of our method on the test set of the EMIDEC dataset are presented in Table \ref{Table3}. The average Dice scores are very similar to our cross-validation results (even higher on some metrics), which indicates that our method is stable for the myocardial infarction segmentation task.
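The per-voxel majority voting used for ensembling the folds can be sketched as follows (a simplified illustration, not the exact submission code; ties are broken toward the lowest label index here):

```python
import numpy as np

def majority_vote(label_maps):
    """Ensemble per-voxel label maps from K folds by majority vote.

    label_maps: integer array of shape (K, ...) with class labels.
    Returns the most frequent label per voxel (ties -> lowest label).
    """
    label_maps = np.asarray(label_maps)
    n_classes = label_maps.max() + 1
    # Count the votes for each class at every voxel, then take the argmax.
    votes = np.stack([(label_maps == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)
```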
\begin{table}[]
\caption{Evaluation results of our method on the EMIDEC test set.} \label{Table3}
\setlength{\tabcolsep}{4mm}
\renewcommand\arraystretch{1.1}
\begin{tabular}{ccccc}
\hline
Targets & Dice(\%) & VolDif(mm$^3$) & HSD(mm) & Ratio(\%) \\ \hline
Myocardium & 87.86 & 9258.24 & 13.01 & - \\
Infarction & 71.24 & 3117.88 & - & 2.38 \\
NoReflow & 78.51 & 634.69 & - & 0.38 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, we propose a cascaded convolutional neural network for automatic myocardial infarction segmentation from delayed-enhancement cardiac MRI. The network consists of a 2D U-Net that focuses on intra-slice information to produce a preliminary segmentation and a 3D U-Net that utilizes the volumetric spatial information to refine it. Our method is trained and validated on the MICCAI 2020 EMIDEC challenge dataset. On the test set, our ensembled model achieves average Dice scores of 0.8786, 0.7124, and 0.7851 for myocardium, infarction, and no-reflow respectively, ranking first in the segmentation challenge.
\bibliographystyle{splncs04}
\section*{Appendix}
\section{Additional results}
\label{sm_additional_results}
\subsection{Early phase \TrF correlates with final generalization}
\label{sec_early_final_appendix}
In this section, we present additional experimental results for Section~\ref{sec_trf_generalization}. Figure~\ref{fig:IF_LR_trF_bs} shows the experiments with varying batch size for CIFAR-100 and CIFAR-10. The conclusions are the same as discussed in the main text in Section~\ref{sec_trf_generalization}. We also show the training accuracy for all the experiments performed in Figure~\ref{fig:IF_LR_trF} and Figure~\ref{fig:IF_LR_trF_bs}; it is shown in Figure~\ref{fig:IF_LR_trF_train} and Figure~\ref{fig:IF_LR_trF_bs_train} respectively. Most runs in these experiments reach a training accuracy of $\sim$99\% or above.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc10augbs.pdf}
\caption{CIFAR-10 (with aug.)}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc100noaugbs.pdf}
\caption{CIFAR-100 (w/o aug.)}
\end{subfigure}
\caption{Association between early phase values of \TrF and generalization holds on the CIFAR-10 and CIFAR-100 datasets. Each point corresponds to multiple runs with randomly chosen seeds and a specific value of batch size. $\text{Tr}\mathbf{F}_i$ is recorded during the early phase (2-7 epochs, see main text for details), while the test accuracy is the maximum value along the entire optimization path (averaged across runs with the same batch size). The horizontal and vertical error bars show the standard deviation of values across runs. The plots show that early phase \TrF is predictive of final generalization.}
\label{fig:IF_LR_trF_bs}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFimgnetlrtrainacc.pdf}
\caption{ImageNet (w/o augmentation)}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc10auglrtrainacc.pdf}
\caption{CIFAR-10 (with augmentation)}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.97\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc100noauglrtrainacc.pdf}
\caption{CIFAR-100 (w/o augmentation)}
\end{subfigure}
\caption{Training accuracy for the runs corresponding to Figure \ref{fig:IF_LR_trF}. Each point corresponds to multiple seeds and a specific value of learning rate. \TrFi is recorded during the early phase of training (2-7 epochs, see the main text for details), while the training accuracy is the maximum value along the entire optimization path (averaged across runs with the same learning rate).}
\label{fig:IF_LR_trF_train}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc10augbstrainacc.pdf}
\caption{CIFAR-10 (with aug.)}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc100noaugbstrainacc.pdf}
\caption{CIFAR-100 (w/o aug.)}
\end{subfigure}
\caption{Training accuracy for the runs corresponding to Figure \ref{fig:IF_LR_trF_bs}. Each point corresponds to multiple runs with randomly chosen seeds and a specific value of batch size. $\text{Tr}\mathbf{F}_i$ is recorded during the early phase (2-7 epochs, see main text for details), while the training accuracy is the maximum value along the entire optimization path (averaged across runs with the same batch size).}
\label{fig:IF_LR_trF_bs_train}
\end{figure}
\subsection{Fisher Penalty}
\label{app:fisher_penalty_additional_results}
We first show additional metrics for experiments summarized in Table~\ref{tab:fisher_penalty_setting1} in the main text. Table~\ref{app:tab:fisher_penalty_setting1_acc} summarizes the final training accuracy, showing that the baseline experiments were trained until approximately 100\% training accuracy was reached. Table~\ref{app:tab:fisher_penalty_setting1_TrF} supports the claim that all gradient norm regularizers reduce the maximum value of \TrF (we measure \TrF starting from after one epoch of training because \TrF explodes in networks with batch normalization layers at initialization). Finally, Table~\ref{app:tab:fisher_penalty_setting1_time} confirms that the regularizers incurred a relatively small additional computational cost.
Figure~\ref{app:fig:fisher_penalty_setting1_visualization} complements Figure~\ref{fig:fisher_penalty_setting1_visualization} for the other two models on the CIFAR-10 and CIFAR-100 datasets. The figures are in line with the results of the main text.
Lastly, in Table~\ref{app:tab:fisher_penalty_setting2_acc} we report the final training accuracy reached by runs reported in Table~\ref{tab:fisher_penalty_setting2} in the main text.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figs/demoDenseNetC100woaug.pdf}
\vspace{-0.3cm}
\includegraphics[width=0.6\columnwidth]{figs/demoDenseNetC100woaugfigure.pdf}
\caption{DenseNet on CIFAR-100 (w/o aug.)}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figs/demoSCNNC10woaug.pdf}
\vspace{-0.3cm}
\includegraphics[width=0.6\columnwidth]{figs/demoSCNNC10woaugfigure.pdf}
\caption{SimpleCNN on CIFAR-10 (w/o aug.)}
\end{subfigure}
\caption{Same as Figure~\ref{fig:fisher_penalty_setting1_visualization}, but for DenseNet on CIFAR-100, and SimpleCNN on CIFAR-10. Curves were smoothed for visual clarity.}
\label{app:fig:fisher_penalty_setting1_visualization}
\end{figure}
\begin{table}[H]
\centering
\small
\caption{The maximum value of \TrF along the optimization trajectory for experiments on CIFAR-10 or CIFAR-100 included in Table~\ref{tab:fisher_penalty_setting1}.}
\label{app:tab:fisher_penalty_setting1_TrF}
\begin{tabular}{cll|ll|ll}
\toprule
Setting & $\eta^*$ & Baseline & \textsc{GP}\textsubscript{x}\xspace & \textsc{GP}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
DenseNet/C100 (w/o aug.) & 24.68 & 98.17 & 83.64 & 64.33 & 66.24 & 73.66 \\
VGG11/C100 (w/o aug.) & 50.88 & 148.19 & 102.95 & 58.53 & 64.93 & 62.96 \\
WResNet/C100 (w/o aug.) & 26.21 & 91.39 & 41.43 & 40.94 & 56.53 & 39.31 \\
\midrule
SCNN/C10 (w/o aug.) & 24.21 & 52.05 & 47.96 & 25.03 & 19.63 & 25.35 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\small
\caption{Time per epoch (in seconds) for experiments in Table~\ref{tab:fisher_penalty_setting1}. }
\label{app:tab:fisher_penalty_setting1_time}
\begin{tabular}{cll|ll|ll}
\toprule
Setting & $\eta^*$ & Baseline & \textsc{GP}\textsubscript{x}\xspace & \textsc{GP}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
WResNet/TinyImageNet (aug.) & 214.45 & 142.69 & 233.14 & 143.78 & 208.62 & 371.74 \\
\midrule
DenseNet/C100 (w/o aug.) & 78.88 & 57.40 & 77.89 & 78.66 & 97.25 & 75.96 \\
VGG11/C100 (w/o aug.) & 30.50 & 35.27 & 31.54 & 32.52 & 43.41 & 42.40 \\
WResNet/C100 (w/o aug.) & 49.64 & 47.99 & 71.33 & 61.36 & 76.93 & 53.25 \\
\midrule
SCNN/C10 (w/o aug.) & 18.64 & 19.51 & 26.09 & 19.91 & 21.21 & 20.55 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\small
\caption{The final epoch training accuracy for experiments shown in Table~\ref{tab:fisher_penalty_setting1}. Experiments with small learning rate reach no lower accuracy than experiments corresponding to a large learning rate $\eta^*$.}
\label{app:tab:fisher_penalty_setting1_acc}
\begin{tabular}{cll|ll|ll}
\toprule
Setting & $\eta^*$ & Baseline & \textsc{GP}\textsubscript{x}\xspace & \textsc{GP}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
WResNet/TinyImageNet (aug.) & 99.84\% & 99.96\% & {99.97\%} & 93.84\% & 81.05\% & 86.46\% \\
\midrule
DenseNet/C100 (w/o aug.) & {99.98\%} & 99.97\% & 99.96\% & 99.91\% & 99.91\% & 99.39\% \\
VGG11/C100 (w/o aug.) & {99.98\%} & {99.98\%} & 99.85\% & 99.62\% & 97.73\% & 86.32\% \\
WResNet/C100 (w/o aug.) & 99.98\% & 99.98\% & 99.97\% & 99.96\% & {99.99\%} & 99.94\% \\
\midrule
SCNN/C10 (w/o aug.) & {100.00\%} & {100.00\%} & 97.79\% & {100.00\%} & 93.80\% & 94.64\% \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\small
\caption{The final epoch training accuracy for experiments shown in Table~\ref{tab:fisher_penalty_setting2}. }
\label{app:tab:fisher_penalty_setting2_acc}
\begin{tabular}{cll}
\toprule
Setting & $\eta^*$ & $\mathrm{FP}$ \\
\midrule
DenseNet/C100 (aug) & {98.75$\pm$0.27\%} & 97.53$\pm$1.75\% \\
SCNN/C10 (aug) & 97.52$\pm$1.98\% & {98.94$\pm$0.08\%} \\
VGG11/C100 (aug) & {98.64$\pm$0.06\%} & 93.06$\pm$0.01\% \\
WResNet/C100 (aug) & {99.97$\pm$0.01\%} & 99.97$\pm$0.00\% \\
WResNet/Tiny ImageNet (aug) & {99.86$\pm$0.02\%} & 93.65$\pm$5.85\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Fisher Penalty Reduces Memorization}
\label{app:sec:fisher_penalty_memorization}
In this section, we include additional experimental results for Section~\ref{sec:fisher_penalty_prevents_memorization}. Figure~\ref{app:fig:fisher_penalty_noisy_data_visualization} is the same as Figure~\ref{fig:fisher_penalty_noisy_data_visualization}, but for ResNet-50. Finally, we show additional metrics for the experiments involving 25\% noisy examples. Figure~\ref{app:fig:fisher_penalty_angle} shows the cosine between the mini-batch gradients computed on the noisy and clean data. In Table~\ref{app:tab:fisher_penalty_noisy_data_acc_noisy} and Table~\ref{app:tab:fisher_penalty_noisy_data_acc_clean} we show training accuracy on the noisy and clean examples in the final epoch of training.
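The cosine reported below is the standard cosine similarity between the two flattened gradient vectors; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def gradient_cosine(g_noisy, g_clean):
    """Cosine between two (flattened) gradient vectors, in [-1, 1]."""
    g_n, g_c = np.ravel(g_noisy), np.ravel(g_clean)
    return float(g_n @ g_c / (np.linalg.norm(g_n) * np.linalg.norm(g_c)))
```

A negative value means that a step along the noisy-example gradient increases the loss on the clean examples.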
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/gradnormcomparison2209mem25table2gnpsample.pdf}
\caption{Gradient norm on clean examples (left, denoted as $g_c$), noisy examples (middle, denoted as $g_n$), and their ratio (right); evaluated on the training set.}
\end{subfigure}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/demoacc2209mem25table2gnpsample.pdf}
\caption{Training accuracy on clean and noisy examples (solid/dashed lined, left), validation accuracy (middle), and a scatter plot of training accuracy on clean vs noisy examples (right). }
\end{subfigure}
\caption{Same as Figure~\ref{fig:fisher_penalty_noisy_data_visualization}, but for ResNet-50 trained on the CIFAR-100 dataset.}
\label{app:fig:fisher_penalty_noisy_data_visualization}
\end{figure*}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/cos2209mem25table2gnpy.pdf}
\caption{\textsc{GP}\textsubscript{r}\xspace}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/cos2209mem25table2gnpsample.pdf}
\caption{\textsc{FP}\xspace}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/cos2209mem25table2gnpinput.pdf}
\caption{\textsc{GP}\textsubscript{x}\xspace}
\end{subfigure}
\caption{The cosine between the mini-batch gradients computed on the noisy ($\mathbf{g}_n$) and clean ($\mathbf{g}_c$) data, both measured on the training set. We observe that in the early phase of training the cosine is negative. Furthermore, stronger regularization with \textsc{GP}\textsubscript{r}\xspace or \textsc{FP}\xspace correlates with a more negative cosine.}
\label{app:fig:fisher_penalty_angle}
\end{figure}
\begin{table*}
\centering
\caption{Training accuracy on the clean examples in the final epoch, for experiments reported in Table~\ref{tab:fisher_penalty_noisy_data} (with 25\% examples with noisy labels).}
\label{app:tab:fisher_penalty_noisy_data_acc_clean}
\begin{tabular}{lllll|ll}
\toprule
Label Noise & Setting & Baseline & Mixup & \textsc{GP}\textsubscript{x}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
25\% & VGG-11/C100 & 99.79\% & 73.14\% & 97.46\% & 79.52\% & 81.75\% \\
& ResNet-52/C100 & 95.87\% & 77.71\% & 95.88\% & 78.72\% & 74.21\% \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\centering
\caption{Training accuracy on the noisy examples in the final epoch, for experiments reported in Table~\ref{tab:fisher_penalty_noisy_data} (with 25\% examples with noisy labels).}
\label{app:tab:fisher_penalty_noisy_data_acc_noisy}
\begin{tabular}{lllll|ll}
\toprule
Label Noise & Setting & Baseline & Mixup & \textsc{GP}\textsubscript{x}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
25\% & VGG-11/C100 & 99.56\% & 8.29\% & 89.23\% & 4.10\% & 4.96\% \\
& ResNet-52/C100 & 73.67\% & 4.22\% & 72.67\% & 4.14\% & 2.86\% \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Early \TrF influences final curvature}
\label{sec_histogram_appendix}
In this section, we present additional experimental results for Section~\ref{sec_trf_final_minima}. The experiment on CIFAR-10 is shown in Figure~\ref{fig:histogram_c10}. The conclusions are the same as discussed in the main text in Section~\ref{sec_trf_final_minima}.
\begin{figure}
\vspace{-1pt}
\centering
\begin{subfigure}[t]{1\textwidth}
\includegraphics[width=1.\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/histogramc10.pdf}
\end{subfigure}
\caption{Small \TrF during the early phase of training is more likely to reach wider minima as measured by \TrH. Left: two models are trained with different levels of regularization for 20 epochs on CIFAR-10. \TrF at the end of 20 epochs (denoted as \TrFi) is shown. Middle: each model is then used as initialization and trained until convergence using the low-regularization configuration with different random seeds. A histogram of \TrH at the point corresponding to the best test accuracy along the trajectory (denoted by \TrHf) is shown. Right: a histogram of the best test accuracy corresponding to the middle figure is shown.}
\label{fig:histogram_c10}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=1.\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/GTtrHc10augbs.pdf}
\caption{CIFAR-10 (w/ augmentation)}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=1.05\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/GTtrHc100noauglr.pdf}
\caption{CIFAR-100 (w/o augmentation)}
\end{subfigure}
\caption{The value of \TrH over the course of training. Each point corresponds to runs with different seeds and a specific value of learning rate $\eta$ and batch size $S$. $\ell$ and TA respectively denote the minimum training loss and the maximum test accuracy along the entire trajectory for the corresponding runs (averaged across seeds). The plots show that flatter optimization trajectories become biased towards flatter minima early during training, at a coarse scale of hyper-parameter values (red vs blue).}
\label{fig:IF_GT_appendix}
\vspace{-5pt}
\end{figure}
Next, to understand why smaller \TrF during the early phase is more likely to end up in a wider final minimum, we track \TrH during the entire course of training and show that it stabilizes early on. In this experiment, we create two sets of hyper-parameters: coarse-grained and fine-grained. For CIFAR-10, we use batch size $S \in A\cup B$, where $A=\{480,500,520\}$ and $B = \{80,100,120\}$. For all batch size configurations, a learning rate of 0.02 is used. Overloading the symbols $A$ and $B$ for CIFAR-100, we use learning rate $\eta \in A\cup B$, where $A = \{0.0008,0.001, 0.0012\}$ and $B=\{0.008, 0.01, 0.012\}$. For all learning rate configurations, a batch size of 100 is used. In both cases, the elements within each set ($A$ and $B$) vary on a fine-grained scale, while the elements across the two sets vary on a coarse-grained scale. The remaining details and additional experiments can be found in Appendix \ref{sec_trf_final_minima_details}. The experiments are shown in Figure \ref{fig:IF_GT_appendix}. Notice that, after initialization (index 0 on the horizontal axis), the first value is computed at epoch 10, by which point the experiments show that the entanglement has already started to hold, in alignment with the late phase.
We make three observations in this experiment. First, the relative ordering of \TrH values for runs between sets $A$ vs $B$ stays the same after the first 10 epochs. Second, the degree of entanglement is higher between any two epochs when looking at runs across sets $A$ and $B$, while it is weaker when looking at runs within any one of the sets. Finally, test accuracies for set $B$ runs are always higher than those of set $A$ runs, but this trend is not strong for runs within any one set. Note that the minimum loss values are roughly at a similar scale for each dataset and they are all at or below $10^{-2}$.
\section{Computation of \TrH }
\label{app:computation_of_TrH}
We computed \TrH in our experiments using Hutchinson's estimator \cite{hutchinson1990stochastic},
\begin{align*}
\mathrm{Tr}(\mathbf{H}) &= \mathrm{Tr}(\mathbf{H} \cdot \mathbf{I})\\
&= \mathrm{Tr}(\mathbf{H} \cdot \mathbb{E}[\mathbf{z}\mathbf{z}^T])\\
&= \mathbb{E}[\mathrm{Tr}(\mathbf{H} \cdot \mathbf{z}\mathbf{z}^T)]\\
&= \mathbb{E}[\mathbf{z}^T\mathbf{H}\mathbf{z}]\\
&\approx \frac{1}{M} \sum_{i=1}^{M} \mathbf{z}_i^T\mathbf{H}\mathbf{z}_i\\
&= \frac{1}{M} \sum_{i=1}^{M} \mathbf{z}_i^T \frac{\partial }{\partial \theta } \left( \frac{\partial \ell}{ \partial \theta^T} \right) \mathbf{z}_i\\
&= \frac{1}{M} \sum_{i=1}^{M} \mathbf{z}_i^T \frac{\partial }{\partial \theta } \left( \frac{\partial \ell}{ \partial \theta^T} \mathbf{z}_i \right),
\end{align*}
where $\mathbf{I}$ is the identity matrix, $\mathbf{z}$ is a multi-variate standard Gaussian random variable, and $\mathbf{z}_i$'s are i.i.d. instances of $\mathbf{z}$. The larger the value of $M$, the more accurate the approximation is. We used $M=30$. To make the above computation efficient, note that the gradient $\frac{\partial \ell}{ \partial \theta}$ only needs to be computed once and it can be re-used in the summation over the $M$ samples.
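A minimal numpy sketch of the estimator above, assuming access only to Hessian-vector products $\mathbf{z} \mapsto \mathbf{H}\mathbf{z}$ (in practice these come from double backpropagation; the function name is ours):

```python
import numpy as np

def hutchinson_trace(hvp, dim, n_samples=30, rng=None):
    """Estimate Tr(H) from Hessian-vector products via Hutchinson's method.

    hvp: callable mapping a vector z to H @ z.
    dim: dimensionality of the parameter space.
    """
    rng = np.random.default_rng(rng)
    estimates = []
    for _ in range(n_samples):
        z = rng.standard_normal(dim)  # E[z z^T] = I for standard Gaussian z
        estimates.append(z @ hvp(z))  # one sample of z^T H z
    return float(np.mean(estimates))
```

With an explicit matrix one can check that the estimate converges to the true trace as `n_samples` grows; in a neural network, `hvp` would be implemented with two backward passes so that the full Hessian is never materialized.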
\section{Approximations in Fisher Penalty}
\label{app:approx_in_FP}
In this section, we describe in detail the approximations made when computing the Fisher Penalty. Recall that \TrF can be expressed as
\begin{equation}
\mathrm{Tr}(\mathbf{F}) = \mathbb{E}_{x \sim \mathcal{X},\hat y \sim p_{\theta}(y|\bm{x})} \left[ \Vert \frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x},\hat y) \Vert_2^2 \right].
\end{equation}
In preliminary experiments, we found that we can use the norm of the expected gradient rather than the expected norm of the gradient (the latter being the direct expression of \TrF):
\begin{align*}
\nabla
\mathbb{E}_{x \sim \mathcal{X},\hat y \sim p_{\theta}(y|\bm{x})} \left[ \left\| \frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x},\hat y) \right\|_2^2 \right]
&\approx
\frac{1}{N}
\sum_{n=1}^N
\frac{1}{M}
\sum_{m=1}^M
\nabla
\left\|
\frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x}_n,\hat y_{nm})
\right\|^2_2
\\
&\geq
\nabla
\left\|
\frac{1}{NM}
\sum_{n=1}^N
\sum_{m=1}^M
\frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x}_n,\hat y_{nm})
\right\|^2_2,
\end{align*}
where $N$ and $M$ are the minibatch size and the number of samples from $p_{\theta}(y|\bm{x}_n)$, respectively. This greatly improves the computational efficiency. With $N=B$ and $M=1$, we end up with the following learning objective function:
\begin{equation}
\label{app:eq_fisher_objective}
\ell'(\bm{x}_{1:B}, y_{1:B}; \bm{\theta}) = \frac{1}{B} \sum_{i=1}^B \ell(\bm{x}_i,y_i; \bm{\theta}) + \alpha \left\| \frac{1}{B} \sum_{i=1}^B g(\bm{x}_i, \hat y_i) \right\|^2.
\end{equation}
We found empirically that $\left\| \frac{1}{B} \sum_{i=1}^B g(\bm{x}_i, \hat y_i) \right\|^2$, which we denote by $\mathrm{Tr}(\mathbf{F}_B)$, and \TrF correlate well during training. To demonstrate this, we train SimpleCNN on the CIFAR-10 dataset with 5 different learning rates (from $10^{-3}$ to $10^{-1}$). The outcome is shown in Figure~\ref{app:fig:correlationTrBFvTrF}. We see that for most of the training, with the exception of the final phase, $\mathrm{Tr}(\mathbf{F}_B)$ and $\mathrm{Tr}(\mathbf{F})$ correlate extremely well. Equally importantly, we find that using a large learning rate reduces both $\mathrm{Tr}(\mathbf{F}_B)$ and \TrF, which further suggests the two are closely connected.
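As a concrete illustration of the objective in Equation~\ref{app:eq_fisher_objective}, the sketch below forms the penalized loss for a linear softmax classifier with $M=1$ sampled label per example. The toy model and the value of $\alpha$ are illustrative assumptions, not the paper's training setup:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fisher_penalized_loss(W, X, y, alpha, rng=None):
    """Cross-entropy plus alpha * || (1/B) sum_i grad_W CE(x_i, y_hat_i) ||^2,
    with each y_hat_i sampled from the model's own output distribution."""
    rng = np.random.default_rng(rng)
    B = X.shape[0]
    p = softmax(X @ W)                       # (B, C) predicted probabilities
    ce = -np.log(p[np.arange(B), y]).mean()  # data term (true labels y)
    # One sampled label per example (M = 1), as in the objective above.
    y_hat = np.array([rng.choice(p.shape[1], p=p[i]) for i in range(B)])
    onehot = np.eye(p.shape[1])[y_hat]
    g = X.T @ (p - onehot) / B               # mean gradient w.r.t. W
    return ce + alpha * np.sum(g ** 2)
```

In a deep network the penalty gradient would be obtained by differentiating through $g$ (double backpropagation) rather than in closed form.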
\begin{figure}[H]
\centering
\includegraphics[width=0.5\columnwidth]{figs/289miscexptunelrfixscnnlrscatter.pdf}
\caption{Correlation between \TrF and $\mathrm{Tr}(\mathbf{F}_B)$ for SimpleCNN trained on the CIFAR-10 dataset. Blue to red color denotes learning rates from $10^{-3}$ to $10^{-1}$. The values of \TrF and $\mathrm{Tr}(\mathbf{F}_B)$ correlate strongly for most of the training trajectory. Using a large learning rate reduces both \TrF and $\mathrm{Tr}(\mathbf{F}_B)$. }
\label{app:fig:correlationTrBFvTrF}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.45\columnwidth]{figs/159exp1gnpmv2scnngnpmValidationAccuracy.pdf}
\includegraphics[width=0.45\columnwidth]{figs/159exp1gnpmv2scnngnpmmathrmTrmathbfF.pdf}
\caption{Fisher Penalty with $f=10$}
\end{subfigure}%
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=0.45\columnwidth]{figs/299miscexptunegnpfreq1scnnlrValidationAccuracy.pdf}
\includegraphics[width=0.45\columnwidth]{figs/299miscexptunegnpfreq1scnnlrmathrmTrmathbfF.pdf}
\caption{Fisher Penalty with $f=1$}
\end{subfigure}
\caption{A comparison between the effect of recomputing Fisher Penalty gradient every 10 iterations (left) or every iteration (right), with respect to validation accuracy and \TrF. We denote by $f$ the frequency with which we update the gradient. Both experiments result in approximately 80\% test accuracy with the best configuration.}
\label{app:fig:frequency_in_FP}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\columnwidth]{figs/1411miscexpgnpbs1v3scnnlrValidationAccuracy.pdf}
\includegraphics[width=0.45\columnwidth]{figs/1411miscexpgnpbs1v3scnnlrmathrmTrmathbfF.pdf}
\caption{Using Fisher Penalty without the approximation results in similar generalization performance. We penalize the norm of the gradient computed on a single example rather than the norm of the mini-batch gradient (as in Equation~\ref{eq_fisher_objective}). We observe that this variant of Fisher Penalty improves generalization to a similar degree as the version used in the paper (cf. Figure~\ref{app:fig:frequency_in_FP}), achieving 79.7\% test accuracy.}
\label{app:fig:exact_FP}
\end{figure}
We also update the gradient of $\mathrm{Tr}(\mathbf{F}_B)$ only every 10 optimization steps. We found empirically that this affects neither the generalization performance nor the ability to regularize \TrF in our setting. However, we acknowledge that this choice might have to be reconsidered when training with very large learning rates or with larger models.
Figure~\ref{app:fig:frequency_in_FP} compares the learning curves of training with \textsc{FP}\xspace recomputed at every optimization step against those with \textsc{FP}\xspace recomputed every 10th optimization step. For each, we tune the hyperparameter $\alpha$, checking 10 values equally spaced between $10^{-2}$ and $10^{0}$ on a logarithmic scale. We observe that for the optimal value of $\alpha$, both the validation accuracy and \TrF are similar between the two runs. Both experiments achieve approximately 80\% test accuracy.
Finally, to ensure that using the approximation in Equation~\ref{eq_fisher_objective} does not negatively affect how Fisher Penalty improves generalization or reduces the value of \TrF, we experiment with a variant of Fisher Penalty without the approximation. Please recall that we always measure \TrF (i.e. we do not use approximations in computing \TrF that is reported in the plots), regardless of what variant of penalty is used in regularizing the training.
Specifically, we augment the loss function with the norm of the gradient computed on the first example in the mini-batch as follows
\begin{equation}
\label{eq_fisher_objective_exact}
\ell'(\bm{x}_{1:B}, y_{1:B}; \bm{\theta}) = \frac{1}{B} \sum_{i=1}^B \ell(\bm{x}_i,y_i; \bm{\theta}) + \alpha \left\| g(\bm{x}_1, \hat y_1) \right\|^2.
\end{equation}
We apply this penalty in each optimization step. We tune the hyperparameter $\alpha$, checking 10 values equally spaced between $10^{-4}$ and $10^{-2}$ on a logarithmic scale.
Figure~\ref{app:fig:exact_FP} summarizes the results. We observe that the best value of $\alpha$ yields 79.7\% test accuracy, compared to 80.02\% test accuracy yielded by the Fisher Penalty. The effect on \TrF is also very similar: the best run reaches a maximum \TrF value of 24.16, compared to 21.38 for the Fisher Penalty. These results suggest that the approximation used in this paper's version of the Fisher Penalty, if anything, slightly improves its generalization and flattening effects.
\section{A closer look at the surprising effect of learning rate on the loss geometry in the early phase of training}
\label{app:closer_look}
It is intuitive to hypothesize that the catastrophic Fisher explosion (the initial growth of the value of \TrF) also occurs during training with a large learning rate but is overlooked due to an insufficiently fine-grained computation of \TrF. In this section, we show evidence against this hypothesis based on the literature mentioned in the main text. We also run additional experiments in which we compute the value of \TrF at each iteration.
The surprising effect of the learning rate on the geometry of the loss surface (e.g. the value of \TrF) was demonstrated in prior works~\citep{jastrzebski_relation_2018,golatkar2019,lewkowycz2020large,leclerc2020regimes}. In particular, \citet{Jastrzebski2020The,lewkowycz2020large}
show that training with a large learning rate rapidly escapes regions of high curvature, where curvature is understood as the spectral norm of the Hessian evaluated at the current point of the loss surface.
Perhaps the most direct experimental evidence against this hypothesis can be found in Figure 1 of \cite{cohen2021gradient}, where training with gradient descent and a small learning rate rapidly finds regions of the loss surface with large curvature in the early phase of training.
We also run the following experiment to provide further evidence against the hypothesis. We train SimpleCNN on the CIFAR-10 dataset using two different learning rates, while computing the value of \TrF for every mini-batch. We use 128 random samples in each iteration to estimate \TrF.
We find that training with a large learning rate never (even for a single optimization step) enters a region where the value of \TrF is as large as what is reached during training with a small learning rate.
Figure~\ref{app:fig:closer_look_early_phase_lr} shows the experimental data.
We also found a similar effect to hold when varying the batch size (see Section~\ref{app:sec:fisher_explosion_holds_in_lb}), which further shows that the observed effects cannot be explained by the difference in learning speed incurred by using a small learning rate.
To summarize, both the published evidence of \citet{Jastrzebski2020The,lewkowycz2020large,cohen2021gradient}, as well as our additional experiments, are inconsistent with the hypothesis that the results in this paper can be explained by differences in training speed between experiments using large and small learning rates.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\columnwidth]{figs/1411miscexpperbatchTrFscnnlrmathrmTrF.pdf}
\includegraphics[width=0.45\columnwidth]{figs/1411miscexpperbatchTrFscnnlrTrainingAccuracy.pdf}
\caption{Training with a large learning rate never (even for a single optimization step) enters a region with a value of \TrF as large as the maximum reached during training with a small learning rate. We run the experiment using SimpleCNN on the CIFAR-10 dataset with two different learning rates. The left plot shows the value of \TrF computed at each iteration, and the right plot shows the training accuracy computed on the current mini-batch (the curve has been smoothed for clarity). }
\label{app:fig:closer_look_early_phase_lr}
\end{figure}
\section{Catastrophic Fisher Explosion holds in training with large batch size}
\label{app:sec:fisher_explosion_holds_in_lb}
In this section, we show preliminary evidence that the conclusions transfer to large batch size training. Namely, we show that (1) catastrophic Fisher explosion also occurs in large batch size training, and (2) Fisher Penalty can improve generalization and close the generalization gap due to using a large batch size~\citep{keskar_large-batch_2017}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\columnwidth]{figs/1611miscexpperbatchTrFscnnSmathrmTrF.pdf}
\includegraphics[width=0.45\columnwidth]{figs/1611miscexpperbatchTrFscnnSTrainingAccuracy.pdf}
\caption{Catastrophic Fisher explosion occurs also in large batch size training. Experiment run on the CIFAR-10 dataset with the SimpleCNN model. The left plot shows the value of \TrF computed at each iteration, and the right plot shows the training accuracy computed on the current mini-batch (the curve has been smoothed for clarity). }
\label{app:fig:closer_look_early_phase_S}
\end{figure}
We first train SimpleCNN on the CIFAR-10 dataset using three different batch sizes, while computing the value of \TrF for every mini-batch. We use 128 random samples in each iteration to estimate \TrF.
Figure~\ref{app:fig:closer_look_early_phase_S} summarizes the experiment. We observe that training with a large batch size enters a region of the loss surface with a substantially larger value of \TrF than with the small batch size.
Next, we run a variant of one of the experiments in Table~\ref{tab:fisher_penalty_setting1}. Instead of using a suboptimal (smaller) learning rate, we use a suboptimal (larger) batch size. Specifically, we train SimpleCNN on the CIFAR-10 dataset (without augmentation) with a 10x larger batch size while keeping the learning rate the same. Using the larger batch size results in $3.24\%$ lower test accuracy ($73.70\%$ compared to $76.94\%$, c.f. Table~\ref{tab:fisher_penalty_setting1}).
We next experiment with Fisher Penalty. We apply the penalty in each optimization step and use the first $128$ examples when computing the penalty. We also use a 2x lower learning rate, which stabilizes training but does not improve generalization on its own (training with this learning rate reaches $73.59\%$ test accuracy). Figure~\ref{app:fig:fisher_penalty_large_bs} shows \TrF and validation accuracy during training for different values of the penalty. We observe that Fisher Penalty improves test accuracy from $73.59\%$ to $78.7\%$. Applying Fisher Penalty also effectively reduces the peak value of \TrF.
Taken together, the results suggest that catastrophic Fisher explosion also holds in large batch size training; using a small batch size improves generalization by a mechanism similar to that of using a large learning rate, and this mechanism can be introduced explicitly in the form of Fisher Penalty.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\columnwidth]{figs/1611exp1v7SgnpmscnngnpmmathrmTrmathbfF.pdf}
\includegraphics[width=0.45\columnwidth]{figs/1611exp1v7SgnpmscnngnpmValidationAccuracy.pdf}
\caption{Fisher Penalty improves generalization in large batch size training. Experiment run on the CIFAR-10 dataset (without augmentation) and the SimpleCNN model. Warmer color corresponds to larger coefficient used when applying Fisher Penalty.}
\label{app:fig:fisher_penalty_large_bs}
\end{figure}
\section{\TrH and \TrF correlate strongly}
\label{app:TrH_and_TrF_correlate}
We demonstrate a strong correlation between \TrH and \TrF for DenseNet, ResNet-56 and SimpleCNN in Figure~\ref{fig:correlationTrHvTrF}. We calculate \TrF using a mini-batch. We see that \TrF has a smaller magnitude than \TrH (since we use a mini-batch gradient), but the two correlate strongly.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=1.\columnwidth]{figs/2605densenetTrHvTrF.pdf}
\caption{DenseNet on CIFAR-100}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=1.\columnwidth]{figs/2605scnnTrHvTrF.pdf}
\caption{SimpleCNN on CIFAR-10}
\end{subfigure}
\begin{subfigure}[t]{0.31\textwidth}
\includegraphics[width=1.\columnwidth]{figs/2905resnetskipinithighlrTrHvTrF.pdf}
\caption{ResNet-56 on CIFAR-100}
\end{subfigure}
\caption{Correlation between \TrF and \TrH.}
\label{fig:correlationTrHvTrF}
\end{figure}
\section{Relationship between Fisher Penalty and gradient norm penalty}
\label{app:relationship_FP_and_gp}
We give here a short argument that \textsc{GP}\xspace might act as a proxy for regularizing \TrF. Let $f(\mathbf{x})$ represent the logits of the network, and $\mathcal{L}$ represent the loss. Then $\|\mathbf{g}\|^2= \| \frac{\partial \mathcal{L}(\mathbf{x}, y)}{\partial f(\mathbf{x})}\frac{\partial f(\mathbf{x})}{\partial \mathbf{\theta}} \|^2$, and $\mathrm{Tr}(\mathbf{F}) = \mathbb{E}_{\hat y \sim p_{\bm{\theta}}(y|\mathbf{x})}\| \frac{\partial \mathcal{L}(\mathbf{x},\hat y)}{\partial f(\mathbf{x})} \frac{\partial f(\mathbf{x})}{\partial \mathbf{\theta}} \|^2$. Hence, in particular, reducing the scale of the Jacobian of the logits with respect to the weights reduces both $\|\mathbf{g}\|^2$ and $\mathrm{Tr}(\mathbf{F})$. Empirically, penalizing the gradient norm seems to be a less effective regularizer, which suggests that it acts as a proxy for regularizing \TrF. A similar argument can also be made for \textsc{GP}\textsubscript{x}\xspace.
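The shared Jacobian factor can be checked numerically on a linear softmax model, where $\frac{\partial f(\mathbf{x})}{\partial \mathbf{\theta}}$ is built from the input $\mathbf{x}$ alone (an illustrative NumPy sketch of our own, not part of the experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Linear softmax model: for parameters W, the Jacobian of the logits w.r.t. W
# is built from the input x, so ||x||^2 is the shared "Jacobian scale" factor.
C, D = 4, 6
W = rng.normal(size=(C, D))
x = rng.normal(size=D)
y_true = 2
p = softmax(W @ x)
jac_factor = (x ** 2).sum()

def sq_norm(y):
    d = p - np.eye(C)[y]             # dL/dlogits for label y
    return (np.outer(d, x) ** 2).sum(), (d ** 2).sum()

# ||g||^2 (true label) factors as ||dL/dlogits||^2 * ||x||^2 ...
g_sq, d_sq = sq_norm(y_true)
assert np.isclose(g_sq, d_sq * jac_factor)

# ... and so does every term of Tr(F) (labels sampled from the model):
tr_f = sum(p[y] * sq_norm(y)[0] for y in range(C))
tr_f_factored = sum(p[y] * sq_norm(y)[1] for y in range(C)) * jac_factor
assert np.isclose(tr_f, tr_f_factored)
```

Shrinking the Jacobian factor therefore shrinks both quantities at once, consistent with the argument above.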
\section{Fisher Penalty Reduces Memorization}
\label{app:FP_and_memorization}
We present here a short argument that Fisher Penalty can be seen as reducing the training speed of examples that are both labeled randomly and for which the model makes a random prediction.
Let $\mathcal{D}_R = \{\bm{x}_i, y_i \}_i$ denote a set of examples where each label $y_i$ is sampled from the discrete uniform distribution $\mathcal{U}$. Let $g(\bm{x},y; \bm{\theta})=\frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x}, y; \bm{\theta})$ denote the gradient of the loss function evaluated on an example $(\bm{x}, y)$.
Assume that the predictive distribution $p_{\bm{\theta}}(y|\bm{x})$ is uniform for all $\bm{x} \in \mathcal{D}_R$. Then the expected mean squared gradient norm of examples in $\mathcal{D}_R$ over different samplings of labels is $\mathbb{E}[\|g(\bm{x},y)\|_2^2] = \frac{1}{|\mathcal{D}_R|} \sum_i \mathbb{E}_{y \sim \mathcal{U}}[\| g(\bm{x}_i,y) \|_2^2] = \frac{1}{|\mathcal{D}_R|} \sum_i \mathbb{E}_{\hat y \sim p_{\bm{\theta}}(\hat y|\bm{x}_i)}[\| g(\bm{x}_i,\hat y) \|_2^2]$.
Fisher Penalty aims to penalize the trace of the Fisher Information Matrix. For examples in $\mathcal{D}_R$, \TrF evaluates to $\mathrm{Tr}(\mathbf{F}_R) = \frac{1}{|\mathcal{D}_R|} \sum_i \mathbb{E}_{\hat y \sim p_{\bm{\theta}}(\hat y|\bm{x}_i)} \left[ \Vert g(\bm{x}_i,\hat y) \Vert_2^2 \right]$, which under our assumptions is equal to $\mathbb{E}[\|g(\bm{x},y)\|_2^2]$.
We are interested in understanding how Fisher Penalty affects the learning speed of noisy examples. The reduction in the training loss for a given example can be related to its gradient norm using Taylor expansion. Consider the difference in training loss $\Delta \ell = \ell \left( \bm{x}, y; \bm{\theta} - \eta g(\bm{x}, y) \right) - \ell \left( \bm{x}, y; \bm{\theta} \right)$ after performing a single step of gradient descent on this example. Using first-order Taylor expansion we arrive at $\Delta \ell \approx -\eta \|g(\bm{x}, y)\|_2^2$.
Taken together, penalizing $\|g(\bm{x}, y)\|_2$, which is achieved by penalizing $\mathrm{Tr}(\mathbf{F}_R)$, can be seen as slowing down learning on noisy examples.
However, in practice, we apply Fisher Penalty to all examples in the training set because we do not know which ones are corrupted. Consider $\mathcal{D} = \mathcal{D}_R \cup \mathcal{D}_C$, where $\mathcal{D}$ is the whole training set and $\mathcal{D}_C$ denotes the subset with clean (not altered) labels. Then, $\mathrm{Tr}(\mathbf{F}) = \frac{|\mathcal{D}_R|}{|\mathcal{D}|}\mathrm{Tr}(\mathbf{F}_R) + \frac{|\mathcal{D}_C|}{|\mathcal{D}|}\mathrm{Tr}(\mathbf{F}_C)$, where $\mathrm{Tr}(\mathbf{F})$ denotes the trace of the FIM evaluated on the whole dataset, and $\mathrm{Tr}(\mathbf{F}_C)$ ($\mathrm{Tr}(\mathbf{F}_R)$) denotes the trace of the FIM averaged over the clean (noisy) subset of the dataset.
Hence, if $\mathrm{Tr}(\mathbf{F}_R) \gg \mathrm{Tr}(\mathbf{F}_C)$, we can expect Fisher Penalty to disproportionately slow down training of noisy examples. This assumption is likely to hold because the clean examples tend to be learned much earlier in training than noisy ones~\cite{arpit_closer_2017}. In experiments, we indeed observe that the gradient norm of examples with noisy labels is disproportionately affected by Fisher Penalty, and also that learning on noisy examples is slower.
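The first-order Taylor argument above can be verified numerically (a self-contained NumPy sketch of our own with a linear softmax model; the step size $\eta=10^{-4}$ is chosen small so that higher-order terms are negligible):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(W, x, y):
    return -np.log(softmax(W @ x)[y])

# A single training example and the gradient of its loss w.r.t. W.
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
y = 1
p = softmax(W @ x)
d = p.copy()
d[y] -= 1.0                          # dL/dlogits
g = np.outer(d, x)                   # dL/dW

# One gradient step on this example; compare the actual loss reduction
# with the first-order prediction -eta * ||g||^2.
eta = 1e-4
delta = loss(W - eta * g, x, y) - loss(W, x, y)
predicted = -eta * (g ** 2).sum()
print(delta, predicted)
```

The reduction in the loss matches $-\eta\|g\|_2^2$ closely, so shrinking the gradient norm of an example directly slows down its learning.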
\section{Additional Experimental Details}
\subsection{Early phase \TrF correlates with final generalization}
\label{sec_early_final_details}
Here, we describe additional details for experiments in Section~\ref{sec_trf_generalization}.
In the experiments with batch size, for CIFAR-10, we use batch sizes 100, 500 and 700, and $\epsilon = 1.2$. For CIFAR-100, we use batch sizes 100, 300 and 700, and $\epsilon = 3.5$. These thresholds are crossed between 2 and 7 epochs across different hyperparameter settings. The remaining details for CIFAR-100 and CIFAR-10 are the same as described in the main text. The optimization details for these datasets are as follows.
\textbf{ImageNet}: No data augmentation was used in order to allow the training loss to converge to small values. We use a batch size of 256. Training is done using SGD with momentum set to 0.9, weight decay set to $1e-4$, and with base learning rates in $\{0.001, 0.01, 0.1\}$. The learning rate is dropped by a factor of 0.1 after 29 epochs and training is ended at around 50 epochs, by which point most runs have converged to small loss values. No batch normalization is used and the weights are initialized using Fixup \cite{zhang2019fixup}. For each hyperparameter setting, we run two experiments with different random seeds (due to the computational overhead). We compute \TrF using 2500 samples (similarly to \cite{Jastrzebski2020The}).
\textbf{CIFAR-10}: We used random flipping as data augmentation. In the experiments with varying learning rates (we used $\{0.007, 0.01, 0.05\}$), we use a batch size of 256. In the experiments with varying batch size (we used 100, 500 and 700), we use a learning rate of 0.02. Training is done using SGD with momentum set to 0.9 and weight decay set to $1e-5$. The learning rate is dropped by a factor of 0.5 at epochs 60, 120, and 170, and training is ended at 200 epochs, by which point most runs have converged to small loss values. No batch normalization is used and the weights are initialized using \cite{arpit2019initialize}. For each hyperparameter setting, we run 32 experiments with different random seeds. We compute \TrF using 5000 samples.
\textbf{CIFAR-100}: No data augmentation was used to allow the training loss to converge to small values. In the experiments with varying learning rates (we used $\{0.005, 0.001, 0.01\}$), we use a batch size of 100. In the experiments with varying batch size (we used 100, 300 and 700), we use a learning rate of 0.02. Training is done using SGD with momentum set to 0.9 and weight decay set to $1e-5$. The learning rate is dropped by a factor of 0.5 at epochs 60, 120, and 170, and training is ended at 200 epochs, by which point most runs have converged to small loss values. No batch normalization is used and the weights are initialized using \cite{arpit2019initialize}. For each hyperparameter setting, we run 32 experiments with different random seeds. We compute \TrF using 5000 samples.
\subsection{Fisher Penalty}
\label{app:fisher_penalty_experimental_details}
Here, we describe the remaining details for the experiments in Section~\ref{sec:fisher_penalty}. We first describe how we tune hyperparameters in these experiments. In the remainder of this section, we describe each setting used in detail.
\paragraph{Tuning hyperparameters} In all experiments, we refer to the optimal learning rate $\eta^*$ as the learning rate found using grid search. In most experiments, we check 5 different learning rate values uniformly spaced on a logarithmic scale, usually between $10^{-2}$ and $10^{0}$. In some experiments, we adapt the range to ensure that it includes the optimal learning rate. We tune the learning rate only once for each configuration (i.e. we do not repeat it for different random seeds).
In the first setting, for most experiments involving gradient norm regularizers, we use a 10$\times$ smaller learning rate than $\eta^*$. For TinyImageNet, we use a 30$\times$ smaller learning rate than $\eta^*$. To pick the regularization coefficient $\alpha$, we evaluate 10 different values uniformly spaced on a logarithmic scale between $10^{-1} \times v$ and $10^{1} \times v$ with $v \in \mathbb{R}_+$. We choose the best performing $\alpha$ according to validation accuracy. We pick the value of $v$ manually with the aim that the optimal $\alpha$ is included in this range. We generally found that $v=0.01$ works well for \textsc{GP}\xspace, \textsc{GP}\textsubscript{r}\xspace, and \textsc{FP}\xspace. For \textsc{GP}\textsubscript{x}\xspace we found in some experiments that it is necessary to pick larger values of $v$.
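For concreteness, the grid of candidate coefficients can be generated as follows (a trivial sketch of the tuning protocol described above):

```python
import numpy as np

# 10 candidate regularization coefficients uniformly spaced on a log scale
# between 0.1*v and 10*v, following the tuning protocol above (v is chosen
# manually so that the optimal alpha falls inside the range).
v = 0.01
alphas = np.logspace(-1, 1, num=10) * v
print(alphas)  # spans 0.001 to 0.1
```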
\paragraph{Measuring \TrF} We measure \TrF using a number of examples equal to the batch size used in training. For experiments with Batch Normalization layers, we use Batch Normalization in evaluation mode, for the practical reason that computing \TrF uses a batch size of 1, for which Batch Normalization statistics in training mode are not well defined.
\paragraph{DenseNet on the CIFAR-100 dataset} We use the DenseNet (L=40, k=12) configuration from \cite{huang_densely_2016}. We largely follow the experimental setting in \cite{huang_densely_2016}. We use the standard data augmentation (where noted) and data normalization for CIFAR-100. We hold out random 5000 examples as the validation set. We train the model using SGD with a momentum of 0.9, a batch size of 128, and a weight decay of 0.0001. Following \cite{huang_densely_2016}, we train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. To reduce variance, in testing we update Batch Normalization statistics using 100 batches from the training set.
\paragraph{Wide ResNet on the CIFAR-100 dataset} We train Wide ResNet (depth 44 and width 3, without Batch Normalization layers). We largely follow the experimental setting in \cite{he_deep_2015}. We use the standard data augmentation and data normalization for CIFAR-100. We hold out random 5000 examples as the validation set. We train the model using SGD with a momentum of 0.9, a batch size of 128, and a weight decay of 0.0010. Following \cite{he_deep_2015}, we train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We remove Batch Normalization layers. To ensure stable training we use the SkipInit initialization~\citep{de2020batch}.
\paragraph{VGG-11 on the CIFAR-100 dataset} We adapt the VGG-11 model~\citep{Simonyan15} to CIFAR-100. We do not use dropout nor Batch Normalization layers. We hold out random 5000 examples as the validation set. We use the standard data augmentation (where noted) and data normalization for CIFAR-100. We train the model using SGD with a momentum of 0.9, a batch size of 128, and a weight decay of 0.0001. We train the model for 300 epochs and decay the learning rate by a factor of 0.1 after every 40 epochs starting from epoch 80.
\paragraph{SimpleCNN on the CIFAR-10 dataset} We also run experiments on the CNN example architecture from the Keras example repository~\citep{chollet_keras_2015}\footnote{Accessible at \href{https://github.com/keras-team/keras/blob/master/examples/cifar10\_cnn.py}{https://github.com/keras-team/keras/blob/master/examples/cifar10\_cnn.py}.}, which we change slightly. Specifically, we remove dropout and reduce the size of the final fully-connected layer to 128. We train it for 300 epochs and decay the learning rate by a factor of 0.1 after the epochs 150 and 225. We train the model using SGD with a momentum of 0.9, and a batch size of 128.
\paragraph{Wide ResNet on the TinyImageNet dataset} We train Wide ResNet (depth 44 and width 3, with Batch Normalization layers) on TinyImageNet~\citep{Le2015TinyIV}. TinyImageNet consists of a subset of 100,000 examples from ImageNet that we downsized to 32$\times$32 pixels. We train the model using SGD with a momentum of 0.9, a batch size of 128, and a weight decay of 0.0001. We train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We do not use a validation set on TinyImageNet due to its larger size. To reduce variance, in testing we update Batch Normalization statistics using 100 batches from the training set.
\subsection{Fisher Penalty Reduces Memorization}
\label{app:fisher_penalty_prevents_memorization_experimental_details}
Here, we describe additional experimental details for Section~\ref{sec:fisher_penalty_prevents_memorization}. We use two configurations described in Section~\ref{app:fisher_penalty_experimental_details}: VGG-11 trained on the CIFAR-100 dataset, and Wide ResNet trained on the CIFAR-100 dataset. We tune the regularization coefficient $\alpha$ in the range $\{0.01, 0.1, 0.3, 1, 10\}$, with the exception of \textsc{GP}\textsubscript{x}\xspace for which we use the range $\{10, 30, 100, 300, 1000\}$. We tuned the mixup coefficient in the range $\{0.4, 0.8, 1.6, 3.2, 6.4\}$. We removed weight decay in these experiments. We use the validation set for early stopping, as commonly done in the literature.
\subsection{Early \TrF influences final curvature}
\label{sec_trf_final_minima_details}
\textbf{CIFAR-10}: We used random flipping as data augmentation for CIFAR-10. We use a learning rate of 0.02 for all experiments. Training is done using SGD with momentum 0.9, weight decay $1e-5$, and batch size as shown in figures. The learning rate is dropped by a factor of 0.5 at 80, 150, and 200 epochs, and training is ended at 250 epochs. No batch normalization is used and the weights are initialized using \cite{arpit2019initialize}. For each batch size, we run 32 experiments with different random seeds. We compute \TrF using 5000 samples.
\textbf{CIFAR-100}: No data augmentation is used. We use a batch size of 100 for all experiments. Training is done using SGD with momentum 0.9, weight decay $1e-5$, and with base learning rate as shown in figures. The learning rate is dropped by a factor of 0.5 at 80, 150, and 200 epochs, and training is ended at 250 epochs. No batch normalization is used and the weights are initialized using \cite{arpit2019initialize}. For each learning rate, we run 32 experiments with different random seeds. We compute \TrF using 5000 samples.
\section{Introduction}
The exact mechanism behind implicit regularization effects in training of deep neural networks (DNNs) remains an extensively debated topic despite being considered a critical component in their empirical success~\citep{neyshabur2017,zhang_understanding_2016,jiang_fantastic_2020}. For instance, it is commonly observed that using a moderately large learning rate in the early phase of training results in better generalization~\citep{lecun_efficient_2012,goodfellow_deep_2016,jiang_fantastic_2020,bjorck_understanding_2018}.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[height=0.5\textwidth]{figs/demo229tunelrdemowresnettinylrValidationAccuracy.pdf}
\hspace{-10em}\caption{Validation accuracy\,\,\,\,\,\,\,\,\,\,\,\,\,}
\end{subfigure} \hspace{-4em}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[height=0.51\textwidth]{figs/demo229tunelrdemowresnettinylrgnorms.pdf}
\caption{The trace of the Fisher Information Matrix\,\,\,\,\,}
\end{subfigure}
\caption{Catastrophic Fisher explosion phenomenon demonstrated for Wide ResNet trained using stochastic gradient descent on the TinyImageNet dataset. Training with a small learning rate leads to a sharp increase in the trace of the Fisher Information Matrix (FIM) early in training (right), which coincides with strong overfitting (left). The trace of the FIM is a measure of the local curvature of the loss surface. Training is done with either a learning rate optimized using grid search ($\eta_1=0.0316$, red), or a small learning rate ($\eta_2=0.001$, blue). }
\label{fig:casestudy}
\end{figure*}
Recent work suggests that the early phase of training of DNNs might hold the key to understanding some of these implicit regularization effects. In particular, the learning rate in the early phase of training has a dramatic effect on the local curvature of the loss function~\citep{Jastrzebski2020The,cohen2021gradient}. It has also been found that when using a small learning rate, the local curvature of the loss surface increases along the optimization trajectory until optimization is \emph{close to instability}.\footnote{That is, a small further increase in the local curvature is not possible without divergence. Interestingly, when training using gradient descent with a learning rate $\eta$, the largest eigenvalue of the Hessian of the training loss was observed to reach the critical value of $\frac{2}{\eta}$~\citep{cohen2021gradient}, at which training oscillates along the eigenvector corresponding to the largest eigenvalue of the Hessian.}
These observations lead to a natural question: does the instability, and the corresponding dramatic change in the local curvature, in the early phase of training influence generalization? We investigate this question through the lens of the Fisher Information Matrix (FIM), a matrix that can be seen as approximating the local curvature of the loss surface~\citep{martens2014,thomas2020interplay}. \citet{achille_critical_2017,jastrzebski_relation_2018,golatkar2019,lewkowycz2020large} independently suggest that effects of the early phase of training on the local curvature critically influence the final generalization, but did not directly test this proposition.
Our main contribution is to show that \emph{implicit regularization effects due to using a large learning rate can be explained by its impact on the trace of the FIM (\TrF), a quantity that reflects the local curvature, from the beginning of training}. Our results suggest that the instability in the early phase is a critical phenomenon for understanding optimization in DNNs. This is in contrast to many prior theoretical works, which generally do not connect implicit regularization effects in SGD to the large instability in the early phase of training~\citep{chaudhari2018stochastic,smith2021on}.
We demonstrate on image classification tasks that \TrF early in training correlates with the final generalization performance across settings with different learning rates or batch sizes. We then show evidence that explicitly regularizing \TrF, which we call Fisher penalty, compensates for the generalization degradation due to training with a sub-optimal (small) learning rate, and can significantly improve generalization even when training with the optimal learning rate. On the other hand, reaching a large \TrF early in training, which may occur in practice when using a relatively small learning rate or a bad initialization, coincides with poor generalization. We call this phenomenon catastrophic Fisher explosion. Figure~\ref{fig:casestudy} illustrates this effect on the TinyImageNet dataset~\citep{Le2015TinyIV}.
Our second contribution is an analysis of why implicitly or explicitly regularizing \TrF impacts generalization. Our key finding is that penalizing \TrF significantly improves generalization in training with noisy labels. We make a theoretical and empirical argument that penalizing \TrF can be seen as penalizing the gradient norm of noisy examples, which slows down their learning. We hypothesize that implicitly or explicitly regularizing \TrF amplifies the implicit bias of SGD to avoid memorization~\citep{arpit_closer_2017,Rahaman2018OnTS}. Finally, we also show that small \TrF in the early phase of training biases optimization towards a flat minimum~\citep{keskar_large-batch_2017}.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFimgnetlr.pdf}
\caption{ImageNet (w/o augmentation)}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.9\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc10auglr.pdf}
\caption{CIFAR-10 (with augmentation)}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.32\textwidth}
\includegraphics[width=0.97\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/IFtrFc100noauglr.pdf}
\caption{CIFAR-100 (w/o augmentation)}
\end{subfigure}
\caption{Association between \TrF in the initial phase of training (\TrFi) and test accuracy on the ImageNet, CIFAR-10 and CIFAR-100 datasets. Each point corresponds to multiple seeds and a specific value of the learning rate. \TrFi is recorded during the early phase of training (2-7 epochs, see the main text for details). The plots show that early \TrF is predictive of final generalization. Analogous results illustrating the influence of batch size are shown in Appendix~\ref{sec_early_final_appendix}.}
\label{fig:IF_LR_trF}
\end{figure*}
\section{Implicit and explicit regularization of the Fisher Information Matrix}
\paragraph{Fisher Information Matrix}
Consider a probabilistic classification model $p_{\bm{\theta}}(y|\bm{x})$, where $\bm{\theta}$ denotes its parameters. Let $\ell(\bm{x},y;\bm{\theta})$ be the cross-entropy loss function calculated for input $\bm{x}$ and label $y$. Let $g(\bm{x},y; \bm{\theta})=\frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x}, y; \bm{\theta})$ denote the gradient of the loss computed for an example $(\bm{x},y)$. The central object that we study is the Fisher Information Matrix \F defined as
\begin{equation}
\mathbf{F}(\bm{\theta}) = \mathbb{E}_{x \sim \mathcal{X},\hat y \sim p_{\theta}(y|\bm{x})} [{g}(\bm{x},\hat y) {g}(\bm{x},\hat y)^T ],
\end{equation}
where the expectation is often approximated using the empirical distribution $\mathcal{X}$ induced by the training set. Later, we also look into the Hessian $\mathbf{H}(\bm{\theta})=\frac{\partial^2}{\partial \bm{\theta}^2} \ell(\bm{x},y;\bm{\theta})$. We denote the trace of $\mathbf{F}$ and $\mathbf{H}$ matrices by \TrF and \TrH.
The FIM can be seen as an approximation to the Hessian~\citep{martens2014}. In particular, as $p(y|\bm{x}; \theta) \rightarrow \hat p(y | \bm{x})$, where $\hat p(y | \bm{x})$ is the empirical label distribution, the FIM converges to the Hessian. \citet{thomas2020interplay} showed on image classifications tasks that $\mathrm{Tr}(\mathbf{H})\approx \mathrm{Tr}(\mathbf{F})$ along the optimization trajectory, which we also demonstrate in Supplement~\ref{app:TrH_and_TrF_correlate}. Crucially, note that while \TrH uses label information, \TrF does not use any label information, in contrast to the ``empirical Fisher'' studied for example in \citet{kunstner2019limitations}.
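The identity $\mathrm{Tr}(\mathbf{F}) = \mathbb{E}\left[\|g(\bm{x},\hat y)\|_2^2\right]$, which we rely on throughout, can be verified numerically for a tiny linear softmax model (an illustrative sketch of our own, not the experimental code):

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Tiny linear softmax model: logits = W @ x.
C, D = 4, 6
W = rng.normal(size=(C, D))
x = rng.normal(size=D)
p = softmax(W @ x)

def grad(y):
    d = p - np.eye(C)[y]             # dL/dlogits for label y
    return np.outer(d, x).ravel()    # dL/dW flattened into a parameter vector

# FIM for this single input: expectation over labels drawn from the model,
# F = E[g g^T]; its trace equals the expected squared gradient norm.
F = sum(p[y] * np.outer(grad(y), grad(y)) for y in range(C))
trF = sum(p[y] * (grad(y) ** 2).sum() for y in range(C))
assert np.isclose(np.trace(F), trF)
```

Note that only model-sampled labels $\hat y$ enter the computation, so no label information is used.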
\paragraph{Fisher Penalty}
The early phase has a drastic effect on the trajectory in terms of the local curvature of the loss surface~\citep{achille_critical_2017,jastrzebski_relation_2018,gur-ari_gradient_2018,lewkowycz2020large,leclerc2020regimes}. In particular, \citet{lewkowycz2020large,jastrzebski_relation_2018} show that using a large learning rate in stochastic gradient descent biases training towards low curvature regions of the loss surface early in training. For example, using a large learning rate in SGD was shown to result in a rapid decay of \TrH along the optimization trajectory~\cite{jastrzebski_relation_2018}.
Our main contribution is to propose and investigate a specific mechanism by which using a large learning rate or a small batch size implicitly influences final generalization. Our first insight is to shift the focus from studying the Hessian, to studying the properties of the FIM. Concretely, we hypothesize that using a large learning rate or a small batch size improves generalization by implicitly penalizing \TrF from the very beginning of training.
In order to study the effect of implicit regularization of \TrF, we introduce a regularizer that explicitly penalizes \TrF. First, we note that \TrF can be written as
\begin{equation}
\label{eq:trf}
\mathrm{Tr}(\mathbf{F}) = \mathbb{E}_{x \sim \mathcal{X},\hat y \sim p_{\theta}(y|\bm{x})} \left[ \Vert \frac{\partial}{\partial \mathbf{\theta}}\ell(\bm{x},\hat y) \Vert_2^2 \right].
\end{equation}
Thus, to regularize \TrF, we can simply add $\frac{1}{M} \sum_{i=1}^M \left\| g(\bm{x}_i, \hat y_i) \right\|^2$ term to the loss function, which can be efficiently back-propagated through. This, however, requires a large number of samples to efficiently regularize \TrF, as we show in our experiments. Instead, we add the following term to the loss function:
\begin{equation}
\label{eq_fisher_objective}
\ell'(\bm{x}, y) = \frac{1}{B} \sum_{i=1}^B \ell(\bm{x}_i,y_i) + \alpha \left\| \frac{1}{B} \sum_{i=1}^B g(\bm{x}_i, \hat y_i) \right\|^2,
\end{equation}
where $(\bm{x}, y)$ is a mini-batch of size $B$, $\hat y_i$ is sampled from $p_{\bm{\theta}}(y|\bm{x}_i)$, and $\alpha$ is a hyperparameter. Importantly, the penalty term does not involve the target labels. Finally, we compute the gradient of the second term only every 10 optimization steps, and in a given iteration use the most recently computed gradient. We refer to this regularizer as Fisher penalty (\textsc{FP}\xspace).
In the experiments, we show that this formulation efficiently penalizes \TrF. We attribute this largely to the fact that $ \left\| \frac{1}{B} \sum_{i=1}^B g(\bm{x}_i, \hat y_i) \right\|^2$ and \TrF are strongly correlated during training. We discuss this observation, and ablate the approximations, in more detail in Supplement~\ref{app:approx_in_FP}.
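A minimal sketch of the penalized objective in Equation~(\ref{eq_fisher_objective}) is given below (our own NumPy illustration for a linear softmax model; in a deep-learning framework the penalty term is simply added to the loss and back-propagated through, with the 10-step gradient caching applied on top):

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

# Mini-batch version of the penalized objective for a linear softmax model.
alpha, B, C, D = 0.1, 8, 5, 6
W = rng.normal(size=(C, D))
X = rng.normal(size=(B, D))
Y = rng.integers(0, C, size=B)

P = softmax_rows(X @ W.T)
ce = -np.log(P[np.arange(B), Y]).mean()     # standard cross-entropy term

# Sample hat-y from the model's own predictive distribution (no true labels).
Y_hat = np.array([rng.choice(C, p=P[i]) for i in range(B)])
Err = P.copy()
Err[np.arange(B), Y_hat] -= 1.0             # dL/dlogits for sampled labels
mean_grad = (Err.T @ X) / B                 # (1/B) sum_i g(x_i, hat-y_i)
penalty = (mean_grad ** 2).sum()            # squared norm of the mean gradient

loss = ce + alpha * penalty
```

The penalty uses the squared norm of the *mean* sampled-label gradient, which is the mini-batch quantity that Equation~(\ref{eq_fisher_objective}) adds to the loss.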
\begin{table*}
\centering
\caption{Fisher penalty (\textsc{FP}\xspace) effectively models implicit regularization that arises in SGD due to using large learning rates. Using a 10-30x smaller learning rate (Baseline) results in up to 9\% degradation in test accuracy on popular image classification benchmarks (cf. the \textit{optimal} $\eta^*$). Adding \textsc{FP}\xspace, which explicitly regularizes \TrF, substantially improves generalization and closes the gap to $\eta^*$. Green cells correspond to runs that finished with, at most, 1\% lower test accuracy than when training with $\eta^*$. }
\label{tab:fisher_penalty_setting1}
\begin{tabular}{cll|ll|ll}
\toprule
Setting & $\eta^*$ & Baseline & \textsc{GP}\textsubscript{x}\xspace & \textsc{GP}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
Wide ResNet / TinyImageNet (aug.) & 54.67\% & 52.57\% & 52.79\% & \cellcolor{green!25}56.44\% & \cellcolor{green!25}\textbf{56.73\%} & \cellcolor{green!25}55.41\% \\
\midrule
DenseNet / CIFAR-100 (w/o aug.) & 66.09\% & 58.51\% & 62.12\% & 64.42\% & \cellcolor{green!25} \textbf{66.41\%} & \cellcolor{green!25}66.39\% \\
VGG11 / CIFAR-100 (w/o aug.) & 45.86\% & 36.86\% & \cellcolor{green!25}45.26\% & \cellcolor{green!25}47.35\% & \cellcolor{green!25}\textbf{49.87\%} & \cellcolor{green!25}48.26\% \\
WResNet / CIFAR-100 (w/o aug.) & 53.96\% & 46.38\% & \cellcolor{green!25}\textbf{58.68\%} & \cellcolor{green!25}57.68\% & \cellcolor{green!25}57.05\% & \cellcolor{green!25}58.15\% \\
\midrule
SimpleCNN / CIFAR-10 (w/o aug.) & 76.94\% & 71.32\% & 75.68\% & 75.73\% & \cellcolor{green!25}79.66\% & \cellcolor{green!25}\textbf{79.76\%} \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{Catastrophic Fisher Explosion}
To illustrate the concepts mentioned in this section, we train a Wide ResNet model (depth 44, width 3)~\citep{zagoruyko2016} on the TinyImageNet dataset with SGD and two different learning rates. We illustrate in Figure~\ref{fig:casestudy} that the small learning rate leads to dramatic overfitting, which coincides with a sharp increase in \TrF in the early phase of training. We also show in Supplement~\ref{app:closer_look} that these effects cannot be explained by the difference in learning speed between runs with smaller and larger learning rates. We call this phenomenon catastrophic Fisher explosion.
\paragraph{Why Fisher Information Matrix} The benefit of the FIM is that it can be efficiently regularized during training. In contrast, \citet{wen2018smoothout,foret2021sharpnessaware} had to rely on certain approximations to efficiently regularize curvature. Equally importantly, the FIM is related to the gradient norm, and as such its effect on learning is more interpretable. We will leverage this connection to argue that \textsc{FP}\xspace slows down learning on noisy examples in the dataset.
\paragraph{Concurrent work on a different mechanism}
\citet{barrett2020implicit,smith2021on} concurrently argue that the implicit regularization effects in SGD can be expressed as a form of a gradient norm penalty. The key difference is that we link the mechanism of implicit regularization to the fact that optimization is \emph{close to instability} in the early phase. Supporting this view, we observe that \textsc{FP}\xspace empirically performs better than the gradient penalty, and that \textsc{FP}\xspace works best when applied from the start of training.
\section{Early-phase \TrF and final generalization}
\label{sec_trf_generalization}
Using a large learning rate ($\eta$) or small batch size ($S$) in SGD steers optimization to a lower curvature region of the loss surface. However, it remains a hotly debated topic whether such choices explain strong regularization effects~\citep{dinh_sharp_2017,yoshida2017spectral,He2019,tsuzuku2019normalized}. We begin by studying the connection between \TrF and generalization in experiments across which we vary $\eta$ or $S$ in SGD.
\paragraph{Experimental setup}
We run experiments in two settings: (1) ResNet-18 with Fixup~\cite{he_deep_2015,zhang2019fixup} trained on the ImageNet dataset~\citep{deng_imagenet_2009}, (2) ResNet-26 initialized as in \cite{arpit2019initialize} and trained on the CIFAR-10 and CIFAR-100 datasets~\citep{krizhevsky_learning_2009}. We train each architecture using SGD, with various values of $\eta$, $S$, and random seed.
We define \TrFi as \TrF during the initial phase of training. We determine the extent of this early phase by measuring when the training loss crosses a task-specific threshold $\epsilon$ that roughly corresponds to the moment when \TrF achieves its maximum value. For ImageNet, we use learning rates 0.001, 0.01, 0.1, and $\epsilon = 3.5$. For CIFAR-10, we use learning rates 0.007, 0.01, 0.05, and $\epsilon = 1.2$. For CIFAR-100, we use learning rates 0.001, 0.005, 0.01, and $\epsilon = 3.5$. In all cases, the training loss reaches $\epsilon$ between 2 and 7 epochs across the different hyper-parameter settings. We repeat similar experiments for different batch sizes in Supplement~\ref{sec_early_final_appendix}. The remaining training details can be found in Supplement~\ref{sec_early_final_details}.
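The early-phase cutoff described above amounts to finding the first epoch at which the training loss drops below $\epsilon$; a minimal sketch (function name and loss values are illustrative, not from the paper's code):

```python
def early_phase_end(loss_per_epoch, eps):
    """Return the first epoch at which training loss falls below eps;
    TrF measured up to this epoch defines the early-phase value TrF_i."""
    for epoch, loss in enumerate(loss_per_epoch):
        if loss < eps:
            return epoch
    return None  # loss never crossed the threshold

# A CIFAR-100-style run with eps = 3.5 (loss values illustrative).
losses = [4.6, 4.1, 3.8, 3.4, 2.9]
print(early_phase_end(losses, 3.5))  # -> 3
```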
\paragraph{Results}
Figure~\ref{fig:IF_LR_trF} shows the association between \TrFi and test accuracy across runs with different learning rates. We show results for CIFAR-10 and CIFAR-100 when varying the batch size in Figure \ref{fig:IF_LR_trF_bs} in the Supplement. We find that \TrFi correlates well with the final generalization in our setting, which provides initial evidence for the importance of \TrF. It also serves as a stepping stone towards developing a more granular understanding of the role of implicit regularization of \TrF in the following sections.
\section{Fisher Penalty}
\label{sec:fisher_penalty}
To understand the significance of the identified correlation between \TrFi and generalization, we now run experiments in which we directly penalize \TrF. We focus our attention on the identified effect that using a high learning rate has on \TrF, especially early in training.
\begin{figure}
\centering
\begin{subfigure}[t]{0.236\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/WResNetC100woaugValidationAccuracyfisher.pdf}
\caption{Wide ResNet on CIFAR-100}
\end{subfigure}
\begin{subfigure}[t]{0.236\textwidth}
\includegraphics[width=1.0\columnwidth]{figs/VGG11C100woaugValidationAccuracyfisher.pdf}
\caption{VGG-11 on CIFAR-100}
\end{subfigure}
\begin{subfigure}[t]{0.236\textwidth}
\includegraphics[width=1.0\columnwidth]{figs/SCNNC10woaugValidationAccuracyfisher.pdf}
\caption{Simple CNN on CIFAR-10}
\end{subfigure}
\begin{subfigure}[t]{0.236\textwidth}
\includegraphics[width=1.0\columnwidth]{figs/DenseNetC100woaugValidationAccuracyfisher.pdf}
\caption{DenseNet on CIFAR-100}
\end{subfigure}
\caption{Fisher Penalty has to be applied early in training to close the generalization gap to the optimal learning rate (cf. the red and blue horizontal lines). Each subplot summarizes an experiment in which we apply Fisher Penalty starting from a certain epoch (x axis) and measure the final test accuracy (y axis).}
\label{fig:fisher_penalty_setting1_time}
\end{figure}
\paragraph{Experimental setting}
We use a similar setting as in the previous section, but we include larger models. We run experiments using Wide ResNet~\citep{zagoruyko2016} (depth 44 and width 3, with or without BN layers), SimpleCNN (without BN layers), DenseNet (L=40, K=12)~\citep{huang_densely_2016} and VGG-11~\citep{Simonyan15}. We train these models on either the CIFAR-10 or the CIFAR-100 datasets. Due to larger computational cost, we replace ImageNet with the TinyImageNet dataset~\citep{Le2015TinyIV} (with images scaled to $32\times32$ resolution) in these experiments.
To investigate if the correlation between \TrFi and the final generalization holds more generally, we apply Fisher penalty in two settings. First, we use a learning rate 10-30x smaller than the optimal one, which incurs up to a 9\% degradation in test accuracy and results in a large value of \TrFi. We also remove data augmentation from the CIFAR-10 and CIFAR-100 datasets to ensure that training with a small learning rate does not result in underfitting. In the second setting, we add Fisher penalty to training with a learning rate optimized using grid search ($\eta^*$) and train with data augmentation.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figs/demoWResNetC100woaug.pdf}
\vspace{-0.3cm}
\includegraphics[width=0.6\columnwidth]{figs/demoWResNetC100woaugfigure.pdf}
\caption{Wide ResNet on CIFAR-100 (w/o aug.)}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=1.\columnwidth]{figs/demoVGG11C100woaug.pdf}
\vspace{-0.3cm}
\includegraphics[width=0.6\columnwidth]{figs/demoVGG11C100woaugfigure.pdf}
\caption{VGG-11 on CIFAR-100 (w/o aug.)}
\end{subfigure}
\caption{Using Fisher Penalty (\textsc{FP}\xspace) drastically reduces the tendency to reach highly curved regions of the loss surface that arises when training with a sub-optimal (small) learning rate (compare the peak values of \TrF). At the same time, \textsc{FP}\xspace also significantly improves generalization. Penalizing the input gradient norm (\textsc{GP}\textsubscript{x}\xspace) also impacts \TrF but achieves worse generalization. Each subfigure shows validation accuracy (left) and \TrF (right). Curves were smoothed for clarity.}
\label{fig:fisher_penalty_setting1_visualization}
\end{figure*}
Fisher penalty penalizes the gradient norm computed using labels sampled from $p_{\bm{\theta}}(y|\bm{x})$. We hypothesize that a similar, but weaker, effect can be introduced by other gradient norm regularizers. We compare \textsc{FP}\xspace to: (1) penalizing the input gradient norm $\left\Vert \bm{g}_x \right\Vert^2 = \| \frac{\partial}{\partial \bm{x}} \ell(\bm{x},y) \|^2$, which we denote by \textsc{GP}\textsubscript{x}\xspace~\citep{varga2018gradient,Rifai2011,Drucker1992}; (2) penalizing the vanilla mini-batch gradient~\cite{Gulrajani2017}, which we denote by \textsc{GP}\xspace; and (3) penalizing the mini-batch gradient computed with random labels $\left\Vert \bm{g_r} \right\Vert^2 = \| \frac{\partial}{\partial \bm{\theta}} \ell(\bm{x},\hat y) \|^2$, where $\hat y$ is sampled from a uniform distribution over the label set (\textsc{GP}\textsubscript{r}\xspace). We are not aware of any prior work using \textsc{GP}\xspace or \textsc{GP}\textsubscript{r}\xspace in supervised training, with the exception of \citet{Alizadeh2020Gradient}, where the authors penalized the $\ell_1$ norm of gradients to compress the network towards the end of training, and the concurrent work of \citet{barrett2020implicit}. We note that regularizing \textsc{GP}\textsubscript{x}\xspace is related to regularizing the input-output Jacobian of the network~\citep{hoffman2019,chan2020}.
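To make the differences between these regularizers concrete, the sketch below computes all four squared gradient norms for a single example under a toy softmax-linear model (NumPy, illustrative only): the variants differ solely in which variable the gradient is taken with respect to and where the label comes from — sampled from $p_{\bm{\theta}}(y|\bm{x})$ (\textsc{FP}\xspace), the true label (\textsc{GP}\xspace and \textsc{GP}\textsubscript{x}\xspace), or a uniformly random label (\textsc{GP}\textsubscript{r}\xspace).

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def penalties(W, x, y_true):
    """Squared gradient norms for one example under a softmax-linear model.

    FP   : parameter gradient, label sampled from p_W(y|x)
    GP   : parameter gradient, true label
    GP_r : parameter gradient, label sampled uniformly
    GP_x : input gradient, true label
    """
    p = softmax(W @ x)
    C = len(p)

    def param_grad(y):                       # d/dW of cross-entropy
        return np.outer(p - np.eye(C)[y], x)

    y_fp = rng.choice(C, p=p)                # label from model's own distribution
    y_r = rng.integers(C)                    # uniformly random label
    gx = W.T @ (p - np.eye(C)[y_true])       # d/dx of cross-entropy
    return {
        "FP": (param_grad(y_fp) ** 2).sum(),
        "GP": (param_grad(y_true) ** 2).sum(),
        "GP_r": (param_grad(y_r) ** 2).sum(),
        "GP_x": (gx ** 2).sum(),
    }

W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
print(penalties(W, x, y_true=0))
```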
We tune the hyperparameters on the validation set. Specifically, for the coefficient $\alpha$ we test 10 values spaced uniformly on a logarithmic scale between $10^{-1} \times v$ and $10^{1} \times v$, for a base value $v \in \mathbb{R}_+$. For TinyImageNet, we evaluate 5 values spaced equally on a logarithmic scale.
We include the remaining experimental details in the Supplement~\ref{app:fisher_penalty_experimental_details}.
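The search grid for the penalty coefficient can be generated as follows (a sketch; `v` denotes the base value mentioned above):

```python
import numpy as np

def alpha_grid(v, n=10):
    """n candidate coefficients spaced uniformly on a log scale
    over [v / 10, 10 * v], as in the validation-set search."""
    return np.logspace(np.log10(v) - 1.0, np.log10(v) + 1.0, n)

print(alpha_grid(0.1, n=10))  # 10 values from 0.01 to 1.0
```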
\paragraph{Fisher Penalty improves generalization}
Table~\ref{tab:fisher_penalty_setting1} summarizes the results of the main experiment. First, we observe that a suboptimal learning rate (10-30x lower than the optimal) leads to dramatic overfitting. We observe a degradation of up to 9\% in test accuracy, while achieving approximately 100\% training accuracy (see Table~\ref{app:tab:fisher_penalty_setting1_acc} in the Supplement).
Fisher penalty closes the gap in test accuracy between the small and optimal learning rate, and even achieves better performance than the optimal learning rate. A similar performance was observed when minimizing $\left\|g_r\right\|^2$. We will come back to this observation in the next section.
\textsc{GP}\xspace and \textsc{GP}\textsubscript{x}\xspace reduce the early value of \TrF (see Table~\ref{app:tab:fisher_penalty_setting1_TrF} in the Supplement). However, they generally perform worse than \textsc{FP}\xspace or \textsc{GP}\textsubscript{r}\xspace and do not fully close the gap between the small and optimal learning rates. We hypothesize that they improve generalization by a similar but less direct mechanism than \textsc{FP}\xspace and \textsc{GP}\textsubscript{r}\xspace, which we make more precise in Section~\ref{app:relationship_FP_and_gp} in the Supplement. We note that \textsc{GP}\xspace was proposed in \citet{barrett2020implicit,smith2021on} as the term that SGD implicitly regularizes (see Related Work for more details).
We also investigate whether similar conclusions hold in large batch size training. In experiments with CIFAR-10 and SimpleCNN, we find that we can close the generalization gap due to training with a large batch size by using Fisher Penalty. We provide further details in Supplement~\ref{app:sec:fisher_explosion_holds_in_lb}.
In the second experimental setting, we apply \textsc{FP}\xspace to a network trained with the optimal learning rate $\eta^*$. According to Table~\ref{tab:fisher_penalty_setting2} (see Table~\ref{app:tab:fisher_penalty_setting2_acc} for training accuracies), Fisher Penalty improves generalization in 4 out of 5 settings. The gap between the baseline and \textsc{FP}\xspace is relatively small in 3 out of 5 settings (below 2\%), which is natural given that training is already implicitly regularized by using the optimal $\eta$.
\begin{table}
\centering
\small
\caption{Fisher penalty (\textsc{FP}\xspace) improves generalization in 4 out of 5 settings when applied with the optimal learning rate $\eta^*$ and trained using standard data augmentation. In 3 out of 5 settings the difference between \textsc{FP}\xspace and $\eta^*$ is relatively small (below 2\%), which is expected given that \textsc{FP}\xspace is aimed at reproducing the regularization effect of large $\eta$, and we compare to training with the optimal $\eta^*$. }
\label{tab:fisher_penalty_setting2}
\begin{tabular}{lll}
\toprule
Setting & $\eta^*$ & \textsc{FP}\xspace \\
\midrule
WResNet / TinyImageNet & 54.70$\pm$0.0\% & \textbf{60.00$\pm$0.1\%} \\
\midrule
DenseNet / C100 & \textbf{74.41$\pm$0.5\%} & 74.19$\pm$0.5\% \\
VGG11 / C100 & 59.82$\pm$1.2\% & \textbf{65.08$\pm$0.5\%} \\
WResNet / C100 & 69.48$\pm$0.3\% & \textbf{71.53$\pm$1.2\%} \\
\midrule
SimpleCNN / C10 & 87.16$\pm$0.2\% & \textbf{87.52$\pm$0.5\%} \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Geometry and generalization in the early phase of training}
\label{sec:fisher_penalty_early}
Here, we investigate if penalizing \TrF early in training matters for the final generalization. A positive answer would further strengthen the link between the early phase of training and implicit regularization effects in SGD. We run experiments on CIFAR-10 and CIFAR-100.
First, we observe that all gradient-norm regularizers reduce the early value of \TrF closer to \TrF achieved when trained with the optimal learning rate $\eta^*$. We show this effect with Wide ResNet and VGG-11 on CIFAR-100 in Figure~\ref{fig:fisher_penalty_setting1_visualization}, and for other experimental settings in the Supplement. We also tabulate the maximum achieved values of \TrF over the optimization trajectory in Supplement~\ref{app:fisher_penalty_additional_results}.
To test the importance of explicitly penalizing \TrF early in training, we start applying it after a certain number of epochs $E \in \{1, 2, 4, 8, 16, 32, 64, 128\}$. We use the best hyperparameter set from the previous experiments. Figure~\ref{fig:fisher_penalty_setting1_time} summarizes the results. For both datasets, we observe a consistent pattern. When \textsc{FP}\xspace is applied starting from a later epoch, final generalization is significantly worse, and the generalization gap arising from a suboptimal learning rate is not closed. Interestingly, there seems to be a benefit in applying \textsc{FP}\xspace after some warm-up period, which might be related to the widely used trick to gradually increase the learning rate in the early phase~\citep{gotmare2018a}.
\subsection{Fisher Penalty Reduces Memorization}
\label{sec:fisher_penalty_prevents_memorization}
It is not self-evident why regularizing \TrF should influence generalization. In this section, we show that explicit penalization of \TrF improves learning on datasets with noisy labels. To study this, we replace the labels of a subset of examples in the CIFAR-100 dataset (25\% or 50\% of the training set) with labels sampled uniformly. We refer to these examples as \emph{noisy examples}. While label noise in real datasets is not uniform, methods that perform well under uniform label noise are generally more robust to label noise in real datasets~\citep{jiang2020}. We also know that datasets such as CIFAR-100 contain many labeling errors~\citep{song2020robust}. Hence, examining whether penalizing \TrF reduces memorization of synthetic label noise provides insight into why it improves generalization in our prior experiments.
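The label-corruption protocol can be sketched as follows (NumPy; function and variable names are ours, not from the paper's code):

```python
import numpy as np

def corrupt_labels(labels, noise_frac, num_classes, seed=0):
    """Replace a noise_frac fraction of labels with labels sampled
    uniformly over the label set; returns (noisy_labels, noisy_mask)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n = len(labels)
    noisy_idx = rng.choice(n, size=int(noise_frac * n), replace=False)
    labels[noisy_idx] = rng.integers(num_classes, size=len(noisy_idx))
    mask = np.zeros(n, dtype=bool)
    mask[noisy_idx] = True
    return labels, mask

y = np.arange(1000) % 100  # stand-in for CIFAR-100 labels
y_noisy, mask = corrupt_labels(y, 0.25, num_classes=100)
```

Note that a uniformly sampled label may coincide with the original one, consistent with sampling labels uniformly over the full label set.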
We argue \textsc{FP}\xspace should reduce memorization. Under certain assumptions, \TrF is equivalent to the norm of the noisy examples' gradient. As such, we would expect \textsc{FP}\xspace to decrease the norm of the gradient of the noisy examples relative to the gradient of the clean examples. Assuming that the gradient norm of a given group of examples is related to its learning speed~\citep{Chatterjee2020Coherent,fort_stiffness_2019}, \textsc{FP}\xspace should promote learning clean examples first, before learning noisy examples. We make this argument more precise in the Supplement~\ref{app:FP_and_memorization}.
To study whether the above happens in practice, we compare \textsc{FP}\xspace to \textsc{GP}\textsubscript{x}\xspace, \textsc{GP}\textsubscript{r}\xspace, and mixup~\citep{zhang2018mixup}. While mixup is not the state-of-the-art approach for learning with noisy labels, it is competitive among approaches that do not require additional data nor multiple stages of training. In particular, it is a component in several state-of-the-art methods~\citep{Li2020DivideMix,song2020robust}. For gradient norm based regularizers, we evaluate 6 different hyperparameter values spaced uniformly on a logarithmic scale, and for mixup we evaluate $\beta \in \{0.2, 0.4, 0.8, 1.6, 3.2, 6.4\}$. We experiment with the Wide ResNet and VGG-11 models. We describe remaining experimental details in Supplement~\ref{app:fisher_penalty_prevents_memorization_experimental_details}.
\begin{table*}
\centering
\caption{Fisher Penalty (\textsc{FP}\xspace) and \textsc{GP}\textsubscript{r}\xspace both reduce memorization competitively with mixup. We measure test accuracy at the best validation point when training with either 25\% or 50\% of examples carrying noisy labels in the CIFAR-100 dataset.}
\label{tab:fisher_penalty_noisy_data}
\begin{tabular}{lllll|ll}
\toprule
Label Noise & Setting & Baseline & Mixup & \textsc{GP}\textsubscript{x}\xspace & \textsc{FP}\xspace & \textsc{GP}\textsubscript{r}\xspace \\
\midrule
25\% & VGG-11 / CIFAR-100 & 41.74\% & 52.31\% & 45.94\% & \textbf{60.18\%} & 58.46\% \\
& ResNet-52 / CIFAR-100 & 53.30\% & \textbf{61.61\%} & 52.70\% & 58.31\% & 57.60\% \\
\midrule
50\% & VGG-11 / CIFAR-100 & 30.05\% & 39.15\% & 34.26\% & \textbf{51.33\%} & 50.33\% \\
& ResNet-52 / CIFAR-100 & 43.35\% & \textbf{51.71\%} & 42.99\% & 47.99\% & 50.08\% \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{Results}
To test whether \textsc{FP}\xspace reduces the speed with which the noisy examples are learned, we track the training and validation accuracy, and the gradient norm. We compute these metrics separately on the noisy and clean examples. Figure~\ref{fig:fisher_penalty_noisy_data_visualization} summarizes the results for VGG-11, and we show results for ResNet-32 in Figure~\ref{app:fig:fisher_penalty_noisy_data_visualization} in the Supplement.
We observe that \textsc{FP}\xspace limits the ability of the model to memorize data more strongly than it limits its ability to learn from clean data. Figure~\ref{fig:fisher_penalty_noisy_data_visualization} confirms that applying \textsc{FP}\xspace results in the training accuracy on noisy examples being lower for the same accuracy on clean examples, compared to the baseline.
We can further confirm our interpretation of the effect \TrF has on training by studying the gradient norm. As visible in the top panel of Figure~\ref{fig:fisher_penalty_noisy_data_visualization}, the gradient norm evaluated on noisy examples is larger than on clean examples, and the ratio is closer to 1 when \textsc{FP}\xspace is applied with a larger coefficient. Interestingly, we also observe that the angle between the noisy and clean examples' gradients is negative early in training, which we plot in Figure~\ref{app:fig:fisher_penalty_angle} in the Supplement.
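The ratio discussed above can be computed as below (a sketch; in practice the per-example squared gradient norms would come from the training loop, and the numbers here are illustrative):

```python
import numpy as np

def grad_norm_ratio(per_example_sq_norms, noisy_mask):
    """Ratio of mean squared gradient norm on noisy vs clean examples;
    values near 1 indicate the two groups are learned at similar speed."""
    g = np.asarray(per_example_sq_norms)
    m = np.asarray(noisy_mask, dtype=bool)
    return g[m].mean() / g[~m].mean()

# Illustrative numbers: noisy examples carry larger gradients at baseline.
norms = np.array([1.0, 1.1, 0.9, 4.0, 3.8])
mask = np.array([False, False, False, True, True])
print(grad_norm_ratio(norms, mask))  # noisy gradients are ~4x larger here
```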
Finally, we summarize test accuracies (at the best validation epoch) in Table~\ref{tab:fisher_penalty_noisy_data}. Penalizing \TrF reduces memorization competitively with mixup, improving test accuracy by up to 21.28\%. Furthermore, \textsc{FP}\xspace strongly outperforms \textsc{GP}\textsubscript{x}\xspace, consistent with the trends in the prior sections. We also again observe \textsc{FP}\xspace to perform comparably to \textsc{GP}\textsubscript{r}\xspace. Under the assumption that the model predictive distribution is close to random, \textsc{FP}\xspace is equivalent to penalizing \textsc{GP}\textsubscript{r}\xspace. Assuming this holds in the early phase of training, we interpret this as further corroboration of the importance of applying \textsc{FP}\xspace in the early phase (cf. Section~\ref{sec:fisher_penalty_early}). In the Supplement, we also report training accuracy separately on the clean and noisy examples for the experiment with 25\% noisy examples.
Taken together, the results suggest that implicit or explicit regularization of \TrF improves generalization at least in part by strengthening the bias of SGD to learn clean examples before learning noisy examples~\citep{arpit_closer_2017}. We also note that related conclusions about the effect of using large learning rates were reached by~\citet{jastrzebski_three_2017,li2019}.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/gradnormcomparison2209mem25tablevgg2gnpsample.pdf}
\caption{Gradient norm on clean examples (left), noisy examples (middle), and their ratio (right); evaluated on the training set.}
\end{subfigure}
\begin{subfigure}[t]{0.9\textwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figs/demoacc2209mem25tablevgg2gnpsample.pdf}
\caption{Training accuracy on clean and noisy examples (solid/dashed lined, left), validation accuracy (middle), and a scatter plot of training accuracy on clean vs noisy examples (right). }
\end{subfigure}
\caption{Fisher penalty reduces memorization by slowing down training on data with noisy labels more strongly than it slows training on clean data, as indicated by the relative differences in gradient norm (top) and training accuracy (bottom) between the two groups of examples. Experiment run with VGG-11 on the CIFAR-100 dataset. Blue to red color represents increasing regularization coefficient (from $10^{-2}$ to $10^1$). }
\label{fig:fisher_penalty_noisy_data_visualization}
\end{figure*}
\section{Early \TrF influences final curvature}
\label{sec_trf_final_minima}
To provide further insight into why it is important to regularize \TrF during the early phase of training, we establish a connection between the early phase of training and the wide minima hypothesis~\citep{hochreiter1997flat,keskar_large-batch_2017} which states that \emph{flat} minima \textit{typically} correspond to better generalization. Here, we use \TrH as a measure of flatness.
\paragraph{Experimental setting}
We investigate how likely it is for an optimization trajectory to end up in a wide minimum in two scenarios: 1) when optimization exhibits small \TrF early on, and 2) when optimization exhibits large \TrF early on. We train two ResNet-26 models for 20 epochs using high and low regularization configurations. At epoch 20 we record \TrF for each model. We then use these two models as initialization for 8 separate models each, and continue training using the low regularization configuration with different random seeds. The motivation behind this experiment is to investigate if the degree of regularization in the early phase biases the model towards minima with certain flatness (\TrH) even though no further high regularization configurations are used during the rest of the training. For all these runs, we record the best test accuracy along the optimization trajectory along with \TrH at the point corresponding to the best test accuracy. We describe the remaining experimental details in Supplement \ref{sec_trf_final_minima_details}.
\paragraph{Results}
We present the results in Figure \ref{fig:histogram_c100} for the CIFAR-100 dataset, and for CIFAR-10 in Supplement \ref{sec_histogram_appendix}. A training run with lower \TrF during the early phase is more likely to end up in a wider minimum than one that reaches large \TrF during the early phase. This happens even though the late phase of both sets of runs uses the low regularization configuration; the latter runs always end up in sharper minima. In Supplement \ref{sec_trf_final_minima_details} we also show the evolution of \TrH throughout training, which suggests that this behavior can be attributed to curvature stabilization happening early in training.
\begin{figure*}
\centering
\begin{subfigure}[t]{1\textwidth}
\includegraphics[width=1.\columnwidth ,trim=0.1in 0.in 0.in 0.in,clip]{figs/histogramc100.pdf}
\end{subfigure}
\caption{Optimization trajectories passing through regions with low \TrF (\TrFi) during the early phase of training reach wider minima. Two ResNet-56 models are trained with two different levels of regularization for 20 epochs on CIFAR-100. Training is then continued from each model using the low regularization configuration with different random seeds. Left: \TrF at the end of the 20 epochs (\TrFi). Middle: a histogram of \TrH at the best test accuracy epoch along the trajectory (\TrHf). Right: a histogram of test accuracy.}
\label{fig:histogram_c100}
\end{figure*}
\section{Related Work}
\label{sec_related_work}
Implicit regularization effects are critical to the empirical success of DNNs~\citep{neyshabur2017,zhang_understanding_2016}. Much of it is attributed to the choice of hyperparameters in SGD \citep{keskar_large-batch_2017,smith2017understanding,li2019}, low complexity bias induced by gradient descent~\citep{xu2018understanding,jacot2018,arora2019,hu2020surprising}, the cross-entropy loss function \citep{poggio2017theory,soudry2018implicit}, or the importance of the early phase of training~\citep{achille_critical_2017,Jastrzebski2020The,fort_deep_learning,Frankle2020The,golatkar2019,lewkowycz2020large}. However, developing a mechanistic understanding of how SGD implicitly regularizes DNNs remains a largely unsolved problem.
Many prior works have proposed explicit regularizers aimed at finding low curvature solutions~\citep{hochreiter1997flat}. \citet{chaudhari2019entropy} proposed a Langevin dynamics based algorithm. \citet{wen2018smoothout,izmailov2018averaging,foret2021sharpnessaware} propose finding wide minima through approximations involving averaging gradients or parameters in the neighborhood of the current parameter state. In contrast, our focus is on elucidating a mechanistic link between the implicit regularization effects in SGD and the early phase of training. To corroborate our hypothesis, we develop Fisher Penalty, an efficient and novel explicit regularizer that aims to reproduce the regularization effect of training with a large learning rate.
Our work contributes to a better understanding of the role of the FIM in training and generalization of deep neural networks. The FIM was also used to define complexity measures such as the Fisher-Rao norm~\citep{karakida2019,liang2019}, and can be seen as approximating the local curvature of the loss surface~\citep{martens2014,thomas2020interplay}. Most notably, the FIM defines the distance metric used in natural gradient descent~\citep{amari1998}.
Penalizing \TrF is related to penalizing the input gradient norm or the input-output Jacobian of the network, which were shown to be effective regularizers for deep neural networks~\citep{Drucker1992,varga2018gradient,hoffman2019,chan2020, moosavi2019robustness}. Most closely related is \citet{moosavi2019robustness}, who regularize the curvature in the input space using a stochastic approximation scheme. Our results suggest that \textsc{FP}\xspace is a more effective regularizer than the other tested variants of gradient norm penalties. However, we did not run an extensive comparison; in particular, we did not compare directly to \citet{moosavi2019robustness}.
\citet{Chatterjee2020Coherent,fort_stiffness_2019} show that SGD avoids memorization by extracting commonalities between examples due to following gradient descent directions shared between examples. Other works have also connected the implicit regularization effects of using large learning rates with preventing memorization~\citep{arpit_closer_2017,li2019,jastrzebski_three_2017}. \citet{liu2020early} was first to note the difference in gradient norms between noisy and clean examples that emerges in the early phase of training. Our work is complementary to these findings. We make a direct connection between memorization, the instability in the early phase of training, and implicit regularization effects arising from using large learning rates in SGD.
Concurrent works have also proposed that implicit regularization effects in SGD can be understood as a form of gradient penalty. \citet{barrett2020implicit,smith2021on} show that SGD implicitly penalizes the mini-batch gradient norm, with the strength controlled by the learning rate, and study \textsc{GP}\xspace as an explicit regularizer. Analogously, we argue that SGD implicitly regularizes \TrF, a closely related quantity, which can be expressed as the squared gradient norm under labels sampled from $p_{\bm{\theta}}(y|\bm{x})$. We found that penalizing \textsc{GP}\xspace did not always close the generalization gap due to using a small learning rate. Overall, our results suggest that the fact that optimization is \emph{close to instability}~\citep{Jastrzebski2020The} in the early phase is critical for understanding implicit regularization effects in SGD.
\section{Conclusion}
The dramatic instability and changes in the curvature that happen in the early phase of training motivated us to probe its importance for generalization~\citep{Jastrzebski2020The,cohen2021gradient}. We investigated if these effects might explain some of the implicit regularization effects attributed to SGD such as the poorly understood generalization benefit of using large learning rates.
We showed evidence that using a large learning rate in SGD influences generalization by implicitly penalizing the trace of the Fisher Information Matrix (\TrF), a measure of the local curvature, from the beginning of training. We argued that (1) the value of early \TrF correlates with final generalization, and (2) explicitly regularizing \TrF can substantially improve generalization, and showed similar results for training with a small batch size. In the absence of implicit or explicit regularization, \TrF can attain large values early in training, which we referred to as catastrophic Fisher explosion.
To better understand the mechanism by which penalizing \TrF improves generalization, we investigated training on a dataset with incorrectly labeled examples. Our key finding is that penalizing \TrF significantly reduces memorization by slowing down learning examples with incorrect labels. We hypothesize that penalizing the local curvature (by using large learning rates or penalizing \TrF) early in training improves generalization at least in part by strengthening the implicit bias of SGD to avoid learning noisy labels~\citep{arpit_closer_2017,li2019,liu2020early}.
Developing theory that is fully consistent with our findings is an interesting topic for the future. Notably, existing theoretical works generally do not connect implicit regularization effects in SGD to the fact that optimization is \emph{close to instability} in the early phase of training~\citep{li_stochastic_2017,chaudhari2018stochastic,smith2021on}.
Another exciting topic for the future is connecting these findings to shortcut learning~\citep{Geirhos_2020}. The tendency of SGD to learn the simplest patterns in the dataset can be detrimental to the broader generalization of the model~\citep{nam2020learning}. We hope that by better understanding implicit regularization effects in SGD, our work will contribute to developing optimization methods that better optimize for both in-distribution and out-of-distribution generalization.
\section*{Limitations}
Our work has several limitations. We used mini-batch gradients to approximate \TrF in Fisher Penalty. While experiments suggest it is inconsequential for the main conclusions, more work would be needed to fully establish it. We also experimented only with vision tasks. Finally, we observed that \textsc{FP}\xspace slows down learning on clean examples. This feature makes it challenging to apply with very small learning rates, at which underfitting might start to be an issue.
\section*{Acknowledgments}
GK acknowledges support in part by the FRQNT Strategic Clusters Program (2020-RS4-265502 - Centre UNIQUE - Union Neurosciences \& Artificial Intelligence - Quebec).
KC was supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning:
from pattern recognition to AI) and Samsung Research (Improving Deep Learning using Latent Structure). We thank Catriona C. Geras for proofreading the paper.
{
"timestamp": "2021-06-14T02:13:01",
"yymm": "2012",
"arxiv_id": "2012.14193",
"language": "en",
"url": "https://arxiv.org/abs/2012.14193"
}
\section{Introduction}
The integrable nature of supersymmetric gauge theories with eight supercharges has attracted intensive interest since the groundbreaking work of Seiberg and Witten \cite{SW,Seiberg:1994aj}. In 4d, an R-matrix associated to the instanton counting was discovered on the full $\Omega$-background \cite{MO,Nakajima2013}, and such an R-matrix is known to be attached to an algebra constructed by taking coproducts of the affine Yangians of symmetric Kac-Moody Lie algebras. When the gauge group of the gauge theory is $G=A_n$, the underlying algebra is a coproduct of $n$ copies of the affine Yangian of $\mathfrak{gl}_1$ \cite{MO,SHc}. The affine Yangian of $\mathfrak{gl}_1$ can be further uplifted to a K-theoretic version known as the Ding-Iohara-Miki algebra (or the quantum toroidal algebra of ${\mathfrak{gl}}_1$) \cite{DI,Miki,FFJMM1} and to its elliptic version, the elliptic Ding-Iohara-Miki algebra \cite{Saito}. Through these uplifted algebras, one can work out the action of the algebra \cite{AFS,Zhu-elliptic} and of the R-matrix \cite{FJMM1,Awata:2017lqa} on 5d $\mathcal{N}=1$ gauge theories and on a special class of 6d $\mathcal{N}=(1,0)$ theories, which can be beautifully presented in the brane-web picture of the gauge theories \cite{AHK,M-strings}. However, the quantum toroidal algebra acting on gauge theories with gauge groups other than the A-type ones is yet to be clarified (see e.g. footnote 8 in \cite{KZ-Zn}). It is thus still unclear whether integrability holds on the full $\Omega$-background (compare, for example, the complicated expressions of the qq-characters for BCD-type gauge groups \cite{Haouzi:2020yxy} with those for A-type gauge groups \cite{BPS/CFT,Kimura-Pestun,Bourgine:2015szm,Kimura:2016dys,5dBMZ}).
The correspondence between $G$-SYM and the $\widehat{{}^L{G}}$-Toda chain has been studied for arbitrary $G$ in the absence of the $\Omega$-background~\cite{Martinec:1995by}, and there are some attempts \cite{Kimura:2017hez,Kimura:2019gon} to build the corresponding algebra from the Nekrasov partition function of BCD-type quiver gauge theories.\footnote{There is a fiber-base duality connecting the gauge theory with gauge group $G_1$ and quiver structure $\Gamma_1$ to the theory with gauge group $G_2=\Gamma_1$ and quiver structure $\Gamma_2=G_1$ \cite{fiber-base} for $G_{1,2}$ and $\Gamma_{1,2}$ of ADE type. The situation is expected to be more involved in the non-simply-laced cases. This duality was also discussed in the algebraic approach in \cite{Bourgine:2018fjy}.}
Nevertheless, the structure of the quantum algebra is still far from clear at the current stage.
In this work, we take a bottom-up approach to the integrability of the gauge theories with BCD-type gauge groups by generalizing the Bethe/Gauge correspondence proposed in \cite{NS-2,NS1,NS2}. The original statement was a duality between 2d $\mathcal{N}=(2,2)^\ast$ (resp. 3d $\mathcal{N}=2^\ast$; 4d $\mathcal{N}=1^\ast$) SU($N$) gauge theories and XXX (resp. XXZ; XYZ) spin chains. A more modern understanding of the relation with 4d $\mathcal{N}=2$ (resp. 5d $\mathcal{N}=1$; 6d $\mathcal{N}=(1,0)$) gauge theories is explained in detail in~\cite{Dorey:2011pa,Chen:2011sj,Chen:2012we} by Higgsing the theory in the so-called Nekrasov-Shatashvili (NS) limit to obtain vortex strings in the Higgs phase. See also the earlier related works~\cite{Dorey:1998yh,Dorey:1999zk}. In this description, the 2d $\mathcal{N}=(2,2)^\ast$ (resp. 3d $\mathcal{N}=2^\ast$; 4d $\mathcal{N}=1^\ast$) theory that captures the integrable structure is an effective theory on the worldvolume of the vortex strings. In relation to the underlying algebra that hosts the R-matrix,%
\footnote{A possible gauge theory interpretation of the R-matrix in the context of the Bethe/Gauge correspondence has been addressed in~\cite{Bullimore:2017lwu}.
}
we remark that a similar (rescaled) NS limit takes the quantum toroidal algebra of ${\mathfrak{gl}}_1$ to the quantum group $U_q(\widehat{\mathfrak{sl}}_2)$, whose finite-dimensional representations are known to give solutions to the $RTT$-relation of the six-vertex model (XXZ spin chain). In this article, we study the correspondence between 2d (or 3d) gauge theories with SO or Sp gauge groups and XXX (or XXZ) spin chains with open boundary conditions. We will also briefly discuss the relation between the results obtained here and the string-theory set-up used in the derivation of the Bethe/Gauge correspondence in the A-type case.
This article is organized as follows. In section \ref{s:R-mat}, we give a brief review of the integrability of the closed and open XYZ spin chains. In section \ref{s:gauge-th}, we review the $D^2$ ($\times S^1$) partition function of 2d $\mathcal{N}=(2,2)$ (3d $\mathcal{N}=2$) gauge theory, and write down the effective potential. By using this effective potential, we first reproduce the well-known Bethe/Gauge correspondence between the vacuum equation of the A-type gauge theories and the Bethe ansatz equation of the closed XXX (or XXZ) spin chains in section \ref{s:dic-A}, and extend this duality to the case of BCD-type gauge theories and open spin chains in section \ref{s:corr-open}. We further extend the computation to the $A_2$ quiver gauge theories in section \ref{s:A2-qui} and to a general linear quiver in section \ref{s:Ar_quiver}, and discuss some potential physical implications of our results in the context of string theory in section \ref{s:dis}.
\section{The R-matrix and Integrable Spin Chains}\label{s:R-mat}
The integrability of a spin chain is characterized by an R-matrix, ${\bf R}(u):\ V\otimes V\rightarrow V\otimes V$, satisfying the Yang-Baxter equation,
\begin{eqnarray}
{\bf R}_{12}(u-v){\bf R}_{13}(u){\bf R}_{23}(v)={\bf R}_{23}(v){\bf R}_{13}(u){\bf R}_{12}(u-v).\label{YBE}
\end{eqnarray}
The most general R-matrix for a solvable spin-$\frac{1}{2}$ $\mathfrak{sl}_2$-XYZ spin chain model is given by \cite{baxter2007exactly}
\begin{eqnarray}
{\bf R}(u)=\left(\begin{array}{cccc}
\alpha(u) & & & \delta(u)\\
& \beta(u) & \gamma(u) & \\
& \gamma(u) & \beta(u) & \\
\delta(u) & & & \alpha(u)\\
\end{array}\right),\label{R-XYZ}
\end{eqnarray}
where
\begin{subequations}
\begin{eqnarray}
\alpha(u)=\frac{\theta_{0,1/2}(u,2\tau)\theta_{1/2,1/2}(u+\eta,2\tau)}{\theta_{0,1/2}(0,2\tau)\theta_{1/2,1/2}(\eta,2\tau)},\quad \beta(u)=\frac{\theta_{1/2,1/2}(u,2\tau)\theta_{0,1/2}(u+\eta,2\tau)}{\theta_{0,1/2}(0,2\tau)\theta_{1/2,1/2}(\eta,2\tau)},\\
\gamma(u)=\frac{\theta_{0,1/2}(u,2\tau)\theta_{0,1/2}(u+\eta,2\tau)}{\theta_{0,1/2}(0,2\tau)\theta_{0,1/2}(\eta,2\tau)},\quad \delta(u)=\frac{\theta_{1/2,1/2}(u,2\tau)\theta_{1/2,1/2}(u+\eta,2\tau)}{\theta_{0,1/2}(0,2\tau)\theta_{0,1/2}(\eta,2\tau)},
\end{eqnarray}
\end{subequations}
with
\begin{eqnarray}
\theta_{a_1,a_2}(u,\tau)=\sum_{m=-\infty}^\infty \exp\left(i\pi \left((m+a_1)^2\tau+2(m+a_1)(u+a_2)\right)\right).
\label{theta_fn}
\end{eqnarray}
Let us list several useful properties of the above R-matrix.
\begin{itemize}
\item ${\bf R}(0)={\cal P}$, where ${\cal P}$ is the permutation operator that acts as ${\cal P}(x\otimes y)=y\otimes x$ for $^\forall x,y\in V$.
\item ${\bf R}_{21}(u)={\bf R}_{12}(u)={\bf R}_{12}(u)^{t_1t_2}$, where $t_i$ stands for the transpose in the $i$-th vector space.
\item Unitarity: ${\bf R}_{12}(u){\bf R}_{21}(-u)={\bf R}_{12}(u){\bf R}_{12}(-u)=-\frac{\sigma(u-\eta)\sigma(u+\eta)}{\sigma(\eta)^2}{\bf I}=:\rho(u){\bf I}$, for
\begin{eqnarray}
\sigma(u)=\theta_{1/2,1/2}(u,\tau).\label{sigma-def}
\end{eqnarray}
Note that we used
\begin{eqnarray}
\theta_{0,\frac{1}{2}}^2(x)\theta_{0,\frac{1}{2}}^2(y)-\theta_{\frac{1}{2},\frac{1}{2}}^2(x)\theta_{\frac{1}{2},\frac{1}{2}}^2(y)=\theta_{0,\frac{1}{2}}(x+y)\theta_{0,\frac{1}{2}}(x-y)\theta^2_{0,\frac{1}{2}}(0),
\end{eqnarray}
and
\begin{eqnarray}
\theta^2_{\frac{1}{2},\frac{1}{2}}(x)\theta^2_{0,\frac{1}{2}}(y)-\theta^2_{0,\frac{1}{2}}(x)\theta^2_{\frac{1}{2},\frac{1}{2}}(y)=\theta_{\frac{1}{2},\frac{1}{2}}(x+y)\theta_{\frac{1}{2},\frac{1}{2}}(x-y)\theta_{0,\frac{1}{2}}^2(0),
\end{eqnarray}
in the derivation.
\item Crossing unitarity: ${\bf R}_{12}(u)=V_1{\bf R}^{t_2}_{12}(-u-\eta)V_1$ for $V=-i\sigma_y$, so we have
\begin{eqnarray}
{\bf R}_{12}^{t_1}(u){\bf R}_{12}^{t_1}(-u-2\eta)=V_1{\bf R}_{12}(u+\eta)V_1^2{\bf R}_{12}(-u-\eta)V_1=\rho(u+\eta){\bf I}\nonumber\\
=-\frac{\sigma(u+2\eta)\sigma(u)}{\sigma(\eta)^2}{\bf I}=:\rho'(u){\bf I}.\label{ref-uni}
\end{eqnarray}
\end{itemize}
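These properties can be checked numerically from the series definition \eqref{theta_fn}. The following sketch is ours, not part of the derivation: the truncation level and the sample values of $\tau$, $\eta$ and $u$ are arbitrary choices, and it verifies ${\bf R}(0)={\cal P}$ and the unitarity relation:

```python
import numpy as np

def theta(a1, a2, u, tau, N=40):
    """Truncated series for theta_{a1,a2}(u, tau), eq. (theta_fn)."""
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * ((m + a1)**2 * tau + 2 * (m + a1) * (u + a2))))

tau, eta = 0.8j, 0.31      # sample values; Im(tau) > 0 for convergence

def R(u):
    """Baxter's XYZ R-matrix (R-XYZ)."""
    t = lambda a1, a2, x: theta(a1, a2, x, 2 * tau)
    nab = t(0, .5, 0) * t(.5, .5, eta)     # denominator of alpha, beta
    ngd = t(0, .5, 0) * t(0, .5, eta)      # denominator of gamma, delta
    al = t(0, .5, u) * t(.5, .5, u + eta) / nab
    be = t(.5, .5, u) * t(0, .5, u + eta) / nab
    ga = t(0, .5, u) * t(0, .5, u + eta) / ngd
    de = t(.5, .5, u) * t(.5, .5, u + eta) / ngd
    return np.array([[al, 0, 0, de], [0, be, ga, 0], [0, ga, be, 0], [de, 0, 0, al]])

P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
assert np.allclose(R(0), P)                     # R(0) = permutation operator

sigma = lambda x: theta(.5, .5, x, tau)         # eq. (sigma-def)
u = 0.17
rho = -sigma(u - eta) * sigma(u + eta) / sigma(eta)**2
assert np.allclose(R(u) @ R(-u), rho * np.eye(4))   # unitarity
```

The unitarity check implicitly tests the two theta-function identities quoted above.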
A concrete integrable model is then given by the monodromy matrix, ${\bf T}(u)\in {\rm End}(V^{(0)}\otimes V^{\otimes L})$, and the transfer matrix, $\mathsf{t}(u)\in {\rm End}(V^{\otimes L})$, built from the R-matrix.\footnote{$V^{(0)}$ is called the auxiliary quantum space. One can certainly take the $L$ vector spaces, on which the transfer matrix acts, to be different. A typical choice is to take a different representation of the R-matrix at each site, with spin $s_i$.}
The most well-studied model is the closed spin chain with periodic boundary condition, whose monodromy matrix is given by
\begin{eqnarray}
{\bf T}_0(u)={\bf R}_{0L}(u-\vartheta_L)\dots {\bf R}_{01}(u-\vartheta_1),\label{mono-mat}
\end{eqnarray}
where $\vartheta_i$'s are called the inhomogeneous parameters in the closed spin chain.\footnote{Not to be confused with the theta functions~\eqref{theta_fn}.} The monodromy matrix satisfies the RTT-relation,
\begin{eqnarray}
{\bf R}_{00'}(u-v){\bf T}_0(u){\bf T}_{0'}(v)={\bf T}_{0'}(v){\bf T}_0(u){\bf R}_{00'}(u-v),\label{RTT}
\end{eqnarray}
which directly follows from the Yang-Baxter equation (\ref{YBE}). The transfer matrix is then given by
\begin{eqnarray}
\mathsf{t}(u)={\rm tr}_0\left({\bf T}_0(u)\right),
\end{eqnarray}
and one can show by using (\ref{RTT}) that
\begin{eqnarray}
\left[\mathsf{t}(u),\mathsf{t}(u')\right]=0,\quad {\rm for}\ ^\forall u,u'.\label{com-pro-transfer}
\end{eqnarray}
Following the property (\ref{com-pro-transfer}), it is easy to see that all the charges defined as the expansion coefficients of the transfer matrix,
\begin{eqnarray}
\mathsf{t}(u)=:\sum_{n=0}^\infty H^{(n)}u^n,
\end{eqnarray}
commute with each other, i.e. $\left[H^{(n)},H^{(m)}\right]=0$ for $^\forall m,n$. These charges characterize an integrable system described by the Hamiltonian
\begin{eqnarray}
{\cal H}:=\frac{1}{H^{(0)}}H^{(1)}.\label{d-Ham}
\end{eqnarray}
The corresponding Hamiltonian for the closed spin-$\frac{1}{2}$ XYZ chain constructed from (\ref{R-XYZ}) (with all inhomogeneous parameters turned off) is given by
\begin{eqnarray}
{\cal H}=\frac{1}{2}\sum_{n=1}^L\left(J_x\sigma^{(n)}_x\sigma^{(n+1)}_x+J_y\sigma^{(n)}_y\sigma^{(n+1)}_y+J_z\sigma^{(n)}_z\sigma^{(n+1)}_z\right),
\end{eqnarray}
where $\sigma_{x,y,z}^{(n)}$ are the Pauli matrices assigned to the site $n$, and the couplings are parametrized as follows:
\begin{eqnarray}
J_x=e^{i\pi \eta}\frac{\sigma(\eta+\frac{\tau}{2})}{\sigma(\frac{\tau}{2})},\quad J_y=e^{i\pi \eta}\frac{\sigma(\eta+\frac{1+\tau}{2})}{\sigma(\frac{1+\tau}{2})},\quad J_z=\frac{\sigma(\eta+\frac{1}{2})}{\sigma(\frac{1}{2})},
\end{eqnarray}
and $\sigma(u)$ is defined in (\ref{sigma-def}).
\subsection{Reduction to XXZ chain}
The XXZ limit is taken by setting $\tau\rightarrow i\infty$,
and the following product representations of the $\theta$-functions are useful in taking this limit,
\begin{subequations}
\begin{eqnarray}
&&\theta_{0,0}(u,\tau)=\prod_{m=1}^\infty (1-q^{2m})(1+e^{2\pi iu}q^{2m-1})(1+e^{-2\pi iu}q^{2m-1}),\label{theta-prod-1}\\
&&\theta_{0,1/2}(u,\tau)=\prod_{m=1}^\infty (1-q^{2m})(1-e^{2\pi iu}q^{2m-1})(1-e^{-2\pi iu}q^{2m-1}),\label{theta-prod-2}\\
&&\theta_{1/2,0}(u,\tau)=2q^{\frac{1}{4}}\cos(\pi u)\prod_{m=1}^\infty (1-q^{2m})(1+e^{2\pi iu}q^{2m})(1+e^{-2\pi iu}q^{2m}),\label{theta-prod-3}\\
&&\theta_{1/2,1/2}(u,\tau)=-2q^{\frac{1}{4}}\sin(\pi u)\prod_{m=1}^\infty (1-q^{2m})(1-e^{2\pi iu}q^{2m})(1-e^{-2\pi iu}q^{2m}),\label{theta-prod-4}
\end{eqnarray}
\end{subequations}
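These product representations can be checked against the truncated series \eqref{theta_fn}; in the sketch below, the modulus $\tau$, the point $u$ and the truncation levels are sample choices of our own:

```python
import numpy as np

tau = 1.1j                      # sample modulus with Im(tau) > 0 (our choice)
q = np.exp(1j * np.pi * tau)    # nome, q = e^{i pi tau}

def theta_series(a1, a2, u, N=40):
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * ((m + a1)**2 * tau + 2 * (m + a1) * (u + a2))))

def theta_products(u, M=60):
    m = np.arange(1, M + 1)
    e2, e2i = np.exp(2j * np.pi * u), np.exp(-2j * np.pi * u)
    t05 = np.prod((1 - q**(2*m)) * (1 - e2 * q**(2*m - 1)) * (1 - e2i * q**(2*m - 1)))
    t55 = -2 * q**0.25 * np.sin(np.pi * u) * np.prod(
        (1 - q**(2*m)) * (1 - e2 * q**(2*m)) * (1 - e2i * q**(2*m)))
    return t05, t55

u = 0.23
t05, t55 = theta_products(u)
assert np.allclose(theta_series(0, 0.5, u), t05)      # eq. (theta-prod-2)
assert np.allclose(theta_series(0.5, 0.5, u), t55)    # eq. (theta-prod-4)
```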
where we set $q:=e^{\pi i\tau}$. In the XXZ limit, $q\rightarrow 0$, and we see that
\begin{subequations}
\begin{eqnarray}
&&\theta_{0,0}(u,\tau)\rightarrow 1,\quad \theta_{0,1/2}(u,\tau)\rightarrow 1,\\
&&\theta_{1/2,0}(u,\tau)\sim 2q^{\frac{1}{4}}\cos(\pi u),\quad \theta_{1/2,1/2}(u,\tau)\sim -2q^{\frac{1}{4}}\sin(\pi u).
\end{eqnarray}
\end{subequations}
Therefore we have
\begin{subequations}
\begin{eqnarray}
J_x\sim e^{i\pi\eta}\frac{\sin(\pi(\eta+\frac{\tau}{2}))}{\sin(\frac{\pi\tau}{2})}\sim e^{i\pi\eta}\frac{\exp(-i\pi(\eta+\frac{\tau}{2}))}{\exp(-i\frac{\pi\tau}{2})}\rightarrow 1,\\
J_y\sim e^{i\pi\eta}\frac{\sin(\pi(\eta+\frac{1+\tau}{2}))}{\sin(\frac{\pi(1+\tau)}{2})}\sim e^{i\pi\eta}\frac{\exp(-i\pi(\eta+\frac{1+\tau}{2}))}{\exp(-i\frac{\pi(1+\tau)}{2})}\rightarrow 1,\\
J_z\sim \frac{\sin(\pi(\eta+\frac{1}{2}))}{\sin(\frac{\pi}{2})}\rightarrow \cos\pi\eta,
\end{eqnarray}
\end{subequations}
and
\begin{eqnarray}
\alpha(u)\rightarrow \frac{\sin(\pi(u+\eta))}{\sin(\pi\eta)},\quad \beta(u)\rightarrow \frac{\sin(\pi u)}{\sin(\pi\eta)},\quad \gamma(u)\rightarrow 1,\quad \delta(u)\rightarrow 0.
\end{eqnarray}
One can further take the limit $\eta\rightarrow 0$ with the spectral parameter rescaled by $u\rightarrow u\eta$ to go to the XXX limit.
Let us introduce a new notation,
\begin{eqnarray}
[x]:=\frac{\sin(\pi x)}{\sin(\pi\eta)},
\end{eqnarray}
so that the R-matrix of XXZ spin chain can be expressed as
\begin{eqnarray}
{\bf R}^\text{XXZ}(u)=\left(\begin{array}{cccc}
[u+\eta] & & & \\
& [u] & [\eta] & \\
& [\eta] & [u] & \\
& & & [u+\eta]\\
\end{array}\right).
\end{eqnarray}
We also note that in the XXX limit, $[u]\rightarrow u$ and $[u+\eta]\rightarrow u+1$.
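The commutation property (\ref{com-pro-transfer}) can be verified numerically for this XXZ R-matrix. The sketch below (chain length, inhomogeneous parameters and spectral parameters are sample choices of ours) builds the inhomogeneous monodromy matrix (\ref{mono-mat}) and checks that two transfer matrices commute:

```python
import numpy as np

eta = 0.3
br = lambda x: np.sin(np.pi * x) / np.sin(np.pi * eta)   # the bracket [x]

def R(u):   # XXZ R-matrix
    a, b, c = br(u + eta), br(u), br(eta)
    return np.array([[a, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, a]])

L = 3                       # chain length (our choice)
n = L + 1                   # auxiliary space 0 plus L quantum spaces
th = [0.11, -0.07, 0.19]    # sample inhomogeneous parameters

def swap(i, j):
    """Permutation matrix exchanging tensor factors i and j of n qubits."""
    S = np.zeros((2**n, 2**n))
    for b in range(2**n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        S[sum(v << (n - 1 - k) for k, v in enumerate(bits)), b] = 1
    return S

def R0k(u, k):
    """R(u) acting on the auxiliary space 0 and the k-th site."""
    P = swap(1, k)
    return P @ np.kron(R(u), np.eye(2**(n - 2))) @ P

def transfer(u):
    T = np.eye(2**n)
    for k in range(L, 0, -1):               # T_0(u) = R_{0L} ... R_{01}
        T = T @ R0k(u - th[k - 1], k)
    T = T.reshape(2, 2**L, 2, 2**L)
    return T[0, :, 0, :] + T[1, :, 1, :]    # trace over space 0

t1, t2 = transfer(0.23), transfer(0.41)
assert np.allclose(t1 @ t2, t2 @ t1)        # eq. (com-pro-transfer)
```

The check passes with the inhomogeneous parameters turned on, as expected from the RTT-relation (\ref{RTT}).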
In the case of the XXZ spin chain, if we use the notation
\begin{eqnarray}
\iota(\theta)=\left(\begin{array}{cc}
1 & 0\\
0 & e^{i\theta}\\
\end{array}\right),
\end{eqnarray}
then we can confirm that $\iota(\theta)\otimes \iota(\theta)$ commutes with the R-matrix,
\begin{eqnarray}
\left[\iota(\theta)\otimes \iota(\theta),{\bf R}^\text{XXZ}(u)\right]=0.\label{twist-com}
\end{eqnarray}
This means that the $\theta$-dependent transfer matrix
\begin{eqnarray}
\mathsf{t}(u;\theta):={\rm tr}_0\iota_0(\theta){\bf T}_0(u),\label{transfer-twist}
\end{eqnarray}
gives rise to an integrable closed XXZ spin chain with twisted periodic boundary condition,
\begin{eqnarray}
\sigma^{(L+1)}_{x,y,z}=e^{\frac{i}{2}\theta\sigma_z}\sigma^{(1)}_{x,y,z}e^{-\frac{i}{2}\theta\sigma_z}.
\end{eqnarray}
However, we note that the commutation relation (\ref{twist-com}) does not hold for the more general XYZ R-matrix, unless $e^{i\theta}=\pm 1$.
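The commutation relation (\ref{twist-com}) is immediate to verify numerically, e.g. with the following sketch (the values of $\eta$, the twist angle and $u$ are sample choices of ours):

```python
import numpy as np

eta, th = 0.3, 0.7   # anisotropy and a sample twist angle (our choices)
br = lambda x: np.sin(np.pi * x) / np.sin(np.pi * eta)

def R(u):   # XXZ R-matrix
    a, b, c = br(u + eta), br(u), br(eta)
    return np.array([[a, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, a]])

iota = np.diag([1, np.exp(1j * th)])
II = np.kron(iota, iota)   # iota(theta) x iota(theta)
u = 0.37
assert np.allclose(II @ R(u), R(u) @ II)   # eq. (twist-com)
```

The commutator vanishes because the middle block of ${\bf R}^\text{XXZ}$ only mixes states carrying the same eigenvalue $e^{i\theta}$ of $\iota\otimes\iota$; the $\delta(u)$ corner entries of the XYZ R-matrix mix states with eigenvalues $1$ and $e^{2i\theta}$, which is why the relation fails there unless $e^{i\theta}=\pm1$.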
\subsection{Open spin chain}
Open spin chains are of more interest to us in this article. The transfer matrix of an open chain is given by \cite{Sklyanin:1988yz}
\begin{eqnarray}
\mathsf{t}(u)={\rm tr}_0K^-_0(u){\bf T}(u)K^+_0(u){\bf T}^{-1}(-u).
\end{eqnarray}
${\bf T}(u)\in {\rm End}(V^{(0)}\otimes V^{\otimes L})$ is usually taken to be the same as in the closed chain, (\ref{mono-mat}), and $K^\pm(u)\in {\rm End}(V^{(0)})$ denote the boundary operators, which respectively satisfy the boundary Yang-Baxter equations
\begin{eqnarray}
{\bf R}_{12}(\lambda_1-\lambda_2)K^+_1(\lambda_1){\bf R}_{21}(\lambda_1+\lambda_2)K^+_2(\lambda_2)=K^+_2(\lambda_2){\bf R}_{12}(\lambda_1+\lambda_2)K^+_1(\lambda_1){\bf R}_{21}(\lambda_1-\lambda_2),\label{b-YBE}
\end{eqnarray}
and
\begin{eqnarray}
&&{\bf R}_{12}(-\lambda_1+\lambda_2)K^{-\ t}_1(\lambda_1){\bf R}_{21}(-\lambda_1-\lambda_2-2\eta)K^{-\ t}_2(\lambda_2)\nonumber\\
&&=K^{-\ t}_2(\lambda_2){\bf R}_{12}(-\lambda_1-\lambda_2-2\eta)K^{-\ t}_1(\lambda_1){\bf R}_{21}(-\lambda_1+\lambda_2),\label{b-YBE-2}
\end{eqnarray}
with $\eta$ the characteristic parameter of the system, s.t.
\begin{eqnarray}
{\bf R}^{t_1}(u){\bf R}^{t_1}(-u-2\eta)=\rho'(u){\bf I},
\end{eqnarray}
for some function $\rho'(u)$. As has been shown in (\ref{ref-uni}), $\rho'(u)$ for the XYZ R-matrix is given by
\begin{eqnarray}
\rho'(u)=-\frac{\sigma(u+2\eta)\sigma(u)}{\sigma(\eta)^2}.
\end{eqnarray}
Note that we can rewrite the RTT-relation (\ref{RTT}) into
\begin{eqnarray}
{\bf T}_{2}^{-1}(\lambda_2){\bf R}_{12}(\lambda_1-\lambda_2){\bf T}_{1}(\lambda_1)={\bf T}_{1}(\lambda_1){\bf R}_{12}(\lambda_1-\lambda_2){\bf T}^{-1}_{2}(\lambda_2),
\end{eqnarray}
and
\begin{eqnarray}
{\bf T}_{1}^{-1}(\lambda_1){\bf R}_{12}(\lambda_2-\lambda_1){\bf T}_{2}(\lambda_2)={\bf T}_{2}(\lambda_2){\bf R}_{12}(\lambda_2-\lambda_1){\bf T}^{-1}_{1}(\lambda_1),
\end{eqnarray}
therefore
\begin{eqnarray}
\tilde{K}_0^+(u):= {\bf T}_{0}(u)K^+(u){\bf T}^{-1}_{0}(-u),\label{def-K+}
\end{eqnarray}
is also a solution to the boundary Yang-Baxter equation (\ref{b-YBE}), that is to say, we can alternatively express the transfer matrix of the open spin chain as
\begin{eqnarray}
\mathsf{t}(u)={\rm tr}_0K_0^-(u)\tilde{K}_0^+(u).\label{transf-open}
\end{eqnarray}
The commutation relation (\ref{com-pro-transfer}) can also be shown in this case with the following calculation,
\begin{eqnarray}
&&\mathsf{t}(u)\mathsf{t}(u')={\rm tr}_{0,0'}K^-_0(u)K^-_{0'}(u')\tilde{K}^+_0(u)\tilde{K}^+_{0'}(u')={\rm tr}_{0,0'}K^{-\ t}_0(u)K^-_{0'}(u')\tilde{K}^{+\ t}_0(u)\tilde{K}^+_{0'}(u')\nonumber\\
&&=\rho^{\prime\ -1}(-u-u'-2\eta){\rm tr}_{0,0'}K^{-\ t}_0(u)K^-_{0'}(u'){\bf R}^{t_{0'}}_{00'}(-u-u'-2\eta){\bf R}^{t_0}_{00'}(u+u')\tilde{K}^{+\ t}_0(u)\tilde{K}^+_{0'}(u')\nonumber\\
&&=\rho^{\prime\ -1}(-u-u'-2\eta){\rm tr}_{0,0'}\left[\left(K^{-\ t}_0(u){\bf R}_{00'}(-u-u'-2\eta)K^{-\ t}_{0'}(u')\right)^{t_{0'}}\right.\nonumber\\
&&\qquad\left.\times \left(\tilde{K}_0^+(u){\bf R}_{00'}(u+u')\tilde{K}^+_{0'}(u')\right)^{t_0}\right]\nonumber\\
&&=\rho^{\prime\ -1}(-u-u'-2\eta){\rm tr}_{0,0'}\left[\left(K^{-\ t}_0(u){\bf R}_{00'}(-u-u'-2\eta)K^{-\ t}_{0'}(u')\right)^{t_{00'}}\tilde{K}_0^+(u){\bf R}_{00'}(u+u')\tilde{K}^+_{0'}(u')\right]\nonumber\\
&&=\rho^{\prime\ -1}(-u-u'-2\eta)\rho^{-1}(u'-u)\nonumber\\
&&\qquad\times {\rm tr}_{0,0'}\left[\left({\bf R}_{00'}(u'-u)K^{-\ t}_0(u){\bf R}_{00'}(-u-u'-2\eta)K^{-\ t}_{0'}(u')\right)^{t_{00'}}\right.\nonumber\\
&&\left.\times {\bf R}_{00'}(u-u')\tilde{K}_0^+(u){\bf R}_{00'}(u+u')\tilde{K}^+_{0'}(u')\right]\nonumber\\
&&=\rho^{\prime\ -1}(-u-u'-2\eta)\rho^{-1}(u'-u)\nonumber\\
&&\qquad\times {\rm tr}_{0,0'}\left[\left(K^{-\ t}_{0'}(u'){\bf R}_{00'}(-u-u'-2\eta)K^{-\ t}_0(u){\bf R}_{00'}(u'-u)\right)^{t_{00'}}\right.\nonumber\\
&&\qquad\left.\times \tilde{K}^+_{0'}(u'){\bf R}_{00'}(u+u')\tilde{K}_0^+(u){\bf R}_{00'}(u-u')\right]\nonumber\\
&&=\mathsf{t}(u')\mathsf{t}(u),
\end{eqnarray}
where $t_{00'}=t_0t_{0'}$ denotes the transpose in both the $0$-th and the $0'$-th auxiliary spaces, and we used the unitarity of the R-matrix in the derivation.
In this article, we focus on the case of a diagonal boundary operator,
\begin{eqnarray}
K(u)=\left(\begin{array}{cc}
e(u) & 0\\
0 & f(u)\\
\end{array}\right).
\end{eqnarray}
The boundary Yang-Baxter equation, (\ref{b-YBE}), for this diagonal ansatz reduces to
\begin{subequations}
\begin{eqnarray}
\alpha(u-v)\delta(u+v)\left(e(v)f(u)-e(u)f(v)\right)+\alpha(u+v)\delta(u-v)\left(e(u)e(v)-f(u)f(v)\right)=0,\\
\beta(u+v)\gamma(u-v)\left(e(v)f(u)-e(u)f(v)\right)+\beta(u-v)\gamma(u+v)\left(e(u)e(v)-f(u)f(v)\right)=0,
\end{eqnarray}
\end{subequations}
and one can further simplify them to a single equation,
\begin{eqnarray}
&&\theta_{0,1/2}(u-v,2\tau)\theta_{1/2,1/2}(u+v,2\tau)\left(e(v)f(u)-e(u)f(v)\right)\nonumber\\
&&+\theta_{0,1/2}(u+v,2\tau)\theta_{1/2,1/2}(u-v,2\tau)\left(e(u)e(v)-f(u)f(v)\right)=0.
\end{eqnarray}
Two trivial solutions are $e(u)=\pm f(u)$; here we consider a more nontrivial one.
By using the identity
\begin{eqnarray}
&&\theta_{1/2,1/2}(u\pm v,\tau)\theta_{0,1/2}(u\mp v,\tau)\theta_{0,0}(0,\tau)\theta_{1/2,0}(0,\tau)\nonumber\\
&&=\theta_{1/2,1/2}(u,\tau)\theta_{0,1/2}(u,\tau)\theta_{0,0}(v,\tau)\theta_{1/2,0}(v,\tau)\pm \theta_{1/2,1/2}(v,\tau)\theta_{0,1/2}(v,\tau)\theta_{0,0}(u,\tau)\theta_{1/2,0}(u,\tau),\nonumber\\
\end{eqnarray}
we found a solution to the boundary Yang-Baxter equation,
\begin{eqnarray}
e(u)=\frac{\theta_{1/2,1/2}(u+ \xi,2\tau)\theta_{0,1/2}(u- \xi,2\tau)}{\theta_{1/2,1/2}(\xi,2\tau)\theta_{0,1/2}(\xi,2\tau)},\quad f(u)=-\frac{\theta_{1/2,1/2}(u- \xi,2\tau)\theta_{0,1/2}(u+ \xi,2\tau)}{\theta_{1/2,1/2}(\xi,2\tau)\theta_{0,1/2}(\xi,2\tau)},\label{ef-sol}
\end{eqnarray}
where we normalized the boundary operator $K(u)$ such that $K(0)={\bf I}$, in accordance with the fact that the R-matrix (\ref{R-XYZ}) also trivializes at $u=0$. In the XXZ limit, the boundary operator becomes
\begin{eqnarray}
K^\text{XXZ}(u)=\frac{1}{[\xi]}\left(\begin{array}{cc}
[u+\xi] & \\
& -[u-\xi]\\
\end{array}\right).
\end{eqnarray}
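One can verify numerically that $K^\text{XXZ}(u)$ satisfies the boundary Yang-Baxter equation (\ref{b-YBE}). The sketch below uses ${\bf R}_{21}={\bf R}_{12}$ for this symmetric R-matrix; the parameter values are sample choices of ours:

```python
import numpy as np

eta, xi = 0.3, 0.47   # sample anisotropy and boundary parameter (our choices)
br = lambda x: np.sin(np.pi * x) / np.sin(np.pi * eta)

def R(u):   # XXZ R-matrix
    a, b, c = br(u + eta), br(u), br(eta)
    return np.array([[a, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, a]])

def K(u):   # K^XXZ(u) = diag([u+xi], -[u-xi]) / [xi]
    return np.diag([br(u + xi), -br(u - xi)]) / br(xi)

I2 = np.eye(2)
K1 = lambda u: np.kron(K(u), I2)
K2 = lambda u: np.kron(I2, K(u))

l1, l2 = 0.19, 0.42
lhs = R(l1 - l2) @ K1(l1) @ R(l1 + l2) @ K2(l2)
rhs = K2(l2) @ R(l1 + l2) @ K1(l1) @ R(l1 - l2)
assert np.allclose(lhs, rhs)   # boundary Yang-Baxter equation (b-YBE)
```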
\paragraph{Remark:} One can easily derive the following identities from (\ref{theta-prod-1}) to (\ref{theta-prod-4}),
\begin{subequations}
\begin{eqnarray}
\theta_{0,\frac{1}{2}}(2u,2\tau)=\left(\prod_{m=1}^\infty \frac{1+q^{2m}}{1-q^{2m}}\right)\theta_{0,0}(u,\tau)\theta_{0,\frac{1}{2}}(u,\tau),\\
\theta_{\frac{1}{2},\frac{1}{2}}(2u,2\tau)=\left(\prod_{m=1}^\infty \frac{1+q^{2m}}{1-q^{2m}}\right)\theta_{\frac{1}{2},\frac{1}{2}}(u,\tau)\theta_{\frac{1}{2},0}(u,\tau).
\end{eqnarray}
\end{subequations}
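These doubling identities can be checked numerically against the series definition \eqref{theta_fn}, with sample values of $\tau$ and $u$ of our own choosing:

```python
import numpy as np

def theta(a1, a2, u, tau, N=40):
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * ((m + a1)**2 * tau + 2 * (m + a1) * (u + a2))))

tau = 0.9j                     # sample modulus (our choice)
q = np.exp(1j * np.pi * tau)   # nome
m = np.arange(1, 200)
C = np.prod((1 + q**(2*m)) / (1 - q**(2*m)))   # the common prefactor

u = 0.23
assert np.allclose(theta(0, .5, 2*u, 2*tau), C * theta(0, 0, u, tau) * theta(0, .5, u, tau))
assert np.allclose(theta(.5, .5, 2*u, 2*tau), C * theta(.5, .5, u, tau) * theta(.5, 0, u, tau))
```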
Then we can rewrite
\begin{subequations}
\begin{eqnarray}
e(u)=\frac{\theta_{1/2,1/2}(\frac{u+ \xi}{2},\tau)\theta_{1/2,0}(\frac{u+ \xi}{2},\tau)\theta_{0,1/2}(\frac{u- \xi}{2},\tau)\theta_{0,0}(\frac{u- \xi}{2},\tau)}{\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)},\\
f(u)=-\frac{\theta_{1/2,1/2}(\frac{u- \xi}{2},\tau)\theta_{1/2,0}(\frac{u- \xi}{2},\tau)\theta_{0,1/2}(\frac{u+ \xi}{2},\tau)\theta_{0,0}(\frac{u+ \xi}{2},\tau)}{\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)},
\end{eqnarray}
\end{subequations}
and by further using the identity
\begin{eqnarray}
&&\theta_{1/2,1/2}(x\pm y,\tau)\theta_{1/2,0}(x\mp y,\tau)\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)=\nonumber\\
&&\theta_{1/2,1/2}(x,\tau)\theta_{1/2,0}(x,\tau)\theta_{0,0}(y,\tau)\theta_{0,1/2}(y,\tau)\pm \theta_{1/2,1/2}(y,\tau)\theta_{1/2,0}(y,\tau)\theta_{0,0}(x,\tau)\theta_{0,1/2}(x,\tau),\nonumber\\
\end{eqnarray}
we have
\begin{eqnarray}
e(u)=\frac{\theta_{1/2,1/2}(u,\tau)\theta_{1/2,0}(\xi,\tau)+\theta_{1/2,1/2}(\xi,\tau)\theta_{1/2,0}(u,\tau)}{2\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)}\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)\nonumber\\
=\frac{\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)\theta_{1/2,1/2}(u,\tau)\theta_{1/2,0}(u,\tau)}{2\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)}\left(\frac{\theta_{1/2,0}(\xi,\tau)}{\theta_{1/2,0}(u,\tau)}+\frac{\theta_{1/2,1/2}(\xi,\tau)}{\theta_{1/2,1/2}(u,\tau)}\right),\nonumber\\
f(u)=\frac{-\theta_{1/2,1/2}(u,\tau)\theta_{1/2,0}(\xi,\tau)+\theta_{1/2,1/2}(\xi,\tau)\theta_{1/2,0}(u,\tau)}{2\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)}\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)\nonumber\\
=\frac{\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)\theta_{1/2,1/2}(u,\tau)\theta_{1/2,0}(u,\tau)}{2\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)}\left(-\frac{\theta_{1/2,0}(\xi,\tau)}{\theta_{1/2,0}(u,\tau)}+\frac{\theta_{1/2,1/2}(\xi,\tau)}{\theta_{1/2,1/2}(u,\tau)}\right).
\end{eqnarray}
That is to say, one can decompose the boundary operator $K(u)$ as
\begin{eqnarray}
K(u)=\frac{\theta_{0,0}(0,\tau)\theta_{0,1/2}(0,\tau)\theta_{1/2,1/2}(u,\tau)\theta_{1/2,0}(u,\tau)}{2\theta_{1/2,1/2}(\xi/2,\tau)\theta_{1/2,0}(\xi/2,\tau)\theta_{0,1/2}(\xi/2,\tau)\theta_{0,0}(\xi/2,\tau)}\left(\frac{\theta_{1/2,1/2}(\xi,\tau)}{\theta_{1/2,1/2}(u,\tau)}{\bf I}+\frac{\theta_{1/2,0}(\xi,\tau)}{\theta_{1/2,0}(u,\tau)}\sigma_z\right).
\end{eqnarray}
We remark that up to the overall scaling (which is a free choice of the boundary operator), the above decomposition matches that given in \cite{Hou_1995,Fan:1996jq} as a special diagonal case. $\Box$
For (\ref{b-YBE-2}), we again use the diagonal ansatz $\tilde{K}={\rm diag}\left(e'(u),f'(u)\right)$, and obtain the following equation,
\begin{eqnarray}
&&\theta_{0,1/2}(-u+v,2\tau)\theta_{1/2,1/2}(-u-v-2\eta,2\tau)\left(e'(v)f'(u)-e'(u)f'(v)\right)\nonumber\\
&&+\theta_{0,1/2}(-u-v-2\eta,2\tau)\theta_{1/2,1/2}(-u+v,2\tau)\left(e'(u)e'(v)-f'(u)f'(v)\right)=0.
\end{eqnarray}
We see that by replacing $u\rightarrow -u-\eta$, $v\rightarrow -v-\eta$ in (\ref{ef-sol}), we obtain the following dual solution for the boundary operator,
\begin{subequations}
\begin{eqnarray}
e'(u)=\frac{\theta_{1/2,1/2}(-u-\eta+ \tilde{\xi},2\tau)\theta_{0,1/2}(-u-\eta- \tilde{\xi},2\tau)}{\theta_{1/2,1/2}(\tilde{\xi},2\tau)\theta_{0,1/2}(\tilde{\xi},2\tau)},\\
f'(u)=-\frac{\theta_{1/2,1/2}(-u-\eta- \tilde{\xi},2\tau)\theta_{0,1/2}(-u-\eta+ \tilde{\xi},2\tau)}{\theta_{1/2,1/2}(\tilde{\xi},2\tau)\theta_{0,1/2}(\tilde{\xi},2\tau)}.\label{ef-dual-sol}
\end{eqnarray}
\end{subequations}
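The commutativity of the double-row transfer matrix can also be checked numerically in the XXZ limit. In the sketch below (all parameter values and the chain length are our own choices), $K^+$ is the XXZ boundary operator above with parameter $\xi_+$, $K^-$ is the XXZ limit of the dual solution (\ref{ef-dual-sol}) with parameter $\xi_-$, i.e. $K^-(u)={\rm diag}([\xi_--u-\eta],\,[u+\eta+\xi_-])/[\xi_-]$, and ${\bf T}^{-1}(-u)$ is computed as an exact matrix inverse:

```python
import numpy as np

eta, xip, xim = 0.3, 0.47, 0.23   # anisotropy and boundary parameters (our choices)
br = lambda x: np.sin(np.pi * x) / np.sin(np.pi * eta)

def R(u):   # XXZ R-matrix
    a, b, c = br(u + eta), br(u), br(eta)
    return np.array([[a, 0, 0, 0], [0, b, c, 0], [0, c, b, 0], [0, 0, 0, a]])

L, n = 2, 3                 # chain length, total number of spaces
th = [0.10, -0.04]          # sample inhomogeneous parameters

def swap(i, j):
    S = np.zeros((2**n, 2**n))
    for b in range(2**n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        S[sum(v << (n - 1 - k) for k, v in enumerate(bits)), b] = 1
    return S

def R0k(u, k):   # R(u) on auxiliary space 0 and site k
    P = swap(1, k)
    return P @ np.kron(R(u), np.eye(2**(n - 2))) @ P

def T(u):        # T_0(u) = R_{0L} ... R_{01}
    M = np.eye(2**n)
    for k in range(L, 0, -1):
        M = M @ R0k(u - th[k - 1], k)
    return M

Kp = lambda u: np.diag([br(u + xip), -br(u - xip)]) / br(xip)   # solves (b-YBE)
Km = lambda u: np.diag([br(xim - u - eta), br(u + eta + xim)]) / br(xim)  # solves (b-YBE-2)
emb0 = lambda K: np.kron(K, np.eye(2**L))   # embed a boundary operator into space 0

def t_open(u):   # double-row transfer matrix, eq. before (transf-open)
    M = emb0(Km(u)) @ T(u) @ emb0(Kp(u)) @ np.linalg.inv(T(-u))
    M = M.reshape(2, 2**L, 2, 2**L)
    return M[0, :, 0, :] + M[1, :, 1, :]    # trace over space 0

t1, t2 = t_open(0.23), t_open(0.41)
assert np.allclose(t1 @ t2, t2 @ t1)
```

Note that by unitarity ${\bf T}^{-1}(-u)$ agrees, up to a scalar that cancels in the commutator, with the "reversed" monodromy ${\bf R}_{01}(u+\vartheta_1)\dots{\bf R}_{0L}(u+\vartheta_L)$, so the exact inverse is a legitimate shortcut here.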
The Bethe ansatz provides a convenient approach to diagonalizing the integrable system. In this article, we compare the vacuum equation of the gauge theory with the Bethe ansatz equation of the spin chain. Let us give a brief description of the (algebraic) Bethe ansatz. We can schematically express the transfer matrix as a trace over the auxiliary space of a matrix of operators,
\begin{eqnarray}
\mathsf{t}(u)=:{\rm tr}_0\left(\begin{array}{cc}
\mathcal{A}(u) & \mathcal{B}(u)\\
\mathcal{C}(u) & \mathcal{D}(u)\\
\end{array}\right)=\mathcal{A}(u)+\mathcal{D}(u),
\end{eqnarray}
with the entries $\mathcal{A}(u)$, $\mathcal{B}(u)$, $\mathcal{C}(u)$, $\mathcal{D}(u)$ being elements of ${\rm End}(V^{\otimes L})$. For a given ground state $\ket{\Omega}$ of the system, the Bethe ansatz assumes that all the eigenstates of the system take the form
\begin{eqnarray}
\prod_{i=1}^M\mathcal{B}(u_i)\ket{\Omega},
\end{eqnarray}
where the set $\{u_i\}$ is determined by the so-called Bethe ansatz equation (BAE).
\begin{comment}
When we employ the periodic boundary condition in the XYZ spin chain, i.e. $\sigma^{(L+1)}_{x,y,z}=\sigma^{(1)}_{x,y,z}$, it is known that for most generic $\eta$ and $\tau$, only even $L$'s guarantee the existence of solutions to the BAE, and the BAE is known to take the form
\begin{eqnarray}
e^{2\phi i}\frac{\sigma(u_i+\eta)^L}{\sigma(u_i)^L}=-\frac{Q(u_i+\eta)}{Q(u_i-\eta)},
\end{eqnarray}
where
\begin{eqnarray}
Q(u):=\prod_{j=1}^M\frac{\sigma(u-u_j)}{\sigma(\eta)},
\end{eqnarray}
and $\phi$ is a phase factor giving the selection rule,
\begin{eqnarray}
e^{i\phi}\prod_{i=1}^M\frac{\sigma(u_i+\eta)}{\sigma(u_i)}=e^{\frac{2\pi ik}{L}},
\end{eqnarray}
labeled by some $k\in\{1,2,\dots,L\}$.
In general, the Bethe ansatz equation of an open spin chain with diagonal boundary conditions parameterized by $\xi_\pm$ is given by (uplift to XXZ)
\begin{eqnarray}
\frac{[u_i+\xi_+-\frac{1}{2}\eta][u_i+\xi_--\frac{1}{2}\eta]\delta_+(u_i)\delta_-(-u_i)}{[u_i-\xi_++\frac{1}{2}\eta][u_i-\xi_-+\frac{1}{2}\eta]\delta_+(-u_i)\delta_-(u_i)}=\prod_{j\neq i}^k \frac{[u_i-u_j+\eta][u_i+u_j+\eta]}{[u_i-u_j-\eta][u_i+u_j-\eta]}
\end{eqnarray}
\end{comment}
We give a more detailed review of how to derive the Bethe ansatz equation of the open spin chain in Appendix~\ref{a:bethe-open}.
The Hamiltonian of the open chain can be found in a way similar to (\ref{d-Ham}); here we only write down the Hamiltonian of the open XXZ spin chain with diagonal boundary conditions \cite{Sklyanin:1988yz}:
\begin{eqnarray}
{\cal H}=\sum_{i=1}^{L-1}{\cal H}_{i,i+1}+\frac{\pi}{2}\cot(\pi\xi_-)\sigma_z^{(1)}+\frac{1}{2}\tan(\pi\eta)\cot(\pi\xi_+)\sigma_z^{(L)},
\label{boundary_Ham}
\end{eqnarray}
where ${\cal H}_{i,i+1}=\frac{1}{2}\left(\sigma_x^{(i)}\sigma_x^{(i+1)}+\sigma_y^{(i)}\sigma_y^{(i+1)}+\cos(\pi\eta)\sigma_z^{(i)}\sigma_z^{(i+1)}\right)$ is the building block of the XXZ spin chain, and we omitted some constant terms. We remark that $\xi=0$ forces the corresponding $\sigma_z$ at the boundary to be zero, which can be thought of as a fixed-end (or Dirichlet) boundary condition, while taking $\xi\rightarrow i\infty$ gives $\cot\xi\rightarrow -i$ and is also a special limit in the spin chain (one that minimizes the boundary coupling on the imaginary axis\footnote{In the context of the XXZ spin chain, it is common to take $u$, $\eta$ and $\xi_\pm$ to be pure imaginary.}).
\section{Effective Twisted Superpotential of 3d $\mathcal{N}=2$ Theory and 2d $\mathcal{N}=(2,2)$ Theory}\label{s:gauge-th}
In this section, we quote the expression of the disk partition function of 3d $\mathcal{N}=2$ theory (on $D^2\times S^1$) and 2d $\mathcal{N}=(2,2)$ theory (on $D^2$), and then we compute the effective twisted superpotential of the gauge theories.
\subsection{3d $\mathcal{N}=2$ theory}
The partition function of 3d $\mathcal{N}=2$ theory on $D^2 \times S^1$ was computed in \cite{Yoshida:2014ssa}, where the geometry of $D^2\times S^1$ is parameterized as
\begin{eqnarray}
{\rm d}s^2=\ell^2({\rm d}\theta^2+r^2\sin^2\theta{\rm d}\varphi^2)+{\rm d}\tau^2,
\end{eqnarray}
where the $S^1$ circle has a periodicity $\beta\ell$.
The index on $D^2\times S^1$ is given by the following integral,
\begin{eqnarray}
{\cal I}=\frac{1}{\mid W_G\mid}\int \frac{{\rm d}^N \sigma}{(2\pi)^N}e^{-S_\text{cl}}Z_\text{vec}Z_\text{chi}Z_\text{bd},
\end{eqnarray}
where we denote the Weyl group of $G$ by $W_G$, and the one-loop determinant of the vector multiplet is given by
\begin{eqnarray}
Z_\text{vec}=\prod_{\alpha \in \hat\Delta} e^{\frac{1}{8\beta_2}(\alpha\cdot \sigma)^2}\left(e^{i\alpha\cdot \sigma};q^2\right)_\infty,
\end{eqnarray}
with the set of the roots of $G$ denoted by $\hat\Delta$, and the contribution from the chiral multiplet with Neumann boundary condition reads
\begin{eqnarray}
Z^\text{Neu}_\text{chi}=\prod_{w \in R} e^{{\cal E}(iw\cdot \sigma+\Delta \beta_2+im)}\left(e^{-iw\cdot\sigma-im}q^\Delta;q^2\right)^{-1}_\infty,
\qquad
q=e^{-\beta_2},
\end{eqnarray}
with the set of the weights of the corresponding representation denoted by $R$, the R-charge of the scalar in the chiral multiplet $\Delta$, and
\begin{eqnarray}
{\cal E}(x)=\frac{\beta_2}{12}-\frac{1}{4}x+\frac{1}{8\beta_2}x^2.
\end{eqnarray}
$\beta_1$ is the fugacity of the rotation along $S^1$, $\beta_2$ is the U(1)$_R$ charge fugacity, and $\beta \ell=(\beta_1+\beta_2)\ell$ is the circumference of $S^1$.
The one-loop contribution of the chiral multiplet with Dirichlet boundary condition reads
\begin{eqnarray}
Z^\text{Dir}_\text{chi}=\prod_{w \in R}e^{-{\cal E}(-iw\cdot\sigma +(2-\Delta)\beta_2-im)}\left(e^{iw\cdot\sigma+im}q^{2-\Delta};q^2\right)_\infty,
\end{eqnarray}
and one can confirm that the difference between the chiral multiplet with Dirichlet boundary condition and that with Neumann boundary condition is given by a 2d Fermi multiplet living on the boundary, $T^2 = \partial(D^2 \times S^1)$,
\begin{eqnarray}
Z^\text{Dir}_\text{chi}=Z_\text{2d\ Fermi}Z^\text{Neu}_\text{chi},
\end{eqnarray}
with
\begin{eqnarray}
Z_\text{2d\ Fermi}=\prod_{w \in R} e^{-2{\cal E}(iw\cdot\sigma+\Delta\beta_2+F_lM_l)}\theta(e^{-iw\cdot\sigma-F_lM_l}q^{\Delta};q^2),
\end{eqnarray}
where the $\theta$-function here is defined as
\begin{eqnarray}
\theta(y;q)=\prod_{n=0}^\infty (1-yq^n)(1-y^{-1}q^{n+1})=\frac{1}{(1-y^{-1})}(y;q)_\infty (y^{-1};q)_\infty,
\end{eqnarray}
which is equivalent to $\theta_{1/2,1/2}$ up to a change of variables (see~\eqref{theta-prod-4}).
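The equality of the two expressions for $\theta(y;q)$ can be confirmed numerically with truncated products (the sample values of $y$ and $q$ are our own choices):

```python
import numpy as np

def qtheta(y, q, N=200):
    """Truncated product for theta(y; q)."""
    n = np.arange(N)
    return np.prod((1 - y * q**n) * (1 - q**(n + 1) / y))

def qpoch(y, q, N=200):
    """Truncated q-Pochhammer symbol (y; q)_infinity."""
    return np.prod(1 - y * q**np.arange(N))

y, q = 0.7 + 0.2j, 0.35   # sample values with |q| < 1 (our choices)
lhs = qtheta(y, q)
rhs = qpoch(y, q) * qpoch(1 / y, q) / (1 - 1 / y)
assert np.allclose(lhs, rhs)
```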
The classical piece depends on the FI-term and the boundary Chern-Simons term (defined on the boundary $T^2=S^1\times S^1$), $S_\text{cl}=S_\text{FI}-S_\text{bCS}$,
\begin{eqnarray}
-S_\text{FI}=2\pi i\ell\zeta {\rm tr}\sigma,
\end{eqnarray}
and
\begin{eqnarray}
S_\text{bCS}=\frac{\kappa}{4\beta}{\rm tr}\sigma^2.
\end{eqnarray}
\paragraph{Remark:} As noted in \cite{Yoshida:2014ssa}, when we focus on the special case of 3d $\mathcal{N}=4$ theory, the $D^2\times S^1$ partition function can be identified with the $\Omega$-background partition function on $\mathbb{C}_{q^2}\times S^1$ presented in \cite{Aganagic:2013tta} (with proper boundary conditions on $D^2$ chosen).\footnote{They basically discuss 3d $\mathcal{N}=2^*$ theory in \cite{Aganagic:2013tta}, which is the mass deformation of $\mathcal{N}=4$ with the adjoint matter. In fact, they start with the 5d $\mathcal{N} = 1$ theory, and then discuss the 3d theory by considering the Higgs branch locus.} The $\mathcal{N}=4$ hypermultiplet is decomposed into an $\mathcal{N}=2$ chiral multiplet in the fundamental representation with Neumann b.c. and an $\mathcal{N}=2$ chiral multiplet in the anti-fundamental representation with Dirichlet b.c. imposed. The $\mathcal{N}=4$ vector multiplet is decomposed into an $\mathcal{N}=2$ vector multiplet and an $\mathcal{N}=2$ chiral multiplet in the adjoint representation with Neumann boundary condition. Under this identification, $q^2$ is mapped to the $\Omega$-background parameter, and the R-charge fugacity $q^\Delta$ is identified with $q/t$, where $t^2$ is the fugacity parameter of the vector U(1)$_{F=J_L+J_R}$ symmetry in the SU(2)$_L\times$SU(2)$_R$ R-symmetry of the 3d $\mathcal{N}=4$ SUSY algebra ($t$ can also be understood as the adjoint mass). We can therefore rescale $\beta_2\rightarrow \beta_2\epsilon/2$ (together with $\Delta=1-\frac{2}{\epsilon}\log_qt$) in the partition function, and take $\epsilon\rightarrow 1$ to go to the classical limit. We will perform this procedure in section \ref{s:eff-pot} to compute the effective twisted superpotential of 3d $\mathcal{N}=2$ theories.
\paragraph{2d $\mathcal{N}=(2,2)$ theory}
The vortex partition function of 2d $\mathcal{N}=(2,2)$ theory on the plane $\mathbb{C}$ with the $\Omega$-background can be obtained in the zero-radius limit of the 3d partition function, but it can alternatively be computed as a dimensional reduction of 4d $\mathcal{N} = 1$ gauge theory on the geometry,%
\footnote{%
See \cite{Kimura:2018axa} for a related discussion.
}
\begin{eqnarray}
{\rm d}s^2={\mid {\rm d}z-iz(\epsilon{\rm d}w+\bar{\epsilon}{\rm d}\bar{w})\mid^2}+\mid{\rm d}w\mid^2,
\end{eqnarray}
where $z=x_1+ix_2$, $w=x_3+ix_4$. The explicit expression is known to be (see, e.g., \cite{Fujimori:2015zaa})
\begin{eqnarray}
{\cal I}_{a,\mu}=\int_{C_{a,\mu}}\left(\prod_{i=1}^r\frac{{\rm d}\sigma_i}{2\pi i\epsilon}\right)\exp\left(-\frac{2\pi i\sigma\cdot\tau}{\epsilon}\right)\left(\prod_{\alpha\in\hat\Delta}\Gamma\left(\frac{\alpha\cdot \sigma}{\epsilon}\right)\right)^{-1}\prod_{w\in R}\prod_{a=1}^{N_f}\Gamma\left(\frac{w\cdot \sigma+m_a}{\epsilon}\right),\label{part-exp}
\end{eqnarray}
where $\{\tau\}$ are the renormalized FI parameters for all the U(1) gauge groups, and the contour $C_{a,\mu}$ picks up the poles satisfying
\begin{eqnarray}
\mu_i\cdot \sigma=-m_{a_i}-\epsilon k_i,\label{2d-pole}
\end{eqnarray}
with $k_{1,\dots,r}\in\mathbb{Z}^r_{\geq 0}$ and $a_{1,\dots,r}$ picking out $r$ flavor labels. $\mu_i$ is a weight vector specifying the poles we pick. The above partition function can also be viewed as a disk partition function of 2d $\mathcal{N}=(2,2)$ theory with Neumann boundary condition imposed on chiral multiplets \cite{Sugishita:2013jca,Honda:2013uca,Hori:2013ika}.
\begin{comment}
It is also possible to consider the Dirichlet boundary condition, where we need to replace the factor $\Gamma\left(\frac{w\cdot \sigma+m}{\epsilon}\right)$ with
\begin{eqnarray}
\frac{-2\pi i e^{\pi i(w\cdot \sigma +m)/\epsilon}}{\Gamma\left(1-\frac{w\cdot \sigma+m}{\epsilon}\right)}.
\end{eqnarray}
By using the property
\begin{eqnarray}
\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin \pi z},
\end{eqnarray}
of the $\Gamma$-function, the Dirichlet boundary condition effectively introduces a contribution
\begin{eqnarray}
-\left(e^{\pi i(w\cdot \sigma+m)/\epsilon}-e^{-\pi i(w\cdot \sigma+m)/\epsilon}\right)e^{\pi i(w\cdot \sigma+m)/\epsilon}=1-e^{2\pi i(w\cdot \sigma+m)/\epsilon},
\end{eqnarray}
to the vortex partition function. From the result of 3d partition function, it seems that
\begin{eqnarray}
\epsilon\log(1-e^{2\pi i(w\cdot \sigma+m)/\epsilon})\rightarrow \epsilon\log(-e^{2\pi i(w\cdot \sigma+m)/\epsilon})=2\pi i(w\cdot \sigma+m).
\end{eqnarray}
\end{comment}
\subsection{Effective potential in 3d and 2d}\label{s:eff-pot}
Let us extract the effective potential of the gauge theory (in the classical limit of the $\Omega$-background, $\epsilon\rightarrow 0$). One has
\begin{eqnarray}
{\cal I}\sim \exp\left(\frac{1}{\epsilon}W_\text{eff}(\sigma^\ast,m)\right).
\end{eqnarray}
To compute the effective potential in 3d, we need the formula
\begin{eqnarray}
\epsilon\log(e^{ix};q^2)_\infty=\epsilon\sum_{n=0}^\infty\log(1-e^{ix}q^{2n})=-\epsilon\sum_{n=0}^\infty\sum_{m=1}^\infty \frac{1}{m}e^{imx}q^{2mn}\nonumber\\
=-\epsilon \sum_{m=1}^\infty \frac{1}{m}e^{imx}\frac{1}{1-e^{-m\beta_2\epsilon}}\xrightarrow{\epsilon \to 0} -\frac{1}{\beta_2}\sum_{m=1}^\infty \frac{1}{m^2}e^{imx}=-\frac{1}{\beta_2}\operatorname{Li}_2(e^{ix}),
\end{eqnarray}
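The $\epsilon\to 0$ limit above can be checked numerically: for small $\epsilon$, the truncated sum over $n$ reproduces $-\operatorname{Li}_2(e^{ix})/\beta_2$ up to $O(\epsilon)$ corrections. A minimal sketch in plain Python (the test values $x=0.7$, $\beta_2=1$, $\epsilon=10^{-3}$ are arbitrary, and $\operatorname{Li}_2$ is evaluated from its defining series):

```python
import cmath, math

def li2(z, terms=4000):
    # dilogarithm Li_2(z) from its defining series, valid for |z| <= 1
    s, zm = 0j, 1.0
    for m in range(1, terms + 1):
        zm *= z
        s += zm / m**2
    return s

x, beta2, eps = 0.7, 1.0, 1e-3
q2 = math.exp(-beta2 * eps)            # q^2 = e^{-beta_2 epsilon}
nmax = int(25 / (beta2 * eps))         # cut off once q^{2n} is negligible
lhs = eps * sum(cmath.log(1 - cmath.exp(1j * x) * q2**n) for n in range(nmax))
rhs = -li2(cmath.exp(1j * x)) / beta2
assert abs(lhs - rhs) < 5e-3           # agreement up to O(epsilon)
```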
and a related useful identity is given by
\begin{eqnarray}
\exp\left(\frac{\partial}{\partial x}\operatorname{Li}_2(e^{\pm x})\right)=(1-e^{\pm x})^{\mp 1}.\label{id-diff}
\end{eqnarray}
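Equivalently, (\ref{id-diff}) states that $\frac{\partial}{\partial x}\operatorname{Li}_2(e^{x})=-\log(1-e^{x})$, which can be verified by a finite-difference check (a sketch; the test point $x=-0.5$ is arbitrary):

```python
import math

def li2(z, terms=200):
    # dilogarithm from its series; enough terms for |z| well below 1
    s, zm = 0.0, 1.0
    for m in range(1, terms + 1):
        zm *= z
        s += zm / m**2
    return s

x, h = -0.5, 1e-6
deriv = (li2(math.exp(x + h)) - li2(math.exp(x - h))) / (2 * h)   # d/dx Li_2(e^x)
assert abs(math.exp(deriv) - 1 / (1 - math.exp(x))) < 1e-6
```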
Under the rescaling $\beta_2\rightarrow \beta_2\epsilon/2$ and $\Delta\rightarrow 1+\frac{2i}{\epsilon}\tilde{c}$, we obtain
\begin{eqnarray}
W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{\alpha \in \hat\Delta} \operatorname{Li}_2(e^{i\alpha\cdot \sigma})+\frac{1}{4\beta_2}\sum_{\alpha \in \hat\Delta} (\alpha\cdot\sigma)^2+\frac{1}{\beta_2}\sum_{w \in R} \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-iw\cdot \sigma-im_a-i\beta_2\tilde{c}})\nonumber\\
-\frac{1}{4\beta_2}\sum_{w \in R} \sum_{a=1}^{N_f}(w\cdot\sigma+m_a+\beta_2\tilde{c})^2+2\pi i\ell\zeta{\rm tr}\sigma,\label{3d-eff}
\end{eqnarray}
where all the chiral multiplets are taken to satisfy the Neumann boundary condition, and we have also rescaled the FI term $\zeta\rightarrow \zeta/\epsilon$.
To switch the $a$-th chiral multiplet to the Dirichlet boundary condition, we simply need to add a contribution
\begin{eqnarray}
W^\text{Fermi}(\sigma,m_a)=\sum_{w \in R}\frac{1}{2\beta_2}(w\cdot\sigma+m_a+\beta_2\tilde{c})^2-\frac{1}{\beta_2}\sum_{w \in R} \left(\operatorname{Li}_2(e^{iw\cdot\sigma+im_a+i\beta_2\tilde{c}})+\operatorname{Li}_2(e^{-iw\cdot\sigma-im_a-i\beta_2\tilde{c}})\right)\nonumber\\
=\frac{\pi^2}{3\beta_2}-\frac{\pi}{\beta_2}\sum_{w \in R} (w\cdot\sigma+m_a+\beta_2\tilde{c}),\nonumber\\
\end{eqnarray}
where we used the following identity,
\begin{eqnarray}
\operatorname{Li}_2(e^x)+\operatorname{Li}_2(e^{-x})=\frac{\pi^2}{3}-i\pi x-\frac{x^2}{2},\quad x/i\in[0,2\pi).
\end{eqnarray}
We remark that, as in the case of 2d $\mathcal{N}=(2,2)$ theory, (\ref{2d-pole}), the poles picked up in the contour integral take the form
\begin{eqnarray}
\mu_i\cdot \sigma=-m_{a_i}-\epsilon \beta_2 k_i-\epsilon\beta_2 \Delta,
\end{eqnarray}
for some weight vector $\mu_i$ again. In the $\epsilon\rightarrow 0$ limit, the contour integral in the partition function simply forces $\sigma$ to take a specific value $\sigma^\ast_a$, a linear function of the $m_a$'s. This is why we are allowed to treat the effective potential as an ordinary function of the variables $\sigma_a=\sigma^*_a$, the on-shell values of the critical configuration.
\begin{comment}
$q$-$\Gamma$ function
\begin{eqnarray}
\Gamma_q(x):=(1-q)^{1-x}\frac{(q;q)_\infty}{(q^x;q)_\infty},
\end{eqnarray}
which satisfies
\begin{eqnarray}
\lim_{q\rightarrow 1}\Gamma_q(x)=\Gamma(x),
\end{eqnarray}
from a similar infinite-product expression for $\Gamma(x)$,
\begin{eqnarray}
\Gamma(x)=\frac{1}{x}\prod_{n=1}^\infty \frac{(1+\frac{1}{n})^x}{1+\frac{x}{n}}.
\end{eqnarray}
Another property of $Li(x)$ we need is that
\begin{eqnarray}
\operatorname{Li}_2(e^x)+\operatorname{Li}_2(e^{-x})=\frac{\pi^2}{3}-i\pi x-\frac{x^2}{2}.
\end{eqnarray}
\begin{eqnarray}
\operatorname{Li}_2(e^x)=x\left(1-\log(-x)\right)+\sum_{k=0,k\neq 1}^\infty \frac{\zeta(2-k)}{k!}x^k.
\end{eqnarray}
\end{comment}
In the case of 2d theory, we can take the log of the integrand of (\ref{part-exp}) by using Stirling's formula,
\begin{eqnarray}
\Gamma(z+1)\sim \sqrt{2\pi z}(z/e)^z,
\end{eqnarray}
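As a quick numerical sanity check, at $z=10$ Stirling's formula reproduces $\Gamma(11)=10!$ with a relative error of order $1/(12z)$:

```python
import math

z = 10.0
exact = math.gamma(z + 1)                              # 10! = 3628800
approx = math.sqrt(2 * math.pi * z) * (z / math.e)**z  # Stirling's formula
assert abs(approx / exact - 1) < 0.01                  # error ~ 1/(12z)
```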
to obtain
\begin{eqnarray}
W_\text{eff}(\sigma,m)=-\sum_{\alpha\in\hat\Delta} \alpha\cdot \sigma(\log\alpha\cdot\sigma-1)+\sum_{w\in R}\sum_{a=1}^{N_f}(w\cdot\sigma+m_a)\left(\log(w\cdot\sigma+m_a)-1\right)\nonumber\\
-\sum_{w\in R}\sum_{a=1}^{N_f}(w\cdot\sigma+m_a)\log\epsilon-2\pi i \tau\cdot \sigma.\label{pre-eff-W}
\end{eqnarray}
Note that
\begin{eqnarray}
-\sum_{\alpha\in\hat\Delta} \alpha\cdot \sigma(\log\alpha\cdot\sigma-1)=-\sum_{\alpha\in\hat\Delta_+}\left(\alpha\cdot \sigma\log\alpha\cdot\sigma-\alpha\cdot \sigma\log\left((-\alpha)\cdot\sigma\right)\right)\nonumber\\
=\sum_{\alpha\in\hat\Delta_+}\alpha\cdot\sigma\log(-1)=2\pi i\sum_{\alpha\in\hat\Delta_+}\frac{\alpha}{2}\cdot \sigma=2\pi i\rho\cdot \sigma,
\end{eqnarray}
with the Weyl vector given by a half sum of the positive roots (their set denoted by $\hat\Delta_+$), $\rho = \frac{1}{2} \sum_{\alpha \in \hat\Delta_+} \alpha$.
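The pairing of the roots $\pm\alpha$ used here, with the principal branch $\log(-1)=i\pi$, can be checked numerically. A sketch for U($3$), whose roots are $e_i-e_j$ ($i\neq j$) and whose Weyl vector is $\rho=(1,0,-1)$; the value of $\sigma$ below is an arbitrary test point, ordered so that $\alpha\cdot\sigma>0$ for every positive root:

```python
import cmath, math

sigma = [0.9, 0.5, 0.1]       # test point with sigma_1 > sigma_2 > sigma_3
N = len(sigma)
lhs = 0j
for i in range(N):
    for j in range(N):
        if i != j:
            d = sigma[i] - sigma[j]          # alpha . sigma for alpha = e_i - e_j
            lhs -= d * (cmath.log(d) - 1)    # principal branch: log(-d) = log d + i*pi
rho = [(N - 1 - 2 * k) / 2 for k in range(N)]   # Weyl vector (1, 0, -1)
rhs = 2j * math.pi * sum(r * s for r, s in zip(rho, sigma))
assert abs(lhs - rhs) < 1e-12
```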
For the theories considered in this article (those with the same number of fundamental and anti-fundamental matters), $\sum_{w\in R}\sum_{a=1}^{N_f}(w\cdot\sigma+m_a)$ vanishes up to a constant term, and we remark that such a restriction to special matter contents agrees with the cancellation condition of the U(1)$_R$ anomaly in 2d. We finally arrive at
\begin{eqnarray}
W_\text{eff}(\sigma,m)=-2\pi i \tau\cdot \sigma+2\pi i\rho\cdot\sigma +\sum_{w\in R}\sum_{a=1}^{N_f}(w\cdot\sigma+m_a)\left(\log(w\cdot\sigma+m_a)-1\right).\label{eff-W-2d}
\end{eqnarray}
One can alternatively take a 2d limit of the 3d effective potential (\ref{3d-eff}) to derive the effective potential of 2d $\mathcal{N}=(2,2)$ theories. To see this, we rescale $\sigma$ to $\beta \ell\sigma$ and $m_a$ to $\beta \ell m_a$, and then take the $\beta\sim \beta_2\rightarrow 0$ limit. It is clear that the quadratic terms in (\ref{3d-eff}) vanish in this limit, and thus do not appear in the 2d effective potential. (\ref{eff-W-2d}) is reproduced in this limit (up to an overall scale of $\ell$ and irrelevant constant terms) by using the following formula,
\begin{eqnarray}
\operatorname{Li}_2(e^x)=x\left(1-\log(-x)\right)+\sum_{k=0,k\neq 1}^\infty \frac{\zeta(2-k)}{k!}x^k,
\end{eqnarray}
and identifying $i\ell\sigma$ in 3d with $\sigma$ in 2d.
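The first few terms of this expansion can be checked against the defining series of $\operatorname{Li}_2$. A sketch with the values $\zeta(2)=\pi^2/6$, $\zeta(0)=-\frac{1}{2}$, $\zeta(-1)=-\frac{1}{12}$, $\zeta(-2)=0$, $\zeta(-3)=\frac{1}{120}$ hardcoded and the expansion truncated at $k=5$, which suffices at the arbitrary test point $x=-0.1$:

```python
import math

def li2(z, terms=5000):
    # dilogarithm from its defining series
    s, zm = 0.0, 1.0
    for m in range(1, terms + 1):
        zm *= z
        s += zm / m**2
    return s

x = -0.1
# zeta(2 - k) for k = 0, 2, 3, 4, 5 (the k = 1 term is excluded from the sum)
zeta = {0: math.pi**2 / 6, 2: -0.5, 3: -1 / 12, 4: 0.0, 5: 1 / 120}
rhs = x * (1 - math.log(-x)) + sum(zeta[k] * x**k / math.factorial(k) for k in zeta)
assert abs(li2(math.exp(x)) - rhs) < 1e-8
```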
A contribution similar to the one proportional to $\log\epsilon$ in (\ref{pre-eff-W}),
\begin{eqnarray}
\sum_{w\in R}\sum_{a=1}^{N_f}(w\cdot \sigma+m_a)\log \beta \ell,
\end{eqnarray}
also appears in the 2d limit of the 3d effective potential, and we again expect it to sum to a constant term for the same reason as in the 2d case.
On the other hand, 3d $\mathcal{N}=2$ theories on $D^2\times S^1$ can be uplifted to 4d $\mathcal{N}=1$ theories on $D^2\times T^2$~\cite{Longhi:2019hdh}, and one would expect the Bethe/Gauge correspondence to work in parallel in 4d. However, the computation is more involved, especially in the case of the XYZ open spin chain~\cite{Fan:1996jq}, so we plan to present the details of the 4d correspondence in future work.
\section{Dictionary between A-type Gauge Theories and Closed Spin Chains}\label{s:dic-A}
Let us now reproduce the known dictionary between closed spin chains and SU-type gauge theories. First recall that the Bethe ansatz equation of the twisted closed spin chain (derived in Appendix \ref{a:bethe-open}) is given by
\begin{eqnarray}
\prod_{a=1}^L\frac{[u_i+\eta/2+\eta s_a-\vartheta_a]}{[u_i+\eta/2-\eta s_a-\vartheta_a]}=e^{i\theta}\prod_{j\neq i}^m\frac{[u_i-u_j+\eta]}{[u_i-u_j-\eta]}.
\end{eqnarray}
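As a minimal worked example, consider the rational (XXX) limit $[u]\to u$ of this equation with spin-$\frac{1}{2}$ sites ($s_a=\frac{1}{2}$), vanishing inhomogeneities ($\vartheta_a=0$), and a single Bethe root ($m=1$): the equation collapses to $((u+\eta)/u)^L=e^{i\theta}$, solved in closed form by $u_k=\eta/(e^{i(\theta+2\pi k)/L}-1)$ with $k=0,\dots,L-1$. A quick numerical check (the values of $L$, $\eta$, $\theta$ are arbitrary):

```python
import cmath, math

L, eta, theta = 4, 0.3, 0.9
for k in range(L):
    phi = (theta + 2 * math.pi * k) / L
    u = eta / (cmath.exp(1j * phi) - 1)     # closed-form Bethe root
    lhs = ((u + eta) / u)**L                # rational-limit BAE with m = 1
    assert abs(lhs - cmath.exp(1j * theta)) < 1e-9
```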
Correspondingly, we consider the 3d $\mathcal{N}=2$ U($N$) theory with one adjoint chiral multiplet, $N_f$ fundamental matters, among which the first $N_d$ obey the Dirichlet boundary condition, and $N'_f$ anti-fundamental matters, among which the first $N'_d$ obey the Dirichlet boundary condition. The effective potential for this theory is given by
\begin{eqnarray}
W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{i\neq j}^N \operatorname{Li}_2(e^{i(\sigma_i-\sigma_j)})+\frac{1}{\beta_2}\sum_{i\neq j}^N \operatorname{Li}_2(e^{-i(\sigma_i-\sigma_j)-im_\text{adj}})-\frac{1}{4\beta_2}\sum_{i\neq j}^N (\sigma_i-\sigma_j+m_\text{adj})^2\nonumber\\
+\frac{1}{\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-i\sigma_i-im_a})-\frac{1}{4\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}(\sigma_i+m_a)^2-\frac{1}{\beta_2}\sum_{i=1}^N \sum_{a=1}^{N'_f}\operatorname{Li}_2(e^{i\sigma_i-i\bar{m}_a})\nonumber\\-\frac{1}{4\beta_2}\sum_{i=1}^N \sum_{a=1}^{N'_f}(\sigma_i-\bar{m}_a)^2+\frac{1}{4\beta_2}\sum_{i\neq j} (\sigma_i-\sigma_j)^2
+2\pi i\ell\zeta{\rm tr}\sigma\nonumber\\
-\frac{\pi}{\beta_2}\sum_{i=1}^N\sum_{a=1}^{N_d}(\sigma_i+m_a)
-\frac{\pi}{\beta_2}\sum_{i=1}^N\sum_{a=1}^{N'_d}(-\sigma_i+\bar{m}_a),\nonumber\\
\end{eqnarray}
where several constant terms have been dropped in the simplification, and we have absorbed $\beta_2\tilde{c}$ into the mass parameters.
The vacuum equation,
\begin{eqnarray}
\exp\left(\beta_2 i\frac{\partial}{\partial\sigma}W^\text{3d}_\text{eff}(\sigma,m)\right)=1,
\end{eqnarray}
reads
\begin{eqnarray}
e^{-\frac{i}{2}\sum_{a=1}^{N_f}(\sigma_i+m_a)-\frac{i}{2}\sum_{a=1}^{N'_f}(\sigma_i-\bar{m}_a)} \prod_{j\neq i}\frac{1-e^{i(\sigma_j-\sigma_i)}}{1-e^{i(\sigma_i-\sigma_j)}} \frac{1-e^{i(\sigma_i-\sigma_j+m_\text{adj})}}{1-e^{i(\sigma_j-\sigma_i+m_\text{adj})}}\frac{\prod_{a=1}^{N'_f}(1-e^{i(\sigma_i-\bar{m}_a)})}{\prod_{a=1}^{N_f}(1-e^{-i(\sigma_i+m_a)})}\nonumber\\
\times e^{-2\pi \beta_2\ell\zeta-\pi i(N_d-N'_d)}=1,
\end{eqnarray}
where we used (\ref{id-diff}).
We can simplify it to
\begin{eqnarray}
(-1)^{N'_f+N_d-N'_d}e^{-2\pi \beta_2\ell\zeta}\prod_{j\neq i}\frac{\sin(\sigma_i-\sigma_j+m_\text{adj})}{\sin(\sigma_i-\sigma_j-m_\text{adj})}\frac{\prod_{a=1}^{N'_f}\sin(\sigma_i-\bar{m}_a)}{\prod_{a=1}^{N_f}\sin(\sigma_i+m_a)}=1.
\end{eqnarray}
We note that the adjoint matter is important for canceling the factor $e^{i(\sigma_i-\sigma_j)}$ coming from the vector multiplet.
In the case $N_f=N'_f$ ($\mathcal{N}=2^\ast$ theories), one can identify the above vacuum equation with the Bethe ansatz equation of closed spin chain with $L$ sites and $m$ Bethe roots. The dictionary is given by
\begin{subequations}
\begin{eqnarray}
&m\leftrightarrow N,\quad L\leftrightarrow N_f,&\\
&\pi u_i\leftrightarrow \sigma_i,&\\
&\pi \eta\leftrightarrow m_\text{adj},&\\
&i\theta \leftrightarrow \pi i(N_f+N_d+N'_d)-2\pi \beta_2\ell\zeta.&\label{theta-map}
\end{eqnarray}
\end{subequations}
We also see that we need to fix the mass parameters of the chiral multiplets to
\begin{eqnarray}
\pi (\eta/2+\eta s_a-\vartheta_a)\leftrightarrow m_a,\quad \pi (\eta/2-\eta s_a-\vartheta_a)\leftrightarrow -\bar{m}_a.
\end{eqnarray}
The correspondence here certainly works in parallel after taking the 2d limit $\beta\sim\beta_2\rightarrow 0$ in the gauge theory and the XXX limit, $[u]\rightarrow u$, of the spin chain.
\section{Vacuum Equations and Bethe Ansatz Equations of Open Spin Chain}\label{s:corr-open}
In this section, we explore the Bethe/Gauge correspondence between gauge theories with SO and Sp gauge groups and open spin chains with diagonal boundary conditions. There are two main reasons why we consider open spin chains, instead of closed ones, as the dual integrable system. Firstly, the symmetry $\sigma\leftrightarrow -\sigma$ in the effective potential of SO and Sp gauge theories naturally appears in the Bethe ansatz equation of open spin chains with diagonal boundary conditions. Secondly, as we can see from the definition of the open-chain transfer matrix, (\ref{transf-open}) and (\ref{def-K+}), it can be viewed as a ``folded'' version of the closed-chain transfer matrix, and this ``folding'' exactly corresponds to the effect of the orientifold added to the brane construction of SO and Sp gauge theories. We may further interpret the boundary operators $K^\pm$ as the realization of the orientifold in the integrable system.
Let us recall the effective potential of a general 3d $\mathcal{N}=2^\ast$ theory (\ref{3d-eff}) with a zero FI-term,
\begin{eqnarray}
W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{\alpha \in \hat\Delta} \operatorname{Li}_2(e^{i\alpha\cdot \sigma})+\frac{1}{4\beta_2}\sum_{\alpha \in \hat\Delta} (\alpha\cdot\sigma)^2+\frac{1}{\beta_2}\sum_{w \in R} \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-iw\cdot \sigma-im_a-i\beta_2\tilde{c}})\nonumber\\
-\frac{1}{4\beta_2}\sum_{w \in R} \sum_{a=1}^{N_f}(w\cdot\sigma+m_a+\beta_2\tilde{c})^2.
\end{eqnarray}
We discuss the connection between the gauge theory results and the open spin chain models based on this expression.
\subsection{SO($2N$) theory}
In the case of an SO($2N$) gauge theory with $N_f$ chiral multiplets in the vector representation, the effective potential becomes
\begin{eqnarray}
&&W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{i(\pm \sigma_i\pm \sigma_j)})+\frac{1}{4\beta_2}\sum_{i<j}^N (\pm\sigma_i\pm\sigma_j)^2+\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{-i(\pm\sigma_i\pm\sigma_j)-i\beta_2\tilde{c}})\nonumber\\
&&-\frac{1}{4\beta_2}\sum_{i<j}^N (\pm\sigma_i\pm\sigma_j+\beta_2\tilde{c})^2+\frac{1}{\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-(\pm i\sigma_i+im_a+i\beta_2\tilde{c})})-\frac{1}{4\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}(\pm\sigma_i+m_a+\beta_2\tilde{c})^2,\nonumber\\
\end{eqnarray}
as the positive roots of SO($2N$) are given by $e_i\pm e_j$ ($1\leq i<j\leq N$); here $\pm$ stands for a summation over both signs. We do not impose a specific flavor symmetry on the chiral multiplets for the moment. The vacuum equation reads
\begin{eqnarray}
e^{-i\sigma_i}\prod_{j\neq i}\frac{1-e^{-i(\sigma_i\pm \sigma_j)}}{1-e^{i(\sigma_i\pm \sigma_j)}}\frac{1-e^{i(\sigma_i\pm \sigma_j)-i\beta_2\tilde{c}}}{1-e^{-i(\sigma_i\pm \sigma_j)-i\beta_2\tilde{c}}}\prod_{a=1}^{N_f}\frac{1-e^{i\sigma_i-im_a-i\beta_2\tilde{c}}}{1-e^{-i\sigma_i-im_a-i\beta_2\tilde{c}}}=1,
\end{eqnarray}
or
\begin{eqnarray}
\prod_{j\neq i}\frac{\sin(\sigma_i\pm \sigma_j-\beta_2\tilde{c})}{\sin(-\sigma_i\pm \sigma_j-\beta_2\tilde{c})}\prod_{a=1}^{N_f}\frac{\sin(\sigma_i-m_a-\beta_2\tilde{c})}{\sin(-\sigma_i-m_a-\beta_2\tilde{c})}=1.
\end{eqnarray}
Changing the boundary condition of the chiral multiplets does not affect the vacuum equation in this case. Comparing with the general Bethe ansatz equation for the open spin chain with diagonal boundary conditions,
\begin{eqnarray}
\frac{\left[u_i+\xi_+-\frac{\eta}{2}\right]\left[u_i-\frac{\eta}{2}+\xi_-\right]\delta_+(u_i)\delta_-(-u_i)}{\left[u_i-\xi_++\frac{\eta}{2}\right]\left[u_i+\frac{\eta}{2}-\xi_-\right]\delta_+(-u_i)\delta_-(u_i)}\prod_{j\neq i}^m \frac{[u_j-u_i+\eta][u_i+u_j-\eta]}{[u_j-u_i-\eta][u_j+u_i+\eta]}=1,
\end{eqnarray}
we see that the vacuum equation can be mapped to the above Bethe ansatz equation with the boundary condition chosen as
\begin{eqnarray}
\xi_+=i\infty,\quad \xi_-=i\infty,\label{bd-SO}
\end{eqnarray}
and we also map $m\leftrightarrow N$, $\pi\eta\leftrightarrow \beta_2\tilde{c}$. We remark that $\xi=i\infty$ is also a special choice in the boundary Hamiltonian~\eqref{boundary_Ham}, but it is not the Dirichlet boundary condition.
Further using the explicit form of $\delta^\pm$,
\begin{eqnarray}
\delta^+(u)=\prod_{a=1}^L[u+\eta/2+\eta s_a-\vartheta_a],\quad \delta^-(u)=\prod_{a=1}^L[u+\eta/2-\eta s_a-\vartheta_a],
\end{eqnarray}
we see that $2L\leftrightarrow N_f$ and the mass parameters $\{m_a\}$ have to be paired as
\begin{eqnarray}
\{m_a+\beta_2\tilde{c}\}\leftrightarrow \left\{-\pi \eta s_a-\frac{\pi\eta}{2}+\pi\vartheta_a,-\pi \eta s_a+\frac{\pi\eta}{2}-\pi\vartheta_a\right\},
\end{eqnarray}
to establish the duality. We remark that, besides the shift by the contribution from the $R$-charge $\tilde{c}$ (correspondingly the part $-\pi\eta s_a$ in the spin chain), the mass parameters are paired with opposite signs. This is expected to originate from the Sp($L$) flavor symmetry of the SO-type gauge theory. We also remark that in the 2d limit this correspondence and dictionary reduce to the map between a 2d theory with SO-type gauge group and an open XXX spin chain with the same boundary condition (\ref{bd-SO}).
\subsection{SO($2N+1$) theory}
In the case of the SO($2N+1$) gauge group, the roots are given by $\{\pm e_i\pm e_j\}$ for all possible combinations of $i<j$, together with $\{\pm e_i\}_{i=1}^N$ (the total number is $2N^2$). The effective potential is thus given by
\begin{eqnarray}
&&W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{i(\pm \sigma_i\pm \sigma_j)})+\frac{1}{4\beta_2}\sum_{i<j} (\pm\sigma_i\pm\sigma_j)^2+\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{-i(\pm\sigma_i\pm\sigma_j)-i\beta_2\tilde{c}})\nonumber\\
&&-\frac{1}{4\beta_2}\sum_{i<j}^N (\pm\sigma_i\pm\sigma_j+\beta_2\tilde{c})^2+\frac{1}{\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-(\pm i\sigma_i+im_a+i\beta_2\tilde{c})})-\frac{1}{4\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}(\pm\sigma_i+m_a+\beta_2\tilde{c})^2\nonumber\\
&&-\frac{1}{\beta_2}\sum_{i=1}^N \operatorname{Li}_2(e^{\pm i\sigma_i})+\frac{1}{2\beta_2}\sigma_i^2+\frac{1}{\beta_2}\sum_{i=1}^N \operatorname{Li}_2(e^{\pm i\sigma_i-i\beta_2\tilde{c}})-\frac{1}{4\beta_2}\sum_{i=1}^N(\sigma_i\pm \beta_2\tilde{c})^2.
\end{eqnarray}
The vacuum equation reads
\begin{eqnarray}
e^{-i\sigma_i}\frac{1-e^{-i\sigma_i}}{1-e^{i\sigma_i}}\frac{1-e^{i\sigma_i-i\beta_2\tilde{c}}}{1-e^{-i\sigma_i-i\beta_2\tilde{c}}}\prod_{j\neq i}\frac{1-e^{-i(\sigma_i\pm \sigma_j)}}{1-e^{i(\sigma_i\pm \sigma_j)}}\frac{1-e^{i(\sigma_i\pm \sigma_j)-i\beta_2\tilde{c}}}{1-e^{-i(\sigma_i\pm \sigma_j)-i\beta_2\tilde{c}}}\prod_{a=1}^{N_f}\frac{1-e^{i\sigma_i-im_a-i\beta_2\tilde{c}}}{1-e^{-i\sigma_i-im_a-i\beta_2\tilde{c}}}=1,
\end{eqnarray}
or equivalently
\begin{eqnarray}
\frac{\sin(\sigma_i-\beta_2\tilde{c})}{\sin(\sigma_i+\beta_2\tilde{c})}\prod_{j\neq i}\frac{\sin(\sigma_i\pm \sigma_j-\beta_2\tilde{c})}{\sin(-\sigma_i\pm \sigma_j-\beta_2\tilde{c})}\prod_{a=1}^{N_f}\frac{\sin(\sigma_i-m_a-\beta_2\tilde{c})}{\sin(-\sigma_i-m_a-\beta_2\tilde{c})}=1,
\end{eqnarray}
where we used the fact that the flavor symmetry is expected to be Sp($L$), which means $N_f=2L$ is an even integer.
The same dictionary maps the above equation to the Bethe ansatz equation of the open spin chain with the boundary condition
\begin{eqnarray}
\xi_+=\frac{\eta}{2},\quad \xi_-=-\frac{\eta}{2}, \quad {\rm for}\ N_f:\ {\rm even},
\end{eqnarray}
which is slightly different from the boundary condition for the SO($2N$) theory~\eqref{bd-SO}: the boundary parameters now differ at the two ends of the spin chain.
\subsection{Sp($N$) theory}
Similarly, in the case of the Sp($N$) gauge theory, the roots are given by $\{\pm e_i\pm e_j\}$ for $i<j$, together with $\{\pm 2e_i\}_{i=1}^N$ (the total number is $2N^2$). The effective potential reads
\begin{eqnarray}
&&W^\text{3d}_\text{eff}(\sigma,m)=-\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{i(\pm \sigma_i\pm \sigma_j)})+\frac{1}{4\beta_2}\sum_{i<j}^N (\pm\sigma_i\pm\sigma_j)^2+\frac{1}{\beta_2}\sum_{i<j}^N \operatorname{Li}_2(e^{-i(\pm\sigma_i\pm\sigma_j)-i\beta_2\tilde{c}})\nonumber\\
&&-\frac{1}{4\beta_2}\sum_{i<j}^N (\pm\sigma_i\pm\sigma_j+\beta_2\tilde{c})^2+\frac{1}{\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}\operatorname{Li}_2(e^{-(\pm i\sigma_i+im_a+i\beta_2\tilde{c})})-\frac{1}{4\beta_2}\sum_{i=1}^N \sum_{a=1}^{N_f}(\pm\sigma_i+m_a+\beta_2\tilde{c})^2\nonumber\\
&&-\frac{1}{\beta_2}\sum_{i=1}^N \operatorname{Li}_2(e^{\pm 2i\sigma_i})+\frac{2}{\beta_2}\sigma_i^2+\frac{1}{\beta_2}\sum_{i=1}^N \operatorname{Li}_2(e^{\pm 2i\sigma_i-i\beta_2\tilde{c}})-\frac{1}{4\beta_2}\sum_{i=1}^N(2\sigma_i\pm \beta_2\tilde{c})^2,
\end{eqnarray}
and the vacuum equation is given by
\begin{eqnarray}
\frac{\sin^2(2\sigma_i-\beta_2\tilde{c})}{\sin^2(2\sigma_i+\beta_2\tilde{c})}\prod_{j\neq i}\frac{\sin(\sigma_i\pm \sigma_j-\beta_2\tilde{c})}{\sin(-\sigma_i\pm \sigma_j-\beta_2\tilde{c})}\prod_{a=1}^{N_f}\frac{\sin(\sigma_i-m_a-\beta_2\tilde{c})}{\sin(-\sigma_i-m_a-\beta_2\tilde{c})}=1.
\end{eqnarray}
This equation does not directly correspond to a Bethe ansatz equation because of the factor of $2$ in $\sin(2\sigma_i\pm\beta_2\tilde{c})$, but in the 2d limit where $[u]\rightarrow u$, it becomes a Bethe ansatz equation for the open spin chain with boundary condition,
\begin{eqnarray}
\xi_+=0,\quad \xi_-=0.
\end{eqnarray}
As mentioned around \eqref{boundary_Ham}, this corresponds to the Dirichlet boundary condition on both ends of the spin chain.
We remark that a similar squared factor, $\frac{\sin^2(2\sigma_i-\beta_2\tilde{c})}{\sin^2(2\sigma_i+\beta_2\tilde{c})}$ in the above equation, appeared in the study of $C$-type quiver gauge theories in the literature \cite{Dey:2016qqp,Chen:2018ntf}.
Such a factor arises in the context of the folding trick, which constructs non-simply-laced algebras from simply-laced ones.
\begin{comment}
Let us recall the effective potential of a general $\mathcal{N}=(2,2)^\ast$ theory, (\ref{eff-W-2d}),
\begin{eqnarray}
W_\text{eff}(\sigma)=\sum_\alpha \left(\langle \sigma\cdot \alpha\rangle +m_\text{adj}\right)\left(\log\left(\langle \sigma\cdot \alpha\rangle +m_\text{adj}\right)-1\right)-2\pi i t\ {\rm tr}\sigma+2\pi i\langle \rho\cdot \sigma\rangle\nonumber\\
+\sum_{w\in R}\sum_{w'\in rep(G)}\left(\langle w'\cdot \sigma\rangle+\langle w\cdot m\rangle\right)\left(\log\left(\langle w'\cdot \sigma\rangle+\langle w\cdot m\rangle\right)-1\right).
\end{eqnarray}
We first consider the case of the pure SO($2N$) gauge theory. The effective potential for the vector multiplet and adjoint chiral is
\begin{eqnarray}
W_\text{eff}(\sigma)=\sum_{i<j}^N(\pm\sigma_i\pm\sigma_j+m_\text{adj})(\log(\pm\sigma_i\pm\sigma_j+m_\text{adj})-1)-2\pi i t\ {\rm tr}\sigma+2\pi i\sum_{i=1}^N(N-i)\sigma_i,
\end{eqnarray}
as the positive roots of SO($2N$) is given by $e_i\pm e_j$ ($1\leq i<j\leq N$). The matter part with $L$ chiral multiplets in the fundamental representation is given by
\begin{eqnarray}
W^{matter}_\text{eff}(\sigma)=\sum_{i=1}^N\sum_{f=1}^{L}(\pm \sigma_i+m_i)(\log(\pm \sigma_i+m_i)-1),
\end{eqnarray}
where we turned on the twisted mass deformation.
The vacuum condition is given by
\begin{eqnarray}
\prod_{j\neq i}\frac{\sigma_i\pm \sigma_j+m_\text{adj}}{\sigma_i\pm \sigma_j-m_\text{adj}}=(-1)^{N-1}e^{2\pi it}\prod_{f=1}^{2L}\frac{\sigma_i+m_i}{\sigma_i-m_i}.
\end{eqnarray}
In the case of SO($2N+1$) gauge group, all the roots are given by $\pm e_i\pm e_j$ for $i<j$ and $\pm e_i$ (the total number is $2N^2$). The pure gauge theory effective potential then reads
\begin{eqnarray}
W_\text{eff}(\sigma)=\sum_{i<j}^N(\pm\sigma_i\pm\sigma_j+iu)(\log(\pm\sigma_i\pm\sigma_j+iu)-1)-2\pi i t\ {\rm tr}\sigma+2\pi i\sum_{i=1}^N(N-i+\frac{1}{2})\sigma_i\nonumber\\
+\sum_{i=1}^N(\pm \sigma_i+iu)(\log(\pm\sigma_i+iu)-1),
\end{eqnarray}
which now leads to the vacuum equation
\begin{eqnarray}
\prod_{j\neq i}^N\frac{\sigma_i\pm \sigma_j+iu}{\sigma_i\pm \sigma_j-iu}=(-1)^{N}e^{2\pi it}\frac{\sigma-iu}{\sigma+iu}\prod_{f=1}^{2L}\frac{\sigma_i+m_f}{\sigma_i-m_f}.
\end{eqnarray}
The corresponding boundary condition is given by
\begin{eqnarray}
\xi_+=\infty,\quad \xi_-=-\frac{1}{2}\eta.
\end{eqnarray}
Similarly in the case of Sp($N$), all the roots are given by $\pm e_i\pm e_j$ for $i<j$ and $\pm 2e_i$ (the total number is $2N^2$). The vacuum equation is then given by
\begin{eqnarray}
\prod_{j\neq i}^N\frac{\sigma_i\pm \sigma_j+iu}{\sigma_i\pm \sigma_j-iu}=(-1)^{N}e^{2\pi it}\frac{(2\sigma-iu)^2}{(2\sigma+iu)^2}\prod_{f=1}^{2L}\frac{\sigma_i+m_f}{\sigma_i-m_f},
\end{eqnarray}
The corresponding boundary condition is
\begin{eqnarray}
\xi_+=0,\quad \xi_-=0.
\end{eqnarray}
\end{comment}
\section{$A_2$ quiver}\label{s:A2-qui}
In general, the correspondence between gauge theories and spin chains can be promoted to higher-rank cases~\cite{Dorey:2011pa,Chen:2011sj,Chen:2012we,Nekrasov:2012xe,Nekrasov:2013xda}.
In this section, we explore the correspondence between the $A_2$ quiver, which is the simplest non-trivial quiver gauge theory, and the $\mathfrak{sl}_3$ spin chain model.
\subsection{$\mathfrak{sl}_3$ spin chain}
The R-matrix associated to the quantum group $U_q(\widehat{\mathfrak{sl}}_3)$ is known to take the form \cite{PERK1981407}
\begin{eqnarray}
{\bf R}(u)=\left(\begin{array}{ccc|ccc|ccc}
[u+\eta] & & & & & & & & \\
& [u] & & e^{i\pi u}[\eta] & & & & &\\
& & [u] & & & & e^{i\pi u}[\eta] & & \\
\hline
& e^{-i\pi u}[\eta] & & [u] & & & & &\\
& & & & [u+\eta] & & & & \\
& & & & & [u] & & e^{i\pi u}[\eta] & \\
\hline
& & e^{-i\pi u}[\eta] & & & & [u] & & \\
& & & & & e^{-i\pi u}[\eta] & & [u] & \\
& & & & & & & & [u+\eta]\\
\end{array}
\right),
\end{eqnarray}
in the convention of this article.
This R-matrix satisfies the standard properties of an R-matrix; in particular,
\begin{eqnarray}
{\bf R}(0)={\cal P}.
\end{eqnarray}
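In the rational limit $[u]\to u$, with the phase factors $e^{\pm i\pi u}$ dropped, this R-matrix reduces to Yang's rational R-matrix $R(u)=u\,{\bf 1}+\eta\,{\cal P}$. Both the property $R(0)=\eta\,{\cal P}$ (hence $R(0)={\cal P}$ at $\eta=1$) and the Yang-Baxter equation can then be verified directly; a self-contained sketch using plain Python lists:

```python
import itertools

n, eta = 3, 1.0        # local dimension (sl_3) and the parameter eta

def kron(a, b):
    return n * a + b

def R(u):
    # Yang's rational R-matrix on C^n (x) C^n: R(u) = u*Id + eta*P
    d = n * n
    M = [[0.0] * d for _ in range(d)]
    for a in range(n):
        for b in range(n):
            M[kron(a, b)][kron(a, b)] += u     # identity part
            M[kron(a, b)][kron(b, a)] += eta   # permutation part
    return M

def matmul(A, B):
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def embed(M, p, q):
    # embed a two-site operator M into sites (p, q) of a three-site chain
    d = n**3
    out = [[0.0] * d for _ in range(d)]
    for idx in itertools.product(range(n), repeat=3):
        for jdx in itertools.product(range(n), repeat=3):
            if all(idx[s] == jdx[s] for s in range(3) if s not in (p, q)):
                i3 = idx[0] * n * n + idx[1] * n + idx[2]
                j3 = jdx[0] * n * n + jdx[1] * n + jdx[2]
                out[i3][j3] = M[kron(idx[p], idx[q])][kron(jdx[p], jdx[q])]
    return out

# Yang-Baxter equation: R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v)
u, v = 0.37, -0.21
R12, R13, R23 = embed(R(u - v), 0, 1), embed(R(u), 0, 2), embed(R(v), 1, 2)
lhs = matmul(matmul(R12, R13), R23)
rhs = matmul(matmul(R23, R13), R12)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(27) for j in range(27))

# R(0) is eta times the permutation operator P
P = R(0)
assert all(abs(P[kron(a, b)][kron(c, d)] - (eta if (a, b) == (d, c) else 0.0)) < 1e-15
           for a in range(n) for b in range(n) for c in range(n) for d in range(n))
```

The same embedding routine extends to longer chains, where ordered products of such R-matrices build the monodromy matrix.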
The Bethe ansatz of a general periodic spin chain associated to the R-matrix of a Lie algebra $\mathfrak{g}$ is well known in the literature \cite{Dorey:2006an}. In the case of $\mathfrak{g}=\mathfrak{sl}_3$, there are two sets of Bethe roots, $\{u^{(1)}_i\}_{i=1}^{m_1}$ and $\{u^{(2)}_i\}_{i=1}^{m_2}$. The Bethe ansatz equations are \cite{Babelon:1981un,Babelon:1982gp}
\begin{subequations}
\begin{eqnarray}
\prod_{a=1}^{L_1}\frac{[u^{(1)}_i-\vartheta^{(1)}_a+\eta/2]}{[u^{(1)}_i-\vartheta^{(1)}_a-\eta/2]}\prod_{j=1}^{m_2}\frac{[u^{(1)}_i-u^{(2)}_j+\eta/2]}{[u^{(1)}_i-u^{(2)}_j-\eta/2]}=e^{i\delta^{(1)}}\prod_{j=1}^{m_1}\frac{[u^{(1)}_i-u^{(1)}_j+\eta]}{[u^{(1)}_i-u^{(1)}_j-\eta]},\\
\prod_{a=1}^{L_2}\frac{[u^{(2)}_i-\vartheta^{(2)}_a+\eta/2]}{[u^{(2)}_i-\vartheta^{(2)}_a-\eta/2]}\prod_{j=1}^{m_1}\frac{[u^{(2)}_i-u^{(1)}_j+\eta/2]}{[u^{(2)}_i-u^{(1)}_j-\eta/2]}=e^{i\delta^{(2)}}\prod_{j=1}^{m_2}\frac{[u^{(2)}_i-u^{(2)}_j+\eta]}{[u^{(2)}_i-u^{(2)}_j-\eta]}.
\end{eqnarray}
\end{subequations}
\subsection{$A_2$ quiver gauge theory}
Correspondingly, we consider a 3d gauge theory with the $A_2$ quiver structure.
That is, we glue two gauge nodes with two bifundamental chiral multiplets (together they form a 3d $\mathcal{N}=4$ bifundamental hypermultiplet in the massless limit).
The quiver diagram of the theory we consider is given by
\begin{align}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\draw[ultra thick,->] (-2,1.5)--(-1,0.75);
\draw[ultra thick] (-2,-1.5)--(-1,-0.75);
\draw[ultra thick,<-] (-1,-0.75)--(0,0);
\draw[ultra thick] (-1,0.75)--(0,0);
\draw[ultra thick,->] (0,0) to [out=45,in=180] (1,0.5);
\draw[ultra thick] (1,0.5) to [out=0,in=135] (2,0);
\draw[ultra thick,->] (2,0) to [out=-135,in=0] (1,-0.5);
\draw[ultra thick] (1,-0.5) to [out=180,in=-45] (0,0);
\draw[ultra thick,->] (2,0)--(3,0.75);
\draw[ultra thick] (3,0.75)--(4,1.5);
\draw[ultra thick] (2,0)--(3,-0.75);
\draw[ultra thick,<-] (3,-0.75)--(4,-1.5);
\draw[ultra thick,->-] (0,.3) arc [x radius = .3, y radius = .5, start angle = 270, end angle = -90];
\draw[ultra thick,->-] (2,.3) arc [x radius = .3, y radius = .5, start angle = 270, end angle = -90];
\draw[fill=yellow] (0,0) circle (0.5) node {$N_1$};
\draw[fill=yellow] (-2.5,2) rectangle (-1.5,1);
\draw[fill=yellow] (2,0) circle (0.5) node {$N_2$};
\draw[fill=yellow] (3.5,2) rectangle (4.5,1);
\draw[fill=yellow] (-2.5,-2) rectangle (-1.5,-1);
\draw[fill=yellow] (3.5,-2) rectangle (4.5,-1);
\node at (-2,1.5) {$N_f^{(1)}$};
\node at (-2,-1.5) {$\bar{N}_f^{(1)}$};
\node at (4,1.5) {$\bar{N}_f^{(2)}$};
\node at (4,-1.5) {$N_f^{(2)}$};
\end{tikzpicture}
\end{align}
where the yellow circular nodes stand for SU-type gauge groups and the yellow rectangular nodes for flavor symmetries.
The contribution of these bifundamental matters to the effective potential reads
\begin{eqnarray}
W^\text{3d bfd}_\text{eff}=\frac{1}{\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} \operatorname{Li}_2(e^{-i(\sigma^{(1)}_i-\sigma^{(2)}_j)-im_\text{bfd}})
-\frac{1}{4\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} (\sigma^{(1)}_i-\sigma^{(2)}_j+m_\text{bfd})^2\nonumber\\
+\frac{1}{\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} \operatorname{Li}_2(e^{-i(\sigma^{(2)}_j-\sigma^{(1)}_i)-im_\text{bfd}})
-\frac{1}{4\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} (\sigma^{(2)}_j-\sigma^{(1)}_i+m_\text{bfd})^2.
\end{eqnarray}
The vacuum equations are then given by
\begin{subequations}
\begin{eqnarray}
e^{i\theta^{(1)}}\prod_{j\neq i}^{N_1}\frac{\sin(\sigma^{(1)}_i-\sigma^{(1)}_j+m^{(1)}_\text{adj})}{\sin(\sigma^{(1)}_i-\sigma^{(1)}_j-m^{(1)}_\text{adj})}\frac{\prod_{a=1}^{\bar{N}^{(1)}_f}\sin(\sigma^{(1)}_i-\bar{m}^{(1)}_a)}{\prod_{a=1}^{N^{(1)}_f}\sin(\sigma^{(1)}_i+m^{(1)}_a)}\prod_{k=1}^{N_2}\frac{\sin(\sigma^{(2)}_k-\sigma^{(1)}_i+m_\text{bfd})}{\sin(\sigma^{(1)}_i-\sigma^{(2)}_k+m_\text{bfd})}=1,\\
e^{i\theta^{(2)}}\prod_{j\neq i}^{N_2}\frac{\sin(\sigma^{(2)}_i-\sigma^{(2)}_j+m^{(2)}_\text{adj})}{\sin(\sigma^{(2)}_i-\sigma^{(2)}_j-m^{(2)}_\text{adj})}\frac{\prod_{a=1}^{\bar{N}^{(2)}_f}\sin(\sigma^{(2)}_i-\bar{m}^{(2)}_a)}{\prod_{a=1}^{N^{(2)}_f}\sin(\sigma^{(2)}_i+m^{(2)}_a)}\prod_{k=1}^{N_1}\frac{\sin(\sigma^{(1)}_k-\sigma^{(2)}_i+m_\text{bfd})}{\sin(\sigma^{(2)}_i-\sigma^{(1)}_k+m_\text{bfd})}=1,
\end{eqnarray}
\end{subequations}
where $\theta^{(1,2)}$ are expressed in terms of the gauge theory quantities as in (\ref{theta-map}).
If we can adjust the mass parameters to satisfy $m_\text{bfd}=\frac{1}{2}m_\text{adj}^{(1)}=\frac{1}{2}m_\text{adj}^{(2)}\leftrightarrow \frac{\eta}{2}$, then we obtain the $A_2$-type Bethe ansatz equations from the above $A_2$-quiver gauge theory under the map
\begin{subequations}
\begin{eqnarray}
\delta^{(i)}\leftrightarrow \theta^{(i)}+i\pi N_i,\quad N^{(i)}_f=\bar{N}^{(i)}_f\leftrightarrow L_i,\quad N_i\leftrightarrow m_i,\\
m^{(i)}_a\leftrightarrow \frac{\eta}{2}-\vartheta^{(i)}_a,\quad \bar{m}^{(i)}_a\leftrightarrow \frac{\eta}{2}+\vartheta^{(i)}_a,
\end{eqnarray}
\end{subequations}
for $i=1,2$.
\subsection{Open boundary condition}
In the case of the open spin chain, the Bethe ansatz has been worked out in \cite{Sun:2017wjh}. We focus on a special diagonal boundary operator of the form
\begin{eqnarray}
K^+(u)=\left(\begin{array}{ccc}
-e^{i\pi u} [u-\xi] & 0 & 0\\
0 & -e^{i\pi u} [u-\xi] & 0\\
0 & 0 & e^{-i\pi u}[u+\xi]\\
\end{array}\right),
\end{eqnarray}
and its dual
\begin{eqnarray}
K^-(u)=\left(\begin{array}{ccc}
e^{-i\pi u+\frac{5}{2}i\pi \eta} [u+\bar{\xi}+3\eta/2] & 0 & 0\\
0 & e^{-i\pi u+\frac{1}{2}i\pi \eta} [u+\bar{\xi}+3\eta/2] & 0\\
0 & 0 & -e^{i\pi u+\frac{3}{2}i\pi \eta}[u-\bar{\xi}+3\eta/2]\\
\end{array}\right),\nonumber\\
\end{eqnarray}
in this article. The boundary operator $K^+(u)$ again satisfies (\ref{b-YBE}), while the dual boundary Yang-Baxter equation is slightly modified to
\begin{eqnarray}
&&{\bf R}_{12}(-u_1+u_2)K_1^-(u_1)M^{-1}_1{\bf R}_{21}(-u_1-u_2-3\eta)M_1K^-_2(u_2)\nonumber\\
&&=K^-_2(u_2)M^{-1}_2{\bf R}_{12}(-u_1-u_2-3\eta)M_2K^-_1(u_1){\bf R}_{21}(u_2-u_1),
\end{eqnarray}
where
\begin{eqnarray}
M={\rm diag}\left(e^{4\eta},e^{2\eta},1\right).
\end{eqnarray}
It follows from the crossing unitarity relation,
\begin{eqnarray}
{\bf R}^{t_1}_{12}M_1{\bf R}^{t_1}_{21}(-u-3\eta)M_1^{-1}=\rho''(u){\bf I},
\end{eqnarray}
for some function $\rho''(u)$, in the case of the $\mathfrak{sl}_3$ R-matrix.
The Bethe ansatz equations are
\begin{subequations}
\begin{eqnarray}
\frac{[2u^{(1)}_i-\eta]}{[2u^{(1)}_i+\eta]}\frac{[u^{(1)}_i+\xi+\eta/2][u^{(1)}_i-\bar{\xi}]}{[u^{(1)}_i-\xi-\eta/2][u^{(1)}_i+\bar{\xi}]}\prod_{j=1}^{m_1}\frac{[u^{(1)}_i-u^{(1)}_j+\eta][u^{(1)}_i+u^{(1)}_j+\eta]}{[u^{(1)}_i-u^{(1)}_j-\eta][u^{(1)}_i+u^{(1)}_j-\eta]}\nonumber\\
\prod_{k=1}^{m_2}\frac{[u^{(1)}_i-u^{(2)}_k-\frac{\eta}{2}][u^{(1)}_i+u^{(2)}_k-\frac{\eta}{2}]}{[u^{(1)}_i-u^{(2)}_k+\frac{\eta}{2}][u^{(1)}_i+u^{(2)}_k+\frac{\eta}{2}]}\prod_{a=1}^{L_1}\frac{[u^{(1)}_i+\vartheta^{(1)}_a-\frac{\eta}{2}][u^{(1)}_i-\vartheta^{(1)}_a-\frac{\eta}{2}]}{[u^{(1)}_i+\vartheta^{(1)}_a+\frac{\eta}{2}][u^{(1)}_i-\vartheta^{(1)}_a+\frac{\eta}{2}]}=1,\label{bethe-sl3-1}\\
\frac{[2u^{(2)}_i+\eta]}{[2u^{(2)}_i-\eta]}\frac{[u^{(2)}_i+\xi][u^{(2)}_i-\bar{\xi}-\eta/2]}{[u^{(2)}_i-\xi][u^{(2)}_i+\bar{\xi}+\eta/2]}\prod_{j=1}^{m_1}\frac{[u_i^{(2)}-u^{(1)}_j+\eta/2][u^{(2)}_i+u^{(1)}_j+\eta/2]}{[u^{(2)}_i-u^{(1)}_j-\eta/2][u^{(2)}_i+u^{(1)}_j-\eta/2]}\nonumber\\
\prod_{j=1}^{m_2}\frac{[u^{(2)}_i-u^{(2)}_j-\eta][u^{(2)}_i+u^{(2)}_j-\eta]}{[u^{(2)}_i-u^{(2)}_j+\eta][u^{(2)}_i+u^{(2)}_j+\eta]}=1.\label{bethe-sl3-2}
\end{eqnarray}
\end{subequations}
In the context of gauge theory, there are two possibilities we would like to analyze. One is an Sp($N_1$)-SO($2N_2$) quiver gauge theory, i.e. one gauge node (say the first) has gauge group Sp($N_1$) and the other (the second) has gauge group SO($2N_2$). Since all representations of the Sp and SO Lie algebras are either real or pseudo-real, a 3d $\mathcal{N}=4$ half-hypermultiplet in these gauge theories is equivalent to one 3d $\mathcal{N}=2$ chiral multiplet (see~\cite{Hollands:2010xa} for a related discussion in 4d $\mathcal{N}=2$ theories).
The quiver structure thus looks like the following (SO and Sp nodes are respectively depicted in blue and green).
\begin{align}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\draw[ultra thick] (-2,0)--(0,0);
\draw[ultra thick] (2,0)--(3.5,0);
\draw[ultra thick] (0,0)--(2,0);
\draw[ultra thick,->-] (0,.3) arc [x radius = .3, y radius = .5, start angle = 270, end angle = -90];
\draw[ultra thick,->-] (2,.3) arc [x radius = .3, y radius = .5, start angle = 270, end angle = -90];
\draw[fill=green] (0,0) circle (0.5) node {$N_1$};
\draw[fill=blue!40] (-2.5,0.5) rectangle (-1.5,-0.5);
\draw[fill=blue!40] (2,0) circle (0.5) node {$N_2$};
\draw[fill=green] (3.5,0.5) rectangle (4.5,-0.5);
\node at (-2,0) {$N_f^{(1)}$};
\node at (4,0) {$N_f^{(2)}$};
\end{tikzpicture}
\end{align}
The effective potential of the bifundamental matter part is given by\footnote{One is allowed to turn on the fugacity parameter $\Delta$ of the 3d $\mathcal{N}=2$ (bifundamental) chiral multiplet, which gives rise to an effective bifundamental mass $m_\text{bfd}$. In the 2d limit, this corresponds to the twisted mass deformation of the 2d $\mathcal{N}=(2,2)$ chiral multiplet.}
\begin{eqnarray}
W^\text{3d bfd}_\text{eff}=\frac{1}{\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} \operatorname{Li}_2(e^{-i(\pm \sigma^{(1)}_i+\pm\sigma^{(2)}_j)-im_\text{bfd}})
-\frac{1}{4\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} (\pm\sigma^{(1)}_i+\pm\sigma^{(2)}_j+m_\text{bfd})^2.
\end{eqnarray}
The vacuum equations are found to be
\begin{subequations}
\begin{eqnarray}
\frac{\sin^2(2\sigma^{(1)}_i-\beta_2\tilde{c}_1)}{\sin^2(2\sigma^{(1)}_i+\beta_2\tilde{c}_1)}\prod_{j\neq i}^{N_1}\frac{\sin(\sigma^{(1)}_i\pm \sigma^{(1)}_j-\beta_2\tilde{c}_1)}{\sin(-\sigma^{(1)}_i\pm \sigma^{(1)}_j-\beta_2\tilde{c}_1)}\prod_{a=1}^{N^{(1)}_f}\frac{\sin(\sigma^{(1)}_i-m^{(1)}_a-\beta_2\tilde{c}_1)}{\sin(-\sigma^{(1)}_i-m^{(1)}_a-\beta_2\tilde{c}_1)}\nonumber\\
\times\prod_{j=1}^{N_2}\frac{\sin(\sigma^{(1)}_i\pm\sigma^{(2)}_j-m_\text{bfd})}{\sin(-\sigma^{(1)}_i\pm\sigma^{(2)}_j-m_\text{bfd})}=1,\\
\prod_{j\neq i}\frac{\sin(\sigma^{(2)}_i\pm \sigma^{(2)}_j-\beta_2\tilde{c}_2)}{\sin(-\sigma^{(2)}_i\pm \sigma^{(2)}_j-\beta_2\tilde{c}_2)}\prod_{a=1}^{N^{(2)}_f}\frac{\sin(\sigma^{(2)}_i-m^{(2)}_a-\beta_2\tilde{c}_2)}{\sin(-\sigma^{(2)}_i-m^{(2)}_a-\beta_2\tilde{c}_2)}\prod_{j=1}^{N_1}\frac{\sin(\sigma^{(2)}_i\pm\sigma^{(1)}_j-m_\text{bfd})}{\sin(-\sigma^{(2)}_i\pm\sigma^{(1)}_j-m_\text{bfd})}=1.
\end{eqnarray}
\end{subequations}
We note that the Bethe ansatz equations (\ref{bethe-sl3-1}) and (\ref{bethe-sl3-2}) we want to map to are symmetric under the exchange $u^{(1)}\leftrightarrow u^{(2)}$, $\xi\leftrightarrow \bar{\xi}$. As in the case of the $A_1$ quiver gauge theory with Sp($N$) gauge group, we again find it difficult to realize the factor $\frac{\sin^2(2\sigma^{(1)}_i-\beta_2\tilde{c}_1)}{\sin^2(2\sigma^{(1)}_i+\beta_2\tilde{c}_1)}$. However, if we take the 2d limit, $\sin\sigma\rightarrow \sigma$, we can choose
\begin{eqnarray}
\xi=0,\quad \bar{\xi}=0,
\end{eqnarray}
to match the Bethe ansatz equations (\ref{bethe-sl3-1}) and (\ref{bethe-sl3-2}). Here we used the dictionary
\begin{eqnarray}
\pi\eta\leftrightarrow \beta_2\tilde{c}_1=\beta_2\tilde{c}_2,\quad -\frac{\pi\eta}{2}\leftrightarrow m_\text{bfd},\label{dict-A2}
\end{eqnarray}
and also added an imaginary site with $\vartheta^{(1)}_0=0$ in the spin chain (that is $L_1-1\leftrightarrow N^{(1)}_f$) to absorb an overall factor of $\frac{[u-\eta/2]^2}{[u+\eta/2]^2}$ in equation (\ref{bethe-sl3-1}).
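For the reader's convenience, the absorption is an elementary substitution: each spin-chain inhomogeneity enters (\ref{bethe-sl3-1}) through a four-bracket factor, which at the imaginary site reduces to exactly the absorbed overall factor,

```latex
% Per-site inhomogeneity factor in (bethe-sl3-1), evaluated at the
% imaginary site \vartheta^{(1)}_0 = 0.
\frac{\big[u+\vartheta^{(1)}_0-\tfrac{\eta}{2}\big]\big[u-\vartheta^{(1)}_0-\tfrac{\eta}{2}\big]}
     {\big[u+\vartheta^{(1)}_0+\tfrac{\eta}{2}\big]\big[u-\vartheta^{(1)}_0+\tfrac{\eta}{2}\big]}
\,\Bigg|_{\vartheta^{(1)}_0=0}
=\frac{[u-\eta/2]^2}{[u+\eta/2]^2}.
```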
Another candidate theory we would like to consider is the Sp($N_1$)-SO($2N_2+1$) quiver gauge theory. The bifundamental contribution to the effective potential is
\begin{eqnarray}
W^\text{3d bfd}_\text{eff}=\frac{1}{\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} \operatorname{Li}_2(e^{-i(\pm \sigma^{(1)}_i+\pm\sigma^{(2)}_j)-im_\text{bfd}})
-\frac{1}{4\beta_2}\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} (\pm\sigma^{(1)}_i+\pm\sigma^{(2)}_j+m_\text{bfd})^2\nonumber\\
+\frac{1}{\beta_2}\sum_{i=1}^{N_1}\operatorname{Li}_2(e^{-i(\pm \sigma^{(1)}_i)-im_\text{bfd}})
-\frac{1}{4\beta_2}\sum_{i=1}^{N_1}(\pm\sigma^{(1)}_i+m_\text{bfd})^2.
\end{eqnarray}
Then the vacuum equations are found to be
\begin{subequations}
\begin{eqnarray}
\frac{\sin^2(2\sigma^{(1)}_i-\beta_2\tilde{c}_1)}{\sin^2(2\sigma^{(1)}_i+\beta_2\tilde{c}_1)}\prod_{j\neq i}^{N_1}\frac{\sin(\sigma^{(1)}_i\pm \sigma^{(1)}_j-\beta_2\tilde{c}_1)}{\sin(-\sigma^{(1)}_i\pm \sigma^{(1)}_j-\beta_2\tilde{c}_1)}\prod_{a=1}^{N^{(1)}_f}\frac{\sin(\sigma^{(1)}_i-m^{(1)}_a-\beta_2\tilde{c}_1)}{\sin(-\sigma^{(1)}_i-m^{(1)}_a-\beta_2\tilde{c}_1)}\nonumber\\
\times\prod_{j=1}^{N_2}\frac{\sin(\sigma^{(1)}_i\pm\sigma^{(2)}_j-m_\text{bfd})}{\sin(-\sigma^{(1)}_i\pm\sigma^{(2)}_j-m_\text{bfd})}\times\frac{\sin(\sigma^{(1)}_i-m_\text{bfd})}{\sin(-\sigma^{(1)}_i-m_\text{bfd})}=1,\\
\prod_{j\neq i}\frac{\sin(\sigma^{(2)}_i\pm \sigma^{(2)}_j-\beta_2\tilde{c}_2)}{\sin(-\sigma^{(2)}_i\pm \sigma^{(2)}_j-\beta_2\tilde{c}_2)}\prod_{a=1}^{N^{(2)}_f}\frac{\sin(\sigma^{(2)}_i-m^{(2)}_a-\beta_2\tilde{c}_2)}{\sin(-\sigma^{(2)}_i-m^{(2)}_a-\beta_2\tilde{c}_2)}\prod_{j=1}^{N_1}\frac{\sin(\sigma^{(2)}_i\pm\sigma^{(1)}_j-m_\text{bfd})}{\sin(-\sigma^{(2)}_i\pm\sigma^{(1)}_j-m_\text{bfd})}\nonumber\\
\times\frac{\sin(\sigma^{(2)}_i-\beta_2\tilde{c}_2)}{\sin(\sigma^{(2)}_i+\beta_2\tilde{c}_2)}=1.
\end{eqnarray}
\end{subequations}
Under the same map (\ref{dict-A2}), we note that it is also possible in the 2d limit to choose
\begin{eqnarray}
\xi=0,\quad \bar{\xi}=\frac{\eta}{2},
\end{eqnarray}
to match the vacuum equations with the Bethe ansatz equations. This time we need to further add two imaginary sites with $\vartheta^{(1)}_0=\vartheta^{(1)}_{0'}=0$ ($L_1-2\leftrightarrow N^{(1)}_f$).
\section{$A_r$ quiver}\label{s:Ar_quiver}
The correspondence between gauge theories and spin chains extends to the higher rank cases~\cite{Dorey:2011pa,Chen:2011sj,Chen:2012we,Nekrasov:2012xe,Nekrasov:2013xda}.
From this point of view, $A_r$ quiver gauge theory with SO and Sp symmetry is expected to correspond to the $\mathfrak{sl}_{r+1}$ spin chain model with the diagonal-type open boundary condition.
The vacuum equations in such gauge theories are easy to find. For an Sp gauge group at the $\alpha$-th node, the vacuum equation for $\alpha = 1,3,5,\ldots,2\lfloor\frac{r-1}{2}\rfloor+1$ reads
\begin{eqnarray}
&&\frac{\sin^2(2\sigma^{(\alpha)}_i-\beta_2\tilde{c}_\alpha)}{\sin^2(2\sigma^{(\alpha)}_i+\beta_2\tilde{c}_\alpha)}\prod_{j\neq i}^{N_\alpha}\frac{\sin(\sigma^{(\alpha)}_i\pm \sigma^{(\alpha)}_j-\beta_2\tilde{c}_\alpha)}{\sin(-\sigma^{(\alpha)}_i\pm \sigma^{(\alpha)}_j-\beta_2\tilde{c}_\alpha)}\prod_{j=1}^{N_{\alpha-1}}\frac{\sin(\sigma^{(\alpha)}_i\pm\sigma^{(\alpha-1)}_j-m^{(\alpha-1,\alpha)}_\text{bfd})}{\sin(-\sigma^{(\alpha)}_i\pm\sigma^{(\alpha-1)}_j-m^{(\alpha-1,\alpha)}_\text{bfd})}\nonumber\\
&&\times\prod_{k=1}^{N_{\alpha+1}}\frac{\sin(\sigma^{(\alpha)}_i\pm\sigma^{(\alpha+1)}_k-m^{(\alpha,\alpha+1)}_\text{bfd})}{\sin(-\sigma^{(\alpha)}_i\pm\sigma^{(\alpha+1)}_k-m^{(\alpha,\alpha+1)}_\text{bfd})}\times\frac{\sin^{\delta_{\alpha-1}}(\sigma^{(\alpha)}_i-m^{(\alpha-1,\alpha)}_\text{bfd})}{\sin^{\delta_{\alpha-1}}(-\sigma^{(\alpha)}_i-m^{(\alpha-1,\alpha)}_\text{bfd})}\frac{\sin^{\delta_{\alpha+1}}(\sigma^{(\alpha)}_i-m^{(\alpha,\alpha+1)}_\text{bfd})}{\sin^{\delta_{\alpha+1}}(-\sigma^{(\alpha)}_i-m^{(\alpha,\alpha+1)}_\text{bfd})}=1,\nonumber\\
\end{eqnarray}
where we set the gauge group at the $(\alpha\pm 1)$-th gauge node to be SO($2N_{\alpha\pm 1}+\delta_{\alpha\pm 1}$). For a gauge node with SO($2N_\beta+\delta_\beta$) gauge group at the $\beta$-th site for $\beta = 2,4,6,\ldots,2\lfloor\frac{r}{2}\rfloor$, we have
\begin{eqnarray}
\prod_{j\neq i}\frac{\sin(\sigma^{(\beta)}_i\pm \sigma^{(\beta)}_j-\beta_2\tilde{c}_\beta)}{\sin(-\sigma^{(\beta)}_i\pm \sigma^{(\beta)}_j-\beta_2\tilde{c}_\beta)}\prod_{j=1}^{N_{\beta-1}}\frac{\sin(\sigma^{(\beta)}_i\pm\sigma^{(\beta-1)}_j-m^{(\beta-1,\beta)}_\text{bfd})}{\sin(-\sigma^{(\beta)}_i\pm\sigma^{(\beta-1)}_j-m^{(\beta-1,\beta)}_\text{bfd})}\prod_{k=1}^{N_{\beta+1}}\frac{\sin(\sigma^{(\beta)}_i\pm\sigma^{(\beta+1)}_k-m^{(\beta,\beta+1)}_\text{bfd})}{\sin(-\sigma^{(\beta)}_i\pm\sigma^{(\beta+1)}_k-m^{(\beta,\beta+1)}_\text{bfd})}\nonumber\\
\times\frac{\sin^{\delta_\beta}(\sigma^{(\beta)}_i-\beta_2\tilde{c}_\beta)}{\sin^{\delta_\beta}(\sigma^{(\beta)}_i+\beta_2\tilde{c}_\beta)}=1.\nonumber\\
\end{eqnarray}
We leave the comparison with the Bethe ansatz equations to future work.
\section{Conclusion and Discussion}\label{s:dis}
In this article, we generalized the Bethe/Gauge correspondence, first proposed in \cite{NS1,NS2} for A-type gauge theories, to BCD-type gauge groups. We saw that the corresponding spin chain on the Bethe side is modified to one with open boundaries. In the correspondence with 2d gauge theories, we found that we can always choose diagonal boundary conditions for the open XXX spin chain, with the parameters $\xi$ specified to $\xi=0$, $\pm\frac{1}{2}$ or $\infty$, so as to realize the vacuum equation of the gauge theory from the Bethe ansatz equation. Furthermore, in the case of SO-type gauge groups, one can uplift the correspondence to a map between 3d gauge theories and the open XXZ spin chain. On the other hand, when the gauge group is of Sp-type, the straightforward uplift does not work, for reasons that remain unclear. A similar story occurs for an $A_2$ quiver gauge theory with one SO-type node and one Sp-type node: such a Bethe/Gauge correspondence (with the diagonal open spin chain associated to the $\mathfrak{sl}_3$ R-matrix) can be established in 2d but not in 3d.
We saw that the correspondence works perfectly for 2d gauge theories, but not as well in 3d. This might be explained by the string-theory background of these gauge theories. The brane construction of 2d gauge theories with SO and Sp type gauge groups was given in \cite{Bergman:2018vqe} as an extension of the work \cite{Hanany:1997vm} on the construction of 2d U($N$) gauge theories. In the case of U($N$), one can use T-duality to uplift the brane web of a 2d theory to that of a 3d theory, or even a 4d theory. However, since the construction of SO or Sp type gauge theories involves an orientifold, and the orientifold action is not preserved under T-duality (see for example \cite{johnson_2002,Dabholkar:1997zd}), the uplift is no longer straightforward in this case. More precisely, the orientifold action is defined as a combination of the worldsheet and spacetime parities, while T-duality transforms it into an operation usually denoted $\Omega$ that reverses the left- and right-moving sectors in perturbative string theory. Interestingly, the uplift works for SO($N$) gauge theories, and this might be related to the ``trivial'' action of $\Omega$, which introduces no additional factor when exchanging the left and right Chan-Paton factors.
For a rather similar reason,
the 4d/2d (5d/3d) correspondence~\cite{Dorey:1998yh,Dorey:1999zk} also becomes unclear after adding an orientifold. The O4 (resp. O5) plane used in the brane construction of 4d (resp. 5d) gauge theories lies in the directions transverse to the D2 (resp. D3) branes that give rise to the vortices. The effective 2d (resp. 3d) gauge theories on the vortices in this case are unfamiliar, and they are clearly not the gauge theories with SO or Sp gauge groups considered in this article. One can also see this point from the qq-characters of SO and Sp gauge theories derived in \cite{Haouzi:2020yxy}, which contain an infinite number of terms. The NS limit does not simplify them much, and the saddle-point equation of the instanton partition function in this limit takes a different form from the Bethe ansatz equations of (diagonal) open spin chains. The quantum integrability of 4d and 5d SO and Sp gauge theories thus remains to be studied with better ideas in the future.
Last but not least, we recall that in the case of A-type gauge theories, starting from the TQ-relation of the corresponding periodic XXZ spin chain,
\begin{eqnarray}
T^A(u)Q^A(u)=\delta_+(u)Q^A(u-\eta)+e^{i\theta}\delta_-(u)Q^A(u+\eta),
\end{eqnarray}
in particular, when we focus on the pure gauge theory, one can rewrite it as
\begin{eqnarray}
\left(\hat{y}+e^{i\theta}\hat{y}^{-1}-T^A(u)\right)Q^A(u)=0,\quad \hat{y}:=e^{\eta\partial_u},
\end{eqnarray}
which matches the spectral curve of the A-type affine Toda chain in the classical limit, where $\hat{y}$ reduces to an ordinary function.
On the other hand, the expression of the eigenvalue $T(u)$ of the transfer matrix in the open XXZ spin chain can be rewritten into a TQ-relation,
\begin{eqnarray}
[2u]T(u)Q(u)=[2u+\eta][u+\xi_--\eta/2][u+\xi_+-\eta/2]\delta_+(u)\delta_-(-u)Q(u-\eta)\nonumber\\
+[2u-\eta][u-\xi_-+\eta/2][u-\xi_++\eta/2]\delta_+(-u)\delta_-(u)Q(u+\eta),
\end{eqnarray}
where we defined the $Q$-function as
\begin{eqnarray}
Q(u):=\prod_{i=1}^m[u\pm u_i].
\end{eqnarray}
Note that the symmetry $u\leftrightarrow -u$ of the above TQ-relation restricts $T(u)$ to be an even function of $u$. Since the prefactors $[2u\pm\eta]$ are hard to absorb into the $Q$-function, it is not straightforward to relate the TQ-relation of the open chain to the spectral curve of the affine Toda chain of BCD-type, which in the 2d (or XXX) limit is expected to take the form $(P(u)(\hat{y}+\mu \hat{y}^{-1})-\tilde{T}(u))Q(u)=0$ for some factor $\mu$ independent of $u$, some polynomial $P(u)$, and some function $\tilde{T}(u)$ related to $T(u)$. This might again be related to the fact that the classical limit of 4d $\mathcal{N}=2$ gauge theories gives rise to the spectral curve of Toda chains \cite{Martinec:1995by}, while the relation between 4d $\mathcal{N}=2$ theories and the 2d theories considered in this article is still not clear at the current stage.
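The evenness of $Q(u)$ uses only that the bracket $[\,\cdot\,]$ is an odd function, $[-u]=-[u]$, so each factor pair $[u-u_i][u+u_i]$ picks up $(-1)^2$ under $u\to -u$. A minimal numerical sketch (assuming, for concreteness, the trigonometric bracket $[u]=\sin(\pi u)$; any odd bracket behaves identically):

```python
import math
import random

def Q(u, roots, bracket=lambda x: math.sin(math.pi * x)):
    """Q(u) = prod_i [u - u_i][u + u_i] for an odd bracket [.]."""
    val = 1.0
    for ui in roots:
        val *= bracket(u - ui) * bracket(u + ui)
    return val

# Q(-u) = prod_i [-u - u_i][-u + u_i] = prod_i [u + u_i][u - u_i] = Q(u),
# since the two sign flips each contribute a factor (-1).
random.seed(1)
roots = [random.uniform(0.05, 0.45) for _ in range(4)]
for _ in range(10):
    u = random.uniform(-2.0, 2.0)
    assert abs(Q(u, roots) - Q(-u, roots)) < 1e-12
```

Consequently $T(u)$, determined from $Q$ by the TQ-relation, is forced to be even, as stated above.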
\section{Introduction}
We consider triplets $(X,L,a)$ where $X$ is an irregular, complex, projective variety of dimension $n$, $L$ is a line bundle on $X$ and $a:X\longrightarrow A$ is a nontrivial generating map to an abelian variety of dimension $q$. We will also have a fibration $f: X \longrightarrow B$ onto a smooth curve. We will assume that the fibration is {\it irregular}, i.e. ${\rm dim} \, a(F)>0$, where $F$ is a general fibre of $f$.
In this situation, several invariants associated to the triplet can be defined: the {\it continuous rank} $h^0_a(X,L)$, the {\it continuous positive degree} ${\rm deg}_a^+f_*L$ and the {\it eventual map} $\phi_L$. If $h^0_a(X,L)\neq 0$ we define the Clifford-Severi slope of $(X,L,a)$ as $\lambda (L,a)={\rm vol}(L)/h^0_a(X,L)$ and the Slope of $L$ with respect to $f$ as $s(f,L,a)={\rm vol}(L)/{\rm deg}_a^+f_*L$.
When we consider a {\it good} class of triplets ${\mathcal F}$, we denote by $\lambda_{\mathcal F}(n)$ and $s_{\mathcal F}(n)$ the minimal values of such slopes when $(X,L,a)\in {\mathcal F}$ or $(F,L_{|F},a_{|F})\in {\mathcal F}$, respectively, where $n={\rm dim} \,(X)$.
When the varieties in ${\mathcal F}$ are of maximal $a$-dimension, in \cite{B2} we prove that
\smallskip
\begin{equation}\label{main}
{\lambda}_{\mathcal F}(n)\geq \, s_{\mathcal F}(n)\geq \,n\, {\lambda}_{\mathcal F}(n-1).
\end{equation}
\smallskip
We also characterize fibrations with minimal slope in the family. Observe that this provides a way to inductively obtain higher dimensional Clifford-Severi and Slope inequalities from a single inequality in low dimension. Moreover, we deduce a large set of Slope inequalities just by using all the existing Clifford-Severi inequalities for varieties of maximal $a$-dimension. In order to obtain these results, in \cite{B2} we develop a version of the method of Xiao adapted to the irregular setting, the so-called {\it continuous Xiao's method}.
The aim of this work is to obtain similar results for classes of varieties of non-maximal $a$-dimension. Under this assumption it is easy to see that $\lambda_{\mathcal F}(n)=s_{\mathcal F}(n)=0$, since in this case $h^0_a(X,L)>0$ does not imply bigness. We redefine ${\overline \lambda}_{\mathcal F}(n)$ and ${\overline s}_{\mathcal F}(n)$ by restricting to line bundles $L$ with big {\it continuous moving part} $L_c$.
Adapting the arguments given in \cite{B2}, our first result is an analogue of (\ref{main}) (see Proposition \ref{pardini}, Theorem \ref{thm1} and Remark \ref{resumen}), and allows us to obtain Slope inequalities for varieties of non-maximal $a$-dimension from Clifford-Severi inequalities for varieties of maximal $a$-dimension. More concretely, given a good class of triplets ${\mathcal F}$, we define the subclass ${\mathcal F}_p$ by imposing the extra condition that $c(X)={\rm dim}(X)-{\rm dim}\,a(X)\leq p$. Then
\bigskip
\noindent {\bf Theorem A}.
{\it
\begin{itemize}
\item [(i)] ${\overline \lambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\geq (n-p)\,\lambda_{{\mathcal F}_0}(n-p-1).$
\small
\item [(ii)] If equality ${\overline s}_{{\mathcal F}_p}(n)= (n-p)\,\lambda_{{\mathcal F}_0}(n-p-1)$ holds, then $f_*(L\otimes a^*(\alpha))^+$ is semistable for general $\alpha \in {\widehat A}$ and $F$ is covered by $(n-p-1)$-dimensional varieties $V$ of maximal $a_{|V}$-dimension such that $\lambda (R,a_{|V})=\lambda_{{\mathcal F}_0}(n-p-1)$, for some $R\in {\rm Pic} (V)$.
\end{itemize}
}
\bigskip
The technique used here is again the continuous Xiao's method.
\bigskip
The second part of the paper is devoted to obtaining new Slope inequalities that take into account the geometry of $T$ and $G$, the connected components of the general fibres of $a$ and $\phi_L$, respectively. We refer to Section 2 and Remark \ref{beta} for the definitions. Again we use the continuous Xiao's method, adapting the arguments of \cite{B} and \cite{J}. Our main result is
\bigskip
\noindent {\bf Theorem B.} {\it Let $f: X \longrightarrow B$ be an irregular fibration with general fibre $F$, and set $k={\rm dim} \, a(F)$. Then:
\begin{itemize}
\item [(i)] $s(f,L,a)\geq {\rm vol}_{F|G}(L)\, (k+1)!$
\smallskip
\item [(ii)] If $(L_{|F})_c$ is $a_{|F}$-big, then $s(f,L,a)\geq \beta (L_{|T},n,k+1)\,(k+1)!$
\noindent In particular, if $F$ is not uniruled, then $s(f,L,a)\geq 2\,(k+1)!$
\smallskip
\item [(iii)] If equality holds in (i) or (ii), then $f_*(L\otimes a^*(\alpha))^+$ is semistable for general $\alpha \in {\widehat A}$.
\end{itemize}
}
\bigskip
In Section 2 we survey the techniques we will use, the known results on this topic and the relevant definitions. Section 3 is devoted to proving Theorems A and B.
\bigskip
\noindent {\underline{Notations and Conventions.}} We work over $\mathbb{C}$. Varieties are projective and smooth unless otherwise stated. We will use divisor and line bundle notation interchangeably. Given a triplet $(X,L,a)$, we will write $L\otimes \alpha$ instead of $L \otimes a^*(\alpha)$, for $\alpha \in {\widehat A}$.
\bigskip
\noindent {\underline{Acknowledgements.}} The author thanks Lidia Stoppino and Rita Pardini for extremely useful discussions on this topic over the last years.
\bigskip
\bigskip
\section{Preliminaries and technical results}
\bigskip
For the benefit of the reader, we collect here a series of preliminaries, definitions, known results and techniques that we will use in the next section. The main references for these are \cite{B2}, \cite{B}, \cite{BPS3} and \cite{J}.
\smallskip
We consider triplets $(X,L,a)$ where $X$ is a smooth irregular variety of dimension $n$, $a:X \longrightarrow A$ is a nontrivial generating map to an abelian variety of dimension $q$ and $L$ is a line bundle on $X$ such that $h^0_a(X,L)={\rm min}\{h^0(X,L\otimes \alpha)\,|\,\alpha \in {\widehat A}\}\neq 0$.
\bigskip
{\noindent {\underline {\it Multiplication maps}}}. We will often consider situations {\it up to a multiplication map}, meaning that we will consider base changes via the multiplication map on $A$ by some $d$, which is \'etale of degree $d^{2q}$:
\begin{equation}\label{multiplicationmap}
\xymatrix{
X^{(d)}\ar[d]_{a_d}\ar[r]_{\nu_d} &X\ar[d]^a\\
A\ar[r]_{\nu_d}&A}
\end{equation}
We will denote $L^{(d)}:=\nu_d^*(L)$. The continuous rank and the volume are multiplicative under multiplication maps.
\bigskip
\noindent {\underline {\it Continuous moving divisor $L_c$}}. Up to a blow-up, there is a decomposition $L=P+W$ such that, for any sufficiently large and divisible $d$ and any general $\alpha \in {\widehat A}$, $P^{(d)}$ is base point free and is the moving divisor of $|L^{(d)}\otimes \alpha|$, and $W^{(d)}$ is its fixed divisor. Following \cite{B}, $P$ and $W$ are called the {\it continuous moving divisor} and the {\it continuous fixed divisor} of $L$, respectively. According to the notation of \cite{J}, we will set $L_c:=P$ for the continuous moving part.
\bigskip
\noindent {\underline {\it Eventual map and eventual degree.}} Up to a blow-up, there is a factorization of the map $a$, $X \rightarrow X_L\rightarrow A$, such that the map $\phi_L: X\longrightarrow X_L$ satisfies the following properties:
\begin{itemize}
\item $L_c=\phi_L^*(R_L)$ for some line bundle $R_L$ on $X_L$ which induces a base point free generically finite morphism on $X_L$ (\cite{BPS3}).
\item Up to a multiplication map, the linear system $|L_c^{(d)}\otimes \alpha|$, for $\alpha$ general, is base point free and induces the map $\phi_L^{(d)}: X^{(d)}\longrightarrow X_L^{(d)}$ (\cite{B}, \cite{BPS3}).
\item Since the map $\phi_L$ factorizes $a$, it is generically finite provided $X$ is of maximal $a$-dimension. It is birational if ${\rm deg}\, a=1$.
\item The map $\phi_L$ is generically finite provided $L_c$ is $a$-big (\cite{B}).
\item A birational model of the map $\phi_L$ is given by the natural map $\rho: X \dashrightarrow \mathbb{P}_A(a_*L)$, where $X_L:=\rho (X)$ (\cite{J}).
\item When the eventual map $\phi_L$ is generically finite, we define the {\it eventual degree} of $L$ to be $e(L)={\rm deg} (\phi_L)$ (cf. \cite{BPS3}, Section 3). We can extend the definition to any $L$ by considering the degree of the finite part in the Stein factorization of $\phi_L$.
\item $\kappa (L_c)={\rm dim}\phi_L(X) \geq {\rm dim} \, a(X)$ (\cite{J}).
\end{itemize}
\bigskip
\noindent {\underline {\it Good classes of triplets.}} Given a family ${\mathcal F}$ of triplets with $h^0_a(X,L)\neq 0$, we say that the family is {\it good} if it is stable under the following four operations:
\begin{itemize}
\item[(1)] If $(X,L,a)\in {\mathcal F}$, then $({\overline X},\sigma ^*L,a\circ \sigma)\in {\mathcal F}$, where $\sigma:{\overline X}\longrightarrow X$ is a birational morphism.
\item[(2)] If $(X,L,a)\in {\mathcal F}$, then $(X^{(d)},L^{(d)},a_d)\in {\mathcal F}$.
\item[(3)] If $(X,L,a)\in {\mathcal F}$, then $(X,L',a)\in {\mathcal F}$ for $L'\leq L$ such that $h^0_a(X,L')>0$.
\item[(4)] If $(X,L,a)\in {\mathcal F}$, then $(M,L_{|M},a_{|M})\in {\mathcal F}$, for a general smooth $M$, moving in a base point free linear system on $X$.
\end{itemize}
\bigskip
\noindent {\underline {\it Clifford-Severi inequalities (maximal $a$-dimension)}}. Given a triplet $(X,L,a)$ with $h^0_a(X,L)>0$, we define its Clifford-Severi slope as
$$\lambda (L,a)=\frac{{\rm vol}(L)}{h^0_a(X,L)}$$
\noindent which remains constant under multiplication maps. Given a good class ${\mathcal F}$, we define ${\lambda}_{\mathcal F}(n)$ to be the infimum of the Clifford-Severi slopes of triplets in ${\mathcal F}$ of dimension $n$. Clifford-Severi inequalities for a given good class ${\mathcal F}$ are inequalities of the type
$$
{\lambda}_{\mathcal F}(n)\geq \lambda_n.
$$
\medskip
There are many known Clifford-Severi inequalities for different good classes of varieties of maximal $a$-dimension: see Remark 4.7 in \cite{B2} for an (almost) complete list. For example:
\begin{itemize}
\item The higher dimensional Severi inequality states that ${\lambda}_{\mathcal F}(n)\geq 2\,n!$ if ${\mathcal F}$ is defined by the property that $L$ is {\it numerically subcanonical}.
\item For a general $L$ we have ${\lambda}_{\mathcal F}(n)\geq e(L)\,n!$ (\cite{BPS2}).
\end{itemize}
\bigskip
\noindent {\underline {\it Clifford-Severi inequalities (non-maximal $a$-dimension)}}. In the case of triplets of non-maximal $a$-dimension, the situation is not so clear and depends heavily on bigness conditions on $L$ or $L_c$ and on the geometry of the fibres of the maps $a$ and $\phi_L$. The case of irregular threefolds is well understood by results of Zhang (\cite{Z3}).
Here we give a (non-complete) list of known results for arbitrary dimension $n$. We set $k={\rm dim}\, a(X)$, and write $G$ and $T$ for a connected component of the general fibre of $\phi_L$ and $a$, respectively.
In \cite{B}, Main Theorem and Remark 5.8, the author proves
\begin{itemize}
\item If $L_c$ is $a$-big and $L$ is numerically $r$-subcanonical, then $\lambda(L,a)\geq \delta (r)\,k!$.
\item If $L$ is nef and $a$-big then $\lambda(L,a)\geq (L_{|G})^{n-k}\,k!\geq \, k!$.
\end{itemize}
\noindent When $k=n-1$, Zhang gives a better bound (\cite{Z2}):
\begin{itemize}
\item If $g$ is the genus of the curve $T$, then $\lambda (L,a)\geq 2\, \frac{g-1}{g+n-2}\, n!.$
\end{itemize}
\noindent Finally Jiang (\cite{J}) gives a set of inequalities depending on the geometry of $G$ or $T$. The simplest ones are the following (see Proposition 3.6 and Theorem 3.1 in \cite{J} and Remark \ref{beta} for a more detailed result):
\begin{itemize}
\item If $L$ is big, then $\lambda(L,a)\geq {\rm vol}_{X|G}(L)\,k!$.
\item If $L_c$ is big and $T$ is not uniruled, then $\lambda(L,a)\geq 2 \, k!$.
\end{itemize}
\begin{rem} The proof of Main Theorem (iii) in \cite{B} implicitly uses the volume of $L_{|G}$ (see Remark 5.8 in the cited paper). In Corollary B (iii), loc. cit., where this inequality is extended to $K_X$ in the singular setting, it is erroneously assumed that ${\rm vol}_{X|G}(K_X)\geq 1$ under the hypothesis that the minimal variety $X$ is Gorenstein.
\end{rem}
\bigskip
\noindent {\underline {\it Irregularly fibred triplets.}} Given a triplet $(X,L,a)$, we will say that it is {\it irregularly fibred} if in addition we have a fibration $f:X\longrightarrow B$ onto a smooth curve such that ${\rm dim}\,a(F)>0$, where $F$ is a general smooth fibre of $f$.
If $f$ is an irregular fibration as above, the family of vector bundles $\{f_*(L\otimes \alpha)\}_{{\alpha}\in {\widehat A}}$ has a constant type of Harder-Narasimhan filtration for $\alpha \in U_0$, for some nonempty open set $U_0$. We write $\{(r_i,\mu_i)\}$ for their ranks and slopes.
For $\alpha \in U_0$ we will write $f_*(L\otimes \alpha)^+$ for the biggest nef subbundle of $f_*(L\otimes \alpha)$ and we denote
$${\rm deg}_a^+f_*L={\rm deg}f_*(L\otimes \alpha)^+\geq 0.$$
We will define the (continuous) Slope of $L$ w.r.t. $f$ to be:
$$
s(f,L,a)=\frac{{\rm vol}(L)}{{\rm deg}_a^+f_*L}\in (0,+\infty]
$$
\noindent which is also constant under multiplication maps.
\bigskip
\noindent {\underline {\it Slope inequalities.}} Given a good class ${\mathcal F}$, we will say that an irregular fibration $f$, with general fibre $F$, is of type ${\mathcal F}$ if $(F,L_{|F},a_{|F})\in {\mathcal F}$ (the triplet $(X,L,a)$ is not necessarily in ${\mathcal F}$). We define $s_{\mathcal F}(n)$ to be the infimum of the slopes of fibrations $f$ of type ${\mathcal F}$, where $n={\rm dim}\, X$.
Slope inequalities for the family ${\mathcal F}$ are a set of inequalities, for any $n$, of the form
$$
s_{\mathcal F}(n) \geq \lambda_n.
$$
\medskip
In \cite{HZ}, Hu and Zhang give slope inequalities for $L=K_f$ and $X$ of maximal Albanese dimension by direct computation, together with properties of the limit cases.
In \cite{B2}, Theorem 4.11, a broad generalization is given for any $L$, establishing an equivalence between Clifford-Severi inequalities and Slope inequalities for a given good class ${\mathcal F}$ of varieties of maximal $a$-dimension. Moreover, this result allows one to produce automatically a whole set of Clifford-Severi and Slope inequalities in any dimension from a single inequality in low dimension, typically 1 or 2 (see Remark 4.12 and Corollary 1.1 in \cite{B2}). The main result can be stated as:
$$
\lambda_{\mathcal F}(n)\geq s_{\mathcal F}(n)\geq n\,\lambda_{\mathcal F}(n-1).
$$
\bigskip
\noindent {\underline {\it Continuous Xiao's method and derived inequalities.}} Take a general $\alpha_0\in {\widehat A}$ and let $L_0=L\otimes \alpha_0$. After a suitable blow-up and multiplication map there is a filtration by nef line bundles:
$$T_1 \leq T_2 \leq ...\leq T_m \leq L_0$$
\noindent such that, for all $i$, $N_i:=T_i-\mu_i F$ is nef and, setting $P_i:={N_i}_{|F}$, we have
\begin{itemize}
\item $P_i$ is a base point free linear system on $F$ such that $h^0(F,P_i)=h^0_{a_{|F}}(F,P_i)\geq r_i$.
\smallskip
\item ${\rm deg}_a^+f_*(L)={\rm deg}({\mathcal E}_m^{\alpha_0})=\sum_{i=1}^{m}r_i(\mu_i-\mu_{i+1})$
\end{itemize}
\noindent By convention we can take, coherently, $(N_{m+1},\mu_{m+1})=(T_m,0)$ or, in case $L$ is nef, $(N_{m+1},\mu_{m+1})=(L,0)$.
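The formula for ${\rm deg}_a^+f_*(L)$ displayed above is an Abel summation over the Harder-Narasimhan data: with $\mu_{m+1}=0$, $\sum_{i=1}^{m}r_i(\mu_i-\mu_{i+1})=\sum_{i=1}^{m}(r_i-r_{i-1})\mu_i$, i.e. the sum over graded pieces of rank times slope. A small numerical sketch with hypothetical HN data:

```python
def deg_plus(ranks, slopes):
    """deg_a^+ f_* L = sum_i r_i (mu_i - mu_{i+1}), with mu_{m+1} = 0.

    ranks  : cumulative ranks r_1 < ... < r_m of the HN filtration
    slopes : slopes mu_1 > ... > mu_m > 0 of the nef part
    """
    m = len(ranks)
    return sum(ranks[i] * (slopes[i] - (slopes[i + 1] if i + 1 < m else 0.0))
               for i in range(m))

def deg_plus_graded(ranks, slopes):
    """Equivalent form by Abel summation: rank of graded piece times slope."""
    return sum((ranks[i] - (ranks[i - 1] if i > 0 else 0)) * slopes[i]
               for i in range(len(ranks)))

ranks, slopes = [2, 5, 6], [3.0, 1.5, 0.5]
assert abs(deg_plus(ranks, slopes) - deg_plus_graded(ranks, slopes)) < 1e-12
assert abs(deg_plus(ranks, slopes) - 11.0) < 1e-12
```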
Fix $r\leq n$. Consider an ordered, increasing partition of the set $\{ \, 1,...,m \, \}$ into subsets $I_s$, $s=1,...,r-1$ (some of the sets $I_s$ may be empty), with $I_{r-1}\neq \emptyset$. Define, decreasingly in $s=r-1,\ldots,1$,
$$b_s=\left\{
\begin{array}{ll}
{\rm min}\, I_s & {\rm if} \,\, I_s \neq \emptyset, \\
b_{s+1} & {\rm otherwise.} \\
\end{array}\right.$$
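The decreasing recursion defining $b_s$ can be made concrete; the partition below is a hypothetical example with $m=6$ and $r-1=3$:

```python
def compute_b(partition):
    """Given the partition I_1, ..., I_{r-1} of {1, ..., m} as a list of
    lists (I_{r-1} nonempty by assumption), compute b_s = min I_s when
    I_s is nonempty and b_s = b_{s+1} otherwise, working decreasingly in s.
    """
    r1 = len(partition)                 # r - 1 sets, 0-indexed as I_1..I_{r-1}
    b = [None] * r1
    for s in range(r1 - 1, -1, -1):     # s = r-1 down to 1
        if partition[s]:
            b[s] = min(partition[s])
        else:
            b[s] = b[s + 1]             # exists: I_{r-1} is nonempty
    return b

# Example: I_1 = {1, 2}, I_2 = {}, I_3 = {3, 4, 5, 6}.
assert compute_b([[1, 2], [], [3, 4, 5, 6]]) == [1, 3, 3]
```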
\bigskip
Then, for any nef $\mathbb{Q}$-Cartier divisors $Q_1,...,Q_{n-r}$, the following inequality holds:
\begin{equation}\label{Xiaogeneral}
Q_1...Q_{n-r}\left[N^r_{m+1}-(\sum _{s=1}^{r-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}))\right]\geq 0.
\end{equation}
\bigskip
We will use the following particular cases.
Taking $r=n$, and any partition:
\begin{equation}\label{xiaobuena}
{\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum _{s=1}^{n-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}).
\end{equation}
Taking any $r$, the trivial partition $I_{r-1}=I$ ($I_s=\emptyset$ for $s<r-1$) and $Q_i=N_{m+1}$:
\begin{equation}\label{xiaoalbanesenomaxima}
{\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq
\sum_{i=1}^m P_m^{n-r} \left[P_{i+1}^{r-1}+P_{i+1}^{r-2}P_i+...+P_i^{r-1}\right](\mu_i-\mu_{i+1}).
\end{equation}
\bigskip
\section{Slope inequalities for non-maximal $a$-dimension fibrations}
\medskip
In \cite{B2} we study the equivalence of Slope and Clifford-Severi inequalities for general irregular varieties and fibrations in a good class ${\mathcal F}$, which mostly applies to varieties of maximal $a$-dimension. Our aim is to study whether this equivalence can be extended to classes of varieties of non-maximal $a$-dimension.
To this end, we need this condition to be stable under smooth sections (condition (3)), and the right condition to impose is on the codimension $c(X)=\dim X-\dim a(X)$; we call it condition $Q_p$, i.e., the class of triplets such that $c(X)\leq p$.
Parts of the main result in \cite{B2} given in Theorem 4.11 hold for families of non-maximal $a$-dimension as well, but they are not interesting since in these cases $\lambda_{\mathcal F}(n)$ and $s_{\mathcal F}(n)$ vanish. The reason is that the condition $h^0_a(X,L)>0$ does not imply bigness if the variety is not of maximal $a$-dimension. Indeed, if $a:X\longrightarrow A$ satisfies $k=\dim a(X)<n=\dim X$, take $L=a^*H$ for any $H$ very ample on $A$. Then clearly ${\rm vol}(L)=0$ and $h^0_a(X,L)>0$. The same phenomenon occurs if $X$ is fibred: we can construct examples with $\deg_a^+f_*L\neq 0$ and ${\rm vol}(L)=0$.
So we need to impose extra hypotheses to obtain nontrivial inequalities. Natural conditions are $a$-bigness of $L$ or of its continuous moving part $L_c$. There are several strategies (see \cite{B} and \cite{J}), according to whether $L$ or $L_c$ is $a$-big. Observe that bigness of $L_c$ implies bigness of $L$, but the converse does not hold (see Remarks 3.7 and 3.8 in \cite{B}). Observe also that in \cite{B} it is shown that $a$-bigness of $L_c$ implies bigness of $L$. In particular, for $L_c$, bigness and $a$-bigness are equivalent, provided $h^0_a(X,L)\neq 0$.
\begin{comment}
The main point here is the concept of {\it eventual map} induced by a line bundle $L$ with $h^0_a(X,L)>0$. Up to a blow-up, there is a factorization of the map $a$, ${\overline X} \rightarrow X_L\rightarrow A$ such that the map $\phi_L: {\overline X}\longrightarrow X_L$ verifies the following properties:
\begin{itemize}
\item $L_c=\phi_L^*(R_L)$ for some line bundle $R_L$ on $X_L$, which induces a base point free generically finite morphism on $X_L$ (\cite{BPS3}).
\item Up to a multiplication map, the linear system $|L_c^{(d)}\otimes \alpha|$, for $\alpha$ general is base point free and induces the map $\phi_L^{(d)}: {\overline X}^{(d)}\longrightarrow X_L^{(d)}$ (\cite{B}, \cite{BPS3}).
\item Since the map $\phi_L$ factorizes $a$, it is generically finite provided $X$ is of maximal $a$-dimension and it is birational if ${\rm deg} a=1$. In general, we have that ${\rm dim} X_L\geq {\rm dim} a(X)$.
\item The map $\phi_L$ is generically finite provided $L_c$ is $a$-big (\cite{B}).
\item A birational model of the map $\phi_L$ is given by the natural map $\rho: X \dashrightarrow \mathbb{P}_A(a_*L)$, where $X_L:=\rho (X)$ (\cite{J}).
\end{itemize}
\bigskip
\end{comment}
If we restrict our good families ${\mathcal F}$ by adding the condition of bigness of $L$ (or $L_c$), the resulting subfamily is not {\it good}, since bigness is not stable under subbundles (and so condition (4) fails).
Nevertheless, we can obtain some closely related Slope inequalities for fibrations of non-maximal $a$-dimension with adapted arguments.
Let us first introduce some extra notation.
\begin{defn} Given a class ${\mathcal F}$ of triplets we define:
\begin{itemize}
\item [(i)] ${\mathcal F}_p=\{(X,L,a)\in {\mathcal F}\,|\, c(X)\leq p\,\}.$
\smallskip
\item[(ii)] ${\overline \lambda}_{{\mathcal F}_p}(n)={\rm inf}\{\lambda(L,a)\,|\, (X,L,a)\in {\mathcal F}_p,\,n=\dim X,\,\, L_c \, {\rm big}\}.$
\smallskip
\item[(ii')] ${\widehat \lambda}_{{\mathcal F}_p}(n)={\rm inf}\{\lambda(L,a)\,|\, (X,L,a)\in {\mathcal F}_p,\,n=\dim X,\,\, L \, {\rm big}\}.$
\smallskip
\item[(iii)] ${\overline s}_{{\mathcal F}_p}(n)={\rm inf}\{s(f,L,a)\,|\, f \,{\rm of}\,\,{\rm type}\,\, {\mathcal F}_p,\,n=\dim X, \,\,(L_{|F})_c \,{\rm big}\}.$
\smallskip
\item[(iii')] ${\widehat s}_{{\mathcal F}_p}(n)={\rm inf}\{s(f,L,a)\,|\, f \,{\rm of}\,\,{\rm type}\,\, {\mathcal F}_p,\,n=\dim X, \,\,(L_{|F}) \,{\rm big}\}.$
\end{itemize}
\end{defn}
\medskip
\begin{rem} \begin{itemize}
\item If the class ${\mathcal F}$ is good, so is ${\mathcal F}_p$.
\item If we consider classes of maximal $a$-dimension as in \cite{B2}, then ${\mathcal F}={\mathcal F}_0$. In this case, since $h^0_a(X,L)\neq 0$ implies bigness, we have that $\lambda_{\mathcal F}(n)={\overline {\lambda}}_{{\mathcal F}_0}(n)={\widehat {\lambda}}_{{\mathcal F}_0}(n)$ and $s_{\mathcal F}(n)={\overline s}_{{\mathcal F}_0}(n)={\widehat s}_{{\mathcal F}_0}(n)$.
\end{itemize}
\end{rem}
\bigskip
One of the two inequalities between Clifford-Severi and Slope inequalities given in Theorem 4.11 in \cite{B2} holds without change in this new setting:
\begin{prop}\label{pardini} Let ${\mathcal F}$ be a good class of triplets of irregular varieties. Then, for all $n$ and $p\leq n-2$ we have
$$
{\overline \lambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\,\,\,\, {\rm and} \,\,\,\, {\widehat \lambda}_{{\mathcal F}_p}(n)\geq {\widehat s}_{{\mathcal F}_p}(n).
$$
\end{prop}
\begin{proof}
We refer to the proof of the maximal $a$-dimension case, using Pardini's trick, given in Theorem 4.11 (ii) of \cite{B2}. As pointed out in Remark 4.13 (loc. cit.), only properties (1), (2) and (3) of a good class are used, and bigness of $L$ or $L_c$ is preserved throughout the process.
The condition $p\leq n-2$ ensures that ${\rm dim} \, a(X)\geq 2$. In this case, the sections of $\nu_d^*(H)$ are irreducible and $f_d$ has connected fibres (and so is an irregular fibration).
The rest of the proof holds without changes.
\end{proof}
\bigskip
The reverse inequality is more subtle and depends heavily on bigness properties of $L_c$ or $L$. When $L_c$ is $a$-big, two strategies are possible. The first, following \cite{B}, is a hyperplane section argument: the eventual map allows us to keep the process inside the good class ${\mathcal F}$. This is the content of Theorem \ref{thm1} and Remark \ref{resumen}.
The second option is to consider the geometry of $T$, the connected component of the general fibre of $a_{|F}$. This approach uses Theorem 3.1 in \cite{J}, adapted to the irregularly fibred case via a suitable use of the continuous Xiao's method. This is the content of Theorem \ref{thm2} (ii).
In general $(L_{|F})_c$ may not be $a_{|F}$-big. In this case we can still obtain a good lower bound for $s(f,L,a)$, but we need to consider the geometry of $G$, a connected component of the general fibre of the eventual map $\phi_{L_{|F}}$. This approach adapts the argument of Proposition 3.6 in \cite{J} to the relative setting via the continuous Xiao's method and is the content of Theorem \ref{thm2} (i).
Observe that the bounds in Theorem \ref{thm2} are in general sharper than those in Theorem \ref{thm1} when considering a single fibred triplet $(X,L,a)$, but they strongly depend on properties that are not well behaved in a good class ${\mathcal F}$.
As a by-product of the use of Xiao's method, in Theorems \ref{thm1} and \ref{thm2} we can give properties of the limit cases, those in Theorem \ref{thm1} being analogous to the ones obtained for varieties of maximal $a$-dimension.
\bigskip
\begin{thm}\label{thm1}
Let ${\mathcal F}$ be a good class of triplets. Let $(X,L,a)$ be a fibred triplet $f: X\longrightarrow B$ of type ${\mathcal F}$ such that $f$ is of $a$-dimension $k$ (i.e., $k={\rm dim}\,a(F)$). Assume that $L_c$ is $f$-big. Then:
\begin{itemize}
\item [(i)] $s(f,L,a) \geq \,(k+1)\,{\lambda}_{{\mathcal F}_0}(k).$
\smallskip
\item[(ii)] If equality holds, then:
\begin{itemize}
\item There is a family of varieties $V$ of dimension $k$ covering $F$, and line bundles $R\leq L_{|V}$, such that the triplets $(V,R,a_{|V})$ are of maximal $a_{|V}$-dimension and satisfy the Clifford-Severi equality ${\rm vol}(R)={\lambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R).$
\item $f_*(L\otimes \alpha)^+$ is semistable for general $\alpha \in {\widehat A}$. If, moreover, $L$ is nef, then $f_*(L\otimes \alpha)$ is nef and semistable.
\end{itemize}
\end{itemize}
\end{thm}
\bigskip
\begin{rem}\label{thm1simple}
Statement (i) of the above theorem, combined with Clifford-Severi inequalities for maximal $a$-dimension varieties as given in Remark 4.9 in \cite{B2} gives a broad set of Slope inequalities.
For example, in general we have $s(f,L,a) \geq (k+1)!$ and, if $L$ is numerically $r$-subcanonical, then $s(f,L,a) \geq \delta(r)\,(k+1)!$.
\end{rem}
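For orientation, and using only the general bound $s(f,L,a)\geq (k+1)!$ stated in the remark above, the lowest-dimensional cases read as follows (a purely numerical illustration):
$$
% k = dim a(F) is the a-dimension of the general fibre:
k=1:\quad s(f,L,a)\,\geq\,2!=2,
\qquad
k=2:\quad s(f,L,a)\,\geq\,3!=6,
\qquad
k=3:\quad s(f,L,a)\,\geq\,4!=24.
$$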
\medskip
\begin{rem} \label{resumen}
Theorem \ref{thm1} together with Proposition \ref{pardini} can be rephrased as, for $p\leq n-2$:
$$
{\overline \lambda}_{{\mathcal F}_p}(n)\geq {\overline s}_{{\mathcal F}_p}(n)\geq (n-p)\,\lambda_{{\mathcal F}_0}(n-p-1).
$$
\end{rem}
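Note that for $p=0$ the chain above, combined with the identifications $\lambda_{\mathcal F}(n)={\overline \lambda}_{{\mathcal F}_0}(n)$ and $s_{\mathcal F}(n)={\overline s}_{{\mathcal F}_0}(n)$ of the previous Remark, recovers the maximal $a$-dimension chain of \cite{B2}, Theorem 4.11, quoted in the introduction:
$$
% p = 0: all triplets are of maximal a-dimension, so the chain specializes to
\lambda_{\mathcal F}(n)\;\geq\; s_{\mathcal F}(n)\;\geq\; n\,\lambda_{\mathcal F}(n-1).
$$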
\bigskip
\begin{proof}
\noindent (i) Let $(X,L,a)$ be a triplet with a fibration $f: X \longrightarrow B$ of type ${\mathcal F}$.
Let us apply inequality (\ref{xiaoalbanesenomaxima}) with $r=k+1={\rm dim}\,a(F)+1$:
\begin{equation} \label{uno}
{\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum_{i=1}^m P_m^{n-k-1}\left[P_{i+1}^{k}+P_{i+1}^{k-1}P_i+...+P_i^{k}\right](\mu_i-\mu_{i+1}).
\end{equation}
Since $L_c \leq T_m=N_{m+1}$ (see Lemma 3.1 in \cite{B2}), we have by hypothesis that $P_m={T_m}_{|F}$ is big and hence $a_{|F}$-big, so its eventual map is generically finite. Moreover, since $P_m$ is continuously globally generated by construction, up to a multiplication map we can assume that the linear system $|P_m|$ is base point free. Take general sections $V_1,...,V_{n-k-1}\in |P_m|$ and let $V=V_1\cap ... \cap V_{n-k-1}$. Then $V$ is a smooth variety of dimension $k$ and of maximal $a_{|V}$-dimension. Let $R_i={P_i}_{|V}$. By the properties of a good class, the triplet $(V,R_i,a_{|V})\in {\mathcal F}_0$. Hence we have that
\begin{equation}\label{dos}
R_i^{k}\geq {\lambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R_i).
\end{equation}
Observe that
$$
P_m^{n-k-1}P_i^{k}=R_i^{k}.
$$
Next, we use that $P_i-P_m\leq 0$, so that $h^0_{a_{|F}} (F,P_i-P_m)=0$, and the same holds after cutting by successive $V_i$. Then we can conclude that
\begin{equation}\label{tres}
h^0_{a_{|V}}(V,R_i)\geq h^0_{a_{|F}}(F,P_i)\geq r_i.
\end{equation}
\noindent Finally, observe that, using the general Clifford-Severi inequality for irregular varieties (Main Theorem in \cite{B}),
\begin{equation}\label{cuatro}
\delta_i:=R_{i+1}^{k}+R_{i+1}^{k-1}R_i+...+R_i^{k}\geq R_{i+1}^{k}+k\,R_i^{k}\geq \lambda_{{\mathcal F}_0}(k)\,( r_{i+1}+kr_i)\geq (k+1)\,\lambda_{{\mathcal F}_0}(k)\,r_i.
\end{equation}
Then we can conclude
\begin{equation} \label{cinco}
{\rm vol}(L)={\rm vol} (L_0)\geq N_{m+1}^n\geq \sum_{i=1}^m (k+1)\,\lambda_{{\mathcal F}_0}(k)\,r_i(\mu_i-\mu_{i+1})=(k+1)\,\lambda_{{\mathcal F}_0}(k)\,{\rm deg}_a^+f_*L.
\end{equation}
\medskip
\noindent (ii) Assume that equality holds. Then we have equality in (\ref{dos}), (\ref{tres}), (\ref{cuatro}) and (\ref{cinco}), which imply, for all $i=1,...,m$:
\begin{itemize}
\item $r_{i+1}=r_i$,
\item $h^0_{a_{|V}}(V,R_i)= h^0_{a_{|F}}(F,P_i)= r_i$,
\item $R_i^{k}={\lambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R_i)$,
\item ${\rm vol} (L_0)=T_m^{n}$.
\end{itemize}
Hence we have that $m=1$ (so $f_*(L\otimes \alpha)^+$ is semistable), $h^0_{a_{|V}}(V,R_m)= h^0_{a_{|F}}(F,P_m)= r_m$, and $(V,R_m)$ verifies the Clifford-Severi equality $R_m^{k}={\lambda}_{{\mathcal F}_0}(k)\,h^0_{a_{|V}}(V,R_m)$.
Observe that if equality holds then $L$ is big since $s(f,L,a)>0$. We have that $L_0=T_m+Z_m$ and ${\rm vol} (L_0)=T_m^{n}$. If $L$ is moreover nef, then we have that $Z_m=0$ (cf. Theorem A in \cite{FKL}). Hence:
$${\rm rank}f_*(L_0)^+=r_1=h^0_{a_{|F}}(F,P_1)=h^0_{a_{|F}}(F,{L_0}_{|F})={\rm rank}f_*(L_0)$$
\noindent and hence $f_*(L_0)$ is semistable and nef.
\end{proof}
\bigskip
\begin{lem}\label{fujita} Let $X$ be a smooth, projective variety and $L$ a big line bundle on $X$. Let $\phi: X \longrightarrow Y$ be a fibre space, $G$ a general fibre of $\phi$, and $R\in {\rm Pic} (Y)$ a line bundle such that $L'=\phi^* (R)\leq L$. Then:
$$
{\rm vol}_X (L)\geq {\rm vol}_{X|G}(L)\,{\rm vol}_{Y}(R).
$$
\end{lem}
\medskip
\begin{proof}
We have that $L=\phi ^*(R)+Z$ with $Z\geq 0$. The result is obvious if $L$ is nef; we will reduce to this case via the Fujita approximation theorem. In the general case, assume $R$ is big, since otherwise the result is trivial. Following Theorem 3.5 in \cite{BDPP}, there is an extension of the volume function, given by the moving intersection numbers, which is non-decreasing in each factor, superadditive, and coincides with the intersection product for nef line bundles. Let $e={\rm dim} (Y)$. For any compatible birational modifications of $X$ and $Y$ and any decompositions $L=W_1+E_1$ and $R=W_2+E_2$ such that the $W_i$ are big and nef $\mathbb{Q}$-divisors and the $E_i$ are effective $\mathbb{Q}$-divisors, we have:
$$
{\rm vol}(L)=\langle L,\dots,L\rangle\,\geq\, \langle W_1,\dots,W_1,\phi^*(W_2),\dots,\phi^*(W_2)\rangle=W_1^{n-e}(\phi^*(W_2))^e=({W_1}_{|G})^{n-e}W_2^e
$$
\noindent since the moving intersection numbers are non-decreasing and the $W_i$ are nef.
To conclude, we apply the Fujita approximation theorem for the volume and the relative volume (see, for example, Proposition 2.11 and Theorem 2.13 in \cite{elmnp}). For any $\epsilon >0$, there are birational modifications of $X$ and $Y$ (which we can take compatible with the above hypotheses) and decompositions $L=W_1+E_1$ and $R=W_2+E_2$ as above such that
\begin{itemize}
\item ${\rm vol}_{X|G}(L) \geq {W_1}_{|G}^{n-e}\geq {\rm vol}_{X|G}(L)-\epsilon$ and
\item ${\rm vol}_Y(R)\geq W_2^e \geq {\rm vol}_Y(R)-\epsilon$.
\end{itemize}
We apply the above inequality for any such decompositions and we conclude that
$$
{\rm vol}_X (L) \geq {\rm vol}_{X|G}(L)\, {\rm vol}_Y (R).
$$
\end{proof}
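As a sanity check of the lemma (a toy computation, not needed in the sequel), consider the split nef case $X=G\times Y$ with projections $p_1,p_2$, and $L=p_1^*M\otimes p_2^*R$ with $M$ and $R$ ample. Writing $n=\dim X$ and $e=\dim Y$, so that $\dim G=n-e$, only one mixed term survives in the expansion of the top self-intersection, and the inequality holds with room to spare:
$$
% Here L_{|G}=M, so vol_{X|G}(L)=M^{n-e}, and only the term with j=e survives:
{\rm vol}_X(L)=(p_1^*M+p_2^*R)^n=\binom{n}{e}\,M^{n-e}\,R^{e}\;\geq\; M^{n-e}\,R^{e}\;=\;{\rm vol}_{X|G}(L)\,{\rm vol}_Y(R).
$$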
\bigskip
\begin{rem}\label{beta}
Let $Z$ be a smooth, projective variety, and $L$ a line bundle on $Z$. In \cite{J}, two invariants measuring the positivity of $L$ are defined: $\delta(L)$ and $\delta_1(L)$, the second being the minimum of the volumes of sub-line bundles of $L$ inducing generically finite maps, when restricted to general positive-dimensional subvarieties $V$ covering $Z$. We clearly have that $\delta_1(L)\geq {\rm cov.gon}(Z)$. For any $k\leq s\leq {\rm dim} (Z)$, define
$$
\beta(L,s,k)={\rm min}\{\binom{s}{k}\delta (L),\delta_1(L)\}.
$$
\noindent We have that $\beta(L,s,k)\geq 1$ and that $\beta(L,s,k)\geq 2$, provided $Z$ is not uniruled.
In \cite{J} Theorem 3.1, the following Clifford-Severi inequality is proved. Consider a triplet $(X,L,a)$ of dimension $n$. Let $k={\rm dim} \, a(X)$ and $T$ be a connected component of the general fibre of $a$. Assume that $L_c$ is $a$-big. Then:
$$
\lambda(L,a)\geq \beta(L_{|T},n,k)\,k!
$$
\end{rem}
\bigskip
\bigskip
\begin{thm}\label{thm2}
Let $(X,L,a)$ be a fibred triplet. Let $G$ be a connected component of the general fibre of the eventual map $\phi_{L_{|F}}$ and let $T$ be a connected component of the general fibre of $a_{|F}$. Then
\begin{itemize}
\item [(i)] If $L_{|F}$ is $a_{|F}$-big, then $s(f,L,a)\geq {\rm vol}_{F|G}(L)\,(k+1)!$
\smallskip
\item [(ii)] If $(L_{|F})_c$ is $a_{|F}$-big, then $s(f,L,a)\geq \beta (L_{|T},n,k+1)\,(k+1)!$
\noindent In particular, if $F$ is not uniruled, then $s(f,L,a)\geq 2\,(k+1)!$
\smallskip
\item [(iii)] If equality holds in (i) or (ii), then $f_*(L\otimes \alpha)^+$ is semistable for general $\alpha \in {\widehat A}$.
\end{itemize}
\end{thm}
\bigskip
\begin{proof} (i) Since $(T_m)_{|F}=P_m$ is continuously globally generated and coincides by construction with $(L_{|F})_c$, up to a multiplication map and a birational modification we may assume that $|P_m|$ induces the eventual map of $L_{|F}$ and that $a_{|F}$ factorizes through this map. Up to a further birational modification and multiplication map, we can consider the relative map induced by the quotient $f_*(L_0)^+=f_*(T_m)^+={\mathcal E}_m\longrightarrow T_m$. The Stein factorization of this map gives a relative fibration $\phi_m: X\longrightarrow X_m$ over $B$, with general fibre $G$, and a factorization $a={\overline a}\circ \phi_m$ for some ${\overline a}: X_m\longrightarrow A$. We can assume $X_m$ to be smooth. Then $T_m=\phi_m^*(L_m)$ for some line bundle $L_m$ on $X_m$ such that $g_*(L_m)^+=f_*(T_m)^+=f_*(L_0)^+={\mathcal E}^{\alpha_0}_m$, where $g: X_m \longrightarrow B$ is the induced fibration over $B$.
When restricted to a general fibre ${\overline F}$, this is just the Stein factorization of the eventual map $\phi_{L_{|F}}$.
Then we apply Lemma \ref{fujita}:
$$
{\rm vol}_X (L) \geq {\rm vol}_{X|G}(L)\, {\rm vol}_{X_m}(L_m).
$$
Observe that ${L_m}$ restricted to the fibre of $g$ is ${\overline a}$-big. Hence, we can apply Remark \ref{thm1simple} to $(X_m,L_m,{\overline a})$ and obtain
$$
{\rm vol}_{X_m}(L_m)\geq (k+1)!\,{\rm deg}_{\overline a}^+g_*(L_m)=(k+1)!\,{\rm deg}_a^+ f_*(L_0).
$$
\bigskip
\noindent (ii) Consider the set of indices $I=\{1,\dots,m\}$ and let us construct an increasing ordered partition as follows. For $s=1,...,n-1$, consider $I_s=\{i\in I \,|\, \kappa (P_i)=s\,\}$. Recall that $\kappa (P_i)={\rm dim} \, \phi_{P_i}(F)$. Since $(L_{|F})_c=P_m$ is $a_{|F}$-big, its eventual map is generically finite, and so $I_{n-1}\neq \emptyset$. On the other hand, since $a_{|F}$ factorizes through any eventual map, we have that ${\rm dim}\,\phi_i(F)\geq k$, and hence $I_s=\emptyset$ for $s<k$.
Consider now Xiao's inequality (\ref{Xiaogeneral}):
\begin{equation}
N^n_{m+1}\geq \sum _{s=1}^{n-1} (\prod_{k>s}P_{b_k})\sum_{i \in I_s}(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1}).
\end{equation}
\bigskip
Consider the Stein factorization of the eventual maps induced by $|P_i|$, $\phi_i: F \longrightarrow F_i$, and let $R_i\in {\rm Pic}(F_i)$ be such that $P_i=\phi_i^*(R_i)$. We can assume that all the $F_i$ are smooth and that $\phi_i$ factorizes through $\phi_j$ if $i<j$. Let $G_i$ be the generic fibre of $\phi_i$. Observe that if $i,i'\in I_s$, then $G_i=G_{i'}$, and so we denote it by $G_s$. Observe that ${\rm dim} \, G_s=n-1-s.$ Since the eventual maps factorize $a_{|F}$, we also have maps $a_i: F_i\longrightarrow A$ such that $a_{|F}=a_i\circ \phi_i$.
Then we have that, for $i\in I_s$:
$$
\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l}\geq (s+1)P_i^s\geq \left[(k+1)R_i^s\right]G_s.
$$
Since $(F_i,R_i,a_i)$ is a triplet such that $(R_i)_c=R_i$ is big and of $a_i$-dimension $k$, we can apply the general Clifford-Severi inequality to obtain $R_i^s\geq k!\, h^0_{a_i}(F_i,R_i)=k!\,h^0_{a_{|F}}(F,P_i)\geq k! \, r_i$.
Summing up we have
\begin{equation}\label{paraigualdad}
\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l}\geq \left[k! \,(r_{i+1}+kr_i)\right]\,G_s\geq \left[(k+1)!\,r_i\right]\,G_s
\end{equation}
When $s\leq n-2$, we have that $(\prod_{k>s}P_{b_k})G_s$ is the volume of $P_{b_{n-1}}$ restricted to a curve in $T$. Since $|P_{b_{n-1}}|$ induces a generically finite map, we have that $(\prod_{k>s}P_{b_k})G_s\geq \delta_1(L_{|T})$. Hence, for any $i\in I_s$ we have
$$
(\prod_{k>s}P_{b_k})(\sum_{l=0}^{s}P_i^{s-l}P_{i+1}^{l})(\mu_i-\mu_{i+1})\geq (s+1)\,\delta_1(L_{|T})\,k!\,r_i\,(\mu_i-\mu_{i+1}) \geq (k+1)!\,\delta_1(L_{|T})\,r_i\,(\mu_i-\mu_{i+1}).
$$
When $s=n-1$, and $i\in I_{n-1}$ we can use Theorem 3.1 in \cite{J} to obtain
$$
P_i^{n-1-l}P_{i+1}^l\geq P_i^{n-1}\geq \beta(L_{|T},n-1,k) k! r_i,
$$
\noindent and so
$$
(\sum_{l=0}^{n-1}P_i^{n-1-l}P_{i+1}^{l})(\mu_i-\mu_{i+1})\geq n\,\beta(L_{|T},n-1,k)\,k!\,r_i\,(\mu_i-\mu_{i+1}).
$$
Now we take the smaller of the two lower bounds, for $s\leq n-2$ and for $s=n-1$, and observe that
$$
{\rm min}\{n\,\beta (L_{|T},n-1,k)\,k!,\; (k+1)!\,\delta_1(L_{|T})\}=\beta (L_{|T},n,k+1)\,(k+1)!.
$$
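The displayed equality of minima follows from the definition of $\beta$ in Remark \ref{beta}, together with an elementary binomial identity, spelled out here for the reader's convenience:
$$
% Since beta(L,s,k) = min{ binom(s,k) delta(L), delta_1(L) }, the equality
% reduces to the following identity and to (k+1)! <= n k!:
n\,k!\,\binom{n-1}{k}\;=\;\frac{n!}{(n-1-k)!}\;=\;(k+1)!\,\binom{n}{k+1},
\qquad (k+1)!\;\leq\; n\,k! \quad {\rm since} \;\; k+1\leq n.
$$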
We finally obtain
$$
{\rm vol}(L)\geq N^n_{m+1}\geq \beta(L_{|T},n,k+1) (k+1)! \sum_{i=1}^mr_i(\mu_i-\mu_{i+1})=\beta(L_{|T},n,k+1)(k+1)!\,{\rm deg}_a^+f_*L.
$$
\bigskip
\noindent (iii) If equality holds in (i), then equality holds for $s(g,L_m,{\overline a})$, and so Theorem \ref{thm1} applies.
If equality holds in (ii), then equality holds at every step, and in particular in formula (\ref{paraigualdad}). Hence $r_{i+1}=r_i$ for all $i$, so $m=1$ and $f_*(L_0)^+$ is semistable.
\end{proof}
\bigskip
\begin{comment}
\begin{cor}\label{CSeventualdegree}
Let $(X,L,a)$ be a triplet such that $k={\rm dim}\,a(X)\geq 2$. Let $G$ be a connected component of the general fibre of $a$. Then
$$
\lambda (L,a)\geq {\rm vol}_{X|G}(L)\,e(L)\, k!
$$
\end{cor}
\begin{proof}Up to a multiplication map by $d$ and a birational modification, consider a suitable fibration over $\mathbb{P}^1$ as given in the proof of Proposition \ref{pardini}. The fibre $F$ there has the save connected component $G$ of $a_{|F}$ and ${\rm dim}\,a(F)=k-1$. The slope of such fibration verifies inequality given in Theorem \ref{thm2} (i) and tends to $\lambda (L,a)$ when $d\rightarrow +\infty$ as shown in the proof of Proposition \ref{pardini}.
\end{proof}
\end{comment}
\section{Introduction}
Throughout this paper we let $\Gamma\subset\PSL_2\mathbb R$ be a non-elementary finitely generated discrete subgroup of the group of orientation preserving isometries of the hyperbolic plane $\BH^2$. Suppose for a moment that $\Gamma$ is torsion free, let $S=\BH^2/\Gamma$ be the associated hyperbolic surface, and let $\mathcal S(S)$ be the set of free homotopy classes of closed unoriented primitive essential curves therein. Here essential just means that the given homotopy class of curves is neither trivial nor peripheral. The mapping class group $\Map(S)$ of $S$ acts on $\mathcal S(S)$ and we say that two elements in the same orbit are {\em of the same type}. Mirzakhani studied the asymptotic behavior, when $L\to\infty$, of the number of elements in $\mathcal S(S)$ of some fixed type $\gamma_0$ and with length at most $L$. More concretely, she proved in \cite{Maryam0,Maryam1,Maryam2} that the limit
\begin{equation}\label{eq-maryam}
\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }\ell_S(\gamma)\le L\}\vert
\end{equation}
exists and is positive for every $\gamma_0\in\mathcal S} \newcommand{\CT}{\mathcal T(S)$. Here, $g$ is the genus of $S$, $r$ is the number of ends, and $\ell_S(\gamma)$ is the length of the hyperbolic geodesic in the homotopy class $\gamma$.
The goal of this note is to prove that this statement remains true when $\Gamma$ has torsion, that is when $\RO=\BH^2/\Gamma$ is an orbifold instead of a surface.
\begin{theorem}\label{sat1}
Let $\Gamma\subset\PSL_2\mathbb R$ be a non-elementary finitely generated discrete subgroup and $\RO=\BH^2/\Gamma$ the associated 2-dimensional hyperbolic orbifold. Then the limit
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }\ell_{\RO}(\gamma)\le L\}\vert$$
exists and is positive for any $\gamma_0\in\mathcal S^{\orb}(\RO)$. Here $g$ is the genus of the orbifold $\RO$ and $r$ is the sum of the numbers of singular points and ends.
\end{theorem}
A few comments on the notation and terminology used in Theorem \ref{sat1}:
\medskip
\noindent{\bf (1)} The topological space underlying an orientable 2-dimensional hyperbolic orbifold is an orientable topological surface. The genus of the orbifold is by definition the genus of that surface.
\noindent{\bf (2)} In the theorem, and also in the remainder of the paper, $\mathcal S^{\orb}(\RO)$ is the set of free homotopy classes of closed unoriented primitive essential curves, where the homotopy is taken in the category of orbifolds and where essential means that the curves in the given homotopy class are neither peripheral nor represent finite order elements in the orbifold fundamental group $\pi_1^{\orb}(\RO)$. Accordingly, $\ell_{\RO}(\gamma)$ is the length of the shortest curve homotopic to $\gamma$ in the category of orbifolds. See Section \ref{subsec:maps between orbifolds} for details.
\noindent{\bf (3)} As for surfaces, two elements in $\mathcal S^{\orb}(\RO)$ are of the same type if they differ by an element of the mapping class group
$$\Map^{\orb}(\RO)=\Homeo^{\orb}(\RO)/\Homeo^{\orb}_0(\RO).$$
Here $\Homeo^{\orb}(\RO)$ is the group of homeomorphisms of $\RO$ in the category of orbifolds, and $\Homeo^{\orb}_0(\RO)$ is its identity component. The mapping class group $\Map^{\orb}(\RO)$ is infinite unless $\RO$ is {\em exceptional}, by which we mean that it has genus $g=0$ and that $r=3$. See Section \ref{sec orbifold mapping class group} for more details on the mapping class group of an orbifold.
\medskip
As is already the case for the proof in \cite{Book} of Mirzakhani's \eqref{eq-maryam}, we will derive Theorem \ref{sat1} from the weak-*-convergence of certain measures on the space $\mathcal C^{\orb}(\RO)$ of currents, that is, the space of $\pi_1^{\orb}(\RO)$-invariant Radon measures on the set of geodesics on the orbifold universal cover $\tilde\RO$ of $\RO$. Trusting that the reader is familiar with currents, we just recall at this point that the set $\mathbb R_{\ge 0}\mathcal S^{\orb}(\RO)$ of weighted curves is a dense subset of $\mathcal C^{\orb}(\RO)$, that $\mathcal C^{\orb}(\RO)$ is a cone in a linear space, and that the action of $\Map^{\orb}(\RO)$ on $\mathcal S^{\orb}(\RO)$ extends to a linear action on $\mathcal C^{\orb}(\RO)$. We will recall a few facts about currents in Section \ref{sec currents} below, but we do already at this point refer the reader to \cite{AL,BonahonFrench,Bonahon2,Bonahon,Book} for details and background.
\begin{theorem}\label{main}
Let $\RO$ be a compact orientable non-exceptional hyperbolic orbifold with possibly empty totally geodesic boundary and let $\mathcal C^{\orb}(\RO)$ be the associated space of geodesic currents. There is a Radon measure $\mathfrak m_{\Thu}$ on $\mathcal C^{\orb}(\RO)$ such that for any $\gamma_0\in\mathcal S^{\orb}(\RO)$ we have
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}=C(\gamma_0)\cdot\mathfrak m_{\Thu}$$
for some positive constant $C(\gamma_0)>0$. Here $g$ is the genus of the orbifold $\RO$, $r$ is the sum of the numbers of singular points and boundary components, and $\RO$ is non-exceptional if $(g,r)\neq(0,3)$. Moreover $\delta_{\frac 1L\gamma}$ stands for the Dirac measure on $\mathcal C^{\orb}(\RO)$ centered at $\frac 1L\gamma$, and the convergence takes place with respect to the weak-*-topology on the space of Radon measures on $\mathcal C^{\orb}(\RO)$.
\end{theorem}
Again a few comments:
\medskip
\noindent{\bf (1)} Theorem \ref{main} remains true if we replace curves by multicurves, that is, if we replace $\gamma_0$ by finite formal linear combinations (with positive coefficients) of elements in $\mathcal S^{\orb}(\RO)$. In fact, the proof is just the same, only needing a bit more notation to keep track of things, and the interested reader will have no difficulty making the necessary tweaks.
\noindent{\bf (2)} Also, as is the case for surfaces, the statement of Theorem \ref{main} remains true if we replace $\Map^{\orb}(\RO)$ by a finite index subgroup $G$, and the constant on the right side changes exactly as it does in the case of surfaces---see \cite[Exercise 8.2]{Book}. In fact, in the course of the proof of Theorem \ref{main} we will have to work with such a finite index subgroup, the pure mapping class group.
\noindent{\bf (3)} The measure $\mathfrak m_{\Thu}$ in the statement of Theorem \ref{main} arises as the push-forward under a certain map of the usual Thurston measure on the space of measured laminations of a surface. We will however also give a short intrinsic description of $\mathfrak m_{\Thu}$ in Section \ref{sec thurston measure} below.
\noindent{\bf (4)} If one were to drop the assumption in Theorem \ref{main} that the orbifold is non-exceptional then the limit would trivially exist because the mapping class group would be finite, but the measure class of the obtained measure would obviously depend on $\gamma_0$. This is why we do need this assumption in Theorem \ref{main} but not in Theorem \ref{sat1} above or in Theorem \ref{sat3} below.
\medskip
All of this is well and good, but a more substantial observation is that Theorem \ref{main} implies that Theorem \ref{sat1} also holds if we replace $\ell_{\RO}$ by many other notions of length: length with respect to a variable curvature metric, word-length, extremal length, and so on. In fact, we can replace $\ell_{\RO}$ by any continuous homogeneous function
$$F:\mathcal C^{\orb}(\RO)\to\mathbb R_{\ge 0}$$
on the space of currents, where homogeneous means that $F(t\cdot\lambda)=t\cdot F(\lambda)$. See \cite{EPS,DidacThurston} for many examples of such functions.
\begin{theorem}\label{sat3}
Let $\RO$ be a compact orientable hyperbolic orbifold with possibly empty totally geodesic boundary and let $\mathcal C^{\orb}(\RO)$ be the associated space of geodesic currents. Then the limit
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }F(\gamma)\le L\}\vert$$
exists and is positive for any $\gamma_0\in\mathcal S^{\orb}(\RO)$ and any positive, homogeneous, continuous function $F:\mathcal C^{\orb}(\RO)\to\mathbb R_{\ge 0}$. Here $g$ is the genus of the orbifold $\RO$ and $r$ is the sum of the numbers of singular points and boundary components.
\end{theorem}
Let us now describe the strategy of the proof of our main result, Theorem \ref{main}. Instead of aiming to give a stand-alone proof of the theorem along the lines of the proof in \cite{Book} of the corresponding result for surfaces, we are going to use the latter to obtain the result for orbifolds. Suppose for the sake of concreteness that $\RO$ has no boundary and a single cone point and let $\Sigma$ be the surface obtained by deleting from $\RO$ a small ball around that singular point. The inclusion $\Sigma\hookrightarrow\RO$ induces a surjective map
\begin{equation}\label{eq silly map sick of this}
\mathcal S(\Sigma)\to\mathcal S(\RO)\cup\{*\}
\end{equation}
where $*$ is just a point to which one maps all essential curves in $\Sigma$ which are non-essential in $\RO$. In fact, this map is equivariant under the isomorphism $\Map(\Sigma)\simeq\Map^{\orb}(\RO)$ between the corresponding mapping class groups. Equivariance under this isomorphism implies that whenever $\eta_0\in\mathcal S(\Sigma)$ maps to $\gamma_0\in\mathcal S^{\orb}(\RO)$ then the push-forward under \eqref{eq silly map sick of this} of the measure $\frac 1{L^{6g-6+2r}}\sum_{\eta\in\Map(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}$ is the measure $\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}$. From the analogue of Theorem \ref{main} for surfaces, stated in Section \ref{need batteries} below, we get that the limit
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\eta\in\Map(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}$$
exists. As we see, Theorem \ref{main} would directly follow if the map \eqref{eq silly map sick of this} were to extend continuously to a map
\begin{equation}\label{eq silly map sick of this2}
\mathcal C(\Sigma)\to\mathcal C^{\orb}(\RO).
\end{equation}
It is however easy to see that such an extension does not exist: for any three essential $\alpha,\beta,\gamma\in\pi_1(\Sigma)$ with $\beta\in\Ker(\pi_1(\Sigma)\to\pi_1^{\orb}(\RO))$ we have that $\frac 1{2n}[\alpha^n,\beta]\gamma$ converges when $n\to\infty$ in $\mathcal C(\Sigma)$ to $\alpha$ but is mapped to $\frac 1{2n}\gamma\in\mathcal C^{\orb}(\RO)$, which converges to $0$. We bypass this problem by choosing the representative $\eta_0$ of $\gamma_0$ so that the currents of the form $\frac 1L\eta$ with $\eta\in\Map(\Sigma)\cdot\eta_0$ are all contained in a closed subset of the space of currents $\mathcal C(\Sigma)$ to which the map \eqref{eq silly map sick of this} actually extends continuously. We choose $\eta_0$ to be {\em as simple as possible} in some precise sense given in Section \ref{sec:as simple as possible}. That the so chosen $\eta_0$ has the desired property follows from Proposition \ref{main proposition}, the technical result at the core of this paper. This proposition basically asserts that the images in $\tilde\RO$ of geodesics in $\tilde\Sigma$ which are as simple as possible are uniformly quasigeodesic.
\subsection*{Non-orientable orbifolds}It is known that, at least as stated, the limit \eqref{eq-maryam} does not hold for non-orientable surfaces \cite{Gendulphe, Magee} and this is why we assumed in the theorems above that the orbifold is orientable. It is however worth noting that all results here remain true for non-orientable orbifolds whose underlying topological space is an orientable surface. An example is $D\Sigma/\tau$ where $D\Sigma$ is the double of $\Sigma$, an orientable surface with boundary, and $\tau$ is the involution interchanging the copies of the surface. The reason why the theorems remain true is that, up to passing to finite index subgroups, the mapping class group of such an orbifold is isomorphic to the mapping class group of an orientable surface for which we know that the analogue of Theorem \ref{main} holds. Anyways, we decided against extending the theorems above to this kind of non-orientable orbifolds since (1) it would make the paper much harder to read and (2) we do not have any concrete applications in mind.
\subsection*{Plan of the paper}
In Section \ref{sec:orbifolds} we recall some facts and definitions about orbifolds, maps between orbifolds, the mapping class group of orbifolds, and such. In Section \ref{sec:as simple as possible} we state precisely what we mean by {\em as simple as possible} and state, without proof, Proposition \ref{main proposition}. In Section \ref{sec:proofs of theorems} we recall a few facts about currents and, assuming Proposition \ref{main proposition}, prove Theorem \ref{main} and the other results mentioned above. In Section \ref{sec:lemmas} we prove a few facts needed in Section \ref{sec:proof of proposition}, where we prove Proposition \ref{main proposition}.
\subsection*{Acknowledgements} This has been one of those projects that for whatever reason take a long time to be completed. So long in fact that it would be impossible to make a comprehensive list of everyone we owe our gratitude to, and wishing not to be unfair we thank nobody---ingen n\"amnd, ingen gl\"omd (nobody mentioned, nobody forgotten). With one exception, because the first author has not forgotten that during the start of the project she was supported by Pekka Pankka's Academy of Finland project \#297258 at the University of Helsinki.
\section{Orbifolds}\label{sec:orbifolds}
In this section we recall a few basics about orbifolds such as definitions, (hyperbolic) orbifolds as orbit spaces, and mapping class groups. We also fix some notation that we will use throughout the paper. This is why we encourage even readers who already know all about orbifolds to at least skim this section.
\subsection{Orbifolds per se}
An orbifold $\RO$ is a space which is locally modeled on the quotient space of Euclidean space by a finite group action. More precisely, an {\em orbifold chart} of a Hausdorff paracompact topological space $\RO$ is a tuple $(U,\hat U,\Gamma,\phi)$ where $U\subset\RO$ and $\hat U\subset\mathbb R^n$ are open, where $\Gamma$ is a finite group acting on $\hat U$, and where $\phi:\hat U/\Gamma\to U$ is a homeomorphism. An {\em orbifold atlas} is a collection $\{(U_i,\hat U_i,\Gamma_i,\phi_i)\vert\ i\in I\}$ of orbifold charts such that $\{U_i\vert\ i\in I\}$ is an open cover of $\RO$ closed under intersections and such that whenever $U_i\subset U_j$ there are (1) a group homomorphism $f_{i,j}:\Gamma_i\to\Gamma_j$ and (2) an $f_{i,j}$-equivariant embedding
$$\hat\phi_{i,j}:\hat U_i\to\hat U_j$$
such that the diagram
$$\xymatrix{\hat U_i\ar[d]\ar[r]^{\hat\phi_{i,j}} & \hat U_j\ar[d]\\ \hat U_i/\Gamma_i\ar[d]_{\phi_i}\ar[r] & \hat U_j/\Gamma_j \ar[d]^{\phi_j}\\ U_i \ar@{^{(}->}[r] & U_j}$$
commutes. An {\em orbifold} is then a Hausdorff paracompact space endowed with an orbifold atlas.
The orbifold is orientable if all group actions $\Gamma_i\curvearrowright\hat U_i$ and all embeddings $\hat\phi_{i,j}$ are orientation preserving. Similarly if we replace orientable by smooth. An orbifold with boundary is defined in the same way but this time the sets $\hat U_i$ are assumed to be open in $\mathbb R_{\le 0}\times\mathbb R^{n-1}$. An $n$-dimensional hyperbolic orbifold is one where the sets $\hat U_i$ are contained in $\BH^n$, where the actions $\Gamma_i\curvearrowright\hat U_i$ preserve the hyperbolic metric, and where the maps $\hat\phi_{i,j}$ are isometric embeddings. To define a hyperbolic orbifold with totally geodesic boundary one copies what we just wrote, only replacing $\BH^n$ by a closed half-space therein.
As is the case in the world of manifolds, orbifolds have maximal orbifold atlases, orientable orbifolds have maximal orientable orbifold atlases, smooth orientable orbifolds have maximal smooth orientable orbifold atlases, and so on. We will always assume that our orbifolds (with adjectives) are equipped with maximal atlases (with adjectives).
We refer to \cite{Scott} for more on orbifolds.
\subsection{Singular points}
A point $p$ in an orbifold $\RO$ is {\em singular} if there are an orbifold chart $(U,\hat U,\Gamma,\phi)$ and $\hat p\in\hat U$ with $\phi(\hat p)=p$ and satisfying that $\Stab_\Gamma(\hat p)\neq \Id$. A point which is not singular is {\em regular}. We denote by $\sing(\RO)$ the set of singular points of $\RO$.
The {\em singular set} $\sing(\RO)$ is a closed subset of $\RO$. It might be empty, but its complement might also be empty. It is actually sometimes really important to allow oneself to work with orbifolds with $\sing(\RO)=\RO$---not the simplest example one can find, but the moduli space $\mathcal M_{2,0}$ of closed Riemann surfaces of genus $2$ is such an orbifold. However,
\begin{quote}
all orbifolds in this paper are such that $\sing(\RO)$ is a proper subset of $\RO$.
\end{quote}
In the cases we are interested in, namely compact orbifolds which are orientable, connected and 2-dimensional we have that $\sing(\RO)$ is in fact a finite set of points in the interior of $\RO$.
\begin{bem}
Whenever we need to choose a base point in our orbifold $\RO$, for example when working with the fundamental group $\pi_1^{\orb}(\RO)$, then we will assume without further mention that the base point is regular. The reader might amuse themselves by thinking about what the right notion of base point in the category of orbifolds would be if they allowed singular points to be base points.
\end{bem}
\subsection{Maps between orbifolds}\label{subsec:maps between orbifolds}
A map $f:\RO\to\RO'$ between two orbifolds is a continuous map such that whenever $\phi_i:\hat U_i/\Gamma_i\to U_i$ and $\phi_i':\hat U_i'/\Gamma_i'\to U_i'$ are orbifold charts for $\RO$ and $\RO'$ with $f(U_i)\subset U_i'$ then there are a homomorphism $\Gamma_i\to\Gamma_i'$ and an equivariant continuous map $\hat f_i:\hat U_i\to \hat U_i'$ such that the obvious diagram
$$\xymatrix{\hat U_i\ar[r]^{\hat f_i}\ar[d] & \hat U_i' \ar[d] \\ U_i\ar[r]_{f} & U_i'}$$
commutes. If $\RO$ and $\RO'$ are smooth orbifolds and if the $\hat f_i$ are smooth, then $f$ is said to be smooth. Orbifolds and maps between orbifolds form a category, and the same holds for smooth orbifolds and smooth maps between them.
If $(U,\hat U,\Gamma,\phi)$ is an orbifold chart of an orbifold $\RO$, then $([0,1]\times U,[0,1]\times\hat U,\Gamma,\Id\times\phi)$ where $g\in\Gamma$ acts on $[0,1]\times \hat U$ via $g(t,x)=(t,gx)$ is an orbifold chart of $[0,1]\times\RO$. The collection of all so obtained orbifold charts forms an orbifold atlas, giving $[0,1]\times\RO$ the structure of an orbifold. It thus makes sense to say that two orbifold maps
$$f,f':\RO\to\RO'$$
are {\em homotopic in the category of orbifolds} if there is an orbifold map
$$F:[0,1]\times\RO\to\RO',\ \ F(t,x)=F_t(x)$$
with $F_0=f$ and $F_1=f'$.
Anyways, armed with the notion of homotopy of orbifold maps one can define the {\em orbifold fundamental group} $\pi_1^{\orb}(\RO)$ of $\RO$ exactly as one does for the usual fundamental group, just replacing homotopies by orbifold homotopies. One should note that any two orbifold maps which are homotopic as orbifold maps are also homotopic as maps between topological spaces, but that the converse does not need to be true. In fact, there are plenty of orbifolds which are simply connected as topological spaces but whose orbifold fundamental group is non-trivial, meaning that there are orbifold maps $\gamma:\mathbb S^1\to\RO$ which, as orbifold maps, are not homotopic to constant maps. As is the case for manifolds, in the category of orbifolds free homotopy classes of {\em curves} $\gamma:\mathbb S^1\to\RO$ correspond to conjugacy classes in the orbifold fundamental group. We make the following convention:
\begin{quote}
If $\RO$ is a compact orbifold with boundary then we will say that a curve is {\em essential} if it is not freely homotopic into the boundary and if the associated free homotopy class is that of an infinite order element in the orbifold fundamental group. We will also denote by $\mathcal S^{\orb}(\RO)$ the set of all free homotopy classes of essential curves in $\RO$.
\end{quote}
\subsection{Orbifolds as orbit spaces}
Following word-by-word the usual construction of the universal cover of a manifold but replacing homotopies by homotopies of orbifold maps one gets the orbifold universal cover $\tilde\RO$ of the orbifold $\RO$. As is the case for manifolds, the fundamental group $\pi_1^{\orb}(\RO)$ acts discretely on the universal cover $\tilde\RO$. Similarly, orbifold maps $f:\RO\to\RO'$ between orbifolds induce homomorphisms $f_*:\pi_1^{\orb}(\RO)\to\pi_1^{\orb}(\RO')$ between the associated orbifold fundamental groups and lift to $f_*$-equivariant maps $\tilde f:\tilde\RO\to\tilde\RO'$ between the universal covers.
Orbifolds whose universal cover is a manifold are said to be {\em good}. And they deserve that name because working with them is much easier than working with general orbifolds. For example there is a pretty concrete description of the orbifold charts for good orbifolds $\RO$. They are namely of the form
$$\phi:\hat U/H\to U$$
where $H\subset\pi_1^{\orb}(\RO)$ is a finite subgroup, where $\hat U\subset\tilde\RO$ is an open connected subset with $H\hat U=\hat U$ and $g\hat U\cap\hat U=\emptyset$ whenever $g\notin H$, where $U$ is the image of $\hat U$ under the universal covering map $\pi:\tilde\RO\to\RO$, and where finally $\phi$ is the map given by $\phi(xH)=\pi(x)$.
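For instance, if $\RO$ is a good $2$-dimensional hyperbolic orbifold and $\tilde p\in\tilde\RO$ is a point whose stabilizer $H\subset\pi_1^{\orb}(\RO)$ is generated by a rotation of order $n$ around $\tilde p$, then one can take $\hat U$ to be a small metric ball around $\tilde p$, and the chart
$$\phi:\hat U/H\to U,\qquad \phi(xH)=\pi(x)$$
exhibits $\pi(\tilde p)$ as a cone point with cone angle $\frac{2\pi}n$.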
Hyperbolic orbifolds, possibly with geodesic boundary, are good. These are the orbifolds we will be interested in. We fix now the notation that we will be using from this point on:
\begin{quote}
{\bf Notation.}
Let $\tilde\RO\subset\BH^2$ be a closed connected (2-dimensional) subset of the hyperbolic plane with possibly empty geodesic boundary, let $\Gamma\subset\PSL_2\mathbb R} \newcommand{\BD}{\mathbb D$ be a discrete subgroup which preserves $\tilde\RO$ and such that the induced action $\Gamma\curvearrowright\tilde\RO$ is cocompact, and denote by
$$\RO=\tilde\RO/\Gamma$$
the associated hyperbolic orbifold. When needed, we will refer to the hyperbolic metric on both $\RO$ and $\tilde\RO$ by $\rho_{\hyp}$. Finally, we also write
$$\sing(\Gamma)=\{p\in\tilde\RO\text{ with }\Stab_\Gamma(p)\neq\Id\}$$
for the set of points in $\tilde\RO$ with non-trivial stabilizer, that is the preimage of $\sing(\RO)$ under the map $\tilde{\RO}\to\RO$.
\end{quote}
In this setting, $\tilde\RO$ is the orbifold universal cover of $\RO$ and $\Gamma=\pi_1^{\orb}(\RO)$ is its orbifold fundamental group.
It is not hard to see that the orbifolds $\RO$ we are interested in are homeomorphic as topological spaces to surfaces, that is to 2-dimensional manifolds. Such homeomorphisms do however destroy the orbifold structure. In fact, much more information is encoded in the surface that we get by deleting the singular points of $\RO$. Since we want to work with compact surfaces, we instead delete small balls around the singular points.
\subsection{The surface associated to a hyperbolic orbifold $\RO$}\label{sec surface associated to orbifold}
Continuing with the same notation let $\RO=\tilde\RO/\Gamma$ be a compact orientable hyperbolic 2-orbifold with possibly empty totally geodesic boundary. We choose now two positive constants $\epsilon$ and $\delta$ which will accompany us throughout the paper. Other than being very small, say $\epsilon<10^{-10}$, here are the conditions that $\epsilon$ has to satisfy:
\begin{itemize}
\item[(C1)] $200\epsilon$ is less than the length of the shortest non-trivial periodic $\rho_{\hyp}$-geodesic in $\RO$,
\item[(C2)] $200\epsilon$ is less than the minimal distance between any two points in $\sing(\Gamma)$, and
\item[(C3)] $200\epsilon$ is less than the distance between any point in $\partial\tilde\RO$ and any point in $\sing(\Gamma)$.
\end{itemize}
When it comes to $\delta$ we will later give a fourth condition (see (C4) in Section \ref{sec:lemmas}) that it has to satisfy, but for now we just assume that $3\delta<\epsilon$. Note that this implies that the $\delta$-balls around points in $\sing(\Gamma)$ are disjoint from each other and do not meet $\partial\tilde\RO$. This means that
\begin{equation}\label{eq associated surface}
\hat\Sigma=\tilde\RO\setminus\CN_{\hyp}(\sing(\Gamma),\delta)
\end{equation}
is a smooth surface with boundary, where
$$\CN_{\hyp}(X,r)=\{p\in\tilde\RO\text{ with }d_{\hyp}(p,X)< r\}.$$
Note that the action of $\Gamma=\pi_1^{\orb}(\RO)$ on $\tilde\RO$ induces an action on $\hat\Sigma$ which is not only discrete but also free. We refer to the quotient surface
$$\Sigma=\hat\Sigma/\Gamma$$
as {\em the surface associated to the orbifold $\RO$} and denote its universal cover by $\tilde\Sigma$. By construction, it is also the universal cover of $\hat\Sigma$. In fact, $\hat{\Sigma}$ is the cover of $\Sigma$ corresponding to the normal subgroup of $\pi_1(\Sigma)$ generated by all loops homotopic into $\partial\Sigma\setminus\partial\RO$.
\begin{bem}
We denote by $B_{\hyp}(q,r)\subset\tilde{\RO}$ the hyperbolic ball of radius $r$ around $q$. Equivalently,
$$B_{\hyp}(q,r)=\CN_{\hyp}(\{q\},r).$$
Also, abusing terminology we will not distinguish between $\CN_{\hyp}(X,r)$ or $B_{\hyp}(q,r)$ and their closures. That is, both open balls and closed balls, and open neighborhoods and closed neighborhoods, are denoted using the same symbol.
\end{bem}
\subsection{Mapping class groups of the orbifold and of the associated surface}\label{sec orbifold mapping class group}
As is the case for manifolds, one can say anything one wants to say about the orbifold $\RO=\tilde\RO/\Gamma$ in terms of $\Gamma$-equivariant objects in the universal cover $\tilde\RO$. For example, the group $\Homeo^{\orb}(\RO)$ of orbifold self-homeomorphisms of $\RO$ can be identified with
$$\Homeo^{\orb}(\RO)=\Homeo_\Gamma(\tilde{\RO})/\Gamma$$
where
$$\Homeo_\Gamma(\tilde\RO)=\{\tilde f\in\Homeo(\tilde\RO)\text{ with }\tilde f\Gamma\tilde f^{-1}=\Gamma\}$$
is the group of (topological) homeomorphisms of $\tilde\RO$ conjugating $\Gamma$ to itself. The mapping class group, in the category of orbifolds, of $\RO$ is then the group
$$\Map^{\orb}(\RO)=\Homeo^{\orb}(\RO)/\Homeo_0^{\orb}(\RO)$$
where $\Homeo_0^{\orb}(\RO)$ is the identity component of $\Homeo^{\orb}(\RO)$.
The group $\Homeo_\Gamma(\tilde\RO)$ acts on the set
$$\sing(\Gamma)=\{p\in\tilde\RO\text{ with }\Stab_\Gamma(p)\neq\Id\}$$
of points with non-trivial stabilizer. It also acts on the set $\pi_0(\partial\tilde\RO)$ of boundary components of $\tilde\RO$. It follows that the mapping class group acts on the finite sets $\sing(\Gamma)/\Gamma$ and $\pi_0(\partial\tilde\RO)/\Gamma$. The {\em pure mapping class group}
$$\PMap^{\orb}(\RO)=\{\phi\in\Map^{\orb}(\RO)\text{ pointwise fixing }\sing(\Gamma)/\Gamma\text{ and }\pi_0(\partial\tilde\RO)/\Gamma\}$$
is the finite index subgroup of $\Map^{\orb}(\RO)$ consisting of mapping classes which act trivially on these two sets.
Note now that the canonical inclusion $\Sigma\hookrightarrow\RO$ of the associated surface into our orbifold is an embedding in the category of orbifolds. We have however also other interesting maps $\Sigma\to\RO$, namely those which are the identity outside of a small neighborhood of the boundary of $\Sigma$ and which map $\Sigma\setminus\partial\Sigma$ homeomorphically to $\RO\setminus\sing(\RO)$. Such maps are not homeomorphisms but they induce homomorphisms between the group of homeomorphisms of $\Sigma$ acting trivially on $\pi_0(\partial\Sigma)$ and the group of orbifold homeomorphisms of $\RO$. Any such map induces an isomorphism between the pure mapping class groups
\begin{equation}\label{eq isomorphisms pure mapping class groups}
\PMap(\Sigma)\simeq\PMap^{\orb}(\RO)
\end{equation}
of $\Sigma$ and $\RO$, where
$$\PMap(\Sigma)=\{\phi\in\Homeo(\Sigma)\text{ acting trivially on }\pi_0(\partial\Sigma)\}/\Homeo_0(\Sigma).$$
It is well-known that every mapping class in $\Map(\Sigma)$ can be represented by a diffeomorphism.
Although our definition of the mapping class group differs from theirs (we do not have twists around the boundary) we refer to the book \cite{Farb-Margalit} by Farb and Margalit for background on the mapping class group.
\subsection{A metric on the associated surface $\hat\Sigma$}\label{sec:metric}
Although (locally) negatively curved from the point of view of comparison geometry, the restriction of the hyperbolic metric $\rho_{\hyp}$ to $\hat\Sigma$ is not as nice as one would wish. The problem is that, since the new boundary components are concave, geodesics are not uniquely determined by their tangent vectors at a point. In particular, distinct geodesics do not need to be transversal to each other. This is why we from now on endow $\hat\Sigma$ with a smooth Riemannian metric $\rho$ with the following properties:
\begin{itemize}
\item $\rho$ is negatively curved and $\Gamma$-invariant.
\item The boundary of $\hat\Sigma$ is totally geodesic with respect to $\rho$.
\item Both $\rho$ and $\rho_{\hyp}$ agree on the subset $\tilde\RO\setminus\CN_{\hyp}(\sing(\Gamma),2\delta)$ of $\hat\Sigma$.
\item If $I\subset\tilde\RO$ is a $\rho_{\hyp}$-geodesic segment starting at a point in $\sing(\Gamma)$ and with $\rho_{\hyp}$-length $3\delta$, then $I\cap\hat\Sigma$ is a $\rho$-geodesic segment perpendicular to the boundary of $\hat\Sigma$.
\end{itemize}
The reason why we impose this final condition is that, if $p\in\sing(\Gamma)$ and $r>2\delta$ are such that $\rho$ and $\rho_{\hyp}$ agree on $B_{\hyp}(p,r)\setminus B_{\hyp}(p,2\delta)$ then the $\rho_{\hyp}$ radial foliation $\CF$ of $B_{\hyp}(p,r)\setminus B_{\hyp}(p,\delta)$ is $\rho$-geodesic. It follows in particular that the restriction of the radial projection
$$B_{\hyp}(p,r)\setminus B_{\hyp}(p,\delta)\to\partial B_{\hyp}(p,r)$$
to any $\rho$-geodesic segment $\eta$ which is not contained in a leaf of $\CF$ is monotonic in the sense that its derivative is never $0$. We thus get that simple $\rho$-geodesic segments $\eta\subset B_{\hyp}(p,r)\setminus B_{\hyp}(p,\delta)$ whose endpoints are in $\partial B_{\hyp}(p,r)$ meet each leaf of $\CF$ at most once---see Figure \ref{fig radial foliation}. We record this fact for later use:
\begin{lemma}\label{lem:meet rays once}
Suppose that $p\in\sing(\Gamma)$ and $r>2\delta$ are such that $\rho$ and $\rho_{\hyp}$ agree on $B_{\hyp}(p,r)\setminus B_{\hyp}(p,2\delta)$, and let $\eta\subset B_{\hyp}(p,r)\setminus B_{\hyp}(p,\delta)$ be a $\rho$-geodesic segment whose boundary points are contained in $\partial B_{\hyp}(p,r)$. If $\eta$ is simple, then $\eta$ meets every $\rho_{\hyp}$-geodesic ray emanating out of $p$ at most once. In particular, $\eta$ has length at most $2\pi\sinh(r)$.\qed
\end{lemma}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{radialfoliation.pdf}
\caption{Schematic representation of two $\rho$-geodesic segments in $B_{\hyp}(p,3\delta)\setminus B_{\hyp}(p,\delta)$.}
\label{fig radial foliation}
\end{figure}
We should comment on the existence of $\rho$. In fact, it is not hard to construct such a metric. For example, when working in standard hyperbolic polar coordinates $(r,\theta)$ in the ball $B_{\hyp}(p,3\delta)$ around $p\in\sing(\Gamma)$ one can take any
\begin{equation}\label{go buy batteries}
\rho=dr^2+\phi(r)^2\cdot d\theta^2
\end{equation}
where $\phi:[\delta,3\delta)\to(0,\infty)$ is a smooth function satisfying
$$ \phi''(\cdot)>0,\ \phi'(\delta)=0, \text{ and }\phi(s)=\sinh(s)\text{ for }s>2\delta.$$
The first condition on $\phi$ ensures that the sectional curvature $\kappa= \frac{-\phi''}{\phi}$ is negative, the second that $\{d_{\hyp}(p,\cdot)=\delta\}$ is totally geodesic, and the third that $\rho$ agrees with $\rho_{\hyp}$ on $B_{\hyp}(p,3\delta)\setminus B_{\hyp}(p, 2\delta)$. In particular, if we use the same function $\phi$ on each $3\delta$-ball around points in $\sing(\Gamma)$ and we set $\rho=\rho_{\hyp}$ outside those balls, then we obtain a $\Gamma$-invariant metric on the whole of $\hat{\Sigma}$. Finally note that the curves $t\mapsto (t, \theta)$, that is the $\rho_{\hyp}$-geodesic segments starting at $p$, are $\rho$-geodesic segments for any choice of $\phi$. In other words, also the fourth property we wanted our metric to satisfy holds.
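For the reader's convenience we recall the standard computations behind the first two claims: for a metric of the form \eqref{go buy batteries} the Gauss curvature and the geodesic curvature of the circle of radius $s$ around $p$ are given by
$$\kappa=-\frac{\phi''}{\phi}\qquad\text{and}\qquad \kappa_g\big(\{d_{\hyp}(p,\cdot)=s\}\big)=\frac{\phi'(s)}{\phi(s)},$$
so that $\phi''>0$ yields $\kappa<0$, while $\phi'(\delta)=0$ yields $\kappa_g(\{d_{\hyp}(p,\cdot)=\delta\})=0$, that is, a geodesic boundary circle.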
Note that $\Gamma$-invariance of $\rho$ implies that it descends to a metric on $\Sigma$ which we once again call $\rho$. Similarly, we denote also by $\rho$ the induced metric on the universal cover $\tilde{\Sigma}$.
\section{As simple as possible representatives}\label{sec:as simple as possible}
Continuing with the same notation let $\RO=\tilde\RO/\Gamma$ be a compact orientable hyperbolic orbifold and let $\Sigma=\hat\Sigma/\Gamma$ with $\hat\Sigma$ as in \eqref{eq associated surface} be the associated surface, endowed with the metric $\rho$ we just fixed. By construction, $\hat\Sigma$ is a connected subset of the universal cover $\tilde\RO$ of $\RO$. It follows that the inclusion $\Sigma\hookrightarrow\RO$ induces a surjective homomorphism
$$\pi_1(\Sigma)\to\pi_1^{\orb}(\RO)=\Gamma.$$
This means that every homotopically essential curve in $\RO$ is freely homotopic (in the category of orbifolds) to one contained in $\Sigma$. In this section we describe how to pick for curves in $\RO$ representatives in $\Sigma$ which are as simple as possible.
\begin{defi*}
A $\rho$-geodesic $\alpha:\mathbb R} \newcommand{\BD}{\mathbb D\to \hat{\Sigma}$ whose image is not contained in $\partial\hat\Sigma$ is {\em as simple as possible} if
\begin{enumerate}
\item it is injective, and
\item for all $g\in\Gamma$ the geodesics $\alpha$ and $g(\alpha)$ are either identical or meet at most once.
\end{enumerate}
We say that a $\rho$-geodesic in $\Sigma$ is as simple as possible if its lifts to $\hat{\Sigma}$ are as simple as possible. Similarly, a $\rho$-geodesic in the universal cover $\tilde\Sigma$ of $\Sigma$ is as simple as possible if its images in $\hat\Sigma$ are as simple as possible. Finally, a homotopy class in $\Sigma$ is as simple as possible if its $\rho$-geodesic representative is as simple as possible.
\end{defi*}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{notalmostsimple.pdf}
\caption{Two geodesics (in black) in $\hat{\Sigma}$ which differ by an element in $\Gamma$ fixing the marked point: they are not as simple as possible.}
\label{fig almost simple}
\end{figure}
Before going any further we note that non-trivial closed $\rho_{\hyp}$-geodesics $\gamma:\mathbb S} \newcommand{\BZ}{\mathbb Z^1\to\RO$ have representatives $\eta:\mathbb S} \newcommand{\BZ}{\mathbb Z^1\to\Sigma\subset\RO$ which are as simple as possible. It suffices to choose $\eta\subset \Sigma$ to be a shortest representative of $\gamma$. Indeed, the fact that $\eta$ is shortest implies that its lifts to $\hat{\Sigma}$ have no bigons, showing that $\eta$ is as simple as possible. We record this fact for later use:
\begin{lemma}\label{lem banana}
Every $\rho_{\hyp}$-geodesic $\gamma:\mathbb S} \newcommand{\BZ}{\mathbb Z^1\to\RO$ is freely homotopic, in the category of orbifolds, to a $\rho$-geodesic $\eta:\mathbb S} \newcommand{\BZ}{\mathbb Z^1\to\Sigma\subset\RO$ which is as simple as possible.\qed
\end{lemma}
The reader might be wondering why instead of simply speaking of shortest representatives we choose something as clumsy as ``as simple as possible''. The reason is that the latter property is mapping class group invariant:
\begin{lemma}\label{lem as simple as possible mapping class group}
If $\eta\subset\Sigma$ is as simple as possible then $\phi(\eta)$ is also as simple as possible for every $\phi\in\PMap(\Sigma)$.
\end{lemma}
\begin{proof}
Abusing notation, denote the $\rho$-geodesic freely homotopic to $\eta$ by the same letter. To determine $\phi(\eta)$ choose first a representative $\varphi\in\Diff(\Sigma)$ of the mapping class $\phi$ and let $\varphi_*:\pi_1(\Sigma)\to\pi_1(\Sigma)$ be the homomorphism induced by $\varphi$---we can always choose $\varphi$ so that it fixes some point and take that point as the base point for the fundamental group. Note that $\varphi_*$ preserves the normal subgroup of $\pi_1(\Sigma)$ generated by loops freely homotopic into $\partial\Sigma\setminus\partial\RO$ and that $\pi_1(\hat\Sigma)\subset\pi_1(\Sigma)$ is nothing other than this subgroup. We get that $\varphi$ lifts to $\hat\Sigma$, or more precisely, that there is a $\varphi_*$-equivariant lift $\hat\varphi\in\Diff(\hat\Sigma)$. Now, if $\hat\eta$ is a lift of $\eta$ to $\hat\Sigma$ then we have for all $g\in\Gamma$ that
$$\hat\varphi(\hat\eta)\cap\varphi_*(g)\left(\hat\varphi(\hat\eta)\right)=\hat\varphi(\hat\eta)\cap\hat\varphi(g\hat\eta)=\hat\varphi(\hat\eta\cap g\hat\eta).$$
Since $\eta$ was as simple as possible we get that $\hat\varphi(\hat\eta)$ is simple, that it meets each of its individual $\Gamma$-translates at most once, and that when such intersections occur they are transversal.
It follows that the image of $\varphi(\eta)$ in $\Sigma$ has no bigons. The same is true for $(\phi(\eta))_*$, the geodesic in $(\Sigma, \rho)$ freely homotopic to $\varphi(\eta)$. Now, \cite[Theorem 2.1]{Hass-Scott} implies that these two curves are not only freely homotopic to each other but also transversely freely homotopic to each other. This means in particular that intersection points are neither destroyed nor created during the homotopy. Hence, each lift of $(\phi(\eta))_*$ to $\hat\Sigma$ meets its individual $\Gamma$-translates in at most one point. In other words, $\phi(\eta)$ is as simple as possible.
\end{proof}
The reason why we are interested in $\rho$-geodesics in $\hat\Sigma$ which are as simple as possible is that, as we will see shortly, this topological property implies that they are uniform quasigeodesics with respect to the hyperbolic metric. Recall that a continuous curve $\alpha:\mathbb R\to\tilde\RO$ is {\em $A$-quasigeodesic} if we have
$$A\cdot\vert t-s\vert+A\ge d_{\hyp}(\alpha(s),\alpha(t))\ge\frac 1A\vert s-t\vert-A$$
for all $s,t\in\mathbb{R}$. It is quasigeodesic if it is $A$-quasigeodesic for some $A\ge 1$.
We are now ready to state the key technical result of this paper:
\begin{prop}\label{main proposition}
Let $\RO$ be as in the statement of Theorem \ref{main}, $\hat{\Sigma}$ as in \eqref{eq associated surface}, and $\rho$ the metric on $\hat{\Sigma}$ constructed in Section \ref{sec:metric}. There exists $A\ge 1$ such that any unit speed $\rho$-geodesic $\alpha: \mathbb R\to\hat\Sigma$ which is (1) a quasigeodesic in $(\tilde{\RO},\rho_{\hyp})$ and (2) as simple as possible, is actually $A$-quasigeodesic in $(\tilde{\RO},\rho_{\hyp})$.
\end{prop}
Proposition \ref{main proposition} will be proved in Section \ref{sec:proof of proposition}. We just add now a few comments:
\medskip
\noindent{\bf (1)} Note that in Proposition \ref{main proposition} we cannot simply drop the assumption that $\alpha$ is a quasigeodesic in $(\tilde{\RO},\rho_{\hyp})$. For example, if $\alpha$ is a lift of a simple geodesic in $\Sigma$ which spirals in both directions onto components of $\partial\Sigma\setminus\partial\RO$ then it is as simple as possible but not a quasigeodesic, and in particular not an $A$-quasigeodesic for any choice of $A$. However, this is basically the only case we have to rule out, because we could replace the condition that $\alpha$ is a quasigeodesic in the proposition by the assumption that $\alpha$ does not accumulate on a compact component of $\partial{\hat{\Sigma}}$ in either direction. We leave it however as it is because the curves we will be interested in are automatically quasigeodesics: they are lifts $\hat\eta$ to $\hat\Sigma$ of representatives $\eta$ in $\Sigma$ of essential curves $\gamma$ in $\RO$.
\noindent{\bf (2)} Suppose that $\gamma\in\mathcal S^{\orb}(\RO)$ is an essential curve in $\RO$. Proposition \ref{main proposition} implies that we have
$$\ell_{\rho}(\eta)\le A\cdot\ell_{\hyp}(\gamma)$$
for any $\rho$-geodesic representative $\eta\subset\Sigma$ of $\gamma$ which is as simple as possible.
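Indeed, if $\alpha:\mathbb R\to\hat\Sigma$ is a unit speed lift of $\eta$ and $g\in\Gamma$ represents $\gamma$, so that $\alpha(t+\ell_\rho(\eta))=g\alpha(t)$ for all $t$, then the $A$-quasigeodesic property yields
$$\frac{n\cdot\ell_\rho(\eta)}A-A\le d_{\hyp}(\alpha(0),g^n\alpha(0))\le n\cdot\ell_{\hyp}(\gamma)+2\cdot d_{\hyp}(\alpha(0),\mathrm{axis}(g))$$
for every $n$; dividing by $n$ and letting $n\to\infty$ we recover the inequality above.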
It follows that $\gamma$ only has finitely many such representatives.
\noindent{\bf (3)}
On the other hand, as simple as possible representatives are not unique. In fact, a primitive closed geodesic in $\RO$ which goes through $k$ cone points of odd order and none of even order has at least $2^k$ representatives in $\Sigma$ which are as simple as possible: at each one of those $k$ points, steer slightly either right or left to avoid hitting the cone point. And this is not optimal because, perturbing the metric slightly, one can get the geodesic off $\sing(\RO)$, while representatives that were as simple as possible stay as simple as possible.
\noindent{\bf (4)}
The construction sketched in (3) shows that ``shortest" and ``as simple as possible" are not the same thing.
\section{Main results}\label{sec:proofs of theorems}
In this section we prove Theorem \ref{main} assuming Proposition \ref{main proposition}. However, before doing so we have to recall a few facts about currents and about Mirzakhani's counting theorem.
\subsection{Currents}\label{sec currents}
Let $X$ be a simply connected negatively curved surface with possibly empty totally geodesic boundary, and let $G\subset\Isom_+(X)$ be a discrete subgroup of orientation preserving isometries with $X/G$ compact. We will be interested in the following two possible cases:
\begin{itemize}
\item $X=\tilde\RO\subset\BH^2$ is the universal cover of our compact hyperbolic orbifold $\RO=\tilde\RO/\Gamma$ and $G=\pi_1^{\orb}(\RO)=\Gamma$ is its fundamental group.
\item $X=(\tilde\Sigma,\rho)$ is the universal cover of $\Sigma$ endowed with the metric $\rho$ and $G=\pi_1(\Sigma)$ is its fundamental group.
\end{itemize}
Let $\mathcal G(X)$ be the set of all unoriented bi-infinite geodesics in $X$. The action $G\curvearrowright X$ induces an action of $G$ on $\mathcal G(X)$. A current on $X/G$ is a $G$-invariant Radon measure on $\mathcal G(X)$. Let $\mathcal C^{\orb}(X/G)$ be the space of all currents on $X/G$ endowed with the weak-*-topology. It is a Hausdorff, metrizable, second countable, and locally compact space, and the projectivized space $\mathbb P\mathcal C^{\orb}(X/G)$ is compact.
\begin{bem}
We insist that $X/G$, and thus our orbifold $\RO$, is compact because this is what guarantees that the space $\mathcal C^{\orb}(X/G)$ is locally compact.
\end{bem}
There are plenty of currents. In fact there is a natural homeomorphism between $\mathcal C^{\orb}(X/G)$ and the space of geodesic flow invariant Radon measures on the projectivized unit tangent bundle $PT^1X/G$ supported by the set of bi-infinite orbits. For example, every primitive closed unit speed geodesic $\gamma$ in $X/G$, or equivalently every unoriented periodic orbit of the geodesic flow, yields a geodesic flow invariant measure on $PT^1X/G$: the measure of a set $U$ is the arc length of $\gamma\cap U$. The current associated to this measure is called the {\em counting current associated to the geodesic $\gamma$.} The counting current determines the original geodesic $\gamma$---this justifies referring to the current and the geodesic by the same letter---and the name is explained by the fact that, for a set of geodesics $V\subset\mathcal G(X)$, the value of $\gamma(V)$ is nothing other than the number of lifts of $\gamma$ to $X$ which belong to $V$.
Note that every essential curve $\gamma$ in $X/G$ (in the sense that we gave to the word essential at the end of Section \ref{subsec:maps between orbifolds}) is freely homotopic to a unique geodesic $\gamma_*$ in $X/G$. In this case we denote the associated counting current by $\gamma$ instead of $\gamma_*$. We hope that this will not cause any confusion.
\begin{bem}
If the action of $G$ on $X$ is free then we drop the superscript ``$\orb$''. For example we write $\mathcal C(\Sigma)$ instead of $\mathcal C^{\orb}(\Sigma)$. We use this superscript to avoid mixing up currents for the orbifold $X/G$ and currents for the underlying topological surface.
\end{bem}
Currents were introduced by Bonahon and we refer to his papers \cite{BonahonFrench,Bonahon2,Bonahon} for details and background. See also \cite{AL}. However, although all these sources are highly recommended, the reader will not be surprised to hear that we will mostly follow the same notation and terminology as in our book \cite{Book}.
\subsection{Mirzakhani's counting theorem}\label{need batteries}
As we mentioned already in the introduction, Theorem \ref{main} is well-known in the case that we are working with surfaces instead of orbifolds. In that case we have the following result \cite[Theorem 8.1]{Book}:
\begin{named}{Mirzakhani's counting theorem}
Let $\Sigma$ be a compact connected orientable surface of genus $g$ and with $r$ boundary components and suppose that $3g-3+r>0$. Let also $\eta_0\subset\Sigma$ be a homotopically primitive essential curve. Then there are constants $\mathfrak c^{\PMap}(\eta_0),\mathfrak b^{\PMap}_{g,r}>0$ such that
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}=\frac{\mathfrak c^{\PMap}(\eta_0)}{\mathfrak b^{\PMap}_{g,r}}\cdot\mathfrak m^\Sigma_{\Thu}$$
Here $\delta_{\frac 1L\eta}$ is the Dirac measure on $\mathcal C(\Sigma)$ centered at the current $\frac 1L\eta$, $\mathfrak m_{\Thu}^\Sigma$ is the Thurston measure on $\mathcal C(\Sigma)$, and the convergence takes place with respect to the weak-*-topology on the space of Radon measures on $\mathcal C(\Sigma)$.
\end{named}
\begin{bem}
The counting theorem remains true if we replace the pure mapping class group by any other finite index subgroup of the mapping class group. However the obtained multiple of the Thurston measure depends on the subgroup in question. This explains the superscript $\PMap$ in the constants $\mathfrak c^{\PMap}(\eta_0)$ and $\mathfrak b^{\PMap}_{g,r}$. See \cite[Exercise 8.2]{Book} for explicit formulas for the dependence of the constants on the chosen subgroup of the mapping class group.
\end{bem}
Since we named the above theorem after Maryam Mirzakhani while referring to our book \cite{Book} we should add a brief comment on the genesis of this theorem. For simple curves, Mirzakhani proved this theorem in \cite{Maryam0,Maryam1} but for general curves the history is slightly more complicated. To explain why, note that the theorem implies that
$$\lim_{L\to\infty}\frac{\vert\{\gamma\in\PMap(\Sigma)\cdot\gamma_0\text{ with }F(\gamma)\le L\}\vert}{L^{6g-6+2r}}=\frac{\mathfrak c^{\PMap}(\gamma_0)}{\mathfrak b^{\PMap}_{g,r}}\cdot\mathfrak m_{\Thu}^\Sigma(\{F(\cdot)\le 1\})$$
whenever $F:\mathcal C(\Sigma)\to\mathbb R_+$ is a continuous, positive and homogeneous function (compare with \cite[Theorem 9.1]{Book} or with the proof of Theorem \ref{sat3} below). In \cite{Maryam2} Mirzakhani proved the existence of the latter limit in the case that $F$ is the hyperbolic length. At the same time, we were also investigating the same problem and we proved in \cite{ES} that every sublimit of the sequence in the counting theorem is a multiple of the Thurston measure. The existence of the limit in the counting theorem follows if one combines these two facts \cite{Maryam2,ES}. This was clear to both Mirzakhani and ourselves at the time. A problem with that state of affairs was that Mirzakhani's arguments and ours come from different places, and this made things a bit too opaque. For example there was some confusion about the chosen normalization for the Thurston measure. This meant that it was not obvious how the arising constants should be understood (this problem was, to some extent, solved in \cite{Monin, RS}). Finally, or maybe finally for the time being, a unified proof of the counting theorem as stated above was provided in \cite{Book}. The constants $\mathfrak c^{\PMap}(\gamma_0)$ and $\mathfrak b^{\PMap}_{g,r}$ in the statement of the counting theorem above are as given in \cite[Chapter 8]{Book}, and the Thurston measure is defined to be the scaling limit
\begin{equation}\label{eq defintion thurston measure}
\mathfrak m^\Sigma_{\Thu}=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\mathcal M\CL_\BZ(\Sigma)}\delta_{\frac 1L\gamma}.
\end{equation}
Anyways, let us return to the concrete topic of this paper.
\subsection{Proof of Theorem \ref{main}}
We prove now our main theorem assuming Proposition \ref{main proposition}. As mentioned earlier, the proposition will be proved in Section \ref{sec:proof of proposition}.
\begin{named}{Theorem \ref{main}}
Let $\RO$ be a compact orientable non-exceptional hyperbolic orbifold with possibly empty totally geodesic boundary and let $\mathcal C^{\orb}(\RO)$ be the associated space of geodesic currents. There is a Radon measure $\mathfrak m_{\Thu}$ on $\mathcal C^{\orb}(\RO)$ such that for any $\gamma_0\in\mathcal S^{\orb}(\RO)$ we have
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}=C(\gamma_0)\cdot\mathfrak m_{\Thu}$$
for some positive constant $C(\gamma_0)>0$. Here $g$ is the genus of the orbifold $\RO$, $r$ is the sum of the numbers of singular points and boundary components, and $\RO$ is non-exceptional if $(g,r)\neq(0,3)$. Moreover $\delta_{\frac{1}{L}\gamma}$ stands for the Dirac measure on $\mathcal C^{\orb}(\RO)$ centered at $\frac 1L\gamma$, and the convergence takes place with respect to the weak-*-topology on the space of Radon measures on $\mathcal C^{\orb}(\RO)$.
\end{named}
As we already mentioned in the introduction, the idea of the proof is to show that our given homotopy class $\gamma_0$ has a representative $\eta_0$ in $\Sigma$ such that the measures
$$\frac 1{L^{6g-6+2r}}\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}$$
inside the limit in Mirzakhani's counting theorem are supported by a closed subset of $\mathcal C(\Sigma)$ which maps continuously to $\mathcal C^{\orb}(\RO)$. In fact we will choose $\eta_0$ to be a representative of $\gamma_0$ which is as simple as possible.
Anyways, with $A$ as in Proposition \ref{main proposition} let
$$\mathcal Q(A)\subset\mathcal G(\Sigma)$$
be the set of unit speed geodesics $\alpha$ in $(\tilde\Sigma,\rho)$ with the property that the composition of the maps
$$\xymatrix{\mathbb R\ar[r]^\alpha & (\tilde\Sigma,\rho)\ar[r] &(\hat\Sigma,\rho) \ar@{^{(}->}[r] & (\tilde\RO,\rho_{\hyp})}$$
is an $A$-quasigeodesic. In these terms, Proposition \ref{main proposition} asserts that if $\alpha:\mathbb R\to(\tilde\Sigma,\rho)$ is a unit speed geodesic whose image in $(\hat\Sigma,\rho)$ is as simple as possible, then $\alpha\in\mathcal Q(A)$. Recall now that by Lemma \ref{lem banana} the homotopy class of every closed primitive and essential geodesic $\gamma_0$ in $\RO$ can be represented by a $\rho$-geodesic $\eta_0:\mathbb S^1\to(\Sigma,\rho)$ which is as simple as possible, and that by Lemma \ref{lem as simple as possible mapping class group} the property of being as simple as possible is mapping class group invariant. Taken together, we get the following fact, which we state as a lemma for later reference:
\begin{lemma}\label{lem well chosen lift}
If $\eta_0:\mathbb S^1\to(\Sigma,\rho)$ is any essential $\rho$-geodesic which, when considered as a map into $\RO$, is as simple as possible, then the measure
$$\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}$$
is supported by $\mathcal Q(A)$ for all $L>0$.\qed
\end{lemma}
Now, the fact that the quasigeodesic constant $A$ is fixed implies that $\mathcal Q(A)$ is a closed subset of $\mathcal G(\Sigma)$. Recall also that every $A$-quasigeodesic in $\tilde\RO$, and in particular the image under $\tilde\Sigma\to\hat\Sigma\hookrightarrow\tilde\RO$ of each element of $\mathcal Q(A)$, is at bounded distance from a $\rho_{\hyp}$-geodesic, where the bound only depends on $A$. In this way we get a continuous map
\begin{equation}\label{map L geodesics}
\mathcal Q(A)\to\mathcal G(\tilde\RO)
\end{equation}
equivariant under the homomorphism $\pi_1(\Sigma)\to\pi_1^{\orb}(\RO)=\Gamma$. Now, pushing currents forward with \eqref{map L geodesics} (at the end of the day currents are measures) we get a continuous map
\begin{equation}\label{map L geodesics currents}
\Pi:\{\lambda\in\mathcal C(\Sigma)\text{ supported by }\mathcal Q(A)\}\to\mathcal C^{\orb}(\RO)
\end{equation}
from the closed subset of $\mathcal C(\Sigma)$ consisting of currents supported by the closed set $\mathcal Q(A)$ to the space of currents on $\RO$.
The map $\Pi$ given in \eqref{map L geodesics currents} induces in turn a continuous map
\begin{equation}\label{map L geodesics currents measures}
\Pi_*:\text{measures on }
\left\{
\begin{array}{c}
\lambda\in\mathcal C(\Sigma)\\ \text{supported by }\mathcal Q(A)
\end{array}
\right\}
\to\text{measures on }\mathcal C^{\orb}(\RO)
\end{equation}
from the space of Radon measures on $\{\lambda\in\mathcal C(\Sigma)\text{ supported by }\mathcal Q(A)\}$ to the space of Radon measures on $\mathcal C^{\orb}(\RO)$.
For $\gamma_0$ as in the statement of Theorem \ref{main}, let $\eta_0$ be as provided by Lemma \ref{lem banana}, and note that we get from Lemma \ref{lem well chosen lift} that the measure $\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}$ is supported by the domain of \eqref{map L geodesics currents}. Its image under \eqref{map L geodesics currents measures} is in fact nothing other than the measure $\sum_{\gamma\in\PMap^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}$. Applying $\Pi_*$ to both sides of the limit in Mirzakhani's counting theorem we get:
\begin{align}\label{viv has a zoom meeting}
\nonumber \Pi_*\left(\frac{\mathfrak c^{\PMap}(\eta_0)}{\mathfrak b^{\PMap}_{g,r}}\cdot\mathfrak m^\Sigma_{\Thu}\right)
&=\Pi_*\left(\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}\right)\\
&=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\Pi_*\left(\sum_{\eta\in\PMap(\Sigma)\cdot\eta_0}\delta_{\frac 1L\eta}\right)\\
\nonumber&=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\PMap^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}
\end{align}
Now the $\Map^{\orb}(\RO)$-orbit of $\gamma_0$ is a disjoint union of orbits under $\PMap^{\orb}(\RO)$; more precisely, it is the union of
$\frac{\vert\Map^{\orb}(\RO)/\PMap^{\orb}(\RO)\vert}{\vert\Stab_{\Map^{\orb}(\RO)}(\gamma_0)/\Stab_{\PMap^{\orb}(\RO)}(\gamma_0)\vert}$ such orbits. Theorem \ref{main} follows when we apply \eqref{viv has a zoom meeting} to each one of these orbits and we set
$$C(\gamma_0)=\frac{\vert\Map^{\orb}(\RO)/\PMap^{\orb}(\RO)\vert}{\vert\Stab_{\Map^{\orb}(\RO)}(\gamma_0)/\Stab_{\PMap^{\orb}(\RO)}(\gamma_0)\vert}\cdot \frac{\mathfrak c^{\PMap}(\eta_0)}{\mathfrak b^{\PMap}_{g,r}}$$
and
$$\mathfrak m_{\Thu}=\Pi_*\left(\mathfrak m^\Sigma_{\Thu}\right).$$
We have proved Theorem \ref{main}.\qed
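The orbit count used to define $C(\gamma_0)$ can be sketched as follows; this is only a sketch, using nothing beyond the fact that $\PMap^{\orb}(\RO)$ is a finite index normal subgroup of $\Map^{\orb}(\RO)$ and that $\Stab_{\PMap^{\orb}(\RO)}(\gamma_0)=\PMap^{\orb}(\RO)\cap\Stab_{\Map^{\orb}(\RO)}(\gamma_0)$:

```latex
% The PMap-orbits inside the Map-orbit of gamma_0 correspond to double cosets.
% By normality of PMap these are the orbits of the image of the stabilizer
% acting by translations on the finite quotient Map/PMap, and all of these
% orbits have the same cardinality. Hence:
\begin{align*}
\#\left\{\PMap^{\orb}(\RO)\text{-orbits in }\Map^{\orb}(\RO)\cdot\gamma_0\right\}
 &=\left\vert\PMap^{\orb}(\RO)\backslash\Map^{\orb}(\RO)/\Stab_{\Map^{\orb}(\RO)}(\gamma_0)\right\vert\\
 &=\frac{\vert\Map^{\orb}(\RO)/\PMap^{\orb}(\RO)\vert}{\vert\Stab_{\Map^{\orb}(\RO)}(\gamma_0)/\Stab_{\PMap^{\orb}(\RO)}(\gamma_0)\vert}.
\end{align*}
```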
\medskip
Rather, we have proved Theorem \ref{main} while assuming Proposition \ref{main proposition}. Anyways, before proving the proposition let us prove the other theorems mentioned in the introduction and comment briefly on the measure $\mathfrak m_{\Thu}$.
\subsection{A comment on the measure $\mathfrak m_{\Thu}$}\label{sec thurston measure}
In the course of the proof of Theorem \ref{main} we identified the measure $\mathfrak m_{\Thu}$ as the push forward of the Thurston measure $\mathfrak m_{\Thu}^\Sigma$ associated to $\Sigma$ under the map $\Pi_*$. We give now a slightly more intrinsic interpretation of this measure. Note first that every simple essential geodesic in $\Sigma$ is as simple as possible in $\RO$. This means that the set $\mathbb R_{\ge 0}\cdot \mathcal M\CL_\BZ(\Sigma)$ of multiples of integral measured laminations on $\Sigma$ is contained in $\mathcal Q(A)$. Since the latter is closed we also have that the full space of measured laminations on $\Sigma$ is contained in $\mathcal Q(A)$, that is $\mathcal M\CL(\Sigma)\subset\mathcal Q(A)$. We thus get from the construction \eqref{eq defintion thurston measure} of the Thurston measure $\mathfrak m_{\Thu}^\Sigma$ that
\begin{equation}\label{eq bamboleo 1}
\mathfrak m_{\Thu}=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\mathcal M\CL_\BZ(\Sigma)}\delta_{\frac 1L\hat \gamma}
\end{equation}
where, for lack of better notation, we let $\hat\gamma$ be the geodesic representative in $\RO$ of the homotopy class represented by $\gamma\in\mathcal M\CL_\BZ(\Sigma)$. The multicurve $\hat\gamma$ is simple in the sense that its lifts to the universal cover $\tilde\RO$ never cross each other. If we denote by $\mathcal M\CL_\BZ(\RO)$ the set of, in this sense, simple geodesic multicurves in $\RO$ then we can rewrite \eqref{eq bamboleo 1} as
\begin{equation}\label{eq bamboleo 2}
\mathfrak m_{\Thu}=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\hat{\gamma}\in\mathcal M\CL_\BZ(\RO)}\delta_{\frac 1L\hat{\gamma}}.
\end{equation}
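This description also makes the scaling behavior of $\mathfrak m_{\Thu}$ transparent. The following computation is only a sketch---valid, say, for sets $U$ whose boundary is $\mathfrak m_{\Thu}$-null, so that the counting limits converge---and uses nothing but the substitution $L\mapsto tL$:

```latex
\begin{align*}
\mathfrak m_{\Thu}(t\cdot U)
 &=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}
   \left\vert\left\{\hat\gamma\in\mathcal M\CL_\BZ(\RO)\text{ with }\tfrac 1L\hat\gamma\in t\cdot U\right\}\right\vert\\
 &=\lim_{L\to\infty}\frac{t^{6g-6+2r}}{(tL)^{6g-6+2r}}
   \left\vert\left\{\hat\gamma\in\mathcal M\CL_\BZ(\RO)\text{ with }\tfrac 1{tL}\hat\gamma\in U\right\}\right\vert
  =t^{6g-6+2r}\cdot\mathfrak m_{\Thu}(U).
\end{align*}
```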
Yet another description of $\mathfrak m_{\Thu}$ can be given when we recall that $\RO$ has a finite normal cover (in the category of orbifolds) which is a surface. This means that there are a hyperbolic surface $S$ and a finite group $H$ acting on $S$ by isometries such that $\RO=S/H$. The cover $\pi:S\to\RO$ induces a bijection between elements in $\mathcal M\CL_\BZ(\RO)$ and the set $\mathcal M\CL_\BZ(S)^H$ of $H$-invariant simple multicurves in $S$.
The reader familiar with the construction of the usual Thurston measure for surfaces will have no difficulty proving that the limit
$$\mathfrak m_{\Thu}^{S,H}\stackrel{\text{def}}=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\mathcal M\CL_\BZ(S)^H}\delta_{\frac 1L\gamma}$$
exists, where $g$ and $r$ are still the genus and the sum of the numbers of singular points and boundary components of the orbifold $\RO$. Taking into account the identification between $\mathcal M\CL_{\mathbb{Z}}(\RO)$ and $\mathcal M\CL_{\mathbb{Z}}(S)^H$ we get
$$\mathfrak m_{\Thu}=\mathfrak m_{\Thu}^{S,H}.$$
Here the left measure lives in the space $\mathcal G(\tilde\RO)$ of geodesics on the universal cover of $\RO$, while the right one lives in the space $\mathcal G(\tilde S)$ of geodesics in the universal cover of $S$; the equality makes sense because $\tilde\RO=\tilde S$.
\begin{bem}
It also seems probable that one can recover the measure $\mathfrak m_{\Thu}^{S,H}$ as a multiple of the measure on $\mathcal M} \newcommand{\CN}{\mathcal N\CL(S)^H$ obtained by taking a suitable power of the restriction to that subspace of the Thurston symplectic form on $\mathcal M} \newcommand{\CN}{\mathcal N\CL(S)$. It would be interesting to do as in \cite{Monin} and figure out the precise multiple.
\end{bem}
Anyways, the reader having just the present paper in mind can ignore these past comments and just continue thinking of $\mathfrak m_{\Thu}$ as given in the proof of Theorem \ref{main}.
\subsection{Actually counting curves}
Now we prove Theorem \ref{sat1} and Theorem \ref{sat3} from the introduction. Let us start with the latter:
\begin{named}{Theorem \ref{sat3}}
Let $\RO$ be a compact orientable hyperbolic orbifold with possibly empty totally geodesic boundary and let $\mathcal C^{\orb}(\RO)$ be the associated space of geodesic currents. Then the limit
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }F(\gamma)\le L\}\vert$$
exists and is positive for any $\gamma_0\in\mathcal S^{\orb}(\RO)$ and any positive, homogeneous, continuous function $F:\mathcal C^{\orb}(\RO)\to\mathbb R_{\ge 0}$. Here $g$ is the genus of the orbifold $\RO$ and $r$ is the sum of the numbers of singular points and boundary components.
\end{named}
The proof of this theorem is the same as that of the analogous result in the case of surfaces \cite{EPS,Book,RS} but let us recap the argument anyways. Other than the fact that $\mathfrak m_{\Thu}$ is a Radon measure, and as such locally finite, we will also need that
\begin{equation}\label{chichu is enjoying the great outside}
\mathfrak m_{\Thu}(t\cdot U)=t^{6g-6+2r}\mathfrak m_{\Thu}(U)
\end{equation}
for all $U\subset\mathcal C^{\orb}(\RO)$ and all $t\ge 0$. This equality holds true because it does so for the standard Thurston measure $\mathfrak m_{\Thu}^\Sigma$ and because the map \eqref{map L geodesics currents} is homogeneous: $\Pi(t\cdot\lambda)=t\cdot\Pi(\lambda)$. Alternatively \eqref{chichu is enjoying the great outside} also follows directly from \eqref{eq bamboleo 2}. Anyways, we are now ready to prove the theorem:
\begin{proof}
Noting that there is nothing to prove if $\RO$ is exceptional, suppose that this is not the case.
For a positive, homogeneous and continuous function $F:\mathcal C^{\orb}(\RO)\to\mathbb R_{\ge 0}$ as in the statement we have
\begin{align*}
\frac {\vert\{\gamma\text{ of type }\gamma_0\text{ with }F(\gamma)\le L\}\vert}{L^{6g-6+2r}}
&=\left(\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\gamma}\right)\left(\{F\le L\}\right)\\
&=\left(\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}\right)\left(\{F\le 1\}\right)
\end{align*}
Now, by Theorem \ref{main} the measures in the last line converge, when $L\to\infty$, to the measure $C(\gamma_0)\cdot\mathfrak m_{\Thu}$. Local finiteness of $\mathfrak m_{\Thu}$ together with \eqref{chichu is enjoying the great outside} implies that
$$C(\gamma_0)\cdot\mathfrak m_{\Thu}\left(\{F=1\}\right)=0.$$
Indeed, since $F$ is positive and $\mathbb P\mathcal C^{\orb}(\RO)$ is compact, the set $\{F\le 1\}$ is compact and hence has finite $\mathfrak m_{\Thu}$-measure; now \eqref{chichu is enjoying the great outside} gives $\mathfrak m_{\Thu}(\{F\le t\})=t^{6g-6+2r}\cdot\mathfrak m_{\Thu}(\{F\le 1\})$ for all $t>0$, so that $\mathfrak m_{\Thu}(\{F=1\})=\lim_{t\nearrow 1}\mathfrak m_{\Thu}(\{t<F\le 1\})=0$. Since the boundary of $\{F\le 1\}$ is thus a null set for the limiting measure, we get that
$$C(\gamma_0)\cdot\mathfrak m_{\Thu}\left(\{F\le 1\}\right)=\lim_{L\to\infty}\left(\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}\right)\left(\{F\le 1\}\right).$$
Taking all of this together we obtain that
$$C(\gamma_0)\cdot\mathfrak m_{\Thu}\left(\{F\le 1\}\right)=\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }F(\gamma)\le L\}\vert,$$
and we are done.
\end{proof}
We come now to Theorem \ref{sat1}:
\begin{named}{Theorem \ref{sat1}}
Let $\Gamma\subset\PSL_2\mathbb R$ be a non-elementary finitely generated discrete subgroup and $\RO=\BH^2/\Gamma$ the associated 2-dimensional hyperbolic orbifold. Then the limit
$$\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\vert\{\gamma\text{ of type }\gamma_0\text{ with }\ell_{\RO}(\gamma)\le L\}\vert$$
exists and is positive for any $\gamma_0\in\mathcal S^{\orb}(\RO)$. Here $g$ is the genus of the orbifold $\RO$ and $r$ is the sum of the numbers of singular points and ends.
\end{named}
\begin{proof}
We may once again assume that $\RO$ is not exceptional. Let then $\bar\RO$ be a compact hyperbolic orbifold with interior homeomorphic to $\RO$, consider $\gamma_0\in\mathcal S^{\orb}(\RO)$ as an element in $\mathcal S^{\orb}(\bar\RO)$, and apply Theorem \ref{main} to get
\begin{equation}\label{fefito is a big problem}
\lim_{L\to\infty}\frac 1{L^{6g-6+2r}}\sum_{\gamma\in\Map^{\orb}(\bar\RO)\cdot\gamma_0}\delta_{\frac 1L\gamma}=C(\gamma_0)\cdot\mathfrak m_{\Thu}.
\end{equation}
Now note that the same argument that proves it for surfaces shows that there is a compact subset $K\subset\bar\RO\setminus\partial\bar\RO$ which contains the geodesic $\phi(\gamma_0)$ for all $\phi\in\Map^{\orb}(\bar\RO)$ (see for example \cite{EPS,Book}). It follows that the measures in \eqref{fefito is a big problem} are all supported by the set $\mathcal C_K^{\orb}(\bar\RO)$ of currents in $\mathcal C^{\orb}(\bar\RO)$ whose support projects to a subset of $K$. Now, as was first proved by Bonahon \cite{BonahonFrench} (see also \cite[Exercise 3.9]{Book}), the hyperbolic length function $\ell_{\RO}$ extends continuously to $\mathcal C_K^{\orb}(\bar\RO)$. The claim of Theorem \ref{sat1} follows when we repeat word-by-word the argument in the proof of Theorem \ref{sat3}.
\end{proof}
\section{The key observations}\label{sec:lemmas}
In this section we get the tools needed to prove Proposition \ref{main proposition} in the next section. Notation will be as in the proposition: $\RO=\tilde{\RO}/\Gamma$ is a compact orientable hyperbolic orbifold with possibly empty totally geodesic boundary,
$$\hat\Sigma=\tilde\RO\setminus\CN_{\hyp}(\sing(\Gamma),\delta)$$
is as in \eqref{eq associated surface}, and $\rho$ is the metric on $\hat{\Sigma}$ constructed in Section \ref{sec:metric}.
\subsection{Choosing $\delta$}
So far, the only condition we have imposed on $\delta>0$ is that it is smaller than $\frac 13\epsilon$ where $\epsilon>0$ satisfies conditions (C1), (C2) and (C3) from Section \ref{sec surface associated to orbifold}. We are momentarily going to give a more stringent condition on $\delta$, but first recall that the convex hull of a connected set $X$ in a negatively curved manifold is the smallest closed connected set $X'$ with the following property: {\em any path in $X$ is homotopic relative to its endpoints to a geodesic path contained in $X'$}. With this language we fix $\delta$ so that, with $\epsilon>0$ as fixed in Section \ref{sec surface associated to orbifold}, the following holds whenever $\gamma\subset\tilde\RO$ is a $\rho_{\hyp}$-geodesic segment:
\begin{equation}
\tag{C4}\label{star1}
\parbox{\dimexpr\linewidth-4em}{%
\strut
If $r\ge 50\epsilon$ and if $p\in\tilde{\RO}$ is such that $d_{\hyp}(p,\CN_{\hyp}(\gamma,r))\le2\delta$ then the $\rho_{\hyp}$-convex hull of $\CN_{\hyp}(\gamma,r)\cup B_{\hyp}(p,2\delta)$ is contained in $\CN_{\hyp}(\gamma,r)\cup B_{\hyp}(p,\frac\epsilon 2)$.
\strut
}
\end{equation}
The lower bound on $r$ guarantees that $\CN_{\hyp}(\gamma,r)$ is uniformly convex. In particular, the existence of such a $\delta$ is evident when one considers the limit case $\delta=0$. In any case, a computation shows that any $\delta<(\frac \epsilon 4)^3$ works. See Figure \ref{fig convex hull} for a schematic representation of (C4).
\medskip
\begin{figure}[h]
\includegraphics[width=0.6\textwidth]{horizon.pdf}
\caption{Schematic representation of (C4). If a point is close enough to the boundary of the uniformly convex set $\CN_{\hyp}(\gamma,r)$ then its distance to the horizon is really small. The darker line represents the boundary of the convex hull of $\CN_{\hyp}(\gamma,r)\cup B_{\hyp}(p,2\delta)$.}
\label{fig convex hull}
\end{figure}
The reason for imposing (C4) will be clear shortly, but first we need some more notation. If $\gamma\subset\tilde\RO$ is a $\rho_{\hyp}$-geodesic (always compact) segment, consider for $r>0$ the set
\begin{equation}\label{I like pink martini}
\CP(\gamma,r)=\{p\in\sing(\Gamma)\text{ with }d_{\hyp}(p,\gamma)\le r+2\delta\}
\end{equation}
of singular points which are at distance at most $2\delta$ from the $r$-neighborhood $\CN_{\hyp}(\gamma,r)$ around $\gamma$, and for $t>0$ let
\begin{equation}\label{a lot}
\mathcal U(\gamma,r,t)=\CN_{\hyp}(\gamma,r)\cup\left(\bigcup_{p\in\CP(\gamma, r)}B_{\hyp}(p,t)\right)
\end{equation}
be the union of that $r$-neighborhood and the $t$-balls around each point in $\CP(\gamma,r)$. The following lemma gives us, for $t=2\delta$, some control of the convex hull of this set with respect to the metric $\rho$:
\begin{lemma}\label{lem11}
If $\gamma\subset\tilde{\RO}$ is a $\rho_{\hyp}$-geodesic segment then we have
$$\left(\rho\text{-convex hull of }\mathcal U(\gamma,r,2\delta)\cap\hat{\Sigma}\right)\subset\left(\mathcal U\left(\gamma,r,\frac\epsilon 2\right)\cap\hat{\Sigma}\right)$$
for all $r\ge 50\epsilon$. Here $\mathcal U(\gamma,r,t)$ is as in \eqref{a lot}.
\end{lemma}
Before launching the proof recall that convexity of a closed set $X$ is a local property of its boundary. We thus get the following useful property:
\begin{equation}
\tag{*}\label{star2}
\parbox{\dimexpr\linewidth-4em}{%
\strut
If $(X_0,X_1,\cdots)$ is a countable locally finite collection of closed subsets of a negatively curved manifold with $X_0\cup X_i$ convex for all $i$ and with $X_i\cap X_j=\emptyset$ for all $i\neq j\ge 1$, then $\cup_{i=0}^\infty X_i$ is convex.
\strut
}
\end{equation}
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{neighborhood.pdf}
\caption{Schematic representation of Lemma \ref{lem11}.}
\label{fig neighborhood}
\end{figure}
We now prove the lemma.
\begin{proof}
Set $X_0=\CN_{\hyp}(\gamma,r)$ and for $p\in\CP(\gamma,r)$ let $Y_p$ be the $\rho_{\hyp}$-convex hull of the union of $\CN_{\hyp}(\gamma,r)$ and $B_{\hyp}(p,2\delta)$. Since $r\ge 50\epsilon$, we get from (C4) that $X_p=Y_p\setminus X_0\subset B_{\hyp}(p,\frac\epsilon 2)$. This implies that the collection of sets $\{X_p\text{ with }p\in\CP(\gamma,r)\}$ is locally finite and that $X_p\cap X_q=\emptyset$ for all distinct $p,q\in\CP(\gamma, r)$. We thus get from \eqref{star2} that
$$X=X_0\cup\left(\bigcup_{p\in\CP(\gamma,r)}X_p\right)$$
is $\rho_{\hyp}$-convex, meaning that its boundary is $\rho_{\hyp}$-convex. However we have by construction that
$$\partial X\subset\CN_{\hyp}(\gamma,r)\cup\left(\bigcup_{p\in\CP(\gamma,r)}\left(B_{\hyp}(p,\frac\epsilon 2)\setminus B_{\hyp}(p,2\delta)\right)\right),$$
which means that $\partial X$ is not only contained in $\hat\Sigma$ but even contained in the part of $\hat\Sigma$ where the metrics $\rho$ and $\rho_{\hyp}$ agree. This means that $\partial X$ is not only $\rho_{\hyp}$-convex but also $\rho$-convex. It follows that $X\cap\hat\Sigma$ is a $\rho$-convex set containing $\mathcal U(\gamma,r,2\delta)\cap\hat{\Sigma}$ but contained in $\mathcal U\left(\gamma,r,\frac\epsilon 2\right)\cap\hat{\Sigma}$. The claim follows.
\end{proof}
\subsection{Heights and outgoing rays}
Continuing with the same notation, let $\gamma\subset\tilde{\RO}$ be a $\rho_{\hyp}$-geodesic segment and let $p\in\tilde{\RO}\setminus\gamma$ be a point not on $\gamma$. By the {\em $\gamma$-outgoing ray at $p$} we mean the $\rho_{\hyp}$-geodesic ray starting at $p$ in the direction of the gradient of the function $d_{\hyp}(\gamma,\cdot)$---that is, the ray one would follow from $p$ to escape from $\gamma$ at the fastest possible rate.
Now let $\eta$ be a $\rho$-geodesic segment in $\hat{\Sigma}$ whose endpoints lie on $\gamma$. The {\em $\gamma$-height of $\eta$}
$$h_\gamma(\eta)=\max\left\{d_{\hyp}(\gamma,p)\ \middle\vert
\begin{array}{l}
p\in\sing(\Gamma)\setminus\gamma\text{ is such that}\\
\text{ the }\gamma\text{-outgoing ray at }p\text{ meets }\eta
\end{array}
\right\}$$
is the maximum hyperbolic distance from $\gamma$ to a cone point $p\in\sing(\Gamma)$ whose $\gamma$-outgoing ray intersects $\eta$---here we take $h_\gamma(\eta)=0$ if we are taking the maximum over the empty set.
The following lemma asserts that the $\gamma$-height of $\eta$ agrees, up to a small error, with the maximal $d_{\hyp}$-distance to $\gamma$ from points in $\eta$.
\begin{lemma}\label{l:neighborhood}
Let $\gamma\subset\tilde{\RO}$ and $\eta\subset\hat{\Sigma}$ be a $\rho_{\hyp}$-geodesic segment and a $\rho$-geodesic segment, both with the same endpoints. Then we have
$$\eta\subset\CN_{\hyp}(\gamma,r)$$
where $r=\max\{50\epsilon,h_\gamma(\eta)\}+\epsilon$.
\end{lemma}
\begin{proof}
Let $\mathcal Q} \newcommand{\CR}{\mathcal R$ be the set of those singular points $p\in\sing(\Gamma)$ whose $\gamma$-outgoing ray meets $\eta$. Set $r_0=\max\{50\epsilon,h_\gamma(\eta)\}$ and note that $\mathcal Q} \newcommand{\CR}{\mathcal R\subset\CP(\gamma,r_0)$. The geodesic $\eta$ is homotopic in $\hat\Sigma$ and while fixing its endpoints to a curve contained in
$$\mathcal U(\gamma,r_0,2\delta)\cap\hat{\Sigma}=\left(\CN_{\hyp}(\gamma,r_0)\cup\left(\bigcup_{p\in\mathcal Q}B_{\hyp}(p,2\delta)\right)\right)\cap\hat{\Sigma}.$$
The $\rho$-geodesic $\eta$ is then contained in the $\rho$-convex hull of $\mathcal U(\gamma,r_0,2\delta)\cap\hat{\Sigma}$ and hence in $\mathcal U\left(\gamma,r_0,\frac\epsilon 2\right)\cap\hat{\Sigma}\subset \CN_{\hyp}(\gamma,r_0+\epsilon)$ by Lemma \ref{lem11}. We are done.
\end{proof}
\subsection{The main observation}
Our next goal is to establish the following fact:
\begin{lemma}\label{l:main}
Let $\gamma\subset\tilde{\RO}$ and $\eta\subset\hat{\Sigma}$ be respectively a $\rho_{\hyp}$-geodesic segment and a simple $\rho$-geodesic segment, such that both segments have the same endpoints $\partial\gamma=\partial\eta$. Suppose that at least one of the following holds:
\begin{itemize}
\item[(a)] $h_\gamma(\eta)>1$, or
\item[(b)] $\ell_{\hyp}(\gamma)<\epsilon$ and $h_\gamma(\eta)>50\epsilon$.
\end{itemize}
Then there is $g\in\Gamma\setminus\Id$ such that $\eta$ and $g\eta$ transversely intersect at least twice.
\end{lemma}
\begin{bem}
It follows by a limiting argument and Lemma \ref{l:main} that, if we replace the ``$\max$" in the definition of $h_{\gamma}(\eta)$ by a ``$\sup$", then the lemma also holds when $\gamma$ and $\eta$ are a complete $\rho_{\hyp}$-geodesic and a complete simple $\rho$-geodesic which have the same endpoints in $\partial_{\infty}{\tilde{\RO}}$, the boundary at infinity of $\tilde{\RO}$. To see this, parametrize $\eta:\mathbb R\to\hat\Sigma$, let $\gamma_n$ be the $\rho_{\hyp}$-geodesic segment with endpoints $\eta(-n)$ and $\eta(n)$, set $\eta_n=\eta([-n,n])$, and note that
$$h_\gamma(\eta)\le\liminf_{n\to\infty} h_{\gamma_n}(\eta_n).$$
It thus follows from the lemma that if $h_\gamma(\eta)>1$ then there are $n>0$ and $g\in\Gamma\setminus\Id$ such that $\eta_n$ and $g\eta_n$ transversely intersect at least twice. A fortiori, $\eta$ and $g\eta$ also meet transversely at least twice.
\end{bem}
\begin{proof}
To begin with the proof of Lemma \ref{l:main}, suppose that $\gamma$ and $\eta$ satisfy one of the two conditions in the statement. As a first observation, note that if $\gamma$ and $\eta$ satisfy (a) (resp.~(b)) and meet at a point other than their endpoints, then there are subsegments $\gamma'\subset\gamma$ and $\eta'\subset\eta$ with $\partial\gamma'=\partial\eta'$ which still satisfy (a) (resp.~(b)) and such that $\gamma'$ and $\eta'$ meet only at their endpoints. This means that we can assume without loss of generality that the loop obtained by concatenating $\gamma$ and $\eta$ is simple; said differently, $\gamma$ and $\eta$ bound a disk $\Delta$ in $\BH^2$.
\begin{claim}\label{claim1}
There are a $\rho_{\hyp}$-geodesic segment $\bar\gamma$ and a subsegment $\bar\eta\subset\eta$ satisfying the following properties:
\begin{enumerate}
\item The two segments $\bar\gamma$ and $\bar\eta$ have the same endpoints and disjoint interiors.
\item The pair $(\bar\gamma,\bar\eta)$ satisfies one of the two conditions (a) and (b) in the statement of the lemma.
\item The disk $\bar\Delta$ with boundary $\bar\gamma\cup\bar\eta$ contains a point $p\in\sing(\Gamma)\cap\bar\Delta$ with $d_{\hyp}(p,\bar\gamma)=h_{\bar\gamma}(\bar\eta)$.
\end{enumerate}
\end{claim}
We suggest that at first, instead of studying the proof of the claim, the reader spends some time looking at Figure \ref{fig producing disk}.
\begin{proof}[Proof of Claim \ref{claim1}]
If the disk $\Delta$ bounded by the concatenation of $\gamma$ and $\eta$ satisfies (3) then there is nothing to prove. If this is not the case then we will find a hyperbolic geodesic segment $\gamma'$ and a subsegment $\eta'\subset\eta$ satisfying (1) and (2), and such that $\eta'$ is at least $\epsilon$-shorter than $\eta$, that is $\ell_\rho(\eta')\le\ell_\rho(\eta)-\epsilon$. Now, if the disk $\Delta'$ associated to $\gamma'$ and $\eta'$ satisfies (3) then we are done; otherwise we iterate the procedure. This process can only be repeated finitely many times because at each step we lose a definite amount of length, and the length of the original segment $\eta$ is finite.
Let us see how to find $\gamma'$ and $\eta'$. We start by taking a point $p\in\sing(\Gamma)$ such that the $\gamma$-outgoing ray $\sigma$ at $p$ meets $\eta$ and such that $d_{\hyp}(p,\gamma)=h_\gamma(\eta)$. Note that Lemma \ref{l:neighborhood} implies that all intersections of $\sigma$ and $\eta$ happen in the annulus $B_{\hyp}(p,\epsilon)\cap \hat\Sigma=B_{\hyp}(p,\epsilon)\setminus B_{\hyp}(p,\delta)$. Since the outgoing ray $\sigma$ intersects $\eta$ and we are assuming that $p\notin\Delta$, $\sigma$ must intersect $\eta$ at least twice. We deduce that there is a closed subsegment
$$\gamma'\subset\sigma\cap(B_{\hyp}(p,\epsilon)\cap \hat\Sigma)$$
with $\gamma'\cap\eta=\partial\gamma'$. Let $\eta'\subset\eta$ be the subsegment of $\eta$ bounded by $\partial\gamma'\subset\eta$.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{disk.pdf}
\caption{Proof of Claim \ref{claim1}.}
\label{fig producing disk}
\end{figure}
By construction the pair $\gamma',\eta'$ satisfies (1). Moreover, since $\eta$ has to travel at least distance $49\epsilon$ to go from $\gamma$ to $B_{\hyp}(p,\epsilon)$ and since the metrics $\rho$ and $\rho_{\hyp}$ agree on $B_{\hyp}(p,50\epsilon)\setminus B_{\hyp}(p,\delta)$ we get that
$$\ell_{\rho}(\eta')\le\ell_{\rho}(\eta)-98\epsilon,$$
which beats our stated goal of reducing the length by $\epsilon$ by a proud $97\epsilon$. It remains to prove that the pair $(\gamma',\eta')$ satisfies (2), meaning that one of the conditions (a) or (b) holds. Actually, we are going to argue that it satisfies (b). First, $\gamma'$ is shorter than $\epsilon$ by construction. It thus suffices to check that $h_{\gamma'}(\eta')>50\epsilon$. By Lemma \ref{l:neighborhood} it suffices to prove that $\eta'$ exits $\CN_{\hyp}(\gamma',51\epsilon)$, or even better, that it exits the ball $B_{\hyp}(p,52\epsilon)$. Indeed, since $\eta'$ has both endpoints on $\sigma$, a hyperbolic ray emanating from $p$, since $\eta'$ is contained in $\eta$, a simple geodesic which exits $B_{\hyp}(p,52\epsilon)$ in both directions, and since the metrics $\rho$ and $\rho_{\hyp}$ agree by construction on $B_{\hyp}(p,52\epsilon)\setminus B_{\hyp}(p,2\delta)$, we get from Lemma \ref{lem:meet rays once} that $\eta'$ cannot be contained in $B_{\hyp}(p,52\epsilon)$. We have proved the claim.
\end{proof}
Continuing with the proof of Lemma \ref{l:main}, and with notation as in Claim \ref{claim1}, choose $g\in\Stab_{\Gamma}(p)$ with rotation angle $\theta\in[\frac{2\pi}{3}, \frac{4\pi}{3}]$; such a $g$ always exists.
\begin{claim}\label{claim2}
We have $g^{\pm 1}(\bar\gamma)\cap\bar\Delta=\emptyset.$
\end{claim}
\begin{proof}[Proof of Claim \ref{claim2}]
Suppose first that the pair $(\bar\gamma,\bar\eta)$ satisfies (a), meaning that
$$d_{\hyp}(p,\bar\gamma)=h_{\bar\gamma}(\bar\eta)>1.$$
A computation using standard hyperbolic trigonometry implies that
$$d_{\hyp}(\bar\gamma,g^{\pm 1}\bar\gamma)=2\cosh^{-1}\left(\frac{\sqrt 3}2\cosh(d_{\hyp}(p,\bar\gamma))\right)\ge \frac 32d_{\hyp}(p,\bar\gamma)>h_{\bar\gamma}(\bar\eta)+\epsilon.$$
The claim thus follows because $\bar\Delta\subset\CN_{\hyp}(\bar\gamma,h_{\bar\gamma}(\bar\eta)+\epsilon)$ by Lemma \ref{l:neighborhood}. This settles the case that the pair $(\bar\gamma,\bar\eta)$ satisfies (a). Suppose now that it satisfies (b) and let $p^*$ be the projection of $p$ to $\bar\gamma$. Now, either by another hyperbolic trigonometry computation, or simply by comparing with the Euclidean comparison triangle, one gets that
$$d_{\hyp}(p^*,g^{\pm 1}p^*)\ge\sqrt{ 3}\cdot d_{\hyp}(p,p^*)\ge \sqrt 3\cdot h_{\bar\gamma}(\bar\eta)>h_{\bar\gamma}(\bar\eta)+3\epsilon.$$
Again the claim follows from Lemma \ref{l:neighborhood}.
\end{proof}
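For completeness, here is one way to carry out the two computations invoked in the proof of Claim \ref{claim2}; we only sketch the arguments, relying on standard facts of planar hyperbolic trigonometry, and we work with the complete geodesic extension of $\bar\gamma$, which only decreases distances.

```latex
% Case (a): write the rotation g by angle \theta about p as a composition of
% reflections r_{\sigma_2} r_{\sigma_1}, where \sigma_1 is the geodesic through
% p perpendicular to \bar\gamma and \sigma_2 passes through p making angle
% \theta/2 with \sigma_1. Since r_{\sigma_1}\bar\gamma=\bar\gamma we get
% d_{\hyp}(\bar\gamma,g\bar\gamma)=2\,d_{\hyp}(\bar\gamma,\sigma_2), and the
% identity \cosh d_{\hyp}(\bar\gamma,\sigma_2)=\sin(\theta/2)\cosh d_{\hyp}(p,\bar\gamma)
% (consistent with \sin\Pi(d)=1/\cosh d for the angle of parallelism) yields
\[
d_{\hyp}(\bar\gamma,g^{\pm1}\bar\gamma)
  =2\cosh^{-1}\!\big(\sin(\tfrac\theta2)\cosh d_{\hyp}(p,\bar\gamma)\big)
  \ge 2\cosh^{-1}\!\big(\tfrac{\sqrt3}2\cosh d_{\hyp}(p,\bar\gamma)\big),
\]
% using \sin(\theta/2)\ge\sqrt3/2 for \theta\in[2\pi/3,4\pi/3]. For the middle
% inequality 2\cosh^{-1}((\sqrt3/2)\cosh d)\ge\frac32 d when d\ge1, the function
% f(d)=2\cosh^{-1}((\sqrt3/2)\cosh d)-\frac32 d satisfies f(1)\approx 0.09>0 and
\[
f'(d)=\frac{\sqrt3\sinh d}{\sqrt{\tfrac34\cosh^2 d-1}}-\frac32
  \ge\frac{\sqrt3\sinh d}{\tfrac{\sqrt3}2\sinh d}-\frac32=\frac12>0,
\]
% since \tfrac34\cosh^2 d-1\le\tfrac34(\cosh^2 d-1)=\tfrac34\sinh^2 d.
%
% Case (b): in the Euclidean comparison triangle with two sides of length
% d_{\hyp}(p,p^*) and angle \theta at p, the chord has length
% 2\sin(\theta/2)\,d_{\hyp}(p,p^*); since the hyperbolic plane has nonpositive
% curvature, the hyperbolic side dominates the comparison chord, whence
\[
d_{\hyp}(p^*,g^{\pm1}p^*)\ge 2\sin(\tfrac\theta2)\,d_{\hyp}(p,p^*)
  \ge\sqrt3\,d_{\hyp}(p,p^*).
\]
```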
We are now ready to conclude the proof of the lemma. First note that
$$g\bar\Delta\cap\bar\Delta\neq\emptyset$$
because $gp=p$. Since $g\bar\Delta$ can neither be contained in $\bar\Delta$ nor contain $\bar\Delta$ we deduce that $\vert g(\partial\bar\Delta)\cap\partial\bar\Delta\vert\ge 2$. Since $\partial\bar\Delta=\bar\gamma\cup\bar\eta$ we get from Claim \ref{claim2} that
$$g\bar{\gamma}\cap(\partial \bar\Delta) =\emptyset \text{ and } \bar{\gamma}\cap g(\partial \bar\Delta)=\emptyset.$$
It follows that $g(\partial \bar\Delta)\cap(\partial \bar\Delta)=g\bar\eta\cap\bar\eta$ and hence that $\vert g\bar\eta\cap\bar\eta\vert\ge 2$. The lemma then follows because $\bar\eta\subset\eta$.
\end{proof}
\section{Proof of Proposition \ref{main proposition}}\label{sec:proof of proposition}
We are now finally ready to prove the remaining proposition:
\begin{named}{Proposition \ref{main proposition}}
Let $\RO$ be as in the statement of Theorem \ref{main}, $\hat{\Sigma}$ as in \eqref{eq associated surface}, and $\rho$ the metric on $\hat{\Sigma}$ constructed in Section \ref{sec:metric}. There exists $A\ge 1$ such that any unit speed $\rho$-geodesic $\alpha: \mathbb R\to\hat\Sigma$ which is (1) a quasigeodesic in $(\tilde{\RO},\rho_{\hyp})$ and (2) as simple as possible, is actually an $A$-quasigeodesic in $(\tilde{\RO},\rho_{\hyp})$.
\end{named}
\begin{proof}
Note first that compactness of $\hat\Sigma/\Gamma$, together with $\Gamma$-invariance of the metric $\rho$, implies that the inclusion $(\hat\Sigma,\rho)\hookrightarrow(\tilde\RO,\rho_{\hyp})$ is locally $A_0$-bi-Lipschitz for some $A_0$. It follows in particular that for all $s,t\in\mathbb R$ we have
\begin{equation}\label{eq-tired of this1}
A_0\cdot\vert s-t\vert\ge d_{\hyp}(\alpha(s),\alpha(t)).
\end{equation}
The remainder of the proof is devoted to showing that the other inequality in the definition of quasigeodesic holds for some constant independent of the concrete $\alpha$.
As a first step we choose a $\Gamma$-invariant triangulation $\CT$ of $\tilde\RO$ whose edges are $\rho_{\hyp}$-geodesic segments of length at most $\epsilon$. Consider the set
$$\CP=\{\text{simplices of }\CT\text{ contained in }\CN_{\hyp}(\sing(\Gamma),60\epsilon)\}$$
and let $\vert\CP\vert=\cup_{\sigma\in\CP}\sigma\subset\tilde\RO$ be the union of all those simplices. Let also
$$\mathcal E=\{\text{edges of }\CT\text{ contained in the closure of }\tilde\RO\setminus\vert\CP\vert\}$$
and let $\vert\mathcal E\vert=\cup_{e\in\mathcal E}e\subset\tilde\RO$ be the graph obtained by taking the union of all the edges of $\CT$ which are disjoint from the interior of $\vert\CP\vert$. Note that the elements in $\mathcal E$ are not only $\rho_{\hyp}$-geodesic but also $\rho$-geodesic because the metrics $\rho$ and $\rho_{\hyp}$ agree away from a very small neighborhood of $\sing(\Gamma)$. Note also that compactness of $\RO=\tilde\RO/\Gamma$ and $\Gamma$-invariance of $\CT$ imply that there is $C$ such that the following holds for all $p\in\tilde\RO$:
\begin{itemize}
\item[(a)] Fewer than $C$ elements of $\mathcal E$ intersect the ball $B_{\hyp}(p,4)$.
\end{itemize}
We care about all of this because we will estimate the $\rho$-length of subsegments of $\alpha$ as in the statement of Proposition \ref{main proposition} in terms of the number of edges in $\mathcal E$ that they meet. The key observation is that, since $\alpha$ as in the statement is as simple as possible and hence also simple, and since, being a quasigeodesic, it cannot spend an infinite amount of time in any single ball, we get from Lemma \ref{lem:meet rays once} that there is some $D>\epsilon$ such that:
\begin{itemize}
\item[(b)] If $\alpha:\mathbb R\to\tilde\RO$ is as in the statement of the proposition and if $[s,t]\subset\mathbb R$ is such that $\alpha[s,t]\cap\vert\mathcal E\vert=\emptyset$ then $\vert s-t\vert<D$.
\end{itemize}
We get from (b) that a geodesic $\alpha$ as in the statement never spends much time without meeting one of the edges in $\mathcal E$. We prove next that once such an $\alpha: \mathbb{R}\to\hat{\Sigma}$ leaves $e\in \mathcal E$ it never comes back to $e$. Indeed, suppose that $s<t$ are such that $\alpha(s),\alpha(t)\in e$ for some $e\in\mathcal E$. Denote by $\gamma$ the subsegment of $e$ between $\alpha(s)$ and $\alpha(t)$ and let $\eta=\alpha[s,t]$. We claim first that $\eta\subset\CN_{\hyp}(\gamma,55\epsilon)$. Otherwise we get from Lemma \ref{l:neighborhood} that $h_\gamma(\eta)>50\epsilon$ and then from Lemma \ref{l:main} that there is $g\in\Gamma$ such that $\vert\eta\cap g\eta\vert\ge 2$, contradicting the assumption that $\alpha$ is as simple as possible. We have thus proved that $\eta\subset\CN_{\hyp}(\gamma,55\epsilon)$. Noting that $\CN_{\hyp}(\gamma,55\epsilon)\subset\CN_{\hyp}(e,55\epsilon)\subset\hat{\Sigma}$ is contractible and that the metrics $\rho$ and $\rho_{\hyp}$ agree thereon, we deduce that $\gamma=\eta$ because both are geodesic segments with the same endpoints. We have established the following key fact:
\begin{itemize}
\item[(c)] If $\alpha:\mathbb R\to\tilde\RO$ is as in the statement of the proposition and if $s<t\in\mathbb R$ are such that $\alpha(s),\alpha(t)\in e$ for some $e\in\mathcal E$ then $\alpha[s,t]\subset e$.
\end{itemize}
The reader can surely imagine at this point how (b) and (c) interplay, but might still be wondering why we bothered to state (a) at all. The reason is coming. Still assuming that $\alpha:\mathbb R\to\hat\Sigma$ is a $\rho$-geodesic as in the statement, suppose that $s,t\in\mathbb R$ are such that $d_{\hyp}(\alpha(s),\alpha(t))\le 4$, let $\gamma\subset\tilde\RO$ be the $\rho_{\hyp}$-geodesic segment joining $\alpha(s)$ and $\alpha(t)$, and finally let $p$ be the midpoint of $\gamma$. Since $\alpha$ is as simple as possible we get from Lemma \ref{l:main} that $h_\gamma(\alpha[s,t])\leq1$. Lemma \ref{l:neighborhood} then yields that
$$\alpha([s,t])\subset\CN_{\hyp}(\gamma,1+\epsilon)\subset B_{\hyp}(p,4).$$
We thus get from (a) that $\alpha[s,t]$ meets at most $C$ elements of $\mathcal E$, and from (c) that, if we cut $\alpha[s,t]$ at all the points where it enters an element of $\mathcal E$, then we produce at most $C+1$ segments. Since each of them has length at most $D$ by (b), we obtain:
\begin{itemize}
\item[(d)] If $\alpha:\mathbb R\to\tilde\RO$ is as in the statement of the proposition and if $s,t\in\mathbb R$ are such that $d_{\hyp}(\alpha(s),\alpha(t))\le 4$ then $\vert s-t\vert\le (C+1)\cdot D$.
\end{itemize}
We are almost at the end of the proof of the proposition. Recalling that $\alpha$ as in the statement of the proposition is a quasigeodesic, let $\gamma\subset\tilde\RO$ be the hyperbolic geodesic with the same endpoints as $\alpha$, and let
$$\pi:\BH^2\to\gamma$$
be the nearest point projection. From Lemma \ref{l:main}, or rather from the remark following it, we get that $h_\gamma(\alpha(\mathbb R))\leq1$. Lemma \ref{l:neighborhood} implies that $d_{\hyp}(\alpha(s),\pi(\alpha(s)))\leq 1+\epsilon$ for all $s$. Now, if we have $s,t\in\mathbb R$ with $d_{\hyp}(\pi(\alpha(s)),\pi(\alpha(t)))=1$ we get $d_{\hyp}(\alpha(s),\alpha(t))\le 3+2\epsilon<4$. We thus get from (d) that:
\begin{itemize}
\item[(e)] If $\alpha:\mathbb R\to\tilde\RO$ is as in the statement of the proposition, if $\gamma$ is the $\rho_{\hyp}$-geodesic at bounded distance from $\alpha$, and if $\pi:\tilde\RO\to\gamma$ is the nearest point projection, then we have
$$\vert s-t\vert\le (C+1)\cdot D\stackrel{\text{def}}=A_1$$
for all $s,t\in\mathbb R$ with $d_{\hyp}(\pi(\alpha(s)),\pi(\alpha(t)))\leq 1$.
\end{itemize}
Now, given arbitrary $s<t\in\mathbb R$, let $s_0=s$ and, as long as $s_k<t$, define iteratively $s_{k+1}>s_k$ as follows:
\begin{itemize}
\item If $d_{\hyp}(\pi(\alpha(s_k)),\pi(\alpha(t)))<1$ then set $s_{k+1}=t$.
\item Otherwise set $s_{k+1}=\max\{s'\in[s,t]\text{ with }d_{\hyp}(\pi(\alpha(s_k)),\pi(\alpha(s')))=1\}$.
\end{itemize}
In this way we get a sequence
$$s=s_0<s_1<s_2<\ldots<s_{N-1}<s_N=t$$
where $N$ satisfies
$$d_{\hyp}(\pi(\alpha(s)),\pi(\alpha(t)))\ge N-1.$$
On the other hand we get from (e) that $\vert s_k-s_{k+1}\vert\le A_1$. This means that
$$N\ge \frac{\vert s-t\vert}{A_1}.$$
Taking all of this together, for any $\alpha:\mathbb R\to\hat\Sigma$ as in the statement of the proposition we have
\begin{equation}\label{eq-tired of this2}
d_{\hyp}(\alpha(s),\alpha(t))\ge d_{\hyp}(\pi(\alpha(s)),\pi(\alpha(t)))\ge\frac{\vert s-t\vert}{A_1}-1
\end{equation}
for all $s,t\in\mathbb R$.
The claim then follows from \eqref{eq-tired of this1} and \eqref{eq-tired of this2} with $A=\max\{A_0,A_1, 1\}$.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Theoretical modeling of relativistic stellar structure has flourished since Karl Schwarzschild obtained solutions of the Einstein field equations (EFEs) for the spherically symmetric static exterior \cite{ks1} and interior \cite{ks2} of a stellar object, the latter under the assumption of a matter distribution with uniform density \cite{ks1,ks2}. Later, the works of Tolman \cite{tolman} and of Oppenheimer and Volkoff \cite{OV} branched the study out toward more realistic, data-driven modeling: Tolman presented eight different exact solutions of the EFEs, while in the same issue of Physical Review, Oppenheimer and Volkoff were the first to obtain computational solutions of the EFEs for a degenerate neutron gas. Initially, it was assumed that spherically symmetric matter behaves like a perfect fluid, in which the radial pressure coincides with the tangential pressure. But in 1922, Jeans \cite{jeans} changed this picture by suggesting that, due to the extreme and unusual conditions reigning in the interior of compact objects, anisotropy must be taken into account when studying the nature of the matter distribution. In physics, the term anisotropy describes the direction-dependent properties of materials; in the context of compact stars it denotes the difference between the radial and tangential pressures. Eleven years later Lema\^itre \cite{leimatre} constructed the first anisotropic model, built entirely with tangential pressure and constant density. In 1972, Ruderman \cite{ruderman} proposed that highly compact objects are, in fact, anisotropic in nature, predominantly because of their high density ($> 10^{15}gm/cc$). Bowers and Liang \cite{BL} studied anisotropic spheres in General Relativity, and Dev and Gleiser \cite{DG1,DG2} showed the influence of physical parameters like mass and structure on anisotropy. 
Anisotropy can exist for various reasons, such as the presence of a superfluid or a solid core \cite{KW}, mixtures of different fluids and viscosity \cite{HS}, phase transitions \cite{sokolov}, meson condensation \cite{sawyer}, strong magnetic fields \cite{weber}, and superconductivity \cite{migdal}, to name a few. Anisotropic compact objects have been modeled under various conditions and assumptions by many researchers: assuming a barotropic equation of state \cite{MK07,RBBU}, assuming pressure anisotropy in (3 + 1)D spacetime \cite{bhar1,bhar2,bhar3}, assuming a quadratic equation of state for the stellar interior \cite{BM}, assuming a quadratic envelope \cite{TMM}, and so on. Herrera, along with other collaborators \cite{herrera92,HPI,HOP,HSW}, studied and analyzed the stability of self-gravitating systems in the presence of local anisotropy. A new class of exact solutions of the EFEs for the anisotropic stellar model has been pursued by Mak and Harko \cite{MH03}, who, together with other researchers, have done further fascinating work in the context of anisotropic fluid matter distributions \cite{HM1,HM2,HM3,HM4,MDH}. The effect of anisotropy on the formation of the structure of a spherically symmetric stellar model has been thoroughly discussed in the literature \cite{MGRD,MBH,MBG,DCRRG,KRMH,MMKP,TRSD}. Das et al. \cite{DRB} have investigated diverse physical aspects of the compact stellar model in an anisotropic fluid matter distribution by performing a comparative study of their proposed model with observational data.\\
The general approach for modeling a relativistic compact stellar object is to particularize the gravitational potentials or matter variables, to adopt a specific type of equation of state, or to impose geometric constraints on a specific spacetime geometry. For example, Feroze and Siddiqui \cite{FS} and Malaver considered linear \cite{malaver1} and quadratic \cite{malaver2,malaver3} equations of state for the matter distribution and specified particular forms for the gravitational potential and electric field intensity. Considering a polytropic equation of state, Mafa Takisa and Maharaj \cite{TM} obtained new exact solutions to the Einstein-Maxwell system of equations, and Thirukkanesh and Ragel \cite{TR} obtained an anisotropic fluid model by solving the Einstein field equations. Another way of describing a compact stellar model is to embed the spacetime into a higher dimensional Euclidean space. Embedding of an $n$ dimensional space $V_{n}$ into an $(n+p)$ dimensional Euclidean space $E_{n+p}$ caught the eye of researchers after the work on brane theory by Randall and Sundrum \cite{RS}, though from the mathematical viewpoint the theory of embedding begins with Riemann's definition of manifolds. L. Schl\"afli \cite{schlai} laid the foundation in 1871 when he conjectured that spacetime can be embedded into a higher dimensional pseudo-Euclidean space. Later it was proven by Janet and Cartan \cite{janet,cartan}, in what is famously known as the Janet-Cartan theorem, that to embed any $4$-dimensional spacetime, both locally and isometrically, at most $10$ dimensions are needed. Subsequently, the result was extended to manifolds with an indefinite metric by Friedman \cite{friedman}. 
An intriguing recent scheme for modeling stable configurations is to embed a 4-dimensional spacetime into a higher dimensional space; another particular form of the Buchdahl metric was also studied by Vaidya and Tikekar \cite{VT} and Tikekar \cite{tikekar} by embedding a 3-hyperspace into a 4-dimensional space. Recently the Buchdahl metric has been analyzed in spherically symmetric spacetime in terms of embedding by Singh et al. \cite{SPG}, and in terms of the Vaidya-Tikekar and Finch-Skea models by Maurya et al. \cite{MBJKPP}. The condition for embedding a 4-dimensional spacetime metric into a 5-dimensional Euclidean space was first derived by K. R. Karmarkar \cite{karmarkar} and is named after him as the Karmarkar condition. In this condition the two metric potentials are dependent on each other; the simple expression relating them allows one to choose general, physically viable forms of one metric potential and thereby generate acceptable compact models. Any solution of the EFEs satisfying the Karmarkar condition is said to be of class I. Numerous researchers have devoted their time to the modeling of both charged and uncharged stars using the Karmarkar condition. Some of the recent literature on modeling anisotropic compact stars using the embedding class I condition is authored by Pandya et al. \cite{PTGRS,PT}, Singh et al. \cite{SBMP,SBLR,SERD,SMERD}, Gedela et al. \cite{GBP1,GBP2,GPBP} and many more \cite{OMES,JMA,SSRSS,SSSR,PKMD,FMS,SF,TF,RSERD,MOJ,OML}. Recently, Govender et al. \cite{GMSP} have studied the gravitational collapse of a spherically symmetric star by employing the Karmarkar condition. It is worth mentioning that the Schwarzschild exterior solution is of class II while the interior solution is of class I. 
An isotropic fluid sphere satisfying the Karmarkar condition is either the Schwarzschild interior solution \cite{ks1,ks2}, whose interior is conformally flat and describes a bounded configuration, or the Kohler-Chao solution \cite{KCN}, whose interior is conformally non-flat and describes an unbounded configuration; the condition is nevertheless a fruitful route to new classes of relativistic solutions.\\
{Maurya et al. \cite{MGTR} have investigated new solutions of the EFEs using the Karmarkar condition, considering a specific form of metric potential, viz. $e^\lambda = 1 + \frac{(a - b)r^2}{1 + b r^2}$ with $a \neq b$. In this paper, by utilizing a special case of this metric, we study some new features of compact stars which were not discussed in their work.} We study a solution of the EFEs assuming a specific form of metric potential, $e^\lambda = \frac{2(1+A r^2)}{2 -A r^2}$, where $A$ is a constant parameter and $r$ is the radial coordinate, and hence develop a relativistic model of an anisotropic compact star of embedding class I in static, spherically symmetric geometry. {Our metric is obtained by substituting $a = A$ and $b = -{A \over 2}$ in the metric considered in \cite{MGTR}.} Since the Buchdahl metric \cite{buchdahl59} satisfies all the criteria for physical acceptability of a stellar structure listed by Delgaty and Lake \cite{DL}, we have considered an ansatz which is a special form of the Buchdahl metric. The model is shown to admit a linear equation of state. The estimated masses and radii closely match the observed data for the stars $4$U$1820-30$, SMC X$-1$, SAX J $1808.4-3658$, $4$U$1608-52$, PSR J $1903 + 327$, Vela X$-1$ and $4$U$1538-52$. {Additionally, we study the mass-radius and radius-central density relationships for the chosen metric and, comparing with the result for a slowly rotating configuration, the mass-moment of inertia relationship for our prescribed model.}\\
Our paper is organized as follows: In Sec.~\ref{sec2} we present the basic equations governing the anisotropic system. In Sec.~\ref{sec3} we recall the Karmarkar condition, and our proposed model is discussed mathematically in Sec.~\ref{sec4}. Utilizing the matching conditions on the boundary of the stellar object in Sec.~\ref{sec5}, we establish general forms of the constant parameters and then examine physical properties, viz. the regularity condition, energy conditions, the generalized Tolman-Oppenheimer-Volkoff (TOV) equation, the causality condition, stability conditions, etc., of several compact stars in Sec.~\ref{sec6} in the framework of general relativity. Finally, we provide a short discussion of the matter variables for several stars in Sec.~\ref{sec7} and conclude our results in Sec.~\ref{sec8}.
\section{\label{sec2}Einstein field equations}
We consider the line element in $4$D co-ordinate system $(t,~r,~\theta,~\phi)$ to describe the interior of a static and spherically symmetric stellar configuration as,
\begin{equation}
ds^{2}=e^{\nu(r)}dt^{2}-e^{\lambda(r)}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right), \label{le}
\end{equation}
where the metric potentials $e^{\nu(r)}$ and $ e^{\lambda(r)} $ or more precisely $\nu(r)$ and $\lambda(r)$ are functions of the radial coordinate `$r$' only.\\
Assuming $8\pi G =~c~=~1$ the Einstein field equations can be written as,
\begin{equation}
G_{\mu \nu} = R_{\mu \nu}-{1\over 2}R~g_{\mu \nu} = - T_{\mu \nu}, \label{tensor}
\end{equation}
where $G_{\mu \nu},~ T_{\mu \nu},~ R_{\mu \nu}, ~ g_{\mu \nu}$ and $R$ are Einstein tensor, the stress energy tensor, Ricci tensor, metric tensor and Ricci scalar respectively.\\
For an anisotropic matter distribution, the energy momentum tensor can be written as,
\begin{equation}
T_{\mu \nu} = \left(\rho(r)+p_t(r)\right)U_\mu U_\nu-p_t(r) g_{\mu \nu}+\left(p_r(r)-p_t(r)\right)\chi_\mu \chi_\nu, \label{tensor2}
\end{equation}
where $\rho(r)$ is the energy density, and $p_r(r)$ and $p_t(r)$ represent the pressures along the radial and transverse directions of the fluid configuration respectively. $U^\mu$ is the $4$-velocity and $\chi^\mu$ is a unit $4$-vector along the radial direction. These quantities obey the relations $\chi_\mu \chi^\mu = 1$ and $\chi_\mu U^\mu = 0$.\\
The Einstein field equations (\ref{tensor}) governing the evolution of the system take the following form for the metric (\ref{le}) together with the energy-momentum tensor (\ref{tensor2}):
\begin{eqnarray}
\rho(r)&=& \frac{1-e^{-\lambda}}{r^{2}}+\frac{e^{-\lambda}\lambda'}{r}, \label{dens}\\
p_{r}(r)&=& \frac{e^{-\lambda}-1}{r^{2}}+\frac{e^{-\lambda}\nu'}{r}, \label{prs}\\
p_t(r)&=& {e^{-\lambda} \over 2} \left(\nu''+ \frac{\nu'-\lambda'}{r} + \frac{\nu'^{2}-\nu'\lambda'}{2} \right), \label{prt}
\end{eqnarray}
where the `prime' in Eqs.~(\ref{dens})-(\ref{prt}) denotes differentiation with respect to the radial coordinate $r$. The anisotropic factor is defined as $\Delta(r) = p_t(r) - p_r(r)$, and its expression for the stellar system is
\begin{equation}
\Delta(r)= \frac{e^{-\lambda}}{2}\left( \nu'' +\frac{\nu'^{2}}{2}-\frac{\nu'\lambda'}{2}- \frac{\nu'+\lambda'}{r} \right) + \frac{1- e^{-\lambda}}{r^2}.
\label{eqdelta}
\end{equation}
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{etnu.eps}
\hfill
\includegraphics[width=.49\textwidth]{etlamda.eps}
\caption{\label{figmp} Behavior of metric potentials $e^{\nu}$ (left) and $e^{\lambda}$ (right) with respect to the radial coordinate $r$ for the compact stars corresponding to the numerical value of constants given in Table~\ref{table1}.}
\end{figure}
\section{\label{sec3}Karmarkar Condition}
The general theory of relativity tells us that whenever an $n$ dimensional spacetime is embedded in a pseudo-Euclidean spacetime of dimension $n+p$, the number `$p$' is called the embedding class. A $4$ dimensional Riemannian space with symmetric tensor $h_{\alpha\beta}$ can be embedded into a $5$ dimensional pseudo-Euclidean space if it satisfies the Gauss \cite{gauss} and Codazzi \cite{codazzi} conditions, which can be written as,
\begin{eqnarray}
R_{\alpha \beta i j} &=& \epsilon \left(h_{\alpha i}h_{\beta j}-h_{\alpha j}h_{\beta i}\right),\label{eq8}\\
h_{\alpha \beta ; i}- h_{\alpha i ; \beta} &=& 0,\label{eq9}
\end{eqnarray}
where $R_{\alpha \beta i j}$ denotes the curvature tensor, $\epsilon = 1$ when the normal to the manifold is spacelike, $\epsilon = -1$ when the normal to the manifold is timelike, and the symbol `;' represents the covariant derivative. In $1921$, Kasner \cite{kasner} showed that the $4$ dimensional spacetime of a spherically symmetric object can always be embedded in a $6$ dimensional pseudo-Euclidean space, and later Gupta and Goyel ($1975$) \cite{GG} obtained the same result with another coordinate transformation. In $1924$, Eddington \cite{eddington} found that an $n$-dimensional spacetime can always be embedded in an $m$-dimensional pseudo-Euclidean space with $m=n(n+1)/2$, so the minimum number of extra dimensions required is at most $m-n=n(n-1)/2$. Therefore the $4$ dimensional spherically symmetric line element Eq.~(\ref{le}) is in general of embedding class II.
From Eq.~(\ref{le}), the components of Riemann curvature tensor are expressed as,
\begin{eqnarray}
R_{1414}=-e^\nu \left(\frac{\nu''}{2}+\frac{\nu'^2}{4}-\frac{\lambda'\nu'}{4}\right),~~R_{1212}= \frac{r \lambda'}{2},\nonumber \\
R_{2323}= e^{-\lambda} r^2 {\sin^2\theta}(e^\lambda -1),~~R_{2424}= \frac{1}{2} \nu' r e^{\nu-\lambda},\nonumber \\
R_{3434}= R_{2424} \sin^2\theta ,~~R_{1224}=0, \nonumber \\
R_{1334} = R_{1224}\sin^2 \theta =0. \label{eq10}
\end{eqnarray}
In $1948$, K. R. Karmarkar derived a condition, now known as the Karmarkar condition, under which a $4$ dimensional spacetime can be embedded in a $5$ dimensional flat space.
Now, owing to the symmetric nature of $h_{\alpha i}$, its nonzero components corresponding to Eq.~(\ref{le}) are $h_{11}$, $h_{22}$, $h_{33}$, $h_{44}$ and $h_{14}(=h_{41})$, with $h_{33}=h_{22}\sin^2\theta$.\\
Using these components, Eq.~(\ref{eq8}) reduces to
\begin{equation}
R_{1414}R_{2323} = R_{1212}R_{3434} +R_{1224}R_{1334},
\label{eq11}
\end{equation}
which is the expression for the Karmarkar condition. Later, in $1981$, Sharma and Pandey \cite{PS} clarified that condition (\ref{eq11}) is only a necessary condition for a $4$ dimensional spacetime to be of class one, not a sufficient one. To be class one, a spacetime must satisfy Eq.~(\ref{eq11}) together with $R_{2323}\neq 0$.
On substituting all the values of Eq.~(\ref{eq10}) in Eq.~(\ref{eq11}), we obtain the following differential equation,
\begin{equation}
\frac{2\nu''}{\nu'}+ \nu' = \frac{\lambda' e^\lambda}{e^\lambda - 1},
\label{eq12}
\end{equation}
with $e^\lambda \neq 1$.
Solving Eq.~(\ref{eq12}) we obtain the relationship between $\lambda$ and $\nu$ as,
\begin{equation}
e^{\nu(r)} = \left[C + D \int{\sqrt{e^{\lambda(r)}-1}}dr \right]^2,
\label{nu}
\end{equation}
where $C$ and $D$ are nonzero constants of integration. The distinctive feature of a class I spacetime is condition (\ref{nu}), i.e. the co-dependency of the metric potentials, which provides the scope to generate an anisotropic model of embedding class I by specifically choosing one of the metric potentials. \\
According to Maurya et al. \cite{MGRC}, the anisotropy factor becomes,
\begin{equation}
\Delta(r) = {\nu'(r) \over 4e^\lambda} \left( \frac{\nu'(r)e^\lambda}{2 r D^2} -1\right) \left( {2 \over r} - \frac{\lambda'(r)}{e^\lambda - 1}\right).
\label{eqanim}
\end{equation}
Clearly, the pressure anisotropy vanishes if either of the factors on the RHS (right hand side) of Eq.~(\ref{eqanim}) becomes zero. If the factor $\left( \frac{\nu'(r)e^\lambda}{2 r D^2} -1\right)$ vanishes, it leads to the Kohler-Chao solution, whereas if $\left( {2 \over r} - \frac{\lambda'(r)}{e^\lambda - 1}\right)$ vanishes, it yields the interior Schwarzschild solution.
\section{\label{sec4}A particular model}
The fundamental approach of theoretical modeling is to find exact solutions of the system of equations represented by Eqs.~(\ref{dens})-(\ref{prt}), which determines the structure of spacetime for an anisotropic fluid distribution. Clearly, if $p_r = p_t$ the above system reduces to a perfect fluid matter distribution. The EFEs consist of three equations in five unknowns, namely
$\lambda(r),~\nu(r),~\rho(r),~p_r(r)$ and $p_t(r)$. The Karmarkar condition furnishes a relation between the two metric potentials $\lambda(r)$ and $\nu(r)$, giving four equations in five unknowns. To close this system of equations we consider a metric potential which is a special form of the Buchdahl ansatz \cite{buchdahl59}. This ansatz has been considered by Maurya et al. \cite{MGDJA} and is given as
\begin{equation}
e^{\lambda(r)} = \frac{2(1 + A r^2)}{2 - A r^2},
\label{etlamda}
\end{equation}
where $A$ is a non-negative constant.
Note that $A = 0$ makes the metric flat, since then $e^\lambda=1$; hence $A \neq 0$, i.e. $A$ is strictly positive.
The metric function is regular, free of singularities and finite at the center of the star ($r = 0$), satisfying the basic physical requirements of a compact star and thus making the model physically tenable.
Plugging Eq.~(\ref{etlamda}) into Eq.~(\ref{nu}) we have,
\begin{equation}
e^{\nu(r)}=\left[ C - \frac{D\sqrt{3(2 - A r^2)}}{{\sqrt{A}}}\right]^2.
\label{etnu}
\end{equation}
The function $e^\nu$ also needs to be finite and positive at the center. From Fig.~\ref{figmp} it can be concluded that $e^\nu$ is monotonically increasing with the radial coordinate `$r$' throughout the star, implying that $e^\nu$ may serve as a metric potential for a viable model as proposed by Lake \cite{lake}.
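As a numerical sanity check, the pair of Eqs.~(\ref{etlamda}) and (\ref{etnu}) can be verified against the Karmarkar condition (\ref{eq12}) by finite differences. A minimal Python sketch; the constants $A$, $C$, $D$ below are illustrative values of the order of those obtained later from the boundary conditions, assumed only for this check:

```python
import math

# Illustrative model parameters (same order of magnitude as Table 1; assumed, not fitted)
A, C, D = 0.00626, 1.746, 0.0393

def e_lambda(r):
    # e^{lambda(r)} = 2 (1 + A r^2) / (2 - A r^2), Eq. (etlamda)
    return 2.0 * (1.0 + A * r * r) / (2.0 - A * r * r)

def nu(r):
    # nu(r) = 2 ln[ C - D sqrt(3 (2 - A r^2)) / sqrt(A) ], from Eq. (etnu)
    return 2.0 * math.log(C - D * math.sqrt(3.0 * (2.0 - A * r * r)) / math.sqrt(A))

def karmarkar_residual(r, h=1e-3):
    # Residual of 2 nu''/nu' + nu' - lambda' e^lambda / (e^lambda - 1), Eq. (eq12)
    nu1 = (nu(r + h) - nu(r - h)) / (2.0 * h)
    nu2 = (nu(r + h) - 2.0 * nu(r) + nu(r - h)) / (h * h)
    lam = lambda rr: math.log(e_lambda(rr))
    lam1 = (lam(r + h) - lam(r - h)) / (2.0 * h)
    el = e_lambda(r)
    return 2.0 * nu2 / nu1 + nu1 - lam1 * el / (el - 1.0)

residuals = [abs(karmarkar_residual(r)) for r in (1.0, 3.0, 5.0, 7.0)]
```

The residual vanishes (up to finite-difference error) at every interior radius, confirming that the ansatz and the integrated potential form an embedding class I pair.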
The expressions for the energy density, radial pressure and transverse pressure are thus obtained as follows,
\begin{eqnarray}
\rho(r)&=& \frac{3 A(3 + A r^2)}{2 (1 + A r^2)^2}, \label{density}\\
p_{r}(r)&=& \frac{A \left(3 C \sqrt{\frac{A }{2 - A r^2}} - 5\sqrt{3} D\right)}{2 (1 + A r^2)\left(\sqrt{3}D - C \sqrt{\frac{A }{2 - A r^2}}\right)}, \label{rp}\\
p_t(r)&=& \frac{A \left(3 C \sqrt{\frac{A }{2 - A r^2}} - \sqrt{3} D (5 + A r^2)\right)}{2 (1 + A r^2)^2\left(\sqrt{3} D - C \sqrt{\frac{A }{2 - A r^2}} \right)}. \label{tp}
\end{eqnarray}
The anisotropy becomes,
\begin{equation}
\Delta(r)= \frac{A^2 r^2\left(4 \sqrt{3} D - 3 C \sqrt{\frac{A}{2 - A r^2}}\right)}{2 (1 + A r^2)^2 \left(\sqrt{3} D - C \sqrt{\frac{A}{2 - A r^2}}\right)}.
\label{aniso}
\end{equation}
Also the mass function and the compactness factor are given as,
\begin{eqnarray}
m(r)&=& 4 \pi \int_{0}^{r} \rho(\omega) \omega^2 d\omega = \frac{3 A r^3}{4(1 + A r^2)}, \label{massf}\\
u(r) &=& \frac{m(r)}{r}= \frac{3 A r^2}{4(1 + A r^2)}. \label{compact}
\end{eqnarray}
\section{\label{sec5}The Boundary Conditions}
To analyze a compact object in the general theory of relativity, the spacetime is modeled so that two distinct manifolds are joined at a common boundary. The junction conditions determine the constants of the anisotropic fluid configuration, i.e. $C$, $D$ and $A$ in this case, and they are given as,\\
\textbf{(i)} Continuity of the first fundamental form at the boundary, i.e. the interior solution should be matched to the vacuum exterior Schwarzschild solution at the boundary $r = R$ of the star, $R$ being the radius of the star. The procedure followed here is the Darmois-Israel formalism, based on the Gauss-Codazzi decomposition of spacetime. It expresses the surface properties in terms of the jump of the extrinsic curvature across the boundary as a function of the boundary's intrinsic coordinates \cite{MK}. Israel \cite{israel} formulated his work on the idea that the $4$ dimensional coordinates may be chosen independently on both sides of the boundary. Earlier, in his pioneering work, Darmois \cite{darmois} treated the boundary of the structure as the common surface of two different manifolds glued together.\\
\textbf{(ii)} Continuity of the second fundamental form at the boundary i.e. the radial pressure must vanish at the boundary. Mathematically $p_r(r = R)=0$ \cite{MS}.\\
The exterior spacetime on Schwarzschild metric is expressed as,
\begin{equation}
ds^2 = \left( 1- \frac{2M}{r} \right) dt^2 - \left( 1 -\frac{2M}{r}\right)^{-1}dr^2 - r^2 (d\theta^2 +\sin^{2}\theta d\phi^{2}),
\label{bcs}
\end{equation}
for $r > 2M$, $M$ being the stellar mass.
Matching the metric functions at the boundary yields,
\begin{eqnarray}
e^{\nu(R)} &=& \left(1 - \frac{2 M}{r}\right)|_R, \label{nu1}\\
e^{-\lambda(R)} &=& \left( 1 - \frac{2 M}{r}\right)|_R. \label{lamda1}
\end{eqnarray}
Utilizing above equations with Eqs.~(\ref{etnu}) and (\ref{etlamda}) respectively we get,
\begin{eqnarray}
\left( C - \frac{D \sqrt{3(2 - A R^2)}}{\sqrt{A}}\right )^2 &=& 1 - \frac{2 M}{R}, \label{nu2}\\
\frac{2 - A R^2}{2(1 + A R^2)} &=& 1 - \frac{2 M}{R}. \label{lamda2}
\end{eqnarray}
Again, the latter condition $p_r(r = R)= 0$ gives,
\begin{equation}
A \left(3 C \sqrt{\frac{A}{2 - A R^2}} - 5\sqrt{3} D \right) = 0,
\label{prb}
\end{equation}
which prescribes a constraint on the model parameters that can be further utilized to find the radius of the compact model.\\
Furthermore, Eqs.~(\ref{nu2})-(\ref{prb}) yield the expressions of the model parameters as,
\begin{eqnarray}
A &=& \frac{4 M}{R^2 (3 R-4 M)}, \nonumber \\
C &=& {5 \over 2}\sqrt{1-{2 M \over R}},\nonumber \\
D &=& \sqrt{\frac{M}{2 R^3}}. \label{eqacd}
\end{eqnarray}
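Eq.~(\ref{eqacd}) can be checked directly against Table~\ref{table1}. A minimal Python sketch follows; the solar-mass-to-kilometre conversion $GM_\odot/c^2 \approx 1.4766$ km used here is an assumption of the check (the tabulated values may rest on a slightly different constant), so only approximate agreement is asserted:

```python
import math

MSUN_KM = 1.4766  # G M_sun / c^2 in km (assumed conversion factor)

def model_parameters(M_solar, R_km):
    """A, C, D of Eq. (eqacd) from the mass (solar masses) and radius (km)."""
    M = M_solar * MSUN_KM  # mass in geometric units (km)
    A = 4.0 * M / (R_km ** 2 * (3.0 * R_km - 4.0 * M))
    C = 2.5 * math.sqrt(1.0 - 2.0 * M / R_km)
    D = math.sqrt(M / (2.0 * R_km ** 3))
    return A, C, D

# 4U 1820-30: M = 1.58 M_sun, R = 9.1 km (Table 1)
A, C, D = model_parameters(1.58, 9.1)
```

The computed values reproduce the first row of Table~\ref{table1} to within a fraction of a percent.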
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{matching.eps}
\\
\end{tabular}
\end{center}
\caption{Junction conditions are satisfied at the boundary for each compact stars.} \label{figmatch}
\end{figure}
In Fig.~\ref{figmatch}, we have illustrated the matching of the interior metric potentials with the exterior Schwarzschild metric at the boundary for each star; it depicts the smooth fulfillment of the junction conditions.
\begin{table}[tbp]
\centering
\setlength{\tabcolsep}{.3\tabcolsep}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\bf Pulsar} & {\bf Mass ($M_{\odot}$)} & {\bf Radius (Km)} & ${\bf A }$ & ${\bf C }$ & ${\bf D }$ \\
\hline
$4$U$1820-30$ & $1.58$ & $9.1$ & ~~~$0.00626159$~~~ & ~~~$\pm 1.74607$~~~ & ~~~$\pm 0.0393231$~~~ \\
SMC X-$1$ & $1.29$ & $9.13$ & $0.00461632$ & $\pm 1.90917$ & $\pm 0.0353565$ \\
SAX J $1808.4-3658$ & $0.9 $ & $7.951$ & $0.00452972$ & $\pm 2.04034$ & $\pm 0.0363387$ \\
$4$U$1608-52$ & $1.74$ & $9.3$ & $0.00673108$ & $\pm 1.67344$ & $\pm 0.0399421$ \\
PSR J $1903+327$ & $1.667$ & $9.438$ & $0.00597525$ & $\pm 1.73016$ & $\pm 0.038241$ \\
Vela X-$1$ & $1.77$ & $9.56$ & $0.00626551$ & $\pm 1.68415$ & $\pm 0.0386528$ \\
$4$U$1538-52$ & $0.87$ & $7.866$ & $0.00449277$ & $\pm 2.05201$ & $\pm 0.0363086$ \\
\hline
\end{tabular}
\caption{\label{table1}Values of model parameters for different compact objects.}
\end{table}
\section{\label{sec6}Analysis of the features of the model}
This section contains the inspection of various properties of the interior of the stellar structure. Significant features such as regularity, causality and stability criteria are discussed using numerical calculations and graphs. Our discussion revolves around the pulsars $4$U$1820-30$ (Mass $= 1.58~M_\odot$, radius $= 9.1$ km \cite{GRDDD}), SMC X-$1$ (Mass $= 1.29~M_\odot$, radius $= 9.13$ km \cite{RCKSR}), SAX J $1808.4-3658$ (Mass $= 0.9~M_\odot$, radius $= 7.951$ km \cite{elebert}), $4$U$1608-52$ (Mass $= 1.74~M_\odot$, radius $= 9.3$ km \cite{GOCW}), PSR J$1903+327$ (Mass $= 1.667~M_\odot$, radius $= 9.438$ km \cite{freire}), Vela X-$1$ (Mass $= 1.77~M_\odot$, radius $= 9.56$ km \cite{rawl}) and $4$U$1538-52$ (Mass $= 0.87~M_\odot$, radius $= 7.866$ km \cite{rawl}).
\subsection{Regularity condition}
To be a physically viable model of an anisotropic compact star, the model ought to satisfy some regularity conditions throughout the interior of the structure \cite{DL,HS,leibovitz,PMP, pant}.\\
(i) The spacetime and hence the solutions need to be free from any singularity i.e. the energy density $\rho$ and pressures $(p_r,~p_t)$ should be finite and positive throughout the star.
Also, $e^{\lambda(r)}$ and $e^{\nu(r)}$ should be nonzero and finite. Here $e^{\lambda(r)} \big|_{r=0} = 1$ and $e^{\nu(r)} \big|_{r=0} = \left(C - \frac{\sqrt{6}\,D}{\sqrt{A}}\right)^2$, i.e. both are nonzero and finite at the center. Fig.~\ref{figmp} also shows that the metric potentials are positive throughout the stellar structure.\\
(ii) The energy density and pressures must be maximum at the center and monotonically decreasing towards the boundary of the star. Fig.~\ref{figpressure} depicts that both pressures are monotonically decreasing functions of $r$ with a maximum value at the center, and that the radial pressure vanishes at the boundary for each of the stars. Moreover, from Fig.~\ref{figda} it can be seen that the energy density is monotonically decreasing and attains its maximum at the center of the compact model. Analytically, $\frac{d\rho}{dr} \big|_{r =0} = 0$ and $\frac{d^2\rho}{dr^2} \big|_{r =0} < 0$. Fig.~\ref{figda} also shows the positive nature of the anisotropy inside the stellar interior, an important feature for a stable compact model as suggested by Gokhroo and Mehra \cite{GM}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{radialpr.eps}
\hfill
\includegraphics[width=.49\textwidth]{transpt.eps}
\caption{\label{figpressure} Radial (left) and Transverse (right) pressures for different compact stars.}
\end{figure}
Central density, central radial and transverse pressures are given as,
\begin{eqnarray}
\rho(0) &=& \frac{9 A}{2} ,\\
p_r(0) = p_t(0)&=&-\frac{A \left(5 \sqrt{3} D - 3 C \sqrt{\frac{A}{2}}\right)}{2 \left( \sqrt{3} D - C \sqrt{\frac{A}{2}} \right)}.
\end{eqnarray}
Since $A$ is positive, the central density is always positive. Also, the equality of the two pressures at the center indicates the absence of anisotropy at $r = 0$.
To configure a stable model it is required to satisfy Zeldovich's condition for pressure and density, which states that $\frac{p_r}{\rho}$ must be $\leq 1$ at the center \cite{ZN}. Therefore,
\begin{eqnarray}
-\frac{5 \sqrt{3} D - 3 C \sqrt{\frac{A}{2}}}{9 \left( \sqrt{3} D - C \sqrt{\frac{A}{2}} \right)} \leq 1,\\
\text{or,}~~ \frac{D}{C} \geq \frac{\sqrt{6 A}}{7}. \label{dc1}
\end{eqnarray}
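For the tabulated parameters this bound is comfortably satisfied; a minimal Python check (4U$1820-30$ values from Table~\ref{table1}):

```python
import math

# 4U 1820-30 parameters from Table 1
A, C, D = 0.00626159, 1.74607, 0.0393231

s0 = math.sqrt(A / 2.0)   # value of sqrt(A / (2 - A r^2)) at r = 0
rho_c = 4.5 * A           # central density rho(0) = 9A/2
# central pressure p_r(0) = p_t(0)
p_c = -A * (5.0 * math.sqrt(3.0) * D - 3.0 * C * s0) / (
    2.0 * (math.sqrt(3.0) * D - C * s0))
zeldovich_ratio = p_c / rho_c  # must lie in (0, 1]
```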
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{density.eps}
\hfill
\includegraphics[width=.49\textwidth]{aniso.eps}
\caption{\label{figda} Behavior of energy density (left) and anisotropy (right) with respect to the radial coordinate $r$ for various compact stars.}
\end{figure}
Again the gradient of density and pressures can be expressed as,
\begin{eqnarray}
\frac{d\rho}{dr} &=& -\frac{3 A^2 r (5 + A r^2)}{(1 + A r^2)^3},\\
\frac{dp_r}{dr} &=&\frac{A^2 \left( \sqrt{\frac{A r^2}{2 - A r^2}} \left(6 A C^2 - 3 A^2 C^2 r + 15 D^2(2 - A r^2) \right)-\sqrt{3} A C D r (17 + 7 A r^2) \right) }{\sqrt{\frac{A }{2 - A r^2}}(2 - A r^2)^2 (1 + A r^2)^2 \left( \sqrt{3} D - C \sqrt{\frac{A }{2 - A r^2}} \right)^2}, \\
\frac{dp_t}{dr}&=&\frac{A^2 \left(\sqrt{\frac{A r^2}{2 - A r^2}} \left(12 A C^2 (2 - A r^2)+ 6 D^2 (2 - A r^2)(9 + A r^2) \right) \right)}{2 \sqrt{\frac{A}{2 - A r^2}}(2 - A r^2)^2 (1 + A r^2)^3 \left(\sqrt{3}D - C \sqrt{\frac{A}{2 - A r^2}}\right)^2} \nonumber \\
&+&\frac{\sqrt{3} A^3 C D r (A^2 r^4 + 23 A r^2 -62)}{2 \sqrt{\frac{A }{2 - A r^2}}(2 - A r^2)^2 (1 + A r^2)^3 \left(\sqrt{3}D - C \sqrt{\frac{A }{2 - A r^2}}\right)^2}.
\end{eqnarray}
For a stable compact model, $\frac{dp_r}{dr} \big|_{r =0} = \frac{dp_t}{dr} \big|_{r =0} = 0$, $\frac{d^2 p_r}{dr^2} \big|_{r =0} < 0$ and $\frac{d^2 p_t}{dr^2} \big|_{r =0} < 0$ need to be satisfied inside the star, i.e. the gradients of the density and pressures are negative within $0 < r < R$. The negative nature of these gradients is shown graphically in Fig.~\ref{figgrad}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{gradient.eps}
\\
\end{tabular}
\end{center}
\caption{Energy density gradient, pressures gradient for different compact stars.} \label{figgrad}
\end{figure}
Also, $p_r(0) = p_t(0) \geq 0$ gives the relation
\begin{equation}
\frac{\sqrt{3A}}{5\sqrt{2}} \leq \frac{D}{C} \leq \sqrt{\frac{A}{6}}.
\label{dc2}
\end{equation}
Combining Eqs.~(\ref{dc1}) and (\ref{dc2}) we get the bounds on the model parameters as
\begin{equation}
\frac{\sqrt{6A}}{7} \leq \frac{D}{C} \leq \sqrt{\frac{A}{6}}. \label{dc}
\end{equation}
\subsection{Kretschmann Scalar}
In general relativity, curvature scalars are used to look for singularities in the metric. The simplest is the Ricci scalar, but since the Ricci scalar vanishes everywhere for a vacuum solution, the Kretschmann scalar is used to detect any physical singularity present in the spacetime. The approach is quite straightforward. For any line element,
\begin{equation}
ds^2= e^{2\mu}dt^2 - e^{2\kappa}dr^2 -
e^{2\zeta} (d\theta^2 + \sin^2 \theta d\phi^2),
\end{equation}
Kretschmann Scalar is calculated as,
\begin{equation}
K = 4 K_1^2 + 8K_2^2 + 8 K_3^2 + 4 K_4^2,
\end{equation}
where,
\begin{eqnarray}
K_1 &=& e^{-(\mu +\kappa)}\frac{d}{dr}\left(\frac{d\mu}{dr}e^{\mu-\kappa }\right), \nonumber \\ \nonumber
K_2 &=& e^{-2\kappa}\frac{d\zeta}{dr}\frac{d\mu}{dr}, \\ \nonumber
K_3 &=& e^{-(\kappa+\zeta)} \frac{d}{dr} \left(e^{\zeta-\kappa} \frac{d\zeta}{dr}\right), \\ \nonumber
K_4 &=& -e^{-2\zeta}+e^{-2\kappa}\left(\frac{d\zeta}{dr}\right)^2. \nonumber
\end{eqnarray}
Computing all the components we have obtained Kretschmann Scalar for various compact objects and it is shown graphically in Fig.~\ref{figks}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{ks.eps}
\\
\end{tabular}
\end{center}
\caption{Kretschmann Scalar for different compact stars.} \label{figks}
\end{figure}
The finiteness of the Kretschmann scalar throughout the interior, in particular at $r=0$, shows that there is no curvature singularity at the center of the compact model.
\subsection{The Tolman-Oppenheimer-Volkoff or TOV Equation}
To examine the stability of the model it is important to check its equilibrium condition through the TOV equation. This equation, given by Tolman \cite{tolman} and Oppenheimer and Volkoff \cite{OV}, describes the internal structure of a spherically symmetric static compact object in equilibrium in the presence of anisotropy. The generalized TOV equation can be expressed as \cite{varela,ponce},
\begin{equation}
-{1 \over 2}\left(\rho(r)+p_r(r)\right) {d\nu \over dr}-{dp_r(r) \over dr}+{2(p_t-p_r) \over r}=0.\nonumber
\end{equation}
It can also be written as,
\begin{eqnarray}
-{M_g\left(\rho(r)+p_r(r)\right) \over r^2}~e^{(\lambda-\nu)/2}-{dp_r(r) \over dr}+{2\Delta(r) \over r}=0,\nonumber
\\\label{tov1}
\end{eqnarray}
where $M_g(r)$ is the effective gravitational mass inside a sphere of radius `$r$'; it can be derived using the Tolman-Whittaker mass formula and is given by,
\begin{equation}
M_g(r)={1 \over 2}r^2 e^{\frac{\nu-\lambda}{2}} {d\nu \over dr}.
\label{tov2}
\end{equation}
The TOV equation can be expressed in a simple form describing the equilibrium condition by defining the gravitational force ($F_g$), the hydrostatic force ($F_h$) and the anisotropic force ($F_a$).
Thus,
\begin{equation}
F_g(r) + F_h(r) + F_a(r) =0,
\label{force1}
\end{equation}
where,
\begin{eqnarray}
\text{gravitational force},~ F_g(r)& =& -\frac{\nu'\left(\rho(r)+p_r(r)\right)}{2},\nonumber \\
\text{hydrostatic force},~ F_h(r)& =& -{dp_r(r) \over dr},\nonumber \\
\text{anisotropic force},~ F_a(r) &=& {2\Delta(r) \over r}.\label{force2}
\end{eqnarray}
For the prescribed model the expressions for these forces become,
\begin{eqnarray}
F_g(r) &=& \frac{A^3 D r \left( 3 C \sqrt{\frac{A}{2 -A r^2}} - \sqrt{3}D (2 - A r^2) \right)}{\sqrt{\frac{A (2 - A r^2)}{3}} (1 + A r^2)^2 \left(\sqrt{3}D - C \sqrt{\frac{A}{2-A r^2}}\right)\left( A C - \sqrt{3} D \sqrt{A(2-A r^2)}\right)}, \nonumber \\
F_h(r) &=& \frac{A^2 r \left(\sqrt{3}A C D (17-7A r^2)- 3 A C^2 \sqrt{\frac{A}{2-A r^2}} (2 - A r^2) - 15 D^2 \sqrt{A}(2-A r^2)^{3 \over 2} \right)}{\sqrt{A} (2 -A r^2)^{3 \over 2} (1+ A r^2)^2 \left(\sqrt{3}D- C\sqrt{\frac{A}{2-A r^2}}\right)^2}, \nonumber \\
F_a(r) &=& \frac{A^2 r (4\sqrt{3} D - 3 C \sqrt{\frac{A}{2-A r^2}})}{(1+A r^2)^2(\sqrt{3}D - C \sqrt{\frac{A}{2 - A r^2}})}. \label{force3}
\end{eqnarray}
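That the three forces of Eq.~(\ref{force2}) cancel can be confirmed numerically from the closed-form profiles themselves, without re-deriving Eq.~(\ref{force3}). A sketch in Python, with radial derivatives taken by central differences and parameters again from the 4U$1820-30$ row of Table~\ref{table1}:

```python
import math

A, C, D = 0.00626159, 1.74607, 0.0393231  # 4U 1820-30 (Table 1)

def s(r):
    return math.sqrt(A / (2.0 - A * r * r))

def rho(r):
    return 3.0 * A * (3.0 + A * r * r) / (2.0 * (1.0 + A * r * r) ** 2)

def p_r(r):
    return A * (3.0 * C * s(r) - 5.0 * math.sqrt(3.0) * D) / (
        2.0 * (1.0 + A * r * r) * (math.sqrt(3.0) * D - C * s(r)))

def p_t(r):
    return A * (3.0 * C * s(r) - math.sqrt(3.0) * D * (5.0 + A * r * r)) / (
        2.0 * (1.0 + A * r * r) ** 2 * (math.sqrt(3.0) * D - C * s(r)))

def nu(r):
    return 2.0 * math.log(C - D * math.sqrt(3.0 * (2.0 - A * r * r)) / math.sqrt(A))

def d(f, r, h=1e-4):
    # central finite difference
    return (f(r + h) - f(r - h)) / (2.0 * h)

def tov_residual(r):
    F_g = -0.5 * d(nu, r) * (rho(r) + p_r(r))  # gravitational force
    F_h = -d(p_r, r)                           # hydrostatic force
    F_a = 2.0 * (p_t(r) - p_r(r)) / r          # anisotropic force
    return F_g + F_h + F_a

residuals = [abs(tov_residual(r)) for r in (1.0, 3.0, 5.0, 8.0)]
```

The residual is zero to numerical precision at every sampled radius, i.e. the profiles satisfy the equilibrium condition (\ref{force1}) identically.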
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{force.eps}
\\
\end{tabular}
\end{center}
\caption{Different forces for different compact stars.} \label{figforce}
\end{figure}
The forces of Eq.~(\ref{force3}) are examined graphically in Fig.~\ref{figforce}. It can clearly be seen that the negative gravitational force is balanced by the combination of the hydrostatic and anisotropic forces, keeping the model in equilibrium.
\subsection{Energy condition}
The acceptability of our model depends on the fulfillment of some energy conditions, namely the null energy condition (NEC), weak energy condition (WEC), strong energy condition (SEC) and dominant energy condition (DEC). These energy conditions are inequalities on the stress-energy tensor and are defined as,
\begin{eqnarray}
NEC_r &:& \rho(r) + p_r(r)\geq 0,~~~~ NEC_t : \rho(r) + p_t(r)\geq 0.\nonumber
\\
WEC_r &:& \rho(r)\geq 0,~~ \rho(r) + p_r(r)\geq 0,~~~
WEC_t : \rho(r)\geq 0,~~ \rho(r) + p_t(r)\geq 0.\nonumber
\\
SEC &:& \rho(r) + p_r(r)+2p_t(r)\geq 0. \nonumber \\
DEC &:& \rho(r) \geq p_r(r),~ p_t(r).
\label{energy}
\end{eqnarray}
Physically, the NEC states that an observer traversing a null curve will measure the ambient energy density to be non-negative. The WEC implies that the energy density measured by an observer traversing a timelike curve is non-negative. The SEC implies that the trace of the tidal tensor measured by the observer is always non-negative \cite{MESTD}. The DEC indicates that to any observer the local energy density appears non-negative and the local energy flow vector is non-spacelike \cite{HE}.\\
We have examined the energy conditions for our model graphically in Fig.~\ref{figenergy}, by plotting the LHS (left hand side) of the above inequalities.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{energy.eps}
\\
\end{tabular}
\end{center}
\caption{Behavior of different energy conditions on different compact stars. The nature of $\rho - p_r - 2 p_t$ is also plotted.}\label{figenergy}
\end{figure}
Also, for a distinguishing configuration we can derive some limitations on our model parameters from the inequalities in Eq.~(\ref{energy}). Specifically, we obtain the following relations at the center ($r = 0$) of the stellar structure,
\begin{eqnarray}
NEC_r &:& \rho(0) + p_r(0)\geq 0,~~~~ NEC_t : \rho(0) + p_t(0)\geq 0.\nonumber
\\
WEC_r &:& \rho(0)\geq 0,~~ \rho(0) + p_r(0)\geq 0.\nonumber
\\
WEC_t &:& \rho(0)\geq 0,~~ \rho(0) + p_t(0)\geq 0.\nonumber
\\
SEC &:& \rho(0) + p_r(0) + 2p_t(0)\geq 0.\nonumber \\
DEC &:& \rho(0)\geq p_r(0),~p_t(0) \nonumber \\
&\text{or}& \frac{9A}{2} \geq \frac{A\left(5\sqrt{3}D- C \sqrt{A \over 2}\right)}{2\left(\sqrt{3}D - C \sqrt{A \over 2} \right) } \nonumber \\
&\text{or}& {D \over C} \geq \sqrt{2 A \over 3}.
\label{energy2}
\end{eqnarray}
\subsection{Causality condition}
To study the stability of an anisotropic stellar fluid, L. Herrera \cite{herrera92} proposed the \textit{cracking} (or overturning) method, which requires that the sound speeds (radial and transverse) never exceed the speed of light inside the star, i.e. $v^2 = {dp \over d\rho} < 1$ should hold inside the star, taking the velocity of light $c = 1$. Also, Le Chatelier's principle requires the matter of the star to satisfy $\frac{dp}{d\rho} \geq 0$ for a stable configuration \cite{NKG}. The sound speeds inside the compact star are expressed as,
\begin{equation}
v_r(r)=\sqrt{{dp_r(r) \over d\rho(r)}},~~~v_t(r)=\sqrt{{dp_t(r) \over d\rho(r)}}.
\end{equation}
Combining the above conditions, the causality condition becomes $0 \leq v_r(r),~v_t(r) < 1$. Fig.~\ref{figvsrvst} supports the fulfillment of the causality condition for the prescribed model.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{radvsr.eps}
\hfill
\includegraphics[width=.49\textwidth]{transvst.eps}
\caption{\label{figvsrvst} Radial (left) and Transverse (right) sound speeds for different compact stars.}
\end{figure}
Now, using the concept of cracking, Abreu et al. \cite{abreu} provided stability conditions in terms of the stability factor $\left(\{v_t(r)\}^2 - \{v_r(r)\}^2\right)$ for an anisotropic fluid model. The conditions are: \textbf{(i)} the region is potentially stable if $ -1 < \{v_t(r)\}^2 - \{v_r(r)\}^2 \leq 0$ and \textbf{(ii)} the region is potentially unstable if $ 0 < \{v_t(r)\}^2 - \{v_r(r)\}^2 < 1$.\\
The expressions for velocity of sound speeds are given below,
\begin{eqnarray}
v_{r}^2 &=& - \frac{\sqrt{\frac{A}{2 - A r^2}}(2 - A r^2) (1 + A r^2)(3 A C^2 + 30 D^2 - 15 A D^2 r^2) }{3 \sqrt{\frac{A}{2 - A r^2}} (2 - A r^2)^2 (5 + A r^2) \left(\sqrt{3}D - C \sqrt{\frac{A}{2 - A r^2}}\right)^2} \nonumber \\
&+& \frac{\sqrt{3} A C D (1 + A r^2)(17 - 7 A r^2) }{3 \sqrt{\frac{A}{2 - A r^2}} (2 - A r^2)^2 (5 + A r^2) (\sqrt{3}D - C \sqrt{\frac{A}{2 - A r^2}})^2} , \label{eqvsr} \\
v_{t}^2 &=& - \frac{\sqrt{\frac{A}{2 - A r^2}}\left( 12 A C^2 (2 - A r^2) + 6 D^2 (2 - A r^2)^2 (9 + A r^2) \right) }{6 \sqrt{\frac{A}{2 - A r^2}} (2 - A r^2)^2 (5 + A r^2) (\sqrt{3} D - C \sqrt{\frac{A}{2 - A r^2}})^2} \nonumber \\
&-& \frac{\sqrt{3} A C D (A^2 r^4 + 23 A r^2 -62)}{6 \sqrt{\frac{A}{2 - A r^2}} (2 - A r^2)^2(5 + A r^2) (\sqrt{3} D - C \sqrt{\frac{A}{2 - A r^2}})^2}. \label{eqvst}
\end{eqnarray}
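These bounds can be spot-checked numerically from the density and pressure profiles via $v^2 = (dp/dr)/(d\rho/dr)$; a minimal Python sketch (4U$1820-30$ parameters from Table~\ref{table1}):

```python
import math

A, C, D = 0.00626159, 1.74607, 0.0393231  # 4U 1820-30 (Table 1)

def s(r):
    return math.sqrt(A / (2.0 - A * r * r))

def rho(r):
    return 3.0 * A * (3.0 + A * r * r) / (2.0 * (1.0 + A * r * r) ** 2)

def p_r(r):
    return A * (3.0 * C * s(r) - 5.0 * math.sqrt(3.0) * D) / (
        2.0 * (1.0 + A * r * r) * (math.sqrt(3.0) * D - C * s(r)))

def p_t(r):
    return A * (3.0 * C * s(r) - math.sqrt(3.0) * D * (5.0 + A * r * r)) / (
        2.0 * (1.0 + A * r * r) ** 2 * (math.sqrt(3.0) * D - C * s(r)))

def d(f, r, h=1e-4):
    # central finite difference
    return (f(r + h) - f(r - h)) / (2.0 * h)

def v2(p, r):
    # squared sound speed dp/drho = (dp/dr) / (drho/dr)
    return d(p, r) / d(rho, r)

radii = [0.5, 2.0, 4.0, 6.0, 8.0]
vr2 = [v2(p_r, r) for r in radii]
vt2 = [v2(p_t, r) for r in radii]
causal = all(0.0 <= v < 1.0 for v in vr2 + vt2)
stability_factor = [t - r_ for r_, t in zip(vr2, vt2)]
stable = all(-1.0 < f <= 0.0 for f in stability_factor)
```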
Clearly the conditions $ |{v_r}^2-{v_t}^2| <~1$ and $ -1 < \{v_t(r)\}^2 - \{v_r(r)\}^2 \leq 0$ are satisfied as depicted in Fig.~\ref{figsound}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{sound.eps}
\hfill
\includegraphics[width=.49\textwidth]{causa.eps}
\caption{\label{figsound} Variation of the absolute difference (left) and of the difference (right) of the sound speeds with the radial coordinate for different compact stars.}
\end{figure}
\subsection{Adiabatic index }
The stability of an anisotropic compact star also depends on the adiabatic index, which is essentially the ratio of the specific heat at constant pressure to the specific heat at constant volume. The adiabatic index determines the stability and the stiffness of the equation of state and is defined as,
\begin{eqnarray}
\Gamma(r) = {\rho(r)+p(r) \over p(r)}~{dp(r) \over d\rho(r)}.
\end{eqnarray}
In the Newtonian limit, a stellar configuration maintains its stability against adiabatic gravitational collapse if $\Gamma (r) > 4/3$ \cite{HH}, and the collapse becomes catastrophic if $\Gamma(r) < 4/3$ \cite{bondi}. Also, Chan et al. \cite{chan} have suggested that this condition changes depending on the nature of anisotropy for a relativistic fluid sphere. Additionally, Knutsen \cite{knutsen} showed that the adiabatic index $\Gamma$ exceeds $1$ if the ratio of pressure to density decreases monotonically outwards.\\
For our solution, the adiabatic indices $\Gamma_r(r)$ and $\Gamma_t(r)$ are greater than $4/3$ throughout the interior of the compact star, as evident from Fig.~\ref{figai}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.5cm]{aindex.eps}
\\
\end{tabular}
\end{center}
\caption{Adiabatic indices for different compact stars.} \label{figai}
\end{figure}
\subsection{Harrison-Zeldovich-Novikov criterion}
Since only compact models in stable equilibrium are of astrophysical interest, any acceptable model should satisfy the static stability criterion. Assessing the stability of a compact star with respect to radial perturbations requires calculating the eigenfrequency of the fundamental mode \cite{HPY} of radial pulsation without any nodes, as described by Chandrasekhar \cite{chandrasekhar}. This stability analysis was later simplified by Harrison-Zeldovich-Novikov \cite{harrison,ZN}. As per their suggestion, for a compact object to be stable the mass should increase with increasing central density $\rho(0)$, i.e. $\frac{d M(\rho(0))}{d \rho(0)}~>~0$. The Harrison-Zeldovich-Novikov criterion is a necessary condition, not a sufficient one. We write the mass as a function of the central density as follows,
\begin{eqnarray}
M(\rho(0))&=& \frac{3 R^3 \rho(0)}{2 \left(2 \rho(0) R^2 + 9\right)}, \nonumber \\
\frac{d M(\rho(0))}{d \rho(0)} &=& \frac{27 R^3}{2 \left( 2 \rho(0) R^2 + 9\right)^2}. \label{eqhzn}
\end{eqnarray}
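Eq.~(\ref{eqhzn}) follows from eliminating $A$ between $\rho(0) = 9A/2$ and the mass function evaluated at $r=R$; both routes, and the monotonicity, can be checked numerically. A sketch in Python (the radius value is illustrative):

```python
# Mass as a function of the central density, Eq. (eqhzn); R in km (illustrative)
R = 9.1

def mass(rho_c):
    return 3.0 * R ** 3 * rho_c / (2.0 * (2.0 * rho_c * R ** 2 + 9.0))

def mass_from_A(A):
    # direct route: m(R) = 3 A R^3 / (4 (1 + A R^2)) with rho(0) = 9A/2
    return 3.0 * A * R ** 3 / (4.0 * (1.0 + A * R ** 2))

densities = [0.001 * k for k in range(1, 30)]
masses = [mass(rc) for rc in densities]
# the two routes agree, and M grows monotonically with the central density
consistent = all(abs(mass(4.5 * a) - mass_from_A(a)) < 1e-12
                 for a in (0.002, 0.006, 0.01))
monotone = all(m2 > m1 for m1, m2 in zip(masses, masses[1:]))
```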
The fulfillment of the Harrison-Zeldovich-Novikov condition for several stars is shown graphically in Fig.~\ref{fighzn}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{hzn.eps}
\hfill
\includegraphics[width=.54\textwidth]{dhzn.eps}
\caption{\label{fighzn} Variation of $M$ and ${dM \over d\rho(0)}$ with respect to the central density $\rho(0)$ for different compact stars.}
\end{figure}
\subsection{Buchdahl Limit}
The mass function of the proposed model is defined in Eq.~(\ref{massf}) as $m(r)~=~\frac{3 A r^3}{4 (1 +A r^2)}$, which satisfies $\lim_{r \to 0} m(r) = 0$, implying the regularity of the mass function at the center of the star. Fig.~\ref{figmass} graphically depicts the mass functions of various compact objects.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=8.8cm]{massfn.eps}
\end{tabular}
\end{center}
\caption{Mass function of different compact stars. The mass function is a monotonically increasing function of $r$. }\label{figmass}
\end{figure}
For spherically symmetric configuration, the ratio of mass to the radius of a compact object is supposed to fall within the limit ${2M \over R}~<~{8 \over 9}$ (considering $8\pi G~=~c~=~1$) as suggested by Buchdahl \cite{buchdahl59}. This condition, named after Buchdahl, is clearly satisfied for our model as shown in Table~\ref{table2}.
\subsection{Mass-radius relationship}
For dynamical stability against gravitational collapse into a black hole, the maximum mass of any model needs to be considered in order to separate compact stars from black holes. In fact, an observed compact object can be identified as a black hole if its mass exceeds the allowable maximum mass for a stable compact model \cite{ST,HBPP}. To study the mass-radius relation and to calculate the maximum mass, we have plotted the $(M-R)$ graph in Fig.~\ref{figmr}, considering the surface density $\rho(R)~=~9.5 \times 10^{14}~gm~cm^{-3}$. This surface density is roughly similar to those chosen by other researchers \cite{SM,ThM,RPSD} to study the mass-radius relationship of a compact model.\\
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=7cm]{massradius.eps}
\\
\end{tabular}
\end{center}
\caption{$(M-R)$ plot for the surface density $\rho(R)~=~9.5 \times 10^{14}~gm~cm^{-3}$. The solid circle denotes the maximum allowable mass of the model.}\label{figmr}
\end{figure}
Fig.~\ref{figmr} depicts the $(M-R)$ graph for the prescribed model. The maximum mass for the model is calculated to be $4.632~ M_\odot$ with radius $9.254$ km. Though this maximum mass exceeds the Rhoades and Ruffini limit ($\approx 3.2~ M_\odot$) \cite{RR} for a neutron star, it remains within the range prescribed for uniform density spheres in general relativity ($5.2~ M_\odot$) as suggested by Shapiro and Teukolsky \cite{ST}.\\
We have also predicted masses and radii for several compact stars, as given in Table~\ref{table2}. We have computed the masses and radii from the EoS with EoS parameter $\omega = 0.003$, and it can clearly be seen that the predicted masses and radii are close to the observed values.
\begin{table}[!htp]
\centering
\setlength{\tabcolsep}{.1\tabcolsep}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Pulsar Name & Observed & Observed & Predicted & Predicted & Compactness \\
& Mass ($M_{\odot}$) & Radius (Km) & Mass ($M_{\odot}$) & Radius (Km) & Factor \\ \hline
$4$U$1820-30$ & $1.58$ & $9.1$ & $1.5508$ & $9.0272$ & $0.1717$ \\
SMC X-$1$ & $1.29$ & $9.13$ & $1.2513$ & $8.9873$ & $0.1392$ \\
SAX J $1808.4-3658$ & $0.9 $ & $7.951$ & $0.8460$ & $8.9168$ & $0.0949$ \\
$4$U$1608-52$ & $1.74$ & $9.3$ & $1.7242$ & $7.9172$ & $0.2178$ \\
PSR J $1903+327$ & $1.667$ & $9.438$ & $1.6387$ & $9.2310$ & $0.1775$ \\
Vela X-$1$ & $1.77$ & $9.56$ & $1.7435$ & $9.3755$ & $0.1859$ \\
$4$U$1538-52$ & $0.87$ & $7.866$ & $0.8273$ & $7.7138$ & $0.1072$ \\ \hline
\end{tabular}
\caption{\label{table2}Observed and predicted mass, radius and compactness factor for different compact objects.}
\end{table}
\subsection{Compactness and surface redshifts}
The dimensionless ratio $\frac{m(r)}{r}$ is known as the compactification factor $u(r)$ of a compact star. The expressions for the compactness and surface redshift are given as follows,
\begin{eqnarray}
u(r) &=& \frac{m(r)}{r}= \frac{3 A r^2}{4(1 + A r^2)}, \nonumber \\
z(r) &=& \frac{1 - \left( 1 - 2 u \right)^{ 1 \over 2}}{\left(1 - 2 u \right)^{1 \over 2}} = \sqrt{\frac{2(1 + A r^2)}{2 - A r^2}}-1. \label{eqred}
\end{eqnarray}
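As an illustrative numerical check (not part of the original analysis; the value of $A$ below is an arbitrary illustration, not a fitted parameter from Table~\ref{table1}), the two forms of $z(r)$ in Eq.~(\ref{eqred}) can be verified to agree:

```python
import math

def compactness(A, r):
    """u(r) = m(r)/r = 3 A r^2 / (4 (1 + A r^2)), as in Eq. above."""
    return 3 * A * r**2 / (4 * (1 + A * r**2))

def surface_redshift(A, r):
    """z(r) from the compactness: z = (1 - 2u)^(-1/2) - 1."""
    u = compactness(A, r)
    return 1 / math.sqrt(1 - 2 * u) - 1

def surface_redshift_closed(A, r):
    """Equivalent closed form: z = sqrt(2(1 + A r^2)/(2 - A r^2)) - 1."""
    return math.sqrt(2 * (1 + A * r**2) / (2 - A * r**2)) - 1

# Illustrative values only (A in km^-2, r in km)
A, r = 0.01, 5.0
assert abs(surface_redshift(A, r) - surface_redshift_closed(A, r)) < 1e-12
```

The agreement follows from $1 - 2u = (2 - A r^2)/(2(1 + A r^2))$, which is how the closed form in Eq.~(\ref{eqred}) is obtained.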
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{compactness.eps}
\hfill
\includegraphics[width=.49\textwidth]{redz.eps}
\caption{\label{figcompred} Compactness (left) and surface redshift (right) with the radial coordinate $r$ for different compact stars.}
\end{figure}
The compactness for our model is depicted in Fig.~\ref{figcompred}, which indicates the increasing nature of the compactification factor with respect to the radial coordinate `$r$'. From Table~\ref{table2} it can be seen that the model allows compactness within the range $({1 \over 4}, {1 \over 2})$ as prescribed in \cite{JT}.
The surface redshift $z(r)$ is depicted graphically in Fig.~\ref{figcompred}, which shows its increasing nature with the radial coordinate. Studies by several authors allow us to specify an upper bound on the surface redshift. In the absence of a cosmological constant, $z(r) \leq 2$ holds for an isotropic star, as proposed by Barraco and Hamity \cite{BHamity}, whereas the presence of a cosmological constant pushes the upper bound for an anisotropic star a bit higher, to $z(r) \le 5$ \cite{bohmer}. Though the maximum acceptable limit for the surface redshift of a compact star is $5.211$ \cite{ivanov}, the model presented in this paper satisfies the range $z(r) \leq 1$ predicted by Hewish et al. \cite{HBPSC}, as can be seen in Table~\ref{table3}.
\subsection{Equation of State}
One of the significant features of a compact star is the description of its equation of state (EoS), i.e., the relation between pressure and energy density for a barotropic EoS, which in turn determines the mass-radius relation. The barotropic equation of state can be linear, quadratic, polytropic, or of some other form, and clearly different EoSs lead to different $(M-R)$ relations. Several authors have suggested that the EoS can be estimated in the form $p~=~p(\rho)$, i.e., the pressure $p$ can be written as a linear function of the energy density $\rho$ at high densities, to elucidate the structural properties of a compact object \cite{FO,HZ,PBP,LPMY,DBDRS,GBZGRDD,zdunik,HC,MMKP,MBJKPP}.
Our observations closely match those of \cite{GBZGRDD,MMKP,MBJKPP}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9.4cm]{eos.eps}
\\
\end{tabular}
\end{center}
\caption{Variation of radial pressure with density for different compact stars corresponding to the numerical values of the constants given in Table~\ref{table1}.}\label{figeos}
\end{figure}
For a stable configuration, the equation-of-state parameters, defined as $\omega_r(r) = \frac{p_r}{\rho}$ and $\omega_t(r) = \frac{p_t}{\rho}$, should lie in $(0,1)$ \cite{RRJC}; otherwise the configuration is termed exotic. The radial EoS for various compact stars is shown graphically in Fig.~\ref{figeos}, exhibiting a linear relationship.\\
We have also calculated the best-fit EoS for each star using the least-squares method \cite{GBZGRDD,zdunik}. Fig.~\ref{figfit} shows the best fit for each EoS graphically. The best-fitted relations for the stars are approximated as,
\begin{eqnarray}
\text{4U1820-30} &:& p_r ~=~ 0.362175 ~\rho - 157.655 \nonumber \\
\text{SMC X-1} &:& p_r ~=~ 0.27393 ~\rho - 101.376 \nonumber \\
\text{SAX J1808.4-3658} &:& p_r ~=~ 0.223828 ~\rho - 91.344 \nonumber \\
\text{4U1608-52} &:& p_r ~=~ 0.416461 ~\rho - 183.463 \nonumber \\
\text{PSR J1903+327} &:& p_r ~=~ 0.37333 ~\rho - 153.083 \nonumber \\
\text{Vela X-1} &:& p_r ~=~ 0.408083 ~\rho - 168.934 \nonumber \\
\text{4U1538-52} &:& p_r ~=~ 0.220122 ~\rho - 90.0515 \nonumber
\end{eqnarray}
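The slope and intercept of fits of this form can be reproduced with the closed-form least-squares formulas; the sketch below uses hypothetical $(\rho, p_r)$ sample points generated from a noise-free line, not the paper's actual data:

```python
def linear_fit(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

# Hypothetical (density, radial pressure) samples lying on a line
rho = [500, 600, 700, 800, 900]
p_r = [0.36 * x - 157.0 for x in rho]   # synthetic, noise-free
a, b = linear_fit(rho, p_r)             # recovers slope 0.36, intercept -157.0
```

On real (noisy) tabulated values the same procedure yields the best-fit coefficients reported above for each star.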
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=9cm]{fit.eps}
\\
\end{tabular}
\end{center}
\caption{Best fit for EoSs for each compact star.}\label{figfit}
\end{figure}
\subsection{Moment of inertia and time period}
The study of the moment of inertia plays a very important role in the modeling of compact objects, as it allows us to test the stiffness of the EoS. The empirical formula for the moment of inertia $I$, which relates a static configuration to a rotating one, as suggested by Bejger and Haensel \cite{BH}, is given by,
\begin{equation}
I = {2 \over 5} (1 + x) M R^2, \label{eqmi}
\end{equation}
where the parameter $x$ is defined as $x = (M/M_\odot)(\mathrm{km}/R)$. Here the formula gives the approximate moment of inertia for a uniformly and slowly rotating configuration at the maximum mass. The behavior of the moment of inertia $I$ with respect to the mass $M$ is depicted in Fig.~\ref{figmi}.
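Eq.~(\ref{eqmi}) is straightforward to evaluate numerically. The sketch below works in geometrized units ($G = c = 1$) and assumes the standard conversion $M_\odot \approx 1.4766$ km, an assumption of this illustration rather than a value quoted in the paper; the sample star is likewise illustrative:

```python
def moment_of_inertia(M_solar, R_km, M_SUN_KM=1.4766):
    """Bejger-Haensel approximation I = (2/5)(1 + x) M R^2,
    with x = (M/M_sun)(km/R); returns I in km^3 (G = c = 1).
    M_SUN_KM is the assumed solar mass in km (geometrized units)."""
    x = M_solar / R_km                 # dimensionless compactness parameter
    M_km = M_solar * M_SUN_KM          # mass converted to km
    return 0.4 * (1 + x) * M_km * R_km**2

# Example: a canonical 1 M_sun star with a 10 km radius
I = moment_of_inertia(1.0, 10.0)       # ~64.97 km^3
```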
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=7cm]{moi.eps}
\\
\end{tabular}
\end{center}
\caption{Moment of inertia with respect to the mass. The solid circle denotes the maximum moment of inertia for the model.}\label{figmi}
\end{figure}
It can be seen that the moment of inertia $I$ increases with mass and attains its maximum value at the mass $4.627~M_\odot$ before declining rapidly. Considering the surface density $\rho(R) = 9.5 \times 10^{14}~\mathrm{g~cm^{-3}}$, we have calculated $I_{max}$ to be $1773.6$ $km^3$. Comparing the masses of the model star in Figs.~\ref{figmr} and \ref{figmi}, it can be seen that the mass corresponding to $I_{max}$ is lower by approximately $0.11$\%. \\
This decline in mass indicates a softening of the EoS without any strong high-density softening due to hyperonization or a phase transition to an exotic state \cite{BBH}.\\
For a non-rotating structure, the minimum rotation period can be estimated as below, provided the EoS respects the bound on the sound speed,
\begin{equation}
\tau \approx 0.82 \sqrt{\frac{M_\odot}{M}} \left(\frac{R}{10~\mathrm{km}} \right)^{3 \over 2}~\mathrm{ms}.
\label{eqtau}
\end{equation}
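Eq.~(\ref{eqtau}) reduces to a one-line computation; the sketch below uses an illustrative canonical star rather than the model's maximum-mass configuration:

```python
import math

def min_rotation_period_ms(M_solar, R_km):
    """tau ~ 0.82 * sqrt(M_sun/M) * (R / 10 km)^(3/2), in milliseconds."""
    return 0.82 * math.sqrt(1.0 / M_solar) * (R_km / 10.0) ** 1.5

# Canonical 1 M_sun star with a 10 km radius
tau = min_rotation_period_ms(1.0, 10.0)   # 0.82 ms
```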
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=7cm]{tao.eps}
\\
\end{tabular}
\end{center}
\caption{Variation of the time period of rotation with mass. The solid circle denotes the maximum mass of the compact model.}\label{figtau}
\end{figure}
Fig.~\ref{figtau} depicts the variation of time period with the mass of the model and the maximum time period is obtained to be $1.577~ms$.
\subsection{Mass-central density relationship}
It is evident that models of cold static compact stars represent a one-parameter family, i.e., they can be labeled by their central density or their central pressure \cite{Haensel}. Fig.~\ref{figmassrho} portrays the profile of mass against the central density, and it can be observed that the central density increases with the mass of the model. Moreover, the maximum mass is found to be $4.461~M_\odot$, corresponding to the central density $32.11 \times 10^{18}~kg/m^3$.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=7cm]{mrho0.eps}
\\
\end{tabular}
\end{center}
\caption{Variation of central density with the mass. }\label{figmassrho}
\end{figure}
\subsection{Radius-central density relationship}
The internal structure of any compact model can be studied from the relationship between its radius and any of the matter variables. One such physical test is to study the variation of the central density with the radius of the model, as this allows us to comprehend the nature of the prescribed model. The radius--central density relationship guides the process of determining the mass of the compact model, and vice versa. We have studied the behavior of the central density with the radius graphically in Fig.~\ref{figrcenden}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{rl}
\includegraphics[width=7cm]{rcenden.eps}
\\
\end{tabular}
\end{center}
\caption{Variation of central density with the radius. Solid circle denotes the maximum mass of the compact model.}\label{figrcenden}
\end{figure}
Here the radius increases with the central density until it reaches the critical radius (the maximum allowable radius), after which it remains almost unchanged as the central density increases. The radius for which the maximum mass is obtained in Fig.~\ref{figmr} corresponds to the central density $2.851 \times 10^{18}~kg~m^{-3}$. Additionally, the maximum mass obtained in Fig.~\ref{figmassrho} corresponds to the central density $1.11 \times 10^{18}~kg~m^{-3}$.
\subsection{Bounds on model parameter}
Bounds on the parameters $A$, $C$, and $D$ are summarized below:
\begin{center}
\begin{tabular}{ccc}\hline
Conditions & At center $(r=0)$ & At surface $(r = R)$ \\ \hline
$e^\lambda(r)>0$ & satisfied & $ -\frac{1}{R^2}< A< \frac{2}{R^2} $ \\
$e^\nu(r)>0 $ & satisfied & satisfied \\
$\rho(r) >0$ & $A>0$ & $A>0$ \\
$p_r(r)>0$ & $\frac{6 D^2}{C^2} < A < \frac{50 D^2}{3 C^2}$ & $\frac{D}{C}= \frac{\sqrt{A}}{5\sqrt{3(2-A R^2)}}$ \\
$p_t(r)>0$ & $\frac{6 D^2}{C^2} < A < \frac{50 D^2}{3C^2}$ & $0<A<\frac{2}{R^2}$,\\
& & $\frac{3 A}{(2-A R^2)(5+A R^2)}<\frac{D^2}{C^2}$\\
& & $<\frac{A}{3(2-A R^2)}$\\
$\Delta(r)\geq0$ & $0$ & $A>0$ \\
$\frac{d\rho}{dr}\leq0$ & $0$ & $A>0$ \\
Zeldovich's Condition & $\frac{D}{C} \geq \frac{\sqrt{6A}}{7}$ & - \\
$SEC(r)>0$ & $\frac{A}{6}>\frac{D^2}{C^2}$ & $\frac{C^2}{D^2} >\frac{3(2-A R^2)}{A}$ \\
Herrera Condition & $\frac{6 D^2}{C^2}<A<\frac{32 D^2}{3 C^2}$ & same \\
\end{tabular}
\end{center}
We have also calculated $\frac{dp_r}{dr}$, $\frac{dp_t}{dr}$, $\frac{dp_r}{d\rho}$, and $\frac{dp_t}{d\rho}$ both at the center and at the surface; combining the above conditions, we get the following relations,
\begin{eqnarray}
0~<~A~<~\frac{32}{25 R^2}, \nonumber \\
\frac{25 D^2}{6 C^2}~<~A~<~\frac{32 D^2}{3 C^2}.
\end{eqnarray}
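The combined bounds can be checked mechanically for a candidate parameter set; the sketch below uses illustrative numbers that are not fitted to any star in Table~\ref{table1}:

```python
def bounds_satisfied(A, D_over_C, R):
    """Check the combined constraints derived above:
    0 < A < 32/(25 R^2)  and  25 D^2/(6 C^2) < A < 32 D^2/(3 C^2)."""
    q = D_over_C ** 2                      # D^2 / C^2
    cond1 = 0 < A < 32 / (25 * R**2)
    cond2 = 25 * q / 6 < A < 32 * q / 3
    return cond1 and cond2

# Illustrative values (R in km, A in km^-2)
assert bounds_satisfied(A=0.01, D_over_C=0.04, R=10.0)       # inside both windows
assert not bounds_satisfied(A=0.02, D_over_C=0.04, R=10.0)   # violates A < 32/(25 R^2)
```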
\subsection{Herrera-Ospino-Di Prisco Generating Functions}
A feasible anisotropic solution of the EFEs can be obtained using the algorithm of Herrera et al. \cite{HOP}. This formalism is essentially an extension of an algorithm proposed by Lake \cite{lake}, and it generates all solutions with the help of generating functions. More specifically, two generating functions, as suggested in \cite{HOP}, describe all static spherically symmetric anisotropic fluid matter distributions, and the algorithm can be expressed as,
\begin{equation}
e^{\lambda(r)} = \frac{Z^2(r) e^{\int\left( \frac{4}{r^2Z(r)}+2 Z(r) \right) dr}}{r^2\left[ F - 2 \int \frac{Z(r)\left(1 + \Pi(r) r^2 \right) e^{\int\left( \frac{4}{r^2 Z(r)} + 2 Z(r) \right)dr}}{r^8} dr \right]},
\label{gf}
\end{equation}
where $F$ is an arbitrary integrating constant and the corresponding generating functions are as follows,
\begin{eqnarray}
Z(r) &=& {\nu' \over 2} + {1 \over r}, \label{gf1} \\
\Pi(r) &=& \left(p_r(r) - p_t(r) \right). \label{gf2}
\end{eqnarray}
Here the generating function $Z(r)$ is related to the geometry of the spacetime and $\Pi(r)$ is related to the matter distribution, as proposed by Rahaman et al. \cite{RSSP19}. Using the Class-I embedding condition in Eq.~(\ref{nu}), the generating functions given in Eqs.~(\ref{gf1})-(\ref{gf2}) take the form,
\begin{eqnarray}
Z(r) &=& {\frac{D~\sqrt{e^{\lambda(r)}-1}}{C + D~\int{\sqrt{e^{\lambda(r)}-1}}dr}} + {1 \over r}, \label{gf3} \\
\Pi(r) &=& \left(p_r(r) - p_t(r) \right). \label{gf4}
\end{eqnarray}
Thus the generating functions to find all exact solutions for our model are given by,
\begin{eqnarray}
Z(r) &=& \frac{\sqrt{3}A D \sqrt{\frac{A r^2}{2 - A r^2}}}{A C - \sqrt{3}D \sqrt{\frac{A}{2 - A r^2}}(2 - A r^2)} + {1 \over r}, \label{gf5} \\
\Pi(r) &=& - \Delta(r), \label{gf6}
\end{eqnarray}
where $\Delta(r)$ is given in Eq.~(\ref{aniso}).
\begin{figure}[!htbp]
\centering
\includegraphics[width=.49\textwidth]{gf1.eps}
\hfill
\includegraphics[width=.49\textwidth]{gf2.eps}
\caption{\label{figgf} Behavior of the generating functions $Z(r)$ (left side) and $\Pi(r)$ (right side) with respect to the radial coordinate $r$ for several compact stars.}
\end{figure}
It can clearly be seen in Fig.~\ref{figgf} that the generating function $Z(r)$, related to the redshift function, is always a positive and decreasing function of `$r$', while the other generating function $\Pi(r)$ is always negative and decreasing in nature.
\section{\label{sec7}Discussions around various compact objects}
To understand our prescribed model, we have calculated the parameters for the compact stars $4$U$1820-30$, SMC X$-1$, SAX J $1808.4-3658$, $4$U$1608-52$, PSR J $1903+327$, Vela X$-1$ and $4$U$1538-52$, as given in Table~\ref{table1}. We have predicted the mass and radius of some compact objects using a fixed EoS parameter in Table~\ref{table2}. Further, we have calculated matter variables such as the density, the radial and transverse sound speeds, and the strong energy condition for each of the compact stars, both at the center and at the surface. The values so obtained are presented in Table~\ref{table3}. Here $|_0$ denotes the value of a matter variable at the center and $|_R$ the same at the boundary of the star. It can clearly be seen that the values of the matter variables at the surface are smaller than the central values, indicating a more compact structure. Additionally, the surface redshifts are provided in Table~\ref{table3}, and the surface redshift of each compact star is within the prescribed range.
\begin{table}[!htbp]
\centering
\setlength{\tabcolsep}{.3\tabcolsep}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}\hline
Pulsar & $\rho|_0$ & $\rho|_R$ & $\frac{dp_r}{d\rho}|_0$ & $\frac{dp_r}{d\rho}|_R$ & $\frac{dp_t}{d\rho}|_0$ & $\frac{dp_t}{d\rho}|_R$ & $SEC|_0$ & $SEC|_R$ & $z|_R$ \\ \hline
$4$U$1820-30$ & $854.09$ & $434.41$ & $0.3589$ & $0.3513$ & $0.3125$ & $0.2478$ & $1308.97$ & $498.19$ & $0.4318$ \\
SMC X-$1$ & $629.73$ & $370.47$ & $0.2683$ & $0.2746$ & $0.2025$ & $0.1795$ & $842.52$ & $412.43$ & $0.3095$ \\
SAX J $1808.4-3658$ & $617.86$ & $409.03$ & $0.2194$ & $0.2275$ & $0.1425$ & $0.1349$ & $758.42$ & $444.55$ & $0.2253$ \\
$4$U$1608-52$ & $918.31$ & $437.95$ & $0.4167$ & $0.3939$ & $0.3821$ & $0.2841$ & $1515.54$ & $508.86$ & $0.4939$ \\
PSR J $1903+327$ & $815.04$ & $408.74$ & $0.3704$ & $0.3601$ & $0.3264$ & $0.2554$ & $1268.52$ & $470.10$ & $0.445$ \\
Vela X-$1$ & $854.63$ & $411.52$ & $0.4072$ & $0.3872$ & $0.3707$ & $0.2785$ & $1394.45$ & $477.24$ & $0.4844$ \\
$4$U$1538-52$ & $612.82$ & $409.97$ & $0.2158$ & $0.2238$ & $0.138$ & $0.1313$ & $747.06$ & $444.63$ & $0.2183$ \\ \hline
\end{tabular}
\caption{\label{table3} Matter variables for different compact objects.}
\end{table}
\section{\label{sec8}Concluding remarks}
In this paper, we have presented a model for a spherically symmetric anisotropic sphere considering a specific $e^{\lambda(r)}$. The model is assumed to satisfy the Karmarkar condition, and smooth matching of the interior metric with the Schwarzschild exterior solution generates the constants of the model. To analyze the obtained solutions, we have considered the physical parameters of some well-known compact stars, $4$U$1820-30$, SMC X$-1$, SAX J $1808.4-3658$, $4$U$1608-52$, PSR J $1903+327$, Vela X$-1$ and $4$U$1538-52$, to examine the acceptability of our model.
Some of the key features of our solutions are discussed briefly:\\
\begin{itemize}
\item Both metric potentials are shown to be regular, well-defined, and positive throughout the stellar interior. Moreover, both metric potentials are finite at the center and at the boundary, providing an applicable compact model, as Fig.~\ref{figmp} supports.
\item The energy density of the compact star is positive and monotonically decreasing towards the surface, with the maximum density at the center, making the interior compact. Both the radial and transverse pressures are monotonically decreasing towards the surface. So all the potentials and parameters are well-behaved, as illustrated in Figs.~\ref{figpressure} and \ref{figda}. Moreover, the anisotropy is positive, as depicted in Fig.~\ref{figda}, i.e., the anisotropic force acts outwards, which leads to a stable configuration \cite{GM}.
\item As a singularity test, we have also computed the Kretschmann scalar and found that the model has a singularity at $r = 1$ (see Fig.~\ref{figks}), though the model is free from any singularity at the center.
\item All energy conditions are satisfied for this anisotropic physical matter distribution and evidently, we can see from Fig.~\ref{figenergy} that our solutions satisfy all the energy conditions within the star. Additionally it can also be seen that inequality $\rho - p_r - 2 p_t > 0$ is satisfied within the stellar interior.
\item Under the combined action of gravitational, hydrostatic, and anisotropic forces, our model stays in equilibrium, as any anisotropic fluid configuration must. The force balance dictated by the TOV equation is shown in Fig.~\ref{figforce}.
\item The stability of our model has been checked using the Herrera \cite{herrera92} condition. Also, the absolute value of the difference between the radial and transverse sound speeds fulfils the Herrera \cite{herrera92} and Andr{\'e}asson \cite{andreasson} criteria, making our model potentially stable.
\item Compactness and surface redshift are examined graphically for our model in Fig.~\ref{figcompred}, and numerically in Tables~\ref{table2} and \ref{table3}. Additionally, Fig.~\ref{figcompred} depicts that the maximum value is attained at the surface while the redshift vanishes at the centre, and Table~\ref{table3} shows that the maximum surface redshift is $<~5.211$, as suggested by Ivanov \cite{ivanov}.
\item We have plotted the mass--radius relation in Fig.~\ref{figmr}, where the maximum mass is obtained as $4.632~M_\odot$ for the radius $9.254$ km. Fig.~\ref{figmi} depicts the mass--inertia plot, where the mass at maximum inertia is approximately $0.11$\% lower than that in Fig.~\ref{figmr}, which expresses the stiffness of the EoS.
\item We have tested the stability of the model by studying the mass--central density and radius--central density relationships of the model in Figs.~\ref{figmassrho} and \ref{figrcenden}, respectively. The maximum mass of the model is obtained corresponding to the central density $32.11 \times 10^{18}~kg/m^3$ in Fig.~\ref{figmassrho}. Moreover, Fig.~\ref{figrcenden} suggests that the radius for which the maximum mass is obtained corresponds to the central density $2.851 \times 10^{18}~kg/m^3$.
\end{itemize}
We have tested all the stability criteria both analytically and graphically, as depicted in Sec.~\ref{sec6}. Additionally, we have calculated the mass and radius for some stars, as given in Table~\ref{table2}, taking a certain value of the EoS parameter, and the predicted radii are in good agreement with the observational data. The generating functions to find all possible exact solutions for our model are also generated. It is shown graphically that $Z(r)$ is always positive and decreasing in nature and $\Pi(r)$ is always negative and decreasing in nature, as is necessary for a stable compact star.
Thus it can be concluded that this model satisfies all the properties of an anisotropic compact configuration.
\section*{Acknowledgments}
FR and SD are thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for providing Visiting Associateship.
\section{Introduction: Scope, significance, and problem definition}
\label{sec:intro}
Wildfires have caused severe damage to forests, wildlife habitats, farms, residential areas, and ecosystems during the past few years.
Based on reports from the National Interagency Fire Center (NIFC) in the USA, an average of 51,296 fires burned more than 6,359,641 acres of land annually from 2010 to 2019, accounting for more than \$6 billion in damages \cite{National39:online, National18:online}. These alarming facts motivate researchers to seek novel solutions for early fire detection and management. In particular, recent advances in aerial monitoring systems can provide first responders and operational forces with more accurate data on fire behaviour for enhanced fire management.
Traditional approaches to detecting and monitoring fires include stationing personnel in lookout towers or using helicopters or fixed-wing aircraft to surveil fires with visual and infrared imaging. Recent research has suggested Internet of Things (IoT) innovations based on wireless sensor networks \cite{toledo2018forest, kaur2019fog, coen2018transforming, afghah2020cooperative, kamhoua2020modeling}, but such networks would require further investment and testing before providing practical information. At broader scales, satellite imagery is widely used for assessing fires globally \cite{huang2020wildfire, friedlingstein2019global}, but typically at relatively coarse resolution and with the availability of repeat images constrained by satellite orbital patterns.
Considering the challenges and issues of these methods, using Unmanned Aerial Vehicles (UAVs) for fire monitoring is gaining more traction in recent years \cite{keshavarz2020real, keshavarz2018towards, keshavarz2019automatic}. UAVs offer new features and convenience including fast deployment, high maneuverability, wider and adjustable viewpoints, and less human intervention \cite{erdelj2017help,Afghah_ACC, aggarwal2020risk,Afghah_INFOCOM, afghah2018reputation}. Recent studies investigated the use of UAVs in disaster relief scenarios and operations such as wildfires and floods, particularly as a temporary solution when terrestrial networks fail due to damaged infrastructures, communication problems, spectrum scarcity, or coalition formation \cite{shamsoshoara2019distributed, shamsoshoara2019solution, shamsoshoara2020autonomous,mousavi2019use,mousavi2018leader}.
Recent advances in artificial intelligence (AI) and machine learning have made image-based modeling and analysis (e.g., classification, real time prediction, and image segmentation) even more successful in different applications \cite{mousavi2017traffic, mousavi2016deep,sarcinelli2019handling,mousavi2016learning}. Also, with the advent of nanotechnology semiconductors, a new generation of Tensor Processing Units (TPUs) and Graphical Processing Units (GPUs) can provide an extraordinary computation capability for data-driven methods \cite{wang2019benchmarking}. Moreover, modern drones and UAVs can be equipped with tiny edge TPU/GPU platforms to perform on-board processing on the fly to facilitate early fire detection before a catastrophic event happens \cite{google_edge_tpu:online, NVIDIAJe4:online}.
Most supervised learning methods rely on large training datasets to train a reasonably accurate model. Studies such as \cite{wu2020transfer} used a fire dataset from public sources to perform fire detection based on pre-trained ANN architectures such as MobileNet and AlexNet. However, that dataset was based on terrestrial images of fire. To the best of our knowledge, there exists no aerial imaging dataset for fire analysis, which is urgently needed to develop fire modeling and analysis tools for aerial monitoring systems. Note that aerial imagery exhibits different properties, such as low resolution and a top-view perspective, substantially different from images taken by ground cameras.
In this paper, we introduce a new dataset as a collection of fire videos and images taken by drones during prescribed burning of slash piles in Northern Arizona.
The images were taken by multiple drones with different points of view, zoom levels, and camera types, including regular and thermal cameras. Pile burns can be very helpful for studying spot fires and early-stage fires. Pile burns are typically used by forest managers for cleaning up forest residues (``slash'') such as branches and foliage from forest thinning and restoration projects. Forest treatments are a key management strategy for reducing fuels, and burning slash piles is often the most economically efficient and safe means of removing slash. Piles must be monitored by fire managers for a few days after ignition to avoid spread outside the intended burn area. Automated aerial monitoring systems can substantially reduce this forest management workload.
We propose two sample problems to evaluate the use of the dataset for real-world fire management problems.
The contributions of this paper include i) FLAME (Fire Luminosity Airborne-based Machine learning Evaluation), the first of its kind aerial imaging dataset for pile burn monitoring, which includes both normal and thermal palettes, ii) a DL-based algorithm for frame-based fire classification, which can be used for early fire detection, and iii) a DL-based image segmentation method for pixel-wise fire masking, for modeling fire expansion.
The rest of the paper is structured as follows. Section~\ref{sec:dataset} presents the FLAME dataset along with the related information regarding the hardware and data. Section~\ref{sec:method} discusses the methodology based on the two defined challenges, namely fire classification and fire segmentation.
The experiments and results are illustrated in Section~\ref{sec:results} over a variety of metrics. Conclusions and discussion points are provided in Section~\ref{sec:conclusion}.
\section{FLAME Dataset: Hardware and Applicable Data}
\label{sec:dataset}
This section details the hardware used to collect information, the data modalities, and types of the captured information.
Prescribed burning of slash piles is a common occurrence, primarily during the winter months, in high-elevation forests of the Southwest. Prescribed fires provide excellent opportunities for researchers to collect and update imagery data. The current study shows the results of the first test; the dataset can be continually updated by adding the results of subsequent tests. The test was conducted with fire managers from the Flagstaff (Arizona) Fire Department, who carried out a burn of piled slash on city-owned lands in a ponderosa pine forest on Observatory Mesa. The prescribed fire took place on January 16th, 2020, at a temperature of 43$^\circ$F ($\sim$ 6$^\circ$C), with partly cloudy conditions and no wind.
\subsection{Hardware}
\label{subsec:hardware}
This study utilizes multiple drones and cameras to create a dataset of aerial fire images. Table~\ref{tab:tools_hardware} describes the technical specifications of the drones and cameras used.
\begin{table*}[bt]
\caption{Technical specification of hardware and tools}
\centering{
\label{tab:tools_hardware}
\resizebox{0.965\linewidth}{!}{
\begin{tabular}{c|c}
\toprule
\toprule
\Topspace
\Bottomspace
\multirow{3}{*}[0.2in]{\includegraphics[width=0.2\textwidth]{Figures/phantom3.jpg}} & {\fontsize{10}{17}\selectfont \makecell{Phantom 3 Professional, DJI, 1280 gram, diagonal size=350mm, \bigstrut\\ Max speed=16m/s($\sim$57kph), max flight time is 23 minutes, \bigstrut\\ Flight time is reduced to 18 mins due to additional weight \cite{Phantom345:phantom3}.}}
\bigstrut \\
\hline
\Topspace
\Bottomspace
\multirow{3}{*}[0.4in]{\includegraphics[width=0.2\textwidth]{Figures/m200_matrice.jpg}} & {\fontsize{10}{17}\selectfont \makecell{Matrice 200, DJI, 3.80kg, size:716mm $\times$ 220mm $\times$ 236mm, \bigstrut\\ payload up to 2kg, 16m/s ($\sim$61kph), batteries: (TB50) and TB55 \bigstrut\\ Max flight time: 38 minutes, operation range of 7km \cite{DJITheWo79:Matrice200}.}}
\bigstrut \\
\hline
\Topspace
\Bottomspace
\multirow{3}{*}[0.3in]{\includegraphics[width=0.2\textwidth, height=25mm]{Figures/zenmusex4s.png}} & {\fontsize{10}{17}\selectfont \makecell{Zenmuse X4S, DJI, gimbal: Matrice 200, weight: 253 gram, \bigstrut \\ Field Of View (FOV): \ang{84}, resolution: Full HD to Cinematic 4K \bigstrut \\ sensor: CMOS 20MPixels \cite{DJIZenmu25:zenmuse}.}}
\bigstrut \\
\hline
\Topspace
\Bottomspace
\multirow{3}{*}[0.25in]{\includegraphics[width=0.20\textwidth]{Figures/vuepro-r.png}} & {\fontsize{10}{17}\selectfont \makecell{Vue Pro R, FLIR, IR camera, control: Bluetooth and \bigstrut \\ Pulse Width Modulation (PWM) signal, FOV: \ang{45}, resolution: 640 $\times$ 512 \bigstrut \\ Lens: 6.8mm thermal, no gimbal \cite{FLIRVueP5:online}.}}
\bigstrut \\
\hline
\Topspace
\Bottomspace
\multirow{2}{*}[0.3in]{\includegraphics[width=0.20\textwidth]{Figures/phantom_camera.png}} & {\fontsize{10}{17}\selectfont \makecell{Phantom 3 camera, DJI, sensor: 1/2.3" CMOS 12.4MPixels \bigstrut \\ FOV: \ang{94}, (FPSs): 24 to 60, resolution: HD, FHD, UHD \cite{Phantom345:phantom3}.}}
\bigstrut \\
\bottomrule
\end{tabular}
}
}
\end{table*}
\begin{figure*}[bt]
\centering
\includegraphics[width=1\linewidth]{Figures/dataset_normal.pdf}
\caption{Frame samples of the normal spectrum palette.}
\label{fig:dataset_normal}
\end{figure*}
\subsection{Applicable data for the defined problem}
\label{subsec:Data}
This section presents the details of the captured images and videos.
The captured videos are converted to frames based on the recorded Frames Per Second (FPS).
Four types of video including the normal spectrum, fusion, white-hot, and green-hot palettes are available in the FLAME dataset \cite{qad6-r683-20}.
\begin{figure*}[bt]
\centering
\includegraphics[width=1\linewidth]{Figures/dataset_thermal.pdf}
\caption{Frame samples of thermal images including Fusion, WhiteHot, and GreenHot palettes from top row to the bottom row.}
\label{fig:dataset_thermal}
\end{figure*}
The normal spectrum palette was recorded using both the Zenmuse X4S and the Phantom 3 camera. The thermal and IR outputs were collected using the Forward Looking Infrared (FLIR) Vue Pro R camera.
Several video clips, including both fire and no-fire footage, are available. The normal-spectrum recordings have a 1280 $\times$ 720 resolution at 29 frames per second (FPS).
Another 6 minutes of video is available for one pile burn, recorded from the start of the burn at 1280 $\times$ 720 resolution and 29 FPS.
The H.264 codec was used for all recordings. More details about these videos, along with the dataset link, are available in Table~\ref{tab:dataset_info}. Figure~\ref{fig:dataset_normal} shows representative frames from both fire and no-fire videos. The full videos are available in the FLAME dataset repository.
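The video-to-frame conversion mentioned above is simple arithmetic; the sketch below estimates frame counts from a clip's duration and FPS (the sample numbers match clip 1 of Table~\ref{tab:dataset_info}; the exact extracted count may differ slightly depending on the decoder):

```python
def approx_frame_count(duration_s, fps):
    """Approximate number of frames extracted from a clip: duration x FPS."""
    return int(duration_s * fps)

# Clip 1: 966 s of normal-spectrum video at 29 FPS
n_frames = approx_frame_count(966, 29)   # 28014 frames
```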
\begin{table*}[bt]
\caption{Dataset information (\href{https://ieee-dataport.org/open-access/flame-dataset-aerial-imagery-pile-burn-detection-using-drones-uavs}{\blue{Link}}) \cite{qad6-r683-20}.}
\centering{
\label{tab:dataset_info}
\resizebox{\linewidth}{!}{
\renewcommand{\arraystretch}{2.5}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\toprule
\toprule
\Topspace
\Bottomspace
\textbf{\Large} & {\fontsize{12}{17}\selectfont \textbf{Type}} & {\fontsize{12}{17}\selectfont \textbf{Camera}} & {\fontsize{12}{17}\selectfont \textbf{Palette}} & {\fontsize{12}{17}\selectfont \textbf{Duration}} & {\fontsize{12}{17}\selectfont \textbf{Resolution}} & {\fontsize{12}{17}\selectfont \textbf{FPS}} & {\fontsize{12}{17}\selectfont \textbf{Size}} & {\fontsize{12}{17}\selectfont \textbf{Application}} & {\fontsize{12}{17}\selectfont \textbf{Usage}} & {\fontsize{12}{17}\selectfont \textbf{Labeled}}
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{1}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont Zenmuse &
\fontsize{12}{17}\selectfont Normal(.MP4) &
\fontsize{12}{17}\selectfont 966 seconds &
\fontsize{12}{17}\selectfont 1280$\times$720 &
\fontsize{12}{17}\selectfont 29 &
\fontsize{12}{17}\selectfont 1.2 GB &
\fontsize{12}{17}\selectfont Classification &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{2}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont Zenmuse &
\fontsize{12}{17}\selectfont Normal(.MP4) &
\fontsize{12}{17}\selectfont 399 seconds &
\fontsize{12}{17}\selectfont 1280$\times$720 &
\fontsize{12}{17}\selectfont 29 &
\fontsize{12}{17}\selectfont 503 MB &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{3}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont FLIR &
\fontsize{12}{17}\selectfont WhiteHot(.MOV) &
\fontsize{12}{17}\selectfont 89 seconds &
\fontsize{12}{17}\selectfont 640$\times$512 &
\fontsize{12}{17}\selectfont 30 &
\fontsize{12}{17}\selectfont 45 MB &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{4}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont FLIR &
\fontsize{12}{17}\selectfont GreenHot(.MOV) &
\fontsize{12}{17}\selectfont 305 seconds &
\fontsize{12}{17}\selectfont 640$\times$512 &
\fontsize{12}{17}\selectfont 30 &
\fontsize{12}{17}\selectfont 153 MB &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{5}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont FLIR &
\fontsize{12}{17}\selectfont Fusion(.MOV) &
\fontsize{12}{17}\selectfont 25 mins &
\fontsize{12}{17}\selectfont 640$\times$512 &
\fontsize{12}{17}\selectfont 30 &
\fontsize{12}{17}\selectfont 2.83 GB &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{6}} & \fontsize{12}{17}\selectfont Video & \fontsize{12}{17}\selectfont Phantom &
\fontsize{12}{17}\selectfont Normal(.MOV) &
\fontsize{12}{17}\selectfont 17 mins &
\fontsize{12}{17}\selectfont 3840$\times$2160 &
\fontsize{12}{17}\selectfont 30 &
\fontsize{12}{17}\selectfont 32 GB &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont N
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{7}} & \fontsize{12}{17}\selectfont Frame & \fontsize{12}{17}\selectfont Zenmuse &
\fontsize{12}{17}\selectfont Normal(.JPEG) &
\fontsize{12}{17}\selectfont 39,375 frames &
\fontsize{12}{17}\selectfont 254$\times$254 &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont 1.3 GB &
\fontsize{12}{17}\selectfont Classification &
\fontsize{12}{17}\selectfont Train/Val &
\fontsize{12}{17}\selectfont Y
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{8}} & \fontsize{12}{17}\selectfont Frame & \fontsize{12}{17}\selectfont Phantom &
\fontsize{12}{17}\selectfont Normal(.JPEG) &
\fontsize{12}{17}\selectfont 8,617 frames &
\fontsize{12}{17}\selectfont 254$\times$254 &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont 301 MB &
\fontsize{12}{17}\selectfont Classification &
\fontsize{12}{17}\selectfont Test &
\fontsize{12}{17}\selectfont Y
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{9}} & \fontsize{12}{17}\selectfont Frame & \fontsize{12}{17}\selectfont Phantom &
\fontsize{12}{17}\selectfont Normal(.JPEG) &
\fontsize{12}{17}\selectfont 2,003 frames &
\fontsize{12}{17}\selectfont 3480$\times$2160 &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont 5.3 GB &
\fontsize{12}{17}\selectfont Segmentation &
\fontsize{12}{17}\selectfont Train/Val/Test &
\fontsize{12}{17}\selectfont Y(Fire)
\\
\hline
\Topspace
\Bottomspace
{
\fontsize{10}{17}\selectfont \textbf{10}} & \fontsize{12}{17}\selectfont Mask & \fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont Binary(.PNG) &
\fontsize{12}{17}\selectfont 2,003 frames &
\fontsize{12}{17}\selectfont 3480$\times$2160 &
\fontsize{12}{17}\selectfont - &
\fontsize{12}{17}\selectfont 23.4 MB &
\fontsize{12}{17}\selectfont Segmentation &
\fontsize{12}{17}\selectfont Train/Val/Test &
\fontsize{12}{17}\selectfont Y(Fire)
\\
\bottomrule
\end{tabular}
}
}
\end{table*}
The FLAME dataset also includes thermal videos in the Fusion, WhiteHot, and GreenHot palettes.
All thermal videos were captured at a resolution of 640 $\times$ 512 and at 30 FPS. Multiple videos of fire and no-fire types with different lengths are available.
Figure~\ref{fig:dataset_thermal} shows some randomly selected frames from these thermal videos. More details about the FLAME dataset are available in Table~\ref{tab:dataset_info}, and a sample video of the dataset is available on YouTube~\cite{youtube2020_dataset}. Sections~\ref{subsec:classification} and \ref{subsec:segmentation} describe how some of the videos are converted into frames to address research challenges such as fire classification and fire segmentation. Researchers can use applications of their choice to extract frames from the videos at the required FPS. The FLAME dataset, including all images, videos, and data, is available on IEEE-Dataport \cite{qad6-r683-20}.
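For frame extraction at a chosen rate, the index bookkeeping can be sketched as follows; \texttt{select\_frame\_indices} is a hypothetical helper (not part of the FLAME tooling), shown only to illustrate downsampling a 29 FPS recording:

```python
def select_frame_indices(total_frames: int, src_fps: float, target_fps: float) -> list[int]:
    """Return the indices of frames to keep when downsampling a video."""
    if target_fps >= src_fps:
        return list(range(total_frames))
    step = src_fps / target_fps  # keep one frame every `step` source frames
    indices, t = [], 0.0
    while round(t) < total_frames:
        indices.append(round(t))
        t += step
    return indices

# e.g. the 966-second clip at 29 FPS downsampled to 1 frame per second
idx = select_frame_indices(total_frames=966 * 29, src_fps=29, target_fps=1)
```

Any frame-extraction tool (e.g. FFmpeg or OpenCV) that accepts a target rate performs equivalent bookkeeping internally.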
\section{Goals: Suggested Experiments and Methodology}
\label{sec:method}
This section presents two example applications defined on the collected FLAME dataset, along with deep learning solutions for each.
The first problem is fire versus no-fire classification using a deep neural network (DNN). The second problem is fire segmentation, which can support fire detection
by masking the identified fire regions on video frames classified as fire-containing in the first problem.
\subsection{Fire vs No-Fire Classification}
\label{subsec:classification}
Image classification is one of the challenging tasks in the image processing domain. In the past, traditional image processing techniques relied on RGB channel comparison to detect objects such as fire in frames or videos \cite{celik2009fire, umar2017state, binti2015fire}. These traditional methods are error-prone and not fully reliable \cite{yuan2015uav}. For instance, RGB comparison methods that use a fixed threshold to detect fire may report a sunset or sunrise as a false positive. Training a DNN for this classification task, in contrast, helps the model distinguish fire from visually similar elements that are not germane to fire. Some studies, such as \cite{qi2009computer, kundu2018highly}, instead perform pixel-based classification and segmentation in the HSV (Hue, Saturation, Value) color space.
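A minimal sketch of such a fixed-threshold RGB rule is given below; the particular threshold values are illustrative assumptions, and the example shows why any bright orange patch, fire or sunset alike, passes the test:

```python
import numpy as np

def rgb_threshold_fire(frame: np.ndarray, r_min: int = 200, rg_gap: int = 40) -> np.ndarray:
    """Naive per-pixel rule: 'fire' if red is high and dominates green.
    frame: H x W x 3 uint8 array; returns an H x W boolean mask.
    Thresholds are illustrative, not taken from any cited method."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    return (r >= r_min) & (r - g >= rg_gap)

# a uniformly bright orange patch triggers the rule regardless of its cause
patch = np.zeros((2, 2, 3), dtype=np.uint8)
patch[...] = (230, 120, 30)
mask = rgb_threshold_fire(patch)
```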
In the present study, a supervised machine learning method is used to classify the frames captured by the cameras.
A mixed frame in which fire and non-fire regions coexist is labeled as fire, and a frame containing no fire is labeled as non-fire. The normal visible spectrum was selected for classification (rather than the GreenHot or Fusion thermal palettes), using the Zenmuse X4S camera and the default camera of the DJI Phantom 3. The binary classification model used in this study is the Xception network \cite{chollet2017xception}, following the Keras example implementation\footnote{https://keras.io/examples/vision/image\_classification\_from\_scratch/}. The Xception model is a deep convolutional neural network (DCNN); its structure is shown in Fig.~\ref{fig:Xception_model}. The Xception network is obtained by replacing the standard \textit{Inception} modules of the Inception architecture with depth-wise separable convolutions \cite{chollet2017xception, szegedy2016rethinking, szegedy2016inception}.
\begin{figure}[tp]
\centering
\includegraphics[height=0.95\textheight, keepaspectratio]{Figures/small_Xception_model_vertical.pdf}
\caption{Small version of the Xception network for the fire classification.
}
\label{fig:Xception_model}
\end{figure}
Figure~\ref{fig:Xception_model} shows a concise version of the Xception model. The model has three main blocks: 1) the input layer, 2) the hidden layers, and 3) the output layer. The size of the input layer depends on the image size and the number of channels, which in our case is $254\times254\times3$. The RGB values of the channels are then scaled to floats between 0 and 1. The hidden layers rely on depth-wise separable convolutions with shortcuts between the convolution blocks (as in ResNet~\cite{he2016deep}). The entry flow of the hidden layers is a pair of 2-dimensional (2D) convolutional blocks with 8 filters and a stride of $2\times2$. Each block is followed by batch normalization and a Rectified Linear Unit (ReLU) activation function \cite{li2017convergence}. Batch normalization speeds up the training process, decreases the importance of the initial weights, and regularizes the model. The model then continues with two separable 2D convolutional blocks.
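The depth-wise separable convolution at the core of these blocks can be sketched in plain NumPy; this is a naive loop version with illustrative shapes, not the Keras implementation used for training:

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_w):
    """Depthwise separable convolution (the building block Xception swaps in
    for Inception modules): a per-channel spatial filter followed by a 1x1
    cross-channel mix.  x: H x W x C, depth_k: k x k x C, point_w: C x F.
    'valid' padding and stride 1 are assumed for simplicity."""
    H, W, C = x.shape
    k = depth_k.shape[0]
    Ho, Wo = H - k + 1, W - k + 1
    depth_out = np.empty((Ho, Wo, C))
    for c in range(C):                       # spatial filtering, channel by channel
        for i in range(Ho):
            for j in range(Wo):
                depth_out[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * depth_k[:, :, c])
    return depth_out @ point_w               # 1x1 pointwise convolution

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 3))
y = depthwise_separable_conv(x, rng.standard_normal((3, 3, 3)), rng.standard_normal((3, 8)))
```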
The last block of the hidden layers is a separable 2D convolutional layer with 8 filters, followed by another batch normalization and ReLU. Since fire detection is a binary classification task (fire/no-fire), the activation function of the output layer is the Sigmoid function, shown in (\ref{eq:sigmoid}),
\begin{align}\label{eq:sigmoid}
P(\textrm{label=Fire}) = \sigma(\textrm{label=Fire} | \zeta(\theta)) = \frac{1}{1 + e^{-\zeta(\theta)}},
\end{align}
where $\zeta(\theta)$ is the value of the output layer, computed from the input frame's RGB pixel values and the weights of the hidden network, and $\theta$ denotes the weights of the last layer. The output of the Sigmoid function is the probability of fire given the frame fed into the network. To train the Xception
network and find the weights of all neurons, a loss function is minimized to increase the accuracy of the network and find the optimal weights. Since the problem in this section is binary classification, the loss function is the binary cross-entropy defined as
\begin{align}\label{eq:lossfunc_bce}
&\mathcal{L}(y, \hat{y}) =
\\
\nonumber
&- \frac{1}{N} \sum\limits_{i=1}^N \left(y_{i} \log(p(\hat{y}_{i})) + (1-y_{i}) \log(1 - p(\hat{y}_{i}))\right),
\end{align}
where $N$ is the number of samples in each batch used to update the loss function in each epoch, and $y$ is the ground truth
label of a frame, of type fire ($y=1$) or no-fire ($y=0$),
based on the training data.
$p(\hat{y})$ is the predicted probability of a frame belonging to the fire class.
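Equations (\ref{eq:sigmoid}) and (\ref{eq:lossfunc_bce}) translate directly into code; this NumPy sketch uses illustrative labels and raw network outputs, not actual FLAME predictions:

```python
import numpy as np

def sigmoid(z):
    """Maps the network output zeta(theta) to P(label = Fire)."""
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(y, p_hat, eps=1e-12):
    """Mean binary cross-entropy over a batch of N frames."""
    p_hat = np.clip(p_hat, eps, 1 - eps)     # guard against log(0)
    return -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))

y = np.array([1.0, 0.0, 1.0])                # fire, no-fire, fire
p_hat = sigmoid(np.array([2.0, -3.0, 0.5]))  # illustrative raw outputs
loss = binary_cross_entropy(y, p_hat)
```

More confident (larger-magnitude) correct outputs drive the loss toward zero, which is what the Adam optimizer exploits during training.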
Next, the Adam optimizer is used to minimize the loss function and find the optimal weights during the learning process. After training the network with the training dataset, the evaluation is performed using a test dataset in Section~\ref{sec:results}. The implemented code for this learning model is available on GitHub~\cite{github:code_dataset_fire}.
\subsection{Fire Segmentation}
\label{subsec:segmentation}
This section considers the problem of image segmentation for frames labeled as ``fire'' by the fire classification algorithm of Section~\ref{subsec:classification}. Studying the fire segmentation problem is useful for scenarios like detecting small fires \cite{yuan2017aerial}; fire segmentation also helps fire managers localize discrete areas of active burning for fire monitoring. The goal is to propose an algorithm that finds the pile burn segments in each frame and generates the corresponding masks. In the past, such segmentation problems were handled with image processing and RGB threshold values,
which exhibit relatively high error rates \cite{ccelik2007fire, yuan2015uav, khalil2020fire}. Here, the goal is to develop an image semantic segmentation model that performs a pixel-wise classification for each frame to produce a fire mask as output. To accomplish this task, a DCNN model is implemented to predict the label of each pixel. This segmentation problem can be recast as a binary pixel-wise classification problem, where each pixel takes one of two labels: ``fire'' or ``non-fire'' (background). For the image segmentation task, the fire test dataset from Section~\ref{subsec:classification} is used as the training dataset. Training a DCNN model requires a ground truth mask dataset. Different tools and applications, such as Labelbox~\cite{Labelbox:online}, Django Labeller~\cite{Django:online}, LabelImg~\cite{labelImg:online}, MATLAB Image Labeler~\cite{MATLABImageLabeler:online}, and the GNU Image Manipulation Program (GIMP)~\cite{GIMPGNUI35:online}, support manual image segmentation through pixel labeling or annotation (rectangles, lines, and cuboids) of the Regions Of Interest (ROI) to provide training data for deep learning models. The MATLAB Image Labeler was used on 2,003 frames to generate the ground truth masks; this subcategory of the FLAME dataset, masks and images, is listed in Table~\ref{tab:dataset_info}. The implemented image segmentation model is adopted from the U-Net convolutional network developed for biomedical image segmentation \cite{ronneberger2015u}. U-Net is an end-to-end technique mapping raw images to segmented masks. A few changes were made to this network to accommodate the FLAME dataset and the nature of this problem: the ReLU activation function of each two-dimensional convolutional layer is changed to the Exponential Linear Unit (ELU) to obtain more accurate results \cite{ELU:online}.
For negative inputs, the ELU output is negative but bounded below by the constant $-\alpha$, and the function behaves more smoothly than ReLU. The structure of the customized U-Net is shown in Figure~\ref{fig:customized_unet}. The backbone of the U-Net consists of a sequence of up-convolutions and concatenations with high-resolution features from the contracting path.
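The ELU behaviour described above can be checked numerically; $\alpha = 1$ here is an assumption matching the common default:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    """ELU: identity for x >= 0, alpha*(exp(x)-1) for x < 0, so negative
    inputs saturate smoothly at -alpha instead of being clipped to zero."""
    return np.where(x >= 0, x, alpha * np.expm1(x))

x = np.linspace(-5, 5, 11)
```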
The input layer has size 512 $\times$ 512 $\times$ 3, matching the size of the input images and their three RGB channels. For computational convenience, the RGB values (between 0 and 255) are scaled down by 255 to floats between 0 and 1. The first contracting block follows: a two-dimensional convolutional layer with the ELU activation function, a dropout layer, an identical convolutional layer, and a two-dimensional
max pooling layer. This structure is repeated three more times to form the left side of the U shape. Next come two two-dimensional convolutional layers with a dropout layer in between. The structure of the left side is then mirrored on the right side of the U shape, giving a symmetric up-convolution path in each block, and each block is concatenated with its peer block from the contracting path. Since the pixel-wise segmentation is a binary classification problem, the last layer uses the Sigmoid activation function.
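The symmetry of the contracting and expanding paths can be sketched by tracing the spatial feature-map sizes; the depth of four pooling stages is an assumption consistent with the repetition described above:

```python
def unet_shapes(input_size=512, depth=4):
    """Trace the spatial size through the contracting path (each block halves
    the size via 2x2 max pooling) and the symmetric expanding path (each
    up-convolution doubles it).  Sizes assume the 512 x 512 inputs used here."""
    down = [input_size // 2**i for i in range(depth + 1)]  # contracting path
    up = down[::-1]                                        # expanding path
    return down, up

down, up = unet_shapes()
```

Each size in `up` has a matching peer in `down`, which is what makes the skip concatenations between peer blocks dimensionally valid.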
\begin{figure}[tp]
\centering
\includegraphics[height=0.95\textheight, keepaspectratio]{Figures/customized_unet_vertical.pdf}
\caption{Customized version of the U-Net for the fire segmentation.}
\label{fig:customized_unet}
\end{figure}
The DCNN utilizes dropout to avoid overfitting on the FLAME dataset and to provide more effective regularization, given the small number of ground truth samples.
The utilized loss function is the binary cross entropy similar to (\ref{eq:lossfunc_bce}). The Adam optimizer is used to find the optimal value of weights for the neurons. The evaluation of the FLAME-trained model with the ground truth data is described in Section~\ref{subsec:res_segmentation}. The implemented code for this section is available on GitHub~\cite{github:code_dataset_fire}.
\section{Results: Metrics and guidance on reporting results}
\label{sec:results}
In this section, we present the results for the two problems of fire classification and fire segmentation. First, we provide the parameters used in our experiments; then we discuss the results of each algorithm. All training, validation, and testing runs were performed on an AMD Ryzen 9 3900X with an NVIDIA RTX 2080 Ti on an Ubuntu system.
\subsection{Fire vs No-Fire Classification}
\label{subsec:res_classification}
For training, the total number of frames is 39,375, comprising 25,018 frames of type ``fire'' and 14,357 frames of type ``non-fire''.
This dataset is further split into 80\% training and 20\% validation sets, and all frames are shuffled before being fed into the network. Augmentation methods such as horizontal flipping and random rotation are used to create new frames and mitigate the bias caused by the unbalanced number of samples in the ``fire'' and ``non-fire'' classes. The training phase ran for 40 epochs, the learning rate of the Adam optimizer was fixed at 0.001, and a batch size of 32 was used to fit the model.
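The augmentation step can be sketched as follows; 90-degree rotations stand in for the random rotation used here, so this is an illustrative approximation rather than the exact pipeline:

```python
import numpy as np

def augment(frame, rng):
    """Random horizontal flip plus a random 90-degree rotation, used to
    generate extra frames for the under-represented class."""
    out = frame
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = np.rot90(out, k=rng.integers(0, 4))
    return out

rng = np.random.default_rng(42)
frame = rng.integers(0, 256, size=(254, 254, 3), dtype=np.uint8)
aug = augment(frame, rng)
```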
To evaluate the accuracy and loss on the test dataset, 8,617 frames, comprising 5,137 fire-labeled and 3,480 no-fire-labeled frames, are fed into the pre-trained network.
Table~\ref{tab:accuracy_loss} reports the loss and accuracy on the training, validation, and test sets. Note that all frames for the training phase were collected by the Matrice 200 drone with the Zenmuse X4S camera, while all frames in the test set were collected by the Phantom drone with its default mounted camera; therefore, no overlap exists between the training and test samples. This indicates that our method is not biased toward the imaging equipment, and the accuracy would likely be even higher if the same imaging conditions were used for training and testing. The achieved accuracy
of the ``Fire vs No-Fire'' classification is 76.23\%.
Figure~\ref{fig:accuracy_loss_training} shows the loss and accuracy during the training phase for both the training and validation sets. Figure~\ref{fig:Confusion_matrix} presents the confusion matrix of this binary fire classification task on the test dataset: the vertical axis shows the true label of the frames and the horizontal axis the predicted label. Since the ratio of fire to no-fire frames was imbalanced in the training phase, the false positive rate (classifying a true no-fire frame as fire) is higher than the false negative rate (classifying a true fire frame as no-fire).
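The confusion matrix and accuracy computation can be sketched as follows, with illustrative labels rather than the actual test-set predictions:

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """2x2 confusion matrix for the fire (1) vs no-fire (0) task:
    rows = true label, columns = predicted label."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return np.array([[tn, fp], [fn, tp]])

y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0])
cm = confusion_counts(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()
```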
\begin{figure}[bt]
\centering
\includegraphics[width=1\linewidth]{Figures/KerasModel_40_EPOCH_20_layers_simple.pdf}
\caption{Accuracy and loss values for the training and validation sets.}
\label{fig:accuracy_loss_training}
\end{figure}
\begin{figure}[bt]
\centering
\includegraphics[width=1\linewidth]{Figures/Confusion_matrix.pdf}
\caption{Confusion matrix for the true and predicted labels.}
\label{fig:Confusion_matrix}
\end{figure}
\begin{table}[bt]
\caption{Accuracy and loss for evaluation of the fire classification.}
\centering
\label{tab:accuracy_loss}
\resizebox{0.6\linewidth}{!}{
\begin{tabular}{c|c|c}
\toprule
 & \multicolumn{2}{c}{\textbf{Performance}} \\
\hline
\textbf{Dataset} & Loss & Accuracy (\%) \\
\hline
Test set & $\mathbf{0.7414}$ & $\mathbf{76.23}$ \\
\hline
Validation set & $0.1506$ & $94.31$ \\
\hline
Training set & $0.0857$ & $96.79$ \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Fire Segmentation}
\label{subsec:res_segmentation}
The purpose of fire segmentation is to accurately localize and extract the fire regions from the background. The video frames of the test set that were labeled as ``fire'' by the fire classification stage (Section~\ref{subsec:classification}) are used here for training, validation, and testing. The total number of frames is 5,137, with 2,003 ground truth masks generated using the MATLAB Image Labeler tool. The ground truth masks were produced by a human subject matter expert (SME) manually marking the fire pixels with polygon shapes in the MATLAB Image Labeler. The split between training and validation data is 85\% and 15\%, and the frames and ground truth data were shuffled accordingly before being imported into the training model. The maximum number of epochs is 30; however, an early-stopping callback triggers when the performance no longer changes substantially. The batch size for training is 16. Figure~\ref{fig:segmentation_test} shows six samples from the test set along with the expected ground truth masks and the masks generated by the trained network: the first row is the input frame, the second row is the expected ground truth (gTruth) mask, and the last row is the mask generated by the trained model. Table~\ref{tab:segmentation_performance} reports the performance of this model in terms of precision, recall, Area Under Curve (AUC), F1-score, sensitivity, specificity, and Mean Intersection-Over-Union (Mean IOU).
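The reported pixel-wise metrics can be computed from binary masks as in this sketch; the two toy masks are illustrative, not FLAME data:

```python
import numpy as np

def mask_metrics(gt, pred):
    """Pixel-wise precision, recall, F1, and IoU for a binary fire mask."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.sum(gt & pred)
    fp = np.sum(~gt & pred)
    fn = np.sum(gt & ~pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / np.sum(gt | pred)
    return precision, recall, f1, iou

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16 fire pixels
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # shifted one row
precision, recall, f1, iou = mask_metrics(gt, pred)
```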
\begin{figure*}[bt]
\centering
\includegraphics[width=1\linewidth]{Figures/segmentation_test_4.pdf}
\caption{Performance of the fire segmentation on six frames of the test set.}
\label{fig:segmentation_test}
\end{figure*}
\begin{table}[bt]
\caption{Performance evaluation of the customized U-Net on the fire dataset for the fire segmentation.}
\centering
\label{tab:segmentation_performance}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
 & \multicolumn{7}{c}{\textbf{Performance evaluation}} \\
\hline
\textbf{Dataset} & Precision (\%) & Recall (\%) & AUC (\%) & F1-Score (\%) & Sensitivity (\%) & Specificity (\%) & IOU (\%) \\
\hline
Image Segmentation & $\mathbf{91.99}$ & $\mathbf{83.88}$ & $\mathbf{99.85}$ & $\mathbf{87.75}$ & $\mathbf{83.12}$ & $\mathbf{99.96}$ & $\mathbf{78.17}$ \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Open Challenges regarding the dataset}
\label{sec:open_challenges}
This study proposed two different challenges based on the dataset.
We encourage other researchers to use the available FLAME dataset to improve the accuracy of the fire classification problem, which might include providing more ground truth data in the form of labeled masks for fire segmentation.
Three thermal palettes, GreenHot, WhiteHot, and Fusion, are also available for further investigation of fire segmentation and classification.
Other considerations include the type of fire elements, that is, the different structures of the fire (white-hot core, exterior, etc.). These elements can be segmented separately to better understand fire behavior.
Another challenge is investigating different fire detection models on these thermal images to determine which type of data yields better model accuracy. Another important research direction is developing integrative imagery-based fire spread models that incorporate environmental factors such as the terrain model and the vegetation fuel profile of the region; extracting such factors from the images and videos and comparing them with alternative sources can advance image-based fire modeling algorithms. Other open challenges and future directions for the FLAME dataset include, but are not limited to: 1) transfer learning, 2) context-based fire detection using a model followed by zero-shot learning, 3) fire content analysis, 4) temporal analysis, 5) surrogate airborne perspective analysis, 6) metric design, 7) performance standards, 8) user displays, 9) edge node efficiency, and 10) occlusion robustness.
\section{Conclusion}
\label{sec:conclusion}
This paper presented the FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) dataset of pile burns in a Northern Arizona forest. Two drones were used to collect aerial frames and videos in four palettes, Normal, Fusion, WhiteHot, and GreenHot, using regular and thermal cameras. The frames were used in two applications: in the first, a convolutional neural network performed deep learning binary fire classification to label the data; in the second, a machine learning approach extracted fire masks from fire-labeled data as an image segmentation technique. These exemplary applications show the utility of the FLAME dataset for developing computational tools for fire management and control; the FLAME dataset can also serve as a benchmark for testing generic image processing algorithms.
We provide numerical results for the performance of the two proposed algorithms for image classification and segmentation. We believe that more advanced models developed by the research community can further improve the reported results. Another potential use of this dataset is developing fire classification and detection algorithms through collective analysis of different imaging modalities, including regular and thermal images. Researchers can also build on the fire segmentation results to define related networking and monitoring problems, such as optimal task scheduling for a fleet of drones to cover the pile burns in a region in the shortest time possible.
\section*{Acknowledgment}
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0090 and the National Science Foundation under Grant Number 2034218. Thanks to Neil Chapman and Paul Summerfelt of the Flagstaff Fire Department for providing access to the prescribed fire.
\section{Introduction}
Random graphs, as models for binary relations amongst vertices, have had a deep impact in discrete mathematics, computer science, engineering, and statistics. Modern-day data analysis, however, also involves looking at higher order relations. Therefore, there is now a growing interest to study higher dimensional models such as random simplicial complexes \cite{linial2016phase, linial2006homological, meshulam2009homological}. Our contributions in this paper can be viewed in this light. Specifically, we consider a mean-field model and provide a complete description of the distribution of weights in the minimal spanning tree and also of those in its higher dimensional analogue---the Minimal Spanning Acycle (MSA). As a corollary, we also obtain the distribution of the death times in the associated persistence diagram.
In 2014, Robert Adler \cite{adler2014article} summed up the state of the art in random topology as follows: while a lot is already known about the asymptotic behaviour of the sums of MSA weights and of the associated death times, ``almost nothing is known'' about the distributional properties of the individual values.
The work in \cite{skraba2017randomly} was the first to partially address this gap by studying the distribution of the extremal MSA weights and the extremal death times. In this work, we not only extend those results to a more general scenario but, importantly, also describe the behaviour in the bulk. In fact, this latter result is new even in the graph case.
\subsection{Synopsis of Key Contributions} Leaving the details to Section~\ref{sec:resultsterm}, we now provide some brief background and a quick summary of our main results along with some visual illustrations.
The Erd\H{o}s-R\'enyi\ graph, denoted by $G(n, p),$ is a random graph on $n$ vertices where each edge is present with probability $p$ independently. An analogue of this model in higher dimensions is the Linial-Meshulam complex, which we denote here by $Y_d(n , p)$ or by $Y(n, p)$ when the dimension $d$ is clear. This complex has $n$ vertices and the complete $(d - 1)$-dimensional skeleton; further, the $d-$dimensional faces are included with probability $p$ independently. Clearly, $Y_1(n, p)$ and $G(n, p)$ are equivalent. Here and henceforth, note that a $k-$dimensional face or simply a $k-$face refers to a subset of the vertex set with cardinality $k + 1,$ while the $k-$skeleton of a simplicial complex $\mathcal{K}$ is the collection of all those faces in $\mathcal{K}$ whose dimension is less than or equal to $k.$
Our focus in this paper is on a weighted analogue of $Y(n, p).$ Specifically, we look at the case where the weights of the $d-$faces in $Y(n, p)$ come from the uniform $[0, 1]$ distribution, while the lower dimensional faces have weights all equal to zero. This setup models the scenario where only a fraction of the actual $d-$face weights are available to the user. Our main results concern this complex and can be informally stated as follows (see Section~\ref{sec:resultsterm} and \ref{subsec:Linial.Meshulam.Complex} for the details).
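For concreteness, sampling the $d$-faces of this weighted complex can be sketched as follows; the full $(d-1)$-skeleton, whose faces all have weight zero, is left implicit:

```python
import itertools
import random

def weighted_linial_meshulam(n: int, d: int, p: float, rng=random.Random(0)):
    """Sample the d-faces of the weighted Y_d(n, p) complex: each d-face
    (a (d+1)-subset of the n vertices) is kept independently with
    probability p and, if kept, receives an independent Uniform[0,1] weight."""
    faces = {}
    for face in itertools.combinations(range(n), d + 1):
        if rng.random() < p:
            faces[face] = rng.random()
    return faces

faces = weighted_linial_meshulam(n=6, d=2, p=0.5)
```

Setting $d = 1$ and $p = 1$ recovers the complete graph with i.i.d. uniform edge weights.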
\noindent \textbf{Main Results (Informal Summary)}:
\emph{Let $d \geq 1$ and $p \in (0, 1].$ In addition, whenever it exists, let $M(n,p)$ denote a $d-$dimensional MSA (or $d-$MSA in short) in the weighted $Y(n, p)$ complex. Then, as $n \to \infty,$ the following claims hold.
\begin{enumerate}
\item \emph{Bulk}: The empirical measure related to $\{n p w(\sigma): \sigma \in M(n,p)\}$ converges to a measure $\mu$ based on the asymptotic shadow density of $Y(n, c/n),$ $c \geq 0;$ see \eqref{d:limiting.bulk.measure} for details. \\[-2ex]
%
\item \emph{Extremes}: $\{np w(\sigma) - d \log(n) + \log(d!): \sigma \in M(n,p)\}$ converges to a Poisson point process. \\[-2ex]
\end{enumerate}
}
Some brief comments on these results are in order. First, the shadow of a complex is, loosely, the complement of graph components in higher dimensions. Next, the scaling used in the second claim gives more prominence to the extremal values amongst the $d-$MSA weights. This follows from the fact that these extremal values are of the order $d\log n/n;$ see Section~\ref{subsec:proof.overview} for further details. Finally, the second claim is new only when $p \in (0, 1);$ the specific case of $p = 1$ also appears in \cite{skraba2017randomly}. The first claim, in contrast, is new for all $p;$ in fact, it is new even in the graph case. Also, we are the first to highlight the connection between the MSA weights and the shadow.
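In the $d = 1$, $p = 1$ case, the bulk statistic of our first claim can be simulated directly by running Kruskal's algorithm on a uniformly weighted complete graph; the sketch below produces the scaled weights $\{n\,w(\sigma)\}$ whose histogram appears in Fig.~\ref{fig:Histogram.and.Shadow}:

```python
import itertools
import random

def mst_weights(n, rng):
    """Kruskal's algorithm on the complete graph K_n with i.i.d. Uniform[0,1]
    edge weights: returns the n-1 MST edge weights in increasing order."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:          # union-find with path halving
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    edges = sorted((rng.random(), u, v) for u, v in itertools.combinations(range(n), 2))
    weights = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            weights.append(w)
            if len(weights) == n - 1:
                break
    return weights

rng = random.Random(0)
w = mst_weights(300, rng)
scaled = [300 * x for x in w]   # the bulk statistic {n w(sigma)}
```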
A weighted simplicial complex of the above kind can also be viewed as an evolving complex which, at time $r \in [0, 1],$ includes all those faces whose weight is less than or equal to $r.$ There is a natural persistence diagram associated with this process. This records the birth and death times of the different topological holes that appear and disappear as the process evolves. In \cite[Theorem~3]{skraba2017randomly}, it was shown that the set of death times in the $(d - 1)-$th dimension of this diagram exactly equals the set of weights in the $d-$MSA of the original weighted complex. Consequently, our results can also be stated in terms of the death times in the persistence
diagram corresponding to the weighted $Y(n, p)$ complex; see Sections~\ref{sec:resultsterm} and~\ref{subsec:Linial.Meshulam.Complex} for the exact statements.
\begin{figure}[t!]
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/300_MST.pdf}
\caption{$n = 300, d = 1$}
\label{fig:Histogram.and.Shadow.d.equal.1}
\end{subfigure}
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/75_MSA.pdf}
\caption{$n = 75, d = 2$}
\label{fig:Histogram.and.Shadow.d.equal.2}
\end{subfigure}
\caption{Normalized histogram (yellow) of $\{n w(\sigma): \sigma \in M(n, 1)\}$ and the density (red) of the shadow-based measure given in \eqref{d:limiting.bulk.measure}.
\label{fig:Histogram.and.Shadow}}
\end{figure}
We now use Figs.~\ref{fig:Histogram.and.Shadow}, \ref{fig:Extremal.Weights}, and \ref{fig:Ordinary.and.Scaled.Histograms} to provide an alternative explanation of our results. The first two figures relate to our result in the $p = 1$ case, while the third one concerns the $p \in (0, 1)$ case.
There are two scenarios in Fig.~\ref{fig:Histogram.and.Shadow}: one corresponds to a sample of the weighted $Y_1(300, 1)$ complex and the other to one of the weighted $Y_2(75, 1)$ complex. In both scenarios, the yellow plot shows the normalized histogram of the set $\{n w(\sigma) : \sigma \in M(n, 1)\},$ while the red one shows the density of the shadow-based limiting measure defined in \eqref{d:limiting.bulk.measure}. Observe that the red and the yellow plots closely resemble each other. This is the crux of our first claim above, which, loosely, states that the difference between these two plots decays to zero as $n \to \infty.$
Fig.~\ref{fig:Extremal.Weights} again shows two distinct scenarios: the first corresponds to a sample of the weighted $Y_1(500, 1)$ complex, while the second deals with one of the weighted $Y_2(100, 1)$ complex. This time, the figure plots the values in $\{n w(\sigma) - d\log (n) + \log(d!): \sigma \in M(n, 1)\}.$ As stated before, this scaling gives more prominence to the extremal weights. Accordingly, observe that the values on the extreme right of the figure have distinctly spread out. Our second claim states that these extremal values converge to a Poisson point process, while the rest go to $-\infty.$
Finally in Fig.~\ref{fig:Ordinary.and.Scaled.Histograms}, unlike Fig.~\ref{fig:Histogram.and.Shadow}, we discuss our result on the bulk for $p \in (0, 1).$ Again, we consider two broad scenarios and the value of $(n, d)$ in each is $(500, 1)$ and $(50, 2),$ respectively. In each case, we also consider three different values of $p,$ as indicated,
and look at the $d-$MSA in one sample each of the resultant $Y_d(n, p)$ complexes; we resample if the MSA does not exist. Notice that all the panels have two distinct plots: the blue one is the histogram corresponding to $\{n w(\sigma): \sigma \in M(n,p)\},$ while the yellow one corresponds to $\{n p w(\sigma) : \sigma \in M(n,p)\}.$ Clearly, unlike the blue plots, the yellow ones look similar irrespective of the $p$ values. Our result indeed captures this phenomenon. It states that, whatever the value of $p,$ the corresponding yellow histogram asymptotically converges to the red curve depicted in Fig.~\ref{fig:Histogram.and.Shadow}. While we have not shown it, a similar story unfolds in relation to our result on the extremes as well. Hence, by uniformly scaling all the weights in the MSA by $p,$ its limiting behaviour becomes independent of $p$ itself.
\begin{figure}
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width= 0.95\linewidth]{Figures/Extremes_d1_500.pdf}
\caption{$n = 500, \quad d = 1$}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{Figures/Extremes_d2_100.pdf}
\caption{$n = 100, \quad d = 2$}
\end{subfigure}
\caption{Plot of $\{n w(\sigma) - d \log(n) + \log(d!) : \sigma \in M(n, 1)\}.$
\label{fig:Extremal.Weights}}
\end{figure}
\begin{figure}[ht!]
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/Actual_Squeezed_d1.pdf}
\caption{$n = 500, \quad d = 1$}
\end{subfigure}
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/Actual_Squeezed_d2.pdf}
\caption{$n = 50, \quad d = 2$}
\end{subfigure}
\caption{Normalized histograms of $\{n p w(\sigma): \sigma \in M(n,p)\}$ and $\{n w(\sigma): \sigma \in M(n,p)\},$ depicted in yellow and blue, respectively. In the last panel, the two histograms coincide since $p = 1.$
\label{fig:Ordinary.and.Scaled.Histograms}}
\end{figure}
\subsection{Related Work}
Our work lies at the intersection of two broad strands of research: one concerning component sizes, shadow densities, and homologies of random graphs and random complexes, and the other dealing with the statistics of weights in random minimal spanning trees and acycles. In this section, we look at a few of the historical milestones in these two strands.
Erd\H{o}s\ and R\'enyi\ initiated the first strand with their work in \cite{erdos1959random}. There they showed that $\log n/n$ is a sharp asymptotic threshold for connectivity in $G(n, p).$ They also showed that, if $p_n = (\log n + c)/n$ for $c \in \mathbb{R},$ then, as $n \to \infty,$ almost all the vertices in $G(n, p_n)$ lie in one single component; further, the vertices outside this component are all isolated and their number converges in distribution to a Poisson random variable with mean $e^{-c}.$ This result was subsequently refined in \cite{Erdos:1960}, with the new statement including the following additional facts. The asymptotic order of the largest component jumps from logarithmic to linear around $1/n.$ Also, for $c > 1,$ the size of the largest component in $G(n, c/n),$ denoted $L_n(c),$ satisfies $L_n(c)/n \to 1 - t(c),$ where $t(c) \in (0, 1)$ is the unique root of
\begin{equation}
\label{e:Giant.Component.Size}
t = e^{-c(1 - t)}.
\end{equation}
A graph is a one dimensional complex, so one can ask if similar phenomena occur in random complexes as well. Since connectivity is associated with the vanishing of the zeroth homology, it is natural to look at the higher order Betti numbers to answer this question. Such a study was first carried out in \cite{linial2006homological, meshulam2009homological} and it was found that the $(d - 1)-$th Betti number of $Y(n, p)$ indeed shows a non-vanishing to vanishing phase transition at $d \log n/n$, just as in the graph case. A separate study done in \cite{kahle2016inside} also showed that this Betti number converges in distribution to a Poisson random variable with mean $e^{-c},$ when $p_n = (d \log n - \log (d!) + c)/n$ for some $c \in \mathbb{R}.$
The result on component sizes, in contrast, was not so easy to generalize. The challenge lay in coming up with a higher dimensional analogue of a component in a simplicial complex. The breakthrough came in \cite{linial2014extremal} with the introduction of the shadow. The underlying motivation was that, in a sparse graph, a giant component exists if and only if the shadow has a positive density. With this in mind, the behaviour of the $d-$dimensional shadow of $Y(n, p)$ was investigated in \cite{linial2016phase}, with two key findings. One, the size of the $d-$shadow changes from $o(n^{d + 1})$ to $\Theta(n^{d + 1})$ at some $c_*/n,$ where $c_*$ equals $1$ when $d = 1,$ but is strictly greater than $1$ for $d \geq 2.$ Two, for $c > c_*,$ the $d-$shadow of $Y(n, c/n),$ denoted $\textnormal{Sh}(Y(n, c/n)),$ satisfies $|\textnormal{Sh}(Y(n, c/n))| /\binom{n}{d + 1} \to (1 - t(c))^{d + 1},$ where $t(c) \in (0, 1)$ is the unique root of the equation
\begin{equation}
\label{eqn:c tc Alt Relation}
t = e^{-c(1 - t)^d}.
\end{equation}
Clearly, this equation matches \eqref{e:Giant.Component.Size} when $d = 1.$ Also, $|\textnormal{Sh}(Y_1(n, c/n))| = \Theta(n^2),$ while $L_1(c) = \Theta(n).$
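As an aside, the smallest root $t(c)$ of \eqref{e:Giant.Component.Size} is easy to obtain numerically: the map $t \mapsto e^{-c(1 - t)}$ is increasing, so iterating it from $0$ climbs monotonically to the smallest fixed point. A minimal Python sketch (our addition, purely for illustration):

```python
from math import exp

def giant_fraction(c, iters=1000):
    # Smallest root t(c) of t = exp(-c(1 - t)); the iteration started at 0
    # is monotone increasing, hence converges to the smallest fixed point.
    t = 0.0
    for _ in range(iters):
        t = exp(-c * (1 - t))
    return 1 - t  # limiting fraction of vertices in the giant component

frac = giant_fraction(2.0)
assert 0.79 < frac < 0.80  # at c = 2, roughly 79.7% of the vertices
```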
The second strand was pioneered by Frieze in \cite{frieze1985value}. He showed that, given a complete graph on $n$ vertices with i.i.d.\ uniform $[0, 1]$ weights on the edges, the expected sum of weights in the minimum spanning tree converges to $\zeta(3).$ This result was recently extended in \cite{hino2019asymptotic} to the setting of the $d-$MSA in the weighted $Y(n, 1)$ model. Separately, the work in \cite{skraba2017randomly} showed that the extremal weights in this $d-$MSA converge to a Poisson point process. As emphasized before, this latter result was the first to deal with the individual values instead of just the sum.
\subsection{Proof Outline}
\label{subsec:proof.overview}
We now briefly describe how we combine the different facts stated above to prove our results. Clearly, the weighted $Y(n, p)$ complex may have some $d-$faces missing. The first question to ask, then, is with what probability a $d-$spanning acycle, and hence a $d-$MSA, exists. As per \cite[Lemma~23]{skraba2017randomly}, a sufficient condition to guarantee existence is that the $(d - 1)-$th Betti number be zero. Since $p$ is a constant, this condition indeed holds for $Y(n, p)$ asymptotically \cite{linial2006homological,meshulam2009homological}. Hence, the weighted $Y(n, p)$ complex has a $d-$MSA with probability tending to one as $n \to \infty.$
For a finite $n,$ however, there is a non-trivial probability that there may not exist a $d-$MSA. Therefore, working directly with the weighted $Y(n, p)$ complex becomes a bit complicated. To overcome this, we first derive our results for $\cL(n,p),$ an intermediary object called the augmented $d-$complex, and then prove the result in the $Y(n, p)$ case. We build $\cL(n,p)$ from the weighted $Y(n, p)$ complex by simply adding all the missing $d-$faces and then assigning them the largest possible weight, i.e., $1.$ The $\cL(n,p)$ complex, thus, always has a $d-$MSA. Also, asymptotically, its $d-$MSA matches the one in the weighted $Y(n, p)$ complex. Because of this, we now only need to explain how to derive our results in the context of $\cL(n,p).$
We begin with our first claim. Let $\cL(n,p)|_{r^{-}}$ be the simplicial complex obtained from $\cL(n,p)$ by retaining only those faces with weight strictly less than $r.$ Clearly, the distributions of $\cL(n,p)|_{r^{-}}$ and $Y(n, rp),$ and hence of their shadows, are one and the same. Now, from \cite[Lemma~32]{skraba2017randomly}, a $d-$face with weight $r$ belongs to the $d-$MSA if and only if it does not lie in the shadow of $\cL(n,p)|_{r^{-}}.$ Given that this face could potentially have been any of those outside of $\cL(n,p)|_{r^{-}},$ the probability that it belongs to the $d-$MSA must then equal one minus the shadow density of $\cL(n,p)|_{r^{-}},$ i.e., one minus the size of the shadow divided by the total number of potential $d-$faces. Based on this and the shadow density results from \cite{linial2016phase}, the desired result is then easy to see.
Our proof for the second claim builds upon ideas from \cite{skraba2017randomly} and proceeds as follows. As before, note that, for any $r \in [0, 1],$ the simplicial complex $\cL(n,p)|_{r^{-}}$ has the same distribution as $Y(n, rp).$ This, in turn, implies that the distributions of the $(d - 1)-$th Betti number of $Y(n, rp)$ and the set of death times exceeding $r$ in the $(d - 1)-$th persistence diagram associated with $\cL(n,p)$ are the same. Separately, due to the equivalence between the MSA weights and the death times \cite[Theorem~3]{skraba2017randomly}, it also follows that the set of $d-$MSA weights exceeding $r$ in $\cL(n,p)$ has the same distribution as the previous two quantities. The phase transition results for the $(d - 1)-$th Betti number from \cite{erdos1959random, linial2006homological,meshulam2009homological} then show that the extremal weights in the $d-$MSA are $O(d \log n/n).$ Additionally, the Poisson result from \cite{kahle2016inside} shows that the set of $d-$MSA weights that exceed $(d\log n - \log (d!) + c)/n$ converges in distribution to a Poisson random variable with mean $e^{-c}.$ The proof of the desired Poisson point process convergence is then a technicality.
\subsection{Overview of Contents} Section~\ref{sec:resultsterm} provides the necessary terminology as well as the formal statements of our main results in the context of $\cL(n,p),$ both in the bulk and in the extremes. The proof of the result on the bulk is given in Section~\ref{sec:bulk}, while Section~\ref{sec:extremes} deals with that of the extremes. In Section~\ref{sec:extensions}, we discuss three distinct extensions of our results for $\cL(n,p),$ including the one for the weighted $Y(n, p)$ complex. We end with a discussion and some future directions; an interesting model discussed there is the online setting, where the $d-$faces are revealed one at a time.
\section{Main Results and Terminology}\label{sec:resultsterm}
In this section, we introduce the augmented $d-$complex $\cL(n,p)$ and formally state all our results in its context. Alongside, we also describe some important terms and concepts. All the while, we will presume that the reader is well versed with the basics of simplicial homology and Poisson point processes. If not, the necessary background can be found in \cite[Section 2]{skraba2017randomly} and the references therein. Note that the weighted $Y(n, p)$ case is discussed later in Section~\ref{sec:extensions}.
Throughout, $d \geq 1$ denotes an arbitrary but fixed dimension of our simplicial complexes. Most of our notations and results depend on $d,$ but as it is a constant this dependence will not always be shown. We use $\mathcal{K}$ for a generic simplicial complex. Further, we use $\mathcal{F}^k(\mathcal{K})$ or simply $\mathcal{F}^k$ (when the complex is clear) to denote the set of $k-$dimensional faces in $\mathcal{K}$. Similarly, $\beta_k(\mathcal{K})$ or simply $\beta_k$ stands for the $k-$th reduced Betti number\footnote{Throughout we assume the Betti numbers are defined using real coefficients, as in \cite{linial2016phase}.} of $\mathcal{K}.$ For a random variable $X,$ we use $\normq{X} = \left(\mathbf{E}|X|^q\right)^{1/q}$ to represent its $L^q$ norm. Lastly, we use $|S|$ to denote the cardinality of a set $S.$
We begin with the definition of the augmented $d-$complex. Recall that it is obtained from the weighted $Y(n, p)$ complex by adding the missing $d-$faces and giving them the weight value $1.$
\begin{definition}[Augmented $d-$complex]
\label{d:augmented.d.complex}
Let $\mathcal{K}_n$ denote the complete $d-$skeleton on $n$ vertices and let $\{U(\sigma): \sigma \in \mathcal{F}^d\}$ be a collection of i.i.d.\ uniform $[0,1]$ random variables. Then, for $p \in (0, 1]$ and $Y$ being a sample of $Y(n, p),$ the \emph{augmented $d-$complex} $\cL(n,p)$ is the weighted complex $(\mathcal{K}_n, w)$, where $w:\mathcal{K}_n \to [0, 1]$ is given by:
\[
w(\sigma) = \begin{cases}
0 & \text{if } |\sigma| \leq d,\\
U(\sigma) & \text{if } \sigma\in \mathcal{F}^d(Y),\\
1 & \text{otherwise}.
\end{cases}
\]
\end{definition}
Related to $\cL(n,p),$ we have the canonical filtration $\{\cL(n,p)|_s : s \in [0, 1]\},$ where $\cL(n,p)|_s:= \{\sigma: w(\sigma) \leq s\}.$ That it is a filtration can be seen from the fact that $w$ is trivially monotone, i.e., $w(\sigma) \leq w(\tau)$ whenever $\sigma \subseteq \tau$ which, in turn, ensures that $\cL(n,p)|_s$ is a simplicial complex for each $s$ and that $\cL(n,p)|_s \subseteq \cL(n,p)|_r$ for all $s \leq r,$ as desired.
Our next aim is to define spanning acycles \cite{kalai1983enumeration} and minimal spanning acycles, the main objects of our study. These are the higher dimensional generalizations of spanning trees and minimal spanning trees, respectively. Recall that a spanning tree is a subset of the edges in a connected graph that connects all the vertices together without creating any cycles. In the same spirit, a $d-$spanning acycle in a simplicial complex $\mathcal{K}$ is a subset of the $d-$faces which, when added to $\mathcal{K}^{d - 1},$ the $(d - 1)-$skeleton of $\mathcal{K},$ reduces the $(d - 1)-$th Betti number to zero without creating any new $d-$cycles. Likewise, a minimal spanning acycle in a weighted simplicial complex is simply a spanning acycle with the smallest possible total weight. As shown in \cite[Lemma~23]{skraba2017randomly}, a sufficient condition to guarantee the existence of a spanning acycle and, hence, a $d-$MSA is $\beta_{d - 1}(\mathcal{K}) = 0.$
\begin{definition}[Spanning Acycle]
Let $\mathcal{K}$ be a simplicial complex with $\beta_{d - 1}(\mathcal{K}) = 0$. Then, a \emph{$d-$spanning acycle} of $\mathcal{K}$ is a set $S$ of $d-$faces in $\mathcal{K}$ such that $\beta_{d - 1}(\mathcal{K}^{d - 1} \sqcup S) = \beta_d(\mathcal{K}^{d - 1} \sqcup S) = 0$.
\end{definition}
\begin{definition}[Minimal Spanning Acycle]
Let $(\mathcal{K}, w)$ be a weighted $d-$complex with $\beta_{d - 1}(\mathcal{K}) = 0$. Then, a \emph{$d-$minimum spanning acycle} $M$ is an element of ${\textnormal{arg}\min}_{S} \{w(S)\}$, where the minimum is taken over all the $d-$spanning acycles and $w(S)$ represents the sum of weights of the faces in $S$.
\end{definition}
It is worth noting that minimal spanning acycles are unique when the $d-$faces in $\mathcal{K}$ have unique weights; this follows, e.g., via Kruskal's algorithm \cite{skraba2017randomly}.
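Kruskal's algorithm indeed extends verbatim: scan the $d-$faces in increasing order of weight and keep a face precisely when its boundary is linearly independent of the boundaries already kept, i.e., when it creates no new $d-$cycle. The following Python sketch is our own illustration (it works over $\mathbb{Z}_2$ rather than the real coefficients assumed elsewhere; for the complete complex the relevant boundary ranks agree over any field) and checks that the $d-$MSA of a complete weighted complex on $n$ vertices has $\binom{n - 1}{d}$ faces.

```python
import random
from math import comb
from itertools import combinations

def minimal_spanning_acycle(n, d=2, seed=1):
    """Kruskal-style greedy over GF(2): keep a d-face iff its boundary is
    linearly independent of the boundaries already kept."""
    rng = random.Random(seed)
    idx = {s: i for i, s in enumerate(combinations(range(n), d))}  # (d-1)-faces
    faces = list(combinations(range(n), d + 1))                    # d-faces
    weight = {f: rng.random() for f in faces}
    basis = {}   # pivot bit -> reduced boundary vector (bitmask over (d-1)-faces)
    msa = []
    for f in sorted(faces, key=weight.get):
        v = 0
        for b in combinations(f, d):      # boundary of f: its d + 1 sub-faces
            v ^= 1 << idx[b]
        while v:
            p = v.bit_length() - 1
            if p in basis:
                v ^= basis[p]             # reduce against the current basis
            else:
                basis[p] = v              # independent: f enters the MSA
                msa.append(weight[f])
                break
    return msa

msa = minimal_spanning_acycle(6, d=2)
assert len(msa) == comb(6 - 1, 2)  # |2-MSA| = C(n - 1, d) = 10 for n = 6
```

For $d = 1$ the boundary test degenerates to the usual no-cycle check, so the same routine returns the MST edge weights.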
Next, we discuss the concept of a shadow. Introduced in \cite{linial2014extremal}, a shadow generalizes the notion of the complement of graph components to higher dimensions.
\begin{definition}[Shadow]
For a $d-$dimensional simplicial complex $\mathcal{K}$ with vertex set $V$ and having the complete $(d - 1)-$skeleton, its $d-$shadow is given by
\[
\textnormal{Sh}(\mathcal{K}) = \left\{\sigma \in \tbinom{V}{d + 1} \setminus \mathcal{F}^d: \beta_d(\mathcal{K} \sqcup \sigma) = \beta_d(\mathcal{K}) + 1 \right\},
\]
where $\tbinom{V}{d + 1}$ denotes the set of all $(d + 1)-$sized subsets of $V.$
\end{definition}
In analogy with the terms for the spectrum of random matrices, we next define the following measures. With a slight abuse of notation, let $M(n,p)$ denote the $d-$MSA in $\cL(n,p)$ as well. Also, let $D(n,p)$ denote the set of death times in the $(d - 1)-$th persistence diagram of $\{\cL(n,p)|_s : s \in [0, 1]\}.$ Separately, let $\delta$ be the Dirac measure.
\begin{definition}[Bulk Measure]
The \emph{bulk measures} $\mu_{n,p}$ and $\tilde{\mu}_{n,p}$ are random measures corresponding to the empirical distribution of the re-scaled weights in $M(n,p)$ and the re-scaled death times in $D(n,p),$ respectively. Formally, they are given by
\begin{equation}
\label{d:empirical.measure}
\mu_{n,p} := \frac{1}{\binom{n - 1}{d}} \sum_{\sigma \in M(n,p)} \delta_{n p w(\sigma)} \quad \text{ and } \quad \tilde{\mu}_{n,p} := \frac{1}{\binom{n - 1}{d}} \sum_{\Delta \in D(n,p)} \delta_{n p \Delta}.
\end{equation}
\end{definition}
\begin{definition}[Extremal Measure]
The \emph{extremal measures} $\nu_{n,p}$ and $\tilde{\nu}_{n,p}$ are random counting measures corresponding to the extremal weights in $M(n,p)$ and the extremal death times in $D(n,p),$ respectively. Their mathematical formulations are
\begin{equation}
\label{d:counting.measure}
\nu_{n,p} := \sum_{\sigma \in M(n,p)} \delta_{n p w(\sigma) - d \log(n) + \log(d!)} \quad \text{ and } \quad \tilde{\nu}_{n,p} := \sum_{\Delta \in D(n,p)} \delta_{n p \Delta - d \log(n) + \log(d!)}.
\end{equation}
\end{definition}
Finally, we define the deterministic measure $\mu$ which will serve as the limit for both $\mu_{n,p}$ and $\tilde{\mu}_{n,p}.$ For $d = 1$, let $t_* = c_* = 1$. Further, for $d\geq 2$, let $t_* \in (0, 1)$ be the unique root of
\[
(d + 1)(1 - t) + (1 + dt) \log t = 0,
\]
and let $c_* > 0$ be the constant that satisfies $t_* = e^{-c_*(1 - t_*)^d}$, i.e., $c_* = -\log t_*/(1 - t_*)^d$.
Separately, for both $d = 1$ and $d \geq 2$ cases, when $c \geq c_*,$ let $t(c) \in (0, 1)$ denote the smallest root of \eqref{eqn:c tc Alt Relation}. Building upon these terms, let $\mu$ be the measure whose density is given by
\begin{equation}
\label{d:limiting.bulk.measure}
\df{\mu(x)} = \frac{1 - s(x)}{d + 1}\df{x}, \quad x \geq 0,
\end{equation}
where
\begin{equation}
\label{d:s.Val}
s(x) =
%
\begin{cases}
0, & x \leq c_*, \\
%
\big(1 - t(x)\big)^{d + 1}, & x > c_*.
\end{cases}
\end{equation}
Indeed, this $c_*$ is the same constant that we came across in Section~\ref{subsec:proof.overview}. Also, as shown in \cite[Theorem~1.4]{linial2016phase}, $s(c)$ above represents the asymptotic density of the shadow of $Y(n, c/n),$ i.e.,
\[
s(c) = \lim_{n \to \infty} \frac{|\textnormal{Sh}(Y(n, c/n))|}{\binom{n}{d + 1}}, \quad c > 0.
\]
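For concreteness, these constants can be evaluated numerically. The sketch below (our own illustration, not part of the formal development) computes $t_*$ by bisection, obtains $c_* = -\log t_*/(1 - t_*)^d,$ and evaluates $t(c)$ and $s(c)$ for $d = 2;$ the map $t \mapsto e^{-c(1 - t)^d}$ is increasing, so iterating it from $0$ converges monotonically to the smallest root.

```python
from math import exp, log

def t_star(d, tol=1e-12):
    # Unique root in (0, 1) of (d + 1)(1 - t) + (1 + dt) log t = 0, d >= 2.
    g = lambda t: (d + 1) * (1 - t) + (1 + d * t) * log(t)
    lo, hi = 1e-9, 0.99  # g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def t_of_c(c, d, iters=10_000):
    # Smallest root of t = exp(-c (1 - t)^d) via monotone fixed-point iteration.
    t = 0.0
    for _ in range(iters):
        t = exp(-c * (1 - t) ** d)
    return t

d = 2
ts = t_star(d)
c_star = -log(ts) / (1 - ts) ** d       # threshold for a dense shadow
t = t_of_c(4.0, d)
s = (1 - t) ** (d + 1)                  # asymptotic shadow density s(4)
assert abs(t - exp(-4.0 * (1 - t) ** d)) < 1e-9
assert 2.7 < c_star < 2.8 and 0 < s < 1
```

Curves such as the red density in Fig.~\ref{fig:Histogram.and.Shadow} can be traced by evaluating $(1 - s(x))/(d + 1)$ this way over a grid of $x$ values.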
With all the ingredients available, we are now ready to state our main results. Define $K(\rho, \rho')$ to be the Kolmogorov metric between the two measures $\rho$ and $\rho'$, i.e., let
\[
K(\rho,\rho') = \sup_{x \in \mathbb{R}} \big|\rho(-\infty,x] - \rho'(-\infty,x] \big|.
\]
\begin{theorem}[Bulk limit]\label{thm:bulk}
Let $p \in (0,1]$ and $q \geq 1$. Then, the random measures $\mu_{n,p}$ and $\tilde{\mu}_{n,p}$ converge in the Kolmogorov metric to $\mu$ in $L^q$. Moreover, if $f: \mathbb{R} \to \mathbb{R}$ is a continuous function such that $\mu|f| < \infty$ and
%
\begin{equation}
\label{eqn:Suff.Cond.Lq}
\lim_{b \to \infty} \sup_{n \geq 0} \normq{\mu_{n,p} |f| - \mu_{n,p} (|f| \wedge b) \big.} = 0,
\end{equation}
then both $\mu_{n,p} f$ and $\tilde{\mu}_{n,p} f$ converge to $\mu f$ in $L^q$.
\end{theorem}
\begin{corollary}
\label{cor:Spl.Case}
Let $p \in (0,1]$ and $q \geq 1.$ Then, for the unbounded function $f(x) = x^\alpha,$ $\alpha > 0,$ the hypothesis of Theorem~\ref{thm:bulk} holds true. That is, both $\mu|f| < \infty$ and \eqref{eqn:Suff.Cond.Lq} are satisfied.
\end{corollary}
\begin{theorem}[Extremes limit]\label{thm:extremes} Let $p \in (0, 1]$. Then, both random measures $\nu_{n,p}$ and $\tilde{\nu}_{n,p}$ converge vaguely to $\mathcal{P}$ in distribution, where $\mathcal{P}$ is the Poisson point process on $\mathbb{R}$ with intensity $e^{-x} \df{x}$.
\end{theorem}
\begin{remark}
\label{r:Lnp.extension.Ynp}
In Section~\ref{sec:extensions}, we discuss three extensions of the above results. The first two relate to cases where the $d-$face weights have a generic distribution and/or have additional noisy perturbations. The third one is about the weighted $Y(n, p)$ complex, which is an example of a complex where not all the potential $d-$faces may be present.
\end{remark}
\begin{remark}
Clearly, Theorem~\ref{thm:bulk} applies when $f$ is additionally bounded since hypothesis \eqref{eqn:Suff.Cond.Lq} is then trivially satisfied. Hence, an immediate consequence of our result is that $\mu_{n,p}$ converges to $\mu$ weakly in $L^q$. However, and more importantly, our result also applies for a class of unbounded functions. In particular, Corollary~\ref{cor:Spl.Case} shows that the result holds for the case $f(x) = x^\alpha$ for $\alpha > 0$. This is an important example since Frieze's result \cite{frieze1985value} concerning the sum of weights in the MST and its recent generalization to higher dimensions by Hino and Kanazawa (\cite[Theorem~4.11]{hino2019asymptotic}) then become special consequences of Theorem~\ref{thm:bulk}. Moreover, the limiting constant $I_{d - 1}^{(\alpha)}$ obtained in \cite{hino2019asymptotic} can now be interpreted as the $\alpha-$th moment of the measure $\mu$.
\end{remark}
\begin{remark}\label{rmk:inprob}
A variant of the second part of Theorem~\ref{thm:bulk}, obtained by replacing condition \eqref{eqn:Suff.Cond.Lq} by
\begin{equation}\label{eqn:bapprox.inprob}
\lim_{b \to \infty} \sup_{n \geq 0} \Pr\{\mu_{n,p} |f| - \mu_{n,p} (|f| \wedge b) \geq \epsilon\} = 0
\end{equation}
and convergence in $L^q$ by convergence in probability, also holds. Notice that, while the condition on $f$ that we require here is weaker, the conclusion is also weaker. See Proposition \ref{prop:inprob} for details.
\end{remark}
\section{Behaviour in the Bulk}\label{sec:bulk}
In this section, we derive Theorem~\ref{thm:bulk}, Corollary~\ref{cor:Spl.Case} and, also, Proposition~\ref{prop:inprob} which concretizes the claim in Remark~\ref{rmk:inprob}. We emphasize once again that both Theorem~\ref{thm:bulk} and Proposition~\ref{prop:inprob} hold for a class of unbounded functions, examples of which are given in Corollary~\ref{cor:Spl.Case}.
An intermediate step towards proving Theorem~\ref{thm:bulk} is stated in Proposition~\ref{prop:conv on Intervals}. It concerns the limit of $\mu_{n,p} f$ in the special case where $f = \mathds{1}(c, \infty)$ for some $c \geq 0.$ Notice that this $f$ is bounded.
\begin{proposition}
\label{prop:conv on Intervals}
Let $c \geq 0$ and $q \geq 1.$ Then, $\lim_{n \to \infty} \normq{\mu_{n,p}(c, \infty) - \mu(c, \infty)} = 0.$
\end{proposition}
The proof of this result is postponed to the end of this section. Instead, we now derive Theorem~\ref{thm:bulk} by extrapolating this result to the case of unbounded functions.
\begin{proof}[Proof of Theorem~\ref{thm:bulk}]
Due to \cite[Theorem~3]{skraba2017randomly}, it suffices to derive the results only for $\mu_{n,p}.$ We first show that $K(\mu_{n,p},\mu) \to 0$ in $L^q$. Let $F_{n,p}(x) = \mu_{n,p}(-\infty,x]$ and $G(x) = \mu(-\infty,x]$.
Fix $m$ and pick $c_0, c_1, \ldots, c_m$ such that $c_0 = 0$, $c_m = \infty$ and $G(c_{i + 1}) - G(c_i) = 1/m$. Then, for $x \in [c_i, c_{i + 1}]$,
\begin{align*}
|F_{n,p}(x) - G(x)| \leq {} &\max\big\{ G(c_{i + 1}) - F_{n,p}(c_i),\; F_{n,p}(c_{i + 1}) - G(c_i)\big\} \\
%
\leq {} & \max\left\{G(c_i) + \frac{1}{m} - F_{n,p}(c_i),\; F_{n,p}(c_{i + 1}) - G(c_{i + 1}) + \frac{1}{m} \right\} \\
%
\leq {} & \sum_{j=1}^m|F_{n,p}(c_j) - G(c_j)| + \frac{1}{m}.
\end{align*}
Since the bound is independent of $x,$ it then follows that
\begin{equation}
\label{e:Kolmogorov.Relation}
K(\mu_{n,p}, \mu) = \sup_{x\in\mathbb{R}} |F_{n,p}(x) - G(x)| \leq \sum_{j=1}^m|F_{n,p}(c_j) - G(c_j)| + \frac{1}{m}.
\end{equation}
Taking the $L^q$ norm now gives
\begin{align*}
\normq{K(\mu_{n,p},\mu)\big.}
\leq {} & \normq{\sum_{j=1}^m|F_{n,p}(c_j) - G(c_j)| + \frac{1}{m}}\\
\leq {} & \sum_{j=1}^m \normq{F_{n,p}(c_j) - G(c_j)} + \frac{1}{m}.
\end{align*}
Clearly, $G(x)=1 - \mu(x,\infty)$ and $F_{n,p}(x)=1-\mu_{n,p}(x,\infty)$ almost surely. Thus, by Proposition \ref{prop:conv on Intervals}, $\normq{F_{n,p}(c_j) - G(c_j)} \to 0$ for all $j.$ Now, since $m$ is arbitrary, $\normq{K(\mu_{n,p},\mu)} \to 0,$ as desired.\medskip
To extend the convergence to unbounded functions satisfying the given conditions, we begin by assuming that $f$ is non-negative. Let $\epsilon > 0$ be arbitrary. For any $b > 0$, the triangle inequality shows
\[
\normq{\mu_{n,p} f - \mu f} \leq \normq{\mu_{n,p} f - \mu_{n,p} (f \wedge b)} + \normq{\mu_{n,p} (f \wedge b) - \mu (f \wedge b)} + \normq{\mu f - \mu (f \wedge b)}.
\]
Now pick a large enough $b > 0$ so that
\[
\sup_{n \geq 0} \big\|\mu_{n,p} f - \mu_{n,p} (f \wedge b)\big\|_q \leq \epsilon \quad \text{ and } \quad \normq{\mu f - \mu (f \wedge b)} = \mu f - \mu (f \wedge b) \leq \epsilon.
\]
Such a $b$ indeed exists due to \eqref{eqn:Suff.Cond.Lq} and the fact that $\mu f < \infty.$ Therefore,
\[
\normq{\mu_{n,p} f - \mu f} \leq 2\epsilon + \normq{\mu_{n,p}(f \wedge b) - \mu (f \wedge b)}.
\]
However, $f \wedge b$ is a bounded continuous function. Also, the Kolmogorov metric dominates the L\'evy metric, which metrizes weak convergence. Consequently, $\limsup_{n \to \infty} \normq{\mu_{n,p} f - \mu f} \leq 2 \epsilon.$ Since $\epsilon > 0$ is arbitrary, it follows that $\mu_{n,p} f$ converges to $\mu f$ in $L^q$.
It remains to deal with the case of general $f$. Clearly, $f = f^+ - f^-$, where $f^+ = \max\{f, 0\}$ and $f^- = \max\{-f, 0\}$. Furthermore, both $(f^+ - b) \; \mathds{1}[f^+ \geq b]$ and $(f^- - b) \; \mathds{1}[f^- \geq b]$ are bounded from above by $(|f| - b)\; \mathds{1}[|f| \geq b]$, whence it follows that $f^+ - (f^+ \wedge b)$ and $f^- - (f^- \wedge b)$ are bounded from above by $|f| - |f| \wedge b$. Therefore, by repeating the above arguments for $f^+$ and $f^-$ individually, it is easy to see that the result holds for general $f$ as well.
\end{proof}
Our next goal is to discuss the claim in Remark \ref{rmk:inprob}. Formally, it can be stated as follows.
\begin{proposition}\label{prop:inprob}
Let $p \in (0, 1].$ Then, $K(\mu_{n,p}, \mu) \to 0$ in probability. Moreover, if $f: \mathbb{R} \to \mathbb{R}$ is a continuous function such that \eqref{eqn:bapprox.inprob} holds for every $\epsilon > 0,$ then $\mu_{n,p} f$ converges to $\mu f$ in probability.
\end{proposition}
\begin{proof}
From Proposition~\ref{prop:conv on Intervals} and Markov's inequality, we have that $\mu_{n,p}(c, \infty)$ converges to $\mu(c, \infty)$ in probability for any $c \geq 0.$ This along with \eqref{e:Kolmogorov.Relation} then shows that $K(\mu_{n,p}, \mu) \to 0$ in probability, as desired. The fact that the Kolmogorov metric dominates the L\'evy metric then immediately shows that $\mu_{n,p} f$ converges to $\mu f$ in probability whenever $f: \mathbb{R} \to \mathbb{R}$ is a bounded continuous function.
We now discuss the case of unbounded functions. Again, as in the proof of Theorem~\ref{thm:bulk}, it suffices to derive the result assuming $f$ to be non-negative. Let $\epsilon, \eta > 0$. Now pick a $b > 0$ so that
\[
\sup_{n \geq 0} \Pr\{|\mu_{n,p} f - \mu_{n,p} (f \wedge b)| > \epsilon\} \leq \eta \quad \text{ and } \quad |\mu f - \mu (f \wedge b)| \leq \epsilon.
\]
This can be done on account of \eqref{eqn:bapprox.inprob} and the fact that $\mu f < \infty.$ We then have
\[
\{|\mu_{n,p} f - \mu f| > 3\epsilon\} \subseteq \{|\mu_{n,p} f - \mu_{n,p} (f \wedge b)| > \epsilon\} \cup \{|\mu_{n,p}(f \wedge b) - \mu (f \wedge b)| > \epsilon\},
\]
whence it follows that
\[
\Pr\{|\mu_{n,p} f - \mu f| > 3\epsilon\} \leq \eta + \Pr\{|\mu_{n,p}(f \wedge b) - \mu (f \wedge b)| > \epsilon\}.
\]
However, $f \wedge b$ is bounded and continuous and, hence, $\mu_{n,p} (f \wedge b)$ converges to $\mu (f \wedge b)$ in probability as shown above. Consequently, we have that $\limsup_{n \to \infty} \Pr\{|\mu_{n,p} f - \mu f| > 3\epsilon\} \leq \eta.$ Now, since $\epsilon, \eta$ are arbitrary, the desired result is easy to see. \end{proof}
We now exploit a recent bound on Betti numbers \cite{hino2019asymptotic} to prove Corollary~\ref{cor:Spl.Case} and thereby show that unbounded functions such as polynomials indeed satisfy the hypothesis of Theorem~\ref{thm:bulk}. Since \eqref{eqn:bapprox.inprob} is implied by \eqref{eqn:Suff.Cond.Lq}, such functions also satisfy the assumptions of Proposition~\ref{prop:inprob}. A technical result is also needed for deriving Corollary~\ref{cor:Spl.Case}. We state it here, but prove it afterwards.
\begin{lemma}
\label{lem:c tc Limiting Behaviour}
The function $t(c)$ is continuous over $(c_*, \infty).$ Moreover, $t(c) = O(e^{-c/2^d})$ as $c \to \infty.$
\end{lemma}
\begin{proof}[Proof of Corollary~\ref{cor:Spl.Case}] Lemma~\ref{lem:c tc Limiting Behaviour} shows that $x^\alpha [1 - (1 - t(x))^{d + 1}] = O(x^\alpha t(x)) = O(x^\alpha e^{-x/2^d})$ as $x \to \infty.$ It is then straightforward to see that $\mu |f| < \infty,$ as desired.
We next show that $f(x) = x^\alpha$ satisfies condition \eqref{eqn:Suff.Cond.Lq} as well. Let $\beta_{d - 1}(r),$ $r \in [0, 1],$ denote the $(d - 1)-$th Betti number of $\cL(n,p)|_r.$ Since the set of $d-$MSA weights in a (monotonically) weighted complex equals the set of death times in the associated $(d - 1)-$th persistence diagram \cite[Theorem~3]{skraba2017randomly}, it follows by arguing as in the proof of \cite[Proposition~4.9]{hino2019asymptotic} that
\begin{align*}
\mu_{n,p} f - \mu_{n,p} (f \wedge b)
&= \frac{n^\alpha p^\alpha}{\binom{n-1}{d}} \sum_{\sigma \in M(n,p)} \left[ w(\sigma)^\alpha - w(\sigma)^\alpha \wedge \frac{b}{n^\alpha p^\alpha}\right] \\
%
&= \frac{n^{\alpha} p^{\alpha} \alpha}{\binom{n-1}{d}} \int_{b^{1/\alpha}/np}^{1}\beta_{d - 1}(r) r^{\alpha - 1} \df{r}.
\end{align*}
Now substituting $s = r np$ shows that
\begin{align*}
\mu_{n,p} f - \mu_{n,p} (f \wedge b)
&= \frac{\alpha}{\binom{n-1}{d}} \int_{b^{1/\alpha}}^{np} s^{\alpha - 1}\beta_{d - 1}(s/np) \df{s}\\
&= \frac{\alpha n^d}{\binom{n-1}{d}} \int_{b^{1/\alpha}}^{np} s^{\alpha - 1} \frac{\beta_{d - 1}(s/np)}{n^d} \df{s}.
\end{align*}
Subsequently,
Minkowski's inequality applied to integrals gives
\[
\normq{\mu_{n,p} f - \mu_{n,p} (f \wedge b)} \leq \frac{\alpha n^d}{\binom{n-1}{d}} \int_{b^{1/\alpha}}^{np} s^{\alpha - 1} \normq{\dfrac{\beta_{d - 1}(s/np)}{n^{d}}} \df{s}.
\]
Finally, since $\cL(n,p)|_r,$ $r \in [0, 1),$ has the same distribution as that of $Y(n, pr),$ we get
\begin{equation}
\label{eq:minkowski}
\normq{\mu_{n,p} f - \mu_{n,p} (f \wedge b)} \leq \frac{\alpha n^d}{\binom{n-1}{d}} \int_{b^{1/\alpha}}^{np} s^{\alpha - 1} \normq{\dfrac{\beta_{d - 1}(Y(n, s/n))}{n^{d}}} \df{s}.
\end{equation}
Now, $0 \leq \beta_{d - 1}(Y(n, s/n)) \leq \binom{n}{d}$ and, hence,
\[
0\leq \dfrac{\beta_{d - 1}(Y(n, s/n))}{n^{d}} \leq \frac{1}{d!}.
\]
Moreover, (4.14) from \cite{hino2019asymptotic} shows that there is an $\ell > 1 \vee q\alpha$ such that
\[
\mathbf{E}\left[\frac{\beta_{d - 1}(Y(n, s/n))}{n^{d}}\right] \leq 1 \wedge \dfrac{C}{s^{\ell}}.
\]
Therefore, writing the exponent $q$ as $(q - 1) + 1,$ we get
\[
\normq{\dfrac{\beta_{d - 1}(Y(n, s/n))}{n^{d}} } \leq \left(\frac{1}{d!}\right)^{(q-1)/q}\left(1 \wedge \dfrac{C^{1/q}}{s^{\ell/q}} \right)
\]
\]
for all $s \in [0, n]$. Now, applying this bound in \eqref{eq:minkowski}, it follows that for all sufficiently large $b$
\[
\normq{\mu_{n,p} f - \mu_{n,p}(f \wedge b)} \leq \frac{K}{\ell/q - \alpha} b^{-(\ell/q - \alpha)/\alpha}
\]
for some constant $K \geq 0.$ This then shows that the condition in \eqref{eqn:Suff.Cond.Lq} holds, as desired.
\end{proof}
Finally, it remains to prove Lemma~\ref{lem:c tc Limiting Behaviour} and Proposition~\ref{prop:conv on Intervals}. Let $\psi(t) := -\ln t/(1 - t)^d,$ $t \in (0, 1).$
\begin{proof}[Proof of Lemma~\ref{lem:c tc Limiting Behaviour}]
In
\cite[Appendix B]{linial2016phase}, it is shown that $\psi(t),$ $t \in (0, t_*),$ is a continuous function whose value monotonically decreases from $\infty$ to $\lim_{t \to t_*^-} \psi(t)$ as $t$ increases from $0$ to $t_*.$ The strict monotonicity implies that $t(c) = \psi^{-1}(c)$ is continuous over $(c_*, \infty)$ and that $\lim_{c \to \infty} t(c) = 0.$ The latter fact in turn shows $1 - t(c) \geq 1/2$ (say) for all sufficiently large values of $c.$ When combined with \eqref{eqn:c tc Alt Relation}, we then have that $t(c) = O( e^{-c/2^d}),$ as desired.
\end{proof}
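As a numerical illustration of Lemma~\ref{lem:c tc Limiting Behaviour} (not part of the proof), the following Python sketch inverts $\psi$ by bisection in the case $d = 1$ and checks the decay bound $t(c) \leq e^{-c/2^d} = e^{-c/2},$ which follows from the fixed-point relation $t = e^{-c(1 - t)^d}$ once $t \leq 1/2$; all numerical choices here are ours.

```python
# Numerical sketch of the c-t(c) behaviour for d = 1:
# psi(t) = -ln(t) / (1 - t)^d is strictly decreasing, so t(c) = psi^{-1}(c)
# can be found by bisection; we then check the decay t(c) <= e^{-c/2}.
import math

def t_of_c(c, d=1, tol=1e-12):
    """Invert psi(t) = -ln(t) / (1 - t)**d by bisection on (0, 1)."""
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        psi = -math.log(mid) / (1.0 - mid) ** d
        if psi > c:        # psi is decreasing, so the root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# t(c) decreases to 0, and t(c) <= e^{-c/2} once t(c) <= 1/2 (d = 1)
values = [(c, t_of_c(c)) for c in range(2, 21)]
```

For instance, $t(2) \approx 0.203$ while $e^{-1} \approx 0.368,$ consistent with the claimed bound.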
To prove Proposition~\ref{prop:conv on Intervals}, we need two additional technical results. Let
\begin{equation}
\label{d:func.g}
g(c) =
%
\begin{cases}
0, & 0 \leq c \leq c_*, \\
%
\displaystyle ct(c)\big(1 - t(c)\big)^d + \frac{c}{d + 1}\big(1 - t(c)\big)^{d + 1} - \big(1 - t(c)\big), & c > c_*,
\end{cases}
\end{equation}
and $h(c) = g(c) + 1 - c/(d + 1).$
\begin{lemma}
\label{lem:Beh d - 1 Betti Number}
For $c \geq 0,$
\[
\lim_{n \to \infty} \normq{\frac{\beta_{d - 1}(Y(n, c/n))}{n^d} - \frac{h(c)}{d!}} = 0.
\]
\end{lemma}
\begin{proof}
This is shown in the proof of \cite[Theorem~4.11]{hino2019asymptotic}.
\end{proof}
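In the graph case $d = 1,$ Lemma~\ref{lem:Beh d - 1 Betti Number} says that the number of connected components of the Erd\H{o}s-R\'enyi graph $G(n, c/n),$ divided by $n,$ concentrates around $h(c)/d! = h(c).$ The following simulation sketch (an illustration only; the sample size, seed, and the specialization to $d = 1$ are our choices) compares the empirical component count with $h(c)$ computed from \eqref{d:func.g}.

```python
# Simulation sketch of the Betti number limit for d = 1, where
# beta_0(Y(n, c/n)) is the number of connected components of G(n, c/n)
# and the limiting constant is h(c)/d! = h(c).
import math, random

def t_of_c(c, tol=1e-12):
    lo, hi = tol, 1.0 - tol            # psi(t) = -ln(t)/(1-t) is decreasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if -math.log(mid) / (1.0 - mid) > c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h(c, c_star=1.0):                  # d = 1 specialization of g and h
    if c <= c_star:
        return 1.0 - c / 2.0
    t = t_of_c(c)
    g = c * t * (1 - t) + (c / 2.0) * (1 - t) ** 2 - (1 - t)
    return g + 1.0 - c / 2.0

def components(n, p, rng):
    """Number of connected components of G(n, p) via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    comps = n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
    return comps

rng = random.Random(7)
n, c = 2000, 2.0
empirical = components(n, c / n, rng) / n   # should be close to h(2.0)
```

For $c = 2,$ one has $t(2) \approx 0.203$ and $h(2) \approx 0.162,$ which the empirical component density matches closely.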
\begin{lemma}
\label{lem:h is in fact int of 1 - shadow density}
For any $c \geq 0$, we have $\mu(c,\infty) = h(c)$.
\end{lemma}
\begin{proof}
We first show that $\mu(c,\infty)$ is finite for all $c \geq 0.$ Clearly,
\[
\mu(c,\infty) \leq \frac{2}{d+1}\max\left\{\int_0^{c_*} (1 - s(x)) \df{x}, \int_{c_*}^{\infty} (1 - s(x)) \df{x}\right\}.
\]
Since $t(c) \in (0, 1]$ for all $c \geq c_*$, it follows from \eqref{d:s.Val} and Lemma~\ref{lem:c tc Limiting Behaviour} that $1 - s(x) = O(t(x)) = O(e^{-x/2^d}).$ From this, it is then easy to see that $\mu(c,\infty)$ is finite, as desired.
Next, we claim that $h$ is continuous over $[0, \infty).$ For $c \neq c_*,$ this follows directly from its definition and the conclusion from Lemma~\ref{lem:c tc Limiting Behaviour} that $t(c)$ is continuous for $c > c_*.$ On the other hand, the continuity at $c_*$ holds since $\lim_{c \to c_*^+} t(c) = \lim_{c \to c_*^+} \psi^{-1}(c) = t_*.$
Our final claim is that $\lim_{c \to \infty} h(c) = \lim_{c \to \infty} \mu(c, \infty) = 0$ and $h'(c) = \frac{\df}{\df c}\mu(c, \infty)$ for all $c \neq c_*.$ This and the continuity of $h(c)$ and $\mu(c, \infty)$ at $c_*$ will then show that $h(\cdot) = \mu(\cdot, \infty),$ as desired.
Since $\mu(c, \infty), c \geq 0,$ is finite, we have $\lim_{c \to \infty} \mu(c, \infty) = 0.$ On the other hand, $t(c) \in (0, 1]$ implies $h(c) = O(ct(c))$ as $c \to \infty;$ combined with Lemma~\ref{lem:c tc Limiting Behaviour}, this shows that $\lim_{c \to \infty} h(c) = 0.$
Regarding the statement on the derivatives, we begin by showing that the two functions are differentiable for all $c \neq c_*.$ This statement holds for $h$ on account of \eqref{d:func.g} and \cite[Claim~5.3]{linial2016phase}, while it is true for $\mu(\cdot, \infty)$ simply by definition. From \cite[Claim~5.3]{linial2016phase}, we also have that
\[
h'(c) =
%
\begin{cases}
- \frac{1}{d + 1}, & c < c_*, \\
%
- \frac{1}{d + 1}\big[1 - \big(1 - t(c)\big)^{d + 1}\big], & c > c_*.
\end{cases}
\]
From this, it is then easy to see that $h'(c) = \frac{\df}{\df c}\, \mu(c,\infty)$ for all $c \neq c_*$, as desired.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:conv on Intervals}]
Let $\{D_i\}$ be the set of death times in the $(d - 1)-$th persistence diagram of the filtration $\{\cL(n,p)|_s: s \in [0, 1]\}.$ Then, for all sufficiently large $n,$ we have
\begin{align}
\binom{n - 1}{d} \mu_{n,p}(c, \infty) = \sum_{\sigma \in M(n,p)}\delta_{np w(\sigma)}(c,\infty)
= {} & \left|\left\{\sigma \in M(n,p) : w(\sigma) > \frac{c}{p n}\right\}\right| \nonumber \\
= {} & \left|\left\{i : D_i > \frac{c}{p n}\right\}\right| \label{e:conv.death.times} \\
= {} & \beta_{d - 1}\left(\cL(n,p)\big|_{c/(p n)}\right) \nonumber \\
\overset{d}{=} {} & \beta_{d - 1}\left(Y\left(n,\frac{c}{n}\right)\right), \nonumber
\end{align}
where \eqref{e:conv.death.times} follows due to \cite[Theorem~3]{skraba2017randomly}, while the last relation holds since $\cL(n,p)|_{c/(p n)}$ has the same distribution as $Y(n, c/n).$ From Lemma~\ref{lem:Beh d - 1 Betti Number} and the fact that ${n-1 \choose d}/n^d \to 1/d!$, it then follows that $\lim_{n \to \infty} \normq{\mu_{n,p}(c,\infty) - h(c)} = 0.$ The desired result now follows from Lemma \ref{lem:h is in fact int of 1 - shadow density}.
\end{proof}
\section{Behavior in the extremes}\label{sec:extremes}
We derive Theorem~\ref{thm:extremes} here by building upon the proofs for \cite[Propositions 33, 35, and 36]{skraba2017randomly}, which look at the $p = 1$ case. The reader should, however, note the following distinction between the result in \cite{skraba2017randomly} and the one here. In \cite{skraba2017randomly}, Poisson point process convergence is shown to hold directly for the extremal values amongst the actual $d-$MSA face weights in $\mathcal{L}(n, 1)$ and amongst the actual death times in the associated persistence diagram. In contrast, we show that the same result holds for $\cL(n,p)$ when the actual values are additionally scaled by $p,$ as in Theorem~\ref{thm:bulk}.
We begin our analysis by summarizing the key steps in the proofs of \cite[Proposition~33, 35, and 36]{skraba2017randomly}. There, via the factorial moment method, it is first shown that the extremal values in the set of nearest face distances\footnote{For a $(d - 1)-$face, the nearest face distance is simply the minimum of the weights on the $d-$faces incident on it; in the $d = 1$ case, this can also be seen as the nearest neighbour distance.} in $\mathcal{L}(n, 1)$ converge to a Poisson point process. This result is then extended to the death times in the $(d - 1)-$th persistence diagram of $\{\mathcal{L}(n, 1)|_s : s \in \mathbb{R}\}.$ This is done by exploiting the relation between the nearest face distances and the $(d - 1)-$th Betti number, cf. the proof of \cite[Theorem~1.10]{kahle2016inside}. Finally, the result for the $d-$MSA face weights is obtained by using the equivalence between the set of these values and the corresponding death times, as given in \cite[Theorem~3]{skraba2017randomly}. Notice that this train of thought proceeds in a direction opposite to how the proof for Theorem~\ref{thm:bulk} goes. More specifically, the above result is first shown for the death times and then extended to the $d-$MSA face weights instead of the other way around.
As per the above ideas, to derive Theorem~\ref{thm:extremes}, we need to sequentially show Poisson point process convergence for three counting measures: the first based on the nearest face distances, the second based on the death times, and the third based on the $d-$MSA face weights. The last two measures are precisely $\tilde{\nu}_{n,p}$ and $\nu_{n,p},$ respectively, while we denote the first one to be $\nu'_{n, p}$ and define it to be
\[
\nu'_{n, p} := \sum_{\tau \in \mathcal{F}^{d - 1}(\cL(n,p))} \delta_{n p C(\tau) - d \log (n) + \log (d!)},
\]
where $C(\tau) := \min\{ w(\sigma): \sigma \in \mathcal{F}^d(\cL(n,p)) \text{ and } \sigma \supset \tau\} .$%
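To make the nearest face distances concrete, consider $d = 1$ and $p = 1$: the $(d - 1)-$faces are vertices, $C(v)$ is the minimum weight over the edges incident to $v,$ and the rescaling $n p C(\tau) - d \log n + \log d!$ reduces to $nC(v) - \log n.$ The following simulation sketch (an illustration with our own choices of $n,$ seed, and trial count) checks the Poisson prediction $\mathbf{E}[\nu'_{n,1}(c, \infty)] \approx e^{-c}$ at $c = 0.$

```python
# Sketch of the nearest-face-distance point process for d = 1, p = 1:
# the rescaled points n*C(v) - log(n) should approximate a Poisson
# process with intensity e^{-x} dx, so E[#{points > c}] ~ e^{-c}.
import math, random

def rescaled_points(n, rng):
    best = [1.0] * n                      # C(v): min incident edge weight
    for u in range(n):
        for v in range(u + 1, n):
            w = rng.random()              # i.i.d. uniform edge weight
            if w < best[u]:
                best[u] = w
            if w < best[v]:
                best[v] = w
    return [n * c - math.log(n) for c in best]

rng = random.Random(3)
n, trials, c = 200, 200, 0.0
counts = [sum(1 for x in rescaled_points(n, rng) if x > c)
          for _ in range(trials)]
mean = sum(counts) / trials               # ~ e^{-c} = 1, up to finite-n bias
```

For finite $n$ the expectation is $n(1 - (c + \log n)/n)^{n - 1},$ slightly below $e^{-c}$; the empirical mean reflects this small bias.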
\begin{proof}[Proof of Theorem~\ref{thm:extremes}]
As stated above, our arguments mirror those used in the proof of Propositions~33, 35, and 36 in \cite{skraba2017randomly}. Hence, we only highlight the major steps.
\emph{Convergence of $\nu'_{n, p}$:} As in the proof of \cite[Proposition~33]{skraba2017randomly}, it suffices to show that the $\ell-$th factorial moment of $\nu'_{n, p}(I)$ satisfies
\begin{equation}
\label{eqn:NF.factorial.conv}
\lim_{n \to \infty} \mathbf{E}[{\nu'_{n, p}}^{(\ell)}(I)] = \left(\int_{I} e^{-x} \df{x}\right)^\ell
\end{equation}
for any $I \subseteq \mathbb{R}$ that is a finite union of disjoint intervals.
Furthermore, as shown in \cite[p.33]{skraba2017randomly}, one way to establish the above relation is to let $\bar{C}(\tau) := np C(\tau) - d \log n + \log d!$ and show that
\begin{equation}
\label{eqn:NF.factorial.conv2}
\lim_{n \to \infty} \ell! \binom{\binom{n}{d}}{\ell} \mathbf{E} \left[\prod_{i = 1}^\ell \mathds{1}[\bar{C}(\tau_i) \in (c_i, \infty)] \right] = e^{-\sum_{i = 1}^{\ell} c_i},
\end{equation}
where $\tau_1, \ldots, \tau_\ell$ are distinct $(d - 1)-$faces and $c_1, \ldots, c_\ell$ are distinct real numbers. We derive this last equality now.
For any $\tau \in \mathcal{F}^{d - 1}$, observe that
\[
\mathds{1}[\bar{C}(\tau) \in (c, \infty)] = \mathds{1}\left[C(\tau) > c(n) /p\right],
\]
where $c(n) = (c + d \log n - \log d!)/n$. Since $\cL(n,p)|_s$ has the same distribution as $Y(n, ps)$ for $s \in [0,1)$, the event $[\bar{C}(\tau) \in (c, \infty)]$ is equivalent to the face $\tau$ being isolated in $Y(n, c(n))$ whenever $c(n)/p \in [0, 1)$. Consequently,
\[
\ell! \binom{\binom{n}{d}}{\ell} \mathbf{E} \left[\prod_{i = 1}^\ell \mathds{1}[\bar{C}(\tau_i) \in (c_i, \infty)] \right] \sim \frac{n^{d\ell}}{(d!)^{\ell}} \prod_{i = 1}^{\ell}\left(1 - \frac{c_i + d\log(n) - \log(d!)}{n}\right)^{n - \kappa_i}
\]
for some arbitrary but fixed (w.r.t. $n$) constants $\kappa_1, \ldots, \kappa_\ell$ that depend only on the number of vertices common to the faces $\tau_1, \ldots, \tau_\ell$. This is exactly the relation proven in \cite[p.33]{skraba2017randomly}. Hence, \eqref{eqn:NF.factorial.conv2} and, in turn, \eqref{eqn:NF.factorial.conv} hold.
This shows that $\nu'_{n, p}$ converges vaguely to $\mathcal{P}$ in distribution, as desired. \\
\emph{Convergence of $\tilde{\nu}_{n,p}$:} Based on the arguments in the proof of \cite[Proposition~35]{skraba2017randomly}, we only need to establish that, for $c \in \mathbb{R},$
\begin{equation}
\label{eqn:NF.DT.diff}
\lim_{n \to \infty} \mathbf{E}|\tilde{\nu}_{n,p}(c, \infty) - \nu'_{n, p}(c, \infty)| = 0.
\end{equation}
Let $c(n)$ be as defined above and $N_{d - 1}(Y(n, p))$ be the number of isolated $(d - 1)-$faces in $Y(n, p)$. Then, note that $\tilde{\nu}_{n,p}(c, \infty) = \beta_{d - 1}(\cL(n,p)|_{c(n)/p})$ and $\nu'_{n, p}(c, \infty) = N_{d - 1}(\cL(n,p)|_{c(n)/p})$ whenever $c(n)/p \in [0, 1).$ Again, since $\cL(n,p)|_s$ has the same distribution as $Y(n, ps)$ for $s \in [0,1)$, the relation in \eqref{eqn:NF.DT.diff} now follows trivially from \cite[Lemma~34]{skraba2017randomly}.
Thus, $\tilde{\nu}_{n,p}$ converges vaguely to $\mathcal{P}$ in distribution. \\
\emph{Convergence of $\nu_{n,p}$:} This is immediate from \cite[Theorem~3]{skraba2017randomly} and the convergence of $\tilde{\nu}_{n,p}.$
\end{proof}
\section{Extensions}
\label{sec:extensions}
Here we discuss three different extensions of our results. The first one relates to the case where the $d-$face weights come from some generic distribution. The next one concerns the robustness of our results to noisy perturbations in the $d-$face weights. The third and final one is about the randomly weighted $Y(n, p)$ complex wherein the potential $d-$faces may not be all present.
\subsection{Generic distribution for $d-$face weights}
\label{subsec:Generic.Weights}
With a slight abuse of notation, let $\cL(n,p)$ itself denote the generalization of the complex given in Definition~\ref{d:augmented.d.complex} wherein i.) $\{U(\sigma): \sigma \in \mathcal{F}^d\}$ are real-valued i.i.d. random variables with some generic distribution $\mathscr{F}$ and ii.) the function $w$ is such that $w(\sigma)$ equals $-\infty$ when $|\sigma| \leq d,$ equals $U(\sigma)$ when $\sigma \in \mathcal{F}^d(Y),$ and equals $\infty$ otherwise. We claim that a version of our results also holds in this setup. However, for this, we need to slightly modify the definition of the measures given in \eqref{d:empirical.measure} and \eqref{d:counting.measure}. In particular, we need to replace $w(\sigma)$ and $\Delta$ in these definitions with $\mathscr{F}(w(\sigma))$ and $\mathscr{F}(\Delta),$ respectively, i.e., let
\begin{equation}
\label{d:empirical.measure.generic}
\mu_{n,p} = \frac{1}{|M(n,p)|} \sum_{\sigma \in M(n,p)} \delta_{np \mathscr{F}(w(\sigma))}
\end{equation}
and so on.
\begin{corollary}
\label{cor:generic.reults}
Suppose $\mathscr{F}$ is continuous. Then,
Theorems~\ref{thm:bulk} and \ref{thm:extremes} hold for the modified measures.
\end{corollary}
\begin{proof}
Since $\mathscr{F}$ is continuous, $\{\mathscr{F}(U(\sigma)): \sigma \in \mathcal{F}^d(\mathcal{K}_n)\}$ is
a set of i.i.d. $U[0, 1]$ random variables. With this in mind, let $\mathcal{L}_\mathscr{F}(n, p) = (\mathcal{K}_n, w_\mathscr{F})$ be the weighted complex, where $w_{\mathscr{F}}(\sigma) = \mathscr{F}(w(\sigma))$ for all $\sigma \in \mathcal{K}_n.$ Clearly, $\mathcal{L}_\mathscr{F}(n, p)$ has the same distribution as the augmented $d-$complex given in Definition~\ref{d:augmented.d.complex}. Therefore, Theorems~\ref{thm:bulk} and \ref{thm:extremes} readily apply to it. Furthermore, the analogues of the measures given in \eqref{d:empirical.measure} and \eqref{d:counting.measure} for $\mathcal{L}_{\mathscr{F}}(n, p)$ resemble the modified measures described above. The only thing that remains to be checked is the relationship of the $d-$MSAs in $\cL(n,p)$ and $\mathcal{L}_{\mathscr{F}}(n, p).$ However, since any distribution function is non-decreasing, a $d-$MSA in one is also a $d-$MSA in the other and vice-versa. The claim is now easy to see.
\end{proof}
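The proof above rests on the probability integral transform: if $U(\sigma) \sim \mathscr{F}$ with $\mathscr{F}$ continuous, then $\mathscr{F}(U(\sigma))$ is uniform on $[0, 1].$ The following sketch illustrates this with $\mathrm{Exp}(1)$ weights (our choice of distribution and sample size, for illustration only).

```python
# Sketch of the probability integral transform behind the corollary:
# if the U(sigma) are i.i.d. with continuous distribution function F,
# then the F(U(sigma)) are i.i.d. uniform on [0, 1].
import math, random

rng = random.Random(1)
F = lambda x: 1.0 - math.exp(-x)    # Exp(1) distribution function
samples = [rng.expovariate(1.0) for _ in range(100_000)]
transformed = [F(x) for x in samples]

# empirical uniformity check: mean ~ 1/2 and variance ~ 1/12
mean = sum(transformed) / len(transformed)
var = sum((t - mean) ** 2 for t in transformed) / len(transformed)
```

Since $\mathscr{F}$ is non-decreasing, sorting the raw weights and sorting the transformed weights produce the same order, which is why the $d-$MSAs coincide.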
\subsection{Noisy perturbations in $d-$face weights} Here we establish the robustness of our results to additional noisy perturbations in the $d-$face weights. Consider the following further generalization of the $\cL(n,p)$ complex described in Section~\ref{subsec:Generic.Weights} above.
\begin{definition}
Let $\mathcal{K}_n$ be the complete $d-$skeleton on $n$ vertices and let $\{U(\sigma) : \sigma \in \mathcal{F}^d\}$ be i.i.d. real-valued random variables with a common distribution $\mathscr{F}.$ Further, let $\{\epsilon_n(\sigma): \sigma \in \mathcal{F}^d\}$ be a separate set of random variables denoting noise in the $d-$face weights; these need not be independent nor identically distributed. Then, for $p \in (0, 1]$ and $Y$ being a sample of $Y(n, p),$ the weighted complex with perturbations $\mathcal{L}'(n, p)$ is $(\mathcal{K}_n, w'),$ where $w': \mathcal{K}_n \to \mathbb{R} \cup \{-\infty, +\infty\}$ satisfies
\[
w'(\sigma) =
%
\begin{cases}
-\infty & \text{ if $|\sigma| \leq d,$} \\
%
U(\sigma) + \epsilon_n(\sigma) & \text{ if $\sigma \in \mathcal{F}^d(Y),$} \\
%
\infty & \text{ otherwise.}
\end{cases}
\]
\end{definition}
Our aim here is to show that the main results also hold for $\mathcal{L}'(n, p)$ if $\|\epsilon_n\|_\infty := \max |\epsilon_n(\sigma)|$ decays to zero sufficiently fast. Towards that, let $\cL(n,p)$ and $w$ be exactly as defined in Section~\ref{subsec:Generic.Weights}, but this time we couple the $\{U(\sigma): \sigma \in \mathcal{F}^d\}$ values to the ones in the definition of $\mathcal{L}'(n, p).$ Let $M(n,p)$ and $D(n, p)$ be as before. Similarly, define $M'(n, p)$ and $D'(n, p)$ with respect to $\mathcal{L}'(n, p).$ Finally, let $\mu_{n,p}, \tilde{\mu}_{n,p}, \nu_{n,p},$ and $\tilde{\nu}_{n,p}$ be the measures as defined in Section~\ref{subsec:Generic.Weights}, and let $\mu_{n,p}',$ $\tilde{\mu}_{n,p}',$ $\nu_{n,p}',$ and $\tilde{\nu}_{n,p}'$ denote their analogues obtained by replacing $\mathscr{F}(w(\sigma)),$ $\mathscr{F}(\Delta),$ $M(n,p),$ and $D(n, p)$ with $\mathscr{F}(w'(\sigma)),$ $\mathscr{F}(\Delta'),$ $M'(n, p),$ and $D'(n, p),$ respectively.
\begin{corollary}
Let $p \in (0, 1]$ and $q \geq 1.$ Suppose $\mathscr{F}$ is Lipschitz continuous with Lipschitz constant $\zeta > 0.$ Then the following statements hold.
%
\begin{enumerate}
\item Bulk: Let $\mu$ be as in \eqref{d:limiting.bulk.measure}. If $n \|\epsilon_n\|_\infty \to 0$ in probability, then the random measures $\mu_{n,p}'$ and $\tilde{\mu}_{n,p}'$ converge in the Kolmogorov metric to $\mu$ in $L^q.$ Moreover, if $f: \mathbb{R} \to \mathbb{R}$ is a continuous function such that $\mu|f| < \infty$ and $ \lim_{b \to \infty} \sup_{n \geq 0} \normq{\mu_{n,p}' |f| - \mu_{n,p}' (|f| \wedge b) \big.} = 0,$ then both $\mu_{n,p}'f$ and $\tilde{\mu}_{n,p}'f$ converge to $\mu f$ in $L^q.$
\item Extremes: If $n \|\epsilon_n\|_\infty \to 0$ in probability, then both $\nu_{n,p}'$ and $\tilde{\nu}_{n,p}'$ converge vaguely in distribution to $\mathcal{P},$ where $\mathcal{P}$ is the Poisson point process on $\mathbb{R}$ with intensity $e^{-x} \df{x}$.
\end{enumerate}
\end{corollary}
\begin{proof}
The $p = 1$ case in Statement (2) was proved in \cite[Theorem~7]{skraba2017randomly}. The $p \in (0, 1)$ case can be derived similarly from the unperturbed version given in Corollary~\ref{cor:generic.reults} here.
Consider Statement (1). We only discuss the $\mu_{n,p}'$ result since the $\tilde{\mu}_{n,p}'$ case can be dealt with similarly. For the $\mu_{n,p}'$ case, it suffices to show that an analogue of Proposition~\ref{prop:conv on Intervals} holds. This is because the arguments from the proof of Theorem~\ref{thm:bulk} can then be again used to get the actual result.
Let $\xi_n := \zeta n p \|\epsilon_n\|_\infty.$ From\footnote{There is a typo in the statement of \cite[Lemma~38]{skraba2017randomly}. In the second and third displays, $\gamma(D_i)$ should be $\gamma(D_i')$ and $\gamma(\phi(\sigma_i))$ must be $\gamma(\phi'(\sigma'_i )).$ This follows from Theorem~4 in ibid.} \cite[Lemma~38]{skraba2017randomly}, we have
\[
\inf_\gamma \max_{\sigma \in M(n,p)} |w(\sigma) - \gamma(w(\sigma))| \leq \|\epsilon_n\|_\infty,
\]
where the infimum is over all the bijections $\gamma: \{w(\sigma): \sigma \in M(n,p)\} \to \{w'(\sigma'): \sigma' \in M'(n, p)\}.$ Hence, for any measurable set $K \subseteq \mathbb{R},$ it follows that
\begin{equation}
\label{e:subsetRelation}
\mu_{n,p}(K^{-\xi_n}) \leq \mu_{n,p}'(K) \leq \mu_{n,p}(K^{\xi_n}),
\end{equation}
where, for $\xi > 0,$
\[
K^{\xi} := \{x \in \mathbb{R}: d(x, K) \leq \xi\} \quad \text{ and } \quad K^{-\xi} := \{x \in \mathbb{R}: (x - \xi, x + \xi) \subset K\}.
\]
Now, fix an arbitrary $\xi > 0.$ Then,
\begin{align}
|\mu_{n,p}'(c, \infty) - \mu_{n,p} & (c, \infty)| \nonumber \\
%
\leq {} & \max\left\{|\mu_{n,p}(c - \xi_n, \infty) - \mu_{n,p}(c, \infty)|, |\mu_{n,p}(c, \infty) - \mu_{n,p}(c + \xi_n, \infty)|\right\} \label{e:Approx.mup'.by.mup} \\
%
= {} & \mu_{n,p}(c - \xi_n, c + \xi_n] \nonumber \\
%
\leq {} & \mu_{n,p}(c - \xi, c + \xi] \mathds{1}[\xi_n \leq \xi] + \mu_{n,p}(c - \xi_n, c + \xi_n] \mathds{1}[\xi_n > \xi] \nonumber\\
%
\leq {} & \mu_{n,p}(c - \xi, c + \xi] + \mathds{1}[\xi_n > \xi] \label{e:.mup.prob.measure} \\
%
\leq {} & |\mu_{n,p}(c - \xi, c + \xi] - \mu(c - \xi, c+ \xi]| + \mu(c - \xi, c+ \xi] + \mathds{1}[\xi_n > \xi], \nonumber
\end{align}
where \eqref{e:Approx.mup'.by.mup} follows from \eqref{e:subsetRelation}, while the second term in \eqref{e:.mup.prob.measure} is obtained by using the fact that $\mu_{n,p}(c - \xi_n, c+ \xi_n] \leq 1$ which itself holds since $\mu_{n,p}$ is a probability measure.
A simple application of Proposition~\ref{prop:conv on Intervals}, together with the given condition on $\|\epsilon_n\|_\infty$ and the arbitrariness of $\xi,$ then shows that $|\mu_{n,p}'(c, \infty) - \mu_{n,p}(c, \infty)| \to 0$ in $L^q,$ as desired.
\end{proof}
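The matching bound used above has a simple $d = 1$ incarnation: the sorted vector of MST edge weights moves by at most $\|\epsilon_n\|_\infty$ in the sup norm under an $\ell_\infty$ perturbation of the edge weights. The following sketch checks this numerically (our own parameters; an illustration of the stability fact, not of the proof itself).

```python
# Sketch of the d = 1 stability fact: the sorted MST edge-weight vector
# of K_n is 1-Lipschitz in the l-infinity norm of the edge weights.
import random
from itertools import combinations

def mst_weights(n, w):
    """Sorted edge weights of the MST of K_n (Kruskal with union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    out = []
    for weight, (u, v) in sorted((w[e], e) for e in w):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            out.append(weight)
    return out

rng = random.Random(5)
n, eps = 40, 1e-3
edges = list(combinations(range(n), 2))
w = {e: rng.random() for e in edges}
w_noisy = {e: w[e] + rng.uniform(-eps, eps) for e in edges}

a, b = mst_weights(n, w), mst_weights(n, w_noisy)
gap = max(abs(x - y) for x, y in zip(a, b))   # should be at most eps
```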
\subsection{Weighted $Y(n, p)$ complex}
\label{subsec:Linial.Meshulam.Complex} Here, we have a sample $Y$ of $Y(n, p)$ whose $d-$faces are assigned i.i.d. weights with a common distribution $\mathscr{F}$, while all the lower dimensional faces are assigned weight $-\infty.$ This clearly differs from the $\cL(n,p)$ complex in Section~\ref{subsec:Generic.Weights} since not all the potential $d-$faces may be present here. Despite this, we now show that a version of our results holds for this complex as well. As before, we only talk about the $\mu_{n,p}$ and $\nu_{n,p}$ cases.
The first thing one needs to ensure is that there are enough $d-$faces to guarantee the existence of a $d-$MSA. This is not hard. In fact, since $p$ is a fixed constant, we have from \cite[Theorem~1.1]{linial2006homological} and \cite[Theorem~1.1]{meshulam2009homological} that $\lim_{n \to \infty} \Pr\{\beta_{d- 1}(Y(n, p)) = 0\} = 1.$ It then follows from \cite[Lemma~23]{skraba2017randomly} that the probability that a $d-$MSA exists tends to $1.$
Next, let $\mathcal{L}_Y(n, p)$ be the complex obtained from the given $Y(n, p)$ complex by adding the missing $d-$faces and giving them weight $+\infty.$ Clearly, $\mathcal{L}_Y(n, p)$ has the same distribution as the $\cL(n,p)$ complex described in Section~\ref{subsec:Generic.Weights}. Further, since the added faces have weight $+\infty,$ the $d-$MSAs in $Y(n, p)$ and $\mathcal{L}_Y(n, p)$ coincide almost surely outside the event $\mathcal{B}_{n, p} := \{\beta_{d - 1}(Y(n, p)) \neq 0\};$ the almost sure part deals with the case of non-unique weights of the $d-$faces in $Y(n, p).$ Now, let $\hat{\mu}_{n,p} = \mathds{1}_{\mathcal{B}_{n, p}^c} \mu_{n,p} + \mathds{1}_{\mathcal{B}_{n, p}} \delta_0,$ where $\delta_0$ is the unit measure at $0,$ and $\mu_{n,p}$ is as in \eqref{d:empirical.measure.generic}. Then, for any continuous function $f$ satisfying \eqref{eqn:Suff.Cond.Lq}, we have $\hat{\mu}_{n,p} f \to \mu f$ in $L^q,$ as desired. This can be seen via the following triangle inequality:
\begin{align*}
\|\hat{\mu}_{n,p} f - \mu_{n,p} f\|_q = {} & \|(\hat{\mu}_{n,p} f - \mu_{n,p} f) \mathds{1}_{\mathcal{B}_{n, p}}\|_q \\
%
\leq {} &\|(\hat{\mu}_{n,p} f) \mathds{1}_{\mathcal{B}_{n, p}}\|_q + \|(\mu_{n,p} |f| - \mu_{n,p} (|f| \wedge b)) \mathds{1}_{\mathcal{B}_{n, p}}\|_q \\
%
{} & \phantom{2ex} + \|\mu_{n,p}(|f| \wedge b) \mathds{1}_{\mathcal{B}_{n, p}}\|_q \\
%
\leq {} & |f(0)| [\Pr(\mathcal{B}_{n, p})]^{1/q} + \sup_{n \geq 0 }\|\mu_{n,p} |f| - \mu_{n,p} (|f| \wedge b)\|_q + b [\Pr(\mathcal{B}_{n, p})]^{1/q}.
%
\end{align*}
The arguments for extending the $\nu_{n,p}$ result are even simpler. Let $\hat{\nu}_{n,p} = \mathds{1}_{\mathcal{B}_{n, p}^c} \nu_{n,p} + \mathds{1}_{\mathcal{B}_{n, p}} \delta_0.$ Then, for any bounded continuous function $f: \mathbb{R} \to \mathbb{R}$ with bounded support, we have
\[
\|\hat{\nu}_{n,p} f - \nu_{n,p} f\|_1 \leq \|(\hat{\nu}_{n,p} f - \nu_{n,p} f) \mathds{1}_{\mathcal{B}_{n, p}}\|_1 \leq |f(0)|\Pr(\mathcal{B}_{n, p}) + \sqrt{\mathbf{E}[(\nu_{n,p} f)^2]} \sqrt{\Pr(\mathcal{B}_{n, p})},
\]
where the last relation follows from the Cauchy-Schwarz inequality. Therefore, $\|\hat{\nu}_{n,p} f - \nu_{n,p} f\|_1 \to 0$ as $n \to \infty.$ An application of Slutsky's theorem now shows that $\hat{\nu}_{n,p} \to \mathcal{P},$ as desired.
\section{Summary and Future Directions}
In this work, we went beyond the sum and studied the distribution of the individual weights in a random $d-$MSA---both in the bulk and in the extremes. One of our main contributions here is an explicit connection between the $d-$MSA weights and the $d-$shadow.
In the future, one of the scenarios we are interested in exploring is the streaming setup. Here, the $d-$faces along with their weights are revealed one by one, and the $d-$MSA is incrementally updated by either including the new face and updating the $d-$MSA estimate or by discarding the face altogether. Note that we do not assume the faces are revealed in any particular order. The goal then is to understand how the distribution of weights in the $d-$MSA changes with time.
Our preliminary conjecture on how the sum of these weights will change in the uniformly weighted setup is given in Figure~\ref{fig:Online.MSA}. In this figure, we consider two scenarios: i.) $n = 100$ and $d = 1$ and ii.) $n = 30$ and $d = 2.$ In both scenarios, the weights of the $d-$faces are i.i.d.\ uniform $[0, 1]$ random variables which are revealed one at a time in an arbitrary order. That is, the weight of each $d-$face is initially presumed to be $1$ and is then replaced with the actual value when it gets revealed. Accordingly, we begin with an arbitrary $d-$MSA and then sequentially update it as the weight of a new face gets revealed.
Let $M_n(k)$ denote the updated $d-$MSA after the weight of the $k-$th face is revealed. Also, let $c_1 = n/\tbinom{n-1}{d}$ and $c_2 = \binom{n}{d + 1} \mu f$ for $f(x) = x.$
The blue and red curves in the figure show the values of $c_1 w(M_n(k))$ and $c_2/k,$ respectively. The red curve is our guess for $c_1 w(M_n(k))$ based on the results in this paper. Finally, the black curve shows the constant value $\mu x.$ Clearly, the blue and red curves closely match, and both approach the black one as $k$ increases.
\begin{figure}[ht]
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/Streaming_MST.pdf}
\caption{$n = 100, d = 1$}
\label{fig:stream-first}
\end{subfigure}
\begin{subfigure}{.495\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/Streaming_MSA.pdf}
\caption{$n = 30, d = 2$}
\label{fig:stream-second}
\end{subfigure}
\caption{Comparison of the scaled weight (blue) of the actual $d-$MSA with our conjecture (red). The values of $c_1$ and $c_2$ are $n/\binom{n - 1}{d}$ and $ \binom{n}{d + 1} \mu x,$ respectively. The black curve denotes the value of $\mu x.$ Recall that $\mu x$ is $\zeta(3) \approx 1.2$ for $d=1$ and $1.56$ for $d=2$. \label{fig:Online.MSA}}
\end{figure}
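The streaming experiment can be sketched as follows for $d = 1$ (a minimal illustration, not the code used to produce Figure~\ref{fig:Online.MSA}): edge weights of $K_n$ are revealed one at a time, unrevealed edges provisionally carry weight $1,$ and the MST is recomputed after each reveal; we track $c_1 w(M_n(k))$ against the conjectured $c_2/k,$ with $\mu x = \zeta(3)$ for $d = 1.$

```python
# Streaming MST sketch for d = 1: reveal uniform edge weights of K_n
# one at a time (unrevealed edges presumed to have weight 1) and track
# the scaled MST weight against the conjectured c2 / k.
import random
from itertools import combinations

def mst_weight(n, weights):
    """Total weight of the MST of K_n via Kruskal with union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total, used = 0.0, 0
    for w, (u, v) in sorted((w, e) for e, w in weights.items()):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
            if used == n - 1:
                break
    return total

def streaming_mst(n, seed=0):
    rng = random.Random(seed)
    edges = list(combinations(range(n), 2))
    rng.shuffle(edges)                 # arbitrary reveal order
    weights = {e: 1.0 for e in edges}  # unrevealed weights presumed 1
    c1 = n / (n - 1)                   # n / binom(n - 1, d) for d = 1
    c2 = len(edges) * 1.2020569        # binom(n, d + 1) * zeta(3)
    trace = []
    for k, e in enumerate(edges, start=1):
        weights[e] = rng.random()      # reveal the actual weight
        trace.append((k, c1 * mst_weight(n, weights), c2 / k))
    return trace

trace = streaming_mst(30)              # n = 30, d = 1
```

Once all weights are revealed, the scaled MST weight should lie near $\zeta(3) \approx 1.202,$ matching the black curve in the figure.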
Some of the other directions that we wish to pursue in the future are as follows.
\begin{enumerate}
\item \emph{Geometric random graphs and complexes}: Extending our results from the Erd\H{o}s-R\'enyi\ and Linial-Meshulam models to geometric random graphs and geometric complexes and understanding differences between the two scenarios.
\item \emph{Large deviation results and central limit theorems}: Obtaining general large deviation results for the Kolmogorov distance between $\mu_{n,p}$ and $\mu$ as well as central limit theorems characterizing the variance of the weight distribution.
\item \emph{Rates of convergence}: Deriving convergence rates in the bulk and the extremes setting.
\end{enumerate}
\section*{Acknowledgements}
The authors would like to thank Matthew Kahle, Omer Bobrowski, Primoz Skraba, Ron Rosenthal, Robert Adler, Christina Goldschmidt, D. Yogeshwaran, and Takashi Owada for useful suggestions. Nicolas Fraiman would like to acknowledge funding from UNC's Junior Faculty Award supported by IBM and R.J. Reynolds Industries. Sayan Mukherjee would like to acknowledge funding from NSF DEB-1840223, NIH R01 DK116187-01, HFSP RGP0051/2017, NSF DMS 17-13012, and NSF CCF-1934964. A portion of this work was done when Gugan Thoppe was a postdoc with Sayan Mukherjee at Duke University. There, his work was supported by the NSF grant DMS 17-13012. His current research at IISc is supported by IISc Start Up Grants SG/MHRD-19-0054 and SR/MHRD-19-0040.
\bibliographystyle{plain}
\section{Introduction}
In a previous paper \cite{us-imp} we studied an improved quantization for spherically symmetric loop quantum gravity. Earlier work \cite{us} had considered a constant polymerization parameter, similarly to the ``$\mu_0$'' quantization scheme in loop quantum cosmology, whereas the improved quantization is similar to the ``$\bar{\mu}$'' quantization scheme \cite{ashtekarsingh}. Other approaches involving improved quantizations have also been explored in \cite{improved}. We observed that the singularity was removed, but we did not analyze in detail what happened to the space-time beyond the region where the singularity used to be. Here we complete that study. We find that in that region there exist semi-classical quantum states for which the theory behaves like a quantum version of general relativity coupled to an effective anisotropic fluid \cite{BH-fluid}
that violates the dominant energy condition. In the highest curvature region there is a space-like transition surface, something that went unnoticed in \cite{us-imp}. The space-time continues into a white hole geometry, as in Ref. \cite{aos}. However, in this work we consider a different regularization for the parametrized observable associated with the shift function. Its very definition requires the choice of a slicing, and the new regularization avoids an undesirable dependence on it in the semiclassical limit.
The organization of this paper is as follows. In section II we discuss the physical sector of the quantum theory, focusing on semiclassical sectors. In section III we introduce a horizon penetrating slicing based on Painlev\'e--Gullstrand coordinates and show how it can be used to connect to a white hole space-time. We end with a discussion.
\section{Physical sector of the quantum theory}
The physical sector of the theory is obtained by combining Loop Quantum Gravity quantization techniques with the Dirac quantization programme for constrained theories. In summary, we start with a kinematical Hilbert space in the loop representation adapted to spherically symmetric spacetimes for the geometrical sector $[(K_\varphi,E^\varphi),(K_x,E^x)]$, together with a standard representation for the spacetime mass and its conjugate momentum $(M,P_M)$. A suitable basis of kinematical states is the one provided by spherically symmetric spin networks, tensored with the standard states for the matter sector in the mass representation. Then, we represent the scalar constraint as a well-defined operator on the kinematical Hilbert space (for the diffeomorphism constraint we instead work with the related finite group of transformations, mimicking the full theory).
Following the construction of Ref. \cite{us-imp}, the physical sector of the theory is encoded in physical states (solutions to the scalar constraint) endowed with a suitable inner product and a set of physical observables. This is achieved, for instance, by applying group averaging techniques for both the quantum scalar constraint and the group of finite spatial diffeomorphisms (see also Refs. \cite{us,gowdy}). We focus our study on some of the simplest semi-classical states. Quantum states consist of spatial spin networks labeled by the ADM mass $M$ (a Dirac observable) and integer numbers $k_i$ that characterize the radii of the spheres of symmetry associated with each vertex of the network. The semi-classical states we are going to consider here are given by superpositions in the mass centered at $M_0$ and of width $\delta M_0$ and are therefore associated with a fixed discrete structure in space (see \cite{us-imp} for more details). They provide excellent approximations to the classical geometry in regions of small curvature compared to the Planck scale. Concretely, we consider the semi-classical states
\begin{eqnarray}\nonumber
|\psi\rangle&=&\frac{1}{\delta M_0}\int dM e^{iM P_0/\hbar}\cos\left[\frac{\pi(M-M_0)}{2\delta M_0}\right]\Theta(M-M_0+\delta M_0)\Theta(M_0+\delta M_0-M)\\\label{eq:psi}
&&\times|M,k_S,\ldots,k_0,\ldots,k_{-S}\rangle
\end{eqnarray}
with $k_0<k_{j}$ for all $j\neq 0$, namely, $j=-S,\ldots,-1,1,\ldots,S$, where
\begin{equation}
k_0={\rm Int}\left[\left(\frac{2 G M_0 \Delta}{4\pi \ell_{\rm Pl}^3}\right)^{2/3}\right],
\end{equation}
times the Planck length squared determines the smallest area of the 2-spheres in the theory. This corresponds to the improved quantization, where $\Delta$ is the area gap. Besides, we choose $M_0\gg m_{\rm Pl}$ and
\begin{equation}
\delta M_0 \leq \frac{3}{2}\left(\frac{4\pi \ell_{\rm Pl}^3}{2G\Delta}\right)^{2/3}M_0^{1/3}.
\end{equation}
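Let us briefly indicate the origin of this bound. Since $k_0\simeq \left(2 G M_0 \Delta/4\pi \ell_{\rm Pl}^3\right)^{2/3}$, a fluctuation $\delta M_0$ in the mass displaces $k_0$ by
\begin{equation}
\delta k_0 \simeq \frac{\partial k_0}{\partial M_0}\,\delta M_0 = \frac{2}{3}\left(\frac{2 G \Delta}{4\pi \ell_{\rm Pl}^3}\right)^{2/3} M_0^{-1/3}\,\delta M_0 \leq 1,
\end{equation}
where the last inequality saturates precisely at the upper bound for $\delta M_0$ given above. Hence, the superposition in the mass remains compatible with a single value of $k_0$, i.e.\ with a fixed discrete structure in space.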
The states in Eq. \eqref{eq:psi} belong to a family of sharply peaked semiclassical states in the mass and with support on a concrete spin network (states with higher dispersion in the mass will require superpositions of different spin networks). This choice considerably simplifies the analysis of the effective geometries. As we discussed in our previous papers, the quantum theory has additional observables to the ones encountered in classical treatments \cite{kucharthiemann}
which are the ADM mass and the time at infinity. The additional observables emerge from the discrete nature of the spin network treatment and are associated with the $k_i$'s, which in turn correspond to the areas of the spheres of symmetry associated with each vertex of the spin network. One can also consider states that are superpositions in $M$. The analysis remains the same as long as the states are peaked around a value of $M$.
In addition to physical states, the physical observables representing space-time metric components will be defined through suitable parametrized observables. They act as local operators on each vertex of the spin network. Furthermore, they involve point holonomies that are chosen to be compatible with the superselection sectors of the physical Hilbert space (see Ref. \cite{us-imp} for more details). {Some of the basic parametrized observables are
\begin{align}\label{eq:hex}
&\hat E^x(x_j)|M,\vec k\rangle=\hat O(z(x_j))|M,\vec k\rangle=\ell_{\rm Pl}^2 k_{j(x_j)}|M,\vec k\rangle
,\\\label{eq:hdex}
& \hat M|M,\vec k\rangle=M|M,\vec k\rangle,
\end{align}
where $z(x)$ is a suitable gauge function that codifies the freedom in the choice of radial reparametrizations.
}
For the components of the space-time metric on stationary slicings we have, for instance, the lapse and shift,\footnote{In Ref. \cite{us-imp} for the shift we adopted the regularization $K_\varphi(x_j)\to \sin\left(2{\bar\rho}_j K_\varphi(x_j)\right)/2{\bar\rho}_j$, but it introduces an undesirable slicing dependence that is avoided with the present regularization. Besides, the representation that we adopt here for the square of the shift function as a parametrized observable is compatible with the superselection rules of the quantum numbers $\nu_j$ of the kinematical spin networks as it was discussed in Ref. \cite{us-imp}.}
\begin{equation}\label{eq:q-lapshi}
\hat N^2(x_j) :=\frac{1}{4}\frac{([\hat E^x(x_j)]')^2}{(\hat E^\varphi(x_j))^2},\quad
{[\hat N^x(x_j)]^2 ={\frac{\hat E^x(x_j)}{(\hat E^\varphi(x_j))^2}}\widehat{\frac{\sin^2\left({\bar\rho}_j K_\varphi(x_j)\right)}{{\bar\rho}^2_j}}},
\end{equation}
where
\begin{equation}
\label{eq:hephi}
(\hat E^\varphi(x_j))^2 = \frac{\left[(\hat E^x(x_j))'\right]^2/4}{1+\frac{{\sin^2\left(\bar\rho_j K_\varphi(x_j)\right)}}{\bar\rho_j^2} -\frac{2 G \hat M}{\sqrt{|\hat E^x(x_j)|}}},
\end{equation}
{where we polymerized $K_\varphi$} with $\bar \rho$ the polymerization parameter of the improved quantization,
\begin{equation}
\bar \rho^2 = \frac{\Delta}{4\pi \hat E^x}.
\end{equation}
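Let us note that in the low curvature regime, where $\hat E^x\gg \Delta$ and hence $\bar\rho_j\ll 1$, the polymer corrections become negligible,
\begin{equation}
\frac{\sin^2\left(\bar\rho_j K_\varphi\right)}{\bar\rho_j^2}=K_\varphi^2+O\left(\bar\rho_j^2 K_\varphi^4\right),
\end{equation}
and Eq. \eqref{eq:hephi} reduces to the classical expression for $(E^\varphi)^2$ in terms of $K_\varphi$, $E^x$ and $M$.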
We choose the $k$'s in the one-dimensional spin network in our physical state and the gauge function $z(x)$ such that {
\begin{align}\label{eq:hex2}
&\hat E^x(x_j)=\ell_{\rm Pl}^2{\rm Int}\left[\frac{x_j^2}{\ell_{\rm Pl}^2}\right],
\\\label{eq:hdex2}
& [\hat E^x(x_j)]'|M,\vec k\rangle=\frac{\ell_{\rm Pl}^2}{\delta x}\,{\rm Int}\left[\frac{(x_j+\delta x)^2-x_j^2}{\ell_{\rm Pl}^2}\right]|M,\vec k\rangle
,
\end{align}
}with $x_j=\delta x\,|j|+x_0$ and $j\in\mathbb{Z}$, where
\begin{equation}
x_0=\ell_{\rm Pl}\sqrt{{\rm Int}\left[\left(\frac{2 G M \Delta}{4\pi \ell_{\rm Pl}^3}\right)^{2/3}\right]}.
\end{equation}
Besides, we will choose $\delta x=\ell_{\rm Pl}$ as in the first paper \cite{us-imp}, although we will discuss the consequences of the limiting choices (for a uniform lattice) $\delta x=\frac{\ell_{\rm Pl}^2}{2x_0}$ and $\delta x=x_0$. The different spacings $\delta x$ just mentioned correspond to different choices of states in the physical space; all of them lead to the same semiclassical behavior but differ in the deep quantum regime close to the singularity, as one would expect. The quantum regime corresponds to small values of $k_i$, where, if one were to consider a superposition of states, small changes in the $k_i$'s would lead to large fluctuations in the properties of the states.
Then, the metric components take the following form in terms of the previous operators
\begin{eqnarray}
\hat g_{tt}(x_j) &=& -\left(\hat N^2-\hat g_{xx}[\hat N^x]^2\right),\quad \hat g_{tx}(x_j) = \hat g_{xx}{\sqrt{[\hat N^x]^2}},\nonumber\\
\hat g_{xx}(x_j) &=& \frac{(\hat E^\varphi)^2}{\hat E^x}, \quad g_{\theta\theta}(x_j)=\hat E^x,\quad g_{\phi\phi}(x_j)=\hat E^x\sin^2\theta.\nonumber
\end{eqnarray}
Let us restrict the study to the family of stationary slicings determined by the condition
\begin{equation}\label{eq:slice}
\widehat{\sin^2\left(\bar\rho_j K_\varphi(x_j)\right)}=[\hat F(x_j)]^2
\end{equation}
where some specific choices of $F(x_j)$ will be studied below. However, any viable choice must be such that $F(x)$ is real and $F(x)\in[-1,1]$. Now, one can easily construct the operators corresponding to the components of the spacetime metric. They are given by
\begin{eqnarray}
\hat g_{tt}(x_j) &=& -\left(1-\frac{\hat r_S}{\sqrt{\hat E^x}}\right),\quad \hat g_{tx}(x_j) = -\sqrt{\frac{\pi}{\Delta}}\frac{\left\{\widehat{\left[E^x\right]'}\right\}{\sqrt{\hat F^2}}}{\sqrt{1-\frac{\hat r_S}{\sqrt{\hat E^x}}+\frac{4\pi \hat E^x \hat F^2}{\Delta}}},\nonumber\\
\hat g_{xx}(x_j) &=& \frac{\left\{\widehat{\left[E^x\right]'}\right\}^2}{4 \hat E^x\left(1-\frac{\hat r_S}{\sqrt{\hat E^x}}+\frac{4\pi \hat E^x \hat F^2}{\Delta}\right)}, \quad\hat g_{\theta\theta}(x_j)=\hat E^x,\quad \hat g_{\phi\phi}(x_j)=\hat E^x\sin^2\theta,\nonumber
\end{eqnarray}
with $\hat r_S = 2G\hat M$. The effective metric is defined as $g_{\mu\nu}=\langle \hat g_{\mu\nu}\rangle$, where the expectation value is computed on the extended physical state $|\psi\rangle$ we presented above. We will focus on the leading order corrections when the dispersion in the mass can be neglected. In this case, we can just remove the hats in the previous expression and denote this contribution by ${}^{(0)} g_{\mu\nu}(x_j)$. In addition, we will take a continuum limit that was discussed in our first paper. Namely, $x_j=\delta x\, |j|+x_0$ is replaced by $(|x|+x_0)$, with $x\in \mathbb{R}$ { and the integer part function ${\rm Int}[\cdot]$ will be dropped from all expressions. This continuum limit means} that the effective geometries bounce when they reach $x=0$.
\section{Painlev\'e-Gullstrand coordinates: black hole to white hole transition}\label{sec:class}
We are interested in spatial slicings that are horizon penetrating and asymptotically flat. For instance, ingoing Painlev\'e-Gullstrand coordinates are a well-known choice that meets these requirements. Besides, the time coordinate follows the proper time of a free-falling observer. The slicing is defined by the condition $\hat F(x_j) = \hat F_1(x_j)$ where
\begin{align}\label{eq:gf-f1}
\hat F_1(x_j)=\bar\rho\sqrt{\frac{\hat r_S}{\sqrt{\hat E^x}}}.
\end{align}
This choice is equivalent to a lapse operator $\hat N(x_j)=\hat I$. Besides, one can easily see that in the semiclassical limit $x_j\to |x|+x_0$ we have $F_1(x)<1$ for all $x\neq 0$, while $F_1(x=0)=1$. This is important since this choice will allow us to completely probe the high curvature region of the effective geometries.
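This behavior can be verified explicitly. Using the improved-dynamics relation $\bar\rho^2=\Delta/(4\pi E^x)$ (the plaquette condition $4\pi\bar\rho^2 E^x=\Delta$) together with $E^x=(|x|+x_0)^2$ in the semiclassical limit, we get
\begin{equation}
F_1^2(x)=\bar\rho^2\,\frac{r_S}{\sqrt{E^x}}=\frac{\Delta\, r_S}{4\pi (|x|+x_0)^{3}},
\end{equation}
which, dropping the integer part function in the definition of $x_0$ so that $x_0^3\simeq r_S\Delta/(4\pi)$, gives $F_1^2(x=0)=1$ and $F_1^2(x)<1$ for all $x\neq 0$.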
The effective metric components can be obtained as in Ref. \cite{us-imp}. One gets
\begin{eqnarray}\nonumber
{}^{(0)} g_{tt}(x) &=& -\left(1-\frac{r_S}{|x|+x_0}\right)\,,\\
{}^{(0)} g_{tx}(x) &=& -{\rm sign}(x)\sqrt{\frac{r_S}{|x|+x_0}}\left(1+\frac{\delta x}{2(|x|+x_0)}\right)\,,\\\nonumber
{}^{(0)} g_{xx}(x) &=& \left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2, \quad {}^{(0)} g_{\theta\theta}(x)=(|x|+x_0)^2,\quad {}^{(0)} g_{\phi\phi}(x)=(|x|+x_0)^2\sin^2\theta.\label{eq:hatgmunu3}
\end{eqnarray}
For this slicing, the low curvature regions occur when $F(x)\simeq 0$ or, equivalently, at $x\to\pm\infty$. Concretely, at $x\to+\infty$ the effective metric approaches a classical black hole metric in ingoing Painlev\'e-Gullstrand coordinates sufficiently fast, while for $x\to-\infty$ it approaches a classical white hole metric in outgoing Painlev\'e-Gullstrand coordinates sufficiently fast. On the other hand, as we will see below, the curvature reaches its maximum when $F(x)= 1$, namely, at $x=0$.
\begin{figure}[ht]
{\centering
\includegraphics[width = 0.55\textwidth]{schild-imp}
}
\caption{Penrose diagram of the effective geometry determined by the slicing in Eq. \eqref{eq:gf-f1}.
Black and green lines indicate low and high curvature regions, respectively. Continuous lines represent smooth regions while dotted lines are associated with a discrete geometry. Dashed lines indicate that the spacetime diagram continues up and down.
}
\label{fig:bh-wh}
\end{figure}\begin{figure}[ht]
{\centering
\includegraphics[width = 0.79\textwidth]{gmunu-BH-WH}
}
\caption{The values of the $tt$ component of the metric and the inverse of the $xx$ component for the metric in diagonal form. When the former vanishes, horizons arise. Notice that in the region between the two horizons the discreteness is significant, as represented by the separation of the dots (although in the plot we do not show all the points of the lattice but only one out of every fifty).}
\label{gtt}
\end{figure}
In what follows, we refer to figure \ref{fig:bh-wh} {(see the similarities with the Penrose diagram of Ref. \cite{aos})}.
One can see that the condition ${}^{(0)} g_{tt}(x)=0$ has two real solutions in $x$, corresponding to a classical black hole horizon at $x_{BH}>0$ and a white hole horizon at $x_{WH}<0$. In the spacetime regions with $x>x_{BH}$ or $x<x_{WH}$, the surfaces $x={\rm const}$ are time-like, and correspond to untrapped regions. In the region right behind the black hole horizon, $x<x_{BH}$, the $x={\rm const}$ hypersurfaces are space-like. This region is a trapped black hole interior. Moving inwards, the curvature reaches its maximum at $x=0$. This space-like hypersurface connects the trapped black hole region with an anti-trapped white hole region. It is the so-called transition surface \cite{aos}. The anti-trapped white hole region extends all the way from $x=0$ to the white hole horizon at $x=x_{WH}$. In all this region, $x={\rm const}$ hypersurfaces are still space-like. Once the white hole horizon $x=x_{WH}$ is crossed to the outside region, spacetime is untrapped again and $x={\rm const}$ hypersurfaces are again time-like.
In order to illustrate all these properties, it is convenient to first write the effective metric in its diagonal form. (Although the theory does not recover the full diffeomorphism invariance of the classical theory in the quantum regions, diagonalizing the metric remains a valid mathematical tool.) It can be easily obtained by introducing the change of coordinates
\begin{equation}
dt \to dt+\frac{{}^{(0)}g_{tx}(x)}{{}^{(0)}g_{tt}(x)}dx.
\end{equation}
This transformation amounts to the change
\begin{equation}
{}^{(0)}g_{xx}(x) \to {}^{(0)}\tilde g_{xx}(x) = \frac{\left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2}{\left(1-\frac{r_S}{|x|+x_0}\right)}, \quad {}^{(0)} g_{tx}(x) \to {}^{(0)}\tilde g_{tx}(x) = 0,
\end{equation}
while all other components remain as
\begin{align}
{}^{(0)}g_{tt}(x) &\to {}^{(0)}\tilde g_{tt}(x) = -\left(1-\frac{r_S}{|x|+x_0}\right), \\
{}^{(0)} g_{\theta\theta}(x) &\to {}^{(0)}\tilde g_{\theta\theta}(x) = (|x|+x_0)^2, \quad {}^{(0)} g_{\varphi\varphi}(x) \to {}^{(0)}\tilde g_{\varphi\varphi}(x) = (|x|+x_0)^2\sin^2\theta.
\end{align}
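This is just the usual completion of squares in the line element: one can check directly that
\begin{equation}
{}^{(0)}\tilde g_{xx}(x)={}^{(0)}g_{xx}(x)-\frac{\left[{}^{(0)}g_{tx}(x)\right]^2}{{}^{(0)}g_{tt}(x)}=\left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2\left[1+\frac{\frac{r_S}{|x|+x_0}}{1-\frac{r_S}{|x|+x_0}}\right]=\frac{\left(1+\frac{\delta x}{2(|x|+x_0)}\right)^2}{1-\frac{r_S}{|x|+x_0}},
\end{equation}
in agreement with the expression for ${}^{(0)}\tilde g_{xx}(x)$ given above.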
In figure \ref{gtt} we show two components of the effective metric in its diagonal form. Where they vanish, a horizon forms and the coordinate system becomes singular. However, we should remember that around $x\simeq x_0$ spacetime is discrete and the continuous line is just an interpolation. Therefore, the metric will be well defined provided the horizons are not located on a vertex of the lattice.
We have also studied the effective stress-energy tensor that encodes the main deviations from the classical theory. It is defined as
\begin{equation}
T_{\mu\nu}:=\frac{1}{8\pi G} G_{\mu\nu},
\end{equation}
where $G_{\mu\nu}$ is the Einstein tensor. $T_{\mu\nu}$ is characterized by the effective energy density $\rho$ and the radial and tangential pressure densities, $p_x$ and $p_{||}$, respectively. They are defined by means of
\begin{equation}
\rho^{ext} := T_{\mu\nu}\frac{X^\mu X^\nu}{X^\rho X_\rho},
\end{equation}
\begin{equation}
p_x^{ext} := T_{\mu\nu}\frac{r^\mu r^\nu}{r^\rho r_\rho},
\end{equation}
and
\begin{equation}
p_{||}^{ext} := T_{\mu\nu}\frac{\theta^\mu \theta^\nu}{\theta^\rho \theta_\rho},
\end{equation}
where $X^\mu$ is the Killing vector field that is time-like in the regions in which $x={\rm const}$ hypersurfaces are time-like. $r^\mu$ and $\theta^\mu$ are the vector fields pointing in the radial and $\theta$-angular directions, respectively. {When the Killing vector field $X^\mu$ is space-like, namely, in the regions in which $x={\rm const}$ hypersurfaces are space-like, $r^\mu$ becomes time-like. Therefore,
\begin{equation}
\rho^{int} := T_{\mu\nu}\frac{r^\mu r^\nu}{r^\rho r_\rho},
\end{equation}
\begin{equation}
p_x^{int} := T_{\mu\nu}\frac{X^\mu X^\nu}{X^\rho X_\rho},
\end{equation}
while $p_{||}^{int} = p_{||}^{ext}$ since $\theta^\mu$ remains space-like. We will assume that these effective space-times can be approximated by a smooth and continuous geometry everywhere, even at the transition surface. This assumption, as we mentioned, fails in the most quantum region. However, we expect that $T_\mu^\nu$ (a quantity only valid when geometry is smooth) will still give us qualitative hints about quantum geometry corrections there.}
In figure \ref{tmunu} we show the components of the stress-energy tensor $T_{\mu\nu}$, or equivalently, the components of the Einstein tensor (up to a factor $(8\pi G)$) {for the choice $\delta x=\ell_{\rm Pl}$. From them it is easy to extract the energy densities and pressures in each region of these effective space-times. }
\begin{figure}[ht]
{\centering
\includegraphics[width = 0.79\textwidth]{Tmunu-BH-WH}
}
\caption{The stress energy tensor of the effective metric ${}^{(0)}\tilde g_{\mu\nu}(x)$. This plot corresponds to $\delta x=\ell_{\rm Pl}$, namely, $s=1$.}
\label{tmunu}
\end{figure}
It is straightforward to compute the value of the energy density and pressures of the stress-energy tensor at the transition surface and in the limit of large mass $r_S\gg \ell_{\rm Pl}$. Actually, their values depend on the choice of spacing $\delta x$ of the uniform lattice in the radial direction. For instance, for $\delta x= x_0\left(\frac{\ell_{\rm Pl}}{x_0}\right)^s$ with $s=0,1,2$, one can see that\footnote{The choices of $\delta x$ shown here correspond to the maximum allowed uniform discretization if $s=0$, while $s=2$ gives the finest uniform refinement. $s=1$ is an intermediate choice.}
\begin{eqnarray}\nonumber
\rho^{int}(x=0) &=& \frac{2\pi}{\Delta}\times{\cal O}\left(\left[\frac{\Delta}{2\pi r_S}\right]^{s/3+2/3}\right),\\\nonumber
p^{int}_x(x=0) &=& -\frac{2\pi}{\Delta}\times{\cal O}\left(\left[\frac{\Delta}{2\pi r_S}\right]^{s/3}\right), \\
p^{int}_{||}(x=0) &=& -\frac{2\pi}{\Delta}\times{\cal O}\left(\left[\frac{\Delta}{2\pi r_S}\right]^{s/3}\right).
\end{eqnarray}
Let us note that in the most quantum region,
\begin{equation}
\omega_{x}(x=0)=\frac{p_x^{int}(x=0)}{\rho^{int}(x=0)} = -{\cal O}\left(\left[\frac{2\pi r_S}{\Delta}\right]^{2/3}\right),\quad \omega_{||}(x=0)=\frac{p_{||}^{int}(x=0)}{\rho^{int}(x=0)} = -{\cal O}\left( \left[\frac{2\pi r_S}{\Delta}\right]^{2/3}\right).
\end{equation}
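These ratios follow directly from the scalings at the transition surface given above; in particular, the dependence on $s$ cancels,
\begin{equation}
\omega_x(x=0)=-\frac{{\cal O}\left(\left[\Delta/2\pi r_S\right]^{s/3}\right)}{{\cal O}\left(\left[\Delta/2\pi r_S\right]^{s/3+2/3}\right)}=-{\cal O}\left(\left[\frac{2\pi r_S}{\Delta}\right]^{2/3}\right),
\end{equation}
and similarly for $\omega_{||}$, so the growth of $|\omega_x|$ and $|\omega_{||}|$ with the mass is independent of the choice of lattice spacing.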
As we see, at the transition surface, the effective stress-energy tensor does not violate the strong energy condition since $\rho^{int}(x=0)\geq 0$. However, it does violate the dominant energy condition: the latter implies $|\omega_x|\leq 1$ and $|\omega_{||}|\leq 1$, while both $|\omega_{x}(x=0)|$ and $|\omega_{||}(x=0)|$ blow up at the transition surface in the limit $r_S\gg \ell_{\rm Pl}$.
One can construct the Penrose diagram of this geometry, together with a possible extension to regions not covered by our slicing.
\section{Discussion}
There are several comments to make about the scenario studied in this manuscript. On the one hand, the effective geometries that one can derive in this theory are uniquely determined by the semiclassical physical state and the (parametrized) observables that represent the components of the metric. {The quantum corrections to these geometries likewise depend on the minimal area gap $\Delta$ and the size of the discretization of the physical states we are considering. Polymer corrections due to the choice of foliation will also contribute if fluctuations of the mass are considered.} We are taking for simplicity an element (spin network) of the basis in the physical space of states and ignoring superpositions of different discretizations and masses. Quantum corrections break the covariance, in particular because of their dependence on the discretization of the chosen quantum states, but also due to foliation-dependent terms. The latter produce {$O(\Delta r_S^2/x^2)$} quantum corrections in the (asymptotically flat) external region of the black hole and therefore they are completely unobservable for macroscopic black holes, allowing one to recover diffeomorphism invariance. Since different foliations are identified with (observers') frames of reference, this is equivalent to saying that, for physically implementable frames of reference (i.e.\ physically realizable observers) in the exterior region, quantum corrections will be negligible. Nevertheless, these quantum corrections increase when approaching the high curvature region, reaching maximum values of order {$O(\Delta r_S^2/x_0^2)$.} For instance, a free-falling observer (as is the case under consideration in this manuscript) and {an accelerated observer will observe only slightly different corrections there, even if the latter's foliation involves accelerations of Planck order.}
{Regarding the original choice of shift as parametrized observable adopted in Ref. \cite{us-imp}, we noticed that, as mentioned in \cite{ewe-bh}, the most quantum region showed an inner Cauchy horizon connecting the trapped black hole region with a Planckian size transition space-time where $x={\rm const}$ hypersurfaces are time-like. However, strictly speaking, due to this Cauchy horizon, the extension beyond this region is not unique. After the bounce a Cauchy horizon is traversed and therefore the initial conditions at ${\mathcal I}_-$ that end up producing a black hole are not enough for the determination of the possible extensions beyond the Cauchy horizon. Notice that the Cauchy horizon occurs in a deep quantum region that is in the past of the extension; further non-uniqueness would occur when quantum superpositions are considered. Besides, different foliations capture different extensions. We saw that a choice of foliation {(corresponding to an accelerated observer with a Planck order acceleration)} leads to an anti-de Sitter universe beyond the Cauchy horizon. Similar ambiguities have been noted in classical general relativity \cite{dafermos}. One must keep in mind that these ambiguities can be alleviated by considering parametrized observables that correspond to physically implementable frames of reference (i.e.\ physically realizable observers). We are considering here extrinsic framings corresponding to a choice of polymerization for the functional parameter $K_\varphi(x_j)$. Even though the theory is covariant in the sense that the classical observables become quantum observables in the quantization process, each polymerization corresponds to a different choice of framing. In Ref. \cite{Gambini:2008ea} we proved that diffeomorphism invariance of the parametrized observables corresponding to the metric is only preserved for diffeomorphisms that do not amplify Planck-scale separations to macroscopic scales.
The introduction of more realistic intrinsic framings resulting from the inclusion of matter would provide a natural choice of slicing, allowing one to overcome this limitation. An example is the case of Painlev\'e-Gullstrand coordinates, which amounts to a unit parametrized observable related to the lapse function.
The kind of midisuperspace model considered here allows one to analyze these issues, while most of the minisuperspace scenarios proposed in the literature (see \cite{ashtekarsingh,bv,cgp, oss, cctr, bmm, sg-qd, qrlg-bh, adl} for references on hypersurface orthogonal slicings) adopted a particular family of space-time foliations where this issue of slicing dependence did not arise. Other authors have taken the issue of non-covariance to imply that modifications of the constraint algebra are in order, leading to the deformed hypersurface deformation algebra approach \cite{DHDA}.
Summarizing, we have applied an improved quantization scheme for loop quantum gravity in spherical symmetry. The singularity that appears in classical general relativity is eliminated and space-time is continued to a white hole space-time geometry through a transition surface where curvature reaches its maximum value. This is qualitatively similar to scenarios that have been recently proposed \cite{aos}. Our proposal yields effective geometries that are free of undesirable slicing dependencies in the semiclassical limit. Actually, the slicing independence in a precise semiclassical limit of small mass fluctuations can be invoked to restrict polymer modifications of the scalar constraint and the parametrized observables describing the quantum geometry. Finally, it is interesting to note that most of the ideas presented here and in Ref. \cite{us-imp} can be very useful in other situations, like in the vacuum polarized $T^3$ Gowdy cosmologies with local rotational symmetry \cite{gowdy}. }
\section{Acknowledgements}
This work was supported in part by Grant NSF-PHY-1903799, funds of the Hearne Institute for Theoretical Physics, CCT-LSU, Pedeciba, Fondo Clemente Estable FCE\_1\_2019\_1\_155865 and Project. No. FIS2017-86497-C2-2-P of MICINN from Spain. J.O. acknowledges the Operative Program FEDER2014-2020 and the Consejer\'ia de Econom\'ia y Conocimiento de la Junta de Andaluc\'ia.
\input ref
\end{document}
\section{Introduction} Quantum systems inevitably interact with their surrounding environment, and very often this effect needs to be taken into account for an accurate description of their properties. In equilibrium the weak coupling to a thermal environment can be described by statistical mechanics, i.e.\ Gibbs ensembles, without considering the details of the environment beyond a few thermodynamic variables like temperature and chemical potential. However, very often we are interested in quantum systems far from thermal equilibrium, for instance, in the context of quantum information processing \cite{SAlipour2020,SVinjanampathy2020}, when considering quantum heat engines \cite{MEsposito2010,MParanauLlobet2020}, or when controlling quantum matter via strong driving \cite{TShirai2016,MSato2020}. In general this is a non-trivial regime in which the properties of a quantum system depend on the details of the environment. Since a full description of a large environment is typically neither of interest nor feasible, the system is usually described within the theory of open quantum systems \cite{breuerpetruccione} by (microscopically) deriving a master equation (ME). In the Markovian case, where memory effects are negligible, the Lindblad ME is the most general quantum ME \cite{GLindblad1976,GoriniKossakowski1976}. It is also the basis for the efficient stochastic simulation of larger systems by means of quantum trajectories \cite{Dalibard92,ADaley2014,KMolmer1993,GCHegerfeldt1991,GLueders1951}. \newline
The standard approach for microscopically deriving a ME is the Born-Markov approximation leading to the Redfield ME \cite{FBloch56,AGRedfield65}; see, for example, recent developments in quantum chemistry \cite{JThingaPHaenggi2012,EFranciscoMLRonald92,AMCastilloDRReichman15}, atomic physics \cite{FDamanetAJDaley19} and quantum optics \cite{PHaenggi05,Strunz20,Alicki18}.
From this a Lindblad ME follows when further employing the rotating-wave approximation (RWA) \cite{PHaenggi2005,breuerpetruccione,carmicheal,CWGardiner00}. However, this additional step requires ultra-weak coupling (small compared to the energy-level splitting of the system) and the RWA predicts the correct steady state only to zeroth order in the coupling \cite{JThingaPHaenggi2012}. Problems of the RWA also become significant in the transient dynamics \cite{BBalzerGStock2004,XXuJThinga2019,ADaley2019} as well as for transport properties \cite{HWichterich2007}. \newline
In recent years there has been ongoing development for Lindblad approximations that bypass the RWA \cite{GSchallerTBrandes2008,PhysRevLett.104.070406,HallCresserAndersson2014,PhysRevB.97.035432,Mozgunov2020completelypositive,PhysRevB.101.125131,NathanRudner2020}.
In this work we provide a general approach for deriving an alternative Lindbladian approximation to the Redfield equation that is valid also in regimes of finite coupling, where the RWA fails. It is based on an optimized diagonalization of the Redfield dissipator. We test the resulting ME for an extended Hubbard chain coupled to Ohmic baths and show that in a large pa\-ra\-me\-ter regime where the RWA fails, it provides an accurate description of the Redfield dynamics. Combining our approach with quantum trajectory simulations, we are able to simulate system sizes that we cannot treat by integrating the Redfield equation.
\section{Redfield equation} The starting point for our approach is the Redfield formalism. The total Hamiltonian for the system-bath compound reads $\hat{H}_\mathrm{tot}=\hat{H}_\mathrm{S} + \hat{H}_\mathrm{SB} + \hat{H}_\mathrm{B}$, with the system and bath Hamiltonian $\hat{H}_\mathrm{S}$ and $\hat{H}_\mathrm{B}$, respectively. The interaction between system and bath is described by $\hat{H}_\mathrm{SB} = \hat{S}\otimes\hat{B}$, where $\hat{B}$ shall carry the dimension of energy and $\hat{S}$ is a dimensionless hermitian operator acting on the system. The case of several independent baths and non-hermitian coupling is outlined in \cref{sec:multiplebaths}. \newline
The time evolution of the reduced density matrix for the system $\hat{\rho}=\tr_\mathrm{B}(\rho_\mathrm{tot})$ shall be described in Born-Markov approximation \cite{breuerpetruccione,TAlbash2012}. First, the Born approximation provides a factorization of system and bath states, i.e.\ $\hat{\rho}_\mathrm{tot} = \hat{\rho}\otimes\hat{\rho}_\mathrm{B}$, where the bath stays in thermal equilibrium $\hat{\rho}_\mathrm{B}=\exp[-\beta\hat{H}_\mathrm{B}]/\tr_\mathrm{B}\exp[-\beta\hat{H}_\mathrm{B}]$ at an inverse temperature $\beta=1/T$. Additionally, in the Markov approximation bath correlations are assumed to decay fast compared to the time scales of the system dynamics, resulting in the time-local time-dependent Redfield ME \cite{FBloch56,AGRedfield65},
\begin{align}
\begin{split}
\dot{\hat{\rho}} &= -\frac{i}{\hbar} [ \hat{H}_\mathrm{S} , \hat{\rho} ] + \hat{S}\hat{\rho}\hat{\mathbb{S}}_t^\dagger + \hat{\mathbb{S}}_t \hat{\rho}\hat{S} - \hat{S} \hat{\mathbb{S}}_t \hat{\rho}- \hat{\rho}\hat{\mathbb{S}}_t^\dagger\hat{S} , \\
\hat{\mathbb{S}}_t &= \int\limits_0^{t} C_\tau\, \hat{S}_{-\tau}\ d\tau,
\end{split}
\label{eq:redfield}
\end{align}
with bath correlation $C_\tau=\tr_\mathrm{B}(\hat{B}_\tau\hat{B}\hat{\rho}_\mathrm{B})/\hbar^2$ and Heisenberg operators ${\hat{S}_{\tau}=\exp[i\hat{H}_\mathrm{S} \tau/\hbar] \hat{S}\exp[-i \hat{H}_\mathrm{S}\tau/\hbar]}$ and $\hat{B}_{\tau}=\exp[i\hat{H}_\mathrm{B} \tau/\hbar] \hat{B}\exp[-i \hat{H}_\mathrm{B}\tau/\hbar]$. Often a further approximation is made by setting $\hat{\mathbb{S}}_t \approx \hat{\mathbb{S}}_\infty$, which is sufficient for the late-time or steady-state behaviour \cite{XXuJThinga2019}. In contrast, we will keep the time dependence. The last two terms of the Redfield equation (\ref{eq:redfield}) are not purely dissipative but also contribute to the coherent dynamics. We split the Redfield equation (\ref{eq:redfield}) into coherent and dissipative part as $\dot{\hat{\rho}} = (-i/\hbar) [\hat{H}_\mathrm{S} + \hat{H}_t^\mathrm{LS}, \hat{\rho}] + \mathcal{D}_t^\mathrm{Red}[\hat{\rho}]$, with Lamb-shift Hamiltonian and Redfield dissipator
\begin{align}
\hat{H}_t^\mathrm{LS}& =\hbar \frac{\hat{S} \hat{\mathbb{S}}_t-\hat{\mathbb{S}}_t^\dagger \hat{S}}{2i} \label{eq:lambshift},\\
\mathcal{D}_t^\mathrm{Red}[\hat{\rho}] &=\hat{S} \hat{\rho} \hat{\mathbb{S}}_t^\dagger + \hat{\mathbb{S}}_t \hat{\rho}\hat{S} - \frac{1}{2}\Big\{ \hat{S} \hat{\mathbb{S}}_t + \hat{\mathbb{S}}_t^\dagger \hat{S}, \hat{\rho} \Big\}
\label{eq:redfielddissipator}
\end{align}
respectively, where $\{.,.\}$ denotes the anti-commutator. The Redfield dissipator (\ref{eq:redfielddissipator}) is not of Lindblad-form~\cite{breuerpetruccione,GoriniKossakowski1976,HWichterich2007} as will be seen also explicitly from equation (\ref{eq:pseudolindblad}) below.
\section{Rotating-wave approximation} The standard way to derive a Lindblad ME is closely related to the representation of the Redfield dissipator in the eigenbasis of $\hat{H}_\mathrm{S}$, $S_{qk}=\matrixelement{q}{\hat{S}}{k}$ and $\hat{H}_\mathrm{S}\ket{q}=\varepsilon_q \ket{q}$. For later convenience let the coupling matrix fulfill the normalization condition $\sum_{qk} |S_{qk}|^2=1$, i.e.\ the coupling strength is absorbed in the bath operator $\hat{B}$. In the eigenbasis the Heisenberg operator takes the form $\hat{S}_{-\tau}= \sum_{qk} S_{qk} \exp[-i\Delta_{qk}\tau/\hbar] \hat{L}_{qk}$, with jump operators $\hat{L}_{qk}=\ketbra{q}{k}$ and level splitting $\Delta_{qk}=\varepsilon_q - \varepsilon_k$. The Redfield equation (\ref{eq:redfield}) is quadratic in $\hat{S}$ and, thus, runs over four indices $q,k$ and $q',k'$ for the two pairs of level splittings $\Delta_{qk}$ and $\Delta_{q'k'}$. For very weak coupling the oscillations of the non-secular terms with $\Delta_{qk}\ne\Delta_{q'k'}$ are much faster than the slow variation of the state induced by the coupling and, thus, average out. Neglecting all but the terms with $\Delta_{qk}=\Delta_{q'k'}$ leads to the RWA \cite{breuerpetruccione,carmicheal,CWGardiner00},
\begin{align}
\hat{H}_t^\mathrm{LS,RWA}&= \sum_{qk} \hbar\,h_t(\Delta_{qk})|S_{qk}|^2 \hat{L}_{qk}^\dagger\hat{L}_{qk},\\
\mathcal{D}_t^\mathrm{RWA}[\hat{\rho}] &= \sum\limits_{qk} 2\, g_t(\Delta_{qk})|S_{qk}|^2 \Big[ \hat{L}_{qk} \hat{\rho} \hat{L}_{qk}^\dagger- \frac{1}{2} \Big\{ \hat{L}_{qk}^\dagger \hat{L}_{qk} , \hat{\rho} \Big\} \Big],
\label{eq:RWA}
\end{align}
where $g_t$ and $h_t$ denote the real and imaginary part of the bath correlation function $G_t(\Delta)=\int_0^t \exp[-i\Delta\tau/\hbar]\, C_\tau\, d\tau$. The Lamb-shift Hamiltonian $\hat{H}_t^\mathrm{LS,RWA}$ is diagonal in the energy-basis ($\hat{L}_{qk}^\dagger\hat{L}_{qk}=\ketbra{k}{k}$) and thus modifies the coherent dynamics only by shifting the eigenenergies. In turn, the dissipator is of Lindblad-form and describes quantum jumps between individual energy eigenstates. One also obtains decoupled equations of motion for the diagonal and off-diagonal entries of the density matrix. The off-diagonals decay exponentially leading to a diagonal steady-state. For a thermal bath at inverse temperature $\beta$ this is of canonical Gibbs form, i.e.\ $\hat{\rho}_{ss}^\mathrm{RWA}=\exp[-\beta\hat{H}_\mathrm{S}]/\tr_\mathrm{S}\exp[-\beta\hat{H}_\mathrm{S}]$. This is independent of the coupling $\hat{H}_\mathrm{SB}$ and therefore it only captures the zero coupling limit \cite{JThingaPHaenggi2012,PHaenggi2005}.
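As an illustration, the RWA dissipator (\ref{eq:RWA}) can be assembled numerically from the eigendecomposition of $\hat{H}_\mathrm{S}$. The following minimal sketch (numpy; the toy Hamiltonian, coupling operator, and the function standing in for $g_t$ are hypothetical, not taken from the text) builds the jump operators $\hat{L}_{qk}$ and checks that the resulting dissipator is trace preserving:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
H = rng.normal(size=(d, d)); H = (H + H.T) / 2   # toy system Hamiltonian
S = rng.normal(size=(d, d)); S = (S + S.T) / 2   # Hermitian coupling operator
S /= np.linalg.norm(S)                           # normalization sum |S_qk|^2 = 1

eps, V = np.linalg.eigh(H)                       # H|q> = eps_q |q>
Sq = V.T @ S @ V                                 # matrix elements S_qk in the eigenbasis
g = lambda D: 1.0 / (1.0 + D**2)                 # hypothetical stand-in for Re G_t

def D_rwa(rho):
    """RWA dissipator of Eq. (RWA) in the eigenbasis of H_S
    (the Lamb-shift Hamiltonian is treated separately in the text)."""
    out = np.zeros((d, d), dtype=complex)
    for q in range(d):
        for k in range(d):
            L = np.zeros((d, d)); L[q, k] = 1.0  # jump operator |q><k|
            rate = 2 * g(eps[q] - eps[k]) * abs(Sq[q, k])**2
            out += rate * (L @ rho @ L.T - 0.5 * (L.T @ L @ rho + rho @ L.T @ L))
    return out

rho = np.diag([0.4, 0.3, 0.2, 0.1]).astype(complex)  # diagonal test state
print(np.allclose(np.trace(D_rwa(rho)), 0))          # Lindblad form is traceless
```

Since every term is of Lindblad form, the dissipator is traceless and maps Hermitian states to Hermitian matrices, and for a diagonal $\hat{\rho}$ it only redistributes populations, as stated in the text.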
\section{Optimized truncation approach} For weak but finite coupling, where the RWA fails, we now derive an alternative approximation to the Redfield equation, which also leads to a Lindblad ME. For this purpose, we first bring the Redfield dissipator (\ref{eq:redfielddissipator}) into the diagonal form
\begin{align}
\mathcal{D}_t^\mathrm{Red}[\hat{\rho}] = \sum_{\sigma=+,-} \sigma \Big[\hat{A}_t^\sigma \hat{\rho} \hat{A}_t^{\sigma \dagger} - \frac{1}{2} \Big\{ \hat{A}_t^{\sigma\dagger} \hat{A}_t^\sigma , \hat{\rho} \Big\} \Big],
\label{eq:pseudolindblad}
\end{align}
by introducing the new jump operators
\begin{equation}
\hat{A}_t^{\pm} = \frac{1}{\sqrt{2 \cos \varphi_t }} \Big[\lambda_t^\pm \hat{S}\pm \frac{1}{\lambda_t^\pm} \hat{\mathbb{S}}_t\Big],
\label{eq:jumpoperators}
\end{equation}
with $\lambda_t^\pm=\lambda_t \exp{(\mp i \frac{\varphi_t}{2})}$ and arbitrary real, time-dependent parameters $\lambda_t$ and $\varphi_t$, where $\lambda_t^{-2}$ carries the dimension of time. By plugging \cref{eq:jumpoperators} into \cref{eq:pseudolindblad}, it is verified in \cref{sec:pseudoLindblad} that these equations provide an exact representation of the Redfield dissipator~(\ref{eq:redfielddissipator}). The freedom of choosing $\lambda_t$ and $\varphi_t$ will be crucial in the following. Since the prefactor of the second term in \cref{eq:pseudolindblad} is negative, we refer to it as a pseudo-Lindblad dissipator. A similar decomposition of the Redfield equation has been used recently in reference \cite{CGneiting2020}, however, for a specific choice of $\lambda_t^\pm$ which does not correspond to the optimal value that we derive below. Dissipators of the type of \cref{eq:pseudolindblad} are also used for the time-convolutionless description of non-Markovian processes \cite{SAlipour2020,breuer2004,Piilo2009}. In contrast to the RWA, \cref{eq:pseudolindblad,eq:jumpoperators} are obtained without diagonalizing the system's Hamiltonian. Also, whereas in the RWA the number of jump operators grows quadratically with the Hilbert space dimension, the pseudo-Lindblad dissipator \cref{eq:pseudolindblad} has only two jump operators. \newline
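The exactness of the pseudo-Lindblad representation can also be checked numerically. The following sketch (numpy; random matrices serve as stand-ins for $\hat{S}$, $\hat{\mathbb{S}}_t$, and $\hat{\rho}$) verifies that \cref{eq:pseudolindblad,eq:jumpoperators} reproduce the Redfield dissipator (\ref{eq:redfielddissipator}) for an arbitrary choice of $\lambda_t$ and $\varphi_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # Hilbert-space dimension, arbitrary for this check

# random Hermitian coupling operator S, generic convolution operator Sb,
# and a random density matrix rho
S = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S = (S + S.conj().T) / 2
Sb = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = rho @ rho.conj().T
rho /= np.trace(rho)

def anticomm(A, B):
    return A @ B + B @ A

# Redfield dissipator, Eq. (redfielddissipator)
D_red = (S @ rho @ Sb.conj().T + Sb @ rho @ S
         - 0.5 * anticomm(S @ Sb + Sb.conj().T @ S, rho))

# pseudo-Lindblad form, Eqs. (pseudolindblad)-(jumpoperators),
# for an arbitrary lam > 0 and |phi| < pi/2
lam, phi = 1.3, 0.4
lp = lam * np.exp(-1j * phi / 2)   # lambda^+
lm = lam * np.exp(+1j * phi / 2)   # lambda^-
c = 1 / np.sqrt(2 * np.cos(phi))
Ap = c * (lp * S + Sb / lp)
Am = c * (lm * S - Sb / lm)
D_pl = (Ap @ rho @ Ap.conj().T - 0.5 * anticomm(Ap.conj().T @ Ap, rho)
        - (Am @ rho @ Am.conj().T - 0.5 * anticomm(Am.conj().T @ Am, rho)))

print(np.allclose(D_red, D_pl))  # True for any lam, phi
```

The agreement holds identically in $\lambda_t$ and $\varphi_t$, which is precisely the freedom exploited by the optimization below.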
Finally \cref{eq:pseudolindblad} is reduced to Lindblad-form by neglecting the negative contribution,
\begin{align}
\mathcal{D}_t^\mathrm{Red}[\hat{\rho}] \simeq\mathcal{D}_t^\mathrm{trunc}[\hat{\rho}] = \hat{A}_t^+ \hat{\rho} \hat{A}_t^{+ \dagger} - \frac{1}{2} \Big\{ \hat{A}_t^{+\dagger} \hat{A}_t^+ , \hat{\rho} \Big\}.
\label{eq:truncatedME}
\end{align}
This truncation can be expected to be justified as long as the weight of the negative contribution $\lVert \hat{A}_t^- \rVert^2$ is small compared to the weight of the positive contribution $\lVert \hat{A}_t^+ \rVert^2$. In the following, we will compute the weight using the Frobenius norm $\lVert \hat{A}_t^\pm \rVert^2 = \mathrm{tr}_\mathrm{S}(\hat{A}_t^\pm \hat{A}_t^{\pm\dagger})$. \newline
Due to the special form of the jump operators $\hat{A}_t^\pm$, the optimal values for $\lambda_t$ and $\varphi_t$ minimize the weight of the negative contribution both absolutely and relative to the positive contribution. The optimization is carried out in \cref{sec:optimization} and yields the optimal values $\lambda_t^4=\overline{g_t^2} + \overline{h_t^2}$ and $\sin\varphi_t=\overline{h_t}/(\overline{g_t^2} + \overline{h_t^2})^{1/2}$, where the overline denotes the average $\overline{x} =\sum_{qk} x(\Delta_{qk}) |S_{qk}|^2$. Here $|S_{qk}|^2$, with $\sum_{qk} |S_{qk}|^2=1$, plays the role of a probability distribution. These results can be interpreted, e.g., by identifying $\lambda_t^{-2}$ with the typical timescale related to the amplitude of the bath correlation function. The optimization is crucial for the validity of the truncated ME, as is further illustrated in \cref{sec:optimizationDiscussion}. The optimized weights read
\begin{equation}
\lVert \hat{A}_t^\pm \rVert^2 = \pm \overline{g_t} + \sqrt{\overline{g_t}^2 + V[g_t] + V[h_t]},
\label{eq:eigenvalues}
\end{equation}
with ``variance'' $V[x]=\overline{x^2} - \overline{x}^2$.
Thus, the truncation is expected to provide a good approximation as long as the variances of the real and imaginary parts of the bath correlation function are small. The truncated ME (\ref{eq:truncatedME}) becomes exact in the limit of a constant, i.e.\ energy-independent, bath correlation function, for which the variances vanish. This is also known as the singular-coupling limit, in which the bath correlation is time local, $C_\tau = \alpha \delta(\tau)$, and the convolution operator $\hat{\mathbb{S}}_t=\alpha \hat{S}$ is proportional to the coupling operator with some real constant $\alpha$ \cite{GoriniKossakowski1976,PFPalmer1977}. In this limit the Lamb shift vanishes, the optimal parameters reduce to $\lambda_t=|\alpha|^{1/2}$ and $\varphi_t=0$, and only the positive jump operator $\hat{A}_t^+=(|\alpha|/2)^{1/2}\hat{S}$ contributes to the Redfield equation \footnote{For singular coupling with time-local bath correlation $C_\tau = \alpha \delta(\tau)$ and convolution $\hat{\mathbb{S}}_t=\alpha \hat{S}$ with some real constant $\alpha$, the optimal parameters are most easily obtained from the basis-independent form $\lambda^2_t=\lVert\hat{\mathbb{S}}_t\rVert/\lVert\hat{S}\rVert=\alpha$ and $\sin\varphi_t= \mathrm{Im}\,\tr_\mathrm{S}(\hat{S}\hat{\mathbb{S}}_t^\dagger)/\lVert\hat{S}\rVert\lVert\hat{\mathbb{S}}_t\rVert=\mathrm{Im}(\alpha)/\alpha=0$.}. \newline
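As a consistency check of the optimization, the following sketch (numpy; the functions standing in for $g_t$ and $h_t$ and the toy spectrum are hypothetical, not taken from the text) computes the optimal $\lambda_t$ and $\varphi_t$ from the averages over $|S_{qk}|^2$, builds the jump operators $\hat{A}_t^\pm$, and compares their Frobenius weights with the closed form of \cref{eq:eigenvalues}:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
eps = np.sort(rng.normal(size=d))        # toy eigenenergies of H_S
Dlt = eps[:, None] - eps[None, :]        # level splittings Delta_qk
S = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S = (S + S.conj().T) / 2
S /= np.linalg.norm(S)                   # normalization sum |S_qk|^2 = 1

# hypothetical stand-ins for the real/imaginary parts g_t, h_t of G_t(Delta)
g = lambda D: 1.0 / (1.0 + D**2)
h = lambda D: 0.3 / (1.0 + (D - 0.5)**2)

P = np.abs(S)**2                         # probability weights |S_qk|^2
gbar, g2bar = (P * g(Dlt)).sum(), (P * g(Dlt)**2).sum()
hbar, h2bar = (P * h(Dlt)).sum(), (P * h(Dlt)**2).sum()

lam = (g2bar + h2bar)**0.25                      # optimal lambda_t
phi = np.arcsin(hbar / np.sqrt(g2bar + h2bar))   # optimal varphi_t

# convolution operator and jump operators A^pm in the eigenbasis
Sb = S * (g(Dlt) + 1j * h(Dlt))
lp, lm = lam * np.exp(-1j * phi / 2), lam * np.exp(+1j * phi / 2)
c = 1 / np.sqrt(2 * np.cos(phi))
Ap, Am = c * (lp * S + Sb / lp), c * (lm * S - Sb / lm)

w_plus = np.trace(Ap @ Ap.conj().T).real   # ||A^+||^2 (Frobenius)
w_minus = np.trace(Am @ Am.conj().T).real  # ||A^-||^2 (Frobenius)

Vg, Vh = g2bar - gbar**2, h2bar - hbar**2
pred = np.sqrt(gbar**2 + Vg + Vh)
print(np.allclose(w_plus, gbar + pred), np.allclose(w_minus, -gbar + pred))
```

Both weights match $\pm\overline{g_t}+(\overline{g_t}^2+V[g_t]+V[h_t])^{1/2}$, confirming that the relative weight of the truncated term is controlled by the variances of $g_t$ and $h_t$.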
In order to estimate the quality of the approximation, let us have a look at the relative weight of the negative contribution. For this purpose, we will focus on Ohmic baths at inverse temperature $\beta$. Results for other bath models are presented in \cref{sec:generalbath}. The bath is characterized by the spectral density $J(\Delta)$, from which the bath correlation is obtained as $C_\tau= \int_{-\infty}^{\infty} \exp[-i\Delta \tau/\hbar]\, J(\Delta)/(\exp[\beta\Delta]-1)\, d\Delta/\pi\hbar^2$. We consider $J(\Delta) = \gamma\Delta/(1+\Delta^2/E_c^2)$, with Drude cutoff at energy $E_c$, where the dimensionless factor $\gamma$ comprises the coupling strength relative to the energy scales of the system encoded in the level splittings $\Delta$ taking values $\Delta_{kq}$. For this model the bath correlation $C_\tau$ is found to decay exponentially on the timescale $\tau_B=\mathrm{max}\{\hbar/E_c, \hbar\beta/2\pi\}$ (\cref{sec:bathcorrfunc}). Thus, assuming a large cutoff energy, the Markov approximation to the Redfield equation is valid if the coupling is small compared to the bath temperature.
For com\-pu\-ting the long-time dynamics or the steady-state, one can replace $\hat{A}^\pm_t$ by $\hat{A}^\pm_\infty$ and obtains
\begin{equation}
\frac{\lVert \hat{A}_\infty^-\rVert^2}{\lVert \hat{A}_\infty^+\rVert^2} = \Big[\frac{1}{16}+\frac{\chi^2}{2}\Big] \beta^2\,V[\Delta] +O(\beta^4\overline{\Delta^4}, \overline{\Delta^2}/E_c^2),
\label{eq:tempscaling}
\end{equation}
where $\chi=\cot(\xi/2)/2 +\xi^2/\pi \sum_{l=1}^\infty \frac{1}{l(\xi^2 - (2\pi l)^2)}$ with $\xi=\beta E_c$. Note that \cref{eq:tempscaling} is found also for Ohmic baths with a different cutoff (see \cref{sec:generalbath}). According to \cref{eq:tempscaling}, the truncated negative term in the pseudo-Lindblad dissipator is small for temperatures that are large compared to the spread $\sqrt{V[\Delta]}$ of the level splittings. Consequently, for sufficiently small $\beta$ the truncated ME (\ref{eq:truncatedME}) should be applicable beyond the zero coupling limit.
\section{Relation to Brownian motion} One of the few exactly solvable open quantum systems is the paradigmatic damped harmonic oscillator \cite{HuPazZhang1992,KarrleinGrabert97,Paz94,GrabertWeiss84}. In the high-temperature regime it is described by the equation of Brownian motion, which is a Lindblad master equation \cite{breuerpetruccione}. However, there is no corresponding equation of motion for general systems. We now demonstrate that the truncated master equation reproduces the equation of Brownian motion in the high-temperature limit and, thus, may be seen as its generalization to arbitrary systems. \newline
The damped harmonic oscillator describes a particle with mass $M$ in a quadratic potential with oscillator frequency $\Omega$, whose position is coupled to a continuum of oscillator modes. The total system-bath Hamiltonian is given by
\begin{align}
\hat{H}_\mathrm{tot} &= \frac{\hat{P}^2}{2M} + \frac{M \Omega^2}{2} \hat{Q}^2 \\ &+ \sum_k^\infty \Bigg[ \frac{\hat{p}_k^2}{2m_k} + \frac{m_k \omega_{k}^2}{2} \Big(\hat{q}_k - \frac{c_k}{m_k \omega_k^2} \hat{Q} \Big)^2 \Bigg],
\label{eq:Htot}
\end{align}
with position $\hat{Q}$ and momentum $\hat{P}$ of the central oscillator. The coupling between system and bath is of the form $\hat{H}_\mathrm{SB} = \hat{Q} \otimes \hat{B}$ with bath operator $\hat{B}=-\sum_k^\infty c_k \hat{q}_k$, where the coefficients $c_k$ determine the coupling strength between the individual bath modes and the system. The model also takes into account the potential renormalization $\hat{H}_\mathrm{RN} = \sum_k^\infty c_k^2/(m_k \omega_k^2) \hat{Q}^2 = 2M h_\infty(0) \hat{Q}^2$, which cancels the damping kernel $h_\infty(0)$ in the imaginary part of the bath correlation function. The Redfield equation takes the form of \cref{eq:lambshift,eq:redfielddissipator} for which one identifies the dimensionless coupling operator $\hat{S}=1/\sqrt{2} (\hat{a} + \hat{a}^\dagger)$ and explicitly obtains the convolution $\hat{\mathbb{S}}_\infty=1/\sqrt{2}\, (G_\infty(-\Omega)\ \hat{a} + G_\infty(\Omega)\ \hat{a}^\dagger )$, where $\hat{a}^\dagger$ ($\hat{a}$) is the creation (annihilation) operator for eigenmodes of the central oscillator, related to position and momentum via $\hat{a} = (M\Omega/2\hbar)^{1/2} (\hat{Q} + (i/M\Omega) \hat{P})$. The detailed form of the bath correlation function depends on the particular bath model. However, in the high-temperature limit and by assuming a large cutoff energy for the spectral density one obtains the universal result
\begin{align}
G_\infty(\Omega) \simeq \gamma \bigg[\frac{k_\mathrm{B} T}{\hbar} - i\ \chi \Omega \bigg],
\end{align}
where $k_\mathrm{B}$ is the Boltzmann constant and where $\chi$ is a real number that depends on how the cutoff is introduced. Generically, the real part of the bath correlation function is set by the thermal rate $\gamma k_\mathrm{B}T/\hbar$. For the imaginary part note that the potential renormalization cancels the damping kernel and that, in the limit of large cutoff energies, the vacuum fluctuations decay such that only the antisymmetric thermal noise contributes. \newline
In order to construct the truncated master equation we calculate the parameters $\lambda^2_\infty$ and $\varphi_\infty$ by following the optimization procedure. For the damped harmonic oscillator one finds explicitly, $\lambda_\infty^2 =1/\sqrt{2}\, \sqrt{|G_\infty(\Omega)|^2 + |G_\infty(-\Omega)|^2}$, $\sin\varphi_\infty = \lVert \hat{S} \rVert/ (2\, \lVert \hat{\mathbb{S}}_\infty \rVert)\mathrm{Im} [G_\infty(\Omega) + G_\infty(-\Omega) ]$. In the high-temperature limit this reduces to $\lambda_\infty \simeq \sqrt{\gamma k_\mathrm{B} T/\hbar}$ and $\varphi_\infty\simeq0$. Finally, we have everything at hand to write down the jump operator of the truncated master equation,
\begin{align}
\begin{aligned}
\hat{A}^+_\infty &= \sqrt{\frac{\gamma}{2}} \bigg[\sqrt{\frac{k_\mathrm{B} T}{\hbar}} \hat{S} + \frac{1}{\sqrt{k_\mathrm{B} T/\hbar}} \hat{\mathbb{S}}_\infty\bigg] \\
&= \sqrt{\frac{\gamma \Omega}{2}} \bigg[ \sqrt{\frac{4 M k_\mathrm{B} T}{\hbar^2}} \hat{Q} + \frac{1}{\sqrt{(1/\chi^2) M k_\mathrm{B} T}} \hat{P} \bigg].
\end{aligned}
\end{align}
This is exactly the same jump operator as for the equation of Brownian motion \cite{breuerpetruccione}.
\section{Concrete example} We further test our method for the extended Hubbard chain with $N$ spinless fermions and $l$ sites, described by
\begin{equation}
\hat{H}_\mathrm{ S } = -J \sum\limits_{i=1}^{l-1} \left( \hat{a}_i^\dagger \hat{a}_{i+1} + \hat{a}_{i+1}^\dagger \hat{a}_i \right) + V \sum\limits_{i=1}^{l-1} \hat{n}_i \hat{n}_{i+1},
\label{eq:fermihubbard}
\end{equation}
\begin{figure}[b]
\includegraphics[width=1\columnwidth]{figure1.pdf}
\caption{Error $d^X$ of the RWA [(a),(b), dashed lines] and of the truncated ME [(d),(e), solid lines] as a function of the bath temperature $T/J$ and coupling strength $\gamma$ for the steady-state (left panels) and for the transient with averaging time $\tau_\mathrm{R}=2\hbar/\gamma J$ (right panels). The smaller panels show cuts for fixed $\gamma=0.19$ [(g), (h), along the vertical blue lines in (a,b,d,e)] and fixed $T/J=5.43$ [(c), (f), along the horizontal red lines in (a,b,d,e)]. To the left of the wiggly cyan line in (a),(d),(i) the Redfield steady-state acquires negative populations. The lower panels show $d^\mathrm{RWA}-d^\mathrm{trunc}$ for the steady and the transient state in (i) and (j), respectively. The parameters are $l = 8$, $N=4$, $V=2\,J$, $E_c=17\,J$.}
\label{fig:distmeas}
\end{figure}
with annihilation and number operators $\hat{a}_i$ and $\hat{n}_i=\hat{a}^\dagger_i\hat{a}_i$ at site $i$. The tunneling parameter $J$ quantifies the kinetic energy of the particles and $V$ is the interaction energy of particles occupying adjacent sites. The system is driven by a local heat bath at temperature $T$ that couples to the density $\hat{n}_1$. For a bath that couples globally to all sites, similar results are found, as outlined in the section below. In order to quantify the deviation of the RWA and truncation approach from the Redfield result, we introduce the error measure \cite{dodonovmanko,gilchristlangford},
\begin{align}
d^\mathrm{RWA/trunc} = \frac{1}{2} \mathrm{tr} \sqrt{(\hat{\rho}^\mathrm{RWA/trunc} - \hat{\rho}^\mathrm{Red})^2} \in [0,1]. \label{eq:distmeas}
\end{align}
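The error measure \cref{eq:distmeas} is half the trace norm of the difference of the two density matrices and can be evaluated from the eigenvalues of the (Hermitian) difference. A minimal sketch (numpy), with two hypothetical qubit states in place of the Redfield and approximate steady states:

```python
import numpy as np

def trace_distance(rho1, rho2):
    """d = (1/2) tr sqrt((rho1 - rho2)^2), Eq. (distmeas).
    The difference of two density matrices is Hermitian, so the
    matrix square root reduces to absolute values of eigenvalues."""
    evals = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.abs(evals).sum()

# two hypothetical qubit states (stand-ins, not from the text)
rho_a = np.array([[0.8, 0.1], [0.1, 0.2]])
rho_b = np.array([[0.6, 0.0], [0.0, 0.4]])
print(trace_distance(rho_a, rho_b))  # sqrt(0.05) ~ 0.2236
```

By construction $d\in[0,1]$, with $d=0$ for identical states, which is the property used to compare the RWA and the truncated ME against the Redfield result.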
\subsection{Steady-state} In equilibrium the total system-bath compound thermalizes at the given temperature, and by tracing over the bath degrees of freedom the reduced density matrix of the system takes the generalized Gibbs form $\hat{\rho}_{th}=\tr_B \exp[-\beta \hat{H}_\mathrm{tot}]/\tr_\mathrm{S}\tr_B \exp[-\beta \hat{H}_\mathrm{tot}]$ \cite{TMori2008}. However, there is as yet no ME that gives this steady-state solution in all orders of the coupling strength $\gamma$. The RWA only captures the zeroth order contribution, whereas the Redfield equation also correctly reproduces the coherences in first order \cite{JThingaPHaenggi2012,breuerpetruccione,VRomero1989,EGevaERosenman2000}. We will now compare the steady-state errors defined by \cref{eq:distmeas} for the truncation method with those of the RWA. Both are plotted versus temperature $T/J$ and coupling strength $\gamma$ in \cref{fig:distmeas} (a) and (d). For fixed temperature $T/J=5.43$ in the weak coupling regime, the error of the RWA scales linearly with $\gamma$ [\cref{fig:distmeas} (c) dashed dark], whereas the error of the truncation is smaller and of higher order [\cref{fig:distmeas} (f) solid dark]. The result of the truncated ME also shows good agreement for large temperatures. For fixed coupling strength $\gamma=0.19$ in \cref{fig:distmeas} (g), we can see that $d^\mathrm{trunc}$ (solid red line) decays rapidly with temperature, like $\lVert\hat{A}^-_\infty\rVert^2/\lVert\hat{A}^+_\infty\rVert^2$ (black solid line), whereas $d^\mathrm{RWA}$ decays much more slowly. From $d^\mathrm{RWA}-d^\mathrm{trunc}$ in \cref{fig:distmeas} (i), it is evident that the steady-state solution of the truncated ME is in better agreement with the Redfield result than the RWA for all parameters except for very weak coupling and low temperatures. Note that this is also the regime in which the Redfield steady-state acquires unphysical negative probabilities, as marked by the cyan line in Figs.\ \ref{fig:distmeas} (a), (d) and (i).
This is a known problem of the Redfield formalism \cite{VRomero1989,ASuarezIOppenheim92,PPechukas94,EGevaERosenmann2000}. Namely, for low temperatures the bath correlation time becomes so large that the Born-Markov approximation no longer holds~\cite{deVaga2017}. This is in accordance with our analysis: for low temperatures the weight of the negative contribution in the pseudo-Lindblad dissipator grows significantly, causing the Redfield steady-state to acquire negative populations. Just recently it has been argued that this loss of positivity indicates the failure of the weak coupling assumption \cite{Strunz20}.
\subsection{Transient dynamics} Let us now study the relaxation dynamics starting from the system's ground state. We evaluate the error for the transient dynamics by introducing the time averaged distance measure $d_{\tau_\mathrm{R}}^\mathrm{RWA/trunc}=(1/\tau_\mathrm{R})\int_0^{\tau_\mathrm{R}} d^\mathrm{RWA/trunc}(t)\ dt$,
where we obtain the solutions $\hat{\rho}^{X}(t)$ by direct integration of the particular ME.
We aim at choosing $\tau_\mathrm{R}$ large enough to cover the transient regime but small enough not to capture the steady-state properties. For the parameters discussed here, $\tau_\mathrm{R}=2\hbar/\gamma J$ turns out to be a reasonable choice. \newline
The RWA provides a poor prediction of the transient dynamics [\cref{fig:distmeas} (b)]. A large error of $0.5$ (the maximum value plotted) is reached already for very small coupling $\gamma\simeq0.3$ [\cref{fig:distmeas} (c) cyan dashed line]. For short times the neglect of non-resonant terms with $\Delta_{qk}\ne\Delta_{q'k'}$ in the RWA overestimates the relaxation \cite{BBalzerGStock2004}. Here the truncation method [\cref{fig:distmeas} (e)] clearly outperforms the RWA. For all parameters except a small regime for $T/J\le1$ and $\gamma\le0.07$ the time averaged error for the truncated ME is not only smaller than the one in the RWA [\cref{fig:distmeas} (j)] but also very close to zero [\cref{fig:distmeas} (f) solid bright, (h) solid red].
\subsection{Globally coupled bath}
The coupling operator $\hat{S}$ of the system-bath interaction $\hat{H}_\mathrm{SB}$ defines in which way the bath is coupled to the system. Local coupling operators are most relevant for transport properties, where the baths couple to the edges of a system. In this section we briefly discuss the case when a bath couples globally to the system.
\begin{figure}[b]
\includegraphics{figureS2.pdf}
\caption{For a bath that couples globally, the error $d^X$ of the RWA [(a),(b), dashed lines] and of the truncated master equation [(d),(e), solid lines] as a function of the bath temperature $T/J$ and system-bath coupling strength $\gamma$ for the steady-state (left panels) and for the transient with averaging time $\tau_\mathrm{R}=1\hbar/\gamma J$ (right panels). The smaller panels show cuts for fixed $\gamma=0.19$ [(g), (h), along the vertical blue lines in (a,b,d,e)] and fixed $T/J=5.43$ [(c), (f), along the horizontal red lines in (a,b,d,e)]. To the left of the wiggly cyan line in (a),(d),(i) the Redfield steady-state acquires negative populations. The lower panels show $d^\mathrm{RWA}-d^\mathrm{trunc}$ for the steady and the transient state in (i) and (j), respectively. The parameters are $l = 8$, $N=4$, $V=2\,J$, $E_c=40\,J$.}
\label{fig:errglob}
\end{figure}
For models where the coupling operator itself is a global quantity, e.g.\ for the damped harmonic oscillator, all the previous expressions hold. However, in particular for the extended Hubbard model studied in the main text the coupling operator $\hat{S} = \sum_{i=1}^l \hat{n}_i= N$ is not a reasonable choice, since it simply corresponds to the total particle number, which is conserved. Instead one has to consider a system-bath interaction Hamiltonian that consists of several coupling terms. We choose $\hat{S}_\alpha = \hat{n}_\alpha$, where the index $\alpha$ labels independent baths of the same temperature. In \cref{fig:errglob} we repeat the analysis of \cref{fig:distmeas} of the main text but for a bath that couples globally to the system rather than to a single site only. Here the trace distance to the full Redfield result in \cref{eq:distmeas} again serves as an error measure for the RWA and the truncated master equation, respectively. As compared to a bath that only couples locally, here the damping dominates the coherent dynamics. The relaxation time becomes shorter and therefore we compute the time averaged distance measure in \cref{fig:errglob} (b), (e), (h) and (j) for $\tau_\mathrm{R}=1\hbar/\gamma J$ (as compared to $\tau_\mathrm{R}=2\hbar/\gamma J$, which was used for the local bath). The initial state is a coherent superposition of the ground state and the first excited state. \newline
By looking at the relative error $d^\mathrm{RWA} - d^\mathrm{trunc}$ for the steady state in \cref{fig:errglob} (i) and for the dynamics in \cref{fig:errglob} (j), it is evident that for weak coupling and low temperature the RWA performs better, whereas for finite coupling and higher bath temperature the truncated master equation is favourable. All in all, the case of a global bath is qualitatively similar to the case of a local bath.
\section{Nonequilibrium steady-state} Finally, we examine properties of the nonequilibrium steady-state of the driven-dissipative system, focusing on parameters, where the RWA is known to be an inadequate description \cite{HWichterich2007}. The system is driven by two local baths at different temperature $T_L<T_R$ that couple to the occupations $\hat{n}_1$ and $\hat{n}_l$ of the outermost sites of the chain, respectively.
\begin{figure}[b]
\centering
\includegraphics[width=1\columnwidth]{figure2.pdf}
\caption{Particle imbalance in the nonequilibrium steady-state for $N=\lfloor l/2\rfloor$, $V=2J$, $E_c=17J$, $T_L=7J$, $T_R=13J$. Plotted (a) for $l=8$ versus $\gamma$ and (b) for $\gamma =0.2$ versus $l$ and the Hilbert space dimension $\mathrm{dim}\,H_\mathrm{S}$. For system sizes $l\le8$ it is calculated via sparse LU decomposition (Redfield in dashed grey, truncated ME in solid black, RWA solid red). For $l\ge8$ the truncated ME is solved by quantum trajectory simulations. The inset in (b) shows the statistical error as a function of the number of trajectories for $l=11$ and $l=15$. We average over $5\cdot10^4$, $10^4$, $6\cdot10^3$ and $10^3$ trajectories for $l=8,9,10,11$, $l=12,13$, $l=14$ and $l=15$, respectively. Lines are guides to the eye.}
\label{fig:particleimbalance}
\end{figure}
In \cref{fig:particleimbalance} the particle imbalance $\Delta N= N_L-N_R$ in the nonequilibrium steady-state is shown, where $N_L=\sum_{i<l/2} \expval{\hat{n}_i}$ and $N_R=\sum_{i>l/2} \expval{\hat{n}_i}$ count the particles on the left and right half of the chain, respectively. \newline
According to the thermoelectric effect \cite{thermoelectric}, a greater particle mobility near the hotter, right reservoir is expected, such that the particle density shifts toward the left side of the chain, i.e.\ $\Delta N>0$. However, this is not captured by the RWA. Just as in equilibrium, the off-diagonal elements of the density matrix decay and the steady-state is diagonal in the eigenbasis of $\hat{H}_\mathrm{S}$. Since the eigenstates reflect the symmetry of the system, which has no preferred orientation, the nonequilibrium steady-state in the RWA distributes evenly among the left and right halves of the chain [\cref{fig:particleimbalance} (a) solid red]. \newline
For finite coupling, parity is broken and finite off-diagonal matrix elements of the nonequilibrium steady-state give a non-zero contribution to the particle imbalance. This is well captured by the truncated ME [\cref{fig:particleimbalance} (a)]. Furthermore, its Lindblad-form allows the use of quantum trajectory simulations \cite{Dalibard92}. This is beneficial especially for many-body systems, for which the Hilbert space dimension grows exponentially with the system size. Thus, the truncated ME allows one to study larger systems that are hardly accessible by direct integration of the Redfield equation [\cref{fig:particleimbalance} solid black].
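As a sketch of how such a trajectory unraveling works, the following minimal first-order Monte Carlo wavefunction scheme (numpy; the two-level decay model, its rate, and all step sizes are hypothetical stand-ins, not the Hubbard system of the text) propagates single trajectories with a non-Hermitian effective Hamiltonian and stochastic jumps, and averages them into a density matrix. In practice the truncated jump operator $\hat{A}^+$ would play the role of the single jump operator:

```python
import numpy as np

def mcwf(H, L, psi0, dt, nsteps, ntraj, rng):
    """First-order Monte Carlo wavefunction unraveling of a Lindblad ME
    with a single jump operator L (e.g. the truncated A^+)."""
    d = len(psi0)
    Heff = H - 0.5j * (L.conj().T @ L)      # non-Hermitian effective Hamiltonian
    U = np.eye(d) - 1j * Heff * dt          # first-order no-jump propagator
    rho = np.zeros((d, d), dtype=complex)
    for _ in range(ntraj):
        psi = psi0.copy()
        for _ in range(nsteps):
            phi = L @ psi
            p_jump = dt * np.vdot(phi, phi).real
            if rng.random() < p_jump:       # quantum jump
                psi = phi / np.sqrt(np.vdot(phi, phi).real)
            else:                           # deterministic no-jump evolution
                psi = U @ psi
                psi /= np.sqrt(np.vdot(psi, psi).real)
        rho += np.outer(psi, psi.conj())
    return rho / ntraj

# hypothetical two-level example: decay |e> -> |g> at rate gamma
rng = np.random.default_rng(2)
gamma = 1.0
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)
psi0 = np.array([0, 1], dtype=complex)      # start in the excited state |e>
rho = mcwf(np.zeros((2, 2)), L, psi0, dt=0.01, nsteps=100, ntraj=2000, rng=rng)
print(rho[1, 1].real)   # excited population, close to exp(-1) ~ 0.37
```

Only state vectors of dimension $\mathrm{dim}\,H_\mathrm{S}$ are propagated, rather than full density matrices, which is what makes the approach attractive for many-body systems.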
\section{Conclusion} We have derived an alternative Lindbladian approximation to the Redfield ME. It provides an accurate description in large parameter regimes where the RWA fails, in particular for nonequilibrium scenarios like transient dynamics and nonequilibrium steady states, which are non-trivial also in the high-temperature regime. It thus allows for efficient quantum trajectory simulations also beyond ultra-weak coupling.
\begin{acknowledgments}
This research was funded by the Deutsche Forschungsgemeinschaft (DFG) via the Research Unit FOR 2414 under the Project No. 277974659. We thank Daniel Vorberg and Roland Ketzmerick for helpful discussions in the early stage of this project. We thank the developers of QuTiP \cite{qutip}, which was used for numerical calculations.
\end{acknowledgments}
\section{Introduction}
Quantum key distribution (QKD) offers an information-theoretically secure solution for remote users to share key streams by exploiting the laws of quantum physics \cite{Bennett2014, Gisin2002}. However, the security of the protocol is proven under ideal assumptions, which practical QKD systems can hardly satisfy exactly. The non-ideal features of QKD systems may erase details of the quantum states and of the quantum devices' response functions and thereby compromise their practical security \cite{Scarani2009, Xu2019}. In particular, some deficiencies leave side-channel loopholes for an adversary to intercept the quantum bits without changing the dominant measurement results, so that the attack cannot be perceived during normal QKD operation. These most threatening attacks are known as quantum hacking \cite{Makarov2005a,Qi2005,Zhao2008,Lydersen2010,Huang2013,Wiechers2011,Gerhardt2011,Bugge2014,Qian2018}.
How to bridge the security gap between the ideal model and the real system is vital when developing real-life QKD systems. The ultimate solution is to design QKD protocols with minimal assumptions. For example, measurement-device-independent (MDI) \cite{Lo2012a} and twin-field QKD \cite{Lucamarini2018} protocols have successfully closed the detector-side loopholes in principle. However, these protocols still face technical challenges for practical applications \cite{Xu2019}, and their key rates are to date inferior to those of the widely used BB84 QKD systems \cite{Bennett2014, Yuan2018}. Therefore, improving the real-life security of prepare-and-measure QKD systems is a valuable task. Besides optimizing the method to calculate the final secure key rate, practical QKD systems usually adopt pragmatic strategies to counter interceptors. For example, in order to discover device-control attacks on single-photon detectors (SPDs) \cite{Hadfield2009,Eisaman2011}, a QKD system can monitor the parameters of the SPDs, such as the photocurrent \cite{Yuan2010,Yuan2011,Elezov2019}, the optical illumination \cite{Wangs2014}, the after-pulse ratio \cite{Ferreira2012}, the backflash radiation \cite{Meda2017}, and the delayed detection events \cite{Koehler-Sidki2019}. The system can also randomly change the detection efficiency of the SPDs \cite{Wiechers2011, Lim2015, Ferreira2015, Huang2016} to perform proactive defense.
The countermeasures mentioned above utilize the measurement results of the quantum systems, which are comprehensive but statistical. These final results may mask some essential details of the quantum signals and the system, such as the quantum devices' temporal response functions. Using this kind of non-ideal feature, eavesdroppers have the opportunity to hide their attacks by constructing fake events with the same statistical properties but a different quantum procedure. Therefore, revealing the details of the quantum procedure provides legitimate users with an additional ability to detect eavesdropping.
We propose and demonstrate, for the first time, that imaging methods can contribute to perceiving quantum hacking by monitoring the quantum signals and devices. We explore the temporal ghost imaging (TGI) technique \cite{Setala2010, Shirai2010} to obtain the full-time-scale information of the quantum signals and the response functions of the quantum systems. To verify the effectiveness of the method, we select the time-shift attack \cite{Zhao2008,Qi2005} and the blinding attack \cite{Lydersen2010}, which modify the timing of the quantum signals and the behaviors of the quantum devices, respectively. The proof-of-concept experiments demonstrate that TGI monitoring is a general method to effectively defend against time-domain-distorting quantum hacking. Furthermore, this method provides a new way to evaluate the detailed temporal information of the quantum signals and systems and is conducive to designing complex quantum information processing systems.
\section{Principle of TGI}
\label{Principle of TGI}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure1.eps}
\caption{Concept diagrams of quantum hacking and TGI. (a) Schematic setup of temporal ghost imaging (TGI). (b) Concept diagram of quantum hacking, where Alice and Bob denote the legitimate users' transmitter and receiver, respectively. Eve denotes the eavesdropper. (c) and (d) are the concept diagrams to perceive quantum hacking with joint and local TGI methods, respectively. BS: beam splitter; Att: attenuator; SPD: single-photon detector.}
\label{fig:concept_200331}
\end{figure*}
TGI \cite{Setala2010,Shirai2010,Ryczkowski2016} is the time-domain version of ghost imaging \cite{Pittman1995, Bennink2002,Erkmen2010,Shapiro2012}. Its intrinsic mechanism is the space-time duality, which is illustrated in Fig. \ref{fig:concept_200331}(a). The randomized light source is divided into a reference path and a test path. A temporal object is located on the test path and detected by a slow detector ($D_s$), whose temporal resolution is coarse or even absent, i.e.\ a bucket detector. A fast detector ($D_f$) with high bandwidth is placed on the reference path, and high-resolution images of the temporal object can then be reconstructed by calculating the correlation function of the detection results of $D_s$ and $D_f$. The correlation function of TGI is the convolution of the two paths \cite{Shirai2010}
\begin{equation}
M(t)=\langle\Delta I_{ref}(t)\Delta I_{test}\rangle_N\propto \int m(t')\,\delta\Big(t'-\frac{t}{s}\Big)\, dt'=m\Big(\frac{t}{s}\Big),
\label{eq:TGIfunction}
\end{equation}
where $\langle\cdot\rangle_N$ is the ensemble average over $N$ synchronized measurements, and $\Delta I_{ref}(t)=I_{ref}(t)-\langle I_{ref}(t)\rangle_N$ and $\Delta I_{test}=I_{test}-\langle I_{test}\rangle_N$ are the intensity fluctuations of the reference and test paths, respectively. The subscripts \textit{test} and \textit{ref} denote the measurement results of $D_s$ and $D_f$, respectively. $m(t')$ is the temporal object, $\delta(t'-t/s)$ is the Dirac function, and $s$ is the magnification factor \cite{Setala2010}. The magnification factor becomes $s=1$ for lensless TGI. The Dirac function $\delta(t'-t/s)$ indicates that the temporal resolution of the object only depends on the detection resolution of the reference path.
According to Eq. (\ref{eq:TGIfunction}), the temporal resolution of the reconstructed image $M(t)$ is determined by the coherence time of the randomized light source ($\tau_l$) and the temporal resolution of $D_f$, and is independent of the temporal resolution of the detector after the object. These features make TGI a potential technique for many applications, such as computational temporal imaging \cite{Devaux2016}, optical secure imaging \cite{Pan2017, Yao2018}, detection efficiency evaluation of SPDs \cite{Wu2019}, and wavelength-conversion imaging \cite{Wuh2019}.
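The reconstruction in Eq. (\ref{eq:TGIfunction}) can be illustrated with a toy Monte Carlo sketch (the waveforms, bin counts, and Gaussian object below are all hypothetical; the test detector is modeled as an ideal bucket detector with no temporal resolution):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 20_000                 # time bins per window, measurement windows
t = np.arange(T)

# hypothetical temporal object m(t): a Gaussian transmission window
m = np.exp(-0.5 * ((t - 120) / 8.0) ** 2)

# randomized source: independent intensity in every bin of every window
I_ref = rng.random((N, T))         # fast detector D_f resolves each bin
I_test = I_ref @ m                 # slow bucket detector D_s integrates m(t)I(t)

# lensless TGI (s = 1): correlate the fluctuations, as in Eq. (1)
dI_ref = I_ref - I_ref.mean(axis=0)
dI_test = I_test - I_test.mean()
M = dI_ref.T @ dI_test / N         # M(t) is proportional to m(t)
```

The peak of `M` recovers the position of the object to within the bin resolution, even though `I_test` alone carries no timing information.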
\section{Principle of quantum hacking perceiving}
\label{Principle of quantum hacking perceiving}
Quantum hackers always try to conceal their tracks while changing the characteristics of the quantum states or quantum systems. For example, SPDs usually work in gated mode to suppress noise from ultra-weak photoelectric signals. The final detecting results of the SPDs only indicate whether a click occurred, and the precise temporal information within the gating windows is erased. Therefore, Eve may conceal the corresponding changes thanks to the lack of temporal details in the final detection signals. Within the framework of TGI, we can treat the quantum channel and quantum system as temporal objects and reconstruct their temporal characteristics without requiring any temporal resolution of the detector in the test path. This means that we can use TGI to reveal any behavior that changes the temporal characteristics of the quantum signal, quantum channel, or quantum system, without modifying the QKD system itself.
From the perspective of excitation signals and system response, we can classify quantum hacking into two major types (see Fig. \ref{fig:concept_200331}(b)). The first type of quantum hacking modulates the transmitted quantum states as excitation signals while avoiding detecting them directly \cite{Makarov2005a,Qi2005,Zhao2008}. The second type takes an intercept-and-resend strategy by controlling the behavior of quantum devices in the system \cite{Lydersen2010,Wiechers2011,Gerhardt2011,Bugge2014}.
The time-shift attack \cite{Zhao2008,Qi2005} is a representative instance of the first type; it eavesdrops on the secret key by introducing additional time delays $\Delta t$ to the transmitted states (see \textbf{APPENDIX B} for more details). Taking the temporal detection efficiency of the SPD as the temporal object $m(t)$ and considering the relative time delay, the reconstructed image of Eq. (\ref{eq:TGIfunction}) becomes
\begin{equation}
M(t)\propto m(t-\Delta t)
\label{eq:time-shiftattack}
\end{equation}
where we assume a lensless TGI and set the magnification factor $s=1$. Equation (\ref{eq:time-shiftattack}) shows that the reconstructed temporal image is shifted in time, so the attack behavior will be revealed.
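As a numerical illustration of Eq. (\ref{eq:time-shiftattack}), a toy Monte Carlo model shows the reconstructed image translating with Eve's delay (the detector window shape, bin counts, and the 15-bin shift are all assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, SHIFT = 200, 20_000, 15             # SHIFT: Eve's delay in time bins
t = np.arange(T)

def gauss(center):                        # toy SPD efficiency window m(t)
    return np.exp(-0.5 * ((t - center) / 8.0) ** 2)

def tgi_image(obj):                       # lensless TGI, Eq. (1) with s = 1
    I_ref = rng.random((N, T))
    I_test = I_ref @ obj                  # bucket detector after the object
    dI_ref = I_ref - I_ref.mean(axis=0)
    return dI_ref.T @ (I_test - I_test.mean()) / N

M_normal = tgi_image(gauss(100))          # no attack
M_shift = tgi_image(gauss(100 + SHIFT))   # time-shift attack, Eq. (2)
delay = int(np.argmax(M_shift)) - int(np.argmax(M_normal))
```

The recovered `delay` tracks the injected shift, which is exactly the signature the joint TGI looks for.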
A typical attack of the second type is the blinding attack, which can entirely control the detection results of the SPDs by injecting intense laser beams into them (see \textbf{APPENDIX C} for more details). This type of quantum hacking is the most dangerous to a prepare-and-measure QKD system, such as a BB84 system. If Eve blinds all of Bob's detecting rounds, the attack leads to a zero detection efficiency of the SPD at the single-photon level, which means
\begin{equation}
M(t)\propto m(t)\equiv0
\label{eq:blindingattack}
\end{equation}
Thus the reconstructed image will be nothing but noise, and this abnormal result is a clear alarm signal against the blinding attack.
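A minimal sketch of this alarm signal (with assumed toy parameters: SPD clicks are modeled as Bernoulli trials whose probability tracks the TGI light, while a fully blinded SPD clicks at a rate Eve fixes independently of the light):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 60
t = np.arange(T)
eff = 0.5 * np.exp(-0.5 * ((t - 30) / 5.0) ** 2)    # toy SPD efficiency m(t)

def local_tgi(blinded, N=800_000, chunk=100_000):
    s_I = np.zeros(T); s_c = 0.0; s_Ic = np.zeros(T)
    for _ in range(N // chunk):
        I = rng.random((chunk, T))                  # randomized source
        p = 0.5 * (I @ eff) / eff.sum()             # click prob. tracks the light
        clicks = rng.random(chunk) < p
        if blinded:                                 # Eve controls every round:
            clicks = rng.random(chunk) < 0.25       # rate matched, light-blind
        clicks = clicks.astype(float)
        s_I += I.sum(0); s_c += clicks.sum(); s_Ic += I.T @ clicks
    return s_Ic / N - (s_I / N) * (s_c / N)         # per-bin correlation image

M_ok = local_tgi(False)      # reproduces the shape of eff(t)
M_blind = local_tgi(True)    # nothing but statistical noise, cf. Eq. (3)
```

Even though the blinded SPD's count rate is matched to the normal one, its clicks no longer correlate with the reference light, so the image collapses to noise.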
According to the theoretical analysis above, we have designed two TGI schemes to perceive attacks against the QKD system. The first scheme is the joint TGI (Fig. \ref{fig:concept_200331}(c)), where the transmitter (Alice) and the receiver (Bob) separately keep different components of a TGI system and perform a joint measurement. Alice keeps the randomized light source, the reference path, and the fast detector. The test path includes the quantum channel and Bob's detection module. The light intensity of the test path is attenuated to the single-photon level before being transmitted to Bob. The second scheme is the local TGI (Fig. \ref{fig:concept_200331}(d)), which is kept and executed solely by Bob. The test path of the local TGI is Bob's detection module.
According to Eq. (\ref{eq:time-shiftattack}), the joint TGI can monitor the temporal distortion of signals transmitted through the quantum channel. The local TGI is performed by Bob to evaluate the temporal response characteristics of the receiver itself. According to Eq. (\ref{eq:blindingattack}), the local TGI will perceive a blinding attack if Eve blinds all of Bob's detecting rounds. However, if Eve adjusts the attacking probability in a lossy quantum channel, the model becomes more complicated. We next prove that the local TGI remains valid for perceiving probabilistic blinding attacks.
\section{Blinding hacking perceiving with lossy channel}
\label{Blinding hacking perceiving with lossy channel}
In a lossy quantum channel, which is the actual situation for QKD, the optimal strategy for Eve is to blind the SPDs for only a fraction of the detecting rounds, chosen according to the transmission efficiency of the quantum channel so that the counting rates with and without the attack are identical. In what follows, we prove that the TGI monitoring method remains effective in this situation.
We denote the response of the SPD as $I_{test}\in\{0,1\}$, where 0 and 1 indicate no click and one click of the SPD in a detecting window, respectively. If there is no attack on the QKD system, the SPD clicks are triggered by Bob's TGI signals, Alice's quantum signals, and the dark counts of the SPD, labeled $I_{tb}$, $I_{ta}$, and $I_{td}$, respectively. Then $I_{test}=1-(1-I_{tb})(1-I_{ta})(1-I_{td})$. Substituting this $I_{test}$ into Eq. (\ref{eq:TGIfunction}), we obtain the amended TGI correlation function without attack as
\begin{equation}
\begin{aligned}
M_1(t)=(1-\langle I_{td}\rangle)(1-\langle I_{ta}\rangle)\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle
\end{aligned}
\label{eq:MtAlice}
\end{equation}
where $\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle$ is the reconstructed TGI image when there is neither a quantum signal from Alice nor a dark count of the SPD. Equation (\ref{eq:MtAlice}) shows that the quantum signals and dark counts linearly decrease the amplitude of the reconstructed image.
In a blinding attack scenario with an intercept-and-resend strategy, Eve can resend fake states with a certain probability, along with the control signals to manipulate the responses of the SPDs. We denote the attack behavior using two parameters, $I_{te0}$ and $I_{te1}$. Under the blinding attack, if Bob chooses a measurement basis different from Eve's, the attacking light splits into two paths and the detection current of Bob's SPD stays below the threshold. Thus there is no click at all, which we denote as $I_{te0}=1, I_{te1}=0$. Conversely, if Bob chooses the same basis as Eve, the attacking light is incident on a single path and produces a detection current above the threshold. In this case, Bob's SPD will certainly click, which we denote as $I_{te0}=0, I_{te1}=1$. Obviously, $I_{te0}=I_{te1}=0$ when Eve does not attack Bob's SPD. The response of the SPD then becomes $I_{test}^{blind}=[1-(1-I_{tb})(1-I_{te1})(1-I_{td})](1-I_{te0})$. Substituting $I_{test}^{blind}$ for $I_{test}$ in Eq. (\ref{eq:TGIfunction}) and keeping $I_{te0}I_{te1}=0$ in mind, we obtain the correlation function under attack as
\begin{equation}
M_2(t)=(1-\langle I_{td}\rangle)(1-\langle I_{te1}\rangle-\langle I_{te0}\rangle)\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle
\label{eq:MtEve}
\end{equation}
See \textbf{APPENDIX D} for the proofs of Eqs. (\ref{eq:MtAlice})-(\ref{eq:MtEve}). The difference between the scenarios with and without attack can be evaluated using the differential image
\begin{equation}
\begin{aligned}
\Delta M(t)=&M_1(t)-M_2(t)\\
=&(1-\langle I_{td}\rangle)(\langle I_{te1}\rangle+\langle I_{te0}\rangle-\langle I_{ta}\rangle)\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle
\end{aligned}
\label{eq:deltaMt}
\end{equation}
To cheat Bob, Eve prefers to set $\langle I_{te1}\rangle=\langle I_{ta}\rangle$ so that the count rate appears normal. Since the average dark count rates of the SPDs usually satisfy $\langle I_{td}\rangle\ll 1$, we can simplify Eq. (\ref{eq:deltaMt}) to
\begin{equation}
\Delta M(t)\approx \langle I_{te0}\rangle\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle
\label{eq:deltaMtapprox}
\end{equation}
Equation (\ref{eq:deltaMtapprox}) shows that the differential image is proportional to the original TGI image $\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle$, with an amplitude proportional to the average attack probability. It should be mentioned that Bob cannot directly distinguish the legitimate user from the hacker and hence cannot use Eq. (\ref{eq:deltaMtapprox}) directly to perceive attacks. However, Bob can obtain the measured image with his setup according to Eq. (\ref{eq:TGIfunction}); this image is equivalent to Eq. (\ref{eq:MtAlice}) (or Eq. (\ref{eq:MtEve})) when there is no (or an) attack. He can also obtain $\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle$, $\langle I_{ta}\rangle$ (or $\langle I_{te1}\rangle$; in any case $\langle I_{te1}\rangle=\langle I_{ta}\rangle$), and $\langle I_{td}\rangle$ directly from his local TGI and QKD modules. Bob can therefore reconstruct a predicted legitimate image according to Eq. (\ref{eq:MtAlice}). The differential image between the measured and predicted legitimate images contains only noise when there is no attack ($I_{te0}=I_{te1}=0$). Once Eve launches a blinding attack, the probability of the case $I_{te0}=1, I_{te1}=0$ is never zero, since Bob's basis choice is independent of Eve's. That means $\langle I_{te0}\rangle\neq 0$, and the reconstructed differential image will be proportional to Eq. (\ref{eq:TGIfunction}). Hence, Eve's attack will be clearly revealed.
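The comparison Bob performs can be sketched numerically. In this hypothetical model (all rates, window shapes, and the 40\% attack probability are assumptions, and dark counts are neglected), Bob calibrates $\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle$ in a dedicated no-signal run, predicts the legitimate image from Eq. (\ref{eq:MtAlice}), and projects the differential image onto the calibrated shape; the projection is proportional to $\langle I_{te0}\rangle$, as in Eq. (\ref{eq:deltaMtapprox}):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 60
t = np.arange(T)
eff = 0.5 * np.exp(-0.5 * ((t - 30) / 5.0) ** 2)     # toy SPD efficiency

def tgi_image(N, p_alice=0.0, p_attack=0.0, chunk=100_000):
    """Local-TGI image; clicks come from the TGI light, Alice's photons,
    or Eve's faked states (dark counts neglected in this sketch)."""
    s_I = np.zeros(T); s_c = 0.0; s_Ic = np.zeros(T)
    for _ in range(N // chunk):
        I = rng.random((chunk, T))                       # randomized source
        tb = rng.random(chunk) < 0.5 * (I @ eff) / eff.sum()
        ta = rng.random(chunk) < p_alice                 # Alice-triggered clicks
        atk = rng.random(chunk) < p_attack               # rounds Eve blinds
        match = rng.random(chunk) < 0.5                  # basis agreement
        te1, te0 = atk & match, atk & ~match
        clicks = ((tb | ta | te1) & ~te0).astype(float)
        s_I += I.sum(0); s_c += clicks.sum(); s_Ic += I.T @ clicks
    return s_Ic / N - (s_I / N) * (s_c / N)

p_a = 0.2                                    # hypothetical Alice click rate
base = tgi_image(1_600_000)                  # calibration run: TGI light only
M_clean = tgi_image(400_000, p_alice=p_a)    # no attack
M_att = tgi_image(400_000, p_attack=2*p_a)   # <I_te1> matched to <I_ta>

M_pred = (1 - p_a) * base                    # predicted legitimate image, Eq. (4)
score = lambda M: (M_pred - M) @ eff         # projection of differential image
```

`score(M_att)` is positive and grows with $\langle I_{te0}\rangle$, while `score(M_clean)` fluctuates around zero; at realistically small attack probabilities the same statistic simply needs a larger $N$.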
Therefore, the combination of the joint and local TGIs can deal with the two types of quantum hacking attacks mentioned above. To meet the security criteria of QKD, Alice and Bob can randomly and independently choose to perform QKD sessions or TGI sessions with a prearranged duty cycle, and disclose the sessions they performed during the post-processing of QKD. It is worth noting that they do not need to announce any detection results except Bob's detecting results during the joint TGI. The monitoring system works as follows:
1. Alice and Bob randomly execute the QKD or TGI procedures;
2. After sufficient rounds, Alice and Bob announce the rounds in which they executed the joint and local TGIs, respectively, and abandon the rounds in which both TGIs were executed;
3. Bob sends the click results of the SPDs of the preserved joint TGI rounds to Alice;
4. Alice and Bob reconstruct the temporal detection efficiencies of the SPDs according to the joint and local monitoring results, respectively, and reveal the attack behaviors.
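The round bookkeeping of steps 1-2 can be sketched as follows (the 10\% duty cycle is an assumed example):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10_000
p_tgi = 0.1                              # hypothetical TGI duty cycle

alice_tgi = rng.random(N) < p_tgi        # rounds where Alice ran the joint TGI
bob_tgi = rng.random(N) < p_tgi          # rounds where Bob ran the local TGI

keep_joint = alice_tgi & ~bob_tgi        # usable joint-TGI rounds (step 2)
keep_local = ~alice_tgi & bob_tgi        # usable local-TGI rounds (step 2)
keep_qkd = ~alice_tgi & ~bob_tgi         # rounds kept for key generation
dropped = alice_tgi & bob_tgi            # both ran TGI: abandoned (step 2)
```

Only the click results of the `keep_joint` rounds are sent to Alice (step 3); everything else stays local.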
\section{Experimental demonstration}
\label{Experimental demonstration}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure2.eps}
\caption{Proof-of-principle experimental setup of the TGI monitoring method for quantum hacking perceiving. (a) The experimental setup of the transmitter Alice. The QKD transmitter consists of a pulsed laser (PL${}_1$) and the encoding module. Alice reserves the temporally randomized source (TRS${}_1$) and the reference path of the joint TGI. The reference path consists of a fast photodiode (FPD${}_1$) and a real-time oscilloscope with bandwidths of 25 GHz and 12.5 GHz, respectively. (b) The setup of the TRS, which consists of an amplified spontaneous emission (ASE) source, a 50-GHz-passband optical filter, an erbium-doped fiber amplifier (EDFA), a polarization controller (PC), and an intensity modulator (IM). (c)-(d) The proof-of-principle setups of the time-shift attack and the blinding attack. (e) Bob's setup, which includes the QKD receiver and the local TGI monitoring module. The QKD receiver, including the decoding and detection modules, is involved in the test path of the joint TGI. (f) The real-time intensity fluctuation of the TRS. The black line is the intensity fluctuation within a single measurement window. The gray background and the red line are the overlapping results and the average value of 5000 measurement events, respectively. (g) and (h) Reconstructed images of the joint TGI without quantum hacking under 3 dB and 7 dB equivalent channel loss, respectively. The black dashed line is the target image (the temporal object) of the SPAD. PL, pulsed laser; BS, beam splitter; Att, attenuator; BL, blinding laser; SPAD, single-photon avalanche detector.}
\label{fig:setup_quantum hacking perceiving_200331}
\end{figure*}
\subsection{The proof-of-principle demo system}
\label{The proof-of-principle demo system}
The proof-of-principle experimental setup to perceive quantum hacking using the TGI monitoring method is shown in Fig. \ref{fig:setup_quantum hacking perceiving_200331}. The QKD transmitter in Alice (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(a)) consists of a pulsed laser (PL${}_1$) and a QKD encoding module. Alice also keeps a temporally randomized source (TRS) and the reference path of the joint TGI (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(b)). The TRS of the joint TGI (TRS${}_1$) is divided into the reference path and the test path by a beam splitter (BS${}_1$). The reference light is detected by a fast photodiode (FPD${}_1$) and an oscilloscope with bandwidths of 25 GHz and 12.5 GHz, respectively.
Alice can randomly execute QKD or the joint TGI procedures for security evaluation. Eve may attack the system by launching the time-shift attack (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(c)) or the blinding attack (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(d)) using a variable controlled optical delayer or an intercept-and-resend unit, respectively. Therefore, Bob may receive the legitimate quantum photons, the TGI monitoring signals, or the attack light from the quantum channel. Bob has a QKD decoder, including the photon detecting modules, and the local TGI monitoring unit (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(e)). The test path of the local TGI is merged into the SPD together with the decoded signals from the quantum channel. In our experiment, we use single-photon avalanche detectors (SPADs) to detect single photons. The SPADs have no temporal resolution within a detection round. The detection results of the SPADs are retained as raw key or TGI signals after consulting with Alice. Since the TGI detection results are used only for security evaluation and then abandoned, the proportion of TGI sessions can be kept small to increase the key generation efficiency.
We implement the TRS (Fig. \ref{fig:setup_quantum hacking perceiving_200331}(b)) from amplified spontaneous emission (ASE) light, which is amplified by an erbium-doped fiber amplifier (EDFA) and then chopped into pulses by an intensity modulator (IM). It should be mentioned that an optical filter with a bandwidth of 50 GHz is cascaded with the ASE source to constrain its output bandwidth to match the bandwidths of the fast photodiode (FPD${}_1$) and the oscilloscope \cite{Ryczkowski2016}.
Figure \ref{fig:setup_quantum hacking perceiving_200331}(f) shows the real-time intensity fluctuation of the TRS. The black line is the intensity fluctuation of a single measurement window. The gray background and the red line are the overlapping results and the average value of 5000 measurement windows, respectively. The intensity fluctuation indicates a short effective characteristic time of about 80 ps, which sets the temporal resolution limit of the TGI.
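The characteristic time quoted above is essentially the width of the intensity autocorrelation of the TRS. A toy estimate with band-limited noise (the 10 ps sampling step and the kernel width are assumptions chosen to land near the measured value; none of these numbers come from the actual hardware):

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 10e-12                               # assumed 10 ps sampling step
n = 1 << 14

# band-limited pseudo-thermal field: white noise smoothed by a Gaussian kernel
k = np.exp(-0.5 * (np.arange(-40, 41) / 3.4) ** 2)
field = np.convolve(rng.standard_normal(n), k, mode="same")
I = field[100:-100] ** 2                  # intensity; trim convolution edges

# normalized autocorrelation of the intensity fluctuations
dI = I - I.mean()
ac = np.array([np.dot(dI[: dI.size - j], dI[j:]) for j in range(30)])
ac /= ac[0]

# FWHM of the intensity autocorrelation = effective characteristic time
j = int(np.argmax(ac < 0.5))
frac = (ac[j - 1] - 0.5) / (ac[j - 1] - ac[j])   # linear interpolation
tau = 2 * (j - 1 + frac) * dt
```

With these assumed parameters `tau` comes out near 80 ps, the scale that bounds the TGI resolution in the demo system.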
\subsection{Defending time-shift attack}
\label{Defending time-shift attack}
We execute the joint TGI in our experiment to defend against the time-shift attack; the influence of QKD and the local TGI can be eliminated during the announcement step. The intensity of the TGI photons sent to Bob is about $\mu_t\simeq 0.6$ within each detecting window. We first reconstruct the temporal images of Bob's SPAD in the TGI test path without any attack. Figures \ref{fig:setup_quantum hacking perceiving_200331}(g)-\ref{fig:setup_quantum hacking perceiving_200331}(h) give the reconstructed images through the quantum channel with 3 dB and 7 dB transmission loss, respectively, with a sample size of $N=5\times10^6$. The statistical fluctuation of the image under 7 dB loss is slightly higher because the effective counting rate of the SPAD is lower; the fluctuation can be decreased by increasing the sample size.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure3.eps}
\caption{Reconstruction images of the joint TGI against time-shift attack. (a)-(d) Images with different time-delay values. (e)-(h), Images under the time-shift attack, where the time delay is randomly chosen between two values.}
\label{fig:timeshiftattack}
\end{figure*}
A variable optical delayer is inserted into the 3-dB-loss quantum channel to simulate Eve's time-shift attack. Figures \ref{fig:timeshiftattack}(a)-\ref{fig:timeshiftattack}(d) show the reconstructed temporal images under different time delays ($\Delta t=1.0$ ns, 0.3 ns, $-0.3$ ns, $-1.0$ ns), which demonstrate that the time-shift attack acts as a translation operator on a temporal object (such as the SPAD) and can be monitored using TGI.
The reconstructed images under a time-shift attack that randomly switches between two delay values are shown in Figs. \ref{fig:timeshiftattack}(e)-\ref{fig:timeshiftattack}(h). In these situations, the reconstructed images (the red lines) become superpositions of those under a single time delay. Compared with the no-attack case (Figs. \ref{fig:setup_quantum hacking perceiving_200331}(g)-\ref{fig:setup_quantum hacking perceiving_200331}(h)), the time-shift attack is clearly revealed even when the shift is as short as 0.30 ns.
\subsection{Defending blinding attack}
\label{Defending blinding attack}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figure4.eps}
\caption{The monitoring results of local TGI against the blinding attack. (a) The reconstructed image of the local TGI without signals from the quantum channel. (b)-(c) Reconstructed images of the local TGI with Alice sending signals over 3 dB and 7 dB loss quantum channels, respectively. (d) Reconstructed image under the blinding attack. (e)-(f) Differential images over quantum channels with 3 dB and 7 dB transmission loss, respectively, where Eve attacks all rounds.}
\label{fig:blindingattack}
\end{figure*}
We use the local TGI at Bob's side to defend against the blinding attack. According to Eqs. (\ref{eq:MtAlice})-(\ref{eq:MtEve}), the signals from Alice and Eve act as noise for Bob's local TGI. We first present the reconstructed images without attack. According to Eq. (\ref{eq:MtAlice}), the reconstructed image is insensitive to counts triggered by Alice as long as $\langle I_{ta}\rangle$ is much smaller than 1. To show this merit, we set the average photon number of the quantum states sent by Alice to $\mu_{a}=0.5$, and the effective photon number of the local TGI to $\mu_t\simeq0.12$ within each detection window, so that $\langle I_{ta}\rangle$ is much larger than $\langle I_{tb}\rangle$.
Figures \ref{fig:blindingattack}(a) and \ref{fig:blindingattack}(b)-\ref{fig:blindingattack}(c) show the local TGI images without and with QKD, respectively. The average count rate triggered by the local TGI is $\langle I_{tb}\rangle=0.01$, and those triggered by the quantum photons from Alice ($\langle I_{ta}\rangle$) are about 0.025 and 0.050 per round over the 7 dB and 3 dB loss channels, respectively. As $\langle I_{ta}\rangle$ and the dark count rate of the SPAD ($\langle I_{td}\rangle\approx5\times10^{-5}$) are much smaller than 1, Eq. (\ref{eq:MtAlice}) indicates that these reconstructed images should be approximately identical. The experimental results in Figs. \ref{fig:blindingattack}(a)-\ref{fig:blindingattack}(c) show no significant difference between the reconstructed images when the statistical fluctuation is taken into account. Figure \ref{fig:blindingattack}(d) is the result when Eve blinds Bob's SPAD; it contains only statistical fluctuation and fits Eq. (\ref{eq:blindingattack}). Figures \ref{fig:blindingattack}(e)-\ref{fig:blindingattack}(f) are the differential images over the 7 dB and 3 dB loss channels, respectively. The differential images clearly reveal the existence of the blinding attack, because according to Eq. (\ref{eq:deltaMtapprox}) there should be nothing but statistical fluctuations when the SPAD runs normally.
\subsection{Defending blinding attack over lossy channel}
\label{Defending blinding attack over lossy channel}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure5.eps}
\caption{The simulation results of local TGI against the blinding attack over lossy channels. (a) Reconstructed images of the local TGI when Alice sends signals over lossy channels. (b) Reconstructed images of the local TGI when Eve blinds a small fraction of detection windows. (c) The differential images between the predicted (Eq. (\ref{eq:MtAlice})) and measured (Eq. (\ref{eq:TGIfunction})) images with (red lines) and without (blue lines) attack, respectively. The black lines are the asymptotic differential images, according to Eq. (\ref{eq:deltaMtapprox}). $N$ is the size of the statistical sample, and labels such as 7 dB denote the channel loss.}
\label{fig:blindingattacksimulation}
\end{figure*}
In the experiments above, Eve blinds every detection window of Bob's SPAD. In a practical high-loss quantum channel, however, a better strategy for Eve is to attack only a portion of the detecting windows of the system. According to Eq. (\ref{eq:deltaMtapprox}), the differential image of the local TGI with and without the blinding attack is proportional to the average attacking probability. Though $\langle I_{te0}\rangle$ decreases as the channel loss increases, it can never be zero, so the attack will still be revealed. A larger sample size is required to reveal the attack over a higher-loss channel in practical QKD sessions.
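A rough scaling sketch of this trade-off (using $\mu_a=0.5$ and the SPAD efficiency $\eta=21.4\%$ from the demo system; the constant-SNR scaling below is a simplified estimate, not the full simulation that follows):

```python
import math

def attack_click_prob(loss_db, mu=0.5, eta=0.214):
    """<I_te1> = <I_te0> = 1 - exp(-alpha*mu*eta),
    with alpha the channel transmittance."""
    alpha = 10 ** (-loss_db / 10)
    return 1 - math.exp(-alpha * mu * eta)

def relative_sample_size(loss_db, ref_db=3):
    """Sample size (relative to a 3 dB channel) needed to keep the
    differential-image SNR fixed: amplitude ~ <I_te0>, noise ~ 1/sqrt(N)."""
    return (attack_click_prob(ref_db) / attack_click_prob(loss_db)) ** 2
```

The quadratic growth of `relative_sample_size` with decreasing $\langle I_{te0}\rangle$ is why the high-loss simulations below need sample sizes of $10^8$-$10^9$.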
Figure \ref{fig:blindingattacksimulation} gives the simulation results of the local TGI against the blinding attack over lossy channels. The simulation parameters are the same as those of the experimental system except for the TRS: the intensity of the TRS here is uniformly distributed between 0 and 1, the time resolution is about 80 ps, and $\langle I_{tb}\rangle\simeq 0.050$. The first row (Figs. \ref{fig:blindingattacksimulation}(a1)-\ref{fig:blindingattacksimulation}(a4)) shows the predicted (Eq. (\ref{eq:MtAlice}), black dashed lines) and "measured" (Eq. (\ref{eq:TGIfunction}), blue lines) images of the local TGI without attack. The second row (Figs. \ref{fig:blindingattacksimulation}(b1)-\ref{fig:blindingattacksimulation}(b4)) shows the corresponding predicted (black lines) and "measured" (red lines) images under the blinding attack, where $\langle I_{te1}\rangle=\langle I_{te0}\rangle=1-\exp(-\alpha\mu_a\eta)$, $\alpha$ is the channel transmittance, and $\eta$ is the detection efficiency of the SPADs. The blue and red lines of the third row (Figs. \ref{fig:blindingattacksimulation}(c1)-(c4)) show the differential images between the predicted and "measured" reconstructed images without (Figs. \ref{fig:blindingattacksimulation}(a1)-(a4)) and with (Figs. \ref{fig:blindingattacksimulation}(b1)-(b4)) attack, respectively; the black lines of the third row are the asymptotic differential images constructed directly from Eq. (\ref{eq:deltaMtapprox}) without considering statistical fluctuation. The agreement between the predicted and "measured" images is consistent with the theoretical expectations. For a lossy channel, a sample size of $N=5\times10^6$ is not sufficient to reconstruct a high-signal-to-noise differential image (see the red line in Fig. \ref{fig:blindingattacksimulation}(c1)). Though the memory size of our experiment limits the sample size acquired, a high-quality differential image can be achieved when the sample size is expanded to $5\times10^8$ (Fig. \ref{fig:blindingattacksimulation}(c2)), and a sample size of $5\times10^9$ is sufficient to support a high-quality differential image through a channel with 20 dB loss (Fig. \ref{fig:blindingattacksimulation}(c4)).
\section{Discussion and conclusion}
\label{Discussion and conclusion}
According to Fig. \ref{fig:blindingattacksimulation}, the sample size required to perceive a blinding attack over a high-loss channel is relatively large in the demo system. However, by combining state-of-the-art techniques, such as quantum, compressive, and differential ghost imaging methods \cite{Erkmen2010,Shapiro2012,Katz2009,O-Oka2017,Ferri2010}, the required sample size can be reduced by several orders of magnitude, which means that the TGI system will be easier to implement and has the potential to be applied to real-time monitoring of practical QKD systems.
For the joint TGI, the identity of the TRS and the QKD light source is essential for defending against eavesdropping. The bandwidth of the filtered TRS is 50 GHz in our experiment, which is at the same level as the light sources commonly used in practical QKD systems \cite{Boaron2018,Dynes2018}. ASE or LED light is not only a perfect temporally randomized source but also a potential candidate for economical and portable QKD devices \cite{Chun2018}. Therefore, the method will be effective when the same light source and the same encoding and decoding modules are used in both the QKD and TGI procedures. Although we use only one SPD in this proof-of-principle experiment, the method can be straightforwardly applied to a conventional QKD system with multiple SPDs.
Usually, quantum hacking against the QKD system may be probabilistic and dynamic, while TGI requires the temporal object to be "stationary" during a reconstruction period. However, according to Eq. (\ref{eq:deltaMtapprox}), the TGI monitoring does not require a stationary attack from Eve. The monitoring is effective as long as $\langle I_{te0}\rangle\neq 0$, which always holds once Eve blinds Bob's SPD, regardless of the attack probability.
In conclusion, we have proposed an effective quantum hacking perceiving method using TGI. Due to their missing temporal resolution, some quantum devices, such as SPADs, act as temporal black boxes, which gives eavesdroppers a chance to hide their hacking evidence. The TGI method makes it possible to directly image the SPADs at the single-photon level without changing their behavior. The proposed method makes the black box transparent and can reveal the temporal evidence of quantum hacking, providing a novel measure for quantum hacking perceiving. As attempted in previous works \cite{Maroy2017,Pinheiro2018}, a meaningful future study is to integrate the imaging perceiving method into the security proof of the system. Moreover, defenses against attacks in other degrees of freedom (DOFs) can be developed using single-photon imaging technologies analogous to ghost imaging.
\section*{Acknowledgments}
This work has been supported by the National Key Research and Development Program of China (Grant No. 2018YFA0306400), the National Natural Science Foundation of China (Grant Nos. 61627820, 61675189, 61905235, 61622506, 61822115), and the Anhui Initiative in Quantum Information Technologies (Grant No. AHY030000). F.-X. Wang has also been supported by the China Postdoctoral Science Foundation (2019M652179).
\section*{Disclosures}
The authors declare no conflicts of interest.
\section*{Appendix A: Single-photon detector}
\label{appendix-a}
The SPADs used in the experiment are commercial products from Anhui Qasky Quantum Technology Co. Ltd. The working gating frequency of the SPAD is 10 MHz. The full width at half maximum (FWHM) of the detection window is about 0.27 ns, and the peak detection efficiency is 21.4\%. The dark count rate is about $5\times 10^{-5}$ per gate.
\section*{Appendix B: Time-shift attack}
\label{appendix-b}
\renewcommand\thefigure{S\arabic{figure}}
\setcounter{figure}{0}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{figureS1.eps}
\caption{Principle of time-shift attack and blinding attack. \textbf{a,} The principle of the time-shift attack, where the temporal detection efficiency mismatch (see $t_1$ and $t_2$) between SPADs (the red and purple efficiency curves) is the key to the attack. \textbf{b,} Current of the SPAD versus input light intensity in Geiger mode (red curve) and linear mode (purple curve), respectively. The dashed line is the threshold, above which the SPAD will be triggered. The quantum (blue background) and classical (purple) regions represent a single-photon level (e.g., less than $10^2$ photons) and a strong intensity level (e.g., larger than $10^5$ photons) of input light intensity, respectively.}
\label{fig:attackprinciple}
\end{figure*}
The time-shift attack exploits the efficiency mismatch of SPDs working in Geiger mode \cite{Qi2005,Zhao2008}. The principle of the time-shift attack is shown in Fig. \ref{fig:attackprinciple}\textbf{a}. The temporal detection efficiencies of Bob's SPADs are slightly mismatched. Eve can introduce two different time delays (corresponding to $t_1$ and $t_2$, respectively) to the transmitted quantum states via the quantum channel. The detection results and the time delay can then be one-to-one mapped by exploiting the detection efficiency mismatch between SPAD${}_1$ (the red curve) and SPAD${}_2$ (the purple curve). For example, when the arrival time is $t_1$, only SPAD${}_2$ responds to the received photon. Eve will obtain the full information of the sifted key between Alice and Bob after Bob announces his measuring basis for each round.
\section*{Appendix C: Blinding attack}
\label{appendix-c}
The blinding attack alters the behavior of the SPD from Geiger mode to linear mode by injecting illuminating light with proper power \cite{Lydersen2010}. The current of the SPAD increases rapidly with input light intensity in Geiger mode (red curve in Fig. \ref{fig:attackprinciple}\textbf{b}). When a strong continuous-wave (CW) or pulsed laser beam is incident on the SPAD, the SPAD is switched from Geiger mode to linear mode. The corresponding current in linear mode increases much more slowly with input light intensity (purple line in Fig. \ref{fig:attackprinciple}\textbf{b}). The current caused by single-photon-level light (the "quantum region" with a blue background) is too small to reach the discrimination threshold and cannot trigger a "click" signal. That is, the SPAD is blind to single-photon signals. However, if the input light intensity becomes strong enough (the "classical region" with a purple background), the corresponding current will be large enough to trigger a "click" signal. By exploiting this characteristic of the linear mode, Eve can control the SPADs of the QKD system and acquire full information of the final key without introducing a significant QBER. Eve's strategy is as follows. Eve intercepts and decodes the transmitted states in the same way as Bob does. Then Eve encodes and resends a strong light pulse with power $P_0$ to Bob according to her detection results. If Bob chooses the same basis as Eve, the pulse is injected into a single SPAD and causes a high-level current that triggers a "click", denoted by $I_{te1}=1$. If Bob chooses a measuring basis different from Eve's, the pulse is divided equally between two SPADs. The corresponding current stays below the threshold and no "click" is triggered, which corresponds to $I_{te1}=0$.
It should be noted that when the blinding attack is launched and $I_{te1}=0$, the SPD will not respond to the single-photon-level signals from the TGI test path or to dark counts. Thus, we introduce the parameter $I_{te0}$ to differentiate this case from the no-attack situation. When there is no response under the blinding attack, $I_{te0}=1$; otherwise, $I_{te0}=0$. Since there is only one possible output from the SPD, $I_{te0}$ and $I_{te1}$ satisfy $I_{te1}I_{te0}=0$: $I_{te1}=1$ and $I_{te0}=0$ when Bob chooses the same basis as Eve, and $I_{te1}=0$ and $I_{te0}=1$ when Bob chooses a different basis. The response function of the SPD to the attack alone becomes $I_{te1}(1-I_{te0})$. Taking into account the signals present without the blinding attack, the response function of the SPD becomes $I_{test}^{blind}=[1-(1-I_{tb})(1-I_{te1})(1-I_{td})](1-I_{te0})$.
The intensities of the CW blinding laser (BL) and the pulsed laser (PL${}_2$ in Fig. \textcolor{blue}{2}) used in the demo experiment are 15.4 $\mu$W and 7.46 $\mu$W, respectively.
\section*{Appendix D: Proof of Eqs. (4) and (5)}
\label{appendix-d}
When there is no attack and Alice sends signals to Bob, the response function of Bob's SPD is $I_{test}=1-(1-I_{tb})(1-I_{ta})(1-I_{td})$, where $I_{tb},I_{ta}, I_{td}\in\{0,1\}$. Substituting this $I_{test}$ into Eq. (\textcolor{blue}{1}), the reconstructed image of the local TGI becomes
\begin{equation}
\begin{aligned}
&\quad M_1(t)=\langle\Delta I_{ref}\Delta I_{test}\rangle\\
&=\langle I_{ref}I_{test}\rangle-\langle I_{ref}\rangle \langle I_{test}\rangle\\
&=Cov(I_{ref},I_{tb})+Cov(I_{ref},I_{ta})+Cov(I_{ref},I_{td})\\
&\quad-\langle I_{ta}\rangle Cov(I_{ref},I_{tb})-\langle I_{td}\rangle Cov(I_{ref},I_{tb})\\
&\quad+\langle I_{ta}I_{td}\rangle Cov(I_{ref},I_{tb})-Cov(I_{ref},I_{ta}I_{td})\\
&=(1-\langle I_{td}\rangle)(1-\langle I_{ta}\rangle)\langle\Delta I_{ref}(t)\Delta I_{tb}\rangle
\end{aligned}
\label{eq:MtAlicederivation}
\end{equation}
where $Cov(I_{ref},I_{tb})=\langle \Delta I_{ref}\Delta I_{tb}\rangle$, and we have used $\langle I_{ref}I_{ta}I_{td}\rangle=\langle I_{ref}\rangle\langle I_{ta}\rangle\langle I_{td}\rangle$ and $Cov(I_{ref},I_{ta})=Cov(I_{ref},I_{td})=0$, since $I_{ref}$, $I_{ta}$ and $I_{td}$ are independent of each other. Eq. (\textcolor{blue}{5}) can be derived analogously.
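The factorization above can be verified by exact enumeration over a toy joint distribution. In the following sketch (our own illustration; the probabilities are arbitrary demo values), $I_{ref}$ and $I_{tb}$ are correlated binary variables while $I_{ta}$ and $I_{td}$ are independent Bernoulli variables, and the exact covariance $\langle\Delta I_{ref}\Delta I_{test}\rangle$ indeed equals $(1-\langle I_{td}\rangle)(1-\langle I_{ta}\rangle)\,Cov(I_{ref},I_{tb})$.

```python
from itertools import product

# Correlated joint distribution for (I_ref, I_tb); arbitrary demo values.
p_rb = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_a, p_d = 0.3, 0.05  # means <I_ta> and <I_td>, independent of the rest

def expect(f):
    """Exact expectation of f(r, b, a, d) over the joint distribution."""
    total = 0.0
    for (r, b), (a, d) in product(p_rb, product((0, 1), repeat=2)):
        w = p_rb[(r, b)] * (p_a if a else 1 - p_a) * (p_d if d else 1 - p_d)
        total += w * f(r, b, a, d)
    return total

def i_test(r, b, a, d):
    # No-attack response function of the SPD.
    return 1 - (1 - b) * (1 - a) * (1 - d)

m1 = expect(lambda r, b, a, d: r * i_test(r, b, a, d)) - \
     expect(lambda r, b, a, d: r) * expect(lambda r, b, a, d: i_test(r, b, a, d))
cov_rb = expect(lambda r, b, a, d: r * b) - \
         expect(lambda r, b, a, d: r) * expect(lambda r, b, a, d: b)

# M_1 = (1 - <I_td>)(1 - <I_ta>) Cov(I_ref, I_tb), exactly.
assert abs(m1 - (1 - p_d) * (1 - p_a) * cov_rb) < 1e-12
```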
The derivation above does not account for the statistical fluctuations of the individual terms. Since the differential image is proportional to the average attack probability $I_{te0}$, the terms eliminated from the formula, such as $Cov(I_{ref},I_{ta})$, actually fluctuate around zero and may degrade the differential imaging quality significantly when $I_{te0}$ is relatively small. However, the fluctuations decrease to an acceptable level as $N$ becomes larger, which is strongly supported by Figs.~\textcolor{blue}{5}\textbf{c1}-\textcolor{blue}{5}\textbf{c2} of the main text.
\section{Introduction}
\blue{The charge qubit in semiconductor quantum dots \cite{Shinkai.09,Petersson.10,Cao.13,Li.15,Kim.15, Ward.16,Yang.19b} is a promising candidate for universal quantum computing owing to its all-electrical control and fast gate operation. The gate duration for charge qubits can be as short as a few nanoseconds thanks to the large tunneling and detuning between neighboring dots. Although recent experiments \cite{Noiri.22,Xue.22,Madzik.22} on spin qubits in silicon have reported fidelities exceeding 99\% for both single- and two-qubit gates, the gating times there can be as long as microseconds due to the small microwave-driven Rabi frequency. Charge qubits therefore have potential advantages that warrant further study. On the other hand,} the charge qubit suffers heavily from charge noise \cite{Dial.13}, resulting in a rather short coherence time and thus low gate fidelity \cite{Kim.15}. Despite the progress over the past years, the experimental two-qubit gate fidelity remains below 90\% \cite{Kim.15}, which motivates us to search for new types of charge qubits with improved gate fidelity.
As isolated qubits are scaled up, implementing distant, high-fidelity entangling gates between neighbouring qubits remains another challenge. Typically, the interaction between two charge qubits is implemented via direct capacitive coupling between two double quantum dots (DQDs), where the interaction range is only about 100 nm \cite{Van.18}. With this capacitive coupling, entangling gates have achieved fidelities below 70\% \cite{Li.15,macquarrie.20}. On the other hand, the electrons confined in the quantum dots form a relatively large dipole moment when the dots are detuned, owing to the delocalized wave function. Charge qubits therefore have great potential to be coupled to a superconducting transmission-line resonator through this dipole moment. A recent experiment has demonstrated that both resonant (real) and nonresonant (virtual) resonator-mediated coherent interactions between two separated DQDs are possible \cite{Van.18}, with the interaction range between two charge qubits increased to several tens of micrometers \cite{Van.18}.
Recently, it was found that one electron confined in a linear triple quantum dot (TQD) can also be used to encode the so-called charge quadrupole (CQ) qubit \cite{Friesen.17}, which benefits from a decoherence-free subspace and a dipolar sweet spot where the dipolar detuning fluctuation is minimized. By using the quadrupole moment of the electron rather than the dipole moment, long-range coupling between two CQ qubits and a resonator has been experimentally realized \cite{koski.20}. Although the CQ qubit can work in the decoherence-free subspace, it still suffers severe leakage induced by charge noise. Mitigating this leakage requires composite pulses \cite{Ghosh.17}, which however prolong the gate time.
Inspired by the CQ qubit, we find (as shown below) that one electron confined in a TQD can alternatively form another type of charge qubit. The logical basis states are defined as the two lowest eigenstates, which differ from those of the CQ qubit. In addition to the dipolar sweet spot, the charge qubit considered here also benefits from a quadrupolar sweet spot, where the leading order of the quadrupolar detuning fluctuation is eliminated.
Here, we investigate how two spatially separated charge qubits defined in TQDs can be entangled with each other via dipolar coupling to a \blue{superconducting} resonator \cite{blais.07,Srinivasa.16, scarlino.19,landig.19}. The resonator field couples to the variation of the dipolar detuning of the qubit, so that the oscillation in the dipolar detuning can be controlled by the resonator voltage. We derive the specific form of the qubit-resonator coupling and estimate its strength for present experimental parameters of the TQD and the resonator. We then present two approaches to construct entangling gates: when each qubit is in resonance with the resonator, a holonomic entangling gate can be achieved, while in the dispersive regime an iSWAP gate is obtained. We numerically simulate the fidelity of these two entangling gates using present experimental decoherence parameters. \blue{We find, surprisingly, that the fidelity of the iSWAP gate can surpass 99\% at the noise level in experiments, whereas the fidelity of the holonomic gate depends sensitively on the anharmonicity of the resonator. }
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig1.pdf}
\caption{Schematic illustration of the HC qubit and the coupling between the TQDs and the resonator. (a) The quantum dots are labeled by 1, 2 and 3 from left to right, which correspond to the position basis states $|\mathit{100}\rangle$, $|\mathit{010}\rangle$ and $|\mathit{001}\rangle$. (b) The site potential for each dot and the tunneling between neighboring dots. (c) The coupling between the resonator and the TQDs, including their geometry and the parameters used to determine the coupling strength $g$.
}
\label{fig:quantumdot}
\end{figure}
\section{Double sweet spots in the TQD}\label{sec:model}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Fig2.pdf}
\caption{Energy levels of the triple-quantum-dot system. The parameters are set to $\bar{\epsilon}_{d}=t_{m}=0$. The ground, first excited and second excited states are denoted $|g\rangle$, $|e\rangle$ and $|f\rangle$, with corresponding energies $E_{g}$, $E_{e}$ and $E_{f}$.}
\label{fig:energy}
\end{figure}
As shown in Fig.~\ref{fig:quantumdot}(a), a single electron confined in a linear TQD can occupy the left, middle, or right dot, corresponding to the position states labeled $\left|\mathit{100}\right\rangle$, $\left|\mathit{010}\right\rangle$, and $\left|\mathit{001}\right\rangle$, respectively.
The Hamiltonian in the position bases is \cite{Friesen.17}
\begin{equation}
\mathcal{H}^{(0)}=\begin{aligned}
\left( {\begin{array}{*{20}{c}}
\epsilon_{d}&t_{12}&0\\
t_{12}&\epsilon_{q}&t_{23}\\
0&t_{23}&-\epsilon_{d}\\
\end{array}} \right)
\end{aligned}
\label{eq:Hpo}
\end{equation}
Here, $t_{12}$ and $t_{23}$ are the tunnel couplings between adjacent dots as indicated in Fig.~\ref{fig:quantumdot}(b). $\epsilon_{d}=(U_{1}-U_{3})/2$ and $\epsilon_{q}=U_{2}-(U_{1}+U_{3})/2$ are defined as the dipolar and quadrupolar detuning, respectively, where $U_{i}$ ($i$=1,2,3) denotes the site energy of the $i$th quantum dot. We note that all parameters concerned here are real numbers, and we take $\hbar=1$ for simplicity. Each element of $\mathcal{H}^{(0)}$ can be controlled independently via the gate voltages \cite{Friesen.17}. In the ``even-odd'' bases, i.e.~$\{|E\rangle=(|\mathit{100}\rangle+|\mathit{001}\rangle) / \sqrt{2},|C\rangle=|\mathit{010}\rangle,|L\rangle=(|\mathit{100}\rangle-|\mathit{001}\rangle) / \sqrt{2}\}$, $\mathcal{H}^{(0)}$ can be transformed into
\begin{equation}
H=\begin{aligned}
\left( {\begin{array}{*{20}{c}}
0&t_{p}&\epsilon_{d}\\
t_{p}&\epsilon_{q}&t_{m}\\
\epsilon_{d}&t_{m}&0\\
\end{array}} \right)
\end{aligned}
\label{eq:Hpeel}
\end{equation}
where $t_{p}=(t_{12}+t_{23})/\sqrt{2}$, $t_{m}=(t_{12}-t_{23})/\sqrt{2}$.
\begin{figure*}
\includegraphics[width=2\columnwidth]{Fig3.pdf}
\caption{Populations of the triple-quantum-dot system for the ground (a), first excited (b) and second excited (c) states. Qubit operation is preferred in the region $\epsilon_{q}\gg t_{p}$, where the ground state is dominated by $|E\rangle$ and the first excited state is mainly $|L\rangle$. The effective coupling between the qubit and the resonator is maximized in this region (see Sec.~\ref{sec:coupling}). The parameters are set to $\bar{\epsilon}_{d}=t_{m}=0$.}
\label{fig:popu}
\end{figure*}
Charge noise causes fluctuations of both the dipolar and quadrupolar detunings. We model the noise as $\epsilon_{d}\rightarrow\bar{\epsilon}_{d}+\delta\epsilon_{d}$ and $\epsilon_{q}\rightarrow\bar{\epsilon}_{q}+\delta\epsilon_{q}$, where $\bar{\epsilon}_{d}$ and $\bar{\epsilon}_{q}$ are the mean values, and $\delta\epsilon_{d}$ and $\delta\epsilon_{q}$ are the dipolar and quadrupolar fluctuations \cite{Friesen.17}. In the following, we consider the symmetric operation point for the dipolar detuning, i.e., $\bar{\epsilon}_{d}=0$. We assume that the noise is quasi-static, so that $\delta\epsilon_{d}$ and $\delta\epsilon_{q}$ are treated as constants. This approximation is justified by the fact that charge noise typically varies on a time scale of about 100 $\rm{\mu s}$ \cite{Wang.14,Ghosh.17}, much longer than the gating time (on the scale of ns) for a charge qubit. We then separate the Hamiltonian $H$ into two parts, $H_0=H(\delta\epsilon_{d}=\delta\epsilon_{q}=0)$ and $H'=H-H_0$. $H'$ contains the fluctuation components, while $H_0$ can be diagonalized analytically as shown in Appendix~\ref{appx:eigen}. The eigenstates are denoted $|g\rangle$, $|e\rangle$ and $|f\rangle$ for the ground, first and second excited states, with corresponding eigenvalues $E_{g}$, $E_{e}$ and $E_{f}$. Assuming $|\delta\epsilon_{d}|, |\delta\epsilon_{q}|\ll t_{p}, t_{m}$, we can expand the qubit excitation energy $\omega_{ge}=E_{ge}=E_{e}-E_{g}$ as
\begin{equation}
\begin{aligned}
E_{ge}&=\frac{1}{2}\left(\sqrt{4\left(t_{p}^{2}+t_{m}^{2}\right)+\bar{\epsilon}_{q}^{2}}-\bar{\epsilon}_{q}\right) \\
&- \frac{t_{p} t_{m} \left(3+\frac{\bar{\epsilon}_{q}}{\sqrt{4\left(t_{p}^{2}+t_{m}^{2}\right)+\bar{\epsilon}_{q}^{2}}}\right)}{t_{p}^{2}+t_{m}^{2}} \delta\epsilon_{d}\\
&+\frac{1}{2}\left(\frac{\bar{\epsilon}_{q}}{\sqrt{4\left(t_{p}^{2}+t_{m}^{2}\right)+\bar{\epsilon}_{q}^{2}}}-1\right) \delta\epsilon_{q} \\
&+ O\left((\delta \epsilon_{q}+\delta\epsilon_{d})^{2}\right).
\end{aligned}
\label{eq:expand}
\end{equation}
It is clear that the first term on the right-hand side of Eq.~(\ref{eq:expand}) represents the qubit excitation energy without noise, while the second and third terms describe the dipolar and quadrupolar fluctuations. Setting $t_{m}=0$, i.e., $t_{12}=t_{23}$, one finds a dipolar detuning sweet spot ($\partial{E_{ge}}/\partial{\epsilon_{d}}|_{t_{m}=0}=0$). Under the same assumption $t_{m}=0$, one can further obtain a quadrupolar detuning sweet spot ($\partial{E_{ge}}/\partial{\epsilon_{q}}=0$) when $\bar{\epsilon}_{q}\gg t_{p}$. Although the dipolar detuning sweet spot has been widely studied \cite{Friesen.17,Kratochwil.21}, this quadrupolar detuning sweet spot of the charge qubit in a TQD has not yet been reported. To take full advantage of these double sweet spots, in this work we consider the operating region $\bar{\epsilon}_{d}=t_{m}=0$ and $\bar{\epsilon}_{q}\gg t_{p}$. \blue{Note that, although the two tunnel couplings can be adjusted conveniently by a barrier gate in experiments \cite{Russ.18}, they cannot be made exactly identical due to imperfect gate control or charge-noise fluctuations of the tunneling. Nevertheless, by modeling $t_{p}\rightarrow\bar{t}_{p}+\delta t_{p}$ and $t_{m}\rightarrow\bar{t}_{m}+\delta t_{m}$, one finds that the resulting fluctuation of the dipolar detuning is approximately $\frac{4 \delta t_{m}}{t_{p}+\delta t_{p}} \delta \epsilon_{d}$. Here, $\bar{t}_{p}$ and $\bar{t}_{m}$ are the mean values, while $\delta t_{p}$ and $\delta t_{m}$ are the related fluctuations. Since $\delta t_{m}, \delta t_{p} \ll t_{p}, \delta \epsilon_{d}$ \cite{Friesen.17,Ghosh.17}, we ignore the fluctuations of the tunneling.} In Fig.~\ref{fig:energy}, we plot the energy levels of $H_0$ as a function of $\epsilon_{q}$. As shown in the plot, in the region $\bar{\epsilon}_{q}\gg t_{p}$, the ground state $|g\rangle$ and the first excited state $|e\rangle$ are approximately the states $|E\rangle$ and $|L\rangle$.
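The double sweet spots can be checked numerically by finite differences on the even-odd-basis Hamiltonian. The sketch below (illustrative parameter values of our own choosing, in units of $t_{p}$) confirms that $\partial E_{ge}/\partial\epsilon_{d}$ vanishes at $\bar{\epsilon}_{d}=t_{m}=0$, and that $\partial E_{ge}/\partial\epsilon_{q}$ matches the small coefficient of the $\delta\epsilon_{q}$ term in the expansion of $E_{ge}$ when $\bar{\epsilon}_{q}\gg t_{p}$.

```python
import numpy as np

def e_ge(eps_d, eps_q, t_p, t_m=0.0):
    """Qubit splitting E_e - E_g from the even-odd-basis Hamiltonian H."""
    h = np.array([[0.0, t_p, eps_d],
                  [t_p, eps_q, t_m],
                  [eps_d, t_m, 0.0]])
    ev = np.linalg.eigvalsh(h)  # ascending: E_g, E_e, E_f
    return ev[1] - ev[0]

t_p, eps_q, delta = 1.0, 20.0, 1e-4  # illustrative, in units of t_p

# Dipolar sweet spot: dE_ge/d(eps_d) = 0 at eps_d = 0 when t_m = 0.
d_dip = (e_ge(delta, eps_q, t_p) - e_ge(-delta, eps_q, t_p)) / (2 * delta)
assert abs(d_dip) < 1e-8

# Quadrupolar sweet spot (approximate): dE_ge/d(eps_q) -> 0 for eps_q >> t_p,
# matching the coefficient of the delta-eps_q term in the expansion of E_ge.
d_quad = (e_ge(0.0, eps_q + delta, t_p) - e_ge(0.0, eps_q - delta, t_p)) / (2 * delta)
coef = 0.5 * (eps_q / np.sqrt(4 * t_p**2 + eps_q**2) - 1)
assert abs(d_quad - coef) < 1e-6 and abs(d_quad) < 0.01
```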
In this work, we define the computational bases as the two lowest eigenstates. In Fig.~\ref{fig:popu}, we further plot the populations of the three eigenstates. The chosen operating point and the correspondingly defined qubit states not only benefit from the sweet spots, but also maximize the effective dipolar coupling between the qubit and the resonator (see Sec.~\ref{sec:coupling} below). Introducing an external microwave-driven pulse on the dipolar detuning around the sweet spot, $\Delta\epsilon_{d}=\epsilon(t)\cos(\omega_{0} t+\phi)$, with the frequency $\omega_{0}$ matching the qubit frequency $\omega_{ge}$ (i.e., on resonance), the total Hamiltonian can be reduced in the interaction picture to the effective two-level form (see Appendix~\ref{appx:B})
\begin{equation}
\begin{aligned}
H_{\rm{eff}}=\frac{\epsilon(t)}{2}(\cos\phi\ \sigma_{x}-\sin\phi\ \sigma_{y})
\end{aligned}
\label{eq:effective}
\end{equation}
\blue{Here, we emphasize again that the logical bases are defined as $|0\rangle=|g\rangle$, $|1\rangle=|e\rangle$ within the operating regime $\bar{\epsilon}_{d}=t_{m}=0$ and $\bar{\epsilon}_{q}\gg t_{p}$, such that the Pauli matrix is $\sigma_{z}=|g\rangle\langle g|-| e\rangle\langle e|\approx|E\rangle\langle E|-| L\rangle\langle L|$.} In this way, arbitrary single-qubit gates can be implemented using the two lowest states as the computational basis.
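As a sanity check of the effective two-level dynamics, the sketch below (assuming, for simplicity, a square pulse, so the pulse area is just $\epsilon T$; the parameter values are our own) integrates $H_{\rm{eff}}$ and verifies that a $\pi$ pulse at $\phi=0$ implements an $X$ gate up to a global phase.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def gate(area, phi):
    """Evolution under H_eff = (eps/2)(cos(phi) sigma_x - sin(phi) sigma_y)
    for a square pulse of total area `area` = eps * T."""
    h = 0.5 * (np.cos(phi) * sx - np.sin(phi) * sy)
    return expm(-1j * area * h)

# A pi pulse at phi = 0 implements an X gate, up to the global phase -i.
assert np.allclose(gate(np.pi, 0.0), -1j * sx)
# A pi/2 pulse rotates |0> into an equal superposition.
v = gate(np.pi / 2, 0.0) @ np.array([1.0, 0.0])
assert np.allclose(np.abs(v) ** 2, [0.5, 0.5])
```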
Note that when $\bar{\epsilon}_{d}=t_{m}=0$, $H_0$ can also form a so-called CQ qubit in the bases $\{|E\rangle,|C\rangle\}$, leaving $|L\rangle$ as the leakage state \cite{Friesen.17}.
\section{Dipolar coupling to a resonator}\label{sec:coupling}
We first derive the dipole transition matrix element. Consider three quantum dots centered at $\bm{r}_{1}=-w\hat{x}$, $\bm{r}_{2}=0$, and $\bm{r}_{3}=w\hat{x}$, respectively. The dipole operator for this triple-quantum-dot system is $\bm{d}=-e \sum_{i} \bm{r}_{i} n_{i} \equiv d \hat{x}$ ($i$=1,2,3), where $d=e w\left(n_{1}-n_{3}\right)$ and $n_{i}=|n_{i}\rangle\langle n_{i}|$ is the occupation operator of the $i$th dot. In the position bases, the dipole operator reads
\begin{equation}
\begin{aligned}
d=e w\left(|\mathit{100}\rangle\langle \mathit{100}|-| \mathit{001}\rangle\langle \mathit{001}|\right)=
e w\partial_{\epsilon_{d}} \mathcal{H}^{(0)}
\end{aligned}
\label{eq:dipole}
\end{equation}
which implies $n_{1}-n_{3}=\partial_{\epsilon_{d}}\mathcal{H}^{(0)}$. We introduce a small variation in the dipolar detuning, $\epsilon_{d}=\bar{\epsilon}_{0}+\mathcal{F}+\delta \epsilon_{d}$, where $\bar{\epsilon}_{0}$ is the chosen operating point and $\mathcal{F}$ is the small variation, assumed to be of the same order as $\delta \epsilon_{d}$. To simplify the discussion, in this section we temporarily neglect $\delta \epsilon_{d}$. We can then expand the Hamiltonian $\mathcal{H}^{(0)}$ near the operating point as
\begin{equation}
\mathcal{H}\approx \mathcal{H}^{(0)}_{\epsilon_{d}=\bar{\epsilon}_{0}}+\left.\partial_{\epsilon_{d}} \mathcal{H}^{(0)}\right|_{\epsilon_{d}=\bar{\epsilon}_{0}} \mathcal{F},
\label{eq:Happro}
\end{equation}
where the first term on the right-hand side of Eq.~(\ref{eq:Happro}) is the Hamiltonian at the operating point $\bar{\epsilon}_{0}$, while the second term is proportional to the dipole operator in Eq.~(\ref{eq:dipole}); we define the latter as the dipole interaction Hamiltonian. In the computational bases, the reduced Hamiltonian for the small variation of the dipolar detuning can be rewritten as
\begin{equation}
\mathcal{H}=-\frac{ \omega_{ge}}{2} \sigma_{z}+\mathcal{F} \eta \sigma_{x}.
\label{eq:Happro2}
\end{equation}
Here, $\eta=\cos\theta$ and $\tan2\theta=2t_{p}/\bar{\epsilon}_{q}$, where we have assumed $\bar{\epsilon}_{0}=t_{m}=0$ (see Appendix~\ref{appx:eigen}). Comparing Eqs.~(\ref{eq:Happro}) and (\ref{eq:Happro2}), one finds that
\begin{equation}
d_{ge}=\langle g|d| e\rangle=e w \eta=e w\cos\theta.
\label{eq:dipole2}
\end{equation}
In the operating region $\bar{\epsilon}_{q}\gg t_{p}$, we have $\theta\sim0$, so $d_{ge}$ is maximized. On the other hand, Fig.~\ref{fig:popu}(c) shows that in this region the eigenstate $|f\rangle$ is dominated by $|C\rangle$; its components of $|E\rangle$ and $|L\rangle$ are close to zero. Therefore, $d_{ef}=\langle e|d| f\rangle\sim0$ and $d_{gf}=\langle g|d| f\rangle\sim0$, where $d_{mn}=\langle m|d| n\rangle$ is the dipole transition matrix element. This means that the small variation $\mathcal{F}$ induces no transition between the second excited state $|f\rangle$ and the lower eigenstates. In addition, one easily finds that $d_{gg}=d_{ee}=0$. Therefore, in this region the TQD can be regarded as a well-defined two-level system spanned by $|E\rangle$ and $|L\rangle$.
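These matrix elements can be verified by direct diagonalization of $H_{0}$. The sketch below (illustrative values $t_{p}=1$, $\epsilon_{q}=12$ in arbitrary units, chosen by us) confirms $d_{ge}=ew\cos\theta$, $d_{gg}=d_{ee}=d_{gf}=0$, and that $d_{ef}=ew\sin\theta\approx ew\,t_{p}/\epsilon_{q}$ is indeed small in the operating region.

```python
import numpy as np

t_p, eps_q = 1.0, 12.0  # operating region eps_q >> t_p (illustrative units)

# H_0 in the even-odd basis {|E>, |C>, |L>} with eps_d = t_m = 0.
h0 = np.array([[0.0, t_p, 0.0],
               [t_p, eps_q, 0.0],
               [0.0, 0.0, 0.0]])
# Dipole operator in units of e*w: d/(e w) = dH/d(eps_d), coupling |E> <-> |L>.
d_op = np.array([[0.0, 0.0, 1.0],
                 [0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]])

vals, vecs = np.linalg.eigh(h0)  # ascending eigenvalues: |g>, |e>, |f>
g, e, f = vecs.T

theta = 0.5 * np.arctan2(2 * t_p, eps_q)  # tan(2 theta) = 2 t_p / eps_q
assert np.isclose(abs(g @ d_op @ e), np.cos(theta))   # d_ge = e w cos(theta)
assert np.isclose(g @ d_op @ g, 0.0) and np.isclose(e @ d_op @ e, 0.0)
assert np.isclose(abs(g @ d_op @ f), 0.0)             # d_gf = 0
assert np.isclose(abs(e @ d_op @ f), np.sin(theta))   # d_ef small, ~ t_p/eps_q
```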
Next, we determine the effective qubit-resonator coupling strength. We consider a TQD capacitively coupled to a transmission-line resonator in its lowest-energy mode, as shown in Fig.~\ref{fig:quantumdot}(c); the geometry is similar to that of Ref.~\cite{Srinivasa.16}, and the resonator field couples to the TQD via the variation in the dipolar detuning $\epsilon_{d}$. The quantized antinode voltage of the resonator is \cite{Childress.04}
\begin{equation}
\hat{V}=\sqrt{\frac{\hbar\omega_{r}}{LC_{0}}}(a+a^{\dagger})
\label{eq:qvolt}
\end{equation}
To make the derivation clear, we restore $\hbar$ here. $a^{\dagger}$ ($a$) is the photon creation (annihilation) operator of the resonator, $\omega_{r}=\pi/(LZ_{0}C_{0})$ is the resonator frequency, $L$ is the length of the resonator, and $C_{0}$ is the capacitance per unit length. The characteristic impedance is $Z_{0}=\sqrt{L_{0} / C_{0}}$, with $L_{0}$ the inductance per unit length. Further, the effective quantized voltage across the resonator is $\hat{V}_{\mathrm{eff}}=C_{c}\hat{V}/(C_{c}+C_{d})=\chi_{0}\hat{V}$, where $C_{c}$ is the total capacitance between the resonator and the TQD and $C_{d}$ is the capacitance between the TQD and the ground \cite{Srinivasa.16}. Therefore, the interaction between the TQD and the resonator is
\begin{eqnarray}\label{eq:Hint}
\mathcal{H}_{\mathrm{int}} &=& -\bm{d} \cdot \bm{E}=d \hat{V}_{\mathrm{eff}} / s \notag\\
&=&\hbar g_{0}\left(n_{1}-n_{3}\right)\left(a+a^{\dagger}\right),
\end{eqnarray}
where
\begin{equation}
\begin{aligned}
g_{0}=\frac{e w \chi_{0}}{ s L C_{0}} \sqrt{\frac{\pi}{Z_{0} \hbar}}=\frac{e w \chi_{0}}{s} \omega_{r} \sqrt{\frac{Z_{0}}{\pi \hbar}}
\label{eq:g0}
\end{aligned}
\end{equation}
is the vacuum Rabi coupling strength and $s$ is the effective distance associated with $\hat{V}_{\mathrm{eff}}$. On the other hand, comparing the dipole coupling Hamiltonians in Eqs.~(\ref{eq:Happro}) and (\ref{eq:Hint}), the small variation in the dipolar detuning equals $\mathcal{F}=e w \hat{V}_{\mathrm{eff}} / s=\hbar g_{0}\left(a+a^{\dagger}\right)$. It is then clear that the resonator controls the oscillation in the dipolar detuning via its voltage and thus induces transitions between the qubit basis states. Moreover, substituting $\mathcal{F}$ into Eq.~(\ref{eq:Happro2}), the effective interaction in the computational bases can be expressed as
\begin{equation}
\begin{aligned}
\tilde{\mathcal{H}}_{\mathrm{int}}= \hbar g \sigma_{x}\left(a+a^{\dagger}\right),
\label{eq:Heffint}
\end{aligned}
\end{equation}
where $g=g_{0}\eta=g_{0}\cos\theta$ is the effective coupling strength. As stated above, in the operating region $\eta=\cos\theta\approx1$, i.e., $g\approx g_{0}$. To estimate the coupling strength, we take $w=s/2$ and $\chi_{0}=0.28$ according to the data in Refs.~\cite{Blais.04,Childress.04,Srinivasa.16}. From Eq.~(\ref{eq:g0}), the coupling strength is proportional to $\sqrt{Z_{0}}$ and $\omega_{r}$. In recent experiments~\cite{Stockklauser.17,Van.18, Wang.20}, where a high-impedance SQUID array is used to realize the resonator, $Z_{0}$ can reach $1\ \mathrm{k}\Omega$. For typical resonator frequencies $\omega_{r}/2\pi$ between $1.5$ and $6.5\ \rm{GHz}$, the coupling strength $g_{0}/2\pi$ is therefore in the range of [60, 250] MHz.
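This estimate can be reproduced in a few lines. The sketch below evaluates the expression for $g_{0}$ with the stated parameters ($w/s=1/2$, $\chi_{0}=0.28$, $Z_{0}=1\ \mathrm{k\Omega}$) and recovers the quoted [60, 250] MHz range; the function name is ours.

```python
import numpy as np

e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]

def g0_over_2pi(f_r, Z0=1e3, chi0=0.28, w_over_s=0.5):
    """Vacuum Rabi coupling g0/(2 pi), with
    g0 = (e w chi0 / s) * omega_r * sqrt(Z0 / (pi hbar))."""
    omega_r = 2 * np.pi * f_r
    return e * w_over_s * chi0 * omega_r * np.sqrt(Z0 / (np.pi * hbar)) / (2 * np.pi)

# omega_r/2pi in [1.5, 6.5] GHz gives g0/2pi of roughly 60-250 MHz.
assert 55e6 < g0_over_2pi(1.5e9) < 65e6
assert 240e6 < g0_over_2pi(6.5e9) < 265e6
```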
\section{Two-qubit entangling gates}\label{sec:twoqubitgate}
\blue{
Below, we present two approaches to construct two-qubit entangling gates: (a) the two qubits are in resonance with each other but detuned from the resonator; working in the dispersive regime, i.e., $\Delta^{(k)}=\omega^{(k)}-\omega_{r}\gg g^{(k)}$, where $\Delta^{(k)}$ is the qubit-resonator detuning, a dynamical iSWAP-type gate can be implemented; (b) both qubits are in resonance with the resonator, namely $\Delta^{(k)}=0$, in which case we can obtain a holonomic two-qubit entangling gate. }
\subsection{Dynamical iSWAP gate operated in the dispersive regime}
We now extend the discussion in Sec.~\ref{sec:coupling} to the case where two separated charge qubits are coupled to the transmission-line resonator. The total Hamiltonian of this hybrid system, consisting of two qubits and a resonator, reads
\begin{equation}
H_{\rm{tot}} = H_{\rm{res}}+\sum_{k=1}^2 \mathcal{H}_{0}^{(k)}+ \sum_{k=1}^2 \tilde{\mathcal{H}}_{\rm{int}}^{(k)}
\label{eq:Htot1}
\end{equation}
where $\mathcal{H}_{0}^{(k)}$ is the Hamiltonian for the $k$th qubit (TQD) as described in Eq. (\ref{eq:Hpo}), $H_{\rm{res}}=\omega_{r}a^{\dagger}a$ is the Hamiltonian for the resonator, and $\tilde{\mathcal{H}}_{\rm{int}}^{(k)}$ represents the dipole interaction Hamiltonian between the $k$th qubit and the resonator as shown in Eq.~(\ref{eq:Heffint}). Transforming $H_{\rm{tot}}$ into the TQD eigenbasis, we have
\begin{equation}
\begin{aligned}
H_{\rm{tot}}=& \omega_{r} a^{\dagger} a+\sum_{k=1}^{2} \sum_{n=\{g,e,f\}} E_{n}^{(k)} \sigma_{n n}^{(k)} \\
&+\sum_{k=1}^{2} \sum_{m,n=\{g,e,f\}} g^{(k)} d_{mn}^{(k)}\left(a+a^{\dagger}\right) \sigma_{ mn}^{(k)},
\label{eq:Htoteigen}
\end{aligned}
\end{equation}
where $\sigma_{mn}=|m\rangle\langle n|$.
As mentioned in Sec.~\ref{sec:coupling}, in the operating regime $\bar{\epsilon}_{q}\gg t_{p}$, each qubit can be regarded as a well-defined two-level system with $d_{gg}=d_{ee}=d_{ef}=d_{gf}\sim0$. Therefore, $H_{\rm{tot}}$ reduces to the so-called Tavis-Cummings form \cite{Fink.09}
\begin{equation}
H_{\rm{TC}}=\omega_{r}a^{\dagger}a-\sum_{k=1}^2 \left[\frac{\omega_{ge} ^{(k)}}{2} \sigma_{z}^{(k)} -g^{(k)} \sigma_{x}^{(k)}\left(a+a^{\dagger}\right)\right],
\label{eq:Htot}
\end{equation}
where $\omega_{ge} ^{(k)}=E_{e}^{(k)}-E_{g}^{(k)}$ and $g^{(k)}$ are the frequency and coupling strength of the $k$th qubit, respectively. To simplify the discussion, we set $\omega_{ge} ^{(k)}\equiv\omega^{(k)}$ hereafter.
For approach (a), we extend the discussion of Refs.~\cite{blais.07,Srinivasa.16} on the construction of the iSWAP gate when $\Delta^{(k)}\gg g^{(k)}$. The Hamiltonian $H_{\rm{TC}}$ can be simplified by a Schrieffer-Wolff transformation \cite{blais.07}, which eliminates the direct coupling between the qubits and the resonator:
\begin{equation}
H_{d}=H_{d,0}+\frac{1}{2}\left[S, V\right],
\label{eq:Hsw}
\end{equation}
where $H_{d,0}$ denotes the free Hamiltonian for the resonator and the two charge qubits:
\begin{equation}
\begin{aligned}
H_{d,0} &=\omega_{r}a^{\dagger}a- \sum_{k=1}^2 \frac{\omega ^{(k)}}{2} \sigma_{z}^{(k)},
\end{aligned}
\label{eq:Hdisper}
\end{equation}
$V$ is the dipolar interaction Hamiltonian of the individual qubits,
\begin{equation}
\begin{aligned}
V=\sum_{k=1}^2g^{(k)} \sigma_{x}^{(k)}\left(a+a^{\dagger}\right),
\end{aligned}
\label{eq:Hdisperint}
\end{equation}
and $S$ the transformation operator
\begin{equation}
S=\sum_{k=1}^2 \frac{g^{(k)}}{\Delta^{(k)}}(a^{\dagger}\sigma_-^{(k)}-\sigma_+^{(k)}a).
\label{eq:S}
\end{equation}
Combining Eqs. (\ref{eq:Hsw}) and (\ref{eq:S}), the resulting approximate Hamiltonian is
\begin{equation}
\begin{aligned}
H_{d}&\approx H_{d,0}+\sum_{k=1}^2 \frac{(g^{(k)})^{2}}{\Delta^{(k)}}(\sigma_-^{(k)}\sigma_+^{(k)}-\sigma_+^{(k)}\sigma_-^{(k)})a^{\dagger} a\\&-\frac{(g^{(k)})^{2}}{\Delta^{(k)}}\sigma_+^{(k)}\sigma_-^{(k)} -\chi(\sigma_+^{(1)}\sigma_-^{(2)}+\sigma_-^{(1)}\sigma_+^{(2)}),
\end{aligned}
\label{eq:Hdappro}
\end{equation}
where
$\chi= g^{(1)}g^{(2)}(\Delta^{(1)}+\Delta^{(2)})/[2\Delta^{(1)}\Delta^{(2)}]$. In the zero-photon subspace, i.e., the computational subspace, $H_{d}$ can be further reduced to
\begin{equation}
\begin{aligned}
\tilde{H}_{d}&=\sum_{k=1}^2 \frac{\tilde{\omega}^{(k)}}{2} \sigma_{z}^{(k)}-\chi(\sigma_+^{(1)}\sigma_-^{(2)}+\sigma_-^{(1)}\sigma_+^{(2)}),
\end{aligned}
\label{eq:Hdtilde}
\end{equation}
where $\tilde{\omega}^{(k)}=-\omega^{(k)}+(g^{(k)})^{2}/\Delta^{(k)}$. Under the Schrieffer-Wolff transformation, the direct coupling between the qubits and the resonator is safely eliminated, and the approximation is correct to first order in $g^{(k)}/\Delta^{(k)}$. Further, we transform $\tilde{H}_{d}$ into a rotating frame via
\begin{equation}
U_{d}=\exp\left[{-i\sum_{k=1}^2\frac{\tilde{\omega}^{(k)}}{2}\sigma_{z}^{(k)}t}\right],
\label{eq:Urot}
\end{equation}
which leads to the Hamiltonian
\begin{equation}
\begin{aligned}
\tilde{H}_{d}&=U_{d}^{\dagger}\tilde{H}_{d}U_{d}-iU_{d}^{\dagger}\frac{\partial U_{d}}{\partial t}\\
&=-\chi\left(\sigma_+^{(1)}\sigma_-^{(2)}+\sigma_-^{(1)}\sigma_+^{(2)}\right),
\label{eq:Hd2}
\end{aligned}
\end{equation}
where we have assumed $\tilde{\omega}^{(1)}=\tilde{\omega}^{(2)}$. The evolution operator generated by $\tilde{H}_{d}$ is thus
\begin{equation}
U_{\rm{ent}}'(t)=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\chi t & i\sin\chi t & 0\\
0 & i\sin\chi t & \cos\chi t & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
\label{eq:Uiswap}
\end{equation}
When $\chi t=\pi/2$, $U_{\rm{ent}}'(\frac{\pi}{2\chi})$ is equivalent to an iSWAP gate.
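The claimed gate can be checked by exponentiating $\tilde{H}_{d}$ directly. The sketch below (with an illustrative value of $\chi$ chosen by us) confirms that the evolution at $t=\pi/(2\chi)$ reproduces the iSWAP matrix above.

```python
import numpy as np
from scipy.linalg import expm

chi = 2 * np.pi * 5e6                    # effective XY coupling (illustrative)
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # two-level raising/lowering pair
sm = sp.T

# H_d~ = -chi (s+^(1) s-^(2) + s-^(1) s+^(2)) on the two-qubit space.
h = -chi * (np.kron(sp, sm) + np.kron(sm, sp))
u = expm(-1j * h * np.pi / (2 * chi))    # evolve for t = pi / (2 chi)

iswap = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0, 0, 1]])
assert np.allclose(u, iswap)
```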
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig4.pdf}
\caption{\blue{(a) Schematic of the coupling between the charge qubits and the transmon. (b) Possible transitions in the subspaces $S_{1}$ and $S_{2}$, which form three-level $\Lambda$ structures. In the subspace $S_{2}$, leakage to the state $|g,g,2\rangle$ occurs if the anharmonicity $\alpha$ is small. (c) Energy levels of the transmon, which can be modeled as a resonator with a nonlinear spectrum.}}
\label{fig:transmon}
\end{figure}
\subsection{Holonomic gates operated in the resonant regime}
\blue{
As shown in Fig.~\ref{fig:transmon}(b), in the resonant case the computational subspace is coupled to a leakage state residing in the two-excitation subspace (details are given below). To suppress this leakage, we introduce dipolar coupling between the charge qubits and a superconducting transmon to exploit its good anharmonicity \cite{scarlino.19,landig.19} (see Fig.~\ref{fig:transmon}(a)). The dipolar coupling between the transmon and the charge qubits is modeled as in Sec.~\ref{sec:coupling}. The Hamiltonian of the hybrid system including the transmon is slightly different from Eq.~(\ref{eq:Htot1}):
\begin{equation}
H_{\rm{tot}}' = \sum_{n}\left[n \omega_{tr}-\frac{n(n-1)}{2} \alpha\right]|n\rangle\langle n| +\sum_{k=1}^2 \mathcal{H}_{0}^{(k)}+ \sum_{k=1}^2 \tilde{\mathcal{H}}_{\rm{int}}^{(k)}.
\label{eq:Hr}
\end{equation}
Here, $|n\rangle$ denotes the $n$th level of the transmon, which can also be regarded as the photon number of the resonator, while $\omega_{tr}$ and $\alpha$ are the intrinsic frequency and anharmonicity of the transmon, respectively. From Eq.~(\ref{eq:Hr}), the transmon can be modeled as a resonator with a nonlinear spectrum:
\begin{equation}
H_{\rm{tot}}'' = \omega_{tr}a^{\dagger}a-\sum_{n}\left[\frac{n(n-1)}{2} \alpha\right]|n\rangle\langle n| +\sum_{k=1}^2 \mathcal{H}_{0}^{(k)}+ \sum_{k=1}^2 \tilde{\mathcal{H}}_{\rm{int}}^{(k)}.
\label{eq:Hr2}
\end{equation}
For the resonant case, i.e., $\omega^{(k)}=\omega_{tr}$, we move to the interaction picture defined by $U_{tr}=\exp[-i H_{tr}^{0}t ]$, where
\begin{equation}
H_{tr}^{0}=\omega_{tr}a^{\dagger}a-\sum_{n}\left[\frac{n(n-1)}{2} \alpha\right]|n\rangle\langle n| +\sum_{k=1}^2 \mathcal{H}_{0}^{(k)},
\label{eq:Ur}
\end{equation}
we have
\begin{equation}
\begin{aligned}
H_{tr}\simeq&\sum_{k=1}^2g^{(k)}\left(a^{\dagger}\sigma_-^{(k)}+\mathrm{H.c.}\right).
\label{eq:Htotr}
\end{aligned}
\end{equation}
Note that here we have ignored the fast-oscillating terms $g^{(k)}\exp[\pm i \alpha t]$ associated with the transmon state $|2\rangle$ by assuming $g^{(k)}\ll \alpha$ (the effect of small anharmonicity is discussed later). We have also ignored the energy-nonconserving terms $a\sigma_-^{(k)}$ and $a^{\dagger}\sigma_+^{(k)}$, since $ \omega_{r}, \omega_{r}-\alpha\gg g$. Higher levels of the transmon are then strongly suppressed, and ideally we can design the holonomic gate within the photon subspace $n=0,1$.} Because the total number of excitations is conserved, we can rewrite $H_{tr}$ in block-diagonal form. The single- and two-excitation subspaces are $S_{1}=\operatorname{span}\{|e,g,0\rangle,|g,e,0\rangle,|g,g,1\rangle\}$ and $S_{2}=\operatorname{span}\{|e,e,0\rangle,|e,g,1\rangle,|g,e,1\rangle\}$, where the first and second entries denote qubits 1 and 2, respectively, and the third the transmon. The corresponding Hamiltonians in these two subspaces have similar forms:
\begin{equation}\label{eq:Hrspace1}
\mathcal{H}_{tr,1}=\left|g,g,1\right\rangle (g^{(1)}\left\langle e,g,0\right|+g^{(2)} \left\langle g,e,0\right|)+\mathrm{H.c.}
\end{equation}
and
\begin{equation}\label{eq:Hrspace2}
\mathcal{H}_{tr,2}=\left|e,e,0\right\rangle (g^{(1)}\left\langle g,e,1\right|+g^{(2)} \left\langle e,g,1\right|)+\mathrm{H.c.}
\end{equation}
As shown in Fig.~\ref{fig:transmon}(b), $\mathcal{H}_{tr,1}$ forms a three-level $\Lambda$ structure \cite{hong.18, Egger.19,li.20,Zhang.20} with transitions $|g,g,1\rangle\leftrightarrow\left|e,g,0\right\rangle$ and $|g,g,1\rangle\leftrightarrow|g,e,0\rangle$. Similarly, $\mathcal{H}_{tr,2}$ introduces transitions $\left|e,e,0\right\rangle\leftrightarrow|e,g,1\rangle$ and \blue{$|e,e,0\rangle\leftrightarrow|g,e,1\rangle$.} \blue{In fact, $\mathcal{H}_{tr}$ in the two-excitation subspace can also induce transitions $|g,g,2\rangle\leftrightarrow|g,e,1\rangle$ and $|g,g,2\rangle\leftrightarrow|e,g,1\rangle$ due to the small anharmonicity (see Fig.~\ref{fig:transmon}(b)).} The remaining two subspaces are $S_{3}=\operatorname{span}\{\left| {g,g,0} \right\rangle \}$ and $S_{4}=\operatorname{span}\{\left| {e,e,1} \right\rangle \}$, with $\mathcal{H}_{tr,3}=\mathcal{H}_{tr,4}=0$ due to $\Delta^{(k)}=0$.
To make the derivation clear, below we follow Ref.~\cite{Zhang.20} to expand the discussion on how to implement the holonomic operation using $\mathcal{H}_{tr,1}$. The case of $\mathcal{H}_{tr,2}$ can be understood in the same way, since it has a similar Hamiltonian structure. $\mathcal{H}_{tr,1}$ can also be expressed in the bright-dark representation
\begin{equation}
\begin{aligned}
\mathcal{H}_{tr,1}=\Omega \left|g,g,1\right\rangle \langle b|+\mathrm{H.c.},
\end{aligned}
\end{equation}
where
\begin{eqnarray}
|b\rangle&=&\sin\frac{\varphi}{2}|e,g,0\rangle -\cos\frac{\varphi}{2}|g,e,0\rangle\notag
\label{eq:dressedb}
\end{eqnarray}
is the bright state, while
\begin{eqnarray}
|d\rangle&=&\cos\frac{\varphi}{2}|e,g,0\rangle +\sin\frac{\varphi}{2}|g,e,0\rangle
\label{eq:dressedd}
\end{eqnarray}
is the dark state.
Here, $\Omega=\sqrt{\left(g^{(1)}\right)^{2}+\left(g^{(2)}\right)^{2}}$ and $\tan(\varphi/2)=-g^{(1)}/g^{(2)}$. In this representation, the dark state $|d\rangle$ decouples from the dynamics. Therefore, $\mathcal{H}_{tr,1}$ describes transitions between the bright state $|b\rangle$ and the state $|g,g,1\rangle$, and the evolution operator generated by $\mathcal{H}_{tr,1}$ is
\begin{eqnarray}\label{eq:Ubd}
U_{tr,1}(t)&=&\exp\left(-i\int_{0}^{t} \mathcal{H}_{tr,1}dt^{\prime}\right)\notag\\
&=&\cos\delta(t) (|g,g,1\rangle\langle g,g,1|+|b\rangle\langle b|)\\
&&-i\sin\delta(t)(|g,g,1\rangle\langle b|+|b\rangle\langle g,g,1|)+|d\rangle\langle d|,\notag
\end{eqnarray}
where $\delta(t)=\int_{0}^{t} \Omega \mathrm{d} t^{\prime}$. According to Eq. (\ref{eq:Ubd}), when the cyclic condition is met, i.e., $\delta(T)=\pi$, the evolution operator in the subspace $S_{1}$ is
\begin{equation}
U_{tr,1}(T)=\begin{pmatrix}
\cos\varphi & \sin\varphi & 0\\
\sin\varphi & -\cos\varphi & 0\\
0 & 0 & -1
\end{pmatrix}.
\label{eq:Ur3T}
\end{equation}
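The block-diagonal form of Eq.~(\ref{eq:Ur3T}) can be checked numerically by exponentiating $\mathcal{H}_{tr,1}$ in the ordered basis $\{|e,g,0\rangle,|g,e,0\rangle,|g,g,1\rangle\}$; a minimal sketch (the coupling values below are illustrative only):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative couplings; g1 = g2 gives phi = -pi/2 (any positive values work).
g1 = g2 = 1.0
Omega = np.hypot(g1, g2)            # Omega = sqrt(g1^2 + g2^2)
phi = 2.0 * np.arctan2(-g1, g2)     # tan(phi/2) = -g1/g2

# H_{tr,1} in the ordered basis {|e,g,0>, |g,e,0>, |g,g,1>}
H = np.array([[0,  0,  g1],
              [0,  0,  g2],
              [g1, g2, 0]], dtype=complex)

# Cyclic condition delta(T) = Omega * T = pi  ->  T = pi / Omega
T = np.pi / Omega
U = expm(-1j * H * T)

# Expected holonomic form of Eq. (Ur3T)
U_expected = np.array([[np.cos(phi),  np.sin(phi), 0],
                       [np.sin(phi), -np.cos(phi), 0],
                       [0,            0,          -1]])
assert np.allclose(U, U_expected, atol=1e-10)
```

The same check passes for any $g^{(1)},g^{(2)}$, since the dark state is annihilated by $\mathcal{H}_{tr,1}$ while both bright-sector eigenstates acquire the phase $e^{\mp i\pi}=-1$ over one cycle.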
After the cyclic evolution, the operator matrix is block diagonal. Therefore, the excited state of the resonator, $|g,g,1\rangle$, cannot affect the qubit subspace spanned by $\{|e,g,0\rangle,|g,e,0\rangle\}$. For an arbitrary state initialized in the subspace $S_{i}=\operatorname{span}\left\{\left|\psi_{1}(0)\right\rangle,\left|\psi_{2}(0)\right\rangle\right\}$, where
\begin{equation}
\begin{aligned}
\left|\psi_{1}(0)\right\rangle &=\alpha|e,g,0\rangle+\beta|g,e,0\rangle, \\
\left|\psi_{2}(0)\right\rangle &=\beta^{*}|e,g,0\rangle-\alpha^{*}|g,e,0\rangle,
\end{aligned}
\label{eq:Sint}
\end{equation}
the corresponding final states under the action of $\mathcal{H}_{tr,1}$ satisfy the parallel-transport condition for the holonomic gate \cite{Erik.12,Sjoqvist.15}, i.e., $\left\langle\psi_{i}(t)\left|\mathcal{H}_{tr,1}(t)\right| \psi_{j}(t)\right\rangle=\left\langle\psi_{i}(0)\right|U_{tr,1}^{\dagger}(t)\,\mathcal{H}_{tr,1}(t)\, U_{tr,1}(t)\left|\psi_{j}(0)\right\rangle=0$. Here, $\alpha$, $\beta$ $\in \mathbb{C}$ and $|\alpha|^{2}+|\beta|^{2}=1$. Therefore, $U_{tr,1}(T)$ represents a holonomic operation in the qubit subspace $\{|e,g,0\rangle,|g,e,0\rangle\}$. Then, in the complete computational subspace (also the zero-photon subspace) $\left\{\left|g,g,0\right\rangle, \left|g,e,0\right\rangle,\left|e,g,0\right\rangle,\left|e,e,0\right\rangle\right\}$, we obtain the holonomic two-qubit gate
\begin{equation}
U_{\rm{ent}}(\varphi)=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\varphi & \sin\varphi & 0\\
0 & \sin\varphi & -\cos\varphi & 0 \\
0 & 0 & 0 & -1
\end{pmatrix}.
\label{eq:Uent}
\end{equation}
Note that the negative sign in the bottom-right entry is due to the evolution in the two-excitation subspace governed by $\mathcal{H}_{tr,2}$ \cite{Zhou.18}. As demonstrated in Ref.~\cite{Zhang.20}, $U_{\rm{ent}}(\pi/2)$, which corresponds to $g^{(1)}=g^{(2)}=g$, denotes an iSWAP-type two-qubit entangling gate.
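Explicitly, setting $\varphi=\pi/2$ in Eq.~(\ref{eq:Uent}) gives
\begin{equation}
U_{\rm{ent}}\left(\frac{\pi}{2}\right)=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix},
\end{equation}
which swaps the populations of $|g,e,0\rangle$ and $|e,g,0\rangle$ while imprinting a $\pi$ phase on $|e,e,0\rangle$.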
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig5.pdf}
\caption{Fidelity and state population of the holonomic entangling gate $U_{\rm{ent}}(\varphi=\pi/2)$ (left column) and the iSWAP gate $U_{\rm{ent}}'(t=\frac{\pi}{2\chi})$ (right column). The state populations for the holonomic and iSWAP entangling gates are shown in (a) and (b), respectively. \blue{ (c) Fidelity of the holonomic entangling gate as a function of $\alpha/g$. (d) Fidelity of the iSWAP gate as a function of the ratio $\Delta/g$. Common parameters used in all panels: $g/2\pi=66\ \rm{MHz}$, $\Gamma_{\varphi}^{(1)}/2\pi=\Gamma_{\varphi}^{(2)}/2\pi=2.7\ \rm{MHz}$, $\Gamma_{ge}=\Gamma_{ef}=\Gamma_{gf}=0$ \cite{scarlino.19}. The other parameters: (a) $\Delta=0$, $\Gamma_{a,tr}/2\pi=4\ \rm{kHz}$ \cite{Zi.16,Tao.18}, $\Gamma_{\varphi,tr}/2\pi=0.8\ \rm{MHz}$ \cite{scarlino.19}; (b) $\Delta/g=10$ and $\Gamma_{a,r}/2\pi=0.028\ \rm{MHz}$ \cite{Samkharadze.16}.}}
\label{fig:twoqubit}
\end{figure}
\subsection{Gate fidelity}
To simulate the fidelity of the entangling gates, we solve the master equation \cite{blais.07}
\begin{eqnarray}
\dot{\rho}=-i\left[H_{\rm{tot}}, \rho\right]+\mathcal{L}_{1} \rho+\mathcal{L}_{\varphi} \rho+\mathcal{L}_{a} \rho,
\label{eq:master2}
\end{eqnarray}
where
\begin{equation}
\begin{aligned}
\mathcal{L}_{1} \rho&=\sum_{k=1,2}\Gamma_{ef}^{(k)} \mathcal{D}[|e\rangle\langle f|]+\Gamma_{gf}^{(k)} \mathcal{D}[|g\rangle\langle f|]+\Gamma_{ge}^{(k)} \mathcal{D}[|g\rangle\langle e|], \\
\mathcal{L}_{\varphi} \rho&=\sum_{k=1,2}\frac{1}{2}\Gamma_{\varphi}^{(k)} \mathcal{D}[| e\rangle\langle e|-| g\rangle\langle g|], \\
\mathcal{L}_{a} \rho&=\Gamma_{a} \mathcal{D}[a],
\end{aligned}
\label{eq:linblad2}
\end{equation}
and
\begin{equation}
\begin{aligned}\mathcal{D}[L]=\left(2 L \rho L^{\dagger}-L^{\dagger} L \rho-\rho L^{\dagger} L\right) / 2.
\label{eq:D}
\end{aligned}
\end{equation}
Here, $\mathcal{L}_{1} \rho$ and $\mathcal{L}_{\varphi}\rho$ denote the possible relaxation and dephasing baths for each qubit, while $\mathcal{L}_{a}\rho$ describes the decay of the resonator. \blue{Following the recent experiment in Ref.~\cite{scarlino.19}, we consider the coupling strength $g^{(1)}=g^{(2)}=g=2\pi\times66\ \rm{MHz}$ (corresponding to $\omega_{r}/2\pi\sim1.7\ \rm{GHz}$) and the dephasing rates $\Gamma_{\varphi}^{(1)}/2\pi=\Gamma_{\varphi}^{(2)}/2\pi=2.7\ \rm{MHz}$.} We set the qubit relaxation rates to zero, i.e., $\Gamma_{ge}=\Gamma_{ef}=\Gamma_{gf}=0$, because the qubit linewidths measured in experiment already include the relaxation effects. In addition, according to Ref.~\cite{Samkharadze.16}, \blue{the decay rate of the resonator can be as low as $\Gamma_{a,r}/2\pi=0.028\ \rm{MHz}$, which corresponds to a quality factor of $10^{5}$. For the iSWAP gate, we consider the initial state of the coupled system to be $|g,e,0\rangle$. Ideally, without decoherence, the final state is expected to be $|e,g,0\rangle$. In Fig.~\ref{fig:twoqubit}(d), we plot the fidelity of the iSWAP gate as a function of $\Delta/g$. The fidelity is defined as $F=\rm{Tr}\left[\rho_{\rm{id}}\, \rho_{\rm{re}}\right]$, where $\rho_{\rm{id}}$ and $\rho_{\rm{re}}$ denote the ideal and realistic density matrices, respectively. Here, we consider the qubit-resonator detuning $\Delta^{(1)}=\Delta^{(2)}=\Delta$. We find that the fidelity increases as the detuning becomes large; when $\Delta/g=10$, the fidelity surpasses 99.2\%. The corresponding population is shown in Fig.~\ref{fig:twoqubit}(b).
In Fig.~\ref{fig:twoqubit}(c), we plot the fidelity of the holonomic gate as a function of $\alpha/g$. In the simulation, we take a typical decay rate of the transmon, $\Gamma_{a,tr}/2\pi=4\ \rm{kHz}$ \cite{Zi.16,Tao.18}, and a transmon dephasing rate of $\Gamma_{\varphi,tr}/2\pi=0.8\ \rm{MHz}$ \cite{scarlino.19} with $\mathcal{L}_{\varphi,tr} \rho=\frac{1}{2}\Gamma_{\varphi,tr} \mathcal{D}[| 0\rangle\langle 0|-| 1\rangle\langle 1|]$. The other parameters are similar to those of the iSWAP gate for a fair comparison. It is clear that the performance of the holonomic gate depends sensitively on the value of the anharmonicity. When the anharmonicity is zero, the fidelity can be as low as 0.4, due to severe leakage to the state $|g,g,2\rangle$. The fidelity gradually increases with the anharmonicity; when the anharmonicity is large enough, with $\alpha/g\geq10$, the fidelity can reach about 98\%. However, in experiments, a large anharmonicity would cause unwanted charge noise for the transmon; normally, the anharmonicity lies within the range $\alpha/2\pi=[200, 400]$ MHz \cite{Zhao.20}. In Fig.~\ref{fig:twoqubit}(a), we show the population for the holonomic gate considering $\alpha/2\pi\simeq 400$ MHz (corresponding to $\alpha/g\simeq6.2$); the related fidelity is about 90\%. Note that since the leakage to the state $|g,g,2\rangle$ only affects the subspace $S_{2}$, here we consider the initial state to be $|g,e,1\rangle$ rather than $|g,e,0\rangle$.}
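The way the dissipators of Eqs.~(\ref{eq:linblad2}) and (\ref{eq:D}) enter a numerical solution can be illustrated with a minimal sketch: pure dephasing of a single qubit, propagated through the vectorized Liouvillian, with the fidelity $F=\rm{Tr}[\rho_{\rm{id}}\,\rho_{\rm{re}}]$ evaluated against the decoherence-free state. The rate below is illustrative, not a value used in our simulations:

```python
import numpy as np
from scipy.linalg import expm

# Pure dephasing of one qubit: d(rho)/dt = (Gamma_phi/2) * D[sz] rho,
# with D[L]rho = (2 L rho L^dag - L^dag L rho - rho L^dag L)/2  (Eq. D).
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
Gamma_phi = 0.5                      # illustrative rate, arbitrary units

# Column-stacking convention: vec(A X B) = (B^T kron A) vec(X).
def dissipator(L, rate):
    LdL = L.conj().T @ L
    return rate * (np.kron(L.conj(), L)
                   - 0.5 * np.kron(I2, LdL)
                   - 0.5 * np.kron(LdL.T, I2))

Liouv = dissipator(sz, Gamma_phi / 2.0)   # L_phi of Eq. (linblad2), one qubit

rho0 = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|
t = 1.0
rho_t = (expm(Liouv * t) @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')

# The coherence decays as exp(-Gamma_phi * t)
assert np.isclose(rho_t[0, 1], 0.5 * np.exp(-Gamma_phi * t))

# Fidelity against the ideal (decoherence-free) state, F = Tr[rho_id rho_re]
F = np.trace(rho0 @ rho_t).real
assert np.isclose(F, 0.5 * (1 + np.exp(-Gamma_phi * t)))
```

The same vectorization extends directly to the full qubit-qubit-resonator Liouvillian of Eq.~(\ref{eq:master2}) by summing the Hamiltonian commutator term and one such dissipator per collapse channel.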
\section{Conclusion}
\blue{
We have proposed the implementation of a new type of charge qubit, formed by an electron confined in a triple-quantum-dot system, which can work at the dipolar and quadrupolar detuning sweet spots. In particular, we propose how to couple two separated charge qubits in TQDs via a superconducting resonator, with which two types of entangling gates, i.e., the iSWAP and the holonomic gates, are implemented. We find that the fidelity of the iSWAP gate can surpass 99\% considering the noise level in experiments, while the fidelity of the holonomic gate can reach 98\% if the anharmonicity in the resonator is large enough. To conclude, our proposal may offer an alternative way to implement high-fidelity quantum control for charge qubits based on semiconductor quantum dots.}
\section*{ACKNOWLEDGMENTS}
\blue{We thank Tao Chen for useful discussions.} This work was supported by the Key-Area Research and Development Program of Guang Dong Province (Grant No. 2018B030326001), the National Natural Science Foundation of China (Grant Nos. 11905065, 11874156, 11874312), the Research Grants Council of Hong Kong (No. CityU 11303617), the Guang Dong Innovative and Entrepreneurial Research Team Program (No. 2016ZT06D348), \blue{and the Guangxi Science Foundation (Grant No. AD22035186).}
\section{Introduction}
Understanding the dynamics of the valence quarks in the nucleon is one of the most interesting aspects of quantum chromodynamics~(QCD).
The problem is not only how the valence parton distribution functions (PDFs) can be calculated from first principles in QCD, but also
how, in general, three relativistic fermions become bound.
The special feature of valence quarks is that they define the baryonic quantum number of the nucleon
and one expects them to be a bridge between baryonic spectroscopy and the QCD structure of the nucleon.
Since baryonic spectroscopy adheres reasonably well to the SU(6) spin-flavor symmetry, one expects such a symmetry to be found also in the
valence quark distributions in the nucleon and the problem is to
understand how and in which parts of the valence quark momentum distribution this symmetry is broken.
Historically, the first attempts to model the valence quark structure of the nucleon were based on the non-relativistic picture of constituent quarks, in
which SU(6) symmetry was preserved for constituents carrying approximately one third of the nucleon mass\cite{Isgur:1979be,Close:1979bt}.
These models were successful in describing the phenomenology of baryonic spectroscopy.
However, experimental extraction of the valence PDFs indicates that SU(6) is apparently violated, especially at large Bjorken $x$.
Thus, one of the unresolved issues is how to reconcile
the apparent success of SU(6) in baryonic spectroscopy and its breaking down in partonic distributions.
Another unique property of valence quarks is that their distribution weighted by Bjorken $x$ has a well defined peak,
even though the shape of the PDF is not an observable.
The peaking of the momentum distribution for the bound system implies the importance of mean-field dynamics for the individual
valence fermions in the nucleon, while the position of the peak is sensitive to the dynamical aspects of the interaction
in the mean field.
The history of the modeling of partonic distributions is very rich, ranging from non-relativistic\cite{Isgur:1979be} and relativistic\cite{Brodsky:1981jv}
constituent quark models, bag models\cite{Jaffe:1974nj,Chodos:1974pn,Miller:1979kg}, models combining partonic and pionic cloud picture of
the nucleon\cite{Thomas:1981vc,Schreiber:1991qx,Miller:2002ig} as well as models based on the di-quark picture of the nucleon\cite{Close:1988br,Anselmino:1992vg,Roberts:1994dr,Cloet:2005pp,Cloet:2013jya} following from the spin-dependent part of the one-gluon exchange in
the non-relativistic approximation\cite{DeRujula:1975qlm}.
All these models are non-perturbative in nature, and their predictions vary widely for the general characteristics of the valence quark distribution, such as the
position of the peak and the relative strength of the $u$- and $d$-quark distributions. While calculations based on lattice QCD reproduce the general characteristics of valence PDFs, their complexity does not always allow an understanding of the dominant mechanisms of interactions.
In this and the following papers we develop a new model for valence quark distributions of the nucleon based on the multi-quark correlation picture.
The validity of such a model is based on the fact that even though the number of quarks in the nucleon is not conserved, the number of
valence quarks is effectively conserved, and therefore it is possible to describe them in the framework used for the description of a bound system with a finite number of fermions.
This approach is similar to the highly successful multi-nucleon correlation model used to calculate the momentum distribution of
nucleons in nuclei\cite{Frankfurt:2008zv,Egiyan:2005hs,Frankfurt:1988nt,Arrington:2011xs,Fomin:2017ydn,Artiles:2016akj}, as well as to the
calculation of the momentum distribution in ultra-cold
atomic Fermi gases with contact interactions (see e.g. \cite{Tan2008,Hen:2014lia}).
In our approach we consider three distinct interaction dynamics of valence quarks: the mean field of the
three-valence-quark (3q) cluster, and two-quark and three-quark short-range interactions, each of which is expected to dominate in a different momentum-fraction range of the valence quarks.
We demonstrate that such a framework in the
description of the valence quark dynamics brings a new insight and methodology in analyzing the partonic
distributions in the nucleon.
Why modeling? The advantage of the proposed framework is that it creates a new ground for making predictions for different QCD processes
involving nucleons, since in this case one can make unique predictions based on whether the process under study is dominated by the
interaction of quarks in the mean field or in two-/three- quark correlations. As experience from nuclear physics shows, eventually the ab-initio calculations\cite{Schiavilla:2006xx,Wiringa:2013ala} (lattice calculation in the case of QCD) will reproduce the phenomena observed based
on the model description of the processes\cite{Piasetzky:2006ai,Sargsian:2012sm,Hen:2014nza}.
However, ab-initio calculations, because of their
general approach, are not always well positioned for making experimentally verifiable specific predictions.
The article is organized as follows: in Sec.\ref{phenom} we first present the justification for the mean-field, two- and three-quark short-range
interaction picture of partonic distributions. We then discuss the phenomenology of partonic distributions, which is almost universal for
all recent PDFs extracted from the analysis of different high energy scattering data of electro-production and weak interaction
processes (e.g. deep inelastic scattering and Drell-Yan). Our focus is on the dynamical characteristics of the $x$-weighted valence PDFs, such as the position and height of the peaks as well as the ratio of d- to u-quark distributions.
We examine how the position and the height of the peak change due to QCD
evolution, while the difference between the peak positions for u- and d-quark distributions is largely $Q^2$ independent.
Another observation is the approximate validity of SU(6) symmetry in the region close to the peak of the partonic distributions.
The latter observation is used as a justification for the assumption of approximate SU(6) symmetry for mean-field valence quarks.
In Sec.\ref{model} the mean field model of valence quark distributions is presented.
Based on this model we calculate the $x q_V(x,Q^2)$ in leading order (LO), which is then fitted to the phenomenological distributions (in Sec.\ref{estimates}) to evaluate the parameters defining the non-perturbative wave functions.
The latter allowed us to ascertain the expected strength of the two- and three- quark correlation effects in the normalization and momentum sum rule of valence d- and u- quark distributions. The evaluated parameters of
the wave functions can also be used for estimation of mean field contributions of the different QCD processes sensitive to
the valence quark dynamics at moderate Bjorken x $ \approx 0.2$.
Sec.\ref{outlook} summarizes our results, discusses the predictions that follow from the model and the possibility of their
experimental verification. We also discuss the limitations of the model and outline the second part of the work
in which the mechanism of quark-quark short range interaction
is included to calculate the high momentum component of valence quark distributions.
In Appendix A we summarize the Light-Front effective diagrammatic rules on which the calculation in the paper is based.
In Appendix B we present the mathematical details of the derivation related to the integration in the transverse momentum space.
\section{The model of valence quark distributions and phenomenology of PDFs}
\label{phenom}
The uniqueness of the valence quarks as carriers of the baryonic quantum number of nucleons is that even if the number of quarks
in the bound nucleon is not conserved, their ``effective" number is conserved. This situation allows us to introduce a concept of
potential energy and apply a rather well known
theoretical framework for the description of a bound system consisting of a finite number of fermions.
In this framework the bulk of the momentum distribution is defined by the mean field dynamics while,
if interaction is short range, the fermion-fermion short-range correlations are responsible for the high momentum
part of the same distribution. As it was mentioned in the introduction, this approach has been successfully
applied in nuclear physics and physics of ultra-cold atomic Fermi gases.
\subsection{Mean Field and Quark Correlation Model of Valence Quark Distributions in the Nucleon}
\label{totalmodel}
Any finite size, bound system with a fixed number of fermions with nonzero interaction length will
exhibit a mean field interaction of its constituents \cite{Migdal:1967ab}.
Hereafter, we call a mean field an approximation in which the interaction of a given constituent
with the other constituents in the system is characterized by a single effective potential representing the sum of
all possible non-perturbative interactions between constituents. (Such a condition is similar to the one under which the
Hartree-Fock approximation is justified.)
In quantum mechanics if the binding of a constituent is due to the minimum of the mean field potential,
the latter can be approximated by a harmonic oscillator potential in the vicinity of the minimum.
Thus the momentum distribution of the constituent in
such a mean field can be described in exponential form with the coefficient of the exponent characterizing the
size of the system (or orbit).
The same mean field dynamics define also
the characteristic average momentum carried by the constituents which can be related to the position of the
peak of the momentum weighted distributions.
For the system in which interaction strength between constituents is not negligible at distances much
smaller than the bulk size of the system (like the contact interaction, see e.g. \cite{Hen:2014lia}), the
high momentum part of the momentum distributions is a result of the short range pair-wise interactions between constituents,
often referred to as short-range correlations.
In the present work we apply such a framework for the calculation of the light-front momentum distribution of valence quarks in the
nucleon\cite{Leon:2020cev}. The approach is based on the argument that the ``effective" number of valence quarks is conserved and the
strength of the quark-quark interactions is sufficiently large in the volume they occupy. It is assumed that the volume
the valence quarks occupy is smaller than the actual size of the nucleon. Thus we arrive at the picture of a non-perturbative
valence quark cluster embedded in the nucleon. We assume the following interaction dynamics for valence quarks in the nucleon:
the non-perturbative interaction among the three valence quarks provides the confinement
(referred hereafter as mean field interaction), the short range
perturbative quark-quark interactions through hard gluon exchanges generate the high momentum (high Bjorken x) tail of the valence quark
distribution (hereafter, referred to as quark-quark short range correlations) and the interaction of three-valence quark cluster
with the residual nucleon system, R, influences the final momentum distribution of valence quarks in the nucleon.
Such a framework, in leading order, is represented by the three light-cone time-ordered diagrams in Fig.\ref{framework}, and
it is assumed that they describe the major properties of partonic distributions at Bjorken $x>0.1$. For smaller $x$ the valence
quark distributions are defined predominantly by Regge dynamics dominated by the $\alpha = {1\over 2}$ trajectory, which is beyond the scope of this paper
(see e.g. Ref.\cite{Roberts:1990ww}).
\begin{figure*}
\includegraphics[width=1.0\textwidth]{figures/Fig1.png}
\centering
\caption{The mean field (a), two-valence (b) and three-valence (c) quark short-range correlation contributions to the deep-inelastic scattering
off the nucleon in the partonic picture.}
\label{framework}
\end{figure*}
Before proceeding with the calculations within this framework, one can evaluate the valence quark distributions parametrically to validate the model at least on a
qualitative level. First, the mean-field dynamics is largely responsible for the bulk properties of the bound three-quark system
such as its size and average momentum of its constituents. The functional form of the latter can be described by an
exponential function with the exponent being proportional to the spatial extension of the valence quarks.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{figures/Fig2.png}
\centering
\caption{Comparison of the $x$-weighted valence $u$-quark distribution with fits by three analytic functions corresponding to
the mean field, 2q- and 3q-correlations.}
\label{xu_vs_framework}
\end{figure*}
To identify the range of Bjorken $x$ that one expects to be
generated by mean field dynamics we note that the phenomenology of PDFs is well consistent
with the average transverse momentum of valence quarks being about $\langle k_t\rangle\sim 200$~MeV/c\cite{Feynman:1973xc}.
Using the approximate spherical symmetry of the momentum distribution, we estimate the Bjorken $x$ range relevant to the mean field to be
$x\lesssim {2\langle k_t\rangle\over m_N} \sim 0.4$. Thus for massless valence quarks one expects an exponential type distribution for the range of
$0.1 \lesssim x\lesssim 0.4$.
For the highest end of the Bjorken $x$ range, the phenomenological observation is that for $x\ge 0.8$ the valence PDFs behave as $(1-x)^3$, which is
consistent with the dynamics of two hard gluon exchanges between three valence quarks\cite{Brodsky:1974vy,Lepage:1980fj, Ball:2016spl}.
Hereafter we refer to these as three-quark (3q) correlations.
The short-range nature of such a correlation requires all three quarks to have momenta exceeding the above mentioned $\langle k_t\rangle$, with the interacting
quark balancing the other two spectator quarks. The latter results in $x \gtrsim {4\langle k_t\rangle\over m_N}\sim 0.8$.
Having established the regions where one expects the dominance of the mean field and of 3q-correlations, our next conjecture is that the transition between these two
regions happens through two-quark short-range interactions, which we refer to as 2q-correlations.
If such two-quark correlations are due to short-range gluon
interactions, then they occur predominantly between two quarks with opposite helicities, resulting in a functional form of the partonic distribution
$(1-x/B)^2$, where $B$ is a parameter characterizing the total momentum fraction carried by the center of mass of
the 2q-correlated pair.
In Fig.\ref{xu_vs_framework} we fit the
functional forms following from the above discussed scenarios of mean-field, 2q- and 3q-correlations to the $x$-weighted
valence $u$-quark distribution. As the figure shows, the exponential mean-field form as well as the $(1-{x\over B})^2$ and $(1-x)^3$ forms for the
$2q$- and $3q$-correlations reproduce the shape of the valence quark distribution surprisingly well starting at $x\gtrsim 0.1$.
Note that in principle the same functions should reproduce the $x<0.1$ part of the distributions; however, as mentioned earlier,
at $x<0.1$ valence quark distributions are dominated by Regge behavior, which is not included in the framework considered here.
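The fitting procedure behind Fig.\ref{xu_vs_framework} can be sketched schematically as follows; the data points below are synthetic stand-ins generated from the same functional forms (not actual PDF values), so the script only illustrates the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# The three analytic forms discussed in the text; N, a, B are fit parameters.
def mean_field(x, N, a):      # exponential, expected for 0.1 <~ x <~ 0.4
    return N * np.exp(-x / a)

def corr_2q(x, N, B):         # (1 - x/B)^2, intermediate x
    return N * (1 - x / B) ** 2

def corr_3q(x, N):            # (1 - x)^3, x >~ 0.8
    return N * (1 - x) ** 3

# Synthetic stand-ins for x*u_V(x) in the three regions
x_lo  = np.linspace(0.10, 0.40, 30)
x_mid = np.linspace(0.40, 0.70, 30)
x_hi  = np.linspace(0.80, 0.95, 30)
y_lo  = 0.9 * np.exp(-x_lo / 0.18)
y_mid = 1.5 * (1 - x_mid / 1.1) ** 2
y_hi  = 2.0 * (1 - x_hi) ** 3

(N_mf, a_mf), _ = curve_fit(mean_field, x_lo, y_lo, p0=(1.0, 0.2))
(N_2q, B_2q), _ = curve_fit(corr_2q, x_mid, y_mid, p0=(1.0, 1.0))
(N_3q,), _      = curve_fit(corr_3q, x_hi, y_hi, p0=(1.0,))

assert np.isclose(a_mf, 0.18, rtol=1e-2)   # exponential slope recovered
assert np.isclose(B_2q, 1.10, rtol=1e-2)   # 2q center-of-mass parameter
assert np.isclose(N_3q, 2.00, rtol=1e-2)   # 3q strength recovered
```

With actual PDF values on the left-hand side, the same three fits yield the parameters shown in Fig.\ref{xu_vs_framework}.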
\subsection{Phenomenological properties of valence quark distributions}
One of the most distinctive properties of valence quarks is that while their distributions do not exhibit any peaking structure,
their $x$-weighted distribution, $h(x,t)\equiv x q_V(x,t)$, has a
clear peak at around $x\sim 0.2$, which makes them
qualitatively different from the sea quark distributions (Fig. \ref{peak_and_height}).
Note that the peaking property is universal for leading and higher order approximations, despite the shape of the PDFs not being a
physical observable and having a certain degree of arbitrariness in its definition. Here we define $t = \log{Q^2}$.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{figures/Fig3.png}
\vspace{-0.5cm}
\caption{The up and down valence PDFs (left) and x weighted PDFs (right) at $Q^2 = 4 GeV^2$.
The 68 \% confidence level error bands for PDF parametrization are shown.
The PDF sets CJ15nlo and NNPDF3.1-nnlo \cite{Accardi:2016qay,Ball:2017nwa} are used in the calculation. Similar features are also observed for other modern PDF parameterizations.}
\label{peak_and_height}
\end{figure*}
One expects the $Q^2$ dependence of the position and the height of the peak to originate
from QCD evolution, in which case, with increasing $Q^2$, gluon radiation off the valence quarks shifts the peak towards
smaller $x$ and diminishes the height of the maximum. This can be seen in Fig.\ref{peak_and_height_Q2dep}, where the valence
$d$- and $u$-quark structure functions are presented for $Q^2$ ranging from $5$ to $100$~GeV$^2$.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{figures/Fig4.png}
\vspace{-0.7cm}
\caption{The $x$ dependence of $x q_V(x,Q^2)$ function for $d-$ and $u-$ valence quarks, calculated for
CT14nnlo \cite{Dulat:2015mca} PDFs at various $Q^2$. }
\label{peak_and_height_Q2dep}
\end{figure*}
In the recent work \cite{Leon:2020nfb} we found that the correlation between the height of
the maximum of the $h(x_p,t)$ function and its position, $x_p$ has a universal exponential form: $h(x_p,t) = C e^{Dx_p}$,
where constants, C and D saturate with the increase of the order of approximation in coupling constant.
As can be seen from Fig.\ref{expcor}, the exponential behavior extends to practically
the entire range of the coupling constant $\alpha$ being covered in various deep-inelastic processes.
The observed ``height-position" correlation results in the following
analytic form of the valence PDFs at the vicinity of $x_p$\cite{Leon:2020nfb} :
\begin{equation}
h(x,t) \equiv x q_V(x,t) \approx C + CDe x(1-x)^{1-x_p(t) \over x_p(t)}.
\label{meanfield}
\end{equation}
It can be checked that for the above relation the $h(x,t)$ function peaks at $x =x_p$.
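Indeed, with $a\equiv {1-x_p(t) \over x_p(t)}$, differentiating the $x$-dependent term of Eq.(\ref{meanfield}) gives
\begin{equation}
{d\over dx}\left[x(1-x)^{a}\right]=(1-x)^{a-1}\left[1-(1+a)x\right],
\end{equation}
which vanishes at $x=1/(1+a)=x_p$, confirming that $h(x,t)$ peaks at $x=x_p$.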
Eq.(\ref{meanfield}) indicates that the exponent of the $(1-x)$ term, ${1-x_p(t) \over x_p(t)}$, is defined by the position of the peak $x_p$,
which changes continuously with $t$ due to QCD evolution.
However, as Fig.\ref{peak_and_height_Q2dep} (see also Fig.~\ref{R_Q2deps}(b) below) shows, already at the starting $Q_0^2=1.69$~GeV$^2$
(before the onset of QCD evolution) the peak positions $x_p$ are different for valence d and u quarks, resulting in different $x$-dependencies of the PDFs in the vicinity of the peaks. The observation that the exponent of the $(1-x)$ term is flavor dependent indicates a more complex dynamics in the generation of valence PDFs at $x\sim x_p$ than one expects, for example, from a mechanism of a fixed number of gluon exchanges, which would
result in the same exponent for valence u- and d-quark PDFs. A flavor-dependent exponent can be generated by considering an effective
nonperturbative potential in Weinberg-type equations for relativistic bound states \cite{Weinberg:1966jm}.
\begin{figure*}[thb]
\centering
\includegraphics[width=1.0\textwidth]{figures/Fig5.png}
\vspace{-0.4cm}
\caption{The peak height and position correlation described by an exponential function, $h(x_p,t) =Ce^{Dx_p}$,
from Ref.\cite{Leon:2020nfb}. CT14lo and CT14nnlo correspond to the CT14 valence PDF parameterization
in leading and next-to-next-to-leading order approximation \cite{Dulat:2015mca}.
Other PDF parameterizations also result in an exponential form of the peak height-position correlation. The CT14nnlo band shows the Hessian error at 68\% confidence level, while for CT14lo only the central value is shown.}
\label{expcor}
\end{figure*}
Since the mean-field part of the valence quark wave function dominates in calculations related to the bulk (integrated) characteristics
of baryons (e.g. the mass spectrum \cite{Isgur:1979be}), to explain the approximate SU(6) symmetry of baryonic spectroscopy one should
expect the same symmetry to hold for the mean-field part of the valence PDFs. For SU(6) symmetry one expects
the ratio of valence $d$- to $u$-quark distributions to scale as 0.5. As Fig.\ref{dv_uv_ratio} shows, all phenomenological valence quark distributions
produce a $d_V/u_V$ ratio close to $0.5$ in the region $x\sim x_p \sim 0.2$, where one expects the dominance of mean-field dynamics.
It is quite intriguing that some of these distributions such as CJ12\cite{Owens:2012bv} and
MMHT2014\cite{Harland-Lang:2014zoa} produce an almost constant scaling for $d_V/u_V$ ratios close to 0.5 at $x\approx x_p$
\footnote{It would be interesting if future analyses of the partonic distributions gave special attention to an unambiguous
verification of the existence of the 0.5 scaling of the $d_V/u_V$ ratio in the $x\sim x_p \sim 0.2$ region.}.
The figure also shows the decrease of the $d_V/u_V$ ratio with increasing $x$ ($x>0.3$), which is consistent with
an increasingly large violation of SU(6) symmetry.
One mechanism that can produce such a violation is the onset of quark-correlation dynamics, in which case one expects the violation of SU(6) symmetry due to the helicity selection rule in quark-quark interactions through hard vector~(gluon) exchanges (see e.g. Ref.\cite{Lepage:1980fj}).
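The interplay between a 0.5 ratio near the peak and its fall-off at large $x$ can be mimicked by toy valence distributions of the form $N x^a (1-x)^b$ with a flavor-dependent $(1-x)$ exponent, as discussed above. A minimal sketch; all parameter values below are hypothetical and chosen only so the ratio equals 0.5 at $x=0.2$:

```python
def xfv(x, N, a, b):
    """Toy valence distribution N * x**a * (1-x)**b (illustrative only)."""
    return N * x**a * (1 - x)**b

# Hypothetical parameters: the d-quark carries one extra power of (1-x),
# so d_V/u_V = 0.625*(1-x), equal to 0.5 at x = 0.2 and falling at large x.
u = dict(N=2.00, a=0.7, b=3.0)
d = dict(N=1.25, a=0.7, b=4.0)

def ratio(x):
    return xfv(x, **d) / xfv(x, **u)

print(ratio(0.2), ratio(0.7))
```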
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.8\textwidth]{figures/Fig6.png}
\vspace{-0.4cm}
\caption{The ratio of the $d-$ to $u-$ valence quark distributions for various PDF sets \cite{Accardi:2016qay,Martin:2009iq,Dulat:2015mca,Owens:2012bv,Harland-Lang:2014zoa}
calculated at the lowest point of $Q_0^2 = M_c^2 = 1.69$~GeV$^2$ for which all the PDFs are defined.}
\label{dv_uv_ratio}
\end{figure*}
In Fig.\ref{R_Q2deps}(a) we show the $Q^2$ dependence of the $d_V/u_V$ ratio estimated at the peak position of the
$h(x,t) = x\cdot d_V(x,Q^2)$ distribution.
As the figure shows, the ratio $\approx 0.5$ is largely
$Q^2$ independent, which indicates that such a ratio reflects an underlying property of the quark wave function of
the nucleon rather than being due to QCD evolution. The QCD evolution only slightly modifies this ratio due to the migration of
possible quark correlation effects from the large- to the small-$x$ region.
Finally, another phenomenological feature is that for valence d- quarks the position of the peak
of the $h(x,t)$ function is systematically lower than that of the valence u- quarks
(see Fig.~\ref{peak_and_height}(b)). If such a difference in the peak positions is due to a dynamical property of
valence d- and u- quarks in the nucleon, then one expects it to be insensitive to QCD evolution and largely
independent of $Q^2$.
Such an expectation is justified by the weak $Q^2$ dependence of the ratio of the peak positions of valence u- and
d- quark distributions for the CT class of PDF parameterizations shown in Fig.\ref{R_Q2deps}(b).
The CJ15 parameterization shows $Q^2$ independence starting at $Q^2\approx 10$~GeV$^2$, while
the MMHT14 PDFs reach $Q^2$ independence at $Q^2 > 10^3$~GeV$^2$.
Even though the magnitude of the gap between the peaks of $xd_V(x,t)$ and $xu_V(x,t)$ varies
noticeably between different groups of PDFs, its $Q^2$ dependence is largely flat, showing no systematic trend.
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{figures/Fig7.png}
\vspace{-0.5cm}
\caption{(a) The $Q^2$ dependence of the ratio of d- to u- valence quark distributions at the peak position of $h(x,t)$ distribution for $d$ quark.
(b) The $Q^2$ dependence of the ratio of the peak positions of the d- and u- valence quark $h(x,t)$ distributions.
Different PDF sets are used from Refs.\cite{Accardi:2016qay,Martin:2009iq,Dulat:2015mca,Owens:2012bv,Harland-Lang:2014zoa}.}
\label{R_Q2deps}
\end{figure*}
Summarizing this section, we would like to emphasize that a dedicated study of the above discussed characteristics of valence quark
PDFs, such as the exponential correlation of Fig.\ref{expcor}, the approximate $0.5$ scaling of the d- to u- valence quark PDF ratio
at $x\sim x_p\sim 0.2$ (Figs.\ref{dv_uv_ratio} and \ref{R_Q2deps}(a)), as well as the gap between the d- and u- valence quark
peak positions, will open new avenues in understanding the valence quark structure of the nucleon.
Finally, even though in the above discussion we considered PDFs in higher-order approximations (NLO and NNLO),
the observed PDF characteristics are largely the same for leading-order PDFs.
\subsection{Main assumptions of the model}
The above discussed phenomenology of valence PDFs serves us as a ground for several assumptions for the model outlined below:
\begin{itemize}
\item {\bf Dynamics:} The main assumption of the model is that the mean field, two- and three- quark short-range correlations define the dynamics of the valence quarks in the range of $0.1 \lesssim x\lesssim 1$. In the considered model, lepton scattering from the valence quark,
in leading order, proceeds according to the diagrams presented in Fig.\ref{framework}.
\item {\bf Valence Quarks in the Nucleon:} The model assumes the existence of an almost massless three-quark valence cluster, $V$, in the nucleon.
The cluster is compact, with a transverse separation between any pair of quarks of $b_{qq} \lesssim 0.3$~Fm, which indicates that
the 3q system will occupy a size of up to 0.6~Fm. Such a separation is
consistent with both phenomenology and theoretical evaluations made in all cases in which the 3q core of the nucleon
is considered (see e.g.\cite{Brodsky:1985qs,Cheedket:2002ik,Weiner:1985ih,Islam:2004ke}).
In the model the
valence quark system defines the baryonic number, but not necessarily the total isospin of the nucleon. It can have total isospin
$I_V = {1\over 2}$ or ${3\over 2}$, each corresponding to different excitations or masses of the residual nucleon system.
(For the lowest mass of the recoil system one expects the 3q system to have the same isospin and isospin projection as
the considered nucleon.)
\item {\bf Residual Structure:} Introducing the residual structure of the nucleon with the spectrum of mass, $m_R$, represents an
introduction of spectral function formalism in the description of the nucleon structure.
The model assumes a certain universality of the residual structure, $R$, entering in all three mechanisms of generation of
valence quark distribution according to Fig.\ref{framework}. This universality is reflected in the fact that one can fix its main properties within one of the approaches (say within mean field model) and apply it in the calculation of $2q$- and $3q$- short range correlation contributions.
The mass spectrum of the residual system is continuous and effectively depends on whether u- or d- valence quarks are probed.
For example, a given valence quark in the proton can originate from the 3-valence-quark cluster having the same total isospin and projection as
the proton, but it can also originate from the 3q cluster with isospin $I_V={1\over 2}$ and projection $I_V^3=-{1\over 2}$,
or with isospin $I_V={3\over 2}$, corresponding to a higher mass
of the residual system. Thus
in the case of the proton one can expand the residual nucleon mass in the form:
\begin{eqnarray}
m_R(u/d) & = & \alpha_{u/d}\cdot m_R(I_V={1\over 2},I^3_V= {1\over 2} ) \nonumber \\
& + & \beta_{u/d} \cdot m_R(I_V={1\over 2},I^3_V= -{1\over 2} ) \nonumber \\
& + & \gamma_{u/d} \sum\limits_{I^3_V=-{3\over 2}}^{3\over 2} m_R(I_V={3\over 2},I^3_V ) + \cdots,
\label{massspectrum}
\end{eqnarray}
where $I^3_V$ is the isospin projection of the valence quark cluster and ``$\cdots$" accounts for the possible
higher mass spectrum of the residual system.
Here the $\alpha$, $\beta$ and $\gamma$ factors estimate the probabilities that
the u- or d- valence quarks emerge from the different isospin configurations.
From the above representation
one expects that the minimal mass of the residual system comes from the contribution
of the $I_V={1\over 2},I^3_V= {1\over 2}$ term, which has the same isospin numbers as the proton.
As mentioned above, for total isospin ($I_V={1\over 2}$, $I_V^3=-{1\over 2}$) or $I_V={3\over 2}$ the residual system will correspond to higher mass excitations.
With this assumption we observe that, for the proton with its $uud$ configuration, a u- valence quark will on average be accompanied by a smaller residual mass than a d- valence quark. Thus one expects
that for the proton, in general:
\begin{equation}
m_R(u) < m_R(d),
\label{udmass}
\end{equation}
since for the d- valence quarks one expects a relatively larger contribution from higher residual masses.
For example, the second ($\beta$) term contributes more in the d- valence quark case, since it corresponds to the $udd$ configuration.
It is interesting that such a scenario is in qualitative agreement with the violation of the Gottfried sum rule \cite{Arneodo:1994sh} and the
``SeaQuest'' result \cite{Dove:2021ejl}, since a large $\beta$ factor for the d- valence quark will generate an excess of $\bar d$ quarks
due to the $(ddu)(\bar du)$ configuration in the proton.
In the current paper we will estimate the magnitude of $m_R$ for u- and d- quarks by fitting
the calculated valence quark distributions to the empirical PDFs.
In principle it could be possible to extract $m_R$ experimentally, with dedicated measurements of
tagged structure functions in semi-inclusive DIS processes.
\item {\bf Mean Field Model:} In the considered model we assume that the mean-field dynamics (Fig.\ref{framework}(a))
is largely responsible for the position and height of the peak of the valence quark structure function, $h(x,t)$.
We assume that the mean-field dynamics preserves the SU(6) flavor-spin symmetry of the valence quark cluster for both the $I_V = {1\over 2}$ and ${3\over 2}$ cases. Expecting that the mean-field
contribution will be relevant for the region of $x$ up to as much as 0.55--0.65, and requiring
that the final mass produced in DIS be $\ge 2$~GeV, we set the starting $Q^2 = 4$~GeV$^2$.
\item{\bf $2q$- and $3q$- Correlations:} Starting at $x\gtrsim 0.4$ the model assumes the onset of $2q$- short-range correlations, transitioning to $3q$ correlations
at $x\gtrsim 0.7$. All such correlations take place within the valence quark cluster.
We assume that such short-range correlations are generated by vector exchanges, which in the pQCD limit correspond to hard gluon exchanges.
In the calculation of such interactions the finite range of the correlations will be introduced by a momentum-transfer cut-off in the gluon-exchange diagrams.
Because of the selection rule, by which gluon exchanges dominate for quarks with opposite helicities in the massless limit, we expect that with the onset of
quark correlations the SU(6) symmetry of the 3q system will break down.
\end{itemize}
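The residual-mass expansion in the assumptions above can be illustrated numerically. In the sketch below all masses and probability factors are hypothetical placeholders, chosen only to realize the expected ordering $m_R(u) < m_R(d)$:

```python
# Illustrative evaluation of the residual-mass expansion; all masses (GeV)
# and probability factors below are hypothetical placeholders.
m_half_plus  = 0.30   # m_R(I_V=1/2, I3_V=+1/2): lowest-mass configuration
m_half_minus = 0.50   # m_R(I_V=1/2, I3_V=-1/2): higher-mass excitation
m_three_half = 0.70   # m_R(I_V=3/2, any I3_V):  higher-mass excitation

def m_R(alpha, beta, gamma, n_iso32=4):
    # four I3_V projections enter the I_V = 3/2 sum
    return alpha * m_half_plus + beta * m_half_minus + gamma * n_iso32 * m_three_half

# u-quarks assumed to come mostly from the proton-like (1/2, +1/2) cluster;
# d-quarks receive a larger beta (udd-like) contribution.
mR_u = m_R(alpha=0.80, beta=0.15, gamma=0.0125)
mR_d = m_R(alpha=0.55, beta=0.35, gamma=0.025)
print(mR_u, mR_d)
```

Any choice with a larger $\beta$ (and $\gamma$) weight for the d-quark reproduces the inequality $m_R(u) < m_R(d)$ discussed in the text.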
Using the above assumptions in developing the quark correlation model of valence quark distributions, we also
bring forward several experimentally verifiable predictions. These include the prediction of the universality of the recoil system, $R$, over
almost the entire range of $x>0.1$, as well as the prediction of the onset of $2q$- and $3q$- short-range quark correlations, which
will be distinguished by specific correlation properties between the struck and recoil valence quarks.
These predictions can be quantified once the model parameters are
fixed by comparing the calculated valence quark distributions with PDFs extracted from DIS measurements.
It is worth mentioning that the presented approach can also be applied to studies of QCD processes in the
nuclear medium \cite{Sargsian:2002wc,Cosyn:2017ekf,Cosyn:2010ux},
in which case an additional vertex will be added to the diagrams of Fig.\ref{framework}, describing the transition of the nuclear target
to the ``nucleon--residual nucleus" intermediate state.
\section{Mean-Field Model of Valence Quark Distributions}
\label{model}
Our focus now is on modeling the mean-field dynamics of the valence quark interaction and
calculating the valence quark distribution functions that can be compared with empirical
PDFs. The mean-field picture that we adopt assumes that the nucleon core
consists of an interacting three-valence-quark cluster embedded in the residual nucleon system,
which consists of sea quarks, gluons, pions and other possible hadrons.
The main assumptions of the mean field model are as follows:
\begin{itemize}
\item The valence 3q system occupies a space in which the separation of
any given quark pair in the impact parameter space is about $0.3$~Fm.
The interactions among these quarks are described by coupled relativistic
three-dimensional harmonic oscillators, thus satisfying the confinement condition.
\item Valence quarks are almost massless, with the invariant energy of the 3q system contributing
to the nucleon mass.
\item The residual system generates an external field for the 3q valence system and occupies a volume less than or equal to the nucleon volume.
\end{itemize}
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{figures/Fig8.png}
\centering
\vspace{-0.4cm}
\caption{The nucleon transition into the valence and residual systems.
The diagram is arranged by light-front
$\tau = x^+ = z + t$ ordering. The two dashed vertical lines represent the two intermediate states
where the light-front wave functions of the residual and 3q systems are defined.}
\label{mf_diagram_lc}
\end{figure}
To calculate the amplitude of the virtual photon interaction with a valence quark in the above described system we
notice that, in the leading order, the light-front time ($\tau$) evolution of the scattering process proceeds according to
the diagram in Fig.\ref{mf_diagram_lc}. Here the first vertex (from the left) describes the interaction of
the 3q valence quark cluster in the field of the residual system, and the second vertex
the interaction among the three valence quarks.
The scattering amplitude corresponding to the diagram of Fig.\ref{mf_diagram_lc} can
be calculated by applying effective light-front perturbation theory in which one introduces
effective vertices that can be absorbed into the definition of light-cone wave
functions. The present approach represents the light-front projection of the covariant amplitude
which formally can be calculated within the effective Feynman diagrammatic
approach\cite{Sargsian:2001ax}.
\subsection{General Considerations: Reference Frame, Kinematics and the Cross Section of the Reaction.}
The calculations here will be done using light-cone~(LC) coordinates with the 4-momenta and
four-products defined as follows:
\begin{eqnarray}
& k^\mu = (k^+, k^-, \mathbf{k}_\perp), \ \ k^\pm = E \pm k_z, \ \ \mathbf{k}_\perp = (k_x, k_y), \ \ \nonumber \\
&k_1 \cdot k_2 = {k_{1}^-k_{2}^+\over 2} + {k_{1}^+k_{2}^-\over 2} - \mathbf{k}_{1, \perp}\cdot \mathbf{k}_{2, \perp}.
\end{eqnarray}
We consider a reference frame where the four-momenta of the nucleon, $p^\mu_N$ and the virtual photon, $q^\mu$ are:
\begin{eqnarray}
& p_N^\mu = (p^+_N, \frac{m_N^2}{p^+_N}, \mathbf{0}_\perp), \ \
q^\mu = (0, \frac{2p \cdot q}{p^+_N}, \mathbf{q}_\perp), \ \ \nonumber \\
& Q^2 = - q^2 = |\mathbf{q}_\perp|^2,
\label{reframe}
\end{eqnarray}
where $m_N$ is the mass of the nucleon and we use the standard definition of Bjorken variable: $x_B = \frac{Q^2}{2 p \cdot q}$.
The important kinematical condition of the chosen reference frame is that we assume:
\begin{equation}
p_{N}^+ \gg m_N, k_{i}^-,k_{i,\perp},
\label{highkin}
\end{equation}
where $k_i^-$ and $k_{i,\perp}$ are the ``-" and transverse components of the momenta of constituents in the nucleon.
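The light-cone conventions above can be verified numerically. A small sketch (with arbitrary test momenta) checking that the light-cone scalar product reproduces the ordinary Minkowski product:

```python
# Light-cone kinematics helpers following the conventions of the text:
# k^± = E ± k_z,  k1·k2 = (k1^- k2^+ + k1^+ k2^-)/2 - k1⊥·k2⊥.
def lc(E, kx, ky, kz):
    return (E + kz, E - kz, kx, ky)  # (k^+, k^-, k_x, k_y)

def dot(k1, k2):
    return 0.5 * (k1[1] * k2[0] + k1[0] * k2[1]) - (k1[2] * k2[2] + k1[3] * k2[3])

# Check against the Minkowski product for two arbitrary test momenta.
p = lc(5.0, 0.3, -0.2, 4.9)
q = lc(2.0, -0.1, 0.4, 1.5)
mink = 5.0 * 2.0 - (0.3 * -0.1 + (-0.2) * 0.4 + 4.9 * 1.5)
print(dot(p, q), mink)
```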
In calculating the cross section of deep inelastic inclusive scattering from the nucleon
it is convenient to introduce the nucleonic tensor, $W^{\mu\nu}$, through which in
the one-photon exchange approximation the
differential cross section can be presented as follows:
\begin{equation}
{d\sigma\over dQ^2 dx} =
{2\pi \alpha^2\over Q^4 }{y^2\over Q^2}m_NL_{\mu\nu}W_N^{\mu\nu},
\label{dsigma0}
\end{equation}
where $y = {p_N \cdot q\over p_N \cdot k}$ and the spin averaged leptonic tensor is defined as:
\begin{equation}
L_{\mu\nu} =
4[(k_\mu - {q_\mu\over 2})(k_\nu - {q_\nu\over 2})] + Q^2[-g_{\mu\nu}- {q_\mu q_\nu\over Q^2}],
\end{equation}
where $k_\mu$ is the four-momentum of the initial lepton.
If one introduces the deep-inelastic nucleon transition current, $J_N^{\mu}(p_X,s_X,p_N,s_N)$, between the initial nucleon and the final deep-inelastic state, $X$, then the
nucleonic tensor can be presented as
\begin{align}
W_N^{\mu\nu} &= {1\over 4\pi m_N }\int \sum\limits_X\sum\limits_{s_X}J^{\mu,\dagger}(p_X,s_X,p_N,s_N) \nonumber \\
&\times J^{\nu}(p_X,s_X,p_N,s_N)
(2\pi)^4\delta^4(q+p_N - p_X) \nonumber \\
&\times \delta(p_X^2-M_X^2) {d^4p_X\over (2\pi)^3},
\label{Wmunu}
\end{align}
where one sums over all possible final DIS states, $X$. In the most general case the
nucleonic tensor for unpolarized electron scattering from an unpolarized target can be expressed
through two invariant structure functions:
\begin{align}
W_N^{\mu \nu} &= - {F_1 (x_B, Q^2)\over m_N} ( g^{\mu \nu} -
\frac{q^\mu q^\nu}{q^2}) \nonumber \\
&+
\frac{F_2(x_B, Q^2)}{m_N (p_N\cdot q)}(p_N^\mu - \ \frac{p_N \cdot q}{q^2} q^\mu )(p_N^\nu -\frac{p_N \cdot q}{q^2} q^\nu ),
\end{align}
where in the reference frame of Eq.(\ref{reframe}):
\begin{align}
F_{2}(x,Q^2) & = {m_N (p_N\cdot q)\over (p_N^+)^2} W_N^{++} = {m_N Q^2\over 2x (p_N^+)^2}W_N^{++}
\nonumber \\
F_{1}(x,Q^2) & = {F_{2}(x,Q^2)\over 2x}\left(1 + {2m_N^2x^2\over Q^2}\right) \nonumber \\
&- {m_N\over 2} W_N^{+-},
\label{F2F1}
\end{align}
and because of the gauge invariance,
\begin{equation}
W_N^{+-} = {2Q\over q^-}W_N^{\perp-}.
\label{gaugeinv}
\end{equation}
With these structure functions the invariant cross section of the scattering (\ref{dsigma0}) can be presented
as follows:
\begin{align}
&{d\sigma \over dQ^2 dx} =
{2\pi \alpha^2\over Q^4}y^2\bigg( 2F_{1}(x,Q^2) \nonumber \\
&+ {1\over 2 x y^2}
\bigg[(2-y)^2 - y^2\bigg(1 + {4m_N^2 x^2\over Q^2}\bigg)\bigg]F_2(x,Q^2)\bigg).
\label{dsigma}
\end{align}
With respect to Eq.(\ref{F2F1}) it is worth noting that in the limit ${2m_N^2x^2\over Q^2}\ll 1$, if
$W^{+-} = 0$, one arrives at the Callan-Gross relation:
\begin{equation}
F_{2}(x,Q^2) = 2 x F_{1}(x,Q^2).
\label{CGrel}
\end{equation}
This, together with Eq.(\ref{gaugeinv}), indicates that the Callan-Gross relation is fulfilled when
$W^{\perp-} = 0$, which according to Eq.(\ref{Wmunu}) corresponds to the absence of transverse
currents, $J^\perp=0$, in the deep-inelastic scattering process in the chosen reference frame (\ref{reframe}).
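The approach to the Callan-Gross limit in Eq.(\ref{F2F1}) can be checked numerically: with $W^{+-}=0$, the relative violation $2xF_1/F_2 - 1$ equals $2m_N^2x^2/Q^2$ and vanishes as $Q^2$ grows. A sketch with hypothetical kinematics and structure-function values:

```python
# Numerical check of the Callan-Gross limit of Eq. (F2F1): with W^{+-} = 0,
# 2x F1/F2 - 1 = 2 m_N^2 x^2 / Q^2, which shrinks like 1/Q^2.
m_N = 0.938  # GeV

def F1(F2, x, Q2, W_plus_minus=0.0):
    return (F2 / (2 * x)) * (1 + 2 * m_N**2 * x**2 / Q2) - (m_N / 2) * W_plus_minus

x, F2 = 0.3, 0.35   # hypothetical kinematics and F2 value
v2   = 2 * x * F1(F2, x, 2.0)   / F2 - 1   # CG violation at Q^2 =   2 GeV^2
v200 = 2 * x * F1(F2, x, 200.0) / F2 - 1   # CG violation at Q^2 = 200 GeV^2
print(v2, v200)
```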
In the partonic model the structure function $F_2(x,Q^2)$ is directly related to the partonic distribution functions \cite{Feynman:1973xc}:
\begin{equation}
F_2(x,Q^2) = \sum\limits_i e_i^2 x f_i(x,Q^2),
\label{partmodel}
\end{equation}
where the $e_i$'s are the charges of the interacting quarks in units of the magnitude of the electron charge, and
$f_i(x,Q^2)$ is the PDF of the quark of type $i$.
From Eqs.(\ref{F2F1}) and (\ref{partmodel}) it follows that to calculate the PDFs one needs to calculate the
$W^{++}$ component of the nucleonic tensor, $W^{\mu\nu}$.
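The parton-model relation of Eq.(\ref{partmodel}) can be evaluated directly at a single $x$. In the sketch below the PDF values are hypothetical placeholders; only the quark charges ($e_u=2/3$, $e_d=-1/3$) are physical:

```python
# Evaluating F2(x) = sum_i e_i^2 * x * f_i(x) of Eq. (partmodel) at one x
# for hypothetical valence-only PDF values.
charges = {"u": 2.0 / 3.0, "d": -1.0 / 3.0}
x = 0.3
f = {"u": 1.2, "d": 0.5}   # placeholder values for f_u(x), f_d(x) at this x

F2 = sum(charges[q] ** 2 * x * f[q] for q in f)
print(F2)
```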
\subsection{Calculation of the Scattering Amplitude}
The transition current, $J^\mu$ in Eq.(\ref{Wmunu}) in our approach is identified
with the scattering amplitude $A^\mu$, which corresponds to the process described by
the diagram of Fig.\ref{mf_diagram_lc}. To calculate this amplitude we apply the effective
light-front diagrammatic rules (summarized in Appendix A) which results in:
\begin{align}
&A^\mu = \sum_{h_V, h_1} \frac{1}{k_V^+} \frac{1}{k_1^+}
\bar{u}(k_1',h_1') (ie_1 \gamma^\mu) u(k_1,h_1) \nonumber\\
& \times {\prod_{i=1}^{3} \bar{u}(k_i, h_i)\Gamma^{V \rightarrow 3q} \chi_V \over
\mathcal{D}_2}
{\chi^\dagger_V \chi^\dagger_R \Gamma^{N \rightarrow VR} u(p_N,h_N)\over \mathcal{D}_1}
\label{Amu1}
\end{align}
where $\Gamma^{N \rightarrow VR}$ is the effective vertex for the
nucleon transition to the valence quark cluster, $V$, and residual system, $R$, and
the $\Gamma^{V \rightarrow 3q}$ vertex describes the transition of
the state $V$ into three individual valence quarks.
(These vertices are non-perturbative objects and cannot be calculated
within any finite-order diagrammatic approach. In our framework they will be associated with
the respective wave functions, which will be modeled and evaluated by comparison with experiment.)
All the $h$'s denote helicities, the $\chi_V$ and $\chi_R$ denote the
respective spin functions for the valence and residual systems
\footnote{For example, a spinor for a spin 1/2 configuration, a polarization vector for spin 1, etc.}.
The light front energy denominators are $\mathcal{D}_1$ and $\mathcal{D}_2$,
which characterize two intermediate states marked by dashed vertical lines in Fig.\ref{mf_diagram_lc}.
Here we label the struck quark with $i=1$. The color indices are omitted since they are not
involved in electromagnetic interaction.
The sequence of the $N\rightarrow VR$ and $V\rightarrow 3q$ transitions as presented in
Fig.\ref{mf_diagram_lc} is justified by the assumption that the $VR$ system is characterized by smaller internal momenta than the $3q$ system.
The assumption of small relative momentum in the $VR$ system allows us to approximate the first intermediate
state as on-shell. For the 3q system such an approximation is not valid, and in principle
the valence quark propagator will have on-shell and instantaneous terms due to the relation
$\slashed{k} + m = \sum_h u(k,h) \bar{u}(k,h) + \frac{\gamma^+(k^2-m^2)}{k^+}$. However,
since we are interested in the calculation of the $A^+$ component of the scattering amplitude,
which enters in $W^{++}$, the instantaneous
term drops out, since it couples with the extra $\gamma^+$ factor
in the $\gamma q q$ vertex and $\gamma^+\gamma^+ = 0$.
This allows us to represent the second intermediate state by on-shell quark spinors only.
Applying the LF diagrammatic rules (Appendix A) for $\mathcal{D}_1$ and $\mathcal{D}_2$ denominators one obtains:
\begin{align}
\mathcal{D}_1 &= p_N^- - k_R^- - k_V^-\nonumber\\
&= \frac{m_N^2}{p_N^+} - \frac{k_{R, \perp}^2 + m_R^2}{k_R^+} - \frac{k_{V, \perp}^2 + m_V^2}{k_V^+}
\nonumber \\
&= \frac{1}{p_N^+} \left(m_N^2 - \frac{k_{R, \perp}^2 + m_R^2}{x_R} - \frac{k_{V, \perp}^2 + m_V^2}{x_V} \right) \nonumber \\
\mathcal{D}_2 & = \frac{1}{k_V^+} \bigg(m_V^2 - \sum_{i=1}^{3} \frac{ k_{i, \perp}^2 +m_i^2 }{\beta_i} \bigg), \label{D1D2}
\end{align}
where $k_R$ and $k_V$ are the four momenta of the residual and valence systems respectively,
$m_R$ and $m_V$ are their masses and, the summation is over the valence quarks, $i$,
with $\beta_i = \frac{x_i}{x_V}$.
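The energy denominators of Eq.(\ref{D1D2}) can be transcribed directly. The sketch below evaluates the bracketed parts (i.e. without the overall $1/p_N^+$ and $1/k_V^+$ prefactors); all momenta and masses (GeV units) are hypothetical test values:

```python
# Bracketed parts of the light-front energy denominators of Eq. (D1D2);
# all numerical inputs below are hypothetical test values in GeV units.
def D1(m_N, m_R, m_V, x_R, x_V, kR_perp2, kV_perp2):
    return m_N**2 - (kR_perp2 + m_R**2) / x_R - (kV_perp2 + m_V**2) / x_V

def D2(m_V, betas, k_perp2s, masses):
    return m_V**2 - sum((k2 + m**2) / b for b, k2, m in zip(betas, k_perp2s, masses))

d1 = D1(m_N=0.938, m_R=0.4, m_V=0.6, x_R=0.3, x_V=0.7, kR_perp2=0.04, kV_perp2=0.04)
d2 = D2(m_V=0.6, betas=(0.4, 0.3, 0.3), k_perp2s=(0.08, 0.08, 0.08),
        masses=(0.0, 0.0, 0.0))
print(d1, d2)  # below-threshold (bound) configurations give negative values
```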
Substituting Eq.(\ref{D1D2}) into Eq.(\ref{Amu1}) results in:
\begin{align}
A^\mu &= \sum_{h_1, h_V} \frac{1}{x_V} \frac{1}{\beta_1}
\bar{u}(k_1',h_1') (ie_1 \gamma^\mu) u(k_1,h_1) \nonumber\\
&\times \frac{ \prod_{i=1}^{3} \bar{u}(k_i, h_i)\Gamma^{V \rightarrow 3q} \chi_V}
{m_V^2 - \sum_{i=1}^{3} \frac{ k_{ i, \perp}^2 +m_i^2 }{\beta_i} } \nonumber\\
&\times \frac{\bar{\chi_V} \bar{\chi}_R \Gamma^{N \rightarrow VR} u(p_N,h_N)}
{m_N^2 - \frac{k_{V, \perp}^2 + m_V^2}{x_V} - \frac{k_{R,\perp}^2 + m_R^2}{x_R}}.
\label{Amu2}
\end{align}
To proceed, we introduce light-front wave functions for VR and 3q systems as follows:
\begin{align}
& \psi_{VR} (x_V, \mathbf{k}_{R, \perp}, x_R, \mathbf{k}_{V, \perp}) \nonumber\\
&= \frac{\bar{\chi_V }\bar{\chi_R} \Gamma^{N \rightarrow VR} u(p_N,h_N)}{m_N^2 - \frac{k_{V, \perp}^2 + m_V^2}{x_V} - \frac{k_{R,\perp}^2 + m_R^2}{x_R}} \nonumber \\
& \psi_{3q} (\{\beta_i, \mathbf{k}_{i, \perp}, h_i \}_{i=1}^3) = \frac{\prod_{i=1}^3 \bar{u}(k_i,h_i) \Gamma^{V \rightarrow 3q} \chi_V }
{m_V^2 - \sum_{i=1}^{3} \frac{ k_{ i, \perp}^2 +m_i^2 }{\beta_i} },
\label{LFwfs}
\end{align}
where $\{\beta_i, \mathbf{k}_{i, \perp}, h_i \}_{i=1}^3$ denotes the LC momenta and helicities of the three valence quarks in the wave function.
Using the above definitions of LF wave functions the scattering amplitude can be presented in the following form:
\begin{align}
A^\mu &= \sum_{h_1, h_V}\bar{u}(k^\prime_1,h^\prime_1) (ie_1 \gamma^\mu) u(k_1,h_1) \nonumber \\ & \times \frac{\psi_{VR}(x_V, \mathbf{k}_{R, \perp}, x_R, \mathbf{k}_{V, \perp})}{x_V} \frac{\psi_{3q}(\{\beta_i, \mathbf{k}_{i, \perp}, h_i \}_{i=1}^3) }{\beta_1}.
\label{Amu3}
\end{align}
\subsection{Calculation of the Valence PDF}
\begin{figure}[ht]
\includegraphics[width= \columnwidth]{figures/Fig9.png}
\centering
\vspace{-0.2cm}
\caption{Hadronic tensor expressed as a cut diagram.}
\label{cutdiagram}
\end{figure}
The hadronic tensor, $W_N^{\mu\nu}$ of Eq.(\ref{Wmunu}) for the considered scenario of scattering in which the final state
consists of three outgoing valence quarks and a residual nucleon system, in the leading order, corresponds to the
cut diagram of Fig.\ref{cutdiagram} for which one obtains:
\begin{align}
&W_N^{\mu\nu} (x, Q^2) = \frac{1}{4\pi m_N} \sum_{ q, h_i}
\int \delta(k_R^2- m_R^2) \frac{d^4 k_R}{(2\pi)^4} \nonumber \\
& \times \delta(k^{\prime,2}_1- m_1^2) \frac{d^4 k^\prime_1}{(2\pi)^4} \prod_{i=2}^3 \delta(k_i^2- m_i^2) \frac{d^4 k_i}{(2\pi)^4} \nonumber \\
& \times \delta^{(4)}(p_N+q - k^\prime_1 - \sum_{i=2}^3 k_i -k_R) A^{\mu \dagger} A^\nu,
\label{Wmunumodel}
\end{align}
where the sum $ \sum\limits_{ q, h_i} $ is over all the spins/flavors of the outgoing valence quarks and the residual system. The Lorentz invariant phase space of all final particles can be presented as follows:
\begin{align}
\delta(k_j^2-m_j^2) d^4k &= \frac{1}{2} \delta(k_j^+k_j^- - k_{j,\perp}^2 -m_j^2) dk_j^- dk_j^+ d^2\mathbf{k}_{j, \perp} \nonumber \\
&= \frac{ dx_j d^2\mathbf{k}_{j, \perp}}{2x_j} |_{k_j^- = \frac{k_{j,\perp}^2 +m_j^2}{x_jp_N^+}},
\label{onmass}
\end{align}
where $x_j =\frac{k_j^+}{p_N^+}$ is the light-cone momentum fraction of the outgoing particles, $j=1^\prime,2,3$ and $R$.
To evaluate the energy-momentum conservation condition, we first express the $\delta^{(4)}()$ function through the LC components:
\begin{align}
& \delta^{(4)}(p_N+q - k^\prime_1 - \sum_{i=2}^3 k_i -k_R) \nonumber \\
&= 2 \delta (p_N^+ + q^+ - k^{\prime,+}_1 - \sum_{i=2}^3 k_i^+ - k_R^+) \nonumber \\
& \times \delta (p_N^- + q^- - k^{\prime,-}_1 - \sum_{i=2}^3 k_i^- - k_R^-) \nonumber \\
& \times \delta^{(2)}(\mathbf{p}_{N,\perp} + \mathbf{q}_\perp - k^{\prime}_{1,\perp} - \sum_{i=2}^3 \mathbf{k}_{i, \perp} - \mathbf{k}_{R, \perp}).
\label{delta4}
\end{align}
Using the reference frame of Eq.(\ref{reframe}) in which $q^+=0$, one observes that $k^+_1 = k^{\prime,+}_1$ and
using the definition of LC momentum fractions $x_j =\frac{k_j^+}{p_N^+}$ one obtains:
\begin{equation}
\delta (p_N^+ + q^+ - k^{\prime +} _1- \sum_{i=2}^3 k_i^+ - k_R^+) = \frac{1}{p_N^+} \delta (1 - \sum\limits_{i=1}^3x_i- x_R).
\label{momplus}
\end{equation}
For the ``-" component we use $q^- = {2p_N \cdot q\over p_N^+}$ (Eq.(\ref{reframe})), the on-mass shell condition for outgoing
particles (see Eq.(\ref{onmass})), and conditions of high energy scattering of Eq.(\ref{highkin}).
In the large $Q^2$ limit for which
$k^\prime_{1\, \perp} = k_{1,\perp} + q_\perp \gg k_{1,\perp}$ and $k^{\prime 2}_{1, \perp} \approx Q^2$
one obtains:
\begin{align}
&\delta (p_N^- + q^- - k^{\prime-}_{1} - \sum_{i=2}^3 k_i^- - k_R^-) \nonumber \\
&= \delta \bigg( \frac{m_N^2}{p_N^+} + \frac{2 p_N \cdot q}{p_N^+} -
\frac{k^{\prime 2}_{1,\perp} +m^2}{x^\prime_{1}p_N^+} \nonumber \\
&- \sum_{i=2}^3 \frac{k_{i,\perp}^2 +m^2}{x_{i}p_N^+} -\frac{k_{R,\perp}^2 +m_R^2}{x_{R}p_N^+}\bigg) \nonumber \\
& \approx \delta \left(\frac{2 p_N \cdot q}{p_N^+} - \frac{Q^2}{x_{1}p_N^+}\right) \nonumber \\
& = \frac{x_{1}p_N^+}{2 p_N \cdot q} \delta(x_1 - x_B) |_{x_B = \frac{Q^2}{2 p_N \cdot q}}.
\label{momminus}
\end{align}
Furthermore, introducing the initial transverse momentum of the struck quark
$\mathbf{k}_{1\perp} = \mathbf{k}^\prime_{1,\perp} - \mathbf{q}_\perp$ for
the conservation of transverse momentum one obtains:
\begin{align}
&\delta^{(2)}(\mathbf{p}_{N,\perp} + \mathbf{q}_\perp - {\bf k}^{\prime}_{1,\perp} - \sum_{i=2}^3 \mathbf{k}_{i, \perp} - \mathbf{k}_{R, \perp}) \nonumber \\
&= \delta^{(2)}(\sum_{i=1}^3 \mathbf{k}_{i, \perp} + \mathbf{k}_{R, \perp}).
\label{momtrans}
\end{align}
Combining Eqs.(\ref{momplus},\ref{momminus}) and (\ref{momtrans}) in Eq.(\ref{delta4}) and substituting it in Eq.(\ref{Wmunumodel}) one obtains:
\begin{equation}
W_N^{\mu\nu} (x, Q^2) = \frac{1}{4\pi m_N} \sum_{h_i, q} \int [dx][d^2 \mathbf{k}_\perp] \delta(x_1 - x_B)
A^{\mu \dagger} A^\nu,
\label{Wmunu2}
\end{equation}
where the integration factors are defined as:
\begin{equation}
[dx] =\delta(1- \sum_{i=1}^3 x_i - x_R) \frac{dx_R}{x_R}\prod_{i=1}^{3} \frac{dx_i }{x_i}
\end{equation}
\begin{equation}
[d^2 \mathbf{k}_\perp] = 16 \pi^3 \delta^{(2)}(\sum_{i=1}^3 \mathbf{k}_{i,\perp} + \mathbf{k}_{R, \perp})\frac{d^2 \mathbf{k}_{R, \perp}}{16 \pi^3 }\prod_{i=1}^{3} \frac{d^2 \mathbf{k}_{i,\perp}}{16 \pi^3}.
\end{equation}
Substituting now the ``+" component of the scattering amplitude from Eq.(\ref{Amu3}) to
Eq.(\ref{Wmunu2}) and using Eq.(\ref{F2F1}) one obtains for the $F_2$ structure
function the following expression:
\begin{align}
&F_2(x_B) = \sum_q \sum_{h_i} \int [dx] [d^2\mathbf{k}_\perp] e_q^2 x_1 \delta(x_1 - x_B)\nonumber \\
&\times |\psi_{3q}(\{\beta_i, \mathbf{k}_{i, \perp}, h_i \}_{i=1}^3) |^2 |\psi_{VR} (x_V, \mathbf{k}_{R, \perp}, x_R, \mathbf{k}_{V, \perp})|^2,
\end{align}
which together with Eq.(\ref{partmodel}) allows us to obtain the expression for the valence PDF in the form:
\begin{align}
&f_q(x_B)= \sum_{h_i} \int [dx] [d^2\mathbf{k}_\perp] \delta(x_1 - x_B) \nonumber \\
& \times|\psi_{3q}(\{\beta_i, \mathbf{k}_{i, \perp}, h_i \}_{i=1}^3) |^2 |\psi_{VR} (x_V, \mathbf{k}_{R, \perp}, x_R, \mathbf{k}_{V, \perp})|^2.
\label{fq_mf}
\end{align}
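The bookkeeping structure of Eq.(\ref{fq_mf}) — momentum fractions of the three valence quarks and the residual system constrained to sum to one, with the struck-quark PDF given by the $x_1$ marginal of the squared wave functions — can be illustrated with a toy Monte Carlo. Here $|\psi_{3q}|^2|\psi_{VR}|^2$ is deliberately replaced by a flat weight, purely to show the phase-space structure (for a flat weight the $x_1$ marginal is analytically $3(1-x_1)^2$):

```python
import random

# Toy Monte Carlo for the x1-marginal over the simplex x1+x2+x3+xR = 1,
# with |psi_3q|^2 |psi_VR|^2 replaced by a flat weight (illustration only).
random.seed(1)

def sample_fractions():
    # uniform point on the simplex via sorted uniform cuts
    cuts = sorted(random.random() for _ in range(3))
    x1 = cuts[0]; x2 = cuts[1] - cuts[0]; x3 = cuts[2] - cuts[1]; xR = 1 - cuts[2]
    return x1, x2, x3, xR

N, nbins = 200000, 20
hist = [0] * nbins
for _ in range(N):
    x1, *_ = sample_fractions()
    hist[min(int(x1 * nbins), nbins - 1)] += 1

# normalize counts to a density f(x1); flat weight gives f(x1) = 3(1-x1)^2
f = [h * nbins / N for h in hist]
print(f[0], f[10])
```

With a realistic $|\psi|^2$ weight in place of the flat one, the same sampling scheme yields the model valence distribution of Eq.(\ref{fq_mf}).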
\subsection{Modeling the Wave Functions}
\subsubsection{An Approach for Modeling LF Wave Functions }
\label{wfappro}
There have been many approaches to modeling the light-front valence quark wave functions
of hadrons (see e.g. \cite{Lepage:1980fj,Brodsky:1981jv,Pasquini:2008ax}).
Quantum mechanically, for a bound system the
expansion of the potential close to the stability point results in a harmonic oscillator~(HO)
type interaction potential, thus justifying the use of
the HO eigenfunctions as a basis for modeling nonperturbative wave functions of hadrons.
From a practical point of view,
the advantage of using HO wave functions is that they naturally contain the effect of
confinement, and the Gaussian form is convenient for analytic calculations.
The HO approach was used in both constituent (e.g. Ref.\cite{Isgur:1979be}) and current (e.g. Ref.\cite{Lepage:1980fj}) quark descriptions of hadrons.
For the case of the current quark description, one important issue was a proper relativistic generalization of the wave functions.
For two-valence-quark systems such as pions, one straightforward step in the relativistic generalization was
introducing the relative momentum of the two valence quarks in the center of mass frame according to
$k^2 = {m_q^2 + k^2_{\perp}\over x(1-x)} - m_q^2$, which allowed one to represent the wave function
in a boost-invariant form along the momentum transfer direction
$\bf q$~\cite{Lepage:1980fj,Brodsky:1981jv}.
In our approach we again use the HO basis for the LF wave functions.
In the relativistic generalization of such wave functions we introduce the relative momentum similarly to how it was
defined above for the two-valence-quark system. However, we also introduce the
phase space factor for the residual (spectator) system, which allows us to recover the non-relativistic normalization of the wave function in the lab
frame, where the wave function represents an eigenstate of the HO Hamiltonian.
This phase factor also appears if one relates Weinberg (or Bethe-Salpeter) type equations in the nonrelativistic limit to the Lippmann-Schwinger equation for bound systems in the lab frame, where the interaction potential is defined (see e.g. \cite{Sargsian:2001ax}). For example, for a generic two-body bound system, the HO-based LF wave function is
represented in the following form:
\begin{equation}
\psi_2(\alpha_1, \mathbf{p}_{1\perp},\alpha_2, \mathbf{p}_{2\perp}) = \sqrt{2(2\pi)^3M_T} \Psi_0(k_{cm})
\sqrt{\alpha_2},
\label{2bodyWF}
\end{equation}
where $\Psi_0$ is the ground state wave function of the non-relativistic HO Hamiltonian in the lab frame of the target with mass $M_T$, normalized conventionally as:
\begin{equation}
\int |\Psi_0(p)|^2 d^3p = 1.
\label{nonrelnorm}
\end{equation}
In Eq.(\ref{2bodyWF}) the residual (spectator) particle ``2" is treated as on-shell
with $\alpha_1 = 1-\alpha_2$, ${\bf p}_{1, \perp} = - {\bf p}_{2, \perp}$, and
$\alpha_2 = {E^{lab}_{2} + p^{lab}_{2z}\over M_T} = {E^{cm}_{2} + k^{cm}_{z} \over E_{2,cm} + E_{1,cm}}$, where
\begin{equation}
k_{cm}^2 = \frac{(s-(m_1-m_2)^2)(s-(m_1+m_2)^2)}{4s},
\end{equation}
with
\begin{equation}
s = (p_1 + p_2)^2 = \frac{p_{1,\perp}^2 +m^2}{\alpha_1} + \frac{p_{2,\perp}^2 +m^2}{\alpha_2}.\end{equation}
The extra factor of $\sqrt{\alpha_2}$ in Eq.(\ref{2bodyWF}) accounts for the phase space of the residual nucleon and allows one to
relate the relativistic normalization of the LF wave function to the nonrelativistic normalization (\ref{nonrelnorm}) in the lab frame. Namely, starting
with the relativistic normalization:
\begin{equation}
\int |\psi_2(\alpha_1, \mathbf{p}_{1, \perp},\alpha_2, \mathbf{p}_{2, \perp})|^2 \frac{d\alpha_2}{\alpha_2}\frac{ d^2 p_{2,\perp}}{16\pi^3}= 1,
\label{normrel2}
\end{equation}
and using Eq.(\ref{2bodyWF}) one obtains:
\begin{align}
&M_T16\pi^3 \int |\Psi_0(k_{cm})|^2 \alpha_2 \frac{d\alpha_2}{\alpha_2}\frac{ d^2 p_{2,\perp}}{16\pi^3} \nonumber \\
& \approx M_T 16\pi^3 \int |\Psi_0(k_{cm})|^2 {d p_{2,z}\over M_T} \frac{ d^2 p_{2,\perp}}{16\pi^3} \nonumber \\
&= \int |\Psi(p_2)|^2 d^3p_{2} = 1,
\end{align}
where in the above derivation we used the nonrelativistic limit, in which $k_{cm}\to p_2$
and $\alpha_2 = \frac{E_2 + p_{2,z}}{M_T}\approx \frac{m_2 + p_{2,z}}{M_T}$, so that
$d\alpha_2 = \frac{dp_{2,z}}{M_T}$.
It is worth mentioning that the above described prescription
is not unique to HO wave functions and can be applied to any
quantum mechanical wave function.
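As a numerical sanity check (not part of the original derivation), one can verify that with the $\sqrt{\alpha_2}$ phase-space factor the relativistic normalization integral of Eq.(\ref{normrel2}) indeed approaches unity when the width of a Gaussian $\Psi_0$ is small compared to the constituent mass; the mass and width values below are illustrative choices:

```python
import numpy as np
from scipy.integrate import dblquad

m = 0.5           # equal constituent masses (GeV); illustrative value
MT = 2.0 * m      # target mass, binding neglected
sigma = 0.02      # momentum width of the HO ground state (GeV), sigma << m

# |Psi_0(p)|^2 normalized so that int |Psi_0|^2 d^3p = 1
def psi0_sq(k2):
    return (2.0 * np.pi * sigma**2) ** (-1.5) * np.exp(-k2 / (2.0 * sigma**2))

# integrand of Eq.(normrel2) after inserting Eq.(2bodyWF):
# J = M_T int_0^1 dalpha_2 int d^2p_perp |Psi_0(k_cm)|^2
def integrand(pperp, a2):
    s = (m**2 + pperp**2) / ((1.0 - a2) * a2)  # invariant mass squared of the pair
    k2 = s / 4.0 - m**2                        # k_cm^2 for equal masses
    return 2.0 * np.pi * pperp * psi0_sq(k2)

J, _ = dblquad(integrand, 0.3, 0.7, 0.0, 0.2)  # limits cover the Gaussian support
J = MT * J
print(J)  # -> close to 1 in the nonrelativistic limit
```

The deviation from unity is of order $\sigma^2/m^2$, confirming that the $\sqrt{\alpha_2}$ factor restores the nonrelativistic normalization in the lab frame.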
\subsubsection{Wave Function of Three Valence Quark System}
\begin{figure}[ht]
\includegraphics[width= 0.75\columnwidth]{figures/Fig10.png}
\centering
\vspace{-0.2cm}
\caption{Three valence quarks coupled in pairwise HO interaction.}
\label{coupledvalencequarks}
\end{figure}
We model the three valence quark system as mutually coupled oscillators
(Fig.\ref{coupledvalencequarks}) for which the ground state wave function within the above described prescription can be presented in the following form:
\begin{align}
&\psi_{3q} ( \{x_i, \mathbf{k}_{i,\perp}\}_{i=1}^{3}) \nonumber \\
&= 16\pi^3 m_N A_V \exp{ \left[ - \frac{B_V}{2}( k_{12,cm}^2 + k_{23,cm}^2 + k_{31,cm}^2) \right]}
\nonumber \\
&\times \sqrt{x_2 x_3},
\label{3qwf1}
\end{align}
where $A_V$ and $B_V$ are parameters and $x_i, \mathbf{k}_{i,\perp}$ ($i=1,2,3$) are the LC momentum fractions and transverse momenta of each valence quark in the reference frame defined in
Eq.(\ref{reframe}).
The $k_{ij, cm}^2$ ($i\ne j$; $i,j=1,2,3$) in the exponent of the wave function represent the relative three-momenta in the CM system of the $i,j$ pairs, defined as follows:
\begin{equation}
k_{ij, cm}^2 = \frac{(s_{ij}-(m_i-m_j)^2) (s_{ij} - (m_i + m_j)^2)}{4s_{ij}},
\end{equation}
where the invariant energy of the $i,j$ pair is:
\begin{align}
s_{ij} &= (k_i+k_j)^+ (k_i + k_j)^- - (\mathbf{k}_{i,\perp} + \mathbf{k}_{j,\perp})^2 \nonumber\\
&= (x_i + x_j) \left( \frac{k_{i, \perp}^2 + m_i^2}{x_i} + \frac{k_{j, \perp}^2 + m_j^2}{x_j}\right) \nonumber\\
&-(\mathbf{k}_{i,\perp} + \mathbf{k}_{j,\perp})^2.
\end{align}
Since the momentum fractions are defined as $x_i = {k^+_{i}\over p^+_N}$, one can introduce the total momentum fraction carried by the three-valence-quark system as:
\begin{equation}
x_V = x_1 + x_2 + x_3.
\end{equation}
Next we introduce the valence quark transverse momenta defined with respect to the
total transverse momentum of the 3q system, ${\bf k}_{V,\perp}$:
\begin{equation}
\mathbf{\tilde k}_{i, \perp} = \mathbf{k}_{i, \perp} - \frac{x_i}{x_V} \mathbf{k}_{V, \perp}, \ \ \mbox{(i = 1,2,3)},
\label{relkperp}
\end{equation}
where
\begin{equation}
\tilde k_{1,\perp} + \tilde k_{2,\perp} + \tilde k_{3,\perp} = 0.
\label{kperpsum}
\end{equation}
Using Eqs.(\ref{relkperp}) and (\ref{kperpsum}) and assuming equal masses for the valence quarks one obtains:
\begin{equation}
s_{12} + s_{23} + s_{31} = \sum_{i=1}^3 x_V\frac{\tilde k_{i, \perp}^2 + m^2}{x_i} + 3m^2,
\end{equation}
and
\begin{equation}
k_{12,cm}^2 + k_{23,cm}^2 + k_{31,cm}^2= \frac{1}{4} \big( \sum_{i=1}^3 x_V\frac{\tilde k_{i, \perp}^2 + m^2}{x_i} - 9m^2 \big).
\end{equation}
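The identity above can be checked numerically for arbitrary kinematics (a sketch, assuming equal quark masses as in the text; the specific numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 0.33                                    # common valence-quark mass (GeV)
x = rng.uniform(0.05, 0.5, size=3)          # momentum fractions x_i
xV = x.sum()
k = rng.normal(0.0, 0.3, size=(3, 2))       # transverse momenta k_i
kt = k - (x[:, None] / xV) * k.sum(axis=0)  # tilde momenta of Eq.(relkperp); sum to zero

def s_pair(i, j):  # invariant energy s_ij of the (i,j) pair
    return (x[i] + x[j]) * ((k[i] @ k[i] + m**2) / x[i]
                            + (k[j] @ k[j] + m**2) / x[j]) - (k[i] + k[j]) @ (k[i] + k[j])

def kcm2(i, j):    # equal masses: k_cm^2 = (s_ij - 4 m^2) / 4
    return (s_pair(i, j) - 4.0 * m**2) / 4.0

lhs = kcm2(0, 1) + kcm2(1, 2) + kcm2(2, 0)
rhs = 0.25 * (sum(xV * (kt[i] @ kt[i] + m**2) / x[i] for i in range(3)) - 9.0 * m**2)
print(lhs - rhs)  # -> zero up to floating-point round-off
```

The check works with the original $\mathbf{k}_{i,\perp}$ in `s_pair` because $s_{ij}$ is invariant under the common shift that defines the tilde momenta.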
Inserting the above relation into Eq.(\ref{3qwf1}), the wave function of the 3q state can be expressed as follows:
\begin{align}
&\psi_{3q} ( \{ x_i, \mathbf{k}_{i, \perp} \}_{i=1}^3) \nonumber \\
&= 16\pi^3 m_N A_V \exp \left[-\frac{B_V}{8}
\left( \sum_{i=1}^3 x_V\frac{\tilde k_{i, \perp}^2 + m^2}{x_i} - 9m^2 \right) \right] \nonumber \\
&\times \sqrt{x_2 x_3}.
\label{3qwf2}
\end{align}
\subsubsection{Wave Function of Recoil System}
\label{VRwavefunction}
Using the same approach (Sec.\ref{wfappro}) we model the wave function of the recoil system in the following form:
\begin{equation}
\psi_R (x_R, \mathbf{p}_{R, \perp}) = \sqrt{16\pi^3 m_N} A_R e^{- \frac{B_R}{2} p_R^2} \sqrt{x_R},
\label{recoilWF}
\end{equation}
where $x_R$ is the light-cone momentum fraction of the nucleon carried by the recoil system and
$p_R^2 = k_{R, \perp}^2 + p_{R, z}^2$ is its three-momentum squared in the CM of the nucleon.
As a first approximation that allows us to perform the calculations analytically,
we use non-relativistic kinematics for the recoil system.
In this case,
in the CM frame of the nucleon, one can approximate $x_R \approx {m_R + p_{R,z}\over m_N}$ and substitute
\begin{equation}
p_{R,z} = (x_{R}m_N- m_R).
\label{nonrel}
\end{equation}
In numerical evaluations we will treat $B_R$ and $m_R$ as free parameters characterizing the recoil system and evaluate $A_R$ from
the normalization condition similar to Eq.(\ref{normrel2}).
\subsection{Completing the Calculation of Valence PDF}
Substituting the wave functions of 3q (Eq.(\ref{3qwf2})) and the recoil (Eq.(\ref{recoilWF})) systems in Eq.(\ref{fq_mf}) for the valence quark
partonic distribution one obtains:
\begin{align}
&f_q(x_B)= \left( 16 \pi^3m_N\right)^3
|A|^2 \int^{Q^2} 16 \pi^3\delta(x_1-x_B)\nonumber \\
&\delta(1-\sum\limits_{1}^3x_i - x_R)\delta^2(\sum\limits_{1}^3k_{i,\perp}+k_{R,\perp}) \nonumber \\
&\times
\exp \bigg[-{B_V\over 4}\sum_{i=1}^3 x_V \frac{\tilde k_{i,\perp}^2 +m_i^2}{x_i} \nonumber \\
&- B_R \left( (m_N x_R - m_R)^2 + k_{R,\perp}^2 \right) \bigg]
x_2x_3x_R\nonumber \\
& \times \prod\limits_{i=1}^3 {dx_i\over x_i} {d^2k_{i,\perp}\over 16 \pi^3}{dx_R\over x_R} {d^2k_{R,\perp}\over 16 \pi^3},
\label{fqinp1}
\end{align}
where $A = A_R A_V e^{{9\over 4} B_Vm_q^2}$. Note that the transverse momenta ${\bf k}_{i,\perp}$ and ${\bf \tilde k}_{i,\perp}$ are defined in the CM of the nucleon and of the 3q system, respectively, and are related
to each other according to Eq.(\ref{relkperp}).
Using Eq.(\ref{relkperp}) together with the relations ${\bf k}_{V,\perp} = - {\bf k}_{R,\perp}$ and $d^2 \mathbf{k}_{i, \perp} = d^2 \mathbf{\tilde k}_{i, \perp}$
allows us to factorize the integrations over $\tilde k_{i,\perp}$ and $k_{R,\perp}$, resulting in:
\begin{align}
& f_q(x_B, Q^2) = m_N^3 |A|^2 \int\prod\limits_{i=1}^3 {dx_i\over x_i} \delta(x_B-x_1) \nonumber \\
&\times \exp \left[ -\frac{B_V}{4} \sum_{i=1}^3 x_V\frac{ m_i^2}{x_i} - B_R m_N^2(x_R - \frac{m_R}{m_N})^2 \right] \nonumber \\
&\times x_2 x_3 \int^{Q^2} \prod\limits_{i=1}^3 d^2\tilde k_{i,\perp} \delta^2(\sum\limits_{i=1}^3 {\bf \tilde k}_{i,\perp} )\nonumber \\
&\times \exp{ \left[ -\frac{B_V}{4} \sum_{i=1}^3 x_V \frac{\tilde k_{i,\perp}^2}{x_i}\right]}
\int^{Q^2} d^2k_{R,\perp} \exp{\left[ - B_R k_{R,\perp}^2 \right]}.
\label{f2ev2}
\end{align}
Furthermore we evaluate the following integrals in the above equation (see Appendix B):
\begin{align}
& \int^{Q^2} \exp{\left[- B_R k_{R,\perp}^2\right]} d^2k_{R,\perp} = {\pi\over B_R}(1-e^{-B_R Q^2_{R,Max}});\nonumber \\
& \int^{Q^2} \prod\limits_{i=1}^3 d^2\tilde k_{i,\perp}
\delta^2(\sum\limits_{i=1}^3 {\bf \tilde k}_{i,\perp} )
\exp{ \left[ -\frac{B_V}{4} \sum_{i=1}^3 x_V \frac{\tilde k_{i,\perp}^2}{x_i}\right]} \nonumber\\
& = {16\pi^2\over B_V^2} {x_1x_2x_3\over x_V^3}\nonumber \\
&\times(1-e^{-a_{cm}Q^2_{cm,Max}})(1-e^{-a_{rel}Q^2_{rel,Max}}),
\label{perpints}
\end{align}
where $a_{cm} = {B_V x_V\over 4} {x_V\over x_3(x_1+x_2)}$ and $a_{rel} = {B_V x_V\over 4} {x_1+x_2\over x_1x_2}$.
Here $Q^2_{R,Max}$ is the maximal transverse momentum square of the recoil system. Also in the derivation
(see Appendix B) we defined
${\bf \tilde k}_{12,\perp}^{cm} = {\bf \tilde k}_{1,\perp} + {\bf \tilde k}_{2,\perp}$ and
${\bf \tilde k}_{12,\perp}^{rel} =
{x_2{\bf \tilde k}_{1,\perp} - x_1 {\bf \tilde k}_{2,\perp}\over x_1 + x_2}$ and $Q^2_{cm,Max}$ and $Q^2_{rel,Max}$
represent the maximal transverse momentum squares of the center-of-mass and relative transverse momenta of the ``1" and ``2" quark system.
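The first of these Gaussian integrals is elementary and can be checked directly (illustrative values of the slope and cutoff):

```python
import numpy as np
from scipy.integrate import quad

B, Q = 30.0, 0.5   # slope (GeV^-2) and transverse momentum cutoff (GeV); illustrative
# int_{|k| < Q} exp(-B k^2) d^2k evaluated in polar coordinates
num = 2.0 * np.pi * quad(lambda kk: kk * np.exp(-B * kk**2), 0.0, Q)[0]
ana = np.pi / B * (1.0 - np.exp(-B * Q**2))
print(num, ana)  # -> the two values agree
```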
Inserting the expressions of Eq.(\ref{perpints}) into Eq.(\ref{f2ev2}) and taking the integral over $d x_1$ using
$\delta(x_B-x_1)$ (see Appendix~B for details) one arrives at:
\begin{align}\label{PDF-with-finite-Q2}
& f_q(x_B, Q^2) = \mathcal{N} \displaystyle\int_0^{1-x_B}dx_2 \displaystyle \int_0^{1-x_B - x_2} dx_3 \nonumber \\
&\times \exp \left[ -\frac{B_Vx_V}{4} \sum_{i=1}^3 \frac{ m_i^2}{x_i} - B_R m_N^2(x_V - (1- \frac{m_R}{m_N}))^2 \right] \nonumber \\
&\times \frac{x_2 x_3}{x_V^3} \left(1-e^{-a_{cm}Q^2_{cm,Max}}\right)\left(1-e^{-a_{rel}Q^2_{rel,Max}}\right)\nonumber \\
&\times \left(1-e^{-B_R Q^2_{R,Max}}\right),
\end{align}
where $x_1= x_B$ and $x_V= x_B + x_2 + x_3$ and the normalization constant
\begin{equation}
\mathcal{N} = {16\pi^3 A_V^2 A_R^2 m_N^3\over B_R B_V^2} e^{{9\over 4} B_V m_q^2}.
\label{Norma}
\end{equation}
The above equation simplifies further when $Q^2$ is large enough that the exponential terms containing it in Eq.(\ref{PDF-with-finite-Q2}) are negligible:
\begin{align} \label{f-Q2inf}
& f_q(x_B, Q^2) = \mathcal{N} \int_0^{1-x_B}dx_2 \int_0^{1-x_B - x_2} dx_3 \frac{x_2 x_3}{x_V^3}\nonumber \\
&\times \exp \left[ -\frac{B_Vx_V}{4} \sum_{i=1}^3 \frac{ m_i^2}{x_i} - B_R m_N^2(x_V - (1- \frac{m_R}{m_N}))^2 \right].
\end{align}
We further simplify this expression by considering the massless limit of valence quarks and changing the integration variable from $x_3$ to $x_V$.
This allows us to evaluate the $dx_2$ integration analytically resulting in the final expression for the valence quark distribution in the following form:
\begin{align}
& f_q(x_B, Q^2) = {{\cal N}\over 6} \int\limits_{x_B}^1 dx_V {(x_V-x_B)^3\over x_V^3}\nonumber \\
& \times \exp\left[ -B_R m_N^2\left(x_V - (1- \frac{m_R}{m_N})\right)^2 \right].
\label{f_qfin}
\end{align}
Note that, as follows from the above equation, in the massless limit
the parameter $B_V$ enters the structure function only through the normalization factor
$\cal N$.
\subsection{Qualitative Features of the Model}
One important feature of the model can be observed if we consider Eq.(\ref{f_qfin})
at $x_B\sim x_p$, when $x_V\sim 1$. In this case the qualitative behavior of $f_q(x_B, Q^2)$ can be obtained by
evaluating Eq.(\ref{f_qfin}) at the maximum of the exponent;
$x_V = 1- \frac{m_R}{m_N}$. This results in $f_q(x_B,Q^2)\sim (1- x_B - {m_R\over m_N})^3$ and for
the valence quark structure function one obtains:
\begin{equation}
h(x_B,t) = x_B f_q(x_B,Q^2)\sim x_B\left(1- x_B - {m_R\over m_N}\right)^3.
\end{equation}
From the above relation it follows that the $h(x,t)$ function peaks at
\begin{equation}
x_{p} \approx {1\over 4}(1-{m_R\over m_N}).
\label{peakpion}
\end{equation}
Thus one observes that within the considered model the peak of the $h(x,t)$ distribution for
valence quarks (see Fig.\ref{peak_and_height}(b)) is related to the mass of the residual system in the nucleon.
Now, if we use the characteristic value of the peak position, $x_{p} \simeq 0.2$, for valence quarks at the starting $Q^2=4$~GeV$^2$, from Eq.(\ref{peakpion}) one observes that it corresponds to $m_R\approx m_\pi$.
Thus the present model mimics a pion-cloud like picture for the residual system.
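The peak relation can be checked by locating the maximum of $x(1-x-m_R/m_N)^3$ numerically (a sketch; $m_R=0.26$~GeV is a representative d-quark fit value from the estimates below):

```python
import numpy as np

mR, mN = 0.26, 0.938          # residual and nucleon masses (GeV)
r = mR / mN
x = np.linspace(1e-4, 1.0 - r, 200001)
h = x * (1.0 - x - r) ** 3    # h(x,t) ~ x_B (1 - x_B - m_R/m_N)^3
xp_num = x[np.argmax(h)]
xp_ana = 0.25 * (1.0 - r)     # Eq.(peakpion)
print(xp_num, xp_ana)
```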
It is interesting that in Ref.~\cite{Close:1988br} it was observed that within the diquark model
the magnitude of $x_p$ is related to the mass of the recoil diquark in the form:
\begin{equation}
x_p \approx (1- {m_d\over m_N}),
\label{diquark}
\end{equation}
where $m_d$ is the mass of the diquark. Eqs.(\ref{peakpion}) and (\ref{diquark}), however, are different, reflecting the
different dynamics of valence quark generation in the nucleon.
\medskip
It is worth mentioning that with an increase of $Q^2$, because of radiation off the valence quarks, the magnitude of $x_V$ diminishes, and due to the ${(x_V-x_B)^3\over x_V^3}$ factor in
Eq.(\ref{f_qfin}) the $x_p$--$m_R$ relation becomes more complicated than
Eq.(\ref{peakpion}).
\medskip
Next we evaluate the valence quark PDFs in the $x_B\rightarrow 1$ limit. For this we substitute
$x_B = 1 - \epsilon$ in Eq.(\ref{f_qfin}) and evaluate the
integral in the $\epsilon \rightarrow 0$ limit, which results in
\begin{equation}
f_q(x_B, Q^2)\mid_{x_B\rightarrow 1} = {{\cal N}\over 24}e^{ -B_R m_R^2} (1-x_B)^4.
\label{xto1}
\end{equation}
This result indicates that the considered mean-field model predicts a fall-off of the valence quark
PDF faster than the $(1-x_B)^3$ behavior predicted within perturbative QCD~\cite{Lepage:1980fj}.
Thus the observation of the $(1-x_B)^4$ behavior in the $x_B\rightarrow 1$ limit would indicate
the dominance of mean-field dynamics.
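The $(1-x_B)^4$ limit can be verified by evaluating Eq.(\ref{f_qfin}) numerically close to $x_B=1$ (a sketch; the parameter values are illustrative, and the overall normalization drops out of the ratio):

```python
import numpy as np
from scipy.integrate import quad

BR, mR, mN, N = 30.0, 0.26, 0.938, 1.0  # illustrative slope, residual mass, nucleon mass

def f_q(xB):  # Eq.(f_qfin)
    g = lambda xV: (xV - xB)**3 / xV**3 * np.exp(-BR * mN**2 * (xV - (1.0 - mR / mN))**2)
    return N / 6.0 * quad(g, xB, 1.0)[0]

eps = 1e-3
exact = f_q(1.0 - eps)
limit = N / 24.0 * np.exp(-BR * mR**2) * eps**4  # Eq.(xto1)
print(exact / limit)  # -> close to 1
```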
\section{Numerical Estimates of Valence Quark Distributions}
\label{estimates}
\subsection{Strategy of choosing the parameters of the model}
\label{paramstategy}
We use Eq.(\ref{f_qfin}) as a baseline formula for fitting to the phenomenological PDFs evaluated in leading order.
In general the model has
five parameters: $A_V$, $B_V$, $A_R$, $B_R$ and $m_R$.
For the valence quarks we assume that the characteristic separation in the 3q system in impact parameter space
is $\langle b^2_{i,j}\rangle \sim$~(0.3~fm)$^2$. This allows us, based on
Eq.(\ref{3qwf2}), to evaluate $B_V = 4 \langle b^2_{i,j}\rangle {x_i\over x_V} \approx
{4\over 3} \langle b^2_{i,j}\rangle \approx 3.08$~GeV$^{-2}$.
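The numerical value of $B_V$ follows from converting $(0.3$~fm$)^2$ to GeV$^{-2}$ with $\hbar c \approx 0.1973$~GeV$\cdot$fm:

```python
hbar_c = 0.19733                 # GeV * fm
b2 = (0.3 / hbar_c) ** 2         # <b^2> = (0.3 fm)^2 in GeV^-2
BV = 4.0 / 3.0 * b2
print(BV)  # -> ~3.08 GeV^-2
```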
For the recoil system we relate $A_R = \left({B_R\over \pi}\right)^{3\over 4}$.
Finally, the remaining parameter $A_V$ is fixed through the normalization factor, ${\cal N}$ and Eq.(\ref{Norma}), yielding:
\begin{equation}
A_V =\sqrt{B_R{\cal N}\over 16 \pi ^3 m_N ^3} {B_V\over A_R}
\label{AV}
\end{equation}
In the fitting procedure the parameters ${\cal N}$, $m_R$ and $B_R$ are evaluated by fitting Eq.(\ref{f_qfin}) to the height, position
and width of the $h(x,t)$ distribution in the leading order approximation.
\begin{figure*}[ht]
\includegraphics[width=1.0\textwidth]{figures/Fig11.jpg}
\centering
\caption{The $h(x)$ distribution at $Q^2=4$~GeV$^2$. Left panel: valence d-quark distribution. Right panel:
valence u-quark distribution.}
\label{h_lo_dists}
\end{figure*}
As a starting $Q^2$ we choose $Q^2=4$~GeV$^2$, such that it provides a large enough produced mass,
$W\ge 2$~GeV, in the range up to $x \sim 0.65$, for which we expect mean-field dynamics to have a
non-negligible contribution. This $Q^2$ is also large enough to resolve the $3q$ valence cluster in the nucleon,
for which one requires $Q^2\gg {1\over B_V}$.
In Fig.\ref{h_lo_dists} we show the current status of the valence PDFs in LO for d- and u-quarks evaluated by using
modern PDF parameterizations~\cite{Accardi:2016qay,Martin:2009iq,Dulat:2015mca,Owens:2012bv,Harland-Lang:2014zoa}
at $Q^2=4$~GeV$^2$. As the figure shows, the height and the peak position of the distributions
vary between different parameterizations at the level of 15--20\%. Thus, by fitting to these distributions we
evaluate the range over which the parameters change from one parameterization to another.
We fit using the maximum likelihood approximation without estimating the parameter errors for a specific
PDF parameterization. In this way our estimates can be considered qualitative (see also the discussion in Sec.V).
\subsection{Fitting results and estimation of the magnitude of high momentum component of the valence quark distributions}
The results for the parameters $\cal{N}$, $B_R$ and $m_R$ for the considered PDF parameterizations are given in
Table~\ref{table1}. Despite their variations, these parameters share
common features across the different parameterizations. One is that the residual mass
for valence $u$-quarks ($0.04 \le m_R^u \le 0.07$~GeV) is less than the corresponding mass for
$d$-quarks ($0.17 \le m_R^d \le 0.32$~GeV). Additionally, $m_R^u\le m_{\pi}$ and $m_R^d > m_{\pi}$.
These evaluations predict that in scattering from a valence d-quark in the proton
there is a larger probability to detect recoil pions than in scattering from a u-quark.
\begin{table*}[ht]
\caption{Fitting parameters for valence d- and u- quarks.}
\centering
\begin{tabular}{l c c c c c c c}
\hline\hline
d-quark \ & N$^d$ & \ B$_R^d$(GeV$^{-2}$) & \ m$_R^d$ (GeV) & \ u-quark & \ N$^u$ & \ B$_R^u$ (GeV$^{-2}$) & \ m$_R^u$ (GeV) \\% [0.5ex]
\hline\hline
CT14LL0\ & 64 & \ 30 & \ 0.26 & \ & 174 & \ 42 & \ 0.07 \\
\hline
CT14L0 \ & 63 & \ 29 & \ 0.24 & \ & 183 & 42 & \ 0.06 \\
\hline
CJ15lo \ & 47 & \ 6 & \ 0.32 & \ & 208 & \ 50 &\ 0.05 \\
\hline
MMHT2014lo\ & 76 & \ 50 & \ 0.16 & \ & 228 & \ 60 & \ 0.04 \\
\hline
MSTW2008lo\ & 71 & \ 50 & \ 0.17 & \ & 235 & \ 65 & \ 0.04 \\
\hline
\hline
\end{tabular}
\label{table1}
\end{table*}
Another characteristic of the parameters is that the slope factor of the residual system in the case of scattering
from valence u-quarks ($42 \le B_R^u \le 65$~GeV$^{-2}$) is systematically larger than in the case of
d-quarks ($29 \le B_R^d \le 50$~GeV$^{-2}$)
(excluding the CJ15 parametrization for the d-quarks, which yields $B_R^d \approx 6$~GeV$^{-2}$).
This indicates a more compact state in the case of scattering from d-quarks, which can be checked by
measuring the attenuation of the residual system in the nucleus depending on whether the scattering took place on a valence
d- or u-quark.
In Fig.\ref{xvfit} we present a comparison of the model with the CT14nnlo PDFs at $Q^2=4$~GeV$^2$\footnote{A similar
picture is obtained for other parameterizations, except for the valence $d$-quark distribution in the case of CJ15, which
shows a wider distribution with a lower peak position.}.
As the figure shows, a better description is achieved for the valence d-quark than for the u-quark distribution.
This indicates that a larger high-momentum component is needed for valence u-quarks than for d-quarks.
To evaluate the expected contribution from the high-momentum component of the valence quark distributions,
we used the fitting parameters to evaluate the normalization and momentum sum rules for both valence d- and u-quarks.
For the normalization one obtains $N_d = 0.64$ for the d-quark and $N_u = 1.37$ for the u-quark.
It is interesting that these normalizations result in
${N_d\over N_u}\sim 0.5$. These results also indicate that the Regge mechanism at $x<x_p$,
together with $qq$ correlations,
is expected to contribute $\sim$36\% and $\sim$32\% of the total normalizations for d- and u-quarks, respectively.
\begin{figure}[ht]
\vspace{1.0cm}
\hspace{-1.0cm}
\includegraphics[scale=0.42]{figures/Fig12.pdf}
\vspace{-2cm}
\hspace{1cm}
\includegraphics[scale=0.4]{figures/fig_xuv_ct14ln.pdf}
\vspace{1.4cm}
\centering
\caption{Comparison of the results of the fitting of the mean-field model to the $x q_V(x)$ distribution for d- (a) and u- (b) quarks. Data points correspond to the CT14nnlo parameterization for central PDF sets, and the shaded areas show the uncertainties of the
PDFs estimated through the PDF error sets (for details see Ref.~\cite{Dulat:2015mca}).}
\label{xvfit}
\end{figure}
The evaluation of the momentum sum rule ($P_q = \int x q_V(x) dx$) yields $P_{d} = 0.095$ and $P_u = 0.24$, which
should be compared with $P_{d} = 0.1$ and $P_{u}= 0.268$ obtained from the CT14LL0 distribution.
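These numbers can be reproduced from Eq.(\ref{f_qfin}): integrating over $x_B$ and swapping the order of integration gives $\int_0^{x_V}(x_V-x_B)^3\, dx_B = x_V^4/4$ and $\int_0^{x_V}x_B(x_V-x_B)^3\, dx_B = x_V^5/20$, so both sums reduce to one-dimensional integrals. A sketch using the CT14LL0 d-quark row of Table~\ref{table1}:

```python
import numpy as np
from scipy.integrate import quad

# CT14LL0 d-quark parameters of Table 1: N, B_R (GeV^-2), m_R (GeV); nucleon mass (GeV)
N, BR, mR, mN = 64.0, 30.0, 0.26, 0.938
a, x0 = BR * mN**2, 1.0 - mR / mN
G = lambda xV: np.exp(-a * (xV - x0) ** 2)

# norm = (N/24) int_0^1 x_V G dx_V,  P = (N/120) int_0^1 x_V^2 G dx_V
norm = N / 24.0 * quad(lambda xV: xV * G(xV), 0.0, 1.0)[0]
P = N / 120.0 * quad(lambda xV: xV**2 * G(xV), 0.0, 1.0)[0]
print(norm, P)  # -> ~0.64 and ~0.095, matching the quoted N_d and P_d
```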
Expecting that the Regge mechanism, which dominates at
$x<x_p$, has a smaller contribution to the momentum sum rule, the above estimates
allow us to better evaluate the contribution from $qq$ correlations. As the above numbers show,
the expected contribution from $qq$ correlations to the
momentum sum rule is $\sim$5\% for the d-quark and $\sim$10\% for the u-quark.
These evaluations, however, can be somewhat of an underestimation, since in fitting the model to the PDFs we assumed that all the
strength of the PDF at $x=x_p$ is due to the mean-field mechanism. A larger contribution from the quark-correlation mechanism may require a renormalization of the mean-field distribution.
This problem can be addressed after modeling the high-$x$ tail of the valence PDFs and combining it with the current mean-field model.
Overall, our current results indicate an expected larger contribution of the high-momentum tail to the valence u-quark distribution
compared to that of the d-quark.
\medskip
Another interesting observable is the ratio of the d- to u-quark distributions in the $x\to 1$ limit. As follows from Eq.(\ref{xto1}), the model
predicts
\begin{equation}
{f_d(x)\over f_u(x)} \mid_{x_B\rightarrow 1} = {N_d\over N_u}e^{\left (B_R^u(m_R^u)^2 -B_R^d (m_R^d)^2\right)}.
\end{equation}
An interesting feature of the model is that the properties of the valence PDFs (the mass of the residual system, the slope factor and the normalization constant) in the vicinity of the peak define the magnitude of the $d/u$ ratio at $x=1$.
The mechanism that results in
the decrease of the ${d_V\over u_V}$ ratio with an increase of $x$ is the inequality of Eq.(\ref{udmass}), because of which
the recoil wave function in the case of the d-quark decreases faster than that of the u-quark. Thus the value of the
${d_V\over u_V}$ ratio at $x\to 1$ in the present model is related to the difference in the peak positions of the $h(x,t)$ distributions for
d- and u-quarks.
Using the parameters of Table~\ref{table1} one evaluates
\begin{equation}
0.06 \le {d_V\over u_V}\mid_{x\to 1} \le 0.1\ (0.14),
\end{equation}
where $(0.14)$ is the estimate for the CJ15 parameterization.
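This range follows directly from Eq.(\ref{xto1}) and the entries of Table~\ref{table1} (here the ${\cal N}$ values play the role of the normalization constants in the ratio):

```python
import numpy as np

# (N, B_R [GeV^-2], m_R [GeV]) for d- and u-quarks, rows of Table 1
fits = {
    "CT14LL0":    ((64.0, 30.0, 0.26), (174.0, 42.0, 0.07)),
    "CT14L0":     ((63.0, 29.0, 0.24), (183.0, 42.0, 0.06)),
    "CJ15lo":     ((47.0,  6.0, 0.32), (208.0, 50.0, 0.05)),
    "MMHT2014lo": ((76.0, 50.0, 0.16), (228.0, 60.0, 0.04)),
    "MSTW2008lo": ((71.0, 50.0, 0.17), (235.0, 65.0, 0.04)),
}
# d/u at x -> 1: (N_d/N_u) * exp(B_R^u (m_R^u)^2 - B_R^d (m_R^d)^2)
ratios = {name: Nd / Nu * np.exp(BRu * mRu**2 - BRd * mRd**2)
          for name, ((Nd, BRd, mRd), (Nu, BRu, mRu)) in fits.items()}
print(ratios)  # -> values between ~0.06 and ~0.14, with CJ15lo the outlier
```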
This is the prediction of the current mean-field model if
quark-quark correlations are found to be negligible at large $x$.
In Fig.\ref{doveru} we present the $x$ dependence of the ${d_V\over u_V}$ ratio for the same parameters used in
Fig.\ref{xvfit}.
\begin{figure}[ht]
\includegraphics[width=\columnwidth]{figures/Fig13.pdf}
\vspace{-0.4cm}
\caption{The $x$ dependence of the ${d_V\over u_V}$ ratio in the valence quark region (solid curve). The calculation is compared with
the one following from the CT14nnlo parametrization. The shaded area takes into account the spread of the PDFs for the d- and u-quark distributions~\cite{Dulat:2015mca}.}
\label{doveru}
\end{figure}
If, however, the current estimates of the $d/u$ ratio from DIS processes off $A=3$ nuclei
\cite{Abrams:2021xum,Segarra:2021exb} are confirmed,
it will indicate the need for $qq$ short-range interactions to enhance the $d/u$ ratio at $x=1$
to values of $\sim 0.2$.
\section{Summary and Outlook}
\label{outlook}
We have developed a model which describes the mean-field dynamics of valence quarks in the nucleon by
separating the nucleon into a valence 3q cluster and a residual system.
Within this model, based on an effective light-front diagrammatic approach, we calculated the valence quark distributions
in the nucleon in the leading order approximation. The parameters entering the model are those that
describe the wave functions of the nonperturbative 3q and residual systems.
\subsection{Summary of results and predictions of the model}
An important result of the model is the prediction of a new relation, Eq.(\ref{peakpion}), between the position of the peak
in the $x q_V(x)$ distribution and the mass of the residual system at moderately
large $Q^2$.
This relation naturally describes the fact that the valence d-quark distribution peaks at lower $x_p$ than that of the u-quarks, following from the expectation that, in the proton, the residual system
for the d-quark in general has a larger mass than for the u-quarks (Eq.(\ref{massspectrum})).
The model predicts a $(1-x_B)^4$ behavior for valence quark PDFs at $x_B\rightarrow 1$,
which is stronger than the $(1-x_B)^3$ behavior predicted in pQCD. This indicates
the need to include 2q and 3q correlations to describe the high-$x$ tail of the valence PDFs.
To evaluate the parameters of the model we fitted the calculated valence quark distributions to
several leading order phenomenological PDFs at the starting $Q^2=4$~GeV$^2$. The obtained parameters
depend on the particular choice of the phenomenological PDF. However, they demonstrate the
following common features:
\begin{itemize}
\item The mass of the residual system for probed valence u-quarks is systematically lower than that of
the valence d-quarks, and it is less than the pion mass.
\item The slope factor of the residual system in the proton is larger for u-quarks than for d-quarks,
indicating that the d-quark's residual system is more compact.
\item The magnitudes of the parameters indicate the need for a larger (by a factor of two) high-momentum component
for valence u-quarks than for d-quarks.
\end{itemize}
The above features allow us to make specific predictions that can be checked experimentally. One is that
there should be a correlation between the multiplicity of pion production in the target fragmentation region and
the flavor of the knocked-out quark in the forward direction. Specifically, one expects more pions to be produced
in the target fragmentation region of the proton if the forward scattering takes place on the valence
d-quark. Another prediction of the model is that, for asymmetric nuclei, the attenuation of the residual system is correlated
with the flavor of the struck quark in the forward direction. This prediction is based on the observation that the
recoil system is more compact (smaller $B_R$) in the case of interaction with the valence d-quark in the
proton.
\subsection{Possible improvements and fitting to beyond the leading order PDFs}
To be able to calculate the nucleon structure function analytically, we treated the kinematics of the
recoil system non-relativistically (Eq.(\ref{nonrel})). However, the fitting results for $m_R$ indicate the
need for a relativistic treatment of the recoil system. This will be done in the process of fitting the model
parameters to next-to-leading order PDFs. In the latter case, since all modern NLO PDFs adopt the
$\overline{MS}$ factorization scheme, the relation between the structure function $F_2$ and the PDFs $f_i(x,t)$ is a convolution integral
which covers the region above the probed Bjorken-$x$ kinematics. Thus, fitting to the NLO PDFs will require modeling the
contribution from the $qq$ correlations presented in Fig.\ref{framework}(b) and (c). These calculations are in progress
and will be published in a follow-up paper.
Finally, the reliability of the obtained parameters for the 3q-cluster and residual-system wave functions can be checked
by calculating the mean-field effects at $x\sim 0.1-0.4$ in many different quantities in QCD, such as form factors, generalized
parton distribution functions, or transverse momentum distributions for different DIS processes.
With the addition of two- and three-quark short-range interactions to the present model, we hope to
have a framework that will allow us to study the dynamics of the valence quarks in the whole
range of $x>0.1$. The parameters obtained in this work for the wave functions of the mean-field 3q cluster (Eq.(\ref{3qwf2}))
and residual system~(Eq.(\ref{recoilWF})) will allow us to use them in the calculation of the correlation diagrams of
Fig.\ref{framework}(b) and (c). Note that in this work we did not use polarized PDFs to fix the parameters of the model,
since one of the assumptions was that the mean-field dynamics adheres to SU(6) symmetry. However, once two- and three-quark
short-range interactions are added, they will break the SU(6) symmetry because of the vector-exchange nature of the
interactions. Then, by fitting the calculated distributions to polarized PDFs, we will be able to calibrate the
contribution from short-range interactions at large $x$.
\begin{acknowledgements}
This work is supported by United States Department of Energy grant under contract DE-FG02-01ER41172.
\end{acknowledgements}
\section{Introduction}
Measuring the difference between two probability distributions is a fundamental problem in statistics and machine learning \citep{cover1999elements,bishop2006pattern,murphy2012machine}. A variety of statistical distances, such as the Kullback--Leibler (KL) divergence \citep{kullback1951information}, Jensen--Shannon (JS) divergence \citep{lin1991divergence}, maximum mean discrepancy (MMD) \citep{gretton2006kernel},
and Wasserstein distance \citep{kantorovich2006translocation}, have been proposed %
to quantify the difference. They have been widely used for generative modeling with different mode covering/seeking behaviors
\citep{kingma2013auto,goodfellow2014generative,binkowski2018demystifying,arjovsky2017wasserstein,genevay2018learning, balaji2019entropic}.
The KL divergence,
directly related to both maximum likelihood estimation and variational inference \citep{wainwright2008graphical,hoffman2013stochastic,blei2017variational},
requires the two probability distributions to share the same support and is often inapplicable if either is an implicit distribution whose probability density function (PDF) is unknown \citep{mohamed2016learning,huszar2017variational,tran2017hierarchical,yin2018semi}.
Variational auto-encoders (VAEs) \citep{kingma2013auto}, the KL divergence based deep generative models, are stable to train, but often exhibit mode-covering behaviors in their generated data, producing
blurred images.
The JS divergence is directly related to the min-max loss of a generative adversarial net (GAN) when the discriminator is optimal \citep{goodfellow2014generative}, while the Wasserstein-1 distance is directly related to the min-max loss of a Wasserstein GAN \citep{arjovsky2017wasserstein}, whose critic is optimized under the 1-Lipschitz constraint. However, it is difficult to maintain a good balance between the updates of the generator and discriminator/critic, making (Wasserstein) GANs notoriously brittle to train. MMD \citep{gretton2006kernel} is an RKHS-based statistical distance behind MMD-GANs \citep{li2015generative,li2017mmd,binkowski2018demystifying}, which have also shown promising results in generative modeling when trained with a min-max loss.
Different from VAEs, these GAN-based models often exhibit mode dropping and face the danger of mode collapse if not well tuned during the training.
In this paper, we introduce conditional transport (CT) as a new method to quantify the difference between two probability distributions, which will be referred to as
the source distribution $p_{X}(\boldsymbol{x})$ and target distribution $p_Y(\boldsymbol{y})$, respectively.
The construction of CT is motivated by the following observation: the difference between $p_{X}(\boldsymbol{x})$ and $p_Y(\boldsymbol{y})$ can be reflected by the expected difference of two dependent random variables $\boldsymbol{x}$ and $\boldsymbol{y}$, whose joint distribution $\pi(\boldsymbol{x},\boldsymbol{y})$ is constrained by both $p_X(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$ in a certain way. Denoting $c(\boldsymbol{x},\boldsymbol{y})\ge 0$ as a cost function to measure the difference between points $\boldsymbol{x}$ and $\boldsymbol{y}$, such as $c(\boldsymbol{x},\boldsymbol{y})=\|\boldsymbol{x}-\boldsymbol{y}\|_2^2$, the expected difference is expressed as $\mathbb{E}_{\pi(\boldsymbol{x},\boldsymbol{y})}[c(\boldsymbol{x},\boldsymbol{y})]$. A basic way to constrain $\pi(\boldsymbol{x},\boldsymbol{y})$ with both $p_X(\boldsymbol{x})$ and $p_Y(\boldsymbol{y})$ is to let $\pi(\boldsymbol{x},\boldsymbol{y})=p_X(\boldsymbol{x})p_Y(\boldsymbol{y})$, which means drawing $\boldsymbol{x}$ and $\boldsymbol{y}$ independently from $p_X(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$, respectively; this expected difference $\mathbb{E}_{p_X(\boldsymbol{x})p_{Y}(\boldsymbol{y})}[c(\boldsymbol{x},\boldsymbol{y})]$ is closely related to the energy distance~\cite{bellemare2017cramer}. Another constraining method is to require both $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{y} =p_{X}(\boldsymbol{x})$ and $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{x} =p_{Y}(\boldsymbol{y})$, under which ${\min_{\pi}}\{\mathbb{E}_{\pi(\boldsymbol{x},\boldsymbol{y})}[c(\boldsymbol{x},\boldsymbol{y})]\}$ becomes the Wasserstein distance \citep{kantorovich2006translocation,
villani2008optimal,santambrogio2015optimal,COTFNT}.
A key insight of this paper is that by exploiting the chain rule and Bayes' theorem, there exist two additional ways to constrain $\pi(\boldsymbol{x},\boldsymbol{y})$ with both $p_X(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$: 1) %
A forward CT that can be viewed as moving the source distribution to the target distribution; 2) A backward CT that reverses the direction.
Our intuition is that given a source (target) point, it is more likely to be moved to a target (source) point closer to it. More specifically,
if the target distribution does not provide good coverage of the source density, then there will exist source data points that lie in low-density regions of the target,
making the expected cost of the forward CT high. Therefore, we expect that minimizing the forward CT will encourage the target distribution to exhibit a \emph{mode-covering} behavior with respect to (\textit{w.r.t.}) the source PDF. Reversing the direction, we expect that minimizing the backward CT will encourage the target distribution to exhibit a \emph{mode-seeking} behavior \textit{w.r.t.} the source PDF. Minimizing the combination of both
is expected to strike a good balance between these two distinct behaviors.
To demonstrate the use of
CT,
we apply it to train implicit (or explicit)
distributions to model %
both 1D and 2D toy data, MNIST digits, and natural images. The implicit distribution is defined by a deep generative model (DGM) that is simple to sample from. We provide empirical evidence to show how to control the mode-covering versus mode-seeking behaviors by adjusting the ratio of the forward CT versus backward CT. To train a DGM for natural images,
we focus on adapting existing GANs, with minimal changes to their settings except for substituting the statistical distances in their loss functions with CT. We leave tailoring the network architectures and
settings to CT for future study.
%
Modifying the loss functions of various existing
DGMs
with CT,
our experiments show consistent improvements in not only quantitative performance and generation quality, but also learning stability. Our code is available at \url{https://github.com/JegZheng/CT-pytorch}.
\section{Chain rule and Bayes' theorem based conditional transport}
Exploiting the chain rule and Bayes' theorem, we can constrain $\pi(\boldsymbol{x},\boldsymbol{y})$ with both $p_X(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$ in two different ways, leading to the forward CT and backward CT, respectively.
To define the forward CT, we use the chain rule to factorize the joint distribution as
$$\pi(\boldsymbol{x},\boldsymbol{y}) =p_X(\boldsymbol{x}) \pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x}),$$ where $\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ is a conditional distribution of $\boldsymbol{y}$ given $\boldsymbol{x}$. This construction ensures $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{y} =p_{X}(\boldsymbol{x})$ but not $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{x} = p_{Y}(\boldsymbol{y})$.
Denote
$d_{\boldsymbol{\phi}}(\hv_1,\hv_2)\in \mathbb{R}$ as a function parameterized by $\boldsymbol{\phi}$, which measures the difference between two vectors $\hv_1,\hv_2\in \mathbb{R}^H$ of dimension $H$.
While allowing $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{x} \neq p_{Y}(\boldsymbol{y})$, to appropriately constrain $\pi(\boldsymbol{x},\boldsymbol{y})$
by $p_Y(\boldsymbol{y})$,
we treat $p_Y(\boldsymbol{y})$ as the prior distribution, view $e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}$ as an unnormalized likelihood term, and follow Bayes' theorem to define
\ba{
\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x}) = e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}p_Y(\boldsymbol{y})/Q(\boldsymbol{x}),~~ \textstyle Q(\boldsymbol{x}):=\int e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}p_Y(\boldsymbol{y}) \text{d}\boldsymbol{y},\label{eq:forwardCT_conditional}
}
where $Q(\boldsymbol{x})$ is a normalization term that ensures $\int \pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})d\boldsymbol{y}=1$. We refer to $\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ as the forward ``navigator,'' which specifies how likely a given $\boldsymbol{x}$ will be mapped to a target point $\boldsymbol{y}\sim p_{Y}(\boldsymbol{y})$. We now define the cost of the forward CT as
\ba{
\textstyle &\mathcal C
(
X\rightarrow Y)=
\mathbb{E}_{\boldsymbol{x}\sim p_X(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{y}\sim \pi_{Y}(\boldsymbol{\cdot}\,|\, \boldsymbol{x}) }[c(\boldsymbol{x},\boldsymbol{y})]\label{eq:OT_E_ygivenx}.
}
In the forward CT, we expect large $c(\boldsymbol{x},\boldsymbol{y})$ to typically co-occur with small $\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ as long as $p_{Y}(\boldsymbol{y})$ provides a good coverage of the density of $\boldsymbol{x}$. Thus minimizing the forward CT cost is expected to encourage $p_{Y}(\boldsymbol{y})$ to exhibit a mode-covering behavior \textit{w.r.t.} $p_{X}(\boldsymbol{x})$. A similar behavior is encouraged by minimizing the forward KL divergence $\mathrm{KL}(p_X||p_Y)=\mathbb{E}_{\boldsymbol{x}\sim p_X}\big[\ln \frac{p_X(\boldsymbol{x})}{p_Y(\boldsymbol{x})}\big]$, which calls for $p_Y(\boldsymbol{x})>0$ whenever $p_{X}(\boldsymbol{x})>0$.
%
%
%
%
%
Reversing the direction, we construct the backward CT, where the joint is factorized as
$\pi(\boldsymbol{x},\boldsymbol{y}) =p_Y(\boldsymbol{y}) \pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})$ and
the backward navigator is defined as \ba{
\pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})=e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})} p_X(\boldsymbol{x})/Q(\boldsymbol{y}),~~~\textstyle Q(\boldsymbol{y}):=\int e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}p_X(\boldsymbol{x}) \text{d}\boldsymbol{x}.\label{eq:backwardCT_conditional}
}
This
ensures $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{x} =p_{Y}(\boldsymbol{y})$; while allowing $\int \pi(\boldsymbol{x},\boldsymbol{y}) d\boldsymbol{y} \neq p_{X}(\boldsymbol{x})$, it constrains $\pi(\boldsymbol{x},\boldsymbol{y}) $ by treating $p_X(\boldsymbol{x})$ as the prior to construct $\pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})$. The backward CT cost is now defined as
\ba{
\textstyle \mathcal C
(
X\leftarrow Y)
&= %
\mathbb{E}_{\boldsymbol{y}\sim p_{Y}(\boldsymbol{y})}\mathbb{E}_{\boldsymbol{x}\sim \pi_{X}(\boldsymbol{\cdot}\,|\, \boldsymbol{y}) }[c(\boldsymbol{x},\boldsymbol{y})]%
.
\label{eq:OT_E_xgiveny}
}
In the backward CT, we expect large $c(\boldsymbol{x},\boldsymbol{y})$ to typically co-occur with small ${\pi}_X(\boldsymbol{x}\,|\, \boldsymbol{y})$ as long as $p_X(\boldsymbol{x})$ has good coverage of the density of $\boldsymbol{y}$. Thus minimizing the backward CT cost is expected to encourage $p_{Y}(\boldsymbol{y})$ to exhibit a mode-seeking behavior \textit{w.r.t.} $p_{X}(\boldsymbol{x})$. A similar behavior is encouraged by minimizing the reverse KL divergence $\mathrm{KL}(p_Y||p_X)=\mathbb{E}_{\boldsymbol{x}\sim p_Y}\big[\ln \frac{p_Y(\boldsymbol{x})}{p_X(\boldsymbol{x})}\big]$, which allows $p_Y(\boldsymbol{x})=0$ when $p_X(\boldsymbol{x})>0$, so $p_Y$ may fit only a portion of $p_X$.
In comparison to the forward and reverse KL divergences, the proposed forward and backward CTs are more broadly applicable, as they require neither $p_X$ and $p_Y$ to share the same distribution support nor their PDFs to be analytic.
For the cases where the KLs can be evaluated,
we introduce
$$\mbox{D}(X,Y)= \mbox{KL}(p_X||p_Y)-\mbox{KL}(p_Y||p_X)$$ as a formal way to quantify the mode-seeking and mode-covering behaviors of $p_Y$ \textit{w.r.t.} $p_X$, with $\mbox{D}(X,Y)>0$ implying mode seeking and $\mbox{D}(X,Y)< 0$ implying mode covering.
Combining both the forward and backward CTs, we now define the CT cost as
\ba{
%
%
\mathcal C_{\rho}(X, Y)
:=\textstyle
\rho\mathcal C(X\rightarrow Y)+
(1-\rho)\mathcal C(X\leftarrow Y),
\label{eq:CT_divergence}
}
where $\rho\in[0,1]$ is a parameter that can be adjusted to encourage $p_{Y}(\boldsymbol{y})$ to exhibit, \textit{w.r.t.} $p_{X}(\boldsymbol{x})$, mode-seeking ($\rho=0$), mode-covering ($\rho=1$), or a balance of the two behaviors ($\rho\in(0,1)$).
By definition we have
$\mathcal C_{\rho}(X, Y)\ge 0$, where the equality can be achieved
when $p_X= p_Y$ and the navigator parameter $\boldsymbol{\phi}$ is optimized such that $e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}$ is equal to one if and only if $\boldsymbol{x}=\boldsymbol{y}$ and zero otherwise. We also have $\mathcal C_{\rho=0.5}(X, Y)=\mathcal C_{\rho=0.5}(Y, X)$. We fix $\rho=0.5$ unless specified otherwise.
\iffalse
Note unless the navigator parameter $\boldsymbol{\phi}$ is optimized such that $e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})}=1$ if and only if $\boldsymbol{x}=\boldsymbol{y}$ and zero otherwise, there is no guarantee that $\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(p_X, p_{Y}) = 0$ when $p_X= p_{Y}$. For this reason, we may add an additional term into the CT cost to define the CT statistical distance as
$$
\mathcal{D}_{\phi,\boldsymbol{\theta}}(p_X,p_Y)=\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(p_X, p_Y) - \textstyle\frac{1}{2}\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(p_X, p_X) -\frac{1}{2}\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(p_Y, p_Y)
$$
which can also be written as the difference of two expectations as \bas{
\mathcal D_{\phi,\theta}(\mu,\nu)=\mathbb{E}_{\boldsymbol{x}\sim p_X(\boldsymbol{x})}[f^*(\boldsymbol{x})]-\mathbb{E}_{\boldsymbol{y}\sim p_{\boldsymbol{\theta}}(\boldsymbol{y})}[f^*(\boldsymbol{y})]
}
where
$$f^*(\boldsymbol{x})=\mathbb{E}_{\boldsymbol{y}'\sim \pi_{Y}(\boldsymbol{\cdot}\,|\, \boldsymbol{x})}[c(\boldsymbol{x},\boldsymbol{y}')]-\mathbb{E}_{\boldsymbol{x}'\sim \pi_{X}(\boldsymbol{\cdot}\,|\, \boldsymbol{x})}[c(\boldsymbol{x},\boldsymbol{x}')]$$
By construction, we have $\mathcal{D}_{\phi,\boldsymbol{\theta}}(p_X,p_Y)=0$ when $p_X=p_Y$.
When $d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})=Constant$ and $c(\boldsymbol{x},\boldsymbol{y})=\|\boldsymbol{x}-\boldsymbol{y}\|_2$, we recovery the energy distance between $\mu$ and $\nu$.
\fi
\subsection{Conjugacy based analytic conditional distributions}\label{sec:theoretical}
Estimating the forward and backward CTs %
involves %
$\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ and $\pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})$, respectively. Both conditional distributions, however, are generally intractable to evaluate and sample from, unless
$p_X(\boldsymbol{x})$ and $p_Y(\boldsymbol{y})$ are
conjugate priors for likelihoods proportional to
$e^{-d(\boldsymbol{x},\boldsymbol{y})}$, $i.e.$, $\pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})$ and $\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ are in the same probability distribution family as $p_X(\boldsymbol{x})$ and $p_Y(\boldsymbol{y})$, respectively. For example, if $d(\boldsymbol{x},\boldsymbol{y})=\|\boldsymbol{x}-\boldsymbol{y}\|_2^2$ and both $p_{X}(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$ are multivariate normal distributions, then both $\pi_X(\boldsymbol{x}\,|\, \boldsymbol{y})$ and $\pi_Y(\boldsymbol{y}\,|\, \boldsymbol{x})$ will follow multivariate normal distributions.
To be more specific, we provide a univariate normal based example, with $x,y,\phi,\theta\in \mathbb{R}$ and
\ba{\textstyle &p_X(x)=\mathcal{N}(0,1),~p_Y(y)= \mathcal N(0, e^\theta),~d_{\phi}(x,y) = {(x-y)^2}/({2 e^{\phi}}),~ c(x,y)=(x-y)^2.\label{eq:1Dnormal}}
Here we have $\mbox{D}(X,Y)= \mbox{KL}[\mathcal N(0,1)|| \mathcal N(0, e^\theta)] - \mbox{KL}[ \mathcal N(0, e^\theta) || \mathcal N(0,1)] = \theta - \sinh(\theta)$, which is positive when $\theta<0$, implying mode-seeking, and negative when $\theta>0$, implying mode-covering.
As shown in Appendix~\ref{appendix: 1d_gaussian}, we have analytic forms of the
forward and backward navigators as
$$\pi_{Y}(y\,|\, x) = \mathcal{N}
(\sigma(\theta-\phi)x,\sigma(\theta-\phi)e^\phi),~~~
\pi_{X}(x\,|\, y) = \mathcal{N}( %
\sigma(-\phi)y,\sigma(\phi)
),$$
where $\sigma(a)=1/(1+e^{-a})$ denotes the sigmoid function, and forward and backward CT costs as
$$
\mathcal C(X\rightarrow Y) = %
\sigma(\phi-\theta)(e^\theta+\sigma(\phi-\theta)),~~~
\mathcal C(X\leftarrow Y) =
\sigma(\phi)(1+\sigma(\phi)e^\theta).
$$
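As a sanity check (not part of the paper's code), the closed-form navigators and CT costs above can be verified numerically. The sketch below assumes NumPy, builds $\pi_Y(y\,|\, x)$ and $\pi_X(x\,|\, y)$ on a grid via Bayes' theorem, and integrates $c(x,y)=(x-y)^2$ against them.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def ct_costs_closed_form(phi, theta):
    # Closed-form forward/backward CT costs for the 1D Gaussian example.
    fwd = sigmoid(phi - theta) * (np.exp(theta) + sigmoid(phi - theta))
    bwd = sigmoid(phi) * (1.0 + sigmoid(phi) * np.exp(theta))
    return fwd, bwd

def ct_costs_numeric(phi, theta, lim=10.0, n=1501):
    # Quadrature check on a uniform grid: normalize e^{-d_phi} * prior
    # over the conditioned variable, then integrate the cost.
    t = np.linspace(-lim, lim, n)
    dt = t[1] - t[0]
    X, Y = np.meshgrid(t, t, indexing="ij")
    pX = np.exp(-t ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    pY = np.exp(-t ** 2 / (2.0 * np.exp(theta))) / np.sqrt(2.0 * np.pi * np.exp(theta))
    w = np.exp(-(X - Y) ** 2 / (2.0 * np.exp(phi)))   # e^{-d_phi(x, y)}
    c = (X - Y) ** 2
    piY = w * pY[None, :]
    piY /= piY.sum(axis=1, keepdims=True)             # forward navigator over y
    fwd = float((pX * (c * piY).sum(axis=1)).sum() * dt)
    piX = w * pX[:, None]
    piX /= piX.sum(axis=0, keepdims=True)             # backward navigator over x
    bwd = float((pY * (c * piX).sum(axis=0)).sum() * dt)
    return fwd, bwd
```

The grid normalization makes the intractable term $Q$ unnecessary, mirroring how the mini-batch estimators later self-normalize over anchors.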
\iffalse
In addition, we have
$
\mathcal C_{\phi,\theta}(p_X,p_X) = \sigma(\phi)(1+\sigma(\phi))
,~~\mathcal C_{\phi,\theta}(p_Y,p_Y)
=\sigma(\phi-\theta)(e^{\theta}+\sigma(\phi-\theta)e^{\theta})
$
Thus we have the CT statistical distance as
\bas{
&\mathcal{D}_{\phi,\theta}(p_X,p_Y) =
(1-e^{\theta})\left[
\sigma(\phi-\theta)-\sigma(\phi)\right]\left[
\sigma(\phi-\theta)+\sigma(\phi)\right]
}
\fi
As a proof of concept, we illustrate the optimization under CT using the above example,
for which $\theta=0$ is the optimal solution that makes $p_X=p_Y$.
Thus when applying gradient descent to minimize the CT cost $\mathcal C_{\rho=0.5}(X, Y)$, we expect the generator parameter $\theta\rightarrow 0$ under proper learning dynamics, as long as the learning of the navigator parameter $\phi$ is appropriately controlled.
This is confirmed by Fig.\,\ref{fig:1d_gaussian}, which shows that as the navigator parameter $\phi$ is optimized by minimizing the CT cost, the CT cost, viewed as a function of $\theta$, becomes more clearly minimized at $\theta=0$. This suggests that the navigator parameter $\phi$ mainly plays the role of assisting the learning of $\theta$.
The right four subplots describe the log-scale curves of forward cost, backward cost and bi-directional CT costs \textit{w.r.t.} $\theta$ as $\phi$ gets optimized to four different values.
It is worth noting that the forward cost is minimized at $e^\theta>1$, which implies a mode-covering behavior, and the backward cost is minimized as $e^\theta\rightarrow 0$, which implies a mode-seeking behavior, while the bi-directional cost is minimized at around the optimal solution $e^{\theta}=1$. Moreover, the forward CT cost exhibits a flattened curve on the right-hand side of its minimum; adding the backward CT cost not only moves that minimum to the left, closer to $\theta=0$, but also raises the whole curve on the right-hand side, making the optimum of $\theta$ easier to reach via gradient descent.
To apply CT in a general setting, where the analytic forms of the distributions are unknown, no conjugacy is available, or we only have access to random samples from the %
distributions, we show below that we can
approximate the CT cost
by replacing both $p_X(\boldsymbol{x})$ and $p_{Y}(\boldsymbol{y})$ with their
corresponding discrete empirical distributions supported on mini-batches.
Minimizing this approximate CT cost,
amenable to mini-batch
SGD based optimization, is found to be effective
in driving the target (model) distribution $p_Y$ towards the source (data) distribution $p_X$, with the ability to control the mode-seeking and mode-covering behaviors of $p_Y$ \textit{w.r.t.} $p_X$.
\iffalse
The Wasserstein distance {in its primal form} can be defined with Kantorovich’s optimal transport problem \citep{kantorovich2006translocation,
villani2008optimal,santambrogio2015optimal,COTFNT}: %
\ba{
\mathcal{W}(\mu,\nu) &=
\min\nolimits_{ \pi \in \Pi(\mu,\nu)
}\{\textstyle \int_{\mathbb{R}^V\times \mathbb{R}^V}c(\boldsymbol{x},\boldsymbol{y})\pi(\text{d}\boldsymbol{x},\text{d}\boldsymbol{y})\}\notag\\
&=%
\min\nolimits_{ \pi \in \Pi(\mu,\nu)
}\{\mathbb{E}_{(\boldsymbol{x},\boldsymbol{y})\sim \pi(\boldsymbol{x},\boldsymbol{y}) }[c(\boldsymbol{x},\boldsymbol{y})]\},\!\!
\label{eq:OT_E}
}
where
the minimum is taken over $\Pi(\mu,\nu)$, defined as
the set of
all
possible joint probability measures $\pi$ %
on $\mathbb{R}^V\times \mathbb{R}^V$, with marginals
$\pi(A,\mathbb{R}^V)=\mu(A)$ and $\pi(\mathbb{R}^V,A)=\nu(A)$ for any Borel set $A\subset \mathbb{R}^V$.
{
When $c(\boldsymbol{x},\boldsymbol{y})%
=\|\boldsymbol{x}-\boldsymbol{y}\|$, we obtain the Wasserstein-1 distance,
for which there exists a dual form according to the Kantorovich duality as
$
\mathcal W_{1}\left(\mu, \nu\right)=\sup\nolimits_{f \in {\text{Lip}}^{1}} \{\mathbb{E}_{\boldsymbol{x} \sim p_{X}(\boldsymbol{x})}[f(\boldsymbol{x})]-\mathbb{E}_{\boldsymbol{y} \sim p_{Y}(\boldsymbol{y})}[f(\boldsymbol{y})]\},\notag
$
where $f$ is referred to as the ``critic'' and ${\text{Lip}}^{1}$ denotes the set of all 1-Lipschitz functions \citep{villani2008optimal}.
Intuitively, the critic $f$ plays the role of ``amortizing'' the computation of the optimal transport plan.
However, as it is difficult to ensure the 1-Lipschitz constraint, one often resorts to approximations \citep{arjovsky2017wasserstein,gulrajani2017improved,wei2018improving,miyato2018spectral} that inevitably introduce bias into the estimation of $\mathcal W_{1}$ and its gradient \citep{bellemare2017cramer,bottou2017geometrical}.
}
\fi
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{1d/1d_scale2.pdf}\vspace{-2mm}
\caption{ \small
Illustration of minimizing the CT cost $\mathcal C_{\phi,\theta}(X, Y)$ between $\mathcal N(0,1)$ and $\mathcal N(0, e^\theta)$. \textit{Left}:
Evolution of the CT cost, its parameters, and the forward and backward costs;
\textit{Right}: 4 CT cost curves against $\theta$ as $e^\phi$ is optimized to a small value, jointly showing that the optimized $\phi$ provides a better learning dynamic for $\theta$.
}\label{fig:1d_gaussian}\vspace{-4.5mm}
\end{figure*}
\subsection{Approximate CT given empirical samples}\label{sec:empirical}
Below we use generative modeling as an example to show how to apply the CT cost in a general setting that only requires access to random samples of %
both $\boldsymbol{x}$ and $\boldsymbol{y}$.
Denote $\boldsymbol{x}$ as a data point taking its value in $\mathbb{R}^V$. %
In practice, we observe a finite set $\mathcal{X}=\{\boldsymbol{x}_i\}_{i=1}^{|\mathcal{X}|}$, consisting of $|\mathcal{X}|$ data samples assumed to be $iid$ drawn from $p_X(\boldsymbol{x})$.
Given $\mathcal{X}$, the usual task is to learn a distribution to approximate $p_X(\boldsymbol{x})$,
for which we consider %
a deep generative model (DGM) defined as
$
\boldsymbol{y}=G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}),~\boldsymbol{\epsilon}\sim p(\boldsymbol{\epsilon}), %
$
where $G_{\boldsymbol{\theta}}$ is a generator that transforms noise $\boldsymbol{\epsilon}\sim p(\boldsymbol{\epsilon})$ via a deep neural network parameterized by $\boldsymbol{\theta}$ to generate random sample $\boldsymbol{y}\in \mathbb{R}^V$. While the PDF %
of the generator, denoted as $p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$, is often intractable to evaluate, it is straightforward to draw $\boldsymbol{y}\sim p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$ with $G_{\boldsymbol{\theta}}$. %
\iffalse
Constraining $\pi\in \Pi(\mu,\nu)$, %
the Wasserstein distance satisfies %
$\mathcal W(\mu,\nu)=\mathcal W(\nu,\mu)$.
By contrast, the proposed %
statistical distance
allows $\pi\notin \Pi(\mu,\nu)$.
\fi
While knowing neither $p_X(\boldsymbol{x})$ nor $p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$, we can obtain discrete empirical
distributions $p_{\hat X_N}$ and $ p_{\hat Y_M}$ supported on mini-batches $\boldsymbol{x}_{1:N}$ and $\boldsymbol{y}_{1:M}$,
as defined below, %
to guide the optimization of $G_{\boldsymbol{\theta}}$ in an iterative manner.
With $N$ random observations sampled without replacement from $\mathcal{X}$, we define
\ba{\textstyle
p_{\hat X_N}(\boldsymbol{x})
=\frac{1}{N}\sum_{i=1}^N \delta{(\boldsymbol{x}-\boldsymbol{x}_i)},~~\{\boldsymbol{x}_1,\ldots,\boldsymbol{x}_N\}\subseteq \mathcal{X}
\label{eq:x}
}
as an empirical distribution for $\boldsymbol{x}$. %
Similarly, with $M$ random samples of the generator, we define
\ba{\textstyle
p_{\hat Y_M}(\boldsymbol{y})
=\frac{1}{M}\sum_{j=1}^M \delta{(\boldsymbol{y}-\boldsymbol{y}_j)},~\boldsymbol{y}_j=G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_j),~\boldsymbol{\epsilon}_j\, {\scriptstyle \stackrel{iid}{\sim}}\, p(\boldsymbol{\epsilon}).
\label{eq:y}
}
Substituting $p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$ in \eqref{eq:forwardCT_conditional} with
$p_{\hat Y_M}(\boldsymbol{y})$,
the continuous forward navigator becomes a discrete one as
\ba{
\textstyle\hat{\pi}_{Y}(\boldsymbol{y}\,|\, \boldsymbol{x})
&=\textstyle\sum_{j=1}^M \hat{\pi}_M(\boldsymbol{y}_j \,|\, \boldsymbol{x},\boldsymbol{\phi}) \delta_{\boldsymbol{y}_j},~~\hat{\pi}_M(\boldsymbol{y}_j \,|\, \boldsymbol{x},\boldsymbol{\phi})
:=\textstyle\frac{e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y}_j) }}{\sum_{j'=1}^M e^{ -d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y}_{j'}) }}.\label{eq:pi_hat_Y}
}
Thus given $p_{\hat Y_M}$,
the cost of a forward CT can be approximated as
\ba{
\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(
X\rightarrow \hat Y_M
)
=\mathbb{E}_{\boldsymbol{y}_{1:M}\, {\scriptstyle \stackrel{iid}{\sim}}\, p_{Y}(\boldsymbol{y};\boldsymbol{\theta})}\mathbb{E}_{\boldsymbol{x}\sim p_X(\boldsymbol{x})}\left[ \textstyle
\textstyle \sum_{j=1}^M c(\boldsymbol{x},\boldsymbol{y}_j)
\hat{\pi}_M(\boldsymbol{y}_j\,|\, \boldsymbol{x},\boldsymbol{\phi})\right],\label{eq:x2nu}
}
which can be interpreted as the expected cost of following the forward %
navigator to stochastically transport a random source point $\boldsymbol{x}$ %
to one of the $M$ %
randomly instantiated ``anchors'' of the target %
distribution. Similar to the previous analysis, we expect this approximate forward CT to stay small as long as $p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$ exhibits a mode-covering behavior \textit{w.r.t.} $p_{X}(\boldsymbol{x})$.
Similarly, %
we can approximate the backward navigator and CT cost as
\ba{
&\hat{\pi}_{X}(\boldsymbol{x}\,|\, \boldsymbol{y})
=\textstyle\sum_{i=1}^N \hat{\pi}_N(\boldsymbol{x}_i \,|\, \boldsymbol{y},\boldsymbol{\phi}) \delta_{\boldsymbol{x}_i},~~
\hat{\pi}_N(\boldsymbol{x}_i \,|\, \boldsymbol{y},\boldsymbol{\phi})
:=\textstyle\frac{e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x}_i,\boldsymbol{y}) }}{\sum_{i'=1}^N e^{ -d_{\boldsymbol{\phi}}(\boldsymbol{x}_{i'},\boldsymbol{y}) }},\notag\\
&\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}(
\hat X_N \leftarrow Y
)
=\mathbb{E}_{\boldsymbol{x}_{1:N}\, {\scriptstyle \stackrel{iid}{\sim}}\, p_{X}(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{y}\sim p_{Y}(\boldsymbol{y};\boldsymbol{\theta})}\left[
\textstyle \sum_{i=1}^N c(\boldsymbol{x}_i,\boldsymbol{y})
\hat{\pi}_N(\boldsymbol{x}_i\,|\, \boldsymbol{y},\boldsymbol{\phi})\right]. \label{eq:y2mu}
}
Similar to previous analysis, we expect this approximate backward CT to stay small as long as $p_{Y}(\boldsymbol{y};\boldsymbol{\theta})$ exhibits a mode-seeking behavior \textit{w.r.t.} $p_{X}(\boldsymbol{x})$.
Combining
\eqref{eq:x2nu} and \eqref{eq:y2mu},
we define the approximate CT cost as
\ba{
\mathcal{C}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho}(\hat X_N, \hat Y_M )=
\rho
\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}( X\rightarrow \hat Y_M)
+(1-\rho)
\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}( \hat X_N\leftarrow Y) ,%
\label{eq:C_NM}
}
an unbiased sample estimate of which, given mini-batches $\boldsymbol{x}_{1:N}$ and $\boldsymbol{y}_{1:M}$, can be expressed as
\ba{
\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho}(\boldsymbol{x}_{1:N},\boldsymbol{y}_{1:M})&=\textstyle\sum_{i=1}^N\sum_{j=1}^Mc(\boldsymbol{x}_i,\boldsymbol{y}_j)\left(
\textstyle\frac{\rho}{N}\hat{\pi}_M(\boldsymbol{y}_j\,|\, \boldsymbol{x}_i,\boldsymbol{\phi})+
\textstyle\frac{1-\rho}{M}\hat{\pi}_N(\boldsymbol{x}_i\,|\, \boldsymbol{y}_j,\boldsymbol{\phi})\right)\notag\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!=\textstyle\sum_{i=1}^N\sum_{j=1}^Mc(\boldsymbol{x}_i,\boldsymbol{y}_j)\left(
\textstyle\frac{\rho}{N}
\textstyle\frac{e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x}_i,\boldsymbol{y}_{j}) }}{\sum_{j'=1}^M e^{ -d_{\boldsymbol{\phi}}(\boldsymbol{x}_{i},\boldsymbol{y}_{j'}) }}
+
\textstyle\frac{1-\rho}{M}\textstyle\frac{e^{-d_{\boldsymbol{\phi}}(\boldsymbol{x}_i,\boldsymbol{y}_j) }}{\sum_{i'=1}^N e^{ -d_{\boldsymbol{\phi}}(\boldsymbol{x}_{i'},\boldsymbol{y}_j) }}\right)
\label{eq:BOT_sample}.
}
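The estimate $\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho}$ amounts to two softmax-weighted sums over the mini-batch cost matrix. A minimal NumPy sketch is given below, with the illustrative (not learned) choices $c(\boldsymbol{x},\boldsymbol{y})=\|\boldsymbol{x}-\boldsymbol{y}\|_2^2$ and $d_{\boldsymbol{\phi}}$ a fixed scaled squared distance standing in for the navigator network.

```python
import numpy as np

def ct_sample_loss(x, y, rho=0.5, d_scale=1.0):
    """Mini-batch CT estimate. x: (N, V) source samples; y: (M, V)
    generated samples. c(x, y) = ||x - y||^2, and d_phi is replaced by
    a fixed scaled squared distance (a stand-in for the learned navigator)."""
    diff = x[:, None, :] - y[None, :, :]          # (N, M, V)
    c = (diff ** 2).sum(axis=-1)                  # pairwise costs, (N, M)
    d = d_scale * c                               # pairwise navigator distances
    # forward navigator: softmax of -d over the M target anchors (rows)
    pf = np.exp(-d - (-d).max(axis=1, keepdims=True))
    pf /= pf.sum(axis=1, keepdims=True)
    # backward navigator: softmax of -d over the N source anchors (columns)
    pb = np.exp(-d - (-d).max(axis=0, keepdims=True))
    pb /= pb.sum(axis=0, keepdims=True)
    n, m = c.shape
    return float((c * (rho / n * pf + (1.0 - rho) / m * pb)).sum())
```

With $\rho=0.5$ the estimate is symmetric in the two mini-batches, and when the generated batch exactly covers the source batch, the navigators concentrate on the matching points (where the cost is zero), driving the loss toward zero.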
\begin{lemma}\label{lemma:limit}
The approximate CT in \eqref{eq:C_NM} is asymptotically exact: %
$\lim_{N,M\rightarrow \infty} %
\mathcal{C}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho}(\hat X_N, \hat Y_M )
=\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta},\rho}(X,Y).$%
\end{lemma}
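Lemma~\ref{lemma:limit} can be illustrated numerically on the 1D Gaussian example of \eqref{eq:1Dnormal}: at $\phi=\theta=0$ the closed-form bi-directional CT cost equals $\sigma(0)(1+\sigma(0))=0.75$, and the mini-batch estimate approaches it as $N=M$ grows. A sketch, assuming NumPy:

```python
import numpy as np

def ct_estimate_1d(n, rng, phi=0.0, theta=0.0, rho=0.5):
    # Sample estimate of the CT cost between N(0,1) and N(0, e^theta),
    # with d_phi(x, y) = (x - y)^2 / (2 e^phi) and c(x, y) = (x - y)^2,
    # using N = M = n samples per mini-batch.
    x = rng.normal(size=n)
    y = np.sqrt(np.exp(theta)) * rng.normal(size=n)
    d = (x[:, None] - y[None, :]) ** 2 / (2.0 * np.exp(phi))
    c = (x[:, None] - y[None, :]) ** 2
    pf = np.exp(-d - (-d).max(axis=1, keepdims=True))
    pf /= pf.sum(axis=1, keepdims=True)              # forward navigator
    pb = np.exp(-d - (-d).max(axis=0, keepdims=True))
    pb /= pb.sum(axis=0, keepdims=True)              # backward navigator
    return float((c * (rho / n * pf + (1.0 - rho) / n * pb)).sum())
```

Running this with increasing $n$ shows the estimate settling near $0.75$, consistent with the limit stated in the lemma.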
%
\subsection{Cooperatively-trained or adversarially-trained feature encoder}
\label{sec:feature_extraction}
To apply CT for generative modeling of high-dimensional data, such as natural images,
we need to define an appropriate cost function $c(\boldsymbol{x},\boldsymbol{y})$ to measure the difference between two random points. A naive choice is some distance between their raw feature vectors, such as
$
c(\boldsymbol{x},\boldsymbol{y}) = \|\boldsymbol{x}-\boldsymbol{y}\|_2^2
$, which, however, is known to often poorly reflect the difference between high-dimensional data residing on low-dimensional manifolds. For this reason, with cosine similarity \citep{salimans2018improving} as %
$
{\textstyle\cos(\hv_1,\hv_2) :=\frac
{\hv_1^T\hv_2}{{\sqrt{\hv_1^T\hv_1}}{\sqrt{\hv_2^T\hv_2}}}}$,
we further introduce a feature encoder
$\mathcal T_{\etav}(\boldsymbol{\cdot})$, parameterized by $\etav$, %
to help redefine the point-to-point cost and both navigators %
as
\ba{
\textstyle c_{\etav}(\boldsymbol{x},\boldsymbol{y}) %
= 1 -
\cos(\mathcal{T}_{\etav}(\boldsymbol{x}),\mathcal{T}_{\etav}(\boldsymbol{y}) ),~~~
d_{\boldsymbol{\phi},\etav}(\boldsymbol{x},\boldsymbol{y})=d_{\boldsymbol{\phi}}\left(\frac{\mathcal{T}_{\etav}(\boldsymbol{x})}{\|\mathcal{T}_{\etav}(\boldsymbol{x})\|},\frac{\mathcal{T}_{\etav}(\boldsymbol{y})}{\|\mathcal{T}_{\etav}(\boldsymbol{y})\|}\right).
\label{eq:d_eta}
}
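For intuition, note that for L2-normalized features, $\|\hv_1-\hv_2\|_2^2 = 2\big(1-\cos(\hv_1,\hv_2)\big)$, so the cosine cost and a squared-Euclidean navigator distance on normalized features share the same geometry. The sketch below uses a squared-Euclidean stand-in for the learned $d_{\boldsymbol{\phi}}$ and takes precomputed encoder features as input.

```python
import numpy as np

def feature_costs(hx, hy):
    """hx: (N, H) encoder features of source points; hy: (M, H) of targets.
    Returns the cost matrix c_eta = 1 - cos and, as a stand-in for the
    learned d_phi, the squared Euclidean distance between the
    L2-normalized features."""
    nx = hx / np.linalg.norm(hx, axis=1, keepdims=True)
    ny = hy / np.linalg.norm(hy, axis=1, keepdims=True)
    c = 1.0 - nx @ ny.T                                   # (N, M) cosine cost
    d = ((nx[:, None, :] - ny[None, :, :]) ** 2).sum(-1)  # (N, M) stand-in d
    return c, d
```

Under this stand-in, `d` is exactly `2 * c`, so large costs automatically receive small navigator weights; the learned $d_{\boldsymbol{\phi}}$ generalizes this coupling.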
To apply the CT cost to train a DGM, we find that the feature encoder $\mathcal{T}_{\etav}(\boldsymbol{\cdot})$ can be learned in two different ways: 1) Cooperatively-trained: the generator and encoder are trained by alternating between two losses, training the generator under a fixed $\mathcal{T}_{\etav}(\boldsymbol{\cdot})$ with the CT loss, and training $\mathcal{T}_{\etav}(\boldsymbol{\cdot})$ under a fixed generator with a different loss, such as the GAN discriminator loss, the WGAN critic loss, or the MMD-GAN \cite{binkowski2018demystifying} critic loss. 2) Adversarially-trained: the feature encoder is viewed as a critic and trained to maximize the CT cost, not only inflating the point-to-point cost but also distorting the feature space used to construct the forward and backward navigators' conditional distributions.
To be more specific, below we present the details for the adversarial way to train $\mathcal{T}_{\etav}$.
Given training data $\mathcal X$,
to train the generator $G_{\boldsymbol{\theta}}$, forward navigator $\pi_{\boldsymbol{\phi}}(\boldsymbol{y}\,|\, \boldsymbol{x})$, backward navigator $\pi_{\boldsymbol{\phi}}(\boldsymbol{x}\,|\, \boldsymbol{y})$, and encoder $\mathcal T_{\etav}$,
we view the encoder as a critic and propose to solve
a min-max problem as
\ba{\!\!\
\min_{\boldsymbol{\phi},\boldsymbol{\theta}}\max_{\etav} \mathbb{E}_{\boldsymbol{x}_{1:N}\subseteq \mathcal{X},
~\boldsymbol{\epsilon}_{1:M}\, {\scriptstyle \stackrel{iid}{\sim}}\, p(\boldsymbol{\epsilon})}[\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho,\etav}(\boldsymbol{x}_{1:N},\{G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_{j})\}_{j=1}^M)],
\label{eq:min-max}
}
where $\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho,\etav}$ is defined the same as in
\eqref{eq:BOT_sample}, except that we replace $c(\boldsymbol{x}_i,\boldsymbol{y}_j)$ and $d_{\boldsymbol{\phi}}(\boldsymbol{\cdot},\boldsymbol{\cdot})$ with
their corresponding ones
shown in \eqref{eq:d_eta} and use reparameterization in \eqref{eq:y} to draw $\boldsymbol{y}_{1:M}:=\{G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_{j})\}_{j=1}^M$. %
With SGD, we update $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$ using $\nabla_{\boldsymbol{\phi},\boldsymbol{\theta}} \mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho,\etav}(\boldsymbol{x}_{1:N},\{G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_{j})\}_{j=1}^M)$ and, if the feature encoder is
adversarially-trained, update $\etav$ using
$
-\nabla_{\etav} \mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta},\rho,\etav}(\boldsymbol{x}_{1:N},\{G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_{j})\}_{j=1}^M)
$.
We find empirically that both ways of learning the encoder work well, with the adversarial one generally providing better performance. It is worth noting that in (Wasserstein) GANs, while the adversarially-trained discriminator/critic plays a similar role to a feature encoder, the learning dynamics between the discriminator/critic and generator need to be carefully tuned to maintain training stability and prevent trivial solutions ($e.g.$, mode collapse). By contrast, the feature encoder of the CT cost based DGM can be stably trained in either way; its update does not need to be well synchronized with that of the generator and can be stopped at any point during training.
\section{Related work}
In practice, variational auto-encoders \citep{kingma2013auto}, the KL divergence based deep generative models, are stable to train, but often exhibit mode-covering behaviors and generate blurred images \citep{chen2016variational,zhao2017infovae,zheng2018degeneration,alemifixing,zheng2019understanding}. By contrast, both GANs and Wasserstein GANs can generate photo-realistic images, but they often suffer from stability and mode collapse issues, requiring the update of the discriminator/critic to be well synchronized with that of the generator.
This paper
introduces
conditional transport (CT)
as
a new method to quantify the difference between two probability distributions. Deep generative models trained under CT not only allow the balance between mode-covering and mode-seeking behaviors to be adjusted, but also allow the encoder to be pretrained or frozen at any time during cooperative/adversarial training.
As the JS divergence requires the two distributions to have the same support,
the Wasserstein distance is often considered
more appealing for generative modeling, as it allows the two %
distributions to have non-overlapping support \citep{villani2008optimal,santambrogio2015optimal,COTFNT}. However, while GANs and Wasserstein GANs in theory are connected to the JS divergence and Wasserstein distance, respectively, several recent works show that they should not be naively understood as the minimizers of
their corresponding statistical distances, and the role played by their min-max training dynamics should not be overlooked \citep{kodali2017convergence,fedus2018many,stanczuk2021wasserstein}.
In particular, \citet{fedus2018many} show that even when the gradient of the JS divergence does not exist and hence GANs are predicted to fail from the perspective of divergence minimization, the discriminator is able to provide useful learning signal.
\citet{stanczuk2021wasserstein} show
that the dual form based Wasserstein GAN loss does not provide a meaningful approximation of the Wasserstein distance; while primal form based methods could better approximate the true Wasserstein distance, they in general clearly underperform Wasserstein GANs in terms of the generation quality for high-dimensional data, such as natural images, and require an inner loop to compute the transport plan for each mini-batch, leading to high computational cost \citep{genevay2018learning,iohara2019generative,mallasto2019well,pinetz2019estimation,stanczuk2021wasserstein}. See previous works for discussions on the approximation error and gradient bias when estimating the Wasserstein distance with mini-batches
\cite{bottou2017geometrical,bellemare2017cramer,
binkowski2018demystifying,bernton2019parameter}.
MMD-GAN \citep{li2015generative,li2017mmd,binkowski2018demystifying}, which calculates the MMD statistics in the latent space of a feature encoder, is the most similar to the CT cost in terms of the actual loss function used for optimization. In particular, given mini-batches $\boldsymbol{x}_{1:N}$ and $\boldsymbol{y}_{1:M}$, both the MMD-GAN loss and the CT loss involve computing the differences of all $NM$ pairs $(\boldsymbol{x}_i,\boldsymbol{y}_j)$. Different from MMD-GAN, CT has no need to choose a kernel and tune its parameters. We provide below an ablation study that evaluates both 1) MMD generator + CT encoder and 2) MMD encoder + CT generator, which shows that 1) performs on par with MMD, while 2) performs clearly better than MMD and on par with CT.
\iffalse
{Note in optimal transport, the Wasserstein distance $\mathcal W(\mu,\nu)$ in its primal form, shown in \eqref{eq:OT_E}, is in general intractable to compute. To use the primal form, one often resorts to the sample Wasserstein distance defined as $\mathcal W(\hat\mu_N,\hat\nu_M)$, %
computing which, %
however, requires solving a combinatorial optimization problem %
\citep{COTFNT}. To make $\mathcal W(\hat\mu_N,\hat\nu_M)$ practical to compute, one remedy is to smooth
the optimal transport plan between $\hat\mu_N$ and $\hat\nu_M$ with an entropic regularization
term, resulting in the Sinkhorn distance that still requires to be estimated with an iterative procedure, whose convergence is sensitive to the entropic regularization coefficient \citep{cuturi2013sinkhorn,genevay2016stochastic,genevay2018learning,xie2020fast}.
When the entropic regularization coefficient goes to infinity, we recover maximum mean discrepancy (MMD), which is considered as the metric for minimization, evaluated in a kernel space found by the adversarial mechanism in MMD-GAN \citep{li2015generative,li2017mmd}.
By contrast, equipped with two navigators, the ACT can directly compute
a forward point-to-distribution transport cost, denoted as $\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}( \boldsymbol{x}\rightarrow \hat \nu_M)$ in \eqref{eq:x2nu}, and a backward one, denoted as $\mathcal C_{\boldsymbol{\phi},\boldsymbol{\theta}}( \hat \mu_N\leftarrow \boldsymbol{y})$ in \eqref{eq:y2mu}, which are then combined to define an unbiased sample estimator, as shown in \eqref{eq:BOT0_sample}, of the ACT cost. Intuitively, the navigators %
play the role of ``amortizing'' the computation of the CT plans between two empirical distributions, removing the need of using an iterative procedure to estimate the transport cost. %
}
\fi
\section{Experimental results}\label{sec:experiments}
\textbf{Forward and backward analysis: }
To empirically verify our previous analysis of the mode-covering (mode-seeking) behavior of
the forward (backward) CT, we train a DGM with %
\eqref{eq:C_NM}
while varying the interpolation weight $\rho$ between the forward and backward CT costs:
CT$_\rho$ reduces to the forward CT at $\rho = 1$, to the CT in \eqref{eq:C_NM} for $\rho\in(0,1)$, and to the backward CT at $\rho = 0$.
We consider the squared Euclidean ($i.e.$, $\mathcal L_2^2$) distance to define both the cost $c(\boldsymbol{x},\boldsymbol{y})=\|\boldsymbol{x}-\boldsymbol{y}\|_2^2$ and the navigator distance $d_{\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{y})= \|\mathcal T_{\boldsymbol{\phi}}(\boldsymbol{x})-\mathcal T_{\boldsymbol{\phi}}(\boldsymbol{y})\|_2^2$,
where $\mathcal T_{\boldsymbol{\phi}}$ denotes a neural network parameterized by $\boldsymbol{\phi}$.
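To make the mini-batch computation concrete, the following minimal NumPy sketch shows one way the CT$_\rho$ cost could be assembled from these ingredients; the softmax form of the conditional transport weights and all function names here are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ct_cost(x, y, feat=lambda z: z, rho=0.5):
    """Hedged sketch of a mini-batch CT_rho cost (not the authors' code).

    x: (N, D) data samples; y: (M, D) generated samples.
    feat: stand-in for the navigator feature map T_phi (assumed identity here).
    """
    # point-to-point transport cost c(x_i, y_j) = ||x_i - y_j||_2^2
    c = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    fx, fy = feat(x), feat(y)
    # navigator distance d_phi(x_i, y_j) computed in feature space
    d = ((fx[:, None, :] - fy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d)
    # forward CT: each x_i transported to the mini-batch of y's (softmax over j)
    forward = (w / w.sum(1, keepdims=True) * c).sum(1).mean()
    # backward CT: each y_j transported to the mini-batch of x's (softmax over i)
    backward = (w / w.sum(0, keepdims=True) * c).sum(0).mean()
    return rho * forward + (1 - rho) * backward
```

Setting `rho=1` or `rho=0` recovers the pure forward or backward cost, matching the interpolation described above.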
We consider a 1D example of a bimodal Gaussian mixture $p_X(x) = \frac{1}{4}\mathcal{N}(x;-5,1) + \frac{3}{4}\mathcal{N}(x;2,1)$ and a 2D example of an 8-modal Gaussian mixture with equal component weights as in \citet{gulrajani2017improved}. We use an empirical sample set $\mathcal X$, consisting of $|\mathcal X|=5,000$ samples,
for both the 1D and 2D cases, and illustrate in Fig.\,\ref{fig:ACT_fb} the KDE of {5000} generated samples $y_j=G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_j)$ after 5000 training epochs. {For the 1D case, we take 200 grids in $[-10,10]$ to approximate the empirical distributions $\hat p_X$ and $\hat p_Y$, and report the corresponding forward KL (KL[$\hat p_X||\hat p_Y$]), reverse KL (KL[$\hat p_Y||\hat p_X$]), and their difference $\mathrm{D}(X,Y) = \text{KL}[\hat p_X||\hat p_Y] - \text{KL}[\hat p_Y||\hat p_X]$ below each corresponding sub-figure in Fig.\,\ref{fig:ACT_fb}.}
Comparing the results for different $\rho$ in Fig.\,\ref{fig:ACT_fb} suggests that minimizing only the forward CT cost encourages the generator to exhibit mode-covering behaviors, while minimizing only the backward CT cost encourages mode-seeking behaviors.
Combining both costs provides a user-controllable balance between mode covering and seeking, leading to satisfactory fitting performance, as shown in Columns 2-4.
Note that for a fair comparison, we stop the fitting at the same iteration; in practice, we find if training with more iterations, both $\rho = 0.75$ and $\rho = 0.25$ can achieve comparable results as $\rho=0.5$ in this example.
Allowing the mode covering and seeking behaviors to be controlled by adjusting $\rho$ is an attractive property of CT$_\rho$.
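The grid-based KL diagnostics reported under each sub-figure can be reproduced in a few lines. The sketch below reflects our reading of the procedure (200 histogram cells on $[-10,10]$, with a small smoothing constant we add to avoid division by zero); it is not the authors' evaluation code.

```python
import numpy as np

def grid_kl(x_samples, y_samples, lo=-10.0, hi=10.0, bins=200, eps=1e-12):
    """Estimate forward KL, reverse KL, and their difference D(X, Y)
    between two sample sets via histograms on a fixed grid."""
    px, _ = np.histogram(x_samples, bins=bins, range=(lo, hi))
    py, _ = np.histogram(y_samples, bins=bins, range=(lo, hi))
    px = px / px.sum() + eps          # empirical p_X on the grid
    py = py / py.sum() + eps          # empirical p_Y on the grid
    kl_fwd = np.sum(px * np.log(px / py))   # KL[p_X || p_Y]: penalizes missed modes
    kl_rev = np.sum(py * np.log(py / px))   # KL[p_Y || p_X]: penalizes spurious mass
    return kl_fwd, kl_rev, kl_fwd - kl_rev  # D(X, Y)
```

A large forward KL with a small reverse KL then signals mode dropping, while the opposite pattern signals spurious generated mass.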
\iffalse
\begin{figure}[!t]
\centering
\includegraphics[width=.5\columnwidth]{foward_backward/ACT_forward_backward.pdf}\vspace{-4mm}
\caption{\small Fitting 1D bi-modal Gaussian (\textit{top)} and 2D 8-Gaussian mixture (\textit{bottom}) by interpolating between the forward ACT ($\rho=1$) and backward ACT ($\rho=0$).
}\label{fig:ACT_fb}
\end{figure}
\fi
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth]
{foward_backward/ACT_forward_backward.pdf}\vspace{-4mm}
\caption{\small Forward and backward analysis: (\textit{top)} Fitting 1D bi-modal Gaussian. Quantitative results of estimated forward KL (KL[$\hat p_X||\hat p_Y$]), reverse KL (KL[$\hat p_Y||\hat p_X$]), and the difference between the forward and reverse KL (D=KL[$\hat p_X||\hat p_Y$]-KL[$\hat p_Y||\hat p_X$]) are shown below each sub-figure. (\textit{bottom}) 2D 8-Gaussian mixture by interpolating between the forward CT ($\rho=1$) and backward CT ($\rho=0$).
}\label{fig:ACT_fb}\vspace{-4mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.9\textwidth]{CT_ACT_comparison/bias_8gaussian.pdf}
\vspace{-4mm}
\caption{\small Experiments on the resistance to model collapse: Comparison of the generation quality on 8-Gaussian mixture data: one of the 8 modes has weight $\gamma$ and the rest modes have equal weight as $\frac{1-\gamma}{7}$. %
}\vspace{-5mm}
\label{fig:2d_biased_8gaussian}
\end{figure}
\textbf{Resistance to mode collapse: } We continue to use an 8-Gaussian mixture to empirically evaluate how well a DGM resists mode collapse. %
Unlike the data in Fig.\,\ref{fig:ACT_fb}, where the 8 modes are equally weighted, here the mode at the lower-left corner is set to have weight $\gamma$, while the other modes are set to have the same weight of %
$\frac{1-\gamma}{7}$. We use a set $\mathcal{X}$ of 5000 samples and a mini-batch size of $N=100$. %
When $\gamma$ is lowered to 0.05, %
the corresponding mode is missed by
GAN, WGAN, and the SWD-based DGM, while it is well kept by the CT-based DGM.
One explanation is that GANs are known to be susceptible to mode collapse, while WGAN and SWD-based DGMs are sensitive to the mini-batch size: when $\gamma$ is small, samples from this mode appear in the mini-batches less frequently than those from any other mode, amplifying the missing-mode problem. Similarly, when $\gamma$ is increased to 0.5, the other modes are likely to be missed by the baseline DGMs, while the CT-based DGM does not miss any mode. The resistance of CT to mode dropping can be attributed to the
mode-covering property of its forward component. The mode-seeking property of the backward component further helps distinguish the densities of the mode components, avoiding the assignment of equal weight to all components.
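For concreteness, the weighted 8-Gaussian ring can be sampled as below. The ring radius, component standard deviation, and which index corresponds to the lower-left mode are our assumptions for illustration; the text does not specify them.

```python
import numpy as np

def sample_weighted_8gaussian(n, gamma=0.05, radius=2.0, std=0.02, seed=0):
    """Sketch of the mode-collapse test data: 8 Gaussians on a ring,
    one mode with weight gamma, the rest sharing (1 - gamma) / 7."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(8) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    weights = np.full(8, (1.0 - gamma) / 7)
    weights[5] = gamma  # index 5 (angle 5*pi/4) assumed to be the lower-left mode
    idx = rng.choice(8, size=n, p=weights)
    return centers[idx] + std * rng.normal(size=(n, 2))
```

Drawing 5000 samples with $\gamma=0.05$ leaves only about 250 points in the down-weighted mode, which is why small mini-batches rarely contain it.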
\iffalse
\textbf{Sensitivity to mini-batch size: } We move on to model the empirical samples from a true data distribution, for which it is natural to apply CT. We apply a deep neural network to generator $G_{\boldsymbol{\theta}}$ and another one to
both navigators. The top panel shows the CT cost, its backward and forward costs, and Wasserstein distance between the empirical probability measures $\hat{\mu}_N$ %
and %
$\hat\nu_M$ %
defined as in \eqref{eq:x} and \eqref{eq:y}. We set $M=N$ and hence $\mathcal{W}_2(\hat{\mu}_X,\hat{\nu}_Y)^2$ can be exactly computed by sorting the 1D elements of $x_{1:N}$ and $y_{1:N}$ \citep{COTFNT}.
\iffalse
We illustrate in Fig.\,\ref{fig:1d_gmm} the training with
unbiased sample gradients $\nabla_{\boldsymbol{\phi},\boldsymbol{\theta}} \mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta}}(\mathcal{X},y_{1:M})$ of the CT cost shown in \eqref{eq:BOT_sample}, where $y_j=G_{\boldsymbol{\theta}}(\boldsymbol{\epsilon}_j)$. The top panel shows the CT cost, its backward and forward costs, and Wasserstein distance between the empirical probability measures $\hat{\mu}_N$ %
and %
$\hat\nu_M$ %
defined as in \eqref{eq:x} and \eqref{eq:y}. We set $M=N$ and hence $\mathcal{W}_2(\hat{\mu}_X,\hat{\nu}_Y)^2$ can be exactly computed by sorting the 1D elements of $x_{1:N}$ and $y_{1:N}$ \citep{COTFNT}. {We first consider $N=|\mathcal X|=5000$.} %
Fig.\,\ref{fig:1d_gmm} (Top) shows that %
the CT cost converges close to
$\mathcal{W}_2(\hat{\mu}_X,\hat{\nu}_Y)^2$
and the forward and backward costs move closer to each other %
and can sometime go below $\mathcal{W}_2(\hat{\mu}_X,\hat{\nu}_Y)^2$.
Fig.\,\ref{fig:1d_gmm} (Bottom) shows that
minimizing the CT cost successfully drives the generator distribution towards true data density: From the left to right, we can observe that initially the generator is focused on fitting a single mode; at around the $500^{th}$ iteration, as the forward and backward navigators are getting better, they start to help the generator locate the missing mode and we can observe a blue density mode starts to form over there; %
as the generator and both navigators are getting optimized, we can observe that the generator clearly captures both modes and the fitting is getting improved further; finally the generator well approximates the data density. Under the guidance of the CT cost, the generator and navigators are helping each other: An optimized generator helps the two navigators to train and realize the missing mode, and the optimized navigators help the generator locate under-fitted regions and hence better fit the true data density.
\fi
Given the same $\mathcal X$, below we further consider setting $N=20$, $200$, or $5000$ to train the generator,
using either the Wasserstein distance $\mathcal{W}_2(\hat{\mu}_N,\hat{\nu}_N)^2$ or CT cost $\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{\theta}}(x_{1:N},y_{1:N})$ as the loss function.
As shown in Fig.\,\ref{fig:1d_act_wasserstein} (right column),
when the mini-batch size $N$ is as large as 5000, both Wasserstein and CT lead to a well-trained generator. However, as shown in the left and middle columns, when $N$ is getting much smaller,
the generator trained with Wasserstein %
clearly underperforms %
that trained with ACT, especially when the mini-batch size becomes as small as $N=20$. While the Wasserstein distance $\mathcal{W}(\mu,\nu)$ in theory can well guide the training of a generative model,
the sample Wasserstein distance $\mathcal{W}(\hat\mu_N,\hat\nu_N)$, whose optimal transport plan is locally re-computed for each mini-batch, could be sensitive to the mini-batch size $N$, which also explains why in practice the sample Wasserstein-based generative models are difficult to train and desire a large mini-batch size \citep{salimans2018improving}. By contrast, ACT %
amortizes its conditional transport plans through its navigators, whose parameter $\boldsymbol{\phi}$ is globally updated across mini-batches, leading to a well-trained generator whose performance has low sensitivity to the mini-batch size.
\color{black}
\fi
\textbf{CT for 2D toy data and robustness in adversarial feature extraction:} To test CT in more general cases, we further conduct experiments on 4 representative 2D datasets for generative-modeling evaluation \cite{gulrajani2017improved}: 8-Gaussian mixture, Swiss Roll, Half Moons, and 25-Gaussian mixture. We apply the vanilla GAN \citep{goodfellow2014generative} and Wasserstein GAN with gradient penalty (WGAN-GP) \citep{gulrajani2017improved} as two representatives of
DGMs that require solving a min-max loss. We then apply generators trained under the sliced Wasserstein distance (SWD) \citep{deshpande2018generative} and the CT cost as two representatives of min-max-free DGMs. Moreover, we include CT with an adversarial feature encoder trained with \eqref{eq:d_eta} to test the robustness of the adversarial training and compare with the baselines that solve a min-max loss.
On each 2D data, we train these DGMs as one would normally do during the first $5k$ epochs. We then only train the generator and freeze all the other learnable model parameters, which means we freeze the discriminator in GAN, critic in WGAN, the navigator parameter $\boldsymbol{\phi}$ of the CT cost, and both $(\boldsymbol{\phi},\etav)$ of CT with an adversarial feature encoder, for another $5k$ epochs. Figs.\,\ref{fig:2d_8gaussian}-\ref{fig:2d_25gaussian} in Appendix~\ref{appendix:2d_toy} illustrate this training process on each dataset, where %
for both min-max baseline DGMs, the models collapse after the first $5k$ epochs, while the training for SWD remains stable and that for CT continues to improve. Compared to SWD, our method covers all data density modes and moves the generator much closer to the true data density. Notably, for CT with an adversarially trained feature encoder, although training
switches from
solving a min-max loss to freezing the feature encoder after the first $5k$ epochs, the frozen encoder continues to guide the DGM through the remaining $5k$ epochs, which demonstrates the robustness of the CT cost.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.20\textwidth}
\includegraphics[width=\textwidth]
{ablation_critic_space/8gaussians_CT_5000.pdf}\vspace{-2mm}
\caption{\small Adv CT.
}\label{fig:minmax_CT}
\end{subfigure}\vrule\hfill
\begin{subfigure}[t]{0.18\textwidth}
\includegraphics[width=\textwidth]
{ablation_critic_space/GAN_5000.png}\vspace{-2mm}
\caption{\small GAN.
}\label{fig:min_gan}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.20\textwidth}
\includegraphics[width=\textwidth]
{ablation_critic_space/8gaussians_d_5000.pdf}\vspace{-2mm}
\caption{\small $\mathcal{L}_D$ + CT.
}\label{fig:min_CT_max_d}
\end{subfigure}\vrule\hfill
\begin{subfigure}[t]{0.18\textwidth}
\includegraphics[width=\textwidth]
{ablation_critic_space/SWD_5000.png}\vspace{-2mm}
\caption{\small SWD.
}\label{fig:min_slice}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.20\textwidth}
\includegraphics[width=\textwidth]
{ablation_critic_space/8gaussians_slicing_5000.pdf}\vspace{-2mm}
\caption{\small Slicing + CT.
}\label{fig:min_slicedCT}
\end{subfigure}
\vspace{-1mm}
\caption{\small Ablation of fitting results obtained by minimizing CT in different spaces: (a) CT calculated with an adversarially trained encoder. (b-c) GAN \textit{vs.} CT with a feature space cooperatively trained with the discriminator loss. (d-e) Sliced Wasserstein distance and CT in the sliced space.
}\label{fig:ablation_coop}\vspace{-1mm}
\end{figure}
\begin{table}[t]
\vspace{-4mm}
\begin{minipage}[t]{.34\columnwidth}
\centering
\caption{\small FID comparison with different cooperative training on CIFAR-10 (lower FID is preferred). }
\label{tab:comparison_co-train}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{|c|c|}
\hline
Critic space & FID $\downarrow$ \\ \hline
Discriminator & 29.7 \\
Slicing & 32.4 \\ \hline
Adversarial CT & \textbf{22.1}
\\ \hline
\end{tabular}
}
\end{minipage}~~~~~~~\hfill
\begin{minipage}[t]{.63\columnwidth}
\centering
\caption{\small FID comparison using MMD (rational quadratic kernel / distance kernel) and CT losses for training the critic/generator on CIFAR-10 (lower FID is preferred). }
\label{tab:comparison_MMD}
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{MMD-rq}} & \multicolumn{2}{c|}{Generator loss} & \multicolumn{2}{|c|}{\multirow{2}{*}{MMD-dist}} & \multicolumn{2}{c|}{Generator loss}\\ \cline{3-4} \cline{7-8}
\multicolumn{2}{|c|}{} & MMD & CT & \multicolumn{2}{|c|}{} & MMD & CT \\ \hline
{{Critic}} & MMD & 39.9 & 24.1 & {{Critic}} & MMD & 40.3 & \textbf{28.8} \\ \cline{2-4} \cline{6-8}
{loss} & CT & 41.4 & \textbf{23.9} & {loss} & CT & 30.9 & {29.4} \\\hline
\end{tabular}
}
\end{minipage}~~~~~
\vspace{-3mm}
\end{table}
\textbf{Ablation of cooperatively-trained and adversarially-trained CT:} As the previous experiments show that an adversarially trained feature encoder provides a valid feature space for the CT cost, we further study the performance of encoders cooperatively trained with other losses. As two alternatives, we leverage the space of an encoder trained with the discriminator loss in GANs and the empirical Wasserstein distance in sliced 1D spaces \cite{wu2019sliced}. We test these settings on both the 8-Gaussian data, as shown in Fig.\,\ref{fig:ablation_coop}, and CIFAR-10, as shown in Table~\ref{tab:comparison_co-train}. It is confirmed that these encoders can cooperatively work with CT, though they generally produce less appealing results than those trained by maximizing CT. From this view, although CT can guide the generators in feature spaces learned with various options, maximizing CT is still preferred to ensure efficiency. Moreover, as observed in Figs.\,\ref{fig:min_gan}-\ref{fig:min_slicedCT}, CT clearly improves the fitting over the sliced Wasserstein distance. To explain why CT helps in the sliced space, we further provide a 1D toy example studying the properties of CT and the empirical Wasserstein distance in Appendix~\ref{appendix:WvsCT}.
\textbf{Ablation of MMD and CT:}
As MMD also compares the pair-wise sample relations in a mini-batch,
we study if MMD and CT can benefit each other. The feature space of MMD-GAN can be considered as $\mathcal{T}_{\etav} \circ k$, where $k$ is the rational quadratic or distance kernel in {\citet{binkowski2018demystifying}}. Here we evaluate the combinations of MMD/CT as the generator/encoder criterion to train DGMs.
On $\text{CIFAR-10}$, as shown in Table~\ref{tab:comparison_MMD}, combining MMD and CT generally improves the FID over MMD alone. It is interesting to note that for MMD-GAN, learning its generator with the CT cost yields a more pronounced improvement than learning its feature encoder with the CT cost. We speculate that this is because the estimation of MMD relies on the supremum of its witness function, which needs to be maximized \textit{w.r.t.} $\mathcal{T}_{\etav} \circ k$ and cannot be guaranteed by maximizing CT \textit{w.r.t.} $\mathcal{T}_{\etav}$. In the case of MMD-dist, using CT for the witness-function updates shows a clearer improvement, probably because CT has a form similar to MMD when using the distance kernel. From this view, CT and MMD can naturally be combined to compare distributional differences with pair-wise sample relations. Different from MMD, CT does not involve the choice of a kernel, and its navigators help improve the comparison efficiency.
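For reference, the following is a hedged sketch of the (biased) mini-batch MMD$^2$ estimate with the distance kernel underlying the MMD-dist baseline; in MMD-GAN the kernel is composed with the learned encoder $\mathcal{T}_{\etav}$, a step omitted in this toy version.

```python
import numpy as np

def mmd2_distance_kernel(x, y):
    """Biased MMD^2 estimate with the distance ("energy") kernel
    k(a, b) = -||a - b||_2, computed directly on the inputs.
    (In MMD-GAN, x and y would first pass through the feature encoder.)"""
    def gram(a, b):
        # pairwise kernel matrix k(a_i, b_j) over the two mini-batches
        return -np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()
```

Like the CT cost, this statistic touches all $NM$ cross pairs of the two mini-batches, which is why the two losses are natural candidates to combine.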
Below we show that, on more image datasets, CT is compatible with many existing models and achieves consistent improvements on data of different scales.
\textbf{Adversarially-trained CT for natural images: }
We conduct a variety of experiments on natural images to evaluate the performance and reveal the properties of DGMs optimized under the CT cost.
We consider three widely-used image datasets, including CIFAR-10 \citep{cifar10}, CelebA \citep{celeba}, and LSUN-bedroom \citep{lsun} for general evaluation, as well as CelebA-HQ \cite{karras2018progressive}, FFHQ \cite{karras2019style} for evaluation in high-resolution. We compare the results of DGMs optimized with the CT cost against DGMs %
trained with their original criteria, including DCGAN \citep{radford2015unsupervised}, the Sliced Wasserstein Generative model (SWG) \cite{deshpande2018generative}, MMD-GAN \citep{binkowski2018demystifying}, SNGAN \citep{miyato2018spectral}, and StyleGAN2 \cite{Karras2019stylegan2}. For a fair comparison, we use the best configurations reported in the corresponding papers or GitHub pages. The detailed setups can be found in Appendix~\ref{app:experiment detail}. As evaluation metrics, we
consider
the commonly used Fr\'echet inception distance (FID, lower is preferred) \cite{heusel2017gans} on all datasets and the Inception Score (IS, higher is preferred) \cite{salimans2016improved} on CIFAR-10.
Both FID and IS are calculated using a pre-trained inception model
\citep{szegedy2016rethinking}.
\begin{table}[]
\centering
\caption{\small Results of CT with different deep generative models on CIFAR-10, CelebA, and LSUN. Base-model results are quoted from the corresponding papers or GitHub pages. }\label{tab:fid}
\resizebox{.85\columnwidth}{!}{%
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{c}{Fr\'echet Inception Distance (FID $\downarrow$)} & Inception Score ($\uparrow$) \\ \cmidrule(lr){2-4} \cmidrule(lr){5-5}
& CIFAR-10 & CelebA & LSUN-bedroom & CIFAR-10 \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-5}
DCGAN \citep{radford2015unsupervised} & 30.2$\pm$0.9 & 52.5$\pm$2.2 & 61.7$\pm$2.9 & 6.2$\pm$0.1 \\
CT-DCGAN
&
\textbf{22.1$\pm$1.1} &
\textbf{29.4$\pm$2.0} &
\textbf{32.6$\pm$2.5} &
\textbf{7.5$\pm$0.1} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-5}
SWG \citep{deshpande2018generative} & 33.7$\pm$1.5 & 21.9$\pm$2.0 & 67.9$\pm$2.7 & - \\
CT-SWG & \textbf{25.9$\pm$ 0.9} & \textbf{18.8 $\pm$ 1.2} & \textbf{39.0 $\pm$ 2.1} & 6.9 $\pm$ 0.1 \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-5}
MMD-GAN \citep{binkowski2018demystifying} & 39.9$\pm$0.3 & 20.6$\pm$0.3 & \textbf{32.0$\pm$0.3} & 6.5$\pm$0.1 \\
CT-MMD-GAN &
\textbf{23.9 $\pm$ 0.4} &
\textbf{13.8 $\pm$ 0.4} &
38.3 $\pm$ 0.3 &
\textbf{7.4 $\pm$ 0.1} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-5}
SNGAN \citep{miyato2018spectral} & 21.5$\pm$1.3 & 21.7$\pm$1.5 & 31.1$\pm$2.1 & {8.2$\pm$0.1} \\
CT-SNGAN
&
{\textbf{17.2$\pm$1.0}} &
{\textbf{9.2$\pm$1.0}} &
{\textbf{16.8$\pm$2.1}} &
{\textbf{8.8$\pm$0.1}} \\
\cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-5}
StyleGAN2 \citep{Karras2019stylegan2} & 5.8 & 5.2 & \textbf{2.9} & {10.0 } \\
CT-StyleGAN2&
\textbf{2.9 $\pm$ 0.5} &
\textbf{4.0 $\pm$ 0.7} &
6.3 $\pm$ 0.2 &
\textbf{10.1 $\pm$ 0.1} \\
\bottomrule
\end{tabular}%
}\vspace{-2mm}
\end{table}
The FID and IS results of the previously mentioned models are summarized in Table~\ref{tab:fid}. We observe that, trained with the CT cost, all the models improve by different margins in most cases, suggesting that CT is compatible with standard GANs, SWG, MMD-GANs, and WGANs and generally helps improve generation quality, especially for data with richer modalities such as CIFAR-10. CT is also compatible with advanced model architectures such as StyleGAN2, confirming that a better feature space makes CT more efficient at guiding the generator and producing better results.
\begin{figure}[!t]
\centering
\includegraphics[width=.315\textwidth]{update_implementation/cifar10.pdf}\,\,
\includegraphics[width=.315\textwidth]{update_implementation/celeba.jpg}\,\,
\includegraphics[width=.315\textwidth]{update_implementation/lsun.pdf
\caption{\small Generated samples of the deep generative model that adopts the backbone of SNGAN but is optimized with the CT cost %
on CIFAR-10, CelebA, and LSUN-Bedroom. See Appendix \ref{app:results} for more results.}\label{fig:generation}
\vspace{-6mm}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{lsun/lsunhq_vis_sngan.pdf}\vspace{-2mm}
\caption{\small LSUN-Bedroom (128x128).
}\label{fig:lsun_128}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{celeba/celebahq_vis_sngan.pdf}\vspace{-2mm}
\caption{\small CelebA-HQ (256x256).
}\label{fig:celebahq}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{HQ_generation/vis_lsun256.pdf}\vspace{-2mm}
\caption{\small LSUN-Bedroom (256x256).
}\label{fig:lsun_256}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{HQ_generation/vis_ffhq.pdf}\vspace{-2mm}
\caption{\small FFHQ (256x256/1024x1024).
}\label{fig:ffhq}
\end{subfigure}\vspace{-2mm}
\caption{\small Generation results in higher-resolution cases, with SNGAN and StyleGAN2 architecture. \textit{Top}:%
LSUN-Bedroom (128x128) and CelebA-HQ (256x256), done with CT-SNGAN. \textit{Bottom}: LSUN-Bedroom (256x256) and FFHQ (256x256/1024x1024), done with CT-StyleGAN2.
}\label{fig:celebahq_sngan_vis}
\vspace{-3mm}
\end{figure}
The qualitative results shown in Fig.\,\ref{fig:generation} are consistent with quantitative results in Table \ref{tab:fid}. To additionally show how CT works for more complex generation tasks, we show in %
Fig.\,\ref{fig:celebahq_sngan_vis} %
example higher-resolution images generated by CT-SNGAN on LSUN bedroom (128x128) and CelebA-HQ (256x256), as well as images generated by CT-StyleGAN2 on LSUN bedroom (256x256), FFHQ (256x256), and FFHQ (1024x1024).
\begin{wraptable}{r}{.44\columnwidth}
\vspace{-13pt}
\caption{FID of generation results on CIFAR-10, trained with different $\rho$.}\label{tab:cifar10-rho}
\renewcommand{\arraystretch}{1.}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{c|ccccc}
\toprule
$\rho$ & 1 & 0.75 & 0.5 & 0.25 & 0 \\
\midrule
CT-DCGAN & 25.1 & 22.1 & 22.1 & \textbf{21.4} & 72.1 \\
CT-SNGAN & 23.2& 17.5 & \textbf{17.2} & \textbf{17.2} & 33.2 \\
\bottomrule
\end{tabular}
}\vspace{-14pt}
\end{wraptable}
\textcolor{black}{\textbf{On the choice of $\rho$
for natural images:} In the previous experiments, we fix $\rho=0.5$ by default when we prefer neither mode-covering nor mode-seeking. We further tune $\rho$ as an additional ablation study on the CIFAR-10 dataset, with both the CT + DCGAN and CT + SNGAN backbones, to see its effect on metrics such as the FID score. The results shown in Table \ref{tab:cifar10-rho} suggest that CT is not sensitive to the choice of $\rho$ as long as $0<\rho<1$, and that the FID score can be further improved by choosing a smaller $\rho$ to bias towards mode-seeking.
}
\iffalse
\begin{figure}[!th]
\centering
\begin{minipage}[t]{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{update_implementation/cifar10.pdf}
\end{minipage}\hfill
\begin{minipage}[t]{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{update_implementation/celeba.jpg}
\end{minipage}\hfill
\begin{minipage}[t]{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{update_implementation/lsun.pdf}
\end{minipage}
\caption{\small Generated samples of the deep generative model that adopts the backbone of SNGAN but is optimized with the ACT cost %
on CIFAR-10, CelebA, and LSUN-Bedroom. See Appendix \ref{app:B} for more results.}\label{fig:generation}
\end{figure}
\fi
\section{Conclusion}
We propose conditional transport (CT) as a new criterion to quantify the difference between two probability distributions, via the use of both forward and backward conditional distributions. The forward and backward expected costs are taken with respect to a source-dependent and a target-dependent conditional distribution, respectively, both defined via Bayes' theorem. The CT cost can be approximated with discrete samples and optimized with existing stochastic gradient descent based methods. Moreover, the forward and backward CT possess mode-covering and mode-seeking properties, respectively. By combining them, CT incorporates and balances these two properties, showing robustness in resisting mode collapse. On complex and high-dimensional data, the CT cost can be computed in a valid feature space, which can be learned either by adversarially maximizing CT or by cooperatively deploying existing methods, and can stably guide generative models. On various benchmark datasets for deep generative modeling, we successfully train advanced models with CT. Our results consistently show improvements over the original models, justifying the effectiveness of the proposed CT loss.
\textbf{Discussion:} Note that CT brings consistent improvements to these DGMs
without modifying their network architectures or adding %
gradient regularization. Thus it has great potential to work in conjunction with other state-of-the-art architectures or methods, such as BigGAN \citep{brock2018large}, self-attention GANs \citep{zhang2019self}, partition-guided GANs \citep{armandpour2021partition}, multimodal-DGMs \citep{Zhang2020Variational}, BigBiGAN \citep{%
donahue2019large}, self-supervised learning \citep{chen2019self}, and data augmentation \citep{karras2020training,zhao2020differentiable,zhao2020image}, which we leave for future study.
As this paper is primarily focused on constructing and validating a new approach to quantify the difference between two probability distributions, we have focused on demonstrating the efficacy and interesting properties of the proposed CT on toy data and benchmark image data. We have focused on the previously mentioned models as representatives of GANs, MMD-GANs, and WGANs under CT, and we leave using CT to optimize other choices of DGMs, such as VAE-based models \cite{kingma2013auto} and neural SDEs \cite{song2021scorebased}, to future work.
\section*{Acknowledgments}
The authors acknowledge the support of
NSF IIS-1812699, the APX 2019 project sponsored
by the Office of the Vice President for Research at The University of Texas at Austin, the support of a gift fund from
ByteDance Inc., and the Texas Advanced Computing Center
(TACC) for providing HPC resources that have contributed
to the research results reported within this paper.
\bibliographystyle{unsrtnat}
\section{Acknowledgement}\label{sec:acknow}
The authors acknowledge the support of the Ministry of Science and Technology of Taiwan (MOST110-2115-M-A49-003-MY2).
\subsection{Spectral Analysis of Boundary Intersection-over-Union Score}\label{ssec:method_spectral_bioU}
Following the notation in section~\ref{ssec:method_spectral_ioU}, the boundary
intersection-over-union (Boundary IoU)~\cite{cheng2021boundary} score is defined as
\begin{equation}
\begin{aligned}
boundary\,IoU = \frac{|(B \cap B_d) \cap (S \cap S_d)|}{ |(B \cap B_d) \cup (S \cap S_d)|},
\end{aligned}
\end{equation}
where $S_d$ and $B_d$ denote the pixels in the boundary regions of $S$ and $B$, respectively, and $d$ is the width of the boundary region.
Compared to the IoU score, this metric has been shown to be sensitive to the boundary, especially for large objects. Beyond this sensitivity to the object boundary, this section reveals its theoretical underpinning and demonstrates that the boundary IoU is also mainly contributed by the low-frequency components of the segmentation map.
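As a concrete illustration, the definition above can be checked on a toy one-dimensional example. The sketch below is ours, with hypothetical helper names and segments represented as sets of integer pixel indices; it is not part of the original evaluation code.

```python
# Illustrative 1-D sketch of the Boundary IoU definition above.

def interval(lo, hi):
    """Pixels of a 1-D segment covering [lo, hi)."""
    return set(range(lo, hi))

def boundary_region(seg, d):
    """Pixels of `seg` within distance d of a pixel outside `seg` (i.e. seg ∩ seg_d)."""
    return {p for p in seg
            if any(q not in seg for q in range(p - d, p + d + 1))}

def boundary_iou(S, B, d):
    """|(B ∩ B_d) ∩ (S ∩ S_d)| / |(B ∩ B_d) ∪ (S ∩ S_d)|."""
    s_in, b_in = boundary_region(S, d), boundary_region(B, d)
    union = b_in | s_in
    return len(b_in & s_in) / len(union) if union else 0.0
```

Shifting one edge of $B$ by a couple of pixels already changes the score noticeably, reflecting the metric's boundary sensitivity even when the plain IoU barely moves.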
Without loss of generality, we analyze the one-dimensional case of the boundary IoU.
We consider binary segmentation maps of the form
\begin{equation}
\begin{aligned}
S &= H(t - t_{s0}) - H(t - t_{s1}), t_{s0} < t_{s1} \\
B &= H(t - t_{b0}) - H(t - t_{b1}), t_{b0} < t_{b1}
\end{aligned}
\end{equation}
where $H$ is the Heaviside function; $t_{s0}$ and $t_{s1}$ are the boundary pixels of $S$; $t_{b0}$ and $t_{b1}$ are the boundary pixels of $B$.
We model the boundary region of each segmentation map by two Gaussian functions, one for each boundary edge. Namely,
\begin{equation}
\begin{aligned}
\Omega_S &= S_d \cap S =e^{- \frac {(t - t_{s0})^{2}}{2\sigma^2}} + e^{ - \frac {(t -
t_{s1})^{2}}{2\sigma^2}} \\
\Omega_B &= B_d \cap B = e^{- \frac {(t - t_{b0})^{2}}{2\sigma^2}} + e^{ - \frac {(t - t_{b1})^{2}}{2\sigma^2}}
\end{aligned}
\label{Eq:Omega}
\end{equation}
where $\sigma = \sigma(d)$ is the width of the Gaussians, associated with the boundary width $d$ in $S_d$ and $B_d$ mentioned above. Their Fourier transforms are
\begin{equation}
\begin{aligned}
\omega_s &= (e^{- j\,t_{s0}\,\nu} + e^{- j\,t_{s1}\,\nu})\,e^{ - \frac {\nu ^{2}\,\sigma ^{2}}{2}} \\
\omega_b &= (e^{- j\,t_{b0}\,\nu} + e^{- j\,t_{b1}\,\nu})\,e^{- \frac {\nu ^{2}\,\sigma ^{2}}{2}}
\end{aligned}
\label{Eq:omega}
\end{equation}
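As a sanity check, the Fourier-transform pair above can be verified numerically. The sketch below is ours: it uses a crude Riemann sum, illustrative values for $t_{s0}$, $t_{s1}$, and $\sigma$, and restores the constant $\sigma\sqrt{2\pi}$ prefactor that the convention of Eq.~\ref{Eq:omega} absorbs.

```python
import cmath
import math

def ft(f, nu, T=50.0, n=20000):
    """Midpoint-rule approximation of the Fourier integral over [-T, T]."""
    dt = 2 * T / n
    total = 0j
    for k in range(n):
        t = -T + (k + 0.5) * dt
        total += f(t) * cmath.exp(-1j * nu * t)
    return total * dt

t_s0, t_s1, sigma = -3.0, 4.0, 1.5  # illustrative boundary positions and width

def omega_S(t):
    """The two-Gaussian boundary model Omega_S."""
    return (math.exp(-(t - t_s0) ** 2 / (2 * sigma ** 2))
            + math.exp(-(t - t_s1) ** 2 / (2 * sigma ** 2)))

def omega_s_analytic(nu):
    """omega_s from the equation above, with the sigma*sqrt(2*pi) prefactor restored."""
    return ((cmath.exp(-1j * t_s0 * nu) + cmath.exp(-1j * t_s1 * nu))
            * math.exp(-nu ** 2 * sigma ** 2 / 2)
            * sigma * math.sqrt(2 * math.pi))
```

The numerical transform agrees with the analytic expression to high precision for small and moderate $\nu$.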
Following a similar derivation as in Eq.~\ref{Eq:iou} and Eq.~\ref{Eq:spectral_iou}, we have
\begin{equation}
\begin{aligned}
boundary\,IoU({s,b}) &= \frac{1}{{\frac{{ {\omega_s(0) + \omega_b(0)} }}{{{\int} \omega_s(\nu)\omega_b(- \nu)d\nu}} - 1}},
\end{aligned}
\label{Eq:spectral_biou}
\end{equation}
where
\begin{equation}
\begin{aligned}
&{{\int} \omega_s(\nu)\omega_b(- \nu)d\nu} = \frac {\sqrt{\pi }}{2\sigma} ( \\
& e^{ - \frac {( - t_{s0} + t_{b0})^{2}}{4\sigma ^{2}}}\mathrm{erf}(\nu \sigma - j{\frac {
(t_{b0} - t_{s0})}{2\sigma }} ) \\
+ & e^{- \frac {( - t_{s0} + t_{b1})^{2}}{4\sigma ^{2}}}\mathrm{erf}(\nu \sigma - j{\frac {(t_{b1} - t_{s0})}{2\sigma }} ) \\
+ & e^{- \frac {( - t_{s1} + t_{b0})^{2}}{4\sigma ^{2}}}\mathrm{erf}(\nu \sigma - j{ \frac {(t_{b0} - t_{s1})}{2\sigma }} ) \\
+ & e^{- \frac {( - t_{s1} + t_{b1})^{2}}{4\sigma ^{2}}}\mathrm{erf}(\nu \sigma - j{ \frac {(t_{b1} - t_{s1})}{2\sigma }} ))
\end{aligned}
\label{Eq:spectral_overla_integral}
\end{equation}
by plugging in Eq.~\ref{Eq:omega}; $\mathrm{erf}$ is the error function.
Similar to Eq.~\ref{Eq:spectral_iou}, Eq.~\ref{Eq:spectral_biou} consists of a zero-frequency part, \textit{i}.\textit{e}.~ ${\omega_s(0) + \omega_b(0)}$, and a non-zero-frequency part, \textit{i}.\textit{e}.~ Eq.~\ref{Eq:spectral_overla_integral}. We focus on analyzing the non-zero-frequency part to further reveal the sensitivity of the boundary IoU with respect to these frequency regimes. Eq.~\ref{Eq:spectral_overla_integral} can be further approximated as
\begin{equation}
\begin{aligned}
&{{\int} \omega_s(\nu)\omega_b(- \nu)d\nu} \simeq \frac {\sqrt{\pi }}{2\sigma}\mathrm{erf}(\nu \sigma)(
e^{ - \frac {( - t_{s0} + t_{b0})^{2}}{4\sigma ^{2}}}
\\ & + e^{- \frac {( - t_{s0} + t_{b1})^{2}}{4\sigma ^{2}}}
+ e^{- \frac {( - t_{s1} + t_{b0})^{2}}{4\sigma ^{2}}}
+ e^{- \frac {( - t_{s1} + t_{b1})^{2}}{4\sigma ^{2}}})
\end{aligned}
\label{Eq:spectral_overla_integral_approx}
\end{equation}
by using the expansion of the erf function
\begin{equation}
\begin{aligned}
& \mathrm{erf}(\nu \sigma - jC) \simeq \mathrm{erf}(\nu \sigma) +
\\ & \frac{e^{-\nu^2 \sigma^2}}{2\pi \nu \sigma}(1-\cos(2\nu \sigma C)+j\,\sin(2\nu \sigma C))
\\ & \simeq \mathrm{erf}(\nu \sigma)
\end{aligned}
\label{Eq:erf_approx}
\end{equation}
It follows immediately from Eq.~\ref{Eq:spectral_overla_integral_approx} that ${{\int} \omega_s(\nu)\omega_b(- \nu)d\nu}$ is mainly contributed by the low-frequency regime due to the erf function.
This implies that the boundary IoU is also mainly contributed by the low-frequency regime while being sensitive to the object boundary.
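The quality of the erf approximation in Eq.~\ref{Eq:erf_approx} can also be probed numerically. The sketch below is ours and uses a Maclaurin series for the complex erf (the standard library only provides the real one); the argument values are illustrative.

```python
import math

def erf_series(z, terms=40):
    """Maclaurin series of erf(z); adequate for moderate |z| (illustrative helper)."""
    s = z
    term = z
    for n in range(1, terms):
        term *= -z * z / n          # term now equals (-1)^n z^(2n+1) / n!
        s += term / (2 * n + 1)
    return 2 / math.sqrt(math.pi) * s

# erf(nu*sigma - jC) stays close to erf(nu*sigma) for moderate nu*sigma and C.
x, C = 1.5, 0.3
deviation = abs(erf_series(complex(x, -C)) - math.erf(x))
```

For $x=1.5$ and $C=0.3$ the deviation is on the order of $10^{-2}$, small relative to $\mathrm{erf}(1.5)\approx 0.97$, consistent with dropping the correction term.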
\section{Conclusion}\label{sec:conclusion}
Our proposed spectral analysis for semantic segmentation networks correlates CE, the IoU score, and gradient back-propagation from a spectral point of view.
We first explicitly decompose CE and demonstrate that it is mainly contributed by the low-frequency components of the segmentation maps, which associate with the features in CNNs at the same frequencies. Furthermore, we propose $R(\nu_{max})$ to estimate the efficacy of the LRG for segmentation maps.
We test our theory on two applications: feature truncation and block annotation. Our results show that combining feature truncation with network pruning can save computational cost significantly at a small loss of accuracy. In addition, block annotation can potentially save even more labeling cost, since a network trained using block-wise annotation on an efficient LRG performs close to the original network. The results of our experiments agree with our theoretical predictions based on $R(\nu_{max})$.
Lastly, despite the theoretical analysis and validation in this work, it remains unclear how to determine the efficient band limit $\nu_{max}$ of the LRG for various datasets. Estimating $\nu_{max}$ from the spectrum of the groundtruth annotation is left for future work.
\section{Validation and Applications}\label{sec:experiments}
This section validates the spectral analysis of section~\ref{sec:method} and proposes two applications: feature truncation and block-wise annotation.
In section~\ref{ssec:exp_spec_ce_grad}, we validate the spectral analysis of section~\ref{sec:method}, including the frequency components of CE in Eq.~\ref{Eq:spec_ce_component} and the spectral gradient in Eq.~\ref{Eq:ce_spec_grad}.
The validation of the frequency components $\mathcal{L}_{ce}(\nu)$ of CE demonstrates that CE is mainly contributed by the low-frequency components, while the validation of the spectral gradient $\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ shows that it can be well approximated by a delta function as in Eq.~\ref{Eq:ce_spec_grad_ieq0}.
Based on these numerical validations, we identify the efficient LRGs and apply them to the features in CNNs and to the groundtruth annotation. This leads us to propose two applications that down-sample the segmentation maps onto LRGs: (1) feature truncation and (2) block-wise annotation, detailed in sections~\ref{ssec:exp_feat_trunc} and~\ref{ssec:exp_blk_annot}, respectively.
\\
\noindent\textbf{Datasets and Segmentation Networks.}
We conduct experiments on the following three semantic segmentation datasets: the PASCAL semantic segmentation benchmark \cite{everingham2015pascal}, the DeepGlobe land-cover classification challenge \cite{demir2018deepglobe}, and the Cityscapes pixel-level semantic labeling task \cite{cordts2016cityscapes} (denoted as PASCAL, DeepGlobe and Cityscapes, respectively). For segmentation networks, we utilize DeepLab v3+~\cite{chen2018encoder} and Deep Aggregation Net (DAN)~\cite{kuo2018dan}. See section~\ref{ssec:implement} of the appendix for implementation details.
\subsection{Validation of Spectral Analysis}\label{ssec:exp_spec_ce_grad}
\subsubsection{Spectral Decomposition of CE.}\label{ssec:exp_spec_ce_grad:ce}
This section demonstrates that CE is mainly contributed by the low-frequency components, as discussed in section~\ref{ssec:method_spectral_ce}, and investigates the efficacy of using the LRG for prediction by modern networks.
We evaluate $|b(\nu)|$, $|\widehat{y}(\nu)|$, ${\mathcal{L}}_{ce}(\nu)$ and $R({\nu}_{max})$ for various segmentation networks (DeepLab v3+ and DAN) and datasets (PASCAL, DeepGlobe and Cityscapes); $|b(\nu)|$ and $|\widehat{y}(\nu)|$ are the power spectra of $b(\nu,c)$ and $\widehat{y}(\nu,c)$ averaged over all semantic classes $c$, respectively.
The results are shown in Fig.~\ref{Fig:spectral_model_stat}.~\footnote{The profiles of $|b(\nu)|$ and $|\widehat{y}(\nu)|$ are normalized with respect to their corresponding maximal values for better comparison across datasets.} To monitor the training progress, the evaluations at both the initial and final stages of network training are additionally shown in Fig.~\ref{Fig:spectral_training_stat}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{figures/model_stat.png}
\caption{The spectral decomposition of CE (${\mathcal{L}}_{ce}(\nu)$), the relative discrepancy of CE ($R({\nu}_{max})$), and the absolute value of spectra ($|b(\nu)|$ and $|\widehat{y}(\nu)|$). The profiles of $|\widehat{y}(\nu)|$, ${\mathcal{L}}_{ce}(\nu)$, and $R({\nu}_{max})$ are evaluated based on DeepLab v3+ and DAN.}
\label{Fig:spectral_model_stat}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.\columnwidth]{figures/training_stat.png}
\caption{Spectral decomposition of CE for DeepLab v3+ and DAN at both the initial and final training stages. The notation follows Fig.~\ref{Fig:spectral_model_stat}. The suffix $_{init}$ additionally denotes the profiles of the network at the initial stage.}
\label{Fig:spectral_training_stat}
\end{figure}
The results shown in Fig.~\ref{Fig:spectral_model_stat} indicate that $|\widehat{y}(\nu)|$ is indeed small in the high-frequency region and thus leads to small ${\mathcal{L}}_{ce}(\nu)$, as discussed above. These results also support that CE is mainly contributed by the low-frequency components. On the other hand, the results shown in Fig.~\ref{Fig:spectral_training_stat} reveal that the low-frequency components of ${\mathcal{L}}_{ce}(\nu)$ decrease markedly as training progresses, suggesting a spectral bias for semantic segmentation, \textit{i}.\textit{e}.~ networks learn to capture the low-resolution grid more effectively.
\begin{table}[h]
\centering
\caption{Truncated CE and its relative discrepancy with respect to CE under various band limits ${\nu}_{max}$, on the PASCAL, DeepGlobe and Cityscapes datasets.}
\label{table:spectral_stat}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{clccccc}
\multicolumn{1}{c|}{Dataset} & \multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & \multicolumn{1}{c|}{64} & 256 \\ \hline
\multicolumn{7}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{0.184} & \multicolumn{1}{c|}{0.040} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.008} & 0 \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{$R({\nu}_{max})$} & \multicolumn{1}{c|}{0.053} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.007} & \multicolumn{1}{c|}{0.003} & 0 \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{1.782} & \multicolumn{1}{c|}{0.517} & \multicolumn{1}{c|}{0.065} & \multicolumn{1}{c|}{0.014} & 0 \\ \hline
\multicolumn{7}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{0.197} & \multicolumn{1}{c|}{0.045} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.008} & 0 \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{$R({\nu}_{max})$} & \multicolumn{1}{c|}{0.078} & \multicolumn{1}{c|}{0.025} & \multicolumn{1}{c|}{0.009} & \multicolumn{1}{c|}{0.004} & 0 \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{2.004} & \multicolumn{1}{c|}{0.723} & \multicolumn{1}{c|}{0.153} & \multicolumn{1}{c|}{0.017} & 0
\end{tabular}
}
\end{table}
Here we further investigate the efficacy of LRGs for prediction based on the $R({\nu}_{max})$ examined in Fig.~\ref{Fig:spectral_model_stat}. For comparison, we examine the numeric values of $R({\nu}_{max})$ in Table~\ref{table:spectral_stat}. Note that ${\mathcal{L}}_{ce}=\widehat{\mathcal{L}}_{ce}(256)$ due to the image size of 513 in our experiments.
Apparently, as the resolution of the grid increases, $R({\nu}_{max})$ becomes smaller since more high-frequency information is captured. More specifically, $R({\nu}_{max})$ drops to small values once ${\nu}_{max} > 10$ in all experiments on the trained networks. On the other hand, the segmentation maps predicted by these networks are evaluated on the 129 $\times$ 129 LRG that corresponds to ${\nu}_{max} = 64$.
This empirical evidence suggests that LRGs with $10 < {\nu}_{max} < 64$ can still efficiently sample the segmentation map without significant information loss. These efficient LRGs are further validated in our proposed applications, \textit{i}.\textit{e}.~ feature truncation and block-wise annotation, in sections~\ref{ssec:exp_feat_trunc} and~\ref{ssec:exp_blk_annot}, respectively.
Besides, it is apparent from Table~\ref{table:spectral_stat} that $R({\nu}_{max})$ on the Cityscapes dataset is significantly larger than on the PASCAL and DeepGlobe datasets. We therefore expect better efficacy of feature truncation and block-wise annotation on the PASCAL and DeepGlobe datasets.
\subsubsection{Spectral Gradient.}\label{ssec:exp_spec_ce_grad:grad}
We now evaluate the gradients introduced in section~\ref{ssec:method_spec_grad}, including $\frac{\partial y(\nu_i)}{\partial x(\nu_j)}$ in Eq.~\ref{Eq:convlayer_spec_grad} and $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ in Eq.~\ref{Eq:ce_spec_grad}. We demonstrate that both $\frac{\partial y(\nu_i)}{\partial x(\nu_j)}$ and $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ can be approximated by delta functions.
We validate these spectral gradients on the three datasets (PASCAL, DeepGlobe and Cityscapes) and take the spectra of the ASPP (atrous spatial pyramid pooling) features in DeepLab v3+ and DAN as our example $x(\nu_j)$. The evaluated $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ for DeepLab v3+ and DAN are illustrated in Fig.~\ref{Fig:spec_grad_deeplab} and Fig.~\ref{Fig:spec_grad_dan}, respectively. Besides, we validate $\frac{{\partial y(\nu_i)}}{{\partial x(\nu_j)}}$ for various operations within the decoder modules of these networks, including convolution, ReLU, and bilinear up-sampling, where $x(\nu_j)$ and $y(\nu_i)$ denote the input and output spectra of these operations, respectively. These spectral gradients are illustrated in Fig.~\ref{Fig:spec_grad}.
For all the spectral gradients shown in Fig.~\ref{Fig:spec_grad_deeplab}, Fig.~\ref{Fig:spec_grad_dan} and Fig.~\ref{Fig:spec_grad}, the frequencies $\nu_i \in \{0, \frac{M}{4}, \frac{M}{2}\}$ are evaluated as examples, where $M$ is the size of the input features; we set $M=33$ in this experiment. For each $\nu_i$, we evaluate the spectral gradient for all frequencies $\nu_j$ from $0$ to $\frac{M}{2}$. This yields, for each $\nu_i$, a row of spectral-gradient values over $\nu_j$, shown as one row of the figure.
\begin{figure}[ht]
\captionsetup[subfigure]{}
\centering
\begin{subfigure}[]{0.45\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figures/spectral_gradient_deeplab.png}
\caption{DeepLab v3+}
\label{Fig:spec_grad_deeplab}
\end{subfigure}
\begin{subfigure}[]{0.45\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figures/spectral_gradient_dan.png}
\caption{DAN}
\label{Fig:spec_grad_dan}
\end{subfigure}
\caption{The evaluation of the spectral gradient ${{\partial {\mathcal{L}}_{ce}(\nu_i)}}/{{\partial x(\nu_j)}}$ for DeepLab v3+ and DAN. The spectral gradients are evaluated on the PASCAL, DeepGlobe and Cityscapes datasets.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\columnwidth]{figures/spectral_gradient_unit.png}
\caption{The evaluation of the spectral gradient ${{\partial y(\nu_i)}}/{{\partial x(\nu_j)}}$. The spectral gradients for the convolution, ReLU, and bilinear up-sampling operations are denoted as Conv, ReLU, and Upsample, respectively.}
\label{Fig:spec_grad}
\end{figure}
We can see from Fig.~\ref{Fig:spec_grad} that $\frac{{\partial y(\nu_i)}}{{\partial x(\nu_j)}}$ for convolution, ReLU, and bilinear up-sampling are delta functions for all $\nu_i$, which is consistent with our discussion of Eq.~\ref{Eq:convlayer_spec_grad}.
Moreover, as is clear from Fig.~\ref{Fig:spec_grad_deeplab} and Fig.~\ref{Fig:spec_grad_dan}, the spectral gradient $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ that accumulates all the operations in the decoder module can also be well approximated by a delta function. These results verify the approximation of Eq.~\ref{Eq:ce_spec_grad} and agree with the discussion in section~\ref{ssec:method_spec_grad} that the feature component at frequency $\nu_i$ only affects ${\mathcal{L}}_{ce}(\nu)$ at frequencies near $\nu_i$, which enables us to consider the removal of redundant high-frequency features in section~\ref{ssec:exp_feat_trunc}.
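The delta-function structure of $\partial y(\nu_i)/\partial x(\nu_j)$ for a convolution can be reproduced in a few lines: by the circular-convolution theorem, perturbing the input spectrum at a single frequency changes the output spectrum only at that frequency. The sketch below is ours (a naive DFT and a toy kernel), not the networks' actual decoder.

```python
import cmath
import random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circ_conv(x, h):
    N = len(x)
    return [sum(x[(n - m) % N] * h[m] for m in range(N)) for n in range(N)]

N = 16
random.seed(0)
x = [random.random() for _ in range(N)]
h = [0.25, 0.5, 0.25] + [0.0] * (N - 3)   # toy smoothing kernel

nu_j = 3                                   # perturb the input at one frequency
Xp = dft(x)
Xp[nu_j] += 1.0
xp = idft(Xp)

# change of the output spectrum: nonzero only at nu_i = nu_j (a delta function)
dY = [a - b for a, b in zip(dft(circ_conv(xp, h)), dft(circ_conv(x, h)))]
```

The same experiment with a pointwise nonlinearity would show the small off-diagonal leakage visible in the ReLU panels of Fig.~\ref{Fig:spec_grad}.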
\subsection{Application on Feature Truncation}\label{ssec:exp_feat_trunc}
We propose feature truncation as a model-reduction method that reduces the cost of a model without degrading its performance. Feature truncation is performed by removing the high-frequency components of the decoder features $x(\nu)$, guided by the $R(\nu_{max})$ evaluated in section~\ref{ssec:exp_spec_ce_grad:ce}.
We further adopt the Soft Filter Pruning (SFP) method~\cite{he2019asymptotic} and combine it with feature truncation as an example. The experiments are conducted under the following conditions: for SFP, we use pruning rates of 20\%, 40\% and 60\% for the encoder and 20\% for the decoder, since the encoder has far more parameters than the decoder and is potentially over-parameterized.
For the feature truncation applied to the decoder, we down-sample the decoder features from the original size of 129$\times$129 to the efficient LRGs suggested by the analysis in section~\ref{ssec:exp_spec_ce_grad}, \textit{i}.\textit{e}.~ the LRGs with $10 < {\nu}_{max} < 64$. All down-sampling is done via bilinear interpolation. For simplicity, the experiments are only tested on the 65$\times$65 and 33$\times$33 LRGs, which correspond to the band limits ${\nu}_{max}=32$ and ${\nu}_{max}=16$, respectively. The decoder feature sizes are thus [33, 65, 129] in the feature-truncation setup.
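For concreteness, the down-sampling step can be sketched in one dimension; the helper below is an illustrative linear-interpolation resize of ours (the experiments use its 2-D bilinear counterpart from standard libraries).

```python
def resize_linear(x, new_len):
    """Down-sample (or up-sample) a 1-D feature row by linear interpolation,
    aligning the first and last samples (illustrative sketch)."""
    n = len(x)
    if new_len == 1:
        return [x[0]]
    out = []
    for i in range(new_len):
        pos = i * (n - 1) / (new_len - 1)   # position on the source grid
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        w = pos - lo
        out.append(x[lo] * (1 - w) + x[hi] * w)
    return out
```

For example, a 129-sample row is reduced to 65 or 33 samples, mirroring the 129$\times$129 $\rightarrow$ 65$\times$65 / 33$\times$33 truncation above.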
The FLOPs of these experiments are summarized in Table~\ref{table:feat_trunc_flops:flops}, where the "Baseline" model denotes the model without SFP. Clearly, feature truncation effectively reduces the FLOPs under the various SFP setups.
We further evaluate the relative FLOPs-drop, \textit{i}.\textit{e}.~ the relative reduction in FLOPs compared to the model with the same SFP setup and the original feature size of 129, in Table~\ref{table:feat_trunc_flops:flops_drop}. For example, the relative FLOPs-drop of the setup with a 60\% encoder pruning rate and feature size 33 based on DeepLab v3+ is $1-\frac{29}{59}\approx50.5\%$.
As the features are truncated from 129 to 65, the relative FLOPs-drops are 23.3\% and 7.9\% for the "Baseline" models of DeepLab v3+ and DAN, respectively. As the encoder pruning rate reaches 60\%, the relative FLOPs-drop even increases to 50.5\% and 24.9\% for the two models (at feature size 33), respectively. Note that the decoders of the two models have 1.3 and 0.4 million parameters, respectively. Apparently, the cost-reduction rate of feature truncation depends on the number of parameters in the decoder: the more parameters in the decoder, the better the efficacy of feature truncation. These results demonstrate effective cost reduction by combining the pruning method (SFP) with feature truncation.
\begin{table}[h]
\centering
\begin{subtable}[]{0.71\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cccc}
\hline \multicolumn{4}{c}{DeepLab v3+} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 139 & 107 & 98 \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 100 & 76 & 70 \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 77 & 53 & 47 \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 59 & 35 & 29 \\ \hline
\multicolumn{4}{c}{DAN} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 107 & 98 & 96 \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 78 & 70 & 69 \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 55 & 47 & 46 \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 37 & 29 & 28
\end{tabular}
}
\caption{FLOPs for the models with SFP and feature truncation, in units of $10^9$ FLOPs.}
\label{table:feat_trunc_flops:flops}
\end{subtable}
\begin{subtable}[]{0.83\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cccc}
\hline \multicolumn{4}{c}{DeepLab v3+} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 0.0\% & 23.3\% & 29.2\% \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 0.0\% & 23.7\% & 29.8\% \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 0.0\% & 30.9\% & 38.7\% \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 0.0\% & 40.3\% & 50.5\% \\ \hline
\multicolumn{4}{c}{DAN} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 0.0\% & 7.9\% & 9.8\% \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 0.0\% & 9.4\% & 11.7\% \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 0.0\% & 13.4\% & 16.7\% \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 0.0\% & 19.9\% & 24.9\%
\end{tabular}
}
\caption{Relative FLOPs-drop caused by feature truncation.}
\label{table:feat_trunc_flops:flops_drop}
\end{subtable}
\caption{FLOPs reduction.}
\label{table:feat_trunc_flops}
\end{table}
\begin{figure}[h]
\captionsetup[subfigure]{}
\centering
\begin{subfigure}[ht]{1.0\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DeepLab_IouFlops.png}
\caption{DeepLab v3+}
\label{Fig:iou_flops_deeplab}
\end{subfigure}
\begin{subfigure}[ht]{1.0\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DAN_IouFlops.png}
\caption{DAN}
\label{Fig:iou_flops_dan}
\end{subfigure}
\caption{
IoU vs. FLOPs for DeepLab v3+ and DAN with various SFP pruning rates and feature sizes (\textit{i}.\textit{e}.~ 33, 65, 129) for feature truncation.}
\label{Fig:iou_flops}
\end{figure}
\begin{figure}[h]
\captionsetup[subfigure]{}
\centering
\begin{subfigure}[]{1.0\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DeepLab.png}
\caption{DeepLab v3+}
\label{Fig:flops_per_iou_deeplab}
\end{subfigure}
\begin{subfigure}[]{1.0\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DAN.png}
\caption{DAN}
\label{Fig:flops_per_iou_dan}
\end{subfigure}
\caption{
The FPI for DeepLab v3+ and DAN with the same setups as in Figure~\ref{Fig:iou_flops}.}
\label{Fig:flops_per_iou}
\end{figure}
On the other hand, such a large cost reduction inevitably degrades model performance. We evaluate DeepLab v3+ and DAN on the PASCAL, DeepGlobe and Cityscapes datasets. Figure~\ref{Fig:iou_flops} illustrates the IoU scores of these models with respect to FLOPs; we provide numeric details in Table~\ref{table:feat_trunc:deeplab} and Table~\ref{table:feat_trunc:dan} of the appendix.
For the DeepLab model, feature truncation with feature size 65 efficiently decreases the FLOPs while keeping the IoU score comparable to the original model on the PASCAL and DeepGlobe datasets. In contrast, the truncation partly fails to preserve the IoU score on the Cityscapes dataset, which is consistent with the analysis of $R({\nu}_{max})$ in Table~\ref{table:spectral_stat}, \textit{i}.\textit{e}.~ the relative loss of CE caused by the LRG; the $R({\nu}_{max})$ for the DeepLab model on the Cityscapes dataset is significantly larger than on the other datasets. We discuss the correlation between the IoU drop and the analysis of $R({\nu}_{max})$ in section~\ref{ssec:feat_detail} of the appendix.
Besides, the truncation with feature size 33 reduces the FLOPs but also leads to an apparent IoU drop, making the overall efficacy unclear. For feature truncation of the DAN model, the overall efficacy is also unclear, since the FLOPs reductions are relatively smaller than those of DeepLab while the resulting IoU drop is non-negligible.
To further estimate the efficiency of feature truncation for these models, accounting simultaneously for cost and performance, we define \textbf{FLOPs per IoU score (FPI)} as FLOPs/mIoU.
The lower the FPI, the more efficient the model, in the sense of minimizing the performance drop while maximizing the inference-cost reduction. The FPIs of these models are illustrated in Fig.~\ref{Fig:flops_per_iou_deeplab} and Fig.~\ref{Fig:flops_per_iou_dan}.
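The FPI metric itself is a one-liner; the sketch below is ours, and the numbers are illustrative placeholders rather than measured values from the tables.

```python
def fpi(flops_e9, miou):
    """FLOPs per IoU score: lower means a better cost/quality trade-off."""
    return flops_e9 / miou

# hypothetical comparison: truncation cuts FLOPs faster than it costs mIoU
baseline = fpi(139, 0.785)   # feature size 129
truncated = fpi(107, 0.780)  # feature size 65, assuming only a small mIoU drop
```

When the mIoU drop is small relative to the FLOPs saved, the truncated model's FPI falls below the baseline's, which is the regime observed on PASCAL and DeepGlobe.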
For the experiments on the PASCAL and DeepGlobe datasets, the FPI decreases as the features are truncated from 129 to either 65 or 33. In comparison, the decrease of FPI on the Cityscapes dataset is apparently smaller than on the other datasets. Moreover, the FPI instead increases as the features are truncated from 129 to 33 in the DAN experiment on the Cityscapes dataset.
These results are consistent with our analysis of $R({\nu}_{max})$ discussed in section~\ref{ssec:exp_spec_ce_grad}. They also indicate that feature truncation effectively reduces the computational cost on the PASCAL and DeepGlobe datasets, with a significant reduction of FPI.
In summary, we conclude that the integration of feature truncation and SFP can efficiently reduce the computational cost when $R({\nu}_{max})$ is small, as demonstrated on the PASCAL and DeepGlobe datasets. On the other hand, it becomes inefficient when $R({\nu}_{max})$ is relatively large, as demonstrated on the Cityscapes dataset.
Our theoretical analysis of $R({\nu}_{max})$ helps estimate the efficacy of feature truncation for neural networks such as DeepLab v3+ and DAN. To our knowledge, this is the first work that provides a theoretical framework for analyzing segmentation performance in the frequency domain.
As mentioned in section~\ref{sec:intro}, existing segmentation networks predict segmentation maps on an LRG to save computational cost. Our framework serves as an analysis tool to estimate the efficient LRG size of segmentation maps, as well as the effective features in decoders, for saving computational cost. Furthermore, the analysis can be generalized to arbitrary features in CNNs.
\subsection{Application on Block-wise annotation}\label{ssec:exp_blk_annot}
In section~\ref{ssec:exp_spec_ce_grad}, we determined the efficient LRGs for segmentation maps by analyzing $R({\nu}_{max})$. In this section, we apply these LRGs to the groundtruth annotations. The resulting block-wise annotation can be considered a weak annotation. We demonstrate that the performance of a semantic segmentation network trained with these block-wise annotations can also be estimated by $R({\nu}_{max})$.
We train DeepLab v3+ and DAN with block-wise annotations at various band limits $\nu_{max}$ (from 8 to 256) and evaluate $R({\nu}_{max})$ based on the original pixel-wise annotation. Examples of the block-wise annotations and the predictions of the two models on the PASCAL, DeepGlobe and Cityscapes datasets are illustrated in Fig.~\ref{Fig:block_annotation}. Note that the block-wise annotation at ${\nu}_{max}=256$ is equivalent to the original pixel-wise groundtruth. The experimental results are summarized in Table~\ref{table:blk_annot}. For each $\nu_{max}$, we evaluate the mIoU score and the mIoU drop. In particular, the mIoU-drop is the reduction rate of mIoU with respect to the band limit ${\nu}_{max}=256$; this drop corresponds to the decrease in IoU score caused by applying the LRG to the annotation.
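A block-wise annotation can be sketched as painting each block of the pixel-wise groundtruth with its majority class. The code below is an illustrative construction of ours; the paper's exact down-sampling procedure may differ.

```python
from collections import Counter

def blockwise_annotation(label, block):
    """Replace every block-by-block tile of a 2-D label map with its majority class."""
    H, W = len(label), len(label[0])
    out = [row[:] for row in label]
    for i in range(0, H, block):
        for j in range(0, W, block):
            tile = [label[y][x]
                    for y in range(i, min(i + block, H))
                    for x in range(j, min(j + block, W))]
            majority = Counter(tile).most_common(1)[0][0]
            for y in range(i, min(i + block, H)):
                for x in range(j, min(j + block, W)):
                    out[y][x] = majority
    return out
```

On a 513-pixel-wide map, larger blocks roughly correspond to smaller band limits $\nu_{max}$, giving coarser (weaker) annotations.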
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.9\columnwidth}
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}[b]{@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}}
\hline
$\nu_{max}$ & 256 & 32 & 16 & 8 \\
\hline
\multicolumn{5}{c}{PASCAL} \\ \hline \includegraphics[width=.15\columnwidth]{figures/pascal/voc_image.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_dan.png}} \\ \hline
%
\multicolumn{5}{c}{DeepGlobe} \\ \hline
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_image.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_dan.png}} \\ \hline
%
\multicolumn{5}{c}{Cityscapes} \\ \hline
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_image.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_dan.png}}
\end{tabular}
}
\end{subfigure}
\caption{Examples of the block-wise annotations and the predictions of DeepLab v3+ and DAN. For each dataset, we present the block-wise groundtruth annotations and the predictions of the two models. The corresponding band limits $\nu_{max}$ are denoted at the top of the annotation columns.}
\label{Fig:block_annotation}
\end{figure}
\begin{table}[h]
\centering
\begin{subtable}[]{0.7\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccc}
\multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & 256 \\ \hline
\multicolumn{5}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{67.4\%} & \multicolumn{1}{c|}{74.2\%} & \multicolumn{1}{c|}{77.1\%} & {78.5\%} \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{52.5\%} & \multicolumn{1}{c|}{53.8\%} & \multicolumn{1}{c|}{54.8\%} & {55.0\%} \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{38.4\%} & \multicolumn{1}{c|}{50.5\%} & \multicolumn{1}{c|}{58.9\%} & {67.8\%} \\ \hline
\multicolumn{5}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{67.6\%} & \multicolumn{1}{c|}{74.0\%} & \multicolumn{1}{c|}{76.5\%} & 77.6\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{50.1\%} & \multicolumn{1}{c|}{52.2\%} & \multicolumn{1}{c|}{53.4\%} & 53.6\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{38.4\%} & \multicolumn{1}{c|}{50.4\%} & \multicolumn{1}{c|}{58.3\%} & 66.4\%
\end{tabular}
}
\caption{mIoU score.}
\label{table:blk_annot:iou}
\end{subtable}
\begin{subtable}[]{0.7\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccc}
\multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & 256 \\ \hline
\multicolumn{5}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{14.1\%} & \multicolumn{1}{c|}{5.4\%} & \multicolumn{1}{c|}{1.8\%} & 0.0\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{4.6\%} & \multicolumn{1}{c|}{2.3\%} & \multicolumn{1}{c|}{0.3\%} & 0.0\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{43.4\%} & \multicolumn{1}{c|}{25.4\%} & \multicolumn{1}{c|}{13.1\%} & 0.0\% \\ \hline
\multicolumn{5}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{12.9\%} & \multicolumn{1}{c|}{4.6\%} & \multicolumn{1}{c|}{1.4\%} & 0.0\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{6.5\%} & \multicolumn{1}{c|}{2.6\%} & \multicolumn{1}{c|}{0.3\%} & 0.0\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{42.1\%} & \multicolumn{1}{c|}{24.2\%} & \multicolumn{1}{c|}{12.2\%} & 0.0\%
\end{tabular}
}
\caption{Relative mIoU-drop.}
\label{table:blk_annot:iou_drop}
\end{subtable}
\caption{Experimental results of using our proposed block-wise annotation for learning semantic segmentation.
}
\label{table:blk_annot}
\end{table}
As the band limit ${\nu}_{max}$ decreases, the mIoU score decreases and a positive mIoU-drop is observed. We also observe a significantly larger mIoU-drop on the Cityscapes dataset than on the PASCAL and DeepGlobe datasets, which is consistent with the trend of $R(\nu_{max})$ in Table~\ref{table:spectral_stat}, where $R(\nu_{max})$ on the Cityscapes dataset is significantly larger than on the other datasets.
Fig.~\ref{Fig:block_annotation_iou_R} illustrates the correlation between the relative mIoU-drop and $R(\nu_{max})$ for all experiments. The positive correlation between the mIoU-drop and $R(\nu_{max})$ agrees with the correlation between CE and the IoU score discussed in section~\ref{ssec:method_spectral_ioU}. Our studies show that the performance of a semantic segmentation network trained with the block-wise annotation strongly correlates with $R(\nu_{max})$. As a result, one can estimate this performance by simply evaluating $R(\nu_{max})$, without exhaustively running experiments over all band limits.
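As a concrete check of this correlation, the paired values from Table~\ref{table:spectral_stat} and Table~\ref{table:blk_annot} can be fed to a standard correlation coefficient. A minimal NumPy sketch (our own check, not the paper's evaluation code), using the DeepLab v3+ numbers on Cityscapes:

```python
import numpy as np

# R(nu_max) and relative mIoU-drop for DeepLab v3+ on Cityscapes
# at nu_max = 8, 16, 32, 256 (values taken from the tables above)
r = np.array([1.782, 0.517, 0.065, 0.0])
miou_drop = np.array([0.434, 0.254, 0.131, 0.0])

# Pearson correlation between the theoretical estimate and the observed drop
corr = np.corrcoef(r, miou_drop)[0, 1]
assert corr > 0.9  # strongly positive, consistent with the reported trend
```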
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{figures/block_annotation_iou_R.png}
\caption{The correlation between the relative mIoU-drop and $R(\nu_{max})$ for the block-wise annotation.
}
\label{Fig:block_annotation_iou_R}
\end{figure}
In summary, the proposed spectral analysis enables an advanced analysis of weak annotation in the frequency domain. Our studies reveal the correlation between segmentation performance and the LRGs of segmentation maps. Based on our analysis and experiments, the block-wise annotation can be considered a weak annotation when the block size is chosen according to the LRG size of the segmentation maps. Notably, these LRGs correspond to the coarse contours of instances in the segmentation maps, which are heavily exploited by existing weak annotations. Our spectral analysis thus provides a theoretical justification for such weak annotations. Further research should investigate the spectral analysis of existing weak annotations~\cite{papandreou2015weakly,khoreva2017simple}.
\section{Acknowledgement}\label{sec:acknow}
The authors acknowledge the support of the Ministry of Science and Technology of Taiwan (MOST110-2115-M-A49-003-MY2).
\section{Conclusion}\label{sec:conclusion}
Our proposed spectral analysis for semantic segmentation networks relates CE, the IoU score, and gradient back-propagation from a spectral point of view.
We first explicitly decompose CE and demonstrate that CE is mainly contributed by the low-frequency components of the segmentation maps, which interact with the CNN features at the same frequencies. Furthermore, we propose $R(\nu_{max})$ to estimate the sampling efficiency of the low-resolution grid for segmentation maps.
We test our theory on two applications: feature truncation and block-wise annotation. Our results show that combining feature truncation with network pruning can save computational cost significantly with a small accuracy loss. In addition, the block-wise annotation can potentially save even more labeling cost, since a network trained with the block-wise annotation on an efficient low-resolution grid performs close to the original network. The results of our experiments agree with our theoretical predictions based on $R(\nu_{max})$.
Lastly, despite the theoretical analysis and validation in this work, it remains unclear how to determine the efficient band limit $\nu_{max}$ of the low-resolution grid for various datasets. Estimating $\nu_{max}$ from the spectrum of the groundtruth annotation is of future interest.
\section{Validation and Applications}\label{sec:experiments}
This section validates the spectral analysis of section~\ref{sec:method} and proposes two applications: feature truncation and block-wise annotation.
In section~\ref{ssec:exp_spec_ce_grad}, we validate the spectral analysis of section~\ref{sec:method}, including the frequency components of CE in Eq.~\ref{Eq:spec_ce_component} and the spectral gradient in Eq.~\ref{Eq:ce_spec_grad}.
The validation of the frequency components of CE, $\mathcal{L}_{ce}(\nu)$, demonstrates that CE is mainly contributed by the low-frequency components, while the validation of the spectral gradient $\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ shows that it is well approximated by a delta function, as in Eq.~\ref{Eq:ce_spec_grad_ieq0}.
Based on these numeric validations, we identify the efficient low-resolution grids and apply them to the features in CNNs and to the groundtruth annotation. This leads to two applications that down-sample the segmentation maps onto low-resolution grids: (1) feature truncation and (2) block-wise annotation, which are detailed in section~\ref{ssec:exp_feat_trunc} and section~\ref{ssec:exp_blk_annot}, respectively.
\noindent\textbf{Datasets.}
We conduct experiments on the following three semantic segmentation datasets: the PASCAL semantic segmentation benchmark \cite{everingham2015pascal}, the DeepGlobe land-cover classification challenge \cite{demir2018deepglobe}, and the Cityscapes pixel-level semantic labeling task \cite{cordts2016cityscapes} (denoted as PASCAL, DeepGlobe, and Cityscapes, respectively). The PASCAL dataset contains 21 categories, 1464 training images, and 1449 validation images; the dataset is further augmented with the extra annotations from \cite{BharathICCV2011}. The DeepGlobe dataset contains 7 categories and 803 training images, which are split into 701 and 102 images for training and validation, respectively. The Cityscapes dataset contains 19 categories, 2975 training images, and 500 validation images.
\noindent\textbf{Segmentation networks and implementation details.}
In our experiments, we utilize two standard segmentation networks: DeepLab v3+~\cite{chen2018encoder} and Deep Aggregation Net (DAN)~\cite{kuo2018dan}. We adopt ResNet-101~\cite{he2016deep} pre-trained on ImageNet-1k~\cite{russakovsky2015imagenet} as the backbone of these networks. The networks are trained with the following policies. For all datasets, the images are randomly cropped to 513$\times$513 pixels and randomly flipped during training; the training batch size is 8. For the PASCAL dataset, the network is trained with an initial learning rate of 0.0007 for 100 epochs; for the DeepGlobe dataset, with an initial learning rate of 0.007 for 600 epochs; and for the Cityscapes dataset, with an initial learning rate of 0.001 for 200 epochs.
\subsection{Validation of Spectral Analysis}\label{ssec:exp_spec_ce_grad}
\subsubsection{Spectral Decomposition of CE.}\label{ssec:exp_spec_ce_grad:ce}
This section demonstrates that CE is mainly contributed by the low-frequency components, as discussed in section~\ref{ssec:method_spectral_ce}, and investigates the efficacy of the low-resolution prediction grid used by modern networks.
We evaluate $|b(\nu)|$, $|\widehat{y}(\nu)|$, ${\mathcal{L}}_{ce}(\nu)$, and $R({\nu}_{max})$ for the segmentation networks (DeepLab v3+ and DAN) and datasets (PASCAL, DeepGlobe, and Cityscapes); $|b(\nu)|$ and $|\widehat{y}(\nu)|$ are the power spectra of $b(\nu,c)$ and $\widehat{y}(\nu,c)$ averaged over all semantic classes $c$, respectively.
The results are shown in Fig.~\ref{Fig:spectral_model_stat}. To monitor the training progress, the evaluation at both the initial and final stages of network training is additionally shown in Fig.~\ref{Fig:spectral_training_stat}.
Note that the profiles of $|b(\nu)|$ and $|\widehat{y}(\nu)|$ are normalized with respect to their corresponding maximal values for better comparison across datasets.
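The averaged power spectra above can be obtained by radially binning the magnitude of the 2-D DFT. A minimal NumPy sketch (the function name and binning scheme are ours, not the authors' implementation):

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged magnitude spectrum |f(nu)| of a 2-D field."""
    F = np.fft.fftshift(np.fft.fft2(field))
    mag = np.abs(F)
    h, w = field.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices((h, w))
    # integer radial frequency of every DFT bin
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2).astype(int)
    # average |F| over rings of equal radial frequency nu
    sums = np.bincount(r.ravel(), weights=mag.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)  # guard empty rings
    return sums / counts

# normalize to the maximum, as done for the plotted profiles
spec = radial_power_spectrum(np.random.rand(65, 65))
spec = spec / spec.max()
```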
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{figures/model_stat.png}
\caption{The spectral decomposition of CE (${\mathcal{L}}_{ce}(\nu)$), the relative discrepancy of CE ($R({\nu}_{max})$), and the absolute value of spectra ($|b(\nu)|$ and $|\widehat{y}(\nu)|$). The profiles of $|\widehat{y}(\nu)|$, ${\mathcal{L}}_{ce}(\nu)$, and $R({\nu}_{max})$ are evaluated based on DeepLab v3+ and DAN.}
\label{Fig:spectral_model_stat}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{figures/training_stat.png}
\caption{Spectral decomposition of CE for DeepLab v3+ and DAN at both the initial and final training stages. The notation follows Fig.~\ref{Fig:spectral_model_stat}. The suffix $_{init}$ denotes the profiles of the network at the initial stage.}
\label{Fig:spectral_training_stat}
\end{figure}
The results in Fig.~\ref{Fig:spectral_model_stat} indicate that $|\widehat{y}(\nu)|$ is indeed small in the high-frequency region and thus leads to a small ${\mathcal{L}}_{ce}(\nu)$, as discussed above. These results support the claim that CE is mainly contributed by the low-frequency components. On the other hand, the results in Fig.~\ref{Fig:spectral_training_stat} reveal that the low-frequency components of ${\mathcal{L}}_{ce}(\nu)$ decrease noticeably as training progresses, suggesting that the network learns to capture the low-resolution grid more effectively.
\begin{table}[h]
\centering
\caption{Truncated CE and its relative discrepancy with respect to CE under various band limits ${\nu}_{max}$, for the PASCAL, DeepGlobe, and Cityscapes datasets.}
\label{table:spectral_stat}
\resizebox{0.6\columnwidth}{!}{
\begin{tabular}{clccccc}
\multicolumn{1}{c|}{Dataset} & \multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & \multicolumn{1}{c|}{64} & 256 \\ \hline
\multicolumn{7}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{0.184} & \multicolumn{1}{c|}{0.040} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.008} & 0 \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{$R({\nu}_{max})$} & \multicolumn{1}{c|}{0.053} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.007} & \multicolumn{1}{c|}{0.003} & 0 \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{1.782} & \multicolumn{1}{c|}{0.517} & \multicolumn{1}{c|}{0.065} & \multicolumn{1}{c|}{0.014} & 0 \\ \hline
\multicolumn{7}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{0.197} & \multicolumn{1}{c|}{0.045} & \multicolumn{1}{c|}{0.017} & \multicolumn{1}{c|}{0.008} & 0 \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{$R({\nu}_{max})$} & \multicolumn{1}{c|}{0.078} & \multicolumn{1}{c|}{0.025} & \multicolumn{1}{c|}{0.009} & \multicolumn{1}{c|}{0.004} & 0 \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{2.004} & \multicolumn{1}{c|}{0.723} & \multicolumn{1}{c|}{0.153} & \multicolumn{1}{c|}{0.017} & 0
\end{tabular}
}
\end{table}
We further investigate the limitation of the low-resolution grid and the efficient resolution of the features based on the $R({\nu}_{max})$ examined in Fig.~\ref{Fig:spectral_model_stat}. For comparison, we list the numeric values of $R({\nu}_{max})$ in Table~\ref{table:spectral_stat}. Note that ${\mathcal{L}}_{ce}=\widehat{\mathcal{L}}_{ce}(256)$ due to the image size of 513 in our experiments.
As the resolution of the grid increases, $R({\nu}_{max})$ becomes smaller since more high-frequency information is captured. More specifically, $R({\nu}_{max})$ drops dramatically to small values once ${\nu}_{max} > 10$ in all experiments on the trained networks. On the other hand, the segmentation maps predicted by these networks are evaluated on a 129 $\times$ 129 low-resolution grid, which corresponds to ${\nu}_{max} = 64$.
This empirical evidence suggests that low-resolution grids with $10 < {\nu}_{max} < 64$ can still sample the segmentation map efficiently without significant information loss. These efficient low-resolution grids are further validated in our proposed applications, \textit{i}.\textit{e}.~ feature truncation and block-wise annotation, in section~\ref{ssec:exp_feat_trunc} and section~\ref{ssec:exp_blk_annot}, respectively.
Besides, it is apparent from Table~\ref{table:spectral_stat} that $R({\nu}_{max})$ on the Cityscapes dataset is significantly larger than on the PASCAL and DeepGlobe datasets. We thus expect better efficacy of feature truncation and block-wise annotation on the PASCAL and DeepGlobe datasets.
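Given the spectral components ${\mathcal{L}}_{ce}(\nu)$, the relative discrepancy is simply the fraction of CE lost above the band limit. A minimal sketch, with a toy low-frequency-dominated spectrum standing in for the measured ones:

```python
import numpy as np

def relative_discrepancy(l_ce_nu, nu_max):
    """R(nu_max): fraction of the total CE carried by frequencies
    above the band limit nu_max (l_ce_nu[k] = L_ce at frequency k)."""
    total = l_ce_nu.sum()
    kept = l_ce_nu[: nu_max + 1].sum()  # truncated CE up to the band limit
    return (total - kept) / total

# toy spectrum concentrated at low frequencies (nu = 0 .. 256)
l_ce = 1.0 / (1.0 + np.arange(257.0)) ** 2

# R shrinks monotonically with the band limit and vanishes at nu_max = 256
assert relative_discrepancy(l_ce, 16) < relative_discrepancy(l_ce, 8)
assert relative_discrepancy(l_ce, 256) == 0.0
```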
\subsubsection{Spectral Gradient.}\label{ssec:exp_spec_ce_grad:grad}
We now evaluate the gradients introduced in section~\ref{ssec:method_spec_grad}, including $\frac{\partial y(\nu_i)}{\partial x(\nu_j)}$ in Eq.~\ref{Eq:convlayer_spec_grad} and $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ in Eq.~\ref{Eq:ce_spec_grad}. We demonstrate that both $\frac{\partial y(\nu_i)}{\partial x(\nu_j)}$ and $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ can be well approximated by delta functions.
We validate these spectral gradients on the three datasets (PASCAL, DeepGlobe, and Cityscapes) and take the spectra of the ASPP (atrous spatial pyramid pooling) features in DeepLab v3+ and DAN as the example $x(\nu_j)$. The evaluated $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ for DeepLab v3+ and DAN are illustrated in Fig.~\ref{Fig:spec_grad_deeplab} and Fig.~\ref{Fig:spec_grad_dan}, respectively. In addition, we validate $\frac{{\partial y(\nu_i)}}{{\partial x(\nu_j)}}$ for various operations within the decoder modules of these networks, including convolution, ReLU, and bilinear up-sampling, where $x(\nu_j)$ and $y(\nu_i)$ denote the input and output spectra of these operations, respectively. These spectral gradients are illustrated in Fig.~\ref{Fig:spec_grad}.
For all the spectral gradients shown in Fig.~\ref{Fig:spec_grad_deeplab}, Fig.~\ref{Fig:spec_grad_dan}, and Fig.~\ref{Fig:spec_grad}, the frequencies $\nu_i \in \{0, \frac{M}{4}, \frac{M}{2}\}$ are evaluated as examples, where $M$ is the size of the input features. For each $\nu_i$, we evaluate the spectral gradient for all frequencies $\nu_j$ in the region from $0$ to $\frac{M}{2}$. This results in a spectral-gradient profile for each $\nu_i$, shown in the corresponding row. In this experiment, we set the input feature size to $M=33$, which corresponds to the feature size at the bottleneck of DeepLab v3+ and DAN.
\begin{figure}[ht]
\captionsetup[subfigure]{}
\centering
\begin{subfigure}[]{0.45\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figures/spectral_gradient_deeplab.png}
\caption{DeepLab v3+}
\label{Fig:spec_grad_deeplab}
\end{subfigure}
\begin{subfigure}[]{0.45\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{figures/spectral_gradient_dan.png}
\caption{DAN}
\label{Fig:spec_grad_dan}
\end{subfigure}
\caption{The evaluation of the spectral gradient ${{\partial {\mathcal{L}}_{ce}(\nu_i)}}/{{\partial x(\nu_j)}}$ for DeepLab v3+ and DAN. The spectral gradients are evaluated on the PASCAL, DeepGlobe, and Cityscapes datasets.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\columnwidth]{figures/spectral_gradient_unit.png}
\caption{The evaluation of the spectral gradient ${{\partial y(\nu_i)}}/{{\partial x(\nu_j)}}$. The spectral gradients for the convolution, ReLU, and bilinear up-sampling operations are denoted as Conv, ReLU, and Upsample, respectively.}
\label{Fig:spec_grad}
\end{figure}
As Fig.~\ref{Fig:spec_grad} shows, $\frac{{\partial y(\nu_i)}}{{\partial x(\nu_j)}}$ for the convolution, ReLU, and bilinear up-sampling are delta functions for all $\nu_i$, which is consistent with our discussion of Eq.~\ref{Eq:convlayer_spec_grad}.
Moreover, it is clear from Fig.~\ref{Fig:spec_grad_deeplab} and Fig.~\ref{Fig:spec_grad_dan} that the spectral gradient $\frac{{\partial {\mathcal{L}}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$, which accumulates all the operations in the decoder module, is also well approximated by a delta function. This verifies the approximation of Eq.~\ref{Eq:ce_spec_grad}.
These results also agree with the discussion in section~\ref{ssec:method_spec_grad} that the feature component at frequency $\nu_i$ only affects ${\mathcal{L}}_{ce}(\nu)$ at frequencies near $\nu_i$, which enables us to consider the removal of redundant high-frequency features in section~\ref{ssec:exp_feat_trunc}.
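The delta-function structure has an elementary origin: a circular convolution is diagonalized by the DFT, so perturbing one input frequency only moves the output at that same frequency. A toy 1-D check (our own construction, not the paper's evaluation code):

```python
import numpy as np

# For a circular convolution y = h * x, the DFT gives
# y_hat(nu) = h_hat(nu) x_hat(nu), so the spectral Jacobian
# d y_hat(nu_i) / d x_hat(nu_j) = h_hat(nu_i) delta_{ij} is diagonal.
M = 8
rng = np.random.default_rng(0)
h_hat = np.fft.fft(rng.standard_normal(M))

def y_hat(x_hat):
    # spectral response of the convolution layer
    return h_hat * x_hat

# build the Jacobian column by column with unit spectral perturbations
jac = np.zeros((M, M), dtype=complex)
for j in range(M):
    e = np.zeros(M, dtype=complex)
    e[j] = 1.0
    jac[:, j] = y_hat(e)

# off-diagonal entries vanish exactly; the diagonal equals h_hat
assert np.allclose(jac, np.diag(h_hat))
```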
\subsection{Application on Feature Truncation}\label{ssec:exp_feat_trunc}
We propose feature truncation as a model-reduction method that reduces the cost of a model without degrading its performance. Feature truncation is performed by removing the high-frequency components of the decoder features $x(\nu)$, guided by the $R(\nu_{max})$ evaluated in section~\ref{ssec:exp_spec_ce_grad:ce}.
Note that segmentation networks typically have a huge number of parameters in the encoder but few in the decoder. However, the computational cost of the decoder is often comparable to that of the encoder, since the decoder up-samples the features toward the dense segmentation map, resulting in large feature maps to compute.
For example, the encoder of DeepLab v3+ has 95.6 billion FLOPs (floating-point operations) and 60.1 million parameters, whereas its decoder has 43.4 billion FLOPs but only 1.3 million parameters. Similarly, the encoder of DAN has 95.6 billion FLOPs with 60.1 million parameters, while its decoder has 11.1 billion FLOPs with only 0.4 million parameters.
Truncating the features in the decoder is thus expected to effectively reduce the computational cost. Moreover, one can combine feature truncation with a typical network pruning method to reduce the computational cost in two different aspects, \textit{i}.\textit{e}.~ the feature size and the redundant parameters.
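The reason the lightweight decoder still dominates cost is that FLOPs scale with the spatial resolution of the feature map, not the parameter count. A rough multiply-accumulate estimate for a single convolution (the channel counts here are illustrative, not the exact DeepLab v3+ configuration):

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a k x k convolution
    applied to an h x w feature map."""
    return h * w * c_in * c_out * k * k

# the same 3x3, 256-channel conv at the up-sampled decoder
# resolution (129 x 129) vs. the bottleneck (33 x 33)
bottleneck = conv_macs(33, 33, 256, 256, 3)
decoder = conv_macs(129, 129, 256, 256, 3)
assert decoder > 15 * bottleneck  # roughly 15x more work per layer
```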
As an example, we further adopt the Soft Filter Pruning (SFP) method~\cite{he2019asymptotic} and combine it with feature truncation. The experiments use the following conditions. For SFP, we use pruning rates of 20\%, 40\%, and 60\% for the encoder but 20\% for the decoder, since the encoder has far more parameters than the decoder and is potentially over-parameterized.
For the feature truncation applied to the decoder, we down-sample the decoder features from the original size of 129$\times$129 to the efficient low-resolution grids suggested by the analysis in section~\ref{ssec:exp_spec_ce_grad}, \textit{i}.\textit{e}.~ the grids with $10 < {\nu}_{max} < 64$. All down-sampling is done via bilinear interpolation. For simplicity, the experiments are only tested on the 65$\times$65 and 33$\times$33 grids, which correspond to the band limits ${\nu}_{max}=32$ and ${\nu}_{max}=16$, respectively. The decoder feature sizes in the feature-truncation setup are thus 33, 65, and 129.
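A minimal sketch of the down-sampling step in pure NumPy (a hand-rolled bilinear resize standing in for the framework's interpolation routine); for the grids used here, 129 $\to$ 65 and 129 $\to$ 33, the sample points fall exactly on the source grid:

```python
import numpy as np

def bilinear_resize(feat, out_size):
    """Down-sample an (H, W) feature map to (out_size, out_size) bilinearly."""
    h, w = feat.shape
    ys = np.linspace(0, h - 1, out_size)
    xs = np.linspace(0, w - 1, out_size)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # interpolate along x on the two bracketing rows, then along y
    top = feat[np.ix_(y0, x0)] * (1 - wx) + feat[np.ix_(y0, x1)] * wx
    bot = feat[np.ix_(y1, x0)] * (1 - wx) + feat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# truncate a 129x129 decoder feature to the 65x65 grid (nu_max = 32)
feat = np.random.rand(129, 129)
small = bilinear_resize(feat, 65)
```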
The FLOPs of these experiments are summarized in Table~\ref{table:feat_trunc_flops:flops}, where the "Baseline" model denotes the model without SFP. Clearly, feature truncation effectively reduces the FLOPs under all SFP setups.
We further evaluate the relative FLOPs-drop, \textit{i}.\textit{e}.~ the relative reduction rate of FLOPs compared to the model with the same SFP setup and the original feature size of 129, in Table~\ref{table:feat_trunc_flops:flops_drop}. For example, the relative FLOPs-drop for the setup with a 60\% encoder pruning rate and feature size 33 on DeepLab v3+ is $1-\frac{29}{59}\approx 50.5\%$ (the FLOPs in the table are rounded).
As the feature size is truncated from 129 to 65, the relative FLOPs-drop is 23.3\% and 7.9\% for the "Baseline" models of DeepLab v3+ and DAN, respectively. As the encoder pruning rate increases to 60\%, the relative FLOPs-drop rises to 50.5\% and 24.9\% for the two models, respectively. Recall that the decoders of the two models have 1.3 and 0.4 million parameters, respectively. These results suggest that the cost-reduction rate achieved by feature truncation depends on the number of parameters in the decoder: the more parameters in the decoder, the better the efficacy of feature truncation. Overall, these results demonstrate an effective cost reduction by combining the pruning method (SFP) with feature truncation.
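The relative FLOPs-drop is a one-line computation on the table entries (FLOPs in units of $10^9$; the table values are rounded, so the last digit may differ slightly):

```python
def relative_flops_drop(flops_truncated, flops_full):
    """Relative FLOPs reduction of a truncated model w.r.t. the
    same SFP setup at the original feature size 129."""
    return 1.0 - flops_truncated / flops_full

# DeepLab v3+, 60% encoder pruning: 59 -> 29 (in 10^9 FLOPs, rounded)
drop = relative_flops_drop(29, 59)
assert 0.50 < drop < 0.51
```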
\begin{table}[h]
\centering
\begin{subtable}[]{0.45\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cccc}
\hline \multicolumn{4}{c}{DeepLab v3+} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 139 & 107 & 98 \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 100 & 76 & 70 \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 77 & 53 & 47 \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 59 & 35 & 29 \\ \hline
\multicolumn{4}{c}{DAN} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 107 & 98 & 96 \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 78 & 70 & 69 \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 55 & 47 & 46 \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 37 & 29 & 28
\end{tabular}
}
\caption{FLOPs for the models with SFP and feature truncation, in units of $10^9$ FLOPs.}
\label{table:feat_trunc_flops:flops}
\end{subtable}
\begin{subtable}[]{0.55\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{cccc}
\hline \multicolumn{4}{c}{DeepLab v3+} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 0.0\% & 23.3\% & 29.2\% \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 0.0\% & 23.7\% & 29.8\% \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 0.0\% & 30.9\% & 38.7\% \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 0.0\% & 40.3\% & 50.5\% \\ \hline
\multicolumn{4}{c}{DAN} \\ \hline
SFP setups \textbackslash Feature size & 129 & 65 & 33 \\ \hline
\multicolumn{1}{c|}{Baseline} & 0.0\% & 7.9\% & 9.8\% \\
\multicolumn{1}{c|}{20\% Pruning rate for encoder} & 0.0\% & 9.4\% & 11.7\% \\
\multicolumn{1}{c|}{40\% Pruning rate for encoder} & 0.0\% & 13.4\% & 16.7\% \\
\multicolumn{1}{c|}{60\% Pruning rate for encoder} & 0.0\% & 19.9\% & 24.9\%
\end{tabular}
}
\caption{Relative FLOPs-drop caused by feature truncation.}
\label{table:feat_trunc_flops:flops_drop}
\end{subtable}
\caption{FLOPs reduction.}
\label{table:feat_trunc_flops}
\end{table}
\begin{table*}[h]
\centering
\begin{subtable}[h]{0.9\textwidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Feature & \multicolumn{2}{c|}{PASCAL} & \multicolumn{2}{c|}{DeepGlobe} & \multicolumn{2}{c}{Cityscapes} \\ \cline{2-7}
size & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & relative mIoU-drop \\ \hline
\multicolumn{7}{c}{Baseline} \\ \hline
\multicolumn{1}{c|}{129} & 78.6\% & \multicolumn{1}{c|}{0.0\%} & 53.9\% & \multicolumn{1}{c|}{0.0\%} & 67.8\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 78.1\% & \multicolumn{1}{c|}{0.6\%} & 53.6\% & \multicolumn{1}{c|}{0.6\%} & 62.7\% & 7.4\% \\
\multicolumn{1}{c|}{33} & 75.6\% & \multicolumn{1}{c|}{3.8\%} & 53.1\% & \multicolumn{1}{c|}{1.5\%} & 53.3\% & 21.3\% \\ \hline
\multicolumn{7}{c}{20\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 76.6\% & \multicolumn{1}{c|}{0.0\%} & 53.7\% & \multicolumn{1}{c|}{0.0\%} & 67.2\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 76.0\% & \multicolumn{1}{c|}{0.8\%} & 53.4\% & \multicolumn{1}{c|}{0.7\%} & 62.4\% & 7.1\% \\
\multicolumn{1}{c|}{33} & 73.2\% & \multicolumn{1}{c|}{4.4\%} & 52.7\% & \multicolumn{1}{c|}{2.0\%} & 53.4\% & 20.5\% \\ \hline
\multicolumn{7}{c}{40\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 74.4\% & \multicolumn{1}{c|}{0.0\%} & 53.0\% & \multicolumn{1}{c|}{0.0\%} & 66.1\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 73.9\% & \multicolumn{1}{c|}{0.7\%} & 52.7\% & \multicolumn{1}{c|}{0.6\%} & 61.5\% & 7.0\% \\
\multicolumn{1}{c|}{33} & 72.1\% & \multicolumn{1}{c|}{3.1\%} & 52.0\% & \multicolumn{1}{c|}{1.9\%} & 52.8\% & 20.1\% \\ \hline
\multicolumn{7}{c}{60\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 65.1\% & \multicolumn{1}{c|}{0.0\%} & 50.1\% & \multicolumn{1}{c|}{0.0\%} & 58.6\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 64.7\% & \multicolumn{1}{c|}{0.6\%} & 49.9\% & \multicolumn{1}{c|}{0.4\%} & 54.8\% & 6.5\% \\
\multicolumn{1}{c|}{33} & 63.3\% & \multicolumn{1}{c|}{2.8\%} & 49.4\% & \multicolumn{1}{c|}{1.4\%} & 47.4\% & 19.2\%
\end{tabular}
}
\caption{DeepLab v3+}
\label{table:feat_trunc:deeplab}
\end{subtable}
\begin{subtable}[h]{0.9\textwidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Feature & \multicolumn{2}{c|}{PASCAL} & \multicolumn{2}{c|}{DeepGlobe} & \multicolumn{2}{c}{Cityscapes} \\ \cline{2-7}
size & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & relative mIoU-drop \\ \hline
\multicolumn{7}{c}{Baseline} \\ \hline
\multicolumn{1}{c|}{129} & 78.7\% & \multicolumn{1}{c|}{0.0\%} & 53.6\% & \multicolumn{1}{c|}{0.0\%} & 66.4\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 77.9\% & \multicolumn{1}{c|}{1.0\%} & 53.3\% & \multicolumn{1}{c|}{0.5\%} & 61.0\% & 8.1\% \\
\multicolumn{1}{c|}{33} & 74.5\% & \multicolumn{1}{c|}{5.4\%} & 52.7\% & \multicolumn{1}{c|}{1.7\%} & 50.0\% & 24.8\% \\ \hline
\multicolumn{7}{c}{20\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 76.8\% & \multicolumn{1}{c|}{0.0\%} & 53.6\% & \multicolumn{1}{c|}{0.0\%} & 65.8\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 75.9\% & \multicolumn{1}{c|}{1.1\%} & 53.2\% & \multicolumn{1}{c|}{0.8\%} & 60.3\% & 8.3\% \\
\multicolumn{1}{c|}{33} & 72.8\% & \multicolumn{1}{c|}{5.2\%} & 52.4\% & \multicolumn{1}{c|}{2.2\%} & 49.0\% & 25.5\% \\ \hline
\multicolumn{7}{c}{40\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 74.6\% & \multicolumn{1}{c|}{0.0\%} & 52.3\% & \multicolumn{1}{c|}{0.0\%} & 65.1\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 73.8\% & \multicolumn{1}{c|}{1.0\%} & 51.9\% & \multicolumn{1}{c|}{0.8\%} & 59.8\% & 8.1\% \\
\multicolumn{1}{c|}{33} & 71.3\% & \multicolumn{1}{c|}{4.4\%} & 50.7\% & \multicolumn{1}{c|}{2.9\%} & 48.9\% & 24.9\% \\ \hline
\multicolumn{7}{c}{60\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 65.6\% & \multicolumn{1}{c|}{0.0\%} & 49.5\% & \multicolumn{1}{c|}{0.0\%} & 57.0\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 65.1\% & \multicolumn{1}{c|}{0.7\%} & 49.4\% & \multicolumn{1}{c|}{0.3\%} & 52.4\% & 8.1\% \\
\multicolumn{1}{c|}{33} & 63.0\% & \multicolumn{1}{c|}{3.9\%} & 48.9\% & \multicolumn{1}{c|}{1.4\%} & 42.8\% & 24.9\%
\end{tabular}
}
\caption{DAN}
\label{table:feat_trunc:dan}
\end{subtable}
\caption{Results for feature truncation and network pruning on DeepLab v3+ and DAN.
We summarize the results of four network-pruning setups: "Baseline" denotes the experiment without SFP, while "X pruning rate for encoder" denotes the experiment with SFP, where X $\in$ (20\%, 40\%, 60\%) is the encoder pruning rate.
For each pruning setup, we further evaluate the results with three feature sizes for feature truncation, \textit{i}.\textit{e}.~ 129, 65, and 33.}
\label{table:feat_trunc}
\end{table*}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{figures/feature_truncation_iou_R.png}
\caption{The correlation between relative mIoU-drop and the $R(\nu_{max})$ for the feature truncation. The results are evaluated based on PASCAL, DeepGlobe and Cityscapes datasets.}
\label{Fig:feature_truncation_iou_R}
\end{figure}
\begin{figure}[h]
\captionsetup[subfigure]{}
\centering
\begin{subfigure}[ht]{0.7\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DeepLab.png}
\caption{DeepLab v3+}
\label{Fig:flops_per_iou_deeplab}
\end{subfigure}
\begin{subfigure}[ht]{0.7\columnwidth}
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_DAN.png}
\caption{DAN}
\label{Fig:flops_per_iou_dan}
\end{subfigure}
\caption{
The FPI for DeepLab v3+ and DAN with various pruning rates of SFP and feature sizes (\textit{i}.\textit{e}.~ 33, 65, 129) for feature truncation.}
\label{Fig:flops_per_iou}
\end{figure}
On the other hand, such a large cost reduction inevitably degrades model performance. Table~\ref{table:feat_trunc:deeplab} and Table~\ref{table:feat_trunc:dan} summarize the performance of DeepLab v3+ and DAN, respectively, evaluated on the PASCAL, DeepGlobe, and Cityscapes datasets. For each dataset, we report the "mIoU" and the "relative mIoU-drop", where "mIoU" is the mean IoU score over all semantic classes, and "relative mIoU-drop" is the relative reduction rate of mIoU with respect to the model with the same SFP setup and feature size 129.
As the tables show, the mIoU decreases as either the pruning rate increases or the feature size decreases. Closer inspection shows that the relative mIoU-drops on the Cityscapes dataset are significantly larger than those on the PASCAL and DeepGlobe datasets. Taking the "Baseline" model of DeepLab v3+ with feature size 65 as an example, the relative mIoU-drop is 0.6\% for both the PASCAL and DeepGlobe datasets, but 7.4\% for the Cityscapes dataset. The same trend holds for the other SFP setups and feature sizes on DeepLab v3+, and similar results are observed for DAN in Table~\ref{table:feat_trunc:dan}.
The significant mIoU-drop on Cityscapes is well explained by the theoretical estimate $R({\nu}_{max})$ in Table~\ref{table:spectral_stat}, which depicts the relative loss of CE and positively correlates with the relative mIoU-drop, as discussed in section~\ref{ssec:method_spectral_ioU}. We further illustrate the correlation between the relative mIoU-drop (cf. Table~\ref{table:feat_trunc}) and $R({\nu}_{max})$ (cf. Table~\ref{table:spectral_stat}) in Fig.~\ref{Fig:feature_truncation_iou_R}, where we plot the data with ${\nu}_{max}=[16,32,256]$. The positive correlation in the figure agrees with our theoretical statement.
To assess the efficiency of these models in a way that accounts for both cost and performance, we define \textbf{FLOPs per IoU score (FPI)} as FLOPs/mIoU.
The lower the FPI, the more efficient the model is at minimizing the performance drop while maximizing the inference cost reduction.
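As a minimal illustration of how FPI trades off the two quantities (the FLOPs and mIoU numbers below are hypothetical, not taken from our tables):

```python
def fpi(flops: float, miou: float) -> float:
    """FLOPs per IoU score: lower FPI means a better cost/performance trade-off."""
    return flops / miou

# Hypothetical numbers for one model at two feature sizes:
fpi_full = fpi(flops=120e9, miou=0.678)   # untruncated, feature size 129
fpi_trunc = fpi(flops=45e9, miou=0.593)   # truncated, feature size 33
# Truncation pays off only if the FLOPs saving outweighs the mIoU drop:
print(fpi_trunc < fpi_full)
```

With these placeholder numbers the truncated model has the lower FPI, i.e. its FLOPs saving outweighs its mIoU loss.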
The FPI of the models in Table~\ref{table:feat_trunc:deeplab} and Table~\ref{table:feat_trunc:dan} is illustrated in Fig.~\ref{Fig:flops_per_iou_deeplab} and Fig.~\ref{Fig:flops_per_iou_dan} for DeepLab v3+ and DAN, respectively.
For the experiments on the PASCAL and DeepGlobe datasets, the FPI decreases as the feature is truncated from 129 to either 65 or 33. In comparison, the decrease in FPI on the Cityscapes dataset is noticeably smaller, and the FPI even increases when the feature is truncated from 129 to 33 for DAN on the Cityscapes dataset.
These results are consistent with our analysis of $R({\nu}_{max})$ in section~\ref{ssec:exp_spec_ce_grad}. Moreover, they indicate that feature truncation effectively reduces the computational cost on the PASCAL and DeepGlobe datasets, with a significant reduction of FPI.
In summary, we conclude that the integration of feature truncation and SFP can efficiently reduce the computational cost when $R({\nu}_{max})$ is small, as demonstrated on the PASCAL and DeepGlobe datasets. Conversely, it becomes inefficient when $R({\nu}_{max})$ is relatively large, as demonstrated on the Cityscapes dataset.
Our theoretical analysis of $R({\nu}_{max})$ can help to estimate the efficacy of feature truncation for networks such as DeepLab v3+ and DAN. To our knowledge, this is the first work that provides a theoretical framework for analyzing segmentation performance from the perspective of the frequency domain.
As mentioned in section~\ref{sec:intro}, existing segmentation networks predict the segmentation maps on a low-resolution grid to save computational cost. Our framework serves as an analysis tool for estimating the efficient low-resolution grid size of segmentation maps, as well as the effective features in decoders, for saving computational cost. Furthermore, such analysis can be generalized to arbitrary features in CNNs.
\subsection{Application to Block-wise Annotation}\label{ssec:exp_blk_annot}
In section~\ref{ssec:exp_spec_ce_grad}, we determined the efficient low-resolution grid for the segmentation maps by analyzing $R({\nu}_{max})$. In this section, we apply these low-resolution grids to the groundtruth annotations. The resulting block-wise annotation can be considered a weak annotation. We demonstrate that the performance of a semantic segmentation network trained with these block-wise annotations can also be estimated by $R({\nu}_{max})$.
We train DeepLab v3+ and DAN with block-wise annotations at various band limits $\nu_{max}$ (from 8 to 256) and evaluate $R({\nu}_{max})$ based on the original pixel-wise annotation. Examples of the block-wise annotation and the predictions of the two models on the PASCAL, DeepGlobe and Cityscapes datasets are illustrated in Fig.~\ref{Fig:block_annotation}. Note that the block-wise annotation at ${\nu}_{max}=256$ is equivalent to the original pixel-wise groundtruth. The experimental results are summarized in Table~\ref{table:blk_annot}. For each $\nu_{max}$, we evaluate the mIoU score and the mIoU-drop; in particular, the mIoU-drop is the reduction rate of mIoU with respect to that at band limit ${\nu}_{max}=256$, which corresponds to the decrease in IoU score caused by the low-resolution grid of the annotation.
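For concreteness, a block-wise annotation can be emulated by nearest-neighbour down- and up-sampling of the pixel-wise label map. The sketch below assumes a square $512\times512$ label map, so that a band limit $\nu_{max}$ corresponds to a grid of $2\nu_{max}$ blocks per side and $\nu_{max}$ equal to half the image side reproduces the pixel-wise groundtruth; the actual preprocessing in our experiments may differ in detail.

```python
import numpy as np

def block_annotation(label, nu_max):
    """Keep only the information representable on a grid of 2*nu_max
    blocks per side; nu_max == N/2 reproduces the pixel-wise label."""
    N = label.shape[0]
    block = N // (2 * nu_max)          # block size in pixels
    # Nearest-neighbour downsample: take one label per block ...
    coarse = label[block // 2::block, block // 2::block]
    # ... then repeat each coarse label over its whole block.
    return np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)

label = np.random.randint(0, 21, (512, 512))   # e.g. 21 PASCAL classes
weak = block_annotation(label, nu_max=32)      # 64x64 grid of 8x8-px blocks
```

Each $8\times8$ block of `weak` carries a single class label, emulating the coarse contours of a weak annotation.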
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.6\columnwidth}
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}[b]{@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}}
\hline
$\nu_{max}$ & 256 & 32 & 16 & 8 \\
\hline
\multicolumn{5}{c}{PASCAL} \\ \hline \includegraphics[width=.15\columnwidth]{figures/pascal/voc_image.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/pascal/voc_lr32_dan.png}} \\ \hline
%
\multicolumn{5}{c}{DeepGlobe} \\ \hline
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_image.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/deepglobe/dg_lr32_dan.png}} \\ \hline
%
\multicolumn{5}{c}{Cityscapes} \\ \hline
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_image.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_label.png} &
\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_label.png} \\
\centered{DeepLab v3+} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_deeplab.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_deeplab.png}} \\
\centered{DAN} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr1_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr8_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr16_dan.png}} &
\centered{\includegraphics[width=.15\columnwidth]{figures/cityscapes/city_lr32_dan.png}}
\end{tabular}
}
\end{subfigure}
\caption{Examples of the block-wise annotation and the predictions of DeepLab v3+ and DAN. For each dataset, the block-wise groundtruth annotations and the predictions of DeepLab v3+ and DAN are shown. The corresponding band limits $\nu_{max}$ are denoted at the top of each column.}
\label{Fig:block_annotation}
\end{figure}
\begin{table}[h]
\centering
\begin{subtable}[]{0.5\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccc}
\multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & 256 \\ \hline
\multicolumn{5}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{68.2\%} & \multicolumn{1}{c|}{74.6\%} & \multicolumn{1}{c|}{77.5\%} & 78.6\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{51.4\%} & \multicolumn{1}{c|}{53.5\%} & \multicolumn{1}{c|}{53.6\%} & 53.9\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{38.5\%} & \multicolumn{1}{c|}{50.0\%} & \multicolumn{1}{c|}{59.3\%} & 67.8\% \\ \hline
\multicolumn{5}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{68.7\%} & \multicolumn{1}{c|}{74.6\%} & \multicolumn{1}{c|}{77.7\%} & 78.7\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{50.0\%} & \multicolumn{1}{c|}{52.2\%} & \multicolumn{1}{c|}{53.3\%} & 53.6\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{38.4\%} & \multicolumn{1}{c|}{50.1\%} & \multicolumn{1}{c|}{58.5\%} & 66.4\%
\end{tabular}
}
\caption{mIoU score.}
\label{table:blk_annot:iou}
\end{subtable}
\begin{subtable}[]{0.5\linewidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccc}
\multicolumn{1}{c|}{${\nu}_{max}$} & \multicolumn{1}{c|}{8} & \multicolumn{1}{c|}{16} & \multicolumn{1}{c|}{32} & 256 \\ \hline
\multicolumn{5}{c}{DeepLab v3+} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{13.2\%} & \multicolumn{1}{c|}{5.1\%} & \multicolumn{1}{c|}{1.4\%} & 0.0\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{4.6\%} & \multicolumn{1}{c|}{0.7\%} & \multicolumn{1}{c|}{0.6\%} & 0.0\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{43.2\%} & \multicolumn{1}{c|}{26.3\%} & \multicolumn{1}{c|}{12.5\%} & 0.0\% \\ \hline
\multicolumn{5}{c}{DAN} \\ \hline
\multicolumn{1}{c|}{PASCAL} & \multicolumn{1}{c|}{12.7\%} & \multicolumn{1}{c|}{5.2\%} & \multicolumn{1}{c|}{1.3\%} & 0.0\% \\
\multicolumn{1}{c|}{DeepGlobe} & \multicolumn{1}{c|}{6.7\%} & \multicolumn{1}{c|}{2.6\%} & \multicolumn{1}{c|}{0.7\%} & 0.0\% \\
\multicolumn{1}{c|}{Cityscapes} & \multicolumn{1}{c|}{42.2\%} & \multicolumn{1}{c|}{24.5\%} & \multicolumn{1}{c|}{11.8\%} & 0.0\%
\end{tabular}
}
\caption{relative mIoU-drop.}
\label{table:blk_annot:iou_drop}
\end{subtable}
\caption{Experimental results of using our proposed block-wise annotation for learning semantic segmentation on the PASCAL, DeepGlobe and Cityscapes datasets.}
\label{table:blk_annot}
\end{table}
As the band limit ${\nu}_{max}$ decreases, the mIoU score decreases and a positive mIoU-drop is observed. We again observe a significantly larger mIoU-drop on the Cityscapes dataset than on the PASCAL and DeepGlobe datasets, similar to the trend in FLOPs-drop discussed in the previous section. Moreover, this trend is consistent with that of $R(\nu_{max})$ in Table~\ref{table:spectral_stat}, where $R(\nu_{max})$ is significantly larger on the Cityscapes dataset.
Fig.~\ref{Fig:block_annotation_iou_R} illustrates the correlation between the relative mIoU-drops and $R(\nu_{max})$ for all experiments. The positive correlation between mIoU-drop and $R(\nu_{max})$ again agrees with the correlation between CE and IoU score discussed in section~\ref{ssec:method_spectral_ioU}. Our studies show that the performance of a semantic segmentation network trained with the block-wise annotation strongly correlates with $R(\nu_{max})$. As a result, one can estimate this performance by simply evaluating $R(\nu_{max})$, without exhaustively running experiments over all band limits.
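The correlation itself is straightforward to compute. The sketch below uses the DeepLab v3+ mIoU-drops from Table~\ref{table:blk_annot:iou_drop}, paired with placeholder $R(\nu_{max})$ values for illustration only (the actual values are in Table~\ref{table:spectral_stat}):

```python
import numpy as np

# Relative mIoU-drops (%) of DeepLab v3+ at nu_max = 8, 16, 32
# for PASCAL, DeepGlobe and Cityscapes (from the mIoU-drop table):
drop = np.array([13.2, 5.1, 1.4,
                 4.6, 0.7, 0.6,
                 43.2, 26.3, 12.5])
# Placeholder R(nu_max) values for the same nine settings:
R = np.array([0.12, 0.05, 0.02,
              0.05, 0.01, 0.01,
              0.40, 0.25, 0.12])
r = np.corrcoef(drop, R)[0, 1]  # Pearson correlation coefficient
```

A Pearson coefficient close to 1 corresponds to the positive correlation visible in Fig.~\ref{Fig:block_annotation_iou_R}.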
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{figures/block_annotation_iou_R.png}
\caption{The correlation between the relative IoU-drop and $R(\nu_{max})$ for block-wise annotation, evaluated on the PASCAL, DeepGlobe and Cityscapes datasets.}
\label{Fig:block_annotation_iou_R}
\end{figure}
In summary, the proposed spectral analysis enables the analysis of weak annotations in the frequency domain. Our studies reveal the correlation between the segmentation performance and the low-resolution grid of segmentation maps. Based on our analysis and experiments, the block-wise annotation can be considered a weak annotation when the block size is chosen according to the size of the low-resolution grid of the segmentation maps. Notably, these low-resolution grids correspond to the coarse contours of instances in the segmentation maps, which are widely exploited by existing weak annotations. We emphasize that our aim is to provide a theoretical justification of weak annotations through spectral analysis; applying the spectral analysis to existing weak annotations~\cite{papandreou2015weakly,khoreva2017simple} is left for future work.
\section{Introduction}\label{sec:intro}
Semantic segmentation, which densely assigns a semantic label to each image pixel, is one of the important topics in computer vision. Recently, CNNs based on the encoder-decoder architecture~\cite{long2015fully,chen2018encoder,badrinarayanan2017segnet, zhao2017psp,chen2014semantic,yang8578486,noh7410535,yu2015multi,PENG2020107498,ZHANG2021107885,ORSIC2021107611,ZHANG2021107940} have achieved striking performance on several segmentation benchmarks~\cite{mottaghi2014role,everingham2015pascal,cordts2016cityscapes,zhou2017ade,caesar2018coco}.
Generally, an encoder-decoder architecture consists of an encoder module that gradually reduces the spatial resolution of features for extracting context information and a decoder module that aggregates the information from the encoder and gradually recovers the spatial resolution of the dense segmentation map.
Existing works, \textit{e}.\textit{g}.~ the Fully Convolutional Network (FCN)~\cite{long2015fully}, U-Net~\cite{ronneberger2015u}, and the DeepLab models~\cite{chen2014semantic,chen2017rethinking,chen2018encoder}, utilize the encoder-decoder architecture to resolve the dense map. Usually,
extra modules such as the dense conditional random field (dense CRF)~\cite{chen2014semantic,krahenbuhl2011efficient} and PointRend~\cite{kirillov2019pointrend} can be applied to boost the edge response near object boundaries. On the other hand, to save computational cost~\cite{chen2018encoder,long2015fully}, most of the above-mentioned methods predict the dense segmentation map on a \textbf{low-resolution grid (LRG)}, \textit{e}.\textit{g}.~ $\frac{1}{4}$ or $\frac{1}{8}$ of the original image resolution, which is then up-sampled to the original image resolution. \\
Besides the LRG for prediction, some networks can learn sufficient semantic content from weak annotations, including pixel-level annotations with coarse object contours~\cite{papandreou2015weakly,dai2015boxsup,khoreva2017simple,GUO2021108063,LU2021107924} or annotations derived from image-level labels~\cite{ahn2018learning,jing2019coarse,zhou2019wails,shimoda2019self,sun2019fully,nivaggioli2019weakly}. These works demonstrate that such networks can still achieve accuracy comparable to networks trained with pixel-wise groundtruth annotations.
These results indicate that
\begin{enumerate}
\item The major semantic contents in segmentation maps can be learned by existing segmentation networks from low-resolution annotations during training.
\item Despite the inaccurate object boundaries of the weak annotations in the training set, the networks are able to predict most of the major components of the objects.
\end{enumerate}
In other words, by using a proper LRG and weak annotation (WA), one can save labeling and training costs without losing much prediction accuracy. To further reduce the inference cost, network pruning (NP)~\cite{liu2018rethinking,he2019asymptotic,Molchanov_2019_CVPR,blalock2020state,zhao2019variational,luo2017thinet,karnin1990simple} is usually applied, where redundant network parameters are removed if either their contribution to the output~\cite{Molchanov_2019_CVPR,luo2017thinet,zhao2019variational} or their norm~\cite{he2019asymptotic} is negligible.
Despite the success of LRG, WA and NP for cost saving, it remains theoretically unclear how these approaches affect the accuracy of the trained networks. In this work, we present a theoretical analysis of the influence of the LRG on the network prediction and identify important features through our theoretical results. We aim to determine an efficient LRG that minimizes the performance drop while maximizing the cost reduction of training and annotation.
Note that the LRG is not necessarily limited to the network prediction but can be extended to the features in networks or to the annotation. Our analysis provides a general discussion of the LRG for network predictions, network features (\textit{i}.\textit{e}.~ CNN features) and annotations. Besides, we demonstrate that a more effective cost reduction can be achieved by combining existing pruning methods with feature truncation at inference, based on the chosen efficient LRG. Furthermore, we investigate block-wise annotations associated with the chosen LRG in order to alleviate the heavy cost of pixel-level annotation. \\
Recently, spectral analyses of neural networks fitting uniformly distributed one-dimensional signals have demonstrated that a network tends to learn the low-frequency components of the target signal when regressing uniformly distributed data with various frequencies~\cite{rahaman2018spectral,ronen2019convergence,luo2019theory,yang2019fine}. This tendency is also known as spectral bias~\cite{rahaman2018spectral}.
The spectral bias is also observed when training segmentation networks, where the semantic content is mainly learned through the coarse contours of the WA on the LRG. Hence, we suspect that the WA and the LRG are associated with the low-frequency components of segmentation maps, whereas the noise near object boundaries is associated with their high-frequency components. To verify this speculation on the spectral bias of segmentation networks, we perform a spectral analysis of a semantic segmentation network in section~\ref{sec:method}.
The correlations among the frequency distributions of the ground truth annotation and the network output segmentation, the objective function (\textit{e}.\textit{g}.~ cross-entropy (CE)), the evaluation metric (intersection-over-union (IoU) score), and the resolution of CNN features are evaluated in the frequency domain. This further allows us to estimate the \textbf{sampling efficiency} of CNN features, \textit{i}.\textit{e}.~ the variation of network performance with respect to the band limit of the LRG, and to determine the best size of the low-resolution grid.
In summary, our analysis demonstrates the following key observations:
\begin{itemize}
\item The cross-entropy (CE) can be explicitly decomposed into the summation of frequency components. We find that CE is mainly contributed by the low-frequency component of the ground truth annotation and segmentation maps.
\item Frequency analysis of the IoU score reveals its close relation to the CE. This justifies the CE objective for training the segmentation networks.
\item The correlation between the segmentation logits and the features within CNNs, in the frequency domain, shows that the segmentation logits of a specific frequency are mainly affected by the features at the same frequency.
\item Based on the findings above, the high-frequency components of smooth features are found to be less important: truncating them does not degrade the performance of semantic segmentation networks.
\end{itemize}
\noindent Our findings above contribute to the semantic segmentation networks in the following two objectives:
\begin{enumerate}
\item \textbf{Feature truncation for segmentation networks.} The features in the decoder can be truncated since they are generally assumed to be smooth compared with those in the encoder. This truncation method can be easily integrated with commonly-used pruning approaches~\cite{liu2018rethinking,he2019asymptotic} for further cost reduction. Moreover, one can determine the efficient size of the LRG by validating the sampling efficiency of CNN features via spectral analysis.
\item \textbf{Block-wise annotation.} For semantic segmentation, it is easier to collect a weak annotation that keeps the low-resolution information of the full pixel-wise ground truth annotation. The block-wise annotation can be directly associated with the low-frequency spectral information. We propose a block-wise annotation that emulates existing weak annotations, in which only the coarse contours of the instances in the segmentation map~\cite{papandreou2015weakly,khoreva2017simple} are used, and show that segmentation networks trained with these block-wise annotations remain efficient and accurate.
\end{enumerate}
Applications related to the above objectives are shown and discussed in section~\ref{sec:experiments}. The experimental results are consistent with our analysis.
\section{Proposed Spectral Analysis}\label{sec:method}
As motivated above, we aim to conduct a spectral analysis of segmentation networks and investigate their sampling efficiency.
To clearly depict the sampling efficiency of the networks, we analyze the formalism of the cross-entropy (CE) objective function and the intersection-over-union (IoU) evaluation metric in the frequency domain. We present the analysis of CE in section~\ref{ssec:method_spectral_ce} and relegate the analysis of IoU to section~\ref{ssec:method_spectral_ioU} of the appendix.
Our results show a positive correlation between the IoU score and CE, which justifies the use of CE as the objective function and IoU as the evaluation metric in the network training framework. Moreover, as CE is decomposed into frequency components, we investigate the learning mechanism for each frequency component. We derive the gradient propagation of convolutional layers in the frequency domain and demonstrate the correlation between the segmentation output and the features in CNNs in section~\ref{ssec:method_spec_grad}.
Our results suggest that the high-frequency components of the features and annotations have less influence on the performance of the segmentation networks due to the band limit introduced by the LRG. Finally, we conclude the spectral analysis of segmentation networks in section~\ref{ssec:method_discussion}.\\
\\
\noindent\textbf{Notation}.
The notations in this section are defined as follows. In general, upper-case letters, \textit{e}.\textit{g}.~ $X, Y, Z, B$, denote functions in the spatial domain $t$, while lower-case letters, \textit{e}.\textit{g}.~ $x, y, z, b$, denote the corresponding spectra in the frequency domain $\nu$. For example, the spectrum $y(\nu)=\mathcal{F}(Y(t))$, where $\mathcal{F}$ is the Fourier transform operator. The remaining notation will be defined where it first appears.
\subsection{Spectral Decomposition of Cross-Entropy}\label{ssec:method_spectral_ce}
Let $Y(t,c)$ denote the segmentation logits produced by a semantic segmentation network and $B(t,c)$ denote the groundtruth annotation, in which $c$ and $t$ are the indices of the object class and image pixel, respectively. The commonly-used objective function for learning semantic segmentation, cross-entropy (CE), can be written as
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c) \log \Big( { \frac {e^{Y(t,c)}}{{ \sum _{c'}} e^{Y(t,c')}}}\Big) dt
\\ & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt,
\end{aligned}
\label{Eq:ce}
\end{equation}
where ${Y_{p}}(t)=\log\big({ \sum _{c'}} e^{Y(t,c')}\big)$. Transforming this integral into the frequency domain $\nu$ gives theorem~\ref{theorem:ce}.
\begin{restatable}[Spectral decomposition of Cross-Entropy]{theorem}{ce}
\label{theorem:ce}
Given the segmentation logits $Y$ and the groundtruth annotation $B$, the cross-entropy $\mathcal{L}_{CE}$ can be decomposed as $\mathcal{L}_{CE} = {\sum _{\nu}} \mathcal{L}_{ce}(\nu)$, where $\mathcal{L}_{ce}(\nu)$ is the \textbf{frequency component of CE} and is computed as follows,
\begin{equation}
\begin{aligned}
\mathcal{L}_{ce}(\nu) = { \sum_{c} }{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))
\end{aligned}
\label{Eq:spec_ce_component}
\end{equation}
where $b$, $y$, and $y_{p}$ are the spectra of $B$, $Y$, and $Y_{p}$, respectively.
\end{restatable}
\begin{proof}
Given $Y$ and $B$, the cross-entropy $\mathcal{L}_{CE}$ is
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c) \log \Big( { \frac {e^{Y(t,c)}}{{ \sum _{c'}} e^{Y(t,c')}}}\Big) dt
\\ & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt,
\end{aligned}
\end{equation}
where ${Y_{p}}(t)=\log\big({ \sum _{c'}} e^{Y(t,c')}\big)$.
For each class $c$, writing $Y(t)=Y(t,c)$ and ${B}(t)=B(t,c)$, the integral ${\int} Y(t){B}(t)dt$ can be transformed to the frequency domain $\nu$ as follows (see lemma~\ref{lemma:ft_prod} of the appendix):
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt = {\int} y(\nu)\,{b}( - \nu)\,d\nu
\end{aligned}
\end{equation}
where $y(\nu)$ and $b(\nu)$ are the spectra of the segmentation logits and the groundtruth annotations, respectively. The $\mathcal{L}_{CE}$ in Eq.~\ref{Eq:ce} is hence given by
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt
\\ & = { \sum _{c}} {\int} {b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))d\nu.
\end{aligned}
\label{Eq:spectral_ce_final}
\end{equation}
Discretizing the integral in Eq.~\ref{Eq:spectral_ce_final} yields the decomposition of $\mathcal{L}_{CE}$ over the frequency domain $\nu$ as follows,
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = { \sum _{\nu}} { \sum_{c} }{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))
\\ & = { \sum _{\nu}} \mathcal{L}_{ce}(\nu),
\end{aligned}
\label{Eq:spec_ce_final_discrete}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathcal{L}_{ce}(\nu) = { \sum_{c} }{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c)).
\end{aligned}
\end{equation}
\end{proof}
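Theorem~\ref{theorem:ce} can be verified numerically on a toy 1-D example. The sketch below uses synthetic logits and an orthonormal DFT (so that the discrete analogue of lemma~\ref{lemma:ft_prod} holds exactly) to check that the spatial-domain CE equals the sum of its frequency components:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 4, 64                               # classes, pixels (1-D for brevity)
Y = rng.normal(size=(C, N))                # segmentation logits Y(t, c)
B = np.eye(C)[rng.integers(0, C, N)].T     # one-hot groundtruth B(t, c)
Yp = np.log(np.exp(Y).sum(axis=0))         # Y_p(t) = log sum_c e^{Y(t, c)}

# Spatial domain: L_CE = -sum_c sum_t B(t,c) (Y(t,c) - Y_p(t))
ce_spatial = -(B * (Y - Yp)).sum()

# Frequency domain: L_CE = sum_nu sum_c b(-nu,c) (y_p(nu) - y(nu,c));
# for real signals b(-nu) = conj(b(nu)).
y = np.fft.fft(Y, norm="ortho")
b = np.fft.fft(B, norm="ortho")
yp = np.fft.fft(Yp, norm="ortho")
ce_spectral = (np.conj(b) * (yp - y)).sum().real

assert np.isclose(ce_spatial, ce_spectral)
```

The two quantities agree to numerical precision, confirming the decomposition.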
The contribution of each frequency component to CE can thus be evaluated. In addition, it follows immediately that $\mathcal{L}_{ce}(\nu)$ is small when either ${y_{p}}(\nu) - y(\nu, c)$ or $b(\nu,c)$ is small. For simplicity, let $\widehat{y}(\nu,c) = {y_{p}}(\nu) - y(\nu, c)$. Recall that, as mentioned in section~\ref{sec:intro}, $Y(t,c)$ is generally predicted on a low-resolution grid. Hence, $\widehat{y}(\nu_i,c)$, as well as $\mathcal{L}_{ce}(\nu_i)$, should be small at high frequencies $\nu_i$.
Besides, the magnitude of the high-frequency components of the ground truth annotation is generally small due to the intrinsic smoothness of segmentation maps.\footnote{A segmentation map usually consists of several segments per class. The interior of each segment is flat, and only the regions near segment boundaries contain high-frequency components.}
One can therefore conclude that $\mathcal{L}_{CE}$ is mainly contributed by the low-frequency components. This conclusion is validated in section~\ref{ssec:exp_spec_ce_grad} and leads to the first key observation.
Furthermore, it follows from this observation that the contribution of the high-frequency components of the pixel-wise groundtruth to $\mathcal{L}_{CE}$ is negligible. This also explains why weak annotations, which drop the high-frequency components of the segmentation map, can be an effective alternative to pixel-wise groundtruth annotation. Most studies of weak annotations~\cite{papandreou2015weakly,dai2015boxsup,khoreva2017simple,GUO2021108063,LU2021107924,ahn2018learning,jing2019coarse,zhou2019wails,shimoda2019self,sun2019fully,nivaggioli2019weakly} demonstrate the efficacy of WA solely through experiments on benchmark datasets. Here, our spectral analysis further provides a theoretical foundation for WA.
Besides the analysis of CE, we further analyze the IoU score and correlate it with the analysis of CE in section~\ref{ssec:method_spectral_ioU}. The analysis reveals how the minimization of CE correlates with the maximization of the IoU score. This observation justifies the common learning procedure for semantic segmentation networks, in which the networks are trained with the CE objective function and validated with IoU scores.
This corresponds to the second key observation. Therefore, in our later experiments and analysis, we adopt the decomposition of CE to study the frequency response and take IoU as a reasonable evaluation metric.
\subsection{Spectral Gradient of Convolutional Layers}\label{ssec:method_spec_grad}
In section~\ref{ssec:method_spectral_ce}, we have demonstrated that the CE can be decomposed into the summation of frequency components $\mathcal{L}_{ce}(\nu)$. Here we further derive the gradient propagation of $\mathcal{L}_{ce}(\nu)$ within CNNs in order to reveal how $\mathcal{L}_{ce}(\nu)$ updates the network, especially in the low-frequency regime.
We hereafter refer to the gradient with respect to the input feature $X$ in the frequency domain as the \textbf{spectral gradient}. With this, we can analyze how the low-frequency components of the input feature affect the network performance. For simplicity, we derive the spectral gradient of a convolutional layer, consisting of a convolution and an activation function,
\begin{equation}
\begin{aligned}
Z(t) & = K(t) \otimes X(t)
\\ Y(t) & =\sigma(Z(t)),
\end{aligned}
\label{Eq:conv_layer}
\end{equation}
where $K$ is the kernel, $\sigma$ is the activation function, $X$ is the input feature, and $Y$ is the output of the convolutional layer. We consider the soft-plus activation $\sigma(Z(t)) = \log ( {1 + {e^{Z(t)}}}) $ since it is everywhere differentiable, which makes the analysis easier.
The spectral gradient of a convolutional layer is given below.
\begin{restatable}[The spectral gradient for a convolutional layer]{theorem}{ce_grad}
\label{theorem:ce_grad}
Given the groundtruth annotation $B$, and $X$, $K$ and $Y$ as in Eq.~\ref{Eq:conv_layer}, the spectral gradient of the frequency component $\mathcal{L}_{ce}(\nu)$ of CE is
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)[D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)],
\end{aligned}
\label{Eq:ce_spec_grad}
\end{equation}
and the spectral gradient of the output $y$ is
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\label{Eq:convlayer_spec_grad}
\end{equation}
assuming that $K(t)$ is small and $|X(t)| < 1$;\footnote{These assumptions rely on the fact that the numerical scales of features and kernels are usually limited to a small range for the numerical stability of networks.} here $y(\nu) = {\cal F}(Y(t))$, $b(\nu) = {\cal F}(B(t))$, $k(\nu) = {\cal F}(K(t))$, $x(\nu) = {\cal F}(X(t))$, and $s(\nu) = {\cal F}(S(t))$ is the spectrum of the segmentation output $S(t)$. ${\delta}_{\nu_j}(\nu_i)$ is the Kronecker delta function, hereafter abbreviated as the delta function, and $D_{0}(\nu_i)$ is the Dirac delta function.
\end{restatable}
We relegate the proofs of Eq.~\ref{Eq:convlayer_spec_grad} and Eq.~\ref{Eq:ce_spec_grad} to lemma~\ref{lemma:conv_grad} and lemma~\ref{lemma:ce_grad} of the appendix, respectively.
As the theorem shows, the spectral gradient in Eq.~\ref{Eq:convlayer_spec_grad} can be approximated by a delta function, indicating that $y(\nu_i)$ is affected only by $x(\nu_i)$ at the same frequency. This corresponds to the third key observation.
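The diagonal structure originates from the convolution theorem: before the activation, $z(\nu)=k(\nu)\,x(\nu)$ holds exactly for circular convolution, so $\partial z(\nu_i)/\partial x(\nu_j)=k(\nu_j)\,\delta_{\nu_j}(\nu_i)$; the soft-plus only contributes the approximation in the theorem. This exact part is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
X = 0.1 * rng.normal(size=N)      # small input feature, |X| < 1
K = np.zeros(N)
K[:3] = 0.1 * rng.normal(size=3)  # small 3-tap kernel, zero-padded to length N

# Circular convolution Z(t) = sum_j K(j) X((t - j) mod N), computed directly:
Z = np.array([sum(K[j] * X[(t - j) % N] for j in range(N)) for t in range(N)])

# Convolution theorem: z(nu) = k(nu) x(nu), i.e. frequencies do not mix,
# which is why the spectral gradient dz/dx is diagonal (a delta function).
assert np.allclose(np.fft.fft(Z), np.fft.fft(K) * np.fft.fft(X))
```

Since frequencies do not mix in the linear part, any mixing can only come from the (weak, by assumption) nonlinearity.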
Now, let us further analyze how the variation of the feature $x(\nu_j)$ affects $\mathcal{L}_{ce}(\nu_i)$ based on Eq.~\ref{Eq:ce_spec_grad}. For simplicity, we consider the cases $\nu_i \neq 0$ and $\nu_i = 0$ separately. This reveals the contribution of the feature to the network performance and provides a guideline for removing redundant features.
For $\nu_i \neq 0$, the gradient becomes
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & -\sum _c \frac{1}{2}k(\nu_j,c) \delta_{\nu_j}(\nu_i)b(-\nu_i,c), \text{if~} \nu_j \neq 0,
\end{aligned}
\label{Eq:ce_spec_grad_ineq0}
\end{equation}
which indicates that the CNN feature $x(\nu_j)$ is affected only by $\mathcal{L}_{ce}(\nu_i)$ of the same frequency.
For $\nu_i = 0$, the gradient contains the additional term $\sum _c \frac{1}{2}k(\nu_j,c)D_{0}(\nu_i) s(-\nu_j, c)$.
Recall that the segmentation map $S(t, c)$ is usually predicted upon a low-resolution grid of the original image, as mentioned in section \ref{sec:intro}. $S(t, c)$ should therefore be smooth, so that $s(-\nu_j, c)$ is small when $\nu_j$ is large.
Hence,
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\ll 1, \text{if~} \nu_j \text{~is large} \text{~and~} \nu_j \neq \nu_i = 0.
\end{aligned}
\label{Eq:ce_spec_grad_ieq0}
\end{equation}
It follows from Eq.~\ref{Eq:ce_spec_grad_ineq0} and Eq.~\ref{Eq:ce_spec_grad_ieq0} that the CNN feature $x(\nu_j)$ is affected only by $\mathcal{L}_{ce}(\nu_i)$ of nearby frequency.
Thus, removing the features $x(\nu_j)$ at high frequencies $\nu_j$ does not affect $\mathcal{L}_{ce}(\nu_i)$ at low frequencies $\nu_i$. This gives us the fourth key observation. With this observation, it becomes possible to reduce the feature size of a CNN, as well as its high-frequency components, while keeping the performance. We further provide a numerical validation of this observation in section~\ref{ssec:exp_spec_ce_grad}.
\subsection{Discussion for Spectral Analysis}\label{ssec:method_discussion}
In this section, we summarize the spectral analysis based on the theoretical results of the preceding sections. Recall from section~\ref{sec:intro} that the segmentation map is usually predicted upon a low-resolution grid of the original image. We here aim to determine the efficient grid size that maximizes the cost reduction while minimizing the accuracy loss.
Following the decomposition of CE in Eq.~\ref{Eq:spec_ce_final_discrete}, we define the truncated CE as the frequency components of CE filtered by a low-pass filter, \textit{i}.\textit{e}.~ a low-resolution grid, with band limit ${\nu}_{max}$ as follows:
\begin{equation}
\begin{aligned}
\widehat{\mathcal{L}}_{CE}({\nu}_{max}) = { \sum _{\nu}^{{\nu}_{max}}} \mathcal{L}_{ce}(\nu).
\end{aligned}
\label{Eq:spectral_ce_bandlimit}
\end{equation}
Further, we define
\begin{equation}
R({\nu}_{max}) = |1 - \frac{\widehat{\mathcal{L}}_{CE}(\nu_{max})}{\mathcal{L}_{CE}}|
\label{Eq:r_max}
\end{equation}
as the loss of $\mathcal{L}_{CE}$ due to truncation at band limit $\nu_{max}$. An efficient grid can hence be defined as one for which $R({\nu}_{max})$ is negligible. The derived efficient grids can be applied to either the segmentation outputs or the groundtruth annotation with the same $R({\nu}_{max})$ by Eq.~\ref{Eq:spec_ce_component}. In addition, these grids can further be applied to the features in CNNs to save computational cost, following the fourth key observation discussed in section~\ref{ssec:method_spec_grad}.
In the following section, we apply the efficient grids to the features and the groundtruth annotation and validate the positive correlation between the loss of CE and the loss of IoU score caused by the efficient grids.
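The truncated CE of Eq.~\ref{Eq:spectral_ce_bandlimit} and the truncation loss $R(\nu_{max})$ of Eq.~\ref{Eq:r_max} can be checked numerically. The sketch below uses a 1-D two-class toy problem with the discrete Fourier transform; the signal shapes, the band limits, and the $1/n$ normalization of the discrete inner product are assumptions of this sketch, not part of the paper.

```python
import numpy as np

# Toy 1-D, two-class setup (all shapes and scales are illustrative assumptions).
n = 64
t = np.arange(n)
# Smooth groundtruth annotation B(t,c), summing to 1 at each pixel.
B0 = 1.0 / (1.0 + np.exp(-(t - 16) / 3.0)) / (1.0 + np.exp((t - 48) / 3.0))
B = np.stack([B0, 1.0 - B0], axis=1)

# Smooth logits Y(t,c): low-passed random signals (band limit 6 is assumed).
rng = np.random.default_rng(0)
def lowpass(v, cut):
    f = np.fft.fft(v)
    f[np.abs(np.fft.fftfreq(n, d=1.0 / n)) > cut] = 0.0
    return np.fft.ifft(f).real
Y = np.stack([lowpass(rng.standard_normal(n), 6) for _ in range(2)], axis=1)
Yp = np.log(np.exp(Y).sum(axis=1))           # log-sum-exp over classes

# Frequency components of CE (discrete analogue of Eq. spec_ce_component;
# for real B, b(-nu) = conj(b(nu)); the 1/n factor is the DFT convention).
b = np.fft.fft(B, axis=0)
y = np.fft.fft(Y, axis=0)
yp = np.fft.fft(Yp)
L_nu = (np.conj(b) * (yp[:, None] - y)).sum(axis=1) / n
L_CE = -(B * (Y - Yp[:, None])).sum()        # direct spatial cross-entropy

# Truncation loss R(nu_max) for increasing band limits.
freqs = np.abs(np.fft.fftfreq(n, d=1.0 / n))
R = [abs(1.0 - L_nu[freqs <= c].sum().real / L_CE) for c in range(n // 2 + 1)]
```

With band-limited logits, $R(\nu_{max})$ vanishes once $\nu_{max}$ exceeds the band of the logits, illustrating how an efficient grid can be read off from the truncation loss.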
\section{Appendix}\label{sec:appendix}
\subsection{Spectral Analysis of Intersection-over-Union Score}\label{ssec:method_spectral_ioU}
Given the segmentation logits $Y$ and the groundtruth annotation $B$, the intersection-over-union (IoU) score is typically defined as $\frac{|B \cap S|}{ |B \cup S|}$, where $S$ is the segmentation output $S(t,c)={ \frac {e^{Y(t,c)}}{{ \sum _{c}} e^{Y(t,c)}}}$.
It is common to train the network with CE and evaluate the network performance based on IoU scores. This section aims to analyze the formalism of the IoU score in the frequency domain and shed some light on why IoU scores increase as the CE decreases.
In order to analyze the IoU score in the frequency domain, we extend the above definition to the continuous space as follows:
\begin{equation}
\begin{aligned}
IoU( {S,B} ) & = \frac{{\int B(t)S(t)} \,dt}{{\int B(t) + S(t)\,dt - \int B(t)S(t)\,dt}}
\\ & = \frac{1}{{\frac{{\int B(t) + S(t)\,dt}}{{\int B(t)S(t)\,dt}} - 1}}
\end{aligned}
\label{Eq:iou}
\end{equation}
where $t$ denotes the pixel index. Eq.~\ref{Eq:iou} holds for each object class $c$; we omit $c$ for simplicity. Notably, this definition is equivalent to the original definition of the IoU score for binarized segmentation maps. The components in Eq.~\ref{Eq:iou} can be written as follows (see lemma~\ref{lemma:ft_prod} and lemma~\ref{lemma:ft_sum} of the appendix),
\begin{equation}
\begin{aligned}
\int B(t)S(t)\,dt & = \int s(\nu)b(- \nu)d\nu, \text{~and}~ \\
{\int} B(t)+S(t)dt &= b(0)+s(0),
\end{aligned}
\end{equation}
where $s(\nu)={\cal F}(S(t))$. As a result, the IoU score can be written as
\begin{equation}
\begin{aligned}
& IoU({s,b}) = \frac{1}{{\frac{{ {s(0) + b(0)} }}{{{\int} s(\nu)b(- \nu)d\nu}} - 1}}
\end{aligned}
\label{Eq:spectral_iou}
\end{equation}
and it is composed of two terms: ${s(0) + b(0)}$ and ${\int} s(\nu)b(- \nu)d\nu = \int B(t)S(t)\,dt$.
The IoU score cannot be explicitly decomposed as CE was in Eq.~\ref{Eq:spec_ce_final_discrete}, due to the non-linearity of Eq.~\ref{Eq:spectral_iou}. On the other hand, note that the latter term, \textit{i}.\textit{e}.~ ${\int} s(\nu)b(- \nu)d\nu = \int B(t)S(t)\,dt$, positively correlates with ${\int} B(t){log}(S(t))dt$, the component of CE in Eq.~\ref{Eq:ce}, since the $log$ function is monotonically increasing. Hence, minimizing $\mathcal{L}_{CE}$ maximizes $\int B(t)S(t)\,dt$ and thereby the IoU score.
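A discrete sanity check of Eq.~\ref{Eq:spectral_iou} is straightforward; the toy mask, the soft output, and the $1/n$ DFT normalization below are assumptions of this sketch rather than part of the paper.

```python
import numpy as np

# Discrete 1-D analogue of Eq. (spectral_iou).
n = 64
t = np.arange(n)
B = ((t >= 20) & (t < 44)).astype(float)      # binary groundtruth mask
S = 1.0 / (1.0 + np.exp(-(t - 18) / 2.0)) / \
    (1.0 + np.exp((t - 46) / 2.0))            # soft segmentation output

# Spatial-domain soft IoU.
inter = (B * S).sum()
iou_spatial = 1.0 / ((B.sum() + S.sum()) / inter - 1.0)

# Frequency-domain evaluation: sum_t B*S = (1/n) sum_nu s(nu) b(-nu),
# and sum_t B + sum_t S = b(0) + s(0); b(-nu) = conj(b(nu)) for real B.
b, s = np.fft.fft(B), np.fft.fft(S)
inter_spec = (s * np.conj(b)).sum().real / n
iou_spectral = 1.0 / ((b[0].real + s[0].real) / inter_spec - 1.0)
```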
\subsection{Fourier transform of spatial integral}
\begin{lemma}\label{lemma:ft_prod}
Given two functions $Y(t)$ and $B(t)$ in the spatial domain $t$, the overlapping integral ${\int} Y(t){B}(t)dt$ can be transformed into the frequency domain $\nu$ as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt = {\int} y(z)\,{b}( - z)\,dz
\end{aligned}
\end{equation}
where $y(\nu)=\mathcal{F}(Y(t))$ and $b(\nu)=\mathcal{F}(B(t))$.
\end{lemma}
\begin{proof}
By the convolution theorem, the integral ${\int} Y(t){B}(t)dt$ can be written as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & ={\int} {{\cal F}^{-1}}(y(\nu) \otimes b(\nu))\,dt
\end{aligned}
\label{Eq:spectral_pxprod0}
\end{equation}
where $ \otimes $ denotes the convolution operation, $y(\nu) \otimes b(\nu) = {\int} y(z)\,b(\nu - z)\,dz$, and
${{\mathcal F}^{-1}}(y(\nu)) = {\int} y(\nu) e^{2j\pi\nu t}d\nu$ is the inverse Fourier transform operator. Eq.~\ref{Eq:spectral_pxprod0} can now be written as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & = {\int}{\int} y(\nu) \otimes b(\nu)e^{2j\pi\nu t}\,d\nu \,dt
\\& = {\int} {\int} {\int} e^{2j\pi \nu t}\,y(z)\,{b}(\nu - z)\,dt\,d\nu \,dz
\end{aligned}
\label{Eq:spectral_pxprod1}
\end{equation}
By the orthogonality of Fourier basis, we have ${\int} e^{2j\pi \nu t}dt = D_{0}(\nu)$, where $D_{0}(\nu)$ is the Dirac delta function:
\begin{equation}
\begin{aligned}
D_{0}(\nu) =
\begin{cases}
\infty,& \text{if } \nu=0 \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\label{Eq:delta}
\end{equation}
and its integral property is $\int y(\nu)D_{0}(\nu)d\nu=y(0)$.
Hence, Eq.~\ref{Eq:spectral_pxprod1} is given as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & = {\int} {\int} y(z)\,{b}(\nu - z)\,D_{0}(\nu)\,d\nu \,dz \\
& = {\int} y(z)\,{b}( - z)\,dz
\end{aligned}
\end{equation}
\end{proof}
\begin{lemma}\label{lemma:ft_sum}
Given a function $Y$ in the spatial domain $t$, the integral ${\int} Y(t)dt$ can be transformed into the frequency domain $\nu$ as
\begin{equation}
\begin{aligned}
{\int} Y(t)dt & = y(0);
\end{aligned}
\label{Eq:spectral_pxsum}
\end{equation}
where $y(\nu)=\mathcal{F}(Y(t))$.
\end{lemma}
\begin{proof}
The proof follows a similar process as for lemma~\ref{lemma:ft_prod}:
\begin{equation}
\begin{aligned}
{\int} Y(t)dt & ={\int} {{\cal F}^{-1}}(y(\nu))dt
\\& = {\int}{\int} y(\nu)e^{2j\pi \nu t}d\nu dt
\\& = {\int} y(\nu){D_{0}(\nu)}d\nu
\\& = y(0)
\end{aligned}
\end{equation}
\end{proof}
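Both identities can be verified numerically with the DFT; in the discrete convention the overlap identity carries a $1/n$ factor, which is a property of the DFT normalization assumed in this sketch rather than of the continuous lemmas.

```python
import numpy as np

# Discrete checks of lemma ft_prod and lemma ft_sum.
rng = np.random.default_rng(0)
n = 128
Y = rng.standard_normal(n)
B = rng.standard_normal(n)
y, b = np.fft.fft(Y), np.fft.fft(B)

# lemma ft_sum: the spatial sum equals the zero-frequency coefficient y(0).
sum_spatial = Y.sum()
sum_spectral = y[0].real

# lemma ft_prod: the overlap sum equals the spectral correlation;
# b(-nu) = conj(b(nu)) because B is real.
prod_spatial = (Y * B).sum()
prod_spectral = (y * np.conj(b)).sum().real / n
```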
\subsection{Gradient propagation for a convolution layer}
Consider a convolution layer consisting of a convolutional kernel $K(t)$ and the soft-plus activation function $\sigma(z(t))=\log(1+e^{z(t)})$, where $t$ is the spatial location. Let $X$ denote the input; the output of the convolution layer is written as
\begin{equation}
\begin{aligned}
Z(t) &= K(t) \otimes X(t) \\
Y(t) &=\sigma(Z(t))
\end{aligned}
\end{equation}
\begin{lemma}\label{lemma:conv_grad}
Assuming $K(t)$ is small and $|X(t)| < 1$, the spectral gradient can be approximated as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\end{equation}
where $y(\nu)$, $k(\nu)$, $x(\nu)$, and ${\delta}_{\nu_j}(\nu_i)$ are ${\cal F}(Y(t))$, ${\cal F}(K(t))$, ${\cal F}(X(t))$, and the Kronecker delta function, respectively.
\end{lemma}
\begin{proof}
The spectral gradient of a convolution layer consists of the spectral gradient of the convolution operator and that of the activation function. We derive the two gradients separately and combine them at the end of the derivation.
In the frequency domain, the convolution operator can be written as ${z}(\nu) = k(\nu)x(\nu)$, where $z(\nu), k(\nu)$, and $x(\nu)$ are ${\cal F}(Z(t)), {\cal F}(K(t))$, and ${\cal F}(X(t))$, respectively. Without loss of generality, in the discrete frequency domain, the gradient of $z$ at a specific frequency $\nu_i$ with respect to $x$ at frequency $\nu_j$ is defined as
\begin{equation}
\begin{aligned}
& \frac{\partial z(\nu_i)}{\partial x(\nu_j)} = \frac{\partial }{{\partial x(\nu_j)}}( {k(\nu_i)x(\nu_i)}) = k(\nu_i){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\label{Eq:conv_spec_grad}
\end{equation}
where ${\delta}_{\nu_j}(\nu_i)$ is the Kronecker delta function.
\begin{equation}
\begin{aligned}
{\delta}_{\nu_j}(\nu_i) =
\begin{cases}
1, & \text{if } \nu_i=\nu_j \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\end{equation}
For the soft-plus function, we first expand it as a Taylor series
\begin{equation}
\begin{aligned}
\sigma(Z(t)) &= \log ( {1 + {e^{Z(t)}}})
\\&= \log (2) + \frac{1}{2}Z(t) + \frac{1}{8}{Z(t)^2} + O(Z(t)^4),
\end{aligned}
\end{equation}
in which $Z(t)$ is small since the kernel $K(t)$ is small and $|X(t)| < 1$ by the assumption.
Hence, $O(Z(t)^4)$ becomes negligible.
The Fourier transform of $\sigma(Z(t))$ is thus given as
\begin{equation}
\begin{aligned}
y(\nu) & = {\cal F}(\sigma( Z(t)))
\\ & \simeq {\cal F}( {\log ( 2 ) + \frac{1}{2}Z(t) + \frac{1}{8}{Z(t)^2}} )
\\ & = 2\pi \log(2) \delta_{0}(\nu)+\frac{1}{2} z(\nu)+\frac{1}{8} z(\nu)\otimes z(\nu),
\end{aligned}
\end{equation}
and its spectral gradient is
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial z(\nu_j)} & \simeq \frac{\partial }{\partial{z(\nu_j)}}( 2\pi \log(2) \delta_{0}(\nu_i)
\\ & + \frac{1}{2} z(\nu_i)+\frac{1}{8} z(\nu_i)\otimes z(\nu_i) )
\\ & = \frac{\partial }{\partial z(\nu_j)}(\frac{1}{2}z( {{\nu _i}} ) + \frac{1}{8} \sum_{r=0}^{n-1} z( {{\nu _i - \nu _r}} )z( {{\nu _r}} ))
\\ & = \frac{1}{2}{{\delta}_{\nu _j}(\nu _i)} + \frac{1}{4}z( \nu_i - \nu_j),
\end{aligned}
\label{Eq:act_spec_grad}
\end{equation}
where $r$ is a dummy variable for the convolution and $n$ is the spectrum size of the features. By Eq.~\ref{Eq:conv_spec_grad} and Eq.~\ref{Eq:act_spec_grad}, the spectral gradient of a convolutional layer in Eq.~\ref{Eq:conv_layer} is then written as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} & = \sum _{q=0}^n \frac{{\partial y(\nu_i)}}{{\partial z(\nu_q )}}\frac{{\partial z(\nu_q)}}{{\partial x(\nu_j)}}
\\ & \simeq \sum _{q=0}^n ((\frac{1}{2}{{\delta}_{\nu_q}(\nu_i)} + \frac{1}{4}z(\nu _i - \nu_q ))k( \nu_q){{\delta}_{\nu_j}(\nu_q)})
\\ & = k( \nu_j)[\frac{1}{2}{{\delta}_{\nu_j}(\nu_i)} + \frac{1}{4}z(\nu_i-\nu_j)],
\end{aligned}
\label{Eq:convlayer_spec_grad_supp}
\end{equation}
where $i, j, \text{and~} q$ are the frequency indices.
Since $Z(t)$ is small as argued above, the corresponding spectrum $z(\nu)$ should also be small. We can therefore neglect the second term of Eq.~\ref{Eq:convlayer_spec_grad_supp},~\textit{i}.\textit{e}.~ $\frac{1}{4}z(\nu_i-\nu_j)$, and approximate Eq.~\ref{Eq:convlayer_spec_grad_supp} as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)}
\end{aligned}
\end{equation}
\end{proof}
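Lemma~\ref{lemma:conv_grad} can be checked numerically on a 1-D circular convolution layer: form the exact spatial Jacobian $\partial Y/\partial X$, change basis to the frequency domain, and compare with $\frac{1}{2}k(\nu_j){\delta}_{\nu_j}(\nu_i)$. The sizes and scales below are illustrative assumptions chosen to satisfy the small-kernel and $|X|<1$ conditions of the lemma.

```python
import numpy as np

# Numerical check of the conv-layer spectral gradient on a toy 1-D layer.
rng = np.random.default_rng(0)
n = 16
K = 0.005 * rng.standard_normal(n)         # small kernel (assumption)
X = 0.8 * rng.uniform(-1.0, 1.0, n)        # |X| < 1 (assumption)

idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
Kmat = K[idx]                              # circulant matrix: Z = Kmat @ X
Z = Kmat @ X
sig = 1.0 / (1.0 + np.exp(-Z))             # softplus' = sigmoid, ~1/2 here
J = sig[:, None] * Kmat                    # spatial Jacobian dY(t)/dX(t')

# Change of basis: dy(nu_i)/dx(nu_j)
#   = sum_{t,t'} e^{-2pi i nu_i t/n} J[t,t'] e^{+2pi i nu_j t'/n} / n.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
J_spec = F @ J @ F.conj().T / n
target = np.diag(0.5 * np.fft.fft(K))      # (1/2) k(nu_j) on the diagonal
```

The spectral Jacobian is numerically diagonal with diagonal $\frac{1}{2}k(\nu)$, matching the lemma up to the neglected $\frac{1}{4}z(\nu_i-\nu_j)$ term.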
\subsection{Gradient propagation for the frequency component of CE}
\begin{lemma}\label{lemma:ce_grad}
Consider a convolutional layer that satisfies the assumptions of lemma~\ref{lemma:conv_grad}, and let $x(\nu)$ denote the spectrum of the input feature.
For each semantic class $c$ in segmentation maps, let $k(\nu, c)$ and $y(\nu,c)$ denote the spectrum of kernel and that of the segmentation output, respectively.
The spectral gradient for the frequency component of CE, $\mathcal{L}_{ce}(\nu_i)$, is
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)[D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)],
\end{aligned}
\end{equation}
where $\delta_{\nu_j}(\nu_i)$ is the Kronecker delta function and $D_{0}(\nu_i)$ is the Dirac delta function.
\end{lemma}
\begin{proof}
By lemma~\ref{lemma:conv_grad} and Eq.~\ref{Eq:spec_ce_component}, the spectral gradient $\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ is given as
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
& = \sum_c \sum_q \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_q,c)}}\frac{{\partial y(\nu_q,c)}}{{\partial x(\nu_j)}}
\\ & \simeq \sum_c \sum_q \frac{1}{2} \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_q,c)}} k( \nu_j,c){{\delta}_{\nu_j}(\nu_q)}
\\ & = \sum_c \frac{1}{2} \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_j,c)}} k(\nu_j,c)
\\ & = \sum_c \frac{1}{2}k(\nu_j,c) \frac
{\partial
{~\sum _{\widetilde{c}} b(-\nu_i,\widetilde{c})({y_p}(\nu_i) - y(\nu_i,\widetilde{c}))}}
{\partial y(\nu_j,c)}
\\ & = \sum _c \frac{1}{2}k(\nu_j,c) [(\frac{\partial {y_p}(\nu_i)}{\partial y(\nu_j,c)} \sum \limits_{\widetilde{c}} b(-\nu_i,\widetilde{c})) \\ & - (\sum \limits_{\widetilde{c}} \frac {\partial y(\nu_i,\widetilde{c})} {\partial y(\nu_j,c)}b(-\nu_i,\widetilde{c}))],
\end{aligned}
\label{Eq:ce_spec_grad1}
\end{equation}
in which
\begin{equation}
\begin{aligned}
\frac{\partial y_p(\nu_i)}{\partial y(\nu_j,c)}
& = \int \frac{\partial y_p(\nu_i)}{\partial Y(t,c)}\frac{\partial Y(t,c)}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{\partial {\mathcal{F}}(Y_p(t))}{\partial Y(t,c)}\frac{\partial {\mathcal{F}}^{-1}(y(\nu,c))}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{\partial \int log({\sum _c} e^{Y(t,c)}) e^{-2j\pi\nu_i t}dt}{\partial Y(t,c)}\frac{\partial {\int y(\nu,c) e^{2j\pi\nu t}d\nu}}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{e^{Y(t,c)}}{{\sum _c} e^{Y(t,c)}} e^{-2j\pi(\nu_i-\nu_j) t}dt
\\ & = \int S(t,c) e^{-2j\pi(\nu_i-\nu_j) t}dt
= s(\nu_i-\nu_j, c),
\end{aligned}
\label{Eq:amax_spec_grad}
\end{equation}
where $S(t,c)$ is the segmentation output after performing softmax on logits $Y$ and $s(\nu,c)$ is the spectrum of the segmentation output.
Further, we have
\begin{equation}
\begin{aligned}
\sum _{\widetilde{c}} b(-\nu_i,\widetilde{c}) = D_{0}(\nu_i)
\end{aligned}
\label{Eq:sum_b}
\end{equation}
by the Fourier transform of $\sum _{\widetilde{c}} B(t,\widetilde{c}) = 1$, \textit{i}.\textit{e}.~ the fact that $B$ denotes a probability distribution over semantic classes and should sum to one at each pixel.
Substituting Eq.~\ref{Eq:amax_spec_grad} and Eq.~\ref{Eq:sum_b} into Eq.~\ref{Eq:ce_spec_grad1}, we have the overall spectral gradient as
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)[D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)]
\end{aligned}
\end{equation}
\end{proof}
\section{Introduction}\label{sec:intro}
Semantic segmentation, which densely assigns a semantic label to each image pixel, is one of the important topics in computer vision. Recently, CNNs based on the encoder-decoder architecture~\cite{long2015fully,chen2018encoder,badrinarayanan2017segnet,zhao2017psp,chen2014semantic,yang8578486,noh7410535,yu2015multi} have achieved striking performance on several segmentation benchmarks~\cite{mottaghi2014role,everingham2015pascal,cordts2016cityscapes,zhou2017ade,caesar2018coco}.
Existing works, \textit{e}.\textit{g}.~ the Fully Convolutional Network (FCN)~\cite{long2015fully}, U-Net~\cite{ronneberger2015u}, and the DeepLab models~\cite{chen2014semantic,chen2017rethinking,chen2018encoder}, utilize the encoder-decoder architecture to resolve the dense map.
Generally, an encoder-decoder architecture consists of an encoder module that gradually reduces the spatial resolution of the features to extract context information and a decoder module that aggregates the information from the encoder and recovers the spatial resolution of the dense segmentation map. To save computational cost, these networks predict the dense segmentation map on a \textbf{low-resolution grid (LRG)}, \textit{e}.\textit{g}.~ $\frac{1}{4}$ or $\frac{1}{8}$ of the original image resolution, which is then up-sampled to the original resolution~\cite{chen2018encoder,long2015fully}.
Besides the LRG for prediction, some networks can learn sufficient semantic content from weak annotations, such as annotations with coarse object contours~\cite{papandreou2015weakly,dai2015boxsup,khoreva2017simple} or annotations derived from the image level~\cite{ahn2018learning,jing2019coarse,zhou2019wails,shimoda2019self,sun2019fully,nivaggioli2019weakly}, to save labeling cost. Weak annotations based on coarse contours are effectively equivalent to annotations obtained by applying an LRG to the pixel-wise groundtruth annotation. Despite the labeling inaccuracy near object boundaries induced by the LRG, existing works demonstrate that networks can still achieve accuracy comparable to networks that learn from the pixel-wise groundtruth annotation. Hence, these results indicate the significance of the LRG, for either prediction or annotation, in saving cost with negligible accuracy loss.
Despite the success of using the LRG for prediction and annotation, \textit{i}.\textit{e}.~ weak annotations, for cost saving, it remains theoretically unclear how these approaches affect the accuracy of the trained networks and why the LRG is significant.
In the sense of spectral (Fourier) analysis, the LRG for prediction and annotation is associated with the low-frequency components of segmentation maps, while the labeling inaccuracy near object boundaries is associated with the high-frequency components. We hence suspect that the significance of the LRG corresponds to the tendency of networks to learn low-frequency signals.
In fact, existing works on spectral analysis demonstrate that networks tend to learn the low-frequency components of the target signal when regressing uniformly distributed data of various frequencies~\cite{rahaman2018spectral,ronen2019convergence,luo2019theory,yang2019fine,xu2019frequency}. Such a tendency is known as spectral bias~\cite{rahaman2018spectral}.
However, the spectral bias remains unclear for semantic segmentation, since the distribution of segmentation annotations is not necessarily uniform across frequency regimes. To verify our speculation on the spectral bias of segmentation networks, we perform a spectral analysis on semantic segmentation networks.
In this work, we present a theoretical analysis of the influence of the LRG on the network prediction and identify important features through our theoretical results. More specifically, we investigate the correlations, in the frequency domain, among the frequency distributions of the groundtruth annotation and the output segmentation, the objective function (\textit{e}.\textit{g}.~ cross-entropy (CE)), the evaluation metric (\textit{e}.\textit{g}.~ the intersection-over-union (IoU) score), and the resolution of CNN features. We summarize the following key observations:
\begin{itemize}
\item The cross-entropy (CE) can be explicitly decomposed into a summation of frequency components. We find that CE is mainly contributed by the low-frequency components of the segmentation maps.
\item Spectral analysis of the IoU score reveals its close relation to the CE.
This result justifies that segmentation networks are trained upon CE while evaluated upon IoU.
\item In the frequency domain, the correlation between the segmentation logits and the features within CNNs shows that the segmentation logits of a specific frequency are mainly affected by the features of the same frequency.
\item Based on the findings above, the high-frequency components of smooth features are found to be less important. Truncating these high-frequency components does not degrade the performance of semantic segmentation networks.
\end{itemize}
\noindent Our findings above contribute to semantic segmentation networks through the following two applications:
\begin{enumerate}
\item \textbf{Feature truncation for segmentation networks.} The features in the decoder can be truncated since they are generally assumed to be smooth compared to those in the encoder. This truncation method can be easily integrated with commonly-used pruning approaches~\cite{liu2018rethinking,he2019asymptotic} for further cost reduction. Moreover, one can determine the efficient size of the LRG by spectral analysis.
\item \textbf{Block-wise annotation.} For semantic segmentation, it is easier to collect a weak annotation that keeps the low-resolution information of the full pixel-wise groundtruth annotation. Such block-wise annotation can be directly associated with the low-frequency spectral information. We propose a block-wise annotation that emulates existing weak annotations, where only the coarse contours of the instances in the segmentation map~\cite{papandreou2015weakly,khoreva2017simple} are used, and show that segmentation networks trained via these block-wise annotations remain efficient and accurate.
\end{enumerate}
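As a small illustration of the second application, the sketch below block-averages a 1-D binary mask (the mask shape and block size $m$ are assumed toy parameters) and confirms that the block-wise annotation retains less spectral energy above the band $\nu \le n/(2m)$ than the pixel-wise mask, consistent with associating block-wise annotation with low-frequency information.

```python
import numpy as np

# Block-wise annotation of a 1-D binary mask as a crude low-pass operation.
n, m = 64, 8
t = np.arange(n)
B = np.zeros(n)
B[10:30] = 1.0                              # two toy "objects"
B[40:45] = 1.0

# Average over blocks of size m, then broadcast back to full resolution.
B_block = np.repeat(B.reshape(-1, m).mean(axis=1), m)

def high_freq_fraction(v):
    # Fraction of spectral energy above the block Nyquist band n/(2m).
    spec = np.abs(np.fft.fft(v)) ** 2
    high = np.abs(np.fft.fftfreq(n, d=1.0 / n)) > n / (2 * m)
    return spec[high].sum() / spec.sum()

hf_orig = high_freq_fraction(B)
hf_block = high_freq_fraction(B_block)
```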
We relegate the review of related work, such as semantic segmentation networks (SSNN), network pruning, and spectral bias, to section~\ref{ssec:related work} of the appendix. We present the proposed analysis in section~\ref{sec:method} and the above applications in section~\ref{sec:experiments}. The experimental results are consistent with our analysis.
\section{Proposed Spectral Analysis}\label{sec:method}
To clearly depict the efficacy of the LRG, we analyze the formalism of the cross-entropy (CE) objective function and the intersection-over-union (IoU) evaluation metric in the frequency domain. We present the analysis of CE in section~\ref{ssec:method_spectral_ce} and relegate the analysis of IoU to section~\ref{ssec:method_spectral_ioU} of the appendix.
Our results show that CE can be decomposed into frequency components and positively correlates with the IoU score; this justifies the usage of CE as the objective function and IoU as the evaluation metric in the network training framework.
Moreover, as CE decomposes into frequency components, we investigate the learning mechanism for each frequency component. We deduce the gradient propagation of convolutional layers in the frequency domain and demonstrate the correlation between the segmentation output and the features in CNNs in section~\ref{ssec:method_spec_grad}.
Our results suggest that the high-frequency components of the features and annotations have less influence on the performance of the segmentation networks due to the band limit introduced by the LRG. Finally, we conclude the spectral analysis of segmentation networks in section~\ref{ssec:method_discussion}.\\
\\
\noindent\textbf{Notation}.
The notation in this section is defined as follows. In general, upper-case letters, \textit{e}.\textit{g}.~ $X, Y, Z, B$, denote functions in the spatial domain $t$, while lower-case letters, \textit{e}.\textit{g}.~ $x, y, z, b$, denote the corresponding spectra in the frequency domain $\nu$. For example, the spectrum $y(\nu)=\mathcal{F}(Y(t))$, where $\mathcal{F}$ is the Fourier transform operator. The rest of the notation will be defined as it appears.
\subsection{Spectral Decomposition of Cross-Entropy}\label{ssec:method_spectral_ce}
Let $Y(t,c)$ denote the segmentation logits produced by a semantic segmentation network and $B(t,c)$ denote the groundtruth annotation, in which $c$ and $t$ are the indexes of the object class and the image pixel, respectively. The commonly-used objective function for learning semantic segmentation, cross-entropy (CE), can be written as
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c) {log} ( { \frac {e^{Y(t,c)}}{{ \sum _{c}} e^{Y(t,c)}}}) dt
\\ & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt,
\end{aligned}
\label{Eq:ce}
\end{equation}
where ${Y_{p}}(t)={log}({ \sum _{c}} e^{Y(t,c)})$. Transforming this integral into the frequency domain $\nu$ gives theorem~\ref{theorem:ce}; see the proof of theorem~\ref{theorem:ce} in section~\ref{ssec:proof_spectral_ce} of the appendix.
\begin{restatable}[Spectral decomposition of Cross-Entropy]{theorem}{ce}
\label{theorem:ce}
Given the segmentation logits $Y$ and the groundtruth annotation $B$, the cross-entropy $\mathcal{L}_{CE}$ can be decomposed as $\mathcal{L}_{CE} = {\sum _{\nu}} \mathcal{L}_{ce}(\nu)$, where $\mathcal{L}_{ce}(\nu)$ is the \textbf{frequency component of CE} and can be computed as follows,
\begin{equation}
\begin{aligned}
\mathcal{L}_{ce}(\nu) = {\sum_{c}}{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))
\end{aligned}
\label{Eq:spec_ce_component}
\end{equation}
where $b, y,$ and $y_{p}$ are the spectra of $B, Y,$ and $Y_{p}$, respectively.
\end{restatable}
The contribution of each frequency component to CE can thus be evaluated. In addition, it follows immediately that $\mathcal{L}_{ce}(\nu)$ is small when either ${y_{p}}(\nu) - y(\nu, c)$ or $b(\nu,c)$ is small. For simplicity, let $\widehat{y}(\nu,c) = {y_{p}}(\nu) - y(\nu, c)$. Recall that, as mentioned in section~\ref{sec:intro}, $Y(t,c)$ is generally predicted upon a low-resolution grid. Hence, $\widehat{y}(\nu_i,c)$, as well as $\mathcal{L}_{ce}(\nu_i)$, should be small at high frequencies $\nu_i$.
Besides, the magnitude of the high-frequency components of the groundtruth annotation is generally small due to the intrinsic smoothness of the segmentation map.\footnote{A segmentation map usually consists of several segments for each class. The interior of each segment is flat, while only the regions near the segment boundaries contain high-frequency components.}
One can therefore conclude that $\mathcal{L}_{CE}$ is mainly contributed by the low-frequency components. This conclusion will be validated in section~\ref{ssec:exp_spec_ce_grad} and leads us to the first key observation.
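Theorem~\ref{theorem:ce} and the first key observation can be illustrated numerically. The discrete 1-D sketch below (the toy annotation, the smooth random logits, and the $1/n$ DFT normalization are assumptions of the sketch, not part of the paper) verifies that the frequency components sum to the spatial CE and that the low-frequency band carries almost all of it.

```python
import numpy as np

# Discrete 1-D, two-class illustration of the CE decomposition.
rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
mask = ((t >= 12) & (t < 44)).astype(float)
B = np.stack([mask, 1.0 - mask], axis=1)     # one-hot annotation, sums to 1

def smooth(v, sigma):
    # Gaussian low-pass in the frequency domain (sigma is an assumed scale).
    f = np.fft.fftfreq(n, d=1.0 / n)
    g = np.exp(-2.0 * (np.pi * sigma * f / n) ** 2)
    return np.fft.ifft(np.fft.fft(v) * g).real

Y = np.stack([smooth(rng.standard_normal(n), 2.0) for _ in range(2)], axis=1)
Yp = np.log(np.exp(Y).sum(axis=1))           # log-sum-exp over classes

L_CE = -(B * (Y - Yp[:, None])).sum()        # spatial cross-entropy (Eq. ce)

# Frequency components (Eq. spec_ce_component); b(-nu) = conj(b(nu)) for real B.
b = np.fft.fft(B, axis=0)
y, yp = np.fft.fft(Y, axis=0), np.fft.fft(Yp)
L_nu = (np.conj(b) * (yp[:, None] - y)).sum(axis=1) / n

# Contribution of the low-frequency band |nu| <= n/4.
low = np.abs(np.fft.fftfreq(n, d=1.0 / n)) <= n // 4
low_band_sum = L_nu[low].sum().real
```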
Furthermore, it follows from this observation that the contribution of the high-frequency components of the pixel-wise groundtruth to $\mathcal{L}_{CE}$ is negligible. This also explains why weak annotations (WA), which drop the high-frequency components of the segmentation map, can be an effective alternative to pixel-wise groundtruth annotation. Most studies of weak annotations~\cite{papandreou2015weakly,dai2015boxsup,khoreva2017simple,GUO2021108063,LU2021107924,ahn2018learning,jing2019coarse,zhou2019wails,shimoda2019self,sun2019fully,nivaggioli2019weakly} demonstrate the efficacy of WA solely by experiments on benchmark datasets. Here, our spectral analysis further provides a theoretical foundation for WA.
Besides the analysis of CE, we also analyze the IoU score and correlate it with the analysis of CE in section~\ref{ssec:method_spectral_ioU}. The analysis reveals how the minimization of CE correlates with the maximization of the IoU score. This observation justifies the rationality of the common learning procedure for semantic segmentation networks: the networks are trained with the CE objective function while being validated by IoU scores.
This corresponds to the second key observation. Therefore, later in our experiments and analysis, we adopt the decomposition of CE to study the frequency response and take IoU as a reasonable metric for evaluation.
\subsection{Spectral Gradient of Convolutional Layers}\label{ssec:method_spec_grad}
In section~\ref{ssec:method_spectral_ce}, we demonstrated that CE can be decomposed into the summation of frequency components $\mathcal{L}_{ce}(\nu)$. Here we further deduce the gradient propagation of $\mathcal{L}_{ce}(\nu)$ within CNNs in order to reveal how $\mathcal{L}_{ce}(\nu)$ updates the network, especially in the low-frequency regime.
We hereafter refer to the gradient with respect to the input feature $X$ in the frequency domain as the \textbf{spectral gradient}. With it, we can analyze how the low-frequency components of the input feature affect the network performance. For simplicity, we deduce the spectral gradient for a convolutional layer, consisting of a convolution and an activation function
\begin{equation}
\begin{aligned}
Z(t) & = K(t) \otimes X(t)
\\ Y(t) & =\sigma(Z(t)),
\end{aligned}
\label{Eq:conv_layer}
\end{equation}
where $K$ is the kernel, $\sigma$ is the activation function, $X$ is the input feature, and $Y$ is the output of the convolutional layer. We consider the soft-plus activation $\sigma(Z(t)) = \log ( {1 + {e^{Z(t)}}})$ since it is everywhere differentiable, which makes the analysis easier.
The spectral gradient for a convolutional layer is given as below.
\begin{restatable}[The spectral gradient for a convolutional layer]{theorem}{ce_grad}
\label{theorem:ce_grad}
Given the groundtruth annotation $B$ and $X$, $K$, and $Y$ as in Eq.~\ref{Eq:conv_layer}, the spectral gradient of $\mathcal{L}_{ce}(\nu)$, the frequency component of CE, is
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)[D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)],
\end{aligned}
\label{Eq:ce_spec_grad}
\end{equation}
and the spectral gradient of the output $y$ is
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\label{Eq:convlayer_spec_grad}
\end{equation}
assuming that $K(t)$ is small and $|X(t)| < 1$;\footnote{These assumptions rely on the fact that the numeric scales of features and kernels are usually limited to a small range of values for the numeric stability of networks.} here $y(\nu) = {\cal F}(Y(t))$, $b(\nu) = {\cal F}(B(t))$, $k(\nu) = {\cal F}(K(t))$, $x(\nu) = {\cal F}(X(t))$, and $s(\nu) = {\cal F}(S(t))$ is the spectrum of the segmentation output. ${\delta}_{\nu_j}(\nu_i)$ is the Kronecker delta function, hereafter abbreviated as the delta function, and $D_{0}(\nu_i)$ is the Dirac delta function.
\end{restatable}
We relegate the proof of Eq.~\ref{Eq:convlayer_spec_grad} to lemma~\ref{lemma:conv_grad} of the appendix and that of Eq.~\ref{Eq:ce_spec_grad} to lemma~\ref{lemma:ce_grad} of the appendix.
The theorem shows that the spectral gradient in Eq.~\ref{Eq:convlayer_spec_grad} can be approximated by the delta function, indicating that $y(\nu_i)$ is affected only by $x(\nu_i)$ of the same frequency. This corresponds to the third key observation.
Now, let us further analyze how the variation of the feature $x(\nu_j)$ affects $\mathcal{L}_{ce}(\nu_i)$ based on Eq.~\ref{Eq:ce_spec_grad}. For simplicity, we consider the cases $\nu_i \neq 0$ and $\nu_i = 0$ separately. This reveals the contribution of the feature to network performance and provides a guideline for removing redundant features.
For $\nu_i \neq 0$, the gradient becomes
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & -\sum _c \frac{1}{2}k(\nu_j,c) \delta_{\nu_j}(\nu_i)b(-\nu_i,c), \text{if~} \nu_j \neq 0,
\end{aligned}
\label{Eq:ce_spec_grad_ineq0}
\end{equation}
which indicates that the CNN feature $x(\nu_j)$ is affected only by $\mathcal{L}_{ce}(\nu_i)$ of the same frequency.
For $\nu_i = 0$, the gradient contains the additional term $\sum _c \frac{1}{2}k(\nu_j,c)D_{0}(\nu_i) s(-\nu_j, c)$.
Recall that the segmentation map $s(\nu, c)$ is usually predicted on the low-resolution grid of the original image, as mentioned in section \ref{sec:intro}. $s(\nu, c)$ should therefore be smooth, so that $s(-\nu_j, c)$ is small when $\nu_j$ is large.
Hence,
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\ll 1, \text{if~} \nu_j \text{~is large} \text{~and~} \nu_j \neq \nu_i = 0.
\end{aligned}
\label{Eq:ce_spec_grad_ieq0}
\end{equation}
It follows from Eq.~\ref{Eq:ce_spec_grad_ineq0} and Eq.~\ref{Eq:ce_spec_grad_ieq0} that the CNN feature $x(\nu_j)$ is affected only by $\mathcal{L}_{ce}(\nu_i)$ of nearby frequency.
Thus, removing the features $x(\nu_j)$ at high frequencies $\nu_j$ does not affect $\mathcal{L}_{ce}(\nu_i)$ at low frequencies $\nu_i$. This gives us the fourth key observation. With this observation, it becomes possible to reduce the feature size of a CNN, as well as its high-frequency components, while keeping the performance. We further provide numerical validation of this observation in section~\ref{ssec:exp_spec_ce_grad}.
\subsection{Discussion for Spectral Analysis}\label{ssec:method_discussion}
In this section, we summarize the discussion of the spectral analysis based on the theoretical results of the sections above. Recall that the segmentation map is usually predicted on the LRG of the original image. Here we aim to determine an efficient LRG that reduces the computational cost while preserving accuracy.
Following the decomposition of CE in Eq.~\ref{Eq:spec_ce_final_discrete}, we define the truncated CE as the sum of the frequency components of CE retained by the LRG
\begin{equation}
\begin{aligned}
\widehat{\mathcal{L}}_{CE}({\nu}_{max}) = { \sum _{\nu}^{{\nu}_{max}}} \mathcal{L}_{ce}(\nu),
\end{aligned}
\label{Eq:spectral_ce_bandlimit}
\end{equation}
where ${\nu}_{max}$ is the band limit that corresponds to $2{\nu}_{max}$ LRG size.
Further, we define
\begin{equation}
R({\nu}_{max}) = |1 - \frac{\widehat{\mathcal{L}}_{CE}(\nu_{max})}{\mathcal{L}_{CE}}|
\label{Eq:r_max}
\end{equation}
as the loss of $\mathcal{L}_{CE}$ due to the truncation at band limit $\nu_{max}$. Hence, an efficient LRG can be defined as one for which $R({\nu}_{max})$ is negligible. One can apply the LRG to either the segmentation outputs or the groundtruth annotation to alleviate cost. In addition, the LRG can further be applied to the features in CNNs to save computational cost, following the fourth key observation discussed in section~\ref{ssec:method_spec_grad}. In the following section, we apply the efficient LRG to the features and the groundtruth annotation and validate the positive correlation between the loss of CE and the loss of IoU score caused by the LRG.
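To make the decomposition of Eq.~\ref{Eq:spec_ce_final_discrete} and the truncation loss of Eq.~\ref{Eq:r_max} concrete, the following NumPy sketch uses toy 1-D logits and one-hot annotations (all names are illustrative; the explicit $1/n$ factor comes from NumPy's unnormalized DFT convention). It verifies that the per-frequency components sum back to the spatial CE and then evaluates $R(\nu_{max})$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, C = 32, 3
Y = rng.standard_normal((C, n))                 # toy 1-D logits, one row per class
B = np.eye(C)[rng.integers(0, C, n)].T          # one-hot ground truth, shape (C, n)

Yp = np.log(np.exp(Y).sum(0))                   # log-sum-exp term Y_p(t)
ce_spatial = (B * (Yp - Y)).sum()               # spatial cross-entropy

b, y, yp = np.fft.fft(B, axis=1), np.fft.fft(Y, axis=1), np.fft.fft(Yp)
# Parseval: sum_t f(t)g(t) = (1/n) sum_k fhat(k) ghat(-k); b(-k) = conj(b(k)) for real B
L_ce = (np.conj(b) * (yp - y)).sum(0).real / n  # per-frequency CE components L_ce(nu)
print(abs(L_ce.sum() - ce_spatial))             # ~0: the decomposition is exact

nu_max = 4
idx = np.r_[0:nu_max + 1, n - nu_max:n]         # keep |nu| <= nu_max (band limit)
R = abs(1 - L_ce[idx].sum() / ce_spatial)       # truncation loss R(nu_max)
```

Applying a band limit amounts to dropping entries of `L_ce`; `R` then measures how much of the CE lives outside the retained low-frequency band.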
\subsection{Related Work}\label{ssec:related work}
\subsubsection{Semantic Segmentation Neural Network}\label{ssec:relate:ssnn}
Among semantic segmentation neural networks (SSNNs), Long~{et al}.~ first proposed the Fully Convolutional Network (FCN)~\cite{long2015fully}, which predicts the dense segmentation map using a skip architecture, where features of different granularities in the encoder are up-sampled and integrated in the decoder; it nonetheless struggles to acquire accurate object boundaries in the segmentation map.
A similar idea can be observed in U-Net~\cite{ronneberger2015u}, which further adds dense skip-connections between the corresponding down-sampling and up-sampling modules of the same feature dimensions; as a result, boundary localization is improved but not yet fully resolved.
Beyond skip connections, Chen~{et al}.~ propose the DeepLab models~\cite{chen2014semantic,chen2017rethinking,chen2018encoder}, which integrate the atrous spatial pyramid pooling (ASPP) module; ASPP utilizes dilated convolutional layers composed of filters at multiple sampling rates, capturing contextual information at various spatial resolutions, to boost the edge response at object boundaries. Kuo~{et al}.~ further propose the Deep Aggregation Net (DAN), which utilizes an aggregation decoder that progressively combines encoder features for the final prediction to resolve land-cover segmentation across image scales.
Besides these SSNNs, extra modules such as the dense conditional random field (dense CRF)~\cite{chen2014semantic,krahenbuhl2011efficient} and PointRend~\cite{kirillov2019pointrend} can be applied to further boost the edge response near object boundaries, at the price of extra computational cost.
It is clear that improving the edge response near object boundaries is a main challenge of semantic segmentation, while the cost of SSNNs grows due to the dense decoder features and post-processing modules. This work investigates the spectral properties and computational cost of DeepLab v3+ and DAN. We briefly review the cost of these SSNNs in the next section.
\subsubsection{Network Pruning}\label{ssec:relate:pruning}
Network pruning~\cite{liu2018rethinking,he2019asymptotic,Molchanov_2019_CVPR,blalock2020state,zhao2019variational,luo2017thinet,karnin1990simple,han2015learning} is proposed to reduce the cost of inference by removing the redundant network parameters.
Parameters are deemed redundant when either their contribution to the output~\cite{Molchanov_2019_CVPR,luo2017thinet,zhao2019variational} or their norm~\cite{he2019asymptotic} is negligible.
Note that most of these pruning methods perform hard pruning, \textit{i}.\textit{e}.~ they remove some weight values of filters~\cite{han2015learning} or remove whole filters entirely~\cite{luo2017thinet}, potentially degrading the capacity of networks. In contrast, He~{et al}.~\cite{he2019asymptotic} propose a soft pruning method that dynamically sets redundant parameters to zero while keeping the network capacity. This gives the compressed network a larger optimization space, makes it easier for the model to learn from the training data, and achieves higher accuracy.
Despite the success of these methods in accelerating networks, we point out that existing pruning methods have been investigated solely on image classification~\cite{krizhevsky2009learning,russakovsky2015imagenet} rather than on other tasks, such as semantic segmentation or image generation. We further investigate the application of pruning methods to semantic segmentation in this work. Note that segmentation networks typically have a huge number of parameters in the encoder but a negligible number in the decoder~\cite{chen2018encoder,kuo2018dan,long2015fully}. However, the computational cost of the decoder is often comparable to that of the encoder, since it up-samples the features for the dense segmentation map, which results in large features to compute. For example, the encoder of DeepLab v3+~\cite{chen2018encoder} has 95.6 billion FLOPs (floating-point operations) and 60.1 million parameters, whereas its decoder has 43.4 billion FLOPs but only 1.3 million parameters. Similarly, the encoder of DAN~\cite{kuo2018dan} has 95.6 billion FLOPs with 60.1 million parameters, while the decoder has 11.1 billion FLOPs with only 0.4 million parameters. The proposed feature truncation in section~\ref{ssec:exp_feat_trunc} is thus expected to effectively reduce the computational cost. Moreover, one can combine feature truncation with a typical network pruning method to reduce the computational cost in two different aspects, \textit{i}.\textit{e}.~ the feature size and the redundant parameters.
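A quick back-of-the-envelope computation with the DeepLab v3+ figures quoted above makes the encoder/decoder imbalance explicit:

```python
# DeepLab v3+ figures quoted above: FLOPs in billions, parameters in millions.
enc_flops, dec_flops = 95.6, 43.4
enc_params, dec_params = 60.1, 1.3

flops_share = dec_flops / (enc_flops + dec_flops)      # decoder share of compute
param_share = dec_params / (enc_params + dec_params)   # decoder share of parameters
print(f"decoder: {flops_share:.0%} of FLOPs, {param_share:.1%} of parameters")
```

The decoder accounts for roughly a third of the FLOPs but only about two percent of the parameters, which is why shrinking decoder feature maps (rather than pruning parameters alone) pays off.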
\subsubsection{Spectral Analysis}\label{ssec:relate:spectral}
Existing works on spectral analysis demonstrate that a network tends to learn the low-frequency components of the target signal when regressing uniformly distributed data with various frequencies~\cite{rahaman2018spectral,ronen2019convergence,luo2019theory,yang2019fine,xu2019frequency}. This tendency is known as spectral bias~\cite{rahaman2018spectral} or the Frequency Principle~\cite{xu2019frequency}.
More specifically, these works found that networks tend to learn low-frequency signals in the earlier training stages. Ronen~{et al}.~\cite{ronen2019convergence} further provide a theoretical explanation based on normalized training data that is uniformly distributed on a hypersphere. Under the same assumption on the data distribution, Yang and Salman~\cite{yang2019fine} further investigate the eigenfunctions of the neural tangent kernel (NTK)~\cite{jacot2018neural} and demonstrate that the eigenvalues of the NTK decrease as the frequency increases. This provides further theoretical insight and justifies the spectral bias: the learning of networks converges faster for low-frequency signals.
So far, existing works have investigated the spectral bias solely under normalized training data with a uniform distribution over the frequency regime. This work extends spectral analysis to semantic segmentation, where the target data is non-uniformly distributed. Furthermore, these works mostly study the convergence speed in each frequency regime, while this work focuses on the learned distribution of networks at the final training stage. This helps us estimate the capacity of models, in the sense of frequency, under the spectral bias in semantic segmentation.
\section{Appendix}\label{sec:appendix}
\input{relate}
\subsection{Proof of Theorem~\ref{theorem:ce}}\label{ssec:proof_spectral_ce}
\ce*
\begin{proof}
Given $Y$ and $B$, $\mathcal{L}_{CE}$ the cross-entropy is
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c) {log} ( { \frac {e^{Y(t,c)}}{{ \sum _{c}} e^{Y(t,c)}}}) dt
\\ & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt,
\end{aligned}
\end{equation}
where ${Y_{p}}(t)={log}({ \sum _{c}} e^{Y(t,c)})$.
For all $Y(t)\in Y(t,c)$ and ${B}(t)\in B(t,c)$, the integral ${\int} Y(t){B}(t)dt$ can be transformed to the frequency domain $\nu$ as follows (see lemma~\ref{lemma:ft_prod} of the appendix):
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt = {\int} y(z)\,{b}( - z)\,dz
\end{aligned}
\end{equation}
where $y(\nu)$ and $b(\nu)$ are the spectrum of the segmentation logits and that of the groundtruth annotations, respectively. The $\mathcal{L}_{CE}$ in Eq.~\ref{Eq:ce} is hence given by
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = - { \sum _{c}} {\int} B(t,c)(Y(t,c) - {Y_{p}}(t))dt
\\ & = { \sum _{c}} {\int} {b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))d\nu.
\end{aligned}
\label{Eq:spectral_ce_final}
\end{equation}
The discrete integral of Eq.~\ref{Eq:spectral_ce_final} gives us the decomposition of the $\mathcal{L}_{CE}$ over frequency domain $\nu$ as following
\begin{equation}
\begin{aligned}
\mathcal{L}_{CE} & = { \sum _{\nu}} { \sum_{c} }{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c))
\\ & = { \sum _{\nu}} \mathcal{L}_{ce}(\nu),
\end{aligned}
\label{Eq:spec_ce_final_discrete}
\end{equation}
where
\begin{equation}
\begin{aligned}
\mathcal{L}_{ce}(\nu) = { \sum_{c} }{b}(-\nu, c)({y_{p}}(\nu) - y(\nu, c)).
\end{aligned}
\end{equation}
\end{proof}
\subsection{Spectral Analysis of Intersection-over-Union Score}\label{ssec:method_spectral_ioU}
Given the segmentation logits $Y$ and the groundtruth annotation $B$, the intersection-over-union (IoU) score is typically defined as $\frac{|B \cap S|}{ |B \cup S|}$, where $S$ is the segmentation output $S(t,c)={ \frac {e^{Y(t,c)}}{{ \sum _{c}} e^{Y(t,c)}}}$.
It is common to train the network with CE and evaluate the network performance based on IoU scores. This section aims to analyze the formalism of the IoU score in the frequency domain and shed some light on why the IoU score increases when the CE decreases.
In order to analyze the IoU score in the frequency domain, we extend the above definition to the continuous space as follows:
\begin{equation}
\begin{aligned}
IoU( {S,B} ) & = \frac{{\int B(t)S(t)} \,dt}{{\int B(t) + S(t)\,dt - \int B(t)S(t)\,dt}}
\\ & = \frac{1}{{\frac{{\int B(t) + S(t)\,dt}}{{\int B(t)S(t)\,dt}} - 1}}
\end{aligned}
\label{Eq:iou}
\end{equation}
where $t$ denotes the pixel index. Eq.~\ref{Eq:iou} holds for each object class $c$; we omit $c$ for simplicity. Notably, this definition is equivalent to the original definition of the IoU score for binarized segmentation maps. The components in Eq.~\ref{Eq:iou} can be written as follows (see lemma~\ref{lemma:ft_prod} and lemma~\ref{lemma:ft_sum} of the appendix),
\begin{equation}
\begin{aligned}
\int B(t)S(t)\,dt & = \int s(\nu)b(- \nu)d\nu, \text{~and}~ \\
{\int} B(t)+S(t)dt &= b(0)+s(0),
\end{aligned}
\end{equation}
where $s(\nu)={\cal F}(S(t))$. As a result, IoU score can be written as
\begin{equation}
\begin{aligned}
& IoU({s,b}) = \frac{1}{{\frac{{ {s(0) + b(0)} }}{{{\int} s(\nu)b(- \nu)d\nu}} - 1}}
\end{aligned}
\label{Eq:spectral_iou}
\end{equation}
and it is composed of two terms: ${s(0) + b(0)}$ and ${\int} s(\nu)b(- \nu)d\nu = \int B(t)S(t)\,dt$.
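As a sanity check, the spectral form of the IoU score in Eq.~\ref{Eq:spectral_iou} can be compared against the spatial definition using the discrete Fourier transform. This is a toy 1-D example (the $1/n$ factor reflects NumPy's unnormalized DFT convention):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
B = (rng.uniform(size=n) < 0.4).astype(float)       # binary ground truth
S = rng.uniform(size=n)                             # soft segmentation output in (0, 1)

iou_spatial = (B * S).sum() / (B.sum() + S.sum() - (B * S).sum())

b, s = np.fft.fft(B), np.fft.fft(S)
overlap = (s * np.conj(b)).sum().real / n           # = sum_nu s(nu) b(-nu) = sum_t B(t)S(t)
iou_spectral = 1 / ((s[0].real + b[0].real) / overlap - 1)   # Eq. (spectral_iou)
print(abs(iou_spatial - iou_spectral))              # ~0: the two forms agree
```

The DC terms $s(0)$ and $b(0)$ are just the spatial sums of $S$ and $B$, and the overlap term is the Parseval pairing, so the two computations agree to floating-point precision.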
It can be seen that the IoU score cannot be explicitly decomposed, as was done for CE in Eq.~\ref{Eq:spec_ce_final_discrete}, due to the non-linearity of Eq.~\ref{Eq:spectral_iou}. On the other hand, note that the latter term, \textit{i}.\textit{e}.~ ${\int} s(\nu)b(- \nu)d\nu = \int B(t)S(t)\,dt$, positively correlates with ${\int} B(t){log}(S(t))dt$, the component of CE in Eq.~\ref{Eq:ce}, since the $log$ function is monotonically increasing. Hence, minimizing $\mathcal{L}_{CE}$ maximizes $\int B(t)S(t)\,dt$ and thus the IoU score.
Besides the analysis of the IoU score, we also analyze the boundary IoU~\cite{cheng2021boundary} in the next section. The analysis demonstrates that the boundary IoU is mainly contributed by the low-frequency components of segmentation maps, similar to the analysis of CE. This shows that the boundary IoU is not only sensitive to object boundaries, as highlighted in~\cite{cheng2021boundary}, but is also sensitive to the smooth regions, \textit{i}.\textit{e}.~ low-frequency components, of segmentation maps.
\input{boundary_IoU}
\subsection{Fourier transform of spatial integral}
\begin{lemma}\label{lemma:ft_prod}
Given two functional $Y(t)$ and $B(t)$ in spatial domain $t$, the overlapping integral ${\int} Y(t){B}(t)dt$ can be transformed into the frequency domain $\nu$ as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt = {\int} y(z)\,{b}( - z)\,dz
\end{aligned}
\end{equation}
where $y(\nu)=\mathcal{F}(Y(t))$ and $b=\mathcal{F}(B)$.
\end{lemma}
\begin{proof}
By the convolution lemma, integral ${\int} Y(t){B}(t)dt$ can be written as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & ={\int} {{\cal F}^{-1}}(y(\nu) \otimes b(\nu))\,dt
\end{aligned}
\label{Eq:spectral_pxprod0}
\end{equation}
where $ \otimes $ denotes the convolution operation, $y(\nu) \otimes b(\nu) = {\int} y(z)\,b(\nu - z)\,dz$;
${{\mathcal F}^{-1}}(y(\nu)) = {\int} y(\nu) e^{2j\pi\nu t}d\nu$ is the inverse Fourier transform operator; and $j = \sqrt{-1}$. Eq.~\ref{Eq:spectral_pxprod0} can now be written as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & = {\int}{\int} y(\nu) \otimes b(\nu)e^{2j\pi\nu t}\,d\nu \,dt
\\& = {\int} {\int} {\int} e^{2j\pi \nu t}\,y(z)\,{b}(\nu - z)\,dt\,d\nu \,dz
\end{aligned}
\label{Eq:spectral_pxprod1}
\end{equation}
By the orthogonality of Fourier basis, we have ${\int} e^{2j\pi \nu t}dt = D_{0}(\nu)$, where $D_{0}(\nu)$ is the Dirac delta function:
\begin{equation}
\begin{aligned}
D_{0}(\nu) =
\begin{cases}
\infty,& \text{if } \nu=0 \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\label{Eq:delta}
\end{equation}
and its integral property is $\int y(\nu)D_{0}(\nu)d\nu=y(0)$.
Hence, Eq.~\ref{Eq:spectral_pxprod1} is given as
\begin{equation}
\begin{aligned}
{\int} Y(t){B}(t)\,dt & = {\int} {\int} y(z)\,{b}(\nu - z)\,D_{0}(\nu)\,d\nu \,dz \\
& = {\int} y(z)\,{b}( - z)\,dz
\end{aligned}
\end{equation}
\end{proof}
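The lemma has an immediate discrete counterpart that can be checked numerically (the explicit $1/n$ factor comes from NumPy's unnormalized DFT convention; negative frequencies are indexed modulo $n$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 48
Y, B = rng.standard_normal(n), rng.standard_normal(n)

y, b = np.fft.fft(Y), np.fft.fft(B)
b_neg = np.roll(b[::-1], 1)                 # b(-z): index (n - z) mod n
lhs = (Y * B).sum()                         # spatial overlap integral
rhs = (y * b_neg).sum().real / n            # sum_z y(z) b(-z), with the DFT 1/n factor
print(abs(lhs - rhs))                       # ~0: the identity holds exactly
```

For real signals $b(-z)$ is the complex conjugate of $b(z)$, so this is the familiar Parseval pairing.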
\begin{lemma}\label{lemma:ft_sum}
Given functional $Y$ in spatial domain $t$, the integral ${\int} Y(t)dt$ can be transformed into the frequency domain $\nu$ as
\begin{equation}
\begin{aligned}
{\int} Y(t)dt & = y(0);
\end{aligned}
\label{Eq:spectral_pxsum}
\end{equation}
where $y(\nu)=\mathcal{F}(Y(t))$.
\end{lemma}
\begin{proof}
The proof follows similar process as for lemma~\ref{lemma:ft_prod}, as follows.
\begin{equation}
\begin{aligned}
{\int} Y(t)dt & ={\int} {{\cal F}^{-1}}(y(\nu))dt
\\& = {\int}{\int} y(\nu)e^{2j\pi \nu t}d\nu dt
\\& = {\int} y(\nu){D_{0}(\nu)}d\nu
\\& = y(0)
\end{aligned}
\end{equation}
\end{proof}
\subsection{Gradient propagation for a convolution layer}
Consider a convolution layer consisting of a convolutional kernel $K(t)$ and the soft-plus activation function $\sigma(z(t))=log(1+e^{z(t)})$, where $t$ is the spatial location. Letting $X$ denote the input, the output of the convolution layer is written as
\begin{equation}
\begin{aligned}
Z(t) &= K(t) \otimes X(t) \\
Y(t) &=\sigma(Z(t))
\end{aligned}
\end{equation}
\begin{lemma}\label{lemma:conv_grad}
Assuming $K(t)$ is small and $|X(t)| < 1$, the spectral gradient can be approximated as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\end{equation}
where $k(\nu)$, $x(\nu)$ and ${\delta}_{\nu_j}(\nu_i)$ are ${\cal F}(K(t))$, ${\cal F}(X(t))$ and the Kronecker delta function, respectively.
\end{lemma}
\begin{proof}
The spectral gradient of a convolution layer consists of the spectral gradient of the convolution operator and that of the activation function. We derive the two gradients and combine them at the end of the derivation.
The convolution operator can be written in the frequency domain as ${z}(\nu) = k(\nu)x(\nu)$, where $z(\nu)$, $k(\nu)$, and $x(\nu)$ are ${\cal F}(Z(t))$, ${\cal F}(K(t))$, and ${\cal F}(X(t))$, respectively. Without loss of generality, in the discrete frequency domain, the gradient of $z$ at a specific frequency $\nu_i$ with respect to $x$ at frequency $\nu_j$ is
\begin{equation}
\begin{aligned}
& \frac{\partial z(\nu_i)}{\partial x(\nu_j)} = \frac{\partial }{{\partial x(\nu_j)}}( {k(\nu_i)x(\nu_i)}) = k(\nu_i){{\delta}_{\nu_j}(\nu_i)},
\end{aligned}
\label{Eq:conv_spec_grad}
\end{equation}
where ${\delta}_{\nu_j}(\nu_i)$ is the Kronecker delta function.
\begin{equation}
\begin{aligned}
{\delta}_{\nu_j}(\nu_i) =
\begin{cases}
1, & \text{if } \nu_i=\nu_j \\
0, & \text{otherwise}
\end{cases}
\end{aligned}
\end{equation}
The soft-plus function can first be expressed as a Taylor series
\begin{equation}
\begin{aligned}
\sigma(Z(t)) &= \log ( {1 + {e^{Z(t)}}})
\\&= \log (2) + \frac{1}{2}Z(t) + \frac{1}{8}{Z(t)^2} + O(Z(t)^4),
\end{aligned}
\end{equation}
in which $Z(t)$ is small, since the kernel $K(t)$ is small and $|X(t)| < 1$ by assumption.
Hence, $O(Z(t)^4)$ is negligible.
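The quality of this truncation is easy to check numerically: for $|z| \le 0.1$ the remainder of the series (whose leading term is $-z^4/192$, since the odd derivatives of $\sigma$ beyond the first vanish at zero) stays below $10^{-6}$:

```python
import numpy as np

z = np.linspace(-0.1, 0.1, 201)                 # "small" pre-activations
exact = np.log1p(np.exp(z))                     # soft-plus
approx = np.log(2) + z / 2 + z**2 / 8           # truncated Taylor series
err = np.abs(exact - approx).max()
print(err)                                      # O(z^4) remainder, ~5e-7 here
```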
The Fourier transform of $\sigma(Z(t))$ is thus given as
\begin{equation}
\begin{aligned}
y(\nu) & = {\cal F}(\sigma( Z(t)))
\\ & \simeq {\cal F}( {\log ( 2 ) + \frac{1}{2}Z(t) + \frac{1}{8}{Z(t)^2}} )
\\ & = 2\pi log(2) \delta_{0}(\nu)+\frac{1}{2} z(\nu)+\frac{1}{8} z(\nu)\otimes z(\nu),
\end{aligned}
\end{equation}
and its spectral gradient is
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial z(\nu_j)} & \simeq \frac{\partial }{\partial{z(\nu_j)}}( 2\pi log(2) \delta_{0}(\nu_i)
\\ & + \frac{1}{2} z(\nu_i)+\frac{1}{8} z(\nu_i)\otimes z(\nu_i) )
\\ & = \frac{\partial }{\partial z(\nu_j)}(\frac{1}{2}z( {{\nu _i}} ) + \frac{1}{8} \sum_{r=0}^{n-1} z( {{\nu _i - \nu _r}} )z( {{\nu _r}} ))
\\ & = \frac{1}{2}{{\delta}_{\nu _j}(\nu _i)} + \frac{1}{4}z( \nu_i - \nu_j),
\end{aligned}
\label{Eq:act_spec_grad}
\end{equation}
where $r$ is a dummy variable for the convolution and $n$ is the spectrum size of features. By Eq.~\ref{Eq:conv_spec_grad} and Eq.~\ref{Eq:act_spec_grad}, the spectral gradient of a convolutional layer in Eq.~\ref{Eq:conv_layer} is then written as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} & = \sum _{q=0}^n \frac{{\partial y(\nu_i)}}{{\partial z(\nu_q )}}\frac{{\partial z(\nu_q)}}{{\partial x(\nu_j)}}
\\ & \simeq \sum _{q=0}^n ((\frac{1}{2}{{\delta}_{\nu_q}(\nu_i)} + \frac{1}{4}z(\nu _i - \nu_q ))k( \nu_q){{\delta}_{\nu_j}(\nu_q)})
\\ & = k( \nu_j)[\frac{1}{2}{{\delta}_{\nu_j}(\nu_i)} + \frac{1}{4}z(\nu_i-\nu_j)],
\end{aligned}
\label{Eq:convlayer_spec_grad_supp}
\end{equation}
where $i, j, \text{and~} q$ are the frequency indices.
Since $Z(t)$ is small as argued above, the corresponding spectrum $z(\nu)$ should also be small. We can therefore neglect the second term of Eq.~\ref{Eq:convlayer_spec_grad_supp},~\textit{i}.\textit{e}.~ $\frac{1}{4}z(\nu_i-\nu_j)$, and approximate Eq.~\ref{Eq:convlayer_spec_grad_supp} as
\begin{equation}
\begin{aligned}
\frac{\partial y(\nu_i)}{\partial x(\nu_j)} \simeq \frac{1}{2}k( \nu_j){{\delta}_{\nu_j}(\nu_i)}
\end{aligned}
\end{equation}
\end{proof}
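The approximation can be verified by a finite-difference experiment in the frequency domain. The following sketch uses a toy 1-D layer with an illustrative small kernel: perturbing the input spectrum at a single frequency $\nu_j$ and measuring the response at the same frequency recovers the factor $\frac{1}{2}k(\nu_j)$:

```python
import numpy as np

n = 64
t = np.arange(n)
K = 0.1 * np.array([1.0, 0.5, 0.25, 0.125])       # small kernel (lemma assumption)
k = np.fft.fft(K, n)                              # zero-padded kernel spectrum
X = 0.5 * np.sin(2 * np.pi * 3 * t / n)           # |X| < 1

def layer(x):
    z = np.real(np.fft.ifft(k * np.fft.fft(x)))   # circular convolution K * x
    return np.log1p(np.exp(z))                    # soft-plus activation

eps, nu_j = 1e-5, 7
Xp = X + eps * np.cos(2 * np.pi * nu_j * t / n)   # perturb frequency nu_j only
dy = np.fft.fft(layer(Xp) - layer(X))[nu_j]
dx = np.fft.fft(Xp - X)[nu_j]
print(abs(dy / dx - 0.5 * k[nu_j]))               # ~0: the diagonal entry is k(nu_j)/2
```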
\subsection{Gradient propagation for the frequency component of CE}
\begin{lemma}\label{lemma:ce_grad}
Consider a convolutional layer that satisfies the assumptions of lemma~\ref{lemma:conv_grad}, and let $x(\nu)$ denote the spectrum of the input feature.
For each semantic class $c$ in segmentation maps, let $k(\nu, c)$ and $y(\nu,c)$ denote the spectrum of kernel and that of the segmentation output, respectively.
The spectral gradient for the frequency component of CE, $\mathcal{L}_{ce}(\nu_i)$ is
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)[D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)],
\end{aligned}
\end{equation}
where $\delta_{\nu_j}(\nu_i)$ is the Kronecker delta function and $D_{0}(\nu_i)$ is the Dirac delta function.
\end{lemma}
\begin{proof}
By lemma~\ref{lemma:conv_grad} and Eq.~\ref{Eq:spec_ce_component}, the spectral gradient $\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}$ is given as
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
& = \sum_c \sum_q \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_q,c)}}\frac{{\partial y(\nu_q,c)}}{{\partial x(\nu_j)}}
\\ & \simeq \sum_c \sum_q \frac{1}{2} \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_q,c)}} k( \nu_j,c){{\delta}_{\nu_j}(\nu_q)}
\\ & = \sum_c \frac{1}{2} \frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial y(\nu_j,c)}} k(\nu_j,c)
\\ & = \sum_c \frac{1}{2}k(\nu_j,c) \frac
{\partial
{~\sum _{\widetilde{c}} b(-\nu_i,\widetilde{c})({y_p}(\nu_i) - y(\nu_i,\widetilde{c}))}}
{\partial y(\nu_j,c)}
\\ & = \sum _c \frac{1}{2}k(\nu_j,c) [(\frac{\partial {y_p}(\nu_i)}{\partial y(\nu_j,c)} \sum \limits_{\widetilde{c}} b(-\nu_i,\widetilde{c})) \\ & - (\sum \limits_{\widetilde{c}} \frac {\partial y(\nu_i,\widetilde{c})} {\partial y(\nu_j,c)}b(-\nu_i,\widetilde{c}))],
\end{aligned}
\label{Eq:ce_spec_grad1}
\end{equation}
in which
\begin{equation}
\begin{aligned}
\frac{\partial y_p(\nu_i)}{\partial y(\nu_j,c)}
& = \int \frac{\partial y_p(\nu_i)}{\partial Y(t,c)}\frac{\partial Y(t,c)}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{\partial {\mathcal{F}}(Y_p(t))}{\partial Y(t,c)}\frac{\partial {\mathcal{F}}^{-1}(y(\nu,c))}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{\partial \int log({\sum _c} e^{Y(t,c)}) e^{-2j\pi\nu_i t}dt}{\partial Y(t,c)}
\\ &\frac{\partial {\int y(\nu,c) e^{2j\pi\nu t}d\nu}}{\partial y(\nu_j,c)} dt
\\ & = \int \frac{e^{Y(t,c)}}{{\sum _c} e^{Y(t,c)}} e^{-2j\pi(\nu_i-\nu_j) t}dt
\\ & = \int S(t,c) e^{-2j\pi(\nu_i-\nu_j) t}dt
= s(\nu_i-\nu_j, c),
\end{aligned}
\label{Eq:amax_spec_grad}
\end{equation}
where $S(t,c)$ is the segmentation output after performing softmax on logits $Y$ and $s(\nu,c)$ is the spectrum of the segmentation output.
Further, we have
\begin{equation}
\begin{aligned}
\sum _{\widetilde{c}} b(-\nu_i,\widetilde{c}) = D_{0}(\nu_i)
\end{aligned}
\label{Eq:sum_b}
\end{equation}
by the Fourier transform of $\sum _{\widetilde{c}} B(t,\widetilde{c}) = 1$, \textit{i}.\textit{e}.~ the fact that $B$ denotes a probability distribution over semantic classes and sums to one at each pixel.
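Eq.~\ref{Eq:sum_b} has a simple discrete counterpart: for one-hot annotations, summing the class-wise spectra yields the DFT of the all-ones signal, which is $n$ at $\nu=0$ and zero elsewhere (a discrete stand-in for the Dirac delta):

```python
import numpy as np

rng = np.random.default_rng(0)
n, C = 16, 3
B = np.eye(C)[rng.integers(0, C, n)].T        # one-hot annotation: sums to 1 per pixel
tot = np.fft.fft(B, axis=1).sum(0)            # sum_c b(nu, c) = DFT of the all-ones signal
print(tot[0].real, np.abs(tot[1:]).max())     # n at nu = 0, ~zero elsewhere
```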
Substituting Eq.~\ref{Eq:amax_spec_grad} and Eq.~\ref{Eq:sum_b} into Eq.~\ref{Eq:ce_spec_grad1}, we have the overall spectral gradient as
\begin{equation}
\begin{aligned}
\frac{{\partial \mathcal{L}_{ce}(\nu_i)}}{{\partial x(\nu_j)}}
\simeq & \sum _c \frac{1}{2}k(\nu_j,c)
\\ & [D_{0}(\nu_i) s(-\nu_j, c) - \delta_{\nu_j}(\nu_i)b(-\nu_i,c)]
\end{aligned}
\end{equation}
\end{proof}
\subsection{Implementation details}\label{ssec:implement}
\noindent\textbf{Datasets.}
We conduct experiments on the following three semantic segmentation datasets: the PASCAL semantic segmentation benchmark \cite{everingham2015pascal}, the DeepGlobe land-cover classification challenge \cite{demir2018deepglobe}, and the Cityscapes pixel-level semantic labeling task \cite{cordts2016cityscapes} (denoted as PASCAL, DeepGlobe, and Cityscapes, respectively). The PASCAL dataset contains 21 categories, 1464 training images, and 1449 validation images; the dataset is further augmented with the extra annotations from \cite{BharathICCV2011}. The DeepGlobe dataset contains 7 categories and 803 training images, which are split into 701 and 102 images for training and validation, respectively. The Cityscapes dataset contains 19 categories, 2975 training images, and 500 validation images.
\noindent\textbf{Segmentation networks and implementation details.}
In our experiments, we utilize standard segmentation networks, namely DeepLab v3+~\cite{chen2018encoder} and the Deep Aggregation Net (DAN)~\cite{kuo2018dan}. We adopt ResNet-101~\cite{he2016deep} pre-trained on ImageNet-1k~\cite{russakovsky2015imagenet} as the backbone of these networks. The networks are trained with the following policies: for all datasets, the images are randomly cropped to 513$\times$513 pixels and the training batch size is 8. For the PASCAL dataset, the network is trained with an initial learning rate of 0.0007 for 100 epochs; for the DeepGlobe dataset, with an initial learning rate of 0.007 for 600 epochs; and for the Cityscapes dataset, with an initial learning rate of 0.001 for 200 epochs. For evaluation, the images are cropped to 513$\times$513 pixels for all datasets to ensure a consistent image size in the spectral analysis.
\subsection{Discussion and Experimental Data of Feature Truncation}\label{ssec:feat_detail}
\begin{table*}[h]
\centering
\begin{subtable}[h]{0.665\textwidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Feature & \multicolumn{2}{c|}{PASCAL} & \multicolumn{2}{c|}{DeepGlobe} & \multicolumn{2}{c}{Cityscapes} \\ \cline{2-7}
size & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & relative mIoU-drop \\ \hline
\multicolumn{7}{c}{Baseline} \\ \hline
\multicolumn{1}{c|}{129} & 78.5\% & \multicolumn{1}{c|}{0.0\%} & 55.0\% & \multicolumn{1}{c|}{0.0\%} & 67.8\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 78.1\% & \multicolumn{1}{c|}{0.5\%} & 54.6\% & \multicolumn{1}{c|}{0.7\%} & 62.7\% & {7.4\%} \\
\multicolumn{1}{c|}{33} & 75.6\% & \multicolumn{1}{c|}{3.6\%} & 53.9\% & \multicolumn{1}{c|}{2.0\%} & 53.3\% & {21.3\%} \\ \hline
\multicolumn{7}{c}{20\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 76.6\% & \multicolumn{1}{c|}{0.0\%} & 53.7\% & \multicolumn{1}{c|}{0.0\%} & 67.2\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 76.0\% & \multicolumn{1}{c|}{0.8\%} & 53.4\% & \multicolumn{1}{c|}{0.7\%} & 62.4\% & 7.1\% \\
\multicolumn{1}{c|}{33} & 73.2\% & \multicolumn{1}{c|}{4.4\%} & 52.7\% & \multicolumn{1}{c|}{2.0\%} & 53.4\% & 20.5\% \\ \hline
\multicolumn{7}{c}{40\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 74.4\% & \multicolumn{1}{c|}{0.0\%} & 53.0\% & \multicolumn{1}{c|}{0.0\%} & 66.1\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 73.9\% & \multicolumn{1}{c|}{0.7\%} & 52.7\% & \multicolumn{1}{c|}{0.6\%} & 61.5\% & 7.0\% \\
\multicolumn{1}{c|}{33} & 72.1\% & \multicolumn{1}{c|}{3.1\%} & 52.0\% & \multicolumn{1}{c|}{1.9\%} & 52.8\% & 20.1\% \\ \hline
\multicolumn{7}{c}{60\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 65.1\% & \multicolumn{1}{c|}{0.0\%} & 50.1\% & \multicolumn{1}{c|}{0.0\%} & 58.6\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 64.7\% & \multicolumn{1}{c|}{0.6\%} & 49.9\% & \multicolumn{1}{c|}{0.4\%} & 54.8\% & 6.5\% \\
\multicolumn{1}{c|}{33} & 63.3\% & \multicolumn{1}{c|}{2.8\%} & 49.4\% & \multicolumn{1}{c|}{1.4\%} & 47.4\% & 19.2\%
\end{tabular}
}
\caption{DeepLab v3+}
\label{table:feat_trunc:deeplab}
\end{subtable}
\begin{subtable}[h]{0.665\textwidth}
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{ccccccc}
\hline
Feature & \multicolumn{2}{c|}{PASCAL} & \multicolumn{2}{c|}{DeepGlobe} & \multicolumn{2}{c}{Cityscapes} \\ \cline{2-7}
size & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & \multicolumn{1}{c|}{relative mIoU-drop} & mIoU & relative mIoU-drop \\ \hline
\multicolumn{7}{c}{Baseline} \\ \hline
\multicolumn{1}{c|}{129} & 77.6\% & \multicolumn{1}{c|}{0.0\%} & 53.6\% & \multicolumn{1}{c|}{0.0\%} & 66.4\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 76.7\% & \multicolumn{1}{c|}{1.1\%} & 53.3\% & \multicolumn{1}{c|}{0.5\%} & 61.0\% & {8.1\%} \\
\multicolumn{1}{c|}{33} & 73.4\% & \multicolumn{1}{c|}{5.4\%} & 52.7\% & \multicolumn{1}{c|}{1.7\%} & 50.0\% & {24.8\%} \\ \hline
\multicolumn{7}{c}{20\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 76.8\% & \multicolumn{1}{c|}{0.0\%} & 53.6\% & \multicolumn{1}{c|}{0.0\%} & 65.8\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 75.9\% & \multicolumn{1}{c|}{1.1\%} & 53.2\% & \multicolumn{1}{c|}{0.8\%} & 60.3\% & 8.3\% \\
\multicolumn{1}{c|}{33} & 72.8\% & \multicolumn{1}{c|}{5.2\%} & 52.4\% & \multicolumn{1}{c|}{2.2\%} & 49.0\% & 25.5\% \\ \hline
\multicolumn{7}{c}{40\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 74.6\% & \multicolumn{1}{c|}{0.0\%} & 52.3\% & \multicolumn{1}{c|}{0.0\%} & 65.1\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 73.8\% & \multicolumn{1}{c|}{1.0\%} & 51.9\% & \multicolumn{1}{c|}{0.8\%} & 59.8\% & 8.1\% \\
\multicolumn{1}{c|}{33} & 71.3\% & \multicolumn{1}{c|}{4.4\%} & 50.7\% & \multicolumn{1}{c|}{2.9\%} & 48.9\% & 24.9\% \\ \hline
\multicolumn{7}{c}{60\% Pruning rate for encoder} \\ \hline
\multicolumn{1}{c|}{129} & 65.6\% & \multicolumn{1}{c|}{0.0\%} & 49.5\% & \multicolumn{1}{c|}{0.0\%} & 57.0\% & 0.0\% \\
\multicolumn{1}{c|}{65} & 65.1\% & \multicolumn{1}{c|}{0.7\%} & 49.4\% & \multicolumn{1}{c|}{0.3\%} & 52.4\% & 8.1\% \\
\multicolumn{1}{c|}{33} & 63.0\% & \multicolumn{1}{c|}{3.9\%} & 48.9\% & \multicolumn{1}{c|}{1.4\%} & 42.8\% & 24.9\%
\end{tabular}
}
\caption{DAN}
\label{table:feat_trunc:dan}
\end{subtable}
\caption{Results for feature truncation and network pruning on DeepLab v3+ and DAN.
We summarize the results of 4 network pruning setups: ``Baseline'' denotes the experiment without SFP, while ``X pruning rate for encoder'' denotes that with SFP, where X $\in$ (20\%, 40\%, 60\%) is the pruning rate.
For each network pruning setup, we further evaluate the results with 3 feature sizes for feature truncation, \textit{i}.\textit{e}.~ (129, 65, 33).
\label{table:feat_trunc}
\end{table*}
Table~\ref{table:feat_trunc} summarizes the experimental details of feature truncation. For the experiment on each dataset, the ``mIoU'' and ``relative mIoU-drop'' are evaluated, where ``mIoU'' is the mean IoU score over all semantic classes and ``relative mIoU-drop'' is the relative reduction rate of mIoU with respect to the model with the same SFP setup and feature size 129.
As the tables show, the mIoU decreases as either the pruning rate increases or the feature size decreases. Closer inspection shows that the relative mIoU-drops of the experiments on the Cityscapes dataset are significantly larger than those on the PASCAL and DeepGlobe datasets. Taking the ``Baseline'' DeepLab v3+ model with feature size 65 as an example, the relative mIoU-drop is 0.5\% and 0.7\% for the PASCAL and DeepGlobe datasets, respectively, while it reaches 7.4\% for the Cityscapes dataset. The same trend holds for the experiments with other SFP setups and feature sizes based on DeepLab v3+, and similar results are observed in the experiments on DAN, as shown in Table~\ref{table:feat_trunc:dan}.
These significant mIoU-drop on Cityscapes is well explained by the theoretical estimation $R({\nu}_{max})$ in Table~\ref{table:spectral_stat}, where $R({\nu}_{max})$ depict the relative loss of CE evaluated that positively correlates to relative mIoU-drop as discussed in section~\ref{ssec:method_spectral_ioU}. We further illustrate the correlation between the relative mIoU-drop (cf. Table~\ref{table:feat_trunc}) and $R({\nu}_{max})$ (cf. Table~\ref{table:spectral_stat}) in Fig.~\ref{Fig:feature_truncation_iou_R}; where we plot the data with ${\nu}_{max}=[16,32,256]$. The positive correlation in the figure agrees our theoretical statement.
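To make the reported metric precise, the relative mIoU-drop can be recomputed directly from the tabulated mIoU values. The following is a minimal sketch using the DAN "Baseline" numbers (57.0\% at feature size 129 vs.\ 52.4\% at 65), taken from the column that the discussion above identifies as Cityscapes.

```python
# Relative mIoU-drop: relative reduction of mIoU with respect to the
# model with the same SFP setup and reference feature size 129.
def relative_miou_drop(miou_ref, miou):
    return (miou_ref - miou) / miou_ref * 100.0

# DAN "Baseline", feature size 129 -> 65, Cityscapes column.
drop = relative_miou_drop(57.0, 52.4)
print(round(drop, 1))  # 8.1, matching the tabulated value
```

Small discrepancies in the last decimal for other entries are rounding effects of the tabulated mIoU values.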
\begin{figure}[h]
\centering
\includegraphics[width=1.\columnwidth]{figures/feature_truncation_iou_R.png}
\caption{The correlation between relative mIoU-drop and the $R(\nu_{max})$ for the feature truncation. The results are evaluated based on PASCAL, DeepGlobe and Cityscapes datasets.}
\label{Fig:feature_truncation_iou_R}
\end{figure}
\section{Introduction}
B-branes on $\mathcal{N}=(2,2)$ supersymmetric theories can be defined in general terms, regardless of whether the theory possesses an IR superconformal fixed point. This is achieved by simply studying boundary conditions preserving an appropriate subset $\mathbf{2}_{B}$ of the $\mathcal{N}=(2,2)$ supersymmetry algebra, spanned by $\mathbf{Q}_{-}+\mathbf{Q}_{+}$ and its conjugate supercharge $\overline{\mathbf{Q}}_{-}+\overline{\mathbf{Q}}_{+}$ \cite{Hori:2000ck,Witten:1992fb,Aspinwall:2009isa}. The collection of such boundary conditions is expected to have the structure of a triangulated category. These categories are expected to be insensitive to deformations of the action of the form $\mathbf{Q}_{+}\overline{\mathbf{Q}}_{-}\mathcal{G}$ (and its adjoint), where $\overline{\mathbf{Q}}_{+}\mathcal{G}=\mathbf{Q}_{-}\mathcal{G}=0$. In the context of the gauged linear sigma model (GLSM) \cite{Witten:1993yc}, these deformations can be interpreted as deformations of the twisted superpotential, and the space of such deformations is called the quantum K\"ahler moduli space $\mathcal{M}_{K}$. A huge class of GLSMs have geometric points in $\mathcal{M}_{K}$. At such points, the theory in the IR can be thought of as a nonlinear sigma model (NLSM) whose target space is a K\"ahler manifold $X$, and locally $\mathcal{M}_{K}$ is regarded as the complexified K\"ahler cone of $X$. Then the category of B-branes is equivalent to $D^{b}Coh(X)$. By moving around $\mathcal{M}_{K}$ we can find relations between $D^{b}Coh(X)$ and the categories of B-branes at the different Higgs branches arising at other points in $\mathcal{M}_{K}$. When $X$ is Calabi-Yau (CY), this is known to give equivalences or autoequivalences, classified by the space of homotopy classes of paths in $\mathcal{M}_{K}$, between $D^{b}Coh(X)$ and the aforementioned categories. 
For $X$ a general K\"ahler manifold, the different B-brane categories generically embed into each other, and paths correspond to fully faithful functors. The way the categories fit into each other depends on the structure of the Coulomb and Coulomb-Higgs branches, which are either absent or singular in the CY case. This problem has been analyzed thoroughly in the physics literature, starting with \cite{Herbst:2008jq} for nonanomalous abelian GLSMs and further developed in \cite{Clingempeel:2018iub,Hori:2013ika,hori2019notes,eager2017beijing}, and in the mathematics literature \cite{segal2011equivalences,halpern2015derived,ballard2019variation}.
On the other hand, one very powerful result of recent years regarding the interplay between derived categories of projective varieties and their semiorthogonal decompositions is Kuznetsov's homological projective duality \cite{kuznetsov2007homological}. Kuznetsov's result relates the semiorthogonal decomposition of a projective variety $X$ to that of its homological projective dual (HPD) variety $Z$ (or, more generally, its HPD category $\mathcal{C}$), as well as those of their linear sections. This result leads to a great deal of insight into equivalences of triangulated categories and their semiorthogonal decompositions (for a review see for example \cite{alex2014semiorthogonal,thomas2015notes}). Mutually HPD varieties/categories have appeared as B-brane categories of different phases of GLSMs \cite{Caldararu:2007tc,Hori:2016txh,Hori:2013gga,Addington:2012zv,Hori:2006dk}, and more recently this point of view has been studied in a mathematical framework in \cite{ballard2017homological,rennemo2017fundamental}.
Inspired mostly by the works
\cite{ballard2017homological,rennemo2017fundamental}, we develop a proposal for
a GLSM construction of HPD models of a K\"ahler variety $X$, whenever a GLSM
construction for $X$ itself is known. Let us summarize our proposal. Details
can be found in section \ref{sec:proposal}. Starting from a GLSM
$\mathcal{T}_{X}=(G,\rho_{m}: G\rightarrow GL(V),W,t_{\mathrm{ren}},R)$ with a
geometric point in $\mathcal{M}_{K}$ around which we can identify the category
of B-branes with $D^{b}Coh(X)$ and a map $f:X\rightarrow \mathbb{P}(S)$, we
write an extension $\mathcal{T}_{\mathcal{X}}$ of $\mathcal{T}_{X}$ given by
\begin{eqnarray}
\mathcal{T}_{\mathcal{X}}=(\widehat{G}=G\times U(1)_{s+1},\hat{\rho}_{m}: \widehat{G}\rightarrow GL(V\oplus V'),\widehat{W},\widehat{R}),
\end{eqnarray}
where\footnote{Here and in the following, $\mathbb{C}_{(\chi,w)}$ denotes the one-dimensional representation of $\widehat{G}$ determined by the $G$-character $\chi$ and the $U(1)_{s+1}$-weight $w$.} $V'=\mathbb{C}_{(\chi^{-1},-1)} \oplus S^\vee$ is a representation of $\widehat{G}$ with
$\chi\in\mathrm{Hom}(G,\mathbb{C}^{*})$ the character of $G$ defined by \eqref{chi} and $S^\vee \cong \mathbb{C}^{\oplus{n+1}}_{(1,1)}$. By an appropriate choice of basis, we
can write $\mathbb{C}_{\chi^{-1}}$ as a representation of weight $Q$ of a
subgroup $U(1)_{l}\subset G$. Denoting the coordinates of $V'$ as
$(P,S_{0},\ldots,S_{n})$, the superpotential $\widehat{W}$ is given by
\begin{eqnarray}
\widehat{W}=W+P\sum_{j=0}^{n}S_{j}f_{j}(X),
\end{eqnarray}
where $f_{j}(X)$ are the components of the image of the map $f$. The GLSM $\mathcal{T}_{\mathcal{X}}$ is identified with the GLSM of the universal hyperplane section $\mathcal{X}$ of $X$.
Its most important property is that its Higgs branch deep in the second quadrant of the FI parameter space of $U(1)_{s+1}\times U(1)_{l}$, upon taking the gauge decoupling limit, is a hybrid model whose category of B-branes can be identified with the HPD category $\mathcal{C}$ of $X$. This is essentially because both categories are equivalent to the subcategory $\mathcal{W}_{-}$ (the small window category, see section \ref{sec:GLSM}) of GLSM B-branes. Brane transport between this phase and the geometric phase realizes the correspondence $\mathcal{C}\cong \mathcal{W}_{-} \cong D^{b}Coh(X)$. Moreover, we can take linear sections of $\mathcal{T}_{\mathcal{X}}$, essentially by restricting the chiral fields $S_{j}$ to a linear subspace $L\subset \mathbb{P}(S^{\vee})$. This gives a GLSM, termed $\mathcal{T}_{\mathcal{X}_{L}}$, which has two very interesting phases: one that we can identify with the NLSM on $X_{L}$, and one whose category of B-branes can be identified with $D(Z_{L})$ (in some cases the variety $Z$ itself cannot be determined, but $D(Z_{L})$ can still be defined in terms of a hybrid model). If we take the gauge decoupling limit $e_{s+1}\rightarrow \infty$ of $U(1)_{s+1}$ on $\mathcal{T}_{\mathcal{X}}$, while keeping $\zeta_{s+1}\ll-1$, we end up with a $G\times U(1)_{l}$ gauge theory $\mathcal{T}_{\mathcal{C}'}$, one of whose phases is related to a category $\mathcal{C}'$ which embeds in $\mathcal{C}$ (in some examples $\mathcal{C}'=\mathcal{C}$; a detailed explanation can be found in section \ref{sec:proposal}). The same limit for $\mathcal{T}_{\mathcal{X}_{L}}$ is more interesting, since we end up with a theory $\mathcal{T}_{X_{L}}$ in which one phase is related to $X_{L}$ and the other to $\mathcal{C}_{L}$, a subcategory ``shared'' between $D(X_{L})$ and $D(Z_{L})$ (a precise definition can be found in section \ref{sec:section2}). We summarize this in the following diagram:
\[
\xymatrix{\mathcal{T}_X \ar@<0.5ex>[d]_-{\mathrm{Extension}} & \\ \mathcal{T}_{\mathcal{X}} \ar@<0.5ex>[rr]^-{\mathrm{Restriction}} \ar@<0.5ex>[d]_-{-\zeta_{s+1},e_{s+1} \rightarrow \infty} && \mathcal{T}_{\mathcal{X}_L} \ar@<0.5ex>[d]_-{-\zeta_{s+1},e_{s+1} \rightarrow \infty} \\
\mathcal{T}_{\mathcal{C}'} \ar@<0.5ex>[rr]^-{\mathrm{Restriction}} && \mathcal{T}_{X_L} }
\]
We provide evidence for our proposal by working it out in several examples, reproducing known results and providing insight into new ones. More precisely, we analyze:
\begin{itemize}
\item \textbf{Linear and Veronese embeddings of $\mathbb{P}^{n}$}. In the linear embedding with the Lefschetz decomposition defined by the Beilinson collection, we reproduce the basic result that the HPD corresponds to the classical projective dual. For the double Veronese embedding, we reproduce the result of \cite{kuznetsov2008derived}, and we also analyze higher Veronese embeddings, finding an HPD analogous to the one from \cite{ballard2017homological}, described by a hybrid model $\mathcal{C}$. Using GLSM technology, we find an explicit functor from this hybrid model to the universal hyperplane section $\mathcal{X}$ of the Veronese embedding, which allows us to write explicit generators of $\mathcal{C}$ as a subcategory of $D^{b}Coh(\mathcal{X})$.
\item \textbf{Quadrics in $\mathbb{P}^{n}$}. When $n$ is even, we reproduce the HPD from \cite{kuznetsov2019homological}. When $n$ is odd, our construction induces a Lefschetz decomposition different from the one used in \cite{kuznetsov2019homological}. Therefore, the HPD changes, and we are not aware of it being constructed explicitly anywhere else. We again analyze an explicit functor from objects in the hybrid model representing the HPD category to $D^{b}Coh(\mathcal{X})$.
\item \textbf{Fano complete intersections in $\mathbb{P}^{n}$}. In this case, we only analyze the resulting HPD that takes the form of a hybrid model corresponding to the Lefschetz decomposition induced by $(\mathcal{T}_{X},\mathcal{T}_{\mathcal{X}})$.
\item \textbf{Pl\"ucker embedding of $G(k,N)$}. We devote section \ref{sec:nonabelian} to the study of the basic properties of $\mathcal{T}_{\mathcal{X}}$ for the Pl\"ucker embedding of $G(k,N)$. In this case we present the hybrid model representing the HPD and we compute to which Lefschetz decomposition it corresponds, by analyzing the Coulomb branch. We also comment on $\mathcal{T}_{\mathcal{X}_{L}}$ for some particular linear sections.
\end{itemize}
The paper is organized as follows. In section \ref{sec:section2} we review Kuznetsov's homological projective duality theorem and its consequences, as well as some examples. In section \ref{sec:GLSM} we provide a review of B-branes in GLSMs, focusing on the abelian case but not restricting to nonanomalous models. The main goal of this section is to review how the grade restriction rule works for anomalous GLSMs. In section \ref{sec:proposal} we present our proposal for a GLSM containing the HPD of a variety $X$ and we analyze its properties in a general setting. In particular, we specify the Lefschetz decomposition induced by the pair $(\mathcal{T}_{X},\mathcal{T}_{\mathcal{X}})$ and we give the details on how to take linear sections and construct $\mathcal{T}_{\mathcal{X}_{L}}$, as well as its properties. In section \ref{sec:examples} we analyze several examples of $(\mathcal{T}_{X},\mathcal{T}_{\mathcal{X}})$: linear and Veronese embeddings of projective space, quadrics in $\mathbb{P}^{n}$ and Fano complete intersections in $\mathbb{P}^{n}$. In section \ref{sec:mutually orthogonal section} we analyze $\mathcal{T}_{\mathcal{X}_{L}}$ for the previous examples. In section \ref{sec:nonabelian} we make some remarks on $\mathcal{T}_{\mathcal{X}}$ for the Pl\"ucker embedding of $G(k,N)$. Further background material and complementary results are collected in the appendices.
\section{\label{sec:section2}Lightning review of homological projective duality}
In order to define the homological projective dual (HPD) of a projective variety $X$, we need to define a few elements of triangulated categories first. We denote by $D(X)$ the bounded derived category of coherent sheaves on $X$, $D^{b}Coh(X)$.
\\\\
\textbf{Definition}.\cite{kuznetsov2007homological} A (right) Lefschetz decomposition of $D(X)$ w.r.t. the line bundle $\mathcal{L}$ on $X$ corresponds to a semiorthogonal decomposition \cite{bondal1995semiorthogonal}
\begin{eqnarray}\label{LDX}
D(X)=\langle\mathcal{A}_{0},\mathcal{A}_{1}(1),\ldots,\mathcal{A}_{k}(k)\rangle
\end{eqnarray}
such that $\mathcal{A}_{0}\supseteq \mathcal{A}_{1}\supseteq\cdots\supseteq\mathcal{A}_{k}$ are a collection of admissible subcategories of $D(X)$ and $\mathcal{A}_{i}(i):=\mathcal{A}_{i}\otimes \mathcal{L}^{\otimes i}$. The Lefschetz decomposition is called rectangular if $\mathcal{A}_{0}=\mathcal{A}_{1}=\cdots=\mathcal{A}_{k}$.\\\\
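A standard example, relevant for the linear embedding discussed later, is the Beilinson decomposition of projective space:
\begin{eqnarray}
D(\mathbb{P}^{n})=\langle \mathcal{O},\mathcal{O}(1),\ldots,\mathcal{O}(n)\rangle,\qquad \mathcal{A}_{0}=\cdots=\mathcal{A}_{n}=\langle\mathcal{O}\rangle,
\end{eqnarray}
which is a rectangular Lefschetz decomposition w.r.t. $\mathcal{L}=\mathcal{O}_{\mathbb{P}^{n}}(1)$.\\\\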
A few remarks are in order. The Lefschetz decomposition is completely determined by its center $\mathcal{A}_{0}$ and $\mathcal{L}$ via the relation
\begin{eqnarray}
\mathcal{A}_{r}={}^{\perp}\mathcal{A}_{r-1}(-r)\cap \mathcal{A}_{r-1}.
\end{eqnarray}
We can always construct a dual Lefschetz decomposition by setting \cite{kuznetsov2008lefschetz}
\begin{eqnarray}
\mathcal{B}_{0}=\mathcal{A}_{0},\mathcal{B}_{i}=\mathcal{A}_{0}(i)^{\perp}\cap \mathcal{A}_{i},\qquad i=1,\ldots,k,
\end{eqnarray}
where $\mathcal{A}_{0}(i)^{\perp}$ denotes the right orthogonal of $\mathcal{A}_{0}(i)$. Then we have a left Lefschetz decomposition
\begin{eqnarray}
D(X)=\langle \mathcal{B}_{k}(-k),\ldots,\mathcal{B}_{1}(-1), \mathcal{B}_{0}\rangle.
\end{eqnarray}\\
Consider a smooth projective variety $X$ with a morphism $f:X\rightarrow \mathbb{P}(V)$ and assume we have a Lefschetz decomposition of the form (\ref{LDX}) w.r.t. the line bundle $\mathcal{L}=\mathcal{O}_{X}(1):=f^{*}\mathcal{O}_{\mathbb{P}(V)}(1)$. We define the incidence divisor $\mathcal{H}\subset \mathbb{P}(V)\times \mathbb{P}(V^{\vee})$ by
\begin{eqnarray}
\mathcal{H}=\{(u,v)\in \mathbb{P}(V)\times \mathbb{P}(V^{\vee}) : v(u)=0 \}.
\end{eqnarray}
Then the universal hyperplane section of $X$ is defined by
\begin{eqnarray}
\mathcal{X}:=X\times_{\mathbb{P}(V)}\mathcal{H}\subset X\times \mathbb{P}(V^{\vee})
\end{eqnarray}
and has the following semiorthogonal decomposition \cite{kuznetsov2007homological}:
\begin{eqnarray}
D(\mathcal{X})=\langle \mathcal{C},\mathcal{A}_{1}(1)\boxtimes D(\mathbb{P}(V^{\vee})),\ldots,\mathcal{A}_{k}(k)\boxtimes D(\mathbb{P}(V^{\vee}))\rangle.
\end{eqnarray}
Then, one can define
\\\\
\textbf{Definition}.\cite{kuznetsov2007homological} $\mathcal{C}$ is called the HPD category of $X$. In other words, $\mathcal{C}$ is the right orthogonal of $\langle\mathcal{A}_{1}(1)\boxtimes D(\mathbb{P}(V^{\vee})),\ldots,\mathcal{A}_{k}(k)\boxtimes D(\mathbb{P}(V^{\vee}))\rangle$.\\\\
If we have an algebraic variety $Z$ with a morphism $g:Z\rightarrow \mathbb{P}(V^{\vee})$ such that there is an equivalence $D(Z)\cong \mathcal{C}$, we call $Z$ the HPD to $f:X\rightarrow \mathbb{P}(V)$ w.r.t. the Lefschetz decomposition (\ref{LDX}). A very important property of $\mathcal{C}$, that will be used for consistency checks in several of our examples is the following: given the embedding map $\delta:\mathcal{X}\rightarrow X \times \mathbb{P}(V^{\vee})$, we have:
\begin{eqnarray}\label{deltamapp}
\delta_{*}: \mathcal{C}\rightarrow\mathcal{A}_{0}\boxtimes D(\mathbb{P}(V^{\vee})).
\end{eqnarray}
Indeed, one can characterize $\mathcal{C}$ as the subcategory of objects in $D(\mathcal{X})$ whose image under $\delta_{*}$ belongs to $\mathcal{A}_{0}\boxtimes D(\mathbb{P}(V^{\vee}))$. The main theorem of \cite{kuznetsov2007homological} is
\\\\
\textbf{Theorem}.\cite{kuznetsov2007homological} Assume that $Z$ is the HPD to $X$ as defined above. Then, $Z$ is smooth and admits a dual Lefschetz decomposition
\begin{eqnarray}
D(Z)=\langle \mathcal{B}_{l}(-l),\ldots,\mathcal{B}_{1}(-1),\mathcal{B}_{0}\rangle\nonumber,\\
\mathcal{B}_{l}\subseteq\cdots \subseteq \mathcal{B}_{1}\subseteq\mathcal{B}_{0}\subseteq D(Z),
\end{eqnarray}
where $\mathcal{B}_{0}\cong \mathcal{A}_{0}$.\footnote{More precisely \cite{kuznetsov2007homological}, $\mathcal{B}_{j}\cong \langle\mathfrak{a}_{0},\ldots,\mathfrak{a}_{N-j-2}\rangle$, where $N=\mathrm{dim}V$ and $\mathfrak{a}_{j}$ is the right orthogonal to $\mathcal{A}_{j+1}$ in $\mathcal{A}_{j}$. Also, it is assumed that $N>k+1$.}
For any linear subspace $L\subset V^{\vee}$ satisfying
\begin{eqnarray}
\mathrm{dim}X_{L}&=&\mathrm{dim}X-r\nonumber,\\
\mathrm{dim}Z_{L}&=&\mathrm{dim}Z+r-\mathrm{dim}V,
\end{eqnarray}
where $r=\mathrm{dim}L$, we have the following Lefschetz decompositions
\begin{eqnarray}
D(X_{L})&=&\langle \mathcal{C}_{L}, \mathcal{A}_{r}(1),\ldots,\mathcal{A}_{k}(k+1-r)\rangle,\nonumber\\
D(Z_{L})&=&\langle \mathcal{B}_{l}(N-l-1-r),\ldots,\mathcal{B}_{N-r}(-1),\mathcal{C}_{L}\rangle,
\end{eqnarray}
where $X_{L}:=X\times_{\mathbb{P}(V)}\mathbb{P}(L^{\perp})$, $Z_{L}:=Z\times_{\mathbb{P}(V^{\vee})}\mathbb{P}(L)$.
\\\\
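The conditions on $L$ above are pure dimension counting; the helper below (our own bookkeeping, not from \cite{kuznetsov2007homological}) returns the dimensions demanded of the linear sections.

```python
# Dimensions required of the linear sections X_L and Z_L by the HPD
# theorem, for L of dimension r inside V^vee with dim V = N:
#   dim X_L = dim X - r,   dim Z_L = dim Z + r - N.
def section_dims(dim_x, dim_z, n, r):
    return dim_x - r, dim_z + r - n

# Purely illustrative numbers: dim X = 3, dim Z = 5, N = 4, r = 1.
print(section_dims(3, 5, 4, 1))  # (2, 2)
```
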
We list a few relevant examples:
\begin{itemize}
\item \textbf{Trivial Lefschetz decomposition}. For any $f:X\rightarrow \mathbb{P}(V)$ we can always take the Lefschetz decomposition with a single component $\mathcal{A}_{0}=D(X)$. Then its HPD is $Z=\mathcal{X}\rightarrow \mathbb{P}(V^{\vee})$, where the map is just the projection and
\begin{eqnarray}
D(Z_{L})=\langle D(X)\otimes \mathcal{O}_{\mathbb{P}(L)}(1-\mathrm{dim}L),\ldots,D(X)\otimes \mathcal{O}_{\mathbb{P}(L)}(-1),D(X_{L})\rangle.
\end{eqnarray}
\item \textbf{Quadrics}. \cite{kuznetsov2019homological} If $f:X\rightarrow \mathbb{P}(V)$ is a smooth quadric with the usual embedding and $V\cong \mathbb{C}^{2m+1}$, then we have the Lefschetz decomposition $\mathcal{A}_{0}=\langle \mathcal{S},\mathcal{O}\rangle$, $\mathcal{A}_{i}=\langle \mathcal{O}\rangle$, $i=1,\ldots,2m-2$, and the HPD of $X$ is $g:Z\rightarrow \mathbb{P}(V^{\vee})$, a double cover of $\mathbb{P}(V^{\vee})$ branched along the dual quadric $X^{\vee}$. On the other hand, if $V\cong \mathbb{C}^{2m+2}$, then $\mathcal{A}_{0}=\mathcal{A}_{1}=\langle \mathcal{S}_{\pm},\mathcal{O}\rangle$, $\mathcal{A}_{i}=\langle \mathcal{O}\rangle$, $i=2,\ldots,2m-1$, and the HPD of $X$ is $g:Z\rightarrow \mathbb{P}(V^{\vee})$, where $Z=X^{\vee}$ with the usual embedding.
\item \textbf{Double Veronese embedding}. \cite{kuznetsov2008derived} If $f:X=\mathbb{P}(V)\rightarrow \mathbb{P}(S^{2}V)$ is the double Veronese embedding of $\mathbb{P}(V)$ (where $S^{2}V:=\mathrm{Sym}^{2}V$) and $X\cong \mathbb{P}^{2l-\varepsilon}$, $\varepsilon\in \{0,1\}$, we have the Lefschetz decomposition $\mathcal{A}_{0}=\ldots=\mathcal{A}_{l-2}=\langle \mathcal{O}_{X}(-1),\mathcal{O}_{X}\rangle$, and
\begin{eqnarray}
\mathcal{A}_{l-1}=\begin{cases}
\langle \mathcal{O}_{X}(-1)\rangle, & \text{if } \varepsilon=1\\
\langle \mathcal{O}_{X}(-1),\mathcal{O}_{X}\rangle, & \text{if } \varepsilon=0
\end{cases}
\end{eqnarray}
The HPD category of $X$ is $D(\mathbb{P}(S^{2}V^{\vee}),\mathcal{C}l_{0})$, which consists of the coherent sheaves of modules over the even part of the universal Clifford algebra $\mathcal{C}l_0$ on $\mathbb{P}(S^{2} V^{\vee})$.
\end{itemize}
\section{\label{sec:GLSM}Lightning review of window categories in (anomalous) GLSM}
In this section we review the general definition of a GLSM \cite{Witten:1993yc}, in order to fix the notation for the following sections. We define a GLSM by specifying the following data.
\begin{itemize}
\item \textbf{Gauge group}: a compact Lie group $G$.
\item \textbf{Chiral matter fields}: a faithful unitary representation $\rho_{m}: G\rightarrow GL(V)$ of $G$ on some complex vector space $V\cong \mathbb{C}^{N}$. This determines the representation of the chiral $(2,2)$ fields.
\item \textbf{Superpotential}: a holomorphic, $G$-invariant polynomial $W: V\rightarrow \mathbb{C}$, namely $W\in \mathrm{Sym}(V^{\vee})^G$.
\item \textbf{Fayet-Iliopoulos (FI)-theta parameters}: a set of complex parameters $t$ such that
\begin{eqnarray}
\exp(t)\in \mathrm{Hom}(\pi_{1}(G),\mathbb{C}^{*})^{\pi_{0}(G)}
\end{eqnarray}
i.e., $\exp(t)$ is a group homomorphism from $\pi_{1}(G)$ to $\mathbb{C}^{*}$ that is invariant under the adjoint action of $G$. It is customary to write $t=\zeta-i\theta$, therefore \cite{hori2019notes}
\begin{eqnarray}
t\in \left(\frac{\mathfrak{t}^{*}_{\mathbb{C}}}{2\pi i \mathrm{P}}\right)^{W_{G}}\cong\frac{\mathfrak{z}^{*}_{\mathbb{C}}}{2\pi i \mathrm{P}^{W_{G}}},
\end{eqnarray}
where $\mathrm{P}$ is the weight lattice, $W_{G}$ is the Weyl subgroup of $G$,
$\mathfrak{t}$ is the Cartan subalgebra of $\mathfrak{g}=\mathrm{Lie}(G)$ and
$\mathfrak{z}=\mathrm{Lie}(Z(G))$. We remark that the $W_{G}$-invariant part of $\mathfrak{t}^{*}_{\mathbb{C}}$ can alternatively be identified with the group of characters of $\mathfrak{g}_{\mathbb{C}}$.
\item \textbf{R-symmetry}: vector $U(1)_{V}$ and axial $U(1)_{A}$ R-symmetries that commute with the action of $G$ on $V$. In particular, we denote for future use $R:U(1)_{V}\rightarrow GL(V)$.
To (classically) preserve the $U(1)_{V}$ symmetry the superpotential must have weight $2$ under it:
\begin{eqnarray}
W(R(\lambda)\cdot\phi)=\lambda^{2}W(\phi),
\end{eqnarray}
where $\phi$ denotes the coordinates in $V$.
\end{itemize}
For a GLSM with gauge group $G$ and $n$ chirals in representations $R_{i}$, $i=1,\ldots,n$, the condition for the cancellation of the $U(1)_{A}$ anomaly takes the following form:
\begin{eqnarray}
\sum_{i=1}^{n} \mathrm{tr}_{R_i}(T) = 0 \text{ \ for all \ }T \in \mathfrak{g}.
\end{eqnarray}
Consider the case $G=U(1)^{s}$, hence $\mathfrak{g}\cong \mathbb{R}^{s}$. Denote by $Q_{j}\in \mathfrak{g}^{*}$, $j=1,\ldots,N$, the weights of $\rho_{m}$. Then, the anomaly condition reads
\begin{eqnarray}
Q^{\mathrm{tot}}:=\sum_{j=1}^{N}Q_{j} = 0.
\end{eqnarray}
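For $G=U(1)^{s}$ the anomaly condition is just a componentwise sum of the charge vectors. A minimal sketch (the charge assignments below are standard illustrative examples, not models from this paper):

```python
# U(1)_A anomaly condition for G = U(1)^s: the total charge vector
# Q_tot = sum_j Q_j must vanish componentwise for a nonanomalous model.
def q_total(charges):
    """charges: list of weight vectors Q_j, each a tuple of length s."""
    return tuple(sum(col) for col in zip(*charges))

# CY example: U(1) with charges (1,1,1,1,1,-5) -> nonanomalous.
print(q_total([(1,), (1,), (1,), (1,), (1,), (-5,)]))  # (0,)
# Projective-space-type example: five charge-1 fields -> anomalous.
print(q_total([(1,)] * 5))  # (5,)
```
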
We will be mainly interested in models where $Q^{\mathrm{tot}}\neq 0$. Then, the FI-theta parameter gets a 1-loop correction that depends logarithmically on the energy scale:
\begin{eqnarray}\label{zetaren}
\zeta_{\mathrm{ren}}(\mu):=\zeta_{\mathrm{bare}}+Q^{\mathrm{tot}}\log\left(\frac{\mu}{\Lambda}\right),
\end{eqnarray}
where we denote $t_{\mathrm{ren}}=\zeta_{\mathrm{ren}}-i\theta$.\footnote{The parameter $\theta$ does not receive quantum corrections; however, for anomalous models, its component along $Q^{\mathrm{tot}}$ can be redefined by a $U(1)_{A}$ rotation.} The parameters $t_{\mathrm{ren}}$, at fixed $\frac{\mu}{\Lambda}$, span what we will refer to as the quantum K\"ahler moduli space $\mathcal{M}_{K}$. When $Q^{\mathrm{tot}}=0$, $\mathcal{M}_{K}$ takes the form $(\mathbb{C}^{*})^{s}\setminus \Delta$, where $\Delta$ is a complex codimension $1$ discriminant. For the anomalous case $Q^{\mathrm{tot}}\neq0$, the structure of $\mathcal{M}_{K}$ can be more complicated because of the existence of Coulomb and mixed Coulomb-Higgs branches that are not singular. A case-by-case analysis is then required, but the classical phase space spanned by $t_{\mathrm{ren}}(\mu)$ gives a great deal of information about the Higgs branches, as we will review below. The Coulomb and mixed Coulomb-Higgs branches are determined by the twisted effective superpotential\footnote{The factor of $i$ inside the $\log$ term is necessary in order to match the energy scale $\Lambda$ with the one appearing in (\ref{zetaren}), as shown in \cite{hori2019notes}.}:
\begin{eqnarray}\label{effpotsigma}
\widetilde{W}_{\mathrm{eff}}(\sigma)=-t(\sigma)+2\pi i \rho_{W}(\sigma)-\sum_{j}Q_{j}(\sigma)\left(\log\left(i\frac{Q_{j}(\sigma)}{\Lambda}\right)-1\right),
\end{eqnarray}
where $\rho_{W}$ denotes the Weyl vector of $G$, which is set to $\rho_{W}\equiv 0$ for $G$ abelian. Then, the solutions of $\exp\frac{\partial\widetilde{W}_{\mathrm{eff}}(\sigma)}{\partial\sigma}=1$ establish the location and nature of the Coulomb and mixed Coulomb-Higgs branches; however, further analysis of the masses of the chiral fields is necessary for each solution, as performed in section \ref{sec:examples}. Define the cones $C_{I}\subset \mathfrak{g}^{*}$ as
\begin{eqnarray}
\mathrm{C}_{I}:=\mathrm{Span}_{\mathbb{R}_{\geq 0}}\left\{ Q_{j}:j\in I\right\}\subset \mathfrak{g}^{*},
\end{eqnarray}
i.e., the positive span of the vectors $Q_{j}$ with $j\in I\subset \{1,\ldots,N\}$. We then define the classical phase boundaries as the cones $C_{I}$ such that $\mathrm{dim}_{\mathbb{R}}(C_{I})=s-1$. Classically, the space of solutions to the D-term equations
\begin{eqnarray}\label{dterm}
\sum_{j=1}^{N}Q_{j}|\phi_{j}|^{2}=\zeta,
\end{eqnarray}
consists of points that break $G$ to a finite subgroup (i.e. points whose stabilizer is a finite subgroup of $G$) whenever $\zeta$ is in the interior of a phase. When $\zeta$ is generic but constrained to the interior of a phase boundary, $\zeta\in C_{I}$, the solutions to (\ref{dterm}) break $G$ to $U(1)\times \Gamma$, where $\Gamma$ is a finite group\footnote{In principle, $\Gamma$ can depend on the point we take as a solution to (\ref{dterm}), but we will not encounter this situation in this work, hence we ignore it.}. In such a case, we can integrate out $\{\phi_{j}\}_{j\in I}$ and we are left with a $U(1)\times \Gamma$ gauge theory with chiral matter $\{\phi_{j}\}_{j\not\in I}$. The superpotential in this local model must be induced from $W$ upon integrating out $\{\phi_{j}\}_{j\in I}$, but the details of the resulting superpotential are irrelevant for our purposes. Denote by $u\in \mathfrak{g}$ the integral generator of $U(1)\subset G$, i.e., $u$ satisfies $\exp(2\pi i n u)=1\in G$ for any $n\in\mathbb{Z}$. We then define what we call a local GLSM at the phase boundary $C_{I}$, given by the following data
\begin{flushleft}
\begin{eqnarray}\label{localGLSM}
&&\textbf{matter \ \ } \{\phi_{j}\}_{j\not\in I}\nonumber\\
&&\textbf{gauge group \ \ }U(1)\times \Gamma, \qquad \mathfrak{u}(1)=\mathbb{R}u\nonumber\\
&&\textbf{FI-theta parameter \ \ } t_{\mathrm{ren}}(u)\nonumber\\
\end{eqnarray}
\end{flushleft}
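The running (\ref{zetaren}), restricted to the direction $u$, is what enters the FI parameter $t_{\mathrm{ren}}(u)$ of the local model above. As a numerical sketch (illustrative values only):

```python
import math

# One-loop running of the FI parameter:
#   zeta_ren(mu) = zeta_bare + Q_tot * log(mu / Lambda).
def zeta_ren(zeta_bare, q_tot, mu, lam):
    return zeta_bare + q_tot * math.log(mu / lam)

# For an anomalous model (q_tot > 0), zeta_ren grows toward the UV
# (mu >> Lambda) and decreases toward the IR, sweeping through phases.
print(zeta_ren(0.0, 5.0, mu=math.e, lam=1.0))  # 5.0
```
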
To a GLSM, for fixed values of $t_{\mathrm{ren}}$ and $W$, we can assign a category of B-branes. Such a category, denoted by $MF_{G}(W)$, consists of pairs of objects $(\mathcal{B},L_{t})$, which we describe in the following. Let us start with what we will call the \emph{algebraic data}, consisting of a quadruple $\mathcal{B}=(M,\rho_{M},R_{M},\mathbf{T})$:
\begin{itemize}
\item \textbf{Chan-Paton vector space}: a $\mathbb{Z}_{2}$-graded, finite dimensional free $\mathrm{Sym}(V^\vee)$ module denoted by $M=M_{0}\oplus M_{1}$.
\item \textbf{Boundary gauge and (vector) R-charge representation}: $\rho_{M}: G\rightarrow GL(M)$, and $R_{M}:U(1)_{V}\rightarrow GL(M)$ commuting and even representations, where the weights of $R_{M}$ are allowed to be rational.
\item \textbf{Matrix factorization of $W$}: Also known as the tachyon profile, a $\mathbb{Z}_2$-odd endomorphism $\mathbf{T} \in \mathrm{End}^{1}_{\mathrm{Sym}(V^{\vee})}(M)$ satisfying $\mathbf{T}^{2}=W\cdot\mathrm{id}_{M}$.
\end{itemize}
The group actions $\rho_{M}$ and $R_{M}$ must be compatible with $\rho_{m}$ and $R$, i.e.
for all $\lambda\in U(1)_{V}$ and $g\in G$,
\begin{equation}\label{rhodef}
\begin{aligned}
R_{M}(\lambda)\mathbf{T}(R(\lambda)\phi)R_{M}(\lambda)^{-1} & = \lambda \mathbf{T}(\phi) , \\
\rho_{M}(g)^{-1}\mathbf{T}(\rho_{m}(g)\cdot \phi)\rho_{M}(g) & = \mathbf{T}(\phi) .
\end{aligned}
\end{equation}
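As a minimal illustration of the condition $\mathbf{T}^{2}=W\cdot\mathrm{id}_{M}$, ignoring the gauge and R-charge data, one can check numerically the standard rank-$(1|1)$ factorization of $W=xy$:

```python
# Z2-odd tachyon profile for W = x*y: T maps M_0 -> M_1 by
# multiplication by y and M_1 -> M_0 by multiplication by x.
def T(x, y):
    return [[0, x],
            [y, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Matrix factorization condition: T^2 = W * id, checked at sample points.
for x, y in [(2, 3), (-1, 5)]:
    W = x * y
    assert matmul(T(x, y), T(x, y)) == [[W, 0], [0, W]]
print("T^2 = W * id verified")
```
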
The other piece of data that we need, termed $L_{t}$, is a profile for the vector multiplet scalar.
This data consists of a gauge-invariant middle-dimensional subvariety $L_{t}\subset \mathfrak{g}_{\mathbb{C}}$ of the complexified Lie algebra of $G$, or equivalently its intersection $L\subset\mathfrak{t}_{\mathbb{C}}$ with the (complexified) Cartan subalgebra $\mathfrak{t}$, which we refer to as the contour. An \textbf{admissible contour} is a gauge invariant, middle dimensional $L_{t}$ that is a continuous deformation of the real contour $L_{\mathbb{R}}:=\{\Im\sigma=0\}$, where we denote by $\sigma\in \mathfrak{t}_{\mathbb{C}}$ a point in $\mathfrak{t}_{\mathbb{C}}$, such that the imaginary part of the boundary effective twisted superpotential $\widetilde{W}_{\text{eff},q}:\mathfrak{t}_{\mathbb{C}}\rightarrow \mathbb{C}$
\begin{equation}\label{twistedbdry}
\widetilde{W}_{\text{eff},q}(\sigma):= \left(\sum_{\alpha>0}\pm i\pi\,\alpha\cdot\sigma\right)-\left(\sum_j (Q_j(\sigma))\left(\log\left(\frac{iQ_j(\sigma)}{\Lambda}\right)-1\right)\right)-t(\sigma)+2\pi i q(\sigma)
\end{equation}
approaches $+\infty$ in all asymptotic directions of $L_{t}$ and for all the weights $q\in \mathfrak{t}^{*}$ of $\rho_{M}$.
Signs in the sum over positive roots $\alpha$ of $G$ depend on the Weyl chamber in which $\Re\sigma$ lies; this sum is absent in abelian GLSMs. When $\zeta_{\mathrm{ren}}$ is deep inside a phase, i.e. $|\zeta_{\mathrm{ren}}|\gg 1$, there always exists a unique (up to homotopy) admissible $L_{t}$ for a given quadruple $\mathcal{B}$, so we can drop this piece of data and regard the image of $\mathcal{B}$ in the gauge decoupling limit, i.e. the limit where the renormalized gauge coupling $g_{\mathrm{ren}}(\mu)$ is taken to infinity while keeping $t_{\mathrm{ren}}(\mu)$ fixed, as an object $\pi_{\zeta_{\mathrm{ren}}}(\mathcal{B})$ of a category which we denote $\mathcal{D}_{\zeta_{\mathrm{ren}}}$. In general, this category has a decomposition of the form
\begin{eqnarray}
\mathcal{D}_{\zeta_{\mathrm{ren}}}=\langle D(Y_{\zeta_{\mathrm{ren}}},W|_{\zeta_{\mathrm{ren}}}),E_{1},\ldots,E_{k} \rangle,
\end{eqnarray}
where $D(Y_{\zeta_{\mathrm{ren}}},W|_{\zeta_{\mathrm{ren}}})$ represents the B-brane category of the Higgs branch. More precisely, denoting by $\mu: V\rightarrow\mathfrak{g}^{*}$ the moment map associated to $\rho_{m}$, we identify $Y_{\zeta_{\mathrm{ren}}}=\mu^{-1}(\zeta_{\mathrm{ren}})/G$ and $W|_{\zeta_{\mathrm{ren}}}$ with $W$ restricted to $Y_{\zeta_{\mathrm{ren}}}$. When the stabilizer in $G$ of all points in $Y_{\zeta_{\mathrm{ren}}}\cap dW^{-1}(0)$ is finite, $D(Y_{\zeta_{\mathrm{ren}}},W|_{\zeta_{\mathrm{ren}}})$ denotes the category of B-branes of the hybrid model\footnote{We use the term hybrid model to refer to Landau-Ginzburg (LG) orbifold models with a nontrivial target space. In the mathematics literature they are usually called just LG models.} described by the pair $(Y_{\zeta_{\mathrm{ren}}},W|_{\zeta_{\mathrm{ren}}})$ (mathematically this corresponds to coherent matrix factorizations \cite{buchweitz1987maximal,orlov2004triangulated,efimovcoherent,ballard2012resolutions}). We write $E_{1},\ldots,E_{k}$ for the Coulomb or mixed Coulomb-Higgs vacua that may arise in the chosen phase. In the case of nonanomalous GLSMs, the $E_{1},\ldots,E_{k}$ pieces are absent in every phase. The map $\pi_{\zeta_{\mathrm{ren}}}$, induced by RG flow, is in general a projection $\pi_{\zeta_{\mathrm{ren}}}:MF_{G}(W)\rightarrow \mathcal{D}_{\zeta_{\mathrm{ren}}}$ and there is no natural lift. A natural question is then how $\mathcal{D}_{\zeta_{\mathrm{ren}}}$ and $\mathcal{D}_{\zeta'_{\mathrm{ren}}}$ are related if $\zeta_{\mathrm{ren}},\zeta'_{\mathrm{ren}}$ belong to distinct cones in $\mathcal{M}_{K}$ (if they belong to the same cone, the categories are trivially equivalent). This problem was solved for $G$ abelian in \cite{Herbst:2008jq} and, in the mathematics literature, in \cite{segal2011equivalences,halpern2015derived,ballard2019variation} for general $G$. 
A physics perspective on the case of general $G$ and anomalous GLSMs can be found in \cite{Clingempeel:2018iub,Hori:2013ika,hori2019notes,eager2017beijing}. Here, we mostly follow \cite{Clingempeel:2018iub,hori2019notes}. Suppose that $\zeta_{\mathrm{ren}},\zeta'_{\mathrm{ren}}$ belong to opposite sides of a phase boundary $C_{I}$. We can then define two families of subcategories $\mathcal{W}_{\pm,l}\subset MF_{G}(W)$, which we call window categories, labeled by $l\in \mathbb{Z}$. These categories are defined as the objects of $MF_{G}(W)$ such that the weights $q$ of $\rho_{M}|_{U(1)}$, for the $U(1)\subset G$ generated by the previously defined $u$, are constrained; more precisely, we define:
\begin{eqnarray}\label{constrwin}
&&\textbf{Small window}:\qquad|\theta(u)+2\pi q(u)|<\pi \mathrm{min}(N_{u,\pm})\nonumber\\
&&\textbf{Big window}:\qquad|\theta(u)+2\pi q(u)|<\pi \mathrm{max}(N_{u,\pm})
\end{eqnarray}
where $N_{u,\pm}:=\sum_{j}(Q_{j}(u))^{\pm}$ and $(x)^{\pm}:=(|x|\pm x)/2$. The constraints (\ref{constrwin}) thus define the window subcategories: $\mathcal{W}_{+,l}$ (resp. $\mathcal{W}_{-,l}$) corresponds to the objects $\mathcal{B}$ such that the weights of $\rho_{M}$ satisfy the big (resp. small) window constraint for $l=\lfloor\frac{\theta(u)}{2\pi}\rfloor$. Without loss of generality, fix a coordinate system such that the wall $C_{I}$ is at $\zeta_{\mathrm{ren}}(u)=0$ and $Q^{\mathrm{tot}}(u)\geq 0$. Then, we can take $\zeta_{\mathrm{ren}}\gg 1$ and $\zeta'_{\mathrm{ren}}\ll -1$. We then expect that there is an equivalence of categories associated with each side of the wall:
\begin{eqnarray}\label{derwin}
\mathcal{W}_{+,l}\cong D(Y_{\zeta_{\mathrm{ren}}(u)},W_{\zeta_{\mathrm{ren}}(u)})\cong \langle D(Y_{\zeta'_{\mathrm{ren}}(u)},W_{\zeta'_{\mathrm{ren}}(u)}),E_{1},\ldots,E_{k} \rangle
\end{eqnarray}
for all $l$ where $k=|N_{u,+}-N_{u,-}|$. The objects $E_{i}$ represent massive vacua from the Coulomb or mixed Coulomb/Higgs branches. When the model is nonanomalous $\mathcal{W}_{+,l}=\mathcal{W}_{-,l}$ and (\ref{derwin}) gives an equivalence of Higgs branch categories. For anomalous models there is a fully faithful map $\mathcal{W}_{-,l}\rightarrow \mathcal{W}_{+,l}$ and we have a refinement of (\ref{derwin}):
\begin{eqnarray}\label{derwinsmall}
\mathcal{W}_{-,l}\cong D(Y_{\zeta'_{\mathrm{ren}}(u)},W_{\zeta'_{\mathrm{ren}}(u)}).
\end{eqnarray}
We remark that the classical phase boundaries defined by the cones $C_{I}$ do not give the full picture of the phase space. When taking into account quantum corrections, new walls can appear and some classical ones can be lifted. However, the equivalences (\ref{derwin}) and (\ref{derwinsmall}) are always valid when taking the gauge decoupling limit for $|\zeta_{\mathrm{ren}}(u)|$ sufficiently large. Note that the definition of the window categories comes precisely from the study of B-branes in the local GLSM (\ref{localGLSM}). Because of this, and the previous remark on quantum corrected phase boundaries, the vacua $E_{j}$ in (\ref{derwin}) can be further decomposed into smaller vacua as $\zeta'_{\mathrm{ren}}(u)$ becomes more and more negative, but the simple analysis of the local GLSM will not be enough to see this effect and one has to analyze the global behavior of the GLSM. We will see examples of such a situation in section \ref{sec:examples}.
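The combinatorics entering the window constraints (\ref{constrwin}) is elementary; as a minimal sketch (with a hypothetical charge assignment, not tied to any specific model in the text), one can tabulate $N_{u,\pm}$ and the number of integral weights each window admits for generic $\theta(u)$:

```python
# Sketch: window sizes from a list of U(1) charges Q_j(u).
# (x)^+ = (|x|+x)/2 = max(x,0) and (x)^- = (|x|-x)/2 = max(-x,0),
# so N_{u,+} (resp. N_{u,-}) is the sum of positive (resp. negative) parts.
def window_sizes(charges):
    n_plus = sum(max(q, 0) for q in charges)
    n_minus = sum(max(-q, 0) for q in charges)
    # |theta/(2*pi) + q| < N/2 is an interval of length N, so for
    # generic theta it contains exactly N integral weights q.
    small = min(n_plus, n_minus)   # small window
    big = max(n_plus, n_minus)     # big window
    return n_plus, n_minus, small, big

# Hypothetical charge vector: four charges +1 and two charges -1
print(window_sizes([1, 1, 1, 1, -1, -1]))  # (4, 2, 2, 4)
```

The difference between the big and small window sizes, $|N_{u,+}-N_{u,-}|$, reproduces the number $k$ of extra vacua $E_{i}$ appearing in (\ref{derwin}).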
\section{\label{sec:proposal}Proposal for GLSM for HPD of projective varieties}
We start with a GLSM\footnote{For all of our general analysis, the R-charge
assignment is irrelevant, so it will be left unspecified.}
$\mathcal{T}_{X}=(G,\rho_{m}: G\rightarrow GL(V),W,t_{\mathrm{ren}},R)$ and
assume it has a large volume point in $\mathcal{M}_{K}$ where we can identify a
geometric phase whose category of B-branes at low energies is equivalent to
$D(X)$. In the following, we will omit the subscript `$\mathrm{ren}$' from all
the FI parameters to avoid cluttering. Consider a map $f:X\rightarrow
\mathbb{P}(S)$. In order for this map to be well defined on all of $X$, it must
be base point free, i.e. $\{x\in X : f(x)=0\}=\emptyset$.
Since there is a $G$-action on $S$, one can see that there exists a map
$\alpha:\mathrm{Hom}(G,\mathbb{C}^{*})\rightarrow \mathrm{Pic}(X)$
induced by the $G$-equivariant line bundle $V \times \mathbb{C}_{\mu}$ over $V$ for any $\mu \in \mathrm{Hom}(G,\mathbb{C}^{*})$,
and hence there
exists a character $\chi$ of $G$ such that
\begin{equation}\label{chi}
\alpha(\chi)= f^{*}\mathcal{O}_{\mathbb{P}(S)}(1).
\end{equation}
The character $\chi$ defines a one-dimensional representation $\mathbb{C}_\chi$ of $G$. In the context of the GLSM $\mathcal{T}_{X}$, we can write the image of
$f$ as $[f_{0}(X),\ldots,f_{n}(X)]$, where $n+1=\mathrm{dim}(S)$ and each $f_{j}(X)$
is a $\chi$-equivariant function, i.e. $g\in G$ acts on $f_j(x)$ as multiplication
by $\chi(g)$.
For later use, by fixing a basis for the
generators of $G$, each $f_j$ becomes a $G/U(1)_{l}$-invariant function for
some $U(1)_{l}\subseteq G$ and transforms under $U(1)_{l}$ with the weight\footnote{We can also consider $f_{j}(x)$'s with different weights. This would
be interpreted as a map from $X$ to a weighted projective space
$\mathbb{WP}(S)$, but in this work we only consider the case $\mathbb{P}(S)$.} $Q$ determined by $\chi$. The functions $f_{j}(X)$ depend on some fields $X_{a}$, representing the coordinates of $X$; however, these are not
necessarily fundamental fields: in general they are polynomials in the
coordinates $\phi_{\alpha}$, $\alpha=1,\ldots, N=\mathrm{dim}(V)$ of $V$.
Define $\zeta_{l}:=\zeta(h)$ where $h$ is the integral generator of
$\mathfrak{u}(1)_{l}$ and assume without loss of generality that the large volume phase is located at
a region $\zeta\gg 1$, in particular with $\zeta_{l}\gg 1$. We will require that
our GLSM $\mathcal{T}_{X}$ satisfies the condition:
\begin{eqnarray}\label{cdt1}
Y_{\zeta\gg 1}\cap \{\phi_{\alpha}\in V^{\vee} : f(X(\phi))=0\}=\emptyset.
\end{eqnarray}
Therefore we write\footnote{Even though we assume the large volume phase is weakly coupled, our arguments should carry over to cases where the NLSM on $X$ is realized nonperturbatively such as in \cite{Hori:2006dk}.}
\begin{eqnarray}
X_{\zeta_{l}\gg 1}:=Y_{\zeta\gg 1}\cap dW^{-1}(0),
\end{eqnarray}
where we identify $X$ with $X_{\zeta_{l}\gg 1}$, at the point in the K\"ahler moduli determined by $t$, with $\zeta=\Re(t)\gg 1$. Then, we fix all the FI parameters corresponding to $G/U(1)_{l}$ and we write:
\begin{eqnarray}\label{Xcats}
D(X_{\zeta_{l}\gg 1})\cong \langle D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1}),E_{1},\ldots,E_{k} \rangle
\end{eqnarray}
where $E_{i}$ denote the Coulomb/mixed vacua deep in the phase where $\zeta_{l}\ll -1$. We construct the following GLSM
\begin{eqnarray}
\mathcal{T}_{\mathcal{X}}=(\widehat{G}=G\times U(1)_{s+1},\hat{\rho}_{m}: \widehat{G}\rightarrow GL(V\oplus V'),\widehat{W},\widehat{R}),
\end{eqnarray}
where $s=\mathrm{dim}(\mathfrak{z})$ and the representation $V'$ of $
\widehat{G}$ is given by
$V'=\mathbb{C}_{(\chi^{-1},-1)}\oplus S^\vee$
where $\chi$ is defined by \eqref{chi}, $U(1)_{s+1}$ acts on $S^\vee$ with weight $1$ and $G$ acts on $S^\vee$ trivially. We denote the
coordinates of $V'$ as $(P,S_{0},\ldots,S_{n})$. The superpotential
$\widehat{W}$ is given by
\begin{eqnarray}
\widehat{W}=W+P\sum_{j=0}^{n}S_{j}f_{j}(X).
\end{eqnarray}
Here we denoted by $f_{j}(X)$ the components of the image of the map $f:X\rightarrow \mathbb{P}(S)$. The GLSM $\mathcal{T}_{\mathcal{X}}$ is quite analogous to the model studied in \cite{ballard2017homological,ballard2014derived}. We interpret $\mathcal{T}_{\mathcal{X}}$ as the GLSM of the universal hyperplane section of $X$. To be more precise, by construction $\mathcal{T}_{\mathcal{X}}$ will have a large volume phase at $\boldsymbol{\zeta}\gg 1$, $\boldsymbol{\zeta}:=(\zeta_{l},\zeta')$ (with the other FI parameters fixed to the large volume phase of $X$), where $\zeta'$ is the FI parameter corresponding to $U(1)_{s+1}$. The F-term equations
\begin{eqnarray}
\frac{\partial\widehat{W}}{\partial S_{j}}=0
\end{eqnarray}
imply that $f_{j}(X)=0$ for all $j$ if $P\neq 0$. Then, the moment map equation for $\zeta_{l}\gg 1$ and the condition (\ref{cdt1}) imply that this is not possible, so $P$ must vanish. Therefore, we have $P=0$ on the Higgs branch at $\boldsymbol{\zeta}\gg 1$, and it is easy to see that $\widehat{Y}\cap d\widehat{W}^{-1}(0)$ reduces to $\mathcal{X}$. Therefore we identify this phase with the NLSM with target $\mathcal{X}_{\boldsymbol{\zeta}\gg 1}$. Consider now the phase $(\zeta_{l}\ll -1,\zeta'\gg 1)$. Then we have the following correspondence of B-brane categories
\begin{eqnarray}
D(\mathcal{X}_{\boldsymbol{\zeta}\gg 1})\cong \langle D(\widehat{Y}_{\zeta_{l}\ll -1},\widehat{W}_{\zeta_{l}\ll -1}),C_{1},\ldots,C_{k'} \rangle,
\end{eqnarray}
where $C_{i}$ denote the Coulomb/mixed vacua deep in the phase $(\zeta_{l}\ll -1,\zeta'\gg 1)$ and $\widehat{Y}_{\zeta_{l}\ll -1}$ is the appropriate symplectic quotient associated with the D-terms of the GLSM $\mathcal{T}_{\mathcal{X}}$. The vacua $C_{i}$, $i=1,\ldots,k'$ can be computed using the local model (\ref{localGLSM}) at the phase boundary $(\zeta_{l}=0,\zeta'\in \mathbb{R}_{\geq 0})$. Then, we claim
\begin{eqnarray}
\mathcal{C}\cong D(\widehat{Y}_{\zeta_{l}\ll -1},\widehat{W}_{\zeta_{l}\ll -1}),
\end{eqnarray}
where $\mathcal{C}$ is the HPD category of $X$ with respect to $\mathcal{L}$ and the Lefschetz decomposition described below. In general, in our examples, we will be able to find an explicit description of $D(\widehat{Y}_{\zeta_{l}\ll -1},\widehat{W}_{\zeta_{l}\ll -1})$ in terms of a hybrid model. Denoting by $\mathcal{W}^{U(1)_{l}}_{r,\pm}$ the big/small window categories associated with the $U(1)_{l}$ subgroup of $\widehat{G}$ on $\mathcal{T}_{\mathcal{X}}$, we can rephrase this result in terms of them
\begin{eqnarray}
\mathcal{C}\cong \mathcal{W}^{U(1)_{l}}_{r,-}\hookrightarrow \mathcal{W}^{U(1)_{l}}_{r,+}\cong D(\mathcal{X}_{\zeta_{l}\gg 1})\qquad \text{ \ for all \ }r\in \mathbb{Z},
\end{eqnarray}
where $r$ corresponds to the choice of $\theta_{l}=\theta(h)$. The Lefschetz decomposition induced by our construction, we claim, is the one determined by the center
\begin{eqnarray}\label{lefscenter}
\mathcal{A}_{0}= \langle D(X_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1}),E_{1},\ldots,E_{k-k'} \rangle,
\end{eqnarray}
therefore a Lefschetz decomposition is imposed on us by construction. This is exactly the same situation as in \cite{ballard2017homological,ballard2014derived}. Generically the phase space will look like figure \ref{fig:phase}.
We remark that there are two more phases that we can distinguish in the (classical) $\boldsymbol{\zeta}$ space. One is the phase $(Q\zeta'-\zeta_{l}\ll -1,\zeta'\ll -1)$, which has an empty Higgs branch (since $P\neq0$), and the other is $(Q\zeta'-\zeta_{l}\gg 1,\zeta'\ll -1)$, whose B-brane category embeds into $\mathcal{C}$. If the category $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})$ in (\ref{Xcats}) is empty, the phase boundary $(\zeta_{l},\zeta')=(0,\mathbb{R}_{\leq 0})$ will be lifted by quantum effects or not present at all (for an explanation see footnote \ref{footbdry}), and therefore the Higgs branch of this latter phase is trivially equivalent to $\mathcal{C}$. We will see both cases occurring in our examples. The analysis of the Coulomb and mixed Coulomb-Higgs vacua is more involved and a classical analysis is not enough; therefore it is not possible to provide a full picture at this level of generality. We provide a fully detailed analysis for each family of examples in section \ref{sec:examples}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.70]
\draw[thin,->] (0,0) -- (4,0) node[anchor=north west]{$\zeta_l$};
\draw[thin,->] (0,0) -- (0,4) node[anchor=south east]{$\zeta'$};
\draw [ultra thick] (0,0) -- (0,3.5);
\draw [ultra thick] (0,0) -- (3.5,0);
\draw [ultra thick] (0,0) -- (-3.5,-3.5);
\draw [ultra thick, dashed] (0,0) -- (-4,0);
\draw [thick, dotted, ->] (0,0) -- (-1,-4);
\node at (2.5,2.5) {$D(\mathcal{X})$};
\node at (-2.5,2.5) {${\cal C}$};
\node at (-2.5,-1.25) {${\cal C}'$};
\node at (1,-2.5) {$\varnothing$};
\end{tikzpicture}
\caption{Higgs branches of the GLSM $\mathcal{T}_{\mathcal{X}}$}\label{fig:phase}
{\footnotesize \begin{flushleft} The theory $\mathcal{T}_{\mathcal{X}}$ has a geometric phase realizing the universal hyperplane section $\mathcal{X}$ and a LG phase realizing the HPD category $\mathcal{C}$ (assuming that the Lefschetz decomposition is nontrivial). When $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})$ is empty, $\mathcal{C}' \cong \mathcal{C}$, otherwise $\mathcal{C}'$ is a subcategory of $\mathcal{C}$. The dashed arrow shows the direction of RG flow. \end{flushleft} }
\end{figure}
\subsection{\label{sec:linearsec} Taking linear sections}
An important feature of HPD is the behavior of $\mathcal{C}$ under the intersection of $X$ with linear sections $L\subset S^{\vee}$. We can just modify the GLSM $\mathcal{T}_{\mathcal{X}}$ accordingly. From the matter structure of $\mathcal{T}_{\mathcal{X}}$, it is clear that in the large volume phase $(\zeta_{l}\gg 1,\zeta'\gg 1)$ we can identify the fields $S_{0},\ldots, S_{n}$ with the homogeneous coordinates of $\mathbb{P}(S^{\vee})$. Therefore, if $\mathrm{dim}L=r\leq n$, denote a basis of $L$ as $\mathbf{l}^{*}_{a}$, $a=1,\ldots,r$, and write the rank $r$ matrix $m_{a,i}:=\mathbf{l}^{*}_{a}(\mathbf{e}_{i})$, where $\mathbf{e}_{i}$, $i=0,\ldots,n$ is a basis of $S$. Then, we propose the GLSM $\mathcal{T}_{\mathcal{X}_{L}}$ given by
\begin{eqnarray}
\mathcal{T}_{\mathcal{X}_{L}}=(\widehat{G},\hat{\rho}_{m}: \widehat{G}\rightarrow GL(V\oplus V'_{L}),\widehat{W}_{L},\widehat{R}_{L}),
\end{eqnarray}
where $V'_{L}=\mathbb{C}_{(\chi^{-1},-1)}\oplus L$ with $\chi$ given by \eqref{chi}. We denote the coordinates of $V'_L$ as $(P,S_{1},\ldots,S_{r})$. The superpotential
$\widehat{W}_{L}$ is given by
\begin{eqnarray}
\widehat{W}_{L}=W+P\sum_{a,j}S_{a}m_{a,j}f_{j}(X),
\end{eqnarray}
then the classical phase boundaries (in the $\boldsymbol\zeta$ space) of $\mathcal{T}_{\mathcal{X}_{L}}$ are the same as those of the $\mathcal{T}_{\mathcal{X}}$ model analyzed previously. However, the Higgs branches are different; they become the following:
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\begin{tikzpicture}[scale=0.60]
\draw[thin,->] (0,0) -- (4,0) node[anchor=north west]{$\zeta_l$};
\draw[thin,->] (0,0) -- (0,4) node[anchor=south east]{$\zeta'$};
\draw [ultra thick] (0,0) -- (0,3.5);
\draw [ultra thick] (0,0) -- (3.5,0);
\draw [ultra thick] (0,0) -- (-3.5,-3.5);
\draw [ultra thick, dashed] (0,0) -- (-4,0);
\node at (2.5,2.5) {$D(\mathcal{X}_L)$};
\node at (-2.5,2.5) {$D(Z_{L})$};
\node at (-2.5,-1.25) {${\cal C}'_L$};
\node at (1,-2.5) {$D(X_L)$};
\end{tikzpicture}
\caption{$\mathcal{T}_{\mathcal{X}_L}$}\label{fig:phase_restricted_a}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{tikzpicture}[scale=0.60]
\draw[thick,->] (-4,0) -- (4,0) node[anchor=north west]{$\zeta_1$};
\draw [thick, dotted, ->] (2.5,0.5) -- (-3.,0.5);
\node at (-4,1) {${\cal C}'_L$};
\node at (4.75,1) {$D(X_L)$};
\node at (0,-2.75) {$ $};
\end{tikzpicture}
\caption{$\mathcal{T}_{X_L}$}\label{fig:phase_restricted_b}
\end{subfigure}
\caption{Higgs branches of the GLSM $\mathcal{T}_{\mathcal{X}_L}$}\label{fig:phase_restricted}
{\footnotesize \begin{flushleft} (a)~The phase diagram of $\mathcal{T}_{\mathcal{X}_L}$. When restricted to a linear subspace $L \subset S^\vee$, a Higgs branch described by NLSM on $X_L$ appears. (b)~The strong coupling limit of $U(1)_{s+1}$ sees the correspondence between $X_L$ and $\mathcal{C}'_L$. The dashed arrow indicates the direction of RG flow (There is no RG flow if $X_L$ is Calabi-Yau). Note that this arrow can run in either direction, depending on the model. \end{flushleft}}
\end{figure}
\begin{itemize}
\item $(\zeta_{l}\gg 1,\zeta'\gg 1)$: An analogous reasoning shows that this phase becomes the universal hyperplane section of $X_{L}$, denoted by $\mathcal{X}_{L}=X\times_{\mathbb{P}(V)}\mathcal{H}_{L}$ where
\begin{eqnarray}
\mathcal{H}_{L}=\{(u,v)\in \mathbb{P}(V)\times \mathbb{P}(L) : v(u)=0 \}.
\end{eqnarray}
\item $(Q\zeta'-\zeta_{l}\ll -1,\zeta'\ll -1)$: In this phase we have the condition $P\neq 0$, and therefore $F_{a}:=m_{a,j}f_{j}(X)=0$ for all $a$. In this case, since the rank of $m_{a,j}$ is less than $n+1$, the solution set can be nonempty. Then, the F-term equation for $\phi_{\alpha}$ is given by
\begin{eqnarray}\label{ftermgeneral}
\frac{\partial W}{\partial\phi_{\alpha}}+P\sum_{a}S_{a}\frac{\partial F_{a}}{\partial \phi_{\alpha}}=0
\end{eqnarray}
If the Jacobian $\frac{\partial F_{a}}{\partial \phi_{\alpha}}$ has full rank on $Y_{\zeta\gg 1}$, then the value of $S_{a}$ is completely fixed by (\ref{ftermgeneral}). This is the case if we assume transversality of $X_{L}\subset Y_{\zeta\gg 1}$. In all our examples we see that smoothness of $X_{L}\subset Y_{\zeta\gg 1}$ moreover fixes $S_{a}\equiv 0$, so we can identify the Higgs branch with $X_{L}$; we propose that, in general, this should be the case. However, we do not have a proof for the general case. For instance, if $W\equiv 0$ this is clearly the case; alternatively, if we can identify $X$ with the zeroes of a section $s$ of a vector bundle $\mathcal{V}\rightarrow B$ over some manifold $B$, then we can write a superpotential of the form $W=p(s)$ where $p$ belongs to the dual bundle $\mathcal{V}^{\vee}$. Then, locally $W=p_{\alpha}s^{\alpha}$, and $X$ is described by the complete intersection $s^{\alpha}(x)=0$. The transversality condition for $X_{L}$ then reads
\begin{eqnarray}
\left\{ \mathrm{rk}\left[\frac{\partial (s_{\alpha},F_{a})}{\partial x_{i}}\right]< \mathrm{codim}X_{L} \right \}\cap X_{L}=\emptyset
\end{eqnarray}
and $Y_{\zeta\gg 1}$ will be identified with $\mathcal{V}^{\vee}\rightarrow B$. Then, in such cases, (\ref{ftermgeneral}) will imply the vanishing of $p_{\alpha}$ and $S_{a}$.
\item $(\zeta_{l}\ll-1,\zeta'\gg 1)$: The Higgs branch in this phase corresponds to the hybrid model $(\widehat{Y}_{L,\zeta_{l}\ll -1},\widehat{W}_{L,\zeta_{l}\ll -1})$, and we identify $D(\widehat{Y}_{L,\zeta_{l}\ll -1},\widehat{W}_{L,\zeta_{l}\ll -1})\cong D(Z_{L})$ in the case that there is a variety $Z$ which we can identify with the HPD of $X$, namely $Z$ satisfies $D(Z)\cong \mathcal{C}$ (see section \ref{sec:section2}).
\item $(Q\zeta'-\zeta_{l}\gg 1,\zeta'\ll -1)$: Generically, the category of B-branes of this phase embeds into $D(\widehat{Y}_{L,\zeta_{l}\ll -1},\widehat{W}_{L,\zeta_{l}\ll -1})$. If the category $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})$ in (\ref{Xcats}) is empty, the Higgs phase will be identical to $(\widehat{Y}_{L,\zeta_{l}\ll -1},\widehat{W}_{L,\zeta_{l}\ll -1})$, i.e. the phase boundary $(\zeta_{l},\zeta')=(0,\mathbb{R}_{\leq 0})$ will be lifted by quantum effects or not present at all\footnote{\label{footbdry}This is simply because, in the case of $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})=\emptyset$, the $\zeta_{l}\ll-1$ phase in $\mathcal{T}_{X}$ (with all the other FI parameters positive) will consist only of Coulomb vacua or, more precisely, the category $\mathcal{W}_{-}$ will be empty and only $\mathcal{W}_{+}$ will prevail. This means that there exists a GLSM equivalent to $\mathcal{T}_{X}$ where all the charges $Q_{\alpha}(h)$ are positive, hence there is no phase boundary at $(\zeta_{l},\zeta')=(0,\mathbb{R}_{\leq 0})$ in $\mathcal{T}_{\mathcal{X}}$ and $\mathcal{T}_{\mathcal{X}_{L}}$.}. Let us call the category of B-branes of this phase $\mathcal{C}'_{L}$. More remarkably, we claim that this category is equivalent to $\mathcal{C}_{L}$, defined in section \ref{sec:section2}, whenever the RG flow, determined by the direction of $-Q^{\mathrm{tot}}$, points towards the $(Q\zeta'-\zeta_{l}\gg 1,\zeta'\ll -1)$ phase. We do not have a proof of this fact, but we check it on the families of examples we analyzed by computing the Witten indices of the different phases. In such a case
\begin{eqnarray}
\mathcal{C}'_{L}\cong\mathcal{C}_{L}\hookrightarrow D(X_{L}).
\end{eqnarray}
Moreover, if the phase boundary $(\zeta_{l},\zeta')=(0,\mathbb{R}_{\leq 0})$ is lifted by quantum effects or not present at all, we have that $D(Z_{L})= \mathcal{C}_{L}=\mathcal{C}'_{L}$. On the other hand, if $-Q^{\mathrm{tot}}$ points against the $(Q\zeta'-\zeta_{l}\gg 1,\zeta'\ll -1)$ phase we have
\begin{eqnarray}
\mathcal{C}_{L}\cong D(X_{L})\hookrightarrow \mathcal{C}'_{L}.
\end{eqnarray}
Finally, if the direction $-Q^{\mathrm{tot}}$ coincides exactly with the wall $\mathbb{R}_{\geq 0}\cdot(-Q,-1)$, we have
\begin{eqnarray}
\mathcal{C}_{L}\cong D(X_{L})\cong\mathcal{C}'_{L}.
\end{eqnarray}
\end{itemize}
In conclusion, generically the phase space will look like figure \ref{fig:phase_restricted_a}. Again, we omit a detailed analysis of the Coulomb and mixed Coulomb-Higgs vacua. Interestingly, in this case, we find that two phases of the GLSM $\mathcal{T}_{\mathcal{X}_{L}}$ can be identified with $X_{L}$ and $D(Z_{L})$ respectively (even though it may not be possible to identify the variety $Z$). This is a different situation from the case when the section $L$ was trivial, and is more reminiscent of the GLSM studied in \cite{rennemo2017fundamental}. So, our approach unifies the approaches of \cite{ballard2017homological,ballard2014derived} and \cite{rennemo2017fundamental}.
We can take the strong coupling limit of $U(1)_{s+1}$, while keeping $\zeta_{s+1}\ll -1$, and get a GLSM $\mathcal{T}_{X_L}$ with gauge group $G$ that realizes $D(X_L)$ and $\mathcal{C}'_L$ in two different phases (figure \ref{fig:phase_restricted_b}). Note that $\mathcal{C}'_L$ is equivalent to $D(Z_{L})$ when $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})$ in (\ref{Xcats}) is empty.
\section{\label{sec:examples}Examples}
In this section, we explicitly construct and analyze the GLSM $\mathcal{T}_{\mathcal{X}}$ realizing the universal hyperplane section and HPD of three families of embeddings, namely the linear embedding of projective space, the Veronese embedding and complete intersections. In each case, we will show that our proposal reproduces known results, such as those for the linear embedding, the double Veronese embedding and odd-dimensional quadrics, and makes predictions in more general cases. In all these examples we are also concerned with the Coulomb and mixed Coulomb-Higgs branches; our convention for the computation of these vacua is
\begin{equation}\label{eqssigma}
\exp\left(\frac{\partial \mu^{-1}\widetilde{W}_{\mathrm{eff}}(\sigma')}{\partial \sigma'}\right)=\exp\left(\frac{\partial \widetilde{W}_{\mathrm{eff}}(\sigma)}{\partial \sigma}\right)=1,
\end{equation}
where $\widetilde{W}_{\mathrm{eff}}(\sigma)$ is given in (\ref{effpotsigma}) and $\sigma':=\sigma/\mu$ is dimensionless. Then (\ref{eqssigma}) implies:
\begin{equation}\label{eqssigma2}
\prod_{j=1}^{N}Q_{j}(\sigma')^{Q_{j}^{\alpha}}=e^{-(t')^{\alpha}},\qquad \alpha=1,\ldots,\mathrm{dim}(\mathfrak{t}),
\end{equation}
where
\begin{equation}
t':=t_{\mathrm{ren}}+i\frac{\pi}{2}Q^{\mathrm{tot}}.
\end{equation}
In the following we will simply make the replacement
\begin{equation}
\sigma'\rightarrow\sigma.
\end{equation}
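As a quick numerical sanity check of (\ref{eqssigma2}) (a sketch with hypothetical values, not part of the derivation): for a $U(1)$ theory with $n$ fields of charge $+1$ and $n-m$ fields of charge $-1$, the product on the left-hand side collapses to $(-1)^{n-m}\sigma^{m}$, so any root of $\sigma^{m}=(-1)^{n-m}e^{-t'}$ should solve the equation:

```python
# Sketch (hypothetical values): verify that a root of
# sigma^m = (-1)^(n-m) q solves prod_j Q_j(sigma)^{Q_j} = q
# for the charge vector (+1)^n, (-1)^(n-m).
n, m = 5, 3
q = 0.7 + 0.2j                             # stands in for e^{-t'}
sigma = ((-1) ** (n - m) * q) ** (1 / m)   # one of the m roots
lhs = sigma ** n * (-sigma) ** (-(n - m))  # prod_j Q_j(sigma)^{Q_j}
assert abs(lhs - q) < 1e-10
```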
\subsection{Linear embedding of projective space}
Let $V$ be an $n$-dimensional vector space and $E$ an $m$-dimensional subspace. Then we have the linear embedding
\begin{equation}\label{linear_embed}
\mathbb{P}(E) \hookrightarrow \mathbb{P}(V).
\end{equation}
For a fixed basis $\{e_1,e_2,\cdots,e_n\}$ of $V$, let $X_1,\cdots,X_n$ denote the corresponding coordinates. Assume that $E$ is defined by $n-m$ linear equations:
\[
L_\alpha(x) = \sum_{j=1}^n a_{\alpha,j} x_j=0, \qquad \alpha=1,\cdots,n-m.
\]
The matrix $a_{\alpha,j}$ has rank $n-m$. The embedding \eqref{linear_embed} can be realized by a $U(1)$ GLSM with the following matter content and charges (i.e. the weights of the $U(1)$ representation):
\begin{equation}\label{GLSM_linear_embed}
\begin{array}{ccccccc}
X_1 & X_2 & \cdots & X_n & P_1 & \cdots & P_{n-m} \\
1 & 1 & \cdots & 1 & -1 & \cdots & -1
\end{array}
\end{equation}
together with a superpotential:
\[
W = \sum_{\alpha=1}^{n-m} P_\alpha L_\alpha(X).
\]
When the FI parameter $\zeta \gg 1$, the low energy degrees of freedom are described by NLSM on $\mathbb{P}(E)$.
When the FI parameter $\zeta \ll -1$, there is no Higgs branch; instead, there are $m$ distinct Coulomb vacua satisfying
\[
\sigma^m = (-1)^{n-m} q,
\]
where $q = \exp(-t')$. The number of Coulomb vacua is consistent with the semiorthogonal decomposition determined by \eqref{linear_embed}
\[
D(\mathbb{P}(E)) = \langle \mathcal{A}_0, \mathcal{A}_1(1), \cdots , \mathcal{A}_{m-1}(m-1) \rangle
\]
with $\mathcal{A}_a = \mathcal{O}$ for all $a=0,\ldots,m-1$. By integrating out the massive fields, the GLSM \eqref{GLSM_linear_embed} is the same as the GLSM realizing $\mathbb{P}(E) \cong \mathbb{P}^{m-1}$, namely the $U(1)$ GLSM with the following matter content and charges:
\begin{equation}\label{GLSM_P}
\begin{array}{cccc}
X_1 & X_2 & \cdots & X_m \\
1 & 1 & \cdots & 1
\end{array}
\end{equation}
and without a superpotential.
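The statement that $\sigma^m = (-1)^{n-m} q$ has exactly $m$ distinct Coulomb vacua for generic $q\neq 0$ is elementary, but it can be checked numerically; the following sketch (with hypothetical values of $n$, $m$, $q$) counts the roots:

```python
import numpy as np

# Sketch: sigma^m = (-1)^(n-m) q has m distinct roots for generic q != 0,
# matching the m Coulomb vacua and the length of the semiorthogonal
# decomposition of D(P(E)) with dim E = m.
def coulomb_vacua(n, m, q):
    # roots of the polynomial sigma^m - (-1)^(n-m) q = 0
    coeffs = [1.0] + [0.0] * (m - 1) + [-((-1.0) ** (n - m)) * q]
    return np.roots(coeffs)

vacua = coulomb_vacua(n=5, m=3, q=0.7)   # hypothetical values
assert len(vacua) == 3
# the roots are the distinct cube roots of a nonzero number
assert min(abs(vacua[i] - vacua[j])
           for i in range(3) for j in range(i + 1, 3)) > 1e-8
```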
The homological projective dual of \eqref{linear_embed}, which coincides with the classical projective dual of $\mathbb{P}(E)$, is given by
\begin{equation}\label{linear_embed_HPD}
\mathbb{P}(E^\perp) \hookrightarrow \mathbb{P}(V^\vee).
\end{equation}
Fix a basis for $E$ and denote it by $\{v_1,\cdots,v_m\}$ with $v_a = \sum_{j=1}^n b_{a,j} e_j$, where $b_{a,j}$ has full rank and $\sum_{j}a_{\alpha,j}b_{a,j}=0$ for all $a,\alpha$. Then,
as discussed in section \ref{sec:proposal}, to get the GLSM $\mathcal{T}_{\mathcal{X}}$ describing the HPD of the linear embedding \eqref{linear_embed}, we first extend the GLSM \eqref{GLSM_P} to a GLSM with gauge group $U(1) \times U(1)$ and the following matter content and charges:
\begin{equation}\label{firstuniversal}
(X_{i},P_{\alpha},P,Y_{j})\in \mathbb{C}(1,0)^{\oplus n}\oplus\mathbb{C}(-1,0)^{\oplus n-m}\oplus\mathbb{C}(-1,-1)\oplus \mathbb{C}(0,1)^{\oplus n}
\end{equation}
where $\mathbb{C}(r,s)$ denotes an irreducible representation of $U(1)\times U(1)$ of weights $(r,s)\in \mathbb{Z}^{2}$.
The superpotential is given by
\[
\widehat{W} =\sum_{j,\alpha} P_{\alpha}a_{\alpha,j}X_j+ P\sum_{j}X_{j} Y_j.
\]
The fields $Y_i$ serve as homogeneous coordinates of $\mathbb{P}(V^\vee)$. It is easy to see that the phases $(\zeta_{2}-\zeta_{1} \gg 1,\zeta_{2}\ll -1)$ and $(\zeta_{1}\ll -1,\zeta_{2}\gg 1)$, in the gauge decoupling limit, have the same Higgs branch: namely, we can set $P=1$, $P_{\alpha}\in \mathbb{C}^{n-m}\setminus \{ 0 \}$ and the $Y_{j}$ are constrained by the equation:
\begin{equation}
Y_{j}+\sum_{\alpha}P_{\alpha}a_{\alpha,j}=0.
\end{equation}
Then we identify this Higgs branch with $\mathbb{P}(E^{\perp})\hookrightarrow \mathbb{P}(V^{\vee})$, where $E^{\perp}$ is spanned by the vectors $a_{\alpha}$. Therefore the classical boundary wall $(\zeta_{1},\zeta_{2})\in (0,\mathbb{R}_{\leq 0})$ is lifted. The phase $(\zeta_{2}-\zeta_{1} \ll -1,\zeta_{2}\ll -1)$ has an empty Higgs branch and the local model at the wall $\mathbb{R}_{\leq 0}(-1,-1)$ is a $U(1)$ GLSM with matter content:
\begin{equation}\label{GLSM_linear_embed_HPD}
\begin{array}{ccccccccc}
Y_1 & \cdots & Y_n & X_1 & \cdots & X_n & P_{1} & \cdots & P_{n-m} \\
-1 & \cdots & -1 & 1 & \cdots & 1 & 1 &\cdots & 1
\end{array}
\end{equation}
From here we can read the number of Coulomb vacua, given by $n-m$, consistent with the semiorthogonal decomposition induced by the embedding \eqref{linear_embed_HPD}, namely
\begin{equation}\label{sodperp}
D(\mathbb{P}(E^\perp)) = \langle \mathcal{B}_{n-m-1}(1+m-n), \cdots , \mathcal{B}_{1}(-1), \mathcal{B}_0 \rangle
\end{equation}
with $\mathcal{B}_i = \mathcal{O}$. We remark also that $\mathbb{P}(E^\perp)$ is the correct HPD induced by the Lefschetz decomposition that can be computed using the prescription (\ref{lefscenter}). Namely, the local model at the phase boundary $\mathbb{R}_{\geq 0}(0,1)$ gives $N_{+}-N_{-}=m-1$, which implies that $\mathcal{A}_{0}$ in the Lefschetz decomposition of $X=\mathbb{P}(E)$ has only one element: $\mathcal{A}_{0}=\mathcal{O}$. This plus the line bundle induced by \eqref{linear_embed} reproduces the expected Lefschetz decomposition.
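The identification of the Higgs branch with $\mathbb{P}(E^{\perp})$ rests on simple linear algebra: the rows of $a_{\alpha,j}$ span the annihilator of $E$ and have rank $n-m$. A minimal numerical sketch (with hypothetical dimensions and a randomly generated $b_{a,j}$) checks exactly this:

```python
import numpy as np

# Sketch: with b a full-rank basis of E (rows v_a) and a chosen as a basis
# of its annihilator E^perp, check rank(a) = n - m and
# sum_j a_{alpha,j} b_{a,j} = 0, as used in the identification of the
# Higgs branch with P(E^perp).
rng = np.random.default_rng(0)
n, m = 5, 3                       # hypothetical dimensions
b = rng.standard_normal((m, n))   # rows: basis vectors v_a of E
_, _, vh = np.linalg.svd(b)       # right singular vectors of b
a = vh[m:]                        # last n-m rows span the annihilator E^perp
assert np.linalg.matrix_rank(a) == n - m
assert np.allclose(a @ b.T, 0.0)  # a_{alpha,.} annihilates every v_a
```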
In order to provide a further consistency check of our proposal in this simple model, we notice that we could have started from the GLSM \eqref{GLSM_P} as $\mathcal{T}_{X}$; then $\mathcal{T}_{\mathcal{X}}$ will be given by the GLSM with matter content
\begin{equation}\label{GLSM_linear_embed_universal}
\begin{array}{ccccccccc}
X_1 & \cdots & X_m & P & Y_1 & Y_2 & \cdots & Y_n \\
1 & \cdots & 1 & -1 & 0 & 0 & \cdots & 0 \\
0 & \cdots & 0 & -1 & 1 & 1 & \cdots & 1
\end{array}
\end{equation}
with superpotential
\[
\widehat{W} = P\sum_{a,j} X_{a} b_{a,j} Y_j.
\]
It is easy to show that the phase space of this model coincides with that of \eqref{firstuniversal}. In this case $E^{\perp}$ is described by $\sum_{j} b_{a,j} Y_j = 0$. Taking the gauge decoupling limit in the region $\zeta_{2}\ll -1$ in \eqref{GLSM_linear_embed_universal} gives the model
\begin{equation}\label{GLSM_linear_embed_HPD2}
\begin{array}{ccccccccc}
X_1 & \cdots & X_m & Y_1 & \cdots & Y_n \\
1 & \cdots & 1 & -1 & \cdots & -1
\end{array}
\end{equation}
with superpotential
\[
\widehat{W}' = \sum_{a,j} X_{a} b_{a,j} Y_j.
\]
Again we can see that when the FI parameter $\zeta$ of \eqref{GLSM_linear_embed_HPD2} satisfies $\zeta \gg 1$, there is no Higgs branch; instead there are $n-m$ Coulomb vacua satisfying
\[
e^{-t} \sigma^{n-m} = (-1)^n,
\]
which we identify with the semiorthogonal decomposition \eqref{sodperp}.
\subsection{Veronese embedding}
The degree-$d$ Veronese embedding $\mathbb{P}(V) \rightarrow \mathbb{P}(\mathrm{Sym}^d V)$ can be realized by an abelian GLSM as discussed in \cite{Caldararu:2017usq}. Upon integrating out the massive fields, this theory is equivalent to the GLSM describing $\mathbb{P}(V)$, namely $n+1$ matter fields with charge 1 under the $U(1)$ gauge symmetry, where $\dim(V) = n+1$. Since $\dim(\mathrm{Sym}^d V) = {n+d \choose d}$, our discussion in section \ref{sec:proposal} shows that the GLSM $\mathcal{T}_{\mathcal{X}}$ describing the universal hyperplane section and HPD has gauge group $U(1) \times U(1)$ with matter content:
\begin{equation}\label{GLSM_Veronese_universal}
\begin{array}{ccccccccc}
X_0 & X_1 & \cdots & X_n & P & S_1 & S_2 & \cdots & S_{n+d \choose d} \\
1 & 1 & \cdots & 1 & -d & 0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & -1 & 1 & 1 & \cdots & 1
\end{array}
\end{equation}
and superpotential
\[
W = P \sum_{a=1}^{n+d \choose d} S_a f_a(X),
\]
where the $f_a(X)$ form a basis of the degree-$d$ monomials in the $X_i$. The Higgs branch of one of the phases gives the HPD of the degree-$d$ Veronese embedding.
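The counting underlying the second row of charges, $\dim \mathrm{Sym}^d V = {n+d \choose d}$, can be checked by direct enumeration; a small sketch (with hypothetical small values of $n$ and $d$):

```python
from itertools import combinations_with_replacement
from math import comb

# Sketch: the degree-d monomials in the n+1 variables X_0,...,X_n are in
# bijection with multisets of size d drawn from {0,...,n}; their number
# is binomial(n+d, d) = dim Sym^d V, the number of fields S_a above.
def degree_d_monomials(n, d):
    return list(combinations_with_replacement(range(n + 1), d))

n, d = 2, 3                          # hypothetical: cubics on P^2
mons = degree_d_monomials(n, d)
assert len(mons) == comb(n + d, d)   # 10 cubic monomials in X_0, X_1, X_2
```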
From the F-term and D-term constraints, one can show that the Higgs branches in different phases are as follows:\\
(i)~$\zeta_1>0, \zeta_2>0$: The Higgs branch is the universal hyperplane section of the Veronese embedding, i.e. the universal degree-$d$ hypersurface
\[
\mathcal{X} = \{ \sum_a S_a f_a(X) = 0 \} \subseteq \mathbb{P}^n \times \mathbb{P}^{{n+d \choose d}-1},
\]
where $X_i$ are homogeneous coordinates of $\mathbb{P}^n$ and $S_a$ are homogeneous coordinates of $\mathbb{P}^{{n+d \choose d}-1}$.
(ii)~$\zeta_1<0, \zeta_1<d \zeta_2$: The Higgs branch is the LG model on (the notation $\mathcal{O}(m)$ with $m\in\mathbb{Q}$ is explained in appendix \ref{app:MFgerbe})
\begin{equation}\label{LGVspace}
\mathrm{Tot}\left( \mathcal{O}\left( -\frac{1}{d} \right)^{\oplus(n+1)} \rightarrow \mathbb{P}^{{n+d \choose d}-1} \right)/\mathbb{Z}_d
\end{equation}
with superpotential
\begin{equation}\label{LGVpotential}
W_{0}=\sqrt{\frac{-\zeta_1}{d}} \sum_a S_a f_a(X).
\end{equation}
(iii)~$\zeta_1>d \zeta_2, \zeta_2<0$: There is no Higgs branch.\\
The classical phase diagram of the GLSM \eqref{GLSM_Veronese_universal} is shown in Figure \ref{fig:phase_Ver}. The Coulomb/mixed branches can be determined by studying the local model associated with the phase boundaries and by analyzing the asymptotic behavior of the equations of motion on the Coulomb branch:
\[
\left\{ \begin{array}{l}
\sigma_1^{n+1} = q_1 (-d \sigma_1 - \sigma_2)^d, \\
\sigma_2^{{n+d \choose d}} = -q_2 (d \sigma_1+ \sigma_2),
\end{array} \right.
\]
where $q_a = \exp(-t'_a)$.
We provide the details of this analysis in appendix \ref{app:CoulombEOM}.
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thin,->] (0,0) -- (4,0) node[anchor=north west]{$\zeta_1$};
\draw[thin,->] (0,0) -- (0,4)node[anchor=south east]{$\zeta_2$};
\draw [ultra thick] (0,0) -- (0,3.5);
\draw [ultra thick] (0,0) -- (3.5,0);
\draw [ultra thick] (0,0) -- (-4,-2);
\node at (-5.1,-2.2) {\footnotesize $d \zeta_2 - \zeta_1 = 0$};
\draw [dashed, thick] (0,0) -- (-4, 1.2);
\node [above] at (-5.75,1) {\footnotesize ${\zeta_1 + (n+1-d) \zeta_2 = 0}$};
\draw [dashed, thick] (0,0) -- (1, -3.5) ;
\node [below] at (1,-3.5) {\footnotesize $(N-1) \zeta_1 + d \zeta_2 =0$};
\node at (3,2.5) {\small Geometric phase};
\node at (-2.5,3) {\small LG phase I };
\node at (-3.75,-0.25) {\small LG phase II};
\node at (-1.5, -2.75) {\small Coulomb phase};
\node at (3, -2) {\small Mixed phase};
\end{tikzpicture}
\caption{Phase diagram of the GLSM for the HPD of the degree-$d$ Veronese embedding of $\mathbb{P}^n$.}\label{fig:phase_Ver}
{\footnotesize \begin{flushleft} Here $N = {n+d \choose d}$. The geometric phase describes the universal hypersurface. LG phase I has a Higgs branch described by the LG model realizing the HPD, and $(n+1-d)$ mixed branches, each described by $\mathbb{P}^{N-1}$. LG phase II has the same Higgs branch as LG phase I but with $((n+1-d)N)$ Coulomb vacua. The mixed phase contains $(N-1)$ mixed branches, each described by $\mathbb{P}^n$. The Coulomb phase contains $((N-1)(n+1))$ Coulomb vacua. \end{flushleft}}
\end{figure}
To determine which phase of the GLSM \eqref{GLSM_Veronese_universal} describes the HPD, let's consider the local model at the boundary between phases (i) and (ii) above. It is a $U(1)$ GLSM with matter content
\[
\begin{array}{ccccc}
X_0 & X_1 & \cdots & X_n & P \\
1 & 1 & \cdots & 1 & -d
\end{array}
\]
Thus we have $N_+ = n+1, N_- = d$. The small window consists of the branes satisfying
\[
\left|q+\frac{\theta}{2 \pi}\right| < \frac{1}{2} \mathrm{min} \{ d,n+1 \}.
\]
We can choose $\theta$ such that the charges $q$ of the branes in the small window satisfy
\[
0 \leqslant q \leqslant \mathrm{min} \{ d-1,n \}.
\]
For $d \leq n+1$, the corresponding Lefschetz decomposition is
\begin{equation}\label{Lef_Veronese}
D(\mathbb{P}^n) = \langle \mathcal{A}_0, \mathcal{A}_1(d), \cdots, \mathcal{A}_p(pd) \rangle,
\end{equation}
where $\mathcal{A}_0 = \mathcal{A}_1 = \cdots = \mathcal{A}_{p-1} = \langle \mathcal{O},\cdots,\mathcal{O}(d-1) \rangle$, $\mathcal{A}_p = \langle \mathcal{O},\cdots,\mathcal{O}(k) \rangle$, and $p \ge 0, 0 \le k < d$ with $n = pd+k$.
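As a quick consistency check of this pattern (an illustrative special case, not an additional claim), take $n=2$, $d=2$: then $n = pd + k$ gives $p=1$, $k=0$, so
\[
D(\mathbb{P}^2) = \langle \mathcal{A}_0, \mathcal{A}_1(2) \rangle, \qquad \mathcal{A}_0 = \langle \mathcal{O}, \mathcal{O}(1) \rangle, \quad \mathcal{A}_1 = \langle \mathcal{O} \rangle,
\]
which reassembles the standard Beilinson exceptional collection $\langle \mathcal{O}, \mathcal{O}(1), \mathcal{O}(2) \rangle$ on $\mathbb{P}^2$.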
Branes in the small window have charges $q=0,1,\cdots,d-1$. These branes project to the HPD category $\mathcal{C}$ in phase (i) because they restrict to $\mathcal{A}_0$ on $\mathbb{P}^n$. From \cite{Clingempeel:2018iub}, we know that this subcategory $\mathcal{C}$ of $D(\mathcal{X})$, which is the HPD category by definition, is equivalent to the category of matrix factorizations of the LG model corresponding to LG phase I in figure \ref{fig:phase_Ver}. Therefore, we claim that the HPD of degree-$d$ Veronese embedding with respect to the Lefschetz decomposition \eqref{Lef_Veronese} is described by the LG orbifold defined by \eqref{LGVspace} and \eqref{LGVpotential}. By taking the strong coupling limit of the second $U(1)$ gauge group in \eqref{GLSM_Veronese_universal}, the HPD can also be realized by the negative phase of the $U(1)$ GLSM with matter content
\[
\begin{array}{ccccccccc}
X_0 & X_1 & \cdots & X_n & S_1 & S_2 & \cdots & S_{n+d \choose d} \\
1 & 1 & \cdots & 1 & -d & -d & \cdots & -d
\end{array}
\]
and superpotential
\[
\sum_{a=1}^{n+d \choose d} S_a f_a(X).
\]
If $d \geqslant n+1$, branes in the small window have charge $q=0,1,\cdots,n$. These branes generate $D(\mathcal{X})$ in phase (i). In this case $D(\mathcal{X}) = \mathcal{C}$ because $D(\mathbb{P}^n) = \langle \mathcal{O},\mathcal{O}(1),\cdots, \mathcal{O}(n)\rangle = \mathcal{A}_0$. Therefore, the HPD category is equivalent to $D(\mathcal{X})$.
In the special case $d = n+1$, the HPD can be described by $D(\mathcal{X})$ or the matrix factorizations of the LG orbifold \eqref{LGVspace}\eqref{LGVpotential} because these two categories are equivalent.\\
\textbf{Double Veronese Embedding}. Now let us study the GLSM more carefully in the case $d=2$; we will see that the HPD we obtain matches the mathematical result in \cite{kuznetsov2008derived}. The HPD of the double Veronese embedding is the Clifford space $Y = (\mathbb{P}(\mathrm{Sym}^2 V^\vee), \mathcal{C}l_0)$ \cite{kuznetsov2008derived}, where $\mathcal{C}l_0$ denotes coherent sheaves of modules over the even part of the universal Clifford algebra determined by the corresponding quadratic forms in $\mathrm{Sym}^2 V^\vee$. Assume that $\dim V=n+1$; then $\dim \mathrm{Sym}^2 V = (n+1)(n+2)/2$. As discussed above, the HPD $Y$ can be realized by a $U(1)$ GLSM with the following matter content and charges:
\begin{equation}\label{oneHPD2V}
\begin{array}{ccccccc}
X_0 & X_1 & \cdots & X_n & S_1 & \cdots & S_{(n+1)(n+2)/2} \\
1 & 1 & \cdots & 1 & -2 & \cdots & -2
\end{array}
\end{equation}
and a superpotential $W = \sum_{i=1}^{(n+1)(n+2)/2} S_i G_i(X)$, where the $G_i$ form a complete set of quadratic monomials in the $X_i$.
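For concreteness (a special case we write out; it is not needed in what follows), at $n=1$ the model \eqref{oneHPD2V} has fields $X_0,X_1$ of charge $1$ and $S_1,S_2,S_3$ of charge $-2$, with superpotential
\[
W = S_1 X_0^2 + S_2 X_0 X_1 + S_3 X_1^2,
\]
and the vacuum counting discussed below gives $(n+1)^2 = 4$ Coulomb vacua in the $\zeta \gg 1$ phase of this case.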
When the FI parameter $\zeta \gg 1$, there is no Higgs branch and there are $(n+1)^2$ Coulomb vacua satisfying
\[
2^{(n+1)(n+2)} \sigma^{(n+1)^2} q = 1,
\]
which correspond to the semiorthogonal decomposition
\[
D(\mathbb{P}(\mathrm{Sym}^2 V^\vee), \mathcal{C}l_0) = \langle \mathcal{C}l_{1-(n+1)^2}, \mathcal{C}l_{2-(n+1)^2}, \cdots , \mathcal{C}l_{-1}, \mathcal{C}l_0 \rangle,
\]
where $\mathcal{C}l_1$ is the odd part of the Clifford algebra and $\mathcal{C}l_{k}$ for $k\leq -1$ are defined recursively by $\mathcal{C}l_{k-2} = \mathcal{C}l_k \otimes \mathcal{O}_{\mathbb{P}(\mathrm{Sym}^2 V^\vee)}(-1)$.
When the FI parameter $\zeta \ll -1$, the low energy theory is a hybrid model on
\[[\mathrm{Tot}(\mathcal{O}(-1/2)^{\oplus (n+1)} \rightarrow \mathbb{P}(\mathrm{Sym}^2 V^\vee))/\mathbb{Z}_2],\]
where the $X_i$ are fiber coordinates and the $S_a$ are base coordinates.
At each point $[S_1, \cdots, S_{(n+1)(n+2)/2}]$ on $\mathbb{P}(\mathrm{Sym}^2 V^\vee)$, there is a superpotential
\[ W_0 = \sum_{i=1}^{(n+1)(n+2)/2} S_i G_i(X)\] quadratic in $X_i$. Because the category of matrix factorizations of a $\mathbb{Z}_2$-orbifold of an LG model with quadratic superpotential is equivalent to the category of modules over the even part of the corresponding Clifford algebra \cite{Kapustin:2002bi}, the category of matrix factorizations of the above hybrid model is equivalent to the HPD category $D(\mathbb{P}(\mathrm{Sym}^2 V^\vee), \mathcal{C}l_0)$. Thus our construction recovers the mathematical result in \cite{kuznetsov2008derived}.\\
In general, for the case $d \leq n+1$ we expect that $\mathcal{C}$ has a semiorthogonal decomposition with center $\mathcal{B}_{0}\cong\mathcal{A}_{0}$ and $l = {n+d \choose d}-1-p$ further components $\mathcal{B}_{1},\cdots,\mathcal{B}_{l}$, fitting the diagram
\begin{figure}[!h]
\centering
\begin{tikzpicture}[inner sep=0in,outer sep=0in]
\node (n) {
\centering
\begin{tabular}{r@{}l}
\raisebox{-9.5ex}{$d\left\{\vphantom{\begin{array}{c}~\\[10ex] ~
\end{array}}\right.$} &
\begin{ytableau}
\none[{\cal A}_0] & \none[{\cal A}_1] & \none & \none[\cdots] &\none& \none[{\cal A}_{\scaleto{p-1\mathstrut}{7pt}}] & \none[{\cal A}_p] & \none \\
*(gray) & *(gray) & \none & \none &\none & *(gray) & *(gray) & & \none & \none & \none & & \\
*(gray) & *(gray) & \none & \none &\none& *(gray) & *(gray) & & \none & \none & \none & & \\
*(gray) & *(gray) & \none & \none [\cdots]&\none& *(gray) & *(gray) & & \none & \none[\cdots] & \none & & \\
*(gray) & *(gray) & \none & \none &\none& *(gray) & & & \none & \none & \none & & \\
*(gray) & *(gray) & \none & \none &\none& *(gray) & & & \none & \none & \none & & \\
\none & \none & \none & \none &\none& \none & \none[{\cal B}_{\scaleto{l\mathstrut}{7pt}}] & \none[{\cal B}_{\scaleto{l-1\mathstrut}{7pt}}]&\none &\none[\cdots] &\none &\none[{\cal B}_1] &\none[{\cal B}_0]\\
\end{ytableau}
\raisebox{-5.7ex}{$\left\}\vphantom{\begin{array}{c}~\\[5ex] ~
\end{array}}\right.k+1$} \end{tabular}};
\draw[thin,] (3,1.48) -- (-3.3,1.48);
\draw[thin,] (3,-1.48) -- (-3.3,-1.48);
\end{tikzpicture}
\caption{HPD of degree-$d$ Veronese embedding}
{\footnotesize \begin{flushleft} The gray part of the diagram corresponds to the Lefschetz decomposition of the degree-$d$ Veronese embedding. The white part corresponds to the Lefschetz decomposition of the dual space. The number of components of each ${\cal A}_i$ and ${\cal B}_a$ is counted by the corresponding number of rows. The number of ${\cal B}_a$'s is given by $l= {n+d \choose d} - p -1$, where $n=pd+k$.\end{flushleft}}
\end{figure}
\subsubsection{B-brane transport}
Let us now consider the situation $d \leq n+1$ and denote the LG model \eqref{LGVspace}\eqref{LGVpotential} by $\mathrm{LG}_{\mathrm{HPD}}$. We want to realize the equivalence between $MF(\mathrm{LG}_{\mathrm{HPD}})$ and the HPD category $\mathcal{C} \subset D(\mathcal{X})$ through brane transport.
Remember that there is a semiorthogonal decomposition
\[
D(\mathcal{X}) = \langle \mathcal{C}, \mathcal{A}_1(1)_{\mathbb{P}(\mathrm{Sym}^d V^\vee)},\cdots,\mathcal{A}_{m-1}(m-1)_{\mathbb{P}(\mathrm{Sym}^d V^\vee)} \rangle,
\]
where the non-trivial component $\mathcal{C}$ is the HPD category by definition. When considering the wall crossing between the geometric phase and LG phase I in figure \ref{fig:phase_Ver}, one should expect that, under brane transport, $\mathcal{C}$ is transported to $MF(\mathrm{LG}_{\mathrm{HPD}})$ in LG phase I, while the images of $\mathcal{A}_i(i)_{\mathbb{P}(\mathrm{Sym}^d V^\vee)}$ have components on Coulomb vacua or mixed branches. When transported through the small window, $MF(\mathrm{LG}_{\mathrm{HPD}})$ should have its image in $\mathcal{C}$.
Explicitly, we take a set of generators of $MF(\mathrm{LG}_{\mathrm{HPD}})$, lift them to GLSM matrix factorizations in the small window, and transport them to the geometric phase; we then check that the projections of these matrix factorizations are all equivalent to objects in $\mathcal{A}_0 \boxtimes D(\mathbb{P}(\mathrm{Sym}^d V^\vee))$. Objects in the HPD category $\mathcal{C}$ are precisely the objects in $D(\mathcal{X})$ that can be lifted to objects in the small window. This procedure gives a set of generators of $\mathcal{C}$. Now let us see how the brane transport works.
\textbf{Example:} $\mathbb{P}^2 \rightarrow \mathbb{P}^5$. The small window consists of branes with charges $-1, 0$; the big window consists of branes with charges $-1,0,1$. In the LG phase, non-empty branes are of the form
\begin{equation}\label{MFLG}
\xymatrix{
\mathcal{O}(m+\frac{3}{2}) \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathcal{O}(m+1)^{\oplus 3} \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta} \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathcal{O}(m+\frac{1}{2})^{\oplus 3} \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathcal{O}(m) \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta}}.
\end{equation}
If $m = n - \frac{1}{2}$ is a half-integer, then \eqref{MFLG} can be lifted to
\begin{equation}\label{liftMFLGG}
\xymatrix{
\mathfrak{W}(-2,n)_{-2} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(-1,n)_{-1}^{\oplus 3} \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathfrak{W}(0,n)_0^{\oplus 3} \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(1,n)_1 \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta}}
\end{equation}
with $\theta = B_* + 2 \pi$. Here, $ \mathfrak{W}(a,b)_r$ denotes the so-called \emph{Wilson line brane} defined in \cite{Herbst:2008jq} which determines the boundary interactions. It corresponds to an irreducible representation of weights $(a,b)\in\mathbb{Z}^{2}$ under the gauge group $U(1) \times U(1)$ and R-charge $r$. The fermions $\eta,\bar{\eta}$ in the maps correspond to a representation of the Clifford algebra satisfying $\{\eta_{i},\bar{\eta}_{j}\}=\mathbf{1}\delta_{i,j}$ and all the other anticommutators vanish. In other words, the matrix factorization associated with \ref{liftMFLGG} can be written as
\begin{equation}
\mathbf{T}=X_{i}\bar{\eta}_{i}+\frac{1}{2}\frac{\partial W}{\partial X_{i}}\eta_{i}
\end{equation}
By binding with the empty branes
\[
\xymatrix{
\mathfrak{W}(1,n)_0 \ar@<0.5ex>[r]^-P & \mathfrak{W}(-1,n-1)_{-1} \ar@<0.5ex>[l]^-{G}
}
\]
and
\[
\xymatrix{
\mathfrak{W}(0,n+1)_0 \ar@<0.5ex>[r]^-P & \mathfrak{W}(-2,n)_{-1} \ar@<0.5ex>[l]^-{G},
}
\]
we get
\begin{equation}\label{MFodd}
\xymatrix{
\mathfrak{W}(0,n+1)_{0} \ar@<0.5ex>[r]^-{P X \bar{\eta}} & \mathfrak{W}(-1,n)_{-1}^{\oplus 3} \ar@<0.5ex>[l]^-{\frac{\partial G}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(0,n)_0^{\oplus 3} \ar@<0.5ex>[l]^-{P \frac{\partial G}{\partial X}\eta} \ar@<0.5ex>[r]^-{P X \bar{\eta}} & \mathfrak{W}(-1,n-1)_{-1} \ar@<0.5ex>[l]^-{\frac{\partial G}{\partial X}\eta}},
\end{equation}
which is in the small window. Denote by $\delta$ the embedding of $\mathcal{X}$ in $\mathbb{P}^2 \times \mathbb{P}^5$.
Assume that the projection of \eqref{MFodd} in the geometric phase is $B_{-}(n)$; then, by taking $P=0$, it is easy to see
\[
\delta_*(B_{-}(n)) =
{\xymatrix{\mathcal{O}(-1,n)^{\oplus 3}_{-2} \ar@<0.5ex>[rr]^{\frac{\partial G}{\partial X}} \ar@<0.5ex>[drdr]^{X} & & \mathcal{O}(0,n+1)_{-1} \\ \oplus & & \oplus \\
\mathcal{O}(-1,n-1)_{-2} \ar@<0.5ex>[rr]_{\frac{\partial G}{\partial X}} & & \mathcal{O}(0,n)^{\oplus 3}_{-1} }}
\]
with $B=B_*$.
Thus $\delta_*(B_{-}(n)) \in \mathcal{A}_0 \boxtimes D(\mathbb{P}^5)$, and therefore $B_{-}(n) \in \mathcal{C} \subseteq D(\mathcal{X})$.
From the construction, it is easy to see that $B_-(n)$ is equivalent to
\[
\mathcal{O}_\mathcal{X}(-2,n)[1] \oplus \mathcal{O}_\mathcal{X}(-1,n-1)[1]
\]
with $2 \pi B = \theta_1 H_1 + \theta_2 H_2 + \pi (2 H_1+H_2)$, which can be checked by the hemisphere partition function.
Equivalently, $B_-(n)$ is
\[
\mathcal{O}_\mathcal{X}(0,n+1)[1] \oplus \mathcal{O}_\mathcal{X}(1,n)[1]
\]
with $2 \pi B = \theta_1 H_1 + \theta_2 H_2$.
On the other hand, if $m = n$ is an integer in \eqref{MFLG}, then it can be lifted to
\[
\xymatrix{
\mathfrak{W}(-3,n)_{-2} \ar@<0.5ex>[r]^{X \bar{\eta}} & \mathfrak{W}(-2,n)_{-1}^{\oplus 3} \ar@<0.5ex>[l]^{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^{X\bar{\eta}} & \mathfrak{W}(-1,n)_0^{\oplus 3} \ar@<0.5ex>[l]^{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^{X \bar{\eta}} & \mathfrak{W}(0,n)_1 \ar@<0.5ex>[l]^{\frac{\partial W}{\partial X}\eta}}
\]
with $\theta = B_* + 2 \pi$.
By binding with the cone of the following morphism
\[ \xymatrix{ \mathfrak{W}(-1,n+1)_0 \ar@<0.5ex>[r]^-P \ar@<0.5ex>[d]^-X & \mathfrak{W}(-3,n)_{-1} \ar@<0.5ex>[l]^-{G} \ar@<0.5ex>[d]^-X\\
\mathfrak{W}(0,n+1)^{\oplus 3}_1 \ar@<0.5ex>[r]^-P & \mathfrak{W}(-2,n)^{\oplus 3}_{0} \ar@<0.5ex>[l]^-{G}
}
\]
which is an empty brane,
we get
\begin{equation}\label{MFeven}
\xymatrix{
\mathfrak{W}(-1,n+1)_{0} \ar@<0.5ex>[r]^{X \bar{\eta}} & \mathfrak{W}(0,n+1)_{1}^{\oplus 3} \ar@<0.5ex>[l]^{P \frac{\partial G}{\partial X}\eta} \ar@<0.5ex>[r]^{P X \bar{\eta}} & \mathfrak{W}(-1,n)_0^{\oplus 3} \ar@<0.5ex>[l]^{\frac{\partial G}{\partial X}\eta} \ar@<0.5ex>[r]^{X \bar{\eta}} & \mathfrak{W}(0,n)_{1} \ar@<0.5ex>[l]^{P \frac{\partial G}{\partial X}\eta}},
\end{equation}
which is in the small window.
Assume that the projection of \eqref{MFeven} in the geometric phase is $B_{+}(n)$; then, by taking $P=0$, it is easy to see
\[
\delta_*(B_{+}(n)) =
{\xymatrix{\mathcal{O}(-1,n)^{\oplus 3}_{-1} \ar@<0.5ex>[rr]^{X} \ar@<0.5ex>[drdr]^{\frac{\partial G}{\partial X}} & & \mathcal{O}(0,n)_{0} \\ \oplus & & \oplus \\
\mathcal{O}(-1,n+1)_{-1} \ar@<0.5ex>[rr]_{X} & & \mathcal{O}(0,n+1)^{\oplus 3}_{0} }}
\]
with $B = B_*$.
Thus $\delta_*(B_{+}(n)) \in \mathcal{A}_0 \boxtimes D(\mathbb{P}^5)$, and therefore $B_{+}(n) \in \mathcal{C} \subseteq D(\mathcal{X})$ as expected. Again from the construction, it is easy to see that $B_+(n)$ is equivalent to
\[
\mathcal{O}_{\mathcal{X}}(-3,n) \stackrel{X}{\rightarrow} \mathcal{O}_{\mathcal{X}}(-2,n)^{\oplus 3}
\]
with $2 \pi B = \theta_1 H_1 + \theta_2 H_2 + \pi (2 H_1+H_2)$, which can be checked by the hemisphere partition function.
Equivalently, $B_+(n)$ is
\[
\mathcal{O}_{\mathcal{X}}(-1,n+1) \stackrel{X}{\rightarrow} \mathcal{O}_{\mathcal{X}}(0,n+1)^{\oplus 3}
\]
with $2 \pi B = \theta_1 H_1 + \theta_2 H_2$.
Now let us consider the degree-$d$ Veronese embedding of $\mathbb{P}^n$. Assume that $n+1 = dk + n'$ with $0 \leq n' < d$.
The GLSM branes
\begin{equation}
{\cal B}(m):
\xymatrix{
\mathfrak{W}(m,l)_0 \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(m+1,l)_1^{\oplus (n+1)} \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \cdots \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(m+n+1,l)_{n+1} \ar@<0.5ex>[l]^-{\frac{\partial W}{\partial X}\eta},
} \label{cpx-d}
\end{equation}
with $m= -(n+1), -n, \cdots, -(n+1)+d-1$, are lifts of nonempty branes
\begin{equation}
\xymatrix{
{\cal O}(\frac{m}{d}) \ar@<0.5ex>[r]^-{X \bar{\eta}} & {\cal O}(\frac{m+1}{d})^{\oplus (n+1)} \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & \cdots \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta} \ar@<0.5ex>[r]^-{X \bar{\eta}} & {\cal O}(\frac{m+n+1}{d}) \ar@<0.5ex>[l]^-{\frac{\partial W_{0}}{\partial X}\eta},
}
\end{equation}
in the LG phase I,
where ${\cal O}(\frac{i}{d})$ represents sheaves on the $\mathbb{Z}_d$-gerbe over $\mathbb{P}^{\begin{psmallmatrix} n+d \\ d \end{psmallmatrix}-1}$.
First, we need to put the branes (\ref{cpx-d}) into the small window, which means the gauge charges of the Wilson line branes in the complex under the first $U(1)$ gauge group should belong to $\{ -d+1, \cdots, -1, 0 \}$. The procedure mainly involves two kinds of cone constructions with the empty brane
\begin{equation}
\xymatrix{
\mathfrak{W}(m, l)_0 \ar@<0.5ex>[r]^-{P} & \mathfrak{W}(m-d,l-1)_{-1}\ar@<0.5ex>[l]^-{G}.
}
\end{equation}
in LG phase I.
By taking the cone
\begin{equation}
\begin{split}
\xymatrix{
\mathfrak{W}(m,l)_r \ar@<0.3ex>[r]^-{X \bar{\eta}} \ar@<0.3ex>[dr]^-{\mathrm{Id}} & \mathfrak{W}(m+1,l)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}}& \cdots \ar@<0.3ex>[l]\\
\mathfrak{W}(m+d,l+1)_{r+2} \ar@<0.3ex>[r]^-{P}& \mathfrak{W}(m,l)_{r+1} \ar@<0.3ex>[l]^-{G}
}
\end{split},\label{cone-1}
\end{equation}
the gauge charge of the first element on the first row is raised to $(m+d,l+1)$ and the new complex is given by
\begin{equation}
\xymatrix{
\mathfrak{W}(m+d,l+1)_{r+2} \ar@<0.3ex>[r]^-{PX \bar{\eta}} & \mathfrak{W}(m+1,l)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}}& \cdots \ar@<0.3ex>[l]
}.
\end{equation}
Notice that the $R$-charge is also raised by two, and the morphisms running to the left become $PX\bar{\eta}$ and $\frac{\partial G}{\partial X}\eta$. Subsequently, by taking the cone
\begingroup
\footnotesize
\begin{equation}
\begin{split}
\xymatrix{
\cdots\ar@<0.3ex>[r]^-{X \bar{\eta}} &\mathfrak{W}(m+d-1,l+1)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{PX \bar{\eta}} \ar@<0.3ex>[dr]^-{X \bar{\eta}} & \mathfrak{W}(m,l)_r \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}} \ar@<0.3ex>[dr]^-{\mathrm{Id}} & \mathfrak{W}(m+1,l)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}}& \cdots \ar@<0.3ex>[l]\\
& & \mathfrak{W}(m+d,l+1)_{r+2} \ar@<0.3ex>[r]^-{P}& \mathfrak{W}(m,l)_{r+1} \ar@<0.3ex>[l]
}
\end{split}, \label{cone-2}
\end{equation}
\endgroup
one can raise the gauge charges of elements in the middle of a complex. The resulting complex is
\begingroup
\footnotesize
\begin{equation}
\xymatrix{
\cdots\ar@<0.3ex>[r]^-{X \bar{\eta}} &\mathfrak{W}(m+d-1,l+1)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(m+d,l+1)_{r+2} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{PX \bar{\eta}} & \mathfrak{W}(m+1,l)_{r+1} \ar@<0.3ex>[l] \ar@<0.3ex>[r]^-{X \bar{\eta}}& \cdots \ar@<0.3ex>[l]
}.
\end{equation}
\endgroup
The Wilson line brane $\mathfrak{W}(m,l)_r$ is replaced with Wilson line brane $\mathfrak{W}(m+d,l+1)_{r+2}$. Also, the right-going morphism changes from $PX\bar{\eta}$ to $X\bar{\eta}$ and the left-going morphism changes from $X\bar{\eta}$ to $PX\bar{\eta}$. Similarly, there are two analogous cone constructions to lower the gauge charges.
Let us take the brane ${\cal B}(-n-1)$ as an example. This brane complex has $n+2$ elements, and the first $n+2-d$ elements are outside the small window. In order to bring every element inside the small window, one needs to apply the cone construction (\ref{cone-1}) to the first element and then the cone construction (\ref{cone-2}) one by one to the remaining $n+1-d$ elements. Consequently, the gauge charges of the last $d$ elements are raised into the small window. Repeating the procedure for the first $n+2-2d$ elements then brings another $d$ elements inside the small window. After repeating $k$ times, the entire brane complex is in the small window and the complex becomes
\begin{equation}
\begin{split}
\xymatrix{
{\cal C}(0) \ar@<0.3ex>[r]^-{PX \bar{\eta}} & {\cal C}(1) \ar@<0.3ex>[r]^-{PX \bar{\eta}} \ar@<0.3ex>[l]& {\cal C}(2) \ar@<0.3ex>[r]^-{PX \bar{\eta}} \ar@<0.3ex>[l] & \cdots \ar@<0.3ex>[r]^-{PX \bar{\eta}} \ar@<0.3ex>[l] & {\cal C}(k) \ar@<0.3ex>[l]
},
\end{split} \label{brane--n-1}
\end{equation}
where the chain complex is divided into $k+1$ small complexes with $d$ elements each, except for ${\cal C}(0)$, which contains only $n'+1$ elements. The morphisms inside each small complex are $X \bar{\eta}$. The small complexes are connected by the morphism $PX \bar{\eta}$, i.e. the morphism between the last element of ${\cal C}(i)$ and the first element of ${\cal C}(i+1)$ is $PX \bar{\eta}$. More specifically,
\begin{multline}
{\cal C}(0) : \\
\begingroup
\footnotesize
\xymatrix{
\mathfrak{W}(-n', l+k)_{2k} \ar@<0.3ex>[r]^-{X \bar{\eta}}& \mathfrak{W}(-n'+1,l+k)^{\oplus (n+1)}_{2k+1} \ar@<0.3ex>[r]^-{X \bar{\eta}} \ar@<0.3ex>[l] & \cdots \ar@<0.3ex>[r]^-{X \bar{\eta}} \ar@<0.3ex>[l] &\mathfrak{W}(0,l+k)^{\oplus \begin{psmallmatrix}n+1 \\n' \end{psmallmatrix}}_{2k+n'} \ar@<0.3ex>[l]
},
\endgroup
\end{multline}
\begin{multline}
{\cal C}(i) : \\
\begingroup
\footnotesize
\xymatrix{
\mathfrak{W}(-d+1, l+k-i)_{2(k- i)+n'+(i-1)d+1}^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+1 \end{psmallmatrix}} \ar@<0.3ex>[r]^-{X \bar{\eta}}& \mathfrak{W}(-d+2,l+k-i)^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+2 \end{psmallmatrix}}_{2(k- i)+n'+(i-1)d+2} \ar@<0.3ex>[r]^-{X \bar{\eta}} \ar@<0.3ex>[l] & \cdots \ar@<0.3ex>[l]
\\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \cdots \ar@<0.3ex>[r]^-{X \bar{\eta}} & \mathfrak{W}(0,l+k-i)^{\oplus \begin{psmallmatrix}n+1 \\n' +id \end{psmallmatrix}}_{2(k- i)+n'+id} \,. \ar@<0.3ex>[l]
}
\endgroup
\end{multline}
The first element of ${\cal C}(i)$ always has charge $-d+1$ under the first $U(1)$ (except for ${\cal C}(0)$ which has charge $-n'$), and the last element has charge $0$. Notice also that if the R-charge of the last element of ${\cal C}(i)$ is $r$, then the R-charge of the first element of ${\cal C}(i+1)$ is $r-1$.
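The bookkeeping above can be sanity-checked numerically. The following sketch (with illustrative function names; it is not part of the construction itself) splits the Koszul-type multiplicities $\binom{n+1}{j}$ of the brane ${\cal B}(-n-1)$ into the sub-complexes ${\cal C}(0),\cdots,{\cal C}(k)$ and verifies the element counts stated above:

```python
from math import comb

def split_into_windows(n, d):
    """Split the n+2 Koszul-type terms of the brane B(-n-1), with
    multiplicities binom(n+1, j), into sub-complexes C(0), ..., C(k):
    C(0) gets n'+1 terms and each C(i), i >= 1, gets d terms,
    where n+1 = d*k + n' with 0 <= n' < d."""
    k, nprime = divmod(n + 1, d)
    mult = [comb(n + 1, j) for j in range(n + 2)]  # ranks of the Koszul terms
    chunks = [mult[: nprime + 1]]                  # C(0): n'+1 terms
    rest = mult[nprime + 1:]
    chunks += [rest[i * d:(i + 1) * d] for i in range(k)]  # C(1), ..., C(k)
    return chunks

# Example: n = 4, d = 2, so n+1 = 5 = 2*2 + 1, i.e. k = 2, n' = 1
chunks = split_into_windows(4, 2)  # -> [[1, 5], [10, 10], [5, 1]]
```

The chunk lengths reproduce the pattern $(n'+1, d, \cdots, d)$, and the total $\sum_j \binom{n+1}{j} = 2^{n+1}$ confirms that no Koszul term is lost in the splitting.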
Now, denoting the image of the brane (\ref{brane--n-1}) in the geometric phase by $B(-n-1)$ and the embedding of ${\cal X}$ into $\mathbb{P}^n \times \mathbb{P}^{\begin{psmallmatrix} n+d \\ d \end{psmallmatrix} -1}$ by $\delta$, one finds, by taking $P=0$,
\begin{equation}
\delta_*(B(-n-1)) =
\xymatrix{
{\cal C}'(0) & {\cal C}'(1) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & {\cal C}'(2) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & \cdots \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & {\cal C}'(k) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta}
},
\end{equation}
with
\begin{multline}
{\cal C}'(0) : \\
\begingroup
\footnotesize
\xymatrix{
{\cal O}(-n', l+k)_{2k} \ar@<0.3ex>[r]^-{X \bar{\eta}}& {\cal O}(-n'+1,l+k)^{\oplus (n+1)}_{2k+1} \ar@<0.3ex>[r]^-{X \bar{\eta}} & \cdots \ar@<0.3ex>[r]^-{X \bar{\eta}} & {\cal O}(0,l+k)^{\oplus \begin{psmallmatrix}n+1 \\n' \end{psmallmatrix}}_{2k+n'}
},
\endgroup
\end{multline}
\begin{multline}
{\cal C}'(i) : \\
\begingroup
\footnotesize
\xymatrix{
{\cal O} (-d+1, l+k-i)_{2(k-i)+n'+(i-1)d+1}^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+1 \end{psmallmatrix}} \ar@<0.3ex>[r]^-{X \bar{\eta}}& {\cal O}(-d+2,l+k-i)^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+2 \end{psmallmatrix}}_{2(k-i)+n'+(i-1)d+2} \ar@<0.3ex>[r]^-{X \bar{\eta}} & \cdots
\\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \cdots \ar@<0.3ex>[r]^-{X \bar{\eta}} & {\cal O}(0,l+k-i)^{\oplus \begin{psmallmatrix}n+1 \\n' +id \end{psmallmatrix}}_{2(k- i)+n'+id} \, ,
}
\endgroup
\end{multline}
where all the sheaves ${\cal O}(a,b)$ are defined on the ambient space. Therefore, $\delta_*(B(-n-1)) \in {\cal A}_0 \boxtimes D(\mathbb{P}^{\begin{psmallmatrix} n+d \\ d \end{psmallmatrix} -1})$ and $B(-n-1) \in {\cal C} \subseteq D(\mathcal{X})$.
To find $B(-n-1)$, one should first write the matrix factorization as an infinite complex via the Kn{\" o}rrer map \cite{Herbst:2008jq} acting on the infinite-dimensional space
\[
\widetilde{V} = V \oplus p V \oplus p^2 V \oplus p^3 V \oplus \cdots.
\]
It is easy to see that most of the infinite complex is empty in the geometric phase, leaving only a finite complex, which is the $B(-n-1)$ we are looking for. The $B(-n-1)$ complex is
\begin{equation}
\xymatrix{
{\cal C}''(1) & {\cal C}''(2) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & {\cal C}''(3) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & \cdots \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta} & {\cal C}''(k) \ar@<0.3ex>[l]_-{\frac{\partial G}{\partial X} \eta}
},
\end{equation}
with
\begin{multline}
{\cal C}''(i) : \\
\begingroup
\footnotesize
\xymatrix{
{\cal O} (-d+1, l+k-i)_{2(k-i)+n'+(i-1)d+1}^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+1 \end{psmallmatrix}} \ar@<0.3ex>[r]^-{X \bar{\eta}}& {\cal O}(-d+2,l+k-i)^{\oplus \begin{psmallmatrix}n+1 \\ n' +(i-1)d+2 \end{psmallmatrix}}_{2(k-i)+n'+(i-1)d+2} \ar@<0.3ex>[r]^-{X \bar{\eta}} & \cdots
\\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \cdots \ar@<0.3ex>[r]^-{X \bar{\eta}} & {\cal O}\big((k-i)d,l+k-i \big)_{2(k-i)+n'+kd} \, ,
}
\endgroup
\end{multline}
where the sheaves ${\cal O}(a,b)$ are sheaves over ${\cal X}$. Also, the morphisms between ${\cal C}''(i)$ and ${\cal C}''(i+1)$ are no longer just morphisms between two elements. Since the complexes ${\cal C}''(i)$ are much longer than the ${\cal C}'(i)$ for most $i$, one needs to add back the $P$-dependent morphisms $\frac{\partial G}{\partial X} \eta$. The brane $B(-n-1)$ should be written as a double complex with ${\cal C}''(i)$ on the $i$-th row. Then, the morphisms $\frac{\partial G}{\partial X} \eta$ map the elements on the $(i+1)$-th row with $R$-charge $r$ to the elements on the $i$-th row with $R$-charge $r+1$.
The images of branes $\mathcal{B}(m)$ with $m = -n, -n+1, \cdots, -(n+1)+d-1$ in the geometric phase can be found similarly and they all follow the same pattern. Together they generate the HPD category ${\cal C}$.
\subsection{\label{sec:quadrics}Quadrics}
A quadric $X$ in $\mathbb{P}^n$ defined by $G(X)=0$ can be realized by the positive phase of a $U(1)$ GLSM with matter content and charges
\begin{equation}\label{GLSMquadric}
\begin{array}{cccccc}
& X_0 & X_1 & \cdots & X_n & P \\
U(1) & 1 & 1 & \cdots & 1 & -2
\end{array}
\end{equation}
and superpotential
\[
W = P \cdot G(X).
\]
The negative phase of this model is a Landau-Ginzburg model
\begin{equation}\label{LGquadric}
\mathrm{LG}([\mathbb{C}^{n+1}/\mathbb{Z}_2],\tilde{W} = \sqrt{-\zeta/2} \cdot G(X)).
\end{equation}
For a suitable choice of $\theta$-angle, the small window of \eqref{GLSMquadric} consists of branes with gauge charges $q=-1,0$. From the resolutions of the spinor bundles\footnote{Spinor bundles on quadrics are defined in \cite{ottaviani1988spinor,langer2008d,kapranov1986derived}. Here we follow the review of them included in \cite{addington2011spinor}.} $S_\pm$ on $X$
\begin{equation}\label{spinor}
\mathcal{O}^{\oplus N}_{\mathbb{P}^n}(-1) \stackrel{\psi_{\pm}}{\rightarrow} \mathcal{O}^{\oplus N}_{\mathbb{P}^n}
\rightarrow S_{\pm},
\end{equation}
where
\[
N = \left\{ \begin{array}{cc}
2^k, & \dim X = 2k, \\
2^{k+1}, & \dim X = 2k+1,
\end{array} \right.
\]
and $\psi_+$, $\psi_-$ are morphisms such that $\psi_+ \circ \psi_- = \psi_- \circ \psi_+ = G \cdot \mathrm{id}$, we see that the lifts of the spinor bundles to GLSM matrix factorizations are in the small window. But the lift of $\mathcal{O}_X(m)$ is not in the small window, because its resolution reads
\[
\mathcal{O}_{\mathbb{P}^n}(m-2) \stackrel{G}{\rightarrow} \mathcal{O}_{\mathbb{P}^n}(m) \rightarrow
\mathcal{O}_X(m).
\]
Therefore, the category of matrix factorizations of \eqref{LGquadric} is equivalent to the subcategory of $D(X)$ generated by the spinor bundles. Note that $S_+ \cong S_- \equiv S$ when $\dim X$ is odd. Then we have the Lefschetz decomposition
\begin{equation}\label{Lefquadric}
D(X) = \langle \mathcal{A}_0, \mathcal{A}_1(1),\cdots, \mathcal{A}_{n-2}(n-2) \rangle,
\end{equation}
where
\[
\mathcal{A}_0 = \langle S, \mathcal{O}(-1) \rangle
\]
if $\dim X$ is odd,
\[
\mathcal{A}_0 = \langle S_-, S_+, \mathcal{O}(-1) \rangle
\]
if $\dim X$ is even, and
\[
\mathcal{A}_i = \langle \mathcal{O}(-1) \rangle
\]
for $i>0$. Note that this Lefschetz decomposition is different from the one adopted in \cite{kuznetsov2008derived,alex2014semiorthogonal,kuznetsov2019homological} when $\dim X$ is even.
The HPD of quadrics embedded in projective spaces has been studied in \cite{alex2014semiorthogonal, kuznetsov2019homological}, and from a GLSM point of view this problem was first studied in \cite{Caldararu:2007tc}. From our proposal in section \ref{sec:proposal}, for a quadric in $\mathbb{P}^n$ defined by the zero locus of a quadratic polynomial $G$, the model describing the HPD associated with the Lefschetz decomposition \eqref{Lefquadric} is a $U(1) \times U(1)$ GLSM with matter content
\[
\begin{array}{ccccccccc}
X_0 & X_1 & \cdots & X_n & P_1 & P_2 & Y_0 & \cdots & Y_n \\
1 & 1 & \cdots & 1 & -2 & -1 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & -1 & 1 & \cdots & 1
\end{array}
\]
and superpotential
\[
\widehat{W} = P_1 G(X) + P_2 \sum_{i=0}^n Y_i X_i.
\]
This GLSM description is consistent with the mathematical description in \cite{ballard2017homological}. Again the phase with $\zeta_1>0, \zeta_2>0$ is the geometric phase describing the universal hyperplane section of the embedding, i.e. the target space is defined by $G(X)=0$ and $\sum_{i=0}^n X_i Y_i = 0$ in $\mathbb{P}^n \times \check{\mathbb{P}}^n$.
For $\zeta_2<0, \zeta_1>\zeta_2$, there is no Higgs branch. When $\zeta_1<0, \zeta_2>0$, the Higgs branch is described by the LG model on
\begin{equation}\label{LGquadric1}
\mathrm{Tot}(\mathcal{O}^{\oplus(n+1)}\oplus \mathcal{O}(-1) \rightarrow \check{\mathbb{P}}^n)/\mathbb{Z}_2
\end{equation}
with superpotential
\begin{equation}\label{LGquadric2}
\sum_{i,j} X_i Q_{ij} X_j + P_2 \sum_{i=0}^n X_i Y_i,
\end{equation}
where we assume that $G(X) = \sum_{i,j} X_i Q_{ij} X_j$ for an $(n+1)\times (n+1)$ invertible symmetric matrix $Q$. This is the LG model description of the HPD with respect to the Lefschetz decomposition \eqref{Lefquadric}.
The first term in \eqref{LGquadric2} gives mass to all $X_i$. Upon integrating out these massive fields, we get an effective potential
\begin{equation}\label{branchquad}
W_{eff} = 2 P_2^2 \sum_{i,j} Y_i (Q^{-1})_{ij} Y_j.
\end{equation}
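Explicitly (a one-line check; the overall coefficient depends on how the massive fields are normalized), the equation of motion for $X_i$ following from \eqref{LGquadric2} sets $X \propto P_2\, Q^{-1} Y$, so that substituting back gives
\[
W_{eff} \;\propto\; P_2^{2} \sum_{i,j} Y_i (Q^{-1})_{ij} Y_j,
\]
with the overall constant fixed by the normalization of the fields.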
This suggests that the target space of the low energy theory is a double cover of $\check{\mathbb{P}}^n$ branched over the dual quadric $\sum_{i,j} Y_i (Q^{-1})_{ij} Y_j = 0$. Constructions of GLSMs for branched double covers were originally proposed in \cite{Caldararu:2007tc}, and the particular case of GLSMs with superpotentials of the form (\ref{branchquad}) was studied in \cite{Halverson:2013eua,Sharpe:2013bwa}. It remains to determine the $\mathbb{Z}_2$ action on this branched double cover.
When $n$ is even, we predict that the $\mathbb{Z}_2$ action is trivial. This can be checked by comparing the Witten index of the LG model with the Euler characteristic of the branched double cover, which are both equal to $n+2$ in this case.
This prediction agrees with the result of \cite{kuznetsov2019homological}.
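The Euler characteristic count for even $n$ can be made explicit (using the standard fact that a smooth quadric of odd complex dimension $m$ has $\chi = m+1$). The branch locus is the dual quadric of dimension $n-1$, which is odd-dimensional for even $n$, so the double cover $Y \rightarrow \check{\mathbb{P}}^n$ has
\[
\chi(Y) = 2\,\chi(\check{\mathbb{P}}^n) - \chi\big(\check{\mathbb{P}}^n[2]\big) = 2(n+1) - n = n+2,
\]
matching the Witten index quoted above.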
When $n$ is odd, our construction corresponds to the Lefschetz decomposition \eqref{Lefquadric} with
\[
\mathcal{A}_0 = \langle S_-,S_+,\mathcal{O}(-1) \rangle,
\]
which differs from the one adopted in \cite{kuznetsov2019homological}, so in this case we obtain a prediction of a new result. The IR limit is expected to be a $\mathbb{Z}_2$ orbifold of the double cover of $\check{\mathbb{P}}^n$ branched over the dual quadric, where the $\mathbb{Z}_2$ action exchanges the two sheets of the covering space, so the dual quadric is the fixed locus. This conjecture can be checked by comparing the Witten index of the LG model \eqref{LGquadric1}\eqref{LGquadric2} with the Euler characteristic of the orbifold. In this case, the Euler characteristic of the branched double cover is $n+1$, but the Witten index of the LG model is $2n+2$. On the other hand, the Euler characteristic of the $\mathbb{Z}_2$-orbifold is
\[
(\chi(\mathbb{P}^{n+1}[2])+3 \chi(\mathbb{P}^n[2]) )/2 = 2(n+1),
\]
which is exactly the Witten index of the LG model. In the formula above, the first term in the parentheses is the contribution from the untwisted sector while the second term is the contribution from the three twisted sectors. The Witten index of the LG model \eqref{LGquadric1}\eqref{LGquadric2} can be computed directly or simply read off from the phase diagram shown in Figure \ref{fig:phase_CI}.
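The orbifold Euler characteristic above can be checked numerically using the standard values $\chi(Q^m) = m+2$ ($m$ even) and $m+1$ ($m$ odd) for a smooth $m$-dimensional quadric; a quick sketch (function names are ours):

```python
def chi_quadric(m):
    """chi of a smooth m-dimensional quadric: m+2 (m even), m+1 (m odd)."""
    return m + 2 if m % 2 == 0 else m + 1

def chi_orbifold(n):
    """(chi(P^{n+1}[2]) + 3*chi(P^n[2])) / 2 for the Z_2-orbifold."""
    untwisted = chi_quadric(n)        # P^{n+1}[2] has dimension n
    twisted = 3 * chi_quadric(n - 1)  # three twisted sectors, each P^n[2]
    return (untwisted + twisted) // 2

# For n odd, this should match the Witten index 2(n+1) of the LG model.
for n in range(3, 13, 2):
    assert chi_orbifold(n) == 2 * (n + 1)
```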
We conclude that, when $n$ is even, the HPD of a quadric embedded in projective space, associated with the Lefschetz decomposition \eqref{Lefquadric}, is the double cover of the dual projective space branched over the dual quadric. This reproduces the result of \cite{kuznetsov2019homological}. The Lefschetz decompositions of $D(X)$ and $\mathcal{C}$ are pictured in fig.~\ref{evenquadricdiag}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}[inner sep=0in,outer sep=0in]
\node (n) {
\centering
\begin{ytableau}
\none[{\cal A}_0] & \none[{\cal A}_1]& \none[{\cal A}_2]& \none[{\cal A}_3] & \none & \none[\cdots] &\none& \none[{\cal A}_{\scaleto{n-3\mathstrut}{4pt}}] & \none[{\cal A}_{\scaleto{n-2\mathstrut}{4pt}}] & \none \\
*(gray) & *(gray) & *(gray) & *(gray) & \none & \none[\cdots] &\none & *(gray) & *(gray) & & \\
*(gray) & & & & \none & \none[\cdots] & \none & & & & \\
\none &\none[{\cal B}_{\scaleto{n-1\mathstrut}{5pt}}] &\none[{\cal B}_{\scaleto{n-2\mathstrut}{5pt}}] &\none[{\cal B}_{\scaleto{n-3\mathstrut}{5pt}}] &\none & \none[\cdots] & \none &\none[{\cal B}_3] &\none[{\cal B}_2] &\none[{\cal B}_1] &\none[{\cal B}_0] \\
\end{ytableau}
};
\draw[thin,] (3,.59) -- (-3.3,.59);
\draw[thin,] (3,-.59) -- (-3.3,-.59);
\draw[thin,] (3,0.0) -- (-3.3,.0);
\end{tikzpicture}
\caption{HPD of $\mathbb{P}^n[2]$ with $n$ even}\label{evenquadricdiag}
\end{figure}
When $n$ is odd, we predict that the HPD associated with \eqref{Lefquadric} is the $\mathbb{Z}_2$-orbifold of the double cover of $\check{\mathbb{P}}^n$ branched over the dual quadric. We draw the diagram for the Lefschetz decompositions of $D(X)$ and $\mathcal{C}$ for this case in fig.~\ref{oddquadricdiag}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}[inner sep=0in,outer sep=0in]
\node (n) {
\centering
\begin{ytableau}
\none[{\cal A}_0] & \none[{\cal A}_1]& \none[{\cal A}_2]& \none[{\cal A}_3] & \none & \none[\cdots] &\none& \none[{\cal A}_{\scaleto{n-3\mathstrut}{4pt}}] & \none[{\cal A}_{\scaleto{n-2\mathstrut}{4pt}}] & \none \\
*(gray) & *(gray) & *(gray) & *(gray) & \none & \none[\cdots] &\none & *(gray) & *(gray) & & \\
*(gray) & & & & \none & \none[\cdots] & \none & & & & \\
*(gray) & & & & \none & \none[\cdots] & \none & & & & \\
\none &\none[{\cal B}_{\scaleto{n-1\mathstrut}{4pt}}] &\none[{\cal B}_{\scaleto{n-2\mathstrut}{4pt}}] &\none[{\cal B}_{\scaleto{n-3\mathstrut}{4pt}}] &\none & \none[\cdots] & \none &\none[{\cal B}_3] &\none[{\cal B}_2] &\none[{\cal B}_1] &\none[{\cal B}_0] \\
\end{ytableau}
};
\draw[thin,] (3,.89) -- (-3.3,.89);
\draw[thin,] (3,-.89) -- (-3.3,-.89);
\draw[thin,] (3,.29) -- (-3,.29);
\draw[thin,] (3,-.3) -- (-3,-.3);
\end{tikzpicture}
\caption{HPD of $\mathbb{P}^n[2]$ with $n$ odd}\label{oddquadricdiag}
\end{figure}
When $\zeta_1<\zeta_2<0$, the IR theory is an LG model on $(\mathbb{C}^{\oplus(n+1)} \oplus \mathbb{C}^{\oplus(n+1)})/\mathbb{Z}_2$ with coordinates $X_i$ and $Y_i$ and superpotential
\begin{equation}\label{LG2_quadric}
\sqrt{\frac{\zeta_2-\zeta_1}{2}} G(X) + \sqrt{-\zeta_2} \sum_{i=0}^n X_i Y_i.
\end{equation}
One can construct a GLSM with gauge group $U(1) \times \mathbb{Z}_2$ such that the LG model \eqref{LGquadric1}\eqref{LGquadric2} and the LG model \eqref{LG2_quadric} above describe the Higgs branches of its two phases. This shows that the matrix factorizations of the LG model \eqref{LG2_quadric} form a subcategory of the HPD category.
As in the case of the Veronese embedding, we can apply brane transport to find the image of the functor
\begin{equation}\label{functor_quadric}
F:~\mathrm{MF}(\mathrm{LG}_{\mathrm{HPD}}) \rightarrow \mathcal{C} \subset D(\mathcal{X}).
\end{equation}
To that end, we only need to lift the LG matrix factorizations to GLSM matrix factorizations in the small window corresponding to the wall crossing between the geometric phase and the HPD phase. The small window can be chosen in such a way that it consists of Wilson lines with charges $q=-2,-1,0$ under the first $U(1)$ gauge group. For example, consider the following matrix factorization of \eqref{LGquadric2}:
\begin{equation}\label{MFLG_quadric}
\xymatrix{
\mathcal{O}_0(n-1)^{\oplus N}
\ar@<0.5ex>[rr]^-{\left( \begin{array}{c} -f \\ \psi_{\pm} \end{array} \right)}
&& { \begin{array}{c} \mathcal{O}_1(n)^{\oplus N} \\ \oplus \\ \mathcal{O}_1(n-1)^{\oplus N} \end{array} }
\ar@<0.5ex>[ll]^-{(-P_2, \psi_{\mp})}
\ar@<0.5ex>[rr]^-{(\psi_{\pm},f)}
&& \mathcal{O}_0(n)^{\oplus N}
\ar@<0.5ex>[ll]^-{\left( \begin{array}{c} \psi_{\mp} \\ P_2 \end{array} \right)} },
\end{equation}
where $f = \sum_i X_i Y_i$, the subscript of $\mathcal{O}$ indicates whether it is $\mathbb{Z}_2$-even or odd, and $\psi_{\pm}$ and $N$ are defined in \eqref{spinor}. These LG matrix factorizations can be lifted to GLSM matrix factorizations in the small window as follows:
\begin{equation}\label{MFsmall_quadric}
\xymatrix{
\mathfrak{W}(-2,n-1)_{-2}^{\oplus N}
\ar@<0.5ex>[rr]^-{\left( \begin{array}{c} -f \\ \psi_{\pm} \end{array} \right)}
&& { \begin{array}{c} \mathfrak{W}(-1,n)_{-1}^{\oplus N} \\ \oplus \\ \mathfrak{W}(-1,n-1)_{-1}^{\oplus N} \end{array} }
\ar@<0.5ex>[ll]^-{(-P_2,P_1 \psi_{\mp})}
\ar@<0.5ex>[rr]^-{(\psi_{\pm},f)}
&& \mathfrak{W}(0,n)_0^{\oplus N}
\ar@<0.5ex>[ll]^-{\left( \begin{array}{c} P_1 \psi_{\mp} \\ P_2 \end{array} \right)} }
\end{equation}
which are denoted by $B_{\pm}(n)$. Thus the image of \eqref{MFLG_quadric} under the functor \eqref{functor_quadric} is $\pi_+(B_{\pm}(n))$, where $\pi_+$ is the projection functor from the category of GLSM matrix factorizations onto the derived category of the geometric phase.
Let's denote by $\delta$ the embedding of the universal hyperplane section $\mathcal{X}$ in $X \times \check{\mathbb{P}}^n$, and by $\iota$ the embedding of $X \times \check{\mathbb{P}}^n$ in $\mathbb{P}^n \times \check{\mathbb{P}}^n$. Then by setting $P_1=P_2=0$ in \eqref{MFsmall_quadric}, we see that $\iota_* \circ \delta_* (\pi_+(B_{\pm}(n)))$ is the cone of the morphism
\[ \xymatrix{ \mathcal{O}(-2,n-1)^{\oplus N} \ar@<0.5ex>[r]^-{\psi_{\pm}} \ar@<0.5ex>[d]^-f & \mathcal{O}(-1,n-1)^{\oplus N} \ar@<0.5ex>[d]^-f \\
\mathcal{O}(-1,n)^{\oplus N} \ar@<0.5ex>[r]^-{\psi_{\pm}} & \mathcal{O}(0,n)^{\oplus N} }
\]
Therefore, from the resolution of the spinor bundles \eqref{spinor}, we get
\begin{equation}\label{universalS}
\delta_*(\pi_+(B_{\pm}(n))) = S_{\pm}(-1) \boxtimes \mathcal{O}(n-1) \stackrel{f}{\rightarrow} S_{\pm} \boxtimes \mathcal{O}(n).
\end{equation}
From the short exact sequence
\[
0 \rightarrow S_{\pm}(-1) \rightarrow \mathcal{O}_X^{\oplus N} (-1) \rightarrow S_{\mp} \rightarrow 0,
\]
we see that $\delta_*(\pi_+(B_{\pm}(n)))$ is indeed an object in $\mathcal{A}_0 \boxtimes D(\check{\mathbb{P}}^n)$. Equation \eqref{universalS} then tells us that
\[
\pi_+(B_{\pm}(n)) = (S_{\pm} \boxtimes \mathcal{O}(n))|_{\mathcal{X}}.
\]
\subsection{Complete intersection}\label{subsec:CI_HPD}
We can generalize the discussion of the previous subsection. Let's consider a complete intersection $X = \mathbb{P}^n[d_1,d_2,\cdots,d_k]$. The GLSM realizing this geometry is a $U(1)$ gauge theory with matter content and charges
\[
\begin{array}{ccccccc}
X_0 & X_1 & \cdots & X_n & P_1 & \cdots & P_k \\
1 & 1 & \cdots & 1 & -d_1 & \cdots & -d_k
\end{array}
\]
and superpotential
\begin{equation}\label{W_CI}
W = \sum _{l=1}^k P_l G_l(X),
\end{equation}
where each $G_l(X)$ is a degree $d_l$ polynomial and we assume $n+1 \geq \sum_{l=1}^k d_l$. The positive phase of this GLSM is an NLSM with target space $X$, while the negative phase is a Landau-Ginzburg model with target space
\begin{equation}\label{LG_CI}
\mathrm{Tot}\left( \mathcal{O}(-1)^{\oplus(n+1)} \rightarrow
W\mathbb{P}[d_1,d_2,\cdots,d_k] \right)
\end{equation}
and superpotential \eqref{W_CI}, where $X_i$ are fiber coordinates and $P_l$ are
base coordinates. If we denote the LG model \eqref{LG_CI} by
$\mathrm{LG}(d_1,\cdots,d_k)$, then we have a semiorthogonal decomposition
\[
D(X) = \langle \mathrm{MF}(\mathrm{LG}(d_1,\cdots,d_k)), \mathcal{O},\mathcal{O}(1),\cdots,\mathcal{O}(n - \sum_{l=1}^k d_l) \rangle,
\]
and a Lefschetz decomposition
\begin{equation}\label{Lef_CI}
D(X) = \langle \mathcal{A}_0, \mathcal{A}_1(1),\cdots,\mathcal{A}_{n - \sum_{l=1}^k d_l}(n - \sum_{l=1}^k d_l) \rangle,
\end{equation}
where
\[
\mathcal{A}_0 = \langle \mathrm{MF}(\mathrm{LG}(d_1,\cdots,d_k)), \mathcal{O} \rangle, \quad
\mathcal{A}_i = \langle \mathcal{O} \rangle, \quad i>0.
\]
According to our general discussion in section \ref{sec:proposal}, the HPD of $X$ with respect to the Lefschetz decomposition \eqref{Lef_CI} can be realized by a $U(1) \times U(1)$ GLSM with matter content
\begin{equation}\label{GLSM_HPD_CI}
\begin{array}{ccccccccccc}
X_0 & X_1 & \cdots & X_n & P_0 & P_1 & \cdots & P_k & Y_0 & \cdots & Y_n \\
1 & 1 & \cdots & 1 & -1 & -d_1 & \cdots & -d_k & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & -1 & 0 & \cdots & 0 & 1 & \cdots & 1
\end{array}
\end{equation}
and superpotential
\[
\widehat{W} = \sum _{l=1}^k P_l G_l(X) + P_0 \sum_{i=0}^n X_i Y_i.
\]
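As a consistency check, every monomial in $\widehat{W}$ should be neutral under both $U(1)$'s with the charge assignment \eqref{GLSM_HPD_CI}; a minimal sketch (the degrees $d_l$ below are illustrative sample values, and the helper names are ours):

```python
# Check gauge invariance of each superpotential term under U(1) x U(1)
# with the charges of the HPD GLSM for a complete intersection.
degrees = [2, 3]                          # sample (d_1, ..., d_k)

charge = {"X": (1, 0), "Y": (0, 1), "P0": (-1, -1)}
for l, d in enumerate(degrees, start=1):
    charge[f"P{l}"] = (-d, 0)             # P_l has charge (-d_l, 0)

def total_charge(monomial):
    """Sum the two U(1) charges over the fields in a monomial."""
    q1 = sum(charge[f][0] for f in monomial)
    q2 = sum(charge[f][1] for f in monomial)
    return (q1, q2)

# P_l G_l(X): P_l times d_l factors of X
for l, d in enumerate(degrees, start=1):
    assert total_charge([f"P{l}"] + ["X"] * d) == (0, 0)

# P_0 * X_i * Y_i
assert total_charge(["P0", "X", "Y"]) == (0, 0)
```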
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thin,->] (0,0) -- (4,0) node[anchor=north west]{$\zeta_1$};
\draw[thin,->] (0,0) -- (0,4)node[anchor=south east]{$\zeta_2$};
\draw [ultra thick] (0,0) -- (0,3.5);
\draw [ultra thick] (0,0) -- (3.5,0);
\draw [ultra thick] (0,0) -- (-3.5,-3.5);
\node at (-4.45,-3.65) {\footnotesize $\zeta_1-\zeta_2=0$};
\draw [ultra thick] (0,0) -- (-4,0);
\draw [dashed, thick] (0,0) -- (-3.75,2);
\node at (-5.3,2) {\footnotesize $\zeta_1+(R-1)\zeta_2 =0$};
\draw [dashed, thick] (0,0) -- (1.5, -3.75);
\node at (1.5, -4) {\footnotesize $n\zeta_1 + \zeta_2=0$};
\node at (3,2.5) {\small Geometric phase};
\node at (-2.5,3) {\small LG phase I };
\node at (-3.75,0.75) {\small LG phase II};
\node at (-3.75,- 1.55) {\small LG phase III};
\node at (-0.7, -2.75) {\small Mixed phase II};
\node at (3, -2) {\small Mixed phase I};
\end{tikzpicture}
\quad \quad \quad \caption{Phase diagram of GLSM for HPD of complete intersection $\mathbb{P}^n [d_1, \cdots, d_k]$.}\label{fig:phase_CI}
{\footnotesize \begin{flushleft} Here $R = n+1 - \sum_{l=1}^k d_l$. The geometric phase describes the universal hyperplane section. LG phase I has a Higgs branch described by the LG model $\mathrm{LG}_{\mathrm{HPD}}(d_1,\cdots,d_k)$ (eqs. \eqref{LG1_space}\eqref{LG1_potential}) and $R-1$ mixed branches, each described by $\mathbb{P}^n$. LG phase II has the same Higgs branch as LG phase I and $(n+1)(R-1)$ Coulomb vacua. LG phase III contains a Higgs branch described by the LG model $\mathrm{LG}'(d_1,\cdots,d_k)$ (eqs. \eqref{LG2_space}\eqref{LG2_potential}), $n$ mixed branches (each described by $\mathrm{LG}(d_1,\cdots,d_k)$ defined in eq. \eqref{LG_CI}), and $(n+1)(R-1)$ Coulomb vacua. Mixed phase I contains $n$ mixed branches, each described by the NLSM on $\mathbb{P}^n [d_1, \cdots, d_k]$. Mixed phase II has the same mixed branches as LG phase III, and $nR$ Coulomb vacua. \end{flushleft}}
\end{figure}
The phase diagram of this GLSM is shown in Figure \ref{fig:phase_CI}. The Higgs branches in various phases are as follows:\\
(i) $\zeta_1>0,\zeta_2>0$: NLSM with target space the universal hyperplane section
\[
\mathcal{X} = \{G_1=\cdots=G_k=0,\sum_{i=0}^n X_i Y_i = 0\} \subset \mathbb{P}^n \times \check{\mathbb{P}}^n,
\]
where the $X_i$'s and $Y_i$'s are homogeneous coordinates on $\mathbb{P}^n$ and $\check{\mathbb{P}}^n$ respectively.\\
(ii) $\zeta_1<0, \zeta_2>0$: Landau-Ginzburg model with target space
\begin{equation}\label{LG1_space}
\mathrm{Tot}\left( \mathcal{O}(-1,0)^{\oplus(n+1)} \oplus \mathcal{O}(1,-1)
\rightarrow \mathrm{W}\mathbb{P}[d_1,\cdots,d_k] \times \check{\mathbb{P}}^n
\right)
\end{equation}
and superpotential
\begin{equation}\label{LG1_potential}
\widetilde{W} = \sum _{l=1}^k P_l G_l(X) + P_0 \sum_{i=0}^n X_i Y_i,
\end{equation}
where $P_l$ are homogeneous coordinates of the weighted projective space
$\mathrm{W}\mathbb{P}[d_1,\cdots,d_k]$, $Y_i$ are homogeneous coordinates of
$\check{\mathbb{P}}^n$, $X_i$ and $P_0$ are coordinates along the fibers of
$\mathcal{O}(-1,0)^{\oplus(n+1)}$ and $\mathcal{O}(1,-1)$ respectively. This LG
model describes the HPD of the complete intersection $X =
\mathbb{P}^n[d_1,d_2,\cdots,d_k]$ with respect to the Lefschetz decomposition
\eqref{Lef_CI}, so let us denote it by
$\mathrm{LG}_{\mathrm{HPD}}(d_1,\cdots,d_k)$. In the special case of
hypersurfaces, $k=1$, $\mathrm{LG}_{\mathrm{HPD}}(d)$ has target space
\[
\mathrm{Tot} \left( \mathcal{O}^{\oplus(n+1)} \oplus \mathcal{O}(-1) \rightarrow
\check{\mathbb{P}}^n \right) / \mathbb{Z}_d
\]
and superpotential
\[
\widetilde{W} = \sqrt{\frac{-\zeta_1}{d}} G(X) + P_0 \sum_{i=0}^n X_i Y_i.
\]\\
(iii) $\zeta_1 < \zeta_2 < 0$: LG model with target space
\begin{equation}\label{LG2_space}
\mathrm{Tot}\left( \mathcal{O}(-1)^{\oplus(n+1)} \oplus \mathcal{O}(1)^{\oplus
(n+1)} \rightarrow \mathrm{W}\mathbb{P}[d_1,\cdots,d_k] \right)
\end{equation}
and superpotential
\begin{equation}\label{LG2_potential}
W' = \sum _{l=1}^k P_l G_l(X) + \sqrt{-\zeta_2} \sum_{i=0}^n X_i Y_i.
\end{equation}
Let's denote this LG model by $\mathrm{LG}'(d_1,\cdots,d_k)$.\\
(iv) $\zeta_2<0, \zeta_1 > \zeta_2$: No Higgs branch.
The Coulomb and mixed branches of different phases can be analyzed by studying the local models associated with wall crossing and the asymptotic behavior of the Coulomb vacua.\footnote{See appendix \ref{app:CoulombEOM} for details.}
\section{\label{sec:mutually orthogonal section}GLSMs for mutually orthogonal linear sections}
The main theorem of \cite{kuznetsov2007homological} reviewed in section \ref{sec:section2} tells us that, for an HPD pair $X \hookrightarrow \mathbb{P}(V)$ and $Z \hookrightarrow \mathbb{P}(V^\vee)$ and a subspace $L\subseteq V^\vee$, we have mutually orthogonal linear sections $X_L$ and $Z_{L}$. As discussed in section \ref{sec:linearsec}, we can take the $U(1)\times U(1)$ GLSM for the HPD associated with the embedding $X \hookrightarrow \mathbb{P}(V)$ and restrict it to the subspace $L$. In practice, this is done by deleting some of the fields corresponding to the homogeneous coordinates of $\mathbb{P}(V^\vee)$ and keeping only those of $L$. The resulting GLSM will have a phase with Higgs branch described by $X_L$ and another phase described by $Z_L$. This construction embeds $X_L$ and $Z_L$ into the same GLSM as long as both are nonempty. In general, $Z$ is described by a hybrid model with base $\mathbb{P}(V^\vee)$; therefore $Z_L$ can be described by a hybrid model with base $\mathbb{P}(L)$. Here we regard $Z$ and $Z_L$ as noncommutative spaces whose derived categories are equivalent to the categories of matrix factorizations of the corresponding hybrid models.
In the case that $D(Y_{\zeta_{l}\ll -1},W_{\zeta_{l}\ll -1})$ is empty in the original GLSM describing $X$, such as in the case of Veronese embedding, we can go to the strong coupling limit of the $U(1)_{s+1}$ gauge group under which the deleted fields are charged, and the resulting GLSM also realizes $X_L$ and $Z_L$ in different phases.
\subsection{Veronese embedding}
Let's first consider the Veronese embedding $\mathbb{P}(V) \rightarrow \mathbb{P}(\mathrm{Sym}^d V)$. Suppose that $\dim V = n+1$, that the linear subspace $L \subseteq \mathrm{Sym}^d V^\vee$ has dimension $m$, and that the complete intersection $\mathbb{P}(V)_L$ is defined by
\[
\mathbb{P}(V)_L = \{X \in \mathbb{P}(V) | Q_1(X) = \cdots = Q_m(X) = 0\},
\]
where the $Q_a(X)$'s are independent degree-$d$ polynomials associated with $L$ and $m < n$.
Upon restricting to $L$, the GLSM \eqref{GLSM_Veronese_universal} reduces to the GLSM with the following matter content:
\begin{equation}\label{2veroL}
\begin{array}{cccccccc}
X_0 & X_1 & \cdots & X_n & P & S_1 & \cdots & S_m \\
1 & 1 & \cdots & 1 & -d & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & -1 & 1 & \cdots & 1
\end{array}
\end{equation}
and superpotential
\[
\widehat{W} = P \sum_{a=1}^m S_a Q_a(X).
\]
As discussed in section \ref{sec:proposal}, by solving the D-term and F-term equations one can show:\\
(i)~$\zeta_1>0,\zeta_2>0$:~The Higgs branch is the subvariety defined by
\[
\sum_a S_a Q_a(X) = 0
\]
in $\mathbb{P}^n \times \mathbb{P}(L)$.\\
(ii)~$\zeta_2<0,\zeta_1 > d \zeta_2$:~The Higgs branch is $X_L$ provided that the transversality conditions are satisfied.\\
(iii)~$\zeta_1 <0, \zeta_1< d \zeta_2$:~The Higgs branch is $Z_{L}$, where $Z_L$ is the noncommutative space whose derived category is defined by the matrix factorizations of the Landau-Ginzburg model on
\[
\left[ \mathrm{Tot}(\mathcal{O}\left(-\frac{1}{d}\right)^{\oplus (n+1)} \rightarrow \mathbb{P}^{m-1})/\mathbb{Z}_d \right]
\]
with superpotential
\begin{equation}\label{potential2verL}
\widetilde{W} = \sqrt{-\frac{\zeta_1}{d}} \sum_{a=1}^m S_a Q_a(X),
\end{equation}
where $\mathbb{Z}_d$ acts on the fiber. When $d=2$, the category of matrix factorizations is equivalent to
\[
D(Z_{L}) = D(\mathbb{P}(L), \mathcal{C}l_0).
\]
In the strong coupling limit of the second $U(1)$, the GLSM describing $X_L$ and $Z_{L}$ is a $U(1)$ gauge theory with matter content
\begin{equation}\label{2veroLSim}
\begin{array}{ccccccc}
X_0 & X_1 & \cdots & X_n & S_1 & \cdots & S_m \\
1 & 1 & \cdots & 1 & -d & \cdots & -d
\end{array}
\end{equation}
and superpotential
\begin{equation}\label{W_YL_Veronese}
W_{0} = \sum_{a=1}^m S_a Q_a(X).
\end{equation}
From the D-term and F-term equations, one can see that the Higgs branch of the positive phase is given by $X_L$. The Higgs branch of the negative phase is the LG model above describing $Z_L$.
Some examples of the GLSM \eqref{2veroLSim} have been studied previously in the literature \cite{Caldararu:2007tc, Addington:2012zv}. The geometric interpretation was also investigated by various means. Here we list a few examples:\\
$\bullet$ $X_L = \mathbb{P}^3[2,2]:$ Double Veronese embedding of $X=\mathbb{P}^3$, $\dim L = 2$, $Z_L=$ branched double cover of $\mathbb{P}^1$.
In the LG phase, the $X_i$'s can be integrated out due to the mass they obtain from the superpotential, and the theory becomes a non-linear sigma model on a branched double cover of $\mathbb{P}^1$ \cite{Caldararu:2007tc, Addington:2012zv}, which is an elliptic curve $C$. If we rewrite the superpotential \eqref{W_YL_Veronese} as $W_{0} = \sum_{i,j} X_i A_{ij}(S) X_j$, where the $4 \times 4$ matrix $A$ is linear in the $S_a$, then the four branch points are given by the solutions of
\[
\det(A) = 0.
\]
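Generically $\det(A)$ is a nonzero homogeneous quartic on $\mathbb{P}^1$, so there are indeed four branch points counted with multiplicity; a small numerical sketch (the sample matrices and helper names are ours):

```python
from itertools import permutations
from random import randint, seed

def det(M):
    """Integer determinant via permutation expansion (fine for 4x4)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):              # sign via inversion count
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def symmetric_random(n):
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            M[i][j] = M[j][i] = randint(-5, 5)
    return M

seed(0)
A1, A2 = symmetric_random(4), symmetric_random(4)

# p(t) = det(A1 + t*A2) on the affine chart S = (1, t) of P^1
p = [det([[A1[i][j] + t * A2[i][j] for j in range(4)] for i in range(4)])
     for t in range(5)]

# 4th finite difference / 4! extracts the degree-4 coefficient, det(A2):
lead = (p[4] - 4*p[3] + 6*p[2] - 4*p[1] + p[0]) // 24
assert lead == det(A2)
# Generically det(A2) != 0, so det(A) = 0 has four roots on P^1
# (with multiplicity): the four branch points of the double cover.
```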
Clearly, the matrix factorization
\begin{equation}\label{MF_P3_22}
\xymatrix{
\mathfrak{W}(q-4) \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathfrak{W}(q-3)^{\oplus 4} \ar@<0.5ex>[l]^-{AX\eta} \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathfrak{W}(q-2)^{\oplus 6} \ar@<0.5ex>[l]^-{AX\eta} \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathfrak{W}(q-1)^{\oplus 4} \ar@<0.5ex>[l]^-{AX\eta} \ar@<0.5ex>[r]^-{X\bar{\eta}} & \mathfrak{W}(q) \ar@<0.5ex>[l]^-{AX\eta}}
\end{equation}
is equivalent to\footnote{See appendix \ref{app:MFgerbe} for more details on matrix factorization on gerbes.}
\[
\mathcal{O}_{[\mathbb{P}^1/\mathbb{Z}_2]}\left(-\frac{q}{2}\right)
\]
in the IR limit because $W_{0}=0$ on the base $\mathbb{P}^1$.
Let $f$ be the projection from $C$ to $\mathbb{P}^1$. For the degree $m$ line bundle $\mathcal{O}_{\mathbb{P}^1}(m)$ on $\mathbb{P}^1$, the pullback $f^*(\mathcal{O}(m))$ is a degree $2m$ line bundle $\mathcal{E}_{2m}$ on $C$. From the fact that
\[
f^*(\mathcal{L}_1 \otimes \mathcal{L}_2) = f^*(\mathcal{L}_1) \otimes f^*(\mathcal{L}_2)
\]
for any pair of line bundles $\mathcal{L}_1$ and $\mathcal{L}_2$,
we conclude that the matrix factorization \eqref{MF_P3_22} projects to the sheaf of sections of $\mathcal{E}_{-q}$ on the elliptic curve $C$ in the IR limit.
The argument above shows the following equivalence between categories
\[
D(C) \cong D(Z_L) \cong D^b(\mathbb{P}^1,\mathcal{C}l_0),
\]
where $\mathcal{C}l_0$ consists of sheaves of modules of the even part of the Clifford algebra determined by $W_{0}$.
By comparing the categories of B-branes of the two phases, we also recover
\[
D^b(C) \cong D^b(X_L),
\]
which is consistent with the fact that $C$ and $X_L = \mathbb{P}^3[2,2]$ are both elliptic curves.
$\bullet$ $X_L = \mathbb{P}^5[2,2,2]:$ Double Veronese embedding of $X=\mathbb{P}^5$, $\dim L = 3$, $Z_L=$ branched double cover of $\mathbb{P}^2$.
The analysis of this example is essentially the same as that of the last example. The IR physics of the positive phase is described by a non-linear sigma model on the complete intersection $M=\mathbb{P}^5[2,2,2]$, which is a K3 surface. The IR physics of the negative phase is described by a non-linear sigma model on the branched double cover of $\mathbb{P}^2$, which is also a K3 surface $S$. The branch locus is the degree 6 curve in $\mathbb{P}^2$. Again we have the following equivalence between categories
\[
D(S) \cong D(Z_L) \cong D(\mathbb{P}^2,\mathcal{C}l_0) \cong D(M).
\]
$\bullet$ $X_L = \mathbb{P}^7[2,2,2,2]:$ Double Veronese embedding of $X = \mathbb{P}^7$, $\dim L = 4$, $Z_L=$ noncommutative resolution of branched double cover of $\mathbb{P}^3$.
This example is different from the last two because the double cover of $\mathbb{P}^3$ is ramified along a degree 8 surface, which is generically singular, while the complete intersection $X_L=\mathbb{P}^7[2,2,2,2]$ is in general a smooth Calabi-Yau 3-fold. It was shown in \cite{Addington:2012zv} that the moduli space of point-like B-branes in the negative phase is a small resolution of the double cover\footnote{See appendix \ref{app:D0} for more details on point-like branes.}. However, the low energy physics cannot be described by the small resolution because it is not K\"{a}hler.
The GLSM analysis is nevertheless valid, so we still have\footnote{A mathematical proof can be found in \cite{kuznetsov2008derived}.}
\begin{equation}\label{hpdual1}
D(X_L) \cong D(Z_L) \cong D(\mathbb{P}^3,\mathcal{C}l_0),
\end{equation}
where $D(Z_L)$ is equivalent to the category of matrix factorizations of LG model on
\[
\left[\mathrm{Tot}(\mathcal{O}(-\frac{1}{2})^{\oplus 8} \rightarrow \mathbb{P}^3)/\mathbb{Z}_2\right].
\]
\eqref{hpdual1} justifies the statement that the negative phase of this GLSM is described by the noncommutative resolution $(\mathbb{P}^3,\mathcal{C}l_0)$.
$\bullet$ $X_L = \mathbb{P}^5[3,3]:$ Degree 3 Veronese embedding of $X=\mathbb{P}^5$, $\dim L=2$, $Z_L=$ a family of noncommutative K3 surfaces fibered over $\mathbb{P}^1$.
The GLSM is a $U(1)$ gauge theory with chiral multiplets $X_i$, $i=1,\cdots,6$, and $P_j$, $j=1,2$ with the following charge assignment:
\[
\begin{array}{cccccccc}
X_1 & X_2 & X_3 & X_4 & X_5 & X_6 & P_1 & P_2 \\
1 & 1 & 1 & 1 & 1 & 1 & -3 & -3
\end{array}
\]
and a superpotential:
\begin{equation}\label{potential1}
W = P_1 G_1(X)+P_2 G_2(X),
\end{equation}
where $G_1, G_2$ are cubic polynomials in $X_i$. The positive phase is described by a NLSM whose target space is the complete intersection
\[
X_L = \{ G_1 = G_2 = 0 \} \subseteq \mathbb{P}^5,
\]
which is a Calabi-Yau 3-fold. The negative phase is described by a LG model on
\[
\left[\mathrm{Tot}\left(\mathcal{O}\left(-\frac{1}{3}\right)^{\oplus 6} \rightarrow \mathbb{P}^1\right)/\mathbb{Z}_3\right]
\]
with superpotential \eqref{potential1}. Thus the HPD category $D(Z_L)$ can be described by matrix factorizations of this LG model. Because the category of matrix factorizations of an LG model with a cubic superpotential in six variables is equivalent to the derived category of a noncommutative K3 surface, $Z_L$ can be thought of as a family of noncommutative K3 surfaces fibered over $\mathbb{P}^1$.
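The K3 identification of the fiber is consistent with the standard Landau-Ginzburg central-charge count $\hat{c} = \sum_i (1-2q_i)$: six variables of R-charge $1/3$ give $\hat{c}=2$, the central charge of a K3 sigma model. A minimal sketch:

```python
from fractions import Fraction

def central_charge(weights):
    """hat-c = sum_i (1 - 2 q_i) for a quasi-homogeneous LG superpotential."""
    return sum(1 - 2 * q for q in weights)

# Cubic superpotential in six variables: each field has R-charge q = 1/3
q = [Fraction(1, 3)] * 6
assert central_charge(q) == 2   # hat-c of a K3 sigma model
```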
\subsection{Complete intersection}
For a complete intersection $X = \mathbb{P}^n[d_1,d_2,\cdots,d_k]$ embedded in $\mathbb{P}^n$, the HPD associated with the Lefschetz decomposition \eqref{Lef_CI} is described by the GLSM \eqref{GLSM_HPD_CI}. Assume that $L$ is an $(m+1)$-dimensional subspace of $V^\vee$ with $0 \leq m < n-k$; then the GLSM realizing the duality between $X_L$ and $Z_L$ has the following matter content and charges under the $U(1) \times U(1)$ gauge symmetry:
\begin{equation}\label{GLSM_mutual_CI}
\begin{array}{ccccccccccc}
X_0 & X_1 & \cdots & X_n & P_0 & P_1 & \cdots & P_k & Y_0 & \cdots & Y_m \\
1 & 1 & \cdots & 1 & -1 & -d_1 & \cdots & -d_k & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & -1 & 0 & \cdots & 0 & 1 & \cdots & 1
\end{array}
\end{equation}
and superpotential:
\[
W = \sum_{l=1}^k P_l G_l(X) + P_0 \sum_{j=0}^m Y_j L_j(X),
\]
where $\deg(G_l) = d_l$ and the $L_j$'s are linear. The D-term and F-term constraints give rise to the following Higgs branches:\\
(i)~$\zeta_1 \gg 1, \zeta_2 \gg 1$: Nonlinear sigma model with target space
\[
\mathcal{X}_L = \{ G_1(X) = \cdots =G_k(X)=0, \sum_{j=0}^m Y_j L_j(X) = 0\} \subset \mathbb{P}(V) \times \mathbb{P}(V^\vee).
\]
(ii)~$\zeta_1 \gg 1, \zeta_2 \ll -1$ or $\zeta_2 < \zeta_1 \ll -1$: Nonlinear sigma model with target space
\[
X_L = \{G_1(X)=\cdots=G_k(X)=0,L_0(X) = L_1(X) = \cdots = L_m(X) = 0\} \subset \mathbb{P}(V).
\]
(iii)~$\zeta_1 \ll -1, \zeta_2 \gg 1$: the Higgs branch is $Z_L$, which can be described by the Landau-Ginzburg model with target space
\[
\mathrm{Tot}\left( \mathcal{O}(-1,0)^{\oplus(n+1)} \oplus \mathcal{O}(1,-1)
\rightarrow \mathrm{W}\mathbb{P}[d_1,\cdots,d_k] \times \mathbb{P}(L) \right)
\]
and superpotential
\[
W = \sum _{l=1}^k P_l G_l(X) + P_0 \sum_{j=0}^m Y_j L_j(X),
\]
where $P_l$ are homogeneous coordinates of the weighted projective space $\mathrm{W}\mathbb{P}[d_1,\cdots,d_k]$, $Y_j$ are homogeneous coordinates of $\mathbb{P}(L)$, and $X_i$ and $P_0$ are coordinates along the fibers of $\mathcal{O}(-1,0)^{\oplus(n+1)}$ and $\mathcal{O}(1,-1)$ respectively.\\
(iv)~$\zeta_1 < \zeta_2 \ll -1$: LG model with target space
\begin{equation}\label{LG2_CI_mutual}
\mathrm{Tot}\left( \mathcal{O}(-1)^{\oplus(n+1)} \oplus
\mathcal{O}(1)^{\oplus(m+1)} \rightarrow \mathrm{W}\mathbb{P}[d_1,\cdots,d_k]
\right)
\end{equation}
and superpotential
\[
W = \sum_{l=1}^k P_l G_l(X) + \sum_{i=0}^m Y_i L_i(X),
\]
where $P_l$ serve as homogeneous coordinates on the base, $X_i$, $Y_j$ are coordinates along the fiber.
We see that $X_L$ and $Z_L$ are described by the Higgs branches of phases (ii) and (iii) above. Note that in this case, because $\mathrm{LG}(d_1,d_2,\cdots,d_k)$ described in section \ref{subsec:CI_HPD} is nonempty, we cannot take the strong coupling limit of the second $U(1)$ gauge group to embed $X_L$ and $Z_L$ in a single $U(1)$ GLSM. Indeed, the resulting $U(1)$ GLSM would have $X_L$ in the positive phase and the LG model \eqref{LG2_CI_mutual} in the negative phase; the matrix factorizations of the latter form a subcategory of the HPD category.
\section{\label{sec:nonabelian}Comments on nonabelian models: Pl\"ucker embedding}
Though our main concern in this paper is HPDs of abelian theories, in this section we briefly sketch how our construction applies to nonabelian theories as well, using the example of the Pl\"ucker embedding. We leave a detailed analysis of nonabelian theories to future work.
The Grassmannian $G(k,V)$ with $\dim V = N$ can be implemented by the geometric phase of a nonabelian GLSM with $U(k)$ gauge group and $N$ chiral fields in the fundamental representation. From our general construction, the HPD of the Pl\"ucker embedding
\[
G(k,V) \rightarrow \mathbb{P}(\wedge^k V)
\]
can be described by a nonabelian GLSM with gauge group $U(k) \times U(1)$ and the following matter content
\begin{equation}\label{GLSM_Gr}
\begin{array}{cccccc}
& \Phi_1 & \cdots & \Phi_N & P & Y_{i_1\cdots i_k}\\
U(k) & \square & \cdots & \square & \det^{-1} & 0 \\
U(1) & 0 & \cdots & 0 & -1 & 1
\end{array}
\end{equation}
where $\square$ stands for the fundamental representation of $U(k)$, and the indices $i_p$ satisfy $1 \leq i_1 < i_2 < \cdots < i_k \leq N$. The superpotential is
\[
W = P \sum_{1 \leq i_1 < i_2 < \cdots < i_k \leq N} B_{i_1\cdots i_k} Y_{i_1\cdots i_k},
\]
where the Pl\"ucker coordinates read
\[
B_{i_1\cdots i_k} = \epsilon_{a_1\cdots a_k} \Phi^{a_1}_{i_1} \cdots \Phi^{a_k}_{i_k}.
\]
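For $k=2$ the Pl\"ucker coordinates obey the familiar quadratic relation cutting out $G(2,4) \subset \mathbb{P}(\wedge^2 V)$; a quick numerical sketch for $N=4$ (helper names and sample data are ours):

```python
from random import randint, seed

def plucker(phi, i, j):
    """B_ij = eps_{ab} phi^a_i phi^b_j for a 2 x N matrix phi (k = 2)."""
    return phi[0][i] * phi[1][j] - phi[0][j] * phi[1][i]

seed(1)
N = 4
phi = [[randint(-9, 9) for _ in range(N)] for _ in range(2)]
B = {(i, j): plucker(phi, i, j) for i in range(N) for j in range(i + 1, N)}

# The quadratic Plucker relation defining G(2,4) inside P^5:
assert B[(0,1)]*B[(2,3)] - B[(0,2)]*B[(1,3)] + B[(0,3)]*B[(1,2)] == 0
```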
When the FI parameters satisfy $\zeta_1 \gg 1, \zeta_2 \gg 1$, the IR theory is an NLSM with target space the universal hyperplane section
\[
\mathcal{X} = \left\{ \sum_{1 \leq i_1 < i_2 < \cdots < i_k \leq N} B_{i_1\cdots i_k} Y_{i_1\cdots i_k} = 0 \right\} \subset G(k,V) \times \mathbb{P}(\wedge^k V^\vee),
\]
where $Y_{i_1\cdots i_k}$ serve as homogeneous coordinates of $\mathbb{P}(\wedge^k V^\vee)$.
When $\zeta_1 \ll -1, \zeta_2 > \zeta_1$, the $P$ field receives a vev $\langle P \rangle = \sqrt{-\zeta_1}$, breaking the $U(k)$ gauge symmetry to $SU(k)$. Thus we get a family of $SU(k)$ gauge theories fibered over $\mathbb{P}(\wedge^k V^\vee)$, each with $N$ fundamentals and superpotential
\[
W = \sqrt{-\zeta_1} \sum_{1 \leq i_1 < i_2 < \cdots < i_k \leq N} B_{i_1\cdots i_k} Y_{i_1\cdots i_k}.
\]
Note that every $B_{i_1\cdots i_k}$ is a section of $\mathcal{O}(-1)$ over $\mathbb{P}(\wedge^k V^\vee)$. The HPD category is expected to be equivalent to the category of matrix factorizations of this $SU(k)$ gauge theory.
Now we want to know which Lefschetz decomposition this corresponds to; this requires knowledge of the twisting bundle $\mathcal{L}$, which is the pull-back of $\mathcal{O}(1)$ under the Pl\"ucker embedding, and of $\mathcal{A}_0$ in the Lefschetz decomposition. Obviously,
\[
\mathcal{L} = \wedge^k \mathcal{S}^\vee,
\]
where $\mathcal{S}$ is the tautological bundle over $G(k,V)$. For a semiorthogonal decomposition of $G(k,V)$, every generator that is not in $\mathcal{A}_0$ will give rise to a mixed branch in the HPD phase above. Therefore, in order to determine the number of generators in $\mathcal{A}_0$, we only need to count the number of mixed branches in the HPD phase, which is equal to the number of Coulomb vacua of the local model corresponding to the wall crossing at the boundary between the geometric phase and the HPD phase. The local model is a $U(k)$ gauge theory with $N$ fundamentals and one chiral field in the $\det^{-1}$ representation. The Coulomb vacua satisfy the equations
\[
\sigma_a^N = (-1)^{k} q \sum_{b=1}^k \sigma_b,\quad a=1,\cdots,k,
\]
and
\[
\sigma_a \neq \sigma_b \quad \mathrm{for} ~ a \neq b.
\]
In counting the number of solutions, we should take into account the residual $S_k$ gauge symmetry that permutes the $\sigma_a$'s. For $k=2$, the number of Coulomb vacua is $(N-1)(N-2)/2$, which is exactly the number of generators that are not in $\mathcal{A}_0$ of the Lefschetz decomposition associated with Kapranov's collection, namely
\begin{equation}\label{Lef_Gr}
\begin{split}
D(G(2,N)) &= \langle \mathcal{A}_0, \mathcal{A}_1(1),\cdots,\mathcal{A}_{N-1}(N-1) \rangle,
\\
\mathcal{A}_0 = \langle \mathcal{O}, \mathcal{S}^\vee, \cdots, \mathrm{Sym}^{N-2}\mathcal{S}^\vee \rangle, \quad &\mathcal{A}_1 = \langle \mathcal{O}, \mathcal{S}^\vee,\cdots, \mathrm{Sym}^{N-3}\mathcal{S}^\vee \rangle, \cdots, \mathcal{A}_{N-2} = \langle \mathcal{O} \rangle.
\end{split}
\end{equation}
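As a consistency check on this counting (using the standard fact that Kapranov's collection for $G(2,N)$ consists of the $N(N-1)/2$ Schur bundles $\Sigma^{(a,b)}\mathcal{S}^\vee$ with $N-2\geq a\geq b\geq 0$), the number of generators outside $\mathcal{A}_0$, which contains the $N-1$ bundles $\mathcal{O},\mathcal{S}^\vee,\dots,\mathrm{Sym}^{N-2}\mathcal{S}^\vee$, is
\[
\frac{N(N-1)}{2} - (N-1) = \frac{(N-1)(N-2)}{2},
\]
in agreement with the Coulomb-vacua count above.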
Therefore, we predict that the HPD of the Pl\"ucker embedding realized by the GLSM \eqref{GLSM_Gr} is with respect to the Lefschetz decomposition \eqref{Lef_Gr}.
If we restrict the theory \eqref{GLSM_Gr} to a subspace $L \subseteq \wedge^k V^\vee$ and $\dim L = m$, then we get a $U(k) \times U(1)$ GLSM with matter content
\begin{equation}\label{GLSM_Gr_mutual}
\begin{array}{cccccccc}
& \Phi_1 & \cdots & \Phi_N & P & Y_1 & \cdots & Y_m \\
U(k) & \square & \cdots & \square & \det^{-1} & 0 & \cdots & 0 \\
U(1) & 0 & \cdots & 0 & -1 & 1 & \cdots & 1
\end{array}
\end{equation}
and superpotential
\[
W = P \sum_{j=1}^m Y_j L_j(B),
\]
where $L_j(B)$'s are linear functions in the Pl\"ucker coordinates. If we take the strong coupling limit of the $U(1)$ gauge group, then we get a $U(k)$ GLSM with matter content
\begin{equation}\label{GLSM_Gr_mutual_sim}
\begin{array}{ccccccc}
& \Phi_1 & \cdots & \Phi_N & P_1 & \cdots & P_m \\
U(k) & \square & \cdots & \square & \det^{-1} & \cdots & \det^{-1}
\end{array}
\end{equation}
and superpotential
\[
W = \sum_{j=1}^m P_j L_j(B).
\]
This theory is of the type studied in \cite{Hori:2006dk}. The positive phase is described by a NLSM with target space $X_L$, which is a complete intersection in $G(k,N)$ defined by $L_j = 0$ for all $j$. The negative phase is described by a family of $SU(k)$ gauge theories fibered over $\mathbb{P}(L)$. The category of matrix factorizations of this theory is defined to be $D(Z_L)$. In some cases, $Z$ and $Z_L$ have geometric interpretations. For example, when $k=2, N=7$ and $\dim L = 7$, $X = G(2,7)$ and $Z$ is the Pfaffian variety $\mathrm{Pf}(4,\wedge^2 V^\vee)$, $X_L$ is the Calabi-Yau three-fold $G(2,7)[1,1,1,1,1,1,1]$ and $Z_L$ is the intersection $Z \cap \mathbb{P}(L)$ in $\mathbb{P}(\wedge^2 V^\vee)$ as shown in \cite{Hori:2006dk}.
\section{Future directions}
In this work, we proposed a construction of a GLSM $\mathcal{T}_{\mathcal{X}}$ that realizes the HPD of $(X, \mathcal{T}_{X})$, as well as its linear sections. This raises many questions and suggests several future directions:
\begin{itemize}
\item \textbf{More general Lefschetz decompositions}. In the work of \cite{rennemo2017fundamental} the Lefschetz decomposition of $D(X)$ is arbitrary. However, we are constrained to the Lefschetz decomposition induced by our pair $(\mathcal{T}_{X},\mathcal{T}_{\mathcal{X}})$. A natural question is: given a Lefschetz decomposition of $D(X)$, how can one construct a GLSM that induces it, or refine the window categories so as to restrict uniquely to a given $\mathcal{A}_{i}$ component?
\item \textbf{Nonabelian theories}. An obvious extension of our work is to consider nonabelian GLSMs. We only sketched the simplest generalization of the projective space embedding, namely $G(k,N)\hookrightarrow \mathbb{P}^{r}$. Already this example is not completely solved in the mathematics literature. Even in this seemingly simple situation HPD is only known for certain Grassmannians \cite{kuznetsov2006homological,deliu2011homological} and conjectured for the rest. A deeper study of the GLSM presented in section \ref{sec:nonabelian} may lead to new results on this subject. Let us mention that there are also results regarding HPD of other varieties that can be realized via nonabelian GLSMs, see for instance \cite{bernardara2016homological}.
\item \textbf{HPD of nongeometric phases}. In the examples, we studied $\mathrm{dim}_{\mathbb{C}}\mathcal{M}_{K}=1$, but it should be possible to generalize the construction to multi-parameter models and to define HPD for nongeometric phases, for instance LG orbifold models or hybrid models. On the other hand, as we saw in the quadric example of section \ref{sec:quadrics}, the hybrid HPD has a geometric interpretation; it would be interesting to know whether this is also the case in other examples where the HPD obtained via $\mathcal{T}_{\mathcal{X}}$ is new.
\item \textbf{Mirror of HPD}. Having a GLSM realization $\mathcal{T}_{\mathcal{X}}$ of the HPD of a space opens the possibility of applying mirror symmetry for GLSMs \cite{Morrison:1995yh,Hori:2000kt,Gu:2018fpm,Gu:2019byn}. Even for the simple abelian models studied here, this would be interesting and would provide new results.
\end{itemize}
\section*{Acknowledgements}
We would like to thank Will Donovan, Alexander Kuznetsov, David Favero, Daniel Pomerleano, Johanna Knapp, Richard Eager, Kentaro Hori and Eric Sharpe for useful discussions and comments. JG acknowledges support from the China Postdoctoral Science Foundation No. 2020T130353. MR thanks Harvard U. and IASM at Zhejiang U. for hospitality. MR acknowledges support from the National Key Research and Development Program of China, grant No. 2020YFA0713000, and the Research Fund for International Young Scientists, NSFC grant No. 11950410500.
\section{Introduction}
Federated learning (FL) \cite{Konen16,McMahan17} is an emerging distributed learning paradigm on decentralized data. In FL, there are multiple clients (e.g., smartphones, IoT devices, and edge devices) and a service provider (e.g., Google, Apple, and IBM). Each client holds a local training dataset; and the service provider enables the clients to jointly learn a model (called \emph{global model}) without sharing their raw local training data with the service provider. Due to its potential promise of protecting private/proprietary client data, particularly in the age of emerging privacy regulations such as General Data Protection Regulation (GDPR), FL has been deployed by high-profile companies. For instance, Google has deployed FL for next-word prediction on Android Gboard~\cite{gboard}; WeBank uses FL for credit risk prediction~\cite{webank}; and more than 10 leading pharmaceutical companies leverage FL for drug discovery in the project MELLODDY~\cite{melloddy}. Roughly speaking, FL iteratively performs the following three steps:
the server provided by the service provider sends the current {global model} to the clients or a selected subset of them; each selected client trains a model (called \emph{local model}) via fine-tuning the global model using its own local training data and sends the local model updates back to the server\footnote{It is algorithmically equivalent to send local models instead of their updates to the server.}; and the server aggregates the local model updates to be a \emph{global model update} according to an \emph{aggregation rule} and uses it to update the global model. For instance, FedAvg~\cite{McMahan17}, a popular FL method in non-adversarial settings developed by Google, computes the average of the local model updates weighted by the sizes of local training datasets as the global model update.
However, due to its distributed nature, FL is vulnerable to adversarial manipulations on malicious clients, which could be fake clients injected by an attacker or genuine clients compromised by an attacker. For instance, malicious clients can corrupt the global model via poisoning their local training data (known as \emph{data poisoning attacks}~\cite{biggio2012poisoning,Nelson08poisoningattackSpamfilter}) or their local model updates sent to the server (called \emph{local model poisoning attacks}~\cite{fang2019local,bhagoji2019analyzing,bagdasaryan2020backdoor,xie2019dba}). The corrupted global model makes incorrect predictions for a large number of testing examples indiscriminately (called \emph{untargeted attacks}) \cite{fang2019local}, or it predicts attacker-chosen target labels for attacker-chosen target testing examples while the predicted labels for other non-target testing examples are unaffected (called \emph{targeted attacks}) \cite{bagdasaryan2020backdoor,bhagoji2019analyzing,xie2019dba}. For instance, the global model in FedAvg can be arbitrarily manipulated by a single malicious client~\cite{Blanchard17,Yin18}.
Byzantine-robust FL methods \cite{Blanchard17,ChenPOMACS17,Mhamdi18,yang2019byzantine,Yin18}
aim to address malicious clients. The goal therein is to learn an accurate global model when a bounded number of clients are malicious. Their key idea is to
leverage \emph{Byzantine-robust aggregation rules}, which essentially compare the clients' local model updates and remove statistical outliers before using them to update the global model. For instance, Median~\cite{Yin18} computes the coordinate-wise median of the clients' local model updates as the global model update. However, recent studies~\cite{bhagoji2019analyzing,fang2019local} showed that existing Byzantine-robust FL methods are still vulnerable to local model poisoning attacks on malicious clients. The fundamental reason is that they have no root of trust. Specifically, from the server's perspective, every client could be malicious, providing no root of trust for the server to decide which local model updates are suspicious.
\myparatight{Our work} In this work, we propose a new Byzantine-robust FL method called \emph{FLTrust}. Instead of completely relying on the local model updates from clients, the server itself bootstraps trust in FLTrust. Specifically, the service provider manually collects a small clean training dataset (called \emph{root dataset}) for the learning task.
The server maintains a model (called \emph{server model}) for the root dataset just like how a client maintains a local model. In each iteration, the server updates the global model by considering both its server model update and the clients' local model updates.
\myparatight{Our new Byzantine-robust aggregation rule} Specifically, we design a new Byzantine-robust aggregation rule in FLTrust to incorporate the root of trust.
A model update can be viewed as a vector, which is characterized by its \emph{direction} and \emph{magnitude}. An attacker can manipulate both the {directions} and {magnitudes} of the local model updates on the malicious clients. Therefore, our aggregation rule takes both the directions and magnitudes into consideration when computing the global model update. Specifically, the server first assigns a trust score (TS) to a local model update, where the trust score is larger if the direction of the local model update is more similar to that of the server model update. Formally, we use the \emph{cosine similarity} between a local model update and the server model update to measure the similarity of their directions. However, the cosine similarity alone is insufficient because a local model update whose cosine similarity score is negative can still have a negative impact on the aggregated global model update. Therefore, we further clip the cosine similarity score using the popular ReLU operation. The ReLU-clipped cosine similarity is our trust score.
Then, FLTrust normalizes each local model update by scaling it to have the same magnitude as the server model update. Such normalization essentially projects each local model update to the same hyper-sphere where the server model update lies in the vector space, which limits the impact of the poisoned local model updates with large magnitudes. Finally, FLTrust computes the average of the normalized local model updates weighted by their trust scores as the global model update, which is used to update the global model.
\myparatight{FLTrust can defend against existing attacks} We perform extensive empirical evaluation on six datasets from different domains, including five image classification datasets (MNIST-0.1, MNIST-0.5, Fashion-MNIST, CIFAR-10, and CH-MNIST) and a smartphone-based human activity recognition dataset (Human Activity Recognition). We compare FLTrust with multiple existing Byzantine-robust FL methods including Krum~\cite{Blanchard17}, Trimmed mean~\cite{Yin18}, and Median~\cite{Yin18}. Moreover, we evaluate multiple poisoning attacks including label flipping attack (a data poisoning attack), Krum attack and Trim attack (untargeted local model poisoning attacks)~\cite{fang2019local}, as well as Scaling attack\footnote{The Scaling attack is also known as a backdoor attack.} (targeted local model poisoning attack)~\cite{bagdasaryan2020backdoor}. Our results show that FLTrust is secure against these attacks even if the root dataset has less than 100 training examples, while existing Byzantine-robust FL methods are vulnerable to them or a subset of them. For instance, a CNN global model learnt using FLTrust has a testing error rate of 0.04 under all the evaluated attacks on MNIST-0.1. However, the Krum attack can increase the testing error rate of the CNN global model learnt by Krum from 0.10 to 0.90. Moreover, we treat FedAvg under no attacks as a baseline and compare our FLTrust under attacks with it. Our results show that FLTrust under attacks achieves similar testing error rates to FedAvg under no attacks. We also study different variants of FLTrust and the impact of different system parameters on FLTrust. For instance, our results show that FLTrust works well once the root dataset distribution does not diverge too much from the overall training data distribution of the learning task.
\myparatight{FLTrust can defend against adaptive attacks}
An attacker can adapt its attack to FLTrust. Therefore, we also evaluate FLTrust against adaptive attacks. Specifically, Fang et al.~\cite{fang2019local} proposed a general framework of local model poisoning attacks, which can be applied to optimize the attacks for any given aggregation rule. An attacker can substitute the aggregation rule of FLTrust into the framework and obtain an adaptive attack that is particularly optimized against FLTrust. Our empirical results show that FLTrust is still robust against such adaptive attacks. For instance, even when 60\% of the clients are malicious and collude with each other, FLTrust can still learn a CNN global model with testing error rate 0.04 for MNIST-0.1. This testing error rate is the same as that of the CNN global model learnt by FedAvg under no attacks.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose the first federated learning method FLTrust that bootstraps trust to achieve Byzantine robustness against malicious clients.
\item We empirically evaluate FLTrust against existing attacks. Our results show that FLTrust can defend against them.
\item We design adaptive attacks against FLTrust and evaluate their performance. Our results show that FLTrust is also robust against the adaptive attacks.
\end{itemize}
\section{Background and Related Work}
\subsection{Background on Federated Learning (FL)}
\label{sec:background}
\begin{figure*}[!t]
\centering
{\includegraphics[width= 0.8\textwidth]{figs/fl.pdf}}
\vspace{2mm}
\caption{Illustration of the three steps in FL.}
\label{fig:fl}
\vspace{-4mm}
\end{figure*}
\xc{Suppose we have $n$ clients and each client has a local training dataset $D_i, i=1,2,\cdots,n$. We use $D=\bigcup_{i=1}^{n} D_{i}$ to denote the joint training data. Each training example in $D$ is drawn from an unknown distribution $\mathcal{X}$. The clients aim to collaboratively learn a shared {global model} with the help of a service provider.
The optimal global model $\bm{w}^*$ is a solution to the following optimization problem: $\bm{w}^* = \argmin_{\bm{w}} F(\bm{w})$, where $F(\bm{w})=\mathbb{E}_{D\sim\mathcal{X}}[f(D,\bm{w})]$ is the expectation of the empirical loss $f(D,\bm{w})$ on the joint training dataset $D$. Since the expectation is hard to evaluate, the global model is often learnt via minimizing the empirical loss in practice, i.e., $\argmin_{\bm{w}} f(D,\bm{w})$ is the learnt global model.} Specifically, each client maintains a local model for its local training dataset. Moreover, a service provider's server maintains the global model via aggregating the local model updates from the clients. Specifically, FL iteratively performs the following three steps (illustrated in Figure \ref{fig:fl}):
\begin{itemize}
\item \textbf{Step I: Synchronizing the global model with clients.} The server sends the current global model $\bm{w}$ to the clients or a subset of them.
\item \textbf{Step II: Training local models.} Each client trains a local model via fine-tuning the global model using its local training dataset. Formally, the $i$th client solves the optimization problem $\min_{\bm{w}_i} f(D_i,\bm{w}_i)$, where $\bm{w}_i$ is the client's local model. In particular, the client initializes its local model as the global model and uses stochastic gradient descent to update the local model for one or more iterations. Then, each client sends its local model update $\bm{g}_i=\bm{w}_i - \bm{w}$ (i.e., the difference between its local model and the current global model) to the server.
\item \textbf{Step III: Updating the global model via aggregating the local model updates.} The server computes a \emph{global model update} $\bm{g}$ via aggregating the local model updates according to some \emph{aggregation rule}. Then, the server updates the global model using the global model update, i.e., $\bm{w} = \bm{w} + \alpha \cdot \bm{g}$, where $\alpha$ is the global learning rate.
\end{itemize}
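The three steps above can be sketched in a toy simulation. The following is our own minimal numpy illustration, not any deployed FL system: the learning task is assumed to be noiseless least-squares regression, and the aggregation rule is a plain average (the pluggable component that the rest of this paper varies).

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """Step II: a client initializes its local model at the global model w,
    runs a few gradient steps on its local least-squares loss f(D_i, w_i),
    and returns the local model update g_i = w_i - w."""
    w_i = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w_i - y) / len(y)
        w_i = w_i - lr * grad
    return w_i - w

def fl_round(w, clients, alpha=1.0):
    """Steps I and III: broadcast w to all clients, collect their local
    model updates, aggregate them (plain average here), and update the
    global model with global learning rate alpha."""
    updates = [local_update(w, X, y) for X, y in clients]
    g = np.mean(updates, axis=0)
    return w + alpha * g
```

With a handful of simulated clients holding consistent local data, iterating `fl_round` drives the global model toward the shared optimum.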
The aggregation rule plays a key role in FL. Different FL methods essentially use different aggregation rules. Next, we discuss popular aggregation rules.
\subsubsection{FedAvg}
FedAvg \cite{McMahan17} was proposed by Google. FedAvg computes the average of the clients' local model updates as the global model update, where each client is weighted by its number of training examples. Formally, $\bm{g}=\sum_{i=1}^n \frac{|D_i|}{N} \bm{g}_i$, where $|D_i|$ is the local training dataset size on the $i$th client and $N$ is the total number of training examples. FedAvg is the state-of-the-art FL method in non-adversarial settings. However, the global model in FedAvg can be arbitrarily manipulated by a single malicious client \cite{Blanchard17,Yin18}.
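The FedAvg aggregation rule is a one-liner; the sketch below (function name ours) makes the weighting explicit.

```python
import numpy as np

def fedavg(updates, sizes):
    """FedAvg: average of the local model updates g_i, weighted by the
    local training set sizes |D_i| / N."""
    return np.average(np.asarray(updates, dtype=float), axis=0,
                      weights=np.asarray(sizes, dtype=float))
```

Note that a single malicious client can dominate this weighted average simply by submitting an update of huge magnitude, which is the vulnerability discussed above.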
\subsubsection{Byzantine-robust Aggregation Rules}
\xc{Most Byzantine-robust FL methods use Byzantine-robust aggregation rules (see, e.g., \cite{Blanchard17,ChenPOMACS17,Mhamdi18,rajput2019detox,xie2019zeno,yang2019byrdie,Yin18,munoz2019byzantine}) that aim to tolerate Byzantine client failures. One exception is that Li et al.~\cite{li2019rsa} introduced a norm regularization term into the loss function.} Examples of Byzantine-robust aggregation rules include Krum \cite{Blanchard17}, Trimmed mean \cite{Yin18}, and Median \cite{Yin18}, which we discuss next.
\myparatight{Krum \cite{Blanchard17}}
Krum selects one of the $n$ local model updates in each iteration as the global model update based on a square-distance score. Suppose at most $f$ clients are malicious. The score for the $i$th client is computed as follows:
\begin{align}
s_i = \sum_{\bm{g}_j\in\Gamma_{i,n-f-2}} \Vert\bm{g}_j-\bm{g}_i\Vert_2^2,
\end{align}
where $\Gamma_{i,n-f-2}$ is the set of $n-f-2$ local model updates that have the smallest Euclidean distance to $\bm{g}_i$. The local model update of the client with the minimal score will be chosen as the global model update to update the global model.
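The scoring rule above can be implemented directly; the following is a small numpy sketch of Krum (function name ours, assuming $n > f + 2$).

```python
import numpy as np

def krum(updates, f):
    """Krum: score each client i by the sum of squared Euclidean distances
    to its n - f - 2 nearest local model updates, and return the update
    with the smallest score as the global model update."""
    updates = np.asarray(updates, dtype=float)
    n = len(updates)
    # pairwise squared Euclidean distances between local model updates
    d2 = ((updates[:, None, :] - updates[None, :, :]) ** 2).sum(-1)
    scores = []
    for i in range(n):
        # drop the zero self-distance, keep the n - f - 2 smallest
        nearest = np.sort(np.delete(d2[i], i))[: n - f - 2]
        scores.append(nearest.sum())
    return updates[int(np.argmin(scores))]
```

With four clustered updates and one far outlier, Krum selects a member of the cluster, since the outlier's distances to its neighbors are large.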
\myparatight{Trimmed Mean (Trim-mean) \cite{Yin18}} Trimmed mean is a coordinate-wise aggregation rule that considers each model parameter individually. For each model parameter, the server collects its values in all local model updates and sorts them. Given a trim parameter $k<\frac{n}{2}$, the server removes the largest $k$ and the smallest $k$ values, and then computes the mean of the remaining $n-2k$ values as the value of the corresponding parameter in the global model update. The trim parameter $k$ should be at least the number of malicious clients to make Trim-mean robust. In other words, Trim-mean can tolerate less than 50\% of malicious clients.
\myparatight{Median \cite{Yin18}} Median is another coordinate-wise aggregation rule. Like Trim-mean, in Median, the server also sorts the values of each individual parameter in all local model updates. Instead of using the mean value after trim, Median considers the median value of each parameter as the corresponding parameter value in the global model update.
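Both coordinate-wise rules are short in numpy (function names ours):

```python
import numpy as np

def trimmed_mean(updates, k):
    """Coordinate-wise trimmed mean: per model parameter, sort the values
    across clients, drop the k largest and k smallest, and average the
    remaining n - 2k values."""
    s = np.sort(np.asarray(updates, dtype=float), axis=0)
    return s[k: len(updates) - k].mean(axis=0)

def coord_median(updates):
    """Coordinate-wise median of the local model updates."""
    return np.median(np.asarray(updates, dtype=float), axis=0)
```

For instance, with per-parameter values $\{-50, 1, 2, 3, 100\}$ and $k=1$, both rules discard the extremes and return 2.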
Existing FL methods suffer from a key limitation: they are vulnerable to sophisticated local model poisoning attacks on malicious clients, which we discuss in the next section.
\subsection{Poisoning Attacks to Federated Learning}
Poisoning attacks generally refer to attacking the training phase of machine learning.
One category of poisoning attacks called \emph{data poisoning attacks} aims to pollute the training data to corrupt the learnt model. Data poisoning attacks have been demonstrated against many machine learning systems such as spam detection~\cite{Nelson08poisoningattackSpamfilter,rubinstein2009antidote}, SVM~\cite{biggio2012poisoning}, recommender systems~\cite{fang2020influence,fang2018poisoning,poisoningattackRecSys16,YangRecSys17}, neural networks~\cite{Chen17,Gu17,liu2017trojaning,munoz2017towards,shafahi2018poison,Suciu18}, and graph-based methods~\cite{jia2020certified,Wang19,zhang2020backdoor}, as well as distributed privacy-preserving data analytics~\cite{cao2019data,cheu2019manipulation}.
FL is also vulnerable to data poisoning attacks~\cite{tolpegin2020data}, i.e., malicious clients can corrupt the global model via modifying, adding, and/or deleting examples in their local training datasets. For instance, a data poisoning attack known as \emph{label flipping attack} changes the labels of the training examples on malicious clients while keeping their features unchanged.
Moreover, unlike centralized learning, FL is further vulnerable to \emph{local model poisoning attacks}~\cite{bagdasaryan2020backdoor,bhagoji2019analyzing,fang2019local,xie2019dba}, in which the malicious clients poison the local models or their updates sent to the server. Depending on the attacker's goal, local model poisoning attacks can be categorized into \emph{untargeted attacks} \cite{fang2019local} and \emph{targeted attacks} \cite{bagdasaryan2020backdoor,bhagoji2019analyzing,xie2019dba}. Untargeted attacks aim to corrupt the global model such that it makes incorrect predictions for a large number of testing examples indiscriminately, i.e., the testing error rate is high. Targeted attacks aim to corrupt the global model such that it predicts attacker-chosen target labels for attacker-chosen target testing examples while the predicted labels for other non-target testing examples are unaffected.
Note that any data poisoning attack can be transformed to a local model poisoning attack, i.e., we can compute the local model update on a malicious client's poisoned local training dataset and treat it as the poisoned local model update. Moreover, recent studies \cite{bhagoji2019analyzing,fang2019local} showed that local model poisoning attacks are more effective than data poisoning attacks against FL. Therefore, we focus on local model poisoning attacks in this work. Next, we discuss two state-of-the-art untargeted attacks (i.e., Krum attack and Trim attack) \cite{fang2019local} and one targeted attack (i.e., Scaling attack) \cite{bagdasaryan2020backdoor}.
\myparatight{Krum attack and Trim attack \cite{fang2019local}} Fang et al.~\cite{fang2019local} proposed a general framework for local model poisoning attacks, which can be applied to optimize the attacks for any given aggregation rule.
Assuming the global model update without attack is $\bm{g}$,
Fang et al.~\cite{fang2019local} formulate the attack as an optimization problem that aims to change the global model update the most along the opposite direction of $\bm{g}$, by optimizing the poisoned local model updates sent from the malicious clients to the server.
Different aggregation rules lead to different instantiations of the optimization problem.
Fang et al. applied the framework to optimize local model poisoning attacks for Krum (called Krum attack) as well as Trim-mean and Median (called Trim attack).
\myparatight{Scaling attack \cite{bagdasaryan2020backdoor}} This attack aims to corrupt the global model to predict attacker-chosen target labels for attacker-chosen target testing examples, while the predicted labels for other testing examples are unaffected (i.e., the normal testing error rate remains the same). For instance, the attacker-chosen target testing examples can be normal testing examples embedded with a predefined backdoor trigger (e.g., a logo, a specific feature pattern). To achieve the goal, the Scaling attack adds trigger-embedded training examples with the attacker-chosen target label to the local training data of malicious clients. The local model updates on malicious clients are then computed based on the local training datasets augmented with the trigger-embedded examples. However, the poisoned local model updates may have limited impact on the global model update because it is aggregated over all clients' local model updates. For instance, in FedAvg, the effect of the attack will be diluted after the averaging~\cite{bagdasaryan2020backdoor}. Therefore, the attack further scales the poisoned local model updates on malicious clients by a factor that is much larger than 1. The scaled poisoned local model updates are then sent to the server.
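The dilution effect and the role of the scaling factor can be seen in a toy numpy calculation (all numbers purely illustrative, not from the cited attack):

```python
import numpy as np

n = 10                                       # total number of clients (toy)
honest = [np.array([1.0, 0.0])] * (n - 1)    # benign update direction
backdoor = np.array([0.0, 5.0])              # update fitted to trigger data

# Unscaled, the backdoor contribution is diluted by the 1/n averaging:
diluted = np.mean(honest + [backdoor], axis=0)
# Scaling the poisoned update by a factor ~ n before sending undoes it:
scaled = np.mean(honest + [n * backdoor], axis=0)
```

Here the backdoor coordinate of the aggregated update shrinks from 5.0 to 0.5 without scaling, while scaling by $n$ restores its full magnitude and leaves the benign coordinate unchanged.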
\begin{figure*}[!t]
\centering
{\includegraphics[width= 0.8\textwidth]{figs/fltrust.pdf}}
\vspace{2mm}
\caption{Illustration of our aggregation rule, which is applied in each iteration of FLTrust.}
\label{fltrust}
\end{figure*}
\section{Problem Setup}
\label{sec:prob}
\myparatight{Attack model} We follow the attack model in previous works~\cite{bagdasaryan2020backdoor,bhagoji2019analyzing,fang2019local}. Specifically, an attacker controls some malicious clients, which can be fake clients injected by the attacker or genuine ones compromised by the attacker. However, the attacker does not compromise the server. The malicious clients can send arbitrary local model updates to the server in each iteration of the FL training process. \xc{Typically, an attacker has the following partial knowledge about an FL system: local training data and local model updates on the malicious clients, loss function, and learning rate. We notice that the Scaling attack \cite{bagdasaryan2020backdoor} only requires such partial knowledge. The Krum and Trim attacks \cite{fang2019local} are also applicable in this partial-knowledge setting. However, they are stronger in the full-knowledge setting \cite{fang2019local}, where the attacker knows everything about the FL training process, including the local training data and local model updates on all clients in each iteration, as well as the FL's aggregation rule. Therefore, we consider such full-knowledge setting to show that our method can defend against strong attacks.} Moreover, the attacker can perform adaptive attacks to FLTrust, which we discuss in Section \ref{sec:adaptive}.
\myparatight{Defense goals} We aim to design an FL method that achieves Byzantine robustness against malicious clients without sacrificing the fidelity and efficiency. In particular, we treat FedAvg under no attacks as a baseline to discuss fidelity and efficiency, i.e., our method should be robust against malicious clients while being as accurate and efficient as FedAvg under no attacks. Specifically, we aim to design a Byzantine-robust FL method that achieves the following defense goals:
\begin{itemize}
\item {\bf Fidelity.} The method should not sacrifice the classification accuracy of the global model when there is no attack. In particular, under no attacks, the method should be able to learn a global model that is as accurate as the global model learnt by FedAvg, a popular FL method in non-adversarial settings.
\item {\bf Robustness.} The method should preserve the classification accuracy of the global model in the presence of malicious clients performing strong poisoning attacks. In particular, we aim to design a method that can learn a global model under attacks that is as accurate as the global model learnt by FedAvg under no attacks. Moreover, for targeted attacks, our goal further includes that the global model is unlikely to predict the attacker-chosen target labels for the attacker-chosen target testing examples.
\item {\bf Efficiency.} The method should not incur extra computation and communications overhead, especially to the clients. Clients in FL are often resource-constrained devices. Therefore, we aim to design a method that does not increase the workload of the clients, compared to FedAvg under no attacks.
\end{itemize}
Existing Byzantine-robust FL methods such as Krum, Trim-mean, and Median do not satisfy the fidelity and robustness goals. Moreover, Krum does not satisfy the efficiency goal because it requires the server to compute pairwise distances of the clients' local model updates, which is computationally expensive when the number of clients is large.
\myparatight{Defender's knowledge and capability} We consider the defense is performed on the server side. The server does not have access to the raw local training data on the clients, and the server does not know the number of malicious clients. However, the server has full access to the global model as well as the local model updates from all clients in each iteration. Moreover, the server itself can collect a clean small training dataset (we call it root dataset) for the learning task.
\xc{We require the root dataset to be clean from poisoning. The server can collect a clean root dataset by manual labeling. For instance, Google enlists its employees to type with Gboard to create the root dataset for its federated next-word prediction \cite{gboard}; when the learning task is digit recognition, the service provider can hire human workers to label some digits. Since we only require a small root dataset, e.g., 100 training examples, it is often affordable for the server to perform manual collection and labeling. The root dataset may or may not follow the same distribution as the overall training data distribution of the learning task. Our experimental results show that our method is effective once the root dataset distribution does not deviate too much from the overall training data distribution.}
\section{Our FLTrust}
\subsection{Overview of FLTrust}
In our FLTrust, the server itself collects a small clean training dataset (called \emph{root dataset}) and maintains a model (called \emph{server model}) for it just like how a client maintains a local model. In each iteration, our FLTrust follows the general three steps of FL discussed in Section~\ref{sec:background}. However, our FLTrust is different from existing FL methods in Step II and Step III. Specifically, in Step II, each client trains its local model in existing FL methods, while the server also trains its server model via fine-tuning the current global model using the root dataset in FLTrust. In Step III, existing FL methods only consider the clients' local model updates to update the global model, which provides no root of trust. On the contrary, FLTrust considers both the server model update and the clients' local model updates to update the global model.
Specifically, an attacker can manipulate the directions of the local model updates on the malicious clients such that the global model is updated towards the opposite of the direction along which it should be updated; or the attacker can scale up the magnitudes of the local model updates to dominate the aggregated global model update. Therefore, we take both the directions and the magnitudes of the model updates into consideration. In particular, FLTrust first assigns a trust score (TS) to a local model update based on its direction similarity with the server model update. Formally, our trust score of a local model update is its ReLU-clipped cosine similarity with the server model update. Then, FLTrust normalizes each local model update by scaling it to have the same magnitude as the server model update. Such normalization essentially projects each local model update to the same hyper-sphere where the server model update lies in the vector space, which limits the impact of the poisoned local model updates with large magnitudes. Finally, FLTrust computes the average of the normalized local model updates weighted by their trust scores as the global model update, which is used to update the global model.
\subsection{Our New Aggregation Rule}
Our new aggregation rule considers both the directions and magnitudes of the local model updates and the server model update to compute the global model update. Figure~\ref{fltrust} illustrates our aggregation rule.
\myparatight{ReLU-clipped cosine similarity based trust score} An attacker can manipulate the directions of the local model updates on the malicious clients such that the global model update is driven to an arbitrary direction that the attacker desires. Without a root of trust, it is challenging for the server to decide which direction is more ``promising'' to update the global model. In our FLTrust, the root of trust originates from the direction of the server model update. In particular, if the direction of a local model update is more similar to that of the server model update, then the direction of the local model update may be more ``promising''. Formally, we use the \emph{cosine similarity}, a popular metric to measure the angle between two vectors, to measure the direction similarity between a local model update and the server model update.
However, the cosine similarity alone faces a challenge. Specifically, if a local model update and the server model update are in opposite directions, their cosine similarity is negative, and such an update would still have a negative impact on the aggregated global model update (see our experimental results in Section~\ref{sec:exp_res}). Therefore, we exclude such local model updates from the aggregation by clipping the cosine similarity. In particular, we use the popular ReLU operation for clipping. Formally, our trust score is defined as follows:
\begin{align}
TS_i = ReLU(c_i),
\end{align}
where $TS_i$ is the trust score for the $i$th local model update $\bm{g}_i$, and $c_i$ is the cosine similarity between $\bm{g}_i$ and the server model update $\bm{g}_0$, i.e., $c_i=\frac{\langle\bm{g}_i,\bm{g}_0 \rangle}{||\bm{g}_i||\cdot ||\bm{g}_0||}$. ReLU is defined as follows: $ReLU(x)=x$ if $x>0$ and $ReLU(x)=0$ otherwise.
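For concreteness, the trust-score computation can be sketched in NumPy as follows (the function name and the flattened-vector representation of updates are our own, not from the paper's implementation):

```python
import numpy as np

def trust_score(g_i, g_0):
    """ReLU-clipped cosine similarity between a local model update g_i
    and the server model update g_0, both flattened to 1-D vectors."""
    c_i = np.dot(g_i, g_0) / (np.linalg.norm(g_i) * np.linalg.norm(g_0))
    return max(c_i, 0.0)  # ReLU: negative similarities are clipped to 0
```

A local update pointing in the same direction as the server update gets score 1, an orthogonal or opposite one gets score 0 and is thus excluded from the aggregation.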
\myparatight{Normalizing the magnitudes of local model updates} An attacker can also scale the magnitudes of the local model updates on the malicious clients by a large factor such that they dominate the global model update. Therefore, we normalize the magnitude of each local model update. Without a root of trust, it is challenging to decide what quantity we should normalize to. However, the server has the root dataset to bootstrap trust in FLTrust. Therefore, we normalize each local model update such that it has the same magnitude as the server model update. Such normalization rescales each local model update onto the same hyper-sphere where the server model update lies in the vector space. Formally, we have the following:
\begin{align}
\bm{\bar{g}}_i = \frac{||\bm{g}_0||}{||\bm{g}_i||}\cdot \bm{g}_i,
\end{align}
where $\bm{g}_i$ is the local model update of the $i$th client in the current iteration, $\bm{\bar{g}}_i$ is the \emph{normalized local model update} of the $i$th client, $\bm{g}_0$ is the server model update, and $||\cdot||$ means $\ell_2$ norm of a vector. Our normalization ensures that no single local model update has too much impact on the aggregated global model update. Note that our normalization also enlarges a local model update with a small magnitude to have the same magnitude as the server model update. This is based on the intuition that local model updates with small magnitudes are more likely from benign clients, and thus enlarging their magnitudes helps reduce the impact of the poisoned local model updates from the malicious clients, leading to a better global model (see our experimental results in Section~\ref{sec:exp_res}).
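The normalization step admits a one-line NumPy sketch (function name ours):

```python
import numpy as np

def normalize_update(g_i, g_0):
    """Rescale the local update g_i so that its l2 norm matches that of
    the server update g_0, projecting it onto the same hyper-sphere."""
    return (np.linalg.norm(g_0) / np.linalg.norm(g_i)) * g_i
```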
\begin{algorithm}[t]
\caption{ModelUpdate($\bm{w}$, $D$, $b$, $\beta$, $R$)}\label{local_upate_algo}
\begin{algorithmic}[1]
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Ensure Model update.
\State $\bm{w}^{0} \leftarrow \bm{w}$.
\For {$r=1,2,\cdots,R$}
\State Randomly sample a batch $D_b$ from $D$.
\State $\bm{w}^r \leftarrow \bm{w}^{r-1} - \beta \nabla Loss(D_b;\bm{w}^{r-1})$.
\EndFor\\
\Return $\bm{w}^R - \bm{w}$.
\end{algorithmic}
\end{algorithm}
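Algorithm~\ref{local_upate_algo} can be sketched in NumPy as follows; \texttt{grad\_fn} is a placeholder we introduce for the loss gradient $\nabla Loss(D_b;\bm{w})$, which depends on the model:

```python
import numpy as np

def model_update(w, data, b, beta, R, grad_fn, rng=None):
    """Run R local SGD steps starting from the global model w and
    return the model update w^R - w (a sketch of ModelUpdate)."""
    rng = np.random.default_rng() if rng is None else rng
    w_r = w.copy()
    for _ in range(R):
        # Randomly sample a batch D_b from the local dataset D.
        batch = data[rng.choice(len(data), size=b, replace=False)]
        w_r = w_r - beta * grad_fn(batch, w_r)  # one SGD step
    return w_r - w
```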
\begin{algorithm}[t]
\caption{FLTrust}\label{global_upate_algo}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require $n$ clients with local training datasets $D_i, i=1,2,\cdots,n$; a server with root dataset $D_0$; global learning rate $\alpha$; number of global iterations $R_g$; number of clients $\tau$ sampled in each iteration; local learning rate $\beta$; number of local iterations $R_l$; and batch size $b$.
\Ensure Global model $\bm{w}$.
\State $\bm{w} \leftarrow $ random initialization.
\For{$r=1,2,\cdots,R_g$}
\State // Step \RomanNumeralCaps{1}: The server sends the global model to clients.
\State The server randomly samples $\tau$ clients $C_1, C_2, \cdots, C_\tau$ from $\{1,2,\cdots,n\}$ and sends $\bm{w}$ to them.
\item[]
\State // Step \RomanNumeralCaps{2}: Training local models and server model.
\State // Client side.
\ForParallel{$i=C_1, C_2, \cdots, C_\tau$}
\State $\bm{g}_i = ModelUpdate(\bm{w},D_{i},b,\beta,R_l)$.
\State Send $\bm{g}_i$ to the server.
\EndFor
\State // Server side.
\State $\bm{g}_0 = ModelUpdate(\bm{w},D_{0},b,\beta,R_l)$.
\item[]
\State // Step \RomanNumeralCaps{3}: Updating the global model via aggregating the local model updates.
\For{$i=C_1, C_2, \cdots, C_\tau$}
\State $TS_i=ReLU\left(\frac{\langle\bm{g}_i,\bm{g}_0\rangle}{\Vert\bm{g}_i\Vert\Vert\bm{g}_0\Vert}\right)$.
\State $\bm{\bar{g}}_i = \frac{||\bm{g}_0||}{||\bm{g}_i||}\cdot \bm{g}_i$.
\EndFor
\State $\bm{g} = \frac{1}{\sum\limits_{j=1}^{\tau}TS_{C_j}} \sum\limits_{i=1}^\tau {TS}_{C_i} \cdot \bm{\bar{g}}_{C_i}$.
\State $\bm{w} \leftarrow \bm{w} + \alpha\cdot\bm{g}$.
\EndFor\\
\Return $\bm{w}$.
\end{algorithmic}
\end{algorithm}
\myparatight{Aggregating the local model updates} We compute the average of the normalized local model updates weighted by their trust scores as the global model update:
\begin{align}
\label{agg_local_model}
\bm{g} &= \frac{1}{\sum\limits_{j=1}^{n}TS_j} \sum_{i=1}^n {TS}_i \cdot \bm{\bar{g}}_i\nonumber\\
&= \frac{1}{\sum\limits_{j=1}^{n}ReLU(c_j)} \sum_{i=1}^n {ReLU(c_i)} \cdot \frac{||\bm{g}_0||}{||\bm{g}_i||}\cdot\bm{g}_i,
\end{align}
where $\bm{g}$ is the global model update. Note that if the server selects a subset of clients in an iteration, the global model update is aggregated from the local model updates of the selected clients. In principle, the server model update could be treated as a local model update with a trust score of 1, and the global model update could be the weighted average of the clients' local model updates together with the server model update. However, such a variant may negatively impact the global model: the root dataset is small and may not have the same distribution as the training data, yet the server model update derived from it would have a trust score of 1, reducing the contributions of the benign clients' local model updates (see our experimental results in Section~\ref{sec:exp_res}).
Finally, we update the global model as follows:
\begin{align}
\label{update_global_model}
\bm{w} \leftarrow \bm{w} + \alpha \cdot \bm{g},
\end{align}
where $\alpha$ is the global learning rate.
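Putting the pieces together, one FLTrust aggregation and global-model update can be sketched as follows (the guard for the case where all trust scores are zero is our own defensive choice, not specified in the text):

```python
import numpy as np

def fltrust_step(w, local_updates, g_0, alpha):
    """One FLTrust global update: average of magnitude-normalized local
    updates weighted by ReLU-clipped cosine-similarity trust scores."""
    norm_g0 = np.linalg.norm(g_0)
    scores, normalized = [], []
    for g_i in local_updates:
        c_i = np.dot(g_i, g_0) / (np.linalg.norm(g_i) * norm_g0)
        scores.append(max(c_i, 0.0))                          # trust score TS_i
        normalized.append((norm_g0 / np.linalg.norm(g_i)) * g_i)  # normalization
    total = sum(scores)
    if total == 0.0:  # every local update rejected; skip this iteration
        return w
    g = sum(s * g_bar for s, g_bar in zip(scores, normalized)) / total
    return w + alpha * g  # global model update with learning rate alpha
```

In this sketch, a local update pointing away from the server update contributes nothing, and an update with a huge magnitude contributes no more than any other.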
\subsection{Complete FLTrust Algorithm}
Algorithm \ref{global_upate_algo} shows our complete FLTrust method. FLTrust performs $R_g$ iterations and has three steps in each iteration. In Step I, the server sends the current global model to the clients or a subset of them. In Step II, the clients compute the local model updates based on the global model and their local training data, which are then sent to the server. Meanwhile, the server itself computes the server model update based on the global model and the root dataset. The local model updates and the server model update are computed by the function ModelUpdate in Algorithm~\ref{local_upate_algo} via performing stochastic gradient descent for $R_l$ iterations with a local learning rate $\beta$. In Step III, the server computes the global model update by aggregating the local model updates and uses it to update the global model with a global learning rate $\alpha$.
\xc{
\subsection{Formal Security Analysis}
\label{sec:sec_ana}
As we discussed in Section~\ref{sec:background}, the optimal global model $\bm{w}^*$ is a solution to the following optimization problem: $\bm{w}^* = \argmin_{\bm{w} \in \Theta} F(\bm{w}) \triangleq \mathbb{E}_{D\sim\mathcal{X}} \left[ f(D, \bm{w}) \right]$, where $\Theta$ is the parameter space of the global model, $D=\bigcup_{i=1}^{n} D_{i}$ is the joint training dataset of the $n$ clients, $\mathcal{X}$ is the training data distribution, $f(D, \bm{w})$ is the empirical loss function on the training data $D$, and $F(\bm{w})$ is the expected loss function. Our FLTrust is an iterative algorithm to find a global model to minimize the empirical loss function $f(D, \bm{w})$. We show that, under some assumptions, the difference between the global model learnt by FLTrust under attacks and the optimal global model $\bm{w}^*$ is bounded. Next, we first describe our assumptions and then describe our theoretical results.
\begin{assumption}
\label{assumption_1}
The expected loss function $F(\bm{w})$ is $\mu$-strongly convex and differentiable over the space $\Theta$ with $L$-Lipschitz continuous gradient. Formally, we have the following for any $\bm{w}, \widehat{\bm{w}} \in \Theta$:
\begin{gather*}
F(\widehat{\bm{w}}) \ge F(\bm{w}) + \left\langle {\nabla F(\bm{w}),\widehat{\bm{w}} - \bm{w}} \right\rangle + \frac{\mu}{2}{\left\| \widehat{\bm{w}} - \bm{w} \right\|^2},\\
\left\| {\nabla F(\bm{w}) - \nabla F(\widehat{\bm{w}}) } \right\| \le L\left\| {\bm{w} - \widehat{\bm{w}}} \right\|,
\end{gather*}
where $\nabla$ represents gradient, $\left\|\cdot \right\|$ represents $\ell_2$ norm, and $\left\langle \cdot, \cdot \right\rangle$ represents inner product of two vectors. Moreover, the empirical loss function $f(D,\bm{w})$ is $L_1$-Lipschitz probabilistically. Formally, for any $\delta \in (0,1)$, there exists an $L_1$ such that:
\begin{align}
\text{Pr}\left\{ {\mathop {\sup }\limits_{\bm{w}, \widehat{\bm{w}} \in \Theta :\bm{w} \ne \widehat{\bm{w}}} \frac{\left\| {\nabla f(D,\bm{w}) - \nabla f(D,\widehat{\bm{w}})} \right\|}
{\left\| {\bm{w} - \widehat{\bm{w}}} \right\|} \le {L_1}} \right\} \ge 1 - \frac{\delta }{3}. \nonumber
\end{align}
\end{assumption}
\begin{assumption}
\label{assumption_2}
The gradient of the empirical loss function $\nabla f(D,\bm{w}^*)$ at the optimal global model $\bm{w}^*$ is bounded. Moreover, the gradient difference $h(D,\bm{w}) = \nabla f(D,\bm{w}) - \nabla f(D,\bm{w}^*)$ for any $\bm{w} \in \Theta$ is bounded. Specifically, there exist positive constants $\sigma_1$ and $\gamma_1$ such that for any unit vector $\bm{v}$, $\left\langle {\nabla f(D,\bm{w}^*),\bm{v}} \right\rangle$ is sub-exponential with $\sigma_1$ and $\gamma_1$; and there exist positive constants $\sigma_2$ and $\gamma_2$ such that for any $\bm{w} \in \Theta$ with $\bm{w} \ne \bm{w}^*$ and any unit vector $\bm{v}$, $\left\langle {h(D,\bm{w}) - \mathbb{E}\left[ {h(D,\bm{w})} \right],\bm{v}} \right\rangle / \left\| {\bm{w} - \bm{w}^*} \right\|$ is sub-exponential with $\sigma_2$ and $\gamma_2$.
Formally, for $\forall \left| \xi \right| \le 1/\gamma_1$, $\forall \left| \xi \right| \le 1/\gamma_2$, we have:
\begin{equation*}
\mathop {\sup }\limits_{\bm{v} \in \bm{B}} \mathbb{E}\left[ {\exp (\xi \left\langle {\nabla f(D,\bm{w}^*), \bm{v}} \right\rangle)} \right]
\le
e ^{\sigma_1 ^2 \xi^2 /2},
\end{equation*}
\begin{equation*}
\mathop {\sup }\limits_{\bm{w} \in \Theta ,\bm{v} \in \bm{B}} \mathbb{E} \left[ {\exp \left( {\frac{{\xi \left\langle {h(D,\bm{w}) - \mathbb{E} \left[ {h(D,\bm{w})} \right],\bm{v}} \right\rangle }} {\left\| {\bm{w} - \bm{w}^*} \right\|}} \right)} \right]
\le
e ^{\sigma_2 ^2 \xi^2 /2} ,
\end{equation*}
where $\bm{B}$ is the unit sphere $\bm{B}=\left\{ {\bm{v}:{{\left\| \bm{v} \right\|}} = 1} \right\}$.
\end{assumption}
\begin{assumption}
\label{assumption_3}
Each client's local training dataset $D_i$ ($i=1,2,\cdots,n$) and the root dataset $D_0$ are sampled independently from the distribution $\mathcal{X}$.
\end{assumption}
\begin{thm}
\label{theorem_1}
Suppose Assumptions~\ref{assumption_1}--\ref{assumption_3} hold and FLTrust uses $R_l=1$ and $\beta=1$. For an arbitrary number of malicious clients, the difference between the global model learnt by FLTrust and the optimal global model $\bm{w}^*$ under no attacks is bounded. Formally, we have the following with probability at least $1 - \delta$:
\begin{align}
\left\| \bm{w}^t - \bm{w}^* \right\|
\le
\left( 1- \rho \right)^t \left\| \bm{w}^0 - \bm{w}^* \right\| + 12\alpha\Delta_1/ \rho, \nonumber
\end{align}
where $\bm{w}^t$ is the global model in the $t$th iteration, $\rho = 1- \left( \sqrt {1 - {\mu^2}/(4L^2)} + 24\alpha\Delta_2 + 2\alpha L \right)$, $\alpha$ is the learning rate,
$\Delta_1 = \sigma_1 \sqrt{\frac{2}{\left| D_0 \right|}} \sqrt{d\log 6 + \log(3/\delta)}$,
$\Delta_2 = \sigma_2 \sqrt{\frac{2}{\left| D_0 \right|}} \sqrt{d\log \frac{18L_2}{\sigma_2} + \frac{1}{2}d\log \frac{\left| D_0 \right|}{d} + \log \left( \frac{6\sigma_2^2 r \sqrt{\left| D_0 \right|}}{\gamma_2 \sigma_1 \delta} \right)}$,
$\left| D_0 \right|$ is the size of the root dataset,
$d$ is the dimension of $\bm{w}$, ${L_2} = \max \left\{ {L,{L_1}} \right\}$, and $r$ is some positive number such that $\left\| \bm{w} - {\bm{w}^*} \right\| \le r\sqrt d$ for any $\bm{w} \in \Theta$ (i.e., the parameter space $\Theta$ is constrained). When $|1- \rho| < 1$, we have $\lim_{t \rightarrow \infty} \left\| \bm{w}^t - \bm{w}^* \right\| \leq 12\alpha\Delta_1/ \rho$.
\end{thm}
\begin{proof}
See Appendix~\ref{sec:appendix}.
\end{proof}
}
\section{Adaptive Attacks}\label{sec:adaptive}
When an attacker knows our FLTrust is used to learn the global model, the attacker can adapt its attacks to FLTrust. Therefore, in this section, we design strong adaptive attacks to FLTrust. In particular, Fang et al. \cite{fang2019local} proposed the state-of-the-art framework that can optimize local model poisoning attacks for any given aggregation rule. We generate adaptive attacks to FLTrust via instantiating this framework with our aggregation rule. Next, we first describe the general attack framework in \cite{fang2019local}, then we discuss how to design adaptive attacks to FLTrust based on the framework.
\subsection{Local Model Poisoning Attack Framework}
The framework of local model poisoning attacks introduced in \cite{fang2019local} is general to all aggregation rules. Specifically, in each iteration of FL, the attacker aims to change the global model update the most along the opposite direction of the global model update under no attacks, by carefully crafting the local model updates on the malicious clients. Assume the first $m$ clients are malicious. The local model poisoning attack is formulated as the following optimization problem\footnote{Fang et al. formulate the framework based on local models, which is equivalent to formulating it based on local model updates.}:
\begin{align}
&\max\limits_{\bm{g}_1',\bm{g}_2', \cdots,\bm{g}_m'}\bm{s}^T(\bm{g}-\bm{g}'),\nonumber\\
\text{subject to } &\bm{g} = \mathcal{A}(\bm{g}_1,\cdots,\bm{g}_m,\bm{g}_{m+1},\cdots,\bm{g}_n),\nonumber\\
&\bm{g}'= \mathcal{A}(\bm{g}_1',\cdots,\bm{g}_m',\bm{g}_{m+1},\cdots,\bm{g}_n),
\label{attack_framework}
\end{align}
where $\mathcal{A}$ is the aggregation rule of the FL method, $\bm{g}_i'$ is the poisoned local model update on the $i$th malicious client for $i=1,2,\cdots,m$, $\bm{g}$ is the global model update before attack, $\bm{g}'$ is the global model update after attack, and $\bm{s}$ is a column vector of the sign of the global model update before attack.
\subsection{Adaptive Attack to Our FLTrust}
We leverage the state-of-the-art framework to design adaptive attacks to our FLTrust. The idea is to instantiate the aggregation rule $\mathcal{A}$ with our aggregation rule in FLTrust in the framework. We denote by $\bm{e}_i= \frac{\bm{g}_i}{||\bm{g}_i||}$ the unit vector whose direction is the same as $\bm{g}_i$. Then, our aggregation rule in Equation~(\ref{agg_local_model}) can be rewritten as follows:
\begin{align}
\label{rewrittenlocalrule}
\bm{g} = ||\bm{g}_0||\sum\limits_{i=1}^n \frac{ ReLU(c_i)}{\sum\limits_{j=1}^{n}ReLU{(c_j)}} \bm{e}_i.
\end{align}
Suppose there are $m$ malicious clients, and without loss of generality, we assume the first $m$ clients are malicious. These malicious clients send poisoned local model updates $\bm{g}_i', i=1,2,\cdots,m$ to the server. Let $\bm{e}_i'$ ($i=1,2,\cdots,m$) be the corresponding unit vectors. We note that the cosine similarity $c_i'$ between a poisoned local model update $\bm{g}_i'$ and the server model update $\bm{g}_0$ is the same as the cosine similarity between the corresponding unit vectors, i.e., $c_i'=\langle\bm{e}_i', \bm{e}_0\rangle$, where $\langle\cdot,\cdot\rangle$ means the inner product of two vectors. Therefore, we have the poisoned global model update $\bm{g}'$ under attacks as follows:
\begin{align}
\bm{g}' &= ||\bm{g}_0||\left[\sum\limits_{i=1}^m \frac{ReLU{(\langle\bm{e}_i', \bm{e}_0\rangle)}}{\sum\limits_{j=1}^{m}ReLU{(\langle\bm{e}_j', \bm{e}_0\rangle)} + \sum\limits_{j=m+1}^{n}ReLU{(c_j)}} \bm{e}_i'\right. \nonumber\\
&+ \!\! \left.\sum\limits_{i=m+1}^n \frac{ ReLU(c_i)}{\sum\limits_{j=1}^{m}ReLU{(\langle\bm{e}_j', \bm{e}_0\rangle)} + \sum\limits_{j=m+1}^{n}ReLU{(c_j)}} \bm{e}_i\right] \!\!\!.
\label{under_attack}
\end{align}
\begin{algorithm}[t]
\caption{Our Adaptive Attack to FLTrust.}\label{attack_algo}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\Require $\bm{g}_0; \bm{g}_i$ for $i=1,2,\cdots,n; m; \sigma; \eta; \gamma; Q; V$.
\Ensure $\bm{e}_i'$ for $i=1,2,\cdots,m$.
\State Compute $\bm{e}_0, \bm{e}_i, c_i$ for $i=1,2,\cdots,n$.
\State Initialize $\bm{e}_i'$ using Trim attack for $i=1,2,\cdots,m$.
\For {$v=1,2,\cdots,V$}
\For {$i=1,2,\cdots,m$}
\For {$t=1,2,\cdots,Q$}
\State Randomly sample $\bm{u}\sim N(\bm{0},\sigma^2\bm{I})$.
\State Compute $\nabla_{\bm{e}_i'} h$ according to (\ref{zero_grad}).
\State Update $\bm{e}_i' = \bm{e}_i' + \eta \nabla_{\bm{e}_i'}h$.
\State Normalize $\bm{e}_i'$ such that $\left\| \bm{e}_i' \right\| =1$.
\EndFor
\EndFor
\EndFor\\
\Return $\bm{e}_i'$ for $i=1,2,\cdots,m$.
\end{algorithmic}
\end{algorithm}
Substituting Equations (\ref{rewrittenlocalrule}) and (\ref{under_attack}) into (\ref{attack_framework}), and noticing that optimizing $\bm{g}_i'$ is equivalent to optimizing $\bm{e}_i'$ for $i=1,2,\cdots,m$, we can instantiate the attack framework in Equation (\ref{attack_framework}) as the following optimization problem:
\begin{align}
&\max\limits_{\bm{e}_1',\bm{e}_2',\cdots,\bm{e}_m'} \quad h(\bm{e}_1',\bm{e}_2',\cdots,\bm{e}_m'),
\label{adaptive}
\end{align}
where $h(\bm{e}_1',\bm{e}_2',\cdots,\bm{e}_m')$ is defined as follows:
\begin{align}
& h(\bm{e}_1',\bm{e}_2',\cdots,\bm{e}_m')
= ||\bm{g}_0||\bm{s}^T \left[\sum\limits_{i=1}^n \frac{ ReLU(c_i)}{\sum\limits_{j=1}^{n}ReLU{(c_j)}} \bm{e}_i \right.
\nonumber\\
&\;\left.- \sum\limits_{i=1}^m \frac{ReLU{(\langle\bm{e}_i', \bm{e}_0\rangle)}}{\sum\limits_{j=1}^{m}ReLU{(\langle\bm{e}_j', \bm{e}_0\rangle)} + \sum\limits_{j=m+1}^{n}ReLU{(c_j)}} \bm{e}_i' \right.
\nonumber\\
&\; \left. - \!\!\!\! \left.\sum\limits_{i=m+1}^n \frac{ ReLU(c_i)}{\sum\limits_{j=1}^{m}ReLU{(\langle\bm{e}_j', \bm{e}_0\rangle)} + \sum\limits_{j=m+1}^{n}ReLU{(c_j)}} \bm{e}_i \right. \right] \!\!,
\end{align}
where $\bm{s}^T=\text{sgn}(\bm{g})^T$ is the sign of the global model update without attacks. Solving the optimization problem generates an adaptive attack to FLTrust. We consider a strong adaptive attacker who has full knowledge about the FL system when solving the optimization problem. In particular, $||\bm{g}_0||, \bm{s}, c_i\ (i=1,2,\cdots,n), \bm{e}_0$, and $\bm{e}_i\ (i=1,2,\cdots,n)$ are all available to the attacker.
\myparatight{Solving the optimization problem} We use a standard gradient ascent approach to solve the optimization problem. Specifically, we can compute the gradient $\nabla_{\bm{e}_i'}h$ of the objective function $h$ with respect to each $\bm{e}_i'$ and move $\bm{e}_i'$ a small step along the gradient. Since the gradient $\nabla_{\bm{e}_i'}h$ involves a Jacobian matrix of $\bm{e}_i'$, it is not practical to directly compute the gradient. Therefore, we leverage a zeroth-order method \cite{cheng2018query,nesterov2017random} to compute the gradient, which is a standard method to solve such optimization problems with computationally intractable objective functions. Specifically, we compute the gradient
$\nabla_{\bm{e}_i'} h$ as follows:
\begin{align}
\label{zero_grad}
\nabla_{\bm{e}_i'} h \approx \frac{h(\bm{e}_i' + \gamma\bm{u}) - h(\bm{e}_i') }{\gamma} \cdot \bm{u},
\end{align}
where $\bm{u}$ is a random vector sampled from the multivariate Gaussian distribution $N(\bm{0},\sigma^2\bm{I})$ with zero mean and diagonal covariance matrix, and $\gamma > 0$ is a smoothing parameter.
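The one-sample finite-difference estimate above can be sketched as follows (the function signature is our own; in the attack it is called once per gradient-ascent step with a fresh $\bm{u}$):

```python
import numpy as np

def zeroth_order_grad(h, e, gamma, sigma, rng=None):
    """Single-sample zeroth-order estimate of the gradient of a
    black-box objective h at the point e, with smoothing parameter
    gamma and perturbation u ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(0.0, sigma, size=e.shape)
    return (h(e + gamma * u) - h(e)) / gamma * u
```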
We optimize $\bm{e}_i'$ one by one following the standard {coordinate ascent} approach, i.e., when optimizing $\bm{e}_i'$, all other $\bm{e}_j', j\neq i$ are fixed. Specifically, we use projected gradient ascent to iteratively optimize $\bm{e}_i'$. In the beginning, we initialize $\bm{e}_i'$ using the Trim attack, i.e., we use the Trim attack to compute the poisoned local model updates and initialize $\bm{e}_i'$ as the corresponding unit vector. Then, in each iteration, we sample a random vector $\bm{u}$ from $N(\bm{0},\sigma^2\bm{I})$ and compute the gradient $\nabla_{\bm{e}_i'}h$ following Equation (\ref{zero_grad}). We multiply the gradient by a step size $\eta$ and add it to $\bm{e}_i'$ to get the new $\bm{e}_i'$. Finally, we project $\bm{e}_i'$ to the unit sphere to ensure that $\bm{e}_i'$ is a valid unit vector. We repeat the gradient ascent process for $Q$ iterations. Moreover, we repeat the iterations over the unit vectors for $V$ iterations. Algorithm \ref{attack_algo} shows our adaptive attack. We let $\bm{g}_i'=\Vert\bm{g}_0\Vert\cdot\bm{e}_i'$ after $\bm{e}_i'$ is solved for $i=1,2,\cdots,m$.
\section{Evaluation}
We evaluate our FLTrust against both existing poisoning attacks to FL and adaptive attacks in this section.
\subsection{Experimental Setup}
\label{sec:setup}
\subsubsection{Datasets}
We use multiple datasets from different domains in our evaluation, including five image classification datasets and a human activity recognition dataset. \xc{We follow previous work \cite{fang2019local} to distribute the training examples in a dataset among clients.
Assume there are $M$ classes in a dataset. We randomly split the clients into $M$ groups. A training example with label $l$ is assigned to group $l$ with probability $q>0$ and to any other group with probability $\frac{1-q}{M-1}$. Within the same group, data are uniformly distributed to each client. $q$ controls the distribution difference of the clients' local training data. If $q=1/M$, then the clients' local training data are independent and identically distributed (IID), otherwise the clients' local training data are non-IID. Moreover, a larger $q$ indicates a higher degree of non-IID among the clients' local training data. One characteristic of FL is that clients often have non-IID local training data~\cite{Konen16,McMahan17}. Therefore, we will set $q>1/M$ by default to simulate the non-IID settings.
Next, we use the MNIST dataset as an example to show the distribution process. Assume we have 100 clients in total and set $q=0.5$. $M=10$ for the MNIST dataset. We first randomly split the clients into 10 groups, each containing 10 clients. For a training image of digit $l$ (e.g., $l=5$), we first assign it to group 5 with probability 0.5, and to any other group with probability $\frac{1-0.5}{10-1}\approx 0.056$. Once the group is determined, e.g., group 5 is chosen, we will select a client from group 5 uniformly at random and assign this training image to the selected client. }
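The group-assignment step of this distribution process can be sketched as follows (the helper name is ours):

```python
import numpy as np

def assign_group(label, M, q, rng=None):
    """Assign one training example with the given label to one of M
    client groups: its own group (= label) with probability q, and
    each other group with probability (1 - q) / (M - 1)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.full(M, (1.0 - q) / (M - 1))
    probs[label] = q
    return int(rng.choice(M, p=probs))
```

Within the chosen group, the example is then given to a uniformly random client of that group.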
\myparatight{MNIST-0.1} MNIST \cite{lecun2010mnist} is a 10-class digit image classification dataset, which consists of 60,000 training examples and 10,000 testing examples. We set $q=0.1$ in MNIST-0.1; since $M=10$, this means $q=1/M$, i.e., the clients' local training data are IID. We use MNIST-0.1 to show that FLTrust is also effective in the IID setting.
\myparatight{MNIST-0.5} In MNIST-0.5, we simulate non-IID local training data among the clients via setting $q=0.5$.
\myparatight{Fashion-MNIST} Fashion-MNIST \cite{xiao2017/online} is a 10-class fashion image classification task, which has a predefined training set of 60,000 fashion images and a testing set of 10,000 fashion images. Like the MNIST-0.5 dataset, we distribute the training examples to the clients with $q=0.5$ to simulate non-IID local training data.
\myparatight{CIFAR-10} CIFAR-10 \cite{krizhevsky2009learning} is a color image classification dataset consisting of predefined 50,000 training examples and 10,000 testing examples. Each example belongs to one of the 10 classes. To simulate non-IID local training data, we distribute the training examples to clients with $q=0.5$.
\myparatight{Human activity recognition (HAR)} The HAR dataset \cite{anguita2013public} consists of human activity data collected from the smartphones of 30 real-world users. The data are signals from multiple sensors on a user's smartphone, and the task is to predict the user's activity among 6 possible activities, i.e., WALKING, WALKING\_UPSTAIRS, WALKING\_DOWNSTAIRS, SITTING, STANDING, and LAYING. Each example includes 561 features and there are 10,299 examples in total. Unlike the previous datasets, we do not need to distribute the data to clients, as each user is naturally treated as a client. \xc{HAR thus represents a real-world FL scenario. We use 75\% of each client's data as training examples and the remaining 25\% as testing examples. We note that HAR has unbalanced local training data across clients: the maximum number of training examples on a client is 409, the minimum is 281, and the mean is 343.}
\begin{table*}[!t]\renewcommand{\arraystretch}{1}
\caption{The default FL system parameter settings.}
\centering
\vspace{2mm}
\addtolength{\tabcolsep}{-3pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
{} & {Explanation} & {MNIST-0.1} & {MNIST-0.5} & {Fashion-MNIST} & {CIFAR-10} & {HAR} & {\xc{CH-MNIST}} \\ \hline
{$n$} & {\# clients} & \multicolumn{4}{c|}{100} & {30} & {\xc{40}}\\ \hline
{$\tau$} & {\# clients selected in each iteration} & \multicolumn{6}{c|}{$n$}\\ \hline
{$R_l$} & {\# local iterations} & \multicolumn{6}{c|}{1}\\ \hline
{$R_g$} & {\# global iterations} & \multicolumn{2}{c|}{2,000} & {2,500} & {1,500} & {1,000} & {\xc{2,000}}\\ \hline
{$b$} & {batch size} & \multicolumn{3}{c|}{32} & {64} & \multicolumn{2}{c|}{32}\\ \hline
{$\alpha\cdot\beta$} & {combined learning rate} & \multicolumn{2}{c|}{$3\times10^{-4}$} & {$6\times10^{-3}$} & {$2\times10^{-4}$} & {$3\times10^{-3}$} & \xc{\makecell{$3\times10^{-4}$ (decay at the 1500th and \\ 1750th iterations with factor 0.9)}}\\ \hline
{$m/n$} & {fraction of malicious clients (\%)} & \multicolumn{6}{c|}{20}\\ \hline
{$m$} & {\# malicious clients} & \multicolumn{4}{c|}{20} & {6} & {\xc{8}}\\ \hline
{$f$} & {Krum parameter} & \multicolumn{6}{c|}{$m$}\\ \hline
{$k$} & {Trim-mean parameter} & \multicolumn{6}{c|}{$m$}\\ \hline
{$|D_0|$} & {size of the root dataset} & \multicolumn{6}{c|}{100}\\ \hline
\end{tabular}
\label{tab:param}
\vspace{-3mm}
\end{table*}
\xc{\myparatight{CH-MNIST} CH-MNIST \cite{kather2016multi} is a medical image classification dataset consisting of 5,000 images of histology tiles collected from colorectal cancer patients. Each example has $64\times 64$ gray-scale pixels and belongs to one of the 8 classes. We use 4,000 images selected randomly as the training examples and use the other 1,000 images as the testing examples. To simulate non-IID local training data, we distribute the training examples to clients with $q=0.5$.}
\subsubsection{Evaluated Poisoning Attacks}
We consider both data poisoning attacks and local model poisoning attacks. For data poisoning attacks, we consider the popular label flipping attack. For local model poisoning attacks, we evaluate Krum attack, Trim attack, and our adaptive attack (untargeted attacks) \cite{fang2019local}, as well as Scaling attack (targeted attack) \cite{bagdasaryan2020backdoor}.
\myparatight{Label flipping (LF) attack} We use the same label flipping attack setting as \cite{fang2019local}. In particular, for each training example on the malicious clients, we flip its label $l$ to $M-l-1$, where $M$ is the total number of labels and $l\in\{0,1,\cdots,M-1\}$.
\myparatight{Krum attack} Krum attack is an untargeted local model poisoning attack optimized for the Krum aggregation rule. We use the default parameter settings in \cite{fang2019local} for the Krum attack.
\myparatight{Trim attack} Trim attack is an untargeted local model poisoning attack optimized for the Trim-mean and Median aggregation rules. We use the default parameter settings in \cite{fang2019local} for the Trim attack.
\myparatight{Scaling attack} Scaling attack is a targeted local model poisoning attack. Specifically, the attacker-chosen target testing examples are normal testing examples with a predefined feature-pattern trigger embedded. Following \cite{bagdasaryan2020backdoor}, we use the data augmentation scheme in \cite{Gu17} to implement the Scaling attack. Specifically, each malicious client copies $p$ fraction of its local training examples, embeds the trigger to them, changes their labels to the attacker-chosen target label, and uses them to augment its local training data. Then, in each iteration of FL, each malicious client computes its local model update based on the augmented local training data and scales it by a factor $\lambda\gg1$ before sending it to the server.
Specifically, we use the same pattern trigger in \cite{Gu17} as our trigger for MNIST-0.1, MNIST-0.5, Fashion-MNIST, \xc{and CH-MNIST}, and we set the attacker-chosen target label as 0; for CIFAR-10, we consider the same pattern trigger and target label (i.e., ``bird") in \cite{bagdasaryan2020backdoor};
and for HAR, we create a feature-pattern trigger by setting every 20th feature to 0 and we set the target label as ``WALKING\_UPSTAIRS". Following previous work~\cite{bagdasaryan2020backdoor}, we set the scaling factor $\lambda=n$, where $n$ is the number of clients. In each dataset, the attacker-chosen target testing examples consist of the trigger-embedded normal testing examples whose true labels are not the target label.
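As an illustration of the HAR trigger and the scaling step, a sketch under our reading of the description above (we interpret ``every 20th feature'' as indices $0, 20, 40, \cdots$; the actual implementation may differ):

```python
import numpy as np

def embed_har_trigger(x, every=20):
    """Embed the HAR feature-pattern trigger: set every 20th feature to 0."""
    x = x.copy()
    x[::every] = 0.0
    return x

def scaled_update(poisoned_update, lam):
    """A malicious client scales its poisoned local update by lambda >> 1
    (lambda = n, the number of clients, in the evaluation)."""
    return lam * poisoned_update
```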
\xc{\myparatight{Adaptive attack}
We evaluate the adaptive attack proposed in Section \ref{sec:adaptive}. Our adaptive attack leverages a zeroth-order optimization method. Following the suggestions by previous work \cite{cheng2018query,nesterov2017random}, we set $\sigma^2=0.5$ and $\gamma = 0.005$ in the zeroth-order method. Moreover, we set $\eta=0.01$ and $V=Q=10$ so that the adaptive attack converges.
\subsubsection{Evaluation Metrics} For the LF attack, Krum attack, Trim attack, \xc{and adaptive attack}, we use the standard \emph{testing error rate} of the global model to evaluate an FL method since these attacks aim to increase the testing error rate. Specifically, the testing error rate of a global model is the fraction of testing examples whose labels are incorrectly predicted by the global model. An FL method is more robust against these attacks if its global models achieve lower testing error rates under these attacks. The Scaling attack is a targeted attack, which aims to preserve the testing error rate of normal testing examples while making the global model predict the attacker-chosen target label for the attacker-chosen target testing examples. Therefore, other than the testing error rate, we further use \emph{attack success rate} to measure the Scaling attack. Specifically, the attack success rate is the fraction of the attacker-chosen target testing examples whose labels are predicted as the attacker-chosen target label by the global model. An FL method is more robust against the Scaling attack if its global model achieves a lower attack success rate.
\begin{table}[!t]
\caption{The CNN architecture of the global model used for MNIST-0.1, MNIST-0.5, and Fashion-MNIST.}
\centering
\vspace{2mm}
\begin{tabular}{|c|c|} \hline
{Layer} & {Size} \\ \hline
{Input} & { $28\times28\times1$}\\ \hline
{Convolution + ReLU} & { $3\times3\times30$}\\ \hline
{Max Pooling} & { $2\times2$}\\ \hline
{Convolution + ReLU} & { $3\times3\times50$}\\ \hline
{Max Pooling} & { $2\times2$}\\ \hline
{Fully Connected + ReLU} & {100}\\ \hline
{Softmax} & {10}\\ \hline
\end{tabular}
\label{tab:cnn}
\vspace{-2mm}
\end{table}
\subsubsection{FL System Settings} By default, we assume there are $n=100$ clients in total for each dataset except HAR and CH-MNIST. For HAR, the data are collected from 30 users, each of which is treated as a client. Therefore, HAR has 30 clients in total. \xc{For CH-MNIST, there are only 4,000 training examples in total and thus we assume 40 clients such that each client has 100 training examples on average. Unless otherwise mentioned, we assume 20\% of the clients are malicious for each dataset. However, we will also explore the impact of the fraction of malicious clients.} Table \ref{tab:param} shows the default FL system settings that we will use unless otherwise mentioned.
\myparatight{Global models} We train different types of global models on different datasets to show the generality of our method. Specifically, for MNIST-0.1, MNIST-0.5, and Fashion-MNIST, we train a convolutional neural network (CNN) as the global model; Table \ref{tab:cnn} shows the architecture of the CNN. For HAR, we train a logistic regression (LR) classifier as the global model. \xc{For CIFAR-10 and CH-MNIST, we consider the widely used ResNet20 architecture \cite{he2016deep} as the global model.}
\myparatight{Parameter settings of the FL methods} We compare FLTrust with FedAvg~\cite{Konen16,McMahan17}, Krum~\cite{Blanchard17}, Trim-mean~\cite{Yin18}, and Median~\cite{Yin18}.
Details of these FL methods can be found in Section \ref{sec:background}. FedAvg is a popular FL method in non-adversarial settings, while Krum, Trim-mean, and Median are Byzantine-robust FL methods. These methods all follow the three-step framework described in Algorithm \ref{global_upate_algo}, though they use different aggregation rules. Therefore, they all use the parameters $\tau$, $R_l$, $R_g$, $\alpha$, $\beta$, and $b$. Following previous work \cite{fang2019local}, we set $\tau=n$, i.e., all clients are selected in each iteration; and we set $R_l=1$, in which case we can treat the product of the global learning rate $\alpha$ and the local learning rate $\beta$ as a single learning rate. We set this combined learning rate on each dataset to achieve small training error rates and fast convergence. We set the batch size $b=32$ for all datasets except CIFAR-10, where we set $b=64$. We set the number of global iterations $R_g$ such that the FL methods converge. Specifically, $R_g=2,000$ for MNIST-0.1, MNIST-0.5, \xc{and CH-MNIST}; $R_g=2,500$ for Fashion-MNIST; $R_g=1,500$ for CIFAR-10; and $R_g=1,000$ for HAR.
Krum further has the parameter $f$ and Trim-mean further has the trim parameter $k$, each of which is an upper bound of the number of malicious clients. We set $f=k=m$, which assumes that the server knows the exact number of malicious clients and gives advantages to Krum and Trim-mean.
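For concreteness, the two coordinate-wise aggregation rules of Yin et al.~\cite{Yin18} can be sketched in a few lines; this is our own minimal rendering of the rules as described in Section \ref{sec:background}, not the authors' code.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median: for each model parameter, take the median
    of that coordinate across all local model updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, k):
    """Coordinate-wise trimmed mean with trim parameter k: for each
    coordinate, drop the k largest and k smallest values and average the
    remaining n - 2k values, so it requires n > 2k local model updates."""
    stacked = np.sort(np.stack(updates), axis=0)
    n = stacked.shape[0]
    assert n > 2 * k, "trimmed mean needs n > 2k updates"
    return stacked[k:n - k].mean(axis=0)
```

The `n > 2k` requirement is why Trim-mean is inapplicable once the assumed fraction of malicious clients reaches 50\%, a constraint we revisit when varying the number of malicious clients below.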
\myparatight{Root dataset} Our FLTrust requires a small root dataset. By default, we assume the root dataset has only 100 training examples. Moreover, we consider the following two cases depending on how the root dataset is created.
\begin{itemize}
\item {\bf Case I.} We assume the service provider can collect a representative root dataset for the learning task, i.e., the root dataset has the same distribution as the overall training data distribution of the learning task. In particular, we sample the root dataset from the union of the clients' clean local training data uniformly at random. For instance, for MNIST-0.5, we sample the root dataset from its 60,000 training examples uniformly at random.
\item {\bf Case II.} We assume the root dataset has a distribution different from the overall training data distribution of the learning task. \xc{In particular, we assume the root dataset is biased towards a certain class. Specifically, we sample a fraction of the examples in the root dataset from a particular class (class 1 in our experiments) in the union of the clients' clean local training data and the remaining examples are sampled from the remaining classes uniformly at random, where we call the fraction \emph{bias probability}. Note that, for all the datasets except HAR and CH-MNIST, the root dataset has the same distribution as the overall training data, i.e., Case II reduces to Case I, when the bias probability is 0.1 because they have 10 classes; for HAR and CH-MNIST, Case II reduces to Case I when the bias probability is 0.17 and 0.125 because they have 6 and 8 classes, respectively. The root data distribution deviates more from the overall training data distribution when the bias probability is larger. }
\end{itemize}
In both cases, we exclude the sampled root dataset from the clients' local training data, indicating that the root dataset is collected independently by the service provider. Unless otherwise mentioned, we consider Case I.
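The Case II sampling procedure can be sketched as follows; the function name and signature are ours, and with a bias probability of one over the number of classes the sampling matches Case I in distribution, as described above.

```python
import numpy as np

def sample_root_dataset(labels, size=100, bias_prob=0.1, biased_class=1, rng=None):
    """Sample indices for a root dataset of the given size. A fraction
    bias_prob of the examples comes from biased_class; the remaining
    examples are drawn uniformly from the other classes (Case II)."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    n_biased = int(round(bias_prob * size))
    biased_pool = np.flatnonzero(labels == biased_class)
    other_pool = np.flatnonzero(labels != biased_class)
    chosen = np.concatenate([
        rng.choice(biased_pool, n_biased, replace=False),
        rng.choice(other_pool, size - n_biased, replace=False),
    ])
    return chosen  # these indices are then removed from the clients' data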
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{The testing error rates of different FL methods under different attacks and the attack success rates of the Scaling attacks. The results for the Scaling attacks are in the form of ``testing error rate / attack success rate''.}
\vspace{1mm}
\centering
\addtolength{\tabcolsep}{-4pt}
\captionsetup[subfloat]{captionskip=0pt}
\subfloat[CNN global model, MNIST-0.1]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.04 & 0.10 & 0.06 & 0.06 & 0.04 \\
\hline
LF attack & 0.06 & 0.10 & 0.05 & 0.05 & 0.04 \\
\hline
Krum attack & 0.10 & 0.90 & 0.07 & 0.07 & 0.04 \\
\hline
Trim attack & 0.16 & 0.10 & 0.13 & 0.13 & 0.04 \\
\hline
Scaling attack & 0.02 / 1.00 & 0.10 / 0.00 & 0.05 / 0.01 & 0.05 / 0.01 & 0.03 / 0.00 \\
\hline
\xc{Adaptive attack} & \xc{0.08} & \xc{0.10} & \xc{0.11} & \xc{0.13} & \xc{0.04} \\
\hline
\end{tabular}
}
\subfloat[CNN global model, MNIST-0.5]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.04 & 0.10 & 0.06 & 0.06 & 0.05 \\
\hline
LF attack & 0.06 & 0.10 & 0.06 & 0.06 & 0.05 \\
\hline
Krum attack & 0.10 & 0.91 & 0.14 & 0.15 & 0.05 \\
\hline
Trim attack & 0.28 & 0.10 & 0.23 & 0.43 & 0.06 \\
\hline
Scaling attack & 0.02 / 1.00 & 0.09 / 0.01 & 0.06 / 0.02 & 0.06 / 0.01 & 0.05 / 0.00 \\
\hline
\xc{Adaptive attack} & \xc{0.13} & \xc{0.10} & \xc{0.22} & \xc{0.90} & \xc{0.06} \\
\hline
\end{tabular}
}
\subfloat[CNN global model, Fashion-MNIST]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.10 & 0.16 & 0.14 & 0.14 & 0.11 \\
\hline
LF attack & 0.14 & 0.15 & 0.26 & 0.21 & 0.11 \\
\hline
Krum attack & 0.13 & 0.90 & 0.18 & 0.23 & 0.12 \\
\hline
Trim attack & 0.90 & 0.16 & 0.24 & 0.27 & 0.14 \\
\hline
Scaling attack & 0.90 / 1.00 & 0.16 / 0.03 & 0.17 / 0.85 & 0.16 / 0.05 & 0.11 / 0.02 \\
\hline
\xc{Adaptive attack} & \xc{0.90} & \xc{0.18} & \xc{0.34} & \xc{0.24} & \xc{0.14} \\
\hline
\end{tabular}
}
\subfloat[ResNet20 global model, CIFAR-10]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.16 & 0.54 & 0.24 & 0.25 & 0.18 \\
\hline
LF attack & 0.21 & 0.56 & 0.27 & 0.45 & 0.18 \\
\hline
Krum attack & 0.24 & 0.90 & 0.52 & 0.64 & 0.18 \\
\hline
Trim attack & 0.81 & 0.51 & 0.72 & 0.75 & 0.20 \\
\hline
Scaling attack & 0.90 / 1.00 & 0.44 / 0.07 & 0.22 / 0.96 & 0.25 / 0.96 & 0.18 / 0.02 \\
\hline
\xc{Adaptive attack} & \xc{0.90} & \xc{0.58} & \xc{0.69} & \xc{0.82} & \xc{0.20} \\
\hline
\end{tabular}
}
\subfloat[LR global model, HAR]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.03 & 0.12 & 0.04 & 0.05 & 0.04 \\
\hline
LF attack & 0.17 & 0.10 & 0.05 & 0.05 & 0.04 \\
\hline
Krum attack & 0.03 & 0.22 & 0.05 & 0.05 & 0.04 \\
\hline
Trim attack & 0.32 & 0.10 & 0.36 & 0.13 & 0.05 \\
\hline
Scaling attack & 0.04 / 0.81 & 0.10 / 0.03 & 0.04 / 0.36 & 0.05 / 0.13 & 0.05 / 0.01 \\
\hline
\xc{Adaptive attack} & \xc{0.04} & \xc{0.19} & \xc{0.05} & \xc{0.06} & \xc{0.05} \\
\hline
\end{tabular}
}
\xc{
\subfloat[\xc{ResNet20 global model, CH-MNIST}]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& FedAvg & Krum & Trim-mean & Median & FLTrust \\
\hline
No attack & 0.10 & 0.24 & 0.10 & 0.11 & 0.10 \\
\hline
LF attack & 0.12 & 0.39 & 0.15 & 0.13 & 0.12 \\
\hline
Krum attack & 0.11 & 0.95 & 0.13 & 0.13 & 0.12 \\
\hline
Trim attack & 0.64 & 0.21 & 0.55 & 0.44 & 0.13 \\
\hline
Scaling attack & 0.26 / 0.20 & 0.34 / 0.03 & 0.14 / 0.02 & 0.11 / 0.01 & 0.14 / 0.03 \\
\hline
Adaptive attack & 0.14 & 0.29 & 0.50 & 0.47 & 0.13 \\
\hline
\end{tabular}
}
}
\label{tab:effective}
\vspace{-4mm}
\end{table}
\subsection{Experimental Results}\label{sec:exp_res}
\myparatight{Our FLTrust achieves the three defense goals} Recall that we have three defense goals (discussed in Section~\ref{sec:prob}): fidelity, robustness, and efficiency. Table \ref{tab:effective} shows the testing error rates of different FL methods under different attacks \xc{including our adaptive attack}, as well as the attack success rate of the Scaling attack on the six datasets. Our results show that FLTrust achieves the three goals.
First, when there is no attack, our FLTrust has testing error rates similar to FedAvg, achieving the fidelity goal. However, existing Byzantine-robust FL methods may have higher or much higher testing error rates when there is no attack. For instance, on MNIST-0.1, the testing error rates for FedAvg and FLTrust are both 0.04, while they are 0.10, 0.06, and 0.06 for Krum, Trim-mean, and Median, respectively; on CH-MNIST, FedAvg, Trim-mean, and FLTrust achieve testing error rates of 0.10, while Krum and Median achieve testing error rates of 0.24 and 0.11, respectively. Our results indicate that FLTrust is more accurate than existing Byzantine-robust FL methods in non-adversarial settings. This is because existing Byzantine-robust FL methods exclude some local model updates when aggregating them as the global model update, while FLTrust considers all of them with the help of the root dataset.
Second, our FLTrust achieves the robustness goal, while existing FL methods do not. Specifically, \xc{the testing error rates of FLTrust under the untargeted attacks including our adaptive attack are at most 0.04 higher than those of FedAvg under no attacks on the six datasets.} On the contrary, every existing Byzantine-robust FL method has much higher testing error rates, especially under the untargeted attack that is optimized for the method. For instance, on MNIST-0.5, Krum attack increases the testing error rate of Krum from 0.10 to 0.91, while Trim attack increases the testing error rates of Trim-mean and Median from 0.06 to 0.23 and 0.43, respectively. FedAvg may have lower testing error rates than existing Byzantine-robust FL methods under the evaluated untargeted attacks. This is because these untargeted attacks are not optimized for FedAvg. Previous work~\cite{Blanchard17} showed that FedAvg can be arbitrarily manipulated by a single malicious client.
\begin{table*}[!t]\renewcommand{\arraystretch}{1.1}
\centering
\caption{The testing error rates of different variants of FLTrust under different attacks and the attack success rates of the Scaling attacks on MNIST-0.5. The results for the Scaling attacks are in the form of ``testing error rate / attack success rate''. ``--'' means that the attacks are not applicable.}
\vspace{2mm}
\addtolength{\tabcolsep}{0pt}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& No attack & LF attack & Krum attack & Trim attack & Scaling attack & \xc{Adaptive attack} \\
\hline
FLTrust-Server & 0.21 & -- & -- & -- & -- & -- \\
\hline
FLTrust-withServer & 0.07 & 0.08 & 0.09 & 0.10 & 0.08 / 0.01 & \xc{0.94} \\
\hline
FLTrust-NoReLU & 0.28 & 0.90 & 0.90 & 0.90 & 0.94 / 0.08 & \xc{0.90}\\
\hline
FLTrust-NoNorm & 0.05 & 0.06 & 0.06 & 0.08 & 0.94 / 0.08 & \xc{0.06}\\
\hline
FLTrust-ParNorm & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 / 0.01 & \xc{0.06}\\
\hline
FLTrust & 0.05 & 0.05 & 0.05 & 0.06 & 0.05 / 0.00 & \xc{0.06}\\
\hline
\end{tabular}%
\label{tab:variants}
\end{table*}%
\begin{figure}[!t]
\centering
\includegraphics[scale = 0.2]{figs/result_train_error_iter.pdf}
\caption{\xc{The training error rates vs. the number of iterations for FLTrust under different attacks and FedAvg without attacks on MNIST-0.5.}}
\label{fig:convergence}
\vspace{-2mm}
\end{figure}
Moreover, for the Scaling attack, FLTrust substantially reduces its attack success rates. Specifically, the attack success rates for FLTrust are at most 0.03. On the contrary, the attack success rates for FedAvg are always high on the six datasets, and they are also high for the existing Byzantine-robust FL methods on multiple datasets, indicating that existing FL methods are not robust against the Scaling attack. One interesting observation is that the Scaling attack may decrease the testing error rates in some cases. \xc{We suspect the reason may be that the data augmentation in the Scaling attack positively impacts the aggregation of the local model updates. Specifically, the data augmentation in the Scaling attack improves the diversity of the training data, and thus helps the learned global model better generalize to the testing dataset.
}
Third, FLTrust achieves the efficiency goal. Specifically, in each iteration, FLTrust does not incur extra overhead to the clients; and compared to FedAvg, the extra computation incurred to the server by FLTrust includes computing a server model update, computing the trust scores, and normalizing the local model updates, which are negligible for the powerful server. Moreover, Figure \ref{fig:convergence} shows the training error rates versus the global iteration number for FLTrust under different attacks and FedAvg under no attack on MNIST-0.5. Our results show that FLTrust converges as fast as FedAvg, which means that FLTrust also does not incur extra communication cost for the clients (each iteration of FL requires communication between clients and server), compared to FedAvg under no attacks. We note that Krum, Trim-mean, and Median do not incur extra overhead to the clients. However, Krum incurs significant computational overhead to the server when there are a large number of clients. This is because Krum requires calculating pairwise distances among local model updates in each iteration.
\begin{table}[!t]\renewcommand{\arraystretch}{1.1}
\centering
\caption{The testing error rates of FLTrust under different attacks and the attack success rates of the Scaling attacks when the root dataset is sampled with different bias probabilities in Case II.}
\vspace{2mm}
\addtolength{\tabcolsep}{-5pt}
\xc{
\subfloat[MNIST-0.1]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & {0.1} & {0.2} & {0.4} & {0.6} & {0.8} & {1.0}\\
\hline
{No attack} & {0.04} & {0.04} & {0.04} & {0.05} & {0.05} & {0.34}\\
\hline
{LF attack} & {0.04} & {0.04} & {0.04} & {0.05} & {0.78} & {0.84}\\
\hline
{Krum attack} & {0.04} & {0.04} & {0.07} & {0.89} & {0.89} & {0.89}\\
\hline
{Trim attack} & {0.04} & {0.05} & {0.08} & {0.12} & {0.46} & {0.89}\\
\hline
{Scaling attack} & {0.03 / 0.00} & {0.03 / 0.01} & {0.04 / 0.00} & {0.04 / 0.00} & {0.06 / 0.01} & {0.42 / 0.01}\\
\hline
{Adaptive attack} & {0.04} & {0.05} & {0.08} & {0.12} & {0.90} & {0.90}\\
\hline
\end{tabular}%
\label{mnist-0.1-bias}
}}
\subfloat[MNIST-0.5]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & {0.1} & {0.2} & {0.4} & {0.6} & {0.8} & {1.0}\\
\hline
{No attack} & {0.05} & {0.05} & {0.06} & {0.08} & {0.11} & {0.80}\\
\hline
{LF attack} & {0.05} & {0.05} & {0.08} & {0.10} & {0.25} & {0.89}\\
\hline
{Krum attack} & {0.05} & {0.05} & {0.08} & {0.12} & {0.86} & {0.89}\\
\hline
{Trim attack} & {0.06} & {0.06} & {0.08} & {0.12} & {0.16} & {0.89}\\
\hline
{Scaling attack} & {0.05 / 0.00} & {0.05 / 0.01} & {0.06 / 0.00} & {0.07 / 0.01} & {0.12 / 0.00} & {0.86 / 0.01}\\
\hline
\xc{Adaptive attack} & \xc{0.06} & \xc{0.07} & \xc{0.08} & \xc{0.13} & \xc{0.90} & \xc{0.90}\\
\hline
\end{tabular}%
\label{mnist-0.5-bias}
}
\xc{
\subfloat[Fashion-MNIST]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & {0.1} & {0.2} & {0.4} & {0.6} & {0.8} & {1.0}\\
\hline
{No attack} & {0.11} & {0.11} & {0.12} & {0.15} & {0.16} & {0.90}\\
\hline
{LF attack} & {0.11} & {0.11} & {0.12} & {0.12} & {0.14} & {0.90}\\
\hline
{Krum attack} & {0.12} & {0.12} & {0.16} & {0.90} & {0.90} & {0.90}\\
\hline
{Trim attack} & {0.14} & {0.14} & {0.15} & {0.21} & {0.90} & {0.90}\\
\hline
{Scaling attack} & {0.11 / 0.02} & {0.12 / 0.04} & {0.12 / 0.04} & {0.13 / 0.02} & {0.15 / 0.03} & {0.90 / 0.00}\\
\hline
{Adaptive attack} & {0.14} & {0.14} & {0.16} & {0.90} & {0.90} & {0.90}\\
\hline
\end{tabular}%
\label{fashionmnist-bias}
}}
\xc{
\subfloat[CIFAR-10]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & 0.1 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\
\hline
{No attack} & 0.18 & 0.18 & 0.18 & 0.21 & 0.90 & 0.90 \\
\hline
{LF attack} & 0.18 & 0.19 & 0.20 & 0.24 & 0.90 & 0.90 \\
\hline
{Krum attack} & 0.18 & 0.18 & 0.19 & 0.33 & 0.90 & 0.90\\
\hline
{Trim attack} & 0.20 & 0.20 & 0.24 & 0.63 & 0.90 & 0.90\\
\hline
{Scaling attack} & 0.18 / 0.02 & 0.18 / 0.00 & 0.18 / 0.03 & 0.22 / 0.04 & 0.90 / 0.00 & 0.90 / 0.00 \\
\hline
{Adaptive attack} & 0.20 & 0.20 & 0.27 & 0.68 & 0.90 & 0.90\\
\hline
\end{tabular}%
\label{CIFAR-10-bias}
}}
\xc{
\subfloat[HAR]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & {0.17} & {0.2} & {0.4} & {0.6} & {0.8} & {1.0}\\
\hline
{No attack} & {0.04} & {0.04} & {0.06} & {0.06} & {0.07} & {0.48}\\
\hline
{LF attack} & {0.04} & {0.05} & {0.06} & {0.05} & {0.07} & {0.48}\\
\hline
{Krum attack} & {0.04} & {0.05} & {0.05} & {0.05} & {0.09} & {0.48}\\
\hline
{Trim attack} & {0.05} & {0.05} & {0.06} & {0.09} & {0.14} & {0.48}\\
\hline
{Scaling attack} & {0.05 / 0.01} & {0.05 / 0.01} & {0.06 / 0.02} & {0.06 / 0.03} & {0.07 / 0.05} & {0.48 / 0.34}\\
\hline
{Adaptive attack} & {0.05} & {0.05} & {0.06} & {0.09} & {0.48} & {0.48}\\
\hline
\end{tabular}%
\label{har-bias}
}}
\xc{
\subfloat[CH-MNIST]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{Bias probability} & {0.125} & {0.2} & {0.4} & {0.6} & {0.8} & {1.0}\\
\hline
{No attack} & {0.10} & {0.10} & {0.11} & {0.13} & {0.13} & {0.89}\\
\hline
{LF attack} & {0.12} & {0.12} & {0.12} & {0.17} & {0.21} & {0.89}\\
\hline
{Krum attack} & {0.12} & {0.12} & {0.14} & {0.17} & {0.19} & {0.89}\\
\hline
{Trim attack} & {0.13} & {0.13} & {0.14} & {0.20} & {0.20} & {0.89}\\
\hline
{Scaling attack} & {0.14 / 0.03} & {0.14 / 0.02} & {0.15 / 0.02} & {0.16 / 0.06} & {0.14 / 0.01} & {0.89 / 0.01}\\
\hline
{Adaptive attack} & {0.13} & {0.14} & {0.14} & {0.89} & {0.89} & {0.89}\\
\hline
\end{tabular}%
\label{chmnist-bias}
}}
\label{tab:sample_bias}
\vspace{-2mm}
\end{table}%
\myparatight{Comparing different variants of FLTrust} FLTrust has three key features: a root dataset, using ReLU to clip the cosine similarity scores, and normalizing each local model update. Depending on how each feature is used, we consider the following five variants of FLTrust:
\begin{itemize}
\item {\bf FLTrust-Server.} In this variant, the server only uses the root dataset to train the global model. Therefore, there is no communication between the clients and the server during the training process. \xc{We use this variant to show that the server cannot obtain a good model using its root dataset alone. In other words, even if some clients are malicious, communicating with clients still improves the global model. }
\item {\bf FLTrust-withServer.} In this variant, the server computes the weighted average of the clients' local model updates together with the server model update whose trust score is 1.
\item {\bf FLTrust-NoReLU.} In this variant, the server does not use ReLU to clip the cosine similarity scores of the local model updates when computing their trust scores.
\item {\bf FLTrust-NoNorm.} In this variant, the server does not normalize the local model updates to have the same magnitude as the server model update.
\item {\bf FLTrust-ParNorm.} In this variant, the server applies partial normalization, i.e., only normalizes the local model updates whose magnitudes are larger than that of the server model update to have the same magnitude as the server model update.
\end{itemize}
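The full FLTrust aggregation that these variants ablate combines all three features; a minimal sketch following the paper's description (the function name is ours):

```python
import numpy as np

def fltrust_aggregate(local_updates, server_update):
    """One FLTrust aggregation step: each local model update gets a trust
    score equal to its ReLU-clipped cosine similarity with the server model
    update (computed on the root dataset), each local update is normalized
    to the server update's magnitude, and the global model update is the
    trust-score-weighted average of the normalized local updates."""
    s = np.asarray(server_update, dtype=float)
    s_norm = np.linalg.norm(s)
    scores, normalized = [], []
    for g in local_updates:
        g = np.asarray(g, dtype=float)
        g_norm = np.linalg.norm(g)
        cos = np.dot(g, s) / (g_norm * s_norm)
        scores.append(max(cos, 0.0))             # ReLU clips negative similarities
        normalized.append(g * s_norm / g_norm)   # magnitude normalization
    scores = np.asarray(scores)
    if scores.sum() == 0.0:
        return np.zeros_like(s)  # no trusted update; edge-case handling is ours
    return (scores[:, None] * np.stack(normalized)).sum(axis=0) / scores.sum()
```

Removing the ReLU lets updates pointing away from the server update contribute with negative weight (FLTrust-NoReLU), and skipping normalization lets a scaled-up malicious update dominate the weighted average (FLTrust-NoNorm), which is what the ablation in Table \ref{tab:variants} measures.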
\begin{figure}[!t]
\centering
\vspace{-3mm}
\subfloat[Testing error rate]{\includegraphics[width=0.24 \textwidth]{figs/result_rootsize_error.pdf}\label{fig:untargeted_rootsize}}
\subfloat[Attack success rate]{\includegraphics[width=0.24 \textwidth]{figs/result_rootsize_succ.pdf}\label{fig:scaling_rootsize}}
\caption{\xc{Impact of the root dataset size on FLTrust under different attacks for MNIST-0.5.}}
\label{fig:num_sample}
\vspace{-3mm}
\end{figure}
Table \ref{tab:variants} compares the variants with respect to their testing error rates under different attacks and the attack success rates of the Scaling attacks on MNIST-0.5. The attacks are not applicable to FLTrust-Server as it does not require communication with the clients. Our results show that FLTrust outperforms the five variants. FLTrust outperforms FLTrust-Server and FLTrust-withServer because the root dataset is small. The fact that FLTrust outperforms
FLTrust-NoReLU, FLTrust-NoNorm, and FLTrust-ParNorm indicates the necessity of our ReLU operation and normalization.
\myparatight{Impact of the root dataset} Our root dataset can be characterized by its size and how it is sampled (i.e., Case I vs. Case II). Therefore, we study the impact of the root dataset on FLTrust with respect to its size and how it is sampled. Figure \ref{fig:num_sample} shows the testing error rates of FLTrust under different attacks and the attack success rates under the Scaling attack on MNIST-0.5 when the size of the root dataset increases from 50 to 500, where the root dataset is sampled uniformly in Case I. We observe that a root dataset with only 100 training examples is sufficient for FLTrust to defend against the attacks. Specifically, when the root dataset has 100 training examples, the testing error rates of FLTrust under attacks are similar to that of FedAvg without attacks, and the attack success rate of the Scaling attack is close to 0. When the size of the root dataset increases beyond 100, the testing error rates and attack success rates of FLTrust further decrease slightly.
\xc{We also evaluate the impact of the bias probability in Case II. Table \ref{tab:sample_bias} shows the testing error rates of FLTrust under different attacks and the attack success rates of the Scaling attacks when the bias probability varies. The second column in each table corresponds to the bias probability with which Case II reduces to Case I. We increase the bias probability up to 1.0 to simulate larger difference between the root data distribution and the overall training data distribution. We observe that FLTrust is accurate and robust when the bias probability is not too large. For instance, when the bias probability is no more than 0.4 for MNIST-0.5, the testing error rates of FLTrust under attacks are at most 0.08, compared to 0.05 when the bias probability is 0.1. Our results show that FLTrust works well when the root data distribution does not diverge too much from the overall training data distribution.}
\begin{figure}[!t]
\centering
\subfloat[LF attack]{\includegraphics[width=0.24 \textwidth]{figs/result2_flip_attack}}
\subfloat[Krum attack]{\includegraphics[width=0.24 \textwidth]{figs/result2_krum_attack}}\\
\vspace{-1mm}
\subfloat[Trim attack]{\includegraphics[width=0.24 \textwidth]{figs/result2_trim_attack}}
\subfloat[Scaling attack]{\includegraphics[width=0.24 \textwidth]{figs/result2_scaling_attack}}\\
\vspace{-1mm}
\subfloat[\xc{Adaptive attack}]{\includegraphics[width=0.24 \textwidth]{figs/result2_adaptive_attack}}
\vspace{1mm}
\caption{Impact of the total number of clients on the testing error rates of different FL methods under different attacks ((a)-(c)) and the attack success rates of the Scaling attacks, where MNIST-0.5 is used. The testing error rates of all the compared FL methods are similar and small under the Scaling attacks, which we omit for simplicity.}
\label{fig:num_clients}
\vspace{-5mm}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[LF attack]{\includegraphics[width=0.24 \textwidth]{figs/result3_flip_attack}\label{fig:num_malicious_lf}}
\subfloat[Krum attack]{\includegraphics[width=0.24 \textwidth]{figs/result3_krum_attack}\label{fig:num_malicious_krum}}\\
\vspace{-1mm}
\subfloat[Trim attack]{\includegraphics[width=0.24 \textwidth]{figs/result3_trim_attack}\label{fig:num_malicious_trim}}
\subfloat[Scaling attack]{\includegraphics[width=0.24 \textwidth]{figs/result3_scaling_attack}\label{fig:num_malicious_scale}}\\
\vspace{-1mm}
\subfloat[\xc{Adaptive attack}]{\includegraphics[width=0.24 \textwidth]{figs/result3_adaptive_attack}\label{fig:num_malicious_scale}}
\vspace{1mm}
\caption{Impact of the fraction of malicious clients on the testing error rates of different FL methods under different attacks ((a)-(c)) and the attack success rates of the Scaling attacks, where MNIST-0.5 is used. The testing error rates of all the compared FL methods are similar and small under the Scaling attacks, which we omit for simplicity.}
\label{fig:num_malicious}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[MNIST-0.1]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_mnist01}}
\subfloat[MNIST-0.5]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_mnist05}} \\
\vspace{-1mm}
\subfloat[Fashion-MNIST]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_fashion}}
\subfloat[CIFAR-10]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_cifar10}}\\
\vspace{-1mm}
\subfloat[HAR]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_human}}
\subfloat[\xc{CH-MNIST}]{\includegraphics[width=0.24 \textwidth]{figs/result_adaptive_chmnist}}
\vspace{1mm}
\caption{Impact of the fraction of malicious clients on the testing error rates of FLTrust under the adaptive attacks.}
\label{adpative_attack}
\end{figure}
\myparatight{Impact of the total number of clients}
Figure \ref{fig:num_clients} shows the testing error rates of different FL methods under different attacks, as well as the attack success rates of the Scaling attacks, when the total number of clients $n$ increases from 50 to 400. We set the fraction of malicious clients to be $\frac{m}{n}=20\%$. We observe that FLTrust can defend against the attacks for all the total numbers of clients considered. Specifically, FLTrust under attacks achieves testing error rates similar to FedAvg under no attacks, while the attack success rates of the Scaling attacks are close to 0 for FLTrust. Existing methods can defend against the Scaling attacks on MNIST-0.5, i.e., the attack success rates are close to 0. However, they cannot defend against the Krum attack, Trim attack, \xc{and/or our adaptive attack}, i.e., their corresponding testing error rates are large.
\myparatight{Impact of the number of malicious clients} Figure \ref{fig:num_malicious} shows the testing error rates of different FL methods under different attacks and the attack success rates of the Scaling attacks on MNIST-0.5, when the fraction of malicious clients increases from 0 to 95\%. Trim-mean cannot be applied when the fraction of malicious clients exceeds 50\% because the number of local model updates removed by Trim-mean is twice the number of malicious clients. Therefore, for Trim-mean, we only show the results when the fraction of malicious clients is less than 50\%.
We observe that, under existing attacks \xc{and our adaptive attacks}, FLTrust can tolerate up to 90\% of malicious clients. Specifically, FLTrust under these attacks still achieves testing error rates similar to FedAvg without attacks when up to 90\% of the clients are malicious, while the attack success rates of the Scaling attacks for FLTrust are still close to 0 when up to 95\% of the clients are malicious. However, existing Byzantine-robust FL methods can tolerate far fewer malicious clients.
For instance, under Krum attack, the testing error rate of the global model learnt by Krum increases to 0.90 when only 10\% of the clients are malicious, while the testing error rates of the global models learnt by Trim-mean and Median become larger than 0.85 when the fraction of malicious clients reaches 40\%.
Figure \ref{adpative_attack} further shows the testing error rates of the global models learnt by FLTrust as a function of the fraction of malicious clients under the adaptive attacks on all datasets. Our results show that FLTrust is robust against adaptive attacks even if a large fraction of clients are malicious on all datasets. Specifically, for MNIST-0.1 (MNIST-0.5, Fashion-MNIST, CIFAR-10, HAR, \xc{or CH-MNIST}), FLTrust under adaptive attacks with over 60\% (over 40\%, up to 60\%, up to 60\%, up to 40\%, \xc{or over 40\%}) of malicious clients can still achieve testing error rates similar to FedAvg under no attack.
\xc{\section{Discussion and Limitations}
\label{sec:discussion}
\myparatight{FLTrust vs. fault-tolerant computing}
Fault-tolerant computing \cite{barborak1993consensus} aims to remain functional when there are malicious clients. However, conventional fault-tolerant computing and federated learning have the following key difference: the clients communicate with each other to compute results in fault-tolerant computing \cite{barborak1993consensus}, while clients only communicate with a cloud server in federated learning. Our FLTrust leverages such unique characteristics of federated learning to bootstrap trust, i.e., the server collects a root dataset, and uses it to guide the aggregation of the local model updates.}
\myparatight{Different ways of using the root dataset} Fang et al. \cite{fang2019local} also proposed to use a root dataset (they called it validation dataset). However, we use the root dataset in a way that is different from theirs. In particular, they use the root dataset to remove potentially malicious local model updates in each iteration, while we use it to assign trust scores to clients and normalize local model updates. As shown by Fang et al. \cite{fang2019local}, their way of using the root dataset is not effective in many cases.
\xc{\myparatight{Poisoned root dataset} Our FLTrust requires a clean root dataset. We acknowledge that FLTrust may not be robust against a poisoned root dataset. The root dataset may be poisoned when it is collected from the Internet or by an insider attacker. However, since FLTrust only requires a small root dataset, a service provider can collect a clean one by itself at a small cost, e.g., by asking its employees to generate and manually label a clean root dataset. }
\xc{\myparatight{Adaptive attacks and hierarchical root of trust}
We considered an adaptive attack via extending the state-of-the-art framework of local model poisoning attacks to our FLTrust. We acknowledge that there may exist stronger local model poisoning attacks to FLTrust, which is an interesting future work to explore. Moreover, it is an interesting future work to consider a hierarchical root of trust. For instance, the root dataset may contain multiple subsets with different levels of trust. The subsets with higher trust may have a larger impact on the aggregation.}
\section{Conclusion and Future Work}
\label{sec:conclusion}
We proposed and evaluated a new federated learning method called FLTrust to achieve Byzantine robustness against malicious clients. The key difference between our FLTrust and existing federated learning methods is that the server itself collects a clean small training dataset (i.e., root dataset) to bootstrap trust in FLTrust. Our extensive evaluations on six datasets show that FLTrust with a small root dataset can achieve Byzantine robustness against a large fraction of malicious clients. In particular, FLTrust under adaptive attacks with a large fraction of malicious clients can still train global models that are as good as the global models learnt by FedAvg under no attacks. \xc{Interesting future work includes 1) designing stronger local model poisoning attacks to FLTrust and 2) considering a hierarchical root of trust.}
\section*{Acknowledgement}
We thank the anonymous reviewers for their constructive comments. This work was supported in part by NSF grants No. 1937786, 1943226, and 2110252, an IBM Faculty Award, and a Google Faculty Research Award.
\bibliographystyle{IEEEtranS}
\section{\label{Sec: Introduction} Introduction}
Geometric effects and the control of various phases of matter, from superconducting and semiconducting states to topological insulators with conducting edge or surface states, are today central topics in condensed matter physics, with potential applications in quantum computing and room-temperature superconductivity \cite{Xiao2010,Qi2011}.
Among the different quantum states, the valley degree of freedom and the manifestation of nontrivial quantum phases, together with the generation of topological states of light, have lately drawn significant attention \cite{Schaibley2016,Kelardeh2016_Attosecond,Khanikaev2017,Vitale2018,Galan2019,Hafezi2019}.
The possibility of using the valley degree of freedom to store and carry information has attracted considerable attention and has led to the field of electronic applications known as valleytronics. The valley pseudospin has the potential to serve as a robust quantum signal processor and data-storage medium with a petahertz bandwidth.
Theoretical and experimental investigations of valley polarization (VP) have predominantly been performed on insulating states such as monolayer hexagonal boron nitride \cite{Song2017} and diamond \cite{Isberg2013}, as well as on semiconductors such as transition metal dichalcogenides (TMDCs) \cite{Cao2012,Mak2012,Jones2013,Yoshikawa2019,Zeng2012,Heinz2017,Langer2018} and silicon \cite{Salfi2014}. In addition, semimetallic systems with broken inversion symmetry, such as bismuth \cite{Zhu2012,Zhu_Behnia2017}, gapped graphene \cite{Motlagh2019_gapped}, bilayer graphene \cite{Shimazaki2015,Sui2015,Kumar2020}, and graphene superlattices \cite{Yankowitz2012,Gorbachev2014}, have been suggested as potential candidates for valley-contrasting optoelectronic devices.
It has frequently been reported that the presence of a gap is a precondition for valley polarization \cite{Yao2008,Lensky2015,Motlagh2019_gapped}. In this report, we reexamine this widely accepted notion and address the following important question: can we expect a sizable electric-field-induced VP in gapless Dirac semimetals?
Answering this question is imperative, considering that introducing an eV-scale staggered bandgap (also called a Semenoff mass \cite{Semenoff1984}) in graphene, which is required for a measurable VP in noncentrosymmetric graphene systems, is technologically challenging.
We report VP in gapless graphene with an efficiency as high as 35$\%$, obtained by utilizing a chiral optical pulse that is few-cycle (sub-ten-femtosecond) and strong-field (volt-per-angstrom scale), with controlled polarity, amplitude, and carrier-envelope phase (CEP). Such a noticeable VP in monolayer graphene is attributed to the nonperturbative dynamics of electrons in the strong electric field of the laser pulse, the global topology of the honeycomb band structure, and the nonequivalent Berry phases of $\pm\pi$ at the K and K$'$ valleys.
In a nonperturbative nonlinear light-matter interaction, where the light pulse has a strong amplitude (comparable to the interatomic field) and contains only one or two cycles, the incident electric field drives the electrons nonadiabatically, giving rise to nonadiabatic topological effects and the generation of a valley-contrasting population in reciprocal space.
In a honeycomb crystal, the two degenerate valleys can be distinguished by pseudovector quantities, namely the Berry phase, connection, and curvature. The Berry phase
describes the phase that a quantum mechanical wave function accumulates along a closed loop in reciprocal space (or any parameter space). Such a topological phase defines the topology of electronic states and plays a fundamental role in many emerging phenomena in condensed-matter systems, such as high-order harmonic generation \cite{Silva2019,Chacon2020,Gaarde_PRL2020,Bauer2020}, the quantum spin Hall effect \cite{Herrero2018}, and the anomalous quantum Hall effect \cite{Nagaosa2010,Rubio2019,Cavalleri2020}.
Moreover, the Berry connection, defined in reciprocal space as $ {\mathbfcal A}^{{\rm{nn}}} = \left\langle {{\phi _n}|i\grad |{\phi _n}} \right\rangle $ with $\grad = \left( {{\partial _{{k_x}}},{\partial _{{k_y}}}} \right)$ and ${{\phi _n}}$ the periodic part of Bloch band $n$, acts as a vector potential in momentum space. ${{\mathbfcal A}^{{\rm{nn}}}}$ is related to the overlap of two Bloch wavefunctions neighboring in momentum space and has the geometrical meaning of a ``connection'' on the manifold in Hilbert space. In real space, for two bands $n,m$ in the presence of a strong field, the Berry connection shifts the electron wave packets in the unit cell by a vector equal to ${{\mathbfcal A}^{nn}}-{{\mathbfcal A}^{mm}}$.
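The $\pm\pi$ Berry phases at the two valleys can be illustrated numerically: the sketch below evaluates the discrete (link-variable) Berry phase of the conduction band around a small circular loop in momentum space, using the gapless ($\delta=0$) tight-binding eigenstates introduced later in the text. The lattice constant and hopping values follow the text; the loop radius and discretization are arbitrary numerical choices for illustration.

```python
import numpy as np

a, gamma0 = 2.46, -3.03   # lattice constant (Angstrom) and hopping (eV), from the text

def g(qx, qy):
    """g(q) = gamma * f(q), the nearest-neighbour structure factor."""
    f = (np.exp(1j * a * qx / np.sqrt(3))
         + 2 * np.exp(-1j * a * qx / (2 * np.sqrt(3))) * np.cos(a * qy / 2))
    return gamma0 * f

def u_c(qx, qy):
    """Conduction-band spinor for delta = 0: (1, e^{-i phi_q}) / sqrt(2)."""
    gq = g(qx, qy)
    return np.array([1.0, np.conj(gq) / abs(gq)]) / np.sqrt(2)

def berry_phase(cx, cy, radius=0.1, n=400):
    """Gauge-invariant discrete Berry phase around a circular loop:
    minus the summed phases of the link overlaps <u_i|u_{i+1}>."""
    th = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    us = [u_c(cx + radius * np.cos(t), cy + radius * np.sin(t)) for t in th]
    return -sum(np.angle(np.vdot(us[i], us[(i + 1) % n])) for i in range(n))

K = (0.0, 4 * np.pi / (3 * a))   # a Dirac point of this parametrization
print(berry_phase(*K))           # ≈ ±pi for a loop enclosing a valley
print(berry_phase(0.5, 0.0))     # ≈ 0 for a loop away from the Dirac points
```

A loop enclosing K (or K$'$) returns $\pm\pi$, while a contractible loop away from the Dirac points returns zero; this is the valley-contrasting topology exploited in the text.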
We note that our proposed method, which uses a single-cycle pulse for the optical control and manipulation of the valleys in pristine graphene, is favored over the two-color bicircular field pursued by another group \cite{Galan2019}.
The latter shaped-pulse lasers require multiple field cycles; thereby, excitonic and many-electron effects may hinder their practical application in valleytronics. Indeed, recent experiments bear witness to the formation of ultrafast excitons on sub-ps time scales in two-dimensional materials \cite{Steinleitner2017,Bernhard2018}.
Furthermore, we note that in order to obtain a VP close to one, the system requires an intrinsic energy gap at the band edge closely in resonance with the pulse energy. In such systems, e.g., TMDCs, the valley pseudospin is associated with a nonzero Berry curvature and an intrinsic magnetic moment near the Dirac cones. In other words, such crystals allow the simultaneous breaking of time-reversal symmetry (by a circular pulse) and inversion symmetry (by an intrinsic gap). Nevertheless, realizing the substantial VP predicted in this report is important, considering the abundance of graphene and the fabrication challenges of TMDC nanostructures. We believe that such a noticeable VP in graphene can be measured with state-of-the-art technology \cite{Wang2013,Higuchi2017}.
While the physical mechanisms of these phenomena will be discussed below in detail, we emphasize here that nonperturbative nonreciprocal responses bring together some of the most fundamental issues in condensed matter physics, such as symmetries, the topological nature of electrons, and electron correlation.
\section{Methodology}
\label{sec:method}
In this section, we calculate the electron dynamics in gapless graphene induced by a time-dependent, spatially-uniform electric field.
The microscopic theory of strong-field dynamics in solids, including the Coulomb-induced many-body interactions, is fundamentally described by the density matrix equations.
We consider the field-matter interaction in the length gauge, where the Houston functions (also known as accelerated Bloch states) \cite{Houston_1940} have previously been utilized to describe the coherent dynamics of various classes of crystalline materials by solving the time-dependent density matrix equation. The benefit of such a basis set is that the induced current separates into intra- and interband components.
It is known that the photoexcited carriers scatter extremely fast in graphene compared to other materials, due to its linear energy dispersion \cite{Gierz2013_Snapshots}.
Li et al. \cite{Li2012} experimentally demonstrated that an ultrafast population inversion and broadband optical gain are established within the duration of a 35-fs light pulse.
Since the time-dependent Schrödinger equation (TDSE) does not incorporate relaxation processes, in this study we derive density matrix equations applicable to the description of dephasing and dissipation of electrons in the two-band tight-binding configuration of graphene.
Here we take only the electron-electron interaction into consideration for our open quantum system, since other interactions, such as phonon coupling or quantum optical effects, possess time scales orders of magnitude slower than the electronic dephasing and decoherence time scales \cite{Huttner2017}.\\
Let us consider the solution to the Schrödinger equation for a two-band crystal
in the presence of an intense optical field. The state of the system is described in terms of the general wave function $\Psi $, which obeys the Schrödinger equation
\begin{equation}
\label{Eq:TDSE}
{i\hbar \frac{{d\Psi }}{{dt}} = {\cal H}\Psi }
\end{equation}
with the Hamiltonian operator ${\cal H}$ given by
\begin{equation}
\label{Eq:Hamiltonian}
{{\cal H} = {{\cal H}_0} + e{\bf{F}}(t) \cdot {\bf{r}}}
\end{equation}
where $\mathbf F(t)$ is the pulse's electric field, $e$ is the electron charge, and ${\cal H}_0$ is the Hamiltonian of the solid in the absence of the optical field.
The position operator $\bf{r}$ in the reciprocal space representation is ${\bf{r}} = i{\grad _{\bf{q}}}$.
We approximate ${\cal H}_0$ as a nearest-neighbor tight-binding (TB) Hamiltonian,
\begin{equation}
\label{G_Hamiltonian}
{\cal H}_0 = \left( {\begin{array}{*{20}{c}}
\delta &{g({\bf{q}})}\\
{{g^*}({\bf{q}})}&{ - \delta }
\end{array}} \right)
\end{equation}
Here $\delta$ denotes the small doping energy of graphene ($\sim$ meV), and $g({\bf{q}}) = \gamma f({\bf{q}})$, with hopping integral $\gamma=-3.03$ eV and
\begin{equation}
f({\bf{q}}) = \exp \left( {i\frac{{a{q_x}}}{{\sqrt 3 }}} \right) + 2\exp \left( { - i\frac{{a{q_x}}}{{2\sqrt 3 }}} \right)\cos \left( {\frac{{a{q_y}}}{2}} \right)
\end{equation}
where $a=2.46~\mathrm{\AA}$ is the lattice constant. We define $g({\bf{q}}) = \left| {g({\bf{q}})} \right|{e^{i{\phi _{\bf{q}}}}}$, with ${\phi _{\bf{q}}} = {\tan ^{ - 1}}\left( {\frac{{{\mathop{\rm Im}\nolimits} (g({\bf{q}}))}}{{{\mathop{\rm Re}\nolimits} (g({\bf{q}}))}}} \right)$.
Accordingly, the eigenstates and eigenenergies of the conduction and valence bands can be found from the above Hamiltonian, ${\cal H}_0$, as follows
\begin{equation}
\label{Eq:eigenstate}
\phi _{\bf{q}}^{c/v}({\bf{r}}) = \frac{{e^{i{\bf{q}} \cdot {\bf{r}}}}}{{\sqrt {2{E_{c/v}}(\delta + {E_{c/v}})} }}\left( {\begin{array}{*{20}{c}}
{\delta + {E_{c/v}}}\\
{\left| g \right|{e^{ - i{\phi _{\bf{q}}}}}}
\end{array}} \right)
\end{equation}
\begin{equation}
\label{eigenenergy}
{E_{c/v}}({\bf{q}}) = \pm \sqrt {{\delta ^2} + {{\left| {g({\bf{q}})} \right|}^2}}
\end{equation}
where the $+$ and $-$ signs correspond to the conduction (c) and valence (v) bands, respectively.
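The dispersion above can be sketched numerically as follows; the values of $a$ and $\gamma$ are taken from the text, and $\delta=0$ is assumed for pristine graphene.

```python
import numpy as np

a = 2.46        # lattice constant (Angstrom)
gamma0 = -3.03  # hopping integral (eV); named gamma0 to avoid clashing with the relaxation rate
delta = 0.0     # doping/mass term (eV); assumed zero for pristine graphene

def f(qx, qy):
    """Structure factor f(q) of the nearest-neighbour TB Hamiltonian."""
    return (np.exp(1j * a * qx / np.sqrt(3))
            + 2 * np.exp(-1j * a * qx / (2 * np.sqrt(3))) * np.cos(a * qy / 2))

def bands(qx, qy):
    """Eigenenergies E_{c/v}(q) = ±sqrt(delta^2 + |g(q)|^2) with g = gamma0 * f."""
    e = np.sqrt(delta**2 + abs(gamma0 * f(qx, qy))**2)
    return e, -e

K = (0.0, 4 * np.pi / (3 * a))   # one Dirac point of this parametrization
print(abs(f(*K)))                # ≈ 0: the gap closes at K when delta = 0
print(bands(0.0, 0.0))           # Gamma point: E = ±3|gamma0| ≈ ±9.09 eV
```

The vanishing of $f({\bf q})$ at the two inequivalent Dirac points is what makes the gapless case the interesting one for the question posed in the introduction.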
An applied electric field generates both intraband (adiabatic) and interband (nonadiabatic) electron dynamics. The intraband dynamics is determined by the Bloch acceleration theorem in reciprocal space, $\hbar {\bf{\dot k}} = e{\bf{F}}(t)$.
For an electron with initial momentum $\bf{q}$, the dynamics is described by the time-dependent wave vector ${\bf{k}}(t) = {\bf{q}} - e{\hbar ^{ - 1}}{\bf{A}}(t)$, where ${\bf{A}}(t) = - \int_{ - \infty }^t {\bf{F}} (t')dt'$ is the vector potential of the laser field. In effect, the electron wave packet with initial crystal wave vector $\bf{q}$ follows a trajectory-guided vector, ${\bf{q}} \mapsto {\bf{k}}(t)$, whose time-dependent trajectory is governed by the vector potential of the incident pulse.
The most general solution to the time-dependent Schrödinger equation (\ref{Eq:TDSE}) can be written in the interaction representation as
\begin{equation}
\label{Eq:Psi_expand}
{\Psi _{\bf{q}}}({\bf{r}},t) = \sum\limits_{n = v,c} {\int_{BZ} {\beta _{_{{\bf{k}}(t)}}^n\Phi _{n,{\bf{q}}}^{({\rm{H}})}({\bf{r}},t)} }
\end{equation}
where $\Phi _{n,{\bf{q}}}^{({\rm{H}})}({\bf{r}},t) = \phi _{{\bf{k}}({\bf{t}})}^n({\bf{r}}){e^{ - \frac{i}{\hbar }\int_{ - \infty }^t {E_n^T[{\bf{k}}(t')]dt'} }}$ are the time-dependent adiabatic basis functions, i.e., the Houston functions, which solve the Schrödinger equation within a single band without interband coupling.
\begin{equation}
\label{Eq:E_T}
E_n^T[{\bf{k}}(t')] = {E_{\rm{n}}}[{\bf{k}}(t')] + e{\bf{F}}(t') \cdot{\mathbfcal{A}^{({\rm{nn}})}}[{\bf{k}}(t')]
\end{equation}
is called the modified band energy of the ${n^{\rm{th}}}$ band; it contains the Bloch band dispersion, which generates the dynamic phase [first term], as well as the field-induced geometric (Berry) counterpart [second term]. Here $ {{\mathbfcal A}^{(nn)}} = i\left\langle {\phi _{{\bf{k}}({\bf{t}})}^n|{\grad_{\bf{q}}}|\phi _{{\bf{k}}({\bf{t}})}^n} \right\rangle $ is the intraband Berry connection.
The latter term is critical for detecting the topological information of solid-state systems, including the peculiarities observed in the quantum Hall regime and the pseudospin-related Berry phase \cite{Kelardeh2017_superlattice}.
$\phi _{{\bf{k}}({\bf{t}})}^n({\bf{r}})$ are the Bloch eigenstates of graphene (Eq. \ref{Eq:eigenstate}), and $n=v,c$ stands for the valence and conduction bands, respectively.
The expansion coefficients ${\beta _c}(t)$ and ${\beta _v}(t)$ in Eq. \ref{Eq:Psi_expand} are interpreted as the probability amplitudes that at time $t$ the electron (or hole) is in the conduction or valence band, respectively. They satisfy the following system of integro-differential equations
\begin{equation}
\begin{array}{*{20}{l}}
{i\hbar \dot \beta _{{\bf{k}}(t)}^c = e{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv}\beta _{{\bf{k}}(t)}^v}\\
{i\hbar \dot \beta _{{\bf{k}}(t)}^v = e{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv*}\beta _{{\bf{k}}(t)}^c}
\end{array},
\end{equation}
where
\begin{equation}
\label{Eq:Q_cv}
{\bf{Q}}_{{\bf{k}}(t)}^{cv} = {\mathbfcal A}_{{\bf{k}}(t)}^{cv}{e^{\frac{i}{\hbar }\int_{ - \infty }^t {\left( {E_c^T[{\bf{k}}(t')] - E_v^T[{\bf{k}}(t')]} \right)dt'} }}
\end{equation}
determines the matrix element of the interband interaction, where ${\mathbfcal A}_{{\bf{k}}(t)}^{cv}$ is the interband Berry connection. The interband Berry connection can be expressed in terms of the transition dipole moment (TDM) ${\bf{D}}_{{\bf{k}}(t)}^{cv}$ \cite{Kelardeh2015_Graphene} as ${\bf{D}}_{{\bf{k}}(t)}^{cv} = e{\mathbfcal A}_{{\bf{k}}(t)}^{cv}$. The TDM determines optical transitions between the VB and CB at crystal momentum $\mathbf q$.
The exponential factor in Eq. \ref{Eq:Q_cv}, which is calculated from Eq. \ref{Eq:E_T}, defines the global phase difference between the states of the CB and VB (i.e., the generalized band offset).
In the two-band graphene system, the Berry connection operator, comprising intraband and interband components, has the following matrix form
\begin{equation}
\mathbfcal{\hat A} (\mathbf{q})= \left( {\begin{array}{*{20}{c}}
{{\mathbfcal{A}^{cc}}}&{{\mathbfcal{A}^{cv}}}\\
{\mathbfcal{A}^{cv*}}&{{\mathbfcal{A}^{vv}}}
\end{array}} \right)
\end{equation}
with
\begin{equation}
\label{Eq:A_nm}
{{\mathbfcal A}^{nm}} = i\left\langle {\phi _{\bf{q}}^{(n)}|{\grad }|\phi _{\bf{q}}^{(m)}} \right\rangle
\end{equation}
Substituting Eq. \ref{Eq:eigenstate} into Eq. \ref{Eq:A_nm}, one finds an analytical expression for the matrix elements of the Berry connection in the tight-binding approximation.
The intraband components read
\begin{eqnarray}
\label{Eq:BerryCon_intra}
\begin{array}{l}
{\cal{A}}_x^{cc/vv} = \frac{{a{\gamma ^2}}}{{\sqrt 3 }}\frac{{1 + {c_0}({c_3} - 2{c_0})}}{{{u_{c/v}}}}\\
{\cal{A}}_y^{cc/vv} = a{\gamma ^2}\frac{{{s_0}{s_3}}}{{{u_{c/v}}}}
\end{array}
\end{eqnarray}
and interband Berry connection:
\begin{eqnarray}
\label{Eq:BerryCon_interband}
\begin{array}{l}
{\cal{A}}_x^{cv} = \frac{{a{\gamma ^2}}}{{2\sqrt 3 {E_c}\left| g \right|}}\left[ {1 + {c_0}({c_3} - 2{c_0})} \right] + i\frac{{\sqrt 3 \delta a{\gamma ^2}}}{{2E_c^2\left| g \right|}}{c_0}{s_3}\\
{\cal{A}}_y^{cv} = \frac{{a{\gamma ^2}}}{{2{E_c}\left| g \right|}}{s_0}{s_3} + i\frac{{\delta a{\gamma ^2}}}{{2E_c^2\left| g \right|}}{s_0}\left( {{c_3} + 2{c_0}} \right)
\end{array}
\end{eqnarray}
where ${u_{c/v}} = \left| {{g^2}} \right| + {(\delta + {E_{c/v}})^2}$, and ${c_0} = \cos \left( {a{k_y}/2} \right)$, ${s_0} = \sin \left( {a{k_y}/2} \right)$, ${c_3} = \cos \left( {\sqrt 3 a{k_x}/2} \right)$, ${s_3} = \sin \left( {\sqrt 3 a{k_x}/2} \right)$.
So far, we have treated graphene as a closed two-level system; we now extend our model to incorporate dephasing and decoherence into the driven solid-state electron dynamics. To this end, we employ the Liouville-von Neumann equation and propagate the reduced density matrix in the length-gauge interaction picture to describe the quantum dynamics in the presence of relaxation processes:
\begin{equation}
\label{Eq:Rho_Dot}
\left\{ {\begin{array}{*{20}{l}}
{\dot \rho _{{\bf{k}}(t)}^{cv} = \frac{i}{\hbar }{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv}\left[ {\rho _{{\bf{k}}(t)}^{cc} - \rho _{{\bf{k}}(t)}^{vv}} \right] - \gamma \rho _{{\bf{k}}(t)}^{cv}}\\
{\dot \rho _{{\bf{k}}(t)}^{cc} = 2{\rm{Re}}\left[ {\frac{i}{\hbar }{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv*}\rho _{{\bf{k}}(t)}^{cv}} \right] - \gamma \rho _{{\bf{k}}(t)}^{cc}}
\end{array}} \right.
\end{equation}
${\rho^{ij}}$ are the solutions of the Born-Markov master equation, obtained by averaging over the thermal-bath degrees of freedom:
\begin{equation}
\label{Eq:Rho}
\rho (t) = {\rm{T}}{{\rm{r}}_B}\left( {\left| \Psi \right\rangle \left\langle \Psi \right|} \right) = \left( {\begin{array}{*{20}{c}}
{\rho _{{\bf{k}}(t)}^{cc}}&{\rho _{{\bf{k}}(t)}^{cv}}\\
{\rho _{{\bf{k}}(t)}^{cv*}}&{\rho _{{\bf{k}}(t)}^{vv}}
\end{array}} \right)
\end{equation}
The diagonal terms in Eq. \ref{Eq:Rho} represent the band populations, e.g., $\rho _{{\bf{k}}(t)}^{cc} = {\left| {\beta _{{\bf{k}}(t)}^c} \right|^2}$, and the off-diagonal terms define the polarization function. Note also that ${\rho ^{vv}} = 1 - {\rho ^{cc}}$.
The relaxation rate $\gamma$ in Eq. \ref{Eq:Rho_Dot} is inversely proportional to the dephasing time, ${\gamma _{({\rm{PHz}})}} = \frac{1}{{{T_{({\rm{fs}})}}}}$.
The Liouville-von Neumann equation (Eq. \ref{Eq:Rho_Dot}) in our setting goes beyond Boltzmann transport theory, in which only the band dispersion and the consequent group velocity enter the equation.
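A minimal propagation sketch for the two-level equations of motion (Eq. \ref{Eq:Rho_Dot}) is given below. The complex coupling $\Omega(t) = {\bf F}(t)\cdot{\bf Q}_{{\bf k}(t)}^{cv}/\hbar$ is abstracted into a user-supplied function, so this is a schematic single-${\bf q}$ integrator rather than the full reciprocal-space simulation; the fixed-step RK4 scheme is our choice of method, not one prescribed by the text.

```python
import numpy as np

def rhs(rho_cv, rho_cc, Om, gamma):
    """Right-hand side of the two-level density-matrix equations;
    Om stands for the complex coupling F(t)·Q^{cv}/hbar, rho_vv = 1 - rho_cc."""
    d_cv = 1j * Om * (rho_cc - (1.0 - rho_cc)) - gamma * rho_cv
    d_cc = 2.0 * np.real(1j * np.conj(Om) * rho_cv) - gamma * rho_cc
    return d_cv, d_cc

def propagate(Om_of_t, gamma, t0, t1, dt, rho_cv=0j, rho_cc=0.0):
    """Fixed-step RK4 propagation of (rho_cv, rho_cc) from t0 to t1."""
    t = t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1 = rhs(rho_cv, rho_cc, Om_of_t(t), gamma)
        k2 = rhs(rho_cv + h/2*k1[0], rho_cc + h/2*k1[1], Om_of_t(t + h/2), gamma)
        k3 = rhs(rho_cv + h/2*k2[0], rho_cc + h/2*k2[1], Om_of_t(t + h/2), gamma)
        k4 = rhs(rho_cv + h*k3[0], rho_cc + h*k3[1], Om_of_t(t + h), gamma)
        rho_cv += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        rho_cc += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return rho_cv, rho_cc

# free decay: with no driving, the CB population relaxes as exp(-gamma*t)
print(propagate(lambda t: 0.0, 0.5, 0.0, 2.0, 1e-3, rho_cc=0.5)[1])  # ≈ 0.5*exp(-1)
# constant real coupling, no relaxation: Rabi-like oscillation, rho_cc = sin^2(t)
print(propagate(lambda t: 1.0, 0.0, 0.0, np.pi / 2, 1e-3)[1])        # ≈ 1.0
```

The two printed checks (pure relaxation and pure Rabi flopping) are the limits in which the equations have simple closed-form solutions, which makes them convenient sanity tests before inserting the full ${\bf k}(t)$-dependent coupling.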
We assume that the VB is fully occupied and the CB is empty. The applied pulse is an intense single optical oscillation. In the numerical simulation, an important precondition on the chosen electric-field waveform is that $\int_{ - \infty }^\infty {{\bf{F}}(t')dt'} = 0$. We construct the laser waveform so that it satisfies this condition by defining ${\bf{A}}(t)$ and obtaining ${\bf{F}}(t)$ as its temporal derivative. We employ the following vector-potential waveform for the elliptically polarized pulse.
\begin{equation}
\label{Eq:VectorPotential}
\begin{array}{l}
{A_x}(t) = {F_{0}}{\omega ^{ - 1}}{e^{ - {{(t/\tau )}^2}}}\sin (\omega t + {\phi _{\rm{CEP}}}),\\
{A_y}(t) = \pm {\varepsilon}{F_{0}}{\omega ^{ - 1}}{e^{ - {{(t/\tau )}^2}}}\cos (\omega t + {\phi _{\rm{CEP}}}),
\end{array}
\end{equation}
${F_{0}}$ is the field amplitude, $\tau $ is the pulse length corresponding to carrier frequency $\omega$ = 1.5 eV/$\hbar $, and ${\phi _{\rm{CEP}}}$ is the CEP. The sign $ \pm $ determines the right- or left-handedness of the pulse, and $\varepsilon \in [0,1]$ controls the degree of ellipticity and hence the curvature of the electron trajectory. \\
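A numerical sketch of this waveform (Eq. \ref{Eq:VectorPotential}) follows; the sign convention ${\bf F}(t)=-{\rm d}{\bf A}/{\rm d}t$ follows from ${\bf A}(t) = -\int^t {\bf F}\,dt'$, and the value of $\tau$ and the time grid are assumptions chosen for illustration.

```python
import numpy as np

F0 = 0.9              # field amplitude (V/Angstrom), value used in the text
omega = 1.5 / 0.6582  # carrier frequency 1.5 eV/hbar, converted to rad/fs
tau = 1.5             # pulse length (fs); assumed single-cycle scale
eps = 1.0             # ellipticity (circular pulse)
phi_cep = 0.0

t = np.linspace(-10.0, 10.0, 20001)   # time grid (fs)
env = np.exp(-(t / tau) ** 2)
Ax = F0 / omega * env * np.sin(omega * t + phi_cep)
Ay = eps * F0 / omega * env * np.cos(omega * t + phi_cep)

# F(t) = -dA/dt, so the zero-net-impulse precondition holds automatically
# because A(t) vanishes at both ends of the grid.
Fx, Fy = -np.gradient(Ax, t), -np.gradient(Ay, t)
print(np.trapz(Fx, t), np.trapz(Fy, t))   # both ≈ 0

# Intraband trajectory k(t) = q - (e/hbar) A(t): since A(±∞) = 0, every
# electron returns to its initial crystal momentum q after the pulse.
```

Defining ${\bf F}(t)$ through ${\bf A}(t)$, rather than the other way around, is what guarantees $\int {\bf F}\,dt = 0$ by construction.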
The system of equations \ref{Eq:Rho_Dot} determines the laser-induced electron dynamics; solving these coupled integro-differential equations, we obtain the reciprocal-space distribution of electrons in the conduction and valence bands. Correspondingly, the time-dependent CB population is given by $\rho _{{\bf{k}}(t)}^{cc} = {\left| {\beta _{{\bf{k}}(t)}^c} \right|^2}$. The residual value of the CB population, ${\rho _{{\bf{k}}(t \to {t_f})}^{cc}}$, is defined as the population after the pulse. \\
We define the VP as
\begin{equation}
\label{Eq:VP}
{\rm{VP}} = \frac{{\left| {{n_{\rm{K}}} - n{}_{{\rm{K'}}}} \right|}}{{{n_{\rm{K}}} + n{}_{{\rm{K'}}}}},
\end{equation}
with
\begin{equation}
\label{Eq:NkNKprime}
\begin{array}{*{20}{l}}
{{n_{\rm{K}}} = \int_{ - {q_y}/\sqrt 3 }^{{q_y}/\sqrt 3 } {\int_0^{3\kappa /2} {\rho _{{\bf{k}}(t \to {t_f})}^{cc}{\rm{d}}{q_y}{\rm{d}}{q_x}} } ,}\\
{{n_{{\rm{K'}}}} = \int_{{q_y}/\sqrt 3 }^{ - {q_y}/\sqrt 3 } {\int_{ - 3\kappa /2}^0 {\rho _{{\bf{k}}(t \to {t_f})}^{cc}{\rm{d}}{q_y}{\rm{d}}{q_x}} } ,}
\end{array}
\end{equation}
as the total populations of the K and $\rm{K'}$ valleys inside their corresponding triangles, where $\kappa = 4\pi /3{a_0}$ is a constant. Owing to the honeycomb structure of graphene, the triangular mesh is chosen so as to fully capture the population proportions throughout the BZ, as illustrated in Fig. \ref{Fig_ExtendedBZ_T_100_Hex}.\\
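The VP measure of Eq. \ref{Eq:VP} can be sketched as below; the Gaussian toy population and the half-plane partition (standing in for the triangular valley regions of Eq. \ref{Eq:NkNKprime}) are illustrative assumptions only, not the paper's actual simulation output.

```python
import numpy as np

def valley_polarization(n_K, n_Kp):
    """VP = |n_K - n_K'| / (n_K + n_K'), as defined in the text."""
    return abs(n_K - n_Kp) / (n_K + n_Kp)

# schematic demo on a toy momentum grid (NOT the paper's triangular mesh)
a = 2.46
K = np.array([0.0, 4 * np.pi / (3 * a)])   # a Dirac point in this parametrization
Kp = -K                                    # its time-reversed partner

qx, qy = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2.5, 2.5, 251))
# hypothetical residual CB population: a stronger Gaussian blob at K than at K'
rho = (0.8 * np.exp(-((qx - K[0])**2 + (qy - K[1])**2) / 0.05)
       + 0.2 * np.exp(-((qx - Kp[0])**2 + (qy - Kp[1])**2) / 0.05))

mask_K = qy > 0   # crude half-plane partition standing in for the triangles
n_K, n_Kp = rho[mask_K].sum(), rho[~mask_K].sum()
print(valley_polarization(n_K, n_Kp))   # ≈ 0.6 for this toy population
```

Equal valley populations give VP=0 and a fully valley-selective excitation gives VP=1, which is the normalization used throughout the results section.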
The fine (eye-drop-shaped) line within the brightly colored CB momentum distribution represents the separatrix: the set of initial conditions whose trajectories pass through the Dirac point. In other words, the separatrix is the mirror image of the vector potential (polarization state) that steers the electron trajectories in momentum space. \\
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{ExtendedBZ_T_100_Hex.pdf} \caption{Schematic illustration of the CB population of graphene at the end of a single-cycle circular field with $F_0=0.9 {\rm{V/\AA}}$, CEP=0, and dephasing time T=100 fs. ${\rho _{{\bf{k}}(t \to {t_f})}^{cc}}$, calculated from Eq. \ref{Eq:Rho_Dot}, is color mapped between 0 (no excitation) and 1 (full occupation) in the reciprocal space. The first BZ and the inequivalent $\rm{K}$ and $\rm{K'}$ points are shown. The triangular borderline between the $\rm{K}$ and $\rm{K'}$ valleys is also indicated. }
\label{Fig_ExtendedBZ_T_100_Hex}
\end{figure}
\section{Result and discussion}
\label{sec:result}
Valley degeneracy in the momentum space of the conduction and valence bands presents an additional degree of freedom for charge-carrier manipulation. As with its counterpart, the spin states in spintronics, controlling the population of valley states is essential to the development of valley-based electronics.
We show that a single-cycle pulse results in a significantly large VP. The controlling parameters of the optical pulse are the field amplitude $F_0$, the ellipticity (and hence polarization state, tuned by $\varepsilon$), and the carrier-envelope phase ${\phi _{\rm{CEP}}}$.
\subsection{\label{Sec:Field_amplitude} Field Amplitude and Ellipticity}
First, we look into the simultaneous roles of ellipticity and field amplitude in the VP and find the optimal $\varepsilon$ and $F_0$ for which the VP is maximized. We set CEP=0 and exclude relaxation processes in this section. Fig. \ref{Fig_Combined_VP_vs_F_ECC}(a) plots the VP versus $F_0$ for a range of polarization states from a linear ($\varepsilon$=0) to a circular pulse ($\varepsilon$=1). Evidently, since a linear pulse preserves time-reversal symmetry, its corresponding VP is zero.
We span a wide range of $F_0$, up to 1.4 ${\rm{V/\AA}}$. In this field region, the VP increases monotonically for $\varepsilon$=0.25 and 0.5, whereas for $\varepsilon$=0.75 and 1, the VP, after a slow and oscillatory increase at small fields, grows rapidly to its maximum and then falls at higher fields. The highest VP corresponds to the circular pulse ($\varepsilon=1$) with field amplitude ${F_0} \simeq 0.9\,{\rm{V/\AA}}$.
Fig. \ref{Fig_Combined_VP_vs_F_ECC}(b), on the other hand, plots the VP as a function of the polarization state for different laser field amplitudes. For $F_0 < 0.5 {\rm{V/\AA}}$, the VP is approximately zero, with practically no influence from the ellipticity. However, at moderate fields of $\sim$0.5 to 0.9 ${\rm{V/\AA}}$, the VP is abruptly enhanced for $\varepsilon \ge 0.5$.
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{Combined_VP_vs_F_ECC.pdf} \caption{(a) VP as a function of the field amplitude $F_0$ for different ellipticities $\varepsilon$. The purple curve corresponds to the circularly polarized pulse, for which the VP increases monotonically with field and reaches a maximum of $\sim$35 percent at ${F_0} = 0.9\,{\rm{V/\AA}}$ before sharply falling at higher fields. (b) VP versus ellipticity $\varepsilon$ for different field amplitudes $F_0$. For a linearly polarized pulse, VP=0, since such a pulse preserves time-reversal symmetry. For $F_0 > 0.5 {\rm{V/\AA}}$, a threshold $\varepsilon \simeq 0.5 $ is observed, above which the VP steeply increases. }
\label{Fig_Combined_VP_vs_F_ECC}
\end{figure}
\subsection{\label{Sec:current} Induced Current and Net Charge Transport}
To understand the behavior of the VP versus field amplitude and ellipticity, we calculate the generated photocurrent and the net charge transferred through the graphene system.
The time-dependent electric field of the optical pulse polarizes the system, which generates an electric current ${\bf{J}}(t) = \left\{ {{J_x}(t),{J_y}(t)} \right\}$. Both intraband ($\mathbf{J}^{\rm{intra}}(t)$) and interband ($\mathbf{J}^{\rm{inter}}(t)$) currents contribute to the total current, $\mathbf{J}(t)=\mathbf{J}^{\rm{intra}}(t)+\mathbf{J}^{\rm{inter}}(t)$, in the system. The two-band density matrix formalism introduced above results in the following expressions for the intraband and interband currents:
\begin{equation}
\label{Eq:current}
\begin{array}{*{20}{l}}
{{{\bf{J}}_{{\mathop{\rm int}} ra}}(t) = 2e\sum\limits_{\alpha = c,v} {\int_{\overline {{\rm{BZ}}} } {{{\bf{V}}^\alpha }} } \left( {{\bf{k}}({\bf{q}},t)} \right){\rho ^{\alpha \alpha }}({\bf{q}},t)d{\bf{q}},}\\
{{{\bf{J}}_{{\mathop{\rm int}} er}}(t) = 2e\int_{\overline {{\rm{BZ}}} } {{{\bf{V}}^{vc}}} \left( {{\bf{k}}({\bf{q}},t)} \right){\rho ^{cv}}({\bf{q}},t)d{\bf{q}} + {\rm{c}}.{\rm{c}}.}
\end{array}
\end{equation}
${{\rho ^{\alpha \alpha }}({\bf{q}},t)}$ is the occupation of band $\alpha$ (with the index $\alpha$ running over the valence and conduction bands, i.e.\ $\alpha=v$ and~$c$, respectively) and ${{\rho ^{cv}}({\bf{q}},t)}$ is the interband coherence.
The factor of 2 in Eq. \ref{Eq:current} is due to the spin degeneracy.
${{\bf{V}}^\alpha }$ and ${{\bf{V}}^{\alpha \alpha '}}$ (with $\alpha$ and $\alpha'$ running over $c$ and $v$) are the matrix elements of the velocity operator, which in the two-band picture has the following form:
\[{\bf{V}} = \left( {\begin{array}{*{20}{c}}
{{{\bf{V}}^{cc}}}&{{{\bf{V}}^{cv}}}\\
{{{\bf{V}}^{cv*}}}&{{{\bf{V}}^{vv}}}
\end{array}} \right).\]
The intraband velocity, within the quantum kinetic theory, is defined as
\begin{equation}
\label{Eq:velocity_intra}
\begin{array}{l}
{{\bf{V}}^\alpha } = \frac{1}{\hbar }{\grad _{\bf{k}}}E_\alpha ^T[{\bf{k}}(t)] = \\
\,\,\,\,\,\,\,\,\,\,\frac{1}{\hbar }\left[ {{\grad _{\bf{k}}}{E_\alpha }[{\bf{k}}(t)] + e{\grad _{\bf{k}}}\left( {{\bf{F}}(t) \cdot {{\bf{A}}^{(\alpha \alpha )}}[{\bf{k}}(t)]} \right)} \right]
\end{array}
\end{equation}
and interband velocity matrix element as
\begin{equation}
\label{Eq:velocity_inter}
{{\bf{V}}^{\alpha \alpha '}}({\bf{k}}) = \frac{i}{\hbar }{\bf{Q}}_{{\bf{k}}({\bf{q}},t)}^{\alpha \alpha '}\left[ {E_\alpha ^T({\bf{k}}({\bf{q}},t)) - E_{\alpha '}^T({\bf{k}}({\bf{q}},t))} \right]
\end{equation}
${\bf{Q}}_{{\bf{k}}({\bf{q}},t)}^{\alpha \alpha '}$ in the above equation is obtained from Eq. \ref{Eq:Q_cv} and is related to the interband Berry connection.
It is important to note that the Berry connection and curvature effects enter the matrix elements of both the intra- and interband velocities.
Intraband velocity contains two terms: ${\bf{V}}_{\rm{gr}}^\alpha ({\bf{k}}) = {\grad _{{\bf{k}}{\kern 1pt} }}{E_\alpha }({\bf{k}})$ is the particle (i.e.\ electron or hole) group velocity with $E_\alpha({\bf k})$ the bands' energy dispersion, and
${\bf{V}}_{{\rm{anom}}}^\alpha ({\bf{k}}) = e{\grad _{\bf{k}}}\left( {{\bf{F}}(t) \cdot {{\bf{A}}^{(\alpha \alpha )}}[{\bf{k}}({\bf{q}},t)]} \right)$ is the anomalous velocity that captures the quantum geometry of the Bloch wavefunction and Berry phase \cite{Stephanov2012}. ${\bf k}$ is the quasi momentum defined in terms of the crystal wave vector ${\bf q}$ and the vector potential ${\bf A}(t)$ of the laser's electric field as ${\bf{k}}({\bf{q}},t) = {\bf{q}} - e/\hbar {\bf{A}}(t)$.
The current induced by the single-cycle optical pulse results in a charge transfer across the system, which can be calculated from the following expression
\begin{equation}
{\bf{Q}} = \int_{ - \infty }^\infty {{\bf{J}}(t){\rm{d}}t}
\end{equation}
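Numerically, the transferred charge is simply the time integral of the current density; a trivial sketch with a hypothetical Gaussian current waveform (not the simulated graphene current):

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 4001)   # time grid (fs); values are illustrative
J = np.exp(-t**2)                    # hypothetical current-density waveform
Q = np.trapz(J, t)                   # transferred charge, Q = ∫ J dt
print(Q)                             # ≈ sqrt(pi) for this Gaussian
```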
Fig. \ref{Fig_J_Q}(a) plots the time-dependent current in graphene for the circularly polarized pulse at different field amplitudes.
From the numerical calculations, we observe that the current changes drastically, and this occurs in the same regime in which the VP shows the switching.
The common behavior of the photocurrent and the VP can be explained by the electron trajectory in momentum space.
The dominant excitation, taking place at the $\rm{K}$ point, moves as ${\bf{q}} - e/\hbar\, {\bf{A}}(t)$, and when the electrons move to the $\rm{K'}$ valley, the group velocity, as well as the anomalous velocity, flips sign.
Such a sign change in the current density occurs roughly at $ F_0 \sim 0.9 {\rm{V/\AA}}$, where the maximum VP appears (cf. Sec. \ref{Sec:Field_amplitude}).
The residual population and current, in turn, translate into a transferred charge density, which is plotted in Fig. \ref{Fig_J_Q}(b) as a function of the peak laser field. Consistent with the sign change of the current in Fig. \ref{Fig_J_Q}(a), $Q$ also responds at the critical field and changes its slope at $ F_0 \sim 0.9 {\rm{V/\AA}}$. This observable substantiates the field dependence of the VP presented in Fig. \ref{Fig_Combined_VP_vs_F_ECC}(a).
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{J_Q.pdf} \caption{ (a) Photoinduced current in graphene during the incidence of the single-cycle circular field. The field amplitudes $F_0$ associated with the different colors are indicated. The current density oscillates at high field and changes sign at ${F_0} > 0.9 {\rm{V/\AA}}$, associated with the maximum valley polarization. (b) Transferred charge density through the graphene monolayer as a function of the field amplitude $F_0$, which also manifests the field dependence of the valley-contrasting excitation. }
\label{Fig_J_Q}
\end{figure}
\subsection{\label{Sec:relaxation} Relaxation Dynamics}
In the previous sections, we explored the variation of the VP as a function of field amplitude and laser waveform without taking relaxation into account (i.e., $T \to \infty $ in Eq. \ref{Eq:Rho_Dot}). In this and the following sections, we study how relaxation mechanisms alter the behavior and amplitude of the VP.
We showed above that, in the absence of relaxation, a circular pulse with $F_0 = 0.9 {\rm{V/\AA}}$ gives rise to the maximum VP for a single-cycle optical laser. In Fig. \ref{Fig_VP_vs_T} we plot the VP as a function of the dephasing time T, with field parameters corresponding to the maximum VP. As depicted, the VP falls suddenly for ultrafast dephasing times $T < 3$ fs. Notably, even for the fastest relaxation time considered, T=1 fs, we still expect a VP of $\sim$22$\% $.
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{VP_vs_T.pdf} \caption{ Valley polarization (VP) plotted as a function of the dephasing time. The VP is practically unaffected by relaxation down to $T \simeq 3$ fs, below which it rapidly decays to a magnitude of $\sim$0.22 at $T=1$ fs. The laser in this case is a single-cycle circular pulse with ${F_0} = 0.9\,{\rm{V/\AA}}$ and CEP=0. }
\label{Fig_VP_vs_T}
\end{figure}
We further examine the influence of the dephasing time $T$ on the field-amplitude ($F_0$) and ellipticity ($\varepsilon$) dependence of the VP. Fig. \ref{Fig_Combined_Comparison_T_1_100} plots the VP as a function of the pulse amplitude $F_0$ (a) and of the pulse ellipticity $\varepsilon$ (b). Two relaxation times are examined: the black line corresponds to an ultrafast decay time ($T=1$ fs), and the red line to relatively slow relaxation ($T=100$ fs). The pulse length is $\sim 5$ fs. The fast decay rate suppresses the maximum VP from 35\% to approximately 22\%, while the general behavior of the VP remains roughly the same as in the case of no or slow relaxation.
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{Combined_Comparison_T_1_100.pdf} \caption{ (a) VP as a function of the field amplitude for a circular pulse, for two relaxation times, $T=1$ fs (black line) and $T=100$ fs (red line). The VP increases monotonically with field and reaches a maximum of $\sim$35\% (red) and $\sim$22\% (black, fast dephasing and decoherence) at ${F_0} = 0.9\,{\rm{V/\AA}}$, before falling sharply at higher fields. (b) VP versus pulse ellipticity at field amplitude ${F_0} = 0.9\,{\rm{V/\AA}}$. Similarly, black and red correspond to $T=1$ and $100$ fs, respectively. }
\label{Fig_Combined_Comparison_T_1_100}
\end{figure}
In Fig. \ref{Fig_combined_ExtendedBZ_T_2fs_TimeEvolution}(a) the impact of relaxation processes on the excitation distribution of graphene is illustrated at the end of the single-cycle circular pulse for a decoherence time $T=2$ fs. The triangular borderlines separating the portions of the $\rm{K}$ and $\rm{K'}$ valleys used to calculate ${n_{\rm{K}}}$ and ${n_{\rm{K'}}}$ in Eq. \ref{Eq:NkNKprime} are indicated in the density plot of the CB population distribution. The VP, calculated via Eq. \ref{Eq:VP}, is the total population over the triangular surface of $\rm{K}$ minus that of the $\rm{K'}$ triangle, normalized to 1.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{combined_ExtendedBZ_T_2fs_TimeEvolution.pdf}
\caption{[Color online] (a) The influence of fast relaxation ($T=2$ fs) on the CB population distribution at the end of the pulse for a circular waveform (to be compared with Fig. \ref{Fig_ExtendedBZ_T_100_Hex} for $T=100$ fs). Compared to Fig. \ref{Fig_ExtendedBZ_T_100_Hex}, the maximum population is reduced from 1 to 0.6. The CB population also deforms along the electron trajectory as time progresses, giving rise to an asymmetric population distribution (part b). }
\label{Fig_combined_ExtendedBZ_T_2fs_TimeEvolution}
\end{figure*} \par
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,keepaspectratio ]{VP_CEP.pdf} \caption{ VP versus ${\phi _{\rm{CEP}}}$ for three different relaxation times. In all cases, the VP modulates periodically, switching on and off with a periodicity of $60^{\circ}$. }
\label{Fig_VP_CEP}
\end{figure}
The fast electron scattering affects the magnitude of the electron wave packet as well as its phase information, giving rise to an asymmetric CB distribution with a maximum amplitude of 0.6. In Fig. \ref{Fig_combined_ExtendedBZ_T_2fs_TimeEvolution}(b) we compare the time evolution of the population formation close to the Dirac point for $T=2$ fs (lower panel) with the case of a long relaxation time, $T=100$ fs (upper panel). The corresponding extended-BZ distribution for $T=100$ fs is depicted in Fig. \ref{Fig_ExtendedBZ_T_100_Hex}. The maximum population for $T=2$ fs is reduced to 0.6, in contrast to the $T=100$ fs case, where the population reaches 1. In addition, the fast relaxation partially smears the excitation distribution along the cyclic path of the electron.
\subsection{\label{Sec:CEP} Carrier Envelope Phase}
We have seen in Section \ref{Sec:Field_amplitude} that the VP can be optimally controlled by the amplitude of the laser field. In fact, a threshold of $\sim 0.5\,{\rm{V/\AA}}$ is observed in Fig. \ref{Fig_Combined_VP_vs_F_ECC}(a), below which there is no VP irrespective of the polarization state and laser waveform. Above this field threshold, the single-cycle pulse exhibits dissimilar behavior for different ellipticities. Likewise, the VP behaves differently as a function of ellipticity above and below the $\varepsilon \simeq 0.5 $ threshold (see Fig. \ref{Fig_Combined_VP_vs_F_ECC}b).
In Fig. \ref{Fig_VP_CEP} the carrier envelope phase (${\phi _{{\rm{CEP}}}}$ in Eq. \ref{Eq:VectorPotential}) dependence of the VP is plotted for three different relaxation times: $T$ = 1, 2, and 100 fs. The case of a circularly polarized pulse is considered, with an electric field amplitude ${F_0} = 0.9\,{\rm{V/\AA}}$ corresponding to the optimum VP observed in Sec. \ref{Sec:Field_amplitude}, where ${\phi _{{\rm{CEP}}}}$ was set to zero. Here we scan the full range of the CEP from 0 to $2\pi $.
Fig. \ref{Fig_VP_CEP} reveals that the VP is highly sensitive to the orientation of the pulse with respect to the graphene sheet, which is controlled by the CEP, ${\phi _{{\rm{CEP}}}}$ in Eq. \ref{Eq:VectorPotential}. The VP switches on and off depending on the CEP angle with a periodicity of ${60^ \circ }$. Such periodic behavior is due to the fact that graphene belongs to the symmorphic ${\rm{C}}_{6v}$ point group, with ${60^ \circ }$ rotational symmetry in real and reciprocal space.
Another important physical point drawn from Fig. \ref{Fig_VP_CEP} is the shift of the VP with respect to the relaxation time. Such a shift can be understood by applying the transformation ${\rho ^{ij}} \mapsto {{\tilde \rho }^{ij}}{e^{-\gamma t}}$ to the elements of the density matrix in Eq. \ref{Eq:Rho_Dot}. Subsequently, we find
\begin{equation}
\label{DensityMatrix-shift}
\begin{array}{l}
\dot{\tilde \rho}_{{\bf{k}}(t)}^{cv} = \frac{ie}{\hbar }{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv}\left[ \tilde \rho _{{\bf{k}}(t)}^{cc} - \tilde \rho _{{\bf{k}}(t)}^{vv} \right]\\[4pt]
\dot{\tilde \rho}_{{\bf{k}}(t)}^{cc} = 2{\rm{Re}}\left[ \frac{ie}{\hbar }{\bf{F}}(t) \cdot {\bf{Q}}_{{\bf{k}}(t)}^{cv*}\,\tilde \rho _{{\bf{k}}(t)}^{cv} \right]
\end{array}
\end{equation}
Thus, in Eq. \ref{Eq:NkNKprime} the relaxation rate can be transferred to an exponential prefactor. Since the carrier envelope phase acts as a rotation operator on the electron trajectory in reciprocal space, under the influence of the relaxation processes it effectively acts as a shift on the density matrix elements.
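This bookkeeping can be verified numerically: for a uniform damping rate $\gamma = 1/T$ added to a linear system, the damped solution is exactly the relaxation-free solution multiplied by $e^{-\gamma t}$. A minimal sketch for a single two-level $\mathbf{k}$-point (the scalar field and constant dipole element are simplifying assumptions, not the full graphene calculation):

```python
import numpy as np
from scipy.integrate import solve_ivp

Q_CV = 0.5    # constant interband dipole element (hypothetical, dimensionless)
GAMMA = 0.5   # relaxation rate 1/T (hypothetical units)

def field(t):
    # Hypothetical single-cycle pulse, standing in for F(t)
    return np.exp(-((t - 4.0) / 1.5) ** 2) * np.cos(2.0 * np.pi * (t - 4.0) / 4.0)

def rhs(t, y, gamma):
    """y = [Re rho_cv, Im rho_cv, rho_cc, rho_vv], uniform damping gamma."""
    rho_cv = y[0] + 1j * y[1]
    drive = 1j * field(t) * Q_CV                  # (ie/hbar) F(t) Q^cv with hbar = 1
    d_cv = drive * (y[2] - y[3]) - gamma * rho_cv
    d_pop = 2.0 * np.real(drive * rho_cv)         # 2 Re[(ie/hbar) F Q^cv* rho^cv]
    return [d_cv.real, d_cv.imag, d_pop - gamma * y[2], -d_pop - gamma * y[3]]

y0 = [0.0, 0.0, 0.0, 1.0]                         # empty CB, full VB
ts = np.linspace(0.0, 8.0, 400)
damped = solve_ivp(rhs, (0.0, 8.0), y0, args=(GAMMA,), t_eval=ts, rtol=1e-9, atol=1e-12)
free = solve_ivp(rhs, (0.0, 8.0), y0, args=(0.0,), t_eval=ts, rtol=1e-9, atol=1e-12)

# Damped solution = relaxation-free solution times exp(-gamma t)
assert np.allclose(damped.y, free.y * np.exp(-GAMMA * ts), atol=1e-6)
```

The identity holds because the uniform damping commutes with the field-driven part of the equations, which is the content of the transformation above.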
\section{Conclusions}
The principal challenge in the development of valleytronics is to lift the valley degeneracy of charge carriers in a controlled way. The ability to exploit valley polarization (VP) was rather limited until the recent emergence of 2D materials. 2D materials with honeycomb structures, epitomized by graphene, offer a combination of properties not obtainable from conventional thin-film materials.
Since the advent of graphene, various possibilities have been envisioned and explored in it for photonic, plasmonic and optoelectronic devices. However, the generation and detection of valley polarization, and the feasibility of valleytronics in pristine graphene, have been elusive due to the presence of inversion symmetry. In this article, we have proposed a potential prototype of a valleytronic device based on monolayer graphene. We have characterized the optimality conditions for valley polarization and investigated the parameters through which we can control the carriers in graphene and store data via the valley degree of freedom.
We have shown that the valley-dependent Berry phase results in a valley-contrasting population and carrier transport in pristine graphene. Exciting graphene with a sub-10-femtosecond light pulse creates nonequilibrium charge states in a highly nonlinear fashion. Nonlinear properties are the basis of functional optical devices, as they enable functions such as ultrafast modulation and control, and optical gain.
In fact, driving a system with strong coherent fields breaks the energy-momentum conservation law, and nonadiabatic geometric effects dominate the dynamical processes. Hence, the perturbative optical phenomena induced by the absorption and emission of photons are replaced by nonadiabatic geometric effects. Correspondingly, a nonperturbative counterpart of the usual selection rules emerges, in which symmetry, chirality and topology govern the optically allowed and forbidden transitions.
According to our results, a circularly polarized pulse with a single-cycle carrier generates a valley population with an efficiency as high as 35$\%$, which is substantial and disproves the common conception that the valley degeneracy can hardly be lifted in gapless materials.
Moreover, the VP is sensitive to the orientation of the laser waveform with respect to the graphene sheet. The VP modulates periodically from its maximum value to almost zero depending on the CEP angle, with a periodicity of ${60^ \circ }$, indicating a clear-cut and robust modulation. This periodic switching of the VP with respect to the CEP is understandable given that graphene belongs to the symmorphic ${\rm{C}}_{6v}$ point group, with ${60^ \circ }$ rotational symmetry in real and reciprocal space.
We further investigated the effect of relaxation on the resultant VP. Notably, even for an ultrafast decoherence and dephasing time of 1 fs, we expect to retain more than twenty percent of the valley-contrasting excitation.
The predicted lightwave induction of valley polarization with few-cycle optical lasers in graphene could be useful for valleytronics applications and for the development of electronic devices such as valley-polarized optoelectronic emitters, valley optical interconnects, and ultrafast data storage with utmost reliability and robustness.\\
\begin{acknowledgments}
We would like to thank Alexandra Landsman, Mark Stockman, Vadym Apalkov and Lisa Ortmann for fruitful discussions.\\
\end{acknowledgments}
\def\bibsection{\section*{REFERENCES}}
\bibliographystyle{apsrev4-1}
\section{Introduction}
Intermediate polars (hereinafter IPs) are a class of cataclysmic variables (CVs) characterized by a spinning, moderately magnetized (with magnetic field intensity in the range $0.1$ -- $10$ MG) white dwarf (WD) accreting material from a donor companion. In general, CVs characterized by a magnetic field strength $\lesssim 0.1$ MG are known as {\it dwarf novae} (see, e.g. \citealt{vanteeseling1996,nucita2009,hoard2010,nucita20092,nucita2011,balman2011,nucita2014,mukai2017}) {while systems with magnetic} field exceeding $10$ MG are instead called {\it polars} (see, e.g., \citealt{ramsay2004, szkody2004}). The reader is referred to \cite{Kuulkers2006} for a review.
In these binary systems, the material initially circulates in an accretion disk that is then truncated by the magnetic field of the WD, and a {\it curtain} forms, with the stream of matter falling onto the WD poles. In the accretion column, the matter undergoes a shock and, consequently, releases $X$-ray photons whose spectrum strongly depends on the details of the accretion process.
In IPs, the high energy signal is modulated on the WD spin period $P_{spin}$ and on the orbital period $P_{orb}$ (\citealt{parker2005}). However, as observed by \cite{warnerbook} (but see also \citealt{king1992}), since the $X$-ray production site presents a varying aspect to the observer, and due to the presence of reprocessing sites, multiple orbital side-bands are expected and often observed. For example, periodic features are expected at the frequencies $P_{spin}^{-1}\pm P_{orb}^{-1}$ and $P_{spin}^{-1}\pm 2P_{orb}^{-1}$, due to amplitude modulation at $P_{orb}$ and $2P_{orb}$.
As noted by \cite{nucita2019, nucita2020}, and also in the analysis by \cite{mukai2020}, the existence of multiple peaks in the Fourier transform of the high energy light curve offers a powerful tool to routinely assign the IP identification to many CV systems. {Therefore, dedicated observations in the $X$-ray band} with large sensitivity in the $0.1$-$15$ keV range (such as that offered by the instruments onboard the $XMM$-Newton satellite) are crucial for a correct classification of such objects.
Although IPs are expected to be common in the Galaxy (see, e.g., \citealt{worrall1982}) and to contribute substantially to the diffuse $X$-ray ridge background \citep{revnivtsev2009,warwick2014}, the IP nature has been firmly claimed only for a restricted number of sources (see e.g. the updated intermediate polar catalogue -{\it IPhome}- available at \url{https://asd.gsfc.nasa.gov/Koji.Mukai/iphome/iphome.html}). Indeed, to that aim, dedicated timing and spectral analyses are necessary for each candidate.
Here, we continue our project aimed at the classification of IP candidates based on detailed $X$-ray data analysis. A class study of the identified IPs will be presented elsewhere. In particular, in this paper we concentrate on the CV source VZ Sex (also known as 1RXSJ094432.1+035738), which is recognized to be an IP candidate with an orbital period\footnote{In this work, based on the results of the timing analysis reported in Section 3, we assume for the VZ Sex orbital period the value of $\simeq 3.581$ hours \citep{mennickent2002} (but see also the updated IPhome catalogue). We note that this value is slightly larger than the period estimates of $3.570$--$3.576$ hours \citep{gansicke2009} and $3.562$--$3.576$ hours \citep{thorstensen2010}.} of $\simeq 3.581$ hours (see also \citealt{mennickent2002}).
At present, no clear identification of the WD spin is available in the literature, although a tentative period of $\simeq 40.83$ minutes is reported in the IPhome catalogue. Furthermore, \citet{mennickent2002} tentatively classified the source as a U Geminorum-like dwarf nova, although the IP nature could not be {excluded} by the same authors. We show that the high energy light curve of this source is characterized by a modulation at $\simeq 20.3$ minutes, which we interpret as the spin period of the WD. This is supported by the fact that the typical IP side-bands, formed when the spin period beats with the orbital period $P_{orb}\simeq 3.581$ hours, are found in the Lomb-Scargle periodogram. The source is also characterized by a $0.1-10$ keV spectrum which is well described by a multi-temperature plasma model ({based on} the cooling of the gas falling onto the WD surface) absorbed by the galactic neutral hydrogen column density. The estimated unabsorbed flux is $\simeq 2.98\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, corresponding to an intrinsic luminosity of $\simeq 7\times 10^{31}$ erg s$^{-1}$ for a {distance\footnote{
Note here that the distance estimated by \citet{mennickent2002} is rather uncertain, and one would expect Gaia data to provide a much better distance determination. Unfortunately, the Gaia DR2 catalogue \citep{gaiadr2} does not report any parallax value for VZ Sex, so we will assume $433\pm 100$ pc as the distance to the target.} to the source of $433\pm 100$ pc \citep{mennickent2002}.} All this allows us to conclude that VZ Sex is certainly an IP cataclysmic variable star.
The outline of the paper is as follows: in Sections 2 and 3 we summarize the adopted X-ray data reduction procedure and the timing analysis performed. In Section 4 we {present} the X-ray spectral analysis {along with} our results and, {in Section 5, we finally discuss our findings.}
\section{X-ray data reduction}
The cataclysmic variable VZ Sex (also known as 1RXS J094432.1+035738) is located at J2000 coordinates ${\rm RA=09^{\rm h} 44^{\rm m} 31.72^{\rm s}}$ and ${\rm DEC = +03^\circ 58^{\prime} 05.4^{\prime\prime}}$.
{The target was first detected by the {\it Röntgen Satellite} and reported in the ROSAT All-Sky Bright Source Catalogue \citep{rosatcatalogue} as a source with a $0.1-2$ keV count rate of $0.087\pm0.022$ count s$^{-1}$ (see Section 5 for further details).}
{In order to study the possible IP nature of the source, VZ Sex was then pointed by the $XMM$-Newton satellite in 2004 (Observation ID 0201290301, P.I. De Martino\footnote{{For completeness, we note here that VZ Sex and two additional IP candidates (UU Col and RXJ1039-0507) were the targets of an {\it XMM}-Newton proposal associated with several observations (Observation IDs 0201290101, 0201290201, 0201290301, and 0201290401). The observations were conducted in order to exploit the improved $XMM$-Newton capabilities with respect to those of previous satellites. In the case of UU Col, \citet{demartino2006} clearly demonstrated that the source is an IP, since the high energy ($0.2-15$ keV) light curve is characterized by a $\simeq 863$ second periodicity (identified as the WD spin period). Weak variability at the $935$ second beat period and at the $3.5$ hour orbital period was also observed, with the orbital modulation being more evident in the soft energy band (below $0.5$ keV). As far as the IP RXJ1039-0507 is concerned, \citet{woudt2003} reported an orbital period of $1.574$ hours and found periodic features at $1932.5$ seconds and $721.9$ seconds, which were interpreted as the sideband and first harmonic of the WD spin period of $1444$ seconds, respectively. Here, we concentrate on VZ Sex, whose IP nature is still unclear.}})} with the observation starting (ending) on $2004/05/18$ at $14$:$27$:$42$ UT ($2004/05/19$ at $21$:$24$:$20$ UT), i.e. for a nominal observing time of $\simeq 37.6$ ks. The target was observed by all the instruments on board the satellite and, in particular, by the MOS 1, MOS 2 and pn cameras operated in full frame mode with the thin filter.
We processed the observation data files (ODFs) by using the $XMM$-Newton Science Analysis System {(SAS version 17.0.0, \citealt{gabriel2004})} with the most up-to-date calibration constituent data files (CCFs). After running the SAS tasks {\it emchain} and {\it epchain}, we were left with the calibrated event files. We further corrected (via the {\it barycen} SAS tool) the arrival times of the $X$-ray photons to account for the Solar System barycenter, and accounted for out-of-time events by following the recipes described in \cite{xrp}. In order to reduce the soft proton background possibly affecting the data, we built light curves for all the cameras considering only photons with energy above $10$ keV. By requiring that the net count rate be below $0.4$ count s$^{-1}$, we flagged the good time intervals (GTIs) and cut the event list files accordingly. In the case of VZ Sex, the observation was affected by strong flares for $\simeq 60\%$ of the whole duration, concentrated in the second half of the observation. Hence, in the following analysis, we considered only the events falling in the first 20 ks. The flare-corrected event list files were then used to generate the science products, i.e. light curves in the soft ($0.1$--$2$ keV), hard ($2$--$10$ keV) and full ($0.1$--$10$ keV) bands and images in the $0.1$--$10$ keV band for each camera for inspection purposes. In all cases, the source plus background signal was extracted from a circular region centred on the nominal position of the target with a radius of $\simeq 40^{\prime\prime}$, ensuring that $\simeq 90\%$ of the total energy of the source is correctly collected. The background extraction regions were positioned close to VZ Sex, (when possible) on the same chip, taking care not to encircle any other visible source.
For each camera and band (soft, hard, and full) we extracted the source time series {corresponding to the overlapping time intervals} and flagged the starting and stopping times of the MOS 1, MOS 2 and pn events. Then, we produced in each band synchronized\footnote{Synchronizing the source and background light curves means that each time series starts and ends at exactly the same instants of time and has the same bin size. As a consequence, the background subtraction can be done on a bin-by-bin basis.} light curves with a bin size of $10$ seconds {(for the subsequent analysis) and $120$ seconds (for graphical purposes only)}. We followed the same procedure for the data contained within the background regions, so that we have, for each camera and band, synchronized source (plus noise) and background light curves with the same bin size. The source (background-corrected) light curves were then obtained by using the {\it epiclccorr} task, which accounts for the different area and exposure corrections. The final MOS 1, MOS 2 and pn source (background-subtracted) light curves were then averaged bin-by-bin and scaled in order to start from $0$. In Figure \ref{fig_l} we give the soft, hard, full and hardness ratio (see next Section for details) $X$-ray light curves of VZ Sex with a bin size of 120 seconds.
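The bin-by-bin subtraction only makes sense because the source and background series share the same start time, end time and bin size. A schematic sketch of this step (hypothetical Poisson count rates and area ratio; the actual correction, including exposure effects, is performed by the SAS task {\it epiclccorr}):

```python
import numpy as np

BIN = 10.0          # bin size (s)
AREA_RATIO = 4.0    # source/background extraction-area ratio (hypothetical)

rng = np.random.default_rng(0)
nbins = 200
# Hypothetical synchronized series on identical grids; counts per bin -> count/s
src_plus_bkg = rng.poisson(5.0, size=nbins) / BIN
bkg = rng.poisson(1.0, size=nbins) / BIN

# Scale the background to the source extraction area, then subtract bin by bin
src = src_plus_bkg - bkg / AREA_RATIO
```

Because the grids are identical, no interpolation is needed and the Poisson errors of each bin propagate directly.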
\begin{figure*}
\centering
{\includegraphics[width=0.9\textwidth]{Epic_LightCurves_EPIC_120bin.eps}}
\caption{From top to bottom: the VZ Sex light curves in the $0.1-2$ keV, $2-10$ keV, $0.1-10$ keV (background subtracted and synchronized) and the hardness ratio time series (see text for details). {The bin size was fixed to 120 seconds.} }
\label{fig_l}
\end{figure*}
Finally, we extracted the spectra for the source and background, and rebinned the data by requiring at least 25 counts per energy bin. Furthermore, we determined the associated response matrices and ancillary files, which were used within {the XSPEC software (version 12.9.0, \citealt{arnaud})} for the spectral analysis of the source and the estimate of the $0.1-10$ keV band flux.
\section{X-ray timing analysis}
{The barycentric-corrected, synchronized and background-corrected light curves are shown in Figure \ref{fig_l} with a bin size of 120 seconds. The combined light curves (averaged over the instruments) have average count rates of $0.43\pm0.16$ count s$^{-1}$, $0.12\pm0.07$ count s$^{-1}$, and $0.55\pm0.18$ count s$^{-1}$ in the soft, hard, and full bands, respectively.} Here, the start of the observation corresponds to $MJD=53143.622$ and the whole light curve lasts for $17.76$ ks. It is then clear that the source is soft, i.e. its emission in the soft band is larger than that in the hard part of the spectrum. This is also seen in the hardness ratio light curve $HR$ (bottom panel in Figure \ref{fig_l}), defined as $HR=(H-S)/(H+S)$, where $S$ and $H$ are the count rates in the soft and hard bands, respectively. Inspection of this figure shows that $HR$ remains negative (and almost constant) for the whole duration of the observation.
For our analysis, we extracted the soft, hard and full band light curves with a bin size of $10$ seconds and searched for periodic features by using the Lomb-Scargle technique (\citealt{scargle1982}). In particular, we restricted our analysis to the range between $2 \Delta t$ (where $\Delta t$ is the adopted bin size) and one half of the observational window, so that we can test periodicities up to $\simeq 140$ minutes. We stress that the VZ Sex orbital period, $P_{orb}\simeq 3.581$ hours, is well above our upper limit on the testable period, so that any periodic feature corresponding to $P_{orb}$ cannot be picked up by our analysis. Note that, with the quoted orbital period, VZ Sex is an IP candidate well above the period gap of $2$--$3$ hours, so that a ratio between the WD spin and the orbital period in the range $0.01$--$0.1$ is expected.
\begin{figure*}
\centering
{\includegraphics[width=0.9\textwidth]{Epic_LOMBSCARGLE.eps}}
\caption{The Lomb-Scargle periodogram associated to the VZ Sex light curves in the $0.1-2$ keV (red line), $2-10$ keV (green line), and $0.1-10$ keV (black line) energy band. A $10$ second bin size has been used. The inset shows a zoom of the Lomb-Scargle periodograms in the period range $2$--$25$ minutes. The meaning of the vertical lines is detailed in the text.}
\label{fig_lombscargle}
\end{figure*}
The result of the analysis is shown in Figure \ref{fig_lombscargle}, where we give the Lomb-Scargle periodograms associated with the VZ Sex light curves in the $0.1-2$ keV, $2-10$ keV, and $0.1-10$ keV bands with red, green and black lines, respectively. A clear periodic feature at $20.3\pm 1.0$ minutes (labelled by the red dashed vertical line) is detected in the soft band; the signal almost disappears in the hard part of the spectrum (as usually happens in sources belonging to the IP sub-class) and obviously re-appears in the periodogram of the full light curve, where the count rate is still dominated by the $0.1-2$ keV photons.
We tentatively identify the detected periodicity with the WD spin and associate with it, as an error, the full width at half maximum of the corresponding power peak. In this respect, note that we do not recover any strong feature at $\simeq 40$ minutes as reported on the {\it IPhome} site.
This conclusion is supported by the fact that, as also observed by \citet{parker2005}, the $X$-ray signal coming from IP sources is often characterized by modulation on the WD spin ($P_{spin}$) and on the binary system orbital period ($P_{orb}$). In fact, as also explained by \citet{warnerbook}, the spinning of the WD polar caps (one of the production sites of the $X$-ray photons) and the existence of other reprocessing loci induce the appearance of several side-bands, whose periodogram peak intensities depend on the actual geometry of the system and on the line of sight to the observer (see, e.g. \citealt{king1992}). For example, one expects to find a feature at the synodic period $P_{syn}^{-1}=P_{spin}^{-1} - P_{orb}^{-1}$ and also at the beat period $P_{beat}^{-1}=P_{spin}^{-1} + P_{orb}^{-1}$. Therefore, as stressed by \citet{nucita2019,nucita2020} (but see also \citealt{Joshi2016} and \citealt{mukai2020} for similar studies applied to strongly and moderately magnetized CVs), finding signatures of the orbital and spin periods (along with the associated side-bands) is a robust method for classifying a given source as an IP member.
This is exactly the case for VZ Sex, as indicated in the inset of Figure \ref{fig_lombscargle}, which shows a zoom of the Lomb-Scargle periodograms in the period range $2$--$25$ minutes. The blind search for modulations on the WD spin and the orbital period allowed us to detect the synodic feature at $P_{syn}\simeq 22.5$ minutes (orange dashed line) and the beat periodicity at $P_{beat}\simeq 18.6$ minutes (blue dashed line), exactly at the locations expected under the assumption that the periodogram peak at $\simeq 20.3$ minutes corresponds to the WD spin. We also detected other multiple beat frequencies, in particular the $P_{spin}^{-1}+3 P_{orb}^{-1}$ beat (at $\simeq 15.8$ minutes, green dashed line), the $2P_{spin}^{-1}+3 P_{orb}^{-1}$ beat (at $\simeq 8.9$ minutes, red dotted line), the $3P_{spin}^{-1}+2 P_{orb}^{-1}$ beat (at $\simeq 6.4$ minutes, orange dotted line) and the $3P_{spin}^{-1}-2 P_{orb}^{-1}$ beat (at $\simeq 7.2$ minutes, blue dotted line).
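The side-band identifications follow from simple frequency arithmetic; with $P_{spin}=20.3$ min and $P_{orb}=3.581$ h, the expected periods are:

```python
P_SPIN = 20.3            # WD spin period (min)
P_ORB = 3.581 * 60.0     # orbital period (min)

f_spin, f_orb = 1.0 / P_SPIN, 1.0 / P_ORB

p_syn = 1.0 / (f_spin - f_orb)                 # synodic, ~22.4 min
p_beat = 1.0 / (f_spin + f_orb)                # beat, ~18.5 min
p_s3o = 1.0 / (f_spin + 3.0 * f_orb)           # ~15.8 min
p_2s3o = 1.0 / (2.0 * f_spin + 3.0 * f_orb)    # ~8.9 min
p_3s2o = 1.0 / (3.0 * f_spin + 2.0 * f_orb)    # ~6.4 min
p_3sm2o = 1.0 / (3.0 * f_spin - 2.0 * f_orb)   # ~7.2 min
```

All six values land at the periodogram features marked in the inset, and $P_{spin}/P_{orb} \simeq 0.094$ sits in the expected $0.01$--$0.1$ range.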
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Epic_Folded.eps}
\caption{The VZ Sex light curve is folded at the WD spin
of $\simeq 20.3$ minutes with 20 bins per cycle. Here, the zero phase corresponds to the start of the observation at $MJD=53143.6192$.}
\label{fig_folded}
\end{figure*}
With the estimated WD spin, the ratio $P_{spin}/P_{orb}$ turns out to be $\simeq 0.09$, i.e. consistent with what is expected for an IP system above the period gap.
We then folded the $0.1-10$ keV light curve of VZ Sex at the detected WD spin period of $\simeq 20.3$ minutes with 20 bins per cycle (see Figure \ref{fig_folded}). {The resulting light curve shows a clear sinusoidal pattern (with the zero phase corresponding to the start of the observation at $MJD=53143.6192$), possibly associated with a change in the projected emitting area. We also note that the modulation is less evident at energies larger than 2 keV as a consequence of the lower count rate. Moreover, the folded light curve shows a possible dip at phase $\simeq 0.7$ which lasts for $\simeq 0.1$ in phase. We then tested the possibility that the modulation is due to intervening absorption. However, as discussed in the next section, we find that the data at the phase of the peak and at the dip do not show any clear spectral change.}
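Folding at the candidate spin period amounts to binning the light curve in phase; a schematic sketch with a synthetic sinusoidal series (20 phase bins per cycle and zero phase at the start of the series, as above):

```python
import numpy as np

P_SPIN = 20.3 * 60.0   # candidate spin period (s)
NBINS = 20

rng = np.random.default_rng(2)
t = np.arange(0.0, 17760.0, 10.0)
rate = 0.55 + 0.1 * np.sin(2.0 * np.pi * t / P_SPIN) + rng.normal(0.0, 0.05, t.size)

phase = (t % P_SPIN) / P_SPIN                 # zero phase at t = 0
edges = np.linspace(0.0, 1.0, NBINS + 1)
idx = np.digitize(phase, edges) - 1
profile = np.array([rate[idx == i].mean() for i in range(NBINS)])
```

Averaging many cycles per phase bin beats the per-bin noise down while preserving the pulse shape.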
Therefore, based on the results presented in this section, VZ Sex can be safely classified as a member of the IP subclass (see, e.g. \citealt{nucita2019, nucita2020}, for similar recent analyses applied to DW Cnc, HP Cet and Swift J0820.6-2805).
\section{X-ray spectral analysis}
VZ Sex is a moderately bright source characterized by a $0.1$--$10$ keV band rate of $\simeq 0.55$ counts s$^{-1}$ which, for a total observation duration of $\simeq 20$ ks, corresponds to a sufficiently large number of collected photons ($\simeq 1.1\times 10^4$) distributed in a relatively high quality spectrum of the source. As previously discussed, the MOS 1, MOS 2, and pn spectra for the source were imported within the XSPEC package along with the spectra for the background and the response matrices of the instruments (see, Figure \ref{spectrum} where the black, red and green points in the upper panel correspond to the pn, MOS 1, and MOS 2 data, respectively).
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth, angle=-90]{best_fit_cemekl.eps}
\caption{The $0.1-10$ keV spectra for the MOS 1 (red), MOS 2 (green) and pn (black) data of VZ Sex together with the best-fit model (see text for details) are presented.}
\label{spectrum}
\end{figure*}
By using XSPEC, we started the analysis by fitting the data with a single mekal model \citep{mekal} absorbed by the neutral hydrogen {({\it phabs} component in XSPEC)} along the line of sight which, in this particular case, amounts to $\simeq 3.84\times 10^{20}$ cm$^{-2}$ \citep{nhtool}. By keeping the abundance fixed at the solar value, the fit converged towards a solution characterized by $kT\simeq 5$ keV and $n_H\simeq 1.75\times 10^{20}$ cm$^{-2}$, lower than the value provided by the on-line {\it $n_H$ tool}\footnote{The tool is available at
\url{https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl}}, as expected from the fact that VZ Sex is located at a distance of $\simeq 433$ pc. However, the fit is formally unacceptable (with a reduced $\chi^2=1.4$ for 692 d.o.f.) and large residuals appear close to the iron line complex at $\simeq 6.5$ keV. Allowing the metal abundance $A$ to vary resulted in a model with $A\simeq 0.3$ and the other parameters practically unchanged. Although the residuals disappear, the fit is still not acceptable, being characterized by a reduced $\chi^2=1.2$ for 691 d.o.f. {This suggests that a multi-temperature plasma is at work. Indeed, after adding a second mekal component, the fit converged towards the solution $n_H = (1.713\pm 0.002)\times 10^{20}$ cm$^{-2}$, $kT_1= 5.9^{+0.5}_{-0.3}$ keV, $kT_2= 0.70^{+0.02}_{-0.05}$ keV, and $A=0.36\pm 0.09$ with a reduced $\chi^2=1.06$ for 689 d.o.f. All the errors are quoted at the 90\% confidence level.}
{In IP sources, the accretion post-shock regions are expected to be characterized by a temperature gradient due to the cooling of the infalling gas \citep{demartino2005}. We therefore used the XSPEC {\it cemekl} model, in which the plasma is characterized by an emission measure gradient following a power law, $dEM/dT=(T/T_{max})^{\alpha-1}/T_{max}$. The free parameters of the model are the neutral hydrogen column density $n_H$, the maximum temperature $T_{max}$ of the plasma, the power law index $\alpha$, the metal abundance $A$ (in solar units), and the model normalization $N$. For any given set of parameters, the spectrum still depends on the mekal model and is obtained by interpolating on a pre-calculated mekal table (i.e. setting the {\it switch} parameter to 1). We verified that running the code with the {\it switch} parameter set to 2 (i.e. by using the intrinsic {\it apec} model) does not change our results.}
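The cemekl construction can be made explicit: the model spectrum is the emission-measure-weighted sum of single-temperature spectra, with weights $dEM/dT=(T/T_{max})^{\alpha-1}/T_{max}$. A schematic sketch using a toy exponential continuum in place of the tabulated mekal spectra:

```python
import numpy as np

T_MAX = 13.7   # best-fit maximum temperature (keV)
ALPHA = 0.9    # best-fit power-law index

E = np.linspace(0.1, 10.0, 500)        # photon energy grid (keV)
Ts = np.linspace(0.1, T_MAX, 200)      # temperature grid (keV)
dT = Ts[1] - Ts[0]

# Emission-measure weights: dEM/dT = (T/Tmax)^(alpha - 1) / Tmax
w = (Ts / T_MAX) ** (ALPHA - 1.0) / T_MAX

# Toy single-temperature continuum exp(-E/kT) standing in for mekal
spec = sum(wi * np.exp(-E / T) for wi, T in zip(w, Ts)) * dT
```

With $\alpha < 1$, the weights diverge mildly towards low $T$, so the cooler plasma contributes appreciably to the soft end of the summed spectrum.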
With this model, the fit converged towards the solution $n_{H}=(1.990\pm 0.003)\times 10^{20}$ cm$^{-2}$, $kT_{max}= 13.7^{+2.7}_{-1.9}$ keV, $\alpha=0.9 \pm 0.2$, $A=0.3\pm 0.1$, {which is formally equivalent to the two-component mekal model described above, with a reduced $\chi^2=1.02$ for 690 d.o.f.} The addition of a partial covering factor did not improve the quality of the fit with respect to the cemekl model, so we do not further consider the presence of local absorption.
Considering the cemekl model as a good representation of the accretion process in the case of VZ Sex, we found that the $0.1$--$10$ keV band absorbed flux is {$(2.67_{-0.5}^{+0.4})\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$}. The unabsorbed flux\footnote{{We note here that the {\it XMM}-Newton derived flux is consistent with that ($\simeq 3\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$) derived by converting the count rate of $\simeq 0.0876$ counts s$^{-1}$ of the ROSAT satellite observation (reported in the ROSAT All-Sky Bright Source Catalogue, \citealt{rosatcatalogue}) into a flux in the $0.1$--$10$ keV band.}} is $(2.98_{-0.5}^{+0.4})\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ so that, for the {VZ Sex distance of $433\pm 100$ pc}, the intrinsic source luminosity {turns out to be $(7 \pm 4)\times 10^{31}$ erg s$^{-1}$}. This result is in agreement with the luminosity distribution of the secure IP sample observed in the Swift-BAT 70-month survey \citep{pretorius}, although it lies at the lower end of the sub-sample with luminosity $\simeq 10^{33}$ erg s$^{-1}$ and at the upper end of the under-luminous IPs (see \citealt{mukai2020} and \citealt{nucita2020} for a discussion of other interesting faint IPs). In this respect, VZ Sex seems to lie on the bridge between the known bright IP distribution and the still poorly constrained faint IP sources.
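As an illustrative cross-check (ours, not part of the original analysis pipeline), the quoted unabsorbed flux and distance reproduce the quoted luminosity via $L = 4\pi d^2 F$:

```python
# Illustrative check (not part of the original analysis): convert the quoted
# unabsorbed 0.1-10 keV flux into an intrinsic luminosity via L = 4*pi*d^2*F.
import math

PC_IN_CM = 3.0857e18                 # one parsec in cm
flux = 2.98e-12                      # unabsorbed flux, erg cm^-2 s^-1
d_pc, d_err = 433.0, 100.0           # distance and uncertainty, pc

def luminosity(f, dpc):
    """Isotropic luminosity in erg/s for a flux in erg/cm^2/s."""
    d_cm = dpc * PC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * f

L_best = luminosity(flux, d_pc)
L_lo = luminosity(flux, d_pc - d_err)
L_hi = luminosity(flux, d_pc + d_err)
print(f"L = {L_best:.1e} erg/s ({L_lo:.1e} .. {L_hi:.1e})")
# the central value is ~7e31 erg/s, matching the quoted (7 +/- 4)e31
```

The spread driven by the $\pm 100$ pc distance uncertainty alone already covers most of the quoted error bar, i.e. the luminosity error is distance-dominated.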
{We further investigated the white dwarf spin pulse profile in order to find any possible dependence of the spectral properties on the WD spin phase. Hence, we first defined phase intervals corresponding to the region enclosing the minimum of the light curve (at phases $0\leq \phi \le 0.3$ and $0.9\leq \phi \le 1$), around the maximum (at phases $0.3 < \phi \leq 0.6$ and $0.8\leq \phi < 0.9$), and surrounding the possible dip (see Figure \ref{fig_folded}) at phases $0.6 < \phi < 0.8$. By using the {\it phasecalc} task of the SAS suite, we calculated the phases associated with the collected $X$-ray events, assuming that the zero phase corresponds to the starting time of the observation at $MJD= 53143.6192$. Hence, for each of the interesting phase intervals, we extracted the spectra (for clarity, Figure \ref{spectrum_folded} gives only the data corresponding to the pn camera). We fit the data by using the cemekl model and found that all the interesting fit parameters converged to values consistent with those obtained above. We then fixed all the parameters to the best values obtained for the full spectrum, apart from the hydrogen column density, which was left free to vary among the data sets. In this way, we can verify whether the modulation observed in the folded light curve is due to intrinsic absorption. The black and red data in Figure \ref{spectrum_folded} correspond to the data at maximum and minimum of the folded light curve, respectively. We note that the data at maximum and minimum are affected by column densities of $n_{H}=(1.633\pm 0.004)\times 10^{20}$ cm$^{-2}$ and $n_{H}=(2.295\pm 0.003)\times 10^{20}$ cm$^{-2}$, respectively, i.e. the photons at maximum are less absorbed than at minimum (for energies below $\simeq 1$ keV), although the effect is not dramatic.
Conversely, the data associated with the possible dip do not show any spectral change with respect to the data at maximum and, as a consequence, we do not show the corresponding data in Figure \ref{spectrum_folded}. We thus conclude that the modulation observed in the light curve is principally due to a change in the projected emitting surface.
}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth, angle=-90]{phase_folded_spectrum_maximum_minimum.eps}
\caption{The pn $0.1-10$ keV spectra obtained by extracting the data around the maximum (black data) and minimum (red data) of the folded light curve.}
\label{spectrum_folded}
\end{figure*}
\section{Results and discussion}
Intermediate polars are cataclysmic variables in which a moderately magnetized (with magnetic field up to $\simeq 10$ MG) white dwarf accretes material from a companion star. The infalling material can be heated up to temperatures sufficient to produce $X$-ray photons up to energies of 20-50 keV.
IPs are expected to be rather common in the Galaxy (\citealt{pretorius}) and it was shown that most of the confirmed IPs have a typical luminosity of $\simeq 10^{33}$ erg s$^{-1}$, but a sub-class of faint sources (such as the IPs V597 Pup, V475 Sgr, HP Cet, and Swift J0820.6-2805) also exists.
In any case, since IPs can be intrinsically faint, they are hard to detect and, for each source, a dedicated $X$-ray observation is required in order to correctly describe the spectral features, which strongly depend on the accretion mechanism.
Furthermore, as noted by \citet{king1992}, \citet{warnerbook} and \citet{parker2005}, the $X$-ray signal from an IP source is characterized by modulations at the WD spin and at the binary orbital period, which show up as extra peaks in the Fourier power spectrum. As stressed, e.g., by \citet{Joshi2016} (but see also \citealt{mukai2020}), this offers a unique tool to confirm the nature of many IP candidates, provided that long-duration $X$-ray observations by high-sensitivity instruments (such as those on-board the $XMM$-Newton satellite) are available.
In this paper, we concentrated on the CV source VZ Sex (1RXS J094432.1+035738), an IP candidate with an orbital period (above the gap) of $\simeq 3.581$ hours. Here, by studying the soft ($0.1$--$2$ keV), hard ($2$--$10$ keV), and full ($0.1$--$10$ keV) band light curves, we clearly identify in the associated periodograms a feature corresponding to a periodicity of $\simeq 20.3$ minutes that we associate with the WD spin. A confirmation of this association derives from the fact that a few beats with the orbital period are also found in the Lomb-Scargle periodogram (see Fig. \ref{fig_lombscargle} and the discussion in Section 3). This association is also clear in the $0.1$--$1$ keV light curve folded at the WD spin period of 20.3 minutes (see Fig. \ref{fig_folded}). Furthermore, with the estimated WD spin, the ratio $P_{spin}/P_{orb}$ is $\simeq 0.09$, a value consistent with that expected for a typical IP system above the period gap. Hence, we can firmly assign the IP flag to the source VZ Sex.
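As a trivial arithmetic check (ours, included for completeness), the quoted ratio follows directly from the two periods:

```python
# Spin-to-orbit period ratio from the values quoted in the text.
P_spin_min = 20.3        # WD spin period, minutes
P_orb_hr = 3.581         # orbital period, hours
ratio = P_spin_min / (P_orb_hr * 60.0)
print(round(ratio, 3))   # ~0.094, consistent with the quoted 0.09
```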
{VZ Sex also shows a thermal spectrum which is characterized by an intrinsic luminosity of $\simeq 7\times 10^{31}$ erg s$^{-1}$,} i.e. at the lower end of the distribution of the typical IPs, thus opening to the possibility that a bridge linking the normally bright IPs to the faint population of sources (see recent discussion in \citealt{mukai2020}) does exist.
\begin{acknowledgements}
This paper is based on observations from
XMM-Newton, an ESA science mission with instruments
and contributions directly funded by ESA Member States and NASA. We thank the INFN projects TAsP and EUCLID for partial support. We warmly thank the anonymous Referee for the suggestions that greatly improved the manuscript.
\end{acknowledgements}
{
\software{XSPEC (v12.9.0; \citealt{arnaud})}
\software{SAS (v17.0.0; \citealt{gabriel2004})}
}
\section{Introduction}\label{sec:1}
\subsection{Backgrounds and main results}\label{sec:1_1}
Integrable systems in classical and quantum theory
have attracted much attention in the field of
mathematical physics.
Related to both classical and quantum integrable systems,
the integrable cellular automata known as the box-ball systems
have provided many stimulating ideas and topics
in this field \cite{IKT12, TS90}.
In relation to classical integrable systems, the box-ball systems are derived from
integrable non-linear differential equations by a procedure
known as tropicalization or ultra-discretization.
Roughly speaking, the notion of geometric lifting is
the inverse of this procedure.
On the other hand,
in relation to quantum integrable systems, the box-ball systems are derived from
integrable quantum spin chain models by a procedure known as crystallization.
Its mathematical background is given by Kashiwara's theory of crystals \cite{Ka1, Ka2}.
Quite remarkably, there is a geometric lifting of this theory
known as the theory of geometric (and unipotent) crystals by
Berenstein and Kazhdan \cite{BK00}.
Based on their work,
G.~Frieden recently presented
explicit formulas for the
affine type $A$ geometric crystal and its intertwiner, the
geometric $R$-matrix,
by using Grassmannians \cite{F19, F18}.
This is a geometric lifting of the crystal of the so-called
Kirillov-Reshetikhin modules and the associated
combinatorial $R$-matrix,
represented by semi-standard Young tableaux with
rectangular shapes.
Inspired by his work, in this paper we propose a method to
construct
a geometric lifting of a family of integrable cellular automata
with periodic boundary conditions.
These cellular automata are known as the
periodic box-ball system \cite{YYT, YT02} and
its generalizations \cite{KS08, KTT, KT10, KT2}.
Here we note that the periodic box-ball system was conventionally derived from
a time-discretized version of the closed Toda chain
(or discrete periodic Toda; dp-Toda \cite{HT95, HTI93})
by the tropicalization procedure \cite{KT02, IT08, T14}.
Therefore one may think that its geometric lifting simply goes back to the
original dp-Toda chain.
However, our method of geometric lifting is considerably different from that, and
gives a novel family of discrete integrable systems,
which we call \textit{closed geometric crystal chains}.
In order to explain the difference between the dp-Toda chain and
the closed geometric crystal chain, we use the notion of
discrete time Lax equation \cite{Suris04}.
Let $\mathbf{x} = (x^{(1)},\dots,x^{(n-1)})$ be an $(n-1)$-component variable
and set $x^{(n)}:=s/(x^{(1)} \cdots x^{(n-1)})$
where $s$ is a parameter in $\C^{\times}$ or $\R_{>0}$.
We introduce an $n \times n$ matrix $g(\mathbf{x}, s; \lambda)$ as in the
main text (See Example \ref{ex:nov20_2} for $n=4$),
in which the diagonal elements are $(x^{(1)},\dots,x^{(n)})$,
their nearest lower off-diagonals are $1$'s, and there is
an indeterminate $\lambda$ at the top-right corner.
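As an illustrative sketch (our reconstruction of the description above; the precise convention is fixed in Example \ref{ex:nov20_2}), the matrix $g(\mathbf{x}, s; \lambda)$ can be assembled as follows; for $n=2$ it reduces to the $2 \times 2$ matrix with rows $(b, l)$ and $(1, s/b)$ used in section \ref{sec:2}.

```python
# Hedged sketch of the matrix g(x, s; lam) described above: diagonal entries
# (x^(1), ..., x^(n)) with x^(n) = s / (x^(1) ... x^(n-1)), ones on the nearest
# lower off-diagonal, and the indeterminate lam in the top-right corner.
import math

def g(x, s, lam):
    """Build g(x, s; lam) from the (n-1)-component tuple x."""
    n = len(x) + 1
    diag = list(x) + [s / math.prod(x)]   # x^(n) := s / (x^(1) ... x^(n-1))
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = diag[i]
    for i in range(1, n):
        M[i][i - 1] = 1.0                 # nearest lower off-diagonal
    M[0][n - 1] = lam                     # top-right corner
    return M

# for n = 2 this reduces to [[b, l], [1, s/b]]
print(g((0.7,), 1.0, 2.0))
```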
Then, for any $s,l \in \C^{\times}$ and sufficiently generic
$(\mathbf{a},\mathbf{b}) \in (\C^{\times})^{2(n-1)}$,
there is a unique solution $(\mathbf{a}',\mathbf{b}') \in (\C^{\times})^{2(n-1)}$
to the following matrix equation
\begin{equation}\label{eq:dec11_5}
g(\mathbf{b}, s; \lambda) g(\mathbf{a}, l; \lambda) =
g(\mathbf{a}', l; \lambda) g(\mathbf{b}', s; \lambda).
\end{equation}
By regarding the map $T: (\mathbf{a},\mathbf{b}) \mapsto (\mathbf{a}',\mathbf{b}')$
as a time evolution, we obtain a non-linear dynamical system on
$ (\C^{\times})^{2(n-1)}$ which
(with a shift of the indices of the variables' components)
turns out to be the dp-Toda chain.
Its Lax representation is given by
\begin{equation*}
\mathcal{L} (\mathbf{a}',\mathbf{b}' ; \lambda) =
(\mathcal{M}(\lambda))^{-1} \mathcal{L} (\mathbf{a},\mathbf{b} ; \lambda)
\mathcal{M}(\lambda),
\end{equation*}
where $\mathcal{L} (\mathbf{a},\mathbf{b} ; \lambda) =
g(\mathbf{a}, l; \lambda) g(\mathbf{b}, s; \lambda)$ and
$\mathcal{M}(\lambda) = (g(\mathbf{b}, s; \lambda))^{-1}$.
We note that this map is an example of what are called
integrable or Yang-Baxter maps \cite{Sklyanin00, V05}.
For the dp-Toda chain, it is known that
every component of the dependent variables $(\mathbf{a}',\mathbf{b}')$ is
expressed by a subtraction-free rational function of the parameters
$s,l$ and the components of the independent variables $(\mathbf{a},\mathbf{b})$
with non-negative integer coefficients.
This implies that, if we let the parameters $s,l$ take their values in $\R_{>0}$,
then we can regard
the dp-Toda chain as a dynamical system on
$ (\R_{>0})^{2(n-1)}$ instead of $(\C^{\times})^{2(n-1)}$.
This idea of restricting the domains of the parameters and the
variables into \textit{positive real} spaces
opens a door to the possibility of constructing a new family of
discrete integrable systems out of this well-known matrix $g(\mathbf{x}, s; \lambda)$.
More precisely, we adopt a new guiding principle for our study:
we seek integrable systems whose dependent variables are positive real,
but without requiring them to be written as subtraction-free rational
functions of the independent variables.
To be more explicit,
one of the main results of this paper (Theorem \ref{th:main}) claims that
for any $L \in \Z_{>0}$,
$s,l \in \R_{>0}$ and $(\mathbf{b}_1, \dots, \mathbf{b}_L)
\in (\R_{>0})^{L(n-1)} $,
there is a unique \textit{positive real solution}
$(\mathbf{v}, \mathbf{b}'_1, \dots, \mathbf{b}'_L) \in(\R_{>0})^{(L+1)(n-1)}$ to the
following matrix equation
\begin{equation}\label{eq:dec10_1}
g(\mathbf{b}_1, s; \lambda) \cdots g(\mathbf{b}_L, s; \lambda) g(\mathbf{v}, l; \lambda) =
g(\mathbf{v}, l; \lambda) g(\mathbf{b}'_1, s; \lambda) \cdots g(\mathbf{b}'_L, s; \lambda).
\end{equation}
This remarkably simple result is obtained by combining Frieden's work on the
geometric $R$-matrix \cite{F18} with the Perron-Frobenius theorem
in linear algebra (see, for example, \cite{Lax96}).
Therefore, by regarding the map $T^{(1)}_l: (\mathbf{b}_1, \dots, \mathbf{b}_L)
\mapsto (\mathbf{b}'_1, \dots, \mathbf{b}'_L)$
as a time evolution, we obtain a non-linear dynamical system on $ (\R_{>0})^{L(n-1)} $.
This is an example of the closed geometric crystal chains.
Obviously, its Lax representation is given by
\begin{equation*}
\mathcal{L} (\mathbf{b}'_1, \dots, \mathbf{b}'_L ; \lambda) =
(\mathcal{M}(\lambda))^{-1} \mathcal{L} (\mathbf{b}_1, \dots, \mathbf{b}_L ; \lambda)
\mathcal{M}(\lambda),
\end{equation*}
where $\mathcal{L} (\mathbf{b}_1, \dots, \mathbf{b}_L ; \lambda) =
g(\mathbf{b}_1, s; \lambda) \cdots g(\mathbf{b}_L, s; \lambda)$ and
$\mathcal{M}(\lambda) = g(\mathbf{v}, l; \lambda)$.
Although we adopted the above mentioned guiding principle,
this Lax representation allows us to obtain such conserved quantities that
still can be tropicalized.
Now we want to explain how the above defined closed geometric crystal chains
are related to integrable cellular automata with periodic boundary conditions.
The geometric $R$-matrix (in the totally one-row tableaux case)
is a map defined to be
$R: (\mathbf{b},\mathbf{a}) \mapsto (\mathbf{a}',\mathbf{b}')$
in which the variables are related by
the matrix equation \eqref{eq:dec11_5}.
For a state $(\mathbf{b}_1, \dots, \mathbf{b}_L)
\in (\R_{>0})^{L(n-1)} $ and a `carrier' $\mathbf{v} \in (\R_{>0})^{n-1}$
we use this map repeatedly.
It is illustrated as
\begin{equation}\label{eq:dec11_6}
\batten{\mathbf{v}_1}{\mathbf{b}_1}{\mathbf{b}_1'}{\mathbf{v}_2}\!\!\!
\batten{}{\mathbf{b}_2}{\mathbf{b}_2'}{\mathbf{v}_3}\!\!\!
\batten{}{}{}{\cdots\cdots}
\quad
\batten{}{}{}{\mathbf{v}_{L-1}}\,\,
\batten{}{\mathbf{b}_{L-1}}{\mathbf{b}_{L-1}'}{\mathbf{v}_{L}}\!\!\!
\batten{}{\mathbf{b}_L}{\mathbf{b}_L'}{\mathbf{v}.}
\end{equation}
Note that this diagram should be read from right to left.
So the variables are related by
$R(\mathbf{b}_i,\mathbf{v}_{i+1})=(\mathbf{v}_{i}, \mathbf{b}'_i)$
where we interpret $\mathbf{v}_{L+1}$ as $\mathbf{v}$.
In the corresponding combinatorial theory,
the geometric $R$-matrix is tropicalized to
the combinatorial $R$-matrix.
This is a map for the isomorphism of the tensor products
of Kashiwara's crystals.
As a combinatorial analogue of the relation
depicted by \eqref{eq:dec11_6},
we present an example in the case of $n=2$ cited from
\cite{KTT}.
\begin{equation*}
\unitlength 0.8mm
{\small
\begin{picture}(130,20)(8,-11)
\multiput(0.8,0)(11.5,0){13}{\line(1,0){4}}
\multiput(2.8,-3)(11.5,0){13}{\line(0,1){6}}
\put(1.8,5){2}
\put(13.3,5){1}
\put(24.8,5){2}
\put(36.3,5){2}
\put(47.8,5){1}
\put(59.3,5){1}
\put(70.8,5){1}
\put(82.3,5){1}
\put(93.8,5){2}
\put(105.3,5){2}
\put(116.8,5){2}
\put(128.3,5){1}
\put(139.8,5){1}
\put(1.8,-7.5){1}
\put(13.3,-7.5){2}
\put(24.8,-7.5){1}
\put(36.3,-7.5){1}
\put(47.8,-7.5){1}
\put(59.3,-7.5){2}
\put(70.8,-7.5){2}
\put(82.3,-7.5){2}
\put(93.8,-7.5){1}
\put(105.3,-7.5){1}
\put(116.8,-7.5){1}
\put(128.3,-7.5){2}
\put(139.8,-7.5){2}
\put(-6,-1.1){122}
\put(5.5,-1.1){112}
\put(17,-1.1){122}
\put(28.5,-1.1){112}
\put(40,-1.1){111}
\put(51.5,-1.1){111}
\put(63,-1.1){112}
\put(74.5,-1.1){122}
\put(86,-1.1){222}
\put(97.5,-1.1){122}
\put(109,-1.1){112}
\put(120.5,-1.1){111}
\put(132,-1.1){112}
\put(143.5,-1.1){122}
\end{picture}
}
\end{equation*}
(This is a `reflected' version of the example in \cite{KTT},
with the left and right hand sides interchanged.)
In the language of the box-ball systems,
the single letters $1$ and $2$ denote
an empty box and a box with a ball respectively,
where the capacity of the boxes are all one.
The three consecutive letters $111, 112, \dots$
on the middle horizontal line denote the states of a carrier of
balls with capacity three, which travels from right to left.
At each site along the way, the carrier picks up a ball from
a box containing one, or puts a ball into an empty box,
whenever possible.
In \cite{KTT}, A.~Kuniba, A.~Takenouchi and one of the authors
proved the following:
\begin{proposition}\label{pr:dec11_7}
Suppose $n=2$ and the state is given by a sequence of
single box tableaux.
Then, for any capacity of the carrier,
the tropical analogue of the equation $\mathbf{v}_1 = \mathbf{v}$
for the picture \eqref{eq:dec11_6} has
at least one solution,
and even if there is more than one solution to this equation,
the tropical analogue of the state $\mathbf{b}'_1, \dots, \mathbf{b}'_L$
on the bottom line is independent of the choice among the
non-unique solutions and hence is
uniquely determined.
\end{proposition}
This fact enabled us to
give a formulation of the periodic box-ball system
in terms of Kashiwara's crystal theory,
and it is natural to consider a generalization of
this formulation to those associated with crystals of
Kirillov-Reshetikhin modules.
In this generalization, the combinatorial analogue of the relation
$R(\mathbf{b}_i,\mathbf{v}_{i+1})=(\mathbf{v}_{i}, \mathbf{b}'_i)$
is given by a relation between \textit{product tableaux} \cite{Ful}.
That is, it is given by
$\tp{\mathbf{b}_i} \cdot \tp{\mathbf{v}_{i+1}} =
\tp{\mathbf{v}_i} \cdot \tp{\mathbf{b}'_{i}}$ where
$\tp{\bullet}$ denotes a rectangular tableau with $k$ rows for $1 \leq k \leq n-1$
obtained by the tropicalization of
any element of $(\R_{>0})^{k(n-k)}$.
Here is an example cited from \cite{KT10}
for $n=4$ where the carrier is given by a tableau with two rows.
\begin{equation*}
\footnotesize
\begin{picture}(200,30)(30,-7)
\put(18,14){1}\put(48,14){3}\put(78,14){4}
\put(108,14){3}\put(138,14){2}\put(168,14){1}
\put(198,14){1}\put(228,14){4}\put(258,14){2}
\multiput(20,3)(30,0){9}{
\put(-6,0){\line(1,0){12}}\put(0,-8){\line(0,1){16}}}
\put(270,3.5){12}\put(270,-3.5){34}
\put(240,3.5){12}\put(240,-3.5){23}
\put(210,3.5){13}\put(210,-3.5){24}
\put(180,3.5){11}\put(180,-3.5){24}
\put(150,3.5){11}\put(150,-3.5){24}
\put(120,3.5){11}\put(120,-3.5){22}
\put(90,3.5){12}\put(90,-3.5){23}
\put(60,3.5){13}\put(60,-3.5){24}
\put(30,3.5){23}\put(30,-3.5){34}
\put(0,3.5){12}\put(0,-3.5){34}
\put(18,-14){3}\put(48,-14){1}\put(78,-14){2}
\put(108,-14){1}\put(138,-14){4}\put(168,-14){1}
\put(198,-14){3}\put(228,-14){2}\put(258,-14){4}
\end{picture}
\normalsize
\end{equation*}
In this example, the above product tableaux relation
can be described in such a way that
the column-insertion of $\tp{\mathbf{b}_i} $ into $\tp{\mathbf{v}_{i+1}} $
coincides with the row-insertion of
$\tp{\mathbf{b}'_i} $ into $\tp{\mathbf{v}_{i}}$.
This example tempts us to dream that
we might be able to construct the associated integrable cellular automata,
in the sense that
for any sequence of letters arbitrarily chosen
from the set $\{1,\dots,n\}$
and for any rectangular shape,
we can always find a tableau of that shape that solves the
tropical analogue of the equation $\mathbf{v}_1 = \mathbf{v}$
for the picture \eqref{eq:dec11_6},
allowing us to define a unique time evolution
compatible with the periodic boundary condition.
In fact, such a dream does not come true because the analogue of
Proposition \ref{pr:dec11_7} does not hold
for $n>2$ \cite{KT10, KT2}, and even in the $n=2$ case it does not hold
for sequences of general one-row tableaux \cite{KS08, T09}.
The motivation for beginning our present study was
to clarify how this situation would be changed if the combinatorial $R$-matrices
are lifted to the geometric $R$-matrices.
The outcome is a realization of the above mentioned dream, in a sense.
The main result of this paper is Theorem \ref{th:main2}, that
generalizes the above mentioned Theorem \ref{th:main} from the
totally one-row tableaux case to the case of carriers of
general rectangular tableaux.
\subsection{Outline}\label{sec:1_2}
Throughout this paper,
we fix a positive integer $n \geq 2$,
which appears in such contexts as the theory of
type $A^{(1)}_{n-1}$ geometric crystals,
semi-standard Young tableaux with entries taken from
$\{ 1, \dots , n\}$,
and the loop group ${\rm GL}_n (\C (\lambda))$.
In section \ref{sec:2}, we restrict ourselves to the simplest case of $n=2$
and give a detailed description of the simplest nontrivial example
of our new integrable systems, the closed geometric crystal chains.
In section \ref{sec:2_1}, by using only elementary mathematics
we show that the above mentioned scheme of
constructing a new dynamical system associated with the matrix
$g(\mathbf{x}, s; \lambda)$
is indeed possible by the restriction of the variables
to the positive real domains.
In section \ref{sec:2_2}, we study the properties of the
dynamical system and clarify its integrable structures such as
descriptions of its
conservation laws.
In section \ref{sec:2_3},
we consider two different kinds of continuum limits of
our discrete time dynamical system to derive its associated differential
equations in scope for potential application to real physical systems.
In section \ref{sec:2_4}, we study tropicalization of our dynamical system
to elucidate its relation to the generalized periodic box-ball systems.
Extension to the case of
general $n$ is explored in section \ref{sec:3}.
This section is divided into two subsections
according to the shapes of rectangular Young tableaux
whose geometric/rational lifts are used there.
In section \ref{sec:3_1}, we use one-row tableaux only and
consider the matrix equation \eqref{eq:dec10_1} to
define time evolutions for our dynamical system, as well as to study
its conservation laws.
In section \ref{sec:3_2}, we still use one-row tableaux for
the \textit{states} of our dynamical system, but use general rectangular
tableaux for the \textit{carriers}, which play the role of
carriers of balls in the associated box-ball systems with
$n-1$ species of balls.
To this end, we present a brief review of the
geometric $R$-matrix introduced by Frieden, and
find a way to apply its properties and the Perron-Frobenius theorem in
our construction of commuting time evolutions for the
new integrable systems.
Finally, in section \ref{sec:4} we give a summary and discussions.
\subsection{Notation}\label{sec:1_3}
As explained in the above,
we fix a notation $n \geq 2$ for an integer,
and we write $[n]=\{1,\dots,n\}$.
For any $r \in [n]$, denote
${ [n] \choose r }$ to be the set of $r$-element subsets of $[n]$.
For any admissible pair of $r$-element subsets $I, J$ and any matrix $A$ which has
more than or equal to $r$ rows and columns,
denote $\Delta_{I,J}(A)$ to be a minor determinant of $A$
associated with its $r \times r$ submatrix
specified by rows in $I$ and columns in $J$.
For two integers $i$ and $j$, we write $[i,j]=\{m \in \Z \vert i \leq m \leq j\}$.
Denote ${\rm Gr}(r,n)$ to be the Grassmannian variety of $r$-dimensional
subspaces in $\C^n$.
For $J \in { [n] \choose r }$ we write $P_J(M)$ to denote the
$J$th Pl\"{u}cker coordinate of the subspace $M \in {\rm Gr}(r,n)$.
Pl\"{u}cker coordinates are projective, i.e.\ they are only defined
up to a common nonzero scalar multiple.
For the Pl\"{u}cker coordinates,
in most cases we adopt Convention 3.1 of \cite{F19}.
We often write $P_1, P_{12}, P_{123}$ instead of $P_{\{1\}}, P_{\{1,2\}}, P_{\{1,2,3\}}$.
If $I \subseteq [n]$ does not contain exactly $r$ elements, then we set
$P_I(M)=0$.
If $I$ is any set of integers, we set $P_I(M) = P_{I'}(M)$, where $I'$ is the set
consisting of the residues of the elements of $I$ modulo $n$,
thought of as elements of $[n]$.
A point $M \in {\rm Gr}(r,n)$ is represented by a full-rank $n \times r$
matrix $M'$, in the sense that its columns span the subspace $M$.
Thus, for any $B \in {\rm GL}_r(\C)$ the matrix $M' B$ represents
the same point $M$.
This enables us to write the (projective) Pl\"{u}cker coordinate
as $P_J(M) = \Delta_{J, [r]} (M')$,
because we have $\Delta_{J, [r]} (M'B)= \Delta_{J, [r]} (M') \cdot \det B$
by the Cauchy-Binet formula.
In contrast, for any $A \in {\rm GL}_n(\C)$ the matrix $A M'$ represents
generally another point in ${\rm Gr}(r,n)$ that is denoted by $A \cdot M$.
We write $\mathbb{I}_r$ to denote the $r \times r$ identity matrix.
\section{The case of $\bm{n=2}$}\label{sec:2}
\subsection{Definition of the dynamical system}\label{sec:2_1}
We first consider the simplest case of the geometric $R$-matrix
that is the geometric lifting of the combinatorial $R$-matrix
for
one-row Young tableaux with $2$ kinds of letters.
Let $s,l \in \R_{>0}$ be a pair of parameters, and $R: (\R_{>0})^2 \rightarrow (\R_{>0})^2$
a rational map given by $R:(b,a) \mapsto (a',b')$ where
\begin{equation}\label{eq:jul28_1}
a' = a \frac{b+\frac{l}{a}}{a+\frac{s}{b}}, \quad b' =b \frac{a+\frac{s}{b}}{b+\frac{l}{a}}.
\end{equation}
We depict the relation $R(b,a) = (a',b')$
by
\begin{equation*}
\batten{a'}{b}{b'}{a}.
\end{equation*}
If necessary, we write $R^{(s,l)}$ for $R$ to make
its dependence on the parameters $s,l$ explicit.
It is easy to see that $R^{(l,s)} \circ R^{(s,l)}= {\rm Id}$, so in particular
this map is birational.
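As a quick numerical sanity check (an illustrative sketch of ours, not code from the paper), one can implement \eqref{eq:jul28_1} directly and verify the involution $R^{(l,s)} \circ R^{(s,l)}= {\rm Id}$, together with the conservation of the product $ab$:

```python
# Direct implementation of the n = 2 geometric R-matrix from eq. jul28_1.
def R(b, a, s, l):
    """R^{(s,l)}: (b, a) -> (a', b')."""
    ap = a * (b + l / a) / (a + s / b)
    bp = b * (a + s / b) / (b + l / a)
    return ap, bp

s, l = 1.0, 2.0
b, a = 0.7, 1.3
ap, bp = R(b, a, s, l)
# swapping the parameters undoes the map: R^{(l,s)}(a', b') = (b, a)
b2, a2 = R(ap, bp, l, s)
print(b2, a2)   # recovers (b, a) up to floating-point error
```

Note also that $a'b' = ab$ by inspection of \eqref{eq:jul28_1}, which the check below confirms numerically.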
Let $R_i$ be a map from $(\R_{>0})^{L+1}$ to itself,
which acts as the map $R$ on factors $i$ and $i+1$, and as the identity on
the other factors.
Let $\mathcal{R} = R_1 \circ \cdots \circ R_L$.
Given an arbitrary $(b_1, \dots, b_L, v) \in (\R_{>0})^{L+1}$
let $\mathcal{R} (b_1, \dots, b_L, v) = (v_1, b'_1, \dots, b'_L)$.
It is depicted by
\begin{equation}\label{eq:jul28_2}
\batten{v_1}{b_1}{b_1'}{v_2}\!\!\!
\batten{}{b_2}{b_2'}{v_3}\!\!\!
\batten{}{}{}{\cdots\cdots}
\quad
\batten{}{}{}{v_{L-1}}\,\,
\batten{}{b_{L-1}}{b_{L-1}'}{v_{L}}\!\!\!
\batten{}{b_L}{b_L'}{v.}
\end{equation}
Based on this diagram,
we would like to construct a
discrete time dynamical system on the space $(\R_{>0})^L$
using a map that sends $(b_1, \dots, b_L)$ to $(b'_1, \dots, b'_L)$
as a unit step of its time evolution.
Then, if the $v_1$ appearing at the left end coincides with the $v$
at the right end, it is reasonable to say that this one-dimensional system
satisfies a periodic boundary condition.
Note that
$v_1$ is a rational function of
the variables $(b_1, \dots, b_L, v) \in (\R_{>0})^{L+1}$
and the parameters $s,l \in \R_{>0}$, because
it is given by a composition of rational maps.
Therefore,
by regarding the $b_i$'s also as parameters, we obtain an
algebraic equation $v = v_1$ for the unknown $v$ that assures the
periodic boundary condition.
Then we have:
\begin{proposition}\label{lem:1}
For any $s,l \in \R_{>0}$ and $(b_1, \dots, b_L) \in (\R_{>0})^{L}$,
there is a unique positive real solution $v \in \R_{>0}$ to the equation $v = v_1$.
\end{proposition}
\proof
One observes that the $a'$ in \eqref{eq:jul28_1}
is determined by the relation
\begin{equation*}
\begin{pmatrix}
b & l \\
1 & s/b
\end{pmatrix}
\begin{pmatrix}
a \\
1
\end{pmatrix}
= \left(a+\frac{s}{b} \right)
\begin{pmatrix}
a' \\
1
\end{pmatrix}.
\end{equation*}
Hence for \eqref{eq:jul28_2} we have
\begin{equation}\label{eq:nov19_1}
\begin{pmatrix}
L_{11} & L_{12} \\
L_{21} & L_{22}
\end{pmatrix}
\begin{pmatrix}
v \\
1
\end{pmatrix}
= \left(v+\frac{s}{b_L} \right) \left(v_L+\frac{s}{b_{L-1}} \right)
\cdots \left(v_2+\frac{s}{b_1} \right)
\begin{pmatrix}
v_1 \\
1
\end{pmatrix},
\end{equation}
where the \textit{monodromy matrix} is given by
\begin{equation}\label{eq:sep16_1}
\begin{pmatrix}
L_{11} & L_{12} \\
L_{21} & L_{22}
\end{pmatrix}
:=
\begin{pmatrix}
b_1 & l \\
1 & s/b_1
\end{pmatrix}
\cdots
\begin{pmatrix}
b_L & l \\
1 & s/b_L
\end{pmatrix}.
\end{equation}
\textit{(Uniqueness.)}
Suppose there exist positive real solutions to the equation $v=v_1$.
From \eqref{eq:nov19_1} we see that for any such solution $v$ it is necessary for
$(v,1)^t$ to be a positive eigenvector
of the monodromy matrix \eqref{eq:sep16_1}.
Then, by taking the ratio of the components of
the vectors on both sides of
the equation \eqref{eq:nov19_1},
we obtain
\begin{equation}\label{eq:aug19_1}
v = \frac{L_{11} v + L_{12}}{L_{21} v + L_{22}}.
\end{equation}
Since this equation
has a unique positive real solution
\begin{equation}\label{eq:jul31_1}
v = \frac{L_{11} - L_{22} + \sqrt{(L_{11}- L_{22} )^2+4 L_{12} L_{21} }}{ 2 L_{21} },
\end{equation}
such a solution is unique.
\par\noindent
\textit{(Existence.)}
Equation \eqref{eq:nov19_1} is valid for any $v \in \R_{>0}$
so in particular for the $v$ in \eqref{eq:jul31_1}.
On the other hand, for this $v$ we also have
\begin{equation}\label{eq:nov25_1}
\begin{pmatrix}
L_{11} & L_{12} \\
L_{21} & L_{22}
\end{pmatrix}
\begin{pmatrix}
v \\
1
\end{pmatrix}
= (L_{21}v + L_{22})
\begin{pmatrix}
v \\
1
\end{pmatrix}.
\end{equation}
By equating the right hand side of the equation \eqref{eq:nov19_1}
with that of \eqref{eq:nov25_1},
we see that this $v$ is indeed a solution to the algebraic equation $v=v_1$.
\qed
\par
With this $v \in \R_{>0}$ in \eqref{eq:jul31_1},
define $T_l: (\R_{>0})^L \rightarrow (\R_{>0})^L$ to be a map given by
\begin{equation}\label{eq:aug4_1}
T_l (b_1, \dots, b_L) = (b'_1, \dots, b'_L),
\end{equation}
where the right hand side is determined by the relation
$\mathcal{R} (b_1, \dots, b_L, v) = (v, b'_1, \dots, b'_L)$.
We call this map a \textit{time evolution}, and $v$ a \textit{carrier} for the state $(b_1, \dots, b_L)$
associated with $T_l$.
For any fixed $s \in \R_{>0}$,
we have now obtained a one-parameter family of
discrete time dynamical systems on the space $(\R_{>0})^L$
with the time evolutions $T_l \, (l \in \R_{>0})$.
We would like to call such a system a closed geometric crystal chain.
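The construction above can be checked numerically; the following sketch (ours, under the definitions of this subsection) builds the monodromy matrix \eqref{eq:sep16_1}, computes the carrier $v$ via \eqref{eq:jul31_1}, sweeps the $R$-matrix from right to left as in \eqref{eq:jul28_2}, and verifies that the periodic boundary condition $v_1 = v$ closes:

```python
# Numerical sketch of Proposition lem:1 and the time evolution T_l for n = 2.
import math

def R(b, a, s, l):
    """Geometric R-matrix R^{(s,l)}: (b, a) -> (a', b') (eq. jul28_1)."""
    ap = a * (b + l / a) / (a + s / b)
    bp = b * (a + s / b) / (b + l / a)
    return ap, bp

def carrier(bs, s, l):
    # monodromy matrix L = g(b_1) ... g(b_L) with g(b) = [[b, l], [1, s/b]]
    L11, L12, L21, L22 = 1.0, 0.0, 0.0, 1.0
    for b in bs:
        L11, L12, L21, L22 = (L11 * b + L12, L11 * l + L12 * s / b,
                              L21 * b + L22, L21 * l + L22 * s / b)
    # unique positive root of L21 v^2 + (L22 - L11) v - L12 = 0 (eq. jul31_1)
    return (L11 - L22 + math.sqrt((L11 - L22) ** 2 + 4 * L12 * L21)) / (2 * L21)

def T(bs, s, l):
    """Time evolution T_l: returns (new state, v_1, v)."""
    v = carrier(bs, s, l)
    out, vi = [], v
    for b in reversed(bs):       # the diagram is read from right to left
        vi, bp = R(b, vi, s, l)  # (v_i, b'_i) = R(b_i, v_{i+1})
        out.append(bp)
    return list(reversed(out)), vi, v

s, l = 1.0, 0.5
bs = [0.3, 1.1, 0.8, 2.0]
bs_new, v1, v = T(bs, s, l)
print(abs(v1 - v))               # the periodic boundary condition closes
```

Since each local map preserves the product $ab$, the total product $\prod_i b_i$ is conserved by $T_l$; moreover, setting $l = s$ reproduces the cyclic shift by one unit noted below.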
As a discrete dynamical system,
the closed geometric crystal chain
has such properties that any
homogeneous state $(b_1, \dots, b_L) = (\alpha, \dots, \alpha)$
is a fixed point,
and for the case of even $L$
any alternating state $(b_1, \dots, b_L) = (\alpha, \beta, \dots, \alpha, \beta)$
is a periodic point with period $2$.
The latter one is related to a special modulo $2$ conservation law in
Remark \ref{rem:dec28_1}.
Also note that the time evolution $T_{s}$ produces
a cyclic shift by one spatial unit.
Here we present an example of the time evolutions of the closed geometric crystal chain.
Figure \ref{fig:1} shows three results of
repeated applications of $T_l$s to an initial state of the system.
The patterns show that there are many `solitons'
traveling with various velocities.
One can also observe, in those patterns, many
collisions of the solitons and the phase shifts
induced by the collisions.
Such phenomena are typical to any dynamical system
with both non-linearity and integrability, including the periodic box-ball system.
We will give an additional observation on this example
at the end of section \ref{sec:2}.
\subsection{Properties of the dynamical system}\label{sec:2_2}
\subsubsection{Commutativity of the time evolutions.}\label{sec:2_2_1}
Since the geometric $R$-matrices satisfy the Yang-Baxter relation,
the following standard argument assures the commutativity of
the time evolutions.
Let $(b''_1, \dots, b''_L)= T_{l_2} \circ T_{l_1}(b_1, \dots, b_L)$
with the associated carriers $v$ (resp.~$\tilde{v}$) for the time evolutions
$T_{l_1}$ (resp.~$T_{l_2}$).
Then we have the relation
\begin{equation}\label{eq:aug24_1}
(R_2^{(s,l_2)} \circ \dots \circ R_{L+1}^{(s,l_2)}) \circ
(R_1^{(s,l_1)} \circ \dots \circ R_{L}^{(s,l_1)})(b_1, \dots, b_L,v,\tilde{v})=
(v,\tilde{v},b''_1, \dots, b''_L).
\end{equation}
\clearpage
\vspace{2cm}
\par\noindent
\begin{figure}[htbp]
\centering
\includegraphics[height=8cm]{CGC20201015.eps}
\caption{Time evolutions of the closed geometric crystal chain for $n=2$ and $L=50$.
The values of the parameters are $s=1$ (All), and
$l=0.00001$ (Left), $l=0.1$ (Middle), $l=2.0$ (Right).
The initial state is given by $b_i = i/10 \quad (1 \leq i \leq 50)$
at the top row, and
the time flows from top to bottom.
Visualization is produced by the command MatrixPlot in Mathematica$^{\scriptstyle \mbox{\scriptsize \textregistered}}$.}
\label{fig:1}
\end{figure}
\par\noindent
By repeated use of the Yang-Baxter relation $R_{i+1}^{(s,l_2)} R_{i}^{(s,l_1)} R_{i+1}^{(l_2,l_1)} = R_{i}^{(l_2,l_1)} R_{i+1}^{(s,l_1)} R_{i}^{(s,l_2)} $ and the involution
$R_{L+1}^{(l_2,l_1)} \circ R_{L+1}^{(l_1,l_2)} = {\rm Id}$, we obtain
\begin{align*}
&(R_2^{(s,l_2)} \circ \dots \circ R_{L+1}^{(s,l_2)}) \circ
(R_1^{(s,l_1)} \circ \dots \circ R_{L}^{(s,l_1)}) \\
&\quad = R_2^{(s,l_2)} \circ R_1^{(s,l_1)} \circ R_3^{(s,l_2)} \circ R_2^{(s,l_1)} \circ\dots \circ R_{L+1}^{(s,l_2)} \circ R_{L}^{(s,l_1)}\\
&\quad = (R_2^{(s,l_2)} \circ R_1^{(s,l_1)} \circ R_3^{(s,l_2)} \circ R_2^{(s,l_1)} \circ\dots \circ R_{L+1}^{(s,l_2)} \circ R_{L}^{(s,l_1)}) \circ (R_{L+1}^{(l_2,l_1)} \circ R_{L+1}^{(l_1,l_2)}) \\
&\quad =R_{1}^{(l_2,l_1)} \circ (R_2^{(s,l_1)} \circ R_1^{(s,l_2)} \circ R_3^{(s,l_1)} \circ R_2^{(s,l_2)} \circ \dots \circ R_{L+1}^{(s,l_1)} \circ R_{L}^{(s,l_2)}) \circ R_{L+1}^{(l_1,l_2)} \\
&\quad =R_{1}^{(l_2,l_1)} \circ (R_2^{(s,l_1)} \circ \dots \circ R_{L+1}^{(s,l_1)}) \circ
(R_1^{(s,l_2)} \circ \dots \circ R_{L}^{(s,l_2)}) \circ R_{L+1}^{(l_1,l_2)}.
\end{align*}
By substituting this into \eqref{eq:aug24_1} we obtain
\begin{equation}\label{eq:aug24_2}
(R_2^{(s,l_1)} \circ \dots \circ R_{L+1}^{(s,l_1)}) \circ
(R_1^{(s,l_2)} \circ \dots \circ R_{L}^{(s,l_2)})(b_1, \dots, b_L,u,\tilde{u})=
(u,\tilde{u},b''_1, \dots, b''_L),
\end{equation}
where $(u,\tilde{u}) = R^{(l_1,l_2)} (v,\tilde{v})$.
Therefore we have
$T_{l_1} \circ T_{l_2} (b_1, \dots, b_L) = (b''_1, \dots, b''_L)$.
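This commutativity is easy to confirm numerically; the sketch below (ours, with the explicit solution $a'=b(ab+l)/(ab+s)$, $b'=a(ab+s)/(ab+l)$ of \eqref{eq:aug7_1} as the local map) applies two time evolutions in both orders:

```python
import math

def step(b, c, s, l):
    # local geometric R-matrix \eqref{eq:jul28_1}
    p = b * c
    return b * (p + l) / (p + s), c * (p + s) / (p + l)

def evolve(bs, s, l):
    # monodromy entries of g(b_1,s;l)...g(b_L,s;l) at lambda = l
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for b in bs:
        m11, m12 = m11 * b + m12, m11 * l + m12 * s / b
        m21, m22 = m21 * b + m22, m21 * l + m22 * s / b
    # carrier: positive fixed point of the Moebius map \eqref{eq:aug19_1}
    v = (m11 - m22 + math.sqrt((m11 - m22) ** 2 + 4 * m12 * m21)) / (2 * m21)
    c, out = v, [0.0] * len(bs)
    for i in reversed(range(len(bs))):
        c, out[i] = step(bs[i], c, s, l)
    return out

s, bs = 1.0, [0.4, 2.1, 0.9, 1.6, 0.3]
one = evolve(evolve(bs, s, 0.2), s, 1.7)   # T_{1.7} after T_{0.2}
two = evolve(evolve(bs, s, 1.7), s, 0.2)   # T_{0.2} after T_{1.7}
assert all(abs(x - y) < 1e-9 for x, y in zip(one, two))
```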
\subsubsection{Conservation laws.}\label{sec:2_2_2}
Here we show that the closed geometric crystal chains are
discrete integrable systems with $L/2$ (resp.~$(L+1)/2$)
conserved quantities for even (resp.~odd) $L$.
Given $a,b \in \R_{>0}$, the $a',b'$ in \eqref{eq:jul28_1} are
determined by a pair of equations
\begin{equation}\label{eq:aug7_1}
ab = a'b', \qquad b+\frac{l}{a} = a'+\frac{s}{b'}.
\end{equation}
This is equivalent to the following matrix equation
\begin{equation}
\begin{pmatrix}
b & \lambda \\
1 & s/b
\end{pmatrix}
\begin{pmatrix}
a & \lambda \\
1 & l/a
\end{pmatrix}
=
\begin{pmatrix}
a' & \lambda \\
1 & l/a'
\end{pmatrix}
\begin{pmatrix}
b' & \lambda \\
1 & s/b'
\end{pmatrix},
\end{equation}
where $\lambda$ is a parameter called the \textit{loop parameter} \cite{F19}.
Thus for the choice of $v$ in \eqref{eq:jul31_1},
the equation depicted by \eqref{eq:jul28_2} is written as
\begin{equation}\label{eq:jul31_2}
\begin{pmatrix}
b_1 & \lambda \\
1 & s/b_1
\end{pmatrix}
\cdots
\begin{pmatrix}
b_L & \lambda \\
1 & s/b_L
\end{pmatrix}
\begin{pmatrix}
v & \lambda \\
1 & l/v
\end{pmatrix}
=
\begin{pmatrix}
v & \lambda \\
1 & l/v
\end{pmatrix}
\begin{pmatrix}
b'_1 & \lambda \\
1 & s/b'_1
\end{pmatrix}
\cdots
\begin{pmatrix}
b'_L & \lambda \\
1 & s/b'_L
\end{pmatrix}.
\end{equation}
Define $g(\alpha,\beta; \lambda)$ and
$\mathcal{L}(| b \rangle; \lambda)$
for $| b \rangle := (b_1,\dots,b_L)$
to be $2 \times 2$ matrices given by
\begin{equation}\label{eq:dec28_2}
g(\alpha,\beta; \lambda) :=
\begin{pmatrix}
\alpha & \lambda \\
1 & \beta/\alpha
\end{pmatrix},
\quad
\mathcal{L}(| b \rangle; \lambda)=
\begin{pmatrix}
L_{11}(\lambda) & L_{12}(\lambda) \\
L_{21}(\lambda) & L_{22}(\lambda)
\end{pmatrix}
:=
g(b_1,s; \lambda) \cdots g(b_L,s; \lambda).
\end{equation}
Then equation \eqref{eq:jul31_2} is written as
\begin{equation}\label{eq:dec4_1}
\mathcal{L}(T_l | b \rangle; \lambda) =
g(v,l; \lambda)^{-1}
\mathcal{L}(| b \rangle; \lambda) g(v,l; \lambda),
\end{equation}
which can be viewed as a discrete time analogue of the
Lax equation \cite{Suris04}.
This implies that the characteristic polynomial
\begin{equation*}
\det (x \mathbb{I}_2 - \mathcal{L}(| b \rangle; \lambda))=
x^2 - (L_{11}(\lambda) + L_{22}(\lambda))x + \det \mathcal{L}(| b \rangle; \lambda)
\end{equation*}
is invariant under the time evolution $T_l$ for any $l \in \R_{>0}$.
Since $\det \mathcal{L}(| b \rangle; \lambda) = (s-\lambda)^L$ is trivially conserved,
all the non-trivial conserved quantities of this dynamical system are
contained in the trace $L_{11}(\lambda) + L_{22}(\lambda)$.
Let $b_j^{(1)}=b_j, b_j^{(2)}=s/b_j$ and for any $m \in \Z$ we
extend its definition by $b_j^{(m)}=b_j^{(m-2)}$.
Based on \cite{ILP17, LP11}, define
the loop elementary symmetric functions $e^{(r)}_m (| b \rangle) \, (r=0,1)$ by
\begin{align}
e^{(1)}_m (| b \rangle)&= \sum_{1 \leq j_1 <j_2 < \dots < j_m \leq L}
b_{j_1}^{(2-j_1)} b_{j_2}^{(3-j_2)} \cdots b_{j_m}^{(1+m-j_m)}, \nonumber\\
e^{(0)}_m (| b \rangle)&= \sum_{1 \leq j_1 <j_2 < \dots < j_m \leq L}
b_{j_1}^{(1-j_1)} b_{j_2}^{(2-j_2)} \cdots b_{j_m}^{(m-j_m)},\label{eq:nov11_1}
\end{align}
and
$e^{(r)}_0 (| b \rangle)=1,\, e^{(r)}_m (| b \rangle)=0 \, (m<0)$.
By Lemma 6.1 of \cite{ILP17}, we have
\begin{align}
L_{11}(\lambda) = \sum_{m \geq 0} e^{(1)}_{L-2m} (| b \rangle) \lambda^m,\quad
&L_{12}(\lambda) = \sum_{m > 0} e^{(1)}_{L+1-2m} (| b \rangle) \lambda^m,\nonumber\\
L_{21}(\lambda) = \sum_{m \geq 0} e^{(0)}_{L-1-2m} (| b \rangle) \lambda^m,\quad
&L_{22}(\lambda) = \sum_{m \geq 0} e^{(0)}_{L-2m} (| b \rangle) \lambda^m.
\label{eq:aug19_2}
\end{align}
Since the parameter $\lambda$ can take arbitrary values,
every coefficient
\begin{equation}\label{eq:nov11_2}
I_{L-2m}:= e^{(1)}_{L-2m} (| b \rangle) + e^{(0)}_{L-2m} (| b \rangle),
\end{equation}
in the polynomial $L_{11}(\lambda) + L_{22}(\lambda)$
is a conserved quantity.
To summarize, there are $(L+1)/2$ conserved quantities
$I_1, I_3,\dots, I_L$ for odd $L$,
and $L/2$ conserved quantities $I_2, I_4,\dots, I_L$ for even $L$,
besides the trivial one $I_0=2$.
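Numerically, the invariance of the whole trace polynomial can be checked by evaluating it at several values of $\lambda$. The following sketch (ours; the local map again uses the explicit solution of \eqref{eq:aug7_1}) verifies this together with the conservation of $b_1 \cdots b_L$:

```python
import math

def monodromy(bs, s, lam):
    # entries of g(b_1,s;lam)...g(b_L,s;lam), cf. \eqref{eq:dec28_2}
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for b in bs:
        m11, m12 = m11 * b + m12, m11 * lam + m12 * s / b
        m21, m22 = m21 * b + m22, m21 * lam + m22 * s / b
    return m11, m12, m21, m22

def evolve(bs, s, l):
    # time evolution T_l: carrier fixed point, then a sweep of local R-matrices
    m11, m12, m21, m22 = monodromy(bs, s, l)
    v = (m11 - m22 + math.sqrt((m11 - m22) ** 2 + 4 * m12 * m21)) / (2 * m21)
    c, out = v, [0.0] * len(bs)
    for i in reversed(range(len(bs))):
        p = bs[i] * c
        c, out[i] = bs[i] * (p + l) / (p + s), c * (p + s) / (p + l)
    return out

s, l, bs = 1.0, 0.6, [0.4, 2.1, 0.9, 1.6]
bs2 = evolve(bs, s, l)
for lam in (0.0, 0.35, 1.2):
    m11, _, _, m22 = monodromy(bs, s, lam)
    n11, _, _, n22 = monodromy(bs2, s, lam)
    assert abs((m11 + m22) - (n11 + n22)) < 1e-9 * (m11 + m22)
# I'_L = b_1 ... b_L is conserved as well
assert abs(math.prod(bs) - math.prod(bs2)) < 1e-9
```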
\begin{example}
The traces $L_{11}(\lambda) + L_{22}(\lambda)$ for $L$ up to $4$ are
as follows:
\begin{align}
L=1: &\quad b_1 + \bar{b}_1,\\
L=2: &\quad 2\lambda + b_1b_2+\bar{b}_1\bar{b}_2,\\
L=3: &\quad \lambda (b_1+\bar{b}_1+b_2+\bar{b}_2+b_3+\bar{b}_3)+b_1b_2b_3+\bar{b}_1\bar{b}_2\bar{b}_3,\\
L=4: &\quad 2 \lambda^2+\lambda ( b_1b_2+b_1\bar{b}_3+b_1b_4+\bar{b}_2\bar{b}_3+
\bar{b}_2b_4+b_3b_4 \nonumber\\
&\quad +\bar{b}_1\bar{b}_2
+\bar{b}_1b_3
+\bar{b}_1\bar{b}_4
+b_2b_3
+b_2 \bar{b}_4+
\bar{b}_3 \bar{b}_4 )
+b_1 b_2 b_3 b_4+\bar{b}_1\bar{b}_2\bar{b}_3\bar{b}_4.
\end{align}
Here $\bar{b}_i$ denotes $s/b_i$.
\end{example}
By taking $\lambda = 0$ in \eqref{eq:jul31_2}, we
see that $I'_L := e^{(1)}_{L} (| b \rangle) = b_1\dots b_L$ is also a conserved quantity,
which can be used in place of $I_L$.
\begin{remark}\label{rem:dec28_1}
In the case of even $L$, the
quantity $I'_1:=e^{(1)}_{1} (| b \rangle) = b_1+\bar{b}_2 +\dots +b_{L-1}+\bar{b}_L$ is invariant under any two consecutive time
evolutions
$T_{l_2} \circ T_{l_1} (b_1, \dots, b_L) = (b''_1, \dots, b''_L)$.
This claim is verified by considering the equation
$\mathcal{L}(| b \rangle; \lambda) g(v,l_1; \lambda) g(\tilde{v},l_2; \lambda) =
g(v,l_1; \lambda) g(\tilde{v},l_2; \lambda) \mathcal{L}(| b'' \rangle; \lambda)$,
divided by $\lambda^{L/2+1}$ and then by taking the limit
$\lambda \rightarrow \infty$.
Since $e^{(1)}_{1} (| b \rangle) e^{(0)}_{1} (| b \rangle) = Ls + I_2$
is a conserved quantity,
the quantity $\bar{I}'_1:=e^{(0)}_{1} (| b \rangle)=\bar{b}_1+b_2+\dots + \bar{b}_{L-1}+b_L$
also has this property.
\end{remark}
\subsubsection{Invertibility.}\label{sec:2_2_3}
The time evolution $T_l$ defined in \eqref{eq:aug4_1} is invertible.
Actually, given any $(b_1,\dots,b_L) \in (\R_{>0})^L$ one can obtain
$(T_l)^{-1} (b_1, \cdots, b_L) = (\tilde{b}_1, \cdots, \tilde{b}_L)$
in the following way.
In view of \eqref{eq:jul31_2} we begin with the matrix equation
\begin{equation}\label{eq:aug4_2}
\begin{pmatrix}
\tilde{b}_1 & \lambda \\
1 & s/\tilde{b}_1
\end{pmatrix}
\cdots
\begin{pmatrix}
\tilde{b}_L & \lambda \\
1 & s/\tilde{b}_L
\end{pmatrix}
\begin{pmatrix}
\tilde{v} & \lambda \\
1 & l/\tilde{v}
\end{pmatrix}
=
\begin{pmatrix}
\tilde{v} & \lambda \\
1 & l/\tilde{v}
\end{pmatrix}
\begin{pmatrix}
L_{11}(\lambda) & L_{12}(\lambda) \\
L_{21}(\lambda) & L_{22}(\lambda)
\end{pmatrix},
\end{equation}
where $\tilde{v}$ and $\tilde{b}_i$'s are the unknowns.
By flipping the matrices with respect to their anti-diagonals,
we see that this equation is equivalent to
\begin{equation}\label{eq:aug4_3}
\begin{pmatrix}
l/\tilde{v} & \lambda \\
1 & \tilde{v}
\end{pmatrix}
\begin{pmatrix}
s/\tilde{b}_L & \lambda \\
1 & \tilde{b}_L
\end{pmatrix}
\cdots
\begin{pmatrix}
s/\tilde{b}_1 & \lambda \\
1 & \tilde{b}_1
\end{pmatrix}
=
\begin{pmatrix}
L_{22}(\lambda) & L_{12}(\lambda) \\
L_{21}(\lambda) & L_{11}(\lambda)
\end{pmatrix}
\begin{pmatrix}
l/\tilde{v} & \lambda \\
1 & \tilde{v}
\end{pmatrix}.
\end{equation}
In the same way as in Lemma \ref{lem:1} to obtain \eqref{eq:jul31_1},
we see that the $\tilde{v}$ satisfying this matrix equation
is determined by the following equation
\begin{equation*}
l/\tilde{v} = \frac{L_{22} l/\tilde{v} + L_{12}}{L_{21} l/\tilde{v} + L_{11}},
\end{equation*}
where the $L_{ij} = L_{ij}(l)$ are given by \eqref{eq:sep16_1}.
Its unique positive real solution is
\begin{equation}
\tilde{v} = \frac{ 2 l L_{21} }{L_{22} - L_{11} + \sqrt{(L_{11}- L_{22} )^2+4 L_{12} L_{21} }}.
\end{equation}
With this choice of $\tilde{v}$ we can obtain the $(\tilde{b}_1, \dots, \tilde{b}_L)$
in \eqref{eq:aug4_2} by using the inverse map
$\mathcal{R}^{-1} = R_L \circ \cdots \circ R_1$ as
$\mathcal{R}^{-1} (\tilde{v}, b_1, \dots, b_L) = (\tilde{b}_1, \dots, \tilde{b}_L, \tilde{v})$.
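The inverse map can likewise be checked numerically. Solving \eqref{eq:aug7_1} in the opposite direction gives the local inverse $(a',b') \mapsto (a,b)$ with $a = b'(q+l)/(q+s)$, $b = a'(q+s)/(q+l)$, $q = a'b'$, and with $\tilde{v}$ as above the sweep now runs from left to right (a sketch of ours; function names are our own):

```python
import math

def monodromy(bs, s, lam):
    # entries of g(b_1,s;lam)...g(b_L,s;lam), cf. \eqref{eq:dec28_2}
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for b in bs:
        m11, m12 = m11 * b + m12, m11 * lam + m12 * s / b
        m21, m22 = m21 * b + m22, m21 * lam + m22 * s / b
    return m11, m12, m21, m22

def evolve(bs, s, l):
    # forward time evolution T_l (cf. section 2.1)
    m11, m12, m21, m22 = monodromy(bs, s, l)
    v = (m11 - m22 + math.sqrt((m11 - m22) ** 2 + 4 * m12 * m21)) / (2 * m21)
    c, out = v, [0.0] * len(bs)
    for i in reversed(range(len(bs))):
        p = bs[i] * c
        c, out[i] = bs[i] * (p + l) / (p + s), c * (p + s) / (p + l)
    return out

def inv_evolve(bs, s, l):
    # inverse time evolution (T_l)^{-1} via \eqref{eq:aug4_2}
    m11, m12, m21, m22 = monodromy(bs, s, l)
    vt = 2 * l * m21 / (m22 - m11 + math.sqrt((m11 - m22) ** 2 + 4 * m12 * m21))
    c, out = vt, [0.0] * len(bs)
    for i in range(len(bs)):             # R_1 first, then R_2, ..., R_L
        q = c * bs[i]
        out[i], c = c * (q + s) / (q + l), bs[i] * (q + l) / (q + s)
    assert abs(c - vt) < 1e-9 * max(vt, 1.0)   # the carrier returns to vt
    return out

s, l, bs = 1.0, 0.7, [0.4, 2.1, 0.9, 1.6]
back = inv_evolve(evolve(bs, s, l), s, l)
assert all(abs(x - y) < 1e-8 for x, y in zip(back, bs))
```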
\subsection{Continuum limits and associated differential equations}\label{sec:2_3}
\subsubsection{A naive method.}\label{sec:2_3_1}
In order to observe a few of the properties of our new discrete dynamical system,
we consider two different continuum limits of the system.
First we note that this system has an obvious scale invariance.
That is,
replacing $a,b,s,l$ in \eqref{eq:jul28_1} by $\mu a, \mu b, \mu^2 s, \mu^2 l$
with a parameter $\mu \in \R_{>0}$ results in the replacement of
$a', b'$ by $\mu a', \mu b'$.
So we may set $s=1$, and
we also use the letter $\tau$ instead of $l$.
Under this setting, we rewrite the equations in \eqref{eq:aug7_1} as
\begin{equation}\label{eq:aug7_2}
ab = a'b', \qquad b+\frac{\tau}{a} = a'+\frac{1}{b'}.
\end{equation}
The first method is a naive one
in which we respect neither the integrability nor the periodicity
of the system.
Let $u(x,t), v(x,t)$ be a pair of variables depending on time $t$ and position $x$,
and let $\delta > 0$ be a small parameter.
We set
\begin{align*}
a &= v(x + c_0 \delta,t), \quad b= u(x, t -\delta), \\
a' &= v(x - c_0 \delta,t), \quad b'= u(x, t +\delta) ,
\end{align*}
and require that both equations in \eqref{eq:aug7_2} are satisfied up to order $1$
of the variable $\delta$.
This requirement is satisfied if the variables $u(x,t), v(x,t)$ satisfy the
following differential equation
\begin{equation}
\frac{\partial_t u(x,t)}{u(x,t)} = c_0 \frac{\partial_x v(x,t)}{v(x,t)},
\end{equation}
and if they are expressed by a new variable $U(x,t)$ as
\begin{align}
&v(x,t) = \sinh \left[ U(x,t) \right]+\sqrt{\tau + \sinh^2 \left[ U(x,t) \right]},\\
&u(x,t) = \exp \left[ U(x,t) \right] .
\end{align}
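In more detail, substituting this ansatz into \eqref{eq:aug7_2} and collecting powers of $\delta$ yields the following conditions (a routine expansion, recorded here for convenience):

```latex
% first equation of \eqref{eq:aug7_2}, order \delta:
\frac{\partial_t u}{u} = c_0 \frac{\partial_x v}{v},
% second equation, order 1 (an algebraic constraint between u and v):
u + \frac{\tau}{v} = v + \frac{1}{u}
\;\Longleftrightarrow\;
v - \frac{\tau}{v} = u - \frac{1}{u} = 2 \sinh U,
% second equation, order \delta:
\partial_t u \left( 1 - \frac{1}{u^2} \right)
= c_0 \, \partial_x v \left( 1 - \frac{\tau}{v^2} \right).
```

The positive root of the constraint in $v$ is exactly $v = \sinh U + \sqrt{\tau + \sinh^2 U}$ with $u = \exp[U]$, and the last condition is then implied by the first two.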
Putting them together we obtain the following partial differential equation
\begin{equation}
\frac{1}{c_0} \frac{\partial U(x,t)}{\partial t}=
\frac{\cosh \left[ U(x,t) \right]}{\sqrt{\tau + \sinh^2 \left[ U(x,t) \right]}}
\frac{\partial U(x,t)}{\partial x}.
\end{equation}
This is a sort of nonlinear advection equation.
When $\tau = 1$ it reduces to the linear differential equation $\partial_t U = c_0 \partial_x U$
for left-moving waves with a common velocity $c_0$,
in agreement with the cyclic shift behavior of the original discrete dynamical system
under the time evolution $T_{s}$.
\subsubsection{Another method to respect the integrability.}\label{sec:2_3_2}
Here we assume that $L$ is an odd integer.
According to diagram \eqref{eq:jul28_2} we write the evolution equation as
\begin{equation}\label{eq:sep1_1}
v_{i+1} b_i = v_{i} b'_i, \qquad b_i+\frac{\tau}{v_{i+1}} = v_i+\frac{1}{b'_i}.
\end{equation}
Let $a_i(t), u_i(t)$ be a pair of variables depending on time $t$ and position $i \in \Z/L\Z$,
let $C$ be a constant,
and let $\delta > 0$ be a small parameter.
We set
\begin{align*}
v_{i+1} &= \frac{1}{\delta} + a_{i+1}(t), \quad b_i= u_i(t), \\
v_i &= \frac{1}{\delta} + a_{i}(t), \qquad b'_i= u_i(t +\delta),\\
\tau &= \frac{1}{\delta^2} + \frac{C}{\delta},
\end{align*}
and write \eqref{eq:sep1_1} as the following discrete time analogue of the
Lax triads \cite{Suris04}
\begin{equation}
g(b_i',1;\lambda) = (\delta \cdot g(v_i, \tau; \lambda))^{-1} g(b_i,1;\lambda)
(\delta \cdot g(v_{i+1}, \tau; \lambda)),
\end{equation}
where $g(\bullet, \bullet;\lambda)$
is the $2 \times 2$ matrix defined in \eqref{eq:dec28_2}.
Since
\begin{math}
\delta \cdot g(v_i, \tau; \lambda) = \mathbb{I}_2 + \delta \cdot h(a_i(t),C;\lambda)
+ \mathcal{O} (\delta^2)
\end{math}
where
\begin{equation}
h(\alpha,\beta;\lambda) =
\begin{pmatrix}
\alpha & \lambda \\
1 & \beta - \alpha
\end{pmatrix},
\end{equation}
one can derive the following (continuous time) Lax triads \cite{Suris04}
\begin{equation}\label{eq:dec4_2}
\frac{{\rm d}}{{\rm d} t} g(u_i(t),1;\lambda) = g(u_i(t),1;\lambda)h(a_{i+1}(t),C;\lambda)
-h(a_i(t),C;\lambda)g(u_i(t),1;\lambda).
\end{equation}
This implies that
\begin{align*}
a_{i+1}(t)+a_i(t) &= u_i(t) - \frac{1}{u_i(t)} + C, \\
\frac{{\rm d}}{{\rm d} t} u_i(t) &= u_i(t) ( a_{i+1}(t) - a_i(t) ).
\end{align*}
Since $L$ is odd, we can solve the first
family of equations for the $a_i(t)$ as
\begin{equation*}
a_i(t) = \frac12 \sum_{j=0}^{L-1} (-1)^j \left( u_{i+j}(t) - \frac{1}{u_{i+j}(t)} \right) +\frac{C}{2}.
\end{equation*}
By substituting this expression in the second equation, we have
\begin{equation*}
\frac{{\rm d}}{{\rm d} t} u_i(t) = u_i(t) \sum_{j=1}^{L-1} (-1)^{j-1} \left( u_{i+j}(t) - \frac{1}{u_{i+j}(t)} \right).
\end{equation*}
This may be viewed as a variation of the Lotka-Volterra equation.
If we set $u_i(t) = \exp[ U_i(t) ]$, the equation is written as
\begin{equation}
\frac{{\rm d}}{{\rm d} t} U_i(t) = 2 \sum_{j=1}^{L-1} (-1)^{j-1} \sinh U_{i+j}(t).
\end{equation}
This system
has obvious conserved quantities $\sum_{i=1}^{L}U_i(t)$ and
$\sum_{i=1}^{L} \cosh U_i(t)$.
A continuous limit of the discrete Lax equation
\eqref{eq:dec4_1} is obtained by a standard method.
Let $\mathcal{L}(t) = g(u_1(t),1;\lambda) \cdots g(u_L(t),1;\lambda)$
and $\mathcal{B}(t) = h(a_1(t),C;\lambda)$.
Then we obtain
\begin{equation}
\frac{{\rm d}}{{\rm d} t} \mathcal{L}(t) = [\mathcal{L}(t), \mathcal{B}(t)],
\end{equation}
from the Lax triads equation \eqref{eq:dec4_2}.
\subsection{Tropicalization and piecewise linear formulas}\label{sec:2_4}
\subsubsection{An equation for periodic boundary conditions.}\label{sec:2_4_1}
Tropicalization is a procedure for turning subtraction-free rational maps
$(\R_{>0})^{d_1} \rightarrow (\R_{>0})^{d_2}$ into piecewise-linear maps
$\R^{d_1} \rightarrow \R^{d_2}$ by replacing the operations $+,\cdot,\div$ with
the operations min, $+,-$, and ignoring constants.
In fact, there are some variations of the notion of tropicalization.
We adopt one of them which was described in \cite{IKT12}.
With an infinitesimal parameter $\varepsilon >0$,
define $\mathrm{Log}_\ve : \R_{>0} \to \R$ to be a map
given by
\begin{align}
\label{i:loge-map}
\mathrm{Log}_\ve : a \mapsto - \ve \log a.
\end{align}
For $a, b > 0$ define $\tp{a},\tp{b} \in \R$ by $a = \e^{-\frac{\tp{a}}{\ve}}$ and
$b = \e^{-\frac{\tp{b}}{\ve}}$.
Then we have
$$
\mathrm{Log}_\ve (a + b) =
-\ve \log (\e^{-\frac{\tp{a}}{\ve}} + \e^{-\frac{\tp{b}}{\ve}}),
\quad
\mathrm{Log}_\ve (a \times b) = \tp{a} + \tp{b}.
$$
In the limit $\ve \to 0$, $\mathrm{Log}_\ve (a + b)$ becomes $\min(\tp{a},\tp{b})$.
In this manner, the algebra $(\R_{>0},+,\times)$ reduces to
the so called min-plus algebra,
and the procedure $\lim_{\ve \to 0} \mathrm{Log}_\ve$
with the transformation as $a = \e^{-\frac{\tp{a}}{\ve}}$
turns out to be the above mentioned tropicalization.
For example, the map of geometric $R$-matrix \eqref{eq:jul28_1}
is tropicalized to the following piecewise linear map
\begin{align}
A' &= A + \min (B,\tp{l}-A)-\min (A,\tp{s}-B),\nonumber\\
B'&= B+\min (A,\tp{s}-B)-\min (B,\tp{l}-A),
\end{align}
where $A = \tp{a}, B=\tp{b}$ and so on.
When the values of the variables are restricted to $\Z_{\geq 0}$,
this map reduces to the simplest case of
the combinatorial $R$-matrix for Kashiwara's crystals.
It is described by one-row tableaux of two kinds of letters (1 and 2)
\begin{equation}
\overbrace{1\dots1}^{B} \overbrace{2\dots2}^{\tp{s}-B} \otimes
\overbrace{1\dots1}^{A} \overbrace{2\dots2}^{\tp{l}-A} \mapsto
\overbrace{1\dots1}^{A'} \overbrace{2\dots2}^{\tp{l}-A'} \otimes
\overbrace{1\dots1}^{B'} \overbrace{2\dots2}^{\tp{s}-B'} .
\end{equation}
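One can verify numerically that the piecewise-linear map above is the $\ve \to 0$ limit of $\mathrm{Log}_\ve$ applied to the geometric $R$-matrix (a sketch of ours, with $a'=b(ab+l)/(ab+s)$, $b'=a(ab+s)/(ab+l)$ solving \eqref{eq:jul28_1}):

```python
import math

def trop_R(B, A, S, L):
    # the tropicalized (piecewise-linear) R-matrix
    Ap = A + min(B, L - A) - min(A, S - B)
    Bp = B + min(A, S - B) - min(B, L - A)
    return Ap, Bp

def geom_R(b, a, s, l):
    # geometric R-matrix \eqref{eq:jul28_1}
    p = a * b
    return b * (p + l) / (p + s), a * (p + s) / (p + l)

ve = 0.05                                    # small but finite epsilon
S, L, A, B = 2.0, 3.0, 1.0, 1.5
s, l, a, b = (math.exp(-X / ve) for X in (S, L, A, B))
ap, bp = geom_R(b, a, s, l)
Ap, Bp = trop_R(B, A, S, L)
assert abs(-ve * math.log(ap) - Ap) < 1e-3   # Log_ve of geometric R ~ tropical R
assert abs(-ve * math.log(bp) - Bp) < 1e-3
```

Note that $A'+B'=A+B$, the tropical image of the conservation $a'b'=ab$.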
The generalized periodic box-ball system \cite{KS08, T09}
may be regarded as a tropicalization of the closed geometric
crystal chain in section \ref{sec:2_1}.
Compared with the diagram \eqref{eq:jul28_2},
the corresponding situation may be depicted by
\begin{equation}\label{eq:nov12_3}
\batten{V_1}{B_1}{B_1'}{V_2}\!\!\!
\batten{}{B_2}{B_2'}{V_3}\!\!\!
\batten{}{}{}{\cdots\cdots}
\quad
\batten{}{}{}{V_{L-1}}\,\,
\batten{}{B_{L-1}}{B_{L-1}'}{V_{L}}\!\!\!
\batten{}{B_L}{B_L'}{V,}
\end{equation}
where $V=\tp{v}$, $B_i = \tp{b_i}$ and so on.
However, the assertion of Lemma \ref{lem:1},
which guarantees the existence of a unique solution to the equation
$V_1=V$,
does not survive the tropicalization.
This is due to the fact that the expression for $v$ in \eqref{eq:jul31_1}
is not subtraction-free rational.
Therefore, to study the periodic boundary condition for the
generalized periodic box-ball system,
we have to consider the tropicalization of the equation \eqref{eq:aug19_1}
itself.
It reads as
\begin{equation}\label{eq:aug20_1}
V = \min (\Lambda_{11} + V, \Lambda_{12}) - \min( \Lambda_{21} + V, \Lambda_{22}).
\end{equation}
Here
\begin{equation}\label{eq:nov13_1}
\Lambda_{ij} = \tp{L_{ij}} = \tp{L_{ij}(l)},
\end{equation}
which are the tropicalizations of the matrix elements of
the monodromy matrix \eqref{eq:sep16_1}.
They can be explicitly written down by using
the loop elementary symmetric functions
\eqref{eq:nov11_1} and \eqref{eq:aug19_2}.
\begin{example}\label{ex:nov12_1}
The matrix elements $\Lambda_{11}$ and $\Lambda_{21}$ for $L$ up to $4$ are
as follows:
\begin{align*}
L=1: &\quad \Lambda_{11}=B_1, \Lambda_{21}=0,\\
L=2: &\quad \Lambda_{11}=\min (\tp{l},B_1+B_2), \Lambda_{21}=\min (\bar{B}_1, B_2),\\
L=3: &\quad \Lambda_{11}=\min (\tp{l}+\min(B_1,\bar{B}_2,B_3), B_1+B_2+B_3), \nonumber\\
&\quad \Lambda_{21}=\min (\tp{l}, \bar{B}_1+\bar{B}_2, \bar{B}_1+B_3, B_2+B_3),\\
L=4: &\quad \Lambda_{11}=\min \left( 2\tp{l}, \tp{l}+\min(B_1+B_2,B_1+\bar{B}_3,B_1+B_4,\right. \nonumber\\
&\quad \quad \quad \quad \quad \quad
\left. \bar{B}_2+\bar{B}_3, \bar{B}_2+B_4, B_3+B_4 \right),
B_1+B_2+B_3+B_4), \nonumber\\
&\quad \Lambda_{21}=\min (\tp{l}+\min(\bar{B}_1,B_2,\bar{B}_3,B_4),\nonumber\\
&\quad \quad \quad \quad \quad \quad \bar{B}_1+\bar{B}_2+\bar{B}_3,
\bar{B}_1+\bar{B}_2+B_4,\bar{B}_1+B_3+B_4,B_2+B_3+B_4).
\end{align*}
Here $\bar{B}_i$ denotes $\tp{s}- B_i$.
The other matrix elements are given by $\Lambda_{22}=\Lambda_{11}(B_i \leftrightarrow \bar{B}_i), \Lambda_{12}=\tp{l}+\Lambda_{21}(B_i \leftrightarrow \bar{B}_i)$.
\end{example}
The following result is easily obtained by a simple case-by-case check.
\begin{proposition}\label{pr:nov12_2}
The solution to the equation \eqref{eq:aug20_1} is given by:
\begin{enumerate}
\item If $\frac{\Lambda_{12}+\Lambda_{21}}{2} \leq \min (\Lambda_{11},\Lambda_{22})$,
then $V=\frac{\Lambda_{12}-\Lambda_{21}}{2}$.
\item If $\Lambda_{11} < \min (\Lambda_{22}, \frac{\Lambda_{12}+\Lambda_{21}}{2} )$,
then $V=\Lambda_{11}-\Lambda_{21}$.
\item If $\Lambda_{22} < \min (\Lambda_{11}, \frac{\Lambda_{12}+\Lambda_{21}}{2} )$,
then $V=\Lambda_{12}-\Lambda_{22}$.
\item If $\Lambda_{11}=\Lambda_{22} < \frac{\Lambda_{12}+\Lambda_{21}}{2}$,
then any $V$ with $\Lambda_{11}-\Lambda_{21} \leq V \leq \Lambda_{12}-\Lambda_{11}$ is a solution.
\end{enumerate}
\end{proposition}
\proof
For simplicity, let $A=\Lambda_{11}, B=\Lambda_{12}, C=\Lambda_{21}$ and
$D= \Lambda_{22}$.
\par\noindent
Case (i): Suppose $C+V>D$. Then we have $V>D-C \geq \frac{B+C}{2}-C=
\frac{B-C}{2}$,
hence $A+V > \frac{B+C}{2}+\frac{B-C}{2}=B$.
So by \eqref{eq:aug20_1} we get $V=B-D$, but this leads to
$D < C+V = C+B-D \leq 2D -D =D$, a contradiction.
Thus $C+V \leq D$.
Suppose $A+V<B$, which implies $V=A-C$ by \eqref{eq:aug20_1}.
But this leads to $B > A+V = 2A-C \geq B+C-C =B$, a contradiction.
Therefore $A+V \geq B$, hence by \eqref{eq:aug20_1}
we get $V=\frac{B-C}{2}$.
\par\noindent
Case (ii): Suppose $A+V>B$.
Then if $C+V \geq D$ we get $V=B-D$ by \eqref{eq:aug20_1},
which leads to $B < A+V = A+B-D < B$, a contradiction.
Otherwise we have $C+V < D$ and then $V=\frac{B-C}{2}$ by \eqref{eq:aug20_1},
which leads to $B < A+V < \frac{B+C}{2}+\frac{B-C}{2}=B$, a contradiction.
Thus $A+V \leq B$.
Then if $C+V > D$ we have $A=D$ by \eqref{eq:aug20_1},
which contradicts the assumption.
Therefore $C+V \leq D$, hence by \eqref{eq:aug20_1}
we get $V=A-C$.
\par\noindent
Case (iii): A proof can be obtained from the previous case by exchanging
$A$ with $D$, $B$ with $C$, and $V$ with $-V$.
\par\noindent
Case (iv): Suppose $A+V>B$.
Then if $C+V > D$ we get $V=B-D$ by \eqref{eq:aug20_1},
which leads to $B < A+V = A+B-D = B$, a contradiction.
Otherwise we have $C+V \leq D$ and then $V=\frac{B-C}{2}$ by \eqref{eq:aug20_1},
which leads to $D \geq C+V = \frac{B+C}{2}>D$, a contradiction.
Thus $A+V \leq B$.
Then if $C+V < D$ we have $V=A-C$ by \eqref{eq:aug20_1},
which leads to $D > C+V = A =D$, a contradiction.
Therefore $C+V \geq D$, hence by \eqref{eq:aug20_1}
we get $A=D$, which does not contradict the assumption.
Thus any $V$ satisfying the condition $A-C \leq V \leq B-A$
solves the equation \eqref{eq:aug20_1}.
\qed
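The four cases can also be examined by brute force, scanning a grid of candidate values $V$ for random integer data (our own sanity check, in exact rational arithmetic):

```python
import random
from fractions import Fraction as F

def rhs(V, A, B, C, D):
    # right hand side of \eqref{eq:aug20_1}, with (A,B,C,D) for (Lam11,Lam12,Lam21,Lam22)
    return min(A + V, B) - min(C + V, D)

def solution_interval(A, B, C, D):
    # cases (i)-(iv) of the proposition, returned as an interval [lo, hi]
    h = F(B + C, 2)
    if h <= min(A, D):
        return F(B - C, 2), F(B - C, 2)
    if A < min(D, h):
        return A - C, A - C
    if D < min(A, h):
        return B - D, B - D
    return A - C, B - A          # case (iv): every V in the interval is a solution

random.seed(3)
for _ in range(300):
    A, B, C, D = (random.randint(0, 6) for _ in range(4))
    lo, hi = solution_interval(A, B, C, D)
    grid = [F(n, 4) for n in range(-60, 61)]     # V in [-15, 15], step 1/4
    sols = [V for V in grid if rhs(V, A, B, C, D) == V]
    assert sols == [V for V in grid if lo <= V <= hi]
```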
Here we show two examples to examine this result.
\begin{example}\label{ex:nov13_2}
Set $L=2, \tp{s}=2$, and $B_1 = B_2 =1$.
This is the state $12 \otimes 12$ in tableau notation.
By Example \ref{ex:nov12_1} one has
$\Lambda_{11} = \Lambda_{22} = \min (\tp{l},2)$,
and $\frac{\Lambda_{12}+\Lambda_{21}}{2}= 1+\tp{l}/2$.
Hence it falls into either case (i) when $\tp{l}=2$
or into case (iv) otherwise.
In any case the solution is given by $\min (\tp{l}-1,1) \leq V \leq \max (\tp{l}-1,1)$.
In particular, when $\tp{l}=1$ we have $V=0$ and $V=1$ as possible
solutions of integer values.
Then the diagram \eqref{eq:nov12_3} reads as
\begin{equation*}
\batten{1}{1}{0}{0}\!\!\!\!\!\!
\batten{}{1}{2}{1}
\qquad
\begin{picture}(40,40)(-20,-20)
\put(0,-1){\makebox(0,0)[b]{and}}
\end{picture}
\qquad
\batten{0}{1}{2}{1}\!\!\!\!\!\!
\batten{}{1}{0}{0},
\end{equation*}
respectively.
The periodic boundary condition $V_1=V$ is indeed satisfied, but
the output states
($22 \otimes 11$ and $11 \otimes 22$ in tableau notation) are different.
So we cannot define a unique time evolution compatible with such
periodic boundary conditions.
\end{example}
\begin{example}\label{ex:nov13_3}
Set $L=3, \tp{s}=2$, and $B_1 = B_2 =B_3=1$.
This is the state $12 \otimes 12 \otimes 12$ in tableau notation.
By Example \ref{ex:nov12_1} one has
$\Lambda_{11} = \Lambda_{22} = 1+ \min (\tp{l},2)$,
and $\frac{\Lambda_{12}+\Lambda_{21}}{2}= \min (\tp{l},2)+\tp{l}/2$.
Hence it falls into either case (i) when $\tp{l}=1,2$
or into case (iv) otherwise.
In the former case the solution is given by
$V =\tp{l}/2$, and
in the latter case it is given by
$1 \leq V \leq \tp{l}-1$.
In particular, when $\tp{l}=1$ we have $V=1/2$ as the solution, but
it is not an integer:
\begin{equation*}
\batten{\frac12}{1}{1}{\frac12}\!\!\!\!\!\!
\batten{}{1}{1}{\frac12}\!\!\!\!\!\!
\batten{}{1}{1}{\frac12}.
\end{equation*}
In fact, the only possible diagram \eqref{eq:nov12_3}
for integer value $V$ is given by
\begin{equation*}
\batten{1}{1}{0}{0}\!\!\!\!\!\!
\batten{}{1}{2}{1}\!\!\!\!\!\!
\batten{}{1}{0}{0}
\qquad
\begin{picture}(40,40)(-20,-20)
\put(0,-1){\makebox(0,0)[b]{or}}
\end{picture}
\qquad
\batten{0}{1}{2}{1}\!\!\!\!\!\!
\batten{}{1}{0}{0}\!\!\!\!\!\!
\batten{}{1}{2}{1},
\end{equation*}
so neither satisfies the periodic boundary condition.
\end{example}
In the generalized periodic box-ball system, there are states
that do not admit time evolutions by carriers with
specific capacities \cite{KS08}.
The above two examples show how
such `non-evolvable' states actually appear.
\subsubsection{Conservation laws and the energy of paths.}\label{sec:2_4_2}
Although
the closed geometric crystal chain itself cannot be tropicalized in the sense that
the expression for the carrier $v$ in \eqref{eq:jul31_1}
is not subtraction-free rational,
its conserved quantities are given by polynomials with non-negative integer
coefficients and hence can be tropicalized.
It is fairly reasonable to expect that
the tropicalizations of these conserved quantities are
the conserved quantities of the generalized periodic box-ball systems in
\cite{KS08}.
The tropicalization of the
loop elementary symmetric functions \eqref{eq:nov11_1} are given by
\begin{align*}
\tp{e^{(1)}_m (| b \rangle)}&= \min_{1 \leq j_1 <j_2 < \dots < j_m \leq L}
\left( B_{j_1}^{(2-j_1)} +B_{j_2}^{(3-j_2)} +\cdots +B_{j_m}^{(1+m-j_m)} \right), \\
\tp{e^{(0)}_m (| b \rangle)}&= \min_{1 \leq j_1 <j_2 < \dots < j_m \leq L}
\left( B_{j_1}^{(1-j_1)} +B_{j_2}^{(2-j_2)} +\cdots +B_{j_m}^{(m-j_m)} \right),
\end{align*}
where $B_{j}^{(r)}$ denotes $\tp{b_{j}^{(r)}}$.
Note that $B_{j}^{(r)} + B_{j}^{(r-1)} = \tp{s}$, and that $r$ is interpreted modulo $2$.
From the arguments in section \ref{sec:2_2_2} to deduce \eqref{eq:nov11_2},
it is reasonable to consider that the piecewise-linear functions
\begin{equation}
\tp{I_{L-2m}} = \min \left( \tp{e^{(1)}_{L-2m} (| b \rangle)},
\tp{e^{(0)}_{L-2m} (| b \rangle)} \right),
\end{equation}
with $m \in \{ 0,1,\ldots,\lfloor L/2 \rfloor \}$
provide a collection of conserved quantities of the
generalized periodic box-ball system for the following initial state or `path'
\begin{equation}\label{eq:oct23_1}
p = \overbrace{1\dots1}^{B_{1}^{(1)}} \overbrace{2\dots2}^{B_{1}^{(2)}} \otimes
\overbrace{1\dots1}^{B_{2}^{(1)}} \overbrace{2\dots2}^{B_{2}^{(2)}} \otimes
\cdots \otimes
\overbrace{1\dots1}^{B_{L}^{(1)}} \overbrace{2\dots2}^{B_{L}^{(2)}}.
\end{equation}
Guided by the notion of isospectral evolution, we also want to
find an explicit piecewise-linear expression for the tropicalization of
the eigenvalues of the monodromy matrix \eqref{eq:sep16_1},
which is identical to the matrix $\mathcal{L}(| b \rangle; l)$.
To this end, we first consider the trace of this matrix
\begin{equation}\label{eq:oct23_2}
\tp{L_{11}(l) + L_{22}(l)}
=\min_{m \in \{ 0,1,\ldots,\lfloor L/2 \rfloor \}}
\left( m \tp{l}+\tp{I_{L-2m}} \right) =
\min(\Lambda_{11}, \Lambda_{22}).
\end{equation}
Here the last expression is due to \eqref{eq:nov13_1}.
Then the piecewise-linear functions \eqref{eq:oct23_2} with $\tp{l}=1,2,\dots$
also provide a collection of conserved quantities
for the initial state \eqref{eq:oct23_1}.
We note that there is an inequality
\begin{equation}\label{eq:oct28_2}
\tp{L_{11}(l) + L_{22}(l)} \leq \tp{I_L} = \min\left(
\sum_{i=1}^L B_{i}^{(1)}, \sum_{i=1}^L B_{i}^{(2)}
\right) \leq
\frac{L}{2} \tp{s}.
\end{equation}
Now we consider the roots of the quadratic equation
\begin{equation*}
\det (x \mathbb{I}_2 - \mathcal{L}(| b \rangle; l))=
x^2 - (L_{11}(l) + L_{22}(l))x +(s-l)^L =0.
\end{equation*}
One of the roots is the Perron-Frobenius eigenvalue $E_l (>0)$ of
the positive matrix $\mathcal{L}(| b \rangle; l)$.
As we will see in section \ref{sec:3_2_5}, we can regard $\tp{E_l}$ as
the \textit{energy of path} \cite{KS08} for the initial state \eqref{eq:oct23_1} of the generalized periodic box-ball system.
\begin{proposition}
We have the following formula
\begin{equation}\label{eq:oct28_1}
\tp{E_l} = \min \left( \tp{L_{11}(l) + L_{22}(l)} , L \frac{\tp{l}}{2} \right).
\end{equation}
\end{proposition}
\proof
First we consider the case where $L$ is even or $s \geq l$.
Denote the other root by $F_l$.
Then we have $F_l \geq 0$.
If $F_l >0$ then one can tropicalize $F_l$ as well as $E_l$.
Since $E_l$ is the Perron-Frobenius eigenvalue
we have $E_l > F_l$, hence $\tp{E_l} < \tp{F_l}$.
Therefore we can obtain a piecewise linear formula
\begin{equation}
\tp{E_l} = \min \left( \tp{E_l}, \tp{F_l} \right) = \tp{L_{11}(l) + L_{22}(l)}.
\end{equation}
Obviously, this result is also valid for the case of $F_l = 0$.
Then if $L$ is even, this is equivalent to \eqref{eq:oct28_1},
as can be verified from \eqref{eq:oct23_2} and the fact $\tp{I_0}=0$.
Otherwise we have $\tp{s} \leq \tp{l}$ and $L$ is odd,
hence by the inequality \eqref{eq:oct28_2} we obtain the same result.
Next we consider the case where $L$ is odd and $s < l$.
Apply the map $\mathrm{Log}_\ve$ on both sides of the equation
\begin{equation*}
E_l^2 = (L_{11}(l) + L_{22}(l))E_l +(l-s)^L,
\end{equation*}
and take the limit $\ve \to 0$.
Then by noting that
\begin{equation*}
\mathrm{Log}_\ve (l-s)^L =
L \left( \tp{l} - \ve \log \left( 1-\e^\frac{\tp{l} -\tp{s}}{\ve} \right) \right),
\end{equation*}
and $\tp{l} -\tp{s}<0$, we can derive the following piecewise linear equation
\begin{equation}
2 \tp{E_l} = \min \left( \tp{L_{11}(l) + L_{22}(l)} + \tp{E_l}, L \tp{l} \right).
\end{equation}
It is easy to see that this equation has a unique solution \eqref{eq:oct28_1}.
\qed
Since $\tp{E_l}$ is a function of $\tp{l}$, we let
$\mathcal{E}_{\tp{l}}$ denote $\tp{E_l}$.
The notion of the \textit{number of solitons of length $j$}
was first introduced in \cite{FOY00} for non-periodic box-ball systems,
and then also for periodic systems \cite{KTT}.
In our notation for the tropicalized energy of path,
it is given by
\begin{equation}
m_j = -\mathcal{E}_{j-1} + 2 \mathcal{E}_{j} - \mathcal{E}_{j+1}.
\end{equation}
In the case of the generalized periodic box-ball system \cite{KS08},
this collection of numbers $\{ m_j\}_{j=1,2,\dots}$ was defined only
for evolvable paths.
In contrast, by using the formula \eqref{eq:oct28_1} we can formally define this
quantity even for non-evolvable paths.
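As an illustration, the energies and soliton numbers can be computed directly in the min-plus algebra; the sketch below (ours) evaluates the tropical monodromy matrix entrywise and applies \eqref{eq:oct28_1}, with the convention $m_j=-\mathcal{E}_{j-1}+2\mathcal{E}_j-\mathcal{E}_{j+1}$:

```python
INF = float('inf')

def minplus_mul(M, N):
    # 2x2 matrix product in the (min, +) semiring
    return [[min(M[i][0] + N[0][k], M[i][1] + N[1][k]) for k in (0, 1)]
            for i in (0, 1)]

def energy(Bs, S, Lp):
    # tropical monodromy: product of the tropicalized g(b_i, s; l),
    # i.e. [[B_i, tp(l)], [0, tp(s) - B_i]], then formula \eqref{eq:oct28_1}
    M = [[0, INF], [INF, 0]]             # tropical identity matrix
    for B in Bs:
        M = minplus_mul(M, [[B, Lp], [0, S - B]])
    return min(M[0][0], M[1][1], len(Bs) * Lp / 2)

def solitons(Bs, S, jmax=4):
    # m_j = -E_{j-1} + 2 E_j - E_{j+1}
    E = [energy(Bs, S, j) for j in range(jmax + 2)]
    return [-E[j - 1] + 2 * E[j] - E[j + 1] for j in range(1, jmax + 1)]

assert solitons([2, 1], 2) == [1, 0, 0, 0]       # 11x12: one soliton of length 1
assert solitons([1, 1], 2) == [0, 1, 0, 0]       # 12x12: one soliton of length 2
assert solitons([1, 1, 1], 2) == [0, 1.5, 0, 0]  # 12x12x12: 'three halves' solitons
```

The last line reproduces the fractional soliton content of the non-evolvable path $12 \otimes 12 \otimes 12$.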
For any path of the form $p$ in \eqref{eq:oct23_1}, define its weight to be
$\wt (p) = \sum_{i=1}^L (B_i^{(1)}-B_i^{(2)})$.
In the following examples, we restrict ourselves to
paths with non-negative weights.
\begin{example}
Set $L=2, \tp{s}=2$.
Then we have $\mathcal{E}_{j} = \min (B_1+B_2, \bar{B}_1+\bar{B}_2, j)$.
There is one path with no solitons, $11 \otimes 11$,
two paths with one soliton of length one, $11 \otimes 12$ and $12 \otimes 11$,
and three paths with one soliton of length two,
$11 \otimes 22, 22 \otimes 11, 12 \otimes 12$.
The last one is the non-evolvable path in Example \ref{ex:nov13_2}.
\end{example}
\begin{example}
Set $L=3, \tp{s}=2$.
Then we have $\mathcal{E}_{j} =
\min (B_1+B_2+B_3, \bar{B}_1+\bar{B}_2+\bar{B}_3,
j+\min(B_1,\bar{B}_1,B_2, \bar{B}_2,B_3,\bar{B}_3), 3j/2)$.
There is one path with no solitons, $11 \otimes 11 \otimes 11$,
three paths with one soliton of length one, $11 \otimes 11 \otimes 12,
11 \otimes 12 \otimes 11, 12 \otimes 11 \otimes 11$,
six paths with one soliton of length two
\begin{align*}
&11 \otimes 11 \otimes 22, 11 \otimes 22 \otimes 11, 22 \otimes 11 \otimes 11,\\
&11 \otimes 12 \otimes 12, 12 \otimes 12 \otimes 11, 12 \otimes 11 \otimes 12,
\end{align*}
six paths with one soliton of length three
\begin{align*}
&11 \otimes 12 \otimes 22, 12 \otimes 22 \otimes 11, 22 \otimes 11 \otimes 12,\\
&11 \otimes 22 \otimes 12, 22 \otimes 12 \otimes 11, 12 \otimes 11 \otimes 22,
\end{align*}
and one path with `three halves' solitons of length two, $12 \otimes 12 \otimes 12$.
The last one, which has a fractional number of solitons,
is the non-evolvable path in Example \ref{ex:nov13_3}.
\end{example}
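The soliton counts in the two examples above can be reproduced from the formula for $m_j$. The following sketch is ours, not part of the argument; it assumes that $B_i$ counts the $2$'s and $\bar{B}_i$ the $1$'s in box $i$ (our reading of the tropical variables).

```python
def m(E, j):
    # number of solitons of length j from the tropicalized energies E(j),
    # via the concavity relation m_j = -E_{j-1} + 2 E_j - E_{j+1}
    return -E(j - 1) + 2 * E(j) - E(j + 1)

# L = 2, tp(s) = 2: E_j = min(B_1 + B_2, barB_1 + barB_2, j)
def E_12_12(j):                 # path 12 (x) 12: B = (1,1), barB = (1,1)
    return min(1 + 1, 1 + 1, j)

def E_11_12(j):                 # path 11 (x) 12: B = (0,1), barB = (2,1)
    return min(0 + 1, 2 + 1, j)

print([m(E_12_12, j) for j in (1, 2, 3)])   # one soliton of length two
print([m(E_11_12, j) for j in (1, 2, 3)])   # one soliton of length one

# L = 3, tp(s) = 2: the non-evolvable path 12 (x) 12 (x) 12
def E_fr(j):
    return min(3, 3, j + 1, 3 * j / 2)

print([m(E_fr, j) for j in (1, 2, 3)])      # 'three halves' solitons of length two
```

The last print yields the fractional count $m_2 = 3/2$ discussed in the example.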
In the box-ball systems, $\tp{l}$ is interpreted as the capacity of
a carrier that carries the balls.
When the time evolution of the box-ball system is given by
a carrier with the capacity $\tp{l}$, it is known that
a soliton of length $j$ has a constant velocity $\min (j, \tp{l})$ when
the soliton is
sufficiently separated from the other solitons.
So as $\tp{l}$ becomes larger,
the differences between the speeds of the solitons due to their lengths
also become larger.
Note that larger $\tp{l}$ implies smaller $l$.
Indeed,
in Figure \ref{fig:1} in section \ref{sec:2_1} we observe that
the differences between the speeds of the `solitons'
are larger for $l=0.00001$ than
for $l=0.1$.
\section{The case of general $\bm{n}$}\label{sec:3}
\subsection{Totally one-row tableaux case}\label{sec:3_1}
\subsubsection{Definition of the dynamical system.}\label{sec:3_1_1}
Based on the notions in \cite{F19, F18}, we
introduce the \textit{positive real} rational $1$-rectangle $\mathbb{Y}_1 =
(\R_{>0})^{n-1} \times \R_{>0}$.
Let $(\mathbf{x}, s)$ denote
an element of $\mathbb{Y}_1$
with $\mathbf{x} = (x^{(1)},\dots,x^{(n-1)})$, and set $x^{(n)}:=s/(x^{(1)} \cdots x^{(n-1)})$.
Furthermore, we define $x^{(i)}$ for arbitrary $i \in \Z$ to be a variable determined
from $\mathbf{x}$ by the relation $x^{(i)} = x^{(i+n)}$.
Let $E_{i,j}$ be an $n \times n$ matrix which has $1$ in the $(i, j)$ position and
$0$ elsewhere.
Given a fixed loop parameter $\lambda$, we define
\begin{equation}
\Lambda_j(\alpha_1,\dots,\alpha_n) = \sum_{i=1}^{n-j} \alpha_i E_{i+j,i} +
\lambda \sum_{i=n-j+1}^n \alpha_i E_{i+j-n,i},
\end{equation}
for $j \in \{ 0,1,\dots,n-1\}$.
For any $(\mathbf{x}, s) \in \mathbb{Y}_1$, let $g(\mathbf{x}, s; \lambda)$ denote
the associated unipotent crystal matrix defined to be
\begin{equation}\label{eq:sep29_1}
g(\mathbf{x}, s; \lambda) = \Lambda_0(x^{(1)},\dots,x^{(n)}) +
\Lambda_1(1,\dots,1).
\end{equation}
From its minor determinants of order $n-1$ we define another $n \times n$
matrix $g^*(\mathbf{x}, s; \lambda)$ as
\begin{equation}
g^*(\mathbf{x}, s; \lambda) = \sum_{j=0}^{n-1} \Lambda_j
\left(
\prod_{i=1}^{n-j-1} x^{(i)}, \prod_{i=1}^{n-j-1} x^{(i-1)}, \dots,\prod_{i=1}^{n-j-1} x^{(i-n+1)}
\right).
\end{equation}
\begin{example}\label{ex:nov20_2}
In the case of $n=4$ these matrices look like
\begin{align*}
g(\mathbf{x}, s; \lambda) &=
\begin{pmatrix}
x^{(1)} &0 & 0& \lambda \\
1 & x^{(2)} &0 &0 \\
0& 1 & x^{(3)} & 0\\
0& 0& 1 & x^{(4)}
\end{pmatrix},\\
g^*(\mathbf{x}, s; \lambda) &=
\begin{pmatrix}
x^{(1)} x^{(2)} x^{(3)}& \lambda & \lambda x^{(3)}& \lambda x^{(2)} x^{(3)} \\
x^{(1)} x^{(2)} &x^{(4)} x^{(1)} x^{(2)} & \lambda& \lambda x^{(2)}\\
x^{(1)} & x^{(4)} x^{(1)} & x^{(3)} x^{(4)} x^{(1)} & \lambda \\
1 &x^{(4)} & x^{(3)} x^{(4)} &x^{(2)} x^{(3)} x^{(4)}
\end{pmatrix}.
\end{align*}
These matrices are shifted and folded
versions of the ``whirl'' and the ``curl'' in \cite{LP12}.\end{example}
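The identity \eqref{eq:nov13_5} below can be verified numerically for this $n=4$ example. The following sketch (ours; the matrices are transcribed from the display above) computes the minors with row and column index sets in lexicographic order:

```python
import itertools
import numpy as np

n, s, lam = 4, 1.3, 0.7
rng = np.random.default_rng(0)
x = list(rng.uniform(0.5, 2.0, n - 1))
x.append(s / np.prod(x))                 # x^(4) = s / (x^(1) x^(2) x^(3))

def g(x, lam):
    # unipotent crystal matrix of eq. (sep29_1), n = 4 case
    G = np.diag(x)
    for i in range(1, n):
        G[i, i - 1] = 1.0
    G[0, n - 1] += lam
    return G

def gstar(x, lam):
    # g* as displayed in the example above
    x1, x2, x3, x4 = x
    return np.array([
        [x1 * x2 * x3, lam,          lam * x3,     lam * x2 * x3],
        [x1 * x2,      x4 * x1 * x2, lam,          lam * x2],
        [x1,           x4 * x1,      x3 * x4 * x1, lam],
        [1,            x4,           x3 * x4,      x2 * x3 * x4],
    ])

def C(A, r):
    # r-th contravariant alternating tensor representation: all order-r
    # minors of A, index sets in lexicographic order
    idx = list(itertools.combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

lhs = C(g(x, (-1) ** n * lam), n - 1)    # here (-1)^4 = +1
assert np.allclose(lhs, gstar(x, lam))   # eq. (nov13_5)
```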
In order to explain the definition of the matrix $g^*(\mathbf{x}, s; \lambda)$,
here we introduce a useful notion.
For any $n \times n$ matrix $A$ and $1 \leq r \leq n$, let $\mathsf{C}_r (A)$ be
the \textit{$r$-th contravariant alternating tensor representation} of $A$, which is
an ${n \choose r} \times {n \choose r}$
matrix that consists of all the order $r$ minor determinants of $A$
(See, for example \cite{Satake73}).
That is, we define
\begin{equation}\label{eq:sep2_2}
\mathsf{C}_r (A) = \{ \Delta_{I,J} (A) \}_{I,J \in {[n] \choose r}},
\end{equation}
where the indices are assumed to be in lexicographic order if they are
regarded as words, e.g.~$i_1 i_2 \ldots i_r$ for $I = \{ i_1< i_2<\dots< i_{r} \}$.
Then we have
\begin{equation}\label{eq:nov13_5}
\mathsf{C}_{n-1} (g(\mathbf{b}, s; (-1)^n\lambda)) = g^*(\mathbf{b}, s; \lambda).
\end{equation}
Now we consider the following matrix equation
\begin{equation}\label{eq:sep2_1}
g(\mathbf{b}, s; \lambda) g(\mathbf{a}, l; \lambda) =
g(\mathbf{a}', l; \lambda) g(\mathbf{b}', s; \lambda).
\end{equation}
For any $s, l \in \R_{>0}$ and
$(\mathbf{a},\mathbf{b}) \in (\R_{>0})^{2n-2} $,
there is a unique solution
$(\mathbf{a}',\mathbf{b}') \in (\R_{>0})^{2n-2} $ to
this matrix equation
(\cite{F18}, and see also Remark \ref{rem:nov19_3} for the case of $s=l$).
Let
$R^{(s,l)}: (\R_{>0})^{2n-2} \rightarrow (\R_{>0})^{2n-2} $ be a rational map
given by $R^{(s,l)}:(\mathbf{b},\mathbf{a}) \mapsto (\mathbf{a}',\mathbf{b}')$.
This is the geometric $R$-matrix in the present case, and
if we write
$\mathbf{a} = (a^{(1)},\dots,a^{(n-1)}), \mathbf{b} = (b^{(1)},\dots,b^{(n-1)})$
and so on, an explicit expression for the solution is given by \cite{LP12, Y01}
\begin{equation}\label{eq:nov19_2}
a'^{(j)} = a^{(j)} \frac{\kappa_{j+1}}{\kappa_j}, \quad
b'^{(j)} = b^{(j)} \frac{\kappa_j}{\kappa_{j+1}},
\end{equation}
where
\begin{equation}
\kappa_j = \kappa_j(\mathbf{b},\mathbf{a})=
\sum_{r=0}^{n-1} a^{(j)} \cdots a^{(j+r-1)} b^{(j+r+1)} \cdots b^{(j+n-1)}.
\end{equation}
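A minimal numerical check (ours) of the explicit solution \eqref{eq:nov19_2} against the matrix equation \eqref{eq:sep2_1}, here for $n=3$ with randomly chosen positive inputs:

```python
import numpy as np

n, l, s, lam = 3, 1.7, 0.9, 0.31
rng = np.random.default_rng(1)
a = rng.uniform(0.5, 2.0, n - 1)
b = rng.uniform(0.5, 2.0, n - 1)

def ext(v, last):
    # v^(1),...,v^(n-1) together with v^(n) = last / (v^(1)...v^(n-1)),
    # read periodically: v^(i) = v^(i+n)
    full = list(v) + [last / np.prod(v)]
    return lambda i: full[(i - 1) % n]

def g(v, last, lam):
    x = ext(v, last)
    G = np.diag([x(i) for i in range(1, n + 1)])
    for i in range(1, n):
        G[i, i - 1] = 1.0
    G[0, n - 1] += lam
    return G

def kappa(j):
    A, B = ext(a, l), ext(b, s)
    return sum(np.prod([A(j + t) for t in range(r)]) *
               np.prod([B(j + t) for t in range(r + 1, n)])
               for r in range(n))

kap = [kappa(j) for j in range(1, n + 1)]                  # kappa_1,...,kappa_n
a2 = [a[j] * kap[j + 1] / kap[j] for j in range(n - 1)]    # a'^(j)
b2 = [b[j] * kap[j] / kap[j + 1] for j in range(n - 1)]    # b'^(j)

lhs = g(b, s, lam) @ g(a, l, lam)
rhs = g(a2, l, lam) @ g(b2, s, lam)
assert np.allclose(lhs, rhs)      # eq. (sep2_1)
```

Note that $a'^{(j)} b'^{(j)} = a^{(j)} b^{(j)}$ holds automatically, since the $\kappa$'s cancel.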
From the definition of the geometric $R$-matrix in \cite{F18},
we see that:
\begin{lemma}\label{lem:2}
The elements of $\mathbf{a}' \in (\R_{>0})^{n-1} $ are determined by the
following formula
\begin{equation}\label{eq:aug28_1}
\begin{pmatrix}
a'^{(1)} a'^{(2)} \cdots \cdots a'^{(n-1)} \\
a'^{(1)} a'^{(2)} \cdots a'^{(n-2)} \\
\vdots \\
a'^{(1)} a'^{(2)} \\
a'^{(1)} \\
1
\end{pmatrix}
=
\frac{1}{\kappa_1(\mathbf{b},\mathbf{a})}
g^*(\mathbf{b}, s; l)
\begin{pmatrix}
a^{(1)} a^{(2)} \cdots \cdots a^{(n-1)} \\
a^{(1)} a^{(2)} \cdots a^{(n-2)} \\
\vdots \\
a^{(1)} a^{(2)} \\
a^{(1)} \\
1
\end{pmatrix}.
\end{equation}
\end{lemma}
\proof
For any $n \times n$ matrix $A$, we denote by $\pi (A)$ the $n \times (n-1)$
matrix obtained from $A$ by dropping its last column.
Let $\overline{\Theta}_{n-1} (\mathbf{a})$ and $\overline{\Theta}_{n-1} (\mathbf{a}')$ be the $n-1$ dimensional subspaces
of $\C^n$ spanned by the columns of $\pi (g(\mathbf{a}, l; \bullet) )$
and $\pi (g(\mathbf{a}', l; \bullet) )$, respectively.
We regard them as elements of the Grassmannian ${\rm Gr}(n-1,n)$.
Then by the definition of the geometric $R$-matrix
(Definition 5.1 of \cite{F18}), they are related by
$\overline{\Theta}_{n-1} (\mathbf{a}') = g(\mathbf{b}, s; (-1)^n l) \cdot \overline{\Theta}_{n-1} (\mathbf{a})$,
where the meaning of
$\cdot$ in the right hand side was given in section \ref{sec:1_3}.
As a matrix representative of $\overline{\Theta}_{n-1} (\mathbf{a}')$, we introduce an $n \times (n-1)$
matrix $\tilde{M} (\mathbf{a}')$ given by
$\tilde{M} (\mathbf{a}') = g(\mathbf{b}, s; (-1)^n l) \pi (g(\mathbf{a}, l; \bullet) )$.
Then we have
\begin{equation}
\frac{P_{I}(\overline{\Theta}_{n-1} (\mathbf{a}'))}{P_{J}(\overline{\Theta}_{n-1} (\mathbf{a}'))}=
\frac{\Delta_{I, [n-1]}(\tilde{M} (\mathbf{a}'))}{\Delta_{J, [n-1]}
(\tilde{M} (\mathbf{a}'))},
\end{equation}
for any $(n-1)$-subsets $I, J$ of $[n]$.
It is easy to see that the elements of $\mathbf{a}'$ are given by ratios of the Pl\"{u}cker
coordinates of $\overline{\Theta}_{n-1} (\mathbf{a}')$.
More explicitly,
their expressions are given by
$a'^{(i)} = P_{[n] \setminus \{ i+1 \}}(\overline{\Theta}_{n-1} (\mathbf{a}'))/P_{[n] \setminus \{ i \}}(\overline{\Theta}_{n-1} (\mathbf{a}'))$.
Therefore the $i$-th element of the left hand side of equation \eqref{eq:aug28_1}
is given by
\begin{equation}
a'^{(1)} a'^{(2)} \cdots a'^{(n-i)}=
\frac{P_{[n] \setminus \{ n-i+1 \}}(\overline{\Theta}_{n-1} (\mathbf{a}'))}{P_{[n] \setminus \{ 1 \}}(\overline{\Theta}_{n-1} (\mathbf{a}'))}=
\frac{\Delta_{[n] \setminus \{ n-i+1 \}, [n-1]}(\tilde{M} (\mathbf{a}'))}{\Delta_{[n] \setminus \{ 1 \}, [n-1]}
(\tilde{M} (\mathbf{a}'))}.
\end{equation}
By the Cauchy-Binet formula, we can write its numerator as
\begin{equation}
\Delta_{[n] \setminus \{ n-i+1 \}, [n-1]}(\tilde{M} (\mathbf{a}')) =
\sum_{j=1}^n g^*(\mathbf{b}, s; l)_{i,j} g^*(\mathbf{a}, l; \bullet)_{j,1},
\end{equation}
where we used \eqref{eq:nov13_5}.
On the other hand,
by the formulas for the geometric coenergy functions
(Definition 6.3 and Corollary 7.3 of \cite{F18}) we can write its denominator as
\begin{equation}
\Delta_{[n] \setminus \{ 1 \}, [n-1] }
(\tilde{M} (\mathbf{a}'))=
\kappa_1(\mathbf{b},\mathbf{a}).
\end{equation}
This completes the proof.
\qed
\begin{remark}\label{rem:nov19_3}
When $s=l$, the matrix equation \eqref{eq:sep2_1} has
a trivial solution $\mathbf{a}' = \mathbf{b}, \mathbf{b}' = \mathbf{a}$.
In this case, the non-trivial solution \eqref{eq:nov19_2} reduces to
this trivial one.
This claim is verified by using the following formula
\begin{align*}
&g^*(\mathbf{b},s;s)\\
&\quad ={\rm diag} (\prod_{i=1}^{n-1} b^{(i)}, \dots, b^{(1)}b^{(2)},b^{(1)},1)
\begin{pmatrix}
1 & \dots & 1 \\
\vdots & & \vdots \\
1 & \dots & 1
\end{pmatrix}
{\rm diag} (1, b^{(n)},b^{(n-1)}b^{(n)},\dots,\prod_{i=2}^{n} b^{(i)}),
\end{align*}
and Lemma \ref{lem:2}.
\end{remark}
As in the case of $n=2$,
we define $R_i^{(s,l)}$ and $\mathcal{R}^{(s,l)} = R_1^{(s,l)} \circ \cdots \circ R_L^{(s,l)}$
which are now maps from $ (\R_{>0})^{(L+1)(n-1)} $ to itself.
Given an arbitrary $(\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v}) \in (\R_{>0})^{(L+1)(n-1)} $,
let $\mathcal{R}^{(s,l)} (\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v}) =
(\mathbf{v}_1, \mathbf{b}'_1, \dots, \mathbf{b}'_L)$.
It is depicted by
\begin{equation}\label{eq:aug28_2}
\batten{\mathbf{v}_1}{\mathbf{b}_1}{\mathbf{b}_1'}{\mathbf{v}_2}\!\!\!
\batten{}{\mathbf{b}_2}{\mathbf{b}_2'}{\mathbf{v}_3}\!\!\!
\batten{}{}{}{\cdots\cdots}
\quad
\batten{}{}{}{\mathbf{v}_{L-1}}\,\,
\batten{}{\mathbf{b}_{L-1}}{\mathbf{b}_{L-1}'}{\mathbf{v}_{L}}\!\!\!
\batten{}{\mathbf{b}_L}{\mathbf{b}_L'}{\mathbf{v} \in (\R_{>0})^{(n-1)}.}
\end{equation}
Once again,
we would like to construct a
discrete time dynamical system on the space $ (\R_{>0})^{L(n-1)}$
using a map that sends $(\mathbf{b}_1, \dots, \mathbf{b}_L)$ to $(\mathbf{b}'_1, \dots, \mathbf{b}'_L)$
as a unit step of its time evolution.
We see that
the
$\mathbf{v}_1$ at the left end is a rational function of
the variables $(\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v})
\in (\R_{>0})^{(L+1)(n-1)}$
and the parameters $s,l \in \R_{>0}$,
because it is given by a composition of rational maps.
Again, by regarding the $\mathbf{b}_i$'s also as parameters, we obtain an
algebraic equation $\mathbf{v} = \mathbf{v}_1$ for the unknown $\mathbf{v}$
that ensures that the system has a
periodic boundary condition.
Then we have:
\begin{proposition}\label{lem:3}
For any $s,l \in \R_{>0}$ and $(\mathbf{b}_1, \dots, \mathbf{b}_L)
\in (\R_{>0})^{L(n-1)} $,
there is a unique positive real solution $\mathbf{v} \in(\R_{>0})^{n-1}$ to the
equation $\mathbf{v} = \mathbf{v}_1$.
\end{proposition}
\proof
Let $| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L)$
and $\mathsf{M}_l^{(1)}(| \mathbf{b} \rangle) = g^*(\mathbf{b}_{1}, s; l) \cdots g^*(\mathbf{b}_{L}, s; l)$ which we call a monodromy matrix.
By repeated use of Lemma \ref{lem:2} we have
\begin{align}
&\mathsf{M}_l^{(1)}(| \mathbf{b} \rangle)
\begin{pmatrix}
v^{(1)} v^{(2)} \cdots \cdots v^{(n-1)} \\
v^{(1)} v^{(2)} \cdots v^{(n-2)} \\
\vdots \\
v^{(1)} v^{(2)} \\
v^{(1)} \\
1
\end{pmatrix}=\kappa_1(\mathbf{b}_{L},\mathbf{v})
\cdots
\kappa_1(\mathbf{b}_{1},\mathbf{v}_{2})
\begin{pmatrix}
v_1^{(1)} v_1^{(2)} \cdots \cdots v_1^{(n-1)} \\
v_1^{(1)} v_1^{(2)} \cdots v_1^{(n-2)} \\
\vdots \\
v_1^{(1)} v_1^{(2)} \\
v_1^{(1)} \\
1
\end{pmatrix}.\nonumber\\
&
\label{eq:nov19_4}
\end{align}
By the Perron-Frobenius theorem, there exists a positive
eigenvector $\vec{\xi} = (\xi_1,\xi_2,\dots,\xi_n)^t$ of
the positive matrix $\mathsf{M}_l^{(1)}(| \mathbf{b} \rangle)$,
and it is unique up to a scalar multiple.
By using this $\vec{\xi}$,
define $\mathbf{v}= (v^{(1)},\dots,v^{(n-1)}) \in(\R_{>0})^{n-1}$ to be a vector
given by
\begin{equation}\label{eq:nov19_5}
v^{(1)} = \xi_{n-1}/\xi_{n}, v^{(2)} = \xi_{n-2}/\xi_{n-1}, \dots,
v^{(n-1)} = \xi_{1}/\xi_{2}.
\end{equation}
Equation \eqref{eq:nov19_4} is valid for any $\mathbf{v} \in(\R_{>0})^{n-1}$
so in particular for this $\mathbf{v}$ given by \eqref{eq:nov19_5}.
On the other hand, by definition this unique $\mathbf{v}$ also satisfies
\begin{equation}\label{eq:nov25_2}
\mathsf{M}_l^{(1)}(| \mathbf{b} \rangle)
\begin{pmatrix}
v^{(1)} v^{(2)} \cdots \cdots v^{(n-1)} \\
v^{(1)} v^{(2)} \cdots v^{(n-2)} \\
\vdots \\
v^{(1)} v^{(2)} \\
v^{(1)} \\
1
\end{pmatrix}=E^{(1)}_l
\begin{pmatrix}
v^{(1)} v^{(2)} \cdots \cdots v^{(n-1)} \\
v^{(1)} v^{(2)} \cdots v^{(n-2)} \\
\vdots \\
v^{(1)} v^{(2)} \\
v^{(1)} \\
1
\end{pmatrix},
\end{equation}
where $E^{(1)}_l$ is the dominant (or Perron-Frobenius) eigenvalue
of the monodromy matrix.
By equating the right hand side of the equation \eqref{eq:nov19_4} with
that of \eqref{eq:nov25_2},
we see that this $\mathbf{v}$ is a solution to the algebraic equation
$\mathbf{v} = \mathbf{v}_1$, and that
there is no other solution because $\vec{\xi}$ is unique.
\qed
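The construction in this proof is easy to carry out numerically. The sketch below (ours; $n=3$, $L=4$, with $g^*$ obtained from \eqref{eq:nov13_5}) computes the monodromy matrix, its Perron-Frobenius eigenvector, and the resulting carrier of \eqref{eq:nov19_5}:

```python
import itertools
import numpy as np

n, L, s, l = 3, 4, 0.8, 1.5
rng = np.random.default_rng(2)
bs = [rng.uniform(0.5, 2.0, n - 1) for _ in range(L)]

def g(v, last, lam):
    x = list(v) + [last / np.prod(v)]
    G = np.diag(x)
    for i in range(1, n):
        G[i, i - 1] = 1.0
    G[0, n - 1] += lam
    return G

def gstar(v, last, lam):
    # g* via eq. (nov13_5): matrix of (n-1)-minors of g(v, last, (-1)^n lam),
    # index sets in lexicographic order
    A = g(v, last, (-1) ** n * lam)
    idx = list(itertools.combinations(range(n), n - 1))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

# monodromy matrix and its Perron-Frobenius eigenvector
M = np.linalg.multi_dot([gstar(b, s, l) for b in bs])
w, V = np.linalg.eig(M)
k = np.argmax(w.real)
xi = V[:, k].real
xi = xi * np.sign(xi[0])
assert (xi > 0).all()             # positive eigenvector, unique up to scale

# the carrier, eq. (nov19_5): v^(1) = xi_2/xi_3, v^(2) = xi_1/xi_2 for n = 3
v = [xi[1] / xi[2], xi[0] / xi[1]]
vec = np.array([v[0] * v[1], v[0], 1.0])
assert np.allclose(M @ vec, w[k].real * vec)
```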
With this unique solution $\mathbf{v}$
in Proposition \ref{lem:3} we
define $T^{(1)}_l: (\R_{>0})^{L(n-1)} \rightarrow (\R_{>0})^{L(n-1)}$
to be a map given by
\begin{equation}\label{eq:aug30_1}
T^{(1)}_l (\mathbf{b}_1, \dots, \mathbf{b}_L) = (\mathbf{b}'_1, \dots, \mathbf{b}'_L),
\end{equation}
where the right hand side is determined by the relation
$\mathcal{R}^{(s,l)} (\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v}) =
(\mathbf{v}, \mathbf{b}'_1, \dots, \mathbf{b}'_L)$.
We call this map a \textit{time evolution}, and $\mathbf{v}$ a \textit{carrier} for the state $(\mathbf{b}_1, \dots, \mathbf{b}_L)$
associated with $T^{(1)}_l$.
This time evolution defines a family of discrete time dynamical systems
on the space $ (\R_{>0})^{L(n-1)}$.
As in the $n=2$ case, we call such a system a closed geometric crystal chain.
We note that any homogeneous state is a fixed point of this dynamical system.
To verify this claim, consider the result of
Proposition \ref{lem:3} for the one site $L=1$ case.
Then by \eqref{eq:nov19_2}, we have $\mathbf{b}'_1 = \mathbf{b}_1$.
Let $\mathbf{v}$ be the carrier for the state $(\mathbf{b}_1)$.
Then we have $\mathcal{R}^{(s,l)} (\mathbf{b}_1, \dots, \mathbf{b}_1, \mathbf{v}) =
(\mathbf{v}, \mathbf{b}_1, \dots, \mathbf{b}_1)$.
Therefore, this $\mathbf{v}$ is also the unique
carrier for the state $(\mathbf{b}_1, \dots, \mathbf{b}_1)$ and we have
$T^{(1)}_l (\mathbf{b}_1, \dots, \mathbf{b}_1) = (\mathbf{b}_1, \dots, \mathbf{b}_1)$
for any $l \in \R_{>0}$.
Also note that the time evolution $T^{(1)}_{s}$ produces
a cyclic shift by one spatial unit.
This is a consequence of the claim in Remark \ref{rem:nov19_3}.
\subsubsection{Conservation laws.}\label{sec:3_1_2}
By combining the claims in Remark \ref{rem:nov19_3}
and Proposition \ref{lem:3}
we have the following:
\begin{theorem}\label{th:main}
For any $s,l \in \R_{>0}$ and $(\mathbf{b}_1, \dots, \mathbf{b}_L)
\in (\R_{>0})^{L(n-1)} $,
there is a unique positive real solution
$(\mathbf{v}, \mathbf{b}'_1, \dots, \mathbf{b}'_L) \in(\R_{>0})^{(L+1)(n-1)}$ to the
following matrix equation
\begin{equation}\label{eq:nov20_1}
g(\mathbf{b}_1, s; \lambda) \cdots g(\mathbf{b}_L, s; \lambda) g(\mathbf{v}, l; \lambda) =
g(\mathbf{v}, l; \lambda) g(\mathbf{b}'_1, s; \lambda) \cdots g(\mathbf{b}'_L, s; \lambda).
\end{equation}
\end{theorem}
We introduce an $n \times n$ matrix
$\mathcal{L}(| \mathbf{b} \rangle ; \lambda) $
for $| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L)$
as follows
\begin{equation}\label{eq:sep29_2}
\mathcal{L}(| \mathbf{b} \rangle ; \lambda) =
g(\mathbf{b}_{1}, s; \lambda)
\cdots
g(\mathbf{b}_{L}, s; \lambda).
\end{equation}
We call this matrix a Lax matrix.
Due to the matrix equation \eqref{eq:nov20_1},
the time evolution \eqref{eq:aug30_1} is described by a
discrete time analogue of the Lax equation
\begin{equation}
\mathcal{L}(T^{(1)}_l | \mathbf{b} \rangle ; \lambda)=
g(\mathbf{v},l; \lambda)^{-1}
\mathcal{L}(| \mathbf{b} \rangle ; \lambda) g(\mathbf{v},l; \lambda).
\end{equation}
In order to study the characteristic polynomial of the Lax matrix,
here we present a well-known result related to the contravariant
alternating tensor representation \eqref{eq:sep2_2}.
That is, the characteristic polynomial of any $n \times n$ matrix $A$ is given by
\begin{equation*}
\det (x \mathbb{I}_n - A)=
x^n + \sum_{k=1}^{n-1} (-1)^{n-k} {\rm Tr}\mathsf{C}_{n-k}( A)
x^{k}
+ (-1)^n\det A.
\end{equation*}
(This formula is derived by a direct calculation of the determinant; see, for example,
\cite{Satake73}.
An alternative derivation for the case of $A \in {\rm M}_n(\C)$ will be
given as a consequence of Corollary \ref{cor:oct12_2}.)
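This expansion of the characteristic polynomial is easy to confirm numerically for a random matrix (a sketch of ours, not part of the text):

```python
import itertools
import numpy as np

n = 4
rng = np.random.default_rng(3)
A = rng.uniform(-1.0, 1.0, (n, n))

def trC(A, r):
    # Tr C_r(A) = sum of all principal order-r minors of A
    return sum(np.linalg.det(A[np.ix_(I, I)])
               for I in itertools.combinations(range(n), r))

# coefficients of det(x I - A), highest power first
coeffs = ([1.0]
          + [(-1) ** (n - k) * trC(A, n - k) for k in range(n - 1, 0, -1)]
          + [(-1) ** n * np.linalg.det(A)])

# compare against a direct evaluation of det(x I - A) at sample points
for x0 in (0.0, 0.5, -1.3):
    val = sum(c * x0 ** (n - i) for i, c in enumerate(coeffs))
    assert np.isclose(val, np.linalg.det(x0 * np.eye(n) - A))
```

In particular the $x^{n-1}$ coefficient is $-{\rm Tr}\,A$ and the constant term is $(-1)^n \det A$, as expected.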
The Lax representation implies that the characteristic polynomial
(and hence every coefficient therein) of
the matrix $\mathcal{L}(| \mathbf{b} \rangle ; \lambda)$
is invariant under the time evolution $T^{(1)}_l$ for any $l \in \R_{>0}$.
Therefore,
all the non-trivial conserved quantities of a closed geometric crystal chain are
contained in the traces ${\rm Tr}\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda)) =
\sum_{I \in {[n] \choose n-k}} \Delta_{I,I} (\mathcal{L}(| \mathbf{b} \rangle ; \lambda)) $
for $1 \leq k \leq n-1$,
besides $\det \mathcal{L}(| \mathbf{b} \rangle ; \lambda) = (s+(-1)^{n-1}\lambda)^L$
which is trivially conserved.
More precisely, since each
${\rm Tr}\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda))$
is a polynomial in the loop parameter $\lambda$,
all of its coefficients are separately conserved.
As in the case of $n=2$, there is an explicit expression for the matrix elements
in the $(i, j)$ position of the Lax matrix (Lemma 6.1 of \cite{ILP17}) given by
\begin{equation*}
\mathcal{L}(| \mathbf{b} \rangle ; \lambda)_{ij}
=\sum_{m \geq 0} e^{(i)}_{j-i+L-mn} (| \mathbf{b} \rangle) \lambda^m ,
\end{equation*}
where the loop elementary symmetric functions $e^{(r)}_m (| \mathbf{b} \rangle)$ are defined as
\begin{equation*}
e^{(r)}_m (| \mathbf{b} \rangle)= \sum_{1 \leq j_1 <j_2 < \dots < j_m \leq L}
b_{j_1}^{(r+1-j_1)} b_{j_2}^{(r+2-j_2)} \cdots b_{j_m}^{(r+m-j_m)},
\end{equation*}
and
$e^{(r)}_0 (| \mathbf{b} \rangle)=1,\, e^{(r)}_m (| \mathbf{b} \rangle)=0 \, (m<0 \quad
\mbox{or} \quad m>L)$.
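The entry formula and the vanishing conventions for $e^{(r)}_m$ can be checked against a direct product of the matrices $g(\mathbf{b}_j, s; \lambda)$. A sketch of ours, for $n=3$ and $L=4$ (the vanishing for $m>L$ is built into the empty sum over index sets):

```python
import itertools
import numpy as np

n, L, s, lam = 3, 4, 1.2, 0.6
rng = np.random.default_rng(4)
bs = []
for _ in range(L):
    b = list(rng.uniform(0.5, 2.0, n - 1))
    b.append(s / np.prod(b))          # b_j^(n); superscripts are read mod n
    bs.append(b)

def bsup(j, r):
    # b_j^(r) with the superscript taken modulo n
    return bs[j - 1][(r - 1) % n]

def e(r, m):
    # loop elementary symmetric function e^(r)_m(|b>)
    if m < 0:
        return 0.0
    return float(sum(np.prod([bsup(j, r + t - j) for t, j in enumerate(js, 1)])
                     for js in itertools.combinations(range(1, L + 1), m)))

def g(b, lam):
    G = np.diag(b)
    for i in range(1, n):
        G[i, i - 1] = 1.0
    G[0, n - 1] += lam
    return G

Lax = np.linalg.multi_dot([g(b, lam) for b in bs])
pred = np.array([[sum(e(i, j - i + L - m * n) * lam ** m for m in range(L + 1))
                  for j in range(1, n + 1)]
                 for i in range(1, n + 1)])
assert np.allclose(Lax, pred)
```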
Therefore, an expression for the conserved quantities
of the closed geometric crystal chains is obtained by using
this explicit formula for the matrix elements to calculate
the minor determinants $\Delta_{I,I}(\mathcal{L}(| \mathbf{b} \rangle ; \lambda))$.
We will discuss another expression for the conserved quantities
in section \ref{sec:4}.
\subsection{The case of rectangular tableaux for the carriers}\label{sec:3_2}
\subsubsection{Definition of the geometric $R$-matrices.}\label{sec:3_2_1}
As in the one-row tableaux case, we
introduce the positive real rational $k$-rectangle by $\mathbb{Y}_k =
\overline{\mathbb{Y}}_k \times \R_{>0}$, where $\overline{\mathbb{Y}}_k:=(\R_{>0})^{k(n-k)}$.
Define
\begin{equation}
R_k = \{ (i,j) \vert 1 \leq i \leq k, i \leq j \leq i+n-k-1 \},
\end{equation}
to be an index set.
Let $(\mathbf{x}, l)$ denote an element of $\mathbb{Y}_k$
with $\mathbf{x} = (x^{(i,j)})_{(i,j) \in R_k}$, and set
$x^{(i,i+n-k)}:=l/\prod_{j=i}^{i+n-k-1} x^{(i,j)}$ for $1 \leq i \leq k$.
In its associated rectangular tableau with $k$-rows,
$\tp{x^{(i,j)}}$ denotes the number of $j$'s in the $i$th row
and $\tp{l}$ denotes the width of the tableau.
By Definition 4.7 of \cite{F19}, we associate to each such $\mathbf{x}$ a planar network $N(\mathbf{x})$ as in Figure \ref{fig:2}, where
diagonal edges are weighted by $x^{(i,j)}$ and vertical ones by $1$.
\begin{figure}[htbp]
\centering
\includegraphics[height=8cm]{CGC3.eps}
\caption{The planar network $N(\mathbf{x})$.
Vertical edges have weight 1.}
\label{fig:2}
\end{figure}
It has $n$ sources labeled $1,\dots,n$ and $n-k$ sinks labeled $1',\dots,(n-k)'$.
Define the weight of a path to be the product of the weights of the edges in the
path.
Let $M(\mathbf{x})$ be the $n \times (n-k)$ matrix
associated to the network $N(\mathbf{x})$
whose $(i,j)$-entry is the sum of the weights of all paths from source $i$ to
sink $j'$.
\begin{example}\label{ex:dec22_1}
In the case of $n=4$
and $\mathbf{x}, \mathbf{y}, \mathbf{z}$ for $\overline{\mathbb{Y}}_{k=1,2,3}$
these matrices look like
\begin{equation*}
M(\mathbf{x}) =
\begin{pmatrix}
x^{(1,1)} & 0& 0 \\
1 & x^{(1,2)} &0 \\
0& 1 & x^{(1,3)} \\
0&0 & 1
\end{pmatrix},\quad
M(\mathbf{y}) =
\begin{pmatrix}
y^{(1,1)} & 0 \\
y^{(2,2)} & y^{(1,2)}y^{(2,2)}\\
1 & y^{(1,2)} + y^{(2,3)} \\
0 & 1
\end{pmatrix},
\end{equation*}
and
\begin{equation*}
M(\mathbf{z}) =
\begin{pmatrix}
z^{(1,1)} \\
z^{(2,2)}\\
z^{(3,3)} \\
1
\end{pmatrix}.
\end{equation*}
These matrices are given by the planar networks
in Figure \ref{fig:2.55}.
\clearpage
\begin{figure}[htbp]
\centering
\includegraphics[height=5cm]{CGC20201222.eps}
\caption{The planar networks $N(\mathbf{x}), N(\mathbf{y}), N(\mathbf{z})$
for the matrices $M(\mathbf{x}), M(\mathbf{y}), M(\mathbf{z})$.}
\label{fig:2.55}
\end{figure}
\end{example}
Let $\overline{\Theta}_{n-k} (\mathbf{x})$ be the $(n-k)$-dimensional subspace
of $\C^n$ spanned by the columns of $M (\mathbf{x})$,
regarded as a point in the Grassmannian variety ${\rm Gr}(n-k,n)$.
This notation reflects that there is a map $\overline{\Theta}_{n-k}: \overline{\mathbb{Y}}_k
\rightarrow {\rm Gr}(n-k,n)$, which is essentially identical to
what is called the \textit{Gelfand-Tsetlin parametrization}
of ${\rm Gr}(n-k,n) \times \C^{\times}$ in \cite{F19, F18}.
It is known that the \textit{positivity} of the Pl\"{u}cker
coordinates holds (Corollary 4.15 of \cite{F19}):
for every $I \in {[n] \choose n-k}$, $P_I(\overline{\Theta}_{n-k} (\mathbf{x}))$
is a non-zero (homogeneous) polynomial in the quantities $x^{(i,j)}$
with non-negative integer coefficients.
In the present setup, where we deal only with
positive real parameters and variables,
this implies that
for any $\mathbf{x} \in \overline{\mathbb{Y}}_k$
every $P_I(\overline{\Theta}_{n-k} (\mathbf{x}))$ takes its value in $\R_{>0}$.
In other words, $\overline{\Theta}_{n-k} (\mathbf{x})$ is a point
in the \textit{totally positive Grassmannian}
${\rm Gr}(n-k,n)_{>0} \subset {\rm Gr}(n-k,n)$ \cite{L16}.
Moreover, the map $\overline{\Theta}_{n-k}$
is a bijection between $\overline{\mathbb{Y}}_k$
and ${\rm Gr}(n-k,n)_{>0}$.
In fact, for $M \in {\rm Gr}(n-k,n)_{>0}$ its inverse image
$\mathbf{x}= (x^{(i,j)})_{(i,j) \in R_k} = (\overline{\Theta}_{n-k})^{-1}(M)
\in \overline{\mathbb{Y}}_k$
is given by the following formula (Proposition 4.3 of \cite{F19})
\begin{equation}\label{eq:sep8_1}
x^{(i,i)} = \frac{P_{J_{i,i}}(M)}{P_{J_{i+1,i}}(M)}
\quad \mbox{or} \quad
x^{(i,j)} = \frac{P_{J_{i,j}}(M)P_{J_{i+1,j-1}}(M)}{P_{J_{i+1,j}}(M)P_{J_{i,j-1}}(M)},
\end{equation}
for $j \in [i+1, i+n-k-1]$.
Here we used the
notation for the \textit{basic subsets}
$J_{i,j}:=[i,j] \cup [k+j-i+2,n] \in {[n] \choose n-k}$.
Let $g^{(n-k)}(\mathbf{x},l;\lambda)$ denote
an $n \times n$ matrix that is introduced in \cite{F19} to
define the unipotent crystal map.
The elements in the $(i,j)$ position of this matrix are defined by
\begin{equation*}
g^{(n-k)}(\mathbf{x},l;\lambda)_{ij}=c_{ij}
\frac{P_{[j-n+k+1,j-1] \cup \{i\}}(\overline{\Theta}_{n-k} (\mathbf{x}))}{P_{[j-n+k,j-1]}(\overline{\Theta}_{n-k} (\mathbf{x}))},
\quad
c_{ij} =
\begin{cases}
1 & \mbox{if } j \leq n-k, \\
l & \mbox{if } j > n-k \mbox{ and } i \geq j, \\
\lambda & \mbox{if } j > n-k \mbox{ and } i < j.
\end{cases}
\end{equation*}
\begin{example}\label{ex:dec22_2}
For the $\mathbf{x}, \mathbf{y}, \mathbf{z}$
in Example \ref{ex:dec22_1} for $n=4$ and
$\overline{\mathbb{Y}}_{k=1,2,3}$, we have
\begin{equation*}
g^{(3)}(\mathbf{x},l;\lambda) =
\begin{pmatrix}
\frac{P_{134}}{P_{234}} &0 & 0& \lambda \\
1 & \frac{P_{124}}{P_{134}} &0 &0\\
0& 1 & \frac{P_{123}}{P_{124}} &0 \\
0& 0& 1 & l \frac{P_{234}}{P_{123}}
\end{pmatrix}=
\begin{pmatrix}
x^{(1,1)} & 0& 0& \lambda \\
1 & x^{(1,2)} & 0&0 \\
0 & 1 & x^{(1,3)} & 0\\
0&0 & 1 & x^{(1,4)}
\end{pmatrix},
\end{equation*}
\begin{equation*}
g^{(2)}(\mathbf{y},l;\lambda) =
\begin{pmatrix}
\frac{P_{14}}{P_{34}} &0 & \lambda & \lambda \frac{P_{13}}{P_{23}}\\
\frac{P_{24}}{P_{34}} & \frac{P_{12}}{P_{14}} & 0 & \lambda \\
1 & \frac{P_{13}}{P_{14}} & l \frac{P_{23}}{P_{12}} & 0 \\
0&1 & l \frac{P_{24}}{P_{12}} & l \frac{P_{34}}{P_{23}}
\end{pmatrix}=
\begin{pmatrix}
y^{(1,1)} & 0 & \lambda & \lambda \frac{y^{(1,1)}(y^{(1,2)}+y^{(2,3)})}{y^{(2,2)}y^{(2,3)}} \\
y^{(2,2)} & y^{(1,2)}y^{(2,2)} & 0 & \lambda \\
1 & y^{(1,2)} + y^{(2,3)} & y^{(1,3)}y^{(2,3)} & 0 \\
0 & 1 & y^{(1,3)} & y^{(2,4)}
\end{pmatrix},
\end{equation*}
and
\begin{equation*}
g^{(1)}(\mathbf{z},l;\lambda) =
\begin{pmatrix}
\frac{P_{1}}{P_{4}} &\lambda & \lambda \frac{P_{1}}{P_{2}}& \lambda \frac{P_{1}}{P_{3}}\\
\frac{P_{2}}{P_{4}} & l \frac{P_{2}}{P_{1}} & \lambda & \lambda \frac{P_{2}}{P_{3}}\\
\frac{P_{3}}{P_{4}} & l \frac{P_{3}}{P_{1}} & l \frac{P_{3}}{P_{2}} & \lambda \\
1& l \frac{P_{4}}{P_{1}}& l \frac{P_{4}}{P_{2}} & l \frac{P_{4}}{P_{3}}
\end{pmatrix}=
\begin{pmatrix}
z^{(1,1)} & \lambda & \lambda \frac{z^{(1,1)}}{z^{(2,2)}}& \lambda \frac{z^{(1,1)} }{z^{(3,3)} } \\
z^{(2,2)} & z^{(1,2)}z^{(2,2)} & \lambda & \lambda \frac{z^{(2,2)}}{z^{(3,3)}} \\
z^{(3,3)} & z^{(1,2)}z^{(3,3)} & z^{(2,3)}z^{(3,3)} & \lambda \\
1 & z^{(1,2)} & z^{(2,3)} & z^{(3,4)}
\end{pmatrix}.
\end{equation*}
Note that the first $n-k$ columns of $g^{(n-k)}(\mathbf{x},l;\lambda)$
coincide with those of
the matrix $M (\mathbf{x})$.
\end{example}
Several
properties of the matrix $g^{(n-k)}(\mathbf{x},l;\lambda)$
presented in \cite{F18}
are listed below (in our notation).
\begin{proposition}[\cite{F18}: Proposition 3.17 and Corollary 3.18]\label{pr:nov27_1}
Let $(\mathbf{x}, l) \in \mathbb{Y}_k, A = g^{(n-k)}(\mathbf{x},l;\lambda)$,
and $B \in {\rm M}_n (\C [\lambda, \lambda^{-1}])$.
\begin{enumerate}
\item The first $n-k$ columns of $A$ span the subspace $\overline{\Theta}_{n-k} (\mathbf{x})$.
\item The matrix $A \vert_{\lambda = (-1)^{n-k-1}l}$ has rank $n-k$.
\item The determinant of $A$ is $(l + (-1)^{n-k} \lambda)^k$.
\item The first $n-k$ columns of $A \cdot B \vert_{\lambda = (-1)^{n-k-1}l}$
are contained in the subspace $\overline{\Theta}_{n-k} (\mathbf{x})$.
\end{enumerate}
\end{proposition}
Now we consider the following matrix equation
\begin{equation}\label{eq:sep10_3}
g^{(n-1)}(\mathbf{b}, s; \lambda) g^{(n-k)}(\mathbf{a}, l; \lambda) =
g^{(n-k)}(\mathbf{a}', l; \lambda) g^{(n-1)}(\mathbf{b}', s; \lambda).
\end{equation}
\begin{proposition}\label{pr:nov27_4}
For any $s, l \in \R_{>0}$ and
$(\mathbf{b},\mathbf{a}) \in \overline{\mathbb{Y}}_1 \times \overline{\mathbb{Y}}_k$,
there is a unique solution
$(\mathbf{a}',\mathbf{b}') \in \overline{\mathbb{Y}}_k \times \overline{\mathbb{Y}}_1$ to
the matrix equation \eqref{eq:sep10_3}.
\end{proposition}
\proof
Up to differences in notation,
an explicit expression for a solution to \eqref{eq:sep10_3}
is available
in section 7.1 of \cite{F18}, where the map
that sends $(\mathbf{b},\mathbf{a})$ to $(\mathbf{a}',\mathbf{b}')$
is denoted by $\Theta R$.
It remains to prove the uniqueness of the solution.
Suppose there are two solutions $(\mathbf{a}',\mathbf{b}')$ and
$(\mathbf{a}'',\mathbf{b}'')$, hence
\begin{equation}\label{eq:nov27_2}
g^{(n-k)}(\mathbf{a}', l; \lambda) g^{(n-1)}(\mathbf{b}', s; \lambda)=
g^{(n-k)}(\mathbf{a}'', l; \lambda) g^{(n-1)}(\mathbf{b}'', s; \lambda).
\end{equation}
Set $\lambda = (-1)^{n-k-1}l$ in the equation \eqref{eq:sep10_3}.
By applying
item $(iv)$ of Proposition \ref{pr:nov27_1} on the right hand side, we
see that the first $n-k$ columns of both sides of the equation
are contained in the $(n-k)$-dimensional subspace
$\overline{\Theta}_{n-k} (\mathbf{a}')$,
and also in $\overline{\Theta}_{n-k} (\mathbf{a}'')$ by \eqref{eq:nov27_2}.
On the other hand,
one observes that
the bottom left $(n-k) \times (n-k)$ submatrix of the
left hand side of the equation \eqref{eq:sep10_3} is independent of $\lambda$,
and its determinant is given by
the geometric coenergy function (Corollary 7.3 of \cite{F18}),
\begin{align}
E(\mathbf{b},\mathbf{a}) &= \Delta_{[k+1,n],[n-k]}(g^{(n-1)}(\mathbf{b}, s; \bullet) g^{(n-k)}(\mathbf{a}, l; \bullet)) \nonumber\\
&= \sum_{m=0}^{n-k} a^{(k,k)} a^{(k,k+1)} \cdots a^{(k,k+m-1)}
b^{(1,k+m+1)} b^{(1,k+m+2)} \cdots b^{(1,n)}. \label{eq:nov13_7}
\end{align}
Since this quantity does not vanish, the first $n-k$ columns of
both sides of equation \eqref{eq:sep10_3} always have full rank.
(This is true even if we set $s=l$ with $k$ odd, in which case the
matrix $g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1}s)$
is not invertible, having rank $n-1$.)
Therefore we have $\overline{\Theta}_{n-k} (\mathbf{a}') =
\overline{\Theta}_{n-k} (\mathbf{a}'')$, and
then $\overline{\Theta}_{n-1} (\mathbf{b}') = \overline{\Theta}_{n-1} (\mathbf{b}'')$
by equation \eqref{eq:nov27_2}.
Since the map $\overline{\Theta}_{n-k}$
gives a bijection between $\overline{\mathbb{Y}}_k$
and ${\rm Gr}(n-k,n)_{>0}$, we have the desired result.
\qed
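For $k=1$, where $g^{(n-1)}$ and $g^{(n-k)}$ both reduce to the matrix $g$ of section \ref{sec:3_1}, the coenergy \eqref{eq:nov13_7} can be checked numerically against the minor determinant in the proof above. A sketch of ours, for $n=3$:

```python
import numpy as np

n, l, s, lam = 3, 1.4, 0.7, 0.37
rng = np.random.default_rng(5)
a = list(rng.uniform(0.5, 2.0, n - 1)); a.append(l / np.prod(a))
b = list(rng.uniform(0.5, 2.0, n - 1)); b.append(s / np.prod(b))

def g(x, lam):
    # one-row unipotent crystal matrix, as in section 3.1
    G = np.diag(x)
    for i in range(1, n):
        G[i, i - 1] = 1.0
    G[0, n - 1] += lam
    return G

P = g(b, lam) @ g(a, lam)
minor = np.linalg.det(P[1:, :n - 1])    # bottom-left (n-1) x (n-1) minor

# k = 1 case of eq. (nov13_7): E(b,a) = sum_m a^(1)...a^(m) b^(m+2)...b^(n)
E = sum(np.prod(a[:m]) * np.prod(b[m + 1:]) for m in range(n))
assert np.isclose(minor, E)

# the minor is independent of lambda, as claimed
assert np.isclose(np.linalg.det((g(b, 9.9) @ g(a, 9.9))[1:, :n - 1]), E)
```

For $k=1$ this quantity coincides with $\kappa_1(\mathbf{b},\mathbf{a})$ of section \ref{sec:3_1}.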
Based on this proposition, define
$R^{(s,l)}: \overline{\mathbb{Y}}_1 \times \overline{\mathbb{Y}}_k \rightarrow \overline{\mathbb{Y}}_k \times \overline{\mathbb{Y}}_1$ to be a rational map
given by $R^{(s,l)}:(\mathbf{b},\mathbf{a}) \mapsto (\mathbf{a}',\mathbf{b}')$.
This is the geometric $R$-matrix in the present case.
\subsubsection{Prerequisites for the Perron-Frobenius theorem.}\label{sec:3_2_2}
For any $\mathbf{x} \in \overline{\mathbb{Y}}_k$ let $\vec{P}(\mathbf{x})$ be an
${n \choose k}$-component vector defined by
\begin{equation}\label{eq:nov19_7}
\vec{P}(\mathbf{x}) = \left( \frac{P_I(\overline{\Theta}_{n-k} (\mathbf{x}))}{P_{[k+1,n]}(\overline{\Theta}_{n-k} (\mathbf{x}))} \right)_{I \in {[n] \choose n-k}},
\end{equation}
where the indices are assumed to be in lexicographic order
as in \eqref{eq:sep2_2}.
Then we see that its last element is always equal to one.
By using the formula \eqref{eq:sep8_1}
one can recover all
the elements of $\mathbf{x}= (x^{(i,j)})_{(i,j) \in R_k} \in \overline{\mathbb{Y}}_k$ from
the elements of the vector $\vec{P}(\mathbf{x})$.
Based on this fact, we can generalize Lemma \ref{lem:2} to the following:
\begin{lemma}\label{lem:sep2_3}
Let $R^{(s,l)}(\mathbf{b},\mathbf{a}) = (\mathbf{a}',\mathbf{b}')$.
Then the elements of $\mathbf{a}'\in \overline{\mathbb{Y}}_k$ are determined by the
following relation
\begin{equation}\label{eq:sep2_4}
\vec{P}(\mathbf{a}') = \frac{1}{E(\mathbf{b},\mathbf{a})}
\mathsf{C}_{n-k}(g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) )\vec{P}(\mathbf{a}),
\end{equation}
where $\mathsf{C}_{n-k}$ denotes the $(n-k)$-th contravariant alternating tensor
representation \eqref{eq:sep2_2}, and $E(\mathbf{b},\mathbf{a})$
is the geometric coenergy function \eqref{eq:nov13_7}.
\end{lemma}
\proof
By Frieden's definition of the geometric $R$-matrix ((5.1) of \cite{F18}),
we have $\overline{\Theta}_{n-k} (\mathbf{a}')
= g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) \cdot \overline{\Theta}_{n-k} (\mathbf{a})$,
where the meaning of
$\cdot$ in the right hand side was given in section \ref{sec:1_3}.
As a matrix representative of $\overline{\Theta}_{n-k} (\mathbf{a}')$,
define $\tilde{M}(\mathbf{a}')$ to be
an $n \times (n-k)$
matrix given by
$\tilde{M}(\mathbf{a}') = g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) M(\mathbf{a})$.
Then we have
\begin{equation}\label{eq:sep4_2}
\frac{P_{I}(\overline{\Theta}_{n-k} (\mathbf{a}'))}{P_{J}(\overline{\Theta}_{n-k} (\mathbf{a}'))}=
\frac{\Delta_{I, [n-k]}(\tilde{M} (\mathbf{a}'))}{\Delta_{J, [n-k]}
(\tilde{M} (\mathbf{a}'))},
\end{equation}
for any $(n-k)$-subsets $I, J$ of $[n]$.
On the other hand, by applying
the Cauchy-Binet formula to the definition of $\tilde{M} (\mathbf{a}')$ we have
\begin{align}
\Delta_{I, [n-k]}(\tilde{M} (\mathbf{a}')) &=
\sum_{J \in {[n] \choose n-k}} \Delta_{I,J} (g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) )
\Delta_{J,[n-k]} (M(\mathbf{a})) \nonumber\\
&=\Delta_{I,[n-k]}(g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) g^{(n-k)}(\mathbf{a}, l; \bullet)),\label{eq:sep4_1}
\end{align}
where we used the fact that the first $n-k$ columns of $g^{(n-k)}(\mathbf{a},l;\lambda)$
coincide with those of
the matrix $M (\mathbf{a})$.
Therefore by setting $I = [k+1,n]$ we have
$\Delta_{[k+1,n], [n-k]}(\tilde{M} (\mathbf{a}')) = E(\mathbf{b},\mathbf{a})$.
By using this identity
and the fact that the bottom left $(n-k) \times (n-k)$ submatrix of $M(\mathbf{a})$ is
upper uni-triangular, we can write the first equality of \eqref{eq:sep4_1} as
\begin{equation*}
\frac{\Delta_{I, [n-k]}(\tilde{M} (\mathbf{a}'))}{\Delta_{[k+1,n], [n-k]}(\tilde{M} (\mathbf{a}'))}=
\frac{1}{E(\mathbf{b},\mathbf{a})}
\sum_{J \in {[n] \choose n-k}} \Delta_{I,J} (g^{(n-1)}(\mathbf{b}, s; (-1)^{n-k-1} l) )
\frac{\Delta_{J, [n-k]}(M(\mathbf{a}))}{\Delta_{[k+1,n], [n-k]}(M(\mathbf{a}))}.
\end{equation*}
By \eqref{eq:sep4_2}, this is the identity which we wanted to prove.
\qed
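As a side remark, the Cauchy-Binet step in this proof rests on the multiplicativity of the matrix of $r \times r$ minors, which can be checked numerically. The following Python sketch uses generic random matrices (not the specific matrices $g$ of the paper) and assumes that $\mathsf{C}_r$ in \eqref{eq:sep2_2} is, up to normalization, the plain matrix of minors indexed by $r$-subsets in lexicographic order.

```python
import itertools
import numpy as np

def minor_matrix(A, r):
    """Matrix of all r-by-r minors of A, rows and columns indexed
    by the r-subsets of {0, ..., n-1} in lexicographic order."""
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in subsets]
                     for I in subsets])

rng = np.random.default_rng(0)
n, r = 4, 2
A, B = rng.random((n, n)), rng.random((n, n))

# Cauchy-Binet: the minor matrix of a product equals
# the product of the minor matrices
lhs = minor_matrix(A @ B, r)
rhs = minor_matrix(A, r) @ minor_matrix(B, r)
print(np.allclose(lhs, rhs))
```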
In what follows we omit the superscript of $g$ in the case of $k=1$,
hence $g(\mathbf{x}, s; \lambda) = g^{(n-1)}(\mathbf{x}, s; \lambda) $
in agreement with the notation which we defined in \eqref{eq:sep29_1}.
Returning to the notations in the previous subsection,
let $(\mathbf{x}, s)$
denote an element of $\mathbb{Y}_1 =(\R_{>0})^{n-1} \times \R_{>0}$
with $\mathbf{x} = (x^{(1)},\dots,x^{(n-1)})$, and set $x^{(n)}:=s/(x^{(1)} \cdots x^{(n-1)})$.
Consider the matrix $\mathsf{C}_{r}(g(\mathbf{x}, s; \lambda) )$ given by \eqref{eq:sep2_2},
the $r$-th contravariant alternating tensor
representation of $g(\mathbf{x}, s; \lambda)$.
Note that the matrix $g(\mathbf{x}, s; \lambda)$ has a network representation \cite{F19, F18}
as in Figure \ref{fig:3}.
Denote such a network by $\mathcal{N}_0$.
\begin{figure}[htbp]
\centering
\includegraphics[height=4cm]{CGC1.eps}
\caption{An example of the network representation of the matrix $g(\mathbf{x}, s; \lambda)$ for the $n=5$ case.
Vertical edges have weight 1.}
\label{fig:3}
\end{figure}
We say that a matrix is \textit{positive} if all its elements are positive real numbers.
In order to use the Perron-Frobenius theorem to define our system,
we need to prove:
\begin{proposition}\label{pr:sep15_1}
For any $\mathbf{x} \in (\R_{>0})^{n-1}$, $s,l \in \R_{>0}$ and $1 \leq r \leq n-1$,
some power of $\mathsf{C}_{r}(g(\mathbf{x}, s; (-1)^{r-1}l) )$ is
a positive matrix.
\end{proposition}
\proof
It suffices to show that there exists a positive integer $K$ such that
$\Delta_{I,J}(g(\mathbf{x}, s; (-1)^{r-1}l) ^K) > 0$ for any $I,J \in {[n] \choose r}$.
We begin with the fact that
the matrix $g(\mathbf{x}, s; \lambda)^K$ is
represented by a planar network $\mathcal{N}$ that is given by
stacking the network $\mathcal{N}_0$ for the matrix $g(\mathbf{x}, s; \lambda)$
up to $K$ times.
See Figure \ref{fig:4} for an example of $\mathcal{N}$.
By the Lindstr\"{o}m Lemma (see, for example, Proposition 4.5 of \cite{F19}),
such a minor determinant is expressed as
\begin{equation}\label{eq:sep7_5}
\Delta_{I,J}(g(\mathbf{x}, s; (-1)^{r-1}l) ^K) =
\sum_{\mathcal{F} =(p_a; \sigma): I \rightarrow J}
{\rm sgn} (\sigma) {\rm wt} (\mathcal{F}),
\end{equation}
where the sum is over vertex-disjoint paths
(i.e.~no two of the paths share a vertex)
from $I = \{ i_1< \dots< i_r \}$ to
$J = \{ j_1< \dots< j_r \}$ on the network $\mathcal{N}$ with $\lambda = (-1)^{r-1}l$,
and $\mathcal{F} =(p_a; \sigma)$ is a collection of $r$ paths $p_1, \dots, p_r$
such that $p_a$ starts at source $i_a$ and ends at sink $j'_{\sigma (a)}$,
for some permutation $\sigma \in S_r$.
The weight of $\mathcal{F}$ is defined as
$ {\rm wt} (\mathcal{F}) = \prod_{a=1}^r {\rm wt} (p_a)$,
where ${\rm wt} (p_a)$ is the weight of path $p_a$.
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm]{CGC2.eps}
\caption{Three stacked copies of the network representation of the matrix $g(\mathbf{x}, s; \lambda)$ for the $n=5$ case.}
\label{fig:4}
\end{figure}
First we show that for sufficiently large $K$,
there exists at least one collection of vertex-disjoint paths
from $I$ to $J$ for any $I,J \in {[n] \choose r}$.
\begin{enumerate}
\item
Suppose the elements of both $I$ and $J$ are consecutive mod $n$.
Let all the paths starting from the edges
labeled by $I$ go vertically upward on the
network.
Here we also regard the edges with weight $\lambda$ as
vertical ones.
Then they will arrive at the edges labeled by $J$ after a finite number of steps,
if $K$ is sufficiently large.
\item
Suppose $I = I_1 \sqcup I_2$, where the elements of both $I_1$ and $I_2$
are consecutive mod $n$, and there is a gap between them.
Let all the paths
starting from the edges
labeled by $I_1$ go diagonally upward,
while those from the edges labeled by $I_2$ go vertically upward.
After a finite number of steps, a \textit{merger} will occur.
That is, they will arrive at a collection of mod $n$ consecutive
edges on a common horizontal level.
\item
Suppose $I$ decomposes into more than two subsets of consecutive (mod $n$) elements.
Let all the paths
starting from the edges
labeled by one of the subsets go diagonally upward,
while those from the edges labeled by the other subsets go vertically upward,
until a merger occurs.
Then by repeating this procedure we arrive at the situation in (ii).
This, together with the result in (i), proves that for sufficiently large $K$
there exists at least one collection of vertex-disjoint paths
from $I$ to $[r]$ for any $I \in {[n] \choose r}$.
\item
In the same way, we can show that for sufficiently large $K$,
there exists at least one collection of vertex-disjoint paths
from $[r]$ to $J$ for any $J \in {[n] \choose r}$.
\item
By combining (iii) and (iv), we obtain the result which we wanted to show.
\end{enumerate}
Now we show that for any collection of vertex-disjoint paths
$\mathcal{F} =(p_a; \sigma)$ from $I$ to $J$,
the summand ${\rm sgn} (\sigma) {\rm wt} (\mathcal{F})$
in the expression \eqref{eq:sep7_5}
is positive.
Because of the vertex-disjoint condition,
any $\sigma \in S_r$ must be either
a cyclic shift of the elements of $[r]$
or the identity map.
When going upward on the network
along the collection of paths $\mathcal{F}$, an elementary cyclic shift
$\{ 1,2,\dots,r \} \rightarrow \{2,\dots,r,1\}$ may occur within one unit of the stack.
In that case, the ${\rm sgn} (\sigma)$ is multiplied by $(-1)^{r-1}$, while
an edge with weight $\lambda = (-1)^{r-1}l$ is picked up for the paths $\mathcal{F}$.
As a result, the summand ${\rm sgn} (\sigma) {\rm wt} (\mathcal{F})$
is always positive.
The proof is completed.
\qed
\vspace{5mm}
\par\noindent
Recall the definition of a Lax matrix $\mathcal{L}(| \mathbf{b} \rangle ; \lambda)$ in
\eqref{eq:sep29_2}.
Then one easily sees that
an obvious extension of Proposition \ref{pr:sep15_1} is the following:
\begin{corollary}
For any $| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L) \in (\R_{>0})^{L(n-1)}$,
$s,l \in \R_{>0}$ and $1 \leq r \leq n-1$,
some power of
$\mathsf{C}_{r}( \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{r-1}l))$ is
a positive matrix.
\end{corollary}
By the Perron-Frobenius theorem, this corollary implies that
there exists a positive
eigenvector
of the matrix
$\mathsf{C}_{r}( \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{r-1}l))$
and it is unique up to a scalar multiple.
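The Perron-Frobenius statement invoked here, namely that a matrix some power of which is positive has a simple dominant eigenvalue with a positive eigenvector, unique up to a scalar multiple, can be illustrated by power iteration. In the sketch below, a generic entrywise positive matrix stands in for $\mathsf{C}_{r}( \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{r-1}l))$; normalizing the last entry to one mirrors the normalization of the vector $\vec{P}(\mathbf{x})$ in \eqref{eq:nov19_7}.

```python
import numpy as np

def perron_pair(M, iters=300):
    """Power iteration for a matrix some power of which is positive:
    returns (dominant eigenvalue, positive eigenvector with last entry 1)."""
    v = np.ones(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v = v / v[-1]        # normalize the last entry to 1, as in P(x)
    lam = (M @ v)[-1]        # the eigenvalue, since v[-1] == 1
    return lam, v

rng = np.random.default_rng(1)
M = rng.random((5, 5)) + 0.1   # entrywise positive, hence primitive
lam, v = perron_pair(M)
print(np.all(v > 0), np.allclose(M @ v, lam * v))
```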
\subsubsection{Definition of the dynamical system.}\label{sec:3_2_3}
In order to define the time evolutions by carriers of ``rectangular tableaux'',
we first show the following:
\begin{lemma}\label{lem:sep10_1}
Let $A = \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{n-k-1}l)$.
Then the
Perron-Frobenius eigenvector $\vec{\xi}_{\rm PF}$ of
$\mathsf{C}_{n-k}( A )$ determines a unique point in
${\rm Gr}(n-k,n)_{>0}$.
\end{lemma}
\proof
Fix an arbitrarily chosen $M_0 \in {\rm Gr}(n-k,n)_{>0}$.
By the same argument as in the proof of Proposition \ref{pr:nov27_4},
we can define a sequence of $(n-k)$-dimensional subspaces $M_0, M_1, \dots
\in {\rm Gr}(n-k,n)_{>0}$
recursively by the relation
$M_i = A \cdot M_{i-1}$.
Then by using the Cauchy-Binet formula and
a consequence of the Perron-Frobenius theorem, we have
$\lim_{i \rightarrow \infty}(P_I(M_i))_{I \in {[n] \choose n-k}} =
\lim_{i \rightarrow \infty} \mathsf{C}_{n-k}( A )^i (P_I(M_0))_{I \in {[n] \choose n-k}} =
\vec{\xi}_{\rm PF}$
up to a scalar multiple.
Since
the closure (in the Hausdorff topology)
of a totally positive Grassmannian ${\rm Gr}(n-k,n)_{>0}$
is
a totally nonnegative Grassmannian ${\rm Gr}(n-k,n)_{\geq 0}$ (Theorem 3.6 of \cite{L16}),
this implies that there is a limiting point
$M=\lim_{i \rightarrow \infty} M_i \in {\rm Gr}(n-k,n)_{\geq 0}$.
Then since $(P_I(M))_{I \in {[n] \choose n-k}}=
\vec{\xi}_{\rm PF}$ is a positive vector,
this unique point $M$ actually lies in ${\rm Gr}(n-k,n)_{>0}$.
\qed
\begin{remark}
Since the Pl\"{u}cker coordinates of every point
in a Grassmannian must satisfy the Grassmann-Pl\"{u}cker relations
(see, for example, Proposition 3.2 of \cite{F19}),
a positive vector with arbitrarily chosen ${n \choose k}$ components does not
necessarily determine a point in ${\rm Gr}(n-k,n)_{>0}$.
\end{remark}
Recall that
$R^{(s,l)}: \overline{\mathbb{Y}}_1 \times \overline{\mathbb{Y}}_k \rightarrow \overline{\mathbb{Y}}_k \times \overline{\mathbb{Y}}_1$ is the rational map
given by the matrix equation \eqref{eq:sep10_3}.
As in the totally one-row tableaux case,
we define $R_i^{(s,l)}$ and $\mathcal{R}^{(s,l)}= R_1^{(s,l)} \circ \cdots \circ R_L^{(s,l)}$
to be maps which are now from $ (\overline{\mathbb{Y}}_1)^i \times \overline{\mathbb{Y}}_k
\times (\overline{\mathbb{Y}}_1)^{L-i} $ to
$ (\overline{\mathbb{Y}}_1)^{i-1} \times \overline{\mathbb{Y}}_k
\times (\overline{\mathbb{Y}}_1)^{L-i+1}$, and from $ (\overline{\mathbb{Y}}_1)^L \times \overline{\mathbb{Y}}_k$ to
$\overline{\mathbb{Y}}_k
\times (\overline{\mathbb{Y}}_1)^{L}$ respectively.
Given an arbitrary $(\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v}) \in (\overline{\mathbb{Y}}_1)^L \times \overline{\mathbb{Y}}_k $,
let $\mathcal{R}^{(s,l)} (\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{v}) =
(\mathbf{v}_1, \mathbf{b}'_1, \dots, \mathbf{b}'_L)$.
It is depicted by the same diagram \eqref{eq:aug28_2},
but now the $\mathbf{v}_i$'s are elements of $\overline{\mathbb{Y}}_k = (\R_{>0})^{k(n-k)}$,
\begin{equation}\label{eq:aug28_2x}
\batten{\mathbf{v}_1}{\mathbf{b}_1}{\mathbf{b}_1'}{\mathbf{v}_2}\!\!\!
\batten{}{\mathbf{b}_2}{\mathbf{b}_2'}{\mathbf{v}_3}\!\!\!
\batten{}{}{}{\cdots\cdots}
\quad
\batten{}{}{}{\mathbf{v}_{L-1}}\,\,
\batten{}{\mathbf{b}_{L-1}}{\mathbf{b}_{L-1}'}{\mathbf{v}_{L}}\!\!\!
\batten{}{\mathbf{b}_L}{\mathbf{b}_L'}{\mathbf{v} \in \overline{\mathbb{Y}}_k.}
\end{equation}
Once again,
by regarding the $\mathbf{b}_i$'s also as parameters, we consider an
algebraic equation $\mathbf{v} = \mathbf{v}_1$ for the unknown $\mathbf{v}$
that ensures that the system has a
periodic boundary condition.
Then we have:
\begin{proposition}\label{prop:sep10_2}
For any $s, l \in \R_{>0}$ and $| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L) \in (\overline{\mathbb{Y}}_1)^L$,
there is a unique positive real solution $\mathbf{v} \in \overline{\mathbb{Y}}_k = (\R_{>0})^{k(n-k)}$
to the equation $\mathbf{v} = \mathbf{v}_1$.
\end{proposition}
\proof
Let $A = \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{n-k-1}l)$.
By repeated use of Lemma \ref{lem:sep2_3} we have
\begin{equation}\label{eq:oct2_1}
\mathsf{C}_{n-k}(A)\vec{P}(\mathbf{v}) =
E(\mathbf{b}_{L},\mathbf{v})
E(\mathbf{b}_{L-1},\mathbf{v}_{L})
\cdots
E(\mathbf{b}_{1},\mathbf{v}_{2})
\vec{P}(\mathbf{v}_1),
\end{equation}
where the $\mathbf{v}_i$'s are determined by the diagram \eqref{eq:aug28_2x}.
\par\noindent
\textit{(Uniqueness.)}
Suppose there exist positive real solutions to the equation $\mathbf{v} = \mathbf{v}_1$.
From \eqref{eq:oct2_1} we see that
for any such solution $\mathbf{v} \in \overline{\mathbb{Y}}_k$ it is necessary for
$\vec{P}(\mathbf{v}) $ to be a positive eigenvector
of the matrix $\mathsf{C}_{n-k}(A)$.
By Lemma \ref{lem:sep10_1}
there is a unique $M \in {\rm Gr}(n-k,n)_{>0}$ such that
$(P_I(M))_{I \in {[n] \choose n-k}}$ is the
Perron-Frobenius eigenvector of the matrix $\mathsf{C}_{n-k}(A)$.
Therefore, if any such solution $\mathbf{v} \in \overline{\mathbb{Y}}_k$ exists, then it
must be equal to the unique one
obtained from
this $M$ by using the formula \eqref{eq:sep8_1},
because the map $\overline{\Theta}_{n-k}$
is a bijection between $\overline{\mathbb{Y}}_k$
and ${\rm Gr}(n-k,n)_{>0}$.
\par\noindent
\textit{(Existence.)}
Equation \eqref{eq:oct2_1} is valid for any $\mathbf{v} \in \overline{\mathbb{Y}}_k$,
so in particular for the $\mathbf{v}$ obtained from the above-mentioned unique
$M \in {\rm Gr}(n-k,n)_{>0}$.
On the other hand, for this $\mathbf{v}$ we also have
\begin{equation}\label{eq:nov27_5}
\mathsf{C}_{n-k}(A)\vec{P}(\mathbf{v}) =E^{(k)}_l \vec{P}(\mathbf{v})
\end{equation}
where $E^{(k)}_l$ is the dominant eigenvalue
of the matrix $\mathsf{C}_{n-k}(A)$.
By equating the right hand side of equation \eqref{eq:oct2_1}
with that of \eqref{eq:nov27_5},
and noting that the last element of the vector $\vec{P}(\mathbf{x})$
defined in \eqref{eq:nov19_7} is always one for any
$\mathbf{x} \in \overline{\mathbb{Y}}_k$,
we see that this $\mathbf{v}$ is indeed a solution to the algebraic equation
$\mathbf{v} = \mathbf{v}_1$.
\qed
By combining the claims in Remark \ref{rem:nov19_3}
and Proposition \ref{prop:sep10_2}
we have the following:
\begin{theorem}\label{th:main2}
For any $s,l \in \R_{>0}$ and $(\mathbf{b}_1, \dots, \mathbf{b}_L)
\in (\R_{>0})^{L(n-1)} $,
there is a unique positive real solution
$(\mathbf{v}, \mathbf{b}'_1, \dots, \mathbf{b}'_L) \in (\R_{>0})^{k(n-k)} \times (\R_{>0})^{L(n-1)}$ to the
following matrix equation
\begin{equation}\label{eq:nov27_6}
g(\mathbf{b}_1, s; \lambda) \cdots g(\mathbf{b}_L, s; \lambda) g^{(n-k)}(\mathbf{v}, l; \lambda) =
g^{(n-k)}(\mathbf{v}, l; \lambda) g(\mathbf{b}'_1, s; \lambda) \cdots g(\mathbf{b}'_L, s; \lambda).
\end{equation}
\end{theorem}
Denote this unique positive real $\mathbf{v}$
by $\mathbf{v} = \mathbf{u}_l^{(k)} =
\mathbf{u}_l^{(k)}(| \mathbf{b} \rangle) \in \overline{\mathbb{Y}}_k$.
Define $T^{(k)}_l: (\overline{\mathbb{Y}}_1)^L \rightarrow (\overline{\mathbb{Y}}_1)^L$
to be a map given by
\begin{equation}\label{eq:sep10_4}
T^{(k)}_l (\mathbf{b}_1, \dots, \mathbf{b}_L) = (\mathbf{b}'_1, \dots, \mathbf{b}'_L),
\end{equation}
where the right hand side is determined by the relation
$\mathcal{R}^{(s,l)} (\mathbf{b}_1, \dots, \mathbf{b}_L, \mathbf{u}_l^{(k)}) =
(\mathbf{u}_l^{(k)}, \mathbf{b}'_1, \dots, \mathbf{b}'_L)$.
In the same way as in the totally one-row tableaux case,
we call this map a time evolution, and $\mathbf{u}_l^{(k)}=
\mathbf{u}_l^{(k)}(| \mathbf{b} \rangle) $ a carrier for the state
$| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L)$
associated with $T^{(k)}_l$.
This time evolution defines the most general case of the
closed geometric crystal chain considered so far.
Once again, any homogeneous state is a fixed point of this dynamical system.
To verify this claim, consider the result of
Proposition \ref{prop:sep10_2} for the one-site ($L=1$) case.
Then by using the explicit expression for the geometric $R$-matrix in \cite{F18},
we have $\mathbf{b}'_1 = \mathbf{b}_1$.
The same argument as in the totally one-row tableaux case then leads to
the required consequence.
By Theorem \ref{th:main2},
the time evolution \eqref{eq:sep10_4} is described by a Lax equation
for the matrix \eqref{eq:sep29_2} as
\begin{equation}\label{eq:oct22_1}
\mathcal{L}(T^{(k)}_l | \mathbf{b} \rangle ; \lambda)=
(g^{(n-k)}(\mathbf{u}_l^{(k)},l; \lambda))^{-1}
\mathcal{L}(| \mathbf{b} \rangle ; \lambda) g^{(n-k)}(\mathbf{u}_l^{(k)},l; \lambda).
\end{equation}
Therefore, the conservation laws considered in section \ref{sec:3_1_2}
are valid for any time evolutions $\{ T^{(k)}_l \}_{1 \leq k \leq n-1, l \in \R_{>0}}$.
Also,
since the geometric $R$-matrices satisfy the
Yang-Baxter relation (Theorem 5.10(3) of \cite{F18}),
the same argument as in section \ref{sec:2_2_1} ensures the commutativity of
the time evolutions.
That is, we have $T^{(k_1)}_{l_1} \circ T^{(k_2)}_{l_2} = T^{(k_2)}_{l_2} \circ T^{(k_1)}_{l_1}$
for any $1 \leq k_1, k_2 \leq n-1$ and $l_1, l_2 \in \R_{>0}$.
\subsubsection{Invertibility.}\label{sec:3_2_4}
As in the $n=2$ case, every time evolution $T^{(k)}_l$ is invertible.
An explicit expression for the inverse map $(T^{(k)}_l)^{-1}$ is given as follows.
For any $n \times n$ matrix $X = (X_{ij})_{1 \leq i,j \leq n}$,
let ${\rm fl} (X)$ be the $n \times n$ matrix whose element
in the $(i, j)$ position is given by ${\rm fl} (X)_{ij} = X_{n+1-j, n+1-i}$.
Then ${\rm fl}$ is an anti-automorphism of the ring of $n \times n$ matrices.
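In index-free terms, ${\rm fl}(X) = J X^{\mathsf{T}} J$, where $J$ is the order-reversal permutation matrix; this makes the anti-automorphism and involution properties transparent. A quick numerical check with generic matrices:

```python
import numpy as np

def fl(X):
    # fl(X)_{ij} = X_{n+1-j, n+1-i}: reverse both index orders, then transpose
    return X[::-1, ::-1].T

rng = np.random.default_rng(2)
X, Y = rng.random((4, 4)), rng.random((4, 4))

print(np.allclose(fl(X @ Y), fl(Y) @ fl(X)))  # anti-automorphism
print(np.allclose(fl(fl(X)), X))              # involution
```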
Define $S_l: \overline{\mathbb{Y}}_k \rightarrow \overline{\mathbb{Y}}_k$
to be a map given by $S_l(\mathbf{x}) = \mathbf{x}' = (x'^{(i,j)})_{(i,j) \in R_k}$
where $x'^{(i,j)}= x^{(k+1-i,n+1-j)}$ for $i+1 \leq j \leq i+n-k-1$, and
$x'^{(i,i)}= l/\prod_{m=k+1-i}^{n-i} x^{(k+1-i,m)}$,
for any $\mathbf{x} = (x^{(i,j)})_{(i,j) \in R_k} \in \overline{\mathbb{Y}}_k$.
This map is essentially identical to the geometric Sch\"{u}tzenberger
involution \cite{F19, F18}.
Then by the proof of Theorem 7.3 of \cite{F19} we have
\begin{equation}
{\rm fl} \circ g^{(n-k)}(\mathbf{x},l; \lambda) =
g^{(n-k)}(S_l(\mathbf{x}),l; \lambda).
\end{equation}
When $k=1$ we sometimes write
$\mathbf{x} = (x^{(1)},x^{(2)},\dots,x^{(n-1)}) \in \overline{\mathbb{Y}}_1$
and then we have $S_s(\mathbf{x}) = (x^{(n)},x^{(n-1)},\dots,x^{(2)})$
where $x^{(n)}=s/(x^{(1)} \cdots x^{(n-1)})$.
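For $k=1$, the involutive property of $S_s$ is easy to verify directly; the following sketch checks it numerically at a generic point of $(\R_{>0})^{n-1}$.

```python
import numpy as np

def S(x, s):
    """k = 1 Schutzenberger-type map: (x1, ..., x_{n-1}) maps to
    (x_n, x_{n-1}, ..., x_2), where x_n = s / (x1 * ... * x_{n-1})."""
    xn = s / np.prod(x)
    return np.concatenate(([xn], x[:0:-1]))  # x[:0:-1] = reversed, minus x1

rng = np.random.default_rng(3)
x = rng.random(4) + 0.5   # a point of (R_{>0})^{n-1} with n = 5
s = 2.0
print(np.allclose(S(S(x, s), s), x))  # S_s is an involution
```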
By extending its definition,
for any $| \mathbf{b} \rangle = (\mathbf{b}_1, \dots, \mathbf{b}_L) \in (\overline{\mathbb{Y}}_1)^L$ we let
$S_s | \mathbf{b} \rangle = (S_s(\mathbf{b}_L), \dots, S_s(\mathbf{b}_1))$.
Note that the order of the elements is reversed.
Since ${\rm fl}$ is an anti-automorphism we have
${\rm fl} (\mathcal{L}(| \mathbf{b} \rangle ; \lambda)) =
\mathcal{L}(S_s | \mathbf{b} \rangle ; \lambda).$
Therefore, by applying the anti-automorphism ${\rm fl}$ on both sides of \eqref{eq:oct22_1}
we obtain
\begin{equation*}
\mathcal{L}(S_s \circ T^{(k)}_l | \mathbf{b} \rangle ; \lambda)=
g^{(n-k)}(S_l(\mathbf{u}_l^{(k)}),l; \lambda)
\mathcal{L}(S_s| \mathbf{b} \rangle ; \lambda) (g^{(n-k)}(S_l(\mathbf{u}_l^{(k)}),l; \lambda))^{-1},
\end{equation*}
or equivalently
\begin{equation*}
\mathcal{L}(S_s| \mathbf{b} \rangle ; \lambda)=
(g^{(n-k)}(S_l(\mathbf{u}_l^{(k)}),l; \lambda))^{-1}
\mathcal{L}(S_s \circ T^{(k)}_l | \mathbf{b} \rangle ; \lambda) g^{(n-k)}(S_l(\mathbf{u}_l^{(k)}),l; \lambda).
\end{equation*}
By comparing with the definition of time evolution $T^{(k)}_l$,
we see from this equation that
$S_s| \mathbf{b} \rangle = T^{(k)}_l \circ S_s \circ T^{(k)}_l | \mathbf{b} \rangle$.
Since $S_s$ is an involution and $| \mathbf{b} \rangle$ is an arbitrary
element of $(\overline{\mathbb{Y}}_1)^L$, we have
\begin{equation}\label{eq:nov13_8}
(T^{(k)}_l)^{-1} = S_s \circ T^{(k)}_l \circ S_s.
\end{equation}
This is the explicit expression for the inverse map.
\subsubsection{Geometric lifting of the energy of paths.}\label{sec:3_2_5}
We reconsider the results on the conservation laws
of the closed geometric crystal chains in section \ref{sec:3_1_2}.
In order to study the characteristic polynomial of the Lax matrix \eqref{eq:sep29_2},
we present some basic properties of the contravariant
alternating tensor representation \eqref{eq:sep2_2}.
\begin{lemma}
For any $n \times n$ upper triangular matrix $B$,
the ${n \choose r} \times {n \choose r}$ matrix $\mathsf{C}_r (B)$
is also upper triangular.
\end{lemma}
\proof
Let $B = ( b_{ij} )_{1 \leq i,j \leq n}$ where
$b_{ij} =0$ when $i >j$.
By definition, the matrix elements of $\mathsf{C}_r (B)$ are written as
\begin{equation}\label{eq:oct12_1}
\Delta_{I,J} (B) = \sum_{\sigma \in S_r} {\rm sgn} (\sigma)
b_{i_1, j_{\sigma (1)}} b_{i_2, j_{\sigma (2)}} \cdots b_{i_r, j_{\sigma (r)}},
\end{equation}
for $I = \{ i_1< i_2<\dots< i_{r} \}$ and $J = \{ j_1< j_2<\dots< j_{r} \}$.
We are to show that if $I > J$ in lexicographic order, then $\Delta_{I,J} (B) = 0$.
Let $q \in [r]$ be the smallest integer
such that
$i_{q} > j_{q}$.
Choose an arbitrary permutation $\sigma \in S_r$.
If $\sigma (q) \leq q$ then $i_{q} > j_{q} \geq j_{\sigma (q)}$,
hence $b_{i_{q}, j_{\sigma (q)}}=0$.
Otherwise one has $\sigma (q) \geq q+1$.
Then one of the $\sigma(q+1), \sigma(q+2), \dots, \sigma(r)$ is smaller than or equal to $q$.
Suppose $\sigma (a) \leq q$ for some $a \in [q+1, r]$.
Then $i_a > i_q > j_q \geq j_{\sigma (a)}$,
hence $b_{i_{a}, j_{\sigma (a)}}=0$.
Therefore, every term of the summation in \eqref{eq:oct12_1} is zero
when $I > J$.
The proof is completed.
\qed
Since any square matrix is similar to some
upper triangular matrix,
a consequence of this lemma is the following:
\begin{corollary}\label{cor:oct12_2}
Suppose $\{ \mu_1, \ldots, \mu_n \}$ is the multiset \cite{Stanley97} of eigenvalues
of an $n \times n$ matrix $A \in {\rm M}_n (\C)$, where
multiplicities are taken into account as repetitions of the elements.
Then the eigenvalues of the ${n \choose r} \times {n \choose r}$ matrix
$\mathsf{C}_r (A) \in {\rm M}_{{n \choose r}} (\C)$ are given by the multiset
\begin{equation*}
\{ \mu_{i_1} \mu_{i_2} \cdots \mu_{i_r} |
1 \leq i_1 < i_2 < \ldots < i_r \leq n \}.
\end{equation*}
\end{corollary}
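Corollary \ref{cor:oct12_2} can be confirmed numerically on a matrix with prescribed spectrum. The sketch below (generic Python, again assuming $\mathsf{C}_r$ is the matrix of $r \times r$ minors in lexicographic order) builds a symmetric matrix with eigenvalues $1, 2, 3, 5$ and compares the spectrum of its second compound with the pairwise products.

```python
import itertools
import numpy as np

def minor_matrix(A, r):
    """Matrix of all r-by-r minors of A (r-subsets in lexicographic order)."""
    n = A.shape[0]
    subsets = list(itertools.combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in subsets]
                     for I in subsets])

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.random((4, 4)))   # a random orthogonal matrix
mu = [1.0, 2.0, 3.0, 5.0]
A = Q @ np.diag(mu) @ Q.T                 # eigenvalues are exactly mu

r = 2
ev = np.sort(np.linalg.eigvals(minor_matrix(A, r)).real)
products = np.sort([a * b for a, b in itertools.combinations(mu, r)])
print(ev, products)
```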
Since the matrix elements of the Lax matrix \eqref{eq:sep29_2}
are polynomials of the loop parameter $\lambda$,
it is legitimate to substitute an arbitrary complex number for $\lambda$.
So in what follows we fix a $\lambda \in \C$.
Then the characteristic polynomial of the Lax matrix
$\mathcal{L}(| \mathbf{b} \rangle ; \lambda) $
can be written as
\begin{equation*}
\det (x \mathbb{I}_n - \mathcal{L}(| \mathbf{b} \rangle ; \lambda) )=
\prod_{i=1}^n (x - \mu_i ).
\end{equation*}
Since this polynomial is invariant under the time evolutions,
each eigenvalue $\mu_i $ (that depends on $\lambda \in \C$)
is a conserved quantity of the closed geometric crystal chain.
For any $I =\{ 1 \leq i_1 < \dots < i_{n-k} \leq n \} \in {[n] \choose n-k }$,
let $\mu_I = \mu_{i_1} \cdots \mu_{i_{n-k}}$.
By Corollary \ref{cor:oct12_2} we have
\begin{equation*}
\det \left( x \mathbb{I}_{{n \choose k}} -\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda )) \right)=
\prod_{I \in {[n] \choose n-k}} (x - \mu_I).
\end{equation*}
Therefore, the eigenvalues of the matrix $\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda ))$ are also conserved quantities.
In particular, the Perron-Frobenius eigenvalue of the
\textit{monodromy matrix}
\begin{equation*}
\mathsf{M}_l^{(k)}(| \mathbf{b} \rangle):=
\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; (-1)^{n-k-1}l)),
\end{equation*}
is a positive real conserved quantity, which we denoted by $E_l^{(k)}$
in \eqref{eq:nov27_5}.
Then we have
\begin{equation}\label{eq:oct19_1}
\mathsf{M}_l^{(k)}(| \mathbf{b} \rangle)
\vec{P}(\mathbf{u}_l^{(k)}) = E_l^{(k)}\vec{P}(\mathbf{u}_l^{(k)}).
\end{equation}
By comparing this with the expression in \eqref{eq:oct2_1}
we have
\begin{equation*}
E_l^{(k)}
=E(\mathbf{b}_{L},\mathbf{u}_l^{(k)})
E(\mathbf{b}_{L-1},\mathbf{v}_{L})
\cdots
E(\mathbf{b}_{1},\mathbf{v}_{2}),
\end{equation*}
where the $\mathbf{v}_i$'s are determined by the diagram \eqref{eq:aug28_2x}
with $\mathbf{v}_1 = \mathbf{v}=\mathbf{u}_l^{(k)}$.
This expression, together with the explicit formula for the geometric coenergy
functions \eqref{eq:nov13_7}
implies that the Perron-Frobenius eigenvalues
$E_l^{(k)}$ are the geometric liftings of the \textit{energy of paths},
a conserved quantity of
the (generalized periodic) box-ball systems \cite{FOY00} (see also \cite{IKT12}).
\section{Summary and Discussions}\label{sec:4}
In this paper we proposed a method to
construct a new family of non-linear discrete integrable systems.
They are thought of as a geometric lifting of the
integrable cellular automata known as the generalized periodic
box-ball systems.
By combining G.~Frieden's work on the geometric $R$-matrix with
the Perron-Frobenius theorem, we were able to define a commuting family of
time evolutions $T^{(k)}_l$ for any $1 \leq k \leq n-1$ and $l \in \R_{>0}$
on the whole `phase space' $(\R_{>0})^{L(n-1)}$.
In order to apply this theorem in linear algebra to our construction
of the non-linear integrable systems,
Lemmas \ref{lem:2} and \ref{lem:sep2_3} played an important role.
These lemmas claim that, although the geometric $R$-matrix is a non-linear rational map,
it can be viewed as an almost linear map, once all of its
non-linearities have been pushed into the procedure of
the Gelfand-Tsetlin parametrization
and into a scalar factor called the geometric coenergy function.
The ideas underlying these lemmas are inspired by
a deep insight into
the properties of this rational map (Remark 5.2 of \cite{F18}).
We have shown that
the time evolutions are described by Lax equations,
hence the conserved quantities are given by the coefficients of the
characteristic polynomial of the associated Lax matrices.
Equivalently, they are given as the eigenvalues of the Lax matrices,
and this fact gave us the simple
viewpoint that the dominant eigenvalue $E^{(k)}_l$
of the monodromy matrix is a geometric lifting of the energy of a path (state)
of the corresponding generalized periodic box-ball system.
We note this fact again here because
the energy of paths
is one of the most important notions in the theory of
integrable systems associated with crystals:
it
is related not only to the number of solitons in the cellular automata,
but also to the number of strings in the combinatorial Bethe ansatz for certain
integrable quantum spin chain models
associated with such cellular automata \cite{KS08, KTT, KT10, KT2}.
Let us say more about
the dominant eigenvalue $E_l^{(k)}$:
for any $k \in [n-1]$, $l \in \R_{>0}$, and a given initial state,
it is constant under all time evolutions.
This allows us to regard $E_l^{(k)}$
as one of the constant parameters of the system, along with $s$ and $l$.
It also implies that, in an actual computer implementation of
our dynamical system,
although a purely numerical calculation is required to
solve an algebraic equation for the dominant eigenvalue
of the monodromy matrix, this needs to be done only once
for each initial condition, and
all the remaining calculations can be coded purely symbolically.
In fact, under this interpretation,
the elements of the carrier $\mathbf{u}_l^{(k)}$
are viewed as rational
(if not subtraction-free rational)
functions of $| \mathbf{b} \rangle$.
This is based on the fact that the vector
$\vec{P}(\mathbf{u}_l^{(k)})$ satisfies
the system of linear equation \eqref{eq:oct19_1}
with the constant parameters $s, l, E_l^{(k)}$,
and $\mathbf{u}_l^{(k)}$ is obtained from this vector
by using the rational map
\eqref{eq:sep8_1}.
To make the former claim more precise, let $N={n \choose k},
\mathsf{M}_l^{(k)}(| \mathbf{b} \rangle) = (L_{i,j})_{1 \leq i,j \leq N}$,
and $\vec{P}(\mathbf{u}_l^{(k)})= (\mathcal{P}_1, \mathcal{P}_2, \dots, \mathcal{P}_{N-1}, 1)^t$.
Then we have
\begin{equation*}
\begin{pmatrix}
L_{11}-E_l^{(k)} & L_{12} & \dots & L_{1,N-1} \\
L_{21} & L_{22}-E_l^{(k)} & \dots & L_{2,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
L_{N-1,1} & \dots & L_{N-1,N-2} & L_{N-1,N-1}-E_l^{(k)}
\end{pmatrix}
\begin{pmatrix}
\mathcal{P}_1 \\
\mathcal{P}_2 \\
\vdots \\
\mathcal{P}_{N-1}
\end{pmatrix}=
-\begin{pmatrix}
L_{1,N} \\
L_{2,N} \\
\vdots \\
L_{N-1,N}
\end{pmatrix}.
\end{equation*}
Therefore, by Cramer's rule, the $\mathcal{P}_i$'s are given by
rational functions of the $L_{i,j}$'s and $E_l^{(k)}$.
For instance, if $n=2$ the carrier $v (= \mathbf{u}_l^{(1)})$ in \eqref{eq:jul31_1}
is expressed as $v=(E_l^{(1)}-L_{22})/L_{21} = -L_{12}/(L_{11}-E_l^{(1)})$.
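The reduced linear system above can of course be solved by any linear solver in place of Cramer's rule. A generic numerical sketch, in which a random positive matrix stands in for the monodromy matrix $\mathsf{M}_l^{(k)}(| \mathbf{b} \rangle)$:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 4
L = rng.random((N, N)) + 0.1   # stand-in positive "monodromy" matrix

# Perron-Frobenius eigenvalue of a positive matrix: real, simple,
# of maximal modulus, hence the largest real part
E = np.linalg.eigvals(L).real.max()

# Solve the reduced (N-1)-dimensional system for (P_1, ..., P_{N-1});
# the last entry of the eigenvector is normalized to 1
p = np.linalg.solve(L[:-1, :-1] - E * np.eye(N - 1), -L[:-1, -1])
v = np.append(p, 1.0)

print(np.all(v > 0), np.allclose(L @ v, E * v))
```

The reduced matrix is invertible because the spectral radius of a proper principal submatrix of a positive matrix is strictly smaller than $E$.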
In the case of $n=2$, we conducted a detailed study of the
system.
This includes an explicit list of the conserved quantities,
a discussion on the continuum limit of the system
to derive associated differential
equations, and also on the tropicalization of the system.
The latter study was done not only on
the conserved quantities, but also on
the equation for the periodic boundary condition \eqref{eq:aug19_1}.
This result gives us an explanation
for the existence of the non-evolvable states in
the generalized periodic box-ball systems.
That is, while equation \eqref{eq:aug19_1} has a unique positive real solution,
its tropicalized counterpart \eqref{eq:aug20_1} does not always
have a unique non-negative integer
solution.
We note once again that although the equation \eqref{eq:aug19_1} itself can be
tropicalized, its solution cannot be tropicalized.
In this sense, we admit that
the closed geometric crystal chain is not literally a
geometric lifting of the generalized periodic box-ball system.
In the case of general $n$, we mainly restricted ourselves
to solving the problem of
whether time evolutions compatible with
the periodic boundary condition can be defined at all.
Now that this work has been done,
in future studies
we would like to clarify more detailed properties of
the system for the case of general $n$,
such as taking a continuum limit of the system to obtain associated
differential equations, and seeking an explicit formula
for the tropical limit of the energy of paths and
the equation for the periodic boundary condition,
as we have done in the case of $n=2$.
Also, there are many other remaining problems to be addressed:
whether we will be able to construct soliton solutions in our systems,
to give any explicit formulas to describe solutions to initial value problems,
to clarify the global structure of the iso-level sets of our dynamical systems
in the phase space $(\R_{>0})^{L(n-1)}$,
to define the action of the geometric crystal operators $e^c_i$'s
on the states of our systems,
and to clarify how the iso-level sets will be changed
under the operations of these operators.
Generalizing the states of the systems
from homogeneous paths associated with only one-row tableaux
to inhomogeneous paths with more general rectangular tableaux
may be another interesting problem.
Lastly, we give an additional
discussion of the conserved quantities for the case of general $n$
considered in section \ref{sec:3_1_2}.
In fact, there is another way of giving them in terms of
polynomials of the variables $\{ b_i^{(j)} \}_{1 \leq i \leq L, 1 \leq j \leq n}$
with non-negative integer coefficients.
As we have seen in the proof of Proposition \ref{pr:sep15_1},
such a minor determinant is expressed as
\begin{equation*}
\Delta_{I,I}(\mathcal{L}(| \mathbf{b} \rangle ; \lambda)) =
\sum_{\mathcal{F} =(p_a; \sigma): I \rightarrow I}
{\rm sgn} (\sigma) {\rm wt} (\mathcal{F}),
\end{equation*}
where the sum is over vertex-disjoint paths from $I = \{ i_1< \dots< i_{n-k} \}$ to
itself on a certain planar network associated with the matrix $\mathcal{L}(| \mathbf{b} \rangle ; \lambda)$,
and $\mathcal{F} =(p_a; \sigma)$ is a collection of $n-k$ paths $p_1, \dots, p_{n-k}$
such that $p_a$ starts at source $i_a$ and ends at sink $i_{\sigma (a)}$,
for some permutation $\sigma \in S_{n-k}$.
By a consideration similar to that in the proof, we see that for any $m$ the coefficient
of $\lambda^m$ in the polynomial
${\rm Tr}\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda))$
is given by a polynomial of the variables $\{ b_i^{(j)} \}_{1 \leq i \leq L, 1 \leq j \leq n}$
with non-negative integer coefficients, multiplied by
a sign factor $(-1)^{m(n-k-1)}$.
Therefore, by removing this sign factor, every conserved quantity contained in
${\rm Tr}\mathsf{C}_{n-k}( \mathcal{L}(| \mathbf{b} \rangle ; \lambda))$
can be expressed by a polynomial
with non-negative integer coefficients.
This implies that all the conserved quantities of the closed geometric crystal chains thus obtained
can be tropicalized.
As in the case of $n=2$, it is fairly reasonable to expect that
their tropicalization provides a collection of
piecewise-linear formulas for conserved quantities of the
integrable cellular automata
with periodic boundary conditions in \cite{KT10},
and their possible extensions from single box tableaux to
general one-row tableaux for the site variables in such cellular automata.
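Concretely, tropicalization replaces $(+,\times)$ by $(\min,+)$ in a subtraction-free expression with positive integer coefficients. The following minimal Python sketch (a toy two-variable polynomial chosen for illustration, not one of the conserved quantities themselves) checks this against the ultradiscrete limit $-\epsilon \log f(e^{-A/\epsilon}, e^{-B/\epsilon}) \to f_{\rm trop}(A,B)$ as $\epsilon \to 0$:

```python
import math

# Subtraction-free polynomial f(a, b) = a*b + a + 1 with positive integer
# coefficients (a hypothetical small example of the polynomials above).
def f(a, b):
    return a * b + a + 1.0

# Its tropicalization: replace (+, *) by (min, +); a constant coefficient 1
# tropicalizes to 0, so each monomial becomes a linear form.
def trop_f(A, B):
    return min(A + B, A, 0.0)

# Ultradiscrete limit: -eps*log f(exp(-A/eps), exp(-B/eps)) -> trop_f(A, B).
eps = 0.01
A, B = -1.0, 2.0
approx = -eps * math.log(f(math.exp(-A / eps), math.exp(-B / eps)))
print(abs(approx - trop_f(A, B)) < 1e-3)  # True
```

The limit singles out the monomial of minimal tropical weight, which is why polynomials with non-negative integer coefficients (and hence all the conserved quantities above) tropicalize to well-defined piecewise-linear formulas.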
\vspace{5mm}
\noindent
{\it Acknowledgement}.
The authors thank Prof.~Takashi Arai and Prof.~Kazuo Hosomichi for valuable comments
on the second author's master thesis to which the present work
is partially related.
\vspace{0.5cm}
\section{Introduction}
In regression, many diagnostic methods for identifying outliers or influential observations have been suggested. Among
them, our interest will be confined to deletion diagnostic methods for investigating the influence of an observation on
the LSE of a vector of regression coefficients. The change in the LSE of a vector of regression coefficients due to a
case deletion is a vector quantity and hence observations cannot be ordered according to their influences. The changes
in the LSE due to case deletions are usually normalized or scaled so that observations can be ordered based on their
influences.
Cook (1977) introduced a scaled distance by scaling the change in the LSE due to a case
deletion. However, the scaling matrix defining Cook's distance does not reflect a distributional property of the change
in the LSE due to a case deletion, which will usually lead to an incorrect detection of influential observations (see
Kim, 2017).
In Section 3.1, we will derive a normalization of the change in the LSE using the Moore-Penrose inverse of the
covariance matrix of the change in the LSE. This normalization turns out to be a square of the internally studentized
residual. In Section 3.2, we will show that the numerator term of Cook's distance does not in general have a
chi-squared distribution except for a single case. Furthermore we will give an elaborate explanation about the
inappropriateness of the choice of a scaling matrix defining Cook's distance. In Section 4 by reflecting a
distributional property of the change in the LSE due to a case deletion, we will suggest a new diagnostic measure that
is a scalar. Hence observations are naturally ordered based on their influences provided by this diagnostic measure.
Three numerical examples are given for illustration.
\section{Preliminaries}
A linear regression model can be defined by
$$ {\bb y} = {\bb X}{\bb \beta} + {\bb \varepsilon},$$
where ${\bb y}$ is an $n \times 1$ vector of response variables, $ {\bb X}
= ({\bb
x}_{1},...,{\bb x}_{n})^{T}$ is an $n \times p$ matrix of full column rank which consists of $n$ measurements on the
$p$ fixed independent variables,
${\bb \beta}$ is a $p \times 1$ vector of unknown regression coefficients, and
${\bb \varepsilon} = (\varepsilon_{1},...,\varepsilon_{n})^{T}$ is an $n \times 1$ vector of unobservable random errors in which
$\varepsilon_{i}$ and $\varepsilon_{j}$ are uncorrelated for all $i$, $j$ ($i \neq j$).
Further it is assumed that each $\varepsilon_{i}$ has mean zero and variance $\sigma^2$.
The LSE of ${\bb \beta}$ is $ \hat{{\bb \beta}} = ({\bb X}^{T}{\bb X})^{-1}{\bb X}^{T}{\bb y}$ which is an unbiased
estimator of ${\bb \beta}$ and the covariance matrix of $ \hat{{\bb \beta}}$ is $\mbox{cov}(\hat{{\bb \beta}})=\sigma^2
({\bb X}^{T}{\bb X})^{-1}$. We write the hat matrix as ${\bb H} = (h_{ij}) = {\bb X}({\bb X}^{T}{\bb X})^{-1}{\bb
X}^{T}$. The residual vector is ${\bb e} = (e_{1},...,e_{n})^{T} = ({\bb I}_n - {\bb H}){\bb y}$, where ${\bb I}_n$
is the identity matrix of order $n$. An unbiased estimator of $\sigma^{2}$ is $ \hat{\sigma}^{2} = {\bb e}^{T}{\bb
e}/(n-p)$. More details can be found in Seber (1977).
\section{Deletion diagnostic measures}
The LSE of ${\bb \beta}$ computed without the $i$-th observation is written as $\hat{{\bb \beta}}_{(i)}$. Then we have
$$
\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)} = ({\bb X}^{T}{\bb X})^{-1}{\bb
x}_{i}\frac{e_{i}}{1-h_{ii}}\hspace{.4cm}(i=1,...,n)
$$
whose derivation can be found in Miller (1974).
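As a quick numerical sanity check (not part of the original derivation), the case-deletion identity above can be verified with NumPy on synthetic data; all names below are illustrative:

```python
import numpy as np

# Verify: beta_hat - beta_hat_(i) = (X'X)^{-1} x_i e_i / (1 - h_ii)
rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
H = X @ XtX_inv @ X.T            # hat matrix
e = y - X @ beta_hat             # residual vector

i = 4  # delete the i-th observation (0-based index)
Xi = np.delete(X, i, axis=0)
yi = np.delete(y, i)
beta_i = np.linalg.solve(Xi.T @ Xi, Xi.T @ yi)   # LSE without observation i

direct = beta_hat - beta_i
formula = XtX_inv @ X[i] * e[i] / (1.0 - H[i, i])
print(np.allclose(direct, formula))  # True
```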
The mean vector of $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ is zero and its
covariance matrix is
$$
\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})=\frac{\sigma^2}{1-h_{ii}}{\bb V}_{i},
$$
where
$$
{\bb V}_{i} =({\bb X}^{T}{\bb X})^{-1}{\bb x}_{i}{\bb x}_{i}^{T}({\bb X}^{T}{\bb X})^{-1}.
$$
The rank of ${\bb V}_{i}$ is one. It is easily shown that ${\bb x}_i^T ({\bb X}^{T}{\bb X})^{-2} {\bb x}_i $ is the
only nonzero eigenvalue of ${\bb V}_{i}$ and its associated eigenvector is $({\bb X}^T {\bb X})^{-1} {\bb x}_i$. For
more details, refer to Kim (2015).
The difference $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ is used for investigating the influence of the $i$-th
observation usually through a normalizing or scaling process as follows
$$
(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^{T}{\bb M}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}),
$$
where ${\bb M}$ is an appropriately chosen matrix of order $p$. In the following two subsections we will discuss two
kinds of choices of ${\bb M}$.
\subsection{A normalization of $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ using Moore-Penrose inverse}
The covariance matrix of $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ is singular so that it is not invertible.
Following the lines
given in the proof of Theorem 5.1 of Schott (1997), we have the Moore-Penrose inverse of ${\bb V}_i$ as
$$
{\bb V}_{i}^{+} = [{\bb x}_{i}^{T}({\bb X}^{T}{\bb X})^{-2}{\bb x}_i ]^{-2} {\bb V}_i .
$$
Hence the Moore-Penrose inverse of $\widehat{\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})}$ is computed as
$$
[\widehat{\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})}]^{+} =
\left(\frac{\hat{\sigma}^2}{1-h_{ii}}\right)^{-1} [{\bb x}_{i}^{T}({\bb X}^{T}{\bb X})^{-2}{\bb x}_i ]^{-2} {\bb V}_i .
$$
By using the Moore-Penrose inverse of $\widehat{\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})}$, a normalized
distance between $\hat{{\bb \beta}}$ and $\hat{{\bb \beta}}_{(i)}$ can be obtained as
$$ (\hat{{\bb \beta}} -
\hat{{\bb \beta}}_{(i)})^{T}[\widehat{\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})}]^{+}(\hat{{\bb \beta}} -
\hat{{\bb \beta}}_{(i)}) =\frac{e_{i}^{2}}{\hat{\sigma}^2 (1-h_{ii})}.
$$
This normalization of $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ is just a square
of the $i$-th internally studentized residual (see Eq. (4.6) of Chatterjee and Hadi, 1988 for the internally
studentized residuals).
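This identity can be checked numerically (a sketch with synthetic data; the variable names are illustrative):

```python
import numpy as np

# Verify that the Moore-Penrose normalized distance equals the squared
# internally studentized residual e_i^2 / (sigma2_hat * (1 - h_ii)).
rng = np.random.default_rng(1)
n, p = 15, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
e = y - H @ y
sigma2 = e @ e / (n - p)         # unbiased estimate of sigma^2

i = 2
xi = X[i]
d = XtX_inv @ xi * e[i] / (1.0 - H[i, i])        # beta_hat - beta_hat_(i)
Vi = np.outer(XtX_inv @ xi, XtX_inv @ xi)        # rank-one matrix V_i
cov_hat = sigma2 / (1.0 - H[i, i]) * Vi          # estimated covariance of d
norm_dist = d @ np.linalg.pinv(cov_hat) @ d
studentized_sq = e[i] ** 2 / (sigma2 * (1.0 - H[i, i]))
print(np.isclose(norm_dist, studentized_sq))  # True
```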
\subsection{Cook's distance}
We will assume hereafter that the error terms have a normal distribution with mean zero and variance $\sigma^2$. Based
on a confidence ellipsoid for ${\bb \beta}$, Cook (1977) introduced a diagnostic measure which can be expressed as
\begin{eqnarray*}
D_{i} &=& \frac{1}{p}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^{T} [\widehat{\mbox{cov}(\hat{{\bb \beta}})}]^{-1} (\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}) \\
&=&\frac{1}{p\hat{\sigma}^{2}}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^{T}({\bb X}^{T}{\bb X})(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}) \\
&=& \frac{1}{p} \frac{h_{ii}}{1-h_{ii}} \frac{e_{i}^{2}}{\hat{\sigma}^2 (1-h_{ii})}.
\end{eqnarray*}
Cook's distance $D_{i}$ is a scaled distance between $\hat{{\bb \beta}}$ and $\hat{{\bb \beta}}_{(i)}$ using
$\widehat{\mbox{cov}(\hat{{\bb \beta}})}$.
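The closed form on the last line of the display can likewise be confirmed against the quadratic-form definition (a sketch with synthetic data):

```python
import numpy as np

# Verify D_i = (1/p) * h_ii/(1-h_ii) * e_i^2 / (sigma2_hat * (1-h_ii)).
rng = np.random.default_rng(2)
n, p = 25, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

XtX = X.T @ X
XtX_inv = np.linalg.inv(XtX)
H = X @ XtX_inv @ X.T
e = y - H @ y
sigma2 = e @ e / (n - p)

i = 7
d = XtX_inv @ X[i] * e[i] / (1.0 - H[i, i])      # beta_hat - beta_hat_(i)
D_quad = d @ XtX @ d / (p * sigma2)              # quadratic-form definition
h = H[i, i]
D_closed = (h / (1.0 - h)) * e[i] ** 2 / (sigma2 * (1.0 - h)) / p
print(np.isclose(D_quad, D_closed))  # True
```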
\subsubsection{On comparing $D_{i}$ to the percentiles of the central $F$-distribution}
The quantity
$$
\frac{1}{\sigma^{2}}(\hat{{\bb \beta}} - {\bb \beta})^{T}({\bb X}^{T}{\bb X})(\hat{{\bb \beta}} - {\bb \beta})
$$
has a chi-squared distribution with $p$ degrees of freedom. However, the quantity
$$
\frac{1}{\sigma^{2}}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^{T}({\bb X}^{T}{\bb X})(\hat{{\bb \beta}} - \hat{{\bb
\beta}}_{(i)})
$$
does not in general have a chi-squared distribution except for a single case, which will be explained in this
subsection. To this end, we will use Theorem 9.10 of Schott (1997) restated in the following lemma for easy
reference.
\begin{lemma}
Assume that a random vector ${\bb x}$ is distributed as a $p$-variate normal distribution $N_{p} ({\bb 0}, {\bb
\Omega} )$, where ${\bb \Omega}$ is positive semidefinite. Let ${\bb A}$ be a $p \times p$ symmetric matrix. Then a
quadratic form ${\bb x}^T {\bb A} {\bb x}$ has a chi-squared distribution with $r$ degrees of freedom if and only if
${\bb \Omega} {\bb A}{\bb \Omega}{\bb A}{\bb \Omega} = {\bb \Omega} {\bb A}{\bb \Omega}$ and $\mbox{tr}({\bb A}{\bb
\Omega}) = r$.
\end{lemma}
Note that $(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})/\sigma$ has a $p$-variate normal distribution with zero mean
vector and covariance matrix ${\bb V}_{i}/(1-h_{ii})$. In Lemma 1, taking
$$
{\bb \Omega }= \frac{1}{1-h_{ii}}{\bb V}_{i}~~~\mbox{and}~~~{\bb A}={\bb X}^T {\bb X},
$$
we have
$$
{\bb \Omega} {\bb A}{\bb \Omega}{\bb A}{\bb \Omega} = \frac{h_{ii}^2}{(1-h_{ii})^3} {\bb V}_i ~~~\mbox{and}~~~
{\bb \Omega}{\bb A}{\bb \Omega} = \frac{h_{ii}}{(1-h_{ii})^2} {\bb V}_i .
$$
The first condition ${\bb \Omega}{\bb A}{\bb \Omega}{\bb A}{\bb \Omega} = {\bb
\Omega}{\bb A}{\bb \Omega}$ holds only when $h_{ii} = 1/2$. Next, since
$$
\mbox{tr}({\bb A}{\bb \Omega} ) = \frac{h_{ii}}{1-h_{ii}},
$$
we have $\mbox{tr}({\bb A}{\bb \Omega} ) = k$ for $k=1,...,p$ when $h_{ii} = k / (1+k)$. The two conditions in Lemma 1 are
simultaneously satisfied only when $h_{ii} = 1/2$. Thus we have the following theorem.
\begin{theorem} \label{thm:xx}
For each $i$, the quantity
$$
\frac{1}{\sigma^{2}}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^{T}({\bb X}^{T}{\bb X})(\hat{{\bb \beta}} - \hat{{\bb
\beta}}_{(i)})
$$
has a chi-squared distribution with one degree of freedom only when $h_{ii} = 1/2$.
\end{theorem}
Cook (1977) suggests that each $D_{i}$ is compared to the percentiles of the central $F$-distribution
$F(p,n-p)$. Each $D_{i}$ does not strictly have an $F$-distribution (see p.120 of Chatterjee and Hadi, 1988). Also,
Theorem \ref{thm:xx} shows that the use of $F$-distribution as a distributional form and the choice of numerator
degrees of freedom for $p \geq 2$ are inappropriate.
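The two matrix computations used above (the failure of the first Lemma 1 condition by the factor $h_{ii}/(1-h_{ii})$, and the trace identity) can be confirmed numerically with synthetic data:

```python
import numpy as np

# Check: Omega A Omega A Omega = (h/(1-h)) * Omega A Omega, and
#        tr(A Omega) = h/(1-h), with Omega = V_i/(1-h_ii), A = X'X.
rng = np.random.default_rng(3)
n, p = 12, 3
X = rng.normal(size=(n, p))
XtX = X.T @ X
XtX_inv = np.linalg.inv(XtX)
H = X @ XtX_inv @ X.T

i = 5
h = H[i, i]
Vi = np.outer(XtX_inv @ X[i], XtX_inv @ X[i])
Omega = Vi / (1.0 - h)
A = XtX

lhs = Omega @ A @ Omega @ A @ Omega
rhs = Omega @ A @ Omega
# The two sides differ by the factor h/(1-h), so the first condition of
# Lemma 1 fails unless h_ii = 1/2:
print(np.allclose(lhs, (h / (1.0 - h)) * rhs))          # True
print(np.isclose(np.trace(A @ Omega), h / (1.0 - h)))   # True
```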
\subsubsection{On the choice of ${\bb X}^{T}{\bb X}$ as a scaling matrix}
First we will consider a distributional property of a random vector with a singular covariance matrix for easy
understanding in the next paragraph.
\begin{theorem} \label{thm:sing}
Assume that a random vector ${\bb x}$ is distributed as a $p$-variate normal distribution $N_{p} ({\bb 0}, {\bb
\Omega} )$, where the rank of ${\bb \Omega}$ is $q$ with $1 \leq q < p$. Then ${\bb x}$ takes values in the column
space of ${\bb \Omega}$ with probability one.
\end{theorem}
\begin{proof}
Let the spectral decomposition of ${\bb \Omega}$ be
$$ {\bb \Omega} = {\bb \Gamma} {\bb \Lambda} {\bb \Gamma}^{T},$$
where ${\bb \Gamma}$ is an orthogonal matrix with its $k$-th column ${\bb \gamma}_{k}$ ($k=1,\cdots , p$) and ${\bb
\Lambda}$ is a diagonal matrix $\mbox{diag}(\lambda_{1}, \cdots , \lambda_{q}, 0, \cdots , 0)$ with positive
eigenvalues $\lambda_{1}, \cdots , \lambda_{q}$ of ${\bb \Omega}$. The set $\{ {\bb \gamma}_{1}, \cdots , {\bb
\gamma}_{p} \}$ forms an orthonormal basis for the $p$-dimensional Euclidean space.
Let $R({\bb \Omega})$ be the column space of ${\bb
\Omega}$ and $N({\bb \Omega})$ be the null space of ${\bb \Omega}$. The set $\{ {\bb \gamma}_{1}, \cdots , {\bb
\gamma}_{q} \}$ is an orthonormal basis for $R({\bb \Omega})$ while the set $\{ {\bb \gamma}_{q+1}, \cdots , {\bb
\gamma}_{p} \}$ is an orthonormal basis for $N({\bb \Omega})$. Since $R({\bb \Omega})$ is the orthogonal complement of
$N({\bb \Omega})$, we have
$$
{\bb x} \in R({\bb \Omega}) ~~\mbox{ if and only if}~~~ {\bb \gamma}_{j}^{T} {\bb x} = 0 ~~\mbox{ for all}~ j=q+1 , \cdots
, p,
$$
which yields
$$
\{ {\bb x} \not \in R({\bb \Omega}) \} ~~ = ~~ \cup_{j=q+1}^{p} \{ {\bb \gamma}_{j}^{T} {\bb x} \neq 0 \} .
$$
For each $j=q+1 , \cdots , p$, the mean of ${\bb \gamma}_{j}^{T} {\bb x}$ is zero and its variance is ${\bb
\gamma}_{j}^{T} {\bb \Omega} {\bb \gamma}_{j} = 0$. Hence the probability that ${\bb \gamma}_{j}^{T} {\bb x}$ is equal
to $0$ is
$$
P ( {\bb \gamma}_{j}^{T} {\bb x} = 0 ) = 1.
$$
Thus it is easy to see that $P ({\bb x} \in R({\bb \Omega}) ) = 1 $, that is, ${\bb x}$ takes values in the column
space of ${\bb \Omega}$ with probability one.
\end{proof}
We consider the spectral decomposition of ${\bb X}^{T}{\bb X}$ as
$${\bb X}^{T}{\bb X} = {\bb G}{\bb L}{\bb G}^T ,$$
where ${\bb L} = \mbox{diag} (l_1,$ $ \cdots,$ $
l_p )$ is a $p \times p$ diagonal matrix consisting of the eigenvalues of ${\bb X}^{T}{\bb X}$, ${\bb G} = ({\bb g}_1 ,
\cdots , {\bb g}_p )$ is a $p \times p$ orthogonal matrix, and ${\bb g}_k$ is the eigenvector of ${\bb X}^{T}{\bb X}$
associated with the eigenvalue $l_k$. Each $D_{i}$ can be expressed as
$$
D_i = \frac{1}{p\hat{\sigma}^{2}} \sum_{k=1}^p l_k [(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^T {\bb g}_k ]^2 . $$
The terms $l_k$ and $(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^T {\bb g}_k$ play a specific role in determining
the magnitude of $D_i$ for each $i$.
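The spectral expansion of $D_i$ above can be checked numerically (a sketch with synthetic data):

```python
import numpy as np

# Verify D_i = (1/(p*sigma2)) * sum_k l_k * (d' g_k)^2 with X'X = G L G'.
rng = np.random.default_rng(4)
n, p = 18, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

XtX = X.T @ X
XtX_inv = np.linalg.inv(XtX)
H = X @ XtX_inv @ X.T
e = y - H @ y
sigma2 = e @ e / (n - p)

l, G = np.linalg.eigh(XtX)   # eigenvalues l_k, eigenvectors g_k (columns of G)

i = 0
d = XtX_inv @ X[i] * e[i] / (1.0 - H[i, i])      # beta_hat - beta_hat_(i)
D_spectral = np.sum(l * (d @ G) ** 2) / (p * sigma2)
D_direct = d @ XtX @ d / (p * sigma2)
print(np.isclose(D_spectral, D_direct))  # True
```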
Since the rank of $\mbox{cov}(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})$ is one, Theorem \ref{thm:sing} shows that
$\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ is distributed entirely along the line generated by the eigenvector
$({\bb X}^T {\bb X})^{-1} {\bb x}_i$ of ${\bb V}_{i}$, which is a one-dimensional subspace of the $p$-dimensional
Euclidean space. Since the eigenvectors of ${\bb X}^{T}{\bb X}$ are orthogonal to each other, all the eigenvectors of
${\bb X}^{T}{\bb X}$ or $p-1$ eigenvectors are not in general parallel to the line generated by $({\bb X}^T {\bb
X})^{-1} {\bb x}_i$ in which the random vector $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ takes values with
probability one. The distance $D_i$ inevitably includes the components $l_k [(\hat{{\bb \beta}} - \hat{{\bb
\beta}}_{(i)})^T {\bb g}_k ]^2 /p\hat{\sigma}^{2}$ (for all the $k$'s or $p-1$ $k$'s) associated with the axes ${\bb
g}_k$ different from the axis determined by the eigenvector $({\bb X}^T {\bb X})^{-1} {\bb x}_i$. These components of
$D_i$ become a source of distorting the real influence of the $i$-th observation on $\hat{{\bb \beta}}$ because the
coordinates $(\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)})^T {\bb g}_k$ with respect to the axes ${\bb g}_k$ different
from the axis determined by the eigenvector $({\bb X}^T {\bb X})^{-1} {\bb x}_i$ are probabilistically meaningless.
Hence the adoption of ${\bb X}^T{\bb X}$ for scaling the distance between $\hat{{\bb \beta}}$ and $ \hat{{\bb
\beta}}_{(i)}$ is not reasonable and
Cook's distance measure cannot in general correctly identify influential observations.
More details can be found in Kim (2017).
\section{A new diagnostic measure}
We note that the rank of $\mbox{cov}(\hat{{\bb
\beta}} - \hat{{\bb \beta}}_{(i)})$ is one and only $({\bb X}^T {\bb X})^{-1} {\bb x}_i$ is the eigenvector of ${\bb
V}_{i}$ associated with a nonzero eigenvalue.
The eigenvector $({\bb X}^T {\bb X})^{-1} {\bb x}_i$
forms one axis in the $p$-dimensional Euclidean space. Theorem \ref{thm:sing} implies that among $p$ coordinates of
$\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ in the $p$-dimensional Euclidean space, only the coordinate of $\hat{{\bb
\beta}} - \hat{{\bb \beta}}_{(i)}$ with respect to the axis $({\bb X}^T {\bb X})^{-1} {\bb x}_i$ is probabilistically
meaningful. This coordinate (or its absolute value) as a scalar represents naturally the influence of the $i$-th
observation which the difference $\hat{{\bb \beta}} - \hat{{\bb \beta}}_{(i)}$ reflects in the $p$-dimensional
Euclidean space, and it is computed as
$$
K_i = \frac{e_i }{1-h_{ii}} ~ ||({\bb X}^{T}{\bb X})^{-1}{\bb x}_i ||,
$$
where $||{\bb a} ||^2 = {\bb a}^{T}{\bb a}$ for a column vector ${\bb a}$.
Hence it is reasonable to use the quantity $K_{i}$ as a diagnostic measure to investigate the influence of the $i$-th
observation on $\hat{{\bb \beta}}$. The quantities $K_{1}, \cdots , K_{n}$ are naturally ordered according to their
magnitudes.
A relatively large absolute
value of $K_i$ implies that the $i$-th observation is potentially influential. The quantity $K_i$ is invariant under
orthogonal transformations of the rows of ${\bb X}$. However, it is not in general invariant under nonsingular
transformations. For a nonsingular transformation ${\bb X}{\bb A}$ with a $p \times p$ nonsingular matrix ${\bb A}$, we
use $(e_i /(1-h_{ii})) ||{\bb A}^{-1}({\bb X}^{T}{\bb X})^{-1}{\bb x}_i ||$ instead of $K_i$.
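As a sketch (synthetic data, illustrative names), $K_i$ can be computed for all observations at once:

```python
import numpy as np

# K_i = e_i/(1 - h_ii) * ||(X'X)^{-1} x_i|| for every observation.
rng = np.random.default_rng(5)
n, p = 20, 4
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
e = y - H @ y
h = np.diag(H)

# Columns of XtX_inv @ X.T are the vectors (X'X)^{-1} x_i.
K = e / (1.0 - h) * np.linalg.norm(XtX_inv @ X.T, axis=0)

# Ranking by |K_i| singles out the potentially most influential observation.
most_influential = int(np.argmax(np.abs(K)))
print(K.shape)  # (20,)
```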
\subsection{Hald data}
The regression model with the intercept term is fitted to the Hald data set (Draper and Smith, 1981) which consists of
13 observations on a single response variable and four independent variables. For the Hald data, our discussion is
confined to observations 3 and 8.
Cook's distances show that
observation 8 is the most influential ($D_8 = 0.394$) and observation 3 is the next ($D_3 = 0.301$).
However, the $K_i$ values
show that observation 3 is the most influential ($K_3 = -76.197$) and observation 8 is the next
($K_8 = -25.168$). An analysis of the sources of the $D_i$ values for observations 3 and 8 shows that
the $D_8$ value enlarges the real influence of observation 8 on $\hat{{\bb \beta}}$,
while the $D_3$ value reduces the real influence of observation 3 (Kim, 2017). Hence the $D_{3}$ value does not
identify observation 3 as the most influential one even though the $K_3$ value identifies observation 3 as the most
influential one, and the $D_{8}$ value identifies observation 8 as the most influential one
even though observation 8 is not the most influential based on the $K_i$ values.
\subsection{Body fat data}
We fit the regression model with the intercept term to the body fat data set (Neter et al., 1996, p.261) which has
20 measurements on a single response variable and three independent variables. An analysis of the body fat data is
confined to observations 1 and 3. Based on the $D_i$ values, observation 3 is the most influential ($D_3 = 0.299$) and
observation 1 is the next ($D_1 = 0.279$). However, for the $K_i$ values, observation 1 has the largest absolute value
($K_1 = -72.922$) and observation 3 has $K_3 = -37.466$, not the second largest absolute value. An investigation of the
sources of the $D_i$ values for observations 1 and 3 shows that the $D_3$ value enlarges the real influence of
observation 3 on $\hat{{\bb \beta}}$, while the $D_1$ value reduces the real influence of observation 1 (Kim, 2017).
Hence the $D_{1}$ value does not identify observation 1 as the most influential one even though the $K_1$ value
identifies observation 1 as the most influential one, and the $D_{3}$ value identifies observation 3 as the most
influential one even though observation 3 does not have the largest absolute value based on the $K_i$ values.
\subsection{Rat data}
The regression model with the intercept term is fitted to the rat data set (Cook, 1977) which consists of 19
measurements on a single response variable and three independent variables. For the rat data, we confine our discussion
to observation 3. The $D_i$ values show that observation 3 is the most influential ($D_3 = 0.930$). For the $K_i$
values, observation 3 has the largest absolute value ($K_3 = 2.694$). Both diagnostic measures lead to the same
conclusion that observation 3 is the most influential. The extent to which the $D_3$ value reflects the real
influence of observation 3 on $\hat{{\bb \beta}}$ is very high (Kim, 2017). Hence the $D_3$ value gives the same
result as the $K_3$ value.
\section{\label{sec1} Introduction}
Exact analytic solution of the General Relativistic field equations has been an active field of study for over a century, beginning with the first solution, proposed by Schwarzschild~\cite{Schwarzschild}. On this subject some thorough reviews have been published from time to time~\cite{kramer}. In particular, static spherically symmetric perfect fluid solutions were reviewed by Delgaty and Lake~\cite{Delgaty}. On close examination, they reported $16$ of the $127$ static spherically symmetric perfect fluid solutions considered in the review to be physically viable. Correcting the solution given by Duorah and Ray~\cite{Duorah}, Finch and Skea~\cite{Finch} obtained a new exact solution to the Einstein field equations. One notable feature of this solution is that it describes a compact star with isotropic pressure only. The motivation for the present work is therefore to consider a model of an anisotropic star that reduces to the Finch-Skea solution for zero anisotropy.
The Finch-Skea metric~\cite{Finch} is well behaved: it satisfies all the criteria of Delgaty and Lake~\cite{Delgaty} for a static spherically symmetric perfect fluid model and has been shown to be consistent with the Walecka theory~\cite{Walecka74} for cold condensed stars. Additionally, this metric has proved suitable for studying neutron stars, especially for investigating central densities of neutron stars in relativistic mean-field theory~\cite{Walecka75}. Essentially this spacetime describes a perfect fluid matter distribution which obeys a barotropic equation of state (EOS). The underlying approach to solving Einstein's equations for spherical symmetry involves an {\it ad hoc} assumption for one of the gravitational potentials, as the system of field equations is under-determined and possesses one degree of freedom. It is interesting to note that the Finch-Skea metric involves postulating a form for the radial potential which allows for the complete integration of the field equations, whereupon all the remaining geometric and dynamical quantities may be determined~\cite{CH15}. Kalam et al.~\cite{Kalam13a,Kalam13b} have modeled strange quark stars using the Finch-Skea metric within the MIT bag model~\cite{Kalam13a} as well as a two-fluid model~\cite{Kalam13b}, and have proposed quintessence stars combining anisotropic pressure corresponding to normal matter~\cite{Kalam13c}.
On the other hand, Maharaj et al.~\cite{Maharaj16} have shown that the Finch-Skea geometry can be generalized to include charge and anisotropy. Considering a particular solution of this charged anisotropic model, Kileba et al.~\cite{Matondo17} have predicted the masses of stellar objects for three different scenarios, viz., (i) charged anisotropic, (ii) charged isotropic, and (iii) uncharged isotropic distributions, which were found to be compatible with several known compact objects. Tikekar and Jotania~\cite{TJ07} applied the Finch-Skea metric~\cite{Finch}, by assuming that the $3$-space of the interior spacetime of a strange star is that of a three-paraboloid immersed in a 4-dimensional Euclidean space, to obtain a two-parameter family of physically viable relativistic models of neutron stars, and showed that it admits the possibility of describing strange stars as well as other highly compact configurations of matter in equilibrium. The Finch-Skea ansatz~\cite{Finch} was also used by Sharma and Ratanpal~\cite{SR13} to generate a class of solutions describing the interior of a static spherically symmetric anisotropic star. Later, Pandya et al.~\cite{PTS} generalized the model of Sharma and Ratanpal~\cite{SR13} by incorporating a dimensionless parameter $n(>0)$ in the Finch-Skea ansatz~\cite{Finch}, assuming the system to be anisotropic. Charged Finch-Skea stars were described in terms of Bessel functions and modified Bessel functions by Hansraj and Maharaj~\cite{HM06} and Hansraj et al.~\cite{HM16}, where both models are found to obey a barotropic EOS.
Bhar et al.~\cite{Bhar14} have produced anisotropic stars in ($2 + 1$) dimensions and a quark EOS by using the Finch-Skea metric~\cite{Finch}. A class of interior solutions corresponding to the BTZ~\cite{BTZ} exterior solution has been investigated by Banerjee et al.~\cite{Banerjee13} under the Finch-Skea metric which is relevant for the description of realistic stars in ($3 + 1$) dimensions as a complementary approach to the study by Garc{\'i}a et al.~\cite{Garcia03}.
For higher dimensions, several researchers~\cite{Hansraj17,Dadhich17,Molina17,Patel97,CH15} have studied the Finch-Skea metric as well as its generalizations. Another fascinating study by Hansraj et al.~\cite{Hans15} shows that the Finch-Skea spacetime also arises in the $5$-dimensional Einstein-Gauss-Bonnet modified theory of gravity, suggesting that the Finch-Skea geometry may play an important role in more general Lovelock polynomials with a Lagrangian containing higher order terms~\cite{Maharaj16}.
Now, in the present model the radial pressure is taken to be different from the transverse components of the pressure, and thus anisotropy implies unequal principal stresses. Equality of the transverse components of pressure ensures the spherical symmetry of the model~\cite{Gleiser2002}. Various origins of the anisotropy inside compact stars have been pointed out by researchers. Anisotropy can develop in the core of compact stars due to exotic phase transitions at extreme density~\cite{Sokolov}. Jones~\cite{Jones} predicted the presence of a type II superconductor inside compact stars, leading to anisotropy of the stress tensor. Pion condensation~\cite{Sawyer} and type $3A$ superfluid~\cite{Kippen} have also been identified as possible origins of anisotropy. Ruderman~\cite{Ruderman} indicated that local anisotropy may develop in compact stars due to a solid core. A strong magnetic field may also lead to the development of anisotropy inside a compact star~\cite{Weber}. Liebling and Palenzuela~\cite{Liebling} have shown that a scalar field in a boson star may give rise to anisotropy. For a review of local anisotropy, one may consult Herrera and Santos~\cite{Herrera97}.
It is worth noting that anisotropy in pressure plays a significant role in the structure and properties of compact stars. Karmarkar et al.~\cite{Karmarkar} indicated that the numerical value of the compactness parameter $\frac{2M}{R}$, $M$ and $R$ being the mass and radius of the star, may approach unity for anisotropic stars. The upper limit of the surface redshift for anisotropic stars becomes $3.842$ and $5.211$ when the transverse components of the pressure satisfy the strong and the dominant energy condition, respectively~\cite{Ivanov2002}. Mak and Harko~\cite{Mak1,Mak2} have shown that anisotropy must be maximum at the surface of the compact star and should vanish at the center of the fluid sphere. There are a good number of models of anisotropic compact stars under General Relativity in the literature~\cite{Gleiser2003,Rahaman1,Varela,Sharma7,Maharaj}. In the present paper we investigate the nature of anisotropy to model a stable compact star.
The structure of the paper is as follows: in Sec. \ref{Sec2} the basic equations required for the description of the model of the compact star are presented. The relevant solutions to the Einstein field equations for the model are presented in Sec.~\ref{Sec3} by discussing two cases: (i) anisotropic case and (ii) isotropic case. The smooth matching for the interior and the exterior solution at the boundary is discussed in Sec.~\ref{Sec4} and hence the general form for the integration constants are found. In Sec. \ref{Sec5} we elaborate the physical analysis for a viable model of compact star. We have discussed the stability analysis for the prescribed model in Sec.~\ref{Sec6}. The mass-radius relationship, and surface redshift for the model are described in Sec. \ref{Sec7}. Additionally, the variation of the central density with the mass and the radius are discussed in Sec. \ref{Sec7}. The last section is dedicated to concluding remarks.
\section{\label{Sec2} Einstein's field equation and the interior solution for the model}
We start by considering the model which represents a static spherically symmetric fluid configuration. The line element describing the interior space-time of a spherically symmetric star in Schwarzschild coordinates\\
$x^0 = t$, $x^1=r$, $x^2 = \theta$, $x^3 = \phi$ can be written as
\begin{equation}
ds_{-}^2 = -A_{0}^2(r)dt^2 + B_{0}^2(r)dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2),\label{eq1}
\end{equation}
where $A_{0}(r)$ and $B_{0}(r)$ are the gravitational potentials yet to be determined.
To study stellar structure and stellar evolution, the basic supposition made by researchers is to consider the interior of a star as a perfect fluid~\cite{DDC,Kippen}. The pressure in the interior of a star is considered to be isotropic to model this perfect fluid~\cite{Gleiser2002}. Several studies in recent times have shown that, at very high density, departure from isotropic pressure plays a vital role in studying the features of a stellar interior~\cite{Ruderman,Canuto}. Thus the energy-momentum tensor is taken to be anisotropic, isotropy being an extra assumption on the behavior of the fields or of the fluid modeling the stellar interior~\cite{Gleiser2002}.
The matter distribution of the stellar interior is thus described by an energy-momentum tensor of the form
\begin{equation}
T_{\alpha\beta} = (\rho + p_t)u_{\alpha} {u_\beta} + p_{t} g_{\alpha \beta} + (p_r - p_t)\chi_{\alpha} \chi_{\beta},\label{eq2}
\end{equation}
where $\rho$ represents the energy-density, $p_r$ and $p_t$, respectively, denote fluid pressures along the radial and transverse directions, $u^\alpha$ is the $4$-velocity of the fluid and $\chi^\alpha$ is a unit space-like $4$-vector along the radial direction, so that $u^\alpha u_\alpha = -1$, $\chi^\alpha \chi_\alpha = 1$ and $u^\alpha\chi_\alpha = 0$.
The Einstein field equations governing the evolution of the system is then obtained as (we set $G = c = 1$)
\begin{eqnarray}
8\pi\rho &=& \left[\frac{1}{r^2}-\frac{1}{r^2 B_0^2}+\frac{2B_0'}{r B_0^3}\right],\label{feq3}\\
8\pi p_r &=& \left[-\frac{1}{r^2}+\frac{1}{B_0^2 r^2}+\frac{2A_0'}{r A_0B_0^2}\right],\label{feq4}\\
8\pi p_t &=& \left[\frac{A_0''}{A_0B_0^2} + \frac{A_0'}{rA_0B_0^2} - \frac{B_0'}{r B_0^3} - \frac{A_0'B_0'}{A_0 B_0^3}\right],
\label{feq5}
\end{eqnarray}
where in above set of Eqs.~(\ref{feq3})-(\ref{feq5}), a `prime' denotes differentiation with respect to $r$.
Making use of Eqs.~(\ref{feq4}) and (\ref{feq5}), we define the anisotropic parameter of the stellar system as
\[\Delta(r) = 8\pi (p_t-p_r) = \]
\begin{equation}\label{aeq6}
\left[\frac{A_0''}{A_0B_0^2} - \frac{A_0'}{r A_0B_0^2} - \frac{B_0'}{r B_0^3} -\frac{A_0'B_0'}{A_0B_0^3} - \frac{1}{r^2B_0^2} + \frac{1}{r^2}\right].
\end{equation}
Moreover, the mass contained within a radius $r$ of the sphere is defined as
\begin{equation}
m(r)= \frac{1}{2}\int_0^r\omega^2 \rho(\omega)d\omega.\label{eq7}
\end{equation}
At this stage, we have a system of three equations, Eqs.~(\ref{feq3})-(\ref{feq5}), with five unknowns $\rho$, $p_r$, $p_t$, $A_0(r)$ and $B_0(r)$. Thus, to find exact solutions of the field equations and hence to model a stellar interior, we need to specify two of them. To model a physically reasonable stellar configuration, we propose that the metric potential $g_{rr}$ is of the form considered by Finch and Skea~\cite{Finch}, given by
\begin{equation}
B_0^2 (r) = \left(1 +\frac{r^2}{R^2}\right),\label{eq12b}
\end{equation}
where $R$ is the curvature parameter describing the geometry of the configuration and has the dimension of length. This choice of metric potential ensures that $B^2_0(r)$ is finite, continuous and well defined within the stellar interior. Moreover, $B^2_0(0)=1$, so the potential is finite at the center, and $(B^2_0(r))'_{r=0}=0$, so the metric is regular there as well.
With this choice of $B_0(r)$, Eq.~(\ref{aeq6}) then reduces to
\begin{equation}\label{eq11}
\Delta(r) = \frac{r^3 A_0 + R^2 \left[r \left(r^2 + R^2 \right) A_0'' - \left(R^2 + 2r^2 \right) A_0' \right]}{r \left(r^2 + R^2 \right) A_0}.
\end{equation}
On rearranging Eq.~(\ref{eq11}), we get
\begin{equation}\label{eq12}
\frac{R^2 A_0''}{A_0} - \frac{R^2 \left(R^2 + 2 r^2 \right)}{r \left( r^2 + R^2 \right)} \frac{A_0'}{A_0} + \frac{r^2}{r^2 + R^2} = \Delta (r).
\end{equation}
Now the above Eq.~(\ref{eq12}) can be solved for $A_0(r)$ if the anisotropic parameter $\Delta(r)$ is specified in a particular form. One can obtain solutions from Eq.~(\ref{eq12}) for the following two cases: (i) $\Delta(r) = 0$ and (ii) $\Delta(r) \neq 0$. We discuss both cases in the next section.
\section{\label{Sec3} Exact solutions to field equations }
To obtain an exact solution of Eq.~(\ref{eq12}), we need to consider the anisotropy in some specific form. We therefore take the anisotropic factor as
\begin{equation}
\Delta = \frac{\alpha r^2 (R^2 - r^2)}{(R^2 + r^2)^3}, \label{delalpha}
\end{equation}
where $\alpha$ is the parameter determining the measure of the anisotropy. This choice of anisotropy is physically feasible, since $\Delta$ is regular for all values of the radial coordinate $r$ and satisfies $\Delta(r=0)=0$ at the center. Utilizing this choice of anisotropy in Eq.~(\ref{eq12}), we get the master equation in the form\\
\[ \frac{\big(r R^2 (r^2 + R^2)^3 \big)A''_0 - \left(R^2 (2 r^2 + R^2)(r^2 + R^2)^2\right)A'_0}{r(r^2 + R^2)^3 A_0} \]
\begin{equation}
+ \frac{r^3 \left((r^2+R^2)^2 + \alpha (r^2 - R^2)\right)A_0}{r(r^2 + R^2)^3 A_0}=0.\label{eqme}
\end{equation}
Now we would like to study exact solutions of the field equations for different values of $\alpha$. In the present work, we investigate the values $\alpha = -1,~0$ and $1$. It is worth mentioning that a similar investigation has been conducted by Sharma and Das~\cite{SD13} using the Finch-Skea metric. Their model represents an initially static star, either anisotropic or isotropic in nature, which eventually describes a gravitationally collapsing system. In the present work, however, we attempt to depict a spherically symmetric stable configuration for both anisotropic and isotropic pressure.
Now, using the Frobenius method~\cite{Teschl2012}, we can solve Eq.~(\ref{eqme}) about $r~=~0$. We seek a solution in the series form
\begin{equation}
A_0 = \sum_{n=0}^{\infty} c_n r^{s+n},~ c_0 \neq 0.\label{eqc0}
\end{equation}
Computing the values of $A'_0$ and $A''_0$ and substituting them into Eq.~(\ref{eqme}), we obtain the following differential equation:
\[ r R^2 (r^6+3 r^4 R^2 + 3 r^2 R^4 + R^6)\]
\[\sum_{n=0}^{\infty} (s+n)(s+n-1)c_n r^{s+n-2} \]
\[ - R^2 (2 r^6 + 5 r^4 R^2 + 4 r^2 R^4 + R^6) \sum_{n=0}^{\infty} (s+n) c_n r^{s +n-1}\]
\begin{equation}
+ r^3 (r^4 + 2 r^2 R^2 + R^4 + \alpha r^2 - \alpha R^2) \sum_{n=0}^{\infty}c_n r^{s+n} = 0. \label{eqde}
\end{equation}
Equating the coefficient of the smallest power of $r$ to zero and solving, we obtain the roots of the indicial equation as $0$ and $2$. Solving for each successive coefficient of $r$, we get the solution of Eq.~(\ref{eqme}) as
\begin{equation}
A_0 = M u(r) + N v(r), \label{eqsol1}
\end{equation}
where $M$ and $N$ are two arbitrary constants and
\begin{eqnarray}
u(r) &=& 1 + {7 \over 2R^2} r^2 + \frac{6 R^2 + \alpha}{8 R^6} r^4 +... \\ \nonumber
v(r) &=& r^2 \left[ 1+ \frac{1}{4 R^2} r^2 + \frac{\alpha - 2 R^2}{24 R^6} r^4 +... \right]. \label{eqsol2}
\end{eqnarray}
However, due to the complexity of the general solution, we investigate exact solutions of the model for specific values of $\alpha$, as described in the following subsections.
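As a numerical sanity check on the truncated series (an editorial sketch, not part of the original derivation; $R=1$ and $\alpha=-1$ are arbitrary test values), one can verify that the residual of $u(r)$, truncated at the $r^4$ term, in the polynomial form of the master equation scales as $r^5$ near the center, i.e., the retained coefficients satisfy the recurrence up to the first omitted order:

```python
# Sanity check: the truncated series u(r) of Eq. (eqsol1)-(eqsol2) should satisfy
# the series-form master equation up to the first omitted order, so its residual
# scales as r^5 near the center.  R = 1 and alpha = -1 are arbitrary test values.

def u_trunc(r, R=1.0, a=-1.0):
    return 1.0 + 7.0 * r**2 / (2.0 * R**2) + (6.0 * R**2 + a) * r**4 / (8.0 * R**6)

def du_trunc(r, R=1.0, a=-1.0):
    return 7.0 * r / R**2 + (6.0 * R**2 + a) * r**3 / (2.0 * R**6)

def d2u_trunc(r, R=1.0, a=-1.0):
    return 7.0 / R**2 + 3.0 * (6.0 * R**2 + a) * r**2 / (2.0 * R**6)

def residual(r, R=1.0, a=-1.0):
    # left-hand side of the master equation multiplied through by its denominator
    return (r * R**2 * (r**2 + R**2)**3 * d2u_trunc(r, R, a)
            - R**2 * (2*r**6 + 5*r**4*R**2 + 4*r**2*R**4 + R**6) * du_trunc(r, R, a)
            + r**3 * ((r**2 + R**2)**2 + a * (r**2 - R**2)) * u_trunc(r, R, a))

ratio = residual(1e-2) / residual(5e-3)   # ~ 2**5 = 32 if residual ~ r^5
assert abs(residual(1e-2)) < 1e-8
assert 28.0 < ratio < 36.0
```

Halving $r$ reduces the residual by a factor close to $2^5=32$, confirming that the quoted coefficients of $u(r)$ cancel all terms through order $r^3$.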
\subsection{Exact solution in the presence of anisotropy}
\subsubsection{Case I: $\alpha=-1$}
To obtain an exact solution to Eq.~(\ref{eq12}), we assume the anisotropic parameter to be in the form
\begin{equation}
\Delta(r)= \frac{r^2 \left(r^2 - R^2 \right)}{\left(r^2 + R^2 \right)^3}.\label{eq13}
\end{equation}
The above choice for the anisotropy is physically reasonable: at the center ($r=0$) the anisotropy vanishes, as expected. Fig.~\ref{figani} depicts the behavior of the anisotropy, which clearly supports the regularity at the center. However, the negative sign of the anisotropy leads to some consequences discussed subsequently. A similar profile of the anisotropic pressure can be observed in the work of Thirukkanesh et al.~\cite{TSD20}. As a limitation of our model, we cannot recover the isotropic pressure condition from the specified anisotropic form.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{anisotropy.eps}
\end{tabular}
\end{center}
\caption{Behavior of the anisotropy within the configuration with respect to the radial coordinate $r$. }\label{figani}
\end{figure}
Also, this choice provides a solution to Eq.~(\ref{eq12}) in closed form. Substituting Eq.~(\ref{eq13}) in Eq.~(\ref{eq12}), we obtain
\begin{equation}
\frac{R\left( r(r^2 + R^2)^2 A_0'' - (r^2 + R^2)(2 r^2 +R^2) A_0'+ 2 r^3 A_0 \right) }{r(r^2 + R^2) A_0}=0.
\label{eq14}
\end{equation}
We obtain a simple solution to Eq.~(\ref{eq14}) as follows:
\begin{equation}\label{eq15}
A_0 = \left( C \sqrt{r^2 + R^2} + D \left( r^2 + R^2 \right) \right),
\end{equation}
where $C$ and $D$ are integration constants which will be obtained from the boundary conditions.
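That Eq.~(\ref{eq15}) indeed solves Eq.~(\ref{eq14}) can be checked numerically (a sketch with arbitrary test values of $R$, $C$, $D$, not part of the paper): the bracketed differential operator of Eq.~(\ref{eq14}) annihilates $A_0 = C\sqrt{r^2+R^2}+D(r^2+R^2)$ at every radius.

```python
import math

# Numerical check: A0 = C*sqrt(r^2+R^2) + D*(r^2+R^2) should make the numerator
# of Eq. (eq14) vanish identically.  R, C, D below are arbitrary test numbers.

def eq14_residual(r, R, C, D):
    s = math.sqrt(r**2 + R**2)
    A0 = C * s + D * s**2
    dA0 = C * r / s + 2.0 * D * r            # dA0/dr
    d2A0 = C * R**2 / s**3 + 2.0 * D         # d^2A0/dr^2
    return (r * (r**2 + R**2)**2 * d2A0
            - (r**2 + R**2) * (2.0 * r**2 + R**2) * dA0
            + 2.0 * r**3 * A0)

for r in (0.5, 1.0, 3.0, 7.0):
    assert abs(eq14_residual(r, R=10.0, C=0.05, D=-2e-4)) < 1e-9
```

Both basis functions $\sqrt{r^2+R^2}$ and $(r^2+R^2)$ separately satisfy the equation, so any constants $C$, $D$ work.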
With these choices of the metric potentials the matter density, radial pressure, transverse pressure and mass are obtained as
\begin{eqnarray}
8\pi\rho &=& \frac{r^2 + 3 R^2}{\left( r^2 + R^2 \right)^2 },\label{eq3b}\\
8\pi p_r &=& \frac{C \left(R^2 - r^2 \right) -D \left( r^2 - 3 R^2 \right) \sqrt{ r^2 + R^2 }}{\left( r^2 + R^2 \right)^2 \left( C + D \sqrt{ r^2 + R^2 } \right)},\label{eq4b}
\end{eqnarray}
\[8\pi p_t = \]
\begin{equation}
\frac{R^2 \left[ C \left(R^2 - r^2 \right) + D \sqrt{ r^2 + R^2 } \left( r^2 + 3 R^2 \right) \right] }{ \left( r^2 + R^2 \right)^3 \left(C + D \sqrt{ r^2 + R^2 } \right)},\label{eq5b}
\end{equation}
\begin{equation}
m(r) = \frac{r^3}{2 \left( r^2 + R^2 \right)}.\label{eq6b}
\end{equation}
Moreover, the gradients of the matter variables are obtained as
\begin{equation}
8\pi\frac{d\rho}{dr} = \frac{2 r}{\left(r^2+R^2\right)^2}-\frac{4 r \left(r^2+3 R^2\right)}{\left(r^2+R^2\right)^3},\label{eq7b}
\end{equation}
\[ 8 \pi \frac{dp_r}{dr} = \frac{2 r\text{C}^2 \left(r^2-3 R^2\right) \sqrt{r^2+R^2}}{\left(r^2+R^2\right)^{7/2} \left(\text{C}+\text{D} \sqrt{r^2+R^2}\right)^2}\]
\[+ \frac{2r \text{C} \text{D} \left(2 r^2-9 R^2\right) \left(r^2+R^2\right)}{\left(r^2+R^2\right)^{7/2} \left(\text{C}+\text{D} \sqrt{r^2+R^2}\right)^2}+ \]
\begin{equation}
\frac{2r \text{D}^2 \left(r^2-7 R^2\right) \left(r^2+R^2\right)^{3/2}}{\left(r^2+R^2\right)^{7/2} \left(\text{C}+\text{D} \sqrt{r^2+R^2}\right)^2},\label{eq8b}
\end{equation}
\[ 8 \pi \frac{dp_t}{dr} = \]
\begin{equation}
\frac{2 r R^2 \left[2 \text{C}^2 \left(r^2-2 R^2\right)+\text{C} \text{D} \left(r^2-11 R^2\right) \sqrt{r^2+R^2}-2 \text{D}^2 \left(r^2+R^2\right) \left(r^2+4 R^2\right)\right]}{\left(r^2+R^2\right)^4 \left(\text{C}+\text{D} \sqrt{r^2+R^2}\right)^2}.\label{eq9b}
\end{equation}
\subsubsection{Case II: $\alpha = 1$}
The anisotropic factor now reduces to
\begin{equation}
\Delta = \frac{r^2 (R^2 - r^2)}{(R^2 + r^2)^3}. \label{eqalpha1}
\end{equation}
Using the value of Eq.~(\ref{eqalpha1}), the master equation Eq.~(\ref{eq12}) reduces to the form
\[ \frac{\Big(r(r^2 + R^2)A''_0 - (2 r^2 + R^2)A'_0 \Big) R^2 (r^2 + R^2)}{r(r^2 + R^2)A_0} + \]
\begin{equation}
\frac{2 r^4}{(r^2 + R^2)}= 0. \label{eqalpha2}
\end{equation}
Since the solutions generated from Eq.~(\ref{eqalpha2}) are imaginary (see Appendix), we exclude this discussion from the present paper.
It is worth noting that in the original Finch-Skea paper~\cite{Finch} the exact solution in the presence of isotropy is available, i.e., the isotropic pressure condition $p=p_r=p_t$ has been adopted. We therefore do not repeat the calculations based on the isotropic condition; however, for the sake of comparison, we have plotted graphs for both cases, which helps us observe the effect of anisotropy in the present model.
\section{\label{Sec4} Exterior spacetime and boundary conditions}
The exterior spacetime, i.e., the spacetime outside the spherically symmetric configuration, is empty for a non-radiating star and is described by the exterior Schwarzschild solution
\[ds^{2}=-\left(1-\frac{2M}{r}\right)dt^{2}+\left(1-\frac{2M}{r}\right)^{-1}dr^{2} + \]
\begin{equation}\label{extmS}
r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right),
\end{equation}
where $r>2M$, with $M$ the total mass of the stellar object. To study a compact stellar structure, the interior spacetime metric (\ref{eq1}) must be matched smoothly to the exterior Schwarzschild metric Eq.~(\ref{extmS}) at the boundary of the star, $r=b$. This condition is known as the continuity of the first fundamental form, or Darmois-Israel condition~\cite{Darmois1927,Israel1966}, and the continuity of the metric functions across the boundary $r=b$ yields
\begin{eqnarray}
A^2_0(b) &=& \left(1-\frac{2 M}{b}\right),\label{bc1}\\
B^2_0(b) &=& \left(1-\frac{2 M}{b}\right)^{-1}.\label{bc2}
\end{eqnarray}
The radial pressure drops to zero at a finite value of the radial coordinate $r$, which defines the radius of the star. This is known as the continuity of the second fundamental form, and utilizing the condition $p_r(r=b)=0$, the radius of the star can be obtained from
\begin{equation}
\left[-\frac{1}{b^2}+\frac{1}{B_0^2 b^2}+\frac{2A_0'}{b A_0B_0^2}\right]=0.\label{bc3}
\end{equation}
Fulfillment of the continuity of both the first and the second fundamental forms is known as the junction condition, and it is utilized to determine the constants for the isotropic as well as the anisotropic case. Thus we obtain the constants in the forms:
\begin{eqnarray}
R &=& \frac{b \sqrt{b - 2 M}}{\sqrt{2 M}}, \label{R1}\\
C &=& \frac{M}{b^2} \left( 3 \sqrt{\frac{b-2M}{2M}}-\sqrt{\frac{2 M}{b-2M}}\right), \label{C1}\\
D &=& \sqrt{\frac{2 M^3}{b^7}}\left( \sqrt{\frac{2 M}{b-2M}}-\sqrt{\frac{b-2M}{2M}}\right), \label{D1}\\
G &=& \frac{ \sqrt{\frac{1}{\varphi}} \left( \sqrt{\varphi}~\cos \left(\sqrt{\varphi}\right) + \sin \left(\sqrt{\varphi}\right) \right)}{2 \sqrt{\varphi} \left( \cos \left(\sqrt{\varphi} \right) + \sin \left(\sqrt{\varphi}\right) \right)}, \label{G1}\\
H &=& \frac{ \sqrt{\frac{1}{\varphi}} \left( \sqrt{\varphi}~\sin \left(\sqrt{\varphi}\right) - \cos \left(\sqrt{\varphi}\right) \right)}{2 \sqrt{\varphi} \left( \cos \left(\sqrt{\varphi} \right) + \sin \left(\sqrt{\varphi}\right) \right)}. \label{H1}
\end{eqnarray}
where we have set $\varphi = \frac{b}{b - 2 M}$.
\section{\label{Sec5} Physical analysis}
To study the physical features of the prescribed model, we have considered the values for the pulsar 4U~1608$-$52, with mass $1.57^{+0.30}_{-0.29}~M_\odot$ and radius $9.8 \pm 1.8$~km~\cite{roupas}. The values of the model parameters thus obtained are $R~=~10.3526$~km, $C~=~0.0535902$, $D~=~-0.000185649$, $G~=~0.328696$ and $H~=~0.305526$. We have analyzed the profile of the model analytically as well as graphically using this dataset. The values of the model parameters for some other known compact objects are presented in Table~\ref{tab1}. \\
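The quoted parameter values can be reproduced directly from Eqs.~(\ref{R1})-(\ref{D1}) (an editorial sketch; $1~M_\odot \simeq 1.4766$ km in geometric units is an assumed conversion, and the small residual differences reflect rounding of that conversion):

```python
import math

# Reproducing the quoted model parameters from Eqs. (R1)-(D1) with
# M = 1.57 M_sun and b = 9.8 km for 4U 1608-52.

def model_constants(M_sun, b_km, msun_km=1.4766):   # msun_km: assumed conversion
    M = M_sun * msun_km
    x = (b_km - 2.0 * M) / (2.0 * M)                # (b - 2M)/(2M)
    R = b_km * math.sqrt(x)                         # Eq. (R1)
    C = M / b_km**2 * (3.0 * math.sqrt(x) - 1.0 / math.sqrt(x))   # Eq. (C1)
    D = math.sqrt(2.0 * M**3 / b_km**7) * (1.0 / math.sqrt(x) - math.sqrt(x))  # Eq. (D1)
    return R, C, D

R, C, D = model_constants(1.57, 9.8)
assert math.isclose(R, 10.3526, rel_tol=0.005)
assert math.isclose(C, 0.0535902, rel_tol=0.005)
assert math.isclose(D, -0.000185649, rel_tol=0.05)
```

The same routine applied to the masses and radii of Table~\ref{tab1} recovers the tabulated $R$, $C$, $D$ to similar accuracy.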
\noindent 1. For any acceptable model, regularity of the solutions must be maintained. Analytically, the gravitational potentials should be free from any geometrical or physical singularity. Here, $A^2_0(0)=R^2(C + DR)^2=$ constant (for the anisotropic case), $A^2_0(0)=0.5403(G-H)+0.84147(G+H)=$ constant (for the isotropic case) and $B^2_0(0)=1$, i.e., the potentials are finite at the center ($r=0$) of the stellar configuration. Also, one can easily check that $(A^2_0(r))'_{r=0}=(B^2_0(r))'_{r=0}=0$. These imply that the metric is regular at the center and well behaved throughout the stellar interior. Fig.~\ref{mp} exhibits the profiles of the metric potentials within the stellar structure for both the anisotropic and isotropic cases.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{a0.eps}\\
\includegraphics[width=8cm]{b0.eps}
\end{tabular}
\end{center}
\caption{Variation of the metric potentials $A^2_0(r)$ (above) and $B^2_0(r)$ (below) with the radial coordinate $r$. Here solid lines (red) represent the anisotropic case and the dashed lines (blue) represent the isotropic case. }\label{mp}
\end{figure}
\noindent 2. The central density, central radial pressure and central tangential pressure in this case are obtained as
\begin{equation}
8 \pi \rho(0) = {3 \over R^2}
\end{equation}
\begin{equation}
8 \pi p_r(0) = 8 \pi p_t(0) = \frac{C + 3 D R}{R^2 (C + D R)},
\end{equation}
for anisotropic case and
\begin{equation}
8 \pi p_r(0) = 8 \pi p_t(0) = \frac{(G+H) + 1.5574 (H -G)}{R^2 \left[(G -H) + 1.5574(G +H) \right]},
\end{equation}
for isotropic case.
Since $R$, the curvature parameter, is always positive, the central density is a positive quantity. For the isotropic case the two pressures are everywhere equal, whereas for the anisotropic profile the equality of the central values of the radial and tangential pressures reflects the absence of anisotropy at the center. The radial and tangential pressures at the center are non-negative provided the chosen model parameters are all positive. Also, according to Zeldovich's condition~\cite{Zeldovich1,Zeldovich2}, $p_r/\rho$ must be $\leq 1$ at the center. Therefore
$$\frac{C + 3 D R}{3(C + D R )} \leq 1~{\text{and}}~\frac{(G + H) + 1.5574 (H -G)}{3\left[ (G-H) + 1.5574 (G+H) \right]} \leq 1,$$
for anisotropic and isotropic cases respectively.
The density $\rho$, radial pressure $p_r$ and transverse pressure $p_t$ are positive inside the structure and monotonically decreasing outward. Fig.~\ref{mvariable} supports the positive and monotonically decreasing behavior of the matter variables. Moreover, Fig.~\ref{mvariable} shows that the transverse pressure is always lower than the radial one throughout the structure, implying an attractive anisotropic force. Such a force is known to make the model less stable than a repulsive one.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{density.eps}\\
\includegraphics[width=8cm]{pressure.eps}
\end{tabular}
\end{center}
\caption{Variation of the matter variables, density (above) and pressure (below) with the radial coordinate $r$. }\label{mvariable}
\end{figure}
\noindent 3. The density variation parameter $\lambda$ is defined as the ratio of the density at the surface to that at the center. For the prescribed model, it can be expressed as
\begin{equation}
\lambda = {\rho(b) \over \rho(0)}= \frac{R^2 (b^2 + 3 R^2)}{3 (b^2 + R^2)^2}. \nonumber
\end{equation}
For a fixed surface density of $2 \times 10^{14}~gm/cc$, Parui and Sarma~\cite{PS91} deduced that the ratio of the surface density to the radius of the star is minimized at $\lambda=0.68$. Based on this study, Parui~\cite{Parui94} later generalized that for both charged and uncharged neutron stars with densities of $10^{15}$ and $10^{16}~gm/cc$, $\lambda_{max}$ becomes $0.68$ in each case. For our model, considering the fixed surface density $2 \times 10^{14}~gm/cc$ and the maximum limit of the density variation parameter, the model parameter $R$ takes the value $0.673623$, while the mass and radius become $0.6725~M_\odot$ and $0.346854$~km, respectively. The permissible values of $\lambda$ for several stars of different surface densities are presented in Table~\ref{table2}.
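The $\lambda$ row of Table~\ref{table2} follows directly from the expression above together with the $(b, R)$ pairs of Table~\ref{tab1} (a quick reproduction sketch, not part of the paper):

```python
# Density variation parameter: lambda = R^2 (b^2 + 3 R^2) / (3 (b^2 + R^2)^2),
# evaluated with the (b, R) pairs of Table 1 and compared with Table 2.

def lam(b, R):
    return R**2 * (b**2 + 3.0 * R**2) / (3.0 * (b**2 + R**2)**2)

# Cen X-3: b = 9.17 km, R = 9.5572      -> Table 2 quotes lambda = 0.35428
assert abs(lam(9.17, 9.5572) - 0.35428) < 1e-3
# SAX J1748.9-2021: b = 11.7, R = 12.7697 -> Table 2 quotes lambda = 0.37823
assert abs(lam(11.7, 12.7697) - 0.37823) < 1e-3
```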
\noindent 4. The gradients of energy density, radial pressure and tangential pressures for anisotropic case are given in Eqs.~(\ref{eq7b})-(\ref{eq9b}).
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{gradient.eps}
\end{tabular}
\end{center}
\caption{Variation of gradients of the matter variables with the radial coordinate $r$.}\label{figgrad}
\end{figure}
The gradients of the density, radial pressure and tangential pressure are negative inside the stellar body, as shown graphically in Fig.~\ref{figgrad}.
\noindent 5. The radial and transverse sound velocities ($c=1$) are obtained as
\[v^2_{r} = \frac{-C^2(r^2 - 3 R^2)}{(r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2} + \]
\[ \frac{D^2 (- r^4 + 6 r^2 R^2 + 7 R^4)}{(r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2} + \]
\begin{equation}
\frac{ C D (- 2 r^4 + 7 r^2 R^2 + 9 R^4)}{\sqrt{r^2 + R^2} (r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2}, \label{velocity1}
\end{equation}
\[v^2_{t} = \frac{- 2 R^2 C^2 (r^2 - 2 R^2)}{(r^2 + R^2) (r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2} +\] \[\frac{2 R^2 D^2 (r^4 + 5 r^2 R^2 + 4 R^4)}{(r^2 + R^2) (r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2} +\]
\begin{equation}
\frac{C D R^2 (- r^4 + 10 r^2 R^2 + 11 R^4)}{(r^2 + R^2)^{3 \over 2} (r^2 + 5 R^2) (C + D \sqrt{r^2 + R^2})^2},\label{velocity2}
\end{equation}
\[v^2 = \frac{(r^2 + R^2) \left(G + H \tan \Gamma \right) }{\Gamma (r^2 + 5 R^2)} \]
\[\times
\frac{\left( - H (r^2 + R^2)+ G \Gamma (r^2 + 2 R^2) \right)}{\left[ R(G-H \Gamma) + (H + G \Gamma) \tan \Gamma \right]^2} + \]
\[ \frac{(r^2 + R^2) \left(G + H \tan \Gamma \right) }{\Gamma (r^2 + 5 R^2)} \]
\begin{equation}
\times \frac{\left(G(r^2 + R^2)+ H \Gamma (r^2 + 2 R^2)\right) \tan \Gamma}{\left[ R(G-H \Gamma) + (H + G \Gamma) \tan \Gamma \right]^2},
\label{velocity3}
\end{equation}
where $\Gamma = \sqrt{1 + {r^2 \over R^2}}$. Here $v^2_{r} = {dp_r \over d\rho}$ and $v^2_{t} = {dp_t \over d\rho}$ are described for the anisotropic case. For the isotropic case, $v^2$ denotes ${dp \over d\rho}$, $p$ being the pressure of the prescribed model.
In this model the speeds of sound are smaller than $1$ in the interior of the star, i.e., $ 0 \leq \frac{dp_r}{d\rho} \leq 1 $ and $ 0 \leq \frac{dp_t}{d\rho} \leq 1$ for the anisotropic case and $0 \leq \frac{dp}{d\rho} \leq 1$ for the isotropic case, as shown graphically in Fig.~\ref{figsound}.
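The same bounds can be scanned numerically from the gradient expressions Eqs.~(\ref{eq7b})-(\ref{eq9b}) (a sketch; the Cen X-3 parameters of Table~\ref{tab1} are used, and the center values are compared with Table~\ref{table2}):

```python
import math

# Causality scan: v_r^2 = dp_r/drho and v_t^2 = dp_t/drho from the gradients,
# with the Cen X-3 parameters b = 9.17 km, R = 9.5572, C = 0.056641, D = -0.000163.

R, C, D, b = 9.5572, 0.056641, -0.000163, 9.17

def drho(r):                          # 8*pi*drho/dr, Eq. (eq7b) combined
    return -2.0 * r * (r**2 + 5.0 * R**2) / (r**2 + R**2)**3

def dpr(r):                           # 8*pi*dp_r/dr, Eq. (eq8b) combined
    s2 = r**2 + R**2
    s = math.sqrt(s2)
    num = (C**2 * (r**2 - 3.0*R**2) + C*D*(2.0*r**2 - 9.0*R**2)*s
           + D**2 * (r**2 - 7.0*R**2) * s2)
    return 2.0 * r * num / (s2**3 * (C + D*s)**2)

def dpt(r):                           # 8*pi*dp_t/dr, Eq. (eq9b)
    s2 = r**2 + R**2
    s = math.sqrt(s2)
    num = (2.0*C**2*(r**2 - 2.0*R**2) + C*D*(r**2 - 11.0*R**2)*s
           - 2.0*D**2*s2*(r**2 + 4.0*R**2))
    return 2.0 * r * R**2 * num / (s2**4 * (C + D*s)**2)

for i in range(1, 100):
    r = b * i / 100.0
    vr2, vt2 = dpr(r) / drho(r), dpt(r) / drho(r)
    assert 0.0 < vr2 < 1.0 and 0.0 < vt2 < 1.0

# central values vs. Table 2 (agreement limited by rounding of the constants)
assert math.isclose(dpr(1e-6) / drho(1e-6), 0.58101, rel_tol=0.01)
assert math.isclose(dpt(1e-6) / drho(1e-6), 0.78027, rel_tol=0.01)
```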
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{sound.eps}
\end{tabular}
\end{center}
\caption{Variation of the sound speeds with the radial coordinate $r$.}\label{figsound}
\end{figure}
\noindent 6. Energy conditions: The energy conditions play a crucial role in studying the nature of the matter content in GR. They are not physical constraints but rather mathematically imposed conditions on the matter variables, restricting certain contractions of the stress tensor at every spacetime point. The three main conditions studied here are the null energy condition (NEC), the weak energy condition (WEC) and the strong energy condition (SEC), expressed as follows:
\begin{eqnarray}
NEC_r &:& \rho(r) + p_r(r)\geq 0,~~ NEC_t : \rho(r) + p_t(r)\geq 0,\nonumber
\\
WEC_r &:& \rho(r)\geq 0,~~ \rho(r) + p_r(r)\geq 0,\nonumber
\\
WEC_t &:& \rho(r)\geq 0,~~ \rho(r) + p_t(r)\geq 0,\nonumber
\\
SEC &:& \rho(r) + p_r(r) + 2p_t(r)\geq 0. \nonumber
\end{eqnarray}
However, for an isotropic fluid sphere, the equality of the radial and transverse pressures reduces the energy conditions to $\rho + p \geq 0$ and $\rho + 3 p \geq 0$ throughout the stellar interior. These quantities are shown graphically to remain positive throughout the compact sphere in Fig.~\ref{figenergy}.
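The anisotropic energy conditions can also be scanned directly from Eqs.~(\ref{eq3b})-(\ref{eq5b}) (a numerical sketch with the 4U~1608$-$52 parameters, not part of the paper):

```python
import math

# Energy-condition scan using 8*pi-scaled rho, p_r, p_t of Eqs. (eq3b)-(eq5b)
# with the 4U 1608-52 parameters on a radial grid 0 <= r <= b.

R, C, D, b = 10.3526, 0.0535902, -0.000185649, 9.8

def rho(r):
    return (r**2 + 3.0*R**2) / (r**2 + R**2)**2

def pr(r):
    s = math.sqrt(r**2 + R**2)
    return (C*(R**2 - r**2) - D*(r**2 - 3.0*R**2)*s) / ((r**2 + R**2)**2 * (C + D*s))

def pt(r):
    s = math.sqrt(r**2 + R**2)
    return R**2 * (C*(R**2 - r**2) + D*s*(r**2 + 3.0*R**2)) / ((r**2 + R**2)**3 * (C + D*s))

for i in range(101):
    r = b * i / 100.0
    assert rho(r) > 0.0                          # WEC
    assert rho(r) + pr(r) > 0.0                  # NEC_r
    assert rho(r) + pt(r) > 0.0                  # NEC_t
    assert rho(r) + pr(r) + 2.0*pt(r) > 0.0      # SEC
```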
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{energycs.eps}
\end{tabular}
\end{center}
\caption{Variation of various energy conditions with the radial coordinate $r$.}\label{figenergy}
\end{figure}
\noindent 7. The smooth matching of the interior metric functions with those of the Schwarzschild exterior at the boundary is shown graphically in Fig.~\ref{figmatching}. The formulation of the model constants obtained from this smooth matching has been described in Sec.~\ref{Sec4}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{matching1.eps}\\
\includegraphics[width=8cm]{matching2.eps}
\end{tabular}
\end{center}
\caption{Smooth matching of the metric potentials with Schwarzschild exterior solution at the boundary.}\label{figmatching}
\end{figure}
\noindent 8. EOS parameter: The equation of state parameter is given by
\begin{equation}
\omega_r=\frac{p_r}{\rho};~
\omega_t=\frac{p_t}{\rho}.
\end{equation}
To be non-exotic in nature, the value of $\omega= p/\rho$ should lie between $0$ and $1$. The mathematical expressions for the EOS parameters follow directly from Eqs.~(\ref{eq3b})-(\ref{eq5b}). Our model is shown graphically to satisfy the conditions $0\leq\omega_r\leq1$ and $0\leq\omega_t\leq1$ in Fig.~\ref{figomega}.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{omega.eps}
\end{tabular}
\end{center}
\caption{Variation of EOS parameter inside the star with the radial distance for the anisotropic and isotropic pressures.}\label{figomega}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{EOS.eps}
\end{tabular}
\end{center}
\caption{Variation of the radial pressure with respect to density.}\label{figEOS}
\end{figure}
\begin{center}
\begin{table*}
\caption{Values of different model parameters corresponding to different known compact stars}\label{tab1}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
Compact Star & Mass & Radius & $R$ & $C$ & $D$ & $G$ & $H$ \\
 & ($M_\odot$) & (km) & & & & & \\ \hline
SAX~J$1748.9$-$2021$~\cite{roupas} & $1.81^{+0.25}_{-0.37}$ & $11.7 \pm 1.7$ & $12.7697$ & $0.045989$ & $-0.000197$ & $0.344066$ & $0.302342$ \\ \hline
Cen~X-$3$~\cite{roupas} & $1.49 \pm 0.08$ & $9.17 \pm 0.13$ & $9.5572$ & $0.056641$ & $-0.000163$ & $0.322235$ & $0.306764$ \\ \hline
Vela~X-$1$~\cite{roupas} & $1.77 \pm 0.08$ & $9.56 \pm 0.08$ & $8.7142$ & $0.06778$ & $0.000409$ & $0.255118$ & $0.316001$ \\ \hline
PSR~J$0030+0451$~\cite{miller} & $1.44^{+0.15}_{-0.16}$ & $13.02^{+1.24}_{-1.06}$ & $18.7098$ & $0.045295$ & $-0.000407$ & $0.457842$ & $0.268874$ \\ \hline
\end{tabular}
\end{table*}
\end{center}
\begin{center}
\begin{table*}
\caption{Numerical values of the matter variables. Here $|_0$ and $|_b$ denote the values of the matter variables at the center and surface respectively. }
\label{table2}
\begin{tabular}{|c|c|c|c|c|}\hline
\textbf{Compact Star} & \textbf{SAX~J$1748.9-2021$} & \textbf{Cen~X-$3$} & \textbf{Vela~X-$1$} & \textbf{PSR~J$0030+0451$} \\
\textbf{Matter variables} & & & & \\ \hline
\textbf{$\rho|_0$} & $557.659$ & $995.569$ & $1197.49$ & $259.772$ \\ \hline
\textbf{$\rho|_b$} & $210.926$ & $352.714$ & $345.562$ & $136.949$ \\ \hline
\textbf{$\lambda$} & $0.37823$ & $0.35428$ & $0.28857$ & $0.52718$ \\ \hline
\textbf{$v^2_{r}|_0$} & $0.56381$ & $0.58101$ & $0.64117$ & $0.48499$ \\ \hline
\textbf{$v^2_{r}|_b$} & $0.32863$ & $0.33014$ & $0.3392$ & $0.3286$ \\ \hline
\textbf{$v^2_{t}|_0$} & $0.76306$ & $0.78027$ & $0.84043$ & $0.68426$ \\ \hline
\textbf{$v^2_{t}|_b$} & $0.17529$ & $0.16937$ & $0.16679$ & $0.22002$ \\ \hline
\textbf{$v^2|_0$} & $0.27526$ & $0.28898$ & $0.34177$ & $0.23487$ \\ \hline
\textbf{$v^2|_b$} & $0.30125$ & $0.31678$ & $0.37189$ & $0.23487$ \\ \hline
\textbf{$(\rho + p_r + 2 p_t)|_0$} & $1048.86$ & $1931.26$ & $2559.74$ & $413.862$ \\ \hline
\textbf{$(\rho + p_r + 2 p_t)|_b$} & $202.906$ & $345.893$ & $363.771$ & $123.771$ \\ \hline
\textbf{$(\rho + 3 p)|_0$} & $865.771$ & $1598.53$ & $2156.35$ & $344.506$ \\ \hline
\textbf{$(\rho + 3 p)|_b$} & $210.925$ & $352.714$ & $345.562$ & $136.949$ \\ \hline
\textbf{$z|_b$} & $0.35627$ & $0.38586$ & $0.48443$ & $0.2183$ \\ \hline
\end{tabular}
\end{table*}
\end{center}
\begin{center}
\begin{table*}
\caption{Comparison of the prescribed model with a neutron star model based on Walecka's relativistic mean field theory. Here densities are given in $10^{14}~gm/cc$.}\label{tab3}
\begin{tabular}{|c|c|c|c|c|} \hline
\textbf{Mass ($M_\odot$)} & \textbf{Radius (km)} & \textbf{$\rho(0)$ (Walecka)} & \textbf{$\rho(0)$ (Model)} & \textbf{Error \%} \\ \hline
$2.485$ & $11.271$ & $31.62$ & $23.74$ & $24.92$ \\ \hline
$2.543$ & $11.644$ & $25.12$ & $21.65$ & $13.81$ \\ \hline
$2.579$ & $12.027$ & $20.00$ & $19.29$ & $0.035$ \\ \hline
$2.583$ & $12.433$ & $15.85$ & $16.60$ & $-0.047$ \\ \hline
$2.577$ & $12.521$ & $15.00$ & $15.98$ & $-0.065$ \\ \hline
$2.530$ & $12.798$ & $12.59$ & $13.847$ & $-0.0998$ \\ \hline
$2.387$ & $13.081$ & $10.00$ & $11.046$ & $-0.1046$ \\ \hline
$2.268$ & $13.167$ & $8.913$ & $9.65$ & $-0.08268$ \\ \hline
$2.119$ & $13.188$ & $7.943$ & $8.399$ & $-0.057$ \\ \hline
$1.919$ & $13.126$ & $7.080$ & $7.135$ & $-0.00776$ \\ \hline
$1.670$ & $12.949$ & $6.310$ & $5.936$ & $0.059$ \\ \hline
$1.400$ & $12.651$ & $5.623$ & $4.909$ & $0.126$ \\ \hline
$1.280$ & $12.486$ & $5.340$ & $4.5079$ & $0.1558$ \\ \hline
$1.123$ & $12.229$ & $5.012$ & $4.0276$ & $0.196$ \\ \hline
$0.594$ & $11.033$ & $3.981$ & $2.514$ & $0.368$ \\ \hline
\end{tabular}
\end{table*}
\end{center}
\section{\label{Sec6} Stability analysis}
\subsection{Stability under different forces}
Modeling a compact star requires examining the stability of the model. An important way to study the stability of any model is to check its equilibrium condition using the TOV equation. This stability equation, given by Tolman~\cite{tolman} and Oppenheimer and Volkoff~\cite{OV}, describes the internal structure of a static spherically symmetric compact object in equilibrium in the presence of anisotropy. The generalized TOV equation can be expressed as
\begin{equation}
-{M_G \over r}[\rho(r)+p_r(r)] {A_0(r) \over B_0(r)}-{dp_r(r) \over dr}+{2(p_t-p_r) \over r}=0,\label{force1}
\end{equation}
where $M_G(r)$ is the gravitational mass within a sphere of radius $r$, which can be derived using the Tolman-Whittaker mass formula and is defined by
\begin{equation}
M_G(r) = \frac{r B_0(r) A_0'(r)}{A^2_0(r)}. \label{force2}
\end{equation}
Now, substituting the value of $M_G(r)$, Eq.~(\ref{force1}) can also be written as
\begin{eqnarray}
-\frac{ A_0'(r) [\rho(r)+p_r(r)]}{A_0(r)}-{dp_r(r) \over dr}+{2(p_t-p_r) \over r}=0.\nonumber
\\\label{tov1}
\end{eqnarray}
Eq.~(\ref{tov1}) describes the equilibrium condition for the model under the gravitational force ($F_g$), hydrostatic force ($F_h$) and anisotropic force ($F_a$). The TOV equation can thus be expressed in the simple form
\begin{equation}
F_g(r) + F_h(r) + F_a(r) =0,
\label{force3}
\end{equation}
where
\begin{eqnarray}
\text{Gravitational force}: F_g(r)& =& -\frac{ A_0'(r) [\rho(r)+p_r(r)]}{A_0(r)},\nonumber
\\
\text{Hydrostatic force}: F_h(r)& =& -{dp_r(r) \over dr},\nonumber
\\
\text{Anisotropic force}: F_a(r) &=& {2(p_t-p_r) \over r}.\label{force4}
\end{eqnarray}
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{force.eps}
\end{tabular}
\end{center}
\caption{Variations of different forces against $r$.}\label{figforce}
\end{figure}
The forces in Eq.~(\ref{force4}) are examined graphically in Fig.~\ref{figforce}, which portrays the stability of the model under the various forces. It can clearly be seen that, to keep the model in stable equilibrium, the hydrostatic force must be large enough to counterbalance the combined gravitational and anisotropic forces. In the presence of isotropy, the model is in stable equilibrium when the negative gravitational force balances the positive hydrostatic force.
\subsection{Stability under Causality Condition}
To be physically acceptable, a model must have a sound speed less than the speed of light~\cite{LH,Abreu}. The sound speeds inside the compact star are given by
\begin{equation}
v_r(r)=\sqrt{{dp_r(r) \over d\rho(r)}},~~~v_t(r)=\sqrt{{dp_t(r) \over d\rho(r)}}.
\end{equation}
Since the velocity of light is $c = 1$, the causality condition becomes $0 \leq v_r(r), v_t(r) < 1$. Fig.~\ref{figsound} shows the fulfillment of the causality condition for both the anisotropic and isotropic cases.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{causa.eps}
\end{tabular}
\end{center}
\caption{The absolute difference of the sound speeds is plotted against $r$.}\label{figcausa}
\end{figure}
The stability of a compact object under radial perturbations is investigated using Herrera's cracking concept~\cite{LH}, according to which a potentially stable model requires the absolute difference of the squared sound speeds, $|v^2_t - v^2_r|$, to be $\leq 1$~\cite{andreasson}. Fig.~\ref{figcausa} portrays this stability condition for the prescribed model with anisotropic pressure, showing that it is satisfied throughout the structure.
\subsection{Stability under adiabatic index}
The adiabatic index, the ratio of the specific heats at constant pressure and at constant volume, incorporates all the basic characteristics of the equation of state into the instability criterion and consequently constitutes the bridge between the relativistic structure of a static spherical object and the equation of state of the interior fluid~\cite{CCM}. Essentially, it is a function of the baryon density and hence introduces a radial dependence into the instability criterion~\cite{Tooper}. Since a positive anisotropic factor may slow down the growth of instability, gravitational collapse occurs in the radial direction~\cite{MO}; it is therefore enough to study the adiabatic index in the radial direction only, which is given by
\begin{eqnarray}
\Gamma_r(r) = {\rho(r)+p_r(r) \over p_r(r)}~{dp_r(r) \over d\rho(r)}.
\end{eqnarray}
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{adi.eps}
\end{tabular}
\end{center}
\caption{The adiabatic indices plotted against $r$.}\label{figai}
\end{figure}
We have checked the stability criterion graphically in Fig.~\ref{figai}: the adiabatic index stays above the Newtonian critical value $4/3$, so the model remains stable under both the anisotropic and isotropic pressures.
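The radial adiabatic index can be evaluated directly from the matter variables and their gradients (a sketch with the 4U~1608$-$52 parameters; $4/3$ is the standard Newtonian stability bound):

```python
import math

# Adiabatic-index check: Gamma_r = (rho + p_r)/p_r * dp_r/drho should stay
# above 4/3 inside the star; 4U 1608-52 parameters.

R, C, D, b = 10.3526, 0.0535902, -0.000185649, 9.8

def gamma_r(r):
    s2 = r**2 + R**2
    s = math.sqrt(s2)
    rho = (r**2 + 3.0*R**2) / s2**2
    pr = (C*(R**2 - r**2) - D*(r**2 - 3.0*R**2)*s) / (s2**2 * (C + D*s))
    dpr = 2.0*r*(C**2*(r**2 - 3.0*R**2) + C*D*(2.0*r**2 - 9.0*R**2)*s
                 + D**2*(r**2 - 7.0*R**2)*s2) / (s2**3 * (C + D*s)**2)
    drho = -2.0*r*(r**2 + 5.0*R**2) / s2**3
    return (rho + pr) / pr * (dpr / drho)

assert min(gamma_r(b * i / 100.0) for i in range(1, 100)) > 4.0 / 3.0
```

$\Gamma_r$ grows without bound toward the surface, where $p_r \to 0$, and takes its minimum near the center.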
\subsection{Stability under the Harrison-Zeldovich-Novikov criterion}
One of the most important steps in testing the stability of an anisotropic compact star model is to check the stability of its mass under the Harrison~\cite{Harrison} and Zeldovich-Novikov~\cite{ZN} criterion, i.e., to test whether the mass increases with increasing central density. Mathematically, ${dM \over d\rho(0)}$ must be $>0$ for a stable structure; otherwise the model is declared unstable. For our model the mass can be written as a function of the central density as follows:
\begin{eqnarray}
M(\rho(0)) &=& \frac{b^3 \rho(0)}{2 (b^2 \rho(0) + 3)}, \label{cenmass}\\
\frac{dM}{d\rho(0)} &=& \frac{3 b^3}{2 (b^2 \rho(0) +3)^2}. \label{dcenmass}
\end{eqnarray}
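Eqs.~(\ref{cenmass}) and (\ref{dcenmass}) are simple enough to verify numerically; a sketch in geometrized units (the value of $b$ is a placeholder, not a fitted radius):

```python
import numpy as np

def M(rho0, b):
    """Mass as a function of central density, Eq. (cenmass)."""
    return b**3 * rho0 / (2 * (b**2 * rho0 + 3))

def dM_drho0(rho0, b):
    """Analytic derivative dM/drho(0), Eq. (dcenmass)."""
    return 3 * b**3 / (2 * (b**2 * rho0 + 3)**2)

b = 1.0                                   # placeholder boundary radius
rho0 = np.linspace(0.1, 10.0, 200)
assert np.all(dM_drho0(rho0, b) > 0)      # Harrison-Zeldovich-Novikov: stable
# cross-check the analytic derivative against a numerical one (interior points)
num = np.gradient(M(rho0, b), rho0)
print(np.allclose(num[1:-1], dM_drho0(rho0, b)[1:-1], rtol=1e-2))   # True
```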
The profiles of the mass and of the mass gradient as functions of the central density are depicted in Figs.~\ref{fighzn} and \ref{figdhzn}, respectively. It can clearly be seen that ${dM \over d\rho(0)}$ is positive throughout the stellar configuration, making the model stable.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{zhn.eps}
\end{tabular}
\end{center}
\caption{Variation of the mass with respect to the central density.}\label{fighzn}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{dmassdrho.eps}
\end{tabular}
\end{center}
\caption{Variation of the gradient of the mass against the central density.}\label{figdhzn}
\end{figure}
\section{\label{Sec7} Mass-Radius relationship and redshift}
\subsection{Mass function and mass-radius relationship}
The mass function for the model is given in Eq.~(\ref{eq6b}). Since $\lim_{r \to 0} m(r) = 0$, the mass function is regular at the center of the structure. Fig.~\ref{fimassfn} also depicts the positive and monotonically increasing nature of the mass function with respect to the radial coordinate.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{massfn.eps}
\end{tabular}
\end{center}
\caption{The profile of the mass function plotted against $r$.}\label{fimassfn}
\end{figure}
The mass-radius relationship for the model is plotted in Fig.~\ref{figmr}, and the maximum mass obtained is $1.731~M_\odot$ corresponding to the radius $10.56~km$ for the fixed surface density $5.5 \times 10^{14}~gm/cc$. As known from the works of Sharma et al.~\cite{SDT} and Sunzu et al.~\cite{SMR}, the mass-radius relationship is not affected by the pressure anisotropy; hence the obtained mass-radius relation holds for both the anisotropic and isotropic cases. Moreover, for a spherically symmetric stable structure the Buchdahl limit~\cite{Buchdahl} needs to be satisfied, i.e., ${2M \over b}$ must be less than ${8 \over 9}$.
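The quoted maximum-mass configuration can be checked against the Buchdahl bound directly; a quick sketch in physical units, using the standard constant $GM_\odot/c^2 \approx 1.4766$ km (not a value from the paper):

```python
M_SUN_KM = 1.4766                  # GM_sun/c^2 in km

def buchdahl_ok(mass_msun, radius_km):
    """Return (2M/b, satisfied?) for the Buchdahl limit 2M/b < 8/9."""
    two_m_over_b = 2 * mass_msun * M_SUN_KM / radius_km
    return two_m_over_b, two_m_over_b < 8 / 9

ratio, ok = buchdahl_ok(1.731, 10.56)
print(round(ratio, 3), ok)         # 0.484 True  (well below 8/9 ~ 0.889)
```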
\subsection{Equation of State}
Compact objects act as natural laboratories for studying cold high-density matter, whose behavior is governed by the relation between pressure and density, known as the Equation of State (EOS). Through the EOS we can study the mass and radius as well as other macroscopic properties of a compact star, such as the moment of inertia and the tidal deformability. The variation of the pressure with the density is plotted in Fig.~\ref{figEOS}. It can be seen that the anisotropic pressure generates a stiffer EOS than the isotropic pressure. The same conclusion can be drawn from Fig.~\ref{figsound}, since the stiffness of the EOS can be observed from the variation of the sound speed in the stellar medium.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{mr.eps}
\end{tabular}
\end{center}
\caption{The mass-radius relationship for the prescribed compact model. The solid circle represents the maximum mass attained by the model.}\label{figmr}
\end{figure}
A stiffer EOS leads to a larger tidal deformability with the anisotropic pressure, and the presence of anisotropy can reduce the value of the dimensionless tidal deformability by a significant amount for a given mass~\cite{BB}. However, Biswas and Bose~\cite{BB} have exclusively studied the case of positive anisotropy.
\subsection{Mass-central density relationship}
The stability of any model depends on the variation of the mass with the central density, known as the Harrison-Zeldovich-Novikov criterion (discussed in the previous section). This criterion states that a stellar model is stable only if its mass increases with increasing central density. In Fig.~\ref{figmrho} the increasing nature of the mass with respect to the central density is quite evident. It is also to be noted that the central density does not vanish in the absence of mass. For our prescribed model, the maximum mass corresponds to the central density $6.537 \times 10^{15}~gm/cc$.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{masscenden.eps}
\end{tabular}
\end{center}
\caption{The mass-central density relationship for the prescribed compact model. Here solid circle denotes the maximum mass for the model.}\label{figmrho}
\end{figure}
\subsection{Radius-central density relationship}
To examine the viability of any model, it is important to investigate the central density against the radius along with the mass of the model.
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{rcenden.eps}
\end{tabular}
\end{center}
\caption{The radius-central density relationship for the prescribed compact model. The solid circle represents the radius for which the maximum mass is attained.}\label{figrcenden}
\end{figure}
The radius-central density relationship is plotted in Fig.~\ref{figrcenden}. It can be observed that the central density increases with the increase of the radius of the model. Here the maximum central density, corresponding to the radius $10.56~km$, is obtained as $4.63 \times 10^{15}~gm/cc$.
\subsection{Surface redshift}
The compactness of the model is defined by the dimensionless parameter $u(r)= {m(r) \over r}$. According to the Buchdahl limit~\cite{Buchdahl}, the compactness should be less than $0.444$ for a stable structure. For our model the compactness is $0.2417$, indicating fulfillment of the Buchdahl condition.
The surface redshift of a spherically symmetric compact object is defined by
\begin{equation}
z = \frac{1}{\sqrt{1 - 2 u(r)}} - 1, \label{eqz}
\end{equation}
where $u(r) = m(r) /r$ is the compactness parameter for the model.
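For the compactness quoted above, Eq.~(\ref{eqz}) can be evaluated in a couple of lines:

```python
from math import sqrt

def surface_redshift(u):
    """z = 1/sqrt(1 - 2u) - 1 for compactness u = m(r)/r < 4/9."""
    return 1.0 / sqrt(1.0 - 2.0 * u) - 1.0

u = 0.2417                              # compactness quoted for the model
print(round(surface_redshift(u), 4))    # 0.3913
```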
\begin{figure}[!htbp]
\begin{center}
\begin{tabular}{lr}
\includegraphics[width=8cm]{redshift.eps}
\end{tabular}
\end{center}
\caption{The surface redshift plotted against $r$.} \label{figred}
\end{figure}
The surface redshift is plotted against the radial coordinate in Fig.~\ref{figred}.
\section{\label{Sec8} Discussions and conclusion}
We have analyzed the field equations adopting the Finch-Skea ansatz~\cite{Finch} and have obtained a solution that describes a compact model with negative anisotropic pressure. Some salient features of the solution are described as follows:
\begin{itemize}
\item[(1)] The profile of the anisotropic pressure of the model has been studied in Fig.~\ref{figani}. Though the anisotropic parameter satisfies regularity at the center ($r~=~0$), it is negative throughout the stellar structure, making the anisotropic force acting on the model attractive, which renders the model less supported against gravitational collapse.
\item[(2)] All the matter variables of the compact model satisfy the physical requirements for a stable model. The energy density and the pressure profiles in the presence of anisotropy as well as isotropy are plotted in Fig.~\ref{mvariable}. In the presence of anisotropy, the transverse pressure is observed to be less than the radial one throughout the structure.
\item[(3)] Smooth matching of the interior solutions with the Schwarzschild exterior solutions at the boundary in Fig.~\ref{figmatching} helps to generate the general form for the constants which further provides an outline of the compact stellar model.
\item[(4)] Variation of gradients of matter variables are shown to be negative throughout the star with zero gradients at the center.
\item[(5)] The causality condition is satisfied by the variation of the sound speed as shown in Fig.~\ref{figsound}. The absolute difference of the sound speeds is also plotted in Fig.~\ref{figcausa}, implying that the model satisfies the stability condition of Herrera's cracking concept.
\item[(6)] Fulfillment of various energy conditions by the model in presence of anisotropy and isotropy are shown in Fig.~\ref{figenergy}.
\item[(7)] The stability of the model under the effect of the TOV equation is shown in Fig.~\ref{figforce}. The model remains in static equilibrium as the hydrostatic force neutralizes the combined effect of the anisotropic and gravitational forces.
\item[(8)] The monotonically increasing nature of the mass function and the surface redshifts are plotted in Figs.~\ref{fimassfn} and \ref{figred} respectively, which support the physical viability of the prescribed model.
\item[(9)] The maximum mass obtained for the prescribed model is $1.731~M_\odot$ corresponding to the radius $10.56$ $km$, which is stable as per the Buchdahl limit. Also, the radius-central density relationship depicted in Fig.~\ref{figrcenden} illustrates that the central density increases with the increase of the radius of the model. The central density corresponding to the radius of maximum mass in Fig.~\ref{figmr} is obtained as $4.63 \times 10^{15}$ $gm/cc$.
\end{itemize}
We have also presented tables for a comparative study considering some well-known stars. Table \ref{tab1} depicts the values of the model parameters, while Table \ref{table2} exhibits the values of the matter variables for both the anisotropic and isotropic scenarios. The obtained solution reduces to the solution obtained by Finch-Skea~\cite{Finch} on assuming zero anisotropy.
However, as a final comment, we would like to point out that if some physical constraint, such as an equation of state, conformal geometry, or embedding, were invoked, then the study would take on a more meaningful flavor. These aspects may be considered seriously in a future project.
\subsection*{Appendix}
The solutions of Eq.~(\ref{eqalpha2}) are obtained using a technical computing system as
\begin{equation}
A_0(r) = 2^{-1 \over 4} (-R)^{2n+3 \over 2} \mathit{s}^{3 \over 2} \left[ M \mathcal{I}_n(\mathit{s}) - N (-1)^n \mathcal{K}_n(\mathit{s}) \right], \nonumber
\end{equation}
where $n = {\sqrt{17} \over 2}$, $\mathit{s} = \sqrt{\frac{-2(r^2 + R^2)}{R^2}}$, and $\mathcal{I}_n$, $\mathcal{K}_n$ are the modified Bessel functions of the first and second kind, respectively.
If we instead attempt to solve the equations by transformation, we obtain the following results.
Using the Durgapal-Banerjii transformation~\cite{Durgapal1983}, i.e., setting $x = {r^2 \over R^2}$, $Z(x) = {1 \over B^2_0(r)}$ and $A^2 y^2 (x)= A^2_0(r)$, the field equations along with the anisotropic factor transform to
\begin{eqnarray}
8 \pi \rho &=& \frac{1 - Z(x)}{x R^2}- {2 Z' \over R^2}, \nonumber \\
8 \pi p_r &=& \frac{Z-1}{x R^2} + \frac{4 Z y'}{R^2 y}, \nonumber \\
\Delta (x) &=& \frac{x(1 - x)}{R^2(1 +x )^3}, \nonumber \\
8 \pi p_t &=& 8 \pi p_r + \Delta (x), \nonumber
\end{eqnarray}
where (') denotes differentiation of the respective function with respect to $x$. Now combining all the above equations and using the transformation $Z(x) = \frac{1}{1 +x}$, we get a second order ODE as
\begin{equation}
4(1 + x)^2 y'' -2 (1 + x)y' + (1 +2 x)y =0. \label{eqx}
\end{equation}
Now we try to solve Eq.~(\ref{eqx}) using some known methods.\\
If we again use transformations as $1+x = V$ and $y = Y V^{3 \over 4}$ on Eq.~(\ref{eqx}), we get the new transformed ODE as
\begin{equation}
V^2 {d^2Y \over dV^2} + V {dY \over dV} + \left({V \over 2} - 1\right)Y=0, \nonumber
\end{equation}
which cannot be reduced to the Bessel form of differential equations. As for series solutions, we cannot use the Frobenius method to solve the above ODE since $\left( {1 \over V}\right)$ is not analytic at $V=0$.
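Although no closed Bessel-type form was found, the ODE of Eq.~(\ref{eqx}) is straightforward to integrate numerically; a sketch with SciPy (the initial conditions are placeholders, not boundary conditions derived from the model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):
    """First-order system for 4(1+x)^2 y'' - 2(1+x) y' + (1+2x) y = 0."""
    y, yp = u
    ypp = (2.0 * (1.0 + x) * yp - (1.0 + 2.0 * x) * y) / (4.0 * (1.0 + x) ** 2)
    return [yp, ypp]

# y(0) = 1, y'(0) = 0 chosen purely for illustration
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
xs = np.linspace(0.0, 1.0, 5)
print(sol.sol(xs)[0])        # y(x) evaluated on [0, 1]
```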
\section*{Acknowledgement}
SD, KC and SR are thankful to the authority of Inter-University
Centre for Astronomy and Astrophysics, Pune, India for providing
them Visiting Associateship under which a part of this work was
carried out.
\section{Introduction} \label{sec:Introduction}
We are facing, first hand, an abrupt and sudden change of the world as we knew it because of the novel Coronavirus outbreak. Known as COVID-19, it first appeared in late December 2019 in Wuhan, China, and a few months later, on the 11$^{\text{th}}$ of March 2020, was characterized as a pandemic by the World Health Organization (WHO). Given its highly contagious nature, relatively unknown behaviour, systemic complications, and adverse effects ranging from human fatalities to economic recessions across the world, it is important to develop efficient processing/learning models to help overcome this pandemic and be prepared for potential future ones. The Reverse-Transcription Polymerase Chain Reaction (RT-PCR) is the standard testing approach for early diagnosis of suspected cases of COVID-19. The unavailability of enough RT-PCR testing kits, particularly in areas severely affected by the pandemic, together with the test's relatively high and variable false-negative rate (i.e., highest during the first five days (up to 67\%), and lowest on day 8 (21\%)~\cite{Kucirka:2020}), resulted in a focus on medical image Radiomics~\cite{Afshar:2019} as a complementary source for diagnosis/prognosis.
Recent studies~\cite{Shireview, Dongreview, Jamshidireview}
show that chest Computed Tomography (CT) scans and Chest Radiographs (CXR) reveal informative features of COVID-19 that can assist in the monitoring, severity assessment, and treatment of COVID-19~\cite{Wang:2020-1}. According to the guideline provided by the WHO, the use of chest imaging as a complementary source of data is recommended in different scenarios and stages of COVID-19 to assist radiologists and physicians in detecting and evaluating the disease more accurately. CT and CXR can decrease the false-negative rate both at admission and at discharge. It is worth mentioning that chest CT has a key role in the diagnosis of COVID-19 in the very early stages of the infection and also in setting up a prognosis. Comparisons between CT and RT-PCR at early stages of COVID-19 infection show that CT abnormalities may appear before PCR positivity; in other words, CT has greater sensitivity during the early stages of the infection. In addition, false negatives in RT-PCR results occur both at admission and at discharge. Finally, CT plays its role over the course of the disease in evaluating changes in severity and in treatment adjustments. The key power of chest imaging is in its prognostic value to identify the severity of the disease and the likelihood of needing hospitalization and/or admission to an Intensive Care Unit (ICU). However, interpretation of chest images for confirming suspected cases of COVID-19, and severity assessment of the disease based on imaging findings, are time-consuming and may be challenging.
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{Trend3}
\caption{\textcolor{black}{The trend in COVID-19-related research.}}\label{fig:trend}
\end{figure}
Interpretation of CT and CXR images should be performed by expert thoracic radiologists, who may not be easily accessible, especially during an outbreak when the number of suspected cases of COVID-19 is growing exponentially. To address these issues, there has been a surge of interest in developing Signal Processing (SP) and Deep Learning (DL) techniques to extract informative features from chest images and help in fast detection and risk assessment of the COVID-19 infection. We would like to mention that although research on COVID-19-related topics has started only very recently, the extensive number of research works disseminated during this short period of time makes the topic mature. More specifically, extensive research on the application of Signal/Image Processing and Artificial Intelligence (AI) to COVID-19 has led to almost $1,200$ publications by the end of November 2020. Fig.~\ref{fig:trend} presents the COVID-19 research trend in the year 2020, obtained from PubMed with the keyword ``COVID-19'' and either of the following keywords: ``Signal Processing'', ``Machine Learning'', ``AI'', or ``Deep Learning''. These publications cover several aspects and applications of SP/DL for COVID-19, including diagnosis, classification, detection, segmentation, severity assessment, and survival analysis.
In summary, in this feature article, we aim to present an overview of the current state, challenges, and opportunities of developing SP/DL-empowered models for diagnosis and prognosis of the COVID-19 infection based on medical images. The article will mainly focus on the problems, applications and on how SP/DL models can be used to address the identified problems/applications. In brief, we will cover the following main topics:
\begin{enumerate}
\item[(i)] We focus on SP techniques specific to COVID-19 images and target specialized SP aspects of COVID-19 diagnoses, including \textit{Analytic Epidemiology} and \textit{Hypersignal Processing (HP)} theory as advanced processing solutions of COVID-19.
\item[(ii)] Introduction of potential applications of SP/DL-based models for the diagnosis and predictive prognosis of COVID-19 infections using medical images as the main source of data.
\item[(iii)] Investigation of the required medical background related to COVID-19 for development of advanced SP/DL models. An overview of different characteristics of COVID-19 that can be observed on chest images is presented. Furthermore, we describe how these imaging findings are related to the severity of the disease, and how they can be utilized in the development of SP/DL models.
\item[(iv)] Presentation of DL Radiomic directions specific to the analysis of COVID-19 infection. We focus on DL-based solutions from the following four different aspects: Segmentation of COVID-19 Lesions; Predictive models for outcome prediction in COVID-19 patients; DL/SP models for severity assessment, and; Diagnosis and classification of COVID-19 cases.
\item[(v)] Introducing challenges, open problems, and opportunities of developing intelligent and autonomous models for diagnosis/prognosis of COVID-19.
\end{enumerate}
\vspace{.1in}
\noindent
\textit{\textbf{Distinction with Existing Articles}}: As a final note, we would like to briefly elaborate on the differences between this article and a recent feature article~\cite{Afshar:2019} on Radiomics and other surveys/tutorials~\cite{Shireview, Dongreview, Jamshidireview} on COVID-19. In particular, Reference~\cite{Afshar:2019} is focused on hand-crafted and deep learning-based techniques for extracting features from cancer-related images. Cancer diagnosis is essentially a completely different task than COVID-19 analysis, requiring its own specific techniques and solutions. For instance, while nodules have solid shapes and defined locations, COVID-19 infection areas can be multifocal, ill-defined, and diverse in pattern (or morphology). Therefore, the traditional Radiomics filters are not applicable to the latter. Furthermore, images from patients with pulmonary malignancies contain far fewer motion artifacts compared to COVID-19 ones, where patients suffer from dyspnea, calling for more advanced artifact reduction techniques. Deep learning models developed for Cancer Radiomics are also not transferable to COVID-19 without modifications. This is mainly due to the fact that in a patient with COVID-19 a large number of slices may be affected, for which 3D analysis and more powerful resources are required. With regard to differences with recent survey/tutorial articles on COVID-19 research, Reference~\cite{Shireview} is limited to deep learning techniques. In this article, however, we also focus on SP modeling, applications, required medical background, and challenges/open problems. Reference~\cite{Dongreview} is more focused on the medical background and imaging modalities, and AI models are not discussed in a separate section, unlike this article, where different models and applications are separated and discussed. Reference~\cite{Jamshidireview} reviews a subset of deep learning models, without referring to SP methods and challenges.
The remainder of the manuscript is organized as follows: Section~\ref{sec:HC_radiomics} is devoted to analytic epidemiology and hypersignal processing for COVID-19. Applications and imaging modalities for diagnosis/prognosis of COVID-19 images are then presented in Section~\ref{sec:Applicatios}. SP/DL-based Radiomic models specific to the analysis of COVID-19 infection are then described in Section~\ref{sec:DR}, covering the following four application domains: segmentation of COVID-19 lesions; predictive models for outcome prediction; severity assessment, and; diagnosis/classification models. Finally, challenges, open problems, and opportunities are discussed in Section~\ref{sec:COO}.
\section{Analytic Epidemiology and HyperSignal Processing for COVID-19} \label{sec:HC_radiomics}
In this section, we focus on SP techniques specific to COVID-19 images (in Section~\ref{sec:DR}, we focus on DL models). Here, we target specialized SP aspects of COVID-19 diagnoses, including analytic epidemiology and Hypersignal Processing (HP) theory as advanced processing solutions for COVID-19. The worldwide outbreaks of COVID-19 and other contemporary contagious diseases have triggered a wide scope of transdisciplinary studies in epidemiology for their systematic treatment, control, prediction, prevention, management, and decision optimization~\cite{WHOGeneva,GoveCanada,CasesData}. The transdisciplinary investigations into the COVID-19 pandemic have led to the emergence of analytic epidemiology underpinned not only by epidemiology, biology, and medical sciences, but also by computer, big data, information, signal and sensor, AI, and system sciences as well as mathematics, sociology, and economics.
\subsection{Analytic Epidemiology Models of COVID-19}
\noindent
\textit{Analytic epidemiology}~\cite{AndersonPopulation, Wang:2020-1} is a transdisciplinary study of the cognitive, theoretical, and mathematical models of COVID-19 and other contagious diseases. It is recognized that analytic epidemiology may be better studied by signal explorations at the macro level rather than merely biological analyses at the micro level, so as not to lose the forest for the trees~\cite{Wang:2020-1}.
The decision model of COVID-19 diagnoses may be formally described by a Cartesian product of the sets of symptoms~\cite{Wang:2020-1} and test results. Let the set of symptoms of COVID-19 be
\begin{equation}\medmath{
S \!=\! \Big\{\!S_1(Fever), S_2(Cough), S_3(BreathDifficulty), S_4(Chills), \nonumber}\vspace{-.1in}
\end{equation}
\begin{equation}\medmath{
~~S_5(ChillShaking), S_6(MusclePain), S_7(HeadAche),\nonumber}
\vspace{-.1in}
\end{equation}
\begin{equation}\medmath{
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! S_8(SoreThroat), S_9(LossOfTaste/Smell)\Big\},\nonumber}
\end{equation}
and the set of lab tests be
\begin{equation}\medmath{
L = \Big\{L_1(NucleicAcid), L_2(SoreSample),
L_3(LungImage) \Big\}.\nonumber}
\end{equation}
The diagnosis outcomes $E$ of COVID-19 infections are detected by the Cartesian product between the sets of logical values of \textit{detection symptoms} $E_{S}$ and \textit{lab confirmations} $E_{L}$ as follows~\cite{Wang:2020-1}:
\begin{equation} \label{eq:W1} \medmath{
\begin{array}{l} {E\buildrel\wedge\over= E_{S} \times E_{L} =\mathop{\mbox{\large $\bm{R}$}{\rm \; }}\limits_{i=1}^{9} S_{i} {\rm |L}\times \mathop{\mbox{\large $\bm{R}$}{\rm \; }}\limits_{j=1}^{3} L_{j} {\rm |L}} \\ {{\rm \; \; \; }
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\left\{\begin{array}{l} {\!\!\![(\mathop{\wedge {\rm \; }}\limits_{i=1}^{9} S_{i} {\rm |L)}=T{\rm |L]}\wedge [(\mathop{\wedge {\rm \; }}\limits_{j=1}^{3} L_{j} {\rm |L)}=T{\rm |L]\; }{\rm //\; Positive}} \\
{\!\!\![(\mathop{\vee {\rm \; }}\limits_{i=1}^{9} S_{i} {\rm |L)}=T{\rm |L]}\wedge [(\mathop{\vee {\rm \; }}\limits_{j=1}^{3} L_{j} {\rm |L)}=T{\rm |L]\; //\; Susceptibly\; positive}} \\
{\!\!\![(\mathop{\vee {\rm \; }}\limits_{i=1}^{9} S_{i} {\rm |L)}=T{\rm |L]}\wedge [(\mathop{\wedge {\rm \; }}\limits_{j=1}^{3} L_{j} {\rm |L)}=F{\rm |L]\; }{\rm //\; Susceptibly\; negative}} \\
{\!\!\![(\mathop{\wedge {\rm \; }}\limits_{i=1}^{9} S_{i} {\rm |L)}=F{\rm |L]}\wedge [(\mathop{\wedge {\rm \; }}\limits_{j=1}^{3} L_{j} {\rm |L)}=F{\rm |L]\; }{\rm //\; Negative}} \end{array}\right. } \end{array} }
\end{equation}
where $T|L$ and $F|L$ denote the Boolean logical values True and False, respectively. The diagnosis results are classified into the categories of positive, susceptibly positive, susceptibly negative, and negative.
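The four-way decision rule of Eq.~\eqref{eq:W1} reduces to Boolean aggregation over the symptom and lab-test indicators; a literal sketch (the overlapping cases are resolved in top-to-bottom order, which is one reading of the formula):

```python
def diagnose(symptoms, labs):
    """Classify per Eq. (1): AND/OR aggregation of 9 symptom and
    3 lab-test Boolean indicators, checked in order of specificity."""
    s_all, s_any = all(symptoms), any(symptoms)
    l_all, l_any = all(labs), any(labs)
    if s_all and l_all:
        return "positive"
    if s_any and l_any:
        return "susceptibly positive"
    if s_any and not l_all:
        return "susceptibly negative"
    return "negative"

print(diagnose([True] * 9, [True] * 3))     # positive
print(diagnose([False] * 9, [False] * 3))   # negative
```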
The big-R notation~\cite{Wang2009, WangZatarain} is a generic calculus that denotes an iterative or recursive series of recurrent structures or embedded functions. Eq.~\eqref{eq:W1} reveals that many important symptoms and diagnoses of COVID-19 are in the domain of advanced signal processing, which serves as a foundation for COVID-19 diagnosis. In analytic epidemiology, the reproductive ratio $R_0$ of a contagious disease is modeled through an exponential transmission series $N_{inf}(t)$: the number of infectives on the $(t_0 + k)$th day is estimated by the product of the initial infectives $N_{inf}(t_0)$ and the average reproductive rate raised to the $k$th power:
\begin{equation} \label{eq:W2} \medmath{
N_{inf} {\rm (}t_{0} +k{\rm )}\buildrel\wedge\over= \bar{R}_{0} {}^{k} N_{inf} (t_{0} ),{\rm \; }\bar{R}_{0} >1.0,k\ge 0,N_{inf} (t_{0} )\ne 0 }.
\end{equation}
Therefore, the average reproductive rate of a pandemic transmission reduces to the $k$th root of the ratio between the number of infectives $N_{in\!f}(t_{0}+k)$ cumulatively infected by $t_{0} + k$ and the initial infectives $N_{in\!f}(t_0)$:
\begin{equation} \label{eq:W3}
\; \bar{R}_{0} \buildrel\wedge\over= {}^{k} \sqrt{\frac{N_{in\!f} {\rm (}t_{0} +k{\rm )}}{N_{in\!f} (t_{0} )} } ,{\rm \; }k\ge 0,N_{in\!f} (t_{0} )\ne 0
\end{equation}
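Eq.~\eqref{eq:W3} can be evaluated on a cumulative-infection series in a couple of lines (the series below is a toy example, not real data):

```python
def mean_reproductive_rate(n_t0, n_t0k, k):
    """R0_bar: kth root of the ratio of cumulative infectives at t0+k
    to the initial infectives at t0."""
    return (n_t0k / n_t0) ** (1.0 / k)

# toy series: 100 infectives growing to 1600 over 4 days
print(mean_reproductive_rate(100, 1600, 4))   # 2.0
```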
For instance, WHO has empirically estimated $\bar{R}_{0}$ of COVID-19 in the range of $2.24$ to $4.00$~\cite{WHOGeneva}, which is considerably higher than the values obtained by rigorous analyses of long series of real-world pandemic signals according to Eqs.~\eqref{eq:W2} and~\eqref{eq:W3}.
The \textit{reproductive rate} $\bar{R}_{0} $ in analytic epidemiology has been adopted as the key indicator \textit{$\theta$} for the congruous severity classified in two categories by the threshold $\bar{R}_{0} $= 1.0, i.e.:
\begin{equation} \label{eq:W4}
\theta =\left\{\begin{array}{l} {congruous,{\rm \; \; \; \; \; }R_{0} (t)>1.0} \\ {incongruous,{\rm \; \; 1.0}\ge R_{0} (t)\ge 0} \end{array}\right.
\end{equation}
However, when investigating the nature of pandemic dynamics to rigorously predict pandemic trends, we found that, in order to model more general and complex pandemic dynamics, the reproductive rate must be treated as a series of variables $R_{0}(t)$ over time. This finding has led to the formal model of the series of dynamic \textit{reproductive rates} of COVID-19, which is recursively determined by a long chain of causal probabilities over time:
\begin{equation} \label{eq:W5} \medmath{
\mathop{\mbox{\large $\bm{R}$}}\limits_{t=2}^{n} {\rm \; }R_{0} (t-1)\buildrel\wedge\over= \mathop{\mbox{\large $\bm{R}$}}\limits_{t=2}^{n} {\rm \; }\frac{N_{in\!f} {\rm (}t-1{\rm )}}{N_{in\!f} (t-2)} ,{\rm \; }R_{0} (0)=1,N_{in\!f} (t)\ne 0 }.
\end{equation}
Simulations performed on real-world signals have provided highly accurate predictions based on the mathematical model of the analytic epidemiology theory and its dynamic predictability for the pandemic signal series~\cite{Wang:2020-1}.
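The dynamic series of Eq.~\eqref{eq:W5}, together with the severity threshold of Eq.~\eqref{eq:W4}, amounts to consecutive ratios of the cumulative counts; a sketch on a toy series (not real data):

```python
def dynamic_r0(n):
    """R0(t-1) = N(t-1)/N(t-2) for t = 2..n, where n is the
    cumulative-infectives series."""
    return [n[t - 1] / n[t - 2] for t in range(2, len(n) + 1)]

n = [100, 150, 180, 180]               # toy cumulative infectives
r0 = dynamic_r0(n)
print(r0)                               # [1.5, 1.2, 1.0]
# severity classification per the threshold R0 = 1.0
severity = ["congruous" if r > 1.0 else "incongruous" for r in r0]
print(severity)
```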
\subsection{Hypersignal Processing (HP) for COVID-19 Image Diagnoses}
Hypersignals are a general structure of abstract or real-world signals beyond 1D or its parallel compositions~\cite{GoveCanada,CasesData}. Hypersignal Processing (HP) theory provides a unified mathematical model for advancing 1D signal (voice and time series) processing to 2D (images) and nD (generic hypersignals) processing~\cite{Wang2009, WangZatarain, WangKeynoteNeuro, WangKeynoteGeneration, WangKeynoteCognitive}. The hypersignals may be embodied by sequences of images (videos), language expressions and semantics, knowledge structures, neural networks, and AI systems. Therefore, HP demands novel theories, mathematical means, and algorithms~\cite{WangKeynoteNeuro, WangKeynoteGeneration, WangKeynoteCognitive}.
For instance, the HP theory models hypersignals as follows:
\begin{equation} \label{eq:W6}
\left\{\begin{array}{l} {Text{\rm |TX:\; TX}\buildrel\wedge\over= S} \\ {Voice{\rm |V:\; V}\buildrel\wedge\over= B\times T} \\ {Image{\rm |I:\; I}\buildrel\wedge\over= B\times B} \\ {Video{\rm |M:\; M}\buildrel\wedge\over= B\times B\times T} \\ {HyperSig{\rm |H:\; H}\buildrel\wedge\over= \mathop{\mbox{\large $\bm{R}$}}\limits_{i=0}^{n_{i} } {\rm \; }\mathop{\mbox{\large $\bm{R}$}}\limits_{j=0}^{n_{j} } {\rm \; \ldots\; }\mathop{\mbox{\large $\bm{R}$}}\limits_{k=0}^{n_{k} } {\rm \; }\Theta (i,j, \ldots, k)} \end{array}\right.
\end{equation}
where \textit{B} stands for a byte, \textit{T} for time, \textit{S} for a string, and $\Theta$ for an instance of an abstract hypersignal.
More generically, the hyper Structure Model (SM) is introduced to model complex hyper signals and entities. For example, the SM model of the color scheme of an image signal is formally modeled as:
\begin{eqnarray} \label{eq:W7}
Image |\text{SM} \buildrel\wedge\over=
\left\{\begin{array}{l}
Image|\text{FG}=\mathop{\mbox{\large $\bm{R}$}}\limits_{i=1}^{|X|} \mathop{\mbox{\large $\bm{R}$}}\limits_{j=1}^{|Y|} Pixel (i,j)|\text{PG} \\
Image |\text{FB}=\mathop{\mbox{\large $\bm{R}$}}\limits_{i=1}^{|X|} \mathop{\mbox{\large $\bm{R}$}}\limits_{j=1}^{|Y|} Pixel (i,j)|\text{PB} \\
Image |\text{FR}^* = \mathop{\mbox{\large $\bm{R}$}}\limits_{i=1}^{|X|} \mathop{\mbox{\large $\bm{R}$}}\limits_{j=1}^{|Y|} Pixel (i,j) |\text{PR}^*\\
Image |\text{FG}^*=\mathop{\mbox{\large $\bm{R}$}}\limits_{i=1}^{|X|} \mathop{\mbox{\large $\bm{R}$}}\limits_{j=1}^{|Y|} Pixel (i,j) |\text{PG}^* \\
Image |\text{FB}^* = \mathop{\mbox{\large $\bm{R}$}}\limits_{i=1}^{|X|} \mathop{\mbox{\large $\bm{R}$}}\limits_{j=1}^{|Y|} Pixel (i,j) |\text{PB}^*
\end{array}\right. \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \nonumber\\
\end{eqnarray}
where the 2D frame is represented in six forms: FC (color), FG (gray), FB (black/white), FR* (red), FG* (green), and FB* (blue), with the composition FC = (FR*, FG*, FB*).
A paradigm of HP towards COVID-19 is represented by Image Frame Algebra (IFA)~\cite{Wang2009, WangKeynoteCognitive}, which processes and diagnoses lung images of suspected infections by efficient and accurate hypersignal handling according to a set of IFA operators. In IFA, a generic image model is an SM based on Eq.~\eqref{eq:W7}. According to IFA, the differential algorithm for COVID-19 image manipulation is implemented as follows:
\begin{eqnarray} \label{eq:W8}
\delta (I_{1},I_{2}) & \buildrel\wedge\over=&1-\sigma (I_{1}^{} ,I_{2}^{} ),{\rm \; }p_{1,2} (i,j) |\text{PG}\in [0, 255] \nonumber\\
&=&\frac{\sum _{i=1}^{|X|}\sum _{j=1}^{|Y|}\left|p_{1} (i,j)|\text{PG} - p_{2} (i,j) |\text{PG}\right|}{255|X|\bullet |Y|} \nonumber\\
\end{eqnarray}
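Eq.~\eqref{eq:W8} is a normalized mean absolute difference over 8-bit grayscale frames; a NumPy sketch (the random frames below stand in for registered lung images):

```python
import numpy as np

def image_diff(i1, i2):
    """delta(I1, I2): normalized mean absolute difference of two 8-bit
    grayscale images; 0 = identical frames, 1 = maximal difference."""
    i1 = np.asarray(i1, dtype=np.int64)
    i2 = np.asarray(i2, dtype=np.int64)
    return np.abs(i1 - i2).mean() / 255.0

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))       # stand-in for a registered frame
print(image_diff(a, a))                   # 0.0: identical frames
print(image_diff(a, 255 - a) <= 1.0)      # True: bounded by construction
```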
The operations of image differentiation in IFA may be expressed according to Eq.~\eqref{eq:W8} as follows:
\begin{eqnarray} \label{eq:W9}
\left\{\begin{array}{ll}
\text{Space Differentiation:} & Dif\!f_{s} \buildrel\wedge\over= \delta_{s} (I_{L}, I_{R}) \\
\text{Time Differentiation:} & Dif\!f_{t} \buildrel\wedge\over= \delta_{t} (I_{t_{1} }, I_{t_{2} })
\end{array}\right.
\end{eqnarray}
where the first model expresses a differentiation between the left and right images of a symmetric structure such as lungs, breasts, and the brain. The second model denotes a sequential differentiation of image series with respect to time.
For instance, COVID-19-affected lung images, brain tumors, and breast tumors may be diagnosed according to the generic image differential algorithm (Eqs.~\eqref{eq:W8} and~\eqref{eq:W9}), as shown in Fig.~\ref{fig:Wim-dif}.
\begin{figure}[t!]
\centering
\includegraphics[scale = .7]{imageDifferentiations.jpg}
\caption{Formal diagnoses of COVID-19 affected lung images and other applications
by image differentiations.}
\label{fig:Wim-dif}
\vspace{-.1in}
\end{figure}
\section{Applications and Imaging Modalities for Diagnosis/Prognosis of COVID-19 Images} \label{sec:Applicatios}
As stated previously, chest imaging provides an important source of data for diagnosis/prognosis of COVID-19 infection, assessment of treatment response, and monitoring of COVID-19 patients. In brief, several recent studies have been conducted to investigate specific characteristics of Coronavirus disease on chest images that can be used for design of processing/learning models. Different types of chest imaging patterns and distribution of lung involvement are related to the severity/stage of the COVID-19 infection and can help construct predictive SP/DL models to make decisions on hospital admission versus home isolation, non-ICU hospital admission versus ICU admission, on monitoring the treatment process and on the time of home discharge.
In this section, we first present an overview of potential applications of SP/DL models for the diagnosis and predictive prognosis of COVID-19 infections. Then, we present the required medical background related to COVID-19 for the development of SP/DL models and review different imaging modalities. The advantages and limitations of each modality are discussed together with its application in different stages of COVID-19 management. Generally speaking, applications of SP/DL models for COVID-19 diagnosis/prognosis can be classified into the following four categories:
\vspace{.1in}
\noindent
\textit{\textbf{$\bullet$ Diagnosis of COVID-19 Pneumonia from other Community Acquired Pneumonia (CAP):}} The most common COVID-19 symptoms (cough, shortness of breath, and fever) overlap with CAP symptoms. CAP is mainly caused by bacterial infections but can also be caused by viruses. In most cases, microbiological tests for CAP, such as cultures of sputum and blood, are time-consuming, have poor sensitivity and specificity, and are insufficient to identify the main pathogen~\cite{metlay2020treatment}.
Thus, the Polymerase Chain Reaction (PCR) test via nasopharyngeal or oropharyngeal swab has been used for correct identification of the source of the viral cause of CAP (like influenza)~\cite{burk2016viral}.
For COVID-19, a variation of the PCR test, the Reverse Transcription PCR (RT-PCR), has been used as the gold standard~\cite{abduljalil2020laboratory}.
Due to the test inaccuracy, the decision on treatment may be incorrect, i.e., antibacterial drug therapy may be administered to all CAP patients who are not confirmed COVID-19 positive, even though this treatment is not required for patients who are in fact COVID-19 positive~\cite{metlay2020treatment}. Cases of negative RT-PCR with persistent COVID-19 symptoms are submitted to chest imaging evaluation. This application domain will be further discussed in Sub-section~\ref{subsec:Calss}, where different SP/DL models developed for COVID-19 diagnosis are presented.
\vspace{.1in}
\noindent
\textit{\textbf{$\bullet$ Localizing COVID-19 Lesions and Identifying their Types:}} The pattern and extent of chest imaging findings is related to the stage and severity of COVID-19 and affects the treatment decision making. In Sub-section~\ref{subsec:Seg}, we will further describe applications of SP/DL models in localizing involved areas and demonstrating imaging features on CT scans.
\vspace{.1in}
\noindent
\textit{\textbf{$\bullet$ Outcome Prediction (COVID-19 Prognosis):}} To efficiently manage the limited medical resources during the pandemic, it is vital to accurately predict the risk of poor outcomes in COVID-19 patients. Some essential outcomes in COVID-19 patients are as follows:
\begin{itemize}
\item Mortality risk;
\item Progression to severe/critical stage;
\item Need for ICU admission/mechanical ventilation, and;
\item Length of hospital stay.
\end{itemize}
Predictive models are required to compute the probability of poor outcomes to help health-care professionals deliver appropriate services to high-risk patients. This application domain will be presented in detail in Sub-section~\ref{subsec:OP}.
\vspace{.1in}
\noindent
\textit{\textbf{$\bullet$ Severity Assessment of COVID-19:}} Chest imaging can be used to assess the lung infection severity in COVID-19 patients~\cite{metlay2020treatment}. The percentage of parenchymal involvement and the CT severity score can be calculated by segmenting the infected regions and lung areas in chest images. This is required to evaluate and quantify severity and, in turn, to predict the prognosis of the COVID-19 infection. In Sub-section~\ref{subsec:SA}, we will present different SP/DL models developed for computing the lung infection rate and the CT severity score as two commonly used criteria for severity assessment of COVID-19 infection.
\subsection{Imaging Modalities and Radiological Characteristics of COVID-19}
The Fleischner Society and the American College of Radiology, among others, recommend CT scans and CXR for COVID-19 patients with moderate to severe cases~\cite{rubin2020role}. Among the chest imaging modalities, CXR is less sensitive and less specific than CT. The advantages of CXR over CT include its fast availability, ease of execution, and minimization of in-hospital transmission. In addition, CXR findings correlate well with CT findings~\cite{ko2020pulmonary}. In some situations where a fast assessment is necessary, Point-of-Care lung Ultrasound (POCUS) offers a radiation-free imaging modality with higher accuracy in patients without any previous cardiopulmonary disease~\cite{haak2020diagnostic}. In what follows, the application of the above-mentioned imaging modalities for COVID-19 diagnosis/prognosis will be discussed in detail.
\subsubsection{Computerized Tomography (CT) Scan}
There has been considerable attention on CT imaging as the most useful imaging modality for representing COVID-19 infections. A study~\cite{li2020coronavirus} on $51$ patients with positive nucleic acid testing reported that only $3.9$\% of patients were misdiagnosed based on their chest CT images.
Fig.~\ref{fig:CTpatterns} shows common CT patterns in COVID-19 patients, where the most prevalent are ``\textit{Ground Glass Opacities (GGOs)}'' and ``\textit{Consolidations}''. GGO is a hazy transparent opacity that does not conceal lung vessels and bronchial areas~\cite{hansell2008fleischner}.
In a consolidation pattern, the air in the alveoli and peripheral bronchioles is replaced by fluid such as pus, water, blood, or an inflammatory material, obscuring the underlying distal airways and vascular margins~\cite{hansell2008fleischner}.
In a research study on $645$ confirmed COVID-19 patients, $88$\% of patients showed either pure GGOs or consolidation or both~\cite{zhang2020epidemiological}.
The appearance of pure GGO is more common in the early stage of the disease, while the appearance of GGOs with consolidations is more frequently seen in the progressive stage~\cite{sun2020systematic}. Another common CT pattern associated with COVID-19 is the so-called ``\textit{Crazy Paving}'' referring to thickened interlobular septa and intralobular interstitium superimposed on GGOs~\cite{hansell2008fleischner}.
The crazy paving pattern is more commonly seen in the progressive stage of the disease~\cite{sun2020systematic}. The appearance of the crazy paving/consolidation patterns as a sign of disease progression/severity can help radiologists evaluate the disease stage. Interlobular septal thickening, air bronchogram, and vascular enlargement are other CT findings in COVID-19 patients~\cite{sun2020systematic}.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{CTpatterns.jpg}
\caption{The most common CT patterns in COVID-19 patients. (a) Axial CT image of a 38 year-old man with bi-lateral GGOs distributed in posterior lung regions~\cite{COVID-DATA}. (b) Axial CT image of a 60-year-old woman, scattered consolidation patterns with mainly peripheral distribution~\cite{COVID-DATA}. (c) Axial CT image of a 51-year-old man, the appearance of GGOs (white arrow) and consolidations (red arrows)~\cite{COVID-DATA}. (d) Crazy-paving pattern in axial CT image of a 43-year-old woman~\cite{bernheim2020chest}. All CT images have been obtained without contrast enhancement.}
\label{fig:CTpatterns}
\end{figure}
\vspace{.05in}
\noindent
\textbf{\textit{Distribution of Lung Involvement in COVID-19:}} CT findings of COVID-19 infections demonstrate that most COVID-19 patients have had ``\textit{Bilateral}'' and ``\textit{Multifocal}'' lung involvement. Bilateral involvement means that the lesions are distributed in both the right and left lungs, and multifocal involvement implies that more than one lobe (of the five lobes) of the lung is affected by the disease. A systematic review of COVID-19 imaging findings~\cite{sun2020systematic} reported that in $17$ out of $36$ studies ($78.2$\%), the number of patients with bilateral lung involvement was higher than that of patients with unilateral involvement.
Research has also shown that COVID-19 lesions are, in most cases, distributed in the lower lobes and have a ``\textit{Peripheral}'' rather than central distribution~\cite{sun2020systematic, bernheim2020chest}. Similarities between the CT features of COVID-19 and those of other viral pneumonias limit the use of CT images to diagnose COVID-19. However, in a study of $58$ patients~\cite{bai2020performance}, six of seven radiologists could distinguish COVID-19 from other types of viral infections with an accuracy of $67$-$93$\% and a specificity of $93$-$100$\%. Peripheral distribution and GGOs were the most critical characteristics for distinguishing COVID-19 from non-COVID-19 pneumonia~\cite{bai2020performance}.
\vspace{.05in}
\noindent
\textbf{\textit{Correlation between CT Findings and Severity/Stage of the Disease:}}
Since CT images provide high sensitivity for detecting COVID-19 patients, they are reliable for developing SP/DL-based diagnosis models. Detecting the patterns of pulmonary involvement in COVID-19, including consolidation and/or crazy-paving patterns, in chest CT images by SP/DL-powered networks can help evaluate the disease severity. Quantifying the extent of lung involvement in COVID-19 patients is a determining criterion for assessing the disease's stage/severity. Different CT severity measures have been introduced in the literature that can be mapped to disease severity. The manual calculation of these measures by radiologists is tedious and time-consuming. Severity measures that can be automatically quantified by SP/DL models are as follows:
\begin{table}[t!]
\centering
\caption{\label{tab:PO}\textcolor{black}{Correlation between the Percentage of Opacity (PO) measure and COVID-19 stage~\cite{HuangHan}.}}
\renewcommand{\arraystretch}{2}
\begin{adjustbox}{width=0.5\textwidth}
\begin{tabular}{|c|c|}
\hline
\textbf{COVID-19 stage} & \textbf{Percentage of Opacity (PO)} \\
\hline
Moderate & 2.2 (0.4, 7.1) (median and interquartile range).\\
\hline
Severe & 28.9 $\pm$ 19.2 (mean $\pm$ std). \\
\hline
Critical & 49.6 $\pm$ 14.8 (mean $\pm$ std). \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\begin{itemize}
\item \textbf{\textit{Percentage of Opacity (PO):}} The PO measures the volume of COVID-19 abnormalities relative to the whole lung volume. It was reported in Reference~\cite{HuangHan} that the POs for COVID-19 patients fall into three different categories, as shown in Table~\ref{tab:PO}.
\item \textbf{\textit{Percentage of High Opacity (PHO) and Lung High Opacity Score (LHOS):}} The PHO and LHOS introduced in~\cite{ChagantiGrenier} quantify the volume of consolidation regions in the whole lung and across the lobes, respectively.
\begin{table}[t!]
\centering
\caption{\label{tab:CTscore}\textcolor{black}{Scoring system for measuring CT severity score~\cite{francone2020chest}.}}
\renewcommand{\arraystretch}{2}
\begin{adjustbox}{width=0.4\textwidth}
\begin{tabular}{|c|c|}
\hline
\textbf{Lobe involvement rate} & \textbf{Score} \\
\hline
\multicolumn{1}{|c|}{No involvement} & 0\\
\multicolumn{1}{|c|}{Involvement of less than 5\%.} & 1 \\
\multicolumn{1}{|c|}{Involvement from 5\% to 25\%.} & 2 \\
\multicolumn{1}{|c|}{Involvement from 26\% to 50\%.} & 3 \\
\multicolumn{1}{|c|}{Involvement from 51\% to 75\%.} & 4 \\
\multicolumn{1}{|c|}{Involvement is higher than 75\%.} & 5 \\
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\item \textbf{\textit{CT Severity Score:}} The authors in~\cite{francone2020chest} used a severity measure for COVID-19 patients, referred to as the CT score, that measures the extent of involvement based on a semi-quantitative scoring for each of the five lobes. The per-lobe score ranges from $0$ to $5$, computed as shown in Table~\ref{tab:CTscore}. The overall CT score, the sum of the lobar scores, thus lies between $0$ and $25$ (some studies use a different scoring scale, which yields a CT score between $0$ and $20$~\cite{bernheim2020chest}). Francone, \textit{et al.}~\cite{francone2020chest} conducted research on $130$ COVID-19 patients and evaluated the correlation between the CT score and disease severity. They showed that the CT score is strongly correlated with the COVID-19 clinical stage and severity. For patients in the severe or critical categories, the CT score is significantly higher than for patients in the mild category~\cite{francone2020chest}. A CT score greater than $18$ (out of $25$) can be used as a predictor of mortality in COVID-19 patients~\cite{francone2020chest}. The CT score is also highly correlated with patients' age: in~\cite{francone2020chest}, the authors revealed that the CT score in patients older than $50$ was significantly higher than in those aged $26$-$50$. For patients in late stages of the disease, the CT score is higher than in early stages. The CT score, together with patients' age, can thus be used to predict mortality in COVID-19 patients.
\end{itemize}
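The lobar scoring of Table~\ref{tab:CTscore} and its $0$-$25$ aggregate can be sketched in a few lines. This is an illustrative sketch; the handling of values falling exactly on a boundary follows the table's wording and is otherwise our assumption.

```python
def lobe_score(involvement_pct):
    """Map one lobe's involvement percentage to the 0-5 scale of the
    CT-score table (no involvement -> 0, <5% -> 1, ..., >75% -> 5)."""
    if involvement_pct <= 0:
        return 0
    if involvement_pct < 5:
        return 1
    if involvement_pct <= 25:
        return 2
    if involvement_pct <= 50:
        return 3
    if involvement_pct <= 75:
        return 4
    return 5

def ct_severity_score(lobe_involvements):
    """Overall CT score: sum of the five lobar scores, in the range 0-25."""
    assert len(lobe_involvements) == 5, "one involvement percentage per lobe"
    return sum(lobe_score(p) for p in lobe_involvements)
```

With this scale, the mortality threshold reported in~\cite{francone2020chest} corresponds to an aggregate score greater than $18$.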
\subsubsection{Chest Radiography (CXR)}
Some studies report that CXR images often show no lung infection in COVID-19 patients at early stages, resulting in a low sensitivity of $69$\% for the diagnosis of COVID-19~\cite{wong2020frequency}. However, CXR is helpful for predicting clinical outcome and for detecting COVID-19 in areas with limited access to reliable RT-PCR testing kits. The most commonly observed patterns in CXRs of COVID-19 patients are GGOs and consolidations with bilateral peripheral distribution~\cite{wong2020frequency}. Pre-existing medical conditions such as heart or other lung diseases make the interpretation of CXR images challenging; the interpretation of CXRs in younger patients is therefore more reliable and predictive. In Reference~\cite{ToussieVoutsinas}, the authors developed a scoring approach for severity assessment and outcome prediction of COVID-19 patients between $21$ and $50$ years of age based on their CXR images. In their scoring system, each lung is divided into three zones. A binary score is then given to each zone based on the appearance/absence of COVID-19 abnormalities, and the total score lies in the range $0$-$6$. Their study on $338$ patients demonstrates a significant correlation between a CXR score greater than two and hospital admission. They also reported that a CXR score greater than three could predict the need for intubation. Using the lung Edema severity measure, referred to as the RALE score, the authors in~\cite{cozzi2020chest} quantified the extent of lung involvement and computed correlations with the risk of ICU admission for COVID-19 patients. Recent research works have demonstrated the potential of SP/DL-based models for grading the disease stage and performing outcome prediction using CXR images.
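The six-zone scoring described above is simple to operationalize. The sketch below assumes one binary abnormality flag per zone (three zones per lung) and encodes the two decision thresholds reported in the study; the function and key names are illustrative, not from the cited work.

```python
def cxr_score(zone_abnormal):
    """Total CXR score: one binary point per lung zone, six zones in
    total, giving a score in the range 0-6."""
    assert len(zone_abnormal) == 6, "three zones per lung, two lungs"
    return sum(1 for abnormal in zone_abnormal if abnormal)

def triage_flags(score):
    """Thresholds reported on 338 patients: a score greater than two
    correlated with hospital admission, and a score greater than three
    predicted the need for intubation."""
    return {"admission_risk": score > 2, "intubation_risk": score > 3}
```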
\subsubsection{Ultrasound}
Besides the advantages of using CT or CXR combined with the RT-PCR test for a correct and precise diagnosis of COVID-19, these imaging modalities have limitations, including diagnostic accuracy, logistic challenges, time-consuming assessment, and the use of ionizing radiation~\cite{haak2020diagnostic}. Despite the low sensitivity of Ultrasound for the diagnosis of COVID-19 patients in the mild and moderate categories, lung ultrasound has shown high-sensitivity results in critical cases~\cite{lu2020clinical}.
Due to its low cost, portability, ease of use, and radiation-free nature, lung ultrasound can play a crucial role in the follow-up and monitoring of patients in the ICU. Furthermore, Ultrasound has been widely used for the diagnosis and monitoring of COVID-19 in pregnant women. In Italy, health professionals used lung ultrasound as a screening tool and developed a lung ultrasound score for evaluating the severity of the disease in COVID-19 patients~\cite{vetrugno2020our}.
In another study with $93$ patients, of whom $27$ ($29$\%) tested positive for COVID-19 by RT-PCR or CT, Ultrasound imaging achieved a sensitivity of $89$\% and a specificity of $59$\%~\cite{haak2020diagnostic}. Considering a subgroup of $37$ patients without any cardiopulmonary disease, the Ultrasound-based assessment revealed a sensitivity of $100$\% and a specificity of $76$\%~\cite{haak2020diagnostic}. Thus, Ultrasound represents a valuable imaging modality for detecting or assessing COVID-19 severity, mainly in patients without any medical history of cardiopulmonary disease.
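The sensitivity and specificity figures quoted in these studies follow the standard confusion-matrix definitions, which a short sketch makes explicit (the counts in the usage line are hypothetical, not those of the cited study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of infected patients that
    the modality detects. Specificity = TN / (TN + FP): fraction of
    non-infected patients correctly ruled out."""
    return tp / (tp + fn), tn / (tn + fp)
```

For example, `sensitivity_specificity(tp=8, fn=2, tn=9, fp=1)` returns `(0.8, 0.9)`.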
\begin{table*}[t!]
\centering
\caption{\label{tab:datasets}\textcolor{black}{Available COVID-19 CT scan datasets.}}
\renewcommand{\arraystretch}{2}
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{c c|c|c|c|c|c|c|c|c|c|c|c|}
\cline{2-13}
& \multicolumn{3}{|c|}{\textbf{Number of cases}}
& \multicolumn{2}{|c|}{\textbf{Label type}}
& \multicolumn{2}{|c|}{\textbf{Data Source}}
& \multicolumn{2}{|c|}{\textbf{CT volume}}
& \multicolumn{3}{|c|}{\textbf{Label Level }}
\\
\hline
\multicolumn{1}{|c|}{Dataset} & COVID & CAP & Normal & Classification & Segmentation & Multiple & Single & Available & Not available & Patient-level & Slice-level & Lobe-level \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Bjorke:2020}} & 49 & NA \footnotemark & NA & &\cmark & \cmark & & \cmark & & &\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Jun:2020}} & 20 & NA & NA & &\cmark & \cmark & & \cmark & & &\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Cohen:2020}} & 20 & NA & NA & &\cmark & \cmark & & \cmark & & &\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Morozov:2020}} & 856 & NA & 254 & \cmark & & \cmark & & \cmark & & \cmark& & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Zhao:2020}} & 216 & NA & 55 & \cmark & & \cmark & & &\cmark & &\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Soares:2020}} & 60 & NA & 60 & \cmark & & \cmark & & &\cmark & &\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{Rahimzadeh:2020}} & 95 & NA & 282 & \cmark & & &\cmark & \cmark & & \cmark&\cmark & \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{COVID-DATA}} & 171 & 60 & 76 & \cmark & & &\cmark & \cmark & & \cmark&\cmark & \cmark\\
\hline
\end{tabular}
\footnotetext[1]{NA stands for Not Available.}
\end{adjustbox}
\vspace{-.1in}
\end{table*}
\begin{table*}[t!]
\centering
\caption{\label{tab:CXRdatasets}\textcolor{black}{Available COVID-19 CXR datasets.}}
\renewcommand{\arraystretch}{2}
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{c c|c|c|c|c|c|c|c|}
\cline{2-9}
& \multicolumn{3}{|c|}{\textbf{Number of cases}}
& \multicolumn{2}{|c|}{\textbf{Label type}}
& \multicolumn{2}{|c|}{\textbf{Data Source}}
& \multirow{2}{*}{{\textbf{Status}}}
\\
\cline{1-8}
\multicolumn{1}{|c|}{Dataset} & COVID & CAP & Normal & Classification & Segmentation & Multiple & Single& \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{de2020bimcv}} & 802 & 605 & 284 & \cmark & \cmark & \cmark & & Under development \\
\hline
\multicolumn{1}{|c|}{Reference~\cite{cohen2020covid}} & 468 & NA & NA & \cmark & \cmark & \cmark & & Under development \\
\hline
\end{tabular}
\footnotetext[1]{NA stands for Not Available.}
\end{adjustbox}
\vspace{-.1in}
\end{table*}
\vspace{.05in}
\noindent
\textbf{\textit{COVID-19 CT Scans \& CXR Datasets:}} To ensure model generalization for clinical use, it is beneficial to train SP/DL models on a diverse set of datasets acquired from different scanners and different health centers, covering a wide range of patients. Table~\ref{tab:datasets} provides an overview of the available CT imaging datasets along with their COVID-19 related information.
CT images have different resolutions and contrasts depending on the type of scanner, the image acquisition approach, and the thickness of the slices. It is, therefore, necessary to make CT images consistent before feeding them into processing and learning models. For a list of available CT imaging datasets along with their COVID-19 related information, please refer to Reference~\cite{COVID-DATA}. Given the heterogeneity of the data sources, the available data collections cover a wide variety of equipment, image characteristics, and diagnosed diseases.
Table~\ref{tab:CXRdatasets} presents the datasets of the two largest data collection initiatives. Several other datasets are under development; as new data become available, they are promptly aggregated by these initiatives.
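A common way to make CT slices from heterogeneous scanners consistent before feeding them to a model, as noted above, is to clip the Hounsfield units to a lung window and rescale to $[0,1]$. The sketch below uses common lung-window defaults (level $-600$ HU, width $1500$ HU); these values are illustrative, not prescribed by the surveyed datasets.

```python
import numpy as np

def normalize_ct_slice(hu_slice, level=-600.0, width=1500.0):
    """Clip a CT slice given in Hounsfield units to a window defined by
    (level, width) and rescale the result linearly into [0, 1], so that
    slices from different scanners become comparable."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(np.asarray(hu_slice, dtype=np.float64), lo, hi)
    return (clipped - lo) / (hi - lo)
```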
\section{Deep Learning Radiomics Specific to COVID-19} \label{sec:DR}
In this section, we present different DL-based Radiomic models specific to the analysis of COVID-19 infection. In particular, we focus on the following four different application domains of discovery Radiomics: Segmentation of COVID-19 lesions (presented in Sub-section~\ref{subsec:Seg}); Predictive models for outcome prediction in COVID-19 patients (described in Sub-section~\ref{subsec:OP}); DL/SP models for severity assessment (presented in Sub-section~\ref{subsec:SA}), and; Diagnosis and classification of COVID-19 cases, which are detailed in Sub-section~\ref{subsec:Calss}.
\subsection{Segmentation of COVID-19 Lesions}\label{subsec:Seg}
In this sub-section, we provide an overview of segmentation networks developed in the context of COVID-19 from different aspects as presented in Fig.~\ref{fig:tax_seg}.
Segmentation networks are image-to-image DL models that are trained to produce a mask indicating the region of interest. Segmentation allows physicians to identify the type and location of lesions, evaluate the extent of lung involvement, and quantify lung severity measures. Depending on the objective, one can segment the COVID-19 lesions, lungs, or lobe regions. Segmentation-based infection quantification models can also be used to evaluate the effectiveness of different treatment solutions.
\vspace{.05in}
\noindent
\textit{\textbf{Imaging Modality used for Segmentation of COVID-19 Lesions:}} Since CT images provide the most accurate COVID-19 manifestations for grading and evaluation of infections, they have been widely used in the context of COVID-19 segmentation. The goal in this context is the localization of COVID-19 infections and/or the grading of the disease stage. In the literature, the focus has mainly been on the development of 2D models for segmentation of lung infections in each CT slice~\cite{RajamaniSiebert, FanZhou, QiuLiu}. There have also been some 3D segmentation models that take 3D CT volumes as input and segment the lung abnormalities on a patient-level basis~\cite{ZhangXiaohong, MullerRey}.
Since the use of portable CXRs is more feasible for patients in the ICU, it is essential to develop segmentation models for severity assessment of COVID-19 patients based on CXR images. A study on $2,951$ COVID-19 CXRs performed lung infection segmentation as the first step of its COVID-19 diagnosis pipeline~\cite{DegerliAhishali}. Using human-machine collaboration, the authors provided the first COVID-19 CXR dataset with ground-truth infection masks. Segmenting the lung infections using a U-Net model with DenseNet-121 yielded a higher performance in their classification framework~\cite{DegerliAhishali}.
\vspace{.05in}
\noindent
\textit{\textbf{Region of Interest (RoI):}} The region of interest differs across COVID-19 segmentation models depending on the research objective and can be classified into the following three main categories:
\begin{itemize}
\item [(i)] \textit{Localizing the Infection Regions without Considering their Types:} These studies perform a two-way segmentation approach, assigning each pixel of the CT images either to an infection or to a background class.
\item [(ii)] \textit{Segmenting Different Types of COVID-19 Lesions:} As mentioned previously, different types of COVID-19 infections can be correlated with the stage/severity of the disease. In this regard, some studies have segmented different types of COVID-19 lesions under different classes to further evaluate the severity of the disease~\cite{ZhangXiaohong, RajamaniSiebert}.
For instance, a 3D DeepLabv3 segmentation model was proposed in~\cite{ZhangXiaohong} using CT images of $4,154$ patients. The constructed model segments lung regions and different types of COVID-19 infections, including consolidations, GGO, pulmonary fibrosis, interstitial thickening, and pleural effusion.
\item [(iii)] \textit{Segmentation of COVID-19 Lesions, Lungs, and Lobes:} Researchers who aim to quantify the extent of lung involvement and determine the COVID-19 severity measures consider segmenting lung/lobe regions besides the COVID-19 lesions.
Advanced segmentation models can be trained to quantify different severity measures such as PO, PHO, CT score, and LHOS, based on CT images. This will be further discussed in Sub-section~\ref{subsec:SA}.
\end{itemize}
Next, we investigate different DL architectures proposed for the segmentation of COVID-19 abnormalities.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.65\textwidth]{seg-chart.png}
\vspace{-.1in}
\caption{Taxonomy of the COVID-19 segmentation techniques using deep learning.}
\label{fig:tax_seg}
\vspace{-.2in}
\end{figure*}
\subsubsection{DL Architectures for Segmentation of COVID-19 Lesions}
Segmentation is considered an essential step in the severity/stage assessment of COVID-19 patients. However, compared to classification models, a limited number of research works have focused on segmentation models for COVID-19. Generally speaking, segmentation models (mostly developed based on CNNs) contain a contracting path (encoder) for extracting informative features from input images and an expanding path (decoder) for reconstructing the mask representing the regions of interest. Unlike classification models, there are typically no fully connected layers in segmentation models. The U-Net network~\cite{RonnebergerFischer}
and its various extensions are the most commonly used architectures for segmentation of COVID-19 lesions. A few works are developed based on other successful segmentation networks, and some researchers have proposed innovative encoder-decoder networks for segmentation of COVID-19 lesions. Below, we present COVID-19 segmentation architectures classified into the aforementioned three main categories:
\begin{itemize}
\item [(i)] \textbf{\textit{U-Net-based Segmentation Models:}} The majority of studies on segmentation of COVID-19 abnormalities have been built upon the U-Net model. In U-Net~\cite{RonnebergerFischer},
skip connections transfer the extracted features from the contracting path to the corresponding layer in the expanding path. This helps the model better capture the visual details of images, making it an ideal architecture for segmentation in the medical domain. Adoption of pre-existing CNNs such as DenseNet and ResNet blocks in the encoder path of the U-Net results in extracting higher-resolution features from CT images~\cite{ChagantiGrenier, LiZhong}. Multi-scale feature fusion, i.e., the integration of dilated convolutions with different dilation rates, can be added to U-Net-based segmentation models to help capture COVID-19 abnormalities at different scales~\cite{WangLiu}. The current focus is on increasing performance in segmenting the COVID-19 lesions. In this regard, References~\cite{ZhouCanu, ZhengLiu, RajamaniSiebert}
considered the incorporation of spatial and channel attention mechanisms within the U-Net architecture. The authors of Reference~\cite{RajamaniSiebert} proposed a new attention mechanism that enables the basic U-Net model to better capture the contextual information in CT slices and to segment COVID-19 abnormalities more accurately. Commercial U-Net-based software has also been used in some studies for the quantification of COVID-19 abnormalities and for determining the severity of the disease~\cite{MergenKobe, HuangHan}.
\item [(ii)] \textbf{\textit{Non U-Net Segmentation Models:}} Some COVID-19 segmentation researchers developed models based on architectures other than the U-Net model. Fully Convolutional Networks (FCNs), U-Net++, and V-Net are some examples of successful segmentation networks that have been used as a base model for developing COVID-19 segmentation networks.
\item [(iii)] \textbf{\textit{Innovative COVID-19 Specific Encoder-Decoders:}} Some studies have developed innovative models for the segmentation of COVID-19 opacifications from scratch. Qiu, \textit{et al.}~\cite{QiuLiu} proposed a compact (light-weight) segmentation network based on $100$ annotated CT slices from $>40$ COVID-19 patients. In contrast to most segmentation models, which are large DL networks with millions of trainable parameters, their proposed model contains only $472$ thousand parameters and achieved results comparable to its large-scale counterparts. Reference~\cite{FanZhou} developed a parallel partial decoder with edge and reverse attention modules to segment the COVID-19 lesions more precisely. Joint classification and segmentation networks, belonging to multi-task learning models, can enhance the performance of both segmentation and classification tasks by sharing the extracted features~\cite{WuGao, AmyarModzelewski}. Finally, segmentation models can be developed without the use of labeled data~\cite{YaoXiao} for distinguishing COVID-19 areas of infection. More specifically, random shape, noise generation, and image filtering operations can be used to synthesize COVID-19 lesions for inclusion into healthy chest CT scans and form training pairs.
\end{itemize}
Table~\ref{tab:SegModels} provides a classification of different COVID-19 lesion segmentation models.
\subsubsection{Hand-Crafted Radiomics}
Hand-crafted radiomics refers to the process of extracting several quantitative and semi-quantitative features from the RoI with the ultimate goal of diagnosis/prediction. Compared to DL techniques, hand-crafted radiomics is less common in COVID-19 analysis, as it requires fine delineation of the infected regions and prior knowledge of the types of features to extract. Nevertheless, it benefits from more interpretability, as the features are engineered. As shown in Fig.~\ref{fig:hand-crafted}, hand-crafted radiomics, utilized in a few COVID-19 studies, follows a multi-step process, in the first step of which the infected regions are annotated. Subsequently, several features are extracted from the segmented regions and fed to a conventional model, such as a Support Vector Machine (SVM), logistic regression, or decision tree, for making the final decision. Hand-crafted features cover a wide range of categories, including first-order (basic intensity and shape-based features), second-order (texture features extracted from various matrices), and more advanced features such as those calculated from Fourier and Wavelet transforms. Intensity features~\cite{ShiXia},
shape-based~\cite{FangHe},
and/or texture-based features, as well as other COVID-19 related features such as CT quantification metrics can be leveraged~\cite{LiZhong, chassagnon2020ai}. Radiomics in COVID-19 studies are mostly used in adverse outcome prediction models, explained in the next section. It is also possible to develop hybrid frameworks, where both hand-crafted and DL-based features are combined. Such methodology is adopted in Reference~\cite{WangWang} by combining GAN generated features with pre-defined hand-crafted ones.
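As a concrete illustration of the feature-extraction step, the sketch below computes a few first-order features (intensity statistics and a voxel-count shape feature) from a segmented RoI; the feature set is illustrative, not the one used in the cited studies. The resulting vectors would then be fed to a conventional classifier such as an SVM.

```python
import numpy as np

def first_order_features(volume, mask):
    """First-order radiomic features from a segmented region of interest:
    intensity statistics plus a simple shape feature (voxel count)."""
    voxels = np.asarray(volume, dtype=np.float64)[np.asarray(mask, dtype=bool)]
    mean, std = voxels.mean(), voxels.std()
    skew = 0.0 if std == 0 else float(((voxels - mean) ** 3).mean() / std ** 3)
    return {
        "mean_intensity": float(mean),
        "std_intensity": float(std),
        "skewness": skew,
        "roi_voxels": int(voxels.size),
    }
```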
\onecolumn
\begin{landscape}
\begin{center}
\centering
\begin{tiny}
\begin{longtable}[c]{p{0.05\linewidth} p{0.05\linewidth} p{0.15\linewidth} p{0.1\linewidth} p{0.08\linewidth} p{0.1\linewidth} p{0.08\linewidth} p{0.08\linewidth} p{0.2\linewidth}}\\
\caption{COVID-19 lesion Segmentation models}
\label{tab:SegModels}\\
\toprule
\textbf{Ref.} & \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Model} & \textbf{Task} & \textbf{ROI} & \textbf{Validation} & \textbf{Results}\\
\midrule
\endfirsthead
\caption* {\textbf{Table \ref{tab:SegModels} Continued:} COVID-19 lesion Segmentation models}\\
\toprule
\textbf{Ref.}& \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Model} & \textbf{Task} & \textbf{ROI} & \textbf{Validation} & \textbf{Results}\\
\midrule
\endhead
Ref.~\cite{ChagantiGrenier}& CT scans& 9749 CT volumes& multi-center& U-Net-based& Quantification&lesions, Lungs, lobes& Train/test split& Pearson correlation for PO: 0.92, PHO: 0.97, LSS: 0.91, LHOS: 0.9\\
\\
Ref.~\cite{RajamaniSiebert}& CT scans & 471 slices (100 slice from 40 patients, 371 slices from 9 patients) & multi-center & U-Net-based & Segmentation & COVID-19 lesions (binary/multiple class segmentation) & 3-fold cross validation & DSC: 0.791, SEN: 0.862, SPC: 0.987\\
\\
Ref.~\cite{MaWang}& CT scans& 20 COVID-19 CT volumes from 2 datasets (10 patients from each)& multi-center& U-Net-based& Segmentation& COVID-19 lesions, Lungs& 4-fold cross validation& DSC on left lung: 0.8581, DSC on right lung: 0.8799, DSC on lesions: 0.6732\\
\\
Ref.~\cite{LaradjiRodriguez}& CT scans& 29 COVID-19 CT volumes& multi-center& FCN-based& Segmentation& COVID-19 lesions& Train/test split& Reducing annotating time \\
\\
Ref.~\cite{YaoXiao}& CT scans& For COVID-19 lesion segmentation: (453 healthy CT volumes, 18 COVID-19 CT volumes); For lung segmentation: 515 lung CT volumes& Three datasets for COVID-19 lesion segmentation, three datasets for lung segmentation& U-Net-based& Segmentation& COVID-19 lesions, Lungs& External validation& Results on test set 1: DSC: 0.687 $\pm$ 15.8, SPC: 0.851 $\pm$ 6.97, SEN: 0.621 $\pm$ 22.8; Results on test set 2: DSC: 0.594 $\pm$ 17.4, SPC: 0.604 $\pm$ 19.7, SEN: 0.618 $\pm$ 18.4 \\
\\
Ref.~\cite{XuCao}& CT scans& 2563 COVID-19 CT volumes, 254 healthy CT volumes& four public datasets for training and three public datasets for testing& U-Net-based + GAN& Segmentation& COVID-19 lesions& External validation& Results on different test sets: DSC: 0.589-0.767, SPC: 0.992-0.998, SEN: 0.584-0.846\\
\\
Ref.~\cite{QiuLiu}& CT scans& 100 annotated CT slices from $>40$ patients& single-center& Innovative model (light-weight architecture)& Segmentation& COVID-19 lesions& Train/test split& DSC: 0.7728, SEN: 0.8362, SPC: 0.9747\\
\\
Ref.~\cite{AmyarModzelewski}& CT scans& 1396 CT slices for classification, 100 slices from $>40$ patients for segmentation& Three datasets for classification, one dataset for segmentation& Innovative model (multi-task learning)& Segmentation + Classification +Image reconstruction& COVID-19 lesions& Train/test split& For segmentation DSC: 0.88; For classification Acc: 0.9467, SEN: 0.96, SPC: 0.92\\
\\
Ref.~\cite{HuangHan}& CT scans& The commercial deep-learning software used for segmentation was trained on 842 COVID-19 CT volumes from one hospital; for follow-up CTs: 126 COVID-19 patients classified into four clinical stages: 6 mild, 94 moderate, 20 severe, and 6 critical cases& Single-center& U-Net-based& Quantification of lung involvement, monitoring changes in follow-up CTs& COVID-19 lesions, Lungs, Lobes& Train/test split& DSC: median: 0.8481, range: 0.6526--0.9094; PO for different categories: mild: 0, moderate: 2.2\% (0.4, 7.1), severe: 28.9\% $\pm$ 19.2, critical: 49.6\% $\pm$ 14.8\\
\\
Ref.~\cite{FanZhou}& CT scans& 100 annotated CT slices from $>40$ patients, 1600 CT slices from 20 patients& multi-center& Innovative model& Segmentation& COVID-19 lesions (binary/multiple class segmentation)& External validation& Results on test set: DSC: 0.73, SEN: 0.725, SPC: 0.96; Results on external validation: DSC: 0.597, SEN: 0.865, SPC: 0.977\\
\\
Ref.~\cite{ZhouCanu}& CT scans& 100 annotated CT slices from $>40$ patients& multi-center& U-Net-based& Segmentation& COVID-19 lesions& Train/test split& DSC: 0.831, SEN: 0.867, SPC: 0.993\\
\\
Ref.~\cite{WangLiu}& CT scans& 558 COVID-19 patients including 76250 slices& multi-center& U-Net-based& Segmentation& COVID-19 lesions& Train/test split& DSC: 0.8029 $\pm$ 11.14, HD:18.72 $\pm$ 27.26\\
\\
Ref.~\cite{ZhaoLi}& CT scans& 1117 annotated CT images from 19 COVID-19 patients, for external validation: 8 lung CT scans from two COVID-19 patients& single-center& U-Net++-based& Segmentation& COVID-19 lesions& 5-fold cross-validation& Results on test set: DSC: 0.8948, SEN: 0.8874, PPV: 0.9064; Results on external set: only qualitative results have been presented\\
\\
Ref.~\cite{LiZhong}& CT scans& 531 thick-section CT scans from 204 COVID-19 patients& single-center& U-Net-based& Segmentation; Severity assessment using PO and the average infection HU (iHU)& COVID-19 lesions& Train/test split& For segmentation: DSC: 0.74; For severity assessment: two imaging biomarkers (PO and iHU) can distinguish between the severe and non-severe stages with an AUC of 0.9680 (p-value $< 0.001$).\\
\\
Ref.~\cite{MergenKobe}& CT scans, clinical/laboratory information& 60 COVID-19 patients& single-center& U-Net-based& abnormalities quantification, correlation with disease severity& COVID-19 lesions/ Lungs/ Lobes& NA& Patients with need for mechanical ventilation had a significantly higher PO (median 44\%, IQR: 23–58\% versus 13\%, IQR: 10–24\%; p = 0.001) and PHO (median: 11 \%, IQR: 6–21\% versus 3\%, IQR: 2–7 \%, p = 0.002) compared to those without.\\
\\
Ref.~\cite{MaNie}& CT scans& 880 COVID-19 CT volumes, 80 annotated cases& multi-center& nnU-Net-based& Segmentation& COVID-19 lesions& 5-fold cross validation& DSC: 0.7225 $\pm$ 0.1989, HD: 23.46 $\pm$ 32.12\\
\\
Ref.~\cite{ZhengLiu}& CT scans& 2506 slices with COVID-19 infected lesions and 2274 non-infected slices from 18 COVID-19 patients and 18 non-COVID people& single-center& U-Net-based& Segmentation& COVID-19 lesions (including GGOs, interstitial infiltrates, and consolidation)& 5-fold cross validation& For the three infection categories: DSC: 0.7422, 0.7384, 0.8769; (SEN, SPC): (0.8593, 0.9742), (0.8268, 0.9869), and (0.8645, 0.9889), respectively\\
\\
Ref.~\cite{ZhangXiaohong}& CT scans, CT quantitative features, clinical data& 617,775 CT images from 4,154 patients, 4695 annotated slices& multi-center& DeepLabv3-based& Three-way classification, Segmentation, quantification& Lung regions, COVID-19 lesions including consolidation, GGO, pulmonary fibrosis, interstitial thickening, and pleural effusion& 5-fold cross validation, External validation& Three-way classification: accuracy: 0.9249, AUROC: 0.9813 (95\% CI: 0.9691–0.9902); classification performance on five external datasets: accuracy above 0.8411, sensitivity above 0.8667, and specificity above 0.8226; the AI-based lesion quantification could evaluate the effectiveness of different treatment solutions\\
\\
Ref.~\cite{WuGao}& CT scans& 750 CT volumes (400 COVID-19 \& 350 uninfected cases). 3,855 annotated CT slices& multi-center& Innovative model& binary-classification, Segmentation& COVID-19 lesions& Train/test split& Classification: sensitivity: 0.95, specificity: 0.93; segmentation: DSC: 0.783\\
\\
Ref.~\cite{ShanGao}& CT scans& 549 COVID-19 patients & single-center& VB-Net-based& Segmentation, quantification& COVID-19 lesions, Lungs, Lobes& Train/test split& DSC: 91.6\% $\pm$ 10\%, PO error (the difference between GT and predicted PO): 0.3\%\\
\hline
\end{longtable}
\end{tiny}
\end{center}
\end{landscape}
\twocolumn
\begin{figure}
\centering
\includegraphics[scale=0.2]{hand-crafted.png}
\caption{Hand-crafted Radiomics workflow.}
\label{fig:hand-crafted}
\end{figure}
\begin{table*}[t!]
\centering
\caption{\label{tab:predictModels}\textcolor{black}{Predictive models for outcome prediction in COVID-19 patients.}}
\renewcommand{\arraystretch}{2}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{c p{0.1\linewidth}|p{0.1\linewidth}|p{0.25\linewidth}|p{0.25\linewidth}|p{0.15\linewidth}|p{0.15\linewidth}|p{0.15\linewidth}|p{0.15\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|}
\hline
\multicolumn{1}{|c|}{\textbf{Reference}} & \textbf{Dataset Size} & \textbf{Dataset Diversity} & \textbf{Input Data} & \textbf{Model} & \textbf{Target Outcome} \\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{yue2020machine}} & 31 patients & Multi-center & extracted CT features & Logistic regression/ Random forest & short- and long-term hospital stay \\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{bai2020predicting}} & 133 patients in mild stage & single-center & Temporal information of CT scans and clinical/laboratory data & A joint multi-layer perceptron and LSTM network & Progression from mild stage to severe/critical stage \\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{zeng2020risk}} & 338 patients & Single-center & Extracted features from CT images/ clinical variables & Multivariate survival analyses & Progression Risk \\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{colombi2020well}} & 236 patients & Single-center & clinical parameters and CT metrics & Logistic regression & ICU admission or death vs no ICU admission or death \\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{chassagnon2020ai}} & 693 patients & multi-center & Radiomic CT features, clinical/ biological attributes & Ensemble consensus-driven learning & Severe vs non-severe/ short vs long-term prognosis\\
\hline
\multicolumn{1}{|c|}{Ref.~\cite{lassau2020ai}} & 1003 patients & multi-center & Clinical, biological, and CT scan images and reports & DL pipeline for segmentation/ DL pipeline for predicting severity evolution & Progression Risk \\
\hline
\end{tabular}
\end{adjustbox}
\end{table*}
\subsection{Predictive Models for Adverse Outcome Prediction in COVID-19 Patients} \label{subsec:OP}
As stated previously, for efficient utilization of limited medical resources during the COVID-19 pandemic, it is critically important to accurately predict mortality risk, progression to severe/critical stage, need for ICU admission/ventilation, and the length of hospital stay. In this regard, development of predictive models is essential to compute the probability of poor outcomes and help health-care professionals deliver appropriate services to high-risk patients.
Although image-driven features have shown high correlation with COVID-19 outcomes, they are not the only influential factors. In other words, radiologists use image-driven features together with other clinical and risk factors to make the final decision. The clinical/laboratory information used for COVID-19 outcome prediction includes patients' symptoms, laboratory test results, oxygen saturation, and comorbid diseases. Chronic lung disease, obesity, hypertension, cardiovascular diseases, and diabetes are examples of comorbidities that increase the risk of adverse outcomes in COVID-19 pneumonia. In this section, we focus on predictive models that use image-driven features to estimate the risk of adverse outcomes in COVID-19 patients. As shown in Fig.~\ref{fig:outcome}, in what follows, predictive models are categorized and described in terms of their: (i) Model structure, and; (ii) Target outcome.
\subsubsection{Model Structure of COVID-19 Outcome Prediction Models}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{outcome-chart.png}
\caption{Taxonomy of the COVID-19 predictive models.}
\label{fig:outcome}
\end{figure*}
Most COVID-19 outcome prediction studies exploit both chest CT images and clinical/laboratory data in their models. To effectively benefit from heterogeneous data resources, conventional ML methods or hybrid models can be utilized, as explained below.
\vspace{.1in}
\noindent
(i) \textbf{\textit{Conventional ML Models:}} In some COVID-19 predictive studies, CT radiomics and quantification features are extracted in a pre-processing step. The extracted features are then used together with clinical/laboratory data to train a shallow classifier such as logistic regression or random forest. For instance, Chao \textit{et al.}~\cite{chao2020integrative} used CT features, including lobe-wise quantification features and whole-lung radiomics, together with patients' clinical information, including age, sex, vital signs, and laboratory findings, to predict the need for ICU admission in COVID-19 patients. They used a DL-based segmentation model to measure the CT quantification features. The integrated input data from various types and resources are then fed into a random forest classifier for outcome prediction. Following this study, one can conclude that adding clinical information to CT features can improve the overall outcome prediction performance.
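The conventional-ML pipeline above can be sketched on synthetic data as follows (the feature dimensions and the toy ICU label are assumptions for illustration, not the setup of~\cite{chao2020integrative}): CT quantification/radiomic features and clinical variables are concatenated and fed to a random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
ct_features = rng.normal(size=(n, 20))     # e.g., lobe-wise PO + radiomics
clinical = rng.normal(size=(n, 5))         # e.g., age, vitals, lab values
X = np.hstack([ct_features, clinical])     # simple feature-level fusion
# Toy ICU-admission label driven by one CT and one clinical feature:
y = (X[:, 0] + X[:, 21] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

Dropping either feature group from `X` and re-running the cross-validation is a quick way to probe the conclusion that the combined input helps.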
\vspace{.1in}
\noindent
(ii) \textbf{\textit{Hybrid Models:}} Hybrid models (such as multiple-models, mixture of experts, and ensemble models) are of high importance in the field of medical imaging, typically improving the initial results. While hybrid models can be developed in a variety of forms, they are mostly adopted in COVID-19 analysis in two main ways, i.e., combinations of hand-crafted and clinical/laboratory features, or combinations of DL-driven and clinical/laboratory features, as described below:
\begin{itemize}
\item \textit{Combinations of Hand-Crafted Features and Clinical/Laboratory Information:} Although hand-crafted radiomics typically relies on fine annotations and prior knowledge of the features to be extracted, it benefits from domain knowledge. Combining CT radiomics and clinical/laboratory information in a hybrid model, therefore, offers greater interpretability. After integration of hand-crafted radiomics and clinical/laboratory information, DL features can be extracted to achieve improved overall results. For instance, Reference~\cite{bai2020predicting} developed a DL model to predict the probability that mild COVID-19 patients deteriorate into the severe/critical stage. In their model, clinical/laboratory data are first fed into a Multi-Layer Perceptron (MLP). The output is then integrated with hand-crafted features extracted from serial CT scans and fed into an LSTM network followed by fully connected layers. The LSTM network detects the temporal dependencies between the feature vectors and achieved an AUC of $0.92$ in distinguishing mild patients who are more likely to deteriorate into the severe/critical stage.
\item \textit{Combination of DL-driven and Clinical/Laboratory Features}:
The superiority of mixture models that take advantage of both image-driven and clinical factors is investigated in~\cite{MengDong,NingLei}. In the former study~\cite{MengDong}, gender, age, severity grade, and chronic disease history are combined with the chest CT scans through a joint CNN-MLP network to distinguish between high- and low-risk COVID-19 patients. In the latter study~\cite{NingLei}, blood and urine test results of $1,170$ patients are used as the clinical information. The developed model consists of two successive CNNs for analysis of CT images, a DNN for analysis of clinical features, and a penalized logistic regression to integrate the image-driven DL features with the DL features extracted from clinical data. Improvements are reported when image-driven and clinical features are jointly used.
\end{itemize}
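A minimal PyTorch sketch of the joint CNN-MLP fusion idea discussed above (layer sizes, input shapes, and the two-class risk head are illustrative assumptions, not the architecture of~\cite{MengDong}):

```python
import torch
import torch.nn as nn

class CNNMLPFusion(nn.Module):
    """Toy joint CNN-MLP: a CNN branch for the CT slice and an MLP branch
    for clinical variables, fused before a low/high-risk head."""
    def __init__(self, n_clinical=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())        # -> 8*4*4 = 128
        self.mlp = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(128 + 16, 2)                # low vs. high risk

    def forward(self, ct, clinical):
        # Concatenate image and clinical embeddings, then classify.
        z = torch.cat([self.cnn(ct), self.mlp(clinical)], dim=1)
        return self.head(z)

model = CNNMLPFusion()
logits = model(torch.randn(2, 1, 64, 64), torch.randn(2, 4))
```

The same fusion point could instead feed an LSTM over serial scans, as in the hand-crafted hybrid described earlier.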
\subsubsection{Target Outcomes in COVID-19 Patients}
\begin{figure*}[t!]
\centering
\centering
\includegraphics[scale=0.5]{severity.png}
\vspace{-.15in}
\caption{Taxonomy of the COVID-19 severity assessment techniques using deep learning.}\label{fig:tax_severity}
\vspace{-.2in}
\end{figure*}
Generally speaking, in the context of COVID-19 prognosis, the target outcomes of interest include progression to severe/critical stage, mortality risk, need for ICU admission/ventilation, and the length of hospital stay. Below, we present different COVID-19 outcome prediction models based on these target outcomes:
\begin{itemize}
\item \textit{Risk of Progression to More Severe Stages:} Li \textit{et al.}~\cite{LiZhong} aimed at measuring the progression of the disease by monitoring the Portion of Infection (POI) and the average infection HU (iHU). Changes in POI and iHU are calculated over each two consecutive CT examinations and compared with the radiologists' reports, leading to a high agreement in determining the infection increase or decrease. To understand the temporal evolution of COVID-19, Reference~\cite{HuangHan} calculates the Lung Opacity Percentage (LOP) for the whole lung and its $5$ lobes over follow-up CT examinations. Progression assessment has also been pursued in Reference~\cite{AmerFrid-Adar} by calculating the pneumonia ratio.
\item \textit{Mortality Risk:} Chassagnon \textit{et al.}~\cite{chassagnon2020ai} proposed an ensemble consensus-driven learning approach combining multiple ML classifiers. They used CT radiomic features (including first-order and higher-order statistics, texture, and shape information) and patients' data (including age and gender) as the model's input. Their model aimed to predict a short-term negative outcome (death in less than four days) or a long-term negative outcome (patients who had not recovered after $31$ days, i.e., died after four days or remained intubated). They trained their model on a multi-center dataset containing $693$ COVID-19 patients and obtained promising results on multiple external validation sets.
\item \textit{Need of ICU Admission:} Reference~\cite{colombi2020well} implemented a
logistic regression model to predict the risk of ICU admission/death based on clinical parameters and CT quantification metrics of 236 COVID-19 patients from one health center.
CT quantification metrics, i.e., the extent of lung involvement by COVID-19 infections, can also be used to predict ICU admission or death. PO and PHO are calculated in Reference~\cite{MergenKobe} for potential correlation with clinical and laboratory factors. The obtained results show that patients with high PO and PHO have a higher need for mechanical ventilation. Length of ICU stay, the duration of oxygen inhalation, and hospitalization are the main follow-up objectives in Reference~\cite{CaiLiu}. Homayounieh \textit{et al.}~\cite{HomayouniehEbrahimian} used a logistic regression model to predict the risk of ICU admission or death based on CT radiomics and clinical information. They indicated that CT radiomics could be superior to radiologists' visual assessment in predicting COVID-19 patients' outcomes.
\item \textit{Length of Hospital Stay:} The need for a short- or long-term hospital stay of COVID-19 patients can be estimated using regression or random forest models. In such scenarios, hand-crafted features, such as first-order, second-order, and/or shape features, are extracted to predict the length of hospital stay. Yue \textit{et al.}~\cite{yue2020machine} estimated the length of hospital stay in COVID-19 patients using logistic regression and random forest techniques.
\end{itemize}
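The PO and iHU measures used above for progression monitoring can be computed directly from segmentation masks; the following toy sketch (synthetic HU values, not the implementation of~\cite{LiZhong}) illustrates both:

```python
import numpy as np

def po_and_ihu(ct_hu, lung_mask, lesion_mask):
    """Percentage of Opacity (PO): infected lung volume over total lung
    volume, in percent; iHU: mean Hounsfield intensity of the infection."""
    lesion_in_lung = lesion_mask & lung_mask
    po = 100.0 * lesion_in_lung.sum() / max(lung_mask.sum(), 1)
    ihu = float(ct_hu[lesion_in_lung].mean()) if lesion_in_lung.any() else float("nan")
    return po, ihu

ct = np.full((4, 8, 8), -700.0)            # aerated lung ~ -700 HU
lung = np.ones_like(ct, dtype=bool)        # toy whole-lung mask
lesion = np.zeros_like(lung)
lesion[:, :4, :] = True                    # toy infection: half the lung
ct[lesion] = -100.0                        # consolidation-like opacity
po, ihu = po_and_ihu(ct, lung, lesion)     # -> PO = 50.0, iHU = -100.0
```

Tracking (PO, iHU) across consecutive examinations then gives the increase/decrease signal compared against radiologists' reports.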
\subsection{DL/SP Models for COVID-19 Severity Assessment}\label{subsec:SA}
Severity essentially refers to how much the lungs are affected and involved in the disease. COVID-19 severity assessment is of high importance due to its unique role in risk management and resource allocation. In this sub-section, as shown in Fig.~\ref{fig:tax_severity}, we present existing severity assessment methodologies. Table~\ref{tab:SevirityModels} summarizes how different studies are categorized based on these perspectives.
\vspace{.05in}
\noindent
\textit{\textbf{Imaging Modality used for Severity Assessment of COVID-19:}} Severity of COVID-19 can be assessed using both CXR and CT scans, where the latter, due to its 3D nature, is capable of providing a more accurate estimate of lung involvement. Below, we provide a few examples of recent works using CXR or CT for severity assessment of COVID-19 infection:
\begin{itemize}
\item \textit{CXR for Severity Assessment}:
To utilize CXR for severity assessment, irrelevant, low-quality, and negative COVID-19 images need to be excluded prior to the analysis~\cite{ZhuShen}. CXRs are also used in Reference~\cite{AmerFrid-Adar} for severity assessment.
\item \textit{CT for Severity Assessment}: RT-PCR-confirmed CT scans are utilized in Reference~\cite{MergenKobe} for severity assessment, accompanied by clinical and laboratory data, as well as the need for oxygen supply and mechanical ventilation. CT scans are also utilized by Li \textit{et al.}~\cite{LiZhong} and divided into severe and non-severe groups; the non-severe cases may progress into the severe class during the treatment. Since severe and non-severe patients have different treatment regimens, the same grouping is performed in Reference~\cite{TangZhao}. CT scans from multiple centers are utilized by Ghosh \textit{et al.}~\cite{GhoshKumar} for a generalizable severity assessment. As stated previously, segmentation models are typically used to quantify different severity measures, such as PO, PHO, CT score, and LHOS, based on CT images. More specifically, to quantify (PO, CT score) and (PHO, LHOS), the model learns to segment COVID-19 infections and COVID-19 high-opacity infections, respectively. The whole lung region and the lobe regions are segmented to measure (PO, PHO) and (CT score, LHOS). For instance, Reference~\cite{HuangHan} segmented the lung regions and COVID-19 lesions using a U-Net-based commercial software and determined the PO in COVID-19 patients. Based on the clinical data, they could map the PO measure to the severity of the disease. It was concluded in this study that the median PO is $2.2$ ($0.4$, $7.1$) for patients in the moderate category, $28.9$ $\pm$ $19.2$ for patients in the severe group, and $49.6$ $\pm$ $14.8$ for patients in the critical category. Along a similar path, Reference~\cite{ShanGao} developed a VB-Net-based segmentation model using $249$ CT volumes of COVID-19 patients to segment the lung regions, lobes, and lung infections. Their model could quantify the PO measure with an error of $0.3$\% over a test set of $300$ CT volumes collected at a single hospital. Commercial U-Net-based software has also been used in several studies for quantification of COVID-19 abnormalities and determination of disease severity~\cite{MergenKobe, HuangHan}.
\end{itemize}
\onecolumn
\begin{landscape}
\begin{center}
\centering
\begin{footnotesize}
\begin{longtable}[c]{p{0.05\linewidth} p{0.05\linewidth} p{0.15\linewidth} p{0.1\linewidth} p{0.08\linewidth} p{0.1\linewidth} p{0.15\linewidth} p{0.1\linewidth}}\\
\caption{COVID-19 severity assessment models.}
\label{tab:SevirityModels}\\
\toprule
\textbf{Ref.} & \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Objective} & \textbf{Segmentation method} & \textbf{Type of assessment}\\
\midrule
\endfirsthead
\caption* {\textbf{Table \ref{tab:SevirityModels} Continued:} COVID-19 severity assessment models.}\\
\toprule
\textbf{Ref.} & \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Objective} & \textbf{Segmentation method} & \textbf{Type of assessment}\\
\midrule
\endhead
Ref.~\cite{GhoshKumar} & CT & 509 CT images from 101 COVID-19 patients & Multi-center & Diagnosis & Deep learning; Traditional; Manual & Hand-crafted radiomics\\
\\% LyuLiu ZhuShen AmerFrid-Adar YipKlanecek FengLiu CaiLiu LiArun
Ref.~\cite{LyuLiu} & CT & 51 patients & One hospital & Diagnosis & Automatic followed by manual adjustment & Volume calculation\\
\\
Ref.~\cite{HuangHan} & CT & 842 COVID-19 CT volumes for segmentation; 126 COVID-19 patients classified into four clinical stages: 6 mild, 94 moderate, 20 severe, and 6 critical cases & One hospital & Progression assessment & Deep learning & Volume calculation\\
\\
Ref.~\cite{LiZhong} & CT & 531 thick-section CT scans from 204 COVID-19 patients & One hospital & Progression assessment & Deep learning & Volume calculation\\
\\
Ref.~\cite{MergenKobe} & CT & 60 COVID-19 patients & One hospital & Correlation with clinical factors & Deep learning & Volume calculation\\
\\
Ref.~\cite{ZhuShen} & X-ray & 131 portable CXR from 84 COVID-19 patients & One public dataset & Diagnosis & Not required & Deep learning\\
\\
Ref.~\cite{TangZhao} & CT & 176 patients & 7 hospitals with different scanners & Diagnosis & Deep learning & Volume calculation\\
\\
Ref.~\cite{AmerFrid-Adar} & X-ray & 15756 COVID-19 images & Several public datasets & Progression assessment & Deep learning & Volume calculation\\
\\
Ref.~\cite{YipKlanecek} & CT & 1110 COVID-19 patients & One public dataset & Diagnosis & Traditional & Hand-crafted radiomics\\
\\
Ref.~\cite{FengLiu} & CT & 346 COVID-19 patients & Two hospitals & Progression assessment & Deep learning & Deep learning\\
\\
Ref.~\cite{CaiLiu} & CT & 99 COVID-19 patients & Two institutions & Correlation with clinical factors & Deep learning & Hand-crafted radiomics\\
\\
Ref.~\cite{LiArun} & X-ray & 468 COVID-19 patients & One hospital & Progression assessment & Not required & Siamese neural networks\\
\\
\hline
\end{longtable}
\end{footnotesize}
\end{center}
\end{landscape}
\twocolumn
\subsubsection{Severity Assessment Types}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.4]{classification.png}
\caption{Taxonomy of DL-based COVID-19 classification techniques.}\label{fig:tax_class}
\vspace{-.2in}
\end{figure}
Generally speaking, two types of severity assessment can be defined within the COVID-19 literature. The first is a classification approach, where different discrete labels are defined to assess the severity. The second type, however, aims at calculating the degree/portion of lung involvement as a measure of severity. Although the second type, referred to as ``\textit{quantification}'', is often followed by a classification paradigm, the degree of lung involvement is essentially embedded in the feature vector. Below, we further elaborate on these two severity assessment types:
\vspace{.1in}
\noindent
(i) \textit{\textbf{COVID-19 Severity Classification:}} Similar to most of the classification problems, COVID-19 severity classification can be solved using either hand-crafted or DL methods.
\begin{itemize}
\item \textit{Hand-crafted Radiomics}: While different engineered features have the potential to distinguish between severe and non-severe cases, Ghosh \textit{et al.}~\cite{GhoshKumar} proposed a hand-crafted feature, referred to as $L_{norm}$. This feature is defined using the maximum bone reference ($B$), the minimum air reference ($A$), and the mean gray-scale intensity of the lesion ($L$), as follows:
\begin{equation}
L_{norm}=100\times\frac{L-A}{B-A},\quad 0\leq L_{norm} \leq 100.
\end{equation}
The optimum cut-off value to distinguish between severe and non-severe cases using $L_{norm}$ is then obtained based on a Receiver Operating Characteristic (ROC) curve analysis. Other traditional hand-crafted features, such as first-order histogram features and/or texture-based ones, can be incorporated, followed by a regression model, to distinguish between severe and non-severe patients. First-order histogram features are also used in Reference~\cite{CaiLiu} for severity classification.
\item \textit{Deep Learning}: To identify discrete severity scores of COVID-19 patients, CNN-based models can alternatively be developed~\cite{ZhuShen}. A two-stage DL framework is proposed in Reference~\cite{FengLiu} for COVID-19 severity classification. In the first stage, CT scans are individually fed to a U-Net model, whose extracted features are stored for the second stage. In the second stage, the feature vectors are fed to a bi-directional LSTM model for the final classification.
\end{itemize}
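The $L_{norm}$ feature and its ROC-based cut-off selection can be sketched as follows (the air/bone references and the two score distributions are synthetic assumptions; the Youden index used here is one common way to pick the optimal operating point on the ROC curve):

```python
import numpy as np
from sklearn.metrics import roc_curve

def l_norm(L, A, B):
    """Normalized lesion intensity: 0 at the air reference, 100 at bone."""
    return 100.0 * (L - A) / (B - A)

rng = np.random.default_rng(2)
A, B = -1000.0, 1000.0                     # toy air/bone HU references
# Toy mean lesion HU values for severe vs. non-severe patients:
severe = l_norm(rng.normal(-200, 80, 100), A, B)
non_severe = l_norm(rng.normal(-500, 80, 100), A, B)
scores = np.concatenate([severe, non_severe])
labels = np.concatenate([np.ones(100), np.zeros(100)])

fpr, tpr, thr = roc_curve(labels, scores)
cutoff = thr[np.argmax(tpr - fpr)]         # Youden-index optimal cut-off
```

Patients whose $L_{norm}$ exceeds `cutoff` would then be flagged as severe.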
\vspace{.1in}
\noindent
(ii) \textit{\textbf{Severity Assessment via Quantification:}} Although quantification is performed by calculating the lung and infection volume in most of the studies, it is also possible to adopt a different approach such as using a Siamese neural network. Below, we discuss recent works performed along these two directions:
\begin{itemize}
\item \textit{Quantification via Volume Calculation}: To quantify the COVID-19 severity, Reference~\cite{MergenKobe} calculated PO and PHO. POI and the average infection HU are calculated in Reference~\cite{LiZhong}, to quantify the severity, followed by dividing the patients into two groups of severe and non-severe. Infection and GGO ratio are calculated in Reference~\cite{TangZhao}. These two measures, along with several other quantitative features, are further fed to a Random Forest (RF) classifier, to classify patients as severe and non-severe.
\item \textit{Quantification via Siamese Neural Networks}: This approach consists of two identical models, in terms of weights and parameters, with the goal of finding the similarity between the two inputs. Besides having several other applications, Siamese models, in particular convolutional Siamese neural networks, can be adopted for COVID-19 severity assessment~\cite{LiArun}.
In such scenarios, the Euclidean distance between the two final layers is calculated as a measure of difference between the inputs. Therefore, the distance between a COVID-19 and normal scan can show the degree of abnormality. Utilizing a pool of normal images, the median distance can represent the severity.
\end{itemize}
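A minimal sketch of the Siamese-distance severity score described above (the toy encoder is untrained and its architecture and pool size are illustrative assumptions; in~\cite{LiArun} the shared network is trained on CXRs):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared (Siamese) embedding network applied to both inputs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(128, 32))

    def forward(self, x):
        return self.net(x)

def severity_score(encoder, scan, normal_pool):
    """Median Euclidean distance between the scan's embedding and a pool of
    normal scans; a larger distance suggests greater abnormality."""
    with torch.no_grad():
        z = encoder(scan)                  # (1, 32)
        zn = encoder(normal_pool)          # (N, 32)
        return torch.cdist(z, zn).median().item()

enc = Encoder()                            # identical weights for both branches
score = severity_score(enc, torch.randn(1, 1, 64, 64),
                       torch.randn(8, 1, 64, 64))
```

Using the median over a pool of normal images, as above, makes the score robust to individual atypical references.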
\subsection{COVID-19 Classification Models}\label{subsec:Calss}
Development of DL-based COVID-19 classification models can be approached from four main perspectives, as shown in Fig.~\ref{fig:tax_class}. The first aspect is the annotation dependency of the developed frameworks. The second is whether the proposed methods consider a binary or multi-class classification, followed by the third perspective focusing on the imaging modality used for the classification, as different solutions are admissible depending on the modality. The DL model architecture is another important aspect of the developed frameworks. Next, we discuss these four categories in detail.
Table~\ref{tab:ClassModels} summarizes how different studies approach the aforementioned categories.
\subsubsection{Annotation Dependency}
Annotation dependency refers to whether the developed COVID-19 classification models rely on annotated images as inputs. Annotation can be related to either segmenting the whole lung region or the infected areas from the chest image. In this regard, we categorize studies into three groups: (i) no annotation required, (ii) lung segmentation required, and (iii) infection segmentation required. These three groups are discussed in the following:
\onecolumn
\begin{landscape}
\begin{center}
\centering
\begin{tiny}
\begin{longtable}[c]{p{0.05\linewidth} p{0.05\linewidth} p{0.15\linewidth} p{0.1\linewidth} p{0.08\linewidth} p{0.1\linewidth} p{0.15\linewidth} p{0.1\linewidth}}\\
\caption{COVID-19 classification models.}
\label{tab:ClassModels}\\
\toprule
\textbf{Ref.} & \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Number of classes} & \textbf{Architecture} & \textbf{Bias and over-fitting prevention} & \textbf{Annotation dependency}\\
\midrule
\endfirsthead
\caption* {\textbf{Table \ref{tab:ClassModels} Continued:} COVID-19 classification models.}\\
\toprule
\textbf{Ref.} & \textbf{Input data} & \textbf{Dataset size} & \textbf{Dataset diversity} & \textbf{Number of classes} & \textbf{Architecture} & \textbf{Bias and over-fitting prevention} & \textbf{Annotation dependency}\\
\midrule
\endhead
Ref.~\cite{MeiLee} & CT & 419 COVID-19; 486 Non-COVID & 18 centers & Binary & Inception and ResNet CNN & Data Augmentation; Transfer Learning & Lung segmentation\\
\\
Ref.~\cite{ArdakaniRajabzadeh} & CT & 108 COVID-19; 86 Non-COVID & One hospital & Binary & ResNet CNN & Transfer learning & Infection segmentation\\
\\
Ref.~\cite{Rahimzadeh:2020} & X-ray & 118 COVID-19; 6054 Non-COVID; 8851 Normal & Two public datasets & Multi-class & Xception and ResNet & Data Augmentation; Transfer Learning & Not required \\
\\
Ref.~\cite{BaiWang} & CT & 521 COVID-19; 665 Non-COVID & COVID from 10 hospitals; Non-COVID from 3 hospitals & Binary & EfficientNet CNN & Data augmentation; Transfer learning & Lung segmentation\\
\\
Ref.~\cite{CastiglioniIppolito} & X-ray & 250 COVID-19; 250 Non-COVID & 2 hospitals & Binary & ResNet CNN & Data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{LiQin} & CT & 4356 chest CT exams from 3322 patients & 6 hospitals & Multi-class & ResNet50 CNN & Data augmentation & Lung segmentation\\
\\
Ref.~\cite{AbbasAbdelsamea} & X-ray & 105 COVID-19; 11 Non-COVID; 80 Normal & Multi-center & Multi-class & CNN & Data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{WaheedGoyal} & X-ray & 403 COVID-19; 721 Normal & Multi-center & Binary & VGG CNN & Gan-based data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{AfsharHeidarian} & X-ray & 266 COVID-19; 5538 Non-COVID; 8066 Normal & Multi-center & Binary & Capsule network & Loss modification; Transfer learning & Not required\\
\\
Ref.~\cite{WangKang} & CT & 44 COVID-19; 55 Non-COVID & 3 hospitals & Binary & Inception CNN; Ensemble of classifiers & Transfer learning & Infection segmentation\\
\\
Ref.~\cite{WangWong} & X-ray & 266 COVID-19; 5538 Non-COVID; 8066 Normal & Multi-center & Multi-class & CNN & Data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{GozesFrid2} & CT & 106 COVID-19; 100 Normal & Multi-center & Binary & ResNet CNN & Data augmentation; Transfer learning & Lung segmentation\\
\\
Ref.~\cite{FarooqHafeez} & X-ray & 45 COVID-19; 1591 Non-COVID; 1023 Normal & Multi-center & Multi-class & ResNet CNN & Data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{NarinKaya} & X-ray & 50 COVID-19; 50 Normal & One public dataset & Binary & ResNet CNN & Transfer learning & Not required\\
\\
Ref.~\cite{OzturkTalo} & X-ray & 127 COVID-19; 500 Non-COVID; 500 Normal & Two public datasets & Multi-class; Binary & DarkNet (YOLO base model) & NA & Not required\\
\\
Ref.~\cite{KarimDohmen} & X-ray & 266 COVID-19; 5538 Non-COVID; 8066 Normal & Three public datasets & Multi-class & Ensemble CNN & Data augmentation & Not required\\
\\
Ref.~\cite{AmyarModzelewski} & CT & 449 COVID-19; 495 Non-COVID; 425 Normal & 3 hospitals & Multi-class & Encoder-decoder CNN & Multi-task learning & Not required\\
\\
Ref.~\cite{NarayanKumar} & CT & 127 COVID-19; 500 Non-COVID; 500 Normal & Two public datasets & Multi-class & Xception CNN & Transfer learning & Not required\\
\\
Ref.~\cite{IslamIslam} & X-ray & 1525 COVID-19; 1525 Non-COVID; 1525 Normal & Several public datasets & Multi-class & LSTM & NA & Not required\\
\\
Ref.~\cite{HemdanShouman} & X-ray & 25 COVID-19; 25 Non-COVID & Two public datasets & Binary & CNN & NA & Not required\\
\\
Ref.~\cite{ZhangXie} & X-ray & 599 COVID-19; 24622 Non-COVID; 18881 Normal & NA & One-class anomaly detection & EfficientNet CNN & Data augmentation; Transfer learning & Not required\\
\\
Ref.~\cite{GozesFrid} & CT & 159 COVID-19; 90 Non-COVID & Multi-center & Binary & ResNet CNN & Data augmentation; Transfer learning & Lung segmentation\\
\\
Ref.~\cite{HuGao} & CT & 150 COVID-19; 150 Non-COVID; 150 Normal & Two hospitals & Multi-class & Multi-scale CNN & Loss modification; Data augmentation & Lung segmentation\\
\\
Ref.~\cite{YingZheng} & CT & 88 COVID-19; 101 Non-COVID; 86 Normal & Hospitals of two provinces in China & Multi-class & CNN & NA & Lung segmentation\\
\\
Ref.~\cite{WangDeng} & CT & 540 COVID-19; 229 Normal & One hospital & Binary & CNN & Data augmentation & Lung segmentation\\
\\
Ref.~\cite{OhPark} & X-ray & 180 COVID-19; 131 Non-COVID; 191 Normal & Several public datasets & Multi-class & ResNet CNN & Transfer learning & Lung segmentation\\
\\
Ref.~\cite{SedikIliyasu} & X-ray; CT & 288 COVID-19; 288 Normal & NA & Binary & LSTM & Gan-based data augmentation & Not required\\
\\
Ref.~\cite{XuJiang} & CT & 110 COVID-19; 224 Non-COVID; 175 Normal & 3 hospitals & Multi-class & CNN & Changing sampling probability & Infection segmentation\\
\\
Ref.~\cite{MohammedWang} & CT & 20 COVID-19; 282 Non-COVID & One public dataset & Multi-class & Bi-directional LSTM; CNN & Changing sampling probability; Data augmentation & Lung segmentation\\
\\
Ref.~\cite{YangJiang} & CT & 146 COVID-19; 149 Normal & One hospital & Binary & CNN & Data augmentation & Lung segmentation\\
\\
Ref.~\cite{MengDong} & CT & 366 COVID-19 & 4 centers & Binary & 3D CNN with integrated clinical data & Loss modification; Data augmentation & Lung segmentation\\
\\
Ref.~\cite{HeidarianAfshar} & CT & 171 COVID-19; 60 Non-COVID; 76 Normal & One center & Binary & Capsule network & Loss modification & Lung segmentation\\
\\
Ref.~\cite{HeidarianAfshar2} & CT & 171 COVID-19; 60 Non-COVID; 76 Normal & One center & Binary & Capsule network & Loss modification & Lung segmentation\\
\bottomrule
\end{longtable}
\end{tiny}
\end{center}
\end{landscape}
\twocolumn
\vspace{.1in}
\noindent
\textbf{\textit{COVID-19 Classification without Annotation:}} Studies that do not include any segmentation as a pre-processing step essentially feed the developed model with raw images. As CXR images are single slices and simpler to process than CT scans, they are utilized without annotation in most of the studies. Reference~\cite{OzturkTalo} is an example of such studies, where raw CXR images are fed to a DL model for binary and multi-class COVID-19 classification. Narayan Das \textit{et al.}~\cite{NarayanKumar} and Islam \textit{et al.}~\cite{IslamIslam} also utilized CXR images without annotation for a three-way COVID-19 classification. Although using CT scans, Reference~\cite{AmyarModzelewski} is independent of segmented inputs; it exploits annotation labels in the output layer to develop a multi-task training framework. In other words, this work targets both classification and segmentation.
\vspace{.1in}
\noindent
\textbf{\textit{Lung Segmentation for COVID-19 Classification:}} Lung segmentation is the first step in many COVID-19 classification studies, as it eliminates non-essential information. Gozes \textit{et al.}~\cite{GozesFrid}, for instance, used a pre-trained U-Net model for this task. Since the segmentation model should be able to annotate lungs even in the presence of COVID-19 opacities, the U-Net model is fine-tuned on a dataset of interstitial lung disease cases. More advanced lung segmentation models are also used in COVID-19 studies. Reference~\cite{HuGao}, for instance, has proposed a multi-window U-Net that incorporates several windows instead of the standard Hounsfield unit (HU) window. Furthermore, this study uses a sequential information attention module to integrate all CT slices.
\vspace{.1in}
\noindent
\textbf{\textit{Infection Segmentation for COVID-19 Classification:}} Besides lung segmentation, some COVID-19 classification studies rely on segmenting the pulmonary regions of infection. Xu \textit{et al.}~\cite{XuJiang}, for instance, have used a 3D CNN model trained on pulmonary tuberculosis for infection segmentation. Although this model is not trained on a COVID-19 dataset, it can still extract candidate patches. The annotation results are consequently used to form cubic patches around the regions of infection, which are then fed to the classification model. Based on COVID-19 characteristics, such as GGO, Wang \textit{et al.}~\cite{WangKang} have manually delineated the CT scans to extract all the ROIs, from which 2--3 patches are randomly selected as inputs to the CNN model for classification purposes.
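The patch-extraction step described above can be sketched as follows; this is a minimal illustration assuming hypothetical NumPy arrays for the CT volume and a binary infection mask, not the exact procedure of~\cite{XuJiang} or~\cite{WangKang}:

```python
import numpy as np

def extract_patch(volume, mask, size=32):
    """Crop a cubic patch centered on the infection mask's centroid.

    Illustrative sketch only; `volume` and `mask` are hypothetical
    (D, H, W) arrays, with `mask` marking infected voxels.
    """
    coords = np.argwhere(mask)
    center = coords.mean(axis=0).astype(int)     # centroid of infected voxels
    half = size // 2
    # Clamp the lower corner so the patch stays inside the volume
    lo = np.clip(center - half, 0, np.array(volume.shape) - size)
    d, h, w = lo
    return volume[d:d + size, h:h + size, w:w + size]
```

In practice, a connected-component analysis would separate multiple infection regions before cropping one patch per region.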
\subsubsection{COVID-19 Classification Types}
Binary or multi-class COVID-19 classification refers to whether COVID-19 is classified against all other possible categories grouped into a single class, or all the classes are treated separately. These two approaches are investigated in the following:
\vspace{.1in}
\noindent
\textbf{\textit{Binary COVID-19 Classification Problems:}} Reference~\cite{OzturkTalo} is an example of binary classification, where the goal is to distinguish between COVID-19 and non-COVID cases. Non-COVID cases include both normal and pneumonia patients. Reference~\cite{NarinKaya} explores three different COVID-19-related binary classification problems, in each of which COVID-19 is classified against a different class: viral pneumonia, bacterial pneumonia, or normal. The obtained results show that COVID-19 is most distinguishable from bacterial pneumonia. Besides positive and negative COVID-19 labels, patients can be classified based on other clinical outcomes. Meng \textit{et al.}~\cite{MengDong}, for instance, consider high and low risk as the binary classification labels.
\vspace{.1in}
\noindent
\textbf{\textit{Multi-class COVID-19 Classification Problems:}} Reference~\cite{OzturkTalo}, beside considering a binary classification problem, also solves a multi-class classification with three classes: COVID-19, pneumonia, and normal. The obtained accuracy, however, is lower than in the binary scenario. The same categorization is followed in~\cite{IslamIslam}. Reference~\cite{AmyarModzelewski} also followed a three-way classification, with the difference that all diseases other than COVID-19 are grouped into an ``others" class to be classified against COVID-19 and normal subjects. COVID-19, pneumonia, and other diseases are considered as three separate classes in Reference~\cite{NarayanKumar}. Since Reference~\cite{XuJiang} has used annotated infection patches as inputs to a CNN model, it also considers an irrelevant-to-infection class to exclude incorrectly segmented areas.
It is worth mentioning that, unlike the binary and multi-class approaches, COVID-19 classification is considered as one-class anomaly detection in Reference~\cite{ZhangXie}, where the model's output is the anomaly score of the input, along with a confidence score that determines the model's confidence in its prediction. Consequently, subjects with a high anomaly score or a low confidence score are considered COVID-19 positive.
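The decision rule just described can be sketched as a simple thresholding of the two scores; the threshold values below are hypothetical placeholders, not values from~\cite{ZhangXie}:

```python
def flag_covid(anomaly_score, confidence, s_thresh=0.5, c_thresh=0.5):
    """One-class decision rule sketched from the description above: a
    subject is flagged COVID-19 positive when the anomaly score is high
    OR the model's confidence in its prediction is low.

    `s_thresh` and `c_thresh` are hypothetical thresholds.
    """
    return anomaly_score > s_thresh or confidence < c_thresh
```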
\subsubsection{Imaging Modality used for COVID-19 Classification}
CXR and CT are two common imaging modalities considered in the COVID-19 classification studies. These two modalities, however, require different processing strategies, as described below:
\vspace{.1in}
\noindent
(i) \textbf{\textit{COVID-19 Classification via CXR Images:}} CXR images are 2D and, as such, processing techniques to capture the relation between multiple images are not required; CXR images can serve as independent inputs to a DL model. References~\cite{OzturkTalo,NarayanKumar,IslamIslam} are examples of using CXR images for classification tasks. Unlike most COVID-19 classification methods using CXR, which incorporate the whole image at once, Oh \textit{et al.}~\cite{OhPark} extract several random patches from the input image and feed them individually to the DL model. The final decision is a majority vote over all the obtained outcomes.
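The patch-based majority voting of Oh \textit{et al.} can be sketched as follows, assuming a hypothetical array of per-patch softmax outputs:

```python
import numpy as np

def majority_vote(patch_probs):
    """Combine per-patch class probabilities by majority voting.

    `patch_probs` is a hypothetical (num_patches, num_classes) array of
    softmax outputs, one row per random patch of the CXR image.
    """
    votes = patch_probs.argmax(axis=1)   # per-patch predicted class
    return np.bincount(votes).argmax()   # most frequent class wins
```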
\vspace{.1in}
\noindent
(ii) \textbf{\textit{COVID-19 Classification via CT Scans:}} Unlike CXR images, CT scans are 3D in the sense that each patient is associated with several 2D slices. As a result, analyzing CT scans requires specific strategies, the first of which is slice-level classification, where slices are treated independently with the goal of assigning labels to separate slices. Patient-level classification, on the other hand, tries to make the final decision using all the available slices.
\begin{itemize}
\item \textit{Slice-level Classification}: Reference~\cite{AmyarModzelewski}, as an example of a slice-level classification algorithm, uses separate slices as inputs to a DL model, where slices are gathered from three different data sources and pre-processed to have consistent size, resolution, and contrast. Reference~\cite{HuGao} assigned patient-level labels to all the slices and leveraged a 2D CNN model. This strategy, however, can cause inconsistency when a slice without any visible manifestation is assigned a COVID-19 or pneumonia label. References~\cite{YangJiang,GozesFrid2} are other examples of slice-level classification models, where target slices are manually selected to train the CNN model. At test time, however, these studies average over all the probabilities to form the patient-level classification. Therefore, the underlying studies can be considered as hybrids of slice-level and patient-level classification, bringing us to the next part, i.e., patient-level classification.
\item \textit{Patient-level Classification}: Patient-level classification using CT scans requires a voting strategy to combine the slice-level outcomes. The voting mechanism is of particular importance as the whole CT volume cannot be typically processed at once. Different voting mechanisms have been developed in the literature including the following items:
\begin{itemize}
\item \textit{Volumetric Scoring:} In Reference~\cite{GozesFrid}, 2D slices are first processed to form the slice-level outcomes. Summing over the activation maps of the detected positive slices then yields a volumetric score, where only activations above a pre-defined threshold are considered in the summation. The obtained COVID-19 score can also be interpreted as the extent of the disease in a patient's lungs.
\item \textit{Pooling Operations:}
For patient-level classification, one approach~\cite{YingZheng} is to combine different models (e.g., parallel CNNs) in a parallel architecture. Results from individual slices can then be aggregated through pooling operations. Similarly, Li \textit{et al.}~\cite{LiQin} incorporate parallel CNNs, results of which are aggregated through a max pooling operation.
\item \textit{Whole CT Volume:} To leverage the information from all the CT scans and capture their relations, Wang \textit{et al.}~\cite{WangDeng} feed their developed CNN model with the whole CT volume, which is concatenated with the segmented lung mask. The same strategy of feeding the whole CT volume is also used in Reference~\cite{MengDong}.
\item \textit{Bayesian Merging:} A Noisy-or Bayesian function is adopted in Reference~\cite{XuJiang} to combine outcomes of several infection patches.
\item \textit{RNN-based Merging:} Using Recurrent Neural Networks (RNNs) is another strategy to combine the slice-level information and consider the spatial relations. This group of models are discussed in Section~\ref{sec:arch}.
\item \textit{Multi-stage Frameworks:} Designing a multi-stage framework is also a common patient-level classification approach. Mei \textit{et al.}~\cite{MeiLee}, for instance, have designed a two-stage workflow, where in the first stage abnormal slices are detected using a pre-trained pulmonary tuberculosis (PTB) detection model. The top ten candidate slices are then fed to another CNN, in stage two, to identify slices with positive COVID-19. The final outcome is set as the average of the slice-level predictions of a patient's ten most abnormal candidates. The same multi-stage strategy is followed in Reference~\cite{HeidarianAfshar}, with the difference that the CNNs are replaced with capsule networks, as described in Section~\ref{sec:arch}.
\end{itemize}
\end{itemize}
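Two of the aggregation mechanisms above, volumetric scoring and pooling, can be sketched as follows; array shapes and thresholds are hypothetical, and the sketch follows the textual descriptions rather than the exact implementations of the cited works:

```python
import numpy as np

def volumetric_score(slice_maps, slice_probs, act_thresh=0.5):
    """Volumetric COVID-19 score in the spirit of Gozes et al.: sum the
    activation-map values of slices predicted positive, keeping only
    activations above a threshold.

    Hypothetical shapes: slice_maps (S, H, W), slice_probs (S,).
    """
    positive = slice_probs > 0.5           # slice-level decisions
    maps = slice_maps[positive]
    return maps[maps > act_thresh].sum()   # extent-of-disease score

def max_pool_patient(slice_probs):
    """Patient-level probability as a max over slice probabilities,
    mimicking the max-pooling aggregation described for Li et al."""
    return slice_probs.max()
```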
\subsubsection{DL Architectures for COVID-19 Classification}\label{sec:arch}
Although different DL architectures are applicable to the task of image classification, in the COVID-19 scenario, discriminative models including CNNs, RNNs, and capsule networks are the most commonly used ones. These networks and how they are incorporated in COVID-19 classification studies are explained below.
\vspace{.05in}
\noindent
\textit{\textbf{CNN-based COVID-19 Classification Models:}} CNNs are stacks of convolutional and pooling layers, often followed by fully connected ones. Since the trainable filters share weights across the whole image, these networks are computationally efficient and can extract local features from the input. CNNs have shown promising results in the field of image processing, including COVID-19 classification. Although it is possible to design a CNN from scratch, most of the studies have built their models upon pre-existing successful CNN models, as described below:
\begin{itemize}
\item \textit{Pre-existing CNN models}: Since the start of the outbreak, the following pre-existing CNN models have been adopted for COVID-19 classification:
\begin{itemize}
\item \textit{Darknet-19 Model:} The DarkCovidNet model proposed in Reference~\cite{OzturkTalo} is a modification of the Darknet-19 model, which is the basis of the YOLO object detection system. The proposed DarkCovidNet consists of $17$ convolutional layers, which are followed by pooling layers, and eventually one fully connected layer for the final classification.
\item \textit{Inception Model:} The Inception model is another CNN commonly utilized in COVID-19 studies. Reference~\cite{NarayanKumar}, for instance, utilizes the extreme version of the Inception model, referred to as Xception. Two other variations of Inception, referred to as InceptionV3 and Inception-ResNetV2, are exploited in Reference~\cite{NarinKaya}, along with three variations of the popular ResNet architecture, namely ResNet50, ResNet101, and ResNet152. The obtained results show superior performance for the ResNet50 model.
\item \textit{ResNet Models:} ResNet50 is the basis of the model proposed in Reference~\cite{FarooqHafeez}, referred to as COVID-ResNet. This model is trained in three stages, where in each stage the image size is gradually increased. The ResNet50 model of Reference~\cite{GozesFrid} is followed by a Grad-CAM localization to verify the pathological areas the model focuses on during training. The resulting map can provide insights to the radiologist.
\item \textit{EfficientNet}, utilizing compound coefficients to scale up CNNs, is another architecture used for COVID-19 classification in Reference~\cite{ZhangXie}.
\item \textit{Ensemble Models:} Besides adopting the pre-existing CNN architectures, it is also possible to develop ensemble frameworks to leverage the potentials of different CNN models.
\end{itemize}
\item \textit{Self-designed CNN models}: Based on the identified requirements, some studies have designed their own specific CNN models for COVID-19 classification. A multi-scale CNN, for instance, is proposed in Reference~\cite{HuGao}, where intermediate CNN representations are aggregated through a global max pooling operation to make the final decision. The self-designed CNN model proposed by Wang \textit{et al.}~\cite{WangDeng} consists of three subsequent blocks, the first of which is a vanilla 3D CNN, followed by a residual block. The last part is a progressive classifier, containing convolutional and fully-connected layers. Besides focusing on designing the layers of a CNN, another strategy is to feed the model with information other than the raw image. Such a strategy is leveraged in Reference~\cite{XuJiang}, where the distance between the center of infection and the pleura is concatenated with a fully-connected layer. This distance can contribute to a more accurate classification, as COVID-19 infection has a pleural distribution, partly distinguishing it from other diseases. Meng \textit{et al.}~\cite{MengDong} utilized patients' clinical factors, such as gender, age, and chronic disease history, as additional information to be concatenated with the CNN's fully connected layer. More heterogeneous factors, including travel and exposure history and symptomatology, are incorporated in the model designed by Mei \textit{et al.}~\cite{MeiLee}.
\end{itemize}
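The feature-fusion idea shared by~\cite{XuJiang,MengDong,MeiLee}, concatenating non-image information with CNN features before a fully connected layer, can be sketched in NumPy; all shapes, weights, and feature values below are hypothetical:

```python
import numpy as np

def fuse_features(image_features, clinical, w, b):
    """Concatenate CNN image features with extra non-image factors
    (e.g., infection-to-pleura distance, age, exposure history) and pass
    the result through a single fully connected layer.

    Hypothetical shapes: image_features (d_img,), clinical (d_cln,),
    w (n_out, d_img + d_cln), b (n_out,).
    """
    x = np.concatenate([image_features, clinical])
    return w @ x + b  # logits of the fused classifier head
```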
\vspace{.05in}
\noindent
\textit{\textbf{RNN-based COVID-19 Classification Models:}} RNNs are especially useful in medical imaging when the goal is to process a whole volume or analyze follow-up studies. Since RNNs are subject to the problem of vanishing gradients, LSTM networks are commonly used as an effective alternative. The vanilla LSTM is not designed for extracting local features from images, and as such this network is often combined with a CNN to make use of its weight-sharing advantages. Such a model is utilized in Reference~\cite{IslamIslam} for COVID-19 classification, resulting in a CNN-LSTM design. In the underlying study, $12$ convolutional layers are first incorporated to extract features from CXR images. The output of the CNN is then fed to an LSTM, the result of which determines the probability of the COVID-19, pneumonia, and normal classes. While a conventional LSTM considers only forward relations, bi-directional LSTMs additionally take backward relations into account. Such models are incorporated in Reference~\cite{MohammedWang} for COVID-19 classification.
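For illustration, a single LSTM step over the CNN features of one slice can be written in plain NumPy; the shapes and the gate ordering below are hypothetical conventions, not taken from the cited works:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over the CNN feature vector of a single slice.

    Hypothetical shapes: x (d,), h and c (n,), W (4n, d), U (4n, n),
    b (4n,). Gates are stacked as [input, forget, output, candidate].
    """
    z = W @ x + U @ h + b
    n = h.shape[0]
    i, f, o, g = z[:n], z[n:2 * n], z[2 * n:3 * n], z[3 * n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # updated cell state
    h_new = sigmoid(o) * np.tanh(c_new)               # hidden state output
    return h_new, c_new
```

Iterating this step over the slice sequence, and feeding the final hidden state to a softmax layer, yields the class probabilities described above.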
\vspace{.05in}
\noindent
\textit{\textbf{CapsNet-based COVID-19 Classification Models:}} CapsNets are relatively new deep learning architectures, proposed to address the inability of CNNs to capture spatial relations. Each capsule in a CapsNet consists of several neurons that represent an object's instantiation parameters, as well as its existence probability. The main feature of the CapsNet is its routing-by-agreement process, through which capsules in a lower layer predict the outcome of capsules in the next layer. The parent capsules take these predictions into account based on the similarity (agreement) between the prediction and the actual outcome. Using routing by agreement, a CapsNet is capable of recognizing spatial relations between image instances, and can therefore handle much smaller datasets compared to CNNs. Reference~\cite{AfsharHeidarian} has recently exploited CapsNets for the problem of COVID-19 classification using CXR, showing improvements over the CNN counterparts. The proposed architecture, referred to as COVID-CAPS, consists of several convolutional, pooling, and capsule layers, the output of which determines the probability of positive COVID-19.
\section{Challenges, Open Problems, and Opportunities} \label{sec:COO}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.7\textwidth]{DL-challenges.png}
\caption{Challenges and possible solutions in developing COVID-19 diagnosis/prognosis Models.}
\label{fig:tax_challenge}
\end{figure*}
In this section, first, we focus on limitations and challenges of developing COVID-19 diagnosis/prognosis models as shown in Fig.~\ref{fig:tax_challenge}. Then, we discuss open problems and potential opportunities for SP research by highlighting problems and challenges of developing SP/DL models for COVID-19 management.
\subsection{Challenges in Developing COVID-19 Diagnosis/Prognosis Models}
The ultimate goal of developing COVID-19 diagnosis/prognosis models is their use in clinical applications to reduce the healthcare system's workload during pandemic conditions. Some models proposed for the diagnosis and prognosis of COVID-19 have shown successful results in real applications and enhanced the performance of junior radiologists to a senior level~\cite{ZhangXiaohong}. However, common issues such as the risk of bias and over-fitting may cause poor generalization of such models. The leading causes of these issues are: (i) Lack of sufficient data; (ii) Lack of labeled/annotated data, and; (iii) Imbalanced datasets. These three categories are described below, together with solutions developed in COVID-19 studies to overcome them:
\subsubsection{Lack of Sufficient Data:} There is no doubt that preparing a high-quality dataset is the most critical part of developing a data-driven model. Collecting sufficient data for training robust COVID-19 models is challenging because: (i) COVID-19 is a newly emerging disease; (ii) Restrictions are imposed to preserve patients' privacy, and; (iii) Health centers have strict data sharing protocols. On the other hand, despite the robustness of CNNs in hierarchically extracting high-value features from images, they cannot recognize the spatial relationships between those features. Due to the various shapes and complex appearances of COVID-19 lesions, a large number of chest medical images are required to avoid over-fitting and ensure model generalization. Data augmentation, transfer learning, multi-task learning, and the use of Capsule networks are some solutions to tackle these problems.
\vspace{.05in}
\noindent
\textit{Data Augmentation} compensates for the lack of a large training dataset by generating several variations of the original samples~\cite{GozesFrid}. Random cropping, zooming, and flipping, for instance, are applied to the samples in Reference~\cite{ZhangXie}. Flipping, random rotation, and lighting are also adopted in Reference~\cite{FarooqHafeez} to further enlarge the training set. Adding random Gaussian noise is another data augmentation technique that has been used in Reference~\cite{WangLiu}. Other than applying different transformations to the dataset, Generative Adversarial Networks (GANs) can be used to generate new instances~\cite{SedikIliyasu}.
GANs produce fake images and force the model to discriminate them from the original ones, which makes the model robust to unseen images. Such a strategy is utilized in Reference~\cite{SedikIliyasu}, where a convolutional GAN is leveraged for COVID-19 data augmentation. Similarly, a conditional GAN, referred to as COVIDGAN, is proposed in Reference~\cite{WaheedGoyal} for CXR data augmentation.
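Two of the basic transformations mentioned above, horizontal flipping and additive Gaussian noise, can be sketched as follows; the flip probability and noise level are hypothetical choices, and real pipelines also apply cropping, zooming, and rotation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Minimal augmentation sketch: random horizontal flip plus additive
    Gaussian noise. `image` is a hypothetical 2D array."""
    if rng.random() < 0.5:
        image = image[:, ::-1]                  # horizontal flip
    noise = rng.normal(0.0, 0.01, image.shape)  # mild Gaussian noise
    return image + noise
```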
\vspace{.05in}
\noindent
\textit{Transfer Learning} refers to pre-training a model on an external dataset with the goal of encouraging the model to learn meaningful filters. The model is then fine-tuned on the main dataset, which may be too small to train the model independently. Transfer learning has shown promising results especially in the field of medical imaging, where large datasets are scarce. While most of the existing studies utilize natural image datasets for pre-training, it is also possible to leverage similar medical samples, as described below:
\begin{itemize}
\item \textit{Natural Image Dataset:} Reference~\cite{NarayanKumar} utilized transfer learning to fine-tune a pre-trained Xception model using a COVID-19 dataset. Transfer learning is also explored in Reference~\cite{NarinKaya} to pre-train five well-known CNN models, namely ResNet50, ResNet101, ResNet152, InceptionV3 and Inception-ResNetV2. ImageNet is the common choice for pre-training the CNN models~\cite{ZhangXie,FarooqHafeez,GozesFrid}. Some COVID-19 segmentation models incorporated an encoder pre-trained on ImageNet in their segmentation models to achieve more accurate results~\cite{QiuLiu}.
\item \textit{Medical Datasets:} Although pre-training with natural image datasets is very common in COVID-19 classification, it is also possible to leverage similar medical datasets, which have the advantage of providing more useful filters and features. Such a strategy was recently adopted by Afshar \textit{et al.}~\cite{AfsharHeidarian}, where the proposed CapsNet is pre-trained on a CXR dataset collected for a completely different task. In the fine-tuning phase, all the convolutional layers are kept fixed and only the capsule layers are re-trained on the COVID-19 dataset. Reference~\cite{MaWang} trained their COVID-19 segmentation network on a dataset containing $80\%$ lung cancer CT images and $20\%$ COVID-19 CT samples. However, the model failed to segment the COVID-19 regions of infection in the test set due to the significant appearance differences between lung cancer tumors and COVID-19 lesions.
\end{itemize}
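The layer-freezing strategy of Afshar \textit{et al.} (convolutional layers kept fixed, capsule layers re-trained) can be sketched as a simple filter over parameter names; the parameter names below are hypothetical, not those of the actual COVID-CAPS implementation:

```python
def select_trainable(params, frozen_prefixes=("conv",)):
    """Return the parameter names to re-train during fine-tuning: any
    layer whose name starts with a frozen prefix keeps its pre-trained
    weights, mirroring the conv-frozen / capsule-retrained split
    described above. Names are hypothetical."""
    return [name for name in params if not name.startswith(frozen_prefixes)]
```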
\vspace{.05in}
\noindent
\textit{Multi-task Learning} is a popular strategy to leverage the information available in several related tasks and make use of small datasets associated with different end goals. Multi-task learning has been shown to be effective in reducing over-fitting and can be further divided into two categories, i.e., (i) Hard parameter sharing, and; (ii) Soft parameter sharing. In the former category, different tasks explicitly share several layers. In the latter, however, separate models are trained for separate tasks and the parameters are encouraged to take close values. With this in mind, Amyar \textit{et al.}~\cite{AmyarModzelewski} proposed a DL model to perform COVID-19 classification, segmentation, and reconstruction at the same time, using the hard parameter sharing strategy. The model begins with an encoder that encodes the input CT slices into a latent space for subsequent analysis. For the segmentation and reconstruction tasks, the latent space is decoded back into the original feature space. In the classification scenario, however, the latent space goes through an MLP for the final three-way classification. It is also worth mentioning that the encoder-decoder architecture follows the well-known U-Net design. While Mean Squared Error (MSE) is utilized for the reconstruction part, Dice-score and cross-entropy losses are adopted for segmentation and classification, respectively. The final loss is the sum of all three losses.
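The combined loss just described can be sketched as follows; the soft Dice formulation and the equal (unit) weighting follow the textual description, while all shapes are hypothetical:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for the segmentation head; `pred` and `target`
    are hypothetical probability/binary masks of the same shape."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multitask_loss(recon_mse, seg_pred, seg_target, cls_ce):
    """Sum of the three task losses (MSE reconstruction, Dice
    segmentation, cross-entropy classification), as in the hard
    parameter sharing model described above."""
    return recon_mse + dice_loss(seg_pred, seg_target) + cls_ce
```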
\vspace{.05in}
\noindent
\textit{Capsule Networks} are less data-demanding than CNNs and can be trained using smaller datasets. The incorporation of Capsule Networks in COVID-19 diagnosis models, and their superiority when only a limited dataset is at hand, have been discussed in~\cite{AfsharHeidarian}. Capsule networks can also be adopted instead of CNNs in medical segmentation networks, where they can potentially outperform their CNN-based counterparts. Given the data limitations for COVID-19 lesion segmentation, replacing CNNs with Capsule Networks in segmentation models merits further investigation.
\subsubsection{Class-Imbalanced Dataset}
This problem occurs when samples of one class outnumber those of the other classes, a common situation in real-world applications, including COVID-19 diagnosis/prognosis. When training a model to distinguish COVID-19 from normal/CAP cases, we usually have fewer COVID-19 instances than instances of the other classes. The same holds for segmentation models, where pixels labeled as COVID-19 lesions are in the minority compared to background pixels. In such scenarios, the model can be biased toward the majority class. To tackle this issue, one can consider: (i) \textit{Modified loss functions}, or; (ii) \textit{Re-sampling techniques}, as discussed below:
\vspace{.05in}
\noindent
\textit{Modified Loss Functions} improve the model performance by assigning a higher penalty to mis-classified instances/pixels of the minority class. Weighted binary cross-entropy is a commonly used loss function in class-imbalanced classification/segmentation models~\cite{RajamaniSiebert}. The focal Tversky loss is a re-weighted loss function that optimizes the coverage of predicted and ground-truth masks by assigning more weight to the target pixels. Some COVID-19 segmentation studies adopt a combination of modified loss functions to improve the model performance at both the image level and on small ROIs~\cite{FanZhou}. References~\cite{HuGao,MengDong} developed COVID-19 diagnosis models under the supervision of a focal loss, which assigns smaller weights to easy examples so that they contribute less to the loss function.
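A minimal NumPy sketch of the binary focal loss mentioned above; the hyper-parameters $\gamma$ and $\alpha$ take their commonly used default values, not necessarily those of the cited studies:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified examples so
    the hard (typically minority-class) examples dominate the loss.

    `p` holds predicted probabilities of the positive class and `y` the
    binary labels; gamma/alpha are the focusing/balancing parameters.
    """
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)        # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)
    return (-at * (1 - pt) ** gamma * np.log(pt)).mean()
```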
\vspace{.05in}
\noindent
\textit{Re-sampling Strategies} handle the class imbalance problem by either over-sampling the minority class or under-sampling the majority class. Xi \textit{et al.}~\cite{OuyangHuo} adopted an over-sampling strategy in their COVID-19 diagnosis model to adjust the samples of different classes in each mini-batch.
Li \textit{et al.}~\cite{LiWei} introduced a new off-line sampling strategy that ranks the non-COVID-19 samples based on their diversity and difficulty. The most informative samples are then fed into the classification model. Their approach significantly decreases the training time while achieving comparable results.
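Class-balanced mini-batch sampling of the kind described above can be sketched as follows; this is a generic illustration with hypothetical labels, not the exact scheme of the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_indices(labels, batch_size):
    """Draw a mini-batch with class-balanced sampling probabilities:
    each class contributes equally in expectation, over-sampling the
    minority (e.g., COVID-19) class. `labels` is a 1D label array."""
    classes, counts = np.unique(labels, return_counts=True)
    weight_per_class = 1.0 / counts               # rarer class, higher weight
    weights = weight_per_class[np.searchsorted(classes, labels)]
    weights /= weights.sum()                      # normalize to probabilities
    return rng.choice(len(labels), size=batch_size, p=weights)
```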
\subsubsection{Lack of Labeled/Annotated Data}
One of the most challenging problems in developing medical segmentation networks is the lack of pixel-level labeled images. Pixel-level labeling of medical images by experienced radiologists is time-consuming. For COVID-19 infection segmentation in particular, since the regions of infection are blurry, with boundaries hardly distinguishable from healthy lung tissue, experts' annotations may be inconsistent, making it necessary to work with a team of radiologists. Manual annotation of one COVID-19 CT volume takes $1$ to $5$ hours~\cite{ShanGao}. Some helpful solutions to overcome this problem are as follows:
\vspace{.05in}
\noindent
\textit{Human-in-the-loop Annotation Process}, which is a human-machine collaboration approach to ease and accelerate the annotation process. Shan \textit{et al.}~\cite{ShanGao} proposed an efficient human-in-the-loop system based on the collaboration of radiologists and the DL segmentation model, which dramatically reduces the annotation time.
\vspace{.05in}
\noindent
\textit{Semi-supervised Learning}, where a few annotated samples together with a large number of non-annotated CT images are fed to the network to increase the model accuracy. Combining 3D segmentation with GANs in a semi-supervised fashion can also be used to segment COVID-19 lesions. GASNet, proposed in~\cite{XuCao}, is a 3D segmentation framework containing a segmentation network with embedded GANs and a discriminator, which uses a semi-supervised approach to segment COVID-19 lesions. Experimental results on three external public datasets showed that the model, trained on only a few labeled CT volumes, performed comparably with fully-supervised segmentation networks trained on large labeled datasets.
\vspace{.05in}
\noindent
\textit{Un-supervised Learning}, where the model distinguishes out-of-distribution data in a dataset without any pre-existing labels. Li \textit{et al.}~\cite{LiWei} used a new paradigm of unsupervised learning, referred to as self-learning, to exploit helpful information from unlabeled data in their COVID-19 classification model.
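A minimal confidence-threshold self-training loop, of the general kind these semi- and un-supervised approaches build on, can be sketched as follows (generic pseudo-labeling, not the specific method of~\cite{LiWei}; the threshold value is illustrative):

```python
def pseudo_label(unlabeled_probs, threshold=0.95):
    """Select confidently predicted unlabeled samples and assign them
    their predicted class; the rest stay unlabeled for the next round."""
    selected = []
    for i, p in enumerate(unlabeled_probs):
        conf = max(p, 1.0 - p)  # binary confidence of the prediction
        if conf >= threshold:
            selected.append((i, int(p >= 0.5)))
    return selected

# Model outputs on four unlabeled scans: only the confident ones are kept
probs = [0.99, 0.60, 0.02, 0.45]
new_labels = pseudo_label(probs)   # [(0, 1), (2, 0)]
```

The selected pairs would then be appended to the labeled set and the model retrained, repeating until no new confident samples appear.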
\subsection{Open Problems}
In this section, we highlight open problems and potential opportunities for SP research arising from the challenges of developing SP/DL models for COVID-19 management.
\vspace{.05in}
\noindent
$\bullet$ COVID-19 patients suffer from dyspnea; as such, there are inevitable motion artifacts in the acquired images. This is in contrast to most other medical images, where motion artifacts are rarely present. The artifacts in COVID-19 images sometimes overlap with the main areas of infection, making diagnosis/prognosis challenging even for experienced radiologists. To eliminate the effect of artifacts, most studies simply remove the noisy data from the dataset. This, however, reduces the generalizability and applicability of the model in clinical practice. An alternative solution is to adopt advanced artifact-reduction techniques, among which adaptive techniques are the most capable, as they can adjust to and track the signal under noisy conditions.
\begin{figure}
\centering
\includegraphics[scale=0.5]{dose2}
\caption{Acquired CT images with three different dose levels for three COVID-19 patients. (a) Standard-dose, (b) Low-dose, (c) Ultra Low-dose.}\label{fig:dose}
\vspace{-.2in}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[scale=0.7]{Disease}
\caption{Acquired images for three patients suspected to COVID-19. These patients have pre-existing conditions interfering with the diagnosis of COVID-19: (a) 47 year old male with fatty embolism and pulmonary edema, (b) 51 year old female with history of right lung cancer and right lower lobectomy, (c) 27 year old male with gunshot injury in the left hemithorax with pulmonary contusion and left hemothorax and pneumothorax.}
\label{fig:heart}
\vspace{-.2in}
\end{figure*}
\vspace{.05in}
\noindent
$\bullet$ COVID-19 infection involves a large volume of the lung and is sparsely distributed across it. This is in contrast to medical images in which the region of interest is confined to a specific location of the organ. Analyzing and extracting patterns from COVID-19 images therefore requires sparse filtering techniques from the signal processing domain.
\vspace{.05in}
\noindent
$\bullet$ As COVID-19 infection is distributed in the whole lung volume, the relation between the image slices is of high diagnostic and prognostic importance, calling for specific 3D filtering and pattern recognition approaches.
\vspace{.05in}
\noindent
$\bullet$ A key issue with chest CT scans is exposing patients to harmful radiation. In this regard, low-dose or ultra-low-dose scanning is of high interest. A recent study by Tabatabaei \textit{et al.}~\cite{TabatabaeiTalari} showed that low-dose CT scans are in high agreement with standard-dose ones in terms of typical findings of COVID-19. More importantly, low-dose examinations carry less cancer risk, especially for young women. Fig.~\ref{fig:dose} shows CT scans of three different patients at three dose levels, i.e., standard, low, and ultra-low. As the figure shows, although low and ultra-low-dose images contain more visible artifacts, they can still reveal the presence of COVID-19 infection. The artifacts, however, can hamper effective training of the model. Furthermore, collecting a dataset of low-dose scans would not by itself resolve this issue: besides dose, other factors such as the patient's weight influence image quality, leading to a wide variety of possible artifacts.
This calls for SP/DL models that can cope with images at different resolutions while providing the same level of diagnosis/prognosis performance.
\vspace{.05in}
\noindent
$\bullet$ COVID-19 is relatively new, and as such, large datasets are not easily accessible. Therefore, the developed SP/DL models should be capable of handling small datasets and yet capturing informative features.
\vspace{.05in}
\noindent
$\bullet$ To encourage physicians and health professionals to confidently utilize DL models, it is important to provide explanations and interpretations of the models' internal behaviour and their results, thereby eliminating the ``black-box'' perception; communicating explainable outcomes to physicians is essential for clinical adoption. Several explainability techniques have been leveraged in COVID-19 studies, the simplest of which is to verify the outcomes with a radiologist. This approach is, however, time-consuming and burdensome. Techniques providing heat-maps of the most important regions of the input image are also popular in COVID-19 studies. One commonly used heat-map technique is Class Activation Mapping (CAM), utilized in Reference~\cite{HuGao} at different feature levels. Gradient-weighted Class Activation Mapping (Grad-CAM), which visually depicts the deep model's decision, is a CAM variant with the advantage of not requiring re-training. Grad-CAM outputs show how the developed models attend to the regions of infection in the chest radiographs of~\cite{OzturkTalo,IslamIslam}. Saliency maps have also produced interpretable outcomes in COVID-19 studies~\cite{HuGao}.
Despite these advances in model explainability, there are still examples for which the models fail to provide a clear explanation. Furthermore, heat-maps do not sufficiently explain which unique features are used to distinguish between COVID-19 and CAP cases.
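The core of CAM-style heat-map methods such as Grad-CAM is a gradient-weighted sum of feature maps: each map is weighted by the spatial average of the class-score gradient on it, summed, and passed through a ReLU. This can be illustrated with synthetic arrays (a schematic sketch, not tied to any of the cited models):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (K, H, W) for K feature maps.

    alpha_k = spatial average of the class-score gradient on map k;
    heatmap = ReLU(sum_k alpha_k * A_k), highlighting regions whose
    activation increases the predicted class score.
    """
    alphas = gradients.mean(axis=(1, 2))                  # shape (K,)
    cam = np.tensordot(alphas, activations, axes=(0, 0))  # shape (H, W)
    return np.maximum(cam, 0.0)                           # ReLU

# Synthetic example: one feature map fires on the top-left corner and its
# gradient is positive, so the heat-map highlights that corner.
A = np.zeros((1, 4, 4)); A[0, 0, 0] = 1.0
G = np.ones((1, 4, 4))
heat = grad_cam(A, G)
```

In practice the activations and gradients come from a backward pass through the trained network; the heat-map is then upsampled and overlaid on the input radiograph.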
\vspace{.05in}
\noindent
$\bullet$ Due to privacy-protection policies and the immediate quarantine of mild cases without further examination, scans with non-severe symptoms are missing from most public datasets, and models are mostly developed on patients with severe lung lesions who are at late/advanced stages of the disease. The models are therefore biased toward severe cases and cannot be easily generalized.
\vspace{.05in}
\noindent
$\bullet$ Evaluating a developed SP/DL model in an unseen domain results in a decrease in the sensitivity of COVID-19 diagnosis. Most developed models, however, use data coming from a single hospital, without cross-center validation. In other words, the impact of equipment differences has not yet been fully considered, and data from different sources are required to verify the generalizability of the models.
\vspace{.05in}
\noindent
$\bullet$ One limitation of many COVID-19 studies is that they only try to distinguish COVID-19 cases from normal ones, or lump normal and non-COVID pneumonia cases into a single class. Studies that consider a separate CAP class also report relatively poor performance in distinguishing the COVID-19 and CAP classes. This calls for developing models with stronger backbone architectures and higher capacities. Furthermore, the pneumonia samples are older than the COVID-19 ones, and images from pneumonia patients with COVID-19 symptoms are not included in the datasets.
\vspace{.05in}
\noindent
$\bullet$ Although hybrid models, combining images with other relevant clinical information, can play an important role in COVID-19 analysis, few datasets are accompanied by demographic and clinical risk factors.
\vspace{.05in}
\noindent
$\bullet$ One important challenge in COVID-19 analysis is the disease's manifestation in patients with complications other than COVID-19. Several diseases can affect the lung tissue and interfere with or alter the appearance of COVID-19. Interstitial lung diseases and pleural or cardiac diseases may have imaging manifestations that mask superadded COVID-19, making interpretation challenging for the radiologist. As shown in Fig.~\ref{fig:heart}, it is not clear whether the abnormalities are related to COVID-19. This calls for more advanced SP solutions and unique features to facilitate COVID-19 identification.
\vspace{-.1in}
\section{Conclusion} \label{sec:Conc}
Medical imaging plays an important role in the diagnosis and management of COVID-19 infection. Signal Processing (SP) methods coupled with Deep Learning (DL) models can help develop robust autonomous solutions for the diagnosis/prognosis of COVID-19 based on chest images. In this article, an integrated sketch is presented for designing and developing intelligent models for COVID-19 diagnosis/prognosis. Advanced SP methodologies and DL models for the diagnosis and prognosis of COVID-19 are presented, taking into consideration major challenges and opportunities. This article provides the SP community with a comprehensive introduction to various solutions to COVID-19 Radiomics, together with the required radiological background, available resources, and challenges/opportunities for extensive future SP research in this multidisciplinary domain, serving our diligent role in combating the COVID-19 pandemic and possible future similar ones.
\section*{Acknowledgement}
This project was partially supported by the Department of National Defence's Innovation for Defence Excellence and Security (IDEaS) program, Canada.
We would like to thank the consulting committee and EiC of IEEE SPM for their two-round reviews and encouraging comments.
\vspace{-.2in}
\bibliographystyle{plain}
\section{Introduction}
Let $(M, \omega)$ be a compact K\"{a}hler manifold of dimension $n \geq 2$ such that $c_1(K_{M}) = -c_1(M)$ is nef. We will prove the following theorem.
\begin{Th} \label{main}
The Miyaoka-Yau inequality
\begin{equation} \label{eq:MY}
\tag{MY}
(2(n+1)c_2(M) - n c_1(M)^2) \cdot (-c_1(M))^{n-2} \geq 0
\end{equation}
holds on all compact K\"{a}hler manifolds with nef canonical bundle.
\end{Th}
Historically \eqref{eq:MY} was proved under different assumptions on $c_1(K_{M}) = -c_1(M)$ (more precisely on $K_{M}$), so let us first recall the relevant assumptions. First we define
\begin{equation}
2 \pi c_1(M) = -[\sqrt{-1} \partial \bar \partial \log \det g],
\end{equation}
where $g$ is the metric tensor of $\omega$. Furthermore, letting $\theta$ be a representative of $-c_1(M)$, we define:
\begin{enumerate}
\item $-c_1(M)$ is nef if for any $\sigma > 0$, there exists a smooth $\varphi_{\sigma}$ such that $\theta + \sqrt{-1} \partial \bar \partial \varphi_{\sigma} > - \sigma \omega$.
\item $-c_1(M)$ is nef and big if $-c_1(M)$ is nef and $(-c_1(M))^{n} = \int_{M} \theta^{n} > 0$.
\item $-c_1(M)$ is semi-positive if it contains a semi-positive representative.
\end{enumerate}
Equivalently, one can define $-c_1(M)$ to be nef if it lies in the closure of the K\"{a}hler cone:
\begin{equation*}
\mathcal{C}_{M} := \{ [\alpha] \in H^{1,1}(M, \mathbb R) | \text{ there exists a K\"{a}hler metric } \omega \text{ such that } [ \omega] = [\alpha]\},
\end{equation*}
but the previous definition reflects the fact that nefness characterizes the positivity of $-c_1(M)$. Obviously, semi-positivity implies that $-c_1(M)$ is nef, directly by definition. If $-c_1(M)$ is nef one easily concludes that $(-c_1(M))^{n} \geq 0$, but it is not necessarily big. If $-c_1(M)$ is big and nef, then $M$ is projective, and by Kawamata's base point free theorem $K_{M}$ is semi-ample. It follows that $-c_1(M)$ has a semi-positive representative, namely some multiple of the pullback of the Fubini-Study metric through the canonical map $\Phi : M \to \mathbb P^{N}$. In conclusion, nefness is a very weak condition compared with the other two conditions. However, as a corollary of the Abundance Conjecture, nefness is expected to imply semi-positivity. Without assuming the Abundance Conjecture, when dealing with a nef class which is not big, one major difficulty lies in the absence of a good representative with the right positivity property, since in the definition of nefness we have varying representatives for different $\sigma$. We will call a compact K\"{a}hler manifold a smooth minimal model if $-c_1(M)$ is nef, and a smooth minimal model of general type if $-c_1(M)$ is nef and big.
On compact K\"{a}hler manifolds with negative first Chern class there exists a unique K\"{a}hler-Einstein metric by \cite{yau1978ricci, MR494932}. \eqref{eq:MY} was initially proved by Yau \cite{MR451180} on such manifolds, and by \cite{MR451180, Miyaoka1977} on complex surfaces with big canonical bundle. Furthermore, if equality in \eqref{eq:MY} holds on a compact K\"{a}hler manifold of negative first Chern class, then the K\"{a}hler-Einstein metric is hyperbolic, i.e. its holomorphic sectional curvature is a negative constant. \eqref{eq:MY} in the case of smooth minimal models of general type was established by the work of Tsuji \cite{MR976585} (see also Song-Wang \cite{MR3470713} for some clarifications) and Zhang \cite{MR2497488}. In addition, \cite{1611.05981, MR4061021} confirmed \eqref{eq:MY} for all minimal projective varieties. More recently, Nomura \cite{1802.05425} obtained it under the assumption that $-c_1(M) = c_1(K_{M})$ is semi-positive but not big, using the K\"{a}hler Ricci flow. \cite{1611.05981} also includes a more thorough account of the historical development of \eqref{eq:MY}. See also Zhang \cite{1803.06093} for Miyaoka-Yau type inequalities on compact K\"{a}hler manifolds of almost nonpositive holomorphic sectional curvature (which implies nefness).
Our goal here is to prove \eqref{eq:MY} with no further assumption except that $-c_1(M)$ is nef. Our approach treats the case where $-c_1(M)$ is big and not big simultaneously. We will show that Theorem \ref{main} is a direct consequence of the existence of the cscK metrics in a neighborhood of the canonical class.
\begin{Th} \label{existence}
Let $(M, \omega_0)$ be a compact K\"{a}hler manifold. If the canonical class $-c_1(M)$ is nef, then for any $\varepsilon > 0$ small enough, there exists a unique cscK (constant scalar curvature K\"{a}hler) metric in the K\"{a}hler class $- 2\pi c_1(M) + \varepsilon [\omega_0]$.
\end{Th}
This result was obtained recently by both Dyrefelt \cite{2012.07956} and Song \cite{2004.02832}. It was first established for minimal surfaces of general type by Arezzo-Pacard \cite{MR2275832}, then Jian-Shi-Song \cite{MR3981128} proved it for smooth minimal models with semi-ample canonical line bundle.
We will be using the following notations: $(M,\omega_0)$ is a compact manifold of dimension $n$, with a fixed K\"{a}hler metric $\omega_0$. For any K\"{a}hler metric $\omega$, we denote its Ricci curvature by $\mathrm{Ric}(\omega)$ and its scalar curvature by $R(\omega)$, and we know that $\mathrm{Ric}(\omega) \in 2 \pi c_1(M)$. A K\"{a}hler metric $\omega$ is called cscK if $R(\omega) = \frac{2 \pi n c_1(M) \cdot [\omega]^{n-1}}{[\omega]^n} $. Finally, we will denote the unique cscK metric in $-2 \pi c_1(M) + \varepsilon [\omega_0]$ by $\omega_{\varepsilon}$.
\section{Proof of Theorem \ref{main}}
Recall the numerical dimension $v$ of $K_M$ is defined to be
\begin{equation}
v : = \max \{k = 0, \ldots, n|(-c_1(M))^k \cdot [\omega_0]^{n-k} \neq 0\},
\end{equation}
and notice that if $-c_1(M)$ is big and nef, then $v = n$. We will need the following calculation.
\begin{Lemma} \label{lemma 2.3}
\begin{equation}
\lim_{\varepsilon \to 0} \frac{2 \pi n c_1(M) \cdot [\omega_{\varepsilon}]^{n-1}}{[\omega_{\varepsilon}]^{n}} = - v.
\end{equation}
\end{Lemma}
\begin{proof}
Let $\eta$ be a representative of $- 2 \pi c_1(M)$; we have the following elementary expansions:
\begin{equation}
[\omega_{\varepsilon}]^{n} = \int_{M} \omega_{\varepsilon}^n = \int_{M} (\eta + \varepsilon \omega_0)^n = \sum_{i = 0}^{n} {n \choose i} \varepsilon^{n-i} (- 2 \pi c_1(M))^{i} \cdot [\omega_0]^{n-i}
\end{equation}
and
\begin{equation}
\begin{aligned}
&2\pi c_1(M) \cdot [\omega_{\varepsilon}]^{n - 1} \\
&= -\int_{M} \eta \wedge (\eta + \varepsilon \omega_0)^{n-1} \\
&= -\sum_{i = 0}^{n-1}{n -1 \choose i} \varepsilon^{n-i-1} (- 2 \pi c_1(M))^{i +1} \cdot [\omega_0]^{n-i-1}.
\end{aligned}
\end{equation}
Then
\begin{equation}
\begin{aligned}
&\frac{2 \pi n c_1(M)\cdot [\omega_{\varepsilon}]^{n-1}}{[\omega_{\varepsilon}]^{n}} \\
&= - \frac{n\sum_{i = 0}^{n-1}{n-1 \choose i}\varepsilon^{n-i-1} (- 2 \pi c_1(M))^{i +1} \cdot [\omega_0]^{n-i-1} }{\sum_{i = 0}^{n} {n \choose i} \varepsilon^{n-i} (- 2 \pi c_1(M))^{i} \cdot [\omega_0]^{n-i} } \\
&= - \frac{n{n-1 \choose v-1} \varepsilon^{n-v} (- 2\pi c_1(M))^v \cdot [\omega_0]^{n-v} + \ldots + n\varepsilon^{n-1} (-2 \pi c_1(M)) \cdot [\omega_0]^{n-1}}{{n \choose v} \varepsilon^{n-v} (- 2 \pi c_1(M))^v \cdot [\omega_0]^{n-v} + \ldots + \varepsilon^{n} [\omega_0]^{n}}.
\end{aligned}
\end{equation}
The proof is concluded by noticing that $n {n-1 \choose v-1} = v {n \choose v}$.
\end{proof}
We are now ready to prove Theorem \ref{main}; the proof is based on the following well-known key estimate.
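Lemma \ref{lemma 2.3} can also be checked numerically: take placeholder values for the intersection numbers $a_i := (-2\pi c_1(M))^{i} \cdot [\omega_0]^{n-i}$, which vanish for $i > v$ by definition of the numerical dimension, and evaluate the ratio at small $\varepsilon$ (a sketch with arbitrary positive $a_i$ for $i \le v$):

```python
from math import comb

def scalar_ratio(a, n, eps):
    """2*pi*n*c1(M).[w_eps]^{n-1} / [w_eps]^n, expanded with
    a[i] = (-2*pi*c1(M))^i . [w0]^{n-i}, as in the proof of Lemma 2.3."""
    num = -n * sum(comb(n - 1, i) * eps ** (n - 1 - i) * a[i + 1]
                   for i in range(n))
    den = sum(comb(n, i) * eps ** (n - i) * a[i] for i in range(n + 1))
    return num / den

n, v = 4, 2
# placeholder intersection numbers: positive up to the numerical dimension v
a = [1.0 if i <= v else 0.0 for i in range(n + 1)]
limit = scalar_ratio(a, n, eps=1e-6)   # approaches -v = -2 as eps -> 0
```

The leading terms $\varepsilon^{n-v}$ in numerator and denominator dominate, and their ratio is $-n\binom{n-1}{v-1}/\binom{n}{v} = -v$, matching the binomial identity used above.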
\begin{Prop}[see for example \cite{MR3643615} Chapter 4 and \cite{MR2497488}]
For any K\"{a}hler metric $\omega$ on a compact K\"{a}hler manifold $M$, the following holds.
\begin{equation}
\begin{aligned}
&(2(n+1)c_2(M) - n c_1(M)^2) \cdot ([\omega])^{n-2}\\
&= \frac{1}{4 \pi^2 n(n-1)} \int_{M} ((n+1)|\mathring{\mathrm{Rm}(\omega)}|_{\omega}^2 - (n+2)|\mathring{\mathrm{Ric}(\omega)}|_{\omega}^2)\omega^n\\
& \geq \frac{1}{4 \pi^2n(n-1)} \int_{M}((n+1) |\mathring{\mathrm{Rm}(\omega)}|_{\omega}^2 - (n+2)|\mathrm{Ric}(\omega) + \omega|_{\omega}^2)\omega^n,
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\omega &:= \sqrt{-1}g_{i \bar j}dz_{i} \wedge d\bar{z}_j,\\
\mathring{\mathrm{Rm}(\omega)}_{i \bar j k \bar l} &:= \mathrm{Rm}(\omega)_{i \bar j k \bar l} - \frac{R(\omega)}{n(n+1)}(g_{i \bar j}g_{k \bar l} + g_{i \bar l}g_{k \bar j}),\\
&\mathring{\mathrm{Ric}(\omega)}:= \mathrm{Ric}(\omega) - \frac{R(\omega)}{n}\omega.
\end{aligned}
\end{equation}
\end{Prop}
According to the above estimate, in order to prove \eqref{eq:MY}, we only need to find a sequence of K\"{a}hler metrics $\{\omega_i\}$ satisfying the following:
\begin{equation}
\begin{aligned}
\lim_{i \to \infty}[\omega_i] = - 2 \pi c_1(M), \\
\lim_{i \to \infty} \int_{M} |\mathrm{Ric}(\omega_i) + \omega_i|^2_{\omega_i} \omega_i^n = 0.
\end{aligned}
\end{equation}
Previous approaches \cite{MR2497488,1802.05425} took advantage of the K\"{a}hler Ricci flow, which is known to exist for all time (see \cite{Tsuji1988, tian2006kahler}) and to converge in the cohomological sense to $-2 \pi c_1(M)$ when $-c_1(M)$ is nef. The point of \cite{MR2497488} is to use the fact that the scalar curvature along the K\"{a}hler Ricci flow is uniformly bounded, which was shown by \cite{MR2544732} (Song-Tian \cite{MR3506382} also showed that the scalar curvature is bounded when $-c_1(M)$ is not big), but this relies on $-c_1(M)$ being big. Assuming semi-positivity and non-bigness, \cite{1802.05425} modified Zhang's approach \cite{MR2497488} using only the uniform lower bound on the scalar curvature, but crucially exploiting the fact that the volume of the manifold along the flow is exponentially collapsing. Instead of using the K\"{a}hler Ricci flow, we make use of the sequence of cscK metrics in a neighborhood of the canonical class. Using Theorem \ref{existence}, we can take a sequence of cscK metrics $ \omega_{{\varepsilon}} \in - 2 \pi c_1(M) + \varepsilon [\omega_0]$ for $\varepsilon$ small enough, and we know that
\begin{equation}
\begin{aligned}
& \lim_{\varepsilon \to 0} [\omega_{\varepsilon}] = - 2 \pi c_1(M), \\
& R(\omega_{\varepsilon}) = \frac{2 \pi n c_1(M) \cdot [\omega_{\varepsilon}]^{n-1}}{[\omega_{\varepsilon}]^n} \to - v \text{ as } \varepsilon \to 0.
\end{aligned}
\end{equation}
by Lemma \ref{lemma 2.3}. Now the main point is:
\begin{enumerate}
\item when $-c_1(M)$ is big and nef, $\lim_{\varepsilon \to 0} R(\omega_{\varepsilon}) = -n$.
\item when $-c_1(M)$ is nef but not big, $\lim_{\varepsilon \to 0} R(\omega_{\varepsilon}) < \infty $.
\end{enumerate}
The following lemma is well-known.
\begin{Lemma}[see for example \cite{MR3186384} Chapter 4, Lemma 4.7]\label{lemma1}
For any K\"{a}hler metric $\omega$ on a compact K\"{a}hler manifold $M$, we have
\begin{equation}
\int_{M} |\mathrm{Ric}(\omega)|^2 \omega^n = \int_{M} R(\omega)^2 \omega^n - 4 \pi^ 2n(n-1) c_1(M)^2 \cdot [\omega]^{n-2}.
\end{equation}
\end{Lemma}
Then a straightforward computation yields:
\begin{equation}
\begin{aligned}
& \int_{M} |\mathrm{Ric}(\omega_{\varepsilon}) + \omega_\varepsilon|^2_{ \omega_\varepsilon} \omega_\varepsilon^n \\
&= \int_{M}( 2R( \omega_\varepsilon) + n + |\mathrm{Ric}( \omega_\varepsilon)|^2_{ \omega_\varepsilon} ) \omega_\varepsilon^n \\
&= \Big(\int_{M}(2R( \omega_\varepsilon) + n + R( \omega_\varepsilon)^2) \omega_\varepsilon^n\Big) -4 \pi^{2}n(n-1) (- c_1(M))^2 \cdot [ \omega_\varepsilon]^{n-2} \\
&= (2R( \omega_\varepsilon) + n + R( \omega_\varepsilon)^2)[ \omega_{\varepsilon}]^n - 4\pi^{2}n(n-1) (-c_1(M))^2 \cdot [ \omega_\varepsilon]^{n-2}\\
& \to (-2v + n + v^2)(-2 \pi c_1(M))^{n} - n (n - 1) (-2 \pi c_1(M))^{n}
\end{aligned}
\end{equation}
as $\varepsilon \to 0$, where $v$ is the numerical dimension of $K_{M}$. For the second equality we used Lemma \ref{lemma1}, and for the third equality we used the fact that $R(\omega_{\varepsilon})$ is constant on the manifold. For the limit we used Lemma \ref{lemma 2.3}. When $-c_1(M)$ is not big, we have $(-c_1(M))^n = 0$, thus
\begin{equation}
(-2v + n + v^2)(-2 \pi c_1(M))^{n} - n (n - 1) (-2 \pi c_1(M))^{n} = 0.
\end{equation}
When $(-c_1(M))^{n} > 0$, i.e. $v = n$, we get
\begin{equation}
\begin{aligned}
&(-2v + n + v^2)(-2 \pi c_1(M))^{n} - n (n - 1) (-2 \pi c_1(M))^{n} \\
&= ((n^{2} - n) - n(n - 1))(-2 \pi c_1(M))^{n} = 0.
\end{aligned}
\end{equation}
We conclude this note by remarking that in the previous approaches \cite{MR2497488} and \cite{1802.05425}, where the K\"{a}hler Ricci flow was used, this computation was not possible, since they did not have such strong control over how the scalar curvature behaves globally along the flow.
\section*{Acknowledgement}
The author would like to thank his advisor Ben Weinkove for his continued support, encouragement, and many valuable comments on the manuscript.
\medskip
\printbibliography
\end{document}
\section{Introduction}
In the not-so-distant future, gravitational wave (GW) detectors such as LISA \cite{Audley:2017drz}, Taiji \cite{Guo:2018npi}, Tianqin \cite{Luo:2015ght},
DECIGO \cite{Seto:2001qf,Yagi:2011wg}, AION/MAGIS \cite{Badurina:2019hst}, ET \cite{Maggiore:2019uih}
and PTA \cite{Lentati:2015qwp,Bian:2020bps} will observe a wide range of frequencies and amplitudes.
Any detection of a stochastic GW background with a cosmological origin would be a breakthrough in the understanding of the processes of
the early universe, e.g. see Ref.~\cite{Caprini:2018mtu} for a review.
An important source of cosmological GW backgrounds is the so-called scalar induced GWs \cite{tomita,Matarrese:1992rp,Matarrese:1993zf,Matarrese:1997ay,Carbone:2004iv,Ananda:2006af,Baumann:2007zm,Saito:2008jc},
which have received a lot of attention recently \cite{Alabidi:2012ex,Alabidi:2013wtp,Hwang:2017oxa,Espinosa:2018eve,Kohri:2018awv,Cai:2018dig,Bartolo:2018rku,Inomata:2018epa,Yuan:2019udt,Inomata:2019zqy,Inomata:2019ivs,Chen:2019xse,Yuan:2019wwo,DeLuca:2019ufz,Tomikawa:2019tvi,Gong:2019mui,Inomata:2019yww,Yuan:2019fwv,Domenech:2017ems,Domenech:2019quo,Ota:2020vfn,Cai:2019jah,Cai:2019elf,Cai:2019amo,Bhattacharya:2019bvk,Pi:2020otn}.
They are a crucial counterpart of the primordial black hole scenario \cite{Espinosa:2018eve,Cai:2018dig,Bartolo:2018rku,Yuan:2019udt}
and they constitute a powerful probe of the primordial curvature power spectrum \cite{Inomata:2018epa,Gow:2020bzo}
and of the thermal history of the universe \cite{Cai:2019cdl,Hajkarim:2019nbx,Domenech:2019quo,Domenech:2020kqm}.
Unfortunately, doubts have been cast on the gauge independence of the derived induced GW spectrum \cite{Hwang:2017oxa}.
This is an issue rooted in the fact that tensor modes mix with scalar and vector modes at second order in cosmological perturbation theory.
It is thus very important to show that there is no ambiguity in the theoretical predictions of the induced GWs.
Since Ref.~\cite{Hwang:2017oxa} first pointed out that the induced GW spectrum is strictly speaking gauge dependent
(see also Ref.~\cite{Tomikawa:2019tvi} for numerical studies in general cosmological backgrounds), there has been a lot of discussion and
proposed solutions to the gauge issue \cite{Gong:2019mui,Tomikawa:2019tvi,Wang:2019zhj,DeLuca:2019ufz,Inomata:2019yww,Yuan:2019fwv,
Nakamura:2019zbe,Giovannini:2020qta,Lu:2020diy,Chang:2020tji,Ali:2020sfw,Chang:2020iji,Chang:2020mky,Giovannini:2020soq}.
Most of the solutions can be classified in either building a gauge invariant formulation of tensor modes at second order \cite{Wang:2019zhj,Yuan:2019fwv,Nakamura:2019zbe,Chang:2020tji,Chang:2020iji,Chang:2020mky}
or finding an appropriate gauge choice that best describes the GW detection \cite{DeLuca:2019ufz,Inomata:2019yww}.
The problem with the former approach, i.e., the gauge invariant formulation, is that there is no clear connection between
the observable and the gauge invariant variable. In particular, this approach seems to miss that choosing to work with a particular gauge invariant
combination is no different from choosing a particular gauge.
The obvious difference is that in the former the degrees of freedom are naturally reduced while in the latter they are reduced by hand.
In the end, the naive GW spectrum still depends on the choice of gauge invariant variable, and so the gauge issue of Ref.~\cite{Hwang:2017oxa} remains.
In contrast, although the latter approach taken by Ref.~\cite{DeLuca:2019ufz} certainly goes in the right direction,
defining the observable strain of GWs onto a GW detector at second order in perturbation theory
seems to be challenging in a general cosmological background.
A good argument presented in Ref.~\cite{DeLuca:2019ufz} is that the most suitable gauge is the gauge
where the coordinates follow a geodesic congruence, e.g. a frame where the mirrors of the interferometer are fixed,
also known as synchronous gauge.\footnote{In Ref.~\cite{DeLuca:2019ufz} they refer to this gauge as transverse-traceless gauge.
However, since this only applies to first order in perturbation theory, we find that calling it synchronous gauge is more appropriate in general.}
However, their reasoning only relies on first order perturbation theory applied to the time delay and does not deal with second order terms.
Nonetheless, the argument is more convincing when it is shown that the induced GW spectrum in the synchronous gauge exactly matches
at late times with the induced GW spectrum in the Newton (or shear-free) gauge \cite{DeLuca:2019ufz,Inomata:2019yww}.
Yet, in the absence of a well-defined observable which is invariant under second order gauge transformations, the gauge issue persists.
Moreover, as we shall see, the synchronous gauge is a subtle gauge for the induced GWs due to its remaining gauge degrees of freedom, as was first noted in Ref.~\cite{Lu:2020diy}.
A third solution would be to build a GW energy momentum tensor which is gauge invariant under second order gauge transformations.
However, although Isaacson \cite{Isaacson:1967zz,Isaacson:1968zza} showed that one can find a well-defined
energy momentum tensor for GWs in the limit of short wavelengths (i.e. short compared to the curvature scale of spacetime) in the case of vacuum
(Ricci-flat) spacetimes, there is as yet no such object in a cosmological background.
This stems from the fact that in General Relativity there is no well-defined notion of localized energy.
A study in this direction was done in \cite{Mukhanov:1996ak}, but only for the first order perturbation quantities, in the context of
the backreaction of perturbations to the background cosmological evolution.
In our case of interest, for instance, one may include terms quartic in gradients of scalar quantities in the energy density of GWs.
In the end, these additional terms at fourth order may render the total GW spectrum gauge invariant but this direction might actually
miss the essence of the current discussion.
Along similar lines, Refs.~\cite{Giovannini:2020qta,Giovannini:2020soq} argued that second order gauge transformations yield
spurious contributions to the observed GW spectrum and proposed some candidates for the definition of the GW spectral density.
Furthermore, Ref.~\cite{Giovannini:2020soq} reasons that the induced GW spectrum is only meaningful in completely fixed gauges like
the Newton or uniform curvature gauges. This conclusion is in contrast with Ref.~\cite{DeLuca:2019ufz}.
In this paper, we explore a new direction which we believe reduces the gauge issue of the induced GW spectrum to a minimum.
By analogy with GWs generated by mergers of binary black holes, where GWs are only well defined far enough from the source,
we argue that in cosmology the energy density of GWs for modes deep inside the horizon is well defined and gauge invariant
as long as: ($i$) the source of induced GWs is not active and ($ii$) the spacetime slicing is well behaved on small scales.
This implies that the induced GW spectrum is invariant under a set of reasonable gauge transformations on subhorizon scales.
We show such approximate gauge invariance for a general cosmological background filled with a perfect fluid with constant
equation of state and constant speed of sound, $p/\rho=w=c_s^2=\rm constant$.
We also present very simple formulas for both the source term of induced GWs and the transformation of tensor
modes at second order in a general gauge, which helps to clarify the gauge issue.
In this way, our discussion is not obscured by the involved calculations at second order perturbation theory.
The paper is organized as follows. In Sec.~\ref{sec:gaugeinvariance} we present simple formulas for the source term of induced GWs
in a general gauge inspired by the Hamiltonian formalism developed in Ref.~\cite{Domenech:2017ems}.
We also emphasize the difference between the gauge invariant formalism, a gauge choice and the observable energy density.
In Sec.~\ref{sec:GWspectrum}, we define a class of physically meaningful gauges on small scales and show that the induced GW spectrum is invariant
under such a set of gauges on subhorizon scales. We pay special attention to the synchronous gauge. In Sec.~\ref{sec:dustdom} we argue that
the induced GWs generated in a pressureless adiabatic perfect fluid dominated universe can be gauged away at the stage when the source term is active.
Lastly, in Sec.~\ref{sec:conclusions} we discuss the implications of our work.
Details of the calculations and gauge transformations can be found in the appendices.
Throughout the paper we work in reduced Planck units where $c=\hbar=8\pi G=1$.
\section{Gauge invariance vs observable \label{sec:gaugeinvariance}}
In this section, we focus on the difference between a gauge invariant quantity and an observable quantity.
We also derive simple formulas for the generation of tensor modes by scalar squared terms at second order perturbation theory
and briefly review the gauge issue raised in Ref.~\cite{Hwang:2017oxa}.
We begin by describing the universe content and our perturbative expansion which closely follows that of Ref.~\cite{Domenech:2017ems}.
We consider that the universe is filled with a perfect fluid with energy-momentum tensor given by
\begin{align}
T_{\mu\nu}&=(\rho+p)u_{\mu}u_{\nu}+p g_{\mu\nu}\,,
\end{align}
where $\rho$ and $p$ respectively are the energy density and pressure and $u_{\mu}$ is the fluid 4-velocity.
We take the spacetime metric to be a perturbed flat Friedmann-Lema{\^i}tre-Robertson-Walker (FLRW) universe.
By means of the (3+1)-decomposition we may write the metric as
\begin{align}\label{eq:confdecom}
ds^2&=a^2(\tau)\nonumber\\&\times\left(-N^2d\tau^2+{\rm e}^{2\phi}\Upsilon_{ij}\left(dx^i+N^id\tau\right)\left(dx^j+N^jd\tau\right)\right)\,,
\end{align}
where $a$ is the scale factor, $N$ is the lapse, $N^i$ is the shift vector and $\phi$ and $\Upsilon_{ij}$ respectively represent
the trace and traceless degrees of freedom of the spatial metric.
We perturbatively expand the lapse, shift and $\Upsilon_{ij}$ as
\begin{align}\label{eq:confdecom2}
N&=1+\alpha\quad,\quad N_i=\partial_i\beta\\
\left[\ln\Upsilon\right]_{ij}&=h_{ij}+2\left(\partial_{i}\partial_{j}-\tfrac{1}{3}\delta_{ij}\Delta\right)E\label{eq:confdecomh}\,,
\end{align}
where $\delta^{ij}h_{ij}=\partial^i h_{ij}=0$ and we have neglected vector modes for simplicity. Thus $h_{ij}$ represents the transverse-traceless degrees of freedom, i.e.
the tensor degrees of freedom. We note, however, that this does not necessarily mean $h_{ij}$ defined here is a directly observable quantity.
As we shall see, the exponential form of the spatial metric in Eq.~\eqref{eq:confdecom} substantially simplifies the calculations and discussions.
For more details on the perturbative expansion see Ref.~\cite{Domenech:2017ems}.
It is important to observe that the relation between
the metric perturbations in our expansion \eqref{eq:confdecom} and the commonly used variables \eqref{eq:confdecom3},
e.g. in Refs.~\cite{Malik:2008im,Hwang:2017oxa,Gong:2019mui}, involves a mixture of scalar and tensor modes, see App.~\ref{app:relation}.
For instance, the tensor modes of our metric \eqref{eq:confdecom} are related to the tensor modes in the common
expansion \cite{Malik:2008im,Hwang:2017oxa,Gong:2019mui} by additional scalar squared terms containing $E$ and $\phi$.
Surely, the resulting induced GW spectrum should not depend on the choice of the perturbative expansion.
Nevertheless, we start to see how the gauge choice may naively affect the spectrum of induced GWs.
At second order in cosmological perturbation theory, squares of gradients of scalar variables source the tensor modes.
In the perturbative expansion \eqref{eq:confdecom}, the equations of motion for the tensor modes at second order without
any gauge fixing take a very simple form, concretely
\begin{align}\label{eq:heom}
(\hat D_\tau-\Delta) h_{ij}=\widehat{TT}^{ab}_{ij}S_{ab}\,,
\end{align}
where $\hat D_\tau\equiv a^{-2}\partial_\tau(a^2\partial_\tau)$, $\Delta\equiv\delta^{ij}\partial_i\partial_j$ is the flat 3-dimensional Laplacian,
$\widehat{TT}^{ab}_{ij}$ is the transverse-traceless projector, e.g. found in Ref.~\cite{Domenech:2017ems}, and
\begin{align}\label{eq:heomsource}
S_{ab}=4&\partial_a\Phi\partial_b\Phi+2a^2\left(\rho+p\right)\partial_aV\partial_bV\nonumber\\&-(\hat D_\tau-\Delta)\left[\partial_a\sigma\partial_b\sigma+\partial_a\partial^kE\partial_k\partial_bE\right]\,.
\end{align}
In the source term \eqref{eq:heomsource} we have defined
\begin{align}\label{eq:definitions}
\Phi&\equiv\phi-\tfrac{1}{3}\Delta E+\sigma\quad,\quad V\equiv v+E'\,,\\
\sigma&\equiv\beta-E'\,.
\end{align}
The variable $\sigma$ corresponds to the scalar component of the shear of the hypersurface,
$v$ is the scalar component of the fluid 3-velocity and $\Phi$ and $V$ are two different gauge invariant variables under first order gauge transformations,
see App.~\ref{app:gauge}. They respectively represent the curvature perturbation and the fluid velocity potential on Newton (or shear-free) slices.
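As a quick consistency check of these definitions, note that on Newton (shear-free) slices, where $\sigma=E=0$, one has $\Phi=\phi$ and $V=v$, and the last bracket of Eq.~\eqref{eq:heomsource} drops out, leaving
\begin{align}
S_{ab}\big|_{\sigma=E=0}=4\partial_a\Phi\partial_b\Phi+2a^2\left(\rho+p\right)\partial_aV\partial_bV\,,
\end{align}
so that in this gauge no redefinition of the tensor modes is needed to write the source in terms of gauge invariant variables.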
Equation~\eqref{eq:heomsource} is obtained by means of the Hamiltonian formalism presented in Ref.~\cite{Domenech:2017ems}.
We cross-checked it by the direct expansion of the Einstein equations. To achieve such a simplified form of Eq.~\eqref{eq:heomsource}
we used several times the first order equations of motion which can be found in App.~\ref{app:eom}.
Also, we find that the terms containing the variable $E$ in Eq.~\eqref{eq:heomsource} are absent in the general formulas derived in Ref.~\cite{Gong:2019mui}
and later used in Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww}. This is in agreement with Ref.~\cite{Lu:2020diy}.
Although the difference is a redefinition of the tensor modes, these terms play a crucial role in the discussion
in the synchronous gauge of Sec.~\ref{sec:GWspectrum}.
From Eq.~\eqref{eq:heomsource} we have several insights into the issue at hand. First, we see that the terms in Eq.~\eqref{eq:heomsource}
which are not gauge invariant at second order can be reabsorbed by a redefinition of the tensor modes. Explicitly by defining
\begin{align}\label{eq:hN}
h^{N}_{ij}\equiv h_{ij}+\widehat{TT}^{ab}_{ij}\left[\partial_a\sigma\partial_b\sigma+\partial_a\partial^kE\partial_k\partial_bE\right]\,,
\end{align}
we find that
\begin{align}\label{eq:heomN}
(\hat D_\tau-\Delta)& h^{N}_{ij}=\widehat{TT}^{ab}_{ij}\left[4\partial_a\Phi\partial_b\Phi+2a^2\left(\rho+p\right)\partial_aV\partial_bV\right]\,,
\end{align}
now takes a gauge invariant form. In Eq.~\eqref{eq:hN} we deliberately used the superscript $N$ to indicate that $h^N_{ij}$ coincides
with the tensor modes $h_{ij}$ evaluated in the Newton gauge \cite{Domenech:2017ems} where $\sigma=E=0$.
It should be noted that even though Eq.~\eqref{eq:heomN} is gauge invariant, one may have chosen any other gauge invariant definition
of the tensor modes. Then, the source term in Eq.~\eqref{eq:heomN} would of course be different and so would be the derived induced GW spectrum.
This point leads us to discuss what the observable GWs actually are. A laser interferometer or an array of pulsars detects the GW strain.
However, for stochastic GW backgrounds in cosmology one uses the spectral density of GWs defined by
\begin{align}\label{eq:spectraldensity}
\rho_{\rm GW}(k)=\frac{k^3}{16\pi^2}\sum_\lambda\langle h_{\bm{k},\lambda}'h_{-\bm{k},\lambda}'+ k^2 h_{\bm{k},\lambda}h_{-\bm{k},\lambda}\rangle\,,
\end{align}
where $\langle\cdots\rangle$ denotes ensemble average.
For simplicity, we work with the spectral density from now on but our conclusions should not depend on whether one uses the GW strain
or the GW spectral density. Now, it is obvious that the GW spectral density as defined in Eq.~\eqref{eq:spectraldensity} is a gauge dependent quantity
if one considers second order gauge transformations. This is solely because $h_{ij}$ changes under a second order gauge transformation.
In other words, the \textit{naive} spectrum of the induced GWs obtained using Eq.~\eqref{eq:spectraldensity}, e.g., in the Newton gauge
is a priori different from that in the comoving gauge. This does not mean that the GW spectrum is gauge dependent;
it simply means that the energy density of GWs is in general not well defined by Eq.~\eqref{eq:spectraldensity} when one considers higher order terms.
The crucial point is to realize that Eq.~\eqref{eq:spectraldensity} should be well-defined on subhorizon scales, $k\gg{\cal H}$,
as long as the source term of induced GWs in Eq.~\eqref{eq:heomsource} is no longer active. In the absence of a source,
the tensor modes $h_{ij}$ deep inside the horizon \textit{are} freely propagating GWs. However, there is a catch.
For practical convenience, our GW detector should be described in a reasonable coordinate system.
Otherwise, if we were in a coordinate system where the GW detector would oscillate wildly, we would most likely confuse GWs with gauge artifacts.
In the next section we explore what would be a set of reasonable gauges and how the energy density is invariant under such a set of gauge transformations.
\section{Gauge invariance of GW spectrum \label{sec:GWspectrum}}
In the previous section we showed that the spectrum of induced GWs is always gauge dependent if one uses the definition of the spectral density as in Eq.~\eqref{eq:spectraldensity}. However, one expects that if the gauge transformation leaves the subhorizon physics essentially untouched,
the prediction for the induced GWs should agree independently of the gauge.
In this sense, the Newton gauge seems suitable for physical interpretations since
one recovers Newtonian gravity for $k\gg{\cal H}$. In this paper, we assume this is indeed the case. In regard to the observable strain,
we show that $h_{ij}^N$ coincides with that in a suitably fixed synchronous gauge.
Now, let us take a look at the gauge transformation of the Newton potential since we are working with the gauge invariant variables $\Phi$
and $h_{ij}^N$ that coincide with the Newton potential and the tensor modes in the shear-free gauge. As we already argued,
the Newton gauge is a well-behaved gauge at the smallest scales. Then, we find that the relation between
the trace part of the metric perturbation $\phi$ evaluated in an arbitrary gauge and in the Newton gauge is given by
\begin{align}\label{eq:phiG}
\Phi_G&= \Phi_N+{\cal H}T_G+\tfrac{1}{3}\Delta L_G\,,
\end{align}
where ${\cal H}\equiv a'/a$, $T_G$ and $L_G$ respectively are the time and spatial gauge parameters
and we used the subscript $G$ to emphasize that it is for an arbitrary gauge.
We have also explicitly added the subscript $N$ to $\Phi$, and set $\phi=\Phi_G$ to emphasize that $\Phi_N$ is $\phi$ defined in the Newton gauge
and $\Phi_G$ is $\phi$ in another fixed gauge.
See App.~\ref{app:gauge} for the details on the gauge transformations and App.~\ref{app:eom} for the basic equations of motion.
We define the requirement that the gauge is well-behaved on subhorizon scales as the condition that
\begin{align}\label{eq:condition1}
\Phi_G(k\gg{\cal H})=O\left(\Phi_N(k\gg{\cal H})\right)\,.
\end{align}
This implies that the well-behaved set of gauge transformations has gauge parameters ${\cal H}T_G(k\gg{\cal H})$
and $\Delta L_G(k\gg{\cal H})$ that decay as fast as or faster than $\Phi_N(k\gg{\cal H})$.
As we shall show, this includes the flat, constant Hubble and synchronous gauges. The requirement that the density contrast $\delta\rho_G(k\gg{\cal H})=O\left(\delta\rho_N(k\gg{\cal H})\right)$ leads
to a similar conclusion on $T_G$.
The Newton potential in a general cosmological background with a constant equation of state parameter $p/\rho=w=c_s^2$ decays inside
the horizon roughly as
\begin{align}\label{eq:gravitationalpotential2}
\Phi_N(c_sx\gg 1)\propto \Phi(k)\, (c_sx)^{-2-b}\,; \quad c_s\neq0\,,
\end{align}
where $\Phi(k)$ is a $k$-dependent amplitude to be set by the initial condition, $x$ and $b$ are defined by
\begin{align}
x\equiv k\tau\quad,\quad b\equiv\frac{1-3w}{1+3w}\,,
\end{align}
and we have neglected the oscillatory behavior inside the sound horizon.
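For orientation, let us evaluate the decay law \eqref{eq:gravitationalpotential2} in two standard cases,
\begin{align}
w=c_s^2=\tfrac{1}{3}:&\quad b=0\,,\quad \Phi_N\propto (c_sx)^{-2}\,,\nonumber\\
w=c_s^2=1:&\quad b=-\tfrac{1}{2}\,,\quad \Phi_N\propto (c_sx)^{-3/2}\,,\nonumber
\end{align}
which recovers the familiar $x^{-2}$ decay of the gravitational potential deep inside the horizon during radiation domination.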
It should be noted that in the special case of a dust universe, when $c_s^2=w=0$, the Newton potential is constant on all scales.
For this very particular limit, the source term of secondary GWs \eqref{eq:heomN} is indefinitely active and our criterion
for the approximate gauge invariance of induced GWs does not apply.
In this special case, we show in Sec.~\ref{sec:dustdom} that the induced GWs can be essentially gauged away.
This means that during the dust domination one cannot straightforwardly tell gauge artifacts apart from GWs until
the dust domination ends and the universe transitions to an era with $c^2_s\neq0$.
We note that we do not claim that the time-independent $h_{ij}$ generated during dust domination is meaningless.
We only claim that it can be gauged away as long as the universe is dust dominated.
It becomes a genuine gravitational wave when the dust domination ends. In a sense, this is similar to the curvature perturbation
on superhorizon scales during inflation. It can be gauged away as long as the universe is in the inflationary phase, or in an eternally inflating universe.
It becomes physically significant only after the universe enters a decelerated phase.
This highlights the particularities of the induced GW generated in a dust dominated universe.
Note that in models with an early matter dominated stage, the dominant contribution to the induced GW spectrum
comes mostly from the stage right after reheating \cite{Inomata:2019zqy,Inomata:2019ivs}.
Our criterion for the approximate gauge invariance applies to this contribution.
Hence there is no gauge issue with the induced GWs in a universe with an early dust dominated stage
as long as one considers the stage after dust domination.
We shall translate the reasonable gauge condition \eqref{eq:condition1} into conditions on the time dependence of the gauge parameters as
\begin{align}
T_G(x\gg1)&\propto k^{-1} (c_sx)^{-c_T}\,;\quad c_T\geq1+b\,,
\nonumber\\
L_G(x\gg1)&\propto k^{-2} (c_sx)^{-c_L}\,;\quad c_L\geq2+b\,.
\label{eq:condition2}
\end{align}
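The exponents in Eq.~\eqref{eq:condition2} follow from Eq.~\eqref{eq:condition1} by simple power counting. Using ${\cal H}=(1+b)/\tau$ for a constant-$w$ background and keeping only the time dependence (dropping constant factors and powers of $c_s$), we need
\begin{align}
{\cal H}T_G\propto x^{-1-c_T}\lesssim \Phi_N\propto x^{-2-b}
\;&\Rightarrow\; c_T\geq 1+b\,,\nonumber\\
\Delta L_G\propto x^{-c_L}\lesssim \Phi_N\propto x^{-2-b}
\;&\Rightarrow\; c_L\geq 2+b\,.\nonumber
\end{align}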
Let us show that the conditions on $c_T$ and $c_L$ are enough to ensure the gauge independence of the GW spectrum on subhorizon scales.
First, we find that the tensor modes in an arbitrary gauge are related to those in the Newton gauge by
\begin{align}\label{eq:gaugehij}
h^{G}_{ij}=h^{N}_{ij}-\widehat{TT}_{ij}\,^{ab}&\Big\{\partial_aT_G\partial_bT_G+\partial_a\partial_kL_G\partial_b\partial_kL_G\Big\}\,.
\end{align}
Second, the GW spectral density is related to the dimensionless GW spectrum\footnote{The dimensionless power spectrum of a quantity Q is defined by
\begin{align}
\langle Q(k)Q(k')\rangle=\frac{2\pi^2}{k^3} {\cal P}_{Q}(k)\delta(k+k')\,.
\end{align}
} by
\begin{align}\label{eq:OMGWC}
\Omega_{\rm GW}(k\tau\gg1)=\frac{k^2}{12{\cal H}^2}\overline{{\cal P}_h(k\tau\gg1)}\,,
\end{align}
where we used that deep inside the horizon after GWs are generated $h'_{\bm{k}}\sim kh_{\bm{k}}$ and an overline denotes oscillation average.
When the source of the induced GWs is no longer active, we can treat $h_{ij}$ and $\Phi$ as independent variables.
In this situation, by using Eq.~\eqref{eq:gaugehij} the GW spectrum in an arbitrary gauge and in the Newton gauge are related by
\begin{align}\label{eq:PGPG}
\overline{{\cal P}^{G}_h(k\tau\gg1)}=\overline{{\cal P}^{N}_h(k\tau\gg1)}+\overline{{\cal P}_G(k\tau\gg1)}\,,
\end{align}
where $\overline{{\cal P}^{N}_h(k\tau\gg1)}$ is presented in App.~\ref{app:eom} and
\begin{align}\label{eq:PG}
{\cal P}_G&(k\tau\gg1)=\sum_\lambda\frac{k^3}{\pi^2}\int \frac{d^3q}{(2\pi)^3} \,\left(e_\lambda^{ij}(\bm{k})q_iq_j\right)^2
\nonumber\\
&\times\big|T_{\bm{k}}T_{|\bm{k}-\bm{q}|}+(q^2-k^lq_l)L_{\bm{k}}L_{|\bm{k}-\bm{q}|}\big|^2\,.
\end{align}
In Eq.~\eqref{eq:PG}, $\lambda$ is the GW polarization, $e_\lambda^{ij}(\bm{k})$ is the polarization tensor of the GWs
that satisfies $\delta_{ij}e_\lambda^{ij}(\bm{k})=k_ie_\lambda^{ij}(\bm{k})=0$
and $e_\lambda^{ij}(\bm{k})e_{ij,\lambda'}(\bm{k})=\delta_{\lambda\lambda'}$.
For our purposes it is enough to notice that $\overline{{\cal P}_G(k\tau\gg1)}$ is quartic in $T_G$ and $L_G$. Note that in the expressions \eqref{eq:PGPG} and \eqref{eq:PG} the assumption of a finite time source translates into a spectrum of scalar fluctuations with a finite width. This means that there is a moment in time when all the scalar fluctuations are inside the horizon and no more secondary GWs are generated. Thus, the integral over momenta in Eq.~\eqref{eq:PG} has finite and non-zero integration limits and we shall use the subhorizon approximation for $T_G$ and $L_G$ for all scales within the integration limits.
For an arbitrary constant equation of state $w$, the analytical formulas for the spectrum of induced GWs $\overline{{\cal P}^{N}_h(k\tau\gg1)}$
are provided in Ref.~\cite{Domenech:2019quo}.
The only relevant point for our discussion is the fact that the time dependence of the GW power spectrum is given by
\begin{align}\label{eq:powerh}
\overline{{\cal P}^{N}_h(x\gg1)}\approx x^{-2(1+b)}{\cal P}(k)\,,
\end{align}
where ${\cal P}(k)$ is some $k$ dependent amplitude. We refer the reader to Ref.~\cite{Domenech:2019quo} or App.~\ref{app:eom} for the details.
In other words, induced GWs inside the horizon behave as a free massless mode or as free GWs, i.e. they oscillate and decay as
\begin{align}
h_{ij}\propto 1/a(\tau)\propto x^{-1-b}\,.
\end{align}
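The exponent $1+b$ is nothing but the growth rate of the scale factor: for a constant equation of state,
\begin{align}
a(\tau)\propto \tau^{\frac{2}{1+3w}}=\tau^{1+b}\,,\qquad
1+b=1+\frac{1-3w}{1+3w}=\frac{2}{1+3w}\,,\nonumber
\end{align}
so $h_{ij}\propto 1/a$ indeed corresponds to $x^{-1-b}$.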
The condition that the GW spectrum is gauge invariant may be expressed as
$\overline{{\cal P}^{G}_h(k\tau\gg1)}=\overline{{\cal P}^{N}_h(k\tau\gg1)}$. With the parametrization of Eq.~\eqref{eq:condition2}, this condition requires
\begin{align}\label{eq:condition3}
c_T\,,\,c_L\geq (1+b)/2\,.
\end{align}
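The milder exponents in Eq.~\eqref{eq:condition3} can be seen by power counting: $\overline{{\cal P}_G}$ in Eq.~\eqref{eq:PG} is quartic in the gauge parameters, so with the scalings \eqref{eq:condition2} it decays as $x^{-4c}$ with $c=\min(c_T,c_L)$, up to factors of $c_s$. Demanding that it be subdominant to the physical spectrum \eqref{eq:powerh} yields
\begin{align}
x^{-4c}\lesssim x^{-2(1+b)}\;\Rightarrow\; c\geq \tfrac{1}{2}(1+b)\,.\nonumber
\end{align}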
The conditions \eqref{eq:condition3} are always satisfied as long as conditions \eqref{eq:condition2} are satisfied.
Thus, our requirement of a well-behaved gauge on small scales implies that the GW spectrum is invariant under such a reasonable set of
gauge transformations. Note that this conclusion is independent of whether one uses the GW strain or the GW spectral density.
We shall proceed to show which commonly used gauges satisfy the conditions \eqref{eq:condition2}.
For simplicity let us start with a change of the time slicing. This means we can set $L_G=0$ in our gauge transformation.
Thus, all the discussion lies on whether the gauge parameter $T_G$ satisfies the conditions \eqref{eq:condition2} or not.
In App.~\ref{app:transformationrules} we present the detailed gauge transformation rules.
Let us start with a bad example.
We consider the comoving slicing gauge where the fluid 4-velocity is orthogonal to the constant time hypersurfaces,
or the hypersurface normal vector agrees with the rest frame of the fluid. This slicing is fixed by setting $v+\beta=0$.
We find that at late times and on subhorizon scales, the temporal gauge parameter in the comoving slicing reads
\begin{align}\label{eq:TC}
T_C(x\gg1)\propto\frac{\Phi(k)}{c_sk}(c_sx)^{-b}\,,
\end{align}
where the subscript $C$ stands for comoving. From Eq.~\eqref{eq:TC} it is clear that the comoving gauge does not satisfy
the criteria \eqref{eq:condition2} independently of $w$ (or $b$). This means that the induced GW spectrum evaluated
using \eqref{eq:OMGWC} is quite different in the comoving slicing gauge compared to the Newton gauge.
This is not so surprising since at short distances one expects that the fluid density and velocity oscillate.
Thus the requirement that the slicing is comoving with the fluid would imply a highly deformed slicing in the small scale limit where
the spacetime is almost flat, and would most likely give rise to gauge artifacts in the GW spectrum.
We conclude that the comoving slicing is not well-behaved on small scales
and extra caution should be applied if the induced GW spectrum is computed in such a gauge.
Let us turn to two good examples. The first example is the spatially flat slicing where the curvature of the intrinsic metric (or the spatial curvature) is
homogeneous. This is achieved by setting $\phi=E=0$. The temporal gauge parameter to go to the flat gauge from the Newton gauge is
\begin{align}\label{eq:TF}
T_F(x\gg1)\propto\frac{\Phi(k)}{c_sk}\, (c_sx)^{-1-b}\,,
\end{align}
where $F$ stands for flat slicing.
The second example is the constant Hubble slicing, where the extrinsic curvature is homogeneous.
This implies that $3\phi'-3{\cal H}\alpha-\Delta\beta=0$.
The time gauge parameter from the Newton gauge to the constant Hubble gauge reads
\begin{align}\label{eq:TH}
T_H(x\gg1)\approx\frac{c_s\Phi(k)}{k}(c_sx)^{-2-b}\,,
\end{align}
where $H$ stands for constant Hubble slicing.
We see that Eqs.~\eqref{eq:TF} and \eqref{eq:TH} satisfy the conditions \eqref{eq:condition2} for a reasonable gauge transformation.
This was expected since deep inside the horizon the cosmology should be irrelevant, e.g. the expansion rate or the spatial curvature.
We conclude that the induced GW spectrum computed in the Newton, flat and constant Hubble gauge coincide for scales deep inside
the horizon on an arbitrary background. Thus we have shown the approximate gauge invariance of the induced GW spectrum.
\subsection{Synchronous gauge}
We dedicate a separate subsection to the synchronous gauge as the analysis is more subtle. This is because the synchronous gauge, in which
one sets $\alpha=\beta=0$, does not completely fix the gauge degrees of freedom.
There remains a residual gauge ambiguity in the variables. For example see Ref.~\cite{Malik:2008im} or App.~\ref{app:transformationrules}.
As we shall see, it turns out that this ambiguity is very much related to the terms in the equations of motion of secondary GWs \eqref{eq:heom}
which contain the variable $E$ and that have been neglected in previous analyses \cite{DeLuca:2019ufz,Inomata:2019yww}.
We find that the gauge parameters that relate the synchronous gauge with the Newton gauge are given by
\begin{align}\label{eq:TS}
T_S(x\gg1)&\approx \frac{x^{-1-b}}{c_s k}\left(-{\Phi(k)}+ \tilde T_0(k)\right)+O(x^{-2-b})\,,\\
L_S(x\gg1)&\approx \frac{x^{-b}}{c_s^2 k^2}\left({\Phi(k)}-\tilde T_0(k)\right)+O(x^{-2-b})\,,\label{eq:LS}
\end{align}
where $\tilde T_0(k)$ is an arbitrary function of the wavenumber $k$ and reflects the residual gauge ambiguity in the synchronous gauge.
For simplicity, in Eq.~\eqref{eq:LS} we have fixed the residual gauge ambiguity in the spatial gauge by eliminating a constant after integration.
See App.~\ref{app:transformationrules} for more details. We see that while the temporal gauge parameter $T_S$ \eqref{eq:TS} satisfies
conditions \eqref{eq:condition2}, the spatial gauge parameter $L_S$ \eqref{eq:LS} does not in general. Nevertheless, we see that $L_S$ \eqref{eq:LS} satisfies the criterion of a well-behaved gauge if one properly fixes $\tilde T_0(k)$
to remove the leading order terms on the right hand side of Eqs.~\eqref{eq:TS} and \eqref{eq:LS}.
Thus, we conclude that a properly fixed synchronous gauge yields the same induced GW spectrum as the one in the Newton gauge.
Nevertheless, the above discussion shows that the synchronous gauge might not be a suitable gauge for the computation of
induced GWs due to the residual gauge ambiguity, in contrast to the argument of Ref.~\cite{DeLuca:2019ufz}.
At this point, let us compare our results with those of Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww}. As is clear from Eq.~\eqref{eq:heomsource}, the source term of the induced GWs in the synchronous gauge contains terms proportional to $E$. However, in Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww} it seems these terms have been neglected. We show in App.~\ref{app:relation} that in the notation of Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww} additional terms proportional to $E$ also appear, see Eq.~\eqref{eq:heomsource2}. In the end though, our conclusions agree with Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww} because these neglected terms proportional to $E$ are in fact the ones that make a difference in the gauge transformation.
Now, it is important to note that if one solves the equations of motion for the induced GWs \eqref{eq:heomsource} in the synchronous gauge by requiring that the variables $\sigma$ and $E$ are well-behaved on superhorizon scales as in Refs.~\cite{DeLuca:2019ufz,Inomata:2019yww}, one would end up with an induced GW spectrum different compared to the one in the Newtonian gauge. Nevertheless, as we showed in this section, the difference is clearly a gauge artifact as one can do at any time a gauge transformation within the synchronous gauge that removes the spurious contribution. Thus, in the end, one finds that the induced GW spectrum of a properly fixed synchronous gauge agrees with that computed in the Newtonian gauge.
Before ending this section, we note that although the synchronous gauge is most
convenient for computing the response of a gravitational wave detector, one can
of course choose a different gauge without affecting the physical result, provided
that one carefully performs the calculation. In fact, when computing
the detector response, one has to introduce a frame in which the detector is at rest.
This can be done in any gauge, but the synchronous gauge is convenient since the
coordinates can be chosen such that the time coordinate coincides with the proper
time of the rest frame and the spatial coordinates are comoving with the detector.
This also means that the synchronous gauge is not always convenient unless
the remaining gauge degrees of freedom are adequately fixed. If not, one may
encounter a spurious component in the metric that could lead to a wrong result.
For example, if we work in a coordinate system spanned by a family of geodesics that
does not match the rest frame of the detector, one could obtain a singular result
if those geodesics happen to have focusing singularities in the vicinity of the
worldline of the detector.
\section{The dust dominated universe\label{sec:dustdom}}
In this section, we argue that GWs induced during a dust dominated universe, i.e. $w=c_s^2=0$, can be gauged away as long as the source term persists.
There is in fact good reason for suspicion. During dust domination the Newton potential is constant in time on all scales and,
therefore, so is the source term in Eq.~\eqref{eq:heomN}. A constant source to the tensor modes leads to a constant particular solution for $h_{ij}$,
which does not behave like a GW by any means. To show that it is indeed a gauge artifact let us demonstrate that there
is a particular gauge transformation by which the solution essentially vanishes.
We start from the gauge invariant form of the equations of motion \eqref{eq:heomN}. The solutions to $\Phi_N$ and $V_N$
in a dust universe are given in App.~\ref{app:eom}. Here we use the first order equations of motion to replace
the gauge invariant variable $V_N$ in favour of $\Phi_N$. In this way, we obtain
\begin{align}\label{eq:heomMD}
(\hat D_\tau-\Delta)h^N_{ij}=\widehat{TT}^{ab}_{ij}\left\{\frac{20}{3}\partial_a\Phi_N\partial_b\Phi_N\right\}\,.
\end{align}
Since $\Phi_N$ is constant in time there is a particular solution in which $h_{ij}$ is constant in time.
In fact in Fourier space, the solution after requiring that $h_k^N|_{\tau=0}=h_k^{N'}|_{\tau=0}=0$ reads \cite{Mollerach:2003nq,Hwang:2017oxa}
\begin{align}\label{eq:hMD}
h^N_{\bm{k}}(x)=k^{-2}S^N_{k}\left(1-3j_1(x)/x\right)\,,
\end{align}
where $j_1(x)$ is the spherical Bessel function of the first kind of order one and
\begin{align}\label{eq:SMD}
S^N_k=\frac{20}{3}\int \frac{d^3q}{(2\pi)^3}e^{ij}(\bm{k})q_iq_j\Phi_{\bm{q}}\Phi_{\bm{k}-\bm{q}}\,.
\end{align}
As is clear from Eq.~\eqref{eq:hMD}, the tensor modes are initially zero at $x\ll1$ and
develop a constant value $h_{\bm{k}}\sim k^{-2}S_{k}$ on subhorizon scales, that is at $x\gg1$.
Before we show that this constant solution is likely to be a gauge artifact it is important to note that the induced tensor modes \eqref{eq:hMD}
do not behave as what is usually considered to be GWs in an expanding universe.
First, its amplitude does not decay in proportion to $1/a$ and, therefore,
the energy density of such GWs does not decay as $1/a^{4}$, as expected for a fluid made of massless fields such as photons.
Second, in the flat spacetime limit, the observable effect would be a time-dependent strain \cite{Maggiore:1900zz,Flanagan:2005yc}.
This is clear from noting that the geodesic deviation equation is proportional to $h''_{ij}$. Therefore, a GW detector in a dust universe would not
detect the strain of \eqref{eq:hMD} as a GW.
So instead of GWs, one should refer to the induced tensor mode \eqref{eq:hMD} as a static anisotropic stress. This was noted in Refs.~\cite{Assadullahi:2009nf,Inomata:2019yww}.
We proceed to argue that the induced tensor modes \eqref{eq:hMD} are gauge artifacts. To do that, we look for a suitable gauge transformation
which brings the amplitude of \eqref{eq:hMD} to zero. Since a static $h_{ij}$ implies a non-vanishing intrinsic Riemann tensor of the spatial hypersurfaces,
it cannot be gauged away by a spatial gauge transformation. Therefore, we set $L_G=0$ and
focus only on a temporal gauge transformation with $T_G$ constant in time. In this case, we have that
\begin{align}\label{eq:heomMD2}
h^G_{\bm{k},\lambda}=h^N_{\bm{k},\lambda}
-\int \frac{d^3q}{(2\pi)^3}e^{ij}(\bm{k})q_iq_jT_{G,\bm{q}}T_{G,\bm{k}-\bm{q}}\,,
\end{align}
where $h^N_{\bm{k}}$ is given by the constant term of Eq.~\eqref{eq:hMD}.
We shall present an example of $T_G$ which cancels the contribution from $h^N_{\bm{k}}$, itself only a function of $k$,
up to an arbitrarily small correction. We choose the Fourier transform of $T_G$ to be of the form
\begin{align}\label{eq:TGMD}
T_{G,\bm{q}}=\frac{(2\pi)^{3/4}F(q)}{\varepsilon^{3/2}}
\left(e^{-\frac{(\bm{q}-\bm{q}_+)^2}{2\varepsilon^2}}+e^{-\frac{(\bm{q}-\bm{q}_-)^2}{2\varepsilon^2}}\right)^{1/2}\,,
\end{align}
where $\varepsilon$ is an arbitrarily small parameter satisfying $\varepsilon\ll k$
so that the Gaussians are very sharp, $F(q)$ is an arbitrary function of $q$ and $\bm{q}_+$ and $\bm{q}_-$ are such that
\begin{align}\label{eq:qpm}
|\bm{q}_+|=|\bm{q}_-|=k\quad,\quad \bm{q}_++\bm{q}_-=\bm{k}\,.
\end{align}
We can write explicitly a pair of vectors $q_{\pm,i}$ that satisfy the requirements \eqref{eq:qpm}
by introducing a unit vector $\hat{s_i}$ orthogonal to $k_i$, i.e. $\hat{s}^ik_i=0$. In this way, we arrive at
\begin{align}\label{eq:qpm2}
\frac{q_{\pm,i}}{k}=\frac{1}{2}\hat{k}_i\pm \frac{\sqrt{3}}{2}\hat{s_i}\,,
\end{align}
where $\hat{k}_i$ is the unit vector of $k_i$.
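One can check the requirements \eqref{eq:qpm} directly from Eq.~\eqref{eq:qpm2}: using $\hat{k}^i\hat{s}_i=0$ and $\hat{k}^i\hat{k}_i=\hat{s}^i\hat{s}_i=1$, one finds
\begin{align}
\frac{|\bm{q}_\pm|^2}{k^2}=\frac{1}{4}+\frac{3}{4}=1\,,\qquad
\frac{q_{+,i}+q_{-,i}}{k}=\hat{k}_i\,.\nonumber
\end{align}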
By plugging in Eqs.~\eqref{eq:TGMD} and \eqref{eq:qpm2} into the last term of Eq.~\eqref{eq:heomMD2} we find that
\begin{align}\label{eq:projectionTT}
h^G_{\bm{k},\lambda}=h^N_{\bm{k},\lambda}-\tfrac{3}{2}e_\lambda^{ij}(\bm{k})\hat{s_i}\hat{s_j}k^2F^2(k)
+O\left(\varepsilon^2/k^2\right)\,.
\end{align}
To derive Eq.~\eqref{eq:projectionTT} we used that $\bm{k}-\bm{q}-\bm{q}_\pm=\bm{q}_{\mp}-\bm{q}$
and evaluated the integrand at $\bm{q}=\bm{q}_{\pm}$.
This is a good approximation as long as $F(q)$ is a smoothly varying function of $q$.
Since $\varepsilon$ can be arbitrarily small and we are interested only in subhorizon scales, the approximation is accurate enough.
We may express the polarization tensor in terms of a third orthogonal vector $\hat{t_i}$ such that $\hat{k}_i\hat{t}^i=\hat{s_i}\hat{t}^i=0$
as $\sqrt{2}e^{ij}_+=\hat{s_i}\hat{s_j}-\hat{t_i}\hat{t_j}$ and $\sqrt{2}e^{ij}_\times=\hat{s_i}\hat{t_j}+\hat{t_i}\hat{s_j}$, for example.
In this case, only the $\lambda=+$ contribution to the gauge transformation \eqref{eq:projectionTT} is non-vanishing.
Then setting
\begin{align}
F^2(k)={\frac{2\sqrt{2}}{3k^2}h^N_{k,+}}\,,
\end{align}
we can make the $\lambda=+$ polarization of the induced tensor modes \eqref{eq:hMD} arbitrarily small.
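Indeed, with these conventions one has $e^{ij}_+\hat{s}_i\hat{s}_j=1/\sqrt{2}$ and $e^{ij}_\times\hat{s}_i\hat{s}_j=0$, so the gauge term in \eqref{eq:projectionTT} becomes
\begin{align}
\frac{3}{2}e^{ij}_+(\bm{k})\hat{s}_i\hat{s}_j\,k^2F^2(k)
=\frac{3}{2\sqrt{2}}\,k^2\,\frac{2\sqrt{2}}{3k^2}\,h^N_{k,+}=h^N_{k,+}\,,
\end{align}
so that $h^G_{\bm{k},+}=O(\varepsilon^2/k^2)$.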
We can do the same for the $\lambda=\times$ polarization by simply rotating $\hat{\bm{s}}$
and $\hat{\bm{t}}$
by 45 degrees.
It should be noted that the resulting form of $T_G$ is well-behaved on short scales as its amplitude
very crudely decays as $T_G(k)\sim {\Phi_N(k)}/{k}$.
This implies that indeed the induced GWs generated in a dust dominated universe may be removed by a small change of gauge.
\section{Conclusions \label{sec:conclusions}}
Induced GWs are a very important probe of the early universe and inflation. They are even more relevant in light of future GW detectors such as LISA \cite{Audley:2017drz}, Taiji \cite{Guo:2018npi}, Tianqin \cite{Luo:2015ght}, DECIGO \cite{Seto:2001qf,Yagi:2011wg},
AION/MAGIS \cite{Badurina:2019hst}, ET \cite{Maggiore:2019uih} and PTA \cite{Lentati:2015qwp,Bian:2020bps}.
However, some doubts were expressed in the literature about the gauge invariance of the induced GW spectrum \cite{Hwang:2017oxa}.
If that were the case, it would pose serious problems for the theoretical predictions of induced GWs. Proposed solutions so far rely on
either finding gauge invariant
variables \cite{Wang:2019zhj,DeLuca:2019ufz,Yuan:2019fwv,Nakamura:2019zbe,Chang:2020tji,Chang:2020iji,Chang:2020mky}
or arguing which would be the most suitable gauge \cite{DeLuca:2019ufz,Inomata:2019yww}.
While these studies go certainly in the right direction, they are not a satisfactory solution of the gauge dependence pointed out
in Ref.~\cite{Hwang:2017oxa}. The main reasons are two-fold: ($i$) a particular choice of gauge invariant variables is very much like
choosing a particular gauge fixing procedure and ($ii$) one should be able to express the observable quantity in different gauges,
albeit physically meaningful ones, and find the same GW spectrum in an arbitrary cosmological background.
In this paper, we made a new attempt to clarify the issue of the seeming gauge dependence of the induced GW spectrum.
We showed that:
\begin{list}{}{}
\item[($i$)]
The GW spectrum is only meaningful for subhorizon modes when the source term is no longer active.
This is in analogy with the GWs from the mergers of binary black holes, where the GWs are well-defined far enough from the source.
\item[($ii$)]
The Newton (or shear-free) gauge is suitable for both the calculations and physical interpretations of the induced GWs.
This is because the equations of motion for the tensor modes $h_{ij}$ at second order in a general gauge \eqref{eq:heom}
naturally give the simplest form of the source term in the Newton gauge \eqref{eq:heomN}, and because
the Newton gauge is well-behaved on the smallest scales where the Newton gravity limit is recovered.
\item[($iii$)]
The GW spectrum is gauge invariant under a set of reasonable gauge transformations and independent of the cosmological background.
We define the set of reasonable gauges as those gauges which are well-behaved deep inside the horizon.
More concretely, we require that after a gauge transformation the resulting trace part of the metric perturbation is of the order of or smaller than
the gravitational potential in the Newton gauge \eqref{eq:condition1}.
This sets certain conditions on the gauge parameters given by Eq.~\eqref{eq:condition2}.
\end{list}
We discussed the point $(iii)$ for three different cases in a universe dominated by a perfect fluid with a constant equation of state $w$ and
sound speed $c_s^2=w$. We found that well-behaved gauges on small scales are, for example,
the flat slicing, constant Hubble slicing and synchronous gauges.
However, special attention has to be paid to the residual gauge ambiguity in the case of synchronous gauge as it may lead to spurious GWs.
We also showed that the comoving slicing gauge violates the condition $(iii)$ and therefore the induced GW spectrum
in the comoving slicing gauge differs from the one in the Newton gauge. We argued that the comoving slicing gauge is a poor choice of gauge
to describe subhorizon physics as it would be equivalent to describing a GW detector in an oscillating coordinate frame, which could
naturally lead to gauge artifacts.
It is important to note that the requirement ($i$) excludes the case of induced GWs in a universe dominated by dust,
i.e. a perfect fluid with $w=c_s^2=0$. In this case, the Newton potential is constant on all scales and, therefore, the source term for the induced GWs is
continuously active throughout the dust dominated stage. This does not imply that our criteria ($i$) - ($iii$) are wrong.
First, as we argued in Sec.~\ref{sec:dustdom} the tensor modes induced in a dust universe can hardly be regarded as GWs.
They do not behave as a radiation fluid in an expanding background and their time-independence renders them unnoticeable by GW detectors.
Second and more importantly, we showed in Sec.~\ref{sec:dustdom} that the induced GWs during a dust dominated universe
can be gauged away by a small change of gauge. This means that one must follow the induced GWs until the universe transitions to
another stage and is dominated by a fluid with $c_s^2\neq 0$.
In the subsequent stage the induced GW spectrum is well-defined and invariant under the set of gauge transformations of point $(iii)$.
Our conclusion is in line with Refs.~\cite{Inomata:2019zqy,Inomata:2019ivs}, where they show that the dominant contribution to
the induced GWs in a dust dominated universe is always generated right after reheating.
Thus, after the universe is reheated our points ($i$) - ($iii$) apply and the induced GWs are approximately gauge invariant.
Although we have focused on the induced GWs in a universe with a constant equation of state, we expect that the same conclusions
would hold in more general situations. Furthermore, a definitive solution to the gauge issue would be to find a definition of the effective
energy density of GWs which is invariant under general gauge transformations at second order.
Despite being a very interesting direction, it is out of the scope of this paper. We leave these issues for future work.
\section*{Acknowledgments}
G.D. would like to thank J-O.~Gong, K.~Inomata, M.~Kamionkowski, S.~Matarrese and S.~Pi for insightful discussions. We would like to thank J.~Gurian, D.~Jeong, J-c.~Hwang and H.~Noh for useful comments. G.D. as a Fellini fellow was supported
by the European Union’s Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant
agreement No 754496. M.S. was supported in part by the JSPS KAKENHI Nos.~19H01895, 20H04727 and 20H05853. Calculations of
cosmological perturbation theory at second order were checked using the \texttt{xPand (xAct) Mathematica} package \cite{Pitrou:2013hga}.
\section{Introduction}
Regression models allow the relationship between some covariates and a target variable to be investigated.
These models are defined by an equation on the conditional moment of a transformation of the noise.
This transformation is generally the piecewise derivative of the loss function that defines the type of regression: mean, robust, quantile \citep{koenker1978regression,horowitz2005nonparametric,wei2009quantile}, expectile \citep{newey1987asymmetric,ehm2016quantiles,daouia2018estimation}.
\medskip
The regression model with a fixed group effect is central within this generic paradigm. It considers that the intercept of the regression depends on the group to which the subject belongs (the intercept is common for subjects belonging to the same group but different for subjects belonging to different groups). However, in many applications, the group variable is not observed but other variables related to this variable are observed.
For instance, suppose we want to investigate high blood pressure by considering the levels of physical activity among the covariates. In many cohorts, the level of physical activity of a subject is generally not directly available (because such a variable is not easily measurable) but many variables on the mean time spent doing different activities are available.
Note that the regression model with a fixed group effect and a latent group variable is a specific mixture of regressions \citep{wang1996mixed,hunter2012semiparametric,wu2016mixtures} where only the intercepts of the regressions are different among the components and where the mixture weights depend on some other variables.
Moreover, the regression model with a fixed group effect and a latent group variable can be interpreted as a regression model with specific quantization of the variables that we use to estimate the group membership (see for instance \citet{CHARLIER201514} for the quantization in quantile regression).
\medskip
The estimation of a regression model with a fixed group effect is generally performed using a \emph{two-step approach} as for instance in epidemiology or in economics \citep{auray2015clustering,ando2016panel,zhang2019quantile}. As a first step, a clustering on the individual based on the group related variables is performed to obtain an estimator of the group. As a second step, the regression model is fit by using the estimator of the group variable among the covariates.
The second step considers a regression model with measurement errors on the covariates. Indeed, the group variable is estimated in the clustering step with errors. Hence, it is well-known that the resulting estimators of the regression parameters are biased (see for instance \citet{carroll1991semiparametric, nakamura1992proportional, bertrand2017robustness}). The bias depends on the accuracy of the clustering step. Note that, although the target variable contains information about the group variable (and is thus relevant for clustering), this information is not used in the two-step approach, leading to a suboptimal procedure.
\medskip
Some simultaneous approaches have been considered in the framework of latent variable models, such as latent class and latent profile analysis \citep{guo2006latent,kim2016modeling}. In this framework, the authors introduce latent class and latent factor variables to explain the heterogeneity of observed variables. However no direct focus is taken to explain a particular variable given other ones, and the approach is limited to a parametric framework. Another related reference is the work of \cite{sammel1997latent}, where the authors introduce a latent variable mixed effects model, which allows for arbitrary covariate effects, as well as direct modelling of covariates on the latent variable. Some other relevant references can be found in the field of concomitant variables~\citep{dayton_concomitant-variable_1988,grun_flexmix_2008,vankatova_evaluation_2017}, where some additional variables are used to locally adjust the weights of the mixture of regressions. But, these approaches are rather focused on the mixture of regressions task than on clustering data based on concomitant variables.
\medskip
We propose a new procedure (hereafter referred to as the \emph{simultaneous approach}) that estimates the clustering and the regression models simultaneously in a semi-parametric framework \citep{hunter2011nonparametric}, thus circumventing the limits of the standard procedure (biased estimators). We demonstrate that this procedure improves both the estimator of the partition and the estimators of the regression parameters. A full parametric setting is also presented; however, if either the clustering or the regression model is misspecified, its modeling bias could contaminate the results of the other.
Thus we focus on semi-parametric mixture where the component densities are defined as a product of univariate densities \citep{chauveau2015semi,zhu2016theoretical,zhengJASA2019}, which is identifiable if the univariate densities are linearly independent and if at least three variables are used for clustering \citep{allman2009identifiability}. Note that, mixtures of symmetric distributions \citep{hunterAOS2007,butuceaScand2014} could also be considered in a similar way.
Semi-parametric inference is achieved by a maximum smoothed likelihood approach \citep{levine2011maximum} via a Majorization-Minorization (MM) algorithm \citep{hunter2004tutorial}. Note that selecting the number of components in a semi-parametric mixture is not easy \citep{kasahara2014non,kwon2019estimation}. However, in our context, the number of components can be selected according to the quality of the prediction of the target variable.
\medskip
This paper is organized as follows.
Section~\ref{sec:grl} introduces a general context where a statistical analysis requires both methods of clustering and prediction, and it presents the standard approach that estimates the parameters in two steps.
Section~\ref{sec:unified} shows that a procedure that allows a simultaneous estimation of the clustering and of the regression parameters generally outperforms the two-step approach. This section also briefly presents the simultaneous procedure in a parametric framework, then focuses on the semi-parametric framework.
Section~\ref{sec:simu} presents numerical experiments on simulated data showing the benefits of the proposed approach.
Section~\ref{sec:ill} illustrates our proposition for problems associated with high blood pressure prevention.
Section~\ref{sec:conclusion} provides a conclusion and discussion about extensions. The mathematical details are presented in Appendix~A.
\section{Embedding clustering and prediction models} \label{sec:grl}
\subsection{Data presentation}
Let $(V^\top,X^\top,Y)^\top$ be the set of the random variables where $V=(U^\top,Z^\top)^\top$ is a $d_V=d_U + K$ dimensional vector used as covariates for the prediction of the univariate variable $Y\in\mathbb{R}$, $X$ is a $d_X$ dimensional vector and $Z=(Z_1,\ldots,Z_K)^\top\in\mathcal{Z}$ is a categorical variable with $K$ levels. The variable $Z$ indicates the group membership such that $Z_k=1$ if the subject belongs to cluster $k$ and otherwise $Z_k=0$. The realizations of $(U^\top,X^\top,Y)^\top$ are observed but the realizations of $Z$ are unobserved. Thus, $X$ is a set of proxy variables used to estimate the realizations of $Z$. Considering the high blood pressure example, $Y$ corresponds to the diastolic blood pressure, $U$ is the set of observed covariates (gender, age, alcohol consumption, obesity and sleep quality), $X$ is the set of covariates measuring the level of physical activity and $Z$ indicates the membership of a group of subjects with similar physical activity behaviours. The observed data are $n$ independent copies of $(U^\top,X^\top,Y)^\top$ denoted by $\mathbb{U}=(u_1,\ldots,u_n)^\top$, $\mathbb{X}=(x_1,\ldots,x_n)^\top$ and $\mathbb{Y}=(y_1,\ldots,y_n)^\top$ respectively. The $n$ unobserved realizations of $Z$ are denoted by $\mathbb{Z}=(z_1,\ldots,z_n)^\top$.
\subsection{Motivating example}
We use the following example throughout the paper, which examines the general objective of high blood pressure prevention. Here, we focus on the detection of indicators related to the diastolic blood pressure ($Y$); see \citet{berney2018isolated} for the interest of the study. The indicators we wish to consider are the gender, the age, the alcohol consumption, the obesity, the sleep quality and the level of physical activity ($V$). However, the level of physical activity ($Z$) of a patient is not directly measured and we only have a set of variables which describes the physical activity ($X$), such as the practice of a recreational activity, hours spent watching TV, hours spent on the computer, {\it etc.} More details of the data are provided in Section~\ref{sec:ill}. The study of the different indicators is performed using a regression model that explains the diastolic blood pressure with a set of covariates where one variable (the physical activity) was not directly observed. Information about this latter variable is available from other variables that do not appear in the regression.
\subsection{Introducing the joint predictive clustering model} \label{sec:model}
\paragraph{Regression model}
Let $\mathcal{L}(\cdot)$ be a loss function and $\rho(\cdot)$ its piecewise derivative. The loss function $\mathcal{L}$ allows the regression model of $Y$ on $V$ to be specified with a fixed group effect given by
\begin{equation} \label{eq:modelpred}
Y = V^\top \beta + \varepsilon \text{ with } \mathbb{E}[\rho(\varepsilon)| V]=0,
\end{equation}
where $\beta=(\gamma^\top,\delta^\top)^\top \in\mathbb{R}^{d_V}$, $\gamma\in\mathbb{R}^{d_U}$ are the coefficients of $U$, $\delta=(\delta_1,\ldots,\delta_K)^\top\in \mathbb{R}^K$ are the coefficients of $Z$ (\emph{i.e.,} the parameters of the group effect), and $\varepsilon$ is the noise. Note that for reasons of identifiability, the model does not have an intercept. The choice of $\mathcal{L}$ allows many models to be considered and, among them, one can cite the mean regression (with $\mathcal{L}(t)=t^2$ and $\rho(t)=2t$), the $\tau$-quantile regression (with $\mathcal{L}(t)= |t|+(2\tau-1)t$ and $\rho(\varepsilon)=\tau - \mathds{1}_{\{\varepsilon \leq 0\}} $; \citet{koenker1978regression}), the $\tau$-expectile regression (with $\mathcal{L}(t)=|\tau - \mathbf{1}\{t\leq 0\}|t^2$ and $\rho(t) = 2t( (1-\tau) \mathbf{1}\{t\leq 0\} + \tau \mathbf{1}\{t > 0\} )$; \citet{newey1987asymmetric}), {\it etc.}
The restriction on the conditional moment of $\rho(\varepsilon)$ given $V$ is sufficient to define a model and allows for parameter estimation. However, obtaining the maximum likelihood estimate (MLE) requires specific assumptions on the noise distribution. For instance, parameters of the mean regression can be consistently estimated with MLE by assuming a centred Gaussian noise. Similarly, the parameters of $\tau$-quantile (or $\tau$-expectile) regression can be consistently estimated with MLE by assuming that the noise follows an asymmetric Laplace (or an asymmetric normal) distribution \citep{yu2001bayesian,xing2017bayesian}. Hereafter, we denote the density of the noise $\varepsilon$ by $f_\varepsilon$.
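These moment conditions can be illustrated with a short numerical sketch (our own illustration, not part of the model): for a symmetric noise, $\mathbb{E}[\rho(\varepsilon)]=0$ holds for the mean regression, while for the $\tau$-quantile regression it holds only once the noise is centred at its $\tau$-quantile.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200_000, 0.9
eps = rng.normal(size=n)  # symmetric noise, mean 0 and median 0

# Mean regression: rho(t) = 2t, so E[rho(eps)] = 0 for centred noise.
print(np.mean(2 * eps))  # ~ 0

# tau-quantile regression: rho(t) = tau - 1{t <= 0}.
# For noise centred at its mean, E[rho(eps)] = tau - P(eps <= 0) = 0.4 here.
print(np.mean(tau - (eps <= 0)))  # ~ 0.4

# The condition holds once the noise is centred at its tau-quantile,
# i.e. eps' = eps - q_tau, so that P(eps' <= 0) = tau.
eps_q = eps - np.quantile(eps, tau)
print(np.mean(tau - (eps_q <= 0)))  # ~ 0
```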
\paragraph{Clustering model}
The distribution of $X$ given $Z_k=1$ is defined by the density $f_k(\cdot)$. Therefore, the marginal distribution of $X$ is a mixture model defined by the density
\begin{equation} \label{eq:modelclust}
f(x;\vartheta) = \sum_{k=1}^K \pi_k f_k(x),
\end{equation}
where $\vartheta=\{\pi_k,f_k;k=1,\ldots,K\}$, $\pi_k>0$ and $\sum_{k=1}^K \pi_k=1$ and where $f_k$ is the density of component $k$. In a parametric approach, $f_k$ is assumed to be parametric so it is denoted by $f_k(\cdot;\alpha_k)$ where $\alpha_k$ are the parameters of component $k$. In a semi-parametric approach, some assumptions are required to ensure model identifiability (see for instance \citet{chauveau2015semi}). In the following, the semi-parametric approaches are considered with the assumption that each $f_k$ is a product of univariate densities (see Section~\ref{sec:np}).
\paragraph{Joint clustering and regression model}
The joint model assumes that $Z$ explains the dependency between $Y$ and $X$ (\emph{i.e.,} $Y$ and $X$ are conditionally independent given $Z$) and that $U$ and $(X^\top,Z^\top)$ are independent.
Moreover, the distribution of $(X,Y)$ given $U$ is also a mixture model defined by the density (noting $\theta=\{\vartheta\}\cup\{\delta_k;k=1,\ldots,K\}\cup\{\gamma,f_\varepsilon\}$)
\begin{equation} \label{eq:modeljoint}
f(x,y|u;\theta) = \sum_{k=1}^K \pi_k f_k(x) f_\varepsilon(y - u^\top\gamma - \delta_k),
\end{equation}
where, for $k=1,\ldots,K$ we have
\begin{equation}
\mathbb{E}[\rho(Y - U^\top\gamma - \delta_k)| U,Z_k=1]=0.\label{eq:condmoment}
\end{equation}
Note that \eqref{eq:modeljoint} is a particular mixture of regressions model where the mixture weights are proportional to $\pi_k f_k(x)$ (thus depending on covariates that do not appear in the regressions) and where only the intercepts (\emph{i.e.,} $\delta_1,\ldots,\delta_K$) are different among the regressions. Contrary to~\citet{grun_flexmix_2008} who consider the density $f(y|u,x;\theta)$ thus focusing on the regression framework, here we propose to consider the density $f(y,x|u;\theta)$ which balances the regression and the clustering frameworks.
\paragraph{Moment condition}
The following lemma gives the moment equation verified by the joint model. It will be used later to justify the need for a simultaneous approach.
\begin{lemma} \label{lemma:eqmoment}
Consider an identifiable model defined by \eqref{eq:modeljoint} and \eqref{eq:condmoment}, with $f_k(x)>0$ for any $x$ and $k$. Then, noting
$r^{X,Y}_k(x,y)=\frac{\pi_k f_k(x) f_\varepsilon(y - u^\top\gamma - \delta_k)}{\sum_{\ell=1}^K \pi_\ell f_\ell(x) f_\varepsilon(y - u^\top\gamma - \delta_\ell)}$, $\beta=(\gamma^\top,\delta^\top)^\top$
is the unique parameter satisfying
\begin{equation}
\forall k=1,\ldots,K,\; \mathbb{E}[r^{X,Y}_k(X,Y)\rho(Y - u^\top\gamma - \delta_k)|U,X]=0. \label{eq:eqmoment}
\end{equation}
\end{lemma}
\section{The proposed simultaneous estimation procedure} \label{sec:unified}
\subsection{Limits of the standard two-step approach estimation} \label{subsec:twostep}
The aim is to explain the distribution of $Y$ given $V=(U^\top,Z^\top)^\top$ from an observed sample. A direct estimation of the model \eqref{eq:modelpred} is not feasible because the realizations of $Z$ are unobserved. The standard approach considers the following two steps:
\begin{enumerate}
\item {\textbf{Clustering step}} Perform a clustering of $\mathbb{X}$ to obtain an estimated hard classification rule $\hat r^X:\mathbb{R}^{d_{X}} \rightarrow \mathcal{Z}$ or an estimated fuzzy classification rule $\hat r^X:\mathbb{R}^{d_{X}} \rightarrow \tilde{\mathcal{Z}}_K$ where $\tilde{\mathcal{Z}}_K$ is the simplex of size $K$.
\item {\textbf{Regression step}} Estimate the regression parameters given the estimator of the group memberships: $\hat\beta^{\hat r^X}:=(\hat\gamma^{\hat r^X \top}, \hat\delta^{\hat r^X \top})^\top = \argmin_{\beta} \sum_{i=1}^n\sum_{k=1}^K \hat r^X_k(x_i)\mathcal{L}(y_i - u_i^\top \gamma - \delta_k)$
where $\hat r^X_k(x_i)$ is the element $k$ of vector $\hat r^X(x_i)$. Note that $\hat r^X_k(x_i)$ is an estimator of the conditional probability that observation $i$ belongs to cluster $k$ given $x_i$, if the fuzzy classification rule is used.
\end{enumerate}
The following lemma states that the two-step approach produces asymptotically biased estimators of the partition and regression parameters, even if the optimal classification rule on $X$ is used.
\begin{lemma}\label{lemm:resasymptotic}
Consider an identifiable model defined by \eqref{eq:modeljoint} and \eqref{eq:condmoment}. Then:
\begin{enumerate}
\item If the classification rule $\hat r^X$ converges to the classification rule $\tilde r^X$ (\emph{i.e.,} the best classification rule based on $X$), then $\hat r^X$ is asymptotically suboptimal since
$\mathbb{E}\left[\sum_{k=1}^K \tilde r^X_k(X)Z_k\right]<\mathbb{E}\left[\sum_{k=1}^K r^{X,Y}_k(X,Y)Z_k\right]$,
with $\tilde{r}_k^X(x)\propto\pi_k f_k(x)$.
\item Let $f_\varepsilon$ define a random variable with finite variance and let $U$ have a covariance matrix with non-zero eigenvalues. Then, considering the quadratic loss and a consistent fuzzy classification rule $\hat r^X$, the estimator $\hat\gamma^{\hat r^X}$ is asymptotically unbiased but the estimator $\hat\delta^{\hat r^X}$ is asymptotically biased since
$\lim_{n\to\infty}\text{bias}( \hat\delta_k^{\hat r^X}) =\frac{\sum_{\ell=1}^K \Delta_{k\ell} \delta_\ell}{\sum_{h=1}^K \Delta_{kh}} - \delta_k$,
where $\Delta_{k\ell}=\mathbb{E}[r_k^X(X)r_\ell^X(X)]$.
\end{enumerate}
\end{lemma}
Thus the clustering step provides a suboptimal classification rule because the classification neglects the information given by $Y$. Consequently, the regression step provides estimators that are asymptotically biased, as it amounts to fitting a regression model with measurement errors in the covariates (for instance, considering the hard assignment, we have no guarantee of a perfect recovery of the partition, \emph{i.e.,} $\hat r^X(x_i) = z_i$, for $i=1,\ldots,n$). The measurement errors generally produce biases in the estimation. Finally, the quality of the estimated classification rule directly influences the quality of the estimators of the regression parameters.
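The attenuation of $\hat\delta^{\hat r^X}$ described in Lemma~\ref{lemm:resasymptotic} can be observed on a toy Monte Carlo sketch (our own illustration: quadratic loss, no covariates $U$, and the oracle fuzzy rule $\tilde r^X$ built from the known Gaussian components):

```python
import numpy as np

rng = np.random.default_rng(1)
n, xi = 50_000, 1.2816            # xi gives ~10% theoretical misclassification
delta = np.array([-1.0, 1.0])     # true group intercepts
z = rng.integers(0, 2, n)         # latent group labels
x = np.where(z == 1, xi, -xi) + rng.normal(size=n)
y = delta[z] + rng.normal(size=n)

# Step 1 (clustering): oracle fuzzy rule based on X only,
# r_k(x) proportional to pi_k f_k(x) with known N(-xi,1) and N(xi,1) components.
phi = lambda t: np.exp(-0.5 * t ** 2)   # unnormalized standard normal density
r1 = phi(x - xi) / (phi(x - xi) + phi(x + xi))
R = np.column_stack([1 - r1, r1])

# Step 2 (regression): weighted least squares on the soft labels,
# delta_hat_k = sum_i R_ik y_i / sum_i R_ik.
delta_hat = (R * y[:, None]).sum(0) / R.sum(0)
print(delta_hat)   # attenuated toward 0: magnitudes clearly below 1
```

Even with the best possible $X$-based rule, the soft labels mix the two intercepts, which is exactly the bias $\sum_\ell \Delta_{k\ell}\delta_\ell/\sum_h \Delta_{kh} - \delta_k$ of the lemma.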
\subsection{Limits of a parametric simultaneous procedure} \label{sec:param}
In this section, we consider a probabilistic approach with a parametric point-of-view. Thus, the family of distributions of each component $k$ is supposed to be known and parameterized by $\alpha_k$. Moreover, the distribution of the noise $f_\varepsilon$ is chosen according to the type of the regression under consideration (see the discussion in Section~\ref{sec:model}).
The aim of the simultaneous procedure can be achieved by maximizing the log-likelihood of $\mathbb{Y},\mathbb{X}$ given $\mathbb{U}$ with respect to~$\theta$
$$
\ell(\theta;\mathbb{Y},\mathbb{X}\mid \mathbb{U}) = \sum_{i=1}^n \ln\left(
\sum_{k=1}^K \pi_k f_k(x_i;\alpha_k) f_\varepsilon(y_i - u_i^\top \gamma - \delta_k)
\right),
$$
where $\theta=(\pi_1,\ldots,\pi_K,\alpha_1^\top,\ldots,\alpha_K^\top,\beta^\top)$ groups all the model parameters. Indeed, the maximum likelihood inference using $\ell(\theta;\mathbb{Y},\mathbb{X}\mid \mathbb{U})$ allows for simultaneously learning the classification rule based on $(X,Y)$ and the regression coefficients. This function cannot be directly maximized, so we consider the complete-data log-likelihood with data $(\mathbb{Y},\mathbb{X},\mathbb{Z})$ given $\mathbb{U}$ defined by
$$
\ell(\theta;\mathbb{Y},\mathbb{X}, \mathbb{Z}\mid \mathbb{U}) = \sum_{i=1}^n \sum_{k=1}^K z_{ik} \ln\left(
\pi_k f_k(x_i;\alpha_k) f_\varepsilon(y_i - u_i^\top \gamma - \delta_k)
\right).
$$
The MLE $\hat\theta$ can be obtained via an EM algorithm presented in Appendix~\ref{sec:EM}.
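For concreteness, a minimal EM sketch for the well-specified Gaussian case (quadratic loss, Gaussian components with unit variances; the function below is our simplified illustration, not the exact algorithm of the appendix):

```python
import numpy as np

def em_fit(x, y, u, K=2, n_iter=50):
    """EM sketch for the joint model: Gaussian components with unit variances
    and standard Gaussian noise (i.e. quadratic-loss regression)."""
    n, d = x.shape
    p = u.shape[1]
    pi = np.full(K, 1.0 / K)
    # deterministic initialization: bin the first coordinate into K groups
    bins = np.quantile(x[:, 0], np.linspace(0, 1, K + 1)[1:-1])
    lab = np.digitize(x[:, 0], bins)
    mu = np.vstack([x[lab == k].mean(0) for k in range(K)])
    gamma, delta = np.zeros(p), np.zeros(K)
    for _ in range(n_iter):
        # E-step: the posterior weights use BOTH x and the regression
        # residuals, so the assumed noise density enters the classification.
        res = y[:, None] - (u @ gamma)[:, None] - delta[None, :]   # n x K
        logw = (np.log(pi)
                - 0.5 * ((x[:, None, :] - mu[None]) ** 2).sum(-1)
                - 0.5 * res ** 2)
        logw -= logw.max(1, keepdims=True)
        t = np.exp(logw)
        t /= t.sum(1, keepdims=True)
        # M-step: closed-form updates for pi and mu ...
        nk = t.sum(0)
        pi = nk / n
        mu = (t.T @ x) / nk[:, None]
        # ... and weighted least squares for beta = (gamma, delta):
        # stack one copy of the design per class, weighted by sqrt(t_ik).
        A = np.hstack([np.tile(u, (K, 1)), np.repeat(np.eye(K), n, axis=0)])
        w = np.sqrt(t.T.reshape(-1))          # class-major order, matches A
        coef, *_ = np.linalg.lstsq(A * w[:, None], np.tile(y, K) * w,
                                   rcond=None)
        gamma, delta = coef[:p], coef[p:]
    return pi, mu, gamma, delta, t
```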
Consider an identifiable model defined by \eqref{eq:modeljoint} and \eqref{eq:condmoment}. Then:
\begin{enumerate}
\item If all the parametric distributions are well-specified, then properties of the MLE imply that the classification rule is asymptotically optimal and $\hat\beta$ is asymptotically unbiased.
\item If at least one parametric distribution is misspecified, then the classification rule is generally asymptotically suboptimal and $\hat\beta$ is generally asymptotically biased.
\end{enumerate}
Note that the distribution of the noise appears at the E-step and thus influences the classification rule. Hence, the classification rule is degraded if the distribution of the noise is misspecified. This is not the case when estimation is performed using the two-step approach, since clustering is performed prior to regression, and the regression can still be unbiased if the moment condition (see Lemma~\ref{lemma:eqmoment}) is well-specified. Thus, in the next section, we propose a semi-parametric approach that circumvents this issue because it does not assume a specific family of distributions for the noise and the components.
\subsection{Advised simultaneous semi-parametric procedure} \label{sec:np}
\paragraph{Semi-parametric model}
In this section, we consider the semi-parametric version of the model defined by \eqref{eq:modeljoint}, where the densities of the components are assumed to be products of univariate densities. Thus, we have
\begin{equation}\label{eq:npmodel}
f(y,x \mid u ; \theta ) = \sum_{k=1}^K \pi_k f_k(y,x \mid u ; \theta ) = \sum_{k=1}^K \pi_k \prod_{j=1}^{d_X} f_{kj}(x_j) f_{\varepsilon} (y - u^\top \gamma - \delta_k),
\end{equation}
where $\theta$ groups all the finite and infinite parameters and $\beta$ is such that \eqref{eq:eqmoment} holds.
A sufficient condition implying identifiability for model \eqref{eq:npmodel} is that the marginal distribution of $X$ is identifiable; thus, a sufficient condition is to consider linearly independent densities $f_{kj}$'s and $d_X\geq 3$ \citep{allman2009identifiability}. For the sake of simplicity we will note $w = (x^\top,y)^{\top}$ with $w \in \mathbb{R}^{d_X + 1}$, such that $f(y,x \mid u ; \theta )= \sum_{k=1}^K \pi_k f_k(w \mid u ; \theta )$.
\paragraph{Smoothed log-likelihood}
Let $\mathcal S$ be the smoothing operator defined by
$\mathcal S f_k(w\mid u;\theta) = \int K_h(w - \tilde w) f_k(\tilde w\mid u; \theta) d\tilde w$, where $K_h(a)=\prod_{j=1}^d K_h(a_j)$ for $a\in\mathbb{R}^d$, and $K_h(a_j) = h^{-1} K(h^{-1}a_j)$ is a rescaled kernel function with bandwidth $h$.
For the semi-parametric approach, the estimation can be performed by maximizing the smoothed log-likelihood \citep{levine2011maximum} defined by $\ell(\theta) = \sum_{i=1}^n \ln \left( \sum_{k=1}^K \pi_k \left(\mathcal N f_k \right)(w_i\mid u_i; \theta) \right)$
subject to the empirical counterpart of \eqref{eq:eqmoment}, namely
$\frac{1}{n} \sum_{i=1}^n \sum_{k=1}^K \frac{\pi_k f_k(w_i\mid u_i;\theta)}{\sum_{\ell=1}^K \pi_\ell f_\ell(w_i\mid u_i;\theta)} \rho(y_i - u_i^\top\gamma - \delta_k) = 0$,
where $\left(\mathcal N f_k \right)(w\mid u; \theta) = \exp \left\{\mathcal S \ln f_k(w\mid u;\theta) \right\}=\exp \left\{ \int K_h(w - \tilde w) \ln f_k(\tilde w\mid u; \theta) d\tilde w \right\} $.
\paragraph{Majorization-Minorization algorithm} Parameter estimation is achieved via a Majorization-Minorization algorithm. Given an initial value $\theta^{[0]}$, this algorithm iterates between a majorization and a minorization step. Thus, an iteration $[r]$ is defined by
\begin{itemize}
\item Majorization step:
$ t_{ik}^{[r-1]} \propto \pi_k^{[r-1]} \left(\mathcal N f_k^{[r-1]} \right)(w_i\mid u_i; \theta^{[r-1]}) .
$
\item Minorization step:
\begin{enumerate}
\item Updating the parametric elements
$$\pi^{[r]}_k = \frac{1}{n} \sum_{i} t_{ik}^{[r-1]} \text{ and }
\beta^{[r]} = \argmin_{\beta} \sum_{i,k} t_{ik}^{[r-1]} \rho(y_i - u_i^\top \gamma - \delta_k).$$
\item Updating the nonparametric elements
$$
f^{[r]}_{kj} (a) = \frac{1}{n \pi_k^{[r]}} \sum_{i} t_{ik}^{[r-1]} K_h(x_{ij} - a)
\text{ and }
f^{[r]}_{\varepsilon} (a) = \frac{1}{n} \sum_{i,k} t_{ik}^{[r-1]} K_h(y_i - u_i^\top \gamma^{[r]} - \delta_k^{[r]} - a).
$$
\end{enumerate}
\end{itemize}
The Majorization-Minorization algorithm is monotonic for the smoothed log-likelihood. It is a direct consequence of the monotony of the algorithm of \citet{levine2011maximum} where we use the fact that, in order to satisfy the moment condition defined in \eqref{eq:eqmoment} of Lemma~\ref{lemma:eqmoment}, we must have $\beta^{[r]} = \argmin_{\beta} \sum_{i=1}^n \sum_{k=1}^K t_{ik}^{[r-1]} \rho(y_i - u_i^\top \gamma - \delta_k)$.
As in \citet{hunter2012semiparametric}, the majorization step is not explicit. However, because it only involves univariate integrals, it can be efficiently computed by numerical approximations. Finally, bandwidth selection can be performed as usual for semi-parametric mixtures (see \citet{chauveau2015semi}). However, as in any supervised problem, we can use the cross-validated accuracy of the prediction of $Y$ for bandwidth selection.
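To fix ideas, the iteration above can be sketched as follows (our own simplified illustration: quadratic loss, Gaussian kernel, and the smoothed densities $\mathcal N f_k$ replaced by plain weighted kernel density estimates, so this approximates rather than reproduces the exact algorithm):

```python
import numpy as np

def kde(a, pts, w, h):
    """Weighted Gaussian kernel density estimate at points `a`."""
    diff = (a[:, None] - pts[None, :]) / h
    return (w * np.exp(-0.5 * diff ** 2)).sum(1) / (w.sum() * h
                                                    * np.sqrt(2 * np.pi))

def mm_fit(x, y, u, K=2, n_iter=15):
    """Sketch of the semi-parametric procedure (written for K=2)."""
    n, dX = x.shape
    p = u.shape[1]
    h = n ** (-0.2)                         # bandwidth h = n^(-1/5)
    # deterministic soft initialization from the first clustering variable
    lab = (x[:, 0] >= np.median(x[:, 0])).astype(int)
    t = np.eye(K)[lab] * 0.9 + 0.05
    t /= t.sum(1, keepdims=True)
    gamma, delta = np.zeros(p), np.zeros(K)
    for _ in range(n_iter):
        pi = t.mean(0)
        # parametric update: weighted least squares for (gamma, delta)
        A = np.hstack([np.tile(u, (K, 1)), np.repeat(np.eye(K), n, axis=0)])
        w = np.sqrt(t.T.reshape(-1))        # class-major order, matches A
        coef, *_ = np.linalg.lstsq(A * w[:, None], np.tile(y, K) * w,
                                   rcond=None)
        gamma, delta = coef[:p], coef[p:]
        # nonparametric updates: weighted KDEs for f_kj and f_eps
        res = y[:, None] - (u @ gamma)[:, None] - delta[None, :]    # n x K
        f_eps = kde(res.T.reshape(-1), res.T.reshape(-1),
                    t.T.reshape(-1), h)
        logf = np.zeros((n, K))
        for k in range(K):
            for j in range(dX):
                logf[:, k] += np.log(kde(x[:, j], x[:, j], t[:, k], h)
                                     + 1e-300)
        # posterior weights for the next iteration
        dens = pi * np.exp(logf) * f_eps.reshape(K, n).T
        t = dens / dens.sum(1, keepdims=True)
    return pi, gamma, delta, t
```

Because the posterior weights use both the clustering variables and the regression residuals, the soft labels sharpen over the iterations and the intercept estimates are not attenuated as in the two-step approach.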
\section{Numerical experiments}\label{sec:simu}
\subsection{Simulation setup} \label{sec:simusetup}
Data are generated from a bicomponent mixture with equal proportions (\emph{i.e.,} $K=2$ and $\pi_1=\pi_2=1/2)$ such that the density of $X_i$ given $Z_i$, is a product of univariate densities. For $j=1,\ldots,4$, $X_{ij}= (-1)^{Z_{i1}} \xi + \eta_{ij}$ where the group variable $Z_i$ is sampled according to a multinomial distribution of parameters $\pi_1$ and $\pi_2$, and $\eta_{ij}$ are independent and identically distributed random variables.
Moreover, we have $U_i\sim\mathcal{N}_2(0,\mathbf{I}_2)$ and $Y_i~=~(Z_{i1},Z_{i2},U_{i1},U_{i2})\beta + \varepsilon_i$, with $\beta = (-1,1,1,1)^\top$. Different distributions are used for $\eta_{ij}$ and $\varepsilon_i$. For each case, $100$ samples are generated, and the parameter $\xi$, controlling the class overlap, is tuned to obtain a theoretical misclassification rate of 0.10. The approaches are compared using the MSE of the regression coefficient $\beta$ and the adjusted Rand index (ARI) between the true and the estimated partitions. The semi-parametric approach is applied with a fixed bandwidth $h=n^{-1/5}$; this choice is made for the sake of simplicity, and the bandwidth could be tuned as in~\cite{chauveau2015semi}.
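The data-generating mechanism above can be sketched as follows. The function name and the value of $\xi$ are illustrative (in the experiments $\xi$ is tuned to reach the 0.10 theoretical misclassification rate); the sampler arguments select the case (Gaussian, shifted exponential, or Student distributions).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, xi, eta_sampler, eps_sampler):
    """Sample one dataset from the bicomponent design (K=2, pi_1=pi_2=1/2)."""
    Z1 = rng.integers(0, 2, size=n)                     # Z_i1 ~ Bernoulli(1/2)
    Z = np.column_stack([Z1, 1 - Z1]).astype(float)     # one-hot group labels
    # X_ij = (-1)^{Z_i1} xi + eta_ij for j = 1, ..., 4
    X = ((-1.0) ** Z1)[:, None] * xi + eta_sampler((n, 4))
    U = rng.standard_normal((n, 2))                     # U_i ~ N_2(0, I_2)
    beta = np.array([-1.0, 1.0, 1.0, 1.0])
    # Y_i = (Z_i1, Z_i2, U_i1, U_i2) beta + eps_i
    y = np.hstack([Z, U]) @ beta + eps_sampler(n)
    return X, U, Z, y

# case-1: eta_ij and eps_i follow standard Gaussian distributions
X, U, Z, y = simulate(n=1000, xi=1.5,
                      eta_sampler=rng.standard_normal,
                      eps_sampler=rng.standard_normal)
```

The other cases are obtained by swapping in Student or centered exponential samplers for `eta_sampler` and `eps_sampler`.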
\subsection{Method comparison} \label{sec:simucompare}
In this experiment, we show that the simultaneous procedure outperforms the standard two-step procedure, in both the parametric and semi-parametric frameworks. In the case where the parametric model is well-specified, we show that its results are equivalent to those obtained by the semi-parametric model. Moreover, we show that if one distribution is misspecified (component distribution or noise distribution), the results of the parametric approach deteriorate even if the moment condition of the regression model is well-specified. Thus, we advise using the semi-parametric model when the family of the distributions is unknown, to prevent bias in the estimation.
In this experiment, we focus on the quadratic loss. We consider one case where the parametric model is well-specified (case-1), where $\eta_{ij}$ and $\varepsilon_i$ follow standard Gaussian distributions. We also consider one case where the parametric model is misspecified on the noise distribution (case-2), where $\eta_{ij}$ follows a standard Gaussian distribution and $\varepsilon_i=\tau_i -1$ with $\tau_i\sim\mathcal{E}xp(1)$, and one case where the parametric model is misspecified on the component distributions (case-3), where $\eta_{ij}$ follows a Student distribution with 3 degrees of freedom and $\varepsilon_i$ follows a standard Gaussian distribution. Finally, we consider one case where the parametric model is misspecified on both the component and noise distributions (case-4), where $\eta_{ij}$ and $\varepsilon_i$ follow Student distributions with 3 degrees of freedom. Figure~\ref{fig:compare} shows the MSE of the regression parameters and the ARI obtained.
\begin{figure}[ht!]
\centering \includegraphics[scale=0.45]{plotsimu-Comparing.png}
\caption{\label{fig:compare}Boxplots of the MSE of the estimators of the regression parameters and ARI obtained by the simultaneous and two-step methods, in a parametric and semi-parametric framework, on 100 samples of size $n$.}
\end{figure}
\subsection{Robust regression} \label{sec:simurobust}
When the noise of a regression follows a heavy-tailed distribution, robust regressions (median, Huber and logcosh regressions) improve the estimators of the regression coefficients compared to the ordinary least squares estimators. Although the simultaneous parametric approach could accommodate such regressions under a suitable assumption on the noise distribution, the parametric assumptions required would be quite unrealistic (\emph{e.g.,} a Laplace distribution for the median regression). Thus, we now illustrate that the simultaneous approach can easily accommodate robust regressions in a semi-parametric framework, and that the resulting estimators are better than those obtained with the quadratic loss.
In this experiment, we consider that $\eta_{ij}$ and $\varepsilon_i$ follow independent Student distributions with three degrees of freedom. Figure~\ref{fig:robust} shows the MSE of the regression parameters and the ARI obtained by the mean regression, the median regression, the Huber regression with parameter 1 and the logcosh regression. The results show that the simultaneous approach improves the estimators (according to the MSE and the ARI) for any type of regression and any sample size. Moreover, robust regressions improve the accuracy of the estimators of the regression parameters. However, for this simulation setup, this improvement does not affect the accuracy of the estimated partitions.
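The four losses compared above can be written compactly. The definitions below are the standard ones (with the Huber parameter 1 used in the text); they are a sketch, not code from the paper.

```python
import numpy as np

def rho_mean(r):                        # quadratic loss (mean regression)
    return r ** 2

def rho_median(r):                      # absolute loss (median regression)
    return np.abs(r)

def rho_huber(r, c=1.0):                # Huber loss with parameter c = 1
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * (a - 0.5 * c))

def rho_logcosh(r):                     # logcosh loss, a smooth Huber-like loss
    return np.log(np.cosh(r))
```

On large residuals the three robust losses grow linearly (for instance $\log\cosh r \approx |r| - \log 2$), which limits the influence of heavy-tailed noise compared to the quadratic growth of $r^2$.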
\begin{figure}[ht!]
\centering \includegraphics[scale=0.45]{plotsimu-Robusness.png}
\caption{\label{fig:robust} Boxplots of the MSE of the estimators of the regression parameters and ARI obtained by different regressions (mean, median, Huber with parameter 1 and logcosh) estimated with the simultaneous and two-step semi-parametric methods on 100 samples of size $n$ generated with heavy-tail distributions.}
\end{figure}
\subsection{Asymmetric losses} \label{sec:simuasym}
Expectile and quantile regressions respectively generalize the mean and the median regressions by focusing on the tails of the distribution of the target variable given the covariates. To illustrate that the semi-parametric simultaneous method easily accommodates these regression models, data are generated such that $\eta_{ij}\sim\mathcal{N}(0,1)$ and $\varepsilon_i\sim\mathcal{N}(-c_\tau,1)$. The scalar $c_\tau$ is defined according to the regression model: it is the 0.75-expectile, 0.9-expectile, 0.75-quantile and 0.9-quantile for the 0.75-expectile, 0.9-expectile, 0.75-quantile and 0.9-quantile regressions, respectively. Figure~\ref{fig:asym} shows that the simultaneous semi-parametric approach improves the estimators compared to those provided by the two-step approach.
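The asymmetric losses underlying these regressions can be sketched as follows (standard definitions, not code from the paper); the numerical check at the end, which is ours, illustrates why centering the noise at $-c_\tau$ makes the model well-specified: the minimizer of the expected check loss is the $\tau$-quantile.

```python
import numpy as np

def rho_quantile(r, tau):               # check loss for tau-quantile regression
    return np.where(r >= 0, tau, tau - 1.0) * r

def rho_expectile(r, tau):              # asymmetric squared loss for expectiles
    return np.where(r >= 0, tau, 1.0 - tau) * r ** 2

# Numerical check: argmin_c mean(rho_quantile(x - c, tau)) is the empirical
# tau-quantile of x (here close to the 0.9-quantile of N(0,1), about 1.28).
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
grid = np.linspace(-1.0, 3.0, 401)
c_hat = grid[np.argmin([rho_quantile(x - c, 0.9).mean() for c in grid])]
```

Replacing `rho_quantile` by `rho_expectile` in the same check recovers the empirical $\tau$-expectile instead.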
\begin{figure}[ht!]
\centering \includegraphics[scale=0.45]{plotsimu-Asymetric.png}
\caption{\label{fig:asym}Boxplots of the MSE of the estimators of the regression parameters and ARI obtained by asymmetric regressions estimated with the simultaneous and two-step semi-parametric methods on 100 samples of size $n$.}
\end{figure}
\section{High blood pressure prevention data set} \label{sec:ill}
\paragraph{Problem reminder}
We now return to the problem of high blood pressure prevention. We focus on the detection of indicators related to the diastolic blood pressure. The indicators we consider are gender, age, alcohol consumption, obesity, sleep quality and level of physical activity. However, the level of physical activity of a patient is not directly measured; we only have a set of variables describing the physical activity. Thus, we cluster the subjects based on this set of variables to obtain patterns of similar physical activities, and we use these patterns in the prediction of the diastolic blood pressure.
\paragraph{Material and methods}
The data were obtained from the National Health and Nutrition Examination Survey of 2011-2012\footnote{The data are freely downloadable at \\ \emph{https://wwwn.cdc.gov/nchs/nhanes/continuousnhanes/default.aspx?BeginYear=2011}}. The target variable is the \emph{diastolic blood pressure} in mmHg (code BPXDI1). The seven covariates in $U$ are \emph{gender}, which is equal to 1 for men and 0 for women (code RIAGENDR), \emph{age} (RIDAGEYR), \emph{alcohol}, which indicates whether the subjects consume more than five drinks (for men) or four drinks (for women) of alcoholic beverages almost daily (computed from codes ALQ151 and ALQ155), \emph{obesity}, which indicates whether the body mass index is more than 30 (computed from code BMXBMI), \emph{sleep}, which indicates the number of hours of sleep (computed from code SLD010H), \emph{smoke}, which indicates whether the subjects used tobacco/nicotine in the last five days (code SMQ680), and \emph{cholesterol}, which indicates the total cholesterol in mg/dL (code LBXTC). All the subjects with missing values for these variables were removed. Seven variables are used in $X$ to evaluate the level of physical activity. Among these variables, five are binary and indicate whether the subject has a vigorous work activity (code PAQ605), whether the subject has a moderate work activity (code PAQ620), whether the subject usually travels on foot or by bike (code PAQ635), whether the subject has vigorous recreational activities (code PAQ650) and whether the subject has moderate recreational activities (code PAQ665). The two remaining variables in $X$ have 7 levels and indicate the time spent watching TV (code PAQ710) and the time spent using a computer (code PAQ715). Finally, the studied population is composed of 2626 subjects between 18 and 60 years old.
To investigate the performance of the different models, 67\% of the sample (\emph{i.e.,} $1760$ subjects) is used for estimating the model parameters and 33\% of the sample (\emph{i.e.,} $866$ subjects) is used for assessing the performance of the models. The smoothing is performed on the continuous variables with a Gaussian kernel and a bandwidth $h=n^{-1/5}$.
\paragraph{Results}
We present the main results of the application. Details used for the interpretation of the results are presented in Appendix~\ref{app:appli}.
We consider the proposed approach in a semi-parametric framework with a quadratic loss. According to the evolution of the smoothed log-likelihood with respect to the number of classes (see Figure~\ref{fig:smoothed} in Appendix~\ref{app:appli}), the model is considered with $K=3$ classes.
To investigate the relevance of the activity level for explaining high blood pressure, we consider three models with a quadratic loss: the proposed approach in a semi-parametric framework (\emph{regquadUZ-K3}), the regression model of $Y$ on $U$ (\emph{regquadU}) with a selection of variables according to AIC (two variables are removed by the criterion: \emph{alcohol} and \emph{smoke}), and the regression model of $Y$ on $(U^\top,X^\top)$ (\emph{regquadUX}) with a selection of variables according to AIC (six variables are selected by the criterion: \emph{gender}, \emph{age}, \emph{obesity}, \emph{sleep}, \emph{cholesterol} and the binary variable indicating whether the subject usually travels on foot or by bike). Considering the activity levels seems to be relevant for explaining high blood pressure, since the MSEs of prediction obtained on the testing sample are 122.55, 122.72 and 122.81 for \emph{regquadUZ-K3}, \emph{regquadUX} and \emph{regquadU}, respectively. Thus, the approach summarizes the information about the physical activity and slightly improves the prediction accuracy. Note that a Shapiro-Wilk normality test performed on the residuals of \emph{regquadUZ-K3} has a p-value below $10^{-4}$ for both the learning and testing samples. Thus, the semi-parametric approach avoids the normality assumption, which does not hold for the residuals.
To prevent the variability due to outliers, we fit the proposed approach in a semi-parametric framework with the median loss and the logcosh loss. Again, the evolution of the smoothed log-likelihood with respect to the number of classes leads us to consider $K=3$ classes for both losses. We now compare the results obtained by the proposed method with $K=3$ classes in a semi-parametric framework with a quadratic loss, a median loss (\emph{regmedUZ-K3}) and a logcosh loss (\emph{reglogchUZ-K3}). The three models provide similar partitions, since the adjusted Rand index between every pair of partitions is more than 0.85. The regression parameters are presented in Table~\ref{tab:param} of Appendix~\ref{app:appli}. The signs of the coefficients are the same for the three losses. It appears that being a woman protects against high blood pressure, while age, alcohol consumption, overweight, lack of sleep and cholesterol increase high blood pressure. It may be surprising that the results suggest that smoking limits the risk of high blood pressure, but this effect has already been reported in \citet{omvik1996smoking,li2017association}. Note that the robust methods detect a more significant effect of alcohol, smoking and physical activity on high blood pressure. Moreover, they slightly improve the prediction accuracy, since the MSEs obtained on the testing sample are 122.44 and 122.48 for the median and logcosh losses, respectively.
We now interpret the clustering results provided by the median loss. Class 1 ($\pi_1=0.15$) is the smallest class and contains the subjects having recreational physical activities, traveling on foot or by bike, having no physical activity at work and spending few hours watching screens. Class 2 ($\pi_2=0.44$) groups the subjects having few physical activities. Class 3 ($\pi_3=0.44$) groups the subjects having intense physical activity at work. These results show that having moderate physical activities (recreational activities, traveling by bike or on foot, not spending many hours watching screens) protects against high blood pressure.
\section{Conclusion} \label{sec:conclusion}
In this paper, we propose an alternative to the two-step approach that first summarizes some observed variables by clustering and then fits a prediction model using the estimated partition as a covariate. Our proposal consists of simultaneously performing the clustering and the estimation of the prediction model, to improve the accuracy of the partition and of the regression parameters. This approach can be applied to a wide range of regression models.
Our proposal can be applied in both a parametric and a semi-parametric framework.
We advise using the semi-parametric approach to avoid bias in the estimation (due to bias in the distribution modeling).
The quality of the prediction could be used as a tool for selecting the number of components and the bandwidth for semi-parametric mixtures. As in any regression problem, this criterion can also be used for selecting the variables (in the regression part but also in the clustering part). Thus, taking the regression into account is important in model selection for semi-parametric mixtures. Moreover, this could allow for variable selection in clustering, whereas this approach has so far only been used in a parametric framework \citep{tadesse2005bayesian,raftery2006variable}.
The semi-parametric approach has been presented by assuming that the components are products of univariate densities. However, the proposed approach can also be used by considering location scale symmetric distributions \citep{hunterAOS2007} or by incorporating an independent component analysis structure \citep{zhu2019clustering}.
Moreover, we can easily relax the assumption that $(X^\top,Z^\top)$ is independent of $U$. The crucial assumption of the model is the conditional independence of $Y$ and $X$ given $(Z^\top,U^\top)$.
This approach has been introduced by considering only one latent categorical variable. However, more than one latent categorical variable, explained by different sub-groups of variables of $X$, could be considered. This extension is straightforward if the different sub-groups of variables of $X$ are known. However, the case where the sub-groups of variables are also estimated (see the case of multiple partitions in clustering; \citet{MARBAC2019167}) could be considered in future work.
\bibliographystyle{apalike}
\section{Introduction}
The notion of an almost paracontact structure on a differentiable manifold of arbitrary dimension was introduced in \cite{Sato76}. The restriction of this structure to the paracontact distribution is an almost product structure, studied and classified in \cite{Nav83}.
Two kinds of compatible metrics can be considered on a manifold equipped with an almost paracontact structure.
If the structure endomorphism induces an isometry on the paracontact distribution of each tangent fibre, then the manifold has an almost paracontact Riemannian structure as in \cite{Sato77,AdatMiya77,SatoMats79}.
In the other case, when the induced transformation is an anti-isometry, the manifold has a structure of an almost paracontact metric manifold (\cite{NakZam,ZamNak}), where the metric is semi-Riemannian of type $(n+1,n)$.
The objects of our considerations are the almost paracontact almost paracomplex Riemannian manifolds. The restriction of the almost paracontact structure to the paracontact distribution is traceless, i.e. it is an almost paracomplex structure. In \cite{ManSta}, a classification of these manifolds is given, where they are classified under the name of almost paracontact Riemannian manifolds of type $(n,n)$. Their investigation is continued in \cite{ManTav57,ManTav2}.
The Schouten-van Kampen connection preserves by parallelism a pair of complementary distributions on a differentiable manifold endowed with an affine connection \cite{SvK,Ia,BejFarr}. Using this connection, hyperdistributions in Riemannian manifolds are studied in \cite{Sol}.
In \cite{Olsz} and \cite{ManFil}, the Schouten-van Kampen connection adapted to an almost (para)contact metric structure and to an almost contact B-metric structure, respectively, is studied. In general, the studied connection is not natural on these manifolds, because it preserves all structure tensors except the structure endomorphism.
In the present paper, we introduce and investigate a pair of Schouten-van Kampen connections
associated to the pair of Levi-Civita connections and adapted to the paracontact distribution of an almost paracontact almost paracomplex Riemannian manifold.
We characterize the classes of the considered manifolds by means of the constructed non-symmetric connections, and we obtain some curvature properties.
\section{Almost Paracontact Almost Paracomplex Riemannian Manifolds}
Let us consider an \emph{almost paracontact almost paracomplex Riemannian manifold} denoted by $(\mathcal{M},\phi,\xi,\eta,g)$. This means that $\mathcal{M}$
is a $(2n+1)$-dimensional ($n\in\mathbb{N}$) differentiable manifold equipped with a compatible Rie\-mannian
metric $g$ and an almost
paracontact structure $(\phi,\xi,\eta)$, where $\phi$ is an endomorphism
of the tangent bundle $T\mathcal{M}$, $\xi$ is a characteristic vector field and $\eta$ is its dual 1-form, such that the following
algebraic relations are satisfied:
\begin{equation}\label{strM}
\begin{array}{c}
\phi\xi = 0,\qquad \phi^2 = I - \eta \otimes \xi,\qquad
\eta\circ\phi=0,\qquad \eta(\xi)=1,\qquad \mathrm{tr} \phi=0,\\
g(\phi x, \phi y) = g(x,y) - \eta(x)\eta(y),\qquad g(x, \xi) = \eta(x),
\end{array}
\end{equation}
denoting the identity transformation on $T\mathcal{M}$ by $I$ (\cite{Sato76}, \cite{ManTav57}).
In the latter equalities and further, $x$, $y$, $z$, $w$ will stand for arbitrary elements of $\mathfrak{X}(\mathcal{M})$, the Lie algebra of tangent vector fields, or vectors in the tangent space $T_p\mathcal{M}$ of $\mathcal{M}$ at an arbitrary
point $p$ in $\mathcal{M}$.
Almost paracontact almost paracomplex Riemannian manifolds, known also as \emph{almost paracontact Riemannian manifolds of type $(n,n)$}, are classified in \cite{ManSta}, where eleven basic classes $\mathcal{F}_1$, $\mathcal{F}_2$, $\dots$, $\mathcal{F}_{11}$ are introduced. This classification is made with respect
to the tensor $F$ of type (0,3) defined by
\begin{equation*}\label{F=nfi}
F(x,y,z)=g\bigl( \left( \nabla_x \phi \right)y,z\bigr),
\end{equation*}
where $\nabla$ is the Levi-Civita connection of $g$.
The following identities are valid:
\begin{equation}\label{F-prop}
\begin{array}{l}
F(x,y,z)=F(x,z,y)=-F(x,\phi y,\phi z)+\eta(y)F(x,\xi,z)
+\eta(z)F(x,y,\xi),\\
(\nabla_x\eta)y=g(\nabla_x\xi,y)=-F(x,\phi y, \xi).
\end{array}
\end{equation}
The special class $\mathcal{F}_0$,
determined by the condition $F=0$, is the intersection of the basic classes.
The associated metric $\widetilde{g}$ of $g$ on $\mathcal{M}$ is defined by
$\widetilde{g}(x,y)=g(x,\phi y)+\eta(x)\eta(y)$. It is shown that $\widetilde{g}$ is a metric compatible with $(\mathcal{M},\phi,\xi,\eta)$ and is a pseudo-Riemannian metric of signature $(n + 1, n)$. Therefore,
$(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ is also an almost paracontact almost paracomplex manifold but with a pseudo-Riemannian metric.
The following 1-forms (known also as Lee forms)
are associated with $F$:
\begin{equation*}\label{t}
\theta(z)=g^{ij}F(e_i,e_j,z),\quad
\theta^*(z)=g^{ij}F(e_i,\phi e_j,z), \quad \omega(z)=F(\xi,\xi,z),
\end{equation*}
where $\left(g^{ij}\right)$ is the inverse matrix of the
matrix $\left(g_{ij}\right)$ of $g$ with respect to
a basis $\left\{\xi;e_i\right\}$ $(i=1,2,\dots,2n)$ of
$T_p\mathcal{M}$.
Further, we use the following characteristic conditions of the
basic classes \cite{ManSta}:
\begin{equation}\label{Fi}
\begin{array}{rl}
\mathcal{F}_{1}: &F(x,y,z)=\frac{1}{2n}\bigl\{g(\phi x,\phi y)\theta(\phi^2 z) +g(\phi x,\phi z)\theta(\phi^2 y)
\\
&\phantom{F(x,y,z)=\frac{1}{2n}\,\,}
-g(x,\phi y)\theta(\phi z)-g(x,\phi z)\theta(\phi y)
\bigr\};\\
\mathcal{F}_{2}: &F(\xi,y,z)=F(x,\xi,z)=0,\quad
\mathop{\mathfrak{S}}\limits_{x,y,z} F(x,y,\phi z)=0,\quad \theta=0;\\
\mathcal{F}_{3}: &F(\xi,y,z)=F(x,\xi,z)=0,\quad
\mathop{\mathfrak{S}}\limits_{x,y,z} F(x,y,z)=0;\\
\mathcal{F}_{4}: &F(x,y,z)=\frac{1}{2n}\theta(\xi)\bigl\{g(\phi x,\phi y)\eta(z)+g(\phi x,\phi z)\eta(y)\bigr\};\\
\mathcal{F}_{5}: &F(x,y,z)=\frac{1}{2n}\theta^*(\xi)\bigl\{g( x,\phi y)\eta(z)+g(x,\phi z)\eta(y)\bigr\};\\
\mathcal{F}_{6}: &F(x,y,z)=F(x,y,\xi)\eta(z)+F(x,z,\xi)\eta(y),\quad \\
&F(x,y,\xi)=F(y,x,\xi)=F(\phi x,\phi y,\xi),\quad \theta=\theta^*=0; \\
\mathcal{F}_{7}: &F(x,y,z)=F(x,y,\xi)\eta(z)+F(x,z,\xi)\eta(y),\quad \\
& F(x,y,\xi)=-F(y,x,\xi)=F(\phi x,\phi y,\xi); \\
\mathcal{F}_{8}: &F(x,y,z)=F(x,y,\xi)\eta(z)+F(x,z,\xi)\eta(y),\quad \\
& F(x,y,\xi)= F(y,x,\xi)=-F(\phi x,\phi y,\xi); \\
\mathcal{F}_{9}: &F(x,y,z)=F(x,y,\xi)\eta(z)+F(x,z,\xi)\eta(y),\quad \\
& F(x,y,\xi)=-F(y,x,\xi)=-F(\phi x,\phi y,\xi); \\
\mathcal{F}_{10}: &F(x,y,z)=-\eta(x)F(\xi,\phi y,\phi z); \\
\mathcal{F}_{11}:
&F(x,y,z)=\eta(x)\left\{\eta(y)\omega(z)+\eta(z)\omega(y)\right\},
\end{array}
\end{equation}
where $\mathop{\mathfrak{S}}\limits_{x,y,z}$ is the cyclic sum by three arguments $x,y,z$.
The relations between the Lee forms and the divergences $\mathrm{div}$ and $\mathrm{div}^*$ regarding $g$ and $\widetilde{g}$, respectively, follow directly from \eqref{F-prop} and they have the form
\begin{equation}\label{divtr}
\theta(\xi)=-\mathrm{div}^*(\eta),\qquad \theta^*(\xi)=-\mathrm{div}(\eta).
\end{equation}
As a corollary, the covariant derivative of $\xi$ with respect to $\nabla$ is determined in each class as follows
\begin{equation}\label{Fi:nxi}
\begin{array}{l}
\mathcal{F}_{1}:\; \nabla\xi=0;\qquad\qquad
\mathcal{F}_{2}:\; \nabla\xi=0;\qquad\qquad
\mathcal{F}_{3}:\; \nabla\xi=0;\\
\mathcal{F}_{4}:\; \nabla\xi=\frac{1}{2n}\mathrm{div}^*(\eta)\,\phi;\qquad\qquad
\mathcal{F}_{5}:\; \nabla\xi=\frac{1}{2n}\mathrm{div}(\eta)\,\phi^2;\\
\mathcal{F}_{6}:\; g(\nabla_x\xi,y)=g(\nabla_{\phi y}\xi,\phi x)=g(\nabla_{\phi x}\xi,\phi y),\quad \mathrm{div}(\eta)=\mathrm{div}^*(\eta)=0; \\
\mathcal{F}_{7}:\; g(\nabla_x\xi,y)=-g(\nabla_{\phi y}\xi,\phi x)=g(\nabla_{\phi x}\xi,\phi y); \\
\mathcal{F}_{8}:\; g(\nabla_x\xi,y)=g(\nabla_{\phi y}\xi,\phi x)=-g(\nabla_{\phi x}\xi,\phi y); \\
\mathcal{F}_{9}:\; g(\nabla_x\xi,y)=-g(\nabla_{\phi y}\xi,\phi x)=-g(\nabla_{\phi x}\xi,\phi y); \\
\mathcal{F}_{10}:\; \nabla\xi=0; \qquad\qquad
\mathcal{F}_{11}:\; \nabla\xi=\eta\otimes\phi\omega^{\sharp},
\end{array}
\end{equation}
where $\sharp$ denotes the musical isomorphism of $T^*\mathcal{M}$ in $T\mathcal{M}$ given by $g$.
Let $\widetilde{\n}$ be the Levi-Civita connection of $\widetilde{g}$.
Let us denote the potential of $\widetilde{\n}$ regarding $\nabla$ by $\Phi$, i.e. $\Phi(x,y)=\widetilde{\n}_x y - \nabla_x y$.
By the well-known Koszul equality for $\widetilde{g}$ and $\widetilde\nabla$, using \eqref{strM} and \eqref{F-prop},
we obtain the relation between $F$ and $\widetilde{F}(x,y,z)=\widetilde{g}(( \widetilde{\n}_x \phi )y,z)$
as well as the expression of $\Phi$ in terms of $F$ as follows
\begin{equation}\label{tFF}
\begin{array}{l}
2\widetilde{F}(x,y,z)=F(\phi y,z,x)-F(y,\phi z,x)+F(\phi z,y,x)-F(z,\phi y,x)\\
\phantom{2\widetilde{F}(x,y,z)=}
+\eta(x)\{F(y,z,\xi)-F(\phi z,\phi y,\xi)+F(z,y,\xi)-F(\phi y,\phi z,\xi)\}\\
\phantom{2\widetilde{F}(x,y,z)=}
+\eta(y)\{F(x,z,\xi)-F(\phi z,\phi x,\xi)+F(x,\phi z,\xi)\}\\
\phantom{2\widetilde{F}(x,y,z)=}
+\eta(z)\{F(x,y,\xi)-F(\phi y,\phi x,\xi)+F(x,\phi y,\xi)\},
\end{array}
\end{equation}
\begin{gather}
\begin{array}{l}\label{PhiF}
2\Phi(x,y,z)=F(x,y,\phi z)+F(y,x,\phi z)-F(\phi z,x,y)\\
\phantom{2\Phi(x,y,z)=}
-\eta(x)\{F(y,z,\xi)-F(\phi z,\phi y,\xi)\} \\
\phantom{2\Phi(x,y,z)=}
-\eta(y)\{F(x,z,\xi)-F(\phi z,\phi x,\xi)\}\\
\phantom{2\Phi(x,y,z)=}
-\eta(z)\{F(\xi,x,y)-F(x,y,\xi)+F(x,\phi y,\xi)-\omega(\phi x)\eta(y)\\
\phantom{2\Phi(x,y,z)=-\eta(z)\{F(\xi,x,y)\,}
-F(y,x,\xi)+F(y,\phi x,\xi)-\omega(\phi y)\eta(x)\}.
\end{array}
\end{gather}
Obviously, the special class $\mathcal{F}_0$ is determined by any of the following equivalent conditions: $F=0$, $\Phi=0$, $\widetilde{F}=0$ and $\nabla=\widetilde{\n}$.
The properties of $\widetilde{\n}_x\xi$ when $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belongs to each of the basic classes are determined in a similar way as in \eqref{Fi:nxi}.
\section{Remarkable metric connections regarding the paracontact distribution on the considered manifolds}
Let us consider an almost paracontact almost paracomplex Riemannian manifold $(\mathcal{M},\phi,\xi,\eta,g)$.
%
Using the structure $(\xi,\eta)$ on $\mathcal{M}$, the following two distributions are determined
in the tangent bundle $T\mathcal{M}$ of $\mathcal{M}$:
\begin{equation*}\label{HV}
\mathcal{H}=\ker(\eta),\qquad \mathcal{V}=\mathrm{span}(\xi),
\end{equation*}
called the horizontal distribution and the vertical distribution, respectively.
They are mutually complementary in $T\mathcal{M}$ and orthogonal with respect to $g$ and $\widetilde{g}$, i.e. $\mathcal{H}\oplus\mathcal{V} =T\mathcal{M}$
and $\mathcal{H}\bot\mathcal{V}$; moreover, $\mathcal{H}$ is known also as the paracontact distribution.
We consider the corresponding horizontal and vertical projectors $h:T\mathcal{M}\mapsto\mathcal{H}$ and $v:T\mathcal{M}\mapsto\mathcal{V}$.
Since $x=\phi^2x+\eta(x)\xi$ for any $x$ in $T\mathcal{M}$, we have
$h(x)=\phi^2x$ and $v(x)=\eta(x)\xi$
or equivalently
\begin{equation}\label{Xhv}
x^h=\phi^2x,\qquad x^v=\eta(x)\xi.
\end{equation}
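The algebraic relations \eqref{strM} and the projector identities $x^h=\phi^2x$, $x^v=\eta(x)\xi$ can be verified numerically on a concrete low-dimensional example. The matrices below, for $n=1$ (dimension 3), are our own minimal illustrative choice, not taken from the paper.

```python
import numpy as np

# Illustrative structure for n = 1: xi = e_3, eta = e_3^*, phi swaps e_1 and
# e_2, and g is the Euclidean metric (so phi induces an isometry on H).
phi = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
xi = np.array([0., 0., 1.])
eta = np.array([0., 0., 1.])
I, g = np.eye(3), np.eye(3)

# Structure relations: phi(xi) = 0, phi^2 = I - eta (x) xi, tr(phi) = 0
assert np.allclose(phi @ xi, 0)
assert np.allclose(phi @ phi, I - np.outer(xi, eta))
assert np.isclose(np.trace(phi), 0)
# Compatibility: g(phi x, phi y) = g(x, y) - eta(x) eta(y)
assert np.allclose(phi.T @ g @ phi, g - np.outer(eta, eta))

# Projectors x^h = phi^2 x and x^v = eta(x) xi
h, v = phi @ phi, np.outer(xi, eta)
assert np.allclose(h + v, I)      # H and V span TM
assert np.allclose(h @ h, h)      # h is a projector onto H
assert np.allclose(v @ v, v)      # v is a projector onto V
assert np.allclose(h @ v, 0)      # the two distributions are complementary
print("structure and projector identities verified")
```

On this example $\phi$ restricted to $\mathcal{H}$ has eigenvalues $\pm 1$, matching the traceless (almost paracomplex) requirement.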
\subsection{The Schouten-van Kampen connections associated to the Levi-Civita connections}
Let us consider the Schouten-van Kampen connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ associated to $\nabla$ and $\widetilde{\n}$, respectively,
and adapt\-ed to the pair $(\mathcal{H}, \mathcal{V})$. These connections are defined (locally in \cite{SvK}, see also \cite{Ia}) by
\begin{equation}\label{SvK}
\begin{array}{c}
\nabla^{\parallel}_x y = (\nabla_x y^h)^h + (\nabla_x y^v)^v,\\[6pt]
\widetilde{\n}^{\parallel}_x y = (\widetilde{\n}_x y^h)^h + (\widetilde{\n}_x y^v)^v.
\end{array}
\end{equation}
The latter equalities imply the parallelism of $\mathcal{H}$ and $\mathcal{V}$ with
respect to $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$.
Taking into account \eqref{Xhv}, we express the formulae of $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ in terms of $\nabla$ and $\widetilde{\n}$, respectively, as follows (cf. \cite{Sol})
\begin{equation}\label{SvK=n}
\nabla^{\parallel}_x y = \nabla_x y -\eta(y)\nabla_x \xi+(\nabla_x \eta)\!(y)\,\xi,
\end{equation}
\begin{equation}\label{tSvK=n}
\widetilde{\n}^{\parallel}_x y = \widetilde{\n}_x y -\eta(y)\widetilde{\n}_x \xi+(\widetilde{\n}_x \eta)(y)\xi.
\end{equation}
Obviously, $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ exist on $(\mathcal{M},\phi,\xi,\eta,g,\widetilde{g})$ in each class regarding $F$.
Let us consider the potentials $Q^{\parallel}$ of $\nabla^{\parallel}$ with respect to $\nabla$ and $\widetilde{Q}^{\parallel}$ of $\widetilde{\n}^{\parallel}$ with respect to $\widetilde{\n}$, as well as the torsions $T^{\parallel}$ of $\nabla^{\parallel}$ and $\widetilde{T}^{\parallel}$ of $\widetilde{\n}^{\parallel}$, defined by $Q^{\parallel}(x,y)=\nabla^{\parallel}_xy-\nabla_xy$, $\widetilde{Q}^{\parallel}(x,y)=\widetilde{\n}^{\parallel}_xy-\widetilde{\n}_xy$, $T^{\parallel}(x,y)= \nabla^{\parallel}_xy-\nabla^{\parallel}_yx-[x,y]$ and $\widetilde{T}^{\parallel}(x,y)= \widetilde{\n}^{\parallel}_xy-\widetilde{\n}^{\parallel}_yx-[x,y]$. Then, they have the following expressions:
\begin{gather}\label{Q}
Q^{\parallel}(x,y)=-\eta(y)\nabla_x \xi+(\nabla_x \eta)\!(y)\,\xi,
\\
\label{tQ}
\widetilde{Q}^{\parallel}(x,y)=-\eta(y)\widetilde{\n}_x \xi+(\widetilde{\n}_x \eta)(y)\xi,
\\
\label{T}
T^{\parallel}(x,y)=\eta(x)\nabla_y \xi-\eta(y)\nabla_x \xi+\mathrm{d}\eta(x,y)\,\xi,
\\
\label{tT}
\widetilde{T}^{\parallel}(x,y)=\eta(x)\widetilde{\n}_y \xi-\eta(y)\widetilde{\n}_x \xi+\mathrm{d}\eta(x,y)\xi.
\end{gather}
\begin{thm}\label{thm:D-T}
The Schouten-van Kampen connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ are the unique affine connections having torsions of the form \eqref{T} and \eqref{tT}, respectively, and they preserve the structure $(\xi, \eta,g,\widetilde{g})$.
\end{thm}
\begin{proof}
Using \eqref{SvK=n}, we get directly $\nabla^{\parallel}\xi=\nabla^{\parallel}\eta=\nabla^{\parallel}g=0$, i.e. $\xi$, $\eta$ and $g$ are parallel with respect to $\nabla^{\parallel}$.
Since $\nabla^{\parallel}$ is a metric connection, it is completely determined by its torsion $T^{\parallel}$.
The spaces of torsions $\{T\}$ and of potentials $\{Q\}$
are isomorphic and the bijection is given by (\cite{Car25})
\begin{gather}
T (x,y,z) = Q(x,y,z) - Q(y,x,z) ,\label{TQ}\\
2Q(x,y,z) = T (x,y,z) - T (y,z,x) + T(z,x,y).\label{QT}
\end{gather}
We verify directly that the potential $Q^{\parallel}$ and the torsion $T^{\parallel}$ of $\nabla^{\parallel}$, determined by \eqref{Q} and \eqref{T}, respectively, satisfy the latter equalities. This completes the proof for $\nabla^{\parallel}$. Similarly, we prove for $\widetilde{\n}^{\parallel}$.
\end{proof}
\begin{thm}\label{thm:D=n}
The Schouten-van Kampen connection $\nabla^{\parallel}$
coincides with $\nabla$ if and only if $(\mathcal{M},\phi,\xi,\allowbreak{}\eta,g)$ belongs to the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{10}$.
\end{thm}
\begin{proof}
According to \eqref{SvK=n}, $\nabla^{\parallel}$ coincides with $\nabla$ if and only if $\nabla_x \xi=0$ for any $x$. Having in mind \eqref{Fi:nxi}, this vanishing holds only in the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{10}$.
\end{proof}
\begin{thm}\label{thm:tD=nn}
The Schouten-van Kampen connection $\widetilde{\n}^{\parallel}$
coincides with $\widetilde{\n}$ if and only if $(\mathcal{M},\phi,\xi,\allowbreak{}\eta,\widetilde{g})$ belongs to the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{9}$.
\end{thm}
\begin{proof}
The connection $\widetilde{\n}^{\parallel}$ coincides with $\widetilde{\n}$ if and only if $\widetilde{\n} \xi$ vanishes. This condition holds if and only if $\widetilde{F}$ satisfies the conditions of $F$ in \eqref{Fi} for $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{9}$.
\end{proof}
Taking into account \eqref{tFF}, we prove immediately the following
\begin{lem}\label{lem:U1}
The manifold $(\mathcal{M},\phi,\xi,\eta,g)$ belongs to the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{10}$ if and only if the manifold $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belongs to the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{9}$.
\end{lem}
Then, \thmref{thm:D=n}, \thmref{thm:tD=nn} and \lemref{lem:U1} imply the following
\begin{thm}\label{thm:D=ntD=nn}
Let $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ be the Schouten-van Kampen connections associated to $\nabla$ and $\widetilde{\n}$, respectively, and adapted to the pair $(\mathcal{H},\mathcal{V})$ on $(\mathcal{M},\phi,\xi,\eta,g,\widetilde{g})$. Then the following assertions are equivalent:
\begin{enumerate}
\item $\nabla^{\parallel}$ coincides with $\nabla$;
\item $\widetilde{\n}^{\parallel}$ coincides with $\widetilde{\n}$;
\item $(\mathcal{M},\phi,\xi,\eta,g)$ belongs to $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{10}$;
\item $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belongs to $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{9}$.
\end{enumerate}
\end{thm}
\begin{cor}\label{cor:D=ntD=nn}
Let $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ be the Schouten-van Kampen connections associated to $\nabla$ and $\widetilde{\n}$, respectively, and adapted to the pair $(\mathcal{H},\mathcal{V})$ on $(\mathcal{M},\phi,\xi,\eta,g,\widetilde{g})$. If $\widetilde{\n}^{\parallel}\equiv\nabla$ or $\nabla^{\parallel}\equiv\widetilde{\n}$, then the four connections $\nabla^{\parallel}$, $\widetilde{\n}^{\parallel}$, $\nabla$ and $\widetilde{\n}$ coincide. The latter coincidences are equivalent to the condition that $(\mathcal{M},\phi,\xi,\eta,g)$ and $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belong to $\mathcal{F}_0$.
\end{cor}
We obtain the following relation between $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$, using \eqref{tSvK=n} and $\Phi$,
\begin{equation}\label{tD=D}
\widetilde{\n}^{\parallel}_x y = \nabla^{\parallel}_x y + \Phi(x,y) -\eta(\Phi(x,y))\xi -\eta(y)\Phi(x,\xi).
\end{equation}
The two connections $\widetilde{\n}^{\parallel}$ and $\nabla^{\parallel}$ coincide if and only if $\Phi(x,y)=\eta(\Phi(x,y))\xi +\eta(y)\Phi(x,\xi)$ which is equivalent to $\Phi(x,y)=\eta(\Phi(x,y))\xi +\eta(x)\eta(y)\Phi(\xi,\xi)$ because $\Phi$ is symmetric.
By virtue of \eqref{PhiF} and the latter expression of $\Phi$, we obtain
\begin{equation}\label{F_D=0}
F(x,y,z)=F(x,y,\xi)\eta(z)+F(x,z,\xi)\eta(y),
\end{equation}
which determines the class $\mathcal{F}_4\oplus\cdots\oplus\mathcal{F}_9\oplus\mathcal{F}_{11}$.
Then, the following assertion is valid.
\begin{thm}\label{thm:tD=D}
The Schouten-van Kampen connections $\widetilde{\n}^{\parallel}$ and $\nabla^{\parallel}$ associated to $\widetilde{\n}$ and $\nabla$, respectively, and adapt\-ed to the pair $(\mathcal{H},\mathcal{V})$
coincide with each other if and only if the manifold belongs to the class $\mathcal{F}_4\oplus\cdots\oplus\mathcal{F}_9\oplus\mathcal{F}_{11}$.
\end{thm}
\subsection{The conditions for natural connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ for $(\phi,\xi,\eta,g,\widetilde{g})$}
Recall that a connection is called natural for a structure $(\phi,\xi,\eta,g,\widetilde{g})$ when all of the structure tensors are parallel with respect to this connection. According to \thmref{thm:D-T}, $\nabla^{\parallel}$ preserves $(\xi,\eta,g)$. However, $\nabla^{\parallel}$ is not in general a natural connection for the studied structures, because $\nabla^{\parallel}\phi$ (and therefore $\nabla^{\parallel}\widetilde{g}$, too) is generally not zero.
\begin{thm}\label{thm:D-nat}
The Schouten-van Kampen connection $\nabla^{\parallel}$
is a natural connection for the structure $(\phi,\xi,\eta,g)$ if and only if $(\mathcal{M},\phi,\xi,\eta,g)$ belongs to the class $\mathcal{F}_4\oplus\cdots\oplus\mathcal{F}_9\oplus\mathcal{F}_{11}$.
\end{thm}
\begin{proof} Bearing in mind \eqref{SvK=n}, we obtain the covariant derivative of $\phi$ with respect to $\nabla^{\parallel}$ as follows
\begin{equation}\label{Df}
(\nabla^{\parallel}_x\phi)y=(\nabla_x\phi)y+\eta(y)\phi\nabla_x\xi-\eta(\nabla_x\phi y)\xi.
\end{equation}
Then, $\nabla^{\parallel}\phi$ vanishes if and only if $(\nabla_x\phi)y=-\eta(y)\phi\nabla_x\xi+\eta(\nabla_x\phi y)\xi$ holds, which is equivalent to \eqref{F_D=0}.
Bearing in mind the proof of \thmref{thm:tD=D}, we find that the class with natural connection $\nabla^{\parallel}$ is the class in the statement.
\end{proof}
Taking into account \thmref{thm:D=n} and \thmref{thm:D-nat}, we obtain the following
\begin{cor}
The class of all almost paracontact almost paracomplex Riemannian manifolds can be decomposed orthogonally to the subclass of the manifolds with coinciding connections $\nabla^{\parallel}$ and $\nabla$ and the subclass of manifolds with natural $\nabla^{\parallel}$.
\end{cor}
Taking into account \eqref{tD=D}, we get the following relation between the covariant derivatives of $\phi$ with respect to $\widetilde{\n}^{\parallel}$ and $\nabla^{\parallel}$
\begin{equation}\label{tDfiDfi}
(\widetilde{\n}^{\parallel}_x\phi)y=(\nabla^{\parallel}_x\phi)y+\Phi(x,\phi y)-\phi\Phi(x,y)+\eta(y)\phi\Phi(x,\xi)-\eta(\Phi(x,\phi y))\xi.
\end{equation}
Therefore, we establish that $\widetilde{\n}^{\parallel}\phi$ and $\nabla^{\parallel}\phi$ coincide if and only if the condition
\[
\Phi(x,\phi^2 y,\phi^2 z)=-\Phi(x,\phi y,\phi z)
\]
holds, which is fulfilled only when $(\mathcal{M},\phi,\xi,\eta,g)$ is in the class $\mathcal{F}_3\oplus\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_7\oplus\mathcal{F}_{11}$.
Similarly, we establish that $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belongs to the same class, and therefore we have proved the following
\begin{thm}\label{thm:tDfi=Dfi}
The covariant derivatives of $\phi$ with respect to the Schouten-van Kampen connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$ %
coincide if and only if both of the manifolds $(\mathcal{M},\phi,\xi,\allowbreak{}\eta,\allowbreak{}g)$ and $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belong to the class $\mathcal{F}_3\oplus\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_7\oplus\mathcal{F}_{11}$.
\end{thm}
Bearing in mind \eqref{PhiF}, \eqref{Df} and \eqref{tDfiDfi}, we obtain that $\widetilde{\n}^{\parallel}\phi=0$ is equivalent to
\[
F(\phi y,\phi z,x)+F(\phi^2 y,\phi^2 z,x)-F(\phi z,\phi y,x)-F(\phi^2 z,\phi^2 y,x)=0.
\]
Then, the latter equality and \eqref{Fi} imply the following
\begin{thm}\label{thm:tD-nat}
The Schouten-van Kampen connection $\widetilde{\n}^{\parallel}$
is a natural connection for the structure $(\phi,\xi,\eta,\widetilde{g})$ if and only if $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belongs to the class $\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_7\oplus\mathcal{F}_{11}$.
\end{thm}
Combining \thmref{thm:D-nat}, \thmref{thm:tDfi=Dfi} and \thmref{thm:tD-nat}, we obtain the following
\begin{thm}\label{thm:DtD-nat}
The Schouten-van Kampen connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$
are natural connections on $(\mathcal{M},\phi,\xi,\eta,g,\widetilde{g})$ if and only if
$(\mathcal{M},\phi,\xi,\eta,g)$ and $(\mathcal{M},\phi,\xi,\eta,\widetilde{g})$ belong to the class $\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_7\oplus\mathcal{F}_{11}$.
\end{thm}
\section{Torsion properties
of the pair of connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$}
Since $g(\xi,\xi)=1$ implies $g(\nabla_x\xi,\xi)=0$, it follows that $\nabla_x\xi\in\mathcal{H}$.
The shape operator $S:\mathcal{H}\to\mathcal{H}$ for $g$ is defined as usual by $S(x)=-\nabla_x\xi$.
Then, using the relations between $T^{\parallel}$, $Q^{\parallel}$ and $S$ given in \eqref{Q}, \eqref{T}, \eqref{TQ}, \eqref{QT}, we have that the properties of the torsion, the potential and the shape operator for $\nabla^{\parallel}$ are related.
The horizontal and vertical components of $Q^{\parallel}$ and $T^{\parallel}$, given in \eqref{Q} and \eqref{T} respectively, are the following
\begin{equation}\label{QTShv}
\begin{array}{ll}
Q^{\parallel h}=S\otimes\eta,\qquad & Q^{\parallel v}=-S^{\flat}\otimes\xi,
\\
T^{\parallel h}=-\eta\wedge S,\qquad & T^{\parallel v}=-2\mathrm{Alt}(S^{\flat})\otimes\xi,
\end{array}
\end{equation}
where $S^{\flat}(x,y)=g(S(x),y)$, i.e. $S^{\flat}=-\nabla\eta$, whereas $\wedge$ and $\mathrm{Alt}$ denote the exterior product and the alternation, respectively.
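The identification $S^{\flat}=-\nabla\eta$ is immediate from $\eta=g(\xi,\cdot)$ and the definition of $S$; a one-line verification:

```latex
% Using \eta(y)=g(\xi,y), \nabla g=0 and S(x)=-\nabla_x\xi:
(\nabla_x\eta)(y)=x\bigl(g(\xi,y)\bigr)-g(\xi,\nabla_x y)
                =g(\nabla_x\xi,y)=-g(S(x),y)=-S^{\flat}(x,y).
```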
Using the vertical components of $Q^{\parallel}$ and $T^{\parallel}$ from \eqref{QTShv}, we obtain immediately
\begin{thm}\label{thm:equiv1}
The following properties are equivalent:
\begin{enumerate}
\item
$\nabla\eta$ is symmetric
\item
$\eta$ is closed, i.e. $\mathrm{d}\eta=0$
\item
$Q^{\parallel v}$ is symmetric
\item
$T^{\parallel v}$ vanishes
\item
$S$ is self-adjoint regarding $g$
\item
$S^{\flat}$ is symmetric
\item
$\mathcal{M}\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_9\oplus\mathcal{F}_{10}$.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:equiv2}
The following properties are equivalent:
\begin{enumerate}
\item
$\nabla\eta$ is skew-symmetric
\item
$\xi$ is Killing with respect to $g$, i.e. $\mathfrak{L}_{\xi}g=0$
\item
$Q^{\parallel v}$ is skew-symmetric
\item
$S$ is anti-self-adjoint regarding $g$
\item
$S^{\flat}$ is skew-symmetric
\item
$\mathcal{M}\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_7\oplus\mathcal{F}_8\oplus\mathcal{F}_{10}$.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:equiv3}
The following properties are equivalent:
\begin{enumerate}
\item
$\nabla\eta=0$
\item
$\mathrm{d}\eta=\mathfrak{L}_{\xi}g=0$
\item
$\nabla\xi=0$
\item
$S=0$
\item
$S^{\flat}=0$
\item
$\nabla^{\parallel}=\nabla$
\item
$\mathcal{M}\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{10}$.
\end{enumerate}
\end{thm}
In the same manner, we obtain similar linear relations between the torsion, the potential and the shape operator for $\widetilde{\n}^{\parallel}$.
The equality $\widetilde{g}(\xi,\xi)=1$ implies $\widetilde{g}(\widetilde{\n}_x\xi,\xi)=0$ and therefore $\widetilde{\n}\xi\in\mathcal{H}$ holds true.
The equality $\widetilde{S}(x)=-\widetilde{\n}_x\xi$ defines the shape operator $\widetilde{S}:\mathcal{H}\to\mathcal{H}$ for $\widetilde{g}$.
Now, having in mind \eqref{tQ} and \eqref{tT}, we express the horizontal and vertical components of $\widetilde{Q}^{\parallel}$ and $\widetilde{T}^{\parallel}$ of $\widetilde{\n}^{\parallel}$ as follows
\begin{equation}\label{tQThv}
\begin{array}{ll}
\widetilde{Q}^{\parallel h}=\widetilde{S}\otimes\eta,\qquad & \widetilde{Q}^{\parallel v}=-\widetilde{S}^{\flat}\otimes\xi,
\\
\widetilde{T}^{\parallel h}=-\eta\wedge\widetilde{S},\qquad & \widetilde{T}^{\parallel v}=-2\mathrm{Alt}(\widetilde{S}^{\flat})\otimes\xi,
\end{array}
\end{equation}
where we denote $\widetilde{S}^{\flat}(x,y)=\widetilde{g}(\widetilde{S}(x),y)$.
Bearing in mind that $(\widetilde{\n}_x \eta)(y)=(\nabla_x \eta)(y)-\eta(\Phi(x,y))$ and $\widetilde{\n}_x \xi=\nabla_x \xi +\Phi(x,\xi)$, we get
\begin{equation}\label{tSSPhi}
\widetilde{S}(x)=S(x)-\Phi(x,\xi),\qquad \widetilde{S}^{\flat}(x,y)=S^{\flat}(x,\phi y)-\Phi(\xi,x,\phi y).
\end{equation}
Moreover,
\eqref{Q}, \eqref{T}, \eqref{QTShv}, \eqref{tQ}, \eqref{tT} and \eqref{tQThv} imply the following relations
\begin{equation*}\label{tQQ}
\begin{array}{ll}
\widetilde{Q}^{\parallel h}=Q^{\parallel h}-(\xi\lrcorner\Phi)\otimes\eta,\qquad & \widetilde{Q}^{\parallel v}=Q^{\parallel v}-(\eta\circ\Phi)\otimes\xi,
\\
\widetilde{T}^{\parallel h}=T^{\parallel h}+\eta\wedge(\xi\lrcorner\Phi),\qquad & \widetilde{T}^{\parallel v}=T^{\parallel v},
\end{array}
\end{equation*}
where $\lrcorner$ denotes the interior product.
Subsequently, using the latter equalities and \eqref{tSSPhi}, we obtain the following formulae
\begin{gather*}\label{tQQS}
\widetilde{Q}^{\parallel}=Q^{\parallel}+(\widetilde{S}-S)\otimes\eta-(\widetilde{S}^{\flat}-S^{\flat})\otimes\xi,
\\
\label{tTTS}
\widetilde{T}^{\parallel}=T^{\parallel}+(\widetilde{S}-S)\wedge\eta;
\\
\begin{array}{ll}\label{tQQTThvS}
\widetilde{Q}^{\parallel h}=Q^{\parallel h}+(\widetilde{S}-S)\otimes\eta,\qquad & \widetilde{Q}^{\parallel v}=Q^{\parallel v}-(\widetilde{S}^{\flat}-S^{\flat})\otimes\xi,
\\
\widetilde{T}^{\parallel h}=T^{\parallel h}+(\widetilde{S}-S)\wedge\eta,\qquad & \widetilde{T}^{\parallel v}=T^{\parallel v}.
\end{array}
\end{gather*}
Bearing in mind the results obtained above, we get the following
\begin{thm}\label{thm:equiv-t1}
The following properties are equivalent:
\begin{enumerate}
\item
$\widetilde{\n}\eta$ is symmetric
\item
$\eta$ is closed
\item
$\widetilde{Q}^{\parallel v}$ is symmetric
\item
$\widetilde{T}^{\parallel v}$ vanishes
\item
$\widetilde{S}$ is self-adjoint regarding $\widetilde{g}$
\item
$\widetilde{S}^{\flat}$ is symmetric
\item
$(\mathcal{M},\phi,\xi,\eta,\widetilde{g})\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_4\oplus\mathcal{F}_5\oplus\mathcal{F}_6\oplus\mathcal{F}_{9}\oplus\mathcal{F}_{10}$.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:equiv-t2}
The following properties are equivalent:
\begin{enumerate}
\item
$\widetilde{\n}\eta$ is skew-symmetric
\item
$\xi$ is Killing with respect to $\widetilde{g}$, i.e. $\mathfrak{L}_{\xi}\widetilde{g}=0$
\item
$\widetilde{Q}^{\parallel v}$ is skew-symmetric
\item
$\widetilde{S}$ is anti-self-adjoint regarding $\widetilde{g}$
\item
$\widetilde{S}^{\flat}$ is skew-symmetric
\item
$(\mathcal{M},\phi,\xi,\eta,\widetilde{g})\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_7\oplus\mathcal{F}_{9}$.
\end{enumerate}
\end{thm}
\begin{thm}\label{thm:equiv-t3}
The following properties are equivalent:
\begin{enumerate}
\item
$\widetilde{\n}\eta=0$
\item
$\mathrm{d}\eta=\mathfrak{L}_{\xi}\widetilde{g}=0$
\item
$\widetilde{\n}\xi=0$
\item
$\widetilde{S}=0$
\item
$\widetilde{S}^{\flat}=0$
\item
$\widetilde{\n}^{\parallel}=\widetilde{\n}$
\item
$(\mathcal{M},\phi,\xi,\eta,\widetilde{g})\in\mathcal{F}_1\oplus\mathcal{F}_2\oplus\mathcal{F}_3\oplus\mathcal{F}_{9}$.
\end{enumerate}
\end{thm}
\section{Curvature properties of the pair of connections $\nabla^{\parallel}$ and $\widetilde{\n}^{\parallel}$}
Let $R$ be the curvature tensor of $\nabla$, i.e. $R(x,y)=\nabla_x\nabla_y-\nabla_y\nabla_x
- \nabla_{[x,y]}$, and let the corresponding $(0,4)$-tensor be
determined by $R(x,y,z,w)=g(R(x,y)z,w)$. The Ricci tensor
$\rho$ and the scalar curvature $\tau$ are defined by
$\rho(y,z)=g^{ij}R(e_i,y,z,e_j)$ and
$\tau=g^{ij}\rho(e_i,e_j)$.
An arbitrary non-degenerate 2-plane $\alpha$ in
$T_p\mathcal{M}$ with an arbitrary basis $\{x,y\}$ has the following sectional curvature
$
k(\alpha;p)=\frac{R(x,y,y,x)}{\pi_1(x,y,y,x)}
$ with respect to $g$ and $R$, where $\pi_1(x,y,z,w)=g(y,z)g(x,w)-g(x,z)g(y,w)$.
A 2-plane $\alpha$ is said to be a \emph{$\xi$-section}, a \emph{$\phi$-holomorphic section} or a \emph{$\phi$-totally real section}
if $\xi \in \alpha$, $\alpha= \phi\alpha$ or $\alpha\bot \phi\alpha$ regarding $g$, respectively. The latter type of sections exists for $\dim \mathcal{M}\geq 5$.
Let $R^{\parallel}$, $\rho^{\parallel}$, $\tau^{\parallel}$ and $k^{\parallel}$ denote the curvature tensor, the Ricci tensor, the scalar curvature and the sectional curvature of $\nabla^{\parallel}$, respectively. The corresponding $(0,4)$-tensor is
determined by $R^{\parallel}(x,y,z,w)=g(R^{\parallel}(x,y)z,w)$. Analogously, by
$\widetilde{R}$, $\widetilde\rho$, $\widetilde\tau$, $\widetilde{k}$ and
$\widetilde{R}^\parallel$, $\widetilde{\rho}^\parallel$, $\widetilde{\tau}^\parallel$, $\widetilde{k}^\parallel$
we denote the corresponding quantities for the connections $\widetilde\nabla$ and $\widetilde{\n}^{\parallel}$, respectively, where the $(0,4)$-tensors of $\widetilde{R}$ and $\widetilde{R}^\parallel$ are obtained by $\widetilde{g}$.
\begin{thm}\label{thm:KR}
The curvature tensors of
$\nabla^{\parallel}$ and
$\nabla$ ($\widetilde{\n}^{\parallel}$ and $\widetilde\nabla$, respectively) are related as follows
\begin{equation}\label{RDR}
\begin{array}{l}
R^{\parallel}(x,y,z,w)=R\left(x,y,\phi^2z,\phi^2w\right)+\pi_1\bigl(S(x),S(y),z,w\bigr),\\
\widetilde{R}^\parallel(x,y,z,w)=\widetilde{R}\left(x,y,\phi^2z,\phi^2w\right)+\widetilde\pi_1\bigl(\widetilde{S}(x),\widetilde{S}(y),z,w\bigr),
\end{array}
\end{equation}
where $\widetilde\pi_1(x,y,z,w)=\widetilde{g}(y,z)\widetilde{g}(x,w)-\widetilde{g}(x,z)\widetilde{g}(y,w)$.
\end{thm}
\begin{proof}
Using \eqref{SvK=n}, $g(\nabla_x\xi,\xi)=0$ for arbitrary $x$, and $\nabla^{\parallel}\xi=0$, we get
\[
\begin{array}{l}
R^{\parallel}(x,y)z=R(x,y)z-\eta(z)R(x,y)\xi-\eta(R(x,y)z)\xi\\
\phantom{R^{\parallel}(x,y)z=}
-g\left(\nabla_x\xi,z\right)\nabla_y\xi+g\left(\nabla_y\xi,z\right)\nabla_x\xi,
\end{array}
\]
which implies the first relation in \eqref{RDR}.
The second equality in \eqref{RDR} follows in a similar way, but in terms of $\widetilde{\n}^{\parallel}$, $\widetilde\nabla$ and their corresponding metric $\widetilde{g}$.
\end{proof}
Let $\widetilde{\mathrm{tr}}$ denote the trace with respect to $\widetilde{g}$. We get the following
\begin{cor}\label{cor:Ric}
The Ricci tensors of
$\nabla^{\parallel}$ and
$\nabla$ ($\widetilde{\n}^{\parallel}$ and $\widetilde\nabla$, respectively) are related as follows
\begin{equation}\label{RicDRic}
\begin{array}{l}
\rho^{\parallel}(y,z)=\rho(y,z)-\eta(z)\rho(y,\xi)-R(\xi,y,z,\xi)\\
\phantom{\rho^{\parallel}(y,z)=\rho(y,z)}
-g(S^2(y),z)+\mathrm{tr}(S)g(S(y),z),\\
\widetilde{\rho}^\parallel(y,z)=\widetilde\rho(y,z)-\eta(z)\widetilde\rho(y,\xi)-\widetilde{R}(\xi,y,z,\xi)\\
\phantom{ \widetilde{\rho}^\parallel(y,z)=\widetilde\rho(y,z)}
-\widetilde{g}(\widetilde{S}^2(y),z)+\widetilde{\mathrm{tr}}(\widetilde{S})\widetilde{g}(\widetilde{S}(y),z).
\end{array}
\end{equation}
\end{cor}
Using \eqref{PhiF} and \eqref{F-prop}, we get that $g^{ij}\Phi(\xi,e_i,e_j)=0$. Therefore, bearing in mind \eqref{divtr} and the definitions of $S$ and $\widetilde{S}$, we have that
$\mathrm{tr}(S)=\widetilde{\mathrm{tr}}(\widetilde{S})=-\mathrm{div}(\eta)$.
The definition of the shape operator implies
$
R(x,y)\xi =-\left(\nabla_x S\right)y+\left(\nabla_y S\right)x.
$
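This identity follows directly from the definition of the curvature tensor and $S=-\nabla\xi$:

```latex
% Using R(x,y)=\nabla_x\nabla_y-\nabla_y\nabla_x-\nabla_{[x,y]}
% and (\nabla_x S)y=\nabla_x(S(y))-S(\nabla_x y):
R(x,y)\xi=\nabla_x\nabla_y\xi-\nabla_y\nabla_x\xi-\nabla_{[x,y]}\xi
         =-\nabla_x\bigl(S(y)\bigr)+\nabla_y\bigl(S(x)\bigr)+S([x,y])
         =-(\nabla_x S)y+(\nabla_y S)x.
```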
Then, the latter formula and $S(\xi)=-\nabla_{\xi} \xi=-\phi\omega^{\sharp}$ lead to the following expression
\[
R(\xi,y,z,\xi) = g\bigl(\left(\nabla_{\xi} S\right)y-\left(\nabla_y S\right)\xi,z\bigr)
=g\bigl(\left(\nabla_{\xi} S\right)y-\nabla_y S(\xi)- S(S(y)),z\bigr).
\]
Then, taking the trace of the latter equalities and using
the relation
\[
\mathrm{div}(\omega\circ\phi)=g^{ij}\left(\nabla_{e_i}(\omega\circ\phi)\right)e_j=g^{ij}g\left(\nabla_{e_i}\phi \omega^{\sharp},e_j\right)=-\mathrm{div}(S(\xi)),
\]
we obtain
\begin{equation}\label{ricxixi}
\rho(\xi,\xi)= \mathrm{tr}(\nabla_{\xi}S)-\mathrm{div}(S(\xi))-\mathrm{tr}(S^2).
\end{equation}
Similarly, we get the equalities for the quantities with respect to $\widetilde{g}$, i.e.
\begin{equation}\label{tricxixi}
\widetilde\rho(\xi,\xi)= \widetilde{\mathrm{tr}}(\widetilde{\n}_{\xi}\widetilde{S})-\widetilde{\mathrm{div}}(\widetilde{S}(\xi))-\widetilde{\mathrm{tr}}(\widetilde{S}^2).
\end{equation}
Bearing in mind the latter results and \corref{cor:Ric}, we obtain the following
\begin{cor}\label{cor:tau}
The scalar curvatures of
$\nabla^{\parallel}$ and
$\nabla$ ($\widetilde{\n}^{\parallel}$ and $\widetilde\nabla$, respectively) are related as follows
\begin{equation*}\label{tauDtau}
\begin{array}{l}
\tau^{\parallel}=\tau-2\rho(\xi,\xi)-\mathrm{tr}(S^2) + (\mathrm{tr}(S))^2,\\
\widetilde\tau^{\parallel}=\widetilde\tau-2\widetilde\rho(\xi,\xi)-\widetilde{\mathrm{tr}}(\widetilde{S}^2) + (\widetilde{\mathrm{tr}}(\widetilde{S}))^2,
\end{array}
\end{equation*}
where $\rho(\xi,\xi)$ and $\widetilde\rho(\xi,\xi)$ are expressed in \eqref{ricxixi} and \eqref{tricxixi}, respectively.
\end{cor}
Moreover, using \thmref{thm:KR} we get the following
\begin{cor}\label{cor:kDk}
The sectional curvatures of an arbitrary 2-plane $\alpha$ at $p\in \mathcal{M}$, for an arbitrary basis $\{x,y\}$ of $\alpha$, regarding
$\nabla^{\parallel}$ and
$\nabla$ ($\widetilde{\n}^{\parallel}$ and $\widetilde\nabla$, respectively) are related as follows
\begin{equation}\label{kDk}
\begin{split}
k^{\parallel}(\alpha;p)&=k(\alpha;p)\\
&\phantom{=}+\frac{\pi_1(S(x),S(y),y,x)-\eta(x)R(x,y,y,\xi)-\eta(y)R(x,y,\xi,x)}{\pi_1(x,y,y,x)},\\
\widetilde k^{\parallel}(\alpha;p)&=\widetilde k(\alpha;p)\\
&\phantom{=}+\frac{\widetilde\pi_1(\widetilde{S}(x),\widetilde{S}(y),y,x)-\eta(x)\widetilde{R}(x,y,y,\xi)
-\eta(y)\widetilde{R}(x,y,\xi,x)}{\widetilde\pi_1(x,y,y,x)}.
\end{split}
\end{equation}
\end{cor}
Let $\alpha_{\xi}$ be a $\xi$-section at $p\in \mathcal{M}$ with a basis $\{x,\xi\}$. Then, using \eqref{kDk}, $g(S(x),\xi)=0$ and $\widetilde{g}(\widetilde{S}(x),\xi)=0$ for arbitrary $x$, we get that
\[
k^{\parallel}(\alpha_{\xi};p)=0, \quad \widetilde{k}^\parallel(\alpha_{\xi};p)=0.
\]
Let $\alpha_{\phi}$ be a $\phi$-holomorphic section at $p\in \mathcal{M}$ with a basis $\{x,y\}$. Then, using \eqref{kDk} and $\eta(x)=\eta(y)=0$, we obtain that
\[
k^{\parallel}(\alpha_{\phi};p)=k(\alpha_{\phi};p)
+\frac{\pi_1(S(x),S(y),y,x)}{\pi_1(x,y,y,x)},
\]
\[
\widetilde k^{\parallel}(\alpha_{\phi};p)=\widetilde k(\alpha_{\phi};p)
+\frac{\widetilde\pi_1(\widetilde{S}(x),\widetilde{S}(y),y,x)}{\widetilde\pi_1(x,y,y,x)}.
\]
Let $\alpha_{\bot}$ be a $\phi$-totally real section orthogonal to $\xi$ with a basis $\{x,y\}$. Then, using \eqref{kDk} and $\eta(x)=\eta(y)=0$, we get that
\[
k^{\parallel}(\alpha_{\bot};p)=k(\alpha_{\bot};p)
+\frac{\pi_1(S(x),S(y),y,x)}{\pi_1(x,y,y,x)},
\]
\[
\widetilde k^{\parallel}(\alpha_{\bot};p)=\widetilde k(\alpha_{\bot};p)
+\frac{\widetilde \pi_1(\widetilde{S}(x),\widetilde{S}(y),y,x)}{\widetilde \pi_1(x,y,y,x)}.
\]
Let us remark that, in the case when $\alpha$ is a $\phi$-totally real section non-orthogonal to $\xi$ regarding $g$ or $\widetilde{g}$, the relation between the corresponding sectional curvatures regarding $\nabla^{\parallel}$ and $\nabla$ ($\widetilde{\n}^{\parallel}$ and $\widetilde{\n}$, respectively) is just the first (respectively, the second) equality in \eqref{kDk}.
\section{A family of Lie groups as manifolds of the studied type}\label{sec-exm}
Let us consider as an example
the $(2n+1)$-dimensional almost paracontact almost paracomplex
Riemannian
manifold $(\mathcal{L},\phi, \xi, \eta, g)$, which is introduced in \cite{ManTav57}.
That means we have
a real connected Lie group $\mathcal{L}$ and its
associated Lie algebra with a global basis $\{E_{0},E_{1},\dots,
E_{2n}\}$ of left invariant vector fields on $\mathcal{L}$ defined by:
\begin{equation}\label{com}
[E_0,E_i]=-a_iE_i-a_{n+i}E_{n+i},\qquad
[E_0,E_{n+i}]=-a_{n+i}E_i+a_{i}E_{n+i},
\end{equation}
where $a_1,\dots,a_{2n}$ are real constants and $[E_j,E_k]$ vanishes in
other cases.
The structure $(\phi,\xi,\eta)$ is determined
for any
${i\in\{1,\dots,n\}}$ as follows:
\begin{equation}\label{strL}
\begin{array}{lll}
\phi E_0=0,\qquad & \phi E_i=E_{n+i},\qquad & \phi E_{n+i}=E_i, \\[0pt]
\xi=E_0, \qquad & \eta(E_0)=1, \qquad & \eta(E_i)=\eta(E_{n+i})=0.
\end{array}
\end{equation}
Moreover, the Riemannian metric $g$ is such that:
\begin{equation}\label{gL}
\begin{array}{l}
g(E_0,E_0)=g(E_i,E_i)=g(E_{n+i},E_{n+i})=1, \\
g(E_0,E_j)=g(E_j,E_k)=0,
\end{array}
\end{equation}
where $i\in\{1,\dots,n\}$ and $j, k \in\{1,\dots,2n\}$, $j\neq k$.
Let us remark that the same Lie group is considered with an
almost contact structure and a compatible Riemannian
metric in \cite{Ol} and the generated almost cosymplectic manifold is studied.
On the other hand,
the same Lie group is equipped with an almost contact structure and
B-metric in \cite{HM} and the obtained manifold is characterized. Moreover, for the latter manifold,
the case of the lowest dimension is considered and properties of the constructed
manifold are determined in \cite{HManMek}.
Let us consider the constructed almost paracontact almost paracomplex Riemannian
manifold
$(\mathcal{L},\phi, \xi, \allowbreak{}\eta, g)$ of dimension 3, i.e. for $n=1$. We use the following results from \cite{ManTav57}.
The basic components of the Levi-Civita connection $\nabla$ of $g$ are given by
\begin{equation}\label{nEi}
\begin{array}{ll}
\nabla_{E_1}E_0=a_1E_1+a_2E_2,\qquad & \nabla_{E_2}E_0=a_2E_1-a_1E_2,\\[0pt]
\nabla_{E_1}E_1=-\nabla_{E_2}E_2=-a_1E_0,\qquad & \nabla_{E_1}E_2=\nabla_{E_2}E_1=-a_2E_0,
\end{array}
\end{equation}
and the other $\nabla_{E_i}E_j$ are zero.
The components $F_{ijk}=F(E_i,E_j,E_k)$
of the fundamental tensor are the following
\[
F_{101}=F_{110}=F_{202}=F_{220}=-a_2,\qquad
F_{102}=F_{120}=-F_{201}=-F_{210}=-a_1,
\]
and the other components of $F$ are zero.
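As a consistency check, one nonzero component can be computed directly from \eqref{nEi} and \eqref{strL}, assuming the standard definition $F(x,y,z)=g\bigl((\nabla_x\phi)y,z\bigr)$ of the fundamental tensor used in this literature:

```latex
% \phi E_0=0, \nabla_{E_1}E_0=a_1E_1+a_2E_2, \phi E_1=E_2, \phi E_2=E_1:
F(E_1,E_0,E_1)=g\bigl(\nabla_{E_1}(\phi E_0)-\phi(\nabla_{E_1}E_0),E_1\bigr)
              =g(-a_1E_2-a_2E_1,E_1)=-a_2=F_{101}.
```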
Thus, the following is proved.
\begin{prop}\label{prop-exa1}
The constructed 3-dimen\-sional almost paracontact almost paracomplex Riemannian
manifold $(\mathcal{L},\phi,\xi,\eta,g)$ belongs to:
\begin{enumerate}
\item
$\mathcal{F}_4\oplus\mathcal{F}_9$ if and only if $a_1\neq 0$, $a_2\neq 0$;
\item
$\mathcal{F}_4$ if and only if $a_1= 0$, $a_2\neq 0$;
\item
$\mathcal{F}_9$ if and only if $a_1\neq 0$, $a_2= 0$;
\item
$\mathcal{F}_0$ if and only if $a_1= 0$, $a_2= 0$.
\end{enumerate}
\end{prop}
Let us remark that $(\mathcal{L},\phi,\xi,\eta,g)$ is a para-Sasakian paracomplex Riemannian manifold
if and only if $a_1= 0$, $a_2=1$ \cite{ManTav57}.
Using \eqref{SvK=n} and \eqref{tSvK=n}, we obtain that all components $\nabla^\parallel_{E_i}E_j$ and $\widetilde\nabla^\parallel_{E_i}E_j$ vanish.
That means the pair of associated Schouten-van Kampen connections $\nabla^\parallel$ and $\widetilde\nabla^\parallel$ coincide with the so-called
Weitzenb\"ock connection, the connection of the parallelization. Therefore, $\nabla^\parallel$ and $\widetilde\nabla^\parallel$ have vanishing curvature but, in general, nonvanishing
torsion, since the torsion equals $-[\,\cdot\,,\,\cdot\,]$.
A quick review of the statements proved in previous sections shows that the example in this section confirms them.
\vspace{6pt}
\section{Introduction}\label{intro}
Knots in thickened surfaces, i.e. circles embedded in thickened surfaces, are often treated as the virtual knots introduced by Kauffman \cite{kauffman}. Virtual knots with base points are regarded as long virtual knots, since a circle with one point removed is homeomorphic to a line.
On the other hand, a Gauss diagram is a circle with chords, where the preimages of each double point of the immersion are connected by the chords. Virtual knots are nothing but equivalence classes of Gauss diagrams. We can place some information on the circle and chords of a Gauss diagram.
This paper considers maps between Gauss diagrams, which make it possible to produce several versions of a single knot invariant. In particular, there is a simple way to define invariants for long virtual knots through Gauss diagrams. In this paper, we consider versions of the Jones polynomial as invariants of long virtual knots. We also see that this approach is effective for Khovanov homology.
The plan of this paper is as follows: Sec. \ref{functor_nano} gives a precise definition of long virtual knots and the corresponding Gauss diagrams. Sec. \ref{ver_jones} defines the maps between Gauss diagrams and introduces versions of the Jones polynomial. We see in Sec. \ref{app_kh} that the same approach works for Khovanov homology.
\section{Long virtual knots and their presentations as Gauss diagrams}\label{functor_nano}
Virtual knot theory was introduced by Kauffman \cite{kauffman} and virtual knots are often treated as Gauss diagrams.
\subsection{Knots, knot diagrams, long knots, long knot diagrams, and Gauss diagrams}
A {\it{knot}} is a circle smoothly embedded in $\mathbb{R}^{3}$ and a {\it{long knot}} is a smooth embedding $\mathbb{R}\to\mathbb{R}^{3}$. These are often represented by {\it{knot diagrams}} or {\it{long knot diagrams}}, which are images of generic immersions of the circle (resp. the line) into the plane together with the information on overpasses and underpasses at double points, as shown in Figs. \ref{knot_and_long} (a) and (b). A long knot is often identified with a knot with a point, called a {\it{base point}}, on the circle. Its diagram is presented as a knot diagram with a base point on an arc distinct from the double points (Fig. \ref{knot_and_long} (c)).
\begin{figure}
\begin{picture}(0,0)
\put(60,10){(a)}
\put(160,10){(b)}
\put(295,10){(c)}
\end{picture}
\includegraphics[width=12cm]{knot_and_long.pdf}
\caption{(a) Knot diagram. (b) Long knot diagram. (c) Knot diagram with base point.}\label{knot_and_long}
\end{figure}
In this paper, we treat knot diagrams with finitely many double points only. As is well known, two knots are isotopic if and only if their diagrams are related by a finite sequence of {\it{Reidemeister moves}}, which are local moves on knot diagrams as shown in Fig. \ref{reidemeister_m}.
\begin{figure}
\begin{picture}(0,0)
\put(50,38){$\Omega_1a$}
\put(104,38){$\Omega_1b$}
\put(195,37){$\Omega_2$}
\put(292,36){$\Omega_3$}
\end{picture}
\includegraphics[width=12cm]{reidemeister_m.pdf}
\caption{Reidemeister moves. The local replacements on the neighborhoods are drawn, and the exteriors of the neighborhoods are the same for both diagrams of each move.}\label{reidemeister_m}
\end{figure}
If necessary, we add an adjective such as {\it{classical}} for referring to the knots defined above and keep this role for other objects: long knots, knot diagrams, and long knot diagrams.
Every generic immersion of a circle into the plane fixes a {\it{Gauss diagram}}, that is, a circle with chords, where the preimages of each double point of the immersion are connected by a chord (Fig. \ref{diag_to_gauss}). {\it{Oriented Gauss diagrams}} are considered up to orientation-preserving homeomorphisms of the underlying circles, and their orientations correspond to those of the knots. In this paper, the underlying circle of every oriented Gauss diagram has counterclockwise orientation. In the rest of this paper, unless otherwise specified, we adopt oriented Gauss diagrams, which are simply called Gauss diagrams. To recover a knot up to isotopy from a Gauss diagram, we ascribe a sign and an arrow to every chord. The sign of a chord is defined as the local writhe number of the corresponding double point, and the arrow of a chord is oriented from the upper branch to the lower branch.
\begin{figure}
\includegraphics[width=13cm]{diag_to_gauss.pdf}
\begin{picture}(0,0)
\put(-40,65){\tiny$+$}
\put(-12,60){\tiny$+$}
\put(-32,34){\tiny$+$}
\put(146,67){\tiny$-$}
\put(168,53){\tiny$+$}
\put(153,30){\tiny$-$}
\put(131,45){\tiny$+$}
\end{picture}
\caption{A long knot diagram and a knot diagram are encoded by Gauss diagrams. }\label{diag_to_gauss}
\end{figure}
In the same way, we define Gauss diagrams of long knot diagrams as in Fig. \ref{diag_to_gauss}.
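The signed, arrowed chords admit a simple combinatorial encoding. The following sketch (names are hypothetical, not from the paper) records a Gauss diagram as its Gauss code, a word read counterclockwise around the circle in which each chord label occurs twice, once as an overpass and once as an underpass, with the chord's sign:

```python
from collections import defaultdict

# Each entry is (chord label, "O" for overpass / "U" for underpass, sign).
def is_valid_gauss_code(code):
    """Each chord label must occur exactly twice, once as an overpass and
    once as an underpass, with the same sign at both occurrences."""
    occurrences = defaultdict(list)
    for label, over_under, sign in code:
        occurrences[label].append((over_under, sign))
    for occ in occurrences.values():
        if len(occ) != 2:
            return False
        (ou1, s1), (ou2, s2) = occ
        if {ou1, ou2} != {"O", "U"} or s1 != s2:
            return False
    return True

# Gauss code of the standard trefoil diagram: O1+ U2+ O3+ U1+ O2+ U3+
trefoil = [(1, "O", "+"), (2, "U", "+"), (3, "O", "+"),
           (1, "U", "+"), (2, "O", "+"), (3, "U", "+")]
```

Such a code determines the Gauss diagram up to orientation-preserving homeomorphism of the circle, which is why it is a convenient input for the invariants defined below.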
\subsection{Virtual knots, virtual knot diagrams, long virtual knots, and long virtual knot diagrams}\label{virtual_sec}
A {\it{virtual knot}}, introduced by Kauffman \cite{kauffman}, is defined as follows: A {\it{virtual knot diagram}} is a smooth immersion of the circle into the plane such that all singular points are transversal double points. These double points are divided into real crossing points and virtual crossing points, where real crossing points carry information on overpasses and underpasses as for classical knot diagrams, as shown in Fig. \ref{virtual_crossing}.
\begin{figure}
\begin{center}
\begin{picture}(0,0)
\put(60,0){(a)}
\put(138,0){(b)}
\put(220,0){(c)}
\end{picture}
\includegraphics[width=10cm]{virtual_crossing.pdf}
\caption{(a), (b): Real crossings. (c): Virtual crossing. }\label{virtual_crossing}
\end{center}
\end{figure}
A branch passing through a virtual crossing is not divided into an overpass and an underpass. {\it{Virtual knots}} are equivalence classes of virtual knot diagrams modulo Reidemeister moves and the {\it{virtual moves}} shown in Fig. \ref{virtual_moves}.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{virtual_moves.pdf}
\caption{Virtual moves.}\label{virtual_moves}
\end{center}
\end{figure}
For virtual knots, the following fact was proved by Goussarov, Polyak, and Viro \cite{gpv} using group systems:
\begin{theorem}[Goussarov, Polyak, Viro]
Virtually isotopic classical knots are isotopic.
\end{theorem}
Here, we enhance the definition of knot diagrams and long knot diagrams for treating virtual knots as classical knots, following works by Carter, Kamada, and Saito \cite{cks} and N. Kamada and S. Kamada \cite{kk} (see also Kauffman \cite{kauffman} and Goussarov, Polyak, and Viro \cite{gpv}). In the rest of this paper, objects such as knots or knot diagrams (i.e., classical knots, virtual knots, classical long knots, or long virtual knots) are regarded as oriented, unless confusion is likely to occur. {\it{Knot diagrams on surfaces}} are images of generic immersions of the circle into an oriented surface together with information on overpasses and underpasses at double points. {\it{Long knot diagrams on surfaces}} are knot diagrams on surfaces with a base point on an arc distinct from the double points. As is well known, {\it{virtual knots}} (resp. {\it{long virtual knots}}) are {\it{stable equivalence}} classes of knot diagrams (resp. long knot diagrams) on surfaces. The definition of stable equivalence is as follows: Two knot diagrams on surfaces that are images of generic immersions are stably equivalent if they are related by a finite sequence of {\it{stable homeomorphisms}} and Reidemeister moves in the ambient surfaces. Two images of generic immersions are stably homeomorphic if there is a homeomorphism of their regular neighborhoods in the ambient surfaces that maps the first diagram onto the second one and preserves the overcrossings and undercrossings as well as the orientations of the surface and the immersed curve. Two long knot diagrams on surfaces are stably equivalent if they are related by a finite sequence of stable homeomorphisms preserving the base point and Reidemeister moves in the ambient surfaces away from the base point. In particular, this yields a purely combinatorial proof that there are injective maps from classical knots (resp. long knots) to virtual knots (resp. long virtual knots) (cf. Turaev \cite{turaev2}).
\subsection{Gauss diagrams for virtual knots and long virtual knots.}
Gauss diagrams of virtual knots and long virtual knots are defined from knot diagrams and long knot diagrams on surfaces in the same way as for classical knot diagrams (resp. classical long knot diagrams), which are generic immersions of circles (resp. circles with base points) into the plane. Alternatively, Gauss diagrams of virtual knots and long virtual knots can be constructed from virtual knot diagrams and long virtual knot diagrams in the plane in the same way as for classical knot diagrams, with all virtual crossings disregarded, as shown in Fig. \ref{virtual_gauss}.
\begin{figure}
\includegraphics[width=13cm]{virtual_gauss.pdf}
\begin{picture}(0,0)
\put(-43,68){\tiny$+$}
\put(-14,62){\tiny$+$}
\put(158.5,58){\tiny$+$}
\put(123,48){\tiny$+$}
\put(135,73){\tiny$-$}
\end{picture}
\caption{Gauss diagrams for a long virtual knot and a virtual knot.}\label{virtual_gauss}
\end{figure}
Here, the following important fact \cite[Theorem 1.A]{gpv} should be mentioned:
\begin{theorem}[Goussarov, Polyak, Viro]\label{gpv_thm}
A Gauss diagram defines a virtual knot diagram up to virtual moves.
\end{theorem}
Then, a virtual knot (resp. long virtual knot) is identified with the corresponding Gauss diagram (resp. Gauss diagram with a base point) considered up to the moves that are the counterparts of the Reidemeister moves for Gauss diagrams (resp. Gauss diagrams with base points), as shown in Fig. \ref{rel_gauss_a}.
\begin{figure}
\begin{picture}(0,0)
\put(76,272){$\epsilon$}
\put(313,300){$\epsilon$}
\put(32,200){$- \epsilon$}
\put(35,213){$\epsilon$}
\put(314,199){$- \epsilon$}
\put(319,213){$\epsilon$}
\put(235,105){\small$\zeta$}
\put(232.5,153.3){\small$\eta$}
\put(216.5,133){\small$\epsilon$}
\put(90.3,142){\small$\eta$}
\put(98,150){\small$\epsilon$}
\put(120,104){\small$\zeta$}
\end{picture}
\begin{picture}(0,0)
\put(107,45){$\alpha$}
\put(240,47){$\beta$}
\end{picture}
\includegraphics[width=12cm]{rel_gauss_a.pdf}
\caption{Relations of Gauss diagrams (resp. Gauss diagrams with base points) corresponding to Reidemeister moves of virtual knots (resp. long virtual knots) where $\epsilon$, $\eta$, and $\zeta$ are $+$ or $-$, but $(\epsilon, \eta, \zeta)$ is $(\pm, \pm, \pm)$, $(\mp, \mp, \pm)$, or $(\mp, \pm, \pm)$ in the third row. Directions of chords in the third row, denoted by $\alpha$ and $\beta$ in the fourth row, are defined by Table \ref{table_direction}. }\label{rel_gauss_a}
\end{figure}
\begin{table}
\begin{center}
{\setlength{\tabcolsep}{10pt}
\begin{tabular}{ccc} \hline
Case & signs & arrows \\ \hline
1 & $(+, +, +)$ & $(\alpha, \alpha, \alpha)$ \\ \hline
2 & $(+, +, +)$ & $(\beta, \beta, \beta)$ \\ \hline
3 & $(-, -, -)$ & $(\alpha, \alpha, \alpha)$ \\ \hline
4 & $(-, -, -)$ & $(\beta, \beta, \beta)$ \\ \hline \hline
5 & $(+, +, -)$ & $(\alpha, \alpha, \beta)$ \\ \hline
6 & $(+, +, -)$ & $(\beta, \beta, \alpha)$ \\ \hline
7 & $(-, -, +)$ & $(\alpha, \alpha, \beta)$ \\ \hline
8 & $(-, -, +)$ & $(\beta, \beta, \alpha)$ \\ \hline \hline
9 & $(+, -, -)$ & $(\alpha, \beta, \beta)$ \\ \hline
10 & $(+, -, -)$ & $(\beta, \alpha, \alpha)$ \\ \hline
11 & $(-, +, +)$ & $(\alpha, \beta, \beta)$ \\ \hline
12 & $(-, +, +)$ & $(\beta, \alpha, \alpha)$ \\ \hline
\end{tabular}
}
\end{center}
\caption{Rules for the triples of three chords in the third row of Fig. \ref{rel_gauss_a}. Double lines indicate that we can regard these twelve cases as three groups.}\label{table_direction}
\end{table}
\begin{remark}
A Gauss diagram naturally carries the orientation of its circle. Hence, if we adopt the notion of Gauss diagrams for non-oriented knots, Gauss diagrams should be identified up to arbitrary choices of orientation. On the other hand, when we consider an oriented Gauss diagram, the order of the trivalent vertices on the Gauss diagram is fixed. That is why, in this paper, we represent Reidemeister move $\Omega_3$ as in the third row of Fig. \ref{rel_gauss_a}. Using \cite{turaev2}, we have the following.
\end{remark}
\begin{lemma}\label{reide_negative}
The equivalence relation defining long virtual knots is generated by the moves of Fig. \ref{rel_gauss_a}. The equivalence relation defining virtual knots is generated by the same moves, neglecting the base points.
\end{lemma}
\section{Versions of the Jones polynomial}\label{ver_jones}
In this section, the Gauss diagrams are oriented Gauss diagrams and have relations corresponding to Reidemeister moves. The symbol $\epsilon$ stands for $+$ or $-$ as in Fig. \ref{rel_gauss_a}.
First, let us consider Gauss diagrams neglecting the directions of arrows on chords. The map $p_r$ is then defined by the following correspondence:
\begin{picture}(0,20)
\put(10,7){\circle{20}}
\put(27,5){$\mapsto$}
\put(55,7){\circle{20}}
\put(3,0){\vector(1,1){14}}
\put(2,4.5){$\epsilon$}
\put(48,0){\line(1,1){14}}
\put(46,4){$\epsilon$}
\put(65,0){.}
\end{picture}
The projection $p_{r}$ induces an equivalence relation on the set of Gauss diagrams with the directions of arrows neglected. This relation is determined by the moves of Fig. \ref{rel_gauss_a}, again with the directions of arrows neglected. The resulting objects are called pseudolinks (resp. long pseudolinks) in the case of virtual knots (resp. long virtual knots).
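As a small illustration of these forgetful operations, a Gauss diagram can be encoded as a set of signed, directed chords, and $p_r$ then simply forgets the directions of the arrows. The chord encoding and the sample diagrams below are our own illustrative assumptions, not notation from the text.

```python
from dataclasses import dataclass

# A chord of a Gauss diagram: tail and head are positions of the two
# endpoints on the circle (read along its orientation, starting from
# the base point in the long case); sign is the sign of the crossing.
@dataclass(frozen=True)
class Chord:
    tail: int
    head: int
    sign: str  # '+' or '-'

def p_r(diagram):
    """Forget the directions of the arrows, keeping the signs."""
    return frozenset((frozenset({c.tail, c.head}), c.sign) for c in diagram)

# Two Gauss diagrams differing only in the direction of one arrow
# have the same image under p_r, i.e., the same pseudolink code.
G1 = frozenset({Chord(1, 3, '+'), Chord(2, 4, '-')})
G2 = frozenset({Chord(3, 1, '+'), Chord(2, 4, '-')})
print(p_r(G1) == p_r(G2))  # True
```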
Turaev obtained the following fact \cite[Section 8.3]{turaev2} through his nanoword theory:
\begin{theorem}[Turaev]\label{main}
The Jones polynomial $V_{K}$ of an oriented knot $K$ is defined by $p_r(G)$, where $G$ is a Gauss diagram of $K$; i.e., $V_{K}$ $=$ $V_{p_r(G)}$.
\end{theorem}
Second, we consider the map $p$ from Gauss diagrams with base points to Gauss diagrams neglecting signs of arrows on chords as follows:
\begin{picture}(0,20)
\put(2,13){\circle*{3}}
\put(10,7){\circle{20}}
\put(13,6.5){\tiny$-$}
\put(17,14){\vector(-1,-1){14}}
\put(20,0){,}
\end{picture}
\qquad
\begin{picture}(0,20)
\put(2,13){\circle*{3}}
\put(10,7){\circle{20}}
\put(27,5){$\mapsto$}
\put(47,13){\circle*{3}}
\put(55,7){\circle{20}}
\put(3,0){\vector(1,1){14}}
\put(1,4.5){\tiny$+$}
\put(46.5,4.5){\small$a$}
\put(48,0){\line(1,1){14}}
\put(65,0){,}
\put(75,0){\text{and}}
\end{picture}
\qquad
\begin{picture}(0,20)
\put(79,13){\circle*{3}}
\put(87,7){\circle{20}}
\put(78,4.5){\tiny$-$}
\put(80,0){\vector(1,1){14}}
\put(97,0){,}
\end{picture}
\begin{picture}(0,20)
\put(102,13){\circle*{3}}
\put(110,7){\circle{20}}
\put(127,5){$\mapsto$}
\put(147,13){\circle*{3}}
\put(155,7){\circle{20}}
\put(113,6.5){\tiny$+$}
\put(117,14){\vector(-1,-1){14}}
\put(148,4.5){\small$b$}
\put(162,14){\line(-1,-1){14}}
\put(165,0){.}
\end{picture}
The projection $p$ sends a long virtual knot to its underlying curve, called an open flat virtual knot. The equivalence relation on these objects is determined by the relations of the Gauss diagrams with base points in Fig. \ref{rel_gauss_a}, with Table \ref{table_direction} restricted to Cases 1 and 3, after replacing $+$ (resp. $-$) with $a$ (resp. $b$) and neglecting the directions of arrows.
Third, we consider the map $i$ between Gauss diagrams as follows:
\begin{picture}(0,20)
\put(2,13){\circle*{3}}
\put(10,7){\circle{20}}
\put(27,5){$\mapsto$}
\put(47,13){\circle*{3}}
\put(55,7){\circle{20}}
\put(3,0){\line(1,1){14}}
\put(1,4.5){\small$a$}
\put(46.5,4.5){\tiny$+$}
\put(48,0){\line(1,1){14}}
\put(65,0){,}
\put(102,13){\circle*{3}}
\put(75,0){\text{and}}
\put(110,7){\circle{20}}
\put(127,5){$\mapsto$}
\put(147,13){\circle*{3}}
\put(155,7){\circle{20}}
\put(103,4.5){\small$b$}
\put(117,14){\line(-1,-1){14}}
\put(145.5,4.5){\tiny$-$}
\put(162,14){\line(-1,-1){14}}
\put(165,0){.}
\end{picture}
\begin{theorem}\label{main_s0}
Let $D$ be a diagram of an arbitrary long virtual knot $K$. Then $V_{i(p(D))}$ is an invariant of the long virtual knot $K$.
\end{theorem}
\begin{proof}
The map $i$ sends open flat virtual knots to long pseudolinks. It is well defined, since the relations of open flat virtual knots corresponding to Fig. \ref{rel_gauss_a} are sent to the relations defined by the same Gauss diagrams with $+$ (resp. $-$) replacing $a$ (resp. $b$) and with the directions of arrows neglected. Replacing $p_r(G)$ in Theorem \ref{main} with $i(p(K))$, the polynomial $V_{i(p(K))}$ becomes an invariant of an arbitrary long virtual knot $K$.
\end{proof}
\begin{remark}
Fukunaga regarded the map $i$ as producing a topological invariant \cite{fukunaga}.
\end{remark}
Here, in order to capture the graphical meaning of the map $i \circ p$, we give a second, more geometric proof of Theorem \ref{main_s0}.
\begin{proof}
Let $K$ be a long virtual knot and $D_K$ its diagram on a surface (cf. Sec. \ref{virtual_sec}). We can consider the map $p$ to mean that every crossing of $D_K$ is replaced with a transversal double point. Without loss of generality, we can assume by invoking plane isotopy that every crossing consists of two orthogonal branches. Hence, we assume this condition in the rest of the proof. Under this assumption, the definition of $p$ is represented as
\begin{equation}\label{p-eq}
\begin{split}
\begin{picture}(35,35)
\put(0,11){$p : $}
\put(20,0){\line(1,1){30}}
\put(20,30){\line(1,-1){10}}
\put(50,0){\line(-1,1){10}}
\put(60,11){$\mapsto$}
\put(80,0){\line(1,1){30}}
\put(80,30){\line(1,-1){30}}
\end{picture}
\qquad\qquad
\end{split}
\end{equation}
for a sufficiently small neighborhood of every crossing, where the exterior of the neighborhoods of the crossings is mapped to itself and contains the base point. Then, by $p$, the curve $p(D_K)$ with the base point on a surface is determined up to stable homeomorphisms preserving the base point and the orientations of the curve and the surface. Every transversal double point has exactly two tangent vectors $t_1$ and $t_2$, so there exist two types of double points: one type has a positively oriented pair $(t_1, t_2)$ and the other has a negatively oriented pair $(t_1, t_2)$.
More graphically, if the ambient surface containing the curve has counterclockwise orientation, every double point of $p(D_K)$ belongs to exactly one of two types:
\begin{equation*}
\begin{picture}(50,50)
\put(0,5){$1$st}
\put(32,5){$2$nd}
\put(10,15){\vector(1,1){30}}
\put(40,15){\vector(-1,1){30}}
\put(60,5){,}
\put(104,5){$1$st}
\put(72,5){$2$nd}
\put(80,15){\vector(1,1){30}}
\put(110,15){\vector(-1,1){30}}
\end{picture}
\qquad\qquad
\end{equation*}
where $1$st (resp. $2$nd) means the first (resp. second) branch passing through the double point starting from the base point.
Without loss of generality, we can assume that the ambient surface containing $D_K$ or $p(D_K)$ has counterclockwise orientation in the rest of the proof. Under this assumption, for these two types of double points, we consider the map $q$ as follows:
\begin{equation}\label{q-eq}
\begin{split}
\begin{picture}(130,50)
\put(0,0){$1$st}
\put(30,0){$2$nd}
\put(10,10){\vector(1,1){30}}
\put(40,10){\vector(-1,1){30}}
\put(60,20){$\mapsto$}
\put(110,10){\line(-1,1){10}}
\put(90,30){\vector(-1,1){10}}
\put(80,10){\vector(1,1){30}}
\put(120,10){,}
\end{picture}
\begin{picture}(100,50)
\put(0,0){$2$nd}
\put(33,0){$1$st}
\put(10,10){\vector(1,1){30}}
\put(40,10){\vector(-1,1){30}}
\put(60,20){$\mapsto$}
\put(110,10){\vector(-1,1){30}}
\put(100,30){\vector(1,1){10}}
\put(80,10){\line(1,1){10}}
\end{picture}
\end{split}
\end{equation}
for a sufficiently small neighborhood of every double point, where the exterior of the neighborhoods of the double points is mapped to itself and contains the base point. The image $q \circ p(D_K)$ becomes a long virtual knot diagram.
In what follows, we show that if $D_{K_1}$ and $D_{K_2}$ are stably equivalent, $q \circ p(D_{K_1})$ and $q \circ p(D_{K_2})$ are stably equivalent.
According to the definition of $q \circ p$ by (\ref{p-eq}) and (\ref{q-eq}), if $D_{K_1}$ and $D_{K_2}$ are stably homeomorphic, preserving the base point and the orientations of the curve and the surface, so are $q \circ p(D_{K_1})$ and $q \circ p(D_{K_2})$. Subsequently, we will verify that if $D_{K_1}$ and $D_{K_2}$ can be replaced by Reidemeister moves in the ambient surface away from the base point, so can $q \circ p(D_{K_1})$ and $q \circ p(D_{K_2})$.
\begin{itemize}
\item Reidemeister moves $\Omega_1 a$ and $\Omega_1 b$.
Let $D_1$ (resp. $D_2$) be the local diagram defined by the left (resp. right) side of the move $\Omega_1 a$ in Fig. \ref{reidemeister_m}, and let $D_3$ be the local diagram defined by the right side of the move $\Omega_1 b$ in Fig. \ref{reidemeister_m}. For each of $D_1$, $D_2$, and $D_3$, there are two cases by choice of orientation. If the orientation of $D$ $=$ $D_1$, $D_2$, or $D_3$ is along the direction from the bottom to the top (resp. from the top to the bottom), we denote the local diagram by $D^{u}$ (resp. $D^{d}$) where $u$ (resp. $d$) stands for up (resp. down). Then, we have to check the following four pairs: $(D_3^{u}, D_2^{u})$ (Case 1), $(D_1^{u}, D_2^{u})$ (Case 2), $(D_1^{d}, D_2^{d})$ (Case 3), and $(D_3^{d}, D_2^{d})$ (Case 4). Since each check is similar to the others, we first show the one for Case 2.
According to the definition of $q \circ p$ by (\ref{p-eq}) and (\ref{q-eq}), $q \circ p(D_1^{u})$ $=$ $D_3^{u}$. On the other hand, $q \circ p(D_2^{u})$ $=$ $D_2^{u}$. Since $D_3^{u}$ and $D_2^{u}$ can be replaced by Reidemeister move $\Omega_1 b$, so can $q \circ p(D_1^{u})$ and $q \circ p(D_2^{u})$.
Using the list below, we can show the other cases by analogy.
Case 1: $q \circ p(D_3^{u})$ $=$ $D_3^{u}$.
Case 2: $q \circ p(D_1^{u})$ $=$ $D_3^{u}$.
Case 3: $q \circ p(D_1^{d})$ $=$ $D_1^{d}$.
Case 4: $q \circ p(D_3^{d})$ $=$ $D_1^{d}$.
\item Reidemeister move $\Omega_2$.
Let $D_1$ (resp. $D_2$) be the local diagram defined by the left (resp. right) side of the move $\Omega_2$ in Fig. \ref{reidemeister_m}.
For $D$ $=$ $D_1$ or $D_2$, let $D_r$ be the local diagram obtained by looking at $D$ upside down as shown in Fig. \ref{diagramref}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4cm]{diagramref.pdf}
\caption{The local diagrams $D_{1r}$ (left) and $D_{2r}$ (right).}\label{diagramref}
\end{center}
\end{figure}
By definition, $D_{2r}$ is the same as $D_{2}$.
For each of $D_1$, $D_2$, $D_{1r}$, and $D_{2r}$, there are four cases by choice of orientation. If the orientations of the two branches of $D$ $=$ $D_1$ or $D_2$ are both in the direction from the bottom to the top (resp. from the top to the bottom), we denote the local diagram by $D^{uu}$ (resp. $D^{dd}$). Similarly, $D^{ud}$ (resp. $D^{du}$) stands for the local diagram $D$ where the orientations of the two branches are upward (resp. downward) and downward (resp. upward) from the left. Now, by Lemma \ref{reide_negative}, it is sufficient here to consider only the cases of $D^{ud}$ and $D^{du}$.
The local diagram $D$ $=$ $D_1$, $D_2$, $D_{1r}$, or $D_{2r}$ consists of two branches. The branch in which the endpoints are at the bottom left and the top left is called the left branch, and the other is called the right branch. If the first branch of $D^{ud}$ is the right (resp. the left) when starting from the base point, we denote the local diagram by $D^{\overline{ud}}$ (resp. $D^{ud}$). If the first branch of $D^{du}$ is the right (resp. the left), we denote the local diagram by $D^{\overline{du}}$ (resp. $D^{du}$). There are some relations between the oriented $D$ and $D_{r}$ that can be observed by looking at these upside down. For example, when we look at $D_{1r}^{\overline{ud}}$ upside down, we see $D_1^{ud}$. We can recognize ``looking at it upside down'' as the operator $f_{\pi}$, and using this operator we have
\begin{equation}\label{pi_formula}
\begin{split}
f_{\pi}(D_{1r}^{ud}) = D_{1}^{\overline{ud}}~{\rm{and}}~f_{\pi}(D_{2r}^{ud}) = D_{2}^{\overline{ud}},\\
f_{\pi}(D_{1r}^{du}) = D_{1}^{\overline{du}}~{\rm{and}}~f_{\pi}(D_{2r}^{du}) = D_{2}^{\overline{du}},\\
f_{\pi}(D_{1r}^{\overline{ud}}) = D_{1}^{ud}~{\rm{and}}~f_{\pi}(D_{2r}^{\overline{ud}}) = D_{2}^{ud},\\
f_{\pi}(D_{1r}^{\overline{du}}) = D_{1}^{du}~{\rm{and}}~f_{\pi}(D_{2r}^{\overline{du}}) = D_{2}^{du}.
\end{split}
\end{equation}
In the eight formulae of (\ref{pi_formula}), $f_{\pi}$ behaves as an involution.
The second row of Fig. \ref{rel_gauss_a} shows the eight moves between $D_{1}$ and $D_{2}$ or between $D_{1r}$ and $D_{2r}$ as follows ($\ast$ $=$ $1$ or $2$): $D_\ast^{ud}$ (Case 1), $D_\ast^{\overline{ud}}$ (Case 2), $D_\ast^{du}$ (Case 3), $D_\ast^{\overline{du}}$ (Case 4), $D_{\ast r}^{ud}$ (Case 5), $D_{\ast r}^{\overline{ud}}$ (Case 6), $D_{\ast r}^{du}$ (Case 7), and $D_{\ast r}^{\overline{du}}$ (Case 8). We would like to show that the move between $q \circ p(D_1)$ and $q \circ p(D_2)$ is one of these eight cases. However, if (\ref{pi_formula}) is used, it is sufficient to check only Cases 1 -- 4.
Since each check is similar to the others, we first show the one for Case 2. According to the definition of $q \circ p$ by (\ref{p-eq}) and (\ref{q-eq}), $q \circ p(D_1^{\overline{ud}})$ $=$ $D_{1r}^{\overline{ud}}$. Likewise, $q \circ p(D_2^{\overline{ud}})$ $=$ $D_2^{\overline{ud}}$ $=$ $D_{2r}^{\overline{ud}}$. Therefore, $q \circ p(D_1^{\overline{ud}})$ and $q \circ p(D_2^{\overline{ud}})$ can be replaced by the Reidemeister move of Case 6.
Using the list below, we can show the other cases by analogy.
Case 1: $q \circ p(D_1^{ud})$ $=$ $D_1^{ud}$.
Case 2: $q \circ p(D_1^{\overline{ud}})$ $=$ $D_{1r}^{\overline{ud}}$.
Case 3: $q \circ p(D_1^{du})$ $=$ $D_1^{du}$.
Case 4: $q \circ p(D_{1}^{\overline{du}})$ $=$ $D_{1r}^{\overline{du}}$.
\item Reidemeister moves similar to $\Omega_3$.
Let us recall that an equivalence relation for a long virtual knot is defined by Lemma \ref{reide_negative} and Fig. \ref{rel_gauss_a}. We have already verified the invariance of $V_{q \circ p(K)}$ under the moves in the first and second rows of Fig. \ref{rel_gauss_a}. Consequently, it is sufficient to show the invariance of $V_{q \circ p(K)}$ under the moves in the third row of Fig. \ref{rel_gauss_a}.
The moves in the third row of Fig. \ref{rel_gauss_a} are explained by Table \ref{table_direction}, which is realized as Fig. \ref{3rd_move_ver} by using the local knot diagrams.
\begin{figure}
\qquad\qquad
\includegraphics[width=11cm]{3rd_move_ver.pdf}
\begin{picture}(0,0)
\put(-180,36){Case 6: }
\put(15,36){Case 12: }
\put(-180,100){Case 5: }
\put(15,100){Case 11: }
\put(-180,160){Case 4: }
\put(15,160){Case 10: }
\put(-180,215){Case 3: }
\put(15,215){Case 9: }
\put(-180,273){Case 2: }
\put(15,273){Case 8: }
\put(-180,330){Case 1: }
\put(15,330){Case 7: }
\put(-131,20){$3$\quad $2$ \quad$1$}
\put(-131,75){$1$\quad $2$ \quad$3$}
\put(-131,132){$1$\quad $2$ \quad$3$}
\put(-131,186){$3$\quad $2$ \quad $1$}
\put(-129,244){$3$\quad $2$ \quad $1$}
\put(-130,300){$1$\quad $2$ \quad $3$}
\put(-60,20){$3$\quad $2$ \quad$1$}
\put(-60,75){$1$\quad $2$ \quad$3$}
\put(-60,132){$1$\quad $2$ \quad$3$}
\put(-60,186){$3$\quad $2$ \quad $1$}
\put(-60,244){$3$\quad $2$ \quad $1$}
\put(-60,300){$1$\quad $2$ \quad $3$}
\put(63,20){$1$\quad $2$ \quad $3$}
\put(63,75){$3$\quad $2$ \quad $1$}
\put(63,132){$3$\quad $2$ \quad $1$}
\put(63,186){$1$\quad $2$ \quad $3$}
\put(63,244){$1$\quad $2$ \quad $3$}
\put(63,300){$3$\quad $2$ \quad $1$}
\put(132,20){$1$\quad $2$ \quad $3$}
\put(132,75){$3$\quad $2$ \quad $1$}
\put(132,132){$3$\quad $2$ \quad $1$}
\put(132,186){$1$\quad $2$ \quad $3$}
\put(132,244){$1$\quad $2$ \quad $3$}
\put(132,300){$3$\quad $2$ \quad $1$}
\end{picture}
\caption{Reidemeister moves that should be checked. These cases correspond to Table \ref{table_direction}. Numbers 1--3 indicate the order of branches, which is defined as the order for passing through the neighborhood when starting from the base point.}\label{3rd_move_ver}
\end{figure}
Let $D_{il}$ (resp. $D_{ir}$) be the local diagram defined by the left (resp. right) side of the move in Case $i$ of Fig. \ref{3rd_move_ver}. According to the definition of $q \circ p$ by (\ref{p-eq}) and (\ref{q-eq}), if $i$ $=$ $1$, $4$, $5$, $8$, $9$, or $12$, we have $q \circ p(D_{il})$ $=$ $D_{2l}$ and $q \circ p(D_{ir})$ $=$ $D_{2r}$. Similarly, if $i$ $=$ $2$, $3$, $6$, $7$, $10$, or $11$, we have $q \circ p(D_{il})$ $=$ $D_{8l}$ and $q \circ p(D_{ir})$ $=$ $D_{8r}$.
Here, we denote one of the Reidemeister moves between $D_{il}$ and $D_{ir}$ ($1 \le i \le 12$) by $\sim$, and we have
\begin{equation}\label{3rd-move-eq}
\begin{split}
q \circ p(D_{il}) &= D_{2l} \sim D_{2r} = q \circ p(D_{ir}) \qquad (i = 1, 4, 5, 8, 9, 12), \\
q \circ p(D_{il}) &= D_{8l} \sim D_{8r} = q \circ p(D_{ir}) \qquad (i = 2, 3, 6, 7, 10, 11).
\end{split}
\end{equation}
\end{itemize}
The formulae (\ref{3rd-move-eq}) complete the check that $q \circ p(D_{il})$ and $q \circ p(D_{ir})$ can be replaced by one of the Reidemeister moves between $D_{il}$ and $D_{ir}$ ($1 \le i \le 12$).
As proved above, the map $q \circ p$ is well defined as a map from the set of long virtual knots to itself.
On the other hand, we can assume that the domain of the map $p_r$ is the set of long virtual knots. Under this assumption, Theorem \ref{main} implies that $V_{K}$ $=$ $V_{{p_r}(K)}$. Here, we notice that $p_r \circ q$ $=$ $i$. Then, we have
\[V_{i \circ p(K)} = V_{p_r \circ q \circ p(K)} = V_{q \circ p(K)}.\]
Therefore, $V_{i \circ p(K)}$ is well defined on the set of long virtual knots; that is, $V_{i \circ p(K)}$ is invariant under Reidemeister moves and virtual moves. This completes the proof.
\end{proof}
In what follows, we show applications of Theorem \ref{main_s0}.
\begin{ex}\label{ex_phrase}
Let $K_1$ and $K_2$ be the long virtual knots shown in Fig. \ref{k1_k2}, with Jones polynomials $V_{K_1}(t)$ $=$ $V_{K_2}(t)$ $=$ $t^{-2}$ $+$ $t^{- 3/2}$ $-$ $t^{-1}$ $-$ $t^{- 1/2}$ $+$ $t^{1/2}$. However, $V_{i(p(K_1))}(t)$ $=$ $V_{K_1}(t)$ $\neq$ $V_{K_1}(t^{-1})$ $=$ $V_{K_2}(t^{-1})$ $=$ $V_{i(p(K_2))}(t)$.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{k1_k2.pdf}
\caption{Two long virtual knots $K_1$ (left) and $K_2$ (right).}\label{k1_k2}
\end{center}
\end{figure}
\end{ex}
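Explicitly, substituting $t \mapsto t^{-1}$ in $V_{K_1}$ gives
\[
V_{K_1}(t^{-1}) \;=\; t^{2} + t^{3/2} - t - t^{1/2} + t^{-1/2},
\]
whose exponents differ from those of $V_{K_1}(t)$, so $V_{i(p(K_1))}$ and $V_{i(p(K_2))}$ are indeed distinct.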
Example \ref{ex_phrase} implies the following:
\begin{theorem}\label{strongerthan}
Let $K$ be a long virtual knot. The pair $(V_{K}, V_{i(p(K))})$ is a stronger invariant than the polynomial $V_K$ alone. In other words, there exist two long virtual knots $K_1$ and $K_2$ such that $V_{K_1}$ $=$ $V_{K_2}$ but $V_{i(p(K_1))}$ $\neq$ $V_{i(p(K_2))}$.
\end{theorem}
\begin{proof}
The above example demonstrates the statement.
\end{proof}
Example \ref{ex_phrase} also shows that $V_{i(p(K))}$ detects the orientation of the long virtual knot in the case $K$ $=$ $K_1$. Let $-K$ denote the knot obtained from a knot $K$ by reversing its orientation.
\begin{remark}\label{ori}
Let $K$ be a long virtual knot that has a Gauss diagram which, when the directions of arrows are neglected, is symmetric with respect to a line passing through the base point (e.g., the right-hand diagram of Fig. \ref{virtual_gauss}). If a knot $K$ satisfies the assumption ($\diamond$) that the Jones polynomial $V_{i(p(K))}$ changes under the replacement $t^{1/2}$ $\mapsto$ $t^{- 1/2}$, then the polynomial $V_{i(p(K))}$ of $K$ detects the orientation of $K$ (e.g., $K$ $=$ $K_1$). This is because of the well-known fact that the Jones polynomial $V_{\overline{K}}$ of the mirror image $\overline{K}$ is obtained by replacing $t^{1/2}$ with $t^{- 1/2}$. In other words, under the assumption ($\diamond$), $V_{i(p(K))}$ $\neq$ $V_{i(p(\overline{K}))}$ $=$ $V_{i(p(-K))}$. However, there is no example satisfying the assumption ($\diamond$) among classical long knots, since an arbitrary open flat virtual knot on the plane is equal to the trivial open flat virtual knot under its relations. This discussion is summarized by the following proposition:
\end{remark}
\begin{proposition}\label{ori_prop}
Let $K$ be an arbitrary nonclassical long virtual knot and $\overline{K}$ its mirror image. If the Jones polynomial $V_{K}(t)$ is not symmetric with respect to $t^{1/2} \mapsto t^{-1/2}$, the Jones polynomial $V_{i(p(K))}(t)$ detects its orientation.
\end{proposition}
Next, we consider another type of example.
\begin{ex}\label{ex2_phrase}
Let $K_3$ and $K_4$ be the long virtual knots shown in Fig. \ref{k3_k4}. Then, $V_{K_3}$ $=$ $V_{K_4}$ but $V_{i(p(K_3))}(t)$ $=$ $V_{K_3}(t^{-1})$ $\neq$ $V_{K_3}(t)$ $=$ $V_{K_4}(t)$ $=$ $V_{i(p(K_4))}(t)$.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{k3_k4.pdf}
\caption{Two long virtual knots $K_3$ (upper) and $K_4$ (lower).}\label{k3_k4}
\end{center}
\end{figure}
\end{ex}
Similarly, $p_a$ is defined as the composition $\tau_0 \circ p$ of $p$ with the involution $\tau_0 :$ $a$ $\mapsto$ $b$ on chords of Gauss diagrams with base points. Moreover, $p_{ra}$ is defined as the composition $\tau_1 \circ p_r$ of $p_r$ with the involution $\tau_1 :$ $+$ $\mapsto$ $-$ on chords of Gauss diagrams (Gauss diagrams with base points if necessary). We can also consider the map $i_a$ defined as the composition $\tau_1 \circ i$. It is easy to see that these maps induce well-defined maps between the equivalence classes of Gauss diagrams determined by the topological objects treated above. Then, we have the following.
\begin{theorem}\label{four_thm}
The maps $p_r$, $p_{ra}$, $p$, $p_a$, $i$, and $i_a$ together generate exactly four versions of the Jones polynomial for long virtual knots.
As a corollary of Theorem \ref{strongerthan}, the tuple of these four versions of the Jones polynomial is stronger than the Jones polynomial for long virtual knots.
\end{theorem}
\begin{proof}
Considering every combination of $p_r$, $p_{ra}$, $p$, $p_a$, $i$, and $i_a$ yields the formulas $i \circ p_a$ $=$ $i_a \circ p$ and $i_a \circ p_a$ $=$ $i \circ p$.
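On the level of chords, the first formula follows since $\tau_0$ exchanges the labels $a$ and $b$ while $i$ sends $a$ to $+$ and $b$ to $-$, so that $i \circ \tau_0 = \tau_1 \circ i$; hence
\[
i \circ p_a \;=\; i \circ \tau_0 \circ p \;=\; \tau_1 \circ i \circ p \;=\; i_a \circ p.
\]
The second formula follows by composing both sides with the involution $\tau_1$.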
\end{proof}
We consider these maps in Examples \ref{ex_pra} and \ref{ia}.
Here, let us consider the graphical meaning of these variations of $V_K$ for a long virtual knot $K$. Recall the definition of $q \circ p$ by (\ref{p-eq}) and (\ref{q-eq}). For the diagram $D$ of a long virtual knot, $q \circ p_a (D)$ $=$ $q \circ \tau_0 \circ p(D)$ is the mirror image of $q \circ p(D)$. On the other hand, when we denote the mirror image of $D$ by $D^{*}$, we have $V_{p_{ra} (D)}$ $=$ $V_{D^{*}}$. The quadruple of Jones polynomials $(V_{p_r (D)}, V_{p_{ra} (D)}, V_{i \circ p(D)}, V_{i_a \circ p(D)})$ is nothing but $(V_{D}, V_{D^{*}}, V_{q \circ p(D)}, V_{\left(q \circ p(D)\right)^{*}})$; that is, we calculate the four Jones polynomials of the two diagrams of long virtual knots and their mirror images.
\section{Application of the discussion to Khovanov homology}\label{app_kh}
In this section, we apply the above discussion to Khovanov homology. After recalling Khovanov homology, we carry out the discussion for Khovanov homology with $\mathbb{Z}_2$ coefficients.
\subsection{Khovanov homology}
In this section, we recall the Khovanov homology of the Jones polynomial introduced by Khovanov \cite{khovanovjones}. There are two major reformulations of Khovanov homology (\cite{bar-natan}, \cite{viro}); here we give a brief review of the definition in the style of Viro \cite{viro}.
For a given link diagram, we choose a small edge, called a {\it{marker}}, at each crossing of the diagram. In the rest of this paper, we use the simple notation of Fig. \ref{markers} (c) for the marker of Fig. \ref{markers} (a). Every marker has a sign, defined as in Fig. \ref{markers}. The signed markers determine the directions of smoothing at all crossings (Fig. \ref{smoothing}). The smoothed link diagram is called a {\it{Kauffman state}}, or simply a {\it{state}}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{markers.pdf}
\end{center}
\begin{picture}(90,0)
\put(-35,5){(a)}
\put(37,5){(b)}
\put(111,5){(c)}
\end{picture}
\caption{(a) Positive marker. (b) Negative marker. (c) Simple notation for positive marker.}\label{markers}
\end{figure}
\begin{figure}
\begin{center}
\begin{picture}(0,90)
\put(0,30){(b)}
\put(0,115){(a)}
\end{picture}\qquad\qquad\qquad
\includegraphics[width=7cm]{smoothing.pdf}
\caption{Smoothing producing states. The marker on the crossing in (a) is the positive marker, and that in (b) is the negative marker.}\label{smoothing}
\end{center}
\end{figure}
In the next step, we assign a {\it{label}} $x$ or $1$ to every circle of the state. The {\it{degree}} of a label $y$ is defined by ${\rm{deg}}(x)$ $=$ $-1$ and ${\rm{deg}}(1)$ $=$ $1$. A state whose circles carry labels $x$ or $1$ is called an {\it{enhanced state}} and is denoted by $S$. Let $\sigma(S)$ be the number of positive markers minus the number of negative markers of an enhanced state $S$. We set $\tau(S)$ $=$ $\sum {\rm{deg}}(y)$, where the sum runs over the labels $y$ of the circles of $S$. For a link diagram $D$ of a link $L$, the unnormalized Jones polynomial $\hat{J}(L)$ is obtained as
\begin{equation}
\hat{J}(L) = \sum_{{\text{enhanced states}}~S~{\text{of}}~D} (-1)^{i(S)} q^{j(S)}
\end{equation}
where $i(S)$ $=$ $(w(D) - \sigma(S))/2$ and $j(S)$ $=$ $w(D)$ + $i(S)$ + $\tau(S)$. Here, $w(D)$ is the writhe of $D$, defined as the number of positive crossings minus the number of negative crossings. The unnormalized Jones polynomial satisfies $\hat{J}(L)(q)$ $=$ $(q + q^{-1}) V_{L}(t)$ under the substitution $q$ $=$ $- t^{1/2}$, where $V_{L}(t)$ is the well-known (normalized) Jones polynomial. Now, we define the Khovanov complex $C^{i, j}(D)$ as the abelian group generated by the enhanced Kauffman states $S$ of a fixed link diagram $D$ satisfying $i(S)$ $=$ $i$ and $j(S)$ $=$ $j$. Let $T$ be an enhanced state obtained from $S$ by replacing the positive marker at exactly one crossing with a negative marker, where the neighborhood of that crossing is in one of the cases listed in Fig. \ref{differential}.
\begin{figure}
\qquad
\begin{picture}(0,0)
\put(-10,133){(a)}
\put(170,133){(d)}
\put(-10,80){(b)}
\put(170,80){(e)}
\put(-10,28){(c)}
\put(170,28){(f)}
\put(20,33){$1$}
\put(20,82){$1$}
\put(20,133){$x$}
\put(35,152){$S$}
\put(50,133){$1$}
\put(50,82){$x$}
\put(50,33){$1$}
\put(110,133){$x$}
\put(110,82){$x$}
\put(110,33){$1$}
\put(123,152){$T$}
\put(192,17){$1$}
\put(261,17){$1$}
\put(261,43){$x$}
\put(192,68){$1$}
\put(261,68){$x$}
\put(261,95){$1$}
\put(192,120){$x$}
\put(192,168){$S$}
\put(261,147){$x$}
\put(261,120){$x$}
\put(261,168){$T$}
\end{picture}
\includegraphics[width=10cm]{differential.pdf}
\caption{Incidence numbers $(S:T)$. Each $S$ is locally replaced with $T$. The dotted arcs show how fragments of $S$ or $T$ are connected in the whole $S$ or $T$. Using another traditional notation, we can write the above formulae as (a) $m(x \otimes 1)$ $=$ $x$, (b) $m(1 \otimes x)$ $=$ $x$, (c) $m(1 \otimes 1)$ $=$ $1$, (d) $\Delta(x)$ $=$ $x \otimes x$, and (e), (f) $\Delta(1)$ $=$ $1 \otimes x$ $+$ $x \otimes 1$. Here, a circle of enhanced states corresponds to a module $\mathbb{Z}_2 1 \oplus \mathbb{Z}_2 x$ over $\mathbb{Z}_2$.}\label{differential}
\end{figure}
For an arbitrary enhanced state $S$, $d(S)$ is defined by
\begin{equation}
d(S) = \sum_{{\text{enhanced states}}~T} (S : T)\, T
\end{equation}
where the incidence number $(S : T)$ is unity in each of the cases listed in Fig. \ref{differential} and zero if the pair $(S, T)$ does not appear in the list of Fig. \ref{differential}. The map extends to a homomorphism $d$ from $C^{i, j}(D)$ to $C^{i+1, j}(D)$. It is a well-known fact that $d$ is a coboundary operator, usually called the {\it{differential}} in the context of Khovanov homology; i.e., $d^{2}$ $=$ $0$.
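The relation $d^{2}$ $=$ $0$ ultimately rests on the fact that the operations $m$ and $\Delta$ of Fig. \ref{differential} make $\mathbb{Z}_2 1 \oplus \mathbb{Z}_2 x$ a Frobenius algebra over $\mathbb{Z}_2$. The following Python sketch (our own encoding: a tensor is a frozenset of basis tuples, and addition is symmetric difference, i.e., mod $2$) verifies the Frobenius relation $\Delta \circ m = (m \otimes \mathrm{id}) \circ (\mathrm{id} \otimes \Delta)$ on basis elements:

```python
# Multiplication m and comultiplication Delta of the figure, over Z_2,
# on the basis {1, x}; a frozenset stands for the mod-2 sum of its
# elements (basis tuples of a tensor power).
M = {('1', '1'): frozenset({('1',)}),
     ('1', 'x'): frozenset({('x',)}),
     ('x', '1'): frozenset({('x',)}),
     ('x', 'x'): frozenset()}            # m(x tensor x) = 0

DELTA = {'1': frozenset({('1', 'x'), ('x', '1')}),
         'x': frozenset({('x', 'x')})}

def lhs(a, b):
    """Delta(m(a tensor b)), as a frozenset of basis tuples of A tensor A."""
    out = set()
    for (y,) in M[(a, b)]:
        out ^= DELTA[y]
    return frozenset(out)

def rhs(a, b):
    """(m tensor id)(a tensor Delta(b))."""
    out = set()
    for (y1, y2) in DELTA[b]:
        for (z,) in M[(a, y1)]:
            out ^= {(z, y2)}
    return frozenset(out)

print(all(lhs(a, b) == rhs(a, b) for a in '1x' for b in '1x'))  # True
```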
\begin{theorem}[Khovanov]
Let $D$ be a diagram of an arbitrary link $L$. For arbitrary $i$ and $j$, the homology $H^{i}(C^{*, j}(D), d)$ is an isotopy invariant of $L$, and so this homology can be denoted by $H^{i, j}(L)$. The homology $H^{i, j}(L)$ satisfies
\begin{equation}
\hat{J}(L) = \sum_{j} q^{j} \sum_{i} (-1)^{i} {\rm{rank}} H^{i, j}(L).
\end{equation}
\end{theorem}
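To make the state sum for $\hat{J}$ concrete, the following Python sketch evaluates it directly from the formula above; summing over the labels of the $c(s)$ circles of a state $s$ contributes a factor $(q + q^{-1})^{c(s)}$. The encoding of a crossing by the edge labels of its under- and over-strand, and the sample diagram, are our own illustrative assumptions; we also use the standard convention that the positive marker gives the oriented (Seifert) smoothing at a positive crossing and the disoriented smoothing at a negative one.

```python
from collections import defaultdict
from itertools import product

# J(L) = sum_s (-1)^{i(s)} q^{w(D) + i(s)} (q + q^{-1})^{c(s)},
# where s runs over the marker assignments (states) of the diagram.
# A crossing is (sign, (u_in, u_out, o_in, o_out)): the edge labels of
# its under- and over-strand (an assumed encoding, not from the text).

def smoothing_arcs(sign, edges, marker):
    u_in, u_out, o_in, o_out = edges
    # Positive marker at a positive crossing: oriented (Seifert) smoothing;
    # a mismatched sign/marker pair gives the disoriented smoothing.
    if (sign == +1) == (marker == +1):
        return [(u_in, o_out), (o_in, u_out)]
    return [(u_in, o_in), (u_out, o_out)]

def count_circles(arcs, all_edges):
    # Union-find on edges: each smoothing arc glues two edges together;
    # the connected components are the circles of the state.
    parent = {e: e for e in all_edges}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    for x, y in arcs:
        parent[find(x)] = find(y)
    return len({find(e) for e in all_edges})

def unnormalized_jones(crossings, all_edges):
    w = sum(sign for sign, _ in crossings)  # writhe
    poly = defaultdict(int)                 # exponent of q -> coefficient
    for markers in product([+1, -1], repeat=len(crossings)):
        arcs = []
        for (sign, edges), m in zip(crossings, markers):
            arcs += smoothing_arcs(sign, edges, m)
        c = count_circles(arcs, all_edges)
        i = (w - sum(markers)) // 2
        sgn = -1 if i % 2 else 1
        # Expand (q + q^{-1})^c with binomial coefficients.
        coeffs = [1]
        for _ in range(c):
            coeffs = [a + b for a, b in zip([0] + coeffs, coeffs + [0])]
        for k, a in enumerate(coeffs):
            poly[w + i + (c - 2 * k)] += sgn * a
    return {e: a for e, a in poly.items() if a}

# A two-crossing unknot diagram (a Reidemeister-II "poke"): the state
# sum reproduces J(unknot) = q + q^{-1}.
r2 = [(+1, ('a', 'b', 'd', 'a')), (-1, ('b', 'c', 'c', 'd'))]
print(unnormalized_jones(r2, ['a', 'b', 'c', 'd']))  # {1: 1, -1: 1}
```

The writhe correction $q^{w(D)+i(s)}$ makes the result invariant under the first and second Reidemeister moves, which the sample diagram illustrates.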
\subsection{Application to Khovanov homology}
Manturov extended the definition of Khovanov homology to virtual knots, denoted here by $KH^{i, j}$, by adding maps between enhanced states of virtual knots. The problem is that, for virtual knots, the change of one positive marker defining the differential does not fall into the cases shown in Fig. \ref{differential} for all enhanced states. Fortunately, in the case of the coefficient $\mathbb{Z}_{2}$, the definition extends to virtual knots straightforwardly by regarding the remaining cases as zero maps and using Fig. \ref{differential}. Moreover, Manturov found the following property \cite{manturov1}:
\begin{theorem}[Manturov]
For ${KH}^{i, j}(K)$, the Khovanov homology of Manturov, ${KH}^{i, j}(K)$ $\simeq$ ${KH}^{i, j}(p_r(K))$ for an arbitrary virtual knot $K$. In other words, the Khovanov homology of Manturov is invariant under virtualization of Fig. \ref{virtulization_move}.
\end{theorem}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{virtualization_move.pdf}\qquad\qquad
\caption{Virtualization. Arbitrary orientations of these moves are permitted.}\label{virtulization_move}
\end{center}
\end{figure}
Then, we have the counterpart of Theorem \ref{strongerthan}:
\begin{theorem}\label{ip}
Let $K$ be an arbitrary long virtual knot. A pairing of the two Khovanov homologies ${KH}^{i, j}(p_r(K))$ and ${KH}^{i, j}(i(p(K)))$ is stronger than Manturov's Khovanov homology ${KH}^{i, j}(K)$ in terms of an invariant of long virtual knots. In other words, there exist two long virtual knots $K_1$ and $K_2$ such that ${KH}^{i, j}(p_r(K_1))$ $\simeq$ ${KH}^{i, j}(p_r(K_2))$ for any $(i, j)$ but ${KH}^{i, j}(i(p(K_1)))$ $\not\simeq$ ${KH}^{i, j}(i(p(K_2)))$ for some $(i, j)$.
\end{theorem}
\begin{proof}
Example \ref{ex_k1_k2} gives what needs to be shown.
\end{proof}
\begin{ex}\label{ex_k1_k2}
By definition, ${KH}^{i, j}(K_1)$ $\simeq$ ${KH}^{i, j}(K_2)$ for any $(i, j)$. However, ${KH}^{-2, -5}(i(p(K_1)))$ $\simeq$ $\mathbb{Z}_2$ and ${KH}^{-2, -5}(i(p(K_2)))$ $\simeq$ $0$.
\end{ex}
\begin{ex}\label{ex_pra}
Let us consider another type of $p_r$ denoted by $p_{ra}$. We have ${KH}^{2, -5}(p_r(K_1))$ $\simeq$ ${KH}^{2, -5}(p_r(\emptyset))$ $\simeq$ $0$. However, ${KH}^{2, -5}(p_{ra}(K_1))$ $\simeq$ $\mathbb{Z}_{2}$, which is not $0$ $\simeq$ $KH^{2, -5}(p_{ra}(\emptyset))$.
\end{ex}
\begin{ex}\label{ia}
Let us consider another type of $i$ denoted by $i_a$. As described above, ${KH}^{-2, -5}(i(p(K_2)))$ $\simeq$ ${KH}^{-2, -5}(i(p(\emptyset)))$ $\simeq$ $0$. However, ${KH}^{-2, -5}(i_a(p(K_2)))$ $\simeq$ $\mathbb{Z}_2$, which is not $0$ $\simeq$ ${KH}^{-2, -5}(i_a(p(\emptyset)))$.
\end{ex}
We also have the counterpart of Theorem \ref{four_thm}:
\begin{theorem}
All the choices of $p_r$, $p_{ra}$, $p$, $p_a$, $i$, and $i_a$ together generate four types of the Khovanov homology for long virtual knots.
As a corollary of Theorem \ref{ip}, the tuple of four Khovanov homologies is stronger than the Khovanov homology $KH^{i, j}$ in terms of long virtual knots.
\end{theorem}
As in Sec. \ref{ver_jones}, the four invariants $KH^{i, j}(p_r (D))$, $KH^{i, j}(p_{ra} (D))$, $KH^{i, j}(i \circ p (D))$, and $KH^{i, j}(i_a \circ p (D))$ mean considering the Khovanov homology of the four long virtual knot diagrams $D$, $D^{*}$, $q \circ p(D)$, and $\left( q \circ p(D) \right)^{*}$.
\section*{Acknowledgements}
The author thanks Professor Kouki Taniyama and the referee for useful comments on an earlier version of this paper. This work was partly supported by Grant-in-Aid for Young Scientists (B) (23740062), IRTG 1529, and a Waseda University Grant for Special Research Projects (Project number: 2010A-863).
\section{Introduction}\label{sec1}
\IEEEPARstart{5}{G} creates tremendous opportunities for social digitalization and industrial interconnection. On top of the physical
infrastructure, diversified service requirements (eMBB, mMTC, and uRLLC) can be met in the service-oriented end-to-end network slicing
(E2E-NS) architecture. The E2E-NS architecture supports both the coexisting access of multiple standards (5G, LTE, and Wi-Fi) and
the coordination between different site types (macro-cell, micro-cell, and pico-cell base stations), which is mainly attributed to the
flexible orchestration and on-demand deployment of virtualized network functions (VNFs) \cite{5G-archi}\cite{ns-survey2}\cite{ns-survey3}.
The substantive characteristic of the E2E-NS architecture is \textit{cloudification}: the transformation from traditional
hardbox network functions to all-on-cloud management \& control planes \cite{ns-survey}. In this architecture, network slicing is
the key to enabling networking capabilities for vertical industries. Many business players, such as infrastructure providers
(InPs), mobile network operators (MNOs), cloud providers (one kind of InPs actually), edge \& cloud service providers (a.k.a. \textit{tenants}),
service subscribers (i.e. \textit{users}), service brokers and mobile virtual network operators (MVNOs) are involved\cite{ns-business}
\cite{mobicom-slicing-paper}. For the scenario considered in this paper, the InP offers the physical network infrastructure to the MVNO by
leasing or selling and is responsible for hardware upgrades and maintenance. After having control of the physical networks, the MVNO
virtualizes the network resources, divides each kind of resource into slices, and rents them to the tenants according to tenants' demand.
Therewith, each tenant creates service instances based on its slices, and provides services to its subscribers. Normally, the levels of
service are stated in Service Level Agreements (SLAs). SLAs define the metrics to measure and show whether the expected \textit{quality of service}
(QoS) is achieved or not. The process\footnote{To be clear, this is not the only business model in network slicing. In some scenarios,
the MNOs are the owners and maintainers of physical resources. They create slices on top of the resources, which will be offered to the MVNOs
to perform services to subscribers. For all this, the network slicing model provided in this paper is applicable.} is illustrated in
Fig. \ref{fig1}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.7in]{figs/economic_model.png}}
\caption{Business players involved in network slicing and how the process works.}
\label{fig1}
\end{figure}
The key problem underlying network slicing is \textit{efficient resource allocation} for VNFs \cite{nfv}\cite{nfv-offline},
which is algorithmically NP-hard \cite{ns-algorithm-pers}. Much research has been done so far for different scenarios, including
slicing the radio access networks (RANs) \cite{ran-slice}\cite{edge-slice2}\cite{tsc-edge}\cite{tmc-edge}, the
core networks (5GCs) \cite{core-slice}\cite{core-add1}\cite{core-add3}, and the federated edge \cite{edge-slice} \cite{resource-manage-DQN} \cite{tpds-burst},
etc. In these cases, survivability constraints, heterogeneous QoS requirements, geographical limitations, and other \textit{scenario-specific}
constraints are taken into consideration to formulate complicated combinatorial non-convex problems. To solve them, the most typical
and general class of works are based on fine-tuned heuristics \cite{heuristic} or deep machine learning models such as deep $Q$-network (DQN)
\cite{resource-manage-DQN}\cite{DL}. These algorithms can achieve (approximately) optimal solutions and make the communication systems smart
and intelligent \cite{add3}\cite{edge-intelligence}. However, they are usually complex and do not scale with the types of resources and the
number of tenants. Take DQN as an example: it could take days or even weeks to obtain \textit{not-particularly-good} actions
even though the state and action spaces have been discretized. Although several reinforcement learning methods can avoid
privacy leakage, such as \cite{add1} and \cite{add2}, the centralized algorithms are generally built on the complete knowledge regarding
all preferences of the involved business players, including the monetary budgets of tenants, the number and purchasing power of service subscribers,
etc. The formulation of the centralized optimization problem itself is a detriment to privacy and trade secrets.
To avoid insufferable complexity and privacy leakage, many researchers have in recent years established network slicing models based
on standard economic frameworks, such as \textit{Fisher markets} \cite{fisher1}\cite{fisher2}, and different auction-based mechanisms,
such as the VCG-Kelly mechanisms \cite{VCG}\cite{vcg-kelly}. In these works, all tenants get together and bid for maximizing their
profits. For instance, Wang et al. studied the relationship between resource efficiency and profit maximization and developed an
optimization framework to maximize net social welfare \cite{s1}. Similarly, Jiang et al. addressed a joint resource and revenue
optimization problem and solved it with the auction mechanisms \cite{s3}. Furthermore, some works resort to game theory to
model tenants' and MVNOs' strategic (or non-strategic) behaviors, and take the \textit{price of anarchy} (PoA) to analyze the efficiency
of potentially existent Nash equilibrium (NE) \cite{zhang2011game}. For instance, Caballero et al. studied the resource allocation
mechanism by formulating a network slicing game \cite{congest-game-slice}. They proved that when the game is associated with strategic
behavior of tenants, i.e., tenants adjusting their preferences depending on perceived resource contention, convergence to a Nash equilibrium
can be achieved \textit{under some specific conditions}. Luu et al. also studied a network slicing game, but under specific constraints
of RAN \cite{ran-slice}. Generally, auction mechanisms are efficient and scalable to diversified service requirements. However, most of
these auction-based works are designed under an \textit{offline} setting, i.e., the MVNO knows the willingness to bid and many other
private information of all tenants during each bidding round. Besides, a tenant's partial private information might be disclosed to
all the remaining tenants. However, this may not be possible in many real-world business transactions because it is rare that all
the tenants negotiate the rental business details simultaneously. The MVNO should not know anything about the arrival
sequence of tenants, much less the private information of the served users of each tenant. It should only have knowledge of the
resource surplus and the attributes saved in the generic network slice templates (GSTs) \cite{nst}. In addition, a tenant's private
information should not be available to the other tenants.
The above analysis shows that auction mechanisms may not be ideal for online network slicing problems. In addition to the above
reasons, auctions take time and require multiple communication rounds between the MVNO and the tenants \cite{auction-or-posted}.
They may perform poorly when the distribution of bidders' arrival instants is unknown \cite{posted-mechanism}. By contrast,
take-it-or-leave-it, i.e., \textit{posted price}, is a more practical option for online settings. Therefore, in this paper, we
design an online slicing algorithm based on the posted price mechanism. A decentralized, low-complexity, and privacy-preserving
algorithm, \textsf{DPoS}, mainly based on previous theoretical works on the online primal-dual algorithms \cite{primal-dual2},
\cite{online-mechanism}, and \cite{primal-dual1}, is proposed. Specifically, we extend the basic model proposed in \cite{online-mechanism}
into multi-resource scenarios. \textsf{DPoS} consists of two parts, \textsf{DPoS-MVNO} (agent for the MVNO) and \textsf{DPoS-TNT}$_n$
(agent for the $n$-th tenant), with complexities of $O(N C)$ and $O(C)$, respectively, where $N$ is the number of tenants and $C$ is
the number of resource types. \textsf{DPoS} runs in a fully decentralized way. Each time a new tenant $n$ arrives, \textsf{DPoS-TNT}$_n$
decides whether to rent the demanded resources according to the rental price of each type of resource, published by \textsf{DPoS-MVNO}
beforehand. Therewith, \textsf{DPoS-TNT}$_n$ sends the decision and payment (if tenant $n$ has the willingness to pay) to \textsf{DPoS-MVNO}.
Then, \textsf{DPoS-MVNO} checks whether the resource surplus can satisfy tenant $n$ and informs \textsf{DPoS-TNT}$_n$ whether the transaction
succeeded or failed. Note that each tenant may experience different prices on the same kind of resource, which depends on the
pricing mechanism the MVNO adopts. In the above procedure, only a small flow of privacy-irrelevant information is transferred between
the MVNO and each tenant. No information is transferred among tenants. Trade secrets, especially the information of service subscribers
and the pricing policies of tenants, will not be disclosed.
Our main contributions are summarized as follows.
\begin{itemize}
\item We design a decentralized, privacy-preserving online network slicing algorithm, \textsf{DPoS}. This algorithm
enjoys low complexity, and it is practicable under diversified multi-resource requirements. Trade secrets and related private
information can be fully preserved.
\item We find that, when the cost function of each resource is linear, \textsf{DPoS} achieves the \textit{optimal} competitive
ratio over all the online algorithms for the maximization of social welfare.
\item We verify the superiority of \textsf{DPoS} from multiple angles, including social welfare achieved,
cross-agent communication data size, algorithm execution time, etc. The experimental results show that \textsf{DPoS} not only achieves
close-to-offline-optimal performance, but also has low algorithmic overheads.
\end{itemize}
The remainder of the paper is organized as follows. Sec. \ref{sec2} presents the system model and formulates the global offline problem.
Sec. \ref{sec3} demonstrates the design details of the algorithm \textsf{DPoS}. Theoretical analysis on the competitive ratio is provided
in Sec. \ref{sec4}. The experiment results are demonstrated in Sec. \ref{sec5}. Sec. \ref{sec6} reviews related works and Sec. \ref{sec7}
concludes this paper.
\section{Problem Formulation}\label{sec2}
To simplify the notation without damaging the economic structure, our model concerns one InP, one MVNO, several tenants, and each tenant's
served users. Our model and algorithm can be directly adapted to multi-MVNO multi-InP scenarios. We consider the scenario where
multiple network slices are built upon an SDN/NFV-enabled 5G network infrastructure, which is rented from the InP by the MVNO.
Roughly, physical resources in this infrastructure can be divided into computation, storage, and forwarding/bandwidth. More specifically, physical
resources are usually organized as a \textit{weighted directed graph} \cite{s1} \cite{resource-allocation-infocom17}, where the \textit{node}
can be a base station in the access network, a forwarding router in the bearer network, or a physical or virtual machine in the regional datacenters, and
the \textit{edge} is a directed link with a certain propagation speed. Each node is a carrier of VNFs with different capabilities,
while each link has its own bandwidth and data transfer rate. To ensure the universality of the model, we do not add any specific limitation and simply
use $\mathcal{C} \triangleq \{1, ..., C\}$ to denote the set of resources. Without loss of generality, the capacity limit of each resource is
normalized to be $1$.
\begin{table}[htbp]
\begin{center}
\caption{\label{key}Summary of key notations.}
\begin{tabular}{l|l}
\toprule
{\textsf{\textbf{Notation}}}& {\textsf{\textbf{Description}}}\\[+0.1mm]
\midrule
$\mathcal{C}$ & The set of network resources\\[+0.7mm]
$\mathcal{N}$ & The set of tenants\\[+0.7mm]
$\mathcal{S}_n$ & The set of users of tenant $n \in \mathcal{N}$\\[+0.7mm]
$\{d_n^c\}_{\forall c \in \mathcal{C}}$ & Resource demands of tenant $n$\\[+0.7mm]
$v_n$ & The estimated revenue of tenant $n$\\[+0.7mm]
$e_n^c$ & The earning density of tenant $n$ on resource $c$\\[+0.7mm]
$\underline{p_c} \textrm{ and } \overline{p_c}$ & The lower (upper) bound of earning density\\[+0.7mm]
$x_n \in \{0, 1\}$ & The decision variable of tenant $n$\\[+0.7mm]
$\pi_n$ & The payment made by tenant $n$\\[+0.7mm]
$y_c \in [0, 1]$ & The resource rent out of type $c$\\[+0.7mm]
$\{f_c\}_{\forall c \in \mathcal{C}}$ & Non-decreasing zero-startup cost functions\\[+0.7mm]
$\underline{\varsigma_c}$ and $\overline{\varsigma_c}$ & The derivative of $f_c(\cdot)$ at point $0$ and $1$\\[+0.7mm]
$\{\tilde{f}_c\}_{\forall c \in \mathcal{C}}$ & The extended cost functions\\[+0.7mm]
$\{F_{p_c}\}_{\forall c \in \mathcal{C}}$ & The profit functions\\[+0.7mm]
$\{h_{c}\}_{\forall c \in \mathcal{C}}$ & The maximum profit functions\\[+0.7mm]
$\psi_n \textrm{ and } p_c$ & The dual variables corresponding to $x_n$ and $y_c$\\[+0.7mm]
$\{\phi_c\}_{\forall c \in \mathcal{C}}$ & The pricing functions\\[+0.7mm]
$\alpha$ & Competitive ratio of online algorithms\\[+0.7mm]
\bottomrule
\end{tabular}
\end{center}
\end{table}
Let us use $\mathcal{N} \triangleq \{1, ..., N\}$ to denote the set of tenants. In our model, each tenant requests one (and only one) slice from
the MVNO\footnote{In the following, we may interpret $n$ as the $n$-th tenant or the $n$-th slice, depending on the context.}. Generally, a
slice is a collection of different types of resources, the topology of which can be mapped onto the substrate network as a connected
subgraph\footnote{There has been a lot of work on VNF placement and mapping \cite{nfv}\cite{core-add1}\cite{edge-slice}\cite{congest-game-slice}.
But this is not the subject of this paper.}. We use $\{d_n^c\}_{\forall c \in \mathcal{C}}$ to denote the requirements of the $n$-th tenant (slice),
where $d_n^c$ is the demand of resource of type $c \in \mathcal{C}_n \subseteq \mathcal{C}$, and
\begin{equation}
d_n^c
\left\{
\begin{array}{rl}
> 0 & \textrm{if } c \in \mathcal{C}_n \\
= 0 & \textrm{otherwise}.
\end{array}
\right.
\end{equation}
The traffic demand on the $n$-th slice is denoted by $\{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n}$, where $s$ is a service subscriber
from tenant $n$'s served users $\mathcal{S}_n$, and $f_s(\gamma, \tau)$ is a data flow with a promised data rate $\gamma$ and latency
constraint $\tau$ from some source node to some destination node. A slice's consumption of resources is embodied in the execution of VNFs and
the occupation of bandwidth. For tenant $n$, we define a function $\sigma_n: \{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n} \to \mathbb{R}$ to calculate the
payment of user $s$ for enjoying the data flow $f_s(\gamma, \tau)$. We regard $\sigma_n$ as \textit{private} because it involves business secrets
of the tenant. The \textit{estimated} revenue of each tenant $n$ is from the payment of its service subscribers, which is defined as follows:
\begin{equation}
v_n \triangleq \sum_{s \in \mathcal{S}_n} \varrho_s \cdot \sigma_n \Big(f_s(\gamma, \tau)\Big).
\label{vn}
\end{equation}
In \eqref{vn}, $\varrho_s$ is the level of QoS for subscriber $s \in \mathcal{S}_n$, which is decided based on the commitments on delay tolerance, reliability,
isolation level, etc. \cite{ns-game}. A full list of attributes can be found in \cite{nst}. Under normal circumstances, the higher the level of QoS, the
faster the data rate and the tighter the latency constraint on the data flow, which in turn leads to more resource consumption.
Note that all the $\{d_n^c\}_{\forall c \in \mathcal{C}}$ are used to support the data flows $\{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n}$.
If we assume that the mapping function from data flow $f_s(\gamma, \tau)$ to the $c$-th resource occupation is
$g_c: \{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n} \to [0, d_n^c]$, we have the following identity:
\begin{equation*}
d_n^c = \sum_{s \in \mathcal{S}_n} g_c \Big(f_s(\gamma, \tau)\Big), \forall c \in \mathcal{C}_n.
\end{equation*}
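As a toy illustration of \eqref{vn} and the identity above, the sketch below aggregates a tenant's estimated revenue and demands from its users' flows. The flow data, the private pricing function $\sigma_n$, and the mapping $g_c$ are all hypothetical placeholders, not quantities from this paper.

```python
# Toy computation of tenant n's estimated revenue v_n and demands d_n^c
# from its users' data flows. All functions and numbers are placeholders.

flows = [  # (subscriber, data rate gamma [Mbps], latency bound tau [ms], QoS level rho_s)
    ('s1', 50.0, 20.0, 1.0),
    ('s2', 10.0, 50.0, 0.8),
]

def sigma_n(gamma, tau):
    """Placeholder private payment function: pay more for rate and tightness."""
    return 0.02 * gamma + 1.0 / tau

def g_c(gamma, tau, c):
    """Placeholder flow -> resource-c occupation mapping (normalized units)."""
    weight = {'compute': 1e-3, 'bandwidth': 2e-3}[c]
    return weight * gamma

# v_n = sum_s rho_s * sigma_n(f_s); d_n^c = sum_s g_c(f_s)
v_n = sum(rho * sigma_n(gamma, tau) for _, gamma, tau, rho in flows)
d_n = {c: sum(g_c(gamma, tau, c) for _, gamma, tau, _ in flows)
       for c in ('compute', 'bandwidth')}
```

Note that $\sigma_n$ and the per-flow data stay entirely on the tenant's side; only the aggregates $v_n$ and $\{d_n^c\}$ ever matter to the mechanism.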
In practice, $v_n$ can be interpreted as the \textit{willingness-to-pay} of tenant $n$ for renting the required resources \cite{online-mechanism}.
$\forall c \in \mathcal{C}, \forall n \in \mathcal{N}$, we define the \textit{earning density} $e_n^c$ as $v_n / d_n^c$. $e_n^c$ can be interpreted
as the estimated revenue per unit of resource $c$ to the tenant $n$. Following \cite{online-mechanism}\cite{assumption1}, we define
$\underline{p_c}$ and $\overline{p_c}$ as follows.
\begin{equation}
\forall c \in \mathcal{C}:
\left\{
\begin{array}{l}
\underline{p_c} \leq \min_{\forall n \in \mathcal{N}: d_n^c \neq 0} e_n^c \\
\overline{p_c} \geq \max_{\forall n \in \mathcal{N}: d_n^c \neq 0} e_n^c.
\end{array}
\right.
\label{p}
\end{equation}
The lower bound means that the MVNO will reject the tenant $n$ directly if $\exists c \in \mathcal{C}, e_n^c$ is lower than $\underline{p_c}$.
The role of the lower bound is to avoid the tenants deliberately overstating their resource demands to get a discount. In other words, the
tenants are forced to engage in transactions with their true preferences and no resource will be wasted. The upper bound in \eqref{p}
is to eliminate irrational tenants or mock auctions, which exist naturally in a healthy market.
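The admission rule implied by the lower bound in \eqref{p} can be sketched as follows; the bounds and numbers are illustrative placeholders.

```python
# Sketch of the earning-density admission rule: tenant n is rejected
# outright if e_n^c = v_n / d_n^c falls below the floor p_lo[c] for any
# requested resource c. All values are illustrative.

p_lo = {'compute': 2.0, 'bandwidth': 1.0}   # assumed lower bounds

def admissible(v_n, d_n):
    for c, d in d_n.items():
        if d > 0 and v_n / d < p_lo[c]:
            return False   # overstated demand / too-low valuation: reject
    return True

assert admissible(1.0, {'compute': 0.2})                          # e = 5.0 >= 2.0
assert not admissible(1.0, {'compute': 0.2, 'bandwidth': 2.0})    # e = 0.5 < 1.0
```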
For each tenant $n \in \mathcal{N}$, we use $x_n \in \{0, 1\}$ to indicate whether the deal is successful. The utility of tenant $n$ is
defined as $U_n \triangleq (v_n - \pi_n) \cdot x_n$, where $\pi_n$ is the payment. The utility of the MVNO is defined as
$U_o \triangleq \sum_{n \in \mathcal{N}} \pi_n \cdot x_n - \sum_{c \in \mathcal{C}} f_c\Big(\sum_{n \in \mathcal{N}} d_n^c x_n \Big)$, where
$\forall c \in \mathcal{C}, f_c: [0, 1] \to \mathbb{R}$ is a non-decreasing zero-startup cost function of resource $c$. We set $f_c$ as a non-decreasing
function because the more resources are virtualized and sliced, the higher the operating and maintenance costs on those rented-out slices.
In the above formulation, we take $\sigma_n$ and the mapping $\{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n} \to \mathcal{S}_n$
as private information of tenant $n$ which should not be accessible to the MVNO and the other tenants. The former comes down to the payment models and
pricing strategies adopted by tenant $n$ and reveals the purchasing power of its served users. The latter establishes the relationship between data
flows and their initiators. In online settings, the deals between the MVNO and the tenants are made one-by-one according to the arrival sequence of
tenants. Our goal is to (approximately) maximize the social welfare of this ecosystem, i.e. the sum of the MVNO's utility and all the tenants', in
an \textit{online} and \textit{decentralized} setting. Before introducing the online algorithm proposed in this paper, we first formulate
the global \textit{offline} social welfare maximization problem as follows.
\begin{subequations}
\begin{equation*}
\mathcal{P}_1: \max_{\{x_n\}_{\forall n \in \mathcal{N}}} \sum_{n \in \mathcal{N}} v_n x_n -
\sum_{c \in \mathcal{C}} f_c \Big( \sum_{n \in \mathcal{N}} d_n^c x_n \Big)
\end{equation*}
\begin{equation}
s.t. \quad \sum_{n \in \mathcal{N}} d_n^c x_n \leq 1, \forall c \in \mathcal{C},
\label{p1_con1}
\end{equation}
\begin{equation}
\qquad x_n \in \{0, 1\}, \forall n \in \mathcal{N}.
\label{p1_con2}
\end{equation}
\end{subequations}
In $\mathcal{P}_1$, $v_n$ is obtained through \eqref{vn}. Even though the problem is hard to solve, it is formulated based on the complete knowledge
of the ecosystem. In other words, the formulation of $\mathcal{P}_1$ itself is a detriment to privacy. In an online setting, the MVNO should only
know the setup information $\{ f_c, \underline{p_c}, \overline{p_c} \}_{\forall c \in \mathcal{C}}$ and the attributes defined in the GSTs
$\{\varrho_s\}_{\forall s \in \mathcal{S}_n, \forall n \in \mathcal{N}}$ handed in by tenants as a priori knowledge. It should not know anything
about the private tuple
$$
\vec{\theta} \triangleq \Big( \{\sigma_n\}_{\forall n \in \mathcal{N}}, \big\{ \{f_s(\gamma, \tau)\}_{\forall s \in \mathcal{S}_n} \to \mathcal{S}_n\big\}_{\forall n \in \mathcal{N}} \Big)
$$
and the arrival sequence of tenants. In addition, each tenant should know nothing about the other tenants at all. As a result, to solve the problem
in a privacy-preserving decentralized setting, we need to ensure that the deal is made with only a small flow of information transferred between
the MVNO and each tenant without revealing any sensitive information.
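For tiny instances, $\mathcal{P}_1$ can be solved exactly by exhaustive search over $\{0, 1\}^N$, which is useful as an offline benchmark; the valuations, demands, and linear cost function below are illustrative placeholders.

```python
from itertools import product

# Exhaustive solver for the offline problem P1 (exponential in N; only for
# tiny instances, as an offline-optimal benchmark).

def solve_p1(v, d, f):
    """v[n]: valuation; d[n][c]: demand; f[c]: cost function on [0, 1]."""
    N, resources = len(v), list(f)
    best, best_x = float('-inf'), None
    for x in product((0, 1), repeat=N):
        load = {c: sum(d[n][c] * x[n] for n in range(N)) for c in resources}
        if any(load[c] > 1 for c in resources):       # capacity constraint
            continue
        welfare = (sum(v[n] * x[n] for n in range(N))
                   - sum(f[c](load[c]) for c in resources))
        if welfare > best:
            best, best_x = welfare, x
    return best, best_x

v = [3.0, 2.0, 2.5]
d = [{'cpu': 0.6}, {'cpu': 0.5}, {'cpu': 0.3}]
f = {'cpu': lambda y: 0.5 * y}   # linear, zero-startup cost
```

Here the optimum admits tenants 1 and 3 and excludes tenant 2, whose demand would violate the capacity constraint together with tenant 1.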
In the proposed algorithm \textsf{DPoS}, which will be introduced in the following, each time when a new tenant $n$ arrives,
the tenant makes the decision $x_n$ \textit{by itself} according to the disclosed information, such as the current rental price of each kind of resource.
If $x_n$ is set as $1$, then tenant $n$ sends $(1, \pi_n, \{d_n^c\}_{\forall c \in \mathcal{C}})$ to the MVNO. Otherwise $(0, 0, 0)$ is sent.
The MVNO can only access the data transferred to it.
\section{Algorithm Design}\label{sec3}
To maximize the social welfare in an online and decentralized setting, we first introduce some notations, then demonstrate the designing of the
\textbf{D}istributed \textbf{P}rivacy-preserving \textbf{o}nline \textbf{S}licing algorithm, \textsf{DPoS}.
\subsection{The Primal-Dual Approach}\label{sec3.1}
$\forall c \in \mathcal{C}$, we introduce the \textit{extended cost function} $\tilde{f}_c$ as follows.
\begin{equation}
\tilde{f}_c(y) \triangleq
\left\{
\begin{array}{ll}
f_c(y) & \textrm{if } y \in [0, 1]\\
+\infty & \textrm{if } y \in (1, +\infty).
\end{array}
\right.
\label{extended_f}
\end{equation}
$\tilde{f}_c$ extends the domain of $f_c$ to $[0, +\infty)$. Correspondingly, we define the \textit{profit function} $F_{p_c}$ of resource $c$ for
the MVNO as follows:
\begin{equation}
F_{p_c} (y_c) \triangleq p_c y_c - \tilde{f}_c(y_c), \forall y_c \in [0, +\infty).
\label{F_c}
\end{equation}
Regarding $y_c$ as the total amount of resource $c$ rented out and $p_c$ as the rental price of resource $c$, $F_{p_c} (y_c)$ is the revenue
obtained by renting out $y_c$ units of resource $c$ minus its maintenance cost. Based on \eqref{F_c}, we denote the \textit{maximum profit}
$h_c$ of resource $c$ when the rental price is $p_c$ by
\begin{equation}
h_c (p_c) \triangleq \max_{y_c \geq 0} F_{p_c} (y_c).
\label{h_c}
\end{equation}
Following the primal-dual approach \cite{online-mechanism}\cite{primal-dual1}, we introduce the \textit{Relaxed Primal Problem} $\mathcal{P}_2$.
\begin{subequations}
\begin{equation*}
\mathcal{P}_2: \max_{\vec{x}, \vec{y}} \sum_{n \in \mathcal{N}} \sum_{s \in \mathcal{S}_n} \varrho_s \cdot \sigma_n \Big(f_s(\gamma, \tau)\Big) x_n -
\sum_{c \in \mathcal{C}} \tilde{f}_c ( y_c )
\end{equation*}
\begin{equation}
s.t. \quad \sum_{n \in \mathcal{N}} d_n^c x_n \leq y_c, \forall c \in \mathcal{C},
\label{p2-cons1}
\end{equation}
\begin{equation}
\qquad \vec{x} \leq \mathbf{1}, \vec{x} \geq 0, \vec{y} \geq 0,
\label{p2-cons2}
\end{equation}
\end{subequations}
where $\vec{x} = [x_n]_{n \in \mathcal{N}} \in [0, 1]^N$, and $\vec{y} = [y_c]_{c \in \mathcal{C}} \in \mathbb{R}^C$.
In terms of the relation between $\mathcal{P}_1$ and $\mathcal{P}_2$, we have the following proposition.
\begin{proposition}
$\mathcal{P}_2$ is equivalent to $\mathcal{P}_1$ except the relaxation of $\{x_n\}_{\forall n \in \mathcal{N}}$.
\begin{proof}
To maximize the objective of $\mathcal{P}_2$, the optimal $\vec{y}^{\star}$ must be located in $[0, 1]^C$. Because $f_c$ is
non-decreasing for every resource $c \in \mathcal{C}$, the optimal $y_c^{\star}$ must be the minimum allowed, i.e.,
$\sum_{n \in \mathcal{N}} d_n^c x_n$. Thus, except for relaxing $\{x_n\}_{\forall n \in \mathcal{N}}$ to the continuous interval $[0, 1]^N$,
$\mathcal{P}_2$ is the same as $\mathcal{P}_1$.
\end{proof}
\end{proposition}
Taking $\mathcal{P}_2$ as the primal problem, the following proposition gives the dual problem $\mathcal{P}_3$.
\begin{proposition}
The dual problem of $\mathcal{P}_2$ is:
\begin{subequations}
\begin{equation}
\mathcal{P}_3: \min_{\vec{p}, \vec{\psi}} \sum_{n \in \mathcal{N}} \psi_n +
\sum_{c \in \mathcal{C}} h_c ( p_c )
\end{equation}
\begin{equation}
s.t. \quad \psi_n \geq v_n - \sum_{c \in \mathcal{C}} p_c d_n^c, \forall n \in \mathcal{N},
\label{dual1}
\end{equation}
\begin{equation}
\qquad \vec{\psi} \geq \mathbf{0}, \vec{p} \geq \mathbf{0},
\label{dual2}
\end{equation}
\end{subequations}
where $\vec{\psi} = [\psi_n]_{n \in \mathcal{N}} \in \mathbb{R}^N$ and $\vec{p} = [p_c]_{c \in \mathcal{C}} \in \mathbb{R}^C$ are the dual variables
corresponding to $\vec{x}$ and $\vec{y}$, respectively.
\begin{proof}
By introducing the Lagrangian multipliers $\{p_c\}_{\forall c \in \mathcal{C}}$ and $\{\psi_n\}_{\forall n \in \mathcal{N}}$ for
\eqref{p2-cons1} and the first inequality of \eqref{p2-cons2}, respectively, the Lagrangian of $\mathcal{P}_2$ is
\begin{eqnarray*}
\qquad \Lambda (\vec{x}, \vec{y}, \vec{\psi}, \vec{p}) =
\sum_{c \in \mathcal{C}} \Big( p_c y_c - \tilde{f}_c (y_c) \Big) + \sum_{n \in \mathcal{N}} \psi_n \quad \\
\qquad + \sum_{n \in \mathcal{N}} x_n \Bigg( \sum_{s \in \mathcal{S}_n} \varrho_s \cdot \sigma_n \Big(f_s(\gamma, \tau)\Big) - \sum_{c \in \mathcal{C}} p_c d_n^c - \psi_n \Bigg)
\end{eqnarray*}
Thus, we have
\begin{equation*}
\min_{\vec{\psi}, \vec{p}} \max_{\vec{x}, \vec{y}} \Lambda =
\min_{\vec{\psi}, \vec{p}} \bigg( \max_{\vec{y}} \sum_{c \in \mathcal{C}} \Big( p_c y_c - \tilde{f}_c (y_c) \Big) + \sum_{n \in \mathcal{N}} \psi_n \bigg)
\end{equation*}
when $\forall n \in \mathcal{N}$, $\psi_n \geq v_n - \sum_{c \in \mathcal{C}} p_c d_n^c$. Therein,
$\sum_{s \in \mathcal{S}_n} \varrho_s \cdot \sigma_n \Big(f_s(\gamma, \tau) \Big)$ is replaced by $v_n$ through \eqref{vn}. The result
is immediate with \eqref{h_c}.
\end{proof}
\end{proposition}
We can regard $\psi_n$ as the utility of tenant $n$. The objective of $\mathcal{P}_3$ is then the aggregate utility of all tenants plus the
optimal utility of the MVNO. The objectives of both $\mathcal{P}_1$ and $\mathcal{P}_3$ represent the social welfare of the ecosystem.
\subsection{The \textsf{DPoS} Algorithm}
Note that the rental price $p_c$ of resource $c$ is a global variable known to all tenants. Thus, if the final optimal price $\vec{p}$ were
known to the MVNO, then each time a tenant $n$ arrived, this tenant could make the rental decision $x_n$ without worrying about whether the optimal social
welfare is achieved or not. However, it is impossible to know the exact value of $\vec{p}$ in advance without the arrival sequence and
$\vec{\theta}$. To tackle this problem, inspired by \cite{primal-dual2}, \cite{online-mechanism}, and \cite{primal-dual1},
we design the \textsf{DPoS} algorithm based on the alternating update of primal \& dual variables (of $\mathcal{P}_2$ and $\mathcal{P}_3$) and the
\textit{predict-and-update} of $\vec{p}$. In the following, we place a hat on top of variables that denote the decisions made online.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3in]{figs/algo.png}}
\caption{How \textsf{DPoS} works. Each time a new tenant $n$ arrives, only a small flow of privacy-irrelevant data is transferred between
\textsf{DPoS-MVNO} and \textsf{DPoS-TNT}$_n$.}
\label{fig2}
\end{figure}
\begin{figure}[!ht]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{\textsf{DPoS-MVNO}}
\KwIn{$\{ f_c, \underline{p_c}, \overline{p_c}, \phi_c \}_{\forall c \in \mathcal{C}}$}
$\forall c \in \mathcal{C}$, initialize $\hat{y}_c^{(0)}$ as zero, set $\hat{p}_c^{(0)}$ as $\phi_c(\hat{y}_c^{(0)})$ \\
\While{a new tenant $n$ arrives}
{
Publish the rental price $\{ \hat{p}_c^{(n-1)} \}_{c \in \mathcal{C}}$ to \textsf{DPoS-TNT}$_n$\\
Receive $\hat{x}_n$, $\hat{\pi}_n$, and $\{d_n^c\}_{\forall c \in \mathcal{C}}$ from \textsf{DPoS-TNT}$_n$\\
\uIf{$\hat{x}_n$ \textbf{is} $1$}
{
\uIf{$\exists c \in \mathcal{C}$ such that $\hat{y}_c^{(n-1)} + d_n^c > 1$}
{
Update $\hat{x}_n$ as $0$\\
Send $\hat{\pi}_n$ and \texttt{FAIL} back to \textsf{DPoS-TNT}$_n$
}
\Else
{
Update the total resource utilization: $$\forall c \in \mathcal{C}, \hat{y}_c^{(n)} \leftarrow \hat{y}_c^{(n-1)} + d_n^c$$\\
Send \texttt{SUCC} back to \textsf{DPoS-TNT}$_n$
}
}
\Else
{
$\forall c \in \mathcal{C}, \hat{y}_c^{(n)} \leftarrow \hat{y}_c^{(n-1)}$\\
}
Update the rental price: $$ \forall c \in \mathcal{C}, \hat{p}_c^{(n)} \leftarrow \phi_c (\hat{y}_c^{(n)}) $$
}
\end{algorithm}
\end{figure}
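A runnable sketch of \textsf{DPoS-MVNO} is given below. The exponential pricing function is a common placeholder in online primal-dual mechanisms and is used here only as an assumption; the analytic forms of $\{\phi_c\}$ actually used by \textsf{DPoS} are derived later.

```python
import math

# Runnable sketch of the DPoS-MVNO pseudocode. The pricing function phi
# below is an illustrative placeholder, not the paper's analytic form.

class DPoSMVNO:
    def __init__(self, p_lo, p_hi):
        self.p_lo, self.p_hi = p_lo, p_hi          # per-resource density bounds
        self.y = {c: 0.0 for c in p_lo}            # resource rented out so far

    def phi(self, c, y):
        # placeholder pricing: p_lo[c] at y = 0, growing exponentially in y
        return self.p_lo[c] * (math.e * self.p_hi[c] / self.p_lo[c]) ** y

    def prices(self):
        """Prices published to the next arriving tenant."""
        return {c: self.phi(c, self.y[c]) for c in self.y}

    def handle(self, x_n, pi_n, d_n):
        """Process (x_n, pi_n, {d_n^c}) received from DPoS-TNT_n."""
        if x_n == 0:
            return None                            # utilization and prices unchanged
        if any(self.y[c] + d_n.get(c, 0.0) > 1.0 for c in self.y):
            return 'FAIL'                          # refund pi_n to tenant n
        for c in self.y:                           # commit the allocation
            self.y[c] += d_n.get(c, 0.0)
        return 'SUCC'
```

A transaction either commits the demanded resources or fails when any capacity would be exceeded, mirroring the two branches of the pseudocode above; recomputing prices from the current $\hat{y}_c$ on demand is equivalent to the explicit price update in the last line of the pseudocode.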
\textsf{DPoS} consists of two parts, \textsf{DPoS-MVNO} and \textsf{DPoS-TNT}$_n$ (each for a tenant). Before a new tenant $n$ arrives,
\textsf{DPoS-MVNO} prices for each resource $c$ with a function $\phi_c$:
\begin{equation}
\hat{p}_c^{(n-1)} = \phi_c (\hat{y}_c^{(n-1)}), \forall c \in \mathcal{C}.
\label{price}
\end{equation}
The pricing functions $\{\phi_c\}_{c \in \mathcal{C}}$ are closely associated with the properties of the cost functions $\{f_c\}_{c \in \mathcal{C}}$.
We provide their analytic forms in the following subsection.
\textsf{DPoS-MVNO} discloses the rental prices $\{ \hat{p}_c^{(n-1)} \}_{c \in \mathcal{C}}$ to tenant $n$. Then, tenant $n$ judges whether it would obtain
\textit{positive} utility by renting $\{d_n^c\}_{\forall c \in \mathcal{C}}$ ($\hat{x}_n \leftarrow 1$). If yes, \textsf{DPoS-TNT}$_n$
sets the payment $\hat{\pi}_n$ as $\sum_{c \in \mathcal{C}} d_n^c \cdot \hat{p}_c^{(n-1)}$. Otherwise, both $\hat{x}_n$ and $\hat{\pi}_n$ are set
to zero. In the end, \textsf{DPoS-TNT}$_n$ sends $(\hat{x}_n, \hat{\pi}_n, \{d_n^c\}_{\forall c \in \mathcal{C}})$ to \textsf{DPoS-MVNO}.
When \textsf{DPoS-MVNO} receives the message from \textsf{DPoS-TNT}$_n$, it checks whether the resource surplus can satisfy tenant $n$. If yes,
\textsf{DPoS-MVNO} sends the indicator \texttt{SUCC} to \textsf{DPoS-TNT}$_n$ to confirm the success of this transaction. Otherwise, it sends \texttt{FAIL}
and returns the rent $\hat{\pi}_n$. If the transaction succeeds, tenant $n$ submits the GST and the other materials that need to be provided.
The procedure is visualized in Fig. \ref{fig2}.
Note that the data transfer between \textsf{DPoS-MVNO} and \textsf{DPoS-TNT}$_n$ is \textit{stop-and-wait}, i.e., a newly arrived tenant will not be
handled by the MVNO until the transaction between the MVNO and the previous tenant is done.
In \textsf{DPoS}, only a small flow of privacy-irrelevant data $(\hat{x}_n, \hat{\pi}_n, \{d_n^c\}_{\forall c \in \mathcal{C}})$ is
transferred between \textsf{DPoS-TNT}$_n$ and \textsf{DPoS-MVNO}. The MVNO cannot extract any information from $\vec{\theta}$. In addition, each tenant
knows nothing about the other tenants. \textsf{DPoS} is implemented in a posted-price manner \cite{posted1}\cite{posted2}, where the rent decision
made by each tenant is \textit{take-it-or-leave-it}. A tenant cannot get any discount even if it rents a relatively large amount of resources;
in other words, a tenant rents exactly as much as it uses, so no resource will be wasted.
It is easy to verify that the complexity of \textsf{DPoS} is linear in the number of tenants and the number of resource types.
In \textsf{DPoS-MVNO}, the while-loop terminates after all the $|\mathcal{N}|$ tenants finish their transactions in turn. During
the loop, the most time-consuming operations lie in step 6 and step 10, where the MVNO needs to check whether each type of resource $c$ is sufficient to
support a transaction and to deduct $d_n^c$ if permitted. In the worst case, the number of operations is $2 |\mathcal{C}|$. Considering that all the
remaining steps can be executed in $O(1)$-complexity, the worst-case complexity of \textsf{DPoS-MVNO} is $O(|\mathcal{N}| \cdot |\mathcal{C}|)$.
As for \textsf{DPoS-TNT}$_n$, the time-consuming operations lie in step 2, step 4, and step 7, all of which are of $O(|\mathcal{C}|)$-complexity.
Therefore, \textsf{DPoS-TNT}$_n$ is of $O(|\mathcal{C}|)$-complexity in the worst case.
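The posted-price interaction described above can be summarized compactly in code. The following Python snippet is a minimal illustration, not the paper's implementation: the pricing functions, demands, and valuations passed in are placeholder assumptions, and the tenant-side utility check and the MVNO-side capacity check are merged into a single loop for brevity.

```python
def dpos(tenants, phi):
    """Minimal posted-price loop. tenants: list of (v_n, {c: d_n^c});
    phi: {c: pricing function of the utilization y_c}. Illustrative only."""
    y = {c: 0.0 for c in phi}                       # utilization per resource
    welfare = 0.0
    for v_n, d_n in tenants:
        prices = {c: phi[c](y[c]) for c in d_n}     # posted, take-it-or-leave-it
        payment = sum(d_n[c] * prices[c] for c in d_n)
        # tenant-side utility check and MVNO-side capacity check
        if v_n - payment > 0 and all(y[c] + d_n[c] <= 1.0 for c in d_n):
            for c in d_n:                           # SUCC: commit the rent
                y[c] += d_n[c]
            welfare += v_n - payment
        # otherwise: the tenant declines or the MVNO replies FAIL; state unchanged
    return welfare, y
```

Because each tenant is processed once and each step touches every resource at most a constant number of times, the loop visibly matches the $O(|\mathcal{N}| \cdot |\mathcal{C}|)$ bound derived above.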
\subsection{The Dynamic Pricing Functions}
In \textsf{DPoS}, the only difficulty lies in how the pricing functions $\{\phi_c\}_{c \in \mathcal{C}}$ are designed. As mentioned
before, the analytic forms of $\{\phi_c\}_{c \in \mathcal{C}}$ strongly rely on the properties of the cost functions $\{f_c\}_{c \in \mathcal{C}}$.
Even so, we claim that \textit{in \textsf{DPoS}, $\{\phi_c\}_{\forall c \in \mathcal{C}}$ are monotonically non-decreasing positive functions}.
We set $\phi_c$ as a non-decreasing function because it reflects the underlying economic phenomenon that \textit{a thing is valued
in proportion to its rarity}: the later a tenant arrives to rent the remaining resources, the higher the price it has to pay.
\begin{figure}[!ht]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{\textsf{DPoS-TNT}$_n$}
\KwIn{$\{d_n^c\}_{\forall c \in \mathcal{C}}$ and $\theta_n$}
Receive the rental price $\{ \hat{p}_c^{(n-1)} \}_{c \in \mathcal{C}}$ from \textsf{DPoS-MVNO}\\
$\hat{\psi}_n \leftarrow \max \big\{ v_n - \sum_{c \in \mathcal{C}} d_n^c \cdot \hat{p}_c^{(n-1)}, 0 \big\}$\\
\uIf{$\hat{\psi}_n$ \textbf{is} $0$}
{
Set $\hat{x}_n$, $\hat{\pi}_n$, and $\{d_n^c\}_{\forall c \in \mathcal{C}}$ as zero\\
}
\Else
{
Set $\hat{x}_n$ as $1$\\
Set the payment:
$$ \hat{\pi}_n \leftarrow \sum_{c \in \mathcal{C}} d_n^c \cdot \hat{p}_c^{(n-1)} $$
}
Send $\big(\hat{x}_n, \hat{\pi}_n, \{d_n^c\}_{\forall c \in \mathcal{C}}\big)$ to \textsf{DPoS-MVNO}
\end{algorithm}
\end{figure}
Now, we demonstrate the forms of $\{\phi_c\}_{\forall c \in \mathcal{C}}$ when the costs are linear.
Concretely, suppose that $\forall c \in \mathcal{C}$, the cost function has the form
\begin{equation}
f_c (y) = q_c y,
\label{linear_cost}
\end{equation}
where $0 < q_c < \underline{p_c}$. Then, in \textsf{DPoS}, the pricing function $\phi_c$ is set as follows:
\begin{eqnarray}
\phi_c (y) =
\left\{
\begin{array}{ll}
\underline{p_c} & y \in [0, w_c) \\
q_c + (\underline{p_c} - q_c) \cdot e^{y/w_c - 1} & y \in [w_c, 1] \\
+\infty & y \in (1, +\infty),
\end{array}
\right.
\label{linear_price}
\end{eqnarray}
where
\begin{equation}
w_c = \bigg( 1 + \ln \frac{\sum_{c' \in \mathcal{C}} (\overline{p_{c'}} - q_{c'})}{\underline{p_c} - q_c} \bigg)^{-1}
\label{w_c}
\end{equation}
is a threshold. Tan et al. also discuss the construction of the pricing function (for a single resource and for multiple substitutable resources) when the resource's cost
function is strictly convex \cite{online-mechanism}, which involves solving several first-order two-point boundary value problems (BVPs)
\cite{ODE}. In the next section, we will show that the competitive ratio of \textsf{DPoS} is optimal over all online algorithms
when $\{f_c\}_{c \in \mathcal{C}}$ are linear.
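The pricing rule in \eqref{linear_price} together with the threshold \eqref{w_c} can be transcribed directly. The following Python sketch is one way to do so; the parameter values used to exercise it are illustrative assumptions.

```python
import math

def make_linear_pricing(q, p_low, p_high_all, q_all):
    """Pricing function phi_c for a linear cost f_c(y) = q*y.
    q = q_c, p_low = underline{p_c}; p_high_all, q_all list
    overline{p_c'} and q_c' over all resources c'."""
    # threshold: w_c = ( 1 + ln( sum_{c'}(pbar_{c'} - q_{c'}) / (p_low - q) ) )^{-1}
    total = sum(pb - qq for pb, qq in zip(p_high_all, q_all))
    w = 1.0 / (1.0 + math.log(total / (p_low - q)))

    def phi(y):
        if y < w:
            return p_low                                     # flat segment
        if y <= 1.0:
            return q + (p_low - q) * math.exp(y / w - 1.0)   # exponential segment
        return math.inf                                      # capacity exhausted
    return phi, w
```

Note that continuity at $y = w_c$ holds by construction, since $q_c + (\underline{p_c} - q_c)\, e^{0} = \underline{p_c}$.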
\section{Theoretical Analysis}\label{sec4}
Online algorithms are commonly evaluated via the standard competitive
analysis framework \cite{competitive}. The definition of the \textit{competitive ratio} of an online algorithm for $\mathcal{P}_1$ is given below.
\begin{definition}
For any arrival instance $1, 2, ..., N$, denoted by $\mathcal{A}$, the competitive ratio for an online algorithm is defined as
\begin{equation}
\alpha \triangleq \max_{\forall \mathcal{A}} \frac{\mathbf{\Theta}_{\text{off}} (\mathcal{A})}{\mathbf{\Theta}_{\text{on}} (\mathcal{A})},
\end{equation}
where $\mathbf{\Theta}_{\text{off}} (\mathcal{A})$ is the maximum objective value of $\mathcal{P}_1$, and $\mathbf{\Theta}_{\text{on}} (\mathcal{A})$
is the objective value of $\mathcal{P}_1$ obtained by this online algorithm.
\end{definition}
Obviously, $\alpha \geq 1$ always holds. The smaller $\alpha$ is, the better the online algorithm. An online algorithm is \textit{competitive}
if its competitive ratio is upper bounded. Further, we can define the optimal competitive ratio as
\begin{equation}
\alpha^\star \triangleq \inf \max_{\forall \mathcal{A}} \frac{\mathbf{\Theta}_{\text{\textit{off}}} (\mathcal{A})}{\mathbf{\Theta}_{\text{\textit{on}}} (\mathcal{A})},
\end{equation}
where the $\inf$ is taken over all possible online algorithms. In the following, we drop the parentheses and $\mathcal{A}$ for simplicity.
Note that, optimal or not, the competitive ratio only gives a \textit{worst-case} guarantee.
To analyze the competitive ratio achieved by \textsf{DPoS}, we need to introduce several propositions and theorems beforehand. We first
verify that \textsf{DPoS} is $\alpha$-competitive for some constant $\alpha$, and then prove that it is optimal over all online
algorithms when $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear. The first proposition is related to the maximum utility $h_c$.
\begin{proposition}
$\forall c \in \mathcal{C}$, the function $h_c$, defined in \eqref{h_c}, can also be written as
\begin{equation}
h_c (p_c) =
\left\{
\begin{array}{ll}
F_{p_c} \big(f_c'^{-1}(p_c)\big) & p_c \in [\underline{\varsigma_c}, \overline{\varsigma_c}] \\
F_{p_c} (1) & p_c \in (\overline{\varsigma_c}, +\infty), \\
\end{array}
\right.
\label{conjugate}
\end{equation}
where $\underline{\varsigma_c} \triangleq f_c'(0)$, $\overline{\varsigma_c} \triangleq f_c'(1)$, $f_c'$ is the derivative of
$f_c$, and $f_c'^{-1}$ is the inverse of $f_c'$.
\begin{proof}
$\forall c \in \mathcal{C}$, when $\underline{\varsigma_c} \leq p_c \leq \overline{\varsigma_c}$, regarding $p_c$ as a value of the derivative of
the non-decreasing $f_c$, we have $f_c'^{-1}(p_c) \in [0, 1]$. We then need to find the $y_c^\star$ that maximizes $F_{p_c} (y_c)$. By analyzing
the sign of $\partial F_{p_c} (y_c) / \partial y_c$, which is $p_c - f_c'(y_c)$, we find that $y_c^\star$ satisfies
$p_c = f_c'(y_c^\star)$. Thus $h_c (p_c)$ equals $F_{p_c} \big(f_c'^{-1}(p_c)\big)$ when $\underline{\varsigma_c} \leq p_c \leq \overline{\varsigma_c}$.
The same reasoning applies to the second segment of \eqref{conjugate}.
\end{proof}
\end{proposition}
\eqref{conjugate} is known as the \textit{convex conjugate} of $\tilde{f}_c$ \cite{convex-opt}. For a given online algorithm, denote the objective
of $\mathcal{P}_2$ and $\mathcal{P}_3$ by $\mathbf{\Theta}_{\mathcal{P}_2}^n $ and $\mathbf{\Theta}_{\mathcal{P}_3}^n$ after processing tenant $n$,
respectively. Also, we use $\mathcal{V}_{\mathcal{P}_2} \triangleq \big(\{ \hat{x}_n \}_{\forall n \in \mathcal{N}}, \hat{y}_N\big)$ and
$\mathcal{V}_{\mathcal{P}_3} \triangleq \big(\{ \hat{\psi}_n \}_{\forall n \in \mathcal{N}}, \hat{p}_N\big)$ to denote the complete set of online
primal and dual solutions, respectively. In the following, we demonstrate the sufficient conditions of designing an $\alpha$-competitive online
algorithm for $\mathcal{P}_1$, and then show that \textsf{DPoS} satisfies the conditions.
\begin{proposition}
(Adapted from Proposition 3.1 of \cite{online-mechanism}) When $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear\footnote{Proposition 1 of
\cite{online-mechanism} also requires that $\{\underline{\varsigma_c} < \underline{p_c}\}_{\forall c \in \mathcal{C}}$ holds, which is not
required in this proposition.}, an online algorithm is $\alpha$-competitive if the following conditions are satisfied:
\begin{itemize}
\item All the online primal solutions in $\mathcal{V}_{\mathcal{P}_2}$ are feasible to $\mathcal{P}_1$;
\item All the online dual solutions in $\mathcal{V}_{\mathcal{P}_3}$ are feasible to $\mathcal{P}_3$;
\item There exists a tenant $k \in \mathcal{N}$ such that
\begin{equation}
\mathbf{\Theta}_{\mathcal{P}_2}^k \geq \frac{1}{\alpha} \mathbf{\Theta}_{\mathcal{P}_3}^k
\label{con0}
\end{equation}
and $\forall n \in \{k+1, ..., N\}$,
\begin{equation}
\mathbf{\Theta}_{\mathcal{P}_2}^n - \mathbf{\Theta}_{\mathcal{P}_2}^{n-1} \geq \frac{1}{\alpha}
\big( \mathbf{\Theta}_{\mathcal{P}_3}^n - \mathbf{\Theta}_{\mathcal{P}_3}^{n-1} \big)
\label{con1}
\end{equation}
holds.
\end{itemize}
\end{proposition}
\begin{proof}
Let us denote the optimal objective of $\mathcal{P}_2$ and $\mathcal{P}_3$ as $\mathbf{\Theta}_{\mathcal{P}_2}^\star$ and
$\mathbf{\Theta}_{\mathcal{P}_3}^\star$, respectively. Then,
\begin{equation}
\mathbf{\Theta}_{\text{\textit{off}}} \leq \mathbf{\Theta}_{\mathcal{P}_2}^\star = \mathbf{\Theta}_{\mathcal{P}_3}^\star
\leq \mathbf{\Theta}_{\mathcal{P}_3}^N.
\end{equation}
The first inequality holds because $\mathcal{P}_2$ is a relaxation of $\mathcal{P}_1$. The equality holds because,
when $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear, strong duality holds between $\mathcal{P}_2$ and $\mathcal{P}_3$.
The second inequality holds because the online dual solution after processing all tenants is feasible to the minimization problem $\mathcal{P}_3$.
Besides, $\mathbf{\Theta}_{\text{\textit{on}}} = \mathbf{\Theta}_{\mathcal{P}_2}^N$. As a result, to make
$\alpha \geq \mathbf{\Theta}_{\text{\textit{off}}}/\mathbf{\Theta}_{\text{\textit{on}}}$ always hold, it suffices to ensure that
$\mathbf{\Theta}_{\mathcal{P}_2}^N \geq \frac{1}{\alpha} \mathbf{\Theta}_{\mathcal{P}_3}^N$ holds.
According to \eqref{con1}, the following inequalities hold:
\begin{eqnarray*}
\sum_{n \in \mathcal{N}, n > k} \Big(\mathbf{\Theta}_{\mathcal{P}_2}^n - \mathbf{\Theta}_{\mathcal{P}_2}^{n-1}\Big) &\geq&
\frac{1}{\alpha} \sum_{n \in \mathcal{N}, n > k} \Big(\mathbf{\Theta}_{\mathcal{P}_3}^n - \mathbf{\Theta}_{\mathcal{P}_3}^{n-1}\Big) \\
\Longleftrightarrow \qquad \mathbf{\Theta}_{\mathcal{P}_2}^N - \mathbf{\Theta}_{\mathcal{P}_2}^k &\geq&
\frac{1}{\alpha} \Big( \mathbf{\Theta}_{\mathcal{P}_3}^N - \mathbf{\Theta}_{\mathcal{P}_3}^k \Big) \\
\Longleftrightarrow \quad \mathbf{\Theta}_{\mathcal{P}_2}^N - \frac{1}{\alpha} \mathbf{\Theta}_{\mathcal{P}_3}^N &\geq&
\mathbf{\Theta}_{\mathcal{P}_2}^k - \frac{1}{\alpha} \mathbf{\Theta}_{\mathcal{P}_3}^k \\
\Longleftrightarrow \qquad \mathbf{\Theta}_{\mathcal{P}_2}^N &\geq& \frac{1}{\alpha} \mathbf{\Theta}_{\mathcal{P}_3}^N. \qquad \vartriangleright \eqref{con0}
\end{eqnarray*}
We thus complete the proof.
\end{proof}
Proposition 4 gives three conditions for designing an $\alpha$-competitive online algorithm when $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear.
If we can show that these conditions hold for \textsf{DPoS}, then \textsf{DPoS} is $\alpha$-competitive for some constant $\alpha$.
In the following, we prove that the first and the second conditions hold.
\begin{itemize}
\item It is obvious that $\mathcal{V}_{\mathcal{P}_2}$ obtained by \textsf{DPoS} is feasible to $\mathcal{P}_1$ because the
``\textbf{if statement}'' in step 6 of \textsf{DPoS-MVNO} and steps 4 \& 6 of \textsf{DPoS-TNT}$_n$ ensure that \eqref{p1_con1} and
\eqref{p1_con2} can never be violated.
\item From step 2 of \textsf{DPoS-TNT}$_n$ we can find that
$\hat{\psi}_n \geq v_n - \sum_{c \in \mathcal{C}} d_n^c \cdot \hat{p}_c^{(n-1)}$.
Because $\{\phi_c\}_{\forall c \in \mathcal{C}}$ defined in \textsf{DPoS} are non-decreasing positive functions, the inequality
$$\hat{p}_c^{(N)} \geq \hat{p}_c^{(n)} \geq \hat{p}_c^{(n-1)} > 0$$
holds for each $c \in \mathcal{C}$. Thus $\forall n \in \mathcal{N}$, $\hat{\psi}_n \geq v_n - \sum_{c \in \mathcal{C}} d_n^c \hat{p}_c^{(N)}$ holds,
where $\hat{p}_c^{(N)}$ is the final rental price of resource $c$, i.e., $p_c$ in $\mathcal{P}_3$.
Thus, \eqref{dual1} is not violated. Step 2 of \textsf{DPoS-TNT}$_n$ also ensures that $\hat{\vec{\psi}} \geq \vec{0}$ holds.
Moreover, since $\{\phi_c\}_{\forall c \in \mathcal{C}}$ in \textsf{DPoS} are non-decreasing positive functions,
$\hat{\vec{p}} \geq \vec{0}$ always holds. We thus prove that \eqref{dual2} is not violated. Since neither \eqref{dual1} nor \eqref{dual2}
is violated, the second condition in proposition 4 holds for \textsf{DPoS}.
\end{itemize}
The proof that the third condition holds depends on the design of the pricing functions $\{\phi_c\}_{\forall c \in \mathcal{C}}$.
The following theorem shows that when $\{\phi_c\}_{\forall c \in \mathcal{C}}$ in \textsf{DPoS} are designed as \eqref{41} $\sim$ \eqref{43}
indicate, the third condition in proposition 4 holds.
\begin{theorem}
(Adapted from Theorem 4.1 of \cite{online-mechanism}) When $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear and
$\{0 < \underline{\varsigma_c} < \underline{p_c}\}_{\forall c \in \mathcal{C}}$ holds, if $\forall c \in \mathcal{C}$, the pricing function
$\phi_c$ in \textsf{DPoS} has the form:
\begin{equation}
\phi_c (y) =
\left\{
\begin{array}{ll}
\underline{p_c} & y \in [0, w_c) \\
\varphi_c (y) & y \in [w_c, 1] \\
+\infty & y \in (1, +\infty),
\end{array}
\right.
\label{41}
\end{equation}
where
\begin{equation}
w_c \in \Big[0, \argmax_{y \geq 0} \underline{p_c} y - \tilde{f}_c(y)\Big]
\label{41-add}
\end{equation}
is a threshold that satisfies
\begin{equation}
F_{\underline{p_c}} (w_c) \geq \frac{1}{\alpha_c} h_c (\underline{p_c}),
\label{42}
\end{equation}
and $\varphi_c (y)$ is an increasing function that satisfies
\begin{equation}
\left\{
\begin{array}{l}
\varphi'_c (y) \leq \alpha_c \cdot \frac{\varphi_c (y) - f'_c(y)}{h_c'(\varphi_c (y))}, \text{if } y \in (w_c, 1) \\
\varphi_c (w_c) = \underline{p_c} \\
\varphi_c (1) \geq \overline{p_c} + \sum_{c' \in \mathcal{C}\backslash\{c\}} h_{c'} (\overline{p_{c'}}),
\end{array}
\right.
\label{43}
\end{equation}
then \textsf{DPoS} is $\max_{c \in \mathcal{C}} \alpha_c$-competitive.
\end{theorem}
\begin{proof}
Assume that $\forall c \in \mathcal{C}, w_c = \sum_{n=1}^k d_n^c$, i.e., $k$ is the number of tenants after which the total rented amount
of each resource $c$ equals $w_c$. Substituting the definition of $F_{p_c}(\cdot)$ into \eqref{42}, we have
\begin{equation*}
\underline{p_c} \cdot \Big( \sum_{n=1}^k d_n^c \Big) - \tilde{f}_c \Big( \sum_{n=1}^k d_n^c \Big) \geq \frac{1}{\alpha_c} h_c (\underline{p_c}).
\end{equation*}
Because $\alpha_c \geq 1$ holds for each $c \in \mathcal{C}$ and $\hat{\vec{\psi}} \geq \vec{0}$, the above inequality leads to
\begin{eqnarray*}
\Big( 1 - \frac{1}{\alpha_c} \Big) \sum_{n=1}^k \hat{\psi}_n
&+& \sum_{c \in \mathcal{C}} \Bigg(
\underline{p_c} \cdot \Big( \sum_{n=1}^k d_n^c \Big) - \tilde{f}_c \Big( \sum_{n=1}^k d_n^c \Big)
\Bigg) \\
&\geq& \sum_{c \in \mathcal{C}} \frac{1}{\alpha_c} h_c (\underline{p_c}).
\end{eqnarray*}
Further, we have
\begin{eqnarray}
&\sum_{n=1}^k \Big(\hat{\psi}_n + \sum_{c \in \mathcal{C}} \underline{p_c} \cdot d_n^c\Big) - \sum_{c \in \mathcal{C}} \tilde{f}_c \Big( \sum_{n=1}^k d_n^c \Big) \nonumber\\
&\geq \min_{c' \in \mathcal{C}} \frac{1}{\alpha_{c'}} \bigg(\sum_{n=1}^k \hat{\psi}_n + \sum_{c \in \mathcal{C}} h_c (\underline{p_c}) \bigg).
\label{prove-t1-1}
\end{eqnarray}
The pricing function in \eqref{41} indicates that the requirements of all tenants will be satisfied as long as each resource $c$'s utilization
is below $w_c$. Thus, we have $\hat{y}_c^{(k)} = \sum_{n=1}^k d_n^c = w_c$.
Besides, all these tenants experience the same rental price for resource $c$, namely $\underline{p_c}$.
Therefore, \eqref{prove-t1-1} indicates
$\mathbf{\Theta}_{\mathcal{P}_2}^k \geq \min_{c \in \mathcal{C}} \frac{1}{\alpha_{c}} \mathbf{\Theta}_{\mathcal{P}_3}^k$.
Meanwhile, $w_c$ must be less than or equal to $\argmax_{y \geq 0} \underline{p_c} y - \tilde{f}_c(y)$ because the rental
price must be larger than or equal to the \textit{marginal cost} $f_c'(w_c)$ (the result follows immediately from \eqref{conjugate}).
The above has proved that \eqref{con0} holds. In the following, we prove \eqref{con1} holds. The change in the objective of
$\mathcal{P}_2$ when a new tenant $n$ arrives is
\begin{eqnarray*}
\mathbf{\Theta}_{\mathcal{P}_2}^n - \mathbf{\Theta}_{\mathcal{P}_2}^{n-1} &=& \hat{\psi}_n + \sum_{c \in \mathcal{C}} \phi_c (\hat{y}_c^{(n-1)})
\Big( \hat{y}_c^{(n)} - \hat{y}_c^{(n-1)} \Big) \\
&-& \sum_{c \in \mathcal{C}} \Big( \tilde{f}_c (\hat{y}_c^{(n)}) - \tilde{f}_c (\hat{y}_c^{(n-1)}) \Big).
\end{eqnarray*}
The change in the objective of $\mathcal{P}_3$ when a new tenant $n$ arrives is
\begin{equation*}
\mathbf{\Theta}_{\mathcal{P}_3}^n - \mathbf{\Theta}_{\mathcal{P}_3}^{n-1} =
\hat{\psi}_n + \sum_{c \in \mathcal{C}} \Big( h_c (\hat{p}_c^{(n)}) - h_c (\hat{p}_c^{(n-1)}) \Big).
\end{equation*}
To guarantee \eqref{con1} holds, it is \textit{equivalent} to guarantee the following \textit{per-resource} inequality
\begin{eqnarray*}
&\phi_c (\hat{y}_c^{(n-1)}) \Big( \hat{y}_c^{(n)} - \hat{y}_c^{(n-1)} \Big) -
\Big( \tilde{f}_c (\hat{y}_c^{(n)}) - \tilde{f}_c (\hat{y}_c^{(n-1)}) \Big) \\
&\geq \frac{1}{\alpha_c} \Big( h_c (\hat{p}_c^{(n)}) - h_c (\hat{p}_c^{(n-1)}) \Big).
\end{eqnarray*}
Dividing both sides of the above inequality by $\hat{y}_c^{(n)} - \hat{y}_c^{(n-1)}$ and letting the increment tend to zero, we get
\begin{equation}
\phi_c (y_c) - \tilde{f}_c'(y_c) \geq \frac{1}{\alpha_c} \cdot h_c'(\phi_c(y_c)) \cdot \phi_c'(y_c)
\label{prove-t1-2}
\end{equation}
when $y_c \in [w_c, 1)$. Hence, if \eqref{prove-t1-2} holds for all $y_c \in [w_c, 1)$, the incremental inequality in \eqref{con1}
holds on $[w_c, 1)$ for each type of resource. This requirement is exactly the first segment of \eqref{43}. The second segment of \eqref{43} ensures
the continuity of $\phi_c$. The third segment of \eqref{43} makes up the missing proof of \eqref{con1} at the exact point $y_c = 1$, and
can be derived by rearranging
\begin{equation*}
\underline{p_c} w_c + \int_{w_c}^1 \phi_c (y_c) d y_c - \tilde{f}_c (1) \geq \frac{1}{\alpha_c} \sum_{c \in \mathcal{C}} h_c (\overline{p_c}).
\end{equation*}
The above inequality is obtained by integrating both sides of \eqref{prove-t1-2}.
So far, we have proved that when $\{\phi_c\}_{\forall c \in \mathcal{C}}$ in \textsf{DPoS} are designed as \eqref{41} $\sim$ \eqref{43} suggest,
the third condition in proposition 4, i.e., \eqref{con0} and \eqref{con1}, holds. Thus, we have proved that \textsf{DPoS} is
$\max_{c \in \mathcal{C}} \alpha_{c}$-competitive.
\end{proof}
In the following, we verify that the design of $\{\phi_c\}_{c \in \mathcal{C}}$ in \textsf{DPoS} for linear $\{f_c\}$, demonstrated
in \eqref{linear_price}, satisfies the requirements defined in \eqref{41} $\sim$ \eqref{43}.
When $f_c(y) = q_c y$ and $q_c > 0$, the conjugate $h_c (p_c)$ defined in \eqref{h_c} is given by
\begin{equation}
h_c (p_c) =
\left\{
\begin{array}{ll}
0 & p_c \in [0, q_c] \\
p_c - q_c & p_c \in (q_c, +\infty)
\end{array}
\right.
\end{equation}
Note that $0 < q_c < \underline{p_c} \leq \overline{p_c}$. In this case, \eqref{42} reduces to
\begin{equation*}
\underline{p_c} w_c - \tilde{f}_c(w_c) \geq \frac{1}{\alpha_c} \Big( \underline{p_c} - f_c (1) \Big),
\end{equation*}
which indicates $w_c \geq \frac{1}{\alpha_c}$. Similarly, \eqref{43} reduces to
\begin{equation*}
\left\{
\begin{array}{l}
\varphi_c (y) - f'_c(y) \geq \frac{1}{\alpha_c} \cdot \varphi'_c (y) \cdot h_c'(\varphi_c (y)), w_c < y < 1\\
\varphi_c (w_c) = \underline{p_c} \\
\varphi_c (1) = \sum_{c \in \mathcal{C}} \overline{p_c} - \sum_{c' \in \mathcal{C} \backslash \{c\}} q_{c'}.
\end{array}
\right.
\end{equation*}
To minimize $\alpha_c$, it suffices to set $w_c$ as $1/\alpha_c$, and the above BVP then leads to \eqref{linear_price} and \eqref{w_c}.
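As a sanity check, one can verify numerically that the exponential segment of \eqref{linear_price} satisfies the first-order condition of this BVP with $\alpha_c = 1/w_c$, using the fact that $h_c'(p) = 1$ for $p > q_c$ in the linear case. The parameter values in the Python sketch below are assumed for illustration.

```python
import math

# Assumed illustrative values: unit cost q, price floor p_low, threshold w.
q, p_low, w = 0.2, 0.5, 0.4
alpha = 1.0 / w                      # alpha_c = 1 / w_c

def phi(y):
    # exponential segment of the linear-cost pricing function on [w, 1]
    return q + (p_low - q) * math.exp(y / w - 1.0)

def dphi(y, eps=1e-6):
    # central-difference numerical derivative of phi
    return (phi(y + eps) - phi(y - eps)) / (2.0 * eps)

# ODE: phi(y) - q = (1/alpha) * phi'(y)   (since h_c'(p) = 1 for p > q)
for y in (0.45, 0.6, 0.8, 0.99):
    assert abs((phi(y) - q) - dphi(y) / alpha) < 1e-5
# boundary condition at y = w: phi(w) = p_low
assert abs(phi(w) - p_low) < 1e-9
```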
The above analysis immediately leads to the following theorem.
\begin{theorem}
When the cost functions $\{f_c\}_{\forall c \in \mathcal{C}}$ are linear and $\{0 < \underline{\varsigma_c} < \underline{p_c}\}_{\forall c \in \mathcal{C}}$ holds,
the competitive ratio $\alpha$ achieved by \textsf{DPoS} is optimal over all possible online algorithms. Further, its value is
\begin{equation}
\alpha = \max_{\forall c \in \mathcal{C}} \alpha_c
= \max_{\forall c \in \mathcal{C}} \frac{1}{w_c},
\label{worst-ratio}
\end{equation}
where $w_c$ is defined in \eqref{w_c}.
\end{theorem}
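As a numeric illustration of \eqref{worst-ratio}, the optimal ratio can be evaluated directly from \eqref{w_c}; the price bounds and unit costs in the Python sketch below are made-up values, not from the paper.

```python
import math

def optimal_ratio(p_low, p_high, q):
    """alpha = max_c 1/w_c, with w_c from the linear-cost threshold formula.
    Inputs are per-resource lists of underline{p_c}, overline{p_c}, q_c."""
    total = sum(pb - qc for pb, qc in zip(p_high, q))  # sum_{c'} (pbar_{c'} - q_{c'})
    return max(1.0 + math.log(total / (pl - qc)) for pl, qc in zip(p_low, q))

# three resources with made-up price bounds and unit costs
alpha = optimal_ratio(p_low=[0.5, 0.3, 0.7], p_high=[1.0, 0.9, 1.2], q=[0.2, 0.1, 0.3])
assert alpha >= 1.0
```

Note that the ratio grows only logarithmically as the price range $\sum_{c'}(\overline{p_{c'}} - q_{c'})$ widens relative to the smallest margin $\underline{p_c} - q_c$.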
\section{Experimental Results}\label{sec5}
In this section, we conduct extensive simulation experiments to evaluate the effectiveness and efficiency of \textsf{DPoS}.
We first compare the performance of \textsf{DPoS} against several popular algorithms and handcrafted benchmark policies in terms of social
welfare, efficiency, and competitive ratio. Then, we analyze the impact of several system parameters.
We summarize the key findings of our experiments as follows; details can be found in Sec. \ref{sec5.2} and Sec. \ref{sec5.3}.
\begin{itemize}
\item \textsf{DPoS} not only achieves the highest social welfare among all the online algorithms compared, but also performs
\textit{close to the offline optimum}, especially when the number of tenants is at most $100$ and the number of
resource types is $1$.
\item In most cases, the ratio of the optimal social welfare to the social welfare achieved by \textsf{DPoS} (which fluctuates between
$1.00$ and $2.57$) is far less than the worst-case guarantee, i.e., the competitive ratio calculated by \eqref{w_c} and
\eqref{worst-ratio} (which fluctuates between $5.82$ and $8.54$).
\item \textsf{DPoS} is insensitive to environment parameters such as the distribution of $\{d_n^c\}_{\forall c \in \mathcal{C}}$
and the values of the coefficients of the linear costs, $\{q_c\}_{\forall c \in \mathcal{C}}$.
\item \textsf{DPoS} achieves a satisfactory balance between the overheads (cross-agent communication data size, the algorithm's
running time, etc.) and the performance.
\end{itemize}
\subsection{Experiment Setup}\label{sec5.1}
By default, we set the number of tenants $N$ as $100$. We also set the number of resource types as $3$ by default because
the resources can be roughly divided into computation, storage, and forwarding/bandwidth. Note that $100$ and $3$ are only default
settings; in Sec. \ref{sec5.2} and Sec. \ref{sec5.3}, we
analyze the scalability of \textsf{DPoS} extensively.
For each tenant $n$, $\{d_n^c\}_{\forall c \in \mathcal{C}}$ is sampled from the Gaussian distribution
$N (\mu = \frac{1}{N}, \sigma = \frac{1}{N^2})$.
The pay level $l_n$ is randomly sampled from $[2, 6]$. The highest level of QoS, denoted by $l_n^h$, is randomly sampled from
$U(2, 6)$. The lowest level of QoS, denoted by $l_n^0$, is the \textit{free user level}. We set the percentage of free users to roughly $40\%$ for
each tenant \cite{data}. The remaining users are randomly assigned to a QoS level according to a pyramid structure: the higher
the QoS level, the fewer the users. The payment of each user $s \in \mathcal{S}_n$ is proportional to its QoS level. By default,
$\forall n \in \mathcal{N}, \forall s \in \mathcal{S}_n$, we set $\sigma_n$ as the identity function. For each type of resource, we adopt the linear
cost defined in \eqref{linear_cost}. By default, $\forall c \in \mathcal{C}$, $q_c$ is randomly chosen from
$[\frac{1}{6}\underline{p_c}, \frac{5}{6}\underline{p_c}]$.
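A sampling routine matching these defaults might look as follows. This Python sketch is illustrative: the level grid and the geometric pyramid weights are assumptions, since the paper does not fix them.

```python
import random

def sample_tenant(N=100, C=3):
    """One tenant under the default settings; the level grid and the
    halve-per-level pyramid weights are illustrative assumptions."""
    demand = [max(random.gauss(1.0 / N, 1.0 / N ** 2), 0.0) for _ in range(C)]
    l_high = random.uniform(2, 6)                    # l_n^h ~ U(2, 6)
    n_users = max(int(random.gauss(1e6, 1e5)), 1)    # |S_n| ~ N(1e6, 1e5)
    users = {0: int(0.4 * n_users)}                  # level 0: ~40% free users
    levels = range(1, int(l_high) + 1)
    weights = {l: 2.0 ** -l for l in levels}         # assumed pyramid weights
    rest = n_users - users[0]                        # paying users
    for l in levels:                                 # higher level, fewer users
        users[l] = int(rest * weights[l] / sum(weights.values()))
    return demand, l_high, users
```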
\begin{table}[htbp]
\begin{center}
\caption{\label{para}Default parameter settings.}
\begin{tabular}{c|c|c|c}
\toprule
{\textsf{\textbf{Parameter}}} & {\textsf{\textbf{Value}}} & {\textsf{\textbf{Parameter}}} & {\textsf{\textbf{Value}}}\\[+0.1mm]
\midrule
$N$ & $100$ & $C$ & $3$\\[+0.7mm]
$\{d_n^c\}_{\forall c \in \mathcal{C}}$ & $\sim N (\mu = \frac{1}{N}, \sigma = \frac{1}{N^2})$ & $l_n^h$ & $\sim U(2, 6)$ \\[+0.7mm]
$\mathcal{S}_n$ & $\sim N (\mu = 10^6, \sigma = 10^5)$ & $\Pr (l_n^0)$ & $\approx 40\%$ \\[+0.7mm]
$q_c$ & $\sim U(\frac{1}{6}\underline{p_c}, \frac{5}{6}\underline{p_c})$ & $\sigma_n$ & identity\\[+0.7mm]
\bottomrule
\end{tabular}
\end{center}
\end{table}
\textsf{DPoS} is compared with the following algorithms. Among them, \textit{CVX} and \textit{Heuristic} are used to approximate the
optimum of the offline problem $\mathcal{P}_1$. \textit{SCPA} \cite{ns-game} is a state-of-the-art auction-based algorithm. We also design the
online algorithms \textit{Myopic Slicing} and \textit{Random Slicing} as baselines.
\begin{itemize}
\item \textit{CVX} (offline \& centralized): This refers to the solver behind CVXPY\footnote{\url{https://www.cvxpy.org/}}. We use it as a professional solver to obtain
the approximately optimal solution of the global offline problem $\mathcal{P}_1$.
\item \textit{Heuristic} (offline \& centralized): We use a Genetic Algorithm (GA) to obtain an approximately optimal solution of $\mathcal{P}_1$.
\item \textit{SCPA} (offline \& decentralized) \cite{ns-game}: To adapt this algorithm to our model, we make some simple modifications. In this algorithm,
all the tenants and the MVNO get together, and the bids are the utilities. Specifically, in each bidding round, each tenant calculates its utility.
If the utility is positive, it sends $x_n = 1$ and $\{d_n^c\}_{\forall c \in \mathcal{C}}$ to the MVNO. The MVNO selects the tenant
that maximizes its own utility and accepts the transaction if the resource surplus is sufficient. All the remaining tenants are rejected. The
procedure ends when no tenant is willing to bid.
\item \textit{Myopic Slicing (MS)} (online \& decentralized): This algorithm is almost the same as \textsf{DPoS}, except for the pricing
functions, which are designed as follows:
$\forall c \in \mathcal{C}, \phi'_c (y) \triangleq \frac{\underline{p_c} + \overline{p_c}}{C} y$ when $y \leq 1$, and $+\infty$ otherwise.
\item \textit{Random Slicing (RS)} (online): Each time a new tenant arrives, $x_n$ is randomly set as $0$ or $1$. Note that if $x_n = 1$,
the resource surplus must be sufficient.
\end{itemize}
The following analysis is based on the average results of 1000 trials.
\subsection{Performance Verification}\label{sec5.2}
We first analyze the performance under different scales of tenants. As shown in Fig. \ref{fig-exp1}, all the offline algorithms
outperform the online algorithms, and CVX achieves the highest social welfare regardless of the number of tenants. In the following,
we simply take CVX as the optimal solution. It is interesting to find that both Heuristic and SCPA show a trend of performance
decline as the number of tenants increases. For Heuristic, as the solution space grows exponentially with the tenant size,
it becomes more difficult to find an approximately optimal solution under the constraints on iteration count and population size.
When the scale of tenants grows, the performance of all the online algorithms presents a rising trend. This is because the transaction
success rate increases (although not by as much) with scale under the well-designed pricing functions. Further, we find that
\textsf{DPoS} not only achieves the highest social welfare among all the online algorithms, but also performs \textit{close to the
offline optimum}. Specifically, we define the indicators $\alpha_{\textrm{\textit{CVX}}}$, $\alpha_{\textrm{\textit{heuristic}}}$, and
$\alpha_{\textrm{\textit{SCPA}}}$ as the ratios of the social welfare achieved by CVX, Heuristic, and SCPA to that of \textsf{DPoS},
respectively. From Fig. \ref{fig-exp1} we find that even in the worst case ($N = 500$), the gap between CVX and \textsf{DPoS} is
only $0.815\times$. This ratio is much better (lower) compared with previous work \cite{online-ns-auction}. Compared with the popular
offline Heuristic (GA), the gap is $0.390\times$ at the peak ($N = 200$). Compared with the state-of-the-art offline auction-based
algorithm SCPA \cite{ns-game}, the gap is $0.175\times$ at the peak ($N = 100$). Because of the performance downgrade of Heuristic and SCPA,
the ratios $\alpha_{\textrm{\textit{heuristic}}}$ and $\alpha_{\textrm{\textit{SCPA}}}$ show a tendency to increase first and then decrease.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt11.png}}
\caption{The social welfare achieved by each algorithm and the ratio of social welfare achieved by each offline algorithm to \textsf{DPoS},
under different number of tenants.}
\label{fig-exp1}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt12.png}}
\caption{Left $y$-axis: The average rental rate over 3 kinds of resources of Heuristic, SCPA, and DPoS. We do not draw the rental
rate of CVX because the value is close to $1$ under any circumstances.
Right $y$-axis: the comparison of $\alpha_{\textrm{\textit{CVX}}}$ and the theoretical worst-case competitive ratio $\alpha$.}
\label{fig-exp2}
\end{figure}
Fig. \ref{fig-exp2} demonstrates that Heuristic has a near-to-$1$ rental rate regardless of the number of tenants, while the rental
rates of SCPA and \textsf{DPoS} are much lower ($64.37\%$ and $69.89\%$ on average, respectively). However, from Fig. \ref{fig-exp1} we have concluded that the performance
of Heuristic is much inferior to the optimal, especially when $N$ is $500$. Thus, we can conclude that there is no \textit{linear} relationship between
the sum of net profits and the transaction success rate. In fact, this conclusion can also be drawn by observing the analytic form of the
social welfare defined in $\mathcal{P}_1$. Besides, the scale of tenants has no significant impact on the rental rate, whether for
the offline algorithms or for \textsf{DPoS}. Another interesting point is that under normal circumstances, the worst-case theoretical guarantee,
i.e., the competitive ratio calculated according to \eqref{w_c} and \eqref{worst-ratio}, is far from being reached in practice.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt21.png}}
\caption{The social welfare achieved by each algorithm and the ratio of social welfare achieved by each offline algorithm to \textsf{DPoS},
under different number of resource types.}
\label{fig-exp3}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt22.png}}
\caption{Left $y$-axis: The average rental rate over 3 kinds of resources of Heuristic, SCPA, and DPoS.
Right $y$-axis: the comparison of $\alpha_{\textrm{\textit{CVX}}}$ and the theoretical worst-case competitive ratio $\alpha$.}
\label{fig-exp4}
\end{figure}
\begin{table*}[htbp]
\begin{center}
\caption{\label{para2}Comparison of transferred data size and algorithm running time under default parameter settings.}
\label{tab3}
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
& {\textsf{\textbf{CVX}}} & {\textsf{\textbf{Heuristic}}} & {\textsf{\textbf{SCPA}}}
& {\textsf{\textbf{DPoS}}} & {\textsf{\textbf{MS}}} & {\textsf{\textbf{RS}}}\\[+0.1mm]
\midrule
\textsf{input form} & offline & offline & offline & online & online & online\\[+0.7mm]
\textsf{architecture} & centralized & centralized & decentralized & decentralized & decentralized & -\\[+0.7mm]
\textsf{transferred data size} & 4.16KB & 4.16KB & 4.16KB & 1.92KB & 1.92KB & -\\[+0.7mm]
\textsf{running time (normalized)} & 78.81 & 2172.35 & 24.43 & 1 & 0.93 & 0.48\\[+0.7mm]
$\alpha_{\textrm{\textit{CVX}}}$ & 1 & 1.199 & 1.189 & 1.578 & 2.04 & 2.47\\[+0.7mm]
\bottomrule
\end{tabular}
\end{center}
\end{table*}
In the following, we analyze the performance of \textsf{DPoS} under different scales of resource types $C$. From Fig. \ref{fig-exp3}, firstly,
we find that \textsf{DPoS} is still the best online algorithm and performs close to Heuristic and SCPA. When $C = 1$, \textsf{DPoS} can even
achieve near-to-offline-optimal performance. Secondly, all the algorithms except CVX show a downward trend as the number of resource types increases.
This is because each tenant has requirements on all the resource types, and the increase in resource types significantly
reduces the probability of the requirements being satisfied; consequently, the transaction success rate drops significantly. This phenomenon
can also be observed in Fig. \ref{fig-exp4}. In online scenarios, it is amplified by the randomness of the arrival sequence of tenants,
so the online algorithms degrade more. Even so, the advantage of \textsf{DPoS} is clear. In the worst case, i.e., when
$C = 9$, the ratio $\alpha_{\textrm{\textit{CVX}}}$ is $2.37$, which is still acceptable for an online algorithm. \textsf{DPoS} even outperforms
the offline algorithm Heuristic when $C$ is $5$ and $7$, by $18.00\%$ and $13.40\%$, respectively.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.2in]{figs/opt3.png}}
\caption{The ratio of social welfare achieved by \textsf{DPoS} to the optimal, CVX, under different scales of tenants and resource types.}
\label{fig-exp5}
\end{figure}
Fig. \ref{fig-exp5} demonstrates the overall impact of the scales of tenants and resource types on the performance of \textsf{DPoS}.
In general, the gap between \textsf{DPoS} and the offline optimum increases with the scale of the problem. When $C$ is $1$ and $N$ is $50$,
\textsf{DPoS} achieves exactly the offline optimum. When $C$ is $9$ and $N$ is $500$, the gap is the largest, reaching $1.57\times$.
Further, we find that the ratio grows faster with the number of resource types than with the number of tenants. We leave the design of resource-type-scalable
pricing functions as future work. Table \ref{tab3} compares all the algorithms from multiple angles, including the social welfare achieved,
the cross-agent communication data size, and the algorithm running time. The amount of data transferred by the decentralized online algorithms refers
to the data communicated between the tenants and the MVNO, while the amount transferred by the centralized algorithms is
all the data related to problem $\mathcal{P}_1$; each value is counted as 4 bytes. Note that we normalize the running time
of \textsf{DPoS} to $1$. We find that the superiority of CVX and Heuristic comes at the cost of a large computing-time overhead. By contrast,
\textsf{DPoS} strikes a satisfactory balance between overhead and performance.
In addition to the fourth row of Table \ref{tab3}, Fig. \ref{fig-add1} and Fig. \ref{fig-add2} also verify the linear
runtime of \textsf{DPoS} intuitively.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.2in]{figs/opt-add1.png}}
\caption{The runtime of each algorithm under different number of tenants.}
\label{fig-add1}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.2in]{figs/opt-add2.png}}
\caption{The runtime of each algorithm under different number of resource types.}
\label{fig-add2}
\end{figure}
\subsection{Sensitivity Analysis}\label{sec5.3}
In this subsection, we analyze the sensitivity of \textsf{DPoS} under different environment parameter settings.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt4.png}}
\caption{The social welfare achieved by each algorithm and the ratio of social welfare achieved by CVX and SCPA to \textsf{DPoS},
under different sampling of $\{d_n^c\}_{\forall c \in \mathcal{C}}$.}
\label{fig-exp6}
\end{figure}
Fig. \ref{fig-exp6} demonstrates the impact of tenants' resource requirements. The $x$-axis is the mean value $\mu$ of the Normal distribution
$\mathcal{N}(\mu, \sigma=\frac{1}{N^2})$, where $N$ is $100$. We find that when the resource requirements increase, the transaction success rate
decreases, which further decreases the social welfare achieved. Interestingly, the social welfare achieved by CVX also decreases
significantly when the tenants' resource requirements increase. This phenomenon indicates that the competition among tenants for resources
significantly reduces the feasible solution space. Even so, the ratio on social welfare is stable no matter how the resource requirements change.
Fig. \ref{fig-exp7} and Fig. \ref{fig-exp8} demonstrate the impact of $\{q_c\}_{\forall c \in \mathcal{C}}$ and $\{l_n\}_{\forall n \in \mathcal{N}}$.
The ratio on social welfare varies smoothly in both cases. Since their impacts are minor, we do not discuss them further.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt5.png}}
\caption{The social welfare achieved by each algorithm and the ratio of social welfare achieved by CVX and SCPA to \textsf{DPoS},
under different sampling of $\{q_c\}_{\forall c \in \mathcal{C}}$.}
\label{fig-exp7}
\end{figure}
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.5in]{figs/opt6.png}}
\caption{The social welfare achieved by each algorithm and the ratio of social welfare achieved by CVX and SCPA to \textsf{DPoS},
under different sampling of pay levels $\{l_n\}_{\forall n \in \mathcal{N}}$.}
\label{fig-exp8}
\end{figure}
All the experiment results in this subsection show the robustness of \textsf{DPoS}.
\section{Related Work}\label{sec6}
Network slicing is widely accepted as an architectural enabling technology for 5G by industry and standardization communities
\cite{5G-archi}\cite{ns-survey2}\cite{ns-survey3}\cite{ns-survey}. The idea is to \textit{slice} the physical resources of the mobile
networks into logical network functions, and orchestrate them to support diversified over-the-top services.
Previous works on network slicing mainly focus on the architectural aspects, while efficient resource allocation and sharing,
which has been identified as a key issue by the Next Generation Mobile Network (NGMN) alliance \cite{5G-archi2}, lags behind.
A number of studies have emerged in recent years to fill the gap, especially for mobile network slicing \cite{ran-slice}\cite{edge-slice2}
\cite{edge-slice} and core network slicing \cite{core-slice}\cite{core-add1}\cite{core-add3}. Overall, these works formulate
a non-convex combinatorial problem to maximize the utilities of the involved business players. Taking \cite{ran-slice} as an example,
the authors defined the utility according to the satisfaction of multiple slice resource demands (SRDs). They formulated the resource
sharing problem as a Mixed Integer Linear Program (MILP) and proposed a two-step (provisioning-and-deployment) approach to solve
it efficiently. Similarly, Caballero et al. proposed a dynamic resource allocation algorithm based on weighted proportional fairness,
also for RAN resources \cite{edge-slice2}. Based on this algorithm, they devised a practical approach with limited computation,
information, and handoff overheads. Further, the authors verified the approximate optimality of the approach with both theoretical proofs and
extensive simulations. In addition to the heuristics designed by the above-mentioned works, AI-based optimization has been gaining popularity.
For example, Yan et al. resorted to deep reinforcement learning (DRL) to formulate an intelligent resource scheduling strategy, iRSS, for 5G RAN
slicing \cite{DL}. They use deep neural networks to perform large-time-scale resource allocation, while a reinforcement learning
agent performs online resource scheduling to predict network states and dynamics. Likewise, the authors of \cite{resource-manage-DQN}
also designed a DRL-based algorithm, to perform cross-slice resource sharing.
In addition to these centralized and fine-tuned algorithms, a substantial body of literature designs network slicing algorithms based on
economic frameworks, especially auction-related mechanisms \cite{ns-business}\cite{nfv} \cite{nfv-offline}\cite{congest-game-slice}
\cite{ns-game}\cite{auction}. These algorithms are usually decentralized, easy to use, and simple in construction. In these works, the tenants
sequentially compete and bid for the network resources. Auction mechanisms are usually tightly integrated with dynamic pricing
and game models \cite{auction-or-posted}. For example, Wang et al. solved the joint efficiency and revenue maximization problem with a
varying-pricing policy \cite{s1}. They designed a decentralized algorithm, run by each player, to maximize the net social welfare. In \cite{ns-game},
the authors designed a non-cooperative game where each tenant reacts to the user allocations of the other tenants so as to maximize
its own utility selfishly. Existing works mainly resort to the Fisher market \cite{fisher2}, where strategic players anticipate the impact
of their bids. Besides, VCG-Kelly mechanisms and their derivatives \cite{vcg-kelly} are also popular for slice resource allocation and
sharing \cite{ns-game}\cite{online-ns-auction}. In Kelly's mechanism, the bidders bid for prices, and the resources are allocated to them
according to their bids. In the VCG mechanism, by contrast, the bids are the utilities of the involved players. We find that existing auction-based
works are mainly designed for offline markets, where all the tenants participate in the auction and bid for their interests sequentially.
Nevertheless, an online auction-based resource allocation algorithm has been proposed in \cite{online-ns-auction}. The authors model
the slicing resource allocation problem as an online winner determination problem, with the aim of maximizing the social welfare of the auction
participants. However, the algorithm proposed in \cite{online-ns-auction} is centralized: the bidding and
privacy-relevant information has to be collected by the MVNO.
Our work is based on the posted price mechanism \cite{posted-mechanism}, under the \textit{take-it-or-leave-it} principle.
Compared with fine-tuned heuristics and DRL-based works, our algorithm has fairly low complexity and is well suited to online network slicing
scenarios. Besides, unlike auction-based works, it does not require time-consuming repeated bidding between the tenants and the MVNO.
In addition, our algorithm provides each business player with an agent that can be deployed in a realistic online market directly, without any modification.
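The take-it-or-leave-it interaction at the core of the posted price mechanism can be sketched as follows. This is an illustrative sketch only: the exponential marginal pricing function below is a common choice in online primal-dual posted-price mechanisms and is an assumption here, not necessarily the exact function used by \textsf{DPoS}; the helper names (`make_price`, `posted_price_auction`) are hypothetical.

```python
# Sketch of a decentralized posted-price ("take-it-or-leave-it") online
# allocation loop. Assumption: an exponential marginal price in the utilized
# fraction, interpolating between a floor p_low and a cap p_high above the
# operating cost q -- a standard form in online primal-dual mechanisms.

def make_price(q, p_low, p_high):
    """Marginal price of one resource as a function of its utilized fraction w."""
    def price(w):
        return q + (p_low - q) * ((p_high - q) / (p_low - q)) ** w
    return price

def posted_price_auction(tenants, capacity, q, p_low, p_high):
    """tenants: online sequence of (value, per-resource demand) pairs."""
    C = len(capacity)
    used = [0.0] * C
    prices = [make_price(q[c], p_low[c], p_high[c]) for c in range(C)]
    welfare, accepted = 0.0, []
    for n, (value, demand) in enumerate(tenants):
        # The MVNO posts the current prices; each tenant decides locally,
        # so no private valuation ever leaves the tenant's agent.
        cost = sum(prices[c](used[c] / capacity[c]) * demand[c] for c in range(C))
        fits = all(used[c] + demand[c] <= capacity[c] for c in range(C))
        if fits and value >= cost:  # take it ...
            for c in range(C):
                used[c] += demand[c]
            welfare += value - sum(q[c] * demand[c] for c in range(C))
            accepted.append(n)
        # ... or leave it: only the yes/no decision is revealed to the MVNO.
    return welfare, accepted
```

Each tenant's decision uses only the posted prices and its private valuation, which is why no bidding or privacy-relevant information needs to be collected centrally.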
\section{Concluding Remarks}\label{sec7}
We presented a decentralized and low-complexity online slicing algorithm, \textsf{DPoS}, built on the primal-dual approach and the posted price
mechanism. Our goal was to address the high complexity, privacy leakage, and unrealistic offline settings of current network slicing algorithms.
We first presented the global offline social welfare maximization problem. Then, we relaxed the original combinatorial problem to a convex primal problem
and gave its dual. Based on the alternating updates of the primal and dual variables, \textsf{DPoS} maximizes the social welfare with an
$O \big( \max_{c \in \mathcal{C}} \{\ln \sum_{c' \in \mathcal{C}} (\overline{p_{c'}} - q_{c'}) - \ln (\underline{p_c} - q_c) \} \big)$ gap
in the worst case. By giving the decision-making power back to each player, \textsf{DPoS} stops privacy leakage at the source. This decentralized
property also removes the heavy burden of solving a centralized offline optimization problem, which is often of high complexity. In addition to
this efficiency, the competitive ratio of \textsf{DPoS} is optimal over all online algorithms. Extensive simulations further verify that
\textsf{DPoS} not only achieves close-to-offline-optimal performance, but also incurs much lower algorithmic overheads than the baseline algorithms.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work was partially supported by the National Science Foundation of China (No. U20A20173 and No. 61772461), the National Key
Research and Development Program of China (No. 2019YFD1101105) and Natural Science Foundation of Zhejiang Province (No. LR18F020003).
Schahram Dustdar's work is supported by the Zhejiang University Deqing Institute of Advanced Technology and Industrialization (ZDATI).
\bibliographystyle{IEEEtran}
\section{Nucleus-nucleus collisions\protect\footnote{Section editors: \'{E}milien Chapon, Pol-Bernard Gossiaux.
}}
\label{sec:aa}
\newcommand{\Big{\lbrack}}{\Big{\lbrack}}
\newcommand{\Big{\rbrack}}{\Big{\rbrack}}
\newcommand{\Big{(}}{\Big{(}}
\newcommand{\Big{)}}{\Big{)}}
\newcommand{\Big{\lbrace}}{\Big{\lbrace}}
\newcommand{\Big{\rbrace}}{\Big{\rbrace}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\Big{\vert}}{\Big{\vert}}
\newcommand{\Big{\rangle}}{\Big{\rangle}}
\newcommand{\Big{\langle}}{\Big{\langle}}
\newcommand{\lambda}{\lambda}
\newcommand{\ensuremath{R_{\textrm{AA}}}\xspace}{\ensuremath{R_{\textrm{AA}}}\xspace}
\newcommand{\ensuremath{v_{2}}\xspace}{\ensuremath{v_{2}}\xspace}
\newcommand{\ensuremath{v_{3}}\xspace}{\ensuremath{v_{3}}\xspace}
\newcommand{\ensuremath{N_{\rm part}}\xspace}{\ensuremath{N_{\rm part}}\xspace}
\subsection{Introduction and context}
\label{AAintro}
A hot and dense state of matter, the quark gluon plasma (QGP), is created in \rm{PbPb}\xspace collisions at the LHC. Quarkonia provide natural probes of its properties, since the heavy quarks are created early in the collision and, as bound states, they are sensitive to a large variety of initial- and final-state effects. The in-medium modification of the fundamental force between two static colour charges can be investigated via changes in the quarkonium spectroscopy. The heavier, more loosely bound states are typically expected to be the most suppressed in heavy-ion collisions. As the fireball cools, quarkonia can also be ``(re)generated'' through the recombination of individual heavy quarks and antiquarks in the medium. Such an effect is more pronounced for charmonia than for bottomonia, due to the larger number of \ensuremath{c\overline{c}}\xspace pairs present in the QGP.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\begin{axis}[
xscale=2.0,
yscale=1.5,
xtick={0,1,2,...,17},
xticklabels={ ,\ensuremath{\psi(2S)}\xspace, $\chi_{c2}$, $\chi_{c1}$, $\chi_{c0}$, \ensuremath{J/\psi}\xspace, $\eta_c$,
$\Upsilon^{\prime\prime}$, $\chi_{b2}^{\prime}$, $\chi_{b1}^{\prime}$,
$\chi_{b0}^{\prime}$, $\Upsilon^{\prime}$, $\chi_{b2}$, $\chi_{b1}$, $\chi_{b0}$, $\Upsilon$,
$\eta_b$},
xmin=0, xmax=17,
xlabel=Quarkonium states,
ylabel=Binding energy (GeV)]
\addplot [pattern color=gray!50, pattern=north east lines] coordinates {(0, 0.385) (17,0.385) (17,0.740) (0, 0.740)};
\addplot [draw=blue,fill=blue!30!white,
ybar,xtick=data,font=\scriptsize,
nodes near coords,
nodes near coords align={vertical}] coordinates {
(1,0.05)
(2,0.18)
(3,0.22)
(4,0.32)
(5,0.64)
(6,0.75)
(7,0.20)
(8,0.29)
(9,0.30)
(10,0.34)
(11,0.53)
(12,0.64)
(13,0.67)
(14,0.70)
(15,1.10)
(16,1.16)
};
\addplot [draw=red] coordinates {(0, 0.385) (17,0.385)};
\addplot [draw=red] coordinates {(0, 0.740) (17, 0.740)};
\node [font=\large,color=red] at (axis cs:2,0.55) {T$_{\rm peak}$};
\end{axis}
\end{tikzpicture}
\caption{\label{fig:quarkonia_states}
Binding energies of several quarkonium states in the vacuum~\cite{Satz:2005hx} along with an estimation for the QGP peak temperature, $T_{\rm peak}$, in \rm{PbPb}\xspace\ collisions at \ensuremath{\sqrt{s_{_{NN}}}} = 2.76~TeV~\cite{Adam:2015lda} (shaded area).}
\end{figure}
The quarkonium production in heavy-ion collisions has long been proposed to be directly related to the temperature of the produced QGP~\cite{Matsui:1986dk}. \cf{fig:quarkonia_states} illustrates the power of quarkonium spectroscopy in measuring the medium temperature, compared to conventional methods using the slope of photon yields at intermediate $\ensuremath{P_T}\xspace\approx 1$--4~GeV~\cite{dEnterria:2005jvq,Adare:2009qk}. The peak temperature from the thermal-photon spectrum strongly relies on the unknown QGP formation time assumed in the model estimations. The reported direct-photon spectra may also include a large contribution from meson bremsstrahlung~\cite{Linnyk:2015tha}. Some caveats should be kept in mind when using quarkonia as a QGP thermometer:
\begin{description}
\item (i) for a given family, the measurement of the relative quarkonium yields is supposed to be insensitive to initial-state effects, but a validation is desirable via measurements of the quarkonium modification versus rapidity, in order to scan over the fractional momentum $x$, in a controlled particle-multiplicity and collision-energy environment;
\item (ii) it is still an open question which temperature the quarkonium states probe: the peak temperature, the initial temperature, or some average temperature;
\item (iii) it is also under debate how important quarkonium break-up during the hadronic phase is;
\item (iv) for the charmonia, recombination competes with suppression in heavy-ion collisions, and these effects should be disentangled.
\end{description}
It is also useful to take the opposite approach: fix the temperature to a value favoured by other measurements, and test the quarkonium dynamics and interactions given these assumed thermodynamic properties of the bulk.
The main features of the nuclear modification factor (\ensuremath{R_{\textrm{AA}}}\xspace), the ratio of production in \ensuremath{AA}\xspace to \ensuremath{pp}\xspace\ accounting for the amount of nuclear overlap, for \ensuremath{J/\psi}\xspace mesons at the LHC are already well measured after ten years of operation. A smaller suppression relative to the lower RHIC collision energies has been found~\cite{Abelev:2012rv},
especially at low \ensuremath{P_T}\xspace (below approximately 5 GeV)~\cite{Abelev:2013ila,Adam:2016rdg}, and has been attributed to regeneration. A greater suppression is observed at higher \ensuremath{P_T}\xspace~\cite{Aaboud:2018quy,Sirunyan:2017isk}, consistent with stronger dissociation effects from the bigger, longer-lived QGP at the LHC. The behaviour of \ensuremath{R_{\textrm{AA}}}\xspace at high \ensuremath{P_T}\xspace at the LHC (up to 50\,GeV or beyond), on the other hand, is still uncertain. There are hints of an increasing trend in \ensuremath{R_{\textrm{AA}}}\xspace from ATLAS~\cite{Aaboud:2018quy} and CMS~\cite{Sirunyan:2017isk} with the Run-2 data. Data from the HL-LHC will allow one to improve the precision and push to even higher \ensuremath{P_T}\xspace~\cite{CMS-PAS-FTR-18-024}, confirming (or disproving) this trend, which could be related to parton energy loss~\cite{Arleo:2017ntr} or to jet quenching~\cite{CMS-PAS-HIN-19-007}. Further studies are needed, however, as the energy-loss phenomenology yields tensions with the hierarchy of nuclear modifications of ground and excited states~\cite{Makris:2019ttx}. A projection of the high-\ensuremath{P_T}\xspace reach from CMS is shown in \cf{fig:Jpsi_pTvsLumi}. %
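For reference, \ensuremath{R_{\textrm{AA}}}\xspace as used throughout this section follows the standard definition (the notation below is ours),
\begin{equation*}
\ensuremath{R_{\textrm{AA}}}\xspace = \frac{1}{\langle T_{\mathrm{AA}}\rangle}\,
\frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}\ensuremath{P_T}\xspace}{\mathrm{d}\sigma_{pp}/\mathrm{d}\ensuremath{P_T}\xspace},
\end{equation*}
where $\langle T_{\mathrm{AA}}\rangle$ is the average nuclear overlap function for the considered centrality class, such that $\ensuremath{R_{\textrm{AA}}}\xspace = 1$ in the absence of nuclear effects.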
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{figures/Jpsi_pTvsLumi.pdf}
\hspace{0.3cm}
\includegraphics[width=0.48\textwidth]{figures/Upsilon1S_pTvsLumi.pdf}
\caption{Prompt \ensuremath{J/\psi}\xspace (Left) and \ensuremath{\Upsilon}\xspace (Right) [$\ensuremath{P_T}\xspace^{low}$,$\ensuremath{P_T}\xspace^{up}$] boundaries for the highest \ensuremath{P_T}\xspace bin as a function of the luminosity in the CMS experiment~\cite{CMS-PAS-FTR-18-024}. The boundaries are chosen in such a way that the number of quarkonia in the bin for the corresponding luminosity equals the number of mesons found in the last \ensuremath{P_T}\xspace bin of the analysis with a luminosity of 368~\ensuremath{\mu\textrm{b}^{-1}}\xspace, as used for existing measurements~\cite{Sirunyan:2017isk,Sirunyan:2018nsz}, keeping the width of this last \ensuremath{P_T}\xspace bin fixed. The projection for the expected luminosity of 10~\ensuremath{\textrm{nb}^{-1}}\xspace, roughly matching the expectation for the HL-LHC (see \ct{tab:yrlumis}), is highlighted with dashed lines. [Figure from~\cite{CMS-PAS-FTR-18-024}]}
\label{fig:Jpsi_pTvsLumi}
\end{figure}
In particular, information about excited-quarkonium production in heavy-ion collisions is very limited at the moment~\cite{Sirunyan:2016znt,Sirunyan:2018nsz,Aaboud:2018quy,Acharya:2018mni,Adam:2015sia}, and further data on the excited states will be crucial. Given the large feed-down contributions, corresponding to about half of the \ensuremath{\Upsilon}\xspace yield at large \ensuremath{P_T}\xspace for instance~\cite{Lansberg:2019adr,Andronic:2015wma,Aaij:2014caa}, the yield of excited states is difficult to determine and this directly reflects on the precision of the model predictions for \ensuremath{J/\psi}\xspace or \ensuremath{\Upsilon}\xspace. In addition, the charmonium ground state, \ensuremath{\eta_c}\xspace, remains unmeasured.
Anisotropic pressure gradients in the QGP, which are a consequence of the non-spherical (elliptic, to first order) shape of the overlap region between the colliding nuclei, induce anisotropies in the azimuthal distribution of the final particle momenta, including the so-called elliptic flow. This flow is characterised by the second-order coefficient \ensuremath{v_{2}}\xspace of the Fourier expansion of this distribution. The \ensuremath{v_{2}}\xspace of \ensuremath{J/\psi}\xspace mesons has been measured to be non-zero~\cite{Aaboud:2018ttm,Acharya:2020jil}, which is qualitatively well reproduced by transport models at low \ensuremath{P_T}\xspace, where the \ensuremath{J/\psi}\xspace \ensuremath{v_{2}}\xspace relates to the thermalisation of the charm quarks and their interaction with the hydrodynamic expansion of the medium. However, at higher \ensuremath{P_T}\xspace, the agreement is not good. As shown in \cf{fig:projvtwo}, a precise measurement of \ensuremath{J/\psi}\xspace \ensuremath{v_{2}}\xspace up to higher \ensuremath{P_T}\xspace, complementary to $\ensuremath{R_{\textrm{AA}}}\xspace(\ensuremath{P_T}\xspace)$, will be instrumental in understanding the charmonium-production mechanisms in heavy-ion collisions. It will allow their behaviour at high \ensuremath{P_T}\xspace to be related to the energy loss via a path-length-dependence effect, by distinguishing between the long and short axes of the ellipsoid-like medium. Higher orders of the anisotropic flow of quarkonia are becoming accessible, such as \ensuremath{v_{3}}\xspace~\cite{Acharya:2020jil}, and more precise measurements of these will provide further information about quarkonium production and transport at low \ensuremath{P_T}\xspace.
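For completeness, the azimuthal distribution mentioned above is conventionally written as the Fourier series
\begin{equation*}
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\big[n\,(\varphi - \Psi_n)\big],
\end{equation*}
where $\Psi_n$ is the $n$-th order symmetry-plane angle; \ensuremath{v_{2}}\xspace is the $n=2$ (elliptic) coefficient and \ensuremath{v_{3}}\xspace the $n=3$ (triangular) one.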
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.6\textwidth]{figures/canvasFlow.pdf}
\caption{Projections for the $v_2$ coefficient as a function of $\ensuremath{P_T}\xspace$ for the \ensuremath{J/\psi}\xspace, \ensuremath{\Upsilon}\xspace and \ensuremath{\Upsilon(2S)}\xspace mesons in \rm{PbPb}\xspace\ collisions at $\ensuremath{\sqrt{s_{_{NN}}}}~=~5.02$~TeV in the ALICE experiment, assuming the predictions from the transport model of~\cite{Du:2017qkv} and compared to an alternative model~\cite{Bhaduri:2018iwr}. [Figure from~\cite{Citron:2018lsq}]}
\label{fig:projvtwo}
\end{figure}
Measurements of charmonium states other than \ensuremath{J/\psi}\xspace are currently limited to the \ensuremath{\psi(2S)}\xspace meson, and these have poor precision. The production of \ensuremath{\psi(2S)}\xspace mesons is found to be much more suppressed than that of \ensuremath{J/\psi}\xspace, even at relatively high \ensuremath{P_T}\xspace (up to 30\,GeV)~\cite{Sirunyan:2016znt,Aaboud:2018quy}. Data from the HL-LHC will help to better understand \ensuremath{\psi(2S)}\xspace production in \rm{PbPb}\xspace collisions, though a \ensuremath{v_{2}}\xspace measurement may remain challenging. In particular, a precise measurement of the \ensuremath{\psi(2S)}\xspace/\ensuremath{J/\psi}\xspace ratio (\cf{fig:psi2spsi_aa}) will test the validity of the statistical hadronisation model~\cite{Andronic:2017pug} against dynamical models~\cite{Du:2015wha}.
The measurement of the $P$-wave states, such as \ensuremath{\chi_c}\xspace, would help complete the picture, but it is experimentally challenging due to the difficulty of reconstructing very-low-\ensuremath{P_T}\xspace photons %
in a heavy-ion environment. A possible option would be to look at $\chi_c \to \ensuremath{J/\psi}\xspace \mu\mu$ decays.
Similarly, the \ensuremath{\eta_c}\xspace states only have large branching fractions to hadrons,
usually with a rather large number of final-state particles, and thus will be very challenging to measure in heavy-ion collisions.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/psi2s2psi_Npart_pbpb502_y3.pdf}
\caption{Ratio of the $\ensuremath{\psi(2S)}\xspace / \ensuremath{J/\psi}\xspace$ production yields vs. \ensuremath{N_{\rm part}}\xspace in \rm{PbPb}\xspace\ collisions measured by ALICE over $2.5<y<4$~\cite{Abelevetal:2014cna,CERN-LHCC-2013-014}. Model predictions in the transport approach~\cite{Du:2015wha} and from the statistical hadronisation~\cite{Andronic:2017pug} are included. The values of the ratio used for the projections are quasi-arbitrary. [Figure from~\cite{Citron:2018lsq}]}
\label{fig:psi2spsi_aa}
\end{figure}
Due to the much smaller number of \ensuremath{b\overline{b}}\xspace pairs produced in \rm{PbPb}\xspace collisions compared to \ensuremath{c\overline{c}}\xspace pairs, regeneration is thought to play a much smaller role in the dynamical models for bottomonia than for charmonia, though it may still be significant for the strongly suppressed excited states~\cite{Du:2017qkv,Krouppa:2017jlg}. Indeed, \ensuremath{R_{\textrm{AA}}}\xspace for the \ensuremath{\Upsilon}\xspace states does not feature a significant \ensuremath{P_T}\xspace or rapidity dependence~\cite{Sirunyan:2018nsz}. In addition, there is a strong centrality dependence (with smaller \ensuremath{R_{\textrm{AA}}}\xspace in central collisions), as well as a sequential ordering in the suppression of the different states, with the \ensuremath{\Upsilon(3S)}\xspace being so suppressed that it has not yet been measured significantly in heavy-ion collisions. Data from the HL-LHC will provide higher-precision measurements (\cf{fig:upsilon_aa}), up to high \ensuremath{P_T}\xspace, enabling better model discrimination: smaller uncertainties may reveal structures in \ensuremath{R_{\textrm{AA}}}\xspace, which appears flat as a function of \ensuremath{P_T}\xspace with the current experimental precision. The \ensuremath{v_{2}}\xspace of \ensuremath{\Upsilon}\xspace mesons in \rm{PbPb}\xspace collisions has recently been measured for the first time~\cite{Sirunyan:2020qec,Acharya:2019hlv}, though with limited precision. Such studies will be continued at the HL-LHC. Finally, comments similar to those made above for their charmonium counterparts apply to the \ensuremath{\chi_b}\xspace (and \ensuremath{\eta_b}\xspace) mesons.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figures/Projection_RAA_vs_pt_wo3S.pdf}
\caption{Projected \ensuremath{R_{\textrm{AA}}}\xspace as a function of \ensuremath{P_T}\xspace for \ensuremath{\Upsilon}\xspace and \ensuremath{\Upsilon(2S)}\xspace yields expected at the CMS~\cite{CMS-PAS-FTR-18-024} experiment, with $10~\ensuremath{\textrm{nb}^{-1}}\xspace$ of \rm{PbPb}\xspace data and $650~\ensuremath{\textrm{pb}^{-1}}\xspace$ of reference \ensuremath{pp}\xspace data. [Figure from~\cite{CMS-PAS-FTR-18-024}]
%
}
\label{fig:upsilon_aa}
\end{figure}
The possible complications due to the different production mechanisms in \ensuremath{AA}\xspace collisions (the suppression of the direct production, and the recombination from correlated pairs or the regeneration in the plasma from uncorrelated pairs) could be circumvented by comparing data at several collision energies, {i.e.}\xspace running at lower energies than the nominal one. This is one of the advantages of taking data in the FT mode~\cite{Hadjidakis:2018ifr}, as mentioned in Section~\ref{sec:energy_dependence}.
A running scenario for the HL-LHC, starting with Run~3 (2021) and the major upgrades of the ALICE and LHCb detectors for heavy ions, has been proposed in the corresponding CERN Yellow Report (Working Group 5)~\cite{Citron:2018lsq} and is given in \ct{tab:yrlumis}. The listed integrated luminosities have not yet been endorsed by the experiments, the LHC, or CERN. They are reported indicatively, in order to give the reader the order of magnitude of what can be expected. The interested reader may refer to~\cite{Bruce:2722753} for a recent study of the expected future performance of the LHC with heavy-ion beams. There is also an interest in running with ion beams in Run~5 and beyond, possibly with lighter ions (such as argon or krypton), to reach higher luminosities and reduce combinatorial background, giving access to rarer states such as $\ensuremath{\chi_b}\xspace (1P)$. The ALICE~\cite{Adamova:2019vkf} and LHCb~\cite{Bediaga:2018lhg} experiments have started planning upgrades supporting this scenario.
\begin{table}[h!]
\begin{center}\renewcommand{\arraystretch}{1.5}
\begin{tabular}{llll}\hline
Year & Systems, $\ensuremath{\sqrt{s_{_{NN}}}}$ & Duration & $\ensuremath{\int \mathcal L}\xspace$ \\
\hline \hline
2021 & \rm{PbPb}\xspace 5.5~TeV & 3 weeks & $2.3~\ensuremath{\textrm{nb}^{-1}}\xspace$ \\
& \ensuremath{pp}\xspace\ 5.5~TeV & 1 week & $3~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE), $300~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS), $25~\ensuremath{\textrm{pb}^{-1}}\xspace$ (LHCb) \\
\hline
2022 & \rm{PbPb}\xspace 5.5~TeV & 5 weeks & $3.9~\ensuremath{\textrm{nb}^{-1}}\xspace$ \\
& OO, $p$O & 1 week & $500~{\rm \mu b}^{-1}$ and $200~{\rm \mu b}^{-1}$ \\
\hline
2023 & $p\mathrm{Pb}$\ 8.8~TeV & 3 weeks & $0.6~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS), $0.3~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE, LHCb) \\
& \ensuremath{pp}\xspace\ 8.8~TeV & few days & $1.5~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE), $100~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS, LHCb) \\
\hline
2027 & \rm{PbPb}\xspace 5.5~TeV & 5 weeks & $3.8~\ensuremath{\textrm{nb}^{-1}}\xspace$ \\
& \ensuremath{pp}\xspace\ 5.5~TeV & 1 week & $3~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE), $300~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS), $25~\ensuremath{\textrm{pb}^{-1}}\xspace$ (LHCb) \\
\hline
2028 & $p\mathrm{Pb}$\ 8.8~TeV & 3 weeks & $0.6~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS), $0.3~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE, LHCb) \\
& \ensuremath{pp}\xspace\ 8.8~TeV & few days & $1.5~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ALICE), $100~\ensuremath{\textrm{pb}^{-1}}\xspace$ (ATLAS, CMS, LHCb) \\
\hline
2029 & \rm{PbPb}\xspace 5.5~TeV & 4 weeks & $3~\ensuremath{\textrm{nb}^{-1}}\xspace$ \\
\hline \hline
Run-5 & Intermediate \ensuremath{AA}\xspace & 11 weeks & {e.g.}\xspace\ \,Ar--Ar 3--9~$\ensuremath{\textrm{pb}^{-1}}\xspace$ (optimal species to be defined) \\
& \ensuremath{pp}\xspace\ reference & 1 week & \\
\hline
\end{tabular}
\end{center}
\caption{Indicative running scenarios for different heavy-ion runs at the HL-LHC, with the expected integrated luminosity, as proposed in the CERN Yellow Report~\cite{Citron:2018lsq} (but subject to review by the experiments, LHC, and CERN). The years in the table do not account for modifications of the schedule after the publication of the report, which include a delay of the start of Run~4 (2027 $\to$ 2028) and a delay of the start of Run~3 because of the COVID-19 pandemic (2021 $\to$ 2022).}
\label{tab:yrlumis}
\end{table}
Further discussion of the physics case and prospects for quarkonium measurements in heavy-ion collisions at the HL-LHC is given in Section~7 of the aforementioned CERN Yellow Report~\cite{Citron:2018lsq}. A few selected topics are discussed below.
\subsection{Recent theory developments}
\label{subsection_rec_th_dev}
Although some concrete predictions for quarkonium yields can be made by assuming that they are produced according to the laws of statistical physics at the pseudo-transition temperature, most current approaches attempt to implement the suppression and (re)generation of quarkonia, advocated in the introduction, through dedicated dynamical transport models. While early approaches were formulated in terms of kinetic equations using dissociation rates and cross sections obtained from pQCD calculations or effective models, more recent developments have profited from the concept of imaginary potentials which, in approaches such as potential NRQCD (pNRQCD), bridges the coefficients used in the transport models and lattice QCD (lQCD) calculations. These new developments enable firmer links between the experimental observables, to be measured with better precision in HL-LHC runs, and the fundamental properties of the in-medium $Q\bar{Q}$ interaction.
In parallel, the formalism of open quantum systems (OQS) is nowadays considered, by an increasing part of the theoretical community, as the emerging paradigm that should either supersede semi-classical approaches or, at least, provide the methods to generate corrections to them. This prospect is particularly appealing for the HL-LHC runs as well, as the experimental precision needs to be matched by increased control of the theoretical models. In Section~\ref{AAtransportOQS}, we describe the progress achieved with the OQS formalism as well as its links to Boltzmann transport.
Among the various theoretical challenges, the correct quantum treatment of the regeneration of low-\ensuremath{P_T}\xspace charmonia, due to the numerous $c\bar{c}$ pairs, is a key question for the most central \rm{PbPb}\xspace collisions at LHC energies and beyond. In Section~\ref{AAtransportdensityop}, we discuss a recent approach stemming from a direct reduction of the von Neumann equation, which provides an alternative to semi-classical algorithms such as those deduced in~\cite{Blaizot:2017ypk,Yao:2018nmy} and yields preliminary predictions for the \ensuremath{R_{\textrm{AA}}}\xspace and \ensuremath{v_{2}}\xspace of the \ensuremath{J/\psi}\xspace.
The production of quarkonia at high \ensuremath{P_T}\xspace is another key issue in the global landscape, as the few available \ensuremath{v_{2}}\xspace predictions from transport models fail to reproduce the experimental data at intermediate \ensuremath{P_T}\xspace, leaving the door open for other mechanisms, such as energy loss of the $Q\bar{Q}$ pair. However, modelling such a situation requires knowledge of the in-medium interaction of a $Q\bar{Q}$ pair at finite velocity, which is still not fully known. In Section~\ref{AAtransportEFT}, we summarise the recent progress made in one of the state-of-the-art approaches.
\subsubsection{Semi-classical transport and open quantum systems}
\label{AAtransportOQS}
Modern phenomenological studies of quarkonium production in heavy-ion collisions require a consistent treatment of static screening, dissociation and recombination among the hot-medium effects. Semi-classical transport equations, such as the Boltzmann equation and the rate equation (obtained from the Boltzmann equation by integrating the quarkonium distribution over phase space), have been applied widely and shown to be phenomenologically successful~\cite{Grandchamp:2003uw,Grandchamp:2005yw,Yan:2006ve,Liu:2009nb,Song:2011xi,Song:2011nu,Sharma:2012dy,Nendzig:2014qka,Krouppa:2015yoa,Chen:2017duy,Zhao:2017yan,Du:2017qkv,Aronson:2017ymv,Ferreiro:2018wbd,Yao:2018zrg,Hong:2019ade,Chen:2019qzx,Yao:2020xzw}.
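As an illustration, the rate equation mentioned above is commonly written, schematically and up to the conventions of the kinetic approaches cited here, as a relaxation of the quarkonium abundance towards its equilibrium limit:
\begin{equation}
\frac{dN_{\pazocal{Q}}}{d\tau} = -\Gamma_{\pazocal{Q}}(T)\left[N_{\pazocal{Q}}(\tau) - N_{\pazocal{Q}}^{\textrm{eq}}(T)\right],
\end{equation}
where $\Gamma_{\pazocal{Q}}$ is the in-medium dissociation rate and $N_{\pazocal{Q}}^{\textrm{eq}}$ the equilibrium abundance; the loss term encodes dissociation, while the gain term, driven by $N_{\pazocal{Q}}^{\textrm{eq}}$, encodes (re)generation.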
Such a phenomenological success of the semi-classical Boltzmann equation has been explained by deriving the transport equation under systematic expansions that are closely related to a hierarchy of scales $M\gg Mv \gg Mv^2 \gtrsim T$~\cite{Yao:2018nmy,Yao:2020eqy}, where $M$ is the heavy-quark mass, $v$ the typical relative velocity between the heavy quark-antiquark pair in a quarkonium state and $T$ the temperature of the plasma. Under this separation of scales, pNRQCD~\cite{Brambilla:1999xf,Brambilla:2004jw,Fleming:2005pd} can be used to simplify the calculations. The starting point of the derivation is the OQS formalism that has recently been used to study quarkonium transport~\cite{Young:2010jq,Borghini:2011ms,Akamatsu:2011se,Akamatsu:2014qsa,Blaizot:2015hya,Katz:2015qja,Brambilla:2016wgg,Brambilla:2017zei,Kajimoto:2017rel,DeBoni:2017ocl,Blaizot:2017ypk,Blaizot:2018oev,Akamatsu:2018xim,Miura:2019ssi,Sharma:2019xum,Brambilla:2019tpt}. In this formalism, the quarkonium is an open subsystem interacting with the thermal QGP. Integrating out the degrees of freedom of the thermal bath results in a non-unitary and time-irreversible evolution equation, which further leads to a Lindblad equation in the Markovian approximation. The Lindblad equation can be shown to become the semi-classical Boltzmann equation in the semi-classical limits after a Wigner transform is applied to the subsystem density matrix (a Gaussian smearing is required to maintain positivity). The Markovian approximation can be justified when the subsystem is weakly interacting with the thermal bath, which is true in our power counting since the quarkonium size is small $rT\sim\frac{T}{Mv} \lesssim v \ll 1$. A schematic diagram of the various approximations and resulting equations is shown in Fig.~\ref{fig:lindblad_transport}. The derivation clearly demonstrates the validity condition of the semi-classical transport equations. 
Essentially, what should be preserved is the non-relativistic nature of the heavy quarks from the QGP viewpoint. As the \ensuremath{P_T}\xspace\ of the quarkonium increases, the energy of the medium excitations in the rest frame of the quarkonium is boosted, which could spoil the non-relativistic expansion. Furthermore, the derivation provides a way of systematically including quantum corrections to the semi-classical transport equation. Alternatively, one can directly solve the Lindblad equation for the quarkonium phenomenology, which is in principle a fully {quantum} treatment. Improvements towards such a full quantum phenomenological treatment are still needed.
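For reference, the Lindblad equation mentioned above takes the standard form
\begin{equation}
\frac{d\rho_S}{dt} = -i\left[H,\rho_S\right] + \sum_{k}\left( L_k\, \rho_S\, L_k^{\dagger} - \frac{1}{2}\left\{ L_k^{\dagger} L_k , \rho_S \right\} \right),
\end{equation}
where $\rho_S$ is the density matrix of the heavy quark-antiquark subsystem, $H$ its in-medium Hamiltonian, and the Lindblad operators $L_k$ encode the (Markovian) interaction with the thermal bath; the Wigner transform of $\rho_S$ is what reduces, in the semi-classical limit, to the Boltzmann distribution function.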
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/schema_quarkonium_transport.pdf}
\caption{Various approximations made, and evolution equations obtained, in the derivation of the semi-classical transport equation for quarkonium from the von Neumann equation for closed quantum systems.}
\label{fig:lindblad_transport}
\end{figure}
To explore the limits of the semi-classical transport approaches and find solid experimental evidence of quantum effects in quarkonium transport, experimental data with high precision are needed. For example, precise measurements of the azimuthal angular anisotropy and of the \ensuremath{R_{\textrm{AA}}}\xspace of excited quarkonium states such as $\ensuremath{\chi_b}\xspace (1P)$ and \ensuremath{\Upsilon(3S)}\xspace will be very helpful in distinguishing different semi-classical transport calculations. A precise measurement of the \ensuremath{R_{\textrm{AA}}}\xspace of $\ensuremath{\chi_b}\xspace (1P)$ is of particular interest~\cite{Yao:2020xzw}: if dissociation were the only hot-medium effect, one would expect the \ensuremath{R_{\textrm{AA}}}\xspace {of} \ensuremath{\Upsilon(2S)}\xspace and of $\ensuremath{\chi_b}\xspace (1P)$ to be similar (with the value for $\ensuremath{\chi_b}\xspace (1P)$ slightly higher)~\cite{Krouppa:2015yoa}, since the binding energies and sizes of the two states are comparable. However, it is known from recent studies using the OQS formalism that dissociation is a result of wave-function decoherence. Due to the decoherence of the original state, say \ensuremath{\Upsilon(2S)}\xspace, a non-vanishing overlap can develop with other states that exist in the medium ({i.e.}\xspace\ states whose melting temperature lies above the local temperature), say $\ensuremath{\chi_b}\xspace (1P)$. This gives a non-zero probability of forming another quarkonium state from a dissociating one. This recombination process (known as ``correlated recombination''~\cite{Yao:2020xzw}) involves a heavy quark-antiquark pair from the same hard vertex (a dissociating quarkonium) and is different from the traditional concept of recombination, which involves heavy quarks and antiquarks initially produced at different hard vertices. The existence of such a correlated recombination is
mandatory for theory consistency and is well-motivated from OQS studies. With correlated recombination, an initial \ensuremath{\Upsilon(2S)}\xspace state may dissociate first and then recombine as a $\ensuremath{\chi_b}\xspace (1P)$ state and vice versa. The probabilities of both these processes are similar since \ensuremath{\Upsilon(2S)}\xspace and $\ensuremath{\chi_b}\xspace (1P)$ have similar binding energies and sizes. However, primordially many more $\ensuremath{\chi_b}\xspace (1P)$ mesons are produced,
which leads to less suppressed yields of the \ensuremath{\Upsilon(2S)}\xspace than of the $\ensuremath{\chi_b}\xspace (1P)$ state. Consequently, it is of great interest to measure the ratio between the \ensuremath{R_{\textrm{AA}}}\xspace suppression factors of $\ensuremath{\chi_b}\xspace (1P)$ and \ensuremath{\Upsilon(2S)}\xspace. Calculations that include correlated recombination (which requires some information on the two-particle distribution function of the heavy quark-antiquark pairs), such as~\cite{Yao:2020xzw}, predict the ratio to be about $1/3$ in central collisions, while calculations such as~\cite{Krouppa:2015yoa}, which do not include correlated recombination, give a ratio larger than unity. The contrast is dramatic (for example, compare Fig.~1 of~\cite{Krouppa:2015yoa} and Fig.~7 of~\cite{Yao:2020xzw}). If this ratio can be measured in future experiments, its power for discriminating between models will be high. The ratio calculated in~\cite{Yao:2020xzw} also depends on \ensuremath{P_T}\xspace, and approaches unity as \ensuremath{P_T}\xspace increases, as shown in Fig.~\ref{fig:2Svs1P}.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/2Svs1P_ratio_pT.pdf}
\caption{Ratio of the \ensuremath{R_{\textrm{AA}}}\xspace of $\ensuremath{\chi_b}\xspace (1P)$ to that of \ensuremath{\Upsilon(2S)}\xspace as a function of $\ensuremath{P_T}\xspace$, as predicted in~\cite{Yao:2020xzw}. Different lines correspond to different choices of parameters. Nuclear PDF effects largely cancel in the ratio.}
\label{fig:2Svs1P}
\end{figure}
The correlated recombination discussed above is not an intrinsic quantum effect and can be accounted for in semi-classical transport equation calculations. With high precision data, it may also be possible to find a common inconsistency between all semi-classical approaches and experimental data, which then hints at the importance of quantum effects in quarkonium in-medium dynamics. In view of the discussion in Section~\ref{sec:pa_cnm} on nPDF effects, one expects these to largely cancel in such a ratio and, in any case, to be negligible compared to the variation shown in \cf{fig:2Svs1P}.
\subsubsection{A density-operator model for $\pazocal{Q}$ production in \ensuremath{AA}\xspace collisions}
\label{AAtransportdensityop}
In this section, a newly developed model is introduced that aims to offer a better understanding of heavy-quarkonium formation in the presence of many $Q\bar{Q}$ pairs (that is, under LHC conditions), while taking an approach different from the existing ones. In this model, the formation of heavy quarkonia is conceived as a coalescence in phase space, based on composite-particle cross sections, following Remler's formalism~\cite{GYULASSY:83,REMLER:81}, which is directly deduced from the von Neumann equation.
By computing an effective production rate (including both dissociation and recombination processes), the model is able to keep track of the inclusive formation probability of the quarkonium system with time, taking into account the heavy-quark kinematics and their interaction with medium particles. For a given $\{Q,\bar{Q}\}$ pair $\{1,2\}$, the contribution to the rate is
\begin{equation}
\Gamma(t)=\sum_{i=1,2}\sum_{j\geqslant 3}\delta(t-t_{ij}(\nu))[W_{Q\overline{Q}}(t+\epsilon)-W_{Q\overline{Q}}(t-\epsilon)],
\end{equation}
where the sum $j$ reflects the sum over all particles from the bulk, while the $\delta$ factor only acts when one of the members of a given pair ($i=1$ or $i=2$) undergoes a collision with a particle ($j\geqslant 3$). %
The rate expression relies on the Wigner distribution of the quarkonium vacuum states, through the gain $W_{Q\overline{Q}}(t+\epsilon)$ and loss $W_{Q\overline{Q}}(t-\epsilon)$ terms. While the expression only shows the rate contribution from one pair, it can easily be extended to all pairs in the medium by summing over all combinations. In this way, both recombination and dissociation are taken into account inside both the gain and loss terms, which represent the overall contribution of the recombination and dissociation at any given time. The time evolution of the probability $P$ for a quarkonium state formation thereby follows:
\begin{equation}
P(t)=P(t_{0})+\int_{t_{0}}^{t} \Gamma(t')\,dt'\,.
\end{equation}
In this approach, the heavy quarks do not need to be considered at thermal (or chemical) equilibrium at any stage of the collision. This feature makes it possible to apply it, not only to large systems like \ensuremath{AA}\xspace collisions, but to small systems as well. It also offers concrete perspectives to deal with several particles ($Q\overline{Q}$ pairs) in real-time dynamics. Finally, the model is also quite sensitive to key ingredients of the quarkonium production such as: the primordial, or initial, $Q\overline{Q}$ pair production (with cold nuclear matter effects); the $Q\overline{Q}$ interaction; and the local medium temperature field which is modelled according to the EPOS event generator~\cite{Werner:2013tya} in the present implementation.
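For completeness, the Wigner distribution entering the gain and loss terms above is the standard phase-space transform of the quarkonium vacuum wave function $\phi$ in the $Q\overline{Q}$ relative coordinate,
\begin{equation}
W_{Q\overline{Q}}(\mathbf{r},\mathbf{p}) = \int d^{3}y \; e^{-i\,\mathbf{p}\cdot\mathbf{y}}\; \phi\!\left(\mathbf{r}+\frac{\mathbf{y}}{2}\right) \phi^{*}\!\left(\mathbf{r}-\frac{\mathbf{y}}{2}\right),
\end{equation}
evaluated on the heavy-quark phase-space coordinates just after ($t+\epsilon$) and just before ($t-\epsilon$) a scattering.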
In \cf{fig:Remler}, preliminary predictions are provided for \ensuremath{R_{\textrm{AA}}}\xspace and \ensuremath{v_{2}}\xspace as a function of the \ensuremath{J/\psi}\xspace $\ensuremath{P_T}\xspace$ obtained within this operator model, both with and without screened binding interactions.
\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/RemlerRAA.pdf}
\hspace{0.2cm}
\includegraphics[width= 0.47\textwidth]{figures/Remlerv2.pdf}
\caption{Predictions for the \ensuremath{R_{\textrm{AA}}}\xspace (Left) and \ensuremath{v_{2}}\xspace (Right) {of} the \ensuremath{J/\psi}\xspace from the density-operator model for \rm{PbPb}\xspace\ collisions at \ensuremath{\sqrt{s_{_{NN}}}} = 5~TeV. The solid lines correspond to \ensuremath{c\overline{c}}\xspace interactions modelled with a screened binding potential, while the dashed curves correspond to $c$ and $\bar{c}$ quarks interacting solely with the light quarks and gluons through elastic scatterings. [The observed oscillations result from numerical fluctuations.]
}
\label{fig:Remler}
\end{figure}
In the future, both lQCD calculations and HL-LHC data on the \ensuremath{R_{\textrm{AA}}}\xspace and \ensuremath{v_{2}}\xspace of the \ensuremath{J/\psi}\xspace will be brought together to constrain the interactions among $Q$, $\bar{Q}$, and medium partons, and then to explore in detail the consequences of the model for excited states and higher harmonics such as $v_3$, which are starting to become accessible experimentally~\cite{Acharya:2018pjd}.
\subsubsection{An advanced EFT for $\pazocal{Q}$ in matter}
\label{AAtransportEFT}
In recent years, significant progress in understanding subatomic particle propagation in matter has been reached using modern effective field theories (EFTs). First developed for light partons~\cite{Idilbi:2008vm,Ovanesyan:2011xy,Fickinger:2013xwa}, the Soft-Collinear Effective Theory with Glauber gluons (SCET$_{\rm G}$) was applied to describe the suppression of inclusive hadrons and jets as well as the modification of jet substructure~\cite{Chien:2015hda,Chien:2015vja,Kang:2017frl}. This approach was subsequently extended to open heavy-flavour~\cite{Kang:2016ofv,Sievert:2019cwq} to understand the production of $D$ mesons, $B$ mesons, and heavy-flavour-tagged jets~\cite{Li:2017wwc,Li:2018xuv}. A logical next step is to start with the theory of quarkonium production, %
NRQCD and its modern
formulations~\cite{Bodwin:1994jh,Brambilla:1999xf,Luke:1999kz}, and to introduce interactions with the background nuclear medium~\cite{Makris:2019ttx}.
The hierarchy of ground and excited quarkonium suppression emphasises the need for such an EFT approach. In order for the traditional energy-loss phenomenology~\cite{Arleo:2017ntr} to contribute significantly to the modification of quarkonium cross sections in QCD matter, the quarkonium formation must happen outside of the medium and be expressed as the fragmentation of partons into the various \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace states.
This is possible in the recently developed Leading-Power (LP) factorisation approach to QCD~\cite{Bodwin:2014gia} (see also Section~\ref{sec:pp}) although we stress that it is only in the high-\ensuremath{P_T}\xspace range that this LP factorisation is thought to work well.
\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/LHC5.1-010ElossVSdissocionATLAS}
\hspace{0.2cm}
\includegraphics[width=0.47\textwidth]{figures/LHC5.02TeVPStoJCMSptDissVSEloss}
\caption{Left: Suppression of \ensuremath{J/\psi}\xspace production in central \rm{PbPb}\xspace\ collisions measured by ATLAS, compared to the energy-loss (yellow) and EFT (blue) quarkonium-dissociation calculations. Right: Double ratio of the \ensuremath{\psi(2S)}\xspace to \ensuremath{J/\psi}\xspace suppression, a measure of the relative significance of QCD-matter effects on ground and excited states, measured by CMS and compared to the same energy-loss (purple) and EFT (blue) theoretical models.}
\label{fig:2Sto1Sptcent}
\end{figure}
As an example, we calculate the baseline \ensuremath{J/\psi}\xspace and \ensuremath{\psi(2S)}\xspace cross sections from LDMEs extracted using LP factorisation in \rm{PbPb}\xspace collisions at the LHC (Fig.~\ref{fig:2Sto1Sptcent}). The energy-loss evaluation is carried out in the soft-gluon-emission limit of the full in-medium splitting kernels~\cite{Sievert:2018imd,Sievert:2019cwq}, and is well constrained by light-hadron quenching~\cite{Chien:2015hda,Kang:2014xsa}. The energy loss approach overpredicts the suppression of \ensuremath{J/\psi}\xspace measured by ATLAS~\cite{Aaboud:2018quy} in the range $\ensuremath{P_T}\xspace > 10$~GeV where the computation starts to be applicable. The discrepancy is a factor of 2 to 3 in both minimum bias and central collisions (yellow band in \cf{fig:2Sto1Sptcent} Left). The most important discrepancy, however, is in the relative medium-induced suppression of \ensuremath{\psi(2S)}\xspace to \ensuremath{J/\psi}\xspace as shown in the right panel of \cf{fig:2Sto1Sptcent} (purple band). The energy-loss model predicts smaller suppression for the \ensuremath{\psi(2S)}\xspace state compared to \ensuremath{J/\psi}\xspace and $\ensuremath{R_{\textrm{AA}}}\xspace[\ensuremath{\psi(2S)}\xspace] / \ensuremath{R_{\textrm{AA}}}\xspace[\ensuremath{J/\psi}\xspace] \approx 1.1$. The CMS experimental results~\cite{Sirunyan:2016znt} show that the suppression of the weakly bound \ensuremath{\psi(2S)}\xspace is 2 to 3 times larger than that of \ensuremath{J/\psi}\xspace.
Such a tension between the data and the energy-loss calculations shows that a formulation of a general microscopic theory of the quarkonium interactions in matter~\cite{Makris:2019ttx,Makris:2019kap,Vitev:2019yig} is necessary.
When an energetic particle propagates in a hot or cold nuclear medium, the interaction with its quasi-particles is typically mediated by off-shell-gluon exchanges -- Glauber or Coulomb gluons. Their typical momenta depend on the source of in-medium interactions -- collinear, static, or soft.
We construct the Lagrangian of NRQCD$_{\rm G}$ by adding to the velocity-renormalisation-group NRQCD (vNRQCD) Lagrangian the terms that describe the interactions with medium sources through virtual Glauber/Coulomb gluon exchanges. It takes the form:
\begin{equation}
\mathcal{L}_{\text{NRQCD}_{\rm G}} = \mathcal{L}_{\text{vNRQCD}} + \mathcal{L}_{Q-G/C} (\psi,A_{G/C}^{\mu,a}) + \mathcal{L}_{\bar{Q}-G/C} (\chi,A_{G/C}^{\mu,a})\;,
\end{equation}
where, in the background-field method, the effective fields $A_{G/C}^{\mu,a}$ incorporate the information about the sources. Here, $\psi$ and $\chi$ are the heavy quark and antiquark fields, respectively.
The leading and subleading corrections to the NRQCD$_{\rm G}$ Lagrangian in the heavy-quark sector from virtual (Glauber/Coulomb) gluon insertions, {i.e.}\xspace\ $\mathcal{L}_{Q-G/C}$, are derived using three different methods, all yielding the same results. We find that at LO the modification of the leading Lagrangian, $\mathcal{L}_{Q-G/C}^{(0)}$, is independent of the nature of the quasiparticles of the QCD medium:
\begin{equation}
\label{eq:L0-NR}
\mathcal{L}_{Q-G/C}^{(0)} (\psi,A_{G/C}^{\mu,a}) = \sum_{{\bf p},{\bf q}_T}\psi^{\dag}_{{\bf p}+{\bf q}_T} \Big{(} - g A^{0}_{G/C} \Big{)} \psi_{{\bf p}}\;\; \textrm{(collinear/static/soft)}\; .
\end{equation}
As the quarks (and antiquarks) couple to the time-like component of the Glauber/Coulomb field, it is easy to implement the interactions in a background-field approach.
In contrast, at NLO, the distinction is manifest in the subleading Lagrangian, $\mathcal{L}_{Q-G/C}^{(1)}$~\cite{Makris:2019ttx}.
The dissociation rates for the various quarkonium states in the QGP can be obtained from NRQCD$_{\textrm{G}}$ and incorporated into the rate equations first developed to describe the propagation of open heavy-flavour states in matter~\cite{Adil:2006ra,Sharma:2009hn}. Under the approximation where the transition between states is neglected, the quarkonium transport takes the form derived in~\cite{Sharma:2012dy,Aronson:2017ymv}. The EFT predictions are also shown in \cf{fig:2Sto1Sptcent} (blue bands), and give a much better description of the data than the energy-loss approach. Furthermore, in this limit, the surviving quarkonia are expected to retain the polarisation acquired from their initial production. A measurement that the HL-LHC might explore is that of quarkonium polarisation in nucleus-nucleus collisions (see Section~\ref{Pol-AA}).
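In this transition-free approximation, the hot-medium suppression reduces schematically to a survival probability obtained by integrating the NRQCD$_{\textrm{G}}$ dissociation rate along the medium evolution,
\begin{equation}
\ensuremath{R_{\textrm{AA}}}\xspace \simeq \exp\left(-\int d\tau\; \Gamma_{\pazocal{Q}}\big(T(\tau)\big)\right),
\end{equation}
up to cold-nuclear-matter effects; a state surviving in this way retains the polarisation acquired at its initial production.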
On the theory side, it will be important to extend such predictions to higher \ensuremath{P_T}\xspace, of the order of 100~GeV. The increased data sample at the HL-LHC should in principle allow one to check whether the LP factorisation limit and the energy-loss dominance are reached. Furthermore, as the \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace data become more precise, in particular with smaller uncertainties on the relative suppression of excited to ground states, it will be possible to look for effects of medium-induced transitions between quarkonium states~\cite{Makris:2019ttx}.
\subsection{Opportunities at HL-LHC}
\subsubsection{Studying the collision-energy dependency of $\pazocal{Q}$ production}
\label{sec:energy_dependence}
The LHC high-luminosity programme will have the opportunity to explore different collision energies. One of the most interesting possibilities is to study both the current maximum \rm{PbPb}\xspace energy of $\ensuremath{\sqrt{s_{_{NN}}}}=5.02$~TeV and low-energy collisions in the FT mode, the latter in a {\rm c.m.s.}\xspace\ energy range similar to that of RHIC.
The current energy achieved in the FT mode by the LHCb experiment, using the nominal 2.5~TeV per-nucleon Pb beam energy, is $\ensuremath{\sqrt{s_{_{NN}}}}=69$~GeV. Quarkonium total cross sections in \ensuremath{pp}\xspace collisions decrease by approximately a factor of 15 between $\ensuremath{\sqrt{s}}=5$~TeV and $\ensuremath{\sqrt{s}}=69$~GeV. Large integrated luminosities in this mode are therefore desirable to compensate for such a difference in yields. The expected integrated luminosity, \ensuremath{\int \mathcal L}\xspace, for the FT mode in LHCb with SMOG2 is 20~\ensuremath{\textrm{nb}^{-1}}\xspace in one year of \rm{PbAr}\xspace\ collisions at $\ensuremath{\sqrt{s_{_{NN}}}}=72$~GeV~\cite{Bursche:2649878}, nearly 100 times larger than what was recorded by the same experiment in \rm{PbPb}\xspace collisions at 5~TeV. Similar estimates have been obtained by the AFTER@LHC study group for different techniques (gas target, solid target with bent-crystal beam splitting or with a dedicated beam line)~\cite{Lansberg:2012kf,Trzeciak:2017csa,Hadjidakis:2018ifr} using both the LHCb and ALICE detectors.
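The quoted FT energy follows from the usual fixed-target kinematics: for a beam of energy $E_{N}$ per nucleon impinging on a target nucleon at rest of mass $m_{N}$,
\begin{equation}
\ensuremath{\sqrt{s_{_{NN}}}} \simeq \sqrt{2\, E_{N}\, m_{N}} \simeq \sqrt{2 \times 2500~\textrm{GeV} \times 0.94~\textrm{GeV}} \approx 69~\textrm{GeV}.
\end{equation}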
Whereas the freeze-out temperatures are nearly constant in the aforementioned collision-energy range, the peak temperature is expected to change by roughly a factor of three, when comparing the estimates obtained from the direct-photon-yield slopes measured at RHIC~\cite{Adare:2009qk} and at the LHC~\cite{Adam:2015lda}. The same factor is obtained for the charged-particle multiplicities reported in~\cite{Adam:2015ptt}. It is noteworthy that in these publications the peak temperature also depends on the collision centrality; however, it shows a more modest variation, of around 50\%, from peripheral to central events. It would thus be relevant to measure the modification of the quarkonium spectrum at the same particle multiplicity but at distinct collision energies. Such a measurement, made preferably with the same detector, would place a stringent constraint on models that consider quarkonium break-up by co-moving hadrons~\cite{Hadjidakis:2018ifr}.
The contribution from charmonium (re)generation strongly depends on the collision energy. More than 100 $c\bar{c}$ pairs are produced in a single central \rm{PbPb}\xspace collision at $\ensuremath{\sqrt{s_{_{NN}}}} = 5$~TeV, according to the charm cross section published in~\cite{ALICE_charm}. This large number of uncorrelated \ensuremath{c\overline{c}}\xspace pairs is an abundant source for the charmonium (re)generation process, which could explain the enhancement of the low-\ensuremath{P_T}\xspace \ensuremath{J/\psi}\xspace yields~\cite{Abelev:2013ila} and, to a lesser extent, the elliptic flow~\cite{Acharya:2017tgv, Aaboud:2018ttm, Khachatryan:2016ypw} observed in \rm{PbPb}\xspace collisions at the LHC.
Taking the FONLL computation~\cite{Cacciari:1998it} of the charm cross section at $\ensuremath{\sqrt{s_{_{NN}}}} = 69$~GeV,
less than one charm pair per collision is expected on average, leaving no room for charmonium (re)generation. This feature alone makes high-luminosity, low-energy collisions at the FT-LHC a significant opportunity to use (re)generation-free charmonium states to probe the QGP temperature.
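This strong energy dependence can be made explicit with the schematic statistical scaling of the regenerated yield with the number of uncorrelated pairs,
\begin{equation}
N_{\ensuremath{J/\psi}\xspace}^{\textrm{regen}} \propto \frac{N_{c\bar{c}}^{2}}{N_{\textrm{ch}}},
\end{equation}
so that going from $N_{c\bar{c}} \gtrsim 100$ per central collision at $\ensuremath{\sqrt{s_{_{NN}}}} = 5$~TeV to $N_{c\bar{c}} < 1$ at 69~GeV suppresses the (re)generation contribution by orders of magnitude.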
Bottomonia are expected not to be affected by (re)generation (from uncorrelated \ensuremath{b\overline{b}}\xspace pairs) given that, even in the maximum-energy \rm{PbPb}\xspace collisions at the LHC, less than one \ensuremath{b\overline{b}}\xspace pair is produced on average per collision. Looking at quarkonium suppression as a function of rapidity and system size (another asset of the versatile FT mode) would permit searches for the onset of QGP effects, and hence put constraints on the in-medium modification of the quarkonium potential.
The current status of the implementation of a FT operation mode within the LHCb detector, and the ongoing technical developments performed towards the achievement of an extended FT physics programme during HL-LHC, both in LHCb and potentially with the ALICE detector, have been discussed in Section~\ref{sec:dif_FT}. Conventional detectors with a coverage of the forward rapidities in the laboratory frame, such as LHCb or the ALICE muon arm, allow scanning of the mid to backward {\rm c.m.s.}\xspace\ rapidity region. With the HL-LHC luminosities in the FT mode, the yearly charmonium yields in PbXe collisions are expected to be very large, of the order of $\sim$10$^{7}$ \ensuremath{J/\psi}\xspace mesons in LHCb (Fig.~\ref{fig:FixedTarget-Quarkonium}, left) and of the order of a few 10$^{6}$ \ensuremath{J/\psi}\xspace mesons in the ALICE muon spectrometer.
The understanding of the (dynamical) charmonium suppression in the QGP (given the negligible (re)generation expected at low {\rm c.m.s.}\xspace\ energy) would highly benefit from systematic and precise studies of all excited states. This includes the direct \ensuremath{\chi_c}\xspace measurement, which becomes less challenging at low energy and backward rapidity thanks to the low-background environment. Projections with an LHCb-like detector of the \ensuremath{\psi(2S)}\xspace nuclear modification factor indicate, for instance, that a statistical precision of a few percent could be reached at mid rapidity in the {\rm c.m.s.}\xspace~\cite{Hadjidakis:2018ifr}. In addition to the \ensuremath{\psi(2S)}\xspace and \ensuremath{\chi_c}\xspace measurements, novel observables, such as quarkonium-pair (\ensuremath{J/\psi}\xspace + \ensuremath{J/\psi}\xspace) or quarkonium--heavy-quark ($\ensuremath{J/\psi}\xspace + D$) correlations, which require luminosities and acceptances larger than those achieved in the past at the SPS and RHIC, could be explored for the first time in this energy regime. As can be seen in \cf{fig:FixedTarget-Quarkonium} (Left), the low level of background in the \ensuremath{J/\psi}\xspace region supports the feasibility of the aforementioned studies, especially in the most backward region, where the backgrounds are smallest.
\begin{figure}[htbp!]
\centering
\includegraphics[scale=0.38]{figures/Psi72gev_Xe_signalLog_pt7y2-5.pdf}
\hspace{0.3cm}
\includegraphics[scale=0.38]{figures/UpsilonStates72gev_BkgFit_Xe_UpsilonSignal_pt7y3-5.pdf}
\caption{Di-muon invariant-mass distributions in the \ensuremath{J/\psi}\xspace, \ensuremath{\psi(2S)}\xspace (Left) and $\Upsilon(nS)$ (Right) regions, expected in \rm{PbXe}\xspace\ FT collisions at $\ensuremath{\sqrt{s_{_{NN}}}} = 72$~GeV, for an LHCb-like detector ($2 < y_{\rm lab} < 5$). The combinatorial background is subtracted using like-sign pairs. No nuclear modifications are assumed. The integrated luminosity of $\ensuremath{\int \mathcal L}\xspace = 30 \ensuremath{\textrm{nb}^{-1}}\xspace$ corresponds to one LHC year of data-taking with ions ({i.e.}\xspace typically one month of data-taking), with an LHCb-like detector equipped with a gaseous storage cell of 1~m length. The maximum luminosity has been limited such that no more than 15$\%$ of the Pb beam is removed by the interaction with the target. [Plots taken from~\cite{Hadjidakis:2018ifr}].}
\label{fig:FixedTarget-Quarkonium}
\end{figure}
Figure~\ref{fig:FixedTarget-Quarkonium} (Right) shows projections for the $\Upsilon(nS)$ invariant-mass region, in the di-muon-decay channel, after the combinatorial-background subtraction, in \rm{PbXe}\xspace\ collisions, with an LHCb-like detector. The yearly \ensuremath{\Upsilon}\xspace, \ensuremath{\Upsilon(2S)}\xspace and \ensuremath{\Upsilon(3S)}\xspace yields are about {$4 \times 10^{3}$, $10^{3}$ and $5 \times 10^{2}$} mesons, respectively. Given the excellent resolution of LHCb, the three states are well separated. The expected statistical precision on the measurement of \ensuremath{R_{AA}}\xspace\ of each of the three states will be about 7\%, 20\% and 30\% for the \ensuremath{\Upsilon}\xspace, \ensuremath{\Upsilon(2S)}\xspace and \ensuremath{\Upsilon(3S)}\xspace, respectively. Yield projections in the bottomonium sector also exist for the ALICE muon arm. Typically a few hundred \ensuremath{\Upsilon}\xspace mesons will be collected in \rm{PbXe}\xspace\ collisions in one year of LHC data taking. The study of the excited states, even over several data-taking years, will remain rather limited with ALICE in the FT mode. Such studies of $\Upsilon(nS)$ suppression, especially with the LHCb detector, will therefore bring crucial new inputs to our understanding of the nature of the hot medium created in this energy regime, complementary to the studies already performed at the LHC (see CMS results~\cite{Sirunyan:2017lzi, Khachatryan:2016xxp, Chatrchyan:2012lxa}). This will allow tests of the different approaches discussed in
Sections~\ref{AAintro} and \ref{subsection_rec_th_dev}
and comparisons with effective models, such as the CIM, that deal with quarkonium suppression and account for Landau damping~\cite{Ferreiro:2018wbd}.
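The quoted precisions can be cross-checked with a simple counting exercise. The sketch below estimates the relative statistical uncertainty on \ensuremath{R_{AA}}\xspace from the yearly yields quoted above, assuming a like-sign background subtraction; the background counts and the pp-reference uncertainty are purely illustrative assumptions, not inputs of the projection study.

```python
import math

def raa_stat_precision(n_sig, n_bkg, ref_rel_unc):
    """Relative statistical uncertainty on R_AA (toy estimate).

    With a like-sign background subtraction, var(S) ~ S + 2B; an assumed
    relative uncertainty from the pp reference is added in quadrature.
    """
    sig_rel = math.sqrt(n_sig + 2 * n_bkg) / n_sig
    return math.sqrt(sig_rel**2 + ref_rel_unc**2)

# Yearly yields quoted in the text; the background count (4000 pairs under
# each peak) and the 3% pp-reference uncertainty are illustrative guesses.
for state, n_sig in [("Y(1S)", 4e3), ("Y(2S)", 1e3), ("Y(3S)", 5e2)]:
    print(state, f"{100 * raa_stat_precision(n_sig, 4e3, 0.03):.1f}%")
```

Even this crude estimate reproduces the qualitative hierarchy (the excited states are statistics-limited); the full projection in~\cite{Hadjidakis:2018ifr} additionally folds in efficiencies and fit effects.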
\subsubsection{Prospects for $X(3872)$ studies}
\label{sec:aa_x3872}
About 20 years after its discovery~\cite{Choi:2003ue}, the question of whether $X(3872)$ is a molecule, a compact tetraquark, or a hybrid state is still a subject of intense debate. It is thus worth wondering whether heavy-ion experiments can help us understand its nature, in addition to the other exotic $XYZ$ states. Such investigations with heavy ions are needed in parallel to the recent experimental~\cite{Aaij:2020hpf} and theoretical~\cite{Esposito:2020ywk} work related to high-multiplicity \ensuremath{pp}\xspace\ collisions mentioned in Section~\ref{pAcom} (see also Section~\ref{sec:pp-xyz} for a general discussion of the production in \ensuremath{pp}\xspace\ collisions). To advance our understanding using heavy ions, two inputs are needed:
precise measurements of the $X(3872)$ yields in heavy-ion collisions and solid theoretical calculations that lead to different results for different underlying structures. Neither is an easy task.
On the theory side, many phenomena may affect the production of the $X(3872)$. For example, for low-\ensuremath{P_T}\xspace production, the dissociation and recombination of the $X(3872)$ similar to those of charmonium can happen in the hot QGP (for a compact tetraquark state) or in the hadronic gas (for a molecule). These processes in the hadronic gas are also connected with similar processes in \ensuremath{pp}\xspace and {\ensuremath{pA}}\xspace collisions, though the background hadronic gas densities are different. These reaction rates are poorly understood from first principles. Furthermore, the recombination is sensitive to the total number of charm quarks produced in one event, which has not been precisely determined in heavy-ion collisions.
At larger \ensuremath{P_T}\xspace, energy loss may also affect the $X(3872)$ yields. Moreover, in order to convert calculations into phenomenology, one needs precise knowledge of the branching ratio of the $X(3872)$ decay channel used in the measurement (for example, $J/\psi\pi\pi$), which is also not well known but may be improved with future measurements at $B$ factories. Though the task is difficult, sufficiently precise data would nevertheless enable meaningful analyses. So far, only the ratio between the production yields of the $X(3872)$ and the \ensuremath{\psi(2S)}\xspace has been measured by the CMS collaboration in the $\ensuremath{P_T}\xspace =10$--50~GeV range~\cite{CMS-PAS-HIN-19-005}. Since the suppression mechanism of the \ensuremath{\psi(2S)}\xspace is not well understood, it is preferable if the direct yield (rather than the ratio) of the $X(3872)$ can be measured
as a function of \ensuremath{P_T}\xspace in the soft regions. This would indicate how significant recombination is to the production of the $X(3872)$, since recombination is sensitive to the particle wave function.
This idea is motivated by the important contribution from recombination in charmonium production at low \ensuremath{P_T}\xspace. On the experimental side, the size of the $X(3872)$ data samples needs to be increased in order to carry out more differential measurements.
At the same time, phenomenological calculations assuming different structures of the $X(3872)$ have to be carried out.
Using the $X(3872)$ production yields in heavy-ion collisions to understand its structure may not be fully
successful by itself, but provides complementary information to measurements in other collision systems.
\subsubsection{$\psi$ polarisation in \rm{PbPb}\xspace collisions}
\label{Pol-AA}
The question of whether the \ensuremath{J/\psi}\xspace meson is polarised in \ensuremath{AA}\xspace collisions has been addressed by only a few authors~\cite{Gupta:1998ut,Ioffe:2003rd,Faccioli:2012kp}, who have advocated that a modification of the \ensuremath{J/\psi}\xspace polarisation in \ensuremath{AA}\xspace (as compared to \ensuremath{pp}\xspace) could be due to either the disappearance of feed-down from higher states due to the suppression of these states in QGP, or the modification of the \ensuremath{c\overline{c}}\xspace $\to$ \ensuremath{J/\psi}\xspace conversion mechanism, which would be altered in those collisions, through a modification of the LDMEs at {freeze-out}. To quantify the \ensuremath{\chi_c}\xspace suppression in \ensuremath{AA}\xspace collisions, in~\cite{Faccioli:2012kp}, it is considered that the {feed-down} from \ensuremath{\chi_c}\xspace results in the ``blurring'' of the direct \ensuremath{J/\psi}\xspace production in \ensuremath{pp}\xspace, which would then be recovered in \ensuremath{AA}\xspace.
However, the first measurement of \ensuremath{J/\psi}\xspace polarisation in \ensuremath{AA}\xspace collisions by ALICE~\cite{Acharya:2020xko}, although still affected by sizeable uncertainties, does not show a significant modification of the \ensuremath{J/\psi}\xspace polarisation parameters, $\lambda_\theta$, $\lambda_\phi$ and $\lambda_{\theta \phi}$. At present, the first question to be answered is indeed whether the polarisation differs between the \ensuremath{pp}\xspace\ and \ensuremath{AA}\xspace samples, rather than the actual size of the polarisation in \ensuremath{AA}\xspace collisions.
In this context, it would be helpful to look at \ensuremath{R_{AA}}\xspace as a function of the cosine of the di-lepton polar angle, $\cos \theta$, {since the polarisation directly affects the $\cos \theta$ distribution}: this measurement may have greater experimental precision than a direct measurement of $\lambda_\theta$.
Details of the quarkonium formation can also be addressed in \ensuremath{AA}\xspace collisions. From the viewpoint of the theoretical modelling, the SCET$_{\rm G}$ approach described in Section~\ref{AAtransportEFT} is an ideal candidate to investigate the polarisation at large \ensuremath{P_T}\xspace, a regime where each directly produced charmonium is expected to have the same polarisation as in \ensuremath{pp}\xspace collisions, due to helicity quasi-conservation in the energy loss process, but where the energy loss and suppression affect their relative yields and their subsequent contribution to the lower-lying states.
At smaller \ensuremath{P_T}\xspace, the interactions with the QGP, neglected in most of the previous studies, and the large fraction of the \ensuremath{J/\psi}\xspace yield due to recombination are expected to partly wash away the polarisation of the \ensuremath{c\overline{c}}\xspace state. On the contrary, the strong magnetic field created in the early QGP stage could enforce a spin alignment of the \ensuremath{J/\psi}\xspace perpendicular to the event plane~\cite{Tuchin:2013ie}. Experimental investigations should therefore be supported by quantitative theoretical predictions that include all these ingredients, and are based on state-of-the-art understanding of \ensuremath{J/\psi}\xspace polarisation in \ensuremath{pp}\xspace collisions.
An alternative strategy could consist in measuring the polarisation of prompt \ensuremath{\psi(2S)}\xspace in \ensuremath{AA}\xspace collisions, which does not receive any {feed-down}, in kinematic regions where the yield is found to be polarised in \ensuremath{pp}\xspace collisions. One may reasonably anticipate a gradual reduction of the polarisation in \ensuremath{AA}\xspace collisions with increasing centrality
due to interactions with the QGP constituents. The issue is that, for now, the \ensuremath{\psi(2S)}\xspace has only been found to be (longitudinally) polarised for $\ensuremath{P_T}\xspace > 10$~GeV at forward rapidities~\cite{Aaij:2014qea}.
In addition, this represents a genuine experimental challenge: the $\ensuremath{\psi(2S)}\xspace / \ensuremath{J/\psi}\xspace$ ratio in \rm{PbPb}\xspace in the di-muon channel is about 1--2\% (3--5\% in \ensuremath{pp}\xspace)~\cite{Sirunyan:2016znt}. The published ALICE results in \rm{PbPb}\xspace~\cite{Acharya:2020xko} use $750~\ensuremath{\mu\textrm{b}^{-1}}\xspace$ of data, while $10~\ensuremath{\textrm{nb}^{-1}}\xspace$ are expected after Runs 3--4. This means that, naively, a \ensuremath{\psi(2S)}\xspace polarisation measurement in \rm{PbPb}\xspace at the end of Run~4 will still be less precise than the Run-2 \ensuremath{J/\psi}\xspace measurement (not accounting for the much lower signal-to-background ratio of the \ensuremath{\psi(2S)}\xspace compared to the \ensuremath{J/\psi}\xspace, nor for improvements due to detector upgrades). The situation may be different with lighter ions (which may be available for Run~5 or beyond), providing more integrated luminosity, and for which less suppression is expected with respect to \ensuremath{pp}\xspace. Finally, if such a measurement were to be done by the ATLAS or CMS experiments, the gain in rapidity acceptance may be compensated by a larger \ensuremath{P_T}\xspace threshold.
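The naive statistical argument above can be made explicit with a back-of-the-envelope scaling, using only the sample sizes quoted in the text (detector effects and signal-to-background differences are deliberately ignored in this sketch):

```python
import math

# Naive 1/sqrt(N) scaling of the statistical precision, using the
# luminosities and the psi(2S)/J/psi yield ratio quoted in the text.
lumi_run2_jpsi = 750.0     # mub^-1, published ALICE PbPb sample
lumi_runs34 = 10_000.0     # mub^-1 (10 nb^-1 expected after Runs 3-4)
psi2s_over_jpsi = 0.015    # di-muon yield ratio in PbPb (~1-2%)

# Size of the Run-3/4 psi(2S) sample relative to the Run-2 J/psi one:
yield_factor = (lumi_runs34 / lumi_run2_jpsi) * psi2s_over_jpsi
# Statistical uncertainty scales like 1/sqrt(yield):
precision_ratio = 1.0 / math.sqrt(yield_factor)
print(f"psi(2S) stat. uncertainty / Run-2 J/psi one ~ {precision_ratio:.1f}")
```

With these inputs the \ensuremath{\psi(2S)}\xspace sample is only $\sim$20\% of the Run-2 \ensuremath{J/\psi}\xspace one, i.e.\ a factor $\sim$2 worse statistical precision, consistent with the statement above.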
\section*{Acknowledgements}
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the grant agreement No.\ 824093 (STRONG-2020).
This project has also received funding from the French ANR under the grant ANR-20-CE31-0015 (``PrecisOnium'').
This work was also partly supported by the French CNRS via the IN2P3 project GLUE@NLO, via the Franco-Chinese LIA FCPPL (Quarkonium4AFTER), via the IEA No.205210 (``GlueGraph") and ``Excitonium'', by the Paris-Saclay U. via the P2I Department and by the P2IO Labex via the Gluodynamics project.
D.Y.A.V. and P.B.G. acknowledge the support of the ``R\'egion Pays de la Loire'' under the contract No.\ 2015-08473.
M.A.O.'s work was partly supported by the ERC grant 637019 ``MathAm''.
The work of B.D. has been supported by the ERC Starting Grant 715049 ``QCDforfuture''.
C.V.H.\ has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska--Curie grant agreement No 792684.
The work of F.G.C.\ has been supported by the Italian MIUR under the FARE program (code n.\ R16XKPHL3N, 3DGLUE), and that of U.D. and C.P. by Fondazione di Sardegna under the projects ``Quarkonium at LHC energies'', No.\ F71I17000160002 (University of Cagliari), and ``Proton tomography at the LHC'', No.\ F72F20000220007 (University of Cagliari).
D.P.\ is supported by the Science and Technology Facilities Council under grants ST/M005437/1 and ST/N000374/1.
The work of S.B. has been supported by the National Science Foundation under Contract No.\ PHY-1516088. The work of XY is supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics grant DE-SC0011090.
J.-W.Q. and K.W. are supported by Jefferson Science Associates, LLC under U.S.\ DOE Contract No.\ DE-AC05-06OR23177. This work is also supported within the framework of the TMD Topical Collaboration.
The work of V.K. was supported in part by the Shota Rustaveli National Science Foundation of Georgia (SRNSFG) under grant FR17-184.
M.G.E.\ is supported by the Spanish MICINN grant PID2019-106080GB-C21.
E.G.F.\ is supported by Ministerio de Ciencia e Innovaci\'on of Spain under project FPA2017-83814-P; Unidad de Excelencia Mar\'ia de Maetzu under project MDM-2016-0692; and Xunta de Galicia (Conseller\'ia de Educaci\'on) and FEDER.
J. He is partly supported by NSFC (No. 11775227).
E. Chapon is supported by MoST (No. 2018YFA0403901) and NSFC (No. 11875275, 12061141003) and partially by CCEPP (China).
The work of M.N. has been supported by the Ministry of Education and Science of Russia via the State assignment to educational and research institutions under the project FSSS-2020-0014.
S.B.\ would like to thank Andreas Metz for insightful discussions, and Jian Zhou for the collaboration.
F.G.C.\ thanks Alessandro Bacchetta, Alessandro Papa, and Michael Fucilla for fruitful conversations.
\section{Summary}
Quarkonium measurements at the LHC are not only motivated by the intrinsic goal of advancing our understanding of their underlying production mechanisms, which are still not fully understood today, but also by the broad and unique opportunities they offer to perform a wide range of studies. In this document, we have reviewed the prospects for quarkonium studies in the upcoming high-luminosity phases of the LHC, with proton-proton (\ensuremath{pp}\xspace), proton-nucleus ({\ensuremath{pA}}\xspace), and nucleus-nucleus (\ensuremath{AA}\xspace) collisions. Among the research topics highlighted are:
opportunities in multi-quark spectroscopy; in new probes of the proton parton distributions, including transverse-momentum-dependent and spin effects; in sensitive observables for the study of double-parton-scattering interactions; in nuclear PDFs and other nuclear effects; and in studies to determine the properties of the quark-gluon plasma.
Section~\ref{sec:pp} surveyed the prospects for measurements of %
quarkonium production in \ensuremath{pp}\xspace\ collisions in the coming years and into the HL-LHC era. The motivations for future measurements of the quarkonium \ensuremath{P_T}\xspace spectra and polarisation were discussed, as well as the possibilities for more detailed characterisations of the properties of quarkonium-production events, in particular for \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace. Such investigations can be undertaken through the study of (i) the hadronic activity accompanying scatterings where quarkonia are produced, (ii) the formation of quarkonia within high-energy jets, and (iii) their associated production alongside highly energetic objects such as jets, vector bosons, or other quarkonium states. We have highlighted where such measurements can provide new insights into broader fields such as in the search for new physics phenomena and in the study of multi-parton interactions, and where such measurements have already shown significant promise. The potential for the study of $C$-even quarkonia as well as multi-quark and molecular states was presented. Opportunities for HL-LHC quarkonium data to provide constraints on proton PDFs in the low-$x$ and low-scale regime were also outlined.
Section~\ref{sec:excl_diff} addressed diffractive and, mainly, exclusive photoproduction of quarkonia in hadron-hadron collisions. After a short description of selected experimental results and the discussion of open points in experiment and theory, this section focused on measurements possible at the HL-LHC, both in the collider and fixed-target modes, that either have not yet been performed or that have not yet been sufficiently exploited. In particular, the study of forward \ensuremath{J/\psi}\xspace production in combination with a backward jet and the study of exclusive single-quarkonium and quarkonium-pair production, which provide access to the multi-dimensional nucleon and nucleus partonic structure, have been discussed. Here, the advantage of {\ensuremath{pA}}\xspace collisions in the collider data-taking mode, and the need for high integrated luminosities, have been highlighted in order to fully exploit the potential of exclusive measurements.
Section~\ref{sec:spin} focused on studies of the transverse-momentum-dependent and spin dynamics in quarkonium production in \ensuremath{pp}\xspace\ collisions. Having first reviewed the two main frameworks that account for transverse-momentum-dependent effects, {i.e.}\xspace\ TMD factorisation and HE factorisation, a discussion followed on their applicability to quarkonium production along with potential challenges, open issues, and opportunities. In particular, the discussion covered those quarkonium-production processes that can be used to study the impact of factorisation-breaking effects and the region of applicability of these frameworks. Single transverse-spin asymmetries, believed to be generated in quarkonium production by the gluon Sivers effect, that arises from the correlation between the proton spin and the gluon motion, were also addressed. Three approaches that can account for this correlation have been discussed, as well as a selection of experimental projections for the HL-LHC in unpolarised collisions, and in polarised collisions in the LHC FT mode.
Section~\ref{sec:pa} focused on inclusive quarkonium studies in {\ensuremath{pA}}\xspace\ collisions at the LHC. First, a survey of the different phenomena at play was given. This was followed by an overview of the current status of the use of quarkonium data to constrain nPDFs in the collider and FT modes, and of low-$x$ parton saturation calculations applied to quarkonium production. Second, experimental observables used to compare quarkonium production in {\ensuremath{pA}}\xspace\ and \ensuremath{pp}\xspace\ collisions were discussed. The section concluded with a discussion of the status and prospects for the understanding of flow-like phenomena observed in {\ensuremath{pA}}\xspace\ collisions, as well as of the experimental and theoretical status of quarkonium-hadronisation modifications in {\ensuremath{pA}}\xspace compared to \ensuremath{pp}\xspace collisions.
Section~\ref{sec:aa} focused on quarkonium production in \ensuremath{AA}\xspace collisions. The main physics phenomena at play in quarkonium physics in heavy-ion collisions were introduced, as well as the theoretical state of the art and experimental prospects for the HL-LHC. Recent theory developments were discussed, including semi-classical transport in open quantum systems, a density-operator model, and an advanced effective-field-theory model. A selection of opportunities offered by the HL-LHC was presented, including the investigation of the collision-energy dependence of various observables through comparisons of FT and collider data, and prospects for studies of the $X(3872)$ state and for measurements of the \ensuremath{J/\psi}\xspace polarisation.
Section~\ref{sec:dps} discussed the current theoretical and experimental status of the physics of double and triple parton scatterings (DPS and TPS) in \ensuremath{pp}\xspace and {\ensuremath{pA}}\xspace collisions, with an emphasis on the role of measurements of the production of multiple quarkonia, or of quarkonia plus an electroweak gauge boson, as a means to clarify the multiple open issues in the field. Detailed theoretical perspectives and experimental prospects of relevance for the HL-LHC operation, including the expected numbers of events for various DPS and TPS final states with quarkonia, were provided.
Overall, this document reviewed how the HL-LHC will, on the one hand, help to understand quarkonium production better and, on the other, help to advance the use of quarkonia as tools for multiple aspects of QCD physics.
\section{Double and triple parton scatterings\protect\footnote{Section editors: David d'Enterria, Tomas Kasemets. %
}}
\label{sec:dps}
\newcommand\cO{{\cal O}}
\newcommand\cN{{\cal N}}
\newcommand\bR{{\cal B}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{L}_{\mbox{\rm \tiny{int}}}}{\mathcal{L}_{\mbox{\rm \tiny{int}}}}
\newcommand{cm$^{-2}$s$^{-1}$}{cm$^{-2}$s$^{-1}$}
\newcommand{{d^2r}}{{d^2r}}
\newcommand{{d^2b}}{{d^2b}}
\newcommand{\sigma_{{\rm SPS}}}{\sigma_{{\rm SPS}}}
\newcommand{\sigma_{{\rm DPS}}}{\sigma_{{\rm DPS}}}
\newcommand{\sigma_{{\rm TPS}}}{\sigma_{{\rm TPS}}}
\newcommand{\sigma_{\rm eff}}{\sigma_{\rm eff}}
\newcommand{\sigma_{{\rm eff},pp}}{\sigma_{{\rm eff},pp}}
\newcommand{\sigma_{{\rm eff},pA}}{\sigma_{{\rm eff},pA}}
\newcommand{\sigma_{{\rm eff},AA}}{\sigma_{{\rm eff},AA}}
\newcommand{\sigma_{\rm eff,{DPS}}}{\sigma_{\rm eff,{DPS}}}
\newcommand{\sigma_{\rm eff,{TPS}}}{\sigma_{\rm eff,{TPS}}}
\newcommand{\sigma_{\rm eff,{NPS}}}{\sigma_{\rm eff,{NPS}}}
\newcommand{\sigma_{{\rm eff,{DPS}},pA}}{\sigma_{{\rm eff,{DPS}},pA}}
\newcommand{\sigma_{{\rm eff,{TPS}},pA}}{\sigma_{{\rm eff,{TPS}},pA}}
\newcommand{\sigma_{{\rm eff,{DPS}},AA}}{\sigma_{{\rm eff,{DPS}},AA}}
\newcommand{\sigma_{{\rm eff,{TPS}},A}}{\sigma_{{\rm eff,{TPS}},A}}
\newcommand{\sigma_{{\rm {DPS,1}}}}{\sigma_{{\rm {DPS,1}}}}
\newcommand{\sigma_{{\rm {DPS,2}}}}{\sigma_{{\rm {DPS,2}}}}
\newcommand{\sigma_{{\rm {DPS,3}}}}{\sigma_{{\rm {DPS,3}}}}
\newcommand{\sigma_{{\rm {TPS,1}}}}{\sigma_{{\rm {TPS,1}}}}
\newcommand{\sigma_{{\rm {TPS,2}}}}{\sigma_{{\rm {TPS,2}}}}
\newcommand{\sigma_{{\rm {TPS,3}}}}{\sigma_{{\rm {TPS,3}}}}
\newcommand{\sigma_{{\rm {TPS,9}}}}{\sigma_{{\rm {TPS,9}}}}
\newcommand{\rm N_{{\rm {{DPS}}}}}{\rm N_{{\rm {{DPS}}}}}
\newcommand{\htr}[1]{{\color{red} #1}}
\newcommand{\htb}[1]{{\color{blue} #1}}
\newcommand{\htg}[1]{{\color{green} #1}}
\newcommand{\prn}[2]{{}^{#1} #2} %
\newcommand{\prb}[2]{{}^{#1}\! #2} %
\newcommand{\prl}[2]{{}^{#1}\! #2} %
\newcommand{\tvec}[1]{\boldsymbol{#1}}
\subsection{Introduction}
The extended nature of hadrons and their large parton densities when probed at the HL-LHC collision energies make it very likely that two or more quarkonium states are produced simultaneously, alone or together with other heavy particles, via separate multi-parton interactions in \ensuremath{pp}\xspace~\cite{Lansberg:2014swa}, {\ensuremath{pA}}\xspace~\cite{Strikman:2001gz,Cattaruzza:2004qb,dEnterria:2014lwk,dEnterria:2016yhy}, and \ensuremath{AA}\xspace~\cite{dEnterria:2013mrp,dEnterria:2014lwk} collisions. Double, triple, and in general {$n$-tuple} parton scatterings (DPS, TPS, and NPS, respectively) depend on the degree of transverse overlap of the matter densities of the colliding hadrons, and give access to the phase-space distributions of partons inside the proton or nucleus. The study of NPS thereby provides valuable information on the hadronic wave functions describing the correlations among partons in space, momentum, flavour, colour, spin, etc., and their corresponding evolution as a function of collision energy.
In addition, understanding double and triple parton scatterings is of relevance in the study of backgrounds for the associated production of quarkonia plus other hard particles (Section~\ref{sec:oniumassociate}), for rare Standard Model (SM) decays, and/or for searches for new physics in final states with multiple heavy particles.
The pQCD-factorised expression to compute the cross section of a given double parton scattering process in hadron collisions reads
\begin{align}
& \sigma_{\text{DPS}}
= \left(\frac{\mathpzc{m}}{2}\right)
%
\sum_{a_1a_2b_1b_2}\sum_{R}
\left[\prn{R}{\hat{\sigma}}_{a_1 b_1}\,
\prn{R}{\hat{\sigma}}_{a_2 b_2}\right]\,
\otimes \int d^2\tvec{y}\;
\prb{R}{F}_{b_1 b_2}(x_i,\tvec{y}) \,
\prb{R}{F}_{a_1 a_2}(\bar{x}_i,\tvec{y}) \,,
\label{eq:dps_xsec}
\end{align}
where $\mathpzc{m}$ is a combinatorial factor to avoid multiple counting of the same process, $\otimes$ denotes a convolution over longitudinal momentum fractions, $\hat{\sigma}_{ab}$ is the partonic cross section for the interaction between partons $a_i$ and $b_i$, while $F_{a_1a_2}$ is the double parton distribution function (dPDF) of two partons inside a proton~\cite{Paver:1982yp}, separated by a distance $\tvec{y}$ and each carrying a longitudinal momentum fraction $x_i$. The sum over $a_i$ and $b_i$ runs over parton flavours and spin, while $R$ runs over the allowed colour representations~\cite{Buffing:2017mqm,Diehl:2017kgu}.
Section~\ref{sec:DPS_TH} describes the current status of theoretical DPS calculations based on \ce{eq:dps_xsec}. Often, however, rather than the full calculation, a simpler approximation is employed to estimate the cross section for the DPS production of two hard particles $H_1$ and $H_2$ from the product of their corresponding single-parton-scattering ($\sigma_{{\rm SPS}}$) values,
normalised by an effective cross section $\sigma_{\rm eff}$ that ensures the proper units of the final result, namely
\begin{equation}
\sigma^{hh' \to H_1+H_2}_{\rm DPS} = \left(\frac{\mathpzc{m}}{2}\right)\, \frac{\sigma^{hh' \to H_1}_{\rm SPS} \cdot
\sigma^{hh' \to H_2}_{\rm SPS}}{\sigma_{\rm eff}}\,.
\label{eq:pocketDPS}
\end{equation}
This so-called ``pocket formula'' encapsulates the intuitive result that, in the absence of any partonic correlations, the probability to have two parton-parton scatterings producing two heavy or high-$\ensuremath{P_T}\xspace$ particles ({e.g.}\xspace\ quarkonium states) in a given inelastic hadron-hadron collision should be proportional to the product of probabilities to independently produce each one of them. In the extreme case where one assumes that (i) the dPDF factorises as the product of transverse and longitudinal densities, (ii) the longitudinal components themselves reduce to the product of independent single PDFs, (iii) the transverse profile is the same for all partons, and (iv) no other parton correlations are present, the effective cross section is~\cite{Calucci:1997ii,dEnterria:2017yhd}
\begin{equation}
\sigma_{\rm eff}\equiv\sigma_{\rm eff,{DPS}}=\left[ \int d^2b\,T^2(b)\right]^{-1} \,.
\label{eq:sigmaeff_DPS}
\end{equation}
Consequently, $\sigma_{\rm eff}$ can be written as a function of the \ensuremath{pp}\xspace overlap $T(b)$ at impact parameter $b$, computable from the transverse parton-density profile of the proton $\rho(b)$ in a Glauber approach~\cite{dEnterria:2020dwq}.
For conventional transverse parton-density distributions $\rho(b)$ of the proton, such as those typically implemented in modern \ensuremath{pp}\xspace Monte Carlo (MC) event generators like {\sc pythia}\xspace~8~\cite{Sjostrand:2017cdm} and {\sc herwig++}\xspace~\cite{Seymour:2013qka}, one expects values of $\sigma_{\rm eff} \approx 15$--25~mb. These $\sigma_{\rm eff}$ values are smaller than the purely geometric ``soft'' \ensuremath{pp}\xspace\ cross section of $\sigma_\mathrm{inel}\approx 35$~mb, derived from the electromagnetic radius of the proton, because of the inherent ``centrality bias'' that appears when one or more sufficiently hard parton scatterings are required.
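As a numerical sketch, \ce{eq:pocketDPS} and \ce{eq:sigmaeff_DPS} can be evaluated together for a Gaussian overlap function; the proton width and the SPS cross section below are illustrative assumptions chosen to land in the quoted $\sigma_{\rm eff}$ ballpark, not fitted or measured values.

```python
import numpy as np

# sigma_eff = [ int d^2b T^2(b) ]^(-1) for a Gaussian pp overlap,
# T(b) = exp(-b^2 / (2 w^2)) / (2 pi w^2); analytically sigma_eff = 4 pi w^2.
w = 0.4                                  # fm, illustrative width
b = np.linspace(0.0, 10.0, 200_001)      # fm, impact-parameter grid
db = b[1] - b[0]
T = np.exp(-b**2 / (2 * w**2)) / (2 * np.pi * w**2)
sigma_eff_mb = 10.0 / np.sum(2 * np.pi * b * T**2 * db)  # 1 fm^2 = 10 mb

# Pocket formula with m = 1 (two identical final states, e.g. J/psi pairs);
# the SPS cross section is a placeholder value, not a measurement.
sps_jpsi_nb = 1000.0
sigma_dps_nb = 0.5 * sps_jpsi_nb**2 / (sigma_eff_mb * 1e6)  # 1 mb = 1e6 nb

print(f"sigma_eff ~ {sigma_eff_mb:.1f} mb "
      f"(analytic {4 * np.pi * w**2 * 10:.1f} mb)")
print(f"sigma_DPS ~ {sigma_dps_nb * 1e3:.2f} pb")
```

The numerical integral reproduces the closed Gaussian form $4\pi w^2$, and with $w = 0.4$~fm one recovers $\sigma_{\rm eff}\simeq 20$~mb, in the range quoted above.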
The pocket formula (\ce{eq:pocketDPS}) can be used for \ensuremath{pp}\xspace, {\ensuremath{pA}}\xspace, and \ensuremath{AA}\xspace\ collisions, and its generalisation to the hard production of $n$ sets of particles, denoted $H_i$, in $n$ parton scatterings can be expressed as the product of the $n$ corresponding SPS cross sections for the production of each single final-state particle, normalised by the $(n-1)^{\rm th}$ power of an effective NPS cross section~\cite{dEnterria:2017yhd}:
\begin{equation}
\sigma^{hh' \to H_1+\ldots+\,H_n}_{\rm NPS} = \left(\frac{\mathpzc{m}}{n!}\right)\,
\frac{\prod_i\sigma^{hh' \to H_i}_{\rm SPS}}{(\sigma_{\rm eff,{NPS}})^{n-1}},
\;\;\;\mbox{with}\;\;\;
\sigma_{\rm eff,{NPS}}=\left\{\int d^2b \,T^n(b)\right\}^{-1/(n-1)}\,,
\label{eq:pocketNPS}
\end{equation}
where, again, the second equality holds under the strong assumption of the absence of any parton correlations, and $\sigma_{\rm eff,{NPS}}$ bears a simple geometric interpretation in terms of powers of the inverse of the integral of the hadron-hadron overlap function $T(b)$ over all impact parameters.
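Under the same no-correlation, Gaussian-overlap assumptions (illustrative width $w$), the generalised effective cross section in \ce{eq:pocketNPS} admits the closed form $2\pi w^2\, n^{1/(n-1)}$, which the sketch below verifies numerically:

```python
import numpy as np

# sigma_eff_NPS = [ int d^2b T^n(b) ]^(-1/(n-1)) for a Gaussian overlap
# T(b) = exp(-b^2 / (2 w^2)) / (2 pi w^2); w is an illustrative width.
w = 0.4  # fm
b = np.linspace(0.0, 10.0, 200_001)
db = b[1] - b[0]
T = np.exp(-b**2 / (2 * w**2)) / (2 * np.pi * w**2)

def sigma_eff_nps_mb(n):
    integral = np.sum(2 * np.pi * b * T**n * db)   # int d^2b T^n(b)
    return 10.0 * integral ** (-1.0 / (n - 1))     # 1 fm^2 = 10 mb

for n in (2, 3):
    analytic = 10.0 * 2 * np.pi * w**2 * n ** (1.0 / (n - 1))
    print(f"n={n}: numeric {sigma_eff_nps_mb(n):.2f} mb, "
          f"Gaussian closed form {analytic:.2f} mb")
```

For this profile $\sigma_{\rm eff,{TPS}}/\sigma_{\rm eff,{DPS}} = \sqrt{3}/2 \simeq 0.87$, i.e.\ triple scatterings effectively probe a slightly more central overlap than double ones.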
The expressions based on \ce{eq:pocketNPS} applied to quarkonium production in DPS and TPS processes below provide baseline (purely ``geometric'') order-of-magnitude estimates of their expected cross sections by combining (i) SPS cross sections $\sigma_{{\rm SPS}}$, which are either experimentally measured or computed within perturbative QCD {e.g.}\xspace\ at NLO or NNLO accuracy today, %
plus (ii) a value of $\sigma_{\rm eff,{NPS}}$, either theoretically derived in a geometric Glauber approach or extracted from experimental measurements. Comparing experimental data, and/or more complete theoretical predictions with fewer approximations, to the simple cross sections expected from the pocket formula allows one to assess the size and impact of parton correlations in the proton (or nuclear) wave functions. Since DPS and TPS cross sections depend on the square and cube of the corresponding SPS cross sections, perturbative processes with large enough SPS cross sections are needed in order to collect a measurable number of events at the HL-LHC: this is the advantage of using multiple production of quarkonia, rather than rarer heavy particles such as electroweak bosons, in NPS studies. %
\subsection{Theoretical status of Double Parton Scattering}
\label{sec:DPS_TH}
\subsubsection{Factorisation of DPS cross sections}
The theoretical predictions for DPS cross sections rely on the factorisation of the underlying dynamics as a convolution of parton-parton cross sections and dPDFs, as described by \ce{eq:dps_xsec}. A first vital question is whether this factorisation actually holds, up to power-suppressed corrections. A starting point is to look at the production of two colourless systems, {e.g.}\xspace of (pairs of) $W$, $Z$ or $H^0$ bosons since, for their SPS production, factorisation was proven for both the total and TMD cross sections in the 1980s~\cite{Bodwin:1984hc, Collins:1985ue, Collins:1988ig, Collins:2011zzd}. In recent years, it has been established that DPS factorisation also holds for the double Drell--Yan (DDY) process both for the case of the total cross section, and for the case where the transverse momenta of the bosons are measured (double TMD, or DTMD, case)~\cite{Diehl:2015bca,Diehl:2016eyd,Diehl:2018wfy,Gaunt:2018eix,Vladimirov:2017ksc, Buffing:2017mqm}.
The steps that need to be taken to demonstrate factorisation for the latter case are schematically shown in Fig.~\ref{fig:DDYfact}.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{figures/DDY_fact.pdf}
\caption{Diagrammatic illustration of the steps to achieve a proof of factorisation for the double Drell--Yan cross section at a given transverse momentum. The green blobs represent the right- and left-moving protons, the yellow blobs the hard interactions, and the orange blob the soft interactions. The graph is for the cross section, with the vertical line indicating the final-state cut. [Figure modified from~\cite{Diehl:2018wfy}].}
\label{fig:DDYfact}
\end{figure}
Certain contributions to DPS overlap with (loop) contributions to single scattering, while others overlap with more exotic scattering mechanisms, such as higher-twist contributions or DPS--SPS interference. A consistent factorisation framework for DPS should avoid double counting between DPS, single scattering, and these other mechanisms. A framework that achieves this, and maintains a description of the DPS part in terms of separate, rigorously defined dPDFs for each hadron, was developed in~\cite{Diehl:2017kgu} (other proposals were made earlier \cite{Blok:2011bu, Gaunt:2011xd, Manohar:2012pe, Gaunt:2012dd}, although these did not have this last property). The application of the approach of \cite{Diehl:2017kgu} to the DTMD case is described in~\cite{Buffing:2017mqm,Gaunt:2018eix}.
Whether factorisation holds for other DPS processes, including those involving quarkonium production, is less clear. Whatever factorisation-breaking complications apply for SPS of quarkonium, via the CO channel at a given $\ensuremath{P_T}\xspace$ (Section~\ref{sec:TMDchallenges}), are expected to be carried over to the DPS case.
\subsubsection{Evolution of dPDFs}
Apart from DPS factorisation, a second key element for the computation of double parton cross sections via \ce{eq:dps_xsec} is to control the scale evolution of the dPDFs. Double parton distributions $F_{ab} (x_i, y, \mu_i)$, with $i = 1,2$, enter the DPS factorised cross section through the parton luminosities $\mathcal{L}_{a_1 a_2 b_1 b_2} = \int \! \mathrm{d}^2 \tvec{y} \, F_{a_1 a_2} (y) \, F_{b_1 b_2} (y)$, where $\tvec{y}$ is the transverse separation between the two partons.
A measurement of the functional form of the dPDFs is at present not yet available, and DPS phenomenological studies rely on model Ans\"{a}tze. However, their scale dependence is well known~\cite{Snigirev:2003cq, Gaunt:2009re, Diehl:2011yj, Blok:2011bu, Manohar:2012jr} and is analogous to the familiar one for PDFs. Double PDFs evolve in the energy scales $\mu_i$ according to generalised DGLAP equations
\begin{align} \label{eq:dps:double_dglap}
\frac{d}{d \log \mu^2_1} \, F_{a_1 a_2} (x_i, y, \mu_i) &= \left( P^{(1)}_{a_1 c} \underset{1}{\otimes} F_{c a_2} \right) (x_i, y, \mu_i) \,, \\
\frac{d}{d \log \mu^2_2} \, F_{a_1 a_2} (x_i, y, \mu_i) &= \left( F_{a_1 c} \underset{2}{\otimes} P^{(2)}_{c a_2} \right) (x_i, y, \mu_i) \,, \notag
\end{align}
where $P^{(1,2)}_{ab}$ are the splitting functions, $y=|\tvec{y}|$ and the convolution $\otimes$ is performed in $x_1$ or $x_2$.
This set of equations is valid for the $y$-dependent dPDFs: integration over $y$ introduces an additional inhomogeneous term in \ce{eq:dps:double_dglap}.
Given their dependence on many parameters, and their $\mathcal{O}(N_f^2)$ multiplicity, dPDFs are challenging to handle numerically, both in terms of memory footprint and computation time. The LO double-DGLAP evolution for the $y$-integrated dPDFs was first studied in~\cite{Gaunt:2009re}, and a publicly available dPDF set (GS09 \cite{Gaunt:website}) was provided based on a product Ansatz $F_{ab} = f_a \cdot f_b \cdot \Phi$, where $f_{a,b}$ are regular PDFs, and $\Phi$ is a suppression factor. Recent progress has been made with a new tool called ChiliPDF~\cite{Nagar:2019njl,Diehl:2020uuu}. This tool can solve the double-DGLAP equations up to NNLO, including $\mathcal{O}(\alpha_s^2)$ matching at the flavour-transition scales, in a fast and relatively lightweight way, with a working numerical precision below $\mathcal{O}(10^{-4})$ for $x_1 + x_2 < 0.8$, far finer than the current theory uncertainties on dPDFs (Fig.~\ref{fig:dps:dpds}, Left). Quark-mass effects can be sizeable for dPDFs. Considering as boundary condition for \ce{eq:dps:double_dglap} the perturbative splitting from PDFs into dPDFs (denoted as ``\textbf{1}'', as opposed to the product Ansatz ``\textbf{2}'' \cite{Diehl:2017kgu}), the inclusion of the heavy-quark masses by matching at the flavour-transition scales
introduces large scale uncertainties on the evolved dPDFs (Fig.~\ref{fig:dps:dpds}, centre). The insertion of the gluon splitting into massive quarks in the ``\textbf{1}'' dPDFs visibly reduces the uncertainties even at LO, as shown in Fig.~\ref{fig:dps:dpds} (Right)~\cite{Diehl:2020vvv}.
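The structure of \ce{eq:dps:double_dglap} can be illustrated with a minimal numerical sketch. The toy example below (an invented splitting function and a frozen coupling, not the real QCD kernels, and a homogeneous evolution at fixed $y$) performs Euler steps of the two evolutions on a discretised $(x_1,x_2)$ grid; since each kernel convolves a different momentum fraction, the order in which the $\mu_1$ and $\mu_2$ evolutions are applied is immaterial.

```python
# Toy illustration of the double-DGLAP structure: two homogeneous evolutions,
# one per scale/momentum fraction.  The splitting function P(z) = z(1-z) and
# the fixed coupling are invented stand-ins, NOT the real QCD kernels.
N = 30                                   # grid points per momentum fraction
xs = [(i + 1) / N for i in range(N)]     # x grid on (0, 1]
A2PI = 0.02                              # toy alpha_s / (2*pi), frozen

def P(z):
    return z * (1.0 - z)                 # toy splitting function

def conv(F, axis):
    """(P otimes F)(x1, x2), convolved in x1 (axis=0) or x2 (axis=1)."""
    G = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            x = xs[i] if axis == 0 else xs[j]
            nz, s = 20, 0.0
            dz = (1.0 - x) / nz
            for k in range(nz + 1):      # trapezoid rule in z over [x, 1]
                z = x + k * dz
                w = 0.5 if k in (0, nz) else 1.0
                m = min(N - 1, max(0, round(x / z * N) - 1))  # nearest x/z node
                Fv = F[m][j] if axis == 0 else F[i][m]
                s += w * dz * P(z) / z * Fv
            G[i][j] = s
    return G

def evolve(F, axis, dlog_mu2, steps):
    """Euler steps of dF/dlog(mu_i^2) = (alpha_s/2pi) * (P conv F) in x_i."""
    for _ in range(steps):
        C = conv(F, axis)
        F = [[F[i][j] + A2PI * dlog_mu2 * C[i][j] for j in range(N)]
             for i in range(N)]
    return F

# toy initial dPDF: product of valence-like single-parton shapes
F0 = [[(1 - x1) ** 3 * (1 - x2) ** 3 / (x1 * x2) ** 0.5
       for x2 in xs] for x1 in xs]

F12 = evolve(evolve(F0, 0, 0.5, 6), 1, 0.5, 6)   # evolve mu1 first, then mu2
F21 = evolve(evolve(F0, 1, 0.5, 6), 0, 0.5, 6)   # evolve mu2 first, then mu1

diff = max(abs(F12[i][j] - F21[i][j]) for i in range(N) for j in range(N))
print("max ordering difference:", diff)          # the two evolutions commute
```

A realistic implementation must of course use the full flavour-coupled NLO/NNLO kernels and a proper interpolation grid, which is precisely what tools such as ChiliPDF provide.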
\begin{figure}[h!]
\centering
\includegraphics[width=0.42\textwidth,draft=false]{./figures/dpd_accu_e-7_mu3_x3_all_log.pdf}
\includegraphics[width=0.28\textwidth,draft=false]{./figures/luminosity_plot_mjet_yc_scalevar_bbbar.pdf}
\includegraphics[width=0.28\textwidth,draft=false]{./figures/luminosity_plot_mjet_yc_massive_scalevar_bbbar.pdf}
\caption{Left: Relative accuracy reached by ChiliPDF for a set of dPDFs ($F_{gg}$, $F_{u\bar{u}}$, $F_{ug}$, $F_{sg}$) evolved to $\mu_1 = \mu_2 = 100 \, \text{GeV}$ at NNLO in the variable-flavour-number (VFN) scheme, as a function of $x_1$ at fixed $x_2 = 0.3$.
Centre: LO symmetrised luminosity $\mathcal{L}_{b\bar{b}\bar{b}b}$ for pure splitting (``1v1''), pure product (``2v2''), and mixed (``1v2+2v1'') combinations of dPDF Ans\"{a}tze, at $\mu_1 = \mu_2 = 25 \, \text{GeV}$, with the rapidities of the two final-state systems set to $Y_1 = 0$ and $Y_2 = Y$. The lines correspond to variations of the matching scales $\mu_{c,b} = (1\dots 5)\cdot m_{c,b}$ in the splitting Ansatz. Right: Same as centre plot, but with the inclusion of the gluon splitting into massive $c\bar{c}$ and $b\bar{b}$ pairs. [Left and centre figures are taken from~\cite{Nagar:2019njl}].}
\label{fig:dps:dpds}
\end{figure}
\subsubsection{Impact of parton correlations on \texorpdfstring{$\sigma_{\rm eff}$}{sigma(eff)}}
The pocket formula (\ce{eq:pocketDPS}) provides the baseline purely geometric DPS cross section expected in the absence of any parton correlations. Obviously, longitudinal, transverse, and spin correlations, among others, are present at the parton level, and are expected to modify the $\sigma_{\rm eff}$ value extracted from the ratio of the product of SPS cross sections to the DPS cross section~\cite{Rinaldi:2013vpa,Rinaldi:2014ddl,Rinaldi:2016jvu}. The role of these effects in digluon distributions, fundamental in gluon-initiated processes such as those relevant for quarkonium production, has been studied in~\cite{Rinaldi:2018bsf} in the covariant relativistic Light-Front (LF) approach adopted to calculate the dPDFs. In this case, rotations between the canonical and the LF spin induce model-independent correlations that prevent a factorisation of the $(x_1,x_2)$ and $b_\perp$ dependences (\cf{fig:dPDFcorrel}, Left), where $b_\perp$ is the transverse partonic distance. By properly considering general features of moments of dPDFs, the following relationship has been derived for $\sigma_{\rm eff}$~\cite{Rinaldi:2018slz,Rinaldi:2018bsf}:
\begin{align}
\dfrac{\sigma_{\rm eff}}{3 \pi} \leq \langle b_\perp^2 \rangle \leq
\dfrac{\sigma_{\rm eff}}{ \pi}~.
\end{align}
The right panel of Fig.~\ref{fig:dPDFcorrel} shows how data on $\sigma_{\rm eff}$ can thereby constrain the mean transverse distance between two partons.
\begin{figure}[htpb!]
\centering
\includegraphics[scale=0.77]{./figures/ratio_WJ_44.pdf} \quad
\raisebox{6pt}{\includegraphics[scale=0.2]{./figures/errori10.png}}
\caption{Left: Ratio $r_b(x_1,x_2,b_\perp,Q^2)$ quantifying the impact of relativistic correlations on digluon distributions~\cite{Rinaldi:2018bsf}. This quantity would be equal to 1 if parton correlations were absent. Black and yellow curves show the results of different models of double parton distribution functions for $Q^2 = m^2_H$ (full and dashed) and $Q^2 = 4m^2_c$ (dot-dashed and dotted).
Right: Range of allowed transverse partonic distances obtained from the extracted mean values of $\sigma_{\rm eff}$. [Figure adapted from~\cite{Rinaldi:2018slz}.]}
\label{fig:dPDFcorrel}
\end{figure}
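The bound above translates directly into numbers. The back-of-envelope sketch below is purely illustrative: it takes the representative value $\sigma_{\rm eff} \approx 15$~mb (the rough average of the extractions discussed in this section) and the conversion $1~\text{mb} = 0.1~\text{fm}^2$.

```python
import math

# Bound sigma_eff/(3*pi) <= <b_perp^2> <= sigma_eff/pi, evaluated for a
# representative sigma_eff = 15 mb (1 mb = 0.1 fm^2); illustrative numbers only.
sigma_eff_fm2 = 15.0 * 0.1                 # 15 mb expressed in fm^2
b2_lo = sigma_eff_fm2 / (3.0 * math.pi)    # lower bound on <b_perp^2>
b2_hi = sigma_eff_fm2 / math.pi            # upper bound on <b_perp^2>
print(f"rms transverse distance between {math.sqrt(b2_lo):.2f} "
      f"and {math.sqrt(b2_hi):.2f} fm")    # -> between 0.40 and 0.69 fm
```

The resulting mean interparton distances are a few tenths of a fm, i.e.\ comfortably inside the proton, as expected.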
Since correlations between the spin of partons have direct consequences on the angular distribution of the particles produced in the final state, it has been proposed to study various asymmetries in DPS processes to extract information on correlated quantum properties of two partons inside a proton. A calculation of double same-sign $W$-boson production cross sections has recently demonstrated that spin correlations can have large effects on the distribution of particles, and that the HL-LHC phase (if not before) opens up the possibility to measure them~\cite{Cotogno:2020iio,Cotogno:2018mfv}. A promising variable for spin correlation measurements is the asymmetry between the DPS cross section for the case when the leptons from the $W$-boson decay go towards the same or opposite hemispheres. The rightmost panel of Fig.~\ref{fig:spin_in_dps} shows the estimated significance of a possible observation of such an asymmetry as a function of the integrated luminosity collected. Even a {\it null measurement}, {i.e.}\xspace a precisely measured zero asymmetry, would be interesting, as it would severely constrain the spin correlations inside the proton.
\begin{figure}[h!]
\centering
\includegraphics[width=0.30\textwidth,draft=false]{./figures/spin_in_dps_4_all_etaProd_S72_Bin2_fit1.pdf}
\includegraphics[width=0.30\textwidth,draft=false]{./figures/spin_in_dps_4_all_etaProd_S72_Bin2_fit2.pdf}
\includegraphics[width=0.36\textwidth,draft=false]{./figures/spin_in_dps_SigmaEst3.pdf}
\caption{Results of template fits in Scenario 1 (correlated DPS with uncorrelated extraction, Left) and Scenario 2 (uncorrelated DPS with correlated extraction, centre) for the product of pseudorapidity densities of same-sign leptons from the decay of DPS $W+W$ production. Right: Estimate of the significance (in standard deviations) of an assumed asymmetry of 0.11 for a signal cross section of 0.29~fb. Blue line/band corresponds to $\mu^+\mu^+$ only, while the red line/band includes all positively charged combinations of $e^+$ and $\mu^+$. Dashed curves show the sensitivity of the central red curve to changes in the asymmetry of $\pm$20\% (orange dashed curves) and the magnitude of the DPS cross section by a factor of $3/2$ or $3/4$ (green dashed curves). [Plots are taken from~\cite{Cotogno:2020iio}].
\label{fig:spin_in_dps}}
\end{figure}
\cf{fig:spin_in_dps} (Left and Centre) shows two template fits to a combination of DPS signal and backgrounds. In Scenario 1 (Fig.~\ref{fig:spin_in_dps}, Left) partons are assumed to be correlated, but the extraction assumes uncorrelated DPS. In Scenario 2 (Fig.~\ref{fig:spin_in_dps}, centre) the roles are reversed. %
Assuming an underlying effective cross section of $\sigma_{\rm eff} = 15$~mb, the values for the fiducial cross sections and associated $\sigma_{\rm eff}$ derived after analyses of the angular distributions in the two scenarios are: $\sigma_{{\rm DPS}} = 0.59$~fb, $\sigma_{\rm eff} = 12.2$~mb, and
$\sigma_{{\rm DPS}} = 0.44$~fb, $\sigma_{\rm eff} = 16.4$~mb, respectively. As one can see, the 30\% span of the DPS production cross section and the corresponding variation of $\sigma_{\rm eff}$, found in this simple treatment, illustrate the danger of neglecting correlations in DPS measurements in general, and of using correlation-sensitive variables in template fits in particular. %
Quantum number correlations will also be present in DPS involving one or more quarkonium states. The size of these effects is largely unknown, and the lessons learned from double-$W$ production can be directly applied to DPS quarkonium production. The smaller momentum fractions probed tend to decrease the relevance of correlations, but the corresponding lower energy scale tends to increase their effects. If difficulties in isolating the DPS contribution in quarkonium production can be overcome, the large DPS cross sections provide unique possibilities to study interparton correlations.
\subsection{DPS studies with $\pazocal{Q}$}
\label{sec:DPSonia}
\subsubsection{Current status}
Thanks to their large production yields in hadronic collisions, multiple measurements exist now of the cross sections for the production of two quarkonium states, or a quarkonium plus another high-$\ensuremath{P_T}\xspace$ or heavy particle, in proton-(anti)proton collisions at the LHC and Tevatron.
The corresponding SPS studies, as tools to understand the quarkonium production mechanism itself, are discussed in Sec.~\ref{sec:oniumassociate}. The measurements can be generally categorised as diquarkonium processes: $\ensuremath{J/\psi}\xspace+\ensuremath{J/\psi}\xspace$~\cite{Aaij:2011yc,Abazov:2014qba,Khachatryan:2014iia,Aaboud:2016fzt,Aaij:2016bqq}, $\ensuremath{J/\psi}\xspace+\Upsilon$~\cite{Abazov:2015fbl}, and $\Upsilon+\Upsilon$~\cite{Khachatryan:2016ydm,Sirunyan:2020txn}, quarkonium in association with a vector boson: $\ensuremath{J/\psi}\xspace+W^{\pm}$~\cite{Aad:2014rua,Aaboud:2019wfr} and $\ensuremath{J/\psi}\xspace+Z$~\cite{Aad:2014kba}, or with an open heavy-flavour hadron: $\ensuremath{J/\psi}\xspace$+open-charm hadron~\cite{Aaij:2012dz}, $\Upsilon$+open-charm hadron~\cite{Aaij:2015wpa}. All these processes have recently been reviewed in~\cite{Lansberg:2019adr}.
The standard DPS measurements proceed as follows. Since the two scatterings in DPS are by nature largely uncorrelated kinematically, the DPS contribution preferentially populates the regions with larger azimuthal and rapidity separations between the two produced objects than the SPS production mechanisms. The $y$- and $\phi$-differential cross sections measured in data are compared to the expectations of SPS models, and any excess with respect to the SPS predictions is attributed to DPS contributions.
An example of a differential production cross section in bins of rapidity difference (of two $\ensuremath{J/\psi}\xspace$ mesons), is shown in~\cf{fig:DPSexp}\,(Left). This plot also illustrates a typical misconception of theoretical uncertainties in which the shapes of the extremal curves of an uncertainty band are assumed to give the shape of all the possible theory curves. Theory uncertainty bands rather indicate where one expects to find curves computed at a higher precision; these may not follow the same shape. In the present case, such an approximation artificially underestimates the SPS uncertainty and overestimates the
discriminating power of the $| \Delta y|$ spectrum between the DPS and SPS contributions.
\begin{figure}[h!]
\centering
\includegraphics[width=0.475\textwidth]{./figures/diJpsi_deltay_13TeV_LHCb.pdf}
\includegraphics[width=0.515\textwidth]{./figures/sigmaeff_2020.pdf}
\caption{Left: Differential $\ensuremath{J/\psi}\xspace$-pair production cross section as a function of rapidity difference~$\left| \Delta y \right|$ of two $\ensuremath{J/\psi}\xspace$ mesons measured by LHCb in \ensuremath{pp}\xspace collisions at $\ensuremath{\sqrt{s}}=13$ TeV, compared to SPS (various models, without uncertainties) and DPS predictions.
Right: Comparison of $\sigma_{\rm eff}$ values extracted with different processes in \ensuremath{pp}\xspace and \ensuremath{p\bar{p}}\xspace\ collisions. [Left plot is from~\cite{Aaij:2016bqq}, and right plot is from~\cite{Shao:2020lqk}.]
\label{fig:DPSexp}}
\end{figure}
From the derived value of $\sigma_{{\rm DPS}}$, one can then extract the associated effective cross section by taking its ratio to the product of the corresponding SPS cross sections as per \ce{eq:pocketDPS}, $\sigma_{\rm eff} \propto (\sigma^{H_1}_{\rm SPS}\sigma^{H_2}_{\rm SPS})/\sigma^{H_1+H_2}_{\rm DPS}$. Smaller values of $\sigma_{\rm eff}$ correspond to larger DPS cross sections. \cf{fig:DPSexp}\,(Right) summarises the current status of $\sigma_{\rm eff}$ extractions, based on the DPS pocket formula, with different final states~\cite{Abe:1993rv,Abe:1997xk,Abazov:2009gc,Aad:2013bjm,Chatrchyan:2013xxa,Abazov:2014qba,Lansberg:2014swa,Aaboud:2016fzt,Aaij:2012dz,Aaij:2015wpa,Shao:2016wor,Lansberg:2016rcx,Lansberg:2016muq,Lansberg:2017chq,Shao:2020kgj}. Values of $\sigma_{\rm eff}\approx 2$--30~mb have been derived, though with large uncertainties, with a simple (unweighted) average giving $\sigma_{\rm eff} \approx 15$~mb. %
%
This summary plot indicates that $\sigma_{\rm eff}$ is smaller when derived from quarkonium measurements in the ATLAS and CMS experiments, which typically cover central rapidities and require $\ensuremath{J/\psi}\xspace$ mesons with relatively large $\ensuremath{P_T}\xspace$ in order to ensure that the decay muons can reach the muon chambers. In contrast, the other (forward) quarkonium-based extractions lead to larger $\sigma_{\rm eff}$ values, indicating smaller DPS contributions.
Such differences can be interpreted as indicative of a non-universality of $\sigma_{\rm eff}$ between different kinematic ranges (LHCb vs ATLAS/CMS) ({e.g.}\xspace\ due to the different relative weight of gluon vs.\ quark initial states), of non-universal parton correlations, and/or can be attributed to poorly controlled subtractions of SPS contributions. A typical example of the last point is the production of a $\ensuremath{J/\psi}\xspace$ meson in association with a $D^0$ meson discussed in~\cite{Shao:2020kgj}. This more recent and more refined analysis with improved SPS calculations yields a factor-of-two larger $\sigma_{\rm eff}$ value than the one presented in the original LHCb paper~\cite{Aaij:2012dz}, %
where the SPS contribution was assumed to be negligible.
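The extraction logic just described amounts to a one-line inversion of the pocket formula. The sketch below is a hypothetical helper with purely illustrative inputs (chosen to give a round $\sigma_{\rm eff}$, not taken from any measurement), assuming the usual symmetry-factor convention of $\mathpzc{m}=1$ for identical and $\mathpzc{m}=2$ for distinguishable final states.

```python
def sigma_eff_mb(sps1_nb, sps2_nb, dps_nb, identical):
    """Hypothetical helper: invert sigma_DPS = (m/2) * sps1 * sps2 / sigma_eff.

    Inputs in nb, result in mb (1 mb = 1e6 nb); m = 1 for identical final
    states and m = 2 for distinguishable ones (usual convention, assumed here).
    """
    m = 1.0 if identical else 2.0
    return (m / 2.0) * sps1_nb * sps2_nb / dps_nb / 1.0e6

# illustrative di-quarkonium-like magnitudes: two 15 microbarn SPS processes
# and a 7.5 nb DPS yield correspond to sigma_eff = 15 mb
print(sigma_eff_mb(15e3, 15e3, 7.5, identical=True))   # -> 15.0
```

The point of the discussion above is that the numerator entering such an inversion is itself theory-dependent: any mis-modelled SPS contamination in the measured yield shifts the extracted $\sigma_{\rm eff}$ accordingly.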
Similar caution is necessary when using the associated production of \ensuremath{\Upsilon}\xspace plus open flavour to extract DPS cross sections, as done by the LHCb Collaboration with the $\Upsilon+D$ final state~\cite{Aaij:2015wpa}. As shown in~\cite{Karpishkov:2019vyt}, taking into account NRQCD CO processes, feed-down decays, and $g\to D$ fragmentation contributions, %
one can obtain SPS cross section values of the same order as observed in the data. Moreover, the initial-state radiation effects, considered in~\cite{Karpishkov:2019vyt} within the HE factorisation, lead to kinematic distributions very similar to those observed in the experiment (with the notable exception of the $\Delta\phi$ distribution, which has a hard-to-explain enhancement towards $\Delta\phi\to 0$ in the data). Therefore, the conclusion that $\Upsilon+D$ production is dominated by DPS is significantly weakened.
%
The importance of appropriately controlling the SPS production mechanism before attempting to extract any DPS cross section is further illustrated in Fig.~\ref{fig:diJpsi_DY_CMS} for double-\ensuremath{J/\psi}\xspace\ production. In NRQCD factorisation, the total cross section of (direct) $\ensuremath{J/\psi}\xspace$ pair production is dominated by the double CS ${}^3S_1^{[1]}$ contribution, as confirmed both in collinear factorisation up to NLO accuracy~\cite{Lansberg:2013qka,Sun:2014gca,Lansberg:2014swa,Lansberg:2019fgm} and in HE factorisation~\cite{He:2019qqr}, discussed in Section~\ref{sec:HEfactorisation}. On the other hand, in the CEM with collinear factorisation at LO and NLO accuracy, such a contribution is absent, thereby leading to an underestimation of the SPS cross sections in the whole $\Delta y$ range~\cite{Lansberg:2020rft} (Fig.~\ref{fig:diJpsi_DY_CMS}, Left). If the CEM prediction were to be trusted, then practically the whole double-$\ensuremath{J/\psi}\xspace$ production cross section would have to be attributed to DPS, which would lead to a correspondingly reduced value of $\sigma_{\rm eff}$ and would require unrealistically strong partonic correlations to describe the $\ensuremath{J/\psi}\xspace$ momenta distributions observed in data.
\begin{figure}[h!]
\centering
\includegraphics[width=0.475\textwidth]{figures/double_Jpsi_Deltay_CMS7TeV-240420.pdf}
\includegraphics[width=0.51\textwidth]{figures/cmsyy23_JPL.pdf}
\caption{Differential $\ensuremath{J/\psi}\xspace$-pair production cross section as a function of the rapidity difference between the two $\ensuremath{J/\psi}\xspace$ mesons, $\left| \Delta y \right|$, measured by the CMS experiment in \ensuremath{pp}\xspace collisions at $\ensuremath{\sqrt{s}}=7$ TeV~\cite{Khachatryan:2014iia}, compared to the predictions of the LO and NLO CEM (Left) and of the HE factorisation (Right). [Left plot is from~\cite{Lansberg:2020rft}, and right plot adapted from~\cite{He:2019qqr}].}
\label{fig:diJpsi_DY_CMS}
\end{figure}
In addition to the effects related to unknowns arising in the SPS cross section, further caution must be exercised with the assumptions on the DPS cross section itself. In particular, as discussed above, the shape of the DPS signal is unknown in the presence of parton correlations and can have an impact on the extractions of the DPS cross section.
In the next section, we discuss how upcoming measurements, in particular with the new opportunities opened up at the HL-LHC, can help to clarify the aforementioned experimental and theoretical issues and thereby improve our understanding of DPS processes with quarkonium-based studies.
\subsubsection{HL-LHC prospects}
A first step to exploit quarkonium DPS measurements in order to extract quantitative information on the hadronic wave functions (in particular, on the various underlying sources of partonic correlations) and their energy evolution, is to understand the wide span of $\sigma_{\rm eff}$ extractions shown in Fig.~\ref{fig:DPSexp} (Right). As mentioned above, leading sources of confusion are the different techniques used in each measurement to determine and remove the contamination from SPS contributions in the DPS signal region.
Key to the ability to impose more stringent cuts and to better probe different corners of phase space are, firstly, the very large data samples and, secondly, the upgraded charged-particle tracking over a wide pseudorapidity range, $|\eta| \lesssim 5$ (with the muon acceptance extended by half a unit, up to $|\eta| \lesssim 3$), in the ATLAS and CMS detectors during the HL-LHC phase. Both advantages will allow a better study of the azimuthal and rapidity separations between quarkonium states simultaneously produced in SPS and DPS processes. The following concrete experimental proposals are suggested for DPS studies at the HL-LHC:
\begin{itemize}
\item %
In order to better extract the DPS signal from the data, more advanced multivariate analyses of the relevant quarkonium-pair kinematic variables ($y_{ij}$, $\phi_{ij}$, ${\ensuremath{P_T}\xspace}_{ij}$,...) should be carried out, rather than the simple cut-based analyses used so far. Only SPS predictions with the highest available accuracy (ideally, at least, NLO plus resummation and/or parton showering) should be used, and the effects of variations in the shape of the DPS cross section should be explicitly investigated. The theoretical uncertainty associated with the SPS cross sections should be properly propagated into any experimentally extracted DPS cross section and $\sigma_{\rm eff}$ value.
\item Final states with $\ensuremath{\psi(2S)}\xspace$ mesons, free of feed-down contributions in contrast to the $\ensuremath{J/\psi}\xspace$ mesons commonly studied so far, should be considered.
High-statistics measurements of the $\Delta\phi$-differential distribution of $Q\bar Q$-pair production at $|\Delta y|\gtrsim2.5$, where all SPS models tend to fail, should be performed.
The $\ensuremath{J/\psi}\xspace + \ensuremath{\psi(2S)}\xspace$ and $\ensuremath{J/\psi}\xspace + \chi_{c}$ final states are of particular interest as they can provide new ways to differentiate the SPS and DPS contributions
since their feed-down fractions to $\ensuremath{J/\psi}\xspace$ pairs are significantly different when they are produced by SPS and DPS~\cite{Lansberg:2014swa,Lansberg:2019adr}.
\item The production of charmonium ({e.g.}\xspace $\ensuremath{\psi(2S)}\xspace$ as an essentially feed-down-free state) plus a $B$-meson (or a non-prompt \ensuremath{J/\psi}\xspace) is an interesting candidate for future DPS studies, since the leading $v$ contribution from the CSM is suppressed at LO~\cite{Lansberg:2019adr} and the $B$ fragmentation function is better controlled than that of the $D$.
\item Beyond the first study of the associated production of a $\ensuremath{J/\psi}\xspace$ with a charmed meson carried out by LHCb~\cite{Aaij:2012dz}, it will be instructive to perform precise comparisons of the $\ensuremath{P_T}\xspace$ spectra associated with different charm hadrons and those produced alone.
With more precise measurements, it will be possible to confirm the hint of a slight difference in the \ensuremath{P_T}\xspace\ spectrum of the $\ensuremath{J/\psi}\xspace$ produced alone or with a charmed hadron. Such a confirmation would disfavour DPS dominance, in line with the conclusions of~\cite{Shao:2020kgj}. Obviously, this could be complemented by the extraction of the DPS yields using data from control regions and DPS MC simulations.
\item The unique feature of the ALICE detector for quarkonium studies is a forward muon system, covering $2.5<\eta<4$, combined with a central-barrel tracking/PID system ($|\eta|<0.9$) to achieve pseudorapidity differences $|\Delta\eta|$ up to 4.9, exceeding the capabilities of ATLAS and CMS. %
With an expected \ensuremath{pp}\xspace-collision data set corresponding to $\sim0.2$ fb$^{-1}$, measurements of $D$ or $B$ mesons in the central region, associated with a quarkonium in the forward region, become possible with very low limits on $\ensuremath{P_T}\xspace$ for both objects. Compared to ATLAS and CMS, the relatively low integrated luminosity of ALICE will be compensated by the much smaller pileup probability (and lower $\ensuremath{P_T}\xspace$). %
\item During Run-2, the LHCb experiment collected around 6~fb$^{-1}$ of \ensuremath{pp}\xspace collisions at $\ensuremath{\sqrt{s}}=13$ TeV which, for double-$\ensuremath{J/\psi}\xspace$ production, translates into data samples 20 times larger than in previous DPS measurements. This data sample remains to be analysed and will make it possible to study doubly differential production cross sections, {e.g.}\xspace\ in two-dimensional bins of $\ensuremath{J/\psi}\xspace$-pair transverse momentum and invariant mass, especially probing the momentum distribution of linearly polarised gluons inside unpolarised protons~\cite{Lansberg:2017dzg,Scarpa:2019fol} (Section \ref{sec:psi-psi-TMDs}). Using the data sets expected at the HL-LHC, one can carry out a similar programme of measurements for rarer processes, such as those, for example, involving $\ensuremath{\psi(2S)}\xspace$, $\Upsilon$ or even $\eta_c$~\cite{Lansberg:2013qka}.
\item In all the above cases, quarkonium polarisation measurements can be instrumental in disentangling DPS from SPS. If the former are dominant, the quarkonium polarisation should be identical in both single and associated production.
\item \ensuremath{J/\psi}\xspace-pair production could also be studied at the FT-LHC at $\ensuremath{\sqrt{s}}=115$~GeV with large enough yields to look for possible DPS contributions~\cite{Lansberg:2015lva}. This would provide a possibly unique measurement of $\sigma_{\rm eff}$ in this energy range.
\end{itemize}
On the theoretical side, the following developments, among others, are needed to fully exploit the experimental data made available:
\begin{itemize}
\item The theoretical SPS cross sections (``subtracted'' from the experimental data in order to identify the DPS contributions) need to include the largest possible number of perturbative corrections, both for FO and resummed logarithmic terms. Additionally, and particularly relevant for quarkonium production, efforts to significantly reduce the model dependence will be crucial to isolate the DPS cross section. Predictions with limited theoretical accuracy should be avoided, as they consequently degrade the DPS cross-section extraction. %
\item Progress towards full-NLO corrections for the DPS cross sections for double-quarkonium production, including pQCD-induced partonic correlations, computed via \ce{eq:dps_xsec}, must be made. %
\item Studies should be undertaken of the impact of perturbative and non-perturbative effects on gluon-gluon double parton distribution functions calculated within phenomenological models, such as constituent quark models.
\item A consistent treatment of heavy-quark-mass thresholds and the evaluation of their numerical effect in DPS cross sections are required.
\item Cross-section calculations should be performed for double quarkonium production including explicitly the effects of parton correlations of (i) kinematic (momentum fractions, transverse separation), (ii) quantum (flavour, spin, colour, fermion number), and (iii) mixed (involving interplay between the two) origins.
\item Explicit studies of the $x$-dependence of the effective cross section $\sigma_{\rm eff}$, and identification of experimental observables sensitive to such an evolution, are required.
\end{itemize}
\subsection{TPS studies with $\pazocal{Q}$ in \ensuremath{pp}\xspace collisions}
As discussed in the previous Section, the wide span of $\sigma_{\rm eff}$ extractions based on the DPS pocket formula for double-quarkonium measurements (Fig.~\ref{fig:DPSexp}, right) calls for alternative studies that can shed light on the origin of the ranges of derived values. In~\cite{dEnterria:2016ids}, it was pointed out for the first time that the study of triple parton scatterings (TPS) can further help to independently improve our understanding of the transverse proton profile and estimate the impact of parton correlations. The pocket formula for triple parton scattering reads, based on \ce{eq:pocketNPS},
\begin{equation}
\sigma^{hh' \to H_1+H_2+H_3}_{\rm TPS} = \left(\frac{\mathpzc{m}}{3!}\right)\, \frac{\sigma^{hh' \to H_1}_{\rm SPS} \cdot
\sigma^{hh' \to H_2}_{\rm SPS} \cdot \sigma^{hh' \to H_3}_{\rm SPS}}{\sigma_{\rm eff,{TPS}}^2}, \;\;\;\mbox{with}\;\;\;
\sigma_{\rm eff,{TPS}}^2=\left[ \int d^2b \,T^3(b)\right]^{-1}\,.
\label{eq:sigmaTPS}
\end{equation}
In this purely geometric approach, it was demonstrated that, for a wide range of proton transverse profiles (encoded in the cube of the overlap function $T^3(b)$), the triple and double effective cross sections are in fact proportional and numerically very similar~\cite{dEnterria:2016ids}:
\begin{equation}
\sigma_{\rm eff,{TPS}} = k\times\sigma_{\rm eff}, \; {\rm with}\;\; k = 0.82\pm 0.11\,.%
\label{eq:TPS_DPS_factor}
\end{equation}
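This proportionality is easy to check explicitly for a given profile. The sketch below assumes a simple Gaussian overlap function (one possible choice of transverse profile, used here purely for illustration), integrates $T^2(b)$ and $T^3(b)$ numerically, and recovers $k = \sqrt{3}/2 \simeq 0.87$, inside the quoted band; for this profile the ratio is independent of the width $B$, since both effective cross sections scale linearly with $B$.

```python
import math

# Check sigma_eff,TPS = k * sigma_eff for a Gaussian overlap function
# T(b) = exp(-b^2/(2B)) / (2*pi*B), normalised so that the d^2b integral is 1.
# For this profile k = sqrt(3)/2 ~ 0.866, independent of the width B.
B = 1.0  # width parameter, arbitrary units

def moment(n, nstep=100000, bmax=15.0):
    """Integral of T(b)^n over d^2b via the trapezoid rule in the radius b."""
    db = bmax / nstep
    total = 0.0
    for i in range(nstep + 1):
        b = i * db
        w = 0.5 if i in (0, nstep) else 1.0
        T = math.exp(-b * b / (2.0 * B)) / (2.0 * math.pi * B)
        total += w * db * 2.0 * math.pi * b * T ** n
    return total

sigma_eff = 1.0 / moment(2)                  # = 4*pi*B analytically
sigma_eff_tps = 1.0 / math.sqrt(moment(3))   # = 2*sqrt(3)*pi*B analytically
k = sigma_eff_tps / sigma_eff
print(f"k = {k:.3f}")                        # -> k = 0.866
```

Other reasonable profiles shift $k$ only mildly, which is the content of the quoted $k = 0.82 \pm 0.11$.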
Therefore, from the $\sigma_{\rm eff,{TPS}}$ values extracted from the data, one can derive independent values of $\sigma_{\rm eff}$. However, since TPS cross sections depend on the cube of the corresponding SPS cross sections, a triple hard process $\ensuremath{pp}\xspace\to H_i+H_i+H_i$, with SPS cross sections %
$\sigma_{\rm SPS}^{\ensuremath{pp}\xspace\to H_i}\approx \rm 1\;\mu b$, %
has a very small TPS cross section, %
$\sigma_{\rm TPS}^{\ensuremath{pp}\xspace \to H_i+H_i+H_i}\approx 1$~fb,
and perturbative processes with large enough SPS cross sections are needed in order to obtain a visible number of events. The very large quarkonium yields expected at the HL-LHC allow one to carry out such TPS studies for the first time~\cite{dEnterria:2016ids}.
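The $1~\mu$b $\to 1$~fb scaling quoted above follows directly from \ce{eq:sigmaTPS}. A rough numerical check (taking $\mathpzc{m}=1$ for three identical systems and $\sigma_{\rm eff,TPS} \approx 0.82 \times 15$~mb from \ce{eq:TPS_DPS_factor}; both inputs are only representative):

```python
# Order-of-magnitude check: sigma_TPS = (1/3!) * sigma_SPS^3 / sigma_eff,TPS^2
# with sigma_SPS = 1 microbarn and sigma_eff,TPS = 0.82 * 15 mb (rough inputs).
FB_PER_MICROBARN = 1.0e9   # 1 microbarn = 1e9 fb
FB_PER_MB = 1.0e12         # 1 mb = 1e12 fb
sigma_sps = 1.0 * FB_PER_MICROBARN
sigma_eff_tps = 0.82 * 15.0 * FB_PER_MB
sigma_tps = sigma_sps ** 3 / (6.0 * sigma_eff_tps ** 2)
print(f"sigma_TPS ~ {sigma_tps:.1f} fb")   # -> sigma_TPS ~ 1.1 fb
```

At the fb level only multi-ab$^{-1}$ data sets, or processes with much larger SPS cross sections such as quarkonium production, give observable TPS yields.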
As TPS is a priori of subleading power with respect to single and double parton scattering, its theoretical investigation is challenging. At high scales $Q$, TPS contributions rapidly diminish as $\Lambda^4_{\rm QCD}/Q^4$ compared to SPS. On the other hand, for few-GeV-scale observables, the theoretical predictions are usually plagued by very large intrinsic theoretical uncertainties. In addition, extracting TPS contributions requires an accurate control, not only of the SPS but also of the SPS+DPS contributions, as sources of the same final states. These three facts may limit the eventual potential of TPS studies at the LHC. %
Production modes that have been studied in the literature are triple-$\ensuremath{J/\psi}\xspace$~\cite{Shao:2019qob} and triple $D\bar{D}$-meson~\cite{Maciula:2017meb} production, while other processes, like $\ensuremath{J/\psi}\xspace+$two same-sign open-charm hadrons, and $\ensuremath{J/\psi}\xspace+\ensuremath{J/\psi}\xspace$ plus open-charm production~\cite{Lansberg:2014swa}, are also worth pursuing. %
We will focus here on the triple-$\ensuremath{J/\psi}\xspace$ production process.
A complete study of triple-prompt $\ensuremath{J/\psi}\xspace$ production in \ensuremath{pp}\xspace collisions at 13 TeV has been carried out in~\cite{Shao:2019qob}, by computing SPS, DPS, and TPS contributions simultaneously for the first time based on the event generator {\sc\small HELAC-Onia}~\cite{Shao:2012iz,Shao:2015vga}. The study shows that the process receives a suppressed SPS contribution with respect to the DPS and TPS ones. Thus, it becomes a golden channel for the first-ever observation of TPS processes, and to provide new valuable insights into double-quarkonium production by comparing the value of $\sigma_{\rm eff}$ obtained from the DPS contribution measured directly in the process to that derived from the TPS yields via \ce{eq:TPS_DPS_factor}. The cumulative cross section $\sigma(pp\rightarrow 3\ensuremath{J/\psi}\xspace)\times{\rm BR}^3(\ensuremath{J/\psi}\xspace\rightarrow \mu^+\mu^-)$ after imposing the $\ensuremath{P_T}\xspace^{\ensuremath{J/\psi}\xspace}>P_{T,{\rm min}}$ cut and the rapidity gap cut $|\Delta y(\ensuremath{J/\psi}\xspace,\ensuremath{J/\psi}\xspace)|>|\Delta y|_{\rm min}$ on each $\ensuremath{J/\psi}\xspace$ pair can be found in Figs.~\ref{fig:d3psiplota} and~\ref{fig:d3psiplotb}, respectively. By assuming $100\%$ event-reconstruction efficiency, the horizontal lines in the two plots indicate the cross sections at which 100 events are collected for several integrated luminosities. In particular, with the nominal HL-LHC luminosity of 3~ab$^{-1}$, 100 events are anticipated with $\ensuremath{P_T}\xspace^{\ensuremath{J/\psi}\xspace}>7$ GeV. Moreover, Fig.~\ref{fig:d3psiplotb} shows that the minimal rapidity gap cut between $\ensuremath{J/\psi}\xspace$ pairs can be used to improve the purity of the TPS signal. Such a study was carried out by assuming zero correlation between the partonic scatterings following \ce{eq:sigmaTPS} above. 
The measurement of this novel process with HL-LHC data should definitely clarify whether such a simple geometric hypothesis is justified.
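As a quick numerical illustration of the event-yield thresholds quoted above (a sketch only, assuming the $100\%$ reconstruction efficiency stated in the text), the cross section corresponding to a given number of events is simply $N/{\cal L}_{\rm int}$:

```python
# Illustrative sketch: threshold cross section for collecting n_events at a
# given integrated luminosity, assuming 100% reconstruction efficiency.
def threshold_xsec_fb(n_events, lumi_ab_inv):
    """Return the cross section (in fb) at which n_events are collected
    with an integrated luminosity of lumi_ab_inv (in ab^-1)."""
    lumi_fb_inv = lumi_ab_inv * 1e3  # 1 ab^-1 = 1000 fb^-1
    return n_events / lumi_fb_inv

# Nominal HL-LHC luminosity of 3 ab^-1: 100 triple-J/psi events correspond to
# sigma x BR^3 of roughly 0.03 fb.
sigma_min_fb = threshold_xsec_fb(100, 3.0)
```

This is the arithmetic behind the horizontal lines shown in the figures.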
\begin{figure}[h!]
\centering
\subfloat[$P_{T,{\rm min}}$]{
\includegraphics[width=0.40\textwidth,draft=false]{./figures/dptmin_sigma_CMS-crop.pdf}\label{fig:d3psiplota}}
\hspace{0.5cm}
\subfloat[$|\Delta y|_{{\rm min}}$]{\includegraphics[width=0.40\textwidth,draft=false]{./figures/dy_sigma_nocut-crop.pdf}\label{fig:d3psiplotb}}
\caption{Dependence of the cumulative cross section of triple-prompt $\ensuremath{J/\psi}\xspace$ production ($\sigma(pp\rightarrow 3\ensuremath{J/\psi}\xspace)\times{\rm BR}^3(\ensuremath{J/\psi}\xspace\rightarrow \mu^+\mu^-)$, in fb) on the minimal transverse momentum cut $\ensuremath{P_T}\xspace^{\ensuremath{J/\psi}\xspace}>P_{T,{\rm min}}$ (Left) and on the minimal rapidity gap cut $|\Delta y(\ensuremath{J/\psi}\xspace,\ensuremath{J/\psi}\xspace)|>|\Delta y|_{\rm min}$ (Right) among the three $\ensuremath{J/\psi}\xspace$'s in \ensuremath{pp}\xspace collisions at $\ensuremath{\sqrt{s}}=13$ TeV.}
\end{figure}
\subsection{DPS and TPS studies with $\pazocal{Q}$ in {\ensuremath{pA}}\xspace\ collisions}
{\ensuremath{pA}}\xspace and \ensuremath{AA}\xspace collisions also provide new handles on improving our understanding of DPS, and in general NPS, processes. DPS~\cite{Strikman:2001gz,Cattaruzza:2004qb,dEnterria:2012jam} and TPS~\cite{dEnterria:2016yhy} are significantly enhanced in {\ensuremath{pA}}\xspace collisions compared to \ensuremath{pp}\xspace\ collisions thanks to the (much) larger transverse parton density of nuclei compared to protons. As discussed in the \ensuremath{pp}\xspace case, final states with quarkonia benefit from large production yields, which have led to the first measurements of DPS processes and will enable more detailed analyses in the future, as discussed below.
In the case of DPS, the cross section receives contributions from interactions where the two partons of the nucleus belong to the same nucleon ($\sigma_{{\rm {DPS,1}}}$) or to two different nucleons ($\sigma_{{\rm {DPS,2}}}$). The pocket formula for the DPS cross section of particles $H_1,H_2$ in {\ensuremath{pA}}\xspace collisions can be written as a function of the elementary proton-nucleon ($pN$) SPS cross sections to produce $H_1$ and $H_2$ separately as~\cite{dEnterria:2012jam}
\begin{eqnarray}
\sigma_{\rm DPS}^{{\ensuremath{pA}}\xspace\to H_1 + H_2} = \left(\frac{\mathpzc{m}}{2}\right) \frac{\sigma_{\rm SPS}^{\ensuremath{pN}\xspace \to H_1} \cdot \sigma_{\rm SPS}^{\ensuremath{pN}\xspace \to H_2}}{\sigma_{{\rm eff,{DPS}},pA}}\,,
\label{eq:sigmapA_DPS}
\end{eqnarray}%
where the effective DPS {\ensuremath{pA}}\xspace\ cross section in the denominator, $\sigma_{{\rm eff,{DPS}},pA}$, depends on the standard $\sigma_{\rm eff}$ parameter measured in \ensuremath{pp}\xspace collisions, \ce{eq:pocketDPS}, and on a pure geometric quantity, $T_{\ensuremath{AA}\xspace}(0)$, that is directly derivable from the well-known nuclear transverse profile via a Glauber model~\cite{dEnterria:2020dwq}.
The overall expected DPS enhancement in {\ensuremath{pA}}\xspace compared to \ensuremath{pp}\xspace collisions is $\sigma_{\rm eff,{DPS}}/\sigma_{{\rm eff,{DPS}},pA} \approx [A+A^{4/3}/\pi]$ which in the case of $p\mathrm{Pb}$\ amounts to a factor of $\sim$600 relative to \ensuremath{pp}\xspace, {i.e.}\xspace\ a factor of $[1+A^{1/3}/\pi]\approx$~3 higher than the naive expectation assuming the same $A$-scaling of the single parton cross sections~\cite{dEnterria:2012jam}. The relative weights of the two DPS contributions are $\sigma_{{\rm {DPS,1}}}:\sigma_{{\rm {DPS,2}}} = 0.7 : 0.3$ (for small mass number $A$), and $0.33 : 0.66$ (for large $A$)~\cite{dEnterria:2012jam}.
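The enhancement factors quoted above can be checked numerically; the following sketch simply evaluates, for lead ($A=208$), the pocket-formula scaling relations given in the text:

```python
import math

# Illustrative check of the quoted DPS enhancement in pPb over pp:
# sigma_eff,DPS / sigma_eff,DPS,pA ~ A + A**(4/3)/pi  (pocket-formula scaling)
A = 208  # mass number of lead
enhancement = A + A ** (4.0 / 3.0) / math.pi  # total pPb/pp DPS enhancement, ~600
naive = A                                     # naive A-scaling of single-parton cross sections
extra = 1 + A ** (1.0 / 3.0) / math.pi        # enhancement beyond naive A-scaling, ~3
```

By construction, the ratio `enhancement / naive` equals `extra`, i.e. the factor of $\sim$3 beyond the naive $A$-scaling quoted in the text.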
One can thus exploit such large expected DPS signals over the SPS backgrounds in {\ensuremath{pA}}\xspace\ collisions to study double parton scatterings in detail and, in particular, to extract the value of $\sigma_{\rm eff,{DPS}}$ independently of measurements in \ensuremath{pp}\xspace collisions.
In addition, recent studies incorporating impact-parameter-dependent nPDF effects~\cite{Shao:2020acd} have pointed out that the study of DPS processes in heavy-ion collisions provides useful information on the (unknown) spatial dependence of nuclear parton densities.
In the case of triple parton scatterings, a formula similar to \ce{eq:sigmapA_DPS} has been derived~\cite{dEnterria:2016ids}, which now includes three types of contributions from interactions where the three partons of the nucleus belong to the same nucleon ($\sigma_{{\rm {TPS,1}}}$), to two ($\sigma_{{\rm {TPS,2}}}$), or to three different nucleons ($\sigma_{{\rm {TPS,3}}}$). For $p\mathrm{Pb}$\ collisions, the
three TPS terms are $\sigma_{{\rm {TPS,1}}}:\sigma_{{\rm {TPS,2}}}:\sigma_{{\rm {TPS,3}}} = 1 : 4.54 : 3.56$, and their sum amounts to 9.1, namely the TPS cross sections are nine times larger than the naive expectation based on an $A$-scaling of the corresponding proton-nucleon TPS cross sections. Generic pocket formulas exist that allow the determination of the cross sections for any combination of three final-state particles, including quarkonium states in {\ensuremath{pA}}\xspace\ collisions~\cite{dEnterria:2017yhd}. Using NNLO predictions for single heavy-quark production, the authors of~\cite{dEnterria:2016ids} have shown that three $D\bar{D}$ pairs are produced from separate parton interactions in about 10\% of the $p\mathrm{Pb}$\ events at the LHC. The study of TPS in {\ensuremath{pA}}\xspace\ scattering at the HL-LHC will provide novel experimental and theoretical handles to understand double and triple parton scatterings, constrain the parton transverse profile of the proton, and clarify the role of partonic correlations in the proton and ion wave functions.
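The quoted TPS decomposition can be verified by simple arithmetic (an illustrative sketch of the numbers above):

```python
# Illustrative check of the quoted pPb TPS decomposition: with relative weights
# TPS,1 : TPS,2 : TPS,3 = 1 : 4.54 : 3.56, the total enhancement over a naive
# A-scaling of the proton-nucleon TPS cross section is their sum, ~9.1.
weights = (1.0, 4.54, 3.56)
total = sum(weights)  # ~9.1: TPS cross sections ~9x the naive A-scaling expectation
```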
\subsubsection{Current status}
The first-ever experimental study of DPS in {\ensuremath{pA}}\xspace\ collisions has been carried out by LHCb, measuring like-sign $D+D$ ($D^0+D^0$, $D^0+D^+$ and $D^++D^+$) and $\ensuremath{J/\psi}\xspace + D$ production in $p\mathrm{Pb}$\ collisions at $\ensuremath{\sqrt{s_{_{NN}}}} = 5.02$~TeV~\cite{Aaij:2020smi}.
The azimuthal angle between the two charm hadrons in a pair, $\Delta\phi$, is measured to be flat, independent of a cut on the charm transverse momentum for $DD$ pairs, while that of $D\bar{D}$ pairs tends to peak at $\Delta\phi\approx 0$ for higher charm $\ensuremath{P_T}\xspace$. The ratio of cross sections between $DD$ and $D\bar{D}$ pairs is shown in \cf{fig:DPS_LHCb} (Left), with a magnitude of about $0.3$, while the measurement in \ensuremath{pp}\xspace collisions is about 0.1~\cite{Aaij:2012dz}. %
The forward-backward ratio ($R_\mathrm{FB}$), quantifying the production at positive rapidities over that at negative rapidities in the common range $2.7<|y(D)|<3.7$, is measured for $D\bar{D}$ pairs to be $R_\mathrm{FB}(D\bar{D})=0.61\,\pm0.04\,(\mathrm{stat})\,\pm0.12\,(\mathrm{syst})$, which is consistent with that of inclusive charm production, $R_\mathrm{FB}(D)$~\cite{LHCb-CONF-2019-004}, but that of $DD$ pairs is $R_\mathrm{FB}(DD)=0.40\,\pm0.05\,(\mathrm{stat})\,\pm0.10\,(\mathrm{syst})\approx R_\mathrm{FB}^2(D\bar{D})$. Since a forward-backward ratio below unity is explained by nuclear-PDF modifications, the value for like-sign $DD$ pairs is consistent with two pairs of partons participating in the hard scattering. These observations support a significant DPS contribution in like-sign $DD$ production, while opposite-sign $D\bar{D}$ production has a large SPS component, namely the inclusive production of a single charm quark pair.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figures/Rapidity_differential_Ratio_compare.pdf}
\hspace{0.2cm}
\includegraphics[width=0.45\textwidth]{figures/SigmaEffSimple.pdf}
\caption{Left: Ratios of cross sections between like-sign and opposite-sign open charm pairs for different rapidity regions of charm hadrons~\cite{Aaij:2020smi}. The weighted average of ratios for different pairs is shown as a shaded magenta box. Right: $\sigma_{\rm eff}$ parameter derived using $D^0+D^0$ and $\ensuremath{J/\psi}\xspace+D^0$ production for both negative and positive rapidities. The shaded area corresponds to the prediction from~\cite{dEnterria:2012jam} scaled by $A^2$, which predicts around a factor of three relative enhancement for DPS production compared to a naive scaling from \ensuremath{pp}\xspace collisions. Vertical bars (boxes) are statistical (systematic) uncertainties.
%
%
\label{fig:DPS_LHCb}}
\end{figure}
The $\sigma_{{\rm eff},pA}$ parameter is obtained using $D^0+D^0$ and $\ensuremath{J/\psi}\xspace+D^0$ production assuming a pure DPS contribution (\cf{fig:DPS_LHCb}, Right). The LHCb derivation of $\sigma_{{\rm eff},pA}$ results in a value that is (arbitrarily) normalised to be $A^2 = 208^2$ times larger than that defined in \ce{eq:sigmapA_DPS}. The theoretical prediction, shown as the grey band in the plot, amounts to $\sigma_{{\rm eff},pA} \approx 1$~b, %
and is supported by the data. This result confirms the predicted factor of three enhancement for DPS compared to a simple $A$ scaling~\cite{Strikman:2001gz,Cattaruzza:2004qb,dEnterria:2012jam}. Looking in more detail, the positive-rapidity data exhibit a higher $\sigma_{{\rm eff},pA}$ value than the negative-rapidity ones, which implies the necessity of considering the impact-parameter dependence of nPDFs~\cite{Shao:2020acd} (see below). The $\sigma_{{\rm eff},pA}$ parameter measured for $\ensuremath{J/\psi}\xspace + D^0$ production hints at smaller values than that derived from $D^0+D^0$ production, and the same behaviour was observed in \ensuremath{pp}\xspace data~\cite{Aaij:2012dz}. This is suggestive of a non-negligible contribution of SPS in $\ensuremath{J/\psi}\xspace + D^0$ production~\cite{Shao:2020kgj}, which is not subtracted in the LHCb analysis. Due to limited statistics, the kinematic correlation between $\ensuremath{J/\psi}\xspace$ and $D^0$, {e.g.}\xspace the $\Delta\phi$ distribution, does not yet provide enough information to identify the SPS component.
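As a cross-check of this normalisation (a sketch, assuming the $\sigma_{{\rm eff,{DPS}},pA} \approx 22.5~\mu$b value used later in the text for the HL-LHC estimates):

```python
# LHCb quotes sigma_eff,pA rescaled by an arbitrary factor A^2 relative to the
# definition in the pocket formula; with sigma_eff,DPS,pA ~ 22.5 microbarn
# (assumed input, the value used for the HL-LHC estimates), this gives ~1 b.
A = 208
sigma_eff_pA_ub = 22.5                          # microbarn, from the pocket formula
sigma_lhcb_b = sigma_eff_pA_ub * 1e-6 * A ** 2  # in barn; ~0.97 b, i.e. the ~1 b band
```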
The LHCb observation of non-identical $\frac{\sigma^2_{p{\rm Pb}\to D^0}}{2\sigma_{p{\rm Pb}\to D^0+D^0}}$ values in the forward and backward regions indicates the presence of effects beyond the expected geometrical DPS enhancement in {\ensuremath{pA}}\xspace compared to \ensuremath{pp}\xspace collisions. Assuming the same test function for the transverse spatial dependence, $G(x)\propto x^a$, of the nuclear PDF modifications suggested in~\cite{Shao:2020acd}, \cf{fig:D0D0nPDF} shows that the LHCb data support exponent values $a>1.5$, a conclusion that, however, suffers from the uncertainty on $\sigma_{{\rm eff},pp}$: a smaller $\sigma_{{\rm eff},pp}$ value in fact requires a stronger impact-parameter dependence ({i.e.}\xspace larger $a$). The nuclear modification factors of single inclusive $D^0$ production in the two rapidity intervals are taken from independent measurements of the single inclusive process. This example corroborates the conclusion of~\cite{Shao:2020acd} that DPS in {\ensuremath{pA}}\xspace collisions can be used to probe impact-parameter-dependent nPDFs. %
\begin{figure}[htbp]
\centering
\includegraphics[width=0.44\textwidth]{figures/sigmaeff_DPS_LHCb-Paper-2020-010_35mb-crop.pdf}
\hspace{0.2cm}
\includegraphics[width=0.44\textwidth]{figures/sigmaeff_DPS_LHCb-Paper-2020-010_21mb-crop.pdf}
\caption{Comparison of the ratio $\sigma^2_{p{\rm Pb}\to D^0}/(2\sigma_{p{\rm Pb}\to D^0+D^0})$ between the impact-parameter-dependent DPS calculation~\cite{Shao:2020acd} and the LHCb data~\cite{Aaij:2020smi} in both forward ($1.5<y(D^0)<4.0$) and backward ($-5.0<y(D^0)<-2.5$) rapidity intervals. Two different $\sigma_{{\rm eff},pp}$ values are shown in the left and right plots respectively.}
\label{fig:D0D0nPDF}
\end{figure}
\subsubsection{HL-LHC prospects}
At the HL-LHC, with the size of $p\mathrm{Pb}$\ data samples increased by about a factor of ten compared to Run-2, one can exploit the large expected DPS signals over the SPS backgrounds in quarkonium final states as a means to scrutinise double and triple parton scatterings and, in particular in the purely geometric picture neglecting parton correlations, to extract the value of the effective DPS cross section $\sigma_{\rm eff}$ independently of (and complementarily to) measurements in \ensuremath{pp}\xspace collisions. First, measurements in fine bins of final-state kinematics can be obtained for $\ensuremath{J/\psi}\xspace + D^0$ pairs in order to understand the possible difference of the $\sigma_{{\rm eff},pA}$ parameter derived from $\ensuremath{J/\psi}\xspace+D^0$ and $D^0+D^0$ data, and shed light on the varying values at negative and positive rapidities (Fig.~\ref{fig:DPS_LHCb}). %
Table~\ref{tab:3} collects the expected DPS cross sections for the combined production of quarkonia ($\ensuremath{J/\psi}\xspace, \Upsilon$) and/or electroweak bosons ($W,\,Z$) in $p\mathrm{Pb}$\ collisions at the nominal LHC energy of $\ensuremath{\sqrt{s_{_{NN}}}} = 8.8$~TeV. The individual SPS \ensuremath{pN}\xspace cross sections have been derived in~\cite{dEnterria:2014lwk} at NLO accuracy with the colour evaporation model (CEM)~\cite{Vogt:2012vr} for quarkonia, and with {\sc mcfm}\xspace\ for the electroweak bosons, using the CT10~\cite{Lai:2010vv} proton and EPS09~\cite{Eskola:2009uj} nPDFs. The EPS09 nPDF does not include any impact-parameter dependence of nuclear effects, {i.e.}\xspace it ignores the effects discussed in Fig.~\ref{fig:D0D0nPDF}. The DPS cross sections are estimated with the factorised expression for {\ensuremath{pA}}\xspace\ collisions, \ce{eq:sigmapA_DPS}, with $\sigma_{{\rm eff,{DPS}},pA} = 22.5$~$\mu$b. The visible DPS yields (${\rm N_{{\rm {{DPS}}}}}_{p\mathrm{Pb}}$ values quoted) are estimated taking into account the relevant di-lepton decay branching fractions BR$(\ensuremath{J/\psi}\xspace,\,\Upsilon,\,W,\,Z) = 6\%$, 2.5\%, 11\%, 3.4\%, plus simplified acceptance and efficiency losses. For \ensuremath{J/\psi}\xspace, the value $\left({\cal A\times E}\right)_{\ensuremath{J/\psi}\xspace}\approx$~0.01 was assumed over merely one unit of rapidity at $|y|=0$ and $|y|=2$, corresponding to the ATLAS/CMS central and ALICE/LHCb forward acceptances. For $\Upsilon$ and $W,Z$, the values $\left({\cal A\times E}\right)_{\Upsilon}\approx 0.2$ and $\left({\cal A\times E}\right)_{W,Z}\approx 0.5$ were assumed over $|y|<2.5$. The quoted numbers were evaluated for an integrated luminosity of ${\cal L}_{\rm int}=1$~pb$^{-1}$. The quoted ${\rm N_{{\rm {{DPS}}}}}_{p\mathrm{Pb}}$ values are conservative for two reasons.
First, ATLAS/CMS may ultimately integrate about ${\cal L}_{\rm int}=2$~pb$^{-1}$ of $p\mathrm{Pb}$\ collisions (although ALICE/LHCb should record half this value, see Table~\ref{tab:yrlumis})~\cite{Citron:2018lsq}. Second, for final states with $\ensuremath{J/\psi}\xspace$, the expected number of visible events can easily be multiplied by a factor of 3--5, taking into account the full rapidity acceptance (enlarged after Run-2, in some cases) of the ALICE/LHCb and ATLAS/CMS detectors. All listed processes are therefore in principle observable in the LHC proton-lead runs. Rarer DPS processes like $W+Z$ and $Z+Z$ have much lower cross sections and will require much higher integrated luminosities at the HL-LHC and/or {\rm c.m.s.}\xspace\ energies such as those reachable at the CERN Future Circular Collider~\cite{Dainese:2016gch,Benedikt:2018csr}.
\renewcommand{\arraystretch}{1.3}
\begin{table}[h!]
\centering
\caption{\label{tab:3}Estimated production cross sections at $\ensuremath{\sqrt{s_{_{NN}}}} = 8.8$~TeV for SPS quarkonia and electroweak bosons in \ensuremath{pN}\xspace collisions, and for DPS double-$\ensuremath{J/\psi}\xspace$, $\ensuremath{J/\psi}\xspace+\Upsilon$, $\ensuremath{J/\psi}\xspace+W$, $\ensuremath{J/\psi}\xspace+Z$, double-$\Upsilon$, $\Upsilon+W$, $\Upsilon+Z$, and same-sign $W+W$, in $p\mathrm{Pb}$. DPS cross sections are obtained via \ce{eq:sigmapA_DPS} for $\sigma_{{\rm eff,{DPS}},pA} = 22.5$~$\mu$b (uncertainties, not quoted, are of the order of 30\%), and the associated yields for 1~pb$^{-1}$ integrated luminosity, after di-lepton decays and acceptance+efficiency losses~\protect\cite{dEnterria:2014lwk,Snigirev:2018arx}. %
We note that the $\ensuremath{J/\psi}\xspace$ yields quoted are only {\it per unit of rapidity} at mid- or forward-$y$.}
\vspace{0.25cm}
\begin{tabular}{lcccc}\hline
$p\mathrm{Pb}$, $\ensuremath{\sqrt{s_{_{NN}}}}=8.8$ TeV & \multicolumn{4}{c}{final states}\\\hline
& $\ensuremath{J/\psi}\xspace+\ensuremath{J/\psi}\xspace$ \hspace{0.5cm}& $\ensuremath{J/\psi}\xspace+\Upsilon$ \hspace{0.5cm}& $\ensuremath{J/\psi}\xspace+W$ \hspace{0.5cm}& $\ensuremath{J/\psi}\xspace+Z$ \hspace{0.5cm} \\\hline
$\sigma_{{\rm SPS}}^{\ensuremath{pN}\xspace\to a},\,\sigma_{{\rm SPS}}^{\ensuremath{pN}\xspace\to b}$ & 45~$\mu$b ($\times2$) & 45~$\mu$b, 2.6~$\mu$b & 45~$\mu$b, 60~nb & 45~$\mu$b, 35~nb \\
$\sigma_{{\rm DPS}}^{p\mathrm{Pb}}$ & 45 $\mu$b & 5.2 $\mu$b & 120~nb & 70~nb \\
$\rm N_{{\rm {{DPS}}}}^{p\mathrm{Pb}}$ (1 pb$^{-1}$)\hspace{0.5cm}& $\sim$65 & $\sim$60 & $\sim$15 & $\sim$3 \\\hline
& $\Upsilon+\Upsilon$ & $\Upsilon+W$ & $\Upsilon+Z$ & ss\,$W+W$ \\\hline
$\sigma_{{\rm SPS}}^{{\ensuremath{pN}\xspace}\to a},\,\sigma_{{\rm SPS}}^{\ensuremath{pN}\xspace\to b}$ & 2.6~$\mu$b ($\times2$) & 2.6~$\mu$b, 60~nb & 2.6~$\mu$b, 35~nb & 60~nb ($\times2$) \\
$\sigma_{{\rm DPS}}^{p\mathrm{Pb}}$ & 150~nb & 7~nb & 4~nb & 150 pb \\
$\rm N_{{\rm {{DPS}}}}^{p\mathrm{Pb}}$ (1 pb$^{-1}$)& $\sim$15 & $\sim$8 & $\sim$1.5 & $\sim$4 \\\hline
\end{tabular}
\end{table}
%
%
%
\section{Exclusive and diffractive production\protect\footnote{Section editors: Charlotte Van Hulse, Ronan McNulty.
}
}
\label{sec:excl_diff}
The diffractive production of quarkonia differs from inclusive production, discussed in the previous section, by the presence of colourless particle exchanges that lead to rapidity gaps, devoid of any hadronic activity, in the final state of the event. Diffractive processes are called exclusive if the final state, including the forward scattered protons, is fully determined. In hadron-hadron collisions, such events are generally characterised by two large rapidity gaps with a centrally produced object, which can consist of a single particle or a pair of particles.
Diffractive quarkonium production at hadron colliders offers a unique tool to study the nature of both $C$-even pomerons and $C$-odd odderons, multi-gluon colourless systems exchanged in scatterings with hadrons, which are fundamental to the understanding of soft hadron interactions. In the perturbative regime, the pomeron and odderon can roughly be interpreted as consisting of two and three gluons, respectively, though in general these are non-perturbative objects.
Diffractive processes can provide an improved understanding of the production of quarkonium states. Different Feynman diagrams contribute in inclusive, diffractive, and exclusive quarkonium production, which can be accessed through a comparison of results; {e.g.}\xspace\ in exclusive $\ensuremath{J/\psi}\xspace$ production, CO contributions are entirely absent.
In addition, exclusive production presents a particularly clean experimental environment and, sometimes, a simpler theoretical domain, which may assist with the identification of exotic quarkonia. In the large {\rm c.m.s.}\xspace energy limit, diffractive processes serve as a special testing ground for the BFKL resummation of HE logarithms entering at all orders of the perturbative expansion.
One of the most fruitful applications of exclusive quarkonium production is their use as probes of the partonic structure of the colliding objects.
Exclusive measurements are the only way to probe the 3D distribution of partons as functions of their longitudinal momentum and transverse position (through single-particle production), and their 5D distribution in terms of transverse position, longitudinal, and transverse momentum (through the production of pairs of particles or jets).
These 5D distributions are related to Generalised Transverse Momentum Distributions (GTMDs), which are Fourier transforms of Wigner distributions. They are known as the ``mother distributions'', since they contain the most complete information on the nucleon structure. Integrating them over the parton transverse momentum gives the generalised parton distributions (GPDs), and taking the forward limit of the GPDs results in the PDFs.
Selected experimental results on exclusive and diffractive quarkonium measurements are presented in Section~\ref{sec:dif_exp}, with some discussion of open questions for theory and experiment. A measurement of diffractive quarkonium production for the study of BFKL resummation is introduced in Section~\ref{sec:dif_production}. In Section~\ref{sec:excl_quark}, the physics accessible in single vector-quarkonium production is discussed under three headings. Firstly, Section~\ref{sec:dif_upc} presents processes in hadron-hadron interactions useful for the extraction of GPDs: unlike DIS data, which have been extensively used to constrain GPDs, hadron-hadron collider data have not yet been exploited. Secondly, exclusively-produced quarkonia can, with certain approximations, provide information on PDFs, but until now such measurements have not been included in global PDF fits. Section~\ref{sec:dif_gluonpdf} discusses the theoretical framework and proposes a method for the extraction of PDFs from exclusive $\ensuremath{J/\psi}\xspace$ production. Thirdly, with FT-LHC, a kinematic region complementary to that in the collider mode could be accessed, as discussed in Section~\ref{sec:dif_FT}. The exclusive production of pairs of quarkonia (and jets) is discussed in Section~\ref{sec:dif_wig}. Until recently, it was not known how to access GTMDs, but now it has been shown that they can be extracted from pairs of particles or jets both in DIS and photoproduction. DIS measurements await the future Electron-Ion Collider (EIC)~\cite{Accardi:2016ndt}, but for photoproduction in hadron-hadron collisions, the LHC is the ideal machine. A discussion of some of the most favourable experimental channels at the HL-LHC is provided in Section~\ref{sec:dif_wig}.
As already discussed, three main distinct modes of operation are foreseen for the HL-LHC: \ensuremath{pp}\xspace, {\ensuremath{pA}}\xspace\ and \ensuremath{AA}\xspace\ collisions\footnote{Collisions using O, Ar, Kr and Xe beams may also be envisioned.} where $A$ is an ion, usually lead. Compared to previous LHC running, data taken in HL-LHC \ensuremath{pp}\xspace collisions will be difficult to use for exclusive measurements because of the high number of \ensuremath{pp}\xspace interactions per beam collision. Measurements in such an environment may still be possible using proton taggers that could allow the identification of the separate \ensuremath{pp}\xspace primary vertices, or through dedicated data collection with a lower number of interactions per beam collision.
Collisions in the {\ensuremath{pA}}\xspace mode offer several advantages for the study of the nucleon, and have been under-utilised to date, due to the low integrated luminosities recorded thus far. For the HL-LHC, a nearly tenfold increase of data is foreseen~\cite{Citron:2018lsq}. This running mode does not suffer from a large number of interactions per beam crossing and thus provides a more appropriate channel to perform photoproduction measurements, exploiting the enhanced photon flux from the nucleus (which scales approximately as $Z^2$). Compared to \ensuremath{pp}\xspace\ collisions (in the absence of a proton tagger), {\ensuremath{pA}}\xspace\ collisions also have the advantage of identifying the photon emitter.
In addition, they might offer a handle on constraining nuclear distributions, through photoproduction on the nucleus in the nucleus-going direction.
While the foreseen tenfold increase in luminosity is important for a wide range of physics, exclusive measurements would clearly benefit from even higher luminosities in order to exploit their full potential ({e.g.}\xspace for $\Upsilon$ and charmonium-pair production, as discussed later).
Access to nuclear distributions (PDFs, GPDs, and Wigner distributions) is best provided through the study of ion-ion (\ensuremath{AA}\xspace) collisions. A nearly tenfold increase of data collection in PbPb collisions is foreseen for the HL-LHC. For photoproduction processes, the ambiguity in the identity of the photon emitter can in part be lifted through the detection of neutrons emitted by one of the Pb ions, {e.g.}\xspace\ in zero-degree calorimeters~\cite{Guzey:2013jaa}.
\subsection{Experimental results}
\subsubsection{Selected experimental results}
\label{sec:dif_exp}
Exclusive and diffractive production has been studied in
lepton-hadron interactions, both at the FT experiments HERMES~\cite{Airapetian:2001yk, Airapetian:2006zr, Airapetian:2008aa, Airapetian:2009aa, Airapetian:2009bm, Airapetian:2011uq, Airapetian:2010aa, Airapetian:2010dh, Airapetian:2012mq, Airapetian:2012pg, Airapetian:2014gfp, Airapetian:2015jxa, Airapetian:2017vit}, COMPASS~\cite{Alexakhin:2007mw, Adolph:2012ht, Adolph:2013zaa, Adolph:2014lvj, Adolph:2014hba, Adolph:2016ehf, Akhunzyanov:2018nut, Alexeev:2019qvd}, and the experiments at Jefferson Lab
\cite{Camacho:2006qlk, Mazouz:2007aa, Collaboration:2010kna, Defurne:2015kxq, Hattawy:2018liu, HirlingerSaylor:2018bnu, Park:2017irz, Hattawy:2017woc, Bedlinskiy:2017yxe, Bosted:2016hwk, Bosted:2016spx, Girod:2007aa, Stepanyan:2001sm}, and at the collider experiments H1 (see, among others,~\cite{H1:2020lzc, H1:2015bxa, Andreev:2015cwa, Andreev:2014yra, Alexa:2013xxa, Aaron:2012ad, Aaron:2010su, Aaron:2009xp, Aaron:2009ac, Aaron:2008ab, Aaron:2007ab, Aktas:2007bv, Aktas:2007hn, Aktas:2006up, Aktas:2006hy, Aktas:2006hx, Aktas:2006qe}) and ZEUS (see, among others,~\cite{ZEUS:2017nkv,Abramowicz:2015vnu,Aaron:2012hua,Abramowicz:2011pk,Abramowicz:2011fa, Chekanov:2009zz, Chekanov:2008vy, Chekanov:2008cw, Chekanov:2007zr, Chekanov:2007pm, Chekanov:2005cqa, Chekanov:2005vv, Chekanov:2004mw}).
It has also been studied in hadron-hadron collisions at the Tevatron, RHIC, and the LHC. While lepton-hadron interactions offer the advantage of high-precision measurements by using a point probe to study hadrons, hadron colliders can reach a higher {\rm c.m.s.}\xspace energy, hence providing access to lower values of the parton fractional momentum, $x$.
Future experiments are envisaged with expanded possibilities for exclusive and diffractive measurements. For the study of lepton-hadron interactions, the EIC is under development~\cite{accardi2012electron}. The EIC will allow the collection of large samples of data at variable {\rm c.m.s.}\xspace energies, thus making possible high-precision, multi-differential measurements with a vast kinematic coverage. Moreover, measurements with polarised nucleons and (unpolarised) nuclear-ion beams will allow one to probe the nucleon spin structure and nuclear matter, respectively. Stepwise upgrades of the LHC and the LHC experiments are also ongoing and planned, on a time scale preceding the EIC, with {\rm c.m.s.}\xspace\ energies about 50 times larger than those accessible at the EIC. Such a programme has the potential to access rarer diffractive and exclusive processes. In addition, there exists the possibility to perform measurements with FT collisions at the LHC, covering {\rm c.m.s.}\xspace\ energies similar to those at the EIC. These provide access to the high-$x$ region. Also, ideas and studies for measurements with a polarised target at the LHC, sensitive to spin-related physics, are underway~\cite{Kikola:2017hnp,Hadjidakis:2018ifr}. Furthermore, planned data collection with a heavy-ion beam and nuclear targets provides the possibility to study the nuclear parton density.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.35]{figures/exclusive_diagrams.png}
\caption{Photoproduction (Left) and double-pomeron exchange production (Right) of charmonium at hadron colliders.}
\label{fig:fd_exclusive}
\end{figure}
Exclusive quarkonium production at hadron colliders is commonly studied in ultra-peripheral collisions (UPCs)~\cite{Baltz:2007kq,dEnterria:2007zra,Contreras:2015dqa,Klein:2017nqo}. In such collisions, impact parameters are typically larger than the sum
of the nuclear radii, so strong interactions are suppressed, while electromagnetic interactions are favoured. Exclusive meson production has been studied at the Tevatron in $p\bar{p}$ collisions, at RHIC in AuAu and $p$Au collisions, and at the LHC in proton-proton, $p\mathrm{Pb}$, and \rm{PbPb}\xspace collisions. The measurements cover light vector-meson production, such as single-$\rho$ production~\cite{Sirunyan:2019nog, Sirunyan:2020cmr, Adam:2020sap, Adamczyk:2017vfu, Agakishiev:2011me, Abelev:2008ew, Abelev:2007nb, Aaltonen:2015uva, Albrow:2010yb, Albrow:2014yma}, and heavier single and pair-produced quarkonia $\ensuremath{J/\psi}\xspace$, $\psi(2S)$, $\ensuremath{\chi_c}\xspace$ and $\ensuremath{\Upsilon}\xspace$~\cite{Aaij:2014rms, Aaij:2014iea, Aaij:2018arx, Aaij:2015kea,Sirunyan:2018sav,Afanasiev:2009hy,Albrow:2014yma, LHCb:2011dra}. In heavy-quarkonium production, the charm and bottom quarks provide the hard scale that makes possible a theoretical perturbative expansion, and the interpretation of the results in terms of PDFs, GPDs, or Wigner distributions. The majority of the measurements so far were performed using unpolarised hadrons, but preliminary measurements with a transversely polarised proton have been performed at RHIC and these allow access to spin-dependent PDFs, GPDs, and Wigner distributions~\cite{jarda_talk}. Differential cross-sections have been measured as functions of quarkonium rapidity and the Mandelstam variable $t$, which can be approximated by the $\ensuremath{P_T}\xspace^2$ of the produced meson system.
Single vector-meson production involves a single-pomeron exchange (SPE), and is the most studied process so far in central exclusive production, while scalar and tensor quarkonia are produced through double-pomeron exchange (DPE); the different production mechanisms are shown in Fig.~\ref{fig:fd_exclusive}.
Since only gluon propagators are present in DPE, central exclusive production is a fertile hunting ground for glueballs, tetraquarks, and quark-gluon hybrid states, with the potential advantage of a lower background contamination compared to non-exclusive measurements.
Such gluon-rich media are also a good environment to study the odderon, predicted in QCD but not unambiguously observed~\cite{Boussarie:2019vmk,McNulty:2020ktd}.
A very promising channel that would provide strong evidence for the existence of the odderon is exclusive photoproduction of $C$-even quarkonia, which can only be produced if the photon fuses with another $C$-odd propagator.
Searches for the exclusive production of scalar~\cite{Bartels:2004hb,Czyzewski:1996bv} or tensor quarkonia in {\ensuremath{pA}}\xspace or \ensuremath{AA}\xspace collisions are therefore of great interest and require high luminosity.
Exclusive, in comparison to inclusive, measurements can also give insight into the production mechanisms of charmonia and could indirectly help to distinguish between different frameworks used for inclusive production.
The HL-LHC operation warrants a future programme of work for experimentalists and theorists in which the different frameworks can be better disentangled through the comparison of suitably chosen high-precision observables in exclusive, diffractive, and inclusive reactions. For example, in exclusive reactions, CO states are absent and, in non-exclusive processes, it is plausible that, as the produced quarkonium becomes more isolated, the CO contributions become more suppressed.
By virtue of the Landau-Yang theorem~\cite{Landau:1948kw,Yang:1950rg}, which states that a spin-1 particle cannot couple to two identical massless vector particles, the exclusive production of $\chi_{c1}$ is expected to be heavily suppressed compared to its spin partners, the $\chi_{c0}$ and $\chi_{c2}$. In inclusive production, this suppression may not be as pronounced because of the CO contributions. If the initial gluons are allowed to be off-shell or if a third gluon is emitted, this suppression is lifted but the $\chi_{c1}$ rates remain partly suppressed~\cite{Khoze:2004yb,Pasechnik:2009bq} compared to the $\chi_{c2}$. In the exclusive case, CO contributions are absent and any gluon emission is forbidden. Thus,
measuring the yield ratio $\chi_{c2}/\chi_{c1}$ here would directly probe the degree of off-shellness of the gluons compared to the inclusive mode discussed in Section~\ref{sec:beyond_TMD}.
Such investigations can be further expanded by measuring quarkonium polarisation. The first study on polarisation of prompt $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ in inclusive production~\cite{Sirunyan:2019apc} uncovered a significant difference in polar anisotropy, $\lambda_\theta$, in agreement with NRQCD. Measurements that avoid the need to detect photon conversions are expected to improve the experimental resolution~\cite{Aaij:2017vck}. This may in fact be possible in the exclusive mode where the polarisation invariants (see Section~\ref{sec:quarkonium-pol-pp}) can reach their extremal values.
While photoproduction of quarkonia at heavy-ion colliders is typically studied in UPCs, there are now indications of contributions from photoproduced $\ensuremath{J/\psi}\xspace$ in peripheral collisions (with partial hadronic overlap), both at LHC by the ALICE experiment~\cite{Adam:2015gba}, and at RHIC by the STAR experiment~\cite{STAR:2019yox}. At low \ensuremath{P_T}\xspace, an excess of $\ensuremath{J/\psi}\xspace$ yields compared to that expected from hadroproduction is observed, which can be explained by contributions from photon-induced $\ensuremath{J/\psi}\xspace$ production. In this context, it would be extremely useful to study the polarisation of the $\ensuremath{J/\psi}\xspace$, since it is measured to be unpolarised in hadroproduction and transversely polarised in photoproduction, as inherited from the {parent (real) photon}. This study could be expanded to include $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{\Upsilon}\xspace$.
Contradictory results have been found for the ratio of coherently photoproduced $\ensuremath{\psi(2S)}\xspace$ to $\ensuremath{J/\psi}\xspace$. While the data at central rapidity in \rm{PbPb}\xspace\ collisions at $\ensuremath{\sqrt{s_{_{NN}}}} = 2.76$~TeV showed a ratio almost twice that in \ensuremath{pp}\xspace\ collisions~\cite{Adam:2015sia}, new data at forward rapidity in \rm{PbPb}\xspace\ collisions at 5.02~TeV give a ratio consistent with the \ensuremath{pp}\xspace\ results~\cite{Acharya:2019vlb}.
Studying diffractive processes where the probed beam hadron breaks up provides some sensitivity to the non-uniformity of the gluon distribution in the transverse impact-parameter space (see~\cite{Cepila:2018zky,Mantysaari:2020axf} and references therein). Cross sections and cross-section ratios of coherent and incoherent $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\Upsilon}\xspace$ photoproduction are a valuable tool to study the nucleon shape~\cite{Cepila:2018zky, Mantysaari:2017dwh,Cepila:2016uku,Cepila:2017nef}.
The identification of diffractive processes in general and the separation of diffractive processes where the beam particles break up or stay intact are experimentally and theoretically challenging, especially in collider experiments. Some aspects involved in the identification of exclusive and non-exclusive diffractive processes are discussed next.
\subsubsection{Experimental identification of diffractive processes}
The experimental identification of diffractive events usually relies
on the identification of a large rapidity gap, found by
ordering all charged particles in pseudorapidity and noting the
largest difference, $\Delta \eta$, between adjacent particles.
There are at least two practical problems with this approach,
due to the fact that detectors are not hermetic.
Firstly, how big a gap is required to identify an event as diffractive?
Secondly, how does one deal with the dissociation of the projectiles, which
occurs within a few units of rapidity of the beams and often enters
uninstrumented regions? Both these issues are briefly discussed below:
neither is fully resolved and each is difficult to approach
both theoretically and experimentally.
One way to investigate the size of the gap required to tag an
event as diffractive is to look at the (background) gap sizes in inclusive events.
However, modelling this in generators is difficult as the results depend
on non-perturbative effects: a single soft particle can destroy the gap.
It was shown in~\cite{Khoze:2010by} that, if the threshold for detecting
tracks is relatively high ($\ensuremath{P_T}\xspace>1$ GeV), similar results are obtained with
cluster hadronisation and string fragmentation models. However, order-of-magnitude
differences occur at lower transverse momenta: the probability of
$\Delta\eta>4$ in minimum bias \ensuremath{pp}\xspace\ events at $\ensuremath{\sqrt{s}}=7$ TeV was found
to be about 0.1 using a cluster hadronisation model~\cite{Winter:2003tt}
and 0.02 for string fragmentation~\cite{Sjostrand:2003wg}.
A recent measurement by ATLAS~\cite{Aad:2019tek},
comparing the largest gap in \ensuremath{pp}\xspace\ events
at $\ensuremath{\sqrt{s}}=8$ TeV with {\sc pythia}\xspace~\cite{Sjostrand:2007gs}
and {\sc herwig++}\xspace~\cite{Bahr:2008pv} (Fig.~\ref{fig:iso}, Left),
shows that the data exhibit fewer large gaps than the models predict.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.41]{figures/atlas_gap.pdf}
\raisebox{6pt}{\includegraphics[scale=0.40]{figures/lhcb_gap.pdf}}%
\caption{Left:
Hadron-level differential single-diffractive cross section as a function of $\Delta\eta$,
comparing the measured data with {\sc pythia}\xspace and {\sc herwig++}\xspace~7 predictions.
Right:
Transverse momentum squared distribution of $\ensuremath{J/\psi}\xspace$ candidates showing
estimated fractions of exclusive, feed-down, and proton-dissociation contributions. [Figures taken from~\cite{Aad:2019tek} and~\cite{Aaij:2018arx}.]}
\label{fig:iso}
\end{figure}
A related problem occurs in central exclusive production, where large
gaps should exist on either side of the central system ({e.g.}\xspace\ $\ensuremath{pp}\xspace\rightarrow p\oplus \ensuremath{J/\psi}\xspace\oplus p$ as discussed further in Section~\ref{sec:dif_gluonpdf}).
Various methodologies have been employed in such systems to determine
whether the candidate events are truly isolated
or whether the proton dissociates.
A simple approach, taken in the analysis of exclusive $\pi\pi$ production
by CMS~\cite{Sirunyan:2020cmr}, is to fit additional neutral energy deposits
in known non-exclusive events, and extrapolate to the signal region, assuming similar behaviour for like-sign and unlike-sign combinations.
Another approach, by LHCb, in the analysis of
exclusive $\ensuremath{J/\psi}\xspace$ production~\cite{Aaij:2018arx}, uses Regge theory to
fit the $\ensuremath{P_T}\xspace$ distribution in known non-exclusive events to model
the dissociative process and combines this
with the signal shape to determine the purity of a sample of candidate
exclusive events (Fig.~\ref{fig:iso}, Right).
A more complex approach was presented in a recent H1 analysis of the
photoproduction of $\rho$ mesons~\cite{Bolz:2019znd}.
Dissociative events are not well described either at generator level or in the
detector simulation. Therefore, a sophisticated re-weighting of the
DIFVM generator~\cite{List:1998jz} was employed, tuned using control samples from data.
An elegant solution to the problem of identifying exclusive events is found if the intact protons can be reconstructed.
This requires dedicated detectors installed at very low angles to the beam, typically in Roman pots located several hundred metres from the interaction point.
Both ATLAS (through the AFP spectrometer~\cite{Tasevsky:2015xya}) and CMS-TOTEM (using CT-PPS~\cite{Albrow:2015ois}) use such technology, which has the additional advantage of providing an independent measurement of the mass of the central system.
\subsection{Forward \ensuremath{J/\psi}\xspace + backward jet production}
\label{sec:dif_production}
In Section~\ref{sec:onium_jets}, measurements of quarkonia produced in association with jets in inclusive reactions were motivated, following the first proposal of~\cite{Lansberg:2019adr}. The motivations for studying such final states in diffractive reactions are now considered.
Diffractive reactions featuring a semi-hard scale hierarchy~\cite{Gribov:1984tu}, {i.e.}\xspace\ $\ensuremath{\sqrt{s}} \gg \{Q\} \gg \Lambda_{\rm QCD}$, with $\ensuremath{\sqrt{s}}$ the centre-of-mass energy and $\{Q\}$ a (set of) characteristic hard scale(s), serve as a special testing ground for the dynamics of strong interactions in the High-Energy (HE) limit.
Here, a genuine Fixed-Order (FO) treatment based on collinear factorisation fails, since large energy logarithms enter the perturbative series in the strong coupling, $\alpha_s$, with a power that increases with the order. In particular, large final-state rapidities (or rapidity distances), typical of single forward emissions (or double forward/backward emissions) with colourless exchanges in the $t$-channel, directly enhance the weight of terms proportional to $\ln (s)$.
The HE factorisation based on the BFKL equation %
performs an all-order resummation of these large energy logarithms both in the leading-logarithmic approximation (LL), which means inclusion of all terms proportional to $\alpha_s^n \ln (s)^n$, and in the next-to-leading-logarithmic approximation (NLL), including all terms proportional to $\alpha_s^{n+1} \ln (s)^n$.
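The structure of this resummation can be sketched schematically as
\begin{equation}
\sigma^{\rm BFKL} \sim \sum_{n \geq 0} \left( c_n\, \alpha_s^n \ln^n s + d_n\, \alpha_s^{n+1} \ln^n s \right),
\end{equation}
where the first tower of terms is resummed at LL and the second at NLL accuracy; the coefficients $c_n$ and $d_n$ are process-dependent and purely illustrative here.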
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{./figures/process_di-jet.pdf}
\hspace{1.5cm}
\includegraphics[scale=0.45]{./figures/process_onium-jet.pdf}
\\ \vspace{0.25cm}
\hspace{1.25cm}
a) Mueller-Navelet jets
\hspace{3.75cm}
b) Forward quarkonium + backward jet
\caption[]
{Pictorial representation of two semi-hard reactions in the hybrid ``HE + collinear'' factorisation. Red blobs denote collinear PDFs, whereas green (blue) ones refer to the hard part of impact factors accounting for jet (quarkonium) emissions. They are connected to the BFKL gluon Green's function, schematically represented in yellow, via pomeron lines.}
\label{fig:semi-hard}
\end{figure}
Over the last few years, predictions for observables in a wide range of semi-hard final states have been proposed~\cite{Caporale:2015int,Caporale:2016soq,Chachamis:2015crx,Caporale:2016xku,Caporale:2016zkc,Caporale:2015vya,Celiberto:2016hae,Celiberto:2016vhn,Celiberto:2017ptm,Celiberto:2020rxb,Bolognino:2018oth,Bolognino:2019yqj,Golec-Biernat:2018kem,Xiao:2018esv,Celiberto:2020tmb,Celiberto:2017nyx,Bolognino:2019ouc,Bolognino:2019yls}.
Among them, azimuthal correlations between two Mueller-Navelet jets~\cite{Mueller:1986ey} have been identified as favourable observables in the discrimination between BFKL- and FO-inspired calculations~\cite{Celiberto:2015yba,Celiberto:2015mpa,Celiberto:2020wpk}. This channel, depicted in~\cf{fig:semi-hard} (a), is characterised by hadroproduced jets with high transverse momenta, a large difference in rapidity, and a secondary undetected gluon system.\footnote{Although featuring secondary gluon emissions in the final state, this reaction can be classified as a diffractive one. Indeed, the imaginary part of its cross section, dominant with respect to the real part in the HE limit, can be directly linked to the forward elastic-scattering amplitude via the optical theorem.}
\begin{figure}[h!]
\centering
\includegraphics[scale=1.25]{./figures/JPsi-jet_sigma_10_10_13TeV.pdf}
\includegraphics[scale=1.25]{./figures/JPsi-jet_cos_10_10_13TeV.pdf}\\
\small CMS ($-4.5 < y_j < 0 \, , \, 0 < y_{\ensuremath{J/\psi}\xspace} < 2.5 \, , \, P_{T, j} = P_{T, \ensuremath{J/\psi}\xspace}=10$ GeV)
\\ \vspace{0.25cm}
\includegraphics[scale=1.25]{./figures/JPsi-jet_sigma_10_10_13TeV_castor.pdf}
\includegraphics[scale=1.25]{./figures/JPsi-jet_cos_10_10_13TeV_castor.pdf}\\
CMS+CASTOR ($-6.5 < y_j < -5 \, , \, 0 < y_{\ensuremath{J/\psi}\xspace} < 2.5 \, , \, P_{T, j} = P_{T, \ensuremath{J/\psi}\xspace}=10$ GeV)
\caption[]
{Cross section (Left) and $\langle \cos \varphi \rangle$ (Right) at $\ensuremath{\sqrt{s}} = 13$ TeV as a function of the rapidity distance $\Delta Y_{\ensuremath{J/\psi}\xspace, j}$ between the $\ensuremath{J/\psi}\xspace$ and the jet as obtained in the BFKL approach in~\cite{Boussarie:2017oae}, for three different $\ensuremath{J/\psi}\xspace$ hadronisation models. [Figures adapted from~\cite{Boussarie:2017oae}.]}
\label{fig:onium-jet}
\end{figure}
Several phenomenological studies have been conducted so far~\cite{Marquet:2007xx,Colferai:2010wu,Caporale:2012ih,Ducloue:2013bva,Ducloue:2013hia,Caporale:2013uva,Caporale:2014gpa,Colferai:2015zfa,Mueller:2015ael,Celiberto:2016ygs,Colferai:2016inu,Celiberto:2017ydk,Caporale:2018qnm} and have been found to be in fair agreement with data collected by the CMS collaboration~\cite{Khachatryan:2016udy}.
The theoretical description relies on the so-called \emph{hybrid factorisation}, where DGLAP ingredients are elegantly combined with the HE resummation. On the one hand, longitudinal momentum fractions of the two jets are assumed to be large enough so that the collinear factorisation applies, thus permitting a description of the incoming partons in terms of the usual PDFs. On the other hand, transverse momenta exchanged in the $t$-channel are not negligible due to the large rapidity interval in the final state, thus calling for a HE-factorised treatment, genuinely afforded by the BFKL approach.
In line with these studies, the inclusive detection of a forward $\ensuremath{J/\psi}\xspace$ and a very backward jet in hadronic collisions at the LHC was recently proposed~\cite{Boussarie:2017oae} as a novel semi-hard channel (\cf{fig:semi-hard}, b). Here, at variance with most of the previous analyses, calculations are done in a hybrid HE + collinear factorisation with partial NLL BFKL accuracy, while different quarkonium production mechanisms are at work. This study allows a probe of the dynamics of the HE resummation, its effect being emphasised by the smaller values of transverse momentum at which identified mesons can be tagged with respect to jets (thus heightening the weight of secondary, undetected gluons). At the same time, it offers an intriguing and complementary opportunity to probe different approaches for the description of the production of quarkonium states.
Predictions for the differential cross section and for the azimuthal correlation, $\langle \cos \varphi \rangle$, with $\varphi = \varphi_{\ensuremath{J/\psi}\xspace} - \varphi_j - \pi$, the difference of the azimuthal angles of both emitted objects, are presented in~\cf{fig:onium-jet} as a function of the rapidity interval, $\Delta Y_{\ensuremath{J/\psi}\xspace, j}$, between the $\ensuremath{J/\psi}\xspace$ and the jet at $\ensuremath{\sqrt{s}} = 13$ TeV.
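Following conventions customary in Mueller-Navelet studies, such azimuthal moments can be written (schematically) as ratios of Fourier coefficients of the $\varphi$ distribution,
\begin{equation}
\langle \cos n\varphi \rangle = \frac{{\cal C}_n}{{\cal C}_0}\,, \qquad
{\cal C}_n = \int_0^{2\pi} d\varphi\, \cos(n\varphi)\, \frac{d\sigma}{d\Delta Y_{\ensuremath{J/\psi}\xspace, j}\, d\varphi}\,,
\end{equation}
so that ${\cal C}_0$ is the $\varphi$-integrated cross section and values of $\langle \cos \varphi \rangle$ approaching zero at large $\Delta Y_{\ensuremath{J/\psi}\xspace, j}$ signal azimuthal decorrelation.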
The meson is detected in the forward rapidity region of the CMS detector, $0 < y_{\ensuremath{J/\psi}\xspace} < 2.5$, while two possibilities are considered for the backward-jet emission: it can be tagged a) by CMS $-4.5 < y_j < 0$, or b) inside the ultra-backward CASTOR detector~\cite{CMS:2016ndp}, $-6.5 < y_j < -5$. Notably, case b) compensates %
for the smaller rapidities at which mesons can be detected (which represents a major drawback in the detection of a $\ensuremath{J/\psi}\xspace$ instead of a jet), thus restoring the rapidity intervals typical of the Mueller-Navelet jet production. Both the $\ensuremath{J/\psi}\xspace$ and jet \ensuremath{P_T}\xspace are required to be above 10~GeV.
The uncertainty bands combine the effect of the variation of the renormalisation and the factorisation scales, together with the running of the non-perturbative constants related to the hadronisation of the $\ensuremath{J/\psi}\xspace$. In particular, the CS LDME $\langle {\cal O}^{^3S^{[1]}_1}_{\ensuremath{J/\psi}\xspace} \rangle$ is varied between 1.16 and 1.32 GeV$^3$, as obtained in~\cite{Eichten:1995ch} and~\cite{Bodwin:2007fz} respectively. The CO LDME $\langle {\cal O}^{^3S^{[8]}_1}_{\ensuremath{J/\psi}\xspace} \rangle$ is varied between $0.224 \times 10^{-2}$ and $1.1 \times 10^{-2}$ GeV$^3$~\cite{Butenschoen:2011yh,Chao:2012iv,Bodwin:2014gia}.\footnote{Note that the NRQCD result presented in~\cite{Boussarie:2017oae} only takes into account the $^3S^{[1]}_1$ CS and $^3S^{[8]}_1$ CO states and it should be kept in mind that the other CO states could also give a sizeable contribution.} In the CEM, the parameter $F_{J/\psi}$ represents the fraction of the $c\bar{c}$ pairs produced in the invariant mass range $[2m_c,2m_D]$ hadronising into $\ensuremath{J/\psi}\xspace$ mesons and it is varied between 0.02 and 0.04~\cite{Nelson:2012bc} (see also~\cite{Lansberg:2016rcx}).
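For reference, the quarkonium-hadronisation inputs entering these predictions can be summarised schematically (normalisations and kinematic factors omitted). In NRQCD, the short-distance cross sections for the $c\bar{c}$ Fock states are weighted by the corresponding LDMEs,
\begin{equation}
d\sigma(\ensuremath{J/\psi}\xspace) \sim \sum_n d\hat{\sigma}\left(c\bar{c}[n]\right) \langle {\cal O}^{n}_{\ensuremath{J/\psi}\xspace} \rangle\,, \qquad n = {}^3S^{[1]}_1,\ {}^3S^{[8]}_1, \ldots\,,
\end{equation}
while, in the CEM, the $c\bar{c}$ cross section is integrated over the invariant-mass range below the open-charm threshold,
\begin{equation}
\sigma(\ensuremath{J/\psi}\xspace) = F_{J/\psi} \int_{2m_c}^{2m_D} dm\, \frac{d\sigma_{c\bar{c}}}{dm}\,.
\end{equation}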
Inspection of the results in~\cf{fig:onium-jet} shows sizeable cross sections that can be studied at the HL-LHC. In the NRQCD approach, the CO contribution prevails over the CS one~\cite{Boussarie:2017oae}, while the CEM exhibits a behaviour similar to the NRQCD (CS+CO) result.
Azimuthal correlations show patterns very similar to the ones obtained for the Mueller-Navelet dijet and, in general, for all the semi-hard channels investigated so far: large rapidity intervals enhance the weight of undetected hard-gluon radiation, thus leading to a loss of correlation between the two final-state particles in the azimuthal plane.
Future studies will extend this work to: (\emph{i}) a full NLL BFKL analysis, (\emph{ii}) the integration of transverse momenta of the $\ensuremath{J/\psi}\xspace$ and the jet over kinematic ranges accessible at the LHC, (\emph{iii}) the evaluation of possible DPS effects~\cite{Ducloue:2015jba}.
\subsection{Single vector-$\pazocal{Q}$ exclusive photoproduction}
\label{sec:excl_quark}
Measurements of quarkonia in UPCs allow one to probe various parton distributions. In general, as discussed in Section~\ref{sec:dif_upc}, exclusive processes provide access to GPDs. At very low values of $x$ and $t$, the GPD can be related to the conventional integrated PDF, via the Shuvaev transform, as discussed in Section~\ref{sec:dif_gluonpdf}. While data collected at the LHC in the collider mode probe the low-$x$ region, data collected in the FT mode, as presented in Section~\ref{sec:dif_FT}, can constrain GPDs at high $x$. In both the low- and high-$x$ regions, measurements are scarce and hence the distributions currently suffer from large uncertainties.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{figures/ExclpgamJpsi.pdf}
\caption{Cross section for exclusive $\ensuremath{J/\psi}\xspace$ production as a function of
the photon-proton {\rm c.m.s.}\xspace energy, $W$, extracted from data collected in proton-proton collisions by LHCb (red circles and black squares), proton-lead collisions by ALICE (magenta diamonds), $\ell p$ collisions by H1 and ZEUS (triangles), and FT experiments. [Figure taken from~\cite{Aaij:2018arx}.]}
\label{fig:ExclpgamJpsi}
\end{figure}
Exclusive production of $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\psi(2S)}\xspace$ has been measured in \ensuremath{pp}\xspace~\cite{Aaij:2013jxj,Aaij:2014iea,Aaij:2018arx}, $p\mathrm{Pb}$~\cite{TheALICE:2014dwa,Acharya:2018jua}, and \rm{PbPb}\xspace~\cite{Abelev:2012ba,Abbas:2013oua,Adam:2015sia,Acharya:2019vlb,Khachatryan:2016qhq,LHCb-CONF-2018-003} collisions by the experiments at the LHC, in AuAu collisions at RHIC~\cite{Afanasiev:2009hy}, and in $p\bar{p}$ collisions at the Tevatron~\cite{Aaltonen:2009kg}. Exclusive production of $\ensuremath{\Upsilon}\xspace$ has also been analysed in \ensuremath{pp}\xspace~\cite{Aaij:2015kea} and in $p\mathrm{Pb}$~\cite{Sirunyan:2018sav} collisions at the LHC. \cf{fig:ExclpgamJpsi} presents the $\gamma p$ cross section for exclusive $\ensuremath{J/\psi}\xspace$ production as a function of the $\gamma p$ {\rm c.m.s.}\xspace energy, $W$, extracted by the LHCb~\cite{Aaij:2018arx}, ALICE~\cite{TheALICE:2014dwa}, H1~\cite{Alexa:2013xxa}, ZEUS~\cite{Chekanov:2002xi}, and FT experiments. Good consistency over two orders of magnitude in energy is seen between photoproduction in diverse experimental conditions, which hints at the universality of the underlying physics.
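The energy dependence visible in~\cf{fig:ExclpgamJpsi} is commonly parametrised by a simple power law,
\begin{equation}
\sigma(\gamma p \to \ensuremath{J/\psi}\xspace\, p) \propto W^{\delta}\,,
\end{equation}
with fitted values of $\delta$ around 0.7; in perturbative approaches, such a rise reflects the growth of the gluon density at small $x$.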
Exclusive production of vector quarkonia is also sensitive to a possible odderon contribution.
In addition to the photon-pomeron fusion process shown in Fig.~\ref{fig:fd_exclusive}, vector quarkonia can be produced through odderon-pomeron fusion.
It was shown in~\cite{Bzdak:2007cz} that the odderon contribution may be significant at the LHC and that it may dominate at large transverse momenta.
The two production mechanisms can therefore potentially be separated through the transverse momentum distribution.
Although the precise shape of each spectrum is somewhat uncertain, an excess of events at high \ensuremath{P_T}\xspace could be evidence for the odderon.
One possibility for HL-LHC would be to measure the \ensuremath{P_T}\xspace spectrum precisely in pA collisions, where any odderon production is heavily suppressed with respect to photoproduction, and then to compare that to the spectrum obtained in pp collisions.
The presence of proton-taggers would greatly assist this measurement as it would allow the major background in the high-\ensuremath{P_T}\xspace region due to proton dissociation (see Fig.~\ref{fig:iso}, Right) to be heavily suppressed.
\subsubsection{Accessing GPDs from data collected in UPCs}
\label{sec:dif_upc}
Introduced more than 20 years ago~\cite{Ji:1996nm,Mueller:1998fv,Radyushkin:1997ki}, GPDs have been since studied both theoretically and experimentally. They provide access to the quark and gluon orbital angular momenta~\cite{Ji:1996ek}, the 3D distribution of quarks and gluons as a function of their longitudinal momentum and transverse position~\cite{Burkardt:2000za,Burkardt:2002hr}, and the distribution of pressure and shear forces inside the nucleon~\cite{Polyakov:2018zvc,Lorce:2018egm}.
The channels to experimentally access GPDs are exclusive processes with a hard scale. Their extraction requires a measurement that is doubly differential in $x$ and $t$. So far GPDs have mainly been constrained in the high-to-medium $x$ region from measurements of deeply virtual Compton scattering (DVCS)~\cite{Airapetian:2001yk, Airapetian:2006zr, Airapetian:2008aa, Airapetian:2009aa, Airapetian:2009bm, Airapetian:2011uq, Airapetian:2010aa, Airapetian:2012mq, Airapetian:2012pg, Akhunzyanov:2018nut, Camacho:2006qlk, Mazouz:2007aa, Defurne:2015kxq, Hattawy:2018liu, HirlingerSaylor:2018bnu, Hattawy:2017woc, Girod:2007aa, Stepanyan:2001sm, Aaron:2009ac, Aaron:2007ab, Chekanov:2008vy} and exclusive meson production in DIS~\cite{Airapetian:2010dh, Airapetian:2014gfp, Airapetian:2015jxa, Airapetian:2017vit, Alexakhin:2007mw, Adolph:2012ht, Adolph:2013zaa, Adolph:2014lvj, Adolph:2016ehf, Alexeev:2019qvd, Collaboration:2010kna, Park:2017irz, Bedlinskiy:2017yxe, Bosted:2016spx, Bosted:2016hwk, H1:2020lzc, H1:2015bxa, Aaron:2009xp, Abramowicz:2011pk, Chekanov:2007zr,Chekanov:2005cqa}, where the hard scale is provided by the large virtuality, $Q$, of the photon exchanged between the incoming lepton and nucleon. Each of these processes provides complementary information, with a sensitivity to different types and flavour combinations of the GPDs.
Instead of requiring a highly virtual incoming photon, a real photon can be used as a probe if the final-state particle is a heavy quarkonium (ideally $\Upsilon$), where now the hard scale is provided by the large mass of the quarkonium. Alternatively, GPDs can be probed in timelike Compton scattering (TCS), characterised by a real incoming photon and producing a highly virtual outgoing photon that provides the hard scale. The $ep$ collider experiments H1 and ZEUS measured the photoproduction of the heavy quarkonia $\ensuremath{J/\psi}\xspace$ and $\Upsilon$~\cite{Alexa:2013xxa, Abramowicz:2011fa, Chekanov:2009zz, Chekanov:2004mw}, but their data samples were not large enough to measure TCS.
Quarkonium photoproduction and TCS can also be studied in hadron-hadron UPCs. The large {\rm c.m.s.}\xspace energy at the LHC offers the unique advantage of providing access to the very low-$x$ region, down to $x \approx10^{-6}$ (for photon virtualities of 1~GeV$^2$). In the case of heavy-ion UPCs, there is a further benefit compared to \ensuremath{pp}\xspace\ or $\ell p$ collisions of an increased photon flux, since the flux is proportional to $Z^2$. The cleanest extraction of GPDs at the HL-LHC would be obtained in {\ensuremath{pA}}\xspace\ collisions, which, given the double-differential nature of the measurement, necessitates a high luminosity for this collision mode.
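The $Z^2$ scaling can be made explicit in the equivalent-photon approximation: for a point-like charge $Z$ moving with Lorentz factor $\gamma_L$, the flux of quasi-real photons of energy $\omega$ is, at leading-logarithmic accuracy (with $b_{\min}$ the minimum impact parameter and numerical prefactors omitted),
\begin{equation}
n(\omega) \simeq \frac{2 Z^2 \alpha_{\rm em}}{\pi} \ln \frac{\gamma_L}{\omega\, b_{\min}}\,,
\end{equation}
so that a Pb beam ($Z=82$) enhances the flux by more than three orders of magnitude with respect to a proton beam.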
Exclusive production of quarkonia (\cf{fig:fd_exclusive}, Left) is an ideal channel in UPCs to study gluon GPDs, since it is already sensitive to gluons at LO. In contrast, access to quark GPDs in UPCs is provided at LO by the TCS process~\cite{Berger:2001xd,Boer:2015fwa}. At the same time, TCS shows some sensitivity to gluons due to NLO contributions, which are sizeable at the LHC~\cite{Moutarde:2013qs}. In the FT mode, with polarised and unpolarised targets, exclusive quarkonium production and TCS~\cite{Lansberg:2015kha} provide additional information to constrain gluon and quark GPDs, respectively. Exclusive quarkonium measurements in FT collisions are discussed in more detail in Section~\ref{sec:dif_FT}.
In general, exclusive measurements in \ensuremath{pp}\xspace\ collisions allow the study of nucleon GPDs, while the analysis of \ensuremath{AA}\xspace\ collisions gives access to nuclear GPDs.
{\ensuremath{pA}}\xspace\ collisions can access both nucleon and nuclear GPDs. Indeed, depending on the rapidity of the final-state particles, $\gamma p$ or $\gamma A$ interactions dominate~\cite{Guzey:2013taa}. Hence, with a non-central detector, as for example LHCb, measurements in {\ensuremath{pA}}\xspace\ and in {\ensuremath{Ap}}\xspace\ collisions offer important complementary information.
Some caveats regarding the study of GPDs in UPCs should also be kept in mind.
At present, there is still no all-order factorisation proof
of exclusive quarkonium production. In addition, higher-twist, higher-order, and mass corrections could play a sizeable role when evaluating the process amplitude.
\subsubsection{Probing the low-$x$ and low-scale gluon PDF with exclusive $\pazocal{Q}$ production}
\label{sec:dif_gluonpdf}
In~\cite{Flett:2019ept, Flett:2019pux}, the utility of the exclusive $\ensuremath{J/\psi}\xspace$ data, measured recently by the LHCb collaboration in the forward rapidity interval $2<y_\pazocal{Q}<4.5$, as a means of probing and ultimately determining the low-$x$ and low-$Q$ gluon PDF is discussed. To date, the exclusive data have not been included in global PDF analyses for two reasons. First, the underlying theory prediction within collinear factorisation at NLO for exclusive $\ensuremath{J/\psi}\xspace$ production suffered from a large scale uncertainty and exhibited poor perturbative stability. Second,
one could not readily extract a PDF to compare to the $\overline{\text{MS}}$ collinear distributions determined in the global fits due to the off-forward kinematics and the description of the process via GPDs with the skew parameter $\xi$.
However, both of these problems have recently been overcome and the reader is pointed to~\cite{Jones:2015nna, Jones:2016ldq} and~\cite{Shuvaev:1999fm} for more details.
At small $x$ and skewness $\xi$ values, one may relate the conventional collinear PDFs to the GPDs via the Shuvaev transform~\cite{Shuvaev:1999fm}. This approach exploits the observation that the evolution of the conformal moments of GPDs is similar to that of the Mellin moments of PDFs. The polynomiality of the conformal moments of the GPDs in $\xi$ allows the leading term of the series to be identified with the Mellin moments of the PDFs. In turn, one may then systematically construct all the conformal moments of the GPDs at small $\xi$ with an accuracy of $O(\xi^2)$ at LO. At NLO, the evolution becomes non-diagonal and the accuracy is lowered to $O(\xi)$. Still, for the diffractive processes of interest, this is more than adequate.
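For illustration, if the gluon density behaves as $xg(x) \sim x^{-\lambda}$ at small $x$, the Shuvaev transform reduces at LO to a simple multiplicative skewing factor relating the gluon GPD at $x=\xi$ to its forward limit,
\begin{equation}
R_g = \frac{H_g(\xi,\xi)}{H_g(\xi,0)} \simeq \frac{2^{2\lambda+3}}{\sqrt{\pi}}\, \frac{\Gamma(\lambda+5/2)}{\Gamma(\lambda+4)}\,,
\end{equation}
a commonly quoted form, valid for $\xi \ll 1$, which reduces to unity for a flat gluon ($\lambda=0$)~\cite{Shuvaev:1999fm}.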
Therefore, by virtue of the exclusive $\ensuremath{J/\psi}\xspace$ process sitting at low $x$ and at a low $Q$ scale, one can relate the underlying GPD inputs to the conventional PDFs.
After a systematic taming within the NLO result, amounting to a resummation of a class of large logarithms and the implementation of a low-$Q_0$ cut within the NLO coefficient function, the cross-section predictions utilising state-of-the-art NLO global parton fits describe the data well in the HERA region, yet produce vastly different results in the LHCb region, see~\cf{fig:chris_label} (Left) and~\cite{Flett:2019pux}. This large spread {\it between} the global predictions reflects the lack of data constraints for $x<10^{-3}$, where the global parton behaviour in the low $(x,Q^2)$ domain is based on extrapolating the input PDF Ansatz from larger $x$. As shown in the right panel of~\cf{fig:chris_label}, the propagation of this, currently large, small-$x$ uncertainty to the exclusive $\ensuremath{J/\psi}\xspace$ cross section demonstrates the sizeable uncertainty for any given parton set and supports the claim that the exclusive $\ensuremath{J/\psi}\xspace$ data can reliably constrain the low-$x$ gluon.
\begin{figure}[h]
\centering
\includegraphics[scale=0.450]{figures/Fig3left.pdf}
\qquad
\includegraphics[scale=0.445]{figures/Fig6right.pdf}
\caption{Cross sections for exclusive $\ensuremath{J/\psi}\xspace$ photoproduction in $ep$ and $pp$ collisions as a function of $\gamma p$ {\rm c.m.s.}\xspace\ energy. Left: Data compared to predictions using three distinct sets of global PDFs with scales $\mu_F^2 = \mu_R^2 = m_c^2$ (solid lines). Also shown for CT14 is the prediction with scales $\mu_F^2 = \mu_R^2 = 2m_c^2$ (dashed line), demonstrating the stability of the theory with respect to scale variations. Right: Data compared to two sets of global PDFs, showing the global PDF $1\sigma$ uncertainty, which greatly exceeds the experimental uncertainty. The data are from~\cite{Chekanov:2002xi,Chekanov:2004mw,Aktas:2005xu,Alexa:2013xxa} and the LHCb $W_+$ solutions are constructed from~\cite{Aaij:2014iea, Aaij:2018arx}. [Plots taken from~\cite{Flett:2019ept}.]}
\label{fig:chris_label}
\end{figure}
For the future, there are three points of note regarding exclusive $\ensuremath{J/\psi}\xspace$ production via UPCs and the extraction of low-$x$ gluon PDFs.
Firstly, there is no indication of gluon density saturation down to $x=10^{-5}$ -- the data are compatible with a rising power law. With increasing data quality and statistics in the upcoming HL-LHC phase together with, in time, higher collision energies, some sensitivity to the effects of saturation might be seen.
Secondly, there is a need to reconcile the differing behaviour of the low-$x$ gluon PDF obtained from independent analyses using the inclusive ($D$-meson)~\cite{Gauld:2016kpd} and exclusive ($\ensuremath{J/\psi}\xspace$)~\cite{Flett:2020duk} sectors. It is unclear whether this is a question of data quality or whether the theory framework needs improving. Thirdly, measurements of inclusive forward production of $C$-even charmonia ($\chi_{c0}$, $\chi_{c2}$, $\eta_c$), and indeed bottomonia, integrated over $\ensuremath{P_T}\xspace$, are of high value. The NLO gluon can be probed down to approximately the same $x$ and $Q$ values as in exclusive $\ensuremath{J/\psi}\xspace$, but now in the conventional inclusive mode. From a phenomenological standpoint, it would be interesting to compare the low-$x$ gluons obtained from fits to scalar, vector, and tensor charmonia.
The same methodology applies to making NLO predictions for $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{\Upsilon}\xspace$ production.
For the latter, the scale dependencies are significantly reduced and the predictions are more robust (see \cite{Aaij:2015kea}).
However, the experimental precision for both is poorer due to lower statistics. This situation can be remedied at the HL-LHC.
The ideal situation is to measure both these processes in high-luminosity {\ensuremath{pA}}\xspace\ collisions.
At present, though, the anticipated integrated luminosity for this phase of running is probably not sufficient to make a competitive measurement.
Further studies are required in order to determine whether $\ensuremath{pp}\xspace$ collisions could be used.
The increase in luminosity at HL-LHC means an increased pileup of $\ensuremath{pp}\xspace$ interactions, but it may still be possible to select exclusive $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{\Upsilon}\xspace$ production through their characteristic signals of precisely two muons consistent with a primary interaction point and/or using forward proton tagging.
\subsubsection{FT measurements of $\pazocal{Q}$ photoproduction}
\label{sec:dif_FT}
UPCs are unique tools to study photoproduction processes with hadron beams not only in the collider mode, but also in the FT mode. LHC FT collisions can reach a {\rm c.m.s.}\xspace energy of $\ensuremath{\sqrt{s}}=115$~GeV with the 7~TeV LHC proton beam~\cite{Brodsky:2012vg}, giving access to the high-$x$ range of the parton distributions.
FT collisions have already been achieved by the LHCb collaboration, thanks to its System for Measuring Overlap with Gas (SMOG)~\cite{Aaij:2018ogq}. In the upcoming Run~3 of the LHC, a new system (SMOG2) will be installed, consisting of a target cell in which gas is injected at the centre and pumped out at both ends~\cite{smog2_a,Bursche:2649878}. The HL-LHC plans to have an upgraded polarised SMOG system. It is worth noting that the ALICE collaboration is also studying the feasibility of conducting a FT programme after Run~3 using a solid target coupled to a bent crystal deflecting the beam halo~\cite{Galluccio:2671944}.
FT measurements with an unpolarised target give access, in general, to spin-independent (TMD) PDFs, GPDs, and Wigner distributions, while polarised targets give access to different spin-dependent objects. Exclusive quarkonium measurements in polarised FT collisions are discussed in~\cite{Lansberg:2018fsy}. With transversely polarised protons, the measurement of exclusive photoproduction of vector quarkonia is sensitive to the gluon GPD $E_{g}$, which in turn allows, in principle, a determination of the gluon orbital angular momentum, $L_{g}$. Both $L_{g}$ and $E_{g}$ are currently essentially unknown.
With an integrated yearly luminosity of only 150~pb$^{-1}$ foreseen for Run~3 at the LHC, one could expect to produce about 3000 $\ensuremath{J/\psi}\xspace$ in the LHCb acceptance to perform preliminary studies of the multi-dimensional gluon content of the proton. Similar studies can in principle be conducted with a Pb beam on an H gas target, but would only produce a few tens of $\ensuremath{J/\psi}\xspace$~\cite{Lansberg:2018fsy}.
\cf{fig:FixedTarget} shows projections for HL-LHC, for an LHCb-like detector. These assume an integrated luminosity of 10~fb$^{-1}$, corresponding to the maximum luminosity that can be obtained in a year of running at the LHC in $p$H collisions using a storage cell gas target with a longitudinal dimension of 1~m. The left panel shows the differential cross section as a function of $y_\psi$ in the laboratory frame before and after applying kinematic cuts for the LHCb region, covering a rapidity between 2 and 5. The top $x$ axis shows the photon-proton {\rm c.m.s.}\xspace\ energy, $W_{(\gamma p)}$, while the right $y$ axis shows the yearly yield per 0.1 rapidity unit. A yearly yield of about $2\times10^5$ in the di-muon decay channel is expected. The right panel of \cf{fig:FixedTarget} shows the projection of the single transverse-spin asymmetry (STSA) ($A_{N}$) of the $\ensuremath{J/\psi}\xspace$ as a function of Feynman $x_{\rm F}$, for two ranges in \ensuremath{P_T}\xspace and one year of FT-LHC data taking in $p$H collisions. The asymmetry can be measured with an absolute precision ranging from 1 to 4$\%$, making possible a first measurement sensitive to $E_{g}$ in FT mode at the LHC with a polarised target, likely before the EIC.
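As a back-of-the-envelope consistency check of these projections (a sketch: the 20~pb effective visible cross section below is implied by the quoted Run~3 numbers, not stated explicitly in the text):

```python
# Back-of-the-envelope cross-check of the quoted FT-LHC J/psi projections.
# The effective visible cross section (sigma x branching x acceptance) is
# inferred from the quoted Run 3 numbers, not taken directly from the text.

sigma_eff_pb = 3000 / 150.0      # Run 3: ~3000 J/psi from 150 pb^-1 -> 20 pb

def yearly_yield(lumi_pb):
    """Expected yearly yield for a given integrated luminosity in pb^-1."""
    return lumi_pb * sigma_eff_pb

# HL-LHC pH projection: 10 fb^-1 = 10000 pb^-1
print(f"{yearly_yield(10_000):.0f}")   # ~2e5, matching the quoted HL-LHC yield
```

The Run~3 and HL-LHC numbers quoted above are thus mutually consistent under the assumption of an unchanged detector acceptance.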
Projections also exist for the ALICE detector operated in the FT mode at the HL-LHC, assuming the use of a polarised gas system based on storage-cell technology.%
The maximum yearly integrated luminosities considered are mainly limited by the detector rate capabilities and amount to 260~pb$^{-1}$ in $p$H collisions (polarised or unpolarised), leading to a photoproduced yield of about 1300 $\ensuremath{J/\psi}\xspace$ in the ALICE muon spectrometer. The statistics are even scarcer in the ALICE central barrel, which covers the very backward rapidity region, at the edge of the $\ensuremath{J/\psi}\xspace$ phase space.
\begin{figure}[h!]
\centering
%
\includegraphics[scale = 0.3]{figures/pH_AFTER_ynew.pdf}
\hspace{1 true cm}
\includegraphics[scale = 0.45]{figures/assym_pp-270718.pdf}
\caption{Rapidity-differential $\ensuremath{J/\psi}\xspace$ photoproduction cross section predicted by the STARLIGHT Monte Carlo generator~\cite{Klein:2016yzr} (Left), and projected $\ensuremath{J/\psi}\xspace$ $A_{N}$ distribution as a function of $x_F$ (Right), for FT $p$H collisions at $\ensuremath{\sqrt{s}}$~=~115~GeV, assuming the performance of an LHCb-like detector and a polarised target with 80$\%$ effective polarisation. [Figures taken from~\cite{Lansberg:2018fsy}.]}
\label{fig:FixedTarget}
\end{figure}
\subsection{Accessing Wigner functions through $\pazocal{Q}$-pair production}
\label{sec:dif_wig}
The 1D PDFs, and the 3D TMDs and GPDs, each describing different aspects of the non-perturbative structure of hadrons, can all be derived from the more general GTMDs~\cite{Meissner:2009ww,Lorce:2013pza}. There are several compelling reasons to study GTMDs. Firstly, they contain more physics content than that encoded in the TMDs and GPDs, and thus allow an exploration of physics that is lost in taking the TMD/GPD kinematic limits. Secondly, GTMDs can be related, via Fourier transformations, to Wigner functions, which are the quantum-mechanical version of classical phase-space distributions encountered in statistical physics. Partonic Wigner functions may allow for a hadron tomography in 5D phase space~\cite{Belitsky:2003nz,Lorce:2011dv}. Thirdly, certain GTMDs can unravel unique correlations between the parton orbital motion and the spin of hadrons~\cite{Lorce:2011kd,Hatta:2011ku,Lorce:2014mxa}. Fourthly, there is a particular GTMD that is related to the Sivers TMD (see also Section~\ref{sec:polarised}). By establishing the equivalence between GTMDs and QCD-odderons at small $x$, the authors in~\cite{Boussarie:2019vmk} have shown that it is possible to access the gluon Sivers TMD through exclusive $\pi^{0}$ production in \textit{unpolarised} $ep$ scattering. This finding goes against the traditional belief that the Sivers function can only be accessed via a transversely polarised target.
\begin{figure}[h!]
\includegraphics[width = 0.46\textwidth]{figures/Fig1_Wigner.png}
\hspace{1cm}
\includegraphics[width = 0.38\textwidth]{figures/Fig2_Wigner.JPG}
\caption{Left: Leading-order Feynman diagram for exclusive dijet production in lepton-nucleon/nucleus scattering. Right: Leading-order Feynman graph for the exclusive double quarkonium production in nucleon-nucleon collisions. The perturbative subprocess $gg \rightarrow \eta_{Q}$ is computed in the CSM. The $\eta_{Q}$ meson has a transverse momentum that is determined by the (intrinsic) transverse momentum of the gluons through which it is produced.}
\label{fig:fd_wig}
\end{figure}
For a long time, it was questionable whether GTMDs could be measured at all. The authors of~\cite{Hatta:2016dxp} were the first to suggest the measurement of gluon GTMDs through exclusive dijet production in lepton-nucleon/nucleus collisions at small $x$ (\cf{fig:fd_wig}, Left). Given that the GTMDs depend on the transverse vectors $\vec{q}_{T}$ (partonic transverse momentum) and $\vec{\Delta}_{T}$ (the Fourier transform of the impact-parameter $\vec{b}_{T}$), it is possible to parameterise the angular correlation between these two vectors by a symmetric and an antisymmetric part. The latter, known as the elliptic distribution, has a characteristic $\cos(2\phi)$ angular modulation, where $\phi$ is the angle between the relative transverse momenta of the dijets and the recoiling nucleon/nucleus. This angular modulation is similar to the observed elliptic flow phenomenon in relativistic heavy-ion collisions~\cite{Zhou:2016rnt,Hagiwara:2017ofm,Iancu:2017fzn}.
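A minimal numerical sketch of how such an elliptic modulation is quantified as a second Fourier moment of the azimuthal distribution (the 10\% amplitude is an arbitrary illustrative choice):

```python
import math

# Toy sketch of the elliptic cos(2*phi) modulation expected from the
# antisymmetric GTMD part, and its extraction as a Fourier moment.
# The 10% amplitude A2 is an arbitrary illustrative choice.

A2 = 0.10
N = 10000
phis = [2.0 * math.pi * i / N for i in range(N)]
weights = [1.0 + 2.0 * A2 * math.cos(2.0 * p) for p in phis]

# The weighted average <cos 2phi> recovers the input amplitude A2
v2 = sum(w * math.cos(2.0 * p) for w, p in zip(weights, phis)) / sum(weights)
print(round(v2, 3))   # -> 0.1
```

In a real analysis the same moment is computed event by event over the measured dijet (or quarkonium-pair) sample rather than over a generated grid.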
This pioneering work gave impetus to the field of GTMDs and subsequently many other interesting ideas were put forward~\cite{Hatta:2016aoc,Ji:2016jgn,Hagiwara:2017fye,Bhattacharya:2017bvs}, among which is the ability to access gluon GTMDs in exclusive photoproduction in {\ensuremath{pA}}\xspace\ collisions.
In~\cite{Bhattacharya:2018lgm} it was shown that the production of two pseudoscalar quarkonia ({e.g.}\xspace\ $\eta_c\eta_c$, $\eta_c\eta_b$ or $\eta_b\eta_b$) in exclusive hadronic collisions, where both hadrons remain intact, is a direct probe of GTMDs for gluons at moderate $x$ (\cf{fig:fd_wig}, Right).
In that work, an observable sensitive to gluon (canonical) orbital angular momentum was identified. A similar idea was put forward in~\cite{Boussarie:2018zwg}, where the authors proposed to access gluon GTMDs at small $x$ via the production of pairs of $C=+1$ quarkonia ($\eta_c$ or $\chi_{cJ}$) in exclusive \ensuremath{pp}\xspace\ collisions, where one of the protons breaks up after the collision. Although the latter has a larger count rate, there may be contamination from NRQCD CO contributions. An alternative suggestion would be to consider either a combination of a $\ensuremath{J/\psi}\xspace$ and a $C$-even quarkonium~\cite{FengYuan}, or possibly the associated production of two $\ensuremath{J/\psi}\xspace$ in \ensuremath{pp}\xspace collisions at the LHC, as opposed to two $C$-even quarkonium states. The production mechanisms for these processes with one or two $\ensuremath{J/\psi}\xspace$, in the GTMD-type kinematics, are expected to be more complicated. Nonetheless, given the experimental ease with which $\ensuremath{J/\psi}\xspace$ states can be detected, theoretical efforts in this direction are surely warranted.
From the experimental side, exclusive pairs of charmonia have been studied in \ensuremath{pp}\xspace\ collisions by the LHCb experiment~\cite{Aaij:2014rms}, but the statistical precision is insufficient to extract the cross section as a function of the \ensuremath{P_T}\xspace\ of the quarkonia. The CMS collaboration recently performed~\cite{CMS:2020ekd} a preliminary measurement of exclusive dijets in PbPb collisions: the $\cos(2\phi)$ modulation between the sum of the transverse momenta of the jets and their difference was extracted, which for the first time provides the information relevant for the determination of the GTMDs. This first experimental access to GTMDs can be extended in the future to similar measurements but for pairs of quarkonia.
As discussed for PDFs and GPDs, dedicated measurements during the HL-LHC should thus provide access to the Wigner distributions of the proton.
\begin{comment}
\subsection{Tensor polarisation as a probe for hadrons and hadronic matter at the HL-LHC}
\label{sec:dif_pol}
Tensor polarisation is revealed in the decay of spin-$1$ particles and provides a sensitive probe for their production in hadronic and nuclear collisions. It can be found through the measurement of the angular distribution of dilepton pairs from the decay and requires high statistics data that the HL-LHC can provide.
The respective angular coefficients depend on the choice of coordinate axes in the vector particle (quarkonium) rest frame, while there are invariant combinations corresponding to the eigenvalues of the density matrix, of which the symmetric and antisymmetric parts should be considered separately, like linear and circular polarisations of real photons. Such invariants were already applied to $\ensuremath{J/\psi}\xspace$ production at PHENIX~\cite{Gavrilova:2019jea}.
The specific instrument which is provided by tensor polarisation are the {\it positivity constraints}
for angular coefficients~\cite{Teryaev:2011zza, Gavrilova:2019jea}.
These are realised as a set of inequalities.
What is important in the transition to exclusive processes is that these inequalities become equalities.
Indeed, while the eigenvalues $\lambda_i$ of the density matrix are positive, the positivity constraints
may be expressed as a positivity of the sums of the products of the type $$\sum \lambda_i \lambda_j, \sum \lambda_i \lambda_j \lambda_k, \ldots \geq 0.$$
As soon as the exclusive process is described by a single amplitude, the respective pure-state density matrix contains only one eigenvalue, $\lambda=1$, and all the respective products turn to zero.
In particular, in some hard exclusive processes the longitudinal polarisation of dileptons may appear, instead of the usual transverse ones.
This may also be manifested at the $\ensuremath{J/\psi}\xspace$ peak.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.46\textwidth]{figures/qt1.png}
\caption{Leading-order Feynman graph for the exclusive Drell--Yan process }
\label{FigEDY}
\end{figure}
However, the clearest observation of the phenomenon may be in the exclusive Drell--Yan process~\cite{Pivovarov:2015vya}, shown in \cf{FigEDY},
which interferes with the purely electromagnetic process where the dilepton pair is produced in the collision of a quasi-real photons emitted by the colliding nucleons or nuclei.
The charge parity of the dilepton pair in the electromagnetic process is $C=+$, while that from the Drell--Yan process or the $\ensuremath{J/\psi}\xspace$ decay is $C=-$.
Because of the different charge parities in the amplitudes, the interference term is antisymmetric with the interchange of the dileptons and disappears if the integration over polar and azimuthal angles is performed.
The properties of polarisation, like Lam--Tung relation violations, can be described in the framework of a geometric model~\cite{Teryaev:2011zza,Peng:2015spa,Chang:2017kuv} when the angular distribution depends only on the polar angle in some frame, while the azimuthal angle dependence emerges due to axis rotation. This model can also be applied to the polarisation of vector mesons and quarkonia in heavy-ion collisions, leading to the (anti)correlations of polarisation with elliptic flow, which provide a qualitative explanation for the ALICE data on $\phi$ and $K^*$ polarisation~\cite{Singh:2018uad, Kundu:2018axy}. The elliptic-flow relation with the elliptic Wigner function~\cite{Hagiwara:2017fye} may be studied using tensor polarisation in the (semi)exclusive production. This relation should be manifest in the saturation of the positivity bounds for the density matrix discussed above.
Tensor polarisation of deuteron beams, which may be studied even in inclusive processes~\cite{Teryaev:2019ale}, provides access to the shear forces (complementing the pressure) generated by quarks and (in the case of quarkonia) gluons.
The crossing counterparts of the parton distribution is provided by the respective fragmentation function~\cite{Schafer:1999am}.
It is therefore possible to study shear in vector mesons by analysing the tensor polarisation of produced vector mesons.
In the case of quarkonia, this will be related to the shear generated by gluons.
\end{comment}
%
%
%
%
%
%
\section{Introduction}
The Large Hadron Collider (LHC) accelerator and detector systems are being upgraded to enable their optimal exploitation after 2027 with a tenfold increase in their instantaneous luminosity in the proton-proton (\ensuremath{pp}\xspace) running mode with respect to the nominal design values~\cite{Apollinari:2017cqg}. The ultimate high-luminosity phase of the collider (referred to as HL-LHC) will lead to the collection of huge data samples of \ensuremath{pp}\xspace collisions from total integrated luminosities reaching $\ensuremath{\mathcal L}\xspace = 3$~ab$^{-1}$ at ATLAS and CMS, and around $\ensuremath{\mathcal L}\xspace = 0.3$~ab$^{-1}$ at LHCb, by the end of the LHC operation in 2035. In addition, integrated luminosities of about 13\,\ensuremath{\textrm{nb}^{-1}}\xspace, 13\,\ensuremath{\textrm{nb}^{-1}}\xspace, and 2\,\ensuremath{\textrm{nb}^{-1}}\xspace of \rm{PbPb}\xspace data
as well as 1.2\,\ensuremath{\textrm{pb}^{-1}}\xspace, 0.6\,\ensuremath{\textrm{pb}^{-1}}\xspace, and 0.6\,\ensuremath{\textrm{pb}^{-1}}\xspace of $p\mathrm{Pb}$\ data are expected to be collected by the ATLAS/CMS, ALICE, and LHCb experiments, respectively, by 2030~\cite{Citron:2018lsq}. Such unprecedented data sets will open up exciting new physics opportunities in the study of the Standard Model~\cite{Azzi:2019yne}, and in particular of its heavy-flavour sector~\cite{Cerri:2018ypt}. This phase will also offer the possibility to collect data using the LHC proton and ion beams on Fixed Targets (FT). The corresponding physics programme~\cite{Brodsky:2012vg,Hadjidakis:2018ifr} of the LHC in the FT mode (referred to as FT-LHC) relies on extremely high {\it yearly} luminosities.
The FT mode is under study for ALICE and LHCb, where up to 10\,\ensuremath{\textrm{fb}^{-1}}\xspace in \ensuremath{pp}\xspace, 300\,\ensuremath{\textrm{pb}^{-1}}\xspace in proton-nucleus ({\ensuremath{pA}}\xspace), and 30\,\ensuremath{\textrm{nb}^{-1}}\xspace in lead-nucleus (Pb$A$) collisions are expected.
The aim of this review is to highlight the impact that the upcoming operations of the LHC, in particular the HL-LHC and FT-LHC, will have on various sectors of quarkonium-production studies in \ensuremath{pp}\xspace, {\ensuremath{pA}}\xspace, and nucleus-nucleus (\ensuremath{AA}\xspace) collisions.
Not only is the mechanism underlying the inclusive production of quarkonia ($\pazocal{Q}$) still an outstanding problem in hadroproduction~\cite{Lansberg:2019adr}, but quarkonia can also serve as tools for the study of many other aspects of Quantum Chromodynamics (QCD).
To name a few, charmonia and bottomonia can be used to probe the proton gluon content in terms of various parton densities such as parton distribution functions (PDFs, see {e.g.}\xspace~\cite{Halzen:1984rq,Martin:1987ww,Martin:1987vw,Jung:1992uj,Lansberg:2020ejc}), transverse-momentum densities (TMDs, see {e.g.}\xspace~\cite{Boer:2012bt,Dunnen:2014eta,Boer:2016bfj,Lansberg:2017dzg,Lansberg:2017tlc,Lansberg:2018fwx,Bacchetta:2018ivt,DAlesio:2019qpk,Kishore:2019fzb,Scarpa:2019fol,Boer:2020bbd}), and generalised parton densities (GPDs, see {e.g.}\xspace~\cite{Diehl:2003ny,Ivanov:2004vd,Jones:2015nna,Flett:2019pux}); the gluon content of heavy nuclei through nuclear PDFs (see {e.g.}\xspace~\cite{Ferreiro:2011xy,Lansberg:2016deg,Albacete:2016veq,Albacete:2017qng,Kusina:2017gkz}).
More generally, they allow the study of the initial stages of ultra-relativistic heavy-ion collisions (see {e.g.}\xspace~\cite{Andronic:2015wma}) and at the same time, they offer new ways to investigate the dynamics of hard multi-parton interactions (see {e.g.}\xspace~\cite{Kom:2011bd,Lansberg:2014swa,dEnterria:2016ids,Shao:2019qob}) or to measure the properties of the quark-gluon plasma (see {e.g.}\xspace~\cite{Rapp:2008tf,Andronic:2015wma,Rothkopf:2019ipj}).
The interested reader will find it useful to consult the reviews~\cite{Kramer:2001hh,Brambilla:2004wf,Lansberg:2006dh,Brambilla:2010cs,ConesadelValle:2011fw} addressing HERA and Tevatron results, and more recent ones~\cite{Andronic:2015wma,Lansberg:2019adr} covering advances in the field driven by the RHIC and LHC data.
With regards to existing experimental results, the reader is guided to the HEPData database (\href{https://www.hepdata.net/}{\tt https://www.hepdata.net/}), to a dedicated repository of quarkonium measurements up to 2012 (\href{http://hepdata.cedar.ac.uk/review/quarkonii/}{\tt http://hepdata.cedar.ac.uk/review/quarkonii/}) documented in~\cite{Andronic:2013zim} and to a recent review on RHIC results~\cite{Tang:2020ame}.
The document is organised as follows. Section~\ref{sec:pp} focuses on the studies accessible in \ensuremath{pp}\xspace\ collisions. Section~\ref{sec:excl_diff} discusses quarkonium production in exclusive and diffractive interactions, while Section~\ref{sec:spin} is devoted to the impact on studies involving transverse-momentum-dependent observables. Quarkonium studies in {\ensuremath{pA}}\xspace\ and \ensuremath{AA}\xspace\ collisions are respectively covered in Sections~\ref{sec:pa} and~\ref{sec:aa}. Finally, Section~\ref{sec:dps} addresses quarkonium production in double parton scattering (DPS) and triple parton scattering (TPS). Following the structure of similar previous prospective quarkonium studies at the LHC, such as~\cite{Lansberg:2008zm}, each chapter starts with a short summary of the current state-of-the-art and open experimental and theoretical issues, followed by a succinct list of HL-LHC studies that should further improve the understanding of all quarkonium-related physics. Wherever relevant, we have indicated when higher luminosities are needed or when luminosities similar to those already collected during the previous runs, together with dedicated triggers, will be sufficient for these new studies.
\section{Proton-nucleus collisions\protect\footnote{Section editors: Michael Winn, Bertrand Duclou\'e.
}}
\label{sec:pa}
\subsection{Introduction}
Quarkonium production in {\ensuremath{pA}}\xspace collisions is studied in several contexts at the LHC. Traditionally, it is used as a baseline for the investigation of quarkonium production in \ensuremath{AA}\xspace collisions, where the production of heavy quark-antiquark bound states with different binding energies contains information about the properties of the final-state deconfined medium (see Section~\ref{sec:aa}). In the absence of any other initial- or final-state effects, any changes to the yield in {\ensuremath{pA}}\xspace collisions compared to \ensuremath{pp}\xspace collisions are attributable to intrinsic modifications of the PDFs in the nucleus with respect to the free proton PDFs. Quarkonium production at the LHC is thereby an interesting probe of the nuclear PDFs (nPDFs) in both the collider and FT operations. Quarkonium production in the collider mode gives access to very low parton momentum fractions, down to below~$x\approx 10^{-5}$, up to still relatively large energy scales (\cf{fig:plane}). This extreme kinematic region has not been constrained by accurate measurements at any heavy-ion facility so far,
and hence deserves further attention. The FT mode allows probing large partonic momentum fractions $x>0.3$, another region where only loose constraints exist at present.
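These coverage statements follow from LO $2\to 1$ kinematics, $x_2 \simeq (M_T/\sqrt{s_{NN}})\, e^{-y_{\rm cms}}$; a sketch with illustrative kinematic choices (not official experimental coverages):

```python
import math

# LO 2->1 kinematics: a state of transverse mass M_T produced at c.m.s.
# rapidity y probes a target momentum fraction x2 ~ (M_T / sqrt(s)) e^{-y}.
# The kinematic choices below are illustrative, not official coverages.

def x2(m_T, sqrt_s, y_cms):
    """Target momentum fraction probed at transverse mass m_T, rapidity y_cms."""
    return (m_T / sqrt_s) * math.exp(-y_cms)

M_T_JPSI = 3.1  # GeV, J/psi at low P_T

# Collider mode: pPb at sqrt(s_NN) = 8.16 TeV, forward J/psi (y ~ +4)
print(f"collider:     x2 ~ {x2(M_T_JPSI, 8160.0, 4.0):.0e}")
# FT mode: sqrt(s_NN) = 115 GeV, backward J/psi (y ~ -2.8)
print(f"fixed target: x2 ~ {x2(M_T_JPSI, 115.0, -2.8):.2f}")
```

Forward collider production thus probes $x_2$ below $10^{-5}$, while backward FT production reaches $x_2$ above $0.3$, matching the two regimes discussed above.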
In addition, {\ensuremath{pA}}\xspace collisions provide an ideal arena to explore the dynamics of heavy quarks in cold nuclear matter. At collider energies, the non-perturbative hadronisation of the heavy-quark pair is factorisable from its production~\cite{Andronic:2015wma}, %
allowing the study of some universal features of cold-nuclear-matter effects. Initial-state multiple interactions of a colliding parton inside a heavy ion together with final-state multiple scatterings of the produced heavy-quark pair before it exits the nucleus can interfere, which also modifies the yield in {\ensuremath{pA}}\xspace collisions compared to \ensuremath{pp}\xspace collisions (see Section~\ref{sec:pa_cnm_models}).
However, the soft-gluon interaction between the produced heavy-quark pair and the parton spectators of the colliding beams in both \ensuremath{pp}\xspace and {\ensuremath{pA}}\xspace collisions could break the factorisation between the production and the bound-state formation that dominates at lower $\ensuremath{P_T}\xspace$. Such a factorisation breaking can be studied using the CIM (see Sections~\ref{sec:pp-xyz} \& \ref{pAcom}). Systematic theoretical treatments of the multiple scattering between the heavy-quark pair and the traversed nucleus are urgently needed for predictions for the HL-LHC {\ensuremath{pA}}\xspace programme~\cite{Albacete:2016veq,Albacete:2017qng}.
Furthermore, the heavy-ion-physics programme at the LHC has shown a smooth system-size evolution of various key QGP signatures. They appear for large final-state particle multiplicities, but extend also towards lower particle-multiplicity environments in $p\mathrm{Pb}$\ and \ensuremath{pp}\xspace\ collisions. Quarkonium is interesting both for its role as a signature of QGP creation and because its heavy mass provides an additional dimension in the investigation of these phenomena.
Strong nuclear modifications of quarkonium production were observed in {\ensuremath{pA}}\xspace\ collisions at the LHC~\cite{Andronic:2015wma}, but whether they can be ascribed to the modification of the nuclear wave function~\cite{Ferreiro:2013pua,Vogt:2015uba}, to energy loss, or to other mechanisms, including final-state phenomena usually associated with \ensuremath{AA}\xspace\ collisions, is not yet resolved. Hence, the question is open as to whether quarkonium and heavy-flavour observables are a tool to constrain the hadronic wave function or whether they inform us about final-state parton collectivity; the answer may also depend on the specific observable.
In addition to collider data, quarkonium production in the LHC FT mode, pioneered by LHCb~\cite{Aaij:2018ogq}, implemented for higher luminosity in Run~3 for LHCb and investigated for further upgrades in LHCb and ALICE, opens up new possibilities to study QCD phenomena at large $x$ in nuclei with unprecedented detail and precision~\cite{Brodsky:2012vg,Lansberg:2012kf,Massacrier:2015qba,Hadjidakis:2018ifr,Kusina:2019grp}. %
All the open questions outlined above need to be addressed in order to deepen our understanding of the QGP properties and of hadron structure at high energies. New quarkonium studies, enabled by higher luminosities in {\ensuremath{pA}}\xspace data at the HL-LHC and related theory progress, are reported in the following.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/LHCxBjorken_ppLHCb.pdf}
\caption{Kinematic plane $M_T$ vs $x_2$ at the LHC in collider and FT mode. The \ensuremath{P_T}\xspace range is limited to kinematic regions where large enough data samples can be collected. For simplicity, small gaps within the experimental acceptance at small momentum in the collider mode are omitted. FT coverages are taken from~\cite{Hadjidakis:2018ifr}.
}
\label{fig:plane}
\end{figure}
\subsection{Cold-nuclear-matter effects in $\pazocal{Q}$ production}
\label{sec:pa_cnm}
In view of using {\ensuremath{pA}}\xspace collisions as a baseline for \ensuremath{AA}\xspace collisions or as a testing ground for the initial-state partonic structure of the nucleus --{i.e.}\xspace\ using nuclear modifications in $p\mathrm{Pb}$\ collisions compared to \ensuremath{pp}\xspace\ collisions to study the so-called cold-nuclear-matter effects--, one relies on several theoretical models, based on different assumptions, to capture the origin of these effects.
\subsubsection{Theoretical models: setting the scene}
\label{sec:pa_cnm_models}
\paragraph{Collinear factorisation and nPDFs.} %
The use of nPDFs for calculations involving {\ensuremath{pA}}\xspace collisions relies on the assumption that one can factorise {\ensuremath{pA}}\xspace\ scattering cross sections into a hard component --the partonic cross section-- and soft components --the PDFs and fragmentation functions (FFs). These soft components are supposed to be universal, meaning in particular that the nPDFs would not depend on the process under study or the collision system ({e.g.}\xspace $\ell A$ or {\ensuremath{pA}}\xspace).
Even though there is still no proof that factorisation is applicable to collisions involving nuclei, there are few doubts that it applies to processes like Drell--Yan pair production in {\ensuremath{pA}}\xspace\ collisions. Yet, this remains the first and probably most important assumption made when performing nPDF fits using {\ensuremath{pA}}\xspace\ data, and it indeed deserves further dedicated studies. In addition, it is assumed that the nPDFs encapsulate all the nuclear effects at play. At FT energies, it is quite possible that energy loss plays a visible role in Drell--Yan production, whereas, at collider energies, this effect is negligible. %
Existing data usable to fit nPDFs are limited and the resulting uncertainties are significant, which makes it difficult to test the factorisation hypothesis. Future LHC {\ensuremath{pA}}\xspace\ data for processes probing different scales and incoming parton flavours can be crucial in this respect: since the DGLAP equations govern the evolution from low to high scales and couple quarks with gluons, the inclusion of a given data set in global fits would also have an impact on the description of another. However, disentangling the impact of a possibly violated factorisation assumption is not trivial at the moment, because an accurate, comprehensive description of all cold-nuclear-matter effects has not yet been reached. The simultaneous analysis of future LHC data, as well as of those that will become available at the EIC, will certainly help to develop a more complete picture in this respect.
While proton PDF fits have reached a high level of sophistication and precision~\cite{Accardi:2016ndt}, modern nPDF fits~\cite{Eskola:2009uj,deFlorian:2011fp,Kovarik:2015cma,Eskola:2016oht,Khanpour:2016pph, Walt:2019slu,AbdulKhalek:2019mzd} are not yet as precise, for basically three reasons:
\begin{enumerate}[(i)]
\item much less data are available from $\ell A$ and {\ensuremath{pA}}\xspace\ collisions than from $\ell p$ and \ensuremath{pp}\xspace\ collisions, and these data cover more restricted ranges in $x$ and in the scale;
\item nPDFs require the further determination of their $A$ dependence, where $A$ is the mass number of the nucleus, and consequently more data to obtain a similar precision;
\item
since the physics of {\ensuremath{pA}}\xspace\ collisions is more complex than that of \ensuremath{pp}\xspace\ collisions, due to the possible presence of multiple nuclear matter effects, one has to be more cautious when enlarging the data sets used in fits to new reactions.
\end{enumerate}
Whereas the accuracy of proton PDF fits is at NNLO (and further work has started towards N$^3$LO accuracy), a similar level of accuracy for unpolarised nPDF fits is desirable, but so far has only been achieved in fits exploiting nuclear DIS data~\cite{Walt:2019slu, AbdulKhalek:2019mzd}. In the NNLO fit of~\cite{Khanpour:2016pph}, besides DIS data, FT {\ensuremath{pA}}\xspace\ Drell--Yan data have been used, covering larger $x$ values, up to $x \approx 1$.
Unfortunately, data on FT $\ell A$ DIS can only constrain nPDFs down to $x \approx 6 \cdot 10^{-3}$.
In contrast, the minimum $x$ probed in {\ensuremath{pA}}\xspace\ collisions at the LHC extends to significantly lower values, depending on the {\rm c.m.s.}\xspace, the kinematic cuts, and the measured final states.
Therefore, the tendency to use collider data obtained in {\ensuremath{pA}}\xspace\ collisions to constrain nPDFs has recently increased, mainly thanks to newly available hard-scattering high-precision measurements from the $p\mathrm{Pb}$\ runs at the LHC.
\paragraph{The Colour-Glass Condensate (CGC).} At high collision energy (or small $x$ values), parton densities inside hadrons become so large that non-linear effects such as gluon recombination become important, which can lead to a saturation of the gluon density. The CGC effective field theory
~\cite{McLerran:1993ni,McLerran:1993ka,McLerran:1994vd,Iancu:2002tr,Iancu:2003xm,Gelis:2010nm,Kovchegov:2012mbw}
provides a convenient framework to describe processes in this regime. In this formalism, a {\ensuremath{pA}}\xspace collision can be seen as the scattering of a dilute projectile (the proton) on a dense target (the nucleus). Because the gluon density in a nucleus scales roughly like $A^{1/3}$, such non-linear effects are enhanced in {\ensuremath{pA}}\xspace compared to \ensuremath{pp}\xspace collisions. The Nuclear Modification Factor (NMF) $\ensuremath{R_{pA}}\xspace$ is thus a useful observable to study saturation effects. In particular, the CGC provides a natural explanation for the decreasing behaviour of $\ensuremath{R_{pA}}\xspace$ as a function of rapidity observed in forward $\ensuremath{J/\psi}\xspace$ production at the LHC, which probes very small $x$ values: since at a given $x$ the gluon density in a nucleus is larger than in a proton and thus closer to saturation, it will not increase as quickly with decreasing $x$ (or increasing rapidity). %
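For reference, the NMF compares the {\ensuremath{pA}}\xspace\ yield to the \ensuremath{pp}\xspace\ one scaled by the number of nucleons, with unity corresponding to the absence of nuclear effects (written here in its standard rapidity-differential form):

```latex
\begin{equation*}
  \ensuremath{R_{pA}}\xspace(y) = \frac{1}{A}\,
  \frac{\mathrm{d}\sigma_{pA}/\mathrm{d}y}{\mathrm{d}\sigma_{pp}/\mathrm{d}y} .
\end{equation*}
```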
While this general trend was present in the first predictions for this process at the LHC in the CGC formalism~\cite{Fujii:2013gxa} and was subsequently confirmed experimentally~\cite{Abelev:2013yxa}, it turned out that the measured $\ensuremath{R_{pA}}\xspace$ was much larger than the predicted one. The origin of this discrepancy can be attributed to approximations related to the initial conditions for the gluon density. Indeed, while in the CGC formalism the evolution of the gluon density as a function of $x$ is fully (perturbatively) determined by the Balitsky--Kovchegov equation~\cite{Balitsky:1995ub,Kovchegov:1999yj}, the initial conditions of this evolution, expressed at moderate $x_0 \approx 0.01$, involve non-perturbative dynamics that cannot be computed. The initial conditions for a proton target can be extracted by a fit to HERA DIS data, but the lack of similar high-precision low-$x$ data for nuclei makes some modelling mandatory in the case of proton-nucleus collisions. In~\cite{Fujii:2013gxa}, this was done by taking $Q_{s 0, A}^2=c Q_{s 0, p}^2$, where $Q_{s 0, p}$ and $Q_{s 0, A}$ are the initial saturation scales of the proton and nucleus, respectively, and $c \sim A^{1/3}$. In practice, the value of $c$ was varied between 4 and 6. However, a study looking at the $A$ dependence of $F_2$ measured by the NMC collaboration~\cite{Arneodo:1996rv} found that these data are best described with $c$ values between 1.5 and 3~\cite{Dusling:2009ni}. Another way to extrapolate the initial condition of a proton to a nucleus is to use, as in~\cite{Lappi:2013zma}, the optical Glauber model, which assumes that at $x=x_0$ the high-energy probe scatters independently off the nucleons, which are distributed according to the standard Woods--Saxon distribution~\cite{dEnterria:2020dwq}. These two methods were shown to lead to results in good agreement with experimental data~\cite{Ducloue:2015gfa,Ma:2015sia,Fujii:2015lld}.
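For orientation, the Woods--Saxon density entering the optical Glauber model is the standard two-parameter Fermi distribution,
\begin{equation}
\rho_A(r) = \frac{\rho_0}{1+\exp\left[(r-R_A)/d\right]}\,, \qquad R_A \simeq 1.1\, A^{1/3}~\mathrm{fm}\,, \qquad d \simeq 0.54~\mathrm{fm}\,,
\end{equation}
with $\rho_0$ fixed by the normalisation $\int \mathrm{d}^3 r\, \rho_A(r) = A$; the parameter values quoted here are indicative only, and tabulated values for specific nuclei can be found in~\cite{dEnterria:2020dwq}.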
\paragraph{Coherent energy loss.} Another possible nuclear effect is medium-induced gluon radiation via multiple scatterings of an incoming probe in the target nucleus. In~\cite{Arleo:2010rb}, it was shown that, for long formation times, the interference between initial- and final-state emissions can lead to an energy loss which is proportional to the incoming particle energy. This is in contrast with the Landau--Pomeranchuk--Migdal (LPM) effect~\cite{Wang:1994fx}, at play for short formation times, whose energy dependence is at most logarithmic and is expected to be small at LHC energies. This fully coherent energy loss (the coherent action of all scattering centres in the medium at large formation times) requires both initial and final states to be coloured (in this model, quarkonium production is assumed to proceed via the splitting of an incoming gluon into a quark-antiquark pair in a colour octet state). Under these assumptions, the energy-loss spectrum is found to depend only on one free parameter, the transport coefficient of the nuclear medium $\hat{q}$. A fit to E866 data~\cite{Leitch:1999ea} leads to a value of $\hat{q}=0.075$~GeV$^2$/fm~\cite{Arleo:2012rs} which can then be used to make predictions for other energies. The main output of the calculation is the energy loss probability distribution that, when convoluted with the \ensuremath{pp}\xspace\ cross section (evaluated at a shifted energy corresponding to the energy loss), leads to the {\ensuremath{pA}}\xspace\ cross section. To reduce the number of assumptions and parameters (especially in the case of quarkonia, for which the production mechanism is not well understood), in~\cite{Arleo:2012rs} the cross section in \ensuremath{pp}\xspace collisions is not calculated but instead obtained by fitting experimental data using a simple functional form. However, it is not clear how such an energy loss depends on the production mechanism or on the quantum numbers of the produced quarkonia.
Under these assumptions, the resulting prediction for the NMF of $\ensuremath{J/\psi}\xspace$ production was later found to be in good agreement with the measurements at the LHC, both at forward and backward rapidities.
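Schematically, the {\ensuremath{pA}}\xspace\ cross section in this approach follows from the convolution described above (the notation here is illustrative),
\begin{equation}
\frac{\mathrm{d}\sigma_{pA}}{\mathrm{d}E}(E) \propto \int_0^{\varepsilon_{\max}} \mathrm{d}\varepsilon\; \mathcal{P}(\varepsilon)\, \frac{\mathrm{d}\sigma_{pp}}{\mathrm{d}E}(E+\varepsilon)\,,
\end{equation}
where $\mathcal{P}(\varepsilon)$ is the energy-loss probability distribution, whose shape is controlled by the single parameter $\hat{q}$.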
\paragraph{Global view.}
As shown in \cf{fig:jpsiRpA}, several calculations, based on different theoretical models, are able to describe the experimental measurements of the $\ensuremath{J/\psi}\xspace$ NMF as a function of rapidity in $p\mathrm{Pb}$\ collisions at the LHC. The question of which physical mechanisms are responsible for these observed nuclear modifications is still under debate, noting that these effects are not all mutually exclusive.
In particular, the suppression of quarkonium production at forward rapidities can be caused by shadowing, by coherent energy loss, by heavy-quark absorption in the nucleus, or by a combination of these effects. To ascertain which effects are responsible for the observations, simultaneous measurements of different heavy-flavour probes in {\ensuremath{pA}}\xspace\ and \ensuremath{AA}\xspace\ collisions covering the same small $x$ values, in collisions at the same $\ensuremath{\sqrt{s_{_{NN}}}}$, can help to disentangle them.
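For reference, the NMF used throughout this section is the standard per-nucleon-normalised ratio of {\ensuremath{pA}}\xspace\ and \ensuremath{pp}\xspace\ cross sections, written here differentially in rapidity:
\begin{equation}
\ensuremath{R_{pA}}\xspace(y) = \frac{1}{A}\,\frac{\mathrm{d}\sigma_{pA}/\mathrm{d}y}{\mathrm{d}\sigma_{pp}/\mathrm{d}y}\,,
\end{equation}
such that $\ensuremath{R_{pA}}\xspace = 1$ in the absence of nuclear effects.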
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{figures/RpA_ALICE_LHCB_y_v5-90856.pdf}
\caption{Comparison of the ALICE~\cite{Acharya:2018kxc} and LHCb~\cite{Aaij:2017cqq} measurements of the NMF of $\ensuremath{J/\psi}\xspace$ production in $p\mathrm{Pb}$\ collisions at $\ensuremath{\sqrt{s_{_{NN}}}}=8.16$ TeV with several model calculations~\cite{Ferreiro:2014bia,Kusina:2017gkz,Albacete:2017qng,Ma:2017rsu,Ducloue:2016pqr,Arleo:2014oha,Chen:2016dke}. Note that the curves labelled nCTEQ15 and EPPS16 are obtained after reweighting the corresponding nuclear PDF sets using LHC heavy-flavour data. [Figure taken from~\cite{Acharya:2018kxc}.]
%
%
%
}
\label{fig:jpsiRpA}
\end{figure}
It is also worth noting some essential issues in the model calculations. Firstly, the fundamental mechanisms of heavy-quark pair production in {\ensuremath{pA}}\xspace\ collisions could be quite different from those in \ensuremath{pp}\xspace\ collisions. In particular, when $\ensuremath{P_T}\xspace\lesssim m_Q$, and when one specifically addresses $\ensuremath{P_T}\xspace$-dependent quantities as opposed to integrated ones, the perturbative QCD collinear factorisation approach for quarkonium production is not necessarily the most reliable theoretical approach (see {e.g.}\xspace~\cite{Brambilla:2010cs,Lansberg:2019adr} for a review).
As discussed in Secs.\,\ref{sec:quarkonium-pt-pp} and \ref{sec:spin}, the transverse momentum dependent (TMD) factorisation framework should take over from collinear factorisation for heavy-quark pair production when $\ensuremath{P_T}\xspace \ll m_Q$, which makes the interpretation in terms of nPDF effects unclear.
For evaluating nuclear effects in the TMD factorisation approach, one must clarify how to include nuclear size or $A^{1/3}$-enhanced power corrections~\cite{Accardi:2004be,Qiu:2001hj,Qiu:2003vd} into the leading-twist TMD factorisation approach~\cite{Kang:2008us}, although, in general, the power corrections in hadronic collisions cannot be factorised beyond the sub-leading power~\cite{Qiu:1990xy,Qiu:1990cu}. Besides, one must understand how to incorporate the nuclear dependence into the non-perturbative TMD distributions~\cite{Kang:2012am}. Interestingly, it has been clarified that the leading-twist TMD factorisation framework can be recovered by discarding higher-body scattering corrections in the CGC framework~\cite{Altinoluk:2019fui,Altinoluk:2019wyu,Fujii:2020bkl}. Therefore, some precautions are required to compare nPDFs with parton saturation effects. Higher-twist effects can be studied by considering ``clean'' processes, such as Drell--Yan production in {\ensuremath{pA}}\xspace\ collisions, and semi-inclusive nuclear DIS at the EIC. See Section~\ref{ex:disc} for experimental prospects.
It has been argued that if the quarkonium is produced at a very forward rapidity, the hadronisation of the pair takes place outside of the colliding heavy ion (see {e.g.}\xspace~\cite{Arleo:1999af} and references therein).
Multiple scattering of the produced pair in the nuclear medium could enhance its invariant mass so much (beyond the $D\bar{D}$ or $B\bar{B}$ mass threshold) as to prevent the pair from binding, leading to threshold-sensitive suppression~\cite{Qiu:1998rz}. Some model calculations have been carried out along this line in the CGC framework~\cite{Ma:2017rsu}. This threshold-sensitive suppression could be caused by multiple scattering of the pair in the colliding ion or by exchanging soft gluons with co-moving spectator partons of the beam, also known as the comover mechanism. One could investigate this threshold suppression effect on top of the energy loss effect. Thus, a careful examination is required in order to distinguish the modification of the hadronisation mechanism in the nuclear medium from the modification of the initial PDFs, higher-twist effects, and so forth. See Section~\ref{sec:hadronisation-pA} for a discussion about hadronisation.
\subsubsection{Improved constraints on nPDFs from LHC data}
Given the caveats already discussed regarding possible confounding factors in {\ensuremath{pA}}\xspace\ collisions, the usage of quarkonium to improve our knowledge of nPDFs depends on whether or not the nPDF effect is the dominant one. Even though such a dominance may not be straightforward to establish, one should note that the quarkonium data sets are large and relatively precise, and are thus constraining under this working hypothesis, which can be falsified if tensions with other data sets appear.
The first $p\mathrm{Pb}$\ LHC data included into global nPDF fits~\cite{Eskola:2016oht} are from inclusive $W$ and $Z$ boson production measurements, constraining valence and sea quarks down to $x \approx 10^{-3}$ {at high scale}, and di-jet data, with direct sensitivity to the gluon distribution (whereas inclusive DIS is sensitive to gluons only indirectly, through scale violations) down to $x \approx 5 \cdot 10^{-3}$.
Including data on charmonium and bottomonium production, as well as open-charm production in the LHC kinematics, can further extend the range in $x$ and the scale. In particular, in~\cite{Kusina:2017gkz}, using LHC heavy-flavour data at {\rm c.m.s.}\xspace energies up to 8~TeV to reweight two nPDF fits (EPPS16 and nCTEQ15) led to evidence for shadowing effects at $x \approx 10^{-5}$ and for antishadowing at $x \approx 10^{-1}$. It was also shown that the inclusion of these data has a strong impact on the gluon nPDF uncertainties, with a reduction of at least a factor of two with respect to the nominal values.
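As a point of reference, in a replica-based Bayesian reweighting (one common prescription; the analyses cited above also employ Hessian-based variants), each of the $N$ PDF replicas $f_k$ is assigned a weight determined by its $\chi^2_k$ against the new data,
\begin{equation}
w_k = \frac{N\, e^{-\chi^2_k/2}}{\sum_{j=1}^{N} e^{-\chi^2_j/2}}\,, \qquad \langle \mathcal{O} \rangle_{\mathrm{new}} = \frac{1}{N}\sum_{k=1}^{N} w_k\, \mathcal{O}[f_k]\,,
\end{equation}
so that replicas disfavoured by the data are suppressed and the effective uncertainty band shrinks.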
This first very promising analysis encourages further developments. In particular, the matrix elements used in the study correspond to a purely phenomenological parametrisation rather than a calculation from a Lagrangian. It would be interesting to check to what extent the results of the study are confirmed by adopting a more sound theoretical description of the considered processes, once it becomes available. This development is not straightforward because of the present limitations of the NRQCD framework, which has not been able to explain all charmonium observables simultaneously. %
In addition, it was stressed in~\cite{Kusina:2017gkz,Kusina:2018pbp,Kusina:2020dki} that the nPDF constraints set by the LHC $p\mathrm{Pb}$\ heavy-flavour data crucially depend on the value of the factorisation scale, $\mu_F$, at which the reweighting is performed. This should always be kept in mind when these constraints are discussed, especially for the charm(onium) cases, where the constraints are stringent, while the ambiguity from the scale choice is significant.
Moreover, since the analysis of~\cite{Kusina:2017gkz} is based on a reweighting technique, the compatibility with other data ({e.g.}\xspace\ Drell--Yan data and DIS data) included in the original nPDF fits has only been assessed a posteriori, by verifying that the goodness-of-fit ($\chi^2$) is not worsened by the inclusion of the additional heavy-flavour data. It would be desirable to go beyond this approach, by including all the data at the same time, {i.e.}\xspace\ from the very beginning, in the nPDF fit. This would allow a homogeneous treatment of all parameters in the theory predictions, ensure that they are fully consistent with each other, and keep track of correlations between different sets of data. In this first reweighting study, the data from $D$, $J/\psi$, $B$ and $\ensuremath{\Upsilon}\xspace$ were considered separately. In the case of $D$ and $J/\psi$, it was however noted that the constraints for strong shadowing were very similar, which is a first sign of the universality expected if the nPDF factorisation holds~\cite{Kusina:2017gkz} (see also~\cite{Eskola:2019bgf}).
It remains an open question whether the antishadowing effect obtained in~\cite{Kusina:2017gkz} is a consequence of the data included, or simply depends on having imposed the momentum sum rule. In order to explore this further, it is important to consider data at various $x$ values, {e.g.}\xspace\ in different rapidity ranges, including the antishadowing peak region, as well as overlaps between different data sets, covering slightly different $x$ ranges.
It was also noted~\cite{Kusina:2017gkz} that LHC bottomonium data (as well as $B$ data) are affected by much smaller scale uncertainties, but that the current very large experimental uncertainties do not yet allow the gluon nPDF to be constrained. It is expected that with forthcoming data, in particular those on bottomonium production, the nPDF constraints will further improve~\cite{Citron:2018lsq}. The statistical uncertainties for ALICE and LHCb from Run-3 and -4 data will shrink by about a factor of 5 compared to the 2016 data at $\ensuremath{\sqrt{s_{_{NN}}}}=8.16$~TeV; systematic uncertainties will also improve thanks to the better precision with which control channels can be determined.
ATLAS and CMS will reduce the statistical uncertainties by about a factor of 2 or 3. These numbers do not include further improvements that may be achieved due to better resolution, acceptances, or efficiencies.
The high-luminosity phase of the LHC should also allow for precise measurements of the feed-down contributions from $\chi_c$ and $\ensuremath{\psi(2S)}\xspace$ states to $\ensuremath{J/\psi}\xspace$, and from $\chi_b$, $\ensuremath{\Upsilon(2S)}\xspace$ and $\ensuremath{\Upsilon(3S)}\xspace$ to $\ensuremath{\Upsilon}\xspace$, which are fundamental for reliable predictions of the absolute cross sections for $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\Upsilon}\xspace$ production and for controlling the impact of final-state effects.%
\subsubsection{{\ensuremath{pA}}\xspace collisions at the FT-LHC: high-precision input for global nPDF fits}
In addition to collider-mode data, FT data at the LHC, using various nuclear targets (He, Ar, Ne, Xe, \dots) at relatively low $\ensuremath{\sqrt{s_{_{NN}}}}$, can provide crucial constraints for global nPDF fits, especially in the large-$x$ region. These data can be regarded as complementary to the {\ensuremath{pA}}\xspace collider measurements because they allow for the study of NMFs down to small values of $A$. Using relatively light targets is very important not only to understand nuclear-matter effects for increasing system sizes, but also for high-energy-cosmic-ray astrophysical applications~\cite{Zenaiev:2019ktw, Bhattacharya:2016jce}, bearing in mind that the extrapolation from the region of large $A = 208$ (for Pb) to the region of $A \lesssim 20$ is delicate at large $x$. Along the same lines, a high-luminosity collider run with oxygen will be particularly useful~\cite{Citron:2018lsq}.
In the LHC FT kinematics, one is sensitive to the EMC effect (see~{e.g.}\xspace \cite{Arneodo:1996rv}), which is seen to be linked to short-range nucleon correlations~\cite{Hen:2016kwk}. These correlations are strongly nucleus-dependent, and thus a simple $A$-dependent parameterisation like $\sigma_{\ensuremath{pA}}\xspace=\sigma_{pp} \times A^\alpha$ is not suitable in the EMC region.
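For reference, such a power-law parameterisation maps directly onto the NMF,
\begin{equation}
\ensuremath{R_{pA}}\xspace = \frac{\sigma_{pA}}{A\,\sigma_{pp}} = A^{\alpha-1}\,,
\end{equation}
so that a single, $x$-independent value of $\alpha$ cannot capture the strongly nucleus-dependent modifications observed in the EMC region.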
We also stress the fundamental importance of having a H target to derive a precise enough \ensuremath{pp}\xspace baseline reference at \ensuremath{\sqrt{s_{_{NN}}}}=115~GeV in order to extract \ensuremath{R_{pA}}\xspace, which can then be compared to theoretical models.
The high-intensity LHC beams offer unique opportunities for HL {\ensuremath{pA}}\xspace data taking in the FT mode as detailed in~\cite{Hadjidakis:2018ifr}. In the kinematic regime accessible with ALICE\footnote{for which a FT upgrade is under consideration.} and LHCb\footnote{for which a FT programme exists with an ongoing upgrade.}, the $x_2$ region to be probed by charmonium and bottomonium production spans from 0.02 up to 1. Quarkonium production in this energy range may be subject to several effects besides those of the nPDFs. One effect is the energy loss (see {e.g.}\xspace~\cite{Arleo:2018zjw}); another is the dissociation of quarkonium due to secondary interactions within the nucleus that could explain the suppression patterns at the SPS and partially at RHIC~\cite{Vogt:1999cu,Satz:2005hx,Ferreiro:2008wc,Ferreiro:2011xy}. The separate quantification of these nuclear modifications will require a systematic study of open heavy flavour, quarkonium, and Drell-Yan production over the kinematically available phase space accessible via measurements with large integrated luminosity and varying nuclear targets. Once these ambiguities are lifted based on a careful analysis of the various dependencies, quarkonium production offers unique access to parton densities in the EMC-effect region of the nuclei, where strong nuclear effects were observed for quark degrees of freedom, but where the gluonic modifications remain largely \textit{terra incognita}.
In the collider mode, this region can also be probed with top quark production~\cite{dEnterria:2015mgr}, which is severely statistically limited, and, more promisingly, with di-jet measurements, as already started with CMS data~\cite{Chatrchyan:2014hqa,Sirunyan:2018qel,Citron:2018lsq}.
Compared to these collider-mode data, FT quarkonium data probe very different hard scales and energies but similar $x$ values. These measurements are thus complementary since, as explained in Section~\ref{sec:pa_cnm_models}, their description in collinear factorisation is connected via DGLAP evolution. Performing both kinds of measurements would thus lead to more stringent constraints on nPDFs and potentially allow one to test the factorisation hypothesis.
Several possible experimental setups will allow for high-luminosity {\ensuremath{pA}}\xspace data-taking in ALICE and LHCb~\cite{Hadjidakis:2018ifr}. Gaseous targets as pioneered with the upgraded SMOG setup~\cite{LHCBSMOG}, starting operation in 2021, will enable the collection of high-luminosity \ensuremath{pp}\xspace\ data. %
Fig.~\ref{fig:pAFT} illustrates the constraining power of \ensuremath{J/\psi}\xspace and $\ensuremath{\Upsilon}\xspace$ production for the nPDFs in the EMC region, based on the yearly luminosities achievable with LHCb and a gaseous FT setup, assuming that nPDF effects are the dominant mechanism behind the nuclear modifications or that other nuclear effects are subtracted, which {\it de facto} requires that various probes be measured together to disentangle these different effects.
\begin{figure}[htpb!]
\centering
\includegraphics[width=0.48\textwidth]{figures/fig_jpsi_FT.pdf}\hspace{0.2cm}
\raisebox{6pt}{\includegraphics[width=0.48\textwidth]{figures/fig_ups_FT.pdf}}
\caption{Ratio of the xenon nuclear over the proton PDFs, showing the impact of a nPDF reweighting according to generated pseudodata for prompt $\ensuremath{J/\psi}\xspace$~production~(Left) and $\ensuremath{\Upsilon}\xspace$~(Right) for integrated luminosities of 10~fb$^{-1}$ (for \ensuremath{pp}\xspace collisions) and 100~pb$^{-1}$ (for $p$Xe collisions) in a LHCb-like setup. A strong reduction in the nPDF uncertainty is visible in the EMC region $0.3<x<0.7$. [Figures taken from~\cite{Hadjidakis:2018ifr}].}
\label{fig:pAFT}
\end{figure}
\subsubsection{Developments in theory and phenomenology of QCD at low $x$ and its connection to $\pazocal{Q}$ production}
Most phenomenological studies in the CGC formalism performed recently have relied on the LO approximation, with a subset of NLO corrections related to the running of the $\ensuremath{\alpha_{\rm s}}$ coupling. In particular, this includes calculations of forward $\ensuremath{J/\psi}\xspace$ production in {\ensuremath{pA}}\xspace collisions that have been compared to LHC data~\cite{Fujii:2013gxa,Ducloue:2015gfa,Ma:2015sia}. Given that $\ensuremath{\alpha_{\rm s}}$ is not very small, especially at the low scales probed in $\ensuremath{J/\psi}\xspace$ production, theoretical uncertainties on cross sections are large, which is one of the reasons why many studies focus on ratios such as the NMF. The full NLO corrections are expected to be sizeable and must be taken into account for realistic applications and reliable comparisons with experimental data, not only on ratios but also on cross sections. Calculations in this formalism rely on factorisation into a process-dependent hard part, or impact factor, which can be computed perturbatively, and unintegrated gluon distributions, whose $x$ evolution is governed by the Balitsky--Kovchegov (BK) equation, and are assumed to be process-independent (although not all processes probe the same distributions~\cite{Dominguez:2011wm}). Intense efforts are ongoing to extend the accuracy of the CGC framework to NLO, which requires the NLO expressions of both the BK equation and process-dependent impact factors.
The NLO BK equation was first presented in~\cite{Balitsky:2008zza}, and the first numerical solution to this equation showed that it is unstable because of very large and negative NLO corrections enhanced by collinear logarithms~\cite{Lappi:2015fma}. This instability was not unexpected, since it already appeared with the NLO BFKL equation (the linearised version of the BK equation) and was cured by resumming these logarithms to all orders, restoring the convergence of the perturbative expansion. Similar resummations were proposed in the non-linear BK regime~\cite{Beuf:2014uia,Iancu:2015vea}, apparently leading to a stable evolution~\cite{Iancu:2015joa,Albacete:2015xza,Lappi:2016fmu}. However, it was later found that while the evolution itself is stable, it is affected by a very large resummation-scheme dependence that spoils the predictability of the calculations~\cite{Ducloue:2019ezk}. This was traced back to the fact that the studies in~\cite{Beuf:2014uia,Iancu:2015vea} considered the evolution as a function of the projectile rapidity and not the target rapidity, which is the natural variable in this context and simply corresponds to $\ln 1/x_\text{Bj}$ in the case of DIS. Considering the evolution as a function of the target rapidity and performing the resummation in this variable leads to a stable evolution, a small resummation-scheme dependence, and good agreement with HERA DIS data at small $x$~\cite{Ducloue:2019jmy,Beuf:2020dxl}.
The other important source of higher-order corrections is the process-dependent impact factor. The first process for which the NLO impact factor, necessary for saturation studies, was derived is single inclusive light-hadron production~\cite{Chirilli:2011km,Chirilli:2012jd}. The numerical implementation of these expressions~\cite{Stasto:2013cha} showed that the NLO cross section turns negative when the produced-particle \ensuremath{P_T}\xspace is on the order of the saturation scale, which is precisely the regime where the CGC formalism should apply. {A new formulation of the factorisation at NLO~\cite{Iancu:2016vyg}, which relaxes some approximations made in~\cite{Stasto:2013cha} and which leads to a factorisation that is non-local in rapidity}, was shown to lead to physical results~\cite{Ducloue:2017mpb}. It was also found that a very similar problem appears with the recently-derived NLO impact factor for inclusive DIS~\cite{Beuf:2017bpd} and that it can be solved in the same way~\cite{Ducloue:2017ftk}.
Despite recent progress in these two directions (evolution and impact factors), there is at the moment no phenomenological study taking into account both sources of NLO corrections in {\ensuremath{pA}}\xspace\ collisions, which would be necessary for more reliable comparisons with data. In addition, only a few impact factors have been computed up to NLO, and these only consider massless quarks, as additional complications arise due to finite-mass effects. Therefore, comparisons of NLO calculations with LHC quarkonium data cannot be expected in the near future. However, there are better prospects for relatively simpler processes such as forward light-hadron~\cite{Chirilli:2011km,Chirilli:2012jd,Stasto:2013cha,Stasto:2014sea,Iancu:2016vyg,Ducloue:2017mpb} or isolated-photon~\cite{Benic:2016uku,Benic:2018hvb} production, which can also probe very small $x$ values in the target.
The experimental study of a wider variety of processes at the LHC is also important in view of the current limitations related to the modelling of the initial conditions for a nucleus (see the discussion in Sec.~\ref{sec:pa_cnm_models}). Future {\ensuremath{pA}}\xspace\ data on isolated photon and Drell--Yan production
would provide direct constraints on the nuclear wave function, and allow one to test whether these various processes can be described with a single set of parameters. Theoretical calculations for these processes are also under better control, as they are not affected by the uncertainties related to the hadronisation mechanism of quark-antiquark pairs into quarkonia.
Future data on light-hadron production would also be valuable. While absolute cross sections for this process are affected by rather large fragmentation-function uncertainties~\cite{dEnterria:2013sgr}, the NMF is a more robust observable, and the comparison with other processes could help to discriminate between different nuclear suppression mechanisms. For instance, in the CGC approach, the suppression at forward rapidities is similar for light-hadron and Drell-Yan production~\cite{Ducloue:2017kkq,Ducloue:2017zfd}. On the other hand, in the coherent energy loss model, the former process shows a sizeable suppression~\cite{Arleo:2020eia,Arleo:2020hat} while the NMF for Drell-Yan is unity at forward rapidity, since one considers the production of a colourless object~\cite{Arleo:2015qiv}. The comparison between quarkonium and Drell-Yan suppression was advocated in~\cite{Arleo:2015qiv} as a possible way to discriminate between initial- and final-state effects (see the discussion in the following section). However, light-hadron production may be a more reliable example than quarkonium of a process sensitive to coherent energy loss, since it is generally better understood.
\subsubsection{Discrimination of different nuclear effects with new measurements}
\label{ex:disc}
In order to resolve the ambiguity between energy loss and the modification of the nuclear wave function, electromagnetic measurements involving hard scales similar to those of quarkonium production, such as Drell--Yan pair~\cite{LHCb:2018qbx} or photon~\cite{ALICE-PUBLIC-2019-005} production, in particular at forward rapidities, can be crucial.
The authors of~\cite{Arleo:2015qiv} proposed to study the ratio of the NMFs of $\ensuremath{J/\psi}\xspace$ over Drell-Yan production as a function of rapidity. They showed that this ratio has a very different behaviour in calculations employing nPDFs, where it remains close to or above unity, and in the coherent energy loss model, where it decreases quickly (see~\cf{fig:DY} (Right)). However, it should be noted that a full cancellation of the scale uncertainties is assumed in these different processes, which may not be justified and needs to be investigated with further studies. A calculation of this ratio in the CGC formalism shows that it is relatively flat and close to unity~\cite{Ducloue:2017zfd}. Therefore, this observable could help to discriminate between models based on the energy loss and on nuclear modifications of parton densities, with or without saturation. The NMF of Drell-Yan production itself is also of great interest: since this process is not subject to coherent energy loss, this ratio is unity in the energy-loss model at forward rapidities, where isospin effects are negligible~\cite{Arleo:2015qiv}. On the contrary, recent nPDF fits such as nCTEQ15~\cite{Kovarik:2015cma} and EPPS16~\cite{Eskola:2016oht} show a rather strong shadowing at small $x$ values, which would lead to an NMF significantly below unity. This can be seen for example in \cf{fig:DY} (Left), where the central value for the LHCb projection was obtained using the EPPS16 NLO set. It has to be emphasised that here the nPDF parameterisations for the gluon distribution assume the same functional shape as for the sea quarks, which is currently difficult to test with the limited data available. Future, more accurate data could invalidate this assumption and show the need for new nPDF fits with more flexible parameterisations. In view of this, one is entitled to question the discriminative power of such ratios of NMFs advocated in~\cite{Arleo:2015qiv}.
Yet, a measurement as precise as possible of the Drell-Yan process in this kinematic regime is welcome as an important and clean probe of the nPDFs in a range where gluons dominate the partonic content.
\begin{figure}[htpb!]
\centering
\raisebox{6pt}{ \includegraphics[width=0.52\textwidth]{figures/Fig1b.pdf} }
%
\includegraphics[width=0.45\textwidth]{figures/dyjps_ppb_5020_nlo.pdf}
\caption{Left: Projection for the measurement of \ensuremath{R_{p\rm Pb}}\xspace for Drell-Yan pair production with the anticipated integrated luminosity for Run 3 and Run 4 at LHCb, see~\cite{LHCb:2018qbx} for details. [Figure taken from~\cite{LHCb:2018qbx}]. Right: Double ratio of the NMFs for Drell-Yan and quarkonium production in $p\mathrm{Pb}$\ collisions for various nPDFs (see the text for the assumptions used). [Figure adapted from~\cite{Arleo:2015qiv}]. %
}
\label{fig:DY}
\end{figure}
Since these measurements access an uncharted domain of the nuclear wave function at low $x$, clearly beyond the reach of the future EIC, they should be pursued with high priority for the quest of non-linear parton evolution, as pointed out in the CERN Yellow Report~\cite{Citron:2018lsq}. Such measurements would also provide new data to be included in global fits of nPDFs, which are at present poorly constrained at low $x$ and scales due to a lack of existing data in this region. This would help to reduce the corresponding theoretical uncertainties and thus lead to a better discrimination with other models.
The studies performed for Drell-Yan measurements illustrate the need for large data samples, as can be seen from the relatively large error bars in \cf{fig:DY} (Left), in all bins except that containing the $Z$ boson.
The systematic uncertainties, dominated by the knowledge of the background, in particular at low invariant masses, could also be further reduced by improved precision. The CMS collaboration recently made a step in this direction, extending the Drell-Yan measurement down to $M_{\mu\mu}\approx 15$~GeV requiring a $\ensuremath{P_T}\xspace$ greater than 15~GeV for the leading muon and 10~GeV for the sub-leading muon in the central rapidity acceptance of CMS ($|\eta_\mu|<2.4$)~\cite{CMS-PAS-HIN-18-003}. The current precision of the measurement cannot discriminate between different scenarios in this central rapidity region and at still rather high lepton-pair momenta.
In order to make full use of the {\ensuremath{pA}}\xspace\ measurements at the HL-LHC, it will also be necessary to improve the theory in order to tame the impact of the factorisation-scale uncertainty that can become the largest of all the theoretical uncertainties, in particular in the regime of charm production~\cite{Kusina:2017gkz,Kusina:2020dki}.
Our discussion of the aforementioned models has, until now, only concerned vector quarkonium states.
In parallel, specific efforts are needed both on the theory and experimental sides to compare these models to measurements of {e.g.}\xspace\ $\chi_c$ and $\eta_c$. This would allow one to further test the underlying theoretical assumptions. LHCb pioneered the measurement of the hidden-over-open-charm ratio~\cite{Aaij:2017gcy}, but with uncertainties that were too large to reveal differences from the value obtained in \ensuremath{pp}\xspace\ collisions.
More precise measurements are therefore required and could provide information on the magnitude of effects that act differently on quarkonium and open-charm production\footnote{However, the interpretation of such ratios should always be made while considering the possible non-cancellation of theoretical uncertainties such as those from the unphysical renormalisation and factorisation scales.}. These measurements are also of prime interest in the context of heavy-ion-like suppression patterns observed for the excited states as well as considerations related to their azimuthal anisotropies.
In addition, a comprehensive set of precise quarkonium measurements in {\ensuremath{pA}}\xspace collisions including polarisation data may provide better constraints than when considering only \ensuremath{pp}\xspace\ data in data-theory comparisons and allow for additional consistency checks. For example, in the CGC+NRQCD approach, each intermediate state is suppressed in a different way~\cite{Ma:2015sia}. Including \ensuremath{R_{pA}}\xspace data in global fits could be a way to try to set more stringent constraints on the LDMEs and to further test NRQCD, although this, of course, would be at the expense of the predictivity of the CGC+NRQCD approach in {\ensuremath{pA}}\xspace collisions.
\subsection{Flow-like phenomena in $\pazocal{Q}$ production}
\subsubsection{Theoretical prospects}
Traditionally, {\ensuremath{pA}}\xspace\ collisions were considered to produce final states without any QGP, and were therefore used as a tool to study cold-nuclear-matter effects as well as a baseline reference for QGP effects in \ensuremath{AA}\xspace\ collisions. However, the discovery of collective phenomena in high-multiplicity \ensuremath{pp}\xspace~\cite{Khachatryan:2010gv} and $p\mathrm{Pb}$~\cite{CMS:2012qk,Abelev:2012ola,Aad:2012gla} collisions at the LHC has led to a change of paradigm. The long-range azimuthal correlations observed for particles produced in such events, quantified by the second harmonic coefficient $v_2$ with respect to the reaction plane, were found to be similar to those found in \ensuremath{AA}\xspace\ collisions, where they are conventionally interpreted as a signature of an anisotropic hydrodynamic flow built up in the QGP. Various mechanisms have been proposed to explain the experimental observations. These include the formation of a hydrodynamically expanding mini-QGP and initial-state gluon saturation within the CGC formalism.
In this context, the case of quarkonia is of particular interest. Precise measurements in \rm{PbPb}\xspace collisions at the LHC showed a significant $\ensuremath{J/\psi}\xspace$ flow~\cite{ALICE:2013xna,Khachatryan:2016ypw,Acharya:2017tgv,Aaboud:2018ttm,Acharya:2020jil}. Even though the experimental data are not entirely consistent with the predictions of a parton-transport model~\cite{Du:2018wsj}, the observed flow is believed to originate from the recombination of thermalised charm quarks within the QGP volume or at the phase boundary at low $\ensuremath{P_T}\xspace$, and from the path-length-dependent colour screening and energy loss at high $\ensuremath{P_T}\xspace$. Within the QGP scenario, the $\ensuremath{J/\psi}\xspace$ flow is expected to be practically negligible in $p\mathrm{Pb}$\ collisions. At low $\ensuremath{P_T}\xspace$, the incomplete thermalisation of the charm quarks during the short-lived QGP phase and the small number of initially produced $c\bar{c}$ pairs would result in negligible recombination effects. The small system size is not expected to produce significant path-length dependent effects either. The ALICE and CMS collaborations performed measurements of the inclusive and prompt $\ensuremath{J/\psi}\xspace$ flow in $p\mathrm{Pb}$\ collisions~\cite{Acharya:2017tfn,Sirunyan:2018kiz}. The $\ensuremath{J/\psi}\xspace$ mesons are reconstructed %
via their di-muon decay channel. The flow measurements are performed using associated mid-rapidity charged-particle yields per $\ensuremath{J/\psi}\xspace$ trigger. The contribution from recoil jets is suppressed by a subtraction of the yields in low-multiplicity collisions from those in high-multiplicity collisions. The second harmonic coefficient of the azimuthal distribution of the produced $\ensuremath{J/\psi}\xspace$ is then extracted assuming a factorisation of the flow coefficients of the $\ensuremath{J/\psi}\xspace$ and of the associated charged particles.
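Schematically, and using generic notation rather than that of any one experimental paper, denoting by $V_{2\Delta}$ the second Fourier harmonic of the two-particle azimuthal correlation after the low-multiplicity subtraction, this factorisation assumption reads
\begin{equation}
v_2^{\ensuremath{J/\psi}\xspace} = \frac{V_{2\Delta}^{\ensuremath{J/\psi}\xspace h}}{\sqrt{V_{2\Delta}^{hh}}}\,,
\end{equation}
where $h$ stands for the associated mid-rapidity charged particles.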
The ALICE and CMS data clearly indicate significant $\ensuremath{J/\psi}\xspace$ $v_2$ values approaching the values measured in central \rm{PbPb}\xspace\ collisions, while the transport model predicts smaller values (\cf{Quarkonia_v2_pPb}, Left).
Alternative calculations developed within the CGC approach (see Section~\ref{sec:v2_IS}) result in $\ensuremath{J/\psi}\xspace$ $v_2$ values consistent with the experimental data. Nevertheless, it is worth noting that in this scenario the heavy-quark momentum-space anisotropy is not correlated with the spatial anisotropy of the initial state of the collision, while the measurements of $\ensuremath{J/\psi}\xspace$ $v_2$ are done with respect to the charged-particle bulk, and the various LHC and RHIC measurements in small and large collision systems indicate that the flow coefficients of the bulk are driven by the initial-state collision geometry~\cite{Nagle:2018nvi}.
\subsubsection{Experimental prospects}
The future LHC data from Runs 3 and 4 will allow for a significant improvement in the precision of the $\ensuremath{J/\psi}\xspace$ flow measurement in $p\mathrm{Pb}$\ collisions. \cf{Quarkonia_v2_pPb} (Right) shows the projection of the expected precision of the ALICE measurement using an integrated luminosity of 500 nb$^{-1}$. The Muon Forward Tracker (MFT) detector~\cite{CERN-LHCC-2013-014,CERN-LHCC-2015-001}, which is part of the 2020--21 ALICE upgrade, will allow the separation of the prompt $\ensuremath{J/\psi}\xspace$ contribution from the {feed-down} of $B$-hadron decays. A comparable precision is likely to be reachable in the dielectron channel at mid-rapidity as well, since the upgraded central-barrel readout system will allow one to record essentially the same integrated luminosity as the MFT system. The CMS experiment will also be able to improve significantly the precision of the measurement thanks to a more than six-fold increase of the integrated luminosity compared to that in Run 2. The wide combined rapidity coverage of these measurements will shed light on the origin of the significant $\ensuremath{J/\psi}\xspace$ $v_2$. At the same time, more differential theory predictions are needed.
The parton-transport model predicts higher $\ensuremath{\psi(2S)}\xspace$ $v_2$ values compared to those of $\ensuremath{J/\psi}\xspace$ (\cf{Quarkonia_v2_pPb}, Left), while within the CGC-based model, $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{J/\psi}\xspace$ $v_2$ values are expected to be practically the same. Taking into account the charmonium-signal significance, the expected
statistical uncertainty of $\ensuremath{\psi(2S)}\xspace$ $v_2$ is at least one order of magnitude larger than that for $\ensuremath{J/\psi}\xspace$, and thus significantly larger than the $\ensuremath{\psi(2S)}\xspace$ $v_2$ enhancement predicted by the transport model. Nevertheless, given the sizeable suppression of $\ensuremath{\psi(2S)}\xspace$ with respect to $\ensuremath{J/\psi}\xspace$ in the Pb-going direction, the measurement of $\ensuremath{\psi(2S)}\xspace$ $v_2$ at forward and backward rapidities is quite interesting in itself.
\begin{figure}[htpb!]
\centering
\includegraphics[width = 0.54\textwidth]{figures/pPb8160LHCmidrv2_high.pdf}
\raisebox{5pt}{ \includegraphics[width = 0.42\textwidth]{figures/2018-10-25-500nb_new3.pdf} }
\caption{Left: Comparison between parton-transport-model calculations of the $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\psi(2S)}\xspace$ azimuthal flow versus $\ensuremath{P_T}\xspace$ (curves) and published ALICE~\cite{Acharya:2017tfn} and CMS~\cite{Sirunyan:2018kiz} data on $\ensuremath{J/\psi}\xspace$ flow (data points) [Figure taken from~\cite{Du:2018wsj}]. Right: Projection of the expected precision of $\ensuremath{J/\psi}\xspace$ $v_2$ to be measured by ALICE in $p\mathrm{Pb}$\ collisions with an integrated luminosity of 500~nb$^{-1}$~\cite{Citron:2018lsq}. [Figure taken from~\cite{ALICE-PUBLIC-2019-001}].}
\label{Quarkonia_v2_pPb}
\end{figure}
The measurement of $\ensuremath{\Upsilon}\xspace$ flow will be of particular interest, but a realistic extrapolation of the expected precision with the upcoming luminosities awaits the first measurements by CMS or ATLAS, which are better positioned than ALICE for these measurements thanks to larger acceptances and, in part, better resolutions.
\subsubsection{Azimuthal anisotropies from initial-state effects}
\label{sec:v2_IS}
As can be seen from \cf{Quarkonia_v2_pPb}, current calculations based on parton-transport models seem unable to generate enough flow to match the data on $\ensuremath{J/\psi}\xspace$ $v_2$ at the LHC, owing to the large particle mass and the short-lived system. On the other hand, a recent calculation based on the CGC framework has shown that momentum-space anisotropies for heavy mesons can be generated directly by the $p\mathrm{Pb}$\ collision process itself, without the need for a QGP-droplet phase~\cite{Zhang:2019dth,Zhang:2020ayy}, leading to a good agreement with LHC data (\cf{fig:v2_jpsi_upsilon}). This requires that the gluon density in the nucleus be large enough for non-linear QCD dynamics to be relevant, which is the case in high-multiplicity collisions at the LHC. By contrast with the heavy-ion-like flow paradigm, these anisotropies are not connected to the initial spatial anisotropies of the system.
In this model, one considers that both a quark (which serves as a reference to evaluate the $v_2$) and a gluon (which then splits into a $Q\bar{Q}$ pair) from the projectile proton participate in the interaction with the nucleus. The quark and gluon are assumed to be initially independent, but they can become correlated due to colour interference arising when they interact with the dense gluonic-background fields in the nucleus.
In this initial-state-effect-only picture, the mass dependence of these intrinsic QCD anisotropies is not the same for quarkonium and open heavy-flavour production, owing to their completely different production mechanisms.
In the first case, the $Q$ and $\bar{Q}$ must be produced close to each other in order to form a quarkonium state, and therefore the correlation of the $Q\bar{Q}$ pair with the reference quark is driven by the kinematics of the gluon before the $g \to Q\bar{Q}$ splitting. In the case of open heavy-flavour production, on the other hand, one of the heavy quarks is integrated over and the distance between the $Q$ and $\bar{Q}$ can be arbitrarily large, which allows for some mass dependence.
Along the same lines, it was predicted that the azimuthal anisotropies for open bottom would be much smaller than for open charm, which was confirmed by experimental data~\cite{Sirunyan:2020obi}, but that the azimuthal anisotropies for $\ensuremath{\Upsilon}\xspace$ would be essentially identical to those for $\ensuremath{J/\psi}\xspace$ because both are directly driven by the gluon kinematics. A future measurement of the $\ensuremath{\Upsilon}\xspace$ $v_2$ at the LHC to either confirm or invalidate this prediction would therefore provide an important test for this model and could give more insight into the origin of azimuthal anisotropies in quarkonia produced in {\ensuremath{pA}}\xspace collisions. %
\begin{figure}[htpb!]
\centering
\includegraphics[width=0.5\textwidth]{figures/v2_jpsi_upsilon.pdf}
\caption{Comparison of the CGC calculation of $\ensuremath{J/\psi}\xspace$ and open-heavy-flavour $v_2$ coefficients versus $\ensuremath{P_T}\xspace$~\cite{Zhang:2019dth} with CMS data~\cite{Sirunyan:2020obi}. [Figure taken from~\cite{Sirunyan:2020obi}].}
\label{fig:v2_jpsi_upsilon}
\end{figure}
\subsection{$\pazocal{Q}$-hadronisation studies}
\label{sec:hadronisation-pA}
\subsubsection{Theoretical status}
\label{pAcom}
When studying nuclear effects on quarkonium production, a comprehensive analysis must include both initial- and final-state effects. However, it is probably more appropriate phenomenologically to distinguish between effects that impact the whole charmonium (or bottomonium) family with the same magnitude and those which are expected to impact the ground and excited states differently.
Among those effects equally affecting all the states of a given family, one naturally finds the initial-state effects, since they act on a pre-quarkonium state, which is not yet fixed when the effects are at work. On the contrary, final-state effects can affect each quarkonium state differently.
Former measurements of the production rates of excited and ground charmonia in {\ensuremath{pA}}\xspace collisions at lower energies by the E866/NuSea~\cite{Leitch:1999ea} and NA50~\cite{Alessandro:2006jt} collaborations revealed a stronger suppression of the excited state near $x_F \approx 0$. At such low energies, this dissimilarity has been straightforwardly interpreted as the effect of a stronger breakup of the \ensuremath{\psi(2S)}\xspace meson in interactions with the primordial nucleons contained in the colliding nucleus, the so-called nuclear absorption.
However, at higher energies, the quarkonium formation time is expected to be larger than the nucleus radius.
This is due to the large boost between the nucleus (and its nucleons) and the pair. At rest, a $c\bar{c}$ or $b\bar{b}$ pair takes 0.3--0.4~fm to hadronise, but this time has to be considered in the rest frame of the nucleus. The formation time is thus dilated by a large Lorentz-boost factor, which at LHC energies\footnote{Except for extremely large Pb-going rapidities.} results in times orders of magnitude longer than the Pb nucleus radius. In other words, the spectator nucleons of the nucleus cannot discriminate between ground and excited quarkonium states, since they cross each other too early.
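To make this estimate explicit (with illustrative numbers), the formation time in the nucleus rest frame is
\begin{equation}
\tau_f^{\rm nucl} = \gamma\,\tau_f^{\rm rest}\,, \qquad \gamma = E_{Q\bar{Q}}/m_{Q\bar{Q}}\,,
\end{equation}
with $\tau_f^{\rm rest}\simeq 0.3$--$0.4$~fm. At LHC energies, $\gamma$ typically reaches $10^2$--$10^3$ outside the far Pb-going region, so that $\tau_f^{\rm nucl}$ amounts to tens to hundreds of fm, to be compared with a Pb radius of about 7~fm.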
An alternative explanation has been proposed to be at play in {\ensuremath{pA}}\xspace collisions, based on the interaction of the nascent quarkonia with particles that are produced in the collision and happen to travel along with the heavy-quark pair. This implies that such {\it comovers}, particles with a rapidity similar to that of the quarkonium state, can continue to interact well outside the nuclear volume, up to the moment when the quarkonium is fully formed. Since the excited states are larger, comover dissociation affects them more strongly than the ground states, which explains the observed difference between them even at high energies.
This effect is naturally more important when the densities of particles are larger. As such, it increases with the centrality and, for asymmetric collisions such as {\ensuremath{pA}}\xspace, it will be stronger in the nucleus-going direction. Along the same lines, the same effect should be at work for any colliding system. In particular, the excited states should also dissociate more in high-multiplicity \ensuremath{pp}\xspace\ collisions (see Section~\ref{sec:pp-xyz}).
In the CIM~\cite{Ferreiro:2014bia}, initial-state and final-state effects are treated separately, respectively via nPDFs and the aforementioned interaction with comoving particles. The behaviour of the excited-over-ground-state ratio is then naturally driven by the comover suppression since the nPDF effects cancel in the ratio.
The rate equation that governs the density of quarkonia is just a Boltzmann equation that depends on the density of particles and their break-up cross section. It has been proposed in~\cite{Ferreiro:2018wbd} that the break-up can be evaluated using an empirical formula that accounts for the geometrical size and the binding energy of the different states. It can be applied to all the states and yields a natural explanation for the experimental data on excited-over-ground states (\cf{fig:UpsipPb8TeVLHCb}).
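In its simplest form (a schematic version of the CIM equations; the full expressions depend on impact parameter, transverse position, and rapidity), the rate equation and its solution read
\begin{equation}
\tau\,\frac{\mathrm{d}\rho_{\pazocal{Q}}}{\mathrm{d}\tau} = -\sigma^{co}\,\rho^{co}\,\rho_{\pazocal{Q}}
\quad\Longrightarrow\quad
S^{co} = \exp\!\left[-\sigma^{co}\,\rho^{co}\,\ln\!\left(\frac{\rho^{co}}{\rho_{pp}}\right)\right],
\end{equation}
where $\rho^{co}$ is the comover density, $\rho_{pp}$ the density at which the interactions cease, and $\sigma^{co}$ the state-dependent break-up cross section encoding the geometrical size and binding energy~\cite{Ferreiro:2018wbd}.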
\begin{figure}[htpb!]
\centering
\includegraphics[width=0.4\textwidth]{figures/double_ratio_Psi2S_LHCb.pdf}
\hspace{1.2cm}
\includegraphics[width=0.4\textwidth]{figures/double_ratio_Psi3S_LHCb.pdf}
\caption{Double ratios for $\ensuremath{\Upsilon(2S)}\xspace$ (Left) and $\ensuremath{\Upsilon(3S)}\xspace$ (Right) as a function of laboratory rapidity in $p\mathrm{Pb}$\ collisions at $\ensuremath{\sqrt{s}} = 8.16$~TeV. The bands correspond to the theoretical prediction for the CIM~\cite{Ferreiro:2018wbd} compared to LHCb data~\cite{Aaij:2018scz}. [Figures taken from~\cite{Aaij:2018scz}].}
\label{fig:UpsipPb8TeVLHCb}
\end{figure}
In~\cite{Ma:2017rsu}, the breaking of factorisation was examined for $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\psi(2S)}\xspace$ production at forward rapidities in the CGC framework by modelling the threshold-sensitive suppression due to the comover interactions. The factorisation effectively holds for $\ensuremath{J/\psi}\xspace$ production in minimum bias {\ensuremath{pA}}\xspace\ collisions, but it may break in high-multiplicity events, even though the $\ensuremath{J/\psi}\xspace$ is a strongly-bound system. The multiple rescattering effects in the CGC framework can describe high-multiplicity events due to the sizeable semi-hard saturation scale~\cite{Ma:2018bax,Ma:2018wdl}, although the underlying physics behind such phenomena is still not very clear (see Section~\ref{sec:oniaparticlemultiplicity}). In high-multiplicity {\ensuremath{pA}}\xspace\ collisions, the nuclear-enhanced-comover-rescattering effect could lead to the modification of the $\ensuremath{J/\psi}\xspace$ production rate due to a threshold-sensitive suppression~\cite{Qiu:1998rz}. For larger quarkonium states, like $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{\chi_c}\xspace$, both the semi-hard and soft comover rescatterings before the bound-state formation should be essentially of the same size in high-multiplicity \ensuremath{pp}\xspace\ collisions. Further investigations are thus welcome to examine the duality between the CIM and these qualitative expectations from the CGC framework.
With the increased luminosity available in the upcoming HL-LHC, the measurement
of production rates for multiple quarkonium states, with
different physical sizes and binding energies, offers an excellent tool for probing
the time scale of the evolution of heavy quark-antiquark
pairs into bound colourless quarkonium states.
The study of the yields of excited-over-ground-state quarkonia as a function of the charged-particle multiplicity, both in \ensuremath{pp}\xspace and {\ensuremath{pA}}\xspace collisions, will help to clarify the hadronisation mechanism at play. In particular, measuring different quarkonium states with good precision should lead to better model constraints and tests of their applicability. Estimates of the precision with which the yields can be determined in the different experiments are given in the following section.
In addition, much progress in heavy-meson spectroscopy has taken place at the LHC in recent years. Recently, the LHCb collaboration presented results on the relative production rates of promptly produced $X$(3872) over $\ensuremath{\psi(2S)}\xspace$ as a function of particle multiplicity, given by the total number of charged-particle tracks reconstructed in the VELO detector in the forward pseudorapidity region, $2 < \eta < 5$, in \ensuremath{pp}\xspace collisions at 8~TeV~\cite{Aaij:2020hpf}.
This ratio is found to decrease with increasing multiplicity. Hadronisation mechanisms, as described above, can provide insights on the nature of these exotic states~\cite{Esposito:2020ywk}.
\subsubsection{Experimental prospects}
\label{pAhadro}
The NMFs of excited vector-meson states in {\ensuremath{pA}}\xspace collisions show significantly stronger suppression than for the ground states, as measured by the ALICE~\cite{Acharya:2019lqc, Acharya:2020giw, Abelev:2013yxa, Abelev:2014zpa, Abelev:2014oea, Adam:2015iga, Adam:2015jsa, Adam:2016ohd,Acharya:2018yud}, ATLAS~\cite{Aaboud:2017cif}, CMS~\cite{Sirunyan:2017mzd,Sirunyan:2018pse}, and LHCb~\cite{Aaij:2018scz,Aaij:2013zxa,Aaij:2014mza,Aaij:2016eyl} collaborations at the LHC. In order to solidify the experimental findings, it will be crucial to analyse larger statistical samples. %
The ratio between two different quarkonia is a useful quantity to directly compare the relative suppression of both states between various experiments as a function of different kinematic variables since many systematic uncertainties, both in experiments and theory predictions, cancel in the ratio. We focus here on the expectations for $\ensuremath{\psi(2S)}\xspace/\ensuremath{J/\psi}\xspace$ and $\ensuremath{\Upsilon}\xspace(nS)/\ensuremath{\Upsilon}\xspace(1S)$ ratios in $p\mathrm{Pb}$\ collisions in ALICE. These kinds of ratios have already been measured at the LHC~\cite{Acharya:2020wwy, Acharya:2019lqc, Aaij:2018scz}, but the statistical uncertainties are $20\%$ or larger.
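For definiteness, the quantities compared below are the single ratio within $p\mathrm{Pb}$\ collisions and the corresponding double ratio with respect to \ensuremath{pp}\xspace (and analogously for $\ensuremath{\Upsilon}\xspace(nS)/\ensuremath{\Upsilon}\xspace(1S)$), in which detection efficiencies and many theory uncertainties largely cancel:
\begin{equation}
\frac{\sigma_{\ensuremath{\psi(2S)}\xspace}}{\sigma_{\ensuremath{J/\psi}\xspace}}\bigg|_{p\mathrm{Pb}}
\qquad {\rm and} \qquad
\left[\frac{\sigma_{\ensuremath{\psi(2S)}\xspace}}{\sigma_{\ensuremath{J/\psi}\xspace}}\right]_{p\mathrm{Pb}}
\bigg/
\left[\frac{\sigma_{\ensuremath{\psi(2S)}\xspace}}{\sigma_{\ensuremath{J/\psi}\xspace}}\right]_{pp}.
\end{equation}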
All projections use a total integrated luminosity of 500~nb$^{-1}$ of $p\mathrm{Pb}$\ collisions, split in two sub-samples of equal size for the two possible beam orientations.
The left panel of \cf{Psi2S_by_JPsi_UpsinS_by_UPsi1S} shows the projection for the $\ensuremath{\psi(2S)}\xspace/\ensuremath{J/\psi}\xspace$ ratio as a function of $y_{{\rm c.m.s.}\xspace}$ with the $p\mathrm{Pb}$\ data expected to be collected at $\ensuremath{\sqrt{s}} = 8.8$~TeV at the HL-LHC, compared to the current Run-2 results at $\ensuremath{\sqrt{s}} = 8.16$~TeV. This $\ensuremath{\psi(2S)}\xspace/\ensuremath{J/\psi}\xspace$ ratio is currently found to lie 2.9$\sigma$ and 0.9$\sigma$ below the same ratio measured in \ensuremath{pp}\xspace\ collisions at $\sqrt{s}$ = 8.16~TeV at backward and forward rapidity, respectively. The statistical uncertainty is reduced by a factor of $\sim$5 using the same luminosity increase assumed for other ALICE projections shown in~\cite{Citron:2018lsq}. Similar expectations for the $\ensuremath{\Upsilon(2S)}\xspace/\ensuremath{\Upsilon}\xspace(1S)$ and $\ensuremath{\Upsilon(3S)}\xspace/\ensuremath{\Upsilon}\xspace(1S)$ ratios as a function of $y_{{\rm c.m.s.}\xspace}$ are shown in \cf{Psi2S_by_JPsi_UpsinS_by_UPsi1S} (Right). The $\ensuremath{\Upsilon(2S)}\xspace/\ensuremath{\Upsilon}\xspace(1S)$ ratio is found to lie 1$\sigma$ and 0.8$\sigma$ below the same ratio measured in \ensuremath{pp}\xspace\ collisions at $\sqrt{s}$ = 8~TeV~\cite{LHCb:2015upsilon}, at backward and forward rapidity respectively. Correspondingly, the $\ensuremath{\Upsilon(3S)}\xspace/\ensuremath{\Upsilon}\xspace(1S)$ ratio lies 0.4$\sigma$ and 1.6$\sigma$ below the \ensuremath{pp}\xspace\ ratio. The statistical uncertainty is again reduced by a factor of $\sim$5.
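The quoted gains follow from simple counting statistics: the relative statistical uncertainty scales as
\begin{equation}
\delta_{\rm stat} \propto \frac{1}{\sqrt{N}} \propto \frac{1}{\sqrt{\mathcal{L}_{\rm int}}}\,,
\end{equation}
so that, as an illustration, a $\sim$25-fold increase of the integrated luminosity translates into a factor-of-$\sim$5 reduction (neglecting possible changes in background levels and efficiencies).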
\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/Psi2S_Jpsi_vs_y.pdf}
\hspace{0.5cm}
\includegraphics[width=0.47\textwidth]{figures/UpsilonNS_Upsilon1S.pdf}
\caption{Projections for the
$\ensuremath{\psi(2S)}\xspace/\ensuremath{J/\psi}\xspace$ (Left) and $\ensuremath{\Upsilon}\xspace(nS)/\ensuremath{\Upsilon}\xspace(1S)$ (Right) ratios as a function of the ${\rm c.m.s.}\xspace$ rapidity
$y_{{\rm c.m.s.}\xspace}$ in $p\mathrm{Pb}$\ collisions at $\ensuremath{\sqrt{s}} = 8.8$~TeV compared to the current Run-2 results at $\ensuremath{\sqrt{s}} = 8.16$~TeV. [Figures taken from~\cite{ALICE-PUBLIC-2020-008}].
}
\label{Psi2S_by_JPsi_UpsinS_by_UPsi1S}
\end{figure}
The systematic uncertainties on the signal extraction, which are most relevant for the ratio measurements, are also expected to decrease significantly with the improved knowledge of the background distributions, but
precise numbers are difficult to estimate at the moment. The reduction of uncertainties will allow one to clearly establish the suppression of the excited states and to perform precise model comparisons to advance our understanding of the physical origin of the observed nuclear modifications. Similar improvements of statistical precision can be expected for LHCb thanks to a similar luminosity ratio between the already available data and the planned luminosity for the 2030s in $p\mathrm{Pb}$\ collisions. For ATLAS and CMS, a reduction of the statistical uncertainties by about a factor 2 can also be expected according to~\cite{Citron:2018lsq}.
The estimates provided here are purely statistical: further improvements can be expected from better instrumentation in terms of acceptance, efficiency, and resolution thanks to the foreseen upgrades, which will be fully exploitable under the detector-occupancy conditions of {\ensuremath{pA}}\xspace collisions, well below those of the \ensuremath{pp}\xspace data taken with orders-of-magnitude-larger pileup.
\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/pPb_NC_Gauss_D0BG_250_700.pdf}
\includegraphics[width=0.47\textwidth]{figures/pPb_DD_Gauss_ChebPol3_100_900.pdf}
\caption{Mass difference $\Delta M = M(\mu^+\mu^-\gamma)- M(\mu^+\mu^-)$ measured at forward rapidity by LHCb in the 2016 $p\mathrm{Pb}$\ data (integrated luminosity $13.6\pm0.3$~$\mu$b$^{-1}$)~\cite{LHCB-FIGURE-2019-020}. Left: Reconstruction via calorimetric photons for $\ensuremath{P_T}\xspace^{\ensuremath{J/\psi}\xspace}>3$~GeV. Right: Reconstruction via converted photons, integrated over $\ensuremath{P_T}\xspace$. The distributions are integrated over $1.5<y^\star<4$ where $y^\star$ is the laboratory rapidity. These plots demonstrate the feasibility of first measurements at the LHC in {\ensuremath{pA}}\xspace collisions. [Figures taken from~\cite{LHCB-FIGURE-2019-020}]. %
}
\label{fig:chicpA}
\end{figure}
In addition to the vector states, the $P$-wave states of charmonium, $\chi_c$, could provide complementary information to that shown in~\cite{Abelevetal:2014cna} based on the models outlined in \cite{Andronic:2003zv, Zhao:2011cv}, in particular due to their strong suppression predicted in \ensuremath{AA}\xspace\ collisions. In \ensuremath{pp}\xspace\ collisions, LHCb carried out measurements of $\chi_{c1}/\chi_{c2}$~production ratios~\cite{LHCb:2012ac} and of the $\chi_c$ to $\ensuremath{J/\psi}\xspace$~cross section ratio~\cite{Aaij:2013dja} based on an integrated luminosity of 36~pb$^{-1}$ at 7~TeV. Similar measurements were performed by CMS~\cite{Chatrchyan:2012ub} and ATLAS~\cite{ATLAS:2014ala} at $\sqrt{s}=7$~TeV with an integrated luminosity of $\sim$4.5~pb$^{-1}$. The LHCb measurements appear complementary for the study of nuclear modifications in $p\mathrm{Pb}$\ collisions, given the lower \ensuremath{P_T}\xspace region explored compared to the CMS ($7<\ensuremath{P_T}\xspace<$~25~GeV) and ATLAS ($10<\ensuremath{P_T}\xspace<$~30~GeV) ones. All these measurements are based on the decay channel $\chi_c \to \ensuremath{J/\psi}\xspace \gamma $ reconstructing the $\ensuremath{J/\psi}\xspace$ in the di-muon channel and the photon in the calorimeter. Measurements of cross-section ratios of $\chi_c$~production have been performed %
based on the reconstruction of the converted photon in the tracker~\cite{Aaij:2013dja}.
The planned increase of the $p\mathrm{Pb}$\ integrated luminosity at the HL-LHC should allow measurements in these decay channels with a precision similar to that in \ensuremath{pp}\xspace~collisions. The data already recorded %
demonstrate the reconstructibility of the decay channels of interest~\cite{LHCB-FIGURE-2019-020}, as shown in \cf{fig:chicpA}. Recently, the LHCb~\cite{Aaij:2017vck} and BESIII~\cite{Ablikim:2019jqp} collaborations observed the decay $\chi_c \to \ensuremath{J/\psi}\xspace \mu \mu$. With the upcoming very high-luminosity data-taking periods beyond Run 3, this decay channel could also be used for cross-section measurements despite its very small branching fraction, of the order of $10^{-4}$ for $\chi_{c1}$ and $\chi_{c2}$~\cite{Ablikim:2019jqp}. %
\section{Proton-proton collisions\protect\footnote{
Section editors: Darren Price, Hua-Sheng Shao.
}}
\label{sec:pp}
\subsection{Introduction: status and prospects}
The production of quarkonium in high-energy particle collisions has been linked to longstanding challenges in our understanding of quark confinement in QCD. The study of quarkonium production provides not only valuable information on non-perturbative QCD physics, but also crucial and often novel signatures for the exploration of new phenomena, multi-quark spectroscopy, probes of proton structure, and double parton scattering interactions, amongst other subjects.
Various theoretical treatments tackling the production of quark-antiquark pairs and their subsequent formation of a quarkonium bound state have been proposed and confronted with experimental data.
The most notable and used of these are the Colour Evaporation Model (CEM)~\cite{Fritzsch:1977ay,Gluck:1977zm,Barger:1979js,Amundson:1995em,Amundson:1996qr}, the Colour-Singlet Model (CSM)~\cite{Chang:1979nn,Berger:1980ni,Baier:1981uk}, and non-relativistic QCD (NRQCD)~\cite{Bodwin:1994jh} factorisation, which extends the CSM by the introduction of the Colour Octet (CO) mechanism.
As regards their application to hadroproduction, in particular at the LHC, these are often employed within the framework of collinear factorisation, High-Energy (HE) factorisation\footnote{also referred to as \ensuremath{k_T}\xspace factorisation.} or even Colour Glass Condensates (CGC). Even though most of the discussion in this section focuses on NRQCD used with collinear factorisation, the proposed measurements are also relevant to test most of the theoretical models discussed in the literature.
No single approach has been able to simultaneously describe all experimental observables collected to date, but the NRQCD approach is the most rigorous and has thus had the greatest success.
However, the predictive power of NRQCD relies heavily on the universality of the non-perturbative long-distance matrix elements (LDMEs) ensured by the factorisation approach.
These LDMEs characterise the transition rates for various colour-spin states of a produced heavy-quark pair to become a physical quarkonium and should be process-independent.
Despite the success of NRQCD in many phenomenological applications (see, {e.g.}\xspace,~\cite{Brambilla:2010cs}), there are still challenges to understand the single-inclusive quarkonium production mechanism at colliders, notably a unified description of their cross section in different production modes, the polarisation of the vector states, the constraints set by the production of pseudoscalar mesons via NRQCD Heavy-Quark-Spin Symmetry (HQSS), not to mention further puzzles in associated production.
Differences between various sets of LDMEs extracted from the accumulated data~\cite{Butenschoen:2011yh,Chao:2012iv,Gong:2012ug,Bodwin:2014gia,Shao:2014yta,Bodwin:2015iua,Han:2014kxa,Feng:2015wka} persist, and there is a long way to go to confirm the universality of these LDMEs~\cite{Lansberg:2019adr,Chung:2018lyq}.
Hence, a coherent physical picture to interpret quarkonium production data is still missing today.
This impasse in our understanding of quarkonium production, and its subsequent limitations on the use of quarkonium data as a tool for other physics processes, has motivated the critical need to establish novel observables that can serve to advance both goals. The study of various new final states containing quarkonia, and measurements of novel quarkonium observables in hadron-hadron collisions, $e^+e^-$, lepton-hadron, photon-hadron, photon-photon and nuclear collisions, can provide complementary sensitivity to different combinations of LDMEs (see~\cite{Lansberg:2019adr} and references therein) as well as insight into a wide range of phenomena.
The large \ensuremath{pp}\xspace\ collision data sets expected to be collected at the HL-LHC for inclusive quarkonium production will provide a compelling setting for these investigations. In the following sections, we outline the potential that the HL-LHC experiments have to explore these topics, and highlight priorities for study. In Section~\ref{sec:quarkonium-pt-pp}, we begin by reviewing how measurements of the \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace transverse-momentum ($\ensuremath{P_T}\xspace$) distributions and polarisations are central to our understanding of their production, and outline expectations for the HL-LHC period. Section~\ref{sec:oniacharacterisation} explains how studies of the surrounding hadronic activity in collision events containing quarkonia provide new insights into various QCD topics. In Section~\ref{sec:oniaUnconventional}, we address physics opportunities opened up through the study of unconventional quarkonium states, {e.g.}\xspace\ the $C$-even $\eta_Q$ and $\chi_Q$ states. Section~\ref{sec:oniumassociate} examines how data on associated production of quarkonium with other heavy states provide a rich opportunity to explore topics as diverse as searches for new phenomena and multi-parton interactions at the HL-LHC. Finally, in Section~\ref{sec:pdfofpp} we outline how quarkonium data in the HL-LHC era can be a compelling tool for precision PDF determinations both at low $x$ and low scale, and at high $x$.
\subsection{\ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace conventional measurements: \ensuremath{P_T}\xspace spectra and polarisation}
\subsubsection{\ensuremath{P_T}\xspace spectra: going higher}
\label{sec:quarkonium-pt-pp}
The study of $\ensuremath{P_T}\xspace$ distributions of heavy-quarkonium states produced at hadron colliders has played a critical role in the development of our understanding of the underlying production mechanism.
The drastically different $\ensuremath{P_T}\xspace$ dependence observed between the Tevatron~\cite{Abe:1997jz,Abe:1997yz} data and the leading order (LO) theoretical predictions for $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\psi(2S)}\xspace$ production at mid and high $\ensuremath{P_T}\xspace$ led to tremendous improvements in our understanding on how a produced heavy-quark pair at a large $\ensuremath{P_T}\xspace$ transmutes itself into a physical quarkonium state, and to the development of the NRQCD-factorisation approach for the production of heavy quarkonia~\cite{Bodwin:1994jh,Cho:1995vh}.
For a given heavy-quark mass, $m_Q$, heavy-quarkonium production in hadron-hadron collisions can be divided into three kinematic regimes: $\ensuremath{P_T}\xspace^2 \gg m_Q^2$, $\ensuremath{P_T}\xspace^2 \sim m_Q^2$, and $\ensuremath{P_T}\xspace^2 \ll m_Q^2$, which provide tools sensitive to very different, but often complementary, physics issues.
For heavy-quarkonium production, QCD factorisation connects the colliding hadron(s) to the underlying quark-gluon scattering that produces a heavy-quark pair while NRQCD factorisation matches the pair, produced with various colour-spin states, to a physical quarkonium through the corresponding LDMEs.
For both QCD and NRQCD factorisations, calculating quarkonium production in these three kinematic regimes requires different approximations and treatments.
When $\ensuremath{P_T}\xspace^2 \gg m_Q^2$, quarkonium production is ideal for isolating the non-perturbative hadronisation part: since the heavy-quark pair produced at high $\ensuremath{P_T}\xspace$ is well separated in phase space from the colliding hadron beams, one can pin down the uncertainty associated with the LDMEs. However, the production involves two very different momentum scales, and resummation of large $\log(\ensuremath{P_T}\xspace^2/m_Q^2)$ terms is necessary~\cite{Nayak:2005rt,Kang:2014tta,Kang:2014pya}.
Although reliable factorisation formalisms have been derived for, at least, the first two regimes, where $\ensuremath{P_T}\xspace^2 \gg m_Q^2$ and $\ensuremath{P_T}\xspace^2 \sim m_Q^2$, a smooth matching between them is needed to be able to compare theoretical calculations with experimental data. For example, when $\ensuremath{P_T}\xspace^2 \gg m_Q^2$, theoretical calculations are organised in terms of QCD collinear factorisation of the leading-power and next-to-leading-power contributions in the $1/\ensuremath{P_T}\xspace^2$ expansion~\cite{Nayak:2005rt,Kang:2014tta,Kang:2014pya}. Since no QCD collinear factorisation is valid beyond the first subleading power contribution~\cite{Qiu:1990xy}, QCD factorisation formalisms in this regime can only include the leading $1/\ensuremath{P_T}\xspace^4$ and the first subleading $1/\ensuremath{P_T}\xspace^6$ factorised partonic hard parts. On the other hand, in the regime where $\ensuremath{P_T}\xspace^2 \sim m_Q^2$, the leading term in powers of the strong coupling constant, $\alpha_s$, and of the heavy-quark relative velocity, $v$, in the NRQCD factorisation approach contains $1/\ensuremath{P_T}\xspace^6$ or $1/\ensuremath{P_T}\xspace^8$ terms depending on the colour-spin states of the pair. The two factorisation approaches thus lead to different $\ensuremath{P_T}\xspace$ dependences, and a consistent matching between the two regimes is needed, especially for fitting the LDMEs~\cite{Kang:2008zzd,LQSW:2020}.
On the experimental side, the Run 1 \& 2 LHC data already allow one to push the reachable \ensuremath{P_T}\xspace domain for \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace production beyond the Tevatron range. The latter reached $\ensuremath{P_T}\xspace \simeq 25$ GeV for $\psi$, which was barely sufficient to assume $\ensuremath{P_T}\xspace^2 \gg m_Q^2$, and the Tevatron data sample was clearly insufficient for the $\ensuremath{\Upsilon}\xspace$ states. Thanks to the extended LHC \ensuremath{P_T}\xspace reach and to the advent of NLO NRQCD computations~\cite{Butenschoen:2011yh,Chao:2012iv,Gong:2012ug,Bodwin:2014gia,Shao:2014yta,Bodwin:2015iua,Han:2014kxa,Feng:2015wka}, one could set novel constraints on the NRQCD LDMEs needed to fit the $\psi$ data, since high-\ensuremath{P_T}\xspace data prefer a dominance of {$^1S_0^{[8]}$ rather than the $^3S_1^{[8]}$} state that was compatible with mid-\ensuremath{P_T}\xspace Tevatron data using LO estimates. Yet, there is still a debate about whether large $\log(\ensuremath{P_T}\xspace^2/m_Q^2)$ terms could affect the determination of the LDMEs. In this context, data at even higher \ensuremath{P_T}\xspace will not be superfluous, especially for \ensuremath{\psi(2S)}\xspace, for which constraints from other colliding systems are very limited. Such data will also be useful in confirming the inability of the CEM to account for this high-\ensuremath{P_T}\xspace regime~\cite{Lansberg:2016rcx,Lansberg:2020rft}. The reader is guided to a recent review~\cite{Lansberg:2019adr}, where the impact of the current LHC data on the phenomenology is explained in detail. For the three \ensuremath{\Upsilon}\xspace states, data at higher \ensuremath{P_T}\xspace, probably above 100~GeV, are also needed to ensure that the fragmentation limit has been reached; these are certainly within the reach of the HL-LHC.
\subsubsection{Polarisation: going further}
\label{sec:quarkonium-pol-pp}
The most studied quarkonia are the vector states, $\psi(nS)$ and $\ensuremath{\Upsilon}\xspace(nS)$, not only because they are easily produced in $e^+e^-$ annihilation, but also because they are easily detectable via their di-lepton decay channels.
This also offers the possibility to directly measure their polarisation, also referred to as spin alignment, via the analysis of the angular distribution of their decay products, which can be parameterised as:
\begin{equation}
\frac{d^{2}N}{d\cos\theta d\phi} \propto 1+\lambda_\theta \cos^2\theta + \lambda_\phi \sin^2\theta \cos2\phi + \lambda_{\theta\phi}\sin2\theta \cos\phi \,,
\label{eq:angularDistribution}
\end{equation}
where $\theta$ is the polar angle between the positively charged lepton momentum in the quarkonium rest frame, $p_{\ell^+}$, and the spin-quantisation axis ($z$ axis) and $\phi$ is the azimuthal angle between the projection of $p_{\ell^+}$ on the $x-y$ plane (thus orthogonal to the spin-quantisation axis) and the $x$ axis.
The decay angular coefficients, $\lambda_\theta$ (also known as $\alpha$), $\lambda_\phi$ and $\lambda_{\theta\phi}$, are related to specific elements of the spin density matrix and are frame dependent: they depend on the choice of the spin-quantisation axis. The reader is guided to~\cite{Lansberg:2019adr} for an up-to-date discussion of the predicted values of these parameters in different production models and to~\cite{Andronic:2015wma} for an exhaustive list of the existing measurements up to 2015.
It had been hoped that such polarisation measurements could verify the smoking-gun prediction of NRQCD according to which vector quarkonia are produced transversely polarised~\cite{Cho:1994ih,Braaten:1994xb}, {i.e.}\xspace\ $\lambda_\theta=+1$, in the helicity frame\footnote{In the helicity frame, the quantisation axis is the $\pazocal{Q}$ momentum in the {\rm c.m.s.}\xspace\ frame and the $xz$ plane contains the latter and the momenta of the colliding particles.} at large \ensuremath{P_T}\xspace. However, when Tevatron data (see {e.g.}\xspace\ \cite{Aaltonen:2009dm}) became precise enough,
it became clear that this prediction was wrong. At the same time, NRQCD results at NLO were found to deviate from this seemingly fundamental NRQCD result obtained at LO\footnote{For the record, the first NLO polarisation computations were performed in the CSM back in 2008~\cite{Gong:2008sn,Artoisenet:2008fc}, showing a longitudinal yield at NLO instead of the transverse LO yield. The NLO NRQCD studies date back to 2012 by the Hamburg~\cite{Butenschoen:2012px}, PKU~\cite{Chao:2012iv,Shao:2014yta} and IHEP~\cite{Gong:2012ug} groups, and their interpretations differ significantly. The Hamburg and IHEP NLO fits show increasingly transverse $\psi$ yields with increasing \ensuremath{P_T}\xspace in the helicity frame (at variance with Tevatron and LHC data). The PKU NLO fit -- including Tevatron polarisation data at the beginning~\cite{Chao:2012iv} but excluding them later~\cite{Shao:2014yta} -- shows a quasi-unpolarised yield at high \ensuremath{P_T}\xspace.}. From a smoking gun in the 1990s, the polarisation turned, in the 2010s, into a mere constraint via NRQCD fits. Indeed, the complex interplay between the different CO contributions at NLO renders NRQCD predictions for polarisation sensitive to tiny details of the fits.
In this context, we would like to provide several recommendations:
\begin{itemize}
\item In the currently studied kinematic region, clear-cut theory predictions should not be expected. Leaving aside the feed-down effects, which nevertheless constitute a serious source of complications, the polarisation essentially depends on a linear combination of LDMEs. New types of data, such as those discussed in the following sections, are needed. In fact, even at extremely high \ensuremath{P_T}\xspace and for the feed-down-free $\ensuremath{\psi(2S)}\xspace$, it is not clear that a simple picture will emerge. Yet, it remains important to consolidate the current measurements, especially for the excited states, for which they admittedly remain very limited.
\item Precise polarisation measurements at low \ensuremath{P_T}\xspace in the collider mode certainly remain useful, especially in the central rapidity region, to get a more global view including RHIC and Tevatron measurements. Along the same lines, measurements in the FT mode in the 100~GeV range, as can be realised at the FT-LHC, will be critical to complete the picture.
\item It is essential to measure the three angular coefficients in order to avoid relying on theoretical and/or experimental assumptions. This also allows one to compute frame-invariant quantities~\cite{Faccioli:2010ej,Palestini:2010xu,Shao:2012fs,Martens:2017cvj,Peng:2018tty,Gavrilova:2019jea}, which can serve, by comparing their determinations in multiple frames, as consistency checks of the experimental procedure. For further checks, it would also be interesting to measure the angular distribution coefficients beyond \ce{eq:angularDistribution}, such as the $\lambda_{\phi}^\perp\sin^2{\theta}\sin{2\phi}$ and $\lambda^{\perp}_{\theta\phi}\sin{2\theta}\sin{\phi}$ terms, which are predicted to be exactly zero by parity invariance.
\item Beside the extraction of these invariants from combinations of the angular coefficients, it is possible to extract them directly as functions of the lepton momenta~\cite{Teryaev:2011zza,Martens:2017cvj,Gavrilova:2019jea}. We encourage attempts in this direction.
\item We encourage theorists to compute the three angular coefficients and the related invariants. We note that the first calculation of $\lambda_{\theta \phi}$ at NLO in NRQCD was only carried out in 2018~\cite{Feng:2018ukp}, for the \ensuremath{J/\psi}\xspace meson.
\item Polarisation measurements also remain very important in quantifying the acceptance corrections to be applied to pass from experimental cross-section measurements performed in a fiducial region to the inclusive ones. Even though the currently available results show no significant polarisation, it should be kept in mind that, in specific kinematic conditions, the polarisation could drastically change.
Ideally, experiments should publish fiducial cross-section results, which would free their results from this additional source of uncertainty. However, it should be clear that advancing the theoretical predictions to higher precision (NNLO accuracy in \ensuremath{\alpha_{\rm s}} is not even available yet) and, at the same time, providing predictions for the di-lepton angular distribution in designated fiducial regions may not be possible due to the computational complexity. This certainly calls for a concerted effort between the experimental and theoretical communities.
\end{itemize}
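To make the frame-invariance check advocated above concrete, the combination $\tilde{\lambda}=(\lambda_\theta+3\lambda_\phi)/(1-\lambda_\phi)$ discussed in, {e.g.}\xspace,~\cite{Faccioli:2010ej} can be verified numerically: it is unchanged when the angular coefficients are rotated between quantisation frames. A minimal sketch, with illustrative coefficient values and rotation formulas for a rotation by an angle $\delta$ within the production plane (conventions vary between references):

```python
import math

def rotate_coeffs(lam_th, lam_ph, lam_thph, delta):
    """Rotate the decay angular coefficients by an angle delta (radians)
    of the quantisation axis within the production plane."""
    s2, c2 = math.sin(2 * delta), math.cos(2 * delta)
    Lam = 0.5 * (lam_th - lam_ph) * math.sin(delta) ** 2 - 0.5 * lam_thph * s2
    return ((lam_th - 3 * Lam) / (1 + Lam),
            (lam_ph + Lam) / (1 + Lam),
            (lam_thph * c2 - 0.5 * (lam_th - lam_ph) * s2) / (1 + Lam))

def lambda_tilde(lam_th, lam_ph):
    """Frame-invariant combination (lam_th + 3*lam_ph) / (1 - lam_ph)."""
    return (lam_th + 3 * lam_ph) / (1 - lam_ph)

# illustrative helicity-frame coefficients (lam_th, lam_ph, lam_thph)
coeffs = (0.5, 0.1, 0.2)
for delta_deg in (0, 30, 60, 90):
    lt, lp, ltp = rotate_coeffs(*coeffs, math.radians(delta_deg))
    # lambda_tilde comes out the same at every rotation angle
    print(delta_deg, round(lambda_tilde(lt, lp), 6))
```

Determining $\tilde{\lambda}$ in, say, the helicity and Collins--Soper frames and checking their compatibility is precisely the kind of experimental consistency test recommended above.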
\subsection{Characterisation of $\pazocal{Q}$ events}
\label{sec:oniacharacterisation}
\subsubsection{$\pazocal{Q}$ in jets}
\label{sec:oniainjets}
In the past few years, quarkonium production within jets has been attracting increasing attention as a probe of heavy-quark hadronisation and quarkonium production mechanisms. At the LHC, such a process has been measured by the LHCb~\cite{Aaij:2017fak} and CMS~\cite{Diab:2019she} collaborations. Both collaborations have observed striking deviations of the data from Monte Carlo simulations using LO NRQCD complemented with subsequent parton showering. The observable measured in both experiments is the transverse momentum fraction, $z = \ensuremath{P_T}\xspace^{\psi}/\ensuremath{P_T}\xspace^{\text{jet}}$, carried by the quarkonium state $\ensuremath{J/\psi}\xspace$ inside the corresponding jet. This observable is indicative of the fragmenting-parton momentum carried by the quarkonium state. Theoretical predictions for the LHCb data have been provided in~\cite{Bain:2017wvk} using the fragmentation-jet functions (FJF) and the Gluon Fragmentation Improved {\sc pythia}\xspace (GFIP), the latter corresponding to a modified parton-shower approach in which quarkonium fragmentation is implemented only at the end of the shower.
As shown in \cf{fig:in-jet}, the predictions reproduce many important features of the data. One however notes that the agreement depends both on the values of the LDMEs (compare the bands between the three plots) and on the fragmentation modelling (compare the red and grey bands in each plot). In principle, to improve the discriminating power of the data on the LDMEs, it is important not to integrate over $\ensuremath{P_T}\xspace^{\text{jet}}$ and to look at how the probability to produce a \ensuremath{J/\psi}\xspace at fixed $z$ varies with $\ensuremath{P_T}\xspace^{\text{jet}}$.
Even though these exploratory studies have shown that these new observables can provide deeper insights into quarkonium production at large \ensuremath{P_T}\xspace\,, more detailed studies are necessary to obtain a more comprehensive picture. The reader is guided to the review~\cite{Lansberg:2019adr} for a discussion of the theoretical caveats, in particular the fact that all the current predictions rely on LO fragmentation functions (FF); NLO corrections may thus be very large and diminish the sensitivity to the LDMEs.
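For reference, the observable itself is elementary to form from reconstructed momenta. A schematic sketch, with toy transverse momenta and a coarse binning (not an experimental prescription):

```python
import math

def pt(px, py):
    """Transverse momentum from the momentum components."""
    return math.hypot(px, py)

def z_fraction(psi_p, jet_p):
    """Transverse momentum fraction z = PT(J/psi) / PT(jet)."""
    return pt(psi_p[0], psi_p[1]) / pt(jet_p[0], jet_p[1])

# toy events: ((J/psi px, py), (jet px, py)) in GeV -- illustrative numbers only
events = [((8.0, 3.0), (16.0, 6.0)),
          ((5.0, 0.0), (25.0, 0.0)),
          ((12.0, 5.0), (13.0, 5.4))]

# fill a coarse z histogram with 5 bins on [0, 1]
nbins = 5
hist = [0] * nbins
for psi, jet in events:
    z = z_fraction(psi, jet)
    hist[min(int(z * nbins), nbins - 1)] += 1
print(hist)
```

A real analysis would of course take $z$ from fully reconstructed and calibrated objects, and would fill the histogram separately in intervals of $\ensuremath{P_T}\xspace^{\text{jet}}$, as advocated below for multi-differential measurements.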
\begin{figure}[h!]
\centerline{\includegraphics[width = \textwidth]{./figures/plotsFJF.pdf}}
\caption{Comparisons of the $z(\ensuremath{J/\psi}\xspace)$ LHCb measurements to the predicted $z(\ensuremath{J/\psi}\xspace)$ distributions using FJF (red) and GFIP (grey) for three choices of LDMEs (left, centre, right). [Plots adapted from~\cite{Bain:2017wvk}]}
\label{fig:in-jet}
\end{figure}
There are significant opportunities to expand these existing studies of \ensuremath{J/\psi}\xspace (and other quarkonia) in jets. Quarkonia in jets can be selected in data collected via leptonic triggers on the quarkonium decay or via high-\ensuremath{P_T}\xspace hadronic triggers selecting the jet candidate, potentially providing data spanning hundreds of GeV in jet transverse momentum. As events of interest will necessarily be characterised by leptons surrounded by significant hadronic activity, care will be needed to ensure such events are not vetoed by standard online or offline lepton- or jet-reconstruction algorithms optimised for HL-LHC conditions.
At the HL-LHC, we identify the following major subsequent studies with significant phenomenological impact:
\begin{itemize}
\item The transverse momentum fraction, $z$, should be measured with finer binning and for a wider range of jet transverse momentum. The transverse momentum of the jet, which sets the hard scale of the process, controls the size of the evolution effects. This will allow the test of various aspects of the quarkonium fragmentation mechanisms and their relative contributions. In addition, a detailed study of the regime $1-z\ll 1$ will give insights into the non-perturbative aspects of quarkonium hadronisation, where the process is particularly sensitive to soft radiation.
\item To decouple modelling of the jet transverse momentum dependence from modelling of the fractional momentum associated with quarkonium states, we encourage experiments to perform measurements such as those illustrated in~\cf{fig:in-jet} in narrow intervals of jet transverse momentum, or ultimately as multi-differential measurements in jet \ensuremath{P_T}\xspace and $z$. As well as providing additional information to explore the observed discrepancies, this will allow for more detailed comparisons and combinations of data between experiments accessing complementary \ensuremath{P_T}\xspace ranges at HL-LHC.
\item Jet substructure observables, such as thrust and other angularities, will give access to details of the radiation surrounding the quarkonium within the jet. The distribution of radiation in the jet is sensitive to the production mechanisms of quarkonium and could offer an additional handle on the numerical values of the LDMEs. A theoretical investigation of observables of this type has been performed in~\cite{Bain:2016clc}.
\item Multi-differential measurements of quarkonium energy fractions and transverse momentum with respect to the jet axis~\cite{Bain:2016rrv, Makris:2017hjk} can provide a three-dimensional picture of quarkonium fragmentation.
\item Studies should be expanded to include measurement of other quarkonia, such as $\ensuremath{\psi(2S)}\xspace$ in jets or $\ensuremath{\Upsilon}\xspace(nS)$ in jets for the first time. Such measurements are critical to provide a complementary way to constrain all LDMEs used in production modelling.
\end{itemize}
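As an illustration of the angularity-type substructure observables mentioned above, a generalised angularity $\lambda_\beta=\sum_i z_i\,(\Delta R_i/R)^\beta$ over the jet constituents can be sketched as follows. Conventions for angularities differ between references, and the constituent kinematics below are toy values, not data:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Rapidity-azimuth distance between a constituent and the jet axis."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def angularity(constituents, jet_axis, beta=2.0, radius=0.5):
    """Generalised angularity lambda_beta = sum_i z_i * (DeltaR_i / R)^beta,
    with z_i the constituent pT fraction; beta = 2 is a width/thrust-like
    choice. This is one common variant among several in the literature."""
    pt_sum = sum(pt for pt, _, _ in constituents)
    jet_eta, jet_phi = jet_axis
    return sum((pt / pt_sum) * (delta_r(eta, phi, jet_eta, jet_phi) / radius) ** beta
               for pt, eta, phi in constituents)

# toy jet: constituents (pT [GeV], eta, phi) around an axis at (eta, phi) = (0, 0)
jet = [(30.0, 0.05, -0.02), (10.0, -0.10, 0.15), (5.0, 0.30, 0.25)]
print(angularity(jet, (0.0, 0.0)))
```

Smaller values correspond to radiation collimated around the jet axis; the distribution of such observables in quarkonium jets is what carries sensitivity to the production mechanism.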
Compared to light hadrons, quarkonium production is anticipated to be less affected by background contributions from multi-parton interactions and underlying-event activity. However, at the HL-LHC, the use of jet-grooming techniques might also be helpful in further removing contributions of soft radiation, allowing for an improved convergence of experimental studies and theoretical calculations. In particular, it is important to keep under control the impact of DPS, whereby a quarkonium is produced in one scattering simultaneously with a dijet from another, with one of these jets so close to the quarkonium that the latter is considered to belong to the jet.
\subsubsection{$\pazocal{Q}$ as a function of the particle multiplicity}
\label{sec:oniaparticlemultiplicity}
Multiplicity-dependent studies of quarkonium production give insights into the correlations between hard (heavy-quark production) and soft (charged-particle multiplicity) processes, and improve our understanding of multi-parton interactions (MPI) and initial-state effects. Figures~\ref{yields_vs_quarkonia_mid} and~\ref{yields_vs_quarkonia} show the normalised yields of quarkonium production at mid- and forward rapidity as a function of the charged-particle multiplicity measured at mid-rapidity, d$N_{\rm ch}$/d$\eta$, for \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}}$ = 5.02 and 13 TeV, respectively~\cite{Acharya:2020pit}. The results at forward rapidity (Fig.~\ref{yields_vs_quarkonia}) include $\ensuremath{\Upsilon}\xspace$ states as well as $\ensuremath{\psi(2S)}\xspace$ and $\ensuremath{J/\psi}\xspace$, whereas the mid-rapidity ones (Fig.~\ref{yields_vs_quarkonia_mid}) show only inclusive $\ensuremath{J/\psi}\xspace$.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.4\textwidth]{figures/2020-05-22-2020-05-22-pp13_NormJpsi-NormNch_pTint_models.pdf}
\hspace{0.5cm}
\includegraphics[width = 0.4\textwidth]{figures/2020-05-22-2020-05-22-pp13_NormJpsi-NormNch_pTdep_EPOS_CPP.pdf}
\caption{Normalised inclusive $\ensuremath{J/\psi}\xspace$ yield at mid-rapidity as a function of the normalised charged-particle pseudorapidity density at mid-rapidity for the $\ensuremath{P_T}\xspace$-integrated (left) and $\ensuremath{P_T}\xspace$ differential (right) cases. The data~\cite{Acharya:2020pit} are compared to theoretical predictions from EPOS-3~\cite{Werner:2013tya}, the coherent particle production model (CPP)~\cite{Kopeliovich:2013yfa}, the percolation model~\cite{Ferreiro:2012fb}, the CGC model~\cite{Ma:2018bax}, the 3-Pomeron CGC model~\cite{Siddikov:2019xvf}, and {\sc pythia}\xspace~8.2~\cite{Sjostrand:2014zea}. [Plots taken from~\cite{Acharya:2020pit}]
}
\label{yields_vs_quarkonia_mid}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figures/2019-06-05-quarkonia_all_energy.pdf}
\hspace{0.1cm}
\includegraphics[width=0.4\textwidth]{figures/2020-05-22-2020-05-22-Psi2sYieldWithRatio.pdf}
\caption{Normalised yields of inclusive quarkonia at forward rapidity as a function of the charged-particle multiplicity at midrapidity d$N_{\rm ch}$/d$\eta$ in \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}}$ = 5.02 (left) and 13 TeV (right). Both quantities are normalised by the corresponding value for minimum-bias \ensuremath{pp}\xspace\ collisions (d$N_{Q\bar{Q}}$/d$y$, d$N_{\rm ch}$/d$\eta$). The error bars represent the statistical uncertainty on the relative quarkonium yields, while the point-to-point systematic uncertainties are depicted as boxes. The dotted line ($y=x$) is drawn to visualise the deviation of the data points from linearity. [Plots taken from~\cite{Acharya:2020pit}]
\label{yields_vs_quarkonia}
}
\end{figure}
The quarkonium normalised yield increases linearly as a function of the multiplicity at forward rapidity, while at mid-rapidity a faster-than-linear increase is observed for both the $\ensuremath{P_T}\xspace$-integrated and $\ensuremath{P_T}\xspace$-differential $\ensuremath{J/\psi}\xspace$ cases. Several theoretical models predict a correlation of the $\ensuremath{J/\psi}\xspace$ normalised yield with the normalised event multiplicity that is stronger than linear. These include a coherent particle production model~\cite{Kopeliovich:2013yfa}, the percolation model~\cite{Ferreiro:2012fb}, the EPOS3 event generator, a CGC+NRQCD model~\cite{Ma:2018bax}, the {\sc pythia}\xspace\,8.2 event generator~\cite{Sjostrand:2014zea}, and the 3-Pomeron CGC model~\cite{Siddikov:2019xvf}. In all these models, the predicted correlation results from an ($N_{ch}$-dependent) reduction of the charged-particle multiplicity. This can be an effect of the colour string reconnection mechanism as implemented in {\sc pythia}\xspace, but the initial-state effects as implemented in the percolation or CGC models lead to a similar reduction in the particle multiplicity. Concerning the excited-over-ground yield ratios, recent results by the ALICE collaboration on charmonium production at forward rapidity are consistent with unity within the systematic uncertainties, while pointing at a possible reduction with increasing normalised multiplicity~\cite{Gromada:2020knj}. Moreover, the preliminary results on the $\ensuremath{\Upsilon}\xspace$ and $\ensuremath{\Upsilon(2S)}\xspace$ normalised yields versus the normalised multiplicity~\cite{Ding:2020stx} point at a stronger departure from linearity for the excited state than for the ground state, which would lead to a reduction of the ratio $\ensuremath{\Upsilon(2S)}\xspace/\ensuremath{\Upsilon}\xspace$ at high multiplicities.
The measurements outlined here are currently limited by systematic uncertainties that will be reduced with the upcoming HL-LHC quarkonium data. With the expected much larger data samples and enlarged rapidity coverage, LHC experiments have good prospects to perform multi-differential studies among different kinematic variables, also at forward rapidity. This will enable focused studies of the dependence in specific regions of the phase space and precision analyses of the higher-excited quarkonium states, such as the $\ensuremath{\Upsilon(3S)}\xspace$.
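In terms of the quantities plotted above, the departure from linearity is simply the ratio of the self-normalised yield to the self-normalised multiplicity. A trivial sketch with toy, illustrative values:

```python
def departure_from_linearity(rel_yield, rel_mult):
    """Ratio of self-normalised yield to self-normalised multiplicity:
    1 for a perfectly linear correlation, > 1 for a faster-than-linear one."""
    return [y / m for y, m in zip(rel_yield, rel_mult)]

# toy values: self-normalised multiplicity bins and self-normalised J/psi yields
rel_mult = [0.5, 1.0, 2.0, 4.0]
rel_yield = [0.45, 1.0, 2.3, 5.6]  # grows faster than linear at high multiplicity
print(departure_from_linearity(rel_yield, rel_mult))
```

Plotting this ratio versus the self-normalised multiplicity makes the comparison to the dotted $y=x$ reference of the figures immediate.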
\subsection{Production of unconventional states}
\label{sec:oniaUnconventional}
\subsubsection{Production of \ensuremath{\eta_c}\xspace and \ensuremath{\eta_c(2S)}\xspace states\label{sec:etac}}
Studies of the \ensuremath{\eta_c}\xspace and \ensuremath{\eta_c(2S)}\xspace states, as spin partners of the \ensuremath{J/\psi}\xspace and \ensuremath{\psi(2S)}\xspace, provide independent constraints on the LDMEs of the spin-triplet family based on HQSS, following the velocity-scaling rule of NRQCD. Next-to-leading order (NLO) NRQCD calculations~\cite{Butenschoen:2014dra,Han:2014jya,Zhang:2014ybe} show that there are only two relevant channels, a CO channel and the leading-$v^2$ CS channel, while the feed-down contribution is negligible. This greatly simplifies the corresponding theoretical analysis. On the other hand, all $S$- and $P$-wave CO channels, as well as significant feed-down contributions in the $\ensuremath{J/\psi}\xspace$ case, compete with each other in the $\psi$ hadroproduction processes.
At the LHC, two LHCb measurements of \ensuremath{\eta_c}\xspace exist~\cite{Aaij:2014bga,Aaij:2019gsn}, via the hadronic decay channel $\ensuremath{\eta_c}\xspace\to p\bar{p}$~\cite{Barsuk:2012ic} with a branching fraction of about $1.45\cdot 10^{-3}$~\cite{Tanabashi:2018oca}. The current trigger of the LHCb detector only allows the measurement of the $\ensuremath{P_T}\xspace$ spectrum of \ensuremath{\eta_c}\xspace with $\ensuremath{P_T}\xspace>6.5$ GeV. Nevertheless, such measurements have already presented surprises, indicating that the CS channel alone is sufficient to account for the experimental data within the range $6.5<\ensuremath{P_T}\xspace<14$~GeV. The data therefore substantially constrain one CO LDME of the $\ensuremath{J/\psi}\xspace$ via HQSS, essentially ruling out most of the LDME sets from the world-data fits. With the much larger data samples anticipated at the HL-LHC, extensions of the $\ensuremath{P_T}\xspace$ range of the measurements will be beneficial on at least two levels. On the one hand, the low-$\ensuremath{P_T}\xspace$ data will be very useful to extract the low-$x$ gluon PDF in the proton, as discussed in Sec.~\ref{sec:pdfofpp}, given the dominance of the CS channel in this regime. The same measurement in the fixed-target mode~\cite{Hadjidakis:2018ifr} at the HL-LHC would allow a probe of the high-$x$ regime of the gluon density~\cite{Feng:2019zmn}. On the other hand, the higher-$\ensuremath{P_T}\xspace$ data will improve the sensitivity to the CO LDME, thanks to the harder $\ensuremath{P_T}\xspace$ spectrum of the CO channel compared to the CS one.
For the same reasons, the study of the \ensuremath{\eta_c(2S)}\xspace state is interesting to understand the $\psi(2S)$ production mechanism based on HQSS. Its feasibility at the LHC has been explored in~\cite{Lansberg:2017ozx} via several decay channels. Figure~\ref{fig:etac2S} shows that the $\ensuremath{P_T}\xspace$-differential cross section strongly depends on the choice of the CO LDME set. A measurement, with a dedicated trigger, of \ensuremath{\eta_c(2S)}\xspace at the HL-LHC will impact the final theoretical interpretation of the charmonium production data.
Equivalent measurements with bottomonium are also feasible:
prospects for $\eta_b$ studies at the LHC have recently been discussed in~\cite{Lansberg:2020ejc}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\textwidth]{figures/XSBr_etac2S_LHCb13TeV_Shao-JPL-crop.pdf}
\includegraphics[width=0.32\textwidth]{figures/XSBr_etac2S_LHCb13TeV_Gong-JPL-crop.pdf}
\includegraphics[width=0.32\textwidth]{figures/XSBr_etac2S_LHCb13TeV_Bodwin-JPL-crop.pdf}
\caption{Differential-$\ensuremath{P_T}\xspace$ cross section for \ensuremath{\eta_c(2S)}\xspace production times ${\cal B}(\ensuremath{\eta_c(2S)}\xspace \to p \bar p)$ for the three CO LDME sets~\cite{Shao:2014yta,Gong:2012ug,Bodwin:2015iua} along the projected statistical uncertainties using the central theoretical values in each case, with an assumed efficiency of 2\% and a luminosity of 1.5~fb$^{-1}$ by the LHCb detector at $\ensuremath{\sqrt{s}}=13$~TeV. [Plots taken from~\cite{Lansberg:2017ozx}]}
\label{fig:etac2S}
\end{figure}
\subsubsection{Polarisations of $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ states\label{sec:chicpol}}
It has been advocated in~\cite{Shao:2014fca,Faccioli:2018uik} that the measurement of the polarisations of the $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ mesons at the LHC would uncover how the $P$-wave states are produced at hadron colliders. These states contribute around $30\%$ of the prompt $\ensuremath{J/\psi}\xspace$ yield via the radiative decays $\chi_{c}\to \ensuremath{J/\psi}\xspace+\gamma$ (see~\cite{Lansberg:2019adr} for an up-to-date discussion of the feed-down component). Such studies are also motivated by the simplicity of these states from the NRQCD perspective owing to the HQSS relation: only a single CO LDME needs to be determined from the experimental data, compared to three for the $S$-wave states, to reach a comparable precision; NRQCD thereby has a stronger predictive power for the $P$-wave states.
A first measurement by the CMS collaboration with 19.1~fb$^{-1}$ of data in \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}}=8$ TeV appeared recently~\cite{Sirunyan:2019apc}. Such a measurement is very challenging at both ATLAS and CMS because the photons from these radiative transitions have to be measured through their conversions to $e^+e^-$~\cite{ATLAS:2014ala,Khachatryan:2014ofa} to achieve a high-precision measurement. The analysis is thus limited by the large systematic uncertainty associated with the muon and photon detection efficiencies. Consequently, only the difference between the $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ polarisations, obtained from the angular dependence of the $\ensuremath{\chi_{c2}}\xspace/\ensuremath{\chi_{c1}}\xspace$ yield ratio, is available. In other words, the $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ polarisations are not yet known separately.
Figure~\ref{fig:chicpol} displays the coefficient $\lambda_{\theta}$ %
in the decay chain $\chi_{c}\to \ensuremath{J/\psi}\xspace\gamma\to \mu^+\mu^-\gamma$, which follows the form $1+\lambda_{\theta}\cos^2{\theta}$~\cite{Faccioli:2010ji,Faccioli:2011be,Shao:2012fs}, where $\theta$ is the polar angle of $\mu^+$ in the rest frame of the $\ensuremath{J/\psi}\xspace$ meson. The left panel shows the polarisation pattern of $\ensuremath{\chi_{c2}}\xspace$ when assuming unpolarised $\ensuremath{\chi_{c1}}\xspace$.
The right panel compares data with a NLO NRQCD prediction~\cite{Shao:2014fca,Faccioli:2018uik} for the $\ensuremath{\chi_{c2}}\xspace$ polarisation after fixing the $\ensuremath{\chi_{c1}}\xspace$ polarisation to the corresponding NRQCD values.
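For reference, the $1+\lambda_{\theta}\cos^2{\theta}$ form quoted above is the polar projection of the general dilepton angular distribution commonly used in quarkonium polarisation analyses~\cite{Faccioli:2010ji},
\begin{equation*}
W(\cos\theta,\varphi)\;\propto\;\frac{1}{3+\lambda_{\theta}}\left(1+\lambda_{\theta}\cos^2\theta+\lambda_{\varphi}\sin^2\theta\cos 2\varphi+\lambda_{\theta\varphi}\sin 2\theta\cos\varphi\right),
\end{equation*}
where $\lambda_{\theta}=+1$ ($-1$) corresponds to fully transverse (longitudinal) polarisation, and the azimuthal terms $\lambda_{\varphi}$ and $\lambda_{\theta\varphi}$ vanish upon integration over $\varphi$, leaving the polar form used in the text.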
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figures/CMS_chic_polarisation.pdf}
\caption{Measurements of the polarisation parameter $\lambda_{\theta}$ versus $\ensuremath{P_T}\xspace/m_{\ensuremath{\chi_c}\xspace}$ for $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ production in \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}}=8$ TeV measured by the CMS collaboration compared to NRQCD predictions. [Plots adapted from~\cite{Sirunyan:2019apc}]}
\label{fig:chicpol}
\end{figure}
At the HL-LHC, the limitation of measuring only the $\ensuremath{\chi_{c1}}\xspace$ and $\ensuremath{\chi_{c2}}\xspace$ polarisation difference can be lifted, thanks to the much larger data samples and a better control of systematic uncertainties. This will require a commitment to collect high-statistics calibration data at low $\ensuremath{P_T}\xspace$ for the determination of muon and photon conversion efficiencies. It is thus very desirable to measure the polarisation of each $P$-wave state independently in the future. The equivalent measurements for the $\ensuremath{\chi_b}\xspace$ states will also be useful for understanding the corresponding bottomonium sector. In fact, these measurements are also necessary for a proper interpretation of the \ensuremath{J/\psi}\xspace and \ensuremath{\Upsilon}\xspace polarisation measurements, in order to account for the effect of the $\chi_Q$ feed-down component.
\subsubsection{Production of the exotic $X$, $Y$ and $Z$ states}
\label{sec:pp-xyz}
Many $X$, $Y$ and $Z$ states have been found following the $X(3872)$ observation~\cite{Choi:2003ue}, but their underlying (multi-quark or hadron-molecular) nature is still unclear. Studying the production of $X$, $Y$ and $Z$ states, both theoretically and experimentally, can provide important information to understand the formation and properties of multi-quark states.
Let us take as an example the $X(3872)$, now also referred to as $\chi_{c1}(3872)$. Its prompt production rate has been estimated within the NRQCD factorisation approach~\cite{Artoisenet:2009wk} assuming that it is a pure charm-meson molecule. In order to adequately describe measurements of its production rate by the CDF collaboration, the charm-meson rescattering mechanism was introduced in~\cite{Artoisenet:2009wk}. However, the LO calculation with the non-perturbative matrix element determined from the CDF data leads to much larger yields than those measured by the CMS collaboration~\cite{Chatrchyan:2013cld}, as shown in~\cf{fig:X3872HadroProd} (left). In contrast, the authors of~\cite{Meng:2013gga} suggested that the $X(3872)$ is a mixture of $\ensuremath{\chi_{c1}}\xspace(2P)$ and $D^0\bar{D}^{0*}$ states, and that its hadroproduction proceeds dominantly through the $\ensuremath{\chi_{c1}}\xspace(2P)$ component. The cross section through the charm-meson molecular component was assumed to be negligible in~\cite{Meng:2013gga}, and the fraction of the charmonium component in the $X(3872)$ was tuned to the CMS data.
Once the overall normalisation of the cross section from CMS observations was fit by fixing this charmonium fraction, the NRQCD approach was found to describe the data across a much larger range of \ensuremath{P_T}\xspace as observed by ATLAS~\cite{Aaboud:2016vzw} (Fig.~\ref{fig:X3872HadroProd}, right). It is worth noting that the NRQCD calculations plotted in the two panels of~\cf{fig:X3872HadroProd} have very different underlying assumptions. The LO NRQCD curve~\cite{Artoisenet:2009wk} in the left (CMS) plot assumes the $X(3872)$ to be a pure loosely bound charm-meson molecular state and the model includes the charm-meson rescattering mechanism, whereas the NLO NRQCD curve in the right (ATLAS) panel takes the $X(3872)$ as a mixed state of charmonium and charm-meson molecule and assumes that its production via the charmonium component is dominant. On the other hand, the non-prompt $X(3872)$ production rate measured by ATLAS~\cite{Aaboud:2016vzw} was found to be poorly described by fixed-order next-to-leading-log (FONLL) predictions~\cite{Cacciari:2012ny}, in contrast to the good agreement observed for the charmonium $\ensuremath{\psi(2S)}\xspace$ case. Analysis of the non-prompt $X(3872)$ lifetime distribution indicated the presence of an anomalously large short-lifetime component consistent with decays via the $B_c$, which has yet to be fully understood. Given the unclear nature of the $X(3872)$, and the different model/theoretical assumptions in the various existing cross-section calculations, it is not yet conclusive which picture can successfully reproduce the world data on the $X(3872)$.
\begin{figure}[h!]
\centering
\raisebox{15pt}{\includegraphics[width=0.49\textwidth]{figures/X3872_CMS_2013_PromptPTVsTheo.pdf}}
\includegraphics[width=0.45\textwidth]{figures/X3872_ATLAS_2017_PromptPTVsTheo.pdf}
\caption{\ensuremath{P_T}\xspace\ differential cross section times branching fraction as a function of $\ensuremath{P_T}\xspace$ for the prompt production of the $X(3872)$ state, measured by the CMS collaboration~\cite{Chatrchyan:2013cld} (left), and by the ATLAS collaboration~\cite{Aaboud:2016vzw} (right), compared to the theoretical predictions from~\cite{Artoisenet:2009wk} and~\cite{Meng:2013gga}, respectively. [Plots taken from~\cite{Chatrchyan:2013cld,Aaboud:2016vzw}]}
\label{fig:X3872HadroProd}
\end{figure}
\begin{figure}[h!]
\centerline{\includegraphics[width=0.49\linewidth]{figures/Upsilon_pp_CMS_c_updated.pdf}
\includegraphics[width=0.49\linewidth]{figures/figLHCb_CIM_X_updated.pdf}}
\caption{\label{fig:figppUpsilonX}
(Left) Relative yields of excited-to-ground-state $\ensuremath{\Upsilon}\xspace$ mesons as a function of event multiplicity in \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}} = 2.76$~TeV at central rapidities as measured by the CMS collaboration~\cite{Chatrchyan:2013nza}. (Right) Relative yields of $X(3872)$ over $\psi(2S)$ as a function of charged-particle multiplicity at $\ensuremath{\sqrt{s}} = 8$~TeV in the forward region as measured by the LHCb collaboration~\cite{Aaij:2020hpf}. The bands represent the CIM results~\cite{Ferreiro:2018wbd,Esposito:2020ywk}, while the points refer to the experimental data. In the right panel, the brown band assumes $X(3872)$ is a molecular state with a radius of 5~fm, while the green band assumes it is a compact tetraquark state with a radius of 0.65~fm.%
}
\end{figure}
The study of the relative production of $X(3872)$ over conventional quarkonium states as a function of particle multiplicity, as performed by the LHCb collaboration~\cite{Aaij:2020hpf} in \ensuremath{pp}\xspace\ collisions at 8 TeV, can help to discriminate the nature of this exotic state. The preliminary LHCb data on the relative yields of the exotic $X(3872)$ over $\psi(2S)$ meson show a similar behaviour to the relative yields of excited-over-ground-state $\ensuremath{\Upsilon}\xspace$ mesons reported by the CMS collaboration~\cite{Chatrchyan:2013nza} in \ensuremath{pp}\xspace\ collisions at 2.76 TeV, pointing to a possible common origin.
Both data sets have been studied in the framework of the comover interaction model (CIM), {i.e.}\xspace\ including final-state interactions with the comoving medium~\cite{Ferreiro:2018wbd,Esposito:2020ywk}. The results are shown in~\cf{fig:figppUpsilonX}. Within this approach, the radius of the $X(3872)$ is about twice that of the $\ensuremath{\psi(2S)}\xspace$.
This finding supports the $X(3872)$ being a tetraquark state and disfavours the molecular interpretation that would need a much larger radius, close to 5 fm.
With the data delivered by the HL-LHC, many more rare $X$, $Y$ and $Z$ states are likely to be collected. These future datasets offer the opportunity to measure their double differential production cross section as functions of transverse momentum and rapidity, or set much more stringent upper limits on their hadroproduction cross sections. In addition, the polarisations of some $X$, $Y$ and $Z$ states in \ensuremath{pp}\xspace\ collisions, {e.g.}\xspace\ as proposed in~\cite{Butenschoen:2019npa} for the $X(3872)$, can also be measured, as done recently by the CMS collaboration for the $\chi_{c1,2}$ mesons~\cite{Sirunyan:2019apc} (see also the discussion in Sec.~\ref{sec:chicpol}).
\subsection{$\pazocal{Q}$-associated-production processes}
\label{sec:oniumassociate}
\subsubsection{Associated production of $\pazocal{Q}$ and vector bosons }
\label{sec:onium_bosons}
The study of associated-production processes, such as the combined production of quarkonia with a vector boson, provides a new tool to study perturbative and non-perturbative QCD, novel approaches to searches for new phenomena for both light and heavy states~\cite{Clarke:2013aya,Aaltonen:2014rda}, and an additional probe of multiple-parton-scattering interactions complementary to $W+\mathrm{jet}$, $WW$, and di-quarkonium production processes, which are discussed in Section~\ref{sec:dps}.
Associated $\ensuremath{J/\psi}\xspace+Z$ and $\ensuremath{J/\psi}\xspace+W$ production have both been observed~\cite{Aad:2014rua,Aad:2014kba,Aaboud:2019wfr} by the ATLAS collaboration. These are extremely rare processes, with only approximately one in every $10^{6}$ $W$ or $Z$ boson-production events also producing a $\ensuremath{J/\psi}\xspace$ in the fiducial volume of the ATLAS detector. Yet they can provide rich physics opportunities well-suited to precision studies with the large data-sets at the HL-LHC. The presence of a vector boson allows for more efficient event triggering than would be possible for inclusive quarkonium processes. The resulting relatively high-$\ensuremath{P_T}\xspace$ multi-lepton signatures mean that selections are resilient to the expected high instantaneous luminosities (and correspondingly large numbers of multiple simultaneous \ensuremath{pp}\xspace\ pileup interactions) anticipated at the HL-LHC. Based on existing selections, this means that ultimate HL-LHC data-sets of 8\,500 prompt $\ensuremath{J/\psi}\xspace+Z$ events and 30\,000 prompt $\ensuremath{J/\psi}\xspace+W$ events, and twice as many non-prompt events, can be expected to be recorded for study by each of the general purpose detectors. Similar measurements of $\ensuremath{J/\psi}\xspace+\gamma$ associated production, as well as equivalent processes with bottomonium production and with excited quarkonium states, together provide a rich laboratory for future exploration.
\cf{fig:zjpsi_atlas} shows an example of the measured differential $\ensuremath{J/\psi}\xspace+Z$ rates for prompt $\ensuremath{J/\psi}\xspace$ production compared with CO and CS NLO NRQCD predictions and data-driven estimates of the DPS contribution. Existing measurements point to discrepancies that can in part be explained by: enhanced DPS rates inconsistent with measurements from inclusive vector boson and hadronic jet processes; non-trivial correlations in DPS interactions; or enhanced contributions to single parton scattering (SPS) rates that become particularly important at large transverse momenta.
The limited data currently available imply that the existing DPS extractions are approximate (see detailed discussions in Section~\ref{sec:dps}). Data from the HL-LHC will enable more detailed studies of DPS dynamics, necessary to decouple DPS from SPS interactions at low momentum transfer. Current $\ensuremath{J/\psi}\xspace$ measurements are limited to differential rates versus $\ensuremath{P_T}\xspace$, but measurements of other observables and of two-dimensional differential distributions, such as the difference in azimuthal angle between the boson and the quarkonium, $\Delta\phi(Z, \ensuremath{J/\psi}\xspace)$, versus $\ensuremath{P_T}\xspace(\ensuremath{J/\psi}\xspace)$, are recommended in the future to decouple SPS and DPS dynamics and to allow precision studies and reinterpretation of these data. First studies in this direction have been produced by the ATLAS collaboration, which observed~\cite{Wjpsi2020aux} no strong $\ensuremath{P_T}\xspace(\ensuremath{J/\psi}\xspace)$ dependence of the $\Delta\phi(W, \ensuremath{J/\psi}\xspace)$ distribution.
\begin{figure}[h!]
\centering
\includegraphics[width=0.47\textwidth]{figures/zjpsi_fig_06a.pdf}
\includegraphics[width=0.47\textwidth]{figures/zjpsi_figaux_09a.pdf}
\caption{Measured differential prompt $\ensuremath{J/\psi}\xspace+Z$ rates as a function of $\ensuremath{J/\psi}\xspace$ $\ensuremath{P_T}\xspace$ compared to SPS predictions and a DPS rate normalised to $\sigma_\mathrm{eff}=15$~mb (left), where
the normalisation is fit to data at low $\Delta\phi(Z, \ensuremath{J/\psi}\xspace)$ resulting in a preferred $\sigma_\mathrm{eff}=5.3$~mb (right). [Plots taken from~\cite{Aad:2014kba}.]}
\label{fig:zjpsi_atlas}
\end{figure}
At high \ensuremath{P_T}\xspace, SPS can be expected to dominate over DPS processes. As $\ensuremath{J/\psi}\xspace$ produced in association with a vector boson have been observed to have harder $\ensuremath{P_T}\xspace$ spectra than inclusive $\ensuremath{J/\psi}\xspace$ (a drop of two to three orders of magnitude from 10--100~GeV in the former, compared to a drop of six orders of magnitude~\cite{Aad:2015duc,Khachatryan:2015rra} in the latter), data at the HL-LHC are expected to provide a high-$\ensuremath{P_T}\xspace$ reach comparable to that of current inclusive measurements for detailed testing of perturbative QCD calculations.
\cf{fig:wjpsi_atlas}~(Left) illustrates how rates of $\ensuremath{J/\psi}\xspace+W$ production can be described by NLO CEM predictions together with a DPS contribution with an effective cross section of $\sigma_\mathrm{eff} = 6.1^{+3.3}_{-1.9}{}^{+0.1}_{-0.3}$~mb~\cite{Lansberg:2017chq}, compatible with the minimum of $6.3\pm 1.9$~mb determined by ATLAS~\cite{Aaboud:2019wfr} and with the lower (68\% C.L.) limit of $5.3$~mb determined in the $\ensuremath{J/\psi}\xspace+Z$ process~\cite{Aad:2014kba}. However, new data, shown in \cf{fig:wjpsi_atlas}~(right), illustrate that challenges remain in describing associated production in the high-$\ensuremath{P_T}\xspace$ regime, where perturbative calculations underestimate the data by an order of magnitude.
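For orientation, the effective cross sections quoted above enter through the customary DPS ``pocket formula'', which assumes the two partonic scatterings to be uncorrelated,
\begin{equation*}
\sigma^{\rm DPS}_{\ensuremath{J/\psi}\xspace+W}\;=\;\frac{\sigma_{\ensuremath{J/\psi}\xspace}\,\sigma_{W}}{\sigma_{\rm eff}},
\end{equation*}
so that a smaller fitted $\sigma_{\rm eff}$ corresponds to a larger DPS contribution for given single-scattering cross sections.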
\begin{figure}[h!]
\centering
\raisebox{12pt}{\includegraphics[width=0.47\textwidth]{figures/dpt_ppsiW_ATLAS_CEM_MC127-crop.pdf}}
\hspace{0.5cm}
\includegraphics[width=0.45\textwidth]{figures/wjpsi_8TeV_fig_03b.pdf}
\caption{(Left) Measured differential prompt $\ensuremath{J/\psi}\xspace+W$ rates as a function of $\ensuremath{J/\psi}\xspace$ $\ensuremath{P_T}\xspace$ in \ensuremath{pp}\xspace\ collisions at 7~TeV and (Right) at 8~TeV extending to $\ensuremath{P_T}\xspace(\ensuremath{J/\psi}\xspace)>100$~GeV compared to CEM SPS and DPS predictions. The data--theory agreement is good on the left, but the data exceed the predictions in the right distribution.
[Plots and data taken from~\cite{Lansberg:2017chq} and~\cite{Aad:2014rua,Aaboud:2019wfr}.]}
\label{fig:wjpsi_atlas}
\end{figure}
Studies of other observables such as the spin alignment of quarkonia in associated production are only expected to become possible in the HL-LHC era, and will provide additional information on the underlying production mechanisms. The NLO CS contributions, which are expected to dominate at high $\ensuremath{P_T}\xspace$, predict the polar anisotropy of direct $\ensuremath{J/\psi}\xspace$ and $\ensuremath{\Upsilon}\xspace$ produced in association with a $Z$ boson to become strongly longitudinal ($\lambda_\theta<0$)
as the $\ensuremath{P_T}\xspace$ of the quarkonium increases~\cite{Gong:2012ah}. This stands in contrast to observations of very weak spin alignment in inclusive-production modes~\cite{Chatrchyan:2013cla,Aaij:2013nlm} and so it is an area where even measurements of limited precision can provide useful inputs.
Associated production of a photon and a quarkonium state offers an additional new probe of production dynamics. This process has yet to be observed, although these final states have been the subject of study for both $P$-wave quarkonium states~\cite{Aad:2011ih,LHCb:2012ac,Chatrchyan:2012ub,ATLAS:2014ala,Sirunyan:2018dff} and exclusive $H^0$ boson decays~\cite{Aaboud:2018txb,Sirunyan:2018fmm}. The non-resonant production of $\gamma+\pazocal{Q}$ is challenging to isolate owing to low photon-reconstruction efficiencies and large experimental backgrounds at low transverse momenta. These processes are however predicted to have large SPS NLO cross sections~\cite{Lansberg:2009db}, making them well-suited for future study, in particular to constrain gluon TMDs (see Section~\ref{sec:jpsigammaTMD}). Expected DPS rates are not well known, a fact that can complicate the interpretation of measurements at low $\ensuremath{P_T}\xspace$, as for the existing $\ensuremath{J/\psi}\xspace+W/Z$ measurements. The requirement of a high-$\ensuremath{P_T}\xspace$ photon in experimental measurements will likely suppress DPS contributions and offer excellent prospects for study at the HL-LHC, provided photons can be reliably associated with the quarkonium-production vertex.
Similarly, associated vector-boson processes involving bottomonium states have yet to be observed. This is in part due to lower expected production rates and the (slightly) larger combinatorial backgrounds (from leptonic $B$ meson decays) present in the $\ensuremath{\Upsilon}\xspace(nS)\to\mu^+\mu^-$ invariant mass region.
Such processes may be sensitive to heavy-quark-gluon-fusion contributions, in addition to
the gluon-gluon and light-quark-gluon processes present for the charmonium-production modes, and
provide further complementary tools with which to study SPS and DPS dynamics.
Associated $\pazocal{Q}+W/Z/\gamma$ production provides a new opportunity to study heavy-flavour production in association with a vector boson, which has to-date otherwise been predominantly tested in hadronic jet final states~\cite{Chatrchyan:2012vr,Aaboud:2017skj,Aad:2020gfi}. Quarkonium-production modes provide an opportunity to test theoretical predictions of $W/Z/\gamma+b(c)$ in a complementary regime at low transverse momentum and at small opening angles sensitive to gluon splitting contributions.
Open-heavy-flavour states have begun to be exploited for the study of charm~\cite{Aad:2014xca,Sirunyan:2018hde}, and $\ensuremath{J/\psi}\xspace+b$ final states have demonstrated their effectiveness for study of heavy-flavour modelling~\cite{Aaboud:2017vqt} in topologies that are challenging to study with hadronic jet final states.
Existing measurements of non-prompt $\ensuremath{J/\psi}\xspace+Z$ production~\cite{Aad:2014kba} have established
these processes as having relatively high production rates and small DPS contributions. Non-prompt $\ensuremath{J/\psi}\xspace+Z$ production has been found~\cite{Lansberg:2016muq} to be a sensitive probe of $Z+b$ production which is complementary to $b$-jet identification approaches and that will, in particular, benefit from the enlarged acceptances and increased datasets at the HL-LHC.
Prompt-$\pazocal{Q}+V$ processes also represent a tool and an opportunity to study a variety of potentially new phenomena. Prompt-$\pazocal{Q}+V$ production has been proposed as a compelling prospect for the study of rare decays of the $H^0$ boson~\cite{Doroshenko:1987nj,Kartvelishvili:1988pu,Gonzalez-Alonso:2014rla}, or new heavy states~\cite{Diaz:1994pk,Davoudiasl:2012ag,Falkowski:2014ffa,Clarke:2013aya}. Such searches have begun to be explored experimentally~\cite{Aaltonen:2014rda,Aaboud:2018txb,Aad:2020hzm}, but the potential of such searches will only be fully realised in the HL-LHC era. In addition to searches for resonant phenomena decaying into $Q\bar{Q}+V$ final states, such measurements can be re-purposed in the search for new light-mass states produced in association with a vector boson. A study~\cite{Clarke:2013aya} using initial $\ensuremath{J/\psi}\xspace+W$-observation data~\cite{Aad:2014rua} was able to set competitive limits on the production of a light scalar near the $\ensuremath{J/\psi}\xspace$ mass, exceeding constraints both from dedicated low-mass di-lepton searches at the LHC, as well as from searches via radiative $\ensuremath{\Upsilon}\xspace$ decays from $e^+e^-$ experimental data. A dedicated programme of searches for new phenomena in $Q\bar{Q}+V$ final states has yet to be performed within the LHC collaborations but has fruitful prospects. The potential inclusion of $Z$-boson associated-production modes, and its extension to include higher di-lepton masses, up to and beyond the \ensuremath{\Upsilon}\xspace, \ensuremath{\Upsilon(2S)}\xspace and \ensuremath{\Upsilon(3S)}\xspace resonances, together with the large HL-LHC datasets, offer the opportunity for the LHC to far surpass~\cite{Clarke:2013aya} the current bounds from LEP data.
\subsubsection{$\pazocal{Q}$-pair production}
\label{sec:onium_pair_pp}
The production of pairs of quarkonia also offers rich opportunities for the study of both SPS and DPS as well as of searches for rare decays and for new particles. Unlike for inclusive quarkonium production, which is presumably dominated by quarkonium plus jet(s) or minijet(s) final states (see, {e.g.}\xspace,~\cite{Artoisenet:2008fc,Lansberg:2008zm,Shao:2018adj,Flore:2020jau}), the quarkonium pair production processes, including double charmonia, double bottomonia and charmonium+bottomonium, can in principle provide independent handles to investigate the SPS quarkonium production mechanism at LHC energies. Such measurements are, however, contaminated by sizeable DPS contributions. Conversely, the measurements of these processes are also strongly motivated~\cite{Kom:2011bd,Lansberg:2014swa} by their potential for study of DPS interactions in their own right (see Section~\ref{sec:dps}), and as a probe of the linearly polarised gluons inside the proton~\cite{Lansberg:2017dzg,Scarpa:2019fol} (see Section~\ref{sec:spin}), as well as for the search for new exotic states predicted in QCD~\cite{Chen:2018cqz,Aaij:2020fnh} and in beyond the Standard Model theories, and for searches for rare decay modes of $H^0$ and $Z$ bosons~\cite{Sirunyan:2019lhe}.
The di-\ensuremath{J/\psi}\xspace final state has proven the potential of such searches with the large
datasets beginning to become available at the LHC with the recent
observation by the LHCb collaboration of a new state~\cite{Aaij:2020fnh} at a mass of 6.9~GeV, widely interpreted as a fully-charm tetraquark state.
Due to their rare but distinctive four-lepton signatures, di-quarkonia are excellent candidates for precision studies with the large datasets expected at the HL-LHC. The experimental challenge will be to ensure wide kinematic coverage and high event-selection efficiency in the complex HL-LHC environment. Searches for their production in the decay of $H^0$ bosons~\cite{Aaboud:2018fvk,Sirunyan:2019lhe},
or other high-mass states, will benefit from Lorentz boosts in systems with large invariant mass. However, for new particle searches below the $Z$ boson peak (and particularly below the $B\bar{B}$ threshold)~\cite{Aaij:2018zrb,Sirunyan:2020txn}, an effective use of the unprecedented luminosities delivered in the HL-LHC programme requires that the experiments (in particular, the general purpose detectors) maintain a high efficiency for reconstruction of low transverse-momentum leptons, $\mathcal{O}(2-4~\mathrm{GeV})$, in four-lepton signature events containing one or more quarkonium candidates.
The di-$\ensuremath{J/\psi}\xspace$ final states have been the focus of many theoretical studies~\cite{Qiao:2002rh,Li:2009ug,Qiao:2009kg,Ko:2010xy,Berezhnoy:2011xy,Kom:2011bd,Lansberg:2013qka,Li:2013csa,Sun:2014gca,Lansberg:2014swa,Lansberg:2015lva,He:2015qya,Baranov:2015cle,Likhoded:2016zmk,Borschensky:2016nkv,Lansberg:2017dzg,Scarpa:2019fol,Gridin:2019nhc,Lansberg:2019fgm,He:2019qqr,Lansberg:2020rft}, reflected in the concentration of measurements from the LHC so far into the same final state~\cite{Aaij:2011yc,Abazov:2014qba,Khachatryan:2014iia,Aaboud:2016fzt,Aaij:2016bqq}.
The experimental picture has been recently broadened with measurements of di-$\ensuremath{\Upsilon}\xspace$ production by CMS~\cite{Khachatryan:2016ydm,Sirunyan:2020txn}, and a study of
$\ensuremath{J/\psi}\xspace+\ensuremath{\Upsilon}\xspace$~\cite{Abazov:2015fbl} production by D\O\ at the Tevatron.
From the theoretical perspective, di-\ensuremath{\Upsilon}\xspace production is broadly similar to the di-\ensuremath{J/\psi}\xspace process, while both differ significantly from $\ensuremath{J/\psi}\xspace+\ensuremath{\Upsilon}\xspace$. The bulk of SPS events can be accounted for by the leading-$v^2$ CS channel in the former two, while, for the latter, the complete study of~\cite{Shao:2016wor} reveals that the CO contributions, plus the feed-down contribution from excited quarkonium states, are larger than the CS channel in SPS production of $\ensuremath{J/\psi}\xspace+\ensuremath{\Upsilon}\xspace$. Given this, $\ensuremath{J/\psi}\xspace+\ensuremath{\Upsilon}\xspace$ would be a priority for study from the viewpoint of investigating the CO mechanism.
Although plagued by a significant fraction of events from DPS interactions, the CO contributions can potentially be determined at the HL-LHC through measurement of the invariant mass distribution, as shown in~\cf{fig:psiYplot}. Such a process has never been observed experimentally; the D\O\ collaboration found only $3\sigma$ evidence at the Tevatron for the inclusive process, which should be expected to be composed of SPS and DPS components. It is thus desirable to carry out an analysis at the LHC to establish observation and decouple the SPS component for further study.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\textwidth,draft=false]{./figures/dM_JpsiY_LHCb.pdf}
\caption{Prediction for the invariant mass distribution of $\ensuremath{J/\psi}\xspace+\ensuremath{\Upsilon}\xspace$ production in \ensuremath{pp}\xspace\ collisions at $\ensuremath{\sqrt{s}} = 13$~TeV within the fiducial volume $2<y_{\ensuremath{J/\psi}\xspace},\,y_{\ensuremath{\Upsilon}\xspace}<4.5$.
[Plot taken from~\cite{Shao:2016wor}]
\label{fig:psiYplot}}
\end{figure}
Besides the continuing measurements of $\ensuremath{J/\psi}\xspace+\ensuremath{J/\psi}\xspace$ and $\ensuremath{\Upsilon}\xspace(1S)+\ensuremath{\Upsilon}\xspace(1S)$, it would be difficult to achieve a coherent picture without a survey of all the excited states, and mapping the different production modes will require the large datasets of the HL-LHC era. Measurements of these excited states are eagerly anticipated: different combinations of quarkonia provide independent tests for the existence of new anticipated~\cite{Wu:2016vtq,Chen:2016jxd,Karliner:2016zzc} and unanticipated states, and will not only provide information on feed-down contributions to existing production measurements but also directly probe their own production mechanisms.
\subsubsection{Associated production of $\pazocal{Q}$ and jets}
\label{sec:onium_jets}
The production mechanisms for high-$\ensuremath{P_T}\xspace$ quarkonia result in one or more hadronic jets accompanying the quarkonium state. The multiplicity of these jets and the radiation patterns relative to the produced state (or the similar $\ensuremath{J/\psi}\xspace$--hadron azimuthal correlation studies carried out {e.g.}\xspace\ by STAR~\cite{Abelev:2009qaa}) provide valuable insights into the underlying production mechanisms of quarkonia, and are sensitive to CS/CO contributions as well as to higher-order quantum corrections.
We encourage the HL-LHC experimental collaborations to measure the production cross section of quarkonium states as functions of both jet multiplicity (for jets above some fixed transverse momentum scale, $\ensuremath{P_T}\xspace>Q_0$) and quarkonium transverse momentum. Ideally, these measurements should be matched to corresponding measurements of inclusive jet production in the same fiducial volume. As well as serving as an alternative tool for the study of single parton production, such datasets can find further application as an additional probe of DPS through measurements of angular correlations and $\ensuremath{P_T}\xspace$ balance observables, analogous to those performed in $V+\mathrm{jets}$~\cite{Aad:2013bjm,Chatrchyan:2013xxa} and other quarkonium final states (see Section~\ref{sec:DPSonia}).
Although initial studies can already be performed with the existing LHC data, high-$\ensuremath{P_T}\xspace$ $\pazocal{Q}$ + inclusive~jets signatures can be efficiently recorded at the HL-LHC, where the large datasets will enable access to events with high jet multiplicities as well as the hadronic activity accompanying very high-$\ensuremath{P_T}\xspace$ ($100$--$300$~GeV) quarkonium, a regime that inclusive production measurements are now starting to probe~\cite{ATLAS:2019ilf}.
Such studies would be complementary to measurements of quarkonium produced {\em in} jets
(Section~\ref{sec:oniainjets}) as well as studies of quarkonium correlations with soft
hadronic activity at low $\ensuremath{P_T}\xspace$ (Section~\ref{sec:oniaparticlemultiplicity}).
The high performance of jet flavour tagging at ATLAS/CMS offers the potential for novel studies of associated quarkonium and heavy-flavour production in final states with hadronic jets. Measurements of such processes have not yet been performed. The $\ensuremath{P_T}\xspace$ dependence of the production rates of $\ensuremath{J/\psi}\xspace+c$-jet or $\ensuremath{\Upsilon}\xspace+b$-jet events is sensitive to the CS and CO contributions. Measurements of the topology of such events would provide valuable information, with CS transitions expected to dominate for quasi-collinear $\ensuremath{J/\psi}\xspace+c$ or $\ensuremath{\Upsilon}\xspace+b$ production, while CO contributions would dominate in topologies with two heavy-flavour jets recoiling against the quarkonium state. Production rates~\cite{Artoisenet:2007xi} for these processes are sufficiently large at high $\ensuremath{P_T}\xspace^\pazocal{Q}>20$~GeV (needed to ensure jets can be adequately reconstructed and tagged), and thereby there are good prospects for study of these processes at the HL-LHC. Assuming a combined trigger and dimuon efficiency of approximately 50\%~\cite{Aad:2012dlq}
and low-$\ensuremath{P_T}\xspace$ $c$-tagging and $b$-tagging efficiencies of 25\% and 50\%, respectively~\cite{Aad:2015ydr,Sirunyan:2017ezt}, estimated yields of $\ensuremath{J/\psi}\xspace+c\bar{c}$ and $\ensuremath{\Upsilon}\xspace+b\bar{b}$ events (with at least one identified heavy-flavour jet) of 7\,500 and 150\,000 can be expected at each of the ATLAS and CMS detectors with the HL-LHC dataset, {\em if} the current efficiencies for triggering on quarkonium states down to $\ensuremath{P_T}\xspace\approx$~20~GeV can be maintained. %
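Schematically, such yield projections follow from
\begin{equation*}
N \;\approx\; \mathcal{L}_{\rm int}\times\sigma(\ensuremath{pp}\xspace\to\pazocal{Q}+Q\bar{Q}+X)\times{\cal B}(\pazocal{Q}\to\mu^+\mu^-)\times\varepsilon_{\mu\mu}\times\varepsilon_{\rm tag},
\end{equation*}
with $\varepsilon_{\mu\mu}\approx 50\%$ and $\varepsilon_{\rm tag}\approx 25\%$ ($c$-jets) or $50\%$ ($b$-jets) as quoted above; the cross sections and integrated luminosity entering such estimates are, of course, analysis-dependent assumptions.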
\subsection{Constraining the gluon PDF in the proton using $\pazocal{Q}$\label{sec:pdfofpp}}
The progressive accumulation of large amounts of experimental data at the LHC, with correspondingly reduced statistical uncertainties, requires a parallel effort to decrease the theoretical uncertainties of the corresponding predictions. Theoretical hadronic cross sections exhibit dependencies on intrinsic scales such as the factorisation and renormalisation scales. The inclusion of higher-order perturbative QCD corrections in calculations of partonic cross sections, mostly at NLO accuracy today, will lead to a reduction of such scale uncertainties. For many theoretical calculations, the PDF uncertainties are now often the dominant uncertainty. In particular, the gluon density in the small-$x$ domain ($x\lesssim 10^{-4}$), extrapolated from larger-$x$ regions, is essentially unconstrained by experimental data, and affects quarkonium cross sections in the low-$\ensuremath{P_T}\xspace$ and/or forward regimes. Low-$x$ studies are also relevant in searches for phenomena beyond the DGLAP linear QCD evolution equations~\cite{Gribov:1972ri,Dokshitzer:1977sg,Altarelli:1977zs}, such as those driven by Balitsky-Fadin-Kuraev-Lipatov (BFKL)~\cite{Fadin:1975cb,Kuraev:1976ge,Kuraev:1977fs,Balitsky:1978ic} dynamics or non-linear parton saturation~\cite{Gribov:1984tu,Mueller:1985wy,Gelis:2010nm}.
It is thus worthwhile to explore new avenues for PDF fits by incorporating precise data that are already available but not traditionally considered in global PDF fits. A possibility in this direction is offered by the LHC data on hidden and open heavy-flavour production, characterised by very high statistical precision. Recent small-$x$ gluon constraints, though not yet included in the global analyses, have been obtained from inclusive open-charm production~\cite{Zenaiev:2015rfa,Gauld:2016kpd, Bertone:2018dse, Zenaiev:2019ktw} and from $\ensuremath{J/\psi}\xspace$ production in exclusive reactions~\cite{Flett:2020duk} (see Section~\ref{sec:dif_gluonpdf} for details). These data probe the domain of very low $x$, down to $x\simeq 3\cdot 10^{-6}$, and low scales (a few GeV$^2$). However, tensions arise between the open-charm and the exclusive-$\ensuremath{J/\psi}\xspace$ extractions. It would therefore be valuable to have new insights from additional independent determinations with similar inputs, such as from the inclusive quarkonium data.
Given the lack of consensus on the production mechanisms, the inclusion of inclusive quarkonium-production data in global PDF fits, initially proposed in the 1980s~\cite{Martin:1987vw, Martin:1987ww}, was later abandoned. In addition, the $\psi$ and $\ensuremath{\Upsilon}\xspace$ hadroproduction processes suffer from large LDME uncertainties arising from several competing CO channels. These non-perturbative CO LDMEs could, however, be largely determined from the corresponding experimental data for each PDF choice. The $\psi$ and $\ensuremath{\Upsilon}\xspace$ data could thus be used in a PDF fit, even in the absence of a coherent picture, with the LDME determinations treated as correlated with the fit, analogously to the role of the strong coupling $\alpha_s$ in other PDF analyses.
The situation could be radically improved if, for a given quarkonium observable, one is able to identify a single dominant channel or mechanism. A typical example is \ensuremath{\eta_c}\xspace hadroproduction at the LHC at $\ensuremath{P_T}\xspace \lesssim 12$~GeV (see also the discussion in Section~\ref{sec:etac}), where it is understood that only the leading-$v^2$ CS channel is relevant. The remaining obstacle to using the quarkonium data to pin down the small-$x$ gluon density in the proton is the large intrinsic theoretical uncertainty of the cross-section calculations, which are at NLO accuracy today. As in the open-charm case, these scale uncertainties can be largely mitigated by looking at ratios of (differential) cross sections, such as the ratio of two independent measurements at different centre-of-mass energies~\cite{Mangano:2012mh} or of cross sections in two different rapidity bins~\cite{Zenaiev:2015rfa}. These ratios have the extra advantage that some of the systematic uncertainties cancel, such as those related to the (single) LDME in the theoretical calculations and the correlated systematic errors in the experimental measurements. Exploiting the LHCb Run-2 \ensuremath{\eta_c}\xspace measurement~\cite{Aaij:2019gsn} is however not yet competitive, as it is dominated by large statistical uncertainties. The HL-LHC will clearly be able to significantly increase the precision of this measurement. All such inclusive quarkonium data can improve our knowledge of the proton PDFs in the low-$x$ ($x<10^{-5}$) and low-scale (a few GeV$^2$) regime, which is otherwise expected to be hard to constrain.
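Schematically, for an observable dominated by a single channel such as \ensuremath{\eta_c}\xspace production, the simplest such observable is the ratio of differential cross sections measured at two centre-of-mass energies,
\begin{equation}
R(y,\ensuremath{P_T}\xspace)=\left.\frac{d\sigma}{dy\,d\ensuremath{P_T}\xspace}\right|_{\sqrt{s_1}}\Big/\left.\frac{d\sigma}{dy\,d\ensuremath{P_T}\xspace}\right|_{\sqrt{s_2}}\,,
\end{equation}
in which the overall LDME dependence drops out and the scale uncertainties largely cancel, while the sensitivity to the gluon density remains, since at LO the probed momentum fractions are $x_{1,2}\simeq (m_T/\ensuremath{\sqrt{s}})\,e^{\pm y}$, with $m_T=\sqrt{M^2+\ensuremath{P_T}\xspace^2}$ the quarkonium transverse mass.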
In addition to the collider mode, the FT-LHC~\cite{Hadjidakis:2018ifr} will allow one to probe the high-$x$ range of the proton PDFs. In such a configuration, the probed $x$ range of the parton (gluon, charm and valence-quark) densities can reach $x\simeq 0.5$, if not beyond, using various final states, including open heavy-flavour hadrons and quarkonia.
On the experimental side, the collaborations should provide all the information needed to include their data, with the appropriate uncertainties, in PDF fits. In particular, information~\cite{Abdallah:2020pec} on bin-by-bin correlations of systematic uncertainties, in the form of covariance error matrices for differential distributions, as well as on correlations between different distributions, is essential to perform a fully meaningful statistical analysis and an extraction of best-fit PDFs accompanied by reliable uncertainty estimates. Moreover, quarkonium data alone are clearly not sufficient to perform PDF fits at all values of $x$ and scale. It is therefore best to use them in conjunction with all the other data traditionally used in global PDF fits, in particular those on inclusive and semi-inclusive Deep-Inelastic Scattering (DIS), as a complementary tool to extend the ($x$, scale) coverage of the latter.
The last considerations of this section concern recent theory progress. It has been known for a long time that, for some choices of parameterisation in which the gluon PDF is rather flat, the open-charm and charmonium \ensuremath{P_T}\xspace-integrated cross sections at low scales can become negative~\cite{Schuler:1994hy,Mangano:1996kg, Feng:2015cba,Accardi:2016ndt,Ozcelik:2019qze} at high \ensuremath{\sqrt{s}}. Such pathological behaviours appear at (N)NLO for open charm and at NLO for charmonium. Hence, imposing the positivity of these cross sections, assuming that the missing higher-order QCD corrections do not completely change the picture, would provide additional constraints on the gluon PDF. This would also be in line with a recent exploratory study on the positivity of the $\overline{{\rm MS}}$ PDF itself~\cite{Candido:2020yat}.
It was, however, recently found in~\cite{Lansberg:2020ejc} that the unphysical behaviour of the $\eta_c$ cross section at NLO for increasing \ensuremath{\sqrt{s}}\ (though not necessarily for extremely large \ensuremath{\sqrt{s}}) can be efficiently tamed by a specific factorisation-scale choice. The resulting cross section then exhibits a reduced renormalisation-scale uncertainty while remaining very sensitive to the gluon PDF at low scales. If a similar scale choice can be used for $J/\psi$, for which numerous \ensuremath{P_T}\xspace-integrated cross sections have been measured, it could certainly be exploited in the future to fit the gluon PDF at NLO.
\section{Transverse-Momentum-Dependent effects in inclusive reactions\protect\footnote{Section editors: Miguel G. Echevarria, Vato Kartvelishvili.
}}
\label{sec:spin}
The multi-dimensional structure in momentum space of a nucleon has recently attracted much interest, both as a possible source of many observable effects in hadronic interactions and, more generally, as a way of improving our understanding of QCD. This structure can be parameterised in terms of several objects, which encode different correlations between the momentum and spin of a parton and its parent nucleon. In simple terms, these are three-dimensional generalisations of the usual one-dimensional, collinear PDFs or FFs, but with a dependence on the parton transverse momentum, \ensuremath{k_T}\xspace. How to introduce and define such generalisations is a subject of intense investigation and debate within the community~\cite{Angeles-Martinez:2015sea}.
What is at stake here is a crucial step in our understanding of the nucleon 3D structure (in momentum space), and hence in our understanding of colour confinement in QCD.
In what follows, we will discuss four approaches that address transverse-momentum-dependent and/or spin effects and are all relevant for quarkonium studies at the HL-LHC:
\begin{itemize}
\item The TMD factorisation framework, applicable both in unpolarised and polarised collisions, in which the TMDs have a concrete definition in terms of gauge-invariant operators and properties such as QCD evolution;
\item The High-Energy (HE) or \ensuremath{k_T}\xspace factorisation framework, designed to account for HE effects (large $\sqrt{s}$).
Besides the transverse momentum of the initial partons, \ensuremath{k_T}\xspace, this formalism also considers their virtualities, which naturally become relevant in this limit;
\item The collinear twist-3 (CT3) factorisation framework, which is an extension of collinear factorisation to treat polarised parton/nucleon collisions, and which matches TMD factorisation in the large-\ensuremath{k_T}\xspace limit;
\item The Generalised Parton Model (GPM), a phenomenological model meant to extend collinear factorisation with functions accounting for the Sivers effect both in the quark and gluon sectors.
\end{itemize}
It should be clear to the reader that these approaches are not meant to be considered on an equal footing:
the GPM computations are restricted to polarised collisions and, more importantly, are essentially descriptive.
Yet, they can be very useful to check various hypotheses about the underlying phenomena generating the spin asymmetries.
CT3 predictions go further, with a deeper connection to the properties of QCD, but are based on collinear considerations in which the transverse-momentum effects are integrated over in higher-twist correlators. HE factorisation, so far only applied to unpolarised collisions, is primarily designed to treat new effects at large \ensuremath{\sqrt{s}}. As such, care should be taken when using its predictions when \ensuremath{\sqrt{s}}\ is not very large, in particular for systems or conditions where TMD factorisation is a priori not applicable.
TMD factorisation, finally, while probably the most comprehensive in terms of the phenomena generated by the \ensuremath{k_T}\xspace\ of the partons, is also the most restrictive in terms of applicability, owing to its ambition to be the most rigorous.
The purpose of this section is to outline the recent progress regarding quarkonium production in processes where the transverse-momentum-dependent gluon effects enter, and how the HL-LHC can contribute to this emerging research domain.
The TMD factorisation framework is briefly introduced in Section~\ref{sec:TMD_factorisation}, followed by a discussion in Section~\ref{sec:TMDchallenges} on several specificities and open issues related to the treatment of quarkonium production, while HE factorisation is treated in Sections~\ref{sec:HEfactorisation} and~\ref{sec:HEchallenges}.
Section~\ref{sec:unpolarised} focuses on various quarkonium-production processes in unpolarised \ensuremath{pp}\xspace\ collisions within the TMD factorisation framework, with particular attention to the unpolarised and the linearly-polarised gluon TMDs, $f_1^g$ and $h_1^{\perp g}$.
In Section~\ref{sec:beyond_TMD}, we address the complex issue of factorisation-breaking effects or, more generally, effects beyond TMD and HE factorisations, and discuss some easily measurable processes where they can be studied.
Finally, in Section~\ref{sec:polarised}, collisions with polarised nucleons are considered; these become measurable at the HL-LHC with a polarised target in the FT mode, allowing one to measure STSAs in quarkonium production to probe {e.g.}\xspace the gluon Sivers effect accounted for by the TMD and CT3 factorisations and the GPM.
\subsection{TMD factorisation in the gluon sector}
\label{sec:TMD_factorisation}
In the last few years, the field of TMDs has taken a large leap forward.
Both the theoretical framework~\cite{GarciaEchevarria:2011rb,Echevarria:2012pw,Echevarria:2012js,Echevarria:2014rua,Collins:2011zzd,Echevarria:2015uaa,Scimemi:2018xaf} and the phenomenological analyses (see {e.g.}\xspace~\cite{DAlesio:2014mrz,Echevarria:2014xaa,Bacchetta:2015ora,Bacchetta:2017gcc,Anselmino:2016uie,Scimemi:2017etj,Bertone:2019nxa,Echevarria:2020hpy,Bury:2020vhj}) have developed, including new, higher-order perturbative calculations (see {e.g.}\xspace~\cite{Gutierrez-Reyes:2019rug,Gutierrez-Reyes:2018iod,Vladimirov:2017ksc,Echevarria:2016scs,Echevarria:2015byo,Echevarria:2015usa,Bacchetta:2018dcq}).
This progress, however, has been made mainly in the quark sector, with the gluon sector lagging behind due to the difficulty in cleanly probing gluons in high-energy processes.
Gluon TMDs at the leading twist, first analysed and classified in~\cite{Mulders:2000sh}, are shown in \ct{tab:gluon_TMDs}, in terms of both the polarisation of the gluon itself and of its parent hadron.
The distribution of unpolarised gluons inside an unpolarised hadron, $f_1^g$, and of circularly polarised gluons inside a longitudinally polarised hadron, $g_1^g$, correspond ({i.e.}\xspace\ are matched at large \ensuremath{k_T}\xspace through an operator product expansion) to the well-known collinear unpolarised and helicity gluon PDFs respectively.
The distribution of linearly-polarised gluons in an unpolarised hadron, $h_1^{\perp g}$, is particularly interesting, since it gives rise to spin effects even in collisions of unpolarised hadrons, like at the LHC.
The Sivers function, $f_{1T}^{\perp g}$, which encodes the distribution of unpolarised gluons in a transversely-polarised nucleon, has a very important role in the description of STSAs.
There is a classification analogous to \ct{tab:gluon_TMDs} for quark TMDs, and also for both quark and gluon TMD FFs, which are as relevant as the TMD distributions for processes sensitive to the transverse dynamics of partons in the fragmentation process.
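The matching mentioned above can be sketched as follows (structure only; the $C_{g/j}$ denote perturbative matching coefficients whose explicit expressions are not needed here):
\begin{equation}
f_1^g(x,k_T;\mu)\;\simeq\;\frac{1}{k_T^2}\sum_j\int_x^1\frac{d\hat{x}}{\hat{x}}\,C_{g/j}\!\left(\frac{x}{\hat{x}},\ensuremath{\alpha_{\rm s}}(\mu)\right)f_{j/p}(\hat{x};\mu)\,,\qquad k_T\gg\Lambda_{\rm QCD}\,,
\end{equation}
and similarly for $g_1^g$ in terms of the collinear helicity PDFs.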
{
\renewcommand{\arraystretch}{1.7}
\begin{table}[hbt!]
\centering
\hspace{1cm} gluon polarisation \\ \vspace{0.1cm}
\rotatebox{90}{\hspace{-1.5cm} nucleon polarisation} \hspace{0.1cm}
\begin{tabular}[c]{|m{0.5cm}|c|c|c|}
\hline
& $U$ & circular & linear \\
\hline
$U$ & $f_{1}^{g}$ & & \textcolor{blue}{$h_{1}^{\perp g}$} \\
\hline
$L$ & & $g_{1}^{g}$ & \textcolor{red}{$h_{1L}^{\perp g}$} \\
\hline
$T$ & \textcolor{red}{$f_{1T}^{\perp g}$} & \textcolor{blue}{$g_{1T}^{g}$} & \textcolor{red}{$h_{1}^{g}$}, \textcolor{red}{$h_{1T}^{\perp g}$} \\
\hline
\end{tabular}
\caption{Gluon TMD PDFs at twist 2.
$U$, $L$, $T$ describe unpolarised, longitudinally polarised and transversely-polarised nucleons.
$U$, `circular', `linear' stand for unpolarised, circularly polarised and linearly-polarised gluons.
Functions in blue ($h_{1}^{\perp g}$, $g_{1T}^{g}$) are $T$-even.
Functions in black ($f_{1}^{g}$, $g_{1}^{g}$) are $T$-even and survive integration over the parton \ensuremath{k_T}\xspace.
Functions in red ($h_{1L}^{\perp g}$, $f_{1T}^{\perp g}$, $h_{1}^{g}$, $h_{1T}^{\perp g}$) are $T$-odd.}
\label{tab:gluon_TMDs}
\end{table}
}
%
As is the case for quark TMDs, gluon TMDs contain information on the initial- and/or final-state QCD interactions of the incoming hadron. Different types of gluon TMDs exist, distinguished by the precise structure of the gauge links in their operator definition, which depends on the hard process under consideration: the two most common are the so-called Weizs\"acker-Williams (WW) and dipole (DP) types~\cite{Dominguez:2011wm,Buffing:2013kca,Boer:2016bfj}.
The WW type involves either initial- or final-state interactions, while the DP type involves both, so different processes probe different types of gluon TMDs.
Incidentally, the WW-type gluon TMD is the one which, for a suitable gauge choice, can be written in terms of the gluon number operator acting on the hadron Fock state, allowing its physical interpretation as a number density.
Exploratory analyses of gluon TMD distributions~\cite{Lu:2016vqu,Mulders:2000sh,Pereira-Resina-Rodrigues:2001eda} were carried out in the so-called \emph{spectator-model} approach.
Originally developed for studies in the quark-TMD sector~\cite{Bacchetta:2008af,Bacchetta:2010si,Gamberg:2005ip,Gamberg:2007wm,Jakob:1997wg,Meissner:2007rx}, this approach relies on the assumption that the struck nucleon emits a gluon, after which the remnants are treated as a single spectator particle, taken on-shell.
The power of the spectator-model framework lies in the possibility to concurrently generate all TMD densities at twist-2~(\ct{tab:gluon_TMDs}).
In this context, a novel parameterisation for $T$-even distributions has been recently proposed in~\cite{Bacchetta:2020vty}.
At variance with previous studies, the spectator mass is allowed to take a continuous range of values weighted by a flexible spectral function, which allows one to effectively reproduce both the small- and the moderate-$x$ behaviour of the TMDs.
Furthermore, it embodies the effect of $q\bar{q}$ contributions, which are generally neglected by standard spectator models.
These results on the 3D tomography of (un)polarised gluons inside (un)polarised nucleons are part of the effort to gain a deeper understanding of observables sensitive to gluon TMD dynamics.
So far, quarkonium-production observables are one of the most promising tools at our disposal to access gluon TMDs, since they are directly sensitive to gluons.
These processes are quite challenging from the theoretical point of view, because they involve several momentum scales and require dealing with different aspects of QCD, from the formation of heavy-quark bound states to soft-gluon resummation. For this reason, they represent an excellent testing ground for our knowledge of QCD.
Indeed, the interest has grown lately, with a number of LO analyses assuming TMD factorisation (see {e.g.}\xspace\ \cite{Godbole:2012bx,Boer:2012bt,Godbole:2013bca,Godbole:2014tha,Zhang:2014vmh,Zhang:2015yba,Mukherjee:2015smo,Mukherjee:2016qxa,Mukherjee:2016cjw,Boer:2016bfj,Lansberg:2017tlc,Godbole:2017syo,DAlesio:2017rzj,Rajesh:2018qks,Bacchetta:2018ivt,Lansberg:2017dzg,Kishore:2018ugo,Scarpa:2019fol,Boer:2020bbd,DAlesio:2019gnu})
and others that perform NLO calculations (see {e.g.}\xspace\ \cite{Ma:2012hh,Ma:2014oha,Ma:2015vpt}).
Experimental information on gluon TMDs is, however, very limited. The first attempt~\cite{Lansberg:2017dzg} to fit the unpolarised gluon TMD PDF, $f^g_1$, was only made in 2017, using data on $J/\psi$-pair production. So far, nothing is known about $h^{\perp g}_1$.
The possible extension of this first fit with forthcoming LHC data, as well as other quarkonium channels of interest, will also be discussed.
\subsection{TMD factorisation in $\pazocal{Q}$ production: challenges and opportunities}
\label{sec:TMDchallenges}
As discussed in Section~\ref{sec:pp}, besides NRQCD, other approaches have been proposed to describe quarkonium production, like the CSM~\cite{Chang:1979nn,Berger:1980ni,Baier:1981uk} or the CEM~\cite{Halzen:1977rs,Halzen:1977im,Fritzsch:1977ay}, and their variations
and extensions~\cite{Ma:2016exq,Haberzettl:2007kj,Lansberg:2005pc,Khoze:2004eu}.
All these frameworks have been routinely used along with collinear factorisation.
Whereas a factorisation proof exists for NRQCD and collinear factorisation, it does not exist at present for the other approaches.
Their combination with TMD factorisation is then potentially even more delicate.
This is why, in this section, we only consider the combination of the NRQCD and TMD factorisations; even then, some adjustments are needed to properly deal with the low-\ensuremath{P_T}\xspace region.
If one wishes to predict \ensuremath{P_T}\xspace spectra, NRQCD factorisation is only applicable when the quarkonium is produced with a relatively large $\ensuremath{P_T}\xspace \gtrsim 2 m_Q$. Intuitively, in this kinematic regime, emissions of soft and ultra-soft gluons from the heavy-quark pair cannot alter the large \ensuremath{P_T}\xspace\ of the quarkonium.
Ignoring these soft emissions, the quarkonium \ensuremath{P_T}\xspace is then determined by the short-distance reactions.
At the same time, the infrared (IR) divergences that remain from the hard scattering are absorbed into the non-perturbative LDMEs and collinear PDFs.
However, when quarkonia are produced with small \ensuremath{P_T}\xspace, large double logarithms arise and need to be resummed.
Indeed, the observed (low) $\ensuremath{P_T}\xspace$ distribution of $\ensuremath{\Upsilon}\xspace$ production at the Tevatron and LHC was found to be consistent with the prediction from a TMD factorisation Ansatz with resummation of the large double logarithms~\cite{Berger:2004cc,Sun:2012vc,Qiu:2017xbx} (even if a simple NLO treatment seems to provide fairly good results as well~\cite{Artoisenet:2008fc}).
In any case, the key point here is that, at low \ensuremath{P_T}\xspace, the soft-gluon factorisation assumption must be abandoned.
In~\cite{Beneke:1997qw, Fleming:2003gt, Fleming:2006cd}, quarkonium production in $\ell p$ collisions and $e^+e^-$ annihilation was studied in the endpoint region, which is sensitive to soft radiation, exactly where NRQCD factorisation breaks down.
It was found that promoting the LDMEs into quarkonium shape functions is necessary to accurately account for soft radiation from the heavy-quark pair.
Similarly, for the TMD spectrum of quarkonia, TMD-shape functions are needed to rigorously derive the relevant factorisation theorems at low \ensuremath{P_T}\xspace.
The degrees of freedom for studying such low-\ensuremath{P_T}\xspace processes were introduced in the context of soft-collinear effective theory in~\cite{Echevarria:2019ynx, Fleming:2019pzj}, where it was shown that the cross section for quarkonium production at low \ensuremath{P_T}\xspace involves a new kind of non-perturbative object besides the TMDs, which can be seen as the 3D extension of the well-known NRQCD LDMEs.
These TMD-shape functions, like the LDMEs, scale with the relative velocity, $v$, of the heavy quark-antiquark pair in the quarkonium rest frame.
Therefore, the factorisation turns out to be a simultaneous expansion in the relative quark-pair velocity $v$ and $\ensuremath{P_T}\xspace/(2m_Q)$.
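Schematically, TMD factorisation for the production of a quarkonium with mass $M$ and rapidity $Y$ in hadronic collisions would then take the form (a sketch; see~\cite{Echevarria:2019ynx, Fleming:2019pzj} for the precise definitions):
\begin{equation}
\frac{d\sigma}{dM\, dY\, d\ensuremath{q_T}\xspace} =
\int\frac{d^2b_\perp}{(2\pi)^2}\, e^{-iq_\perp\cdot b_\perp}
\sum_{n\in \{^1S_0^{[1]},\ldots \}}
H^{(n)}_{ij}(M,Y,s;\mu)\,
B_{i/p}(x_1,b_T;\mu,\eta)\,
B_{j/p}(x_2,b_T;\mu,\eta)\,
S^{(n)}_{\pazocal{Q}}(b_T;\mu,\eta)\,,
\end{equation}
where the $B_{i/p}$ are the unsubtracted TMD PDFs, the $S^{(n)}_{\pazocal{Q}}$ are the quarkonium TMD-shape functions, $\mu$ and $\eta$ are the factorisation/resummation and rapidity scales, $x_{1,2}$ the longitudinal momentum fractions and $b_T=|b_\perp|$ (analogously for $q_T$); the sum runs over the colour and angular-momentum configurations $n$.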
Currently, a few open questions remain with regard to this factorisation:
\begin{itemize}
\item
The double expansion in $\ensuremath{P_T}\xspace/(2m_Q)$ and the heavy-quark pair relative velocity, $v$, allows for a priori sub-leading contributions in one expansion parameter, which might be enhanced in the other.
Thus the reorganisation of terms in the cross section becomes non-trivial, and a potential contribution of higher-twist TMDs and TMD-shape functions cannot be discarded.
\item
This approach involves a summation over the various colour and angular-momentum configurations that contribute to the formation of the bound state. This might spoil the factorisation in \ensuremath{pp}\xspace\ collisions when CO states are produced.
This is due to the so-called Glauber or Coulomb exchanges, which are a subset of soft gluons that can entangle initial and final states and thus prevent the factorisation.
At the moment, such a factorisation has only been established for $\eta_Q$ production~\cite{Echevarria:2019ynx}, where the CS state dominates the production process, following the NRQCD velocity-scaling rules.
It might be extended to other processes dominated by the CS channel, like di-quarkonium or associated production.
This represents an opportunity to study effects in QCD that connect long and short distance physics.
\item
In addition to these issues specific to quarkonium production, one should keep in mind that, in hadronic collisions, the final state must not explicitly involve coloured objects for TMD factorisation to apply~\cite{Collins:2007nk,Collins:2007jp,Rogers:2010dm,Rogers:2013zha,Gaunt:2014ska,Schwartz:2018obd}.
Thus, it is not supposed to hold for $\psi$ or $\Upsilon$ hadroproduction in the CSM
where the quarkonium is necessarily produced along with a hard gluon.
On the one hand, if this hard gluon is not observed, the connection between the quarkonium \ensuremath{P_T}\xspace and the initial-parton \ensuremath{k_T}\xspace is lost.
On the other hand, if, instead, one measures the associated production of $\psi$ or $\Upsilon$ with a jet (or a hadron), the observed final state is coloured, and the colour flow arising from the reaction becomes so entangled that it prevents one from deriving a factorised form for the cross section
(see however~\cite{Boer:2014lka}).
In other words, and as already mentioned in the first point, Glauber exchanges play a role and spoil the factorisation.
\end{itemize}
\subsection{The HE factorisation framework}
\label{sec:HEfactorisation}
The aim of High-Energy (HE) factorisation --also called $k_T$ factorisation or the Parton Reggeisation Approach (PRA)-- is to go beyond collinear factorisation by resumming corrections to the hard-scattering coefficient which are enhanced by powers of $\log(1/z_{\pm})$ when $z_{\pm}$ gets small.
As $z_{\pm}=q_{\pm}/k_{\pm}$, with $q_{\pm}$ the light-cone components of the momentum of the studied final state and $k_{\pm}$ those of the initial parton, such corrections indeed get large at high \ensuremath{\sqrt{s}}.
The general HE factorisation formula for the inclusive cross section $d\sigma$ of a hard process in $pp$ collisions~\cite{Gribov:1984tu,Collins:1991ty,Catani:1994sq} can be outlined as a double convolution (in the momentum fraction $x_i$ and in the transverse momentum $k_{Ti}$ of both incoming partons) of a partonic cross section $d\hat{\sigma}_{ij}$ with two unintegrated PDFs (UPDFs) or gluon densities (UGDs).
When one refers to UPDFs, one considers that they are obtained from the convolution of an evolution factor ${\cal C}_{ij}$, which performs the resummation and satisfies some version of the BFKL equation~\cite{Fadin:1975cb,Kuraev:1976ge,Kuraev:1977fs,Balitsky:1978ic}, with a collinear PDF $f_{j/p}$.
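Under these assumptions, the double convolution can be sketched as (structure only, with $\Phi_{i/p}$ denoting the unintegrated densities):
\begin{equation}
d\sigma=\sum_{i,j}\int dx_1\,dx_2\int d^2k_{T1}\,d^2k_{T2}\;
\Phi_{i/p}(x_1,k_{T1};\mu_F)\,\Phi_{j/p}(x_2,k_{T2};\mu_F)\,
d\hat{\sigma}_{ij}(x_1,k_{T1};x_2,k_{T2})\,,
\end{equation}
where, in the UPDF case described above, $\Phi_{i/p}=\sum_{j}{\cal C}_{ij}\otimes f_{j/p}$.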
The complete cancellation of the $\mu_F$ dependence will happen only if the collinear PDFs with small-$x$-resummed DGLAP evolution are used, see {e.g.}\xspace~\cite{Altarelli:1999vw,Ball:2017otu}.
This however does not prevent one from using the usual PDFs if the observable under consideration shows more sensitivity to the transverse momenta $k_{T1,2}$ than to the $x$ dependence of the UPDF.
On the contrary, as far as the concept of UGD is concerned, no collinear input is implied. These are written as a convolution of the BFKL gluon Green's function and a non-perturbative proton \emph{impact factor}, which is meant to be determined from data. They have been the subject of intense studies since the early days, both in exclusive and inclusive channels. Originally employed in the study of DIS structure functions~\cite{Hentschinski:2012kr,Hentschinski:2013id}, the UGD has been studied through exclusive diffractive vector-meson leptoproduction~\cite{Anikin:2009bf,Anikin:2011sa,Besse:2013muy,Bolognino:2018rhb,Bolognino:2018mlw,Bolognino:2019bko,Bolognino:2019pba,Celiberto:2019slj} measured at HERA~\cite{Aaron:2009xp,Adloff:2002tb}, single--bottom-quark production~\cite{Chachamis:2015ona} at the LHC, inclusive forward Drell--Yan di-lepton production~\cite{Motyka:2014lya,Brzeminski:2016lwh,Motyka:2016lta,Celiberto:2018muu} measured by LHCb~\cite{LHCb:2012fja}, and exclusive $\psi$ and $\ensuremath{\Upsilon}\xspace$ photoproduction~\cite{Bautista:2016xnp,Garcia:2019tne,Hentschinski:2020yfm}.
Recent analyses on the diffractive electroproduction of $\rho$ mesons~\cite{Bolognino:2018rhb,Celiberto:2019slj} have corroborated the underlying assumption~\cite{Ivanov:1998gk} that the small-size dipole-scattering mechanism is at work, thus validating the use of the UGD formalism, which holds when the observable \ensuremath{P_T}\xspace is large.
In contrast to TMD factorisation, HE factorisation has the advantage of not being limited to the low-$\ensuremath{q_T}\xspace$ region (compared to the relevant hard scale of the process).
Indeed, large values of $\ensuremath{P_T}\xspace$ also contribute to the region $z_{\pm}\ll 1$ if additional radiation is highly separated in rapidity from the observed system.
Radiative corrections of this kind become important with increasing $\ensuremath{\sqrt{s}}$, since more phase space for such emissions opens up.
This difference in their range of applicability is often a source of confusion and debate, especially because sometimes the acronym TMD is also used outside the scope of TMD factorisation.
In our discussion, we will always refer to such objects by their explicit names, such as $f^g_1$ or $h^{\perp g}_1$.
However, HE factorisation has its own theoretical shortcomings compared to TMD factorisation.
In general, not all corrections beyond the Next-to-Leading Logarithmic (NLL) approximation\footnote{The N${}^{k}$LL approximation in the context of HEF is defined as the resummation of terms $\sim \ensuremath{\alpha_{\rm s}}^{n+k}\log^n(1/z_{\pm})$.} can be taken into account by the standard HEF formulation.
This can be traced back to the fact that, even at the leading power in the HE limit, $z_{\pm}\ll 1$, QCD amplitudes only admit a factorisation in terms of matrix elements of multiple light-like Wilson lines with a complicated colour structure or, equivalently, in terms of multi-Reggeon exchanges in the $\hat{t}$-channel (see {e.g.}\xspace~\cite{Caron-Huot:2013fea} for a review).
In order to take all such contributions arising from multi-Reggeon exchanges into account, results from the CGC formalism can be incorporated~\cite{Altinoluk:2019fui} in a factorised formula inspired by TMD factorisation.
However, in the phenomenology at the leading twist, it is usually assumed that the largest N${}^{k\geq 1}$LL-corrections can still be represented by an effective UPDF that takes into account both DGLAP and BFKL effects.
Numerous recipes to obtain such UPDFs can be found in the literature, such as the Kimber--Martin--Ryskin--Watt (KMRW) UPDF~\cite{Kimber:2001sc,Watt:2003mx,Watt:2003vf}, the Collins--Ellis--Bluemlein UPDF~\cite{Collins:1991ty,Blumlein:1995eu}, the Parton-Branching method~\cite{Martinez:2018jxt} and many more.
The coefficient function $d\hat{\sigma}$ at LO in $\ensuremath{\alpha_{\rm s}}$ and at leading-power in $z_{\pm}$ can be understood as a partonic cross section involving off-shell (Reggeised) initial-state partons with virtualities $k_{1,2}^2=-{\bf k}_{T1,2}^2$.
For simple processes, such as $g^\star(k_1)+g^\star(k_2) \to Q\bar Q$, it can be computed by usual QCD Feynman Rules with the following replacement for the polarisation vectors of initial-state gluons: $\varepsilon^\mu(k_{1,2})\to k^\mu_{T1,2}/|{\bf k}_{T1,2}|$.
However, there is no analogous simple rule for off-shell quarks in the initial state.
In addition, for more general QCD processes, such a coefficient function will not be gauge invariant.
The coefficient function for any subprocess can be computed to any order in $\ensuremath{\alpha_{\rm s}}$ using the effective field theory (EFT) for multi-Regge processes in QCD~\cite{Lipatov95, LipatovVyazovsky, AntonovFRs} and its gauge invariance is guaranteed by construction within the EFT.
The formalism of~\cite{vanHameren:2012if,vanHameren:2013csa,vanHameren:2016kkz} is equivalent to the EFT at tree level.
Hereafter, we will refer to all these approaches, such as \ensuremath{k_T}\xspace factorisation and the Parton Reggeisation Approach (PRA)~\cite{Karpishkov:2019vyt}, as HE factorisation.
\subsection{High-Energy factorisation in $\pazocal{Q}$ production: challenges and opportunities}
\label{sec:HEchallenges}
The HE factorisation coefficient functions for inclusive heavy-quarkonium production in NRQCD at LO were first computed in~\cite{Hagler:2000dd,Hagler:2000eu,Kniehl:2006sk,Kniehl:2006vm} and the relevance of the gluon off-shellness in $\chi_{c1}$ production to lifting the Landau--Yang suppression was first highlighted in~\cite{Hagler:2000dd}.
LDMEs from recent fits to hadroproduction data~\cite{Saleev:2012hi, Nefedov:2013qya,Baranov:2019lhm,Karpishkov:2020wwe} are comparable to those obtained at NLO in collinear factorisation, especially for the LDME of the ${}^3S_1^{[8]}$ state, while the LDMEs of the ${}^3P_J^{[8]}$ and ${}^1S_0^{[8]}$ states turn out to have the same order of magnitude as in collinear factorisation, but often with an opposite sign.
This is because LO HE factorisation calculations do not take into account NLO corrections due to final-state radiation effects.
Recently, HE factorisation has been used together with the formalism of CS Light-Front Wave Functions (LFWFs)~\cite{Babiarz:2020jkh,Babiarz:2019mag} and the Improved CEM (ICEM)~\cite{Cheung:2018upe, Cheung:2018tvq, Maciula:2018bex} to describe the bound-state formation.
The CS LFWF calculation shows an interesting discrepancy with the strict non-relativistic approximation (see {e.g.}\xspace Figs.~10 and~11 of~\cite{Babiarz:2020jkh}), which points towards potentially large relativistic corrections.
The ICEM calculation somewhat counter-intuitively predicts mostly unpolarised production of charmonia~\cite{Cheung:2018upe} and bottomonia~\cite{Cheung:2018tvq} at high $\ensuremath{P_T}\xspace$, unlike {e.g.}\xspace the NRQCD-factorisation-based predictions of~\cite{Kniehl:2016sap}.
This disagreement uncovers some interesting aspects of the physics of heavy-quarkonium polarisation in the ICEM and its interplay with HE factorisation that deserve further study.
All the calculations mentioned above have been performed at LO.
So far, no NLO quarkonium studies exist in HE factorisation, where such computations are far more complex than in collinear or TMD factorisation.
However, such NLO computations would be in some respects equivalent to NNLO accuracy in collinear factorisation, which is in fact not yet available for heavy-quarkonium production in any of the aforementioned production models.
With such NLO computations at our disposal, it will also become possible to quantitatively characterise the region of applicability of HE factorisation in quarkonium production, where NLO corrections would be under control.
As regards advances towards first NLO computations, the reader is referred to~\cite{Nefedov:2019mrg} for progress in the computation of loop corrections; the recent progress towards the automation of the computation of gauge-invariant HE-factorisation amplitudes reported in~\cite{vanHameren:2017hxx} will certainly be beneficial for the completion of the real-emission computations.
Exploratory NLO calculations have recently been successfully performed~\cite{Nefedov:2020ecb,Hentschinski:2020tbi} and these show that one can overcome the problem of large unphysical NLO corrections found, for instance, in BFKL-based computations.
All these developments make NLO HE factorisation calculations possible in the near future, with the aim of describing more accurately a variety of observables related to single and associated production of heavy quarkonia in different quarkonium-production models.
Confronting the results of these calculations with HL-LHC data, which will briefly be discussed in Section~\ref{sec:beyond_TMD}, will allow one to learn more about, on the one hand, the quarkonium-production mechanisms and, on the other, the relevance of HE phenomena in these reactions.
\subsection{Unpolarised TMD studies with $\pazocal{Q}$ at the HL-LHC }
\label{sec:unpolarised}
As already mentioned, inside an unpolarised proton one can define two independent gluon TMD densities: the unpolarised $f_{1}^{g}$ and the linearly-polarised $h_{1}^{\perp g}$ distributions~\cite{Mulders:2000sh,Meissner:2007rx,Boer:2016xqr}.
Being time-reversal even ($T$-even), these TMDs can be nonzero even in (sub)processes where neither initial-state nor final-state interactions are present.
However, like all other TMDs, they are affected by such interactions, which can render them process-dependent and even hamper factorisation.
The distribution of linearly-polarised gluons has attracted much attention in the last few years.
It corresponds to an interference between $+1$ and $-1$ gluon helicity states, which can be different from zero if the gluon \ensuremath{k_T}\xspace is taken into account.
If sizeable, this TMD can affect the \ensuremath{P_T}\xspace distributions of scalar and pseudoscalar particles produced in the final state, such as, for instance, $H^0$ bosons or $C$-even charmonium and bottomonium states.
Interestingly, it turns out that at small $x$, the linearly-polarised distribution may reach its maximally allowed size, bounded by the unpolarised-gluon density~\cite{Mulders:2000sh}.
Moreover, linearly-polarised gluons can also be generated perturbatively from unpolarised quarks and gluons inside the proton~\cite{Nadolsky:2007ba,Catani:2010pd}.
This determines the large-\ensuremath{k_T}\xspace tail of the distribution~\cite{Sun:2011iw}.
From the experimental point of view, in contrast to quark TMDs, almost nothing is known about gluon TMDs, due to the lack of processes, such as semi-inclusive DIS (SIDIS) and Drell--Yan pair production, that directly probe them.
The first extraction of the unpolarised gluon TMD, assuming a Gaussian shape, was recently performed based on the LHCb measurement of the \ensuremath{P_T}\xspace spectra of $\ensuremath{J/\psi}\xspace$ pairs~\cite{Lansberg:2017dzg}.
Many proposals have been put forward to access TMDs in \ensuremath{pp}\xspace\ collisions, mainly by looking at azimuthal asymmetries and \ensuremath{P_T}\xspace distributions for quarkonium production.
The quarkonium processes for which one can hope TMD factorisation to hold -- with NRQCD properly modified -- are
\begin{itemize}
\item $p\,p\to \eta_{c,b}+ X$~\cite{Boer:2012bt},
\item $p\,p\to \ensuremath{J/\psi}\xspace+ \gamma+ X$ and $p\,p\to \Upsilon+ \gamma+ X$~\cite{Dunnen:2014eta},
\item $p\,p\to \ensuremath{J/\psi}\xspace + \ell\, \bar \ell + X$ and $p\,p\to \ensuremath{\Upsilon}\xspace+ \ell\, \bar \ell + X$~\cite{Lansberg:2017tlc},
\item $p\,p\to \eta_c + \eta_c+ X$~\cite{Zhang:2014vmh},
\item $p\,p\to \ensuremath{J/\psi}\xspace+ \ensuremath{J/\psi}\xspace+ X$ and $p\,p\to \ensuremath{\Upsilon}\xspace +\ensuremath{\Upsilon}\xspace+ X$~\cite{Lansberg:2017dzg,Scarpa:2019fol},
\end{itemize}
at LO in $v$, thus only considering the CS contributions.
The reason to focus on these CS processes is to avoid the presence of final-state interactions which, together with the initial-state interactions present in \ensuremath{pp}\xspace\ collisions, would lead to the breakdown of TMD factorisation~\cite{Collins:2007nk,Collins:2007jp,Rogers:2010dm,Rogers:2013zha,Gaunt:2014ska,Schwartz:2018obd}.
The case of $p\,p\to \chi_{c0,b0}+ X$ or $p\,p\to \chi_{c2,b2}+ X$~\cite{Boer:2012bt} is particular since the CS and CO contributions appear at the same order in $v$, which is a likely source of complication.
As such, we will come back to it in Section~\ref{sec:beyond_TMD} when discussing considerations beyond the strict TMD factorisation.
Among these quarkonium reactions, we should make a distinction between single and associated production.
Whereas the former is probably simpler to analyse, it does not allow the scale of the process to be tuned by increasing the invariant mass of the produced system. Consequently, there is not much room for TMD factorisation to apply, as one is forced to remain in the region $\ensuremath{P_T}\xspace \lesssim 2 m_Q$.
In addition, single-quarkonium production only provides an indirect way to probe $h_{1}^{\perp g}$ through \ensuremath{P_T}\xspace\ modulations, as it does not offer the possibility of accessing the azimuthal asymmetries generated by the linearly-polarised gluons.
Finally, during the HL-LHC period, these single-quarkonium cross sections, though much larger than those for associated production, will be extremely complicated to measure with the ATLAS and CMS detectors in the applicability region of TMD factorisation. In contrast, the increased luminosity available at the HL-LHC will make the associated-production channels more accessible.
However, single low-\ensuremath{P_T}\xspace\ quarkonia can probably be studied in the much less hostile environment of FT-LHC by the LHCb and ALICE detectors. All these aspects will be addressed in the following three subsections.
At this stage, it is important to note that the unpolarised and linearly-polarised gluon distributions to be extracted from the above-mentioned reactions, which correspond to the WW distributions in the small-$x$ limit, are expected~\cite{Boer:2016fqd} to be the same as those entering (open and closed) heavy-quark-pair production in $ep$ collisions.
This represents an important test of the universality of the gluon TMDs inside unpolarised protons, which can only be performed by comparing data from \ensuremath{pp}\xspace\ and $ep$ colliders.
On the other hand, the consideration of processes where the TMD factorisation is not supposed to hold can be very valuable in advancing our understanding of long-distance correlations in QCD, by quantifying the actual role of the expected factorisation-breaking contributions.
This will also be addressed in Section~\ref{sec:beyond_TMD}.
\subsubsection{Single low-\ensuremath{P_T}\xspace $C$-even $\pazocal{Q}$ production}
\label{sec:TMDssinglequarkonium}
Single-quarkonium production offers the possibility of constraining both unpolarised and linearly-polarised gluon TMDs~\cite{Boer:2012bt}, even if the hard scale is set by the mass of the bound state and thus the room for TMD factorisation to work is limited.
Leaving aside the complications of TMD shape functions pointed out in~\cite{Echevarria:2019ynx, Fleming:2019pzj}, which should be properly taken into account to perform quantitatively consistent phenomenological analyses, the analysis of the (low) \ensuremath{P_T}\xspace spectra of quarkonia up to roughly half their mass can give information on the \ensuremath{k_T}\xspace dependence of the unpolarised TMD $f_{1}^{g}$ at a scale of 3~GeV (10~GeV) for charmonia (bottomonia), but also on the distribution of linearly-polarised gluons, $h_1^{\perp g}$, which modulates the quarkonium \ensuremath{P_T}\xspace spectrum. Estimates of these modulations and of the $x$ range where they can be accessed at the LHC in the collider and FT modes are given in~\ct{t:processes1}. Estimated rates are also given to illustrate that these observables can be measured, provided that the detectors can cope with the background at low \ensuremath{P_T}\xspace.
\begin{table}[hbt!]
\begin{center}\renewcommand{\arraystretch}{1.2}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
$\pazocal{Q}$& \shortstack{expected yield \\ ($\sqrt{s}=115$~GeV)} & \shortstack{expected yield \\ ($\sqrt{s}=14$~TeV)} & \shortstack{$x_2$ range \\ ($\sqrt{s}=115$~GeV)} & \shortstack{$x_2$ range \\ ($\sqrt{s}=14$~TeV)} & Low-\ensuremath{P_T}\xspace modulation \\
\hline
\hline
$\eta_{c}$ & ${\cal O}(10^{5\div 6})$ & ${\cal O}(10^{6\div 7})$ & $0.02\div 0.5$ & $10^{-6}\div 3\!\cdot\!10^{-5}$ & $0\div 80 \%$~\cite{Boer:2012bt,Signori:2016jwo} \\
\cline{1-5}
$\eta_{b}$ & ${\cal O}(10^{1\div 2})$ & ${\cal O}(10^{3\div 4})$ & $0.1\div 1$ & $5\!\cdot\!10^{-6}\div 10^{-4}$ & $0\div 80 \%$~\cite{Boer:2012bt,Signori:2016jwo,Echevarria:2015uaa} \\
\cline{1-5}
$\chi_{c0}(1P)$ & ${\cal O}(10^{3\div 4})$ & ${\cal O}(10^{4\div 5})$ & $0.02\div 0.5$ & $10^{-6}\div 3\!\cdot\!10^{-5}$ & $0\div 80 \%$~\cite{Boer:2012bt} \\
\cline{1-5}
$\chi_{c2}(1P)$ & ${\cal O}(10^{5\div 6})$ & ${\cal O}(10^{6\div 7})$ & $0.02\div 0.5$ & $10^{-6}\div 3\!\cdot\!10^{-5}$ & $<1\%$~\cite{Boer:2012bt} \\
\cline{1-5}
$\chi_{b0}(nP)$ & ${\cal O}(10^{1\div 2})$ & ${\cal O}(10^{3\div 4})$ & $0.1\div 1$ & $5\!\cdot\!10^{-6}\div 10^{-4}$ & $0\div 80 \%$~\cite{Boer:2012bt} \\
\cline{1-5}
$\chi_{b2}(nP)$ & ${\cal O}(10^{2\div 3})$ & ${\cal O}(10^{4\div 5})$ & $0.1\div 1$ & $5\!\cdot\!10^{-6}\div 10^{-4}$ & $<1\%$~\cite{Boer:2012bt} \\ \hline
\end{tabular}
}
\caption{%
Expected $\ensuremath{P_T}\xspace$ modulations generated by $h_{1}^{\perp g}$ for a selection of quarkonium-production observables, along with the expected yields and $x_2$ ranges derived from $x_2=M e^{-y_{\rm c.m.s.}\xspace}/\sqrt{s}$ for a rapidity coverage $-2.8<y_{\rm c.m.s.}\xspace<0.2$ for FT mode at $\sqrt{s}=115$~GeV and $2<y_{\rm c.m.s.}\xspace<5$ for collider mode at $\sqrt{s}=14$~TeV.}
\label{t:processes1}
\end{center}
\end{table}
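The $x_2$ ranges quoted above follow directly from the kinematic relation $x_2=M e^{-y_{\rm c.m.s.}}/\sqrt{s}$ evaluated at the edges of the rapidity coverage. As a minimal numerical cross-check (with the $\eta_c$ mass taken as 2.98~GeV purely for illustration):

```python
import math

def x2_range(M, sqrt_s, y_min, y_max):
    """Momentum-fraction range x2 = M exp(-y_cms)/sqrt(s) over a rapidity window.

    x2 decreases with increasing rapidity, so the window edges map to the extremes.
    """
    return (M * math.exp(-y_max) / sqrt_s, M * math.exp(-y_min) / sqrt_s)

# eta_c (M ~ 2.98 GeV) in FT mode: -2.8 < y_cms < 0.2 at sqrt(s) = 115 GeV
lo, hi = x2_range(2.98, 115.0, -2.8, 0.2)
print(f"FT mode:       {lo:.3f} < x2 < {hi:.3f}")     # roughly 0.02 to 0.4

# eta_c in collider mode: 2 < y_cms < 5 at sqrt(s) = 14 TeV
lo, hi = x2_range(2.98, 14000.0, 2.0, 5.0)
print(f"Collider mode: {lo:.1e} < x2 < {hi:.1e}")     # roughly 1e-6 to 3e-5
```

Both windows reproduce the orders of magnitude of the $\eta_c$ rows in the table.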
However, we should stress that, in principle, TMD factorisation is only supposed to hold for $\eta_Q$ production.
The measurement of scalar and tensor $\chi_{c,b}$ states is essential to get a complete picture, but their low-\ensuremath{P_T}\xspace spectra are subject to specific factorisation-breaking effects~\cite{Ma:2014oha} (see Section~\ref{sec:beyond_TMD}).
\subsubsection{$\pazocal{Q}+\pazocal{Q}$ production}
\label{sec:psi-psi-TMDs}
Quarkonium-pair production at the LHC proceeds predominantly via gluon fusion~\cite{Qiao:2009kg,Lansberg:2013qka}, even down to the energy of the FT mode~\cite{Lansberg:2015lva,Hadjidakis:2018ifr}.
It thus enables the study of the gluon content of the proton with low contamination from quark-induced contributions.
As seen in Section~\ref{sec:pp} and to be discussed again in Section~\ref{sec:dps}, the hadroproduction of quarkonium pairs can be initiated~\cite{Kom:2011bd,Lansberg:2014swa,Lansberg:2019adr} by SPS or by DPS.
Only the SPS component is of interest here to probe TMDs in gluon fusion.
It is thus important to control the potential contamination from DPS.
At low rapidity separations, $\Delta y_{\pazocal{Q}\Q}$, and when the invariant mass of the pair, $M_{\pazocal{Q}\Q}$, increases, the relative contribution of DPS gets so low that it becomes a minor source of uncertainty.
Near the threshold, {i.e.}\xspace\ the region so far measured by LHCb~\cite{Aaij:2016bqq}, the DPS contribution should be subtracted, which calls for a good understanding of its kinematic distribution.
In addition, increasing $M_{\pazocal{Q}\Q}$, as in the data samples of ATLAS~\cite{Aaboud:2016fzt} and CMS~\cite{Khachatryan:2014iia}, allows one to probe higher transverse momenta of the pair, $P_{\pazocal{Q}\Q_{\scriptscriptstyle T}}$, while remaining in the region of applicability of TMD factorisation. $\ensuremath{J/\psi}\xspace$-pair and $\ensuremath{\Upsilon}\xspace$-pair production have already been studied several times by the LHC collaborations with various setups, although these studies were not designed for the extraction of information on gluon TMDs (see Section~\ref{sec:onium_pair_pp}).
Increasing the samples of $\ensuremath{J/\psi}\xspace$ pairs would allow for the measurement of double- or even triple-differential cross sections, which are much more suitable for the extraction of gluon TMDs without diluting their effects.
More data on di-$\Upsilon$ production would allow one to probe gluon TMDs at similar masses of the pair, but in a different system with different feed-down, DPS or $v$-correction contamination.
To highlight the importance of measuring azimuthal modulations, it is instructive to note that the differential cross section of the process of $\pazocal{Q}\Q$ production via gluon-gluon fusion has the general form~\cite{Lansberg:2017dzg}:
\begin{align}&
\frac{d \sigma}{d M_{\pazocal{Q}\Q} d Y_{\pazocal{Q}\Q} d^2 \bm{P}_{\pazocal{Q}\Q_{\scriptscriptstyle T}} d \Omega}
\propto \frac{\sqrt{M_{\pazocal{Q}\Q}^2-4 M_\pazocal{Q}^2}}{s M_{\pazocal{Q}\Q}^2}
\times \nonumber\\
&
\Bigl\{F_1\, {\C[\fone\fone]}+F_2\, {\C[w_2\hone\hone]}+\cos(2 \phi)\left(F_3\, {\C[w_3\fone\hone]}\right.\Bigr.
\Bigl.\left.+F_3'\, {\C[w_3'\hone\fone]}\right)+\cos(4 \phi)\, F_4\, {\C[w_4\hone\hone]}\Bigr\}\,,
\label{TMDxsect}
\end{align}
where the angular variables in $d\Omega=d\cos\theta d\phi$ are defined in the Collins--Soper frame and describe the spatial orientation of the back-to-back
pair in this frame.
For vector-quarkonium-pair production, the hard-scattering coefficient $F_2$ remains negligible over the whole phase space ($F_2/F_1< 0.01$).
Thus, ${\mathrm{d}}\sigma/{\mathrm{d}} P_{\Q\Q_T}$ is not modulated by $h_1^{\perp g}$ and its measurement gives direct access to $f_1^g$.
\begin{figure}[htpb!]
\centering
{\includegraphics[width=9.cm]{figures/dsigdqt_norm_050120mod.pdf}}
\caption{Comparison of the \textit{normalised} $P_{\psi\psi_T}$-spectrum for $\ensuremath{J/\psi}\xspace$-pair production at $M_{\psi\psi}$ = 8~GeV computed using two models of the gluon TMDs with that measured by LHCb.
[Figure taken from~\cite{Scarpa:2019fol}]}
\label{fig:qtCf1f1}
\end{figure}
As for the azimuthal asymmetries, they can be conveniently studied by defining
\begin{equation}
\cnf{n}=\frac{\int {\mathrm{d}} \phi_{\mathrm{CS}} \cos(n\phi_{\rm CS}) {\mathrm{d}}\sigma}{\int {\mathrm{d}} \phi_{\rm CS} {\mathrm{d}}\sigma}\,\label{cnf}.
\end{equation}
In fact, $\cnf{2,4}$ represent half the relative magnitude of the corresponding $\phi_{\rm CS}$-asymmetries in~\ce{TMDxsect} with respect to the azimuthally-{independent} part, and thus they are directly connected to $h_1^{\perp g}$.
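In practice, these moments are simply estimated as sample averages $\langle\cos(n\phi_{\rm CS})\rangle$ over the event sample. A toy illustration (the input asymmetry values below are arbitrary, chosen only to show that the estimator recovers them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample of phi_CS drawn from 1 + 2*c2*cos(2 phi) + 2*c4*cos(4 phi)
# via accept-reject; c2, c4 mimic asymmetries generated by h_1^{perp g}.
c2_true, c4_true = 0.04, 0.02
w_max = 1.0 + 2.0 * (c2_true + c4_true)   # global envelope of the density
phis = []
while len(phis) < 200_000:
    phi = rng.uniform(0.0, 2.0 * np.pi, 100_000)
    w = 1.0 + 2.0 * c2_true * np.cos(2.0 * phi) + 2.0 * c4_true * np.cos(4.0 * phi)
    phis.extend(phi[rng.uniform(0.0, w_max, phi.size) < w])
phis = np.asarray(phis[:200_000])

def cn(phi, n):
    """Discrete estimator of <cos(n phi_CS)>: the azimuthal moments."""
    return np.cos(n * phi).mean()

print(f"<cos 2phi> = {cn(phis, 2):+.3f}")   # ~ c2_true = 0.04
print(f"<cos 4phi> = {cn(phis, 4):+.3f}")   # ~ c4_true = 0.02
```

For this density, $\langle\cos(2\phi)\rangle = c_2$ and $\langle\cos(4\phi)\rangle = c_4$ exactly, so the estimator returns the input values up to statistical fluctuations of order $1/\sqrt{2N}$.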
\begin{figure}[htpb!]
\centering
\subfloat[]{\includegraphics[width=0.35\textwidth]{figures/evol_R2_forward_050120.pdf}\label{fig:R2_forward_psi}}
\subfloat[]{\includegraphics[width=0.35\textwidth]{figures/evol_R4_central_050120.pdf}\label{fig:R4_central_psi}}\\
\subfloat[]{\includegraphics[width=0.35\textwidth]{figures/evol_R2_forward_Ups_050120.pdf}\label{fig:R2_forward_Ups}}
\subfloat[]{\includegraphics[width=0.35\textwidth]{figures/evol_R4_central_Ups_050120.pdf}\label{fig:R4_central_Ups}}
\caption{Azimuthal asymmetries for di-$\ensuremath{J/\psi}\xspace$ (a,b) and di-$\Upsilon$ (c,d) production as functions of $P_{\Q\Q_T}$:
(a,c) $2\cnf{2}$
for $0.25<|\cos(\theta_{\rm CS})|<0.5$, and (b,d) $2\cnf{4}$ at $|\cos(\theta_{\rm CS})|<0.25$.
The results are presented for $M_{\psi\psi}$ = 12, 21 and 30 GeV and for $M_{\Upsilon\Upsilon}$ = 30, 40 and 50~GeV, for ${b_{T_{\lim}}}$ = 2, 4 and 8 GeV$^{-1}$.
[Figure taken from~\cite{Scarpa:2019fol}]}
\label{fig:asym}
\end{figure}
The normalised $P_{\psi\psi_T}$ spectra for di-$\ensuremath{J/\psi}\xspace$ production computed using a Gaussian-based TMD model~\cite{Lansberg:2017dzg} or an evolved TMD~\cite{Scarpa:2019fol} are compared in \cf{fig:qtCf1f1} to the LHCb data~\cite{Aaij:2016bqq}, from which the DPS contribution has been subtracted assuming fully uncorrelated DPS events.
The data considered are for $P_{\psi\psi_T} < M_{\psi\psi}/2$ with $\langle M_{\psi\psi}\rangle \simeq 8$~GeV.
The Gaussian-based TMD model fits the data best with a width $\langle k_\sT^2 \rangle$ of the order of 3~GeV$^2$.
Such a large value is a consequence of TMD evolution increasing the intrinsic momentum of the gluons entering the hard scattering.
The spectrum using evolved TMDs is plotted for widths ${b_{T_{\lim}}}$ of a Gaussian nonperturbative Sudakov factor between 2 and 8 GeV$^{-1}$.
The lower bound corresponds to the conventional matching point with the perturbative region, while the upper bound corresponds to the diameter of the proton.
While the computation with evolution can account for the LHCb spectrum, the lack of a double-differential measurement in $P_{\psi\psi_T}$ and $M_{\psi\psi}$ does not allow the TMD evolution to be constrained.
The relative sizes of the azimuthal asymmetries in $\ensuremath{J/\psi}\xspace$- and $\Upsilon$-pair production are presented in \cf{fig:asym} as functions of $P_{\Q\Q_T}$, for two ranges of the rapidity difference ($|\cos(\theta_{\rm CS})|<0.25$ corresponds to central production, while $0.25<|\cos(\theta_{\rm CS})|<0.5$ corresponds to forward production), different values of the pair invariant mass and ${b_{T_{\lim}}}$ in the range $[2;8]$~GeV$^{-1}$.
Asymmetries reach magnitudes of 8 to 10\% at larger $P_{\Q\Q_T}$ at central rapidities for both $\ensuremath{J/\psi}\xspace$- and $\Upsilon$-pair production.
The much larger data samples to be collected at the HL-LHC will allow for the measurement of $P_{\Q\Q_T}$ distributions, enabling a proper fit of $f_1^g$ at different scales.
They will also allow for a measurement of the azimuthal asymmetries, which could be as large as 10\% and would tell whether $h_1^{\perp g}$ is indeed non-zero.
Other studies of quarkonium-pair production are discussed in Section~\ref{sec:beyond_TMD}.
\subsubsection{$\pazocal{Q}+\gamma$ production}
\label{sec:jpsigammaTMD}
Besides vector-quarkonium-pair production, the study of a vector quarkonium produced in association with an isolated photon is another very promising way to access the distribution of both the \ensuremath{k_T}\xspace and the polarisation of the gluon in an unpolarised proton in \ensuremath{pp}\xspace\ collisions at the LHC~\cite{Dunnen:2014eta}.
Despite a cross section likely smaller than that for quarkonium-pair production, it should be less prone to factorisation-breaking effects (see Section~\ref{sec:beyond_TMD}), while showing a very similar capability of accessing $h_1^{\perp g}$.
The differential cross section for the production of $\pazocal{Q}+\gamma$ ($\pazocal{Q}=\ensuremath{J/\psi}\xspace,\ensuremath{\Upsilon}\xspace$) via gluon-gluon fusion has the same general form as for di-onia:
\begin{align}\label{separatecrosssect}
&\frac{d\sigma}{dM_{\gamma\pazocal{Q}}dY_{\gamma\pazocal{Q}}d^2{q}_Td\Omega} \propto
\frac{M_{\gamma\pazocal{Q}}^2-M_\pazocal{Q}^2}{s M_\pazocal{Q}^3\,M_{\gamma\pazocal{Q}}^3}
\left\{ F_1\mathcal{C}[f_1^gf_1^g] +
\cos(2\phi) \;F_3\mathcal{C}[w_3f_1^{g}h_1^{\perp g}] +
\cos(4\phi) \;F_4\mathcal{C}[w_4h_1^{\perp g}h_1^{\perp g}] \right\},
\end{align}
where \ensuremath{q_T}\xspace is the transverse momentum of the quarkonium-photon pair and the angular variables in $d\Omega=d\cos\theta d\phi$ are defined in the Collins--Soper frame~\cite{Dunnen:2014eta}.
Like for di-onia, the first term in the curly brackets corresponds to the contribution from unpolarised gluons described by $f_1^g$, while the second and third terms contain the linearly-polarised gluon TMD function $h_1^{\perp g}$ and bring in some azimuthal modulations.
While in~\cite{Dunnen:2014eta} the amplitudes of $2\phi$ and $4\phi$ modulation terms were found to be of comparable size, more realistic simulations suggest that it may be safer to concentrate on the $4\phi$ term, which is less likely to be mimicked by typical acceptance requirements of the general-purpose LHC detectors on muons and photons.
It was found that the $4\phi$ modulation is larger at small values of $\cos^2\theta$, and a cut at $\cos^2\theta=0.1$ allows one to separate the low-$\cos^2\theta$ region, where the $4\phi$ modulation is enhanced, from the high-$\cos^2\theta$ region, where it is suppressed.
In the absence of experimental data, we have found it useful to perform some feasibility studies to extract the $4\phi$ modulation.
In what follows, the process is simulated for \ensuremath{pp}\xspace\ collisions at 13~TeV using the {\sc pythia}\xspace~8 generator, with $h_1^{\perp g}=0$.
In order to emulate the effects of a possible non-zero $h_1^{\perp g}$, each event is assigned a weight proportional to the expression in the curly brackets in \ce{separatecrosssect}.
For such pioneering investigations, it is sufficient to mimic evolution effects by assuming a simple Gaussian dependence of the unpolarised gluon distribution $f_1^g$ on the gluon transverse momentum $k_T$~\cite{Lansberg:2017dzg, Boer:2011kf}, $f_1^g(x,k_T^2)=\frac{G(x)}{\pi\langle k_T^2\rangle}\exp\big(-k_T^2/\langle k_T^2\rangle\big)$, where $G(x)$ is the collinear gluon distribution function and $\langle k_T^2\rangle$ is assumed to be independent of $x$.
A model-independent positivity bound is used to restrict possible parameterisations of $h_1^{\perp g}$~\cite{Mulders:2000sh}: $k_T^2|h_1^{\perp g}(x,k_T^2)|\leq 2M^2 f_1^g(x,k_T^2)$.
Following Refs.~\cite{Boer:2011kf, Lansberg:2017dzg}, `Model 1' is defined by $h_1^{\perp g}(x,k_T^2)=\frac{M^2G(x)}{\pi\langle k_T^2\rangle^2}\exp\big(1-k_T^2/(r\langle k_T^2\rangle)\big)$, while in `Model 2' $h_1^{\perp g}(x,k_T^2)$ is chosen to saturate the positivity bound.
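A minimal numerical sketch of the positivity-bound check for `Model 1' is given below; $G(x)$ cancels in the ratio and is set to unity, the numerical values of $\langle k_T^2\rangle$ and $M$ are illustrative, and the parameter is taken to be $r=2/3$ (an assumption: for this functional form it is the value for which the bound is exactly saturated at the maximum of the ratio).

```python
import numpy as np

# Numerical check of k_T^2 |h1perp| <= 2 M^2 f1 for the Gaussian 'Model 1'.
# G(x) drops out of the ratio, so it is set to 1; values are illustrative.
kT2_avg = 3.0          # GeV^2, of the order suggested by the di-J/psi fit
M = 0.938              # proton mass in GeV
r = 2.0 / 3.0          # assumed Model-1 parameter (saturation choice)

kT2 = np.linspace(1e-4, 30.0, 100_000)   # GeV^2 grid

f1 = np.exp(-kT2 / kT2_avg) / (np.pi * kT2_avg)
h1 = (M**2 / (np.pi * kT2_avg**2)) * np.exp(1.0 - kT2 / (r * kT2_avg))

ratio = kT2 * np.abs(h1) / (2.0 * M**2 * f1)
# Analytically, ratio = (t/2) exp(1 + t(1 - 1/r)) with t = kT2/<kT2>,
# which peaks at t = r/(1-r) = 2 with peak value exactly 1 for r = 2/3.
print(f"max of k_T^2 |h1| / (2 M^2 f1) = {ratio.max():.4f}")
```

The maximum of the ratio stays at (and never exceeds) 1, confirming that this parameter choice respects the bound while saturating it at one point in $k_T^2$.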
According to \ce{separatecrosssect}, for an ideal experiment with full acceptance, the $\phi$ distribution in the absence of gluon polarisation is expected to be flat, while, in the case of non-zero gluon polarisation, a $\phi$ modulation appears with a magnitude proportional to that of $h_1^{\perp g}$.
However, the kinematics of a typical general-purpose LHC detector such as ATLAS or CMS suggests that the minimum $\ensuremath{P_T}\xspace$ of an identified muon is around 4~GeV, which implies $\ensuremath{P_T}\xspace^{\ensuremath{J/\psi}\xspace} > 8$~GeV and hence requires a cut $\ensuremath{P_T}\xspace^\gamma > 8$--$9$~GeV to produce a $\ensuremath{P_T}\xspace$-balanced final state in which \ensuremath{q_T}\xspace is smaller than, say, $M_{\pazocal{Q}\gamma}/2$.
These cuts cause a significant non-trivial distortion of the observed $\phi$ distribution, which complicates the extraction of the $\phi$-modulated terms.
It was observed that this distortion is almost independent of $\cos\theta$, and one can use the ratio of the differential cross sections with low and high $\cos^2\theta$ to largely eliminate the kinematic distortion and help to extract the $4\phi$-modulated contribution.
The comparison between unweighted and weighted distributions of the ratio of differential cross sections with low $\cos^2\theta < 0.1$ and high $\cos^2\theta > 0.1$ is shown in~\cf{fig:ratiooverlay}.
The distributions are fitted with a Fourier series truncated after $\cos4\phi$.
The dashed blue line shows the unweighted result, which assumes $h_1^{\perp g}=0$, while the solid red lines in~\cf{fig:ratiooverlay}a and~\cf{fig:ratiooverlay}b correspond, respectively, to Model 1 and Model 2 defined above.
\begin{figure}[htbp!]
\centering
\subfloat[]{\includegraphics[height = 5.4cm, keepaspectratio]{figures/449M1.pdf}}
\subfloat[]{\includegraphics[height = 5.4cm, keepaspectratio]{figures/449M2.pdf}}
\caption{
The ratios of differential cross sections for events with $z^2\equiv \cos^2\theta<0.1$ over the events with $\cos^2\theta>0.1$.
On both plots, the open points describe the unweighted distribution, corresponding to $h_1^{\perp g}=0$, with the fit shown as the dashed blue line.
The solid points describe the weighted distributions, with the fits shown as solid red lines, for Model 1 (a) and Model 2 (b) described in the text.
}
\label{fig:ratiooverlay}
\end{figure}
For the level of statistics of this Monte Carlo sample, which roughly corresponds to an integrated luminosity of 100~fb$^{-1}$ at 13~TeV (without accounting for detection efficiency), the change in the coefficient of the $\cos4\phi$ modulation term relative to the unweighted case, $\Delta P_4$, is not significant for Model 1, $\Delta P_4(M1) = (9\pm6)\times 10^{-3}$, but should be reliably measurable if the gluon TMDs are described by Model 2, with $\Delta P_4(M2) = (50\pm6)\times 10^{-3}$.
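This statement can be checked with a simple error-scaling estimate, assuming purely statistical uncertainties (the input numbers are the quoted $\Delta P_4$ values; the 20\% efficiency is the figure used in the text):

```python
import math

def significance(value, error):
    """Naive significance of a measured shift, value / statistical error."""
    return value / error

# Quoted Delta P_4 results for ~100 fb^-1 with full detection efficiency
dp4_m1, err = 9e-3, 6e-3     # Model 1: not significant
dp4_m2 = 50e-3               # Model 2: clearly measurable
print(f"Model 1: {significance(dp4_m1, err):.1f} sigma")
print(f"Model 2: {significance(dp4_m2, err):.1f} sigma")

# 100x the luminosity with ~20% detection efficiency -> 20x the events,
# so the statistical error shrinks by sqrt(20)
err_hl = err / math.sqrt(100 * 0.20)
print(f"Model 1 at HL-LHC: {significance(dp4_m1, err_hl):.1f} sigma")
```

With the factor-100 luminosity increase and a 20\% efficiency, the Model-1 shift goes from about $1.5\sigma$ to well above $5\sigma$, consistent with the sensitivity claim.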
An increase in the integrated luminosity by a factor of 100 should allow one to reach the sensitivity needed for Model 1 even for a detection efficiency of $\sim20\%$.
A similar picture is expected to be obtained for prospects with the CMS detector, whereas dedicated simulations are clearly needed to assess whether one could venture to even lower \ensuremath{P_T}\xspace\ values with the LHCb detector.
\subsection{Beyond and in between TMD and HE factorisations}
\label{sec:beyond_TMD}
Quarkonia are nearly always produced by gluons at the LHC and, as such, their \ensuremath{P_T}\xspace spectra are, more or less directly, sensitive to the gluon distribution in the transverse plane. However, even in unpolarised hadronic collisions, many phenomena come into play when one wishes to study this connection. Depending on the theoretical formalism one employs to approach this relationship between the dynamics of the gluon and that of quarkonia, different effects are emphasised.
As was previously alluded to, TMD factorisation is expected to have a restrictive range of applicability, both in terms of kinematics (\ensuremath{P_T}\xspace should be smaller than the hard scale, the usual invariant mass of the observed system) and processes (no colour flow in the final state in hadronic collisions). On the contrary, HE factorisation is much more inclusive in terms of processes but, being designed to account for HE effects, it may be inaccurate or simply miss some phenomena when the collision energy is finite. When put in the context of quarkonium production, for which the mechanisms at work are not even an object of consensus, it is not surprising that the situation quickly gets intricate. In this section, we will simply attempt to correlate some possible future measurements at the HL-LHC with theoretical objectives. It should be clear that these are not necessarily absolutely rigorous, completely achieved nor objects of consensus.
Let us first start with ideas of quarkonium measurements inspired by considerations from TMD factorisation with NRQCD, for which specific factorisation-breaking effects can be identified. The first on the list is of course the production of single $\ensuremath{J/\psi}\xspace$ or $\ensuremath{\Upsilon}\xspace$ as a function of \ensuremath{P_T}\xspace, which has been routinely measured at colliders for thirty years. These measurements have been investigated assuming the validity of TMD factorisation within NRQCD with CO contributions~\cite{Mukherjee:2016cjw} as well as within the CEM~\cite{Mukherjee:2015smo}.
There are two reasons why TMD factorisation should in principle not apply here. First, if one focuses on the leading-$v$ contributions, and thus on the CSM, both the $\ensuremath{J/\psi}\xspace$ and the $\ensuremath{\Upsilon}\xspace$ are produced with a gluon. If its momentum is integrated over, the connection between the final-state measured momenta and the initial-state ones is lost. Second, if one considers the sub-leading-$v$ contributions from the CO states, which at low \ensuremath{P_T}\xspace are enhanced by one power of \ensuremath{\alpha_{\rm s}}, the colour flow is so entangled that one cannot expect to derive a factorised formula for the hadronic cross section. However, this does not prevent one from analysing data along the lines of what a would-be TMD-factorised cross section predicts and then attempting to extract information on TMDs. Along these lines, it would be very interesting to compare the low-\ensuremath{P_T}\xspace spectra of the vector and pseudoscalar states, {i.e.}\xspace for $\ensuremath{P_T}\xspace < M_\pazocal{Q}/2$. Such data exist for the vector states, but not yet for the pseudoscalar ones.
If they are found to be different, caution will be needed before attributing this difference either to \ensuremath{P_T}\xspace modulations from $h_1^{\perp g}$ or, simply, to factorisation-breaking effects beyond factorised TMDs expected for these processes.
The same remark can be made for the $\chi_{Q}$ states. Different \ensuremath{P_T}\xspace modulations from $h_1^{\perp g}$ are expected~\cite{Boer:2012bt} for the scalar and the tensor states. They are, however, also subject to factorisation-breaking effects owing to their CO content~\cite{Ma:2014oha}. It may nevertheless happen that these effects are related by HQSS between these $\chi_{Q}$ states. As regards the pseudovector state, $\chi_{Q1}$, according to NRQCD, its arbitrary\footnote{The trade-off between the CO and CS components of a $P$-wave quarkonium is set by the unphysical NRQCD scale.} CO content would normally allow its production by the fusion of two on-shell gluons. LHCb data, however, show~\cite{Aaij:2013dja} a $\chi_{c2}/\chi_{c1}$ ratio steadily rising as \ensuremath{P_T}\xspace approaches zero, in accordance with the Landau--Yang theorem but in disagreement with the NRQCD expectations. In other words, the impact of the CO states is not as expected. In view of this, one should certainly not refrain from incorporating the $\chi_{Q}$ states in a global TMD survey for fear of factorisation-breaking effects due to their CO content.
The HE factorisation framework provides further motivations for such studies of low-\ensuremath{P_T}\xspace $\chi_{Q}$ states and, particularly, of ratios such as $\chi_{c2}/\chi_{c1}$. At the LHC, according to HE factorisation, this ratio should also not show the observed Landau--Yang enhancement, this time not because of CO, but because the pseudovector $\chi_{c1}$ can be produced by two gluons when at least one is off-shell\footnote{We recall that the gluon virtuality is expected to increase for decreasing $x$ according to HE factorisation.}. Similarly, one may also want to compare the pseudoscalar and tensor \ensuremath{P_T}\xspace spectra. Clearly, what is at stake then is the correlation between the off-shellness of the initial gluons and their fractional momenta, rather than their possible linear polarisation. This illustrates how a single observable can highlight two different phenomena in two different formalisms. Of course, if both phenomena are observed, the question of how they are connected naturally arises.
Similarly, there remain further aspects of the connection between the virtualities and the \ensuremath{k_T}\xspace\ of the initial gluons that can be studied in associated production of quarkonia. For instance, calculations in HE factorisation for inclusive double charmonium hadroproduction are discussed in Section~\ref{sec:dps} and provide, even at LO, a reasonable account of the $P_{\psi\psi_T}$ spectra, which are connected to $f_1^g$ as shown in Section~\ref{sec:psi-psi-TMDs}. In TMD factorisation, a first attempt to connect the size of the azimuthal modulations generated by $h_1^{\perp g}$ to the quarkonium polarisations was made in~\cite{Scarpa:2020sdy}. It would be interesting to see how, in HE factorisation, the quarkonium polarisation evolves with energy and understand how it is correlated to the initial-gluon virtualities. More generally, quarkonium-pair production, which should be studied even more widely at the HL-LHC, likely represents a very versatile laboratory in {which to} analyse possible dualities between HE and TMD factorisations.
\subsection{Single transverse-spin asymmetries at the HL-LHC in FT mode}
\label{sec:polarised}
STSAs, or $A_N$, are defined as\footnote{Note that another direction, such as the transverse momentum of the produced particles, is needed (see {e.g.}\xspace~\cite{Anselmino:2009st} for more details).}
\begin{equation}
A_{N} = \frac{1}{{\cal P}_{\text{eff}}} \frac{\sigma^{\uparrow} - \sigma^{\downarrow}}{\sigma^{\uparrow} + \sigma^{\downarrow}}
\,,
\end{equation}
where $\sigma^{\uparrow\,(\downarrow)}$ is a differential cross section produced with a nucleon polarised upwards (downwards) and ${\cal P}_{\text{eff}}$ is the effective polarisation.
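As a minimal numerical illustration of this definition, the sketch below computes $A_N$ from up- and down-polarised yields divided by an effective polarisation; the yield values and the effective polarisation of 0.8 are made-up placeholders, not measured numbers.

```python
# Sketch: computing A_N from up/down polarised yields.
# All numbers below are hypothetical placeholders, not measured values.

def single_spin_asymmetry(n_up, n_down, p_eff):
    """Raw up/down asymmetry divided by the effective polarisation."""
    raw = (n_up - n_down) / (n_up + n_down)
    return raw / p_eff

# Example with made-up yields and an effective polarisation of 0.8:
a_n = single_spin_asymmetry(n_up=1030.0, n_down=970.0, p_eff=0.8)
print(round(a_n, 4))  # 0.03 / 0.8 = 0.0375
```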
Large STSAs were observed for the first time in 1976, in $\Lambda^0$ production in FT $p$Be scattering at Fermilab~\cite{Bunce:1976yb}, and have been seen in many other experiments since then.
When considering only the scattering of quarks or gluons, STSAs are expected to scale with the quark mass and the {\rm c.m.s.}\xspace\ energy as $A_N\sim m_q/\sqrt{s}$, as was shown in the seminal paper~\cite{Kane:1978nd}.
This prediction is many orders of magnitude smaller than the experimental observation, hence the explanation for large STSAs has to be found beyond the perturbative realm of QCD.
Two different theoretical mechanisms have been proposed, both relating STSAs to the structure of hadrons in terms of QCD.
The first mechanism, called the collinear twist-3 (CT3) approach~\cite{Efremov:1981sh,Efremov:1983eb,Qiu:1991pp}, is valid in the presence of one hard scale.
An example would be the single inclusive production of a light meson in \ensuremath{pp}\xspace\ scattering, where the hard scale is provided by the large \ensuremath{P_T}\xspace of the meson.
The STSAs are then due to quark-gluon-quark or triple-gluon correlators, which are the sub-leading (in the scale) twist-3 extensions of the usual collinear PDFs (with fragmentation-type twist-3 correlators also being relevant~\cite{Kanazawa:2014dca}).
The second mechanism takes place within TMD factorisation, and is therefore valid in the presence of two ordered hard scales: a small and a large one. In this framework, large STSAs are caused by the distribution of unpolarised partons inside the transversely-polarised hadron, parameterised by the Sivers TMD PDF $f_{1T}^{\perp}$~\cite{Sivers:1989cc} or by the fragmentation of a transversely-polarised parton into an unpolarised light meson, as parameterised by the Collins TMD FF $H_1^\perp$~\cite{Collins:1992kk}. Note that in the kinematic region where $\ensuremath{P_T}\xspace$ approaches the hard scale $M$, the TMD framework maps smoothly to the collinear regime, see {e.g.}\xspace~\cite{Ji:2006ub,Collins:2016hqq,Echevarria:2018qyi}.
Finally, a phenomenological approach is the Generalised Parton Model (GPM)~\cite{DAlesio:2007bjf}, in which the Sivers and Collins mechanisms are applied even in single-scale processes, keeping track of the transverse-momentum exchanges in the partonic scattering.
This approach has proven to be quite successful in phenomenological analyses, although one should be careful when extracting conclusions about the involved TMDs and the underlying physics.
In any case, it can be used to give a fair estimate of STSAs in single-scale processes, where the analysis in the proper twist-3 framework becomes a real challenge, due to the many involved and still unconstrained twist-3 functions.
Below, STSAs in different quarkonium-production processes are discussed in the context of a future FT experiment at the HL-LHC~\cite{Hadjidakis:2018ifr}, which could perform these measurements by polarising a target.
\subsubsection{Vector $\pazocal{Q}$ production}
In this subsection, STSAs in the $p^\uparrow p \to \pazocal{Q} + X$ process are discussed. %
As mentioned above, such processes are strictly speaking not TMD factorisable and do not therefore directly probe the TMD $f_{1T}^{\perp g}$~\cite{Boer:2015vso}, which encapsulates the Sivers effect believed to generate these STSAs, in the absence of the Collins effect from the fragmentation of the hadron.
However, within the GPM, which is an effective model where spin and intrinsic transverse-momentum effects are taken into account, such STSAs are treated as if they were factorisable in terms of an analogous object which is denoted, in what follows, as the Gluon Sivers function (GSF), in order to account for the gluon Sivers effect.
A more sophisticated extension of the GPM, the Colour-Gauge-Invariant GPM (CGI-GPM)~\cite{Gamberg:2010tj,DAlesio:2017rzj}, can also be considered, where effects from initial-state and final-state interactions, in the one-gluon approximation, are encapsulated in the GSF modelling. {Within the CGI-GPM, there are two independent GSFs, acting as phenomenological counterparts to the previously mentioned WW and DP $f_{1T}^{\perp g}$ in Section~\ref{sec:TMD_factorisation}, here denoted by $f$- and $d$-type, respectively.}
In this respect, one can in principle also address the process dependence of the GSF.
Concerning the quarkonium-production mechanism, both the CSM and NRQCD can be considered since factorisation-breaking effects are put aside. All details about such phenomenological studies can be found in~\cite{DAlesio:2017rzj,DAlesio:2019gnu,DAlesio:2020eqo}. In what follows, some selected results which support future experimental studies at the LHC will be shown.
\begin{figure}[htbp!]
\centering
\includegraphics[trim = 2.5cm .15cm 2.5cm 0cm, clip, height = 6.5cm, keepaspectratio]{figures/gsf-1stmom_CGI-GPM.pdf}
\caption{ Upper values for the first $k_T$ moments of the GSFs at $Q^2 = 2$ GeV$^2$~\cite{DAlesio:2018rnv}.}
\label{fig:1stm_AN}
\end{figure}
A first attempt to constrain these effective GSFs both within the GPM and the CGI-GPM approaches, from mid-rapidity pion and $D$-meson STSA data from RHIC~\cite{Adare:2013ekj,Aidala:2017pum}, was presented in~\cite{DAlesio:2015fwo,DAlesio:2018rnv}.
\cf{fig:1stm_AN} shows the extracted upper bounds of the first $k_T$ moment of the GSFs. Note that this quantity is a necessary ingredient for the evolution of the GSF itself.
As a matter of fact, the obtained GSF allows for a fairly good description of the available $\ensuremath{J/\psi}\xspace$ STSA data (almost compatible with zero)~\cite{Aidala:2018gmp}, even if no definite conclusion can be drawn owing to the possible presence of non-factorisable contributions, the feed-down from states that could depend in a different manner on the GSF, and the still rather large experimental uncertainties in a restricted domain in $x$.
It is thus extremely important to extend this analysis to more quarkonium states and to the
kinematics reachable at the FT-LHC as discussed in~\cite{Kikola:2017hnp,Hadjidakis:2018ifr}.
\begin{figure}[htbp!]
\centering
\includegraphics[trim = 1.cm 0cm 1cm .25cm, clip, height = 5.5cm, keepaspectratio]{figures/ANvsPT_LHC-GPM-sat-y-backward_Waves_BK11.pdf}
\includegraphics[trim = 1.cm 0cm 1cm .25cm, clip, height=5.5cm, keepaspectratio]{figures/ANvsPT_LHC-CGI-sat-y-backward_Waves_BK11.pdf}
\caption{Maximised values for $A_N$ vs.~$P_T$ for the process $p p^\uparrow \to \ensuremath{J/\psi}\xspace + X$ at $\sqrt s=115$ GeV and $y_{\rm cm}=-2$ within the GPM (left panel) and the CGI-GPM (right panel) approaches~\cite{DAlesio:2019gnu,DAlesio:2020eqo}. The full result (red solid lines) together with its wave decomposition (see legend) are shown.
}
\label{fig:AN_GPM-CGI_waves}
\end{figure}
\cf{fig:AN_GPM-CGI_waves} shows estimates for the STSA in the FT mode at the LHC, $A_N$, obtained by \emph{maximising} the Sivers effect within the GPM (left panel) and the CGI-GPM $f$-type (right panel) approaches.
The maximised quark contributions as well as those from the $d$-type GSF (not shown) are compatible with zero.
For completeness, the contribution from each wave to the full result is shown (see legend).
Notice that some contributions within the GPM are larger than one.
This happens because the denominator of the STSA includes all terms (entering with relative signs), while the numerator considers only a specific wave state.
The overall result (red solid lines) is, as expected, smaller than one.
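This sign interplay can be made concrete with a toy numerical example; all numbers below are invented for illustration and bear no relation to the actual GPM wave decomposition.

```python
# Toy illustration of why a single wave's contribution to A_N can
# exceed one while the summed asymmetry stays below one: individual
# numerator terms enter with relative signs. All numbers are made up.
num_waves = {"S": 1.3, "P": -0.9, "D": 0.4}   # per-wave numerators
denom = 1.0                                   # full (summed) denominator
per_wave = {w: n / denom for w, n in num_waves.items()}
total = sum(num_waves.values()) / denom
print(per_wave, total)  # some |per-wave| > 1, total = 0.8 <= 1
```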
A full comparison between two different mechanisms for quarkonium production (CSM vs.~NRQCD) and two effective TMD schemes (GPM vs.~CGI-GPM) for the maximised $A_N$ is presented in \cf{fig:AN_GPM-CGI}. These results, where no previous information on the GSF has been used, illustrate the potential role of such a dedicated phenomenological study.
\begin{figure}[htbp!]
\centering
\includegraphics[trim = 1.cm 0cm 1cm .25cm, clip, height = 5.5cm, keepaspectratio]{figures/ANvsPT_LHC-CGI_GPM-sat_BK11-QT.pdf}
\includegraphics[trim = 1.cm 0cm 1cm .25cm, clip, height=5.5cm, keepaspectratio]{figures/ANvsxF_LHC-CGI_GPM-sat_BK11-QT.pdf}
\caption{Maximised values for $A_N$ for the process $p p^\uparrow \to \ensuremath{J/\psi}\xspace + X$ at $\sqrt s=115$ GeV at fixed $y_{\rm cm}=-2$ vs.~$P_T$ (left panel) and at fixed $P_T=3$~GeV vs.~$x_F$ (right panel), obtained adopting the CGI-GPM and GPM approaches, within the CSM and NRQCD~\cite{DAlesio:2019gnu,DAlesio:2020eqo}.}
\label{fig:AN_GPM-CGI}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{./figures/AnUpsilon-xF-pp115GeV-P08.pdf}
\caption{Statistical-precision projections for $\Upsilon(nS)$ $A_{N}$ as a function of $\ensuremath{x_{\rm F}}\xspace$.
The quarkonium states are assumed to be measured in the di-muon channel with a LHCb-like detector.
The signal and the background are calculated in fast simulations that take into account the performance of the LHCb detector~\cite{Massacrier:2015qba,Kikola:2017hnp}.
[Figure taken from~\cite{Hadjidakis:2018ifr}]}
\label{fig:An:VectorOnium}
\end{figure}
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.5\textwidth]{./figures/AnJpsi-statErr-ALICE-pp115GeV-L45inv-pb.pdf}
\caption{Statistical-precision projections for $J/\psi$ $A_{N}$ as a function of $\ensuremath{x_{\rm F}}\xspace$ compared to the existing measurements~\cite{Adare:2010bd,Aidala:2018gmp}. The \ensuremath{J/\psi}\xspace\ di-muon spectrum is assumed to be measured in the Muon Spectrometer of the ALICE detector, with the target located at the nominal interaction point ($z_{\mathrm{target}}\approx 0$).
The signal and the background are extrapolated at $\ensuremath{\sqrt{s_{_{NN}}}}=115$~GeV from the ALICE measurements in~\cite{Adam:2015rta}.
[Figure taken from~\cite{Hadjidakis:2018ifr}]}
\label{fig:An:VectorOnium:ALICE}
\end{figure}
In~\cite{Kikola:2017hnp,Hadjidakis:2018ifr}, first studies of the projected uncertainties on $A_N$ were performed.
\cf{fig:An:VectorOnium} shows the estimated statistical uncertainties at the FT-LHC for $A_N$ as a function of $\ensuremath{x_{\rm F}}\xspace$ in $\Upsilon$ production in $pp^{\uparrow}$ collisions at $\sqrt{s} = 115$~GeV for an LHCb-like detector with 10 fb$^{-1}$ of luminosity, while \cf{fig:An:VectorOnium:ALICE} shows the expected statistical precision for $A_N$ in \ensuremath{J/\psi}\xspace production with an ALICE-like detector for $pp^{\uparrow}$ collisions with 45 pb$^{-1}$ of luminosity.
The expected $\Upsilon$, \ensuremath{J/\psi}\xspace and background yields were extrapolated from the \ensuremath{J/\psi}\xspace-rapidity spectrum and the signal-to-background ratios of~\cite{Adam:2015rta} with the procedure described in~\cite{Kikola:2017hnp}.
The signal-to-background ratio at 115 GeV is 1.2 and an efficiency of 13\% was assumed~\cite{Abelev:2014qha}.
The projected uncertainties, on the order of a few percent, can certainly help in constraining the GSF and the related twist-3 correlators, investigating different phenomenological approaches and entering a more quantitative phase in the study of gluon TMDs.
\subsubsection{$C$-even $\pazocal{Q}$ states}
The production of $C$-even quarkonium states has recently attracted
great attention both theoretically and experimentally (see Section~\ref{sec:pp}). %
With a detector similar to LHCb, STSAs for $\chi_c$, $\chi_b$ and $\eta_c$ could be measured {at low \ensuremath{P_T}\xspace} in the FT mode, as suggested by several studies of $\chi_c$ states in the busier collider mode down to a $\ensuremath{P_T}\xspace$ as low as 2 GeV~\cite{LHCb:2012af,LHCb:2012ac}.
The first study of inclusive $\eta_c$ production above $\ensuremath{P_T}\xspace=6$~GeV was performed by LHCb together with non-prompt $\eta_c(2S)$ production~\cite{Aaij:2016kxn}.
Such prompt studies can clearly be carried out by LHCb~\cite{Lansberg:2017ozx}. Indeed, given the lower combinatorial background at lower energies and the fact that the cross section for pseudoscalar charmonium production is similar to that of the vector ones, the low-$\ensuremath{P_T}\xspace$ region should be in reach. It may also be the case for $\eta_b$ production~\cite{Lansberg:2020ejc}, which offers a slightly wider range of applicability for TMD factorisation in terms of the \ensuremath{P_T}\xspace range.
The measurement of STSAs of $C$-even quarkonium states would give clean access not only to CT3 tri-gluon correlators~\cite{Schafer:2013wca}, but also to $f_{1T}^{\perp g}$ and the GSF of the GPM, if the low-$\ensuremath{P_T}\xspace$ region can be measured. Such processes would offer an opportunity for comparisons between these frameworks. Estimates of both $\eta_Q$ and $\chi_Q$ STSAs from the CT3 formalism are however not yet available, nor is any robust information on $f_{1T}^{\perp g}$. \ct{t:processes1} presents some yield estimates and the expected $x$ ranges that can be accessed.
\subsubsection{STSAs in associated $\pazocal{Q}$ production}
Associated-production channels~\cite{Dunnen:2014eta,Boer:2014lka,Lansberg:2015hla,Signori:2016jwo,Signori:2016lvd,Boer:2016bfj,Scarpa:2019fol}, where a quarkonium is produced along with another particle ({e.g.}\xspace\ another quarkonium, a photon, a lepton-pair, etc.), represent a very useful tool to access $f_{1T}^{\perp g}$ of TMD factorisation, the GSF of the (CGI-)GPM and the related tri-gluon correlators for CT3 factorisation. With the possibility to scan over the invariant mass of the observed system, one gets an interesting handle on the scale evolution of the Sivers effect. In addition, these associated-production channels enlarge the range of processes (with gluon-sensitive colourless final states) where TMD factorisation is expected to apply, offering various options to verify the universality of the extracted TMDs. A problem, however, is that such processes usually have small cross sections at RHIC and FT-LHC energies, which makes their study very challenging and probably requires high luminosity.
\begin{figure}[hbt!]
\centering
\includegraphics[width=0.49\textwidth]{figures/AnDiJpsi-pp115GeV.pdf}
\hspace{0.1cm}
\includegraphics[width=0.49\textwidth]{figures/AnVskT-DiJpsi-pp115GeV.pdf}
\caption{Statistical-precision projections for di-$\ensuremath{J/\psi}\xspace$ $A_N$ as a function of (a) $\ensuremath{x_{\rm F}}\xspace$ and (b) the pair $\ensuremath{k_T}\xspace$ with a LHCb-like detector.
The horizontal lines in (b) denote the width of the $\ensuremath{k_T}\xspace$ bins used for the calculations.
[Figure taken from~\cite{Hadjidakis:2018ifr}.]}
\label{fig:An:diJpsi}
\end{figure}
Di-$\ensuremath{J/\psi}\xspace$ production is certainly one of the most promising channels since the yields are not too small at the FT-LHC~\cite{Lansberg:2015lva} and the measurement is clearly feasible (unlike, for example, di-$\gamma$ studies). Furthermore, the feed-down contamination is limited to \ensuremath{\psi(2S)}\xspace~\cite{Lansberg:2014swa,Lansberg:2015lva}, which probes $f_{1T}^{\perp g}$ in the same way.
\cf{fig:An:diJpsi} shows the expected statistical precision for $A_N$ obtainable from di-\ensuremath{J/\psi}\xspace\ production at the FT-LHC with the LHCb detector as a function of the transverse momentum of the pair, $\ensuremath{k_T}\xspace$, and the corresponding $x_2$. Two scenarios are considered for the analysis of $A_{N}$ as a function of $\ensuremath{k_T}\xspace$: bins with a fixed width of 1~GeV\ ($d\ensuremath{k_T}\xspace = 1$~GeV, red points) and bins containing equal yields (black points).
Here, the $\ensuremath{k_T}\xspace$ dependence is modelled as a Gaussian distribution with width $\sigma = 2$~GeV.
The $x_2$-integrated $A_{N}$ will allow for the determination of the STSA with a few percent precision, and the $A_N(\ensuremath{k_T}\xspace)$ will give access to the $\ensuremath{k_T}\xspace$-dependence of the gluon Sivers TMD up to $\ensuremath{k_T}\xspace \approx 4$~GeV, which is not accessible anywhere else.
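The two binning schemes described above can be sketched numerically. The Gaussian width $\sigma = 2$~GeV follows the toy model mentioned in the text; the event count and random seed are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the two k_T binning schemes described in the text, for
# a Gaussian k_T distribution with width sigma = 2 GeV. The sample
# size and seed are illustrative assumptions.
rng = np.random.default_rng(0)
sigma = 2.0  # GeV
kt = np.abs(rng.normal(0.0, sigma, size=100_000))  # |k_T| of the pair

# Scheme 1: fixed-width bins, d k_T = 1 GeV, up to 4 GeV.
fixed_edges = np.arange(0.0, 5.0, 1.0)
fixed_counts, _ = np.histogram(kt, bins=fixed_edges)

# Scheme 2: equal-yield bins (quartiles of the k_T distribution).
equal_edges = np.quantile(kt, [0.0, 0.25, 0.5, 0.75, 1.0])
equal_counts, _ = np.histogram(kt, bins=equal_edges)

# Equal-yield bins carry (almost) identical statistics per bin, so the
# statistical uncertainty on A_N is roughly uniform across bins.
print(fixed_counts, equal_counts)
```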
\section{Background} \label{sec:2}
In this section, we briefly review the Markov Decision Process (MDP) under the average-reward setting, the variance-constrained policy optimization problem, and some background on deep neural networks.
\vskip4pt
\noindent{\bf Markov Decision Process.}
We consider the Markov decision process $({\mathcal{S}}, \cA, {\mathcal{P}}, r)$, where ${\mathcal{S}}$ is a compact state space, $\cA$ is a finite action space, ${\mathcal{P}}: {\mathcal{S}}\times{\mathcal{S}}\times\cA \to \RR$ is the transition kernel, and $r: {\mathcal{S}}\times\cA \to \RR$ is the reward function.
A stationary policy $\pi$ maps each state to a probability distribution over $\cA$, that is, $\pi(\cdot\,|\,s)\in {\mathcal{P}}(\cA)$, where ${\mathcal{P}}(\cA)$ is the probability simplex on the action space $\cA$.
Given a policy $\pi$, the state sequence $\{ s_t\}_{t\geq 0}$ and the state-action sequence $\{ (s_t, a_t )\}_{t\geq 0}$ form Markov chains over ${\mathcal{S}}$
and ${\mathcal{S}} \times \cA$, respectively.
Throughout this paper, we assume that the Markov chains induced by any stationary policy admit stationary distributions.
Moreover,
we denote by $\nu_\pi (s)$ and $\sigma_\pi (s, a)=\pi(a\,|\,s)\cdot \nu_\pi(s)$ the stationary state distribution and the stationary state-action distribution associated with a policy $\pi$, respectively. For ease of presentation, we denote by $\EE_{\sigma_\pi}[\,\cdot\,]$ and $\EE_{\nu_\pi}[\,\cdot\,]$ the expectations $\EE_{(s,a)\sim\sigma_\pi}[\,\cdot\,] = \EE_{a\sim\pi(\cdot\,|\,s), s\sim \nu_\pi(\cdot)}[\,\cdot\,]$ and $\EE_{s\sim\nu_\pi}[\,\cdot\,]$, respectively.
\vskip4pt
\noindent{\bf Average Reward Setting.}
For a given stationary policy $\pi: \cA \times {\mathcal{S}} \to \RR$, we measure its performance using its (long-run) average reward per step, which is defined as
\#\label{eq:rho}
\rho(\pi)=\lim _{T \rightarrow \infty} \frac{1}{T} \cdot \EE\Bigl[\sum_{t=0}^{T-1} r(s_t, a_t) \,\big|\, \pi\Bigr] = \EE_{(s,a)\sim\sigma_\pi}[r(s,a)].
\#
For all states $s$ in ${\mathcal{S}}$ and actions $a$ in $\cA$, the differential action-value function (Q-function) of a policy $\pi$ is defined as
\# \label{eq:def:q}
Q^{\pi}(s,a) = \sum^\infty_{t = 0}\EE\big[ r(s_t, a_t) - \rho(\pi) \, | \, s_0 = s, ~ a_0 = a,~ a_{t} \sim \pi(\cdot\,|\, s_t), ~s_{t+1} \sim {\mathcal{P}}(\cdot\,|\, s_t, a_t)\big].
\#
Correspondingly, the differential state-value function of a policy $\pi$ is defined as
\#\label{eq:def:v}
V^\pi(s) = \sum^\infty_{t = 0}\EE\big[ r(s_t, a_t) - \rho(\pi) \, | \, s_0 = s,~ a_{t} \sim \pi(\cdot\,|\, s_t), ~s_{t+1} \sim {\mathcal{P}}(\cdot\,|\, s_t, a_t)\big].
\#
In the context of risk-sensitive optimization, one of the most common risk measures is the long-run variance of reward obtained under policy $\pi$, which is defined as
\$
\Lambda(\pi) = \lim _{T \rightarrow \infty} \frac{1}{T} \cdot \EE\Bigl[\sum_{t=0}^{T-1} {\bigl(r(s_t, a_t) - \rho(\pi)\bigr)}^{2} \big|\, \pi \Bigr] = \EE_{(s,a)\sim\sigma_\pi} \bigl[r(s,a) - \rho(\pi)\bigr]^2.
\$
It is not difficult to show that
\#\label{eq:eta}
\Lambda(\pi) = \eta(\pi) - {\rho(\pi)}^{2}, \quad \text{where } \, \eta(\pi) = \EE_{(s,a)\sim\sigma_\pi}[r(s,a)^2].
\#
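The identity in \eqref{eq:eta} can be checked numerically on a small tabular MDP; the transition kernel, reward function, and policy below are arbitrary illustrative choices.

```python
import numpy as np

# Minimal numerical check of Lambda(pi) = eta(pi) - rho(pi)^2 on a
# hypothetical 2-state, 2-action MDP; all numbers are illustrative.
S, A = 2, 2
# P[s, a, s'] : transition kernel; r[s, a] : reward function.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])
pi = np.array([[0.6, 0.4],   # pi(a | s)
               [0.3, 0.7]])

# State transition matrix under pi and its stationary distribution nu_pi.
P_pi = np.einsum("sa,sat->st", pi, P)
evals, evecs = np.linalg.eig(P_pi.T)
nu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
nu = nu / nu.sum()

sigma = nu[:, None] * pi              # stationary state-action distribution
rho = (sigma * r).sum()               # average reward rho(pi)
eta = (sigma * r**2).sum()            # average squared reward eta(pi)
Lam = (sigma * (r - rho) ** 2).sum()  # long-run variance Lambda(pi)

assert abs(Lam - (eta - rho**2)) < 1e-12
print(rho, eta, Lam)
```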
Let $W^{\pi}$ and $U^{\pi}$ be the differential action-value and state-value functions associated with the squared reward of policy $\pi$, defined as
\begin{align}
W^{\pi}(s,a) & = \sum^\infty_{t = 0}\EE[ r(s_t, a_t)^2 - \eta(\pi) \, | \, s_0 = s, ~ a_0 = a,~ a_{t} \sim \pi(\cdot\,|\, s_t), ~s_{t+1} \sim {\mathcal{P}}(\cdot\,|\, s_t, a_t)], \label{eq:def:w}\\
U^\pi(s) & = \sum^\infty_{t = 0}\EE[ r(s_t, a_t)^2 - \eta(\pi) \, | \, s_0 = s, a_{t} \sim \pi(\cdot\,|\, s_t), ~s_{t+1} \sim {\mathcal{P}}(\cdot\,|\, s_t, a_t)]. \label{eq:def:u}
\end{align}
We denote by $\langle \cdot, \cdot \rangle$ the inner product over $\cA$, e.g., we have $V^\pi(s) = \EE_{a\sim\pi(\cdot\,|\,s)}[Q^\pi(s,a)] = \langle Q^\pi(s,\cdot), \pi(\cdot\,|\,s)\rangle$ and $U^\pi(s) = \EE_{a\sim\pi(\cdot\,|\,s)}[W^\pi(s,a)] = \langle W^\pi(s,\cdot), \pi(\cdot\,|\,s)\rangle$.
Throughout our discussion, we impose a standard assumption that the reward function is uniformly bounded. In particular, we let $M = \sup_{(s,a)\in {\mathcal{S}}\times\cA}|r(s, a)| < \infty$.
As an immediate consequence, we have that for any policy~$\pi$,
\# \label{eq:bound:rho:eta}
| \rho(\pi) | \le M, \quad \qquad |\eta(\pi)| \le M^2 .
\#
\vskip5pt
\noindent{\bf Variance-Constrained Problem.}
We consider the following constrained policy optimization problem to find a policy that maximizes the long-run average reward subject to the constraint that the long-run variance is upper bounded by a certain threshold. In particular, for a given $\alpha>0$, we consider the following constrained optimization problem
\#\label{eq:problem}
\max_{\pi}\rho(\pi) \text{ \qquad subject to } \Lambda(\pi)\leq\alpha.
\#
\vskip5pt
\noindent{\bf Deep Neural Networks.}
To facilitate our discussion, we briefly review some basics of deep neural networks (DNNs) \citep{allen2018convergence,gao2019convergence}. Let $x\in\RR^d$ be the input data. Suppose that we have a DNN with $H$ layers of width $m$. We denote by $W_h$ the weight matrix at the $h$-th layer for $ h\in [H]$, where $W_1\in\RR^{d\times m}$ and $W_h\in\RR^{m\times m}$ for $2\le h \le H$. For a DNN with depth $H$, width $m$, and parameter $\theta = \bigl(\mathop{\text{vec}}(W_1)^\top, \cdots, \mathop{\text{vec}}(W_H)^\top\bigr)^\top$, its output $u_\theta(x)$ is recursively defined~as
\#\label{eq:def-nn-form}
& x^{(0)} = x, \notag \\
& x^{(h)} = \frac{1}{\sqrt{m}}\cdot \sigma( W_h^\top x^{(h-1)} ), \quad \text{ for }h\in[H], \\
& u_\theta(x) = b^\top x^{(H)}, \notag
\#
where $\sigma(\cdot) = \max\{0, \cdot\}$ is the ReLU activation function, and $b\in\{-1,1\}^{m}$ is the output layer.
Without loss of generality, we assume that the input $x \in \mathbb{R}^d$ satisfies $\|x\|_2 = 1$, where $\|\cdot\|_2$ denotes the $\ell_2$-norm.
In the context of deep reinforcement learning, this can be achieved by having a known embedding function that maps each state-action pair to the unit sphere in $\RR^d$.
Besides, we initialize the network parameters randomly by
\# \label{eq:initialization}
&[W_1]_{i,j} \overset{\rm i.i.d.}{\sim} \mathcal N(0,1) \text{ for all } (i,j) \in [d] \times [m] , \notag \\
&[W_h]_{i,j} \overset{\rm i.i.d.}{\sim} \mathcal N(0,1) \text{ for all } (i,j) \in [m]\times[m] \text{ and } 2 \le h \le H , \\
&b_i \overset{\rm i.i.d.}{\sim} {\rm Unif}(\{-1,1\}) \text{ for all } i \in [m]. \notag
\#
Without loss of generality, we only update $\{W_h\}_{h\in[H]}$ throughout the training process, and fix the output layer~$b$ as its initialization. We denote by $\theta_0 = (\mathop{\text{vec}}(W_1^0)^\top, \cdots, \mathop{\text{vec}}(W_H^0)^\top)^\top$ the initialization of the network parameter.
In addition, we restrict network parameter~$\theta$ within a ball centered at $\theta_0$ with radius $R > 0$, which is given by
\#\label{eq:def-proj-set}
{\mathcal{B}}(\theta_0, R) = \bigl\{\theta \in \RR^{m_{\rm all}} \colon \|W_{h} - W_{h}^0\|_\text{F} \leq R \text{ for $h\in[H]$} \bigr\},
\#
where $\{W_{h}\}_{h\in[H]}$ and $\{W_{h}^0\}_{h\in[H]}$ are the weight matrices of network parameters $\theta$ and $\theta_0$, respectively, and $\|\cdot\|_\text{F}$ denotes the Frobenius norm. For any fixed depth $H$, width $m$, and radius $R > 0$, the corresponding class of DNNs is
\#\label{eq:def-dnn-class}
\cU(m, H, R) = \bigl\{u_\theta(\cdot)\colon \theta\in {\mathcal{B}}(\theta_0, R)\bigr\}.
\#
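A minimal NumPy sketch of the forward pass \eqref{eq:def-nn-form} with the initialization \eqref{eq:initialization}; the depth, width, and input dimension below are illustrative choices, not values used in our analysis.

```python
import numpy as np

# Sketch of the DNN defined above: ReLU layers with 1/sqrt(m) scaling,
# Gaussian-initialised weights, and a fixed +/-1 output layer b.
# Depth H, width m, and input dimension d are illustrative.
rng = np.random.default_rng(0)
d, m, H = 4, 64, 3

W = [rng.standard_normal((d, m))] + \
    [rng.standard_normal((m, m)) for _ in range(H - 1)]
b = rng.choice([-1.0, 1.0], size=m)  # output layer, fixed at init

def u_theta(x, W, b):
    """Forward pass: x^{(h)} = sigma(W_h^T x^{(h-1)}) / sqrt(m)."""
    h_out = x
    for W_h in W:
        m_h = W_h.shape[1]
        h_out = np.maximum(W_h.T @ h_out, 0.0) / np.sqrt(m_h)
    return b @ h_out

x = rng.standard_normal(d)
x = x / np.linalg.norm(x)   # inputs normalised to the unit sphere
print(u_theta(x, W, b))
```

Note that the network is positively homogeneous in its input, since every layer is linear followed by a ReLU.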
\section{Introduction}
Reinforcement learning (RL) is a powerful approach to solving multi-stage decision-making problems by interacting with the environment and learning from experiences.
Thanks to its practical efficacy, reinforcement learning has attracted substantial attention from communities such as operations research \citep{bertsekas1996neuro,mertikopoulos2016learning,wen2017efficient,wang2017stochastic,zeng2018asyncqvi}, computer science \citep{sutton1998introduction}, and statistics \citep{menictas2019artificial,clifton2020q}.
With the advance of deep learning, over the past few years we have witnessed phenomenal successes of deep reinforcement learning (DRL) in solving extremely challenging problems such as Go \citep{silver2016mastering, silver2017mastering, openai2019}, robotics \citep{kober2013reinforcement, gu2017deep}, and natural language processing \citep{narasimhan2015language}, which were once regarded as too complicated to be solved by computer programs.
Despite these empirical successes, providing theoretical justifications for deep reinforcement learning is rather challenging.
A significant difficulty is that the optimization problems associated with deep reinforcement learning are usually highly nonconvex, due to a combination of the following two sources.
First, under the risk-neutral setting, where the goal is to find a policy that maximizes the (long-run) average reward in expectation within a parametric policy class,
the optimization objective is
a nonconvex function of the policy parameter.
This is true even when the policy admits a tabular or linear parameterization, and the global convergence and optimality of policy optimization algorithms for these cases were only established recently. See, e.g., \cite{agarwal2020optimality,shani2020adaptive, mei2020global, cen2020fast} and the references therein.
Second, when the policy is represented by a deep neural network, due to its nonlinearity and complicated structure, policy optimization is significantly more challenging.
Theoretical guarantees for deep policy optimization are rather limited.
Recently, building upon the theory of the neural tangent kernel \citep{jacot2018neural},
\cite{liu2019neural, wang2019neural, fu2020single} prove that various actor-critic algorithms with overparameterized neural networks achieve global convergence and optimality.
In this paper,
going beyond the risk-neutral setting,
we
make the first attempt to study risk-sensitive deep reinforcement learning.
In particular,
we focus on the variance risk measure
\citep{sobel1982variance}
and aim to find a neural network policy that maximizes the expected value of the long-run average reward under the constraint that the variance of the long-run average reward is upper bounded by a certain threshold.
Here the variance constraint incorporates risk sensitivity --- the reinforcement learning agent is willing to accept a possibly smaller expected reward in exchange for a smaller variance.
Moreover, such a problem is substantially more challenging than the risk-neutral setting and has important applications.
Our goal is to establish an algorithm that provably finds a globally optimal solution to such a risk-sensitive policy optimization problem within the class of deep neural network policies.
To the best of our knowledge, this problem has never been considered in existing deep reinforcement learning literature.
\subsection{Motivating Applications}
Imposing the variance constraint is of substantial practical interest. We provide two concrete motivating applications. The first application is portfolio management. Reinforcement learning and deep reinforcement learning methods have been applied to portfolio optimization \citep{moody1998performance,jiang2017cryptocurrency}, where we dynamically allocate assets to maximize the total return over time. In such applications, while optimizing the expected total return, it is important to control the volatility/risk of the portfolio. In the celebrated Markowitz model \citep{markowitz1952portfolio}, the risk of a portfolio is measured by the variability/variance of returns, and the model exactly maximizes the expected total return for a given level of the variance of the total return.
The second example is robotics.
One of the emerging and promising applications of robotics is senior care/medicine \citep{kohlbacher2015leading,taylor2016medical,tan2020governing}. In these applications, while achieving the maximum expected return, it is extremely important to control the variability of the outcome, as a small change in a robot's operation could lead to devastating outcomes.
While deep reinforcement learning has achieved phenomenal successes in training robots \citep{gu2017deep,tai2017virtual} under the risk-neutral setting,
this example shows that risk-sensitive/variance-constrained deep reinforcement learning indeed calls for a principled solution.
\subsection{Major Contribution}
Incorporating a variance constraint into deep reinforcement learning raises several challenges.
First, this makes the optimization problem a constrained one.
Although there are various algorithms designed for the constrained Markov decision process (CMDP) \citep{altman1999constrained},
these methods cannot be directly applied to our variance-constrained problem.
In particular,
the constraint in a CMDP is unrelated to the reward function in the objective,
whereas the constraint in our problem is the variance of the long-run average reward.
Thus, handling such a constraint requires new algorithms. Second, as we employ deep neural network
policies, both the expected value and variance of the long-run average reward are
highly nonconvex functions of the policy parameter.
Third,
to obtain the
policy update directions,
we need to characterize the landscape of the variance of average reward as a functional of the policy.
As discussed in \cite{tamar2016learning},
due to the nonlinearity of the variance of a random variable in the probability space,
this raises a substantial challenge even in the simpler linear setting.
To tackle these challenges,
inspired by the celebrated actor-critic framework \citep{konda2000actor}, we propose a \underline{var}iance-constrained \underline{a}ctor-\underline{c}ritic (VARAC) algorithm,
where both the policy (actor) and the value functions (critic) are represented by multi-layer overparameterized neural networks.
Specifically, to handle the first challenge,
we transform the constrained problem into an unconstrained saddle point problem via Lagrangian duality.
Then, to cope with the third challenge, leveraging Fenchel duality, we further rewrite the variance in a variational form by introducing a dual variable.
Thus, the original problem is transformed into a saddle point problem involving the policy $\pi$, Lagrange multiplier $\lambda$, and dual variable $y$.
More importantly, when $\lambda$ and $y$ are fixed, the objective is equal to the long-run average of a transformed reward, and thus we can characterize its landscape for policy optimization.
For such a saddle point problem, VARAC updates
$\pi$, $\lambda$, and $y$ via first-order optimization.
Specifically, in each iteration, we update the policy $\pi$ via a proximal update with the Kullback-Leibler (KL) divergence serving as the Bregman divergence, $\lambda$ is updated via a (projected) gradient method, and the $y$-update step admits a closed-form solution.
Moreover,
the update directions are all based on the solution to
the inner problem of the critic, which corresponds to solving two policy evaluation problems determined by current $\lambda$ and $y$ via temporal-difference learning \citep{sutton1988learning} with deep neural networks.
Our KL-divergence regularized policy update is closely related to the trust-region policy optimization \citep{schulman2015trust} and proximal policy optimization
\citep{schulman2017proximal}, which have demonstrated great empirical successes.
Finally, to tackle the second challenge,
from a functional perspective,
we view the policy update of VARAC
as an instantiation of infinite-dimensional mirror descent \citep{beck2003mirror,zhang2018convergence},
which is well approximated by the parameter update of the policy when the neural network is overparameterized.
Thus,
we show that under mild assumptions, despite nonconvexity,
the policy sequence obtained by VARAC
converges to a globally optimal policy at a sublinear $\mathcal{O}(1/\sqrt{K})$ rate, where $K$ is the number of iterations.
In summary, our contribution is two-fold. First, to the best of our knowledge, we make the first attempt to study risk-sensitive deep reinforcement learning by imposing a variance-based risk constraint.
Second, we propose a novel actor-critic algorithm, dubbed VARAC, which provably finds a globally optimal policy of the variance-constrained problem at a sublinear rate.
We believe that our work brings a promising future research direction for both optimization and machine learning communities.
\subsection{Related Work}
Our work extends the field of risk-sensitive optimization. Risks are essentially measures of aleatoric uncertainty. In nature, there are two types of uncertainty \citep{clements2019estimating}. The first type is {\it epistemic} uncertainty, which refers to uncertainty caused by a lack of knowledge
and can be reduced by acquiring more data.
The second type is {\it aleatoric} uncertainty, which refers to inherent randomness, that is, the uncertainty due to the stochastic nature of the environment, which cannot be reduced even with unlimited data.
Optimizing returns while controlling the risk is of great practical importance.
Various risk measures have been proposed for different applications, including variance \citep{rubinstein1973mean}, value at risk (VaR) \citep{pflug2000some}, conditional value at risk (CVaR) \citep{rockafellar2000optimization}, and utility functions \citep{browne1995optimal}. The notion of risk has been widely studied in the optimization community over the past decades. See, e.g., \cite{ruszczynski2006conditional,ruszczynski2006optimization,ruszczynski2010risk,dentcheva2019risk,kose2020risk}, and the references therein.
Furthermore, our work is
closely related to the literature on risk-sensitive reinforcement learning
with the variance risk measure.
The study of the variance of the total returns in a Markov decision process (MDP) dates back to \cite{sobel1982variance}.
\cite{filar1989variance} formulates the variance-regularized MDP
as a nonlinear program.
\cite{mannor2011mean}
proves that finding an exact optimal policy of variance-constrained reinforcement learning is NP-hard, even when the model is known.
More recently,
with linear function approximation,
\cite{tamar2016learning} proposes a temporal-difference learning algorithm for estimating the variance of the total reward, and
\cite{tamar2013variance,prashanth2016variance, prashanth2018risk} propose actor-critic algorithms for variance-constrained policy optimization.
These works all establish asymptotic convergence guarantees via stochastic approximation \citep{borkar2009stochastic}.
A closely related work is \cite{xie2018block}, which proposes an actor-critic algorithm via Lagrangian and Fenchel duality and shows convergence to a stationary point at a sublinear rate under the linear setting. In contrast, our work employs deep neural networks, adopts a different KL-divergence regularized policy update, and our algorithm provably finds a globally optimal policy at a sublinear rate.
\vspace{5pt}
\noindent{\bf Paper Organization}. The rest of this paper is organized as follows. In Section~\ref{sec:2}, we briefly introduce some background knowledge. In Section~\ref{sec:alg}, we present the VARAC algorithm. In Section~\ref{sec:results}, we provide theoretical guarantees for the VARAC algorithm. We conclude the paper in Section~\ref{sec:con}.
\vspace{5pt}
\noindent{\bf Notations}.
For an integer $H$, we denote by $[H]$ the set $\{1, 2, \cdots, H\}$. Furthermore, we denote by $\| \cdot \|_2$ the $\ell_2$-norm of a vector or the spectral norm of a matrix, and denote by $\| \cdot \|_\text{F}$ the Frobenius norm of a matrix. Also, let $\{a_n\}_{n \ge 0}$ and $\{b_n\}_{n \ge 0}$ be two positive sequences. If there exists some positive constant $c$ such that $\limsup_{n \rightarrow \infty} a_n / b_n \le c$, we write $a_n = \mathcal{O}(b_n)$.
If $\liminf_{n \rightarrow \infty} a_n / b_n \ge c_1$ for some positive constant $c_1$, we write $a_n = \Omega(b_n)$.
\section{Algorithm}\label{sec:alg}
In this section, we present the Variance-Constrained Actor-Critic with Deep Neural Networks (VARAC) algorithm for solving the variance-constrained problem \eqref{eq:problem}.
\subsection{Problem Formulation}
As discussed in the introduction, a major challenge in solving problem \eqref{eq:problem} is that the constraint is difficult to handle. We first transform the problem into an unconstrained saddle point problem by employing the Lagrangian dual formulation:
\# \label{eq:lagrangian}
&\min_{\lambda}\max_{\pi}~\rho(\pi) - \lambda\bigl(\Lambda(\pi) - \alpha\bigr) =\min_{\lambda}\max_{\pi}~\rho(\pi) - \lambda\eta(\pi) +\lambda\rho(\pi)^2 + \lambda\alpha ,
\#
where the equality follows from \eqref{eq:eta}. As mentioned earlier, the quadratic term $\rho(\pi)^2$ makes the problem nonlinear in the probability distribution and raises substantial computational challenges. Following Lemma 1 of \cite{xie2018block}, we reformulate the problem by leveraging the Fenchel dual of the quadratic term. In particular, by Fenchel duality, we have $\rho(\pi)^2 =\max_{y\in\RR}(2y\rho(\pi)-y^2)$. The Lagrangian dual is then transformed into the following form:
\# \label{eq:problem-formulated}
&\min_{\lambda}\max_{\pi}~\rho(\pi) - \lambda\eta(\pi) +\lambda\rho(\pi)^2 + \lambda\alpha \notag\\
&\qquad=\min_{\lambda}\max_{\pi}~\rho(\pi) - \lambda\eta(\pi) + \lambda\max_{y}\bigl(-y^2 + 2y\rho(\pi)\bigr) + \lambda\alpha \notag \\
&\qquad=\min_{\lambda}\max_{\pi}\max_{y}~(1 + 2\lambda y)\rho(\pi) - \lambda\eta(\pi) - \lambda y^2 + \lambda\alpha .
\#
To facilitate our discussion, we denote the Lagrangian dual function as
\#\label{eq:def-L}
{\mathcal{L}}(\lambda,\pi,y) = (1 + 2\lambda y)\rho(\pi) - \lambda\eta(\pi) - \lambda y^2 + \lambda\alpha .
\#
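The Fenchel-dual identity $\rho^2=\max_{y\in\RR}(2y\rho-y^2)$ underlying \eqref{eq:def-L} is easy to sanity-check numerically. The sketch below (a grid search over $y$; the grid and the value of $\rho$ are illustrative choices, not part of the paper) confirms that the maximum is attained at $y=\rho$ with value $\rho^2$.

```python
import numpy as np

def fenchel_square(rho, ys):
    """Grid-evaluate the variational form max_y (2*y*rho - y**2) of rho**2."""
    vals = 2.0 * ys * rho - ys ** 2
    i = int(np.argmax(vals))
    return ys[i], vals[i]

rho = 0.7                          # illustrative value of rho(pi)
ys = np.linspace(-2.0, 2.0, 4001)  # grid containing the maximizer y* = rho
y_star, val = fenchel_square(rho, ys)
print(y_star, val)  # maximum attained at y* = rho with value rho**2
```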
To handle the potentially complicated functional structures, we propose to use DNNs to represent the policy $\pi$, differential action-value function $Q$ defined in \eqref{eq:def:q}, and differential action-value function $W$ associated with the squared reward defined in \eqref{eq:def:w}. In particular, we consider the energy-based policy $\pi_\theta(a\,|\, s) \propto \exp( \tau^{-1} f_\theta(s,a) )$, where the energy function $f_\theta(s,a)\in\cU(m_{\rm a}, H_{\rm a}, R_{\rm a})$ is parameterized as a DNN with network parameter~$\theta$ \citep{ haarnoja2017reinforcement,wang2019neural}. Also, we assume that $Q(s,a) = Q_{q}(s, a)$ and $W(s,a) = W_{\omega}(s,a)$ for all $(s,a)\in{\mathcal{S}}\times\cA$, where $Q_q(s,a)\in\cU(m_{\rm c}, H_{\rm c}, R_{\rm c})$ and $W_\omega(s,a)\in\cU(m_{\rm b}, H_{\rm b}, R_{\rm b})$ are parameterized as DNNs with network parameters $q$ and $\omega$, respectively.
\subsection{VARAC Algorithm}
We propose the variance-constrained actor-critic with deep neural networks (VARAC) algorithm to solve \eqref{eq:problem}. The algorithm follows the general framework of the actor-critic method \citep{konda2000actor}, which solves the unconstrained problem of maximizing the long-run average reward in \eqref{eq:rho}. At each iteration, in the actor update step, we improve the policy: given the previous estimator for the Q-function, we compute an estimator for the policy gradient and conduct a gradient step on the policy. In the critic update step, plugging in the updated policy, we invoke a policy evaluation algorithm to update the estimator for the Q-function.
In our setting, due to the variance constraint, the problem is substantially more challenging, and the actor-critic algorithm cannot be directly applied. As discussed in the previous subsection, by employing the Lagrangian and Fenchel dual formulations, we aim to solve the unconstrained min-max-max problem \eqref{eq:problem-formulated}. Specifically, at each iteration, we first conduct an actor update step, where we update $\lambda$ and $\pi$. In particular, using the solutions from iteration $k$, we update the Lagrange multiplier $\lambda$ by a projected gradient descent step. Next, we update $\pi_\theta$ by the proximal policy optimization (PPO) algorithm \citep{schulman2017proximal}, where we maximize a KL-penalized objective over $\theta$. To be more specific, in updating $\pi_\theta$, plugging in the previous estimators for the Q-function in \eqref{eq:def:q} and the W-function in \eqref{eq:def:w}, we aim to maximize a linearized version of ${\mathcal{L}}(\lambda_{k},\pi_{\theta_{k+1}}, y_k)$ over $\theta_{k+1}$, penalized by the KL-divergence between $\pi_{\theta_{k+1}}$ and $\pi_{\theta_{k}}$, which is equivalent to
$$
\max_{\theta_{k+1}} \EE_{\nu_{\pi_{\theta_k}}} \bigl[ \bigl\langle (1+ 2{\lambda}_{k}{y}_{k})Q_{q_k}(s, \cdot) - {\lambda}_{k}W_{\omega_{k}}(s, \cdot), \pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr\rangle- \beta_k \cdot {\rm KL}\bigl(\pi_{\theta_{k+1}}(\cdot\,|\,s)\,\|\, \pi_{\theta_{k}}(\cdot\,|\,s)\bigr) \bigr].
$$
The key observation is that, by considering energy-based policies, the problem above admits a tractable solution and can be computed efficiently. Finally, we update $y$ by maximizing a quadratic function.
For the critic update step, we update the estimators for the Q-function and W-function by minimizing the Bellman errors. Recall that we parameterize the Q-function and W-function by deep neural networks with parameters $q$ and $\omega$, respectively. The Bellman error minimization problems are
$$
\min_{q \in {\mathcal{B}}(q_0,R_c)}\EE_{\sigma_k}\bigl[\bigl(Q_q(s,a) - [{\mathcal{T}}^{\pi_{\theta_k}}Q_q](s,a)\bigr)^2\bigr], \text{ and }\min_{\omega \in {\mathcal{B}}(\omega_0,R_{\rm b})}\EE_{\sigma_k}\bigl[\bigl(W_\omega(s,a) - [\hat{{\mathcal{T}}}^{\pi_{\theta_k}}W_\omega](s,a)\bigr)^2\bigr],
$$
where ${\mathcal{T}}^{\pi_{\theta_k}}$ and $\hat{{\mathcal{T}}}^{\pi_{\theta_k}}$ are Bellman operators defined later in \eqref{eq:bellman1} and \eqref{eq:bellman2}, respectively.
We solve these problems by the temporal difference (TD) method \citep{sutton1988learning}.
We now present the details of the VARAC algorithm. At the $(k-1)$-th iteration, we estimate $\rho(\pi_{\theta_{k}})$ and $\eta(\pi_{\theta_{k}})$ by their sample average estimators
\# \label{eq:estimate:rho:eta}
\overline{\rho}(\pi_{\theta_{k}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t^{k}, a_t^{k}), \qquad \overline{\eta}(\pi_{\theta_{k}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t^{k}, a_t^{k})^2 ,
\#
where $T$ is the sample size, and $\{(s_t^{k},a_t^{k})\}_{t=1}^T$ are simulated samples of states and actions following the policy from the previous iteration. In what follows, with some slight abuse of notation, we write $(s_t^{k},a_t^{k})$ as $(s_t,a_t)$. Note that by the boundedness of the reward, we have
\# \label{eq:bound:rho:eta:bar}
| \overline{\rho}(\pi_{\theta_{k}}) | \le M, \quad \qquad | \overline{\eta}(\pi_{\theta_{k}}) | \le M^2 .
\#
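The estimators in \eqref{eq:estimate:rho:eta} are plain sample averages along a simulated trajectory. A minimal sketch, assuming a toy two-state reward chain in place of the actual MDP (the chain, rewards, and the name \texttt{rollout\_rewards} are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_rewards(T):
    """Simulate T rewards from a toy two-state Markov chain (illustrative only)."""
    s, rewards = 0, []
    for _ in range(T):
        s = rng.choice(2, p=[0.9, 0.1] if s == 0 else [0.2, 0.8])
        rewards.append(1.0 if s == 0 else -0.5)  # bounded reward, |r| <= M = 1
    return np.array(rewards)

r = rollout_rewards(T=10_000)
rho_bar = r.mean()         # estimator of the long-run average reward rho(pi)
eta_bar = (r ** 2).mean()  # estimator of the average squared reward eta(pi)
print(rho_bar, eta_bar, eta_bar - rho_bar ** 2)  # last value: plug-in variance estimate
```

Note that $|\overline{\rho}| \le M$ and $|\overline{\eta}| \le M^2$ hold automatically for bounded rewards, as in \eqref{eq:bound:rho:eta:bar}.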
We then present the actor and critic updates at each iteration.
\vskip4pt
{\noindent\bf Actor Update: (i) $\lambda$-Update Step.}
At the $k$-th iteration, given the solution $(\overline{\lambda}_k,{\pi}_{\theta_k},\overline{y}_k)$ from the $(k-1)$-th iteration, we compute $\lambda_{k+1}$ using the projected gradient method, where we project the solution onto a bounded region to guarantee the convergence \citep{prashanth2013actor,prashanth2016variance}.
In particular, we choose a sufficiently large $N > 0$ and update $\lambda_{k+1}$ as
\$
\lambda_{k+1} = \Pi_{[0,N]}\big(\lambda_k - \frac{1}{2\gamma_k}\partial_\lambda{\mathcal{L}}(\lambda_k,\pi_{\theta_k},y_{k}) \big) =\Pi_{[0,N]}\Big(\lambda_k - \frac{1}{2\gamma_k}\big(\alpha + 2y_{k}\rho(\pi_{\theta_k}) - \eta(\pi_{\theta_k}) -y_k^2\big) \Big),
\$
where $\gamma_k>0$ is some prespecified stepsize.
As discussed previously, we do not observe $\rho(\pi_{\theta_k})$ and $\eta(\pi_{\theta_k})$, and as discussed later in the $y$-update step, we do not observe the ``ideal'' $y_k$. Instead, we estimate them using $\overline{\rho}(\pi_{\theta_k})$, $\overline{\eta}(\pi_{\theta_k})$ in \eqref{eq:estimate:rho:eta} and $\overline{y}_k$ in \eqref{eq:update-y-form}, respectively. We then adopt the following plug-in estimator $\overline{\lambda}_{k+1}$ for the ``ideal'' $\lambda_{k+1}$:
\#\label{eq:update-lambda-form}
\overline{\lambda}_{k+1} = \Pi_{[0,N]}\Bigl(\overline{\lambda}_k - \frac{1}{2\gamma_k}\bigl(\alpha + 2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2\bigr) \Bigr).
\#
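The plug-in update \eqref{eq:update-lambda-form} is a single projected gradient step. A sketch with illustrative numbers ($N$, $\alpha$, $\gamma_k$, and the estimates below are arbitrary choices, not values from the paper):

```python
import numpy as np

def lambda_update(lam, rho_bar, eta_bar, y_bar, alpha, gamma_k, N):
    """Projected gradient step for the Lagrange multiplier (plug-in form)."""
    grad = alpha + 2.0 * y_bar * rho_bar - eta_bar - y_bar ** 2  # plug-in dL/dlambda
    return float(np.clip(lam - grad / (2.0 * gamma_k), 0.0, N))

# Illustrative numbers: the variance estimate eta_bar - rho_bar**2 = 0.74 exceeds
# alpha = 0.2, i.e., the constraint is violated, so the multiplier increases.
lam_next = lambda_update(lam=0.5, rho_bar=0.4, eta_bar=0.9, y_bar=0.4,
                         alpha=0.2, gamma_k=1.0, N=10.0)
print(lam_next)
```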
\vskip4pt
{\noindent\bf (ii) $\pi$-Update Step.}
Note that the policy $\pi$ is parametrized by $\theta$. By the proximal policy optimization method \citep{schulman2017proximal}, we update our policy $\pi_{\theta_{k+1}}$ by maximizing the following $\rm{KL}$-penalized objective over $\theta_{k+1}$,
\#\label{eq:dnn2222}
\begin{aligned}
\max_{\theta_{k+1}}L(\theta_{k+1}) = \EE_{\nu_{\pi_{\theta_k}}} \bigl[ & \bigl\langle (1+ 2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k}(s, \cdot) - \overline{\lambda}_{k}W_{\omega_{k}}(s, \cdot), \pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr\rangle \\
& - \beta_k \cdot {\rm KL}\bigl(\pi_{\theta_{k+1}}(\cdot\,|\,s)\,\|\, \pi_{\theta_{k}}(\cdot\,|\,s)\bigr) \bigr],
\end{aligned}
\#
where $\overline{\lambda}_k$ in \eqref{eq:update-lambda-form} and $\overline{y}_k$ in \eqref{eq:update-y-form} are the estimators for $\lambda_k$ and $y_k$, and $\beta_k>0$ is some prespecified penalty parameter. Note that here we use DNNs $Q_{q_k}$ and $W_{\omega_k}$ to estimate $Q^{\pi_{\theta_k}}$ and $W^{\pi_{\theta_k}}$, and we provide the theoretical justifications of using DNNs in Section~\ref{sec:ac-error}.
Solving problem~\eqref{eq:dnn2222} is challenging since the gradient of {\rm KL}-divergence in the objective is difficult to derive. To efficiently and approximately solve the maximization problem~\eqref{eq:dnn2222}, we consider the energy-based policy
$\pi_{\theta_{k+1}} \propto \exp(\tau_{k+1}^{-1} f_{\theta_{k+1}})$, where $\tau_{k+1} > 0$ is a temperature parameter, and $f_\theta(s,a)\in\cU(m_{\rm a}, H_{\rm a}, R_{\rm a})$, which is parameterized by a DNN with network parameter~$\theta$, is an energy function \citep{liu2019neural}. The next proposition shows that problem \eqref{eq:dnn2222} admits a tractable solution of the oracle infinite-dimensional policy~update.
\begin{proposition}\label{prop:exp-policy}
Let $\pi_{\theta_{k+1}} \propto \exp(\tau_{k+1}^{-1} f_{\theta_{k+1}})$ be an energy-based policy. For any given $\overline{\lambda}_{k}$ and $\overline{y}_{k}$, and given estimators $Q_{q_k}$, $W_{\omega_{k}}$ for $Q^{\pi_{\theta_k}}$ and $W^{\pi_{\theta_k}}$ respectively, we have that $\hat{\pi}_{k+1} = \argmax_{\pi}(\EE_{\nu_k}[\langle (1+2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k}(s, \cdot) - \overline{\lambda}_{k}W_{\omega_{k}}(s, \cdot),\pi(\cdot\,|\,s) \rangle - \beta_k \cdot {\rm KL}(\pi(\cdot\,|\,s)\,\|\, \pi_{\theta_k}(\cdot\,|\,s))])$ satisfies
\#\label{eq:pi-new}
\hat{\pi}_{k+1} \propto \exp\big(\beta_k^{-1}(1 + 2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k} - \beta_k^{-1}\overline{\lambda}_{k}W_{\omega_{k}} + \tau_k^{-1} f_{\theta_k}\big),
\#
where $\nu_k = \nu_{\pi_{\theta_k}}$ is the stationary state distribution generated by $\pi_{\theta_k}.$
\end{proposition}
\begin{proof}
See Appendix \ref{appendix:alg-proof} for the detailed proof.
\end{proof}
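For a finite action set, the update \eqref{eq:pi-new} is a softmax over a transformed energy. A minimal sketch, with toy per-action vectors standing in for the neural estimators $Q_{q_k}$, $W_{\omega_{k}}$ and the old energy $f_{\theta_k}$ at a fixed state:

```python
import numpy as np

def policy_update(f_old, Q, W, lam, y, beta_k, tau_k):
    """Energy-based policy update: pi_new is proportional to
    exp(beta_k^{-1}*(1 + 2*lam*y)*Q - beta_k^{-1}*lam*W + tau_k^{-1}*f_old)."""
    energy = ((1.0 + 2.0 * lam * y) * Q - lam * W) / beta_k + f_old / tau_k
    z = np.exp(energy - energy.max())  # numerically stabilized softmax
    return z / z.sum()

# Toy per-action values at a fixed state s (illustrative only).
Q = np.array([1.0, 0.2, -0.5])
W = np.array([0.8, 0.1, 0.3])
f_old = np.zeros(3)
pi_new = policy_update(f_old, Q, W, lam=0.5, y=0.4, beta_k=2.0, tau_k=1.0)
print(pi_new)
```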
By Proposition \ref{prop:exp-policy}, we update the policy parameter $\theta$ by solving the following problem,
{\small
\#\label{eq:policy-update}
\theta_{k+1} = \argmin_{\theta \in {\mathcal{B}}(\theta_0,R_a)}\EE_{{\sigma}_k} \bigl[\bigl(f_{\theta}(s,a) - \tau_{k+1}\cdot(\beta_k^{-1}(1 + 2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k} - \beta_k^{-1}\overline{\lambda}_{k}W_{\omega_{k}} + \tau^{-1}_{k}f_{\theta_k}(s,a))\bigr)^2\bigr],
\#}
where $\sigma_k$ is the stationary state-action distribution of $\pi_{\theta_k}$. That is, we minimize the squared distance between $f_{\theta_{k+1}}$ and the target on the right-hand side of \eqref{eq:policy-update}.
To solve \eqref{eq:policy-update}, we adopt the projected stochastic gradient descent method. Specifically, given an initial $\theta_0$, at the $t$-th iteration, we update
\begin{equation}\label{eq:sgd-update1}
\begin{aligned}
\theta{(t+1)} \leftarrow \Pi_{{\mathcal{B}}(\theta_0, R_a)}\Bigl(&\theta{(t)} - \zeta \cdot \bigl(f_{\theta{(t)}}(s,a) - \tau_{k+1}\cdot\big(\beta_k^{-1}(1 + 2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k} \\
&- \beta_k^{-1}\overline{\lambda}_{k}W_{\omega_{k}} + \tau^{-1}_{k}f_{\theta_k}(s,a)\big)\bigr)\cdot\nabla_\theta f_{\theta{(t)}}(s,a)\Bigr),
\end{aligned}
\end{equation}
where the operator $\Pi_{{\mathcal{B}}(\theta_0, R_a)}(\cdot)$ projects the solution onto the set ${\mathcal{B}}(\theta_0, R_a)$ defined in \eqref{eq:def-proj-set}, the state-action pair $(s, a)$ is sampled from $\sigma_k = \sigma_{\pi_{\theta_k}}$, and $\zeta>0$ is the stepsize. See Algorithm~\ref{alg:ac-update} in Appendix \ref{appendix:alg} for a pseudocode.
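To make the regression in \eqref{eq:policy-update}--\eqref{eq:sgd-update1} concrete, the sketch below replaces the DNN energy by a linear stand-in $f_\theta(s,a)=\theta^\top\phi(s,a)$ (an illustrative simplification; the paper uses an overparameterized network) and performs one projected SGD step toward the target, with the projection ball centered at $\theta_0 = 0$ for simplicity:

```python
import numpy as np

def sgd_step(theta, phi, target, zeta, R_a):
    """One projected SGD step fitting f_theta(s,a) = theta . phi(s,a) to a scalar
    target, with projection onto the ball of radius R_a centered at theta_0 = 0."""
    resid = theta @ phi - target            # f_theta(s,a) minus the target value
    theta_new = theta - zeta * resid * phi  # gradient of f_theta w.r.t. theta is phi
    norm = np.linalg.norm(theta_new)
    return theta_new if norm <= R_a else theta_new * (R_a / norm)

# Target value at (s, a) with illustrative Q, W, lambda, y, beta, tau numbers.
tau_next, beta_k, tau_k = 1.0, 2.0, 1.0
Q_val, W_val, lam, y, f_old_val = 1.0, 0.8, 0.5, 0.4, 0.0
target = tau_next * ((1 + 2 * lam * y) * Q_val / beta_k - lam * W_val / beta_k
                     + f_old_val / tau_k)
theta = sgd_step(np.zeros(3), np.array([1.0, 0.5, 0.0]), target, zeta=0.1, R_a=5.0)
print(theta)
```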
\vskip4pt
{\noindent\bf (iii) $y$-Update Step.}
Given $\overline{\lambda}_{k+1}$ and $\theta_{k+1}$, we update $y_{k+1}= \argmax_y {\mathcal{L}}(\overline{\lambda}_{k+1},\pi_{\theta_{k+1}},y)$. By the property of the quadratic function, it is easy to see that $y_{k+1}=\rho(\pi_{\theta_{k+1}})$. However, since $\rho(\pi_{\theta_{k+1}})$ is unknown,
we adopt $\overline{\rho}(\pi_{\theta_{k+1}})$ defined in \eqref{eq:estimate:rho:eta} as an estimator for $y_{k+1}$:
\#\label{eq:update-y-form}
\overline{y}_{k+1} = \overline{\rho}(\pi_{\theta_{k+1}}).
\#
\vskip4pt
{\noindent\bf Critic Update: (i) $q$-Update Step.}
In the critic update, we evaluate the current solution $(\overline{\lambda}_k,\pi_{\theta_k},\overline{y}_k)$ by estimating the corresponding value functions. We first consider the differential action-value function (Q-function), and derive an estimator $Q_{q_k}$ for $Q^{\pi_{\theta_k}}$ in \eqref{eq:dnn2222}, where $Q_{q_k}$ is parametrized as a DNN with network parameter $q_k$. To obtain $q_k$, we solve the following least-squares problem
\#\label{eq:mspbe1}
q_{k} = \argmin_{q \in {\mathcal{B}}(q_0,R_c)}\EE_{\sigma_k}[\bigl(Q_q(s,a) - [{\mathcal{T}}^{\pi_{\theta_k}}Q_q](s,a)\bigr)^2],
\#
where $\sigma_k$ is the stationary state-action distribution of the policy $\pi_{\theta_k}$.
Here the Bellman operator ${\mathcal{T}}^\pi$ of a policy $\pi$ is defined as
\#\label{eq:bellman1}
[{\mathcal{T}}^\pi Q](s, a) = \EE [r(s,a) -\rho(\pi) + Q(s',a') \,\big|\, s'\sim{\mathcal{P}}(\cdot\,|\,s, a),~a'\sim\pi(\cdot\,|\,s')].
\#
Recall that $Q_{q} \in {\cU} (m_c, H_c, R_c)$ is defined through a deep neural network in \eqref{eq:def-dnn-class}, where $q$ is the network parameter, $H_c$ is the depth, $m_c$ is the width, and $R_c$ is the projection radius. To solve \eqref{eq:mspbe1}, given an initial $q_0$, we use the iterative TD update: at the $t$-th iteration, we let
\begin{equation}\label{eq:td-update1}
\begin{aligned}
q{(t+1)} \leftarrow \Pi_{{\mathcal{B}}_{(q_0,R_{\rm c})}}\Bigl(&q{(t)} - \delta\cdot \bigl(Q_{q{(t)}}(s,a) - r(s, a) \\
&+ \overline{\rho}(\pi_{\theta_{k}}) - Q_{q{(t)}}(s', a')\bigr)\cdot\nabla_q Q_{q{(t)}}(s,a)\Bigr),
\end{aligned}
\end{equation}
{\noindent where $(s, a) \sim \sigma_k$, $s'\sim{\mathcal{P}}(\cdot\,|\,s,a)$, $a' \sim \pi_{\theta_k}(\cdot\,|\,s')$, and $\delta$ is the stepsize. See Algorithm \ref{alg:td} in Appendix \ref{appendix:alg} for a pseudocode.}
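The TD recursion \eqref{eq:td-update1} is driven by the average-reward TD error $Q(s,a) - r(s,a) + \overline{\rho} - Q(s',a')$. The sketch below uses a linear stand-in $Q_q(s,a)=q^\top\phi(s,a)$ in place of the paper's DNN (an illustrative simplification), with the projection ball centered at $q_0=0$:

```python
import numpy as np

def td_step(q, phi_sa, phi_next, r, rho_bar, delta, R_c):
    """One average-reward TD(0) step for a linear Q_q(s,a) = q . phi(s,a),
    with projection onto the ball of radius R_c centered at q_0 = 0."""
    td_error = q @ phi_sa - (r - rho_bar + q @ phi_next)
    q_new = q - delta * td_error * phi_sa  # gradient of Q_q w.r.t. q is phi_sa
    norm = np.linalg.norm(q_new)
    return q_new if norm <= R_c else q_new * (R_c / norm)

q = np.zeros(4)
phi_sa = np.array([1.0, 0.0, 0.5, 0.0])    # feature of the current pair (s, a)
phi_next = np.array([0.0, 1.0, 0.0, 0.5])  # feature of the next pair (s', a')
q = td_step(q, phi_sa, phi_next, r=1.0, rho_bar=0.3, delta=0.1, R_c=5.0)
print(q)
```

The same recursion with $r^2$ and $\overline{\eta}$ in place of $r$ and $\overline{\rho}$ gives the $\omega$-update.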
\vskip4pt
{\noindent\bf (ii) $\omega$-Update Step.}
Next, we derive an estimator $W_{\omega_k}$ for $W^{\pi_{\theta_k}}$ in \eqref{eq:dnn2222}. The procedure is similar to the previous step. We solve the following least-squares problem to obtain~$\omega_{k}$,
\#\label{eq:mspbe2}
\omega_{k} = \argmin_{\omega \in {\mathcal{B}}(\omega_0,R_{\rm b})}\EE_{\sigma_k}\bigl[\bigl(W_\omega(s,a) - [\hat{{\mathcal{T}}}^{\pi_{\theta_k}}W_\omega](s,a)\bigr)^2\bigr],
\#
where the operator $\hat{{\mathcal{T}}}^\pi$ of a policy $\pi$ is defined as
\#\label{eq:bellman2}
[\hat{{\mathcal{T}}}^\pi W](s, a) = \EE[r(s,a)^2 -\eta(\pi) + W(s',a') \,\big|\, s'\sim{\mathcal{P}}(\cdot\,|\,s, a),~a'\sim\pi(\cdot\,|\,s')].
\#
As discussed earlier, we parameterize $W$ using a DNN, letting $W_{\omega} \in {\cU} (m_b, H_b, R_b)$ as defined in \eqref{eq:def-dnn-class}, where $\omega$ is the network parameter, $H_b$ is the depth, $m_b$ is the width, and $R_b$ is the projection radius. To solve \eqref{eq:mspbe2}, given an initial $\omega_0$, we use the TD update: at the $t$-th iteration, we let
\begin{equation}\label{eq:td-update2}
\begin{aligned}
\omega{(t+1)} \leftarrow \Pi_{{\mathcal{B}}_{(\omega_0,R_b)}}\Bigl(& \omega{(t)} - \delta\cdot \bigl(W_{\omega{(t)}}(s,a) - r(s, a)^2 \\
& + \overline{\eta}(\pi_{\theta_{k}}) - W_{\omega{(t)}}(s', a')\bigr)\cdot\nabla_\omega W_{\omega{(t)}}(s,a)\Bigr),
\end{aligned}
\end{equation}
where $(s, a) \sim \sigma_k$, $s'\sim{\mathcal{P}}(\cdot\,|\,s,a)$, $a' \sim \pi_{\theta_k}(\cdot\,|\,s')$, and $\delta$ is the stepsize. See Algorithm \ref{alg:td2} in Appendix \ref{appendix:alg} for a pseudocode.
Putting the actor and critic updates together, we present the pseudocode of the VARAC algorithm in Algorithm~\ref{alg:risac}.
\begin{algorithm}
\caption{Variance-Constrained Actor-Critic with Deep Neural Networks}
\begin{algorithmic}[1]\label{alg:risac}
\REQUIRE MDP $({\mathcal{S}}, \cA, {\mathcal{P}}, r)$, penalty parameter $\beta$, widths $m_a$, $m_b$ and $m_c$, depths $H_a$, $H_b$ and $H_c$, projection radii $R_a$, $R_b$ and $R_c$, number of SGD and TD iterations $T$ and number of VARAC iterations $K$
\STATE Initialize with uniform policy: $\tau_0\leftarrow 1$, $f_{\theta_0} \leftarrow 0$, $ \pi_{\theta_0} \leftarrow \pi_0 \propto \exp(\tau_0^{-1}f_{\theta_0})$
\STATE Sample $\{(s_t, a_t, a^0_t, s_t', a_t')\}^{T}_{t = 1}$ with $(s_t, a_t) \sim \sigma_0$, $a^0_t\sim \pi_0(\cdot\,|\,s_t)$, $s_t'\sim{\mathcal{P}}(\cdot\,|\,s_t, a_t)$ and $a_t' \sim \pi_{\theta_0}(\cdot\,|\,s_t')$
\STATE Estimate $\rho(\pi_{\theta_{0}})$ and $\eta(\pi_{\theta_{0}})$ by $\overline{\rho}(\pi_{\theta_{0}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t, a_t)$ and $\overline{\eta}(\pi_{\theta_{0}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t, a_t)^2$
\FOR{$k = 0, \dots, K-1$}
\STATE Set temperature parameter $\tau_{k+1} \leftarrow \beta\sqrt{K}/(k+1)$ and penalty parameter $\beta_k \leftarrow \beta\sqrt{K}$
\STATE Solve $Q_{q_{k}}(s, a) \in \cU(m_{\rm c}, H_{\rm c}, R_{\rm c})$ in \eqref{eq:mspbe1} using the TD update in \eqref{eq:td-update1} (Algorithm \ref{alg:td})\label{line:td:q}
\STATE Solve $W_{\omega_{k}}(s, a) \in \cU(m_{\rm b}, H_{\rm b}, R_{\rm b})$ in \eqref{eq:mspbe2} using the TD update in \eqref{eq:td-update2} (Algorithm~\ref{alg:td2})\label{line:td:w}
\STATE Update $\lambda$ : $\overline{\lambda}_{k+1} = \Pi_{[0,N]}\bigl(\overline{\lambda}_k - \frac{1}{2\gamma_k}(\alpha + 2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2) \bigr)$
\STATE Solve $f_{\theta_{k+1}} \in \cU(m_{\rm a}, H_{\rm a}, R_{\rm a})$ in \eqref{eq:policy-update} using the SGD update in \eqref{eq:sgd-update1} (Algorithm~\ref{alg:ac-update})\label{line:sgd}
\STATE Update policy: $\pi_{\theta_{k+1}} \propto \exp(\tau_{k+1}^{-1}f_{\theta_{k+1}})$ \label{line:c}
\STATE Sample $\{(s_t, a_t, a^0_t, s_t', a_t')\}^{T}_{t = 1}$ with $(s_t, a_t) \sim \sigma_{k+1}$, $a^0_t\sim \pi_0(\cdot\,|\,s_t)$, $s_t'\sim{\mathcal{P}}(\cdot\,|\,s_t, a_t)$ and $a_t' \sim \pi_{\theta_{k+1}}(\cdot\,|\,s_t')$
\STATE Estimate $\rho(\pi_{\theta_{k+1}})$ and $\eta(\pi_{\theta_{k+1}})$ by $\overline{\rho}(\pi_{\theta_{k+1}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t, a_t)$ and $\overline{\eta}(\pi_{\theta_{k+1}}) = \frac{1}{T}\cdot\sum^{T}_{t=1}r(s_t, a_t)^2$\label{line:estimate}
\STATE Update $y$ : $\overline{y}_{k+1} = \overline{\rho}(\pi_{\theta_{k+1}}) $
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Theoretical Results}\label{sec:results}
In this section, we establish the convergence of the proposed VARAC algorithm by analyzing the estimation and computation errors, and we show that the duality gap diminishes at an $\mathcal{O}(1/\sqrt{K})$ rate. Before going further, we first impose some mild assumptions.
\begin{assumption}[]\label{assumption:unique-solution}
There exists a saddle point $(\lambda^*, \pi^*, y^*)$, which is a solution of the saddle point optimization problem \eqref{eq:problem-formulated}.
\end{assumption}
\begin{assumption}[]\label{assumption:closed-q-w}
For any $Q_q \in \cU(m_{\rm c},H_{\rm c},R_{\rm c})$, $W_\omega \in \cU(m_{\rm b},H_{\rm b},R_{\rm b})$, and policy $\pi$, we have ${\mathcal{T}}^\pi Q_q \in \cU(m_{\rm c},H_{\rm c},R_{\rm c})$ and $\hat{{\mathcal{T}}}^\pi W_\omega \in \cU(m_{\rm b},H_{\rm b},R_{\rm b})$.
\end{assumption}
Assumption \ref{assumption:unique-solution} assumes the existence of a solution. Assumption \ref{assumption:closed-q-w} assumes that the class of DNNs $\cU(m_{\rm c},H_{\rm c},R_{\rm c})$ in \eqref{eq:def-dnn-class} is closed under the Bellman operator ${\mathcal{T}}^\pi$ defined in \eqref{eq:bellman1}, and that the class $\cU(m_{\rm b},H_{\rm b},R_{\rm b})$ is closed under the operator $\hat{{\mathcal{T}}}^\pi$ defined in \eqref{eq:bellman2}. Such an assumption is standard in the literature \citep{munos2008finite,antos2008fitted,tosatto2017boosted,yang2019theoretical,liu2019neural}.
Furthermore, to guarantee the convergence of the TD updates \eqref{eq:td-update1} and \eqref{eq:td-update2}, we need an additional contraction condition, which is common in the reinforcement learning literature \citep{van1998learning}.
In particular, suppose $s \in \cR^d$, and consider the Hilbert space $L_2(\cR^d, {\mathcal{B}}(\cR^d), \pi)$ endowed with the inner product $\langle J_1,J_2\rangle_\pi = \int J_1(s)J_2(s)\pi(ds)$ for real-valued functions $J_1,J_2$ in this space. Also, for any policy $\pi$, we denote by $P^\pi$ the operator given by $(P^\pi J)(s) = \EE_{\pi}[J(s_1) \,|\, s_0 = s]$. The following assumption states the contraction property of the operator $P^\pi$.
\begin{assumption}\label{assumption:contraction}
For any policy $\pi$, there exists a constant $\beta_\pi \in [0,1)$ such that $\|P^\pi J\|_\pi \le \beta_\pi \| J \|_\pi$, where $\|J\|_\pi = \langle J, J\rangle _\pi^{1/2}$, for all $J \in L_2(\cR^d, {\mathcal{B}}(\cR^d), \pi)$ that are orthogonal to the constant function $e \equiv 1$.
\end{assumption}
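As a hedged numerical illustration of Assumption \ref{assumption:contraction}, the following minimal Python sketch checks the contraction property on a small ergodic Markov chain. The three-state transition matrix is an illustrative assumption for this sketch only (the paper's setting is continuous-state), not part of the analysis.

```python
import numpy as np

# Illustrative 3-state ergodic chain (an assumption for this sketch only).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def pi_norm(J):
    """pi-weighted L2 norm, ||J||_pi = <J, J>_pi^{1/2}."""
    return np.sqrt(np.sum(pi * J ** 2))

# (P J)(s) = E[J(s_1) | s_0 = s] is the matrix-vector product P @ J.
rng = np.random.default_rng(0)
ratios = []
for _ in range(200):
    J = rng.standard_normal(3)
    J = J - np.sum(pi * J)                # enforce <J, e>_pi = 0
    ratios.append(pi_norm(P @ J) / pi_norm(J))

beta_pi = max(ratios)                     # empirical contraction factor
print(beta_pi)
```

On the full space, $P^\pi$ is only non-expansive in $\|\cdot\|_\pi$ (by Jensen's inequality and stationarity), which is why the assumption restricts to functions orthogonal to the constant direction.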
\subsection{Estimation Errors}\label{sec:estimate-error}
We first bound the estimation errors by providing the rates of convergence of the estimators $\overline{\rho}(\pi_{\theta_k})$ and $\overline{\eta}(\pi_{\theta_k})$ towards $\rho(\pi_{\theta_k})$ and $\eta(\pi_{\theta_k})$.
\begin{lemma}[Estimation Errors]\label{lem:average-error}
For any $p\in (0,1)$, and for all $k\in[K]$, the estimators $\overline{\rho}(\pi_{\theta_k})$ and $\overline{\eta}(\pi_{\theta_k})$ in \eqref{eq:estimate:rho:eta} satisfy, with probability at least $1-p$,
\$
|\rho(\pi_{\theta_k}) - \overline{\rho}(\pi_{\theta_k}) | \le \mathcal{O}\bigl(T^{-1/2}\log(4K/p)^{1/2}\bigr), \quad |\eta(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) | \le \mathcal{O}\bigl(T^{-1/2}\log(4K/p)^{1/2}\bigr),
\$
where $T$ is the simulated sample size.
\end{lemma}
\begin{proof}
Fix $k \in [K]$. By the bounded reward assumption and the Azuma-Hoeffding inequality,
it holds with probability at least $1-p/(2K)$ that
\$
| \rho(\pi_{\theta_k}) - \overline{\rho}(\pi_{\theta_k}) | \leq \mathcal{O}\bigl(T^{-1/2}\log(4K/p)^{1/2}\bigr).
\$
Similarly, with probability at least $1-p/(2K)$, it holds that
\$
| \eta(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) | \leq \mathcal{O}\bigl(T^{-1/2}\log(4K/p)^{1/2}\bigr) .
\$
Taking a union bound over $k \in [K]$, we complete the proof.
\end{proof}
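As a numerical sanity check of Lemma \ref{lem:average-error}, the sketch below verifies the Hoeffding-type deviation bound and the $T^{-1/2}$ decay of the estimation error for a Monte Carlo average of bounded rewards. The Bernoulli reward model is an illustrative assumption, not the paper's MDP.

```python
import numpy as np

# Monte Carlo estimation of an average reward from T bounded samples.
rng = np.random.default_rng(1)
rho_true, p, trials = 0.3, 0.05, 2000

def coverage(T):
    """Fraction of runs with error within the Hoeffding deviation
    sqrt(log(2/p) / (2T)); should be at least 1 - p."""
    est = rng.binomial(1, rho_true, size=(trials, T)).mean(axis=1)
    bound = np.sqrt(np.log(2.0 / p) / (2.0 * T))
    return np.mean(np.abs(est - rho_true) <= bound)

def rmse(T):
    """Root-mean-square estimation error over independent runs."""
    est = rng.binomial(1, rho_true, size=(trials, T)).mean(axis=1)
    return np.sqrt(np.mean((est - rho_true) ** 2))

cov_small, cov_large = coverage(100), coverage(1600)
ratio = rmse(100) / rmse(1600)        # roughly sqrt(1600/100) = 4
print(cov_small, cov_large, ratio)
```

Quadrupling $T$ roughly halves-twice the error, matching the $T^{-1/2}$ rate in the lemma.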
By this lemma, in what follows, without loss of generality, we assume that, for some $c_k,d_k >0$, the errors satisfy
\#\label{eq:average error}
|\rho(\pi_{\theta_k}) - \overline{\rho}(\pi_{\theta_k}) | \le c_k, \quad\qquad\qquad |\eta(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) | \le d_k.
\#
\subsection{Computation Errors}\label{sec:ac-error}
In this subsection, we bound the approximation errors of deep neural networks. First, in the following lemma, we characterize the error in the actor update step, which is induced by solving subproblem \eqref{eq:policy-update} using the SGD method in \eqref{eq:sgd-update1}.
\begin{lemma}[$\pi$-Update Error]\label{thm:ac-error}
Suppose that Assumption \ref{assumption:closed-q-w} holds. Let $\zeta = T^{-1/2}$, $H_{\rm a} = \mathcal{O} (T^{1/4})$, $R_{\rm a} = \mathcal{O} (m_{\rm a}^{1/2} H_{\rm a}^{-6}(\log m_{\rm a})^{-3})$ and $m_{\rm a} = \Omega(d^{3/2}R_{\rm a}^{-1} H_{\rm a}^{-3/2} \log^{3/2}(m_{\rm a}^{1/2}/R_{\rm a}))$. Then, at the $k$-th iteration of Algorithm \ref{alg:risac}, with probability at least $1 - \exp( - \Omega(R_{\rm a}^{2/3} m_{\rm a}^{2/3}H_{\rm a}))$, the output $f_{\overline{\theta}}$ of Algorithm~\ref{alg:ac-update} satisfies
\$
&\EE \bigl[ \bigl(f_{\overline\theta}(s,a) - \tau_{k+1}\cdot(\beta_k^{-1}(1 + 2\overline{\lambda}_{k}\overline{y}_{k})Q_{q_k}(s,a) - \beta_k^{-1}\overline{\lambda}_{k}W_{\omega_{k}}(s,a) + \tau^{-1}_{k}f_{\theta_k}(s,a))\bigr)^2 \bigr] \\
&\quad= \mathcal{O} ( R_{\rm a}^2 T^{-1/2} + R_{\rm a}^{8/3} m_{\rm a}^{-1/6} H_{\rm a}^{7} \log m_{\rm a} ),
\$
where the expectation is taken over $\overline \theta$ and $(s,a)\sim \sigma_{\pi_{\theta_k}}$, and $T$ is the number of iterations of the SGD method.
\end{lemma}
\begin{proof}
See Proposition B.3 in \cite{fu2020single} for the detailed proof.
\end{proof}
Similarly, we characterize the computation errors in the critic update step, which are induced by the $q$- and $\omega$-update steps that solve the subproblems in \eqref{eq:mspbe1} and \eqref{eq:mspbe2} using the TD updates in \eqref{eq:td-update1} and \eqref{eq:td-update2}.
\begin{lemma}[$q$-Update Error]\label{thm:td}
Suppose that Assumptions \ref{assumption:closed-q-w} and \ref{assumption:contraction} hold. Set the parameters as $\delta = T^{-1/2}$, $H_{\rm c} = \mathcal{O} (T^{1/4})$, $R_{\rm c} = \mathcal{O} (m_{\rm c}^{1/2} H_{\rm c}^{-6}(\log m_{\rm c})^{-3})$ and $m_{\rm c} = \Omega(d^{3/2}R_{\rm c}^{-1} H_{\rm c}^{-3/2} \log^{3/2}(m_{\rm c}^{1/2}/R_{\rm c}))$. Then, at the $k$-th iteration of Algorithm \ref{alg:risac}, with probability at least $1 - \exp( - \Omega(R_{\rm c}^{2/3} m_{\rm c}^{2/3}H_{\rm c}))$, the output $Q_{\overline{q}}$ of Algorithm~\ref{alg:td} satisfies
\$
\EE\bigl[ \bigl(Q_{\overline{q}}(s,a) - Q^{\pi_{\theta_k}}(s,a)\bigr)^2 \bigr] = \mathcal{O} ( R_{\rm c}^2 T^{-1/2} + R_{\rm c}^{8/3} m_{\rm c}^{-1/6} H_{\rm c}^{7} \log m_{\rm c} ),
\$
where the expectation is taken over $\overline q$ and $(s,a)\sim \sigma_{\pi_{\theta_k}}$, and $T$ is the number of iterations of the TD method.
\end{lemma}
\begin{proof}
See Appendix \ref{appendix:td} for the detailed proof.
\end{proof}
\begin{lemma}[$\omega$-Update Error]\label{thm:td2}
Suppose that Assumptions \ref{assumption:closed-q-w} and \ref{assumption:contraction} hold. Set the parameters as $\delta = T^{-1/2}$, $H_{\rm b} = \mathcal{O} (T^{1/4})$, $R_{\rm b} = \mathcal{O} (m_{\rm b}^{1/2} H_{\rm b}^{-6}(\log m_{\rm b})^{-3})$ and $m_{\rm b} = \Omega(d^{3/2}R_{\rm b}^{-1} H_{\rm b}^{-3/2} \log^{3/2}(m_{\rm b}^{1/2}/R_{\rm b}))$. Then, at the $k$-th iteration of Algorithm \ref{alg:risac}, with probability at least $1 - \exp( - \Omega(R_{\rm b}^{2/3} m_{\rm b}^{2/3}H_{\rm b} ))$, the output $W_{\overline{\omega}}$ of Algorithm~\ref{alg:td2} satisfies
\$
\EE \bigl[ \bigl(W_{\overline{\omega}}(s,a) - W^{\pi_{\theta_k}}(s,a)\bigr)^2 \bigr] = \mathcal{O} ( R_{\rm b}^2 T^{-1/2} + R_{\rm b}^{8/3} m_{\rm b}^{-1/6} H_{\rm b}^{7} \log m_{\rm b} ),
\$
where the expectation is taken over $\overline \omega$ and $(s,a)\sim \sigma_{\pi_{\theta_k}}$, and $T$ is the number of iterations of the TD method.
\end{lemma}
\begin{proof}
This proof is similar to the proof of Lemma \ref{thm:td}, and we omit it to avoid repetition.
\end{proof}
Essentially, putting Lemmas \ref{thm:ac-error}, \ref{thm:td} and \ref{thm:td2} together, we establish that the computation errors incurred by fitting the DNNs diminish at a rate of $\mathcal{O}(T^{-1/2})$ if the network widths $m_{\rm a}$, $m_{\rm c}$ and $m_{\rm b}$ of the DNNs $f_{\theta}$, $Q_{q}$ and $W_\omega$ are sufficiently large.
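The TD updates analyzed in Lemmas \ref{thm:td} and \ref{thm:td2} can be illustrated in a tabular toy setting. The sketch below runs TD(0) with a diminishing step size for the value function of a fixed policy on a three-state Markov reward process; the chain, rewards, and discounted criterion are illustrative assumptions (the paper's critics are overparametrized DNNs in an average-reward setting).

```python
import numpy as np

# Illustrative Markov reward process under a fixed policy (assumptions).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9

# Ground truth from the Bellman equation: V = (I - gamma P)^{-1} r.
V_true = np.linalg.solve(np.eye(3) - gamma * P, r)

# Tabular TD(0) with a diminishing step size along a single trajectory.
rng = np.random.default_rng(2)
V, s = np.zeros(3), 0
for t in range(200_000):
    s_next = rng.choice(3, p=P[s])
    alpha = 1.0 / (1 + t) ** 0.6
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])  # TD(0) update
    s = s_next

td_error = np.max(np.abs(V - V_true))
print(td_error)
```

The iterates converge to the fixed point of the Bellman operator, mirroring the role of the contraction condition in the lemmas above.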
\subsection{Error Propagation}\label{sec:error}
We then bound the policy error propagation at each iteration by analyzing the difference between our policy update $\pi_{\theta_{k+1}}$ in \eqref{eq:policy-update} and an ideal policy update $\pi_{k+1}$ defined below in~\eqref{eq:pi-new-true}. Recall that, as defined in \eqref{eq:pi-new}, $\hat{\pi}_{k+1}$ is a policy update based on $\overline{\lambda}_k$, $\overline{y}_k$, $Q_{q_k}$ and $W_{\omega_k}$, which are the estimators for the true $\lambda_k$, $y_k$, $Q^{\pi_{\theta_k}}$ and $W^{\pi_{\theta_k}}$, respectively. Correspondingly, we define the ideal policy update based on $\overline{\lambda}_k$, $\overline{y}_k$, $Q^{\pi_{\theta_k}}$ and $W^{\pi_{\theta_k}}$ as
\begin{equation}
\begin{aligned}\label{eq:pi-new-true}
\pi_{k+1} = \argmax_\pi\EE_{\nu_k}& \bigl[\langle (1+2\overline{\lambda}_k\overline{y}_k)Q^{\pi_{\theta_k}}(s, \cdot) - \overline{\lambda}_kW^{\pi_{\theta_k}}(s, \cdot), \pi(\cdot\,|\,s)\rangle \\
&- \beta_k \cdot {\rm KL}\bigl(\pi(\cdot\,|\,s)\,\|\, \pi_{\theta_k}(\cdot\,|\,s)\bigr)\bigr].
\end{aligned}
\end{equation}
By Proposition \ref{prop:exp-policy}, $\pi_{k+1}$ admits the closed-form solution
\$
\pi_{k+1}\propto \exp\bigl(\beta_k^{-1}(1+2\overline{\lambda}_k\overline{y}_k) Q^{\pi_{\theta_k}} - \beta_k^{-1}\overline{\lambda}_kW^{\pi_{\theta_k}}+ \tau_k^{-1} f_{\theta_k}\bigr).
\$
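For a finite action space, the closed-form update above reduces to a softmax over the updated energy. A minimal sketch, where all numerical inputs are illustrative assumptions for a single state $s$:

```python
import numpy as np

# Softmax form of the update: pi_{k+1}(a|s) proportional to
# exp(beta^{-1}(1 + 2*lam*y) Q(s,a) - beta^{-1}*lam*W(s,a) + tau^{-1} f(s,a)).
def policy_update(f, Q, W, lam, y, beta, tau):
    energy = (1.0 + 2.0 * lam * y) * Q / beta - lam * W / beta + f / tau
    z = np.exp(energy - energy.max())     # max-shift for numerical stability
    return z / z.sum()

f = np.array([0.1, -0.2, 0.3])            # current energy f_{theta_k}(s, .)
Q = np.array([1.0, 0.5, -0.5])            # critic estimate Q_{q_k}(s, .)
W = np.array([1.2, 0.3, 0.2])             # second critic W_{omega_k}(s, .)
pi_new = policy_update(f, Q, W, lam=0.5, y=0.4, beta=2.0, tau=1.0)
print(pi_new)
```

The output is a valid probability distribution that upweights actions with high risk-adjusted value $(1+2\lambda y)Q - \lambda W$.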
For ease of presentation, we adopt the following notation for the density ratios of policies and stationary distributions,
\#\label{eq:w0w}
\phi^*_{k} = \EE_{{\sigma}_k}[|{\ud \pi^*}/{\ud \pi_0} - {\ud \pi_{\theta_k}}/{\ud \pi_0}|^2]^{1/2},\quad
\psi^*_k = \EE_{\sigma_k}[|{\ud \sigma^*}/{\ud \sigma_k} - \ud \nu^*/\ud \nu_k|^2]^{1/2},
\#
where $\ud \pi^*/\ud \pi_0$, $\ud \pi_{\theta_k}/\ud \pi_0$, $\ud \sigma^*/\ud \sigma_k$, and $\ud \nu^*/\ud \nu_k$ are the Radon-Nikodym derivatives, and recall that we denote the optimal policy as $\pi^*$, its stationary state distribution as $\nu^*$, and its stationary state-action distribution as $\sigma^*$.
We then prove an important lemma for the error propagation, which quantifies how the error of the policy update $\hat{\pi}_{k+1}$ in \eqref{eq:policy-update} and the policy evaluation errors propagate into the infinite-dimensional policy space.
\begin{lemma}[Error Propagation]\label{lem:error-kl}
Suppose that the policy improvement error in Line \ref{line:sgd} of Algorithm \ref{alg:risac} satisfies
\#\label{eq:error-kl1}
\EE_{{\sigma}_k}\bigl[\bigl(f_{\theta_{k+1}}(s, a) - \tau_{k+1}\cdot(\beta_k^{-1}(1+2\overline{\lambda}_k\overline{y}_k)Q_{q_k}(s, a) - \beta_k^{-1}\overline{\lambda}_kW_{\omega_k}(s, a) + \tau^{-1}_kf_{\theta_k}(s, a))\bigr)^2 \bigr] \leq \epsilon_{k+1},
\#
and the policy evaluation error of Q-function in Line \ref{line:td:q} of Algorithm \ref{alg:risac} satisfies
\#\label{eq:error-kl2}
\EE_{\sigma_k}\bigl[\bigl(Q_{q_k}(s, a) - Q^{\pi_{\theta_k}}(s, a)\bigr)^2\bigr] \leq \epsilon'_{k},
\#
and the policy evaluation error of W-function in Line \ref{line:td:w} of Algorithm \ref{alg:risac} satisfies
\#\label{eq:error-kl3}
\EE_{\sigma_k}\bigl[\bigl(W_{\omega_k}(s, a) - W^{\pi_{\theta_k}}(s, a)\bigr)^2\bigr] \leq \epsilon_{k}''.
\#
For $\pi_{k+1}$ defined in \eqref{eq:pi-new-true} and $\pi_{\theta_{k+1}}$ obtained in Line \ref{line:sgd} of Algorithm \ref{alg:risac}, we have
\#\label{812759}
\bigl| \EE_{\nu^*}\big[ \big\langle \log\big(\pi_{\theta_{k+1}}(\cdot\,|\,s)/\pi_{k+1}(\cdot\,|\,s)\big), \pi^*(\cdot\,|\,s)-\pi_{\theta_k}(\cdot\,|\,s) \big\rangle]\bigr|\le \varepsilon_k,
\#
where $\varepsilon_k= \tau_{k+1}^{-1}\epsilon_{k+1}\cdot\phi^*_{k}+(1+2MN)\cdot\beta_k^{-1}\epsilon_{k}'\cdot\psi^*_{k} +N\cdot\beta_k^{-1}\epsilon_{k}''\cdot\psi^*_{k}.$
\end{lemma}
\begin{proof}See Appendix \ref{812354} for the detailed proof. \end{proof}
Recall that we consider energy-based policies, where the energy function $f_\theta$ is parametrized as a DNN. The next lemma characterizes the stepwise energy difference by quantifying the difference between
$f_{\theta_{k+1}}$ and $f_{\theta_k}$.
\begin{lemma}[Stepwise Energy Difference]\label{lem:stepwise-energy}
Under the same assumptions as in Lemma \ref{lem:error-kl}, we~have
\$
\EE_{\nu^*}[\| \tau_{k+1}^{-1} f_{\theta_{k+1}}(s,\cdot) - \tau_k^{-1}f_{\theta_k}(s,\cdot) \|_{\infty}^2] \le 2\varepsilon_k'+ 2\beta_k^{-2}\hat{M},
\$
where $\varepsilon_k'=|\cA|\cdot \tau_{k+1}^{-2}\epsilon_{k+1}^2$ and $\hat{M}=4(1+2MN)^2 \cdot \EE_{\nu^*}[ \max_{a\in\cA} (Q_{q_0}(s,a))^2 + R_{\rm c}^2]+ 4N^2 \cdot \EE_{\nu^*}[ \max_{a\in\cA} (W_{\omega_0}(s,a))^2 + R_{\rm b}^2 ] .$
\end{lemma}
\begin{proof}See Appendix \ref{812354} for the detailed proof. \end{proof}
\subsection{Global Convergence of VARAC}\label{sec:main}
In this subsection, we establish the global convergence of the VARAC algorithm. In particular, we derive the convergence of the solution path, and then show that, despite the nonconvexity of our problem, the duality gap diminishes.
We first prove the convergence of the solution path by showing that the Lagrangian function \eqref{eq:def-L} evaluated along the solution path converges to its value at a saddle point.
Specifically, the following theorem characterizes the convergence of ${\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)$ towards ${\mathcal{L}}(\lambda^*, \pi^*, y^*)$.
\begin{theorem}[Approximate Saddle Point]\label{thm:main}
Suppose that Assumptions~\ref{assumption:unique-solution}, \ref{assumption:closed-q-w}, and \ref{assumption:contraction} hold. For the sequences $\{\overline{\lambda}_k\}^{K}_{k=1}$, $\{\pi_{\theta_k}\}^{K}_{k = 1}$ and $\{\overline{y}_k\}^{K}_{k=1}$ generated by the VARAC algorithm (Alg.~\ref{alg:risac}), we have
\begin{equation}\label{eqn:m1}
\begin{aligned}
& - \sum_{k=0}^{K-1} (c_k + d_k)\cdot \mathcal{O}(1/K) - \mathcal{O}(1/\sqrt{K})\\
&\qquad \leq \frac{1}{K}\sum_{k=0}^{K-1} \bigl({\mathcal{L}}(\lambda^*,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \bigr)\\
&\qquad \leq \bigl( \sum_{k=0}^{K-1}c_k\bigr) \cdot \mathcal{O}(1/K) + \sum_{k=0}^{K-1}( \varepsilon_k + \varepsilon_k') \cdot \mathcal{O}(1/\sqrt{K}) + \mathcal{O}(1/\sqrt{K}),
\end{aligned}
\end{equation}
where $c_k$ and $d_k$ are estimation errors defined in \eqref{eq:average error}. Here $\varepsilon_k= \tau_{k+1}^{-1}\epsilon_{k+1}\cdot\phi^*_{k}+(1+2MN)\cdot\beta_k^{-1}\epsilon_{k}'\cdot\psi^*_{k} +N\cdot\beta_k^{-1}\epsilon_{k}''\cdot\psi^*_{k}$ and $\varepsilon_k'=|\cA|\cdot \tau_{k+1}^{-2}\epsilon_{k+1}^2$, where
\$
&\epsilon_{k+1} = \mathcal{O} ( R_{\rm a}^2 T^{-1/2} + R_{\rm a}^{8/3} m_{\rm a}^{-1/6} H_{\rm a}^{7} \log m_{\rm a} ), \qquad
\epsilon'_{k} = \mathcal{O} ( R_{\rm c}^2 T^{-1/2} + R_{\rm c}^{8/3} m_{\rm c}^{-1/6} H_{\rm c}^{7} \log m_{\rm c} ), \\
&\epsilon''_{k} = \mathcal{O} ( R_{\rm b}^2 T^{-1/2} + R_{\rm b}^{8/3} m_{\rm b}^{-1/6} H_{\rm b}^{7} \log m_{\rm b} ).
\$
\end{theorem}
In what follows, we prove Theorem \ref{thm:main} through a few lemmas.
We first present the performance difference lemma, which evaluates the difference in the values of the Lagrangian function \eqref{eq:def-L} for different policies.
\begin{lemma}[Performance Difference]\label{lem:v-diff}
For ${\mathcal{L}}(\lambda, \pi, y)$ defined in \eqref{eq:def-L}, we have
\$
{\mathcal{L}}(\lambda, \pi^*, y) - {\mathcal{L}}(\lambda, \pi, y) = \EE_{\nu^*}\big[\big\langle (1 + 2\lambda y)Q^{\pi}(s,\cdot) - \lambda W^{\pi}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi(\cdot\,|\,s)\big\rangle\big],
\$
where $\nu^*$ is the stationary state distribution of the optimal policy $\pi^*$.
\end{lemma}
\begin{proof} See Appendix \ref{appendix:main} for the detailed proof.\end{proof}
In the next two lemmas, we establish the one-step descent for the Lagrange multiplier $\lambda$-update and the policy $\pi$-update steps, respectively. The key idea of the proof follows from the analysis of the mirror descent algorithm \citep{beck2003mirror,nesterov2013introductory}.
\begin{lemma}[One-Step Descent of $\lambda$]\label{lem:main-descent-lambda}
At the $k$-th iteration of Algorithm~\ref{alg:risac}, we have that
$\overline{\lambda}_k$ in \eqref{eq:update-lambda-form} and the optimal solution $\lambda^*$ satisfy
\#\label{eq:descent-lambda}
&\| \lambda^* - \overline{\lambda}_k\|^2 - \|\lambda^* - \overline{\lambda}_{k+1} \|^2 \\
&\qquad\geq -\frac{1}{\gamma_k} \cdot \bigl({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k) \bigr) - \frac{1}{\gamma_k}\cdot2N\cdot(d_k + 2Mc_k) - \frac{1}{4\gamma_k^2}\cdot(\alpha+ 4M^2)^2. \notag
\#
\end{lemma}
\begin{proof}
By the updating rule of $\lambda$ in \eqref{eq:update-lambda-form}, we have
\begin{equation}\label{eq:lambda110}
\begin{aligned}
&\|\lambda^* - \overline{\lambda}_k\|^2 - \|\lambda^* - \overline{\lambda}_{k+1} \|^2 \\
&\qquad = - 2\langle \lambda^* - \overline{\lambda}_{k+1}, \overline{\lambda}_k - \overline{\lambda}_{k+1} \rangle +\|\overline{\lambda}_{k+1} - \overline{\lambda}_{k}\|^2 \\
& \qquad \geq - \bigl\langle \lambda^* - \overline{\lambda}_{k+1}, \gamma_k^{-1}\cdot \bigl(\alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 + 2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) \bigr) \bigr\rangle +\|\overline{\lambda}_{k+1} - \overline{\lambda}_{k}\| ^2 \\
& \qquad = -\frac{1}{\gamma_k}\cdot\langle \lambda^* - \overline{\lambda}_{k}, \alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) \rangle \\
&\qquad\qquad+\frac{1}{\gamma_k}\cdot\langle \overline{\lambda}_{k+1} - \overline{\lambda}_{k},\alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) \rangle
+\|\overline{\lambda}_{k+1} - \overline{\lambda}_{k}\|^2,
\end{aligned}
\end{equation}
where the inequality follows from the non-expansiveness of the projection in \eqref{eq:update-lambda-form}.
By the definition of ${\mathcal{L}}(\lambda, \pi, y)$ in \eqref{eq:def-L}, we obtain
\begin{equation}\label{eq:lambda111}
\begin{aligned}
& -\frac{1}{\gamma_k}\cdot\langle \lambda^* - \overline{\lambda}_{k}, \alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) \rangle \\
& \qquad = -\frac{1}{\gamma_k}\cdot\langle \lambda^* - \overline{\lambda}_{k}, \alpha - \eta(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\rho(\pi_{\theta_k}) \rangle \\
&\qquad\qquad - \frac{1}{\gamma_k}\cdot \bigl\langle \lambda^* - \overline{\lambda}_{k}, \eta(\pi_{\theta_k}) - \overline{\eta}(\pi_{\theta_k}) +2\overline{y}_k\bigl(\overline{\rho}(\pi_{\theta_k}) - \rho(\pi_{\theta_k})\bigr) \bigr\rangle \\
& \qquad \geq -\frac{1}{\gamma_k} \cdot \bigl({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k) \bigr) - \frac{1}{\gamma_k}\cdot2N\cdot(d_k + 2Mc_k),
\end{aligned}
\end{equation}
where the last inequality is obtained by \eqref{eq:bound:rho:eta}, \eqref{eq:bound:rho:eta:bar}, \eqref{eq:update-y-form}, \eqref{eq:average error} and the assumption $\lambda_k \le N$. Meanwhile, by \eqref{eq:bound:rho:eta:bar}, \eqref{eq:update-y-form}, and the inequality $2xy \geq -x^2 - y^2$, we have
\begin{equation}\label{eq:lambda112}
\begin{aligned}
& \bigl\langle \overline{\lambda}_{k+1} - \overline{\lambda}_{k}, {\gamma_k}^{-1}\cdot \bigl(\alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\overline{\rho}(\pi_{\theta_k}) \bigr) \bigr\rangle \\
& \qquad \geq -\|\overline{\lambda}_{k+1} - \overline{\lambda}_{k}\|^2 - \frac{1}{4\gamma_k^2}\cdot \bigl(\alpha - \overline{\eta}(\pi_{\theta_k}) -\overline{y}_k^2 +2\overline{y}_k\overline{\rho}(\pi_{\theta_k})\bigr)^2 \\
& \qquad \geq -\|\overline{\lambda}_{k+1} - \overline{\lambda}_{k}\|^2 - \frac{1}{4\gamma_k^2}\cdot(\alpha+ 4M^2)^2.
\end{aligned}
\end{equation}
Plugging \eqref{eq:lambda111} and \eqref{eq:lambda112} into \eqref{eq:lambda110}, we have
\$
&\| \lambda^* - \overline{\lambda}_k\|^2 - \|\lambda^* - \overline{\lambda}_{k+1} \|^2 \\
&\qquad\geq -\frac{1}{\gamma_k} \cdot \bigl({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k) \bigr) - \frac{1}{\gamma_k}\cdot2N\cdot(d_k + 2Mc_k) - \frac{1}{4\gamma_k^2}\cdot(\alpha+ 4M^2 )^2,
\$
which concludes the proof. \end{proof}
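The $\lambda$-step analyzed above is a projected subgradient update, and the key ingredient of the proof is the non-expansiveness of the projection onto $[0,N]$. The sketch below implements such an update and checks non-expansiveness numerically; the subgradient expression and all constants are illustrative assumptions.

```python
import numpy as np

# Projected subgradient step for the dual variable lambda on [0, N];
# the direction g = alpha - eta - y**2 + 2*y*rho mirrors the lemma's update.
N = 5.0

def proj(x):
    """Euclidean projection onto the interval [0, N]."""
    return min(max(x, 0.0), N)

def lam_update(lam, g, gamma):
    """lambda_{k+1} = Proj_{[0,N]}(lambda_k + g / gamma_k)."""
    return proj(lam + g / gamma)

# Non-expansiveness: |Proj(a) - Proj(b)| <= |a - b| for all a, b.
rng = np.random.default_rng(3)
pairs = rng.normal(scale=10.0, size=(1000, 2))
nonexpansive = all(abs(proj(a) - proj(b)) <= abs(a - b) + 1e-12
                   for a, b in pairs)
print(lam_update(4.9, 5.0, 10.0), nonexpansive)
```

Non-expansiveness is exactly what justifies dropping the projection in the first inequality of \eqref{eq:lambda110}.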
\begin{lemma}[One-Step Descent of $\pi$]\label{lem:main-descent}
For the ideal policy update $\pi_{k+1}$ defined in \eqref{eq:pi-new-true} and the policy $\pi_{\theta_{k}}$ generated by Algorithm~\ref{alg:risac}, we have that, for any $s \in {\mathcal{S}}$,
\$
&{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr) - {\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\\
& \qquad\leq \langle \log(\pi_{\theta_{k+1}}(\cdot\,|\,s)/\pi_{k+1}(\cdot\,|\,s)), \pi_{\theta_k}(\cdot\,|\,s)-\pi^*(\cdot\,|\,s) \rangle \\
&\qquad\qquad- \beta_k^{-1}\cdot\langle(1+2y^*\overline{\lambda}_k)Q^{\pi_{\theta_k}}(s, \cdot) - \overline{\lambda}_kW^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle\notag\\
&\qquad\qquad + \beta_k^{-1}\cdot\langle 2(y^* - \overline{y}_k)\overline{\lambda}_kQ^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle\notag\\
&\qquad\qquad - \langle \tau_{k+1}^{-1}f_{\theta_{k+1}}(s, \cdot) - \tau_k^{-1}f_{\theta_k}(s, \cdot), \pi_{\theta_{k}}(\cdot\,|\,s) - \pi_{\theta_{k+1}}(\cdot\,|\,s) \rangle \\
&\qquad\qquad - 1/2\cdot\|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|_1^2.
\$
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{lem:main-descent-lambda}, and we defer the details to~Appendix~\ref{appendix:main}.
\end{proof}
Next, we derive an upper bound of ${\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)$.
\begin{lemma}\label{lem:main-descent-y}
For the optimal solution $y^*$ and $\overline{y}_k$ obtained in \eqref{eq:update-y-form}, we have
\#\label{eq:812335}
{\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \le 4MNc_k.
\#
\end{lemma}
\begin{proof}
By the definition of ${\mathcal{L}}(\lambda,\pi,y)$ in \eqref{eq:def-L}, we have
\$
&{\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \\
& \qquad=\overline{\lambda}_k\cdot \langle y^*- \overline{y}_k, 2\rho(\pi_{\theta_k}) - y^* - \overline{y}_k \rangle \notag\\
&\qquad=2\overline{\lambda}_k \cdot\langle y^*- \overline{y}_k, \rho(\pi_{\theta_k}) - \overline{y}_k \rangle -\overline{\lambda}_k \cdot (y^*- \overline{y}_k)^2 .
\$
By \eqref{eq:update-y-form} and \eqref{eq:average error}, we have $|\rho(\pi_{\theta_k}) - \overline{y}_k| = |\rho(\pi_{\theta_k}) - \overline{\rho}(\pi_{\theta_k})| \le c_k$. Combined with \eqref{eq:update-lambda-form}, \eqref{eq:update-y-form} and the fact that $(y^*- \overline{y}_k)^2$ is nonnegative, we have
\#\label{eq:l09}
{\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \le 4MNc_k,
\#
which concludes the proof.
\end{proof}
Combining Lemma \ref{lem:main-descent} and Lemma \ref{lem:main-descent-y}, we derive an upper bound of ${\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, \overline{y}_k)$ in the next lemma.
\begin{lemma}\label{lem:descent:lambda:pi}
For the sequences $\{\overline{\lambda}_k\}^{K}_{k=1}$, $\{\pi_{\theta_k}\}^{K}_{k = 1}$, and $\{\overline{y}_k\}^{K}_{k=1}$ generated by the VARAC algorithm, we have
\$
&\beta_k^{-1}\cdot \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, \overline{y}_k)\bigr) \\
&\qquad \le
\EE_{\nu^*} \bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr)\bigr] \notag\\
&\qquad\qquad +\beta_k^{-2} \hat{M} +\beta_k^{-1}\cdot8MNc_k+ \varepsilon_k + \varepsilon_k'. \notag
\$
\end{lemma}
\begin{proof}
Taking expectation of ${\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)) - {\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))$ with respect to $s\sim\nu^*$, and by Lemma \ref{lem:error-kl} and Lemma \ref{lem:main-descent}, we have
\$
&\EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\bigr] \notag\\
&\qquad\le
\varepsilon_{k} - \beta_k^{-1}\cdot \EE_{\nu^*}[\langle(1+2y^*\overline{\lambda}_k)Q^{\pi_{\theta_k}}(s, \cdot) - \overline{\lambda}_kW^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle] \notag\\
&\qquad\qquad + \beta_k^{-1}\cdot \EE_{\nu^*}[\langle 2(y^*-\overline{y}_k)\overline{\lambda}_kQ^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle] \notag\\
&\qquad\qquad
- \EE_{\nu^*}[\langle \tau_{k+1}^{-1}f_{\theta_{k+1}}(s, \cdot) - \tau_k^{-1}f_{\theta_k}(s, \cdot), \pi_{\theta_k}(\cdot\,|\,s)- \pi_{\theta_{k+1}}(\cdot\,|\,s) \rangle] \notag\\
&\qquad\qquad - 1/2\cdot\EE_{\nu^*}[ \|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|^2_1 ],
\$
where $\varepsilon_k$ is defined in Lemma \ref{lem:error-kl}.
By Lemma \ref{lem:v-diff} and H{\"o}lder's inequality, we further have
\#\label{812209}
&\EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\bigr] \notag\\
&\qquad\le
\varepsilon_{k} - \beta_k^{-1}\cdot\bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)\bigr) +\beta_k^{-1}\cdot\bigl(2\overline{\lambda}_k(\overline{y}_k-y^*)\bigr)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr) \notag\\
&\qquad\qquad +\EE_{\nu^*}[\| \tau_{k+1}^{-1}f_{\theta_{k+1}}(s,\cdot)- \tau_{k}^{-1}f_{\theta_{k}}(s,\cdot)\|_{\infty}\cdot \| \pi_{\theta_k}(\cdot\,|\,s)- \pi_{\theta_{k+1}}(\cdot\,|\,s) \|_1]\notag\\
&\qquad\qquad - 1/2\cdot\EE_{\nu^*}[ \|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|^2_1 ] \notag\\
&\qquad\le
\varepsilon_{k} - \beta_k^{-1}\cdot\bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)\bigr) +\beta_k^{-1}\cdot\bigl(2\overline{\lambda}_k(\overline{y}_k-y^*)\bigr)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr) \notag\\
&\qquad\qquad + 1/2 \cdot \EE_{\nu^*}[ \| \tau_{k+1}^{-1}f_{\theta_{k+1}}(s,\cdot)- \tau_{k}^{-1}f_{\theta_{k}}(s,\cdot)\|_{\infty}^2 ]\notag\\
&\qquad\le
\varepsilon_{k} - \beta_k^{-1}\cdot\bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)\bigr) \notag\\
&\qquad\qquad +\beta_k^{-1}\cdot\bigl(2\overline{\lambda}_k(\overline{y}_k-y^*)\bigr)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr) + (\varepsilon_k'+\beta_k^{-2}\hat{M}), \#
where the second inequality holds by the fact that $2xy - y^2 \leq x^2$ for any $x,y\in\RR$, and the last inequality holds by Lemma~\ref{lem:stepwise-energy}. Rearranging the terms in \eqref{812209}, we have
\begin{equation}\label{812334}
\begin{aligned}
&\beta_k^{-1}\cdot \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)\bigr) \\
&\qquad \le
\EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr)\bigr] \\
&\qquad\qquad + 2\beta_k^{-1}\cdot\overline{\lambda}_k\cdot(\overline{y}_k-y^*)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr)
+\beta_k^{-2} \hat{M} + \varepsilon_k + \varepsilon_k'.
\end{aligned}
\end{equation}
Furthermore, by the definition that $\overline{y}_k = \overline{\rho}(\pi_{\theta_k})$ and $y^*=\rho(\pi^*)$, we have
\$
&\overline{\lambda}_k\cdot (\overline{y}_k-y^*)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr) \\
&\qquad=\overline{\lambda}_k\cdot \bigl[ (\overline{y}_k-y^*)\cdot\bigl( \overline{\rho}(\pi_{\theta_{k}})- \rho(\pi_{\theta_{k}})\bigr)
-\bigl(\rho(\pi^*) - \overline{\rho}(\pi_{\theta_{k}})\bigr)^2 \bigr] \notag \\
&\qquad \le \overline{\lambda}_k \cdot (\overline{y}_k-y^*)\cdot\bigl( \overline{\rho}(\pi_{\theta_{k}})- \rho(\pi_{\theta_{k}})\bigr), \notag
\$
where the inequality holds by the fact that $ \overline{\lambda}_k(\overline{y}_k-y^*)^2$ is nonnegative. By \eqref{eq:update-lambda-form}, \eqref{eq:update-y-form} and \eqref{eq:average error}, we further have $\overline{\lambda}_k \le N$, $ | \overline{y}_k-y^* | \le 2M$, and $ | \overline{\rho}(\pi_{\theta_{k}})- \rho(\pi_{\theta_{k}}) | \le c_k$. Hence, we obtain
\#\label{eq:812444}
\overline{\lambda}_k \cdot (\overline{y}_k-y^*)\cdot\bigl(\rho(\pi^*) - \rho(\pi_{\theta_{k}})\bigr) \le 2MNc_k.
\#
Plugging \eqref{eq:812335} and \eqref{eq:812444} into \eqref{812334}, we obtain
\begin{equation}\label{eq:812336}
\begin{aligned}
&\beta_k^{-1}\cdot \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, \overline{y}_k)\bigr) \\
&\qquad \le
\EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)\bigr)\bigr] \\
&\qquad\qquad +\beta_k^{-2} \hat{M} +\beta_k^{-1}\cdot 8MNc_k+ \varepsilon_k + \varepsilon_k',
\end{aligned}
\end{equation}
which concludes the proof.
\end{proof}
Now, we are ready to prove Theorem \ref{thm:main} by casting the VARAC algorithm as an infinite-dimensional mirror descent with primal and dual errors.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We prove the first and second inequalities in \eqref{eqn:m1} in two parts. \\
\noindent\textbf{Part 1.}
Letting $\gamma_k =\gamma\sqrt{K}$ and telescoping \eqref{eq:descent-lambda} for $k + 1 \in [K]$, we have
\begin{equation}\label{eq:result-part1}
\begin{aligned}
&\frac{1}{K}\sum_{k=0}^{K-1} \bigl({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k)\bigr) \\
&\qquad\geq \gamma \cdot \frac{\|\lambda^* - \overline{\lambda}_K\|^2 - \|\lambda^* - \overline{\lambda}_0\|^2}{\sqrt{K}} - \frac{ 2N\sum_{k=0}^{K-1} (d_k + 2Mc_k)}{K} - \frac{(\alpha+ 4M^2)^2 }{4\gamma \sqrt{K}} \\
&\qquad \geq -\frac{\gamma \cdot \|\lambda^* - \overline{\lambda}_0\|^2}{\sqrt{K}} - \frac{ 2N\sum_{k=0}^{K-1} (d_k + 2Mc_k)}{K} - \frac{(\alpha+ 4M^2)^2 }{4\gamma \sqrt{K}} \\
&\qquad = - \sum_{k=0}^{K-1} (c_k + d_k)\cdot \mathcal{O}(1/K) - \mathcal{O}(1/\sqrt{K}),
\end{aligned}
\end{equation}
where the second inequality holds by the fact that $\|\lambda^* - \overline{\lambda}_K\|^2$ is nonnegative. By the definition of a saddle point, ${\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) \leq {\mathcal{L}}(\lambda^*, \pi^*, y^*)$, which completes the proof of the first part of Theorem~\ref{thm:main}.
\vskip5pt
\noindent\textbf{Part 2.}
By telescoping \eqref{eq:812336} for $k + 1\in [K]$, we obtain
\$
&\sum_{k=0}^{K-1} \beta_k^{-1}\cdot\bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)\bigr)\notag\\
&\qquad \le
\EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{0}}(\cdot\,|\,s)\bigr)\bigr] - \EE_{\nu^*}\bigl[{\rm KL}\bigl(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{K}}(\cdot\,|\,s)\bigr)\bigr] \notag\\
&\qquad\qquad + \sum_{k=0}^{K-1}(\beta_k^{-2} \hat{M} +\beta_k^{-1}\cdot8MNc_k+ \varepsilon_k + \varepsilon_k') .
\$
Note that we have (i) $\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_0}(\cdot\,|\,s))] \leq \log|\cA|$ due to the uniform initialization of the policy, and (ii) the KL-divergence is nonnegative. Setting $\beta_k = \beta\sqrt{K},$ we have
\begin{equation} \label{eq:result-part2}
\begin{aligned}
&\frac{1}{K}\sum_{k=0}^{K-1} \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)\bigr) \\
&\qquad\le \frac{8MN\sum_{k=0}^{K-1}c_k}{K} +
\frac{\beta\log|\cA| + \beta^{-1} \hat{M} +\sum_{k=0}^{K-1}( \varepsilon_k + \varepsilon_k')}{\sqrt{K}} \\
&\qquad =\sum_{k=0}^{K-1}c_k \cdot \mathcal{O}(1/K) + \sum_{k=0}^{K-1}( \varepsilon_k + \varepsilon_k') \cdot \mathcal{O}(1/\sqrt{K}) + \mathcal{O}(1/\sqrt{K}) .
\end{aligned}
\end{equation}
By the definition of a saddle point, ${\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) \geq {\mathcal{L}}(\lambda^*, \pi^*, y^*)$, which concludes the proof.
\end{proof}
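The telescoping argument in the proof can be reproduced numerically in a one-dimensional toy problem: a projected (sub)gradient step with step size $1/\gamma_k$, $\gamma_k = \gamma\sqrt{K}$, on a concave surrogate yields an averaged gap decaying like $1/\sqrt{K}$. The quadratic surrogate below is an illustrative assumption, not the paper's Lagrangian.

```python
import math

# Gradient ascent on the concave surrogate f(lam) = -(lam - lam_star)^2 / 2
# with increment 1 / (gamma * sqrt(K)), mirroring Part 1 of the proof.
def avg_gap(K, gamma=1.0):
    lam_star, lam, total = 2.0, 0.0, 0.0
    for _ in range(K):
        total += 0.5 * (lam - lam_star) ** 2   # instantaneous optimality gap
        g = lam_star - lam                     # gradient of the surrogate
        lam += g / (gamma * math.sqrt(K))      # step size 1 / gamma_k
    return total / K                           # averaged gap over K steps

gaps = [avg_gap(K) for K in (100, 400, 1600)]
print(gaps)
```

Quadrupling $K$ roughly halves the averaged gap, consistent with the $\mathcal{O}(1/\sqrt{K})$ rate of Theorem \ref{thm:main} and Corollary \ref{coro:main-bound}.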
By optimizing the input parameters, we obtain the $\mathcal{O}(1/\sqrt{K})$ rate of convergence in the following corollary.
\begin{corollary}\label{coro:main-bound}
Suppose that Assumptions \ref{assumption:unique-solution}, \ref{assumption:closed-q-w}, and \ref{assumption:contraction} hold. Let $R_{\rm a} = R_{\rm b} = R_{\rm c} = \mathcal{O}(m_{\rm a}^{1/2} H_{\rm a}^{-6}(\log m_{\rm a})^{-3}),$ $T = \Omega( K^{3} (\phi^*_{k}+\psi^*_{k})^2 |\cA| R_{\rm a}^4H_{\rm a}m_{\rm a}^{2/3} )$, $m_{\rm a} = m_{\rm b} = m_{\rm c} = \Omega( d^{3/2}K^{9}(\phi^*_{k}+\psi^*_{k})^6|\cA|^3 R_{\rm a}^{16} H_{\rm a}^{42} \log^6 m_{\rm a} )$ and $p=\exp( - \Omega(R_{\rm a}^{2/3} m_{\rm a}^{2/3}H_{\rm a} ))$ for any $0\le k \le K$. With probability at least $1 - 4\exp( - \Omega(R_{\rm a}^{2/3} m_{\rm a}^{2/3}H_{\rm a} ))$, we have
\$
\frac{1}{K} \bigg| \sum_{k=0}^{K-1} \bigl({\mathcal{L}}(\lambda^*,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)\bigr) \bigg| \leq \mathcal{O}(1/\sqrt{K}).
\$
\end{corollary}
\begin{proof}
See Appendix \ref{appendix:main} for the detailed proof.
\end{proof}
Finally, we show in the next theorem that the duality gap diminishes at an $\mathcal{O}(1/\sqrt{K})$ rate despite the nonconvexity of problem~\eqref{eq:problem}, which shows that the solution path generated by the VARAC algorithm converges to a globally optimal solution.
\begin{theorem}[Duality Gap]\label{thm:duality-gap}
Suppose that Assumptions~\ref{assumption:unique-solution}, \ref{assumption:closed-q-w}, and \ref{assumption:contraction} hold. For the sequences $\{\overline{\lambda}_k\}^{K}_{k=1}$, $\{\pi_{\theta_k}\}^{K}_{k = 1}$ and $\{\overline{y}_k\}^{K}_{k=1}$ generated by the VARAC algorithm, we have
\$
0& \leq \frac{1}{K}\sum_{k=0}^{K-1} \bigl( {\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\lambda^*, \pi_{\theta_k}, \overline{y}_k) \bigr)\\
& \leq \sum_{k=0}^{K-1}(c_k + d_k)\cdot \mathcal{O}(1/K) + \sum_{k = 0}^{K - 1}(\varepsilon_k + \varepsilon_k') \cdot \mathcal{O}(1/\sqrt{K}) + \mathcal{O}(1/\sqrt{K}).
\$
Moreover, if we set the input parameters the same as in Corollary \ref{coro:main-bound}, it holds that, with probability at least $1 - 4\exp( - \Omega(R_{\rm a}^{2/3} m_{\rm a}^{2/3}H_{\rm a} ))$,
\$
\frac{1}{K} \bigg| \sum_{k=0}^{K-1} \bigl( {\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\lambda^*, \pi_{\theta_k}, \overline{y}_k)\bigr) \bigg| \le \mathcal{O}(1/\sqrt{K}).
\$
\end{theorem}
\begin{proof}
See Appendix \ref{appendix:main} for the detailed proof.
\end{proof}
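To build intuition for the $\mathcal{O}(1/\sqrt{K})$ rates above, the mirror-descent mechanism at the heart of the analysis can be simulated on a toy problem. The following sketch (assuming only numpy; the fixed vector $q$ is a hypothetical stand-in for the action values, and the function name is ours) runs the KL-mirror (exponentiated-gradient) update with stepsize $\beta_k^{-1} = 1/(\beta\sqrt{K})$ over the probability simplex and returns the averaged optimality gap:

```python
import numpy as np

def averaged_gap(K, beta=1.0, n=5, seed=0):
    """KL mirror descent (exponentiated gradient) on max_pi <q, pi> over the
    simplex, mimicking the beta_k = beta * sqrt(K) stepsize schedule."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(size=n)            # toy stand-in for the action values
    pi = np.full(n, 1.0 / n)           # uniform init, so KL(pi* || pi_0) <= log n
    step = 1.0 / (beta * np.sqrt(K))   # beta_k^{-1} with beta_k = beta * sqrt(K)
    gaps = []
    for _ in range(K):
        gaps.append(q.max() - q @ pi)  # per-iterate optimality gap
        pi = pi * np.exp(step * q)     # multiplicative-weights / KL-mirror update
        pi /= pi.sum()
    return float(np.mean(gaps))
```

Empirically, increasing $K$ a hundredfold shrinks the averaged gap roughly tenfold, consistent with the $1/\sqrt{K}$ rate.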
\section{Conclusion} \label{sec:con}
To the best of our knowledge, this paper makes the first attempt to study risk-sensitive deep reinforcement learning, focusing on the variance-constrained formulation. We propose VARAC, an efficient and theoretically sound algorithm for this problem. Under mild assumptions, despite the overparametrization and nonconvexity, we show that our algorithm converges to a saddle point at an $\mathcal{O}(1/\sqrt{K})$ rate, and that the duality gap diminishes at the same rate. In other words, our algorithm finds a globally optimal policy at a sublinear rate.
For future work, we plan to extend the risk constraints to other coherent risk measures such as the conditional value at risk.
\bibliographystyle{ims}
\section{Proof Sketch}\label{sec:sketch}
In this section, we sketch the proof of Theorem \ref{thm:main}. Specifically, we cast VARAC (Algorithm \ref{alg:risac}) as infinite-dimensional mirror descent with primal and dual errors, and establish global convergence by showing that the iterates approximate the saddle point.
Recall that the objective ${\mathcal{L}}(\lambda, \pi, y)$ is defined in \eqref{eq:def-L} and $\nu^*$ is the stationary state distribution of the optimal policy $\pi^*$.
\begin{lemma}[Performance Difference]\label{lem:v-diff}
For ${\mathcal{L}}(\lambda, \pi, y)$ defined in \eqref{eq:def-L}, we have
\$
{\mathcal{L}}(\lambda, \pi^*, y) - {\mathcal{L}}(\lambda, \pi, y) = \EE_{\nu^*}[\langle (1 + 2\lambda y)Q^{\pi}(s,\cdot) - \lambda W^{\pi}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi(\cdot\,|\,s)\rangle].
\$
\end{lemma}
\begin{proof} See Appendix \ref{appendix:proof-sketch} for a detailed proof.\end{proof}
\begin{lemma}[One-Step Descent of $\lambda$]\label{lem:main-descent-lambda}
For the optimal solution $\lambda^*$ and $\overline{\lambda}_k$ obtained in \eqref{eq:update-lambda-form}, we have
\#\label{eq:descent-lambda}
&\| \lambda^* - \overline{\lambda}_k\|^2 - \|\lambda^* - \overline{\lambda}_{k+1} \|^2 \\
&\quad\geq -\frac{1}{\gamma_k} \cdot ({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k) ) - \frac{1}{\gamma_k}\cdot2N\cdot(d_k + 2Mc_k) \notag\\
&\quad\qquad - \frac{1}{4\gamma_k^2}\cdot(\alpha+ F + 3M^2 + 2Mc_k + d_k)^2. \notag
\#
\end{lemma}
\begin{proof}
See Appendix \ref{appendix:proof-sketch} for a detailed proof.
\end{proof}
\begin{lemma}[One-Step Descent of $\pi$]\label{lem:main-descent}
For the ideal improved policy $\pi_{k+1}$ defined in \eqref{eq:pi-new-true} and the current policy $\pi_{\theta_{k}}$, we have that, for any $s \in {\mathcal{S}}$,
\$
&{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s)) - {\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))\\
& \quad\leq \langle \log(\pi_{\theta_{k+1}}(\cdot\,|\,s)/\pi_{k+1}(\cdot\,|\,s)), \pi_{\theta_k}(\cdot\,|\,s)-\pi^*(\cdot\,|\,s) \rangle \\
&\quad\qquad- \beta_k^{-1}\cdot\langle(1+2y^*\overline{\lambda}_k)Q^{\pi_{\theta_k}}(s, \cdot) - \overline{\lambda}_kW^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle\notag\\
&\quad\qquad + \beta_k^{-1}\cdot\langle(2(y^* - \overline{y}_k)\overline{\lambda}_k)Q^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle\notag\\
&\quad\qquad - \langle \tau_{k+1}^{-1}f_{\theta_{k+1}}(s, \cdot) - \tau_k^{-1}f_{\theta_k}(s, \cdot), \pi_{\theta_{k}}(\cdot\,|\,s) - \pi_{\theta_{k+1}}(\cdot\,|\,s) \rangle - 1/2\cdot\|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|_1^2.
\$
\end{lemma}
\begin{proof}
See Appendix \ref{appendix:proof-sketch} for a detailed proof.
\end{proof}
\begin{lemma}[One-Step Descent of $y$]\label{lem:main-descent-y}
For the optimal solution $y^*$ and $\overline{y}_k$ obtained in \eqref{eq:update-y-form}, we have
\$
{\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \le 4MNc_k.
\$
\end{lemma}
\begin{proof}
See Appendix \ref{appendix:proof-sketch} for a detailed proof.
\end{proof}
Based on Lemmas \ref{lem:v-diff}, \ref{lem:main-descent-lambda}, \ref{lem:main-descent} and \ref{lem:main-descent-y}, we prove Theorem \ref{thm:main} by casting VARAC as infinite-dimensional mirror descent with primal and dual errors.
\begin{proof}[Proof of Theorem \ref{thm:main}]
We show convergence in two parts. \\
\noindent\textbf{Part 1.}
Setting the parameter $\gamma_k =\gamma\sqrt{K}$ and telescoping \eqref{eq:descent-lambda} for $k+1\in [K]$, we have
\#\label{eq:result-part1}
&\frac{1}{K}\sum_{k=0}^{K-1} \bigl({\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k},\overline{y}_k)\bigr) \\
&\quad\geq \frac{\|\lambda^* - \overline{\lambda}_K\|^2 - \|\lambda^* - \overline{\lambda}_0\|^2 - 2N\sum_{k=0}^{K-1} (d_k + 2Mc_k)}{K} - \frac{\sum_{k=0}^{K-1}(\alpha+ F + 3M^2 + 2Mc_k + d_k)^2 }{4\gamma K \sqrt{K}} \notag \\
&\quad = - \sum_{k=0}^{K-1} (c_k + d_k)^2\cdot O(1/K\sqrt{K}) - \sum_{k=0}^{K-1} (c_k + d_k)\cdot O(1/K) - O(1/K).\notag
\#
Using the saddle-point relation ${\mathcal{L}}(\lambda^*, \pi_{\theta_k},\overline{y}_k) \leq {\mathcal{L}}(\lambda^*, \pi^*, y^*)$, we complete the proof of the first part of Theorem \ref{thm:main}.
\vskip5pt
\noindent\textbf{Part 2.}
Taking expectation with respect to $s\sim\nu^*$ and invoking Lemmas \ref{lem:error-kl} and \ref{lem:main-descent}, we have
\$
&\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s))] - \EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))] \notag\\
&\quad\le
\varepsilon_{k} - \beta_k^{-1}\cdot \EE_{\nu^*}[\langle(1+2y^*\overline{\lambda}_k)Q^{\pi_{\theta_k}}(s, \cdot) - \overline{\lambda}_kW^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle] \notag\\
&\quad\qquad + \beta_k^{-1}\cdot \EE_{\nu^*}[\la2(y^*-\overline{y}_k)\overline{\lambda}_kQ^{\pi_{\theta_k}}(s, \cdot), \pi^*(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\rangle] \notag\\
&\quad\qquad
- \EE_{\nu^*}[\langle \tau_{k+1}^{-1}f_{\theta_{k+1}}(s, \cdot) - \tau_k^{-1}f_{\theta_k}(s, \cdot), \pi_{\theta_k}(\cdot\,|\,s)- \pi_{\theta_{k+1}}(\cdot\,|\,s) \rangle] \notag\\
&\quad\qquad - 1/2\cdot\EE_{\nu^*}[ \|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|^2_1 ].
\$
By Lemma \ref{lem:v-diff} and H{\"o}lder's inequality, we further have
\#\label{812209}
&\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s))] - \EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))] \notag\\
&\quad\le
\varepsilon_{k} - \beta_k^{-1}\cdot({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)) +\beta_k^{-1}\cdot(2\overline{\lambda}_k(\overline{y}_k-y^*))\cdot(\rho(\pi^*) - \rho(\pi_{\theta_k})) \notag\\
&\quad\qquad +\EE_{\nu^*}[\| \tau_{k+1}^{-1}f_{\theta_{k+1}}(s,\cdot)- \tau_{k}^{-1}f_{\theta_{k}}(s,\cdot)\|_{\infty}\cdot \| \pi_{\theta_k}(\cdot\,|\,s)- \pi_{\theta_{k+1}}(\cdot\,|\,s) \|_1]\notag\\
&\quad\qquad - 1/2\cdot\EE_{\nu^*}[ \|\pi_{\theta_{k+1}}(\cdot\,|\,s) - \pi_{\theta_{k}}(\cdot\,|\,s)\|^2_1 ] \notag\\
&\quad\le
\varepsilon_{k} - \beta_k^{-1}\cdot({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)) +\beta_k^{-1}\cdot(2\overline{\lambda}_k(\overline{y}_k-y^*))\cdot(\rho(\pi^*) - \rho(\pi_{\theta_k})) \notag\\
&\quad\qquad + 1/2 \cdot \EE_{\nu^*}[ \| \tau_{k+1}^{-1}f_{\theta_{k+1}}(s,\cdot)- \tau_{k}^{-1}f_{\theta_{k}}(s,\cdot)\|_{\infty}^2 ]\notag\\
&\quad\le
\varepsilon_{k} - \beta_k^{-1}\cdot({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)) \notag\\
&\quad\qquad +\beta_k^{-1}\cdot(2\overline{\lambda}_k(\overline{y}_k-y^*))\cdot(\rho(\pi^*) - \rho(\pi_{\theta_k})) + (\varepsilon_k'+\beta_k^{-2}\hat{M}), \#
where in the second inequality we use $2xy - y^2 \leq x^2$ and in the last inequality we use Lemma \ref{lem:stepwise-energy}. Rearranging the terms in \eqref{812209}, we have
\#\label{812334}
&\beta_k^{-1}\cdot \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, y^*)\bigr) \\
&\quad \le
\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))] - \EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s))] \notag\\
&\quad\qquad + 2\beta_k^{-1}\cdot\overline{\lambda}_k(\overline{y}_k-y^*)\cdot(\rho(\pi^*) - \rho(\pi_{\theta_k}))
+\beta_k^{-2} \hat{M} + \varepsilon_k + \varepsilon_k'.\notag
\#
Recalling Lemma \ref{lem:main-descent-y}, we have
\#\label{eq:812335}
{\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k) \le 4MNc_k .
\#
By the update rule of $y$ and the fact that $y^*=\rho(\pi^*)$, we have
\#\label{eq:812444}
&\overline{\lambda}_k(\overline{y}_k-y^*)\cdot(\rho(\pi^*) - \rho(\pi_{\theta_k})) \\
&\quad=\overline{\lambda}_k [ (\overline{y}_k-y^*)\cdot( \overline{\rho}(\pi_{\theta_k})- \rho(\pi_{\theta_k}))
+(\overline{y}_k-\overline{\rho}(\pi_{\theta_k}))\cdot(\rho(\pi^*) - \overline{\rho}(\pi_{\theta_k}))
-(\rho(\pi^*) - \overline{\rho}(\pi_{\theta_k}))^2 ] \notag \\
&\quad \le N\cdot(2Mc_k + 2Mc_k) \notag\\
&\quad = 4MNc_k, \notag
\#
where the inequality is obtained by Assumption \ref{assumption:bounded-reward} and the fact that $\overline{\lambda}_k(\rho(\pi^*) - \overline{\rho}(\pi_{\theta_k}))^2$ is nonnegative.
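The algebraic decomposition used in \eqref{eq:812444} can be checked symbolically; a minimal sketch (assuming sympy), writing $a = \overline{y}_k$, $b = y^* = \rho(\pi^*)$, $c = \overline{\rho}(\pi_{\theta_k})$, and $d = \rho(\pi_{\theta_k})$:

```python
from sympy import symbols, expand

# a = ybar_k, b = y* = rho(pi*), c = rhobar(pi_theta_k), d = rho(pi_theta_k)
a, b, c, d = symbols('a b c d')
lhs = (a - b) * (b - d)  # (ybar_k - y*)(rho(pi*) - rho(pi_theta_k))
rhs = (a - b) * (c - d) + (a - c) * (b - c) - (b - c)**2
assert expand(lhs - rhs) == 0  # the decomposition holds identically
```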
Plugging \eqref{eq:812444} and \eqref{eq:812335} into \eqref{812334}, we obtain
\#\label{eq:812336}
&\beta_k^{-1}\cdot \bigl({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k,\pi_{\theta_k}, \overline{y}_k)\bigr) \\
&\quad \le
\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k}}(\cdot\,|\,s))] - \EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{k+1}}(\cdot\,|\,s))] \notag\\
&\quad\qquad +\beta_k^{-2} \hat{M} +\beta_k^{-1}\cdot8MNc_k+ \varepsilon_k + \varepsilon_k'. \notag
\#
By telescoping \eqref{eq:812336} for $k+1\in [K]$, we obtain
\$
&\sum_{k=0}^{K-1} \beta_k^{-1}\cdot({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k))\notag\\
&\quad \le
\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{0}}(\cdot\,|\,s))] - \EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_{K}}(\cdot\,|\,s))] \notag\\
&\quad\qquad + \sum_{k=0}^{K-1}(\beta_k^{-2} \hat{M} +\beta_k^{-1}\cdot8MNc_k+ \varepsilon_k + \varepsilon_k') .
\$
Note that we have (i) $\EE_{\nu^*}[{\rm KL}(\pi^*(\cdot\,|\,s)\,\|\,\pi_{\theta_0}(\cdot\,|\,s))] \leq \log|\cA|$ due to the uniform initialization of the policy, and (ii) the KL-divergence is nonnegative. Setting the parameter $\beta_k = \beta\sqrt{K}$, we have
\# \label{eq:result-part2}
&\frac{1}{K}\sum_{k=0}^{K-1} ({\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) - {\mathcal{L}}(\overline{\lambda}_k, \pi_{\theta_k}, \overline{y}_k)) \\
&\quad\le \frac{8MN\sum_{k=0}^{K-1}c_k}{K} +
\frac{\beta\log|\cA| + \beta^{-1} \hat{M} +\sum_{k=0}^{K-1}( \varepsilon_k + \varepsilon_k')}{\sqrt{K}} \notag\\
&\quad =\sum_{k=0}^{K-1}c_k \cdot O(1/K) + \sum_{k=0}^{K-1}( \varepsilon_k + \varepsilon_k') \cdot O(1/\sqrt{K}) + O(1/\sqrt{K}) \notag .
\#
Using the saddle-point relation ${\mathcal{L}}(\overline{\lambda}_k,\pi^*, y^*) \geq {\mathcal{L}}(\lambda^*, \pi^*, y^*)$, we finish the proof of the second part of Theorem \ref{thm:main}.
\end{proof}
|
{
"timestamp": "2020-12-29T02:24:44",
"yymm": "2012",
"arxiv_id": "2012.14098",
"language": "en",
"url": "https://arxiv.org/abs/2012.14098"
}
|
\section{Introduction}
The \textit{wonderful compactification} of a symmetric space was introduced by C. De Concini and C. Procesi in \cite{DP83}. Later on, D. Luna gave a more general definition of wonderful variety and then he proved that, according to his definition, all wonderful varieties are spherical \cite{Lu96}.
Let $\mathscr{G}$ be a reductive group, and $\mathscr{B}\subset\mathscr{G}$ a Borel subgroup. A \textit{spherical variety} is a variety admitting an action of $\mathscr{G}$ with an open dense $\mathscr{B}$-orbit. For \textit{wonderful varieties} we require in addition the existence of an open orbit whose complement is a simple normal crossing divisor $D_1\cup\dots\cup D_r$, where the $D_i$ are the $\mathscr{G}$-invariant prime divisors of the wonderful variety $X$. The number $r$ is called the rank of $X$. Note that $\mathscr{G}$ has $2^r$ orbits in $X$, given by all the possible intersections among the $D_i$. The unique closed orbit is $\bigcap_{i=1}^rD_i$.
Apart from their role in group theory, wonderful varieties have proved important in enumerative geometry and, more recently, in birational geometry. We refer to \cite{BL11}, \cite{Pe14}, \cite{Pe18} for comprehensive treatments of these topics.
Classical examples of wonderful varieties are the spaces of complete quadrics and of complete collineations. These spaces have been studied both from the geometrical and enumerative point of view \cite{Se48}, \cite{Se51}, \cite{Se52}, \cite{Ty56}, \cite{Va82}, \cite{Va84}, \cite{TK88}, \cite{LLT89}, \cite{Tha99}. An aspect that will be fundamental in this paper is that spaces of complete quadrics and collineations play a role in the study of other moduli spaces such as Hilbert schemes and Kontsevich spaces of stable maps \cite{Al56}, \cite{Pi81}, \cite{Ca16}. The birational geometry of the spaces of complete quadrics and collineations, mostly from the point of view of Mori theory, has recently been studied in \cite{Ce15}, \cite{Ma18a}, \cite{Ma18b}.
The spaces of complete collineations and quadrics have been constructed, as a sequence of blow-ups, by I. Vainsencher in \cite{Va84}, \cite{Va82}, and a similar construction for complete skew-forms has been carried out by M. Thaddeus in \cite{Tha99}. In this paper we construct the wonderful compactification of the space of symmetric and symplectic matrices. More precisely, we summarize our main results in Propositions \ref{orb_dim}, \ref{fun}, and Theorem \ref{main1} as follows:
\begin{thm}\label{A}
Let $\mathbb{P}^N$ be the projective space parametrizing $2r\times 2r$ symmetric matrices modulo scalar, consider the following $Sp(2r)$-action:
$$
\begin{array}{ccc}
Sp(2r)\times \mathbb{P}^{N} & \longrightarrow & \mathbb{P}^{N}\\
(M,Z) & \longmapsto & MZM^{t}
\end{array}
$$
and denote by $X_{2r}\subset\mathbb{P}^N$ the closure of the $Sp(2r)$-orbit of the identity. Then $X_{2r}$ admits a stratification
$$Y_1\subset Y_2\subset\dots\subset Y_r\subset X_{2r}$$
where the variety $Y_k$ parametrizes matrices in $X_{2r}$ of rank at most $k$, $\dim(Y_k) = 2rk+k-k^2-1$ for $k = 1,\dots,r$, and $\dim(X_{2r}) = r(r+1)$.
Furthermore, consider the following sequence of blow-ups
$$\mathcal{S}_{2r}:=X_{2r}^{(r-1)}\rightarrow X_{2r}^{(r-2)}\rightarrow X_{2r}^{(r-3)}\rightarrow\dots\rightarrow X_{2r}^{(1)}\rightarrow X_{2r}^{(0)}:=X_{2r}$$
where $X_{2r}^{(k)}\rightarrow X_{2r}^{(k-1)}$ is the blow-up of the strict transform of $Y_k$ in $X_{2r}^{(k-1)}$ for $k = 1,\dots, r-1$. Denote by $E_k\subset\mathcal{S}_{2r}$ the exceptional divisor over $Y_k$ for $k=1,\dots,r-1$, and by $S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$ the strict transform of the divisor $Y_r\subset X_{2r}$. Then $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$ are smooth and intersect transversally. Furthermore, the closures of the orbits of the $Sp(2r)$-action on $\mathcal{S}_{2r}$ induced by the $Sp(2r)$-action above are given by all the possible intersections among $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$ and $\mathcal{S}_{2r}$ itself. Therefore $\mathcal{S}_{2r}$ is wonderful.
\end{thm}
We will call $\mathcal{S}_{2r}$ the space of \textit{complete symplectic quadrics} of dimension $2r-2$. By Proposition \ref{fun}, $Y_k$ is the intersection of $X_{2r}$ with the secant variety $\sec_k(\mathcal{V}_2^{2r-1})$, which is the closure of the union of the $(k-1)$-planes spanned by $k$ general points of the Veronese variety $\mathcal{V}_2^{2r-1}$ of degree two and dimension $2r-1$.
Note that the formula for the dimension of $Y_k$ in Theorem \ref{A} yields that $\mathcal{V}_2^{2r-1}$ is entirely contained in $X_{2r}$, while for $r\geq 2$ the orbit closure $X_{2r}$ intersects $\sec_k(\mathcal{V}_2^{2r-1})$ in a proper subvariety. Furthermore, by Proposition \ref{sec} we have that set-theoretically $\sec_k(\mathcal{V}_2^{2r-1})\cap X_{2r} = \sec_r(\mathcal{V}_2^{2r-1})\cap X_{2r}$ for $k\geq r$. Interestingly, this means that if $M$ is a symmetric $2r\times 2r$ matrix that is a limit of a family of symplectic matrices then either $1\leq \rank(M)\leq r$ or $\rank(M) = 2r$.
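The open orbit of $X_{2r}$ can be sampled numerically. The following sketch (assuming numpy and scipy; the helper name is ours) generates $M \in Sp(2r)$ as the exponential of the Hamiltonian matrix $JS$, with $S$ symmetric and $J$ the standard symplectic form, and checks the defining relation $M^tJM = J$ together with the symmetry and full rank of the orbit point $MM^t$ of the identity:

```python
import numpy as np
from scipy.linalg import expm

def random_symplectic(r, seed=0):
    """Sample M in Sp(2r): if S is symmetric and J is the standard symplectic
    form, then JS is Hamiltonian and expm(JS) is symplectic."""
    rng = np.random.default_rng(seed)
    J = np.block([[np.zeros((r, r)), np.eye(r)],
                  [-np.eye(r), np.zeros((r, r))]])
    A = rng.standard_normal((2 * r, 2 * r))
    S = (A + A.T) / 2                      # symmetrize
    return expm(J @ S), J

M, J = random_symplectic(2)
assert np.allclose(M.T @ J @ M, J)        # M is symplectic
Z = M @ M.T                               # orbit point of the identity in X_{2r}
assert np.allclose(Z, Z.T)                # symmetric, hence a point of P^N
assert np.linalg.matrix_rank(Z) == 4      # generic orbit points have full rank
```

Generic orbit points have full rank $2r$, in agreement with the rank dichotomy above.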
For instance, by Proposition \ref{x4g14} $X_4$ is the Grassmannian $\mathbb{G}(1,4)$ of lines in $\mathbb{P}^4$. In this case by Theorem \ref{A} we have that $\mathcal{S}_4$ is the blow-up of $\mathbb{G}(1,4)$ along the Veronese $3$-fold $\mathcal{V}_2^3\subset\mathbb{G}(1,4)$. This is a wonderful variety of rank two. As remarked in \cite{Wa96} wonderful varieties of rank two are a building block in the theory of spherical varieties. The wonderful compactification $\mathcal{S}_4$ is the sixth variety in \cite[Table C]{Wa96}, and will be a central character throughout the whole paper.
\begin{Remark}\label{Rem}
The use of wonderful compactifications in enumerative geometry dates back to the solution by M. Chasles of a problem posed by J. Steiner, asking how many conics in the plane are tangent to five given general conics \cite{KL80}. Steiner's answer, which later turned out to be wrong, was $6^5 = 7776$. Later on, Chasles computed the correct number, which is $3264$.
Although enumerative problems are not within the scope of this paper, we give a simple application of our construction in enumerative geometry. It is well known that there are $92$ quadric surfaces in $\mathbb{P}^3$ that are tangent to nine general lines \cite[Remark 4.3]{BFS20}. The points of $\mathcal{S}_4$ in a divisor of class $2H-E_1$, where $H$ is the pull-back of the hyperplane class of $X_4$, correspond to the symplectic quadrics in $\mathbb{P}^3$ that are tangent to a general line. We have that $(2H-E_1)^6 = 40$. From the enumerative point of view this means that there are exactly $40$ symplectic quadrics in $\mathbb{P}^3$ that are tangent to six general lines.
\end{Remark}
The variety $X_{2r}$ is singular for $r\geq 3$. The wonderful variety $\mathcal{S}_{2r}$ may be seen as an incarnation, in the singular setting, of the process producing a wonderful compactification from a conical one in \cite{MP98}. Furthermore, by Proposition \ref{symplcones} $\mathcal{S}_{2r}$ provides a resolution of a variety with conical singularities as remarked in \cite[Section 3.3]{MP98}.
In Sections \ref{divS2r} and \ref{birS2r} we take advantage of the spherical structure of $\mathcal{S}_{2r}$ to study its birational geometry from the point of view of Mori theory. Roughly speaking, a \textit{Mori dream space} is a projective variety $X$ whose cone of effective divisors $\Eff(X)$ admits a well-behaved decomposition into convex sets, called the Mori chamber decomposition, and these chambers are the nef cones of birational models of $X$. These varieties, introduced by Y. Hu and S. Keel in \cite{HK00}, are so named because they behave in the best possible way from the point of view of the minimal model program. In general, determining whether or not a variety is a Mori dream space, and, if so, describing its Mori chamber decomposition in detail, is a hard problem. This has been done, for instance, when $X$ is obtained by blowing up points in a projective space \cite{Mu01}, \cite{CT06}, \cite{AM16}, \cite{AC17}, \cite{BM17}, \cite{LP17}.
Spherical varieties are Mori dream spaces; we refer to \cite{Pe14} for a comprehensive treatment of these topics. Cox rings were first introduced by D. A. Cox for toric varieties \cite{Cox95}, and his construction was later generalized to projective varieties in \cite{HK00}. These algebraic objects are essentially universal homogeneous coordinate rings of projective varieties, defined as the direct sum of the spaces of sections of all isomorphism classes of line bundles on them. A normal $\mathbb{Q}$-factorial projective variety $X$, over an algebraically closed field, with finitely generated Picard group is a Mori dream space if and only if its Cox ring is finitely generated \cite[Proposition 2.9]{HK00}. Summing up the results in Propositions \ref{pic_x2r}, \ref{eff_nef}, \ref{mcd_4} and Theorem \ref{dec_S6}, we have the following:
\begin{thm}\label{B}
Fix homogeneous coordinates $[z_{0,0}:\dots:z_{n,n}]$ on $\mathbb{P}^N$, and consider the blow-up $f:\mathcal{S}_{2r}\rightarrow X_{2r}\subset\mathbb{P}^N$ with exceptional divisors $E_1,\dots,E_{r-1}$ in Theorem \ref{A}. For $i=1,\dots,r$ we define the divisors $D_i$ as the strict transforms in $\mathcal{S}_{2r}$ of the divisor given by the intersection of
$$\det \begin{pmatrix}
z_{0,0} & \dots & z_{0,i-1}\\
\vdots & \ddots & \vdots \\
z_{0,i-1} & \dots & z_{i-1,i-1}\\
\end{pmatrix}=0$$
with $X_{2r}$, and let $H$ be the pull-back of the hyperplane section of $X_{2r}\subset\mathbb{P}^N$ to $\mathcal{S}_{2r}$.
The Picard rank of $\mathcal{S}_{2r}$ is $\rho(\mathcal{S}_{2r}) = r$ and $\Pic(\mathcal{S}_{2r})$ is generated by $H,E_1,\dots,E_{r-1}$. Furthermore, the effective cone $\Eff(\mathcal{S}_{2r})$ is generated by $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$, the nef cone $\Nef(\mathcal{S}_{2r})$ is generated by $D_1,\dots,D_r$, and the Cox ring of $\mathcal{S}_{2r}$ is generated by the sections of $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1}), D_1,\dots,D_r$.
Finally, the Mori chamber decomposition of $\Eff(\mathcal{S}_{4})$ has three chambers, and that of $\Eff(\mathcal{S}_{6})$ has nine chambers.
\end{thm}
We refer to Proposition \ref{mcd_4} and Theorem \ref{dec_S6} for a detailed description of the Mori chamber decompositions.
In Section \ref{K_Grass} we investigate the birational geometry of Kontsevich moduli spaces of conics in Lagrangian Grassmannians. These spaces are denoted by $\overline{M}_{g,n}(X,\beta)$ where $X$ is a projective scheme and $\beta\in H_2(X,\mathbb{Z})$ is the homology class of a curve in $X$. A point in $\overline{M}_{g,n}(X,\beta)$ corresponds to a holomorphic map $\alpha$ from an $n$-pointed genus $g$ curve $C$ to $X$ such that $\alpha_{*}([C])=\beta$. If $X$ is a homogeneous variety then there exists a smooth, irreducible Deligne-Mumford stack $\overline{\mathcal{M}}_{0,n}(X,\beta)$ whose coarse moduli space is $\overline{M}_{0,n}(X,\beta)$ \cite{FP}. When $X$ is a Lagrangian Grassmannian the class $\beta$ is then completely determined by its degree and we will write $\beta = d[L]$, where $[L]$ is the class of a line in the Pl\"ucker embedding. The Mori theory of the spaces $\overline{M}_{0,n}(X,\beta)$, especially when the target variety is a projective space or a Grassmannian, has been widely investigated in a series of papers \cite{CS06}, \cite{Ch08}, \cite{CHS08}, \cite{CHS09}, \cite{CC10}, \cite{CC11}, \cite{CM17}.
On the Kontsevich space $\overline{M}_{0,0}(LG(r,2r),2)$ of conics in the Lagrangian Grassmannian $LG(r,2r)$, parametrizing Lagrangian subspaces of a $2r$-dimensional symplectic vector space, we consider the divisor classes: $\Delta^r$ of maps with reducible domain, $T^r$ of conics tangent to a fixed hyperplane section of $LG(r,2r)$, $H^r_{\sigma_2}$ of conics intersecting a fixed codimension two Schubert variety $\Sigma_2^r\subset LG(r,2r)$, and $D_{unb}^r$ which we now define. A stable map $\alpha:\mathbb{P}^1\rightarrow LG(r,2r)$ induces a rank two subbundle $\mathcal{E}_{\alpha}\subset \mathcal{O}_{\mathbb{P}^1}\otimes K^{2r}$. If $r = 2$ we define $D_{unb}^2$ as the closure of the locus of maps $[\mathbb{P}^1,\alpha]\in \overline{M}_{0,0}(LG(2,4),2)$ such that $\mathcal{E}_{\alpha}\neq \mathcal{O}_{\mathbb{P}^1}(-1)^{\oplus 2}$. If $r\geq 3$ there is a trivial subbundle $\mathcal{O}_{\mathbb{P}^1}^{\oplus r-2}\subset\mathcal{E}_{\alpha}$ which induces an $(r-2)$-dimensional subspace $H_{\alpha}\subset\mathbb{P}^{2r-1}$. We define $D_{unb}^r$ as the closure of the locus of maps $[\mathbb{P}^1,\alpha]\in\overline{M}_{0,0}(LG(r,2r),2)$ such that $H_{\alpha}$ intersects a fixed $(r+1)$-dimensional subspace of $\mathbb{P}^{2r-1}$.
The main results in Lemma \ref{sphKLG}, Proposition \ref{effneflg}, Theorem \ref{mcd_lg}, Remark \ref{contr4} and Corollary \ref{Fano} can be summarized in the following statement:
\begin{thm}\label{C}
Let $\overline{M}_{0,0}(LG(r,2r),2)$ be the Kontsevich space of conics in the Lagrangian Grassmannian $LG(r,2r)$, parametrizing Lagrangian subspaces of a $2r$-dimensional symplectic vector space, with $r\geq 2$.
The effective cone $\Eff(\overline{M}_{0,0}(LG(r,2r),2))$ is generated by $\Delta^r$ and $D_{unb}^r$, and the nef cone $\Nef(\overline{M}_{0,0}(LG(r,2r),2))$ is generated by $H_{\sigma_2}^r$ and $T^r$.
The Mori chamber decomposition of $\Eff(\overline{M}_{0,0}(LG(r,2r),2))$ has three chambers as displayed in the following picture:
$$
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(2.1,2.0) rectangle (7.5,5.4);
\draw [->,line width=0.1pt] (3.,4.) -- (3.,5.);
\draw [->,line width=0.1pt] (3.,4.) -- (4.,4.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,3.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,2.);
\draw [shift={(3.,4.)},line width=0.4pt,fill=black,fill opacity=0.15000000596046448] (0,0) -- plot[domain=-0.46364760900080615:0.,variable=\t]({1.*0.6965067581669063*cos(\t r)+0.*0.6965067581669063*sin(\t r)},{0.*0.6965067581669063*cos(\t r)+1.*0.6965067581669063*sin(\t r)}) -- cycle ;
\begin{scriptsize}
\draw[color=black] (3.085808915158281,5.2) node {$D_{unb}^r$};
\draw[color=black] (4.4,4) node {$H_{\sigma_2}^r$};
\draw[color=black] (5.25,3) node {$T^r$};
\draw[color=black] (5.3,2.1) node {$\Delta^r$};
\end{scriptsize}
\end{tikzpicture}
$$
where $H_{\sigma_2}^{r}\sim \frac{1}{2}(\Delta^r+2D_{unb}^r)$ and $T^r\sim\Delta^r+D_{unb}^r$. Furthermore, if $r\geq 3$ then $\Mov(\overline{M}_{0,0}(LG(r,2r),2))$ is generated by $T^r$ and $D_{unb}^r$, while $\Mov(\overline{M}_{0,0}(LG(2,4),2))$ is generated by $T^2$ and $H_{\sigma_2}^2$.
The divisor $H_{\sigma_2}^r$ induces a birational morphism
$$f_{H_{\sigma_2}^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \widetilde{Chow}(LG(r,2r),2)$$
which is an isomorphism away from the locus $Q^r(1)$ of double covers of a line in $LG(r,2r)$, and contracts $Q^r(1)$ so that the locus of double covers with the same image maps to
a point, where $\widetilde{Chow}(LG(r,2r),2)$ is the normalization of the Chow variety of conics in $LG(r,2r)$.
The divisor $T^r$ induces a morphism
$$f_{T^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \overline{M}_{0,0}(LG(r,2r),2,1)$$
which is an isomorphism away from $\Delta^r$ and contracts the locus of maps with reducible domain $[C_1\cup C_2,\alpha]$ to $\alpha(C_1\cap C_2)$, where $\overline{M}_{0,0}(LG(r,2r),2,1)$ is the moduli space of weighted stable maps to $LG(r,2r)$.
The birational model $X_r$ corresponding to the chamber delimited by $H_{\sigma_2}^r$ and $D_{unb}^r$ is a fibration $X_r\rightarrow SG(r-2,2r)$ with fibers isomorphic to the Grassmannian $\mathbb{G}(2,4)$ parametrizing planes in $\mathbb{P}^4$, where $SG(r-2,2r)$ is the symplectic Grassmannian parametrizing isotropic subspaces of dimension $r-2$. Moreover, $D_{unb}^r$ contracts $\overline{M}_{0,0}(LG(r,2r),2)$ onto $SG(r-2,2r)$.
Finally, $\overline{M}_{0,0}(LG(r,2r),2)$ is Fano for $2\leq r\leq 6$, weak Fano, that is $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ is nef and big, for $r = 7$, and $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ is not ample for $r\geq 8$.
\end{thm}
Moreover, Proposition \ref{isor2}, Remarks \ref{Rem}, \ref{contr4} and Corollary \ref{aut} provide additional information for the case $r = 2$.
\begin{thm}\label{D}
The following $Sp(4)$-action
$$
\begin{array}{cll}
Sp(4) \times \overline{M}_{0,0}(LG(2,4),2) & \longrightarrow & \overline{M}_{0,0}(LG(2,4),2) \\
(M, [C, \alpha]) & \longmapsto & [C, \wedge^{2}M\circ\alpha]
\end{array}
$$
induces on $\overline{M}_{0,0}(LG(2,4),2)$ a structure of spherical variety. Furthermore, there exists an isomorphism
$$\varphi : \overline{M}_{0,0}(LG(2,4),2) \rightarrow \mathcal{S}_4$$
where $\mathcal{S}_4$ is the wonderful compactification of the space of symplectic quadrics of $\mathbb{P}^3$, mapping a smooth conic $C\subset LG(2,4)$ to the quadric $\bigcup_{[L]\in C}L\subset\mathbb{P}^3$. The Cox ring $\Cox(\overline{M}_{0,0}(LG(2,4),2))$ is generated by the sections of $\Delta^2,D_{unb}^2,H_{\sigma_2}^2,T^2$.
The moduli space $\overline{M}_{0,0}(LG(2,4),2)$ identifies with the blow-up of $\mathbb{G}(1,4)$ along the Veronese $\mathcal{V}_2^{3}$. With this identification the morphism associated to $H_{\sigma_2}^2$ is the blow-down and $\widetilde{Chow}(LG(2,4),2)\cong \mathbb{G}(1,4)$, while the morphism associated to $T^2$ is induced by the strict transform on $\mathcal{S}_4$ of the linear system of quadrics containing $\mathcal{V}_2^{3}$, and its image is a $6$-fold of degree $40$ in $\mathbb{P}^{14}$ isomorphic to $\overline{M}_{0,0}(LG(2,4),2,1)$.
Finally, $\operatorname{PsAut}(\overline{M}_{0,0}(LG(2,4),2))\cong\operatorname{Aut}(\overline{M}_{0,0}(LG(2,4),2)) \cong PSp(4)$ where $PSp(4)$ is the projective symplectic group, and $\operatorname{PsAut}(\overline{M}_{0,0}(LG(2,4),2))$ is the group of birational self-maps of $\overline{M}_{0,0}(LG(2,4),2)$ inducing automorphisms in codimension one.
\end{thm}
\subsection*{Organization of the paper} Throughout the paper we will work over an algebraically closed field $K$ of
characteristic zero. In Section \ref{sec1}, as a warm-up we prove some of the main results in \cite{Va82}, \cite{Va84}, using the techniques based on tangent cones computations that we will then apply to the more involved case of symplectic quadrics. In Section \ref{CSSF} we construct the wonderful compactification $\mathcal{S}_{2r}$ of the space of symmetric and symplectic $2r\times 2r$ matrices. In Section \ref{divS2r} we study the Picard rank, the effective and the nef cones of $\mathcal{S}_{2r}$. In Section \ref{birS2r} we compute the Mori chamber decomposition of the effective cone of $\mathcal{S}_{4}$ and $\mathcal{S}_{6}$. Finally, in Section \ref{K_Grass}, taking advantage of the theory of complete symplectic quadrics, we investigate the birational geometry of Kontsevich spaces of conics in Lagrangian Grassmannians.
\subsection*{Acknowledgments}
We thank very much Alex Casarotti, Massimiliano Mella, Giorgio Ottaviani and Jason Starr for useful discussions, and the referee for many helpful comments that helped us to improve the exposition and correct a mistake about the sphericity of $\overline{M}_{0,0}(LG(r,2r),2)$ for $r > 2$ in a first version of the paper.
The second named author is a member of the Gruppo Nazionale per le Strutture Algebriche, Geometriche e le loro Applicazioni of the Istituto Nazionale di Alta Matematica ``F. Severi'' (GNSAGA-INDAM).
\section{Complete quadrics}\label{sec1}
Let $V$ be a $K$-vector space of dimension $n+1$, and let $\mathbb{P}^N$ with $N = \binom{n+2}{2}-1$ be the projective space parametrizing quadratic forms on $\mathbb{P}^n = \mathbb{P}(V)$ up to a scalar multiple.
The line bundle $\mathcal{O}_{\mathbb{P}^n}(2)$ induces an embedding
$$
\begin{array}{cccc}
\nu:
&\mathbb{P}^n & \longrightarrow & \mathbb{P}^N\\
& [x_0:\dots :x_n] & \longmapsto & [x_0^2:x_0x_1:\dots :x_n^2]
\end{array}
$$
The image $\mathcal{V}^n_2 = \nu(\mathbb{P}^n) \subset \mathbb{P}^{N}$ is the \textit{Veronese variety} of dimension $n$ and degree $2^n$.
We will denote by $[z_{0,0}:\dots :z_{n,n}]$ the homogeneous coordinates on $\mathbb{P}^N$, where $z_{i,j}$ corresponds to the product $x_ix_j$.
\subsubsection*{Secant varieties}
Given an irreducible and reduced non-degenerate variety $X\subset\P^N$, and a positive integer $h\leq N$ we denote by $\sec_h(X)$
the \emph{$h$-secant variety} of $X$. This is the subvariety of $\P^N$ obtained as the closure of the union of all $(h-1)$-planes
$\langle x_1,\dots,x_{h}\rangle$ spanned by $h$ general points of $X$.
A point $p\in \mathbb{P}^N$ can be represented by an $(n+1)\times (n+1)$ symmetric matrix $Z$. The Veronese variety $\mathcal{V}^n_2$ is the locus of rank one matrices. More generally, $p\in \sec_h(\mathcal{V}^n_2)$ if and only if $Z$ can be written as a linear combination of $h$ rank one matrices, that is, if and only if $\rank(Z)\leq h$. If $p = [z_{0,0}:\cdots:z_{n,n}]$ then we may write
\stepcounter{thm}
\begin{equation}\label{matrix}
Z = \left(
\begin{array}{ccc}
z_{0,0} & \dots & z_{0,n}\\
\vdots & \ddots & \vdots\\
z_{0,n} & \dots & z_{n,n}
\end{array}\right)
\end{equation}
Then, the ideal of $\sec_h(\mathcal{V}^n_2)$ is generated by the $(h+1)\times (h+1)$ minors of $Z$.
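The rank description of the secant varieties can be checked numerically, outside the formal development. The following Python sketch (standard library only; the helpers `det` and `minors` are ours) builds a symmetric matrix of rank at most two and confirms that all of its $3\times 3$ minors vanish, as the ideal of $\sec_2(\mathcal{V}^n_2)$ predicts:

```python
from itertools import combinations
from random import randint

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minors(Z, s):
    # All s x s minors of the square matrix Z.
    n = len(Z)
    for rows in combinations(range(n), s):
        for cols in combinations(range(n), s):
            yield det([[Z[i][j] for j in cols] for i in rows])

# A symmetric matrix of rank <= 2: sum of two rank-one matrices v v^t.
n = 4
u = [randint(-5, 5) for _ in range(n)]
v = [randint(-5, 5) for _ in range(n)]
Z = [[u[i] * u[j] + v[i] * v[j] for j in range(n)] for i in range(n)]

# Z lies on sec_2(V^3_2): every 3 x 3 minor vanishes.
assert all(m == 0 for m in minors(Z, 3))
```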
By \cite[Lemma 3.3]{Ma18a} the $SL(n+1)$-action
$$
\begin{array}{ccc}
SL(n+1)\times \mathbb{P}^n & \longrightarrow & \mathbb{P}^n\\
(M,[v]) & \longmapsto & [Mv]
\end{array}
$$
induces the $SL(n+1)$-action on $\mathbb{P}^{N}$ given by
\stepcounter{thm}
\begin{equation}\label{acss}
\begin{array}{ccc}
SL(n+1)\times \mathbb{P}^{N} & \longrightarrow & \mathbb{P}^{N}\\
(M,Z) & \longmapsto & MZM^{t}
\end{array}
\end{equation}
The orbit closures of the action (\ref{acss}) are precisely the secant varieties $\sec_h(\mathcal{V}^n_2)$. Now, let us recall the notions of spherical and wonderful varieties.
\begin{Definition}
A \textit{spherical variety} is a normal variety $X$ together with an action of a connected reductive affine algebraic group $\mathscr{G}$, a Borel subgroup $\mathscr{B}\subset \mathscr{G}$, and a base point $x_0\in X$ such that the $\mathscr{B}$-orbit of $x_0$ in $X$ is a dense open subset of $X$.
Let $(X,\mathscr{G},\mathscr{B},x_0)$ be a spherical variety. We distinguish two types of $\mathscr{B}$-invariant prime divisors: a \textit{boundary divisor} of $X$ is a $\mathscr{G}$-invariant prime divisor on $X$, a \textit{color} of $X$ is a $\mathscr{B}$-invariant prime divisor that is not $\mathscr{G}$-invariant. We will denote by $\mathcal{B}(X)$ and $\mathcal{C}(X)$ respectively the set of boundary divisors and colors of $X$.
\end{Definition}
For instance, any toric variety is a spherical variety with $\mathscr{B}=\mathscr{G}$ equal to the torus. For a toric variety there are no colors, and the boundary divisors are the usual toric invariant divisors.
\begin{Definition}
A \textit{wonderful variety} is a smooth projective variety $X$ with the action of a semi-simple simply connected group $\mathscr{G}$ such that:
\begin{itemize}
\item[-] there is a point $x_0\in X$ with open $\mathscr{G}$-orbit and such that the complement $X\setminus \mathscr{G}\cdot x_0$ is a union of prime divisors $E_1,\dots, E_r$ with simple normal crossings;
\item[-] the closures of the $\mathscr{G}$-orbits in $X$ are the intersections $\bigcap_{i\in I}E_i$ where $I$ is a subset of $\{1,\dots, r\}$.
\end{itemize}
\end{Definition}
As proven by D. Luna in \cite{Lu96}, wonderful varieties are in particular spherical. Note that $\mathbb{P}^N$ is not a wonderful compactification of $SL(n+1)/H$, where $H$ is the stabilizer of the identity matrix with respect to the $SL(n+1)$-action in (\ref{acss}), since for instance the orbit closure $\sec_n(\mathcal{V}^n_2)$ is a singular divisor. In order to get a wonderful compactification we must consider the space of complete quadrics, which we now describe. The \textit{space of complete quadrics} is the closure of the graph of the rational map
$$
\begin{array}{ccc}
\mathbb{P}(\Sym^2V)& \dasharrow & \mathbb{P}(\Sym^2\bigwedge^{2}V)\times\dots\times \mathbb{P}(\Sym^2\bigwedge^nV)\\
Z & \longmapsto & (\wedge^2Z,\dots,\wedge^{n}Z)
\end{array}
$$
By \cite[Theorem 6.3]{Va82} the space of complete quadrics can be constructed as a sequence of blow-ups as follows.
\begin{Construction}\label{ccq}
Let us consider the following sequence of blow-ups:
\begin{itemize}
\item[-] $\mathcal{Q}(n)_1$ is the blow-up of $\mathcal{Q}(n)_0:=\mathbb{P}^{N}$ along the Veronese variety $\mathcal{V}^n_2$;
\item[-] $\mathcal{Q}(n)_2$ is the blow-up of $\mathcal{Q}(n)_1$ along the strict transform of $\sec_2(\mathcal{V}^n_2)$;\\
$\vdots$
\item[-] $\mathcal{Q}(n)_i$ is the blow-up of $\mathcal{Q}(n)_{i-1}$ along the strict transform of $\sec_i(\mathcal{V}^n_2)$;\\
$\vdots$
\item[-] $\mathcal{Q}(n)_{n-1}$ is the blow-up of $\mathcal{Q}(n)_{n-2}$ along the strict transform of $\sec_{n-1}(\mathcal{V}^n_2)$.
\end{itemize}
Let $f_i:\mathcal{Q}(n)_i\rightarrow \mathcal{Q}(n)_{i-1}$ be the blow-up morphism. We will denote by $E_i^q$ both the exceptional divisor of $f_i$ and its strict transforms in the subsequent blow-ups. We will denote by $\mathcal{Q}(n)$ the last blow-up $\mathcal{Q}(n)_{n-1}$ and by $f:\mathcal{Q}(n)\rightarrow\mathbb{P}^{N}$ the composition of the $f_i$.
Then for any $i = 1,\dots,n-1$ the variety $\mathcal{Q}(n)_{i}$ is smooth, the strict transform of $\sec_{i+1}(\mathcal{V}^n_2)$ in $\mathcal{Q}(n)_{i}$ is smooth, and the divisor $E_1^q\cup E_2^q\cup\dots \cup E^q_{i}$ in $\mathcal{Q}(n)_{i}$ is simple normal crossing. Furthermore, the variety $\mathcal{Q}(n)$ is isomorphic to the space of complete $(n-1)$-dimensional quadrics.
\end{Construction}
In particular, $\mathcal{Q}(n)$ is a wonderful compactification of the homogeneous space $SL(n+1)/SO(n+1)$. We now recall some facts about the varieties $\sec_h(\mathcal{V}^n_2)$.
\begin{Remark}\label{sec_ver}
Recall that $\sec_h(\mathcal{V}^n_2)$ identifies with the variety parametrizing $(n+1)\times (n+1)$ symmetric matrices modulo scalar of rank at most $h$. An argument similar to the one used to estimate the dimension of the spaces of matrices, not necessarily symmetric, of rank at most $h$ in \cite[Example 12.1]{Ha95} shows that
$$\dim(\sec_h(\mathcal{V}^n_2)) = \frac{2nh-h^2+3h-2}{2}$$
for $h\leq n$. Furthermore, identifying $\sec_h(\mathcal{V}^n_2)$ with the variety parametrizing $(n+1)\times (n+1)$ symmetric matrices modulo scalar of corank at least $n+1-h$, by \cite[Proposition 12(b)]{HT84} we get that the degree of $\sec_h(\mathcal{V}^n_2)$ is given by
$$\deg(\sec_h(\mathcal{V}^n_2)) = \prod_{i=0}^{n-h}\frac{\binom{n+1+i}{n+1-h-i}}{\binom{2i+1}{i}}.$$
In particular, for $h = n$ we get $n+1$, and for $h = 1$ we get $2^n$.
\end{Remark}
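The dimension and degree formulas of Remark \ref{sec_ver} are easy to test numerically. The sketch below (Python, standard library only; the function names are ours) evaluates both and confirms the two special cases recorded above, namely degree $n+1$ for $h = n$ and degree $2^n$ for $h = 1$:

```python
from fractions import Fraction
from math import comb

def dim_secant(n, h):
    # dim sec_h(V^n_2) = (2nh - h^2 + 3h - 2) / 2, valid for h <= n.
    return (2 * n * h - h * h + 3 * h - 2) // 2

def deg_secant(n, h):
    # deg sec_h(V^n_2) = prod_{i=0}^{n-h} C(n+1+i, n+1-h-i) / C(2i+1, i).
    d = Fraction(1)
    for i in range(n - h + 1):
        d *= Fraction(comb(n + 1 + i, n + 1 - h - i), comb(2 * i + 1, i))
    assert d.denominator == 1  # the degree is an integer
    return int(d)

for n in range(1, 8):
    assert deg_secant(n, n) == n + 1   # sec_n is the determinantal hypersurface
    assert deg_secant(n, 1) == 2 ** n  # sec_1 is the Veronese variety itself
    assert dim_secant(n, 1) == n       # dimension of the Veronese variety
```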
\begin{Proposition}\label{tcones}
The tangent cone of $\sec_h(\mathcal{V}^n_2)$ at a point $p\in \sec_k(\mathcal{V}^n_2)\setminus\sec_{k-1}(\mathcal{V}^n_2)$ for $k\leq h$ is a cone with vertex of dimension $\binom{n+2}{2}-1-\frac{(n-k+1)(n-k+2)}{2}$ over $\sec_{h-k}(\mathcal{V}^{n-k}_2)$. In particular, for $k < h$ we have
$$\mult_{\sec_k(\mathcal{V}^n_2)\setminus\sec_{k-1}(\mathcal{V}^n_2)}\sec_h(\mathcal{V}^n_2) = \prod_{i=0}^{n-h}\frac{\binom{n-k+1+i}{n+1-h-i}}{\binom{2i+1}{i}}
$$
and $\Sing(\sec_h(\mathcal{V}^n_2)) = \sec_{h-1}(\mathcal{V}^n_2)$.
\end{Proposition}
\begin{proof}
We compute the tangent cone of $\sec_h(\mathcal{V}^n_2)$ at
$$
p_k = \left(
\begin{array}{cc}
I_{k,k} & 0_{k,n+1-k} \\
0_{n+1-k,k} & 0_{n+1-k,n+1-k}
\end{array}
\right)
$$
where $I_{k,k}$ is the $k\times k$ identity matrix. Consider the affine chart $z_{0,0}\neq 0$ and the change of coordinates $z_{i,i}\mapsto z_{i,i}-1$ for $i = 1,\dots,k-1$, $z_{i,j}\mapsto z_{i,j}$ if $i\neq j$. Then the matrix $Z$ in (\ref{matrix}) takes the following form
$$
\left(
\begin{array}{ccccccc}
1 & z_{0,1} & \hdots & z_{0,k-1} & z_{0,k} & \hdots & z_{0,n} \\
z_{0,1} & z_{1,1}-1 & \hdots & z_{1,k-1} & z_{1,k} & \hdots & z_{1,n} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
z_{0,k-1} & z_{1,k-1} & \hdots & z_{k-1,k-1}-1 & z_{k-1,k} & \hdots & z_{k-1,n} \\
z_{0,k} & z_{1,k} & \hdots & z_{k-1,k} & z_{k,k} & \hdots & z_{k,n} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
z_{0,n} & z_{1,n} & \hdots & z_{k-1,n} & z_{k,n} & \hdots & z_{n,n}
\end{array}
\right)
$$
Recall that $\sec_h(\mathcal{V}^n_2)\subseteq\mathbb{P}^N$ is cut out by the $(h+1)\times (h+1)$ minors of $Z$. Now, the lowest degree terms of these minors are given by the $(h+1-k)\times (h+1-k)$ minors of the following matrix
$$
\left(
\begin{array}{ccc}
z_{k,k} & \hdots & z_{k,n}\\
\vdots & \ddots & \vdots \\
z_{k,n} & \hdots & z_{n,n}
\end{array}
\right)
$$
Therefore, the tangent cone $TC_{p_k}\sec_h(\mathcal{V}^n_2)$ is contained in the cone $C$ over $\sec_{h-k}(\mathcal{V}^{n-k}_2)$ with vertex the linear subspace of $\mathbb{P}^N$ given by $\{z_{k,k} =\dots = z_{k,n} = z_{k+1,k+1} = \dots = z_{k+1,n} = \dots = z_{n,n}=0\}$. Now, Remark \ref{sec_ver} yields
$$\dim(C) = \binom{n+2}{2}-1-\frac{(n-k+1)(n-k+2)}{2}+\dim(\sec_{h-k}(\mathcal{V}^{n-k}_2))+1 = \dim(\sec_h(\mathcal{V}^n_2))$$
and hence $TC_{p_k}\sec_h(\mathcal{V}^n_2) = C$. Finally, to get the formula for the multiplicity it is enough to observe that
$$\mult_{p_k}\sec_h(\mathcal{V}^n_2) = \mult_{p_k}TC_{p_k}\sec_h(\mathcal{V}^n_2) = \deg(\sec_{h-k}(\mathcal{V}^{n-k}_2))$$
and to apply the formula for the degree of the secant varieties of $\mathcal{V}^n_2$ in Remark \ref{sec_ver}.
\end{proof}
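The multiplicity formula of Proposition \ref{tcones} is, term by term, the degree formula of Remark \ref{sec_ver} applied to $\sec_{h-k}(\mathcal{V}^{n-k}_2)$. A quick numerical cross-check of this identity, and of the fact that the multiplicity exceeds one along the singular locus (Python; the function names are ours):

```python
from fractions import Fraction
from math import comb

def deg_secant(n, h):
    # Degree formula from the remark on secant varieties of V^n_2.
    d = Fraction(1)
    for i in range(n - h + 1):
        d *= Fraction(comb(n + 1 + i, n + 1 - h - i), comb(2 * i + 1, i))
    return d

def mult_secant(n, h, k):
    # Multiplicity of sec_h(V^n_2) along sec_k \ sec_{k-1}, for k < h.
    m = Fraction(1)
    for i in range(n - h + 1):
        m *= Fraction(comb(n - k + 1 + i, n + 1 - h - i), comb(2 * i + 1, i))
    return m

for n in range(2, 9):
    for h in range(2, n + 1):
        for k in range(1, h):
            # mult along the rank-k locus = deg(sec_{h-k}(V^{n-k}_2)).
            assert mult_secant(n, h, k) == deg_secant(n - k, h - k)
            # Multiplicity > 1: sec_{h-1} lies in the singular locus.
            assert mult_secant(n, h, k) > 1
```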
We will need the following result on fibrations with smooth fibers on a smooth base.
\begin{Proposition}\label{smooth_fib}
Let $f:X\rightarrow Y$ be a surjective morphism of varieties over an algebraically closed field with equidimensional smooth fibers. If $Y$ is smooth then $X$ is smooth as well.
\end{Proposition}
\begin{proof}
By \cite[Theorem 3.3.27]{Sch99} the morphism $f:X\rightarrow Y$ is flat. Finally, since all the fibers of $f:X\rightarrow Y$ are smooth and of the same dimension \cite[Theorem 3', Chapter III, Section 10]{Mum99} yields that $X$ is smooth.
However, a direct proof is at hand and we present it in what follows. Since the problem is local on both $X$ and $Y$ we may assume that $X\subset K^N$ is an affine variety cut out by polynomials $g_1,\dots,g_a$, $Y = K^m$, and $f:X\rightarrow Y$ is given by $f(x) = (f_1(x),\dots, f_m(x))$.
Consider a point $p\in X$. Without loss of generality we may assume that $f(p) = 0$. Then the fiber $X_0$ of $f$ through $p$ is given by
$$f^{-1}(0) = \{x\in K^N \: | \: g_1(x) = \dots = g_a(x) = f_1(x) = \dots = f_m(x) = 0\}.$$
Now, since $X_0$ is smooth at $p$ there are $b\leq a$ polynomials among $g_1,\dots,g_a$ and $l\leq m$ polynomials among $f_1,\dots,f_m$ such that $b+l = m+N-\dim(X)$ and the vectors
$$(\nabla g_1)(p),\dots,(\nabla g_b)(p),(\nabla f_1)(p),\dots,(\nabla f_l)(p)$$
are linearly independent. Now, $l\leq m$ yields $b\geq N-\dim(X)$. On the other hand, $X$ is irreducible of codimension $N-\dim(X)$ and hence $b\leq N-\dim(X)$. We conclude that $b = N - \dim(X)$ and the vectors
$$(\nabla g_1)(p),\dots,(\nabla g_{N - \dim(X)})(p)$$
are linearly independent. So $X$ is smooth at $p$.
\end{proof}
\begin{Notation}\label{Not_ST_Q}
We will denote by $\sec_h(\mathcal{V}^n_2)^i$ the strict transform of $\sec_h(\mathcal{V}^n_2)$ in $\mathcal{Q}(n)_i$ for $h > i$. Furthermore, as already said in Construction \ref{ccq}, for simplicity of notation we will denote by $E_i^q$ both the exceptional divisor of $f_i$ and its strict transforms in the subsequent blow-ups.
\end{Notation}
In the following we will analyze the geometry of the $SL(n+1)$-orbits in the blow-ups $\mathcal{Q}(n)_i$ in Construction \ref{ccq}.
\begin{Proposition}\label{Qn_won}
For any $i = 0,\dots,n-1$ the variety $\mathcal{Q}(n)_i$ is smooth and the divisors $E^q_1,\dots,E^q_i$ are smooth and intersect transversally in $\mathcal{Q}(n)_i$. Furthermore, the strict transform $\sec_{i+1}(\mathcal{V}_2^n)^i$ of $\sec_{i+1}(\mathcal{V}_2^n)$ in $\mathcal{Q}(n)_i$ is smooth and the intersections among $\sec_{i+1}(\mathcal{V}_2^n)^i, E^q_1,\dots,E^q_i$ are transversal. The closures of the orbits of the $SL(n+1)$-action on $\mathcal{Q}(n)_i$ induced by (\ref{acss}) are given by all the possible intersections of $E^q_1,\dots,E^q_{i},\sec_{i+1}(\mathcal{V}_2^n)^i,\dots,\sec_{n}(\mathcal{V}_2^n)^i$ and $\mathcal{Q}(n)_i$ itself.
In particular, the variety $\mathcal{Q}(n)$ is smooth, the divisors $E^q_1,\dots,E^q_{n-1},\sec_n(\mathcal{V}_2^n)^{n-1}$ are smooth and the intersections among them are transversal, the closures of the orbits of the $SL(n+1)$-action on $\mathcal{Q}(n)$ induced by (\ref{acss}) are given by all the possible intersections of the divisors $E^q_1,\dots,E^q_{n-1},\sec_n(\mathcal{V}_2^n)^{n-1}$ and $\mathcal{Q}(n)$ itself. Hence, $\mathcal{Q}(n)$ is wonderful.
\end{Proposition}
\begin{proof}
We will proceed as follows. For $i = 0,1$ we will prove the statement for any $n$. Then we will prove that if for $i<j$ the statement holds for any $n$ then it also holds for $i = j$ and any $n$. This will prove the statement for any $n\geq 1$ and $i = 0,\dots, n-1$.
For $i = 0$ we have $\mathcal{Q}(n)_0\cong \mathbb{P}^N$, there are no exceptional divisors, and the closures of the orbits of the action (\ref{acss}) are the secant varieties of $\mathcal{V}_2^n$. Therefore, for $i = 0$ the statement holds for any $n$. Even though we could use the case $i = 0$ as the first step of the proof, to get acquainted with the arguments we will apply we develop the case $i = 1$ in full detail as well.
The variety $\mathcal{Q}(n)_1$ is the blow-up of $\mathbb{P}^N$ along the Veronese variety $\mathcal{V}_2^n$. Hence it is smooth. By Proposition \ref{tcones} $\sec_2(\mathcal{V}_2^n)$ is smooth away from $\mathcal{V}_2^n$ and $\sec_2(\mathcal{V}_2^n)^1\cap E^q_1\rightarrow \mathcal{V}_2^n$ is a fibration whose fibers are isomorphic to $\mathcal{V}_2^{n-1}$. Hence, Proposition \ref{smooth_fib} yields that $\sec_2(\mathcal{V}_2^n)^1\cap E^q_1$ is smooth and since $\dim(\sec_2(\mathcal{V}_2^n)^1\cap E^q_1) = n+n-1 = 2n-1 = \dim(\sec_2(\mathcal{V}_2^n)^1)-1$ we conclude that $\sec_2(\mathcal{V}_2^n)^1$ is smooth and the intersection $\sec_2(\mathcal{V}_2^n)^1\cap E^q_1$ is transversal.
Now, via the action of $SL(n+1)$ in (\ref{acss}) we can translate any fiber of $E^q_1$ over $\mathcal{V}_2^n$ to any other fiber. Fix one such fiber $E^q_{1,p}$. By Proposition \ref{tcones} we have that $\sec_h(\mathcal{V}_2^n)^1\cap E^q_{1,p} = \sec_{h-1}(\mathcal{V}_2^{n-1})$ and the action of $SL(n+1)$ in (\ref{acss}) restricts on $E^q_{1,p}$ to the corresponding action of $SL(n)$. This proves the statement about the orbits for $\mathcal{Q}(n)_1$ for any $n\geq 1$.
Assume that for any $i < j$ the statement holds for any $n$. Since $\mathcal{Q}(n)_{j-1}$ and $\sec_{j}(\mathcal{V}^n_2)^{j-1}\subset\mathcal{Q}(n)_{j-1}$ are smooth the blow-up $\mathcal{Q}(n)_{j}$ of $\mathcal{Q}(n)_{j-1}$ along $\sec_{j}(\mathcal{V}^n_2)^{j-1}$ is smooth as well. Furthermore, since all the intersections among $\sec_{j}(\mathcal{V}_2^n)^{j-1}, E^q_1,\dots,E^q_{j-1}$ in $\mathcal{Q}(n)_{j-1}$ are transversal we have that all the intersections among $E^q_1,\dots,E^q_{j}$ in $\mathcal{Q}(n)_{j}$ are transversal as well.
Now, consider an intersection of the following form $\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t}$. By Proposition \ref{tcones} the restriction of the blow-down morphism
$$\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t}\rightarrow E^q_{j_1}\cap \dots \cap E^q_{j_{t-1}}\cap \sec_{j_t}(\mathcal{V}^n_2)^{j_t-1}$$
has fibers isomorphic to $\sec_{j-j_t+1}(\mathcal{V}^{n-j_t}_2)^{j-j_t}$. Since both $E^q_{j_1}\cap \dots \cap E^q_{j_{t-1}}\cap \sec_{j_t}(\mathcal{V}^n_2)^{j_t-1}$ and $\sec_{j-j_t+1}(\mathcal{V}^{n-j_t}_2)^{j-j_t}$ are smooth Proposition \ref{smooth_fib} yields that $\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t}$ is smooth as well. Moreover, note that
$$
\dim(\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t}) = \dim(E^q_{j_1}\cap \dots \cap E^q_{j_{t-1}}\cap \sec_{j_t}(\mathcal{V}^n_2)^{j_t-1})+ \dim(\sec_{j-j_t+1}(\mathcal{V}^{n-j_t})^{j-j_t})
$$
and
$$\dim(E^q_{j_1}\cap \dots \cap E^q_{j_{t-1}}\cap \sec_{j_t}(\mathcal{V}^n_2)^{j_t-1}) = \frac{2nj_t-j_t^2+3j_t-2}{2}-(t-1)$$
yield that $\dim(\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t})$ is given by
$$
\begin{array}{l}
\frac{2nj_t-j_t^2+3j_t-2}{2}-(t-1) + \frac{2(n-j_t)(j-j_t+1)-(j-j_t+1)^2+3(j-j_t+1)-2}{2} = \\
\frac{2n(j+1)-(j+1)^2+3(j+1)-2}{2}-t = \dim(\sec_{j+1}(\mathcal{V}^n_2)^j) - t
\end{array}
$$
and hence the intersection $\sec_{j+1}(\mathcal{V}^n_2)^j\cap E^q_{j_1}\cap\dots \cap E^q_{j_t}$ is transversal.
In the following we prove the claim about the orbit closures. If an orbit closure in $\mathcal{Q}(n)_i$ is not contained in the exceptional divisor $E^q_i$ then it is the strict transform of an orbit closure in $\mathcal{Q}(n)_{i-1}$, and hence it is given as an intersection among $E^q_1,\dots, E^q_{i-1},\sec_{i+1}(\mathcal{V}^n_2)^i,\dots, \sec_{n}(\mathcal{V}^n_2)^i$.
Now, let us analyze the orbit closures in the exceptional divisor $E^q_i$. The fibers of $E^q_i$ over $\sec_i(\mathcal{V}^n_2)^{i-1}$ are projective spaces of dimension
$$N_{n-i} = \binom{n-i+2}{2}-1.$$
Moreover, $SL(n+1)$ acts transitively on fibers that lie over the same orbit in $\mathcal{Q}(n)_{i-1}$. Note that by Proposition \ref{tcones} $\sec_h(\mathcal{V}^n_2)^i$ intersects each of these $N_{n-i}$-dimensional projective spaces along $\sec_{h-i}(\mathcal{V}^{n-i}_2)$, and the $SL(n+1)$-action on $\mathcal{Q}(n)_{i}$ in (\ref{acss}) induces the corresponding $SL(n-i+1)$-action on the fibers of $E^q_i$. Finally, the statement on the orbit closures in $\mathcal{Q}(n)_{i}$ then follows from the statement on the orbit closures in $\mathcal{Q}(n-i)_{0}$.
\end{proof}
\section{Complete symmetric symplectic forms}\label{CSSF}
From now on we will consider the case $n+1 = 2r$ even. Let $Sp(2r)$ be the symplectic group of $2r\times 2r$ symplectic matrices, that is
$$Sp(2r) = \{M\in \Hom(V,V) \: | \: M^t \Omega M = \Omega \}$$
where
\stepcounter{thm}
\begin{equation}\label{symform}
\Omega = \left(
\begin{array}{cc}
0 & I_{r,r} \\
-I_{r,r} & 0
\end{array}
\right)
\end{equation}
is the standard symplectic form. Over an algebraically closed field of characteristic zero the symplectic group is a non-compact, irreducible, simply connected, simple Lie group.
\begin{Remark}\label{tan_symp}
Let us write a $2r\times 2r$ matrix $M$ as
$$
M = \left(
\begin{array}{cc}
A & B \\
C & D
\end{array}
\right)
$$
where $A,B,C,D$ are four $r\times r$ matrices. The condition of being symplectic translates then into the following system of equations
$$
\left\lbrace
\begin{array}{lll}
-C^tA+A^tC & = & 0_{r,r};\\
-C^tB+A^tD & = & I_{r,r};\\
-D^tA+B^tC & = & -I_{r,r};\\
-D^tB+B^tD & = & 0_{r,r}.
\end{array}\right.
$$
Considering the transformation
\stepcounter{thm}
\begin{equation}\label{trasl}
\left(\begin{array}{cc}
A & B\\
C & D
\end{array}\right)\mapsto
\left(\begin{array}{cc}
A-I_{r,r} & B\\
C & D-I_{r,r}
\end{array}\right)
\end{equation}
we get the following relations for the tangent space of $Sp(2r)$ at the identity
$$A = -D^t,\: B = B^t,\: C = C^t.$$
Hence, the tangent space of $Sp(2r)$ at the identity is the Lie algebra $\mathfrak{sp}(2r,K)$ consisting of $2r\times 2r$ matrices of the form
$$
\left(
\begin{array}{cc}
A & B \\
C & -A^t
\end{array}
\right)
$$
with $C$ and $B$ symmetric. In particular, $\dim(Sp(2r)) = r^2+2\frac{r(r+1)}{2} = r(2r+1)$.
\end{Remark}
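The block description of $\mathfrak{sp}(2r,K)$ in Remark \ref{tan_symp} can be verified directly: a matrix of the displayed form satisfies the infinitesimal symplectic condition $X^t\Omega+\Omega X = 0$, and the parameter count gives $r(2r+1)$. A small numerical sketch over the integers (Python; the helper names are ours):

```python
from random import randint

def mat_mul(X, Y):
    # Product of two matrices given as lists of rows.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def omega(r):
    # Standard symplectic form [[0, I], [-I, 0]].
    O = [[0] * (2 * r) for _ in range(2 * r)]
    for i in range(r):
        O[i][r + i] = 1
        O[r + i][i] = -1
    return O

r = 3
A = [[randint(-4, 4) for _ in range(r)] for _ in range(r)]
S = [[randint(-4, 4) for _ in range(r)] for _ in range(r)]
B = [[S[i][j] + S[j][i] for j in range(r)] for i in range(r)]  # symmetric
T = [[randint(-4, 4) for _ in range(r)] for _ in range(r)]
C = [[T[i][j] + T[j][i] for j in range(r)] for i in range(r)]  # symmetric
At = transpose(A)

# X = [[A, B], [C, -A^t]] as in the remark.
X = [A[i] + B[i] for i in range(r)] + \
    [C[i] + [-At[i][j] for j in range(r)] for i in range(r)]

O = omega(r)
L = mat_mul(transpose(X), O)
R = mat_mul(O, X)
# Infinitesimal symplectic condition: X^t Omega + Omega X = 0.
assert all(L[i][j] + R[i][j] == 0
           for i in range(2 * r) for j in range(2 * r))

# Parameter count: r^2 (for A) + 2 * r(r+1)/2 (for B and C) = r(2r+1).
assert r * r + r * (r + 1) == r * (2 * r + 1)
```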
\begin{Remark}\label{borelsym}
By \cite[Section 1]{ou} the Borel subgroup of the symplectic group can be described as follows:
$$\mathscr{B}= \Bigl\{
\begin{pmatrix}
A & 0_{r,r}\\
B & A^{-t}
\end{pmatrix}
\text{ with } A^tB=B^tA\Bigr\}$$
where $A \in GL(r)$ is lower triangular and $B$ is a general $r \times r$ matrix.
\end{Remark}
Now, $Sp(2r)$ is a subgroup of $SL(n+1)$ and the $SL(n+1)$-action (\ref{acss}) restricts to the following $Sp(2r)$-action:
\stepcounter{thm}
\begin{equation}\label{acsimp}
\begin{array}{ccc}
Sp(2r)\times \mathbb{P}^{N} & \longrightarrow & \mathbb{P}^{N}\\
(M,Z) & \longmapsto & MZM^{t}
\end{array}
\end{equation}
We denote by $O_{2r}$ the $Sp(2r)$-orbit of the identity in $\mathbb{P}^{N}$ and by $X_{2r} = \overline{O_{2r}}\subseteq\mathbb{P}^{N}$ its closure.
\begin{Proposition}\label{orb_dim}
Let $Y_k = \overline{O_k}\subset\mathbb{P}^{N}$ be the closure of the $Sp(2r)$-orbit of the matrix
$$I_k = \left(\begin{array}{cc}
I_{k,k} & 0_{k,2r-k}\\
0_{2r-k,k} & 0_{2r-k,2r-k}
\end{array}\right)$$
via the action in (\ref{acss}). If $k\leq r$ then
$$\dim(Y_k) = r(2r+1)-\frac{k(k-1)}{2}-r(r-k)-\frac{r(r+1)}{2}-\frac{(r-k)(r-k+1)}{2}-1=2rk+k-k^2-1.$$
Finally, $\dim(Y_{2r}) = r(r+1)$.
\end{Proposition}
\begin{proof}
Our aim is to compute the dimension of the stabilizer $H\subset Sp(2r)$ of $I_k$. Consider the incidence correspondence
\[
\begin{tikzpicture}[xscale=1.5,yscale=-1.5]
\node (A0_1) at (1, 0) {$\mathcal{I} = \{(M,\lambda)\: | \: MI_k M^t = \lambda I_k\}\subseteq Sp(2r) \times K^{*}$};
\node (A1_0) at (0, 1) {$Sp(2r)$};
\node (A1_2) at (2, 1) {$K^{*}$};
\path (A0_1) edge [->]node [auto] {$\scriptstyle{\psi}$} (A1_2);
\path (A0_1) edge [->]node [auto,swap] {$\scriptstyle{\phi}$} (A1_0);
\end{tikzpicture}
\]
Note that the fibers of $\psi$ are isomorphic subgroups of $Sp(2r)$. We will compute the dimension of $H_1 = \psi^{-1}(1)$ and then the dimension of $H = \phi(\mathcal{I})$ will be given by
\stepcounter{thm}
\begin{equation}\label{dim_stab}
\dim(H) = \dim(\mathcal{I}) = \dim(H_1)+1.
\end{equation}
Consider first the case $k\leq r$. Subdivide as usual the matrices in $Sp(2r)$ in four $r\times r$ blocks and write the matrix whose orbit we want to study as
$$\left(\begin{array}{cc}
Z_{k} & 0_{r,r}\\
0_{r,r} & 0_{r,r}
\end{array}\right)$$
where $Z_k$ is the following $r\times r$ matrix
$$Z_k = \left(\begin{array}{cc}
I_{k,k} & 0_{k,r-k}\\
0_{r-k,k} & 0_{r-k,r-k}
\end{array}\right).$$
Now, we have
$$\left(\begin{array}{cc}
A & B\\
C & D
\end{array}\right)\left(\begin{array}{cc}
Z_{k} & 0_{r,r}\\
0_{r,r} & 0_{r,r}
\end{array}\right)
\left(\begin{array}{cc}
A^{t} & C^{t}\\
B^{t} & D^{t}
\end{array}\right) =
\left(\begin{array}{cc}
AZ_{k}A^{t} & AZ_{k}C^{t}\\
CZ_{k}A^{t} & CZ_{k}C^{t}
\end{array}\right).
$$
Subdivide the matrix $A$ in blocks as follows
$$A = \left(\begin{array}{cc}
A_{k,k} & A_{k,r-k}\\
A_{r-k,k} & A_{r-k,r-k}
\end{array}\right).
$$
Then
$$
\left(\begin{array}{cc}
A_{k,k} & A_{k,r-k}\\
A_{r-k,k} & A_{r-k,r-k}
\end{array}\right)
\left(\begin{array}{cc}
I_{k,k} & 0_{k,r-k}\\
0_{r-k,k} & 0_{r-k,r-k}
\end{array}\right)
\left(\begin{array}{cc}
A_{k,k}^t & A_{r-k,k}^t\\
A_{k,r-k}^t & A_{r-k,r-k}^t
\end{array}\right) =
\left(\begin{array}{cc}
A_{k,k}A_{k,k}^t & A_{k,k}A_{r-k,k}^t\\
A_{r-k,k}A_{k,k}^t & A_{r-k,k}A_{r-k,k}^t
\end{array}\right).
$$
Therefore, considering the transformation (\ref{trasl}) we get the following relations for the tangent space of $H_1$ at the identity
$$A_{k,k} = -A_{k,k}^t, \: A_{r-k,k} = 0_{r-k,k}.$$
Moreover, subdividing $C$ as we did for $A$, we get that the matrix
$$
\left(\begin{array}{cc}
A_{k,k} & A_{k,r-k}\\
A_{r-k,k} & A_{r-k,r-k}
\end{array}\right)
\left(\begin{array}{cc}
I_{k,k} & 0_{k,r-k}\\
0_{r-k,k} & 0_{r-k,r-k}
\end{array}\right)
\left(\begin{array}{cc}
C_{k,k}^t & C_{r-k,k}^t\\
C_{k,r-k}^t & C_{r-k,r-k}^t
\end{array}\right) =
\left(\begin{array}{cc}
A_{k,k}C_{k,k}^t & A_{k,k}C_{r-k,k}^t\\
A_{r-k,k}C_{k,k}^t & A_{r-k,k}C_{r-k,k}^t
\end{array}\right)
$$
must be zero. This yields the following further relations for the tangent space of $H_1$ at the identity
$$C_{k,k} = 0_{k,k},\: C_{r-k,k} = 0_{r-k,k}.$$
Plugging in the relations for the tangent space at the identity of $Sp(2r)$ in Remark \ref{tan_symp} we get that the tangent space at the identity of $H_1$ is given by matrices of the form
$$
\left(\begin{array}{cc}
A & B\\
C & -A^t
\end{array}\right)
$$
where $B = B^t$ and $C = C^t$. Note that $C = C^t, C_{k,k} = 0_{k,k},\: C_{r-k,k} = 0_{r-k,k}$ yield $C_{k,r-k} = 0_{k,r-k}$ and $C_{r-k,r-k} = C_{r-k,r-k}^t$.
Hence $A$ depends on $\frac{k(k-1)}{2}+k(r-k)+(r-k)^2$ parameters, $B$ depends on $\frac{r(r+1)}{2}$ parameters and $C$ depends on $\frac{(r-k)(r-k+1)}{2}$ parameters. Then by (\ref{dim_stab}) we get
$$\dim(H) = \frac{k(k-1)}{2}+k(r-k)+(r-k)^2+\frac{r(r+1)}{2}+\frac{(r-k)(r-k+1)}{2}+1$$
and
$$\dim(Y_k) = \dim(Sp(2r))-\dim(H) = r(2r+1)-\dim(H)$$
yields the formula in the statement.
Finally, consider the case $k = 2r$, and let $H\subset Sp(2r)$ be the stabilizer of the identity matrix. The equality
$$\left(\begin{array}{cc}
A & B\\
C & D
\end{array}\right)\left(
\begin{array}{cc}
I_{r,r} & 0_{r,r}\\
0_{r,r} & I_{r,r}
\end{array}\right)
\left(\begin{array}{cc}
A^{t} & C^{t}\\
B^{t} & D^{t}
\end{array}\right) =
\left(\begin{array}{cc}
AA^{t}+BB^t & AC^{t}+BD^t\\
CA^{t}+DB^t & CC^{t}+DD^t
\end{array}\right) =
\left(\begin{array}{cc}
\lambda I_{r,r} & 0_{r,r}\\
0_{r,r} & \lambda I_{r,r}
\end{array}\right)
$$
for some $\lambda\in K^{*}$ yields, applying as usual the transformation (\ref{trasl}), the following system of equations
$$
\left\lbrace
\begin{array}{lll}
-C^t(A-I_{r,r})+(A-I_{r,r})^tC & = & 0_{r,r};\\
-C^tB^t+(A-I_{r,r})^t(D+I_{r,r}) & = & (D-I_{r,r})^t(A-I_{r,r})-B^tC;\\
-(D-I_{r,r})^tB+B^t(D-I_{r,r}) & = & 0_{r,r}.
\end{array}\right.
$$
Note that if $M\in H$, taking determinants on both sides of $M^t\Omega M = \lambda\Omega$ we see that $\lambda$ can take only finitely many values. Hence, by Remark \ref{tan_symp} we have the following relations for the tangent space of $H$ at the identity
$$A = -D^t,\: B = B^t,\: C = C^t,\: C = -B^t,\: A = -A^t.$$
Therefore, the tangent space consists of matrices of the following form
$$
\left(\begin{array}{cc}
A & B\\
-B^t & -A^t
\end{array}\right)
$$
with $B = B^t$ and $A = -A^t$. We conclude that
$$\dim(H) = \frac{r(r+1)}{2}+\frac{r(r-1)}{2} = r^2$$
and hence $\dim(Y_{2r}) = r(2r+1)-r^2 = r(r+1)$.
\end{proof}
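The simplification of $\dim(Y_k)$ carried out in the proof can be double-checked mechanically. The following Python sketch (the helper name is ours) compares the expanded stabilizer count with the closed form $2rk+k-k^2-1$ of Proposition \ref{orb_dim}:

```python
def dim_Yk(r, k):
    # dim Sp(2r) minus the stabilizer dimension computed in the proof, k <= r.
    dim_sp = r * (2 * r + 1)
    dim_H = (k * (k - 1)) // 2 + k * (r - k) + (r - k) ** 2 \
            + (r * (r + 1)) // 2 + ((r - k) * (r - k + 1)) // 2 + 1
    return dim_sp - dim_H

for r in range(1, 10):
    for k in range(1, r + 1):
        # Closed form stated in the proposition.
        assert dim_Yk(r, k) == 2 * r * k + k - k * k - 1
    # k = 2r: dim Y_2r = r(2r+1) - r^2 = r(r+1).
    assert r * (2 * r + 1) - r * r == r * (r + 1)
```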
\begin{Corollary}\label{prop_dim}
The projective variety $X_{2r}$ is irreducible and its dimension is given by $\dim(X_{2r}) = r(r+1)$.
\end{Corollary}
\begin{proof}
The variety $X_{2r}$ is the closure of an $Sp(2r)$-orbit, so it is irreducible. Since $X_{2r} = Y_{2r}$ the formula for its dimension follows from Proposition \ref{orb_dim}.
\end{proof}
\begin{Example}
Consider the case $r = 1$. Then Corollary \ref{prop_dim} yields $\dim(X_{2}) = 2$ and hence $X_{2} = \mathbb{P}^2$. Moreover, $O_{2} = \mathbb{P}^2\setminus C$ where $C\subset\mathbb{P}^2$ is the conic parametrizing rank one matrices.
\end{Example}
\begin{Remark}\label{equations}
We work out equations for $X_{2r}$. The points of the orbit $O_{2r}$ represent symmetric matrices having a scalar multiple that is symplectic, that is $Z^t \Omega Z = \lambda \Omega$ for some $\lambda\in K^{*}$. The matrix $N = Z^t \Omega Z$ is skew-symmetric and so $N_{i,i} = 0$ for $i = 0,\dots, 2r-1$. Furthermore, for any $i = 0,\dots,2r-2$ we must have
$$N_{i,i+1} = \dots = N_{i,r+i-1} = N_{i,r+i+1} = \dots = N_{i,2r-1} = 0.$$
This gives $2r-i-2$ quadratic equations for any $i = 0,\dots,r-1$, and $2r-i-1$ quadratic equations for any $i = r,\dots, 2r-1$. Moreover, we must have
$$N_{0,r} = N_{1,r+1} = \dots = N_{r-1,2r-1}$$
and hence we get $r-1$ additional quadratic equations. Summing up, we get
$$\sum_{i=0}^{r-1} (2r-i-2) + \sum_{i = r}^{2r-1}(2r-i-1)+ r-1 = (2r+1)(r-1)$$
quadratic equations for $X_{2r}$ in $\mathbb{P}^{N}$.
Now, we explicitly compute these equations. Consider a general symmetric matrix $Z=(z_{i,j})_{i,j=0,\dots,2r-1}$ with $z_{i,j}=z_{j,i}$ and the standard symplectic form
$\Omega$. Then
\begin{align*}
c_{i,j} := (Z \cdot \Omega)_{i,j} = \sum_{k=0}^{2r-1} z_{i,k} \Omega_{k,j} = \begin{cases} z_{i,j-r} & \text{ for } j \ge r; \\ -z_{i,j+r} & \text{ for } j < r; \\ \end{cases}
\end{align*}
and so
\begin{align*}
N_{i,j}:=(Z \cdot \Omega \cdot Z)_{i,j} &= \sum_{k=0}^{2r-1} c_{i,k} z_{k,j} = \sum_{k=0}^{r-1} c_{i,k} z_{k,j} + \sum_{k=r}^{2r-1} c_{i,k} z_{k,j} = \sum_{k=0}^{r-1} -z_{i,k+r} z_{k,j} + z_{i,k} z_{k+r,j}.
\end{align*}
Summing up, the equations
$$\begin{cases}
N_{l,r+l}-N_{l+1,r+l+1}=0 &\text{ for } l=0, \dots, r-2;\\
N_{i,j}=0 &\text{ for } i=0, \dots, 2r-2 , \, j > i, \, j \neq r+i;\\
\end{cases}$$
can be explicitly written as follows
\begin{equation*}
\begin{footnotesize}
\begin{cases}
\sum_{k=0}^{r-1}-z_{l,k+r}z_{k,r+l}+z_{l,k}z_{k+r,r+l}+z_{l+1,k+r}z_{k,r+l+1}-z_{l+1,k}z_{k+r,r+l+1}=0 &\text{ for } l=0, \dots, r-2;\\
\sum_{k=0}^{r-1} -z_{i,k+r} z_{k,j} + z_{i,k} z_{k+r,j}=0 &\text{ for } i=0, \dots, 2r-2, \, j > i, \, j \neq r+i.\\
\end{cases}
\end{footnotesize}
\end{equation*}
\end{Remark}
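The count of $(2r+1)(r-1)$ equations and their explicit form can be tested numerically. The sketch below (Python; the helper names are ours) builds $N = Z\Omega Z$ following the conventions of Remark \ref{equations}, checks that $N$ is skew-symmetric whenever $Z$ is symmetric, and verifies that a diagonal matrix with constant products $d_id_{r+i}$ satisfies all the equations:

```python
from random import randint

def omega(r):
    # Standard symplectic form [[0, I], [-I, 0]].
    O = [[0] * (2 * r) for _ in range(2 * r)]
    for i in range(r):
        O[i][r + i] = 1
        O[r + i][i] = -1
    return O

def N_matrix(Z, r):
    # N = Z * Omega * Z, as in the remark.
    O, n = omega(r), 2 * r
    ZO = [[sum(Z[i][k] * O[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(ZO[i][k] * Z[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def equations(Z, r):
    # The (2r+1)(r-1) quadratic equations of the remark, evaluated at Z.
    N = N_matrix(Z, r)
    eqs = [N[l][r + l] - N[l + 1][r + l + 1] for l in range(r - 1)]
    eqs += [N[i][j] for i in range(2 * r - 1)
            for j in range(i + 1, 2 * r) if j != r + i]
    return eqs

r = 3
Z = [[randint(-3, 3) for _ in range(2 * r)] for _ in range(2 * r)]
Z = [[Z[i][j] + Z[j][i] for j in range(2 * r)] for i in range(2 * r)]  # symmetrize

# Equation count: (2r+1)(r-1).
assert len(equations(Z, r)) == (2 * r + 1) * (r - 1)

# N is skew-symmetric for any symmetric Z.
N = N_matrix(Z, r)
assert all(N[i][j] == -N[j][i] for i in range(2 * r) for j in range(2 * r))

# A diagonal Z with d_i * d_{r+i} constant lies on X_2r: all equations vanish.
d = [2, 3, 6, 3, 2, 1]  # d_0*d_3 = d_1*d_4 = d_2*d_5 = 6
D = [[d[i] if i == j else 0 for j in range(2 * r)] for i in range(2 * r)]
assert all(e == 0 for e in equations(D, r))
```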
Now, our aim is to construct a wonderful compactification of the space of complete symmetric symplectic forms.
\begin{Construction}\label{ccssf}
Set $S_h(\mathcal{V}^{2r-1}_2):=\sec_{h}(\mathcal{V}^{2r-1}_2) \cap X_{2r}$. Let us consider the following sequence of blow-ups:
\begin{itemize}
\item[-] $X_{2r}^{(1)}$ is the blow-up of $X_{2r}^{(0)}:=X_{2r}$ along the Veronese variety $\mathcal{V}^{2r-1}_2 \subset X_{2r}$;
\item[-] $X_{2r}^{(2)}$ is the blow-up of $X_{2r}^{(1)}$ along the strict transform of $S_2(\mathcal{V}^{2r-1}_2)$;\\
$\vdots$
\item[-] $X_{2r}^{(i)}$ is the blow-up of $X_{2r}^{(i-1)}$ along the strict transform of $S_i(\mathcal{V}^{2r-1}_2)$;\\
$\vdots$
\item[-] $X_{2r}^{(r-1)}$ is the blow-up of $X_{2r}^{(r-2)}$ along the strict transform of $S_{r-1}(\mathcal{V}^{2r-1}_2)$.
\end{itemize}
Let $f_i:X_{2r}^{(i)}\rightarrow X_{2r}^{(i-1)}$ be the blow-up morphism. We will denote by $E_i$ both the exceptional divisor of $f_i$ and its strict transforms in the subsequent blow-ups. We set $\mathcal{S}_{2r}:=X_{2r}^{(r-1)}$ and we indicate with $f: \mathcal{S}_{2r} \rightarrow X_{2r}$ the composition of the $f_i$.
\end{Construction}
Let $M_{2r,2r}(K)$ be the space of $2r \times 2r$ matrices. Following \cite{defa} we define the operator
$$
\begin{array}{cccc}
\Phi_\Omega: & M_{2r,2r}(K) & \longrightarrow & M_{2r,2r}(K)\\
& A & \mapsto & \Omega^{-1}A^T\Omega
\end{array}
$$
\begin{Definition}
A matrix $A \in M_{2r,2r}(K)$ is symplectically congruent to a matrix $B \in M_{2r,2r}(K)$ if there exists a symplectic matrix $Q$ such that $QAQ^T=B$.
\end{Definition}
By \cite[Theorem 21]{defa} a matrix $A \in M_{2r,2r}(K)$ is symplectically congruent to a diagonal matrix if and only if $A$ is symmetric and $A\Phi_\Omega(A)$ is diagonalizable.
\begin{Proposition}\label{fun}
The quadratic equations in Remark \ref{equations} cut out $X_{2r}$ set-theoretically. Furthermore, $Y_i = \sec_{i}(\mathcal{V}^{2r-1}_2) \cap X_{2r}$ set-theoretically, and there is a stratification
$$Y_{1}\subset Y_2\subset\dots\subset Y_{r-1}\subset Y_{r}\subset Y_{2r} = X_{2r}.$$
In particular, $\dim(\sec_{i}(\mathcal{V}^{2r-1}_2) \cap X_{2r}) = 2ri+i-i^2-1$ for $i = 1,\dots,r$ and $Y_{r}$ is a divisor in $X_{2r}$.
\end{Proposition}
\begin{proof}
Let $Z$ be a symmetric matrix satisfying the equations in Remark \ref{equations}. Then we have two cases:
\begin{itemize}
\item[(i)] $N_{0,r} = \dots = N_{r-1,2r-1} = \lambda\in K^{*}$;
\item[(ii)] $N_{0,r} = \dots = N_{r-1,2r-1} = 0$.
\end{itemize}
Consider (i). Then $Z^{t}\Omega Z = \lambda\Omega$ and $\det(Z) \neq 0$. Moreover,
$$Z\Phi_\Omega(Z) = Z\Omega^{-1}Z^{t}\Omega = -Z^{t}\Omega Z\Omega = -\lambda\Omega^2 = \lambda I_{2r,2r}$$
and by \cite[Theorem 21]{defa} $Z$ is symplectically congruent to a diagonal matrix.
In case (ii) $Z^{t}\Omega Z$ is the zero matrix. So $\det(Z) = 0$, and $Z\Phi_\Omega(Z)$ is the zero matrix as well. Again, \cite[Theorem 21]{defa} yields that $Z$ is symplectically congruent to a diagonal matrix.
So if $Z$ is a symmetric matrix satisfying the equations in Remark \ref{equations}, there is a symplectic matrix $Q$ such that $QZQ^{t} = D$ with $D$ diagonal. Our aim is to prove that, under the action of the symplectic group, $D$ can be transformed into a matrix of the form $I_k$, where $k$ is the rank of $D$.
Let $D_{\alpha} = \diag(\alpha_1,\dots,\alpha_{2r})$ be a diagonal matrix satisfying the equations in Remark \ref{equations}. Then either $\alpha_i\alpha_{r+i} = 0$ for all $i = 1,\dots,r$, or $\alpha_i \alpha_{r+i} = \lambda\in K^{*}$ for all $i = 1,\dots,r$. Write
$$
D_{\alpha} = \left(
\begin{array}{cc}
D_{\alpha_1,\dots,\alpha_p} & 0_{r,r} \\
0_{r,r} & D_{\alpha_{p+1},\dots,\alpha_{p+q}}
\end{array} \right)
$$
with $p+q\leq r$, where $D_{\alpha_1,\dots,\alpha_p}$ is an $r\times r$ diagonal matrix with the $\alpha_i$ appearing on the diagonal, and similarly for $D_{\alpha_{p+1},\dots,\alpha_{p+q}}$. Note that up to permuting the upper and lower diagonal simultaneously we may assume that $\alpha_1,\dots,\alpha_p$ are the first $p$ entries on the diagonal of $D_{\alpha_1,\dots,\alpha_p}$, and $\alpha_{p+1},\dots,\alpha_{p+q}$ are the last $q$ entries on the diagonal of $D_{\alpha_{p+1},\dots,\alpha_{p+q}}$.
Now, set $p+q = r$. Let $A,B,C,D$ be $r\times r$ matrices defined as follows:
\begin{itemize}
\item[-] the first $p$ entries on the diagonal of $A$ are $a_i \in K^{*}$ for $i = 1,\dots,p$, and the other entries are zero;
\item[-] the last $q$ entries on the diagonal of $B$ are $-b_i^{-1} \in K^{*}$ for $i = p+1,\dots,p+q$, and the other entries are zero;
\item[-] the last $q$ entries on the diagonal of $C$ are $b_i\in K^{*}$ for $i = p+1,\dots,p+q$, and the other entries are zero;
\item[-] the first $p$ entries on the diagonal of $D$ are $a_i^{-1}\in K^{*}$ for $i = 1,\dots,p$, and the other entries are zero.
\end{itemize}
Consider the matrix
$$
P = \left(
\begin{array}{cc}
A & B \\
C & D
\end{array}\right)
$$
and note that $P$ is symplectic. Furthermore, by taking $a_i,b_j$ such that $a_i^2 = \alpha_i$ for $i = 1,\dots,p$, and $b_j^{-2} = \alpha_j$ for $j = p+1,\dots,p+q$ we have $P^{t}I_{r}P = D_{\alpha}$ when $p+q = r$.
If $p+q < r$, by permuting the upper diagonal of $I_{p+q}$ we transform $I_{p+q}$ into the matrix $I_{p+q}^{*}$ whose entries on the diagonal are $(I_{p+q}^{*})_{i,i} = 1$ for $i = 1,\dots, p$, $(I_{p+q}^{*})_{i,i} = 0$ for $i = p+1,\dots, p+s$, $(I_{p+q}^{*})_{i,i} = 1$ for $i = p+s+1,\dots, p+s+q$, and $(I_{p+q}^{*})_{i,i} = 0$ for $i = p+s+q+1,\dots, 2r$, where $p+s+q = r$. In this case consider $r\times r$ diagonal matrices $\overline{A},\overline{B},\overline{C},\overline{D}$ such that
\begin{itemize}
\item[-] the first $p$ entries on the diagonal of $\overline{A}$ are $a_i \in K^{*}$ for $i = 1,\dots,p$, $(\overline{A})_{i,i} = 1$ for $i = p+1,\dots,p+s$, $(\overline{A})_{i,i} = 0$ for $i = p+s+1,\dots,p+s+q$;
\item[-] the first $p+s$ entries on the diagonal of $\overline{B}$ are zero, followed by $-b_{p+1}^{-1},\dots,-b_{p+q}^{-1}$;
\item[-] the first $p+s$ entries on the diagonal of $\overline{C}$ are zero, followed by $b_{p+1},\dots,b_{p+q}$;
\item[-] the first $p$ entries on the diagonal of $\overline{D}$ are $a_i^{-1} \in K^{*}$ for $i = 1,\dots,p$, $(\overline{D})_{i,i} = 1$ for $i = p+1,\dots,p+s$, $(\overline{D})_{i,i} = 0$ for $i = p+s+1,\dots,p+s+q$;
\end{itemize}
and set
$$
\overline{P} = \left(
\begin{array}{cc}
\overline{A} & \overline{B} \\
\overline{C} & \overline{D}
\end{array}\right).
$$
Then $\overline{P}$ is symplectic and, again taking $a_i,b_j$ such that $a_i^2 = \alpha_i$ for $i = 1,\dots,p$, and $b_j^{-2} = \alpha_j$ for $j = p+1,\dots,p+q$, it holds that $\overline{P}^{t}I_{p+q}^{*}\overline{P} = D_{\alpha}$.
Furthermore, when $D_{\alpha}$ is of maximal rank we consider the diagonal symplectic matrix
$$P = \diag(a_{1},\dots,a_r,a_{1}^{-1},\dots,a_{r}^{-1}).$$
Note that taking $a_{i}\in K^{*}$ such that $a_i^2 = \alpha_i\mu^{-1}$, with $\mu^2 = \lambda$, for $i = 1,\dots,r$, we get that $P^{t}I_{2r,2r}P$ is a scalar multiple of $D_{\alpha}$. Consider the matrices
$$
\Psi_t = \left(
\begin{array}{cc}
I_{r,r} & 0_{r,r} \\
0_{r,r} & T_{r,r}
\end{array}
\right);\quad
\Lambda_t = \left(
\begin{array}{ccc}
I_{k,k} & 0 & 0_{k,2r-k-1}\\
0 & t & 0_{1,2r-k-1}\\
0_{2r-k-1,k} & 0_{2r-k-1,1} & 0_{2r-k-1,2r-k-1}
\end{array} \right) \quad \text{for}\: k = 1,\dots,r-1
$$
where $T_{r,r} = \diag(t,\dots,t)$. By the first part of the proof we have that $\{\Psi_t\}_{t\in K^{*}}$ is a family of matrices in $O_{2r}$, and $\lim_{t\to 0}\Psi_t = I_{r}$. Furthermore, $\{\Lambda_t\}_{t\in K^{*}}$ is a family of matrices in $O_{k+1}$, and $\lim_{t\to 0}\Lambda_{t} = I_{k}$ for $k = 1,\dots,r-1$.
Summing up, we proved that if $Z$ is a symmetric matrix of rank $k$, with $1\leq k\leq r$ or $k = 2r$, satisfying the equations in Remark \ref{equations}, then $Z$ can be symplectically transformed into the matrix $I_k$, and hence it lies in $O_k$.
\end{proof}
\begin{Remark}\label{Ver_c}
Proposition \ref{fun} yields that the Veronese variety $\mathcal{V}^{2r-1}_2$ is contained in $X_{2r}$. On the other hand, for $h\geq 2$ the secant variety $\sec_h(\mathcal{V}^{2r-1}_2)$ is not contained in $X_{2r}$.
\end{Remark}
\begin{Proposition}\label{sec}
For any $k \ge r$ we have that $X_{2r} \cap \sec_{r}(\mathcal{V}^{2r-1}_2) = X_{2r} \cap \sec_{k}(\mathcal{V}^{2r-1}_2)$.
\end{Proposition}
\begin{proof}
Assume there is a matrix $M \in X_{2r} \cap \sec_{k}(\mathcal{V}^{2r-1}_2)$ of rank $r< k < 2r-1$. Arguing as in the proof of Proposition \ref{fun}, we can move $M$ with the action of $Sp(2r)$ to a diagonal matrix $D_k$ of rank $k$, and $D_{k}\in X_{2r} \cap \sec_{k}(\mathcal{V}^{2r-1}_2)$. However, $D_{k}$ does not satisfy the equation $N_{j,r+j}-N_{j+1, r+j+1}=0$ for $X_{2r}$ in Remark \ref{equations}, a contradiction.
\end{proof}
We analyze in detail the geometry of the objects we introduced in the first non-trivial case, namely $r = 2$.
\begin{Proposition}\label{x4g14}
The variety $X_4$ is isomorphic to the Grassmannian $\mathbb{G}(1,4)\subset\mathbb{P}^9$ of lines in $\mathbb{P}^4$. Furthermore, $\mathcal{V}_2^{3}\subset X_4$, and $S_2(\mathcal{V}_2^{3})\subset X_4$ is an irreducible and reduced divisor singular along $\mathcal{V}_2^{3}$. In particular, the equations in Remark \ref{equations} cut out $X_4$ scheme-theoretically, and $S_2(\mathcal{V}_2^{3}) = Y_2$ scheme-theoretically.
\end{Proposition}
\begin{proof}
Consider homogeneous coordinates $z_{i,j}$ on $\mathbb{P}^9$ and identify them with the entries of a general $4\times 4$ symmetric matrix $Z$. The change of variables $z_{0,0}\mapsto z_{1,2}, z_{0,1}\mapsto z_{0,1}, z_{0,2}\mapsto \frac{z_{1,4}-z_{2,3}}{2}, z_{0,3}\mapsto z_{0,2}, z_{1,1}\mapsto z_{1,3}, z_{1,2}\mapsto z_{0,3}, z_{1,3}\mapsto \frac{z_{1,4}+z_{2,3}}{2}, z_{2,2}\mapsto z_{3,4}, z_{2,3}\mapsto z_{0,4}, z_{3,3}\mapsto z_{2,4}$ transforms the five equations in Remark \ref{equations} into the standard Pl\"ucker equations cutting out $\mathbb{G}(1,4)$ in $\mathbb{P}^9$.
By Remark \ref{Ver_c} we have $\mathcal{V}_2^{3}\subset X_4$. We can compute the tangent cones of $S_2(\mathcal{V}_2^{3})\subset X_4$ at a point representing a rank one matrix, and at a point representing a rank two matrix, using the equations for $X_{4}$ in Remark \ref{equations} together with the equations cutting out $\sec_{2}(\mathcal{V}_2^{3})$. In the first case we get a cone with a $3$-dimensional vertex over the conic $\mathcal{V}_2^{1}$, which in particular is irreducible and reduced, and in the second case we get a $5$-dimensional linear space. Finally, since by Proposition \ref{fun} $Y_2$ has dimension five, we conclude that the equations in Remark \ref{equations} together with the equations cutting out $\sec_{2}(\mathcal{V}_2^{3})$ define $Y_2$ scheme-theoretically.
\end{proof}
\begin{Remark}
The variety $X_4\cong\mathbb{G}(1,4)$ has been studied in relation to moduli spaces of rank two vector bundles over a smooth quadric \cite[Table I]{OS94}.
\end{Remark}
\begin{Proposition}\label{symplcones}
The tangent cone $TC_{p_k}(X_{2r})$ of $X_{2r}$ at a point $p_k\in S_k(\mathcal{V}^{2r-1}_2) \setminus S_{k-1}(\mathcal{V}^{2r-1}_2)$ for $k=1, \dots, r-1$ is a cone with vertex of dimension $k(2r+1-k)-1$ over $X_{2(r-k)}$. Moreover, $X_{2r}$ is smooth along $S_{r}(\mathcal{V}^{2r-1}_2) \setminus S_{r-2}(\mathcal{V}^{2r-1}_2)$, and the equations in Remark \ref{equations} define $X_{2r}$ scheme-theoretically.
The tangent cone $TC_{p_k}(S_{h}(\mathcal{V}^{2r-1}_2))$ of $S_h(\mathcal{V}^{2r-1}_2)$ at a point $p_k\in S_k(\mathcal{V}^{2r-1}_2) \setminus S_{k-1}(\mathcal{V}^{2r-1}_2)$ for $k=1, \dots, r-1, k< h$ is a cone with vertex of dimension $k(2r+1-k)-1$ over $S_{h-k}(\mathcal{V}_2^{2(r-k)-1})$. Moreover, the equations in Remark \ref{equations} together with the equations cutting out $\sec_{h}(\mathcal{V}^{2r-1}_2)$ define $S_h(\mathcal{V}^{2r-1}_2)$ scheme-theoretically.
In particular, $S_{r}(\mathcal{V}^{2r-1}_2)$ is a divisor in $X_{2r}$.
\end{Proposition}
\begin{proof}
Let $p_{k}=(p_{i,j})_{i,j=0, \dots, 2r-1, i \le j}$ be the point representing the standard matrix of rank $k$ with $p_{i,i}=1$ for $i=0,\dots,k-1$ and $p_{i,j}=0$ otherwise.
We proceed by induction on $r$. The base case $r = 2$ is in Proposition \ref{x4g14}. We will use the equations in Remark \ref{equations} to compute $TC_{p_k}X_{2r}$. Consider the change of coordinates $z_{i,i}\mapsto z_{i,i}-z_{0,0}$ for $i = 1,\dots,k-1$, and set $z_{0,0} = 1$. Note that the lowest degree terms of the equations in Remark \ref{equations} after this change of coordinates are obtained by removing from $Z^t\Omega Z = \lambda\Omega$ the rows and columns indexed by $0,\dots,k-1,r,\dots,r+k-1$. Therefore, we get a cone with vertex of dimension $k(2r+1-k)-1$ over $X_{2(r-k)}$, which by the induction hypothesis is irreducible and reduced since the equations in Remark \ref{equations} define $X_{2(r-k)}$ scheme-theoretically. Now, $k(2r+1-k)+\dim(X_{2(r-k)}) = \dim(X_{2r})$ yields that this is the tangent cone $TC_{p_k}X_{2r}$, and hence the equations in Remark \ref{equations} define $X_{2r}$ scheme-theoretically. Note that at the points representing $I_r$ and $I_{2r,2r}$ the equations in Remark \ref{equations} yield a linear subspace of the same dimension as $X_{2r}$.
Now, consider $S_h(\mathcal{V}_2^{2r-1})$. Note that $TC_{p_k}S_h(\mathcal{V}_2^{2r-1})$ is contained in $TC_{p_k}X_{2r}\cap TC_{p_k}\sec_h(\mathcal{V}_2^{2r-1})$. By the previous computation of $TC_{p_k}X_{2r}$ and the computation of $TC_{p_k}\sec_h(\mathcal{V}_2^{2r-1})$ in Proposition \ref{tcones}, we conclude that $TC_{p_k}X_{2r}\cap TC_{p_k}\sec_h(\mathcal{V}_2^{2r-1})$ is a cone with vertex of dimension $k(2r+1-k)-1$ over $S_{h-k}(\mathcal{V}_2^{2(r-k)-1}) = \sec_{h-k}(\mathcal{V}_2^{2(r-k)-1})\cap X_{2(r-k)}$. Again by induction this is an irreducible and reduced cone which, by the computation of the dimension of $S_h(\mathcal{V}_2^{2r-1})$ in Proposition \ref{fun}, must coincide with $TC_{p_k}S_h(\mathcal{V}_2^{2r-1})$. Hence the equations in Remark \ref{equations} together with the equations cutting out $\sec_{h}(\mathcal{V}^{2r-1}_2)$ define $S_h(\mathcal{V}^{2r-1}_2)$ scheme-theoretically.
\end{proof}
Now, we are ready to prove the main result of this section. We will denote by $S_h^{(i)}(\mathcal{V}^{2r-1}_2)$ the strict transform of $S_h(\mathcal{V}^{2r-1}_2)$ in $X_{2r}^{(i)}$.
\begin{thm}\label{main1}
For any $i = 1,\dots,r-1$ the strict transform $S_{i+1}^{(i)}(\mathcal{V}_2^{2r-1})$ of $S_{i+1}(\mathcal{V}_2^{2r-1})$ in $X_{2r}^{(i)}$ is smooth. Moreover, the variety $\mathcal{S}_{2r}$ is smooth and the exceptional divisors $E_1, \dots, E_{r-1} \subset \mathcal{S}_{2r}$ are smooth as well.
The closures of the orbits of the $Sp(2r)$-action on $\mathcal{S}_{2r}$ induced by the action in (\ref{acsimp}) are given by all the possible intersections among $E_1,\dots,E_{r-1},S_{r}^{(r-1)}(\mathcal{V}_2^{2r-1})$ and $\mathcal{S}_{2r}$ itself.
In particular, the variety $\mathcal{S}_{2r}$ with boundary divisors $E_1,\dots,E_{r-1},S_{r}^{(r-1)}(\mathcal{V}_2^{2r-1})$ is wonderful.
\end{thm}
\begin{proof}
For every $r$, in $X_{2r}^{(0)}$ we have $S_1^{(0)}(\mathcal{V}^{2r-1}_2)=\mathcal{V}^{2r-1}_2$, which is smooth. We will assume that $S_j^{(j-1)}(\mathcal{V}^{2r-1}_2)$ is smooth for every $r$ and for every $j <i$, and prove that $S_i^{(i-1)}(\mathcal{V}^{2r-1}_2)\subset X_{2r}^{(i-1)}$ is smooth as well. We have $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2) = \sec_{i}^{(i-1)}(\mathcal{V}_2^{2r-1}) \cap X_{2r}^{(i-1)}$, so we consider $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2)$ inside $\mathcal{Q}(2r-1)_{(i-1)}$. By Proposition \ref{Qn_won}, for every $r$ and for every $i=0, \dots, 2r-1$ the varieties $\mathcal{Q}(2r-1)_{(i-1)}$, $\sec_{i}^{(i-1)}(\mathcal{V}_2^{2r-1})$, $E_1^q, \dots,E_{i-1}^q$ are smooth.
Now, $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2)$ is smooth away from $E_1^{q}, \dots,E_{i-1}^{q}$. Moreover, by Proposition \ref{symplcones} for every $k=1, \dots, i-1$, $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2) \cap E_k^{q} \rightarrow S_{k}^{(k-1)}(\mathcal{V}^{2r-1}_2)$ is a fibration with fibers isomorphic to $S_{i-k}^{(i-k-1)}(\mathcal{V}_2^{2(r-k)-1})$ which is smooth by induction. Proposition \ref{smooth_fib} yields that $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2) \cap E_k^{q}$ is smooth for $k=1, \dots, i-1$. Now, since by Proposition \ref{fun} we have
$$\dim S_k^{(k-1)}(\mathcal{V}^{2r-1}_2)+\dim S_{i-k}^{(i-k-1)}(\mathcal{V}^{2(r-k)-1}_2) = 2ri+i-i^2-2 = \dim S_i^{(i-1)}(\mathcal{V}^{2r-1}_2)-1$$
we get that $S_{i}^{(i-1)}(\mathcal{V}^{2r-1}_2)$ is smooth as well.
By Proposition \ref{symplcones} for every $r$, in $X_{2r}^{(1)}$ we have that $E_1 \cap S_2^{(1)}(\mathcal{V}^{2r-1}_2) \rightarrow \mathcal{V}^{2r-1}_2$ is a fibration with fibers isomorphic to $\mathcal{V}_2^{2(r-1)-1}$ and then by Proposition \ref{smooth_fib} $E_1 \cap S_2^{(1)}(\mathcal{V}^{2r-1}_2)$ is smooth of dimension $4r-4 = \dim(S_2^{(1)}(\mathcal{V}^{2r-1}_2))-1$. More generally, consider intersections of the form $S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2)\cap E_{j_1}\cap\dots \cap E_{j_t}$, for $1 \le j_1 < \dots < j_t \le i$. By Proposition \ref{symplcones}, the restriction of the blow-down morphism
$$S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2) \cap E_{j_1} \cap \dots \cap E_{j_t}\rightarrow E_{j_1}\cap \dots \cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2)$$
has fibers isomorphic to $S_{i+1-j_t}^{(i-j_t)}(\mathcal{V}_2^{2(r-j_t)-1})$. Now, by Proposition \ref{smooth_fib} $S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2)\cap E_{j_1}\cap\dots \cap E_{j_t}$ is smooth since we proved before that $S_{i+1-j_t}^{(i-j_t)}(\mathcal{V}_2^{2(r-j_t)-1})$ is smooth and $E_{j_1}\cap \dots \cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2)$ is smooth by induction. Moreover,
$$
\dim(S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2) \cap E_{j_1} \cap \dots \cap E_{j_t}) = \dim(E_{j_1}\cap \dots \cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2))+ \dim(S_{i+1-j_t}^{(i-j_t)}(\mathcal{V}_2^{2(r-j_t)-1}))
$$
and by induction $\dim(E_{j_1}\cap \dots \cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2)) = 2rj_t+j_t-j_t^2-1-(t-1)$.
This yields, using Proposition \ref{fun}, that
\stepcounter{thm}
\begin{equation}\label{eq_dim}
\begin{array}{l}
\dim(S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2) \cap E_{j_1} \cap \dots \cap E_{j_t}) = 2r(i+1)+(i+1)-(i+1)^2-1-t = \dim(S_{i+1}^{(i)}(\mathcal{V}^{2r-1}_2)) - t.
\end{array}
\end{equation}
Now, consider the variety $\mathcal{S}_{2r}$ as a subvariety of the variety $\mathcal{Q}(2r-1)_{r-1}$ in Construction \ref{ccq}. By Proposition \ref{symplcones} $\mathcal{S}_{2r}$ is smooth away from the exceptional divisors. Furthermore, the exceptional divisor $E_i^{q}$ in Construction \ref{ccq} intersects $\mathcal{S}_{2r}$ in the exceptional divisor $E_i$ in Construction \ref{ccssf}. By Proposition \ref{symplcones} $E_i\rightarrow S_{i}^{(i-1)}(\mathcal{V}_2^{2r-1})$ is a fibration with $\mathcal{S}_{2(r-i)}$ as fiber. Hence $E_i^q\cap \mathcal{S}_{2r}$ is a smooth divisor in $\mathcal{S}_{2r}$ and therefore $\mathcal{S}_{2r}$ is smooth.
Now, consider an intersection of the form $E_{j_1}\cap\dots \cap E_{j_t}$ and the fibration
$$E_{j_1}\cap\dots \cap E_{j_t}\rightarrow E_{j_1}\cap\dots\cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2).$$
By Proposition \ref{symplcones} this fibration has fibers isomorphic to $\mathcal{S}_{2(r-j_t)}$. By the previous part of the proof we have that
$$\dim(E_{j_1}\cap\dots\cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2))+\dim(\mathcal{S}_{2(r-j_t)}) = r^2+r-t = \dim(\mathcal{S}_{2r})-t$$
and hence the intersection $E_{j_1}\cap\dots \cap E_{j_t}$ is transversal. Note also that considering the fibration
$$S_{r}^{(r-1)}(\mathcal{V}^{2r-1}_2) \cap E_{j_1} \cap \dots \cap E_{j_t}\rightarrow E_{j_1}\cap \dots \cap E_{j_{t-1}}\cap S_{j_t}^{(j_t-1)}(\mathcal{V}^{2r-1}_2)$$
and (\ref{eq_dim}) we get that the intersection $S_{r}^{(r-1)}(\mathcal{V}^{2r-1}_2) \cap E_{j_1} \cap \dots \cap E_{j_t}$ is transversal as well.
Finally, for the claim about the orbit closures it is enough to recall that the $Sp(2r)$-action on $\mathcal{S}_{2r}$ is the restriction of the $SL(2r)$-action on $\mathcal{Q}(2r-1)_{r-1}$ in (\ref{acss}) and to use the statement about the orbit closures in Proposition \ref{Qn_won}.
\end{proof}
\begin{Proposition}\label{mult}
We have that
$$ \mult_{S_r(\mathcal{V}^{2r-1}_2)} S_{2r-1}(\mathcal{V}^{2r-1}_2) = r.$$
Moreover, if $H_{X_{2r}}$ is the hyperplane section of $X_{2r}$, we have that $S_r(\mathcal{V}^{2r-1}_2) \sim 2H_{X_{2r}}$.
\end{Proposition}
\begin{proof}
We will compute the tangent cone of $S_r(\mathcal{V}^{2r-1}_2)$ at the point $p_r = (p_{i,j})_{i,j=0,\dots,2r-1}$, where $p_{i,i}=1$ for $i=0, \dots, r-1$ and $p_{i,j}=0$ otherwise.
Consider the change of coordinates $z_{i,i}\mapsto z_{i,i}-z_{0,0}$ and set $z_{0,0} = 1$. By Remark \ref{equations} the tangent space of $X_{2r}$ at $p_r$ is cut out by a set of linear equations, and among these equations we have $\{z_{i,j} = 0\}$ for $i,j = r,\dots, 2r-2$, and $z_{i,i} = z_{i+1,i+1}$ for $i = r,\dots, 2r-2$.
Now, the tangent cone of $\sec_{2r-1}(\mathcal{V}_{2}^{2r-1})$ is cut out by the determinant of the bottom right $r\times r$ submatrix of the matrix $Z$ in (\ref{matrix}). Note that substituting the relations on the $z_{i,j}$ above in this determinant we get $z_{2r-1,2r-1}^r$.
By Proposition \ref{sec} $\sec_{2r-1}(\mathcal{V}_{2}^{2r-1})$ and $\sec_{r}(\mathcal{V}_{2}^{2r-1})$ cut out on $X_{2r}$ the same divisor set-theoretically. The previous computation yields that $\sec_{2r-1}(\mathcal{V}_{2}^{2r-1})$ cuts out $\sec_{r}(\mathcal{V}_{2}^{2r-1})\cap X_{2r}$ on $X_{2r}$ with multiplicity $r$.
Now, recall that by Remark \ref{sec_ver} $\deg(\sec_{2r-1}(\mathcal{V}_2^{2r-1}))=2r$. Let $D$ be the divisor $\sec_r(\mathcal{V}_2^{2r-1})\cap X_{2r}$. Then $\sec_{2r-1}(\mathcal{V}_2^{2r-1})\cap X_{2r} = rD \sim 2rH_{X_{2r}}$, and hence $D\sim 2H_{X_{2r}}$.
\end{proof}
\begin{Remark}
In the case $r = 2$ we worked out explicitly the quadratic polynomial cutting out $S_2(\mathcal{V}_2^3)$ in $X_{4}$ and we got that $S_2(\mathcal{V}_2^3) = X_{4}\cap \{z_{0,3}z_{1,2}+z_{1,3}^2-z_{0,1}z_{2,3}-z_{1,1}z_{3,3} = 0\}$.
\end{Remark}
\section{Divisors on $\mathcal{S}_{2r}$}\label{divS2r}
Let $X$ be a normal projective $\mathbb{Q}$-factorial variety over an algebraically closed field of characteristic zero. We denote by $N^1(X)$ the real vector space of $\mathbb{R}$-Cartier divisors modulo numerical equivalence.
The \emph{nef cone} of $X$ is the closed convex cone $\Nef(X)\subset N^1(X)$ generated by classes of nef divisors.
The stable base locus $\textbf{B}(D)$ of a $\mathbb{Q}$-divisor $D$ is the set-theoretic intersection of the base loci of the complete linear systems $|sD|$ for all positive integers $s$ such that $sD$ is integral:
\stepcounter{thm}
\begin{equation}\label{sbl}
\textbf{B}(D) = \bigcap_{s > 0}B(sD).
\end{equation}
The \emph{movable cone} of $X$ is the convex cone $\Mov(X)\subset N^1(X)$ generated by classes of
\emph{movable divisors}. These are Cartier divisors whose stable base locus has codimension at least two in $X$.
The \emph{effective cone} of $X$ is the convex cone $\Eff(X)\subset N^1(X)$ generated by classes of
\emph{effective divisors}. We have inclusions $\Nef(X)\ \subset \ \overline{\Mov(X)}\ \subset \ \overline{\Eff(X)}$. We refer to \cite[Chapter 1]{De01} for a comprehensive treatment of these topics.
In this section we will study the Picard rank and the cones of effective and nef divisors of the wonderful compactification $\mathcal{S}_{2r}$. We will need the following result.
\begin{Lemma}\label{lso}
Let $SO(2r)$ be the special orthogonal group. Then
$$SO(2r) \cap Sp(2r) \cong GL(r).$$
In particular, $SO(2) \cong GL(1) \cong K^{*}$.
\end{Lemma}
\begin{proof}
Consider the bilinear symmetric form given by the matrix
$J=\begin{pmatrix}
0_{r,r} & I_{r,r}\\
I_{r,r} & 0_{r,r}
\end{pmatrix}$.
Set $N= \begin{pmatrix}
I_{r,r} & \xi I_{r,r}\\
\frac{1}{2} I_{r,r} & -\frac{\xi}{2} I_{r,r}
\end{pmatrix}$, with $\xi^2 = -1$. Note that $N^t J N = I_{2r,2r}$ and $N^t \Omega N = -\xi\Omega$. Therefore, we may prove the statement for the intersection $SO_J(2r) \cap Sp(2r)$, where $SO_J(2r)$ is the group of determinant one matrices which are orthogonal with respect to $J$.
Let $M = \begin{pmatrix}
A & B\\
C & D
\end{pmatrix}\in GL(2r)$ be a general $2r\times 2r$ invertible matrix, where $A,B,C,D$ are $r \times r$ matrices. Now, $M \in SO_J(2r) \cap Sp(2r)$ if and only if
$$\begin{cases}
A^t C=0_{r,r};\\
A^t D=I_{r,r};\\
B^t C=0_{r,r};\\
B^t D=0_{r,r};\end{cases} \quad\text{that is}\quad \begin{cases} D=A^{-t};\\
C=0_{r,r};\\
B=0_{r,r};\end{cases}$$
and hence
$$SO(2r) \cap Sp(2r) \cong SO_J(2r) \cap Sp(2r) =\Bigl\{\begin{pmatrix}
A & 0_{r,r}\\
0_{r,r} & A^{-t}
\end{pmatrix} \text{ for } A \in GL(r) \Bigr\} \cong GL(r).$$
For the last claim in the case $r = 1$ it is enough to note that $SO(2) \cap Sp(2) = SO(2)$. In fact every $2 \times 2$ matrix with determinant one is symplectic.
\end{proof}
\begin{Proposition}\label{pic_G/H}
Let $O_{2r}\subset X_{2r}$ be the orbit of the identity. Then $\Pic(O_{2r}) \cong \mathbb{Z}/2\mathbb{Z}$.
\end{Proposition}
\begin{proof}
The group $G =Sp(2r)$ is semi-simple and simply connected. If $H\subset G$ is the stabilizer of the identity then \cite[Theorem 4.5.1.2]{ADHL15} yields that $\Pic(G/H) \cong \mathbb{X}(H)$, where $\mathbb{X}(H)$ is the group of characters of $H$. We have that
$$H=\{M \in Sp(2r) , MM^t = \lambda_M I_{2r,2r} , \text{for some } \lambda_M \in K^{*}\}.$$
Then, for a general element $M \in H$ we have
$$\begin{cases}
MM^t= \lambda_M I_{2r,2r};\\
M^t \Omega M = \Omega;
\end{cases} \Rightarrow \lambda_M M^{-1} \Omega M = \Omega \Rightarrow \lambda_M \Omega M = M \Omega.$$
Let $v$ be an eigenvector of $\Omega$ with eigenvalue $\mu$. Then
$$\lambda_M \Omega M v = M \Omega v = M \mu v = \mu M v.$$
Setting $y= M v$ we have $\Omega y = (\lambda_M^{-1} \mu) y$ and so $y$ is an eigenvector of $\Omega$ with eigenvalue $\lambda_M^{-1} \mu$. The characteristic polynomial of $\Omega$ is $P_\Omega(\lambda)=(\lambda-\xi)^r(\lambda+\xi)^r$ where $\xi^2 = -1$. Therefore the only eigenvalues of $\Omega$ are $\xi$ and $-\xi$. So
$$\begin{cases}
\mu = \pm \xi;\\
\lambda_M^{-1} \mu = \pm \xi;
\end{cases} \Rightarrow \lambda_M^{-1}= \pm 1 \Rightarrow \lambda_M= \pm 1
$$
and there is a morphism of groups
\begin{align*}
\varphi: H &\longrightarrow \mathbb{Z}/2\mathbb{Z}\\
M &\longmapsto \lambda_M
\end{align*}
The morphism $\varphi$ is surjective. Indeed we have $\varphi(I_{2r,2r})=1$, and if $S = \begin{pmatrix}
0_{r,r} & \xi I_{r,r} \\
\xi I_{r,r} & 0_{r,r}
\end{pmatrix}$ then $S^t \Omega S = \Omega$, $SS^t= - I_{2r,2r}$, $S \in H$ and $\varphi(S)=-1$. This yields an exact sequence
\stepcounter{thm}
\begin{equation}\label{exseq}
1 \rightarrow \overline{H} \rightarrow H \rightarrow \mathbb{Z}/2\mathbb{Z} \rightarrow 1
\end{equation}
where $\overline{H}= \{M \in Sp(2r) , MM^t = I_{2r,2r}\}$, and we can write $H= \overline{H} \cup S\overline{H}$.
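The properties of the matrix $S$ used above are easy to check numerically; the pure-Python sketch below (our helpers, with $r=2$, $\xi = i$ and the standard $\Omega=\left(\begin{smallmatrix} 0 & I\\ -I & 0\end{smallmatrix}\right)$) verifies that $S^t\Omega S = \Omega$ and $SS^t = -I_{2r,2r}$, so that $S\in H$ with $\lambda_S = -1$.

```python
r = 2
n = 2 * r
xi = 1j  # xi^2 = -1

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def T(X):
    return [[X[j][i] for i in range(n)] for j in range(n)]

Omega = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
# S = [[0, xi I], [xi I, 0]]
S = [[0, 0, xi, 0], [0, 0, 0, xi], [xi, 0, 0, 0], [0, xi, 0, 0]]
negI = [[-1 if i == j else 0 for j in range(n)] for i in range(n)]

assert mul(mul(T(S), Omega), S) == Omega  # S^t Omega S = Omega, so S is symplectic
assert mul(S, T(S)) == negI               # S S^t = -I, so lambda_S = -1
print("ok")
```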
As in Lemma \ref{lso}, we consider the bilinear form $J=\begin{pmatrix}
0_{r,r} & I_{r,r}\\
I_{r,r} & 0_{r,r}
\end{pmatrix}$, which is congruent to the bilinear form $I_{2r,2r}$ via the matrix $N= \begin{pmatrix}
I_{r,r} & \xi I_{r,r}\\
\frac{1}{2} I_{r,r} & -\frac{\xi}{2} I_{r,r}
\end{pmatrix}$, where $\xi^2=-1$. Set $\overline{H}_J= \{M \in Sp(2r) , M J M^t = J\}$ and $H_J=\{M \in Sp(2r) , M J M^t = \lambda_M J , \text{for some } \lambda_M \in K^{*}\}$. There is an isomorphism
\begin{align*}
\alpha: H &\longrightarrow H_J \\
M &\longmapsto N M N^{-1}
\end{align*}
such that $\alpha(\overline{H}) = \overline{H}_J$, $\tilde{S}:=\alpha(S)= \begin{pmatrix}
0 & -2 I_{r,r} \\
\frac{1}{2} I_{r,r} & 0
\end{pmatrix}$ and $H_J = \overline{H}_J \cup \tilde{S} \overline{H}_J$.
Take $B\in H_J$ and consider $\alpha^{-1}(B)\in H$. By the first part of the proof there is a morphism of groups $H_J\rightarrow\mathbb{Z}/2\mathbb{Z}$ mapping $B$ to $\lambda_{\alpha^{-1}(B)}$, fitting into the following exact sequence
$$
1 \rightarrow \overline{H}_J \rightarrow H_J \rightarrow \mathbb{Z}/2\mathbb{Z} \rightarrow 1
$$
Since $H_J/\overline{H}_J$ is abelian the commutator $[H_J,H_J]$ of $H_J$ is contained in $\overline{H}_J$. By the proof of Lemma \ref{lso} we have that an element $h \in \overline{H}_J$ is of the form $h=\begin{pmatrix}
A & 0_{r,r}\\
0_{r,r} & A^{-t}
\end{pmatrix} \text{ for } A \in GL(r)$. Then $h^{-1}=\begin{pmatrix}
A^{-1} & 0_{r,r}\\
0_{r,r} & A^{t}
\end{pmatrix} \text{ for } A \in GL(r)$. Furthermore $\tilde{S}^{-1} = \begin{pmatrix}
0_{r,r} & 2 I_{r,r} \\
-\frac{1}{2} I_{r,r} & 0_{r,r}
\end{pmatrix}$. Therefore
\begin{align*}
[\tilde{S},h] &= \tilde{S}h\tilde{S}^{-1}h^{-1} = \begin{pmatrix}
0_{r,r} & -2 I_{r,r} \\
\frac{1}{2} I_{r,r} & 0_{r,r}
\end{pmatrix} \begin{pmatrix}
A & 0_{r,r}\\
0_{r,r} & A^{-t}
\end{pmatrix} \begin{pmatrix}
0_{r,r} & 2 I_{r,r} \\
-\frac{1}{2} I_{r,r} & 0_{r,r}
\end{pmatrix} \begin{pmatrix}
A^{-1} & 0_{r,r}\\
0_{r,r} & A^{t}
\end{pmatrix} =\begin{pmatrix}
A^{-t}A^{-1} & 0_{r,r}\\
0_{r,r} & A A^{t}
\end{pmatrix}.
\end{align*}
Setting $B = A^{-t}A^{-1}$, we have $B^{-t}=(A^{-t}A^{-1})^{-t}=AA^t$ with $B \in GL(r)$ symmetric. So $[H_J,H_J]$ is the subgroup of $\overline{H}_J \cong GL(r)$ generated by symmetric matrices and since by \cite[Theorem 1]{Bos86} all $r \times r$ matrices can be written as product of symmetric matrices we get $[H_J,H_J]=\overline{H}_J$.
Then, $H / [H,H] \cong H_J / [H_J,H_J] \cong H_J / \overline{H}_J \cong H/\overline{H}$ and by the exact sequence (\ref{exseq}) we have $H/\overline{H} \cong \mathbb{Z}/2\mathbb{Z}$. Finally, by \cite[Lemma 22.2]{Bur65} $\mathbb{X}(H) \cong \mathbb{X}(H/[H,H])$, and hence $\Pic(G/H) \cong \mathbb{X}(H) \cong \mathbb{Z}/2 \mathbb{Z}$.
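The commutator computation above can be double-checked exactly over the rationals; the pure-Python sketch below is ours (helpers, $r=2$, and a sample invertible $A$ of our choosing) and verifies $[\tilde{S},h] = \diag(A^{-t}A^{-1}, AA^t)$.

```python
from fractions import Fraction as F

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def T(X):
    n = len(X)
    return [[X[j][i] for i in range(n)] for j in range(n)]

def inv2(A):  # inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def block(A, B, C, D):  # assemble a 4x4 matrix from 2x2 blocks
    return [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]

Z2 = [[F(0)] * 2 for _ in range(2)]
I2 = [[F(1), F(0)], [F(0), F(1)]]
twoI  = [[F(2), F(0)], [F(0), F(2)]]
halfI = [[F(1, 2), F(0)], [F(0), F(1, 2)]]
neg = lambda M: [[-x for x in row] for row in M]

S_til     = block(Z2, neg(twoI), halfI, Z2)   # S-tilde = [[0, -2I], [I/2, 0]]
S_til_inv = block(Z2, twoI, neg(halfI), Z2)   # its inverse [[0, 2I], [-I/2, 0]]
assert mul(S_til, S_til_inv) == block(I2, Z2, Z2, I2)

A = [[F(1), F(2)], [F(0), F(3)]]  # a sample invertible matrix
Ait = T(inv2(A))                  # A^{-t}
h     = block(A, Z2, Z2, Ait)     # h = diag(A, A^{-t})
h_inv = block(inv2(A), Z2, Z2, T(A))

comm = mul(mul(mul(S_til, h), S_til_inv), h_inv)
expected = block(mul(Ait, inv2(A)), Z2, Z2, mul(A, T(A)))  # diag(A^{-t}A^{-1}, AA^t)
assert comm == expected
print("ok")
```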
\end{proof}
Now, we are ready to compute the Picard rank and the colors of the wonderful variety $\mathcal{S}_{2r}$.
\begin{Proposition}\label{pic_x2r}
The Picard rank of $\mathcal{S}_{2r}$ is $\rho(\mathcal{S}_{2r}) = r$.
\end{Proposition}
\begin{proof}
As before set $G=Sp(2r)$ and let $H$ be the stabilizer of the identity. By Theorem \ref{main1} the variety $\mathcal{S}_{2r}$ is wonderful with boundary divisors $E_1,\dots, E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$. By \cite[Proposition 2.2.1]{Br07} there is an exact sequence
$$ 0 \rightarrow \mathbb{Z}^r \rightarrow \Pic(\mathcal{S}_{2r}) \rightarrow \Pic(G/H) \rightarrow 0$$
Hence, Proposition \ref{pic_G/H} yields that the Picard rank of $\mathcal{S}_{2r}$ is $r$.
\end{proof}
For $i=1,\dots,r$ we define the divisor $D_i$ as the strict transform in $\mathcal{S}_{2r}$ of the divisor cut out on $X_{2r}$ by
$$\det \begin{pmatrix}
z_{0,0} & \dots & z_{0,i-1}\\
\vdots & \ddots & \vdots \\
z_{0,i-1} & \dots & z_{i-1,i-1}\\
\end{pmatrix}=0.$$
\begin{Proposition}\label{bound_col}
The set of boundary divisors of $\mathcal{S}_{2r}$ is $\{E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})\}$ while the set of colors of $\mathcal{S}_{2r}$ is $\{D_1,\dots,D_r\}$.
\end{Proposition}
\begin{proof}
The claim on the set of boundary divisors follows from Theorem \ref{main1}. We compute the colors. We first prove that $D_r \subseteq \mathcal{S}_{2r}$ is stabilized by the Borel subgroup. Consider a matrix $Z=\begin{pmatrix}
Z_{0,0} & Z_{0,1} \\
Z_{0,1} & Z_{1,1}
\end{pmatrix}$ where the $Z_{i,j}$ are $r \times r$ matrices. Let $M =\begin{pmatrix}
A & 0_{r,r}\\
B & A^{-t}
\end{pmatrix} \in \mathscr{B}$, then
\begin{align*}
\bar{Z} = M \cdot Z \cdot M^t & =
\begin{pmatrix}
AZ_{0,0}A^t & A Z_{0,0} B^t+ A Z_{0,1} A^{-1}\\
B Z_{0,0} A^t + A^{-t} Z_{0,1} A^t & B Z_{0,0} B^t + A^{-t} Z_{0,1} B^t + B Z_{0,1} A^{-1} + A^{-t}Z_{1,1} A^{-1}
\end{pmatrix}
\end{align*}
and $\det(\bar{Z}_{0,0})=\det(AZ_{0,0}A^t)=\det(A)^2 \det(Z_{0,0})$ where $\det(A) \neq 0$ since $A \in GL(r)$. Therefore, $D_r$ is stabilized by the Borel subgroup.
We focus now on the block $\bar{Z}_{0,0}$ of the matrix $\bar{Z}$. We divide the matrices $A$ and $Z_{0,0}$ into blocks $A_{j,k}$ and $W_{j,k}$, respectively, of size $j \times k$ as follows
$A= \begin{pmatrix}
A_{i,i}& A_{i,r-i} \\
A_{r-i,i} & A_{r-i,r-i}
\end{pmatrix}$ and $Z_{0,0}=\begin{pmatrix}
W_{i,i} & W_{i,r-i} \\
W_{r-i,i} & W_{r-i,r-i}
\end{pmatrix}$. Recall that by Remark \ref{borelsym} the matrix $A$ is lower triangular. We have
$
\bar{Z}_{0,0}
= \begin{pmatrix}
\bar{W}_{i,i} & \bar{W}_{i,r-i} \\
\bar{W}_{r-i,i} & \bar{W}_{r-i,r-i}
\end{pmatrix}
$
with $\bar{W}_{i,i} = A_{i,i}W_{i,i}A_{i,i}^t$. The divisor $D_i$ is defined by $\det(W_{i,i})=0$ and since $\det(A)= \det(A_{i,i})\det(A_{r-i,r-i})\neq 0$ we get that $D_i$ is stabilized by $\mathscr{B}$ for $i=1, \dots, r$.
As noticed in \cite[Remark 4.5.5.3]{ADHL15}, if $(X,\mathscr{G},\mathscr{B},x_0)$ is a spherical wonderful variety with colors $D_1,\dots,D_s$ the big cell $X\setminus (D_1\cup\dots \cup D_s)$ is an affine space. Therefore, it admits only constant invertible global functions and $\Pic(X)$ is generated by $D_1,\dots,D_s$.
Therefore, in order to conclude that we found all the colors of $\mathcal{S}_{2r}$ it is enough to recall that by Proposition \ref{pic_x2r} $\mathcal{S}_{2r}$ has Picard rank $r$.
\end{proof}
In the following we will denote by $H$ the pull-back in $\mathcal{S}_{2r}$ of the hyperplane section of $X_{2r}$. By Proposition \ref{pic_x2r} $H,E_1,\dots,E_{r-1}$ generate $\Pic(\mathcal{S}_{2r})$.
\begin{Proposition}\label{eff_nef}
The extremal rays of $\Eff(\mathcal{S}_{2r})$ are generated by $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$ and the extremal rays of $\Nef(\mathcal{S}_{2r})$ are generated by $D_1,\dots,D_r$.
\end{Proposition}
\begin{proof}
By \cite[Proposition 4.5.4.4]{ADHL15} and Proposition \ref{bound_col} $\Eff(\mathcal{S}_{2r})$ is generated by $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$ and $D_1,\dots,D_r$.
Note that by Constructions \ref{ccq} and \ref{ccssf} there is an inclusion $i:\mathcal{S}_{2r}\rightarrow\mathcal{Q}(2r-1)_{r-1}$ inducing an isomorphism of the Picard groups. By \cite[Section 2]{Ce15} the linear system on $\mathcal{Q}(2r-1)_{r-1}$ that restricts to the linear system of $D_i$ on $\mathcal{S}_{2r}$ induces a birational morphism $\mathcal{Q}(2r-1)_{r-1}\rightarrow W_i$ whose exceptional locus is contained in the union of the exceptional divisors in Construction \ref{ccq}. Therefore, $D_i$ induces a birational morphism $\mathcal{S}_{2r}\rightarrow Z_i$ and hence $D_i$ lies in the interior of the effective cone of $\mathcal{S}_{2r}$ for any $i = 1,\dots, r$. This proves that the extremal rays of the effective cone of $\mathcal{S}_{2r}$ are generated by $E_1,\dots,E_{r-1},S_r^{(r-1)}(\mathcal{V}_2^{2r-1})$. Finally, by \cite[Section 2.6]{Br89} $D_1,\dots,D_r$ generate the extremal rays of the nef cone.
\end{proof}
In order to study the birational geometry of $\mathcal{S}_{2r}$ we will need the following result.
\begin{Proposition}\label{cones_col}
Let $H_i^r$ be the divisor in $X_{2r}\subset\mathbb{P}^N$ cut out by the determinant of the $i\times i$ top left submatrix of the matrix $Z$ in (\ref{matrix}). The tangent cone of $H_i^r$ at a point of $S_k(\mathcal{V}^{2r-1}_2)\setminus S_{k-1}(\mathcal{V}^{2r-1}_2)$ for $i=2, \dots,r$ and $k <i$ is a cone with vertex of dimension $k(2r+1-k)$ over $H_{i-k}^{r-k}$.
\end{Proposition}
\begin{proof}
It is enough to note that the tangent cone of $H_i^r$ at the point $p_k = (p_{a,b})_{a,b=0,\dots,2r-1}$, where $p_{a,a}=1$ for $a=0, \dots, k-1$ and $p_{a,b}=0$ otherwise, is cut out by
$$\det \left(\begin{array}{cccc}
z_{k,k} & z_{k,k+1} & \hdots & z_{k,i-1} \\
z_{k,k+1} & z_{k+1,k+1} & \hdots & z_{k+1,i-1} \\
\vdots & \vdots & \ddots & \vdots \\
z_{k,i-1} & z_{k+1,i-1} & \hdots & z_{i-1,i-1}
\end{array}\right) = 0$$
and by the equations for the tangent cone of $X_{2r}$ in the proof of Proposition \ref{symplcones}.
\end{proof}
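For instance, for $r=3$, $i=3$ and $k=1$ the proposition says that the tangent cone of $H_3^3$ at a point of $\mathcal{V}_2^5$ is a cone with $6$-dimensional vertex over $H_2^2$. Indeed, in this case the determinant above becomes
$$\det \left(\begin{array}{cc}
z_{1,1} & z_{1,2} \\
z_{1,2} & z_{2,2}
\end{array}\right) = z_{1,1}z_{2,2}-z_{1,2}^2 = 0$$
which is the equation cutting out $H_2^2$ in the corresponding coordinates.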
\section{Birational geometry of $\mathcal{S}_{2r}$}\label{birS2r}
The stable base locus of an effective $\mathbb{Q}$-divisor on a normal $\mathbb{Q}$-factorial projective variety $X$ has been defined in (\ref{sbl}). Since stable base loci do not behave well with respect to numerical equivalence \cite[Example 10.3.3]{La04II}, we will assume that $h^{1}(X,\mathcal{O}_X)=0$ so that linear and numerical equivalence of $\mathbb{Q}$-divisors coincide.
Then numerically equivalent $\mathbb{Q}$-divisors on $X$ have the same stable base locus, and the pseudo-effective cone $\overline{\Eff}(X)$ of $X$ can be decomposed into chambers depending on the stable base locus of the corresponding linear series. The resulting decomposition is called \textit{stable base locus decomposition}.
\begin{Remark}\label{SBLMC}
Recall that two divisors $D_1,D_2$ are said to be \textit{Mori equivalent} if $\textbf{B}(D_1) = \textbf{B}(D_2)$ and the following diagram of rational maps is commutative
\[
\begin{tikzpicture}[xscale=1.5,yscale=-1.2]
\node (A0_1) at (1, 0) {$X$};
\node (A1_0) at (0, 1) {$X(D_1)$};
\node (A1_2) at (2, 1) {$X(D_2)$};
\path (A1_0) edge [->]node [auto] {$\scriptstyle{}$} node [rotate=180,sloped] {$\scriptstyle{\widetilde{\ \ \ }}$} (A1_2);
\path (A0_1) edge [->,dashed]node [auto] {$\scriptstyle{\phi_{D_2}}$} (A1_2);
\path (A0_1) edge [->,swap, dashed]node [auto] {$\scriptstyle{\phi_{D_1}}$} (A1_0);
\end{tikzpicture}
\]
where the horizontal arrow is an isomorphism. Therefore, the Mori chamber decomposition is a, possibly trivial, refinement of the stable base locus decomposition.
\end{Remark}
Let $X$ be a normal $\mathbb{Q}$-factorial variety with free and finitely generated divisor class group $\Cl(X)$. Fix a subgroup $G$ of the group of Weil divisors on $X$ such that the canonical map $G\rightarrow\Cl(X)$, mapping a divisor $D\in G$ to its class $[D]$, is an isomorphism. The \textit{Cox ring} of $X$ is defined as
$$\Cox(X) = \bigoplus_{[D]\in \Cl(X)}H^0(X,\mathcal{O}_X(D))$$
where $D\in G$ represents $[D]\in\Cl(X)$, and the multiplication in $\Cox(X)$ is defined by the standard multiplication of homogeneous sections in the field of rational functions on $X$. If $\Cox(X)$ is finitely generated as an algebra over the base field, then $X$ is said to be a \textit{Mori dream space}. A perhaps more enlightening definition, especially for the relation with the minimal model program, is the following.
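For instance, for projective space we have
$$\Cox(\mathbb{P}^n) = \bigoplus_{d\in\mathbb{Z}}H^0(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n}(d))\cong K[x_0,\dots,x_n]$$
so $\mathbb{P}^n$ is a Mori dream space. More generally, the Cox ring of any $\mathbb{Q}$-factorial projective toric variety is a polynomial ring, so such toric varieties are Mori dream spaces.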
\begin{Definition}\label{def:MDS}
A normal projective $\mathbb{Q}$-factorial variety $X$ is called a \emph{Mori dream space}
if the following conditions hold:
\begin{enumerate}
\item[-] $\Pic{(X)}$ is finitely generated, or equivalently $h^1(X,\mathcal{O}_X)=0$,
\item[-] $\Nef{(X)}$ is generated by the classes of finitely many semi-ample divisors,
\item[-] there is a finite collection of small $\mathbb{Q}$-factorial modifications
$f_i: X \dasharrow X_i$, such that each $X_i$ satisfies the second condition above, and $
\Mov{(X)} \ = \ \bigcup_i \ f_i^*(\Nef{(X_i)})$.
\end{enumerate}
\end{Definition}
The collection of all faces of all cones $f_i^*(\Nef{(X_i)})$ above forms a fan which is supported on $\Mov(X)$.
If two maximal cones of this fan, say $f_i^*(\Nef{(X_i)})$ and $f_j^*(\Nef{(X_j)})$, meet along a facet,
then there exist a normal projective variety $Y$, a small modification $\varphi:X_i\dasharrow X_j$, and $h_i:X_i\rightarrow Y$, $h_j:X_j\rightarrow Y$ small birational morphisms of relative Picard number one such that $h_j\circ\varphi = h_i$. The fan structure on $\Mov(X)$ can be extended to a fan supported on $\Eff(X)$ as follows.
\begin{Definition}\label{MCD}
Let $X$ be a Mori dream space.
We describe a fan structure on the effective cone $\Eff(X)$, called the \emph{Mori chamber decomposition}.
We refer to \cite[Proposition 1.11]{HK00} and \cite[Section 2.2]{Ok16} for details.
There are finitely many birational contractions from $X$ to Mori dream spaces, denoted by $g_i:X\dasharrow Y_i$.
The set $\Exc(g_i)$ of exceptional prime divisors of $g_i$ has cardinality $\rho(X/Y_i)=\rho(X)-\rho(Y_i)$.
The maximal cones $\mathcal{C}$ of the Mori chamber decomposition of $\Eff(X)$ are of the form: $\mathcal{C}_i \ = \left\langle g_i^*\big(\Nef(Y_i)\big) , \Exc(g_i) \right\rangle$. We call $\mathcal{C}_i$ or its interior $\mathcal{C}_i^{^\circ}$ a \emph{maximal chamber} of $\Eff(X)$.
\end{Definition}
If $X$ is a Mori dream space, and hence satisfies the condition $h^1(X,\mathcal{O}_X)=0$, determining the stable base locus decomposition of $\Eff(X)$ is a first step towards computing its Mori chamber decomposition.
\begin{Remark}\label{sphMDS}
By the work of M. Brion \cite{Br93} we have that $\mathbb{Q}$-factorial spherical varieties are Mori dream spaces. An alternative proof of this result can be found in \cite[Section 4]{Pe14}. In particular, by Theorem \ref{main1} the wonderful compactification $\mathcal{S}_{2r}$ is a Mori dream space.
\end{Remark}
\begin{Remark}\label{toric}
Recall that by \cite[Proposition 2.11]{HK00} given a Mori dream space $X$ there is an embedding $i:X\rightarrow \mathcal{T}_X$ into a simplicial projective toric variety $\mathcal{T}_X$ such that $i^{*}:\Pic(\mathcal{T}_X)\rightarrow \Pic(X)$ is an isomorphism inducing an isomorphism $\Eff(\mathcal{T}_X)\rightarrow \Eff(X)$. Furthermore, the Mori chamber decomposition of $\Eff(\mathcal{T}_X)$ is a refinement of the Mori chamber decomposition of $\Eff(X)$. Indeed, if $\Cox(X) \cong \frac{K[T_1,\dots,T_s]}{I}$ where the $T_i$ are homogeneous generators with non-trivial effective $\Pic(X)$-degrees then $\Cox(\mathcal{T}_X)\cong K[T_1,\dots,T_s]$.
Since the variety $\mathcal{T}_{X}$ is toric, the Mori chamber decomposition of $\Eff(\mathcal{T}_{X})$ can be computed by means of the Gelfand–Kapranov–Zelevinsky, GKZ for short, decomposition \cite[Section 2.2.2]{ADHL15}. Let us consider the family $\mathcal{W}$ of vectors in $\Pic(\mathcal{T}_{X})$ given by the generators of $\Cox(\mathcal{T}_{X})$, and let $\Omega(\mathcal{W})$ be the set of all convex polyhedral cones generated by some of the vectors in $\mathcal{W}$. By \cite[Construction 2.2.2.1]{ADHL15} the GKZ chambers of $\Eff(\mathcal{T}_{X})$ are given by the intersections of all the cones in $\Omega(\mathcal{W})$ containing a fixed divisor in $\Eff(\mathcal{T}_{X})$.
\end{Remark}
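As a toy example, consider the blow-up $X = Bl_p\mathbb{P}^2$ of the plane at a point, with exceptional divisor $E$ and pull-back $H$ of a line. Since $X$ is itself toric we may take $\mathcal{T}_X = X$, and the generators of $\Cox(X)$ have classes
$$\mathcal{W} = \{E,\: H-E,\: H-E,\: H\}\subset\Pic(X).$$
Intersecting all the cones in $\Omega(\mathcal{W})$ containing a fixed divisor of $\Eff(X) = \langle E,H-E\rangle$ yields exactly two GKZ chambers, namely $\langle E,H\rangle$ and $\langle H,H-E\rangle = \Nef(X)$, and these coincide with the two Mori chambers of $X$.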
\begin{Remark}\label{gen_cox}
Let $(X,\mathscr{G},\mathscr{B},x_0)$ be a projective spherical variety. Consider a divisor $D$ on $X$, and let $f_D$ be the section of $\mathcal{O}_X(D)$ associated to $D$, which is unique up to scalars. We will denote by $\lin_K(\mathscr{G}\cdot D)\subseteq\Cox(X)$ the finite-dimensional vector subspace of $\Cox(X)$ spanned by the orbit of $f_D$ under the action of $\mathscr{G}$, that is, the smallest linear subspace of $\Cox(X)$ containing the $\mathscr{G}$-orbit of $f_D$.
By \cite[Theorem 4.5.4.6]{ADHL15} if $\mathscr{G}$ is a semi-simple and simply connected algebraic group and $(X,\mathscr{G},\mathscr{B},x_0)$ is a spherical variety with boundary divisors $E_1,\dots,E_r$ and colors $D_1,\dots,D_s$ then $\Cox(X)$ is generated as a $K$-algebra by the canonical sections of the $E_i$'s and the finite dimensional vector subspaces $\lin_{K}(\mathscr{G}\cdot D_i)\subseteq \Cox(X)$ for $1\leq i\leq s$.
\end{Remark}
\begin{Definition}
Let $X$ be a normal projective $\mathbb{Q}$-factorial variety. We say that $X$ is weak Fano if $-K_X$ is nef and big.
\end{Definition}
By \cite[Corollary 1.3.2]{BCHM} a weak Fano variety is a Mori dream space.
\begin{Remark}\label{stpb}
Let $Y$ be a smooth and irreducible subvariety of a smooth variety $X$, and let $f:Bl_YX\rightarrow X$ be the blow-up of $X$ along $Y$ with exceptional divisor $E$. Then for any divisor $D\in \Pic(X)$ we have, in $\Pic(Bl_YX)$,
$$\widetilde{D} \sim f^{*}D-\mult_{Y}(D) E$$
where $\widetilde{D}\subset Bl_YX$ is the strict transform of $D$, and $\mult_Y(D)$ is the multiplicity of $D$ at a general point of $Y$.
\end{Remark}
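For instance, if $X = \mathbb{P}^2$, $Y = \{p\}$ is a point, and $D$ is a conic through $p$, then $\mult_Y(D) = 1$ and
$$\widetilde{D}\sim 2H-E$$
in $\Pic(Bl_p\mathbb{P}^2)$, where $H$ denotes the pull-back of a line. The discrepancy computations below for the divisors on $\mathcal{S}_{2r}$ are of this kind.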
\begin{Corollary}\label{Cox_gen}
The Cox ring of $\mathcal{S}_{2r}$ is generated by the sections of $D_1,\dots, D_r,E_1,\dots, E_{r-1}, S^{(r-1)}_r(\mathcal{V}_2^{2r-1})$.
\end{Corollary}
\begin{proof}
This follows from Proposition \ref{bound_col} and Remark \ref{gen_cox}.
\end{proof}
Our aim is to study the Mori chamber decomposition of the wonderful compactification $\mathcal{S}_{2r}$. Since $\mathcal{S}_{2}\cong\mathbb{P}^2$ the first interesting case is for $r = 2$.
\begin{Proposition}\label{mcd_4}
For the variety $\mathcal{S}_4$ we have that $\Pic(\mathcal{S}_4)$ is generated by $D_1, E_1$. Furthermore, $D_1\sim H$, $D_2 \sim 2H-E_1$, $S_2^{(1)}(\mathcal{V}_2^3) \sim 2 H - 2 E_1$, and $\Cox(\mathcal{S}_4)$ is generated by the sections of $D_1,D_2,E_1,S^{(1)}_2(\mathcal{V}_2^3)$. The Mori chamber decomposition of $\Eff(\mathcal{S}_4)$ has three chambers as displayed in the following picture:
$$
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(2.1,1.65) rectangle (7.5,5.301934941656083);
\draw [->,line width=0.1pt] (3.,4.) -- (3.,5.);
\draw [->,line width=0.1pt] (3.,4.) -- (4.,4.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,3.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,2.);
\draw [shift={(3.,4.)},line width=0.4pt,fill=black,fill opacity=0.15000000596046448] (0,0) -- plot[domain=-0.46364760900080615:0.,variable=\t]({1.*0.6965067581669063*cos(\t r)+0.*0.6965067581669063*sin(\t r)},{0.*0.6965067581669063*cos(\t r)+1.*0.6965067581669063*sin(\t r)}) -- cycle ;
\begin{scriptsize}
\draw[color=black] (3.085808915158281,5.121548796059524) node {$E_1$};
\draw[color=black] (4.25,4.1) node {$D_1$};
\draw[color=black] (5.25,3.119262579937719) node {$D_2$};
\draw[color=black] (5.6,2.118119471876817) node {$S_2^{(1)}(\mathcal{V}_2^3)$};
\end{scriptsize}
\end{tikzpicture}
$$
and the movable cone coincides with the nef cone, which is generated by $D_1$ and $D_2$.
\end{Proposition}
\begin{proof}
Since $\mathcal{S}_4$ is the blow-up of a smooth variety along a smooth subvariety the relations $D_2 \sim 2H-E_1$, $S_2^{(1)}(\mathcal{V}_2^3) \sim 2 H - 2 E_1$ follow from Propositions \ref{symplcones}, \ref{mult}, \ref{cones_col} and Remark \ref{stpb}.
The statement on the generators of the Cox ring follows from Corollary \ref{Cox_gen}. Furthermore, by Remarks \ref{toric} and \ref{gen_cox} the Mori chamber decomposition of $\Eff(\mathcal{S}_4)$ is a, possibly trivial, coarsening of the decomposition in the statement. On the other hand, by Proposition \ref{eff_nef} we know that $H$ and $2H-E_1$ generate $\Nef(\mathcal{S}_4)$ while $E_1$ and $2H-2E_1$ generate $\Eff(\mathcal{S}_4)$. So no ray can be removed and the above decomposition coincides with the Mori chamber decomposition of $\Eff(\mathcal{S}_4)$.
\end{proof}
Next, we consider the case $r=3$.
\begin{Lemma}\label{rel_S6}
For the variety $\mathcal{S}_6$ the Picard group $\Pic(\mathcal{S}_6)$ is generated by $H, E_1, E_2$, and we have the following relations: $D_1\sim H$, $D_2 \sim 2H-E_1$, $D_3 \sim 3H -2E_1-E_2$ and $S_3^2(\mathcal{V}_2^5) \sim 2H-2E_1-2E_2$.
\end{Lemma}
\begin{proof}
Recall that the first blow-up $f_1:X_6^{(1)}\rightarrow X_6$ in Construction \ref{ccssf} is the blow-up of $X_6$ along the Veronese variety $\mathcal{V}_2^5$, which by Proposition \ref{symplcones} is the singular locus of $X_6$. Hence, in this case we cannot use Remark \ref{stpb} to compute the discrepancies of the relevant divisors with respect to $E_1$. In order to do this we consider the line $L = \{z_{1,1}-z_{0,1}=z_{1,1}-z_{2,2}=z_{0,2}=z_{0,3}=z_{0,4}=z_{0,5}=z_{1,2}=z_{1,3}=z_{1,4}=z_{1,5}=z_{2,3}=z_{2,4}=z_{2,5}=z_{3,3}=z_{3,4}=z_{3,5}=z_{4,4}=z_{4,5}=z_{5,5} = 0\}$ and let $\widetilde{L}$ be its strict transform in $X_6^{(1)}$. Slightly abusing notation, we will denote by $D_i$ also the strict transform in $X_6^{(1)}$ of the divisor $H_i^3$ in Proposition \ref{cones_col} for $i = 1,2,3$, and by $H$ the pull-back of the hyperplane section to $X_6^{(1)}$. Clearly, $D_1\sim H$.
Now, let us write $D_2\sim 2H-aE_1$. Note that the line $L$ intersects $\mathcal{V}_2^5$ just at the point $p = [1:0\dots:0]$, and by Remark \ref{equations} and Proposition \ref{symplcones} $L\subset X_6$. By Proposition \ref{symplcones} the tangent cone of $X_6$ at $p$ is a cone over $X_4\cong\mathbb{G}(1,4)$ with $5$-dimensional vertex and $\widetilde{L}$ intersects $E_1$ just at the point $q = [1:0:0:0:1:0:\dots :0]$ of $X_4$. Hence $\widetilde{L}\cdot E_1 = 1$. The divisor $H_2^3$ intersects $L$ in $p$ and in another point not lying on $\mathcal{V}_2^5$. Moreover, by Proposition \ref{cones_col} the tangent cone of $H_2^3$ at $p$ is a hyperplane section of $X_4$ not passing through $q$. Then $\widetilde{L}\cdot D_2 = 1$. By the projection formula we have
$$1 = \widetilde{L}\cdot D_2 = 2\widetilde{L}\cdot H-a\widetilde{L}\cdot E_1 = 2L\cdot H_1^3-a = 2-a$$
and hence $a = 1$. So we may write $D_2\sim 2H-E_1$.
Now, write $D_3\sim 3H-bE_1$. The divisor $H_3^3$ intersects $L$ in $p$ with multiplicity two and in another point not lying on $\mathcal{V}_2^5$. By Proposition \ref{cones_col} the tangent cone of $H_3^3$ at $p$ is a quadratic section of $X_4$ not passing through $q$. Hence
$$1 = \widetilde{L}\cdot D_3 = 3\widetilde{L}\cdot H-b\widetilde{L}\cdot E_1 = 3L\cdot H_1^3-b = 3-b$$
and $b = 2$. Then $D_3\sim 3H-2E_1$.
We will denote by $S_3$ the strict transform of $S_3(\mathcal{V}_2^5)$ in $X_6^{(1)}$. Let $R\subset X_4\cong\mathbb{G}(1,4)$ be a general line. Note that $R$ is contracted by the blow-down morphism and hence
$$1 = R\cdot D_2 = 2R\cdot H-R\cdot E_1 = -R\cdot E_1$$
yields $R\cdot E_1 = -1$. By Proposition \ref{mult} we may write $S_3\sim 2H-cE_1$ and since by Proposition \ref{symplcones} the tangent cone of $S_3(\mathcal{V}_2^5)$ at a point of $\mathcal{V}_2^5$ is a quadratic section of $X_4$ we have $R\cdot S_3 = 2$. This yields
$$2 = R\cdot S_3 = 2R\cdot H-cR\cdot E_1 = -cR\cdot E_1 = c$$
and $S_3\sim 2H-2E_1$.
Now, by Proposition \ref{symplcones} the morphism $f_2:\mathcal{S}_6\rightarrow X_6^{(1)}$ in Construction \ref{ccssf} is the blow-up of a smooth variety along a smooth subvariety. So we can apply Remark \ref{stpb} in order to compute the discrepancies of the divisors with respect to $E_2$. Finally, again by Proposition \ref{symplcones} we get the claim.
\end{proof}
\begin{thm}\label{dec_S6}
The Cox ring of $\mathcal{S}_6$ is generated by the sections of $D_1,D_2,D_3,E_1,E_2,S_3^{(2)}(\mathcal{V}_2^5)$. The Mori chamber decomposition of the effective cone of $\mathcal{S}_6$ has nine chambers as displayed in the following $2$-dimensional section of $\Eff(\mathcal{S}_6)$:
$$
\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}
\begin{tikzpicture}[xscale=5.5,yscale=4.65][line cap=round,line join=round,>=triangle 45,x=5.0cm,y=5.0cm]
\clip(-1.4,-0.03) rectangle (1.2,1.06);
\fill[line width=0.4pt,fill=black,fill opacity=0.10000000149011612] (-0.1111111111111111,0.4444444444444444) -- (0.,0.5) -- (0.2,0.4) -- cycle;
\draw [line width=0.1pt] (-1.,0.)-- (0.,1.);
\draw [line width=0.1pt] (1.,0.)-- (0.,1.);
\draw [line width=0.1pt] (-1.,0.)-- (1.,0.);
\draw [line width=0.1pt] (-0.1111111111111111,0.4444444444444444)-- (0.,0.5);
\draw [line width=0.1pt] (0.,0.5)-- (0.2,0.4);
\draw [line width=0.1pt] (0.2,0.4)-- (-0.1111111111111111,0.4444444444444444);
\draw [line width=0.1pt] (0.,0.5)-- (0.,1.);
\draw [line width=0.1pt] (0.2,0.4)-- (1.,0.);
\draw [line width=0.1pt] (-0.1111111111111111,0.4444444444444444)-- (-1.,0.);
\draw [line width=0.1pt] (-0.1111111111111111,0.4444444444444444)-- (1.,0.);
\draw [line width=0.1pt] (0.2,0.4)-- (-1.,0.);
\draw [line width=0.1pt] (-0.1111111111111111,0.4444444444444444)-- (0.,1.);
\draw [line width=0.1pt] (0.,1.)-- (0.2,0.4);
\begin{scriptsize}
\draw [fill=black] (-1.,0.) circle (0.1pt);
\draw[color=black] (-1.107,0.04) node {$S_3^{(2)}(\mathcal{V}_2^5)$};
\draw [fill=black] (0.,1.) circle (0.1pt);
\draw[color=black] (0.016094824612243562,1.03) node {$E_2$};
\draw [fill=black] (1.,0.) circle (0.1pt);
\draw[color=black] (1.035,0.021295712735575137) node {$E_1$};
\draw [fill=black] (-0.1111111111111111,0.4444444444444444) circle (0.0pt);
\draw[color=black] (-0.16,0.46729156695787455) node {$D_3$};
\draw [fill=black] (0.,0.5) circle (0.1pt);
\draw[color=black] (0.05,0.5217643430460943) node {$D_2$};
\draw [fill=black] (0.2,0.4) circle (0.0pt);
\draw[color=black] (0.25,0.419) node {$D_1$};
\draw [fill=uuuuuu] (0.09090909090909086,0.36363636363636365) circle (0.0pt);
\draw[color=uuuuuu] (0.102,0.319) node {$P$};
\end{scriptsize}
\end{tikzpicture}
$$
where $P\sim 3H-E_1-E_2$ and $\Mov(\mathcal{S}_6)$ is generated by $D_1,D_2,D_3$ and $P$.
\end{thm}
\begin{proof}
The computation of the movable cone follows from \cite[Proposition 3.3.2.3]{ADHL15}, Proposition \ref{bound_col} and Remark \ref{gen_cox}, and the statement on the generators of $\Cox(\mathcal{S}_6)$ follows from Corollary \ref{Cox_gen}.
Furthermore, by Lemma \ref{rel_S6}, Proposition \ref{bound_col} and Remarks \ref{toric}, \ref{gen_cox} the Mori chamber decomposition of $\Eff(\mathcal{S}_6)$ is a, possibly trivial, coarsening of the decomposition in the statement.
Note that the stable base loci of divisors in the interior of the chambers delimited by $S_3^{(2)}(\mathcal{V}_2^5),P,E_1$; $S_3^{(2)}(\mathcal{V}_2^5),P,D_3$; $S_3^{(2)}(\mathcal{V}_2^5),D_3,E_2$; $D_2,D_3,D_1,E_2$; $E_1,D_1,E_2$; $P,D_1,E_1$ are respectively given by $S_3^{(2)}(\mathcal{V}_2^5)\cup E_1$; $S_3^{(2)}(\mathcal{V}_2^5)$; $E_2\cup S_3^{(2)}(\mathcal{V}_2^5)$; $E_2$; $E_1\cup E_2$; $E_1$. Furthermore, since Mori chambers are convex the stable base locus chamber delimited by $D_2,D_3,D_1,E_2$ must be divided in two Mori chambers by the wall joining $D_2$ and $E_2$. Hence the decomposition in the statement gives the Mori chamber decomposition of $\Eff(\mathcal{S}_6)$ outside of the movable cone.
Finally, note that the only modifications we could perform inside the movable cone are removing the wall joining $D_1$ and $D_3$ and adding a wall joining $D_2$ and $P$. However, neither modification is allowed, since by Proposition \ref{eff_nef} the chamber delimited by $D_1,D_2,D_3$ is the nef cone of $\mathcal{S}_6$.
\end{proof}
\section{Moduli spaces of conics in Lagrangian Grassmannians}\label{K_Grass}
An $n$-pointed rational pre-stable curve $(C,(x_{1},...,x_{n}))$ is a projective, connected, reduced curve $C$ of arithmetic genus zero with at most nodal singularities, together with $n$ distinct smooth marked points $x_1,...,x_n\in C$. We will refer to the marked and the singular points of $C$ as special points.
Let $X$ be a homogeneous variety. A map $(C,(x_{1},...,x_{n}),\alpha)$, where $\alpha:C\rightarrow X$ is a morphism from an $n$-pointed rational pre-stable curve to $X$, is stable if any component $E\cong\mathbb{P}^{1}$ of $C$ contracted by $\alpha$ contains at least three special points.
Now, let us fix a class $\beta\in H_2(X,\mathbb{Z})$. By \cite[Theorem 2]{FP} there exists a smooth, proper, and separated Deligne-Mumford stack $\overline{\mathcal{M}}_{0,n}(X,\beta)$ parametrizing isomorphism classes of stable maps $[C,(x_{1},...,x_{n}),\alpha]$ such that $\alpha_{*}[C] = \beta$. Furthermore, by \cite[Corollary 1]{KP} the coarse moduli space $\overline{M}_{0,n}(X,\beta)$ associated to the stack $\overline{\mathcal{M}}_{0,n}(X,\beta)$ is a normal, irreducible, projective variety with at most finite quotient singularities of dimension
$$
\dim(\overline{M}_{0,n}(X,\beta)) = \dim(X)+\beta\cdot c_1(T_X)+n-3.
$$
The variety $\overline{M}_{0,n}(X,\beta)$ is called the \textit{moduli space of stable maps}, or the \textit{Kontsevich moduli space} of stable maps of class $\beta$ from a rational pre-stable $n$-pointed curve to $X$.
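For instance, for conics in $\mathbb{G}(1,3)$, that is for $n = 0$ and $\beta$ twice the class of a line, we have $c_1(T_{\mathbb{G}(1,3)}) = 4\sigma_1$ and hence
$$\dim(\overline{M}_{0,0}(\mathbb{G}(1,3),2)) = 4+8+0-3 = 9$$
which agrees with the dimension of the $\mathbb{P}^9$ parametrizing quadric surfaces in $\mathbb{P}^3$.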
\subsubsection*{Kontsevich spaces of conics in Grassmannians}
We will denote by $\overline{M}_{0,0}(\mathbb{G}(k,n),2)$ the moduli space of degree two stable maps to the Grassmannian $\mathbb{G}(k,n)$ parametrizing $k$-planes in $\mathbb{P}^n$ embedded via the Pl\"ucker embedding. Now, following \cite[Section 2]{CC10} we are going to describe divisor classes on $\overline{M}_{0,0}(\mathbb{G}(k,n),2)$. Fix projective subspaces $\Pi^{n-k},\Pi^{n-k-2}\subset\mathbb{P}^n$ of dimension $n-k$ and $n-k-2$, and consider the Schubert cycles
$$
\begin{array}{lll}
\sigma_{1,1}^{k,n} & = & \{W\in\mathbb{G}(k,n)\: | \: \dim(W\cap\Pi^{n-k})\geq 1\};\\
\sigma_{2}^{k,n} & = & \{W\in\mathbb{G}(k,n)\: | \: \dim(W\cap\Pi^{n-k-2})\geq 0\}.
\end{array}
$$
Let $\pi:\overline{M}_{0,1}(\mathbb{G}(k,n),2)\rightarrow\overline{M}_{0,0}(\mathbb{G}(k,n),2)$ be the forgetful morphism and $ev:\overline{M}_{0,1}(\mathbb{G}(k,n),2)\rightarrow\mathbb{G}(k,n)$ the evaluation morphism. We define
$$H_{\sigma_{1,1}}^{k,n} = \pi_{*}ev^{*}\sigma_{1,1}^{k,n}, \: H_{\sigma_2}^{k,n} = \pi_{*}ev^{*}\sigma_2^{k,n}.$$
Furthermore, we will denote by $T^{k,n}$ the class of the divisor of conics that are tangent to a fixed hyperplane section of $\mathbb{G}(k,n)$.
Let $D_{deg}^{k,n}$ be the class of the divisor of maps $[C,\alpha]\in \overline{M}_{0,0}(\mathbb{G}(k,n),2)$ such that the projection of the span of the linear spaces parametrized by $\alpha(C)$ from a fixed subspace of dimension $n-k-2$ has dimension less than $k+2$.
Next we define the divisor class $D_{unb}^{k,n}$. A stable map $\alpha:\mathbb{P}^1\rightarrow\mathbb{G}(k,n)$ induces a rank $k+1$ subbundle $\mathcal{E}_{\alpha}\subset \mathcal{O}_{\mathbb{P}^1}\otimes K^{n+1}$. If $k = 1$ we define $D_{unb}^{k,n}$ as the closure of the locus of maps $[\mathbb{P}^1,\alpha]\in \overline{M}_{0,0}(\mathbb{G}(k,n),2)$ such that $\mathcal{E}_{\alpha}\neq \mathcal{O}_{\mathbb{P}^1}(-1)^{\oplus 2}$. If $k\geq 2$ there is a trivial subbundle $\mathcal{O}_{\mathbb{P}^1}^{\oplus k-1}\subset\mathcal{E}_{\alpha}$ which induces a $(k-2)$-dimensional subspace $H_{\alpha}\subset\mathbb{P}^n$. In this way we get a map
$$
\begin{array}{lclc}
\xi: & \overline{M}_{0,0}(\mathbb{G}(k,n),2) & \dashrightarrow & \mathbb{G}(k-2,n)\\
& [\mathbb{P}^1,\alpha] & \mapsto & H_{\alpha}
\end{array}
$$
We define $D_{unb}^{k,n} = \xi^{*}\mathcal{O}_{\mathbb{G}(k-2,n)}(1)$; that is, $D_{unb}^{k,n}$ is the closure of the locus of maps $[\mathbb{P}^1,\alpha]\in \overline{M}_{0,0}(\mathbb{G}(k,n),2)$ such that $H_{\alpha}$ intersects a fixed $(n-k+1)$-dimensional subspace of $\mathbb{P}^n$.
Finally, we denote by $\Delta^{k,n}$ the boundary divisor parametrizing stable maps with reducible domain.
The connection between $\overline{M}_{0,0}(\mathbb{G}(1,3),2)$ and the space of complete quadrics $\mathcal{Q}(3)$ is due to \cite[Lemma 21]{Ce15} which states that there is a finite morphism of degree two
\stepcounter{thm}
\begin{equation}\label{2to1}
\phi:\overline{M}_{0,0}(\mathbb{G}(1,3),2)\rightarrow\mathcal{Q}(3)
\end{equation}
which maps a smooth conic $C\subset \mathbb{G}(1,3)$ to the quadric surface $\bigcup_{[L]\in C}L\subset\mathbb{P}^3$.
\subsubsection*{Kontsevich spaces of conics in Lagrangian Grassmannians}\label{K_Lag}
The Lagrangian Grassmannian $LG(r,2r)\subset\mathbb{G}(r-1,2r-1)$ parametrizes $r$-dimensional subspaces of $K^{2r}$ which are isotropic with respect to the standard symplectic form $\Omega$ in (\ref{symform}). By \cite[Section 2.1]{Te05} $LG(r,2r)$ is an irreducible variety of dimension $\frac{r(r+1)}{2}$ and of Picard rank one. Moreover, the restriction of the Pl\"ucker embedding of $\mathbb{G}(r-1,2r-1)$ yields the minimal homogeneous embedding of $LG(r,2r)$.
In this section we will study the moduli space $\overline{M}_{0,0}(LG(r,2r),2)$ parametrizing conics in $LG(r,2r)$. Let $\mathcal{E}$ be the universal quotient bundle on $\mathbb{G}(r-1,2r-1)$. The Lagrangian Grassmannian $LG(r,2r)\subset\mathbb{G}(r-1,2r-1)$ is the zero locus of a section of $\bigwedge^2\mathcal{E}$ which has first Chern class $(r-1)c_1(\mathcal{O}_{\mathbb{G}(r-1,2r-1)}(1))$. Hence the canonical bundle of $LG(r,2r)$ is given by $\omega_{LG(r,2r)}\cong \mathcal{O}_{LG(r,2r)}(-r-1)$, and $\dim(\overline{M}_{0,0}(LG(r,2r),2)) = \frac{r^2+5r-2}{2}$.
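Indeed, since $\omega_{\mathbb{G}(r-1,2r-1)}\cong\mathcal{O}_{\mathbb{G}(r-1,2r-1)}(-2r)$, by adjunction
$$\omega_{LG(r,2r)} \cong \mathcal{O}_{LG(r,2r)}(-2r+(r-1)) = \mathcal{O}_{LG(r,2r)}(-r-1)$$
so a conic class $\beta$ satisfies $\beta\cdot c_1(T_{LG(r,2r)}) = 2(r+1)$ and
$$\dim(\overline{M}_{0,0}(LG(r,2r),2)) = \frac{r(r+1)}{2}+2(r+1)-3 = \frac{r^2+5r-2}{2}.$$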
\begin{Remark}\label{comlg}
We recall some facts about the cohomology of $LG(r,2r)$. For details we refer to \cite[Section 3]{BKT03}. Consider a flag $F^1 \subset F^2 \subset \dots \subset F^r \subset K^{2r}$, where $F^j$ are isotropic subspaces of $K^{2r}$ of dimension $j$. Let $\mathcal{D}_r$ be the set of strict partitions $\lambda=(\lambda_1, \dots, \lambda_l)$ with $0 < \lambda_l < \dots < \lambda_1 \le r$ and denote by $\left|\lambda \right|= \lambda_1 + \dots + \lambda_l$ the weight of $\lambda$. For each $\lambda \in \mathcal{D}_r$ there is a codimension $\left|\lambda \right|$ Schubert variety $\Sigma^r_\lambda \subseteq LG(r,2r)$ defined by
$$\Sigma^r_\lambda:=\{W \in LG(r,2r),\, \dim(W \cap F^{r+1-\lambda_i}) \ge i,\; i=1, \dots,l\}.$$
The class of the Schubert variety $\Sigma^r_\lambda$ in the cohomology ring $H^*(LG(r,2r),\mathbb{Z})$ will be denoted by $\sigma^r_\lambda$. We have that
$$H^*(LG(r,2r),\mathbb{Z})= \bigoplus_{\lambda \in \mathcal{D}_r}\mathbb{Z}\cdot \sigma^r_\lambda$$
with the following relations:
\stepcounter{thm}
\begin{equation}\label{relcomlg}
(\sigma_i^{r})^{2}+2\sum_{k=1}^{r-i}(-1)^{k}\sigma^r_{i+k}\sigma^r_{i-k}=0
\end{equation}
where by convention $\sigma^r_0=1$ and $\sigma^r_i=0$ for $i < 0$.
\end{Remark}
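For instance, for $r=2$ and $i=1$ the relation (\ref{relcomlg}) reads $(\sigma_1^{2})^{2}-2\sigma^2_{2}\sigma^2_0=0$, that is
$$(\sigma_1^{2})^{2} = 2\sigma^2_{2}$$
recovering the fact that on the quadric threefold $LG(2,4)$ the square of the hyperplane class is twice the class of a line.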
Now, we define divisor classes on $\overline{M}_{0,0}(LG(r,2r),2)$. We denote by $\Delta^r$ the boundary divisor parametrizing stable maps with reducible domain; this is the restriction to $\overline{M}_{0,0}(LG(r,2r),2)$ of the divisor $\Delta^{r-1,2r-1}$ on $\overline{M}_{0,0}(\mathbb{G}(r-1,2r-1),2)$.
Fix an isotropic subspace $F^{r-1}$ of dimension $r-1$, and consider the divisor $H_{\sigma_2}^r = \pi_{*}ev^{*}\sigma^r_2$, where $\pi: \overline{M}_{0,1}(LG(r,2r),2) \rightarrow \overline{M}_{0,0}(LG(r,2r),2)$ is the forgetful morphism, $ev: \overline{M}_{0,1}(LG(r,2r),2) \rightarrow LG(r,2r)$ is the evaluation morphism, and $\sigma_2^r$ is the Schubert cycle corresponding to the Schubert variety
$$\Sigma^r_2:=\{W \in LG(r,2r), \dim(W \cap F^{r-1}) \ge 1\}.$$
By Remark \ref{comlg}, in $LG(r,2r)$ the only Schubert cycle of codimension two is $\sigma^r_2$, so by \cite[Theorem 1]{Op05} we get that $\Delta^r$ and $H_{\sigma_2}^r$ generate the Picard group of $\overline{M}_{0,0}(LG(r,2r),2)$. Furthermore, both the divisors $H^{r-1,2r-1}_{\sigma_{1,1}}$ and $H^{r-1,2r-1}_{\sigma_2}$ of $\overline{M}_{0,0}(\mathbb{G}(r-1,2r-1),2)$ restrict to $H_{\sigma_2}^r$ on $\overline{M}_{0,0}(LG(r,2r),2)$. Similarly, $D_{deg}^{r-1,2r-1}$ and $D_{unb}^{r-1,2r-1}$ restrict to the same divisor $D_{unb}^r$ on $\overline{M}_{0,0}(LG(r,2r),2)$.
Finally, we will denote by $T^r$ the restriction of the divisor $T^{r-1,2r-1}$ to $\overline{M}_{0,0}(LG(r,2r),2)$; this is the class of the divisor of conics that are tangent to a fixed hyperplane section of $LG(r,2r)$.
\begin{Proposition}\label{propemblg}
Consider the subspaces $H=\{x_2= \dots = x_{r-1}=x_{r+2}=\dots=x_{2r-1} =0\}$ and $\Pi^{r-3}=\{x_0 = \dots = x_{r+1}=0\}$ in $\mathbb{P}^{2r-1}$. There is an embedding
$$
\begin{array}{ccll}
i:& LG(2,H) & \hookrightarrow & LG(r,2r) \\
& L & \mapsto & \langle L , \Pi^{r-3} \rangle
\end{array}
$$
which induces an embedding $j:\overline{M}_{0,0}(LG(2,4),2) \rightarrow \overline{M}_{0,0}(LG(r,2r),2)$. Moreover, the pull-back map $j^{*}:\Pic(\overline{M}_{0,0}(LG(r,2r),2))\rightarrow\Pic(\overline{M}_{0,0}(LG(2,4),2))$ is an isomorphism.
\end{Proposition}
\begin{proof}
Since $\Pi^{r-3}$ is the projectivization of an isotropic subspace of $K^{2r}$ and is disjoint from $H$, the map $i$ is well-defined. By \cite[Theorem 1]{Op05} the Picard group of $\overline{M}_{0,0}(LG(r,2r),2)$ is generated by $\Delta^r$ and $H_{\sigma_{2}}^r$.
Furthermore, we have that $i^*(\sigma_2^r)= \sigma_2^2$ and then $j^*(H_{\sigma_2}^r)=H_{\sigma_2}^2$. Finally, since $j^*(\Delta^r)=\Delta^2$ we conclude that the pull-back map is an isomorphism.
\end{proof}
\begin{Lemma}\label{sym_symp}
Let $C_1, C_2\subset\mathbb{G}(1,3)$ be two smooth conics corresponding to the rulings $\bigcup_{[L]\in C_1}L$ and $\bigcup_{[L]\in C_2}L$ of a smooth quadric $Q\subset\mathbb{P}^3$. The following are equivalent:
\begin{itemize}
\item[(a)] $C_1$ is contained in $LG(2,4)$ but $C_2$ is not;
\item[(b)] the lines in the ruling $\bigcup_{[L]\in C_1}L$ are Lagrangian while the general line in the ruling $\bigcup_{[L]\in C_2}L$ is not;
\item[(c)] the matrix of $Q$ has a scalar multiple that is symplectic.
\end{itemize}
\end{Lemma}
\begin{proof}
The actions of $Sp(4)$ on $\overline{M}_{0,0}(LG(2,4),2)$ in (\ref{actKLG}) and on $\mathcal{S}_4$ in (\ref{acsimp}) are compatible. Therefore, it is enough to prove that the equivalence of the conditions in the statement holds for a particular smooth quadric.
Consider the quadric $Q = \{x_0^2+x_1^2-x_2^2-x_3^2 = 0\}\subset\mathbb{P}^3$. If $M_{Q}$ is the matrix of $Q$ we have $M_{Q}^t\Omega M_{Q} = -\Omega$, and hence $iM_{Q}$ is symplectic.
Now, one of the rulings of $Q$ is given by the following lines
$$L_{s,t} = \left\langle (t,-s,-t,s), (s,t,s,t)\right\rangle$$
with $[s:t]\in\mathbb{P}^1$. Note that $L_{s,t}$ is Lagrangian with respect to $\Omega$ for all $[s:t]\in\mathbb{P}^1$.
Fix homogeneous coordinates $[Z_0:\dots :Z_5]$ on $\mathbb{P}^5$. The Lagrangian Grassmannian $LG(2,4)$ is cut out on the Grassmannian $\mathbb{G}(1,3)$ by the hyperplane $H = \{Z_1+Z_4 = 0\}$. Via the Pl\"ucker embedding the ruling $L_{s,t}$ corresponds to the conic given by the image of the following morphism
$$
\begin{array}{ccl}
\mathbb{P}^1 & \longrightarrow & \mathbb{G}(1,3)\\
(s,t) & \longmapsto & [t^2+s^2:2st:t^2-s^2:-s^2+t^2:-2st:-t^2-s^2]
\end{array}
$$
which therefore is contained in $H\cap\mathbb{G}(1,3) = LG(2,4)$. The other ruling of $Q$ is given by
$$R_{u,v} = \left\langle (u,-v,u,v), (v,u,-v,u)\right\rangle$$
with $[u:v]\in\mathbb{P}^1$. The corresponding conic is given by the image of
$$
\begin{array}{ccl}
\mathbb{P}^1 & \longrightarrow & \mathbb{G}(1,3)\\
(u,v) & \longmapsto & [u^2+v^2:-2uv:u^2-v^2:v^2-u^2:-2uv:u^2+v^2]
\end{array}
$$
which is not contained in $H\cap\mathbb{G}(1,3) = LG(2,4)$. Hence, the general line in the ruling $R_{u,v}$ is not Lagrangian.
\end{proof}
\begin{Lemma}\label{sphKLG}
The following $Sp(4)$-action on $\overline{M}_{0,0}(LG(2,4),2)$
\stepcounter{thm}
\begin{equation}\label{actKLG}
\begin{array}{cll}
Sp(4) \times \overline{M}_{0,0}(LG(2,4),2) & \longrightarrow & \overline{M}_{0,0}(LG(2,4),2) \\
(M, [C, \alpha]) & \longmapsto & [C, \wedge^{2}M\circ\alpha]
\end{array}
\end{equation}
gives to $\overline{M}_{0,0}(LG(2,4),2)$ a structure of spherical variety.
\end{Lemma}
\begin{proof}
By Lemma \ref{sym_symp} a ruling of the quadric $Q = \{x_0^2 + x_1^2 - x_2^2 - x_3^2 =0\}$ yields a conic in $LG(2,4)$. Let $\mathscr{B}\subset Sp(4)$ be the Borel subgroup of the symplectic group in Remark \ref{borelsym}. Note that $\dim(\mathscr{B})=6$. The stabilizer of $Q$ in $\mathscr{B}$ is given by
$$\begin{pmatrix}
A_{2,2} & 0_{2,2} \\
B_{2,2} & A_{2,2}^{-t}
\end{pmatrix} \begin{pmatrix}
I_{2,2} & 0_{2,2} \\
0_{2,2} & -I_{2,2}
\end{pmatrix} \begin{pmatrix}
A_{2,2}^t & B_{2,2}^t \\
0_{2,2} & A_{2,2}^{-1}
\end{pmatrix} = \begin{pmatrix}
A_{2,2}A_{2,2}^t & A_{2,2}B^t_{2,2} \\
B_{2,2}A^t_{2,2} & B_{2,2}B^t_{2,2} - A_{2,2}^{-t}A_{2,2}^{-1}
\end{pmatrix}$$
So, we get $B_{2,2}=0_{2,2}$ and $A_{2,2}A_{2,2}^t = I_{2,2}$. Then
$$Stab_{\mathscr{B}}(Q) = \left\lbrace M = \begin{pmatrix}
a & 0 & 0 & 0 \\
0 & b &0 & 0 \\
0 & 0 & \frac{1}{a} & 0 \\
0 & 0 & 0 & \frac{1}{b} \\
\end{pmatrix}; \text{ with } a^2 = b^2 = 1 \right\rbrace$$
and $\dim(Stab_{\mathscr{B}}(Q))=0$. Hence, the $\mathscr{B}$-orbit of the point of $\overline{M}_{0,0}(LG(2,4),2)$ corresponding to the conic induced by $Q$ has dimension $\dim(\mathscr{B})-\dim(Stab_{\mathscr{B}}(Q)) = 6 = \dim(\overline{M}_{0,0}(LG(2,4),2))$, so $\mathscr{B}$ acts with a dense orbit and $\overline{M}_{0,0}(LG(2,4),2)$ is spherical.
\end{proof}
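The stabilizer elements just computed can be checked numerically. The sketch below assumes the standard symplectic form $J = \begin{psmallmatrix} 0 & I \\ -I & 0\end{psmallmatrix}$ (the convention in Remark \ref{borelsym} may differ); it verifies that each of the four matrices $\mathrm{diag}(a,b,1/a,1/b)$ with $a^2=b^2=1$ is symplectic and preserves the quadric $Q$.

```python
import numpy as np
from itertools import product

I2, Z2 = np.eye(2), np.zeros((2, 2))
J = np.block([[Z2, I2], [-I2, Z2]])      # assumed standard symplectic form
G = np.diag([1.0, 1.0, -1.0, -1.0])      # Gram matrix of Q = {x0^2 + x1^2 - x2^2 - x3^2 = 0}

checks = []
for a, b in product([1.0, -1.0], repeat=2):   # the four elements with a^2 = b^2 = 1
    M = np.diag([a, b, 1.0 / a, 1.0 / b])
    checks.append(np.allclose(M.T @ J @ M, J)      # M is symplectic
                  and np.allclose(M.T @ G @ M, G)) # M preserves the quadric Q
stab_ok = all(checks)
```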
\begin{Proposition}\label{isor2}
The restriction of the map in (\ref{2to1}) to $\overline{M}_{0,0}(LG(2,4),2)$ yields an isomorphism
\stepcounter{thm}
\begin{equation}\label{maplg}
\varphi : \overline{M}_{0,0}(LG(2,4),2) \rightarrow \mathcal{S}_4
\end{equation}
where $\mathcal{S}_4$ is the wonderful compactification of the space of symplectic quadrics of $\mathbb{P}^3$.
\end{Proposition}
\begin{proof}
By Lemma \ref{sym_symp} the restriction of the map in (\ref{2to1}) to $\overline{M}_{0,0}(LG(2,4),2)$ yields an injective morphism, which is then surjective since both $\overline{M}_{0,0}(LG(2,4),2)$ and $\mathcal{S}_4$ are irreducible and $6$-dimensional.
Finally, since $\mathcal{S}_4$ is smooth and $\overline{M}_{0,0}(LG(2,4),2)$ is normal Zariski's main theorem \cite[Chapter 3, Section 9]{Mum99} yields that the morphism in (\ref{maplg}) is an isomorphism.
\end{proof}
\begin{Lemma}\label{boucollg}
The divisor classes $\Delta^{2}, D_{unb}^{2}$ and the divisor classes $H_{\sigma_2}^{2},T^{2}$ are respectively the classes of the boundary divisors and the colors of the spherical variety $\overline{M}_{0,0}(LG(2,4),2)$.
\end{Lemma}
\begin{proof}
The actions (\ref{actKLG}) and (\ref{acsimp}) are equivariant with respect to the map $\varphi$ in (\ref{maplg}). So boundary divisors and colors of $\overline{M}_{0,0}(LG(2,4),2)$ are mapped by $\varphi$ to boundary divisors and colors of $\mathcal{S}_4$ respectively. By Proposition \ref{bound_col}, in $\mathcal{S}_4$ the colors are $D_1,D_2$ and the boundary divisors are $E_1,S_2^{(1)}(\mathcal{V}_2^{3})$. Moreover, $\Delta^{2}, D_{unb}^{2}$ are stabilized by the $Sp(4)$-action in (\ref{actKLG}) and choosing the flag of isotropic linear subspaces $ \{x_0 = x_1 = 0\}\subset\{x_0 = 0\}$ we see that $H_{\sigma_2}^{2},T^{2}$ are stabilized by the action of the Borel subgroup of $Sp(4)$ in Remark \ref{borelsym}. Moreover, it is straightforward to see that the inverse images via the morphism $\varphi$ in (\ref{maplg}) of $S_2^{(1)}(\mathcal{V}_2^{3}), E_1, D_1,D_2$ are divisors of classes $\Delta^{2}, D_{unb}^{2},H_{\sigma_2}^{2},T^{2}$. Now, suppose that $\overline{M}_{0,0}(LG(2,4),2)$ had another boundary divisor. Then $\varphi$ would map it to a boundary divisor of $\mathcal{S}_4$, but the only boundary divisors of $\mathcal{S}_4$ are $S_2^{(1)}(\mathcal{V}_2^{3}), E_1$. Hence, the only boundary divisors of $\overline{M}_{0,0}(LG(2,4),2)$ are $\Delta^{2}, D_{unb}^{2}$, and similarly the only colors of $\overline{M}_{0,0}(LG(2,4),2)$ are $H_{\sigma_2}^{2},T^{2}$.
\end{proof}
We denote by $\overline{M}_{0,0}(LG(r,2r),2,1)$ the moduli space of weighted stable maps to $LG(r,2r)$. In this space degree one tails of a stable map are replaced by their attaching point. We refer to \cite{MM07} for the construction of moduli of weighted stable maps.
\begin{Proposition}\label{effneflg}
The divisors $\Delta^{r}, D_{unb}^{r}$ generate the effective cone of $\overline{M}_{0,0}(LG(r,2r),2)$, and the divisors $H_{\sigma_2}^r, T^r$ generate the nef cone of $\overline{M}_{0,0}(LG(r,2r),2)$.
The divisor $H_{\sigma_2}^r$ induces a birational morphism
$$f_{H_{\sigma_2}^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \widetilde{Chow}(LG(r,2r),2)$$
which is an isomorphism away from the locus $Q^r(1)$ of double covers of a line in $LG(r,2r)$, and contracts $Q^r(1)$ so that the locus of double covers with the same image maps to
a point, where $\widetilde{Chow}(LG(r,2r),2)$ is the normalization of the Chow variety of conics in $LG(r,2r)$.
The divisor $T^r$ induces a morphism
$$f_{T^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \overline{M}_{0,0}(LG(r,2r),2,1)$$
which is an isomorphism away from $\Delta^r$ and contracts the locus of maps with reducible domain $[C_1\cup C_2,\alpha]$ to $\alpha(C_1\cap C_2)$. Hence, $f_{T^r}$ contracts the divisor $\Delta^r$ onto $LG(r,2r)\subset \overline{M}_{0,0}(LG(r,2r),2,1)$.
\end{Proposition}
\begin{proof}
By \cite[Proposition 4.5.4.4]{ADHL15} and Lemma \ref{boucollg} the effective cone of $\overline{M}_{0,0}(LG(2,4),2)$ is generated by $\Delta^{2},D_{unb}^{2}, H_{\sigma_2}^{2}, T^{2}$. Consider the isomorphism $\varphi$ in (\ref{maplg}). We have
$$
\varphi^{*}E_1 =D_{unb}^{2},\: \varphi^{*}S_2^{(1)}(\mathcal{V}_2^3) = \Delta^{2},\: \varphi^{*}D_1 = H_{\sigma_2}^{2},\: \varphi^{*}D_2= T^{2}.
$$
Now, the relations among the boundary divisors and the colors of $\mathcal{S}_4$ in Proposition \ref{mcd_4} yield the following relations in the Picard group of $\overline{M}_{0,0}(LG(2,4),2)$:
\stepcounter{thm}
\begin{equation}\label{relPiclg}
H_{\sigma_2}^{2}\sim\frac{\Delta^{2}+2D_{unb}^2}{2},\: T^{2} \sim \Delta^{2}+D_{unb}^2
\end{equation}
and the statement in the case $r = 2$ follows from Propositions \ref{eff_nef} and \ref{isor2}.
Now, consider the case $r > 2$. Since $T^r$ is the pull-back of $T^{r-1,2r-1}$ via the embedding $\overline{M}_{0,0}(LG(r,2r),2)\hookrightarrow\overline{M}_{0,0}(\mathbb{G}(r-1,2r-1),2)$, \cite[Theorem 3.8]{CC10} yields that $T^r$ induces a morphism
$$f_{T^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \overline{M}_{0,0}(LG(r,2r),2,1)$$
which is an isomorphism away from $\Delta^r$ and contracts the locus of maps with reducible domain $[C_1\cup C_2,\alpha]$ to $\alpha(C_1\cap C_2)$. Hence, $f_{T^r}$ contracts the divisor $\Delta^r$ onto $LG(r,2r)\subset \overline{M}_{0,0}(LG(r,2r),2,1)$. So $\Delta^r$ generates an extremal ray of the effective cone, and $T^r$ generates an extremal ray of the nef cone.
Similarly, \cite[Proposition 3.7]{CC10} yields the morphism $f_{H_{\sigma_2}^r}:\overline{M}_{0,0}(LG(r,2r),2)\rightarrow \widetilde{Chow}(LG(r,2r),2)$, and hence $H_{\sigma_2}^r$ generates the other extremal ray of the nef cone.
Now, following the proof of \cite[Lemma 3.4]{CC10} we define the class of a curve $\Gamma$ in $\overline{M}_{0,0}(LG(r,2r),2)$ whose deformations cover the whole of $\overline{M}_{0,0}(LG(r,2r),2)$. Consider a general hyperplane section $Z$ of $LG(2,4)\subset\mathbb{P}^4$, and a general line in this hyperplane section. The planes containing the line cut out a pencil of conics on $Z\subset LG(2,4)$. Hence we get a rational curve $C\subset \overline{M}_{0,0}(LG(2,4),2)$ parametrizing these conics. Let $\Gamma$ be the image of $C$ via the embedding in Proposition \ref{propemblg}. Then $H_{\sigma_2}^r\cdot\Gamma = 1$, and $\Delta^r\cdot\Gamma = 2$ since there are two reducible conics in a general pencil of conics in the quadric surface $Z$. Now, by (\ref{relPiclg}) we get that $D_{unb}^r\cdot\Gamma = 0$, and by \cite[Theorem 2.2]{BDPP13} we conclude that $D_{unb}^r$ generates the other extremal ray of the effective cone.
\end{proof}
\begin{Remark}\label{contr4}
Note that $Q^r(1)$ is a divisor in $\overline{M}_{0,0}(LG(r,2r),2)$ if and only if $r = 2$. By Proposition \ref{isor2} we have $\overline{M}_{0,0}(LG(2,4),2)\cong\mathcal{S}_4$ which by Proposition \ref{x4g14} is the blow-up of $\mathbb{G}(1,4)$ along the Veronese $\mathcal{V}_2^{3}$. In this case
$$f_{H_{\sigma_2}^2}:\overline{M}_{0,0}(LG(2,4),2)\rightarrow \widetilde{Chow}(LG(2,4),2)$$
is nothing but the blow-down morphism $\mathcal{S}_4\rightarrow\mathbb{G}(1,4)$. Indeed, since $LG(2,4)\subset\mathbb{P}^4$ is a quadric hypersurface and hence does not contain any plane we have that all planes in $\mathbb{P}^4$ cut out a conic on $LG(2,4)$. Hence, we may identify the Chow variety of conics in $LG(2,4)$ with $\mathbb{G}(2,4)\cong\mathbb{G}(1,4)$.
Furthermore, by Proposition \ref{mcd_4} the morphism
$$f_{T^2}:\overline{M}_{0,0}(LG(2,4),2)\rightarrow \overline{M}_{0,0}(LG(2,4),2,1)$$
is induced by the strict transform of the restriction to $\mathbb{G}(1,4)$ of the linear system of quadrics in $\mathbb{P}^9$ containing $\mathcal{V}_2^{3}$. In this way we realize $\overline{M}_{0,0}(LG(2,4),2,1)$ as a $6$-fold of degree $40$ in $\mathbb{P}^{14}$ which is singular along a $3$-fold isomorphic to $LG(2,4)$.
\end{Remark}
\begin{thm}\label{mcd_lg}
The Mori chamber decomposition of $\Eff(\overline{M}_{0,0}(LG(r,2r),2))$ has three chambers as displayed in the following picture:
$$
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\clip(2.1,2.0) rectangle (7.5,5.4);
\draw [->,line width=0.1pt] (3.,4.) -- (3.,5.);
\draw [->,line width=0.1pt] (3.,4.) -- (4.,4.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,3.);
\draw [->,line width=0.1pt] (3.,4.) -- (5.,2.);
\draw [shift={(3.,4.)},line width=0.4pt,fill=black,fill opacity=0.15000000596046448] (0,0) -- plot[domain=-0.46364760900080615:0.,variable=\t]({1.*0.6965067581669063*cos(\t r)+0.*0.6965067581669063*sin(\t r)},{0.*0.6965067581669063*cos(\t r)+1.*0.6965067581669063*sin(\t r)}) -- cycle ;
\begin{scriptsize}
\draw[color=black] (3.085808915158281,5.2) node {$D_{unb}^r$};
\draw[color=black] (4.4,4) node {$H_{\sigma_2}^r$};
\draw[color=black] (5.25,3) node {$T^r$};
\draw[color=black] (5.3,2.1) node {$\Delta^r$};
\end{scriptsize}
\end{tikzpicture}
$$
where $H_{\sigma_2}^{r}\sim \frac{1}{2}(\Delta^r+2D_{unb}^r)$ and $T^r\sim\Delta^r+D_{unb}^r$. Furthermore, $\Mov(\overline{M}_{0,0}(LG(r,2r),2))$ is generated by $T^r$ and $D_{unb}^r$ if $r > 2$, while $\Mov(\overline{M}_{0,0}(LG(2,4),2))$ is generated by $T^2$ and $H_{\sigma_2}^2$. The Cox ring $\Cox(\overline{M}_{0,0}(LG(2,4),2))$ is generated by the sections of $\Delta^2,D_{unb}^2,H_{\sigma_2}^2,T^2$.
The birational model $X_r$ corresponding to the chamber delimited by $H_{\sigma_2}^r$ and $D_{unb}^r$ is a fibration $X_r\rightarrow SG(r-2,2r)$ with fibers isomorphic to $\mathbb{G}(2,4)$, where $SG(r-2,2r)$ is the symplectic Grassmannian parametrizing isotropic subspaces of dimension $r-2$. Finally, $D_{unb}^r$ contracts $\overline{M}_{0,0}(LG(r,2r),2)$ onto $SG(r-2,2r)$.
\end{thm}
\begin{proof}
First consider the case $r = 2$. The statement on the generators of the Cox ring follows from Proposition \ref{effneflg} and Remark \ref{gen_cox}. Furthermore, by Remarks \ref{toric} and \ref{gen_cox} the Mori chamber decomposition of $\Eff(\overline{M}_{0,0}(LG(2,4),2))$ is a, possibly trivial, coarsening of the decomposition in the statement. Since by Proposition \ref{effneflg} the effective cone $\Eff(\overline{M}_{0,0}(LG(2,4),2))$ is generated by $\Delta^{2}$ and $D_{unb}^{2}$, and $H_{\sigma_2}^2, T^2$ generate $\Nef(\overline{M}_{0,0}(LG(2,4),2))$ no ray can be removed, and the Mori chamber decomposition is as in the statement. The relations $H_{\sigma_2}^{r}\sim \frac{1}{2}(\Delta^r+2D_{unb}^r)$ and $T^r\sim\Delta^r+D_{unb}^r$ follow from the proof of Proposition \ref{propemblg} and (\ref{relPiclg}).
Now, consider the case $r >2$. By Proposition \ref{effneflg} the wall-crossing of $T^r$ induces a divisorial contraction, and a divisor inside the chamber delimited by $T^r$ and $H^r_{\sigma_2}$ is ample. By Proposition \ref{effneflg} the wall-crossing of $H_{\sigma_2}^r$ yields a birational contraction whose exceptional locus is the variety $Q^r(1)$ of double covers of a line in $LG(r,2r)$.
Next, we will construct the birational model of $\overline{M}_{0,0}(LG(r,2r),2)$ corresponding to the chamber delimited by $H_{\sigma_2}^r$ and $D_{unb}^r$. Let $H\subset\mathbb{P}^{2r-1}$ be an $(r+1)$-plane containing an isotropic $(r-1)$-plane $\Pi\subset\mathbb{P}^{2r-1}$. Then $\Pi = \Pi^{\perp}\supset H^{\perp}$. So $H^{\perp}\subset H$. Now, the $(r+1)$-planes containing their orthogonal are in bijection with the $(r-3)$-planes of $\mathbb{P}^{2r-1}$ that are isotropic. The variety parametrizing such $(r-3)$-planes is the symplectic Grassmannian $SG(r-2,2r)$. Let $\mathcal{U}_r$ be the universal bundle on $SG(r-2,2r)$, $\mathcal{U}_r^{\perp}\subset \mathcal{U}_r$ its orthogonal, and $\mathcal{Q}_r = \mathcal{U}_r/\mathcal{U}_r^{\perp}$ the quotient bundle. Then $\mathcal{Q}_r$ has rank four, and we may consider the relative Lagrangian Grassmannian $LG(2,\mathcal{Q}_r)\rightarrow SG(r-2,2r)$, and the relative Hilbert scheme $\Hilb_2(LG(2,\mathcal{Q}_r))\rightarrow SG(r-2,2r)$. Note that since $LG(2,4)$ does not contain planes the fibers of $\Hilb_2(LG(2,\mathcal{Q}_r))\rightarrow SG(r-2,2r)$ are isomorphic to $\mathbb{G}(2,4)$. Indeed, we can associate to a plane in $\mathbb{P}^4$ the conic it cuts out on $LG(2,4)$. Set $X_r := \Hilb_2(LG(2,\mathcal{Q}_r))\rightarrow SG(r-2,2r)$. Note that
$$\dim(X_r) = \dim(SG(r-2,2r))+6 = 2r^2-4r-\frac{3(r-2)^2-r+2}{2} + 6 = \frac{r^2+5r-2}{2} = \dim(\overline{M}_{0,0}(LG(r,2r),2))$$
and there is a birational transformation $\overline{M}_{0,0}(LG(r,2r),2)\dasharrow X_r$ inducing an isomorphism between the complement of $Q^r(1)$ in $\overline{M}_{0,0}(LG(r,2r),2)$ and the complement of the locus of double lines in $X_r$. Since $r> 2$ both these loci are in codimension greater than one. Furthermore, $H_{\sigma_2}^r$ induces a morphism on $X_r$ associating to a conic the reduced curve on which it is supported. Hence, this morphism is birational and contracts the locus of double lines. Finally $D_{unb}^r$ induces on $X_r$ the fibration $X_r \rightarrow SG(r-2,2r)$. Indeed, this fibration yields the rational fibration $\overline{M}_{0,0}(LG(r,2r),2)\dasharrow SG(r-2,2r)$ associating to a stable map that is not $2$-to-$1$ onto a line the orthogonal of the $(r+1)$-plane in $\mathbb{P}^{2r-1}$ generated by the $(r-1)$-planes parametrized by the image of the map. Hence, the cone generated by $H_{\sigma_2}^r$ and $D_{unb}^r$ is the nef cone of $X_r$.
Finally, the claim about the movable cones follows from Remark \ref{contr4} since $H_{\sigma_2}^2$ induces a divisorial contraction, while for $r > 2$ the divisor $H_{\sigma_2}^r$ yields a small contraction and $D_{unb}^r$ induces a non trivial fibration.
\end{proof}
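The dimension count in the proof above can be verified symbolically; the sketch below treats $r$ as a formal variable and checks the identity $\dim(SG(r-2,2r))+6 = \frac{r^2+5r-2}{2}$ as written in the proof.

```python
import sympy as sp

r = sp.symbols('r')
# dim SG(r-2, 2r) as written in the proof of the proposition
dim_SG = 2*r**2 - 4*r - (3*(r - 2)**2 - r + 2) / sp.Integer(2)
dim_Xr = dim_SG + 6
identity_ok = sp.simplify(dim_Xr - (r**2 + 5*r - 2) / sp.Integer(2)) == 0
```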
We now study the positivity of the anti-canonical divisor of $\overline{M}_{0,0}(LG(r,2r),2)$.
\begin{Proposition}\label{can}
Let $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ be the smooth Deligne-Mumford stack of degree two stable maps to $LG(r,2r)$, $\overline{H}_{\sigma_2}^r,\overline{T}^r,\overline{\Delta}^r,\overline{D}_{unb}^r$ the divisors on $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ corresponding to $H_{\sigma_2}^r,T^r,\Delta^r,D_{unb}^r$ respectively.
The anti-canonical divisor of the stack $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ is given by
$$-K_{\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)} = 5\overline{H}_{\sigma_2}^r +\frac{r-7}{2}\overline{D}_{unb}^r$$
for $r> 2$, while $-K_{\overline{\mathcal{M}}_{0,0}(LG(2,4),2)} = 5\overline{H}_{\sigma_2}^2 -5\overline{D}_{unb}^2$. Furthermore, the anti-canonical divisor of $\overline{M}_{0,0}(LG(r,2r),2)$ is given by
$$-K_{\overline{M}_{0,0}(LG(r,2r),2)} = 5H_{\sigma_2}^r +\frac{r-7}{2}D_{unb}^r$$
for $r>2$, while for $r = 2$ we have that
$$-K_{\overline{M}_{0,0}(LG(2,4),2)} = 5H_{\sigma_2}^2 -2 D_{unb}^2.$$
\end{Proposition}
\begin{proof}
We will compute the canonical divisor of $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ using the formula in \cite[Theorem 1.1]{dJS17}. Hence, we need the Chern classes $c_1(T_{LG(r,2r)}), c_2(T_{LG(r,2r)})$, where $T_{LG(r,2r)}$ is the tangent bundle of $LG(r,2r)$. Recall that $T_{LG(r,2r)}\cong \Sym^2(S^{\vee})$, where $S$ is the universal bundle.
Assume formally that $S^{\vee} = L_1\oplus\dots \oplus L_r$ splits as a direct sum of line bundles; by the splitting principle, computations carried out under this assumption remain valid in general. We then use Whitney's formula to compute the Chern classes of $\Sym^2(S^{\vee})$. Set $c_1(L_i) = \alpha_i$ for $i = 1,\dots, r$. Then
$$c(S^{\vee}) = \prod_{i = 1}^{r}(1+\alpha_i)$$
and hence
\begin{equation}\label{chern1}
c_1(S^{\vee}) = \alpha_1+\dots+\alpha_r,\quad c_2(S^{\vee}) = \alpha_1\alpha_2+\dots+\alpha_1\alpha_r+\alpha_2\alpha_3+\dots+\alpha_{r-1}\alpha_r.
\end{equation}
Furthermore
$$\Sym^2(S^{\vee}) = L_1^{\otimes 2}\oplus (L_1\otimes L_2)\oplus\dots\oplus (L_1\otimes L_r)\oplus L_2^{\otimes 2}\oplus\dots \oplus L_r^{\otimes 2}
$$
yields
$$
\begin{array}{ll}
c(\Sym^2(S^{\vee})) = & (1+2\alpha_1)(1+\alpha_1+\alpha_2)\dots (1+\alpha_1+\alpha_r)(1+2\alpha_2)\dots (1+2\alpha_r) =\\
& 1+(r+1)\sum_{i=1}^r\alpha_i+\frac{r^2+r-2}{2}\sum_{i=1}^{r}\alpha_i^2+(r^2+2r)(\alpha_1\alpha_2+\dots+\alpha_{r-1}\alpha_r)+\dots=\\
& 1 +(r+1)\sum_{i=1}^r\alpha_i + \frac{r^2+r-2}{2}(\sum_{i=1}^r\alpha_i)^2+(r+2)(\alpha_1\alpha_2+\dots+\alpha_{r-1}\alpha_r)+\dots=\\
& 1 +(r+1)c_1(S^{\vee}) + \frac{r^2+r-2}{2}c_1(S^{\vee})^2+(r+2)c_2(S^{\vee})+\dots
\end{array}
$$
where in the last equality we plugged in the formulas in (\ref{chern1}). Recall that $c_1(S^{\vee}) = \sigma_1^r$, $c_2(S^{\vee}) = \sigma_{2}^r$ and that by (\ref{relcomlg}) we have $(\sigma_1^r)^2 = 2\sigma_{2}^r$. Hence
$$c_1(T_{LG(r,2r)}) = (r+1)\sigma_1^r,\quad c_2(T_{LG(r,2r)}) = (r^2+2r)\sigma_{2}^r.$$
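The symmetric-function expansion behind these formulas can be checked with sympy for a small rank; the sketch below expands $\prod_i(1+2\alpha_i)\prod_{i<j}(1+\alpha_i+\alpha_j)$ for $r=3$ and compares the degree-one and degree-two parts with the claimed expressions.

```python
import sympy as sp

r = 3   # small rank for the check; the identity is polynomial in the alpha_i
a = sp.symbols(f'a0:{r}')
factors = [1 + 2 * ai for ai in a] + \
          [1 + a[i] + a[j] for i in range(r) for j in range(i + 1, r)]
total = sp.expand(sp.Mul(*factors))   # total Chern class c(Sym^2(S^dual))

def homogeneous_part(expr, d):
    # sum of the monomials of total degree d in the alpha_i
    poly = sp.Poly(expr, *a)
    return sp.Add(*[coeff * sp.Mul(*[x**e for x, e in zip(a, mono)])
                    for mono, coeff in poly.terms() if sum(mono) == d])

e1 = sum(a)                                                    # elementary symmetric e1
e2 = sum(a[i] * a[j] for i in range(r) for j in range(i + 1, r))  # e2
c1_ok = sp.expand(homogeneous_part(total, 1) - (r + 1) * e1) == 0
c2_ok = sp.expand(homogeneous_part(total, 2)
                  - sp.Rational(r**2 + r - 2, 2) * e1**2 - (r + 2) * e2) == 0
```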
Now, plugging these formulas into \cite[Theorem 1.1]{dJS17} we get
$$
K_{\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)} = -\frac{2r+6}{4}\overline{H}_{\sigma_2}^r+\frac{r-7}{4}\overline{\Delta}^r.
$$
Let $\pi:\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)\rightarrow\overline{M}_{0,0}(LG(r,2r),2)$ be the canonical morphism from $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ to its coarse moduli space. Note that $\pi:\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)\rightarrow\overline{M}_{0,0}(LG(r,2r),2)$ is an isomorphism in codimension one for all $r > 2$, while for $r=2$ it is ramified on the divisor $D_{unb}^2$. When $r = 2$ the stack has non trivial inertia along the divisor $\overline{D}_{unb}^2$ since a general stable map in $\overline{D}_{unb}^2$ has automorphism group $\mathbb{Z}/2\mathbb{Z}$. Taking this into account we get that $\pi^{*}D_{unb}^2 = 2\overline{D}_{unb}^2$, and hence Theorem \ref{mcd_lg} yields $\overline{\Delta}^r = 2\overline{H}_{\sigma_2}^r-2\overline{D}_{unb}^r$ if $r > 2$, and $\overline{\Delta}^2 = 2\overline{H}_{\sigma_2}^2-4\overline{D}_{unb}^2$. So, in terms of $\overline{H}_{\sigma_2}^r$ and $\overline{D}_{unb}^r$ the canonical divisor of the stack is given by
$$K_{\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)} = -5\overline{H}_{\sigma_2}^r-\frac{r-7}{2}\overline{D}_{unb}^r$$
if $r> 2$, and $K_{\overline{\mathcal{M}}_{0,0}(LG(2,4),2)} = -5\overline{H}_{\sigma_2}^2+5\overline{D}_{unb}^2$.
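The substitution of $\overline{\Delta}^r$ into the canonical divisor formula can be double-checked symbolically, treating the divisor classes as formal symbols:

```python
import sympy as sp

r, H, D = sp.symbols('r H D')   # H, D stand for the classes H_bar, D_unb_bar
q = sp.Rational(1, 4)

# r > 2: Delta_bar = 2H - 2D
K = -q * (2*r + 6) * H + q * (r - 7) * (2*H - 2*D)
general_ok = sp.simplify(K - (-5*H - sp.Rational(1, 2) * (r - 7) * D)) == 0

# r = 2: Delta_bar = 2H - 4D, since pi^* D_unb^2 = 2 D_unb_bar^2
K2 = (-q * (2*r + 6) * H + q * (r - 7) * (2*H - 4*D)).subs(r, 2)
r2_ok = sp.simplify(K2 - (-5*H + 5*D)) == 0
```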
Furthermore, when $r>2$ the formula above gives the expression of the canonical divisor of $\overline{M}_{0,0}(LG(r,2r),2)$ in the statement since $\overline{M}_{0,0}(LG(r,2r),2)$ and $\overline{\mathcal{M}}_{0,0}(LG(r,2r),2)$ are isomorphic in codimension one for $r > 2$.
However, when $r = 2$ we have that
$$K_{\overline{\mathcal{M}}_{0,0}(LG(2,4),2)} = \pi^{*}K_{\overline{M}_{0,0}(LG(2,4),2)}+\overline{D}_{unb}^2.$$
Let us write $K_{\overline{M}_{0,0}(LG(2,4),2)} = -5H_{\sigma_2}^2+a D_{unb}^2$. Recalling that $\pi^{*}D_{unb}^2 = 2\overline{D}_{unb}^2$ we get
$$-5\overline{H}_{\sigma_2}^2+5\overline{D}_{unb}^2 = K_{\overline{\mathcal{M}}_{0,0}(LG(2,4),2)} = \pi^{*}(-5H_{\sigma_2}^2+a D_{unb}^2)+\overline{D}_{unb}^2 = -5\overline{H}_{\sigma_2}^2+(2a+1)\overline{D}_{unb}^2.$$
Hence, $a = 2$ and $K_{\overline{M}_{0,0}(LG(2,4),2)} = -5H_{\sigma_2}^2+2D_{unb}^2$.
\end{proof}
\begin{Remark}
Since $\omega_{\mathbb{G}(1,4)} = \mathcal{O}_{\mathbb{G}(1,4)}(-5)$ and $\codim_{\mathbb{G}(1,4)}(\mathcal{V}_2^3)=3$ the formula $K_{\overline{M}_{0,0}(LG(2,4),2)} = -5H_{\sigma_2}^2+2D_{unb}^2$ can also be deduced from the description of $\overline{M}_{0,0}(LG(2,4),2)$ as the blow-up of $\mathbb{G}(1,4)$ along $\mathcal{V}_2^3$ in Proposition \ref{isor2}.
\end{Remark}
\begin{Corollary}\label{Fano}
The moduli space $\overline{M}_{0,0}(LG(r,2r),2)$ is Fano for $2\leq r\leq 6$, weak Fano for $r = 7$, and $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ is not ample for $r\geq 8$.
\end{Corollary}
\begin{proof}
By Theorem \ref{mcd_lg} and Proposition \ref{can} we have that $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ is a multiple of $H_{\sigma_2}^r$ if $r = 7$. Furthermore, $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ lies in the interior of $\Nef(\overline{M}_{0,0}(LG(r,2r),2))$ for $2\leq r\leq 6$, while for $r\geq 8$ we have that $-K_{\overline{M}_{0,0}(LG(r,2r),2)}$ lies in the interior of the cone generated by $H_{\sigma_2}^r$ and $D_{unb}^r$.
\end{proof}
Finally, the following result on automorphisms of $\overline{M}_{0,0}(LG(2,4),2)$ is at hand.
\begin{Corollary}\label{aut}
The automorphism group of $\overline{M}_{0,0}(LG(2,4),2)$ is given by
$$\operatorname{PsAut}(\overline{M}_{0,0}(LG(2,4),2))\cong \operatorname{Aut}(\overline{M}_{0,0}(LG(2,4),2)) \cong PSp(4)$$
where $PSp(4)$ is the projective symplectic group, and $\operatorname{PsAut}(\overline{M}_{0,0}(LG(2,4),2))$ is the group of birational self-maps of $\overline{M}_{0,0}(LG(2,4),2)$ inducing automorphisms in codimension one.
\end{Corollary}
\begin{proof}
By Propositions \ref{x4g14} and \ref{isor2} we have that $\overline{M}_{0,0}(LG(2,4),2)$ is isomorphic to the blow-up of $\mathbb{G}(1,4)$ along the Veronese $\mathcal{V}_2^3$. Let $\phi\in \operatorname{Aut}(\overline{M}_{0,0}(LG(2,4),2))$ be an automorphism. Then either $\phi$ preserves the two extremal rays of $\Eff(\overline{M}_{0,0}(LG(2,4),2))$ in Theorem \ref{mcd_lg} or it swaps them. In the second case $\phi$ would also have to swap the extremal rays of $\Nef(\overline{M}_{0,0}(LG(2,4),2))$, but this is not possible since for instance $T^2$ has more sections than $H^2_{\sigma_2}$. Therefore, $\phi$ stabilizes the exceptional divisor $D_{unb}^2$ of the blow-up and hence induces an automorphism $\overline{\phi}$ of $\mathbb{G}(1,4)$ that stabilizes $\mathcal{V}_2^3$.
Now, the automorphism group of $\mathbb{G}(1,4)$ is isomorphic to $PGL(5)$ and all these automorphisms are induced by automorphisms of the ambient projective space $\mathbb{P}^9$ \cite[Theorem 1.1]{Co89}. The restriction of $\overline{\phi}$ to $\mathcal{V}_2^3$ yields an automorphism $\overline{\phi}_{|\mathcal{V}_2^3}$ of $\mathbb{P}^3$. Since $\overline{\phi}$ is an automorphism of $\mathbb{G}(1,4)$, which we interpret as the closure of the space of symplectic and symmetric matrices modulo scalars, the restriction $\overline{\phi}_{|\mathcal{V}_2^3}\in PGL(4)$ must map symplectic matrices to symplectic matrices. Hence, $\overline{\phi}_{|\mathcal{V}_2^3}\in PSp(4)$. So, we get a group homomorphism
$$
\begin{array}{ccll}
\chi :& \operatorname{Aut}(\overline{M}_{0,0}(LG(2,4),2)) & \rightarrow & PSp(4)\\
& \phi & \mapsto & \overline{\phi}_{|\mathcal{V}_2^3}
\end{array}
$$
which is surjective. Now, if $\overline{\phi}_{|\mathcal{V}_2^3}$ is the identity then it must be the restriction of the identity automorphism of the ambient projective space $\mathbb{P}^9$ in which both $\mathcal{V}_2^3$ and $\mathbb{G}(1,4)$ are embedded, so $\overline{\phi}$ is the identity of $\mathbb{G}(1,4)$. Since $\mathbb{G}(1,4)$ and $\overline{M}_{0,0}(LG(2,4),2)$ are birational, we get that $\phi$ must be the identity of $\operatorname{Aut}(\overline{M}_{0,0}(LG(2,4),2))$, and hence $\chi$ is an isomorphism. Finally, since by Proposition \ref{isor2} and Corollary \ref{Fano} $\overline{M}_{0,0}(LG(2,4),2)$ is a smooth Fano variety, the result on $\operatorname{PsAut}(\overline{M}_{0,0}(LG(2,4),2))$ follows from \cite[Proposition 7.2]{Ma18a}.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
We say that a 0-1 matrix $A$ \emph{contains} the 0-1 matrix $P$ if some submatrix of $A$ is either equal to $P$ or can be turned into $P$ by changing some ones to zeroes. Otherwise, we say that $A$ \emph{avoids} $P$. The extremal function $\exm(n, P)$ is defined as the maximum number of ones in an $n \times n$ 0-1 matrix that avoids $P$. This extremal function has been applied to a wide range of problems including the proof of the Stanley-Wilf conjecture \cite{mt}, the best known bounds on the maximum number of unit distances in a convex $n$-gon \cite{furedi}, a bound on the complexity of an algorithm for finding a minimal path in a rectilinear grid with obstacles \cite{mitchell}, and bounds on extremal functions of forbidden sequences \cite{pettie}.
There is a long history of the study of extremal functions of forbidden 0-1 matrices, see e.g. \cite{ts, kst, bg, pt, h, cm, gst}. Marcus and Tardos \cite{mt} proved that $\exm(n, P) = O(n)$ for every permutation matrix $P$, and used this fact to resolve the F\"{u}redi-Hajnal conjecture \cite{fh}, which also resolved the Stanley-Wilf conjecture using an earlier result of Klazar \cite{k}. In a different paper, Tardos finished bounding the extremal functions of all forbidden 0-1 matrices with at most four ones up to a constant factor \cite{T}. Many connections have been found between extremal functions of forbidden 0-1 matrices and the maximum possible lengths of generalized Davenport-Schinzel sequences \cite{ck0, cds, ff0, N, pettieds, wp}.
Besides permutation matrices, other families of forbidden 0-1 matrices with linear extremal functions have been found including 0-1 matrices obtained by doubling permutation matrices \cite{g}, 0-1 matrices corresponding to visibility graphs \cite{fulek, gs}, and 0-1 matrices obtained by performing operations on smaller 0-1 matrices with linear extremal functions \cite{keszegh, pettie}. After the result of Marcus and Tardos, Fox \cite{fox} proved that almost all $k \times k$ permutation matrices have $\exm(n, P) = 2^{\Omega(\sqrt{k})}n$. Fox also showed that $\exm(n, P) = 2^{O(k)}n$ for every $k \times k$ permutation matrix $P$, and these results for permutation matrices were also generalized to extremal functions of forbidden $d$-dimensional permutation matrices \cite{km, gt, ck}.
Saturation problems have been studied for graphs \cite{dkm, ehm, fk0, kt}, posets \cite{fkk, klm}, and set systems \cite{fkk0, gkl}. Recently Brualdi and Cao initiated the study of a saturation problem for 0-1 matrices \cite{bc}, which is different from the saturation problem for 0-1 matrices defined in \cite{dpt}. We say that a 0-1 matrix $A$ is \emph{saturating} for $P$ if $A$ avoids $P$ but changing any zero to a one in $A$ creates a copy of $P$. Define the saturation function $\sat(n, P)$ to be the minimum possible number of ones in an $n \times n$ 0-1 matrix that is saturating for $P$. Brualdi and Cao proved for $n \geq k$ that $\exm(n, P) = \sat(n, P)$ for all $k \times k$ identity matrices $P$.
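The definitions of containment and saturation can be made concrete with a brute-force sketch (the function names are ours, and the search is exponential-time, so it is only usable for very small matrices). As an illustration we check that the $n \times n$ matrix with ones exactly in the first row and first column is saturating for the $2 \times 2$ identity matrix.

```python
from itertools import combinations

def contains(A, P):
    """True if the 0-1 matrix A contains P, i.e. some submatrix of A
    has a one wherever P does (brute force over row/column subsets)."""
    n, m, k, l = len(A), len(A[0]), len(P), len(P[0])
    for rows in combinations(range(n), k):
        for cols in combinations(range(m), l):
            if all(A[rows[i]][cols[j]] >= P[i][j]
                   for i in range(k) for j in range(l)):
                return True
    return False

def is_saturating(A, P):
    """A avoids P, but flipping any zero of A to a one creates a copy of P."""
    if contains(A, P):
        return False
    for i in range(len(A)):
        for j in range(len(A[0])):
            if A[i][j] == 0:
                A[i][j] = 1
                created = contains(A, P)
                A[i][j] = 0
                if not created:
                    return False
    return True

I2 = [[1, 0], [0, 1]]
n = 4
# Ones in the first row and first column: this avoids I2, and any added one
# forms a copy of I2 together with the entry at position (0, 0).
A = [[1 if i == 0 or j == 0 else 0 for j in range(n)] for i in range(n)]
sat_ok = is_saturating(A, I2)
```

This particular $A$ has $2n-1$ ones; whether that count is minimal is exactly the question the saturation function asks.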
Fulek and Keszegh \cite{fk} found a general upper bound on the saturation function in terms of the dimensions of $P$, and proved for every 0-1 matrix $P$ that either $\sat(n, P) = O(1)$ or $\sat(n, P) = \Theta(n)$. They found two 0-1 matrices $P$ for which $\sat(n, P) = O(1)$, the $1 \times 1$ identity matrix $I_1$ and the $5 \times 5$ permutation matrix $Q = \begin{bmatrix}
0 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0 & 0
\end{bmatrix}$. Note that any matrix $P$ that can be obtained from $Q$ by any sequence of vertical/horizontal reflections or $90$ degree rotations must also have $\sat(n, P) = O(1)$, since vertical/horizontal reflections and $90$ degree rotations do not change $\sat(n, P)$ or $\exm(n, P)$. Fulek and Keszegh also found infinite classes of 0-1 matrices $P$ for which $\sat(n, P) = \Theta(n)$, including 0-1 matrices $P$ in which every column has at least two ones, or every row has at least two ones. This result implies that $\sat(n, P) = \Theta(n)$ for almost all $k \times k$ 0-1 matrices $P$, since almost all $k \times k$ 0-1 matrices $P$ have at least two ones in every row and at least two ones in every column.
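The reflections and rotations invoked here generate the dihedral group of order eight; a small numpy sketch (the function name is ours) enumerates the orbit of the matrix $Q$ above under these symmetries.

```python
import numpy as np

Q = np.array([[0, 0, 0, 1, 0],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 1, 0, 0, 0]])

def symmetry_orbit(P):
    """All matrices obtainable from P by vertical/horizontal reflections
    and 90-degree rotations (the dihedral group acting on matrices)."""
    orbit = set()
    for X in (P, np.fliplr(P)):
        for _ in range(4):
            X = np.rot90(X)
            orbit.add(tuple(map(tuple, X.tolist())))
    return orbit

orbit = symmetry_orbit(Q)
# every symmetry of a permutation matrix is again a permutation matrix
all_permutations = all(
    np.array(M).sum(axis=0).tolist() == [1] * 5
    and np.array(M).sum(axis=1).tolist() == [1] * 5
    for M in orbit)
```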
Fulek and Keszegh conjectured that there are many more 0-1 matrices $P$ such that $\sat(n, P) = O(1)$ besides $I_1$ and $Q$. They asked for a characterization of all permutation matrices $P$ for which $\sat(n, P) = O(1)$. We affirm Fulek and Keszegh's conjecture by proving that almost all $k \times k$ permutation matrices $P$ have $\sat(n, P) = O(1)$. In order to prove this result, we define a family of permutation matrices $P$ for which $\sat(n, P) = O(1)$, and then we prove that almost all $k \times k$ permutation matrices are in this family. We say that a permutation matrix $Q$ is \emph{ordinary} if no matrix that can be obtained from $Q$ by any sequence of vertical/horizontal reflections or $90$ degree rotations is in any of the following classes. We refer to the classes below as Class 1, Class 2, Class 3, and Class 4. We say that $Q$ \emph{reduces} to Class $i$ if there is a sequence of vertical/horizontal reflections or $90$ degree rotations from $Q$ to an element of Class $i$. So, a permutation matrix $Q$ is ordinary if $Q$ does not reduce to any of the following classes.
\begin{enumerate}
\item Class 1 consists of permutation matrices $P$ for which the rows can be split into two nonempty sets $R_1$, $R_2$ of contiguous rows (where the rows in $R_1$ precede the rows in $R_2$) and the columns can be split into two nonempty sets $C_1$, $C_2$ of contiguous columns (where the columns in $C_1$ precede the columns in $C_2$) such that $P$ has ones in the submatrices restricted to $R_1 \times C_1$ and $R_2 \times C_2$, and $P$ only has ones in those submatrices except for at most $2$ ones in the submatrix restricted to $R_1 \times C_2$. If $P$ has two ones in the submatrix restricted to $R_1 \times C_2$, then they are in adjacent rows.
\item Class 2 consists of permutation matrices $P$ for which the rows can be split into three nonempty sets $R_1$, $R_2$, $R_3$ of contiguous rows (where the rows in $R_1$ precede the rows in $R_2$, which precede the rows in $R_3$) and the columns can be split into two nonempty sets $C_1$, $C_2$ of contiguous columns (where the columns in $C_1$ precede the columns in $C_2$) such that $P$ has ones in the submatrices restricted to $R_1 \times C_2$ and $R_3 \times C_2$, $P$ has at least two ones in the submatrix restricted to $R_2 \times C_1$, and $P$ only has ones in those submatrices except for at most $2$ ones in the submatrix restricted to $R_2 \times C_2$. If $P$ has two ones in the submatrix restricted to $R_2 \times C_2$, then they are in adjacent rows.
\item Class 3 consists of permutation matrices $P$ which can be obtained by starting with an element $X$ of Class $2$, taking the row $r$ of $X$ containing the rightmost one in the submatrix restricted to $R_2 \times C_1$, deleting $r$ from its current position, and adding $r$ in front of the first row of $X$.
\item Class 4 consists of permutation matrices $P$ for which the rows can be split into three nonempty sets $R_1$, $R_2$, $R_3$ of contiguous rows (where the rows in $R_1$ precede the rows in $R_2$, which precede the rows in $R_3$) and the columns can be split into three nonempty sets $C_1$, $C_2$, $C_3$ of contiguous columns (where the columns in $C_1$ precede the columns in $C_2$, which precede the columns in $C_3$) such that $P$ has ones in the submatrices restricted to $R_1 \times C_2$, $R_2 \times C_1$, $R_2 \times C_3$, and $R_3 \times C_2$, and only in those submatrices.
\end{enumerate}
There is clearly some overlap between the classes. For example, in the definition of Class 3, when we construct a permutation matrix $P$ in Class 3 from a permutation matrix $X$ in Class 2, the only instances when $P$ will not be an element of Class 2 are when $X$ has exactly two ones in the submatrix restricted to $R_2 \times C_1$.
In the next section, we prove that every ordinary permutation matrix has bounded saturation function and almost all $k \times k$ permutation matrices are ordinary. In the final section, we discuss some possible directions for future research, including saturation functions of forbidden $d$-dimensional 0-1 matrices.
\section{New results}
In this section, we will prove the main result below.
\begin{thm}\label{mainth}
Almost all $k \times k$ permutation matrices $P$ have $\sat(n, P) = O(1)$.
\end{thm}
In order to prove the main result, we first show that all ordinary permutation matrices have bounded saturation functions, and then we prove that almost all $k \times k$ permutation matrices are ordinary.
\begin{thm}
Every ordinary permutation matrix $P$ has $\sat(n, P) = O(1)$.
\end{thm}
\begin{proof}
For every permutation matrix $P$, we define a 0-1 matrix $T_P$ with an odd number of rows and an odd number of columns such that the middle row and middle column of $T_P$ have all zero entries. We will prove that $T_P$ avoids $P$ for all ordinary permutation matrices $P$. We construct $T_P$ so that changing a zero to a one in the middle row or middle column of $T_P$ creates a copy of $P$. In order to obtain a 0-1 matrix $T'_P$ from $T_P$ that is saturating for $P$, change zeroes in $T_P$ to ones one at a time without creating a copy of $P$ until it is impossible to change any more zeroes without creating a copy of $P$. By definition, the resulting matrix $T'_P$ must be saturating for $P$. Note that any zeroes in $T_P$ that we change to ones to make $T'_P$ must be off the middle row and off the middle column, or else it will create a copy of $P$.
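The two steps in this paragraph, checking whether a 0-1 matrix contains a copy of a permutation pattern and greedily flipping zeroes to ones until no further zero can be flipped, can be sketched in code. The following Python sketch is our own illustration (not code from the paper), with hypothetical helper names `contains` and `saturate`; it is brute force and only usable for very small matrices.

```python
from itertools import combinations

def contains(A, perm):
    """Does the 0-1 matrix A contain the permutation matrix whose one in
    row i lies in column perm[i]?  Brute force over k-subsets of ones."""
    k = len(perm)
    ones = [(r, c) for r, row in enumerate(A) for c, v in enumerate(row) if v]
    for cand in combinations(ones, k):  # ones are in row-major order
        rows = [r for r, _ in cand]
        cols = [c for _, c in cand]
        if len(set(rows)) < k or len(set(cols)) < k:
            continue  # a copy needs k distinct rows and k distinct columns
        # relative column order of the chosen ones must match the permutation
        if all((cols[i] < cols[j]) == (perm[i] < perm[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def saturate(A, perm):
    """Flip zeroes to ones, one at a time, whenever doing so does not
    create a copy of the pattern; the result is saturating for perm."""
    n = len(A)
    changed = True
    while changed:
        changed = False
        for r in range(n):
            for c in range(n):
                if A[r][c] == 0:
                    A[r][c] = 1
                    if contains(A, perm):
                        A[r][c] = 0  # flipping this zero would create a copy
                    else:
                        changed = True
    return A
```

For example, `saturate([[0]*3 for _ in range(3)], [0, 1])` produces a matrix that avoids the $2 \times 2$ identity pattern but such that flipping any remaining zero creates a copy of it.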
In order to construct $T_P$, we define four submatrices of $P$: $P_N$ is obtained by deleting the row and column containing the bottom one in $P$, $P_S$ is obtained by deleting the row and column containing the top one in $P$, $P_E$ is obtained by deleting the row and column containing the leftmost one in $P$, and $P_W$ is obtained by deleting the row and column containing the rightmost one in $P$.
We start with $T_P$ as an $n \times n$ matrix of all zeroes with $n = 6k+1$. The exact value of $n$ in this proof is not important, just the fact that it is sufficiently large with respect to $k$. We place a copy of $P$ in the top $k$ rows of $T_P$, with the bottom one of this copy of $P$ in the middle column of $T_P$ and all columns in the copy adjacent, and then we delete the bottom one of $P$ in the copy to leave a copy of $P_N$ in the top $k-1$ rows of $T_P$. We place another copy of $P$ in the leftmost $k$ columns of $T_P$, with the rightmost one of this copy of $P$ in the middle row of $T_P$ and all rows in the copy adjacent, and then we delete the rightmost one of $P$ in the copy to leave a copy of $P_W$ in the leftmost $k-1$ columns of $T_P$.
We place another copy of $P$ in the bottom $k$ rows of $T_P$, with the top one of this copy of $P$ in the middle column of $T_P$. For any columns of $P$ adjacent to the column containing the top one, we put those columns in the bottom $k$ rows adjacent to the middle column of $T_P$. However, any columns of $P$ to the left of these columns go in the bottom $k$ rows directly to the left of the columns that contain the copy of $P_N$ in $T_P$, and any columns of $P$ to the right of these columns go in the bottom $k$ rows directly to the right of the columns that contain the copy of $P_N$ in $T_P$. After placing this copy of $P$ in the bottom $k$ rows of $T_P$, we delete the top one of $P$ in the copy to leave a copy of $P_S$ in the bottom $k-1$ rows of $T_P$. Note that if the top and bottom ones in $P$ are not the leftmost or rightmost ones in $P$, then the union of the columns containing $P_N$ and $P_S$ will consist of two sets of contiguous columns, separated by only the middle column of $T_P$, and the intersection of the columns containing $P_N$ and $P_S$ will consist only of the two columns that are adjacent to the middle column of $T_P$.
We place another copy of $P$ in the rightmost $k$ columns of $T_P$, with the leftmost one of $P$ in the middle row of $T_P$. For any rows of $P$ adjacent to the row containing the leftmost one, we put those rows in the rightmost $k$ columns adjacent to the middle row of $T_P$. However, any rows of $P$ above these rows go in the rightmost $k$ columns directly above the rows that contain the copy of $P_W$ in $T_P$, and any rows in $P$ under these rows go in the rightmost $k$ columns directly under the rows that contain the copy of $P_W$ in $T_P$. After placing this copy of $P$ in the rightmost $k$ columns of $T_P$, we delete the leftmost one of $P$ in the copy to leave a copy of $P_E$ in the rightmost $k-1$ columns of $T_P$. Note that if the leftmost and rightmost ones in $P$ are not the top or bottom ones in $P$, then the union of the rows containing $P_W$ and $P_E$ will consist of two sets of contiguous rows, separated by only the middle row of $T_P$, and the intersection of the rows containing $P_W$ and $P_E$ will consist only of the two rows that are adjacent to the middle row of $T_P$. We call the copies of $P_N, P_S, P_E$, and $P_W$ the \emph{sections} of $T_P$. This completes the construction of $T_P$, and Figure \ref{fig:tq} shows $T_Q$ and $T_R$ for $Q =
\begin{bmatrix}
0 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0 & 0
\end{bmatrix}$ and $R =
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0
\end{bmatrix}$.
\begin{figure}
\begin{subfigure}[b]{0.3\textwidth}
\tiny
\[
\begin{bmatrix*}[r]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0\\
\textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}\\
0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix*}
\]
\caption{$T_Q$}
\label{fig:y equals x}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\tiny
\[
\begin{bmatrix*}[r]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0\\
\textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}& \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0}\\
0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0\\
0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & 0 & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{1} & 0 & 0 & 0 & 0 & \textbf{0} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \textbf{0} & \textbf{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix*}
\]
\caption{$T_R$}
\label{fig:five over x}
\end{subfigure}
\caption{$T_Q$ and $T_R$, with middle rows and middle columns and all ones in bold}
\label{fig:tq}
\end{figure}
\normalsize
We need to prove that $T_P$ avoids $P$ whenever $P$ is ordinary. To this end, suppose that $T_P$ contains a copy of $P$; we will show that $P$ is not ordinary. It is impossible for the copy of $P$ to be fully contained in a single section of $T_P$, since each section has too few ones. Thus the copy of $P$ must be contained in at least two sections of $T_P$. If the copy of $P$ is contained in exactly two sections of $T_P$, then there are six possibilities. If the copy of $P$ is contained in $P_N$ and $P_E$, $P_E$ and $P_S$, $P_S$ and $P_W$, or $P_W$ and $P_N$, then $P$ reduces to Class 1, so $P$ is not ordinary.
Suppose that the copy of $P$ is contained in $P_W$ and $P_E$ (the proof is analogous for $P_N$ and $P_S$). If the leftmost or rightmost one in $P$ is also the top or bottom one in $P$, then $P$ reduces to Class 1, so $P$ is not ordinary. So, we may assume that the leftmost and rightmost ones in $P$ are not the top or bottom ones in $P$. If the copy of $P$ had only a single one in $P_E$ (resp. $P_W$), then it would have to use all of the ones in $P_W$ (resp. $P_E$); but since the leftmost and rightmost ones in $P$ are not the top or bottom ones in $P$, the leftmost or rightmost one would be out of position, given how we constructed $T_P$. So there is no copy of $P$ in $P_W$ and $P_E$ with only a single one in $P_E$ or $P_W$. It remains to consider the cases when the copy of $P$ in $P_W$ and $P_E$ has multiple ones in both $P_W$ and $P_E$. If the copy of $P$ has ones in $P_E$ above the rows that contain $P_W$ and below the rows that contain $P_W$, then $P$ is a Class 2 permutation matrix, so $P$ is not ordinary. If the copy of $P$ has ones in $P_E$ above the rows that contain $P_W$, but not below the rows that contain $P_W$, then $P$ reduces to Class 1, so $P$ is not ordinary. If the copy of $P$ has ones in $P_E$ below the rows that contain $P_W$, but not above the rows that contain $P_W$, then $P$ is in Class 1, so $P$ is not ordinary. Finally, if the copy of $P$ had ones in $P_E$ only in the rows that contain $P_W$, then the copy of $P$ would only have the two ones in $P_E$ that are adjacent to the middle row of $T_P$, so $P$ would reduce to Class 1 or Class 2. However, two of the ones in $P_W$ are adjacent to the middle row of $T_P$, so there are not enough remaining ones in $P_W$ to form a copy of $P$. So, there is no copy of $P$ in $P_W$ and $P_E$ with ones only in the rows that contain $P_W$.
If the copy of $P$ is contained in exactly $3$ sections (which will always have two opposite sections and a \emph{middle} section between them), then $P$ reduces to a Class 2 permutation matrix unless the copy of $P$ only has a single one in the middle section. Suppose that the copy of $P$ only has a single one in the middle section. Let $P'$ be the matrix obtained from $P$ by removing the row and column containing the one in the middle section. Suppose that the copy of $P'$ is on $P_W$ and $P_E$ (the proof is analogous for $P_N$ and $P_S$). If the copy of $P'$ only has a single one in $P_E$ or $P_W$, then $P$ must reduce to Class 1 or Class 3. Now suppose the copy of $P'$ has multiple ones in both $P_W$ and $P_E$. If the copy of $P'$ has ones in $P_E$ above the rows that contain $P_W$ and below the rows that contain $P_W$, then $P'$ and $P$ are in Class 2, so $P$ is not ordinary. If the copy of $P'$ has ones in $P_E$ above the rows that contain $P_W$, but not below the rows that contain $P_W$, then $P'$ and $P$ reduce to Class 1, so $P$ is not ordinary. If the copy of $P'$ has ones in $P_E$ below the rows that contain $P_W$, but not above the rows that contain $P_W$, then $P'$ and $P$ are in Class 1, so $P$ is not ordinary. If the copy of $P'$ has ones in $P_E$ only in the rows that contain $P_W$, then the copy of $P'$ only has the two ones in $P_E$ that are adjacent to the middle row of $T_P$, so both $P'$ and $P$ reduce to Class 1 or Class 2.
Finally if the copy of $P$ has ones in all four sections, then $P$ is a Class 4 permutation matrix. Thus, in all possible cases $P$ is not ordinary.
In order to see that $\sat(n, P) = O(1)$ for every ordinary permutation matrix $P$, observe that changing a zero to a one in the middle row or the middle column of $T'_P$ will create a copy of $P$. For any integer $j > 0$, we can insert $j$ all-zero rows at the middle row of $T'_P$ and $j$ all-zero columns at the middle column of $T'_P$; the resulting 0-1 matrix is still saturating for $P$.
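The padding step can be made concrete. The sketch below is our own illustration (the helper name `pad_middle` is hypothetical); it assumes an odd-sized square matrix whose middle row and column have index `n // 2`, and it inserts $j$ all-zero rows and columns at the middle, leaving the number of ones unchanged.

```python
def pad_middle(T, j):
    """Insert j all-zero rows after the middle row and j all-zero
    columns after the middle column of the square 0-1 matrix T."""
    n = len(T)
    mid = n // 2  # middle index of an odd-sized matrix
    rows = [row[:] for row in T[:mid + 1]]
    rows += [[0] * n for _ in range(j)]          # j new zero rows
    rows += [row[:] for row in T[mid + 1:]]
    # insert j zero columns after the middle column of every row
    return [row[:mid + 1] + [0] * j + row[mid + 1:] for row in rows]
```

Since the inserted rows and columns contain only zeroes, the padded matrix has the same number of ones as $T'_P$ for every $j$, which is why the saturation function stays bounded as $n$ grows.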
\end{proof}
The next lemma completes the proof of Theorem \ref{mainth}.
\begin{lem}\label{almostall}
Almost all $k \times k$ permutation matrices $P$ are ordinary.
\end{lem}
\begin{proof}
It suffices to prove that the number of permutation matrices in Classes 1, 2, 3, and 4 is $o(k!)$. For Class 1, we split the bound into three parts. First, consider Class 1 permutation matrices with no ones in the submatrix restricted to $R_1 \times C_2$. The number of these matrices with $|R_1| = j$ is $j! (k-j)!$ since there are $j!$ possibilities for the entries in $R_1 \times C_1$ and $(k-j)!$ possibilities for the entries in $R_2 \times C_2$. When $j = 1$ or $j = k-1$, $j!(k-j)! = (k-1)! = o(k!)$. When $2 \le j \le k-2$, $j!(k-j)! = O(\frac{k!}{k^2})$, so $\sum_{j = 2}^{k-2} j!(k-j)! = o(k!)$. Thus the total number of permutation matrices in Class 1 with no ones in the submatrix restricted to $R_1 \times C_2$ is $o(k!)$.
Next, consider Class 1 permutation matrices with a single one in the submatrix restricted to $R_1 \times C_2$. The number of these matrices with $|R_1| = j+1$ is at most $(j+1) j! (k-j)!$. We must have $j \le k-2$ since $j+1 < k$. When $j = 1$ or $j = k-2$, $(j+1)j!(k-j)! = o(k!)$. When $2 \le j \le k-3$, $(j+1)j!(k-j)! = O(\frac{k!}{k^2})$, so $\sum_{j = 2}^{k-3} (j+1)j!(k-j)! = o(k!)$. Thus the total number of permutation matrices in Class 1 with a single one in the submatrix restricted to $R_1 \times C_2$ is $o(k!)$.
Next, consider Class 1 permutation matrices with two ones in the submatrix restricted to $R_1 \times C_2$. The number of these matrices with $|R_1| = j+2$ is at most $(j+1) j! (k-j)!$. We must have $j \le k-3$ since $j+2 < k$. When $j = 1$, $(j+1)j!(k-j)! = o(k!)$. When $2 \le j \le k-3$, $(j+1)j!(k-j)! = O(\frac{k!}{k^2})$, so $\sum_{j = 2}^{k-3} (j+1)j!(k-j)! = o(k!)$. Thus the total number of permutation matrices in Class 1 with two ones in the submatrix restricted to $R_1 \times C_2$ is $o(k!)$.
For Class 2, we again split the bound into $3$ parts. First we bound the number of permutation matrices in Class 2 with no ones in the submatrix restricted to $R_2 \times C_2$. When $|R_1| = i$ and $|R_2| = j$, there are at most $j!(k-j)!$ permutation matrices in Class 2 with no ones in the submatrix restricted to $R_2 \times C_2$. By definition of Class 2, we must have $2 \le j \le k-2$. When $j = 2$ or $j = k-2$, $j!(k-j)! = 2 (k-2)! = O(\frac{k!}{k^2})$, so $k j!(k-j)! = o(k!)$, covering all possible $i$ for $j = 2$ and $j = k-2$ since $i < k$. When $3 \le j \le k-3$, $j!(k-j)! = O(\frac{k!}{k^3})$, so $\sum_{j = 3}^{k-3} j!(k-j)! = O(\frac{k!}{k^2})$ and $k \sum_{j = 3}^{k-3} j!(k-j)! = o(k!)$, covering all possible $i$ for $3 \le j \le k-3$. Thus the total number of permutation matrices in Class 2 with no ones in the submatrix restricted to $R_2 \times C_2$ is $o(k!)$.
Next we bound the number of permutation matrices in Class 2 with a single one in the submatrix restricted to $R_2 \times C_2$. When $|R_1| = i$ and $|R_2| = j+1$, there are at most $(j+1) j!(k-j)!$ permutation matrices in Class 2 with a single one in the submatrix restricted to $R_2 \times C_2$. By definition of Class 2, we must have $j \ge 2$. When $j = 2$, $(j+1) j!(k-j)! = O(\frac{k!}{k^2})$, so $k (j+1) j!(k-j)! = o(k!)$, which covers all possible $i$ for $j = 2$. We cannot have $j = k-2$, since $i > 0$ and $i + (j+1) < k$. When $j = k-3$, $(j+1) j!(k-j)! = o(k!)$ and $i = 1$ since $i > 0$ and $i + (j+1) < k$. When $3 \le j \le k-4$, $(j+1)j!(k-j)! = O(\frac{k!}{k^3})$, so $\sum_{j = 3}^{k-4} (j+1)j!(k-j)! = O(\frac{k!}{k^2})$ and $k \sum_{j = 3}^{k-4} (j+1)j!(k-j)! = o(k!)$, which covers all possible $i$ for $3 \le j \le k-4$. Thus the total number of permutation matrices in Class 2 with a single one in the submatrix restricted to $R_2 \times C_2$ is $o(k!)$.
Next we bound the number of permutation matrices in Class 2 with two ones in the submatrix restricted to $R_2 \times C_2$. When $|R_1| = i$ and $|R_2| = j+2$, there are at most $(j+1) j!(k-j)!$ permutation matrices in Class 2 with two ones in the submatrix restricted to $R_2 \times C_2$. By definition of Class 2, we must have $j \ge 2$. When $j = 2$, $(j+1) j!(k-j)! = O(\frac{k!}{k^2})$, so $k (j+1) j!(k-j)! = o(k!)$, which covers all possible $i$ for $j = 2$. We cannot have $j = k-2$ or $j = k-3$, since $i > 0$ and $i+(j+2) < k$. When $3 \le j \le k-4$, $(j+1) j!(k-j)! = O(\frac{k!}{k^3})$, so $\sum_{j = 3}^{k-4} (j+1) j!(k-j)! = O(\frac{k!}{k^2})$ and $k \sum_{j = 3}^{k-4} (j+1) j!(k-j)! = o(k!)$, covering all possible $i$ for $3 \le j \le k-4$. Thus the total number of permutation matrices in Class 2 with two ones in the submatrix restricted to $R_2 \times C_2$ is $o(k!)$.
Note that each permutation matrix in Class 3 is created from at least one permutation matrix in Class 2, and each permutation matrix in Class 2 creates only one permutation matrix in Class 3, so the number of permutation matrices in Class 3 is $o(k!)$.
Finally we bound the number of permutation matrices in Class 4. When $|R_2| = j$ for a permutation matrix in Class 4, we must have $|C_2| = k-j$, so the number of permutation matrices in Class 4 with $|R_1| = i$, $|R_2| = j$, and $|C_1| = r$ is at most $j!(k-j)!$. We must have $2 \le |R_2| \le k-2$ for a permutation matrix in Class 4, since both the submatrix restricted to $R_2 \times C_1$ and the submatrix restricted to $R_2 \times C_3$ must have ones, as must the submatrices restricted to $R_1 \times C_2$ and $R_3 \times C_2$. If $j = 2$, then $|C_2| = k-2$, so $r = 1$ and the number of permutation matrices in Class 4 with $|R_2| = 2$ is at most $k 2!(k-2)! = o(k!)$, covering all possible $i$ for $j = 2$. If $j = k-2$, then $i = 1$, so the number of permutation matrices in Class 4 with $|R_2| = k-2$ is at most $k 2!(k-2)! = o(k!)$, covering all possible $r$ for $j = k-2$. If $j = 3$, then $|C_2| = k-3$, so $r = 1$ or $r = 2$ and the number of permutation matrices in Class 4 with $|R_2| = 3$ is at most $2k 3!(k-3)! = o(k!)$, covering all possible $i$ and $r$ for $j = 3$. If $j = k-3$, then $i = 1$ or $i = 2$, so the number of permutation matrices in Class 4 with $|R_2| = k-3$ is at most $2k 3!(k-3)! = o(k!)$, covering all possible $i$ and $r$ for $j = k-3$. If $4 \le j \le k-4$, then $j!(k-j)! = O(\frac{k!}{k^4})$, so $\sum_{j = 4}^{k-4} j! (k-j)! = O(\frac{k!}{k^3})$ and $k^2 \sum_{j = 4}^{k-4} j! (k-j)! = o(k!)$, covering all possible $i$ and $r$ for $4 \le j \le k-4$. Thus the total number of permutation matrices in Class 4 is $o(k!)$.
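The flavor of the counting bounds used throughout this proof can be checked numerically. The snippet below is our own illustration (the helper name `block_count_bound` is hypothetical): it evaluates $\sum_{j=1}^{k-1} j!\,(k-j)!$, the kind of product bound that appears repeatedly above, against $k!$, and the ratio shrinks as $k$ grows, consistent with the $o(k!)$ estimates.

```python
from math import factorial

def block_count_bound(k):
    """Sum of j!(k-j)! over 1 <= j <= k-1, the dominant count for
    permutation matrices with a two-block row/column split."""
    return sum(factorial(j) * factorial(k - j) for j in range(1, k))

# ratio to k! decreases with k (dominated by the j = 1 and j = k-1 terms,
# each contributing (k-1)!/k! = 1/k of the total)
for k in (6, 10, 14):
    print(k, block_count_bound(k) / factorial(k))
```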
\end{proof}
\section{Discussion}
We showed that almost all $k \times k$ permutation matrices have bounded saturation functions, but Fulek and Keszegh's problem of characterizing all permutation matrices $P$ for which $\sat(n, P) = O(1)$ is still open. We made some progress on this problem by showing that every ordinary permutation matrix $P$ has $\sat(n, P) = O(1)$. Ordinary permutation matrices are the permutation matrices that do not reduce to any of Class 1, Class 2, Class 3, or Class 4, so the remaining problem is to characterize the permutation matrices $P$ in Classes 1, 2, 3, and 4 for which $\sat(n, P) = O(1)$. Fulek and Keszegh showed that all permutation matrices $P$ in Class 1 with no ones in the submatrix restricted to $R_1 \times C_2$ have $\sat(n, P) = \Theta(n)$ \cite{fk}. What about the rest of Class 1, Class 2, Class 3, and Class 4?
As for specific 0-1 matrices, we have not determined whether $\sat(n, P) = O(1)$ for $P =
\begin{bmatrix}
0 & 0 & 1 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 1 & 0 & 0
\end{bmatrix}$. This was another open problem from \cite{fk}. Observe that $P$ is the matrix obtained by removing the middle row and middle column of $Q$. In this case we may construct $T_P$, but $P$ is in Class 4 and $T_P$ contains $P$.
Let $L_k$ denote the number of $k \times k$ permutation matrices $P$ for which $\sat(n, P) = \Theta(n)$. The proof of Lemma \ref{almostall} implies that $L_k = O((k-1)!)$. We also have $L_k = \Omega((k-1)!)$, since Fulek and Keszegh showed that $\sat(n, P) = \Theta(n)$ for all permutation matrices $P$ in Class 1 with no ones in the submatrix restricted to $R_1 \times C_2$. Thus, $L_k = \Theta((k-1)!)$. It would be interesting to sharpen the bounds on $L_k$.
Our construction implies that $\sat(n, P) = O(k^2)$ for any ordinary $k \times k$ permutation matrix $P$. Another interesting problem would be to bound the maximum possible value of $\sat(n, P)$ for any $k \times k$ permutation matrix $P$ with $\sat(n, P) = O(1)$. A more general open problem from Fulek and Keszegh is to determine whether there exists a decision procedure to answer whether $\sat(n, P) = O(1)$ for any given 0-1 matrix $P$ \cite{fk}. An approach would be to bound the maximum possible value of $\sat(n, P)$ for any $k \times k$ 0-1 matrix $P$ with $\sat(n, P) = O(1)$. It would also be interesting to investigate the complexity of computing $\sat(n, P)$ and determining whether $\sat(n, P) \leq m$, both in general and for special families of 0-1 matrices $P$ such as permutation matrices.
Finally, a natural direction for future research is to investigate saturation functions of forbidden $d$-dimensional 0-1 matrices. Extremal functions of forbidden $d$-dimensional 0-1 matrices have been investigated in \cite{ck, ff0, gt, km, gkst}. As with the $2$-dimensional case, we say that a $d$-dimensional 0-1 matrix $A$ is \emph{saturating} for the forbidden $d$-dimensional 0-1 matrix $P$ if $A$ avoids $P$ but changing any zero to a one in $A$ creates a copy of $P$. For any $d$-dimensional 0-1 matrix $P$, we define the saturation function $\sat(n, P, d)$ to be the minimum possible number of ones in a $d$-dimensional 0-1 matrix of dimensions $n \times \dots \times n$ that is saturating for $P$.
Given any $2$-dimensional 0-1 matrix $P$ of dimensions $r \times s$, we can create a $d$-dimensional 0-1 matrix $P_d$ corresponding to $P$ for which all dimensions are of length $1$ except the first and second dimensions which are of length $r$ and $s$ respectively. We define entry $(i, j, 1, \dots, 1)$ of $P_d$ to be $1$ if and only if entry $(i, j)$ of $P$ is $1$. We have $\sat(n, P_d, d) = n^{d-2} \sat(n, P)$. To see that $\sat(n, P_d, d) \le n^{d-2} \sat(n, P)$, consider any $n \times n$ 0-1 matrix $A$ with $\sat(n, P)$ ones that is saturating for $P$, and let $B$ be the $d$-dimensional 0-1 matrix of dimensions $n \times \dots \times n$ obtained from $A$ by defining entry $(x_1, x_2, x_3, \dots, x_d)$ of $B$ to be $1$ if and only if entry $(x_1, x_2)$ of $A$ is $1$. Clearly $B$ avoids $P_d$, since a copy of $P_d$ in $B$ would imply there is a copy of $P$ in $A$. Moreover if we change any zero in entry $(x_1, x_2, x_3, \dots ,x_d)$ of $B$ to a one, then we create a copy of $P_d$ in $B$ for which the last $d-2$ coordinates of all entries in the copy are $x_3, \dots, x_d$. This is because the entries of $B$ with last $d-2$ coordinates equal to $x_3, \dots, x_d$ form a copy of $A$ when restricted only to the first two dimensions, and changing any zero in $A$ to a one creates a copy of $P$. Thus $B$ is saturating for $P_d$, and $B$ has $n^{d-2} \sat(n, P)$ ones, so $\sat(n, P_d, d) \le n^{d-2} \sat(n, P)$.
To see that $\sat(n, P_d, d) \ge n^{d-2} \sat(n, P)$, suppose for contradiction that $\sat(n, P_d, d) < n^{d-2} \sat(n, P)$. Then there exists a $d$-dimensional 0-1 matrix $C$ of dimensions $n \times \dots \times n$ that avoids $P_d$ with fewer than $n^{d-2} \sat(n, P)$ ones such that changing any zero in $C$ to a one creates a copy of $P_d$. By the pigeonhole principle, there exist $x_3, \dots, x_d$ such that the entries of $C$ with last $d-2$ coordinates equal to $x_3, \dots, x_d$ contain fewer than $\sat(n, P)$ ones. Let $T$ be the $2$-dimensional 0-1 matrix for which entry $(i, j)$ of $T$ is $1$ if and only if entry $(i, j, x_3, \dots, x_d)$ of $C$ is $1$. Then $T$ is an $n \times n$ 0-1 matrix that is saturating for $P$, but $T$ has fewer than $\sat(n, P)$ ones, a contradiction. Thus we proved the following lemma.
\begin{lem}
Given any $2$-dimensional 0-1 matrix $P$ of dimensions $r \times s$, let $P_d$ be the $d$-dimensional 0-1 matrix of dimensions $r \times s \times 1 \times \dots \times 1$ for which entry $(i, j, 1, \dots, 1)$ of $P_d$ is $1$ if and only if entry $(i, j)$ of $P$ is $1$. Then $\sat(n, P_d, d) = n^{d-2} \sat(n, P)$.
\end{lem}
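The lifting construction in the proof is easy to realise concretely. The following Python sketch (an illustration of ours, with hypothetical names; the matrix is stored as a dictionary keyed by coordinate tuples) builds $P_d$-style liftings $B$ from a $2$-dimensional matrix $A$:

```python
from itertools import product

def lift(A, d, n):
    """Build the d-dimensional matrix B with B[x1, x2, x3, ..., xd] = A[x1][x2],
    i.e. copy A into every slice obtained by fixing the last d-2 coordinates."""
    r, s = len(A), len(A[0])
    return {(x1, x2) + rest: A[x1][x2]
            for x1 in range(r) for x2 in range(s)
            for rest in product(range(n), repeat=d - 2)}
```

By construction, the number of ones in the lifted matrix is $n^{d-2}$ times the number of ones in $A$, matching the count in the lemma.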
It would be interesting to investigate the saturation functions of $d$-dimensional 0-1 matrices for which more than two of the dimensions have length greater than $1$. For example, what is $\sat(n, P, d)$ for every $d$-dimensional permutation matrix $P$?
\section{Introduction}
Single-item, single-stocking location, stochastic inventory systems
have long been investigated under various operational assumptions, and
the associated literature is large. Scarf's seminal paper, \cite{scarf1959optimality}, addressed this problem over a finite
planning horizon comprising discrete time periods, non-stationary
stochastic demands, a fixed ordering cost, and linear holding and
shortage costs. Scarf proved that the $(s,S)$ policy (more precisely
the $(s_t,S_t)$ policy) is cost-optimal. In this policy the decision-maker checks the current inventory position at review epochs (the
start of each time period) and if the inventory position is at or
below $s_t$ an order is placed to raise it to $S_t$. For a planning
horizon of $T$ periods the optimal $(s_t,S_t)$ policy requires $2T$
policy parameters, computed in a here-and-now fashion at the start of
the planning horizon. Actual replenishment timings and associated
order quantities are instead determined in a wait-and-see manner.
In this paper, we address a more general form of the inventory control problem described by Scarf.
According to \cite{silver1981operations} the $(R,s,S)$ policy is one
of the most commonly adopted inventory control strategies (also
called $(T,s,S)$ or $(s,S,T)$ in the literature, \cite{babai2010empirical, lagodimos2012computing}). In an
$(R_t, s_t, S_t)$ system the inventory level is checked only at review epochs
$R_t$, which are policy parameters that are fixed at the start of the
planning horizon. After a review, an order may be placed to raise the inventory level up to $S_t$ if it is at or below $s_t$.
Two special cases of the $(R,s,S)$ policy naturally arise. Firstly,
it reduces to the $(s,S)$ case if there is no explicit cost involved
in carrying out inventory reviews. Inventory review (also known
as stock-taking) is costly in practice, so we consider the case in
which a fixed {\it system control\/} cost \cite{silver1981operations}
is incurred when the inventory is reviewed, e.g. \cite{fathoni2019development,christou2020fast}. The $(R,s,S)$ policy relaxes the cost accounting assumption that the fixed cost of replenishment covers both review and delivery costs, and separates the fixed cost of conducting a review from the fixed ordering cost. One practical implication of this relaxation is that the order cancellation and relevant costs can be explicitly incorporated into inventory planning.
Secondly, the $(R,s,S)$ policy reduces to the
$(R,S)$ policy (the {\it replenishment cycle policy\/}) if reorder
levels $s_t$ are equal to the order-up-to-levels $S_t$. In a replenishment cycle policy,
the replenishment periods are fixed at the beginning of the planning
horizon and the replenishment orders are placed only in these periods
after period demands so far have been observed.
Although $(R,s,S)$ is one of the most general and frequently used
inventory policies, as pointed out by \cite{silver1981operations} {\it
the determination of the exact best values of the three parameters
is extremely difficult\/}. To the best of our knowledge no approach
to computing them has been presented in the literature. We fill this
important gap by making the following contributions:
\begin{itemize}
\item
we introduce an efficient hybrid of branch-and-bound and stochastic dynamic programming (SDP) to
compute optimal policy parameters;
\item
we improve the branch-and-bound by using tighter bounds computed
through a separate dynamic programming (DP) method;
\item
we show empirically that the new algorithm performs significantly
better than a baseline method and that it is able to solve problems of realistic size in a reasonable time.
\end{itemize}
The paper is structured as follows. Section
\ref{sec:literature_review} surveys related literature. Section
\ref{sec:prob_description} provides a problem description. Section
\ref{sec:sdp_ss} introduces a simple SDP formulation. Section
\ref{sec:method} introduces a branch-and-bound strategy. Section
\ref{sec:experimental} carries out a comprehensive numerical study.
Finally, Section \ref{sec:conclusion} concludes the paper.
\section{Literature review} \label{sec:literature_review}
The problem of computing policy parameters for an inventory control system under stochastic demand has received a great deal of attention. In this section, we survey the relevant literature on the classic stochastic inventory control problem. We then survey different versions of the problem. Finally, we survey $(R,s,S)$ real-world applications.
An important class of these problems is single-item, single-location, non-stationary stochastic lot-sizing under linear holding costs, penalty costs, and both linear and fixed ordering costs. Different policies can be used to determine the size and timing of orders in such a setting.
In his seminal work, Scarf characterises the structure of the optimal policy for such a problem. The framework proposed by \cite{bookbinder1988strategies} divides the policies into three classes: static uncertainty, dynamic uncertainty and static-dynamic uncertainty. These classes differ in the moment at which the decisions are taken. Since then, numerous research works have tackled the computation of policy parameters under demand uncertainty, mainly focusing on the $(s,S)$ and the $(R,S)$ policies, which have a flexible order quantity. According to the categorisation of strategies presented in \cite{powell2019unified}, these works can be divided into two types: \textit{deterministic/special structure solutions} or \textit{sample models}. The first category, which comprises this study, includes a wide variety of approaches based on: dynamic programming \cite{scarf1959optimality,rossi2011state,ozen2012static}, mixed-integer linear programming \cite{tarim2004stochastic,xiang2018computing,tunc2018extended}, approximations \cite{gutierrez2017simple}, and constraint programming \cite{rossi2012constraint}. The sample models category includes two-stage stochastic programming, which has been applied to inventory policy computation in \cite{fattahi2015investigating,cunha2017two,dos2019enhanced}.
Several comparison studies have been conducted recently to benchmark different aspects of these policies: \cite{kilic2011investigation} extends a measure of planning instability to non-stationary stochastic lot-sizing; \cite{dural2019benefit} compares the performance of different policies in a receding horizon setting; \cite{sani1997selecting} and \cite{babai2010empirical} are comparative studies of the performance of $(s,S)$ heuristics.
Modifications to the original inventory model have been proposed to allow a closer representation of real-world problems. \cite{dillon2017two} proposes an $(R,S)$ policy solution to manage the blood supply chain, which includes perishable products. \cite{alvarez2020inventory}'s model considers both quantity and quality decay of the inventory product; the quality can be improved by mixing it with a higher-quality product. A set of heuristics for the lot-sizing problem with remanufacturing of returned products is presented in \cite{kilic2019heuristics}. An all-units discount $(s,S)$ policy has been analysed in \cite{wang2019procurement}. Uncertainty can involve other aspects of the inventory system; for example, \cite{bashyam1998optimization,rossi2010computing} consider a stochastic lead time. Different supply chain configurations can also be considered, for example a two-echelon inventory system \cite{schneider1991empirical,schneider1995power}. \cite{ma2019stochastic} provides an updated review of stochastic inventory control algorithms, while \cite{bushuev2015review} presents a broader picture of the state-of-the-art in lot sizing.
The $(R, s, S)$ policy parameters computation has been tackled in the literature under the stationary, continuous time setting. With this configuration, only three parameters have to be optimised since the demand does not change over time. This problem has been solved to optimality by \cite{lagodimos2012computing}. In \cite{lagodimos2012optimal,christou2020fast} a batch version of the policy is considered.
None of the surveyed methods can be easily adapted to compute the $(R,s,S)$ policy parameters under the finite-horizon, discrete-time setting: the problem has three sets of decision variables, which makes the previous models inapplicable.
While other policies can be used for the same problem, the $(R,s,S)$ policy is more general and has better cost performance, since both the $(s,S)$ and the $(R,S)$ policies are special cases of an $(R,s,S)$ policy. The introduction of the review cost makes no difference to the $(s,S)$ and $(R,S)$ policy computations; in the $(s,S)$ policy the cost is charged in every period, while in the $(R,S)$ policy every review coincides with an order. A static policy would also have poor performance because it cannot react to the demand realisations \cite{dural2019benefit}.
The $(R,s,S)$ policy is widely used by practitioners, usually not
independently but as a component of complex supply chains, and here we
survey some recent models. Due to the
complexity of the determination of its parameters, in the surveyed
papers, the value of $R$ is considered to be constant across the time
horizon. \cite{bijvank2012inventory} describe an
inventory control system for point-of-use locations. They compare the
performance of $(R,s,Q)$ policies (with fixed order quantities)
against $(R,s,S)$ under stationary stochastic demand. Because of
stationarity, the policy parameters were constant throughout the
horizon. \cite{ahmadi2018optimal,monthatipkul2008inventory} tackle a
capacitated two-echelon inventory system with one warehouse and
multiple retailers. They use a heuristic based on
\cite{schneider1995power} for the $(R,s_t,S_t)$ policy.
\cite{cabrera2013solving} consider a similar two-level supply chain in
which a single plant serves a set of warehouses, which in turn serve a
set of end customers or retailers. The warehouses model is based on
$(R,s,S)$ and they develop a heuristic to solve an inventory location
model with this configuration. The same problem has been tackled by
\cite{araya2018lagrangian} using Lagrangian relaxation and the
subgradient method. \cite{bijvank2012lost} analysed lost-sales
inventory control policies with service level constraints. They define an optimal
policy starting from the $(s,S)$ SDP introduced by
\cite{scarf1959optimality}. They present a value-iteration algorithm
to find the $(R,s,S)$ parameters that minimise the inventory cost
subjected to service constraints. As the parameters are fixed, their
solution is unsuitable for a non-stationary setting.
The analysis of the state-of-the-art confirms the novelty of our solution, and practitioners' interest in the use of the $(R,s,S)$ policy in stochastic environments.
\section{Problem description} \label{sec:prob_description}
We consider the single-item, single-stocking location, stochastic
inventory control problem over a $T$-period planning horizon. Without
loss of generality, we assume that orders are placed at the start of
each period and that the lead time is zero, as is common in the
literature
\cite{scarf1959optimality,bollapragada1999simple,tarim2004stochastic}.
An inventory control policy defines the timing and quantities of
orders over the planning horizon. We define a review moment, or
review period, as a period in which the level of the inventory is
assessed and an order can be placed. A replenishment cycle is
represented by the interval between two review moments.
We denote by $Q_t$ the quantity of the order placed in period $t$, and
by $W$ the inventory review cost. Ordering costs are represented by a
fixed value $K$ and a linear cost, but we shall assume without loss of
generality that the linear cost is zero. The extension of our
solution to the case of a non-zero production/purchasing cost is
straightforward, as this cost can be reduced to a function of the
expected closing inventory level at the final period
\cite{tarim2004stochastic}. At the end of each period, a linear
holding cost $h$ is charged for every unit carried from one period to
the next.
Demands $d_t$ in each period $t$ are independent random variables with known probability distributions. Backlogging of excess demand is
assumed, so if the demand in a period exceeds on-hand inventory the
rest of the demand is carried to the next period; a linear penalty
cost $b$ is incurred on any unit of back-ordered demand at the end of
a period.
Under the non-stationarity assumption the $(R,s,S)$ policy takes the
form $(R_t,s_t,S_t)$ where $R_t$ denotes the length of the $t^{th}$
replenishment cycle, while parameters $s_t$ and $S_t$ denote the
reorder-level and order-up-to-level associated with the $t^{th}$
inventory review. We consider the problem of computing the $(R,s,S)$
policy parameters that minimize the expected total cost over the
planning horizon.
\section{Stochastic dynamic programming formulation} \label{sec:sdp_ss}
In this section, we provide a simple technique to compute the optimal
$(R,s,S)$ policy parameters. It can be considered the
state-of-the-art in computing such parameters in the presence of
stochastic non-stationary demand. Moreover, it constitutes the basis
of the branch-and-bound technique introduced later.
We represent the replenishment moments by binary variables $\gamma_t$
($t = 1, \dots, T$) which take value 1 if a review is placed in
period $t$ and 0 otherwise. We assume $Q_t = 0$ if $\gamma_t = 0$ so
no order will be placed outside a review moment. The optimal
$(R,s,S)$ policy for our problem is represented by the parameters
$\gamma_t,s_t,S_t$ that minimize the expected total cost.
Consider an arbitrary review cycle plan with $\gamma_t$ as a
parameter, not a decision variable.
We denote the closing inventory level for each period by $I_t$, and
the given initial inventory level by $I_0$. We assume that the orders
are placed at the beginning of each time period and delivered
instantaneously. The problem can be formulated and solved to
optimality as an SDP \cite{bellman1966dynamic}.
The expected immediate cost, combining ordering, review, holding and penalty costs, given action $Q_t$, is:
\begin{align}
\label{eq:immediate_cost}
f_t(I_{t-1}, Q_t) ={}& \gamma_t W + K \mathbbm{1}\{ Q_t>0 \} +
E[h \max(I_{t-1} + Q_t -d_t, 0) +\nonumber\\
& b \max(d_t-I_{t-1}- Q_t, 0)]
\end{align}
Let $C_t(I_{t-1})$ represent the expected total cost of an optimal policy over periods $t, \dots, T$, and let $\mathbbm{1}$ denote the indicator function. The inventory levels $I_{t-1}$ are the states of the DP formulation. We model the problem with the functional equation:
\begin{equation}
\label{eq:functional_eq}
C_t(I_{t-1}) = \underset{0 \leq Q_t \leq M \gamma_t}{\min} ( f_t(I_{t-1}, Q_t) +
E[ C_{t+1}(I_{t-1}+ Q_t - d_t) ] )
\end{equation}
where $M$ is a sufficiently large number.
The boundary condition is:
\begin{equation}
C_{T+1}(I_{T}) = 0
\end{equation}
$C_{1}(I_0)$, where $I_0$ is the initial inventory level, gives the
expected cost of the optimal $(s,S)$ policy associated with the
\textbf{$\gamma$} assignment. To reduce the computational time we can
exploit the property of K-convexity \cite{scarf1959optimality} when
solving the SDP.
Let $\hat{C}_1(I_0)$ represent the expected total cost of the optimal
$(R,s,S)$ policy, given the initial inventory level $I_0$ at period 1.
We can define it as:
\begin{equation}
\label{eq:baseline}
\hat{C}_1(I_{0}) = \underset{\gamma_1, \dots, \gamma_T}{\min} ( C_1(I_0) )
\end{equation}
Evaluating the optimal $(s,S)$ policy for all possible assignments of
$\gamma_1, \dots, \gamma_T$ yields the optimal $(R,s,S)$ policy. The model works with any demand distribution, provided it can be discretised over a finite support. This
is our baseline method on which we aim to improve.
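To make the baseline concrete, the recursion in Equations \ref{eq:immediate_cost}--\ref{eq:baseline} can be sketched in Python (an illustrative sketch only: demand is a truncated Poisson, the inventory state space is clamped to a finite range, and all function and parameter names are ours, not part of the paper):

```python
import functools
import math
from itertools import product

def solve_sdp(gamma, means, K, W, h, b, I0=0,
              d_max=15, Q_max=20, I_min=-20, I_max=30):
    """Expected cost C_1(I0) of the optimal (s,S) policy for a fixed
    review plan `gamma`, with truncated Poisson demand per period."""
    T = len(gamma)
    # truncated Poisson pmf per period (mass beyond d_max is ignored)
    pmf = [[math.exp(-m) * m ** d / math.factorial(d) for d in range(d_max + 1)]
           for m in means]

    @functools.lru_cache(maxsize=None)
    def C(t, I):
        if t > T:
            return 0.0                                   # boundary: C_{T+1} = 0
        Qs = range(Q_max + 1) if gamma[t - 1] else (0,)  # no order outside reviews
        best = float('inf')
        for Q in Qs:
            cost = W * gamma[t - 1] + (K if Q > 0 else 0.0)
            for d, p in enumerate(pmf[t - 1]):
                nxt = max(I_min, min(I_max, I + Q - d))  # clamp the state space
                cost += p * (h * max(nxt, 0) + b * max(-nxt, 0) + C(t + 1, nxt))
            best = min(best, cost)
        return best

    return C(1, I0)

def baseline(means, K, W, h, b, I0=0, **kw):
    """Baseline: enumerate every review plan and keep the cheapest."""
    return min(solve_sdp(g, means, K, W, h, b, I0, **kw)
               for g in product((0, 1), repeat=len(means)))
```

The enumeration over $2^T$ review plans is exactly what makes the baseline impractical for long horizons.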
\subsection{Unit cost} \label{sec:unit_cost}
The algorithm can be extended to model the per-unit ordering cost. There are two options: reducing it to a function of the expected closing inventory level, e.g. \cite{tarim2004stochastic}, or including it in the immediate cost function.
Let $v$ be the per-unit ordering/production cost; Equation \ref{eq:immediate_cost} is then replaced by:
\begin{align}
f_t(I_{t-1}, Q_t) =& \gamma_t W + K \mathbbm{1}\{ Q_t>0 \} + v Q_t +
\nonumber\\
& E[h \max(I_{t-1} + Q_t -d_t, 0) + b \max(d_t-I_{t-1}- Q_t, 0)]
\end{align}
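The extended immediate cost is a direct transcription (a sketch of ours; `d_pmf` is an assumed finite demand distribution, and all names are hypothetical):

```python
def immediate_cost(I_prev, Q, gamma, d_pmf, K, W, h, b, v=0.0):
    """Expected immediate cost: review cost, fixed and per-unit ordering
    costs, plus expected holding and penalty costs over the demand pmf."""
    cost = W * gamma + (K if Q > 0 else 0.0) + v * Q
    for d, p in enumerate(d_pmf):
        cost += p * (h * max(I_prev + Q - d, 0) + b * max(d - I_prev - Q, 0))
    return cost
```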
\subsection{Lost sales} \label{sec:lost_sales}
Complete backlogging of the demand is a limiting assumption in many real-world settings. Studies analysing customer behaviour show that in case of a stock out, only a minority delay the purchase \cite{verhoef2006out}. According to \cite{bijvank2012inventory}, the lost sales configuration is underrepresented in the lot-sizing literature, even if it is more appropriate to model customers' behaviour. Approximating a lost sales model with a backlog model results in a non-negligible increase in costs \cite{zipkin2008old}.
The SDP formulation can be extended to model lost sales. We consider the partial backorder configuration presented in \cite{dos2019enhanced}. They define $\beta$ ($\beta \in [0,1]$) as the fraction of the unmet demand that is carried over to the next period; the remainder is lost. This parameter gives the flexibility to model backlog ($\beta=1$), lost sales ($\beta=0$), or a combination of the two. The functional equation \ref{eq:functional_eq} becomes:
\begin{equation}
\label{eq:functional_eq_lost}
C_t(I_{t-1}) = \underset{0 \leq Q_t \leq M \gamma_t}{\min} ( f_t(I_{t-1}, Q_t) +
E[ C_{t+1}(\max(I_{t-1}+ Q_t - d_t, \beta (I_{t-1}+ Q_t - d_t))) ] )
\end{equation}
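The modified transition inside the expectation can be isolated as a one-line rule (a sketch; the helper name is ours):

```python
def next_inventory(I_prev, Q, d, beta):
    """Partial backordering transition: a surplus carries over in full,
    while only a fraction beta of any unmet demand is backordered."""
    net = I_prev + Q - d
    return max(net, beta * net)
```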
\subsection{Example} \label{example}
We use a simple example to illustrate the application of our method,
with a 3-period planning horizon. We assume an initial inventory
level of zero and a Poisson distributed demand for each period with
averages $\overline{d} = [20,30,40]$.
We consider an ordering cost value $K=30$, a review cost $W=10$, and
holding and penalty costs of $h=1$ and $b=10$ per unit per period
respectively.
The algorithm must choose replenishment moments $\gamma = \langle
\gamma_1, \gamma_2, \gamma_3 \rangle$ that minimize the expected cost
of the policy. Table \ref{tab:toy_baseline} shows the expected cost
of each $(s,S)$ policy computed with different review periods. The
optimal solution is $\gamma = \langle 1,0,1 \rangle$ with expected
cost 142.7. However, exhaustive search becomes impractical as the
planning horizon grows, so in Section \ref{sec:method} we develop a
more efficient method.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c||c|}
\hline
$\gamma_1$ & $\gamma_2$ & $\gamma_3$ & Expected cost \\ \hline
\hline
0 & 0 & 0 & 1600.0 \\ \hline
0 & 0 & 1 & 751.8 \\ \hline
0 & 1 & 0 & 304.7 \\ \hline
0 & 1 & 1 & 302.0 \\ \hline
1 & 0 & 0 & 185.0 \\ \hline
1 & 0 & 1 & \textbf{142.7} \\ \hline
1 & 1 & 0 & 153.1 \\ \hline
1 & 1 & 1 & 150.4 \\ \hline
\end{tabular}
\end{center}
\caption{Expected cost of the optimal $(s,S)$ policy for each review plan in the 3-period example.}
\label{tab:toy_baseline}
\end{table*}
\section{A hybrid of branch-and-bound and SDP} \label{sec:method}
In this section, we present a hybrid technique that combines SDP and
branch-and-bound. The algorithm obtains optimal $(R,s,S)$ policies
associated with specific review plans at leaf nodes. The search tree
(defined in Section \ref{sec:searchtree}) is explored by depth-first
search (DFS). The subproblems associated with the nodes are defined
in Section \ref{sec:subproblems}. Section \ref{sec:pruning}
introduces the pruning condition and lower bound computed with DP.
Finally, Section \ref{sec:nodes} presents the node resolution process.
\subsection{Search tree} \label{sec:searchtree}
The goal of the branch-and-bound is to find the review plan with the
minimum expected cost. Branching on $\gamma_t$ fixes its value to $1$ or
$0$. The search tree has $T+1$ levels, and the branching at its root
fixes the value of $\gamma_T$. At level $\ell$ branching involves the
variable $\gamma_{T- \ell+1}$. The path from the root to a node at
level $\ell$ represents a fixed assignment of the suffix $\langle
\gamma_{T- \ell +2}, \dots, \gamma_{T} \rangle$. A leaf node
represents a complete assignment of the $\gamma$ values. Figure
\ref{fig:binary_tree} shows the search tree of a 3-period problem, as
in the example presented in the previous section.
\begin{figure}
\tikzset{every tree node/.style={minimum width=2em,draw,circle},
blank/.style={draw=none},
edge from parent/.style=
{draw,edge from parent path={(\tikzparentnode) -- (\tikzchildnode)}},
level distance=2cm,
level 3/.style={sibling distance = 1cm}
}
\centering
\begin{tikzpicture}
\Tree
[. 1
\edge node[auto=right] {$\gamma_3 = 1$};
[. 2
\edge node[auto=right] {$\gamma_2 = 1$};
[. 3
\edge node[auto=right] {$\gamma_1 = 1$};
[.4 ]
\edge node[auto=left] {$\gamma_1 = 0$};
[.4 ]
]
\edge node[auto=left] {$\gamma_2 = 0$};
[.3
[.4 ]
[.4 ]
]
]
\edge node[auto=left] {$\gamma_3 = 0$};
[.2
\edge node[auto=right] {$\gamma_2 = 1$};
[.3
[.4 ]
[.4 ]
]
\edge node[auto=left] {$\gamma_2 = 0$};
[.3
[.4 ]
[.4 ]
]
]
]
\end{tikzpicture}
\caption{Search tree for a 3-period instance: nodes contain level numbers.}
\label{fig:binary_tree}
\end{figure}
\subsection{Subproblems} \label{sec:subproblems}
Given the period $t$ and the partial assignment of a suffix of the
review moments $\langle \gamma_t, \dots, \gamma_T \rangle$, the
problem at a node is to find the $\langle \gamma_1, \dots \gamma_{t-1}
\rangle$ that minimizes the expected cost of the optimal policy. We
denote this problem as BnB-SDP($t$,$\langle \gamma_t, \dots, \gamma_T
\rangle$). For each subproblem using Equation \ref{eq:functional_eq}
we can compute the expected cost of the optimal policy starting at
period $t$ with inventory level $i$. This is possible because all
review moments after period $t$ are fixed, and because of the SDP
stage structure presented in Section \ref{sec:sdp_ss}.
\subsection{Bounds and pruning} \label{sec:pruning}
If all the solutions in the subtree rooted in a node are suboptimal
then we can prune that node without compromising
optimality.
\begin{proposition}
\label{prop:property_monotonical}
Given a fixed assignment of \textbf{$\gamma$}:
\begin{equation}
\label{eq:property_monotonical}
\underset{I}{\min}(C_{t-1}(I)) \geq \underset{I}{\min}(C_{t}(I))
\end{equation}
\end{proposition}
From the functional equation (\ref{eq:functional_eq}) it is clear that
$C_t$ is equal to the expected value of $C_{t+1}$ plus some
non-negative costs, so the minimum cost in each stage increases
monotonically with tree depth.
During tree search $\bar{C}$ records the expected cost of the best
plan computed so far, that is the minimum $C_1(I_0)$ among all leaves
already computed. This is used as an upper bound for the expected
cost of the optimal plan as follows. Considering the subproblem
BnB-SDP($t$,$[\gamma_t, \dots, \gamma_T]$) with the associated
$C_t(i)$ expected costs:
\begin{proposition}
If
\begin{equation}
\label{eq:pruning_condition}
\underset{i}{\min}(C_t(i)) \geq \bar{C}
\end{equation}
then, because of the monotonicity of the cost function
(\ref{eq:property_monotonical}):
\begin{equation}
\underset{i}{\min}(C_1(i)) \geq \bar{C}
\end{equation}
Finally, since the expected cost associated with a plan, $C_1(I_0)$,
is one of the values of $C_1$:
\begin{equation}
C_1(I_0) \geq \bar{C}
\end{equation}
\end{proposition}
Hence if (\ref{eq:pruning_condition}) is true the subproblem
BnB-SDP($t$,$[\gamma_t, \dots, \gamma_T]$) is not part of an optimal
solution and the search tree can be pruned.
However, this pruning condition makes no assumption on the costs faced
on periods $ 1, \dots , t-1$, and a lower bound on the costs in those
periods leads to more effective pruning. Let $MC_t(I_t)$ represent a
lower bound on the cost faced in periods $1, \dots , t$ with a closing
inventory of $I_t$ in period $t$. The pruning condition
(\ref{eq:pruning_condition}) can be refined to:
\begin{equation}
\label{eq:pruning_condition_updated}
\underset{I_{t-1}}{\min}(C_t(I_{t-1}) + MC_{t-1}(I_{t-1}) ) \geq \bar{C}
\end{equation}
Having a bound independent of the review periods allows us to compute
it only once, before running the branch-and-bound algorithm.
The bounds can be computed by a DP with stages and states equivalent
to the SDP presented in Section \ref{sec:sdp_ss} and functional
equation:
\begin{equation}
MC_t(I_{t}) = \min
\left\{\begin{matrix}
f_t(I_{t}, 1) + \underset{j < I_t}{\min} (MC_{t-1}(j))\\
f_t(I_{t}, 0) + \underset{j \geq I_t}{\min} (MC_{t-1}(j))
\end{matrix}\right.
\label{eq:funceqdp}
\end{equation}
where, as defined in Section \ref{sec:sdp_ss}, $I_t$ is the current
inventory level, and $ f_t(I_{t}, Q_t)$ is the
ordering-holding-penalty cost. In the first case, an order has been
placed in period $t$ so the inventory level in the previous period was
less than or equal to the current level. In the second case, an order
has not been placed so the previous inventory level was greater than
or equal to the current level. The boundary condition is:
\begin{equation}
MC_1(I_{1}) =
\left\{\begin{matrix}
W + K + f_1(I_{1}) & \text{if } I_1 > I_0\\
f_1(I_{1}) & \text{if } I_1 \leq I_0
\end{matrix}\right.
\end{equation}
where $I_0$ is the initial inventory. Considering finite demand, the DP has a number of states equal to the number of periods multiplied by the maximum inventory level. Each state requires a single computation of Equation \ref{eq:funceqdp}, which is pseudo-polynomial in the maximum inventory level. The overall complexity of a DP is the number of states multiplied by the complexity of solving one of them, so the overall complexity is pseudo-polynomial.
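The bound computation can be sketched as follows (illustrative Python of ours; we assume a simplified stage cost of $h\max(I,0) + b\max(-I,0)$, plus $W+K$ when an order is placed, and use running minima so each inner minimisation is evaluated in linear time over the level grid):

```python
from itertools import accumulate

def lower_bounds(T, levels, h, b, W, K, I0):
    """DP lower bounds MC_t(I) over a sorted grid of closing inventory
    levels; stages[t-1][k] approximates MC_t(levels[k])."""
    def stage_cost(I, ordered):
        # assumed stage cost: holding/penalty, plus W + K if an order occurred
        return (W + K if ordered else 0.0) + h * max(I, 0) + b * max(-I, 0)

    # boundary condition: MC_1 (an order occurred iff the level rose above I0)
    MC = [stage_cost(I, I > I0) for I in levels]
    stages = [MC]
    for t in range(2, T + 1):
        # prefix/suffix minima give min_{j < I} and min_{j >= I} of MC_{t-1}
        prefix = list(accumulate(MC, min))
        suffix = list(accumulate(reversed(MC), min))[::-1]
        MC = [min(stage_cost(I, True) + (prefix[k - 1] if k else float('inf')),
                  stage_cost(I, False) + suffix[k])
              for k, I in enumerate(levels)]
        stages.append(MC)
    return stages
```

As expected from the recursion, the minimum bound per stage is non-decreasing in $t$, which is what makes the refined pruning condition valid.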
\subsection{Node computation} \label{sec:nodes}
Algorithm \ref{alg:bnb} summarises the branch-and-bound procedure. In
line 1, the SDP stage $t$ is solved. In line 7, the pruning condition
is evaluated: if a pruning occurs the branching phase is skipped. In
lines 8 and 9 DFS recursively continues. Lines 3--6 relate to leaf
nodes: if the policy represented by the leaf is better than the best
found so far, the value of $\bar{C}$ is updated. The algorithm starts
by invoking BnB-SDP($T+1$,$\emptyset$), and at the end, the expected
cost of the optimal policy is given by $\bar{C}$.
The algorithm as shown always branches by first assigning $\gamma_t =
0$, but its performance can be improved by randomisation. If, during
each branching phase, we randomly order lines 8--9, we may obtain a better
solution earlier, leading to stronger pruning of the search tree.
We evaluate the effect of this randomisation in Section
\ref{sec:experimental}.
\begin{algorithm}
\caption{BnB-SDP($t$,$[\gamma_t, \dots, \gamma_T]$)}\label{alg:bnb}
\emph{Data}: the current upper bound $\bar{C}$, the $C_{t+1}(i)$
computed at the parent node, and the bounds $MC(i)$.
\begin{algorithmic}[1]
\State \text{Compute $C_t$ using Equation \ref{eq:functional_eq}}
\If {$t = 1$}
\If {$C_1(I_0) < \bar{C}$}
\State $\bar{C} \gets C_1(I_0)$
\State Save $[\gamma_1, \dots, \gamma_T]$ as incumbent review plan
\EndIf
\Else
\If {$\min(C_t(i) + MC_{t-1}(i)) \geq \bar{C} $}
\Return
\EndIf
\State BnB-SDP($t-1$,$[0, \gamma_t, \dots, \gamma_T]$)
\State BnB-SDP($t-1$,$[1, \gamma_t, \dots, \gamma_T]$)
\EndIf
\end{algorithmic}
\end{algorithm}
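As an illustration, the procedure can be rendered as a compact Python sketch (ours, not the implementation used in the experiments: demand is a truncated Poisson, the state space is clamped, and for brevity pruning uses the basic condition \ref{eq:pruning_condition}, i.e. $MC \equiv 0$):

```python
import math

def bnb_rss(means, K, W, h, b, I0=0, d_max=15, Q_max=20, I_lo=-20, I_hi=30):
    """Branch-and-bound over review plans, DFS from period T down to 1."""
    T = len(means)
    levels = range(I_lo, I_hi + 1)
    pmf = [[math.exp(-m) * m ** d / math.factorial(d) for d in range(d_max + 1)]
           for m in means]

    def stage(t, g, C_next):
        """One SDP stage for period t given gamma_t = g and the C_{t+1} table."""
        C = {}
        for I in levels:
            best = float('inf')
            for Q in (range(Q_max + 1) if g else (0,)):
                c = W * g + (K if Q else 0.0)
                for d, p in enumerate(pmf[t - 1]):
                    nxt = max(I_lo, min(I_hi, I + Q - d))
                    c += p * (h * max(nxt, 0) + b * max(-nxt, 0) + C_next[nxt])
                best = min(best, c)
            C[I] = best
        return C

    best = {'cost': float('inf'), 'plan': None}

    def dfs(t, suffix, C_next):
        for g in (0, 1):                           # branch on gamma_t
            C = stage(t, g, C_next)
            if t == 1:                             # leaf: complete review plan
                if C[I0] < best['cost']:
                    best['cost'], best['plan'] = C[I0], (g,) + suffix
            elif min(C.values()) < best['cost']:   # otherwise prune the subtree
                dfs(t - 1, (g,) + suffix, C)

    dfs(T, (), {I: 0.0 for I in levels})           # boundary C_{T+1} = 0
    return best['cost'], best['plan']
```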
\subsection{Guided tree search} \label{sec:guided}
The random descent can get stuck in inferior branches of the search tree; in these cases it takes considerable time to obtain a reasonable review plan and a good upper bound on the cost. Computing a near-optimal review plan and using it to guide the search leads to the immediate computation of a policy with a low expected cost. This tighter bound increases the number of nodes proved suboptimal by the pruning condition.
A reasonable review plan can be computed using the $(R_t,S_t)$ policy. As mentioned in the introduction, this policy places an order at each review moment. The replenishment cycles ($R_t$) can be used as a review plan, while the order-up-to-levels $S_t$ can be ignored. During the first descent of the branch-and-bound search tree, the $\gamma_t$ values are selected following this review plan; thus, the first leaf to be computed is the one that has the $R_t$ as review moments. This leaf represents the optimal $(R_t, s_t, S_t)$ policy for that review plan, and it should have a low expected cost.
After computing the first leaf of the tree, the search proceeds in the replenishment plan's neighbourhood using a randomised approach.
The experimental section shows the improvement in pruning efficacy and computational time.
Good computational performance and implementation simplicity make the MILP formulation presented in \cite{rossi2015piecewise} a good solution to compute the $(R_t,S_t)$ policy; this formulation is used in the experimental section.
\subsection{Example} \label{sec:example_pruning}
The search tree with the DP bounds for the example of Section
\ref{example} is represented in Figure
\ref{fig:binary_tree_toy_pruning}. Each internal node contains the
value of the pruning condition with the DP bounds
(\ref{eq:pruning_condition_updated}). An internal node is underlined
if the pruning occurs in that node. Each leaf is in bold if it
contains an improvement compared to the previous best solution
$\bar{C}$. Pruned nodes are indicated by an asterisk (*).
\begin{figure}
\tikzset{every internal node/.style={minimum width=2em,draw,circle},
every leaf node/.style={minimum width=2em,draw,circle,dashed},
blank/.style={draw=none},
edge from parent/.style=
{draw,edge from parent path={(\tikzparentnode) -- (\tikzchildnode)}},
level distance=2cm,
level 3/.style={sibling distance = 1cm}
}
\centering
\begin{tikzpicture}
\Tree
[. 0
\edge node[auto=right] {$\gamma_3 = 1$};
[.107
\edge node[auto=right] {$\gamma_{2} = 1$};
[.142
\edge node[auto=right] {$\gamma_{1} = 1$};
[.\textbf{150} ]
\edge node[auto=left] {$\gamma_{1} = 0$};
[.302 ]
]
\edge node[auto=left] {$\gamma_{2} = 0$};
[.138
\edge node[auto=right] {$\gamma_{1} = 1$};
[.\textbf{143} ]
\edge node[auto=left] {$\gamma_{1} = 0$};
[.752 ]
]
]
\edge node[auto=left] {$\gamma_3 = 0$};
[.109
\edge node[auto=right] {$\gamma_{2} = 1$};
[.\underline{144}
\edge[dashed];
[.* ]
\edge[dashed];
[.* ]
]
\edge node[auto=left] {$\gamma_{2} = 0$};
[.\underline{181}
\edge[dashed];
[.* ]
\edge[dashed];
[.* ]
]
]
]
\end{tikzpicture}
\caption{Branch-and-bound technique applied to the toy problem.}
\label{fig:binary_tree_toy_pruning}
\end{figure}
We define the \textit{pruning percentage} as the percentage of nodes that
are proved to be suboptimal by the pruning condition during the tree
search. In this example, 10 nodes are computed and 4
nodes are pruned, so the pruning percentage is $4/14=28.57\%$.
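The node accounting above can be illustrated with a generic depth-first branch-and-bound (a Python sketch, not the paper's implementation; `bound` and `cost` are placeholders for the DP bounds and the policy-evaluation routine of the previous sections):

```python
import math

def branch_and_bound(n, bound, cost):
    """Depth-first branch-and-bound over binary decisions gamma_1..gamma_n.

    `bound(prefix)` must return a lower bound on the cost of any completion
    of the partial assignment `prefix`; `cost(assignment)` returns the exact
    cost of a complete assignment (a leaf).
    Returns (best cost, best assignment, pruning percentage).
    """
    best_cost, best_assignment = math.inf, None
    computed, pruned = 0, 0
    stack = [[0], [1]]                        # the two children of the root
    while stack:
        prefix = stack.pop()
        computed += 1
        depth = len(prefix)
        if depth == n:                        # leaf: a complete review schedule
            c = cost(prefix)
            if c < best_cost:
                best_cost, best_assignment = c, tuple(prefix)
        elif bound(prefix) >= best_cost:      # pruning condition: subtree suboptimal
            pruned += 2 ** (n - depth + 1) - 2
        else:
            stack += [prefix + [g] for g in (0, 1)]
    total = 2 ** (n + 1) - 2                  # every node except the root
    assert computed + pruned == total
    return best_cost, best_assignment, 100 * pruned / total
```

With a trivial additive bound (partial sums of non-negative leaf costs), any subtree whose prefix already reaches the incumbent cost $\bar{C}$ is skipped, exactly as in the worked example.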
\section{Computational study} \label{sec:experimental}
In this section, we empirically evaluate the new methods, including an
assessment of the effects of branching randomisation and of the problem
parameters (costs). We conduct two sets of experiments.
In Section \ref{sec:scalability}, we analyse the scalability of the new
approaches by increasing the number of periods until no method can
consistently solve the problem within a 1-hour time limit. In
Section \ref{sec:type_analysis}, we fix the planning horizon to 10 and 20 periods and vary the cost parameters.
For the experiments, we use
four $(R,s,S)$ policy solvers:
\begin{itemize}
\item
\textbf{SDP}, the SDP technique described in Section \ref{sec:sdp_ss},
which we consider the current state of the art.
\item
\textbf{BnB}, the branch-and-bound solution introduced in Section
\ref{sec:method}.
\item
\textbf{BnB-Rand}, branch-and-bound with randomised branching.
\item
\textbf{BnB-Guided}, branch-and-bound with a guided tree search, Section \ref{sec:guided}.
\end{itemize}
We compare these in terms of computational time, pruning percentage
and average number of review periods (but not expected costs, because
the solutions are optimal in each case). All experiments are executed
on an Intel(R) Xeon E5620 processor (2.40GHz) with 32 GB RAM.
For the sake of reproducibility, we have made the code available\footnote{\url{https://github.com/andvise/RsS-EJOR}}.
We base our numerical studies on the set of instances originally
proposed by \cite{berry1972lot} and widely used in the literature
\cite{dural2016comparison,rossi2008global,xiang2018computing}. A Poisson variable represents the demand in each period.
\subsection{Scalability} \label{sec:scalability}
This experiment aims to assess the improvement provided by the branch-and-bound approach over what we consider the state of the art. Furthermore, we assess how the randomisation and the guided search affect the computational performance and the pruning percentage.
For the scalability analysis, we use randomly generated parameter
values and progressively increase the number of periods. We fix the
holding cost per unit at $h=1$, but the other cost parameters are
uniform random variables: ordering cost is in the range $K \in
[80,320]$, review cost is in the range $W \in [80,320]$ and penalty
cost per unit is in the range $b \in [4,16]$. Demands per period are
uniform random variables in the range $[30,70]$. We generate 100
different instances for each planning horizon in the range 4--20
periods.
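For reference, the instance generation just described can be sketched as follows (a Python illustration; sampling details in the released code may differ, e.g. whether costs are drawn as integers):

```python
import random

def generate_instance(n_periods, seed=None):
    """One random scalability instance: h fixed, K, W, b and demand means
    drawn uniformly from the ranges given in the text."""
    rng = random.Random(seed)
    return {
        "periods": n_periods,
        "h": 1.0,                                    # holding cost per unit
        "K": rng.uniform(80, 320),                   # ordering cost
        "W": rng.uniform(80, 320),                   # review cost
        "b": rng.uniform(4, 16),                     # penalty cost per unit
        "demand_means": [rng.uniform(30, 70) for _ in range(n_periods)],
    }

# 100 instances for each horizon length from 4 to 20 periods
testbed = [generate_instance(n, seed=100 * n + i)
           for n in range(4, 21) for i in range(100)]
```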
\begin{figure}
\centering
\begin{tikzpicture} [scale = 0.9]
\pgfplotsset{every axis legend/.append style={at={(1.05,0.5)}, anchor=west}}
\begin{semilogyaxis}[
xlabel={Number of periods},
ylabel={Computational time (s)}
]
\addplot coordinates{
(4,1.63) (5,5.13) (6,15.46) (7,41.31) (8,110.15) (9,281.19) (10,689.54) (11,1674.18)
};
\addplot coordinates{
(4,0.31) (5,0.78) (6,1.71) (7,3.33) (8,6.54) (9,12.05) (10,21.42) (11,37.23) (12,63.07) (13,106.04) (14,190.2) (15,333.01) (16,469.6) (17,758.17) (18,1214.61)
};
\addplot coordinates{
(4,0.26) (5,0.59) (6,1.16) (7,2.43) (8,4.57) (9,8.04) (10,14.62) (11,22.93) (12,38.27) (13,63.61) (14,112.31) (15,180.15) (16,252.56) (17,406.99) (18,680.07) (19,1124.43)
};
\addplot coordinates{
(4,0.33) (5,0.63) (6,1.29) (7,2.09) (8,3.61) (9,6.02) (10,10.15) (11,16.96) (12,28.43) (13,46.2) (14,79.44) (15,129.65) (16,192.41) (17,303.55) (18,491.09) (19,773.62) (20,1251.98)
};
\legend{SDP,BnB,BnB-Rand,BnB-Guided}
\end{semilogyaxis}
\end{tikzpicture}
\caption{Average computational time of the 100 instances over the number of periods. Time limit 1
hour.}
\label{fig:comp_time_log}
\end{figure}
Figure \ref{fig:comp_time_log} shows the average computational time over the 100 instances. The
y-axis is logarithmic to show the exponential behaviour of the
solutions. The new method is able to solve instances almost twice as
large in reasonable time. Though it still exhibits exponential
behaviour, its slope is considerably smaller than that of the SDP. The randomisation reduces the computational effort needed. The guided search requires the computation of an $(R,S)$ policy before the BnB search; for small instances, this added computational effort outweighs the improvement provided by the higher pruning percentage, but for medium and large instances the improvement is considerable.
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.9]
\pgfplotsset{every axis legend/.append style={at={(1.05,0.5)}, anchor=west}}
\begin{semilogyaxis}[xlabel={Number of periods}, ylabel={Computational time (s)}]
\addplot+[only marks, error bars/.cd, y dir=both, y explicit,
error bar style={line width=0pt},
error mark options={
rotate=90,
blue,
mark size=4pt,
line width=0.5pt
}] coordinates{
(4,1.63) +=(0,0.71) -=(0,0.65) (5,5.13) +=(0,1.96) -=(0,1.84) (6,15.46) +=(0,21.58) -=(0,6.26) (7,41.31) +=(0,12.33) -=(0,15.5) (8,110.15) +=(0,26.93) -=(0,29.71) (9,281.19) +=(0,60.91) -=(0,86.15) (10,689.54) +=(0,159.88) -=(0,179.92) (11,1674.18) +=(0,330.11) -=(0,446.69)
};
\addplot+[only marks, error bars/.cd, y dir=both, y explicit,
error bar style={line width=0pt},
error mark options={
rotate=90,
red,
mark size=4pt,
line width=0.5pt
}] coordinates{
(4,0.31) +=(0,0.2) -=(0,0.13) (5,0.78) +=(0,0.46) -=(0,0.29) (6,1.71) +=(0,1.57) -=(0,0.75) (7,3.33) +=(0,1.91) -=(0,1.38) (8,6.54) +=(0,3.84) -=(0,2.54) (9,12.05) +=(0,6.52) -=(0,4.07) (10,21.42) +=(0,12.42) -=(0,6.64) (11,37.23) +=(0,27.78) -=(0,11.55) (12,63.07) +=(0,52.09) -=(0,21.58) (13,106.04) +=(0,95.07) -=(0,39.94) (14,190.2) +=(0,204.21) -=(0,85.34) (15,333.01) +=(0,318.97) -=(0,163.15) (16,469.6) +=(0,478.1) -=(0,198.56) (17,758.17) +=(0,903.58) -=(0,360.8) (18,1214.61) +=(0,1733.3) -=(0,657.07)
};
\pgfplotsset{cycle list shift=1}
\addplot+[only marks, error bars/.cd, y dir=both, y explicit,
error bar style={line width=0pt},
error mark options={
rotate=90,
black,
mark size=4pt,
line width=0.5pt
}] coordinates{
(4,0.33) +=(0,0.27) -=(0,0.14) (5,0.63) +=(0,0.56) -=(0,0.31) (6,1.29) +=(0,1.93) -=(0,0.72) (7,2.09) +=(0,1.57) -=(0,1.07) (8,3.61) +=(0,3.44) -=(0,1.68) (9,6.02) +=(0,6.43) -=(0,3.4) (10,10.15) +=(0,14.45) -=(0,6.4) (11,16.96) +=(0,21.98) -=(0,9.99) (12,28.43) +=(0,30.37) -=(0,16.55) (13,46.2) +=(0,59.5) -=(0,28.81) (14,79.44) +=(0,149.9) -=(0,54.95) (15,129.65) +=(0,324.96) -=(0,97.18) (16,192.41) +=(0,287.28) -=(0,138.8) (17,303.55) +=(0,512.66) -=(0,210.13) (18,491.09) +=(0,1179.31) -=(0,368.68) (19,773.62) +=(0,1801.87) -=(0,618.56) (20,1251.98) +=(0,2212.14) -=(0,1023.21)
};
\legend{SDP,BnB,BnB-Guided}
\end{semilogyaxis}
\end{tikzpicture}
\caption{Range of computational time over the number of periods. Time limit 1 hour.}
\label{fig:comp_time_range}
\end{figure}
Figure \ref{fig:comp_time_range} shows the range of the minimum and maximum computational times for increasing planning horizon lengths; we omit BnB-Rand to improve the readability of the plot. The SDP solution has low variability in the required computational time. BnB-Guided presents the highest variability among the different solutions, because in some instances the pre-computed replenishment plan is already the optimal one, leading to strong pruning of the tree that reduces the computational time considerably.
Figure \ref{fig:prun_percentage} shows the pruning percentage (Section \ref{sec:example_pruning}) of the branch-and-bound approaches. The
pruning becomes more effective for longer planning horizons. A high value means that the approach finds a good policy earlier in the search process; the cost of this policy provides a tighter bound for the pruning condition (Equation \ref{eq:pruning_condition_updated}). We can see that BnB-Guided provides considerable improvement.
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.9]
\pgfplotsset{every axis legend/.append style={at={(1.02,0.5)}, anchor=west}}
\begin{axis}[xlabel={Number of periods}, ylabel={Pruning percentage}]
\pgfplotsset{cycle list shift=1}
\addplot coordinates{
(4,45.48) (5,52.35) (6,62.06) (7,69.62) (8,75.54) (9,80.93) (10,85.19) (11,88.67) (12,91.45) (13,93.49) (14,95.06) (15,96.28) (16,97.17) (17,97.87)
};
\addplot coordinates{
(4,56.77) (5,66.48) (6,76.76) (7,79.14) (8,83.82) (9,87.82) (10,90.24) (11,93.28) (12,94.96) (13,96.2) (14,97.15) (15,98.02) (16,98.5) (17,98.87) (18,99.11) (19,99.3)
};
\addplot coordinates{
(4,67.35) (5,76.38) (6,81.39) (7,86.54) (8,90.07) (9,92.62) (10,94.36) (11,95.66) (12,96.65) (13,97.51) (14,98.16) (15,98.63) (16,98.94) (17,99.22) (18,99.4) (19,99.56)
};
\legend{BnB,BnB-Rand,BnB-Guided}
\end{axis}
\end{tikzpicture}
\caption{Average percentage of nodes pruned over the 100 instances in relation to the number of periods.}
\label{fig:prun_percentage}
\end{figure}
\subsection{Instance type analysis} \label{sec:type_analysis}
In the parameter value analysis, we aim to understand how the cost parameters affect the computational effort required to find the policy and the pruning percentage.
We use a testbed of 324 instances.
To generate the average demand values we use seasonal data with
different trends:
\begin{itemize}
\item \textbf{(STA)} stationary case: $\tilde{d}_t = 50 $
\item \textbf{(INC)} positive trend case: $\tilde{d}_t = \left \lceil 100t/(n-1) \right \rceil $
\item \textbf{(DEC)} negative trend case: $\tilde{d}_t = \left \lceil 100 - 100t/(n-1) \right \rceil $
\item \textbf{(LCY1)} life-cycle trend 1 case: this pattern combines the first three trends. The first third has a positive trend up to an average demand of 75, the central third is stationary, and the last third has a negative trend. If the number of periods is not a multiple of 3, the central part is extended.
\item \textbf{(LCY2)} life-cycle trend 2 case: this pattern combines the INC and DEC trends, with a positive trend for the first half of the planning horizon and a negative trend for the second half.
\item \textbf{(RAND)} erratic: $\tilde{d}_t = \left \lceil U(1,100)\right \rceil $
\end{itemize}
All the patterns have an average demand of 50 per period. For the
cost parameters we use all possible combinations of ordering cost
values $K \in \{80,160,320\}$, review costs $W \in \{ 80, 160, 320 \}$
and penalty costs $b \in \{ 4, 8, 16 \}$, with holding cost fixed at
$h = 1$. We use all combinations of cost parameters and the six
demand patterns presented above for a full factorial experiment. We
analyse the results for the 10-period and 20-period instances.
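The six demand patterns can be generated as follows (a Python sketch assuming periods indexed $t = 0,\dots,n-1$; the exact piece boundaries of the life-cycle patterns are one plausible reading of the description above):

```python
import math
import random

def demand_pattern(kind, n, rng=None):
    """Average demand per period for the six patterns described in the text."""
    if kind == "STA":                 # stationary
        return [50] * n
    if kind == "INC":                 # positive trend
        return [math.ceil(100 * t / (n - 1)) for t in range(n)]
    if kind == "DEC":                 # negative trend
        return [math.ceil(100 - 100 * t / (n - 1)) for t in range(n)]
    if kind == "LCY1":                # rise to 75, plateau, fall
        third = n // 3
        up = [math.ceil(75 * t / max(third - 1, 1)) for t in range(third)]
        return up + [75] * (n - 2 * third) + up[::-1]
    if kind == "LCY2":                # rise for half the horizon, then fall
        half = n // 2
        up = [math.ceil(100 * t / (half - 1)) for t in range(half)]
        return up + ([100] if n % 2 else []) + up[::-1]
    if kind == "RAND":                # erratic
        rng = rng or random.Random(0)
        return [math.ceil(rng.uniform(1, 100)) for _ in range(n)]
    raise ValueError(f"unknown pattern: {kind}")
```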
Since the baseline (Equation \ref{eq:baseline} in Section
\ref{sec:sdp_ss}) is too computationally expensive (it takes approximately 45 days to solve a 20-period instance), we replace it with an estimate for the 20-period instances. The estimate is computed by solving the SDP for 100 different $\gamma$ assignments and extrapolating the average over all possible assignments.
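The extrapolation can be expressed as a one-line helper (an illustrative sketch; the function name and the convention for counting assignments are ours, not the paper's):

```python
import statistics

def estimated_baseline_time(sample_times, total_assignments):
    """Extrapolate the full-enumeration baseline time from a random sample.

    sample_times:      SDP solve times for randomly drawn gamma assignments
    total_assignments: number of possible review schedules the baseline
                       would have to enumerate
    """
    return statistics.mean(sample_times) * total_assignments
```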
Tables \ref{tab:10_periods} and \ref{tab:20_periods} give an overview
of the computational time, the pruning percentage and the average
number of reviews of the methods for the 10- and 20-period
experiments. They show that SDP is not strongly affected by the cost
parameters and that the main difference is caused by the demand
patterns, since the maximum average demand per period differs between
them: the stationary case is the fastest to compute, as its maximum is
50, and the first life cycle is second-fastest, with a maximum of 75;
all the other patterns have a maximum of 100, with the erratic pattern
being the slowest.
The pruning percentage gives an indication of the efficacy of the
branch-and-bound. Our algorithms perform particularly well on high
review costs. For instance, with 20 periods and $W = 320$ the pruning
percentage reached an impressive average of $99.83\%$ for the BnB-Guided, solving one instance in less than 13 minutes on average, while the
baseline is expected to take more than six weeks. For the BnB,
the percentage is $98.52\%$ on average, so it visits more than twice as many nodes
as the guided version. The randomised search (not shown in the table for the sake of readability) reaches an average of $98.92\%$. We note that the penalty cost
also affects performance: a higher penalty cost reduces pruning.
The average number of review moments of the optimal policies decreases
as the ordering and review costs increase. Also, a higher penalty cost
leads to more frequent reviews, which reduces the probability of
excess demand and mitigates the uncertainty of the inventory level.
We observe that the decreasing pattern requires fewer review periods
than the others, due to its decreasing tail that reduces the number of
orders needed.
Our best proposed method outperforms the baseline by factors of roughly 50 and
1300 on 10- and 20-period instances, respectively.
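These factors can be read off the Average rows of Tables~\ref{tab:10_periods} and \ref{tab:20_periods} (times in minutes):

```python
# Speedup of BnB-Guided over the (estimated) baseline, from the table averages.
speedup_10 = 14.81 / 0.30        # 10-period instances: roughly 49x
speedup_20 = 65635.62 / 49.75    # 20-period instances: roughly 1319x
```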
\begin{table*}
\centering
\resizebox{\hsize}{!}{%
\def\arraystretch{1.5}%
\renewcommand{\tabcolsep}{1.5mm}
\begin{tabular}{|c c||c|c|c||c|c||c|}
\hline
& & \multicolumn{3}{c||}{Computational time} & \multicolumn{2}{c||}{Pruning \%} & \\ \hline
& & Base & BnB & BnB-Guided & BnB & BnB-Guided & Nr. reviews \\ \hline
K values & 80 & 14.62 & 0.55 & 0.3 & 82.15(3.37) & 91.51(3.94) & 3.0\\
& 160 & 14.83 & 0.56 & 0.28 & 81.79(3.38) & 92.06(4.07) & 2.56\\
& 320 & 14.97 & 0.61 & 0.32 & 80.31(4.74) & 91.06(5.37) & 2.06\\
\hline
W values & 80 & 14.83 & 0.68 & 0.46 & 78.36(4.37) & 86.94(4.08) & 3.0\\
& 160 & 14.79 & 0.56 & 0.27 & 81.94(2.84) & 92.48(2.48) & 2.56\\
& 320 & 14.8 & 0.48 & 0.17 & 83.96(1.99) & 95.2(1.76) & 2.06\\
\hline
b values & 4 & 14.89 & 0.57 & 0.28 & 81.48(3.32) & 92.06(4.21) & 2.39\\
& 8 & 14.81 & 0.57 & 0.29 & 81.69(3.72) & 91.89(4.27) & 2.56\\
& 16 & 14.71 & 0.58 & 0.33 & 81.09(4.7) & 90.68(4.93) & 2.67\\
\hline
Pattern & STA & 10.76 & 0.37 & 0.21 & 82.87(2.87) & 91.58(4.21) & 2.63\\
& INC & 17.3 & 0.69 & 0.46 & 81.27(3.27) & 88.67(3.96) & 2.7\\
& DEC & 17.39 & 0.66 & 0.27 & 81.5(2.63) & 93.7(3.6) & 2.33\\
& LCY1 & 15.03 & 0.65 & 0.32 & 78.61(3.49) & 90.72(4.52) & 2.59\\
& LCY2 & 16.56 & 0.71 & 0.36 & 78.99(3.25) & 90.57(5.01) & 2.48\\
& RAND & 11.79 & 0.35 & 0.17 & 85.27(3.86) & 94.01(3.19) & 2.48\\
\hline
Average & & 14.81 & 0.57 & 0.3 & 81.42(3.96) & 91.54(4.52) & 2.54\\
\hline
\end{tabular}
}
\caption{Computational times (in minutes), pruning percentages and number of reviews for 10-period instances. The standard deviation of the pruning percentage is given in brackets.}
\label{tab:10_periods}
\end{table*}
\begin{table*}
\centering
\resizebox{\hsize}{!}{%
\def\arraystretch{1.5}%
\renewcommand{\tabcolsep}{1.5mm}
\begin{tabular}{|c c||c|c|c||c|c||c|}
\hline
& & \multicolumn{3}{c||}{Computational time} & \multicolumn{2}{c||}{Pruning \%} & \\ \hline
& & Base & BnB & BnB-Guided & BnB & BnB-Guided & Nr. reviews \\ \hline
K values & 80 & 65366.67 & 105.12 & 47.57 & 98.56(0.76) & 99.34(0.52) & 6.04\\
& 160 & 65470.02 & 109.35 & 49.78 & 98.53(0.92) & 99.33(0.59) & 5.17\\
& 320 & 66070.17 & 115.98 & 51.9 & 98.47(1.04) & 99.32(0.68) & 4.13\\
\hline
W values & 80 & 66737.03 & 181.66 & 100.37 & 97.61(0.9) & 98.67(0.56) & 6.04\\
& 160 & 64772.93 & 96.12 & 36.09 & 98.68(0.45) & 99.5(0.23) & 5.17\\
& 320 & 65396.9 & 52.67 & 12.8 & 99.27(0.23) & 99.83(0.1) & 4.13\\
\hline
b values & 4 & 65851.88 & 96.59 & 41.92 & 98.7(0.76) & 99.45(0.53) & 4.78\\
& 8 & 65847.45 & 108.37 & 48.75 & 98.56(0.83) & 99.35(0.55) & 5.2\\
& 16 & 65207.52 & 125.49 & 58.59 & 98.3(1.07) & 99.21(0.69) & 5.35\\
\hline
Pattern & STA & 43447.11 & 73.24 & 38.54 & 98.51(0.84) & 99.23(0.66) & 5.3\\
& INC & 72449.66 & 110.73 & 69.98 & 98.69(0.62) & 99.18(0.53) & 5.41\\
& DEC & 72706.98 & 141.49 & 48.06 & 98.29(1.05) & 99.43(0.57) & 4.7\\
& LCY1 & 62607.87 & 139.2 & 62.58 & 98.05(0.96) & 99.13(0.72) & 5.19\\
& LCY2 & 69243.25 & 141.35 & 56.22 & 98.22(0.82) & 99.3(0.57) & 5.04\\
& RAND & 73358.85 & 54.88 & 23.13 & 99.36(0.31) & 99.74(0.23) & 5.04\\
\hline
Average & & 65635.62 & 110.15 & 49.75 & 98.52(0.91) & 99.33(0.6) & 5.11\\
\hline
\end{tabular}
}
\caption{Computational times (in minutes), pruning percentages and number of reviews for 20-period instances. The standard deviation of the pruning percentage is given in brackets.}
\label{tab:20_periods}
\end{table*}
\section{Conclusion and Future Work} \label{sec:conclusion}
In this paper, we considered a single-item, single-stocking-location
inventory lot-sizing problem with non-stationary stochastic demand,
fixed and linear ordering costs, review cost, holding cost and penalty cost. We
presented the first algorithm to compute optimal $(R,s,S)$ policy
parameters. This policy has high practical value, but the
computation of optimal or near-optimal parameters has long been considered
extremely difficult. Our proposed technique is a hybrid of
branch-and-bound and stochastic dynamic programming, enhanced by ad
hoc bounds computed with dynamic programming and by a randomised
depth-first exploration of the search tree.
In an extensive numerical study, we first investigated the scalability
of the technique for increasing time horizons, analysing both the
computational time and the efficacy of the bounding technique. We
then tested the performance of the method for different cost
parameters. Our technique performs best with low penalty costs and high
review costs. On 20-period instances, it outperforms a baseline method
by three orders of magnitude.
This technique opens up multiple research directions on the
determination of $(R,s,S)$ policy parameters. It can lead to new
optimal solutions for the same problem, and it can be improved with
tighter bounds. It is also useful for computing optimality gaps of
new heuristics.
In future work, the approach presented here can be extended to overcome some limitations of the problem setting. Considering multiple items with joint shipping, or modelling a more complex supply chain with multiple echelons, are generalisations of this problem that would increase the applicability of the $(R,s,S)$ policy.
\subsubsection*{Acknowledgments}
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 12/RC/2289-P2, which is co-funded under the European Regional Development Fund.
\section{\label{sec1} Introduction}
One of the most challenging issues in astrophysics and gravitation theory is the classical problem of gravitational collapse. Equilibrium gravitating structures like planets, stars and galaxies are all considered to be formed through the process of gravitational collapse. The physics of the gravitational collapse of stars, in particular, has been studied extensively in astrophysics.
A star maintains its equilibrium state as long as it produces outward thermal pressure through the burning of its nuclear fuel, mainly hydrogen fusing into helium and later into heavier elements, to counterbalance its inward gravitational pull. When the limited nuclear fuel supply is completely exhausted, the star cannot halt its own gravitational pull and ultimately undergoes a contraction. This physical process is described as gravitational collapse, at the end of which the star settles down to a new state of equilibrium. In this situation, the effects of general relativity become the key factor in determining its final state. Hence, it is necessary to study dynamical gravitational collapse within the framework of Einstein's theory of gravity. In the 1960s, quasi-stellar radio sources and very-high-energy phenomena such as gamma-ray bursts, which release enormous amounts of energy in the form of gravitational radiation, were explained as the result of the gravitational collapse of massive stars\cite{Hoyle}.
Depending upon the mass of a collapsing star, the final equilibrium state of gravitational collapse under its own gravity at the end of its life cycle can differ in nature. The star will end as a white dwarf, in which gravity is opposed by the electron degeneracy pressure, if the mass of the collapsing core is less than the Chandrasekhar limit of $1.4$ solar masses\cite{Chandra+1935}. It will end as a neutron star, for which the limiting mass is $\approx 0.7$ solar mass as calculated by Oppenheimer and Volkoff\cite{Oppen+Volkoff+1939}; in the latter case, gravity is opposed by the neutron degeneracy pressure if the core has a mass greater than the Chandrasekhar limit and less than about $3-5$ times the mass of the sun. For a sufficiently large mass (of the order of ten solar masses or more), the thermal energy is radiated away much more quickly and no known form of internal pressure can balance the inward pull of gravity. Without settling into any equilibrium state, such a catastrophic collapse ends in the formation of a space-time singularity, which may be covered by an event horizon (a black hole) or be a naked singularity, depending upon the initial conditions of the collapsing configuration. According to the singularity theorems, a sufficiently massive collapsing object will undergo continual gravitational collapse, resulting in the formation of a gravitational singularity. The Event Horizon Telescope provides excellent observational evidence in support of astrophysical black holes.
In relativistic astrophysics, one of the central questions is the final fate of massive stars. By choosing appropriate geometries for the interior and exterior regions, investigators are trying to understand the underlying physics. The initial steps in the study of gravitational collapse were taken by Oppenheimer and Snyder\cite{Oppen+Snyder+1939} and by Datt\cite{Datt+1938} (OSD). In the OSD model, under the assumptions of homogeneity, sphericity and a dust equation of state, they showed that the end state of gravitational collapse is the formation of a black hole. The cosmic censorship conjecture of Penrose\cite{Penrose} asserts that the gravitational collapse of any realistic matter distribution ends with the formation of a black hole, as in the OSD model; no theoretical or mathematical proof of this conjecture is available, though. By relaxing the homogeneity condition, Datt, Tolman and Bondi found a solution describing the collapse of a dust sphere with non-uniform initial density, and showed that, depending on the initial conditions, the collapse ends with the formation of one of two kinds of singularities: a black hole or a naked singularity. The construction of more realistic gravitational collapse models became possible after Vaidya\cite{Vaidya+1951} discovered a solution describing the exterior gravitational field of a stellar body with outgoing radiation, and Santos\cite{Santos+1985} developed the junction conditions at the boundary joining the interior space-time of the collapsing object to the exterior Vaidya metric. Self-similar gravitational collapse of a perfect fluid with a linear equation of state was considered by Ori and Piran\cite{Ori+Piran}; the collapse leads to a curvature singularity which can be either naked or hidden. Such a scenario was also analyzed by Joshi and Dwivedi\cite{Joshi+Dwi}. Considering the collapsing fluid to be imperfect in nature, collapse properties have been analyzed by Szekeres and Iyer\cite{S+I}.
Recently, the collapse of a fluid with tangential pressure has been studied\cite{Magli,Singh+W}. By introducing a cosmological constant, the effects of positive as well as negative cosmological constants on gravitational collapse have been studied by Markovic and Shapiro\cite{Markovic+S} and by Lake\cite{Lake}. Quasi-spherical collapse has been studied by Debnath {\em et al}\cite{Debnath}. Sharif and Ahmad\cite{Sharif+A1,Sharif+A2} have studied the gravitational collapse of a perfect fluid in higher-dimensional space-time. The gravitational collapse of a scalar field has been studied by many investigators\cite{Christodoulou,Giambo+2005,Bhattacharjee+2010}. In the framework of alternative theories of gravity such as $f(R)$ gravity, it has been reported that both a black hole and a naked singularity are possible as the final outcome of a gravitationally collapsing star\cite{Ziaie+2010,Bedjaoui+2010,Ohashi+2011}.
It is noteworthy that gravitational collapse is a highly dissipative process. To understand the dynamics of dissipation during the collapse, the impacts of factors like bulk viscosity, anisotropic pressure, shear and electromagnetic fields have been introduced. Realistic stellar models have been developed in which the impacts of many of these factors have been taken up and studied\cite{Bonnor+et al+1989,Govender+et+al+2018,Oliveira+et al+1988,Oliveira+Kolassis+Santos+1988,Herrera+Ospino+Prisco+2008,Herrera+Prisco+Pastora+Santos+1998,Herrera+Santos+1997,Herrera+Prisco+Martin+Ospino+Santos+Troconis+2004,Herrera+Barrreto+2011,Herrera+et al+1997,Herrera+et al+2002,Herrera+et al+2011,Prisco+et al+2007,Prisco+et al+1997,Martinez+1996, Pinheiro+Chan,Chan+2000,Chan+2003,Chan+1994,Tikekar+Patel+1992,Maharaj+Govender+2000,Mena+et al+2004,Thiru+Mharaj+2009,Govinder+et al+1998,Dirk+et al+2000,Ivanov,Sarwe+Tikekar+2010,Sharma+Tikekar+2012,Sharma2+Tikekar+2012,Sharma+et al+2015}. Of particular interest are the effects of pressure anisotropy, the difference between the pressures generated in a system in the radial and tangential directions. In relativistic astrophysics, the consideration of anisotropic stress in a collapsing system originates from various factors. Ruderman\cite{Ruderman+1972} and Canuto\cite{Canuto+1974} have shown that anisotropy may develop in the high-density regime of the compact stellar interior. In astrophysical objects, the existence of a solid core or a type 3A superfluid\cite{Kippen+Weigert+1990}, a strong magnetic field\cite{Weber+1999}, slow rotation\cite{Herrera+Santos+1995}, phase transitions\cite{Sokolov+1980}, pion condensation\cite{Sawyer+1972}, a strong electromagnetic field\cite{Usov+2004}, etc., can generate anisotropy. It has also been shown that a mixture of a perfect and a null fluid effectively provides an anisotropic fluid model\cite{Letelier+1980}. Similarly, it has been pointed out by Ivanov\cite{Ivanov+2010} that shear, charge, etc., in self-bound systems can be absorbed if one considers the system to be anisotropic in general. The shearing motion of the fluid can be considered one of the reasons for the presence of anisotropy in a self-gravitating body\cite{Chan+1997, Chan+1998, Chan+1993}. Bowers and Liang\cite{Bowers+Liang+1974} have extensively explored the underlying causes of pressure anisotropy in the stellar interior and analyzed the effects of anisotropic stress on physical behaviour such as the radial pressure, critical mass and maximum surface redshift of a star undergoing gravitational collapse. Dev and Gleiser\cite{Dev+Gleiser+2002} have shown that the critical mass and surface redshift increase with anisotropy. Herrera and co-workers\cite{Herrera+et al+2008} have analyzed the impact of anisotropic stresses on the gravitational collapse of fluid distributions. The analyses of Dev and Gleiser\cite{Dev+Gleiser+2003} and Chan\cite{Chan+1993} show that anisotropy provides a more stable configuration. Sharma and Das\cite{Sharma+Das+2013} have developed a model of a self-gravitating, spherically symmetric relativistic star which begins to collapse from an initially static configuration by dissipating energy in the form of radial heat flow; the model provides a simple technique to examine the impact of pressure anisotropy on collapse. Reddy {\em et al}\cite{Reddy+et al+2015} have analyzed the impacts of anisotropic pressure under shear on the collapse of a star that starts collapsing from the initial static configuration described by the Bowers and Liang\cite{Bowers+Liang+1974} model. Govender {\it et al}\cite{Goven+et al+2016} have constructed a model of a collapsing star by incorporating dynamical effects into the Pant and Sah model.
The aim of the current investigation is to analyze the effects of anisotropy on the collapse of stars over a large range of masses. Earlier, Das {\it et al}\cite{Das+et al+2016} developed a model to study the effects of pressure anisotropy on the evolution of a collapsing star which begins its collapse from an initial static configuration described by Paul and Deb\cite{pd+et al+2014}. Here, we extend our previous work to analyze the impact of anisotropy on the evolution of massive collapsing stars and on the temperature evolution during the collapse. It will be interesting to analyze how the different model parameters affect the collapse of massive as well as lower-mass stars. In our study, we investigate the problem of gravitational collapse for a spherically symmetric collapsing anisotropic star in the presence of dissipation.
The paper is organized as follows. We first outline the model developed previously by Das {\it et al}\cite{Das+et al+2016} for an anisotropic matter distribution collapsing under the influence of its own gravity and dissipating energy in the form of radial heat flux. In Sec.~\ref{sec2}, the Einstein field equations are laid down. The exterior space-time of the collapsing matter is appropriately described by the Vaidya metric, and in Sec.~\ref{sec3} we outline the main results of the junction conditions joining smoothly the interior ($r < r_{\Sigma}$) and the exterior ($r > r_{\Sigma}$) space-times across the boundary surface $\Sigma$ separating the two regions. Making use of the junction conditions and a static seed solution, we develop the collapsing model in Sec.~\ref{sec4}. In Sec.~\ref{sec5}, we analyze the effects of pressure anisotropy. We discuss the thermal behaviour of the collapsing star in Sec.~\ref{sec6}. Finally, we conclude by discussing our results in Sec.~\ref{sec7}.
\section{\label{sec2}Einstein field equations}
We write the interior spacetime of a gravitationally collapsing star in isotropic coordinates as
\begin{equation}
ds_{-}^2 = -A^2(r,t)dt^2 + B^2(r,t)\left[dr^2+r^2(d\theta^2 + \sin^2\theta d\phi^2)\right],\label{intm1}
\end{equation}
where $A(r,t)$ and $B(r,t)$ are yet to be specified. We assume the matter composition of the collapsing star to be an imperfect fluid with anisotropic stresses undergoing dissipation in the form of radial heat flow. Accordingly, energy-momentum tensor of the collapsing fluid is written in the form
\begin{equation}
T_{ij} = (\rho + p_t)u_i u_j + p_t g_{ij} + (p_r-p_t)X_i X_j + q_i u_j + q_j u_i,\label{emt1}
\end{equation}
where $\rho$ is the energy density of the fluid, $p_r$ is the radial pressure, $p_t$ is the tangential pressure, $q^i=q\;\delta_1^i$ is the radial heat flux vector, $X^i$ is a unit $4$-vector along the radial direction and $u^i$ is the $4$-velocity of the fluid. These quantities obey the relations $u^i q_i = 0$, $X_i X^i = 1$ and $X_i u^i = 0$.
The Einstein field equations for the line element (\ref{intm1}) are obtained as
\begin{equation}
\kappa^2 \rho =-\frac{1}{B^2}\left(\frac{2B''}{B}+\frac{4B'}{rB}-\frac{B'^2}{B^2}\right)+\frac{3\dot{B}^2}{ A^2 B^2}, \label{Eq1}
\end{equation}
\[
\kappa^2 (p_r-\zeta \Theta) =\frac{1}{B^2}\left(\frac{B'^2}{B^2}+\frac{2B'}{rB}+\frac{2A'B'}{AB}+\frac{2A'}{rA}\right)
\]
\begin{equation}
\; \; \; \; \; \; \; \; \; \; \; \; \; \; +\frac{1}{A^2}\left(-\frac{2\ddot{B}}{B}-\frac{\dot{B}^2}{B^2}+\frac{2\dot{A}\dot{B}}{AB}\right), \label{Eq2}
\end{equation}
\[
\kappa^2 (p_t-\zeta \Theta) = -\frac{2\ddot{B}}{A^2 B}+\frac{2\dot{A}\dot{B}}{A^3 B}-\frac{\dot{B}^2}{A^2 B^2}
+\frac{A'}{r A B^2}
\]
\begin{equation}
\hspace{0.8 cm } \; +\frac{B'}{r B^3}+\frac{A''}{A B^2}-\frac{B'^2}{B^4}+\frac{B''}{B^3}, \label{Eq3}
\end{equation}
\begin{equation}
\kappa^2 q = -\frac{2}{A B^2}\left(-\frac{\dot{B}'}{B}+\frac{\dot{B}B'}{B^2}+\frac{A'\dot{B}}{AB}\right), \label{Eq4}
\end{equation}
where we set $c=1$ and $8\pi G = \kappa^2$. Note that a prime ($'$) and an overhead dot denote differentiation with respect to $r$ and $t$, respectively.
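As a quick symbolic sanity check (a sympy sketch, not part of the original derivation), one can verify that the difference of the right-hand sides of Eqs.~(\ref{Eq3}) and (\ref{Eq2}) contains only spatial derivatives: the dissipative term $\zeta\Theta$ and all time derivatives cancel, leaving the measure of anisotropy $\kappa^2(p_t - p_r)$.

```python
# Verify that Eq.(Eq3) minus Eq.(Eq2) reduces to a purely spatial expression
# in the metric functions A(r,t) and B(r,t) of the isotropic line element.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
A = sp.Function('A')(r, t)
B = sp.Function('B')(r, t)
dA, dB = sp.diff(A, r), sp.diff(B, r)          # primes
Ad, Bd = sp.diff(A, t), sp.diff(B, t)          # overhead dots

# Right-hand side of the radial-pressure equation, kappa^2 (p_r - zeta*Theta)
pr = (1/B**2)*(dB**2/B**2 + 2*dB/(r*B) + 2*dA*dB/(A*B) + 2*dA/(r*A)) \
   + (1/A**2)*(-2*sp.diff(B, t, 2)/B - Bd**2/B**2 + 2*Ad*Bd/(A*B))

# Right-hand side of the tangential-pressure equation, kappa^2 (p_t - zeta*Theta)
pt = -2*sp.diff(B, t, 2)/(A**2*B) + 2*Ad*Bd/(A**3*B) - Bd**2/(A**2*B**2) \
   + dA/(r*A*B**2) + dB/(r*B**3) + sp.diff(A, r, 2)/(A*B**2) \
   - dB**2/B**4 + sp.diff(B, r, 2)/B**3

# Expected anisotropy: spatial derivatives only
delta = (1/B**2)*(sp.diff(B, r, 2)/B + sp.diff(A, r, 2)/A - 2*dB**2/B**2
                  - dB/(r*B) - 2*dA*dB/(A*B) - dA/(r*A))

assert sp.simplify(pt - pr - delta) == 0
```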
\section{\label{sec3}Exterior space-time and junction conditions}
The space-time exterior to a radiating star is appropriately described by the Vaidya\cite{Vaidya+1951} metric
\begin{equation}
ds_{+}^2 = -\left(1-\frac{2m(v)}{{\sf r}}\right)dv^2 - 2dv d{\sf r} + {\sf r}^2[d \theta^2 + \sin^2 \theta d\phi^2], \label{Vm}
\end{equation}
where the mass function $m(v)$ is a function of retarded time $v$. The junction conditions for smooth joining of the interior (\ref{intm1}) and exterior (\ref{Vm}) spacetimes are obtained by assuming a time-like $3$-surface $\Sigma$ that separates the interior and the exterior manifolds\cite{Santos+1985}. Continuity of metric functions ($(ds_{-}^2)_{\Sigma} = (ds_{+}^2)_{\Sigma} = ds_{\Sigma}^2$) and extrinsic curvatures ($K_{ij}^{-} = K_{ij}^{+}$) across the surface $\Sigma$, then yield the following matching conditions for the interior ($r \leq r_\Sigma$) and for the exterior ($ r \geq r_\Sigma$) space-times at the boundary:
\begin{equation}
m(v) = \left(\frac{r^3 B \dot{B}^2}{2 A^2}-r^2 B' -\frac{r^3 B'^2}{2 B}\right)_{\Sigma}, \label{mm}
\end{equation}
\begin{equation}
(p_r)_\Sigma =(q B)_\Sigma.\label{eqj}
\end{equation}
Consequently, using Eq.~(\ref{mm}), one can determine the mass of the evolving star at any instant $t$ within a radius $r < r_{\Sigma}$ as
\begin{equation}
m(r,t) \stackrel{\Sigma}{=} \left(\frac{r^3 B \dot{B}^2}{2 A^2}-r^2 B' -\frac{r^3 B'^2}{2 B}\right).\label{inmass}
\end{equation}
In the next section, we provide a method to solve the system of equations, which will help us study the behaviour of the collapsing star. The technique was provided earlier by Das {\it et al.}\cite{Das+et al+2016}.
\section{\label{sec4} A Toy model}
By combining Eqs.~(\ref{Eq2}) and (\ref{Eq3}), we obtain the anisotropy which is given by
\begin{equation}
\Delta(r,t)=\frac{1}{B^2}\left[\frac{B''}{B}+\frac{A''}{A}-\frac{2 B'^2}{B^2}-\frac{B'}{r B}-\frac{2 A' B'}{A B}-\frac{A'}{r A}\right],\label{ani}
\end{equation}
where $\Delta(r,t) = \kappa^2 (p_t-p_r)$ is the measure of the anisotropy of the collapsing matter. Now, to make eq.~(\ref{ani}) tractable, we assume that the metric potentials $A(r,t)$, $B(r,t)$ are separable in their variables and accordingly we write
\begin{eqnarray}
A(r,t) &=& A_0(r), \label{metA}\\
B(r,t) &=& B_0(r) h(t). \label{metB}
\end{eqnarray}
We further define
\begin{equation}
\Delta(r,t) = \frac{\Delta_s(r)}{h^2(t)},\label{ani2}
\end{equation}
where $\Delta_s$ is the anisotropy for a static configuration of a relativistic star.
Using eq.~(\ref{ani}), $\Delta_s$ is then given by
\begin{equation}
\Delta_s(r) = \frac{1}{B_0^2}\left[\frac{B_0''}{B_0}+\frac{A_0''}{A_0}-\frac{2 B_0'^2}{B_0^2}-\frac{2 A_0' B_0'}{A_0 B_0}-\frac{A_0'}{r A_0}-\frac{B_0'}{B_0 r}\right].\label{ani3}
\end{equation}
Therefore, a physically reasonable static anisotropic stellar solution ($A_0(r), B_0(r)$) satisfying Eq.~(\ref{ani3}) is useful to study stellar collapse. The Einstein field eqs. (\ref{Eq1})-(\ref{Eq4}) can be written as
\begin{eqnarray}
\rho &=& \frac{\rho_0}{h^2}+\frac{3 \dot{h}^2}{\kappa^2 h^2 A_0^2},\label{fEq1}\\
p_r &=& \frac{p_{r0}}{h^2}-\frac{1}{\kappa^2 A^2_0}\left(\frac{2\ddot{h}}{h}+\frac{\dot{h}^2}{h^2}\right),\label{fEq2}\\
p_t &=& \frac{p_{t0}}{h^2}-\frac{1}{\kappa^2 A^2_0}\left(\frac{2\ddot{h}}{h}+\frac{\dot{h}^2}{h^2}\right),\label{fEq3}\\
q &=& -\frac{2A'_0\dot{h}}{\kappa^2 A^2_0 B^2_0 h^3},\label{fEq4}
\end{eqnarray}
where,
\begin{eqnarray}
\rho_0 &=& -\frac{1}{\kappa^2 B_0^2}\left[\frac{2B_0''}{B_0}+\frac{4B_0'}{rB_0}-\frac{{B_0'}^2}{B_0^2}\right],\label{sden}\\
p_{r0} &=& \frac{1}{\kappa^2 B_0^2}\left[\frac{B_0'^2}{B_0^2}+\frac{2B_0'}{rB_0}+\frac{2A'_0 B'_0}{A_0 B_0}+\frac{2A'_0}{rA_0}\right],\label{spr}\\
p_{t0} &=& \frac{1}{\kappa^2 B^2_0} \left[\frac{B_0''}{B_0}+\frac{B_0'}{r B_0}+\frac{A_0''}{A_0}+\frac{A_0'}{r A_0}-\frac{B_0'^2}{B_0^2}\right].\label{spt}
\end{eqnarray}
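As a consistency check on the separable reduction above, the ansatz $A = A_0(r)$, $B = B_0(r)\,h(t)$ can be substituted into the density equation (\ref{Eq1}) symbolically: the static part scales as $1/h^2$ and the time derivative contributes a kinetic term $3\dot{h}^2/(A_0^2 h^2)$. A small sympy sketch (the symbol names are ours):

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
A0 = sp.Function('A0')(r)
B0 = sp.Function('B0')(r)
h = sp.Function('h')(t)
B = B0 * h   # separable ansatz; A = A0(r) is time-independent

# kappa^2 * rho from Eq. (Eq1) of the text
k2rho = (-(1/B**2)*(2*sp.diff(B, r, 2)/B + 4*sp.diff(B, r)/(r*B)
         - sp.diff(B, r)**2/B**2)
         + 3*sp.diff(B, t)**2/(A0**2*B**2))

# expected form: static part / h^2 + kinetic term
k2rho0 = -(1/B0**2)*(2*sp.diff(B0, r, 2)/B0 + 4*sp.diff(B0, r)/(r*B0)
          - sp.diff(B0, r)**2/B0**2)
residual = sp.simplify(k2rho - (k2rho0/h**2
           + 3*sp.diff(h, t)**2/(A0**2*h**2)))
```

The residual vanishes identically, confirming the reduction.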
In our model, we assume that the collapse begins from an initial anisotropic distribution of matter in a geometry obtained by Paul and Deb\cite{pd+et al+2014}, which is given by
\begin{eqnarray}
A_0(r) &=& a\left(\frac{1-k \alpha+n\frac{r^2}{R^2}}{1+k\alpha}\right),\label{pd1}\\
B_0(r) &=& \frac{(1+k \alpha)^2}{1+\frac{r^2}{R^2}},\label{pd2}
\end{eqnarray}
with $a$ a constant and
\begin{equation}
\alpha(r) = \sqrt{\frac{1+\frac{r^2}{R^2}}{1+\frac{\lambda r^2}{R^2}}},\label{pd3}
\end{equation}
so that the metric functions contain the four parameters $\lambda$, $k$, $n$ and $R$. Here the anisotropy of a compact object depends on a nonzero value of the parameter $n$, since $n=0$ leads to isotropy, as will be described in the next section. It is interesting to note that $p_r > p_t$ corresponds to $n<0$ and $p_r < p_t$ corresponds to $n>0$. For a given mass and radius, the parameters can be fixed by using appropriate boundary conditions. Details of the numerical procedure to evaluate the values of the constants, and the physical viability of the model, are discussed in {\it Ref.}\cite{pd+et al+2014}. For appropriate choices of the model parameters, it is shown that the relativistic solution satisfies the causality condition as well as the strong energy condition, accommodating realistic stars. Thus the collapse of a compact star may be studied for a given star which begins its collapse, from a state with reasonable initial conditions, when equilibrium is lost.
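For concreteness, the static potentials above are straightforward to evaluate numerically. The parameter values below are illustrative (taken loosely from the first row of Table~\ref{tab1}), not a fit:

```python
import numpy as np

# Illustrative parameters (cf. the first row of Table 1; not a fit)
a, k, n, lam, R = 1.0, 0.2, 0.5, 54.28, 135.22

def alpha(r):
    # eq. (pd3)
    return np.sqrt((1 + (r/R)**2) / (1 + lam*(r/R)**2))

def A0(r):
    # eq. (pd1)
    return a * (1 - k*alpha(r) + n*(r/R)**2) / (1 + k*alpha(r))

def B0(r):
    # eq. (pd2)
    return (1 + k*alpha(r))**2 / (1 + (r/R)**2)
```

At the centre, $\alpha(0)=1$ and both potentials are positive, as a physically reasonable interior requires.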
The dynamical quantities of the initial configuration of a star in this case are given below:
\begin{eqnarray}
\rho_0 &=& \frac{12(1+ k \alpha^5 \lambda)}{\kappa^2 R^2(1+k \alpha)^5},\label{sden0}\\
p_{r0} &=&\frac{2(2 x^2(k^2 \alpha^6 \lambda-1)+n(2-\alpha^2 z))}{x R^2 (1+k\alpha)^5(n(\alpha^2-1)+(1-k\alpha)x)},\label{sp0}\\
p_{t0} &=& \frac{x(4 x^2(k^2 \alpha^6 \lambda-1)+n y)}{R^4 v(\alpha^2-1)^2(n(\alpha^2-1)+(1-k \alpha)x)},\label{spt0}\\
\Delta_s &=& \frac{n \alpha(1-\alpha^2)(-8\alpha(\lambda-1)+k(-3+u ))}{ v (\alpha^2\lambda-1)(n(\alpha^2-1)-(k\alpha-1) x) },\label{sani0}
\end{eqnarray}
where, we have defined
\begin{eqnarray}
x &=& (1-\alpha^2 \lambda),\label{c1}\\
y &=& (4 + \alpha (4 \alpha (\alpha^2 -1) + k (3 + \alpha^4) - 4 \alpha (1 + 2 k \alpha+ \alpha^2) \lambda \nonumber\\
&& + \alpha^3 (4 + k \alpha (9 - 8 \alpha^2 + 3 \alpha^4) \lambda^2))) \label{c2},\\
z &=& (-2 +6 \lambda+\alpha(-2\alpha(-1+\lambda+\lambda^2)\nonumber\\
&&-k(1+\lambda+\alpha^2(1+\lambda^2(5 \alpha^2-3)-\lambda(2+3 \alpha^2))))),\\
u& = & \alpha^2(-1+\lambda(10+3\alpha^2(2+(-5+\alpha^2)\lambda))), \\
v &=& R^2(1+k \alpha)^5. \label{c3}
\end{eqnarray}
Using eq.~(\ref{eqj}) together with the condition that the radial pressure of the initial static configuration must vanish at the surface, {\it i.e., }$p_{r0}(r = r_\Sigma) = 0$, we obtain
\begin{equation}
2h\ddot{h}+\dot{h}^2-2 \gamma \dot{h} =0,\label{sur}
\end{equation}
where,
\begin{equation}
\gamma = \left[\frac{A_0'}{B_0}\right]_\Sigma.\label{surcon}
\end{equation}
Note that for a given initial static configuration $\gamma$ is a constant. The solution of eq.~(\ref{sur}) governs the overall temporal behaviour of the collapsing star. A similar approach was adopted by Govender {\em et al.}\cite{Govender+et+al+2018} for solving the junction conditions so as to study the impact of anisotropy on the time of formation of the horizon and the evolution of the temperature.
Following Bonnor\cite{Bonnor+et al+1989}, we reduce the second-order differential equation (\ref{sur}) to the first-order equation
\begin{equation}
\dot{h} = -\frac{2\gamma}{\sqrt{h}}(1-\sqrt{h}),\label{surf}
\end{equation}
which admits a solution
\begin{equation}
t = \frac{1}{\gamma}\left[\frac{h}{2}+\sqrt{h}+\ln(1-\sqrt{h})\right].\label{eq}
\end{equation}
Note that as $t \rightarrow - \infty$, $h \rightarrow 1$ and, from eq.~(\ref{surf}), $\dot{h} \rightarrow 0$. Thus, in this construction, the collapse begins in the remote past $t \rightarrow - \infty$ when $h = 1$ and continues till the black hole is formed.
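The closed-form relation between $t$ and $h$ can be checked numerically against eq.~(\ref{surf}): the derivative $dt/dh$ of eq.~(\ref{eq}) must equal $1/\dot{h}$. A small sketch with an illustrative value of $\gamma$:

```python
import numpy as np

gamma = 0.07   # illustrative surface constant, not from the tables

def t_of_h(h):
    # eq. (eq): t = (1/gamma) [h/2 + sqrt(h) + ln(1 - sqrt(h))]
    return (h/2 + np.sqrt(h) + np.log(1 - np.sqrt(h))) / gamma

def h_dot(h):
    # eq. (surf): dh/dt = -(2 gamma / sqrt(h)) (1 - sqrt(h))
    return -2*gamma*(1 - np.sqrt(h)) / np.sqrt(h)

h = 0.4
eps = 1e-6
dtdh = (t_of_h(h + eps) - t_of_h(h - eps)) / (2*eps)  # central difference
```

The finite-difference slope agrees with $1/\dot{h}$, and $\dot{h}<0$ for $0<h<1$, as required for collapse.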
Using eq.~(\ref{inmass}), we write the mass of the evolving star at any instant $t$ within a radius $r$ as
\begin{equation}
m(r,t) \stackrel{\Sigma}{=} \left[m_0 h(t)+\frac{2 \gamma^2 r^3 B_0^3(1-\sqrt{h})^2}{A_0
^2}\right],\label{mass}
\end{equation}
where,
\begin{equation}
m_0 \stackrel{\Sigma}{=} -\frac{r^2 B_0'}{2}\left(2+\frac{B_0' r}{B_0}\right),\label{masstc}
\end{equation}
defined to be the mass of the initial static star. We assume that at $t= - \infty$ (i.e., when $h(t) = 1$), the collapsing star had an initial mass $m_0$ within a boundary surface $r_0$. The star continues to collapse until it shrinks to a radius approaching the horizon, $[r h (t_{bh})]_\Sigma =r_0 h(t_{bh}) = 2m(v)$, where $t= t_{bh}$ represents the time of formation of the black hole. The condition yields
\begin{equation}
h(t)=\left[\frac{\frac{2 r \gamma B_0}{A_0}}{\frac{2 r \gamma B_0}{A_0}+\sqrt{1-\frac{2 m_0}{r B_0}}}\right]^2_\Sigma,\label{hts}
\end{equation}
which attains the value $h_{bh}$ at a time
\begin{equation}
t_{bh} = \frac{1}{\gamma}\left[\frac{h_{bh}}{2}+\sqrt{h_{bh}}+\ln(1-\sqrt{h_{bh}})\right],\label{tbh}
\end{equation}
where $h(t_{bh}) = h_{bh}$.
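Eqs.~(\ref{hts}) and (\ref{tbh}) can be evaluated directly once the surface data are specified. The numbers below are illustrative placeholders in geometric units, not values taken from the tables:

```python
import numpy as np

# Illustrative surface data in geometric units (G = c = 1); not from the tables
gamma, r, A0s, B0s, m0 = 0.07, 14.75, 0.9, 1.1, 4.4

w = 2*r*gamma*B0s/A0s                          # the repeated factor in eq. (hts)
h_bh = (w / (w + np.sqrt(1 - 2*m0/(r*B0s))))**2   # eq. (hts)
t_bh = (h_bh/2 + np.sqrt(h_bh)
        + np.log(1 - np.sqrt(h_bh))) / gamma      # eq. (tbh)
```

For any physically sensible surface data ($2m_0 < rB_0$ at $\Sigma$), the horizon value satisfies $0 < h_{bh} < 1$ and the formation time $t_{bh}$ is negative, consistent with a collapse that starts at $t\to-\infty$.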
\section{\label{sec5}Physical analysis}
In this section, the effects of anisotropy on the collapse of compact objects for different initial masses and radii with different compactness factors are studied. We note the following:
{\bf Case -I}: Consider a compact object with the initial configuration of mass $m_0 = 3~M_{\odot}$ and radius $r_0 = 10\times1.475~$km, with compactness factor $u=0.3$. It is found that in the stellar model, once the star departs from the equilibrium configuration, it starts collapsing until it becomes a black hole. For different values of the anisotropic parameter $n$, the final state of a massive star, {\it i.e.}, its black hole mass ($m_{bh}$) and the corresponding horizon radius ($r_{bh}$), are determined.
The time of formation of a black hole ($t_{bh}$), in addition to $m_{bh}$ and $r_{bh}$, for stars having different anisotropies $n$ and $k$ for the given mass is tabulated in Table~\ref{tab1}. The value of $a$ turns out to be $1$ for all values of $n$.
Though the variation of $t_{bh}$ is small in this configuration, we note that, if $p_r > p_t$, {\it i.e.}, $n < 0$, the horizon is formed at a later stage in a star having a more negative $n$ value.
{\bf Case -II}: Consider a compact object with the configuration of mass $m_0 = 5~M_{\odot}$ and radius $r_0 = 2159~$km\cite{Bonnor+et al+1989}. The star considered here has the compactness factor $u=0.00341$. It has been shown that, once the star departs from equilibrium, it starts collapsing until it becomes a black hole. For different values of the anisotropic parameter $n$, the values of the model parameters, including the black hole mass ($m_{bh}$) and its corresponding horizon radius ($r_{bh}$), are determined and tabulated in Table~\ref{tab2}. The time of formation of the black hole ($t_{bh}$) for stars having different anisotropies $n$ for a given mass is also tabulated in Table~\ref{tab2}. The value of $a$ turns out to be $1$ for all values of $n$. Though the variation of $t_{bh}$ is very small in this configuration, we note that, when the tangential pressure of the initial configuration is greater than the radial pressure ($p_t > p_r$), {\it i.e., } $n > 0$, the horizon is found to form at an earlier epoch compared to a less anisotropic star. However, if $p_r > p_t$, {\it i.e.}, $n < 0$, the horizon is formed at a later stage in a star having more negative $n$ values.
{\bf Case -III}: Consider a compact object with the configuration of mass $m_0=6~M_{\odot}$ and radius $r_0=25.742~$km\cite{Woosley+Phillips+1988}, which has the compactness factor $u=0.343$. With the variation of the anisotropic parameter $n$, the values of the model parameters, including the mass ($m_{bh}$) of the black hole and the corresponding horizon radius ($r_{bh}$), are determined and presented in Table~\ref{tab3}. The time of formation of the black hole ($t_{bh}$) for stars having different anisotropies $n$ for a given mass is also tabulated in Table~\ref{tab3}. Here also, for $p_t > p_r$, {\it i.e., } $n > 0$, the horizon is found to form at an earlier epoch compared to a less anisotropic star. However, if $p_r > p_t$, {\it i.e.}, $n < 0$, the horizon is formed at a later stage in a star having more negative $n$ values.
{\bf Case -IV}: Consider a star having mass $m_0=10~M_{\odot}$, radius $r_0=40\times1.475~km$, $k=0.2$, and compactness factor $u=0.25$. The values of the model parameters, namely the mass of the black hole ($m_{bh}$), the corresponding black hole radius ($r_{bh}$) and the time of black hole formation ($t_{bh}$), for different values of the anisotropic parameter $n$, are displayed in Table~\ref{tab4}. It is noted that as the anisotropy increases, the formation time of the black hole measured from the infinite past first decreases and thereafter increases for negative $n$ values. It is also noted that for stars with $m_0=10~M_{\odot}$ but a larger physical radius ($2 r_0$), black hole formation occurs at a later stage, with a lower black hole mass as the end product. In this case, the corresponding values of $m_{bh}$, $r_{bh}$ and $t_{bh}$ for different values of the anisotropic parameter $n$ are displayed in Table~\ref{tab5}.
{\bf Case -V}: Consider a massive compact star with the configuration of mass $m_0 = 20~M_{\odot}$ and radius $r_0 = 60\times1.475~km$. The star considered here has the compactness factor $u=0.333$. For different values of the anisotropic parameter $n$, the values of the model parameters, including the mass ($m_{bh}$) of the black hole and the corresponding horizon radius ($r_{bh}$), are determined and presented in Table~\ref{tab6}. We note that, for $n < 0$, the horizon is formed at a later stage as compared to the isotropic configuration ($n=0$). In this case, black hole formation is delayed for a star having more negative $n$ values.
{\bf Case -VI}: Consider the gravitational collapse of stars with a higher initial static mass. For this purpose we consider a star having initial mass $m_0=60~M_{\odot}$, physical radius $r_0 =600 \times1.475$ km, $k=0.1$, and compactness factor $u=0.1$. In Table~\ref{tab7} we have compiled all the parameters needed to describe the collapsing system. We note that as the anisotropic parameter $n$ is decreased, the mass of the black hole ($m_{bh}$) and the corresponding black hole radius ($r_{bh}$) both decrease, while the formation time of the black hole ($t_{bh}$) increases.
\begin{table}
\caption{\label{tab1} Dynamical variables for different anisotropic stars. ($m_0 = 3~M_{\odot}$, $r_0 = 10\times1.475~km$.)}
\begin{tabular}{cccccccc}
$n$ & $k$ &$\lambda$ & $R$ & $h_{bh}$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$\\ \hline
0.5 & 0.2 &54.28 & 135.22 & 0.0722 & -1.582 & 1.410 & 0.4780 \\
& 0.4 &835.5 & 197.24 & 0.2015 & -3.616 & 3.512 & 1.1654\\ \hline
0 & 0.2 &123.19 & 194.08 & 0.0667 & -1.467 & 1.302 & 0.4413 \\
& 0.4 &2106.07 & 276.84 & 0.1744 & -3.566 & 3.402 & 1.1534\\ \hline
-1.0 &0.2& 261.51 & 276.42 & 0.0650 & -1.408 & 1.240 & 0.4230 \\
& 0.4 &3961.39 & 377.76 & 0.1714 & -3.512 & 3.344 & 1.1335 \\ \hline
-2.0 &0.2& 399.93 & 339.34 & 0.0630 & -1.389 & 1.229 & 0.4169 \\
& 0.4 &5819.79 & 457.02 & 0.1703 & -3.491 & 3.322 & 1.1261\\ \hline
-3.0 &0.2& 538.39 & 392.29 & 0.0620 & -1.379 & 1.221 & 0.4138 \\
& 0.4 &7679.29 & 524.47 & 0.1697 & -3.481 & 3.311 & 1.1223\\ \hline
-5.0 &0.2& 815.34 & 481.01 & 0.0621 & -1.370 & 1.212 & 0.4108\\
& 0.4 &11398.70 & 638.34 & 0.1691 & -3.470 & 3.299 & 1.1184\\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab2} Dynamical parameters for different anisotropic stars. ($m_0 = 5~M_{\odot}$, $r_0 = 2159~km$ and $k= 0.01$.)}
\begin{tabular}{cccccc}
$n$ & $\lambda$ & $R$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$ \\ \hline
1.00 & 945869 & 367763 & -0.06733 & 0.100046 & 0.034056 \\
0.90 & $4.151\times 10^7$ & $2.412\times 10^6$ & -0.06354 & 0.094814 & 0.032140 \\
0.80 & $8.129\times 10^7$ & $3.374\times 10^6$ & -0.06349 & 0.094746 & 0.032117 \\
0.70 & $1.214\times 10^8$ & $4.125\times 10^6$ & -0.06348 & 0.094732 & 0.032112\\
0.50 & $2.018\times 10^8$ & $5.317\times 10^6$ & -0.06347 & 0.094710 & 0.032105 \\
-0.10 & $4.429\times 10^8$ & $7.870\times 10^6$ & -0.06346 & 0.094699 & 0.032101 \\
-0.50 & $6.036\times 10^8$ & $9.195\times 10^6$ & -0.06345 & 0.094690 & 0.032098 \\
-1.00 & $8.046\times 10^8$ & $1.061\times 10^7$ & -0.06345 & 0.094687 & 0.032097 \\
-1.99 & $1.202\times 10^9$ & $1.297\times 10^7$ & -0.06345 & 0.094686 & 0.032097 \\
-2.99 & $1.604\times 10^9$ & $1.499\times 10^7$ & -0.06344 & 0.094685 & 0.032096 \\
-3.99 & $2.006\times 10^9$ & $1.676\times 10^7$ & -0.06302 & 0.094543 & 0.031870 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab3} Dynamical parameters for different anisotropic stars. ($m_0 = 6~M_{\odot}$, $r_0 = 25.742~km$ and $k= 0.2$.)}
\begin{tabular}{cccccccc}
$n$ & $\lambda$ & $a$ & $R $ & $h_{bh}$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}(M_{\odot})$\\ \hline
0.7 & 9.81 & 1.0074 & 178.04 & 0.04246 & -1.8858 & 1.5009 & 0.5087 \\
0.4 & 29.20 & 1.0069 & 262.25 & 0.03844 & -1.7121 & 1.3579 & 0.4603 \\
0.1 & 48.74 & 1.0068 & 324.98 & 0.03696 & -1.6495 & 1.3066 & 0.4429 \\
-0.4 & 81.58 & 1.0067 & 409.04 & 0.03589 & -1.6036 & 1.2691 & 0.4301 \\
-0.7 & 101.27 & 1.0067 & 451.95 & 0.03555 & -1.5890 & 1.2571 & 0.4261 \\
-1.5 & 153.80 & 1.0066 & 550.23 & 0.03505 & -1.5670 & 1.2391 & 0.4200 \\
-2.5 & 219.49 & 1.0066 & 652.58 & 0.03474 & -1.5537 & 1.2282 & 0.4163 \\
-5.5 & 416.60 & 1.0065 & 891.75 & 0.03438 & -1.5382 & 1.2156 & 0.4208 \\
-8.5 & 613.73 & 1.0065 & 1079.15 & 0.03425 & -1.5325 & 1.2109 & 0.4105 \\
-10.0 & 712.29 & 1.0065 & 1161.57 & 0.03421 & -1.5307 & 1.2096 & 0.4100 \\
-50.0 & 3340.74 & 1.00655& 2504.76&0.034021 & -1.5224 &1.2027 &0.40771\\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab4} Dynamical variables for different anisotropic stars. ($m_0 =10 ~M_{\odot}$, $r_0 =40\times1.475~km$ and $k = 0.2$.)}
\begin{tabular}{cccccccc}
$n$ & $\lambda$ & $a$ & $R $ & $h_{bh}$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$\\ \hline
0.80 & 67.32 & 1.0040 & 436.90 & 0.09911 & -7.359 & 7.40 & 2.51 \\
0.50 & 182.63 & 1.0044 & 671.53 & 0.08522 & -6.389 & 6.36 & 2.16 \\
0.30 & 260.49 & 1.0044 & 790.76 & 0.08228& -6.181 & 6.14& 2.08 \\
0.10 & 338.57 & 1.0044 & 894.37 & 0.08059& -6.063 & 6.02 & 2.04\\
-1 & 769.12 & 1.0045 & 1327.15 & 0.0773& -5.583 & 5.77& 1.96\\
-2 & 1160.92& 1.0045 & 1632.50 & 0.07638& -5.762& 5.70& 1.93 \\
-5 & 2363.71& 1.0045 & 2293.39& 0.075451& -5.696& 5.63& 1.91\\
-8 & 3512.61& 1.0045 & 2807.78& 0.075141& -5.674& 5.61& 1.90 \\
-10 & 4296.56& 1.0046 & 3103.70& 0.075028& -5.666& 5.60& 1.89 \\
-15 & 6256.46& 1.0046 & 3742.51& 0.07487& -5.655& 5.59& 1.89 \\
-100 & 39575.31& 1.00457 & 9399.75 & 0.0745753 & -5.633 & 5.56& 1.88 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab5} Model parameters for different anisotropic stars. ($m_0 = 10~M_{\odot}$, $r_0 =80\times1.475~km$ and $k= 0.2$.)}
\begin{tabular}{cccccc}
$n$ & $\lambda$ & $R $ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$\\ \hline
0.80 & 6650.82 & 3203.9 & -4.67102 & 5.80172 &1.967 \\
0.50 & 14016.6 & 4622.43 & -4.5475 & 5.6426 & 1.913 \\
0.30 & 18956.6 & 5367.65& -4.51822& 5.60488& 1.899 \\
0.10 & 23924.4& 6024.78& -4.50087 & 5.58252& 1.892 \\
-1 & 51101.1& 8789.3& -4.4653& 5.536& 1.877 \\
-2 & 75810.4& 10699.9& -4.45515& 5.52386& 1.872 \\
-5& 150042& 15044.9& -4.4447& 5.51042& 1.868 \\
-8 & 224251& 18389.6& -4.44115& 5.505& 1.866 \\
-10& 273724& 20315.7& -4.4398& 5.5042& 1.866 \\
-15& 397406& 24476.7& -4.43803& 5.50186& 1.865 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab6} Dynamical variables for different anisotropic stars. ($m_0 =20 ~M_{\odot}$, $r_0 =60\times1.475~km$ and $k = 0.3$.)}
\begin{tabular}{cccccccc}
$n$ & $\lambda$ & $a$ & $R $ & $h_{bh}$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$\\ \hline
0 & 330.91 & 1.0077 & 1105.80 & 0.1585 & -21.65 & 19.09& 6.47 \\
-1.0 & 661.87 & 1.0079 & 1543.07& 0.1528& -20.95& 18.41& 6.24 \\
-5.0 & 1989.21& 1.0081 &2650.08&0.1488&-20.46&17.93&6.08 \\
-10 & 3649.21&1.0081 &3581.53&0.1479&-20.34&17.82&6.04\\
-20 & 6969.49&1.0081 &4943.37&0.1474&-20.28&17.75&6.02\\
-50 &16930.50&1.0081 &7698.44&0.1470&-20.24&17.71&6.00\\
-100&33532.20&1.0081 &10831.2&0.1469&-20.22&17.70&5.99 \\
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\centering
\caption{Dynamical variables for different anisotropic stars. ($m_0 = 60~M_{\odot}$, $r_0 = 600\times1.475~km$ and $k= 0.1$.)}
\label{tab7}
\begin{tabular}{cccccccc}
$n$ & $\lambda$ & $a$ & $R $ & $\gamma$ & $t_{bh}$ & $r_{bh}$ & $m_{bh}~(M_{\odot})$\\ \hline
0.999 & 159 & 0.9999 & 7658 & 0.0000960 & -26.12 & 33.85 & 11.48 \\
0.9 & 722 & 1.0004 & 14432 & 0.0000767 & -18.19 & 23.34 & 7.91 \\
0.2 & 4988 & 1.0005 & 36371 & 0.00007005 & -15.63 & 19.98 & 6.77 \\
-0.1 & 6823 & 1.0005 & 42450 & 0.0000697 & -15.51 & 19.81 & 6.72 \\
-0.2 & 7435 & 1.0005 & 44292 & 0.0000696 & -15.48 & 19.78 & 6.70 \\
-0.5 & 9270 & 1.0005 & 49406 & 0.0000695 & -15.41 & 19.69 & 6.68 \\
-0.7 & 10494 & 1.0005 & 52540 & 0.0000694 & -15.39 & 19.66 & 6.66 \\
-1.0 & 12330 & 1.0005 & 56919 & 0.0000693 & -15.35 & 19.61 & 6.65 \\
-1.5 & 15389 & 1.0005 & 63550 & 0.0000692 & -15.31 & 19.56 & 6.63 \\
-2.0 & 18449 & 1.0005 & 69552 & 0.0000691 & -15.29 & 19.53 & 6.62 \\
-2.5 & 21509 & 1.0005 & 75075 & 0.0000691 & -15.27 & 19.51 & 6.61 \\
-3.0 & 24569 & 1.0005 & 80219 & 0.0000690 & -15.26 & 19.49 & 6.67 \\ \hline
\end{tabular}
\end{table}
\section{Temperature profile}
\label{sec6}
The laws of thermodynamics must be obeyed during the dynamical gravitational evolution of physical systems. Black holes are considered to possess remarkable thermodynamic properties. Hawking, Carter and Bardeen, by presenting the laws of black hole thermodynamics, were successful in establishing a connection between gravity and thermodynamics at the event horizon of a stationary black hole.
The temperature evolution of the collapsing system depends on the anisotropy of the system\cite{Sharma+Das+2013}. The relativistic Maxwell-Cattaneo relation for temperature governing the heat transport\cite{Israel+Stewart+1979, Maartens+1995} is given by
\begin{equation}
\tau(g^{\alpha\beta}+u^{\alpha}u^{\beta})u^{\delta}q_{\beta;\delta} + q^{\alpha} = -K(g^{\alpha\beta}+u^{\alpha}u^{\beta})[T_{,\beta}+T\dot{u_{\beta}}],\label{teq41}
\end{equation}
where $T$ is the temperature, $K (\geq 0)$ is the thermal conductivity and $\tau (\geq 0)$ is the relaxation time. For the line element (\ref{intm1}), eq.~(\ref{teq41}) reduces to
\begin{equation}
\tau\frac{d}{dt}(qhB_0) + q h A_0 B_0 = -K \frac{1}{h B_0}\frac{d}{dr}(A_0T).\label{teq42}
\end{equation}
The relativistic Fourier heat transport equation can be simplified by setting $\tau = 0$ in (\ref{teq42}) as shown in Ref.~\cite{Sharma+Das+2013}. For $\tau=0$, combining eqs.~(\ref{fEq4}) and (\ref{teq42}), we get
\begin{equation}
8\pi G K(A_0T)' = \frac{2A_0'\dot{h}}{A_0h}.\label{teq43}
\end{equation}
Let us now assume that the thermal conductivity varies as $K = \beta T^{\omega}$, where $\beta$ and $\omega$ are constants. Eq.~(\ref{teq43}) then yields
\begin{equation}
8\pi\beta(A_0T)'=\frac{2A_0' T^{-\omega}}{A_0}\left[\frac{2\gamma(\sqrt{h}-1)}{h\sqrt{h}}\right],\label{teq44}
\end{equation}
where we have used eq.~(\ref{surf}). Integrating the above equation, we get
\begin{equation}
T^{\omega+1}=\frac{\gamma(\sqrt{h}-1)}{2\pi\beta h\sqrt{h}}\left(\frac{\ln A_0}{A_0}\right)+T_0(t),\label{teq45}
\end{equation}
where $T_0(t)$ is a function of time arising from the integration.
To get an overall picture of the temperature evolution, we set $\omega=0$, $\beta = 1$ and $T_0(t) = 0$ without loss of generality. Note that at the onset of collapse, in the absence of adequate information about the luminosity vis-\`a-vis the thermal profile of the collapsing star, we can set $T_0(t) =0$ so as to analyze the temporal evolution of the system. With the above choices of the constants, the surface temperature at any instant reads as
\begin{equation}
T(r_\Sigma,t)=\frac{\gamma(\sqrt{h}-1)}{2\pi h\sqrt{h}}\left(\frac{\ln A_0}{A_0}\right)_\Sigma,\label{teq46}
\end{equation}
which is plotted in Figs.~(\ref{fg1})--(\ref{fg3}). It is evident that the surface temperature, which is initially low at $h=1$, gradually increases to higher values as the collapse progresses towards $h\rightarrow 0$.
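The monotonic rise of the surface temperature as $h \to 0$ can also be seen directly from eq.~(\ref{teq46}). A quick numerical check with illustrative surface values (note that $\ln A_0 < 0$ for $A_0 < 1$, so $T > 0$ for $h < 1$):

```python
import numpy as np

gamma, A0s = 0.07, 0.9   # illustrative surface values; A0s < 1 so ln(A0s) < 0

def T_surface(h):
    # eq. (teq46) with omega = 0, beta = 1, T_0(t) = 0
    return gamma*(np.sqrt(h) - 1)/(2*np.pi*h*np.sqrt(h)) * (np.log(A0s)/A0s)
```

Evaluating at decreasing values of $h$ confirms that the surface temperature grows as the collapse proceeds.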
\begin{figure}
\centering
\includegraphics[width=8.0cm, angle=0]{bcp22.eps}
\caption{Surface temperature profile ($m_0 = 3~M_{\odot}$, $r_0 = 10\times1.475~km$ and $k= 0.2$).}
\label{fg1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.0cm, angle=0]{bcp1.eps}
\caption{Surface temperature profile ($m_0 = 6~M_{\odot}$, $r_0 = 25.742~km$ and $k= 0.2$).}
\label{fg2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.0cm, angle=0]{bcp11.eps}
\caption{Surface temperature profile ($m_0 =10 ~M_{\odot}$, $r_0 =40\times1.475~km$ and $k = 0.2$).}
\label{fg3}
\end{figure}
\section{Discussions}
\label{sec7}
The gravitational collapse of a relativistic anisotropic star in the presence of radial heat flux is studied. The star is assumed to begin its collapse from an initial configuration described by the Paul and Deb\cite{pd+et al+2014} model. In general, it is observed that, in the case of anisotropic stars, when the tangential pressure of the initial configuration exceeds the radial pressure ($p_t > p_r$), {\it i.e.,} $n > 0$, the horizon is found to form at an earlier epoch compared to that of a less anisotropic star. However, if $p_r > p_t$, {\it i.e.,} $n < 0$, the horizon is found to form at a later stage for a star with more negative $n$ values. The time of formation of a black hole ($t_{bh}$), the mass of the black hole ($m_{bh}$) and the corresponding radius ($r_{bh}$) are found to follow similar behaviour for a wide range of massive stars with different compactness factors.
It is evident from Table~\ref{tab4} and Table~\ref{tab5} that, for a particular mass with a constant $k$, as the compactness factor increases the time of formation of a black hole decreases, i.e., the black hole forms at an earlier epoch than in stars with lower compactness, and it ends with a larger black hole mass ($m_{bh}$) and size ($r_{bh}$). However, for the same $k$, if the anisotropy increases (more negative $n$ values), it is found that a less massive black hole is formed.
For a given mass and radius, if the geometrical parameter $k$ is increased, it is observed that the black hole may form at a faster rate. Moreover, for the same anisotropic parameter $n$, such stars end in a comparatively more massive black hole, with a larger radius ($r_{bh}$), for higher values of $k$.
For a given mass, as $\lambda$ is increased, the time of formation of a black hole increases. However, the final masses ($m_{bh}$) and radii ($r_{bh}$) of the black holes are found to be identical. It is also found that, for initial configurations having the same $k$ and $n$, a massive star takes less time to form a black hole, i.e., the black hole is formed at an earlier epoch compared to an anisotropic star of lower mass. Accordingly, the black hole mass ($m_{bh}$) and black hole radius ($r_{bh}$) are both larger for a star with a larger initial mass.
To conclude, in this paper we have analyzed the effects of anisotropy on the gravitational collapse of a star. The work provides a simple mechanism to investigate the effects of pressure anisotropy for a wide range of initial configurations. The effects of other factors, such as shear, viscosity, charge, etc., are beyond the scope of the current investigation. We have also studied the anisotropic effect on the temporal behaviour of a collapsing system. It is observed that for $n>0$ the surface temperature lies on the higher side as compared to the case of $n < 0$ at all stages of the gravitational collapse.
\begin{acknowledgments}
BCP and RS gratefully acknowledge support from the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India, under its Visiting Research Associateship Programme. SD is also thankful to IUCAA, Pune for its hospitality where part of this work was carried out.
\end{acknowledgments}
{
"timestamp": "2020-12-29T02:24:22",
"yymm": "2012",
"arxiv_id": "2012.14084",
"language": "en",
"url": "https://arxiv.org/abs/2012.14084"
}
\section{Introduction}
In nature and the current highly technological era, acquired signals are usually affected by various complicated factors and appear as multi-component (time-overlapping) modes in the form of the adaptive harmonic model (AHM) with an additive trend function, namely:
\begin{equation}
\label{AHM0}
x(t)=A_0(t)+\sum_{k=1}^K x_k(t), \quad x_k(t)=A_k(t) \cos \big(2\pi \phi_k(t)\big),
\end{equation}
where $A_0(t)$ represents the trend, $A_1(t), \cdots, A_K (t) \ge 0$ the instantaneous amplitudes (IAs), and $2\pi \phi_1(t), \cdots, 2\pi \phi_K (t)$ the instantaneous phases (IPhs), of the multi-component source signal (or composite signal) $x(t)$. The trend along with the instantaneous frequencies (IFs) $\phi^{\prime}_k(t)$ of $x(t)$ in \eqref{AHM0} are often used to describe the underlying dynamics of $x(t)$. Here, the IF of the unknown component $x_k(t)$ is defined by the derivative $\phi^{\prime}_k(t)$ of $1/2\pi$ multiple of the phase function. For example, radar echoes may be generated by multiple targets close to each other, or by different micro-motion parts in one target. Also, seismic signals usually consist of multiple modes that change in time with the dynamic variations of the IAs $A_k(t)$ and the IPhs $2\pi \phi_k(t)$, aroused by the adjacent thin layers. In many situations it is necessary to separate the multi-component source signal $x(t)$ into a finite number of mono-components $x_k(t) = A_k(t) \cos \big(2\pi \phi_k(t)\big)$ to recover the modes and the underlying dynamics, for the purposes of source signal processing, parameter estimation, feature extraction, pattern recognition, etc. Unfortunately, there are very few effective rigorous methods available in the published literature for the extraction or recovery of such signal components or sub-signals.
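As a concrete instance of the model \eqref{AHM0}, a synthetic two-component signal with a trend can be generated as follows. All amplitudes, phases, and the trend are illustrative choices, not taken from any dataset:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)          # 1 s sampled at 1 kHz
A0_trend = 0.5 * t                                   # trend A_0(t)
x1 = (1 + 0.2*t) * np.cos(2*np.pi*(5*t + 2*t**2))    # IA = 1+0.2t, IF = 5+4t
x2 = np.cos(2*np.pi*12*t)                            # constant IF = 12 Hz
x = A0_trend + x1 + x2                               # source signal x(t)
if1 = 5 + 4*t                                        # IF of x_1, strictly positive
```

The inverse problem discussed in this paper is to recover $A_0(t)$, the two mono-components, and their IFs from the samples of $x$ alone.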
In this regard, it is important to point out that in general, the signal decomposition approach is not suitable for resolving the inverse problem of extracting the signal components $x_1(t), \cdots, x_K(t)$ and trend $A_0(t)$ from the source data $x(t)$ in \eqref{AHM0}. In particular, although function decomposition methods in the mathematics literature are abundant, the general objective of such an approach is to decompose a given function in a certain function class into its building blocks, which are not of the form of the signal components in \eqref{AHM0}. For example, in the pioneering paper \cite{RR_Coifman} by R. Coifman, the function building blocks (called atoms) do not have the phase and frequency contents of $x_k(t)=A_k(t) \cos \big(2\pi \phi_k(t)\big)$. In another pioneering paper \cite{Chen_Donoho_Saunders} by S. Chen, D. Donoho, and M. Saunders, a desired library of function building blocks is compiled to apply an innovative basis pursuit algorithm for atomic decomposition. However, it is not feasible to compile a huge library of atoms of the form $A_k(t) \cos \big(2\pi \phi_k(t)\big)$ for arbitrary IAs and IPhs, to apply the basis pursuit algorithm for resolving the inverse problem of recovering the number $K$ of components $x_1(t), \cdots, x_K(t)$ in \eqref{AHM0}, from the source signal $x(t)$. Of course there are other well-known signal decomposition schemes, such as the discrete wavelet decomposition and sub-band coding, but they are data-independent computational schemes and definitely cannot be applied to solving this inverse problem. Even the most popular data-dependent signal (or time series) decomposition algorithm, called ``Empirical Mode Decomposition (EMD)'', proposed by N. Huang {\it et al.}, as well as all variants developed by others, such as \cite{Cicone20, HM_Zhou16, HM_Zhou20, Flandrin04, LCJJ19, HM_Zhou09, Rilling08, Walt, Walt2,
Y_Wang12, Wu_Huang09, Xu06, Li_Ji09}, fail to resolve this inverse problem. The reason is that the EMD decomposed components, called intrinsic mode functions (IMFs), need not possess any phase or frequency information. After all, the manipulation of applying the Hilbert transform to extend each IMF analytically from the real line to the upper half plane, followed by taking the real part of the polar formulation of the extension to obtain the instantaneous phase representation, can also be applied to any arbitrary integrable function. In fact, the derivative of this artificial instantaneous phase function is not necessarily positive (as required for the formulation of the instantaneous frequency), even if the derivative exists.
On the other hand, it would be much more reasonable to first extract the (instantaneous) frequencies, and then use the frequency contents to ``decompose\rq\rq{} the source signal into its components. Let us call this procedure the ``signal resolution\rq\rq{} approach. In other words, the signal resolution approach is a logical way to resolve the inverse problem using the data information from the source signal. For stationary signals (that is, source signals with linear-phase components), the signal resolution approach has a very long history, dating back to B.G.R. De Prony, who introduced the Prony method in his 1795 paper \cite{De Prony} to solve the inverse problem of time-invariant linear systems with constant coefficients. This pioneering paper stimulated the development of two very important and popular algorithms, called ``MUSIC\rq\rq{}, proposed by R.O. Schmidt in \cite{MUSIC}, and ``ESPRIT\rq\rq{}, introduced by R. Roy and T. Kailath in \cite{ESPRIT}. While the number $K$ of signal components of the stationary model \eqref{AHM0} with linear phases and constant coefficients is needed for carrying out the Prony method, it is not necessary for either MUSIC or ESPRIT, even with non-constant coefficients in \eqref{AHM0}.
The first signal resolution approach for non-stationary signals is the ``synchrosqueezed transform (SST)\rq\rq{}, coined by I. Daubechies and S. Maes in \cite{Daub_Maes96} and studied by H.-T. Wu in his Ph.D. dissertation \cite{Wu_thesis}, where both the continuous wavelet transform (CWT) and the short-time Fourier transform (STFT) are considered to compute some reference frequency from the source signal for the SST operation to squeeze out the instantaneous frequencies (IFs) of the signal components. The full developments of SST using CWT and STFT are published in \cite{Daub_Lu_Wu11} and \cite{Thakur_Wu11}, respectively. One of the limitations of the SST approach is the need for sufficiently accurate IFs for applying the normalized integral of the SST output in a small neighborhood of each IF to recover the signal components, but without assurance of the number $K$ of such IFs or signal components in \eqref{AHM0}. Further developments and studies in the area of SST and its applications include the more recent publications \cite{ Flandrin_Wu_etal_review13,Chui_Lin_Wu15, Chui_Walt15, Daub_Wang_Wu15, Yang15, Yang14, MOM14, OM17, BMO18, Wu17, Saito17, LCHJJ18, LCJ18, CJLS18, LJL20, Li_Liang12, Wang_etal14,Jiang_Suter17, WCSGTZ18}.
More recently, another time-frequency approach, coined ``signal separation operation or operator (SSO)\rq\rq{}, was introduced by the first author and H.N. Mhaskar in the joint work \cite{Chui_Mhaskar15} for resolving the inverse problem \eqref{AHM0} by using discrete data acquired from the source multi-component signal. In contrast to SST, the SSO is a direct method for recovering the signal components simply by plugging the computed IF values into the same SSO (operator). Further development in the direction of SSO includes \cite{Chui_Mhaskar_Walt, LCJ20, CJLL20_adpSTFT, Chui_Han_Mhaskar, Chui_Mhaskar}. In the literature, both SST and SSO are commonly called ``time-frequency\rq\rq{} approaches. Another consideration of the signal resolution approach is the ``time-scale\rq\rq{} approach, obtained by using the CWT and recalling that the scale parameter of the CWT is inversely proportional to the frequency to be estimated by the CWT. Of course the constants of inverse proportionality depend on the choice of the analysis wavelets for the CWT. In the very recent paper \cite{Chui_Han20}, the classical Haar function is extended to a family of cardinal splines, called extended Haar wavelets $\psi_{m, n}(x)$, with any desirable polynomial spline order $m\ge1$, any desirable order $n\ge1$ of vanishing moments, and compact supports $[-(m+n)/2, (m+n)/2]$, for which the constants of inverse proportionality are easily computed (see Equation (2.2) and Tables 3--5 in \cite{Chui_Han20}). One advantage of the time-scale approach proposed in \cite{Chui_Han20} over the SSO is the elimination of the additional parameter for estimating the IFs of the signal components.
Observe that in applying
SSO and the time-scale approach in \cite{Chui_Han20}, the phase functions of the signal components in the source signal model \eqref{AHM0} are approximated by some linear polynomials at any local time for the purpose of extracting the IFs. More recently,
a quadratic approximation at local time gives rise to the SSO of the ``linear chirp-based model\rq{}\rq{} proposed in our paper \cite{LCJ20}. This model provides more accurate component recovery formulae, with theoretical analysis established in our recent work \cite{CJLL20_adpSTFT}.
The main reason for considering the quadratic terms of the phase approximation is to recover signal components with the same IF values. In this regard, we emphasize that in the current literature, including all time-frequency and time-scale approaches, the IFs of the signal components are assumed to be distinct and well separated. This strict assumption must be removed in order to apply the methods and algorithms to separate more general real-world multi-component or composite signals. To demonstrate this point of view, let us consider radar signal processing, where the micro-Doppler effects are represented by highly non-stationary signals. When the target or any structure on the target undergoes micro-motion dynamics, such as mechanical vibrations, rotations, or tumbling and coning motions \cite{radar_basic_2016, Stankovic_compressive_sense_2013}, the frequency curves of the signal components may cross one another. For example, Fig.\ref{figure:Micro_Doppler} shows the simulated micro-Doppler modulations (that is, two sinusoidal frequency-modulation signals and one single-tone signal) and the STFT of the synthetic signal.
\begin{figure}[th]
\centering
\begin{tabular}{cc}
\resizebox {2.4in}{1.8in} {\includegraphics{sim_mic_Dop1}}
\quad & \quad
\resizebox {2.4in}{1.8in} {\includegraphics{mic_Dop1}}
\end{tabular}
\vskip -0.3cm
\caption{\small Micro-Doppler modulations induced by target\rq{}s tumbling (Left) and STFT of the signal (Right).}
\label{figure:Micro_Doppler}
\end{figure}
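As an aside, the crossover scenario in Fig.~\ref{figure:Micro_Doppler} can be imitated numerically. The following Python sketch (with a hypothetical sampling rate and modulation parameters, not those used to produce the figure) synthesizes two sinusoidal frequency-modulation components and one single tone, and computes a naive STFT magnitude:

```python
import numpy as np

fs = 256.0                       # sampling rate in Hz (hypothetical)
t = np.arange(0, 4, 1 / fs)      # 4-second observation window

# IF laws of the three simulated components: two sinusoidal FM laws and
# one single tone (illustrative values only)
if1 = 60 + 30 * np.cos(2 * np.pi * 0.5 * t)
if2 = 60 - 30 * np.cos(2 * np.pi * 0.5 * t)
if3 = np.full_like(t, 60.0)

def component(inst_freq):
    # phase = 2*pi * (cumulative integral of the IF), so that phi'(t) = IF(t)
    return np.cos(2 * np.pi * np.cumsum(inst_freq) / fs)

x = component(if1) + component(if2) + component(if3)

# naive STFT magnitude: sliding Hann-windowed FFT frames
win = np.hanning(128)
hop = 32
frames = [x[i:i + 128] * win for i in range(0, len(x) - 128 + 1, hop)]
S = np.abs(np.fft.rfft(np.array(frames), axis=1))   # frames x frequency bins

# the IF curves cross repeatedly, e.g. if1 = if2 = if3 = 60 at t = 0.5
print(S.shape, np.min(np.abs(if1 - if2)))
```

In the time-frequency plane the three ridges intersect, which is precisely the situation excluded by the usual well-separation assumption.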
To be precise, we say that two signal components $x_k(t)$ and $x_\ell(t)$ of a multi-component signal $x(t)$ governed by \eqref{AHM0} overlap in the time-frequency plane at $t=t_0$, if $\phi_k^{\prime}(t_0) = \phi_\ell^{\prime}(t_0)$ but $\phi_k^{\prime}(t)\not = \phi_\ell^{\prime}(t)$ in some deleted neighborhood of $t_0$.
Based on the linear chirp-based model proposed in our previous paper \cite{LCJ20}, we have extended the SSO method in \cite{Chui_Mhaskar15} by incorporating a chirp rate parameter to introduce a computational scheme in our work \cite{LHJC20} for the recovery of signal components with overlapping frequency curves.
In the present paper, we propose another innovative time-scale approach by introducing a 3D time-scale-chirp\_rate transform, formulated by incorporating a complex quadratic phase function with a continuous wavelet-like transform (CWLT),
to be called an adaptive ``time-scale-chirp\_rate (TSC-R)\rq{}\rq{} component recovery operator, and we develop a rigorous theory that guarantees a solution of the inverse problem of separating the signal components $x_k(t)$ of the multi-component signal $x(t)$ governed by \eqref{AHM0}, without the assumption of well-separated IFs, but rather by assuming that
if the two IF curves of the signal components $x_k$ and $x_{\ell}$ cross at some $t=t_0$, then $|\phi''_{k}(t)-\phi''_{\ell}(t)| \ge \delta$ for some $\delta > 0$ and for all $t$ with $|t - t_0| <\epsilon$, for some $\epsilon > 0$.
For convenience, we will consider, without loss of generality, the following complex version of \eqref{AHM0} without the trend $A_0(t)$, namely:
\begin{equation}
\label{AHM}
x(t)=\sum_{k=1}^K x_k(t)=\sum_{k=1}^K A_k(t) e^{i2\pi\phi_k(t)}
\end{equation}
where $A_k(t)>0$ and $\phi_k'(t)>0$. The reader is referred to \cite{Chui_Mhaskar15} for methods of polynomial trend removal.
The presentation of this paper is organized as follows. In Section 2, the adaptive TSC-R operator is introduced and developed, along with some error bounds, for instantaneous frequency estimation and signal components recovery. When the Gaussian function is used as the wavelet-like scalable window, more precise error bounds are derived for the adaptive TSC-R operation in Section 3. Numerical experimental results will be discussed in Section 4.
\section {Time-scale-chirp\_rate signal recovery operator}
To extract and separate the (unknown) signal components with crossover IFs from the multi-component signal governed by \eqref{AHM}, we propose the following adaptive {\bf time-scale-chirp\_rate signal recovery} (TSC-R) operator, by introducing an adaptive continuous wavelet-like transform (CWLT), namely:
\begin{eqnarray}
\nonumber
U_x(a, b, \lambda) \hskip -0.6cm && := \int_{-\infty}^\infty x(t) \frac 1{a\sigma(b)}g\big(\frac{t-b}{a\sigma(b)}\big)
e^{-i2\pi \mu \frac{t-b}{a}}
e^{ -i\pi \lambda (t-b)^2} dt\\
\label{def_adaptiveTSC-R} &&= \int_{{\mathbb R}} x(b+at) \frac 1 {\sigma(b)} g\big(\frac t {\sigma(b)}\big) e^{-i2\pi \mu t -i\pi \lambda a^2 t^2}dt,
\end{eqnarray}
where $g(t)$ is a window function, $\mu$ is a positive constant, and $\sigma(b)$ is a positive function of $b$. In this paper, all window functions $g$ are assumed to be functions in $L_2({\mathbb R})$ that decay to zero at $\infty$ and satisfy $\int_{\mathbb R} g(t) dt =1$. Observe that when $\lambda=0$, $U_x(a, b, \lambda)$ is reduced to the adaptive CWLT of $x(t)$,
denoted by $\widetilde W_x(a, b)$, as considered in \cite{LCJ18}, and that the TSC-R of $x(t)$ can be considered as a multi-component signal in the 3-dimensional space of time $t$, scale $a$, and chirp rate $\lambda$. The importance of this transform is that when the IF curves of two components $x_k(t)$ and $x_\ell(t)$ cross each other, they may be well-separated in the 3-dimensional space by adaptive TSC-R operator, provided that $\phi_k''(t)\not = \phi_\ell''(t)$ for $t$ in some neighborhood of the cross-over time instant $t_0$. Thus, a multi-component signal $x(t)$ with certain signal components that have the same IF values can be extracted and well-separated in the 3-dimensional TSC-R space adaptively. Hence, it is feasible to reconstruct signal components by adaptive TSC-R.
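As a purely numerical illustration (with illustrative parameter values $\mu=1$, $\sigma(b)\equiv 1$ and a hypothetical test chirp; nothing here is prescribed by the theory), the second form of \eqref{def_adaptiveTSC-R} can be evaluated by direct quadrature with the Gaussian window $g(t)=e^{-t^2/2}/\sqrt{2\pi}$. For a single linear chirp, $|U_x(a, b, \lambda)|$ peaks at the ridge $a=\mu/\phi'(b)$, $\lambda=\phi''(b)$:

```python
import numpy as np

def gaussian(t):
    # window g with unit integral, as assumed for all windows in the paper
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def tscr(x, b, a, lam, mu=1.0, sigma=1.0, half_width=8.0, n=2048):
    # direct quadrature of the second form of the adaptive TSC-R transform;
    # x is a callable signal model, all parameter defaults are illustrative
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    integrand = (x(b + a * t) * gaussian(t / sigma) / sigma
                 * np.exp(-2j * np.pi * mu * t - 1j * np.pi * lam * (a * t) ** 2))
    return np.sum(integrand) * dt

# hypothetical single linear chirp x(t) = exp(i 2 pi (c t + r t^2 / 2)),
# with phi'(t) = c + r t and phi''(t) = r
c, r = 5.0, 2.0
x = lambda t: np.exp(2j * np.pi * (c * t + 0.5 * r * t ** 2))

b = 1.0
phi_prime = c + r * b                              # phi'(1) = 7
U_ridge = tscr(x, b, a=1.0 / phi_prime, lam=r)     # on the ridge a = mu/phi'(b)
U_off = tscr(x, b, a=0.5 / phi_prime, lam=r)       # off the ridge
print(abs(U_ridge), abs(U_off))
```

On the ridge the integrand has constant phase, so $U_x$ reproduces $x(b)$; off the ridge the magnitude drops by the corresponding Gaussian factor.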
In practice, for a particular signal $x(t)$, its
adaptive CWLT $\widetilde W_x(a, b)$
lies in a region of the scale-time plane:
$$
\{(a, b): \; a_1(b)\le a\le a_2(b), b\in {\mathbb R}\}
$$
for some $0<a_1(b), a_2(b) <\infty$. That is, $\widetilde W_x(a,b)$
is negligible for $(a, b)$ outside this region. Throughout this paper we assume that, for each $b\in {\mathbb R}$, the scale $a$ lies in the interval:
\begin{equation}
\label{a_interval}
a_1(b)\le a\le a_2(b).
\end{equation}
\begin{mdef}
\label{def:function_class}
For $\epsilon_1>0$ and $\epsilon_3>0$, let $\mathcal{E}_{\epsilon_1,\epsilon_3}$ denote the set consisting of (complex) adaptive harmonic models (AHMs) defined by \eqref{AHM} with $A_k(t) \in L_\infty({\mathbb R}), \; A_k(t)>0, \phi_k(t)\in C^3({\mathbb R}), \inf_{t\in {\mathbb R}} \phi_k'(t)>0, \sup_{t\in {\mathbb R}} \phi_k'(t)<\infty$, and
$A_k(t), \phi_k(t)$ satisfying
\begin{eqnarray}
\label{cond_A}
&& |A_k(t+\tau)-A_k(t)|\leq \varepsilon_1 |\tau|A_{k}(t),~~t\in {\mathbb R}, \; k=1,\cdots, K, \\
\label{cond_phi}
&& |\phi'''_{k}(t)| \leq \varepsilon_3, ~~t\in {\mathbb R}, \; k=1, \cdots, K.
\end{eqnarray}
\end{mdef}
\bigskip
For a window function $g\in L_1({\mathbb R})$, denote
\begin{equation}
\label{def_PFT}
\widebreve g (\eta, \lambda):=\int_{{\mathbb R}} g(t) e^{-i2\pi\eta t-i\pi \lambda t^2}dt.
\end{equation}
$\widebreve g(\eta, \lambda)$ is called a polynomial Fourier transform of $g$ \cite{Bi_Stankovic11,Stankovic13}. Note that
$\widebreve g (0, 0)=1$ since $\int_{\mathbb R} g(t) dt=1$.
When $g$ is the Gaussian function defined by
\begin{equation}
\label{def_g}
g(t)=\frac 1{\sqrt {2\pi}} \; e^{-\frac {t^2}2},
\end{equation}
then we have (refer to \cite{Leon_Cohen, LCHJJ18})
\begin{equation}
\label{g_PFT}
\widebreve g(\eta, \lambda)=\frac 1{\sqrt{1+i2\pi\lambda}} e^{-\frac{2\pi^2 \eta ^2}{1+i2\pi \lambda}},
\end{equation}
where $\sqrt{1+i2\pi\lambda}$ denotes the square root of $1+i2\pi\lambda$ lying in the same quadrant as $1+i2\pi\lambda$.
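The closed form \eqref{g_PFT} can be checked against direct quadrature of \eqref{def_PFT}; a short numerical sketch (with arbitrary test values of $\eta$ and $\lambda$):

```python
import numpy as np

def g(t):
    # Gaussian window of unit integral
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def g_breve_numeric(eta, lam, half_width=10.0, n=4096):
    # direct quadrature of the polynomial Fourier transform of g
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    return np.sum(g(t) * np.exp(-2j * np.pi * eta * t - 1j * np.pi * lam * t ** 2)) * dt

def g_breve_closed(eta, lam):
    # closed form; since Re(z) > 0, the principal square root of
    # z = 1 + i 2 pi lam lies in the same quadrant as z, as required
    z = 1 + 2j * np.pi * lam
    return np.exp(-2 * np.pi ** 2 * eta ** 2 / z) / np.sqrt(z)

eta, lam = 0.7, 0.3     # arbitrary test values
print(abs(g_breve_numeric(eta, lam) - g_breve_closed(eta, lam)))
```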
\bigskip
We say $s(t)$ is a linear chirp or a linear frequency modulation signal if
\begin{equation*}
s(t)=A e^{i2\pi \phi(t)}=A e^{i2\pi (ct +\frac 12 r t^2)},
\end{equation*}
where $c$ and $r$ are constants. We use linear chirps to approximate each $x_k(t)$ at any local time. Namely, we write
$$
x_k(b+a t)=x_k(b)e^{i2\pi (\phi'_k(b) a t +\frac 12\phi''_k(b) (at)^2) }+ x_{{\rm r}, k}(a, b, t),
$$
where
$$
x_{{\rm r}, k}(a, b, t)= x_k(b+a t)-x_k(b)e^{i2\pi (\phi'_k(b) a t +\frac 12\phi''_k(b) (at)^2) }.
$$
Note that, as a function of $t$, $x_k(b)e^{i2\pi (\phi'_k(b)at+\frac 12\phi''_k(b)a^2t^2) }$ is a linear chirp.
Thus $x(b+at)$ can be approximated by a superposition of linear chirps at any local time $t$:
\begin{equation*}
x(b+a t)=x_{\rm m}(a,b,t)+x_{\rm r}(a,b,t),
\end{equation*}
where
\begin{eqnarray*}
&&x_{\rm m}(a,b,t):=\sum_{k=1}^K x_k(b)e^{i2\pi (\phi_k'(b) at+\frac 12\phi''_k(b) (at)^2) }\; , \\
&&x_{\rm r}(a,b,t):=\sum_{k=1}^K x_{{\rm r}, k}(a, b, t).
\end{eqnarray*}
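The size of the local linear-chirp remainder can be checked numerically. The sketch below uses a hypothetical single component ($K=1$) whose amplitude and phase satisfy \eqref{cond_A} and \eqref{cond_phi} with explicitly computable $\varepsilon_1$ and $\varepsilon_3$, and verifies that $|x_{{\rm r}, k}(a, b, t)|$ stays below the pointwise bound $A_k(b)\big(\varepsilon_1 a|t| + \frac \pi 3 \varepsilon_3 a^3 |t|^3\big)$ used in the error analysis of this section:

```python
import numpy as np

# hypothetical single component x(t) = A(t) exp(i 2 pi phi(t)) with
#   A(t)   = 1 + 0.1 sin(t):  |A(t+tau) - A(t)| <= eps1 |tau| A(t), eps1 = 1/9,
#   phi(t) = 10 t + sin(2 pi f0 t):  |phi'''(t)| <= eps3 = (2 pi f0)^3
f0 = 0.2
eps1 = 0.1 / 0.9
eps3 = (2 * np.pi * f0) ** 3

A = lambda t: 1 + 0.1 * np.sin(t)
phi = lambda t: 10 * t + np.sin(2 * np.pi * f0 * t)
dphi = lambda t: 10 + 2 * np.pi * f0 * np.cos(2 * np.pi * f0 * t)
d2phi = lambda t: -(2 * np.pi * f0) ** 2 * np.sin(2 * np.pi * f0 * t)
x = lambda t: A(t) * np.exp(2j * np.pi * phi(t))

b, a = 0.3, 0.1
t = np.linspace(-5, 5, 2001)

# local linear-chirp approximant and remainder x_r(a, b, t)
chirp = x(b) * np.exp(2j * np.pi * (dphi(b) * a * t + 0.5 * d2phi(b) * (a * t) ** 2))
x_r = x(b + a * t) - chirp

bound = A(b) * (eps1 * a * np.abs(t) + (np.pi / 3) * eps3 * (a * np.abs(t)) ** 3)
print(np.max(np.abs(x_r) - bound))   # <= 0: the pointwise bound holds
```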
Denote
\begin{equation}
\label{def_MSSO_m1}
\mathfrak{R}_x(a, b, \lambda) :=
\int_{{\mathbb R}} x_{\rm m}(a, b, t) \frac 1 {\sigma(b)} g\big(\frac t {\sigma(b)}\big) e^{-i2\pi \mu t-i\pi \lambda a^2 t^2} dt.
\end{equation}
Then we have
\begin{equation}
\label{def_MSSO_m2}
\mathfrak{R}_x(a, b, \lambda) =\sum_{k=1}^K x_k(b) \widebreve g\Big( \sigma(b)\big(\mu -a \phi'_k(b)\big), \sigma^2(b)a^2\big (\lambda - \phi''_k(b)\big)\Big).
\end{equation}
In the following, we denote
\begin{equation}\label{def_M_u}
\nu=\nu(b):= \min_{1\leq k\leq K} A_{k}(b), ~~ M=M(b):=\sum_{k=1}^K A_{k}(b).
\end{equation}
In the next lemma we provide an error bound for $|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)|$.
\begin{lem}\label{lem1}
Let $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ for some $\epsilon_1>0, \epsilon_3>0$, and let $U_x(a, b, \lambda)$ be its adaptive TSC-R defined by \eqref{def_adaptiveTSC-R} with a window function $g$ and $\mathfrak{R}_x(a, b, \lambda)$ the approximation of $U_x(a, b, \lambda)$ defined by \eqref{def_MSSO_m1}. Then
\begin{equation}
\label{S_R_error}
\big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big|\le M(b)\Pi(a, b),
\end{equation}
where
\begin{equation}
\label{def_Pi0}
\Pi(a, b):=\varepsilon_1 I_1 a \sigma(b) + \frac {\pi}3\varepsilon_3 I_3 a^3 \sigma^3(b),
\end{equation}
{\rm with}
\begin{equation}
\label{def_In}
I_n:=\int_{\mathbb R} \big|g(t) t^n\big| dt, \; n=1, 2, \cdots.
\end{equation}
\end{lem}
\begin{proof} By \eqref{cond_A} and \eqref{cond_phi},
\begin{eqnarray*}
&&|x(b+at)-x_{\rm m}(a, b, t)|=|x_{\rm r}(a, b, t)|\\
&&=\sum_{k=1}^K \Big\{(A_k(b+at)-A_k(b))e^{i2\pi \phi_k(b+at)}
\\&&\hskip 1cm
+x_k(b)e^{i2\pi (\phi'_k(b) at+\frac 12\phi''_k(b)(at)^2) }
\big(
e^{i2\pi (\phi_k(b+at)-\phi_k(b)-\phi_k'(b) at - \frac 12\phi''_k(b)(at)^2)}-1\big)\Big\}\\
&& \le \sum_{k=1}^K\Big\{
\Big|A_k(b+at)-A_k(b)\Big| +A_k(b) \Big |i2\pi \Big(\phi_k(b+at)-\phi_k(b)-\phi_k'(b) at - \frac 12\phi''_k(b)(at)^2\Big)
\Big|\Big\}\\
&& \le \sum_{k=1}^K\Big\{ A_k(b) \varepsilon_1 a |t| + A_k(b) 2\pi \sup_{\xi \in {\mathbb R}} \frac 16 \big |\phi'''_k(\xi) (a t)^3 \big|\Big\}\\
&&\le M(b) \varepsilon_1 a |t| + M(b) \frac \pi 3 \varepsilon_3 a^3 |t|^3.
\end{eqnarray*}
This leads to
\begin{eqnarray*}
&&\big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big| =
\Big| \int_{\mathbb R} (x(b+at)-x_{\rm m}(a, b, t))\frac 1{\sigma(b)}g(\frac t{\sigma(b)}) e^{-i2\pi \mu t-i\pi \lambda (a t)^2}dt \Big|\\
&&\qquad \le\int_{\mathbb R} M(b) \big(\varepsilon_1 a|t| + \frac \pi 3 \varepsilon_3 a^3|t|^3\big)
\big|\frac 1{\sigma(b)}g(\frac t{\sigma(b)}) \big|dt \\
&&\qquad =M(b)\big(\varepsilon_1 I_1 a\sigma(b) + \frac {\pi}3\varepsilon_3 I_3 a^3\sigma^3(b)\big)=M(b)\Pi(a, b).
\end{eqnarray*}
This completes the proof of Lemma \ref{lem1}.
\end{proof}
\begin{mrem}
Recall that we assume in this paper that $a$ lies in the interval shown in \eqref{a_interval}. Thus, we have
\begin{equation}
\label{S_R_error_interal}
\big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big|\le M(b)\Pi_0(b),
\end{equation}
where
\begin{equation}
\label{def_Pi_large}
\Pi_0(b):=\Pi(a_2,b)=\varepsilon_1 I_1 \sigma(b)a_2
+ \frac {\pi}3\varepsilon_3 I_3 \sigma^3(b)a_2^3.
\end{equation}
\hfill $\blacksquare$
\end{mrem}
In the following, we assume that any two IF curves of the signal components $x_k$ and $x_{\ell}$ satisfy
\begin{equation}\label{def_sep_cond_cros}
\hbox{either} ~~ \frac {|\phi'_{k}(t)-\phi'_{\ell}(t)|}{\phi'_{k}(t)+\phi'_{\ell}(t)}\ge \triangle, \; t\in {\mathbb R}, ~~ \hbox{or} ~~ |\phi''_{k}(t)-\phi''_{\ell}(t)| \ge 2 \triangle_1, \; t\in {\mathbb R},
\end{equation}
where $0<\triangle<1, \triangle_1>0$. Clearly $\phi'_{k}(t)$ and $\phi'_{\ell}(t)$ may cross over at some time instant.
For $1\le k\le K$, define
\begin{equation}
\label{def_Zk}
Z_k:=\{(a, b, \lambda): \;
|\mu-a \phi'_k(b)|<\triangle ~ \hbox{and} ~ |\lambda-\phi''_k(b)| < \triangle_1, \; b\in {\mathbb R}\}.
\end{equation}
\begin{lem}\label{lem:Zk_disjoint}
If the phase functions $\phi_k$ satisfy \eqref{def_sep_cond_cros}, then the sets $Z_k, 1\le k\le K$, are disjoint; that is, $Z_\ell \cap Z_{k}=\emptyset$ for $\ell\not=k$.
\end{lem}
The proof of Lemma \ref{lem:Zk_disjoint} is straightforward and is omitted.
\bigskip
Note that for $(a, b, \lambda) \in Z_\ell$, the scale variable $a$ satisfies
$$
\frac {\mu-\triangle}{\phi'_\ell(b)}< a <\frac {\mu+\triangle}{\phi'_\ell(b)}.
$$
Hence for any $(a, b, \lambda) \in Z_\ell$, by \eqref{S_R_error}, we have
\begin{equation}
\label{S_R_error1}
\big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big|\le M(b)\Pi_\ell(b),
\end{equation}
where
\begin{equation}
\label{def_Pi_ell}
\Pi_\ell(b):=\Pi(\frac {\mu+\triangle}{\phi'_\ell(b)},b)=\varepsilon_1 I_1 \sigma(b)\frac {\mu+\triangle}{\phi'_\ell(b)}
+ \frac {\pi}3\varepsilon_3 I_3 \sigma^3(b)\Big(\frac {\mu+\triangle}{\phi'_\ell(b)}\Big)^3.
\end{equation}
\bigskip
For a fixed $b$, and a positive number $\widetilde \epsilon_1$,
we let $\mathcal H_b$ and $\mathcal H_{b, k}$ denote the sets defined by
\begin{equation}
\label{def_cGk}
\begin{array}{l}
\mathcal H_b:=\big\{(a, \lambda): \; |U_x(a, b, \lambda)|>\widetilde \epsilon_1\big\}, \\
\mathcal H_{b, k}:=\Big\{(a, \lambda) \in \mathcal H_b: \;
|\mu-a \phi'_k(b)|<\triangle ~ \hbox{and} ~ |\lambda-\phi''_k(b)| < \triangle_1\Big\}.
\end{array}
\end{equation}
Note that $\mathcal H_b$ and $\mathcal H_{b, k}$ depend on $\widetilde \epsilon_1$; for simplicity of presentation, we drop
$\widetilde \epsilon_1$ from the notation.
Let $\Upsilon(b), \Upsilon_{\ell, k}(b)$ with $\Upsilon(b)\ge \Upsilon_{\ell, k}(b)$ for $k\not=\ell$
be some functions satisfying
\begin{equation}
\label{def_upper_bounds}
\begin{array}{l}
\sup_{\{(a, \lambda): (a, b, \lambda)\not \in \cup_{k=1}^K Z_k\}}\big|\widebreve g\big( \sigma(b)(\mu -a \phi'_k(b)), \sigma^2(b)a^2 (\lambda - \phi''_k(b))\big)\big|\le \Upsilon(b), \\
\sup_{\{(a, \lambda): (a, b, \lambda)\not \in Z_\ell\}}
\big|\widebreve g\big( \sigma(b)(\mu -a \phi'_k(b)), \sigma^2(b)a^2 (\lambda - \phi''_k(b))\big)\big | \le \Upsilon_{\ell, k}(b).
\end{array}
\end{equation}
For estimates of the quantities $\Upsilon(b)$ and $\Upsilon_{\ell, k}(b)$ when $g$ is the Gaussian window function, refer to Section 3.
\bigskip
Next we provide another lemma which will be used to derive our main theorem. In the following lemma and the rest of this paper, $\sum_{k\not= \ell}$ denotes $\sum_{\{k: ~ k\not= \ell, 1\le k\le K\}}$.
\begin{lem}
Let $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ for some $\epsilon_1>0,\epsilon_3>0$, and $U_x(a, b, \lambda)$ be the adaptive TSC-R of $x(t)$ with a window function $g$. Then for any $(a, \lambda)\in \mathcal H_{b, \ell}$,
\begin{eqnarray}
\label{U_with_ell}
&&\big|U_x(a, b, \lambda)-x_\ell(b) \widebreve g\big( \sigma(b)(\mu -a \phi'_\ell(b)), \sigma^2(b)a^2(\lambda - \phi''_\ell(b))\big) \big|\le {\rm Err}_\ell(b),
\end{eqnarray}
where
\begin{equation}
\label{def_Err}
{\rm Err}_\ell(b):=M(b)\Pi_\ell(b)+\sum_{k\not= \ell} A_k(b) \Upsilon_{\ell, k}(b)
\end{equation}
with $\Pi_\ell(b)$ defined by \eqref{def_Pi_ell}.
\end{lem}
\begin{proof} By \eqref{def_MSSO_m2}, we have for any $(a, \lambda)\in \mathcal H_{b, \ell}$,
\begin{eqnarray*}
&& \big|\mathfrak{R}_x(a, b, \lambda)-x_\ell(b) \widebreve g\big( \sigma(b)(\mu -a \phi'_\ell(b)), \sigma^2(b)a^2(\lambda - \phi''_\ell(b))\big) \big| \\
&& =\Big|\sum_{k\not= \ell } x_k(b) \widebreve g\big( \sigma(b)(\mu -a \phi'_k(b)), \sigma^2(b)a^2 (\lambda - \phi''_k(b))\big)\Big|\\
&& \le \sum_{k\not= \ell } A_k(b) \Big| \widebreve g\big( \sigma(b)(\mu -a \phi'_k(b)), \sigma^2(b)a^2 (\lambda - \phi''_k(b))\big)\Big|
\le \sum_{k\not= \ell } A_k(b) \Upsilon_{\ell, k}(b).
\end{eqnarray*}
This, along with \eqref{S_R_error1}, leads to
\begin{eqnarray*}
&& \hbox{Left hand side of \eqref{U_with_ell}}\\
&& \le \big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big|+
\big|\mathfrak{R}_x(a, b, \lambda)-x_\ell(b) \widebreve g\big( \sigma(b)(\mu -a \phi'_\ell(b)), \sigma^2(b)a^2(\lambda - \phi''_\ell(b))\big) \big| \\
&& \le M(b)\Pi_\ell(b)+\sum_{k\not= \ell } A_k(b) \Upsilon_{\ell, k}(b).
\end{eqnarray*}
Thus \eqref{U_with_ell} holds true.
\end{proof}
Next we have the following theorem.
\begin{theo}\label{theo:adaptive TSC-R1}
Let $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ for some $\epsilon_1>0,\epsilon_3>0$, and $U_x(a, b, \lambda)$ be the adaptive TSC-R of $x(t)$ with a window function $g$. Suppose $x(t)$ satisfies \eqref{def_sep_cond_cros} for some $0<\triangle<1$ and $\triangle_1>0$, and
\begin{equation}
\label{theo1_cond1}
2M(b)\big(\Upsilon(b)+\Pi_0(b)\big)\le \nu(b)
\end{equation}
holds. Let $\mathcal H_b$ and $\mathcal H_{b, k}$ be the sets defined by \eqref{def_cGk} with a function $\widetilde \epsilon_1=\widetilde \epsilon_1(b)>0$ satisfying
\begin{equation}
\label{cond_ep1}
M(b)\big(\Upsilon(b)+\Pi_0(b)\big)\le
\widetilde \epsilon_1 \le \nu(b)-M(b)\big(\Upsilon(b)+\Pi_0(b)\big).
\end{equation}
Then the following statements hold.
\begin{enumerate}
\item[{\rm (a)}] $\mathcal H_b=\cup_{k=1}^K \mathcal H_{b, k}$.
\item[{\rm (b)}] The sets $\mathcal H_{b, k}, 1\le k\le K$ are disjoint, i.e. $\mathcal H_{b, k}\cap {\mathcal H}_{b, k\rq{}}=\emptyset$ if $k\not= k\rq{}$.
\item[{\rm (c)}] Each set $\mathcal H_{b, k}$ is non-empty.
\end{enumerate}
\end{theo}
We delay the proof of Theorem \ref{theo:adaptive TSC-R1} to the end of this section.
\bigskip
Denote
\begin{equation}
\label{def_max_eta}
(\widehat a_\ell, \widehat \lambda_\ell) =(\widehat a_\ell(b), \widehat \lambda_\ell(b)):={\rm argmax}_{(a, \lambda) \in\mathcal H_{b, \ell} }|U_x(a, b, \lambda)|, ~~ \ell=1, \cdots, K.
\end{equation}
From Theorem \ref{theo:adaptive TSC-R1}, we know $\widehat a_\ell(b)$ and $\widehat \lambda_\ell(b)$ are well defined. We will use them to estimate the IF $\phi'_\ell(b)$ and the chirp rate $\phi''_\ell(b)$, and to recover $x_\ell(b)$. More precisely, we have the following
TSC-R operator scheme for IF estimation and component recovery.
\begin{alg} {\bf (Time-Scale-Chirp\_rate operator scheme)} \; Suppose $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ satisfies the conditions in Theorem \ref{theo:adaptive TSC-R1}.
\begin{itemize}
\item[] {\bf Step 1.} Calculate $\widehat a_\ell(b)$ and $\widehat \lambda_\ell(b)$ by \eqref{def_max_eta}.
\item[] {\bf Step 2.} Obtain IF and chirp rate estimates by
\begin{equation}
\label{IF_estimate}
\phi'_\ell(b) \approx \frac \mu{\widehat a_\ell(b)}, \quad
\phi''_\ell(b) \approx \widehat \lambda_\ell(b),
\end{equation}
\item[] {\bf Step 3.} Obtain the recovered $\ell$-th component by
\begin{equation}
\label{comp_recover}
x_\ell(b)\approx U_x(\widehat a_\ell, b, \widehat \lambda_\ell).
\end{equation}
\hfill $\blacksquare$
\end{itemize}
\end{alg}
Observe that the recovered component is obtained
simply by substituting the time-scale ridge $\widehat a_\ell(b)$ and the time-chirp-rate ridge $\widehat \lambda_\ell(b)$ into the adaptive TSC-R, which differs from the SST method, where the recovered $x_k(t)$ is computed by a definite integral along each estimated IF curve in the SST plane.
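A minimal numerical sketch of the three steps (all parameter values, namely $\mu=8$, $\sigma(b)\equiv 2$, the grids, and the test signal, are illustrative choices, not prescribed by the theory), applied to two linear chirps whose IF curves cross at $b=0$ while their chirp rates $\phi_1''=2$ and $\phi_2''=-2$ differ:

```python
import numpy as np

mu, sigma = 8.0, 2.0     # illustrative choices of mu and sigma(b)

def U(x, b, a, lam, T=10.0, n=4096):
    # second form of the adaptive TSC-R transform, by direct quadrature
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    g = np.exp(-(t / sigma) ** 2 / 2) / (np.sqrt(2 * np.pi) * sigma)
    return np.sum(x(b + a * t) * g *
                  np.exp(-2j * np.pi * mu * t - 1j * np.pi * lam * (a * t) ** 2)) * dt

# two crossing linear chirps: phi_1'(t) = 8 + 2t and phi_2'(t) = 8 - 2t
# coincide at t = 0, while phi_1'' = 2 and phi_2'' = -2 differ
x = lambda t: (np.exp(2j * np.pi * (8 * t + t ** 2)) +
               np.exp(2j * np.pi * (8 * t - t ** 2)))

b = 0.0
a_grid = np.linspace(0.8, 1.2, 41)       # scales around mu / phi'(0) = 1
lam_grid = np.linspace(-4.0, 4.0, 81)    # candidate chirp rates

# Step 1: ridge (a, lambda) in each of the two disjoint chirp-rate zones
def ridge(lams):
    vals = np.array([[abs(U(x, b, a, l)) for l in lams] for a in a_grid])
    i, j = np.unravel_index(np.argmax(vals), vals.shape)
    return a_grid[i], lams[j]

a1, l1 = ridge(lam_grid[lam_grid > 0])
a2, l2 = ridge(lam_grid[lam_grid < 0])

# Step 2: IF and chirp-rate estimates
print(mu / a1, l1)   # estimates of phi_1'(0) = 8 and phi_1''(0) = 2
print(mu / a2, l2)   # estimates of phi_2'(0) = 8 and phi_2''(0) = -2

# Step 3: component recovery at the ridge
print(U(x, b, a1, l1))   # approximately x_1(0) = 1
```

Although the two IF estimates coincide at $b=0$, the two ridges remain separated in the chirp-rate direction, which is the point of the 3D transform.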
Next we study the error bounds for these approximations. In this regard, we introduce admissible window functions.
\begin{mdef}\label{definition2} {\rm ({\bf Admissible window function})} \;
A function $g(t)$ in $L_2({\mathbb R})$
is called an admissible window function if $\int_{\mathbb R} g(t)dt=1$, $g$ decays to zero at $\infty$, and $g$ satisfies the following conditions.
\begin{itemize}
\item[{\rm (a)}] $|\widebreve g(\eta, \lambda)|$ can be written as $f(|\eta|, |\lambda|)$ for some function $f(\xi_1, \xi_2)$ defined on $0\le \xi_1, \xi_2< \infty$.
\item[{\rm (b)}] There exist $c_0$ with $0<c_0<1$ and (strictly) decreasing non-negative continuous functions $\beta(\xi)$ and $\gamma(\xi)$ on $[0, \infty)$ with $\beta(0)=1$, $\gamma(0)=1$
such that if $f$ in {\rm (a)} satisfies
\begin{equation}
\label{ineq_cond}
1-c\le f(\eta, \lambda),
\end{equation}
for some $c$ with $0\le c \le c_0$ and some $\eta, \lambda\ge 0$, then
\begin{equation}
\label{cond_gb_gga}
1-c\le \beta(\eta), \quad 1-c\le \gamma(\lambda).
\end{equation}
\end{itemize}
\end{mdef}
\begin{theo}\label{theo:adaptive TSC-R2}
Let $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ for some $\epsilon_1>0,\epsilon_3>0$, and $U_x(a, b, \lambda)$ be the adaptive TSC-R of $x(t)$ with an admissible window function $g$ for certain $c_0$ such that \eqref{cond_gb_gga} holds. Suppose \eqref{def_sep_cond_cros}
and \eqref{theo1_cond1} hold and that for $1\le \ell \le K$, $2{\rm Err}_\ell(b)/A_\ell(b)\le c_0$, where ${\rm Err}_\ell(b)$ is defined by \eqref{def_Err}. Let $\mathcal H_b$ and $\mathcal H_{b, k}$ be the sets defined by \eqref{def_cGk} for some
$\widetilde \epsilon_1$ satisfying \eqref{cond_ep1}. Let $\widehat a_\ell(b), \widehat \lambda_\ell(b)$ be the functions defined by \eqref{def_max_eta}. Then the following statements hold.
\begin{enumerate}
\item[{\rm (a)}] For $\ell=1, 2, \cdots, K$,
\begin{eqnarray}
&&\label{phi_est}
|\mu - \widehat a_{\ell}(b)\phi_{\ell}'(b)|\le {\rm Bd}_{1, \ell}:=\frac{1}{\sigma(b)} \beta^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big), \\
&&\label{phi_est_gl}
|\widehat\lambda_{\ell}(b)-\phi_{\ell}''(b)|\le {\rm Bd}_{2, \ell}:=\frac{1}{\sigma^2(b)\widehat a_\ell^2} \gamma^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big).
\end{eqnarray}
\item[{\rm (b)}] For $\ell=1, 2, \cdots, K$,
\begin{equation}
\label{comp_xk_est}
\big|U_{x}(\widehat a_\ell, b, \widehat \lambda_\ell)- x_\ell(b)\big|
\le{\rm Bd}_{3, \ell},
\end{equation}
where
\begin{equation*}
{\rm Bd}_{3, \ell}:={\rm Err}_\ell(b)+2\pi I_1 A_\ell(b) \beta^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)
+\pi I_2 A_\ell(b) \gamma^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)
\end{equation*}
with $I_1$ and $I_2$ defined by \eqref{def_In}.
\item[{\rm (c)}]
If, in addition, the window function $g(t)\ge 0$ for $t\in {\mathbb R}$, then for $\ell=1, 2, \cdots, K$,
\begin{equation}
\label{abs_IA_est}
\big| |U_x(\widehat a_{\ell}, b, \widehat \lambda_\ell)|-A_{\ell}(b) \big|\le {\rm Err}_\ell(b).
\end{equation}
\end{enumerate}
\end{theo}
Note that since $\lim_{\xi\to 1^-} \beta^{-1}(\xi)=0$ and $\lim_{\xi\to 1^-} \gamma^{-1}(\xi)=0$, the error bounds ${\rm Bd}_{1, \ell}$, ${\rm Bd}_{2, \ell}$, ${\rm Bd}_{3, \ell}$ are small as long as ${\rm Err}_\ell(b)$ is small. We will study these error bounds in more details in the next section when $g$ is the Gaussian window function.
As shown in \eqref{phi_est}--\eqref{abs_IA_est}, $\frac \mu{\widehat a_\ell(b)}$ in \eqref{IF_estimate} is an estimate of $\phi'_\ell(b)$, and $U_{x}(\widehat a_\ell, b, \widehat \lambda_\ell)$ is the recovered component $x_\ell(b)$.
For a real-valued $x_\ell(t)$, we will use
\begin{eqnarray}
\label{comp_recover_real}
&&x_\ell(b)\approx 2{\rm Re}\Big(U_x(\widehat a_\ell, b, \widehat \lambda_\ell)\Big).
\end{eqnarray}
In addition, the chirp rate $\phi''_\ell(b)$ and the IA $A_\ell(b)$ can be estimated by $\widehat \lambda_\ell(b)$ and $|U_x(\widehat a_{\ell}, b, \widehat \lambda_\ell)|$, respectively.
\begin{mrem}
Adaptive TSC-R defined by \eqref{def_adaptiveTSC-R} can be extended to adaptive CWLT with a higher order polynomial phase function. More precisely, one may define
\begin{eqnarray}
\nonumber U_x(a, b, \lambda_1, \cdots, \lambda_{m}) \hskip -0.6cm && := \int_{-\infty}^\infty x(t) \frac 1{a}\overline{\psi_{\sigma(b)} \big(\frac{t-b}a\big)} e^{ -i2\pi \sum_{\ell=2}^{m+1}\lambda_{\ell-1}\frac{(t-b)^\ell}{\ell!}} dt\\
\label{def_adaptiveTSC-R_high}
&& = \int_{{\mathbb R}} x(b+at) \frac 1 {\sigma(b)} g\big(\frac t {\sigma(b)}\big) e^{-i2\pi \mu t
-i2\pi \sum_{\ell=2}^{m+1}\lambda_{\ell-1}\frac{(at)^\ell}{\ell!}}
dt.
\end{eqnarray}
$U_x(a, b, \lambda_1, \cdots, \lambda_{m})$ can be used for IF estimation and mode recovery of a multicomponent signal whose component IFs $\phi^{{\prime}}_k(t)$ and $\phi^{{\prime}}_\ell(t)$ have a \lq\lq{}high-order\rq\rq{} crossover at some time $t_0$: $\phi^{(j)}_k(t_0)=\phi^{(j)}_\ell(t_0), 1\le j\le m$.
One can establish theorems similar to Theorems \ref{theo:adaptive TSC-R1} and \ref{theo:adaptive TSC-R2} for $U_x(a, b, \lambda_1, \cdots, \lambda_{m})$.
\hfill $\blacksquare$
\end{mrem}
\bigskip
Finally in this section we present the proofs of Theorems \ref{theo:adaptive TSC-R1} and \ref{theo:adaptive TSC-R2}. For simplicity of presentation, we write $\sigma$ for $\sigma(b)$.
{\bf Proof of Theorem \ref{theo:adaptive TSC-R1}(a).} Clearly $\cup_{k=1}^K \mathcal H_{b, k}\subseteq \mathcal H_b$. Next we show $\mathcal H_b\subseteq \cup_{k=1}^K \mathcal H_{b, k}$.
Let $(a, \lambda)\in \mathcal H_b$, and suppose that $(a, \lambda) \not \in \mathcal H_{b, k}$ for every $k$. Then $(a, b, \lambda) \not \in \cup_{k=1}^K Z_k$.
Hence, by \eqref{def_upper_bounds}, we have
\begin{eqnarray*}
\big|\mathfrak{R}_x(a, b, \lambda)\big| \hskip -0.6cm &&=\Big|\sum_{k=1}^K x_k(b) \widebreve g\big( \sigma(\mu -a \phi'_k(b)), \sigma^2a^2 (\lambda - \phi''_k(b))\big)\Big|\\
&& \le \sum_{k=1}^K A_k(b) \Upsilon(b)=M(b)\Upsilon(b).
\end{eqnarray*}
This, together with \eqref{S_R_error_interal}, implies
\begin{eqnarray*}
\big|U_x(a, b, \lambda)\big|\hskip -0.6cm && \le \big|U_x(a, b, \lambda)-\mathfrak{R}_x(a, b, \lambda)\big|+\big|\mathfrak{R}_x(a, b, \lambda)\big|\\
&& \le M(b)\Pi_0(b)+ M(b)\Upsilon(b) \le \widetilde \epsilon_1,
\end{eqnarray*}
contradicting the fact that $(a, \lambda)\in \mathcal H_b$. Hence there must exist an $\ell$ such that $(a, \lambda)\in \mathcal H_{b,\ell}$. This shows $\mathcal H_b=\cup_{k=1}^K \mathcal H_{b, k}$.
\bigskip
{\bf Proof of Theorem \ref{theo:adaptive TSC-R1}(b).} Observe that $\mathcal H_{b, k}=\mathcal H_b\cap \{(a, \lambda): (a, b, \lambda)\in Z_ k\}$. Since $Z_k, 1\le k \le K$ are disjoint, we conclude that $\mathcal H_{b, k}, 1\le k \le K$ are also disjoint.
\bigskip
{\bf Proof of Theorem \ref{theo:adaptive TSC-R1}(c).}
To show that each $\mathcal H_{b, \ell}$ is non-empty, it is enough to show $(\frac{\mu}{\phi'_\ell(b)}, \phi''_\ell(b))\in \mathcal H_b$. Indeed, with $\widebreve g(0, 0)=1$,
\eqref{U_with_ell} with $\eta=\frac{\mu}{\phi'_\ell(b)}, \lambda=\phi''_\ell(b)$
implies
\begin{eqnarray*}
&&
\big|U_x(\frac{\mu}{\phi'_\ell(b)}, b, \phi''_\ell(b))\big|\ge \big|x_\ell(b) \widebreve g(0, 0)\big|- {\rm Err}_\ell(b)\\
&& = A_\ell(b)-M(b)\Pi_\ell(b)-\sum_{k\not= \ell} A_k(b) \Upsilon_{\ell, k}(b)\\
&& > \nu(b)-M(b)\Pi_0(b)-M(b)\Upsilon(b)\ge \widetilde \epsilon_1.
\end{eqnarray*}
Thus $(\frac{\mu}{\phi'_\ell(b)}, \phi''_\ell(b))\in \mathcal H_b$. Hence $(\frac{\mu}{\phi'_\ell(b)}, \phi''_\ell(b))\in \mathcal H_{b,\ell}$, and $\mathcal H_{b,\ell}$ is non-empty.
\hfill $\blacksquare$
\bigskip
{\bf Proof of Theorem \ref{theo:adaptive TSC-R2}(a).}
From \eqref{U_with_ell}, we have
\begin{equation}
\label{est_xk1}
\big|U_x(\widehat a_\ell, b, \widehat \lambda_\ell)\big| \le
\big|x_\ell(b) \widebreve g\big(\sigma(\mu-\widehat a_\ell \phi_\ell'(b)), \sigma^2\widehat a_\ell^2(\widehat \lambda_\ell-\phi_\ell''(b)) \big)\big|+ {\rm Err}_\ell(b).
\end{equation}
On the other hand, by the definitions of $\widehat a_\ell, \widehat \lambda_\ell$ and by \eqref{U_with_ell} with $a=\frac\mu{\phi_\ell'(b)}, \lambda=\phi_\ell''(b)$, we have
\begin{equation}
\label{est_xk2}
\big|U_x(\widehat a_\ell, b, \widehat \lambda_\ell)\big| \ge \big|U_x(\frac\mu{\phi_\ell'(b)}, b, \phi_\ell''(b))\big |\ge |x_\ell(b) \widebreve g(0, 0) \big| - {\rm Err}_\ell(b)
=A_\ell(b)- {\rm Err}_\ell(b).
\end{equation}
This, together with \eqref{est_xk1}, implies
$$
A_\ell(b) - {\rm Err}_\ell(b)\le A_\ell(b)
\big|\widebreve g\big(\sigma(\mu-\widehat a_\ell \phi_\ell'(b)), \sigma^2\widehat a_\ell^2(\widehat \lambda_\ell-\phi_\ell''(b)) \big)\big|
+ {\rm Err}_\ell(b).
$$
Thus we have
\begin{equation}
\label{est_xk3}
1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\le f\big(\sigma|\mu-\widehat a_\ell \phi_\ell'(b)|, \sigma^2 \widehat a_\ell^2 |\widehat \lambda_\ell-\phi_\ell''(b)| \big).
\end{equation}
Since $2{\rm Err}_\ell(b)/A_\ell(b)\le c_0$, \eqref{est_xk3} along with \eqref{ineq_cond} and \eqref{cond_gb_gga} leads to
$$
1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\le \beta \big(\sigma\big|\mu-\widehat a_\ell \phi_\ell'(b)\big|\big), \quad
1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\le \gamma \big(\sigma^2\widehat a_\ell^2 \big|\widehat \lambda_\ell-\phi_\ell''(b)\big|\big).
$$
Since $\beta(\xi)$ and $\gamma(\xi)$ are decreasing, we have
$$
\sigma\big| \mu-\widehat a_\ell \phi_\ell'(b) \big|\le \beta^{-1}\big(1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\big), \;
\sigma^2\widehat a_\ell^2\big| \widehat \lambda_\ell-\phi_\ell''(b) \big|\le \gamma^{-1}\big(1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\big).
$$
This shows \eqref{phi_est} and \eqref{phi_est_gl}.
\bigskip
{\bf Proof of Theorem \ref{theo:adaptive TSC-R2}(b).}
From \eqref{U_with_ell}, we have
\begin{eqnarray*}
&&\big|U_x(\widehat a_\ell, b, \widehat\lambda_\ell)- x_\ell(b)\big|\le \big|U_x(\widehat a_\ell, b, \widehat\lambda_\ell) - x_\ell(b) \widebreve g\big( \sigma(\mu - \widehat a_\ell \phi'_\ell(b)), \sigma^2\widehat a_\ell^2(\widehat \lambda_\ell - \phi''_\ell(b))\big)\big|\\
&& \qquad +\big |x_\ell(b) \widebreve g\big( \sigma(\mu - \widehat a_\ell \phi'_\ell(b)), \sigma^2\widehat a_\ell^2(\widehat \lambda_\ell - \phi''_\ell(b))\big)
- x_\ell(b)\big|
\\
&&\le {\rm Err}_\ell(b)+A_\ell(b) \Big| \int_{\mathbb R} \frac 1\sigma g(\frac t\sigma)\Big(e^{-i2\pi (\mu - \widehat a_\ell \phi'_\ell(b))t -i\pi \widehat a_\ell^2(\widehat \lambda_\ell- \phi_\ell''(b)) t^2}-1\Big) dt \Big|\\
&&\le {\rm Err}_\ell(b)+A_\ell(b) \int_{\mathbb R} \Big| \frac 1\sigma g(\frac t\sigma)\Big| \; \big|2\pi (\mu - \widehat a_\ell \phi'_\ell(b))t +\pi\widehat a_\ell^2 (\widehat \lambda_\ell- \phi_\ell''(b)) t^2 \big| dt \\
&&\le {\rm Err}_\ell(b)+A_\ell(b) 2\pi |\mu - \widehat a_\ell \phi'_\ell(b)|\int_{\mathbb R} \Big| \frac 1\sigma g(\frac t\sigma) t \Big| dt +A_\ell(b)\pi\widehat a_\ell^2 \big|\widehat \lambda_\ell- \phi_\ell''(b)\big| \int_{\mathbb R} \frac 1\sigma \Big|g(\frac t\sigma)\Big| t^2 dt \\
&&= {\rm Err}_\ell(b)+A_\ell(b) 2\pi I_1 \sigma |\mu-\widehat a_\ell\phi_\ell'(b)|+ A_\ell(b)\pi I_2 \sigma^2\widehat a_\ell^2
\big|\widehat \lambda_\ell- \phi_\ell''(b)\big|\\
&&\le {\rm Err}_\ell(b)+2\pi I_1 A_\ell(b) \beta^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)+
\pi I_2 A_\ell(b) \gamma^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big),
\end{eqnarray*}
where the last inequality follows from \eqref{phi_est} and \eqref{phi_est_gl}.
This completes the proof of \eqref{comp_xk_est}.
\bigskip
{\bf Proof of Theorem \ref{theo:adaptive TSC-R2}(c).} Note that when $g(t)\ge 0$, by the assumption $\int_{\mathbb R} g(t)dt=1$, we have that $|\widebreve g(\eta, \lambda)|\le 1$ for any $\eta, \lambda\in {\mathbb R}$. This fact, together with
\eqref{est_xk1}, implies
\begin{equation*}
\big|U_x(\widehat a_\ell, b, \widehat \lambda_\ell)\big| \le
A_\ell(b) + {\rm Err}_\ell(b).
\end{equation*}
This and \eqref{est_xk2} lead to \eqref{abs_IA_est}. This completes the proof of Theorem \ref{theo:adaptive TSC-R2}(c).
\hfill $\blacksquare$
\section{Time, scale and chirp rate signal recovery operator with Gaussian window function}
The Gaussian function is the only function (up to scalar multiplication, shifts and modulations) that attains the optimal time-frequency resolution. Hence it has been used in many applications.
In this section we consider the adaptive TSC-R with the window function being the Gaussian function
and obtain more precise estimates for the error bounds ${\rm Bd}_{1, \ell}$, ${\rm Bd}_{2, \ell}$, ${\rm Bd}_{3, \ell}$ in Theorem \ref{theo:adaptive TSC-R2}.
In the following $g$ is always the Gaussian function given in \eqref{def_g}.
From \eqref{g_PFT}, we have that
$|\widebreve g(\eta, \lambda)|= f(|\eta|, |\lambda|)$ with
\begin{equation}
\label{def_f}
f(\eta, \lambda):=\frac 1{(1+4\pi^2\lambda^2)^{1/4}} e^{-\frac{2\pi^2 \eta ^2}{1+4\pi^2 \lambda^2}} \; .
\end{equation}
First one can obtain that
\begin{equation}
\label{ineq_gaussian}
|\widebreve g(\eta, \lambda)|\le\min\Big\{\frac 1{(2\pi^2 \eta ^2)^{1/4}},
\frac 1{(1+4\pi^2\lambda^2)^{1/4}}\Big\}.
\end{equation}
Indeed, if
$2\pi^2 \eta ^2\ge 1+4\pi^2 \lambda^2$, then
\begin{eqnarray*}
|\widebreve g(\eta, \lambda)|\hskip -0.6cm &&\le \frac 1{(1+4\pi^2\lambda^2)^{1/4}} \frac{1+4\pi^2 \lambda^2}{2\pi^2 \eta ^2}
=\frac{(1+4\pi^2 \lambda^2)^{3/4}}{2\pi^2 \eta ^2}
\le \frac 1{(2\pi^2 \eta ^2)^{1/4}};
\end{eqnarray*}
otherwise, for
$2\pi^2 \eta ^2< 1+4\pi^2 \lambda^2$, we have
\begin{eqnarray*}
|\widebreve g(\eta, \lambda)|\hskip -0.6cm &&\le \frac 1{(1+4\pi^2\lambda^2)^{1/4}}.
\end{eqnarray*}
Hence \eqref{ineq_gaussian} holds.
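As a quick numerical sanity check (not part of the proof), one can evaluate $f(\eta, \lambda)$ from \eqref{def_f} on a grid and compare it with the bound in \eqref{ineq_gaussian}; the following Python sketch does this, with grid ranges chosen purely for illustration.

```python
import numpy as np

def f(eta, lam):
    """|g-breve(eta, lam)| for the Gaussian window, as in (def_f)."""
    d = 1.0 + 4.0 * np.pi**2 * lam**2
    return d**(-0.25) * np.exp(-2.0 * np.pi**2 * eta**2 / d)

# grid over eta > 0 and lam >= 0 (illustrative ranges)
eta = np.linspace(0.01, 5.0, 300)
lam = np.linspace(0.0, 5.0, 300)
E, L = np.meshgrid(eta, lam)

# bound from (ineq_gaussian): min{(2 pi^2 eta^2)^{-1/4}, (1+4 pi^2 lam^2)^{-1/4}}
bound = np.minimum((2.0 * np.pi**2 * E**2)**(-0.25),
                   (1.0 + 4.0 * np.pi**2 * L**2)**(-0.25))
holds = bool(np.all(f(E, L) <= bound + 1e-12))
```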
Next let us consider the quantities $\Upsilon(b)$ and $\Upsilon_{\ell, k}(b)$ satisfying \eqref{def_upper_bounds}.
Suppose $(a, b, \lambda)\not \in Z_k$. By \eqref{ineq_gaussian}, we have
\begin{eqnarray}
\nonumber &&\big|\widebreve g\big( \sigma (\mu -a \phi'_k(b)), \sigma^2a^2 (\lambda - \phi''_k(b))\big)\big| \\
\label{Ineq_Gaussian1}
&&\qquad \le \min\Big\{\frac 1{(2\pi^2)^{1/4} \sqrt{\sigma |\mu -a \phi'_k(b)|}},
\frac 1{\big(1+4\pi^2 \sigma^4a^4 (\lambda - \phi''_k(b))^2 \big)^{1/4}}\Big\}.
\end{eqnarray}
If $|\mu -a \phi'_k(b)|\ge \triangle$, then
$$
\frac 1{(2\pi^2)^{1/4} \sqrt{\sigma |\mu -a \phi'_k(b)|}}\le \frac 1{(2\pi^2)^{1/4}\sqrt \triangle \sqrt{\sigma} };
$$
otherwise, if $|\mu -a \phi'_k(b)|<\triangle$, then $|\lambda - \phi''_k(b)|\ge \triangle_1$. Therefore,
$$
\frac 1{\big(1+4\pi^2 \sigma^4a^4 (\lambda - \phi''_k(b))^2 \big)^{1/4}}\le \frac 1{\sqrt{2\pi \triangle_1} a\sigma}
\le \frac 1{\sqrt{2\pi \triangle_1} a_1 \sigma}.
$$
Hence, by \eqref{Ineq_Gaussian1}, we have
\begin{eqnarray}
\label{Ineq_Gaussian2} &&\big|\widebreve g\big( \sigma (\mu -a \phi'_k(b)), \sigma^2a^2 (\lambda - \phi''_k(b))\big)\big|
\le \max\Big\{\frac 1{(2\pi^2)^{1/4}\sqrt \triangle \sqrt{\sigma} }, \frac 1{\sqrt{2\pi \triangle_1} a_1 \sigma}\Big\}.
\end{eqnarray}
Thus we may let
$$
\Upsilon(b)=\frac 1{\sqrt{\sigma}\min \big\{(2\pi^2)^{1/4}\sqrt \triangle, a_1\sqrt{2\pi \triangle_1 \sigma} \big\} }.
$$
Since $Z_\ell$ and $Z_k$ do not overlap for $\ell\not= k$, we may simply let $\Upsilon_{\ell, k}(b)=\Upsilon(b)$.
For such a choice of $\Upsilon(b)$ and $\Upsilon_{\ell, k}(b)$, \eqref{def_upper_bounds} holds. Note that if $\sigma=\sigma(b)$ is large,
then $\Upsilon(b)$ and $\Upsilon_{\ell, k}(b)$ will be small.
\bigskip
Next we consider the functions $\beta(\xi), \gamma(\xi)$ satisfying \eqref{cond_gb_gga} for $f(\eta, \lambda)$ given by
\eqref{def_f}. Clearly, we may choose
\begin{equation}
\label{def_gga}
\gamma(\lambda)=\frac 1{(1+4\pi^2\lambda^2)^{1/4}}.
\end{equation}
Next we will show that for this $f(\eta, \lambda)$, if $c_0$ in \eqref{ineq_cond} satisfies
$c_0\le 1-e^{-1/4}$, then we can choose
\begin{equation}
\label{def_gb}
\beta(\eta)=e^{-2\pi^2 \eta ^2}.
\end{equation}
To this end, we first establish the following two lemmas.
\begin{lem}\label{lem4}
Let $f(\eta, \lambda)$ be the function defined by \eqref{def_f}. If $0\le\eta\le \frac 1{2\pi \sqrt 2}$, then
\begin{equation}
\label{ineq_f_1}
f(\eta, \lambda)\le f(\eta, 0)=e^{-2\pi^2 \eta ^2}, \; \lambda \in [0, \infty).
\end{equation}
\end{lem}
\begin{proof}
Computing $\partial_\lambda f(\eta, \lambda)$, one finds that, for fixed $\eta$, $f(\eta, \lambda)$ is decreasing in $\lambda$ on the set of $\lambda\ge 0$ satisfying
\begin{equation}
\label{small_ineq}
8\pi^2 \eta^2\le 1+4\pi^2 \lambda^2.
\end{equation}
Notice that \eqref{small_ineq} holds true for any $\lambda\ge 0$ if $0\le\eta\le \frac 1{2\pi \sqrt 2}$. Hence, \eqref{ineq_f_1} holds true.
\end{proof}
\begin{lem}\label{lem5}
Let $f(\eta, \lambda)$ be the function defined by \eqref{def_f}. Let $c$ be a number satisfying $0\le c\le 1-e^{-1/4}$. Then
$1-c\le f(\eta, \lambda)$ for some $\eta, \lambda\ge 0$ implies $0\le\eta\le \frac 1{2\pi \sqrt 2}$.
\end{lem}
\begin{proof}
Assume $\eta > \frac 1{2\pi \sqrt 2}$. Let $\lambda\ge 0$. If $1+4\pi ^2 \lambda^2 \le 8\pi ^2 \eta^2$, then
$$
f(\eta, \lambda)\le \frac 1{(1+4\pi^2\lambda^2)^{1/4}} e^{-1/4} <e^{-1/4}.
$$
Otherwise, when $1+4\pi ^2 \lambda^2 > 8\pi ^2 \eta^2$, let $\lambda_0>0$ be the number such that
$1+4\pi ^2 \lambda_0^2 =8\pi ^2 \eta^2$. As mentioned in the proof of Lemma \ref{lem4}, $f(\eta, \lambda)$ is a decreasing function in $\lambda$ for $8\pi^2 \eta^2\le 1+4\pi^2 \lambda^2$. Since $\lambda_0<\lambda$, we have
$$f(\eta, \lambda)\le f(\eta, \lambda_0)=\frac 1{(8\pi^2 \eta^2)^{1/4}}e^{-1/4}<e^{-1/4}.
$$
So in either case, we have $f(\eta, \lambda)<e^{-1/4}$, contradicting
$$
f(\eta, \lambda)\ge 1-c\ge e^{-1/4}.
$$
Therefore $\eta \le \frac 1{2\pi \sqrt 2}$. This completes the proof of Lemma \ref{lem5}.
\end{proof}
Lemmas \ref{lem4} and \ref{lem5} immediately lead to the following proposition.
\begin{pro}
Let $f(\eta, \lambda)$, $\beta(\eta)$ and $\gamma(\lambda)$ be the functions defined by \eqref{def_f}, \eqref{def_gb} and \eqref{def_gga} respectively. Suppose $c$ satisfies $0\le c\le 1-e^{-1/4}$. Then $1-c\le f(\eta, \lambda)$ implies
$$
1-c \le \beta(\eta), \; 1-c\le \gamma(\lambda).
$$
\end{pro}
\begin{proof} Clearly $1-c\le \gamma(\lambda)$ since $f(\eta, \lambda)\le \gamma(\lambda)$.
By Lemma \ref{lem5}, $1-c\le f(\eta, \lambda)$ implies $\eta \le \frac 1{2\pi \sqrt 2}$. This, together with Lemma \ref{lem4}, implies
$$
f(\eta, \lambda)\le f(\eta, 0)= \beta(\eta).
$$
Thus $1-c\le f(\eta, \lambda)\le \beta(\eta)$, as desired.
\end{proof}
For $\gamma(\lambda)$ given by \eqref{def_gga}, its inverse $\gamma^{-1}(\xi)$ is given by
$$
\gamma^{-1}(\xi)=\frac 1{2\pi \xi^2}\sqrt{1-\xi^4}.
$$
Hence the error bound ${\rm Bd}_{2, \ell}$ in \eqref{phi_est_gl} is given by
\begin{eqnarray}
\nonumber {\rm Bd}_{2, \ell}\hskip -0.6cm &&:=\frac{1}{\sigma^2(b)} \gamma^{-1}\big(1- \frac{2\; {\rm Err}_\ell(b)}{A_\ell(b)}\big)\\
\label{B2_est} &&=
\frac 1{\sigma^2(b) 2\pi \big(1-\frac{2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)^2}\sqrt{1-\big(1-\frac{2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)^4} \; .
\end{eqnarray}
Hence, if $\frac{{\rm Err}_\ell(b)}{A_\ell(b)}\approx 0$, then
$$
{\rm Bd}_{2, \ell}\approx \frac {\sqrt 2}{\pi\sigma^2(b)} \sqrt{\frac{{\rm Err}_\ell(b)}{A_\ell(b)}} \; .
$$
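The following Python snippet compares the exact expression \eqref{B2_est} with this small-ratio approximation, using the illustrative normalization $\sigma(b)=1$.

```python
import numpy as np

sigma = 1.0  # illustrative value of sigma(b)

def bd2_exact(ratio):
    """Exact Bd2 from (B2_est); ratio = Err_l(b)/A_l(b)."""
    u = 1.0 - 2.0 * ratio
    return np.sqrt(1.0 - u**4) / (2.0 * np.pi * sigma**2 * u**2)

def bd2_approx(ratio):
    """Small-ratio approximation sqrt(2)/(pi sigma^2) * sqrt(ratio)."""
    return np.sqrt(2.0) / (np.pi * sigma**2) * np.sqrt(ratio)

ratio = 1e-4   # Err_l(b)/A_l(b) close to 0
rel_err = abs(bd2_exact(ratio) - bd2_approx(ratio)) / bd2_approx(ratio)
```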
\bigskip
The inverse function $\beta^{-1}(\xi)$ of $\beta(\eta)$ given by \eqref{def_gb} is
$$
\beta^{-1}(\xi)=\frac1{\pi \sqrt 2} \sqrt{-\ln \xi}, \; 0<\xi <1.
$$
Thus if $\frac{2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\le 1-e^{-1/4}$,
then by \eqref{est_xk3} and Proposition 1, we have
\begin{eqnarray*}
&&|\mu - \widehat a_{\ell}(b)\phi_{\ell}'(b)|\le {\rm Bd}_{1, \ell}:=\frac{1}{\sigma(b)} \beta^{-1}\big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)\\
&& = \frac{1}{\sigma(b) {\pi \sqrt 2}} \sqrt{-\ln \big(1-\frac {2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)}.
\end{eqnarray*}
Using the fact $-\ln(1-t)< e^{1/4}t$ for $0<t<1-e^{-1/4}$, we have
\begin{equation}
\label{B1_est}
{\rm Bd}_{1, \ell}\le \frac{e^{1/8}}{\sigma(b)\pi } \sqrt{\frac{{\rm Err}_\ell(b)}{A_\ell(b)}}.
\end{equation}
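The elementary inequality $-\ln(1-t)< e^{1/4}t$ and the resulting bound \eqref{B1_est} can also be checked numerically; in the sketch below, $t$ plays the role of $2\,{\rm Err}_\ell(b)/A_\ell(b)$ and $\sigma(b)=1$ is illustrative.

```python
import numpy as np

t_max = 1.0 - np.exp(-0.25)            # upper end of the admissible range
t = np.linspace(1e-8, t_max - 1e-9, 100000)

lhs = -np.log(1.0 - t)                 # -ln(1 - t)
rhs = np.exp(0.25) * t                 # e^{1/4} t
fact_holds = bool(np.all(lhs < rhs))

# consequently Bd1 = sqrt(-ln(1-t)) / (sigma * pi * sqrt(2))
#   <= e^{1/8} / (sigma * pi) * sqrt(t/2),  matching (B1_est) with t = 2 Err/A
sigma = 1.0
bd1 = np.sqrt(lhs) / (sigma * np.pi * np.sqrt(2.0))
bd1_bound = np.exp(0.125) / (sigma * np.pi) * np.sqrt(t / 2.0)
bound_holds = bool(np.all(bd1 <= bd1_bound))
```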
In addition, the error bound ${\rm Bd}_{3, \ell}$ in \eqref{comp_xk_est} for component recovery is bounded by
\begin{equation}
\label{B3_est}
{\rm Bd}_{3, \ell}\le {\rm Err}_\ell(b)+2 e^{1/8} I_1 \sqrt{{{\rm Err}_\ell(b)}{A_\ell(b)}}
+\frac{I_2 A_\ell(b)}{2\big(1-\frac{2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)^2}\sqrt{1-\big(1-\frac{2 \; {\rm Err}_\ell(b)}{A_\ell(b)}\big)^4}\; .
\end{equation}
To summarize, we have the following theorem.
\begin{theo}\label{theo:adaptive TSC-R3}
Let $x(t)\in \mathcal{E}_{\epsilon_1,\epsilon_3}$ for some $\epsilon_1>0,\epsilon_3>0$, and $U_x(a, b, \lambda)$ be the adaptive TSC-R
of $x(t)$ with Gaussian window function $g$ in \eqref{def_g}. Suppose \eqref{def_sep_cond_cros} and \eqref{theo1_cond1} hold and that for $1\le \ell \le K$, $2{\rm Err}_\ell(b)/A_\ell(b)\le 1-e^{-1/4}$.
Let $\mathcal H_b$ and $\mathcal H_{b, k}$ be the sets defined by \eqref{def_cGk} for some
$\widetilde \epsilon_1$ satisfying \eqref{cond_ep1}. Let $\widehat a_\ell(b), \widehat \lambda_\ell(b)$ be the functions defined by \eqref{def_max_eta}. Then \eqref{phi_est}, \eqref{phi_est_gl} and \eqref{comp_xk_est} hold with
${\rm Bd}_{1, \ell}, {\rm Bd}_{2, \ell}$ and ${\rm Bd}_{3, \ell}$ bounded by the quantities in \eqref{B1_est}, \eqref{B2_est} and \eqref{B3_est} respectively.
\end{theo}
\section{Experiments}
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{c@{\hskip -0.2cm}c @{\hskip -0.2cm}c}
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_wavef}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_Spect}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_IFs}}\\
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_CWT}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_SST}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{two_lfm_SST2}}
\end{tabular}
\caption{\small Two-component signal $x(t)$ in \eqref{two_lfm} and its time-frequency representations with SST.
Top row (from left to right): Waveform of $x(t)$, magnitude spectrum and ground truth IFs of two components $x_1(t)$ and $x_2(t)$; Bottom row (from left to right): CWLT, CWT-based SST and CWT-based second-order SST.}
\label{fig:two_chirp_signal}
\end{figure}
In this section we provide some experimental results to demonstrate our method and general theory. We set $\mu=1$.
\begin{example}
{\rm Let $x(t)$ be a signal consisting of two linear chirp components, given by
\begin{equation}
\label{two_lfm}
x(t) = x_1(t)+x_2(t) = \cos(2\pi c_1 t+ \pi r_1 t^2) + \cos(2\pi c_2 t + \pi r_2 t^2), \; t\in[0,0.75),
\end{equation}
where $c_1 = 21$, $c_2 = 71$, $r_1 = 67$ and $r_2 = -61$. }
\end{example}
The IFs of $x_1$ and $x_2$ are $\phi_1'(t) = c_1+r_1t$ and $\phi_2'(t) = c_2+r_2t$, respectively. See the top-right panel of Fig.\ref{fig:two_chirp_signal}.
The chirp rates of $x_1$ and $x_2$ are $\phi_1''(t) = r_1$ and $\phi_2''(t) = r_2$, respectively.
The signal $x(t)$ is discretized at a sampling rate of 256 Hz, which yields 192 samples on $[0, 0.75)$. In the following, we use only these 192 samples to analyze the signal. The waveform of $x(t)$ and its magnitude spectrum are presented in the top row of Fig.\ref{fig:two_chirp_signal}.
The bottom row of Fig.\ref{fig:two_chirp_signal} shows the results of CWLT, SST \cite{Daub_Lu_Wu11} and the second-order SST \cite{OM17}, where the parameter $\sigma=0.023$. Here the scale variable $a$ is discretized as $\big(2^{j/n_v} \triangle t\big)_j$, where $\triangle t=1/256$ for this example, and $n_v$ is the number of voices. Here and below we set $n_v=32$.
Because the IF curves of the two components cross, these methods cannot represent the synthetic signal sharply or separately. In addition, EMD performs poorly in decomposing this signal. Consequently, these methods can hardly recover this two-component signal with crossover IFs.
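For concreteness, the discretization above and the crossing point of the two IF lines can be reproduced in a few lines of Python (NumPy); the variable names are illustrative.

```python
import numpy as np

fs = 256                               # sampling rate (Hz)
t = np.arange(0.0, 0.75, 1.0 / fs)     # 192 samples on [0, 0.75)
c1, c2, r1, r2 = 21.0, 71.0, 67.0, -61.0

# two-component linear chirp signal from (two_lfm)
x = (np.cos(2 * np.pi * c1 * t + np.pi * r1 * t**2)
     + np.cos(2 * np.pi * c2 * t + np.pi * r2 * t**2))

# the IF lines phi1'(t) = c1 + r1 t and phi2'(t) = c2 + r2 t cross where they agree
t_cross = (c2 - c1) / (r1 - r2)        # = 50/128, inside [0, 0.75)
```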
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{cc }
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_x1}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_x2}}\\
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_t1}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_t2}}\\
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_c1}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{AQCWT_c2}}
\end{tabular}
\caption{\small Some slices of adaptive TSC-R.
Top row (from left to right): Two slices of $|U_x(a,b,\lambda)|$ when $\lambda=r_1$, $\lambda=r_2$;
Middle row (from left to right): Two slices of $|U_x(a,b,\lambda)|$ when $b=\frac{32}{256}$ and $b= \frac{160}{256}$;
Bottom row (from left to right): Two slices of $|U_x(a,b,\lambda)|$ when $a = 2^{\frac{51}{32} }/256$ and $a = \frac{1}{64}$.}
\label{fig:two_chirp_QP_CWT}
\end{figure}
Next let us look at our method. Since the adaptive TSC-R $U_x(a,b,\lambda)$ is 3-dimensional, here we show some slices of $|U_x(a,b,\lambda)|$. First we look at the slice with $\lambda=r_1$, the ground truth chirp rate of $x_1$. The top-left panel of Fig.\ref{fig:two_chirp_QP_CWT} shows $|U_x(a,b,r_1)|$. The clear and sharp scale-time ridge in this panel is exactly the curve $(b, \frac{\mu}{\phi^{\prime}_1(b)})$, which gives a precise estimate of $\phi^{\prime}_1(b)$, the IF of $x_1(t)$. The top-right panel shows $|U_x(a,b,r_2)|$, where the clear and sharp scale-time curve corresponds to $(b, \frac{\mu}{\phi^{\prime}_2(b)})$. These two pictures tell us that in the two scale-time planes (subspaces of ${\mathbb R}^3$) $(b, a, r_1)$ and $(b, a, r_2)$, $b, a\in {\mathbb R}, a>0$,
there indeed exist two clear and sharp scale-time ridges from which $\phi^{\prime}_1(b)$ and $\phi^{\prime}_2(b)$ can be estimated. Note that these two scale-time planes are well-separated in the 3-dimensional space ${\mathbb R}^3$, since the distance between them, $r_1-r_2=128$, is large. Thus the estimated chirp rates $\widehat \lambda_1(b)$ and $\widehat \lambda_2(b)$ should be easily obtained. In addition, if they are close to $r_1$ and $r_2$ respectively, then we will have accurate estimates for $\phi^{\prime}_1(b)$ and $\phi^{\prime}_2(b)$. Here we use the same parameters as in Fig.\ref{fig:two_chirp_signal}; in particular, $\sigma$ is the constant $\sigma=0.023$.
As our theorems show, for a given multicomponent signal, the key to the success of our method in recovering its modes is: (i) for each $b$, can we obtain $\widehat a_\ell(b)$ and $\widehat \lambda_\ell(b)$? and (ii) if so, are $\widehat a_\ell(b)$ and $\widehat \lambda_\ell(b)$ close to $\mu/\phi^{\prime}_\ell(b)$ and $\phi^{\prime\prime}_\ell(b)$? The answer to Question (ii) is guaranteed by the error bounds in our theorems. So the most important step is whether
we can obtain $\widehat a_\ell(b)$ and $\widehat \lambda_\ell(b)$. For this two-component signal, the question is whether,
for each $b$,
the two peaks of the function $h(a, \lambda):=|U_x(a,b,\lambda)|$, $a\in (0, \infty), \lambda\in {\mathbb R}$,
are far enough apart from each other that we can easily obtain the (local) maximum points $(\widehat a_1, \widehat \lambda_1)$ and $(\widehat a_2, \widehat \lambda_2)$ in the scale-(chirp rate) plane. As examples, the middle-left panel of Fig.\ref{fig:two_chirp_QP_CWT} shows $h(a, \lambda)$ with $b=32/256$, while $h(a, \lambda)$ with $b=160/256$ is presented in the
middle-right panel of Fig.\ref{fig:two_chirp_QP_CWT}. From these two panels, we observe that for either $b=32/256$ or $b=160/256$, the two peaks of $h(a, \lambda)$ are indeed far apart, and hence we can easily obtain $(\widehat a_1, \widehat \lambda_1)$ and $(\widehat a_2, \widehat \lambda_2)$. Also observe from these two panels that the scale coordinates $\widehat a_1, \widehat a_2$ of $(\widehat a_1, \widehat \lambda_1)$ and $(\widehat a_2, \widehat \lambda_2)$ change between $b=32/256$ and $b=160/256$, while the chirp rate coordinates $\widehat \lambda_1, \widehat \lambda_2$
essentially stay the same (around $70$ and $-60$ respectively). This is due to the fact that $\phi^{\prime}_1(b)$ and $\phi^{\prime}_2(b)$ change with time $b$, while $\phi^{\prime\prime}_1(b)$ and $\phi^{\prime\prime}_2(b)$ are independent of $b$.
In the bottom row of Fig.\ref{fig:two_chirp_QP_CWT}, we provide slices of $|U_x(a,b,\lambda)|$ with $a=2^{\frac{51}{32} }/256$ and $a=1/64$. All these pictures demonstrate that the components $x_1$ and $x_2$ are well-separated in the 3-dimensional space of the adaptive TSC-R, even though their IF curves cross.
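To illustrate how a peak in the scale-(chirp rate) plane identifies $\big(\mu/\phi'_\ell(b), \phi''_\ell(b)\big)$, the following Python sketch implements a toy discrete windowed quadratic-phase transform for a single linear chirp. This is only a plausible stand-in consistent with the Gaussian factor in \eqref{def_f}; the window parameter and search grids below are illustrative and differ from the settings used in our figures.

```python
import numpy as np

fs = 256
t = np.arange(0.0, 0.75, 1.0 / fs)
c1, r1 = 21.0, 67.0
x = np.cos(2 * np.pi * c1 * t + np.pi * r1 * t**2)   # single linear chirp

mu = 1.0
sigma = 3.5          # illustrative window parameter (not the paper's setting)
b = 0.375            # analysis time; the true IF there is c1 + r1*b = 46.125

def U(a, lam):
    """|windowed quadratic-phase transform| at (a, b, lam): a hypothetical
    discrete stand-in for the adaptive TSC-R slice."""
    s = t - b
    st = a * sigma                                  # effective window std
    w = np.exp(-s**2 / (2.0 * st**2)) / (st * np.sqrt(2.0 * np.pi))
    ph = np.exp(-2j * np.pi * mu * s / a - 1j * np.pi * lam * s**2)
    return abs(np.sum(x * w * ph)) / fs

freqs = np.arange(30.0, 55.0, 0.25)   # candidate IFs (f = mu / a)
lams = np.arange(40.0, 95.0, 1.0)     # candidate chirp rates
vals = np.array([[U(mu / f, lam) for lam in lams] for f in freqs])
i, j = np.unravel_index(np.argmax(vals), vals.shape)
f_hat, lam_hat = freqs[i], lams[j]    # peak location: (IF, chirp rate) estimate
```

The peak of this toy transform lands near the ground truth IF $c_1+r_1 b$ and chirp rate $r_1$, which is the mechanism behind the ridges in Fig.\ref{fig:two_chirp_QP_CWT}.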
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{c@{\hskip -0.2cm}c@{\hskip -0.2cm}c}
\resizebox{2.2in}{1.65in}{\includegraphics{IF_estimation_x1}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{IF_estimation_x2}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{CR_estimation}}
\end{tabular}
\caption{\small
Estimated IF of $x_1$ (left panel), estimated IF of $x_2$ (middle panel) and estimated chirp rates (right panel) of two components by our method TSC-R.
}
\label{fig:two_chirp_QP_CWT_IF}
\end{figure}
Fig.\ref{fig:two_chirp_QP_CWT_IF} shows the estimated IFs $\frac 1{\widehat a_1(b)}, \frac1{\widehat a_2(b)}$ and the estimated chirp rates $\widehat \lambda_1(b)$ and $\widehat \lambda_2(b)$. Observe that the estimated IFs are very close to the ground truth $\phi^{{\prime}}_1(b)$ and $\phi^{{\prime}}_2(b)$. The estimation errors of the IFs and chirp rates are mainly caused by the boundary effect, which can be further reduced with a time-varying $\sigma(b)$.
\hfill $\blacksquare$
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{c@{\hskip -0.2cm}c @{\hskip -0.2cm}c}
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_wavef}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_spec}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_IFs}}\\
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_CWT}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_SST}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{three_comp_SST2}}
\end{tabular}
\caption{\small Three-component signal $y(t)$ given in \eqref{three_comp} and its time-frequency representations with SST.
Top row (from left to right): Waveform of $y(t)$, magnitude spectrum and ground truth IFs of $y_1(t)$, $y_2(t)$ and $y_3(t)$; Bottom row (from left to right): CWLT of $y(t)$, CWT-based SST and CWT-based second-order SST.}
\label{fig:three_comp_signal}
\end{figure}
\begin{example}
{\rm Let $y(t)$ be a truncation of the synthetic micro-Doppler signal in Fig.1, given by
\begin{eqnarray}
\nonumber
y(t)\hskip -0.6cm && =y_1(t)+y_2(t)+y_3(t)\\
\label{three_comp} &&=
\cos \left(82\pi t + 50 \cos \big (\pi t + \frac{\pi}{2} \big ) \right) + \cos (82 \pi t)
+ \cos \left(82\pi t + 50 \cos \big (\pi t - \frac{\pi}{2} \big ) \right),
\end{eqnarray}
where $t\in[0,1)$. }
\end{example}
In the following experiment, $y(t)$ is discretized with the sampling rate {\rm 256Hz}, namely $t=0, \frac{1}{256}, \dots, \frac{255}{256}$.
The IFs of $y_1$, $y_2$ and $y_3$ are $\phi_1'(t) = 41- 25\sin(\pi t + \frac{\pi}{2})$, $\phi_2'(t) = 41$ and $\phi_3'(t) = 41- 25\sin(\pi t - \frac{\pi}{2})$, respectively. See the top-right panel in Fig.\ref{fig:three_comp_signal} for the IFs. The bottom row of Fig.\ref{fig:three_comp_signal} shows CWLT, SST and the second-order SST, where the parameter $\sigma=0.023$ and the number of voices $n_v=32$.
Observe that CWLT and SST can hardly represent any of the three sub-signals separately and reliably. Thus they cannot separate the sub-signals. In fact, as far as we know, no efficient algorithm is available to recover the three components from the 256-point observation of $y(t)$ above.
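As a consistency check, the IF formulas above can be verified by numerically differentiating the phase functions in \eqref{three_comp} (Python sketch; the grid size is arbitrary).

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

# phases (in radians) of the first and third components in (three_comp)
phase1 = 82 * np.pi * t + 50 * np.cos(np.pi * t + np.pi / 2)
phase3 = 82 * np.pi * t + 50 * np.cos(np.pi * t - np.pi / 2)

# IF = (1/2pi) d(phase)/dt, numerically vs. the closed forms
if1_num = np.gradient(phase1, t) / (2 * np.pi)
if3_num = np.gradient(phase3, t) / (2 * np.pi)
if1 = 41 - 25 * np.sin(np.pi * t + np.pi / 2)
if3 = 41 - 25 * np.sin(np.pi * t - np.pi / 2)

err1 = np.max(np.abs(if1_num[1:-1] - if1[1:-1]))   # interior points only
err3 = np.max(np.abs(if3_num[1:-1] - if3[1:-1]))
```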
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{c@{\hskip -0.2cm}c}
\resizebox{2.4in}{1.8in}{\includegraphics{three_comp_AQCWT_t1}} \quad &
\resizebox{2.4in}{1.8in}{\includegraphics{three_comp_IF_y1}}\\
\resizebox{2.4in}{1.8in}{\includegraphics{three_comp_IF_y2}} \quad &
\resizebox{2.4in}{1.8in}{\includegraphics{three_comp_IF_y3}}\\
\end{tabular}
\caption{\small Slice of adaptive TSC-R $|U_y(a,b,\lambda)|$ of $y(t)$ when $b=0.5$ (Top-left panel) and estimated IF (dotted lines) of three components (Top-right and bottom panels).}
\label{fig:three_comp_IFs_estimation}
\end{figure}
The top-left panel in Fig.\ref{fig:three_comp_IFs_estimation} shows the slice of the adaptive TSC-R $|U_y(a,b,\lambda)|$ of $y(t)$ at $b=0.5$, namely the specific time at which the IFs of the three components cross. Observe that
even at this particular time $b=0.5$, the three peaks of $|U_y(a,0.5,\lambda)|$ (marked by $+$) are far apart in the scale-(chirp rate) plane. Thus we can easily obtain $(\widehat a_1, \widehat \lambda_1)$, $(\widehat a_2, \widehat \lambda_2)$ and $(\widehat a_3, \widehat \lambda_3)$ for this $b$. In fact,
for other values of $b$, the three peaks of $|U_y(a, b,\lambda)|$ in the scale-(chirp rate) plane are also far apart, and much clearer and sharper than in the case $b=0.5$. Hence, these three components are well-separated in the 3-dimensional space of $(a, b, \lambda)$.
The estimated IF of each component is given in the other panels of Fig.\ref{fig:three_comp_IFs_estimation}. The results show that our method is able to estimate the IF of each component accurately.
We provide the result of component recovery in Fig.\ref{fig:three_comp_signal_recovery}, which shows that each recovered waveform is very close to the corresponding mode, except near time $b=0.5$, where the IFs of the three components cross.
These results demonstrate the validity of the proposed method.
Here we use a time-varying parameter $\sigma(b)$, which substantially improves the IF estimation and mode recovery performance. How to select $\sigma(b)$ will be addressed in our future work.
\hfill $\blacksquare$
\begin{figure}[th]
\centering
\hspace{-0.7cm}
\begin{tabular}{c@{\hskip -0.2cm}c @{\hskip -0.2cm}c}
\resizebox{2.2in}{1.65in}{\includegraphics{recovery_y1}}\quad &
\resizebox{2.2in}{1.65in}{\includegraphics{recovery_y2}} \quad &
\resizebox{2.2in}{1.65in}{\includegraphics{recovery_y3}}
\end{tabular}
\caption{\small Recovered sub-signals (dotted red lines) of $y_1(t)$, $y_2(t)$ and $y_3(t)$ respectively (from left to right).}
\label{fig:three_comp_signal_recovery}
\end{figure}
\renewcommand{\arraystretch}{1.5}
\begin{table}[tp]
\centering
\fontsize{10}{8}\selectfont
\begin{threeparttable}
\caption{IF estimate and component recovery errors under white Gaussian noise with different SNRs.}
\label{tab:error_SNR}
\begin{tabular}{|c|ccc|ccc|}
\toprule
\multirow{2}{*}{SNR}&
\multicolumn{3}{c|}{IF estimate errors}&\multicolumn{3}{c|}{Mode recovery errors}\cr
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
&$\phi_1'(t)$&$\phi_2'(t)$&$\phi_3'(t)$&$y_1(t)$&$y_2(t)$&$y_3(t)$\cr
\midrule
0 dB&0.1335&0.1361&0.0765&0.7830&0.6457&0.6037\cr
5 dB&0.0281&0.0092&0.0312&0.3183&0.2311&0.3390\cr
10 dB&0.0117&0.0077&0.0162&0.2056&0.1746&0.2506\cr
15 dB&0.0112&0.0005&0.0109&0.1926&0.1241&0.2137\cr
20 dB&0.0099&0.0005&0.0407&0.1927&0.1291&0.2053\cr
\bottomrule
\end{tabular}
\end{threeparttable}
\end{table}
Finally, let us consider the performance of our computational scheme on signal data with additive noise.
We add a noise process $n(t)$ to the signal $y(t)$ to obtain a synthetic signal $z(t)$ contaminated by noise:
$$z(t)=y(t)+n(t).$$
Here we let $n(t)$ be a zero-mean Gaussian noise. Table \ref{tab:error_SNR} shows the IF estimation and mode recovery errors for different signal-to-noise ratios (SNRs), where the SNR is defined by
$10\log_{10} \frac{||y||_2}{||n||_2} $. The errors in Table \ref{tab:error_SNR} are defined by
$$
E_f = \frac{||f-\widetilde f||_2}{||f||_2},
$$
where $\widetilde f$ is the estimation of $f$.
$E_f$ is also called the normalized mean square error.
Observe that for SNR $\ge 10$ dB, the IF estimation is stable and the mode recovery errors are reasonably small.
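For reference, the error metric $E_f$ can be computed as follows on toy data (the signal and noise level here are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1.0 / 256)
f = np.sin(2 * np.pi * 5 * t)                       # a stand-in "true" mode
f_tilde = f + 0.1 * rng.standard_normal(t.size)     # its noisy "estimate"

# normalized error E_f = ||f - f_tilde||_2 / ||f||_2
E_f = np.linalg.norm(f - f_tilde) / np.linalg.norm(f)
```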
\section{Introduction}
Accreting high mass X-ray binary (HMXB) pulsars are among the brightest X-ray sources in our Galaxy (Nagase et al. 1989). In these binaries, a neutron star and a massive ($\geq$ 10 $M_{\odot}$) main-sequence star rotate around the common center of mass of the system in a wide and eccentric orbit (Tauris \& van den Heuvel 2006). The neutron star accretes matter from the companion star through the capture of stellar wind or Roche-lobe overflow. A majority of the HMXB systems are known to be Be/X-ray binaries (BeXRBs), in which the mass donor is a non-supergiant B or O spectral type star that shows emission lines in its optical/infrared spectrum (Reig 2011). Rapid rotation of the companion Be star in the BeXRB system expels its photospheric matter equatorially, forming a circumstellar disk around it. This continuously evolving, equatorial circumstellar disk is known to be the cause of the emission lines and infrared excess in the optical/infrared spectrum of the companion star in BeXRBs. Significant evolution of the circumstellar disk allows the neutron star to capture copious amounts of matter while passing through periastron. This abrupt accretion of matter enhances the X-ray emission of the neutron star by several orders of magnitude, an enhancement that lasts for several days to tens of days. These events are termed Type-I X-ray outbursts. Once the neutron star moves away from periastron, accretion from the circumstellar disk is no longer possible and the X-ray source returns to quiescence. The long term X-ray activity in BeXRBs is characterized by regular Type-I outbursts with peak luminosity of the order of $L_{x} \le 10^{37}$~erg~s$^{-1}$ and irregular, rare giant (Type-II) X-ray outbursts with peak luminosity of $L_{x} \geq 10^{37}$~erg~s$^{-1}$.
The Type-I X-ray outbursts are of short duration, covering 20--30\% of the orbit, and coincide with the periastron passage of the neutron star, whereas the Type-II outbursts show no preferred orbital phase dependence but, once set in, tend to cover a large fraction of the orbital period or even several orbital periods (see, e.g., Okazaki \& Negueruela 2001, Reig 2011, Jaisawal \& Naik 2016, Wilson-Hodge et al. 2018, Jaisawal et al. 2019).
EXO~2030+375 is one of the well studied Be/X-ray binary pulsars, showing regular Type-I outbursts during almost every periastron passage. This transient accreting X-ray pulsar, with $\sim$42~s pulsations, was discovered in 1985 with {\it EXOSAT} during a giant outburst (Parmar et al. 1989). The transient behaviour of this pulsar could be traced since its discovery, when its initial 1--20 keV outburst luminosity of (1.0$\times$10$^{38}~d_{5}^{2}$)~erg~s$^{-1}$ on 1985 May 18 declined by a factor of $\ge$2600 within 100 days of the outburst. The associated optical counterpart of EXO~2030+375 is a highly reddened B0 Ve star (Motch \& Janot-Pascheco 1987) showing infrared excess and H$\alpha$ in emission (Coe et al. 1988). Using the relationship between extinction and distance of sources in the Galactic plane, Wilson et al. (2002) estimated the distance of EXO~2030+375 to be 7.1 kpc. The regular Type-I X-ray outbursts of EXO~2030+375, occurring at almost every periastron passage of its $\sim$46 day orbit (Wilson et al. 2008), have been extensively monitored with the X-ray instruments onboard the {\it RXTE}, {\it INTEGRAL}, {\it XMM-Newton}, {\it Suzaku} and {\it Swift} observatories to understand the characteristic properties of the pulsar (Wilson et al. 2002; Naik et al. 2013; Naik \& Jaisawal 2015; Ferrigno et al. 2016; Epili et al. 2017 and references therein).
In June 2006, EXO 2030+375 was caught for the second time in a giant (Type-II) X-ray outburst, with an initial flux of 180 mCrab. This surpassed the previous peak flux of about 50~mCrab observed during the entire life of the {\em RXTE}/ASM mission (Corbet \& Levine 2006).
The 2006 Type-II outburst was also followed with {\em Swift}/BAT, which reported that the peak flux steadily increased to 750 mCrab (Krimm et al. 2006). A spin-up trend was observed in the pulsar during the giant X-ray outbursts in 1985 (Parmar et al. 1989) and 2006 (Wilson, Finger \& Camero-Arranz 2008), whereas spin-down episodes were observed during the less luminous outbursts in 1994-2002 (Wilson et al. 2002; Wilson, Fabregat \& Coburn 2005) and during the faint outbursts after March 2016 (Kretschmar et al. 2016).
The phase-averaged spectra of EXO~2030+375 during normal and giant outbursts prior to the 2006 giant outburst were described with various phenomenological and (in some cases) physical models, along with an iron emission line at 6.4~keV and interstellar absorption (Epili et al. 2017 and references therein). Apart from the continuum, several other interesting features have been observed in the pulsar spectrum. {\it Suzaku} observations of EXO~2030+375 during less intense Type-I outbursts in 2007 and 2012 did not show any evidence of cyclotron absorption features in the X-ray spectrum. However, the presence of additional matter locked at certain pulse phases of the pulsar was reported and interpreted as the cause of several prominent absorption dips in the pulse profiles (Naik et al. 2013; Naik \& Jaisawal 2015). During the brighter Type-I outburst in 2007, Naik et al. (2013) detected several narrow emission lines (i.e., Si~{\sc XII}, Si~{\sc XIV}, S~{\sc XV}) for the first time, along with Fe~K$_{\alpha}$ and Fe~{\sc XVI}, in the X-ray spectrum.
\begin{table*}[bt!]
\tabularfont
\centering
\caption{Log of observations of EXO~2030+375~ with {\em AstroSat}, \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT.}
\begin{tabular}{ccllcclc}
\hline
\hline
&Observation ID &\multicolumn{2}{c}{Start of Observation} &\multicolumn{2}{c}{Exposure (in ks)} &Spin period &Count \\
& &Date &MJD &LAXPC &SXT & (s) &Rate $^b$\\
\hline
{\underline {\em AstroSat}} \\
Obs-1 &G06\_089T01\_9000000746 &23 October 2016 &57684.99 &48.5 &12.1 &41.2895(7) &32\\
Obs-2 &G08\_081T01\_9000002144 &6 June 2018 &58275.53 &43.6 &24.1 &41.272(9) &24\\
Obs-3 &G08\_081T01\_9000002178 &19 June 2018 &58288.88 &46.4 &23 &41.30(1) &11\\
Obs-4 &G08\_081T01\_9000002350 &9 September 2018 &58370.3 &46.5 &22.8 &41.2747(8) &65\\
Obs-5 &T03\_244T01\_9000003912 &2 October 2020 &59124.57 &95 &8.9 &41.306(3) &27\\
\hline
\textit{NuSTAR}\xspace &90201029002 &25 July 2016 &57594.36 &\multicolumn{2}{c}{56.7} &41.287054$^a$ &---\\
\textit{Swift}\xspace &00030799022 &25 July 2016 &57594.85 &\multicolumn{2}{c}{1} &--- &---\\
\hline
\hline
\end{tabular}
\label{log}
\tablenotes{$^a$: from F{\"u}rst et al. (2017). $^b$: Average source count rate (in counts s$^{-1}$) per LAXPC unit is given in 3-80 keV energy range. }
\end{table*}
\begin{figure}[bt!]
\centering
\includegraphics[width=0.5\textwidth, angle=0]{fig1.pdf}
\caption{MAXI (2-20 keV, blue data points) and \textit{Swift}\xspace/BAT (15-50 keV, shaded) long-term monitoring light curves of EXO~2030+375~ ranging from (a) 21 June 2016 (MJD 57560) to 27 November 2016 (MJD 57719), (b) 12 April 2018 (MJD 58220) to 14 October 2018 (MJD 58405), and (c) 20 July 2020 (MJD 59050) to 13 October 2020 (MJD 59135), shown in the top, middle, and bottom panels, respectively. Arrow marks in the panels represent the epochs of \textit{AstroSat}\xspace and \textit{NuSTAR}\xspace observations of the pulsar. }
\label{maxi-bat}
\end{figure}
A detailed and comprehensive study of EXO~2030+375 was carried out by using extensive {\it RXTE} pointed observations during many Type-I outbursts and the 2006 Type-II outburst, from 1995 to 2011 (Epili et al. 2017). Timing and spectral studies of the pulsar were carried out over a 3--30 keV luminosity range from 3.8$\times$10$^{36}$ to 2.6$\times$10$^{38}$ erg s$^{-1}$, covered during the entire {\it RXTE} campaign. Timing studies of more than 600 {\it RXTE} pointings revealed the evolution of the pulse profiles with luminosity: a main peak and a minor peak at low luminosity evolved into a two-peaked profile with minor dips at high luminosity. This study revealed that the pulse profiles at a given luminosity were identical irrespective of the type of X-ray outburst, indicating that the emission geometry depends mainly on the accretion rate.
Since its discovery in 1985, the pulsar had been showing regular X-ray outbursts for about 25 years. From early 2015, however, the Type-I outbursts appeared with decreasing intensity and eventually vanished from the light curve towards the end of 2015 or early 2016 (F{\"u}rst et al. 2016). The Type-I X-ray outburst activity commenced again in early 2016 and is still continuing, though with much fainter peak luminosities ($\le$10$^{36}$ erg s$^{-1}$) than the usual ones. F{\"u}rst et al. (2017) reported the detection of pulsations at a minimum luminosity of 6.8$\times$10$^{35}$ erg s$^{-1}$ in the 3-78 keV range, considered to be the lowest luminosity at which X-ray pulsations had been detected from the pulsar. Though the pulsar was observed with \textit{Swift}\xspace/XRT at an even fainter phase, the data quality was not good enough for a pulsation search. As the pulsar is still showing Type-I X-ray outbursts with fainter peak luminosities, it is interesting to carry out timing and spectral studies with \textit{AstroSat}\xspace to explore whether the pulsar has entered the propeller regime or is still accreting. Detection of pulsations in the light curve at a luminosity lower than that during the earlier \textit{Swift}\xspace/XRT observation (F{\"u}rst et al. 2017) would rule out the onset of the propeller regime. Further, detection of pulsations at a limiting luminosity may allow us to estimate the magnetic field of the pulsar. The \textit{AstroSat}\xspace observations at lower luminosity, therefore, are important to investigate the above properties of the pulsar. In this paper, we investigate the pulsation activity, the shape of the pulse profiles and the spectral properties of the pulsar at a significantly lower luminosity level using five epochs of \textit{AstroSat}\xspace observations. For comparison, data from the \textit{NuSTAR}\xspace observation of the pulsar on 25 July 2016, reported in F{\"u}rst et al. (2017), are also used in the present work. 
The observations of the pulsar and the data reduction procedures are described in Section~2. Results obtained from the timing and spectral analyses are presented in Sections~3 and 4, respectively. The implications of our results are discussed in Section~5.
\begin{figure}[t!]
\centering
\includegraphics[height=2.6in, width=3.2in, angle=0]{Spin-EXO.pdf}
\caption{Spin period evolution of the pulsar with luminosity during \textit{AstroSat}\xspace observations. L$_{36}$ represents the 3-30 keV unabsorbed luminosity in unit of 10$^{36}$~erg~s$^{-1}$.}
\label{spin-period}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.23\textwidth,angle=-90]{fig2.pdf}
\caption{ Pulse profiles of EXO~2030+375~ in the 0.3-7 keV range with the SXT instrument are shown for all five \textit{AstroSat}\xspace observations (left to right). These profiles are obtained by folding the light curves at the respective pulse periods determined from the LAXPC data. Two pulses are shown for clarity.}
\label{sxt-profile}
\end{figure*}
\begin{figure}[bt!]
\centering
\includegraphics[height=4.8in, width=3.2in, angle=0]{fig3.pdf}
\caption{ Pulse profiles of EXO~2030+375~ in 3-80 keV range are shown for all five \textit{AstroSat}\xspace observations (top to bottom). L$_{\rm 36}$ denotes the 0.5-30 keV unabsorbed luminosity of the pulsar in 10$^{\rm 36}$~erg~s$^{-1}$ at a distance of 7.1~kpc. Two pulses are shown for clarity.}
\label{lxp-profile}
\end{figure}
\begin{figure*}[bt!]
\centering
\includegraphics[width=.5\textwidth,angle=-90]{fig4-obs1-pp.pdf}\quad
\includegraphics[width=.5\textwidth, angle=-90]{fig4-obs2-pp.pdf}\quad
\includegraphics[width=.5\textwidth, angle=-90]{fig4-obs3-pp.pdf}\\
\medskip
\includegraphics[width=.5\textwidth, angle=-90]{fig4-obs4-pp.pdf}\quad
\includegraphics[width=.5\textwidth, angle=-90]{fig4-obs5-pp.pdf}
\caption{Energy resolved pulse profiles of EXO~2030+375~ obtained by folding the energy resolved light curves from LAXPC instrument(s) onboard \textit{AstroSat}\xspace at the respective estimated spin period(s). Two pulses are shown in each panel for clarity. The error-bars in the figure represent 1$\sigma$ uncertainties.}
\label{resolved-profiles}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[height=3.3in, width=2.5in, angle=-90]{fig5-pf.pdf}
\caption{Pulse fraction variation of EXO~2030+375~ with energy, obtained from the pulse profiles
in multiple energy bands from five \textit{AstroSat}\xspace observations.}
\label{pf}
\end{figure}
\begin{figure*}[bt!]
\begin{center}$
\begin{array}{cccccc}
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec1.pdf} &
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec2.pdf} \\
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec3.pdf} &
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec4.pdf} \\
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-spec5.pdf} &
\includegraphics[width=0.35\textwidth, angle=-90]{fig6-nustar-xrt.pdf} \\
\end{array}$
\end{center}
\caption{Best-fitting energy spectra obtained from the first, second, third, fourth and fifth \textit{AstroSat}\xspace observations of EXO~2030+375~. Broadband energy spectra from the \textit{NuSTAR}\xspace and {\em Swift}/XRT data of 2016 July are also shown.}
\label{spec}
\end{figure*}
\section{Observations and Data Reduction}
\subsection{\textit{AstroSat}\xspace}
The first Indian multi-wavelength astronomical satellite, \textit{AstroSat}\xspace, was launched by the Indian Space Research Organisation on 28 September 2015 (Agrawal 2006, Singh et al. 2014). The observatory is sensitive to photons from the optical to hard X-ray ranges through five sets of instruments: the Ultraviolet Imaging Telescope (UVIT; Tandon et al. 2017), the Soft X-ray Telescope (SXT; Singh et al. 2017), the Large Area X-ray Proportional Counters (LAXPCs; Agrawal et al. 2017, Antia et al. 2017), the Cadmium Zinc Telluride Imager (CZTI; Rao et al. 2017), and a Scanning Sky Monitor (SSM; Ramadevi et al. 2018). In the present study, five epochs of \textit{AstroSat}\xspace observations of EXO~2030+375~ with the SXT and LAXPC instruments are used; a log of these pointings is given in Table~1. As the source was very faint during all five epochs, it was not detected with the CZTI, and the UVIT was not operational during these observations. \textit{AstroSat}\xspace caught the source at different phases of its regular Type~I X-ray outbursts. The panels of Figure~\ref{maxi-bat} show the MAXI (Monitor of All-sky X-ray Image, Matsuoka et al. 2009) and \textit{Swift}\xspace/BAT (Burst Alert Telescope, Krimm et al. 2013) monitoring light curves of the pulsar covering the epochs of the \textit{AstroSat}\xspace observations. The first, fourth and fifth \textit{AstroSat}\xspace observations were carried out in the declining phase of Type-I X-ray outbursts at a source intensity of 15-40 mCrab in the BAT band. During the second observation, monitoring data from MAXI or {\it Swift}/BAT were not available. An extremely low X-ray intensity, $\sim$10 mCrab in the 15-50 keV range, was estimated during the third \textit{AstroSat}\xspace observation.
The SXT is a soft X-ray focusing telescope onboard \textit{AstroSat}\xspace. It consists of shells of conical mirrors that focus soft X-ray photons in the 0.3--8~keV energy range onto a CCD detector. The field of view of the SXT is 40 arcmin. The effective area of the telescope is 90~cm$^2$ at 1.5 keV, and the energy resolution of the detector is 90 eV at 1.5 keV and 136~eV at 5.9~keV. The source was observed with the SXT in photon counting mode, yielding a time resolution of 2.4~s. We followed the standard analysis procedure for the SXT data reduction, as suggested by the \textit{AstroSat}\xspace Science Support Cell (ASSC\footnote{\url{http://astrosat-ssc.iucaa.in/}}). The source spectrum was extracted from an 8~arcmin circular region centered at the source coordinates on the SXT chip using the {\tt XSELECT} package. The background spectrum was extracted from a blank sky region on the chip.
The LAXPC is a proportional counter detector sensitive to X-ray photons in the 3--80 keV energy range. There are three identical detector units onboard \textit{AstroSat}\xspace with an effective area of about 8000 cm$^2$ at 15~keV. The time and energy resolutions of these units are 10~$\mu$s and 12\% at 22~keV, respectively. Standard data analysis routines ({\tt LAXPCsoftware}) are used to obtain the source light curves and spectral products from the event mode data. We have used SXT and LAXPC data in our timing study. Depending on the quality of the LAXPC data and instrument gain stability, we have considered events from single or combined LAXPC units. For timing studies, combined data from LAXPC-10, 20 \& 30 are used during Obs-1, while data from LAXPC-20 only are considered for Obs-2, Obs-3, and Obs-5. The events from LAXPC-10 \& 20 are used for timing studies from Obs-4. Background products corresponding to each observation are accumulated from the same data by analysing the Earth occultation period. A systematic uncertainty of 2\% is also added in the LAXPC spectra.
\subsection{\textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT}
In the present study, we also used the \textit{NuSTAR}\xspace (Harrison et al. 2013) and \textit{Swift}\xspace/XRT (X-Ray Telescope; Burrows et al. 2005) observations of 25 July 2016, carried out at the lowest luminosity of EXO~2030+375~ reported to date (F{\"u}rst et al. 2017), to compare with the results obtained from the \textit{AstroSat}\xspace observations. For the \textit{NuSTAR}\xspace observation, we used the {\tt NuSTARDAS} 1.6.0 software in {\tt HEASoft} version 6.24. Unfiltered events from the FPMA and FPMB were reprocessed by using the {\it nupipeline} routine with CALDB version 20191219. Source products were then extracted by selecting a circular region of 120~arcsec radius centered on the source coordinates, using the {\it nuproducts} task. Background products were accumulated in a similar manner from a source-free region. Data from the \textit{Swift}\xspace/XRT observation in photon counting mode, with an effective exposure of 1 ks, are also used. We obtained the XRT products by using the online standard tool provided by the UK Swift Science Data Centre\footnote{\url{http://www.swift.ac.uk/user_objects/}} (Evans et al. 2009).
\section{Timing Analysis}
We extracted source and background light curves from the SXT and LAXPC event data at 2.4~s and 0.1~s binning, respectively. After subtracting the background, X-ray pulsations were searched for in the barycenter corrected light curves of EXO~2030+375~ from all five observations. We applied the chi-square maximization technique using the {\tt efsearch} task of the {\tt FTOOLS} package (Leahy 1987). The spin period of the pulsar is estimated to be 41.2895(7) s, 41.272(9) s, 41.30(1) s, 41.2747(8) s, and 41.306(3) s from the first, second, third, fourth, and fifth \textit{AstroSat}\xspace/LAXPC observations, respectively. The spin period and its error are also estimated by using the Lomb-Scargle and Clean techniques in the publicly available {\tt PERIOD} package (Currie et al. 2014). This package has been used for period estimation in several other binary X-ray pulsars, e.g. 4U~2206+54 (Torrejón et al. 2018), 2S~1417-624 (Gupta et al. 2019), and Swift~J0243.6+6124 (Beri et al. 2021). The results obtained from these methods agree with the above quoted values. The evolution of the pulse period with luminosity during the \textit{AstroSat}\xspace observations is presented in Figure~\ref{spin-period}. As the source was observed at a low luminosity level, a few measurements have large errors on the spin period. A marginal spin-up with increasing luminosity can be seen in the figure, though the data are not adequate to make any significant claim.
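The chi-square maximization (epoch-folding) technique used above can be sketched as follows. This is a schematic illustration on a synthetic light curve, not the {\tt efsearch} implementation; the time span, count rates and pulse amplitude are invented for the example.

```python
import numpy as np

def epoch_fold_chi2(times, rates, errors, period, n_bins=16):
    """Chi-square of the folded pulse profile against a constant model.
    The true pulse period maximizes this statistic (Leahy 1987)."""
    phases = (times / period) % 1.0
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    mean_rate = np.average(rates, weights=1.0 / errors**2)
    chi2 = 0.0
    for b in range(n_bins):
        sel = bins == b
        if not sel.any():
            continue
        w = 1.0 / errors[sel]**2
        r_b = np.sum(w * rates[sel]) / np.sum(w)   # weighted mean rate in bin
        e_b = 1.0 / np.sqrt(np.sum(w))             # its uncertainty
        chi2 += ((r_b - mean_rate) / e_b)**2
    return chi2

# Synthetic 0.1-s binned light curve pulsing at 41.29 s (invented numbers).
rng = np.random.default_rng(0)
t = np.arange(0.0, 20000.0, 0.1)
rate = 30.0 + 10.0 * np.sin(2.0 * np.pi * t / 41.29) + rng.normal(0.0, 2.0, t.size)
err = np.full(t.size, 2.0)

# Scan trial periods around the expected value; the maximum marks the spin period.
trial_periods = np.arange(41.0, 41.6, 0.005)
chi2_curve = [epoch_fold_chi2(t, rate, err, p) for p in trial_periods]
best_period = trial_periods[int(np.argmax(chi2_curve))]
```

The intrinsic period resolution of such a search scales as $P^2/T$ for a light curve of duration $T$, which is why the quoted uncertainties grow when the usable exposure is short or the source is faint.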
The light curves in the 0.3-7 keV and 3-80 keV ranges from the SXT and LAXPC data from each epoch of observations are folded with the corresponding estimated pulse period to obtain the pulse profiles of the pulsar. The pulse profiles obtained from the SXT and LAXPC data for all five \textit{AstroSat}\xspace observations are shown in Figures~\ref{sxt-profile} \& ~\ref{lxp-profile}, respectively. Phases of the pulse profiles are adjusted manually to align the minima at phase zero. The profiles obtained from the SXT data (Figure~\ref{sxt-profile}) appear single peaked, possibly because the soft X-ray photons are strongly affected by absorption in material along the line of sight, and because of the low source count rate in the SXT. The profiles from the LAXPC data, however, are found to be complex due to the presence of multiple structures at various pulse phases during the first, third, fourth and fifth observations (see Figure~\ref{lxp-profile}). Sharp dip-like features were detected in the 0.1--0.2 and 0.60--0.85 phase ranges during these observations. The pulse profile from the second epoch of observations, however, appears relatively simpler.
To investigate the energy dependence of the observed features in the LAXPC profiles, barycenter corrected light curves in the 3-10, 10-25, 25-50, and 50-80~keV ranges are extracted from the LAXPC data from all epochs of observations, folded with the respective spin periods, and shown in Figure~\ref{resolved-profiles}. The energy resolved pulse profiles are found to be strongly energy dependent. Dip-like features persist up to high energies in the profiles from all observations. The dips are evident up to $\sim$50~keV during the first observation, whereas during the second, fourth and fifth observations, the features are present up to $\sim$25~keV. During the third observation, the dips are present only up to $\sim$10 keV (Figure~\ref{resolved-profiles}). We checked the significance of pulsations in the hard X-ray band by taking the ratio between the peak count rate and the standard deviation of the minimum intensity interval of the pulse profile. The significance of the detection of pulsations in the 50-80 keV range is found to be more than 15$\sigma$ during all observations except the third.
We calculated the pulse fraction of the pulsar using the pulse profiles in various energy bands and present it in Figure~\ref{pf}. This is done to determine the nature of the pulsating component. In our study, we define the pulse fraction as the ratio between the difference and the sum of the maximum and minimum intensities observed in the pulse profile. For all the observations, we found that the pulse fraction decreases with energy. A maximum pulse fraction of $\sim$60\% is detected in the profiles below 20 keV during the first observation; relatively lower values are observed in the rest of the data sets.
\begin{table*}
\tabularfont
\centering
\caption{Best-fitting spectral parameters (with 90\% errors) of EXO~2030+375~ during \textit{AstroSat}\xspace and {\em NuSTAR}+XRT observations.}
\begin{tabular}{lccccccc}
\hline
\hline
Parameters &Obs-1 &Obs-2 &Obs-3 &Obs-4 &Obs-5 &{\em NuSTAR}+XRT\\
\hline
N$_{\rm H}$ (10$^{22}$~cm$^{-2}$) &4.5$\pm$0.3 &4.7$\pm$0.3 &3.5$\pm$0.5 &4.6$\pm$0.3 &4$\pm$0.4 &5.8$\pm$0.6\\
Photon index ($\Gamma$) &1.61$\pm$0.07 &1.92$\pm$0.05 &2.1$\pm$0.2 &1.54$\pm$0.1 &1.85$\pm$0.05 &1.64$\pm$0.05 \\
Norm (10$^{-2}$) &3.7$\pm$0.4 &3.4$\pm$0.3 &1.3$\pm$0.5 &6.5$\pm$1 &2.9$\pm$0.3 &2.5$\pm$0.2 \\
E$_{\rm fold}$ (keV) &7.2$\pm$0.7 &-- &-- &6$\pm$2 &-- &6.5$\pm$0.4\\
E$_{\rm cut}$ (keV) &36$^{+8}_{-6}$ &-- &-- &32$^{+11}_{-8}$ &-- &27$\pm$2 \\
Flux$^a$ (3-30 keV) &2.8$\pm$0.1 &1.5$\pm$0.1 &0.42$\pm$0.03 &5$\pm$0.1 &1.51$\pm$0.1 &1.67$\pm$0.1\\
Flux$^a$ (0.5-30 keV) &3.1$\pm$0.1 &1.7$\pm$0.1 &0.50$\pm$0.05 &5.5$\pm$0.1 &1.71$\pm$0.1 &2$\pm$0.1\\
Luminosity$^b$ (10$^{36}$~erg~s$^{-1}$) &1.9 &1.0 &0.25 &3.3 &1.0 &1.21\\
$\chi^2_\nu$ ($\nu$) &1.06 (363) &1.28 (258) &0.91 (105) &1.18 (454) &1.48 (100) &0.96 (808) \\
\hline
\hline
\end{tabular}
\label{table-spec}
\tablenotes{$^a$: unabsorbed flux in 10$^{-10}$~erg~s$^{-1}$~cm$^{-2}$; $^b$: 0.5-30 keV unabsorbed luminosity at a distance of 7.1 kpc.\\
Note: By fitting \textit{NuSTAR}\xspace and XRT data, we estimated the unabsorbed flux of EXO~2030+375~ in 3-10 and 0.5-79 keV ranges to be 8.8$\times$10$^{-11}$ and 2.3$\times$10$^{-10}$~erg~s$^{-1}$~cm$^{-2}$, respectively. This is for comparison with the quoted values in F{\"u}rst et al. (2017).\\
}
\end{table*}
\begin{figure}[t!]
\centering
\includegraphics[height=3.3in, width=2.7in, angle=-90]{fig7.pdf}
\caption{Variation of the power-law photon index with the 3-30 keV luminosity is shown for the {\em AstroSat} and {\em NuSTAR} observations of EXO~2030+375 with solid bullets and star symbols (red points), respectively, along with the corresponding data from the {\it RXTE} observations (black points) as shown in Figure~6 of Epili et al. (2017). The power-law photon index obtained from the present study follows the anti-correlated pattern with luminosity in the sub-critical regime of the pulsar. L$_{\rm 37}$ denotes the 3-30~keV unabsorbed luminosity of the pulsar in 10$^{\rm 37}$~erg~s$^{-1}$~ at a distance of 7.1~kpc.}
\label{lum-ind}
\end{figure}
\section{Spectral Studies}
The spectral properties of EXO~2030+375~ are studied using data from all five \textit{AstroSat}\xspace observations. Using the source and background spectra extracted from the SXT and LAXPC data (as described above) and the response files provided by the instrument teams, we carried out spectral fitting of the 0.5--7 keV SXT and 3.5--25 keV LAXPC data using the {\tt XSPEC} package, version 12.10.0 (Arnaud 1996). The LAXPC data were limited to 25 keV in our spectral fitting because of background uncertainties at higher energies. Various standard models, such as a power law, a cutoff power law and a high energy cutoff power law, were attempted to fit the 0.5-25 keV spectrum, along with a component for photo-electric absorption ({\tt TBabs}, Wilms, Allen \& McCray 2000). We found that a cutoff based model is necessary to describe the spectra obtained from the first and fourth observations, when the pulsar was relatively brighter (Figure~\ref{maxi-bat}). However, a simple absorbed power-law model can describe the spectra from the second, third and fifth observations satisfactorily. These models provided a goodness of fit per degree of freedom of $\chi^2_\nu$=$\chi^2/\nu$ $\approx$1 in all cases. In place of the power-law component, we also tried to fit the spectra from the second, third and fifth observations with a thermal blackbody component. This yielded a poor fit, with a $\chi^2/\nu$ of more than 5. We do not detect any signature of cyclotron resonance scattering feature(s) (Jaisawal \& Naik 2017; Staubert et al. 2019) in the 0.5-25 keV spectral range. The spectra obtained from the SXT and LAXPC data also do not show any iron emission line(s) in the 6-7 keV range. The spectral parameters estimated from these fits are given in Table~2. In our fitting, the relative instrument normalization of the SXT with respect to the LAXPC was found to be in the range 0.65-0.80.
We fitted the \textit{NuSTAR}\xspace data from FPMA and FPMB detectors along with the \textit{Swift}\xspace/XRT data in 1-79 keV energy range with a high energy cutoff power-law model along with the interstellar absorption. This model fitted the spectrum well. The spectral parameters such as column density, power-law photon index, cutoff and folding energies obtained from our fitting are found to be consistent with the values reported in Table~1 of F{\"u}rst et al. (2017).
The energy spectra corresponding to each observation, along with the best-fit models and corresponding residuals, are shown in Figure~\ref{spec}. The {\tt cflux} convolution model is used for flux estimation in our study. Note that the flux and luminosity quoted in Table~2 are estimated in the 0.5-30 and 3-30 keV ranges, although 0.5-25 keV data were used in the spectral fitting of the \textit{AstroSat}\xspace observations; the 3-30 keV values were obtained by extrapolating the best-fit model up to 30 keV, for comparison with earlier values reported in the literature. We attempted to find any correlation between the power-law photon index and the luminosity of the pulsar during the \textit{AstroSat}\xspace observations. For this, we plotted the photon index against the observed 3--30 keV luminosity of the pulsar from the \textit{AstroSat}\xspace and combined \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT observations in Figure~\ref{lum-ind}. The corresponding data from the \textit{RXTE}\xspace observations of EXO~2030+375, as reported in the top left panel of Figure~6 of Epili et al. (2017), are also shown for comparison. The figure shows that the photon index and luminosity are anti-correlated during all five epochs of \textit{AstroSat}\xspace and the combined \textit{NuSTAR}\xspace and XRT observations. The results obtained from the present work extend the observed spectral behaviour of EXO~2030+375~ in the sub-critical regime to much lower luminosity.
\section{Discussion}
Be/X-ray binary pulsars are expected to show X-ray enhancements, termed Type-I X-ray outbursts, at the periastron passage of the neutron star (Okazaki \& Negueruela 2001, Reig 2011). In many cases, however, such enhancements are not observed, which has been interpreted as due to the lack of significant evolution of the equatorial circumstellar disk around the Be companion. An alternative interpretation of the lack of X-ray activity (Type-I or Type-II outbursts) relates the Be-disk dynamics to the Kozai-Lidov effect (Laplace et al. 2017). EXO~2030+375~ is unique in the sense that it shows a Type-I X-ray outburst at almost every periastron passage of the binary. The pulsar was observed and studied extensively with the {\it Rossi X-ray Timing Explorer (RXTE)} over 606 epochs, spanning 15 years, during Type-I and Type-II (giant) X-ray outbursts (Epili et al. 2017), in addition to many pointed observations with other observatories used to study the characteristics of the source. Long-term monitoring data from {\it RXTE}/ASM, {\it Swift}/BAT and MAXI/GSC show regular Type~I X-ray outbursts at the periastron passages of the pulsar. However, the intensity at the peak of the Type-I X-ray outbursts has been declining for the last several years (Naik \& Jaisawal 2015, F{\"u}rst et al. 2016, Laplace et al. 2017), including an extended period of low X-ray activity without any Type-I outbursts from MJD 57200 (27 June 2015) to MJD 57600 (31 July 2016) (Kretschmar et al. 2016). Following this extended low state, the transient activity resumed with the appearance of outbursts, though of lower peak intensities to date.
\subsection{Detection of X-ray pulsations at the lowest observed luminosity}
The {\it NuSTAR} and \textit{Swift}\xspace/XRT observations of 25 July 2016 were reported to have been carried out at the lowest luminosity of EXO~2030+375~ at which pulsations were detected in the light curves (F{\"u}rst et al. 2017). Though the pulsar was observed at an even lower luminosity of $\sim$10$^{34}$ erg s$^{-1}$ with the {\it Swift}/XRT in the 3--10 keV range, the poor data quality prevented a pulsation search (F{\"u}rst et al. 2017). On reanalysis of the \textit{NuSTAR}\xspace plus \textit{Swift}\xspace/XRT data, the 0.5-30 keV luminosity of the pulsar on 25 July 2016 is estimated to be 1.21$\times$10$^{36}$ erg s$^{-1}$ (Table~2). Comparing the luminosities during the five \textit{AstroSat}\xspace observations with that of the \textit{NuSTAR}\xspace and \textit{Swift}\xspace/XRT observations, it is interesting to point out that the pulsar was caught at even lower luminosities during the second, third and fifth \textit{AstroSat}\xspace observations. Among these, the lowest luminosity, 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5-30 keV range, was estimated during the third epoch of \textit{AstroSat}\xspace observation. The LAXPC data from this observation showed a clear pulsation at 41.3~s in the light curve. Since the discovery of the source in 1985, this luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5--30 keV range, observed with \textit{AstroSat}\xspace/LAXPC on 19 June 2018, is the lowest at which X-ray pulsations have been detected in the light curves of EXO~2030+375.
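The luminosities quoted above follow from the unabsorbed fluxes in Table~2 through the isotropic relation $L = 4\pi d^{2} F$ at the adopted distance of 7.1~kpc; a minimal sketch (the function name is ours, and the example uses the Obs-1 flux from Table~2):

```python
import math

KPC_IN_CM = 3.0857e21  # 1 kpc in cm

def luminosity(flux_cgs, d_kpc=7.1):
    """Isotropic luminosity (erg/s) from an unabsorbed flux (erg/s/cm^2)
    at a distance d_kpc, via L = 4 * pi * d^2 * F."""
    d_cm = d_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# Obs-1: 0.5-30 keV unabsorbed flux of 3.1e-10 erg/s/cm^2 (Table 2)
l_obs1 = luminosity(3.1e-10)  # ~1.9e36 erg/s, matching the tabulated value
```

Any systematic uncertainty in the 7.1~kpc distance estimate propagates quadratically into these luminosities.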
In accretion powered X-ray pulsars, material is channeled from the disk to the magnetic poles. A decrease in the mass accretion rate decreases the ram pressure, eventually leading to an increase in the size of the magnetosphere (Illarionov \& Sunyaev 1975, Nagase et al. 1989). If the magnetosphere expands beyond the co-rotation radius, the centrifugal barrier prevents the accreting material from falling onto the neutron star. This leads to the cessation of pulsations and is referred to as the propeller effect (Illarionov \& Sunyaev 1975). Though EXO~2030+375~ was detected at a luminosity level as low as $\sim$10$^{34}$~erg~s$^{-1}$ using {\em Swift}/XRT data (F{\"u}rst et al. 2017), the non-detection of X-ray pulsations and the presence of a softer thermal component with a temperature of 1.22~keV suggest the neutron star surface to be the source of the observed emission, as expected when the neutron star enters the propeller phase (see, e.g., Wijnands \& Degenaar 2016, Tsygankov et al. 2016, F{\"u}rst et al. 2017). This allowed us to consider the luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ (third \textit{AstroSat}\xspace observation) as the lowest at which pulsations are seen. Assuming the above luminosity as the upper limit for the onset of the propeller effect, we can calculate the magnetic field of the pulsar as follows (Campana et al. 2002, F{\"u}rst et al. 2017)
\begin{equation}\label{eq}
L_{lim}=7.3 \times k^{7/2}~P^{-7/3}~R_{6}^{5}~B_{12}^{2}~M_{1.4}^{-2/3} \times 10^{37}~{\rm erg~s^{-1}}
\end{equation}
where $P$ is the spin period in seconds, $B_{12}$ is the magnetic field in units of 10$^{12}$~G, $R_6$ is the neutron star radius in units of 10$^{6}$~cm, and $M_{1.4}$ is the mass of the neutron star in units of 1.4~\ensuremath{M_{\odot}}\xspace. The factor $k$ depends on the accretion geometry, with $k$=0.5 for disk accretion and $k$=1 for spherical wind accretion. Using the above equation and assuming disk accretion for EXO~2030+375, we obtain an equatorial magnetic field in the range (3--15)$\times$10$^{12}$~G for minimum luminosities of $\approx$1$\times$10$^{34}$ and 2.5$\times$10$^{35}$~erg~s$^{-1}$, respectively. Based on the detection of a cyclotron line, the polar magnetic field of the neutron star had tentatively been estimated to be 1$\times$10$^{12}$~G (Wilson et al. 2008) and 5$\times$10$^{12}$~G (Klochkov et al. 2008). However, later studies did not confirm the cyclotron feature over a broad energy range (Naik et al. 2013, Naik \& Jaisawal 2015, F{\"u}rst et al. 2017). In the absence of a firm detection of a cyclotron line, we calculate the magnetic field by inserting standard neutron star parameters into Equation~\ref{eq} and find that EXO~2030+375~ hosts a highly magnetized neutron star with a field strength between (3--15)$\times$10$^{12}$~G.
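The field estimate above amounts to inverting Equation~\ref{eq} for $B_{12}$; a minimal sketch (the function name is ours, and the standard values $R_6 = M_{1.4} = 1$ are assumed):

```python
import math

def b_field_1e12(l_lim, spin_period, k=0.5, r6=1.0, m14=1.0):
    """Magnetic field (in units of 1e12 G) from inverting the propeller
    limiting luminosity,
    L_lim = 7.3e37 * k^(7/2) * P^(-7/3) * R6^5 * B12^2 * M1.4^(-2/3) erg/s."""
    denom = 7.3e37 * k**3.5 * spin_period**(-7.0 / 3.0) * r6**5 * m14**(-2.0 / 3.0)
    return math.sqrt(l_lim / denom)

# Disk accretion (k = 0.5), P ~ 41.3 s, standard neutron-star radius and mass:
b_hi = b_field_1e12(2.5e35, 41.3)  # ~15, i.e. 1.5e13 G (AstroSat limit)
b_lo = b_field_1e12(1.0e34, 41.3)  # ~3,  i.e. 3e12 G (Swift/XRT limit)
```

The quadratic dependence $L_{lim} \propto B_{12}^{2}$ means the factor $\sim$25 spread between the two limiting luminosities translates into only a factor $\sim$5 spread in the inferred field.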
\subsection{Pulse profiles and spectroscopy}
Extensive studies of the \textit{RXTE}\xspace observations of EXO~2030+375~ revealed that the pulse profiles of the pulsar strongly depend on the luminosity, i.e. the mass accretion rate (Epili et al. 2017). Irrespective of the type of X-ray outburst, whether regular Type-I or giant Type-II, or the phase of the outburst, the morphology of the pulse profiles remains the same at a given luminosity. In the present study, the pulse profiles of the pulsar during all five epochs of \textit{AstroSat}\xspace observations are characterised by the presence of narrow dip-like features, which are most prominent in the lowest luminosity phase on 19 June 2018. Such features are commonly seen in Be/X-ray binary pulsars (see, e.g., Devasia et al. 2011, Jaisawal, Naik \& Epili 2016, Gupta et al. 2018, Jaisawal et al. 2018). At a luminosity of the order of 10$^{36}$~erg~s$^{-1}$, Ferrigno et al. (2016) and F{\"u}rst et al. (2017) detected a sharp absorption feature in the pulse profile of EXO~2030+375. This feature is interpreted as due to obscuration by the accretion column along the line of sight, supported by phase-resolved spectroscopy which revealed a high column density and an effectively harder spectrum due to reprocessing of the emission. In our study, the low luminosity of the pulsar and the limited understanding of the background and spectral calibration of the instruments at high energies (Antia et al. 2017) prevented us from investigating the cause of the prominent dips in the pulse profiles through pulse-phase resolved spectroscopy. From the energy resolved pulse profiles (Figure~\ref{resolved-profiles}), clear pulsations up to $\sim$50 keV are seen during all five epochs of observations. The significance of the pulsation can also be seen from the values of the pulse fraction with energy (Figure~\ref{pf}). The fraction of photons contributing to the pulsation is found to decrease with energy as well as with luminosity.
The broad-band energy spectrum of accretion-powered X-ray pulsars originates from thermal and bulk Comptonization of soft X-ray photons from the thermal mound on the neutron star surface (Becker \& Wolff 2007). In spite of the complex processes taking place in the accretion column, the observed spectrum can be described by a high-energy cutoff power-law or an exponential cutoff power-law model, along with components for emission lines and absorption due to the interstellar medium. We have studied five \textit{AstroSat}\xspace and {\em NuSTAR}+XRT observations between 2016 and 2020 after renewed activity from EXO~2030+375. Spectral analysis of these observations revealed the dependence of the power-law photon index on luminosity. Extensive studies of available {\it RXTE} observations of the pulsar established the relation between the power-law photon index and the source luminosity (Epili et al. 2017). In that study, the photon indices were found to be distributed in three distinct regions depending on the 3-30 keV luminosity, suggesting a spectral transition from the sub-critical to the super-critical regime through a critical luminosity of (2-4)$\times$10$^{37}$erg~s$^{-1}$ for EXO~2030+375. The source spectrum became harder with luminosity in the sub-critical regime, whereas a softening of the spectral emission was detected in the super-critical regime. As noted above, the \textit{AstroSat}\xspace observations were carried out at lower luminosities compared to the \textit{RXTE}\xspace observations. In this study, we found that the power-law photon index is anti-correlated with the luminosity of the pulsar (Figure~\ref{lum-ind}), in the same manner as reported by Epili et al. (2017) at lower luminosities. This confirms that the spectral shape of the pulsar depends on the mass accretion rate.
\section{Conclusion}
In this paper, we carried out timing and spectral studies of EXO~2030+375~ using five \textit{AstroSat}\xspace observations at various phases of its Type-I X-ray outbursts. The source luminosity was found to be as low as 2.5$\times$10$^{35}$~erg~s$^{-1}$ in the 0.5-30 keV range, at which clear pulsations are still detected. This is the first time that pulsations have been detected at such a low luminosity level in this pulsar. Considering this as a limiting luminosity for the propeller regime, we calculated the magnetic field of the neutron star. We have also studied the pulse profiles of the pulsar. The pulse morphology is found to be complex due to the presence of multiple absorption-like features. The energy spectrum of EXO~2030+375~ can be described by a high-energy cutoff power-law model during the brighter (first and fourth) \textit{AstroSat}\xspace observations. The power-law photon index shows an anti-correlation with the source luminosity, as expected when the source is below the critical luminosity.
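For completeness, the propeller estimate can be sketched with the standard limiting-luminosity relation (e.g., Campana et al. 2002); the numerical coefficient and the efficiency factor $\xi$ below are assumptions of this illustration rather than values quoted above:
\begin{equation}
L_{\rm min} \simeq 4\times10^{37}\, \xi^{7/2}\, B_{12}^{2}\, P^{-7/3}\, M_{1.4}^{-2/3}\, R_{6}^{5}~\mathrm{erg~s^{-1}},
\end{equation}
where $B_{12}$ is the surface magnetic field in units of $10^{12}$~G, $P$ is the spin period in seconds, and $M_{1.4}$ and $R_{6}$ are the neutron star mass and radius in units of $1.4\,M_{\odot}$ and $10^{6}$~cm, respectively. Equating $L_{\rm min}$ to the lowest observed luminosity of 2.5$\times$10$^{35}$~erg~s$^{-1}$ and solving for $B_{12}$ gives the magnetic field estimate.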
\vspace{2em}
\section*{Acknowledgements}
We thank the anonymous reviewer for suggestions on the paper. This publication uses the data from \textit{AstroSat}\xspace mission of the ISRO, archived at the Indian Space Science Data Centre. We thank members of SXT and LAXPC instrument teams for their contribution to the development of the instruments and analysis software. The SXT and LAXPC Payload Operations Centers (POCs) at TIFR are acknowledged for verifying and releasing the data via the ISSDC data archive and providing the necessary software tools for data analyses. We also acknowledge the contributions of the \textit{AstroSat}\xspace project team at ISAC and IUCAA. This research has made use of data obtained through HEASARC Online Service, provided by the NASA/GSFC, in support of NASA High Energy Astrophysics Programs. This work used the NuSTAR Data Analysis Software ({\tt NuSTARDAS}) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
\section{Introduction}
Pre-trained models such as BERT \cite{devlin2019bert}, GPT \cite{radford2018improving}, and RoBERTa \cite{liu2019roberta} have advanced the state of the art on various natural language processing tasks. The successful recipe is that a model is first pre-trained on a huge volume of unsupervised data with self-supervised objectives, and then fine-tuned on supervised data with the same data scheme. Dominant pre-trained models represent a text as a sequence of tokens\footnote{Such tokens can be words or word pieces. We use token for clarity.}. The merits are that such basic text representations are available from vast amounts of unsupervised data, and that models pre-trained and fine-tuned with the same paradigm usually achieve good accuracy in practice \cite{guu2020realm}. However, an evident limitation of these methods is that the richer syntactic structure of text is ignored.
In this paper, we seek to enhance pre-trained models with the syntax of text. Related studies attempt to inject syntax information either only in the fine-tuning stage \cite{nguyen2020tree, sachan2020syntax} or only in the pre-training stage \cite{wang2020k}, which results in a discrepancy between the two stages. When fusing syntax information only in the fine-tuning phase, \citet{sachan2020syntax} find that there is no performance boost unless high-quality human-annotated dependency parses are available. However, this requirement limits the application of the model to broader scenarios where human-annotated dependency information is not available.
To address this, we conduct a large-scale study on injecting automatically produced syntax of text in both the pre-training and fine-tuning stages. We construct a pre-training dataset by applying an off-the-shelf dependency parser \cite{qi2020stanza} to one billion sentences from common crawl news. With these data, we introduce a syntax-aware pre-training task, called dependency distance prediction, which predicts the syntactic distance between tokens in the dependency structure. Compared with the pre-training task of dependency head prediction \cite{wang2020k} that only captures local syntactic relations among words, dependency distance prediction leverages the global syntax of the text. In addition, we develop a syntax-aware attention layer, which can be conveniently integrated into the Transformer \cite{vaswani2017attention} to allow tokens to selectively attend to contextual tokens based on their syntactic distance in the dependency structure.
We conduct experiments on entity typing, question answering and relation classification on six benchmark datasets. Experimental results show that our method achieves state-of-the-art performance on all six datasets. Further analysis shows that our model can indicate the importance of syntactic information for downstream tasks, and that the newly introduced dependency distance prediction task captures the global syntax of the text and performs better than dependency head prediction. In addition, compared with injecting syntax information in only the pre-training or only the fine-tuning stage, injecting it in both stages achieves the best performance.
In summary, the contribution of this paper is threefold. (1) We demonstrate that infusing automatically produced dependency structures into the pre-trained model shows superior performance over downstream tasks. (2) We propose a syntax-aware attention layer and a pre-training task for infusing syntactic information into the pre-trained model. (3) We find that the newly introduced dependency distance prediction task performs better than the dependency head prediction task.
\begin{figure*}[!tp]
\centering
\includegraphics[scale=0.6]{distance.pdf}
\caption{The dependency tree of the sentence, ``My dog is playing frisbee outside the room," after running the Stanza parser.}
\label{fig:distance}
\vspace{-2mm}
\end{figure*}
\section{Related Work}
Our work involves injecting syntax information into pre-trained models. First, we will review recent studies on analyzing the knowledge presented in pre-trained models, and then we will introduce the existing methods that enhance pre-trained models with syntax information.
\subsection{Probing Pre-trained Models}
With the huge success of pre-trained models \cite{devlin2019bert, radford2018improving} in a wide range of NLP tasks, many works study what knowledge pre-trained models inherently capture. Here, we introduce recent works on probing linguistic information, factual knowledge, and symbolic reasoning ability in pre-trained models. In terms of linguistic information, \citet{syntax_probe_bert_hewitt2019} learn a linear transformation to predict the depth of each word in a syntax tree based on its representation, which indicates that syntax information is implicitly embedded in the BERT model. However, \citet{treetransformer_treeintoselfatt} find that the attention scores calculated by pre-trained models seem to be inconsistent with human intuitions of hierarchical structures, indicating that certain complex syntax information may not be naturally embedded in BERT. In terms of probing factual knowledge, \citet{Petroni2019LanguageMA} find that pre-trained models are able to answer fact-filling cloze tests, which indicates that pre-trained models have memorized factual knowledge. However, \citet{Poerner2019BERTIN} argue that BERT's outstanding performance on fact-filling cloze tests is partly due to reasoning over the surface forms of entity names. In terms of symbolic reasoning, \citet{talmor2020olmpics} test pre-trained models on eight reasoning tasks and find that the models completely fail on half of them. Although probing knowledge in pre-trained models is a worthwhile area, it is orthogonal to infusing knowledge into pre-trained models.
\subsection{Integrating Syntax into Pre-trained Models}
Recently, there has been growing interest in enhancing pre-trained models with the syntax of text. Existing methods attempt to inject syntax information either only in the fine-tuning stage or only in the pre-training stage. We first introduce related works that inject syntax in the fine-tuning stage. \citet{nguyen2020tree} incorporate a tree-structured attention into the Transformer framework to help encode syntax information in the fine-tuning stage. \citet{zhang2020sg} utilize syntax to guide the Transformer model to pay no attention to dispensable words in the fine-tuning stage and improve performance in machine reading comprehension. \citet{sachan2020syntax} investigate two distinct strategies for incorporating dependency structures in the fine-tuning stage and obtain state-of-the-art results on the semantic role labeling task. Meanwhile, \citet{sachan2020syntax} argue that the performance boost is mainly attributed to the high-quality human-annotated syntax. However, human annotation is costly and difficult to extend to a wide range of applications. Syntax information can also be injected in the pre-training stage. \citet{wang2020k} introduce a head prediction task to inject syntax information into the pre-trained model, while syntax information is not provided during inference. Note that the head prediction task in \citet{wang2020k} only focuses on the local relationship between two related tokens, which prevents each token from perceiving the information of the entire tree. Despite the success of utilizing syntax information, existing methods only consider the syntactic information of text in either the pre-training or the fine-tuning stage, so they suffer from a discrepancy between the two stages. To bridge this gap, we conduct a large-scale study on injecting automatically produced syntax information in both stages. Compared with the head prediction task \cite{wang2020k} that captures local relationships, we introduce a dependency distance prediction task that leverages the global structure to predict the distance between two given tokens.
\section{Data Construction}
In this paper, we adopt the dependency tree to express the syntax information. Such a tree structure is concise and only expresses necessary information for the parse \cite{jurafsky2000speech}. Meanwhile, its head-dependent relation can be viewed as an approximation to the semantic relationship between tokens, which is directly useful for capturing semantic information. The above advantages help our model make more effective use of syntax information.
Another available type of syntax information is the constituency tree, which is used in \citet{nguyen2020tree}. However, as pointed out in \citet{jurafsky2000speech}, the relationships between tokens in a dependency tree directly reflect important syntax information, which is often buried in the more complex constituency trees. Extracting relations among words from a constituency tree therefore requires extra techniques \citep{jurafsky2000speech}\footnote{\href{https://web.stanford.edu/~jurafsky/slp3/}{https://web.stanford.edu/\~{}jurafsky/slp3/}}.
The dependency tree takes linguistic words as its basic units. However, most pre-trained models take subwords (also known as word pieces) rather than entire linguistic words as input units, which necessitates extending the definition of the dependency tree to subwords. Following \citet{wang2020k}, if there exists a dependency relation between linguistic word $v$ and word $u$, we add edges from the first subword of $v$ to all subwords of $u$.
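As an illustration, a minimal sketch of this subword extension; the head array and word-to-subword mapping below are hypothetical inputs, not part of any released code:

```python
def expand_to_subword_edges(heads, word_spans):
    """For each word-level edge head -> dep, add edges from the FIRST
    subword of the head to ALL subwords of the dependent.

    heads[d] is the head-word index of word d (-1 marks the root);
    word_spans[w] lists the subword indices of word w.
    """
    edges = []
    for dep, head in enumerate(heads):
        if head < 0:  # skip the root word
            continue
        first_sub_of_head = word_spans[head][0]
        for sub in word_spans[dep]:
            edges.append((first_sub_of_head, sub))
    return edges

# "my dog is playing", where "playing" splits into ("play", "##ing"):
heads = [1, 3, 3, -1]                   # my->dog, dog->playing, is->playing
word_spans = [[0], [1], [2], [3, 4]]    # word index -> subword indices
print(expand_to_subword_edges(heads, word_spans))
# [(1, 0), (3, 1), (3, 2)]
```

Note that an edge from a multi-subword head originates only at its first subword, so the number of edges grows with the subword count of dependents, not heads.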
Based on the above extended definition, we build a pre-training dataset from open-domain sources. Specifically, we randomly collect 1B sentences from the publicly released common crawl news dataset \cite{NIPS2019_9106}, which contains English news articles crawled between December 2016 and March 2019. Considering its effectiveness and its ability to extend to multiple languages, we adopt the off-the-shelf Stanza parser\footnote{\href{https://github.com/stanfordnlp/stanza}{https://github.com/stanfordnlp/stanza}} to automatically generate the syntax information for each sentence. The average sentence length is 25.34 tokens, and the average depth of the syntax trees is 5.15.
\section{Methodology}
In this section, we present the proposed \textbf{S}yntax-\textbf{E}nhanced \textbf{PRE}-trained \textbf{M}odel (\textbf{SEPREM}). We first define the syntax distance between two tokens. Based on the syntax distance, we then introduce a syntax-aware attention layer to learn syntax-aware representations and a pre-training task to enable the model to capture global syntactic relations among tokens.
\subsection{Syntax Distance over Syntactic Tree}
Intuitively, the distance between two tokens on the syntactic tree may reflect the strength of their linguistic correlation. If two tokens are far away from each other on the syntactic tree, the strength of their linguistic correlation is likely weak. Thus, we define the distance of two tokens over the dependency tree as their syntactic distance. Specifically, we define the distance between token $v$ and token $u$ as 1, i.e. $d(v, u)=1$, if $v$ is the head of $u$. If two tokens are not directly connected in the dependency graph, their distance is the summation of the distances between adjacent nodes on the path. If two tokens are separated in the graph, their distance is set to infinity. Taking the sentence ``\textit{My dog is playing frisbee outside the room.}" in Figure~\ref{fig:distance} as an example, $d(\textit{playing}, \textit{frisbee})$ equals 1 since the token ``\textit{playing}" is the head of the token ``\textit{frisbee}".
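This distance can be computed by breadth-first search over the undirected dependency tree; the following sketch (our own illustration, taking a hypothetical head array as input) mirrors the definition above:

```python
from collections import deque

def syntactic_distances(heads):
    """All-pairs syntactic distances from a head array, where heads[i] is the
    head of token i (-1 for the root). Edges are traversed in both directions,
    and unreachable pairs keep distance infinity."""
    n = len(heads)
    adj = [[] for _ in range(n)]
    for dep, head in enumerate(heads):
        if head >= 0:
            adj[dep].append(head)
            adj[head].append(dep)
    dist = [[float("inf")] * n for _ in range(n)]
    for src in range(n):
        dist[src][src] = 0
        queue = deque([src])
        while queue:  # breadth-first search from src
            v = queue.popleft()
            for w in adj[v]:
                if dist[src][w] == float("inf"):
                    dist[src][w] = dist[src][v] + 1
                    queue.append(w)
    return dist

# "My dog is playing frisbee": "playing" (index 3) heads "dog" and "frisbee",
# so d(playing, frisbee) = 1 while d(dog, frisbee) = 2.
heads = [1, 3, 3, -1, 3]
d = syntactic_distances(heads)
print(d[3][4], d[1][4])  # 1 2
```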
\subsection{Syntax-Aware Transformer}
\label{Syntax-Aware Transformer}
We follow BERT \cite{devlin2019bert} and use the multi-layer bidirectional Transformer \cite{vaswani2017attention} as the model backbone. The model takes a sequence $X$ as the input and applies $N$ transformer layers to produce contextual representation:
\begin{equation} \label{equ:transformer}
\bm{H}^n=transformer_n((1-\alpha)\bm{H}^{n-1}+\alpha\bm{\hat{H}}^{n-1})
\end{equation}
where $n \in [1,N]$ denotes the $n$-th layer of the model, $\bm{\hat{H}}$ is the syntax-aware representation described in Section \ref{sec:distance-aware attention layer}, $\bm{H}^0$ is the embedding of the input sequence $X$, and $\alpha$ is a learnable variable.
However, the introduction of the syntax-aware representation $\bm{\hat{H}}$ in Equation \ref{equ:transformer} changes the architecture of the Transformer, invalidating the original weights from pre-trained models such as BERT and RoBERTa. We therefore introduce a learnable importance score $\alpha$ that controls the proportion of integration between the contextual and syntax-aware representations. When $\alpha$ is equal to zero, the syntax-aware representation is excluded entirely and the model is architecturally identical to the vanilla Transformer. We thus initialize the parameter $\alpha$ to a small but non-zero value, which helps to better fuse syntactic information into existing pre-trained models. We discuss the importance score $\alpha$ in detail in Section \ref{model analysis}.
Each transformer layer $transformer_n$ contains an architecturally identical transformer block, composed of multi-headed self-attention $MultiAttn$ \cite{vaswani2017attention} followed by a feed-forward layer $FFN$. Formally, the output $\bm{H}^n$ of the transformer block $transformer_n(H'_{n-1})$ is computed as:
\begin{equation}\label{equa:transformer-block}
\begin{split}
&\bm{G}'_n=LN(MultiAttn(H'_{n-1})+H'_{n-1})\\
&\bm{H}^n=LN(FFN(\bm{G}'_n)+\bm{G}'_n)\\
\end{split}
\end{equation}
where the input $H'_{n-1}$ is $(1-\alpha) \bm{H}^{n-1} + \alpha\bm{\hat{H}}^{n-1}$ and $LN$ represents a layer normalization operation.
\subsection{Syntax-aware Attention Layer}
\label{sec:distance-aware attention layer}
In this section, we will introduce how to obtain the syntax-aware representation $\bm{\hat{H}}$ used in syntax-aware transformer.
\paragraph{Tree Structure Encoding}
We adopt a distance matrix \bm{$D$} to encode the tree structure. The advantages of the distance matrix \bm{$D$} are that it preserves the hierarchical syntactic structure of text and directly reflects the distance between two given tokens. Meanwhile, its uniqueness guarantees a one-to-one mapping to the tree structure. Given a dependency tree, the element $\bm{D}_{i,j}$ of the distance matrix \bm{$D$} in the $i$-th row and $j$-th column is defined as:
\begin{equation}
\bm{D}_{i,j} = \left\{
\begin{array}{cl}
d(i,j), & \text{if a path exists from $v_i$ to $v_j$}, \\
0, & \text{if $i = j$ or no path exists}.
\end{array}
\right.
\end{equation}
where $v_i$ and $v_j$ are tokens on the dependency tree.
Based on the concept that distance is inversely proportional to importance, we normalize the matrix \bm{$D$} and obtain the normalized correlation strength matrix \bm{$\tilde{D}$} as follows:
\begin{equation}
\bm{\tilde{D}}_{i,j} = \left\{
\begin{array}{cl}
\frac{1 / \bm{D}_{i,j}}{\sum_{z\in\{y|\bm{D}_{i,y} \neq 0\}} (1/\bm{D}_{i,z})}, & \text{if $\bm{D}_{i,j} \neq 0$}, \\
0, & \text{otherwise}.
\end{array}
\right.
\end{equation}
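A small numpy sketch of this normalization (our own illustration; the toy matrix below is hypothetical):

```python
import numpy as np

def normalize_distance_matrix(D):
    """Row-normalize inverse distances into correlation strengths.
    Entries with D[i, j] == 0 (i == j or no finite path) stay zero."""
    D = np.asarray(D, dtype=float)
    safe = np.where(D != 0, D, 1.0)          # placeholder avoids 0-division
    inv = np.where(D != 0, 1.0 / safe, 0.0)  # 1 / D_ij where defined
    row_sums = inv.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0            # rows with no neighbours
    return inv / row_sums

# Chain a - b - c: distance 1 between neighbours, 2 across.
D = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]])
print(normalize_distance_matrix(D)[0])  # approximately [0, 2/3, 1/3]
```

Closer tokens thus receive larger weights, and each non-empty row sums to one.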
\paragraph{Syntax-aware Representation}
Given the tree structure representation $\bm{\tilde{D}}$ and the contextual representation $\bm{H}^n$, we fuse the tree structure into the contextual representation as:
\begin{equation}
\label{diatance-aware representation}
\bm{\hat{H}}^n = \sigma(\bm{W}_n^1\bm{H}^n + \bm{W}_n^2\bm{\tilde{D}}\bm{H}^n)
\end{equation}
where $\sigma$ is the activation function, and $\bm{W}_n^1$ and $\bm{W}_n^2\in\mathbb{R}^{d_h \times d_h}$ are model parameters. The term $\bm{\tilde{D}}\bm{H}^n$ allows each token to aggregate information from the others along the tree structure. The closer two tokens are on the dependency tree, the larger the attention weight, and thus the more information is propagated between them, and vice versa.
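The syntax-aware representation and the fused layer input above can be sketched in a few lines of numpy. The toy sizes, the stand-in uniform $\bm{\tilde{D}}$, and the tanh activation are our assumptions (the paper does not name $\sigma$), and the row-vector convention $H W$ replaces the $W H$ notation above:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_h = 5, 8                       # toy sizes, hypothetical

H = rng.normal(size=(seq_len, d_h))       # contextual representation H^{n-1}
W1 = rng.normal(size=(d_h, d_h)) * 0.1    # model parameters W^1_n, W^2_n
W2 = rng.normal(size=(d_h, d_h)) * 0.1

# Stand-in normalized distance matrix: uniform weight on all other tokens.
D_tilde = np.full((seq_len, seq_len), 1.0 / (seq_len - 1))
np.fill_diagonal(D_tilde, 0.0)

# Syntax-aware representation: sigma(W1 H + W2 D~ H), tanh as a placeholder.
H_hat = np.tanh(H @ W1 + (D_tilde @ H) @ W2)

# Fused layer input: (1 - alpha) H + alpha H_hat, with a small initial alpha.
alpha = 0.01
fused = (1 - alpha) * H + alpha * H_hat
print(fused.shape)  # (5, 8)
```

With $\alpha$ near zero the fused input is almost the vanilla contextual representation, matching the initialization strategy described in Section \ref{Syntax-Aware Transformer}.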
\subsection{Syntax-aware Pre-training Task}
\label{sec:Syntax-aware_Pre-training_Tasks}
To better understand sentences, it is beneficial for the model to be aware of the underlying syntax. To this end, a new pre-training task, named the dependency distance prediction (DP) task, is designed to enhance the model's ability to capture global syntactic relations among tokens. Specifically, we first randomly mask some elements in the distance matrix $\bm{D}$, say $\bm{D}_{i,j}$. Afterwards, the representations of tokens $i$ and $j$ from SEPREM are concatenated and fed into a linear classifier, which outputs a probability distribution over distances. In all of our experiments, 15\% of the distances are masked at random.
Similar to BERT \cite{devlin2019bert} and RoBERTa \cite{liu2019roberta}, we conduct the following operations to boost robustness. A selected distance in matrix $\bm{D}$ is masked with 80\% probability or replaced by a random integer with 10\% probability. With the remaining 10\% probability, the distance is kept unchanged.
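A sketch of this corruption scheme in Python (our own illustration; the MASK sentinel value and the random-distance range are assumptions, not details from the paper):

```python
import random

MASK = -1  # sentinel for a masked distance (hypothetical encoding)

def corrupt_distances(D, select_prob=0.15, max_dist=10, seed=0):
    """Select ~15% of entries as prediction targets; of those, 80% are
    replaced by MASK, 10% by a random distance, 10% left unchanged."""
    rng = random.Random(seed)
    corrupted = [list(row) for row in D]
    targets = []                          # (i, j, true_distance) labels
    for i in range(len(D)):
        for j in range(len(D[i])):
            if rng.random() < select_prob:
                targets.append((i, j, D[i][j]))
                r = rng.random()
                if r < 0.8:
                    corrupted[i][j] = MASK
                elif r < 0.9:
                    corrupted[i][j] = rng.randint(1, max_dist)
                # else: keep the original distance
    return corrupted, targets
```

Only the selected entries contribute to the DP loss; unselected entries pass through unchanged.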
During pre-training, in addition to the DP pre-training task, we also use the dependency head prediction (HP) task, which is used in \citet{wang2020k} to capture the local head relation among words, and the dynamic masked language model (MLM), which is used in \citet{liu2019roberta} to capture contextual information. The final loss for the pre-training is the summation of the training loss of DP, HP and MLM tasks.
\subsection{Implementation Details}
The implementation of SEPREM is based on HuggingFace's Transformers \citep{wolf2019huggingface}. To accelerate the training process, we initialize parameters from the RoBERTa model released by HuggingFace\footnote{\href{https://huggingface.co/transformers/}{https://huggingface.co/transformers/}}, which has 24 layers with a hidden size of 1024. The number of parameters of our model is 464M. We pre-train our model with 16 32G NVIDIA V100 GPUs for approximately two weeks. The batch size is set to 2048, and the total number of steps is 500,000, of which 30,000 are warm-up steps.
In both the pre-training and fine-tuning stages, our model takes the syntax of the text as an additional input, which is pre-processed in advance. Specifically, we obtain the dependency tree of each sentence via Stanza and then generate the normalized distance matrix.
\section{Experiments}
In this section, we evaluate the proposed SEPREM on six benchmark datasets over three downstream tasks, {\it i.e.}, entity typing, question answering and relation classification.
\subsection{Entity Typing}
The entity typing task requires the model to predict the type of a given entity based on its context. Two fine-grained public datasets, Open Entity \citep{choi2018openentity} and FIGER \citep{ling2015figer}, are employed to evaluate our model. The statistics of these datasets are shown in Table \ref{table:statistic_oe_rc}. Following \citet{wang2020k}, a special token ``@" is added before and after a given entity, and the representation of the first special token ``@" is adopted to predict the type of the entity. To keep the evaluation criteria consistent with previous works \citep{shimaoka2016attentive, zhang2019ernie, peters2019knowledge, wang2019kepler, xiong2019pretrained}, we adopt loose micro precision, recall, and F1 to evaluate model performance on the Open Entity dataset. For the FIGER dataset, we use strict accuracy, loose macro-F1, and loose micro-F1 as evaluation metrics.
\begin{table}[!t]
\small
\begin{center}
\begin{tabular}{@{}l|cccc@{}}
\toprule
Dataset & Train & Dev & Test & Label\\ \midrule
Open Entity & 2,000 & 2,000 & 2,000 & 6 \\
FIGER & 2,000,000 & 10,000 & 563 & 113 \\
\midrule
TACRED & 68,124 & 22,631 & 15,509 & 42 \\
\bottomrule
\end{tabular}
\end{center}
\caption{The statistics of the entity typing datasets (Open Entity and FIGER) and the relation classification dataset (TACRED). Label denotes the number of entity types or relation types.}
\label{table:statistic_oe_rc}
\vskip -0.15in
\end{table}
\paragraph{Baselines}
NFGEC \citep{shimaoka2016attentive} recursively composes representation of entity context and further incorporates an attention mechanism to capture fine-grained category memberships of an entity. KEPLER \citep{wang2019kepler} infuses knowledge into the pre-trained models and jointly learns the knowledge embeddings and language representation. RoBERTa-large (continue training) learns on the proposed pre-training dataset under the same settings with SEPREM but only with dynamic MLM task. In addition, we also report the results of BERT-base \citep{devlin2019bert}, ERNIE \citep{zhang2019ernie}, KnowBERT \citep{peters2019knowledge}, WKLM \citep{xiong2019pretrained}, RoBERTa-large, and K-adapter \citep{wang2020k} for a full comparison.
\begin{table*}[!t]
\centering
\begin{tabular}{@{}l|ccc|ccc@{}}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{OpenEntity}} & \multicolumn{3}{c}{\textbf{FIGER}} \\ \cmidrule(l){2-7}
& \textbf{P} & \textbf{R} & \textbf{Mi-$\textbf{F}_1$} & \textbf{Acc} & \textbf{Ma-$\textbf{F}_1$} & \textbf{Mi-$\textbf{F}_1$} \\ \midrule
NFGEC \cite{shimaoka2016attentive} & 68.80 & 53.30 & 60.10 & 55.60 & 75.15 & 71.73 \\
BERT-base \cite{zhang2019ernie} & 76.37 & 70.96 & 73.56 & 52.04 & 75.16 & 71.63 \\
ERNIE \cite{zhang2019ernie} & 78.42 & 72.90 & 75.56 & 57.19 & 75.61 & 73.39 \\
KnowBERT \cite{peters2019knowledge} & 78.60 & 73.70 & 76.10 & - & - & - \\
KEPLER \cite{wang2019kepler} & 77.20 & 74.20 & 75.70 & - & - & - \\
WKLM \cite{xiong2019pretrained} & - & - & - & 60.21 & 81.99 & 77.00 \\
K-Adapter \cite{wang2020k} & 79.25 & 75.00 & 77.06 & 61.81 & 84.87 & 80.54 \\
\midrule
RoBERTa-large & 77.55 & 74.95 & 76.23 & 56.31 & 82.43 & 77.83 \\
RoBERTa-large (continue training) & 77.63 & 75.01 & 76.30 & 56.52 & 82.37 & 77.81\\
SEPREM & \textbf{81.07} & \textbf{77.14} & \textbf{79.06} & \textbf{63.21} & \textbf{86.14} & \textbf{82.05}\\
\bottomrule
\end{tabular}%
\vspace{-2mm}
\caption{Results for entity typing task on the OpenEntity and FIGER datasets.}
\label{tab:entity_typing}
\end{table*}
\paragraph{Experimental Results} As we can see in Table \ref{tab:entity_typing}, our SEPREM outperforms all other baselines on both entity typing datasets. On the Open Entity dataset, with the utility of the syntax of text, SEPREM achieves an improvement of 3.6\% in micro-F1 score compared with the RoBERTa-large (continue training) model. This result demonstrates that the proposed syntax-aware pre-training task and syntax-aware attention layer help to capture the syntax of text, which is beneficial for predicting types more accurately. On the FIGER dataset, which contains more entity-type labels, SEPREM still brings improvements in strict accuracy, macro-F1, and micro-F1. This demonstrates the effectiveness of leveraging syntactic information in tasks with more fine-grained information. Specifically, compared with the K-Adapter model, our SEPREM model brings an improvement of 2.6\% in F1 score on the Open Entity dataset. It is worth noting that the SEPREM model is complementary to the K-Adapter model; both inject syntactic information into the model during the pre-training stage. This improvement indicates that injecting syntactic information in both the pre-training and fine-tuning stages makes full use of the syntax of the text, thereby benefiting downstream tasks.
\begin{table}[!t]
\begin{center}
\begin{tabular}{@{}l|ccc@{}}
\toprule
Dataset & Train & Dev & Test \\
\midrule
SearchQA & 99,811 & 13,893 & 27,247 \\
Quasar-T & 28,496 & 3,000 & 3,000 \\
\midrule
CosmosQA & 25,588 &3,000 & 7,000 \\
\bottomrule
\end{tabular}%
\end{center}
\caption{The statistics of the question answering datasets: SearchQA, Quasar-T and CosmosQA.}
\label{table:statistic_qa}
\vskip -0.15in
\end{table}
\subsection{Question Answering}
We use the open-domain question answering (QA) task and the commonsense QA task to evaluate the proposed model. Open-domain QA requires models to answer open-domain questions with the help of external resources such as collected documents and webpages. We use SearchQA \citep{dunn2017searchqa} and Quasar-T \citep{dhingra2017quasar} for this task, and adopt ExactMatch (EM) and loose F1 scores as evaluation metrics. In this task, we first retrieve paragraphs related to the question from external materials via an information retrieval system, and then a reading comprehension model is adopted to extract possible answers from the retrieved paragraphs. Following previous work \citep{lin2018denoising}, we use the retrieved paragraphs provided by \citet{wang2017gated} for the two datasets.
For fair comparison, we follow \citet{wang2020k} and use $[$$<$$sep$$>$$, question, $$<$$/sep$$>$$, paragraph, $$<$$/sep$$>$$]$ as the input, where $<$$sep$$>$ is a special token placed in front of the two segments and $<$$/sep$$>$ is a special symbol that separates the two kinds of input. We fine-tune the model on this task and use two linear layers over the last hidden features to predict the start and end positions of the answer span.
Commonsense QA aims to answer questions which require commonsense knowledge that is not explicitly expressed in the question.
We use the public CosmosQA dataset \citep{huang2019cosmos} for this task, with accuracy as the evaluation metric. The statistics of the above three datasets are shown in Table \ref{table:statistic_qa}. In CosmosQA, each question has 4 candidate answers, and we concatenate the question with each answer separately as $[$$<$$sep$$>$$, context, $$<$$/sep$$>$$,paragraph,$$<$$/sep$$>$$]$ for input. The representation of the first token is adopted to calculate a score for each answer, and the answer with the highest score is regarded as the predicted answer for the question.
\begin{table*}[!t]
\centering
\small
\begin{tabular}{@{}l|cc|cc|c@{}}
\toprule
\multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{SearchQA}} & \multicolumn{2}{c|}{\textbf{Quasar-T}} & \textbf{CosmosQA} \\ \cmidrule(l){2-6} & \textbf{EM} & \textbf{$\textbf{F}_1$} & \textbf{EM} & \textbf{$\textbf{F}_1$} & \textbf{Accuracy} \\
\midrule
BiDAF \citep{SeoKFH2017} & 28.60 & 34.60 & 25.90 & 28.50 & -\\
AQA \citep{Buck2017AskTR} & 40.50 & 47.40 & - & - & -\\
R\textasciicircum{}3 \citep{wang2017reinforced} & 49.00 & 55.30 & 35.30 & 41.70 & -\\
DSQA \citep{lin2018denoising} & 49.00 & 55.30 & 42.30 & 49.30 & -\\
Evidence Agg. \citep{wang2017evidence} & 57.00 & 63.20 & 42.30 & 49.60 & -\\
BERT \citep{xiong2019pretrained} & 57.10 & 61.90 & 40.40 & 46.10 & -\\
WKLM \citep{xiong2019pretrained} & 58.70 & 63.30 & 43.70 & 49.90 & -\\
WKLM + Ranking \citep{xiong2019pretrained} & 61.70 & 66.70 & 45.80 & 52.20 & -\\
$\text{BERT-FT}_{RACE+SWAG}$ \citep{huang2019cosmos} & - & - & - & - & 68.70\\
\textsc{K-Adapter} \citep{wang2020k} & 61.96 & 67.31 & 45.69 & 52.48 & 81.83\\
\midrule
RoBERTa-large & 59.01 & 65.62 & 40.83 & 48.84 & 80.59\\
RoBERTa-large (continue training) & 59.34 & 65.71 & 40.91 & 49.04 & 80.75\\
SEPREM & \textbf{62.31} & \textbf{67.74} & \textbf{46.37} & \textbf{53.18} & \textbf{82.37}\\
\bottomrule
\end{tabular}
\caption{Results on QA datasets including: SearchQA, Quasar-T and CosmosQA.}
\label{tab:question_answering}
\vskip -0.15in
\end{table*}
\begin{table}[!t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lccc@{}}
\toprule
\textbf{Model} & \textbf{P} & \textbf{R} & \textbf{$\textbf{F}_1$} \\ \midrule
C-GCN \citep{zhang2018graph} & 69.90 & 63.30 & 66.40 \\
BERT-base \citep{zhang2019ernie} & 67.23 & 64.81 & 66.00 \\
ERNIE \citep{zhang2019ernie} & 69.97 & 66.08 & 67.97 \\
BERT-large \citep{soares2019matching} & - & - & 70.10 \\
BERT+MTB \citep{soares2019matching} & - & - & 71.50 \\
KnowBERT \citep{peters2019knowledge} & 71.60 & 71.40 & 71.50 \\
KEPLER \citep{wang2019kepler} & 70.43 & 73.02 & 71.70 \\
K-Adapter \citep{wang2020k} & 70.05 & 73.92 & 71.93\\
\midrule
RoBERTa-large & 70.17 & 72.36 & 71.25 \\
RoBERTa-large (continue training) & 70.19 & 72.41 & 71.28\\
SEPREM & \textbf{70.57} & \textbf{74.36} & \textbf{72.42}\\
\bottomrule
\end{tabular}}
\caption{Results for the relation classification task on the TACRED dataset.}
\label{table:RC}
\vskip -0.2in
\end{table}
\paragraph{Baselines}
BiDAF \citep{SeoKFH2017} is a bidirectional attention network that obtains query-aware context representations. AQA \citep{Buck2017AskTR} adopts a reinforcement-learning-guided question rewriting system and generates answers according to the rewritten questions. R\textasciicircum{}3 \citep{wang2017reinforced} selects the most confident paragraph with a reinforcement-learning-based ranker. DSQA \citep{lin2018denoising} employs a paragraph selector to remove noisy paragraphs and a paragraph reader to extract the correct answer from the denoised paragraphs. Evidence Agg. \citep{wang2017evidence} makes use of multiple passages to generate answers. $\text{BERT-FT}_{RACE+SWAG}$ \citep{huang2019cosmos} sequentially fine-tunes BERT on the RACE and SWAG datasets for knowledge transfer. Besides the aforementioned models, we also report the results of BERT \citep{xiong2019pretrained}, WKLM \citep{xiong2019pretrained}, WKLM + Ranking \citep{xiong2019pretrained}, RoBERTa-large, RoBERTa-large (continue training), and K-Adapter \citep{wang2020k} for a detailed comparison.
\paragraph{Experimental Results}
The results of the open-domain QA task are shown in Table \ref{tab:question_answering}. The proposed SEPREM model brings significant relative gains of 3.1\% and 8.4\% in F1 score on SearchQA and Quasar-T, respectively, compared with the RoBERTa-large (continue training) model. This may be partially attributed to the fact that the QA task requires reading comprehension ability \citep{wang2020k}, and the introduced syntax information can guide the model away from concentrating on dispensable words, improving its reading comprehension capacity \citep{zhang2020sg}. Meanwhile, SEPREM achieves state-of-the-art results on the CosmosQA dataset, which demonstrates the effectiveness of the proposed model. The performance gains on CosmosQA, however, are not as substantial as those on the open-domain QA tasks. We speculate that CosmosQA requires contextual commonsense reasoning, and the lack of explicit injection of commonsense knowledge into SEPREM limits its improvement.
\begin{figure*}[ht]
\centering
\subfigure[Open Entity]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2.03in]{OE.pdf}
\end{minipage}
}%
\subfigure[CosmosQA]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2.05in]{QA.pdf}
\end{minipage}
}%
\centering
\subfigure[TACRED]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=2.06in]{TA.pdf}
\end{minipage}%
}%
\caption{Ablation study of the SEPREM model on three different datasets over entity typing, question answering, and relation classification tasks. All the evaluation models are pre-trained on 10 million sentences.}
\label{fig:ablation}
\vskip -0.1in
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.43]{CS.pdf}
\caption{Case study results on the TACRED dataset of relation classification tasks. Models are required to predict the relation between tokens in
\bm{{\color{BurntOrange}orange}} and \bm{{\color{NavyBlue}blue}} colors. Predictions marked with \checkmark agree with the true labels.}
\label{fig:CT}
\vskip -0.1in
\end{figure*}
\subsection{Relation Classification}
A relation classification task aims to predict the relation between two given entities in a sentence. We use TACRED \citep{zhang2017tacred}, a large-scale relation classification dataset, for this task, and adopt micro precision, recall, and F1 scores as evaluation metrics. The statistics of the TACRED dataset are shown in Table \ref{table:statistic_oe_rc}. Following \citet{wang2020k}, we add the special tokens ``@" and ``\#" before and after the first and second entity, respectively. The representations of the leading ``@" and ``\#" tokens are then concatenated to perform relation classification.
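To make the input construction concrete, here is a minimal sketch of the entity-marking step described above. The helper and example sentence are hypothetical and only illustrate marker placement; the actual tokenization and classifier follow \citet{wang2020k}.

```python
def mark_entities(tokens, ent1_span, ent2_span):
    """Insert '@' around the first entity and '#' around the second.

    Spans are (start, end) token indices with end exclusive; the first
    entity is assumed to precede the second without overlap.
    """
    out = []
    for i, tok in enumerate(tokens):
        if i == ent1_span[0]:
            out.append("@")
        if i == ent2_span[0]:
            out.append("#")
        out.append(tok)
        if i == ent1_span[1] - 1:
            out.append("@")
        if i == ent2_span[1] - 1:
            out.append("#")
    return out

tokens = ["Bill", "Gates", "founded", "Microsoft", "."]
marked = mark_entities(tokens, (0, 2), (3, 4))
# marked == ['@', 'Bill', 'Gates', '@', 'founded', '#', 'Microsoft', '#', '.']
# The hidden states at the positions of the leading '@' and '#' would
# then be concatenated and fed to the relation classifier.
```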
\paragraph{Baselines}
C-GCN \citep{zhang2018graph} encodes the dependency tree via graph convolutional networks for relation classification. BERT+MTB \citep{soares2019matching} trains relation representation by matching the blanks. We also include the baseline models of BERT-base \citep{zhang2019ernie}, ERNIE \citep{zhang2019ernie}, BERT-large \citep{soares2019matching}, KnowBERT \citep{peters2019knowledge}, KEPLER \citep{wang2019kepler}, RoBERTa-large, RoBERTa-large (continue training), and K-Adapter \citep{wang2020k} for a comprehensive comparison.
\paragraph{Experimental Results}
Table \ref{table:RC} shows the performance of the baseline models and the proposed SEPREM on TACRED. The proposed syntax-aware pre-training tasks and syntax-aware attention mechanism consistently bring gains on the relation classification task, and SEPREM outperforms the baseline models overall. This further confirms the strong generalization capacity of our proposed model. Compared with the K-Adapter model, however, the performance gains of SEPREM on TACRED are not as substantial as those on the Open Entity dataset. This may be partially due to the fact that K-Adapter also injects factual knowledge into the model, which may help in identifying relationships.
\subsection{Ablation Study}
To investigate the impact of the various components of SEPREM, experiments are conducted for the entity typing, question answering, and relation classification tasks on the corresponding benchmarks, {\it i.e.}, Open Entity, CosmosQA, and TACRED, respectively. Note that since training the models on the entire data is time-consuming, we randomly sample 10 million sentences from the whole corpus to build a smaller dataset for this ablation study.
The results are illustrated in Figure \ref{fig:ablation}, in which we eliminate the two syntax-aware pre-training tasks ({\it i.e.,} HP and DP) and the syntax-aware attention layer to evaluate their effectiveness. Without the syntax-aware attention layer, an immediate performance degradation is observed, indicating that leveraging this layer to learn syntax-aware representations benefits SEPREM. Another observation is that in all three experiments, eliminating the DP pre-training task leads to worse empirical results. In other words, compared with the existing method ({\it i.e.}, the head prediction task), the proposed dependency distance prediction task is more advantageous for various downstream tasks. This may be attributed to the fact that leveraging global syntactic correlations is more beneficial than considering only local correlations. Moreover, significant performance gains are obtained by simultaneously exploiting the two pre-training tasks and the syntax-aware attention layer, which further confirms the superiority of our pre-training architecture.
\subsection{Case Study}
We conduct a case study to empirically explore the effectiveness of utilizing syntax information. In the case of relation classification task, we need to predict the relationship of two tokens in a sentence.
As the three examples in Figure \ref{fig:CT} show, SEPREM can capture syntax information through the dependency tree and make correct predictions. However, without utilizing syntax information, RoBERTa fails to recognize the correct relationship. To give further insight into how syntax information affects prediction, we analyze case 1 in detail. The extracted dependency tree captures the close correlation between ``\textit{grew}'' and ``\textit{Jersey}'', which indicates that ``\textit{New Jersey}'' is more likely to be a place of residence. These results reflect that our model can better understand the global syntactic relations among tokens by utilizing the dependency tree.
\subsection{Analysis of Importance Score $\alpha$}
\label{model analysis}
Under the syntax-enhanced pre-training framework introduced here, the contextual representation ($\bm{H}^n$) and the syntax-aware representation ($\bm{\hat{H}}^n$) are jointly optimized to abstract semantic information from sentences. An interesting question is how much syntactic information should be leveraged by our pre-trained model. In this regard, we investigate the effect of the importance score $\alpha$ on the aforementioned six downstream tasks; the weights $\alpha$ learned after fine-tuning SEPREM are shown in Table \ref{table:alpha}. We observe that the values of $\alpha$ lie between 13\% and 15\% on the six downstream datasets, which indicates that these tasks require syntactic information to obtain the best performance and once again confirms the effectiveness of utilizing syntax information.
To gain further insight into the effect of the importance score $\alpha$, we conduct experiments on SEPREM w/o $\alpha$, which eliminates $\alpha$ in Equation \ref{equ:transformer} and integrates the syntax-aware and contextual representations equally, i.e., $\bm{H}^n=transformer_n(\bm{H}^{n-1}+\bm{\hat{H}}^{n-1})$. The pre-training settings of SEPREM w/o $\alpha$ are the same as those of the proposed SEPREM model. As Table \ref{table:alpha} shows, performance drops by 1\%$\sim$3\% on the six datasets when $\alpha$ is excluded. This observation indicates the necessity of introducing $\alpha$ to better integrate the syntax-aware and contextual representations.
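As an illustration of the two integration schemes, the toy sketch below contrasts an $\alpha$-weighted combination with the plain sum used in SEPREM w/o $\alpha$. The convex-combination form is our assumption for illustration only (the exact form is given by Equation \ref{equ:transformer}), and the transformer layer itself is omitted.

```python
import numpy as np

def integrate(H, H_syn, alpha=None):
    """Combine contextual (H) and syntax-aware (H_syn) representations.

    With alpha: convex combination (an illustrative assumption, not the
    paper's exact equation). Without alpha: plain sum, as in the
    SEPREM w/o alpha ablation.
    """
    if alpha is None:
        return H + H_syn
    return (1.0 - alpha) * H + alpha * H_syn

H = np.ones((2, 4))          # toy contextual representation
H_syn = 2 * np.ones((2, 4))  # toy syntax-aware representation

mixed = integrate(H, H_syn, alpha=0.14)  # alpha in the learned 13-15% range
plain = integrate(H, H_syn)              # ablated variant: H + H_syn
# mixed entries: 0.86 * 1 + 0.14 * 2 = 1.14; plain entries: 3.0
```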
\begin{table}[!tp]
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\toprule
Datasets & Model & Performance & Values of $\alpha$ \\
\midrule
\multirow{2}{*}{Open Entity} & SEPREM & 79.06 & 0.1334 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 77.13 & - \\
\midrule
\multirow{2}{*}{FIGER} & SEPREM & 82.05 & 0.1428 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 79.54 & - \\
\midrule
\multirow{2}{*}{SearchQA} & SEPREM & 67.74 & 0.1385 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 66.31 & - \\
\midrule
\multirow{2}{*}{Quasar-T} & SEPREM & 53.18 & 0.1407 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 51.84 & - \\
\midrule
\multirow{2}{*}{CosmosQA} & SEPREM & 82.37 & 0.1357 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 81.06 & - \\
\midrule
\multirow{2}{*}{TACRED} & SEPREM & 72.42 & 0.1407 \\
\cmidrule(r){2-4} & SEPREM w/o $\alpha$ & 71.82 & - \\
\bottomrule
\end{tabular}
}
\caption{The model's performance and the corresponding values of the importance score $\alpha$ after fine-tuning on six public benchmark datasets. Performance is measured by either Mi-F1 or accuracy.}
\label{table:alpha}
\vskip -0.18in
\end{table}
\section{Conclusion}
In this paper, we present SEPREM, which leverages syntax information to enhance pre-trained models. To inject syntactic information, we introduce a syntax-aware attention layer and a newly designed pre-training task. Experimental results show that our method achieves state-of-the-art performance on six datasets. Further analysis shows that the proposed dependency distance prediction task performs better than the dependency head prediction task.
\section*{Acknowledgments}
We are grateful to Yeyun Gong, Ruize Wang and Junjie Huang for fruitful comments. We are obliged to Zijing Ou and Wenxuan Li for perfecting this article. We appreciate Genifer Zhao for beautifying the figures of this article.
Zenan Xu and Qinliang Su are supported by the National Natural Science Foundation of China (No. 61806223, 61906217, U1811264), Key R\&D Program of Guangdong Province (No. 2018B010107005), National Natural Science Foundation of Guangdong Province (No. 2021A1515012299). Zenan Xu and Qinliang Su are also supported by Huawei MindSpore.
\bibliographystyle{acl_natbib}
\section*{Introduction}
Many important phenomena that we would like to understand --- formation of public opinion, trending topics on social networks, movement of stock markets, development of cancer cells, the outbreak of epidemics, and collective computation in distributed systems --- are closely related to predicting large-scale behaviors in networks of locally interacting dynamic agents. Perhaps the most widely studied and mathematically intractable of such collective behaviors is the \textit{synchronization} of coupled oscillators (e.g., blinking fireflies, circadian pacemakers, BZ chemical oscillators), which has been an important subject of research in mathematics and various areas of science for decades \cite{strogatz2000kuramoto, acebron2005kuramoto}. Moreover, it is closely related to the \textit{clock synchronization} problem, which is essential in establishing shared notions of time in distributed systems and has enjoyed fruitful applications in many areas including wildfire monitoring, electric power networks, robotic vehicle networks, large-scale information fusion, and wireless sensor networks
\cite{dorflfer2012synchronization, nair2007stable, pagliari2010scalable}.
For a system of deterministic coupled oscillators (e.g., the Kuramoto model \cite{kuramoto2003chemical}), the entire forward dynamics (i.e., the evolution of phase configurations) is analytically determined by 1) the initial phase configuration and 2) the graph structure (see Figure \ref{fig:dataset_visualization_full}). In this paper, we are concerned with the fundamental problem of \textit{predicting whether a given system of coupled oscillators will eventually synchronize}, \commHL{using some information on the underlying graph or on the initial dynamics (that is, early-stage of the forward dynamics).} More specifically, we consider the following three types of synchronization prediction problems (see Figure \ref{fig:dataset_visualization_training}):
\begin{customquestion}{Q1}\label{Q1}
Given the initial dynamics and graph structure, can we predict whether the system will eventually synchronize?
\end{customquestion}
\begin{customquestion}{Q2}\label{Q2}
Given the initial dynamics and not knowing the graph structure, can we predict whether the system will eventually synchronize?
\end{customquestion}
\begin{customquestion}{Q3}\label{Q3}
Given the initial dynamics partially observed on a subset of nodes and possibly not knowing the graph structure, can we predict whether the whole system will eventually synchronize?
\end{customquestion}
Analytical characterization of synchronization would lead to a perfect algorithm for the synchronization prediction problems above. However, while a number of sufficient conditions \commHL{on graph topology \cite{eom2016concurrent,boccaletti2006complex,chowdhury2020effect, lyu2015synchronization} (e.g., complete graphs or trees)}, model parameters (e.g., large coupling strength) or on initial configuration (e.g., phase concentration into open half-circle) for synchronization are known, obtaining an analytic or asymptotic solution to the prediction question, in general, appears to be out of reach, \commHL{especially when these sufficient conditions for synchronization are not satisfied. Namely, we are interested in predicting the synchronization of coupled oscillators where the underlying graph\commHL{s are non-isomorphic} and the initial configuration is not confined within an open half-circle in the cyclic phase space.}
Since the global behavior of coupled oscillators is built on non-linear local interactions, the behavior of the system rapidly becomes intractable as the number of nodes increases and the topology of the underlying graphs becomes more diverse. To give a sense of the complexity of the problem, note that there are more than $10^{9}$ non-isomorphic connected simple graphs with $11$ nodes \cite{combinatorial_data}.
However, the lack of a general analytical solution does not necessarily preclude the possibility of successful prediction of synchronization. In this work, we propose a radically different approach to this problem that we call \textit{Learning To Predict Synchronization} (L2PSync), where we view the synchronization prediction problem as a binary classification problem for two classes of `synchronizing' and `non-synchronizing', \commHL{based on the fact that any given deterministic coupled oscillator system eventually synchronizes or converges to a non-synchronizing limit cycle.} \commHL{In this work, we consider three models of continuous and discrete coupled oscillators --- the Kuramoto model (KM) \cite{acebron2005kuramoto}, Firefly Cellular Automata (FCA) \cite{lyu2015synchronization}, and Greenberg-Hastings model (GHM) \cite{greenberg1978spatial}.}
\commHL{Utilizing a few basic statistics of the underlying graphs, our method can achieve perfect accuracy when there is a significant difference in the topology of the underlying graphs between the synchronizing and the non-synchronizing examples (see Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy}). When these graph statistics cannot separate the synchronizing and non-synchronizing classes well (e.g., when the graphs are generated from the same random graph model; see Tables \ref{table:datasets} and \ref{table:datasets_big}), pairing a few iterations of phase configurations from the initial dynamics with the graph statistics as input to the classification algorithms can lead to significant improvement in accuracy. Our methods far surpass the baseline given by the half-circle concentration principle in classical oscillator theory (see our Methods section). We also find that in almost all such settings, dropping the basic graph statistics and training our algorithms on only the initial dynamics achieves nearly the same accuracy as with the graph statistics. Furthermore, our methods are robust when using incomplete initial dynamics observed only on a few small subgraphs of large underlying graphs.}
\begin{figure*}[htpb]
\centering
\includegraphics[width=1\textwidth]{Figures/figure1.pdf}
\caption{Sample points in the 30-node dynamics dataset for synchronization prediction. The heat maps show phase dynamics on graphs beneath them, where colors represent phases and time is measured by iterations from bottom to top (e.g. $t=0$ to $t=25$). Each example is labeled as `synchronizing' if it synchronizes at iteration 1758 for the Kuramoto model (70 for FCA and GHM)
and `non-synchronizing' otherwise.
Synchronizing examples have mostly uniform colors in the top row. For training, only a portion of dynamics is used so that the algorithms rarely see a fully synchronized example (see Figure \ref{fig:dataset_visualization_training}).
}
\label{fig:dataset_visualization_full}
\end{figure*}
\vspace{-0.09cm}
\subsection*{Problem statement}
A graph $G=(V,E)$ consists of sets $V$ of nodes and $E$ of edges. Let $\Omega$ denote the \textit{phase space} of each node, which may be taken to be the circle $\mathbb{R}/2\pi \mathbb{Z}$ for continuous-state oscillators or the color wheel $\mathbb{Z}/\kappa \mathbb{Z}$, $\kappa\in \mathbb{N}$ for discrete-state oscillators. We call a map $X:V\rightarrow \Omega$ a \textit{phase configuration}, and say it is \textit{synchronized} if it takes a constant value across nodes (i.e., $X(v)=Const.$ for all $v\in V$). A \textit{coupling} is a function $\mathcal{F}$ that maps each pair $(G,X_{0})$ of graph and initial configuration $X_{0}:V\rightarrow \Omega$ deterministically to a \textit{trajectory} $(X_{t})_{t\ge 0}$ of phase configurations $X_{t}:V\rightarrow \Omega$. For instance, $\mathcal{F}$ could be the time evolution rule for the KM, FCA, or GHM. Throughout the paper, $\mathbf{1}(\cdot)$ denotes the indicator function. The main problem we investigate in this work is stated below:
\begin{description}
\item[$\bullet$] (Synchronization Prediction)
\textit{Fix parameters $n\in \mathbb{N}$, $T\gg r>0$, and coupling $\mathcal{F}$. Predict the following indicator function $\mathbf{1}(\text{$X_{T}$ is synchronized})$ given the initial trajectory $(X_{t})_{0\le t\le r}$ and optionally also with \commHL{statistics of} graph $G$.}
\end{description}
We remark that as $T$ tends to infinity, the indicator in the problem statement converges to the indicator function $\mathbf{1}(\text{$X_{t}$ is eventually synchronized})$, which aligns more directly with the initial questions \ref{Q1} and \ref{Q2}. However, determining whether a given system will never synchronize amounts to finding a non-synchronizing periodic orbit, which is computationally infeasible in general.
See Figure \ref{fig:dataset_visualization_training} for an illustration of the synchronization prediction problem.
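For concreteness, the following toy sketch illustrates a trajectory and the synchronization indicator for the Kuramoto coupling. This is a sketch only: it assumes the standard identical-frequency Kuramoto update (the precise model is eq. (1) in the SI), discretized by forward Euler.

```python
import numpy as np

def kuramoto_step(theta, A, K=1.0, dt=0.1):
    """One forward-Euler step of the identical-frequency Kuramoto model:
    dtheta_i/dt = K * sum_j A_ij * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]  # entry (i, j) = theta_j - theta_i
    return (theta + dt * K * (A * np.sin(diff)).sum(axis=1)) % (2 * np.pi)

def is_synchronized(theta, tol=1e-3):
    """A configuration is synchronized if all phases (nearly) agree,
    i.e., the magnitude of the order parameter mean(e^{i theta}) is ~1."""
    return abs(np.exp(1j * theta).mean()) > 1 - tol

# Path graph on 3 nodes; the initial phases are confined to an open
# half-circle, so the trajectory synchronizes (cf. the concentration
# principle in the Methods section).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
theta = np.array([0.0, 0.4, 0.8])
for _ in range(2000):
    theta = kuramoto_step(theta, A)
# is_synchronized(theta) -> True
```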
\subsection*{Related works}
\label{subsection:related_works}
There are a number of recent works incorporating machine learning methods to investigate problems on coupled oscillators or other related dynamical systems, which we briefly survey in this section and summarize in Table \ref{table:comparison}.
\commHL{In Fan et al.\cite{fan2021anticipating}, the authors are concerned with identifying the critical coupling strength at which a system of coupled oscillators undergoes a phase transition into synchronization, where the underlying graph consists of 2-4 nodes with a fixed topology (e.g., a triangle or a star with three leaves). Guth et al. \cite{guth2019machine} use binary classification methods with surrogate optimization (SO) in order to learn optimal parameters and predictors of extreme events. Their work is primarily concerned with learning whether or not intermittent extreme events will occur in various 1D or 2D partial differential equation models.
Similarly, Chowdhury et al. \cite{chowdhury2021extreme} utilize a long-short term memory (LSTM) \cite{hochreiter1997long} network to predict whether or not an extreme event will occur on globally coupled mean-field logistic maps on complete graphs. Thiem et al.\cite{Thiem2020emergent} use Feed-forward neural networks (FFNN) \cite{bishop2006pattern} to learn coarse-grained dynamics of Kuramoto oscillators and recover the classical order parameter. Biccari et al.\cite{Biccari2020stochastic} use gradient descent (GD) and the random batch method (RBM) to learn control parameters to enhance the synchronization of Kuramoto oscillators. Slightly less related work is Hefny et al.\cite{hefny2015supervised}, where the authors use hidden Markov models, LASSO regression, and spectral algorithms for learning lower-dimensional state representations of dynamical systems and apply their method to a knowledge tracing model for a dataset of students' responses to a survey. }
\begin{table}[htbp]
\centering
\begin{tabular}{c|cccccc}
\textit{References} & $\#$ nodes & $\#$ graphs &\# configs. & model & ML & Goal \\
\hline
\rule{0pt}{1.1\normalbaselineskip} Fan et al. \cite{fan2021anticipating} & 2-4 & 1 & 1 & Lorenz, KM & FFNN & Phase transition \\
\rule{0pt}{1.1\normalbaselineskip} Guth et al. \cite{guth2019machine} & N/A & N/A & 1 & 1D and 2D PDE & SO & Extreme events \\
\rule{0pt}{1.1\normalbaselineskip} Chowdhury et al.\cite{chowdhury2021extreme} & 200 & 1 & 1 & $\begin{matrix} \textup{Mean-field} \\ \textup{logistic map} \end{matrix}$ & LSTM & Extreme events \\
\rule{0pt}{1.1\normalbaselineskip} Thiem et al.\cite{Thiem2020emergent} & 1500-8000 & 1 & 2000 & KM & FFNN & Parameter estimation\\
\rule{0pt}{1.1\normalbaselineskip} Biccari et al.\cite{Biccari2020stochastic} & 10-1000 & 1 & 1 & KM & GD, RBM & Parameter estimation\\
\rule{0pt}{1.1\normalbaselineskip} Itabashi et al.\cite{itabashi2021evaluating} & 128-256 & 1 &100 & KM & TDA & $\begin{matrix} \textup{Synchronization}/ \\ \textup{Non-synchronization}/ \\ \textup{Chimera state} \end{matrix} $ \\
\hline
&&&& KM, & RF, GB, & \text{Synchronization}/ \\
This work & 15-600 & \textbf{2K-200K} & 1 & FCA, & FFNN, & \textup{Non-synchronization}\\
&&&& GHM & LRCN \\
\end{tabular}%
\caption{Comparison of settings in related works on learning coupled-oscillator dynamics using machine learning methods. Recent works in the table focus on learning features of dynamics on fixed graphs. \commHL{In contrast, we aim to classify the long-term dynamics of a given system on a diverse set of underlying graphs.} The column `\# configs.' refers to the number of distinct initial phase configurations considered for each graph in training.}
\label{table:comparison}
\end{table}
\commHL{ On the other hand, Itabashi et al.\cite{itabashi2021evaluating} consider classifying coupled Kuramoto oscillators according to their future dynamics from certain features derived (using topological data analysis (TDA)) from their early-stage dynamics (or `initial dynamics' in our terminology). As in the other references above, the underlying graph is fixed in each classification task (a complete graph with weighted edges, which may have two or four communities). But unlike in the references above, the initial configuration, instead of the model parameter, is varied to generate different examples on the same underlying graph. The authors observed that some long-term dynamical properties (e.g., multi-cluster synchronization) can be predicted by the derived features of the initial dynamics. This point is consistent with one of our findings that the first few iterations of the initial dynamics may contain crucial information in predicting the long-term behavior of coupled oscillator systems. }
\commHL{While sharing high-level ideas and approaches with the aforementioned works, our work has multiple distinguishing characteristics in the problem setting and approaches. First, in all of the aforementioned works, a dynamical system on a fixed underlying graph is considered. But in our setting, there are as many as 200K non-isomorphic underlying graphs and we seek to predict whether a given system of oscillators on highly varied or even unknown graphs will eventually synchronize or not. \commHL{Furthermore, we also consider the case when machine learning algorithms are trained on partial observation (e.g., initial dynamics restricted on some subgraphs). }
Second, ours is the only work to consider discrete models of coupled oscillators (e.g., FCA and GHM), whereas the literature considers only models with a continuous phase space (e.g., the KM). Third, ours is the only work to use the classical concentration principle from oscillator theory (a.k.a. the `half-circle concentration'; see our Methods section) as a rigorous benchmark for evaluating the efficacy of the employed machine learning methods. Finally, we employ various binary classification algorithms: Random Forest (RF) \cite{breiman2001random}, Gradient Boosting (GB) \cite{friedman2002stochastic}, Feed-forward Neural Networks (FFNN) \cite{bishop2006pattern}, and our own adaptation of Long-term Recurrent Convolutional Networks (LRCN) \cite{donahue2015long}.}
We remark that there are a number of cases where rigorous results are available for the question of predicting the long-term behavior of coupled oscillators on a graph $G$ and initial configuration $X_{0}$. For instance, the $\kappa=3$ instances of GHM and another related model called Cyclic Cellular Automata (CCA) \cite{fisch1990cyclic} have been completely solved \cite{gravner2018limiting}. Namely, given the pair $(G,X_{0})$, the trajectory $X_{t}$ synchronizes eventually if and only if the discrete vector field on the edges of $G$ induced from $X_{0}$ is conservative (see \cite{gravner2018limiting} for details). Additionally, the behavior of FCA on finite trees is also well-known: given a finite tree $T$ and $\kappa\in \{3,4,5,6\}$, every $\kappa$-color initial configuration on $T$ synchronizes eventually under $\kappa$-color FCA if and only if the maximum degree of $T$ is less than $\kappa$; for $\kappa\ge 7$, this phenomenon does not always hold \cite{lyu2015synchronization, lyu2016phase}. \commHL{This theoretical result on FCA was used in the experiment in Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy}}. Furthermore, there are a number of works on the clustering behavior of these models on the infinite one-dimensional lattice $\mathbb{Z}$ (FCA \cite{lyu2015synchronization,lyu2019persistence}, CCA \cite{fisch1990one, fisch1991cyclic, fisch1992clustering,lyu2019persistence}, and GHM \cite{lyu2019persistence, durrett1991some}).
\section*{Methods}
The pipeline of our approach is as follows: 1) fix a model of coupled oscillators; 2) generate a \textit{dynamics dataset} of a large number of non-isomorphic graphs with an even split of synchronizing and non-synchronizing dynamics; 3) train a selected binary classification algorithm on the dynamics dataset to classify each example (initial dynamics, with or without features of the underlying graph) into one of two classes, `synchronizing' or `non-synchronizing'; 4) validate the accuracy of the trained algorithms on fresh examples by comparing their predictions against the true long-term dynamics. We use the following classification algorithms: Random Forest (RF) \cite{breiman2001random}, Gradient Boosting (GB) \cite{friedman2002stochastic}, Feed-forward Neural Networks (FFNN) \cite{bishop2006pattern}, and our own adaptation of Long-term Recurrent Convolutional Networks (LRCN) \cite{donahue2015long}, which we call \textit{GraphLRCN} (further information such as implementation details and hyperparameters can be found in the SI).
As a baseline for our approach, we use a variant of the well-known ``concentration principle'' in the literature on coupled oscillators. Namely, regardless of the details of graph structure and model, synchronization is guaranteed if the phases of the oscillators are concentrated in a small arc of the phase space \textit{at any given time} (see the next subsection). This principle is applied at each configuration up to the training iteration used to train the binary classifiers.
For question \ref{Q3}, synchronization prediction from initial dynamics partially observed on subgraphs, and to reduce the computational cost of our methods for answering \ref{Q1} and \ref{Q2}, we propose an ``ensemble prediction'' algorithm (Algorithm \ref{algorithm:collective_predictor}) that scales our method up to large graphs by training on dynamics observed on multiple random subgraphs. Namely, suppose we are to predict the dynamics on some connected $N$-node graph, where the initial dynamics are observed only on a few small connected subgraphs of $n \ll N$ nodes. We first train a binary classification algorithm on the dynamics observed on those subgraphs and then aggregate the predictions from each subgraph (e.g., by majority vote) to obtain a prediction for the full dynamics on $N$ nodes.
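The aggregation step can be sketched as follows. This is a simplified stand-in for Algorithm \ref{algorithm:collective_predictor}: \texttt{predict\_one} is a hypothetical per-subgraph classifier, and the dummy thresholding classifier below is purely illustrative.

```python
def ensemble_predict(subgraph_dynamics, predict_one):
    """Majority vote over per-subgraph synchronization predictions.

    subgraph_dynamics: one observed initial dynamics per sampled subgraph.
    predict_one: classifier mapping a subgraph's dynamics to True
    ('synchronizing') or False ('non-synchronizing').
    """
    votes = [predict_one(d) for d in subgraph_dynamics]
    return sum(votes) > len(votes) / 2  # strict majority => synchronizing

# toy stand-in classifier: predict synchronization if the phase spread
# observed on the subgraph is small
dummy = lambda d: max(d) - min(d) < 1.0

full_prediction = ensemble_predict([[0.1, 0.3], [0.2, 0.9], [2.0, 0.0]], dummy)
# votes are [True, True, False], so full_prediction is True
```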
\subsection*{The concentration principle for synchronization and baseline predictor}
In the literature on coupled oscillators, there is a fundamental observation that concentration (e.g., into an open half-circle) of the initial phase of the oscillators leads to synchronization for a wide variety of models on arbitrary connected graphs (see, e.g., \cite[Lem 5.5]{lyu2018global}). This is stated in the following bullet point for the KM and FCA and we call it the ``concentration principle''. This principle has been used pervasively in the literature of clock synchronization \cite{nishimura2011robust, klinglmayr2012guaranteeing, proskurnikov2016synchronization, nunez2014synchronization} and also in multi-agent consensus problems \cite{moreau2005stability, papachristodoulou2010effects, chazelle2011total}.
\begin{description}
\item[$\bullet$] (Concentration Principle) \textit{Let $G$ be an arbitrary connected graph. For the Kuramoto model (see eq. (1) in SI) with identical intrinsic frequency and for FCA (see eq. (3) in SI), the dynamics on $G$ synchronize if all phases at any single time are confined in an open half-circle of the phase space $\Omega$. That is, if all states in the time-$t$ configuration $X_{t}$ are confined in an open half-circle for some $t\ge 1$, then the trajectory on $G$ eventually synchronizes.}
\end{description}
The `open half-circle' refers to any arc of length $<\pi$ for the continuous \commHL{phase} space $\Omega=\mathbb{R}/2\pi \mathbb{Z}$ and any interval of $<\kappa/2$ consecutive integers ($\textup{mod}\,\, \kappa$) for the discrete \commHL{phase} space $\Omega=\mathbb{Z}/\kappa \mathbb{Z}$. This is a standard fact in the literature; it follows from the observation that, under the half-circle condition, the couplings in the statement monotonically contract any initial phase configuration toward synchronization. It is not hard to see that the above half-circle concentration principle does not hold for GHM. Accordingly, for GHM, we say a phase configuration $X_{t}$ is \textit{concentrated} if $X_{t}$ is synchronized.
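For concreteness, the half-circle concentration test on the continuous phase space $\Omega=\mathbb{R}/2\pi\mathbb{Z}$ can be sketched as follows (a minimal illustration, not our exact implementation; the function name is our own, and the discrete case is analogous):

```python
import math

def is_half_circle_concentrated(phases, period=2 * math.pi):
    """Check whether all phases fit in an open arc of length < period/2.

    Equivalently, the largest empty circular gap between consecutive
    (sorted) phases must exceed period/2.
    """
    pts = sorted(p % period for p in phases)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + period - pts[-1])  # wrap-around gap
    return max(gaps) > period / 2
```

For example, the phases $\{0, 0.1, 1.0\}$ are concentrated, while the antipodal pair $\{0, \pi\}$ is not, since its arc has length exactly $\pi$ rather than strictly less than $\pi$.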
We now introduce the following baseline synchronization predictor: Given $(X_{t})_{0\le t \le r}$ and $T>r$,
\begin{description}
\item[$\bullet$] (Baseline predictor) \textit{Predict synchronization of $X_{T}$ if $X_{t}$ is concentrated for any $1\le t \le r$. Otherwise, flip a fair coin. }
\end{description}
Notice that the baseline predictor never predicts synchronization incorrectly if $X_{r}$ is concentrated. For non-concentrated cases, the baseline does not assume any knowledge and gives a completely uninformed prediction. Quantitatively, suppose we have a dataset where a proportion $\alpha$ of \commHL{the} samples are synchronizing \commHL{(in all our datasets, $\alpha=0.5$)}. Suppose we apply the baseline predictor using the first $r$ iterations of dynamics for each sample. Let $x=x(r)$ denote the proportion of synchronizing samples \commHL{that concentrate by iteration $r$ among all synchronizing samples.} Then the baseline predictor's accuracy is given by $x\alpha + (1-\alpha+(1-x)\alpha)/2=0.5+x\alpha/2$, where the term $x\alpha/2$ can be regarded as the gain obtained by using the concentration principle layered onto the uninformed decision.
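The accuracy formula above can be verified with a one-line computation (a sketch; the function name is ours):

```python
def baseline_accuracy(x, alpha):
    """Expected accuracy of the baseline predictor: a fraction x of the
    synchronizing samples (proportion alpha of the dataset) are already
    concentrated and thus predicted correctly; every remaining sample is
    decided by a fair coin."""
    return x * alpha + (1 - alpha + (1 - x) * alpha) / 2
```

With $\alpha=0.5$ this simplifies to $0.5 + x/4$; for instance, $x=0.1$ yields an accuracy of $0.525$.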
\subsection*{An illustrative example: the Kuramoto model on complete vs. ring; GHM on path vs. complete; FCA on tree vs. ring}
We first give a simple example to illustrate our machine learning approach to the synchronization prediction problem. More specifically, we create \commHL{datasets} of an equal number of synchronizing and non-synchronizing examples, where there is a significant difference in the topology of the underlying graphs between the synchronizing and the non-synchronizing examples. In such a setting, one can expect that knowing the basic graph features --- the number of edges, min/max degree, diameter, and the number of nodes --- will be enough to distinguish between synchronizing and non-synchronizing examples.
\begin{figure*}[htpb]
\centering
\includegraphics[width=1\textwidth]{Figures/figure2.png}
\caption{ Examples of synchronizing and non-synchronizing dynamics of Kuramoto, Greenberg-Hastings, and FCA oscillators on 30-node graphs with two topologies (complete vs. ring, path vs. complete, and tree vs. ring, respectively). The heat maps show phase dynamics on graphs beneath them, where colors represent phases and time is measured by iterations from bottom-to-top (e.g. $t=0$ to $t=70$). Synchronizing examples have uniform color in the top row. The horizontal bar indicates the `training iteration', which is the maximum number of iterations in the initial dynamics fed into the classification algorithm for prediction.
}
\label{fig:toy_graph_dynamics}
\end{figure*}
\commHL{For each of the coupled oscillator models of the KM, FCA, and GHM, we create a dataset that consists of 1K synchronizing examples and 1K non-synchronizing examples on 30-node graphs, where all 1K examples in each of the two classes share the same underlying graph topology. For the KM, the synchronizing and the non-synchronizing examples are on a complete graph and on a ring, respectively. For GHM (with $\kappa=5$), the synchronizing and the non-synchronizing examples are on a path and on a complete graph, respectively. Lastly, for FCA (with $\kappa=5$), the synchronizing and the non-synchronizing examples are on a tree with maximum degree at most four and on a ring, respectively. Our choice of these graph topologies reflects rigorously established results in the literature. Namely, it is well-known that coupled Kuramoto oscillators (with identical intrinsic frequencies) on a complete graph will always synchronize, and one can easily generate a non-synchronizing initial configuration for Kuramoto oscillators on a ring \cite{strogatz2000kuramoto,acebron2005kuramoto}. For GHM, in the SI, we prove that a $\kappa$-color (arbitrary $\kappa \geq 3$) GHM on a path of length $n$ will always synchronize by iteration $n+\kappa$, regardless of the initial configuration. Lastly, for FCA, it is known that FCA dynamics with $\kappa=5$ colors always synchronize on trees with maximum degree at most four \cite{lyu2015synchronization}. The results of our experiments for these three datasets are shown in Figure \ref{fig:toy}.}
\commHL{For the three datasets described above, FFNN achieves perfect accuracy in distinguishing synchronizing from non-synchronizing examples using only the following five basic statistics of the underlying graphs --- the number of edges, min/max degree, diameter, and the number of nodes --- as expected from the theoretical results mentioned above. Additionally, we hide the graph statistics completely and train FFNN only on the initial dynamics up to a variable training iteration. As we feed in more initial dynamics for training, FFNN quickly improves in prediction accuracy, far more rapidly than the baseline does. This indicates that FFNN may be quickly learning latent distinguishing features from the initial dynamics that are more effective than the half-circle concentration employed by the baseline predictor.}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{Figures/dynvfeat.png}
\caption{\commHL{Binary classification accuracies of synchronizing vs. non-synchronizing dynamics when the underlying graphs in each of these two classes share the same topology. For the Kuramoto model, we compare complete graphs (synchronizing) to rings (non-synchronizing), for the GHM model we compare paths (synchronizing) to complete graphs (non-synchronizing), and for FCA we compare trees with maximum degree 4 (synchronizing) to rings (non-synchronizing). FFNN (Features) uses FFNN as the binary classification algorithm with five basic graph features --- the number of edges, min/max degree, diameter, and the number of nodes --- as the input, which achieves perfect classification in all cases. FFNN (Dynamics) uses FFNN as the binary classification algorithm with a varying number of iterations of initial dynamics as input (specified on the horizontal axis), without any information about the underlying graph. The baseline predictor uses the concentration principle (see the main text for more details). Notice that FFNN (Dynamics) far surpasses the baseline.}
}
\label{fig:toy}
\end{figure}
\subsection*{Generating the dynamics datasets}
\commHL{In the example considered in Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy}, there was a clear distinction between the topology of the underlying graphs of the synchronizing and the non-synchronizing examples, and basic graph statistics yield perfect classification accuracy with FFNN as the choice of the binary classifier. In this section, we consider datasets that are much more difficult to classify based only on the same graph statistics. We do so by generating a large number of underlying graphs from the \textit{same} random graph model with the \textit{same} parameters. In this way, even though individual graphs realized from the random graph model may have different topologies, graph statistics such as edge density or maximum degree are concentrated around their expected values. This is in contrast to the underlying graphs in the three datasets of Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy}, where the graphs within each class are isomorphic to one another. For the datasets we generate in this way, which we describe in more detail shortly, classifying only with the basic graph statistics achieves an accuracy of 60-70\% (in comparison to 100\% as before).
}
\commHL{We generate a total of twelve datasets described in Tables \ref{table:datasets} and \ref{table:datasets_big} as follows.} Data points in each dataset consist of three components computed for a pair $(G,X_{0})$ of an underlying graph, $G=(V,E)$, and an initial configuration, $X_{0}:V\rightarrow \Omega$: 1) the first $r$ iterations of dynamics $(X_{t})_{0\le t \le r}$ (using either the KM, FCA, or GHM), 2) (optional) features of $G$ and $X_{0}$, and 3) the label that indicates whether $X_{T}$ is concentrated or not. We optionally include the following six features: \textit{number of edges}, \textit{min degree}, \textit{max degree}, \textit{diameter}, \textit{number of nodes}, and \textit{quartiles of initial phases in $X_{0}$}. We say a data point is `synchronizing' if the label is 1, and `non-synchronizing,' label 0, otherwise. Every dataset we generate contains an equal number of synchronizing and non-synchronizing examples, and the underlying graphs are all connected and pairwise non-isomorphic \commHL{(as opposed to the datasets in Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy})}.
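As an illustration, the six optional features can be computed with \texttt{networkx} roughly as follows (a sketch; the function name is our own, and the exact quartile convention used in the datasets is an assumption):

```python
import statistics
import networkx as nx

def data_point_features(G, X0):
    """Compute the six optional features of a data point (G, X0):
    number of edges, min degree, max degree, diameter, number of nodes,
    and the quartiles of the initial phases X0 (a dict node -> phase)."""
    degrees = [d for _, d in G.degree()]
    # statistics.quantiles with n=4 returns the three quartile cut points
    quartiles = statistics.quantiles(sorted(X0.values()), n=4)
    return [G.number_of_edges(), min(degrees), max(degrees),
            nx.diameter(G), G.number_of_nodes()] + quartiles
```

Note that \texttt{nx.diameter} requires a connected graph, which holds for every graph in our datasets.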
To generate a single $n$-node graph, we use an instance of the Newman-Watts-Strogatz (NWS) model \cite{newman2002random}, which originally has three parameters: $n$ (number of nodes), $p$ (shortcut edge probability), and $k$ (initial degree of nodes); we use the implementation in the \texttt{networkx} python package \cite{hagberg2008exploring}, with an added integer parameter $M$ (number of calls for adding shortcut edges). Namely, we start from a cycle of $n$ nodes, where each node is connected to its $k$ nearest neighbors. Then, independently $M$ times, we attempt to add a new edge between each initial non-edge $(u,v)$ with probability $p/(n-k-1)$. The number of new edges added in this process follows the binomial distribution with $\binom{n}{2} - \frac{nk}{2}$ trials and success probability $1 - \left(1-\frac{p}{n-k-1} \right)^{M}\approx pM/(n-k-1)$. This easily yields that the expected number of edges in our random graph model is $\frac{nk}{2} + \frac{n^{2}pM}{2(n-k-1)} + O(k)$.
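A possible implementation of this generator is sketched below (under our reading of the construction; the function name \texttt{nws\_variant} and the use of \texttt{watts\_strogatz\_graph} with rewiring probability $0$ to build the initial ring lattice are our own choices):

```python
import itertools
import random
import networkx as nx

def nws_variant(n, k, p, M, seed=None):
    """NWS model with an extra parameter M: start from a ring lattice in
    which each node is joined to its k nearest neighbors, then make M
    independent passes in which every initial non-edge (u, v) is added
    with probability p / (n - k - 1)."""
    rng = random.Random(seed)
    # p=0 in watts_strogatz_graph means no rewiring: a pure ring lattice
    G = nx.watts_strogatz_graph(n, k, 0, seed=seed)
    q = p / (n - k - 1)
    non_edges = [e for e in itertools.combinations(G.nodes, 2)
                 if not G.has_edge(*e)]
    for _ in range(M):
        for u, v in non_edges:
            if rng.random() < q:
                G.add_edge(u, v)
    return G
```

For instance, \texttt{nws\_variant(30, 4, 0.85, 1)} draws one 30-node graph (the value $k=4$ here is purely illustrative; the text does not fix $k$).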
\begin{table}[htbp]
\centering
\begin{tabular}{ccccccc}
\hline
\textit{Datasets} & $\dataset{KM}_{15}$ & $\dataset{KM}_{30}$ & $\dataset{FCA}_{15}$ & $\dataset{FCA}_{30}$ & $\dataset{GHM}_{15}$ & $\dataset{GHM}_{30}$ \\
\hline
\# nodes & 15 & 30 & 15 & 30 &15 & 30 \\
avg of \# edges & 29.65 &57.49 & 23.91 & 47.45 & 22.88& 48.18 \\
std of \# edges & 3.42 & 5.67 & 2.34& 4.15 & 2.35& 4.11 \\
avg diameter & 4.32 & 7.00 & 4.32 & 6.01 & 4.30 & 6.12 \\
std of diameter & 0.68 & 1.29 & 0.67 & 0.95 & 0.66 & 0.98\\
$r$ (training iter) & 126 & 126 & 25 & 25 & 25 & 25 \\
$T$ (prediction iter) & 1758 & 1758 & 70 & 70 & 70 & 70 \\
\# Sync. & 100K & 40K & 100K & 40K & 100K & 40K\\
\# Nonsync. & 100K & 40K & 100K & 40K & 100K & 40K\\
\hline
\end{tabular}%
\caption {
Dynamics datasets generated for the three models with two node counts. In each dataset, all graphs are connected and non-isomorphic. $\#$ Sync. denotes the number of examples in the dataset such that the phase configuration $X_{T}$ at iteration $T$ is concentrated.}
\label{table:datasets}
\end{table}
\begin{figure*}[htpb]
\centering
\includegraphics[width=1\textwidth]{Figures/figure4.pdf}
\caption{Sample points in the 30-node training data set for synchronization prediction. The full dataset consists of 40K synchronizing and 40K non-synchronizing examples on 30-node connected non-isomorphic graphs, with dynamics on them for each of the three models (the KM, FCA, and GHM; \commHL{see} Table \ref{table:datasets}).
}
\label{fig:dataset_visualization_training}
\end{figure*}
The NWS model of random graphs is known to exhibit the `small world property', that is, to have relatively small mean path lengths while retaining a relatively high local clustering coefficient. This is a widely observed property of many real-world complex networks \cite{newman2002random} --- as opposed to the commonly studied Erd\H{o}s-R\'enyi graphs. It is known that many models of coupled oscillators have a hard time synchronizing when the underlying graph is a ring, as the discrepancy between oscillators tends to form traveling waves circulating on the ring \cite{lee2008experiments}. On the other hand, it is observed that coupled oscillator systems on dense graphs are relatively easy to synchronize. For instance, Kassabov, Strogatz, and Townsend recently showed that Kuramoto oscillators with an identical natural frequency on a connected graph where each node is connected to at least 3/4 of all nodes globally synchronize for almost all initial configurations \cite{kassabov2021sufficiently}. Since we intend to generate both synchronizing and non-synchronizing examples to form a balanced dataset, it is natural to use a random graph model that sits somewhere between rings and dense graphs. In this sense, NWS is a natural choice of a random graph model for our purpose. Using other models, such as Erd\H{o}s-R\'enyi, \commHL{for} generating the balanced dataset of synchronizing and non-synchronizing examples as in Tables \ref{table:datasets} and \ref{table:datasets_big} \commHL{is} computationally very demanding.
In Table \ref{table:datasets}, we record the average edge counts and their standard deviations for the random graphs used to simulate our models. These characteristics, the average and the standard deviation of the edge counts, reflect the clustering of edges in the graph, which intuitively affects the propagation of information or states in our cellular automata.
Also in Table \ref{table:datasets}, we give a summary of the six datasets on the three models for two node counts $n=15,30$, with 200K and 80K examples, respectively, which we refer to as $\dataset{KM}_{n}$, $\dataset{FCA}_{n}$, and $\dataset{GHM}_{n}$ for $n=15,30$. Underlying graphs are sampled from the NWS model with parameters $n\in \{15,30\}$, $M=1$, and $p=0.85$ for the KM and $p=0.65$ for FCA and GHM. In all cases, we generated about 400K examples and subsampled 200K and 80K examples for $n=15$ and $n=30$, respectively, so that there is an equal number of synchronizing and non-synchronizing examples, with all underlying graphs pairwise non-isomorphic. The dataset sizes were chosen due to memory constraints imposed by the algorithms used. To give a glimpse of the datasets, we provide visual representations: in Figure \ref{fig:dataset_visualization_training}, we show five synchronizing and non-synchronizing examples in $\dataset{KM}_{30}$, $\dataset{FCA}_{30}$, and $\dataset{GHM}_{30}$. \\
\begin{table}[htbp]
\centering
\begin{tabular}{ccccccc}
\hline
\textit{Datasets} & $\dataset{FCA}_{600} $& $\dataset{FCA}_{600}'$ & $\dataset{FCA}_{600} ''$ & $\dataset{KM}_{600}$ & $\dataset{KM}_{600}'$ & $\dataset{KM}_{600} ''$\\
\hline
\# nodes & 600 & 600 & 300-600 & 600 & 600 & 300-600\\
std of \# nodes & 0 & 0 & 86.60 & 0 & 0 & 86.60 \\
avg of \# edges &2985.53&4749.24&2799.49 &1051.08&1109.85&757.89\\
std of \# edges &37.85&2371.72& 1461.08&19.66&79.47&56.81\\
$r$ & 50 & 50 & 50& 400 & 400 & 400 \\
$T$ & 600 & 600 & 600 & 1758 & 1758 & 1758\\
\# Sync. & 1K & 1K & 1K & 1K & 1K & 1K \\
\# Nonsync. & 1K & 1K & 1K & 1K & 1K & 1K \\
\hline
\end{tabular}%
\caption{Dynamics datasets generated for FCA and Kuramoto on 600 nodes ($\dataset{FCA}_{600}$, $\dataset{KM}_{600}$, $\dataset{FCA}_{600}'$, $\dataset{KM}_{600}'$) and on 300-600 nodes ($\dataset{FCA}_{600}''$, $\dataset{KM}_{600}''$). In each dataset, all graphs are connected and non-isomorphic. $\#$ Sync. denotes the number of examples in the dataset such that the phase configuration $X_{T}$ at iteration $T$ is concentrated. Here $r$ and $T$ refer to the training and prediction iterations defined in Table \ref{table:datasets}.}
\label{table:datasets_big}
\end{table}
We also generated six dynamics datasets with a larger number of nodes for the FCA and Kuramoto dynamics, as described in Table \ref{table:datasets_big}. The fixed-node datasets $\dataset{FCA}_{600}$, $\dataset{FCA}_{600}'$, $\dataset{KM}_{600}$, and $\dataset{KM}_{600}'$ each consist of 1K synchronizing and 1K non-synchronizing examples of FCA and Kuramoto dynamics on non-isomorphic graphs of 600 nodes. The underlying graphs for the fixed-node FCA datasets are generated by the NWS model with parameters $n=600$, $p=0.6$, and $M=5$ for $\dataset{FCA}_{600}$, and with $p\sim\textup{Normal}(\mu_X,0.04)$, $\mu_X\sim\textup{Uniform}(0.32,0.62)$, and $M\sim \textup{Uniform}(\{1,2,\dots,20\})$ calls for $\dataset{FCA}_{600}'$. Similarly, to generate the fixed-node Kuramoto datasets $\dataset{KM}_{600}$ and $\dataset{KM}_{600}'$, we used the NWS model with parameters $n=600$, $p=0.15$, and $M=5$ for $\dataset{KM}_{600}$, and with $p\sim\textup{Normal}(\mu_X,0.04)$, $\mu_X\sim\textup{Uniform}(0.32,0.62)$, and $M\sim \textup{Uniform}(\{1,2,\dots,20\})$ calls for $\dataset{KM}_{600}'$. Consequently, the numbers of edges in the graphs from $\dataset{KM}_{600}$ and $\dataset{FCA}_{600}$ are sharply concentrated around their means, whereas $\dataset{KM}_{600}'$ and $\dataset{FCA}_{600}'$ have much greater overall variance in the number of edges (see Table \ref{table:datasets_big}). For the varied-node datasets $\dataset{FCA}_{600}''$ and $\dataset{KM}_{600}''$, we kept $p$ and $M$ distributed in the same way as for $\dataset{KM}_{600}'$ and $\dataset{FCA}_{600}'$, but additionally varied the number of nodes as $n\sim \textup{Uniform}(\{300,301,\dots,600\})$. In this case, both the number of nodes and the number of edges have relatively greater variation compared to the other datasets.
We omit the GHM from this experiment because its dynamics are extremely prone to non-synchronization when the network has more cycles \cite{durrett1993asymptotic}. Hence, for these large graphs, almost all \commHL{GHM dynamics} will be non-synchronizing. \commHL{For instance, for the same set of networks we used to generate the $\dataset{FCA}_{600}$ dataset (consisting of 40K networks of 600 nodes), none of the GHM dynamics synchronized. Hence, there is no meaningful} classification problem to discuss, since synchronizing examples under this model are extremely sparse.
\subsection*{Scaling up by learning from \commHL{subgraphs}}
In this subsection, we discuss a way to extend our method for the dynamics prediction problem simultaneously in two directions: 1) larger graphs and 2) a variable number of nodes. The idea is to train our dynamics predictor on subsampled dynamics of large graphs (specified as induced subgraphs and induced dynamics), and to combine the local classifiers to make a global prediction. In the algorithm below, $f(X_{T}):=\mathbf{1}(\textup{$X_{T}$ is concentrated})$, and if $X_{t}$ is a phase configuration on $G=(V,E)$ and $G_{i}=(V_{i},E_{i})$ is a subgraph of $G$, then $X_{t}|_{G_{i}}$ denotes the restriction $v\mapsto X_{t}(v)$ for $v\in V_{i}$.
\begin{algorithm}[H]
\caption{Ensemble Prediction of Synchronization}
\label{algorithm:collective_predictor}
\begin{algorithmic}[1]
\State \textbf{Input:} Dynamics dataset on graphs with $\ge N$ nodes; \,\, Test point $(G', (X'_{t})_{0\le t \le r})$;
\State \textbf{Parameters:} $n_{0}\le N$ (size of subgraphs),\,\, $k_{\textup{train}}$, $k_{\textup{test}}$ ($\#$ of subgraphs) ,\,\, $\theta$ (prediction threshold)
\State \textbf{Subsample Dynamics:}
\State \quad For each data point $(G,(X_{t})_{0\le t \le r}, f(X_{T}))$:
\State \quad\quad Sample $n_{0}$-node connected subgraphs $G_{1},\dots,G_{k_{\textup{train}}}$ of $G$;
\State \qquad Form restricted triples $(G_{i}, (X_{t}|_{G_{i}})_{0\le t\le r}, f(X_{T}))$
\State \textbf{Train Dynamics Predictor:}
\State \quad Train a binary classifier on the restricted triples;
\State \textbf{Ensemble Prediction:}
\State \quad Sample $n_{0}$-node connected subgraphs $G_{1}',\dots,G_{k_{\textup{test}}}'$ of $G'$;
\State \quad $\hat{f}:=$ mean of predictions of $f(X_{T})$ on subdynamics on $G_{i}'$'s
\State \textbf{Output:} $\mathbf{1}(\hat{f}>\theta)$
\end{algorithmic}
\end{algorithm}
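A minimal Python sketch of the subsampling and ensemble steps of Algorithm \ref{algorithm:collective_predictor} follows (all names and the BFS-style subgraph sampler are illustrative assumptions; the algorithm does not fix a particular sampling procedure):

```python
import random
import networkx as nx

def sample_connected_subgraph(G, n0, rng):
    """Grow a connected induced subgraph with n0 nodes by randomized
    breadth-first expansion from a random seed node."""
    start = rng.choice(sorted(G.nodes))
    chosen = [start]
    frontier = set(G[start])
    while len(chosen) < n0 and frontier:
        v = rng.choice(sorted(frontier))
        chosen.append(v)
        frontier |= set(G[v])
        frontier -= set(chosen)
    return G.subgraph(chosen)

def ensemble_predict(G, dynamics, classifier, n0, k_test, theta=0.5, seed=None):
    """Average the classifier's 0/1 predictions over k_test random
    connected n0-node subgraphs and threshold the mean at theta."""
    rng = random.Random(seed)
    votes = []
    for _ in range(k_test):
        H = sample_connected_subgraph(G, n0, rng)
        # restrict each configuration X_t (a dict node -> phase) to H
        restricted = [{v: X_t[v] for v in H} for X_t in dynamics]
        votes.append(classifier(H, restricted))
    return int(sum(votes) / len(votes) > theta)
```

Here \texttt{classifier} stands for any trained binary predictor that maps a subgraph and its restricted initial dynamics to a 0/1 synchronization prediction; with \texttt{theta=0.5} the aggregation reduces to a majority vote.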
\section*{Results}
Across all three models of coupled oscillators and all selected binary classification algorithms, we find that our method for addressing \ref{Q1} and \ref{Q2} on average shows at least a 30\% improvement in prediction performance over the concentration-prediction baseline for dynamics on 30-node graphs. In other words, our results indicate that the concentration principle applied at each configuration is too conservative in predicting synchronization, and there might be a generalized concentration principle that uses the whole initial dynamics, which our machine learning methods seem to learn implicitly.
Using Algorithm \ref{algorithm:collective_predictor} for \ref{Q3}, we achieve an accuracy of over 85\% for predicting the commonly studied Kuramoto model on 600-node graphs using only four 30-node subgraphs, whereas the corresponding baseline gets $55\%$ accuracy. In particular, we observe that the baseline with locally observed initial dynamics tends to misclassify non-synchronizing examples as synchronizing, as locally observed dynamics can concentrate on a short time scale while the global dynamics do not.
\subsection*{Synchronization prediction accuracy for 15-30 node graphs}
We apply the four binary classification algorithms to the six 15-30 node datasets described in Table \ref{table:datasets} in order to learn to predict synchronization. Each experiment uses initial dynamics up to a variable number of training iterations $r$ that is significantly less than the prediction iteration $T$, and the goal is to predict whether each example in the dataset is synchronized at the unseen time $T$. We also experiment with and without the additional graph information described in Table \ref{table:datasets} in order to investigate the main questions \ref{Q1} and \ref{Q2}, respectively. We plot prediction accuracy using the four classification algorithms (RF, GB, FFNN, LRCN) and the baseline predictor versus the training iteration $r$, with and without the graph features. The problem of synchronization prediction becomes easier as we increase the training iteration $r$, as indicated by the baseline prediction accuracy in Figure \ref{fig:full_plot_accuracy}. For instance, for $\dataset{KM}_{30}$, there is no trivially synchronizing example at iteration $0$, but about 10$\%$ of the synchronizing examples will have become phase-concentrated by iteration $r=25$.
Now we discuss the results in Figure \ref{fig:full_plot_accuracy}. \commHL{As intended, for the six datasets in Table \ref{table:datasets}, classifying only with the basic graph statistics achieves an accuracy of 60-70\%, in comparison to 100\% as in the experiments of Figures \ref{fig:toy_graph_dynamics} and \ref{fig:toy}. This is shown by the gray horizontal lines in Figure \ref{fig:full_plot_accuracy}, which indicate the classification accuracy of FFNN trained only with the same five basic graph statistics used before. This suggests that we may need to use more information about the data points in order to obtain improved classification accuracy. One way to proceed is to use more graph statistics such as clustering coefficients\cite{watts1998collective}, modularity\cite{newman2013spectral}, assortativity \cite{allen2017two}, eigenvalues of the graph Laplacian \cite{zhang2011laplacian}, etc. }
\commHL{Instead, aiming to investigate Question \ref{Q1}, we proceed by additionally using dynamics information, meaning that we include the initial dynamics $(X_{t})_{0\le t \le r}$ up to a varying number of training iterations $r$. These results are shown in the first and the third columns of Figure \ref{fig:full_plot_accuracy}. At $r=0$, the input consists of the five graph statistics we considered before --- \textit{number of edges}, \textit{min degree}, \textit{max degree}, \textit{diameter}, \textit{number of nodes} --- as well as the initial configuration $X_{0}$ with its quartiles. In all cases, all four binary classifiers trained with initial dynamics significantly outperform the \commHL{concentration principle baseline.} The classifiers RF, GB, and FFNN show similar performance in all cases. On the other hand, GraphLRCN in some cases outperforms the other classifiers, especially so for GHM on 30 nodes. For instance, when $r=20$, $10$ and $4$ for $\dataset{KM}_{30}$, $\dataset{FCA}_{30}$ and $\dataset{GHM}_{30}$, respectively, GraphLRCN achieves a prediction accuracy of $73\%$ (baseline $55\%$, $1.25\%$ concentrates), $84\%$ (baseline $52\%$, $1\%$ concentrates) and $96\%$ (baseline $50\%$, $0\%$ concentrates), respectively.
}
\commHL{
Now that we have seen that initial dynamics can be used along with the basic graph features to gain a significant improvement in classification performance, we take a step further and see how accurate we can be by only using the initial dynamics, in relation to Question \ref{Q2}. That is, we drop all graph-related features from the input, and train the binary classifiers only with the initial dynamics up to varying iteration $r$. The results are reported in the second and the fourth columns in Figure \ref{fig:full_plot_accuracy}. Since now the classifiers are not given any kind of graph information at all, one might expect a significant drop in the performance. However, surprisingly, except for the dataset $\dataset{KM}_{30}$, the classification accuracies are almost unchanged after dropping graph features altogether from training.
}
\begin{figure*}[htpb]
\centering
\includegraphics[width=1\linewidth]{Figures/figure5_new.pdf}
\vspace{-0.5cm}
\caption{Synchronization prediction accuracies of four machine learning algorithms for the KM, FCA, and GHM coupled oscillators synchronization. For each of the six datasets in Table \ref{table:datasets}, we used 5-fold cross-validation with an 80/20 train/test split. Accuracy is measured by the ratio of the number of correctly classified examples to the total number of examples. Algorithms for the second and the fourth columns are trained only with dynamics up to various iterations $r$ indicated on the horizontal axes, whereas the other two columns also use additional graph features. \commHL{$r=0$ in the ``with Features'' columns indicates the input consisting of the initial phase configuration $X_0$ and its \textit{quartiles} paired with the five graph statistics of \textit{number of edges}, \textit{min degree}, \textit{max degree}, \textit{diameter}, \textit{number of nodes}. The gray dashed line represents training FFNN only on the five graph statistics. Hence its accuracy is constant with respect to the varying number of training iterations. } }
\label{fig:full_plot_accuracy}
\end{figure*}
\commHL{
In addition, note that for $\dataset{FCA}_{15}$ and $\dataset{FCA}_{30}$, training all four classifiers with only $r=3$ and $r=5$ training iterations, respectively, and without any graph features, produces a prediction accuracy of at least $70\%$. In this case, the baseline achieves only $50\%$, meaning that no synchronizing example is phase-concentrated within the given number of training iterations. Similarly, for $\dataset{KM}_{15}$, training all four classifiers with only $r=5$ training iterations, without any graph features, produces a prediction accuracy of about $77\%$. In this case, the baseline achieves only $52\%$, so only $10\%$ of all synchronizing examples are phase-concentrated by iteration $r$ (see the formula for baseline accuracy in the section on the baseline predictor). This indicates that there may be some evidentiary condition for synchronization in the initial dynamics which is \textit{different} from half-circle concentration. Further investigation is needed to pinpoint exactly what such an evidentiary condition for synchronization might be. }
We give several additional remarks on the experiments reported in Figure \ref{fig:full_plot_accuracy}. First, we see that the $\dataset{KM}_{30}$ dataset is adversely affected by dropping the graph features: at $r=0$, we only achieve 50\% accuracy. This is further discussed in the Discussion section, along with a systematic analysis of the statistical significance of the features for the classification accuracy.
Second, the GraphLRCN binary classifier is offset with respect to the other algorithms, as it is fed dynamics information encoded into the adjacency matrix of the underlying graph (see eq. (5) in SI), so it cannot be trained with the initial dynamics information alone. Oftentimes, GraphLRCN begins below the classical algorithms and then outperforms them at intermediate iterations, but by the final training iteration there is only a negligible difference in performance across the classifiers.
Third, one may be concerned that the few selected graph features we use for training may not give enough information for the given classification tasks, and that including further graph features might change the results significantly. We remark that GraphLRCN uses the entire graph adjacency matrix as part of the input (see the SI), so technically it uses every possible graph feature during training. In Figure \ref{fig:full_plot_accuracy}, it does show somewhat improved performance on 30-node graphs with the FCA and the GHM dynamics, but virtually identical results for the Kuramoto dynamics. On the other hand, its performance is diminished on 15-node graphs. This may be because the training data for 15-node graphs is not rich enough to properly train GraphLRCN, which combines convolutional and recurrent neural networks and is a significantly more complex classification model than the others we use.
Fourth, \commHL{in order to investigate the impact of the number of graphs used in the datasets on our experiments}, we perform an additional experiment on the 30-node case, in which we subsample 10K and 40K \commHL{balanced datasets} from our 80K sets to see if we get performance similar to Figure \ref{fig:full_plot_accuracy}, \commHL{where the full datasets of 80K examples are used.} As seen in Figure \ref{fig:volume} in the SI, most methods \commHL{show} significantly better accuracy \commHL{than} the baseline, with the exception of the \commHL{GB} and \commHL{FFNN} method\commHL{s} for \dataset{KM}$_{30}$ without graph features.
We see, however, that the \commHL{accuracies of the different methods become almost identical as we increase the number of training iterations.}
We note that with only a subsampled portion of the datasets being used, we see larger differences in accuracy between the methods themselves, which contrasts with Figure \ref{fig:full_plot_accuracy}, where all methods were rather saturated and displayed very similar accuracy across all iterations.
\subsection*{Synchronization prediction accuracy for 300-600 node graphs}
In this section, we investigate Question \ref{Q3}, which is to predict synchronization based only on local information observed on select subgraphs. Note that this is a more difficult task than \ref{Q2}: not only may we lack information about the underlying graph, but we also may not have observed the entire phase configuration. For example, the dynamics may appear to be synchronized at a local scale (e.g., on 30-node connected subgraphs), while there are still large-scale waves being propagated and the global dynamics are not synchronized. Nevertheless, we can use the ensemble prediction method (Algorithm \ref{algorithm:collective_predictor}) to combine decisions based on each subgraph to predict the synchronization of the full graph.
\begin{figure*}[!htpb]
\centering
\includegraphics[width=0.85\linewidth]{Figures/figure6.png}
\caption{Accuracy curves for predicting synchronization of the Kuramoto model on 600-node graphs from dynamics observed from $k\in\{1,2,4,8\}$ subgraphs of 30 nodes. All plots observe the performance of both the ensemble machine learning (solid) and baseline (dashed) accuracies over increasing amounts of training iterations $r\in\{0,80,240,320,400\}$. The first row shows results using only dynamics whereas the second row includes both the dynamics and graph features. Maximum accuracies for using $k$ subgraphs are given by `$k$-sub: Acc. (Baseline Acc.)'}
\label{fig:ensemble_plot_accuracy1}
\end{figure*}
In Figures \ref{fig:ensemble_plot_accuracy1} and \ref{fig:ensemble_plot_accuracy2}, we report the synchronization prediction accuracy of the ensemble predictor (Algorithm \ref{algorithm:collective_predictor}) on datasets $\dataset{FCA}_{600}$, $\dataset{FCA}_{600}'$, $\dataset{FCA}_{600}''$, and $\dataset{KM}_{600}$, $\dataset{KM}_{600}'$, $\dataset{KM}_{600}''$, respectively, described in Table \ref{table:datasets_big}. We used Algorithm \ref{algorithm:collective_predictor} with $n_{0}=30$ (number of nodes in the subgraphs) and $k\in \{1,2,4,8\}$ (number of subgraphs). The binary classification algorithm we used is FFNN. We chose FFNN because, as seen in Figure \ref{fig:full_plot_accuracy}, there is no significant difference in accuracy between the methods used in the 30 node case. Furthermore, we cannot use the GraphLRCN model, as this method relies on knowing the dynamics and adjacency matrices of the underlying subgraphs. As the baseline, we use a slight modification of the baseline predictor from the 15-30 node classification task. Namely, we combine all phase values observed on all $k$ subgraphs and predict synchronization if they satisfy the concentration principle; otherwise we flip a fair coin. Note that the concentration of phases observed on subgraphs does \textit{not} imply synchronization of the full system, as the full phase configuration may not be concentrated.
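The baseline and the ensemble combination rule described above can be sketched in a few lines of Python. This is a simplified illustration only: the function names and the majority-vote aggregation over per-subgraph predictions are our assumptions, not the exact implementation of Algorithm \ref{algorithm:collective_predictor}.

```python
import numpy as np

def baseline_concentration(phases, arc=np.pi):
    """Baseline: predict synchronization (1) if all observed phases fit
    inside a half circle (an arc of width pi); otherwise flip a fair coin."""
    s = np.sort(np.mod(phases, 2 * np.pi))
    # Circular gaps between consecutive sorted phases; a gap of at least
    # 2*pi - arc means all phases fit inside an arc of width `arc`.
    gaps = np.diff(np.concatenate([s, [s[0] + 2 * np.pi]]))
    if gaps.max() >= 2 * np.pi - arc:
        return 1
    return int(np.random.randint(2))

def ensemble_predict(subgraph_features, classifiers):
    """Combine per-subgraph binary predictions by majority vote
    (a hypothetical stand-in for the ensemble combination step)."""
    votes = [clf.predict(x.reshape(1, -1))[0]
             for clf, x in zip(classifiers, subgraph_features)]
    return int(np.mean(votes) >= 0.5)
```

Since a single trained FFNN is applied to each of the $k$ subgraphs, `classifiers` may simply hold $k$ references to one model.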
\begin{figure*}[!htpb]
\centering
\includegraphics[width=0.85\linewidth]{Figures/figure7.png}
\caption{Accuracy curves for predicting synchronization of 5-color FCA on 600-node graphs from dynamics observed from $k\in\{1,2,4,8\}$ subgraphs of 30 nodes. See the caption of Figure \ref{fig:ensemble_plot_accuracy1} for details.}
\label{fig:ensemble_plot_accuracy2}
\end{figure*}
So far we have been using the metric of accuracy for our synchronization prediction task, defined as the ratio of correctly predicted synchronizing and non-synchronizing examples to the total number of examples in the dataset. This metric can be low in multiple ways: for example, an algorithm may be overly conservative and miss many synchronizing examples, or, in the opposite case, it may incorrectly classify many non-synchronizing examples. In order to provide better insight into these aspects, we also report the performance of our method (Algorithm \ref{algorithm:collective_predictor}) and the baseline in terms of precision and recall. Here, \textit{precision} is formally defined as the proportion of positive classifications that are correct, or $\frac{TP}{TP+FP}$, where $TP$ is ``true positive'' and $FP$ is ``false positive.'' In our problem, this corresponds to the proportion of synchronization predictions that were truly correct. \textit{Recall} is formally defined as the proportion of true labels in the data that are accurately identified, or $\frac{TP}{TP+FN}$, where $FN$ is ``false negative.'' With respect to our prediction problem, recall measures how well a given algorithm, ensemble or baseline, correctly identifies synchronization behavior when presented with synchronizing data.
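As a concrete illustration of the two metrics (a minimal sketch, independent of our codebase), with the label 1 standing for `synchronizing':

```python
def precision_recall(y_true, y_pred):
    """Compute precision = TP/(TP+FP) and recall = TP/(TP+FN)
    for binary labels (1 = synchronizing, 0 = non-synchronizing)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A conservative predictor that rarely outputs `synchronizing' can keep precision high while recall collapses, which is exactly the behavior we observe in the baseline below.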
Our accuracy results for both the ensemble method and baseline are presented in Figures \ref{fig:ensemble_plot_accuracy1} and \ref{fig:ensemble_plot_accuracy2} for the Kuramoto and FCA models, respectively. In these figures, the first column represents the results for the datasets $\{\dataset{KM,FCA}\}_{600}$ with a fixed number of nodes (600) and relatively smaller variation of edge counts (std $\approx 20, \,38$, respectively); the second column for the datasets $\{\dataset{KM,FCA}\}'_{600}$ with a fixed number of nodes (600) and relatively larger variation of edge counts (std $\approx 80,\, 2371$, respectively); and the third for the datasets $\{\dataset{KM,FCA}\}''_{600}$ with a varied number of nodes (std $\approx 86$) and relatively larger variation of edge counts (std $\approx 1461, \,56$, respectively). The first row of each figure reports prediction accuracies using exclusively dynamics data, and the second row utilizes both dynamics and graph features. In the SI, Figures \ref{fig:ensemble_plot_accuracy3} and \ref{fig:ensemble_plot_accuracy4} show the recall and precision curves of the ensemble and baseline methods, with the same row and column layout as the accuracy figures. For the Kuramoto data in Figure \ref{fig:ensemble_plot_accuracy1}, we applied the ensemble and baseline algorithms cumulatively up to training iterations $r=0, 80, 160, 240, 320$, and $400$; for the FCA data in Figure \ref{fig:ensemble_plot_accuracy2}, up to training iterations $r=0, 10, 20, 30, 40$, and $50$ (note that using $r=0$ means fitting the prediction algorithms at the initial coloring). We additionally remark that these curves were averaged over 30 train-test splits with 80\% training and 20\% testing.
Across all datasets considered, we see that our ensemble method consistently outperforms the baseline method in accuracy at training iterations $r=400$ for Kuramoto and $r=50$ for FCA. For example, across all Kuramoto datasets and feature subsets, by the last iteration $r=400$ the ensemble method on a single subgraph outperforms the baseline algorithm on eight subgraphs; the best accuracy the baseline algorithm achieves is 60.42\%, compared to 91.96\% for the ensemble method. For FCA, the baseline accuracy is 70.29\% compared to the best ensemble method score of 85.79\%, both on eight subgraphs. The recall and precision plots in Figures \ref{fig:ensemble_plot_accuracy3} and \ref{fig:ensemble_plot_accuracy4} in the SI give a more detailed explanation. Namely, the ensemble method significantly outperforms the baseline in recall by at least 35\%, whereas it performs relatively worse in precision than the baseline by at most $10\%$ for both Kuramoto and FCA with $k=8$ subgraphs, except for the dataset $\dataset{KM}_{600}$. This means the baseline is `too conservative' in the sense that it misses a large number of synchronizing examples. From this, we deduce that a large number of synchronizing examples exhibit phase concentration over subgraphs much later in the dynamics, making early detection through phase concentration difficult. On the contrary, the high recall scores of the ensemble method indicate that our method can still detect most of the synchronizing examples using only local information observed on the selected subgraphs. To elaborate, the baseline only determines phase concentration at a single point in time, whereas the ensemble method is able to learn the whole variation of the dynamics up to iteration $r$.
Furthermore, comparing training on dynamics only versus dynamics \textit{and} graph features, the inclusion of graph features hardly improves the maximum accuracy, recall, or precision.
Finally, for both the Kuramoto model and FCA, we observe that high variation of the node and edge counts boosts all performance metrics of accuracy, recall, and precision. For example, by iteration $r=400$ the recall for varied edge counts is considerably higher than for fixed edge counts (recall rates of 0.8755 and 0.9759 for 8 subgraphs). Furthermore, the performance gain from introducing a larger variation of node and/or edge counts is significantly larger for the Kuramoto model than for FCA. We speculate that a larger variation of node and edge counts within the dataset implies a better separation between the synchronizing and the non-synchronizing examples in the space of initial dynamics. We remark that the performance gain here is not due to a better separation between the classes in terms of the graph features, as can be seen by comparing the first and the second rows of all figures (Figures \ref{fig:ensemble_plot_accuracy1}, \ref{fig:ensemble_plot_accuracy2}, and Figures \ref{fig:ensemble_plot_accuracy3} and \ref{fig:ensemble_plot_accuracy4} in the SI).
\iffalse
\begin{figure}[htpb]
\centering
\includegraphics[width=1\linewidth]{Figures/600prime.PNG}
\caption{(Top) Histogram for edge counts of data points in $\dataset{FCA}_{600}'$. (Bottom) Scatterplot of data points in $\dataset{FCA}_{600}''$ with respect to node and edge counts.}
\label{fig:FCA'_edge_histogram}
\end{figure}
\fi
\iffalse
\begin{figure}[htpb]
\centering
\includegraphics[width=1\linewidth]{Figures/histogram_larger_set.png}
\caption{Empirical distribution of iterations until concentration for the synchronizing examples in datasets $\dataset{FCA}_{600}$ (top), $\dataset{FCA}_{600}'$ (middle), and $\dataset{FCA}_{600}''$ (bottom). }
\label{fig:iteration_historgram_larger_set}
\end{figure}
\fi
\section*{Discussion}
\label{subsection:gini}
In Figure \ref{fig:full_plot_accuracy}, observe that not using the additional graph features as input (column 4) decreases the prediction accuracy from $70\%$ to $50\%$ for the case of \dataset{KM}$_{30}$ at initial training iteration $r=0$, but there is no such significant difference for the discrete models \dataset{FCA}$_{30}$ and \dataset{GHM}$_{30}$. In fact, across all experiments we perform in this work (those reported in Figures \ref{fig:full_plot_accuracy}, \ref{fig:ensemble_plot_accuracy1}, \ref{fig:ensemble_plot_accuracy2} and Figures \ref{fig:ensemble_plot_accuracy3} and \ref{fig:ensemble_plot_accuracy4} in the SI), this is the only instance in which including the graph features during training affects the prediction accuracy.
In order to explain why this is the case, we compare the statistical significance of all the features (including the initial coloring) we use for the prediction task. We do so by computing the Gini indices of all features, repeating the prediction experiment over 300 train/test splits of the datasets \dataset{KM}$_{30}$, \dataset{FCA}$_{30}$, and \dataset{GHM}$_{30}$ using GB as the binary classifier. The analysis using GB is representative of all other binary classifiers, since there are negligible differences in their performance in Figure \ref{fig:full_plot_accuracy}.
As mentioned before, the procedure works by fitting random subsets of features and iteratively growing decision trees; it also records what is known as the Gini index \cite{hartmann1982application} over each consecutive partition through multiple training iterations. As decision trees split the feature space, the Gini index measures total variance across class labels (in our case, synchronizing vs.\ non-synchronizing) over each partition. Under the supposition that synchronization can be modeled through our graph feature data using the gradient boosting method, observing the mean decrease in the Gini index across decision trees allows us to infer feature importance for synchronization over different models and graphs (see Figure \ref{fig:giniboxplot}). See Hartmann et al.\ \cite{hartmann1982application} for more discussion of GB and computing the Gini index.
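This analysis can be sketched with scikit-learn, whose gradient boosting classifier exposes impurity-based feature importances of this kind (the data below is a synthetic stand-in for our feature matrices; collecting the per-split importances this way yields boxplots as in Figure \ref{fig:giniboxplot}):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for our feature matrix: column 0 determines the
# binary label, while columns 1-2 are uninformative noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
# Mean impurity decrease per feature, normalized to sum to 1; repeating
# this over many train/test splits gives a distribution per feature.
importances = clf.feature_importances_
```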
Interestingly, the discrete models of FCA and GHM place significantly greater importance on color statistics such as the initial quartile colorings, and less importance on graph features such as diameter and minimum and maximum degree. Note that the initial color statistics can be computed directly from the initial dynamics even at training iteration $r=0$, which explains why there is no significant difference in prediction accuracy for the discrete models in Figure \ref{fig:full_plot_accuracy}. On the contrary, the Kuramoto model puts greater importance on graph features such as diameter and number of edges rather than the initial color statistics, and such graph features are not available from the information given by the initial dynamics at training iteration $r=0$. From this, we can see why our algorithms show low prediction accuracy in the case of $\dataset{KM}_{30}$. In addition, we note that there is only a negligible difference in prediction accuracy for $\dataset{KM}_{15}$ in Figure \ref{fig:full_plot_accuracy}. We speculate that this is because for $\dataset{KM}_{15}$, as we see in Table \ref{table:datasets}, having only 15 nodes does not allow for significant variation in the diameter and the edge counts of the graphs in the dataset.
Lastly, we remark that the synchronization behavior of the Kuramoto model on complete graphs is well-understood in the literature on dynamical systems
\cite{strogatz2000kuramoto, kuramoto2003chemical, acebron2005kuramoto}. In our experiments, we observed that graphs with small diameters tend to be synchronized, and this aligns with the literature since graphs with the same number of nodes but with smaller diameters are closer to complete graphs.
\begin{figure}[!hbt]
\centering
\includegraphics[width=1\linewidth]{Figures/figure8.png}
\caption{Boxplots of Gini index values sampled from the gradient boosting procedure. Color information appears to be very important according to the distribution of Gini index values in both of the discrete cellular automata models, FCA and GHM, while diameter overwhelmingly appears to have the greatest importance for the Kuramoto model.}
\label{fig:giniboxplot}
\end{figure}
\section*{Conclusion}
Predicting whether a given system of coupled oscillators on an arbitrary underlying graph will synchronize is a relevant yet analytically intractable problem in a variety of fields. In this work, we offered an alternative approach by viewing this problem as a binary classification task, where each data point, consisting of initial dynamics and/or statistics of the underlying graph, needs to be classified into the two classes of `synchronizing' and `non-synchronizing' dynamics, depending on whether the given system eventually synchronizes or converges to a non-synchronizing limit cycle. We generated large datasets with non-isomorphic underlying graphs, where classification using only basic graph statistics is challenging. In this setting, we found that pairing a few iterations of the initial dynamics with the graph statistics as the input to the classification algorithms can lead to significant improvement in accuracy, far exceeding what is known by the half-circle concentration principle in classical oscillator theory. More surprisingly, we found that in almost all such settings, dropping the basic graph statistics and training our algorithms with only the initial dynamics achieves nearly the same accuracy. Finally, we have also shown that our methods scale well to large underlying graphs by using incomplete initial dynamics observed only on a few small subgraphs.
Drawing conclusions from our machine learning approaches to the synchronization prediction problem, we pose the following hypotheses:
\begin{description}
\item[$\bullet$] The entropy of the dynamics of coupled oscillators may decay rapidly in the initial period, to the point that the uncertainty of the future behavior due to the unknown graph structure becomes insignificant.
\item[$\bullet$] The concentration principle applied at any given time is too conservative in predicting synchronization, and there might be a generalized concentration principle that uses the whole initial dynamics, which our machine learning methods seem to learn implicitly.
\end{description}
Given that our machine learning approach is able to achieve high prediction accuracy, we suspect that there may be some analytically tractable characterizations on graphs paired with corresponding initial dynamics signaling eventual synchronization or not, which we are yet to establish rigorously. As mentioned at the end of the Related Works section, previously known characterizing conditions include the initial vector field on the edges induced by the initial color differential
for the 3-color GHM and CCA \cite{gravner2018limiting}, as well as the number of available states being strictly less than the maximum degree of underlying trees for FCA \cite{lyu2015synchronization, lyu2016phase}. Designing similar target features into datasets and training binary classification algorithms could guide the further analytic discovery of such conditions for the coupled oscillator models considered in this work.
Furthermore, even though we have focused on predicting only two classes of long-term behavior, synchronizing and non-synchronizing dynamics, our method can readily be extended to predicting an arbitrary number of classes of long-term behaviors. For instance, one can consider the $\kappa$-state voter model on graphs, where the interest would be the final dominating color. In such circumstances, one can train $\kappa$-state classification machine learning algorithms on datasets of non-isomorphic graphs. We can also consider extending our method to predict synchronization on a network based on parameter control. For instance, we can train on many different trajectories on a single graph using different intrinsic frequencies in the Kuramoto model, and learn to predict what range of intrinsic frequencies promotes synchronization.
Finally, a more ambitious task beyond long-term dynamic behavior quantified by a single metric is the potential extension of our methods to full time-series and graph state regression. In other words, if each node in the graph represents an individual in an arbitrary social network, can we predict the sentiment level for a given topic at any given time $t$ for every single individual in that particular social network? One can again generate large overarching social networks and run many simulations of sentiment dynamics with many possible edge configurations between individuals (for example, measured by the number of mutual friends or likes/shares of posts on social media). The ultimate goal would be a framework for learning to predict, with precision, entire trajectories of complex dynamical systems.
\section*{Acknowledgments}
This work is supported by NSF grant \href{https://www.nsf.gov/awardsearch/showAward?AWD_ID=2010035&HistoricalAwards=false}{DMS-2010035}. We are also grateful for partial support from the Department of Mathematics and the Physical Sciences Division at UCLA. We also thank Andrea Bertozzi and Deanna Needell for support and helpful discussions. JV is partially supported by Deanna Needell's grant NSF BIGDATA DMS \#1740325.
\section*{Data Availability}
The code for the main algorithms used during the current study is available in the repository \url{https://github.com/richpaulyim/L2PSync}. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
\small{
\bibliographystyle{amsalpha}
\section{Introduction}
This article studies the emergence of the bipolar drift-diffusion system \\
\begin{equation} \label{BDD1}
\begin{dcases}
\rho_t=\nabla \cdot \Big(\rho \nabla\frac{\delta \mathcal{E}}{\delta \rho}\Big) \\
n_t=\nabla \cdot \Big(n\nabla\frac{\delta \mathcal{E}}{\delta n}\Big)\\
-\Delta \phi = \rho - n
\end{dcases} \end{equation}
\begin{equation*}
\rho \nabla \frac{\delta \mathcal{E}}{\delta \rho} \cdot \nu = n\nabla\frac{\delta \mathcal{E}}{\delta n} \cdot \nu = \frac{\partial \phi}{\partial \nu} = 0, \ \ \ \text{on} \ [0,T[ \times \partial \Omega
\end{equation*} \\
as a relaxation limit of the bipolar Euler-Poisson system: \\
\begin{equation} \label{BEP1}
\begin{dcases}
\rho_t + \nabla \cdot (\rho u) = 0 \\
(\rho u)_t + \nabla \cdot (\rho u \otimes u) = -\frac{1}{\varepsilon} \rho \nabla \frac{\delta \mathcal{E}}{\delta \rho} -\frac{1}{\varepsilon} \rho u \\
n_t + \nabla \cdot (n v) = 0 \\
(nv)_t + \nabla \cdot (nv \otimes v) = -\frac{1}{\varepsilon} n \nabla\frac{\delta \mathcal{E}}{\delta n} -\frac{1}{\varepsilon} nv \\
-\Delta \phi = \rho - n
\end{dcases}
\end{equation}
\begin{equation*}
u \cdot \nu = v \cdot \nu = \frac{\partial \phi}{\partial \nu} = 0, \ \ \ \text{on} \ [0,T[ \times \partial \Omega
\end{equation*} \\
in the space-time domain $]0,T[ \times \Omega$, where $T>0$ is a fixed time horizon and $\Omega$ is a smooth bounded domain of $\mathbb{R}^d$ with smooth boundary $\partial \Omega$, where $d \in \mathbb{N} \setminus \{1,2 \}$.
The systems are expressed using the formalism of Euler flows in \cite{GLT}, based on the functional
\begin{align}
\label{bipen}
\mathcal{E}(\rho,n) \coloneqq \int_\Omega &h_1(\rho)+h_2(n)+ \tfrac{1}{2}|\nabla \phi|^2dx,
\\
\label{poisson}
-\Delta \phi &= \rho -n.
\end{align}
There is a coupling of the two densities via the Poisson equation \eqref{poisson}, and
$\dfrac{\delta \mathcal{E}}{\delta \rho}$, $\dfrac{\delta \mathcal{E}}{\delta n} $ stand for the functional derivatives of \eqref{bipen} which are given by
\begin{equation} \label{bipderiv}
\begin{aligned}
\dfrac{\delta \mathcal{E}}{\delta \rho}(\rho,n) = h_1^\prime(\rho) + \phi \, , \quad
\dfrac{\delta \mathcal{E}}{\delta n}(\rho,n) = h_2^\prime(n) - \phi.
\end{aligned}
\end{equation}
The models \eqref{BDD1} and \eqref{BEP1} (subject to \eqref{bipen}-\eqref{poisson}) describe two charged fluid systems interacting through
an electrostatic potential, and are basic models for applications in semiconductor devices or plasma physics, \cite{transportjungel,markowich},\cite[Chapter 3, Subsection 3.3.7]{chenFF}.
Our objective is to describe the
relation between the two models, thus extending the framework of convergence developed in \cite{gasdynamics} for a single fluid system.
The technical tool for the comparison is a relative energy identity for the bipolar fluid models considered here.
The functions $h_1,h_2$ represent the internal energies of the fluids, and the electrostatic potential $\phi$ is obtained from the fluid densities $\rho,n$ via the elliptic equation $-\Delta \phi = \rho -n$. The solution of the Poisson equation is expressed as $$\phi(t,x) = \big(N*(\rho - n)\big)(t,x) \coloneqq \int_\Omega N(x,y) \big(\rho(t,y) - n(t,y) \big)dy,$$ and its spatial gradient is understood as
$$\nabla \phi(t,x) = \big( \nabla_x N*(\rho - n)\big)(t,x) \coloneqq \int_\Omega \nabla_x N(x,y) \big(\rho(t,y) - n(t,y) \big)dy,$$
where $N$ is the Neumann function \cite{carloskenig}.
Using the symmetry of $N,$ one derives the formulas \eqref{bipderiv} for the functional derivatives
$\frac{\delta \mathcal{E}}{\delta \rho}$, $\frac{\delta \mathcal{E}}{\delta n} $.
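For the reader's convenience, we sketch the computation for the $\rho$-derivative. Integrating by parts (the boundary term vanishes since $\partial \phi / \partial \nu = 0$) and using the Neumann representation of $\phi$,
\begin{equation*}
\int_\Omega \tfrac{1}{2}|\nabla \phi|^2 \, dx = \tfrac{1}{2}\int_\Omega \phi\,(\rho-n) \, dx = \tfrac{1}{2}\int_\Omega \int_\Omega N(x,y)\,(\rho-n)(x)\,(\rho-n)(y) \, dx \, dy,
\end{equation*}
and the symmetry $N(x,y)=N(y,x)$ shows that a perturbation $\rho \to \rho + \delta \rho$ changes this quadratic term, to first order, by $\int_\Omega \phi \, \delta \rho \, dx$; together with $\frac{\delta}{\delta \rho}\int_\Omega h_1(\rho)\,dx = h_1^\prime(\rho)$, this yields $\frac{\delta \mathcal{E}}{\delta \rho} = h_1^\prime(\rho) + \phi$.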
Introducing \eqref{bipderiv} to \eqref{BEP1} leads to \eqref{BEP},
where $p_1,p_2$ are the pressures connected to the internal energies via the usual thermodynamic formulas (\ref{thermoconsistency}).
The formal relaxation limit of \eqref{BEP1} is the system \eqref{BDD1}; establishing this limit is the objective
of the present work.
\par
Relaxation problems arise in physics and chemistry, and from a mathematical viewpoint they have been analysed in several contexts.
Compensated compactness methods have been used to perform the relaxation limit of single-species hydrodynamic models towards a drift-diffusion equation in one and three spatial dimensions \cite{natalini,lattanzio}. We refer to \cite{DD1996} for an interesting analysis leading to existence of weak solutions for the bipolar Euler-Poisson in one-space dimension.
\par
The relative energy method is used here to perform this limiting process for strong solutions of \eqref{BDD1} in several space dimensions. This approach was successful for the relaxation limit in single-species fluid models \cite{gasdynamics,carrillo}, as well as for certain (weakly coupled through friction) multicomponent systems \cite{xiaokai}.
The relative energy method provides an efficient mathematical mechanism for stability analysis and establishing limiting processes; see \cite{dafermus2} for early developments, \cite{diffusiverelax3,gasdynamics, lattanziothanos2013} and references therein for applications to diffusive relaxation.
Here, the bipolar fluid models are considered in a bounded domain, which is closer to the actual physical situation and requires handling the boundary conditions. No-flux boundary conditions are applied to the velocities for (\ref{BEP1}), to the fluxes for (\ref{BDD1}), and to the electric field for both systems.
\par
In order to compare a solution of (\ref{BEP1}) with a solution of (\ref{BDD1}), one calculates the evolution of a relative energy functional;
the formal calculation is presented in subsection \ref{sec3}. The main convergence result is stated in section \ref{sec4}, and the convergence analysis is
carried out in section \ref{sec5}.
This comparison is carried out between a dissipative weak solution of (\ref{BEP1}) and a classical solution of (\ref{BDD1}) bounded away from vacuum; the
precise hypotheses are stated in section \ref{sec4}.
The solution of (\ref{BDD1}) is regarded as an approximate solution of (\ref{BEP1}), and the relative energy identity in the relevant regularity class
is established in Proposition \ref{relativeentropy}.
The technical part amounts to bounding the error terms in the relative energy identity.
The term requiring attention is the one associated with the electric field. Due to the antisymmetry of the electric charges and the fact that the velocities of the fluids are distinct, one cannot reproduce the argument of \cite{gasdynamics} to simplify this electric field term. The desired bound is reached using results on Riesz potentials \cite{stein}
and Neumann functions \cite{carloskenig}. A Gronwall inequality then yields the relaxation convergence as a stability result.
The latter is the main result of this work, Theorem \ref{mainresult}, and shows that if a strong solution of (\ref{BDD1}) is bounded away from vacuum and
the initial data converge at the initial time, then this convergence is preserved for all times $t \in [0,T[.$
\section{Bipolar fluid models}\label{sec:bipolardd}
The systems of equations considered in this article describe the dynamics of fluids formed by charged particles. Such models are common in semiconductor devices (electrons and holes), or in modeling of plasmas, and play a significant role in various technological contexts
related to semiconductors or plasma physics. Both systems can be derived from the semi-classical bipolar Boltzmann model \cite{transportjungel}.
We introduce the monotone increasing pressure functions
$
p_1, p_2 \in C^2(]0, +\infty[) \cap C([0,+\infty[)$ which satisfy $p_i^\prime(r) > 0$ for $r > 0$ and $i=1,2,$ and are connected to the internal energy functions $h_1, h_2 \in C^3(]0,+\infty[)\cap C([0,+\infty[)$ through the thermodynamic consistency relations
\begin{equation} \label{thermoconsistency}
r h_i^{\prime \prime}(r)=p_i^{\prime}(r), \quad r h_i^{\prime}(r)=p_i(r)+h_i(r),
\end{equation}
for $r > 0$ and $i=1,2.$
Observe that $h_i^{\prime \prime}(r) > 0$ for $r>0$ and $i=1,2$ corresponds to the monotonicity of the pressures.
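A standard example satisfying \eqref{thermoconsistency} is the isentropic $\gamma$-law:
\begin{equation*}
p_i(r) = r^{\gamma}, \qquad h_i(r) = \frac{r^{\gamma}}{\gamma-1}, \qquad \gamma > 1,
\end{equation*}
for which one checks directly that $r h_i^{\prime\prime}(r) = \gamma r^{\gamma-1} = p_i^{\prime}(r)$ and $r h_i^{\prime}(r) - h_i(r) = r^{\gamma} = p_i(r)$.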
\subsection{Bipolar drift-diffusion}
For the energy \eqref{bipen}-\eqref{poisson}, the system (\ref{BDD1}) is composed of two drift-diffusion equations for the densities, coupled with a Poisson equation for the electrostatic potential:
\begin{equation} \label{BDD}
\begin{cases}
\rho_t =\nabla \cdot \big( \nabla p_1(\rho) +\rho \nabla \phi \big)
\\
n_t = \nabla \cdot \big( \nabla p_2(n) - n \nabla \phi \big)
\\
-\Delta \phi = \rho - n.
\end{cases} \end{equation} \par
No-flux boundary conditions are considered for the fluxes and for the gradient of the electrostatic potential,
that is,
\begin{equation} \label{boundarycondBDD}
( \nabla p_1 (\rho) + \rho \nabla \phi ) \cdot \nu = ( \nabla p_2 (n) - n \nabla \phi ) \cdot \nu = \frac{\partial \phi}{\partial \nu}=0 \ \text{on} \ [0,T[ \times \partial \Omega, \ \ \ \int_{\partial \Omega} \phi \ dx = 0.
\end{equation}
The condition $\int_{\partial \Omega} \phi \ dx = 0$ is a normalization condition serving to fix the constant in the electrostatic potential determined by
the Poisson equation with Neumann boundary conditions (see the properties of the Neumann function in subsection \ref{sec:Neumann}).
System (\ref{BDD}) is provided with non-negative initial data $(\bar{\rho}_0, \bar{n}_0)$ that satisfy
\begin{equation} \label{initialdcm2}
\int_\Omega \bar{\rho}_0 \ dx = \int_\Omega \bar{n}_0 \ dx = {\bar M} < +\infty.
\end{equation}
From the structure of the first two equations of system (\ref{BDD}) one observes that condition (\ref{initialdcm2}) is formally preserved for all $t \in [0,T[.$ \par
Drift-diffusion equations are commonly used to model semiconductor devices \cite{markowich}, and existence theories under various boundary conditions applicable to the semiconductor setting can be found in
\cite{GajewskiKroeger, jungel3}.
Drift-diffusion equations incorporate a gradient flow structure induced by the Wasserstein distance \cite{otto}, and its role in the
bipolar drift-diffusion model is studied in \cite{monsaingeon}. In the one-dimensional case, \cite{DD1996} presents a theory of weak solutions
to a bipolar drift-diffusion system as the limit of a scaled sequence of entropy weak solutions of a bipolar Euler-Poisson system, while \cite{DDfrancesco}
studies the long-time asymptotics.
\subsection{Bipolar Euler-Poisson}\label{sec:bipolar}
The bipolar Euler-Poisson system (\ref{BEP1}) describes the motion of two-species charged isentropic fluids subjected to an electric field.
The system is formed by a pair of continuity equations for the densities, two momentum equations with friction, and a coupling Poisson equation for the electric field. The friction term is responsible for a damping force that gives rise to energy dissipation. \par
For a theory of existence of weak solutions to a bipolar Euler-Poisson system in the one-dimensional case, we refer to \cite{DD1996}. There, compensated compactness is used to establish the existence of weak entropy solutions as the limit of a numerical approximation based on a modified fractional step Lax-Friedrichs scheme, and it is also proved that these solutions satisfy $L^\infty$ and $L^2$ bounds.
\par
Regarding the structure of the momentum equations of system (\ref{BEP1}), observe that the frictional coefficient $1/\varepsilon$ also multiplies the internal energy and electric field terms. From the bipolar Boltzmann-Poisson model with a Lenard-Bernstein collision operator \cite{transportjungel,lenard,markowich}, one formally derives the following system (see the appendix for details)
\begin{equation} \label{EP2}
\begin{dcases}
\rho_t + \nabla \cdot (\rho u) = 0 \\
(\rho u)_t + \nabla \cdot (\rho u \otimes u) + \nabla p_1(\rho) = - \rho \nabla \phi -\frac{1}{\tau} \rho u \\
n_t + \nabla \cdot (n v) = 0 \\
(n v)_t + \nabla \cdot (n v \otimes v)+\nabla p_2(n) = n \nabla \phi - \frac{1}{\tau} nv \\
-\Delta \phi = \rho - n,
\end{dcases}
\end{equation}
where $\rho$ and $n$ represent the densities of the fluids, $\rho u$ and $nv$ the momenta, $\phi$ stands for the electrostatic potential, and $\tau$ is the collision time.
The formal limit of this system as $\tau \to 0$ is trivial in the hyperbolic scale.
In the assumed Eulerian frame of reference, the collision time is usually much smaller than the observational time.
For this reason one considers the time scaling $\partial / \partial t \to \tau \partial / \partial t$ (the so-called diffusion scaling), so that the observational time is measured in multiple units of the collision time. A change of scale is also applied for the velocities $u^\prime = \dfrac{ u}{\tau},$ $v^\prime = \dfrac{v}{\tau}$. After dropping the primes ($u^\prime \to u$, $v^\prime \to v$) and setting $\varepsilon = \tau^2$ one obtains the system
\begin{equation} \label{BEP}
\begin{dcases}
\rho_t + \nabla \cdot (\rho u) = 0 \\
(\rho u)_t + \nabla \cdot (\rho u \otimes u) + \frac{1}{\varepsilon} \nabla p_1(\rho) = - \frac{1}{\varepsilon} \rho \nabla \phi -\frac{1}{\varepsilon} \rho u \\
n_t + \nabla \cdot (n v) = 0 \\
(n v)_t + \nabla \cdot (n v \otimes v)+\frac{1}{\varepsilon} \nabla p_2(n) = \frac{1}{\varepsilon} n \nabla \phi - \frac{1}{\varepsilon} nv \\
-\Delta \phi = \rho - n,
\end{dcases}
\end{equation}
\\
which is system (\ref{BEP1}) thanks to (\ref{bipderiv}) and (\ref{thermoconsistency}).
The collision time squared $\tau^2=\varepsilon$ is called the momentum relaxation time of system (\ref{BEP}). The limit of system (\ref{BEP}) as $\varepsilon \to 0$ is called the relaxation, overdamped or high-friction limit.
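As a consistency check, the scaling can be traced through the first momentum equation of (\ref{EP2}): applying $\partial / \partial t \to \tau \, \partial / \partial t$ and substituting $u = \tau u^\prime$ gives
$$
\tau^2 (\rho u^\prime)_t + \tau^2 \nabla \cdot (\rho u^\prime \otimes u^\prime) + \nabla p_1(\rho) = - \rho \nabla \phi - \rho u^\prime,
$$
and dividing by $\tau^2 = \varepsilon$ recovers the momentum equation for $\rho$ in (\ref{BEP}). The continuity equations are invariant under this scaling, since each of their terms carries exactly one factor of $\tau$.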
System (\ref{BEP}) is prescribed with no-flux boundary conditions for the velocities and for the electric field. Precisely, the boundary conditions are
\begin{equation} \label{boundarycondBEP}
u \cdot \nu = v \cdot \nu = \frac{\partial \phi}{\partial \nu} = 0 \ \; \; \text{on} \ [0,T[ \times \partial \Omega, \ \ \ \int_{\partial \Omega} \phi \ dx = 0,
\end{equation}
where $\nu$ is an outer normal vector to $\partial \Omega$. Under this setting the fluids do not exit the set $\Omega$ and the system is electrically insulated.
For the initial datum $(\rho_0, u_0, n_0, v_0)$, we assume that $\rho_0,n_0$ are non-negative and satisfy
\begin{equation} \label{initialdcm}
\int_\Omega \rho_0 \ dx = \int_\Omega n_0 \ dx = M < +\infty,
\end{equation}
while $u_0,v_0$ satisfy the no-flux condition at the boundary.
The continuity equations together with \eqref{boundarycondBEP} imply that
condition (\ref{initialdcm}) is preserved for all times $t \in [0,T[.$
\subsection{Relative energy identity for the bipolar fluid system} \label{sec3}
In this section, a relative energy identity for solutions of (\ref{BEP}) is derived. This identity, expression \eqref{REBEPEvol}, is produced by an exact formal calculation using the abstract formalism presented in \cite{GLT}. Then, in Proposition \ref{relativeentropy}, we outline an argument producing the relative energy calculation between a weak dissipative solution of (\ref{BEP}) and a strong and bounded away from vacuum solution of (\ref{BDD}). \par
The potential energy functional that generates the bipolar fluid system is
$$
\begin{aligned}
\mathcal{E}=\mathcal{E}(\rho,n) &= \int_\Omega h_1(\rho) + h_2(n) + \tfrac{1}{2}|\nabla \phi|^2 dx
\\
&= \int_\Omega h_1(\rho) + h_2(n) + \tfrac{1}{2} (\rho -n) \big ( N \ast (\rho - n) \big ) dx.
\end{aligned}
$$
Using \eqref{thermoconsistency} one can readily see
\begin{equation} \label{rhopstress}
-\rho \nabla \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) = \nabla \cdot S_1(\rho) - \rho \nabla \big(N * (\rho - n) \big),
\end{equation}
\begin{equation} \label{npstress}
-n \nabla \frac{\delta \mathcal{E}}{\delta n}(\rho,n) = \nabla \cdot S_2(n) + n \nabla \big(N * (\rho - n) \big),
\end{equation} \\
where $S_1(\rho) = -p_1(\rho)I,$ $S_2(n) = -p_2(n)I$ are the pressure stresses. \par
To develop the relative energy identity for (\ref{BEP}), some preliminary formulas are needed.
Let $\varphi$ be a vector valued test function, and write the weak forms of the identities (\ref{rhopstress}) and (\ref{npstress})
\begin{equation} \label{weakrhopstress}
\big< \frac{\delta \mathcal{E}}{\delta \rho} (\rho,n), \nabla \cdot (\rho \varphi) \big> = - \int_\Omega S_1(\rho) : \nabla \varphi \ dx + \int_\Omega \big(N*(\rho - n) \big) \nabla \cdot (\rho \varphi)dx,
\end{equation}
\begin{equation} \label{weaknpstress}
\big< \frac{\delta \mathcal{E}}{\delta n} (\rho,n), \nabla \cdot (n \varphi) \big> = - \int_\Omega S_2(n) : \nabla \varphi \ dx - \int_\Omega \big(N*(\rho - n) \big) \nabla \cdot (n \varphi)dx.
\end{equation} \\
Taking the directional derivative of \eqref{weakrhopstress} in the direction $\rho$ and of \eqref{weaknpstress} in the direction $n$ we respectively obtain
\begin{equation} \label{2ndformularho}
\begin{aligned}
\big< \big< \frac{\delta^2 \mathcal{E}}{\delta \rho^2}(\rho,n), \ & \big(\nabla \cdot (\rho \varphi), \psi \big) \big> \big> + \big< \frac{\delta \mathcal{E}}{\delta \rho} (\rho,n), \nabla \cdot (\psi \varphi) \big> = \\
=& - \int_\Omega \big< \frac{\delta S_1}{\delta \rho}(\rho), \psi \big> : \nabla \varphi \ dx + \int_\Omega (N*\psi) \nabla \cdot (\rho \varphi) dx \\
&+\int_\Omega \big(N*(\rho - n) \big) \nabla \cdot (\psi \varphi)dx,
\end{aligned}
\end{equation}
\begin{equation} \label{2ndformulan}
\begin{aligned}
\big< \big< \frac{\delta^2 \mathcal{E}}{\delta n^2}(\rho,n), \ & \big(\nabla \cdot (n \varphi), \psi \big) \big> \big> + \big< \frac{\delta \mathcal{E}}{\delta n} (\rho,n), \nabla \cdot (\psi \varphi) \big> = \\
=& - \int_\Omega \big< \frac{\delta S_2}{\delta n}(n), \psi \big> : \nabla \varphi \ dx + \int_\Omega (N*\psi) \nabla \cdot (n \varphi) dx \\
&-\int_\Omega \big(N*(\rho - n) \big) \nabla \cdot (\psi \varphi)dx,
\end{aligned}
\end{equation}
where $\psi$ is a scalar test function.
Finally, we need a formula for the mixed functional derivatives of $\mathcal{E}$, which, for scalar valued test functions $\alpha, \beta$, takes the form:
\begin{equation} \label{crossfunctderiv}
\big< \big< \frac{\delta^2 \mathcal{E}}{\delta \rho \delta n}(\rho,n), \ ( \alpha, \beta ) \big> \big> = \big< \big< \frac{\delta^2 \mathcal{E}}{\delta n \delta \rho}(\rho,n), \ ( \alpha, \beta ) \big> \big> = - \int_\Omega (N* \beta) \alpha \ dx.
\end{equation} \\
Next, define the relative functional associated with $\mathcal{E}$, given by the quadratic part of the Taylor series expansion of $\mathcal{E}$.
Given density pairs $(\rho, n)$, $(\bar \rho, \bar n)$, with $\phi= N \ast (\rho -n)$, $\bar \phi = N \ast (\bar \rho - \bar n),$ the relative potential energy functional
is defined by
$$\mathcal{E}(\rho, n | \bar{\rho}, \bar{n}) \coloneqq \mathcal{E}(\rho,n)- \mathcal{E}(\bar{\rho}, \bar{n}) - \big<\frac{\delta \mathcal{E}}{\delta \rho}(\bar{\rho}, \bar{n}) ,\rho - \bar{\rho} \big> - \big<\frac{\delta \mathcal{E}}{\delta n}(\bar{\rho}, \bar{n}) ,n - \bar{n} \big>.
$$
A straightforward computation gives
$$\mathcal{E}(\rho, n | \bar{\rho}, \bar{n}) = \int_\Omega h_1(\rho | \bar{\rho}) + h_2(n | \bar{n}) + \tfrac{1}{2} |\nabla (\phi-\bar{\phi})|^2dx,$$
where $h(r| \bar{r}) \coloneqq h(r)-h(\bar{r}) - h^\prime(\bar{r})(r-\bar{r}).$ \par
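For instance, for a polytropic internal energy $h(r) = \frac{k}{\gamma -1} r^\gamma$ with $\gamma > 1$ and $k > 0$, the relative internal energy takes the explicit form
$$
h(r | \bar{r}) = \frac{k}{\gamma -1} \big( r^\gamma - \bar{r}^\gamma - \gamma \bar{r}^{\gamma -1} (r - \bar{r}) \big),
$$
which is non-negative by the convexity of $h$ and vanishes precisely when $r = \bar{r}$. Consequently, $\mathcal{E}(\rho, n | \bar{\rho}, \bar{n})$ is non-negative and measures the distance between the density pairs $(\rho,n)$ and $(\bar{\rho}, \bar{n})$. \par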
Now let $(\rho, \rho u, n, nv), \ (\bar \rho, \bar \rho \bar u, \bar n, \bar n \bar v),$ with $\phi = N * (\rho - n), \ \bar \phi = N * (\bar \rho - \bar n),$ be two smooth solutions of (\ref{BEP}). Using the continuity equations one derives
\begin{equation} \label{formalRE1}
\begin{aligned}
\frac{d}{dt}\mathcal{E}&(\rho , n | \bar \rho, \bar n) = \frac{d}{dt} \Big( \mathcal{E}(\rho,n)-\mathcal{E}(\bar \rho, \bar n) - \big< \frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho, \bar n), \rho - \bar \rho \big> - \big< \frac{\delta \mathcal{E}}{\delta n}(\bar \rho, \bar n), n - \bar n \big> \Big)
\\
= &\left ( - \big<\frac{\delta \mathcal{E}}{\delta \rho}(\rho,n), \nabla \cdot (\rho u) \big> + \big<\frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho,\bar n), \nabla \cdot (\bar \rho \bar u) \big> + \big<\frac{\delta \mathcal{E}}{\delta \rho}(\rho,n), \nabla \cdot \big(\rho (u-\bar u) \big) \big> \right . \\
& + \big< \big< \frac{\delta^2 \mathcal{E}}{\delta \rho^2}(\bar \rho,\bar n), \ \big(\nabla \cdot (\bar \rho \bar u), \rho - \bar \rho \big) \big> \big> + \big< \frac{\delta \mathcal{E}}{\delta \rho} (\bar \rho,\bar n), \nabla \cdot \big((\rho -\bar \rho) \bar u \big) \big> \\
& \left . + \big< \big< \frac{\delta^2 \mathcal{E}}{\delta \rho \delta n}(\bar \rho,\bar n), \ \big( \nabla \cdot (\bar \rho \bar u), n - \bar n \big) \big> \big> \right ) \\
& - \big< \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho, \bar n), \nabla \cdot \big(\rho(u-\bar u) \big) \big> \\
&+ \left ( - \big<\frac{\delta \mathcal{E}}{\delta n}(\rho,n), \nabla \cdot (n v) \big> + \big<\frac{\delta \mathcal{E}}{\delta n}(\bar \rho,\bar n), \nabla \cdot (\bar n \bar v) \big> + \big<\frac{\delta \mathcal{E}}{\delta n}(\rho,n), \nabla \cdot \big(n (v-\bar v) \big) \big> \right . \\
& + \big< \big< \frac{\delta^2 \mathcal{E}}{\delta n^2}(\bar \rho,\bar n), \ \big(\nabla \cdot (\bar n \bar v), n - \bar n \big) \big> \big> + \big< \frac{\delta \mathcal{E}}{\delta n} (\bar \rho,\bar n), \nabla \cdot \big((n -\bar n) \bar v \big) \big> \\
&\left . + \big< \big< \frac{\delta^2 \mathcal{E}}{\delta n \delta \rho}(\bar \rho,\bar n), \ \big( \nabla \cdot (\bar n \bar v), \rho - \bar \rho \big) \big> \big> \right ) \\
& - \big< \frac{\delta \mathcal{E}}{\delta n}(\rho,n) - \frac{\delta \mathcal{E}}{\delta n}(\bar \rho, \bar n), \nabla \cdot \big(n(v-\bar v) \big) \big> \\
=:& \ I_{\rho u} - \big< \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho, \bar n), \nabla \cdot \big(\rho(u-\bar u) \big) \big> \\
&+ I_{n v} - \big< \frac{\delta \mathcal{E}}{\delta n}(\rho,n) - \frac{\delta \mathcal{E}}{\delta n}(\bar \rho, \bar n), \nabla \cdot \big(n(v-\bar v) \big) \big>
\end{aligned}
\end{equation}\\
where $I_{\rho u}$ and $I_{n v}$ are the terms in parentheses, respectively.
To compute $I_{\rho u},$ one applies formula (\ref{weakrhopstress}) to its first three terms, formula (\ref{2ndformularho}) with $\psi = \rho - \bar \rho,$ $\varphi = \bar u$ to its fourth and fifth terms, and formula (\ref{crossfunctderiv}) with $\alpha = \nabla \cdot (\bar \rho \bar u),$ $\beta = n - \bar n$ to its last term to obtain
\begin{equation} \label{Irhou}
\begin{aligned}
I_{\rho u} = & \int_\Omega \big( S_1(\rho) - S_1(\bar \rho) - \big< \frac{\delta S_1}{\delta \rho}(\bar \rho), \rho - \bar \rho\big> \big) : \nabla \bar u \ dx \\
& - \int_\Omega \big(N*(\rho - n - \bar \rho + \bar n) \big) \nabla \cdot \big( (\rho - \bar \rho) \bar u \big)dx \\
= & \int_\Omega S_1(\rho | \bar \rho) : \nabla \bar u \ dx + \int_\Omega (\rho - \bar \rho) \bar u \cdot \nabla (\phi - \bar \phi)dx.
\end{aligned}
\end{equation}
In a similar fashion, using formulas (\ref{weaknpstress}), (\ref{2ndformulan}), (\ref{crossfunctderiv}) one derives
\begin{equation} \label{Inv}
I_{nv} = \int_\Omega S_2(n | \bar n) : \nabla \bar v \ dx - \int_\Omega (n - \bar n) \bar v \cdot \nabla (\phi - \bar \phi)dx.
\end{equation}
Substituting (\ref{Irhou}) and (\ref{Inv}) into (\ref{formalRE1}) yields
\begin{equation} \label{formalRE2}
\begin{aligned}
\frac{d}{dt} \mathcal{E}(\rho,n | \bar \rho, \bar n) = & \int_\Omega S_1(\rho | \bar \rho) : \nabla \bar u + S_2(n | \bar n) : \nabla \bar v\ dx + \int_\Omega \big( (\rho - \bar \rho) \bar u - (n - \bar n) \bar v \big)\cdot \nabla (\phi - \bar \phi)dx \\
& - \big< \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho, \bar n), \nabla \cdot \big(\rho(u-\bar u) \big) \big> - \big< \frac{\delta \mathcal{E}}{\delta n}(\rho,n) - \frac{\delta \mathcal{E}}{\delta n}(\bar \rho, \bar n), \nabla \cdot \big(n(v-\bar v) \big) \big>.
\end{aligned}
\end{equation} \par
To reach identity (\ref{formalRE2}) one has only used the continuity equations and the structure of the potential energy functional $\mathcal{E}.$ In order to exploit the structure of the momentum equations, we consider the kinetic energy functional $\mathcal{K}$ for the bipolar fluid system,
$$\mathcal{K}(\rho, \rho u, n, nv) = \int_\Omega \tfrac{1}{2}\rho |u|^2 + \tfrac{1}{2}n |v|^2dx,
$$
and compute the relative kinetic energy
\begin{equation*}
\begin{split}
\mathcal{K}(\rho, \rho u, n, nv | \bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) = & \ \mathcal{K}(\rho, \rho u, n, nv) - \mathcal{K}(\bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) \\
& - \big< \frac{\delta \mathcal{K}}{\delta \rho}(\bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) , \rho - \bar{\rho}\big> - \big< \frac{\delta \mathcal{K}}{\delta (\rho u)}(\bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) , \rho u - \bar{\rho} \bar{u}\big> \\
& - \big< \frac{\delta \mathcal{K}}{\delta n}(\bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) , n - \bar{n}\big> - \big< \frac{\delta \mathcal{K}}{\delta (n v)}(\bar{\rho}, \bar{\rho} \bar{u}, \bar{n}, \bar{n} \bar{v}) , n v - \bar{n} \bar{v}\big> \\
& = \int_\Omega \tfrac{1}{2} \rho |u - \bar{u}|^2 + \tfrac{1}{2} n |v - \bar{v}|^2 dx,
\end{split}
\end{equation*}
where $(\rho, \rho u, n, nv), \ (\bar \rho, \bar \rho \bar u, \bar n, \bar n \bar v)$ are two pairs of densities and momenta. \par
Let $(\rho, \rho u, n, nv), \ (\bar \rho, \bar \rho \bar u, \bar n, \bar n \bar v),$ with $\phi = N * (\rho - n), \ \bar \phi = N * (\bar \rho - \bar n),$ be two solutions of (\ref{BEP}).
Consider the momentum equation satisfied by the difference $u-\bar u,$
$$\varepsilon (u - \bar u)_t + \varepsilon (u \cdot \nabla)(u- \bar u) + \varepsilon \big( (u - \bar u) \cdot \nabla \big) \bar u = - \nabla \Big( \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E} }{\delta \rho}(\bar \rho, \bar n) \Big)-(u-\bar u).$$
Taking the inner product with $\rho (u - \bar u)$ and using the continuity equation gives
$$
\begin{aligned}
\big(\varepsilon \tfrac{1}{2} \rho |u - \bar u|^2 \big)_t + \varepsilon \nabla \cdot \big( \tfrac{1}{2} \rho |u - & \bar u|^2 u \big) + \varepsilon \nabla \bar u : \rho (u - \bar u) \otimes (u - \bar u) = \\
& = - \nabla \Big( \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E} }{\delta \rho}(\bar \rho, \bar n) \Big)\cdot \big(\rho (u - \bar u) \big)-\rho|u-\bar u|^2.
\end{aligned}
$$
Analogously,
$$
\begin{aligned}
\big(\varepsilon \tfrac{1}{2} n |v - \bar v|^2 \big)_t + \varepsilon \nabla \cdot \big( \tfrac{1}{2} n |v - & \bar v|^2 v \big) + \varepsilon \nabla \bar v : n (v - \bar v) \otimes (v - \bar v) = \\
& = - \nabla \Big( \frac{\delta \mathcal{E}}{\delta n}(\rho,n) - \frac{\delta \mathcal{E} }{\delta n}(\bar \rho, \bar n) \Big)\cdot \big(n (v - \bar v) \big)-n|v-\bar v|^2.
\end{aligned}
$$
Adding the previous expressions and integrating over space yields the evolution of the relative kinetic energy
\begin{equation} \label{evolrelativekinetic}
\begin{aligned}
\frac{d}{dt} \int_\Omega & \varepsilon \tfrac{1}{2} \rho |u - \bar{u}|^2 + \varepsilon \tfrac{1}{2} n |v - \bar{v}|^2 dx + \int_\Omega \rho|u-\bar u|^2 + n |v - \bar v|^2dx = \\
= & - \varepsilon \int_\Omega \nabla \bar{u} : \rho (u-\bar{u}) \otimes (u-\bar{u})+ \nabla \bar{v} : n (v-\bar{v}) \otimes (v-\bar{v})dx \\
& + \big< \frac{\delta \mathcal{E}}{\delta \rho}(\rho,n) - \frac{\delta \mathcal{E}}{\delta \rho}(\bar \rho, \bar n), \nabla \cdot \big(\rho(u-\bar u) \big) \big> + \big< \frac{\delta \mathcal{E}}{\delta n}(\rho,n) - \frac{\delta \mathcal{E}}{\delta n}(\bar \rho, \bar n), \nabla \cdot \big(n(v-\bar v) \big) \big>.
\end{aligned}
\end{equation}
\smallskip
The relative energy identity is then obtained by adding (\ref{formalRE2}) with (\ref{evolrelativekinetic}):
\begin{equation} \label{REBEPEvol}
\begin{split}
\frac{d}{dt} \Big( & \mathcal{E}(\rho, n | \bar{\rho}, \bar{n}) + \varepsilon \int_\Omega \tfrac{1}{2} \rho |u - \bar u|^2 + \tfrac{1}{2} n |v - \bar v|^2 dx\Big)
+ \int_\Omega \rho |u - \bar{u}|^2 + n |v - \bar{v}|^2 dx =
\\
= & - \varepsilon \int_\Omega \nabla \bar{u} : \rho (u-\bar{u}) \otimes (u-\bar{u})+ \nabla \bar{v} : n (v-\bar{v}) \otimes (v-\bar{v})dx \\
& + \int_\Omega S_1(\rho | \bar{\rho}) : \nabla \bar{u} + S_2(n | \bar{n}) : \nabla \bar{v} \ dx + \int_\Omega \big( (\rho -\bar{\rho}) \bar{u} - (n - \bar{n})\bar{v} \big) \cdot \nabla(\phi- \bar{\phi}) dx.
\end{split}
\end{equation} \par
\section{Statement of the main result}
\label{sec4}
The main objective of this work is to compare a dissipative weak solution $(\rho,\rho u,n,nv)$, with $\phi = N \ast (\rho - n)$, of the bipolar Euler-Poisson system \eqref{BEP}
with a strong and bounded away from vacuum solution $(\bar \rho, \bar n)$, $\bar \phi = N \ast (\bar \rho - \bar n)$, of the bipolar drift-diffusion system \eqref{BDD}.
As described in subsection \ref{sec:bipolar}, the limit $\varepsilon \to 0$ corresponds to the overdamped limit for the bipolar Euler-Poisson system.
The plan of the section is to first make precise
the notions of solutions utilized in this work, then describe the methodology of comparison via the relative energy, and finally state the main result.
The internal energy functions
$h_1,h_2 $ and the pressure functions $p_1,p_2$ are assumed to satisfy, apart from
(\ref{thermoconsistency}), the limiting behaviors
\begin{equation} \label{ho(g)}
\lim\limits_{r \to +\infty} \frac{h_i(r)}{r^{\gamma_i}} = \frac{k_i}{\gamma_i -1},
\end{equation}
and
\begin{equation}\label{pbound}
|p_i^{\prime \prime}(r)| \leq \hat{k}_i \frac{p_i^\prime(r)}{r}, \ r>0,
\end{equation}
for some exponents $\gamma_1, \gamma_2 > 1$ and
for some positive constants $k_i,\hat{k}_i, \ i=1,2$.
The prototypical examples satisfying these conditions are
$$p(r) = k r^\gamma, \ \ \ h(r)= \frac{k}{\gamma -1}r^\gamma,$$ where $\gamma > 1$ and $k > 0$.
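Both conditions are verified directly for this prototype: $h(r)/r^\gamma = \frac{k}{\gamma -1}$ for every $r > 0$, so that \eqref{ho(g)} holds with equality, while
$$
|p^{\prime \prime}(r)| = k \gamma (\gamma -1) r^{\gamma -2} = (\gamma -1) \, \frac{p^\prime(r)}{r}, \ r > 0,
$$
so that \eqref{pbound} holds with $\hat{k} = \gamma -1$. Note also that $r h^\prime(r) - h(r) = k r^\gamma = p(r)$, in agreement with the standard thermodynamic relation between pressure and internal energy.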
\medskip
First we describe the assumptions on the solution of the bipolar Euler-Poisson system \eqref{BEP}.
\begin{definition} \label{weakformulation}
The vector function $(\rho,\rho u,n,nv)$ with $\rho, n \ge 0$ and regularity
$$\rho \in C\big([0,T[; L^{\gamma_1}(\Omega)\big), \qquad \ n \in C\big([0,T[; L^{\gamma_2}(\Omega)\big),$$
$$\rho u, \ nv \in C\Big([0,T[;\big(L^1(\Omega)\big)^d\Big),$$
$$\rho |u|^2, \ n |v|^2 \in L^1 \big( ]0, T[ \times \Omega \big), $$
together with $\phi = N * (\rho -n)$ satisfying
$$
\rho \nabla \phi , \; n \nabla \phi \in L^1 \left ( ]0,T[ \times \Omega \right )
$$
is a weak solution of (\ref{BEP}) provided that:
\begin{enumerate}[(i)]
\item $(\rho,\rho u,n,nv)$ satisfies (\ref{BEP}) in the weak sense
\begin{equation} \label{weak1}
-\int_0^T \int_\Omega \varphi_t \rho \ dxdt-\int_0^T \int_\Omega \nabla \varphi \cdot (\rho u) dxdt - \int_\Omega \varphi \rho \big|_{t=0}dx=0,
\end{equation}
\begin{equation} \label{weak2}
\begin{split}
-\varepsilon \int_0^T \int_\Omega \tilde{\varphi}_t \cdot (\rho u)dxdt - \varepsilon \int_0^T \int_\Omega \nabla \tilde{\varphi} : \rho u \otimes u \ dx dt - \int_0^T \int_\Omega (\nabla \cdot \tilde{\varphi}) p_1(\rho) dxdt \\
-\varepsilon \int_\Omega \tilde{\varphi} \cdot (\rho u)\big|_{t=0}dx = - \int_0^T \int_\Omega \tilde{\varphi}\cdot (\rho \nabla \phi)dxdt - \int_0^T \int_\Omega \tilde{\varphi} \cdot (\rho u)dxdt,
\end{split}
\end{equation}
\begin{equation} \label{weak3}
-\int_0^T \int_\Omega \psi_t n \ dxdt-\int_0^T \int_\Omega \nabla \psi \cdot (nv) dxdt - \int_\Omega \psi n \big|_{t=0}dx=0,
\end{equation}
\begin{equation} \label{weak4}
\begin{split}
-\varepsilon \int_0^T \int_\Omega \tilde{\psi}_t \cdot (nv)dxdt - \varepsilon \int_0^T \int_\Omega \nabla \tilde{\psi} : nv \otimes v \ dx dt - \int_0^T \int_\Omega (\nabla \cdot \tilde{\psi}) p_2(n) dxdt \\
- \varepsilon \int_\Omega \tilde{\psi} \cdot (nv)\big|_{t=0}dx = \int_0^T \int_\Omega \tilde{\psi} \cdot (n \nabla \phi)dxdt - \int_0^T \int_\Omega \tilde{\psi} \cdot (nv)dxdt,
\end{split}
\end{equation}
for all Lipschitz test functions $\varphi, \psi : [0,T[ \times \bar{\Omega} \to \mathbb{R}, \ \tilde{\varphi}, \tilde{\psi}:[0,T[ \times \bar{\Omega} \to \mathbb{R}^d$ compactly supported in time and satisfying $\tilde{\varphi} \cdot \nu = \tilde{\psi} \cdot \nu = 0 $ on $[0,T[ \times \partial \Omega,$ where $\nu$ is any outer normal vector to the boundary;
\item $(\rho,\rho u,n,nv)$ satisfies the bounds:
\begin{equation} \label{massconservation}
\int_\Omega \rho \ dx = \int_\Omega n \ dx = M < +\infty, \ \ \ \forall t \in [0,T[,
\end{equation}
\begin{equation} \label{energyconservation}
\underset{[0,T[}{\text{sup}} \int_\Omega \varepsilon\tfrac{1}{2}\rho|u|^2+\varepsilon \tfrac{1}{2}n|v|^2+h_1(\rho)+h_2(n)+ \tfrac{1}{2} |\nabla \phi|^2 dx < +\infty;
\end{equation}
\end{enumerate}
\end{definition}
\medskip
\noindent
Solutions of (\ref{BEP}) clearly depend on $\varepsilon$, that is $(\rho,\rho u,n,nv)=(\rho_\varepsilon,\rho_\varepsilon u_\varepsilon,n_\varepsilon,n_\varepsilon v_\varepsilon)$; this dependence is suppressed for simplicity.
Property (\ref{massconservation}) represents the conservation of mass for $\rho$ and $n$, whereas (\ref{energyconservation}) asserts that
the total energy is finite.
\begin{definition}
A weak solution $(\rho,\rho u,n,nv), \ \phi = N*(\rho -n),$ of (\ref{BEP}) is called dissipative if $\rho |u|^2, \ n |v|^2, \ |\nabla \phi|^2 \in C\big([0,T[; L^1(\Omega) \big),$ and it satisfies
\begin{equation} \label{weakdissip}
\begin{split}
- \int_0^T & \int_\Omega \big(\varepsilon \tfrac{1}{2}\rho|u|^2+\varepsilon \tfrac{1}{2}n|v|^2+ h_1(\rho)+h_2(n)+ \tfrac{1}{2} |\nabla \phi|^2\big) \dot{\theta}(t)dxdt \\
+ & \int_0^T \int_\Omega\big( \rho |u|^2+ n |v|^2\big) \theta(t)dxdt \\
\leq \int_\Omega & \big(\varepsilon \tfrac{1}{2}\rho|u|^2+\varepsilon \tfrac{1}{2}n|v|^2+h_1(\rho)+h_2(n)+ \tfrac{1}{2} |\nabla \phi|^2\big)\big|_{t=0} \theta(0) dx
\end{split}
\end{equation} \\
for any non-negative $\theta \in W^{1,\infty}([0,T[)$ with compact support.
\end{definition} \par
Next, we turn to solutions $(\bar{\rho}, \bar{n}),$ with $\bar{\phi}= N \ast (\bar{\rho}- \bar{n}),$ of the bipolar drift-diffusion system.
These are assumed to be classical solutions of \eqref{BDD} which satisfy the boundary conditions (\ref{boundarycondBDD}),
and emanate from initial data satisfying the bounds
\begin{align}
\int_\Omega \bar \rho_0 \ dx = \int_\Omega \bar n_0 \ dx = \bar M < +\infty,
\\
\int_\Omega h_1(\bar{\rho}_0)+h_2(\bar{n}_0)+ \tfrac{1}{2} |\nabla \bar{\phi}_0|^2 dx < +\infty,
\label{energyconservationDD}
\end{align}
where $\bar \phi_0 = N*(\bar \rho_0 - \bar n_0).$ Setting
\begin{equation}\label{defappsol}
\bar{u} \coloneqq -\nabla\big(h_1^{\prime}(\bar{\rho})+\bar{\phi}\big), \ \ \ \bar{v} \coloneqq -\nabla\big(h_2^{\prime}(\bar{n})-\bar{\phi}\big),
\end{equation}
after multiplying the previous expressions by $\bar \rho \bar u$ and $\bar n \bar v,$ respectively, integrating over space, and using the continuity equations of (\ref{BDD}), one obtains the energy identity for system (\ref{BDD}):
\begin{equation}\label{energyBDD}
\frac{d}{dt} \int_\Omega h_1(\bar{\rho})+h_2(\bar{n})+ \tfrac{1}{2} |\nabla \bar{\phi}|^2 dx = - \int_\Omega \bar \rho | \bar u |^2 + \bar n |\bar v|^2dx.
\end{equation}
Due to (\ref{energyBDD}), condition (\ref{energyconservationDD}) is preserved for all times $t \in [0,T[.$
Expressing the energy identity (\ref{energyBDD}) in a weak form, one has that a strong solution of (\ref{BDD}) satisfies
\begin{equation} \label{strongdissip}
\begin{split}
& - \int_0^T \int_\Omega \big(h_1(\bar{\rho})+h_2(\bar{n})+ \tfrac{1}{2} |\nabla \bar{\phi}|^2\big) \dot{\theta}(t)dxdt + \int_0^T \int_\Omega\big( \bar{\rho} |\bar{u}|^2+ \bar{n} |\bar{v}|^2\big) \theta(t)dxdt \\
& = \int_\Omega \big(h_1(\bar{\rho})+h_2(\bar{n})+ \tfrac{1}{2} |\nabla \bar{\phi}|^2\big)\big|_{t=0} \theta(0) dx,
\end{split}
\end{equation}
for all $\theta \in W^{1,\infty}([0,T[)$ with compact support.
\par
Moreover, $(\bar{\rho}, \bar{n})$ is assumed to be bounded away from vacuum: \\
\textbf{(H)} There exist $\delta_1, \delta_2 > 0$ and $M_1,M_2 < +\infty$ such that
$$
\bar{\rho}(t,x) \in [\delta_1, M_1] \, , \quad \bar{n}(t,x) \in [\delta_2, M_2] \quad \mbox{ for $(t,x) \in [0,T[\times \Omega$. }
$$
\par
In order to compare $(\rho, \rho u, n, n v, \phi)$ with $(\bar \rho, \bar n, \bar \phi)$ we proceed along the lines of \cite{lattanziothanos2013} and view
$(\bar \rho, \bar n, \bar \phi)$ as an approximate solution of \eqref{BEP}. This is accomplished by setting $(\bar u, \bar v)$ via \eqref{defappsol}.
We refer to the resulting $(\bar \rho, \bar \rho \bar u, \bar n , \bar n \bar v)$ as a strong and bounded away from vacuum solution of \eqref{BDD}. The qualifier ``strong'' refers to the boundedness of all the derivatives that appear in the sequel. Precisely, one requires that the derivatives
$$
\dfrac{\partial \bar{\rho}}{\partial t}, \
\dfrac{\partial \bar{n}}{\partial t}, \
\dfrac{\partial ^2 \bar{\rho}}{\partial x_i\partial t}, \
\dfrac{\partial ^2 \bar{n}}{\partial x_i \partial t}, \
\dfrac{\partial ^2 \bar{\phi}}{\partial x_i\partial t},\
\dfrac{\partial^2 \bar{\rho}}{\partial x_i \partial x_j}, \
\dfrac{\partial^2 \bar{n}}{\partial x_i \partial x_j}, \
\dfrac{\partial^2 \bar{\phi}}{\partial x_i \partial x_j}
$$
are in $L^\infty([0,T[ \times \Omega)$ for all $i,j = 1, \ldots, d.$ \par
One easily checks that
\begin{equation*}
\bar{\rho}_t + \nabla \cdot(\bar{\rho} \bar{u}) = 0,
\end{equation*}
\begin{equation*}
\bar{n}_t+ \nabla \cdot(\bar{n} \bar{v}) = 0,
\end{equation*}
and
\begin{equation} \label{nofluxrhou}
\bar \rho \bar{u} \cdot \nu = \bar n \bar{v} \cdot \nu = 0 \ \text{on} \ [0,T[ \times \partial \Omega
\end{equation}
for $\nu$ an outer normal vector to $\partial \Omega.$
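Indeed, writing the first equation of \eqref{BDD} in the drift-diffusion form $\bar{\rho}_t = \nabla \cdot \big( \nabla p_1(\bar{\rho}) + \bar{\rho} \nabla \bar{\phi} \big)$ and using the identity $\bar{\rho} \nabla h_1^{\prime}(\bar{\rho}) = \nabla p_1(\bar{\rho})$, which follows from (\ref{thermoconsistency}), one has
$$
\bar{\rho} \bar{u} = - \bar{\rho} \nabla \big( h_1^{\prime}(\bar{\rho}) + \bar{\phi} \big) = - \big( \nabla p_1(\bar{\rho}) + \bar{\rho} \nabla \bar{\phi} \big),
$$
so that $\bar{\rho}_t + \nabla \cdot (\bar{\rho} \bar{u}) = 0$; the computation for $\bar{n}$ is analogous, and \eqref{nofluxrhou} is inherited from the boundary conditions (\ref{boundarycondBDD}).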
Then setting
$$\bar{e}_1 \coloneqq (\bar{\rho} \bar{u})_t + \nabla \cdot (\bar{\rho} \bar{u} \otimes \bar{u}),$$ $$\bar{e}_2 \coloneqq (\bar{n} \bar{v})_t + \nabla \cdot (\bar{n} \bar{v} \otimes \bar{v}),$$ the equilibrium system (\ref{BDD}) can be rewritten as an approximation of the system \eqref{BEP},
\begin{equation} \label{BDDlifted}
\begin{dcases}
\bar{\rho}_t + \nabla \cdot (\bar{\rho} \bar{u}) = 0 \\
(\bar{\rho} \bar{u})_t + \nabla \cdot (\bar{\rho} \bar{u} \otimes \bar{u}) = -\frac{1}{\varepsilon} \bar{\rho} \nabla(h_1^{\prime}(\bar{\rho})+\bar{\phi})-\frac{1}{\varepsilon} \bar{\rho} \bar{u} + \bar{e}_1 \\
\bar{n}_t + \nabla \cdot (\bar{n} \bar{v}) = 0 \\
(\bar{n} \bar{v})_t + \nabla \cdot (\bar{n} \bar{v} \otimes \bar{v}) = -\frac{1}{\varepsilon} \bar{n} \nabla(h_2^{\prime}(\bar{n})-\bar{\phi})-\frac{1}{\varepsilon} \bar{n} \bar{v} + \bar{e}_2\\
-\Delta \bar{\phi} = \bar{\rho} - \bar{n}.
\end{dcases}
\end{equation}
where
$$
\bar{e}_1, \bar{e}_2 \in \big(L^\infty(]0,T[ \times \Omega)\big)^d.
$$
The two solutions $(\rho, \rho u, n, n v)$ and $(\bar \rho, \bar \rho \bar u, \bar n , \bar n \bar v),$ with $\phi=N*(\rho -n), \ \bar \phi = N*(\bar \rho - \bar n),$ are then compared by means of the relative energy
$\Psi : [0,T[ \to \mathbb{R}$ for \eqref{BEP} given by
$$
\Psi(t)=\int_\Omega \varepsilon \tfrac{1}{2} \rho |u - \bar{u}|^2+\varepsilon \tfrac{1}{2} n |v - \bar{v}|^2 + h_1(\rho | \bar{\rho}) + h_2(n | \bar{n}) +\tfrac{1}{2} |\nabla (\phi - \bar{\phi})|^2dx.
$$
We prove:
\begin{theorem}\label{mainresult}
Let $(\rho,\rho u,n,n v)$, with $\phi = N*(\rho - n)$, be a dissipative weak solution of (\ref{BEP}) with $\gamma_1,\gamma_2 \geq 2 - \frac{1}{d},$ and
let $(\bar{\rho}, \bar \rho \bar u, \bar{n}, \bar n \bar v)$, with $\bar{\phi}= N*(\bar{\rho}-\bar{n})$, be a strong and bounded away from vacuum solution of \eqref{BDD}.
There exists $C > 0$ such that for $t \in [0, T [$ the relative energy $\Psi$ between these two solutions satisfies the stability estimate
$$
\Psi(t) \leq e^{CT}\big(\Psi(0) + \varepsilon^2 \big).
$$
Therefore if $\Psi(0) \to 0$ as $\varepsilon \to 0$, then $\Psi(t) \to 0$ as $\varepsilon \to 0$ for every $t \in [0, T [$.
\end{theorem}
\section{Convergence in the relaxation limit}
\label{sec5}
This section contains the proof of Theorem \ref{mainresult}. We start with some auxiliary results on the behavior of the Neumann function and Riesz potentials,
then continue with the derivation of the relative energy identity within the regularity class detailed in section \ref{sec4}, and conclude with the proof
of the stability estimate.
\subsection{Auxiliary results} \label{sec:Neumann}
Regarding the Neumann function $N \in C^\infty ( \bar{\Omega} \times \bar{\Omega} \setminus \{(x,x) \ | \ x \in \bar{\Omega} \} )$, the relevant properties that will be used are \cite[Chapter 1, Section 6]{carloskenig}:
\begin{enumerate}[(i)]
\item $N(x,y) = N(y,x),$
\item $|N(x,y)| \leq \dfrac{C}{|x-y|^{d-2}},$
\item $|\nabla_x N(x,y)| \leq \dfrac{C}{|x-y|^{d-1}},$
\item If $f \in H^1(\Omega)^* \cap W^{1,p}(\Omega)^*,$ for $p < d/(d-1),$ satisfies $\int_\Omega f \ dx = 0,$ then $\beta = N * f$ is the unique solution of
$$\int_\Omega \nabla \beta \cdot \nabla \varphi \ dx = \int_\Omega f \varphi \ dx \ \ \forall \varphi \in H^1(\Omega)$$ that satisfies $\int_{\partial \Omega} \beta \ dx = 0$ and belongs to $C^\alpha(\bar{\Omega}),$ with $\alpha$ depending only on $d.$
\end{enumerate}
\vspace{3mm}
\par
In order to deal with the electrostatic potential $\phi$ one needs to recall the notion of Riesz potential. Given a function $f : \mathbb{R}^d \to \mathbb{R}$, the Riesz potential of $f$ is the function $I_\alpha(f)$ given by
$$I_\alpha(f)(x) = \int_{\mathbb{R}^d} \dfrac{f(y)}{|x-y|^{d-\alpha}} dy, $$
with $0 < \alpha < d.$
Regarding these potentials one has the following result \cite[Chapter V, Section 1]{stein}:
\begin{proposition} \label{steinprop}
Let $0 < \alpha < d$ and $1 < p < d/\alpha.$ If $f \in L^p(\mathbb{R}^d),$ then $I_\alpha(f)(x)$ converges absolutely for a.e. $x \in \mathbb{R}^d$ and
$$||I_\alpha(|f|)||_{L^{\frac{dp}{d-\alpha p}}(\mathbb{R}^d)} \leq C || f ||_{L^p(\mathbb{R}^d)},$$
for some positive constant $C = C(\alpha, d,p).$
\end{proposition}
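To illustrate the exponents involved, take $d = 3,$ $\alpha = 2,$ and $p = \frac{2d}{d+2} = \frac{6}{5}$; then
$$
\frac{dp}{d - \alpha p} = \frac{18/5}{3/5} = 6 = \frac{2d}{d-2},
$$
which is precisely the integrability obtained for the potentials in Proposition \ref{neumannprop} below.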
Combining this previous proposition with the properties of the Neumann function one has:
\begin{proposition} \label{neumannprop}
Let $d \in \mathbb{N} \setminus \{1,2\},$ $f,g \in L^\gamma(\Omega), \ \phi = N * f, \ \varphi = N * g, \ \nabla \phi = \nabla_xN * f,$ and $\nabla \varphi = \nabla_x N * g,$ where $\gamma \geq \frac{2d}{d+2},$ $\Omega \subseteq \mathbb{R}^d$ is a bounded domain with smooth boundary, and $N$ is the Neumann function. Then, $\phi, \varphi \in L^{\frac{2d}{d-2}}(\Omega),$ $\nabla \phi, \nabla \varphi \in L^2(\Omega),$ and
\begin{equation} \label{intparts}
\int_\Omega \nabla \phi \cdot \nabla \varphi dx = \int_\Omega f \varphi dx = \int_\Omega g \phi dx = \int_\Omega \int_\Omega f(x)N(x,y)g(y) dxdy.
\end{equation}
\end{proposition}
\begin{proof}
First one demonstrates that $\phi \in L^{\frac{2d}{d-2}}(\Omega)$ and $\nabla \phi \in L^2(\Omega)$ (so $\varphi \in L^{\frac{2d}{d-2}}(\Omega)$ and $\nabla \varphi \in L^2(\Omega)$ as well).
Set $p = \frac{2d}{d+2}$ and observe that since $d > 2$ one has $1< p < d/2$. Let $\tilde{f}$ be given by
$$\tilde{f}(x) =
\begin{cases}
f(x), \ \text{if} \ x \in \Omega, \\
0, \ \text{otherwise.}
\end{cases}$$
Clearly $\tilde{f} \in L^1(\mathbb{R}^d) \cap L^\gamma(\mathbb{R}^d),$ and since $\gamma \geq p = \frac{2d}{d+2}$ interpolation gives that $\tilde{f} \in L^p(\mathbb{R}^d).$
From the properties of the Neumann function one deduces that
\begin{equation*}
|\phi(x)| \leq \int_\Omega |N(x,y)||f(y)|dy
\leq C \int_\Omega \frac{|f(y)|}{|x-y|^{d-2}}dy
\leq C \int_{\mathbb{R}^d} \frac{|\tilde{f}(y)|}{|x-y|^{d-2}}dy
= CI_2(|\tilde{f}|)(x), \ x \in \Omega.
\end{equation*}
\\
Using Proposition \ref{steinprop} with $\alpha = 2$, $p = \frac{2d}{d+2}$, one obtains
\begin{equation*}
||\phi||_{L^{\frac{2d}{d-2}}(\Omega)} \leq C||I_2(|\tilde{f}|)||_{L^{\frac{2d}{d-2}}(\Omega)} \leq C||I_2(|\tilde{f}|)||_{L^{\frac{2d}{d-2}}(\mathbb{R}^d)} \leq C || \tilde{f} ||_{L^p(\mathbb{R}^d)}
= C || f ||_{L^p(\Omega)}.
\end{equation*}
Similarly, $$|\nabla \phi(x)| \leq C I_1(|\tilde{f}|)(x), \ x \in \Omega, $$
hence
$$
||\nabla \phi||_{L^2(\Omega)} \leq C \| I_1 ( |\tilde{f}| ) \|_{L^2(\Omega)} \le C ||f||_{L^p(\Omega)},
$$
where we used Proposition \ref{steinprop} with $\alpha = 1$, $p = \frac{2d}{d+2}$. \par
To prove the second and third equalities of expression (\ref{intparts}) one observes that $p^\prime = \frac{2d}{d-2},$ so
$$\int_\Omega |f \varphi| dx \leq \Big( \int_\Omega |f|^{p}dx \Big)^{\frac{1}{p}} \Big( \int_\Omega |\varphi|^{p^\prime}dx \Big)^{\frac{1}{p^\prime}} < + \infty, $$
then Fubini's theorem and the symmetry of the Neumann function yield the desired conclusion.
Next, we prove the first equality in (\ref{intparts}). For $f,g \in L^p(\Omega),$ there exist sequences $(f_n)_{n \in \mathbb{N}},(g_n)_{n \in \mathbb{N}}$ belonging to $ C_c^{\infty}(\Omega)$ such that $$f_n \to f \ \text{in} \ L^p(\Omega), $$ $$g_n \to g \ \text{in} \ L^p(\Omega). $$ Let $\phi_n = N * f_n$ and $\varphi_n = N * g_n.$ Then,
$$||\phi - \phi_n ||_{L^{p^\prime}(\Omega)} \leq C ||f - f_n ||_{L^p(\Omega)} \to 0 \ \text{as} \ n \to +\infty, $$
and
$$||\nabla \phi - \nabla \phi_n ||_{L^2(\Omega)} \leq C ||f - f_n ||_{L^p(\Omega)} \to 0 \ \text{as} \ n \to +\infty. $$
In other words, $\phi_n \to \phi \ \text{in} \ L^{p^\prime}(\Omega)$ and $\nabla \phi_n \to \nabla \phi \ \text{in} \ L^2(\Omega),$ and the same holds for $\varphi_n, \varphi, \nabla \varphi_n, \nabla \varphi.$ \\
Thus,
\begin{equation*}
\begin{split}
\Big|\int_\Omega f_n \varphi_n dx - \int_\Omega f \varphi dx\Big| & \leq \int_\Omega |f_n| |\varphi_n - \varphi|dx + \int_\Omega |f_n - f||\varphi| dx \\
& \leq ||f_n ||_{L^p(\Omega)} ||\varphi_n - \varphi ||_{L^{p^{\prime}}(\Omega)} + ||f_n - f||_{L^p(\Omega)}|| \varphi ||_{L^{p^{\prime}}(\Omega)} \\
& \to 0 \ \text{as} \ n \to +\infty,
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\Big| \int_\Omega \nabla \phi_n \cdot \nabla \varphi_n dx - \int_\Omega \nabla \phi \cdot \nabla \varphi dx \Big| & \leq ||\nabla \phi_n - \nabla \phi ||_{L^2(\Omega)}||\nabla \varphi_n ||_{L^2(\Omega)}+|| \nabla \phi||_{L^2(\Omega)}||\nabla \varphi_n - \nabla \varphi ||_{L^2(\Omega)} \\
& \to 0 \ \text{as} \ n \to +\infty.
\end{split}
\end{equation*}
Observing that $f_n, \phi_n, \varphi_n$ satisfy
$$ \int_\Omega \nabla \phi_n \cdot \nabla \varphi_ndx = \int_\Omega f_n \varphi_n dx, $$
after letting $n \to +\infty$ one obtains the desired identity.
\end{proof}
\par
We finish this subsection with a result proved in \cite[Lemma 2.4]{lattanziothanos2013}, which is used in the proof of Lemma \ref{lemmaJ3}.
\begin{lemma} \label{hlemma}
Let $h \in C^2(]0,+\infty[) \cap C([0,+\infty[)$ be such that $\lim\limits_{r \to +\infty} \frac{h(r)}{r^\gamma}=\frac{k}{\gamma-1}$ for some $k>0$ and $\gamma > 1,$ and $h^{\prime \prime}(r) > 0 \ \forall r>0.$ Assume that $\bar{r} \in [\delta,M],$ where $\delta > 0$ and $M < +\infty.$ Then, there exist $R \geq M+1$ and positive constants $C_1, C_2$ such that
\begin{equation*}
h(r| \bar{r}) \geq
\begin{cases}
C_1 |r-\bar{r}|^2, \ \text{if} \ (r,\bar{r}) \in [0,R]\times [\delta,M] \\
C_2 |r-\bar{r}|^{\gamma}, \ \text{if} \ (r,\bar{r}) \in ]R,+\infty [\times [\delta,M].
\end{cases}
\end{equation*}
Furthermore, if $\gamma \geq 2$, then $h(r| \bar{r}) \geq C|r-\bar{r}|^2 $ for every $(r,\bar{r})\in[0,+\infty[\times[\delta,M]$, where $C=\min\{C_1,C_2\}.$
\end{lemma}
\subsection{Derivation of the relative energy inequality}
The relative energy inequality is now derived within the regularity class detailed in section \ref{sec4}.
\begin{proposition} \label{relativeentropy}
Let $(\rho,\rho u,n,nv),$ with $\phi = N*(\rho - n),$ be a dissipative weak solution of (\ref{BEP}) with $\gamma_1,\gamma_2 \geq \frac{2d}{d+2}$, and let $(\bar{\rho},\bar{\rho} \bar{u},\bar{n},\bar{n} \bar{v}),$ with $\bar{\phi}= N*(\bar{\rho}-\bar{n}),$ be a strong and bounded away from vacuum solution of (\ref{BDD}). Then, for each $t \in [0,T[$, the relative energy $\Psi$ between these two solutions satisfies the following relative energy inequality:
\begin{equation} \label{relativeentropyinequality}
\Psi(t)-\Psi(0) + \int_0^t \int_\Omega \rho |u - \bar{u}|^2 + n |v - \bar{v}|^2 dx d\tau \leq \mathcal{J}_1(t) + \mathcal{J}_2(t) + \mathcal{J}_3(t) + \mathcal{J}_4(t),
\end{equation}
where
\begin{equation*}
\begin{split}
\mathcal{J}_1(t) & = -\varepsilon \int_0^t \int_\Omega \nabla \bar{u}: \rho (u - \bar{u}) \otimes (u - \bar{u})+\nabla \bar{v}: n (v - \bar{v}) \otimes (v - \bar{v})dxd \tau, \\
\mathcal{J}_2(t) & = - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) p_1(\rho | \bar{\rho}) + (\nabla \cdot \bar{v}) p_2(n | \bar{n}) dx d\tau, \\
\mathcal{J}_3(t) & = \int_0^t \int_\Omega \big((\rho - \bar{\rho})\bar{u}-(n - \bar{n})\bar{v}\big)\cdot \nabla (\phi - \bar{\phi}) dx d\tau, \\
\mathcal{J}_4(t) & = -\varepsilon \int_0^t \int_\Omega \frac{\rho}{\bar{\rho}} \bar{e}_1 \cdot (u - \bar{u}) + \frac{n}{\bar{n}} \bar{e}_2 \cdot (v - \bar{v}) dx d\tau. \\
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
Fix $t \in [0,T[,$ let $\kappa$ be such that $t+\kappa < T$, and define $\theta : [0,T[\ \to \mathbb{R}$ by
$$\theta (\tau) =
\begin{dcases}
1, \ \text{if} \ 0 \leq \tau < t \\
\frac{t- \tau}{\kappa}+1, \ \text{if} \ t \leq \tau < t+\kappa \\
0, \ \text{if} \ t+\kappa \leq \tau < T.
\end{dcases}
$$
Using this choice of $\theta$ in (\ref{weakdissip}) yields
\begin{equation*}
\begin{split}
& \int_t^{t+ \kappa} \int_\Omega \frac{1}{\kappa}\big(\varepsilon \tfrac{1}{2}\rho|u|^2+\varepsilon \tfrac{1}{2}n|v|^2+h_1(\rho)+h_2(n)+ \tfrac{1}{2} |\nabla \phi|^2\big)dxd\tau \\
& + \int_0^t \int_\Omega \rho|u|^2+n|v|^2dxd\tau + \int_t^{t+ \kappa} \int_\Omega \Big(\frac{t - \tau}{\kappa} + 1\Big) \big(\rho|u|^2+n|v|^2\big)dxd\tau \\
& \leq \int_\Omega \big(\varepsilon\tfrac{1}{2}\rho|u|^2+\varepsilon\tfrac{1}{2}n|v|^2+h_1(\rho)+h_2(n) + \tfrac{1}{2} |\nabla \phi|^2 \big)\big|_{\tau = 0}dx.
\end{split}
\end{equation*}
Letting $\kappa \to 0^+$ above one deduces
\begin{equation} \label{REweakdiss}
\begin{split}
& \int_\Omega \big(\varepsilon \tfrac{1}{2}\rho|u|^2+\varepsilon \tfrac{1}{2}n|v|^2+h_1(\rho)+h_2(n)+ \tfrac{1}{2} |\nabla \phi|^2\big)\big|_{\tau = 0}^{\tau = t}dx \\
& \leq - \int_0^t \int_\Omega \rho|u|^2+n|v|^2dxd\tau.
\end{split}
\end{equation}
Next, observe that from \eqref{BDDlifted}, after a straightforward calculation, one obtains
\begin{equation} \label{ue}
\begin{split}
& \int_0^t \int_\Omega \bar{u} \cdot \bar{e}_1 \ dx d\tau = \int_\Omega \tfrac{1}{2} \bar{\rho} |\bar{u}|^2 dx \Big |_{\tau=0}^{\tau=t},
\\
& \int_0^t \int_\Omega \bar{v} \cdot \bar{e}_2 \ dx d\tau = \int_\Omega \tfrac{1}{2} \bar{n} |\bar{v}|^2 dx \Big |_{\tau=0}^{\tau=t} .
\end{split}
\end{equation}
Using the same choice of $\theta$ in (\ref{strongdissip}) together with (\ref{ue}) gives
\begin{equation} \label{REstrongdiss}
\begin{split}
& \int_\Omega \big(\varepsilon \tfrac{1}{2}\bar{\rho}|\bar{u}|^2+\varepsilon \tfrac{1}{2}\bar{n}|\bar{v}|^2+h_1(\bar{\rho})+h_2(\bar{n})+ \tfrac{1}{2} |\nabla \bar{\phi}|^2\big)\big|_{\tau = 0}^{\tau = t}dx \\
& = - \int_0^t \int_\Omega \bar{\rho}|\bar{u}|^2+\bar{n}|\bar{v}|^2dxd\tau + \varepsilon \int_0^t \int_\Omega \bar{u} \cdot \bar{e}_1 + \bar{v} \cdot \bar{e}_2 \ dxd\tau.
\end{split}
\end{equation}
\par
Regarding the difference $(\rho - \bar{\rho},\rho u - \bar{\rho} \bar{u},n - \bar{n},nv -\bar{n} \bar{v}) $ between a weak solution of (\ref{BEP}) and a strong solution of (\ref{BDD}), one has the following:
\begin{equation*}
-\int_0^T \int_\Omega \varphi_t (\rho - \bar{\rho}) \ dxdt-\int_0^T \int_\Omega \nabla \varphi \cdot (\rho u - \bar{\rho} \bar{u}) dxdt - \int_\Omega \varphi (\rho - \bar{\rho}) \big|_{t=0}dx=0,
\end{equation*}
\begin{equation*}
\begin{split}
-& \varepsilon \int_0^T \int_\Omega \tilde{\varphi}_t \cdot (\rho u- \bar{\rho} \bar{u})dxdt - \varepsilon \int_0^T \int_\Omega \nabla \tilde{\varphi} : (\rho u \otimes u - \bar{\rho} \bar{u} \otimes \bar{u})dx dt \\
&- \int_0^T \int_\Omega (\nabla \cdot \tilde{\varphi}) \big(p_1(\rho)-p_1(\bar{\rho}) \big)dxdt
- \varepsilon \int_\Omega \tilde{\varphi} \cdot (\rho u - \bar{\rho} \bar{u})\big|_{t=0}dx \\
=& \ - \int_0^T \int_\Omega \tilde{\varphi} \cdot (\rho \nabla \phi - \bar{\rho} \nabla \bar{\phi})dxdt - \int_0^T \int_\Omega \tilde{\varphi} \cdot (\rho u - \bar{\rho} \bar{u})dxdt- \varepsilon \int_0^T \int_\Omega \tilde{\varphi} \cdot \bar{e}_1dxdt,
\end{split}
\end{equation*}
\begin{equation*}
-\int_0^T \int_\Omega \psi_t (n - \bar{n}) \ dxdt-\int_0^T \int_\Omega \nabla \psi \cdot (nv - \bar{n} \bar{v}) dxdt - \int_\Omega \psi (n - \bar{n}) \big|_{t=0}dx=0,
\end{equation*}
\begin{equation*}
\begin{split}
-& \varepsilon \int_0^T \int_\Omega \tilde{\psi}_t \cdot (nv - \bar{n} \bar{v})dxdt - \varepsilon \int_0^T \int_\Omega \nabla \tilde{\psi} :( nv \otimes v - \bar{n} \bar{v} \otimes \bar{v}) dx dt\\
& - \int_0^T \int_\Omega (\nabla \cdot \tilde{\psi}) \big(p_2(n)-p_2(\bar{n}) \big) dxdt
- \varepsilon \int_\Omega \tilde{\psi} \cdot (nv- \bar{n} \bar{v})\big|_{t=0}dx \\
= & \ \int_0^T \int_\Omega \tilde{\psi} \cdot (n \nabla \phi - \bar{n} \nabla \bar{\phi})dxdt - \int_0^T \int_\Omega \tilde{\psi} \cdot (nv - \bar{n} \bar{v})dxdt- \varepsilon \int_0^T \int_\Omega \tilde{\psi} \cdot \bar{e}_2dxdt,
\end{split}
\end{equation*}
for all Lipschitz test functions $\varphi, \psi : [0,T[ \times \Omega \to \mathbb{R}$ and $\tilde{\varphi}, \tilde{\psi}:[0,T[ \times \Omega \to \mathbb{R}^d$ compactly supported in time, with $\tilde{\varphi}, \tilde{\psi}$ satisfying the no-flux condition on $\partial \Omega$. \par
Set
$$(\varphi, \ \tilde{\varphi}, \ \psi, \ \tilde{\psi}) =\big( \theta (- \varepsilon \tfrac{1}{2}|\bar{u}|^2+h_1^\prime(\bar{\rho})+ \bar{\phi} ), \ \theta \bar{u},\ \theta (- \varepsilon \tfrac{1}{2}|\bar{v}|^2+h_2^\prime(\bar{n})- \bar{\phi}), \ \theta \bar{v} \big), $$
where $\theta$ is as before.
In view of \eqref{boundarycondBDD}, this choice of $(\varphi, \tilde{\varphi}, \psi, \tilde{\psi})$ satisfies
$\tilde{\varphi}\cdot \nu = \tilde{\psi}\cdot \nu = 0$ on $[0,T[ \times \partial \Omega$ and can be used in the weak formulation.
Using that choice and letting $\kappa \to 0^+$ one obtains
\begin{equation} \label{weakdiff1}
\begin{split}
& \int_\Omega \big( (-\varepsilon \tfrac{1}{2}|\bar{u}|^2+h_1^\prime(\bar{\rho})+ \bar{\phi} )(\rho - \bar{\rho}) \big)\big|_{\tau = 0}^{\tau = t}dx -\int_0^t \int_\Omega \partial_\tau (- \varepsilon \tfrac{1}{2}|\bar{u}|^2+ h_1^\prime(\bar{\rho})+ \bar{\phi} )(\rho - \bar{\rho})dxd\tau \\
& - \int_0^t \int_\Omega \nabla (- \varepsilon \tfrac{1}{2}|\bar{u}|^2+h_1^\prime(\bar{\rho})+ \bar{\phi} ) \cdot (\rho u - \bar{\rho} \bar{u})dxd\tau
= 0,
\end{split}
\end{equation}
\begin{equation} \label{weakdiff2}
\begin{split}
\varepsilon & \int_\Omega \big( \bar{u} \cdot (\rho u - \bar{\rho} \bar{u}) \big) \big|_{\tau = 0}^{\tau = t}dx - \varepsilon \int_0^t \int_\Omega (\partial_\tau \bar{u}) \cdot (\rho u - \bar{\rho} \bar{u})dxd\tau \\
& - \varepsilon \int_0^t \int_\Omega \nabla \bar{u} : (\rho u \otimes u - \bar{\rho} \bar{u} \otimes \bar{u})dxd\tau- \int_0^t \int_\Omega (\nabla \cdot \bar{u})\big(p_1(\rho)-p_1(\bar{\rho})\big)dxd\tau \\
= & - \int_0^t \int_\Omega \bar{u} \cdot (\rho \nabla \phi - \bar{\rho} \nabla \bar{\phi})dxd\tau - \int_0^t \int_\Omega \bar{u} \cdot (\rho u - \bar{\rho} \bar{u})dxd\tau - \varepsilon \int_0^t \int_\Omega \bar{u} \cdot \bar{e}_1dxd\tau,
\end{split}
\end{equation}
\begin{equation} \label{weakdiff3}
\begin{split}
&\int_\Omega \big( (- \varepsilon \tfrac{1}{2}|\bar{v}|^2+ h_2^\prime(\bar{n})- \bar{\phi} )(n - \bar{n}) \big)\big|_{\tau = 0}^{\tau = t}dx -\int_0^t \int_\Omega \partial_\tau (- \varepsilon \tfrac{1}{2}|\bar{v}|^2+h_2^\prime(\bar{n})- \bar{\phi} )(n - \bar{n})dxd\tau \\
& - \int_0^t \int_\Omega \nabla (- \varepsilon \tfrac{1}{2}|\bar{v}|^2+h_2^\prime(\bar{n})- \bar{\phi} ) \cdot (nv - \bar{n} \bar{v})dxd\tau = 0,
\end{split}
\end{equation}
\begin{equation} \label{weakdiff4}
\begin{split}
\varepsilon &\int_\Omega \big( \bar{v} \cdot (nv - \bar{n} \bar{v})\big) \big|_{\tau = 0}^{\tau = t}dx - \varepsilon \int_0^t \int_\Omega (\partial_\tau \bar{v}) \cdot (nv - \bar{n} \bar{v})dxd\tau \\
& - \varepsilon \int_0^t \int_\Omega \nabla \bar{v} : (nv \otimes v - \bar{n} \bar{v} \otimes \bar{v})dxd\tau- \int_0^t \int_\Omega (\nabla \cdot \bar{v})\big(p_2(n)-p_2(\bar{n})\big)dxd\tau \\
= & \int_0^t \int_\Omega \bar{v} \cdot (n \nabla \phi - \bar{n} \nabla \bar{\phi})dxd\tau - \int_0^t \int_\Omega \bar{v} \cdot (nv - \bar{n} \bar{v})dxd\tau - \varepsilon \int_0^t \int_\Omega \bar{v} \cdot \bar{e}_2dxd\tau.
\end{split}
\end{equation}
From the computation $(\ref{REweakdiss}) - (\ref{REstrongdiss}) - \big((\ref{weakdiff1})+(\ref{weakdiff2})+(\ref{weakdiff3})+(\ref{weakdiff4})\big)$ it follows that
\begin{equation} \label{RE3}
\begin{split}
\int_\Omega & \big(\varepsilon\tfrac{1}{2} \rho |u - \bar{u}|^2 + \varepsilon \tfrac{1}{2} n |v - \bar{v}|^2 + h_1(\rho | \bar{\rho}) + h_2(n | \bar{n}) + \tfrac{1}{2}|\nabla(\phi - \bar{\phi})|^2\big) \big|_{\tau = 0}^{\tau = t}dx \\
\leq & - \int_0^t \int_\Omega \rho |u|^2- \bar{\rho} |\bar{u}|^2 - \bar{u} \cdot (\rho u - \bar{\rho} \bar{u}) dxd\tau \\
& - \int_0^t \int_\Omega n |v|^2- \bar{n} |\bar{v}|^2 - \bar{v} \cdot (n v - \bar{n} \bar{v}) dxd\tau \\
& - \int_0^t \int_\Omega \partial_\tau (-\varepsilon \tfrac{1}{2}|\bar{u}|^2 + h_1^\prime(\bar{\rho}) + \bar{\phi} )(\rho - \bar{\rho})dxd\tau \\
& - \int_0^t \int_\Omega \partial_\tau (-\varepsilon \tfrac{1}{2}|\bar{v}|^2+h_2^\prime(\bar{n}) -\bar{\phi} )(n - \bar{n})dxd\tau \\
& -\varepsilon \int_0^t \int_\Omega (\partial_\tau \bar{u}) \cdot (\rho u - \bar{\rho} \bar{u})dxd\tau -\varepsilon \int_0^t \int_\Omega (\partial_\tau \bar{v}) \cdot (nv - \bar{n} \bar{v})dxd\tau \\
& - \int_0^t \int_\Omega \nabla (-\varepsilon \tfrac{1}{2}|\bar{u}|^2+h_1^\prime(\bar{\rho}) +\bar{\phi} ) \cdot (\rho u - \bar{\rho} \bar{u})dxd\tau \\
& - \int_0^t \int_\Omega \nabla (-\varepsilon \tfrac{1}{2}|\bar{v}|^2+h_2^\prime(\bar{n}) -\bar{\phi} ) \cdot (nv - \bar{n} \bar{v})dxd\tau \\
& - \varepsilon \int_0^t \int_\Omega \nabla \bar{u} : (\rho u \otimes u - \bar{\rho} \bar{u} \otimes \bar{u})dxd\tau- \varepsilon \int_0^t \int_\Omega \nabla \bar{v} : (n v \otimes v - \bar{n} \bar{v} \otimes \bar{v})dxd\tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{u})\big(p_1(\rho)-p_1(\bar{\rho})\big)dxd\tau - \int_0^t \int_\Omega (\nabla \cdot \bar{v})\big(p_2(n)-p_2(\bar{n})\big)dxd\tau
\\
& + \int_0^t \int_\Omega \bar{u} \cdot (\rho \nabla \phi - \bar{\rho} \nabla \bar{\phi})dxd\tau - \int_0^t \int_\Omega \bar{v} \cdot (n \nabla \phi - \bar{n} \nabla \bar{\phi})dxd\tau.
\end{split}
\end{equation} \par
The strong, bounded away from vacuum solution $(\bar{\rho},\bar{\rho} \bar{u},\bar{n},\bar{n} \bar{v}),$ with $ \bar \phi = N *(\bar \rho - \bar n),$ satisfies the following system
\begin{equation} \label{uvbarsystem}
\begin{dcases}
\varepsilon \big( \bar{u}_t+\bar{u} \cdot \nabla \bar{u} \big) = -\nabla \big(h_1^\prime(\bar{\rho})+\bar{\phi}\big)- \bar{u} + \varepsilon \frac{\bar{e}_1}{\bar{\rho}}
\\
\varepsilon \big( \bar{v}_t+\bar{v} \cdot \nabla \bar{v} \big)= - \nabla \big(h_2^\prime(\bar{n})-\bar{\phi}\big)- \bar{v} + \varepsilon \frac{\bar{e}_2}{\bar{n}}.
\end{dcases}
\end{equation}
Multiplying the first and second equations above by $\rho (u - \bar{u})$ and $n (v - \bar{v})$, respectively, yields:
\begin{equation} \label{RE4}
\begin{dcases}
\begin{split}
\varepsilon &\big(- \tfrac{1}{2} |\bar{u}|^2\big)_t(\rho - \bar{\rho})+\varepsilon \bar{u}_t \cdot (\rho u - \bar{\rho} \bar{u}) + \varepsilon \nabla \big(- \tfrac{1}{2} |\bar{u}|^2\big) \cdot (\rho u - \bar{\rho} \bar{u}) \\
& + \varepsilon \nabla \bar{u} : (\rho u \otimes u-\bar{\rho} \bar{u} \otimes \bar{u}) \\
= & -\rho\nabla h_1^\prime(\bar{\rho}) \cdot (u-\bar{u})-\rho \nabla \bar{\phi} \cdot (u-\bar{u})-\rho \bar{u} \cdot (u - \bar{u})\\
& + \varepsilon \rho \nabla \bar{u} : (u - \bar{u}) \otimes (u - \bar{u})+ \varepsilon \frac{\rho}{\bar{\rho}}\bar{e}_1 \cdot (u- \bar{u})
\end{split}
\\ \\
\begin{split}
\varepsilon &\big(- \tfrac{1}{2} |\bar{v}|^2\big)_t(n - \bar{n})+ \varepsilon \bar{v}_t \cdot (n v - \bar{n} \bar{v}) + \varepsilon \nabla\big(- \tfrac{1}{2} |\bar{v}|^2\big) \cdot (n v - \bar{n} \bar{v})\\
& + \varepsilon \nabla \bar{v} : (n v \otimes v-\bar{n} \bar{v} \otimes \bar{v}) \\
= & -n\nabla h_2^\prime(\bar{n}) \cdot (v-\bar{v})+n \nabla \bar{\phi} \cdot (v-\bar{v})-n \bar{v} \cdot (v - \bar{v})\\
& + \varepsilon n \nabla \bar{v} : (v - \bar{v}) \otimes (v - \bar{v})+\varepsilon \frac{n}{\bar{n}}\bar{e}_2 \cdot (v- \bar{v}).
\end{split}
\end{dcases}
\end{equation}
Substituting (\ref{RE4}) into (\ref{RE3}) yields
\begin{equation} \label{RE5}
\begin{split}
\int_\Omega & \big( \varepsilon \tfrac{1}{2} \rho |u - \bar{u}|^2+\varepsilon \tfrac{1}{2} n |v - \bar{v}|^2 + h_1(\rho | \bar{\rho}) + h_2(n | \bar{n}) + \tfrac{1}{2} |\nabla (\phi - \bar{\phi})|^2 \big) \big|_{\tau=0}^{\tau=t}dx \\
\leq & - \int_0^t \int_\Omega \rho |u|^2- \bar{\rho} |\bar{u}|^2 - \bar{u} \cdot (\rho u - \bar{\rho} \bar{u}) - \rho \bar{u} \cdot (u - \bar{u})dxd\tau \\
& - \int_0^t \int_\Omega n |v|^2- \bar{n} |\bar{v}|^2 - \bar{v} \cdot (n v - \bar{n} \bar{v})- n \bar{v} \cdot (v - \bar{v}) dxd\tau \\
& -\varepsilon \int_0^t \int_\Omega \nabla \bar{u} : \rho (u-\bar{u}) \otimes (u - \bar{u}) + \nabla \bar{v} : n (v-\bar{v}) \otimes (v - \bar{v})dxd\tau \\
& - \int_0^t \int_\Omega \partial_\tau \big(h_1^\prime(\bar{\rho}) \big)(\rho - \bar{\rho}) + \nabla h_1^\prime(\bar{\rho}) \cdot (\rho u -\bar{\rho} \bar{u}) dxd\tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) \big(p_1(\rho) - p_1(\bar{\rho}) \big) - \nabla h_1^\prime(\bar{\rho}) \cdot (\rho u -\rho \bar{u}) dxd\tau \\
& - \int_0^t \int_\Omega \partial_\tau \big(h_2^\prime(\bar{n}) \big)(n - \bar{n}) + \nabla h_2^\prime(\bar{n}) \cdot (n v -\bar{n} \bar{v}) dxd\tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{v}) \big(p_2(n) - p_2(\bar{n}) \big) - \nabla h_2^\prime(\bar{n}) \cdot (n v -n \bar{v}) dxd\tau \\
& + \int_0^t \int_\Omega - (\partial_\tau \bar{\phi})(\rho - \bar{\rho}) - \nabla \bar{\phi} \cdot (\rho u - \bar{\rho} \bar{u}) dx d\tau \\
& + \int_0^t \int_\Omega \bar{u} \cdot (\rho \nabla \phi - \bar{\rho} \nabla \bar{\phi}) + \rho \nabla \bar{\phi} \cdot (u - \bar{u}) dx d\tau \\
& - \int_0^t \int_\Omega - (\partial_\tau \bar{\phi})(n - \bar{n}) - \nabla \bar{\phi} \cdot (n v - \bar{n} \bar{v}) dx d\tau \\
& - \int_0^t \int_\Omega \bar{v} \cdot (n \nabla \phi - \bar{n} \nabla \bar{\phi}) + n \nabla \bar{\phi} \cdot (v - \bar{v}) dx d\tau \\
& - \varepsilon \int_0^t \int_\Omega \frac{\rho}{\bar{\rho}} \bar{e}_1 \cdot (u - \bar{u}) + \frac{n}{\bar{n}} \bar{e}_2 \cdot (v - \bar{v}) dx d\tau.
\end{split}
\end{equation}
A simple calculation provides
\begin{equation} \label{RE6}
- \int_0^t \int_\Omega \rho |u|^2- \bar{\rho} |\bar{u}|^2 - \bar{u} \cdot (\rho u - \bar{\rho} \bar{u}) - \rho \bar{u} \cdot (u - \bar{u})dxd\tau = - \int_0^t \int_\Omega \rho |u - \bar{u}|^2 dxd \tau,
\end{equation}
\begin{equation} \label{RE7}
- \int_0^t \int_\Omega n |v|^2- \bar{n} |\bar{v}|^2 - \bar{v} \cdot (n v - \bar{n} \bar{v})- n \bar{v} \cdot (v - \bar{v}) dxd\tau = - \int_0^t \int_\Omega n |v - \bar{v}|^2 dxd \tau.
\end{equation}
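Indeed, identity (\ref{RE6}) follows from a pointwise expansion of the integrand, and the same computation with $(n,v,\bar{n},\bar{v})$ in place of $(\rho,u,\bar{\rho},\bar{u})$ gives (\ref{RE7}):
\begin{equation*}
\rho |u|^2- \bar{\rho} |\bar{u}|^2 - \bar{u} \cdot (\rho u - \bar{\rho} \bar{u}) - \rho \bar{u} \cdot (u - \bar{u}) = \rho |u|^2 - 2 \rho \, \bar{u} \cdot u + \rho |\bar{u}|^2 = \rho |u - \bar{u}|^2.
\end{equation*}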
Additionally, since $\bar{\rho}_t+\nabla \cdot (\bar{\rho} \bar{u}) = 0$ and $\bar{n}_t+\nabla \cdot (\bar{n} \bar{v}) = 0$ one derives
\begin{equation} \label{RE8}
\begin{split}
- & \int_0^t \int_\Omega \partial_\tau \big(h_1^\prime(\bar{\rho}) \big)(\rho - \bar{\rho}) + \nabla h_1^\prime(\bar{\rho}) \cdot (\rho u -\bar{\rho} \bar{u}) dxd\tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) \big(p_1(\rho) - p_1(\bar{\rho}) \big) - \nabla h_1^\prime(\bar{\rho}) \cdot (\rho u -\rho \bar{u}) dxd\tau\\
= & - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) p_1(\rho | \bar{\rho})dx d\tau,
\end{split}
\end{equation}
\begin{equation} \label{RE9}
\begin{split}
- & \int_0^t \int_\Omega \partial_\tau \big( h_2^\prime(\bar{n}) \big) (n - \bar{n}) + \nabla h_2^\prime(\bar{n}) \cdot (n v -\bar{n} \bar{v}) dxd\tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{v}) \big(p_2(n) - p_2(\bar{n}) \big) - \nabla h_2^\prime(\bar{n}) \cdot (n v - n \bar{v}) dxd\tau \\
= & - \int_0^t \int_\Omega (\nabla \cdot \bar{v}) p_2(n | \bar{n})dx d\tau.
\end{split}
\end{equation}
Moreover, the second equality of identity (\ref{intparts}) and the no-flux boundary conditions (\ref{nofluxrhou}) imply that
\begin{equation} \label{RE10}
\begin{split}
& \int_0^t \int_\Omega - (\partial_\tau \bar{\phi})(\rho - \bar{\rho}) - \nabla \bar{\phi} \cdot (\rho u - \bar{\rho} \bar{u}) dx d\tau \\
& + \int_0^t \int_\Omega \bar{u} \cdot (\rho \nabla \phi - \bar{\rho} \nabla \bar{\phi}) + \rho \nabla \bar{\phi} \cdot (u - \bar{u}) dx d\tau \\
& - \int_0^t \int_\Omega - (\partial_\tau \bar{\phi})(n - \bar{n}) - \nabla \bar{\phi} \cdot (n v - \bar{n} \bar{v}) dx d\tau \\
& - \int_0^t \int_\Omega \bar{v} \cdot (n \nabla \phi - \bar{n} \nabla \bar{\phi}) + n \nabla \bar{\phi} \cdot (v - \bar{v}) dx d\tau \\
= & \int_0^t \int_\Omega - (\rho - \bar{\rho} - n + \bar{n})(\partial_\tau \bar{\phi}) + \nabla(\phi - \bar{\phi}) \cdot (\rho \bar{u} - n \bar{v}) dx d\tau \\
= & \int_0^t \int_\Omega \big((\rho - \bar{\rho})\bar{u} - (n - \bar{n})\bar{v} \big) \cdot \nabla (\phi - \bar{\phi}) dx d\tau.
\end{split}
\end{equation}
Finally, replacing (\ref{RE6}), (\ref{RE7}), (\ref{RE8}), (\ref{RE9}) and (\ref{RE10}) in (\ref{RE5}) yields
\begin{equation*}
\begin{split}
\int_\Omega & \big( \varepsilon \tfrac{1}{2} \rho |u - \bar{u}|^2+\varepsilon \tfrac{1}{2} n |v - \bar{v}|^2 + h_1(\rho | \bar{\rho}) + h_2(n | \bar{n}) + \tfrac{1}{2} |\nabla (\phi - \bar{\phi})|^2 \big) \big|_{\tau=0}^{\tau=t}dx \\
\leq & - \int_0^t \int_\Omega \rho |u - \bar{u}|^2 + n |v - \bar{v}|^2 dx d\tau \\
& - \varepsilon \int_0^t \int_\Omega \nabla \bar{u}: \rho (u - \bar{u}) \otimes (u - \bar{u})+\nabla \bar{v}: n (v - \bar{v}) \otimes (v - \bar{v})dxd \tau \\
& - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) p_1(\rho | \bar{\rho}) + (\nabla \cdot \bar{v}) p_2(n | \bar{n}) dx d\tau \\
& + \int_0^t \int_\Omega \big((\rho - \bar{\rho})\bar{u}-(n - \bar{n})\bar{v}\big)\cdot \nabla (\phi - \bar{\phi}) dx d\tau \\
& - \varepsilon \int_0^t \int_\Omega \frac{\rho}{\bar{\rho}} \bar{e}_1 \cdot (u - \bar{u}) + \frac{n}{\bar{n}} \bar{e}_2 \cdot (v - \bar{v}) dx d\tau,
\end{split}
\end{equation*}
which completes the proof.
\end{proof}
\subsection{Bounds in terms of the relative energy}
\begin{lemma} \label{lemmaJ1}
Under the conditions of Proposition \ref{relativeentropy},
$$\mathcal{J}_1(t) \leq C \int_0^t \Psi(\tau)d\tau, \ \ \ t \in [0,T[, $$
for some positive constant $C$.
\end{lemma}
\begin{proof}
Note that for $t \in [0,T[,$
\begin{equation*}
\begin{split}
\mathcal{J}_1(t) &= -\varepsilon \int_0^t \int_\Omega \nabla \bar{u}: \rho (u - \bar{u}) \otimes (u - \bar{u})+\nabla \bar{v}: n (v - \bar{v}) \otimes (v - \bar{v})dxd \tau \\
& \leq ( ||\nabla \bar{u} ||_{\infty} + ||\nabla \bar{v} ||_{\infty}) \int_0^t \int_\Omega \varepsilon \rho |u - \bar{u}|^2+\varepsilon n |v - \bar{v}|^2dxd\tau \\
& \leq C \int_0^t \Psi(\tau)d\tau.
\end{split}
\end{equation*}
\end{proof}
\begin{lemma}
Under the conditions of Proposition \ref{relativeentropy},
$$\mathcal{J}_2(t) \leq C \int_0^t \Psi(\tau)d\tau, \ \ \ t \in [0,T[, $$
for some positive constant $C$.
\end{lemma}
\begin{proof}
From conditions (\ref{pbound}) and \eqref{thermoconsistency} it follows that
\begin{equation*}
\begin{split}
p_i(r| \bar{r}) &= (r-\bar{r})^2 \int_0^1 \int_0^\tau p_i^{\prime \prime}\big(sr+(1-s)\bar{r}\big)dsd\tau \\
& \leq (r-\bar{r})^2 \hat{k}_i \int_0^1 \int_0^\tau h_i^{\prime \prime}\big(sr+(1-s)\bar{r}\big)dsd\tau \\
& \leq \hat{k}_i h_i(r | \bar{r}).
\end{split}
\end{equation*} \par
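The Taylor representation in the first line can be recovered by setting $g(\tau) = p_i\big(\bar{r}+\tau(r-\bar{r})\big)$ and integrating $g^{\prime \prime}$ twice, under the standard definition $p_i(r|\bar{r}) = p_i(r)-p_i(\bar{r})-p_i^\prime(\bar{r})(r-\bar{r})$:
\begin{equation*}
p_i(r|\bar{r}) = g(1)-g(0)-g^\prime(0) = \int_0^1 \int_0^\tau g^{\prime \prime}(s)\, ds\, d\tau, \qquad g^{\prime \prime}(s) = (r-\bar{r})^2\, p_i^{\prime \prime}\big(sr+(1-s)\bar{r}\big).
\end{equation*}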
Thus, for $t \in [0,T[,$
\begin{equation*}
\begin{split}
\mathcal{J}_2(t) &= - \int_0^t \int_\Omega (\nabla \cdot \bar{u}) p_1(\rho | \bar{\rho})+(\nabla \cdot \bar{v}) p_2(n | \bar{n})dxd \tau \\
& \leq ( ||\nabla \cdot \bar{u} ||_{\infty} + ||\nabla \cdot \bar{v} ||_{\infty})(\hat{k}_1+\hat{k}_2) \int_0^t \int_\Omega h_1(\rho | \bar{\rho})+h_2(n | \bar{n})dxd\tau \\
& \leq C \int_0^t \Psi(\tau)d\tau.
\end{split}
\end{equation*}
\end{proof}
\begin{lemma} \label{lemmaJ3}
Under the conditions of Proposition \ref{relativeentropy} and for $\gamma_1, \gamma_2 \geq 2-\frac{1}{d}$,
$$\mathcal{J}_3(t) \leq C \int_0^t \Psi(\tau)d\tau, \ \ \ t \in [0,T[, $$
for some positive constant $C$.
\end{lemma}
\begin{proof}
Let $\gamma = \min \{\gamma_1,\gamma_2 \}.$ The proof is divided into two cases: $\gamma \geq 2$ and $\gamma \in [2-\frac{1}{d},2[$.
\medskip
\noindent
\textit{Case $\gamma \geq 2$} :
Using the inequality $ab \leq \tfrac{1}{2}a^2 + \tfrac{1}{2}b^2$ and Lemma \ref{hlemma}, one derives
\begin{equation*}
\begin{split}
\mathcal{J}_3(t) &= \int_0^t \int_\Omega \big((\rho - \bar{\rho})\bar{u}-(n - \bar{n})\bar{v}\big)\cdot \nabla (\phi - \bar{\phi}) dx d\tau \\
& \leq ( ||\bar{u} ||_{\infty} + ||\bar{v} ||_{\infty})\int_0^t \int_\Omega |\rho - \bar{\rho}||\nabla (\phi-\bar{\phi})|+|n - \bar{n}||\nabla (\phi-\bar{\phi})| dxd\tau \\
& \leq C \int_0^t \int_\Omega \Big(|\rho - \bar{\rho}|^2+|n - \bar{n}|^2+|\nabla (\phi -\bar{\phi})|^2 \Big) dxd\tau \\
& \leq C \int_0^t \int_\Omega h_1(\rho| \bar{\rho})+h_2(n| \bar{n})+|\nabla (\phi-\bar{\phi})|^2dxd\tau \\
& \leq C \int_0^t \Psi(\tau)d\tau, \qquad t \in [0,T[.
\end{split}
\end{equation*}
\medskip
\noindent
\textit{Case $\gamma \in [2-\frac{1}{d}, 2[$} : Fix $t \in [0,T[$ and let $q= \frac{2}{3-\gamma}, \ q'= \frac{q}{q-1},$ and
$p=\frac{2d}{d(\gamma -1)+2},$ so that $q^\prime = \frac{dp}{d-p}$. Since $\gamma \in [2-\frac{1}{d},2[,$ one has $1<p \leq q < \gamma < 2$.
Set $J(t) := \int_\Omega\big( (\rho - \bar \rho) \bar u - (n - \bar n) \bar v \big) \cdot \nabla (\phi - \bar \phi) dx$ and note that
\begin{equation}\label{estim1}
\begin{split}
J(t) & \leq \Big | \int_\Omega \big((\rho - \bar{\rho})\bar{u}-(n - \bar{n})\bar{v}\big)\cdot \nabla (\phi - \bar{\phi}) dx \Big |
\\
& \leq ( ||\bar{u} ||_{\infty} + ||\bar{v} ||_{\infty}) \int_\Omega (|\rho - \bar{\rho}|+|n - \bar{n}|) |\nabla (\phi - \bar{\phi})| dx \\
& \leq C \Big( \int_\Omega(|\rho - \bar{\rho}|+|n - \bar{n}|)^q dx \Big)^{\frac{1}{q}} \Big( \int_\Omega |\nabla (\phi - \bar{\phi})|^{q'} dx \Big)^{\frac{1}{q'}}.
\end{split}
\end{equation}
Consider the Neumann problem
\begin{equation*}
\begin{cases}
- \Delta (\phi - \bar \phi ) = \rho - n - \bar \rho + \bar n & \mbox{in $\Omega$}
\\
\; \; \frac{ \partial}{\partial \nu} (\phi - \bar \phi) = 0 & \mbox{on $\partial \Omega$}.
\end{cases}
\end{equation*}
Let
$ f = \rho - n - \bar \rho + \bar n$ and $\varphi = \nabla (\phi -\bar{\phi}). $
Then $f \in L^\gamma(\Omega) \subseteq L^p(\Omega)$ and $\varphi = \nabla _x N * f.$ Define $\tilde{f}$ by
$$
\tilde{f} =
\begin{cases}
f, \ \text{in} \ \Omega
\\
0, \ \text{in} \ \mathbb{R}^d\setminus\Omega.
\end{cases}
$$
Clearly $\tilde{f} \in L^p(\mathbb{R}^d),$ and from the properties of the Neumann function one deduces that
$$|\varphi(x)| \leq C I_1(|\tilde{f}|)(x), \ \ x \in \Omega. $$ Thus, Proposition \ref{steinprop} with $\alpha = 1$ and $p = \frac{2d}{d(\gamma -1)+2}$ implies that
\begin{equation}\label{estim2}
\Big( \int_\Omega |\nabla (\phi - \bar{\phi})|^{q'} dx \Big)^{\frac{1}{q'}} = ||\varphi ||_{L^{\frac{dp}{d-p}}(\Omega)}
\leq C ||I_1(|\tilde{f}|) ||_{L^{\frac{dp}{d-p}}(\Omega)}
\leq C ||f ||_{L^p(\Omega)}.
\end{equation}
Furthermore, choosing $r > 0$ so that $\frac{1}{r} = \frac{1}{p} - \frac{1}{q}$, H\"older's inequality yields $$||f ||_{L^p(\Omega)} \leq |\Omega|^\frac{1}{r} ||f ||_{L^q(\Omega)}.$$
Combining \eqref{estim1} and \eqref{estim2} gives
\begin{equation}\label{estim3}
\begin{split}
J(t) & \leq C \Big( \int_\Omega(|\rho - \bar{\rho}|+|n - \bar{n}|)^q dx \Big)^{\frac{1}{q}} \Big( \int_\Omega |\rho - \bar{\rho}-n + \bar{n}|^q dx \Big)^{\frac{1}{q}} \\
& \leq C \Big( \int_\Omega(|\rho - \bar{\rho}|+|n - \bar{n}|)^q dx \Big)^{\frac{2}{q}} \\
& \leq C \Big( \int_\Omega |\rho - \bar{\rho}|^qdx \Big)^{\frac{2}{q}}+C \Big( \int_\Omega |n - \bar{n}|^qdx \Big)^{\frac{2}{q}}.
\end{split}
\end{equation}
Our next goal is to show
\begin{equation}\label{estim4}
\Big( \int_\Omega |\rho - \bar{\rho}|^qdx \Big)^{\frac{2}{q}} \leq C \int_\Omega h_1(\rho | \bar{\rho})dx.
\end{equation}
To this end, we split the domain into
$B(t)=\{x \in \Omega \ | \ 0 \leq \rho \leq R \}$ and $U(t) = \{x \in \Omega \ | \ \rho > R \},$ where $R > M_1 + 1$ is as in Lemma \ref{hlemma}.
First observe that $$\Big( \int_\Omega |\rho - \bar{\rho}|^q dx\Big)^{\frac{2}{q}} \leq C \Big( \int_{B(t)} |\rho - \bar{\rho}|^q dx\Big)^{\frac{2}{q}} + C \Big( \int_{U(t)} |\rho - \bar{\rho}|^q dx\Big)^{\frac{2}{q}}.$$
Since $q < 2,$ the inclusion $L^2(\Omega) \subseteq L^q(\Omega)$ holds, and together with Lemma \ref{hlemma} implies
$$\Big( \int_{B(t)} |\rho - \bar{\rho}|^q dx\Big)^{\frac{2}{q}} \leq C \int_{B(t)} |\rho - \bar{\rho}|^2 dx \leq C \int_{\Omega} h_1(\rho | \bar{\rho})dx.$$ \\
Moreover, since $\frac{1}{q}=\frac{\theta}{\gamma}+(1-\theta)$ with $ 2\theta= \gamma$ one has
\begin{equation*}
\begin{split}
\Big( \int_{U(t)} |\rho - \bar{\rho}|^q dx\Big)^{\frac{2}{q}} & = \Big( \int_{U(t)} |\rho - \bar{\rho}|^{(1-\theta) q} |\rho - \bar{\rho}|^{\theta q}dx \Big)^{\frac{2}{q}} \\
& \leq \Bigg( \Big( \int_{U(t)} |\rho - \bar{\rho}| dx \Big)^{(1-\theta)q} \Big( \int_{U(t)} |\rho - \bar{\rho}|^{\gamma}dx \Big)^{\frac{\theta q}{\gamma}}\Bigg)^{\frac{2}{q}} \\
& \leq (M+\bar M)^{2 - \gamma} \int_{U(t)} |\rho - \bar{\rho}|^\gamma dx.
\end{split}
\end{equation*}
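To track the exponents in the last step: the first factor carries exponent $(1-\theta)q \cdot \frac{2}{q} = 2(1-\theta) = 2-\gamma$ and the second carries $\frac{\theta q}{\gamma} \cdot \frac{2}{q} = \frac{2\theta}{\gamma} = 1$; the $L^1$ factor is then bounded by the total masses (with $M, \bar{M}$ as in the list of constants at the end of the section):
\begin{equation*}
\Big( \int_{U(t)} |\rho - \bar{\rho}| dx \Big)^{2-\gamma} \leq \Big( \int_\Omega \rho + \bar{\rho} \, dx \Big)^{2-\gamma} \leq (M+\bar{M})^{2-\gamma}.
\end{equation*}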
If $\gamma = \gamma_1$, then $$\int_{U(t)} |\rho - \bar{\rho}|^\gamma dx \leq C \int_\Omega h_1(\rho | \bar{\rho})dx $$ immediately follows from Lemma \ref{hlemma}. \\
If $\gamma = \gamma_2$, then $\gamma \leq \gamma_1$, and since $|\rho-\bar{\rho}| > 1$ in $U(t)$, again by Lemma \ref{hlemma} one obtains
$$
\int_{U(t)} |\rho - \bar{\rho}|^\gamma dx \leq \int_{U(t)} |\rho - \bar{\rho}|^{\gamma_1} dx \leq C \int_\Omega h_1(\rho | \bar{\rho})dx.
$$
Consequently, we obtain \eqref{estim4} as claimed.
In a similar fashion, one obtains
\begin{equation}\label{estim5}
\Big( \int_\Omega |n - \bar{n}|^qdx \Big)^{\frac{2}{q}} \leq C \int_\Omega h_2(n | \bar{n})dx.
\end{equation}
Then \eqref{estim3} in conjunction with \eqref{estim4}, \eqref{estim5} gives
$$
J(t) \leq C \int_\Omega h_1(\rho | \bar{\rho})+h_2(n|\bar{n})dx \leq C \Psi(t),
$$
and therefore
$$ \mathcal{J}_3(t) = \int_0^t J(\tau) d\tau \leq C \int_0^t \Psi(\tau) d\tau, $$
which completes the proof.
\end{proof}
\begin{lemma}\label{lemmaJ4}
Under the conditions of Proposition \ref{relativeentropy},
$$
\mathcal{J}_4(t) \leq \frac{1}{2} \int_0^t \int_\Omega \rho|u-\bar{u}|^2+n|v-\bar{v}|^2dxd \tau + C \varepsilon^2, \ \ \ t \in [0,T[,
$$
for some positive constant $C$.
\end{lemma}
\begin{proof}
The boundedness of $\bar{e}_1$ and $\bar{e}_2$, the conservation of mass and $\textbf{(H)}$ imply, for $t \in [0,T[$, that
\begin{equation*}
\begin{split}
\mathcal{J}_4(t) &= - \varepsilon \int_0^t \int_\Omega \frac{\rho}{\bar{\rho}} \bar{e}_1 \cdot (u - \bar{u}) + \frac{n}{\bar{n}} \bar{e}_2 \cdot (v - \bar{v}) dx d\tau \\
& \leq \frac{1}{2 } \int_0^t \int_\Omega \rho|u-\bar{u}|^2+n|v-\bar{v}|^2 dxd\tau+\frac{\varepsilon^2}{2}\int_0^t \int_\Omega \rho \bigg|\frac{\bar{e}_1}{\bar{\rho}}\bigg|^2+n \bigg|\frac{\bar{e}_2}{\bar{n}}\bigg|^2dxd\tau\\
& \leq \frac{1}{2 } \int_0^t \int_\Omega \rho|u-\bar{u}|^2+n|v-\bar{v}|^2 dxd\tau + C \varepsilon^2 t\\
\end{split}
\end{equation*}
for some positive constant $C$; since $t < T$, the factor $t$ can be absorbed into the constant, which gives the claimed bound.
\end{proof}
Combining (\ref{relativeentropyinequality}) with the bounds in Lemmas \ref{lemmaJ1}-\ref{lemmaJ4} gives
\begin{equation} \label{REafterbounds}
\Psi(t)+\frac{1}{2} \int_0^t \int_\Omega \rho|u-\bar{u}|^2+n|v-\bar{v}|^2dxd \tau \leq \Psi(0) + C \int_0^t \Psi(\tau)d\tau + C \varepsilon^2 t,
\ \ t \in [0,T[.
\end{equation}
Theorem \ref{mainresult} follows by the Gronwall inequality. The constant $C$ depends on $d$, $\Omega,$ $\gamma_1,$ $\gamma_2,$
$k_1,$ $k_2,$ $\hat{k}_1,$ $\hat{k}_2,$ $M$, $\bar M$,
$\delta_1,$ $\delta_2,$ $M_1,$ $M_2,$ $||\bar{u}||_\infty,$ $||\bar{v}||_\infty,$ $||\nabla \bar{u}||_\infty,$ $||\nabla \bar{v}||_\infty,$
$||\bar{e}_1||_\infty$ and $||\bar{e}_2||_\infty.$
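For concreteness, the Gronwall step applied to \eqref{REafterbounds}, after dropping the nonnegative velocity term on the left-hand side, gives an estimate of the form
\begin{equation*}
\Psi(t) \leq \big( \Psi(0) + C \varepsilon^2 t \big) e^{Ct} \leq \big( \Psi(0) + C \varepsilon^2 T \big) e^{CT}, \qquad t \in [0,T[.
\end{equation*}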
\section{Introduction}
\label{sec:Introduction}
In effective (super)gravity actions, higher-curvature terms appear as stringy and/or quantum corrections to the corresponding two-derivative actions ---see {\it e.g.,}\ \cite{Grisaru:1986px,Gross:1986iv,Gubser:1998nz}. In the AdS/CFT context \cite{Maldacena,Witten,Gubser}, the holographic duals of such modified actions are inequivalent to the ones defined by Einstein gravity ({\it e.g.,}\ the trace anomaly coefficients in four dimensions, $a$ and $c$, no longer coincide in general).
This extends beyond explicit top-down constructions and, in fact, particular higher-curvature models ---{\it e.g.,}\ with certain special properties which makes them more appealing--- can be used to probe interesting CFT physics \cite{Buchel:2009sk,Myers:2010jv,HoloECG,Camanho:2013pda,deBoer:2009gx}. In some cases, this approach has been used to unveil universal properties valid for completely general CFTs \cite{Myers:2010xs,Myers:2010tj,Kats:2007mq,Brigante:2007nu,Camanho:2010ru,Mezei:2014zla,Bueno1,Miao:2015dua,Bueno:2018yzo,Bueno:2020odt}.
An important entry in the holographic dictionary corresponds to entanglement entropy (EE), which for holographic theories dual to Einstein gravity (plus possible additional matter fields) can be computed using the Ryu-Takayanagi (RT) prescription \cite{Ryu:2006bv,Ryu:2006ef}. According to this, the EE for a region $A$ in the boundary CFT
is obtained as the area, divided by $4G$, of the bulk surface $\Gamma_A$ with the smallest area amongst all bulk surfaces homologous to $A$, {\it i.e.,}\
\begin{equation}\label{RTaka}
S_{\rm \scriptscriptstyle HEE}^{\rm E} (A) = \frac{\mathcal{A} (\Gamma_A)}{4G}\, ,
\end{equation}
where the ``E'' stands for Einstein gravity.
When the action includes higher-curvature terms, the area functional needs to be modified, similarly to the way the Bekenstein-Hawking black hole entropy formula \cite{Bekenstein:1973ur,Hawking:1974sw} is replaced by Wald's formula \cite{Wald:1993nt,Iyer:1994ys}. The naive modification in which \req{RTaka} is replaced by the corresponding Wald functional fails for entanglement entropy \cite{Hung:2011xb}, and additional terms involving extrinsic curvatures of the generalized bulk surface are required. A hint of this is the fact that for Lovelock gravities, the result obtained from Wald's entropy differs from the alternative Jacobson-Myers functional \cite{Jacobson:1993xs} by terms of that type, which generically vanish for Killing horizons, but not for holographic entangling surfaces. The right expression for the holographic entanglement entropy (HEE) functional in the case of quadratic gravities was obtained in \cite{Fursaev:2013fta}. Building on the generalized entropy methods of \cite{Lewkowycz:2013nqa}, a general formula (in principle) valid for theories involving arbitrary contractions of Riemann tensors and metrics was obtained in \cite{Dong:2013qoa,Camps:2013zua}. Schematically, it has the form
\begin{equation}\label{dongf}
S^{\mathcal{L}_E({\rm Riemann})}_{\rm \scriptscriptstyle HEE}(A)=S_{\rm Wald} + S_{\rm Anomaly}\, ,
\end{equation}
where, in addition to a Wald-like piece, there appears an extra ``anomaly'' term involving extrinsic curvatures of the generalized holographic surface. In adapted coordinates ---see subsection \ref{sec:Notation} below for our conventions--- these two terms read\footnote{Interesting additional developments and explorations include \cite{Bhattacharyya:2013gra,Bhattacharyya:2013jma,Bhattacharyya:2014yga,Chen:2013qma,Dong:2015zba,Harper:2018sdd,Huang:2015zua}.}\footnote{Generalizations of \req{dongf} to the case in which covariant derivatives of the Riemann appear in the action have also been presented \cite{Miao:2014nxa}.}
\begin{align}\label{waldano}
S_{\rm Wald}&= 2 \pi \int_{\Gamma_A} \mathrm{d}^{d-1} y \, \sqrt{h} \, \frac{\partial \mathcal{L}_E}{\partial R_{z \bar{z} z \bar{z}}}\, , \\ \label{anomaly} S_{\rm Anomaly}&= 2 \pi \int_{\Gamma_A} \mathrm{d}^{d-1} y \, \sqrt{h} \, \sum_{\alpha} \left( \frac{\partial^2 \mathcal{L}_E}{\partial R_{z i z j} \partial R_{\bar{z} k \bar{z} l}} \right)_\alpha \frac{8 K_{zij} K_{\bar{z} k l}}{(1+q_\alpha )}\, .
\end{align}
In principle, the generalized holographic surface $\Gamma_A$ should be obtained by extremizing the new functional \cite{Dong:2017xht}.
In the anomaly term, once the second derivative is performed, each of the Riemann tensor components appearing in the resulting expression has to be split into sums of pieces with different weights $q_{\alpha}$ according to some prescription. That prescription depends on the way the conical defect appearing near the entangling region in the replica trick approach is regulated. As observed and studied in \cite{Miao:2014nxa,Camps:2014voa,Miao:2015iba,Camps:2016gfs}, this procedure is non-unique, which leads to the so-called ``splitting problem''.\footnote{See subsection \ref{grsplit} for a more detailed summary of the discussion included in this paragraph and the following two.} While the choice of splittings does not affect $f(R)$, Lovelock or quadratic theories, it does play a crucial r\^ole for general theories involving $n\geq 3$ densities.\footnote{
The final form of the anomaly term, once a splitting procedure and the sum over $\alpha$ are performed, differs considerably from \req{anomaly}. In particular, for order-$n$ densities, it may contain terms involving up to $2(n-1)$ extrinsic curvatures. This is evident from our new expression in \req{NewFunctional:FinalForm3}.}
The right splittings could in principle be identified for each particular theory by imposing that the relevant bulk geometry satisfies the corresponding equations of motion. In doing so, one would be left with a functional ready to extremize, and the resulting on-shell evaluation would yield non-perturbative results for the HEE of the corresponding theory. Doing this in practice is a highly non-trivial task which has not been pursued for explicit higher-curvature theories so far. If one followed this approach, another relevant issue would arise.
For generic higher-curvature theories, the equations of motion implementing the extremization of the functional are not second-order in derivatives, so it is not completely clear how to deal with the associated boundary value problem in those cases.
A different approach, which we follow here, entails considering holographic entangling surfaces which extremize the RT functional (\ref{RTaka}) along with the splittings prescribed by Einstein gravity. By doing so, we avoid the boundary-value-problem issues associated to higher-order equations, and the results obtained are perturbatively valid at leading order in the higher-curvature couplings \cite{Camps:2016gfs}. Within this framework, we manage to get rid of the $\alpha$ sum in the anomaly piece (\ref{anomaly}) and obtain a general expression which can be compactly written as\footnote{See Section \ref{covid} for the covariant form.}
\begin{align} \label{NewFunctional:FinalForm3}
&S_{\rm Anomaly}=32\pi \int_{\Gamma_A} \mathrm{d}^{d-1} y \, \sqrt{h} \left[ \int_0^1 {\rm d}u \, u\, {\rm e}^{-F(u)} \left(\frac{\partial^2 \mathcal{L}_E}{\partial R_{z i z j} \partial R_{\bar{z} k \bar{z} l}} K_{zij} K_{\bar{z} k l} \right) \right] \, , \quad
\end{align}
where the operator appearing in the exponential takes the form\footnote{As explained later, there is a normal ordering prescription implicit in this expression which forces derivatives to act exclusively on the object in the parentheses ---see \req{NewFunctional:NormalOrdering} below.}
\begin{equation}
F(u)\equiv [(1-u^2)\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}+ (1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}]\, ,
\end{equation}
and where $\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}$ and $\mathcal{K}\indices{_{BI}} \hat{\partial}^{BI}$ are differential operators involving derivatives with respect to particular Riemann tensor components contracted with extrinsic and Riemann curvature components. They appear defined in \req{NewFunctional:OperatorA} and \req{NewFunctional:OperatorB} respectively. This new form of the functional becomes particularly simple for cubic and quartic theories ---see \req{AnomalyCubic:ExpandedExpression} and \req{AnomalyQuartic:ExpandedExpression} respectively--- and we use it to evaluate the explicit (covariant) HEE functionals for all cubic and quartic densities. The result for Lovelock theories is also rather suggestive ---see \req{OneComponent:LovelockExponential}. Using our new results, we are also able to show that densities constructed exclusively from Ricci curvatures have a vanishing anomaly term, similarly to the well-known case of $f(R)$ gravities. This also extends to densities involving a single Riemann tensor contracted with Ricci curvatures. More generally, we prove that an order-$n$ density involving $n_R$ Riemann curvatures and $n-n_R$ Ricci curvatures can produce HEE functionals containing at most $2(n_R-1)$ extrinsic curvatures. As an application of our results, we compute a variety of universal contributions to the EE coming from various symmetric regions in general dimensions for holographic theories dual to cubic gravities. Particularly interesting are the results for strips, for which no alternative interpretation of their coefficients exists beyond EE, and for corners, for which the functional form of the Einstein gravity function only starts to get modified at cubic order.
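A quick consistency check of \req{NewFunctional:FinalForm3} against the $\alpha$-sum in \req{anomaly} is the following heuristic computation (it considers only monomials built from weight-one pieces, as in the Lovelock case, and ignores the action of the derivatives on the inserted extrinsic curvatures): each weight-one factor pulled down from the exponential carries $(1-u^2)$, and the $u$-integral then reproduces the $1/(1+q_\alpha)$ weight, since $\int_0^1 2u(1-u^2)^q\,\mathrm{d}u = 1/(q+1)$.

```python
import sympy as sp

u = sp.symbols('u', nonnegative=True)

# int_0^1 2u (1 - u^2)^q du = 1/(q + 1): with the overall 32*pi prefactor,
# a monomial with q weight-one factors picks up 32*pi / (2*(q + 1)),
# i.e. exactly the 2*pi * 8/(1 + q_alpha) weight of the alpha-sum.
vals = [sp.integrate(2 * u * (1 - u**2)**q, (u, 0, 1)) for q in range(6)]
print(vals)   # [1, 1/2, 1/3, 1/4, 1/5, 1/6]
```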
The remainder of the paper goes as follows. In subsection \ref{sec:Notation} we introduce our conventions and some notation. In Section \ref{grsplit} we briefly review the construction that leads to the general form of the holographic entanglement entropy functional, the issue with the Riemann tensor splittings and the choice that allows us to obtain results perturbatively valid for general higher-curvature theories. In Section \ref{sec:Rewriting} we derive a new formula for the anomaly piece of the HEE functional valid for perturbative higher-curvature corrections to Einstein gravity. We show how the formula gets considerably simplified in the cases of cubic, quartic and Lovelock densities. We also illustrate how our formula should be used in concrete cases by performing a detailed example for a term coming from quintic densities, verifying the match with the $\alpha$-expansion method. In Section \ref{sec:explicit} we present the explicit form of the HEE functionals for general $f(R)$, Lovelock, quadratic, cubic, quartic, $\mathcal{L}({\rm Ricci})$ and $R_{\mu\nu\rho\sigma}T^{\mu\nu\rho\sigma}({\rm Ricci})$ densities in covariant form. We also prove here that the functionals corresponding to densities involving $n-n_R$ Ricci tensors contain at most $2(n_R-1)$ extrinsic curvatures.
In Section \ref{unite} we evaluate, for general quadratic and cubic theories, the universal entanglement entropy coefficients characterizing spheres and strips in general dimensions, cylinders in $d=4$ and $d=6$ and corners in $d=3$. For the latter, we show that the functional dependence on the opening angle of the corner gets modified by the introduction of cubic densities with respect to the Einstein gravity result. We perform some comparisons of the result with free fields calculations, strengthening previously observed universal properties of this function. We conclude in Section \ref{finalc} with some final comments and directions.
Appendix \ref{formuls} contains the proof of a couple of identities which we use in our derivation of the new functional in Section \ref{sec:Rewriting}.
\subsection{Notation and conventions}
\label{sec:Notation}
In the present paper we deal with various manifolds and metrics. Here we make some comments on our conventions and notation. We take indices in the $(d+1)$-dimensional bulk to be $\mu, \nu, \dots$, and the bulk metric is denoted by $g\indices{_{\mu \nu}}$. The entanglement entropy of a boundary region $A$ is computed as the integral of the entanglement functional on a spatial codimension-2 bulk surface homologous to $A$, which we call $\Gamma_A$. The induced metric on this surface is written as $h\indices{_{\mu \nu}}$, and we will often have to deal with its extrinsic curvature, $K\indices{^a_{\mu \nu}}$. This is defined in terms of two orthonormal vectors normal to the surface, $n\indices{_a^{\mu}}$, where indices $a, b, \dots$ take values 1 and 2:
\begin{equation} \label{Notation:extrinsic_curvature}
K\indices{^a_{\mu \nu}} \equiv h\indices*{^{\rho}_{\mu}} h\indices*{^{\sigma}_{\nu}} \nabla\indices{_{\rho}} n\indices{^a_{\sigma}} \, ,
\end{equation}
and we assume an arbitrary extension of $n\indices{^a_{\mu}}$ to a neighborhood of the surface which keeps them normalized. Notice also that we work in Euclidean signature, which means $g\indices{_{\mu \nu}} n\indices{_a^{\mu}} n\indices{_b^{\nu}} = \delta\indices{_{a b}}$, and we define $n\indices{^a_{\mu}} = \delta\indices{^{a b}} g\indices{_{\mu \nu}} n\indices{_b^{\nu}}$. In particular, the induced metric can be written as
\begin{equation} \label{Notation:inducedmetric}
h\indices{_{\mu \nu}} = g\indices{_{\mu \nu}} - n\indices{^a_{\mu}} n\indices{_{a \nu}}\, .
\end{equation}
We also introduce projectors
\begin{equation} \label{Notation:projectors}
t_i^{\mu}\equiv \frac{ \partial x^{\mu}}{\partial y^i}\, ,
\end{equation}
where indices $i, j, \dots$ denote the tangent directions to the surface. Tensors carrying these indices are always obtained by applying such projectors to the corresponding bulk tensors, {\it e.g.,}\
\begin{equation}
h_{ij} \equiv t_i^{\mu} t_j^{\nu} h_{\mu \nu}\, , \quad \quad K_{ij}^a\equiv t_i^{\mu} t_j^{\nu} K_{\mu\nu}^a \, .
\end{equation}
We also define the binormal to the surface and the normal projector, respectively, as
\begin{equation} \label{Covariant:BinormalNormalDefinitions}
\epsilon\indices{_{\mu \nu}} \equiv \epsilon\indices{_{a b}} n\indices{^a_{\mu}} n\indices{^b_{\nu}} ~, \qquad \perp\indices{_{\mu \nu}} \equiv \delta\indices{_{a b}} n\indices{^a_{\mu}} n\indices{^b_{\nu}} \, ,
\end{equation}
where $\epsilon\indices{_{a b}}$ is the two-dimensional Levi-Civita symbol. In particular, this means that when indices $a,b,\dots$ appear repeated in a tensorial structure the corresponding bulk tensor is contracted with the normal projector, namely
\begin{equation}
V^a_{\, \, \, a} \equiv V^{\mu\nu} \bot_{\mu \nu}\, .
\end{equation}
The binormal and the normal projector satisfy the useful relations
\begin{equation}\label{relss}
\epsilon_{\mu\nu} \epsilon_{\rho \sigma} =2 \bot_{\mu [\rho} \bot_{\nu | \sigma]} \, ,\quad \quad
g^{\mu \rho }\epsilon_{\mu\nu}\epsilon_{\rho\sigma}=\bot_{\nu \sigma}
\, , \quad \quad \epsilon_{\mu\nu}\epsilon^{\mu\nu}=2\, .
\end{equation}
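These relations can be verified numerically with a pair of random orthonormal normal vectors in flat Euclidean space (the ambient dimension below is an arbitrary choice, and the first relation is spelled out as $\epsilon_{\mu\nu}\epsilon_{\rho\sigma} = \perp_{\mu\rho}\perp_{\nu\sigma} - \perp_{\mu\sigma}\perp_{\nu\rho}$):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 5                                    # toy Euclidean bulk dimension

# Two random orthonormal normal vectors n^a_mu, a = 0, 1
Q, _ = np.linalg.qr(rng.normal(size=(D, 2)))
n = Q.T

eps2 = np.array([[0.0, 1.0], [-1.0, 0.0]])       # 2d Levi-Civita symbol
eps = np.einsum('ab,am,bn->mn', eps2, n, n)      # binormal eps_{mu nu}
perp = np.einsum('am,an->mn', n, n)              # normal projector

# eps_{mu nu} eps_{rho si} = perp_{mu rho} perp_{nu si} - perp_{mu si} perp_{nu rho}
lhs = np.einsum('mn,rs->mnrs', eps, eps)
rhs = np.einsum('mr,ns->mnrs', perp, perp) - np.einsum('ms,nr->mnrs', perp, perp)
chk1 = np.allclose(lhs, rhs)

# g^{mu rho} eps_{mu nu} eps_{rho si} = perp_{nu si}  (flat metric here)
chk2 = np.allclose(eps.T @ eps, perp)

# eps_{mu nu} eps^{mu nu} = 2
chk3 = np.isclose(np.einsum('mn,mn->', eps, eps), 2.0)
print(chk1, chk2, chk3)
```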
When performing generic computations of the entanglement functional we follow the conventions of \cite{Dong:2013qoa,Camps:2016gfs}. This means that we take a particular set of adapted coordinates for $\Gamma_A$ so that
\begin{equation} \label{Notation:adapted_metric}
\mathrm{d} s^2 = \mathrm{d} z \, \mathrm{d} \bar{z} + h\indices{_{i j}} \mathrm{d} y^i \, \mathrm{d} y^j \, ,
\end{equation}
where $z \equiv \rho e^{i \tau}$, $\bar{z} \equiv \rho e^{-i \tau}$ are complex coordinates orthogonal to the surface. In these coordinates, the off-diagonal components $g\indices{_{z \bar{z}}} = 1/2$ and $g\indices{^{z \bar{z}}} = 2$ are the only non-vanishing part of the normal metric to the surface.
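One can quickly confirm symbolically that $z=\rho e^{i\tau}$ turns $\mathrm{d}z\,\mathrm{d}\bar z$ into $\mathrm{d}\rho^2+\rho^2\mathrm{d}\tau^2$, and that the normal block of the metric has the quoted components (a minimal sketch):

```python
import sympy as sp

rho, tau = sp.symbols('rho tau', positive=True)
z = rho * sp.exp(sp.I * tau)
zb = rho * sp.exp(-sp.I * tau)

# Pull back ds^2 = dz dzbar to (rho, tau): should give drho^2 + rho^2 dtau^2
coords = (rho, tau)
dz = [sp.diff(z, x) for x in coords]
dzb = [sp.diff(zb, x) for x in coords]
g = sp.Matrix(2, 2, lambda a, b: sp.simplify(
    sp.Rational(1, 2) * (dz[a] * dzb[b] + dz[b] * dzb[a])))

# Normal block in (z, zbar) coordinates: g_{z zbar} = 1/2, inverse g^{z zbar} = 2
g_n = sp.Matrix([[0, sp.Rational(1, 2)], [sp.Rational(1, 2), 0]])
g_inv = g_n.inv()
print(g, g_inv)
```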
We take the cosmological constant to be negative throughout the paper, and write $-2\Lambda \equiv d(d-1)/L^2$, so that the action scale $L$ coincides with the AdS$_{d+1}$ radius, which we denote by $L_{\star}$, for Einstein gravity. For generic higher-curvature gravities, the equation which relates $L$ and $L_{\star}$ involves the corresponding higher-order couplings (it appears in \req{k00} below). Nevertheless, at leading order in the couplings ---which is the setup we consider here--- the two scales are equal to each other, $L=L_{\star}+\mathcal{O}(\alpha_i)$. We choose to present the results (mostly in Section \ref{unite}) in terms of the AdS radius $L_{\star}$.
\section{GR splittings for perturbative higher-curvature theories }\label{grsplit}
Let us start by considering the entanglement entropy for a region $A$ in a global state $\rho$ of some holographic CFT. This can be obtained as the $n \to 1$ limit of the R\'enyi entropies $S_n(A)$, which in turn can be obtained via the replica trick as
\begin{equation} \label{SplittingProblem:ReplicaTrick}
S_n(A) = - \frac{1}{n - 1} \log {\rm Tr} \left( \rho_A^n \right) = - \frac{1}{n - 1} \left( \log \mathcal{Z}_n - n \log \mathcal{Z}_1 \right) \, .
\end{equation}
In this expression, $n$ is a positive integer, $\rho_A$ is the reduced density matrix of region $A$, and $\mathcal{Z}_n$ is the partition function of the field theory in the $n$-fold cover.
In particular, $\mathcal{Z}_1$ is the partition function of the Euclidean manifold which, upon path integration, prepares the global state. In order to obtain the entanglement entropy $S_{\rm \scriptscriptstyle EE}(A)$ as the limit $n \to 1$ of the previous expression, an analytic continuation in $n$ is also needed.
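As a toy illustration of \req{SplittingProblem:ReplicaTrick}, for a two-level reduced density matrix with hypothetical eigenvalues $(p,1-p)$, the analytic continuation of $S_n$ indeed approaches the von Neumann entropy as $n \to 1$:

```python
import numpy as np

# Replica-trick definition for a toy reduced density matrix:
#   S_n = log(Tr rho_A^n) / (1 - n)  --->  -Tr(rho_A log rho_A)  as n -> 1.
p = 0.3
lam = np.array([p, 1.0 - p])               # eigenvalues of rho_A
S_vN = float(-np.sum(lam * np.log(lam)))   # von Neumann (entanglement) entropy

def renyi(n):
    # valid for real (non-integer) n: analytic continuation via eigenvalues
    return float(np.log(np.sum(lam ** n)) / (1.0 - n))

approx = renyi(1.0 + 1e-5)
print(S_vN, approx)
```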
Following the argument of \cite{Lewkowycz:2013nqa}, when the field theory has a gravity dual, in the saddle-point approximation it is possible to identify $\log \mathcal{Z}_n = - I_E[B_n]$, where $I_E[B_n]$ is the Euclidean action of the gravitational theory evaluated at the bulk solution $B_n$ which is dual to the $n$-fold cover. This boundary geometry has a $\mathbb{Z}_n$ symmetry which interchanges the $n$ copies and, if this is respected in the bulk, we can consider the quotient $\hat{B}_n = B_n / \mathbb{Z}_n$, which is regular everywhere except at the codimension-2 bulk surface $\mathcal{C}_n$ consisting of the fixed points of $\mathbb{Z}_n$. Furthermore, the replica symmetry also guarantees that
\begin{equation} \label{SplittingProblem:ReplicaAction}
I_E[B_n] = n I_E[\hat{B}_n] \, .
\end{equation}
We can now analytically continue this construction to non-integer $n$ and obtain the entanglement entropy as:
\begin{equation} \label{SplittingProblem:EntanglementFromAction}
S_{\rm \scriptscriptstyle HEE}(A) = \lim_{n \to 1} \frac{n}{n - 1} \left( I_E[\hat{B}_n] - I_E[B_1] \right) = \left. \partial_n I_E[\hat{B}_n] \right|_{n = 1} \, .
\end{equation}
Since $B_1$ is a solution to the bulk equations of motion, this variation away from $n = 1$ might seem to vanish. This is not the case because when we vary $n$ we change the opening angle of the conical defect at $\mathcal{C}_n$, and this region has to be excluded from the action integral, introducing a boundary where conditions change with $n$. Details of this procedure can be found {\it e.g.,}\ in \cite{Dong:2013qoa}. The relevant fact is that the computation of the entanglement entropy gets reduced to the evaluation of the on-shell Euclidean action of the gravitational theory near the conical defect $\mathcal{C}_n$. The opening angle of this defect is $2 \pi /n$, and after obtaining the contributions to the action we must take an $n$-derivative at $n = 1$.
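The last equality in \req{SplittingProblem:EntanglementFromAction} is just the definition of a derivative; with any smooth toy function standing in for $I_E[\hat{B}_n]$ (the choice below is arbitrary), it can be checked symbolically:

```python
import sympy as sp

n = sp.symbols('n', positive=True)

# Arbitrary smooth stand-in for the on-shell action I_E[B_n_hat]:
f = sp.log(n) + n**2 / (n + 1)

# lim_{n -> 1} n/(n-1) * (f(n) - f(1)) should equal f'(1)
lim = sp.limit(n / (n - 1) * (f - f.subs(n, 1)), n, 1)
deriv = sp.diff(f, n).subs(n, 1)
print(lim, deriv)
```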
In order to compute $S_{\rm \scriptscriptstyle HEE}(A)$ we need to evaluate the action of a given gravitational theory for a bulk geometry which regulates the conical singularity. This is a rather technical task, but there is a key point which was initially overlooked in \cite{Dong:2013qoa,Camps:2013zua}: there are many ways in which a conical defect can be regulated \cite{Miao:2014nxa,Camps:2014voa,Miao:2015iba,Camps:2016gfs}. Different prescriptions produce different functionals. This ambiguity is usually called the ``splitting problem''. The particular gravitational theory of interest should determine the correct one through its equations of motion \cite{Camps:2016gfs,Dong:2017xht}.
When interested in perturbative higher-curvature corrections to Einstein gravity, the appropriate splittings were obtained in \cite{Camps:2016gfs}. At first order in the higher-order couplings one can simply regulate using Einstein's equations. This is so because the particular regularization does not affect the Einstein gravity term in \eqref{SplittingProblem:EntanglementFromAction} (it always produces the usual area law), and the higher-curvature terms in the action are already first order in the couplings. As a consequence, corrections to the regulated geometry coming from modifications to the equations of motion contribute to the action only at second order.
All in all, the expression for the holographic entanglement entropy for a perturbative higher-curvature gravity with Euclidean action $\mathcal{L}_E(g_{\mu\nu},R_{\mu\nu\rho\sigma})$ is given by
\begin{equation} \label{SplittingProblem:GeneralFunctional}
S_{\rm \scriptscriptstyle HEE}(A) = 2 \pi \int_{\Gamma_A} \mathrm{d}^{d-1} y \, \sqrt{h} \, \left[ \frac{\partial \mathcal{L}_E}{\partial R_{z \bar{z} z \bar{z}}} + \sum_{\alpha} \left( \frac{\partial^2 \mathcal{L}_E}{\partial R_{z i z j} \partial R_{\bar{z} k \bar{z} l}} \right)_\alpha \frac{8 K_{zij} K_{\bar{z} k l}}{q_\alpha + 1} \right] \, ,
\end{equation}
where $\Gamma_A$ is just the RT surface and the prescription for the $\alpha$-sum is unambiguously determined ---see below.
The area term in the previous equation ---coming from the Einstein gravity part of the action--- is stationary for the RT surface, and therefore first order variations of the surface will not change its value. On the other hand, contributions of higher-order terms to the previous functional will already be first-order in the couplings, and thus insensitive to first-order modifications of the surface.
As we mentioned before, there are in principle different ways to regulate the conical singularity, which give rise to different prescriptions for the $\alpha$ sum.
On general grounds, the idea is the following. The second derivative of the Lagrangian will be a sum of terms which are monomials with different contractions of components of the Riemann tensor. These contractions are to be expanded in terms of their $z$ and $\bar{z}$ indices, obtaining an expression of the second derivative of the Lagrangian involving only $R\indices{_{z \bar{z} z \bar{z}}}$, $R\indices{_{z \bar{z} z i}}$, $R\indices{_{z \bar{z} i j}}$, $R\indices{_{z i \bar{z} j}}$, $R\indices{_{z i z j}}$, $R\indices{_{z i j k}}$, $R\indices{_{i j k l}}$, plus components related to these by complex conjugation of the indices.\footnote{Notice that components of the Ricci tensor and the Ricci scalar have to be expanded in terms of these basic objects as well. For instance, we would write
\begin{equation} \label{SplittingProblem:ExpansionRicci}
R\indices{_{z \bar{z}}} = g\indices{^{\mu \nu}} R\indices{_{z \mu \bar{z} \nu}} = - 2 R\indices{_{z \bar{z} z \bar{z}}} + g\indices{^{i j}} R\indices{_{z i \bar{z} j}} \, .
\end{equation}}
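The expansion \req{SplittingProblem:ExpansionRicci} only uses $g^{z\bar z}=2$ and the antisymmetry of the Riemann tensor in its first index pair; it can be checked numerically on a random tensor with the algebraic symmetries of Riemann (the number of tangent directions and the random tangent metric below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 2                                   # number of tangent directions
d = dt + 2                               # coords: 0 = z, 1 = zbar, then tangent

# Random tensor with the algebraic symmetries of the Riemann tensor
T = rng.normal(size=(d, d, d, d))
T = T - T.transpose(1, 0, 2, 3)          # antisymmetry in the first pair
T = T - T.transpose(0, 1, 3, 2)          # antisymmetry in the second pair
T = T + T.transpose(2, 3, 0, 1)          # symmetry under pair exchange

# Adapted metric: g_{z zbar} = 1/2 off-diagonal, random SPD tangent block h_{ij}
g = np.zeros((d, d))
g[0, 1] = g[1, 0] = 0.5
A = rng.normal(size=(dt, dt))
g[2:, 2:] = A @ A.T + dt * np.eye(dt)
ginv = np.linalg.inv(g)                  # in particular g^{z zbar} = 2

# R_{z zbar} = g^{mu nu} R_{z mu zbar nu}
R_zzb = np.einsum('mn,mn->', ginv, T[0, :, 1, :])
# Claimed expansion: -2 R_{z zbar z zbar} + g^{ij} R_{z i zbar j}
expansion = -2 * T[0, 1, 0, 1] + np.einsum('ij,ij->', ginv[2:, 2:], T[0, 2:, 1, 2:])
print(np.isclose(R_zzb, expansion))
```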
After this is done, each regularization of the conical defect will provide a ``splitting'': a rule to divide each of the previous components of the Riemann tensor schematically as
\begin{equation} \label{NotationRewriting:GeneralSplitting}
R\indices{_{MI}} = \tilde{R}\indices{_{MI}} + \mathcal{K}\indices{_{MI}} \, .
\end{equation}
In this expression, $M$ labels the different components of the Riemann tensor enumerated before, while $I$ is a generalized index containing all the $i, j, k, \dots$ indices of the particular component under consideration (which might be none). This expansion has to be performed in all the components of the Riemann tensor, and once this is done, each of the resulting monomials is labelled by $\alpha$. The splitting provides also a value $q_{\alpha}$ for each $\mathcal{K}\indices{_{MI}}$. In each term we have a definite value of $q_{\alpha}$, given by the sum of the values of all the $\mathcal{K}\indices{_{MI}}$ in that monomial. Expression \eqref{SplittingProblem:GeneralFunctional} instructs us then to divide each term by $q_{\alpha} + 1$. Once this is done, we can eliminate the $\tilde{R}\indices{_{MI}}$ (which are auxiliary objects in this construction whose particular geometrical meaning is irrelevant as far as the functional construction is concerned) in favor of the Riemann tensor components by using \eqref{NotationRewriting:GeneralSplitting} again.
The particular example of \eqref{NotationRewriting:GeneralSplitting} relevant for our purposes comes from the regularization of the conical defect imposed by Einstein's equations, which is valid for any theory containing perturbative corrections to Einstein gravity in the action. In such a case, the splittings take the form
\begin{align} \label{SplittingProblem:ExpansionsRiemann}
\nonumber R\indices{_{z \bar{z} z \bar{z}}} & = \tilde{R}\indices{_{z \bar{z} z \bar{z}}} - \frac{1}{8} K\indices{^{a i j}} K\indices{_{a i j}} \, , \\
\nonumber R\indices{_{z \bar{z} ij}} & = \tilde{R}\indices{_{z \bar{z} i j}} - 2 K\indices{_{z [i|}^k} K\indices{_{\bar{z} |j] k}} \, , \\
\nonumber R\indices{_{z i \bar{z} j}} & = \tilde{R}\indices{_{z i \bar{z} j}} - K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \, , \\
R\indices{_{ijkl}} & = \tilde{R}\indices{_{ijkl}} - 2 K\indices{_{a i [k}} K\indices{^a_{l] j}} \, ,
\end{align}
with the remaining components having a trivial splitting, {\it i.e.,}\ $\tilde{R}\indices{_{MI}} = 0$ for them. The values of $q_{\alpha}$ are: $q_{\alpha} = 1$ for any of the previous terms quadratic in extrinsic curvatures, $q_{\alpha} = 1$ for $R\indices{_{z i z j}}$ (and its complex conjugate), and $q_{\alpha} = 1/2$ for $R\indices{_{z i j k}}$ and $R\indices{_{z \bar{z} z i}}$ (and their complex conjugates).
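The division-and-resubstitution algorithm can be mimicked symbolically. The sketch below uses scalar placeholders (`R1t`, `K1`, etc., our own notation, with all indices suppressed) for two components whose $\mathcal{K}$ pieces carry weight one, and makes explicit how the $(1+q_\alpha)$-weighting generates terms with increasing powers of $K$:

```python
import sympy as sp

# Two Riemann components, each splitting as R = Rt + K, with each
# K piece carrying weight q = 1 (as for the components quadratic in
# extrinsic curvature in the Einstein splittings).
R1t, K1, R2t, K2 = sp.symbols('R1t K1 R2t K2')
R1, R2 = sp.symbols('R1 R2')

monomials = sp.expand((R1t + K1) * (R2t + K2)).as_ordered_terms()

# Divide each monomial by 1 + q_alpha, where q_alpha is its total K-weight
weighted = sum(t / (1 + sp.degree(t, K1) + sp.degree(t, K2)) for t in monomials)

# Eliminate the auxiliary Rt in favor of R using Rt = R - K
result = sp.expand(weighted.subs({R1t: R1 - K1, R2t: R2 - K2}))
print(result)   # R1*R2 - R1*K2/2 - K1*R2/2 + K1*K2/3
```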
All in all, this complicated procedure is nothing but a way to generate contributions to the holographic entanglement entropy functional containing higher and higher powers of the extrinsic curvature. One of the main results in this paper will consist in reinterpreting and rewriting this algorithm in a more transparent way, making manifest this generation of terms with an increasing number of powers of $K$.
\section{Rewriting the HEE functional}
\label{sec:Rewriting}
In this section we perform a rewriting of the holographic entanglement entropy functional for higher-curvature gravities. We manage to write it completely in terms of explicit contractions of extrinsic curvatures and derivatives with respect to Riemann tensors, getting rid of the weighted sum over $\alpha$ appearing in the anomaly piece. We do this for the Riemann tensor splittings corresponding to Einstein gravity, which allows us to produce a new general expression valid for arbitrary higher-curvature theories at leading order in the corresponding couplings.
The structure of the expression is particularly simple for densities up to quartic order in curvature, and we provide new explicit formulas for cubic and quartic theories. Applied to the case of Lovelock theories, our formula for the corresponding anomaly piece can be suggestively written in terms of an exponential of the derivative of the only component of the Riemann tensor which is relevant in that case, contracted with two extrinsic curvatures. We also perform a hopefully illustrative application of our formulas to a particular monomial coming from putative quintic densities showing how it agrees with the result obtained via the $\alpha$ sum.
\subsection{Symmetry factors in derivatives and some notation}
\label{subsec:SymmetryFactors}
Let us start by making a couple of comments regarding how to take derivatives with respect to Riemann tensor components and introducing some notation which we will be using throughout this section.
The issues discussed here arise due to the conventional definition of the derivative with respect to the Riemann tensor:
\begin{equation} \label{SymmetryFactors:RiemannDerivative}
\frac{\partial R\indices{_{\mu \nu \rho \sigma}}}{\partial R\indices{_{\alpha \beta \gamma \delta}}} \equiv \frac{1}{2} \left[ \delta\indices*{_{[\mu}^{\alpha}} \delta\indices*{_{\nu]}^{\beta}} \delta\indices*{_{[\rho}^{\gamma}} \delta\indices*{_{\sigma]}^{\delta}} + \delta\indices*{_{[\rho}^{\alpha}} \delta\indices*{_{\sigma]}^{\beta}} \delta\indices*{_{[\mu}^{\gamma}} \delta\indices*{_{\nu]}^{\delta}} \right] \, .
\end{equation}
This definition respects the symmetries of the Riemann tensor and, at the same time, it has the following nice (and expected) property, %
\begin{equation} \label{SymmetryFactors:TaylorFullRiemann}
R\indices{_{\alpha \beta \gamma \delta}} \frac{\partial R\indices{_{\mu \nu \rho \sigma}}}{\partial R\indices{_{\alpha \beta \gamma \delta}}} = R\indices{_{\mu \nu \rho \sigma}} \, ,
\end{equation}
which will be key when performing Taylor-like expansions of functions of the Riemann tensor.
Some care must be taken, however, when singling out specific components of the Riemann tensor.
For instance, using the previous definition one finds
\begin{equation}
\frac{\partial R\indices{_{z \bar{z} i j}}}{\partial R\indices{_{z \bar{z} k l}}} = \frac{1}{2} \left[ \delta\indices*{_{[z}^{z}} \delta\indices*{_{\bar{z}]}^{\bar{z}}} \delta\indices*{_{[i}^{k}} \delta\indices*{_{j]}^{l}} + \delta\indices*{_{[i}^{z}} \delta\indices*{_{j]}^{\bar{z}}} \delta\indices*{_{[z}^{k}} \delta\indices*{_{\bar{z}]}^{l}} \right] = \frac{1}{4} \delta\indices*{_{[i}^{k}} \delta\indices*{_{j]}^{l}} \, ,
\end{equation}
which leads to
\begin{equation}
R\indices{_{z \bar{z} k l}} \frac{\partial R\indices{_{z \bar{z} i j}}}{\partial R\indices{_{z \bar{z} k l}}} = \frac{1}{4} R\indices{_{z \bar{z} i j}} \, .
\end{equation}
The factor $1/4$ arises from the different positions in which we can put the $z$, $\bar{z}$ indices using the symmetries of the Riemann tensor, $R\indices{_{z \bar{z} k l}}$, $R\indices{_{\bar{z} z k l}}$, $R\indices{_{k l z \bar{z}}}$, and $R\indices{_{k l \bar{z} z}}$. Something analogous happens for the rest of the components of the Riemann tensor. Hence, whenever performing Taylor-like expansions in terms of such components we will need to take these extra factors into account. In order to do so, it will prove useful to define a new derivative operator, $\hat{\partial}$, which already includes them.
The definitions for the different components read
\begin{align} \label{SymmetryFactors:DerivativeFactorsDef}
\frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z \bar{z}}}} \equiv & 4 \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} \, , \quad \quad \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z i}}} \equiv 8 \frac{\partial}{\partial R\indices{_{z \bar{z} z i}}} \, , \quad \quad
\frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} i j}}} \equiv 4 \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} \, , \\ \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i z j}}} \equiv & 4 \frac{\partial}{\partial R\indices{_{z i z j}}} \, , \quad \quad
\frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i \bar{z} j}}} \equiv 8 \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} \, , \quad \quad \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i j k}}} \equiv 4 \frac{\partial}{\partial R\indices{_{z i j k}}} \, , \\
\frac{\hat{\partial}}{\hat{\partial} R\indices{_{i j k l}}} \equiv& \frac{\partial}{\partial R\indices{_{i j k l}}} \, .
\end{align}
The remaining ones can be obtained by complex conjugation.
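These symmetry factors can be checked by implementing the derivative \req{SymmetryFactors:RiemannDerivative} literally and contracting it with a random tensor carrying the algebraic symmetries of Riemann. A small numpy sketch (the toy dimension and the labeling of coordinates are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4   # toy bulk dimension: 0 = z, 1 = zbar, 2 and 3 tangent

# Random tensor with the algebraic symmetries of the Riemann tensor
T = rng.normal(size=(d, d, d, d))
T = T - T.transpose(1, 0, 2, 3)   # antisymmetry in the first pair
T = T - T.transpose(0, 1, 3, 2)   # antisymmetry in the second pair
T = T + T.transpose(2, 3, 0, 1)   # symmetry under pair exchange

delta = np.eye(d)

def anti(m, n):
    # delta_[m^al delta_n]^be as a (d, d) array over (al, be)
    return 0.5 * (np.outer(delta[m], delta[n]) - np.outer(delta[n], delta[m]))

def dR(mu, nu, ro, si):
    # dR_{mu nu ro si} / dR_{al be ga de}, free indices (al, be, ga, de)
    t1 = np.einsum('ab,cd->abcd', anti(mu, nu), anti(ro, si))
    t2 = np.einsum('ab,cd->abcd', anti(ro, si), anti(mu, nu))
    return 0.5 * (t1 + t2)

# Full contraction reproduces the tensor component (Taylor property)
full = np.einsum('abcd,abcd->', T, dR(0, 2, 1, 3))

# Restricting to tangent k, l in R_{z zbar k l}: the factor 1/4 appears
partial = sum(T[0, 1, k, l] * dR(0, 1, 2, 3)[0, 1, k, l]
              for k in (2, 3) for l in (2, 3))
print(np.isclose(full, T[0, 2, 1, 3]), np.isclose(partial, 0.25 * T[0, 1, 2, 3]))
```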
Below we will manipulate expressions involving multiple derivatives with respect to all these components of the Riemann tensor. In order to do that, it is convenient to introduce some notation which allows us to represent them in a compact form. Firstly, let us define
upper-case Latin indices $I, J, \dots$ to collect all $i, j, k, \dots$ indices that might appear in a given tensor.
Similarly, we introduce $M, N, \dots$ indices to represent the different Riemann tensor components involving $z$ and $\bar{z}$ indices. In practice, we just want this notation to perform Taylor expansions, for which the relevant thing to keep in mind is the following compact definition
\begin{align} \label{NotationRewriting:FullTaylorOperator}
\nonumber R\indices{_{MI}} \hat{\partial}^{MI} \equiv & +R\indices{_{z \bar{z} z \bar{z}}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z \bar{z}}}} + R\indices{_{z \bar{z} i j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} i j}}} + R\indices{_{z i \bar{z} j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i \bar{z} j}}} + R\indices{_{i j k l}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{i j k l}}} \\
& + \left[ R\indices{_{z \bar{z} z i}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z i}}} + R\indices{_{z i z j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i z j}}} + R\indices{_{z i j k}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i j k}}} + {\rm c.c.} \right] \, ,
\end{align}
where c.c. stands for the complex conjugates of the terms inside the square brackets (which are the only ones that have a different number of $z$ and $\bar{z}$ indices). This can be thought of as a sum over $M$ (the $z$ and $\bar{z}$ indices) and then, for each $M$, an extra sum over tangent indices $I$. Note that for $R\indices{_{z \bar{z} z \bar{z}}}$ the second sum does not exist, and in that case $I$ represents an empty set of tangent indices.
As we explained in the previous section, different components of the Riemann tensor have different splitting structures. In general, any component splits as in \req{NotationRewriting:GeneralSplitting}
where $\tilde{R}\indices{_{MI}}$ has $q_{\alpha} = 0$ and $\mathcal{K}\indices{_{MI}}$ has $q_{\alpha} \neq 0$. The $q_{\alpha}$ for the $\mathcal{K}\indices{_{MI}}$ piece can take two values. Components $R_{z \bar{z} z \bar{z}}, R_{z \bar{z} i j}, R_{z i \bar{z} j}, R_{z i z j}, R_{\bar{z} i \bar{z} j}$, and $R_{i j k l}$ have $q_{\alpha} = 1$ for that part, and we will generically refer to them with labels $A, A', \dots$ On the other hand, components $R\indices{_{z i j k}}, R\indices{_{\bar{z} i j k}}, R\indices{_{z \bar{z} z i}}$, and $R\indices{_{\bar{z} z \bar{z} i}}$ have $q_{\alpha} = 1/2$ for the $\mathcal{K}\indices{_{MI}}$ part and we will refer to them with labels $B, B', \dots$. In terms of these, the operator \eqref{NotationRewriting:FullTaylorOperator} splits into two contributions:
\begin{align} \label{NotationRewriting:OperatorsAB}
R\indices{_{AI}} \hat{\partial}^{AI} \equiv & + R\indices{_{z \bar{z} z \bar{z}}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z \bar{z}}}} + R\indices{_{z \bar{z} i j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} i j}}} + R\indices{_{z i \bar{z} j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i \bar{z} j}}} + R\indices{_{i j k l}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{i j k l}}}\\ \notag & + \left[ R\indices{_{z i z j}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i z j}}} + {\rm c.c.} \right] ~ , \\
R\indices{_{BI}} \hat{\partial}^{BI} \equiv & \left[ R\indices{_{z i j k}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z i j k}}} + R\indices{_{z \bar{z} z i}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{z \bar{z} z i}}} + {\rm c.c.} \right] \, .
\end{align}
\subsection{New form of the HEE functional}
\label{subsec:NewFunctional}
Equipped with this notation, we are ready to start rewriting the anomaly piece in the holographic entanglement entropy functional.
The $\alpha$ expansion appearing in that term is performed on the following object, for which we define the shorthand notation
\begin{equation} \label{NewFunctional:ShorthandSecondDerivative}
\left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \equiv 8 \frac{\partial^2 \mathcal{L}_E}{\partial R\indices{_{z i z j}} \partial R\indices{_{\bar{z} k \bar{z} l}}} K\indices{_{z i j}} K\indices{_{\bar{z} k l}} \, .
\end{equation}
This object is a complicated expression involving the different Riemann tensor components. Once we have it for a given theory, we have to apply a splitting of the form \eqref{NotationRewriting:GeneralSplitting} to each component, determine the $q_{\alpha}$ value of each resulting monomial, and divide that monomial by $(1 + q_{\alpha})$.
In order to understand the steps we will follow,
it is illustrative to consider first a simplified version of the problem. Suppose we have some function $f(x)$ and we want to substitute $x = \tilde{x} + k$ in a way such that we explicitly isolate monomials depending on the number of $k$ factors they have. A simple way to do this is to Taylor-expand $f(\tilde x+k)$ around $x=0$, namely,
\begin{equation} \label{NewFunctional:ExpansionFunction0}
f(\tilde{x} + k) = \sum_{n = 0}^{\infty} \frac{1}{n!} (\tilde{x} + k)^n f^{(n)}(0) \, ,
\end{equation}
and then apply the binomial theorem to $(\tilde{x} + k)^n$ to isolate terms with a definite number of $k$ factors. Notice that, if we wish to avoid evaluating derivatives at $0$, we can instead Taylor-expand the derivative around a general point $x$,
\begin{equation} \label{NewFunctional:ExpansionFunctionDerivative}
f^{(n)}(0) = \sum_{m = 0}^{\infty} \frac{1}{m!} (0 - x)^m f^{(n+m)}(x) = \sum_{m = 0}^{\infty} \frac{1}{m!} (- x)^m f^{(n+m)}(x) \, .
\end{equation}
Observe that, despite its appearance, this expression does not really depend on $x$.
Putting the pieces together, we see that counting the number of $k$'s in each monomial appearing in $f(\tilde{x} + k)$ amounts to expanding the binomial $(\tilde{x} + k)^n$ in the expression
\begin{equation} \label{NewFunctional:ExpansionFunction}
f(\tilde{x} + k) = \sum_{n,m = 0}^{\infty} \frac{1}{n!m!} (\tilde{x} + k)^n (-x)^m f^{(n+m)}(x) \, ,
\end{equation}
where we emphasize that the $x$'s in the right-hand side are not to be substituted by $\tilde{x}+k$. In the above expression we can
pair each of the $n+m$ derivatives with the factors $(\tilde{x} + k)$ and $x$ provided we introduce some ordering convention. The idea is to impose that derivatives only act on $f(x)$, and not on explicit $x$ factors,
\begin{equation} \label{NewFunctional:ExpansionFunctionOrdered}
f(\tilde{x} + k) = \left[ : \sum_{n,m = 0}^{\infty} \frac{1}{n!m!} \left[(\tilde{x} + k) \partial_x \right]^n (-x \partial_x)^m : \right] f (x) ~ .
\end{equation}
This notation will turn out to be convenient when dealing with the analogous expressions involving Riemann tensor components.
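As a quick sanity check (not part of the derivation), the toy identity above can be verified symbolically for a sample polynomial; a minimal SymPy sketch, in which the choice of polynomial and the truncation order are arbitrary:

```python
# Check f(xt + k) = sum_{n,m} (xt+k)^n (-x)^m f^{(n+m)}(x) / (n! m!)
# for a sample polynomial, truncating at its degree (higher derivatives vanish).
import sympy as sp

x, xt, k = sp.symbols('x xt k')
f = x**3 + 2*x          # arbitrary sample polynomial
N = 3                   # degree of f: f^{(n)} = 0 for n > N

rhs = sum(
    (xt + k)**n * (-x)**m * sp.diff(f, x, n + m)
    / (sp.factorial(n) * sp.factorial(m))
    for n in range(N + 1) for m in range(N + 1)
)
# The explicit x-dependence cancels and the double sum reproduces f(xt + k).
assert sp.expand(rhs - f.subs(x, xt + k)) == 0
```

The cancellation of the spectator $x$'s is exactly the statement below \eqref{NewFunctional:ExpansionFunctionDerivative} that the expression does not really depend on $x$.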
Now, with some care, the idea presented above can be extended to functions of several variables. In the case of interest here, these variables will be Riemann tensor components. Roughly speaking, $f(x)$ will be replaced by the object defined in (\ref{NewFunctional:ShorthandSecondDerivative}) and $x=\tilde x + k$ will be the splitting of each component, $R_{MI}=\tilde R_{MI}+\mathcal{K}_{MI}$. The first step is the expansion around $0$, which gives an expression in which such splitting
is already applied
\begin{align} \label{NewFunctional:ExpansionAt0}
\left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) = \sum_{n=0}^{\infty} \frac{1}{n!} & \left( \tilde{R}\indices{_{M_1 I_1}} + \mathcal{K}\indices{_{M_1 I_1}} \right) \dots \left( \tilde{R}\indices{_{M_n I_n}} + \mathcal{K}\indices{_{M_n I_n}} \right) \\ \nonumber
& \times \left[ \frac{\hat{\partial}}{\hat{\partial} R\indices{_{M_1 I_1}} \dots \hat{\partial} R\indices{_{M_n I_n}}} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right]_{{\rm Riem} = 0} \, .
\end{align}
Just like for $f^{(n)}(0)$ above, the derivative piece at zero can be traded for derivatives at a general value of the Riemann tensor as
\begin{align} \label{NewFunctional:ExpansionDerivativeAt0}
& \left[ \frac{\hat{\partial}}{\hat{\partial} R\indices{_{M_1 I_1}} \dots \hat{\partial} R\indices{_{M_n I_n}}} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right]_{{\rm Riem} = 0} = \\
\nonumber & \sum_{m = 0}^{\infty} \frac{1}{m!} (-R\indices{_{N_1 J_1}}) \dots (-R\indices{_{N_m J_m}}) \frac{\hat{\partial}}{\hat{\partial} R\indices{_{N_1 J_1}} \dots \hat{\partial} R\indices{_{N_m J_m}} \hat{\partial} R\indices{_{M_1 I_1}} \dots \hat{\partial} R\indices{_{M_n I_n}}} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
The two previous expressions can be combined into a single and simpler one if we introduce again a sort of normal ordering prescription for derivatives. By this we mean:
\begin{equation} \label{NewFunctional:NormalOrdering}
:\left( R\indices{_{MI}} \hat{\partial}^{MI} \right)^n: \, \equiv R\indices{_{M_1 I_1}} \dots R\indices{_{M_n I_n}} \frac{\hat{\partial}}{\hat{\partial} R\indices{_{M_1 I_1}} \dots \hat{\partial} R\indices{_{M_n I_n}}} ~,
\end{equation}
so that derivatives only act on the object completely to the right of the expression. Then we have
\begin{equation} \label{NewFunctional:ExpansionAtR}
\left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) = \left[ : \sum_{n,m=0}^{\infty} \frac{1}{n!m!} \left(\left(\tilde{R}\indices{_{MI}} + \mathcal{K}\indices{_{MI}} \right) \hat{\partial}^{MI} \right)^n \left( - R\indices{_{NJ}} \hat{\partial}^{NJ} \right)^m : \right] \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{equation}
From now on, we will work with the operator between brackets alone, since it contains all we need, namely, the explicit dependence on the $\mathcal{K}\indices{_{MI}} $. We will also implicitly assume the normal ordering convention for derivatives.
Now, let us use the following useful identity
\begin{equation} \label{NewFunctional:SwitchSums}
\sum_{n,m=0}^{\infty} f(n,m) = \sum_{S=0}^{\infty} \sum_{n=0}^{S} f(n, S-n) \, ,
\end{equation}
to collect terms in the sums depending on the total number of derivatives they have, $S = n+m$. This gives:
\begin{align} \label{NewFunctional:ExpressionBeforeQSplit}
\nonumber & \sum_{S=0}^{\infty} \sum_{n=0}^{S} \frac{1}{n!(S-n)!} \left(\left(\tilde{R}\indices{_{MI}} + \mathcal{K}\indices{_{MI}} \right) \hat{\partial}^{MI} \right)^n \left( - R\indices{_{NJ}} \hat{\partial}^{NJ} \right)^{S-n} \\
& =\sum_{S=0}^{\infty} \frac{1}{S!} \left(\left(\tilde{R}\indices{_{MI}} + \mathcal{K}\indices{_{MI}} - R\indices{_{MI}} \right) \hat{\partial}^{MI} \right)^S ~,
\end{align}
where we have applied the binomial theorem.\footnote{For this to be valid, we need the elements inside the parentheses to commute with each other. This is guaranteed by the normal ordering prescription for derivatives.} Let us pause for a moment and look at \req{NewFunctional:ExpressionBeforeQSplit}. Here we could be tempted to use $\tilde{R}\indices{_{MI}} + \mathcal{K}\indices{_{MI}} - R\indices{_{MI}} = 0$, which would mean that the previous operator is simply the identity (because of the $S=0$ term). This is not a contradiction. As a matter of fact, the only thing we have done so far is to apply the identity in an elaborate way. But we have achieved our goal, since we have isolated the appearances of $\mathcal{K}\indices{_{MI}}$ in the $\alpha$-expansion: all these factors are the ones explicitly appearing in the previous expression.
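In scalar form, with the operators replaced by commuting symbols (as the normal ordering permits), the reindexing \eqref{NewFunctional:SwitchSums} combined with the binomial theorem can be checked diagonal by diagonal; a quick SymPy sketch, where the truncation $N$ is an arbitrary choice:

```python
# Scalar check of the diagonal reindexing plus binomial collapse:
#   sum_{S<=N} sum_{n<=S} a^n b^{S-n} / (n! (S-n)!) = sum_{S<=N} (a+b)^S / S!
# which holds separately on each diagonal S = n + m.
import sympy as sp

a, b = sp.symbols('a b')
N = 6  # truncation order; arbitrary
lhs = sum(a**n * b**(S - n) / (sp.factorial(n) * sp.factorial(S - n))
          for S in range(N + 1) for n in range(S + 1))
rhs = sum((a + b)**S / sp.factorial(S) for S in range(N + 1))
assert sp.expand(lhs - rhs) == 0
```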
From now on, we will have to deal separately with the two types of Riemann tensor components: those we called type $A$ (with $q_{\alpha} = 1$ for the corresponding $\mathcal{K}\indices{_{MI}}$) and those we called type $B$ (with $q_{\alpha} = 1/2$ for the corresponding $\mathcal{K}\indices{_{MI}}$). This can be easily done from the previous expression,
\begin{align}
& \sum_{S=0}^{\infty} \frac{1}{S!} \left(\left(\tilde{R}\indices{_{AI}} + \mathcal{K}\indices{_{AI}} - R\indices{_{AI}} \right) \hat{\partial}^{AI} + \left(\tilde{R}\indices{_{BI}} + \mathcal{K}\indices{_{BI}} - R\indices{_{BI}} \right) \hat{\partial}^{BI} \right)^S = \\
\nonumber & \sum_{S = 0}^{\infty} \sum_{T = 0}^S \frac{1}{T! (S-T)!} \left[\left(\tilde{R}\indices{_{AI}} + \mathcal{K}\indices{_{AI}} - R\indices{_{AI}} \right) \hat{\partial}^{AI}\right]^T \left[\left(\tilde{R}\indices{_{BJ}} + \mathcal{K}\indices{_{BJ}} - R\indices{_{BJ}} \right) \hat{\partial}^{BJ}\right]^{S-T}
\, .
\end{align}
The next step is to isolate the number of $\mathcal{K}$'s of each type, to prepare for the $(1 + q_{\alpha})$ division,
\begin{align}
\sum_{S = 0}^{\infty} \sum_{T = 0}^S \sum_{\lambda_1 = 0}^T \sum_{\lambda_2 = 0}^{S-T} & \frac{1}{T! (S-T)!} \frac{T!}{\lambda_1! (T - \lambda_1)!} \left[\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}\right]^{\lambda_1} \left[\left(\tilde{R}\indices{_{A'I'}} - R\indices{_{A'I'}} \right) \hat{\partial}^{A'I'}\right]^{T-\lambda_1} \\ \nonumber
& \frac{(S-T)!}{\lambda_2! (S - T - \lambda_2)!} \left[\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right]^{\lambda_2} \left[\left(\tilde{R}\indices{_{B'J'}} - R\indices{_{B'J'}} \right) \hat{\partial}^{B'J'}\right]^{S-T-\lambda_2} ~ .
\end{align}
In this expression, it is manifest that we have $\lambda_1$ components $\mathcal{K}_{AI}$, which contribute 1 to $q_{\alpha}$, and $\lambda_2$ components $\mathcal{K}_{BJ}$, which contribute $1/2$ to $q_{\alpha}$. Hence, we are ready to divide by $(1 + q_{\alpha} )=( 1 + \lambda_1 + \lambda_2/2)$, obtaining
\begin{align}
\nonumber \sum_{S = 0}^{\infty} \sum_{T = 0}^S \sum_{\lambda_1 = 0}^T \sum_{\lambda_2 = 0}^{S-T} & \frac{2}{(2 + 2 \lambda_1 + \lambda_2)} \frac{1}{\lambda_1! (T - \lambda_1)!} \left[\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}\right]^{\lambda_1} \left[\left(\tilde{R}\indices{_{A'I'}} - R\indices{_{A'I'}} \right) \hat{\partial}^{A'I'}\right]^{T-\lambda_1} \\
& \frac{1}{\lambda_2! (S - T - \lambda_2)!} \left[\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right]^{\lambda_2} \left[\left(\tilde{R}\indices{_{B'J'}} - R\indices{_{B'J'}} \right) \hat{\partial}^{B'J'}\right]^{S-T-\lambda_2} ~ .
\end{align}
At this point, the $\alpha$-sum has been performed, and we do not need to explicitly keep the $\mathcal{K}$ dependence isolated. We can also rewrite the $\tilde{R}$ back in terms of conventional Riemann tensor components. Using $\tilde{R}\indices{_{MI}} = R\indices{_{MI}} - \mathcal{K}\indices{_{MI}}$ we have
\begin{equation}
\sum_{S = 0}^{\infty} \sum_{T = 0}^S \sum_{\lambda_1 = 0}^T \sum_{\lambda_2 = 0}^{S-T} \frac{2}{(2 + 2 \lambda_1 + \lambda_2)} \frac{(-1)^{S-\lambda_1-\lambda_2}}{\lambda_1! (T - \lambda_1)!} \frac{1}{\lambda_2! (S - T - \lambda_2)!} \left(\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}\right)^{T} \left(\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right)^{S-T} ~ .
\end{equation}
At this point we proceed to perform the $\lambda_1$ and $\lambda_2$ sums, which do not affect the derivative operators. Let us start with the $\lambda_2$ one. It is possible to show that
\begin{align} \label{NewFunctional:SumLambda1Expression}
\sum_{\lambda=0}^{S-T} \frac{(-1)^{\lambda}}{(2+2\lambda_1+\lambda )} \frac{1}{\lambda! (S-T - \lambda)!} & = \frac{(2\lambda_1+1)!}{(2\lambda_1+2+S-T)!} \, .
\end{align}
Detailed derivations of this identity as well as of \req{NewFunctional:SumLambda2Expression} are included in appendix \ref{formuls}.
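The identity can also be tested directly by exact rational arithmetic over a grid of small values; a sketch in which the grid sizes are arbitrary choices:

```python
# Verify  sum_{l=0}^{P} (-1)^l / ((2+2*l1+l) * l! * (P-l)!)
#         = (2*l1+1)! / (2*l1+2+P)!        with P = S - T,
# exactly, for a grid of small l1 and P.
from fractions import Fraction
from math import factorial

for l1 in range(6):
    for P in range(8):
        lhs = sum(
            Fraction((-1)**l, (2 + 2*l1 + l) * factorial(l) * factorial(P - l))
            for l in range(P + 1)
        )
        rhs = Fraction(factorial(2*l1 + 1), factorial(2*l1 + 2 + P))
        assert lhs == rhs
```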
After performing this sum, the operator becomes:
\begin{equation} \label{NewFunctional:OperatorInProgress}
2 \sum_{S = 0}^{\infty} \sum_{T = 0}^S \sum_{\lambda_1 = 0}^T \frac{(-1)^{S-\lambda_1}}{\lambda_1! (T - \lambda_1)!} \frac{(2 \lambda_1 + 1)!}{(2\lambda_1 + 2 + S - T)!} \left(\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}\right)^{T} \left(\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right)^{S-T} \, .
\end{equation}
We can now try to do the $\lambda_1$ one. We find the following integral representation of the sum,\footnote{This can be explicitly written in terms of Gauss' hypergeometric function as \begin{equation} \sum_{\lambda=0}^{T} \frac{(-1)^{\lambda}}{\lambda! (T - \lambda)!} \frac{(2 \lambda + 1 )!}{(2 \lambda + 2+S-T)!} = \frac{2+S- (S-T)\, {}_{2}F_1 \left[1,-T ;3+S;-1\right]}{2(1+S)(2+S)(S-T)! T!}\, ,\end{equation} but the integral form turns out to be more useful for our purposes.}
\begin{align} \label{NewFunctional:SumLambda2Expression}
\sum_{\lambda=0}^{T} \frac{(-1)^{\lambda}}{\lambda! (T - \lambda)!} \frac{(2 \lambda + 1 )!}{(2 \lambda + 2+S-T)!} & = \frac{1}{T! (S-T)!} \int_0^1 {\rm d} u \, u (1 - u^2)^T (1-u)^{S-T} \, .
\end{align}
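The integral representation can likewise be verified symbolically for small values of $T$ and $S-T$; a SymPy sketch, with an arbitrary grid size:

```python
# Verify  sum_{l=0}^{T} (-1)^l (2l+1)! / (l! (T-l)! (2l+2+P)!)
#         = 1/(T! P!) * Integral_0^1 u (1-u^2)^T (1-u)^P du,   with P = S - T.
import sympy as sp

u = sp.symbols('u')
for T in range(4):
    for P in range(4):
        lhs = sum(
            sp.Integer((-1)**l) * sp.factorial(2*l + 1)
            / (sp.factorial(l) * sp.factorial(T - l) * sp.factorial(2*l + 2 + P))
            for l in range(T + 1)
        )
        rhs = sp.integrate(u * (1 - u**2)**T * (1 - u)**P, (u, 0, 1)) \
              / (sp.factorial(T) * sp.factorial(P))
        assert sp.simplify(lhs - rhs) == 0
```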
Continuing from \eqref{NewFunctional:OperatorInProgress}, we have
\begin{align}
\nonumber & \int_0^1 {\rm d}u \, 2u \sum_{S = 0}^{\infty} \sum_{T = 0}^S \frac{(-1)^S}{T! (S - T)!} \left((1-u^2) \mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}\right)^{T} \left((1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right)^{S-T} \\
& = \sum_{S = 0}^{\infty} \frac{(-1)^S}{S!} \int_0^1 {\rm d}u \, 2u \left[(1-u^2) \mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} + (1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right]^{S} \, .
\end{align}
This is our final result. Let us collect everything here, including the definitions needed to interpret it. We have found that the anomaly term in the holographic entanglement entropy functional can be written as
\begin{align} \label{NewFunctional:FinalForm}
&\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \\
\nonumber & = \sum_{S = 0}^{\infty} \frac{1}{S!} \int_0^1 {\rm d}u \, 2u :\left[- (1-u^2) \mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} - (1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right]^{S}: \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, ,
\end{align}
where we emphasized again that derivatives have to be taken after normal ordering and
\begin{align} \label{NewFunctional:OperatorA}
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} \equiv & - \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} - 8 K\indices{_{z i}^{k}} K\indices{_{\bar{z} jk}} \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} - 8 K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} \\
\nonumber & - 2 K\indices{_{a i k}} K\indices{^a_{l j}} \frac{\partial}{\partial R\indices{_{i j k l}}} + \left( 4 R\indices{_{z i z j}} \frac{\partial}{\partial R\indices{_{z i z j}}} + {\rm c.c.} \right) ~ , \\\label{NewFunctional:OperatorB}
\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ} \equiv & \left( 4 R\indices{_{z i j k}} \frac{\partial}{\partial R\indices{_{z i j k}}} + 8 R\indices{_{z \bar{z} z i}} \frac{\partial}{\partial R\indices{_{z \bar{z} z i}}} + {\rm c.c.} \right) ~ .
\end{align}
Observe that the sum in \req{NewFunctional:FinalForm} can be formally performed, allowing us to write the result in an exponential form
\begin{align} \label{NewFunctional:FinalForm2}
&\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha}=\int_0^1 {\rm d}u \, 2u\, {\rm e}^{-F(u)} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, ,
\end{align}
where
\begin{equation}
F(u)\equiv [(1-u^2)\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}+ (1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}]\, .
\end{equation}
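Since the normal-ordered derivative operators commute, the whole chain of manipulations leading to \req{NewFunctional:FinalForm} can be tested end to end in a scalar toy model, replacing $\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}$ and $\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}$ by ordinary variables. A SymPy sketch, where the sample polynomial and truncation are arbitrary choices:

```python
# Scalar toy check of the final formula: replace the commuting (normal-ordered)
# operators by scalar variables. With the splittings xA = (xA-kA) + kA (each kA
# contributing q = 1) and xB = (xB-kB) + kB (each kB contributing q = 1/2), the
# alpha-weighted expansion of a polynomial f should match
#   Integral_0^1 2u * exp[-(1-u^2) kA dA - (1-u) kB dB] f(xA, xB) du,
# with the derivatives acting on f only.
import sympy as sp

xA, xB, kA, kB, tA, tB, u = sp.symbols('xA xB kA kB tA tB u')
f = xA**2 * xB + xA * xB**2 + xB**3 + xA   # arbitrary sample polynomial

# Left-hand side: expand in the split variables and weight each monomial
# by 1/(1 + q), with q = (#kA) + (#kB)/2.
expanded = sp.expand(f.subs({xA: tA + kA, xB: tB + kB}, simultaneous=True))
lhs = sum(
    term / (1 + sp.degree(term, kA) + sp.Rational(sp.degree(term, kB), 2))
    for term in sp.Add.make_args(expanded)
)
lhs = sp.expand(lhs.subs({tA: xA - kA, tB: xB - kB}, simultaneous=True))

# Right-hand side: expand the exponential (derivatives hit f only) and
# integrate the u-dependent weights term by term.
N = 4  # f has degree 3, so i + j <= 3 suffices
rhs = sp.expand(sum(
    sp.integrate(2*u * (1 - u**2)**i * (1 - u)**j, (u, 0, 1))
    * (-kA)**i * (-kB)**j / (sp.factorial(i) * sp.factorial(j))
    * sp.diff(f, xA, i, xB, j)
    for i in range(N) for j in range(N)
))
assert sp.expand(lhs - rhs) == 0
```

The agreement for every monomial of the sample polynomial mirrors, step by step, the resummation performed above.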
In subsection \ref{covid} below we present a covariant version of these new formulas. Observe that even though the anomaly term naively involves the contraction of intrinsic curvatures with two extrinsic curvatures, it is manifest from our formula that the sum over $\alpha$ hides possible contractions with an arbitrary (even) number of extrinsic curvatures ---in particular, order $n$ densities will produce terms involving up to $2(n-1)$ extrinsic curvatures.
There are some obvious particular cases in which the above expression simplifies considerably. Firstly, if no $B$ type terms appear in the second derivative of the Lagrangian, we can write
\begin{align} \label{NewFunctional:OnlyTypeA}
\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} &= \sum_{S = 0}^{\infty} \frac{1}{(S+1)!} \left( - \mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} \right)^{S} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \\ \notag &= \left[\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} \right]^{-1} \left[1- {\rm e}^{-\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI}} \right] \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
Similarly, if only type $B$ terms were present,
the result would simplify to
\begin{align} \label{NewFunctional:OnlyTypeB}
\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} & = \sum_{S = 0}^{\infty} \frac{2}{(S+2)!} \left( - \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} \right)^{S} \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \\ \notag &=-2 \left[\mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} \right]^{-2} \left[1-\mathcal{K}\indices{_{BI}} \hat{\partial}^{BI}-{\rm e}^{-\mathcal{K}\indices{_{BI}} \hat{\partial}^{BI}} \right] \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
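Both closed forms follow from elementary resummations, which can be checked order by order treating the operator as a commuting scalar; a sketch with an arbitrary truncation order:

```python
# Treat the normal-ordered operator K*dhat as a scalar x and check:
#   sum_S (-x)^S / (S+1)!  =  (1 - exp(-x)) / x
#   sum_S 2 (-x)^S / (S+2)!  =  -2 x^{-2} (1 - x - exp(-x))
# order by order in x.
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order; arbitrary

series_A = sum((-x)**S / sp.factorial(S + 1) for S in range(N))
closed_A = sp.series((1 - sp.exp(-x)) / x, x, 0, N).removeO()
assert sp.expand(series_A - closed_A) == 0

series_B = sum(2 * (-x)**S / sp.factorial(S + 2) for S in range(N))
closed_B = sp.series(-2 * (1 - x - sp.exp(-x)) / x**2, x, 0, N).removeO()
assert sp.expand(series_B - closed_B) == 0
```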
As we will see in a moment, there is at least one important case for which only type $A$ terms appear, namely, Lovelock theories. It is harder to imagine how only type $B$ terms could appear. Nevertheless, the result obtained here will prove to be useful for presenting the explicit form of the anomaly term for cubic and quartic theories.
Before closing this subsection, let us mention that, while our new formulas have been obtained assuming a particular splitting for the Riemann tensor components ---namely, the one valid for perturbative higher-curvature gravities summarized in \req{NotationRewriting:GeneralSplitting}--- an analogous procedure to the one presented here should allow one to produce similar expressions for other possible splittings.
\subsection{Anomaly term in Lovelock theories}
\label{subsec:AnomalyLovelock}
Lovelock gravities \cite{Lovelock1,Lovelock2} are special in many respects ---see also subsection \ref{love1} below. In particular, as argued in \cite{Dong:2013qoa}, the object \eqref{NewFunctional:ShorthandSecondDerivative} only contains a single kind of Riemann tensor component for them, namely, $R\indices{_{i j k l}}$.
The Lovelock density of order $n$ is defined by
\begin{equation} \label{AnomalyLovelock:LovelockLagrangian}
\mathcal{X}_{2n}(R) \equiv \frac{1}{2^n} \delta_{\nu_1 \nu_2 \cdots \nu_{2n-1}\nu_{2n}}^{\mu_1 \mu_2\cdots \mu_{2n-1}\mu_{2n}} R^{\nu_1 \nu_2}_{\mu_1 \mu_2} \cdots R^{\nu_{2n-1} \nu_{2n}}_{\mu_{2n-1}\mu_{2n}}\, ,
\end{equation}
where $\delta_{\nu_1 \nu_2 \cdots \nu_{2n-1}\nu_{2n}}^{\mu_1 \mu_2\cdots \mu_{2n-1}\mu_{2n}}$ is the totally antisymmetric product of $2n$ Kronecker deltas.
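To make the definition concrete: contracting the generalized delta numerically against a random tensor with the algebraic symmetries of the Riemann tensor reproduces the familiar low-order densities, $\mathcal{X}_2 = R$ and the Gauss-Bonnet combination $\mathcal{X}_4 = R^2 - 4 R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$. A brute-force sketch (flat auxiliary metric, mixed-index components $R^{\mu\nu}{}_{\rho\sigma}$; the dimension and random seed are arbitrary):

```python
# Evaluate X_{2n} = (1/2^n) * delta^{mu...}_{nu...} R^{nu1 nu2}_{mu1 mu2} ...
# by brute force over permutations, and compare with the known n = 1, 2 forms.
import itertools
import numpy as np

def perm_sign(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def lovelock(Rm, n):
    """X_{2n} for mixed-index components Rm[a, b, c, d] = R^{ab}_{cd}."""
    d = Rm.shape[0]
    total = 0.0
    for sigma in itertools.permutations(range(2 * n)):
        sgn = perm_sign(sigma)
        for nu in itertools.product(range(d), repeat=2 * n):
            term = 1.0
            for q in range(n):
                term *= Rm[nu[2*q], nu[2*q + 1], nu[sigma[2*q]], nu[sigma[2*q + 1]]]
            total += sgn * term
    return total / 2**n

# Random tensor with the monoterm Riemann symmetries.
rng = np.random.default_rng(0)
d = 4
T = rng.standard_normal((d, d, d, d))
A = T - T.transpose(1, 0, 2, 3)          # antisymmetry in the upper pair
A = A - A.transpose(0, 1, 3, 2)          # antisymmetry in the lower pair
Rm = A + A.transpose(2, 3, 0, 1)         # symmetry under pair exchange

R = np.einsum('abab->', Rm)              # Ricci scalar
Ric = np.einsum('acbc->ab', Rm)          # Ricci tensor
assert np.isclose(lovelock(Rm, 1), R)
GB = R**2 - 4 * np.einsum('ab,ba->', Ric, Ric) + np.einsum('abcd,cdab->', Rm, Rm)
assert np.isclose(lovelock(Rm, 2), GB)
```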
Now, since we have that
\begin{equation}
\frac{\partial R^{\mu \nu}_{\rho \sigma}}{\partial R\indices{_{z i z j}}} = \frac{1}{2} \left( g^{z[\mu} g^{\nu] i} \delta_{[\rho}^{z} \delta_{\sigma]}^{j} + g^{z[\mu} g^{\nu] j} \delta_{[\rho}^{z} \delta_{\sigma]}^{i} \right) = 2 \delta_{\bar{z}}^{[\mu} \delta_{m}^{\nu]} \delta_{[\rho}^{z} \delta_{\sigma]}^{(i} g^{j) m}\, ,
\end{equation}
and a similar result for the derivative with respect to $R\indices{_{\bar{z} k \bar{z} l}}$, the second derivative contracted with $K^2$ appearing in the anomaly term is of the form:
\begin{equation} \label{AnomalyLovelock:SecondDerivative}
\frac{8\partial^2 \mathcal{X}_{2n}}{\partial R\indices{_{z i z j}} \partial R\indices{_{\bar{z} k \bar{z} l}}} K\indices{_{z i j}} K\indices{_{\bar{z} k l}} = \frac{- 8n (n-1)}{2^{n-2}} \delta^{z \bar{z} i j \mu_1 \mu_2 \dots \mu_{2n - 5} \mu_{2n - 4}}_{z \bar{z} k l \nu_1 \nu_2 \dots \nu_{2n - 5} \nu_{2n - 4}} R^{\nu_1 \nu_2}_{\mu_1 \mu_2} \cdots R^{\nu_{2n-5} \nu_{2n-4}}_{\mu_{2n-5}\mu_{2n-4}} K\indices{_{z i}^k} K\indices{_{\bar{z} j}^l} \, .
\end{equation}
Due to the completely antisymmetric character of the generalized delta, none of the indices $\mu_n$ or $\nu_n$ can be $z$ or $\bar{z}$. This forces all components of the Riemann tensor to be of the type $R^{j_1 j_2}_{i_1 i_2}$, as anticipated.\footnote{Something similar happens with the Wald term. As a result, the entanglement entropy functional for Lovelock theories can be written in terms of intrinsic curvatures to the surface ---see \req{jm} below.}
Therefore, we only have to take into account the part proportional to $\partial / \partial R\indices{_{i j k l}}$ in \eqref{NewFunctional:OperatorA}. Using the result \eqref{NewFunctional:OnlyTypeA} valid when only type $A$ terms are present we find
\begin{align} \label{OneComponent:LovelockExponential}
& \sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}^{\rm Lovelock}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \\
= & \sum_{S=0}^{\infty} \frac{1}{(S+1)!} \left( 2 K\indices{_{a i k}} K\indices{^a_{l j}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right)^{S}\left( \frac{8 \partial^2 \mathcal{L}^{\rm Lovelock}_E}{\partial {\rm Riem}^2} K^2 \right) \\
= & \left[ 2 K\indices{_{a i k}} K\indices{^a_{l j}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right]^{-1} \left[ \exp \left( 2 K\indices{_{a i k}} K\indices{^a_{j l}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right) - 1 \right] \left( \frac{8 \partial^2 \mathcal{L}^{\rm Lovelock}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
This is a rather suggestive expression. On the other hand, we know that for Lovelock theories the combination of the anomaly and Wald terms must reduce to the so-called Jacobson-Myers (JM) functional ---see \req{jm} below. Let us see how this works when the anomaly term is written as in \req{OneComponent:LovelockExponential}. First of all, notice that the extrinsic curvatures in the second derivative can be written covariantly using the antisymmetry of the generalized delta as
\begin{equation} \label{AnomalyLovelock:SecondDerivativeCovariant}
\left( \frac{8 \partial^2 \mathcal{X}_{2n}}{\partial {\rm Riem}^2} K^2 \right) = \frac{n (1-n)}{2^{n-3}} \delta^{i_1 j_1 \dots i_{n-1} j_{n-1}}_{k_1 l_1 \dots k_{n-1} l_{n-1}} R^{k_1 l_1}_{i_1 j_1} \cdots R^{k_{n-2} l_{n-2}}_{i_{n-2} j_{n-2}} K\indices{_{a i_{n-1}}^{k_{n-1}}} K\indices{^a_{j_{n-1}}^{l_{n-1}}} \, ,
\end{equation}
where we have also reduced the generalized delta by eliminating the $z$ and $\bar{z}$ indices. Applying the differential operator $S$ times is now straightforward:
\begin{align} \label{AnomalyLovelock:SOperators}
& \sum_{S=0}^{\infty} \frac{1}{(S+1)!} \left( 2 K\indices{_{a i k}} K\indices{^a_{l j}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right)^{S} \left( \frac{8 \partial^2 \mathcal{X}_{2n}}{\partial {\rm Riem}^2} K^2 \right) \\
\nonumber = & - \sum_{S=0}^{n-2} \frac{1}{(S+1)!} \frac{n (n-1) \cdots (n-1-S)}{2^{n-3 - S}} \delta^{i_1 j_1 \cdots i_{n-1} j_{n-1}}_{k_1 l_1 \cdots k_{n-1} l_{n-1}} R^{k_1 l_1}_{i_1 j_1} \cdots R^{k_{n-2-S} l_{n-2-S}}_{i_{n-2-S} j_{n-2-S}} \\
\nonumber & \quad \times K\indices{_{a_{n-1-S} i_{n-1-S}}^{k_{n-1-S}}} K\indices{^{a_{n-1-S}}_{j_{n-1-S}}^{l_{n-1-S}}} \dots K\indices{_{a_{n-1} i_{n-1}}^{k_{n-1}}} K\indices{^{a_{n-1}}_{j_{n-1}}^{l_{n-1}}} \\
\nonumber = & - n \sum_{S=1}^{n-1} \frac{1}{2^{n-2 - S}} \binom{n-1}{S} \delta^{i_1 j_1 \cdots i_{n-1} j_{n-1}}_{k_1 l_1 \dots k_{n-1} l_{n-1}} R^{k_1 l_1}_{i_1 j_1} \cdots R^{k_{n-1-S} l_{n-1-S}}_{i_{n-1-S} j_{n-1-S}} \\
\nonumber & \quad \times K\indices{_{a_{n-S} i_{n-S}}^{k_{n-S}}} K\indices{^{a_{n-S}}_{j_{n-S}}^{l_{n-S}}} \cdots K\indices{_{a_{n-1} i_{n-1}}^{k_{n-1}}} K\indices{^{a_{n-1}}_{j_{n-1}}^{l_{n-1}}} \, .
\end{align}
Furthermore, the Wald term reads
\begin{equation} \label{AnomalyLovelock:WaldTerm}
\frac{\partial \mathcal{X}_{2n}}{\partial R\indices{_{z \bar{z} z \bar{z}}}} = - \frac{n}{2^{n-2}} \delta^{i_1 j_1 \dots i_{n-1} j_{n-1}}_{k_1 l_1 \dots k_{n-1} l_{n-1}} R^{k_1 l_1}_{i_1 j_1} \dots R^{k_{n-1} l_{n-1}}_{i_{n-1} j_{n-1}} \, .
\end{equation}
This can be combined with \eqref{AnomalyLovelock:SOperators}, acting as the $S = 0$ term of the sum. When this is included, the binomial coefficient and the $2^{-S}$ factor in each term can be employed to write the full functional as
\begin{align} \label{AnomalyLovelock:CombinationWaldAnomaly}
& \frac{\partial \mathcal{X}_{2n}}{\partial R\indices{_{z \bar{z} z \bar{z}}}} + \sum_{S=0}^{\infty} \frac{1}{(S+1)!} \left( 2 K\indices{_{a i k}} K\indices{^a_{l j}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right)^{S} \left( \frac{8 \partial^2 \mathcal{X}_{2n}}{\partial {\rm Riem}^2} K^2 \right) \\
\nonumber = & -\frac{n}{2^{n-2}} \delta^{i_1 j_1 \dots i_{n-1} j_{n-1}}_{k_1 l_1 \dots k_{n-1} l_{n-1}} \left(R^{k_1 l_1}_{i_1 j_1} + 2 K\indices{_{a_1 i_1}^{k_1}} K\indices{^{a_1}_{j_1}^{l_1}}\right) \\ &\notag \dots \left( R^{k_{n-1} l_{n-1}}_{i_{n-1} j_{n-1}} + 2 K\indices{_{a_{n-1} i_{n-1}}^{k_{n-1}}} K\indices{^{a_{n-1}}_{j_{n-1}}^{l_{n-1}}} \right) \, ,
\end{align}
where we used the fact that the binomial factor counts the number of ways we can pick
$S$ squared extrinsic curvature factors and $(n-1-S)$ Riemann tensors from the previous product (the antisymmetric delta allows all such choices to be rewritten as one and the same term). The final observation is that $\tilde{R}\indices{_{i j k l}}$ is actually the intrinsic curvature tensor of the surface \cite{Dong:2013qoa}, which we denote $\mathcal{R}\indices{_{i j k l}}$. Then, comparing with \req{SplittingProblem:ExpansionsRiemann}, it follows that we can write the HEE functional for a given order-$n$ Lovelock density as
\begin{equation} \label{AnomalyLovelock:JMFunctional}
S^{{\mathcal{X}_{2n}}}_{\rm HEE} = - 4 \pi n \int_{\Gamma_A} {\rm d}^{d-1}y \, \sqrt{h} \, \mathcal{X}_{2(n-1)}(\mathcal{R}) \, ,
\end{equation}
which is the JM form \cite{Jacobson:1993xs,Hung:2011xb}. This has the interesting property of being fully determined in terms of intrinsic curvatures associated to the holographic entangling surface.
\subsection{Anomaly term for cubic gravities}
\label{subsec:AnomalyCubic}
Our new formula for the anomaly term in \eqref{NewFunctional:FinalForm} gets notably simplified for cubic theories. This is a consequence of the second derivative of the Lagrangian being linear in curvatures for these theories, which implies that only $S = 0, 1$ terms need to be included in the sum. In addition, the object \eqref{NewFunctional:ShorthandSecondDerivative} is ``neutral'' in $z$ and $\bar{z}$ indices ---{\it i.e.,}\ it has an equal number of $z$'s and $\bar{z}$'s\footnote{This is a consequence of the scalar character of the Lagrangian, which guarantees that, when written with lower indices, Riemann tensor components are contracted with metrics $g^{\mu \nu}$. The only non-vanishing component in the $z$, $\bar{z}$ indices is $g^{z \bar{z}} = 2$, so for each $z$ there must be one and only one $\bar{z}$. After the two derivatives are taken following \eqref{NewFunctional:ShorthandSecondDerivative}, the number of $z$'s and $\bar{z}$'s in Riemann tensor components decreases by two, but it is still equal for both types of indices.}--- so no components with a different number of $z$ and $\bar{z}$ indices can appear inside it. In particular, there are no type $B$ terms, and the last term appearing in \eqref{NewFunctional:OperatorA} is also missing. Therefore, we can write the anomaly term for cubic theories as
\begin{align} \label{AnomalyCubic:ExpandedExpression}
&\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}^{\rm Riem^3}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} = \left[ 1 + \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} + 4 K\indices{_{z i}^{k}} K\indices{_{\bar{z} jk}} \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} \right. \\
\nonumber & \quad \left. + 4 K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} + K\indices{_{a i k}} K\indices{^a_{j l}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right] \left( \frac{8 \partial^2 \mathcal{L}^{\rm Riem^3}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
We have computed the explicit functionals presented in the following section using both the $\alpha$-expansion procedure and this new derivative expression, finding perfect agreement.
\subsection{Anomaly term for quartic gravities}
\label{subsec:AnomalyQuartic}
Although slightly more complicated than the cubic ones, quartic theories are still simple enough to deserve an independent discussion. In this case, the second derivative of the Lagrangian is quadratic in curvature tensors, so we have to include $S = 0, 1, 2$ in \eqref{NewFunctional:FinalForm}. However, the neutral character in $z$'s and $\bar{z}$'s of \eqref{NewFunctional:ShorthandSecondDerivative} allows us to simplify the general expression. In the expansion of the second derivative in terms of the basic components of the Riemann tensor,
each of the resulting monomials must be neutral in $z$ and $\bar{z}$.
The first consequence of this fact is that components $R\indices{_{z \bar{z} z \bar{z}}}$, $R\indices{_{z \bar{z} i j}}$, $R\indices{_{z i \bar{z} j}}$, and $R\indices{_{i j k l}}$ cannot appear paired with the remaining ones, so we can drop all terms that involve mixed second derivatives between these two sets. Furthermore, by the same argument, $R\indices{_{z i z j}}$ can only appear paired with $R\indices{_{\bar{z} i \bar{z} j}}$ and thus, at second order in derivatives, type $B$ components do not mix with the type $A$ ones.
Also, the last term (in parentheses) in \eqref{NewFunctional:OperatorA} does not mix with the remaining part of that operator when taking the square. All this means that the $S = 2$ term of \eqref{NewFunctional:FinalForm} for quartic theories will be:
\begin{align}\notag
& \frac{1}{3!} \left( \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} + 8 K\indices{_{z i}^{k}} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} + 8 K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} + 2 K\indices{_{a i k}} K\indices{^a_{j l}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right)^2 \\
& + \frac{1}{3!} \left( - 4 R\indices{_{z i z j}} \frac{\partial}{\partial R\indices{_{z i z j}}} + {\rm c.c.} \right)^2 + \frac{2}{4!} \left( - 4 R\indices{_{z i j k}} \frac{\partial}{\partial R\indices{_{z i j k}}} - 8 R\indices{_{z \bar{z} z i}} \frac{\partial}{\partial R\indices{_{z \bar{z} z i}}} + {\rm c.c.} \right)^2 ~ ,
\end{align}
where, although not written explicitly, all derivatives are to be understood under the normal ordering prescription, so they do not act on any of the Riemann tensor components appearing explicitly in the previous expression. This can be simplified a bit further by using once again the fact that all terms in the second derivative of the Lagrangian have to be neutral in $z$ and $\bar{z}$. Thus, in the second term, only the mixed derivative $\partial^2 /( \partial R\indices{_{z i z j}} \partial R\indices{_{\bar{z} k \bar{z} l}})$ contributes. Something similar happens in the last term, where only globally neutral combinations contribute. All in all, including also the $S = 0$ and $S = 1$ parts of the anomaly term, we can write, for quartic theories:
\begin{align}\label{AnomalyQuartic:ExpandedExpression}
& \sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}^{\rm Riem^4}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \\
\nonumber & = \left[ 1 + \frac{1}{2} \left[ \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} + 8 K\indices{_{z i}^{k}} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} + 8 K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} + 2 K\indices{_{a i k}} K\indices{^a_{j l}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right. \right. \\
\nonumber & \quad \left. + \left( - 4 R\indices{_{z i z j}} \frac{\partial}{\partial R\indices{_{z i z j}}} + {\rm c.c.} \right) \right] + \frac{2}{3!} \left( - 4 R\indices{_{z i j k}} \frac{\partial}{\partial R\indices{_{z i j k}}} - 8 R\indices{_{z \bar{z} z i}} \frac{\partial}{\partial R\indices{_{z \bar{z} z i}}} + {\rm c.c.} \right) \\
\nonumber & \quad + \frac{1}{3!} \left( \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \frac{\partial}{\partial R\indices{_{z \bar{z} z \bar{z}}}} + 8 K\indices{_{z i}^{k}} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z \bar{z} i j}}} + 8 K\indices{_{z i}^k} K\indices{_{\bar{z} j k}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} + 2 K\indices{_{a i k}} K\indices{^a_{j l}} \frac{\partial}{\partial R\indices{_{i j k l}}} \right)^2 \\
\nonumber & \quad + \frac{32}{3!} R\indices{_{z i z j}} R\indices{_{\bar{z} k \bar{z} l}} \frac{\partial^2}{\partial R\indices{_{z i z j}} \partial R\indices{_{\bar{z} k \bar{z} l}}} + \frac{2}{4!} \left( 128 R\indices{_{z \bar{z} z i}} R\indices{_{\bar{z} z \bar{z} j}} \frac{\partial^2}{\partial R\indices{_{z \bar{z} z i}} \partial R\indices{_{\bar{z} z \bar{z} j}}} \right. \\
\nonumber & \quad \left. \left.+ 64 R\indices{_{z \bar{z} z i}} R\indices{_{\bar{z} j k l}} \frac{\partial^2}{\partial R\indices{_{z \bar{z} z i}} \partial R\indices{_{\bar{z} j k l}}} + 64 R\indices{_{\bar{z} z \bar{z} i}} R\indices{_{z j k l}} \frac{\partial^2}{\partial R\indices{_{\bar{z} z \bar{z} i}} \partial R\indices{_{z j k l}}} + 32 R\indices{_{z i j k}} R\indices{_{\bar{z} l m n}} \frac{\partial^2}{\partial R\indices{_{z i j k}} \partial R\indices{_{\bar{z} l m n}}} \right) \right] \\
\nonumber & \left( \frac{8 \partial^2 \mathcal{L}^{\rm Riem^4}_E}{\partial {\rm Riem}^2} K^2 \right) \, .
\end{align}
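The relative prefactors $32$, $128$ and $64$ in the last two lines can be checked with a few lines of computer algebra. The following sketch (placeholder symbol names and the $z$-charge bookkeeping are ours, introduced only for this check) squares the type $B$ operator and keeps the monomials whose two derivatives carry opposite $z$-charge, which are the only ones acting non-trivially on the neutral, quadratic second derivative of a quartic Lagrangian:

```python
import sympy as sp

# Commuting placeholders for the type-B curvature components and the
# derivatives with respect to them (names are ours, bookkeeping only):
#   Rz ~ R_{z i j k} (z-charge +1),  Q ~ R_{z zbar z i} (z-charge +1),
#   Rzb, Qb ~ their conjugates (z-charge -1).
Rz, Rzb, Q, Qb = sp.symbols('Rz Rzb Q Qb')
dRz, dRzb, dQ, dQb = sp.symbols('dRz dRzb dQ dQb')

# z-charge that the object each derivative acts on must supply:
dcharge = {dRz: 1, dRzb: -1, dQ: 1, dQb: -1}

# Square of the type-B operator of the text, -4 R dR - 8 Q dQ + c.c.:
sq = sp.expand((-4*Rz*dRz - 8*Q*dQ - 4*Rzb*dRzb - 8*Qb*dQb)**2)

def derivative_charge(term):
    """Total z-charge required by the derivative factors of a monomial."""
    return sum(dcharge[f]*p for f, p in term.as_powers_dict().items()
               if f in dcharge)

# The second derivative of a quartic Lagrangian is quadratic and neutral,
# so only derivative pairs of opposite charge survive:
surviving = sum(t for t in sq.as_ordered_terms() if derivative_charge(t) == 0)
print(surviving)
# Coefficients: 32 (Rz-Rzb), 128 (Q-Qb), and 64 for each mixed combination.
```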
When computing the 26 functionals corresponding to independent quartic densities in subsection \ref{quarticc} we have made use of this expression, which turns out to be much faster than performing the corresponding $\alpha$ expansions. We have nonetheless verified in a few cases that both procedures yield the same results.
\subsection{An example mixing type $A$ and type $B$ terms}
\label{subsec:MixedTypes}
In the previous subsections involving Lovelock, cubic and quartic densities, we found that it was possible to treat type $A$ and type $B$ terms separately. In this subsection we provide a simple example of a situation in which this separation is not possible. The previous arguments show that this happens for densities which are at least fifth-order in the Riemann tensor. In order to avoid unnecessary complications, let us assume that one of these densities produces a term mixing type $A$ and type $B$ components of the following form
\begin{equation} \label{MixedTypes:ExampleTerm}
\left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \supset C(K^2) R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} R\indices{_z^i_{\bar{z}}^j} \, ,
\end{equation}
where $C(K^2) \equiv c K\indices{_{z}^{l m}} K\indices{_{\bar{z} l m}}$ with $c$ a constant, and the $\supset$ symbol means that this is only one of many terms that would appear when expanding the second derivative in terms of the different $z$ and $\bar{z}$ components of the curvature tensor for an actual quintic (or higher order) density. We have not checked whether or not a term like this arises from a concrete fifth order Lagrangian, but it certainly could.\footnote{Notice that the term is globally neutral in $z$ and $\bar{z}$, as it should.} In any case, it will serve as an example of how one should proceed if a different combination of type $A$ and $B$ terms arises.
Let us first obtain the result by means of the $\alpha$ sum, which in this case turns out to be particularly simple. Applying the splitting rules, this term becomes
\begin{equation} \label{MixedTypes:AlphaSum}
\sum_{\alpha} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \supset C(K^2) \left( R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} \tilde{R}\indices{_z^i_{\bar{z}}^j} - R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \, .
\end{equation}
The first term has $q_{\alpha} = 1$, while the second has $q_{\alpha} = 2$. Then, dividing by $1 + q_{\alpha}$, we get
\begin{align} \notag
\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} & \supset C(K^2) \left( \frac{1}{2} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} \tilde{R}\indices{_z^i_{\bar{z}}^j} - \frac{1}{3} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \\
\label{MixedTypes:AlphaDivided} & = C(K^2) \left( \frac{1}{2} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} R\indices{_z^i_{\bar{z}}^j} + \frac{1}{6} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \, ,
\end{align}
where in the last line we have used the splitting relation $\tilde{R}\indices{_z^i_{\bar{z}}^j} = R\indices{_z^i_{\bar{z}}^j} + K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l}$ to rewrite everything in terms of the Riemann tensor component again.
Let us now obtain the same result by means of the derivative expression, \eqref{NewFunctional:FinalForm}. We need to take into account terms up to $S = 3$ in the series, but fortunately not every type $A$ or $B$ component appears in the piece of the Lagrangian we are considering. This means we can define new operators including only the relevant parts:
\begin{equation} \label{MixedTypes:ReducedOperators}
\partial_A \equiv - 8 K\indices{_{z i}^l} K\indices{_{\bar{z} j l}} \frac{\partial}{\partial R\indices{_{z i \bar{z} j}}} ~ , \qquad \partial_B \equiv 4 R\indices{_{z i j k}} \frac{\partial}{\partial R\indices{_{z i j k}}} + 8 R\indices{_{\bar{z} z \bar{z} k}} \frac{\partial}{\partial R\indices{_{\bar{z} z \bar{z} k}}} \, .
\end{equation}
Now, the $S = 0$ term is just the original \eqref{MixedTypes:ExampleTerm}. For the $S = 1$ term we apply the operator:
\begin{equation} \label{MixedTypes:OperatorS1}
- \frac{1}{2!} \partial_A - \frac{2}{3!} \partial_B \, ,
\end{equation}
which produces:
\begin{equation} \label{MixedTypes:ResultS1}
s_1 \equiv C(K^2) \left( - \frac{2}{3} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} R\indices{_z^i_{\bar{z}}^j} + \frac{1}{2} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \, .
\end{equation}
For the $S = 2$ operator we already find mixing between $\partial_A$ and $\partial_B$. Evaluating the integral expression given in \eqref{NewFunctional:FinalForm}, we get
\begin{equation} \label{MixedTypes:OperatorS2}
\frac{1}{2!} \int_0^1 \mathrm{d} u\,2u : \left(- (1-u^2) \partial_A - (1- u)\partial_B \right)^{2} : \; = \; : \left( \frac{1}{6} \partial_A^2 + \frac{7}{30} \partial_A \partial_B + \frac{1}{12} \partial_B^2 \right) : \, .
\end{equation}
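The $u$-integrals above are elementary; as a sanity check, the coefficients $1/6$, $7/30$ and $1/12$ can be reproduced symbolically, treating $\partial_A$ and $\partial_B$ as commuting placeholders (which they effectively are under normal ordering):

```python
import sympy as sp

u, dA, dB = sp.symbols('u dA dB')

# S = 2 weight integral, with weights (1-u^2) and (1-u) for the
# type-A and type-B operators respectively:
op2 = sp.Rational(1, 2) * sp.integrate(
    2*u*(-(1 - u**2)*dA - (1 - u)*dB)**2, (u, 0, 1))

print(sp.expand(op2))  # dA**2/6 + 7*dA*dB/30 + dB**2/12 (up to ordering)
```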
We stress once again that normal ordering means that derivatives do not act on the curvature components appearing in the operators \eqref{MixedTypes:ReducedOperators}, only on those in the second derivative object \eqref{MixedTypes:ExampleTerm}. This makes $\partial_A$ and $\partial_B$ commuting objects (inside a normal ordered expression). Furthermore, since \eqref{MixedTypes:ExampleTerm} contains only a single type $A$ component, the $\partial_A^2$ term in the previous expression does not contribute. The last two terms produce
\begin{equation} \label{MixedTypes:ResultS2}
s_2 \equiv C(K^2) \left( \frac{1}{6} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} R\indices{_z^i_{\bar{z}}^j} - \frac{7}{15} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \, .
\end{equation}
Finally, let us consider the $S = 3$ term. The operator is
\begin{equation*} \label{MixedTypes:OperatorS3}
\frac{1}{3!} \int_0^1 \mathrm{d} u\,2u : \left(- (1-u^2) \partial_A - (1- u)\partial_B \right)^{3} : \; = - : \left( \frac{\partial_A^3}{24} + \frac{19 \partial_A^2 \partial_B}{210} + \frac{\partial_A \partial_B^2}{15} + \frac{\partial_B^3}{60} \right) : \, .
\end{equation*}
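The coefficients $1/24$, $19/210$, $1/15$ and $1/60$ follow from the same elementary $u$-integrals; a quick symbolic check, with $\partial_A$ and $\partial_B$ again treated as commuting placeholders:

```python
import sympy as sp

u, dA, dB = sp.symbols('u dA dB')

# S = 3 weight integral with prefactor 1/3!:
op3 = sp.Rational(1, 6) * sp.integrate(
    2*u*(-(1 - u**2)*dA - (1 - u)*dB)**3, (u, 0, 1))

print(sp.expand(op3))
# -dA**3/24 - 19*dA**2*dB/210 - dA*dB**2/15 - dB**3/60 (up to ordering)
```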
In this case, since \eqref{MixedTypes:ExampleTerm} contains one type $A$ and two type $B$ components, the third piece of this operator is the only one giving a non-vanishing contribution. Its value is
\begin{equation} \label{MixedTypes:ResultS3}
s_3 \equiv \frac{2}{15} C(K^2) R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \, .
\end{equation}
We can finally combine all contributions, $s_0$ (which is just the original term \eqref{MixedTypes:ExampleTerm}), $s_1$, $s_2$, and $s_3$ to obtain
\begin{equation}\label{MixedTypes:ResultDerivatives}
\hspace{-.2cm}\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \supset C(K^2) \left( \frac{1}{2} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} R\indices{_z^i_{\bar{z}}^j} + \frac{1}{6} R\indices{_{z i j k}} R\indices{_{\bar{z} z \bar{z}}^k} K\indices{_z^{i l}} K\indices{_{\bar{z}}^j_l} \right) \, ,
\end{equation}
which coincides with \eqref{MixedTypes:AlphaDivided}, as it should.
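The final arithmetic combining $s_0, \ldots, s_3$ is easy to verify with exact fractions; the following sketch simply collects the coefficients of the two tensor structures as read off from the expressions above:

```python
from fractions import Fraction as F

# (coefficient of C(K^2) R R R, coefficient of C(K^2) R R K K) in each s_S:
s0 = (F(1), F(0))          # the original term
s1 = (F(-2, 3), F(1, 2))
s2 = (F(1, 6), F(-7, 15))
s3 = (F(0), F(2, 15))

rrr = sum(c[0] for c in (s0, s1, s2, s3))
rrkk = sum(c[1] for c in (s0, s1, s2, s3))
print(rrr, rrkk)  # 1/2 1/6
```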
\subsection{Covariant form of the new HEE formula}
\label{covid}
So far we have presented all our expressions in the particular set of adapted coordinates $(z, \bar{z}, y^i)$. Here we will rewrite our general formulas in covariant form, which is more useful for explicit applications (like the ones in Section \ref{unite}). In order to do that, we first write the metric as in \req{Notation:inducedmetric},
\begin{equation} \label{Covariant:MetricNormalVectors}
g\indices{_{\mu \nu}} =h_{\mu\nu}+ \delta\indices{_{a b}} n\indices{^a_{\mu}} n\indices{^b_{\nu}} \, ,
\end{equation}
so that in the adapted coordinates $n\indices{^a_i} = 0$, and $h\indices{_{\mu \nu}}$ is non-vanishing only for tangent components ($h\indices{_{z z}} = h\indices{_{z \bar{z}}} = h\indices{_{\bar{z} \bar{z}}} = 0$).
It is easy to check that, in the adapted coordinates, the binormal to the surface and the normal projector, defined in \req{Covariant:BinormalNormalDefinitions}, satisfy $\epsilon\indices{_{z \bar{z}}} = - \epsilon\indices{_{\bar{z} z}} = i/2$, $\perp\indices{_{z z}} = \perp\indices{_{\bar{z} \bar{z}}} = 0$,\footnote{There is an ordering assumption in the value of $\epsilon\indices{_{z \bar{z}}}$: the normal vectors $n\indices{^1}$ and $n\indices{^2}$ are ordered so that $\epsilon\indices{_{z \bar{z}}} = i/2$.} and $\perp\indices{_{z \bar{z}}} = 1/2$. The following identities can then be shown to hold in the adapted coordinates
\begin{align} \label{Covariant:DeltasInZ}
\delta\indices*{_{\mu}^z} \delta\indices*{_{\nu}^{\bar{z}}} & = \perp\indices{_{\mu \nu}} - i \epsilon\indices{_{\mu \nu}} \, ,
& \delta\indices*{^{\mu}_z} \delta\indices*{^{\nu}_{\bar{z}}} & = \frac{1}{4} \left( \perp\indices{^{\mu \nu}} + i \epsilon\indices{^{\mu \nu}}\right) \, , \\
\delta\indices*{_{\mu}^z} \delta\indices*{^{\nu}_{z}} & = \frac{1}{2} \left( \perp\indices{_{\mu}^{\nu}} - i \epsilon\indices{_{\mu}^{\nu}} \right) \, ,
& \delta\indices*{_{\mu}^{\bar{z}}} \delta\indices*{^{\nu}_{\bar{z}}} & = \frac{1}{2} \left( \perp\indices{_{\mu}^{\nu}} + i \epsilon\indices{_{\mu}^{\nu}}\right) \, .
\end{align}
These are all different forms of the same identity, related by raising or lowering the $z$ and $\bar{z}$ indices, but the different forms are useful in different contexts. In particular, they can be used to write in a covariant form the different terms appearing in the entanglement entropy functional.
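These identities are straightforward to verify component by component. The following numerical sketch (index ordering $(z, \bar{z})$ and component values $g^{z \bar{z}} = 2$, $\perp_{z \bar{z}} = 1/2$, $\epsilon_{z \bar{z}} = i/2$, as quoted above) does so in the two-dimensional normal space:

```python
import numpy as np

# Normal-space components in adapted coordinates, index order (z, zbar):
g_up    = np.array([[0, 2], [2, 0]], dtype=complex)     # g^{mu nu}
perp_lo = np.array([[0, 0.5], [0.5, 0]], dtype=complex) # perp_{mu nu}
eps_lo  = np.array([[0, 0.5j], [-0.5j, 0]], dtype=complex)  # eps_{mu nu}

perp_up  = g_up @ perp_lo @ g_up   # perp^{mu nu}
eps_up   = g_up @ eps_lo @ g_up    # eps^{mu nu}
perp_mix = perp_lo @ g_up          # perp_mu^nu
eps_mix  = eps_lo @ g_up           # eps_mu^nu

def delta(mu_pos, nu_pos):
    """2x2 matrix with a single 1 at position (mu_pos, nu_pos)."""
    m = np.zeros((2, 2), dtype=complex)
    m[mu_pos, nu_pos] = 1
    return m

# The four identities, component by component:
assert np.allclose(delta(0, 1), perp_lo - 1j*eps_lo)       # delta_mu^z delta_nu^zb
assert np.allclose(delta(0, 1), (perp_up + 1j*eps_up)/4)   # delta^mu_z delta^nu_zb
assert np.allclose(delta(0, 0), (perp_mix - 1j*eps_mix)/2) # delta_mu^z delta^nu_z
assert np.allclose(delta(1, 1), (perp_mix + 1j*eps_mix)/2) # delta_mu^zb delta^nu_zb
print("all identities hold componentwise")
```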
Let us start with the Wald term,
\begin{equation} \label{Covariant:WaldTerm}
\frac{\partial \mathcal{L}_E}{\partial R\indices{_{z \bar{z} z \bar{z}}}} = \delta\indices*{_{[\mu}^z} \delta\indices*{_{\nu]}^{\bar{z}}} \delta\indices*{_{[\rho}^z} \delta\indices*{_{\sigma]}^{\bar{z}}} \frac{\partial \mathcal{L}_E}{\partial R\indices{_{\mu \nu \rho \sigma}}} = - \epsilon\indices{_{\mu \nu}} \epsilon\indices{_{\rho \sigma}} \frac{\partial \mathcal{L}_E}{\partial R\indices{_{\mu \nu \rho \sigma}}} ~ .
\end{equation}
The last form, which is the familiar one for this piece \cite{Wald:1993nt,Iyer:1994ys}, is fully covariant, as desired. Similar manipulations can be applied to the anomaly term. For the second derivative of the Lagrangian contracted with two extrinsic curvatures we get\footnote{Notice that we take \eqref{Notation:extrinsic_curvature} as defining a spacetime tensor, $K\indices{^{\lambda}_{\mu \nu}} \equiv K\indices{^{a}_{\mu \nu}} n\indices{_a^{\lambda}}$. This tensor satisfies, in adapted coordinates, $K\indices{_{\lambda \mu \nu}} V^{\nu} = K\indices{_{\lambda \mu i}} V^{i}$ for any vector $V^{\mu}$.}
\begin{align} \label{Covariant:SecondDerivative}
\nonumber \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) = & 2 \left[ \perp\indices{^{\lambda_1 \lambda_2}} \left( \perp\indices{_{\mu_1 \mu_2}} \perp\indices{_{\nu_1 \nu_2}} - \epsilon\indices{_{\mu_1 \mu_2}} \epsilon\indices{_{\nu_1 \nu_2}} \right) + \epsilon\indices{^{\lambda_1 \lambda_2}} \left( \perp\indices{_{\mu_1 \mu_2}} \epsilon\indices{_{\nu_1 \nu_2}} + \epsilon\indices{_{\mu_1 \mu_2}} \perp\indices{_{\nu_1 \nu_2}} \right) \right] \\
& \times \frac{\partial^2 \mathcal{L}_E}{\partial R\indices{_{\mu_1 \rho_1 \nu_1 \sigma_1}} \partial R\indices{_{\mu_2\rho_2 \nu_2 \sigma_2}}} K\indices{_{\lambda_1 \rho_1 \sigma_1}} K\indices{_{\lambda_2 \rho_2 \sigma_2}} \, .
\end{align}
The operator for the type $A$ terms \eqref{NewFunctional:OperatorA} becomes
\begin{align} \label{Covariant:OperatorA}
\nonumber \mathcal{K}\indices{_{A I}} \hat{\partial}\indices{^{A I}} = & \bigg[ \frac{1}{2}\perp\indices{^{\lambda_1 \lambda_2}} h\indices{^{\tau_1 \tau_2}} h\indices{^{\omega_1 \omega_2}} \epsilon\indices{_{\mu \nu}} \epsilon\indices{_{\rho \sigma}} - 2 \epsilon\indices{^{\lambda_1 \lambda_2}} h\indices*{^{\tau_1}_{\rho}} h\indices*{^{\tau_2}_{\sigma}} h \indices{^{\omega_1 \omega_2}} \epsilon\indices{_{\mu \nu}} - 2 \perp\indices{^{\lambda_1 \lambda_2}} h\indices*{^{\tau_1}_{\mu}} h\indices*{^{\omega_1}_{\rho}} h\indices*{^{\tau_2}_{\nu}} h\indices*{^{\omega_2}_{\sigma}} \\
\nonumber & \; - 2 \big( \perp\indices{^{\lambda_1 \lambda_2}} \perp\indices{_{\mu \rho}} + \epsilon\indices{^{\lambda_1 \lambda_2}} \epsilon\indices{_{\mu \rho}} \big) h\indices*{^{\tau_1}_{\nu}} h\indices*{^{\tau_2}_{\sigma}} h\indices{^{\omega_1 \omega_2}} \bigg] K\indices{_{\lambda_1 \tau_1 \omega_1}} K\indices{_{\lambda_2 \tau_2 \omega_2}} \frac{\partial}{\partial R\indices{_{\mu \nu \rho \sigma}}} \\
& \; + 2 \left( \perp\indices{_{\mu_2}^{\mu_1}} \perp\indices{_{\rho_2}^{\rho_1}} - \epsilon\indices{_{\mu_2}^{\mu_1}} \epsilon\indices{_{\rho_2}^{\rho_1}} \right) h\indices*{^{\nu_1}_{\nu_2}} h\indices*{^{\sigma_1}_{\sigma_2}} R\indices{_{\mu_1 \nu_1 \rho_1 \sigma_1}} \frac{\partial}{\partial R\indices{_{\mu_2 \nu_2 \rho_2 \sigma_2}}} \, ,
\end{align}
while that of type $B$ terms reads
\begin{equation} \label{Covariant:OperatorB}
\mathcal{K}\indices{_{B I}} \hat{\partial}\indices{^{B I}} =4 \left[ \perp\indices*{_{\mu_2}^{\mu_1}} h\indices*{^{\nu_1}_{\nu_2}} h\indices*{^{\rho_1}_{\rho_2}} h\indices*{^{\sigma_1}_{\sigma_2}} + \perp\indices*{^{\mu_1}_{\mu_2}} \perp\indices*{^{\nu_1}_{\nu_2}} \perp\indices*{^{\rho_1}_{\rho_2}} h\indices*{^{\sigma_1}_{\sigma_2}} \right] R\indices{_{\mu_1 \nu_1 \rho_1 \sigma_1}} \frac{\partial}{\partial R\indices{_{\mu_2 \nu_2 \rho_2 \sigma_2}}} \, .
\end{equation}
Note that, since they always appear in pairs, all the binormals in these expressions could be replaced by normal projectors via the first identity in \req{relss}, so that the result would be written exclusively in terms of contractions of $h_{\mu\nu}$ and $\perp_{\mu\nu}$ with curvature tensors.
The covariant form of the full holographic entanglement entropy functional can be finally written as
\begin{align} \label{Covariant:FullFunctional}
&S_{\rm \scriptscriptstyle HEE}(A) = -2 \pi \int_{\Gamma_A} d^{D-2} y \, \sqrt{h} \, \Bigg[ \epsilon\indices{_{\mu \nu}} \epsilon\indices{_{\rho \sigma}}\frac{\partial \mathcal{L}_E}{\partial R\indices{_{\mu \nu \rho \sigma}}} \\
\nonumber & - \sum_{S = 0}^{\infty} \frac{1}{S!} \int_0^1 {\rm d}u \, 2u : \left(- (1-u^2) \mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} - (1- u)\mathcal{K}\indices{_{BJ}} \hat{\partial}^{BJ}\right)^{S} : \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \Bigg] ~ ,
\end{align}
where derivatives are to be taken respecting the normal ordering prescription introduced in \eqref{NewFunctional:NormalOrdering}, and the covariant form of the objects appearing in the last line are given in \eqref{Covariant:SecondDerivative}--\eqref{Covariant:OperatorB}.
\section{Explicit covariant form of the functionals}\label{sec:explicit}
In this section we present the explicit holographic entanglement entropy functionals for various classes of higher-curvature theories. As in the rest of the paper, our approach is to consider such terms as perturbative corrections to Einstein gravity, so that entanglement entropies are computed by evaluating on shell the corrected functionals ---obtained using the Einstein gravity splitting--- on the corresponding Ryu-Takayanagi surfaces. We start with a review of the previously known cases of $f(R)$, Lovelock and quadratic theories, for which the splitting problem plays no r\^ole (and hence the functionals can also be used non-perturbatively). Then, we present new functionals valid for general cubic and quartic theories at leading order in the couplings. We also show that for theories constructed from general contractions of the Ricci tensor and the metric, the anomaly piece vanishes at leading order in the couplings. We observe that the same happens for densities involving a single Riemann tensor, and make general comments on the structure of the perturbative functionals as a function of the number of Riemann tensors.
\subsection{$f(R)$ gravities}
Let us start with $f(R)$ theories. These are the simplest modifications of the Einstein-Hilbert action within the pure-metric class. For an action of the form
\begin{equation}
I_E^{ f(R)}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+ R+ f(R) \right]\, ,
\end{equation}
the HEE functional only contains a Wald-like piece and is simply given by \cite{Dong:2013qoa}
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{ f(R)}=\frac{\mathcal{A}(\Gamma_A)}{4G}+\frac{1}{4 G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{h} f'(R)\, .
\end{equation}
Since there is no anomaly piece, this expression can be used non-perturbatively in the putative $f(R)$ couplings by extremizing the full functional.
\subsection{Lovelock gravities}\label{love1}
Let us move to Lovelock theories \cite{Lovelock1,Lovelock2,Padmanabhan:2013xyr}. These are the most general diffeomorphism-invariant pure-metric theories of gravity which possess covariantly-conserved second-order equations of motion. The general Euclidean action in $d+1$ dimensions reads
\begin{equation}\label{lovel}
I_E^{\rm Lovelock}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+ R+\sum_{n=2}^{\lfloor \frac{d+1}{2} \rfloor } \lambda_n L^{2(n-1)} \mathcal{X}_{2n}(R) \right]\, ,
\end{equation}
where $\lfloor x \rfloor$ is the integer part of $x$, the $\lambda_n$ are dimensionless couplings and the order-$n$ invariants $\mathcal{X}_{2n}$ were defined in \req{AnomalyLovelock:LovelockLagrangian} above.
$\mathcal{X}_{2n}$ becomes the Euler density of compact manifolds when evaluated in $2n$ dimensions. The simplest Lovelock theories (besides Einstein gravity) correspond to the Gauss-Bonnet and cubic densities, which read respectively
\begin{align}
\mathcal{X}_4=&+ R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\, ,\\
\mathcal{X}_6=&+R^3-12R_{\mu}^{\nu} R_{\nu}^{\mu} R +16 R_{\mu}^{\nu}R_{\nu}^{\rho}R_{\rho}^{\mu}+24 R_{\mu\nu\rho\sigma}R^{\mu\rho}R^{\nu\sigma}+3 R R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} \\ &-24 R_{\mu\nu\rho\sigma}R^{\mu\nu\rho}\,_{\gamma}R^{\sigma\gamma}-8 R\indices{_{\mu}^{\rho}_{\nu}^{\sigma}} R\indices{_{\rho}^{\gamma}_{\sigma}^{\delta}} R\indices{_{\gamma}^{\mu}_{\delta}^{\nu}}+ 4 R\indices{_{\mu\nu}^{\rho\sigma}} R\indices{_{\rho\sigma}^{\gamma\delta}} R\indices{_{\gamma\delta}^{\mu\nu}}\, .
\end{align}
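As an aside, one can check numerically the well-known fact that $\mathcal{X}_4$ vanishes identically below four dimensions: in three dimensions the Riemann tensor is fixed by the Ricci tensor, and the Gauss-Bonnet combination cancels for any metric. A minimal sketch (flat-metric components and a random symmetric Ricci tensor, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# In 3d the Weyl tensor vanishes, so
#   R_{mnrs} = g_{mr}R_{ns} + g_{ns}R_{mr} - g_{ms}R_{nr} - g_{nr}R_{ms}
#              - (R/2)(g_{mr}g_{ns} - g_{ms}g_{nr}).
g = np.eye(3)
Ric = rng.normal(size=(3, 3))
Ric = (Ric + Ric.T) / 2          # random symmetric Ricci tensor
R = np.trace(Ric)

Riem = (np.einsum('mr,ns->mnrs', g, Ric) + np.einsum('ns,mr->mnrs', g, Ric)
        - np.einsum('ms,nr->mnrs', g, Ric) - np.einsum('nr,ms->mnrs', g, Ric)
        - R/2*(np.einsum('mr,ns->mnrs', g, g) - np.einsum('ms,nr->mnrs', g, g)))

X4 = (R**2 - 4*np.einsum('mn,mn->', Ric, Ric)
      + np.einsum('mnrs,mnrs->', Riem, Riem))
print(abs(X4) < 1e-12)  # True
```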
As we mentioned before, for theories beyond quadratic order, the splitting problem challenges the construction of general entanglement entropy functionals. However, the special structure of Lovelock theories makes them unaffected by the choice of splitting \cite{Camps:2014voa,Miao:2015iba,Camps:2016gfs}. The entanglement entropy is then unambiguously given by the JM functional \cite{Jacobson:1993xs,Hung:2011xb} previously mentioned. For a general Lovelock theory, it reads
\begin{equation}\label{jm}
S_{\rm \scriptscriptstyle HEE}^{\rm Lovelock}=\frac{\mathcal{A}(\Gamma_A)}{4G}+ \sum_{n=2}^{\lfloor \frac{d+1}{2} \rfloor} \frac{L^{2(n-1)}}{4G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{h} \lambda_n \Delta_n^{\rm Lovelock} \, , \end{equation}
where
\begin{equation}
\Delta_n^{\rm Lovelock}= n \mathcal{X}_{2(n-1)}(\mathcal{R})\, ,
\end{equation}
and the lower-order densities are computed with respect to the induced metric $h_{ij}$.
\subsection{Quadratic gravities}
Next we consider theories involving up to four derivatives of the metric. The most general action can be written as
\begin{equation}\label{quaact}
I_E^{\rm Riem^2}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R+L^2 \sum_{i=1}^3 \alpha_i \mathcal{L}_i^{(2)} \right]\, ,
\end{equation}
where
\begin{equation}
\mathcal{L}_1^{(2)}\equiv R^2\, , \quad \mathcal{L}_2^{(2)}\equiv R_{\mu\nu}R^{\mu\nu}\, , \quad \mathcal{L}_3^{(2)}\equiv R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\, .
\end{equation}
The HEE functional for this class of theories was first obtained in \cite{Fursaev:2013fta}. It reads
\begin{equation}\label{seerie2}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^2}=\frac{\mathcal{A}(\Gamma_A)}{4G}+ \frac{L^2}{4G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{h} \sum_{i=1}^3\alpha_i \Delta_i^{(2)}\, ,
\end{equation}
where
\begin{equation}
\Delta_1^{(2)}=2 R\, , \quad \Delta_2^{(2)}= R^a_a-\frac{1}{2}K^a K_a \, , \quad \Delta_3^{(2)}= 2 \left(R^{ab}_{ab}-K_{aij}K^{aij} \right)\, .
\end{equation}
Just like for $f(R)$ and Lovelock theories, there is no splitting problem in this case, as the expressions only involve terms quadratic in extrinsic curvatures. Consequently, \req{seerie2} can be trusted at all orders in $\alpha_i$.
When the terms are considered as perturbative corrections to Einstein gravity, the above expressions get slightly simplified, namely
\begin{equation}\label{seerie22}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^2}=\frac{\mathcal{A}(\Gamma_A)}{4G}+ \frac{L^2}{4G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{h} \sum_{i=1}^3\alpha_i \Delta_i^{(2)} + \mathcal{O}(\alpha_i^2)\, ,
\end{equation}
where now
\begin{equation}
\Delta_1^{(2)}=2 R\, , \quad \Delta_2^{(2)}= R^a_a \, , \quad \Delta_3^{(2)}= 2 \left(R^{ab}_{ab}-K_{aij}K^{aij} \right)\, .
\end{equation}
The difference with respect to the nonperturbative case is that now the functional to be extremized is the RT one, whose equation of motion reads $K^a=0$. We can then remove all traces of extrinsic curvature appearing in the higher-curvature functionals when looking for expressions valid at leading order in the couplings.
\subsection{Cubic gravities}
Let us now move to the cubic case. At this order there are eight independent invariants, in terms of which the most general action can be written as
\begin{equation}\label{cubic}
I_E^{\rm Riem^3}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R+L^4 \sum_{i=1}^8 \beta_i \mathcal{L}_i^{(3)} \right] \, .
\end{equation}
We label our basis of densities as follows
\begin{align}
&\mathcal{L}_1^{(3)} \equiv R\indices{_{\mu}^{\rho}_{\nu}^{\sigma}} R\indices{_{\rho}^\delta_\sigma^\gamma} R\indices{_\delta^\mu_\gamma^\nu} \, , &&\mathcal{L}_2^{(3)} \equiv R\indices{_{\mu\nu}^{\rho\sigma}}R\indices{_{\rho\sigma}^{\delta\gamma}} R\indices{_{\delta\gamma}^{\mu\nu}}\, , \\ \notag
&\mathcal{L}_3^{(3)}\equiv R_{\mu\nu\rho\sigma}R\indices{^{\mu\nu\rho}_\delta} R^{\sigma\delta}\, , &&\mathcal{L}_4^{(3)} \equiv R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}R \, , \\ \notag
& \mathcal{L}_5^{(3)}\equiv R_{\mu\nu\rho\sigma} R^{\mu \rho} R^{\nu\sigma}\, , && \mathcal{L}_6^{(3)}\equiv R_{\mu}^{\nu} R_{\nu}^{\rho}R_{\rho}^{\mu}\, , \\ \notag
& \mathcal{L}_7^{(3)} \equiv R_{\mu\nu}R^{\mu\nu} R\, , && \mathcal{L}_8^{(3)}\equiv R^3\, .
\end{align}
Using our new formula in \req{AnomalyCubic:ExpandedExpression} for the anomaly piece, we
find the following expression for the functional corresponding to a general cubic Lagrangian of the form (\ref{cubic}),
\begin{equation}\label{cubicfun}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^3}=\frac{\mathcal{A}(\Gamma_A)}{4G}+\frac{L^4}{4G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{h} \sum_{i=1}^8\beta_i \Delta_i^{(3)} + \mathcal{O}(\beta_i^2)\, ,
\end{equation}
where the new terms read
\begin{align}\label{cubic1}
\Delta_1^{(3)}=& +\frac{3}{2} \left( R^{a \nu}{}_{a \mu} R^{b \mu}{}_{b \nu} - R^{a \nu b \mu} R_{a \mu b \nu} \right)\\ \notag &-\frac{3}{2} R^{ijkl} K_{a i k} K^{a}{}_{jl} - 3 R^{a b i j} K_{a i}{}^{k} K_{b j k} + \frac{3}{4} R^{ab}{}_{ab} K^{c i j} K_{c i j} -
\frac{3}{8} K^{a i j} K_{a i j} K^{b k l} K_{b k l}\\ \notag & + \frac{9}{4} K_{a i}{}^j K_{bj}{}^k K^a{}_{k}{}^l K^b{}_l{}^i - \frac{3}{2} K_{a i}{}^j K^a{}_{j}{}^k K_{b k}{}^l K^b{}_l{}^i
- \frac{3}{4} K_{aij} K_{b k l} K^{bij} K^{akl}\, ,\\
\Delta_2^{(3)}=&+3 R^{a b \rho \sigma} R_{a b \rho \sigma} \\ \notag &-6 K_{a i}{}^k K_{b j k} \left( R^{a i b j} - R^{b i a j} \right) -6 K_{a i k} K^{a j k} R^{b i}{}_{bj} +3 K_{a i}{}^j K_{bj}{}^k K^a{}_{k}{}^l K^b{}_l{}^i \\ \notag &-6 K_{a i}{}^j K^a{}_{j}{}^k K_{b k}{}^l K^b{}_l{}^i \, , \\
\Delta_3^{(3)}=& +\frac{1}{2} R^{a \mu \nu \rho} R_{a \mu \nu \rho} +2 R^{a \lambda} R^b{}_{a b \lambda} \\ \notag &
- K^{a}{}_{i}{}^k K_{a j k} R^{ij} -\frac{1}{2} K^{aij} K_{aij} R^b{}_b \, ,\\
\Delta_4^{(3)}=&+ R_{\mu \nu \rho \sigma} R^{\mu \nu \rho \sigma} +2 R R^{ab}{}_{ab} \\ \notag & -2 K^{a i j} K_{a i j} R \, , \\
\Delta_5^{(3)}=&+ R_\mu{}^\nu R^{a \mu}{}_{a \nu} -\frac{1}{2} R^{ab} R_{ab} +\frac{1}{2} R^a{}_a R^b{}_b \, ,\\
\Delta_6^{(3)}=& +\frac{3}{2} R^{a \mu} R_{a \mu} \, , \\
\Delta_7^{(3)}=& +R_{\mu \nu} R^{\mu \nu} +R^a{}_a R
\, ,\\
\Delta_8^{(3)}=& +3R^2 \, .
\end{align}
In each case, the first line corresponds to the Wald-like piece, whereas the rest come from the anomaly one. In the above expressions we have already made use of the RT on-shell condition $K^a=0$. If they were to be used nonperturbatively (including extremization of the whole functionals, etc.), additional terms would appear \cite{Caceres:2020jrf}. However, in that case one would first need to find the right splitting in each case, and the whole functionals would (most likely) change completely ---although the results at $\mathcal{O}(\beta_i)$ would have to reduce to the ones obtained from the perturbatively valid functionals presented here.
We observe that the first two functionals, which are the only ones involving chains of three Riemann tensors, have the most complicated expressions. On the other hand, $\Delta_3^{(3)}$ and $\Delta_4^{(3)}$, which involve pairs of Riemann tensors, are simpler but still receive contributions from the anomaly piece. Finally, densities with a single Riemann tensor or none have a vanishing anomaly piece, and their HEE functionals at leading order are just given by the corresponding Wald-entropy expressions. We will see later that this hierarchy in the complexity of the functionals as a function of the number of Riemann tensors involved actually extends to general-order densities.
Besides the cubic Lovelock densities, there are other interesting theories one can consider and whose HEE functionals can be straightforwardly obtained by replacing the corresponding combinations of $\beta_i$ in \req{cubicfun}.
Below, when computing EE universal terms, we will also make explicit the results for a couple of such theories in $d=4$ and $d=3$, respectively. The first is five-dimensional Quasi-topological gravity \cite{Quasi2,Quasi,Oliva:2011xu,Oliva:2012zs}, whose action reads
\begin{equation}
I_E^{\rm QTG}=-\frac{1}{16\pi G} \int \mathrm{d}^{5}x \sqrt{g} \left[\frac{12}{L^2}+R+\frac{7\mu_{\rm QTG} L^4}{4} \mathcal{Z}_5\right]\, ,
\end{equation}
where
\begin{equation}
\mathcal{Z}_5\equiv \mathcal{L}_1^{(3)}-\frac{9}{7} \mathcal{L}_3^{(3)}+\frac{3}{8} \mathcal{L}_4^{(3)} +\frac{15}{7} \mathcal{L}_5^{(3)} + \frac{18}{7} \mathcal{L}_6^{(3)} -\frac{33}{14} \mathcal{L}_7^{(3)}+\frac{15}{56} \mathcal{L}_8^{(3)}\, ,
\end{equation}
and where we have omitted the Gauss-Bonnet density, which is usually also included in the action. The second is four-dimensional Einsteinian cubic gravity \cite{PabloPablo,Hennigar:2016gkm,PabloPablo2}, whose action is given by
\begin{equation}\label{cubic2}
I^{\rm ECG}_E=- \frac{1}{16\pi G} \int \mathrm{d} ^{4}x \sqrt{g} \left[\frac{6}{L^2}+R- \frac{\mu_{\rm ECG} L^4}{8} \mathcal{P} \right]\, ,
\end{equation}
where
\begin{equation}
\mathcal{P}\equiv 12 \mathcal{L}_{1}^{(3)}+\mathcal{L}_{2}^{(3)}-12\mathcal{L}_{5}^{(3)}+8\mathcal{L}_{6}^{(3)} \, .
\end{equation}
These theories define holographic toy models of non-supersymmetric CFTs in $d=4$ and $d=3$, respectively. Various holographic aspects of such models have been explored before {\it e.g.,}\ in \cite{Myers:2010jv,Myers:2010xs,Myers:2010tj,HoloECG,Bueno:2020odt,Bueno:2018yzo,Mir:2019ecg}. Among the special properties of Quasi-topological and Einsteinian cubic gravities are the facts that they possess second-order linearized equations on maximally symmetric backgrounds, that they admit generalizations of the Schwarzschild solution characterized by a single metric function, {\it i.e.,}\ satisfying $g_{tt}g_{rr}=-1$, and that the associated thermodynamic properties can be computed fully analytically \cite{Hennigar:2017ego,Bueno:2017sui,Bueno:2019ycr}.
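The single-function property can be made concrete with a schematic static ansatz (conventions assumed here, not necessarily those of the cited references):

```latex
% Static ansatz with two a priori independent metric functions:
\begin{equation}
\mathrm{d} s^2 = -N^2(r)\, f(r)\, \mathrm{d} t^2
  + \frac{\mathrm{d} r^2}{f(r)} + r^2\, \mathrm{d}\Sigma^2_{(d-1)}\, .
\end{equation}
```

For a generic higher-curvature theory the field equations force a nontrivial $N(r)$; for Quasi-topological and Einsteinian cubic gravities they consistently admit $N=\mathrm{const}$, which can be set to $1$ by rescaling $t$, so that $g_{tt}g_{rr}=-1$ and the black-hole problem reduces to a single equation for $f(r)$.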
\subsection{Quartic gravities}\label{quarticc}
At the next order, quartic in the curvature, there are 26 independent densities one can write ---see {\it e.g.,}\ \cite{0264-9381-9-5-003,Aspects},
\begin{equation}\label{quartic}
I_E^{\rm Riem^4}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R+L^6 \sum_{i=1}^{26} \gamma_i \mathcal{L}_i^{(4)} \right]\, ,
\end{equation}
where we choose our basis to be
\begin{align}
&\mathcal{L}_{26}^{(4)} \equiv R^4 \, , &&\mathcal{L}_{25}^{(4)} \equiv R^2 R\indices{_{\mu \nu}} R\indices{^{\mu \nu}} \, , \\ \notag
&\mathcal{L}_{24}^{(4)} \equiv R R\indices{_{\mu}^{\nu}} R\indices{_{\nu}^{\rho}} R\indices{_{\rho}^{\mu}} \, , &&\mathcal{L}_{23}^{(4)} \equiv R\indices{_{\mu \nu}} R\indices{^{\mu \nu}} R\indices{_{\rho \sigma}} R\indices{^{\rho \sigma}} \, , \\ \notag
&\mathcal{L}_{22}^{(4)} \equiv R\indices{_{\mu}^{\nu}} R\indices{_{\nu}^{\rho}} R\indices{_{\rho}^{\sigma}} R\indices{_{\sigma}^{\mu}} \, , &&\mathcal{L}_{21}^{(4)} \equiv R R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \rho}} R\indices{^{\nu \sigma}} \, , \\ \notag
&\mathcal{L}_{20}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{_{\mu \rho \nu \sigma}} R\indices{^{\delta \rho}} R\indices{_{\delta}^{\sigma}} \, , &&\mathcal{L}_{19}^{(4)} \equiv R^2 R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} \, , \\ \notag
&\mathcal{L}_{18}^{(4)} \equiv R R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho}_{\delta}} R\indices{^{\sigma \delta}} \, , &&\mathcal{L}_{17}^{(4)} \equiv R\indices{_{\delta \gamma}} R\indices{^{\delta \gamma}} R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} \, , \\ \notag
&\mathcal{L}_{16}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{_{\nu}^{\rho}} R\indices{^{\sigma \delta \gamma}_{\mu}} R\indices{_{\sigma \delta \gamma \rho}} \, , &&\mathcal{L}_{15}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{^{\rho \sigma}} R\indices{^{\delta \gamma}_{\mu \rho}} R\indices{_{\delta \gamma \nu \sigma}} \, , \\ \notag
&\mathcal{L}_{14}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{^{\rho \sigma}} R\indices{^{\delta}_{\mu}^{\gamma}_{\nu}} R\indices{_{\delta \rho \gamma \sigma}} \, , &&\mathcal{L}_{13}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{^{\rho \sigma}} R\indices{^{\delta}_{\mu}^{\gamma}_{\rho}} R\indices{_{\delta \nu \gamma \sigma}} \, , \\ \notag
&\mathcal{L}_{12}^{(4)} \equiv R R\indices{_{\mu \nu}^{\rho \sigma}} R\indices{_{\rho \sigma}^{\delta \gamma}} R\indices{_{\delta \gamma}^{\mu \nu}} \, , &&\mathcal{L}_{11}^{(4)} \equiv R R\indices{_{\mu}^{\rho}_{\nu}^{\sigma}} R\indices{_{\rho}^{\delta}_{\sigma}^{\gamma}} R\indices{_{\delta}^{\mu}_{\gamma}^{\nu}} \, , \\ \notag
&\mathcal{L}_{10}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{_{\mu}^{\rho}_{\nu}^{\sigma}} R{_{\delta \gamma \xi \rho}} R\indices{^{\delta \gamma \xi}_{\sigma}} \, , &&\mathcal{L}_{9}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{^{\rho \sigma \delta \gamma}} R\indices{_{\rho \sigma}^{\xi}_{\mu}} R\indices{_{\delta \gamma \xi \nu}} \, , \\ \notag
&\mathcal{L}_{8}^{(4)} \equiv R\indices{^{\mu \nu}} R\indices{^{\rho \sigma \delta \gamma}} R\indices{_{\rho}^{\xi}_{\delta \mu}} R\indices{_{\sigma \xi \gamma \nu}} \, , &&\mathcal{L}_{7}^{(4)} \equiv R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\delta \gamma \xi \chi}} R\indices{^{\delta \gamma \xi \chi}} \, , \\ \notag
&\mathcal{L}_{6}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu \rho}^{\delta}} R\indices{_{\gamma \xi \chi \sigma}} R\indices{^{\gamma \xi \chi}_{\delta}} \, , &&\mathcal{L}_{5}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu}^{\delta \gamma}} R\indices{_{\delta \gamma}^{\chi \xi}} R\indices{_{\rho \sigma \chi \xi}} \, , \\ \notag
&\mathcal{L}_{4}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu}^{\delta \gamma}} R\indices{_{\rho \delta}^{\chi \xi}} R\indices{_{\sigma \gamma \chi \xi}} \, , &&\mathcal{L}_{3}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu}^{\delta \gamma}} R\indices{_{\rho}^{\chi}_{\delta}^{\xi}} R\indices{_{\sigma \chi \gamma \xi}} \, , \\ \notag
&\mathcal{L}_{2}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu}^{\delta}_{\rho}^{\gamma}} R\indices{_{\delta}^{\chi}_{\gamma}^{\xi}} R\indices{_{\nu \chi \sigma \xi}} \, , &&\mathcal{L}_{1}^{(4)} \equiv R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu}^{\delta}_{\rho}^{\gamma}} R\indices{_{\delta}^{\chi}_{\nu}^{\xi}} R\indices{_{\gamma \chi \sigma \xi}} \, .
\end{align}
Using our formula in \req{AnomalyQuartic:ExpandedExpression}, we find the explicit functional for the above densities to be given by
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^4}=\frac{\mathcal{A}(\Gamma_A)}{4G}+\frac{L^6}{4G} \int_{\Gamma_A} \mathrm{d}^{d-1}y \sqrt{\gamma} \sum_{i=1}^{26}\gamma_i \Delta_i^{(4)} + \mathcal{O}(\gamma_i^2)\, ,
\end{equation}
where now we find
\begin{align}\label{quartic1}
\Delta_{26}^{(4)}=& + 4 R^3
\, ,\\
\Delta_{25}^{(4)}=& + 2 R R\indices{_{\mu \nu}} R\indices{^{\mu \nu}} + R^2 R\indices{^a_a}
\, ,\\
\Delta_{24}^{(4)}=& + R\indices{_{\mu}^{\nu}} R\indices{_{\nu}^{\rho}} R\indices{_{\rho}^{\mu}} + \frac{3}{2} R R^{a \mu} R_{a \mu}
\, ,\\
\Delta_{23}^{(4)}=& + 2 R\indices{^a_a} R\indices{_{\mu \nu}} R\indices{^{\mu \nu}}
\, ,\\
\Delta_{22}^{(4)}=& + 2 R\indices{^{a \mu}} R\indices{_a^{\nu}} R\indices{_{\mu \nu}}
\, ,\\
\Delta_{21}^{(4)}=& + R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \rho}} R\indices{^{\nu \sigma}} + R R\indices{^{a \mu}_{a \nu}} R\indices{^{\nu}_{\mu}} - \frac{1}{2} R R\indices{^{a b}} R\indices{_{a b}} + \frac{1}{2} R R\indices{^a_a} R\indices{^b_b}
\, ,\\
\Delta_{20}^{(4)}=& - R\indices{_{a \mu \nu \rho}} R\indices{^{a \rho}} R\indices{^{\mu \nu}} + \frac{1}{2} R\indices{^{a \mu}_{a \nu}} R\indices{_{\mu}^{\rho}} R\indices{_{\rho}^{\nu}} - \frac{1}{2} R\indices{_{\mu}^a} R\indices{_a^b} R\indices{_b^{\mu}} + \frac{1}{2} R\indices{^a_a} R\indices{_{\mu b}} R\indices{^{\mu b}}
\, ,\\
\Delta_{19}^{(4)}=& + 2 R R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} + 2 R^2 R\indices{^{a b}_{a b}} \\ \notag
& - 2 R^2 K\indices{^{a i j}} K\indices{_{a i j}}
\, ,\\
\Delta_{18}^{(4)}=& + R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho}_{\delta}} R\indices{^{\sigma \delta}} + \frac{1}{2} R R\indices{_{a \mu \nu \rho}} R\indices{^{a \mu \nu \rho}} + 2 R R\indices{^{a b}_{a \mu}} R\indices{^{\mu}_b} \\ \notag
& - \frac{1}{2} R R\indices{^a_a} K\indices{^{b i j}} K\indices{_{b i j}} - R R\indices{^{ij}} K\indices{_{a i k}} K\indices{^a_j^k}
\, ,\\
\Delta_{17}^{(4)}=& + R\indices{^a_a} R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} + 2 R\indices{^{a b}_{a b}} R\indices{_{\mu \nu}} R\indices{^{\mu \nu}} \\ \notag
& + 2 K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{_{b k}} R\indices{^{b k}} + \frac{2}{3} R\indices{_{b c}} R\indices{^{b c}} - R\indices{_{\mu \nu}} R\indices{^{\mu \nu}} - \frac{1}{3} R\indices{^b_b} R\indices{^c_c} \right)
\, ,\\
\Delta_{16}^{(4)}=& + R\indices{_{\mu}^a} R\indices{_{a \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} + 2 R\indices{_{\mu}^a} R\indices{^b_{a b \nu}} R\indices{^{\mu \nu}} \\ \notag
& - \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \left( \frac{1}{2} R\indices{_{b k}} R\indices{^{b k}} + \frac{1}{3} R\indices{_{b c}} R\indices{^{b c}} + \frac{1}{3} R\indices{^b_b} R\indices{^c_c} \right) - K\indices{^a_i^k} K\indices{_{a j k}} \left( R\indices{^{i l}} R\indices{^j_l} + \frac{1}{2} R\indices{^{i b}} R\indices{^j_b} \right)
\, ,\\
\Delta_{15}^{(4)}=& + R\indices{_{a \mu \rho \sigma}} R\indices{^a_{\nu}^{\rho \sigma}} R\indices{^{\mu \nu}} + 2 R\indices{^{a \mu}} R\indices{^{b \nu}} R\indices{_{a b \mu \nu}} \\ \notag
& - K\indices{^a_i^k} K\indices{_{a j k}} \left( R\indices{^b_b} R\indices{^{i j}} - \frac{1}{2} R\indices{^{i b}} R\indices{^j_b} \right) - K\indices{^a_i^k} K\indices{^b_{j k}} R\indices{^{[i}_a} R\indices{^{j]}_b}
\, ,\\
\Delta_{14}^{(4)}=& + R\indices{^a_{\mu a \rho}} R\indices{_{\nu \sigma}} R\indices{^{\mu \nu \rho \sigma}} + R\indices{^a_a} R\indices{^b_{\mu b \nu}} R\indices{^{\mu \nu}} - R\indices{_{a \mu b \nu}} R\indices{^{a b}} R\indices{^{\mu \nu}} \\ \notag
& + \frac{1}{24} K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^b_b} R\indices{^c_c} - 2 R\indices{_{b c}} R\indices{^{b c}} \right) - \frac{1}{4} K\indices{^a_i^k} K\indices{_{a j k}} R\indices{^{i b}} R\indices{^j_b} - \frac{1}{2} K\indices{^a_i^k} K\indices{^b_{j k}} R\indices{^{[i}_a} R\indices{^{j]}_b} \\ \notag
& - \frac{1}{2} K\indices{^a_{i j}} K\indices{_{a k l}} R\indices{^{i j}} R\indices{^{k l}} \, , \\
\Delta_{13}^{(4)}=& + R\indices{^a_{\mu \rho \nu}} R\indices{_a^{\mu}_{\sigma}^{\nu}} R\indices{^{\rho \sigma}} - R\indices{_{a \mu b \nu}} R\indices{^{a \nu}} R\indices{^{b \mu}} + R\indices{^a_{\mu a \nu}} R\indices{^{\mu b}} R\indices{^{\nu}_b} \\ \notag
& - \frac{1}{8} K\indices{^{a i j}} K\indices{_{a i j}} R\indices{^b_b} R\indices{^c_c} - \frac{1}{2} K\indices{^a_i^k} K\indices{_{a j k}} R\indices{^b_b} R\indices{^{i j}} - \frac{1}{2} K\indices{^a_{i j}} K\indices{_{a k l}} R\indices{^{i k}} R\indices{^{j l}} \, , \\
\Delta_{12}^{(4)}=& + R\indices{_{\mu \nu}^{\rho \sigma}} R\indices{_{\rho \sigma}^{\delta \gamma}} R\indices{_{\delta \gamma}^{\mu \nu}} + 3 R R\indices{_{a b \mu \nu}} R\indices{^{a b \mu \nu}} \\ \notag
& - 6 K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{a i b j}} - R\indices{^{b i a j}} \right) R - 6 K\indices{^a_i^k} K\indices{_{a j k}} R\indices{^{b i}_b^j} R + 3 K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R \\ \notag
& - 6 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R \, , \\
\Delta_{11}^{(4)}=& + R\indices{_{\mu}^{\rho}_{\nu}^{\sigma}} R\indices{_{\rho}^{\delta}_{\sigma}^{\gamma}} R\indices{_{\delta}^{\mu}_{\gamma}^{\nu}} + \frac{3}{2} R R\indices{^{a \mu}_{a \nu}} R\indices{^{b \nu}_{b \mu}} - \frac{3}{2} R R\indices{^{a \mu b \nu}} R\indices{_{b \mu a \nu}} \\ \notag
& - \frac{3}{2} K\indices{^a_{i j}} K\indices{_{a k l}} R\indices{^{i k j l}} R - 3 K\indices{_{a i}^k} K\indices{_{b j k}} R\indices{^{a b i j}} R + \frac{3}{4} K\indices{^{a i j}} K\indices{_{a i j}} R\indices{^{b c}_{bc}} R - \frac{3}{8} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R \\ \notag
& - \frac{3}{4} K\indices{_{a i j}} K\indices{_{b k l}} K\indices{^{b i j}} K\indices{^{a k l}} R + \frac{9}{4} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R - \frac{3}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R \, , \\
\Delta_{10}^{(4)}=& + \frac{1}{2} R\indices{^a_{\mu a \nu}} R\indices{^{\mu}_{\rho \sigma \delta}} R\indices{^{\nu \rho \sigma \delta}} + 2 R\indices{^{\mu \nu}} R\indices{_{\mu}^a_{\nu}^{\rho}} R\indices{^b_{a b \rho}} + \frac{1}{2} R\indices{^a_a} R\indices{_{b \mu \nu \rho}} R\indices{^{b \mu \nu \rho}} - \frac{1}{2} R\indices{^a_b} R\indices{_{a \mu \nu \rho}} R\indices{^{b \mu \nu \rho}} \\ \notag
& + K\indices{_{a i j}} K\indices{_{b k l}} R\indices{^{i[a}} R\indices{^{b]k j l}} - K\indices{_{a i j}} K\indices{^{a}_{k l}} \left( R\indices{^{i j}} R\indices{_b^{k b l}} - \frac{1}{2} R\indices{^{i b}} R\indices{_b^{k j l}} \right) \\ \notag
& - K\indices{_{a i}^k} K\indices{_{b j k}} R\indices{^{i [a}} R\indices{_c^{b] c j}} - \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^{k l}} R\indices{^{b}_{k b l}} + R\indices{^{b k}} R\indices{^{c}_{b c k}} + \frac{1}{2} R\indices{^b_b} R\indices{^{c d}_{c d}} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( R\indices{_{b l}} R\indices{^{b i j l}} - \frac{1}{2} R\indices{_b^i} R\indices{_c^{b c j}} + \frac{2}{3} R\indices{_{b c}} R\indices{^{b i c j}} - \frac{1}{6} R\indices{_b^b} R\indices{_c^{i c j}} - R\indices{_{l m}} R\indices{^{i l j m}} \right) \\ \notag
& + \frac{1}{2} K\indices{_{a l}^i} K\indices{_{b i}^j} K\indices{^b_j^k} K\indices{^a_{k m}} R\indices{^{l m}} - K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{^b_{l m}} R\indices{^{lm}} - \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b l}^k} K\indices{^b_{m k}} R\indices{^{l m}} \\ \notag
& + \frac{1}{8} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^c_c} - \frac{1}{4} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R\indices{^c_c} \, , \\
\Delta_{9}^{(4)}=& + \frac{1}{2} R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu}^{\xi a}} R\indices{_{\rho \sigma \xi a}} + R\indices{^{\mu \nu}} R\indices{^{a b \rho}_{\mu}} R\indices{_{a b \rho \nu}} - 2 R\indices{^{\mu a}} R\indices{^{\nu \rho b}_{\mu}} R\indices{_{\nu \rho a b}} \\ \notag
& + 2 K\indices{_{a i j}} K\indices{_{b k l}} R\indices{^{i k}} R\indices{^{j [a b] l}} - K\indices{_{a i j}} K\indices{^a_{k l}} R\indices{^{i k}} R\indices{_b^{j b l}} + K\indices{_{a i}^k} K\indices{_{b j k}} \left( - 4 R\indices{^{i l}} R\indices{^{j [a b]}_l} + R\indices{^{i c}} R\indices{^{a b j}_c} \right. \\ \notag
& \left. - 2 R\indices{^{a l}} R\indices{^{b [i j]}_l} + 6 R\indices{^{c [a}} R\indices{^{b] j i}_c} \right) + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( - 2 R\indices{^{i l}} R\indices{^{b j}_{b l}} - R\indices{_b^{i}} R\indices{_c^{b c j}} + R\indices{_{b l}} R\indices{^{b i j l}} \right. \\ \notag
& \left. - \frac{2}{3} R\indices{_{b c}} R\indices{^{b i c j}} - \frac{7}{6} R\indices{_b^b} R\indices{_c^{i c j}} \right) - \frac{3}{2} K\indices{_{a l}^i} K\indices{_{b i}^j} K\indices{^b_j^k} K\indices{^a_{k m}} R\indices{^{l m}} + \frac{3}{2} K\indices{_{a l}^i} K\indices{_{b i}^j} K\indices{^a_j^k} K\indices{^b_{k m}} R\indices{^{l m}} \\ \notag
& - \frac{3}{2} K\indices{_{a l}^i} K\indices{^a_i^j} K\indices{_{b j}^k} K\indices{^b_{k m}} R\indices{^{l m}} + \frac{3}{4} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^c_c} - \frac{3}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R\indices{^c_c} \, , \\
\Delta_{8}^{(4)}=& + \frac{1}{2} R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu}^a_{\rho}^{\xi}} R\indices{_{\nu a \sigma \xi}} + R\indices{^{\mu \nu}} R\indices{^{a \rho}_{[a| \mu}} R\indices{^b_{\rho |b] \nu}} + 2 R\indices{^{a \mu}} R\indices{_{\mu}^{\nu}_{[a|}^{\rho}} R\indices{^b_{\nu |b] \rho}} \\ \notag
& - \frac{1}{2} K\indices{_{a i j}} K\indices{_{b k l}} \left( R\indices{^{i[a}} R\indices{^{b] k j l}} + R\indices{^{i k}} R\indices{^{a b j l}} \right) + K\indices{_{a i j}} K\indices{^a_{k l}} \left( R\indices{^{i m}} R\indices{^{j k l}_m} - \frac{1}{4} R\indices{_b^i} R\indices{^{b k j l}} - \frac{1}{4} R\indices{_b^b} R\indices{^{i k j l}} \right) \\ \notag
& - K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{j l}} R\indices{^{a b i}_l} - \frac{1}{2} R\indices{^{a l}} R\indices{^{i j b}_l} + \frac{1}{2} R\indices{^{i [a}} R\indices{_c^{b] c j}} + \frac{3}{4} R\indices{_c^c} R\indices{^{a b i j}} \right) \\ \notag
& + \frac{1}{4} K\indices{_{a i}^k} K\indices{^a_{j k}} \left( R\indices{^{i j}} R\indices{^{b c}_{b c}} - R\indices{_b^i} R\indices{_c^{b c j}} \right) + \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^{b k}} R\indices{^c_{b c k}} + R\indices{^b_b} R\indices{^{c d}_{c d}} \right) \\ \notag
& - \frac{1}{2} K\indices{_{a l}^i} K\indices{_{b i}^j} K\indices{^b_j^k} K\indices{^a_{k m}} R\indices{^{l m}} + \frac{5}{4} K\indices{_{a l}^i} K\indices{_{b i}^j} K\indices{^a_j^k} K\indices{^b_{k m}} R\indices{^{l m}} - \frac{1}{4} K\indices{_{a l}^i} K\indices{^a_{i}^j} K\indices{_{b j}^k} K\indices{^b_{k m}} R\indices{^{l m}} \\ \notag
& - \frac{1}{2} K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^a_l^k} K\indices{^b_{m k}} R\indices{^{l m}} - \frac{1}{8} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b l}^k} K\indices{^b_{m k}} R\indices{^{l m}} - \frac{1}{8} K\indices{_{a i j}} K\indices{_{b k l}} K\indices{^{b i j}} K\indices{^{a k l}} R\indices{^c_c} \\ \notag
& - \frac{1}{8} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^c_c} + \frac{1}{2} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^c_c} - \frac{3}{8} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R\indices{^c_c} \, , \\
\Delta_{7}^{(4)}=& + 4 R\indices{_{\mu \nu \rho \sigma}} R\indices{^{\mu \nu \rho \sigma}} R\indices{^{a b}_{a b}} \\ \notag
& + \frac{64}{3} K\indices{_{a i j}} K\indices{_{b k l}} R\indices{^{i a j [b|}} R\indices{^{k |c] l}_c} - \frac{8}{3} K\indices{_{a i j}} K\indices{^a_{k l}} R\indices{^{i b j}_b} R\indices{^{k c l}_c} - 4 K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^{b c}_{b c}} R\indices{^{d e}_{de}} + 2 R\indices{^{b c d k}} R\indices{_{b c d k}} \right. \\ \notag
& \left. + 2 R\indices{^{b c k l}} R\indices{_{b c k l}} + \frac{8}{3} R\indices{^{b k c l}} R\indices{_{b k c l}} - \frac{4}{3} R\indices{^{b k c l}} R\indices{_{c k b l}} + \frac{4}{3} R\indices{_b^{k b l}} R\indices{^c_{k c l}} + 2 R\indices{^{b k l m}} R\indices{_{b k l m}} + R\indices{^{k l m n}} R\indices{_{k l m n}} \right) \\ \notag
& - 8 K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k l}} K\indices{^b_{m n}} R\indices{^{k m l n}} - 16 K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{_{c l m}} \left( R\indices{^{[b| k |c] l}} + R\indices{^{b c k l}} \right) \\ \notag
& - 8 K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{^b_{l m}} R\indices{_c^{k c l}} + 4 K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^{c d}_{c d}} \\ \notag
& + \frac{32}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^k} - \frac{32}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \\ \notag
& - \frac{8}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k l}} K\indices{_{c m n}} K\indices{^{c k l}} K\indices{^{b m n}} - \frac{4}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} K\indices{^{c m n}} K\indices{_{c m n}} \, , \\
\Delta_{6}^{(4)}=& + 4 R\indices{^{\mu \nu \rho \sigma}} R\indices{^a_{\nu \rho \sigma}} R\indices{^b_{a b \mu}} \\ \notag
& + 2 K\indices{_{a i j}} K\indices{_{b k l}} \left( R\indices{^{[b| i j m}} R\indices{^{|a] k l}_m} + \frac{4}{3} R\indices{^{i a j [b|}} R\indices{^{k |c] l}_c} + 2 R\indices{^{i k j [a|}} R\indices{^{l c |b]}_c} \right) \\ \notag
& + K\indices{_{a i j}} K\indices{^a_{k l}} \left( \frac{1}{3} R\indices{^{i b k}_{b}} R\indices{^{j c l}_{c}} - \frac{2}{3} R\indices{^{i b k c}} R\indices{^{(j}_b^{l)}_c} - \frac{7}{3} R\indices{^{i b j}_b} R\indices{^{k c l}_c} - R\indices{^{i b j m}} R\indices{^k_b^l_m} + 2 R\indices{^{i k j b}} R\indices{^{l c}_{b c}} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{i [b| c d}} R\indices{^{j |a]}_{c d}} + \frac{4}{3} R\indices{^{i [b| c l}} R\indices{^{j |a]}_{c l}} + \frac{4}{3} R\indices{^{i [a b] l}} R\indices{^{j c}_{c l}} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( - \frac{3}{2} R\indices{^{i b c d}} R\indices{^j_{b c d}} + 2 R\indices{^{i b}_{l [c}} R\indices{^{j c l}_{b]}} - 3 R\indices{^{i b l c}} R\indices{^j_{b l c}} - 2 R\indices{^{i l b c}} R\indices{^j_{l b c}} - R\indices{^{i b l m}} R\indices{^j_{b l m}} \right. \\ \notag
& \left. - 2 R\indices{^{i l b m}} R\indices{^j_{l b m}} - 2 R\indices{^{i l m n}} R\indices{^j_{l m n}} \right) + K\indices{^{a i j}} K\indices{_{a i j}} \left( - R\indices{^{b c}_{b c}} R\indices{^{d e}_{d e}} - \frac{3}{2} R\indices{^{b c d k}} R\indices{_{b c d k}} \right. \\ \notag
& \left. - R\indices{^{b c k l}} R\indices{_{b c k l}} - \frac{4}{3} R\indices{^{b k c l}} R\indices{_{b k c l}} + \frac{2}{3} R\indices{^{b k c l}} R\indices{_{c k b l}} - \frac{2}{3} R\indices{_b^{k b l}} R\indices{^c_{k c l}} - \frac{1}{2} R\indices{^{b k l m}} R\indices{_{b k l m}} \right) \\ \notag
& - 4 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{^b_{l j}} K\indices{^a_{m n}} R\indices{^{i m j n}} - 4 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} \left( R\indices{^{a b i j}} + R\indices{^{[a| i |b] j}} \right) \\ \notag
& - 2 K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} - 2 K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{_{c l m}} \left( R\indices{^{b c k l}} + R\indices{^{[b| k |c] l}} \right) \\ \notag
& - K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{^b_{l m}} R\indices{_c^{k c l}} - 2 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{^b_{l m}} R\indices{_c^{l c m}} + K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^{c d}_{c d}} \\ \notag
& + \frac{10}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} - 2 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} \\ \notag
& - \frac{2}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} - \frac{2}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^l} \\ \notag
& - \frac{4}{3} K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^a_k^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} + K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^k} \\ \notag
& - \frac{4}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} - \frac{1}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} K\indices{^{c m n}} K\indices{_{c m n}} \, , \\
\Delta_{5}^{(4)}=& + 4 R\indices{^{\mu \nu \rho \sigma}} R\indices{_{\mu \nu}^{a b}} R\indices{_{\rho \sigma a b}} \\ \notag
& + 16 K\indices{_{a i j}} K\indices{_{b k l}} R\indices{^{i [a b] k}} R\indices{^{j c l}_c} + 4 K\indices{_{a i j}} K\indices{^a_{k l}} \left( 2 R\indices{^{i b k c}} R\indices{^j_{[b}^l_{c]}} - R\indices{^{i b k}_b} R\indices{^{j c l}_c} \right) \\ \notag
& + 8 K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{i [b| c d}} R\indices{^{j |a]}_{c d}} + \frac{4}{3} R\indices{^{i [b| c l}} R\indices{^{j |a]}_{c l}} + \frac{8}{3} R\indices{^{i [a b] l}} R\indices{^{j c}_{l c}} + R\indices{^{[b| i l m}} R\indices{^{|a] j}_{l m}} \right) \\ \notag
& - 4 K\indices{_{a i}^k} K\indices{^a_{j k}} \left( R\indices{^{i b c d}} R\indices{^j_{b c d}} + \frac{8}{3} R\indices{^{i b l c}} R\indices{^j_{b l c}} - \frac{8}{3} R\indices{^{i b l}_{[c|}} R\indices{^{j c}_{l |b]}} + R\indices{^{i b l m}} R\indices{^j_{b l m}} \right) \\ \notag
& + 48 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} R\indices{^{i [a b] j}} + 24 K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} R\indices{^{a i b j}} \\ \notag
& - 12 K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{^c_{l}^m} K\indices{_{b m j}} R\indices{^{a i b j}} - 12 K\indices{_{c i}^k} K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} R\indices{^{a i b j}} \\ \notag
& - 12 K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} + 12 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} \\ \notag
& - 12 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} - 4 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} \, , \\
\Delta_{4}^{(4)}=& + 4 R\indices{^{a b \mu \nu}} R\indices{_{a \mu}^{\rho \sigma}} R\indices{_{b \nu \rho \sigma}} \\ \notag
& + 4 K\indices{_{a i j}} K\indices{_{b k l}} \left( 2 R\indices{^{i [a b] k}} R\indices{^{j c l}_c} - R\indices{^{[a| i k m}} R\indices{^{|b] l j}_m} \right) + 2 K\indices{_{a i j}} K\indices{^a_{k l}} \left( \frac{4}{3} R\indices{^{i b k}_{[c}} R\indices{^{l c j}_{b]}} \right. \\ \notag
& \left. - \frac{4}{3} R\indices{^{i b k c}}R\indices{^l_b^j_c} - R\indices{^{i b k m}}R\indices{^l_b^j_m} \right) + 2 K\indices{_{a i}^k} K\indices{_{b j k}} \left( - 2 R\indices{^{a b i j}} R\indices{^{c d}_{c d}} + \frac{4}{3} R\indices{^{i [a| c l}} R\indices{^{j |b]}_{c l}} \right. \\ \notag
& \left. + \frac{4}{3} R\indices{^{i [a b] l}} R\indices{^{j c}_{l c}} - 4 R\indices{^{i j a l}} R\indices{^{b c}_{l c}} + R\indices{^{[a| i l m}} R\indices{^{|b] j}_{l m}} - 2 R\indices{^{a b l m}} R\indices{^{i j}_{l m}} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( - 2 R\indices{^{i b c d}} R\indices{^j_{b c d}} - \frac{14}{3} R\indices{^{i b l c}} R\indices{^j_{b l c}} + \frac{20}{3} R\indices{^{i b l}_{[c|}} R\indices{^{j c}_{l |b]}} - R\indices{^{i b l m}} R\indices{^j_{b l m}} \right) \\ \notag
& - 4 K\indices{_{a i}^m} K\indices{_{b j m}} K\indices{^a_k^n} K\indices{^b_{l n}} R\indices{^{i j k l}} + 4 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} \left( R\indices{^{a b i j}} + 4 R\indices{^{i [a b] j}} \right) \\ \notag
& + 4 K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} \left( R\indices{^{a i b j}} - R\indices{^{a b i j}} \right) - 2 K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{^c_l^m} K\indices{_{b m j}} R\indices{^{a i b j}} \\ \notag
& - 2 K\indices{_{c i}^k} K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} R\indices{^{a i b j}} - 6 K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} \\ \notag
& + 2 K\indices{^{c l m}} K\indices{_{c l m}} K\indices{_{a i}^k} K\indices{_{b j k}} R\indices{^{a b i j}} + 2 K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} \\ \notag
& - 2 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} + \frac{2}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} \\ \notag
& + 2 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} - \frac{14}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} \\ \notag
& - \frac{4}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^k} + \frac{4}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \, , \\
\Delta_{3}^{(4)}=& + 2 R\indices{^{a b \mu \nu}} R\indices{_a^{\rho}_{\mu}^{\sigma}} R\indices{_{b \rho \nu \sigma}} + R\indices{^{\mu a \rho \sigma}} R\indices{^{\nu}_{a \rho \sigma}} R\indices{^b_{\mu b \nu}} - R\indices{^{\mu a \rho \sigma}} R\indices{^{\nu b}_{\rho \sigma}} R\indices{_{\mu b \nu a}} \\ \notag
& - 2 K\indices{_{a i j}} K\indices{_{b k l}} \left( R\indices{^{i k j [a|}} R\indices{^{l c |b]}_c} + R\indices{^{i k a b}} R\indices{^{j c l}_c} + 2 R\indices{^{i [a b] m}} R\indices{^{j k l}_m} + R\indices{^{[a| i k m}} R\indices{^{j l |b]}_m} \right) \\ \notag
& + K\indices{_{a i j}} K\indices{^a_{k l}} \left( 2 R\indices{^{i k b c}} R\indices{^j_b^l_c} - \frac{1}{2} R\indices{^{i k b c}} R\indices{^{j l}_{b c}} - R\indices{^{i k j b}} R\indices{^{l c}_{b c}} + R\indices{^{i k b m}} R\indices{^j_b^l_m} - \frac{1}{2} R\indices{^{i k b m}} R\indices{^{j l}_{b m}} \right. \\ \notag
& \left. + 2 R\indices{^{i b m}_b} R\indices{^{j k l}_m} - \frac{1}{2} R\indices{^{i k m n}} R\indices{^{j l}_{m n}} \right) - K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{i [a b] j}} R\indices{^{c d}_{c d}} + R\indices{^{a b i j}} R\indices{^{c d}_{c d}} \right. \\ \notag
& \left. + \frac{1}{2} R\indices{^{i [a| c d}} R\indices{^{j |b]}_{c d}} + 2 R\indices{^{i j a l}} R\indices{^{b c}_{l c}} + R\indices{^{[a| l i m}} R\indices{^{|b]}_m^j_l} + R\indices{^{a b l m}} R\indices{^{i j}_{l m}} + 2 R\indices{^{a l b m}} R\indices{^i_{[l}^j_{m]}} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( \frac{1}{2} R\indices{_b^{i b j}} R\indices{^{c d}_{c d}} - \frac{1}{4} R\indices{^{i b c d}} R\indices{^j_{b c d}} - R\indices{^b_{l b m}} R\indices{^{i l j m}} + \frac{1}{2} R\indices{^{i l b m}} R\indices{^j_{m b l}} \right) \\ \notag
& + \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^{b c}_{b c}} R\indices{^{d e}_{d e}} + R\indices{^{b c d k}} R\indices{_{b c d k}} + R\indices{^{b c k l}} R\indices{_{b c k l}} \right) \\ \notag
& + K\indices{_{a i}^m} K\indices{_{b j m}} K\indices{^a_k^n} K\indices{^b_{l n}} \left( \frac{1}{2} R\indices{^{i l j k}} - R\indices{^{i j k l}} - \frac{3}{2} R\indices{^{i k j l}} \right) - \frac{1}{2} K\indices{_{a i}^m} K\indices{^a_{j m}} K\indices{_{b k}^n} K\indices{^b_{l n}} R\indices{^{i k j l}} \\ \notag
& + K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{^a_m^n} K\indices{^b_{n l}} R\indices{^{i k j l}} - 2 K\indices{_{a i j}} K\indices{^a_k^m} K\indices{_{b m}^n} K\indices{^b_{n l}} R\indices{^{i k j l}} \\ \notag
& - K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} \left( 2 R\indices{^{b i a j}} + 3 R\indices{^{a i b j}} \right) + K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} \left( 3 R\indices{^{a i b j}} + R\indices{^{b i a j}} - 2 R\indices{^{a b i j}} \right) \\ \notag
& + \frac{1}{2} K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{^c_l^m} K\indices{_{b m j}} R\indices{^{a i b j}} + \frac{1}{2} K\indices{_{c i}^k} K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} R\indices{^{a i b j}} + K\indices{_a^{l m}} K\indices{_{b l m}} K\indices{_{c i}^k} K\indices{^c_{j k}} R\indices{^{a i b j}} \\ \notag
& - 2 K\indices{^{c l m}} K\indices{_{b l m}} K\indices{_{a i}^k} K\indices{_{c j k}} R\indices{^{a i b j}} + K\indices{^{c l m}} K\indices{_{c l m}} K\indices{_{a i}^k} K\indices{_{b j k}} \left( R\indices{^{a b i j}} + \frac{3}{4} R\indices{^{b i a j}} + \frac{1}{4} R\indices{^{a i b j}} \right) \\ \notag
& + \frac{3}{2} K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} - \frac{1}{2} K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^b_m^k} K\indices{^a_{i j}} R\indices{_c^{i c j}} - \frac{5}{4} K\indices{^{b l m}} K\indices{_{b l m}} K\indices{_{a i}^k} K\indices{^a_{j k}} R\indices{_c^{i c j}} \\ \notag
& + \frac{1}{4} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} - \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^{c d}_{c d}} \\ \notag
& + \frac{2}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} + 2 K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} \\ \notag
& - \frac{4}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} - \frac{1}{3} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{_{c k}^i} K\indices{^a_l^m} K\indices{^b_m^n} K\indices{^c_n^l} \\ \notag
& - \frac{1}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^l} - \frac{2}{3} K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^a_k^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \\ \notag
& - \frac{1}{6} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} + \frac{1}{12} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} K\indices{^{c m n}} K\indices{_{c m n}} \, , \\
\Delta_{2}^{(4)}=& + 4 R\indices{^{\mu \nu \rho \sigma}} R\indices{^{[a}_{\mu a \rho}} R\indices{^{b]}_{\nu b \sigma}} \\ \notag
& + 2 K\indices{_{a i j}} K\indices{_{b k l}} \left( \frac{2}{3} R\indices{^{i a j [c|}} R\indices{^{k |b] l}_c} + 2 R\indices{^{i k j [a|}} R\indices{^{l c |b]}_c} - R\indices{^{i [a b] k}} R\indices{^{j c l}_c} \right) \\ \notag
& + K\indices{_{a i j}} K\indices{^a_{k l}} \left( R\indices{^{i k j l}} R\indices{^{b c}_{b c}} - 2 R\indices{^{i k j b}} R\indices{^{l c}_{b c}} - \frac{1}{6} R\indices{^{i b j}_b} R\indices{^{k c l}_c} - \frac{1}{2} R\indices{^{i b k}_b} R\indices{^{j c l}_c} - \frac{4}{3} R\indices{^{i b j c}} R\indices{^k_b^l_c} \right. \\ \notag
& \left. + R\indices{^{i b k c}} R\indices{^{[j}_b^{l]}_c} + R\indices{^{i k b c}} R\indices{^{j l}_{b c}} - 2 R\indices{^{i b j m}} R\indices{^k_b^l_m} - 2 R\indices{^{i m j n}} R\indices{^k_m^l_n} \right) \\ \notag
& - 2 K\indices{_{a i}^k} K\indices{_{b j k}} \left( \frac{8}{3} R\indices{^{i [a b] l}} R\indices{^{j c}_{l c}} + \frac{2}{3} R\indices{^{i [a| c l}} R\indices{^{j |b]}_{c l}} + R\indices{^{[a| l i m}} R\indices{^{|b]}_l^j_m} \right) \\ \notag
& - K\indices{_{a i}^k} K\indices{^a_{j k}} \left( R\indices{^{i b c d}} R\indices{^j_{b c d}} + \frac{4}{3} R\indices{^{i b l c}} R\indices{^j_{b l c}} + \frac{4}{3} R\indices{^{i b l}_{[b|}} R\indices{^{j c}_{l |c]}} + 2 R\indices{^{i l b c}} R\indices{^j_{l b c}} + R\indices{^{i l b m}} R\indices{^j_{l b m}} \right) \\ \notag
& - \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} \left( R\indices{^{b c}_{b c}} R\indices{^{d e}_{d e}} + R\indices{^{b c d k}} R\indices{_{b c d k}} + \frac{2}{3} R\indices{^{b k c l}} R\indices{_{(b| k |c) l}} - \frac{1}{3} R\indices{_b^{k b l}} R\indices{^c_{k c l}} \right) \\ \notag
& + 2 K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{^a_m^n} K\indices{^b_{n l}} R\indices{^{i k j l}} - 2 K\indices{_a^{m n}} K\indices{_{b m n}} K\indices{^a_{i j}} K\indices{^b_{k l}} R\indices{^{i k j l}} - \frac{1}{2} K\indices{^{a m n}} K\indices{_{a m n}} K\indices{_{b i j}} K\indices{^b_{k l}} R\indices{^{i k j l}} \\ \notag
& - 2 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} \left( 3 R\indices{^{a b i j}} + R\indices{^{i (a b) j}} \right) + K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} \left( 2 R\indices{^{a b i j}} - R\indices{^{a i b j}} - 2 R\indices{^{b i a j}} \right) \\ \notag
& + \frac{1}{2} K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{^c_l^m} K\indices{_{b m j}} R\indices{^{a i b j}} + \frac{1}{2} K\indices{_{c i}^k} K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} R\indices{^{a i b j}} - \frac{3}{2} K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} \\ \notag
& - K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^b_m^k} K\indices{^a_{i j}} R\indices{_c^{i c j}} - \frac{1}{2} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} + \frac{1}{2} K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^{a k l}} K\indices{^b_{k l}} R\indices{^{c d}_{c d}} \\ \notag
& + \frac{1}{2} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} R\indices{^{c d}_{c d}} + \frac{5}{6} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} \\ \notag
& - \frac{7}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} + \frac{3}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} \\ \notag
& - \frac{1}{3} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^l} + \frac{4}{3} K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^a_k^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \\ \notag
& + K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^k} - \frac{2}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \\ \notag
& - \frac{4}{3} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_b^{k l}} K\indices{_{c k l}} K\indices{^{b m n}} K\indices{^c_{m n}} + \frac{1}{6} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{^{b k l}} K\indices{_{b k l}} K\indices{^{c m n}} K\indices{_{c m n}} \, , \\
\Delta_{1}^{(4)}=& + 4 R\indices{^{a \mu \rho \nu}} R\indices{^{\sigma}_{\nu [a| \mu}} R\indices{^b_{\rho |b] \sigma}} \\ \notag
& + K\indices{_{a i j}} K\indices{_{b k l}} \left( \frac{4}{3} R\indices{^{i a j [c|}} R\indices{^{k |b] l}_c} + 2 R\indices{^{i k j [a|}} R\indices{^{l c |b]}_c} + 4 R\indices{^{i [a b] m}} R\indices{^{j k l}_m} + R\indices{^{[a| i j m}} R\indices{^{|b] k l}_m} \right) \\ \notag
& + K\indices{_{a i j}} K\indices{^a_{k l}} \left( - R\indices{^{i k j b}} R\indices{^{l c}_{b c}} - \frac{1}{3} R\indices{^{i b k}_b} R\indices{^{j c l}_c} - \frac{2}{3} R\indices{^{i b j c}} R\indices{^k_b^l_c} - \frac{2}{3} R\indices{^{i b k c}} R\indices{^j_b^l_c} + \frac{1}{3} R\indices{^{i b k c}} R\indices{^l_b^j_c} \right. \\ \notag
& \left. + R\indices{^{i k b c}} R\indices{^{j l}_{b c}} + 2 R\indices{_b^{i b m}} R\indices{^{j k l}_m} - \frac{1}{2} R\indices{^{i b j m}} R\indices{^k_b^l_m} - R\indices{^{i b k m}} R\indices{^j_b^l_m} + \frac{1}{2} R\indices{^{i k b m}} R\indices{^{j l}_{b m}} - R\indices{^{i m k n}} R\indices{^j_m^l_n} \right) \\ \notag
& + K\indices{_{a i}^k} K\indices{_{b j k}} \left( 3 R\indices{^{i [a b] j}} R\indices{^{c d}_{c d}} + \frac{1}{2} R\indices{^{i [a| c d}} R\indices{^{j |b]}_{c d}} + \frac{2}{3} R\indices{^{i [a b] l}} R\indices{^{j c}_{l c}} - 4 R\indices{^{i [a| j l}} R\indices{^{|b] c}_{l c}} + \frac{2}{3} R\indices{^{i [a| c l}} R\indices{^{j |b]}_{c l}} \right. \\ \notag
& \left. - 2 R\indices{^{a l b m}} R\indices{^i_{[l}^j_{m]}} \right) + K\indices{_{a i}^k} K\indices{^a_{j k}} \left( \frac{1}{2} R\indices{_b^{i b j}} R\indices{^{c d}_{c d}} - \frac{3}{4} R\indices{^{i b c d}} R\indices{^j_{b c d}} + \frac{1}{6} R\indices{^{i b l}_b} R\indices{^{j c}_{l c}} + \frac{1}{3} R\indices{^{i b l c}} R\indices{^j_{(b c) l}} \right. \\ \notag
& \left. - R\indices{^{i l b c}} R\indices{^j_{l b c}} - R\indices{^{i l j m}} R\indices{^b_{l b m}} \right) + \frac{1}{4} K\indices{^{a i j}} K\indices{_{a i j}} \left( 2 R\indices{^{b k c l}} R\indices{_{[b| k |c] l}} - R\indices{_b^{k b l}} R\indices{^c_{k c l}} \right) \\ \notag
& + K\indices{_{a i}^m} K\indices{_{b j m}} K\indices{^a_k^n} K\indices{^b_{l n}} R\indices{^{i (k l) j}} - \frac{1}{2} K\indices{_{a i}^m} K\indices{^a_{j m}} K\indices{_{b k}^n} K\indices{^b_{l n}} R\indices{^{i k j l}} - 2 K\indices{_{a i j}} K\indices{_{b k}^m} K\indices{^a_m^n} K\indices{^b_{n l}} R\indices{^{i k j l}} \\ \notag
& - 2 K\indices{_{a i}^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_{m j}} \left( 2 R\indices{^{a b i j}} + R\indices{^{b i a j}} \right) + K\indices{_{a i}^k} K\indices{_{c k}^l} K\indices{_{b l}^m} K\indices{^c_{m j}} \left( 2 R\indices{^{a b i j}} + 3 R\indices{^{b i a j}} - R\indices{^{a i b j}} \right) \\ \notag
& + K\indices{_a^{l m}} K\indices{_{b l m}} K\indices{_{c i}^k} K\indices{^c_{j k}} R\indices{^{a i b j}} - 2 K\indices{_a^{l m}} K\indices{_{c l m}} K\indices{_{b i}^k} K\indices{^c_{j k}} R\indices{^{b i a j}} \\ \notag
& + \frac{1}{2} K\indices{^{c l m}} K\indices{_{c l m}} K\indices{_{a i}^k} K\indices{_{b j k}} \left( 5 R\indices{^{a i b j}} - 3 R\indices{^{b i a j}} \right) + K\indices{_{a i}^k} K\indices{^a_k^l} K\indices{_{b l}^m} K\indices{^b_{m j}} R\indices{_c^{i c j}} \\ \notag
& - K\indices{_{a k}^l} K\indices{_{b l}^m} K\indices{^b_m^k} K\indices{^a_{i j}} R\indices{_c^{i c j}} - \frac{3}{2} K\indices{^{b l m}} K\indices{_{b l m}} K\indices{_{a i}^k} K\indices{^a_{j k}} R\indices{_c^{i c j}} + \frac{3}{4} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{^a_k^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} \\ \notag
& - \frac{1}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^i} R\indices{^{c d}_{c d}} + \frac{13}{6} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^i} \\ \notag
& + \frac{1}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^i} - \frac{7}{6} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^i} \\ \notag
& - \frac{1}{3} K\indices{_{a i}^j} K\indices{_{b j}^k} K\indices{_{c k}^i} K\indices{^a_l^m} K\indices{^b_m^n} K\indices{^c_n^l} - \frac{1}{2} K\indices{_{a i}^j} K\indices{^a_j^k} K\indices{_{b k}^i} K\indices{_{c l}^m} K\indices{^c_m^n} K\indices{^b_n^l} \\ \notag
& - K\indices{_a^{i j}} K\indices{_{b i j}} K\indices{^a_k^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} - \frac{13}{12} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{_{c l}^m} K\indices{^b_m^n} K\indices{^c_n^k} \\ \notag
& + \frac{5}{6} K\indices{^{a i j}} K\indices{_{a i j}} K\indices{_{b k}^l} K\indices{^b_l^m} K\indices{_{c m}^n} K\indices{^c_n^k} \, .
\end{align}%
Again, we observe that the greater the number of Riemann tensors involved in the corresponding density, the more complicated the expressions. In particular, for theories with zero or one Riemann tensors, the contribution comes completely from the Wald piece. For densities with two Riemanns we get contributions which are quadratic in extrinsic curvatures, for those with three Riemanns, we get terms which are quartic, and for densities with four Riemann tensors there are terms involving up to six extrinsic curvatures.
\subsection{$\mathcal{L}(g_{\mu\nu},R_{\rho\sigma})$ gravities}\label{subsec:functional_Ricci}
Let us now consider densities constructed from general contractions of the Ricci tensor, {\it i.e.,}\ of the form
\begin{equation}\label{lricci}
I_E^{\rm \mathcal{L}(Ricci)}=-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R+\lambda \mathcal{L}(g_{\mu\nu},R_{\rho\sigma})\right]\, ,
\end{equation}
where $\lambda$ is some constant.
By looking at the quadratic, cubic and quartic densities of this kind, we observe that no contribution from the anomaly part arises in the HEE functional when those terms are considered perturbatively. As we show now, this is in fact a general property which holds for all theories of the form (\ref{lricci}).
The proof goes as follows. For the anomaly term, we need to compute the second derivative of the Lagrangian with respect to $R_{z i z j}$ and $R_{\bar{z} k \bar{z} l}$. Let us consider first the one with $R_{z i z j}$. Since the Lagrangian is a contraction of $n$ Ricci tensors for an $n$-th order theory, we can expand the derivative as
\begin{equation} \label{functional_Ricci:Lagrangian_expansion}
\frac{\partial \mathcal{L}}{\partial R_{z i z j}} = \sum_{k=1}^{n} \frac{\partial R_{\mu \nu}}{\partial R_{z i z j}} T_{(k)}^{\mu \nu} \, ,
\end{equation}
where $T_{(k)}^{\mu \nu}$ represents the remaining part of the Lagrangian contracted with each of the Ricci tensors ---this can include metric tensors, so the previous expansion is also valid when there are Ricci scalars in the Lagrangian. Now, it can be shown from \req{SymmetryFactors:RiemannDerivative} that
\begin{equation} \label{functional_Ricci:derivative_Ricci}
\frac{\partial R_{\mu \nu}}{\partial R_{\alpha \beta \gamma \delta}} = \delta_{(\mu}^{[\beta} g^{\alpha] [\gamma} \delta_{\nu)}^{\delta]}\quad \Rightarrow \quad \frac{\partial R_{\mu \nu}}{\partial R_{z i z j}} = \frac{1}{4} h^{ij} \delta_{\mu}^{z} \delta_{\nu}^z \, ,
\end{equation}
since $g^{zz} = g^{z i} = 0$. Therefore, \eqref{functional_Ricci:Lagrangian_expansion} is proportional to $h^{ij}$. An analogous argument with the other derivative shows that it is proportional to $h^{kl}$. The conclusion is that the anomaly term is then some expression containing curvature tensors in which we have to perform the $\alpha$-expansion, times the following contraction of extrinsic curvatures:
\begin{equation} \label{functional_Ricci:K_contraction}
h^{ij}h^{kl} K_{z i j} K_{\bar{z} k l} = K_z K_{\bar{z}} = \frac{1}{4} K^a K_a \, ,
\end{equation}
which vanishes when evaluated for the RT surface. Hence, the anomaly part of the functional does not contribute perturbatively for theories constructed from general contractions of the Ricci tensor. Note that this is actually true irrespective of the splitting being used.
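The vanishing of this contraction on the RT surface can be checked with a few lines of computer algebra. The following sympy sketch (our own; the normalization $K_z=\tfrac{1}{2}(K_1-iK_2)$ relating the complex-normal traces to the two real ones is an assumed convention, consistent with the factor $1/4$ above) verifies that $K_z K_{\bar z}=\tfrac{1}{4}K^aK_a$:

```python
# Sketch: verify K_z K_zbar = (1/4) K^a K_a for the traces of the extrinsic
# curvature.  The relation K_z = (K_1 - i K_2)/2 between the complex-normal
# and real-normal traces is an assumed normalization convention.
import sympy as sp

K1, K2 = sp.symbols('K1 K2', real=True)

Kz = (K1 - sp.I * K2) / 2
Kzbar = (K1 + sp.I * K2) / 2

lhs = sp.expand(Kz * Kzbar)
rhs = (K1**2 + K2**2) / 4   # (1/4) K^a K_a, with a = 1, 2

assert sp.simplify(lhs - rhs) == 0
```

In particular, the product vanishes exactly when both real traces do, which is the situation on the RT surface.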
Hence, for theories of this kind one finds
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm \mathcal{L}(Ricci)}=\frac{\mathcal{A}(\Gamma_A)}{4G}+\frac{\lambda}{8G} \int_{\Gamma_A} \mathrm{d}^{d-1}y\sqrt{h}\, \frac{\partial \mathcal{L}}{\partial R_{\mu \nu}} \bot_{\mu\nu} + \mathcal{O}(\lambda^2)\, .
\end{equation}
We emphasize that this formula holds for general-order densities of the form $\mathcal{L}(g_{\mu\nu},R_{\rho\sigma})$. Hence, we observe that, at least perturbatively in the higher-curvature couplings, the purely-Wald nature of the $f(R)$ functional actually extends to the much greater family of densities constructed from arbitrary contractions of the Ricci tensor and the metric.
\subsection{General structure depending on the number of Riemann tensors}
\label{subsec:functional_Riemann}
The observations made in the previous subsections suggest a more general pattern which we explore here.
The starting point is the observation made in subsection \ref{subsec:functional_Ricci} that whenever one of the two derivatives appearing in \eqref{NewFunctional:ShorthandSecondDerivative} hits a Ricci tensor, the contraction of the resulting intrinsic metric with the extrinsic curvature produces a trace, $K\indices{_z}$ or $K\indices{_{\bar{z}}}$, which is zero for the RT surface (and therefore also for the perturbative functional). Consider then an $n$-th order curvature density containing $n_R$ Riemann tensors and $n - n_R$ Ricci tensors or scalars. After the two derivatives are taken, the only non-vanishing pieces will be of the form
\begin{equation} \label{GeneralStructureDerivatives:SecondDerivative}
\left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \sim \sum K^2{\rm Ricci}_1 \dots {\rm Ricci}_{n-n_R} {\rm Riem}_1 \dots {\rm Riem}_{n_R-2} \, .
\end{equation}
In this expression, we use the symbol $\sim$ to represent the structure of the object in terms of the curvature tensors appearing, ignoring the particular components. The sum means that several terms with this structure will show up in general. Each ${\rm Ricci}_k$ represents a particular component of the Ricci tensor or scalar and, analogously, ${\rm Riem}_k$ represents a component of the Riemann tensor.
Observe now the following property. Writing explicitly the Ricci tensor and scalar in terms of Riemann tensor components, we get
\begin{align} \label{GeneralStructureDerivatives:RicciComponents}
\nonumber R\indices{_{z z}} & = h\indices{^{i j}} R\indices{_{z i z j}} ~, & R\indices{_{z \bar{z}}} & = - 2 R\indices{_{z \bar{z} z \bar{z}}} + h\indices{^{ij}} R\indices{_{z i \bar{z} j}} \, , \\
\nonumber R\indices{_{z i}} & = - 2 R\indices{_{z \bar{z} z i}} + h\indices{^{j k}} R\indices{_{z j i k}} \, , & R\indices{_{i j}} & = 2 R\indices{_{z i \bar{z} j}} + 2 R\indices{_{z j \bar{z} i}} + h\indices{^{k l}} R\indices{_{i k j l}} \, , \\
R & = 4 R\indices{_{z \bar{z}}} + h\indices{^{i j}} R\indices{_{i j}} \, ,
\end{align}
plus the ones obtained by complex conjugation. Then, the differential operators defined in \req{NewFunctional:OperatorA} and \req{NewFunctional:OperatorB} act on these components as follows
\begin{align} \label{GeneralStructureDerivatives:OperatorsAction}
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} R\indices{_{z z}} & = R\indices{_{z z}} \, , & \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} R\indices{_{z z}} & = 0 \, , \\
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} R\indices{_{z \bar{z}}} & = 0 \, , & \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} R\indices{_{z \bar{z}}} & = 0 \, , \\
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} R\indices{_{z i}} & = 0 \, , & \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} R\indices{_{z i}} & = R\indices{_{z i}} \, , \\
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} R\indices{_{i j}} & = - K\indices{_{a i j}} K\indices{^a} \, , & \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} R\indices{_{i j}} & = 0 \, , \\
\mathcal{K}\indices{_{AI}} \hat{\partial}^{AI} R & = -K\indices{_a} K\indices{^a} \, , & \mathcal{K}\indices{_{BI}} \hat{\partial}^{BI} R & = 0 \, .
\end{align}
Notice also that if the Ricci components are acted upon with several powers of the differential operators in normal order, like in the functional \eqref{NewFunctional:FinalForm}, the remaining powers would not act on the curvature tensors appearing in the right-hand side of the previous expressions. In any case, the relevant observation is that after applying the differential operator, any Ricci factor in \eqref{GeneralStructureDerivatives:SecondDerivative} generates either something proportional to the very same component or something proportional to $K\indices{^a}$. When evaluated at the RT surface, this second possibility gives zero, so in a perturbative functional no Ricci tensor component can ever generate powers of the extrinsic curvature. This is not the case with Riemann tensor components, for which the differential operator generates non-vanishing contractions of extrinsic curvatures in general.\footnote{This is not true for \emph{all} Riemann tensor components. As shown in subsection \ref{subsec:MixedTypes}, some components do not generate extrinsic curvatures, and a second derivative monomial of the form \eqref{MixedTypes:ExampleTerm} produces only something with the structure ${\rm Riem}^3 + {\rm Riem}^2 K^2$, as in \eqref{MixedTypes:ResultDerivatives}.} The conclusion is that the expression which results from applying the full differential operator of the anomaly term to a second derivative of the form \eqref{GeneralStructureDerivatives:SecondDerivative} has the structure
\begin{align}
&\sum_{\alpha} \frac{1}{1 + q_{\alpha}} \left. \left( \frac{8 \partial^2 \mathcal{L}_E}{\partial {\rm Riem}^2} K^2 \right) \right._{\alpha} \\ \notag & \quad \sim \sum{\rm Ricci}^{n-n_R} \left( {\rm Riem}^{n_R-2} K^2 + {\rm Riem}^{n_R-3} K^4 + \dots + {\rm Riem}\, K^{2n_R - 4} +K^{2n_R - 2}\right) \, .
\end{align}
One can verify that this is indeed the case for all quadratic, cubic, and quartic Lagrangian densities presented in the previous sections.
In summary, we have shown that densities containing $n_R$ Riemann curvatures can give rise to terms involving extrinsic curvatures up to the power $2n_R-2$. In particular, this implies that densities with zero or one Riemann tensors have no anomaly piece. We already studied the former case in the previous subsection. As for the latter, for a theory of the form
\begin{equation}
-\frac{1}{16\pi G} \int \mathrm{d}^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R+\lambda R^{\mu \nu \rho \sigma} T_{\mu \nu \rho \sigma} ({\rm Ricci})\right]\, ,
\end{equation}
where $T_{\mu \nu \rho \sigma} ({\rm Ricci})$ is some tensorial structure involving Ricci tensors and metrics,
the corresponding functional reads
\begin{equation}
\frac{\mathcal{A}(\Gamma_A)}{4G}+\frac{\lambda}{8G} \int_{\Gamma_A} \mathrm{d}^{d-1}y\sqrt{h}\, \left[2 T^{\mu \nu \rho \sigma} \bot_{\mu[\rho} \bot_{\nu |\sigma]} +R^{\mu\nu\rho\sigma} \frac{\partial T_{\mu\nu\rho\sigma}}{\partial R_{\alpha \beta}} \bot_{\alpha \beta}\right] + \mathcal{O}(\lambda^2)\, .
\end{equation}
On the other hand, densities with two Riemann tensors have terms with up to two extrinsic curvatures, those with three have terms with up to four extrinsic curvatures, and so on.
\section{Universal terms}\label{unite}
In this section we study how the universal coefficients appearing in the EE of various symmetric entangling regions get modified in the presence of quadratic and cubic corrections. Some of these coefficients can be computed from alternative methods, and in that case we verify that the results agree with them. In other cases, like for strip regions, the corresponding universal coefficients do not have a known alternative interpretation beyond entanglement entropy. Universal terms for various types of regions have been previously computed for particular higher-curvature theories in certain dimensions in several papers such as \cite{Buchel:2009sk,Myers:2010jv,deBoer:2011wk,Hung:2011xb,Bueno2,Safdi:2012sn,Miao:2015iba,Bhattacharyya:2014yga,Cano:2018ckq}. Our results reproduce the ones found in those papers in the appropriate cases.
We will restrict ourselves to the vacuum state. This means that all expressions involving intrinsic bulk curvatures will be evaluated on pure AdS$_{d+1}$, for which $R_{\mu\nu}^{\rho\sigma}=-1/L_{\star}^{2}\cdot \left[\delta_{\mu}^{\rho}\delta_{\nu}^{\sigma}-\delta_{\mu}^{\sigma}\delta_{\nu}^{\rho} \right]$. On such a background ---more generally, on any maximally symmetric background--- one can show that the variation of any higher-curvature Lagrangian with respect to the Riemann tensor is given by
\begin{equation} \label{spheres:lagrangian_derivative_AdS}
\left. \frac{\partial \mathcal{L}}{\partial R_{\mu \nu \rho \sigma}} \right|_{\rm AdS} = k_0 \left[g^{\mu \rho} g^{\sigma \nu}-g^{\mu \sigma} g^{\rho \nu} \right]\, ,
\end{equation}
where the constant $k_0$ is fixed by imposing AdS$_{d+1}$ to be a solution of the equations of motion of the theory as \cite{Aspects}
\begin{equation}\label{k00}
k_0= - \frac{L_{\star}^2}{4d} \left.\mathcal{L} \right|_{\rm AdS}\, ,
\end{equation}
where $\left.\mathcal{L} \right|_{\rm AdS}$ is the on-shell Lagrangian of the theory evaluated on AdS$_{d+1}$.
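As a quick consistency check of this relation (our own sketch, not part of the derivation), one can evaluate $k_0$ for pure Einstein gravity with $L=L_\star$, where it must equal $1/2$, matching the direct derivative $\partial R/\partial R_{\mu\nu\rho\sigma}=\tfrac{1}{2}(g^{\mu\rho}g^{\nu\sigma}-g^{\mu\sigma}g^{\nu\rho})$:

```python
# Sketch: evaluate k_0 = -(L*^2 / 4d) L|_AdS for pure Einstein gravity
# (lambda = 0, L = L*), where it should equal 1/2.
import sympy as sp

d, Ls = sp.symbols('d L_star', positive=True)

# Ricci scalar of AdS_{d+1} with curvature radius L_star
R_AdS = -d * (d + 1) / Ls**2

# Bracket of the action (lricci) with lambda = 0, evaluated on shell
L_on_shell = d * (d - 1) / Ls**2 + R_AdS

k0 = sp.simplify(-Ls**2 / (4 * d) * L_on_shell)
assert sp.simplify(k0 - sp.Rational(1, 2)) == 0
```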
Now, it has been argued from several different perspectives \cite{Imbimbo:1999bj,Schwimmer:2008yh,Myers:2010tj,Myers:2010xs,HoloECG} that $\mathcal{L}
|_{{\rm AdS}}$ is actually related to the universal coefficient $a^{\star(d)}$ appearing in the EE across spherical regions in general dimensions. For a general CFT in $d$ dimensions, this is given by
\begin{equation}\label{asta}
\left. S_{\ssc \rm EE}^{(d)} \right|_{\rm sphere}\supset \begin{cases}
(-)^{\frac{d-2}{2}} 4 a^{\star(d)} \log\left(\tfrac{ R}{\delta} \right) \quad &\text{for even } d \, , \\
(-)^{\frac{d-1}{2}}2\pi a^{\star(d)} \quad &\text{for odd } d\, .
\end{cases}
\end{equation}
The exact relation for holographic higher-curvature gravities reads
\begin{equation}\label{astar}
a^{\star(d)}=-\frac{\pi^{d/2} L_{\star}^{d+1}}{d \Gamma(d/2)}\mathcal{L}
|_{{\rm AdS}}\, , \quad \text{so} \quad k_0=\frac{\Gamma\left[ \tfrac{d}{2}\right] a^{\star(d)}}{4\pi^{d/2} L_{\star}^{d-1}}\, .
\end{equation}
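The two relations above are mutually consistent with \req{k00}: eliminating $a^{\star(d)}$ indeed returns $k_0=-L_\star^2\,\mathcal{L}|_{\rm AdS}/(4d)$. A short symbolic sketch of this elimination (keeping $\mathcal{L}|_{\rm AdS}$ as a formal symbol):

```python
# Sketch: eliminating a*(d) between the two relations in (astar) reproduces
# k_0 = -(L*^2 / 4d) L|_AdS of (k00).
import sympy as sp

d, Ls = sp.symbols('d L_star', positive=True)
L_AdS = sp.symbols('L_AdS')   # on-shell Lagrangian on AdS, kept symbolic

a_star = -sp.pi**(d / 2) * Ls**(d + 1) / (d * sp.gamma(d / 2)) * L_AdS
k0_from_astar = sp.gamma(d / 2) * a_star / (4 * sp.pi**(d / 2) * Ls**(d - 1))
k0_direct = -Ls**2 / (4 * d) * L_AdS

assert sp.simplify(k0_from_astar - k0_direct) == 0
```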
As a consequence, Wald's piece in the HEE formula becomes proportional to the Ryu-Takayanagi functional in that case, with an overall coefficient controlled by $a^{\star(d)}$. One has
\begin{equation} \label{spheres:functional_Wald_AdS}
S_{\rm \scriptscriptstyle HEE} = \frac{2\Gamma\left[\tfrac{d}{2} \right] a^{\star(d)}}{\pi^{\frac{d-2}{2}}L_{\star}^{d-1}} \int \mathrm{d}^{d-1}y\, \sqrt{h} + S_{\rm \scriptscriptstyle Anomaly} \, .
\end{equation}
Hence, for theories for which the anomaly piece is absent, all possible universal terms are proportional to the coefficient $a^{\star(d)}$. As we saw above, this includes, at the perturbative level, all $\mathcal{L}(g_{\mu\nu},R_{\rho\sigma})$ densities as well as those including a single Riemann tensor. For them, all the different universal coefficients we consider in this section modify the Einstein gravity result by the same overall factor $a^{\star(d)}/a^{\star(d)}_{\rm E}$, where
\cite{Ryu:2006bv,Ryu:2006ef}
\begin{equation}
a^{\star(d)}_{\rm E}=\frac{\pi^{\frac{d-2}{2}}}{8\Gamma \left[\tfrac{d}{2} \right]} \frac{L_{\star}^{d-1}}{G}\, .
\end{equation}
The coefficient $a^{\star(d)}$ can be easily computed for quadratic and cubic theories, yielding
\begin{align}
a^{\star(d)}_{\rm Riem^2}=& \left[1-2d(d+1)\alpha_1-2d\alpha_2-4\alpha_3\right] a^{\star(d)}_{\rm E}\, ,\\ \notag
a^{\star(d)}_{\rm Riem^3}=&\left[1+3(d-1)\beta_1+12\beta_2+6d\beta_3+6d(d+1)\beta_4+3d^2\beta_5+3d^2\beta_6 \right. \\ \notag &\left.+3d^2(d+1)\beta_7+3d^2(d+1)^2\beta_8 \right]a^{\star(d)}_{\rm E} \, .
\end{align}
For the reasons explained above, the corrections corresponding to $\alpha_1$, $\alpha_2$, $\beta_5$, $\beta_6$, $\beta_7$, $\beta_8$ will appear as overall corrections to the Einstein gravity result with precisely the above coefficients for all possible entangling regions. Particularizing to the Gauss-Bonnet and cubic Lovelock cases, one finds
\begin{align}
a^{\star(d)}_{ \mathcal{X}_4}&=[1-2 (d-2)(d-1) \lambda_2] a^{\star(d)}_{ \rm E}\, , \\
a^{\star(d)}_{ \mathcal{X}_6}&=[1+3 (d-4)(d-3)(d-2)(d-1)\lambda_3] a^{\star(d)}_{ \rm E} \, .
\end{align}
In both cases, the corrections are zero below the critical dimensions, as they should, since in those cases the corresponding contributions to the JM functional (\ref{jm}) identically vanish. For a general Lovelock theory of the form (\ref{lovel}), one would have
\begin{equation}
a^{\star(d)}_{ \rm Lovelock}=[1-\sum_n n \, (-1)^n\, \prod_{k=1}^{2(n-1)} (d-k)\lambda_n] a^{\star(d)}_{ \rm E} \, .
\end{equation}
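As a cross-check (our own sketch), the Gauss-Bonnet coefficient follows from the general quadratic formula with $(\alpha_1,\alpha_2,\alpha_3)=\lambda_2(1,-4,1)$, and the $n=2,3$ cases match the pattern $n(-1)^n\prod_{k=1}^{2(n-1)}(d-k)$:

```python
# Sketch: check the Gauss-Bonnet and cubic Lovelock a* coefficients against
# the quadratic-basis formula and the general Lovelock pattern, with the
# product running up to k = 2(n-1).
import sympy as sp

d, lam = sp.symbols('d lam')

# X_4 = R^2 - 4 R_{mn} R^{mn} + R_{mnrs} R^{mnrs}:
# (alpha_1, alpha_2, alpha_3) = lam * (1, -4, 1)
quadratic = 1 - 2 * d * (d + 1) * lam - 2 * d * (-4 * lam) - 4 * lam
gauss_bonnet = 1 - 2 * (d - 2) * (d - 1) * lam
assert sp.expand(quadratic - gauss_bonnet) == 0

def correction(n):
    """-n (-1)^n prod_{k=1}^{2(n-1)} (d - k) lam, the order-n Lovelock term."""
    return -n * (-1)**n * sp.prod([d - k for k in range(1, 2 * (n - 1) + 1)]) * lam

assert sp.expand(correction(2) + 2 * (d - 2) * (d - 1) * lam) == 0
assert sp.expand(correction(3) - 3 * (d - 4) * (d - 3) * (d - 2) * (d - 1) * lam) == 0
```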
The result for the charges $a^{\star(d)}$ for Quasi-topological gravity and Einsteinian cubic gravity reads in each case
\begin{align}
a^{\star(4)}_{\rm QTG}&=[1+9\mu_{\rm QTG}]a^{\star (4)}_{ \rm E} \, , \\
a^{\star(3)}_{\rm ECG}&=[1+3\mu_{\rm ECG}]a^{\star (3)}_{ \rm E} \, .
\end{align}
\subsection{Spherical regions}
Let us see how the above results for $a^{\star(d)}$ can be obtained from an explicit calculation for spherical entangling surfaces, $\partial A=\mathbb{S}^{d-2}$, using the corresponding HEE functionals. Across spheres, the universal contribution to the entanglement entropy is given, for a general CFT in $d$ dimensions, by \req{asta}.
In the even-dimensional case, the corresponding logarithmic term for a general smooth region is a linear combination of local integrals over the entangling surface weighted by the different trace-anomaly charges \cite{Solodukhin:2008dh,Fursaev:2012mp,Safdi:2012sn,Miao:2015iba} ---see \req{see4} and \req{see6} below. One of the integrals involves the Euler density of the entangling surface and the corresponding trace-anomaly coefficient which appears in front is customarily denoted by ``$a$'' (or ``$A$'' in $d\geq 6$). The rest of the integrals involve various combinations of the extrinsic curvature of $\partial A$, and therefore all of them vanish for a spherical entangling surface. Hence, the sphere isolates the $a$-type coefficient, and we have simply $a^\star =a$ for even $d$.
The nature of $a^\star$ is very different in odd dimensions. In that case, it appears as a constant contribution to the EE, and it has an intrinsically non-local nature. In fact, as shown in \cite{CHM}, $a^\star$ is proportional to the free energy, $F=-\log Z$, of the corresponding theory evaluated on $\mathbb{S}^{d}$, namely
$
F_{\mathbb{S}^{d}}= (-)^{\frac{d+1}{2}} 2\pi a^\star $
or, alternatively,
to the thermal entropy of the corresponding CFT at a temperature $T= 1/(2\pi R)$ on the hyperbolic cylinder $\mathbb{R}\times \mathbb{H}^{d-1}$ \cite{CHM}.
From a holographic perspective, this means that $a^\star$ can be obtained, besides from a direct entanglement entropy calculation like the one we perform here, either from the Euclidean
on-shell action of pure AdS$_{(d+1)}$ with $\mathbb{S}^d$ boundary or from the Wald entropy of AdS$_{(d+1)}$ with $\mathbb{R} \times \mathbb{H}^{d-1}$ boundary ---see also \cite{Fonda:2015nma,Anastasiou:2020smm}.
We write the AdS$_{d+1}$ metric as
\begin{equation} \label{spheres:bulk_metric_D}
\mathrm{d} s^2 = \frac{L_{\star}^2}{z^2} \left[ \mathrm{d} \tau^2+\mathrm{d} z^2 + \mathrm{d} r^2 + r^2 \mathrm{d} \Omega^2_{d-2} \right]\, ,
\end{equation}
where $\mathrm{d} \Omega^2_{d-2}$ is the metric of the usual round sphere. Our entangling surface is a sphere $\mathbb{S}^{d-2}$ of radius $r=\ell$ centered at $r=0$. Let us parametrize the RT surface as: $\tau=0$, $z=Z(r)$. Then,
unit normals to the surface are given by
\begin{equation} \label{spheres:unit_normals}
n_1 = \frac{z}{L_{\star}} \partial_{\tau} ~ , \qquad n_2 = \frac{z}{L_{\star} \sqrt{1 + Z'^2}} \left( Z' \partial_r - \partial_z \right) \, .
\end{equation}
We have already extended these vector fields to a neighborhood of the surface while keeping them normalized.
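One can verify directly that these normals are unit-normalized, mutually orthogonal, and orthogonal to the tangent direction $\partial_r + Z'\partial_z$ of the surface. A minimal sympy sketch, restricted to the $(\tau,z,r)$ block of the metric:

```python
# Sketch: check that n_1, n_2 of (spheres:unit_normals) are orthonormal and
# orthogonal to the tangent vector t = d_r + Z' d_z, using the (tau, z, r)
# block of the bulk metric (spheres:bulk_metric_D).
import sympy as sp

z, Ls = sp.symbols('z L_star', positive=True)
Zp = sp.symbols('Zp', real=True)          # Zp stands for Z'(r)

g = (Ls**2 / z**2) * sp.eye(3)            # conformally flat metric on (tau, z, r)

n1 = sp.Matrix([z / Ls, 0, 0])                                 # n_1
n2 = (z / (Ls * sp.sqrt(1 + Zp**2))) * sp.Matrix([0, -1, Zp])  # n_2
t = sp.Matrix([0, Zp, 1])                                      # tangent vector

dot = lambda u, v: sp.simplify((u.T * g * v)[0])

assert dot(n1, n1) == 1 and dot(n2, n2) == 1
assert dot(n1, n2) == 0 and dot(n2, t) == 0
```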
On the surface, one fixes $z = Z(r)$, and $Z'(r)$ is well-defined for any $(r,z)$ with $r \in (0, \ell)$. The induced metric on the surface is given by
\begin{equation} \label{spheres:induced_metric}
h_{\mu \nu} \mathrm{d} x^{\mu} \mathrm{d} x^{\nu} = \frac{L_{\star}^2}{Z^2} \left[\frac{1}{1+Z'^2} \left( \mathrm{d} r + Z' \mathrm{d} z \right)^2 + r^2 \mathrm{d} \Omega^2_{d-2} \right] \, .
\end{equation}
With these results one can compute in full generality the components of the extrinsic curvatures,
\begin{align}
K^1{}_{\mu \nu} & = 0 ~, \\
K^2{}_{rr} & = \frac{L_{\star}}{Z^2 (1 + Z'^2)^{5/2}} \left(1 + Z'^2 + Z Z'' \right) ~ , \quad K^2{}_{rz} = Z' K^2{}_{rr} ~ , \\
K^2{}_{zz} &= Z'^2 K^2{}_{rr} ~ ,\quad K^2{}_{mn} = \frac{L_{\star}}{Z^2 \sqrt{1 + Z'^2}} \left( Z Z' + r \right) r \hat{g}_{mn} \, ,
\end{align}
where $\hat{g}_{mn}$ is the metric of the unit $\mathbb{S}^{d-2}$. Obtaining the traces is now easy. $K^1 = 0$ trivially, whereas
\begin{equation} \label{spheres:trace_extrinsic}
K^2 = \frac{1}{L_{\star} r (1 + Z'^2)^{3/2}} \left[ r Z Z'' + (d-2) Z Z' (1 + Z'^2) + (d-1) r (1 + Z'^2) \right] \, .
\end{equation}
The vanishing of this trace is exactly the differential equation for the surface one would obtain by minimizing the RT functional, which in this case reads%
\begin{equation} \label{spheres:RT_functional}
S_{\rm \scriptscriptstyle HEE}^{\rm E} = \frac{L_{\star}^{d-1} \pi^{(d-1)/2}}{2 G \Gamma \left[ \frac{d-1}{2} \right]} \int_0^{\ell} \mathrm{d} r \, \frac{r^{d-2}}{Z^{d-1}} \sqrt{1 + Z'^2} \, .
\end{equation}
The solution for this differential equation satisfying the boundary condition $Z(\ell) = 0$ is $r^2 + Z^2 = \ell^2$.
The simplicity of this RT surface has another important consequence: since $ZZ' = -r$ and $ZZ'' = - (1 + Z'^2)$, the extrinsic curvature $K^2{}_{\mu \nu}$ vanishes. Thus, both $K^1{}_{\mu \nu}$ and $K^2{}_{\mu \nu}$ are zero for the RT surface. Now, since the anomaly term in the general higher-curvature functional is quadratic in the extrinsic curvatures of the surface, the RT surface remains extremal for the full functional even when the latter is considered fully non-perturbatively.\footnote{This is true irrespective of the splitting used.}
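These statements can be confirmed symbolically: plugging the hemisphere profile into the identities $ZZ'=-r$, $ZZ''=-(1+Z'^2)$ and into the bracket of (spheres:trace_extrinsic), everything vanishes. A short sympy sketch:

```python
# Sketch: the hemisphere r^2 + Z^2 = ell^2 satisfies the quoted identities
# and makes the bracket of (spheres:trace_extrinsic) vanish identically.
import sympy as sp

r, ell, d = sp.symbols('r ell d', positive=True)

Z = sp.sqrt(ell**2 - r**2)          # hemisphere profile
Zp = sp.diff(Z, r)
Zpp = sp.diff(Z, r, 2)

# Identities used in the text
assert sp.simplify(Z * Zp + r) == 0
assert sp.simplify(Z * Zpp + (1 + Zp**2)) == 0

# Bracket of (spheres:trace_extrinsic): vanishes on the profile for any d
bracket = r * Z * Zpp + (d - 2) * Z * Zp * (1 + Zp**2) + (d - 1) * r * (1 + Zp**2)
assert sp.simplify(bracket) == 0
```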
In order to compute the universal contribution to the HEE the last step is to regulate \req{spheres:RT_functional}, {\it e.g.,}\ by writing
\begin{equation} \label{spheres:RT_reparametrized}
S_{\rm \scriptscriptstyle HEE}^{\rm E} = \frac{L_{\star}^{d-1} \pi^{(d-1)/2}}{2 G \Gamma \left[ \frac{d-1}{2} \right]} \int_{\delta/\ell}^{1} \mathrm{d} y \, \frac{(1-y^2)^{(d-3)/2}}{y^{d-1}} \, ,
\end{equation}
where we introduced a cutoff at $z = \delta$. Integrating by parts, it is easy to show that for odd $d$ we get a constant term while for even $d$ we get a logarithmic one. The final result takes the form \req{asta}, plus a series of non-universal divergent pieces of the form $(\ell/\delta)^{(d-2k)}$ with $k=1,2,\dots,(d-1)/2$ for odd $d$ and $k=1,2,\dots,(d-2)/2$ for even $d$ ---see {\it e.g.,}\ \cite{Ryu:2006bv} for the numerical coefficients. When higher-curvature terms are included, the vanishing of $K^1{}_{\mu \nu}$ and $K^2{}_{\mu \nu}$ makes the result reduce to the corresponding Wald piece, which in turn reduces to an overall constant proportional to $\mathcal{L}|_{\rm AdS}$ via \req{k00} times the Einstein gravity result. Hence, we are left again with \req{asta} where $a^\star$ is given by \req{astar} in each case.
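The odd/even pattern of the universal term can be confirmed by inspecting the Laurent expansion of the integrand in \req{spheres:RT_reparametrized}: a $1/y$ term (and hence a logarithm upon integration) appears only for even $d$. A short sympy sketch (an illustrative addition):

```python
import sympy as sp

y = sp.symbols('y', positive=True)
for d in range(3, 8):
    integrand = (1 - y**2)**sp.Rational(d - 3, 2) / y**(d - 1)
    # coefficient of 1/y in the Laurent expansion around y = 0
    res = sp.series(integrand, y, 0, 1).removeO().coeff(y, -1)
    assert (res != 0) == (d % 2 == 0)  # log term iff d is even
```

For odd $d$ the expansion contains only even powers of $y$, so no $1/y$ term (and no logarithm) can appear.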
\subsection{Slab regions}
Let us consider now an entangling region consisting of a slab of width $\ell$ along a particular dimension, $x \in \left[-\ell/2, \ell/2 \right]$, and infinite along the remaining $(d-2)$ directions. For general theories, the EE in that case takes the form
\begin{equation}\label{slabs}
S_{\ssc \rm EE}=\xi \frac{L_y^{d-2}}{\delta^{d-2}} - \kappa^{(d)} \frac{L_y^{d-2}}{\ell^{d-2}}\, ,
\end{equation}
where $\xi$ is a non-universal constant. As opposed to other universal EE contributions considered here, $ \kappa^{(d)}$ does not have any (known) alternative interpretation beyond EE. For instance, it is not expected to be related to charges characterizing simple local correlators. Previous papers where $ \kappa^{(d)}$ was computed for certain holographic higher-curvature gravities include \cite{Bueno2}, where it was evaluated for quadratic theories in $d=3$, and \cite{deBoer:2011wk}, where it was computed for Gauss-Bonnet gravity in $d=4$ fully nonperturbatively using the JM functional.
We write the AdS$_{d+1}$ metric as
\begin{equation} \label{spheres:bulk_metric_D}
\mathrm{d} s^2 = \frac{L_{\star}^2}{z^2} \left[ \mathrm{d} \tau^2+\mathrm{d} z^2 + \mathrm{d} x^2+\mathrm{d} \vec{y}^2_{d-2} \right]\, .
\end{equation}
The RT surface will be invariant under translations along the $(d-2)$ transverse directions, so we can parametrize it by $z=Z(x)$. Unit normals to the surface will be given by
\begin{equation} \label{spheres:unit_normals}
n_1 = \frac{z}{L_{\star}} \partial_{\tau} ~ , \qquad n_2 = \frac{z}{L_{\star} \sqrt{1 + Z'^2}} \left( Z' \partial_x - \partial_z \right) \, .
\end{equation}
The induced metric on the surface is now given by
\begin{equation} \label{spheres:induced_metric}
h_{\mu \nu} \mathrm{d} x^{\mu} \mathrm{d} x^{\nu} = \frac{L_{\star}^2}{Z^2} \left[\frac{1}{1+Z'^2} \left( \mathrm{d} x + Z' \mathrm{d} z \right)^2 + \mathrm{d} \vec{y}^2_{d-2} \right] \, .
\end{equation}
The non-vanishing components of the extrinsic curvatures $K_{\mu\nu}^a$ read
\begin{align}
K^2{}_{xx} = \frac{L_{\star} \left(1 + Z'^2 + Z Z'' \right)}{Z^2 (1 + Z'^2)^{5/2}} =\frac{ K^2{}_{xz}}{ Z'}=\frac{
K^2{}_{zz}}{ Z'^2} \, , \quad K^2{}_{mn} = \frac{L_{\star} \delta_{mn}}{Z^2 \sqrt{1 + Z'^2}}\, ,
\end{align}
whereas all components of $K_{\mu\nu}^1$ vanish.
Projectors on the surface are given by
\begin{equation}
t_x=Z' \partial_z + \partial_x\, , \qquad t_{m}=\partial_m \, ,\quad \forall m=1,\dots,d-2\, .
\end{equation}
Using these, we find
\begin{equation}
h_{ij} \mathrm{d} y^i \mathrm{d} y^j= \frac{L_{\star}^2}{Z^2} \left[(1+Z'^2)\mathrm{d} x^2+\mathrm{d} \vec{y}^2_{d-2} \right] \, .
\end{equation}
Also, the non-vanishing components of $K_{ij}^2$ read (note the slight abuse of notation)
\begin{equation} \label{spheres:extrinsic_slab}
K^2_{xx} = \frac{L_{\star} (1+Z'^2+Z Z'')}{Z^2\sqrt{1+Z'^2}}\, , \quad K_{mn}^2=\frac{L_{\star}}{Z^2\sqrt{1+Z'^2}}\delta_{mn}\, .
\end{equation}
With these building blocks we can compute all the different pieces appearing in the corresponding EE functionals. For instance, the relevant expressions for the quadratic ones read
\begin{equation}
K_{ij}^aK^{aij}=\frac{(d-1)(1+Z'^2)^2+Z^2Z''^2+2(1+Z'^2)ZZ''}{L_{\star}^2 (1+Z'^2)^3}\, ,
\end{equation}
\begin{equation}
R^{ab}\,_{ab}=-\frac{2}{L_{\star}^2}\, , \quad R=-\frac{d(d+1)}{L_{\star}^2}\, ,\quad R^a\,_a=-\frac{2d}{L_{\star}^2}\, .
\end{equation}
Now, the Ryu-Takayanagi surface is determined by the condition $K^2=0$, where in this case we have
\begin{equation} \label{spheres:trace_extrinsic_slab}
K^2 = \frac{(d-1) (1+Z'^2)+Z Z''}{L_{\star} (1 + Z'^2)^{3/2}} \, .
\end{equation}
A first integral can be shown to exist so that
\begin{equation}
Z'=-\frac{\sqrt{z_{\star}^{2(d-1)} - Z^{2(d-1)}}}{Z^{d-1}}\, , \quad \text{where} \quad z_\star=\frac{\Gamma\left[\tfrac{1}{2(d-1)} \right]}{2\sqrt{\pi} \Gamma \left[ \tfrac{d}{2(d-1)}\right]}\ell
\end{equation}
is the value of $z$ corresponding to the turning point of the surface.
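Both the first integral and the quoted value of $z_\star$ can be verified symbolically; the sketch below (using sympy; an illustrative addition) checks that $Z'=F(Z)$ solves the second-order equation $(d-1)(1+Z'^2)+ZZ''=0$ following from $K^2=0$, and checks the turning-point relation numerically for $d=3$:

```python
import sympy as sp

Z, zs = sp.symbols('Z z_star', positive=True)

# first integral: Z' = F(Z); by the chain rule Z'' = F'(Z) F(Z)
for d in range(3, 7):
    n = d - 1
    F = -sp.sqrt(zs**(2*n) - Z**(2*n)) / Z**n
    ode = (d - 1)*(1 + F**2) + Z*sp.diff(F, Z)*F  # numerator of K^2
    assert sp.simplify(ode) == 0

# turning point for d = 3: ell/2 = z_star * int_0^1 u^2/sqrt(1-u^4) du
u = sp.symbols('u', positive=True)
half_width = sp.Integral(u**2/sp.sqrt(1 - u**4), (u, 0, 1)).evalf()
ratio = sp.gamma(sp.Rational(1, 4))/(2*sp.sqrt(sp.pi)*sp.gamma(sp.Rational(3, 4)))
assert abs(1/(2*float(half_width)) - float(ratio)) < 1e-6
```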
Now, after some massaging, the EE for Einstein gravity can be seen to be given by \cite{Ryu:2006ef,Ryu:2006bv}
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm E}=\frac{L_{\star}^{d-1} L_y^{d-2}}{2G z_\star^{d-2}} \int^1_{\delta/z_\star} \frac{\mathrm{d} y }{y^{d-1}\sqrt{1-y^{2(d-1)}}}= \xi_{\rm E}\frac{L_y^{d-2}}{\delta^{d-2}} - \kappa^{(d)}_{\rm E} \frac{L_y^{d-2}}{\ell^{d-2}} \, ,
\end{equation}
where $L_y$ are IR regulators for the $(d-2)$ transverse directions. The universal and non-universal constants $\kappa^{(d)}_{\rm E}$ and $\xi_{\rm E}$ read respectively
\begin{equation}
\kappa^{(d)}_{\rm E}=\frac{2^{d-3 } \pi^{\frac{d-1}{2}} \Gamma\left[ \tfrac{d}{2(d-1)}\right]^{d-1} }{(d-2) \Gamma\left[ \tfrac{1}{2(d-1)}\right]^{d-1}} \frac{L_{\star}^{d-1}}{G}\, , \quad \xi_{\rm E}=\frac{L_{\star}^{d-1}}{2(d-2)G} \, .
\end{equation}
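As a sanity check, for $d=3$ the expression for $\kappa^{(d)}_{\rm E}$ reproduces, via the reflection formula $\Gamma(1/4)\Gamma(3/4)=\pi\sqrt{2}$, the familiar slab coefficient $2\pi^3 L_{\star}^2/(\Gamma(1/4)^4 G)$. A sympy sketch (an illustrative addition, with the overall $L_{\star}^{d-1}/G$ factor stripped):

```python
import sympy as sp

def kappa_factor(d):
    """Dimensionless part of kappa^(d)_E, stripped of L^{d-1}/G."""
    return (2**(d - 3) * sp.pi**sp.Rational(d - 1, 2)
            * sp.gamma(sp.Rational(d, 2*(d - 1)))**(d - 1)
            / ((d - 2) * sp.gamma(sp.Rational(1, 2*(d - 1)))**(d - 1)))

# d = 3: compare with 2 pi^3 / Gamma(1/4)^4
diff = kappa_factor(3) - 2*sp.pi**3/sp.gamma(sp.Rational(1, 4))**4
assert abs(float(diff)) < 1e-12
```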
Let us see how these generalize when quadratic and cubic terms are introduced. For a general quadratic theory of the form (\ref{quaact}) one finds
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^2}=\frac{L_{\star}^{d-1} L_y^{d-2}}{2G z_\star^{d-2}} \int_{\delta/z_\star}^1 \frac{1-2d (d+1) \alpha_1-2d \alpha_2 -2\alpha_3\left(2+(d-1)(d-2) y^{2(d-1)}\right) }{y^{d-1}\sqrt{1-y^{2(d-1)}}} \mathrm{d} y\, ,
\end{equation}
and from this,
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm Riem^2}=\xi_{\rm Riem^2}\frac{L_y^{d-2}}{\delta^{d-2}} - \kappa^{(d)}_{\rm Riem^2} \frac{L_y^{d-2}}{\ell^{d-2}} \, ,
\end{equation}
where now $\xi_{\rm Riem^2}$ gets a factor identical to the one of $a^{\star (d)}_{\rm Riem^2}$ whereas the universal coefficient reads
\begin{align}
\kappa^{(d)}_{\rm Riem^2} &=\left[1-2d(d+1)\alpha_1-2d \alpha_2+2(d-3) \left[2+d(d-2) \right]\alpha_3\right] \kappa^{(d)}_{\rm E} \, .
\end{align}
Note that there are two kinds of terms in the integrand. On the one hand, pieces arising from purely intrinsic curvatures are proportional to the Einstein gravity one, which is of the form $\sim 1/(y^{d-1}\sqrt{1-y^{2(d-1)}})$. On the other hand, the contribution which involves two extrinsic curvatures has an extra $\sim y^{2(d-1)}$ factor. It is easy to see that
$\xi_{\rm Riem^2}$ is unaffected by the second type of terms, which explains why the same prefactor as for $a^{\star (d)}_{\rm Riem^2}$ appears in that case. Nevertheless, recall that $\xi$ is not a universal quantity (we can modify it by changing the regulator), so it is of limited interest. On the other hand, the universal constant $\kappa^{(d)}_{\rm Riem^2}$ does get affected by the extrinsic-curvature term. The result for $\kappa^{(3)}_{\rm Riem^2}$ agrees with the one obtained in \cite{Bueno2}, as it should.
We find a similar kind of behavior for the cubic theories. Wald-like terms produce contributions proportional to the Einstein gravity result, and the non-universal constant $\xi_{\rm Riem^3}$ is proportional to $a^{\star(d)}_{\rm Riem^3}$, namely, $\xi_{\rm Riem^3}/\xi_{\rm E} = a^{\star(d)}_{\rm Riem^3}/a^{\star(d)}_{\rm E}$.
On the other hand, terms with two extrinsic curvatures have an extra factor $\sim y^{2(d-1)}$ in the integrand, and those with four, one of the form $\sim y^{4(d-1)}$. Both types of terms affect the universal coefficient.\footnote{Effectively every extra factor $y^{2(d-1)}$ can be replaced by a $(2-d)$ factor as far as $\kappa^{(d)}$ is concerned, and every extra $y^{4(d-1)}$ can be replaced by a $d(2-d)/(2d-1)$. If we repeated the calculation including general order-$n$ higher-curvature pieces, we would obtain extra factors generally involving all even powers of $y$ up to $y^{2(n-1)(d-1)}$.} The final result reads
\begin{align}\label{kaappa}
\kappa^{(d)}_{\rm Riem^3} =&\Bigg[1+ 3\left[d-1+(d-1)(d-2)^2+\frac{d(d-2)^2(8+d(d^2-9))}{8(2d-1)} \right]\beta_1 \\ \notag & \left. + 12\left[1-(d-1)(d-2)^2+\frac{d(d-1)(d-2)^2(7+d(d-5))}{4(2d-1)} \right]\beta_2 \right. \\ \notag & \left.+2d\Big[3-(d-1)(d-2)^2\Big]\beta_3+2d(d+1)\Big[3-(d-1)(d-2)^2\Big]\beta_4 \right. \\ \notag & +3d^2\beta_5+3d^2\beta_6+3d^2(d+1)\beta_7+3d^2(d+1)^2\beta_8\Bigg] \kappa^{(d)}_{\rm E} \, .
\end{align}
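The replacement rule quoted in the footnote can be tested directly: writing the regulated integrals in terms of Euler Beta functions, the finite part generated by one extra $y^{2(d-1)}$ factor equals $(2-d)$ times the finite part of the base integrand. A sympy sketch (an illustrative addition; the analytic continuation used to define the finite part of the divergent base integral is an assumption of the check):

```python
import sympy as sp

half = sp.Rational(1, 2)
B = lambda a: sp.gamma(a)*sp.gamma(half)/sp.gamma(a + half)  # Euler Beta B(a, 1/2)

for d in range(3, 8):
    n = d - 1
    # finite part C of int_0^1 dy y^{1-d}(1-y^{2n})^{-1/2}, defined through
    # int_eps^1 = eps^{2-d}/(d-2) - C + O(eps); obtained by analytic continuation
    C = -B(sp.Rational(2 - d, 2*n))/(2*n)
    # convergent integral carrying one extra y^{2(d-1)} factor
    J = B(sp.Rational(d, 2*n))/(2*n)
    assert abs(float(-J/C - (2 - d))) < 1e-10
```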
A check of these results for $\kappa^{(d)}_{\rm Riem^2}$ and $\kappa^{(d)}_{\rm Riem^3}$ can be performed by particularizing them to Lovelock theories, for which the JM formula in \req{jm} can be alternatively used. We find
\begin{align}
\kappa^{(d)}_{ \mathcal{X}_4}&=[1+2 (d-3)(d-2)(d-1) \lambda_2] \kappa^{(d)}_{ \rm E}\, , \\
\kappa^{(d)}_{ \mathcal{X}_6}&=\left[1-\frac{3 (d-5)(d-4)(d-3)(d-2)(d-1)^2}{(2d-1)} \lambda_3\right] \kappa^{(d)}_{ \rm E} \, ,
\end{align}
which precisely agree with the ones obtained using \req{jm}. Observe that the corrections to the Einstein gravity result vanish in dimensions lower than or equal to the critical one, {\it i.e.,}\ for $d+1\leq 2n$. One can also verify that $\kappa^{(4)}_{ \mathcal{X}_4}$ agrees with the nonperturbative result found in \cite{deBoer:2011wk} at leading order in $\lambda_2$. For Quasi-topological and Einsteinian cubic gravity we find, respectively,
\begin{align}
\kappa^{(4)}_{\rm QTG}&= \left[1+9\mu_{\rm QTG} \right] \kappa^{(4)}_{\rm E}\, , \\
\kappa^{(3)}_{\rm ECG}&=\left[1+3\mu_{\rm ECG} \right] \kappa^{(3)}_{\rm E}\, .
\end{align}
As mentioned above, the coefficient $\kappa^{(d)}$ does not have an alternative interpretation beyond entanglement entropy. This is manifest here from the fact that, whenever other coefficients characterizing the dual theory have been computed for some of the above theories, the corresponding values all differ from the ones obtained here for $\kappa^{(d)}$.\footnote{The exception is the sharp-limit corner coefficient $\kappa$, which can be shown to coincide with $\kappa^{(3)}$ on general grounds, as explained below.} This includes, in particular, all the other coefficients computed in this paper ($c$, $a$ in $d=4$; $A$, $B_1$, $B_2$, $B_3$ in $d=6$; $a^{\star (d)}$ in general $d$; the corner charge $\sigma$ in $d=3$) as well as others such as the stress-tensor two-point function charge $C_{\ssc T}$, the coefficient $C_S$ relating the thermal entropy of a plasma to its temperature, and several quantities arising in the context of holographic complexity
\cite{Buchel:2009sk,Myers:2010jv,deBoer:2011wk,Hung:2011xb,Bueno2,Safdi:2012sn,Miao:2015iba,Bhattacharyya:2014yga,Cano:2018ckq}.
\subsection{Cylinder regions}
Let us now consider (hyper)cylindrical entangling surfaces. We will be mostly interested in the universal logarithmic piece arising for such regions in $d=4$ and $d=6$ theories.
We write the Euclidean AdS$_{(d+1)}$ metric as
\begin{equation}
\mathrm{d} s^2=\frac{L_{\star}^2}{z^2} \left[\mathrm{d} \tau^2 +\mathrm{d} z^2+\mathrm{d} \vec{y}^2_{(d-3-j)}+\mathrm{d} r^2+ r^2 \mathrm{d} \Omega^2_{(j+1)} \right] \, ,
\end{equation}
where $\mathrm{d} \Omega^2_{(j+1)}$ is the metric of a round $(j+1)$-dimensional sphere.
Our entangling regions will be parametrized by $\tau=0$, $r=R_0$, with $j$ taking values $j=0,\dots,d-3$, which correspond to entangling surfaces $\partial A=\mathbb{S}^1\times \mathbb{R}^{d-3},\mathbb{S}^2\times \mathbb{R}^{d-4},\dots,\mathbb{S}^{d-3}\times \mathbb{R}^1,\mathbb{S}^{d-2}$, respectively.
We parametrize the RT surface as $r=R(z)$. Unit normals and projectors on the surface read
\begin{equation}
n_1=\frac{z}{L_{\star}} \partial_{\tau}\, , \quad n_2=\frac{z}{L_{\star} \sqrt{1+R'^2}} \left(R' \partial_z - \partial_r \right)\, , \quad t_z=\partial_z+ R' \partial_r\, , \quad t_m=\partial_m\, , \quad t_{\phi}=\partial_{\phi} \, ,
\end{equation}
where $m=1,\dots,d-3-j$ and $\phi=1,\dots,j+1$.
The induced metric reads
\begin{equation}
h_{ij} \mathrm{d} y^i \mathrm{d} y^j= \frac{L_{\star}^2}{z^2} \left[(1+R'^2)\mathrm{d} z^2+\mathrm{d} \vec{y}^2_{(d-3-j)}+R^2 \mathrm{d} \Omega^2_{(j+1)} \right] \, .
\end{equation}
The non-vanishing components of $K_{ij}^2$ read now
\begin{align} \label{spheres:extrinsic_cylinder}
K^2_{zz} &= \frac{-L_{\star} (R'+R'^3-z R'')}{z^2\sqrt{1+R'^2}}\, , \quad K_{mn}^2=\frac{-L_{\star} R' \delta_{mn} }{z^2\sqrt{1+R'^2}}\, , \\ K^2_{\phi_j \phi_k}&=\frac{-L_{\star} (z+R R') R}{z^2\sqrt{1+R'^2}} \prod_{l=1}^{j-1} \sin^2\phi_l \delta_{jk}\, .
\end{align}
The equation for the RT surface is, as usual, $K^{2}=0$, where
\begin{equation}\label{k2}
K^{2}=\frac{(R R''-(j+1))z - (d-1) R R' (1+R'^2)-(j+1)z R'^2}{L_{\star} R (1+R'^2)^{3/2}}\, .
\end{equation}
In the case of Einstein gravity, the RT functional reduces to
\begin{equation}\label{eee}
S_{\ssc \rm EE}^{\rm E}=\frac{L_{\star}^{d-1} L_y^{d-3-j} \Omega_{(j+1)}}{4G } \int^{z_{\rm max}}_{\delta} \mathrm{d} z \frac{R^{j+1}}{z^{d-1}}\sqrt{1+R'^2} \, ,
\end{equation}
where $\Omega_{(j+1)} \equiv 2\pi^{(j+2)/2}/ \Gamma[(j+2)/2]$. As anticipated, we are interested in the logarithmic contribution to the entanglement entropy in even-dimensional theories. Such a contribution is local in the entangling surface $\partial A$ so, from the holographic perspective, it suffices to consider a perturbative solution to $K^{2}=0$ near the boundary. The result reads\footnote{When performing this expansion, it does not seem to be possible to solve the equation beyond quadratic order for $d=4$ and beyond quartic order for $d=6$. While this does not affect our calculations, it would be interesting to better understand the origin of this issue. }
\begin{equation}\label{exp}
R(z)=R_0 - \frac{(j+1)}{2R_0 (d-2)}z^2+\mathcal{O}(z^4) \, ,
\end{equation}
which we need to plug back into our functionals.
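One can verify that this expansion annihilates the numerator of $K^2$ in \req{k2} through the displayed order; a sympy sketch (an illustrative addition):

```python
import sympy as sp

z, R0, d, j = sp.symbols('z R_0 d j', positive=True)
R = R0 - (j + 1)*z**2/(2*R0*(d - 2))
Rp, Rpp = sp.diff(R, z), sp.diff(R, z, 2)

# numerator of K^2 in (k2); must vanish through O(z^2)
num = (R*Rpp - (j + 1))*z - (d - 1)*R*Rp*(1 + Rp**2) - (j + 1)*z*Rp**2
assert sp.simplify(sp.series(num, z, 0, 3).removeO()) == 0
```

The first nonvanishing contribution is $\mathcal{O}(z^3)$, which would be compensated by the (unwritten) $\mathcal{O}(z^4)$ term of $R(z)$.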
\subsubsection{Four dimensions}
For general CFTs in four dimensions, the universal contribution to the entanglement entropy for a smooth entangling surface characterized by some scale $\ell$ is given by Solodukhin's formula \cite{Solodukhin:2008dh,Fursaev:2012mp}
\begin{equation} \label{see4}
S_{\ssc \rm EE}^{(4)} \supset -\frac{1}{2\pi } \int_{\partial A} \mathrm{d}^2y \sqrt{\gamma}\left[ a {\cal R} +c \left( {\tilde \rho} k^2-\frac12 k^2\right)
\right] \log \left( \frac{\ell }{\delta} \right)\, ,
\end{equation}
where $\mathcal{R}$ is the Ricci scalar of the metric $\gamma_{ij}$ induced on $\partial A$, and here and in the next subsection we use the notation $k \equiv \gamma^{ij} k_{ij}$ and ${\tilde \rho} k^n\equiv k_{i_1}^{i_2} k_{i_2}^{i_3}\dots k_{i_n}^{i_1}$, where $k_{ij}$ is the extrinsic curvature. $a$ and $c$ are the coefficients appearing in the usual trace-anomaly expression \cite{Duff:1977ay}
\begin{equation}
\braket{T_a^a} = -\frac{a}{16\pi^2} \mathcal{X}_4+\frac{c}{16\pi^2} C_{abcd}C^{abcd}\, ,
\end{equation}
where $\mathcal{X}_4$ and $ C_{abcd}$ are the Euler density and Weyl tensor of the curved manifold in which the CFT is considered.
Let us then start considering our holographic functionals for $d=4$ and $j=0$. For \req{eee} one finds
\begin{equation}\label{egs}
S_{\rm \scriptscriptstyle HEE}^{\rm E}=\frac{\pi L_{\star}^{3} L_y }{2G } \int^{z_{\rm max}}_{\delta} \mathrm{d} z \left[\frac{R_0}{z^3}- \frac{1}{8R_0 z}+\dots \right]=\dots - \frac{c_{\rm E}}{2}\frac{L_y}{R_0} \log\left(R_0/\delta \right) +\dots \, ,
\end{equation}
where
\begin{equation}
c_{\rm E}=\frac{\pi L_{\star}^3}{8G}\, .
\end{equation}
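The expansion of the integrand in \req{egs} can be reproduced symbolically; a sympy sketch (an illustrative addition) for $d=4$, $j=0$:

```python
import sympy as sp

z, R0 = sp.symbols('z R_0', positive=True)
R = R0 - z**2/(4*R0)  # expansion (exp) for d = 4, j = 0
integrand = R*sp.sqrt(1 + sp.diff(R, z)**2)/z**3
s = sp.series(integrand, z, 0, 1).removeO()
assert s.coeff(z, -3) == R0            # area-law piece
assert s.coeff(z, -1) == -1/(8*R0)     # source of the logarithm
```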
This takes the form expected for a cylinder region in general CFTs, where the value of $c_{\rm E}$ matches the corresponding trace anomaly charge. In our conventions, this is in turn related to the stress-tensor two-point function charge\footnote{This is defined as the only theory-dependent content of the stress-tensor correlator, which otherwise is completely fixed by conformal symmetry \cite{Osborn:1993cr}. For a general CFT in $d$-dimensions one finds $\braket{T_{ab}(x) T_{cd}(0)}=C_{\ssc T} I_{ab,cd}(x)/x^{2d}$, where $I_{ab,cd}(x)$ is a fixed tensorial structure.} $C_{\ssc T}$ through $c=\pi^4 C_{\ssc T}/40 $ for general theories ---compare with $C_{\ssc T}^{\rm E}$ in \req{ctee}.
Performing the analogous calculations for quadratic and cubic theories, we observe that, once the expansion \req{exp} is introduced in the corresponding functionals, three kinds of terms appear multiplying the Einstein gravity integrand in \req{egs}: terms coming from the Wald pieces, which are constant; terms involving products of two extrinsic curvatures, which are $\sim z^2$;
and terms involving products of four extrinsic curvatures, which go as $\sim z^4$. Terms of the latter kind do not contribute to $c$, which is a manifestation of the splitting-independent nature of this coefficient. The final result for $c_{\rm Riem^2}$ and $c_{\rm Riem^3}$ reads
\begin{align}
c_{\rm Riem^2}&= \left[1-40\alpha_1-8\alpha_2+4\alpha_3 \right]c_{\rm E}\, , \\
c_{\rm Riem^3}&=\left[1+21\beta_1-36\beta_2-8\beta_3-40\beta_4+48\beta_5+48\beta_6 +240\beta_7+1200\beta_8 \right] c_{\rm E} \, .
\end{align}
These are again in agreement with the general relation with $C_{\ssc T}$. Indeed, for general quadratic and cubic theories in $d$-dimensions one finds
\begin{align}\label{ctee23}
C_{\ssc T}^{\rm Riem^2}=& \left[1-2d(d+1)\alpha_1-2d\alpha_2+4(d-3)\alpha_3\right] C_{\ssc T}^{\rm E} \\ \notag
C_{\ssc T}^{\rm Riem^3}=&\left[1+3(3d-5)\beta_1-12(2d-5)\beta_2-2d(2d-7)\beta_3-2d(2d-7)(d+1)\beta_4+3d^2\beta_5\right. \\ \notag &\left.+3d^2\beta_6 +3d^2(d+1)\beta_7+3d^2(d+1)^2\beta_8 \right] C_{\ssc T}^{\rm E} \, ,
\end{align}
where the Einstein gravity result reads
\begin{equation}\label{ctee}
C_{\ssc T}^{\rm E}=\frac{\Gamma[d+2]}{8(d-1)\Gamma[\tfrac{d}{2}]\pi^{\tfrac{d+2}{2}}} \frac{L_{\star}^{d-1}}{G}\, .
\end{equation}
These results for $C_{\ssc T}$ can be obtained in different ways. A simple one consists in computing the linearized equations of the theory around an AdS background. For a general higher-curvature gravity, these are fourth-order equations which describe the dynamics of a massive scalar mode and a ghost-like massive graviton in addition to the usual general relativity massless graviton. The resulting equations can be characterized in terms of the masses of the two new modes as well as an effective Newton constant \cite{Tekin1,Aspects}. This generically takes the form $G_{\rm eff}=G/ \gamma$, where $\gamma$ depends on the higher-curvature couplings. Via holography, a rescaling of $G$ is equivalent to a rescaling of the stress-tensor charge $C_{\ssc T}$, which becomes $\gamma C_{\scriptscriptstyle T}^{\rm E}$.
$G_{\rm eff}$ was computed in \cite{Aspects} explicitly for general quadratic, cubic and quartic gravities in general dimensions, so we can easily obtain the values of $C_{\ssc T}$ shown above.
In the particular cases of Lovelock, Quasi-topological and Einsteinian cubic gravity densities, they reduce to
\begin{align}
C_{\ssc T}^{\mathcal{X}_4}=& \left[1-2 (d-2)(d-3)\lambda_2\right] C_{\ssc T}^{\rm E} \, , \\
C_{\ssc T}^{\mathcal{X}_6}=&\left[1+3(d-2)(d-3)(d-4)(d-5)\lambda_3\right] C_{\ssc T}^{\rm E} \, , \\
C_{\ssc T}^{\rm QTG}=&\left[ 1-3 \mu_{\rm QTG}\right] C_{\ssc T}^{\rm E} \, , \\
C_{\ssc T}^{\rm ECG}=&\left[1-3 \mu_{\rm ECG} \right] C_{\ssc T}^{\rm E} \, .
\end{align}
Note that all these differ from the slab coefficients $\kappa^{(d)}$ computed in the previous subsection.
\subsubsection{Six dimensions}
Let us now turn to six dimensions. In this case, a similar expression for the logarithmic term involving the trace anomaly coefficients holds for general CFTs, and is given by \cite{Safdi:2012sn,Miao:2015iba}
\begin{align}\label{see6}
S_{\ssc \rm EE}^{(6)} \supset \int_{\partial A} \mathrm{d}^4 y \sqrt{\gamma} \Big[2 A \mathcal{X}_4+\frac{3\pi}{2} B_1 (3T_1-2T_2)-12\pi B_2 T_2 \\ \notag + 6\pi B_3 (T_3+9T_1-12T_2) \Big] \log\left(\frac{\ell}{\delta}\right)\, ,
\end{align}
where $\mathcal{X}_4$ is the Euler density associated to the induced metric $\gamma_{ij}$ and now
\begin{align}
T_1 & \equiv ({\tilde \rho} k^2)^2- \frac{1}{2}k^2{\tilde \rho} k^2+\frac{1}{16}k^4 \, , \\
T_2 & \equiv {\tilde \rho} k^4-k {\tilde \rho} k^3+\frac{3}{8}k^2{\tilde \rho} k^2-\frac{3}{64}k^4 \, , \\
T_3 &\equiv (\nabla_i k)^2-\frac{25}{16}k^4+11 k^2{\tilde \rho} k^2-6({\tilde \rho} k^2)^2-16 k {\tilde \rho} k^3+12 {\tilde \rho} k^4\, .
\end{align}
Similarly, the coefficients $A$, $B_1$, $B_2$ and $B_3$ are the ones appearing in the trace anomaly, which in this case takes the form \cite{Bonora:1985cq,Deser:1993yx,Henningson:1998gx,Bastianelli:2000hi}
\begin{equation}
\braket{T_a^a}=\sum_{i=1}^3 B_i I_i + 2 A \mathcal{X}_6\, ,
\end{equation}
where $\mathcal{X}_6$ is the Euler density and the $I_i$ are cubic conformal invariants given by
\begin{align}
I_1\equiv C_{d abc}C^{a ef b}C\indices{_e ^{dc} _f}\, , \quad I_2\equiv C\indices{_{ab} ^{cd}}C\indices{_{cd} ^{ef}} C\indices{_{ef} ^{ab}}\, , \\ I_3\equiv C\indices{_{aceg}} (\nabla^2 \delta^a_{b}+ 4R^a_b-\frac{6}{5} R \delta^a_b) C^{bceg}\, .
\end{align}
For the entangling regions we are considering here, the induced metric on $d=6$ Minkowski space reads
\begin{equation}
ds^2_{\gamma}= \mathrm{d} \vec{y}^2_{(3-j)}+R_0^2 \mathrm{d} \Omega^2_{(j+1)} \, .
\end{equation}
The relevant expressions for the extrinsic curvature invariants read
\begin{equation}
k=\frac{(j+1)}{R_0}\, , \quad {\tilde \rho} k^n=\frac{(j+1)}{R_0^n}\, ,
\end{equation}
and from this, one finds
\begin{align}
\mathcal{X}_4&=\frac{(j-2)(j-1)j(j+1)}{R_0^4} \, , \\
T_1&=\frac{(j-3)^2(j+1)^2}{16 R_0^4} \, , \\
T_2&=-\frac{(j-3)(j+1)(7+3j(j-2))}{64 R_0^4}\, ,\\
T_3&=\frac{(j-3)(j+1)(3+j(26-25j))}{16 R_0^4}\, ,
\end{align}
where, for completeness, we also included the value of $\mathcal{X}_4$ which vanishes for all the cylinder-like regions ($j=0,1,2$). Then, the entanglement entropy universal term reduces, for general CFTs, to
\begin{align}\label{redu}
S_{\ssc \rm EE} \supset \frac{(j+1) \Omega_{(j+1)}}{64} & \left[128 A j (j-1)(j-2)+3\pi (j-3) \Big[B_1 (9j (j-2)-11)\right. \\ \notag & \left. +4B_2(3j(j-2)+7)-8B_3(j+1)(3+7j) \Big] \right]\frac{L_y^{3-j}}{R_0^{3-j}} \log\left(\frac{\ell}{\delta}\right) \, .
\end{align}
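The values of $T_1$, $T_2$, $T_3$ quoted above follow from substituting an extrinsic curvature with $(j+1)$ eigenvalues equal to $1/R_0$ and $(3-j)$ vanishing ones into their definitions; a sympy sketch (an illustrative addition) confirming the three expressions:

```python
import sympy as sp

j, R0 = sp.symbols('j R_0', positive=True)
x = j + 1                       # number of nonzero eigenvalues of k_ij
tr = lambda n: x/R0**n          # tr k^n; (j+1) eigenvalues equal to 1/R_0
k = x/R0                        # trace k

T1 = tr(2)**2 - k**2*tr(2)/2 + k**4/16
T2 = tr(4) - k*tr(3) + sp.Rational(3, 8)*k**2*tr(2) - sp.Rational(3, 64)*k**4
T3 = (-sp.Rational(25, 16)*k**4 + 11*k**2*tr(2) - 6*tr(2)**2
      - 16*k*tr(3) + 12*tr(4))  # (nabla k)^2 = 0 for constant k

assert sp.simplify(T1 - (j - 3)**2*(j + 1)**2/(16*R0**4)) == 0
assert sp.simplify(T2 + (j - 3)*(j + 1)*(7 + 3*j*(j - 2))/(64*R0**4)) == 0
assert sp.simplify(T3 - (j - 3)*(j + 1)*(3 + j*(26 - 25*j))/(16*R0**4)) == 0
```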
On the other hand, the holographic result for Einstein gravity reads
\begin{align}\notag
S_{\rm \scriptscriptstyle HEE}^{\rm E}&=\frac{L_{\star}^{5} L_y^{3-j} \Omega_{(j+1)}}{4G } \int^{z_{\rm max}}_{\delta} \mathrm{d} z \left[\frac{R_0^{j+1}}{z^5}-\frac{3(j+1)^2R_0^{j-1}}{32z^3}+\frac{(j+1)^3(7j-9)}{2048 R_0^{3-j} z}+\dots \right] \, , \\ \label{eede}
&=\dots + \frac{(1+j)^3(7j-9) \Omega_{(j+1)}L_{\star}^5 }{8192 G }\frac{L_y^{3-j}}{R_0^{3-j}} \log\left(\frac{\ell}{\delta}\right)+\cdots
\end{align}
Comparing with \req{redu} for $j=0,1,2,3$ we can obtain the Einstein gravity values of $A$, $B_1$, $B_2$, $B_3$. The results read
\begin{equation}
A_{\rm E}=\frac{L_{\star}^5}{512 G} \, , \quad B_1^{\rm E}=-\frac{L_{\star}^5}{256 \pi G} \, , \quad B_2^{\rm E}=-\frac{L_{\star}^5}{1024 \pi G} \, , \quad B_3^{\rm E}=\frac{L_{\star}^5}{3072 \pi G} \, ,
\end{equation}
in agreement with previous calculations \cite{deBoer:2009pn,Safdi:2012sn}. In particular, the value of the $A$ charge satisfies $A_{\rm E}=a^{\star (6)}_{\rm E}/ (32\pi^2)$, a relation which holds for general theories in the present conventions. Accordingly, the values of $A$ for all the rest of holographic higher-curvature theories are proportional to the corresponding coefficients $a^{\star (6)}$.
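As a cross-check, inserting these four charges into \req{redu} reproduces the logarithmic coefficient of \req{eede} for each $j=0,\dots,3$; a sympy sketch (an illustrative addition; the common factor $\Omega_{(j+1)} L_y^{3-j}/R_0^{3-j}\,\log(\ell/\delta)$ is stripped from both sides):

```python
import sympy as sp

L, G = sp.symbols('L_star G', positive=True)
A = L**5/(512*G)
B1, B2, B3 = -L**5/(256*sp.pi*G), -L**5/(1024*sp.pi*G), L**5/(3072*sp.pi*G)

for j in range(4):
    # bracket of (redu)
    redu = sp.Rational(j + 1, 64)*(128*A*j*(j - 1)*(j - 2)
            + 3*sp.pi*(j - 3)*(B1*(9*j*(j - 2) - 11)
                               + 4*B2*(3*j*(j - 2) + 7)
                               - 8*B3*(j + 1)*(3 + 7*j)))
    eede = sp.Rational((1 + j)**3*(7*j - 9), 8192)*L**5/G  # coefficient in (eede)
    assert sp.simplify(redu - eede) == 0
```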
Moving to quadratic theories, the contributions without anomaly piece modify the charges in the same way as $a^{\star (6)}$, whereas the term involving two Riemanns contains an extra piece coming from a contraction of extrinsic curvatures, which in this case reads
\begin{equation}
K_{aij} K^{aij}=-\frac{(j-3)(j+1)}{4L_{\star}^2 R_0^2} z^2 + \frac{(j-3)^2(j+1)^2}{64L_{\star}^2R_0^4}z^4 +\dots
\end{equation}
Putting the pieces together in the quadratic functional \req{seerie2} and again comparing with \req{redu} we find
\begin{align}
B_1^{\rm Riem^2}&=\left[1-84\alpha_1-12\alpha_2+\frac{4}{3}\alpha_3\right]B_1^{\rm E}\, , \\
B_2^{\rm Riem^2}&=\left[1-84\alpha_1-12\alpha_2-\frac{28}{3}\alpha_3\right]B_2^{\rm E}\, , \\
B_3^{\rm Riem^2}&=\left[1-84\alpha_1-12\alpha_2+12\alpha_3\right]B_3^{\rm E}\, .
\end{align}
We have verified that these results reduce to the ones found in \cite{Miao:2013nfa} for seven-dimensional Critical Gravity \cite{Lu,Deser:2011xc}. In that case, $\alpha_1=-1/240$, $\alpha_2=1/20$, $\alpha_3=-1/16$ and the charges read $B_1^{\rm CG}=2/3$, $B_2^{\rm CG}=4/3$, $B_3^{\rm CG}=0$. It is also easy to verify that the resulting charges satisfy the relation $3B_3=(B_2-B_1/2)$, which holds for theories that are unaffected by the splitting choice, as argued in \cite{Miao:2015iba}.
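The Critical Gravity check amounts to simple rational arithmetic on the bracketed factors; a sympy sketch (an illustrative addition, with the charges understood in units of the corresponding $B_i^{\rm E}$):

```python
import sympy as sp

# seven-dimensional Critical Gravity couplings
a1, a2, a3 = sp.Rational(-1, 240), sp.Rational(1, 20), sp.Rational(-1, 16)

f1 = 1 - 84*a1 - 12*a2 + sp.Rational(4, 3)*a3   # bracket of B_1^{Riem^2}
f2 = 1 - 84*a1 - 12*a2 - sp.Rational(28, 3)*a3  # bracket of B_2^{Riem^2}
f3 = 1 - 84*a1 - 12*a2 + 12*a3                  # bracket of B_3^{Riem^2}

assert (f1, f2, f3) == (sp.Rational(2, 3), sp.Rational(4, 3), 0)
```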
Proceeding analogously with the cubic densities, we obtain
\begin{align}
B_1^{\rm Riem^3}&=\left[1+39\beta_1-20\beta_2+4\beta_3+28\beta_4+108\beta_5+108\beta_6+756\beta_7+5292\beta_8\right]B_1^{\rm E}\, , \\
B_2^{\rm Riem^3}&=\left[1+7\beta_1-20\beta_2+68\beta_3+476\beta_4+108\beta_5+108\beta_6+756\beta_7+5292\beta_8\right]B_2^{\rm E}\, , \\
B_3^{\rm Riem^3}&=\left[1+39\beta_1-84\beta_2-60\beta_3-420\beta_4+108\beta_5+108\beta_6+756\beta_7+5292\beta_8\right]B_3^{\rm E}\, .
\end{align}
We can check, at this order, which theories satisfy the $3B_3-(B_2-B_1/2)=0$ condition. Evaluating the quantity in the left-hand side, one obtains
\begin{equation}
3B_3-(B_2-B_1/2)=-\frac{(\beta_1+2\beta_2) L_{\star}^5}{32\pi G}\, .
\end{equation}
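This combination can be assembled directly from the expressions for $B_i^{\rm Riem^3}$ together with the Einstein gravity values of the charges (note the overall $L_\star^5/G$ scaling appropriate to $d=6$); a sympy sketch (an illustrative addition):

```python
import sympy as sp

L, G = sp.symbols('L_star G', positive=True)
b = sp.symbols('beta1:9')
wald = 108*b[4] + 108*b[5] + 756*b[6] + 5292*b[7]  # common Wald-type pieces

B1E, B2E, B3E = -L**5/(256*sp.pi*G), -L**5/(1024*sp.pi*G), L**5/(3072*sp.pi*G)
B1 = (1 + 39*b[0] - 20*b[1] + 4*b[2] + 28*b[3] + wald)*B1E
B2 = (1 + 7*b[0] - 20*b[1] + 68*b[2] + 476*b[3] + wald)*B2E
B3 = (1 + 39*b[0] - 84*b[1] - 60*b[2] - 420*b[3] + wald)*B3E

combo = 3*B3 - (B2 - B1/2)
assert sp.simplify(combo + (b[0] + 2*b[1])*L**5/(32*sp.pi*G)) == 0
```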
Hence, such a combination vanishes for all theories for which $\beta_1=-2\beta_2 $. This includes, in particular, the cubic Lovelock density, in agreement with the result of \cite{Safdi:2012sn}. The explicit expressions for the quadratic and cubic theories read
\begin{align}
B_1^{\mathcal{X}_4}&=\left[1-\frac{104}{3}\lambda_2\right]B_1^{\rm E}\, , \quad B_2^{\mathcal{X}_4}=\left[1-\frac{136}{3}\lambda_2\right]B_2^{\rm E}\, , \quad B_3^{\mathcal{X}_4}=\left[1-24\lambda_2\right]B_3^{\rm E}\, ,\\
B_1^{\mathcal{X}_6}&=\left[1+136\lambda_3\right]B_1^{\rm E}\, , \quad B_2^{\mathcal{X}_6}=\left[1+200\lambda_3\right]B_2^{\rm E}\, , \quad B_3^{\mathcal{X}_6}=\left[1+72\lambda_3\right]B_3^{\rm E}\, .
\end{align}
\subsection{Corner regions}
In this subsection we construct the universal function characteristic of corner regions for general holographic cubic gravities using the perturbative HEE functionals. We show that the introduction of such terms in the bulk Lagrangian modifies the angular dependence of the Einstein gravity function, as opposed to previously considered quadratic and $f(R)$ theories. We compute the new functions explicitly and perform some comparisons with the analogous ones corresponding to free scalars and fermions.
\subsubsection*{General aspects of corner entanglement}
The structure of divergences and universal terms in the entanglement entropy gets modified when the entangling surface $\partial A$ contains geometric singularities ---see {\it e.g.,}\ \cite{Myers:2012vs,Bueno:2019mex} for some general accounts of this phenomenon in various dimensions. Here, we will focus on the prototypical example of (straight) corners in $d=3$ CFTs. Given a fixed time slice, the entanglement entropy corresponding to a corner region of opening angle $\theta$ in the ground state of a CFT regulated by a UV cutoff $\delta$ takes the form
\begin{equation}\label{geneco}
S_{\ssc \rm EE} = b_1 \frac{H }{\delta} - a(\theta) \log(H/\delta)+ b_0\, .
\end{equation}
Here, $H$ is an IR regulator and $b_1$ is a non-universal coefficient. On the other hand, $b_0$ is a coefficient which generically contains a universal non-local contribution and a non-universal part of intrinsically local nature induced by possible redefinitions of the regulator $\delta$.
With respect to the case of smooth regions, the novelty here is the appearance of a new logarithmic divergence controlled by the corner function $a(\theta)$, of universal nature. By now, many aspects of this function have been studied in a plethora of contexts
---{\it e.g.,}\ for free fields \cite{Casini:2006hu,Casini:2008as,Casini:2009sr,Bueno3,Dowker:2015pwa,Dowker:2015tma,Elvang:2015jpa}, for large-$N$ vector models \cite{Whitsitt2017}, for holographic theories \cite{Hirata:2006jx,Bueno2,Fonda:2015nma,Alishahiha:2015goa,Miao:2015dua,Pang:2015lka,Bianchi:2016xvf,Mozaffar:2015xue,Pastras:2017fsy,Ghasemi:2017pke,Bakhshaei:2017qud,Caceres:2018luq,Ghasemi:2019mif,Dorn:2018als}, in interacting lattice models \cite{2011PhRvB..84p5134K,PhysRevLett.110.135702,sahoo15,Kallin:2014oka,Laflorencie:2015lwa,Helmes:2015mwa,DeNobili:2016nmj,Helmes:2016fcp}, and for general CFTs \cite{Bueno1,Faulkner:2015csl,Bueno:2015ofa,Witczak-Krempa:2016jhc,Chu:2016tps}. As a result of this thorough study, the function $a(\theta)$ has been shown to satisfy a number of properties, universal relations and bounds which we summarize now.
On the one hand, the purity of the ground state, which implies the well-known relation $S_{\ssc \rm EE}(A)=S_{\ssc \rm EE}(\bar A)$, requires $a(\theta)=a(2\pi-\theta)$. Besides, using strong subadditivity and Lorentz invariance one can show that \cite{Casini:2008as}
\begin{equation}\label{atht}
a(\theta)\geq 0 \, ,\quad \partial_{\theta} a(\theta) \leq 0\, , \quad \partial^2_{\theta} a(\theta) \geq -\frac{\partial_{\theta }a(\theta)}{\sin \theta}\, , \quad \text{for} \quad \theta \in [0,\pi]\, .
\end{equation}
In particular, this implies that $a(\theta)$ is a positive, monotonically decreasing and convex function of the opening angle as we vary it from $\theta\sim 0$, corresponding to a very sharp corner, to $\theta\sim \pi$, corresponding to a very open, almost-smooth, corner. In those two limits, the function behaves, respectively, as \cite{Casini:2006hu,Casini:2009sr,Casini:2008as}
\begin{equation}\label{athth}
a(\theta \simeq 0) = \frac{\kappa}{ \theta} + \mathcal{O}(\theta)\, , \quad \quad a(\theta \simeq \pi) = \sigma \cdot (\theta - \pi)^2+\sum_{p=2} \sigma^{(p-1)} \cdot (\theta- \pi)^{2p}\, .
\end{equation}
In the first expression, $\kappa$ is a constant which can be shown to coincide with the slab coefficient $\kappa^{(3)}$ ---see \req{slabs} above--- for general theories \cite{Myers:2012vs,Bueno2}. In the second formula, we have made manifest the fact that only even powers appear in the expansion. The leading coefficient, $\sigma$, turns out to be related to the stress-energy tensor two-point function coefficient $C_{\ssc T}$ through
\begin{equation}\label{sigmact}
\sigma=\frac{\pi^2}{24} C_{\ssc T} \, ,
\end{equation}
for general CFTs. This relation was conjectured in \cite{Bueno1} based on holographic and free-field calculations and proved in full generality in \cite{Faulkner:2015csl} ---see also \cite{Miao:2015dua,Bueno4,Elvang:2015jpa} for intermediate progress and partial proofs. In fact, the full corner functions of all CFTs considered so far in the literature turn out to become very close to each other when normalized by $C_{\ssc T}$ \cite{Bueno1}.
Using \req{sigmact} and the third relation in \req{atht}, a lower bound on $a(\theta)$ valid for general CFTs was obtained in \cite{Bueno:2015ofa}. This takes the form
\begin{equation} \label{amin}
a(\theta) \geq \mathfrak{a}_{\rm min}(\theta) \, , \quad \text{where} \quad \mathfrak{a}_{\rm min}(\theta)\equiv \frac{\pi^2 C_{\ssc T}}{3} \log \left[1/\sin(\theta/2) \right]\, ,
\end{equation}
where $C_{\ssc T}$ is to be understood as the one corresponding to the theory we are comparing with. The bound turns out to be rather tight for all theories considered so far, even for fairly small values of the opening angle \cite{Bueno:2015ofa} ---see also \cite{Sirois:2020zvc}. In particular, the values found from numerical and lattice simulations of various models at $\theta=\pi/2$ all fall within the approximate range $a(\pi/2)/C_{\ssc T} \in(1.2, 1.3)$ \cite{PhysRevLett.110.135702,sahoo15,Kallin:2014oka,Helmes:2016fcp}, whereas the bound value reads $\mathfrak{a}_{\rm min}(\pi/2)/C_{\ssc T}\simeq 1.1402$.
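The quoted bound value is simply \req{amin} evaluated at $\theta=\pi/2$, and can be reproduced in a couple of lines (a minimal Python check; the comparison with the lattice window is the one stated above):

```python
import math

def a_min_over_CT(theta):
    """Lower bound a_min(theta)/C_T = (pi^2/3) log[1/sin(theta/2)]."""
    return (math.pi**2 / 3) * math.log(1 / math.sin(theta / 2))

# At theta = pi/2 this reduces to (pi^2/6) log 2 ~ 1.1402,
# which indeed lies below the lattice window (1.2, 1.3).
val = a_min_over_CT(math.pi / 2)
```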
Additional lower bounds valid also for the general R\'enyi entropy versions of $a(\theta)$ can be constructed using the inequalities
\begin{equation}
{\rm det} \left\{ \partial_{\theta}^{j+k+2} a_{n}(\theta) \right\}_{j,k=0}^{M-1} \geq 0\, ,
\end{equation}
which follow from the reflection positivity property of Euclidean QFTs \cite{Casini4}. Such bounds were explored in \cite{Bueno:2015ofa,Helmes:2016fcp} and suggest, in particular, that all coefficients in the almost-smooth expansion in \req{athth} are positive, {\it i.e.,}\ $ \sigma^{(p-1)} >0$ $\forall \, p$.\footnote{This has been shown to be true in general for $p=1,2,3,4,5$ in \cite{Bueno:2015ofa}.} In fact, for sufficiently large $p$, it was observed in \cite{Bueno:2015ofa} that those coefficients behave as
\begin{equation}
\sigma^{(p)} \simeq \frac{2\kappa}{\pi^{2p+3}}\, , \quad p\gg 1\, ,
\end{equation}
where $\kappa$ is the sharp-limit coefficient.
The results mentioned so far are valid for general CFTs. Theories for which $a(\theta)$ has actually been computed for general values of the opening angle are nonetheless scarce. For free scalars and fermions, $a(\theta)$ was obtained numerically from a complicated set of coupled differential and algebraic equations in \cite{Casini:2006hu,Casini:2008as,Casini:2009sr}. In addition, the Ryu-Takayanagi prescription allowed for the computation of the corresponding corner function for holographic theories dual to Einstein gravity \cite{Hirata:2006jx}. The resulting expression is shown below in \req{eisss} and is given implicitly in terms of two integrals.
The only two cases for which a completely explicit expression for $a(\theta)$ is known correspond, respectively, to certain Lifshitz quantum critical points \cite{fradkin} and the so-called ``Extensive Mutual Information model'' \cite{Casini:2005rm,Casini:2008wt,Swingle:2010jz}. The corresponding corner functions read
\begin{equation}
a_{\rm \scriptscriptstyle Lif.}(\theta)=\frac{(\theta-\pi)^2}{\theta(2\pi-\theta)}\, , \quad\quad a_{\rm \scriptscriptstyle EMI}(\theta)=1+ (\pi-\theta) \cot \theta\, .
\end{equation}
Using these two functions, it is possible to construct a simple approximation to the corner function of any CFT provided one knows the values of the corresponding sharp and smooth coefficients, $\kappa$ and $\sigma$. This is given by \cite{Bueno3}
\begin{equation}\label{trii}
\tilde a(\theta)=\frac{2\pi (\kappa-3\pi \sigma)}{\pi^2-6} \frac{(\theta-\pi)^2}{\theta(2\pi-\theta)}-\frac{3(2\kappa-\pi^3\sigma)}{\pi (\pi^2-6)} \left[1+ (\pi-\theta) \cot \theta \right]\, .
\end{equation}
This respects the asymptotic behavior both as $\theta\rightarrow 0$ and as $\theta \rightarrow \pi$ and produces very precise approximations to the actual free-field and Einstein gravity results: the relative agreement is better than 99$\%$ for all values of $\theta$.
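Both asymptotic limits of \req{trii} can be verified numerically. The sketch below (with illustrative values of $\kappa$ and $\sigma$, not those of any particular theory) checks that $\tilde a(\theta)\,\theta \rightarrow \kappa$ as $\theta\rightarrow 0$ and $\tilde a(\theta)/(\theta-\pi)^2 \rightarrow \sigma$ as $\theta\rightarrow\pi$:

```python
import math

def a_trial(theta, kappa, sigma):
    """Trial corner function built from the Lifshitz and EMI profiles."""
    lif = (theta - math.pi)**2 / (theta * (2 * math.pi - theta))
    emi = 1 + (math.pi - theta) / math.tan(theta)
    c1 = 2 * math.pi * (kappa - 3 * math.pi * sigma) / (math.pi**2 - 6)
    c2 = 3 * (2 * kappa - math.pi**3 * sigma) / (math.pi * (math.pi**2 - 6))
    return c1 * lif - c2 * emi
```

The cancellation of the mixed terms in the two limits is what fixes the two coefficients in \req{trii} uniquely.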
If access to some of the subleading coefficients $\sigma^{(p)}$ is also available, improved ansatze can be constructed, as shown in \cite{Helmes:2016fcp}.
\subsubsection*{Einstein gravity}
Let us quickly review how the corner function is obtained for Einstein gravity \cite{Hirata:2006jx,Drukker:1999zq}. First, it is useful to write the AdS$_4$ metric as
\begin{equation}
\mathrm{d} s^2=\frac{L_{\star}^2}{z^2}[\mathrm{d} \tau^2+\mathrm{d} z^2+\mathrm{d} r^2+r^2 \mathrm{d} \phi^2]\, .
\end{equation}
The corner region is defined by $\tau=0$, $r \geq 0$, $|\phi| \leq \theta/2$. We can parametrize the bulk surface as $z=r h(\phi)$, where $h(\phi)$ is a function satisfying $h(\phi \rightarrow \pm \theta/2)\rightarrow 0$. Unit normals to the surface are given by
\begin{equation}
n_1=\frac{z}{L_{\star}} \partial_{\tau} \, , \quad n_2=\frac{z}{L_{\star} \sqrt{1+h^2+\dot h^2}}\left[\partial_z - h \partial_r-\frac{\dot h}{r} \partial_{\phi} \right]\, .
\end{equation}
Using these we have
\begin{align}
h_{\mu\nu}\mathrm{d} x^{\mu}\mathrm{d} x^{\nu}=\frac{L_{\star}^2}{z^2} &\left[ \mathrm{d} \tau^2+ \frac{1}{(1+h^2+\dot h^2)} \left[(h^2+\dot h^2)\mathrm{d} z^2+ (1+\dot h^2)\mathrm{d} r^2 \right. \right. \\ &\left. \left. + r^2(1+h^2) \mathrm{d} \phi^2 + 2h \mathrm{d} z \mathrm{d} r +2 r \dot h \mathrm{d} z \mathrm{d} \phi -2 r h \dot h \mathrm{d} r \mathrm{d} \phi \right] \right]\, .
\end{align}
Projectors on the surface are given by
\begin{equation}
t_r= h \partial_z + \partial_r \, , \quad t_{\phi}= r \dot h \partial_z+\partial_{\phi}\, ,
\end{equation}
and the projected induced metric reads
\begin{align}
h_{ij} \mathrm{d} y^i \mathrm{d} y^j =\frac{L_{\star}^2}{r^2 h^2} \left[ (1+h^2)\mathrm{d} r^2+r^2 (1+\dot h^2)\mathrm{d} \phi^2 + 2r h \dot h \mathrm{d} r \mathrm{d} \phi \right]\, .
\end{align}
The non-vanishing components of the extrinsic curvatures, $K^2_{ij}$, are in turn given by
\begin{align}
& K^2_{rr}=\frac{-L_{\star} (1+h^2)}{r^2 h^2\sqrt{1+h^2+\dot h^2}}\, , \quad K^2_{r \phi}=\frac{-L_{\star} \dot h}{r h \sqrt{1+h^2+\dot h^2}}\, , \\ & K_{\phi\phi}^2=\frac{-L_{\star} (1+h^2+\dot h^2+\ddot h h)}{h^2\sqrt{1+h^2+\dot h^2}}\, .
\end{align}
These are all the pieces we will need to evaluate the corner function for perturbative higher-order gravities.
For our parametrization of the holographic entangling surface, the Ryu-Takayanagi functional becomes
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm E}=\frac{L_{\star}^2}{2G}\int_{\delta/h_0}^H\frac{\mathrm{d} r}{ r} \int_0^{\theta/2-\epsilon} \mathrm{d} \phi \frac{\sqrt{1+h^2+\dot h^2}}{h^2}\, ,
\end{equation}
where we have made manifest the UV cutoff at $z=\delta$ and where $h_0\equiv h(0)$ is the maximum value taken by the function $h(\phi)$. The angular cutoff $\epsilon$ is defined through the condition $r h(\theta/2-\epsilon)=\delta$, which means that the integral over $r$ cannot be performed without doing the angular one first. The extremal surface condition, $K^2=0$, reads
\begin{equation}
2+3h^2+h^4+2\dot h^2+h(1+h^2)\ddot h=0\, .
\end{equation}
This has a first integral,
\begin{equation}
\frac{1+h^2}{h^2\sqrt{1+h^2+\dot h^2}}=\frac{\sqrt{1+h_0^2}}{h_0^2}\, ,
\end{equation}
which can be used to write $\dot h$ in terms of $h$ in the RT functional. Trading the integral over $\phi$ for one over $h$ and making the change of variables $y=\sqrt{1/h^2-1/h_0^2}$ we are left with
\begin{align}
S_{\rm \scriptscriptstyle HEE}^{\rm E}&=\frac{L_{\star}^2}{2G}\int_{\delta/h_0}^H\frac{\mathrm{d} r}{ r} \int_0^{\sqrt{(r/\delta)^2-1/h_0^2}}\mathrm{d} y \sqrt{\frac{1+h_0^2(1+y^2)}{2+h_0^2(1+y^2)}} \\
&= \frac{L_{\star}^2}{2G}\int_{\delta/h_0}^H\frac{\mathrm{d} r}{ r} \int_0^{\infty} \mathrm{d} y \left[ \sqrt{\frac{1+h_0^2(1+y^2)}{2+h_0^2(1+y^2)}} -1\right]+ \frac{L_{\star}^2}{2G} \int_{\delta/h_0}^H \frac{\mathrm{d} r}{ r} \sqrt{\frac{r^2}{\delta^2}-\frac{1}{h_0^2}}\, .
\end{align}
Expanding this expression for small $\delta$ one finally obtains
\begin{equation}
S_{\rm \scriptscriptstyle HEE}^{\rm E}=\frac{L_{\star}^2}{2G} \frac{H}{\delta}- a_{\rm E}(\theta) \log (H/\delta)+ \mathcal{O}(\delta^0) \, ,
\end{equation}
in agreement with the general expression \req{geneco}. The result for the Einstein gravity corner function can be written as
\cite{Drukker:1999zq,Hirata:2006jx}
\begin{align}\label{eisss}
a_{\rm E}(\theta)&=\frac{L_{\star}^2}{2G} \int_{0}^{+\infty} \mathrm{d} y \left[1-\sqrt{\frac{1+h_0^2(1+y^2)}{2+h_0^2(1+y^2)}} \right]\, , \\ \theta&=\int_{0}^{h_0} \mathrm{d} h \frac{2 \sqrt{1+h_0^2}h^2}{ \sqrt{1+h^2}\sqrt{(h_0^2-h^2)(h_0^2+(1+h_0^2)h^2)} }\, ,
\end{align}
where the dependence on the opening angle follows implicitly from the relation $h_0(\theta)$ determined by the second integral.
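These integrals are straightforward to evaluate numerically. The sketch below (our own numerical scheme, in units $L_{\star}^2/(2G)=1$; the $h$-integral is regularized via the substitution $h=h_0\sin u$, which removes the endpoint singularity) checks that $a_{\rm E}(\theta)\,\theta$ approaches the sharp-limit value $\Gamma[3/4]^4/\pi$ in these units as $h_0\rightarrow 0$, and that $\theta\rightarrow \pi$ as $h_0\rightarrow\infty$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def theta_of(h0):
    """Opening angle theta(h0); h = h0 sin(u) removes the h = h0 singularity."""
    def f(u):
        s = np.sin(u)
        return (2 * np.sqrt(1 + h0**2) * h0 * s**2
                / (np.sqrt(1 + (h0 * s)**2) * np.sqrt(1 + (1 + h0**2) * s**2)))
    return quad(f, 0, np.pi / 2)[0]

def a_einstein(h0):
    """Einstein gravity corner function in units L^2/(2G) = 1."""
    def f(y):
        q = h0**2 * (1 + y**2)
        return 1 - np.sqrt((1 + q) / (2 + q))
    return quad(f, 0, np.inf, limit=200)[0]

# sharp-limit coefficient kappa_E = Gamma(3/4)^4 / pi in these units
kappa_E = gamma(0.75)**4 / np.pi
```

Small $h_0$ corresponds to sharp corners (large $a_{\rm E}$), large $h_0$ to almost-smooth ones.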
The above expressions can be alternatively written in terms of elliptic functions \cite{Fonda:2014cca} as
\begin{align}
a_{\rm E}(\theta)&=\frac{L_{\star}^2}{2G} \sqrt{1+\frac{2}{h_0^2}} \left[ \mathbb{E}\left[\frac{1}{2+h_0^2} \right] - \frac{(1+h_0^2)}{(2+h_0^2)} \mathbb{K}\left[\frac{1}{2+h_0^2} \right] \right] \, , \\
\theta&=-\frac{2h_0}{\sqrt{2+h_0^2(3+h_0^2)}} \left[\mathbb{K}\left[\frac{1}{2+h_0^2} \right]- \Pi \left[\frac{1+h_0^2}{2+h_0^2},\frac{1}{2+h_0^2} \right] \right]\, .
\end{align}
It can be verified that $a_{\rm E}(\theta)$ satisfies all properties explained in the previous subsection. Values of the opening angle close to $\theta=\pi$ correspond to $h_0\rightarrow \infty$, and an expansion of the $\theta(h_0)$ integral in that case can be obtained and inverted giving
\begin{equation}
h_0=\left(\frac{\pi}{\pi - \theta}\right)-\frac{3}{4} \left( \frac{\pi - \theta}{\pi} \right)- \frac{11}{64} \left( \frac{\pi - \theta}{\pi} \right)^3 -\frac{17}{256} \left( \frac{\pi - \theta}{\pi} \right)^5 -\frac{383}{16384} \left( \frac{\pi - \theta}{\pi} \right)^7+ \mathcal{O}(\pi - \theta)^9\, .
\end{equation}
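This truncated inversion can be sanity-checked against a direct numerical evaluation of the $\theta(h_0)$ integral given above (a sketch; the quadrature uses the substitution $h=h_0\sin u$ to remove the endpoint singularity):

```python
import numpy as np
from scipy.integrate import quad

def theta_of(h0):
    """theta(h0) from the integral form, with h = h0 sin(u)."""
    def f(u):
        s = np.sin(u)
        return (2 * np.sqrt(1 + h0**2) * h0 * s**2
                / (np.sqrt(1 + (h0 * s)**2) * np.sqrt(1 + (1 + h0**2) * s**2)))
    return quad(f, 0, np.pi / 2)[0]

def h0_series(eps):
    """Truncated inversion near theta = pi, with eps = pi - theta."""
    x = eps / np.pi
    return (1 / x - (3 / 4) * x - (11 / 64) * x**3
            - (17 / 256) * x**5 - (383 / 16384) * x**7)
```

Feeding the series back into the quadrature recovers $\theta=\pi-\epsilon$ to high accuracy for moderately small $\epsilon$.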
Inserting this in $a_{\rm E}(\theta)$ one obtains an expansion of the form of the second expression in \req{athth}, where the leading smooth-limit coefficients are given by \cite{Bueno2,Bueno:2015ofa}
\begin{align*}
\sigma_{\rm E}=\frac{L_{\star}^2}{8\pi G}\, , \quad \sigma'_{\rm E}=\frac{5L_{\star}^2}{64\pi^3 G}\, , \quad \sigma''_{\rm E}=\frac{37L_{\star}^2}{512 \pi^5 G}\, , \quad \sigma'''_{\rm E}=\frac{585L_{\star}^2}{8192 \pi^7 G}\, , \quad \sigma^{(4)}_{\rm E}=\frac{9399L_{\star}^2}{131072 \pi^9 G}\, .
\end{align*}
As many higher-order coefficients as desired can be determined analytically in the same way.
On the other hand, the sharp limit coefficient is given by \cite{Bueno2}
\begin{equation}
\kappa_{\rm E}=\frac{L_{\star}^2}{2\pi G} \Gamma[3/4]^4\, .
\end{equation}
\subsubsection*{Quadratic theories}
As observed in \cite{Bueno2}, the only modification of the Einstein gravity corner function $a_{\rm E}(\theta)$ which arises from including quadratic or $f(R)$ terms in the gravitational action is an overall constant rescaling. In particular, for an action of the form \req{quaact} one finds
\begin{equation}
a_{\rm Riem^2}(\theta)= \left[1-24 \alpha_1-6\alpha_2 \right]\, a_{\rm E}(\theta)\, .
\end{equation}
Hence, no new functional dependence on the opening angle is found from these gravitational interactions.
As discussed in some detail in the same paper, the reason for this can be easily understood. On the one hand, all terms involving bulk curvatures reduce to terms proportional to the Ryu-Takayanagi functional when evaluated on the pure AdS$_4$ background we are considering. On the other hand, any term proportional to $K^aK_a$ is also extremized by RT surfaces, since the extremal surface condition reads $K^a=0$. As a consequence, terms proportional to $K^aK_a$ in the action simply vanish on extremal surfaces and do not contribute. Finally, a term like $K_{aij}K^{aij}$ can also be seen not to contribute: one can trade the $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ piece for the Gauss-Bonnet density (plus additional $R^2$ and $R_{\mu\nu}R^{\mu\nu}$ terms), whose contribution to the EE functional is the intrinsic Ricci scalar on the RT surface \cite{Jacobson:1993xs,Hung:2011xb}. The latter is a topological term in $(d-1)=2$ dimensions and therefore makes no contribution to the equations of motion; in this case, it does not even modify the Einstein gravity result by an overall constant.
Our results here allow us to compute the corner function for cubic theories and verify that non-trivial modifications of $a_{\rm E}(\theta)$ arise in the presence of such terms.
\subsubsection*{Cubic theories}
Let us then consider a general cubic action of the form \req{cubic}. If we only turn on the couplings corresponding to $\mathcal{L}_{i}^{(3)}$ with $i=3,4,5,6,7,8$ we find that, similarly to the quadratic case, the corner function is the same as for Einstein gravity up to an overall factor. In the $i=5,6,7,8$ cases, the fact that the functionals have no anomaly contribution implies that the overall coefficient correcting the Einstein gravity result is the same as for $a^{\star (3)}$. For $i=3,4$, even though there is no modification in the functional dependence of the corner function, there is a modification to the overall coefficient coming from the anomaly terms. The result for all these densities reads
\begin{equation}
a_{\mathcal{L}_{(3,4,5,6,7,8)}^{(3)}}(\theta) = [1+6\beta_3+24\beta_4+27\beta_5+27\beta_6+108\beta_7+432\beta_8] a_{\rm E}(\theta) \, .
\end{equation}
On the other hand, $\mathcal{L}_{1}^{(3)}$ and $\mathcal{L}_{2}^{(3)}$ do modify the angular dependence of $a_{\rm E}$. Keeping only those two terms in the action, we find instead
\begin{equation}
a_{\mathcal{L}_{(1,2)}^{(3)}}(\theta) = [1+6\beta_1+12\beta_2] a_{\rm E}(\theta)+ \sum_{i=1}^2\beta_i g_{i}(\theta) \, ,
\end{equation}
where
\begin{align}
g_{1}(\theta)&\equiv + \frac{L_{\star}^2}{2G} \int_{0}^{+\infty} \frac{3(1+h_0^2)\left[3+h_0^2(5+4y^2)+2h_0^4(1+y^2)^2 \right]}{\left[1+h_0^2(1+y^2)\right]^{7/2} \sqrt{2+h_0^2(1+y^2)}}\mathrm{d} y \, ,\\
g_{2}(\theta)&\equiv - \frac{L_{\star}^2}{2G} \int_{0}^{+\infty} \frac{6(1+h_0^2)\left[3+h_0^2(7+8y^2)+4h_0^4(1+y^2)^2 \right]}{\left[1+h_0^2(1+y^2)\right]^{7/2} \sqrt{2+h_0^2(1+y^2)}}\mathrm{d} y \, .
\end{align}
Hence, at cubic order we find the first examples of holographic corner functions which modify the angular dependence of $a(\theta)$ in a nontrivial way with respect to the Einstein gravity case.
As we mentioned earlier, the almost-smooth limit of the corner function is controlled by $C_{\ssc T}$ for all CFTs.
For cubic theories, the result for this coefficient appears in \req{ctee23} above. In $d=3$ one finds
\begin{equation}
C_{\scriptscriptstyle T}^{\rm Riem^3}=\left[1+12\beta_1-12\beta_2+6\beta_3+24\beta_4+27\beta_5+27\beta_6+108\beta_7+432\beta_8\right] C_{\scriptscriptstyle T}^{\rm E}\, ,
\end{equation}
where $C_{\scriptscriptstyle T}^{\rm E}=3L^2/(\pi^3 G)$.
Now, including all cubic terms in the action, we find for the smooth limit of $a_{{ \rm{Riem}}^3}(\theta)$ that indeed
\begin{equation}
\sigma_{\rm Riem^3}= \frac{\pi^2}{24} C_{\scriptscriptstyle T}^{\rm Riem^3}\, ,
\end{equation}
holds, as expected. This was in fact previously verified in \cite{Miao:2015dua}, where several general results regarding the behavior of $a(\theta)$ for holographic theories were discussed, including the fact that $\kappa$ is not universally related to $C_{\ssc T}$, as opposed to $\sigma$. The subleading coefficients in the smooth-limit expansion are modified with respect to the Einstein gravity result in an obvious way for $\mathcal{L}_{(3,4,5,6,7,8)}^{(3)}$ but in a nontrivial one for $\mathcal{L}_{1}^{(3)}$ and $\mathcal{L}_{2}^{(3)}$. The first few of them read
\begin{align}
& \sigma_{\mathcal{L}_{(1,2)}^{(3)}}=[1+12\beta_1-12\beta_2] \sigma_{\rm E}\, , \quad &&\sigma'_{\mathcal{L}_{(1,2)}^{(3)}}=[1+15\beta_1-6\beta_2]\sigma'_{\rm E}\, , \\ &\sigma''_{\mathcal{L}_{(1,2)}^{(3)}}=\left[1+\frac{1173}{74}\beta_1-\frac{189}{37}\beta_2\right] \sigma''_{\rm E}\, , \quad &&\sigma'''_{\mathcal{L}_{(1,2)}^{(3)}}=\left[1+\frac{963}{65}\beta_1-\frac{414}{65}\beta_2\right] \sigma'''_{\rm E}\, , \\
& \sigma^{(4)}_{\mathcal{L}_{(1,2)}^{(3)}}=\left[1+\frac{43946}{3133}\beta_1-\frac{24896}{3133}\beta_2\right] \sigma^{(4)}_{\rm E}\, .
\end{align}
Just as $\sigma$ is controlled by the stress-tensor two-point coefficient $C_{\ssc T}$ for general theories, it is tempting to speculate that $\sigma'$ may be controlled by the stress-tensor three-point coefficients, which for $d=3$ CFTs can be chosen to be $C_{\ssc T}$ and an additional dimensionless coefficient, customarily denoted $t_4$ \cite{Hofman:2008ar}. This possibility was pointed out in \cite{Miao:2015dua} and explored in \cite{Bueno:2015ofa}. There, using the available results for free fields and holographic Einstein gravity, it was shown that $\sigma'$ is not a linear combination of $C_{\ssc T}$ and $C_{\ssc T} t_4$ in general. Using the results obtained in \cite{Li:2019auk} for $t_4$ for general cubic higher-curvature theories, we verify that no such linear relation holds for this class of theories either. In the opposite limit, we find
\begin{equation}
\kappa_{\rm Riem^3}=\left[1+\frac{69}{5}\beta_1-\frac{42}{5}\beta_2+6\beta_3+24\beta_4+27\beta_5+27\beta_6+108\beta_7+432\beta_8\right] \kappa_{\rm E}\, .
\end{equation}
Obviously, the coefficients for $\mathcal{L}_{i}^{(3)}$ with $i=3,\dots,8$ are the same as those appearing in $\sigma_{\rm Riem^3}$, but that is not the case for $\mathcal{L}_{1}^{(3)}$ and $\mathcal{L}_{2}^{(3)}$. On the other hand, as expected on general grounds \cite{Myers:2012vs,Bueno2}, $\kappa_{\rm Riem^3}$ matches the coefficient of the slab EE computed above ---compare with \req{kaappa} for $d=3$.
We would like to perform some more comparisons of our new corner functions. For the sake of conciseness, from now on we restrict the discussion to Einsteinian cubic gravity, whose Lagrangian we introduced in \req{cubic2}.
\begin{figure}[t] \vspace{-0.1cm} \centering
\includegraphics[scale=0.61]{corners.pdf} \includegraphics[scale=0.42]{corners2.pdf}
\caption{We plot the corner functions (normalized by their respective charges $C_{\ssc T}$) for a free scalar (blue), a free fermion (red), holographic Einstein gravity (yellow) and holographic Einsteinian cubic gravity (green). For the limit value $\mu \simeq 0.00312$ corresponding to $t_4 =+4$ (see discussion below), the curve lies very close but slightly below the Einstein gravity result (green dashed line). The case $\mu \simeq -0.00322$ corresponding to the other limit value ($t_4=-4$) lies even closer but slightly above the Einstein gravity curve and just below the fermion one. The right plot is a zoom of the curves between $\theta=\pi/4$ and $\theta=3\pi/8$. The orange region in the left plot is excluded for general theories by the inequality \req{amin}. The green dotted curves correspond to the values $\mu=-0.05$ (upper curve) and $\mu=+0.05$ (lower curve) which we have included (only) in the left plot for visual reference. }
\label{corn}
\end{figure}
We find the corner function for this theory to be given by
\begin{align}
a_{\rm ECG}(\theta)&=(1+3\mu )a_{\rm E}(\theta)-\frac{\mu L_{\star}^2}{2G} \int_0^{\infty} \mathrm{d} y \frac{3(1+h_0^2) (15+8h_0^4(1+y^2)^2+h_0^2(23+16y^2))}{4(1+h_0^2(1+y^2))^{7/2}\sqrt{2+h_0^2(1+y^2)}}\, .
\end{align}
This can also be written in terms of elliptic functions as
\begin{align}
a_{\rm ECG}(\theta)&=(1+3\mu )a_{\rm E}(\theta)- \frac{ \mu L_{\star}^2}{40Gh_0\sqrt{1+h_0^2}} \left[(-51-51h_0^2+8h_0^4) \mathbb{E} \left[-\frac{1}{1+h_0^2} \right] \right. \\ \notag & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \, \left. +(51+47h_0^2-8h_0^4) \mathbb{K}\left[-\frac{1}{1+h_0^2} \right] \right] \, .
\end{align}
The first smooth-limit coefficients and the sharp-limit one read in this case
\begin{align}
& \sigma_{\rm ECG}=[1-3\mu] \sigma_{\rm E}\, , \quad &&\sigma'_{\rm ECG}=\left[1-\frac{33}{4}\mu \right]\sigma'_{\rm E}\, , \\ &\sigma''_{\rm ECG}=\left[1-\frac{2673}{296}\mu\right] \sigma''_{\rm E}\, , \quad &&\sigma'''_{\rm ECG}=\left[1-\frac{2061}{260}\mu\right] \sigma'''_{\rm E}\, , \\
& \sigma^{(4)}_{\rm ECG}=\left[1-\frac{41023}{6266}\mu\right] \sigma^{(4)}_{\rm E}\, , \quad &&\kappa_{\rm ECG}=\left[1-\frac{123}{20}\mu \right]\kappa_{\rm E}\, .
\end{align}
The positivity of these coefficients imposes the bound $\mu\leq 0.1107$ (coming from $\sigma''_{\rm ECG}\geq 0$). However, as shown in \cite{HoloECG}, the general bounds on the stress-tensor three-point function coefficient, $-4\leq t_4 \leq 4$ \cite{Buchel:2009sk}, impose more severe constraints on the allowed values of $\mu$, namely, $-0.00322 \leq \mu \leq 0.00312$. In the perturbative analysis performed in the present paper, bounds on finite values of $\mu$ are not so relevant, but we can use them to get an idea of how far it is sensible to take $\mu$ from zero when performing comparisons with other theories. In Fig.\,\ref{corn} we have plotted $a_{\rm ECG}(\theta)$ for the limiting values $\mu\simeq-0.00322$ and $\mu\simeq 0.00312$ (all intermediate values of $\mu$ lie between the two curves) along with the Einstein gravity result and the free scalar ($t_4=+4$) and free fermion ($t_4=-4$) ones \cite{Casini:2006hu,Casini:2008as,Casini:2009sr}. We can see that all curves are remarkably close to each other, in agreement with the observation/conjecture of \cite{Bueno1} that $a(\theta)/C_{\ssc T}$ is an almost-universal quantity for general CFTs. We observe this to be the case for the whole family of theories parametrized by the continuous parameter $\mu$ lying between the limiting cases extremizing the value of $t_4$. By increasing $|\mu|$, we can obtain curves which deviate more significantly from the Einstein and free-field curves (see dotted lines in Fig.\,\ref{corn}). However, those would correspond to toy models of CFTs which do not respect the general bounds $|t_4|\leq 4$. Hence, it is reasonable to expect that for actual CFTs the curves will indeed fall extremely close to each other in general. In fact, the ECG curves with $t_4=4$ and $t_4=-4$ lie even closer to the Einstein gravity one than the scalar and fermion curves do. This suggests that the scalar field curve may be an upper bound for general CFTs.
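The bound $\mu \leq 0.1107$ is just the most restrictive of the positivity conditions on the $\mu$-coefficients of $\sigma_{\rm ECG},\dots,\kappa_{\rm ECG}$, as a few lines of arithmetic confirm (the dictionary keys below are merely labels):

```python
# mu-coefficients c in expressions of the form [1 - c * mu] times the
# Einstein value, read off from sigma, sigma', sigma'', sigma''',
# sigma^(4) and kappa for ECG
coeffs = {
    "sigma": 3.0,
    "sigma_1": 33 / 4,
    "sigma_2": 2673 / 296,
    "sigma_3": 2061 / 260,
    "sigma_4": 41023 / 6266,
    "kappa": 123 / 20,
}
# positivity of each coefficient requires mu <= 1/c
bounds = {name: 1 / c for name, c in coeffs.items()}
binding = min(bounds, key=bounds.get)   # most restrictive condition
mu_max = bounds[binding]                # 296/2673 ~ 0.1107, from sigma''
```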
On the other hand, the possibility suggested in \cite{Bueno1} that the Einstein gravity curve is a lower bound for general CFTs seems to be ruled out by our analysis: the introduction of higher-curvature corrections allows the corner function to dip below the Einstein gravity one.\footnote{The same conclusion was previously reached in \cite{Miao:2015dua}.} Note that such a conjecture was also supported by the fact that while $t_4=0$ for Einstein gravity, both the scalar and the fermion curves ---which have, respectively, the largest positive and negative values of $t_4$ allowed--- lie above it. Here we observe that, contrary to the scalar case, ECG theories with $t_4\geq 0$ lie below the Einstein gravity curve.
\begin{figure}[t] \vspace{-0.1cm} \centering
\includegraphics[scale=0.7]{trials2.pdf}
\caption{We plot $1-a(\theta)/\tilde a(\theta)$ where $a(\theta)$ is the exact corner function and $\tilde a(\theta)$ the trial function defined in \req{trii} for Einstein gravity (yellow) and ECG for different values of $\mu$ (from top to bottom: $\mu=+0.00312,\,+ 0.002,\,+ 0.001,\, -0.001,\, -0.002,\, -0.00322$). The disagreement between both functions is always smaller than $\sim 1.2\%$ throughout the whole range of values of the opening angle. }
\label{corn2}
\end{figure}
In the previous subsection, we mentioned the possibility of approximating the function $a(\theta)$ for a given theory using the values of the almost-smooth and very-sharp limit coefficients, $\sigma$ and $\kappa$. The proposed trial function $\tilde a(\theta)$ appears in \req{trii}. We can use the new ECG corner functions to test the accuracy of this approximation beyond the free-field and Einstein cases explored in \cite{Bueno3}. In Fig.\,\ref{corn2}, we plot $1-a(\theta)/\tilde a(\theta)$ for various values of the ECG coupling falling between the limiting cases of $t_4=\pm4$. We observe that in all cases the error in the approximation never exceeds $\sim 1.2\%$ for any value of the opening angle, the approximation being slightly better for negative values of $\mu$. This provides good evidence that $\tilde a(\theta)$ can be used as an accurate approximation to the exact corner function for general CFTs.
\section{Final comments}\label{finalc}
The main results of the paper are summarized in the introduction. Let us conclude with some final comments.
In this paper we have obtained a new formula for the HEE functional valid for general higher-curvature gravities when considered as perturbative corrections to Einstein gravity ---the covariant form of the new expression appears in \req{Covariant:FullFunctional}. This formula, which gets rid of the weighted sum over $\alpha$ present in the original functional (\ref{SplittingProblem:GeneralFunctional}), is computationally much simpler to use in concrete cases beyond cubic order, and allowed us to evaluate the explicit form of the functionals for general quartic densities. If desired, it should be possible to implement it in a computer algebra system and compute the analogous expressions for even higher orders.
Besides its computational simplicity, the new form of the anomaly piece can be suggestively written in terms of the exponential of a differential operator ---this is particularly neat for Lovelock theories, see \req{OneComponent:LovelockExponential}. This form may be useful for potential applications beyond HEE, which may include new versions of the second law for higher-curvature black holes, {\it e.g.,}\ along the lines of \cite{Wall:2015raa,Bhattacharyya:2016xfs}.
As we have emphasized throughout the paper, the fact that our new expression is restricted to perturbative higher-curvature theories beyond quadratic order is related to the splitting problem, which requires the identification of the precise way in which Riemann tensor components must be decomposed into pieces of different weight $q_{\alpha}$ in the original functional for a given theory. While this could in principle be determined using the procedure developed in \cite{Dong:2017xht} on a theory-by-theory basis,\footnote{To the best of our knowledge, this has not been done explicitly for any non-trivial higher-curvature theory yet.} general results can be obtained at leading order in the couplings by considering the splittings corresponding to Einstein gravity, which has been our approach in this paper. Nonetheless, we would like to stress that, in fact, our formalism should be straightforwardly adaptable to situations in which the Riemann tensor components split in a different fashion. In that case, instead of the separation into type $A$ and $B$ components one may have to introduce additional types $C$, $D$, etc., depending on the different possible weights corresponding to the different split components. One could even think of a sort of general-splitting version of our formulas.
In Section \ref{unite} we have used our new expressions for cubic theories to evaluate several universal contributions to the EE characterizing the holographic CFTs they define. An analogous catalogue of coefficients could be obtained for quartic theories using the functionals presented in subsection \ref{quarticc}. Naturally, there are many possible additional applications within the HEE framework one could consider exploring using the new functionals presented here.
\section*{Acknowledgements}
We thank Felix Haehl, Rong-Xin Miao, Rob Myers and William Witczak-Krempa for useful discussions on related topics. PB and JC were supported by the Simons Foundation through the ``It From Qubit'' Simons collaboration. The work of AVL is supported by the Spanish MECD fellowship FPU16/06675, and by MINECO FPA2017-84436-P, Xunta de Galicia ED431C 2017/07, Xunta de Galicia (Centro singular de investigaci\'on de Galicia accreditation 2019-2022) and the European Union (European Regional Development Fund-ERDF), ``Mar\'ia de Maeztu'' Units of Excellence MDM-2016-0692, and the Spanish Research State Agency.
{
"timestamp": "2020-12-29T02:22:28",
"yymm": "2012",
"arxiv_id": "2012.14033",
"language": "en",
"url": "https://arxiv.org/abs/2012.14033"
}
\section{Introduction}
Person Re-Identification (ReID) has attracted much attention in the machine learning community \cite{alpher01,alpher32,alpher33}. It aims to judge whether or not two person images depict the same identity
and has widespread applications in video surveillance for public security. Inspired by the success of deep learning, many methods have been presented to deal with this task \cite{zheng2016person}. Among them, deep metric learning has become popular due to its seamless combination of distance metric learning and deep neural networks \cite{chengl15,alpher21,alpher16}. One representative work in the recent literature is TriNet \cite{hermans2017defense}, a convolutional neural network for learning embeddings of person images. It exploits the triplet loss to learn an embedding space where the data points of the same identities are closer to each other than those with different identities.
A key part of metric learning with a triplet loss is the mining of hard triplets, since hard triplets in the training set produce large gradients while the gradients of easy triplets are close to zero. Although TriNet \cite{hermans2017defense} emphasizes the importance of hard example mining and proposes a Batch-Hard mining strategy, which achieves competitive performance on person ReID benchmarks,
we observe that the loss of TriNet can still stagnate with vanishing gradients. This is because randomly chosen examples in a mini-batch may still not contain enough hard triplets, especially when the training dataset consists of a large number of distinct classes with few training instances each, which is common in the person ReID task.
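For concreteness, the Batch-Hard strategy can be sketched as follows (a minimal NumPy version using squared Euclidean distances; TriNet itself works with non-squared distances and also considers a soft-margin variant):

```python
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.3):
    """For each anchor, pick its hardest positive (farthest same-identity
    sample) and hardest negative (closest different-identity sample)."""
    n = len(labels)
    d = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(n):
        pos = d[i][same[i] & (np.arange(n) != i)]
        neg = d[i][~same[i]]
        if pos.size and neg.size:
            losses.append(max(0.0, pos.max() - neg.min() + margin))
    return float(np.mean(losses))
```

With well-separated identities in the batch the loss vanishes (all triplets easy), which is exactly the stagnation issue discussed above.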
Since hard triplets usually account for a minority while most of the easy triplets make little contribution to the optimization, a natural question is how to translate existing easy triplets into ``informative'' hard cases so as to boost the generalization performance.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig/illustrate.pdf}
\caption{The comparison of traditional deep learning with (top) triplet loss and (bottom) adversarial triplet loss. Given a triplet $(\mathbf{x}_{a},\mathbf{x}_{p},\mathbf{x}_{n})$ consisting of an anchor $\vx_a$, its positive sample $\vx_p$ and negative sample $\vx_n$, whose labels satisfy $y_a = y_p \neq y_n$, the triplet loss learns an embedding space where the data points of the same class are closer to each other than those with different classes. Our proposed ATE introduces a bounded perturbation $\bm{\delta}_{a}$ into the anchor point $\mathbf{x}_{a}$ to generate a much harder triplet $(\mathbf{x}_{a}+\bm{\delta}_{a},\mathbf{x}_{p},\mathbf{x}_{n})$, in which the distance between $\vx_a$ and $\vx_p$ is enlarged while the distance between $\vx_a$ and $\vx_n$ is reduced. This mechanism is performed via an adversarial process and is further converted into a minimax problem to learn a worst-case perturbation $\bm{\delta}_{a}^{adv}$ for the current loss. }
\label{fig:illustrate}
\vspace{-1.2em}
\end{figure}
To address the above problems, we propose a new metric learning objective called adversarial triplet embedding (ATE), which extends the triplet loss with an adversarial process. As shown in Fig.~\ref{fig:illustrate}, compared with traditional metric learning on the given triplets, our ATE learns to discriminate generated adversarial triplets. Here, the adversarial triplets are automatically generated by introducing adversarial perturbations into the training process. Different from traditional adversarial training \cite{goodfellowss14, miyato2016adversarial}, we apply the adversarial perturbation to the triplet loss, rather than directly to the input. The triplet loss model is pitted against an adversary: an adversarial mechanism that learns a worst-case perturbation for the current loss. Further, we convert this adversarial game into a minimax problem which admits an optimal solution from a theoretical perspective.
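To make the worst-case perturbation concrete in a toy setting: with squared Euclidean distances, the active triplet loss depends on the anchor perturbation $\bm{\delta}_a$ only through $2\bm{\delta}_a\cdot(\vx_n-\vx_p)$, so its maximizer over an $\ell_2$-ball of radius $\epsilon$ has the closed form $\bm{\delta}^*=\epsilon(\vx_n-\vx_p)/\|\vx_n-\vx_p\|$. The sketch below illustrates this mechanism only; it is not the exact formulation developed later in the paper:

```python
import numpy as np

def triplet_loss_sq(a, p, n, margin=0.3):
    """Hinge triplet loss with squared Euclidean distances."""
    return max(0.0, np.sum((a - p) ** 2) - np.sum((a - n) ** 2) + margin)

def worst_case_delta(p, n, eps):
    """Maximizer of the active loss over ||delta|| <= eps: the loss is
    linear in delta with gradient 2*(n - p), so push along n - p."""
    g = n - p
    return eps * g / np.linalg.norm(g)
```

The adversarial triplet $(\vx_a+\bm{\delta}^*,\vx_p,\vx_n)$ is by construction at least as hard as the clean one, which is the effect the adversary is after.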
Note that the proposed adversarial scheme is a good complement to the widely used hard example mining. On one hand, it provides a variety of adversarial triplets close to the margin through the added perturbations, which improves the robustness of the model. On the other hand, the adversarial process translates some easy triplets into hard ones which produce relatively large gradients for the optimization of the loss function. Consequently, when combined with current hard example mining techniques, the proposed adversarial framework achieves better generalization.
In experiments, we demonstrate the superiority of the proposed ATE over the triplet loss as well as other competing metric learning objectives on the person ReID task. ATE is evaluated on several real-world datasets, and extensive experiments on these benchmarks demonstrate the effectiveness of our approach against the state of the art.
\section{Related Work}
The main topics of person ReID are discriminative feature learning and effective similarity measurement. For feature learning, many robust person image descriptors have been developed to cope with misalignment and variations of color and texture. Commonly used hand-crafted descriptors include the HSV color histogram \cite{alpher01}, local binary patterns (LBP) \cite{alpher03}, and SIFT features \cite{alpher02}. Some efforts also exploit the properties of person images to boost the recognition rate, such as the symmetry of local features \cite{alpher01} and the horizontal occurrence of local descriptors \cite{alpher32}. For similarity measurement, the key idea lies in metric learning, which aims to learn an embedding from the feature space to a new space that maximizes the inter-class variations while minimizing the intra-class variations. Representative methods include local Fisher discriminant analysis \cite{alpher12}, Large Margin Nearest Neighbour \cite{alpher13}, KISS metric learning \cite{alpher14}, attribute consistent matching \cite{khamis2014joint}, and pair-wise constrained component analysis \cite{alpher11}.
Inspired by the success of deep learning, many efforts apply deep CNN models to person re-identification and have achieved remarkable improvements over approaches based on handcrafted features. In particular, end-to-end training \cite{alpher16, alpher21,wang2018resource} is exploited to learn discriminative representations and effective metrics simultaneously, so that the feature learning network is directly optimized for the final task. Some methods consider person ReID as a ranking problem \cite{chengl15}, which is usually solved by deep networks with a triplet loss. Ding et al. \cite{alpher21} develop a triplet generation scheme for person re-identification by randomly selecting a small number of persons from the dataset. Cheng et al. \cite{alpher33} introduce a multi-channel parts-based CNN framework which improves the triplet loss by imposing a margin constraint on the pair of matched images. Chen et al. \cite{alpher43} present a quadruplet loss for person re-identification, which extends the traditional triplet loss by pushing negative pairs away from positive pairs with different probe images.
In addition to the triplet loss and its variants, the person ReID problem can also be tackled from the classification perspective \cite{alpher16, alpher22}. For example, Ahmed et al. \cite{alpher16} propose to learn representations and a similarity metric simultaneously with an improved deep architecture, which takes pairs of images as input and outputs a similarity value indicating whether the two input samples depict the same person. Zhang et al. \cite{yaqing2016} seek to learn cross-image representations for input image pairs and adopt the cross-entropy loss to represent the probability that the two images depict the same person.
Further, the triplet loss has shown superiority on the person re-identification task over other surrogate losses, such as classification and verification losses \cite{hermans2017defense}. A key part of learning with the triplet loss is mining good hard triplets \cite{schroff2015facenet, shi2016embedding}. The Batch Hard strategy proposed by Hermans et al. \cite{hermans2017defense} has achieved state-of-the-art performance on several benchmark datasets. Its core idea is to form batches by randomly sampling several person identities and then randomly sampling several images from each identity.
Our model differs from the above deep networks with triplet loss. More specifically, we propose to train the triplet embedding with an adversarial process. A worst-case perturbation of the anchor point is learned to construct a much harder optimization problem, which encourages further reducing the intra-class variations and enlarging the inter-class variations. In particular, the learned perturbation helps form an adaptive margin for the triplet loss, which makes it possible to learn more robust representations for the person re-identification problem.
\section{The Proposed Method}
In this section, we present a new deep metric learning method called adversarial triplet embedding, where an adversarial perturbation is added to the anchor point to produce much harder triplets. We first review the probabilistic interpretation of triplet embedding, then investigate the statistical properties of triplet embedding with perturbations, and finally propose the adversarial triplet embedding by introducing adversarial perturbations into the training process.
\subsection{Probabilistic Triplet Embedding}
Suppose that we are provided with a set of training images of $K$ classes. We first extract the feature embedding of each sample by a CNN and obtain $\{(\vx, y), \cdots \}$, where $\vx \in \real^D$ denotes the feature embedding and $y \in \{1 ,\cdots ,K\}$ is the corresponding label.
At each training iteration, we sample a mini-batch of triplets, each of which $\cT=(\vx_a, \vx_p, \vx_n)$ consists of an anchor $\vx_a$, its positive sample $\vx_p$, and negative sample $\vx_n$, whose labels satisfy $y_a = y_p \neq y_n$. For simplicity, we defer the sampling strategy to Sec.~\ref{sec:detail}.
Triplet loss aims at learning a representation with the correct ranking order, \ie, the distance between the anchor and the positive should be less than the distance between the anchor and the negative:
\begin{equation}
\norm{\vx_a - \vx_p }_2^2 < \norm{\vx_a - \vx_n } _2^2. \label{eq:constraint}
\end{equation}
Further, the corresponding probability of a given triplet \cite{Maaten2012StochasticTE} satisfying the above constraint can be written as:
\begin{align}
p(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n}|\phi)= & \frac{\exp(-\norm{\vx_a - \vx_p }_2^2 )}{\exp(-\norm{\vx_a - \vx_p }_2^2)+\exp(-\norm{\vx_a - \vx_n }_2^2)}\notag \\
= & \frac{1}{1+\exp(\norm{\vx_a - \vx_p }_2^2-\norm{\vx_a - \vx_n }_2^2)}
\end{align}
In our case, $\mathbf{x}$ is the deep representation produced by a CNN. To learn the network parameters $\phi$ from a given set of triplets, we solve the following objective:
\begin{align}
\arg\min_{\phi}\sum_{(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})}-\text{log}(p(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n}|\phi))
\end{align}
The above objective can be interpreted as maximizing the likelihood of satisfying the constraint in Eq.(\ref{eq:constraint}).
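For intuition, the probabilistic triplet objective above reduces to a softplus of the distance gap. The following is a minimal NumPy sketch (an illustration, not the authors' released code):

```python
import numpy as np

def soft_triplet_nll(xa, xp, xn):
    """Negative log-likelihood of the triplet probability
    p = 1 / (1 + exp(||xa - xp||^2 - ||xa - xn||^2)),
    averaged over a batch of (anchor, positive, negative) rows."""
    d_ap = np.sum((xa - xp) ** 2, axis=1)  # squared anchor-positive distances
    d_an = np.sum((xa - xn) ** 2, axis=1)  # squared anchor-negative distances
    # -log p = log(1 + exp(d_ap - d_an)); logaddexp keeps this numerically stable
    return np.mean(np.logaddexp(0.0, d_ap - d_an))
```

The loss vanishes as the anchor-negative distance grows relative to the anchor-positive distance, matching the ranking constraint above.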
\subsection{Triplet Embedding with Perturbations}
Consider a triplet $(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})$, where $\mathbf{x}_{a}$ is corrupted with perturbations. Let $\tilde{\mathbf{x}}_{a}$ be the original uncorrupted anchor point. We assume that the triplet is generated as follows: first $(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})$ is generated according to a distribution $p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)$, where $\theta$ is an unknown parameter and would be estimated from the data set; given $(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})$, $\mathbf{x}_{a}$ is assumed to be generated from $\tilde{\mathbf{x}}_{a}$ according to a distribution $p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})$, where $\tilde{\theta}$ is another unknown parameter and $\sigma_{a}$ is an estimated parameter for the perturbations of $\mathbf{x}_{a}$. The joint probability of $(\mathbf{x}_{a}, \tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})$ can be formulated as follows:
\begin{align}
p(\mathbf{x}_{a}, \tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})=p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})
\end{align}
The joint probability distribution of $p(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})$ is computed by integrating out the unobserved quantity $\tilde{\mathbf{x}}_{a}$
\begin{align}
p(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n})=\int p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})d\tilde{\mathbf{x}}_{a}
\end{align}
This distribution can be viewed as a probabilistic mixture model where each mixture component corresponds to a possible true anchor point $\tilde{\mathbf{x}}_{a}$. Further, the related parameters $(\theta, \tilde{\theta})$ can be estimated from the data through maximum-likelihood estimation:
\begin{align}
\max_{\theta, \tilde{\theta}} \sum_{(a,p,n)}\text{ln}p(\mathbf{x}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n}|\theta, \tilde{\theta})
=
\max_{\theta, \tilde{\theta}}\sum_{(a,p,n)}\text{ln}\int p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})d\tilde{\mathbf{x}}_{a}
\label{int}
\end{align}
However, this approach is usually difficult to solve, as the integration over the unknown $\tilde{\mathbf{x}}_{a}$ has a very complicated form. Further, it does not extend easily to discriminative formulations such as maximum-margin methods. Consequently, we consider an alternative approximation which is more tractable and commonly employed in engineering applications. In particular, each $\tilde{\mathbf{x}}_{a}$ is regarded as a parameter of the probability distribution, and thus the maximum likelihood can be formulated as:
\begin{align}
\max_{\theta,\tilde{\theta}}\sum_{(a,p,n)}\text{ln}\sup_{\tilde{\mathbf{x}}_{a}}\left[p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})\right]
\label{discrete}
\end{align}
Since large values of $p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})$ dominate the value of $\int p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})d\tilde{\mathbf{x}}_{a}$, the objectives in Eq.(\ref{int}) and Eq.(\ref{discrete}) behave similarly: both prefer to make the product $p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})$ large for some $\tilde{\mathbf{x}}_{a}$. On one hand, if an observation $\mathbf{x}_{a}$ is corrupted with large perturbations, we select an appropriate $\tilde{\mathbf{x}}_{a}$ that predicts the probability well. On the other hand, if an observation $\mathbf{x}_{a}$ is corrupted with very small perturbations, then Eq.(\ref{int}) and Eq.(\ref{discrete}) optimize the model parameter $\theta$ so that $p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a})$ is large enough. This way of modeling relies on less corrupted data and ignores the more uncertain observations.
In conditionally probabilistic modeling, we assume that $p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta)=p(\tilde{\mathbf{x}}_{a})p(\mathbf{x}_{p}, \mathbf{x}_{n}|\theta,\tilde{\mathbf{x}}_{a})$. As an example, we consider modeling with Gaussian perturbations,
\begin{align}
p(\tilde{\mathbf{x}}_{a}, \mathbf{x}_{p}, \mathbf{x}_{n} |\theta) & \sim \frac{1}{1+\exp(\norm{\tilde{\vx}_a - \vx_p }_2^2-\norm{\tilde{\vx}_a - \vx_n }_2^2)}\notag \\
p(\mathbf{x}_{a}|\tilde{\theta}, \sigma_{a}, \tilde{\mathbf{x}}_{a}) & \sim \exp\left(-\frac{\norm{\mathbf{x}_{a}-\tilde{\mathbf{x}}_{a}}^2}{2\sigma_{a}^{2}}\right)
\label{example}
\end{align}
The framework in Eq.(\ref{discrete}) becomes
\begin{align}
\theta=\arg\min_{\theta}\sum_{(a,p,n)}\inf_{\tilde{\mathbf{x}}_{a}}\left[\text{ln}(1+L)+\frac{\norm{\mathbf{x}_{a}-\tilde{\mathbf{x}}_{a}}^2}{2\sigma_{a}^{2}}\right]
\end{align}
where $L=\exp(\norm{\tilde{\vx}_a - \vx_p }_2^2-\norm{\tilde{\vx}_a - \vx_n }_2^2)$.
\subsection{Adversarial Triplet Embedding}
Our formulation of adversarial triplet embedding (ATE) is motivated by adversarial training \cite{goodfellowss14, miyato2016adversarial}, which improves the robustness of a classifier with adversarial perturbations on the input. Adversarial triplet embedding applies the adversarial perturbation to the triplet loss, rather than directly to the input. It is straightforward to apply when $\mathbf{x}_{a}$ and $\mathbf{x}_{p}$ are learned with deep neural networks.
We assume that anchor points are subject to an additive perturbation, i.e., $\mathbf{x}_{a}=\mathbf{x}_{a}+\bm{\delta}_{a}$, where the perturbation $\bm{\delta}_{a}$ follows a certain distribution. In real applications, bounded perturbations are usually considered in adversarial training \cite{goodfellowss14, miyato2016adversarial}, and the resulting methods exhibit robustness to corrupted inputs. Thus, instead of adopting Gaussian noise as in Eq.(\ref{example}), we consider a simple bounded perturbation model $\norm{\bm{\delta}_{a}}\leq\epsilon_{a}$, which has a similar effect to the Gaussian noise model.
To maximize the effect of this bounded perturbation, we propose to train a triplet embedding function via an adversarial process, in which we simultaneously learn two components: a discriminative embedding space where the distance of samples with different classes is larger than that of samples with the same class, and an adversarial perturbation which increases the difficulty of the learning task as much as possible. This framework behaves like a minimax two-player game with the following loss function:
\begin{align}
\min_{\theta}\sum_{(a,p,n)}\max_{\norm{\bm{\delta}_{a}}\leq \epsilon} & \left[\text{ln}(1+\exp(\norm{{\vx}_a +\bm{\delta}_{a}- \vx_p }_2^2\right.\notag \\
& \left.-\norm{{\vx}_a +\bm{\delta}_{a} - \vx_n }_2^2))\right]
\label{ATEloss}
\end{align}
where the adversarial perturbation is introduced into the anchor point so that we construct a more difficult problem, from which we can learn a better similarity metric.
\subsection{Optimization and Extensions}
For each training step, we learn the worst-case perturbation $\bm{\delta}_{a}^{adv}$ against the current loss in Eq.(\ref{ATEloss}), and train the network to be robust to such perturbations by minimizing the following problem with respect to the network parameters $\theta$,
\begin{align}
\cL_{\textrm{tri}} (\vx_a+\bm{\delta}_{a}^{adv},\vx_p, \vx_n;\theta) & =\left[\text{ln}(1+\exp(\norm{{\vx}_a +\bm{\delta}_{a}^{adv}- \vx_p }_2^2\right.\notag \\
& \left.-\norm{{\vx}_a +\bm{\delta}_{a}^{adv} - \vx_n }_2^2))\right]
\label{adv_loss1}
\end{align}
where the adversarial perturbation $\bm{\delta}_{a}^{adv}$ is computed as follows:
\begin{align}
\bm{\delta}_{a}^{adv} & =\arg \max\limits_{\bm{\delta}_{a},\norm{\bm{\delta}_{a}}\leq\epsilon_a}\cL_{\textrm{tri}}(\vx_a+\bm{\delta}_{a},\vx_p,\vx_n;\hat{\theta})\notag \\
& =\arg \max\limits_{\bm{\delta}_{a},\norm{\bm{\delta}_{a}}\leq\epsilon_a}\left[\text{ln}(1+\exp(\norm{{\vx}_a - \vx_p }_2^2\right. \notag \\
& \left.-\norm{{\vx}_a - \vx_n }_2^2+2\bm{\delta}_{a}^{\top}(\mathbf{x}_{n}-\mathbf{x}_{p})))\right]
\label{adv_loss2}
\end{align}
where $\hat{\theta}$ denotes a constant copy of the current network parameters. This setup indicates that gradients are not backpropagated through the adversarial loss construction process. By the Cauchy-Schwarz inequality, we have
\begin{align}
\left|\bm{\delta}_{a}^{\top}(\mathbf{x}_{n}-\mathbf{x}_{p})\right|\leq ||\bm{\delta}_{a}|| \cdot ||\mathbf{x}_{n}-\mathbf{x}_{p}||
\end{align}
where the equality holds if and only if $\bm{\delta}_{a}=k(\mathbf{x}_{n}-\mathbf{x}_{p})$ for some scalar $k$. Given the bound constraint $\norm{\bm{\delta}_{a}}\leq\epsilon_a$, the worst-case perturbation is obtained as follows:
\begin{align}
\bm{\delta}_{a}^{adv}=\epsilon_{a}\frac{\mathbf{x}_{n}-\mathbf{x}_{p}}{||\mathbf{x}_{n}-\mathbf{x}_{p}||}
\end{align}
Consequently, the problem in Eq.(\ref{adv_loss1}) can be formulated as:
\begin{align}
\cL_{\textrm{tri}} (\vx_a+\bm{\delta}_{a}^{adv},\vx_p, \vx_n;\theta) & =\left[\text{ln}(1+\exp(\norm{{\vx}_a - \vx_p }_2^2\right.\notag \\
& \left.-\norm{{\vx}_a - \vx_n }_2^2+s||\mathbf{x}_{n}-\mathbf{x}_{p}||_{2}^{2}))\right]
\label{adv_loss3}
\end{align}
where $s=2\epsilon_{a}/||\mathbf{x}_{n}-\mathbf{x}_{p}||$.
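Because the worst-case perturbation has a closed form, the ATE loss costs little more than the plain triplet loss. A minimal NumPy sketch under the same notation (the default bound `eps` is an illustrative choice, not the authors' value):

```python
import numpy as np

def ate_loss(xa, xp, xn, eps=1e-2):
    """ATE loss with the closed-form worst-case anchor perturbation
    delta_a = eps * (xn - xp) / ||xn - xp||.  The bound `eps`
    (epsilon_a) and its default value here are illustrative."""
    diff = xn - xp
    norm = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12  # avoid /0
    delta = eps * diff / norm             # worst-case perturbation per triplet
    d_ap = np.sum((xa + delta - xp) ** 2, axis=1)
    d_an = np.sum((xa + delta - xn) ** 2, axis=1)
    return np.mean(np.logaddexp(0.0, d_ap - d_an))  # softplus form
```

With `eps=0` this reduces to the soft-margin triplet loss; any positive `eps` strictly increases the loss of each triplet, realizing the adaptive margin discussed below.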
\noindent\textbf{Relationship to triplet loss.}
As in \cite{alpher33, alpher43}, the traditional triplet embedding seeks to minimize the following hinge loss:
\begin{equation}
\cL_{\textrm{tri}} (\cT) = \left[\norm{\vx_a - \vx_p}_2^2
- \norm{\vx_a-\vx_n}_2^2 +m \right]_+
\label{eq:loss}
\end{equation}
where the operator $[\cdot ]_+ = \max (0, \cdot ) $ denotes the hinge function, which is used to avoid correcting ``already correct'' triplets. However, in person ReID it is necessary to pull samples of the same identity together as much as possible so as to reduce the intra-class variations. Based on this consideration, TriNet \cite{hermans2017defense} replaces the hinge function with a smooth version using the softplus function:
\begin{align}
\cL_{\textrm{tri}} (\cT)=\left[\text{ln}(1+\exp(\norm{{\vx}_a - \vx_p }_2^2-\norm{{\vx}_a - \vx_n }_2^2))\right]
\end{align}
As shown in Eq.(\ref{adv_loss3}), this soft-margin version is part of our adversarial triplet loss, but without the third term inside $\exp(\cdot)$. The third term acts as an adaptive margin: it further reduces the intra-class variations and enlarges the inter-class variations, thereby improving the generalization performance.
\subsection{Implementation Details} \label{sec:detail}
For batch construction during training we leverage the idea of PK batches introduced by \cite{hermans2017defense}.
This approach has shown very good performance in similarity-based ranking and avoids the need to generate a combinatorial number of triplets.
Each batch contains $K$ sample images for each of $P$ identities.
During one training epoch, each identity anchors its own batch in turn; the remaining $P-1$ batch identities are sampled at random, and the $K$ samples for each identity are likewise selected at random.
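The PK construction above can be sketched as follows (a simplified illustration; the default `P`, `K`, the seed, and the with-replacement fallback for identities with few images are assumptions, not the authors' exact implementation):

```python
import random
from collections import defaultdict

def pk_batches(labels, P=4, K=4, seed=0):
    """Sketch of PK batch construction: each batch holds K image indices
    for each of P identities; each identity anchors one batch per epoch,
    and the other P-1 identities plus all images are drawn at random."""
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for idx, y in enumerate(labels):
        by_id[y].append(idx)
    ids = list(by_id)
    for anchor_id in ids:
        others = rng.sample([i for i in ids if i != anchor_id], P - 1)
        batch = []
        for pid in [anchor_id] + others:
            pool = by_id[pid]
            # draw K images; fall back to replacement for small identities
            picks = rng.sample(pool, K) if len(pool) >= K else rng.choices(pool, k=K)
            batch.extend(picks)
        yield batch
```

Every batch then contains $P \times K$ images, which guarantees valid positive and negative candidates for every anchor.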
For the sampling strategy within a batch, we slightly modify the hardest-sample strategy to stabilize the training process, inspired by \cite{ristani2018features}.
Originally, the batch-hard sampling strategy considers only the hardest positive and negative samples within a batch.
Compared to the triplet loss without any sampling strategy, the batch-hard strategy converges faster and to a better solution, since it prevents the gradients of easy samples from washing out the gradients of informative samples.
However, this method is sensitive to outliers and requires careful tuning of hyperparameters.
To overcome this weakness, we apply a softmax to the negated distances, obtain the importance of each sample in a probabilistic manner, and sample the positives and negatives stochastically.
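The stochastic mining step can be sketched as follows for the negative branch (an illustrative reading of the softmax-over-negated-distance idea; the unit softmax temperature is an assumption):

```python
import numpy as np

def sample_negative(dists, rng=None):
    """Stochastic variant of batch-hard negative mining: instead of always
    taking the closest (hardest) negative, draw one with probability
    softmax(-distance), so hard negatives are likely but a single outlier
    cannot dominate the selection."""
    if rng is None:
        rng = np.random.default_rng()
    logits = -np.asarray(dists, dtype=float)
    logits -= logits.max()                     # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(probs), p=probs))
```

The positive branch is symmetric, with the sign of the distances flipped so that far-away positives are the likely picks.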
\section{Experiments}
In this section, we evaluate the ATE loss on the person ReID task. Our method achieves state-of-the-art performance on four public benchmark datasets.
\subsection{Dataset and Protocol}
We conduct experiments on four public benchmark datasets: CUHK03 \cite{li2014deepreid}, Market1501 \cite{zheng2015scalable}, VIPeR~\cite{Alpher29}, and PRID450s~\cite{Alpher38}.
The statistics of the datasets are shown in Tab.~\ref{tab:stat}.
\begin{table}
\centering
\caption{The statistics of the four benchmark datasets}
\label{tab:stat}
\begin{tabular}[]{lcccc}
\toprule
Dataset & Market1501 & CUHK03 & VIPeR & PRID450s \\
\midrule
Identities & 1501 & 1360 & 632 & 450 \\
BBoxes & 32,668 & 13,164 & 1264 & 900 \\
Cameras & 6 & 6 & 2 & 2 \\
Label method & DPM & Hand/DPM & Hand & Hand\\
Train \# imgs & 12,936 & 7368/7365 & 632 & 450\\
Train \# ids & 751 & 767 & 316 & 225 \\
Test \# imgs & 19,732 & 1,400 & 632 & 450 \\
Test \# ids & 750 & 700 & 316 & 225 \\
\bottomrule
\end{tabular}
\end{table}
{\bf CUHK03} dataset contains 13,164 images of 1,360 identities.
It provides bounding boxes detected from deformable part models (DPMs) and manual labeling.
The traditional protocol splits the dataset into a training set of 1,160 identities and a testing set of 100 identities.
A new training/testing protocol, proposed by \cite{zhong2017reranking}, is similar to that of Market-1501: it splits the dataset into a training set of 767 identities and a testing set of 700 identities.
In testing, the new protocol randomly selects one image from each camera as a query for each identity and uses the remaining images to construct the gallery set.
The new protocol has two advantages: 1) for each identity, there are multiple ground truths in the gallery, which is more consistent with practical application scenarios; 2) evenly dividing the dataset into training and testing sets at once avoids the time-consuming repetition of training and testing multiple times.
{\bf Market1501} dataset contains 32,668 images of 1,501 labeled
persons captured by six camera views. There are 751 identities in the training set and 750 identities in the testing set. In the original study on this dataset, the authors also use mAP as an evaluation criterion for testing algorithms.
{\bf VIPeR} dataset contains 632 person images captured by two cameras in an outdoor
environment, and each person has only one image in each
camera view.
{\bf PRID450s} dataset, similarly, consists of 450 identities captured by two disjoint cameras.
The widely adopted experimental protocol on these two datasets is similar to the old CUHK03 protocol:
a random half of the persons is used for training and the rest for testing, the procedure is repeated 10 times, and the average performance is reported.
This procedure is time-consuming but acceptable on small datasets, so we strictly follow this widely adopted protocol.
For evaluation, we use several standard metrics: rank-n Cumulative Matching Characteristic (CMC) accuracy,
which estimates the chance of finding the correct match within the top-n results,
and mean average precision (mAP), which measures the overall quality of the predicted rank list. We use the officially provided evaluation code for all results. The only modifications are to use mean pooling rather than max pooling on the embeddings of a tracklet on MARS, and to ignore the 6 queries with no correct results in the gallery.
\subsection{Experiments Settings}
We adopt the same settings as \cite{hermans2017defense} and \cite{wang2018resource}. We define an epoch as one pass over the whole dataset and train the network for 65 epochs. In this way, we do not need to tune the number of iterations for different datasets.
At the training stage, each input image is randomly cropped by a bounding box with a random aspect ratio in $[1.5,3]$ and a random area ratio in $[0.85,1]$, then randomly flipped horizontally and resized to $256 \times 128$.
At the test stage, we resize the image to $256 \times 128$ and average the embedding of the original image and of its horizontally flipped counterpart to obtain the final embedding.
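This flip averaging can be sketched as follows (`embed_fn`, mapping an HxWxC array to a feature vector, is a placeholder for the trained backbone, not a function from the paper):

```python
import numpy as np

def flip_averaged_embedding(embed_fn, img):
    """Test-time augmentation sketch: average the embeddings of an image
    and of its horizontal flip."""
    flipped = img[:, ::-1, :]                  # flip along the width axis
    return 0.5 * (embed_fn(img) + embed_fn(flipped))
```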
We take ResNet-50 \cite{he2016deep} pre-trained on ImageNet \cite{deng2009imagenet} as the backbone network to extract features.
We train the model with the Adam optimizer and a batch size of 128, which contains 64 different people with 4 images each. The learning rate $\alpha$ is adjusted similarly to \cite{hermans2017defense}, starting from $\alpha_0 = 3 \times 10^{-4}$:
\begin{equation}
\alpha (t) =
\begin{cases}
\alpha_0 & \text{ if } t \le t_0, \\
\alpha_0 \times 0.001^{\frac{t-t_0}{t_1-t_0}} & \text { if } t_0 \le t \le t_1,
\end{cases}
\end{equation}
where we set $t_0=35$ and $t_1=65$. The $\beta_1$ of Adam is reduced from 0.9 to 0.5 after $t_0$ as well. All hyperparameters were taken from the optimized model of \cite{hermans2017defense}; we could potentially improve the performance further through proper hyperparameter tuning.
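The schedule in the equation above can be transcribed directly (epoch indexing conventions may differ slightly from the training loop):

```python
def lr_schedule(t, alpha0=3e-4, t0=35, t1=65):
    """Learning-rate schedule: constant alpha0 up to epoch t0, then
    exponential decay reaching a total factor of 0.001 at epoch t1."""
    if t <= t0:
        return alpha0
    return alpha0 * 0.001 ** ((t - t0) / (t1 - t0))
```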
\subsubsection{Baselines}
To demonstrate how ATE improves the generalization performance in comparison with the state-of-the-art person ReID methods, we compare it with the recent literature, including
PAN~\cite{zheng2018pedestrian}, SVDNet~\cite{sun2017svdnetfp},
LOMO + XQDA~\cite{alpher32}, LOMO + Null Space~\cite{zhang2016learning}, DTL~\cite{geng2016deep},
ResNet50 (I+V)~\cite{zheng2017discriminatively}, Gated siamese CNN~\cite{varior2016gated},
CNN + DCGAN~\cite{zheng2017unlabeled}, BraidNet-CS + SRL~\cite{wang2018person},
JLML~\cite{li2017person},
KISSME~\cite{alpher14},
LSSCDL~\cite{Alpher31},
ImprovedDL~\cite{alpher16},
and TCP~\cite{alpher33}. In addition, we also compare it with the following
the triplet loss and its variants (with the same batch-hard mining setup):
\begin{itemize}
\item TriNet, a representative method for person ReID which exploits the triplet loss and proposes a batch hard mining scheme \cite{hermans2017defense}.
\item Improved Triplet loss (ImpTrpLoss), which improves the triplet loss function by imposing a margin constraint on the pair of the same class \cite{alpher33}.
\item QuadrupletNet, a model which extends the traditional triplet loss by pushing negative pairs away from positive pairs with different probe images \cite{alpher43}.
\item TriNet+adversarial training (TriNetAdv), a model which introduces adversarial examples regularization to the triplet loss by making small perturbations to the input \cite{goodfellowss14}.
\end{itemize}
\subsection{Performance Comparison}
\textbf{Comparisons on CUHK03 dataset}.
We conduct experiments on both the labelled and detected CUHK03 datasets. From Table \ref{table:cuhk03}, we see that our proposed approach achieves better results than the competing methods. On the labelled dataset, our method outperforms the next best method by 3.03\% (56.06\% vs. 53.03\%) in mAP. On the detected dataset, the performance decreases slightly due to the misalignment and incompleteness caused by the detector. However, the proposed method still achieves an improvement of 3.41\% over the next best method (55.23\% vs. 51.82\%).
\textbf{Comparisons on Market1501 dataset}.
From Table \ref{tab:mkt}, we see that our proposed method achieves the best performance of 86.49\% (rank-1) and 71.80\% (mAP) (vs. 85.12\% and 70.62\% respectively by the next best method).
\textbf{Comparisons on VIPeR and PRID450s dataset}.
Following \cite{alpher16}, we pre-train the network on the CUHK03 dataset and fine-tune it on the training sets of VIPeR and PRID450s. As shown in Table \ref{tab:viper}, the proposed ATE outperforms the competing methods in all cases except the rank-10 recognition rate on the PRID450s dataset.
\begin{table}
\centering
\caption{Comparison of the state-of-the-art results on labelled and detected CUHK03 dataset (New protocol). The CMC scores (\%) at rank 1, 5, 10 and mAP are listed.} \label{table:cuhk03}
\vspace{1ex}
\scalebox{.85}{
\begin{tabular}{c|cc|cc}
\hline
\multirow{2}{*}{Methods} &
\multicolumn{2}{c|}{ labelled CUHK03 } &
\multicolumn{2}{c}{ detected CUHK03 } \cr
\cline{2-3} \cline{4-5}
& rank-1 & mAP & rank-1 & mAP \cr
\hline
PAN \cite{zheng2018pedestrian} & 36.9 & 35.0 & 36.3 & 34.0 \\
SVDNet \cite{sun2017svdnetfp} & 40.9 & 37.8 & 41.5 & 37.2 \\
IDE + XQDA \cite{zhong2017reranking} & 32.0 & 29.6 & 31.1 & 28.2 \\
\hline \hline
TriNet \cite{hermans2017defense} & 56.72 & 51.32 & 54.25 & 50.71 \\
QuadrupletNet \cite{chen2017beyond} & 58.35 & 53.23 & 55.96 & 52.35 \\
TriNetAdv \cite{goodfellow2014explainingah} & 54.66 & 49.71 & 51.19 & 48.02 \\
ImpTrpLoss \cite{alpher33} & 57.98 & 53.03 & 55.23 & 51.82 \\
ATE & \bf 60.96 & \bf 56.06 & \bf 59.29 & \bf 55.23 \\
\hline
\end{tabular}
}
\end{table}
\begin{table}
\centering
\caption{Comparison of the state-of-the-art results on Market1501 dataset.}
\label{tab:mkt}
\vspace{1ex}
\scalebox{0.95}{
\begin{tabular} {c|cc}
\hline
Methods & rank-1 & mAP \\ \hline
LOMO + Null Space \cite{zhang2016learning} & 55.43 & 29.87 \\
DTL \cite{geng2016deep} & 83.7 & 65.6 \\
ResNet50 (I+V) \cite{zheng2017discriminatively} & 79.51 & 59.87 \\
Gated siamese CNN \cite{varior2016gated} & 65.88 & 39.55 \\
CNN + DCGAN \cite{zheng2017unlabeled} & 78.06 & 56.23 \\
BraidNet-CS + SRL \cite{wang2018person} & 83.70 & 69.48 \\
SVDNet \cite{sun2017svdnet} & 82.3 & 62.1 \\
IDE+XQDA \cite{zhong2017reranking} & 77.58 & 56.06 \\
JLML \cite{li2017person} & 85.1 & 65.5 \\ \hline \hline
TriNet \cite{hermans2017defense} & 84.07 & 68.39 \\
QuadrupletNet \cite{chen2017beyond} & 85.12 & 70.62 \\
TriNetAdv \cite{goodfellow2014explainingah} & 84.03 & 69.45 \\
ImpTrpLoss \cite{alpher33} & 84.74 & 69.72 \\
ATE & \bf 86.49 & \bf 71.80 \\
\hline
\end{tabular}
}
\end{table}
\begin{table}
\centering
\caption{Comparison of state-of-the-art results on VIPeR dataset. The cumulative matching scores (\%) at rank 1, 5, and 10 are listed.}
\label{tab:viper}
\vspace{1ex}
\scalebox{0.75}{
\begin{tabular}{c|ccc|ccc}
\hline
\multirow{2}*{Methods} &
\multicolumn{3}{c|}{VIPeR} &
\multicolumn{3}{c}{PRID450s} \\
\cline{2-7}
& r=1 & r=5 & r=10 & r=1 & r=5 & r=10 \\ \hline
KISSME \cite{alpher14} & 19.6 & 48.0 & 62.2 & 15.0 & - & 39.0 \\
LSSCDL \cite{Alpher31} & 42.66 & - & 84.27 & 60.49 & - & 88.58 \\
LOMO+XQDA \cite{zhong2017reranking} & 40.00 & 68.13 & 80.51 & 61.42 & - & 90.84 \\
ImprovedDL \cite{alpher16} & 34.81 & 63.61 & 75.63 & 34.81 & 63.72 & 76.24 \\
TCP \cite{alpher33} & 47.80 & 74.70 & 84.80 & - & - & - \\
SSM \cite{bai2017} & 53.73 & - & 91.49 & 72.98 & - & \bf 96.76 \\
FFN \cite{wu2016} & 51.06 & 81.01 & 91.39 & 66.62 & 86.84 & 92.84 \\
Mirror-KMFA \cite{chen2015mirror} & 42.97 & 75.82 & 87.28 & 55.42 & 79.29 & 87.82 \\
\hline
TriNet \cite{hermans2017defense} & 55.26 & 82.24 & 91.62 & 73.76 & 91.67 & 95.13 \\
QuadrupletNet \cite{chen2017beyond} & 56.24 & 82.81 & 91.69 & 73.82 & 91.95 & \bf 95.23 \\
TriNetAdv \cite{goodfellow2014explainingah} & 54.53 & 81.62 & 91.22 & 72.14 & 90.81 & 94.12 \\
ImpTrpLoss \cite{alpher33} & 55.39 & 82.25 & 91.59 & 73.74 & 91.53 & 95.06 \\
ATE & \bf 56.45 & \bf 83.11 & \bf 91.78 & \bf 73.91 & \bf 92.03 & \bf 95.24 \\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=.99\linewidth]{fig/hyperparam.pdf}
\caption{The rank-1 score for the testing set as a function of perturbation bound parameter $\epsilon_{a}$ for ATE on the labelled CUHK03, Market 1501, VIPeR, and PRID450s datasets, respectively.} \label{fig:hyperparam}
\end{figure*}
{\bf Triplet loss}: as shown in Tables \ref{table:cuhk03}, \ref{tab:mkt}, and \ref{tab:viper}, TriNet achieves competitive performance on these datasets, which emphasizes the importance of hard example mining for deep learning with the triplet loss. Compared with other traditional ReID approaches, the triplet loss shows a significant benefit by directly performing end-to-end learning between the input images and the desired ranking relationship.
{\bf QuadrupletNet vs. TriNet}: as shown in Tables \ref{table:cuhk03}, \ref{tab:mkt}, and \ref{tab:viper}, QuadrupletNet performs better than TriNet with the same batch-hard mining setup. This is because it extends the traditional triplet loss by pushing negative pairs away from positive pairs with different probe images, which helps with the ordering across different probe images and further enlarges the inter-class variations.
{\bf ImpTrpLoss vs. TriNet}: as shown in Tables \ref{table:cuhk03}, \ref{tab:mkt}, and \ref{tab:viper}, ImpTrpLoss performs slightly better than TriNet in most cases. This is because it improves the triplet loss function by imposing a margin constraint on the pair of matched images, which reduces the intra-class variations and improves the performance on the test data.
{\bf TriNetAdv vs. TriNet}: as shown in Tables \ref{table:cuhk03}, \ref{tab:mkt}, and \ref{tab:viper}, TriNetAdv does not show superiority over TriNet despite introducing adversarial training regularization. Intuitively, adversarial perturbations in pixel space are not semantically meaningful and are not consistent with the distribution of the test set; thus adversarial training contributes little to performance and sometimes even degrades test accuracy.
{\bf ATE vs. TriNet}: from Table \ref{table:cuhk03}, \ref{tab:mkt}, and \ref{tab:viper}, we see that ATE performs significantly better than TriNet.
On the one hand, this illustrates the importance of jointly learning adversarial triplets and a discriminative feature embedding in an adversarial manner.
The adversarial triplets generated in this manner are a good complement to the widely-used hard example mining.
On the other hand, the adaptive margin provided by the proposed model further enlarges the inter-class variations while reducing the intra-class variations, which improves the generalization performance.
In particular, our proposed method outperforms TriNet on rank-1 score by 4.24\% (60.96\% vs. 56.72\%) on the labelled CUHK03 dataset and by 5.04\% (59.29\% vs. 54.25\%) on the detected CUHK03 dataset.
Similarly, on the Market1501 dataset our model achieves a rank-1 improvement over TriNet of 2.42\% (86.49\% vs. 84.07\%).
\begin{figure*}
\centering
\includegraphics[width=0.98\linewidth]{fig/visual-2.pdf}
\caption{Visualization of the recognition results for hard cases of the Market1501 dataset by TriNet and our proposed model, respectively. }
\label{fig:visual}
\end{figure*}
\subsection{Sensitivity Study}
We conduct a sensitivity study on how the bound parameter $\epsilon_{a}$ affects the recognition performance of the proposed model in terms of rank-1; Figure \ref{fig:hyperparam} shows the results. This bound hyperparameter $\epsilon_a$ controls the strength of the adversarial perturbation. If too small an $\epsilon_a$ is selected, the ATE loss makes no difference from the triplet loss. If too large an $\epsilon_a$ is selected, optimization becomes unstable or stagnates at a local minimum, yielding degraded performance. In practice, however, we observe that ATE is not very sensitive to the selection of $\epsilon_a$ and produces fairly good improvements over a wide range of values. To illustrate this point, we evaluate its performance at $\epsilon_a \in \{10^{-4}, 5\times 10^{-3}, 10^{-3}, 5 \times 10^{-2}, 10^{-2}, 5\times 10^{-1}, 10^{-1}\}$. In Fig.~\ref{fig:hyperparam}, the red dashed line is the TriNet baseline, and the blue line represents the rank-1 of the ATE model. We conclude that $\epsilon_a=10^{-2}$ works best for large datasets like CUHK03 and Market1501, $\epsilon_a=10^{-3}$ works best for small datasets like VIPeR and PRID450s, and there is no great difference in the rank-1 metric between selecting $10^{-3}$ and $10^{-2}$. We conjecture that $\epsilon_a$ may depend slightly on the distribution and characteristics of the datasets, \eg, a large dataset allows us to adopt a large adversarial perturbation to explore the optimal margin for learning discriminative features.
\subsection{Visualization of Hard Cases}
From the TriNet model to the ATE model on the Market1501 dataset, the mAP increases from 68.39\% to 71.80\%, a margin of 3.41\%. For 4.3\% of the queries, AP improves by more than 75\%. These queries are poorly predicted by the model supervised with the original triplet loss and greatly corrected by the ATE loss, which shows the superiority of our model. Further, we rank the queries by their improvement in AP and choose several successfully corrected cases for visualization. As shown in Fig.~\ref{fig:visual}, the proposed ATE model is robust to the large cross-view appearance variations caused by mutual occlusions, background clutter, distractors, detector-induced misalignment, and doppelganger identities. We attribute this to explicitly modeling the adversarial perturbation in the feature space during training. For example, in row one, the TriNet model is misled by an identity similar to query 934. The ATE model, although still challenged by an even more similar doppelganger identity, ranks the gallery images of the correct identity higher in the rank list. In row two, although the query image is cluttered by background and of low quality,
the ATE model successfully retrieves both high-quality and low-quality images. In row three, the TriNet model is confused by a distractor produced by the detector, i.e., an error from the preceding detection stage. The ATE model, however, is resistant to this error and successfully retrieves the correct images.
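The per-query AP underlying the mAP figures above can be sketched as follows. This is a minimal illustration of the standard retrieval metric, not the paper's evaluation script: the gallery ranking is given directly as a list of 0/1 relevance flags ordered by predicted similarity.

```python
def average_precision(ranked_relevance):
    """AP for one query.

    ranked_relevance: list of 0/1 flags for the gallery, ordered by the
    model's predicted similarity (best match first). AP averages the
    precision at each position where a correct match is found.
    """
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0
```

The mAP is then the mean of this quantity over all queries, and rank-1 is simply the fraction of queries whose top-ranked gallery image is relevant.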
\section{Conclusion}
In this paper, we develop a new deep metric learning method called Adversarial Triplet Embedding (ATE) for person ReID, in which we simultaneously generate adversarial triplets and learn a discriminative feature embedding in a unified framework. In particular, adversarial triplets are generated by introducing adversarial perturbations into the training process. This adversarial process is a good complement to the widely-used hard example mining. In addition, we cast this adversarial game as a minimax problem so that an optimal solution can be obtained with theoretical support. Extensive experiments on several benchmark datasets demonstrate the effectiveness of the approach compared with the state of the art.
\bibliographystyle{unsrt}
\section{Introduction}
The problem of Direction of Arrival (DoA) estimation is of central
importance in the field of array processing with many applications
in radar, sonar, and wireless communications \cite{van2002optimum,Haykin1992,Bjorn1996}. Estimating DoAs using Uniform Linear Arrays (ULAs) is well-investigated in the literature; a number of algorithms such as the Maximum Likelihood (ML) estimator, MUSIC, ESPRIT and subspace
fitting were presented and their performance thoroughly analyzed \cite{paulraj1993,Li1993,stoica1989,Stoica19901,Viberg1991,Jansson1999}. However, it is widely
known that ULAs are not capable of identifying more sources than the number of physical elements in the array \cite{Haykin1992,Stoica19901}.
To transcend this limitation, exploitation of Sparse Linear Arrays (SLAs) with particular geometries, such as Minimum Redundancy Arrays (MRAs) \cite{Moffet1968}, co-prime
arrays \cite{Vaidyanathan2011} and nested arrays \cite{Pal2010} has been proposed. These architectures can dramatically boost the degrees of freedom of the array
for uncorrelated source signals such that a significantly larger number of
sources than the number of physical elements in the array can be identified. In addition, the enhanced degrees of freedom provided by these SLAs can improve the resolution performance appreciably compared to ULAs \cite{Pal2010}.
These features have spurred further research on DoA estimation using SLAs in recent years.
A detailed study on DoA estimation via SLAs through an analysis of the Cram\'{e}r-Rao Bound (CRB) was conducted in \cite{Liu2017}.
Further, a number of approaches to estimating DoAs from SLA measurements were proposed in the literature. In general, the proposed approaches can be classified under two main groups: \begin{enumerate*} \item Sparsity-Based Methods (SBMs); \item Augmented Covariance-Based Methods (ACBMs)\end{enumerate*}. SBMs estimate DoAs by imposing sparsity constraints on source profiles and exploiting the compressive sensing recovery techniques \cite{Zhang2013,Shen2016,Pal2015,Pal2012con,Chi2010,Tan2014,Yang2014}.
However, in ACBMs, DoAs are estimated by applying conventional subspace methods such as MUSIC and ESPRIT on an Augmented Sample Covariance Matrix (ASCM) developed from the original sample covariance matrix by exploiting the difference co-array structure~\cite{Pal2010,Wang2017,Sedighi2018SAM}. In addition, the authors of this paper recently proposed a Weighted Least Squares (WLS) estimator capable of asymptotically achieving the corresponding CRB for DoA estimation from SLA data \cite{SedighiTSP2019,Sedighi2018Asilomar}.
The aforementioned techniques for DoA estimation from SLA data rest on the assumption that the analog array measurements are digitally represented by a significantly large number of bits per sample such that the resulting quantization errors can be disregarded. However, the production costs and energy consumption of Analog-to-Digital Converters (ADCs) escalate dramatically as the number of quantization bits and the sampling rate increase \cite{Walden1999}. In consequence, deployment of high-resolution ADCs in many modern applications, e.g. cognitive radio \cite{Sun2013}, cognitive radars \cite{Lunden2015}, automotive radars \cite{Hasch2012}, radio astronomy \cite{burke2019introduction} and massive multiple-input multiple-output (MIMO) systems \cite{Lu2014}, is not economically viable owing to the very high bandwidths required.
In order to reduce energy consumption and production cost in such applications, researchers and system designers have recently proposed using low-resolution ADCs. As an extreme case of low-resolution ADCs,
one-bit ADCs, which convert an analog signal into digital data using a single bit per sample, have received significant attention in the literature. One-bit ADCs offer an extremely high sampling rate at a low cost and very low energy consumption \cite{Walden1999}. Additionally, they enjoy the benefits of relatively easy implementation due to their simple architecture \cite{pelgrom2013analog}. In the past few years, numerous studies have been conducted to investigate the impact of one-bit sampling
on various applications such as massive MIMO systems \cite{Gokceoglu2017,Saxena2017,Rao2019,Pirzadeh2020,Wan2020},
dictionary learning \cite{zayyani2015dictionary},
radar \cite{zhao2018deceptive,ameri2019,Zahabi2020,Xi2020,sedighi2020localization}, and array processing \cite{BarShalom2002,Stein2016}.
\subsection{Relevant Works}
The problem of DoA estimation from one-bit quantized data has been studied in the literature presuming both the deterministic signal model \cite{stoica1989} and the stochastic signal model \cite{Stoica19901}. The studies in \cite{Huang2020,Yoffe2019,Stockle2015,Huang2018,Meng2018} presuppose the deterministic signal model. The authors in \cite{Huang2020} developed an algorithm for reconstructing the unquantized array measurements from one-bit samples, followed by MUSIC to determine DoAs. ML estimation was deployed in \cite{Yoffe2019} for finding DoAs from one-bit data. In \cite{Meng2018}, the authors utilized a sparse Bayesian learning algorithm to solve the DoA estimation problem from one-bit samples. Two sparsity-based approaches were also proposed in \cite{Stockle2015,Huang2018}. Further, DoA estimation from one-bit data assuming the stochastic signal model has been discussed in \cite{BarShalom2002,Stein2016,Chen2018,Huang2019}.
In the special case of a two-sensor array, the exact CRB expression for the DoA estimation problem from one-bit quantized data was derived in \cite{BarShalom2002}. Moreover, an approach for estimating DoAs from one-bit ULA samples was proposed in \cite{BarShalom2002} which is based on reconstruction of the covariance matrix of unquantized data using the arcsine law \cite{VanVleck1966}.
In contrast to the approach employed in \cite{Liuonebit}, which relies on reconstructing the covariance matrix of the unquantized data, DoA estimation was performed in \cite{Huang2019} by directly applying MUSIC to the sample covariance matrix of one-bit ULA data. Numerical simulations demonstrated that the approach proposed in \cite{Huang2019} performs similarly to the algorithm proposed in \cite{BarShalom2002} in the low Signal-to-Noise Ratio (SNR) regime.
An upper bound on the CRB of estimating a single source DoA from one-bit ULA measurements was derived in \cite{Stein2016}.
The aforementioned research works considered using ULAs for one-bit DoA estimation. Exploitation of SLAs for one-bit DoA estimation has been studied in \cite{Liuonebit,Ramamohan2010,Cheng2020,Zhou2020}. The authors in \cite{Liuonebit} deployed the arcsine law \cite{VanVleck1966} to reconstruct the ASCM from one-bit SLA data. Then, they applied MUSIC on the reconstructed ASCM to estimate DoAs.
It was shown in \cite{Liuonebit} that the performance degradation due to one-bit quantization can,
to some extent, be compensated using SLAs. An array interpolation-based algorithm was employed in \cite{Zhou2020} to estimate DoAs from one-bit data received by co-prime arrays. Cross-dipoles sparse arrays were deployed in \cite{Cheng2020} to develop a method for one-bit DoA estimation which is robust against polarization states. In \cite{Ramamohan2010}, the authors proposed an approach to jointly estimate DoAs and array calibration errors from one-bit data.
Nonetheless, the analytical performance of DoA estimation from one-bit SLA measurements has not yet been studied in the literature, and performance analysis has been limited to simulation studies. Therefore, the fundamental performance limits of DoA estimation from one-bit SLA measurements are not well understood.
\subsection{Our Contributions}
It is of great importance to analytically investigate the performance of DoA estimation from one-bit SLA measurements.
Such a performance analysis not only provides us with valuable insights into the performance of DoA estimation from one-bit SLA data but also enables us to compare its performance with that of DoA estimation using infinite-bit (unquantized) SLA data. Hence, as one of the contributions of this paper, we conduct a rigorous study on the performance of estimating source DoAs from one-bit SLA samples. Furthermore, we propose a new algorithm for estimating source DoAs from one-bit SLA measurements and analyze its asymptotic performance. Specifically, the contributions of this paper are described as follows:
\begin{itemize}
\item {\bf Identifiability Analysis:} We study the identifiability conditions for DoA estimation from one-bit SLA data. We first show that the identifiability condition for estimating DoAs from one-bit SLA data is equivalent to that for the case when DoAs are estimated from infinite-bit (unquantized) SLA data. Then, we determine a sufficient condition for global identifiability of DoAs from one-bit data based on the relationship between the number of sources and the number of array elements.
\item {\bf CRB Derivation and Analysis:} We derive a pessimistic approximation of the CRB of DoA estimation using one-bit data received by an SLA. This pessimistic CRB approximation provides a benchmark for the performance of DoA estimation algorithms from one-bit data. Additionally, it helps us to spell out the condition under which the Fisher Information Matrix (FIM) of one-bit data is invertible, and thus, the CRB is a valid bound for one-bit DoA estimators. Further, we derive the performance limits of one-bit DoA estimation using SLAs at different conditions.
\item {\bf Novel One-bit DoA Estimator:} We propose a new MUSIC-based algorithm for estimating DoAs from one-bit SLA measurements. In this regard, we first construct an enhanced estimate of the normalized covariance matrix of infinite-bit (unquantized) data by exploiting the structure of the normalized covariance matrix efficiently. Then, we apply MUSIC to an augmented version of the enhanced normalized covariance matrix estimate to determine the DoAs.
\item {\bf Performance Analysis of the Proposed Estimator:} We derive a closed-form
expression for the second-order statistics of the asymptotic distribution (for a large number of snapshots) of the proposed algorithm. Our asymptotic performance analysis shows that the proposed estimator outperforms its counterparts in the literature and that its performance is very close to the proposed pessimistic approximation of the CRB. Moreover, the asymptotic performance analysis of the proposed DoA estimator provides valuable insights into its behavior. For example, we observe that the Mean Square Error (MSE) depends on both the physical array geometry and the co-array geometry. In addition, we observe that the MSE does not drop to zero even if the SNR approaches infinity.
\item {\bf Wider Applicability of the Derived Performance Analysis:} We provide a closed-form expression for the large-sample performance of the one-bit DoA estimator in \cite{Liuonebit} as a byproduct of the performance analysis of our proposed DoA estimator.
\end{itemize}
\emph{Organization}: Section \ref{sec:model} describes the system model.
In Section \ref{sec:iden}, the identifiability condition for DoA estimation problem from one-bit quantized data is discussed. Section \ref{sec:crb} presents the pessimistic approximation of the CRB and related discussions. In Section \ref{sec:est}, the proposed algorithm for DoA estimation from one-bit measurements is given and its performance is analyzed.
The simulation results and related discussions are included in Section \ref{sec:simulations}. Finally, Section \ref{sec:conclusion} concludes the paper.
\emph{Notation}: Vectors and matrices are referred to by lower- and upper-case bold-face, respectively. The superscripts $*$, $T$, $H$ denote the conjugate, transpose and Hermitian
(conjugate transpose) operations, respectively.
$[\mathbf{A}]_{i,j}$ and $[\mathbf{a}]_i$ indicate the $(i,j)^{\rm th}$ and $i^{\rm th}$ entry of $\mathbf{A}$ and $\mathbf{a}$, respectively.
$\|\mathbf{a}\|_2$ stands for the $\ell_2$-norm of $\mathbf{a}$. $|\mathds{A}|$ represents the cardinality of the set $\mathds{A}$. $|a|$, $\lceil a \rceil$ and $\lfloor a \rfloor$ denote the absolute value, the least integer greater than or equal to, and the greatest integer less than or equal to the scalar $a$, respectively. $\DIAG(\mathbf{a})$ and $\DIAG(\mathbf{A})$ are
diagonal matrices whose diagonal entries are equal to the elements of $\mathbf{a}$ and to the diagonal elements of $\mathbf{A}$, respectively. The $M \times M$ identity matrix is denoted by $\mathbf{I}_M$. $\SGN(x)$ denotes the sign function with $\SGN(x) = 1$ for $x \geq 0$ and $\SGN(x) = - 1$ otherwise.
The real and imaginary parts
of $a$ are denoted by $\Re\{a\}$ and $\Im\{a\}$, respectively.
$\EX\{.\}$ stands for the statistical expectation.
$\otimes$ and $\odot$ represent Kronecker and Khatri-Rao products, respectively. $\TRAC(\mathbf{A})$, $\det(\mathbf{A})$ and $\RK(\mathbf{A})$
denote the trace, determinant and rank, respectively.
$
\VE\left(\mathbf{A}\right)=
\begin{bmatrix}
\mathbf{a}_1^T & \mathbf{a}_2^T & \cdots & \mathbf{a}_n^T \\
\end{bmatrix}^T
$
represents the vectorization operation and $\MAT_{m,n}(.)$ is its inverse operation.
$\mathbf{A}^{\dagger}$ and $\Pi^{\bot}_{\mathbf{A}}$ indicate the pseudoinverse and the projection matrix onto the null space of the full column rank matrix $\mathbf{A}^H$, respectively. ${\cal CN}(\mathbf{a}, \mathbf{A})$ denotes the circular complex Gaussian distribution with mean $\mathbf{a}$ and covariance matrix $\mathbf{A}$.
\section{System Model}\label{sec:model}
We consider an SLA with $M$ elements located at positions $m_1\frac{\lambda}{2}, m_2\frac{\lambda}{2}, \cdots, m_M\frac{\lambda}{2}$ with $m_i \in \mathds{M}$. Here $\mathds{M}$ is a set of integers with cardinality $|\mathds{M}|\!=\!M$, and $\lambda$ denotes the wavelength of the incoming signals. It is assumed that $K$ narrowband signals with distinct DoAs $\boldsymbol{\theta}\!=\![ \theta_1, \theta_2, \cdots, \theta_K ]^T \in [-\pi/2, \pi/2]^{K \times 1}$ impinge on the SLA from far field. The signal received at the array at time instance $t$ can be modeled as
\small
\begin{align}\label{model-eq-1}
\mathbf{y}(t)=\mathbf{A}(\boldsymbol{\theta})\mathbf{s}(t)+\mathbf{n}(t) \in \mathds{C}^{M \times 1}, ~~~ t=0,\cdots, N-1,
\end{align}\normalsize
where $\mathbf{s}(t) \in \mathds{C}^{K \times 1}$ denotes the vector of source signals, $\mathbf{n}(t) \in \mathds{C}^{M \times 1}$ is additive noise, and $\mathbf{A}(\boldsymbol{\theta})=[\mathbf{a}\left(\theta_1\right), \mathbf{a}\left(\theta_2\right), \cdots, \mathbf{a}\left(\theta_K\right)] \in \mathds{C}^{M \times K}$ represents the SLA steering matrix with
\small
\begin{align}\label{model-eq-2}
\mathbf{a}(\theta_k)\!=\![e^{j\pi \sin \theta_k m_1},~ e^{j\pi \sin \theta_k m_2},~ \cdots,~ e^{j\pi \sin \theta_k m_M}]^T,
\end{align}\normalsize
being the SLA manifold vector for the $k^{\rm th}$ signal.
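The manifold vector in \eqref{model-eq-2} can be computed directly from the sensor positions. The following is a minimal sketch in plain Python (element positions in half-wavelength units, as in the model above); the function name is ours, for illustration only.

```python
import cmath
import math

def steering_vector(theta, positions):
    """SLA manifold vector a(theta) as in Eq. (2).

    theta:     DoA in radians, in [-pi/2, pi/2].
    positions: integer sensor positions m_1, ..., m_M
               (in units of lambda/2).
    Element i is exp(j * pi * sin(theta) * m_i).
    """
    return [cmath.exp(1j * math.pi * math.sin(theta) * m) for m in positions]
```

Every entry has unit modulus, and at broadside ($\theta = 0$) the vector reduces to all ones.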
Further, the following assumptions are made on source signals and noise:
\begin{itemize}
\item[{\bf A1}] $\mathbf{n}(t)$ follows a zero-mean circular complex Gaussian distribution with the covariance matrix $\EX\{\mathbf{n}(t)\mathbf{n}^H(t)\}\!=\!\sigma^2\IDM_M$.
\item[{\bf A2}] The source signals are modeled as zero-mean \emph{uncorrelated} circular complex Gaussian random variables with covariance matrix $\EX\{\mathbf{s}(t)\mathbf{s}^H(t)\}=\DIAG(\mathbf{p})$ where $\mathbf{p}=[ p_1, p_2, \cdots, p_K ]^T \in \mathds{R}_{>0}^{{K \times 1}}$ (i.e., $p_k > 0,~\forall k$).
\item[{\bf A3}] Source and noise vectors are mutually independent.
\item[{\bf A4}] There is no temporal correlation between the snapshots, i.e., $\EX\{\mathbf{n}(t_1)\mathbf{n}^H(t_2)\}\!=\!\EX\{\mathbf{s}(t_1)\mathbf{s}^H(t_2)\}\!=\!\mathbf{0}$ if $t_1\neq t_2$.
\item[{\bf A5}] An exact knowledge of the number of sources is available.
\end{itemize}
Given {\bf A1} - {\bf A4}, the covariance matrix of $\mathbf{y}(t)$ is expressed as
\small
\begin{align}\label{model-eq-3}
\mathbf{R} &= \mathbf{A}(\boldsymbol{\theta})\DIAG(\mathbf{p})
\mathbf{A}^H(\boldsymbol{\theta})+\sigma^2\IDM_M \in \mathds{C}^{M \times M}.
\end{align}\normalsize
Vectorizing $\mathbf{R}$ leads to \cite{Liu2017,Wang2017,SedighiTSP2019}
\small
\begin{align}\label{model-eq-4}
\mathbf{r}& \doteq \VE (\mathbf{R}) = \left(\mathbf{A}^*(\boldsymbol{\theta}) \odot \mathbf{A}(\boldsymbol{\theta})\right)\mathbf{p}+\sigma^2\VE(\IDM_M),\nonumber\\
&=\mathbf{J}\mathbf{A}_d(\boldsymbol{\theta})\mathbf{p}+\sigma^2\mathbf{J}\mathbf{e} \in \mathds{C}^{M^2 \times 1},
\end{align}\normalsize
where $\mathbf{A}_d(\boldsymbol{\theta}) \in \mathds{C}^{(2D-1) \times K}$ corresponds to the steering matrix of the difference co-array of the SLA whose elements are located at $(-\ell_{D-1} \frac{\lambda}{2}, \cdots, 0, \cdots, \ell_{D-1} \frac{\lambda}{2})$ with $\ell_i \in \mathds{D}=\{\tiny|m_p-m_q\tiny| : m_p, m_q \in \mathds{M}\}$
and $D=|\mathds{D}|$. Moreover, $\mathbf{e} \in \{0,1\}^{(2D-1) \times 1}$ is a column vector with $[\mathbf{e}]_i=\delta[i-D]$, and the selection matrix $\mathbf{J} \in \{0,1\}^{M^2 \times (2D-1)}$ is represented as follows \cite{Liu2017}:
\small
\begin{align}\label{model-eq-5}
\mathbf{J} \!=\! \begin{bmatrix} \VE (\mathbf{L}^T_{D-1}), \!&\! \cdots, \!&\! \VE (\mathbf{L}_{0}), \!&\! \cdots, \!&\! \VE (\mathbf{L}_{D-1}) \end{bmatrix},
\end{align}\normalsize
where
$
\Scale[0.9]{[\mathbf{L}_n]_{p,q}=\left\{\begin{array}{cc}
1, & \text{if}~~ m_p-m_q=\ell_n, \\
0, & \text{otherwise},
\end{array}
\right.}
$
with $1 \leq p, q \leq M$ and $0 \leq n \leq D-1$. The steering matrix of the difference co-array includes a contiguous ULA segment around the origin with the size of $2v-1$ where $v$ is the largest integer such that $\{0, 1, \cdots, v-1\} \subseteq \mathds{D}$. The size of the contiguous ULA segment of the
difference co-array plays a crucial role in determining the number of identifiable sources: $K$ distinct sources are identifiable if $K \leq v-1$. Hence, if the SLA is designed properly such that $v > M$, we are able to identify more sources than the number of physical elements in the SLA by exploiting the resulting structure of $\mathbf{R}$ efficiently \cite{Vaidyanathan2011,Pal2010,Liu2017,SedighiTSP2019}.
An illustrative example of an SLA, the corresponding difference co-array, and its contiguous ULA segment is presented in Fig. \ref{fig-1}.
\begin{figure}[!t]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[semithick] (-10,0) -- (10,0);
\draw[semithick] (-10,2) -- (10,2);
\foreach \x in {-9,-8,-7,-6,-5,-4,-3,-2,0,-1,1,2,3,4,5,6,7,8,9}
\draw (\x cm, 5pt) -- (\x cm, -5pt) node[anchor=north] {\tiny $\x$};
\fill [radius=5pt,color=black] (-9,0) circle[] (-7,0) circle[] (-6,0) circle[] (-5,0) circle[] (-4,0) circle[] (-3,0) circle[] (-2,0) circle[] (-1,0) circle[] (0,0) circle[] (1,0) circle[] (2,0) circle[] (3,0) circle[] (4,0) circle[] (5,0) circle[] (6,0) circle[] (7,0) circle[] (9,0) circle;
\foreach \x in {-9,-8,-7,-6,-5,-4,-3,-2,0,-1,1,2,3,4,5,6,7,8,9}
\draw (\x cm, 61.6929pt) -- (\x cm, 51.6929pt);
\fill [radius=5pt,color=black] (0,2) circle[] (2,2) circle[] (3,2) circle[] (4,2) circle[] (6,2) circle[] (9,2) circle;
\node at (-11,2) {(a)};
\node at (-11,0) {(b)};
\draw [decorate,decoration={brace,amplitude=5pt,mirror,raise=3ex}]
(-7,0.0) -- (7,0.0) node[midway,yshift=-2.5em]{\footnotesize The contiguous ULA segment};
\draw [decorate,decoration={brace,amplitude=2pt,raise=1ex}]
(0,2) -- (1,2) node[midway,yshift=1.35em]{\footnotesize $\frac{\lambda}{2}$};
\end{tikzpicture}
\vspace*{-0.12in}
\caption{Array geometry of a co-prime array with $M=6$ elements: (a) physical array with $\mathds{M}=\{0,2,3,4,6,9\}$; (b) difference co-array with $\mathds{D}=\{0,1,2,3,4,5,6,7,9\}$ and $v=8$.}
\label{fig-1}
\vspace{-5 mm}
\end{figure}
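The difference set $\mathds{D}$ and the contiguous ULA segment size $v$ from the definitions above can be computed mechanically. A minimal sketch in plain Python (the function name is ours), which reproduces the co-prime array example of Fig.~\ref{fig-1}:

```python
def difference_coarray(sensors):
    """Difference co-array of an SLA.

    sensors: set of integer element positions (in lambda/2 units).
    Returns (D, v) where D = {|m_p - m_q| : m_p, m_q in sensors} and
    v is the largest integer with {0, 1, ..., v-1} contained in D,
    i.e., the one-sided size of the contiguous ULA segment.
    """
    diffs = {abs(p - q) for p in sensors for q in sensors}
    v = 0
    while v in diffs:
        v += 1
    return diffs, v
```

For the array of Fig.~\ref{fig-1}, $\mathds{M}=\{0,2,3,4,6,9\}$ yields $\mathds{D}=\{0,1,2,3,4,5,6,7,9\}$ and $v=8$, so up to $v-1=7$ uncorrelated sources are identifiable with only $M=6$ physical elements.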
Here it is assumed that each array element is connected to a one-bit ADC which directly converts the received analog signal into binary data by comparing the real and imaginary parts of the received signal individually with zero. In such a case, the one-bit measurements at the $m^{\rm th}$ array element are given by
\small
\begin{align}
\label{model-eq-6}
\hspace{-1mm}[\mathbf{x}(t)]_m \!=\! \frac{1}{\sqrt{2}} \SGN\left(\Re\{[\mathbf{y}(t)]_m\} \right)
\!+\! \frac{j}{\sqrt{2}}~ \SGN\left(\Im\{[\mathbf{y}(t)]_m \}\right).
\end{align}\normalsize
The problem under consideration is the estimation of source DoAs, i.e., $\boldsymbol{\theta}$, from one-bit quantized measurements, i.e., $\mathbf{X} = \begin{bmatrix}\mathbf{x}(0), & \mathbf{x}(1), & \cdots, & \mathbf{x}(N-1)\end{bmatrix}$, collected by the SLA.
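The per-element quantizer of \eqref{model-eq-6} can be sketched directly: the real and imaginary parts of each received sample are compared with zero independently, so every one-bit sample lies on the unit circle at one of four phases. A minimal plain-Python sketch (the function name is ours):

```python
import math

def one_bit(y):
    """One-bit quantization of a complex sample, as in Eq. (6).

    Returns (sgn(Re{y}) + j*sgn(Im{y})) / sqrt(2), where sgn(x) = 1
    for x >= 0 and -1 otherwise. The output always has unit modulus.
    """
    sgn = lambda x: 1.0 if x >= 0 else -1.0
    return complex(sgn(y.real) / math.sqrt(2), sgn(y.imag) / math.sqrt(2))
```

All amplitude information is discarded; only the quadrant of each sample survives, which is precisely the information loss analyzed in the following sections.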
\section{Identifiability Conditions}
\label{sec:iden}
Note that a significant information loss is expected when going from infinite-bit (unquantized) data, i.e., $\mathbf{Y}=[\mathbf{y}(0), \mathbf{y}(1), \cdots, \mathbf{y}(N-1)]$, to one-bit data, i.e., $\mathbf{X}$. This information loss may affect the attractive capability of SLAs to identify a larger number of uncorrelated sources than the number of array elements. To address this concern, we will consider the identifiability conditions for DoA estimation from one-bit SLA measurements in this section. Before proceeding further, we first need to give a clear definition of identifiability for this problem.
\begin{Def}[Identifiability]
\label{definition-1}
Let $f(\mathbf{X} \!\mid\! \boldsymbol{\theta}, \mathbf{p}, \sigma^2)$ denote the Probability Density Function (PDF) of $\mathbf{X}$ parameterized by $\boldsymbol{\theta}$, $\mathbf{p}$ and $\sigma^2$. Then, the source DoAs are said to be identifiable from $\mathbf{X}$ at point $\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ if there exist no $\breve{\boldsymbol{\theta}} \neq \boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ such that $f(\mathbf{X} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) = f(\mathbf{X} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$ for any arbitrary values of $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$ \cite[Ch. 1, Definition 5.2]{lehmann2006theory} \cite[pp. 62]{van2000asymptotic}.
\end{Def}
\begin{rmk}
\label{rmk-1}
The above definition can be used for identifiabilty of $\boldsymbol{\theta}_0$ from $\mathbf{Y}$ by replacing $f(\mathbf{X} \!\mid\! \boldsymbol{\theta}, \mathbf{p}, \sigma^2)$ with $f(\mathbf{Y} \!\mid\! \boldsymbol{\theta}, \mathbf{p}, \sigma^2)$.
\end{rmk}
Based on the above definition, the necessary and sufficient condition for a particular DoA point to be identifiable from one-bit SLA data is given in the following Theorem.
\begin{theo}
\label{Theo-1}
The source DoAs are identifiable from $\mathbf{X}$ at $\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ if and only if they are identifiable from $\mathbf{Y}$ at $\boldsymbol{\theta}_0$.
\end{theo}
\begin{proof}
See Appendix \ref{app-A}.
\end{proof}
The above theorem shows that the identifiability condition for DoA estimation from one-bit SLA measurements is equivalent to that for DoA estimation from infinite-bit (unquantized) SLA measurements. Hence, the information loss arising from one-bit quantization does not influence the number of identifiable sources. However, Theorem \ref{Theo-1} only spells out the identifiability condition at a single DoA point. We next define global identifiability of source DoAs from one-bit data and give sufficient conditions for it in the following theorem.
\begin{Def}[Global identifiability]
\label{definition-2}
The source DoAs are said to be globally identifiable from $\mathbf{X}$ if there exist no distinct $\boldsymbol{\theta} \in [-\pi/2, \pi/2]^{K \times 1}$ and $\breve{\boldsymbol{\theta}} \in [-\pi/2, \pi/2]^{K \times 1}$ such that $f(\mathbf{X} \mid \boldsymbol{\theta},\mathbf{p},\sigma^2) = f(\mathbf{X} \mid \breve{\boldsymbol{\theta}},\breve{\mathbf{p}},\breve{\sigma}^2)$ for any arbitrary values of $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$.
\end{Def}
\begin{theo}
\label{Theo-2}
The sufficient conditions for global identifiability and global non-identifiability of source DoAs from one-bit SLA data are as follows:
\begin{enumerate}
\item[{\bf S1}] The source DoAs are globally identifiable (with probability one) from $\mathbf{X}$ for any value of $\boldsymbol{\theta} \in [-\pi/2, \pi/2]^{K \times 1}$ if $K \leq v-1$.
\item[{\bf S2}] The source DoAs are globally unidentifiable from $\mathbf{X}$ for any value of $\boldsymbol{\theta} \in [-\pi/2, \pi/2]^{K \times 1}$ if $K \geq D$.
\end{enumerate}
\end{theo}
\begin{proof}
See Appendix \ref{app-B}.
\end{proof}
Having revealed that one-bit quantization does not affect the identifiability conditions of source DoAs, we will investigate the performance of DoA estimation from one-bit SLA data through a CRB analysis in the next section.
\section{Cram\'{e}r-Rao Bound Analysis}
\label{sec:crb}
It is well-known that the CRB offers a lower bound on the covariance of any unbiased estimator \cite{kay1993fundamentals}. Hence, it is considered as a standard metric for evaluating the performance of estimators. In particular, the CRB can
provide valuable insights into the fundamental limits of estimation for specific problems as well as the dependence of the estimation performance on various system parameters. Deriving a closed-form expression for the CRB requires knowledge of the data distribution. However, the data distribution may not be known for some problems. In such cases, the Gaussian assumption is a natural choice which leads to the largest (most pessimistic) CRB in a general class of
data distributions \cite{Stoica2011Lec}.
In the problem of DoA estimation from one-bit SLA measurements, the true PDF of the one-bit data is obtained from the orthant probabilities \cite{abrahamson1964orthant} of the Gaussian distribution, for which a closed-form expression is not available in general. Motivated by this fact, in what follows, we derive a pessimistic closed-form approximation for the CRB of DoA estimation from one-bit SLA data by assuming a Gaussian distribution for $\mathbf{x}(t)$. This pessimistic closed-form approximation is used for benchmarking the performance of one-bit DoA estimators as well as for investigating the performance limits of DoA estimation from one-bit data. Making use of assumptions {\bf A1}-{\bf A4}, it is readily confirmed that
$
\EX\{\mathbf{x}(t)\} = \ZEROV.
$
Further, the arcsine law \cite{VanVleck1966} establishes the following relationship between $\mathbf{R}$ and $\mathbf{R}_{\mathbf{x}}$:
\small
\begin{align}
\label{Eq-arclaw}
\mathbf{R}_{\mathbf{x}} = \EX\{\mathbf{x}(t)\mathbf{x}^H(t)\} = \frac{2}{\pi} \asin( \overline{\mathbf{R}} ),
\end{align}\normalsize
where $[\asin(\overline{\mathbf{R}})]_{m,n} \!=\! \arcsin(\Re\{[\overline{\mathbf{R}}]_{m,n}\})\!+\!j \arcsin(\Im\{[\overline{\mathbf{R}}]_{m,n}\})$ and
\small
\begin{align}
\label{Eq-CRB-1}
\overline{\mathbf{R}} \!=\! \frac{\mathbf{R}}{\sigma^2+\sum_{k=1}^K p_k} \!=\! \mathbf{A}(
\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}) \mathbf{A}^H(
\boldsymbol{\theta}) \!+\! (1\!-\!\sum_{k=1}^K \overline{p}_k) \IDM_M,
\end{align}\normalsize
is the normalized covariance matrix of $\mathbf{y}(t)$ with $\overline{\mathbf{p}} = [\overline{p}_1, \overline{p}_2, \cdots, \overline{p}_K]^T$ and $\overline{p}_k = \frac{p_k}{\sigma^2+\sum_{i=1}^K p_i}$.
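As a numerical sanity check, the arcsine law \eqref{Eq-arclaw} can be verified by simulation. The sketch below uses purely illustrative values; the array size, the single-source scenario, and the one-bit normalization $\mathbf{x}(t)=\frac{1}{\sqrt{2}}\left[\mathrm{sign}(\Re\{\mathbf{y}(t)\})+j\,\mathrm{sign}(\Im\{\mathbf{y}(t)\})\right]$ are assumptions made for this example only:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200_000

# Illustrative covariance of the unquantized snapshots y(t): one unit-power
# source at 20 degrees on a 4-element half-wavelength ULA, plus unit noise.
theta = np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
R = np.outer(a, a.conj()) + np.eye(M)
Rbar = R / np.real(R[0, 0])          # normalized covariance, unit diagonal

# Draw y(t) ~ CN(0, R) and quantize real and imaginary parts to one bit.
L = np.linalg.cholesky(R)
y = L @ ((rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2))
x = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

# Arcsine law: E{x x^H} = (2/pi) asin(Rbar), asin acting on Re and Im parts.
Rx_hat = x @ x.conj().T / N
Rx_law = (2 / np.pi) * (np.arcsin(Rbar.real) + 1j * np.arcsin(Rbar.imag))
err = np.max(np.abs(Rx_hat - Rx_law))
print(err)  # small, shrinking as O(1/sqrt(N))
```

The sample covariance of the one-bit data matches $\frac{2}{\pi} \asin( \overline{\mathbf{R}} )$ entrywise up to the expected $O(1/\sqrt{N})$ fluctuation.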
It follows from \eqref{Eq-arclaw} and \eqref{Eq-CRB-1} that $\mathbf{R}_{\mathbf{x}}$ is a function of the parameters $\boldsymbol{\theta}$ and $\overline{\mathbf{p}}$. Let $\boldsymbol{\varrho} = [\boldsymbol{\theta}^T, \overline{\mathbf{p}}^T]^T$ denote the vector of unknown parameters. Then, under the Gaussian assumption, the worst-case Fisher Information Matrix (FIM) ${\cal I}_{w}(\boldsymbol{\varrho})$ is given by \cite{kay1993fundamentals}
\small
\begin{align}
\label{Eq-CRB-2}
[{\cal I}_w(\boldsymbol{\varrho})]_{m,n} &= N \TRAC(\mathbf{R}_{\mathbf{x}}^{-1} \frac{\partial \mathbf{R}_{\mathbf{x}}}{\partial [\boldsymbol{\varrho}]_m} \mathbf{R}_{\mathbf{x}}^{-1} \frac{\partial \mathbf{R}_{\mathbf{x}}}{\partial [\boldsymbol{\varrho}]_n} ) \nonumber\\
& = N \frac{\partial \mathbf{r}_{\mathbf{x}}^H}{\partial [\boldsymbol{\varrho}]_m} (\mathbf{R}_{\mathbf{x}}^{-T} \otimes \mathbf{R}_{\mathbf{x}}^{-1}) \frac{\partial \mathbf{r}_{\mathbf{x}}}{\partial [\boldsymbol{\varrho}]_n},
\end{align}\normalsize
where $\mathbf{r}_{\mathbf{x}}=\VE(\mathbf{R}_{\mathbf{x}})$ and the last equality is obtained by using the relation $\TRAC(\mathbf{C}_1 \mathbf{C}_2 \mathbf{C}_3 \mathbf{C}_4) = \VE^H(\mathbf{C}_2^H) (\mathbf{C}_1^T \otimes \mathbf{C}_3 ) \VE(\mathbf{C}_4)$. From \eqref{model-eq-4}, \eqref{Eq-arclaw} and \eqref{Eq-CRB-2}, we obtain
\footnotesize
\begin{align}
\label{Eq-CRB-3}
\hspace{-1mm} \mathbf{r}_{\mathbf{x}} \!=\! \frac{2}{\pi} \asin( \VE(\overline{\mathbf{R}}) )\!=\! \frac{2}{\pi} \mathbf{J} \asin\left(\mathbf{A}_d(\boldsymbol{\theta}) \overline{\mathbf{p}}\!+\!(1\!-\!\sum_{k=1}^K \overline{p}_k) \mathbf{e}\right).
\end{align}\normalsize
Computing the derivative of $ \mathbf{r}_{\mathbf{x}}$ with respect to $\theta_k$ and $\overline{p}_k$ yields
\small
\begin{align}
\label{Eq-CRB-4}
\frac{\partial \mathbf{r}_{\mathbf{x}}}{\partial \theta_k} =& j \pi \cos(\theta_k) \overline{p}_k \mathbf{J} \DIAG(\mathbf{d}) \\
&\times\left[ \DIAG(\overline{\mathbf{h}}) \Re\{\mathbf{a}_d(\theta_k)\} + j \DIAG(\mathbf{h}) \Im\{\mathbf{a}_d(\theta_k)\}\right], \nonumber\\
\label{Eq-CRB-5}
\frac{\partial \mathbf{r}_{\mathbf{x}}}{\partial \overline{p}_k} =& \mathbf{J} \left[ \DIAG(\mathbf{h}) \Re\{\mathbf{a}_d(\theta_k)\} + j \DIAG(\overline{\mathbf{h}}) \Im\{\mathbf{a}_d(\theta_k)\}\right],
\end{align}\normalsize
where $\mathbf{h}$ and $\overline{\mathbf{h}}$ are given in \eqref{Eq-CRB-6} and \eqref{Eq-CRB-7} at the top of the next page, $\mathbf{a}_d(\theta_k)$ denotes the $k^{\rm th}$ column of $\mathbf{A}_d(\boldsymbol{\theta})$ and $\mathbf{d}=[-\ell_{D-1}, \cdots, \ell_0, \cdots, \ell_{D-1} ]^T$.
\begin{figure*}[!t]
\small
\begin{align}
\label{Eq-CRB-6}
\mathbf{h} &= \begin{bmatrix} \frac{1}{\sqrt{1-|\Re\{\sum_{k=1}^K \overline{p}_k e^{-j \pi \sin \theta_k \ell_{D-1}}\}|^2}} & \cdots & 0 & \cdots & \frac{1}{\sqrt{1-|\Re\{\sum_{k=1}^K \overline{p}_k e^{j \pi \sin \theta_k \ell_{D-1}}\}|^2}}\end{bmatrix}^T,\\
\label{Eq-CRB-7}
\overline{\mathbf{h}} &= \begin{bmatrix} \frac{1}{\sqrt{1-|\Im\{\sum_{k=1}^K \overline{p}_k e^{-j \pi \sin \theta_k \ell_{D-1}}\}|^2}} & \cdots & 0 & \cdots & \frac{1}{\sqrt{1-|\Im\{\sum_{k=1}^K \overline{p}_k e^{j \pi \sin \theta_k \ell_{D-1}}\}|^2}}\end{bmatrix}^T,
\end{align}\normalsize
\vspace{-1mm}
\hrulefill
\vspace{-2mm}
\end{figure*}
It follows from \eqref{Eq-CRB-2}, \eqref{Eq-CRB-4} and \eqref{Eq-CRB-5} that
\small
\begin{align}
\label{Eq-CRB-8}
{\cal I}_w(\boldsymbol{\varrho}) = N \begin{bmatrix}
\mathbf{G}^H \\ \mathbf{V}^H
\end{bmatrix} \mathbf{J}^H (\mathbf{R}_{\mathbf{x}}^{-T} \otimes \mathbf{R}_{\mathbf{x}}^{-1}) \mathbf{J} \begin{bmatrix}
\mathbf{G} & \mathbf{V}
\end{bmatrix},
\end{align}\normalsize
where
\small
\begin{align}
\label{Eq-CRB-9}
\mathbf{G} =& j \pi \DIAG(\mathbf{d}) \big[ \DIAG(\overline{\mathbf{h}}) \Re\{\mathbf{A}_d(\boldsymbol{\theta})\} \\
&+ j \DIAG(\mathbf{h}) \Im\{\mathbf{A}_d(\boldsymbol{\theta})\}\big] \boldsymbol{\Phi}(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}), \nonumber\\
%
\label{Eq-CRB-10}
\mathbf{V} =& \DIAG(\mathbf{h}) \Re\{\mathbf{A}_d(\boldsymbol{\theta})\} + j \DIAG(\overline{\mathbf{h}}) \Im\{\mathbf{A}_d(\boldsymbol{\theta})\},
\end{align}\normalsize
with $\boldsymbol{\Phi}(\boldsymbol{\theta})=\DIAG([ \cos \theta_1, \cos \theta_2, \cdots, \cos \theta_K ]^T)$. If ${\cal I}_w(\boldsymbol{\varrho})$ is non-singular, a pessimistic approximation for the CRB of estimating DoAs from one-bit SLA data can be obtained through inverting ${\cal I}_w(\boldsymbol{\varrho})$. Hence, we need to first establish the non-singularity of ${\cal I}_w(\boldsymbol{\varrho})$.
\begin{lem}
\label{Lem-1}
Define $\mathbf{\Upsilon} = \begin{bmatrix} \mathbf{\Delta} & \mathbf{\digamma} \end{bmatrix} \in \mathds{C}^{(2D-1) \times 2K}$, where
\small
\begin{align}
\label{Eq-CRB-11}
\mathbf{\Delta} &= \DIAG(\mathbf{d}) \big[ \DIAG(\overline{\mathbf{h}}) \Re\{\mathbf{A}_d(\boldsymbol{\theta})\} + j \DIAG(\mathbf{h}) \Im\{\mathbf{A}_d(\boldsymbol{\theta})\}\big],\\
\mathbf{\digamma} &= \DIAG(\mathbf{h}) \Re\{\mathbf{A}_d(\boldsymbol{\theta})\} + j \DIAG(\overline{\mathbf{h}}) \Im\{\mathbf{A}_d(\boldsymbol{\theta})\}.
\end{align}\normalsize
Then, ${\cal I}_w(\boldsymbol{\varrho})$ is non-singular if and only if $\mathbf{\Upsilon}$ has full column rank.
\end{lem}
\begin{proof}
See Appendix \ref{app-C}.
\end{proof}
\begin{rmk}
\label{rmk-2}
Assuming ${\cal I}(\boldsymbol{\varrho})$ to be the true FIM, it follows from ${\cal I}(\boldsymbol{\varrho}) \succeq {\cal I}_w(\boldsymbol{\varrho})$ that
$\mathbf{\Upsilon}$ having full column rank
is also a sufficient condition for the non-singularity of ${\cal I}(\boldsymbol{\varrho})$.
\end{rmk}
\begin{theo}
\label{Theo-3}
Let $CRB(\boldsymbol{\theta})$ denote the CRB for estimating the source DoAs $\boldsymbol{\theta}$ from $\mathbf{X}$. If ${\cal I}_w(\boldsymbol{\varrho})$ is non-singular, then a pessimistic approximation of $CRB(\boldsymbol{\theta})$, denoted by $CRB_w(\boldsymbol{\theta})$, is given by
\small
\begin{align}
\label{Eq-Pes-CRB}
CRB(\boldsymbol{\theta}) \preceq CRB_w(\boldsymbol{\theta}) = \frac{1}{4 N \pi^2} (\mathbf{Q}^H \Pi_{\mathbf{M}^{\frac{1}{2}} \mathbf{V}}^{\perp} \mathbf{Q})^{-1},
\end{align}\normalsize
where $\mathbf{\Omega} = \DIAG(\overline{\mathbf{h}}) \Re\{\mathbf{A}_d(\boldsymbol{\theta})\} + j \DIAG(\mathbf{h}) \Im\{\mathbf{A}_d(\boldsymbol{\theta})\}$ denotes the bracketed factor in \eqref{Eq-CRB-9},
\vspace{-1mm}
\small
\begin{align}
\mathbf{M} & = \mathbf{J}^H \left(\asin(\overline{\mathbf{R}}^T) \otimes \asin(\overline{\mathbf{R}}) \right)^{-1} \mathbf{J},\\
\mathbf{Q} & = \mathbf{M}^{\frac{1}{2}} \DIAG(\mathbf{d}) \mathbf{\Omega} \mathbf{\Phi}(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}),
\end{align}\normalsize
with $\mathbf{G}$ and $\mathbf{V}$ being given in \eqref{Eq-CRB-9} and \eqref{Eq-CRB-10}, respectively.
\end{theo}
\vspace{-1mm}
\begin{proof}
See Appendix \ref{app-D}.
\end{proof}
\begin{rmk}
\label{rmk-new-new-2}
We note that $CRB_w(\boldsymbol{\theta})$ bears a superficial resemblance to the CRB expression for DoA estimation from unquantized data, given by \cite[Theorem 2]{Liu2017}
\begin{align}
CRB_{I}(\boldsymbol{\theta}) = \frac{1}{4 N \pi^2} (\widetilde{\mathbf{Q}}^H \Pi_{\widetilde{\mathbf{M}}^{\frac{1}{2}} \widetilde{\mathbf{V}}}^{\perp} \widetilde{\mathbf{Q}})^{-1},
\end{align}
where
\small
\begin{align}
\widetilde{\mathbf{M}} & = \mathbf{J}^H \left(\overline{\mathbf{R}}^T \otimes \overline{\mathbf{R}} \right)^{-1} \mathbf{J},\\
\widetilde{\mathbf{Q}} & = \widetilde{\mathbf{M}}^{\frac{1}{2}} \DIAG(\mathbf{d}) \mathbf{A}_d(\boldsymbol{\theta}) \mathbf{\Phi}(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}),\\
\widetilde{\mathbf{V}} &= \begin{bmatrix} \mathbf{A}_d(\boldsymbol{\theta}) & \mathbf{e} \end{bmatrix}.
\end{align}\normalsize
\end{rmk}
\begin{theo}
\label{Theo-5}
Assume that all sources have equal power $p$ and let $SNR = p/\sigma^2$. Then, we have
\small
\begin{align}
\lim_{SNR \to \infty} CRB_w(\boldsymbol{\theta}) \succ \ZEROV.
\end{align}
\end{theo}\normalsize
\begin{proof}
See Appendix \ref{app-E}.
\end{proof}
\begin{rmk}
Theorem \ref{Theo-5} implies that $CRB_w(\boldsymbol{\theta})$ does not go to zero as the SNR increases. As a consequence, in the one-bit DoA estimation problem, the estimation errors may not be rendered arbitrarily small by increasing the SNR.
\end{rmk}
\section{Proposed One-Bit DoA Estimator}
\label{sec:est}
In this section, we first derive an enhanced estimate of the normalized covariance matrix of $\mathbf{y}(t)$, i.e., $\overline{\mathbf{R}}$, from one-bit SLA measurements through exploiting the structure of $\overline{\mathbf{R}}$. Then, we obtain DoA estimates by applying Co-Array-Based MUSIC (CAB-MUSIC) \cite{Liu2015,Wang2017} to the enhanced estimate of $\overline{\mathbf{R}}$. Further, we investigate the analytical performance of the proposed method for estimating DoAs from one-bit measurements.
\vspace{-3mm}
\subsection{Enhanced One-Bit Co-Array-Based MUSIC}\label{sec:OBCABM}
It is deduced from the strong law of large numbers \cite[ch. 8]{papoulis1991stochastic} that the sample covariance matrix of one-bit data provides a consistent estimate of $\mathbf{R}_{\mathbf{x}}$ with probability $1$, i.e.,
$
\label{Eq-Est-1}
{\rm Pr}\left(\lim_{N \to \infty} \widehat{\mathbf{R}}_{\mathbf{x}} = \mathbf{R}_{\mathbf{x}}\right) =1
$,
where $\widehat{\mathbf{R}}_{\mathbf{x}} = \frac{1}{N} \mathbf{X} \mathbf{X}^H$. In addition,
inverting \eqref{Eq-arclaw} expresses $\overline{\mathbf{R}}$ in terms of the covariance matrix of the one-bit data as follows:
\small
\begin{align}
\label{Eq-Est-2}
\overline{\mathbf{R}} =
\sine ( \frac{\pi}{2} \mathbf{R}_{\mathbf{x}}),
\end{align}\normalsize
where $[\sine ( \frac{\pi}{2} \mathbf{R}_{\mathbf{x}})]_{m,n} = \sin(\frac{\pi}{2}\Re\{[\mathbf{R}_{\mathbf{x}}]_{m,n}\}) + j \sin(\frac{\pi}{2}\Im\{[\mathbf{R}_{\mathbf{x}}]_{m,n}\})$.
Accordingly, a consistent estimate of $\overline{\mathbf{R}}$ is obtained as
\small
\begin{align}
\label{Eq-Est-3}
\widetilde{\overline{\mathbf{R}}} =
\sine ( \frac{\pi}{2} \widehat{\mathbf{R}}_{\mathbf{x}}).
\end{align}\normalsize
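Since $\asin(\cdot)$ and $\sine(\cdot)$ act elementwise on the real and imaginary parts, \eqref{Eq-Est-2} inverts the arcsine law exactly. A minimal sketch with arbitrary illustrative entries:

```python
import numpy as np

# Elementwise maps acting separately on real and imaginary parts, following
# the paper's asin(.) and sine(.) conventions.
def casin(A):
    return np.arcsin(A.real) + 1j * np.arcsin(A.imag)

def csin(A):
    return np.sin(A.real) + 1j * np.sin(A.imag)

# Illustrative normalized covariance with unit diagonal.
Rbar = np.array([[1.0 + 0.0j, 0.3 - 0.5j],
                 [0.3 + 0.5j, 1.0 + 0.0j]])
Rx = (2 / np.pi) * casin(Rbar)           # arcsine law
Rbar_rec = csin((np.pi / 2) * Rx)        # sine map recovers Rbar
print(np.max(np.abs(Rbar_rec - Rbar)))   # ~0 (exact inversion)
```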
Most of the algorithms in the literature employ $\widetilde{\overline{\mathbf{R}}}$ for estimating DoAs from one-bit measurements \cite{BarShalom2002,Liuonebit}. However, an estimate of $\overline{\mathbf{R}}$ that improves on $\widetilde{\overline{\mathbf{R}}}$ can be found if the structure of $\overline{\mathbf{R}}$ is taken into account. This enhanced estimate could in turn yield better DoA estimation performance. In what follows, we introduce such an enhanced estimate of $\overline{\mathbf{R}}$ by exploiting its structure. Then, we use this enhanced estimate to improve the DoA estimation performance from one-bit data.
It readily follows from \eqref{Eq-CRB-1} that $\overline{\mathbf{R}}$ has the following structure:
\small
\begin{align}
\label{Eq-Est-4}
\overline{\mathbf{R}} = \IDM_M + \sum_{n=1}^{D-1} u_n \mathbf{L}_n + \sum_{n=1}^{D-1} u_n^* \mathbf{L}_n^T ,
\end{align}\normalsize
where $u_n = \sum_{k=1}^K \overline{p}_k e^{j\pi \sin \theta_k \ell_n}$ and $\mathbf{L}_n$ is given after \eqref{model-eq-5} for $1 \leq n \leq D-1$. It can be observed from \eqref{Eq-Est-4} that the diagonal elements of $\overline{\mathbf{R}}$ are all equal to one, while the off-diagonal elements are parameterized by the vector $\mathbf{u} = [u_1, \cdots, u_{D-1}]^T \in \mathds{C}^{(D-1) \times 1}$. This means that there are only $2D-2$ free real parameters in $\overline{\mathbf{R}}$. Let $\ddot{\mathbf{r}} \in \mathds{C}^{(M^2-M) \times 1}$ be the vector containing the off-diagonal elements of $\overline{\mathbf{R}}$, obtained by removing the diagonal elements of $\overline{\mathbf{R}}$ from $\VE(\overline{\mathbf{R}})$. Evidently, $\ddot{\mathbf{r}}$ is given by
\small
\begin{align}
\label{Eq-Est-5}
\ddot{\mathbf{r}} &= \overline{\mathbf{J}} \begin{bmatrix} \mathbf{u}^* & \mathbf{u} \end{bmatrix}^T = \overline{\mathbf{J}} \mathbf{\Psi} \boldsymbol{\phi},
\end{align}\normalsize
where $\boldsymbol{\phi} = [\Re\{\mathbf{u}\}^T, \Im\{\mathbf{u}\}^T]^T \in \mathds{R}^{(2D-2) \times 1}$,
\small
\begin{align}
\label{Eq-Est-PSIM}
\mathbf{\Psi} = \begin{bmatrix}
\IDM_{D-1} & -j \IDM_{D-1} \\
\IDM_{D-1} & j \IDM_{D-1}
\end{bmatrix},
\end{align}\normalsize
and $\overline{\mathbf{J}} \in \{0,1\}^{(M^2-M) \times (2D-2)}$
is obtained by removing the $D$-th column as well as the rows with indices $(i-1)M+i$ for all $1 \leq i \leq M$ from $\mathbf{J}$.
It follows from \eqref{Eq-Est-5} that $\overline{\mathbf{R}}$ is parameterized by the real-valued vector $\boldsymbol{\phi}$.
We wish to find $\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}} = \{ \boldsymbol{\phi} \mid \overline{\mathbf{R}}(\boldsymbol{\phi}) \succeq \ZEROV\}$ from $\widehat{\mathbf{R}}_{\mathbf{x}}$. To this end,
let $\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}} \in \mathds{C}^{(M^2-M) \times 1}$ denote the vector containing the off-diagonal elements of $\widehat{\mathbf{R}}_{\mathbf{x}}$, obtained by removing the diagonal entries of $\widehat{\mathbf{R}}_{\mathbf{x}}$ from $\VE(\widehat{\mathbf{R}}_{\mathbf{x}})$.
For large $N$, it follows from the Central Limit Theorem (CLT) \cite[ch. 8]{papoulis1991stochastic} that the distribution of $\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}$ asymptotically approaches a complex proper Gaussian distribution, i.e., $\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}} \stackrel{D}{\rightarrow} {\cal CN}(\ddot{\mathbf{r}}_{\mathbf{x}}, \frac{4}{\pi^2 N}\mathbf{\Sigma})$, where $\ddot{\mathbf{r}}_{\mathbf{x}}$ is the vector obtained by stacking the off-diagonal elements of $\mathbf{R}_{\mathbf{x}}$ and $\mathbf{\Sigma} = \frac{\pi^2 N}{4} \EX\{(\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}-\ddot{\mathbf{r}}_{\mathbf{x}}) (\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}-\ddot{\mathbf{r}}_{\mathbf{x}})^H\} \in \mathds{C}^{(M^2 -M) \times (M^2-M)}$. Closed-form expressions for the elements of $\mathbf{\Sigma}$ are provided in Appendix K (see the supplementary document). The elements of $\mathbf{\Sigma}$ are functions of $\ddot{\mathbf{r}}$, and are thereby parameterized by $\boldsymbol{\phi}$ as well. Considering the transformation \eqref{Eq-Est-3}, the asymptotic distribution of the off-diagonal elements of $\widetilde{\overline{\mathbf{R}}}$, denoted by $\widetilde{\ddot{\mathbf{r}}} \in \mathds{C}^{(M^2-M) \times 1}$, is given by \eqref{Eq-Est-7} at the top of this page.
\begin{figure*}[!t]
\small
\begin{align}
\label{Eq-Est-7}
&f(\widetilde{\ddot{\mathbf{r}}} \mid \boldsymbol{\phi}) = \left(\frac{N^{M^2-M}}{(2 \pi)^{M^2-M} \det(\mathbf{\Sigma}(\boldsymbol{\phi}))} \right)\frac{\exp\{-N[ \asin(\widetilde{\ddot{\mathbf{r}}}) - \overline{\mathbf{J}}\mathbf{\Psi}\arcsin(\boldsymbol{\phi})]^H \mathbf{\Sigma}^{-1}(\boldsymbol{\phi}) [ \asin(\widetilde{\ddot{\mathbf{r}}}) - \overline{\mathbf{J}}\mathbf{\Psi}\arcsin(\boldsymbol{\phi})]\} }{ \prod_{n=1}^{D-1} (1-[\boldsymbol{\phi}]_n^2)^{\nu_n} (1-[\boldsymbol{\phi}]_{n+D-1}^2)^{\nu_n} }.
\end{align}\normalsize
\vspace{-1mm}
\hrulefill
\vspace{-2mm}
\end{figure*}
Hence, the asymptotic ML estimate of $\boldsymbol{\phi}$ from $\widetilde{\ddot{\mathbf{r}}}$ is obtained as follows:
\small
\begin{align}
\label{Eq-Est-8}
\widehat{\boldsymbol{\phi}} = ~ {\rm \underset{\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}}} {argmin}} ~L(\boldsymbol{\phi}),
\end{align}\normalsize
where the cost function $L(\boldsymbol{\phi})$ is given in \eqref{Eq-Est-9} at the top of the next page, in which $\nu_n = \|\VE (\mathbf{L}_n)\|^2$.
\begin{figure*}[!t]
\vspace{-4mm}
\small
\begin{align}
\label{Eq-Est-9}
L(\boldsymbol{\phi}) = \ln \det(\mathbf{\Sigma}(\boldsymbol{\phi})) - \sum_{n=1}^{D-1} \nu_n \ln (1-[\boldsymbol{\phi}]_n^2)(1-[\boldsymbol{\phi}]_{n+D-1}^2)+N[ \asin(\widetilde{\ddot{\mathbf{r}}}) - \overline{\mathbf{J}}\mathbf{\Psi}\arcsin(\boldsymbol{\phi})]^H \mathbf{\Sigma}^{-1}(\boldsymbol{\phi}) [ \asin(\widetilde{\ddot{\mathbf{r}}}) - \overline{\mathbf{J}}\mathbf{\Psi}\arcsin(\boldsymbol{\phi})].
\end{align}\normalsize
\vspace{-1mm}
\hrulefill
\vspace{-2mm}
\end{figure*}
However, the minimization of \eqref{Eq-Est-9} with respect to $\boldsymbol{\phi}$ is very involved owing to the nonlinearity of the cost function as well as the constraint $\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}}$. To make the problem computationally tractable, we first find an asymptotically equivalent approximation of $L(\boldsymbol{\phi})$ which is much simpler to minimize. Let $\boldsymbol{\gamma} \in \mathds{E}_{\boldsymbol{\gamma}} \subset \mathds{R}^{(M^2-M) \times 1}$ be the vector containing the real and imaginary parts of the elements of $\overline{\mathbf{R}}$ above its main diagonal. Clearly, $\boldsymbol{\phi}$ and $\boldsymbol{\gamma}$ are related as follows:
\small
\begin{align}
\label{Eq-Est-10}
\boldsymbol{\gamma} = \mathbf{F} \overline{\mathbf{J}} \mathbf{\Psi} \boldsymbol{\phi}, \quad \forall \boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}},
\end{align}\normalsize
where $\mathbf{F} = \frac{1}{2}\begin{bmatrix}
\ddot{\mathbf{F}}^T &
j \widetilde{\mathbf{F}}^T
\end{bmatrix}^T \in \mathds{C}^{(M^2-M) \times (M^2-M)}
$
such that for all $1 \leq p < q \leq M $:
\begin{enumerate}
\item the \small$\left((p-1)M+q-\frac{p(p+1)}{2}\right)$\normalsize-th row of $\ddot{\mathbf{F}} \in \{0,1\}^{\frac{(M^2-M)}{2} \times (M^2-M)}$ is obtained by removing the elements with indices $(i-1)M+i$ for all $1 \leq i \leq M$ from $\overline{\mathbf{e}}_p^T \otimes \overline{\mathbf{e}}_q^T + \overline{\mathbf{e}}_q^T \otimes \overline{\mathbf{e}}_p^T$ with $[\overline{\mathbf{e}}_p]_n = \delta[p-n]$ for $1 \leq n \leq M$.
\item the \small$\left((p-1)M+q-\frac{p(p+1)}{2}\right)$\normalsize-th row of $\widetilde{\mathbf{F}} \in \{0,\pm 1\}^{\frac{(M^2-M)}{2} \times (M^2-M)}$ is obtained by removing the elements with indices $(i-1)M+i$ for all $1 \leq i \leq M$ from $\overline{\mathbf{e}}_p^T \otimes \overline{\mathbf{e}}_q^T - \overline{\mathbf{e}}_q^T \otimes \overline{\mathbf{e}}_p^T$ with $[\overline{\mathbf{e}}_p]_n = \delta[p-n]$ for $1 \leq n \leq M$.
\end{enumerate}
\begin{lem}
\label{lem-new-2}
The matrices $\mathbf{F}$, $\mathbf{\Psi}$ and $\overline{\mathbf{J}}$ are full rank.
\end{lem}
\begin{proof}
See Appendix \ref{app-new-1}.
\end{proof}
The mapping from $\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}}$ to $\boldsymbol{\gamma} \in \mathds{E}_{\boldsymbol{\gamma}}$ is one-to-one due to the full rank property of $\mathbf{F}$, $\mathbf{\Psi}$ and $\overline{\mathbf{J}}$. Hence, it is possible to equivalently reparameterize \eqref{Eq-Est-9} in terms of $\boldsymbol{\gamma}$ instead of $\boldsymbol{\phi}$. This can be done by simply replacing $\boldsymbol{\phi}$ with $\mathbf{\Psi}^{-1}\overline{\mathbf{J}}^{\dagger}\mathbf{F}^{-1}\boldsymbol{\gamma}$. To simplify the computation, we make use of the fact that a consistent estimate of $\boldsymbol{\gamma}$ can be obtained as $\widetilde{\boldsymbol{\gamma}} = \mathbf{F} \widetilde{\ddot{\mathbf{r}}}$. Note that $\widetilde{\boldsymbol{\gamma}} \in \mathds{R}^{(M^2-M) \times 1}$ but $\widetilde{\boldsymbol{\gamma}} \notin \mathds{E}_{\boldsymbol{\gamma}}$ with probability one, since $\mathds{E}_{\boldsymbol{\gamma}}$ is a zero-measure
subset of $\mathds{R}^{(M^2-M) \times 1}$. Now,
considering the Taylor series expansion of $L(\boldsymbol{\gamma})$ around $\widetilde{\boldsymbol{\gamma}}$, we obtain
\small
\begin{align}
\label{Eq-Est-11}
L(\boldsymbol{\gamma}) =& L(\widetilde{\boldsymbol{\gamma}}) + (\boldsymbol{\gamma} - \widetilde{\boldsymbol{\gamma}})^H\nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}}) \nonumber\\
&+\frac{1}{2}(\widetilde{\boldsymbol{\gamma}} \!-\! \boldsymbol{\gamma})^H \nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}}) (\widetilde{\boldsymbol{\gamma}} \!-\! \boldsymbol{\gamma}) \!+\! \cdots\!,
\end{align}\normalsize
where $\nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}})$ and $\nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}})$ denote the gradient vector and the Hessian matrix of $L(\boldsymbol{\gamma})$ with respect to $\boldsymbol{\gamma}$, both evaluated at $\widetilde{\boldsymbol{\gamma}}$. The first term in \eqref{Eq-Est-11} is constant and, moreover, the higher-order terms can be neglected for large $N$ owing to the fact that $\widetilde{\boldsymbol{\gamma}}$ is a consistent estimate of $\boldsymbol{\gamma}$. Consequently, making use of \eqref{Eq-Est-10} and the fact that $\widetilde{\boldsymbol{\gamma}} = \mathbf{F} \widetilde{\ddot{\mathbf{r}}}$, we have
\small
\begin{align}
\label{Eq-Est-12}
\widehat{\boldsymbol{\phi}} \simeq ~& {\rm \underset{\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}}} {argmin}} ~( \overline{\mathbf{J}} \mathbf{\Psi} \boldsymbol{\phi} - \widetilde{\ddot{\mathbf{r}}})^H \mathbf{F}^H \nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}}) \nonumber\\
&+ \frac{1}{2}(\widetilde{\ddot{\mathbf{r}}} - \overline{\mathbf{J}} \mathbf{\Psi} \boldsymbol{\phi})^H \mathbf{F}^H \nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}})\mathbf{F} (\widetilde{\ddot{\mathbf{r}}} - \overline{\mathbf{J}} \mathbf{\Psi} \boldsymbol{\phi}).
\end{align}\normalsize
The above quadratic optimization problem is asymptotically equivalent to \eqref{Eq-Est-8} but is much more convenient to work with. Relaxing the constraint $\boldsymbol{\phi} \in \mathds{E}_{\boldsymbol{\phi}}$ to $\boldsymbol{\phi} \in \mathds{R}^{(2D-2) \times 1}$ yields the following closed-form solution for $\widehat{\boldsymbol{\phi}}$:
\small
\begin{align}
\label{Eq-Est-13}
\widehat{\boldsymbol{\phi}} \simeq & \mathbf{\Psi}^{-1}\left(\overline{\mathbf{J}}^H \mathbf{F}^H \nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1} \overline{\mathbf{J}}^H \nonumber\\
& \times \left[\mathbf{F}^H\nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}})\mathbf{F} \widetilde{\ddot{\mathbf{r}}} - \mathbf{F}^H\nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}})\right].
\end{align}\normalsize
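The algebraic structure of \eqref{Eq-Est-13} can be checked on a toy problem. The sketch below is a real-valued analogue in which random $\mathbf{B}$, $\mathbf{H}$, $\mathbf{g}$ and $\mathbf{r}$ are hypothetical stand-ins for $\overline{\mathbf{J}}\mathbf{\Psi}$, $\mathbf{F}^H \nabla_{\boldsymbol{\gamma}}^2L\,\mathbf{F}$, $\mathbf{F}^H \nabla_{\boldsymbol{\gamma}}L$ and $\widetilde{\ddot{\mathbf{r}}}$; it keeps the factor $\frac{1}{2}$ of the Taylor expansion \eqref{Eq-Est-11} and verifies that the closed-form weighted least-squares expression minimizes the quadratic surrogate:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 8, 3

# Real-valued analogue of the surrogate: minimize over phi
#   c(phi) = (B phi - r)^T g + (1/2) (r - B phi)^T H (r - B phi).
B = rng.standard_normal((m, k))
r = rng.standard_normal(m)
g = rng.standard_normal(m)
A = rng.standard_normal((m, m))
H = A @ A.T + m * np.eye(m)              # symmetric positive-definite "Hessian"

def cost(phi):
    e = r - B @ phi
    return (B @ phi - r) @ g + 0.5 * e @ H @ e

# Closed-form minimizer, mirroring the structure of the solution:
# phi_hat = (B^T H B)^{-1} B^T (H r - g).
phi_hat = np.linalg.solve(B.T @ H @ B, B.T @ (H @ r - g))

# The cost is strictly convex, so every random perturbation increases it.
worst = min(cost(phi_hat + 0.1 * rng.standard_normal(k)) - cost(phi_hat)
            for _ in range(100))
print(worst >= 0)  # True
```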
To derive the final expression for $\widehat{\boldsymbol{\phi}}$, we need to calculate $\nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}})$ and $\nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}})$. It is straightforward to derive $L(\boldsymbol{\gamma})$ by making use of \eqref{Eq-Est-10}. It follows that
\small
\begin{align}
\label{Eq-Est-14}
\nabla_{\boldsymbol{\gamma}}L(\widetilde{\boldsymbol{\gamma}}) = \mathbf{g}(\widetilde{\boldsymbol{\gamma}}),
\end{align}\normalsize
where $[\mathbf{g}(\boldsymbol{\gamma})]_n = \frac{4 [\boldsymbol{\gamma}]_n}{1-|[\boldsymbol{\gamma}]_n|^2} + \frac{\partial \ln \det(\mathbf{\Sigma}(\boldsymbol{\gamma}))}{\partial [\boldsymbol{\gamma}]_n}$, for $1 \leq n \leq M^2-M$. Additionally, the Hessian matrix at $\widetilde{\boldsymbol{\gamma}}$ is obtained as
\small
\begin{align}
\label{Eq-Est-15}
\nabla_{\boldsymbol{\gamma}}^2L(\widetilde{\boldsymbol{\gamma}}) =& N \DIAG(\widehat{\mathbf{b}}) \mathbf{F}^{-H} \widehat{\mathbf{\Sigma}}^{-1} \mathbf{F}^{-1} \DIAG(\widehat{\mathbf{b}}) + \mathbf{E}(\widetilde{\boldsymbol{\gamma}}),
\end{align}\normalsize
where $\widehat{\mathbf{\Sigma}} =\mathbf{\Sigma}(\widetilde{\boldsymbol{\gamma}})$,
$
[\widehat{\mathbf{b}}]_n =\frac{1}{\sqrt{1-|[\widetilde{\boldsymbol{\gamma}}]_n|^2}}
$, for $1 \leq n \leq M^2-M$ and $[\mathbf{E}(\boldsymbol{\gamma})]_{n,l} = \frac{2\nu_n (1+|[\boldsymbol{\gamma}]_n|^2)}{(1-|[\boldsymbol{\gamma}]_n|^2)^2} + \frac{\partial^2 \ln \det(\mathbf{\Sigma}(\boldsymbol{\gamma}))}{\partial [\boldsymbol{\gamma}]_n \partial [\boldsymbol{\gamma}]_l}$.
Inserting \eqref{Eq-Est-14} and \eqref{Eq-Est-15} into \eqref{Eq-Est-13} leads to
\footnotesize
\begin{align}
\label{Eq-Est-16}
&\widehat{\boldsymbol{\phi}} \!\simeq\! \mathbf{\Psi}^{-1} \left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\widehat{\mathbf{b}}) \mathbf{F}^{-H} \widehat{\mathbf{\Sigma}}^{-1} \mathbf{F}^{-1} \DIAG(\widehat{\mathbf{b}}) \mathbf{F} \overline{\mathbf{J}} \!+\!\overbrace{ \frac{ \mathbf{E}(\widetilde{\boldsymbol{\gamma}})}{N}}^{\hbar} \right)^{-1} \hspace{-4mm} \times \\
&\overline{\mathbf{J}}^H \bigg(\mathbf{F}^H \DIAG(\widehat{\mathbf{b}}) \mathbf{F}^{-H} \widehat{\mathbf{\Sigma}}^{-1} \mathbf{F}^{-1} \DIAG(\widehat{\mathbf{b}}) \mathbf{F} \widetilde{\ddot{\mathbf{r}}}\!+\!\underbrace{\frac{\mathbf{F}^H \mathbf{E}(\widetilde{\boldsymbol{\gamma}}) \mathbf{F} \widetilde{\ddot{\mathbf{r}}}- \mathbf{F}^H\mathbf{g}(\widetilde{\boldsymbol{\gamma}})}{N}}_{\aleph}\bigg). \nonumber
\end{align}\normalsize
In the above equation, the terms $\hbar$ and $\aleph$ can be neglected for large $N$, and thus \eqref{Eq-Est-16} may be simplified as
\small
\begin{align}
\label{Eq-Est-17}
&\widehat{\boldsymbol{\phi}} \simeq \mathbf{\Psi}^{-1} \left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\widehat{\mathbf{b}}) \mathbf{F}^{-H} \widehat{\mathbf{\Sigma}}^{-1} \mathbf{F}^{-1} \DIAG(\widehat{\mathbf{b}}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1}\nonumber\\
&\times \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\widehat{\mathbf{b}}) \mathbf{F}^{-H} \widehat{\mathbf{\Sigma}}^{-1} \mathbf{F}^{-1} \DIAG(\widehat{\mathbf{b}}) \mathbf{F} \widetilde{\ddot{\mathbf{r}}} .
\end{align}\normalsize
Hence, from \eqref{Eq-Est-5}, an enhanced consistent estimate of $\overline{\mathbf{r}} = \VE(\overline{\mathbf{R}})$ is derived as follows
\small
\begin{align}
\label{Eq-Est-18}
\widehat{\overline{\mathbf{r}}} = \mathbf{J} \begin{bmatrix} \ZEROV & \IDM_{D-1} & -j \IDM_{D-1} \\
1 & \ZEROV & \ZEROV \\
\ZEROV & \IDM_{D-1} & j \IDM_{D-1} \end{bmatrix} \begin{bmatrix} 1 \\ \widehat{\boldsymbol{\phi}}\end{bmatrix}.
\end{align}\normalsize
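The fixed block matrix in \eqref{Eq-Est-18} merely rebuilds the $2D-1$ distinct covariance values $[\mathbf{u}^{*T}, 1, \mathbf{u}^T]^T$ from $[1, \widehat{\boldsymbol{\phi}}^T]^T$, which can be verified directly (illustrative $D$ and random $\mathbf{u}$):

```python
import numpy as np

D = 4
rng = np.random.default_rng(3)
u = rng.standard_normal(D - 1) + 1j * rng.standard_normal(D - 1)
phi = np.concatenate([u.real, u.imag])   # phi = [Re{u}^T, Im{u}^T]^T

I = np.eye(D - 1)
Zc = np.zeros((D - 1, 1))
mid = np.r_[1.0, np.zeros(2 * (D - 1))][None, :]   # middle row [1, 0, ..., 0]
T = np.block([
    [Zc, I, -1j * I],    # top block:    Re{u} - j Im{u} = u*
    [mid],               # middle entry: the unit diagonal value
    [Zc, I, 1j * I],     # bottom block: Re{u} + j Im{u} = u
])
out = T @ np.concatenate([[1.0], phi])
expected = np.concatenate([u.conj(), [1.0], u])
print(np.max(np.abs(out - expected)))  # 0.0
```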
\begin{rmk}
\label{rmk-4}
Considering $\lim_{N \to \infty} \widetilde{\ddot{\mathbf{r}}} = \ddot{\mathbf{r}}$, it is readily observed from \eqref{Eq-Est-5} and \eqref{Eq-Est-17} that $\widehat{\boldsymbol{\phi}}$ is a consistent estimate of $\boldsymbol{\phi}$. This in turn implies that $\widehat{\overline{\mathbf{r}}}$ is also a consistent estimate of $\overline{\mathbf{r}}$.
\end{rmk}
To estimate DoAs using $\widehat{\overline{\mathbf{r}}}$, we resort to CAB-MUSIC \cite{Liuonebit}. Specifically, we first construct the normalized augmented covariance matrix as
\small
\begin{align}
\label{Eq-Est-19}
\widehat{\overline{\mathbf{R}}}_v = \begin{bmatrix}
\mathbf{T}_v\mathbf{J}^{\dagger}\widehat{\overline{\mathbf{r}}} & \mathbf{T}_{v-1}\mathbf{J}^{\dagger}\widehat{\overline{\mathbf{r}}} & \cdots &
\mathbf{T}_1\mathbf{J}^{\dagger}\widehat{\overline{\mathbf{r}}}
\end{bmatrix} \in \mathds{C}^{v \times v},
\end{align}\normalsize
where $\mathbf{T}_i$ is a selection matrix, defined as
\small
\begin{align}\label{Eq-Est-20}
\hspace{-2mm}\mathbf{T}_i = \begin{bmatrix} \mathbf{0}_{v \times (i+D-v-1)} & \IDM_v & \mathbf{0}_{v \times (D-i)}\end{bmatrix} \in \{0,1\}^{v \times (2D-1)}. \hspace{-2mm}
\end{align}\normalsize
It follows from the consistency of $\widehat{\overline{\mathbf{r}}}$ that
\small
\begin{align}
\label{Eq-Est-21}
\lim_{N \to \infty} \widehat{\overline{\mathbf{R}}}_v &= \begin{bmatrix}
\mathbf{T}_v\mathbf{J}^{\dagger}\overline{\mathbf{r}} & \mathbf{T}_{v-1}\mathbf{J}^{\dagger}\overline{\mathbf{r}} & \cdots &
\mathbf{T}_1\mathbf{J}^{\dagger}\overline{\mathbf{r}}
\end{bmatrix} \in \mathds{C}^{v \times v} \nonumber\\
&= \mathbf{A}_v(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}) \mathbf{A}_v^H(\boldsymbol{\theta}) + \overline{\sigma}^2 \IDM_v,
\end{align}\normalsize
where $\mathbf{A}_v(\boldsymbol{\theta}) = [\mathbf{a}_v\left(\theta_1\right), \mathbf{a}_v\left(\theta_2\right), \cdots, \mathbf{a}_v\left(\theta_K\right)] \in \mathds{C}^{v \times K}$ denotes the steering matrix of a contiguous ULA with $v$ elements located at $(0 ,\frac{\lambda}{2}, \cdots, (v-1)\frac{\lambda}{2})$.
Hence, we can apply MUSIC to $\widehat{\overline{\mathbf{R}}}_v$ to estimate the DoAs. We call the proposed method Enhanced One-bit CAB-MUSIC (EOCAB-MUSIC). Algorithm \ref{alg-1} summarizes the steps of EOCAB-MUSIC.
\begin{algorithm}[t]
\caption{EOCAB-MUSIC}
\begin{algorithmic}[1]
\qinput SLA one-bit observations, i.e., $\mathbf{X}$.
\qoutput The estimates of source DoAs.
\State Compute the sample covariance matrix of one-bit data as $\widehat{\mathbf{R}}_{\mathbf{x}} = \frac{1}{N} \mathbf{X} \mathbf{X}^H$.
\State Compute $\widetilde{\overline{\mathbf{R}}}$ from \eqref{Eq-Est-3}.
\State Form $\widetilde{\ddot{\mathbf{r}}}$ by removing the diagonal elements of $\widetilde{\overline{\mathbf{R}}}$ from $\VE(\widetilde{\overline{\mathbf{R}}})$.
\State Compute $\widetilde{\boldsymbol{\gamma}}$ from $\widetilde{\boldsymbol{\gamma}} = \mathbf{F} \widetilde{\ddot{\mathbf{r}}}$.
\State Compute $\widehat{\mathbf{b}}$ using $[\widehat{\mathbf{b}}]_n =\frac{1}{\sqrt{1-|[\widetilde{\boldsymbol{\gamma}}]_n|^2}}
$, for $1 \leq n \leq M^2-M$.
\State Compute $\widehat{\mathbf{\Sigma}}$ by using (125) and replacing $\overline{\mathbf{R}}$ with $\widetilde{\overline{\mathbf{R}}}$ in (129), (130), (133) - (137), (139)-(148), (150) and (152)-(161) given in Appendix K.
\State Compute $\widehat{\boldsymbol{\phi}}$ from \eqref{Eq-Est-17}.
\State Compute $\widehat{\overline{\mathbf{r}}}$ from \eqref{Eq-Est-18}.
\State Compute $\widehat{\overline{\mathbf{R}}}_v$ from \eqref{Eq-Est-19}.
\State Apply MUSIC to $\widehat{\overline{\mathbf{R}}}_v$ to estimate DoAs.
\end{algorithmic}
\label{alg-1}
\end{algorithm}
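As an isolated illustration of the final step of Algorithm \ref{alg-1}, the sketch below applies plain MUSIC to an exact virtual-ULA covariance of the form \eqref{Eq-Est-21} (hypothetical $v$, DoAs and powers; in EOCAB-MUSIC the matrix $\widehat{\overline{\mathbf{R}}}_v$ would come from the enhanced estimate instead):

```python
import numpy as np

v, K = 8, 2
theta_true = np.deg2rad(np.array([-10.0, 25.0]))

def steer(theta):
    # Steering matrix of a v-element half-wavelength ULA.
    return np.exp(1j * np.pi * np.arange(v)[:, None] * np.sin(theta)[None, :])

# Exact covariance of the virtual ULA: unit-power sources plus weak noise.
A = steer(theta_true)
Rv = A @ A.conj().T + 0.01 * np.eye(v)

# Noise subspace: eigenvectors of the v - K smallest eigenvalues.
eigvals, U = np.linalg.eigh(Rv)
En = U[:, : v - K]

# MUSIC pseudo-spectrum on a fine angular grid.
grid = np.deg2rad(np.linspace(-90, 90, 3601))
S = steer(grid)
spec = 1.0 / np.sum(np.abs(En.conj().T @ S) ** 2, axis=0)

# Keep the K largest local peaks of the pseudo-spectrum.
peaks = [i for i in range(1, len(grid) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
peaks = sorted(peaks, key=lambda i: spec[i], reverse=True)[:K]
est = np.sort(np.rad2deg(grid[peaks]))
print(est)  # close to [-10, 25]
```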
\begin{rmk}
The computational complexity of each step of Algorithm \ref{alg-1} is specified separately in Table \ref{table-1}, where ${\cal G}(n)$, ${\cal K}(n)$ and ${\cal Z}$ denote the complexity of the chosen algorithm for multiplying two $n$-digit numbers, the complexity of the integration in (108), and the number of grid points of the MUSIC algorithm, respectively. Considering that $D$ and $v$ are typically on the order of $M^2$ and, moreover, $n$ and $M$ are normally much smaller than ${\cal Z}$, it follows from Table \ref{table-1} that the complexity of EOCAB-MUSIC is on the order of ${\cal O}( MN + M^2 ( {\cal G}(n) ( {\cal Z} + M^4) + {\cal K}(n) M^2 ) )$. On the other hand, the implementation of OCAB-MUSIC requires only steps 1, 2, 9 and 10 of Algorithm \ref{alg-1}. Hence, its complexity is given by ${\cal O}( MN + M^2 {\cal G}(n) ( {\cal Z} + M^4) )$. Typically, we have ${\cal G}(n) ( {\cal Z} + M^4) \gg {\cal K}(n) M^2$, implying that the complexity of EOCAB-MUSIC is of almost the same order as that of OCAB-MUSIC.
\begin{table}[!]
\centering
\caption{Complexity of the steps of Algorithm \ref{alg-1}}
\begin{tabular}{ |c|c| }
\hline
Step order & Complexity\\
\hline
1 & ${\cal O}(MN)$\\
\hline
2 & ${\cal O}({\cal G}(n) \sqrt{n} M^2)$ \\
\hline
3 & ${\cal O}(M^2)$ \\
\hline
4 & ${\cal O}({\cal G}(n) M^4)$\\
\hline
5 & ${\cal O}({\cal G}(n) M^2)$\\
\hline
6 & ${\cal O}({\cal K}(n) M^4)$\\
\hline
7 & ${\cal O}({\cal G}(n) (DM^4+M^6+D^3) )$\\
\hline
8 & ${\cal O}({\cal G}(n) D^2M^2)$\\
\hline
9 & ${\cal O}({\cal G}(n) M^2v(2D-1+v))$\\
\hline
10 & ${\cal O}({\cal G}(n) ({\cal Z}M^2 + M^3) )$\\
\hline
\end{tabular}
\label{table-1}
\end{table}
\end{rmk}
\subsection{Asymptotic Performance Analysis}
\label{sec:MSE}
In this section, we investigate the asymptotic performance of the proposed estimator through the derivation of a closed-form expression for the second-order statistics of the asymptotic distribution (as $N \to \infty$) of the DoA estimation errors. Our main results are summarized in Theorem \ref{Theo-6}, Corollary \ref{Col-1} and Theorem \ref{Theo-7}.
\begin{lem}
\label{lem-2}
$\widehat{\boldsymbol{\theta}}$ obtained by EOCAB-MUSIC is a consistent estimate of $\boldsymbol{\theta}$ if $K \leq v- 1$.
\end{lem}
\begin{proof}
See Appendix \ref{App-F}.
\end{proof}
\begin{theo}
\label{Theo-6}
The closed-form expression for the covariance of the asymptotic distribution (as $N \to \infty$) of the DoA estimation errors obtained by EOCAB-MUSIC is given by
\small
\begin{align}
\label{Eq-cov}
&{\cal E}_{\theta_{k_1}, \theta_{k_2}} = \EX\{(\theta_{k_1} - \widehat{\theta}_{k_1})(\theta_{k_2} - \widehat{\theta}_{k_2})^*\} \\
&= \frac{ (\sigma^2+\sum_{k=1}^K p_k)^2}{ N \pi^2 p_{k_1} p_{k_2} q_{k_1} q_{k_2} \cos \theta_{k_1} \cos \theta_{k_2} }\nonumber\\
&\times \Re\{\mathbf{z}^T_{k_1} \overline{\mathbf{T}} (\overline{\mathbf{J}}^H \mathbf{W} \overline{\mathbf{J}})^{-1} \overline{\mathbf{J}}^H \mathbf{W} \mathbf{\Gamma} \mathbf{W} \overline{\mathbf{J}} (\overline{\mathbf{J}}^H \mathbf{W} \overline{\mathbf{J}})^{-1} \overline{\mathbf{T}}^H \mathbf{z}_{k_2}^* \},\nonumber
\end{align}\normalsize
where
\small
\begin{align}
\mathbf{z}_k =& \boldsymbol{\beta}_k \otimes \boldsymbol{\alpha}_k,\\
\boldsymbol{\beta}_k =& \Pi^{\perp}_{\scaleto{\mathbf{A}_v(\boldsymbol{\theta})\mathstrut}{5pt}} \DIAG(\mathbf{v}) \mathbf{a}_v(\theta_k),\\
\boldsymbol{\alpha}_k =& \mathbf{A}_v^{\dagger T}(\boldsymbol{\theta}) \boldsymbol{\imath}_k,
\end{align}
\begin{align}
q_k =& \mathbf{a}_v^H(\theta_k) \DIAG(\mathbf{v}) \Pi^{\perp}_{\mathbf{A}_v} \DIAG(\mathbf{v}) \mathbf{a}_v(\theta_k),\\
\label{Eq-Est-22}
\mathbf{W} =& \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F},\\
\label{Eq-Est-23}
[\mathbf{\Gamma}]_{p,q} =& \frac{1}{2} \bigg(\sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_q\}]^2}\\
&+ \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_q\}]^2}\bigg) \Re\{[\mathbf{\Sigma}]_{p,q}\} \nonumber\\
&+\frac{j}{2} \bigg(\sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_q\}]^2} \nonumber\\
&+ \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_q\}]^2}\bigg) \Im\{[\mathbf{\Sigma}]_{p,q}\}, \nonumber
\end{align}\normalsize
with $\mathbf{v} = [0,1,2, \cdots, v-1]^T$, $
[\mathbf{b}]_n =\frac{1}{\sqrt{1-|[\boldsymbol{\gamma}]_n|^2}}
$ for $1 \leq n \leq M^2-M$, $\mathbf{\Sigma} \in \mathds{C}^{(M^2-M) \times (M^2-M)}$ as given in Appendix K (kindly refer to the supplementary document), $\overline{\mathbf{T}} \in \mathds{C}^{v^2 \times (2D-2)}$ as defined in \eqref{Eq-app-G-9-1} in Appendix \ref{app-G}, and $\boldsymbol{\imath}_k$ being the $k^{\rm th}$ column of $\IDM_K$.
\end{theo}
\begin{proof}
See Appendix \ref{app-G}.
\end{proof}
\begin{Col}
\label{Col-1}
The asymptotic MSE expression (as $N \to \infty$) for the DoA estimates obtained by EOCAB-MUSIC is given by
\small
\begin{align}
\label{Eq-mse}
{\cal E}_{\theta_k} &= \EX\{(\theta_{k} - \widehat{\theta}_k)^2\} = \frac{(\sigma^2+\sum_{k'=1}^K p_{k'})^2}{N \pi^2 p_k^2 q_k^2 \cos^2 \theta_k} \\
&\times \Re\{\mathbf{z}^T_k \overline{\mathbf{T}} (\overline{\mathbf{J}}^H \mathbf{W} \overline{\mathbf{J}})^{-1} \overline{\mathbf{J}}^H \mathbf{W} \mathbf{\Gamma} \mathbf{W} \overline{\mathbf{J}} (\overline{\mathbf{J}}^H \mathbf{W} \overline{\mathbf{J}})^{-1} \overline{\mathbf{T}}^H \mathbf{z}_k^* \}.\nonumber
\end{align}\normalsize
\end{Col}
\begin{Col}
\label{Col-2}
The covariance of the asymptotic distribution (as $N \to \infty$) of the DoA estimation errors and the asymptotic MSE expression (as $N \to \infty$) for the one-bit DoA estimator given in \cite{Liuonebit}, referred to as One-bit CAB-MUSIC (OCAB-MUSIC), are obtained by replacing $\mathbf{W}$ with $\IDM_{M^2-M}$ in \eqref{Eq-cov} and \eqref{Eq-mse}, respectively.
\end{Col}
\begin{proof}
See Appendix \ref{app-I}
\end{proof}
\begin{rmk}
\label{rmk-5}
It is concluded from Corollary \ref{Col-1} and Corollary \ref{Col-2} that, similar to Infinite-bit Co-Array-Based MUSIC (ICAB-MUSIC) \cite{Wang2017}, the MSEs of EOCAB-MUSIC and OCAB-MUSIC depend on both the physical and the virtual array geometries through $\mathbf{A}_v(\theta)$ and $\overline{\mathbf{R}}$, respectively.
\end{rmk}
\begin{rmk}
\label{rmk-6}
Another interesting implication of Corollary \ref{Col-1} is that the MSEs of EOCAB-MUSIC and OCAB-MUSIC reduce at the same rate as that of ICAB-MUSIC \cite{Wang2017} with respect to $N$; i.e. ${\cal E}_{\theta_k} \propto \frac{1}{N}$ for both.
\end{rmk}
\begin{rmk}
\label{rmk-7}
It is readily seen from the definition that $\overline{\mathbf{r}}$ is a function of the SNR rather than of $\mathbf{p}$ and $\sigma^2$ individually. This indicates that $\mathbf{W}$ and $\mathbf{\Gamma}$ are also functions of the SNR alone. Further, multiplying the numerator and denominator of $(\sigma^2\!+\!\sum_{k'=1}^K p_{k'})^2/p_k^2$ by $1/\sigma^4$ reformulates it as a function of the SNR. These observations imply that the MSEs of EOCAB-MUSIC and OCAB-MUSIC are functions of the SNR rather than of $\mathbf{p}$ and $\sigma^2$ individually. This fact can also be deduced directly from the system model, where we have
\small
\begin{align}
\hspace{-1mm}[\mathbf{x}(t)]_m \!=\! \frac{1}{\sqrt{2}} \SGN\left(\Re\{[\mathbf{y}(t)]_m\} \right)
\!+\! \frac{j}{\sqrt{2}}~ \SGN\left(\Im\{[\mathbf{y}(t)]_m \}\right)\nonumber\\
= \frac{1}{\sqrt{2}} \SGN\left(\Re\{\frac{[\mathbf{y}(t)]_m}{\sigma}\} \right)
\!+\! \frac{j}{\sqrt{2}}~ \SGN\left(\Im\{\frac{[\mathbf{y}(t)]_m}{\sigma} \}\right),
\end{align}\normalsize
for $\sigma > 0$. This implies that, without loss of generality, we can consider the power of each source equal to the SNR for that source and the noise variance equal to $1$.
\end{rmk}
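The invariance used in Remark \ref{rmk-7} can be verified numerically. The sketch below applies the complex one-bit quantizer to toy Gaussian data (not the paper's array model) and checks that the output is unchanged when the input is divided by any $\sigma > 0$:

```python
# Minimal numerical check: sgn(Re{y/sigma}) + j sgn(Im{y/sigma}) equals
# sgn(Re{y}) + j sgn(Im{y}) for every sigma > 0, so the one-bit data
# depend on the SNR only, not on p and sigma^2 separately.
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)

def one_bit(y):
    # complex one-bit quantizer: (sgn(Re y) + j sgn(Im y)) / sqrt(2)
    return (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)

for sigma in (0.1, 1.0, 7.3):
    assert np.array_equal(one_bit(y), one_bit(y / sigma))
print("one-bit output is invariant to scaling by sigma > 0")
```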
\begin{theo}
\label{Theo-7}
Assume all sources have equal power $p$ and define ${\rm SNR}=p/\sigma^2$. Then, for a sufficiently large SNR, the MSE of EOCAB-MUSIC converges to the following constant value:
\small
\begin{align}
&\lim_{{\rm SNR} \to \infty} {\cal E}_{\theta_k} = \frac{K^2}{N \pi^2 q_k^2 \cos^2 \theta_k} \times \\
& \Re\{\mathbf{z}^T_k \overline{\mathbf{T}} (\overline{\mathbf{J}}^H \mathbf{W}_{\infty} \overline{\mathbf{J}})^{-1} \overline{\mathbf{J}}^H \mathbf{W}_{\infty} \mathbf{\Gamma}_{\infty} \mathbf{W}_{\infty} \overline{\mathbf{J}} (\overline{\mathbf{J}}^H \mathbf{W}_{\infty} \overline{\mathbf{J}})^{-1} \overline{\mathbf{T}}^H \mathbf{z}_k^* \}\!>\!0,\nonumber
\end{align}\normalsize
where $\mathbf{W}_{\infty}$ and $\mathbf{\Gamma}_{\infty}$ are obtained by replacing $\overline{\mathbf{R}}$, $\ddot{\mathbf{r}}$ and $\boldsymbol{\gamma}$ in the definitions of $\mathbf{W}$ and $\mathbf{\Gamma}$ (kindly refer to Theorem \ref{Theo-6}) with $\overline{\mathbf{R}}_{\infty}$, $\ddot{\mathbf{r}}_{\infty}$ and $\boldsymbol{\gamma}_{\infty}$, respectively, where
\small
\begin{align}
\overline{\mathbf{R}}_{\infty} = \frac{1}{K} \mathbf{A}(\boldsymbol{\theta}) \mathbf{A}^H(\boldsymbol{\theta}) + (1-\frac{1}{K}) \IDM_M,
\end{align}\normalsize
$\boldsymbol{\gamma}_{\infty}$ is the $(M^2-M) \times 1$ vector containing the real and imaginary parts of the elements of $\overline{\mathbf{R}}_{\infty}$ above its main diagonal, and $\ddot{\mathbf{r}}_{\infty} = \mathbf{\Psi}^{-1}\overline{\mathbf{J}}^{\dagger}\mathbf{F}^{-1} \boldsymbol{\gamma}_{\infty}$.
\end{theo}
\begin{proof}
See Appendix \ref{app-J}.
\end{proof}
\begin{rmk}
\label{rmk-8}
It follows from Theorem \ref{Theo-7} that it is not possible to make the MSEs of EOCAB-MUSIC and OCAB-MUSIC arbitrarily small by increasing the SNR.
\end{rmk}
\section{Simulation Results}
\label{sec:simulations}
In this section, we provide some numerical results to validate the analytical results obtained in previous sections as well as to assess the performance of the proposed DoA estimator. Specifically, we will show that the proposed estimator yields better performance in terms of estimation accuracy and resolution compared to the approach given in \cite{Liuonebit}. In the rest of this section, we will refer to: \begin{enumerate*} \item the CRB for DoA estimation from infinite-bit measurements as Infinite-bit CRB (I-CRB), whose expression is given in Remark \ref{rmk-new-new-2}; \item the pessimistic approximation of the CRB for DoA estimation from one-bit measurements as One-bit CRB (O-CRB);
\item CAB-MUSIC using infinite-bit measurements as Infinite-bit CAB-MUSIC (ICAB-MUSIC); \item the DoA estimator given in \cite{Liuonebit} as one-bit CAB-MUSIC (OCAB-MUSIC); \item the proposed estimator in this paper as Enhanced One-bit CAB-MUSIC (EOCAB-MUSIC) \end{enumerate*}.
\subsection{General Set-up}
In all experiments, each simulated point has been computed from $5000$ Monte Carlo repetitions. Unless the source locations are specified for a particular result, it is assumed that the $K$ independent sources are equally spaced in the angular domain $[-\ang{60}, \ang{60}]$ such that $\theta = -\ang{60}$ when $K=1$. Further, all sources are assumed to have equal powers, i.e., $p_k = p$ for all $k$, and the SNR in dB is defined as $10 \log_{10} \frac{p}{\sigma^2}$. For our numerical investigation, we use four different types of arrays with $M=10$ physical elements and the following geometries:
\small
\begin{align}
\label{nested}
&\mathds{M}_{\text{nested}}: \left\{1, 2, 3, 4, 5, 6, 12, 18, 24, 30\right\}, \\
\label{co-prime}
&\mathds{M}_{\text{co-prime}}: \left\{0, 3, 5, 6, 9, 10, 12, 15, 20, 25\right\}, \\
\label{MRA}
&\mathds{M}_{\text{MRA}}: \left\{0, 1, 3, 6, 13, 20, 27, 31, 35, 36 \right\},\\
\label{ULA}
&\mathds{M}_{\text{ULA}}: \left\{0, 1, 2, \cdots, 9\right\}.
\end{align}\normalsize
These arrays generate the difference co-arrays:
\small
\begin{align}
\label{co-nested}
&\mathds{D}_{\text{nested}}: \left\{0,1,2, \cdots, 29\right\}, \\
\label{co-co-prime}
&\mathds{D}_{\text{co-prime}}: \left\{0,1, 2, \cdots, 22, 25 \right\}, \\
\label{co-MRA}
&\mathds{D}_{\text{MRA}}: \left\{0,1, 2, \cdots, 36 \right\},
\\
\label{co-ULA}
&\mathds{D}_{\text{ULA}}: \left\{0,1, 2, \cdots, 9 \right\}.
\end{align}\normalsize
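As a sanity check, the nested, MRA and ULA co-arrays listed above can be reproduced directly from the sensor positions with a few lines of Python (positions copied from the geometries above; this is an illustrative check, not part of the estimator):

```python
# Sketch: compute the one-sided difference co-array of a sensor
# geometry, i.e. the set of all non-negative pairwise differences.

def diff_coarray(positions):
    # all differences n1 - n2 with n1 >= n2
    return {a - b for a in positions for b in positions if a >= b}

nested = {1, 2, 3, 4, 5, 6, 12, 18, 24, 30}
mra    = {0, 1, 3, 6, 13, 20, 27, 31, 35, 36}
ula    = set(range(10))

assert diff_coarray(nested) == set(range(30))   # {0, ..., 29}
assert diff_coarray(mra)    == set(range(37))   # {0, ..., 36}
assert diff_coarray(ula)    == set(range(10))   # {0, ..., 9}
print("nested, MRA and ULA co-arrays match the listed sets")
```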
Further, to implement MUSIC, we generate the grid from $-\ang{90}$ to $\ang{90}$ with a step size of $\ang{0.001}$.
Alternatively, gridding can be avoided altogether by using root-MUSIC.
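Under the conventions stated above, the search grid and the equally spaced source placement can be sketched as follows (a minimal Python illustration; `source_doas` is a hypothetical helper name):

```python
# Sketch of the simulation set-up: a uniform MUSIC search grid on
# [-90, 90] degrees with 0.001-degree steps, and K sources equally
# spaced in [-60, 60] degrees (a single source falls at -60 degrees).
import numpy as np

grid = np.linspace(-90.0, 90.0, 180_001)   # step = 0.001 degrees
assert np.isclose(grid[1] - grid[0], 1e-3)

def source_doas(K):
    # hypothetical helper reproducing the equal-spacing convention
    return np.linspace(-60.0, 60.0, K)

print(source_doas(1))   # [-60.]
print(source_doas(5))   # [-60. -30.   0.  30.  60.]
```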
\begin{figure*}[!]
\centering
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsNSnR3M10K5.pdf}%
\label{Fig:2-a}}
\hfil
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsNSnR3M10K12.pdf}%
\label{Fig:2-b}}
\caption{RMSE in degrees for $\theta_2$ versus $N$ for a nested array with $M=10$ elements and configuration given in (\ref{nested}), ${\rm SNR}= 3$ dB, and: (a) $K=5<M$; (b) $K=12>M$.}
\label{Fig:2}
\vspace{-4mm}
\end{figure*}
\begin{figure}[!]
\centering
\includegraphics[width=0.7\columnwidth]{MSEvsNM10K3UNEQSNR8.pdf}
\caption{RMSE in degrees for $\theta_2$ versus $N$ for a nested array with $M=10$ elements and configuration given in (\ref{nested}) when $K=3$, $\theta_1=\ang{2}$, $\theta_2=\ang{3}$, $\theta_3=\ang{75}$, ${\rm SNR}_1= 20$ dB, ${\rm SNR}_2= 8$ dB and ${\rm SNR}_3= 22$ dB.}
\label{Fig:8}
\end{figure}
\subsection{MSE vs. the Number of Snapshots}
Fig. \ref{Fig:2} depicts the Root-Mean-Square Error (RMSE) for $\theta_2$ in degrees versus the number of snapshots when the nested array in (\ref{nested}) is used. The SNR is assumed to be $3$ dB. In addition, noting $M=10$, two different scenarios are considered: (a) $K=5<M$, and (b) $K=12>M$. Fig. \ref{Fig:2} illustrates a close agreement between the numerical simulations and the analytical expressions derived for the RMSEs of OCAB-MUSIC and EOCAB-MUSIC when about $200$ or more snapshots are available. Further, a considerable gap is observed between the performance of OCAB-MUSIC and that of EOCAB-MUSIC. For instance, at $N=400$, Figs. \ref{Fig:2-a} and \ref{Fig:2-b} show a performance gain of roughly $3$ dB and $1$ dB, respectively, in terms of the RMSE when EOCAB-MUSIC is used. It is also observed that EOCAB-MUSIC performs as well as ICAB-MUSIC when $K=5<M$. Further, the RMSE of EOCAB-MUSIC is very close to the O-CRB when $K=5<M$, but a gap between them appears when $K=12>M$.
Fig. \ref{Fig:2} also shows that when a small number of snapshots is available, e.g., fewer than $1000$, all estimators suffer substantial performance degradation. This performance loss is explained by the subspace swap arising from the inaccurate estimate of the normalized covariance matrix of $\mathbf{y}(t)$, i.e., $\overline{\mathbf{R}}$, in this case. Nevertheless, the proposed estimator still outperforms OCAB-MUSIC, even in the low-snapshot regime.
Fig. \ref{Fig:8} depicts the RMSE for $\theta_2$ in degrees versus the number of snapshots when $K=3$ and the source powers are unequal. Specifically, it is assumed that $\theta_1=\ang{2}$, $\theta_2=\ang{3}$, $\theta_3=\ang{75}$, ${\rm SNR}_1= 20$ dB, ${\rm SNR}_2= 8$ dB and ${\rm SNR}_3= 22$ dB. Comparing Fig. \ref{Fig:2} with Fig. \ref{Fig:8} reveals that a large difference between the SNRs of the closely-spaced source signals does not have a meaningful impact on the relative asymptotic performance of ICAB-MUSIC, OCAB-MUSIC and EOCAB-MUSIC. However, as the difference between the SNRs increases, OCAB-MUSIC needs more snapshots to achieve its asymptotic performance than EOCAB-MUSIC and ICAB-MUSIC.
\subsection{MSE vs. SNR}
Fig. \ref{Fig:3} shows the RMSE for $\theta_2$ in degrees versus the SNR for the same setup used for Fig. \ref{Fig:2}. The number of snapshots is $N=500$. It is seen in Figs. \ref{Fig:3-a} and \ref{Fig:3-b} that the RMSEs of OCAB-MUSIC and EOCAB-MUSIC match their asymptotic analytical expressions given in Corollary \ref{Col-1} and Corollary \ref{Col-2} very well.
Fig. \ref{Fig:3} demonstrates that the I-CRB tends to decay to zero as the SNR increases when $K=5<M$, while it saturates when $K=12>M$. However, as opposed to the I-CRB, the O-CRB tends to converge to a constant non-zero value in the high-SNR regime for both $K=5<M$ and $K=12>M$. This behavior of the O-CRB was already predicted by Theorem \ref{Theo-5}. In addition, as shown in Theorem \ref{Theo-7}, the RMSEs of OCAB-MUSIC and EOCAB-MUSIC also converge to a constant non-zero value as the SNR increases for both $K=5<M$ and $K=12>M$.
\begin{figure*}[!]
\centering
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsSNRM10K5N500.pdf}%
\label{Fig:3-a}}
\hfil
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsSNRM10K12N500.pdf}%
\label{Fig:3-b}}
\caption{RMSE in degrees for $\theta_2$ versus SNR when the source powers are equal for a nested array with $M=10$ elements and configuration given in (\ref{nested}), $N=500$, and: (a) $K=5<M$; (b) $K=12>M$.}
\label{Fig:3}
\vspace{-3mm}
\end{figure*}
We observe from Fig. \ref{Fig:3} that EOCAB-MUSIC performs better than OCAB-MUSIC in both scenarios $K=5<M$ and $K=12>M$. For example, at ${\rm SNR}=5$ dB, EOCAB-MUSIC leads to performance gains of about $3.7$ dB and $1.15$ dB in terms of RMSE compared to OCAB-MUSIC. Further, it is seen that EOCAB-MUSIC even outperforms ICAB-MUSIC in the high-SNR regime when $K=5<M$. Another interesting observation is that the O-CRB is lower than or equal to the RMSE of ICAB-MUSIC.
Fig. \ref{Fig:6} shows the RMSE for $\theta_2$ in degrees versus the SNR when the source powers are unequal and the DoAs are not exactly on the grid, as opposed to Fig. \ref{Fig:3}. The number of snapshots is $N=500$. In the case of $K=5<M$, the sources are located at $\theta_1= \ang{-49.4551}, \theta_2=\ang{-30.1443}, \theta_3=\ang{-2.4525}, \theta_4=\ang{26.8293}$ and $\theta_5=\ang{56.5149}$. Further, the source SNRs are assumed to be ${\rm SNR}_1= 0.75 \times {\rm SNR}_2, {\rm SNR}_3=1.22 \times {\rm SNR}_2, {\rm SNR}_4= 0.92 \times {\rm SNR}_2$ and ${\rm SNR}_5= 0.66 \times {\rm SNR}_2$, while ${\rm SNR}_2$ varies from $10$ dB to $20$ dB as shown in Fig. \ref{Fig:6-a}. Further, in the case of $K=12>M$, the sources are located at $\theta_1= \ang{-56.3351}, \theta_2=\ang{-36.2628}, \theta_3=\ang{-19.9004}, \theta_4=\ang{-2.4093}, \theta_5=\ang{0.0027}, \theta_6=\ang{13.1840}, \theta_7=\ang{23.8495}, \theta_8=\ang{25.8044}, \theta_9=\ang{29.2889}, \theta_{10}=\ang{40.9107},
\theta_{11}=\ang{48.4465}$ and $\theta_{12}=\ang{48.5667}$. The source SNRs are assumed to be ${\rm SNR}_1= 1.34 \times {\rm SNR}_2, {\rm SNR}_3=0.84 \times {\rm SNR}_2, {\rm SNR}_4= 0.83 \times {\rm SNR}_2, {\rm SNR}_5= 0.67 \times {\rm SNR}_2, {\rm SNR}_6= 0.69 \times {\rm SNR}_2, {\rm SNR}_7= 0.95 \times {\rm SNR}_2, {\rm SNR}_8= 0.61 \times {\rm SNR}_2, {\rm SNR}_9= 0.79 \times {\rm SNR}_2, {\rm SNR}_{10}= 0.56 \times {\rm SNR}_2, {\rm SNR}_{11}= 0.82 \times {\rm SNR}_2$ and ${\rm SNR}_{12}= 0.88 \times {\rm SNR}_2$, while ${\rm SNR}_2$ varies from $10$ dB to $20$ dB as shown in Fig. \ref{Fig:6-b}. Comparing Fig. \ref{Fig:6} with Fig. \ref{Fig:3} reveals that unequal source powers do not have a remarkable impact on the estimation accuracy, particularly in the high-SNR regime.
\begin{figure*}[!]
\centering
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsSNRM10K5N500uq.pdf}%
\label{Fig:6-a}}
\hfil
\subfloat[]{\includegraphics[width=0.7\columnwidth]{MSEvsSNRM10K12N500uq.pdf}%
\label{Fig:6-b}}
\caption{RMSE in degrees for $\theta_2$ versus SNR when the source powers are unequal for a nested array with $M=10$ elements and configuration given in (\ref{nested}), $N=500$, and: (a) $K=5<M$; (b) $K=12>M$.}
\label{Fig:6}
\vspace{-3mm}
\end{figure*}
\subsection{CRB vs. the Number of Source Signals}
Fig. \ref{Fig:4} plots the I-CRB and the O-CRB for $\theta_2$ in degrees versus the number of source signals for ${\rm SNR} = 3~{\rm dB}$, $N=500$ and the different types of arrays given in \eqref{nested}, \eqref{co-prime}, \eqref{MRA} and \eqref{ULA}. The values of $D$ and $v$ for the different types of arrays are as follows: \begin{enumerate*}
\item MRA: $D=37$ and $v=37$; \item nested array: $D=30$ and $v=30$; \item co-prime array: $D=26$ and $v=23$; \item ULA: $D=10$ and $v=10$ \end{enumerate*}. Fig. \ref{Fig:4} indicates that both the I-CRB and the O-CRB increase as the number of source signals increases. Moreover, it is observed that the I-CRB and the O-CRB are quite small for all the SLAs as long as $1 \leq K \leq v-1$, but they escalate dramatically when $K$ approaches values equal to or larger than $D$. This observation complies with Theorem \ref{Theo-2}, which indicates that the DoA estimation problem is globally identifiable when $1\leq K \leq v-1$ and globally non-identifiable when $K \geq D$.
\begin{figure}[!]
\centering
\includegraphics[width=0.7\columnwidth]{MSEvsKM10N500SNR3.pdf}
\caption{The CRB versus $K$ for the various array configurations given in \eqref{nested}--\eqref{ULA}, $N=500$ and ${\rm SNR}= 3$ dB.}
\label{Fig:4}
\end{figure}
\subsection{Resolution Probability}
Fig. \ref{Fig:5} depicts the probability of resolution versus the source separation for ICAB-MUSIC, EOCAB-MUSIC and OCAB-MUSIC when the nested array given in (\ref{nested}) is employed. The number of snapshots and the SNR are $N=500$ and $0$ dB, respectively. In addition, we consider two sources with equal powers, located at $\theta_1=\ang{20}-\frac{\Delta \theta} {2}$ and $\theta_2=\ang{20}+\frac{\Delta \theta}{2}$. We define the two sources as resolvable if ${\rm \underset{\Scale[0.5]{i \in \{1,2\}}} {max}}|\hat{\theta}_i-\theta_i|<\frac{\Delta \theta}{2}$ \cite{Kaveh1986}. According to this definition and making use of the two-dimensional Chebyshev bound \cite{Lal1955}, the probability of resolution can be lower bounded as
\small
\begin{align}
\label{Eq-ProRes}
&\mathbb{P}({\rm \underset{\Scale[0.5]{i \in \{1,2\}}} {max}}|\hat{\theta}_i-\theta_i|<\frac{\Delta \theta}{2}) \\
&= \mathbb{P}(|\hat{\theta}_1-\theta_1|<\frac{\Delta \theta}{2}, |\hat{\theta}_2-\theta_2|<\frac{\Delta \theta}{2}) \geq \nonumber 1-\frac{2[{\cal E}_{\theta_1}+{\cal E}_{\theta_2} ]}{\Delta \theta^2}\\
&+\frac{2\sqrt{{\cal E}^2_{\theta_1}+{\cal E}^2_{\theta_2}+2{\cal E}_{\theta_1}{\cal E}_{\theta_2}-4{\cal E}^2_{\theta_1,\theta_2}}}{\Delta \theta ^2}, \nonumber
\end{align}\normalsize
where ${\cal E}_{\theta_1}$, ${\cal E}_{\theta_2}$ and ${\cal E}_{\theta_1,\theta_2}$ are given in \eqref{Eq-mse} and \eqref{Eq-cov}. The analytical expression on the right-hand side of \eqref{Eq-ProRes} enables us to predict the minimum source separation required for achieving a particular probability of resolution. For example, Fig. \ref{Fig:5} shows the predicted values of the minimum source separation needed to achieve a probability of resolution greater than $0.9$, obtained from \eqref{Eq-ProRes}, for ICAB-MUSIC, OCAB-MUSIC and EOCAB-MUSIC. It is observed that the predicted values for ICAB-MUSIC, EOCAB-MUSIC and OCAB-MUSIC, which are respectively $\Delta \theta = \ang{1.2}$, $\Delta \theta = \ang{1.4}$ and $\Delta \theta = \ang{1.5}$, are in good agreement with the values obtained from the numerical simulations, which are respectively $\Delta \theta = \ang{1.1}$, $\Delta \theta = \ang{1.2}$ and $\Delta \theta = \ang{1.3}$. Additionally, Fig. \ref{Fig:5} demonstrates that the resolution performance of EOCAB-MUSIC is superior to that of OCAB-MUSIC, while ICAB-MUSIC outperforms both of them.
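For concreteness, the right-hand side of \eqref{Eq-ProRes} can be evaluated directly. The following Python sketch uses illustrative MSE and covariance values, not those produced by the simulations:

```python
# Sketch: Chebyshev-type lower bound on the resolution probability from
# the asymptotic MSEs e1, e2 and the error covariance e12; delta is the
# source separation. Input values below are illustrative only.
import math

def resolution_bound(e1, e2, e12, delta):
    # 1 - 2 (e1 + e2)/delta^2
    #   + 2 sqrt(e1^2 + e2^2 + 2 e1 e2 - 4 e12^2)/delta^2
    d2 = delta**2
    root = math.sqrt(e1**2 + e2**2 + 2.0 * e1 * e2 - 4.0 * e12**2)
    return 1.0 - 2.0 * (e1 + e2) / d2 + 2.0 * root / d2

# the bound tightens toward 1 as the separation grows
b_small = resolution_bound(0.01, 0.01, 0.005, 1.0)
b_large = resolution_bound(0.01, 0.01, 0.005, 2.0)
print(b_small, b_large)
assert b_small < b_large <= 1.0
```

Sweeping `delta` with the analytical MSEs from \eqref{Eq-mse} and \eqref{Eq-cov} plugged in is how the minimum separation for a target resolution probability can be predicted.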
\begin{figure}[!]
\centering
\includegraphics[width=0.7\columnwidth]{ProResSNR0N500.pdf}
\caption{Probability of resolution versus source separation in degree for a nested array with $M=10$ elements and configuration given in (\ref{nested}), $N=500$ and ${\rm SNR}= 0$ dB.}
\label{Fig:5}
\vspace{-2mm}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we considered the problem of DoA estimation from one-bit measurements received by an SLA. We showed that the identifiability condition for the DoA estimation problem from one-bit SLA data is equivalent to that for the case when DoAs are estimated from infinite-bit unquantized measurements. Then, we derived a pessimistic approximation of the corresponding CRB. This pessimistic CRB was used as a benchmark for assessing the performance of one-bit DoA estimators. Further, it provides valuable insights into the performance limits of DoA estimation from one-bit quantized data. For example, it was shown that the DoA estimation error in the one-bit scenario decreases at the same rate as in the infinite-bit case with respect to the number of samples and, moreover, that it converges to a constant value as the SNR increases. We also proposed a new algorithm for estimating DoAs from one-bit quantized data. We investigated the analytical performance of the proposed method by deriving a closed-form expression for the second-order statistics of its asymptotic distribution (for a large number of snapshots) and showed that it outperforms the existing algorithms in the literature. Numerical simulations were provided to validate the analytical derivations and corroborate the improvement in estimation performance.
\appendices
\section{Proof of Theorem \ref{Theo-1}}\label{app-A}
We first prove the sufficiency. Assume that $\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ is identifiable from $\mathbf{Y}$. This implies that $f(\mathbf{Y} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) \neq f(\mathbf{Y} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$ for any arbitrary values of $\breve{\boldsymbol{\theta}} \neq \boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$, $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$.
Hence, considering $\mathbf{y}(0), \mathbf{y}(1), \cdots, \mathbf{y}(N-1)$ are independent and identically distributed with $\mathbf{y}(t) \sim {\cal CN}(\ZEROV, \mathbf{R})$, we have
\small
\begin{align}
\label{Eq-app-A-1}
\mathbf{A}\!(\boldsymbol{\theta}_0)\DIAG(\mathbf{p})
\mathbf{A}\!^H\!(\boldsymbol{\theta}_0)\!+\!\sigma^2\IDM_M \!\neq\! \mathbf{A}\!(\breve{\boldsymbol{\theta}})\DIAG(\breve{\mathbf{p}})
\mathbf{A}\!^H\!(\breve{\boldsymbol{\theta}})\!+\!\breve{\sigma}^2\IDM_M,
\end{align}\normalsize
for all $\breve{\boldsymbol{\theta}} \neq \boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$, $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$.
In what follows, we employ the method of proof by contradiction to prove the sufficiency. In particular, we assume that
$\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ is non-identifiable from $\mathbf{X}$. Hence, there exists a $\breve{\boldsymbol{\theta}} \neq \boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ at which $f(\mathbf{X} \mid \boldsymbol{\theta}_0, \widetilde{\mathbf{p}}, \widetilde{\sigma}^2) = f(\mathbf{X} \mid \breve{\boldsymbol{\theta}}, \dot{\mathbf{p}}, \dot{\sigma}^2)$ for some values of $\widetilde{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\dot{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\widetilde{\sigma}^2$ and $\dot{\sigma}^2$. It is readily clear from assumption {\bf A4} and \eqref{model-eq-6} that $\EX\{\mathbf{x}(t_1) \mathbf{x}^H(t_2)\}=\ZEROV$ when $t_1 \neq t_2$. Accordingly, we have
\small
\begin{align}
\label{Eq-app-A-5}
&\EX\left\{\mathbf{X}\XM^H \!\mid\! \boldsymbol{\theta}_0, \widetilde{\mathbf{p}}, \widetilde{\sigma}^2 \right\} = \EX\left\{\mathbf{X}\XM^H \!\mid\! \breve{\boldsymbol{\theta}}, \dot{\mathbf{p}}, \dot{\sigma}^2 \right\},\\
\Rightarrow & \! \sum_{t=0}^{N-1} \EX\{\mathbf{x}(t)\mathbf{x}^H(t) \!\mid\! \boldsymbol{\theta}_0, \widetilde{\mathbf{p}}, \widetilde{\sigma}^2\} \!=\! \sum_{t=0}^{N-1} \EX\{\mathbf{x}(t)\mathbf{x}^H(t) \!\mid\! \breve{\boldsymbol{\theta}}, \dot{\mathbf{p}}, \dot{\sigma}^2\}. \nonumber
\end{align}\normalsize
From \eqref{Eq-app-A-5}, \eqref{Eq-arclaw}, \eqref{model-eq-3} and the fact that the arcsine function is one-to-one when its argument is between $-1$ and $1$, it follows that
\small
\begin{align}
\label{Eq-app-A-6}
&\frac{1}{\widetilde{\sigma}^2+\sum_{k=1}^K \tilde{p}_k}\left[\mathbf{A}(\boldsymbol{\theta}_0)\DIAG(\widetilde{\mathbf{p}})
\mathbf{A}^H(\boldsymbol{\theta}_0)\!+\!\widetilde{\sigma}^2\IDM_M \right] = \nonumber\\
& \frac{1}{\dot{\sigma}^2+\sum_{k=1}^K \dot{p}_k} \left[\mathbf{A}(\breve{\boldsymbol{\theta}})\DIAG(\dot{\mathbf{p}})
\mathbf{A}^H(\breve{\boldsymbol{\theta}})\!+\!\dot{\sigma}^2\IDM_M \right].
\end{align}\normalsize
Considering $\mathbf{p} = \frac{\widetilde{\mathbf{p}}}{\widetilde{\sigma}^2+\sum_{k=1}^K \tilde{p}_k}$, $\sigma^2 = \frac{\widetilde{\sigma}^2}{\widetilde{\sigma}^2+\sum_{k=1}^K \tilde{p}_k}$, $\breve{\mathbf{p}} = \frac{\dot{\mathbf{p}}}{\dot{\sigma}^2+\sum_{k=1}^K \dot{p}_k}$ and $\breve{\sigma}^2 = \frac{\dot{\sigma}^2}{\dot{\sigma}^2+\sum_{k=1}^K \dot{p}_k}$, we obtain
\small
\begin{align}
\label{Eq-app-A-6-1}
\mathbf{A}(\boldsymbol{\theta}_0)\DIAG(\mathbf{p})
\mathbf{A}^H(\boldsymbol{\theta}_0)+\sigma^2\IDM_M = \mathbf{A}\!(\breve{\boldsymbol{\theta}})\DIAG(\breve{\mathbf{p}})
\mathbf{A}^H(\breve{\boldsymbol{\theta}})+\breve{\sigma}^2\IDM_M,
\end{align}\normalsize
which is in contradiction with \eqref{Eq-app-A-1}.
Hence, the initial assumption that $\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ is non-identifiable from $\mathbf{X}$ cannot be true. This proves the sufficiency.
To show the necessity, let us assume that $\boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$ is non-identifiable from $\mathbf{Y}$. This implies that there exist some $\breve{\boldsymbol{\theta}} \neq \boldsymbol{\theta}_0 \in [-\pi/2, \pi/2]^{K \times 1}$, $\mathbf{p}$, $\breve{\mathbf{p}}$, $\sigma^2$ and $\breve{\sigma}^2$ for which $f(\mathbf{Y} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) = f(\mathbf{Y} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$. Since the true PDF of $\mathbf{X}$ is obtained from the orthant probabilities of $\mathbf{Y}$, it readily follows that $f(\mathbf{X} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) = f(\mathbf{X} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$ as well. This proves that identifiability of $\boldsymbol{\theta}_0$ from $\mathbf{Y}$ is a necessary condition for its identifiability from $\mathbf{X}$.
%
\section{Proof of Theorem \ref{Theo-2}}\label{app-B}
We first prove {\bf S1}. Consider arbitrary $\boldsymbol{\theta} \in [-\pi/2, \pi/2]^{K \times 1}$ and $\breve{\boldsymbol{\theta}} \in [-\pi/2, \pi/2]^{K \times 1}$ such that $\boldsymbol{\theta} \neq \breve{\boldsymbol{\theta}}$. Moreover, let $\mathbf{A}_v(\boldsymbol{\theta})$ be the steering matrix of a contiguous ULA with $v$ elements located at $(0 ,\frac{\lambda}{2}, \cdots, (v-1)\frac{\lambda}{2})$. Considering the fact that $\mathbf{A}_v(\boldsymbol{\theta})$ is a Vandermonde matrix, if $K \leq v-1$, it follows from the Caratheodory-Fejer-Pisarenko decomposition \cite{caratheodory1911zusammenhang} that
\small
\begin{align}
\label{Eq-App-B-1}
\mathbf{A}_v(\boldsymbol{\theta}) \DIAG(\mathbf{p}) \mathbf{A}_v^H\!(\boldsymbol{\theta}) \!+\! \sigma^2 \IDM_v \!\neq\! \mathbf{A}_v\!(\breve{\boldsymbol{\theta}}) \DIAG(\breve{\mathbf{p}})\mathbf{A}_v^H\!(\breve{\boldsymbol{\theta}}) \!+\! \breve{\sigma}^2 \IDM_v,
\end{align}\normalsize
for any arbitrary values of $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$. From \cite[Eq. (113)]{SedighiTSP2019}, vectorizing both sides of \eqref{Eq-App-B-1} leads to
\small
\begin{align}
\label{Eq-app-B-2}
\mathbf{T}{'} \mathbf{A}_{\vartheta}(\boldsymbol{\theta})\mathbf{p}
+\sigma^2\mathbf{T}{'}\mathbf{e}{'} \neq
\mathbf{T}{'}\mathbf{A}_{\vartheta}(\breve{\boldsymbol{\theta}})\breve{\mathbf{p}}
+\breve{\sigma}^2\mathbf{T}{'}\mathbf{e}{'},
\end{align}\normalsize
where $\mathbf{A}_{\vartheta}(\boldsymbol{\theta}) \in \mathds{C}^{(2v-1) \times K}$ denotes the steering matrix corresponding to the contiguous ULA segment of the difference co-array, $\mathbf{T}' \in \{0,1\}^{v^2 \times (2v-1)}$ is a selection matrix defined in \cite[Eq. (114)]{SedighiTSP2019} and $\mathbf{e}{'} \in \{0,1\}^{(2v-1) \times 1}$ is a column vector with $[\mathbf{e}{'}]_i=\delta[i-v]$. Considering $\mathbf{T}'$ is full-column rank \cite{SedighiTSP2019}, multiplying both sides of \eqref{Eq-app-B-2} by $\mathbf{T}'^{\dagger}$ and then moving all the terms to one side of the equation yields
\small
\begin{align}
\label{Eq-app-A-7}
\mathbf{A}_{\vartheta}(\boldsymbol{\theta})\mathbf{p} -\mathbf{A}_{\vartheta}(\breve{\boldsymbol{\theta}})\breve{\mathbf{p}}
+(\sigma^2
-\breve{\sigma}^2)\mathbf{e}{'} \neq \ZEROV.
\end{align}\normalsize
It follows from $\breve{\boldsymbol{\theta}} \!\neq\! \boldsymbol{\theta}$ that $\breve{\boldsymbol{\theta}}$ may differ from $\boldsymbol{\theta}$ at $q$ DoAs for some integer $q \!\in\! [1,K]$. Noting this fact, \eqref{Eq-app-A-7} is simplified to
\small
\begin{align}
\label{Eq-app-B-3}
\begin{bmatrix} \mathbf{A}_{\vartheta}(\boldsymbol{\theta}) & \mathbf{A}_{\vartheta}(\ddot{\boldsymbol{\theta}}) & \mathbf{e}{'} \end{bmatrix} \begin{bmatrix}
\mathbf{p} - \breve{\mathbf{p}} \odot \boldsymbol{\varepsilon} \\ -\ddot{\mathbf{p}} \\ \sigma^2 - \breve{\sigma}^2
\end{bmatrix} \neq \ZEROV,
\end{align}\normalsize
where $\ddot{\boldsymbol{\theta}} \in [-\pi/2, \pi/2]^{q \times 1}$ consists of those elements of $\breve{\boldsymbol{\theta}}$ which do not intersect with those in $\boldsymbol{\theta}$, $\ddot{\mathbf{p}} \in \mathds{R}_{>0}^{q \times 1}$ contains those elements of $\breve{\mathbf{p}}$ corresponding to $\ddot{\boldsymbol{\theta}}$ and
\small
\begin{align}
\label{Eq-app-A-9}
[\boldsymbol{\varepsilon}]_i = \left\{\begin{array}{cc}
1, & [\boldsymbol{\theta}]_i = [\breve{\boldsymbol{\theta}}]_i, \\
0, & \text{otherwise}.
\end{array}
\right.
\end{align}\normalsize
Considering that \small$\begin{bmatrix}\mathbf{A}_{\vartheta}(\boldsymbol{\theta}) \!&\! \mathbf{A}_{\vartheta}(\ddot{\boldsymbol{\theta}}) \!&\! \mathbf{e}{'} \end{bmatrix} \in \mathds{C}^{(2v-1) \times (K+q+1)}$\normalsize~ is a sub-matrix of \small$\begin{bmatrix} \mathbf{A}_d(\boldsymbol{\theta}) \!&\! \mathbf{A}_d(\ddot{\boldsymbol{\theta}}) \!&\! \mathbf{e} \end{bmatrix} \in \mathds{C}^{(2D-1) \times (K+q+1)}$\normalsize, obtained by keeping $2v-1$ rows of \small$\begin{bmatrix}\mathbf{A}_d(\boldsymbol{\theta}) \!&\! \mathbf{A}_d(\ddot{\boldsymbol{\theta}}) \!&\! \mathbf{e}\end{bmatrix}$\normalsize, it follows from \eqref{Eq-app-B-3} that
\small
\begin{align}
\label{Eq-app-B-3-1}
&\begin{bmatrix} \mathbf{A}_d(\boldsymbol{\theta}) & \mathbf{A}_d(\ddot{\boldsymbol{\theta}}) & \mathbf{e} \end{bmatrix} \begin{bmatrix}
\mathbf{p} - \breve{\mathbf{p}} \odot \boldsymbol{\varepsilon} \\ -\ddot{\mathbf{p}} \\ \sigma^2 - \breve{\sigma}^2
\end{bmatrix} \neq \ZEROV,
\\
\label{Eq-app-B-3-2}
\Rightarrow & \mathbf{A}_d(\boldsymbol{\theta})\mathbf{p} -\mathbf{A}_d(\breve{\boldsymbol{\theta}})\breve{\mathbf{p}}
+(\sigma^2
-\breve{\sigma}^2)\mathbf{e} \neq \ZEROV.
\end{align}\normalsize
Multiplying \eqref{Eq-app-B-3-2} by $\mathbf{J}$ and exploiting \eqref{model-eq-3} and \eqref{model-eq-4}, after some algebraic manipulations, we obtain
\small
\begin{align}
&\VE(\mathbf{A}(\boldsymbol{\theta}) \DIAG(\mathbf{p}) \mathbf{A}^H(\boldsymbol{\theta}) + \sigma^2 \IDM_M) \nonumber\\
&\neq \VE(\mathbf{A}(\breve{\boldsymbol{\theta}}) \DIAG(\breve{\mathbf{p}})\mathbf{A}^H(\breve{\boldsymbol{\theta}}) + \breve{\sigma}^2 \IDM_M),
\end{align}\normalsize
which in turn implies that
\small
\begin{align}
\label{Eq-App-B-4}
\mathbf{A}\!(\boldsymbol{\theta}) \DIAG(\mathbf{p}) \mathbf{A}^H\!(\boldsymbol{\theta}) \!+\! \sigma^2 \IDM_M \!\neq\! \mathbf{A}\!(\breve{\boldsymbol{\theta}}) \DIAG(\breve{\mathbf{p}})\mathbf{A}^H\!(\breve{\boldsymbol{\theta}}) \!+\! \breve{\sigma}^2 \IDM_M,
\end{align}\normalsize
for all $\boldsymbol{\theta} \neq \breve{\boldsymbol{\theta}} \in [-\pi/2, \pi/2]^{K \times 1}$, $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$. Considering $\mathbf{y}(0), \mathbf{y}(1), \cdots, \mathbf{y}(N-1)$ are independent and identically distributed with $\mathbf{y}(t) \sim {\cal CN}(\ZEROV, \mathbf{R})$, it follows from \eqref{Eq-App-B-4} that $f(\mathbf{Y} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) \neq f(\mathbf{Y} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$ for any arbitrary values of $\boldsymbol{\theta} \neq \breve{\boldsymbol{\theta}} \in [-\pi/2, \pi/2]^{K \times 1}$, $\mathbf{p} \in \mathds{R}_{>0}^{{K \times 1}}$, $\breve{\mathbf{p}} \in \mathds{R}_{>0}^{{K \times 1}}$, $\sigma^2$ and $\breve{\sigma}^2$ if $K \leq v-1$. Now, from Theorem \ref{Theo-1}, we conclude that $f(\mathbf{X} \mid \boldsymbol{\theta}_0, \mathbf{p}, \sigma^2) \neq f(\mathbf{X} \mid \breve{\boldsymbol{\theta}}, \breve{\mathbf{p}}, \breve{\sigma}^2)$. This completes the proof of {\bf S1}.
We now prove {\bf S2}. We know from Lemma \ref{Lem-1} that the FIM is singular for any value of $\boldsymbol{\theta} \in [-\pi/2, \pi/2]^{K \times 1}$ if $K \geq D$. This means that the problem is not even locally identifiable at any $\boldsymbol{\theta}$ \cite{rothenberg1971identification}. Since local identifiability is a necessary condition for identifiability at any particular point, the problem is not identifiable for any $\boldsymbol{\theta}$.
\section{Proof of Lemma \ref{Lem-1}}\label{app-C}
Let $\mathbf{R}_{\mathbf{x}}^r$ and $\overline{\mathbf{R}}^r$ denote the equivalent real representation for $\mathbf{R}_{\mathbf{x}}$ and $\overline{\mathbf{R}}$, respectively, given as
\small
\begin{align}
\mathbf{R}_{\mathbf{x}}^r = \begin{bmatrix}
\Re\{\mathbf{R}_{\mathbf{x}}\} & -\Im\{\mathbf{R}_{\mathbf{x}}\} \\
\Im\{\mathbf{R}_{\mathbf{x}}\} & \Re\{\mathbf{R}_{\mathbf{x}}\}
\end{bmatrix},~~
\overline{\mathbf{R}}^r = \begin{bmatrix}
\Re\{\overline{\mathbf{R}}\} & -\Im\{\overline{\mathbf{R}}\} \\
\Im\{\overline{\mathbf{R}}\} & \Re\{\overline{\mathbf{R}}\}
\end{bmatrix}.
\end{align}\normalsize
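As an aside, the structure of this real representation is easy to verify numerically: for a Hermitian positive-definite $\mathbf{R}$, the real representation is symmetric positive definite, with each eigenvalue of $\mathbf{R}$ appearing twice. The sketch below is purely illustrative, using a randomly generated matrix:

```python
import numpy as np

# Real representation [[Re R, -Im R], [Im R, Re R]] of a Hermitian
# positive-definite R is symmetric positive definite, and its spectrum
# is that of R with each eigenvalue doubled in multiplicity.
rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = B @ B.conj().T + 4 * np.eye(4)          # Hermitian positive definite
Rr = np.block([[R.real, -R.imag], [R.imag, R.real]])

print(np.allclose(Rr, Rr.T))                 # symmetric
print(np.linalg.eigvalsh(Rr).min() > 0)      # positive definite
ev_c = np.sort(np.linalg.eigvalsh(R))
ev_r = np.sort(np.linalg.eigvalsh(Rr))
print(np.allclose(ev_r, np.repeat(ev_c, 2))) # each eigenvalue appears twice
```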
Making use of \eqref{Eq-arclaw} and the Taylor expansion of the arcsine function, applied element-wise, we have
\small
\begin{align}
\label{Eq-app-C-0}
\mathbf{R}_{\mathbf{x}}^r = & \frac{2}{\pi} \arcsin(\overline{\mathbf{R}}^r) = \frac{2}{\pi}\Big(\overline{\mathbf{R}}^r + \frac{1}{6} \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \nonumber \\
&+\frac{3}{40} \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r + \cdots \Big) \nonumber\\
=& \frac{2}{\pi} \sum_{n=0}^{\infty} \frac{(2n)!}{(2^n n!)^2 (2n+1)} \underbrace{\overline{\mathbf{R}}^r \odot \overline{\mathbf{R}}^r \odot \cdots \odot \overline{\mathbf{R}}^r}_{2n+1~{\rm times}}.
\end{align}\normalsize
It is clear from \eqref{Eq-CRB-1} that $\overline{\mathbf{R}}$ is positive definite, and so is $\overline{\mathbf{R}}^r$. Further, by the Schur product theorem \cite[Theorem 3.1]{styan1973hadamard}, which states that the Hadamard product of two positive-definite matrices is positive definite, the $(2n+1)$-fold Hadamard product of $\overline{\mathbf{R}}^r$ with itself is positive definite for every integer $n \geq 0$. Hence, \eqref{Eq-app-C-0} expresses $\mathbf{R}_{\mathbf{x}}^r$ as a weighted sum of positive-definite matrices with positive weights, so $\mathbf{R}_{\mathbf{x}}^r$, and thus $\mathbf{R}_{\mathbf{x}}$, is positive definite. This in turn implies the non-singularity of $(\mathbf{R}_{\mathbf{x}}^{-T} \otimes \mathbf{R}_{\mathbf{x}}^{-1})$. Hence, since $\mathbf{J}$ also has full column rank \cite{Liu2017}, we conclude that $\mathbf{J}^H (\mathbf{R}_{\mathbf{x}}^{-T} \otimes \mathbf{R}_{\mathbf{x}}^{-1}) \mathbf{J}$ is full rank. This implies that ${\cal I}_w(\boldsymbol{\varrho})$ is non-singular if and only if \small$\begin{bmatrix}
\mathbf{G} & \mathbf{V} \end{bmatrix} \in \mathds{R}^{(2D-1) \times 2K}$\normalsize~has full column rank. In other words, ${\cal I}_w(\boldsymbol{\varrho})$ is non-singular if and only if
\small
\begin{align}
\label{Eq-app-C-1}
\begin{bmatrix}
\mathbf{G} & \mathbf{V} \end{bmatrix} \begin{bmatrix}
\mathbf{c}_1 \\ \mathbf{c}_2
\end{bmatrix} \neq \ZEROV,
\end{align}\normalsize
for any arbitrary non-zero $\mathbf{c} = [\mathbf{c}_1^T, \mathbf{c}_2^T]^T \in \mathds{C}^{2K \times 1}$.
Inserting \eqref{Eq-CRB-9} and \eqref{Eq-CRB-10} into \eqref{Eq-app-C-1} leads to
\small
\begin{align}
\begin{bmatrix} \mathbf{\Delta} & \mathbf{\digamma} \end{bmatrix} \begin{bmatrix}
\widetilde{\mathbf{c}}_1 \\ \mathbf{c}_2
\end{bmatrix} \neq \ZEROV,
\end{align}\normalsize
where $\widetilde{\mathbf{c}}_1\!=\!j \pi \boldsymbol{\Phi}(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}) \mathbf{c}_1$. This completes the proof.
\section{Proof of Theorem \ref{Theo-3}}\label{app-D}
We know from Appendix \ref{app-C} that $\mathbf{M} = \mathbf{J}^H (\mathbf{R}_{\mathbf{x}}^{-T} \otimes \mathbf{R}_{\mathbf{x}}^{-1}) \mathbf{J}$ is positive-definite. Hence, \eqref{Eq-CRB-8} can be rewritten as
\small
\begin{align}
\label{Eq-app-D-1}
{\cal I}_w(\boldsymbol{\varrho}) &= N \begin{bmatrix} \mathbf{G}^H \mathbf{M}^{\frac{1}{2}} \\ \mathbf{V}^H \mathbf{M}^{\frac{1}{2}}
\end{bmatrix} \begin{bmatrix} \mathbf{M}^{\frac{1}{2}}\mathbf{G} & \mathbf{M}^{\frac{1}{2}}\mathbf{V}
\end{bmatrix} \nonumber\\
&= N \begin{bmatrix}
\mathbf{G}^H \mathbf{M} \mathbf{G} & \mathbf{G}^H \mathbf{M} \mathbf{V} \\ \mathbf{V}^H \mathbf{M} \mathbf{G} & \mathbf{V}^H \mathbf{M} \mathbf{V}
\end{bmatrix}.
\end{align}\normalsize
The $CRB_w(\boldsymbol{\theta})$ is then obtained by block-wise inversion as follows:
\small
\begin{align}
\label{Eq-app-D-3}
CRB_w(\boldsymbol{\theta})&\!=\!\frac{1}{N}\left(\mathbf{G}^H \mathbf{M} \mathbf{G} \!-\! \mathbf{G}^H \mathbf{M} \mathbf{V} \left(\mathbf{V}^H \mathbf{M} \mathbf{V} \right)^{-1} \mathbf{V}^H \mathbf{M} \mathbf{G} \right)^{-1} \nonumber\\
& \!=\! \frac{1}{N}\left(\mathbf{G}^H \mathbf{M}^{\frac{1}{2}} \Pi^{\perp}_{\mathbf{M}^{\frac{1}{2}} \mathbf{V}} \mathbf{M}^{\frac{1}{2}} \mathbf{G} \right)^{-1} .
\end{align}\normalsize
Substituting $\mathbf{G} = j \pi \DIAG(\mathbf{d}) \mathbf{\Omega} \mathbf{\Phi}(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}})$ and $\mathbf{R}_{\mathbf{x}} = \frac{2}{\pi} \arcsin(\overline{\mathbf{R}})$ into \eqref{Eq-app-D-3} leads to \eqref{Eq-Pes-CRB}. In addition, it follows from ${\cal I}(\boldsymbol{\varrho}) \succeq {\cal I}_w(\boldsymbol{\varrho})$ that $CRB(\boldsymbol{\theta}) \preceq CRB_w(\boldsymbol{\theta})$.
\section{Proof of Theorem \ref{Theo-5}}\label{app-E}
Recalling $\overline{p}_k = \frac{p_k}{\sigma^2+\sum_{i=1}^K p_i}$ and assuming that all sources have equal power $p$, we have
\small
\begin{align}
\label{Eq-app-E-1}
\lim_{SNR \to \infty} \overline{p}_k = \lim_{SNR \to \infty} \frac{SNR}{K \times SNR +1 } = \frac{1}{K}.
\end{align}\normalsize
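The arithmetic of this limit can be confirmed directly; the sketch below is purely illustrative, with an arbitrary choice of $K$:

```python
# Limit in (Eq-app-E-1): with equal-power sources,
# p_bar_k = SNR / (K*SNR + 1) -> 1/K as SNR -> infinity.
K = 4
vals = [snr / (K * snr + 1) for snr in (1e1, 1e3, 1e5, 1e7)]
errors = [abs(v - 1 / K) for v in vals]

print(all(e2 < e1 for e1, e2 in zip(errors, errors[1:])))  # True: monotone approach
print(errors[-1] < 1e-8)                                    # True
```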
Making use of \eqref{Eq-app-E-1}, it can be readily shown that
\small
\begin{align}
\label{Eq-app-E-2}
\lim_{SNR \to \infty} \overline{\mathbf{R}} &= \frac{1}{K} \mathbf{A}(\boldsymbol{\theta}) \mathbf{A}^H(\boldsymbol{\theta}) + (1-\frac{1}{K}) \IDM_M.
\end{align}\normalsize
The above equation implies that $\lim_{SNR \to \infty} \overline{\mathbf{R}}$ is a positive-definite matrix independent of the SNR. Further, it follows from \eqref{Eq-app-E-1} that
\small
\begin{align}
\label{Eq-app-E-3}
&\lim_{SNR \to \infty}\DIAG(\overline{\mathbf{p}}) = \frac{1}{K} \IDM_K,\\
\label{Eq-app-E-4}
&\lim_{SNR \to \infty} \mathbf{h} = \bigg[\frac{1}{\sqrt{1-\frac{|\Re\{\sum_{k=1}^K e^{-j \pi \sin \theta_k \ell_{D-1}}\}|^2}{K^2}}}, \cdots, 0, \nonumber\\
&~~~~~ \cdots, \frac{1}{\sqrt{1-\frac{|\Re\{\sum_{k=1}^K e^{j \pi \sin \theta_k \ell_{D-1}}\}|^2}{K^2}}}\bigg]^T,\\
\label{Eq-app-E-5}
&\lim_{SNR \to \infty} \overline{\mathbf{h}} = \bigg[\frac{1}{\sqrt{1-\frac{|\Im\{\sum_{k=1}^K e^{-j \pi \sin \theta_k \ell_{D-1}}\}|^2}{K^2}}}, \cdots, 0, \nonumber\\
&~~~~~ \cdots, \frac{1}{\sqrt{1-\frac{|\Im\{\sum_{k=1}^K e^{j \pi \sin \theta_k \ell_{D-1}}\}|^2}{K^2}}}\bigg]^T.
\end{align}\normalsize
Substituting \eqref{Eq-app-E-3}, \eqref{Eq-app-E-4} and \eqref{Eq-app-E-5} back into \eqref{Eq-CRB-9} and \eqref{Eq-CRB-10} indicates that $\lim_{SNR \to \infty} \begin{bmatrix}
\mathbf{G} & \mathbf{V}
\end{bmatrix}$ is a full-column-rank matrix independent of the SNR. Hence, recalling \eqref{Eq-CRB-8}, we can conclude that $\lim_{SNR \to \infty}{\cal I}_w(\boldsymbol{\varrho})$ is positive-definite and independent of the SNR. This in turn implies that $\lim_{SNR \to \infty}CRB_w(\boldsymbol{\theta})$, which is the inverse of the Schur complement of $\lim_{SNR \to \infty}{\cal I}_w(\boldsymbol{\varrho})$, is also positive-definite and independent of the SNR. This completes the proof.
\section{Proof of Lemma \ref{lem-new-2}}
\label{app-new-1}
We start with showing that $\mathbf{\Psi}$ is full rank. Making use of the relation \small$\det(\begin{bmatrix} \mathbf{C}_1 & \mathbf{C}_2\\ \mathbf{C}_3 & \mathbf{C}_4\end{bmatrix}) = \det(\mathbf{C}_1) \det(\mathbf{C}_4 - \mathbf{C}_3 \mathbf{C}_1^{-1} \mathbf{C}_2)$\normalsize, valid for invertible $\mathbf{C}_1$, we obtain
\small
\begin{align}
\det(\mathbf{\Psi}) = \det(\IDM_{D-1}) \det(2j \IDM_{D-1}) = (2j)^{D-1} \neq 0,
\end{align}\normalsize
which implies that $\mathbf{\Psi}$ is full rank.
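As a sanity check, the determinant identity and the resulting value of $\det(\mathbf{\Psi})$ can be verified numerically. The block structure $\mathbf{\Psi} = \big[\,\IDM_{D-1},\, -j\IDM_{D-1};\ \IDM_{D-1},\, j\IDM_{D-1}\,\big]$ used below is our reading of the blocks appearing in \eqref{Eq-app-G-7} and is an assumption of this illustrative sketch:

```python
import numpy as np

# Check det([[C1, C2], [C3, C4]]) = det(C1) * det(C4 - C3 C1^{-1} C2)
# (valid for invertible C1) on random blocks, and the resulting
# det(Psi) = (2j)^(D-1) for Psi = [[I, -jI], [I, jI]]  (assumed structure).
rng = np.random.default_rng(1)
n = 3
C1, C2, C3, C4 = (rng.standard_normal((n, n)) for _ in range(4))
C1 += n * np.eye(n)                      # keep C1 safely invertible
M_blk = np.block([[C1, C2], [C3, C4]])
lhs = np.linalg.det(M_blk)
rhs = np.linalg.det(C1) * np.linalg.det(C4 - C3 @ np.linalg.inv(C1) @ C2)
print(np.isclose(lhs, rhs))              # True

D = 5
I = np.eye(D - 1)
Psi = np.block([[I, -1j * I], [I, 1j * I]])
print(np.isclose(np.linalg.det(Psi), (2j) ** (D - 1)))   # True
```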
Next, we proceed with proving that $\overline{\mathbf{J}}$ is full rank. Let $\ddot{\mathbf{J}}$ denote the matrix obtained after removing the $D$-th column from $\mathbf{J}$. $\ddot{\mathbf{J}}$ has full column rank since its columns are a subset of the columns of the full-column-rank matrix $\mathbf{J}$ \cite{Liu2017}. Further, for $1 \leq i \leq M$, it is readily confirmed that the $((i-1)M+i)$-th element of $\VE(\mathbf{L}_n)$ as well as of $\VE(\mathbf{L}_n^T)$ equals the $i$-th diagonal element of $\mathbf{L}_n$, which is obviously zero for $n \neq 0$ according to the definition given after \eqref{model-eq-5}. Given \eqref{model-eq-5}, this in turn implies that the rows of $\ddot{\mathbf{J}}$ with indices $(i-1)M+i$, for all $1 \leq i \leq M$,
are zero vectors. As a result, the matrix obtained by removing these rows from $\ddot{\mathbf{J}}$, i.e., $\overline{\mathbf{J}}$, has the same column rank as $\ddot{\mathbf{J}}$, and is therefore full column rank.
Finally, we show that $\mathbf{F}$ is full rank. It follows from the fact that $\overline{\mathbf{e}}_p^T \overline{\mathbf{e}}_q = 0$ for $p \neq q$ that
\small
\begin{align}
&(\overline{\mathbf{e}}_i^T \otimes \overline{\mathbf{e}}_j^T \pm \overline{\mathbf{e}}_j^T \otimes \overline{\mathbf{e}}_i^T) (\overline{\mathbf{e}}_p \otimes \overline{\mathbf{e}}_q \pm \overline{\mathbf{e}}_q \otimes \overline{\mathbf{e}}_p) = \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_p \otimes \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_q \nonumber\\
&\pm \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_q \otimes \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_p \pm \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_p \otimes \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_q \pm \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_q \otimes \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_p = 0.
\end{align}\normalsize
for $1 \leq i < j \leq M$ and $1 \leq p < q \leq M$ when either $p$ or $q$ differs from $i$ and $j$. In addition, in case $i=p$ and $j=q$, we have
\small
\begin{align}
&(\overline{\mathbf{e}}_i^T \otimes \overline{\mathbf{e}}_j^T + \overline{\mathbf{e}}_j^T \otimes \overline{\mathbf{e}}_i^T) (\overline{\mathbf{e}}_i \otimes \overline{\mathbf{e}}_j - \overline{\mathbf{e}}_j \otimes \overline{\mathbf{e}}_i) = \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_i \otimes \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_j \nonumber\\
&- \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_j \otimes \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_i + \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_i \otimes \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_j - \overline{\mathbf{e}}_j^T \overline{\mathbf{e}}_j \otimes \overline{\mathbf{e}}_i^T \overline{\mathbf{e}}_i = 0.
\end{align}\normalsize
It is also observed that, for $1 \leq i \leq M$ and $1 \leq p < q \leq M$, the $((i-1)M+i)$-th element of $\overline{\mathbf{e}}_p^T \otimes \overline{\mathbf{e}}_q^T \pm \overline{\mathbf{e}}_q^T \otimes \overline{\mathbf{e}}_p^T$ is equal to the $i$-th diagonal element of $\overline{\mathbf{e}}_p \overline{\mathbf{e}}_q^T \pm \overline{\mathbf{e}}_q\overline{\mathbf{e}}_p^T$, which is obviously zero for $p \neq q$. Consequently, the row vectors obtained by removing the elements with indices $(i-1)M+i$ for all $1 \leq i \leq M$ from $\overline{\mathbf{e}}_p^T \otimes \overline{\mathbf{e}}_q^T + \overline{\mathbf{e}}_q^T \otimes \overline{\mathbf{e}}_p^T$ and $\overline{\mathbf{e}}_p^T \otimes \overline{\mathbf{e}}_q^T - \overline{\mathbf{e}}_q^T \otimes \overline{\mathbf{e}}_p^T$ remain orthogonal to each other. Hence, the square matrix $\mathbf{F}$ has orthogonal rows and is therefore full rank.
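This orthogonality argument can be illustrated numerically. The sketch below builds rows matching the description above (the ordering of the rows is an assumption of the sketch, not taken from the paper) and checks that they are mutually orthogonal and span a full-rank square matrix:

```python
import numpy as np
from itertools import combinations

# Orthogonality of the rows e_p^T (x) e_q^T +/- e_q^T (x) e_p^T (p < q),
# after removing the entries at the diagonal positions (i-1)M + i of the
# underlying M x M matrices (0-based: multiples of M+1).
M = 4
E = np.eye(M)
keep = [k for k in range(M * M) if k % (M + 1) != 0]   # drop diagonal slots

rows = []
for p, q in combinations(range(M), 2):
    sym = np.kron(E[p], E[q]) + np.kron(E[q], E[p])
    asym = np.kron(E[p], E[q]) - np.kron(E[q], E[p])
    rows.append(sym[keep])
    rows.append(asym[keep])
F = np.array(rows)                        # (M^2 - M) x (M^2 - M), square

G = F @ F.T
print(np.allclose(G - np.diag(np.diag(G)), 0))   # rows mutually orthogonal
print(np.linalg.matrix_rank(F) == M * M - M)     # F is full rank
```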
\section{Proof of Lemma \ref{lem-2}}
\label{App-F}
Define $E(\theta) = \mathbf{a}_v^H(\theta) \widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n \mathbf{a}_v(\theta)$ and $\breve{E}(\theta) = \mathbf{a}_v^H(\theta) \mathbf{U}_n \mathbf{U}^H_n \mathbf{a}_v(\theta)$, where $\widehat{\mathbf{U}}_n$ and $\mathbf{U}_n$ consist of, respectively, the eigenvectors of $\widehat{\overline{\mathbf{R}}}_v$ and $\mathbf{A}_v(\boldsymbol{\theta}) \DIAG(\overline{\mathbf{p}}) \mathbf{A}_v^H(\boldsymbol{\theta}) + \overline{\sigma}^2 \IDM_v$ corresponding to their $v-K$ smallest eigenvalues, with $K \leq v-1$. We know that the elements of $\widehat{\boldsymbol{\theta}}$ are the minimizers of $E(\theta)$. Defining $E_n = {\underset{\theta} {\rm sup}}|E(\theta) - \breve{E}(\theta)|$, we have
\small
\begin{align}
\label{Eq-app-F-1}
E_n &= {\underset{\theta} {\rm sup}} \left| \mathbf{a}_v^H(\theta) (\widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n-\mathbf{U}_n \mathbf{U}^H_n) \mathbf{a}_v(\theta) \right|
\nonumber\\
&= {\underset{\theta} {\rm sup}} \left|\left(\mathbf{a}_v^T(\theta) \otimes \mathbf{a}_v^H(\theta) \right) \VE(\widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n-\mathbf{U}_n \mathbf{U}^H_n) \right| \nonumber\\
&\leq {\underset{\theta} {\rm sup}} \|\mathbf{a}_v^T(\theta) \otimes \mathbf{a}_v^H(\theta)\|_2 \|\VE(\widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n-\mathbf{U}_n \mathbf{U}^H_n)\|_2 \nonumber\\
&= v \|\VE(\widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n-\mathbf{U}_n \mathbf{U}^H_n)\|_2.
\end{align}\normalsize
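The norm step can be illustrated numerically: for a steering vector of length $v$ with unit-modulus entries, $\|\mathbf{a}_v\|_2 = \sqrt{v}$, so $\|\mathbf{a}_v^T \otimes \mathbf{a}_v^H\|_2 = \|\mathbf{a}_v\|_2^2 = v$. The ULA phase model below is an assumption of this illustrative sketch:

```python
import numpy as np

# For a length-v steering vector with unit-modulus entries,
# ||a (x) conj(a)||_2 = ||a||_2^2 = v  (norm of a Kronecker product
# of vectors is the product of the norms).
v = 7
theta = 0.3
a = np.exp(1j * np.pi * np.sin(theta) * np.arange(v))   # assumed ULA model
kron = np.kron(a, a.conj())
print(np.isclose(np.linalg.norm(kron), v))   # True
```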
It follows from \eqref{Eq-Est-21} that $\lim_{N \to \infty} \widehat{\mathbf{U}}_n \widehat{\mathbf{U}}^H_n = \mathbf{U}_n \mathbf{U}^H_n$. Hence, $E_n \to 0$ as $N \to \infty$. This implies that $E(\theta)$ converges uniformly to $\breve{E}(\theta)$ as $N \to \infty$.
Thus, the minimizers of $E(\theta)$, i.e., the elements of $\widehat{\boldsymbol{\theta}}$, converge to the minimizers of $\breve{E}(\theta)$, i.e., $\theta_1, \theta_2, \cdots, \theta_K$, as $N \to \infty$. This completes the proof.
\section{Proof of Theorem \ref{Theo-6}}
\label{app-G}
Considering the consistency of $\widehat{\boldsymbol{\theta}}$ and following the same arguments as in \cite[App. B]{Wang2017}, for sufficiently large $N$, the asymptotic estimation error expression for EOCAB-MUSIC is given by
\begin{align}
\label{Eq-app-G-1}
\hat{\theta}_k - \theta_k = - \frac{ \Re\{\mathbf{z}^T_k \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}}\} }{\pi \overline{p}_k q_k \cos(\theta_k)} ,
\end{align}
where $\Delta \overline{\mathbf{r}} = \widehat{\overline{\mathbf{r}}} - \overline{\mathbf{r}}$ and $\mathbf{T}=
\begin{bmatrix}
\mathbf{T}_v^T & \mathbf{T}_{v-1}^T & \cdots &
\mathbf{T}_1^T
\end{bmatrix}^T \in \mathds{C}^{v^2 \times (2D-1)}$.
From \eqref{Eq-app-G-1}, the covariance of the asymptotic distribution (as $N \to \infty$) of the DoA estimation errors is given by
\small
\begin{align}
\label{Eq-app-G-2}
{\cal E}_{\theta_{k_1}, \theta_{k_2}} &= \EX\{ (\hat{\theta}_{k_1} - \theta_{k_1})(\hat{\theta}_{k_2} - \theta_{k_2}) \} \nonumber\\
&= \frac{ \EX\left\{ \Re\{\mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \Re\{\mathbf{z}^T_{k_2} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \right\} }{\pi^2 \overline{p}_{k_1} \overline{p}_{k_2} q_{k_1} q_{k_2} \cos(\theta_{k_1}) \cos(\theta_{k_2})}.
\end{align}\normalsize
Making use of the identity $\Re\{\mathbf{c}_1^H \mathbf{c}_2\} \Re\{\mathbf{c}_3^H \mathbf{c}_2\} = \frac{1}{2}\Re\{\mathbf{c}_1^H \mathbf{c}_2 \mathbf{c}_2^H \mathbf{c}_3 + \mathbf{c}_1^H \mathbf{c}_2 \mathbf{c}_2^T \mathbf{c}_3^*\}$, we obtain
\small
\begin{align}
\label{Eq-app-G-3}
&\EX\left\{ \Re\{\mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \Re\{\mathbf{z}^T_{k_2} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \right\} = \nonumber\\
&\frac{1}{2}\EX\big\{ \Re\{ \mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \Delta \overline{\mathbf{r}}^H \mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_{k_2}^*\} \nonumber\\
&+ \Re\{ \mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \Delta \overline{\mathbf{r}}^T \mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_{k_2} \} \big\}.
\end{align}\normalsize
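The scalar identity invoked in this step is easy to verify numerically; the following sketch checks it on random complex vectors:

```python
import numpy as np

# Check Re{c1^H c2} Re{c3^H c2}
#     = (1/2) Re{ c1^H c2 c2^H c3 + c1^H c2 c2^T conj(c3) }.
rng = np.random.default_rng(2)
c1, c2, c3 = (rng.standard_normal(5) + 1j * rng.standard_normal(5)
              for _ in range(3))

lhs = np.real(c1.conj() @ c2) * np.real(c3.conj() @ c2)
rhs = 0.5 * np.real((c1.conj() @ c2) * (c2.conj() @ c3)
                    + (c1.conj() @ c2) * (c2 @ c3.conj()))
print(np.isclose(lhs, rhs))   # True
```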
The matrix $\MAT_{M,M}(\mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_k)$ is Hermitian \cite[Lemma 6]{Wang2017}, and therefore
\small
\begin{align}
\label{Eq-app-G-4}
\mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_k^* = \mathbf{K}_M \mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_k,
\end{align}\normalsize
where $\mathbf{K}_M \in \{0,1\}^{M^2 \times M^2}$ is the commutation matrix defined as $\VE(\mathbf{C}^T) = \mathbf{K}_M \VE(\mathbf{C})$ for any arbitrary matrix $\mathbf{C}$ \cite{Magnus2007}. In addition, since $\overline{\mathbf{R}}^H = \overline{\mathbf{R}}$, we have
\small
\begin{align}
\label{Eq-app-G-5}
\Delta \overline{\mathbf{r}}^T = \Delta \overline{\mathbf{r}}^H \mathbf{K}_M^H.
\end{align}\normalsize
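The commutation-matrix properties used here can be checked compactly (a column-major $\VE$ convention is assumed in this illustrative sketch):

```python
import numpy as np

# Commutation matrix K_M: vec(C^T) = K_M vec(C), with K_M = K_M^T = K_M^{-1}.
M = 4
K_M = np.zeros((M * M, M * M))
for i in range(M):
    for j in range(M):
        # column-major vec: [vec(C)]_{j*M + i} = C[i, j]
        K_M[i * M + j, j * M + i] = 1.0

C = np.arange(M * M, dtype=float).reshape(M, M)
vec = lambda X: X.reshape(-1, order="F")          # column-major vec

print(np.allclose(K_M @ vec(C), vec(C.T)))        # vec(C^T) = K_M vec(C)
print(np.allclose(K_M, K_M.T))                    # symmetric
print(np.allclose(K_M @ K_M, np.eye(M * M)))      # involutory (own inverse)
```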
Inserting \eqref{Eq-app-G-4} and \eqref{Eq-app-G-5} into \eqref{Eq-app-G-3} and using the fact that $\mathbf{K}_M = \mathbf{K}_M^H = \mathbf{K}_M^{-1}$, we obtain
\small
\begin{align}
\label{Eq-app-G-6}
&\EX\left\{ \Re\{\mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \Re\{\mathbf{z}^T_{k_2} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \right\} \nonumber\\
&= \EX\{\Re\{ \mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \Delta \overline{\mathbf{r}}^H \mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_{k_2}^* \}\}.
\end{align}\normalsize
Recalling \eqref{Eq-Est-18} and \eqref{Eq-Est-4}, we have
\small
\begin{align}
\label{Eq-app-G-7}
\mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}}
&=
\mathbf{T} \begin{bmatrix} \ZEROV & \IDM_{D-1} & -j \IDM_{D-1} \\
0 & \ZEROV & \ZEROV \\
\ZEROV & \IDM_{D-1} & j \IDM_{D-1} \end{bmatrix} \begin{bmatrix} 0 \\ \widehat{\boldsymbol{\phi}} - \boldsymbol{\phi} \end{bmatrix}.
\end{align}\normalsize
Additionally, from \cite[Eq. (114) and Eq. (116)]{SedighiTSP2019}, we know
\footnotesize
\begin{align}\label{Eq-app-G-8}
&\mathbf{T}= \\
&\big[ \ZEROV_{v^2 \times (D-v)}, \VE (\overline{\mathbf{L}}^T_{v-1}), \cdots, \VE (\overline{\mathbf{L}}_{0}), \cdots, \VE (\overline{\mathbf{L}}_{v-1}), \ZEROV_{v^2 \times (D-v)} \big],\nonumber
\end{align}\normalsize
where
{\footnotesize{
$
[\overline{\mathbf{L}}_n]_{p,q}=\left\{\begin{array}{cc}
1, & \text{if}~~ p-q=n, \\
0, & \text{otherwise}.
\end{array}
\right..
$}}
Substituting \eqref{Eq-app-G-8} into \eqref{Eq-app-G-7} yields
\small
\begin{align}
\label{Eq-app-G-9}
\mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} = \overline{\mathbf{T}} \mathbf{\Psi} (\widehat{\boldsymbol{\phi}} - \boldsymbol{\phi}),
\end{align}\normalsize
where
\begin{align}
\label{Eq-app-G-9-1}
&\Scale[0.8]{\hspace{-1mm}\overline{\mathbf{T}}}= \\
&\hspace{-1mm}\Scale[0.79]{\big[ \ZEROV_{v^2 \times (D-v)}, \VE (\overline{\mathbf{L}}^T_{v-1}), \cdots, \VE (\overline{\mathbf{L}}^T_{-1}), \VE (\overline{\mathbf{L}}_{1}), \cdots, \VE (\overline{\mathbf{L}}_{v-1}), \ZEROV_{v^2 \times (D-v)} \big].} \nonumber
\end{align}
Inserting \eqref{Eq-app-G-9} into \eqref{Eq-app-G-6} gives
\small
\begin{align}
\label{Eq-app-G-10}
&\EX\left\{ \Re\{\mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \Re\{\mathbf{z}^T_{k_2} \mathbf{T} \mathbf{J}^{\dagger} \Delta \overline{\mathbf{r}} \} \right\} \nonumber\\
&= \Re\{ \mathbf{z}^T_{k_1} \overline{\mathbf{T}} \EX\{\mathbf{\Psi}(\widehat{\boldsymbol{\phi}} - \boldsymbol{\phi}) (\widehat{\boldsymbol{\phi}} - \boldsymbol{\phi})^H\mathbf{\Psi}^H\} \overline{\mathbf{T}}^H \mathbf{z}_{k_2}^* \}.
\end{align}\normalsize
As a result, for sufficiently large $N$, using a first-order perturbation expansion leads to
\small
\begin{align}
\label{Eq-app-G-11}
&\EX\{\mathbf{\Psi}(\widehat{\boldsymbol{\phi}} - \boldsymbol{\phi}) (\widehat{\boldsymbol{\phi}} - \boldsymbol{\phi})^H\mathbf{\Psi}^H\} \simeq \\
&\left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1} \nonumber\\
&\times \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \EX\{\widetilde{\ddot{\mathbf{r}}}\widetilde{\ddot{\mathbf{r}}}^H \} \nonumber\\
&\times \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}} \nonumber\\
&\times \left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1} - \mathbf{\Psi} \boldsymbol{\phi} \boldsymbol{\phi}^H \mathbf{\Psi}^H, \nonumber
\end{align}\normalsize
where $\mathbf{\Sigma} = \mathbf{\Sigma}(\boldsymbol{\gamma})$ is given in Appendix K of the supplementary document and $[\mathbf{b}]_n =\frac{1}{\sqrt{1-|[\boldsymbol{\gamma}]_n|^2}}$ for $1 \leq n \leq M^2-M$. It remains to compute $\EX\{\widetilde{\ddot{\mathbf{r}}}\widetilde{\ddot{\mathbf{r}}}^H \}$. Making use of the relation $\widetilde{\ddot{\mathbf{r}}} = \sine(\frac{\pi}{2} \widehat{\ddot{\mathbf{r}}}_{\mathbf{x}})$, we obtain
\small
\begin{align}
\label{Eq-app-G-12}
&\EX\{[\widetilde{\ddot{\mathbf{r}}}]_p[\widetilde{\ddot{\mathbf{r}}}]_q^*\} = \nonumber\\
&\frac{1}{4} \EX\bigg\{ e^{\frac{j\pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } + e^{-\frac{j\pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
&- e^{\frac{j\pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } - e^{-\frac{j\pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
& + e^{\frac{j\pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } + e^{-\frac{j\pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
&- e^{\frac{j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } - e^{- \frac{ j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} }\bigg\} \nonumber\\
& + \frac{j}{4} \EX\bigg\{ e^{\frac{j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } + e^{\frac{-j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
&- e^{\frac{j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } - e^{\frac{-j \pi \left(\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
& - e^{\frac{j \pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } - e^{- \frac{j \pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}-\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \nonumber\\
&+ e^{\frac{j \pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2}} + e^{- \frac{j \pi \left(\Re\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_p\}+\Im\{[\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}}]_q\}\right)}{2} } \bigg\}.
\end{align}\normalsize
Considering that $\widehat{\ddot{\mathbf{r}}}_{\mathbf{x}} \stackrel{D}{\rightarrow} {\cal CN}(\ddot{\mathbf{r}}_{\mathbf{x}}, \frac{4}{\pi^2 N}\mathbf{\Sigma})$, the expectations in \eqref{Eq-app-G-12} can be computed using the characteristic function of the Gaussian distribution as follows:
\small
\begin{align}
\label{Eq-app-G-13}
&\EX\{[\widetilde{\ddot{\mathbf{r}}}]_p[\widetilde{\ddot{\mathbf{r}}}]_q^*\} = \frac{e^{\frac{-[\mathbf{\Sigma}]_{p,p}-[\mathbf{\Sigma}]_{q,q}}{4N}}}{2} \nonumber\\
&\times \bigg[ \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
&- \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{-\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
& + \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
&- \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{-\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}}\nonumber\\
&+ j \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
&-j \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{-\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
& - j \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{-\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \nonumber\\
&+ j \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) e^{\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}} \bigg].
\end{align}\normalsize
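The Gaussian characteristic-function identity $\EX\{e^{jtX}\} = e^{jt\mu - t^2 s^2/2}$ for $X \sim {\cal N}(\mu, s^2)$, which underlies the evaluation above, can be sanity-checked by Monte Carlo. The sample size, parameter values, and tolerance below are arbitrary choices for this illustrative sketch:

```python
import numpy as np

# Monte Carlo check of the Gaussian characteristic function
# E{e^{j t X}} = e^{j t mu - t^2 s^2 / 2} for X ~ N(mu, s^2).
rng = np.random.default_rng(3)
mu, s, t = 0.4, 0.3, np.pi / 2
X = rng.normal(mu, s, size=2_000_000)

mc = np.mean(np.exp(1j * t * X))
exact = np.exp(1j * t * mu - t**2 * s**2 / 2)
print(abs(mc - exact) < 5e-3)   # True (agreement within Monte Carlo error)
```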
Exploiting the Taylor expansion of the exponential function, \eqref{Eq-app-G-13} can be approximated for sufficiently large $N$ as
\small
\begin{align}
\label{Eq-app-G-14}
&\EX\{[\widetilde{\ddot{\mathbf{r}}}]_p[\widetilde{\ddot{\mathbf{r}}}]_q^*\} \simeq \frac{1}{2} \\
\times&\bigg[ \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1+\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber\\
&- \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1-\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber\\
& + \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left( 1+ \frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber\\
&- \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1-\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right)\nonumber\\
&+ j \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1+\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber
\end{align}
\begin{align}
&-j \cos\left(\frac{\pi}{2} \left[\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1-\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber\\
& - j \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}-\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1-\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \nonumber\\
&+ j \cos\left(\frac{\pi}{2} \left[\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}+\Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}\right] \right) \left(1+\frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2 N}\right) \bigg] \nonumber\\
&= \sin(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \sin(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \nonumber\\
&+ \sin(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \sin(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\})
\nonumber\\
&+j \sin(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \sin(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \nonumber\\
&-j \sin(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \sin(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \nonumber\\
&+\frac{\Re\{[\mathbf{\Sigma}]_{p,q}\}}{2N}\bigg[ \cos(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \cos(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \nonumber\\
&+ \cos(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \cos(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \bigg] \nonumber\\
&+j \frac{\Im\{[\mathbf{\Sigma}]_{p,q}\}}{2N}\bigg[ \cos(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \cos(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \nonumber\\
&+ \cos(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}) \cos(\frac{\pi}{2} \Im\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_q\}) \bigg] .
\end{align}\normalsize
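The step from the cosine products above to the square-root form in \eqref{Eq-app-G-15} is worth making explicit (a reading aid on our part, assuming the elementwise arcsine-law relation $\Re\{[\ddot{\mathbf{r}}]_p\} = \sin\big(\frac{\pi}{2}\Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}\big)$ and its imaginary-part counterpart):
\small
\begin{align*}
\cos\left(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}\right) = \sqrt{1-\sin^2\left(\frac{\pi}{2} \Re\{[\ddot{\mathbf{r}}_{\mathbf{x}}]_p\}\right)} = \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_p\}]^2},
\end{align*}\normalsize
and analogously for the three remaining cosine factors.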
Consequently, it follows from \eqref{Eq-Est-2} that
\small
\begin{align}
\label{Eq-app-G-15}
[\mathbf{\Gamma}]_{p,q}=&\EX\{[\widetilde{\ddot{\mathbf{r}}}]_p[\widetilde{\ddot{\mathbf{r}}}]_q^*\} \simeq [\ddot{\mathbf{r}}]_p [\ddot{\mathbf{r}}]_q^* \\
&+ \frac{1}{2N} \bigg(\sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_q\}]^2}\nonumber\\
&+ \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_q\}]^2}\bigg) \Re\{[\mathbf{\Sigma}]_{p,q}\} \nonumber\\
&+\frac{j}{2 N} \bigg(\sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_q\}]^2} \nonumber\\
&+ \sqrt{1-[\Re\{[\ddot{\mathbf{r}}]_p\}]^2} \times \sqrt{1-[\Im\{[\ddot{\mathbf{r}}]_q\}]^2}\bigg) \Im\{[\mathbf{\Sigma}]_{p,q}\} . \nonumber
\end{align}\normalsize
Inserting \eqref{Eq-app-G-15} into \eqref{Eq-app-G-11} and making use of \eqref{Eq-Est-5} yields
\small
\begin{align}
\label{Eq-app-G-16}
&\EX\{\mathbf{\Psi}(\widehat{\boldsymbol{\phi}} \!-\! \boldsymbol{\phi}) (\widehat{\boldsymbol{\phi}} \!-\! \boldsymbol{\phi})^H\mathbf{\Psi}^H\} \!\simeq\! \\
&\left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1}\times \nonumber\\
& \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \mathbf{\Gamma} \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1}\mathbf{F}^{-1} \nonumber\\
&\times \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}}\left( \overline{\mathbf{J}}^H \mathbf{F}^H \DIAG(\mathbf{b}) \mathbf{F}^{-H} \mathbf{\Sigma}^{-1} \mathbf{F}^{-1} \DIAG(\mathbf{b}) \mathbf{F} \overline{\mathbf{J}} \right)^{-1}. \nonumber
\end{align}\normalsize
Finally, substituting \eqref{Eq-app-G-16} into \eqref{Eq-app-G-10} and considering that $\overline{p}_k = \frac{p_k}{\sigma^2 + \sum_{k=1}^K p_k}$ concludes the proof of Theorem \ref{Theo-6}.
\vspace{-2mm}
\section{Proof of Corollary \ref{Col-2}}
\label{app-I}
OCAB-MUSIC employs $\widetilde{\overline{\mathbf{r}}} = \VE(\widetilde{\overline{\mathbf{R}}})$ instead of $\widehat{\overline{\mathbf{r}}}$. Hence, its asymptotic estimation error is obtained by replacing $\Delta \overline{\mathbf{r}}$ with $\widetilde{\overline{\mathbf{r}}} - \overline{\mathbf{r}}$ in \eqref{Eq-app-G-1}. Following the same steps from \eqref{Eq-app-G-2} to \eqref{Eq-app-G-6}, the covariance of the asymptotic distribution (as $N \to \infty$) of the DoA estimation errors for OCAB-MUSIC is obtained as
\small
\begin{align}
\label{Eq-app-I-1}
{\cal E}_{\theta_{k_1}, \theta_{k_2}} = \Re\{ \mathbf{z}^T_{k_1} \mathbf{T} \mathbf{J}^{\dagger} \EX\{(\widetilde{\overline{\mathbf{r}}} - \overline{\mathbf{r}}) (\widetilde{\overline{\mathbf{r}}} - \overline{\mathbf{r}})^H \} \mathbf{J}^{\dagger H} \mathbf{T}^H \mathbf{z}_{k_2}^* \}.
\end{align}\normalsize
Considering the fact that the diagonal elements of $\widetilde{\overline{\mathbf{R}}}$ and $\overline{\mathbf{R}}$ are equal to one, \eqref{Eq-app-I-1} is simplified as
\small
\begin{align}
\label{Eq-app-I-2}
{\cal E}_{\theta_{k_1}, \theta_{k_2}} = \Re\{ \mathbf{z}^T_{k_1} \overline{\mathbf{T}} \overline{\mathbf{J}}^{\dagger} \left[\EX\{\widetilde{\ddot{\mathbf{r}}}\widetilde{\ddot{\mathbf{r}}}^H\} - \ddot{\mathbf{r}} \ddot{\mathbf{r}}^H \right] \overline{\mathbf{J}}^{\dagger H} \overline{\mathbf{T}}^H \mathbf{z}_{k_2}^* \}.
\end{align}\normalsize
Substituting $\EX\{\widetilde{\ddot{\mathbf{r}}}\widetilde{\ddot{\mathbf{r}}}^H\}$ from \eqref{Eq-app-G-15} completes the proof.
\section{Proof of Theorem \ref{Theo-7}}
\label{app-J}
To derive $\lim_{SNR \to \infty} {\cal E}_{\theta_k}$, we need to calculate $\lim_{SNR \to \infty} (\sigma^2+\sum_{k'=1}^K p_{k'})^2/p_k^2$, $\mathbf{W}_{\infty} = \lim_{SNR \to \infty} \mathbf{W}$ and $\mathbf{\Gamma}_{\infty} = \lim_{SNR \to \infty} \mathbf{\Gamma}$. It is obtained from \eqref{Eq-app-E-1} that
\small
\begin{align}
\lim_{SNR \to \infty} \frac{(\sigma^2+\sum_{k'=1}^K p_{k'})^2}{p_k^2} = \lim_{SNR \to \infty} \frac{1}{\overline{p}_k^2} = K^2.
\end{align}\normalsize
In addition, it follows from \eqref{Eq-Est-22} and \eqref{Eq-Est-23} that $\mathbf{W}$ and $\mathbf{\Gamma}$ depend on SNR through $\overline{\mathbf{R}}$, $\boldsymbol{\gamma}$ and $\ddot{\mathbf{r}}$. Hence, for calculating $\mathbf{W}_{\infty}$ and $\mathbf{\Gamma}_{\infty}$, it is sufficient to first compute $\overline{\mathbf{R}}_{\infty} = \lim_{SNR \to \infty} \overline{\mathbf{R}}$, $\boldsymbol{\gamma}_{\infty} = \lim_{SNR \to \infty} \boldsymbol{\gamma}$ and $\ddot{\mathbf{r}}_{\infty} = \lim_{SNR \to \infty} \ddot{\mathbf{r}}$, and then insert them back into the expressions of $\mathbf{W}$ and $\mathbf{\Gamma}$ given in \eqref{Eq-Est-22} and \eqref{Eq-Est-23}. $\overline{\mathbf{R}}_{\infty}$ is obtained in \eqref{Eq-app-E-2}. Given $\overline{\mathbf{R}}_{\infty}$, $\boldsymbol{\gamma}_{\infty} = \lim_{SNR \to \infty} \boldsymbol{\gamma}$ is equal to the $(M^2-M) \times 1$ vector containing the real and imaginary parts of the elements of $\overline{\mathbf{R}}_{\infty}$ above its main diagonal. Then, exploiting \eqref{Eq-Est-10}, we have $\ddot{\mathbf{r}}_{\infty} = \lim_{SNR \to \infty} \mathbf{\Psi}^{-1}\overline{\mathbf{J}}^{\dagger}\mathbf{F}^{-1} \boldsymbol{\gamma} = \mathbf{\Psi}^{-1}\overline{\mathbf{J}}^{\dagger}\mathbf{F}^{-1} \boldsymbol{\gamma}_{\infty}$. This completes the proof.
\bibliographystyle{IEEEtran}
\section{Conclusion \& Future Work}
In this paper, we approach the problem of generating extended summaries for long documents. Our proposed model is a multi-task learning approach that unifies the sentence selection and section prediction processes to extract summary-worthy sentences. We further collect two large-scale extended summary datasets (arXiv-Long and PubMed-Long) from scientific papers. Our results on three datasets show the efficacy of the joint multi-task model on the extended summarization task. While it achieves performance competitive with the baseline on one of the three datasets, it consistently improves over the baseline on the other two. We further performed extensive quantitative and qualitative analyses of the results generated by both models, which revealed our model's qualities compared to the baseline. Our error analysis suggests that the model's performance depends heavily on the multi-tasking objectives. Future studies could fruitfully explore this issue by optimizing the multi-task objectives so that both the sentence selection and section prediction tasks benefit.
\section{Experimental Setup}
In this section, we give details about the pre-processing steps applied to the datasets and the parameters used for the evaluated models.
For our baseline, we used the pre-trained \textsc{BertSum} model and implementation provided by the authors \cite{Liu2019TextSW}.\footnote{\url{https://github.com/nlpyang/PreSumm}} The \textsc{BertSumExtMulti} model is the one used in \cite{sotudeh-gharebagh-etal-2020-guir}, but without the post-processing module at inference time, which utilizes trigram-blocking~\cite{Liu2019TextSW} to hinder repetitions in the final summary. We intentionally removed the post-processing part as the model attained higher scores in the absence of this module throughout our experiments. To obtain the ground-truth section label associated with each sentence, we utilized the external sequential-sentence package\footnote{\url{https://github.com/allenai/sequential_sentence_classification}} by \citet{Cohan2019PretrainedLM}. To provide oracle labels for source sentences in our datasets, we use a greedy labelling approach \cite{Liu2019TextSW} with a slight modification, labelling up to the top 30, 15, and 25 sentences for the Longsumm, arXiv-Long, and PubMed-Long datasets, respectively, since these numbers of oracle sentences yielded the highest oracle scores.\footnote{The modification was made to ensure that the oracle sentences are sampled from diverse sections.} For the joint model, we set $\alpha$ (the loss weighting parameter) to 0.5 as it resulted in the highest scores throughout our experiments. In all our experiments, we pick the checkpoint that achieves the best average of \textsc{Rouge-2} and \textsc{Rouge-L} scores on the validation intervals as our best model for inference.
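As a minimal sketch of the greedy labelling idea (not the exact PreSumm implementation: a bigram-overlap proxy stands in for the actual \textsc{Rouge} scorer, and all names below are ours):

```python
def _bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def greedy_oracle_labels(src_sents, summary, max_sents=30):
    """Greedy oracle labelling in the style of Liu & Lapata (2019):
    repeatedly add the source sentence that most increases overlap with
    the ground-truth summary, stopping at `max_sents` (30/15/25 per
    dataset in the paper) or when no sentence adds new overlap.
    A bigram-recall proxy replaces the real ROUGE scorer here."""
    ref = _bigrams(summary.split())
    selected = set()
    covered = set()            # reference bigrams already matched
    labels = [0] * len(src_sents)
    for _ in range(max_sents):
        best, best_gain = None, 0
        for i, sent in enumerate(src_sents):
            if i in selected:
                continue
            gain = len((_bigrams(sent.split()) & ref) - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:       # no sentence improves coverage
            break
        selected.add(best)
        covered |= _bigrams(src_sents[best].split()) & ref
        labels[best] = 1
    return labels
```

A usage example: given a toy document and summary, the function returns one 0/1 oracle label per source sentence.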
\section{Introduction}
In the past few years, there has been significant progress on both extractive \citep[e.g.,][]{Nallapati2017SummaRuNNerAR, Zhou2018NeuralDS, Liu2019TextSW, Xu2020DiscourseAwareNE, Jia2020NeuralES} and abstractive \citep[e.g.,][]{See2017GetTT, Cohan2018ADA, MacAvaney2019SIG, zhang2019pegasus, Sotudeh2020AttendTM, Dong2020MultiFactCI} approaches for document summarization. These approaches generate a concise summary of a document, capturing its salient content.
However, for a longer document containing numerous details, it is sometimes helpful to read an extended summary providing details about its different aspects. Scientific papers are examples of such documents; while their abstracts provide a short summary of their main methods and findings, abstracts do not include details of the methods or experimental conditions. For those who seek more detailed information about a document without having to read it in its entirety, an extended or long summary can be desirable~\cite{chandrasekaran-etal-2020-overview-insights, sotudeh-gharebagh-etal-2020-guir, ghosh-roy-etal-2020-summaformers}.
Many long documents, including scientific papers, follow a certain hierarchical structure in which content is organized into multiple sections and sub-sections. For example, research papers often describe objectives, the problem, methodology, experiments, and conclusions \cite{collins-etal-2017-supervised}. A few prior studies have noted the importance of document structure in shorter-form summary generation \cite{collins-etal-2017-supervised, Cohan2018ADA}. However, we are not aware of existing summarization methods that explicitly model document structure when generating \emph{extended summaries}.
We approach the problem of generating an extended summary by incorporating the document's hierarchical structure into the summarization model. Specifically, we hypothesize that integrating the processes of sentence selection and section prediction improves the summarization model's performance over existing baseline models on the extended summarization task. To substantiate our hypothesis, we test our proposed model on three extended summarization datasets, namely arXiv-Long, PubMed-Long, and Longsumm. We further provide comprehensive analyses of the generated results for the two long datasets, demonstrating the qualities of our model over the baseline. Our analysis reveals that the multi-tasking model helps with adjusting the sentence extraction probability to the advantage of salient sentences scattered across different sections of the document. Our contributions are threefold:
\begin{enumerate}
\item A multi-task learning approach for leveraging document structure in generating extended summaries of long documents.
\item In-depth and comprehensive analyses of the generated results to explore the qualities of our model in comparison with the baseline model.
\item Collecting two large-scale extended summarization datasets with oracle labels to facilitate ongoing research in the extended summarization domain.
\end{enumerate}
\section{Dataset}
We use three extended summarization datasets in this research. The first is the Longsumm dataset, provided in the Longsumm 2020 shared task~\cite{chandrasekaran-etal-2020-overview-insights}. To further validate the model, we collect two additional datasets, called arXiv-Long and PubMed-Long, by filtering the instances of the arXiv and PubMed corpora to retain those whose abstract contains at least 350 tokens. Also, to measure how our model works on a mixed varied-length scientific dataset, we exploit the arXiv summarization dataset \cite{Cohan2018ADA}.
\subsubsection{Longsumm} The Longsumm dataset was provided for the Longsumm challenge~\cite{chandrasekaran-etal-2020-overview-insights}, whose aim was to generate extended summaries for scientific papers. It consists of two types of summaries:
\begin{itemize}
\item Extractive summaries: these summaries come from the TalkSumm dataset \citep{Lev2019TalkSummAD}, containing 1705 extractive summaries of scientific papers derived from their video talks at conferences (i.e., ACL, NAACL, etc.). Each summary within this corpus is formed by appending the top 30 sentences of the paper.
\item Abstractive summaries: an add-on dataset containing 531 abstractive summaries from several CS domains such as Machine Learning, NLP, and AI, written by NLP and ML researchers on their blogs. The length of the summaries in this dataset ranges from 50 to 1500 words per paper.
\end{itemize}
In our experiments, we use the extractive set along with 50\% of the abstractive set as our training set, containing 1969 papers, and 20\% of the abstractive set as the validation set. Note that these splits are made for the purpose of our internal experiments, as the official test set containing 22 abstractive summaries is blind~\cite{chandrasekaran-etal-2020-overview-insights}.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.50]{model_trans.pdf}
\caption{The overview of the \textsc{BertSumExtMulti} model. The baseline model (i.e., \textsc{BertSumExt}) is shown with dashed borders. The extension to the baseline model is the addition of the section prediction linear layer (green box).}
\label{fig:model}
\end{figure*}
\subsubsection{arXiv-Long \& PubMed-Long.} To further test our methods on additional datasets, we construct two extended summarization datasets for our task. For the first dataset, we take the arXiv summarization dataset introduced by~\citet{Cohan2018ADA} and filter the instances whose abstract (i.e., ground-truth summary) contains at least 350 tokens. We call this dataset arXiv-Long. We repeat the same process on the PubMed papers obtained from the Open Access FTP service\footnote{\url{https://www.ncbi.nlm.nih.gov/pmc/tools/ftp}} and call this dataset PubMed-Long. The motivation is that we are interested in validating our model on extended summarization datasets to investigate its effects compared to the existing works, and 350 is the length threshold that we use to characterize papers with ``long'' summaries. The resulting sets contain 11,149 instances for the arXiv-Long dataset and 88,035 instances for the PubMed-Long dataset. Note that the abstracts of the papers are used as ground-truth summaries in these two datasets. The overall statistics of the datasets are shown in Table \ref{tab:dsStat}. We release these datasets to facilitate future research in extended summarization.\footnote{\url{https://github.com/Georgetown-IR-Lab/ExtendedSumm}}
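The length-based filtering above amounts to a one-line criterion; a minimal sketch (the \texttt{abstract} field name is a hypothetical schema, not the released format, and whitespace splitting stands in for the actual tokenizer):

```python
def filter_long_abstracts(dataset, min_tokens=350):
    """Keep only instances whose ground-truth summary (the abstract)
    has at least `min_tokens` tokens -- the criterion used to build
    arXiv-Long and PubMed-Long. `dataset` is assumed to be an iterable
    of dicts carrying an 'abstract' field (hypothetical schema)."""
    return [ex for ex in dataset
            if len(ex["abstract"].split()) >= min_tokens]
```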
\begin{table}[h]
\scalebox{.78}{
\begin{tabular}{lrrr}
\toprule
\multirow{2}{*}{Datasets} & \multirow{2}{*}{\# docs} & avg. doc. length & avg. summ. length\\
& & (tokens) & (tokens) \\
\midrule\vspace{-1em}
\\
arXiv & 215K & 4938 & 220\\
Longsumm & 2.2K & 5858 & 920\\
arXiv-Long & 11.1K & 9221 & 574 \\
PubMed-Long & 88.0K & 5359 & 403\\
\bottomrule
\end{tabular}
}
\caption{Statistics on arXiv~\cite{Cohan2018ADA}, Longsumm~\cite{chandrasekaran-etal-2020-overview-insights}, and two extended summarization datasets (arXiv-Long, PubMed-Long), collected by this work.}
\label{tab:dsStat}
\end{table}
\vspace{-1em}
\section{Methodology}
In this section, we discuss our proposed method, which aims at jointly learning to predict sentence importance and the corresponding section. Before discussing the details of our summarization model, we review the background that provides the basis for implementing our method.
\subsection{Background}
\subsubsection{Extractive Summarization}
An extractive summarization system aims at extracting salient sentences to be included in the summary. Formally, let $P$ denote a scientific paper containing sentences $[s_1, s_2, s_3, ..., s_m]$, where $m$ is the number of sentences. Extractive summarization is then defined as the task of assigning a binary label ($\hat{y}_i \in \{0,1\}$) to each sentence $s_i$ within the paper, signifying whether the sentence should be included in the summary.
\subsubsection{\textsc{BertSum}: \textsc{Bert} for Summarization}
As our base model, we use the \textsc{BertSum} extractive summarization model \cite{Liu2019TextSW}, a \textsc{Bert}-based sentence classification model fine-tuned for summarization.
\textsc{BertSum} outputs sentence representations for the input document; several inter-sentence Transformer layers are then stacked upon \textsc{BertSum} to collect document-level features. The final output layer is a linear classifier with a sigmoid activation function that decides whether each sentence should be included. The loss function is defined as follows:
\begin{equation}
\mathcal{L}_1 = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i\mbox{log}(\hat{y_i}) + (1-y_i)\mbox{log}(1-\hat{y_i}) \right]
\end{equation}
where $N$ is the number of sentences, $\hat{y_i}$ is the output of the model, and $y_i$ is the corresponding target label. In our experiments, we use this model to extract salient sentences (i.e., those with the positive label) to form the summary. We set this model as the baseline, called \textsc{BertSumExt}~\cite{Liu2019TextSW}.
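For illustration, $\mathcal{L}_1$ can be sketched in plain Python as follows (variable names are ours; the actual implementation follows the PreSumm codebase):

```python
import math

def sentence_selection_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy over per-sentence inclusion scores.

    y_true: 0/1 labels; y_pred: sigmoid outputs in (0, 1).
    Mirrors L1 = -(1/N) * sum_i [y_i*log(p_i) + (1-y_i)*log(1-p_i)].
    """
    n = len(y_true)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / n
```

For a single sentence scored at 0.5, the loss is $\log 2 \approx 0.693$, as expected for an uninformative prediction.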
\subsection{Our model: a section-aware summarizer}
Inspired by prior works that have studied the effect of a document's hierarchical structure on the summarization task \cite{Conroy2017SectionMM, Cohan2018ADA}, we define a section prediction task aiming at predicting the relevant section for each sentence in the document. Specifically, we add an additional linear classification layer on top of the \textsc{BertSum} sentence representations to predict the section relevant to each sentence. The loss function for the section prediction network is defined as follows:
\begin{equation}
\mathcal{L}_2 = -\sum_{i=1}^{S} y_i\mbox{log}(\hat{y_i})
\end{equation}
where $y_i$ and $\hat{y}_i$ are the ground-truth label and the model score for each of the $S$ sections.
The entire extractive network is then trained to optimize both tasks (i.e., sentence selection and section prediction) in a multi-task setting:
\begin{equation}
\mathcal{L}_{\text{Multi}} = \alpha \mathcal{L}_1 + (1-\alpha) \mathcal{L}_2
\end{equation}
where $\mathcal{L}_1$ is the binary cross-entropy loss from sentence selection task, $\mathcal{L}_2$ is the categorical cross-entropy loss from section prediction network, and $\alpha$ is the weighting parameter that balances the learning procedure between the sentence and section prediction tasks.
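A sketch of how the two losses combine (our own helper names; the paper sets $\alpha$ to 0.5):

```python
import math

def section_prediction_loss(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy over section classes:
    L2 = -sum_i y_i * log(p_i), with y one-hot over the S sections."""
    return -sum(y * math.log(max(p, eps))
                for y, p in zip(y_true, y_pred))

def multitask_loss(l_sent, l_sect, alpha=0.5):
    """L_Multi = alpha * L1 + (1 - alpha) * L2.

    alpha balances sentence selection (L1, binary CE) against
    section prediction (L2, categorical CE)."""
    return alpha * l_sent + (1.0 - alpha) * l_sect
```

With $\alpha = 1$ the model reduces to plain sentence selection; with $\alpha = 0$ it trains only the section predictor.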
\begin{table*}[t]
\centering
\begin{center}
\begin{tabular*}{\textwidth}{l@{\hspace{3em}}lclllllll}
\toprule
& & \multicolumn{3}{c}{Validation} & & & \multicolumn{3}{c}{Test} \\
\cline{3-5} \cline{7-9}
Model & Dataset & \small RG-1(\%) &\small RG-2(\%) &\small RG-L(\%) & &\small RG-1(\%) &\small RG-2(\%) &\small RG-L(\%) \\
\midrule
\textsc{BertSumExt} & \multirow{2}{*}{Longsumm} & \fontsize{11}{60}\selectfont 43.2 & \fontsize{11}{60}\selectfont 12.4 & \fontsize{11}{60}\selectfont 16.8 & & \fontsize{11}{60}\selectfont --& \fontsize{11}{60}\selectfont --& \fontsize{11}{60}\selectfont -- \\
\textsc{BertSumExtMulti} & & \fontsize{11}{60}\selectfont \textbf{43.3} & \fontsize{11}{60}\selectfont \textbf{13.0\ensuremath{{}^{\textstyle *}} }& \fontsize{11}{60}\selectfont \textbf{17.0} & & \fontsize{11}{60}\selectfont 53.1& \fontsize{11}{60}\selectfont 16.8& \fontsize{11}{60}\selectfont 20.3 \\
\midrule
\textsc{BertSumExt} & \multirow{2}{*}{arXiv-Long} & \fontsize{11}{60}\selectfont 47.1 & \fontsize{11}{60}\selectfont 18.2 & \fontsize{11}{60}\selectfont 20.8 & & \fontsize{11}{60}\selectfont 47.2& \fontsize{11}{60}\selectfont 18.4& \fontsize{11}{60}\selectfont 21.1 \\
\textsc{BertSumExtMulti} & & \fontsize{11}{60}\selectfont \textbf{47.8\ensuremath{{}^{\textstyle *}} } & \fontsize{11}{60}\selectfont \textbf{18.9\ensuremath{{}^{\textstyle *}} }& \fontsize{11}{60}\selectfont \textbf{21.3\ensuremath{{}^{\textstyle *}} } & & \fontsize{11}{60}\selectfont \textbf{47.8\ensuremath{{}^{\textstyle *}} }& \fontsize{11}{60}\selectfont \textbf{19.2\ensuremath{{}^{\textstyle *}} }& \fontsize{11}{60}\selectfont \textbf{21.5\ensuremath{{}^{\textstyle *}} } \\
\midrule
\textsc{BertSumExt} & \multirow{2}{*}{PubMed-Long} & \fontsize{11}{60}\selectfont \textbf{49.1} & \fontsize{11}{60}\selectfont \textbf{24.3} & \fontsize{11}{60}\selectfont \textbf{25.7} & & \fontsize{11}{60}\selectfont \textbf{49.1}& \fontsize{11}{60}\selectfont \textbf{24.5}& \fontsize{11}{60}\selectfont \textbf{25.8} \\
\textsc{BertSumExtMulti} & & \fontsize{11}{60}\selectfont 48.9 & \fontsize{11}{60}\selectfont 24.1& \fontsize{11}{60}\selectfont 25.5 & & \fontsize{11}{60}\selectfont 48.9& \fontsize{11}{60}\selectfont 24.1& \fontsize{11}{60}\selectfont 25.5
\vspace{0.01em} \\
\bottomrule
\end{tabular*}
\end{center}
\caption{\textsc{Rouge (F1)} results of the baseline (i.e., \textsc{BertSumExt}) and our proposed model (i.e., \textsc{BertSumExtMulti}) on extended summarization datasets. \ensuremath{{}^{\textstyle *}} shows the statistically significant improvement (paired t-test, $p<0.01$). The validation set for Longsumm refers to our internal validation set (20\% of the abstractive set) as there was no official validation set provided for this dataset.}
\label{tab:summ}
\end{table*}
\begin{table*}
\centering
\begin{center}
\centering
\begin{tabular}{ l @{\hspace{5\tabcolsep}} rrrc}
\toprule
& \fontsize{10}{60}\selectfont RG-1 & \fontsize{10}{60}\selectfont RG-2 & \fontsize{10}{60}\selectfont RG-L & F-Measure average \\
\midrule
\textit{Other systems} \\
\hspace{1em}\fontsize{10.5}{60}\selectfont Summaformers~\cite{ghosh-roy-etal-2020-summaformers} & \fontsize{11}{60}\selectfont 49.38 & \fontsize{11}{60}\selectfont \textbf{16.86} & \fontsize{11}{60}\selectfont \textbf{21.38} & 29.21 \\
\hspace{1em}\fontsize{10.5}{60}\selectfont Wing & \fontsize{11}{60}\selectfont 50.58 & \fontsize{11}{60}\selectfont 16.62 & \fontsize{11}{60}\selectfont 20.50 & 29.23 \\
\hspace{1em}\fontsize{10.5}{60}\selectfont IIITBH-IITP~\cite{reddy-etal-2020-iiitbh} & \fontsize{11}{60}\selectfont 49.03 & \fontsize{11}{60}\selectfont 15.74 & \fontsize{11}{60}\selectfont 20.46 & 28.41 \\
\hspace{1em}\fontsize{10.5}{60}\selectfont Auth-Team~\cite{gidiotis-etal-2020-auth} & \fontsize{11}{60}\selectfont 50.11 & \fontsize{11}{60}\selectfont 15.37 & \fontsize{11}{60}\selectfont 19.59 & 28.36 \\
\hspace{1em}\fontsize{10.5}{60}\selectfont CIST\_BUPT~\cite{li-etal-2020-cist} & \fontsize{11}{60}\selectfont 48.99 & \fontsize{11}{60}\selectfont 15.06 & \fontsize{11}{60}\selectfont 20.13 & \fontsize{11}{60}\selectfont 28.06 \\
\midrule
\textit{This work} \\
\hspace{1em} \fontsize{10.5}{60}\selectfont \textsc{BertSumExtMulti} & \fontsize{11}{60}\selectfont \textbf{53.11} & \fontsize{11}{60}\selectfont 16.77 & \fontsize{11}{60}\selectfont 20.34 & \textbf{30.07} \\
\bottomrule
\end{tabular}
\caption{\textsc{Rouge (F1)} results of our multi-tasking model on the blind test set of the Longsumm shared task containing 22 abstractive summaries \cite{chandrasekaran-etal-2020-overview-insights}, along with the performance of other participants' systems. We only show the top 5 participants in this table.}
\label{tab:blind}
\end{center}
\end{table*}
\begin{table*}[t]
\centering
\begin{center}
\begin{tabular*}{\textwidth}{l@{\hspace{4em}}l@{\hspace{4.5em}}rrrrrrrr}
\toprule
& & \multicolumn{3}{c}{Validation} & & & \multicolumn{3}{c}{Test} \\
\cline{3-5} \cline{7-9}
Model & Dataset & \small RG-1(\%) &\small RG-2(\%) &\small RG-L(\%) & &\small RG-1(\%) &\small RG-2(\%) &\small RG-L(\%) \\
\midrule
\textsc{BertSumExt} & \multirow{2}{*}{arXiv} & \fontsize{11.5}{60}\selectfont \textbf{43.6} & \fontsize{11.5}{60}\selectfont \textbf{16.6} & \fontsize{11.5}{60}\selectfont \textbf{20.2} & & \fontsize{11.5}{60}\selectfont \textbf{44.0}& \fontsize{11.5}{60}\selectfont \textbf{16.8}& \fontsize{11.5}{60}\selectfont \textbf{20.4} \\
\textsc{BertSumExtMulti} & & \fontsize{11.5}{60}\selectfont 43.4 & \fontsize{11.5}{60}\selectfont 16.5& \fontsize{11.5}{60}\selectfont 19.8 & & \fontsize{11.5}{60}\selectfont 43.5& \fontsize{11.5}{60}\selectfont 16.5& \fontsize{11.5}{60}\selectfont 20.0 \\
\bottomrule
\end{tabular*}
\end{center}
\caption{\textsc{Rouge (F1)} results of the baseline (i.e., \textsc{BertSumExt}) and our proposed model (i.e., \textsc{BertSumExtMulti}) on arXiv summarization dataset.}
\label{tab:arxsum}
\end{table*}
\section{Related Work}
\subsubsection{Scientific document summarization} Summarizing scientific papers has garnered vast attention from the research community in recent years, although it has been studied for decades. The characteristics of scientific papers, namely their length, writing style, and discourse structure, call for special modeling considerations for the summarization task in the scientific domain. Researchers have utilized different approaches to address these challenges.
In earlier work, \citet{Teufel2002SummarizingSA} proposed a Na\"ive Bayes classifier for content selection over a document's sentences with regard to their rhetorical role.
More recent works have highlighted the importance of discourse structure and its usefulness in summarizing scientific papers. For example, \citet{collins-etal-2017-supervised} used a set of pre-defined section clusters in which source sentences appear as a categorical feature to aid the model in identifying summary-worthy sentences. \citet{Cohan2018ADA} introduced the large-scale arXiv and PubMed datasets (collected from public repositories), used a hierarchical encoder to model the discourse structure of a paper, and then used an attentive decoder to generate the summary. More recently, \citet{Xiao2019ExtractiveSO} proposed a sequence-to-sequence model that incorporates both the global context of the entire document and the local context within the specified section. Inspired by the fact that discourse information is important when dealing with long documents \citep{Cohan2018ADA}, we utilize this structure in scientific summarization. Unlike prior works, we integrate the sentence selection and sentence section labeling processes through a multi-task learning approach.
In a different line of research, the use of citation context information has been shown to be quite effective at summarizing scientific papers~\cite{AbuJbara2011CoherentCS}. For instance, \citet{Cohan2015ScientificAS,cohan2018scientific} utilized a citation-based approach, denoting how the paper is cited in the reference papers, to form the summary. Here, we do not exploit any citation context information.
\subsubsection{Extended summarization} While summarization research has been extensively explored in the literature, extended summarization has only recently gained a great deal of attention from the research community. Among the first attempts to encourage ongoing research in this field, \citet{chandrasekaran-etal-2020-overview-insights} set up the Longsumm shared task\footnote{\url{https://ornlcda.github.io/SDProc/sharedtasks.html}} on producing extended summaries of scientific documents and provided an extended summarization dataset called Longsumm, on which participants were invited to generate extended summaries. To tackle this challenge, researchers used different methodologies. For instance, \citet{sotudeh-gharebagh-etal-2020-guir} proposed a multi-tasking approach to jointly learn the importance of a sentence along with its section for inclusion in the summary. Herein, we aim at validating the multi-tasking model on a variety of extended summarization datasets and provide a comprehensive analysis to guide future research. Moreover, \citet{ghosh-roy-etal-2020-summaformers} utilized section-contribution pre-computations (on the training set) to assign weights via a budget module for generating extended summaries. After specifying the section contributions, an extractive summarizer is run over each section separately to extract salient sentences. Unlike their work, we unify the sentence selection and sentence section prediction tasks to effectively aid the model in identifying summary-worthy sentences scattered across different sections. Furthermore, \citet{reddy-etal-2020-iiitbh} proposed a CNN-based classification network for extracting salient sentences. \citet{gidiotis-etal-2020-auth} proposed a divide-and-conquer (DANCER) approach~\cite{Gidiotis2020ADA} to identify the key sections of the paper to be summarized.
The PEGASUS abstractive summarizer~\cite{zhang2019pegasus} then runs over each section separately to produce section summaries, which are finally concatenated to form the extended summary. \citet{Beltagy2020LongformerTL} proposed ``Longformer'', which utilizes ``dilated sliding windows'', enabling the model to achieve better long-range coverage of long documents.
To the best of our knowledge, we are the first to conduct a comprehensive analysis of the generated summarization results in the extended summarization domain.
\section{Results}
In this section, we present the performance of the baseline and our model on the validation and test sets of the extended summarization datasets. We then discuss our proposed model's performance compared to the baseline on a mixed varied-length summarization dataset (i.e., arXiv). As evaluation metrics, we report the summarization systems' performance in terms of \textsc{Rouge-1 (F1)}, \textsc{Rouge-2 (F1)}, and \textsc{Rouge-L (F1)}.
As shown in Table \ref{tab:summ}, incorporating the section predictor into the summarization model (i.e., the \textsc{BertSumExtMulti} model) performs fairly well compared to the baseline model. This is a particularly important finding, since it characterizes the importance of injecting document structure when summarizing a scientific paper. While the score gap is relatively large on the arXiv-Long and Longsumm datasets, the scores are similar on the PubMed-Long dataset.
As observed in Table~\ref{tab:blind}, the \textsc{BertSumExtMulti} approach performs best among the state-of-the-art long summarization methods on the blind test set of the LongSumm challenge~\cite{chandrasekaran-etal-2020-overview-insights}. While this model improves \textsc{Rouge-1} quite significantly over the other state-of-the-art systems, it stays competitive on the \textsc{Rouge-2} and \textsc{Rouge-L} metrics. In terms of the \textsc{Rouge} (F1) F-measure average, the \textsc{BertSumExtMulti} model ranks first by a considerable margin compared to the other systems.
To test the model on a mixed varied-length summarization dataset, we trained and tested it on the arXiv dataset~\cite{Cohan2018ADA}, which contains abstracts of varying lengths as ground-truth summaries. Table \ref{tab:arxsum} shows that our model achieves competitive performance on this dataset. While the model does not yield an improvement on the arXiv dataset, our hypothesis was that our model is superior to existing models on longer-form datasets, such as those used in this research, and the evaluation results on the long summarization datasets validate this hypothesis.
\section{Analysis}
In order to gain insights into how our multi-tasking approach works on different long datasets, we perform an extensive analysis in this section to explore the qualities of our multi-tasking system (i.e., \textsc{BertSumExtMulti}) over the baseline (i.e., \textsc{BertSumExt}). Specifically, we perform two types of analyses: 1) quantitative analysis; 2) qualitative analysis.
For the first part, we use two metrics. $\text{RG}_{\text{diff}}${} denotes the average \textsc{Rouge (F1)} difference (i.e., gap) between the baseline and our model\footnote{The average is taken over the \textsc{Rouge-1 (F1)}, \textsc{Rouge-2 (F1)}, and \textsc{Rouge-L (F1)} scores.}; positive values indicate an improvement, while negative values denote a decline. Similarly, $\text{F}_{\text{diff}}${} is the average difference in F1 score between the baseline and our model. We create three bins sorted by $\text{RG}_{\text{diff}}${}: \textsc{Improved}, containing the papers whose average \textsc{Rouge (F1)} score is improved by the multi-tasking model; \textsc{Tied}, containing those whose average \textsc{Rouge (F1)} score the multi-tasking model leaves unchanged; and \textsc{Declined}, containing those whose average \textsc{Rouge (F1)} score is decreased by the joint model.
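The binning scheme can be sketched as follows (a toy illustration with hypothetical helper names and made-up scores, not the code used in our experiments):

```python
# Illustrative sketch of the binning above. RG_diff is the per-document
# average ROUGE (F1) gap between the multi-tasking model and the baseline.

def avg_rouge(s):
    # average of ROUGE-1, ROUGE-2, and ROUGE-L F1 scores
    return (s["rouge1"] + s["rouge2"] + s["rougeL"]) / 3.0

def assign_bins(baseline, multitask):
    # split document ids into Improved / Tied / Declined by the sign of RG_diff
    bins = {"improved": [], "tied": [], "declined": []}
    for doc_id in baseline:
        rg_diff = avg_rouge(multitask[doc_id]) - avg_rouge(baseline[doc_id])
        key = "improved" if rg_diff > 0 else "declined" if rg_diff < 0 else "tied"
        bins[key].append((doc_id, rg_diff))
    return bins

# Toy example with two documents (scores are illustrative only):
base = {"d1": {"rouge1": 40.0, "rouge2": 15.0, "rougeL": 35.0},
        "d2": {"rouge1": 42.0, "rouge2": 18.0, "rougeL": 37.0}}
multi = {"d1": {"rouge1": 43.0, "rouge2": 16.0, "rougeL": 37.0},
         "d2": {"rouge1": 41.0, "rouge2": 17.0, "rougeL": 36.0}}
bins = assign_bins(base, multi)
```

Here document \texttt{d1} falls into the \textsc{Improved} bin ($\text{RG}_{\text{diff}} = 2.0$) and \texttt{d2} into the \textsc{Declined} bin.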
\begin{figure*}
\centering
\bgroup
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{0}
\begin{tabular}{c@{\hspace{5em}}c}
\includegraphics[scale=0.47]{longumm-length/RG2.pdf} &
\includegraphics[scale=0.47]{arXivL/RG2.pdf}\vspace{0.5em}\\
\vspace{0.5em}
(a) \textsc{Rouge-2} scores on Longsumm & (b) \textsc{Rouge-2} scores on arXiv-Long \\
\end{tabular}
\caption{Bar charts showing the correlation of ground-truth summary length (in tokens) with the performance of the baseline (i.e., \textsc{BertSumExt}) and our multi-tasking model (i.e., \textsc{BertSumExtMulti}), on the test sets of the Longsumm and arXiv-Long datasets. Each bin contains 31 summaries for Longsumm and 196 summaries for arXiv-Long. As shown, the multi-tasking model generally outperforms the baseline on the later bins, which contain longer-form summaries.}
\label{fig:longsumm}
\egroup
\end{figure*}
\begin{figure*}[t]
\centering
\bgroup
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{0}
\begin{tabular}{c}
\includegraphics[scale=0.053]{ER/astro-ph9807040-BertSum.p.pdf} \\
\multicolumn{1}{l}{\hspace{0.6cm}(a) Extraction probability distribution of the baseline model (i.e., \textsc{BertSumExt}) over the source sentences.} \\
\includegraphics[scale=0.053]{ER/astro-ph9807040-multi.p.pdf} \\
\multicolumn{1}{l}{\hspace{-0.2cm}(b) Extraction probability distribution of the multi-tasking model (i.e., \textsc{BertSumExtMulti}) over the source sentences.} \\
\end{tabular}
\vspace{-0.5em}
\caption{Heat-maps showing the extraction probabilities over the source sentences (Paper ID: \texttt{astro-ph9807040}, sampled from the arXiv-Long dataset). For simplicity, we show only the sentences that receive over 15\% extraction probability from the models. The cells bordered in black show the models' final selections, and oracle sentences are indicated with *.}
\label{fig:heatmap}
\egroup
\end{figure*}
For the qualitative analysis, we specifically aim to compare the methods in terms of section distribution, since that is where our method's improvements are expected to come from. We additionally conduct a length analysis over the summaries generated by the baseline versus our model.
\begin{table}[h]
\centering
\scalebox{0.9}
{
\begin{tabular}{lcrrr}
\toprule
Bin & Dataset & Count & $\text{RG}_{\text{diff}}${} & $\text{F}_{\text{diff}}${} \\
\midrule
\vspace{-0.4em} \\
\textsc{Improved} & \multirow{3}{*}{Longsumm} & 76 & 2.05 & 6.16 \\
\textsc{Tied} & & 4 & 0 & 0 \\
\textsc{Declined} & & 74 & \textminus1.47 & 1.95\vspace{0.5em}\\
\midrule
Total & & 154 & 0.31 & 4.11 \\
\bottomrule
\vspace{-0.4em} \\
\textsc{Improved} & \multirow{3}{*}{arXiv-Long} & 1,084 & 2.40 & 4.47 \\
\textsc{Tied} & & 67 & 0 & 0.32 \\
\textsc{Declined} & & 801 & \textminus1.82 & \textminus1.34\vspace{0.5em} \\
\midrule
Total & & 1,952 & 0.59 & 1.94 \\
\bottomrule
\end{tabular}
}
\vspace{0.1cm}
\caption{\textsc{Improved}, \textsc{Tied}, and \textsc{Declined} bins on the test set of Longsumm and arXiv-Long datasets. The numbers show the improvements (positive) and drops (negative) compared to the baseline model (i.e., \textsc{BertSumExt}).}
\label{tab:lsum-ann}
\end{table}
\subsection{Quantitative Analysis}
We first perform the quantitative analysis over the long summarization datasets' test sets in two parts: 1) \textit{Metric analysis}, which compares the bins based on the average \textsc{Rouge} score difference between the baseline and our model; 2) \textit{Length analysis}, which examines the correlation between the summary length in each bin and the models' performance.
\subsubsection{Metric analysis}
Table \ref{tab:lsum-ann} reports the overall statistics for the Longsumm and arXiv-Long datasets in terms of the average differences in \textsc{Rouge} and F1 scores. As shown, the multi-tasking approach improves 76 summaries on Longsumm with an average \textsc{Rouge (F1)} gain of 2.05\%. The gain is even larger on the arXiv-Long dataset, with an average \textsc{Rouge} improvement of 2.40\%.
Interestingly, our method consistently improves the F1 measure overall (see the total F1 scores in Table \ref{tab:lsum-ann}). The F1 metric appears to correlate directly with the \textsc{Rouge (F1)} metric on the arXiv-Long dataset, whereas this is not the case for the \textsc{Declined} bin of the Longsumm dataset; this might be due to the relatively small test set of Longsumm. Note that the \textsc{Improved} bin has a higher count and larger metric gains than the \textsc{Declined} bin on both datasets in our evaluation.
\subsubsection{Length analysis} We analyze the summaries generated by both models to see whether the summary length affects the models' performance, using the bar charts in Figure \ref{fig:longsumm}. The bar charts compare both models across length bins (x-axis) that are evenly populated (i.e., each contains the same number of papers). We used five bins (31 summaries each) for the Longsumm dataset and ten bins (196 summaries each) for arXiv-Long.
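The evenly-populated binning can be sketched as follows (an assumed quantile-style split, not necessarily our exact implementation):

```python
# Split documents into k bins with equal counts, ordered by ground-truth
# summary length (illustrative; any remainder documents are dropped).

def length_bins(lengths, k):
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    size = len(lengths) // k
    return [order[b * size:(b + 1) * size] for b in range(k)]

# Toy example: 10 summaries split into 5 bins of 2 documents each.
lens = [120, 450, 90, 300, 800, 60, 520, 210, 640, 370]
bins = length_bins(lens, 5)
```

Each bin thus contains the same number of papers, so per-bin \textsc{Rouge} averages are directly comparable across bins.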
As shown in Figure \ref{fig:longsumm}, as the length of the ground-truth summary increases, the multi-tasking model generally improves over the baseline consistently on both datasets, except for the last bin of Longsumm (Figure \ref{fig:longsumm}(a)), where it achieves comparable performance. This behaviour is also observed on \textsc{Rouge-1} and \textsc{Rouge-L} for the Longsumm dataset. The \textsc{Rouge} improvement is even more noticeable on the arXiv-Long dataset (see Figure \ref{fig:longsumm}(b)). Thus, the length analysis supports our hypothesis that the multi-tasking model outperforms the baseline more significantly when the summary is of longer form.
\subsection{Qualitative Analysis}
Examining the \textsc{Improved} bin, we find that the multi-tasking model can effectively sample sentences from diverse sections when the ground-truth summary is also drawn from diverse sections. It improves significantly over the baseline when the extractive model detects salient sentences from important sections.
By investigating the summaries in the \textsc{Declined} bin, we noticed that while our multi-tasking approach can adjust the extraction probability distribution toward diverse sections, it has difficulty picking salient (i.e., positive) sentences from the corresponding section, which leads to relatively lower \textsc{Rouge} scores. This might be improved if the two networks (i.e., sentence selection and section prediction) were optimized jointly in a more principled way, such that the extractive summarizer could further select salient sentences from the identified sections. For example, an improved multi-tasking method could use task prioritization \cite{Guo2018DynamicTP} to dynamically balance the learning process between the two tasks during training, rather than using a fixed $\alpha$ parameter.
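The fixed-$\alpha$ combination can be sketched as follows (a plain-Python toy, assuming the joint objective is a convex combination of the two task losses; the actual losses in our system are produced by the neural networks):

```python
import math

# Toy sketch of a fixed-alpha joint objective (assumed form, not our exact code).

def cross_entropy(probs, label):
    # negative log-likelihood of the gold label under a toy distribution
    return -math.log(probs[label])

def joint_loss(loss_extract, loss_section, alpha=0.5):
    # fixed alpha; dynamic task prioritization would adapt this weight
    # during training based on per-task difficulty instead
    return alpha * loss_extract + (1.0 - alpha) * loss_section

l_ext = cross_entropy([0.7, 0.3], 0)       # sentence-selection loss (toy)
l_sec = cross_entropy([0.2, 0.5, 0.3], 1)  # section-prediction loss (toy)
total = joint_loss(l_ext, l_sec, alpha=0.5)
```

With a fixed $\alpha$, a hard section-prediction task can dominate the gradient even when sentence selection needs more capacity, which motivates the dynamic weighting discussed above.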
In the cases where the F1 score and \textsc{Rouge (F1)} were inconsistent with each other, we observed that adding non-salient sentences to the final summary hurts the final \textsc{Rouge (F1)} scores. In other words, while the multi-tasking approach achieves a higher F1 score than the baseline because it chooses different non-salient (i.e., negative) sentences, the overall \textsc{Rouge (F1)} scores drop slightly. Conditioning on the decoding length (i.e., the number of selected sentences) might help here, as done in \cite{Mao2020MultidocumentSW}.
Figure \ref{fig:heatmap} shows the extraction probabilities that each model assigns to the source sentences. The baseline model picks most of its sentences (47\%) from the beginning of the paper, while the multi-tasking approach (b) effectively redistributes the probability mass to summary-worthy sentences spread across different sections of the paper, and selects them with higher confidence. Our model achieves an overall F1 score of 53.33\% on this sample paper, versus 33.33\% for the baseline.
Superfluid $^3$He is a condensed matter system with a complex order parameter. Superfluidity onsets with the condensation of pairs into a state with finite angular momentum via a second order phase transition at a pressure dependent transition temperature, $T_{\rm{c}}$ \cite{LeggettRMP1975,WheatleyRMP1975,Lee1997,Dobbs2001,vollhardt2013}. Pressure dependent strong coupling favors the anisotropic A phase at high pressures, while the isotropic B phase is the stable phase below the $T_{\rm{AB}}(P)$ line \cite{Greywall86SH}. Under these conditions the equilibrium phase diagram exhibits a polycritical point (PCP) at which the line of first order transitions ($T_{\rm{AB}}$) intersects the line of second order transitions ($T_{\rm{c}}$) at 21.22 bar and 2.232 mK (Figure~\ref{fig::0_PhasediagramDetailsofExperimentalCell}(a)). The transition between the A and B phases is first order and thus subject to hysteresis. At the PCP, the bulk free energies of A, B superfluid phases and normal state are equal.
At high pressure the A phase supercools well below $T_{\rm{AB}}$ and can be long lived \cite{schifferPRL1992_m}. A phase supercooling occurs because any formation of a bubble of radius $r$ of B phase (from the parent A phase) sets the unusually large interfacial energy ($\propto r^2$) \cite{Osheroff1977} against the small free energy gain ($\propto -r^3$) \cite{cahn-hilliard1958_m}, leading to a critical radius of $\approx$ 1 $\mu$m. The extreme purity and low temperatures that limit thermal fluctuations, together with the barrier to homogeneous nucleation, lead to calculated lifetimes of the supercooled A phase greater than the age of the Universe. The transition has been the subject of extensive experimental studies \cite{wheatley1974_m,hakonenprl1985_m,Wheatley1986,fukuyama1987_m,swift1987_m,BoydSwift1990,schifferPRL1992_m,BauerleNature1996,RuutuNature1996,BunkovPRL1998,BartkowiakPRL2000} (summarized briefly in Supplementary Note 1 \cite{supplement}) that have limited applicability to the experiments in this letter, since they were performed in a variety of magnetic fields and were not focused on the PCP. The A$\rightarrow$B transition has also been the subject of extensive theoretical investigation \cite{LeggettResp1985,LeggettPRL1986,leggettyip1990_m,LeggettJLTP,Kibble1976,Zurek1985,HongJLTP,TyePRB2011}. As Leggett has pointed out \cite{LeggettPRL1986,leggettyip1990_m,LeggettJLTP}, the nucleation mechanism of the B phase ``remains a mystery". Its study represents a unique opportunity to gain fundamental insights and is potentially relevant to phase transitions in the evolution of the early universe \cite{Volovik2002}.
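In the standard homogeneous nucleation picture, this competition between the surface and bulk terms reads
\begin{equation*}
\Delta F(r) = 4\pi r^{2}\,\sigma_{\rm AB} - \frac{4}{3}\pi r^{3}\,\Delta f,
\qquad
r_{\rm c} = \frac{2\sigma_{\rm AB}}{\Delta f},
\end{equation*}
where $\sigma_{\rm AB}$ is the A--B interfacial energy per unit area, $\Delta f$ is the bulk free-energy density gained by the B phase, and the critical radius $r_{\rm c}$ maximizes $\Delta F(r)$.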
\begin{figure}
\centering
\includegraphics[
width=\linewidth, keepaspectratio]{SupercoolingFig0.eps}
\caption{(a) The phase diagram of $^3$He \cite{Greywall86SH,PLTS2000_m}, showing the extent of the equilibrium A phase (yellow), the B phase (green) separated by the equilibrium $T_{\rm{AB}}$ line (green). Superfluidity onsets at the $T_{\rm{c}}$ line (red). The region investigated here is within the box centered on the polycritical point.
(b) Schematic of cell.}
\label{fig::0_PhasediagramDetailsofExperimentalCell}
\end{figure}
Here we study the nucleation of B phase in a well characterized isolated volume and in negligible magnetic field (the Earth’s field) near the PCP. In this region the free energy landscape as a function of complex order parameter, pressure and temperature is of particular interest. Over the limited $P,T$ phase space (box in Figure~\ref{fig::0_PhasediagramDetailsofExperimentalCell}(a)) we observe both a reproducibility of B phase nucleation and an unexpected path dependence.
Two mechanisms for nucleation of the B phase have experimental support. The ``Baked-Alaska mechanism" \cite{leggettyip1990_m} requires local heating by deposition of energy following passage of a cosmic ray or charged particle and was tested using quartz cells of roughness $<$ 10 nm \cite{schifferPRL1992_m,SchifferRMP1995} in a magnetic field of 28.3 mT. The A$\rightarrow$B transition could be induced by a nearby radioactive source, confirming aspects of the mechanism. In the cosmological or Kibble-Zurek scenario \cite{BunkovPRL1998,Kibble1976,Zurek1985}, small regions undergo phase transitions that are ``oriented" differently under quench conditions (cooling through $T_{\rm{c}}$) \cite{BauerleNature1996,BunkovPRL1998,Bunkov2013}. When they eventually coalesce, they produce a cosmic string, or its equivalent in $^3$He - a vortex line. Other, yet to be tested, models invoke Q balls \cite{HongJLTP} and Resonant Tunneling (RT) \cite{TyePRB2011}. RT is an intrinsic nucleation mechanism, in which the transition rate into the equilibrium B phase (“true vacuum”) depends on the details of the order parameter landscape. Under certain precise conditions of temperature and pressure a nearby “false vacuum” facilitates the transition. Thus the mechanism relies on the richness of the 18-dimensional order parameter of superfluid $^3$He, with multiple possible states \cite{Barton75,Marchenko1988}. Furthermore, some of these states have degeneracies, which are broken by weak interactions, for example the spin-orbit interaction. An example of this is the spatially modulated B phase, stabilized by confinement \cite{LevitinPRL19}, which may explain the observed absence of supercooling in Ref.~\cite{Zhelev17NC}.
Our experiment consists of two chambers (Figure~\ref{fig::0_PhasediagramDetailsofExperimentalCell}(b)), filled with bulk $^3$He and separated by a channel of height $D=1.1$ $\mu$m. The experimental set-up and the associated thermal parameters are described in detail in a previous publication \cite{lotnyk2020_m} (see also Supplementary Note 2 \cite{supplement}). The A$\rightarrow$B transition is observed in an isolated chamber (IC) using a quartz tuning fork \cite{Blaauwgeers07} whose resonant frequency, $f$, and $Q$ (quality factor, $Q=f/\Delta f$, with $\Delta f$ the full linewidth at half power) are monitored continuously. The second chamber (HEC) contains the silver heat exchanger, as well as a second tuning fork. Nucleation of B phase in the HEC does not propagate into the IC, because the A phase is stabilized in the channel by confinement \cite{Levitin13Science} under all conditions studied here. There are no sintered powders in the IC to promote nucleation of the B phase, but the surfaces are not specially prepared. The experiment is located where the magnetic field is $\le0.1$ mT, and the $^3$He pressure $P$ was regulated to within $\pm0.01$ bar using a room temperature gauge (see Supplementary Note 3 \cite{supplement}). Temperatures $T$ were read off from a $^3$He melting curve thermometer \cite{Greywall86SH} after correction for thermal gradients ($\le 15$ $\mu$K \cite{lotnyk2020_m}) and converted to the PLTS2000 temperature scale \cite{PLTS2000_m} (Supplementary Note 4 \cite{supplement}). The temperature $T$ (from the melting curve thermometer) and pressure $P$ (from the regulated gauge in the room temperature gas handling system) accurately represent the $T,P$ coordinates in the IC and HEC during all parts of the experiments (see Supplementary Note 3 \cite{supplement}).
\begin{figure}
\centering
\includegraphics[
width=0.8\linewidth, keepaspectratio]{SupercoolingFig1.eps}
\caption{The quality factor $Q$ of the quartz fork in the isolated chamber while cooling (solid blue squares) and warming (open red circles) at 21.8 bar. Dashed lines mark the supercooled A$\rightarrow$B and B$\rightarrow$A transitions.}
\label{fig::1_DetailsofExperimentalCell}
\end{figure}
The measured $Q$ of the IC fork while cooling (blue) and warming (red) through $T_{\rm{c}}$, and the A$\rightarrow$B (blue) or B$\rightarrow$A (red) transitions are shown in Figure~\ref{fig::1_DetailsofExperimentalCell}. The displacement of the dashed lines in Figure~\ref{fig::1_DetailsofExperimentalCell} illustrates supercooling via the hysteresis of the first order A$\rightarrow$B (B$\rightarrow$A) phase transitions. We cooled to within 5 $\mu$K of the supercooled transition at 22 bar and maintained the temperature within 5 $\mu$K of that transition for a day and observed no A$\rightarrow$B transition, emphasizing the stability of the metastable A phase close to the observed supercooled transition temperature.
The $P,T$ coordinates of the supercooled A$\rightarrow$B phase transitions, observed while ramping temperature at $\le 10$ $\mu$K/hr, are shown in Figure~\ref{fig::2 OverallSupercooling}(a) as left-pointing triangles, with a heavy blue line drawn to guide the eye. These points lie below the equilibrium $T_{\rm{AB}}$ line (light green) at zero magnetic field \cite{Greywall86SH}, where the free energies of the A and B phases are equal. The light green and heavy blue lines bound the supercooled A phase (light yellow). We observed the A$\rightarrow$B transition at 20.89 bar, $\sim$24 $\mu$K below $T_{\rm{c}}$, but no A$\rightarrow$B transition was seen at 20.88 bar (Supplementary Note 5 \cite{supplement}). Thus we do not extend the blue line to $T_{\rm{c}}$; instead, we draw a gray dashed line at 20.88 bar. Clearly, the A phase is reliably observed while cooling at constant pressure through $T_{\rm c}$ below the polycritical point; however, it does not reappear on warming at these pressures. This confirms that the magnetic field, which would otherwise stabilize a thin sliver of A phase, is negligible. The set of A$\rightarrow$B transitions observed in the HEC, along with the transitions shown here in the IC, is briefly discussed in Supplementary Note 5 \cite{supplement}; the presence of silver powder significantly raises the temperature of A$\rightarrow$B transitions.
\begin{figure}
\centering
\includegraphics[
width=0.45\linewidth, keepaspectratio]{SupercoolingFig2.eps}
\caption{(a) The red line marks the second-order phase transition $T_{\rm{c}}(P)$, from the normal liquid (blue) to the superfluid state. The light green line $T_{\rm{AB}}(P)$, marks the limit of the equilibrium A phase (dark yellow), where the B$\rightarrow$A transition is seen on warming. Blue, left-pointing triangles and the heavy blue line (guide to the eye) bound the supercooled A phase (light yellow). The grey dashed line at 20.88 bar shows the limit of supercooled A phase observed under slow constant pressure cooling at $\le 10$ $\mu$K/hr. A series of fast-cooled ($\sim$0.1 mK/hr) transitions ($Q~ vs.$ time) are shown at 21.0 bar (b) and 21.4 bar (c) following heat pulses that carry the IC into the normal state. The $Q$ of slow supercooled transitions (see Figure~\ref{fig::1_DetailsofExperimentalCell}) are marked by dotted grey lines. In (a) the arrows show trajectories of fast and slow cooled transitions including at 21.1 bar. For the full set of low pressure fast and slow cooled transitions and discussion of the stability of the A phase below the PCP, see Supplementary Notes 5-7 \cite{supplement}. }
\label{fig::2 OverallSupercooling}
\end{figure}
\begin{figure}
\centering
\includegraphics[
width=0.9\linewidth, keepaspectratio]{SupercoolingFig3.eps}
\caption{The path shown in dotted purple crossed the constant pressure cooled supercooled transition line (heavy blue line) at several points. Solid, dashed, and dot-dashed purple lines depict paths followed where cooling at constant pressure was followed by depressurization. A$\rightarrow$B transitions are denoted by purple triangles.
}
\label{fig::3 Details of Supercooling}
\end{figure}
To sample the A$\rightarrow$B transition statistics, we increased the drive voltage to the quartz fork in the IC (by 10$\times$) for a few hundred seconds, to warm the IC above $T_{\rm{c}}$ and then cool back through $T_{\rm{c}}$ and $T_{\rm{A\rightarrow B}}$ as rapidly as possible ($\sim$100 $\mu$K/hr at $T_{\rm{A\rightarrow B}}$). Warming the IC above $T_{\rm{c}}$ is essential to prevent premature nucleation by persistent pockets of B phase \cite{BartkowiakPRL2000}. The $Q$ following these pulses
is shown in Figure~\ref{fig::2 OverallSupercooling}(b,c) (see also Supplementary Note 5 \cite{supplement}). The $^3$He in the channel is certainly in the A phase before the IC cools through $T_{\rm{c}}$ \cite{Levitin13Science,Zhelev17NC,Davis2020,lotnyk2020_m} and the $^3$He in the HEC is in the B phase. In Figure~\ref{fig::2 OverallSupercooling}(b), the A$\rightarrow$B transition occurs in a very narrow interval of $Q$ (and thus $T$). The width of the distribution of $T_{\rm{A\rightarrow B}}$ at 21.4 bar is $\sigma= 3.6$ $\mu$K, close to the slow cooled $T_{\rm{A\rightarrow B}}$, and similarly for 21.1 bar, $\sigma= 3.0$ $\mu$K. At 21.0 bar, the fast cooled A$\rightarrow$B transitions were more broadly distributed ($\sigma= 6.0$ $\mu$K). The distributions are shown in Supplementary Note 5 \cite{supplement}. Pulsed experiments at 20.95 and 20.90 bar showed only a few A$\rightarrow$B transitions with most pulsed transitions crossing directly from the normal to the B phase. Slow cooled A$\rightarrow$B transitions were seen at 20.95, 20.92, 20.90 and 20.89 bar. These various slow and fast cooled transitions are shown in Supplementary Note 5 \cite{supplement}. The scatter in $T_{\rm{AB}}$ and increase in width of the distribution for fast cooled transitions at low pressures argues for the onset of an instability of the A phase under cooling at constant pressure from $T_{\rm{c}}$. The initiation of the A phase while cooling at constant pressure through $T_{\rm{c}}$ below the PCP is briefly discussed in Supplementary Note 6 \cite{supplement}. Termination of this instability line away from $T_{\rm{c}}$, similar to a critical point (see Supplementary Note 6 \cite{supplement}), is not excluded.
Despite the sharpness of the (blue) instability line at constant pressure, we now show that nucleation of the B phase is path-dependent. We carried out a series of experiments where we followed different trajectories in the $P,T$ plane (Figure~\ref{fig::3 Details of Supercooling}). It is clear that supercooling of A phase below the instability line (in one case involving several crossings of this line) is possible. Traversal of the gap between the apparent termination of the instability line and $T_{\rm{c}}$ is also possible. If the transition observed under constant pressure cooling were due to an enhanced transition probability at (or near) certain values of ($P,T$), then we should have observed an A$\rightarrow$B transition on crossing the $T_{\rm{A\rightarrow B}}$ ($P$ = Const.) line. We conclude that $P,T$ are insufficient to describe the probability of the change of state of the system.
\begin{figure}
\centering
\includegraphics[
width=0.9\linewidth, keepaspectratio]{SupercoolingFig4.eps}
\caption{Cyan lines, with differing symbols, show paths from 22 bar to 21.3 bar, from 22 bar to 20.6 bar, and from 21.3 bar to 20.6 bar, to observe A$\rightarrow$B transitions (blue triangles). A$\rightarrow$B transitions were also observed after cooling through $T_{\rm c}$ at 21.3 bar and then pressurizing to 22 bar (pink and purple lines with differing symbols), terminating in pink and purple triangles. A$\rightarrow$B transitions observed following depressurization (or pressurization) retain memory of the pressure at which $T_{\rm{c}}$ was traversed, since they supercool more deeply (or less deeply) than their constant pressure cooled counterparts. Supercooled A$\rightarrow$B transitions (paths not shown) that crossed through $T_{\rm{c}}$ at pressures between 23 and 22 bar, and then cooled through the ``blue line" below 21.5 bar while depressurizing, are shown as downward pointing triangles along a broad green line (guide to the eye).
}
\label{fig::4 More Details of Supercooling}
\end{figure}
In another series of experiments we find an enhanced region of supercooling (striped region in Figure~\ref{fig::4 More Details of Supercooling}) if we initially cool through $T_{\rm{c}}$ between 23 and 22 bar. This cooling is followed by a trajectory (paths not shown) in which we depressurize and cool slowly, all trajectories crossing the blue instability line below 21.5 bar. The supercooled A$\rightarrow$B transitions occur along a reasonably well-defined line in the $P,T$ plane, shown as a broad green line in Figure~\ref{fig::4 More Details of Supercooling}. This path-dependent enhancement of the supercooled region suggests a “memory” of the $T_{\rm{c}}$ at which the normal-superfluid transition occurred. Such a memory or path dependence is confirmed since {\it under}-supercooling (albeit small) is seen after pressurization (pink lines in Figure~\ref{fig::4 More Details of Supercooling}); similarly, depressurization following cooling at a constant pressure results in {\it greater} supercooling (cyan lines in Figure~\ref{fig::4 More Details of Supercooling}) compared to cooling at constant pressure through $T_{\rm{c}}$ to the same final pressure.
In summary, we have carried out a study of the nucleation of the superfluid B phase of $^3$He from the supercooled A phase in the vicinity of the polycritical point, where the difference in free energy of the two phases is small. On cooling at {\it constant pressure}, we identify a well-defined instability line in the $P,T$ plane at which the first order supercooled A$\rightarrow$B transition occurs. We find that this instability line appears to terminate at a point, separated from the line of second order normal-superfluid transitions $T_{\rm{c}}$, and at a pressure 0.3 bar below the PCP. The locus of the instability line does not depend on the cooling rates studied, which differ by an order of magnitude, except in the immediate vicinity of the terminus point. However, by following a variety of different trajectories in the $P,T$ plane we demonstrate that supercooling displays a path dependence. Thus pressure and temperature alone do not provide coordinates to specify where supercooled A phase transforms to B phase. An open question is the potential analog with path dependence in the supercritical region of classical liquids \cite{schienbein2018_m}, which may also relate to the observed terminus of the instability line.
We find that supercooling can be enhanced by crossing $T_{\rm{c}}$ and then depressurizing. In principle such a “memory effect” could be explained by small $^3$He-filled cavities in the surface connected to the bulk $^3$He via a narrow orifice (see Fig. 1 in \cite{leggettyip1990_m}). However we believe this is not a likely mechanism here (see Supplementary Note 8 \cite{supplement}). Our experiment also provides a test of the Baked-Alaska mechanism of cosmic ray induced nucleation in a well-motivated but relatively unexplored region of phase space near the PCP. We believe that neither the statistics of nucleation at the constant pressure instability line, nor the path dependence of nucleation are explained by this model.
We suggest that the full free energy landscape in the isolated chamber should be taken into consideration, within the framework of resonant tunnelling or alternative models. The equilibrium order parameter has a strong spatial dependence: at surfaces of the chamber and the tuning fork, where gap suppression depends on surface scattering; at sharp corners \cite{Machida2011, LevitinPRL19, heikkinen2019}. The orientation of the order parameter (texture) in the complex geometry of the chamber and tuning fork may also play a role, although in this case the energy scales are much smaller \cite{LeggettRMP1975,vollhardt2013}. Superfluid domain walls, both textural and “cosmic” \cite{Salomaa1988} may also play a role \cite{yang2011} and respond differently under (de)pressurization. All these effects are in the context of the bulk free energy landscape of the superfluid $^3$He order parameter, in which strong coupling effects (source of stability of A phase) are both pressure and temperature dependent \cite{WimanPRB2015}.
Further investigations of these phenomena will be aided by the following. Surface scattering conditions can be tuned from diffuse to specular by adjustment of surface $^4$He boundary layer \cite{TholenPRL1991,TholenPRB1993,heikkinen2019}. The free energy difference of bulk A and B phases can be tuned by magnetic fields \cite{wheatley1974a_m}. Surface quality and geometry can be tailored using silicon nanofabrication techniques \cite{Wiman2014,Zhelev17NC,zhelev18rsi_m}, extending the method of confining channels adopted in this work to isolate the chamber from B phase. The A$\rightarrow$B transition can be assayed by a non-invasive probe, such as NMR \cite{Levitin13PRL,LevitinPRL19}. It remains to be explored whether such path dependence is confined to the restricted region near the polycritical point.
We conjecture that the puzzling detachment of the constant pressure instability line and the reliable nucleation of A phase below the PCP, may arise from the fact that the sample is cooled through a channel in which the A phase is stabilized by confinement, and this imprints the A phase on the bulk chamber. If so, it may be possible to seed non-equilibrium phases of superfluid $^3$He, such as the polar phase, by cooling through a channel in which the polar phase is stabilized by oriented nanoscale structures \cite{HalperinNatPhys12,Wiman2014,Dmitriev2015,ZhelevNC2016}.
The quest to understand the nucleation of B phase from A phase remains open, with implications for cosmology. First order transitions have been proposed in the early universe, such as the electroweak transition \cite{Perelstein2014}, and in eternal inflation \cite{Guth2007}. These have potential signature signals in future gravitational wave detectors \cite{LISA,Hindmarsh2019,Caprini2020}, the prediction of which relies on nucleation theory. This provides strong motivation to identify the possible intrinsic nucleation mechanisms in superfluid $^3$He as a laboratory-based simulator for cosmology.
We acknowledge useful input from J.A. Sauls, B. Widom, H. Tye, M. Hindmarsh and A.J. Leggett. This work at Cornell was supported by the NSF under DMR-1708341, 2002692 (Parpia), PHY-1806357 (Mueller), and in London by the EPSRC under EP/R04533X/1 and STFC under ST/T006749/1. Fabrication of the channel was carried out at the Cornell Nanoscale Science and Technology Facility (CNF) with assistance and advice from technical staff. The CNF is a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation (Grant NNCI-1542081).
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction}
One of the most famous results on manifolds with nonnegative scalar curvature is the positive mass theorem proved by Schoen and Yau \cite{Schoen1979,ScYa2,ScYa3}. They proved that the Arnowitt-Deser-Misner (ADM) mass of each end of an $n$-dimensional asymptotically flat manifold with nonnegative scalar curvature is nonnegative. Moreover, if the ADM mass of an end is zero, then the manifold is isometric to the Euclidean space. Later, this was used by Schoen \cite{Sc} to completely solve the Yamabe problem. If the manifold is spin, Witten \cite{Witten} proved the positive mass theorem by a different method (see also Parker and Taubes \cite{PaTa}, Bartnik \cite{Ba}). All these results assume that the metric is smooth. It is natural to ask:
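For reference, in asymptotically flat coordinates the ADM mass of an end is given (up to a choice of normalization) by
\begin{equation*}
m_{\rm ADM}(g) = \lim_{r\to\infty} \frac{1}{2(n-1)\omega_{n-1}} \int_{S_r} \sum_{i,j=1}^{n} \left(\partial_i g_{ij} - \partial_j g_{ii}\right)\nu^{j}\, d\sigma,
\end{equation*}
where $\omega_{n-1}$ is the volume of the unit $(n-1)$-sphere, $S_r$ is the coordinate sphere of radius $r$, and $\nu$ is its outward unit normal.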
{\it If the manifold admits a singularity on a subset $\Sigma$, what are the conditions on $\Sigma$ under which the positive mass theorem still holds? }
The present paper is motivated by this question. It is necessary to assume that the metric is continuous: in \cite[Proposition 2.3]{ShiTam}, Shi and Tam constructed an asymptotically flat metric with a cone singularity and nonnegative scalar curvature, but with negative ADM mass.
In fact, there are many positive mass theorems for nonsmooth metrics, by Miao \cite{Miao2002}, Shi-Tam \cite{ShiTam1,ShiTam}, McFeron-Sz\'ekelyhidi \cite{McSz}, Lee-LeFloch \cite{Lee2015a}, and Li-Mantoulidis \cite{LiMa}. Miao \cite{Miao2002} and Shi-Tam \cite{ShiTam1} proved a positive mass theorem for Lipschitz metrics that are smooth away from a hypersurface $\Sigma$ satisfying certain conditions on its mean curvatures; \cite{ShiTam1} used this result to prove the positivity of the Brown-York quasilocal mass. McFeron and Sz\'ekelyhidi \cite{McSz} used Ricci flow to give a new proof of Miao's result and also proved the rigidity when the ADM mass is zero. Lee \cite{Lee13} considered a positive mass theorem for $(M^n,g)$ with bounded $W^{1,p}$-metric, $n<p\le \infty$, that is smooth away from a singular set $\Sigma$ whose $\frac{n}{2}(1-n/p)$-dimensional Minkowski content vanishes. Lee's result was improved recently by Shi-Tam \cite{ShiTam}, who showed that it suffices for the $(n-2)$-dimensional Minkowski content of $\Sigma$ to vanish. If $(M^n,g)$ is spin, Lee and LeFloch \cite{Lee2015a} proved a positive mass theorem for continuous metrics with bounded $W^{1,n}$-norm, where the metric may be singular; their theorem covers all the previous results for nonsmooth metrics under the additional assumption that the manifold is spin. If one further assumes control of the second derivatives of the metric, the positive mass theorem was established by Bartnik \cite{Ba} for metrics with bounded $W^{2,p}$-norm, $p>n$, and by Grant and Tassotti \cite{grant2014positive} for continuous metrics with bounded $W^{2,n/2}$-norm. Recently, Li-Mantoulidis \cite{LiMa} considered bounded metrics with skeleton singularities along codimension-two submanifolds.
Furthermore, it is remarkable that, without any further derivative assumptions on the metric, Li-Mantoulidis \cite{LiMa} were able to prove a positive mass theorem in dimension three with isolated singularities.
The main result of this paper can be considered as an extension of Shi-Tam \cite{ShiTam}, Lee \cite{Lee13} and Lee-LeFloch \cite{Lee2015a}; we improve and recover some of their results. The main theorem we will prove is the following. Our proof builds on the work of \cite{Miao2002,ShiTam,Lee13,LiMa}, with some extensions.
\begin{theorem}\label{thm1.2}
Let $M^n$ ($n\geq3$) be a smooth manifold and let $g\in C^0\cap W^{1,p}_{loc}(M)$ ($n\le p\le \infty$) be a complete asymptotically flat metric on $M$. Assume that $g$ is smooth away from a bounded closed subset $\Sigma$ with $\mathcal{H}^{n-\frac{p}{p-1}}(\Sigma)<\infty$ if $n\le p<\infty$ or $\mathcal{H}^{n-1}(\Sigma)=0$ if $p=\infty$, and that $R_g\ge 0$ on $M\setminus \Sigma$. Then the ADM mass of each end of $g$ is nonnegative. Moreover, the ADM mass of one end is zero if and only if $(M, g)$ is isometric to Euclidean space.
\end{theorem}
\begin{remark}
For the rigidity part, we will show that such a space has nonnegative Ricci curvature in the RCD sense provided that the mass is zero. The rigidity then follows from the volume rigidity of spaces with nonnegative Ricci curvature.
\end{remark}
\begin{remark}
The assumption of continuity of the metric is necessary; see Proposition 2.3 of \cite{ShiTam} for an example.
\end{remark}
\begin{remark}
For the case $p=\infty$, the condition $\mathcal{H}^{n-1}(\Sigma)=0$ is optimal, since one can construct a counterexample when $\Sigma$ is a hypersurface. In particular, this confirms a conjecture of Lee \cite{Lee13}.
\end{remark}
\vskip 3mm
\noindent
\textbf{Organization:}
In Section 2, we recall some basic facts about asymptotically flat manifolds. We approximate the singular metric by smooth metrics in the $W^{1,p}$ sense and prove that the scalar curvatures are close in an integral sense (see Lemma \ref{lm2.2}). Furthermore, we check in Lemma \ref{lm4.1} that the singular metric has nonnegative scalar curvature in the distributional sense introduced in \cite{Lee2015a}.
\noindent
In Section 3, we prove that the mass is nonnegative. The proof is based on ideas from \cite{Miao2002}, our approximation Lemma \ref{lm2.2}, and some uniform estimates for the conformal factors.
\noindent
In Section 4, we prove the rigidity part when the mass is zero. First, based on the argument of \cite{ShiTam}, we show that the metric is Ricci flat away from the singular set $\Sigma$. Next, using a gradient estimate for harmonic functions, we show that the space has nonnegative Ricci curvature in the RCD sense. The rigidity result then follows directly from the volume rigidity of spaces with nonnegative Ricci curvature.
\begin{comment}
Similarly, for Yamabe invariant theory we can also prove similar result.
\begin{theorem}\label{thm1.1}
Let $M^n$ be compact manifold with a metric $g\in C^0\cap W^{1,p}_{loc}(M)$ with $n\le p\le \infty$. Assume $g$ is $C^2$ away from a closed subset $\Sigma$ with $\mathcal{H}^{n-\frac{p}{p-1}}(\Sigma)<\infty$ if $p<\infty$ and $\mathcal{H}^{n-1}(\Sigma)=0$ if $p=\infty$, assume that $R_g\ge 0$ on $M\setminus \Sigma$ and the Yamabe invariant $\sigma(M)\le 0$, then $g$ is smooth and Ricci flat.
\end{theorem}
Let us outline the proof when $p=n$. In first step, we generalize Yamabe invariant to metric in $C^0\cap W^{1,n}_{loc}$ and show that the generalized Yamabe invariant coincides with the standard Yamabe invariant. In the second step, we show that if $g$ satisfies the assumption as the main theorem, then the generalized Yamabe invariant is nonnegative and equality holds if and only if $R_g\equiv 0$ on $M\setminus \Sigma$. In the third step, based on the argument in Li-Mantoulidi, we have that $g$ is Ricci flat on $M\setminus \Sigma$. In the fourth step, by choose a new coordinate, we can argue as Shi-Tam under the assumption on $\Sigma$ to get that $g\in C^0\cap W^{2,n/2}_{loc}$ which improves the regularity. Now standard argument (see also \cite{}) shows that $g$ is smooth and Ricci flat where we will present a Ricci flow proof.
\end{comment}
\section{Background}
In this section, we recall some definitions and study smoothing approximations of the metric. We also prove weak nonnegativity of the scalar curvature under the same conditions as in Theorem \ref{thm1.2}. Let us first give some fundamental definitions.
\begin{definition}[asymptotically flat]\label{definition1}
Let $M$ be a smooth $n$-manifold, and $g$ be a $C^0$ metric on $M$. We say that $(M,g)$ is asymptotically flat if there exists a compact subset $K$ of $M$ such that $g$ is $C^2$ on $M\backslash K$, $M\backslash K$ has finitely many components, say $\Sigma_l$, $l=1,\dots,p$, and for each component $\Sigma_l$ there exists a smooth diffeomorphism $\Phi_l$ from it to $\mathbb{R}^n$ minus a ball, such that, viewing $\Phi_l$ as a coordinate system on $\Sigma_l$, \[g_{ij}-\delta_{ij}=O(|x|^{-\delta}),\] \[g_{ij,k}=O(|x|^{-\delta-1}),\] \[g_{ij,kl}=O(|x|^{-\delta-2}),\] where $\delta$ is some constant greater than $(n-2)/2$, and commas denote partial derivatives in the coordinate system. We call each component $\Sigma_l$ an end of $M$.
\end{definition}
\begin{definition}[ADM mass]Given an asymptotically flat manifold $(M,g)$, we define the ADM mass of each end $\Sigma_l$ as the limit \[\lim\limits_{r \to \infty }\frac{1}{2(n-1)\omega_{n-1}}\int_{S_r}\sum_{i,j=1}^n(g_{ij,i}-g_{ii,j})\nu^jd\mu,\]
where $S_r$ is the coordinate sphere in $(\Sigma_l,\Phi_l)$ of radius $r$, $\nu$ is the unit outward normal vector of $S_r$, $\omega_{n-1}$ is the volume of the unit $(n-1)$-dimensional sphere, and $d\mu$ is the volume form of $S_r$ in the Euclidean metric. On a given end $\Sigma_l$, we will write $r=\Phi_l^*\left(\sqrt{\sum_{i=1}^n (x^i)^2}\right)$ throughout this paper.
\end{definition}
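As a standard illustration of these definitions (a routine computation, not used later), the spatial Schwarzschild metric is asymptotically flat with ADM mass $m$:

```latex
% On \mathbb{R}^n\setminus\{0\}, n\ge 3, consider the conformally flat metric
g_{ij}=\Big(1+\tfrac{m}{2}\,|x|^{2-n}\Big)^{\frac{4}{n-2}}\delta_{ij},
% which is scalar flat and satisfies the decay conditions with
% \delta=n-2>(n-2)/2. Writing g_{ij}=(1+\phi)\delta_{ij} with
% \phi=\tfrac{2m}{n-2}|x|^{2-n}+O(|x|^{2(2-n)}), one computes
\sum_{i,j=1}^n(g_{ij,i}-g_{ii,j})\nu^j=(1-n)\phi'(r)=2m(n-1)\,r^{1-n}+o(r^{1-n}),
% so that the limit defining the mass equals
\frac{1}{2(n-1)\omega_{n-1}}\cdot 2m(n-1)\,\omega_{n-1}=m .
```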
Bartnik \cite{Ba} proved that this limit exists and is a geometric invariant, provided that the scalar curvature of $g$ is integrable on $M$. We denote the mass of the end $\Sigma_l$ by $m(\Sigma_l,g)$. In this article we only need to consider one end of $M$, the other ends being handled in the same way, so we choose an arbitrary end and denote its mass by $m(g)$ for simplicity.
In this paper, we fix a smooth background metric $h$ on $M$ such that $C^{-1}h\le g\le Ch$ for some constant $C>1$ and $h=g$ outside some compact subset of $M$. All convergences of functions or tensors are taken with respect to $h$, and $\tilde\nabla$ denotes the covariant derivative with respect to $h$.
\begin{definition}
Let $M^n$ be a smooth manifold, $g$ a $C^0\cap W^{1,p}_{\rm{loc}}$ metric on $M$, and $h$ a fixed smooth metric as above. For a family of smooth functions $f_\delta$, we say that $f_\delta$ converges to a function $f$ locally in $W^{1,p}$-norm if for any $\epsilon>0$ and any $0<r<1$ there exists some $\delta_0>0$ such that for any $\delta\in(0,\delta_0)$ and $x\in M$ we have
\begin{align*}
\int_{B_r(x)}| f_\delta- f|^p d\mu_h<\epsilon, \int_{B_r(x)}|\tilde\nabla f_\delta-\tilde\nabla f|^p d\mu_h<\epsilon,
\end{align*}
here and below the norms are taken with respect to $h$.
We say that $f_\delta$ converges to a function $f$ locally in $C^{0}$-norm if for any $\epsilon>0$ and any $r>0$ there exists some $\delta_0>0$ such that for any $\delta\in(0,\delta_0)$ and $x\in M$ we have
\begin{align*}
\sup_{B_r(x)}|f_\delta-f|<\epsilon.
\end{align*}
For a family of smooth tensors $T_\delta$, $\delta>0$, we say that $T_\delta$ converges to $T$ locally in $W^{1,p}$-norm or in $C^0$-norm if $T_\delta$ and $T$ are of the same type and, for each chart on $M$, the component functions $(T_\delta)_{ijk\ldots}^{abc\ldots}$ converge to $T_{ijk\ldots}^{abc\ldots}$ locally in $W^{1,p}$-norm or in $C^0$-norm, respectively.
\end{definition}
\begin{definition}
Let $M^n$ be a smooth manifold with a $C^0\cap W^{1,n}_{loc}$ metric $g$, and let $h$ be a fixed smooth metric as above. Following \cite{Lee2015a}, we define the scalar curvature distribution by
\begin{align}
\langle R_g,\varphi\rangle :=\int_M \left(-V\cdot \tilde\nabla \left(\varphi \frac{d\mu_g}{d\mu_h}\right)+F\varphi \frac{d\mu_g}{d\mu_h}\right)d\mu_h
\end{align}
for any compactly supported $\varphi\in W^{1,n/(n-1)}$, where the dot product is taken with respect to $h$, and the vector field $V$ and the scalar field $F$ are given by
\begin{align}
\Gamma_{ij}^k&:=\frac{1}{2}g^{kl}\left(\tilde\nabla_i g_{jl}+\tilde\nabla_j g_{il}-\tilde\nabla_l g_{ij}\right), \\
V^k&:=g^{ij}\Gamma_{ij}^k-g^{ik}\Gamma_{ji}^j=g^{ij}g^{k\ell}(\tilde\nabla_jg_{i\ell}-\tilde\nabla_{\ell}g_{ij}),\\
F&:=\bar R-\tilde \nabla_kg^{ij}\Gamma_{ij}^k+\tilde\nabla_kg^{ik}\Gamma_{ji}^i+g^{ij}\left(\Gamma_{k\ell}^k\Gamma_{ij}^\ell-\Gamma_{j\ell}^k\Gamma_{ik}^\ell\right),
\end{align}
and $\mu_h$ is the Riemannian volume measure of $h$. By \cite{Lee2015a}, $\langle R_g,\varphi\rangle$ is independent of $h$, and it coincides with the integral $\int_M R_g \varphi\, d\mu_g$ when $g$ is $C^2$ and $R_g$ is defined in the classical sense. We say that $g$ has weakly nonnegative scalar curvature if $\langle R_g,\varphi\rangle\ge 0$ for every nonnegative test function $\varphi$.
\end{definition}
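As a consistency check (a short formal computation, following \cite{Lee2015a}; it is not needed in the sequel), one can verify that for smooth $g$ the pairing above recovers the classical scalar curvature:

```latex
% First, the second expression for V^k follows from the definitions:
%   g^{ij}\Gamma_{ij}^k
%     = g^{ij}g^{k\ell}\tilde\nabla_j g_{i\ell}
%       -\tfrac12\, g^{ij}g^{k\ell}\tilde\nabla_\ell g_{ij},
%   g^{ik}\Gamma_{ji}^j
%     = \tfrac12\, g^{ik}g^{j\ell}\tilde\nabla_i g_{j\ell}
%     = \tfrac12\, g^{ij}g^{k\ell}\tilde\nabla_\ell g_{ij},
% and subtracting gives V^k=g^{ij}g^{k\ell}(\tilde\nabla_jg_{i\ell}
% -\tilde\nabla_{\ell}g_{ij}). Second, when g is C^2, integrating the
% V-term by parts against d\mu_h (no boundary term, since \varphi has
% compact support) yields
\langle R_g,\varphi\rangle
  =\int_M\big(\tilde\nabla_k V^k+F\big)\,\varphi\,\frac{d\mu_g}{d\mu_h}\,d\mu_h
  =\int_M\big(\tilde\nabla_k V^k+F\big)\,\varphi\,d\mu_g ,
% and \tilde\nabla_k V^k+F=R_g pointwise, as in \cite{Lee2015a}.
```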
\subsection{Smoothing the metric}
The following mollification lemma can be found in \cite[Lemma 4.1]{grant2014positive}; although their lemma is stated for the $W^{2,\frac{n}{2}}$ case, our version can be proved in the same manner.
\begin{lemma}[\cite{grant2014positive}]\label{lm2.1}
Let $M^n$ be a smooth manifold and $g$ a $C^0\cap W^{1,n}_{\rm{loc}}$ metric on $M$. Then there exists a family of smooth metrics $g_\delta$, $\delta>0$, such that $g_\delta$ converges to $g$ locally both in $C^0$-norm and in $W^{1,n}$-norm. Moreover, if $g$ is smooth away from a compact subset, then we can choose $g_\delta$ so that $g_\delta$ coincides with $g$ outside some compact set $K$ independent of $\delta$.
\end{lemma}
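For the reader's convenience, here is a schematic sketch of the standard construction behind Lemma \ref{lm2.1} (the notation $\phi_a$, $\chi_a$, $\rho_\delta$ is ours and does not appear in \cite{grant2014positive}):

```latex
% Cover M by charts \phi_a\colon U_a\to\mathbb{R}^n with a subordinate
% partition of unity \{\chi_a\}, and let \rho_\delta be a standard
% mollifier on \mathbb{R}^n. Set
g_\delta:=\sum_a \chi_a\,\phi_a^*\Big(\big((\phi_a^{-1})^*g\big)*\rho_\delta\Big).
% Each summand is smooth, and since the set of inner products is convex,
% g_\delta is again a Riemannian metric once \delta is small enough that
% each mollified piece is C^0-close to g. Local C^0 and W^{1,n}
% convergence follow from the corresponding properties of mollification
% in \mathbb{R}^n; on the region where g is already smooth one may simply
% keep g, which gives the last statement of the lemma.
```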
\begin{remark}
Since $g_\delta$ coincides with $g$ outside some compact set $K$ independent of $\delta$, the definition of the ADM mass gives $m(g_\delta)=m(g)$.
\end{remark}
For this mollification, the scalar curvature distribution admits an approximation. Concretely, we have the following lemma.
\begin{lemma}\label{lm2.2}
Let $M^n$ be a smooth manifold and $g$ a $C^0\cap W^{1,n}_{\rm{loc}}$ metric on $M$. Suppose that the $L^2$ Sobolev constant of $(M,g)$ has an upper bound $C_s$. Let $g_\delta$ be the mollification of Lemma \ref{lm2.1}, and suppose that $g_\delta$ coincides with $g$ outside some compact set $K$. Then
\[|\langle R_{g_\delta},u^2\rangle-\langle R_g,u^2\rangle|\leq \Psi(\delta)\int_M|\nabla u|^2d\mu_{g },\quad\forall u\in C_0^\infty(M),\]
where $R_{g_\delta}$ is the scalar curvature of $g_\delta$ and $\lim_{\delta\to 0}\Psi(\delta)=0$. Here $\Psi(\delta)$ depends only on the Sobolev constant $C_s$ and the $W^{1,n}$-norm of $g-g_\delta$.
\end{lemma}
\begin{remark}
The $L^2$ Sobolev constant condition means that
\[\left(\int_M u^{\frac{2n}{n-2}}d\mu_g\right)^{\frac{n-2}{n}}\leq C_s \int_M|\nabla u|^2d\mu_g\]
holds for any $u\in C_0^\infty(M)$. This is always the case on asymptotically flat manifolds.
\end{remark}
\begin{proof}
Let $h$ be a smooth metric on $M$ with $C^{-1}h<g<Ch$; here and below $C$ denotes a positive constant that may depend on $n$ and $C_s$, is independent of $\delta$, and may vary from line to line. The Sobolev inequality then also holds for the metric $h$, since $h$ and $g$ are equivalent. Let $V_\delta$ and $F_\delta$ be the vector field and scalar field in the definition of the scalar curvature distribution of $g_\delta$. Then we have
\begin{align}
\lim_{\delta\to0^{+}}\left(\int_M|V^k_\delta-V^k|^nd\mu_h+\int_M|F_\delta-F|^{n/2}d\mu_h\right)=0,
\end{align}
where the integrals are taken only in a compact subset due to the fact that $g=g_\delta$ away from a compact subset.
Recall that for any $\varphi\in C_0^\infty$
\begin{align}
\langle R_g,\varphi\rangle &=\int_M \left(-V\cdot \tilde\nabla \left(\varphi \frac{d\mu_g}{d\mu_h}\right)+F\varphi \frac{d\mu_g}{d\mu_h}\right)d\mu_h\\
\langle R_{g_\delta},\varphi\rangle &=\int_M \left(-V_\delta\cdot \tilde\nabla \left(\varphi \frac{d\mu_{g_\delta}}{d\mu_h}\right)+F_\delta\varphi \frac{d\mu_{g_\delta}}{d\mu_h}\right)d\mu_h
\end{align}
Since $g_\delta$ coincides with $g$ away from a compact subset, the difference $\langle R_g,\varphi\rangle-\langle R_{g_\delta},\varphi\rangle$ depends only on the data in a compact subset.
For the term involving $F_\delta$, by the H\"older and Sobolev inequalities we can estimate
\begin{align}
&\Big|\int_M F_\delta u^2 \frac{d\mu_{g_\delta}}{d\mu_h}d\mu_h-\int_M Fu^2\frac{d\mu_{g}}{d\mu_h}d\mu_h\Big|\\
&\le \int_M \Big| F_\delta u^2\frac{d\mu_{g_\delta}}{d\mu_h}-Fu^2\frac{d\mu_{g}}{d\mu_h}\Big|d\mu_h\\
&\le \int_M|F_\delta u^2-Fu^2|\frac{d\mu_{g_\delta}}{d\mu_h} d\mu_h+\int_M |F|u^2\Big| \frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\Big|d\mu_h\\
&\le C \int_M|F_\delta u^2-Fu^2| d\mu_h+\sup_M\Big| \frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\Big| \int_M |F|u^2d\mu_h\\
&\le C\left(\int_M |F_\delta-F|^{n/2}d\mu_h \right)^{2/n}\left(\int_Mu^{2n/(n-2)}d\mu_h\right)^{(n-2)/n}+\sup_M\Big| \frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\Big| \left(\int_M |F|^{n/2}d\mu_h\right)^{2/n}\left(\int_M |u|^{2n/(n-2)}d\mu_h\right)^{(n-2)/n}\\
&\le \left(C\left(\int_M |F_\delta-F|^{n/2}d\mu_h \right)^{2/n}+C\sup_M\Big| \frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\Big| \left(\int_M |F|^{n/2}d\mu_h\right)^{2/n}\right) \int_M |\tilde\nabla u|^{2}d\mu_h \le \Psi(\delta) \int_M |\tilde\nabla u|^{2}d\mu_h,
\end{align}
where $\lim_{\delta\to 0}\Psi(\delta)=0$.
For the term involving $V^k$, we can estimate
\begin{align}
\Big|\int_M& V\cdot \tilde\nabla \left(u^2\frac{d\mu_g}{d\mu_h}\right)d\mu_h-\int_M V_\delta\cdot \tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)d\mu_h\Big|\\
\le & \int_M |V-V_\delta|\cdot\Big|\tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)\Big|d\mu_h+\int_M |V|\cdot\Big|\tilde\nabla \left(u^2\frac{d\mu_{g}}{d\mu_h}-u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)\Big|d\mu_h\\
\le & \left(\int_M|V-V_\delta|^nd\mu_h\right)^{1/n}\left(\int_M\Big|\tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)\Big|^{n/(n-1)}d\mu_h\right)^{(n-1)/n}\\
&+ \left(\int_M|V|^nd\mu_h\right)^{1/n}\left(\int_M\Big|\tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}-u^2\frac{d\mu_{g}}{d\mu_h}\right)\Big|^{n/(n-1)}d\mu_h \right)^{(n-1)/n}.
\end{align}
Notice that
\begin{align}
\int_M&\Big|\tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)\Big|^{n/(n-1)}d\mu_h\\
\le &C(n)\int_M\left( |\tilde\nabla u|\cdot |u|\frac{d\mu_{g_\delta}}{d\mu_h}\right)^{n/(n-1)}d\mu_h+C(n)\int_M\left( u^2|\tilde\nabla \frac{d\mu_{g_\delta}}{d\mu_h}|\right)^{n/(n-1)}d\mu_h\\
\le & C\int_M |\tilde\nabla u|^{n/(n-1)}|u|^{n/(n-1)}d\mu_h+C(n)\left(\int_M u^{2n/(n-2)}d\mu_h\right)^{(n-2)/(n-1)}\left(\int_M|\tilde\nabla \frac{d\mu_{g_\delta}}{d\mu_h}|^nd\mu_h\right)^{1/(n-1)}\\
\le &C\left(\int_M |\tilde\nabla u|^{2}d\mu_h\right)^{\frac{n}{2(n-1)}}\left(\int_M|u|^{2n/(n-2)}d\mu_h\right)^{\frac{n-2}{2(n-1)}}+C(n)\left(\int_M u^{2n/(n-2)}d\mu_h\right)^{(n-2)/(n-1)}\left(\int_M|\tilde\nabla \frac{d\mu_{g_\delta}}{d\mu_h}|^nd\mu_h\right)^{1/(n-1)}\\
\le &C\left(1+\int_M|\tilde\nabla \frac{d\mu_{g_\delta}}{d\mu_h}|^nd\mu_h\right)^{1/(n-1)}\left(\int_M |\tilde\nabla u|^{2}d\mu_h\right)^{\frac{n}{(n-1)}}
\end{align}
and similarly,
\begin{align}
\int_M&\Big|\tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}-u^2\frac{d\mu_{g}}{d\mu_h}\right)\Big|^{n/(n-1)}d\mu_h\\
\le & C\sup_M \Big|\frac{d\mu_{g_\delta}}{d\mu_h}- \frac{d\mu_{g}}{d\mu_h}\Big|^{n/(n-1)}\int_M |\tilde\nabla u|^{n/(n-1)}|u|^{n/(n-1)}d\mu_h\\
&+ C \left(\int_M u^{2n/(n-2)}d\mu_h\right)^{(n-2)/(n-1)}\left(\int_M|\tilde\nabla \left(\frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\right)|^nd\mu_h\right)^{1/(n-1)}\\
\le & C \sup_M \Big|\frac{d\mu_{g_\delta}}{d\mu_h}- \frac{d\mu_{g}}{d\mu_h}\Big|^{n/(n-1)}\left(\int_M |\tilde\nabla u|^{2}d\mu_h\right)^{\frac{n}{2(n-1)}}\left(\int_M|u|^{2n/(n-2)}d\mu_h\right)^{\frac{n-2}{2(n-1)}}\\
&+ C \left(\int_M u^{2n/(n-2)}d\mu_h\right)^{(n-2)/(n-1)}\left(\int_M|\tilde\nabla \left(\frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\right)|^nd\mu_h\right)^{1/(n-1)}\\
\le &C\left(\sup_M \Big|\frac{d\mu_{g_\delta}}{d\mu_h}- \frac{d\mu_{g}}{d\mu_h}\Big|^{n/(n-1)}
+ \left(\int_M|\tilde\nabla \left(\frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\right)|^nd\mu_h\right)^{1/(n-1)}\right)\left(\int_M |\tilde\nabla u|^{2}d\mu_h\right)^{\frac{n}{(n-1)}}
\end{align}
Combining the estimates above, we arrive at
\begin{align}
\Big|\int_M& V\cdot \tilde\nabla \left(u^2\frac{d\mu_g}{d\mu_h}\right)d\mu_h-\int_M V_\delta\cdot \tilde\nabla \left(u^2\frac{d\mu_{g_\delta}}{d\mu_h}\right)d\mu_h\Big|\\
\le & C\left(\int_M|V_\delta-V|^nd\mu_h\right)^{1/n}\left( 1+\left(\int_M|\tilde\nabla \frac{d\mu_{g_\delta}}{d\mu_h}|^nd\mu_h\right)^{1/n}\right)\int_M |\tilde\nabla u|^{2}d\mu_h\\
&+ C\left( \sup_M \Big|\frac{d\mu_{g_\delta}}{d\mu_h}- \frac{d\mu_{g}}{d\mu_h}\Big|
+ \left(\int_M|\tilde\nabla \left(\frac{d\mu_{g_\delta}}{d\mu_h}-\frac{d\mu_{g}}{d\mu_h}\right)|^nd\mu_h\right)^{1/n}\right)\int_M |\tilde\nabla u|^{2}d\mu_h.
\end{align}
Therefore, we get
\begin{align}
|\langle R_{g_\delta},u^2\rangle-\langle R_g,u^2\rangle|\le \Psi(\delta)\int_M |\tilde\nabla u|^{2}d\mu_h,\forall u\in C_0^\infty(M).
\end{align}
Since $g$ and $h$ are comparable, we get
\begin{align}
|\langle R_{g_\delta},u^2\rangle-\langle R_g,u^2\rangle|\le \Psi(\delta)\int_M | \nabla u|^{2}d\mu_g,\quad\forall u\in C_0^\infty(M),
\end{align}
which completes the proof of the lemma.
\end{proof}
\begin{remark}
Under the same assumptions as in Lemma \ref{lm2.2}, the same computation gives
\[|\langle R_{g_\delta},u\rangle-\langle R_g,u\rangle|\leq \Psi(\delta)\left(\int_M|\nabla u|^\frac{n}{n-1}d\mu_{g_\delta}\right)^\frac{n-1}{n},\quad\forall u\in C_0^\infty(M),\]
where $\Psi(\delta)$ is independent of $u$ and $\Psi(\delta)\to 0$ as $\delta\to 0$.
\end{remark}
\subsection{Weak nonnegative scalar curvature }
Under the same assumptions on $\Sigma$ as in Theorem \ref{thm1.2}, we can check that $R_g$ is weakly nonnegative. We have the following lemma.
\begin{lemma}\label{lm4.1}
Let $M^n$ be a smooth manifold with $g\in C^0\cap W^{1,p}_{loc}(M)$, $n\le p\le \infty$. Assume that $g$ is smooth away from a closed subset $\Sigma$ with $\mathcal{H}^{n-\frac{p}{p-1}}(\Sigma)<\infty$ if $n\le p<\infty$ or $\mathcal{H}^{n-1}(\Sigma)=0$ if $p=\infty$, and that $R_g\ge 0$ on $M\setminus \Sigma$. Then $\langle R_g, u\rangle\ge 0$ for any nonnegative, compactly supported $u\in W^{1,p/(p-1)}$.
\end{lemma}
\begin{proof}
By definition, we have
\begin{align}
\langle R_g,u\rangle=\int_M \left(-V\cdot \tilde\nabla \left(u \frac{d\mu_g}{d\mu_h}\right)+Fu \frac{d\mu_g}{d\mu_h}\right)d\mu_h.
\end{align}
When $n\le p<\infty$, since nonnegative Lipschitz functions are dense in $W^{1,p/(p-1)}$, we may take $u_i\to u$ in $W^{1,p/(p-1)}$ with each $u_i\ge 0$ and Lipschitz. By the Cauchy inequality, it is easy to check that, as $i\to \infty$,
\begin{align}
\Big| \langle R_g,u\rangle-\langle R_g,u_i\rangle\Big|=\Big|\langle R_g, u-u_i\rangle \Big|\to 0.
\end{align}
Therefore, to prove the lemma, it suffices to treat nonnegative Lipschitz functions $u$. Assume now that $u\ge 0$ and $|\nabla u|\le L$. Since $u$ has compact support, we have in particular $|u|\le C(L)$.
Let $\eta_\epsilon\ge 0$ be a family of smooth cutoff functions for $\Sigma$, as in Lemma \ref{l:cut-off}, such that
\begin{itemize}
\item[(1)] $\eta_\epsilon\equiv 1$ in a neighborhood of $\Sigma$;
\item[(2)] ${\rm supp ~}\eta_\epsilon \subset B_{\epsilon}(\Sigma)$ and $0\le \eta_\epsilon\le 1$;
\item[(3)] $\lim_{\epsilon\to 0}\int_M|\nabla \eta_\epsilon|^{p/(p-1)}d\mu_h=0$.
\end{itemize}
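Heuristically (a rough covering computation in our own notation; the precise construction is the content of Lemma \ref{l:cut-off}), the third property reflects the size assumption on $\Sigma$:

```latex
% Cover \Sigma by finitely many balls B_{r_i}(x_i) with r_i\le\epsilon, and
% take \eta_\epsilon equal to 1 on each B_{r_i}(x_i), vanishing outside
% B_{2r_i}(x_i), with |\tilde\nabla\eta_\epsilon|\lesssim r_i^{-1}. Then
\int_M|\tilde\nabla \eta_\epsilon|^{\frac{p}{p-1}}\,d\mu_h
\;\lesssim\; \sum_i r_i^{-\frac{p}{p-1}}\, r_i^{\,n}
\;=\; \sum_i r_i^{\,n-\frac{p}{p-1}},
% which is controlled by the (n-\frac{p}{p-1})-dimensional Hausdorff
% content of \Sigma at scale \epsilon; the refinement needed to make this
% quantity tend to zero as \epsilon\to 0 is exactly what Lemma
% \ref{l:cut-off} provides under the hypotheses of Theorem \ref{thm1.2}.
```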
We have
\begin{align}
\langle R_g,u\rangle=\langle R_g,\eta_\epsilon u\rangle+\langle R_g,(1-\eta_\epsilon)u\rangle.
\end{align}
Note that for any $\epsilon>0$ we have
\begin{align}
\langle R_g,(1-\eta_\epsilon)u\rangle=\int_{M\setminus \Sigma} R_g(1-\eta_\epsilon)ud\mu_g\ge 0.
\end{align}
To prove the lemma, it suffices to show that
\begin{align}\label{e:limitepsilonRgu0}
\lim_{\epsilon\to 0}|\langle R_g,\eta_\epsilon u\rangle|=0.
\end{align}
Indeed, since $u$ is bounded and Lipschitz, we can estimate
\begin{align}
|\langle R_g,\eta_\epsilon u\rangle|\le& \int_M |V|\cdot \Big|\tilde\nabla (\eta_\epsilon u \frac{d\mu_g}{d\mu_h})\Big|d\mu_h+\int_M |F|\cdot \eta_\epsilon u \frac{d\mu_g}{d\mu_h}d\mu_h\\
\le &C\int_M |V||\tilde\nabla \eta_\epsilon|d\mu_h +C \int_M |V| |\tilde\nabla g| \eta_\epsilon d\mu_h+C\int_M |V| \eta_\epsilon d\mu_h+C \int_M|F|\eta_\epsilon d\mu_h\\
\le& C\left(\int_{M\cap B_{\epsilon}(\Sigma)}|\tilde\nabla g|^pd\mu_h\right)^{1/p}\left(\int_M\Big(|\tilde\nabla\eta_\epsilon|^{p/(p-1)}+|\eta_\epsilon|^{p/(p-1)}\Big) d\mu_h\right)^{(p-1)/p}\\
&+C\left(\int_{M\cap B_{\epsilon}(\Sigma)}|\tilde\nabla g|^pd\mu_h\right)^{2/p}\left(\int_M|\eta_\epsilon|^{p/(p-2)}d\mu_h\right)^{(p-2)/p}.
\end{align}
Letting $\epsilon\to 0$ and using the properties of $\eta_\epsilon$, we get \eqref{e:limitepsilonRgu0}. This finishes the proof of the lemma for $n\le p<\infty$.\\
When $p=\infty$, the argument above still works (see also \cite{Lee2015a}). Thus the lemma follows.
\end{proof}
\section{Positive mass Theorem: Nonnegativity}
Now we use the technical results established above and modify Miao's method \cite{Miao2002} to prove Theorem \ref{thm1.2}. First, we choose an arbitrary end of $M$. Let us recall a lemma essentially proved in \cite[Lemma 3.2]{Schoen1979}.
\begin{lemma}[\cite{Schoen1979}]\label{lm6.1}
Let $(M,{\tilde g})$ be a complete smooth asymptotically flat manifold, and let $f,h$ be smooth functions with compact support on $M$ (in this lemma $h$ denotes a function, not the background metric). Then there exists an $\epsilon>0$ such that if $f$ satisfies
\begin{align*}
\int_M f\xi^2d\mu_{\tilde g}\geq -\epsilon \int_M |\nabla \xi|^2d\mu_{\tilde g}, \forall \xi\in C_0^\infty(M),
\end{align*}
then the equation
\begin{equation}\label{eqqq}
\Delta_{\tilde g}v -fv=h
\end{equation}
has a solution $v$ satisfying $v=O(r^{2-n})$ as $r\to\infty$. Moreover, we have
\[v=\frac{A}{r^{n-2}}+\omega,\]
where $A$ is a constant, $\omega=O(r^{1-n})$ and $|\partial\omega|=O(r^{-n})$.
\end{lemma}
\begin{remark}
The condition that $f$ and $h$ have compact support can be weakened to a decay condition. However, Lemma \ref{lm6.1} is good enough for our use.
\end{remark}
\begin{remark}
We will let $\tilde g=g_\delta$ when we apply Lemma \ref{lm6.1}.
\end{remark}
\begin{proof}
Since the proof is almost the same as in Schoen-Yau \cite[Lemma 3.2]{Schoen1979}, we only prove the existence of $v$, arguing as in \cite{Schoen1979}. Suppose that $\Sigma_k$ is the end under consideration and that there is an asymptotically flat coordinate system from $\Sigma_k$ to $\mathbb{R}^n\backslash B_{\sigma_0}(0)$. Let us solve the equation
\begin{align}\label{eq6.1}\left\{
\begin{array}{rl}
&\Delta_{\tilde g} v_\sigma-fv_\sigma=h \quad \text{on} \quad M^\sigma\\
&v_\sigma=0 \quad \text{on} \quad \partial B_\sigma
\end{array} \right.,
\end{align}
where $M^\sigma=(M\backslash\Sigma_k)\cup (\Sigma_k\cap B_\sigma)$ and $\sigma>\sigma_0$.
To study the kernel of $\Delta_{\tilde g}-f$, we take $h=0$ and assume $v_\sigma$ is a solution. Multiplying the equation by $v_\sigma$ and integrating by parts, we get
\begin{align*}
\int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g}&=-\int_{M^\sigma} f v_\sigma^2d\mu_{\tilde g}\\
&\leq \epsilon \int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g}
\end{align*}
Therefore, if we take $\epsilon<1$, then $|\nabla v_\sigma|\equiv0$ on $M^\sigma$. By the boundary condition, the kernel is trivial. Thus, by the Fredholm alternative, (\ref{eq6.1}) has a unique solution for general $h\in C_0^\infty(M)$.
For general $h\in C_0^\infty(M)$, multiplying (\ref{eq6.1}) by $v_\sigma$ and integrating by parts, by the Sobolev and Cauchy inequalities we get
\begin{align*}
\int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g}&=
-\int_{M^\sigma} f v_\sigma^2d\mu_{\tilde g}-\int_{M^\sigma} h v_\sigma d\mu_{\tilde g}\\
&\leq \epsilon \int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g} +\|h\|_{L^{2n/(n+2)}} \|v_\sigma\|_{L^{2n/(n-2)}}\\
&\leq \epsilon \int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g} + C(\tilde g, h) \left(\int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g}\right)^\frac{1}{2}\\
&\leq \epsilon \int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g} +\epsilon \int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g} +C(\tilde g, h)\epsilon^{-1}
\end{align*}
Thus if we choose $\epsilon<\frac{1}{2}$, then we have
\begin{align*}
\int_{M^\sigma} |\nabla v_\sigma|^2d\mu_{\tilde g}< C(\tilde g, h)
\end{align*}
Thus by Sobolev inequality, we have
\begin{align*}
\|v_\sigma\|_{L^{2n/(n-2)}}< C(\tilde g, h).
\end{align*}
By Moser iteration (see \cite[Theorem 4.1]{han1997elliptic}), we have
\begin{align*}
\|v_\sigma\|_{C^0}< C(\tilde g, h).
\end{align*}
Thus by Schauder theory (see \cite[Theorem 6.2]{GTru}), the family $\{v_\sigma\,|\,\sigma>\sigma_0\}$ is uniformly bounded in the $C^{2,\alpha}$ topology on any compact subset of $M$. Thus, by the Arzela-Ascoli theorem, there is a $v\in C^2(M)$ and a sequence $\sigma_i\to\infty$ such that $v_{\sigma_i}\to v$ in $C^2$-norm uniformly on compact subsets of $M$. Thus $v$ solves (\ref{eqqq}), and therefore $v$ is smooth by elliptic regularity.
The analysis of the asymptotic behavior of $v$ is the same as in Schoen-Yau \cite[Lemma 3.2]{Schoen1979}. Thus the lemma is proved.
\end{proof}
Now let $M^n$ ($n\geq3$) be a smooth manifold, and let $g\in C^0\cap W^{1,p}_{loc}(M)$ ($n\le p\le \infty$) be a complete asymptotically flat metric on $M$. Assume that $g$ is smooth away from a bounded closed subset $\Sigma$ with $\mathcal{H}^{n-\frac{p}{p-1}}(\Sigma)<\infty$ if $n\le p<\infty$ or $\mathcal{H}^{n-1}(\Sigma)=0$ if $p=\infty$, and that $R_g\ge 0$ on $M\setminus \Sigma$. Let $g_\delta$ be the smooth mollification of Lemma \ref{lm2.1}, which converges to $g$ and equals $g$ outside a compact set $K \supset \Sigma$. Denote $c_n=\frac{n-2}{4(n-1)}$ and let $R_{g_\delta}$ be the scalar curvature of $g_\delta$. Let $\varphi:M\to[0,1]$ be a smooth cut-off function with $\varphi=1$ on $K$ and $\varphi=0$ outside some neighborhood of $K$.
We consider the equation (see also \cite{LiMa})
\begin{align}\label{eq6.2}\left\{
\begin{array}{rl}
&\Delta_{g_\delta} u_\delta-c_n\varphi^2 R_{g_\delta} u_\delta=0 \quad \text{on} \quad M\\
&\lim_{r\to\infty}u_\delta=1
\end{array} \right.
\end{align}
\begin{corollary}\label{cor6.2}
There exists $\delta_0>0$ such that the equation (\ref{eq6.2}) has a positive solution for all $\delta\in(0,\delta_0)$. Moreover, we have
\begin{align*}
u_\delta=1+\frac{A_\delta}{r^{n-2}}+\omega,
\end{align*}
where $A_\delta$ is a constant, $\omega=O(r^{1-n})$ and $|\partial\omega|=O(r^{-n})$.
\end{corollary}
\begin{proof}
Set $v_\delta=u_\delta-1$; then the equation becomes
\begin{align}\label{eq6.3}
\Delta_{g_\delta} v_\delta-c_n\varphi^2 R_{g_\delta} v_\delta=c_n\varphi^2 R_{g_\delta}
\end{align}
By \cite[Lemma 3.1]{Schoen1979}, there exists a constant $C>0$ such that for any $\xi\in C_0^{\infty}(M)$ the Sobolev inequality
\[\left(\int_M \xi^{\frac{2n}{n-2}}d\mu_h\right)^{\frac{n-2}{n}}\leq C \int_M|\nabla \xi|^2d\mu_h\]
holds. Thus by Lemma \ref{lm2.2}, and since the metrics $g_\delta$ are uniformly equivalent to $g$, we know that for any $\epsilon>0$ there exists $\delta_0>0$ such that for any $\xi\in C_0^\infty(M)$
\begin{align}
|\langle R_{g_\delta},\xi^2 \rangle-\langle R_g,\xi^2 \rangle|\leq \epsilon\int_M|\nabla \xi |^2d\mu_{g_\delta},\forall \delta\in (0,\delta_0).
\end{align}
Since Lemma \ref{lm4.1} gives
\[\langle R_g,\xi^2 \rangle\geq 0,\]
we have
\[\langle R_{g_\delta},\xi^2 \rangle\geq -\epsilon\int_M|\nabla \xi |^2d\mu_{g_\delta},\forall \delta\in (0,\delta_0).\]
Thus we can compute that
\begin{align}
\int_M c_n\varphi^2 R_{g_\delta}\xi^2d\mu_{g_\delta}&=c_n\langle R_{g_\delta},\varphi^2\xi^2\rangle\notag\\
&\geq -C\epsilon\int_M|\nabla (\varphi\xi) |^2d\mu_{g_\delta}\notag\\
&=-C\epsilon\left(\int_M|\varphi\nabla \xi |^2d\mu_{g_\delta}+\int_M|\xi\nabla \varphi |^2d\mu_{g_\delta}\right)\notag\\
&\geq -C\epsilon\left(\int_M|\nabla \xi |^2d\mu_{g_\delta}+\int_{{\rm supp}\,\varphi\backslash K}|\xi|^2d\mu_{g_\delta}\right)\notag\\
&\geq -C\epsilon\left(\int_M|\nabla \xi |^2d\mu_{g_\delta}+\left(\int_{{\rm supp}\,\varphi\backslash K}|\xi|^{\frac{2n}{n-2}}d\mu_{g_\delta}\right)^{\frac{n-2}{n}}\right)\label{eq6.5}\\
&\geq -C\epsilon \int_M|\nabla \xi|^2d\mu_{g_\delta}\label{eq6.6}, \forall \delta\in (0,\delta_0),
\end{align}
where $C$ denotes a positive constant independent of $\epsilon$ and $\delta$, varying from line to line, and the last two inequalities follow from the H\"older and Sobolev inequalities.
Thus by Lemma \ref{lm6.1}, we get the existence and the asymptotic estimate of $u_\delta$. For the positivity of $u_\delta$, we just need to combine the positivity proof in \cite[Lemma 3.3]{Schoen1979} with our proof of Lemma \ref{lm6.1}. See \cite{Schoen1979} for more details.
\end{proof}
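For orientation (a standard computation under the expansion just established, using the mass normalization $m=\frac{1}{2(n-1)\omega_{n-1}}\lim_{r\to\infty}\int_{S_r}\sum_{i,j}(g_{ij,i}-g_{ii,j})\nu^jd\mu$), the coefficient $A_\delta$ measures the mass shift under the conformal change $\hat g_\delta:=u_\delta^{4/(n-2)}g_\delta$:

```latex
% Since u_\delta=1+A_\delta r^{2-n}+O(r^{1-n}), near infinity
%   (\hat g_\delta)_{ij}-\delta_{ij}
%     =\tfrac{4A_\delta}{n-2}\,r^{2-n}\delta_{ij}
%      +\big((g_\delta)_{ij}-\delta_{ij}\big)+(\text{faster-decaying terms}).
% For the conformal part \phi(r)=\tfrac{4A_\delta}{n-2}r^{2-n} one has
%   \sum_{i,j}\big(\phi_{,i}\delta_{ij}-n\phi_{,j}\big)\nu^j
%     =(1-n)\phi'(r)=4(n-1)A_\delta\,r^{1-n},
% so integrating over S_r and normalizing gives
m(\hat g_\delta)=m(g_\delta)+2A_\delta .
% The error terms \omega=O(r^{1-n}) with |\partial\omega|=O(r^{-n})
% contribute nothing in the limit r\to\infty.
```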
\begin{proposition}\label{prop6.3}
Let $u_\delta$ be the positive solution of (\ref{eq6.2}), then
\begin{itemize}
\item[(1)]There exists a $\delta_0>0$, such that for any compact $A\subset M$, there exists a positive constant $C(A)$, such that
\[\int_A u_{\delta}^\frac{2n}{n-2}d\mu_{g_{\delta}}\leq C(A),\forall \delta\in(0,\delta_0).\]
\item[(2)]\[A_\delta=\frac{1}{(2-n)\omega_n}\int_M \big(|\nabla_{g_\delta}u_\delta|^2+c_n\varphi^2 R_{g_\delta} u_\delta^2\big)d\mu_{g_\delta},\]
where $\omega_n$ is the Euclidean volume of the $(n-1)$-dimensional unit sphere in $\mathbb{R}^n$.
\end{itemize}
\end{proposition}
\begin{proof}
Let us first prove statement (1). By the asymptotic behavior of $u_\delta$, we know that $u_\delta$ is bounded; however, the $L^\infty$ bound may depend on $\delta$. Let us now show that $u_\delta$ is locally in $L^\frac{2n}{n-2}$ with a bound independent of $\delta$; that is, there exists $\delta_0>0$ such that for any compact $A\subset M$ we have $\int_A u_{\delta}^\frac{2n}{n-2}d\mu_{g_{\delta}}\leq C(A)$ for all $\delta\in(0,\delta_0)$. Set $v_\delta=u_\delta-1$ as in the proof of Corollary \ref{cor6.2}.
Multiplying both sides of equation (\ref{eq6.3}) by $v_\delta$ and integrating by parts, we have
\begin{equation}\label{eq6.7}
\int_M|\nabla v_\delta|^2d\mu_{g_\delta}=-\int_M c_n \varphi^2 R_{g_\delta} v_\delta^2d\mu_{g_\delta}-\int_M c_n\varphi^2R_{g_\delta} v_\delta d\mu_{g_\delta}.
\end{equation}
By Lemma \ref{lm2.2}, we have that for any $\epsilon>0$, there exists $\delta_0>0$ such that
\begin{align*}
|c_n\langle R_{g_ {\delta}},\varphi^2 v_\delta^2\rangle-c_n\langle R_g,\varphi^2 v_\delta^2\rangle|\leq \epsilon\int_M|\nabla_{g_{\delta}} (\varphi v_ {\delta})|^2d\mu_{g_ {\delta}},\forall \delta\in(0,\delta_0),
\end{align*}
\begin{align*}
|c_n\langle R_{g_ {\delta}},\varphi^2 v_\delta\rangle-c_n\langle R_g,\varphi^2 v_ {\delta}\rangle|\leq \epsilon\left(\int_M|\nabla_{g_{\delta}} (\varphi^2 v_ {\delta})|^\frac{n}{n-1}d\mu_{g_ {\delta}}\right)^\frac{n-1}{n},\forall \delta\in(0,\delta_0),
\end{align*}
where $c_n\langle R_g,\varphi^2 v_ {\delta}^2\rangle\geq0$, and
\begin{align*}c_n\langle R_g,\varphi^2 v_ {\delta}\rangle&=c_n\langle R_g,\varphi^2 (v_ {\delta}+1)\rangle-c_n\langle R_g,\varphi^2\rangle\\
&\geq0-c_n\int_M \left(-V\cdot \tilde\nabla \left(\varphi^2 \frac{d\mu_g}{d\mu_h}\right)+F\varphi^2 \frac{d\mu_g}{d\mu_h}\right)d\mu_h\\
&\geq -C.
\end{align*}
Thus by equation (\ref{eq6.7}) we have
\begin{align*}\int_M |\nabla_{g_{\delta}} v_ {\delta}|^2d{\mu_{g_ {\delta}}}&\leq \epsilon\int_M|\nabla_{g_{\delta}} (\varphi v_ {\delta})|^2d\mu_{g_ {\delta}}+\epsilon\left(\int_M|\nabla_{g_{\delta}} (\varphi^2 v_ {\delta})|^\frac{n}{n-1}d\mu_{g_ {\delta}}\right)^\frac{n-1}{n}+C\\
&\leq \epsilon\int_M|\nabla_{g_{\delta}} (\varphi v_ {\delta})|^2d\mu_{g_ {\delta}}+C\epsilon \left(\int_{\rm{supp}\varphi}|\nabla_{g_{\delta}}(\varphi^2v_ {\delta})|^2d\mu_{g_ {\delta}}\right)^{\frac{1}{2}}+C\\
&\leq \epsilon\int_M|\nabla_{g_{\delta}} (\varphi v_{\delta})|^2d\mu_{g_ {\delta}}+C\epsilon\left(\int_M |\nabla_{g_{\delta}}(\varphi^2 v_ {\delta})|^2d\mu_{g_ {\delta}}+1\right)+C.
\end{align*}
Since
\begin{align*}
\int_M|\nabla_{g_{\delta}} (\varphi v_ {\delta})|^2d\mu_{g_ {\delta}}&\leq 2\int_M|\nabla_{g_{\delta}}\varphi|^2v_ {\delta}^2d\mu_{g_ {\delta}}+2\int_M\varphi^2|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}}\\
&\leq 2\| \nabla_{g_{\delta}}\varphi\|_{L^{{n}}}^2\|v_ {\delta}\|_{L^\frac{2n}{n-2}}^2+2\int_M|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}}\\
&\leq (C\| \nabla_{g_{\delta}}\varphi\|_{L^{n}}^2+2)\int_M|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}}\\
&\leq C\int_M|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}},
\end{align*}
where we used the fact that $v_\delta$ decays to zero at infinity, so that the Sobolev inequality $\|v_\delta\|_{L^{2n/(n-2)}(M)}\le C\|\nabla v_\delta\|_{L^2(M)}$ holds for $v_\delta$.
Similarly, we have
\[\int_M|\nabla_{g_{\delta}} (\varphi^2 v_ {\delta})|^2d\mu_{g_ {\delta}}\leq C\int_M|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}}.\]
Hence we arrive at
\begin{equation*}\int_M|\nabla_{g_{\delta}} v_ {\delta}|^2d\mu_{g_ {\delta}}\leq C,\forall \delta\in(0,\delta_0).
\end{equation*}
Using the Sobolev inequality again, we have
\begin{equation*}
\int_M v_ {\delta}^{\frac{2n}{n-2}}d\mu_{g_ {\delta}}\leq C,\forall \delta\in(0,\delta_0).
\end{equation*}
Therefore we have that for any compact $A\subset M$,
\[\int_A u_{\delta}^\frac{2n}{n-2}d\mu_{g_{\delta}}\leq C(A),\forall \delta\in(0,\delta_0).\]
Now we begin the proof of (2). We multiply both sides of equation (\ref{eq6.2}) by $u_\delta$ and integrate by parts, obtaining
\begin{align*}
-\int_M|\nabla u_\delta|^2d\mu_{g_\delta}+\lim_{r\to\infty}\int_{S_r}u_\delta\frac{\partial u_\delta}{\partial r}d\mu_{g_\delta}-c_n\int_M \varphi^2 R_{g_\delta} u_\delta^2d\mu_{g_\delta}=0.
\end{align*}
Since $u_\delta=1+\frac{A_\delta}{r^{n-2}}+\omega$, where $\omega=O(r^{1-n})$ and $|\partial\omega|=O(r^{-n})$, we have
\[\lim_{r\to\infty}\int_{S_r}u_\delta\frac{\partial u_\delta}{\partial r}d\mu_{g_\delta}=(2-n)\omega_nA_\delta.\]
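To spell out this limit: by the expansion above,
\[u_\delta\frac{\partial u_\delta}{\partial r}=\left(1+\frac{A_\delta}{r^{n-2}}+\omega\right)\left(\frac{(2-n)A_\delta}{r^{n-1}}+O(r^{-n})\right)=\frac{(2-n)A_\delta}{r^{n-1}}+O(r^{-n}),\]
and the area of $S_r$ is $\omega_n r^{n-1}(1+o(1))$ as $r\to\infty$ by asymptotic flatness, so the surface integral converges to $(2-n)\omega_nA_\delta$.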
Thus we get the required result.
\end{proof}
Now, we define the conformal metric
\[\tilde g_\delta=u_\delta^{\frac{4}{n-2}}g_\delta.\]
Then the standard conformal transformation formula shows
\begin{align*}\tilde R_{g_\delta}&=-c_n^{-1}u_\delta^{-\frac{n+2}{n-2}}(\Delta_{g_\delta} u_\delta-c_nR_{g_\delta} u_\delta)\\
&=u_\delta^{1-\frac{n+2}{n-2}}(R_{g_\delta}-\varphi^2 R_{g_\delta})\\
&\geq 0,
\end{align*}
since $\varphi=1$ on $K$ and $R_{g_\delta}=R_g\geq0$ on $M\backslash K$.
\begin{lemma}\label{lm6.3}
We have
\[\lowlim_{\delta\to0^{+}} m(g_\delta)\geq\lowlim_{\delta\to0^{+}} m(\tilde g_\delta).\]
\end{lemma}
\begin{proof}
By the definition of mass, a direct calculation gives the following identity (see \cite[Lemma 4.2]{Miao2002}):
\begin{equation}\label{eq6.8}
m(\tilde g_\delta)=m(g_\delta)+(n-1)A_\delta.
\end{equation}
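Identity (\ref{eq6.8}) can be seen heuristically as follows: by the expansion of $u_\delta$,
\[\tilde g_\delta=u_\delta^{\frac{4}{n-2}}g_\delta=\left(1+\frac{4}{n-2}\frac{A_\delta}{r^{n-2}}+O(r^{1-n})\right)g_\delta,\]
so $\tilde g_\delta$ and $g_\delta$ differ at infinity only by a term of order $r^{2-n}$ proportional to $A_\delta$. Inserting this into the flux integral defining the mass, the $O(r^{1-n})$ remainder drops out in the limit, and the $A_\delta r^{2-n}$-term contributes the shift $(n-1)A_\delta$.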
By (2) of Proposition \ref{prop6.3},
\[A_\delta=\frac{1}{(2-n)\omega_n}\int_M \big(|\nabla_{g_\delta}u_\delta|^2+c_n\varphi^2 R_{g_\delta} u_\delta^2\big)d\mu_{g_\delta}.\]
Calculating as in inequality (\ref{eq6.5}), we get
\begin{equation}\label{eq_1}\int_M c_n\varphi^2 R_{g_\delta} u_\delta^2d\mu_{g_\delta}\geq -C\epsilon\left(\int_M|\nabla u_\delta |^2d\mu_{g_\delta}+\left(\int_{{\rm{supp}}\varphi\backslash K}|u_\delta|^{\frac{2n}{n-2}}d\mu_{g_\delta}\right)^{\frac{n-2}{n}}\right).
\end{equation}
By Proposition \ref{prop6.3} (1), we have that for any $\epsilon>0$, there exists $\delta_0>0$, such that
\[\int_Mc_n\varphi^2R_{g_\delta} u_\delta^2d\mu_{g_\delta}\geq-C\epsilon\int_M|\nabla u_\delta|^2d\mu_{g_\delta}-C\epsilon,\forall\delta\in(0,\delta_0).\]
Therefore we have
\[\uplim_{\delta\to0^{+}}A_\delta\leq0,\]
and thus
\[\lowlim_{\delta\to0^{+}} m(g_\delta)\geq\lowlim_{\delta\to0^{+}} m(\tilde g_\delta).\]
\end{proof}
Now we can prove the inequality part of Theorem \ref{thm1.2}.
\begin{proof}[Proof of the inequality part of Theorem \ref{thm1.2}]
Since $g_\delta=g$ on $M\backslash K$, we have
\[m(g_\delta)=m(g).\]
Since $\tilde g_\delta$ has nonnegative scalar curvature, the classical positive mass theorem gives
\[m(\tilde g_\delta)\geq0.\]
Thus by Lemma \ref{lm6.3}, we have
\[m(g)\geq0.\]
\end{proof}
\section{Positive Mass Theorem: Rigidity}
In this section, we will prove the rigidity part of Theorem \ref{thm1.2} when $m(g)=0$.
Let us outline the idea of the proof. First, we will follow the idea of Shi-Tam \cite{ShiTam} to show that the manifold is Ricci flat away from $\Sigma$. Then we will show that the manifold has nonnegative Ricci curvature in RCD sense. Noting that the manifold is asymptotically flat, by volume convergence and volume comparison, we get the rigidity result.
\subsection{Ricci flat away from the singular set}
Let us first prove that $m(g)=0$ implies Ricci curvature vanishing away from the singular set.
\begin{lemma}\label{lm6.6}
Assume the hypotheses of Theorem \ref{thm1.2} hold. If $m(g)=0$, then ${\rm Ric}_g\equiv0$ on $M\backslash\Sigma$.
\end{lemma}
\begin{proof}
Suppose that there exists a point $p\in M\backslash \Sigma$ such that ${\rm Ric}_g(p)\not=0$, and let $U\subset M\backslash\Sigma$ be a neighborhood of $p$ such that $|{\rm Ric}_g|^2\geq \frac{|{\rm Ric}_g|^2(p)}{2}$ on $U$. When we construct the mollification metric, we can take $K$ small enough such that $U\subset M\backslash K$, and when we construct the cut-off function $\varphi$, we make the additional requirement that $\varphi=1$ on $U$. We let $\psi\in C_0^{\infty}(U)$ be a cut-off function such that $0\leq\psi\leq1$ and $\psi=1$ on $B_r(p)$, where $r$ is some positive constant such that $B_r(p)\subset U$. Denote $g_{\delta;t}=g_\delta-t\psi {\rm Ric}_g$, $g_t=g-t\psi {\rm Ric}_g$, $t\geq 0$, and denote by $R_{g_{\delta;t}}$, $R_{g_\delta}$, $R_{g_t}$ the scalar curvatures of $g_{\delta;t}$, $g_\delta$, $g_t$ respectively. Then we have
\[R_{g_{\delta;t}}=R_{g_\delta}-t{\rm div}_{g_{\delta;t}}({\rm div}_{g_{\delta;t}}(\psi {\rm Ric}_g))+t\Delta_{g_{\delta;t}}{\rm{tr}}_{g_{\delta;t}}(\psi {\rm Ric}_g)+t\langle \psi {\rm Ric}_g,{\rm Ric}_{g_{\delta;t}}\rangle_{g_{\delta;t}}+h_\delta,\]
where $|h_\delta|\leq Ct^2$ and ${\rm supp}\, h_\delta\subset U$. Here and below, $C$ and $C_i$ denote positive constants depending only on $n$, $g$, $r$ and independent of $\delta$ and $t$; moreover, $C$ may vary from line to line.
Since $g_\delta=g$ on $U$, we have
\begin{align}\label{Re}
R_{g_{\delta;t}}=R_{g_\delta}-t{\rm div}_{g_t}({\rm div}_{g_t}(\psi {\rm Ric}_g))+t\Delta_{g_t}{\rm{tr}}_{g_t}(\psi {\rm Ric}_g)+t\langle \psi {\rm Ric}_g,{\rm Ric}_{g_t}\rangle_{g_t}+h,
\end{align}
where $h$ is independent of $\delta$, $|h|\leq Ct^2$, and ${\rm supp}\, h\subset U$.
Let $u_{\delta;t}$ be the solution to the equation
\begin{align}\label{eq6.211}\left\{
\begin{array}{rl}
&\Delta_{g_{\delta;t}} u_{\delta;t}-c_n \varphi^2R_{g_{\delta;t}} u_{\delta;t}=0 \quad \text{on}\quad M\\
&\lim_{r\to\infty}u_{\delta;t}=1
\end{array} \right.
\end{align}
Then the metric $\tilde g_{\delta;t}=u_{\delta;t}^\frac{4}{n-2}g_{\delta;t}$ is $C^2$ on $M$ and has nonnegative scalar curvature, and we have
\begin{equation}
m(\tilde g_{\delta;t})=m(g_{\delta;t})+(n-1)A_{\delta;t},
\end{equation}
where $m(g_{\delta;t})=m(g)=0$, and $A_{\delta;t}=\frac{1}{(2-n)\omega_n}\int_M \left(|\nabla_{g_{\delta;t}} u_{\delta;t} |^2+c_n\varphi^2 R_{g_{\delta;t}}u_{\delta;t}^2\right)d\mu_{g_{\delta;t}}$.
Since $\varphi=1$ on $U$, by (\ref{Re}), we have
\begin{align}\label{ieq6.14}
\int_M\varphi^2 R_{g_{\delta;t}}u_{\delta;t}^2 d\mu_{g_{\delta;t}}=&\int_M \varphi^2 R_{g_\delta} u_{\delta;t}^2 d\mu_{g_{\delta;t}}-t\int_U u_{\delta;t}^2 {\rm div}_{g_t}({\rm div}_{g_t}(\psi {\rm Ric}_g))d\mu_{g_t}+t\int_U u_{\delta;t}^2 \Delta_{g_t}{\rm{tr}}_{g_t}(\psi {\rm Ric}_g)d\mu_{g_t}\notag\\
&+t\int_U u_{\delta;t}^2 \langle \psi {\rm Ric}_g,{\rm Ric}_{g_t}\rangle_{g_t}d\mu_{g_t}+\int_U h u_{\delta;t}^2 d\mu_{g_t}.
\end{align}
As in the proof of Corollary \ref{cor6.2}, by \cite[Lemma 3.1]{Schoen1979}, there exists a constant $C>0$ such that for any $\xi\in C_0^{\infty}(M)$, the Sobolev inequality
\[\left(\int_M \xi^{\frac{2n}{n-2}}d\mu_g\right)^{\frac{n-2}{n}}\leq C \int_M|\nabla \xi|^2d\mu_g\]
holds. Thus by Lemma \ref{lm2.2}, we know that for any $\epsilon>0$, there exists $\delta_0>0$, such that
\begin{align*}
|\langle R_{g_\delta},\xi^2 \rangle-\langle R_g,\xi^2 \rangle|\leq \epsilon\int_M|\nabla \xi |^2d\mu_{g_\delta},\forall \delta\in (0,\delta_0).
\end{align*}
Since Lemma \ref{lm4.1} gives
\[\langle R_g,\xi^2 \rangle\geq 0,\]
we have
\[\langle R_{g_\delta},\xi^2 \rangle\geq -\epsilon\int_M|\nabla \xi |^2d\mu_{g_\delta},\forall \delta\in (0,\delta_0).\]
Thus we can fix some $t_0>0$, such that for any $\epsilon>0$, there exists $\delta_0>0$, such that for any $\delta\in(0,\delta_0)$, $t\in(0,t_0)$, we have
\begin{align*}
\int_M \varphi^2 R_{g_\delta} u_{\delta;t}^2d\mu_{g_{\delta;t}}
&= \langle R_{g_\delta},\varphi^2 u_{\delta;t}^2\rangle \notag\\
&\geq -C\epsilon\int_M|\nabla (\varphi u_{\delta;t}) |^2d\mu_{g_\delta}\notag\\
&=-C\epsilon\left(\int_M|\varphi\nabla u_{\delta;t} |^2d\mu_{g_\delta}+\int_M| u_{\delta;t}\nabla \varphi |^2d\mu_{g_\delta}\right)\notag\\
&\geq -C\epsilon\left(\int_M|\nabla u_{\delta;t} |^2d\mu_{g_\delta}+\int_{{\rm{supp}\varphi}\backslash K}| u_{\delta;t}|^2d\mu_{g_\delta}\right)\notag\\
&\geq -C\epsilon\left(\int_M|\nabla u_{\delta;t} |^2d\mu_{g_\delta}+\left(\int_{{\rm{supp}\varphi}\backslash K}| u_{\delta;t}|^{\frac{2n}{n-2}}d\mu_{g_\delta}\right)^{\frac{n-2}{n}}\right).
\end{align*}
Thus there exists $\delta_0>0$ and $t_0>0$, such that for any $\delta\in(0,\delta_0)$, $t\in(0,t_0)$, we have
\begin{align}\label{eqR2}
\int_M \varphi^2R_{g_{\delta}} u_{\delta;t}^2 d\mu_{g_{\delta;t}}\geq -\frac{1}{10}\int_M|\nabla_{g_{\delta;t}} u_{\delta;t} |^2d\mu_{g_{\delta;t}}-a_\delta \left(\int_{{\rm{supp}\varphi}\backslash K}|u_{\delta;t}|^{\frac{2n}{n-2}}d\mu_{g_{\delta;t}}\right)^{\frac{n-2}{n}},
\end{align}
where $a_\delta$ is a function of $\delta$ which is independent of $t$ and $\lim_{\delta\to0}a_\delta=0$.
Now we will prove that there exists $\delta_0>0$ and $t_0>0$, such that for any compact measurable set $A\subset M$, we have $\int_A u_{\delta;t}^\frac{2n}{n-2}d\mu_{g_{\delta;t}}\leq C(A)$, for all $\delta\in(0,\delta_0)$, $t\in(0,t_0)$.
Let $v_{\delta;t}=u_{\delta;t}-1$, then we have
\begin{equation}\label{eq6.13}
\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}}=-\int_M c_n \varphi^2 R_{g_{\delta;t}} v_ {\delta;t}^2d\mu_{g_{\delta;t}}-\int_M c_n\varphi^2R_{g_{\delta;t}} v_{\delta;t} d\mu_{g_ {\delta;t}}.
\end{equation}
We first use equation (\ref{eq6.13}) to prove that $\int_M |\nabla_{g_{\delta;t}} v_{\delta;t}|^2d\mu_{g_{\delta;t}}$ is uniformly bounded. By Lemma \ref{lm2.2}, we have that for any $\epsilon>0$, there exist $\delta_0, t_0>0$ such that
\begin{align*}
|c_n\langle R_{g_ {\delta;t}},\varphi^2 v_ {\delta;t}^2\rangle-c_n\langle R_g,\varphi^2 v_ {\delta;t}^2\rangle|\leq \epsilon\int_M|\nabla_{g_{\delta;t}} (\varphi v_ {\delta;t})|^2d\mu_{g_ {\delta;t}},\forall \delta\in(0,\delta_0), t\in (0, t_0),
\end{align*}
\begin{align*}
|c_n\langle R_{g_ {\delta;t}},\varphi^2 v_ {\delta;t}\rangle-c_n\langle R_g,\varphi^2 v_ {\delta;t}\rangle|\leq \epsilon\left(\int_M|\nabla_{g_{\delta;t}} (\varphi^2 v_ {\delta;t})|^\frac{n}{n-1}d\mu_{g_ {\delta;t}}\right)^\frac{n-1}{n},\forall \delta\in(0,\delta_0), t\in (0, t_0),
\end{align*}
where $c_n\langle R_g,\varphi^2 v_ {\delta;t}^2\rangle\geq0$, and
\begin{align*}c_n\langle R_g,\varphi^2 v_ {\delta;t}\rangle&=c_n\langle R_g,\varphi^2 (v_ {\delta;t}+1)\rangle-c_n\langle R_g,\varphi^2\rangle\\
&\geq0-c_n\int_M \left(-V\cdot \tilde\nabla \left(\varphi^2 \frac{d\mu_g}{d\mu_h}\right)+F\varphi^2 \frac{d\mu_g}{d\mu_h}\right)d\mu_h\\
&\geq -C.
\end{align*}
Thus by equation (\ref{eq6.13}) we have
\begin{align*}\int_M |\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d{\mu_{g_ {\delta;t}}}&\leq \epsilon\int_M|\nabla_{g_{\delta;t}} (\varphi v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}+\epsilon\left(\int_M|\nabla_{g_{\delta;t}} (\varphi^2 v_ {\delta;t})|^\frac{n}{n-1}d\mu_{g_ {\delta;t}}\right)^\frac{n-1}{n}+C\\
&\leq \epsilon\int_M|\nabla_{g_{\delta;t}} (\varphi v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}+C\epsilon \left(\int_{\rm{supp}\varphi}|\nabla_{g_{\delta;t}}(\varphi^2v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}\right)^{\frac{1}{2}}+C\\
&\leq \epsilon\int_M|\nabla_{g_{\delta;t}} (\varphi v_{\delta;t})|^2d\mu_{g_ {\delta;t}}+C\epsilon\left(\int_M |\nabla_{g_{\delta;t}}(\varphi^2 v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}+1\right)+C.
\end{align*}
Since
\begin{align*}
\int_M|\nabla_{g_{\delta;t}} (\varphi v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}
&\leq 2\int_M|\nabla_{g_{\delta;t}}\varphi|^2v_ {\delta;t}^2d\mu_{g_ {\delta;t}}+2\int_M\varphi^2|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}}\\
&\leq 2\| \nabla_{g_{\delta;t}}\varphi\|_{L^{{n}}}^2\|v_ {\delta;t}\|_{L^\frac{2n}{n-2}}^2+2\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}}\\
&\leq (C\| \nabla_{g_{\delta;t}}\varphi\|_{L^{{n}}}^2+2)\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}}\\
&\leq C\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}},
\end{align*}
and similarly
\[\int_M|\nabla_{g_{\delta;t}} (\varphi^2 v_ {\delta;t})|^2d\mu_{g_ {\delta;t}}\leq C\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}},\]
we have that there exist $\delta_0, t_0>0$, such that
\begin{equation*}\int_M|\nabla_{g_{\delta;t}} v_ {\delta;t}|^2d\mu_{g_ {\delta;t}}\leq C,\forall \delta\in(0,\delta_0), t\in(0, t_0).
\end{equation*}
Thus by Sobolev inequality, we have
\begin{equation*}
\int_M v_ {\delta;t}^{\frac{2n}{n-2}}d\mu_{g_ {\delta;t}}\leq C,\forall \delta\in(0,\delta_0), t\in(0,t_0).
\end{equation*}
Therefore we have that for any compact measurable set $A\subset M$,
\begin{equation}\label{uLp}\int_A u_{\delta;t}^\frac{2n}{n-2}d\mu_{g_{\delta;t}}\leq C(A),\forall \delta\in(0,\delta_0), t\in(0,t_0).
\end{equation}
Thus (\ref{eqR2}) becomes
\begin{align}\label{ieq6.21}
\int_M \varphi^2R_{g_{\delta}} u_{\delta;t}^2 d\mu_{g_{\delta;t}}\geq -\frac{1}{10}\int_M|\nabla_{g_{\delta;t}} u_{\delta;t} |^2d\mu_{g_{\delta;t}}-b_\delta ,
\end{align}
where $b_\delta$ is a function of $\delta$ which is independent of $t$ and $\lim_{\delta\to0}b_\delta=0$.
Then, by integration by parts and Cauchy inequality, we have
\begin{align}\label{ieq6.22}
-t\int_U u_{\delta;t}^2 {\rm div}_{g_t}({\rm div}_{g_t}(\psi {\rm Ric}_g))d\mu_{g_t}&=
t\int_U \langle{\rm div}_{g_t}(\psi {\rm Ric}_g),\nabla_{g_t} u_{\delta;t}^2\rangle d\mu_{g_t}\notag\\
&\geq -t\int_U \left|{\rm div}_{g_t}(\psi {\rm Ric}_g)\right|\cdot|\nabla_{g_t}u_{\delta;t}^2|d\mu_{g_t}\notag\\
&\geq -Ct\int_U 2u_{\delta;t}|\nabla_{g_t} u_{\delta;t}|d\mu_{g_t}\notag\\
&\geq -\frac{1}{10}\int_M|\nabla_{g_t} u_{\delta;t}|^2d\mu_{g_t}-Ct^2 \int_U u_{\delta;t}^2d\mu_{g_t}.
\end{align}
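The last step above is Young's inequality $2ab\le\frac{1}{10}a^2+10b^2$, applied pointwise with $a=|\nabla_{g_t}u_{\delta;t}|$ and $b=Ct\,u_{\delta;t}$:
\[2Ct\,u_{\delta;t}|\nabla_{g_t}u_{\delta;t}|\le\frac{1}{10}|\nabla_{g_t}u_{\delta;t}|^2+10C^2t^2u_{\delta;t}^2,\]
which, after integrating over $U$ and absorbing the factor $10C^2$ into the constant $C$, gives the stated bound.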
Since $\Delta_{g_t}{\rm{tr}}_{g_t}(\psi {\rm Ric}_g)={\rm div}_{g_t}({\rm{tr}}_{g_t}\nabla_{g_t}(\psi {\rm Ric}_g))$, similarly we have
\begin{align}\label{ieq6.222}
t\int_U u_{\delta;t}^2 \Delta_{g_t}{\rm{tr}}_{g_t}(\psi {\rm Ric}_g)d\mu_{g_t} \geq -\frac{1}{10}\int_M|\nabla_{g_t} u_{\delta;t}|^2d\mu_{g_t}-Ct^2 \int_U u_{\delta;t}^2d\mu_{g_t}.
\end{align}
Moreover, we have that there exists $t_0>0$, such that
\begin{align}\label{ieq6.23}
\int_U \langle\psi {\rm Ric}_g,{\rm Ric}_{g_t}\rangle_{g_t}u_{\delta;t}^2d\mu_{g_t}\geq \frac{|{\rm Ric}_g|^2(p)}{4}\int_U \psi u_{\delta;t}^2d\mu_{g_t}, \forall\delta\in(0,\delta_0), t\in(0,t_0).
\end{align}
By (\ref{ieq6.14}), (\ref{ieq6.21}), (\ref{ieq6.22}), (\ref{ieq6.222}) and (\ref{ieq6.23}), we have that there exists $t_0>0$, such that
\begin{align*}
\int_M\varphi^2 R_{g_{\delta;t}}u_{\delta;t}^2 d\mu_{g_{\delta;t}}\geq -\frac{1}{2}\int_U |\nabla_{g_t} u_{\delta;t}|^2d\mu_{g_t}+Ct\int_U \psi u_{\delta;t}^2d\mu_{g_t}-Ct^2\int_U u_{\delta;t}^2d\mu_{g_t}, \forall \delta\in(0,\delta_0), t\in(0,t_0),
\end{align*}
and then we have
\begin{align}\label{ieqA2}
A_{\delta;t}&\leq \frac{1}{(2-n)\omega_n}\left(\frac{1}{10}\int_M|\nabla_{g_{\delta;t}} u_{\delta;t}|^2d\mu_{g_{\delta;t}}+C_0t\int_U\psi u_{\delta;t}^2d\mu_{g_{t}}-C_1t^2\int_U u_{\delta;t}^2d\mu_{g_{t}}\right)+b_\delta\notag\\
&\leq \frac{1}{(2-n)\omega_n}\left(\frac{1}{10}\int_M|\nabla_{g_{\delta;t}} u_{\delta;t}|^2d\mu_{g_{\delta;t}}+C_0t\int_U\psi u_{\delta;t}^2d\mu_{g_{t}}-C_2t^2\right)+b_\delta\notag \\
&= \frac{1}{(2-n)\omega_n}\left(\frac{1}{10}\int_M|\nabla_{g_{\delta;t}} v_{\delta;t}|^2d\mu_{g_{\delta;t}}+C_0t\int_U\psi u_{\delta;t}^2d\mu_{g_{t}}-C_2t^2\right)+b_\delta\notag \\
&\leq \frac{1}{(2-n)\omega_n}\left(C_3\left(\int_{B_r(p)}|v_{\delta;t}|^\frac{2n}{n-2}d\mu_{g_{\delta;t}}\right)^\frac{n-2}{n}+C_0t\int_{B_r(p)} u_{\delta;t}^2d\mu_{g_{t}}-C_2t^2\right)+b_\delta\notag \\
&\leq \frac{1}{(2-n)\omega_n}\left(C_4\fint_{B_r(p)}|v_{\delta;t}|^2d\mu_{g_{\delta;t}}+C_5t\fint_{B_r(p)}u_{\delta;t}^2d\mu_{g_{t}}-C_2t^2\right)+b_\delta,
\quad\forall \delta\in(0,\delta_0), t\in(0,t_0),
\end{align}
where the second inequality follows from (\ref{uLp}) and the H\"older inequality.
By H\"older inequality, we have
\begin{align*}
\fint_{B_r(p)}u^2_{\delta;t}d\mu_{g_{\delta;t}}&=\fint_{B_r(p)}(1+v_{\delta;t})^2d\mu_{g_{\delta;t}}\\
&=\fint_{B_r(p)}(1+v_{\delta;t}^2+2v_{\delta;t})d\mu_{g_{\delta;t}}\\
&\ge \fint_{B_r(p)}(1+v_{\delta;t}^2-\frac{1}{2}-2v^2_{\delta;t})d\mu_{g_{\delta;t}}\\
&=\frac{1}{2}-\fint_{B_r(p)}v^2_{\delta;t}d\mu_{g_{\delta;t}}.
\end{align*}
If $\fint_{B_r(p)}v^2_{\delta;t}d\mu_{g_{\delta;t}}\le\frac{1}{10}$, then we have $\fint_{B_r(p)}u^2_{\delta;t}d\mu_{g_{\delta;t}}\ge\frac{4}{10}$, thus by (\ref{ieqA2}), we have
\begin{align*}
A_{\delta;t}\le \frac{1}{(2-n)\omega_n}\left(C_4\fint_{B_r(p)}|v_{\delta;t}|^2d\mu_{g_{\delta;t}}+C_6t-C_2t^2\right)+b_\delta, \forall \delta\in(0,\delta_0), t\in(0,t_0).
\end{align*}
Thus we can choose $t$ small enough such that $C_6t-C_2t^2> 0$, and then choose $\delta$ small enough such that $A_{\delta;t}< 0$; then we have $m(\tilde g_{\delta;t})<0$, which is a contradiction.
Otherwise, if $\fint_{B_r(p)}v^2_{\delta;t}d\mu_{g_{\delta;t}}\ge\frac{1}{10}$, then by (\ref{ieqA2}), we have
\begin{align*}
A_{\delta;t}\le \frac{1}{(2-n)\omega_n}\left(C_7-C_2t^2\right)+b_\delta, \forall \delta\in(0,\delta_0), t\in(0,t_0).
\end{align*}
We can still take $t$ and $\delta$ small enough such that $A_{\delta;t}< 0$ and thus $m(\tilde g_{\delta;t})<0$, which again gives a contradiction. This completes the proof of the lemma.
\end{proof}
\subsection{RCD space with nonnegative Ricci curvature}
In this subsection, we will show that our singular space has nonnegative Ricci curvature in the RCD sense, provided the manifold is Ricci flat away from the singular set. Here $\Delta_g$ will denote the Dirichlet Laplacian taken with respect to the metric $g$, and we will denote its domain by $D(\Delta_g)$; we will omit the subscript when the metric is clear from context.
\begin{theorem}\label{t:RCDnonnegative}
Let $(M^n,g)$ $(n\ge 3)$ be a smooth manifold with $g\in C^0\cap W^{1,p}_{loc}(M)$ with $n\le p\le\infty$. Assume $g$ is smooth and Ricci flat away from a closed subset $\Sigma$ with $\mathcal{H}^{n-\frac{p}{p-1}}(\Sigma)<\infty$ when $n\le p<\infty$
and $\mathcal{H}^{n-1}(\Sigma)=0$ when $p=\infty$, and assume $g$ is asymptotically flat. Then $(M^n,g)$ as a metric measure space with Lebesgue measure has nonnegative Ricci curvature in the sense of RCD.
\end{theorem}
Let us recall the definition of an RCD space (see \cite{RCD,AMS,EKS15}). In general, this is defined on a metric measure space. In our setting, we will consider manifolds with a $C^0$-metric, which carry a natural metric and measure structure.
\begin{definition}[RCD Ricci lower bound]\label{R}Let $K$ be a real constant. For a Riemannian manifold $(M^n,g)$ with volume measure and $g\in C^0(M)$, we say that it is an RCD($K,n$) space, or that it has Ricci curvature not less than $K$ in the sense of RCD, if
\begin{itemize}
\item[(1)] it is infinitesimally Hilbertian,
\item[(2)] for some $C>0$ and some point $p\in M$, it holds that $\mu_g(B_r(p))\le e^{Cr^2}$ for any $r>0$, where $\mu_g$ is the Lebesgue measure taken with respect to $g$,
\item[(3)] for any $f\in W^{1,2}(M)$ satisfying $|\nabla f|\in L^\infty(M)$, it admits a Lipschitz representative $\tilde f$ with $\text{Lip} (\tilde f)\le \|\nabla f\|_{L^\infty(M)}$,
\item[(4)] for any $f\in D(\Delta)$ with $\Delta f\in W^{1,2}(M)$, and for any $\varphi \in L^\infty(M)\cap D(\Delta)$ with $\varphi \ge 0$, $\Delta \varphi\in L^\infty(M)$, the Bochner inequality
\[\frac{1}{2}\int_M |\nabla f|^2\Delta \varphi d\mu_g\ge \frac{1}{n}\int_M (\Delta f)^2\varphi d\mu_g+\int_M \varphi \left( \langle \nabla f,\nabla \Delta f\rangle +K|\nabla f|^2 \right)d\mu_g\]
holds.
\end{itemize}
\end{definition}
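As a consistency check (assuming for the moment that $g$ is smooth), condition (4) reduces to the usual curvature bound: by the Bochner formula and the trace inequality $|\nabla^2 f|^2\ge\frac{1}{n}(\Delta f)^2$,
\[\frac{1}{2}\Delta|\nabla f|^2=|\nabla^2 f|^2+\langle\nabla f,\nabla\Delta f\rangle+{\rm Ric}(\nabla f,\nabla f)\ge\frac{1}{n}(\Delta f)^2+\langle\nabla f,\nabla\Delta f\rangle+K|\nabla f|^2\]
whenever ${\rm Ric}\ge Kg$; multiplying by $\varphi\ge0$ and integrating by parts yields exactly the inequality in (4).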
\begin{remark}
For a Riemannian manifold $(M^n,g)$ with volume measure and $g\in C^0(M)$, (1) and (3) in Definition \ref{R} hold automatically. If $g$ is asymptotically flat, then (2) holds. Therefore, to prove Theorem \ref{t:RCDnonnegative}, it suffices to check a weak Bochner inequality (see Proposition \ref{p:weakbochnerinequaliyt}; see also \cite{BKMR}). To approach this, we need some a priori estimates. First, we require gradient estimates and a Bochner inequality for harmonic functions.
\end{remark}
\begin{proposition}\label{p:gradientdestimate_Deltabound}
Assume the hypotheses of Theorem \ref{t:RCDnonnegative} hold, and let $u\in D(\Delta)$ satisfy $\Delta u\in L^\infty(B_1(x))$. Then $|\nabla u|$ is bounded. Moreover, if $\Delta u=0$, then we have
\begin{align}
\sup_{B_{1/2}(x)}|\nabla u|^2\le C\fint_{B_1(x)}|\nabla u|^2dx,
\end{align}
where $C=C(n,g)$ depends only on $n$, $\|g\|_{W^{1,p}}$ and the $L^2$-Sobolev constant in $B_1(x)$.
\end{proposition}
\begin{remark}
We should point out that $p=n$ and $p=\infty$ are two critical cases. In the following lemmas, we will sometimes treat these two cases separately. When $n<p\le \infty$, one can use approximation to show that $|\nabla u|$ is bounded (see Lemma \ref{l:gradient_plargern}). However, when $p=n$, we need to use the fact that $g$ is Ricci flat away from $\Sigma$.
\end{remark}
\begin{lemma}\label{l:laplaciancovergence}
Let $(M^n,g_i)$ satisfy $g_i\in C^0$ and $g_i\to g$ in the $C^0$ sense. Suppose that $u_i\in D(\Delta_{g_i})$ and $u\in D(\Delta_g)$. Let $\Delta_{g_i}u_i=f_i$ on $B_1(x)$ with $f_i\to f$ in $L^2$ and $u_i\to u$ in $W^{1,2}$; then $\Delta_gu=f$.
\end{lemma}
\begin{proof}
The result follows directly. Indeed, let $\varphi\in C_0^1(B_1(x))$ be a test function. Then
\begin{align}
\int_{B_1(x)}\langle \nabla u_i,\nabla \varphi\rangle_{g_i}d\mu_{g_i}=-\int_{B_1(x)}f_i \varphi d\mu_{g_i}.
\end{align}
Since $g_i\to g$ in $C^0$ and $u_i\to u$ in $W^{1,2}$-sense, letting $i\to \infty$, we get
\begin{align}
\int_{B_1(x)}\langle \nabla u,\nabla \varphi\rangle_{g}d\mu_{g}=-\int_{B_1(x)}f \varphi d\mu_{g}.
\end{align}
This means $\Delta_g u=f$ which proves the lemma.
\end{proof}
For $n<p\le \infty$, we can use approximation argument to get the following gradient estimates.
\begin{lemma}\label{l:gradient_plargern}
Let $(M^n,g)$ be a manifold with $g\in C^0\cap W^{1,p}_{loc}(M)$ for some $n< p\le\infty$. Assume $u\in D(\Delta)$ and $\Delta u\in L^\infty(B_1(x))$; then $|\nabla u|$ is locally bounded. Moreover, if $\Delta u=0$, then we have
\begin{align}
\sup_{B_{1/2}(x)}|\nabla u|^2\le C\fint_{B_1(x)}|\nabla u|^2dx,
\end{align}
where $C=C(n,g)$ depends only on the $L^2$-Sobolev constant in $B_1(x)$.
\end{lemma}
\begin{proof}
Since the result is local, we will only consider the estimate around $x$. Let $B_r(x)\subset B_{1/2}(x)$ be contained in a local coordinate chart. Let $f_i\in C^\infty(M)$ be a sequence of smooth functions converging in the $L^2$ sense to $f:=\Delta u$ and satisfying $\sup_{B_1(x)}|f_i|\le 2\sup_{B_1(x)}|\Delta u|+1$. Let $g_i$ be a sequence of smooth metrics converging in the $W^{1,p}$ sense to $g$. Let us solve $\Delta_{g_i}u_i=f_i$ in $B_r(x)$ with $u_i=u$ on $\partial B_r(x)$. We will first show that $u_i\to u$ in the $W^{1,2}(B_r(x))$ sense, and then show that $u_i\to u$ pointwise in $B_{r/2}(x)$ and that $|\nabla u_i|$ has uniform bounds on $B_{r/2}(x)$. This gives the bound on $|\nabla u|$.
Since $u\in W^{1,p}$ with $p>n$, by the Sobolev embedding, $u$ is bounded. Applying the maximum principle to each equation $\Delta_{g_i}u_i=f_i$, we see that $|u_i|$ is bounded in terms of the bounds on $u$ and $f_i$.
Multiplying $\Delta_gu=f$ and $\Delta_{g_i}u_i=f_i$ by $u-u_i$ and integrating by parts, we get
\begin{align}
\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u\rangle_gd\mu_g=-\int_{B_r(x)}f(u-u_i)d\mu_g\\
\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u_i\rangle_{g_i}d\mu_{g_i}=-\int_{B_r(x)}f_i(u-u_i)d\mu_{g_i}.
\end{align}
Hence
\begin{align}
\int_{B_r(x)}\langle \nabla(u-u_i),\nabla (u-u_i)\rangle_{g_i}d\mu_{g_i}=&\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u\rangle_{g_i}d\mu_{g_i}-\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u_i\rangle_{g_i}d\mu_{g_i}\\
=&\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u\rangle_{g_i}d\mu_{g_i}+\int_{B_r(x)}f_i(u-u_i)d\mu_{g_i}\\
=&\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u\rangle_{g_i}d\mu_{g_i}-\int_{B_r(x)}\langle \nabla(u-u_i),\nabla u\rangle_gd\mu_g\\
&+\int_{B_r(x)}f_i(u-u_i)d\mu_{g_i}-\int_{B_r(x)}f(u-u_i)d\mu_g\\
=&\int_{B_r(x)}\partial_{\alpha}(u-u_i)\partial_{\beta}u\left(g^{\alpha\beta}_i\sqrt{\det(g_{i,\alpha\beta})}-g^{\alpha\beta}\sqrt{\det(g_{\alpha\beta})}\right)dy\\
&+\int_{B_r(x)}(u-u_i)\left(f_i\sqrt{\det(g_{i,\alpha\beta})}-f\sqrt{\det(g_{\alpha\beta})}\right)dy.
\end{align}
This implies that $|\nabla u_i|$ has uniformly bounded $L^2$-norm. Moreover, letting $i\to \infty$ and noting that $g_i\to g$ in the $C^0$-sense, we also get that
\begin{align}
\lim_{i\to \infty}\int_{B_r(x)}\langle \nabla(u-u_i),\nabla (u-u_i)\rangle_{g_i}d\mu_{g_i}=0.
\end{align}
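For the reader's convenience, we spell out why the right hand side of the previous computation tends to zero. By H\"older's inequality, the two terms are bounded by
\begin{align*}
&\|\nabla(u-u_i)\|_{L^2(B_r(x))}\|\nabla u\|_{L^2(B_r(x))}\sup_{B_r(x)}\left|g^{\alpha\beta}_i\sqrt{\det(g_{i,\alpha\beta})}-g^{\alpha\beta}\sqrt{\det(g_{\alpha\beta})}\right|\\
&+\|u-u_i\|_{L^2(B_r(x))}\left\|f_i\sqrt{\det(g_{i,\alpha\beta})}-f\sqrt{\det(g_{\alpha\beta})}\right\|_{L^2(B_r(x))},
\end{align*}
where the supremum and the last $L^2$-norm tend to zero since $g_i\to g$ in $C^0$ and $f_i\to f$ in $L^2$, while the remaining norms are uniformly bounded.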
Since $u_i-u=0$ on $\partial B_r(x)$, by the Poincar\'e inequality, we get $u_i\to u$ in the $W^{1,2}$-sense. Let us now show uniform interior estimates for $u_i$. Since $f_i$ and $g_i$ are smooth, the function $u_i$ is smooth in $B_r(x)$. By the Bochner formula, we have
\begin{align}
\frac{1}{2}\Delta_{g_i} |\nabla u_i|^2=|\nabla^2u_i|^2+\langle \nabla \Delta_{g_i} u_i,\nabla u_i\rangle_{g_i}+{\rm Ric}_{g_i}(\nabla u_i,\nabla u_i).
\end{align}
Since $g_i$ has uniform $W^{1,p}$-norm on $B_r(x)$, by elliptic estimates we can get that (see the details in Lemma \ref{l:gradient_W1pmetric})
\begin{align}\label{ieq4.24}
\sup_{B_{r/2}(x)}|\nabla u_i|^2\le C(n,g_i)\left(\fint_{B_r(x)}|\nabla u_i|^2d\mu_{g_i}+r^2\sup_{B_r(x)}|f_i|^2\right),
\end{align}
where the constant $C(n,g_i)$ depends only on the $W^{1,p}$-norm of $g_i$ on $B_r(x)$, which is uniform in $i$. Thus we get a uniform estimate for $|\nabla u_i|$ on $B_{r/2}(x)$. By the Arzel\`a-Ascoli theorem, up to a subsequence $u_i$ converges pointwise on $B_{r/2}(x)$ to a Lipschitz function $\tilde{u}$. Since also $u_i\to u$ in the $W^{1,2}$-sense on $B_r(x)$, we have $u=\tilde{u}$, and hence $u$ is Lipschitz, which proves the first assertion.
Moreover, if $\Delta u=0$, then we can take $f_i=0$ on $M$ for each $i$, and (\ref{ieq4.24}) gives
\begin{align}
\sup_{B_{1/2}(x)}|\nabla u_i|^2\le C(n,g_i)\left(\fint_{B_1(x)}|\nabla u_i|^2d\mu_{g_i} \right).
\end{align}
Choose an arbitrary point $y\in B_{1/2}(x)$ and let $r_y>0$ be small enough that $B_{2r_y}(y)\subset B_{1/2}(x)$. Then the bound $\sup_{B_{2r_y}(y)}|\nabla u_i|^2\le C(n,g_i)\fint_{B_1(x)}|\nabla u_i|^2d\mu_{g_i}$ implies ${\rm{Lip}}_{B_{r_y}(y)}u_i\le C(n,g_i)\left(\fint_{B_1(x)}|\nabla u_i|^2d\mu_{g_i}\right)^\frac{1}{2}$, where ${\rm{Lip}}_{B_{r_y}(y)}u_i$ denotes the Lipschitz constant of $u_i$ on $B_{r_y}(y)$. Letting $i\to\infty$, we get ${\rm{Lip}}_{B_{r_y}(y)}u \le C(n,g)\left(\fint_{B_1(x)}|\nabla u|^2d\mu_g\right)^\frac{1}{2}$, where the constant $C(n,g)$ depends only on the $W^{1,p}$-norm of $g$ on $B_1(x)$. Since $|\nabla u|(y)\le {\rm{Lip}}_{B_{r_y}(y)}u$, we have
\begin{align}
|\nabla u|^2(y)\le C(n,g)\fint_{B_1(x)}|\nabla u|^2d\mu_g.
\end{align}
Since $y\in B_{1/2}(x)$ is arbitrary, we have
\begin{align}
\sup_{B_{1/2}(x)}|\nabla u |^2\le C(n,g)\fint_{B_1(x)}|\nabla u|^2d\mu_g,
\end{align}
which completes the proof of the lemma.
\end{proof}
Since $g\in C^0$, we can use elliptic $L^p$-theory to get some a priori estimates for $u$.
\begin{lemma}\label{l:aprioriW2q_harmonic}
Assume the hypotheses of Proposition \ref{p:gradientdestimate_Deltabound}. Then
$u\in W^{2,q}$ for any $q<p$ when $p=n$ or $p=\infty$, and $u\in W^{2,p}$ for $n<p<\infty$.
\end{lemma}
\begin{remark}
In the application below, we only require a priori that $u\in W^{2,q}$ for some $q>2$ and $|\nabla u|\in L^q$ for any $q<\infty$. Since $n\ge 3$, this always holds by Lemma \ref{l:aprioriW2q_harmonic}.
\end{remark}
\begin{proof}Since the result is local, we will only prove the result around $x$. Let us assume $B_r(x)$ is contained in a coordinate neighborhood. In local coordinates, $u$ satisfies, in the sense of distributions,
\begin{align}
\frac{1}{\sqrt{\det(g_{ij})}}\partial_i \left(g^{ij}\sqrt{\det(g_{ij})}\partial_ju\right)=\Delta u.
\end{align}
Since $g^{ij}\in C^0$, by the $W^{1,q}$-estimate for divergence form elliptic equations \cite{CaPe}, we have $u\in W^{1,q}$ for any $1\le q<\infty$.
On the other hand, since $(M,g)$ is smooth away from $\Sigma$, by standard elliptic estimates we get that $u\in W^{2,q}_{loc}$ on $B_r(x)\setminus \Sigma$. Let $v=\varphi u$ where $\varphi$ is a smooth cut-off function with support in $B_r(x)$. Noting on $B_r(x)\setminus \Sigma$ that
\begin{align}
g^{ij}\partial_i\partial_j u-\frac{1}{\sqrt{\det(g_{ij})}}\partial_i \left(g^{ij}\sqrt{\det(g_{ij})}\right)\partial_ju=\Delta u,
\end{align}
we get on $B_r(x)\setminus \Sigma$ that
\begin{align}\label{e:vQ}
g^{ij}\partial_i\partial_j v=2g^{ij}\partial_i\varphi \partial_ju+ug^{ij}\partial_i\partial_j\varphi+\frac{\varphi}{\sqrt{\det(g_{ij})}} \partial_i \left(g^{ij}\sqrt{\det(g_{ij})}\right)\partial_ju+\varphi\Delta u:=Q(u,\partial u,\partial g,\varphi).
\end{align}
Then $Q\in L^{q}$ for any $q<p$.
By solving $g^{ij}\partial_i\partial_jw=Q(u,\partial u,\partial g,\varphi)$ on $B_r(x)$ with $w=v=0$ on $\partial B_r(x)$, we have by Theorem 9.15 of \cite{GTru} that $w\in W^{2,q}$ for any $q<p$. Furthermore, let us check, as in Shi-Tam \cite{ShiTam}, that $w-v\equiv 0$, which will imply $v\in W^{2,q}$ for any $q<p$. Actually,
we have in $B_r(x)\setminus \Sigma$ that
\begin{align}\label{e:diveregencew-v}
g^{ij}\partial_i\partial_j(w-v)=0.
\end{align}
For any $\epsilon>0$, assume $\psi_\epsilon$ is a cut-off function of $\Sigma$ from Lemma \ref{l:cut-off} satisfying $\psi_\epsilon\equiv 1$ on $M\setminus B_\epsilon(\Sigma)$ and $\psi_\epsilon$ vanishes in a neighborhood of $\Sigma$, and $\lim_{\epsilon\to 0}\int_M|\nabla \psi_\epsilon|^q(x)dx=0$ with $q=\frac{p}{p-1}$.
Multiplying $(w-v)\psi_\epsilon$ to both sides of \eqref{e:diveregencew-v} and integrating by parts, we get
\begin{align}\label{e:partialw-v}
-\int_{B_r(x)} g^{ij}\psi_\epsilon\partial_i(w-v)\partial_j(w-v)d\mu_g= \int_{B_r(x)}g^{ij}(w-v)\partial_i\psi_{\epsilon}\partial_j(w-v)d\mu_g+\int_{B_r(x)}(w-v)\psi_\epsilon \partial_ig^{ij}\partial_j(w-v)d\mu_g.
\end{align}
When $n\le p<\infty$, noting that $w-v\in W^{1,q}$ for any $q<\infty$ and $\lim_{\epsilon\to 0}\int_M|\nabla \psi_\epsilon|^{q'}(x)dx=0$ for some $q'>1$, letting $\epsilon\to 0$, we have
\begin{align}
\lim_{\epsilon\to 0}\left|\int_{B_r(x)}g^{ij}(w-v)\partial_i\psi_{\epsilon}\partial_j(w-v)d\mu_g\right|\le \lim_{\epsilon\to 0}C\left(\int_{B_r(x)}|\nabla\psi_{\epsilon}|^{q'}d\mu_g\right)^{1/q'}\left(\int_{B_r(x)}|\nabla (w-v)|^{q'/(q'-1)}d\mu_g\right)^{1-1/q'}=0.
\end{align}
When $p=\infty$, by Lemma \ref{l:gradient_plargern}, $v-w\in W^{1,\infty}$. Noting that $\lim_{\epsilon\to 0}\int_M|\nabla \psi_\epsilon|(x)d\mu_g=0$, letting $\epsilon\to 0$, we have
\begin{align}
\lim_{\epsilon\to 0}\left|\int_{B_r(x)}g^{ij}(w-v)\partial_i\psi_{\epsilon}\partial_j(w-v)d\mu_g\right|\le \lim_{\epsilon\to 0}C\int_{B_r(x)}|\nabla\psi_{\epsilon}|d\mu_g=0.
\end{align}
Therefore, we get from \eqref{e:partialw-v} that
\begin{align}
-\int_{B_r(x)} g^{ij}\partial_i(w-v)\partial_j(w-v)d\mu_g=\int_{B_r(x)}(w-v) \partial_ig^{ij}\partial_j(w-v)d\mu_g.
\end{align}
Then, by H\"older's inequality, we get
\begin{align}
\int_{B_r(x)} g^{ij}\partial_i(w-v)\partial_j(w-v)d\mu_g\le \left(\int_{B_r(x)}(w-v)^{2n/(n-2)}d\mu_g\right)^{(n-2)/2n}\left(\int_{B_r(x)}|\partial_j(w-v)|^2d\mu_g\right)^{1/2} \left(\int_{B_r(x)}|\partial g^{ij}|^nd\mu_g\right)^{1/n}.
\end{align}
Since $g_{ij}\in W^{1,n}$, for any $\delta>0$, by choosing $r=r(\delta)$ small we have $\int_{B_r(x)}|\partial g^{ij}|^nd\mu_g\le \delta^n$. Therefore we get
\begin{align}
\int_{B_r(x)} |\partial_i(w-v)|^2d\mu_g\le C\delta \left(\int_{B_r(x)}(w-v)^{2n/(n-2)}d\mu_g\right)^{(n-2)/2n}\left(\int_{B_r(x)}|\partial_j(w-v)|^2d\mu_g\right)^{1/2}.
\end{align}
Thus
\begin{align}
\int_{B_r(x)} |\partial_i(w-v)|^2d\mu_g \le C\delta^2\left(\int_{B_r(x)}(w-v)^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}\le C\delta^2 \int_{B_r(x)} |\partial_i(w-v)|^2d\mu_g,
\end{align}
where we have used the Sobolev inequality $\left(\int_{B_r(x)}f^{2n/(n-2)}d\mu_g\right)^{(n-2)/n}\le C_g \int_{B_r(x)}|\nabla f|^2d\mu_g$ for the compactly supported function $f=w-v$. Noting that the constant $C$ is independent of $r$, if $\delta$ is small enough, then $\int_{B_r(x)} |\partial_i(w-v)|^2d\mu_g$ must vanish. Since $w-v\equiv 0$ on $\partial B_r(x)$, we have proved $w-v\equiv 0$. Thus $v\in W^{2,q}$ for any $q<p$. In particular, $u\in W^{2,q}$ for any $q<p$.
Furthermore, for $n<p<\infty$, noting that $u\in W^{2,q}$ for $q<p$, we can choose $q>n$; thus by the Sobolev embedding, $|\nabla u|$ is bounded (see also Lemma \ref{l:gradient_plargern}). Therefore, the function $Q$ in \eqref{e:vQ} satisfies $Q\in L^p$. The same argument as above gives that $u\in W^{2,p}$. This completes the proof of the lemma.
\end{proof}
\begin{remark}
Note that in a local coordinate $\nabla^2u(\partial_i,\partial_j)=\partial_i\partial_j u-\Gamma_{ij}^k\partial_ku$. If $u\in W^{2,p}$ and $g\in W^{1,p}$ then $|\nabla^2u|\in L^p$.
\end{remark}
\begin{remark}[$W^{2,q}$-estimates]\label{r:W2qestimate}
By the same argument as above, we see that if $\Delta u\in L^\infty$ and $|\nabla u|\in L^\infty$ then $u\in W^{2,p}$ when $n\le p<\infty$ and $u\in W^{2,q}$ for any $q<\infty$ when $p=\infty$.
\end{remark}
\begin{lemma}\label{l:weakbochner_boundedlap}
Assume the hypotheses of Theorem \ref{t:RCDnonnegative} with $n\le p < \infty$, and let $u\in D(\Delta)$ satisfy $\Delta u\in L^\infty$. Then for each $s>0$, the following distributional Bochner inequality holds:
\begin{align}\label{e:bochnerinequalityharmonicbounded}
\int_{B_1(x)} \langle \nabla \varphi,\nabla \sqrt{|\nabla u|^2+s}\rangle d\mu_g\le & \int_{B_1(x)}\left(\frac{\varphi(\Delta u)^2}{\sqrt{|\nabla u|^2+s}}-\varphi\frac{\Delta u}{|\nabla u|^2+s} \langle \nabla u,\nabla \sqrt{|\nabla u|^2+s}\rangle +\frac{\Delta u}{\sqrt{|\nabla u|^2+s}} \langle \nabla \varphi,\nabla u\rangle\right)d\mu_g,
\end{align}
where $\varphi\in W^{1,2}(B_1(x))$ is any nonnegative compactly supported function.
\end{lemma}
\begin{proof}
By Lemma \ref{l:aprioriW2q_harmonic}, $u\in W^{2,q}$ for any $q<p$. Let $u_i\in C^\infty$ be a sequence of smooth functions converging in the $W^{2,q}$-sense to $u$. Let us first show that \eqref{e:bochnerinequalityharmonicbounded} holds for each $u_i$. Actually, since $u_i\in C^\infty$, we have $|\nabla^2u_i|\in L^p$ and $|\nabla u_i|\in L^\infty$ for each $n\le p<\infty$. By direct computation, we get the Bochner formula on $B_1(x)\setminus \Sigma$:
\begin{align}
\Delta |\nabla u_i|^2=2|\nabla^2u_i|^2+2\langle \nabla \Delta u_i,\nabla u_i\rangle.
\end{align}
In particular, for any $s>0$ we have weakly on $B_1(x)\setminus \Sigma$ that
\begin{align}\label{e:weakDeltasqrtnablau}
\Delta \sqrt{|\nabla u_i|^2+s}\ge \frac{\langle \nabla \Delta u_i,\nabla u_i\rangle}{\sqrt{|\nabla u_i|^2+s}}.
\end{align}
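For completeness, \eqref{e:weakDeltasqrtnablau} follows from the chain rule together with Kato's inequality $|\nabla|\nabla u_i||\le|\nabla^2u_i|$. Writing $F=|\nabla u_i|^2+s$, we have
\begin{align*}
\Delta \sqrt{F}=\frac{\Delta F}{2\sqrt{F}}-\frac{|\nabla F|^2}{4F^{3/2}}\ge \frac{|\nabla^2u_i|^2+\langle \nabla \Delta u_i,\nabla u_i\rangle}{\sqrt{F}}-\frac{|\nabla^2u_i|^2|\nabla u_i|^2}{F^{3/2}}\ge \frac{\langle \nabla \Delta u_i,\nabla u_i\rangle}{\sqrt{F}},
\end{align*}
since $|\nabla F|\le 2|\nabla u_i||\nabla^2u_i|$ and $|\nabla u_i|^2\le F$.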
Let us show that this holds weakly on $B_1(x)$. For any $\epsilon>0$, assume $\psi_\epsilon$ is a cut-off function of $\Sigma$ from Lemma \ref{l:cut-off} satisfying $\psi_\epsilon\equiv 1$ on $M\setminus B_\epsilon(\Sigma)$ and $\psi_\epsilon$ vanishes in a neighborhood of $\Sigma$, and $\lim_{\epsilon\to 0}\int_M|\nabla \psi_\epsilon|^q(x)dx=0$ with $q=\frac{p}{p-1}$. For any nonnegative $\varphi\in C^1_0(B_1(x))$, multiplying $\varphi \psi_\epsilon$ to both sides of \eqref{e:weakDeltasqrtnablau}
and integrating by parts, we get
\begin{align}
\int_{B_1(x)} \langle \nabla (\varphi\psi_\epsilon),\nabla \sqrt{|\nabla u_i|^2+s}\rangle d\mu_g \le & \int_{B_1(x)}\left(\frac{(\varphi\psi_\epsilon)(\Delta u_i)^2}{\sqrt{|\nabla u_i|^2+s}}-(\varphi\psi_\epsilon)\frac{\Delta u_i}{|\nabla u_i|^2+s} \langle \nabla u_i,\nabla \sqrt{|\nabla u_i|^2+s}\rangle +\frac{\psi_\epsilon\Delta u_i}{\sqrt{|\nabla u_i|^2+s}} \langle \nabla \varphi,\nabla u_i\rangle\right)d\mu_g\\ &+\int_{B_1(x)}\frac{\varphi\Delta u_i}{\sqrt{|\nabla u_i|^2+s}} \langle \nabla \psi_\epsilon,\nabla u_i\rangle d\mu_g.
\end{align}
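The last term on the right hand side vanishes as $\epsilon\to 0$. Indeed, since $|\nabla u_i|\le\sqrt{|\nabla u_i|^2+s}$, H\"older's inequality gives
\begin{align*}
\left|\int_{B_1(x)}\frac{\varphi\Delta u_i}{\sqrt{|\nabla u_i|^2+s}} \langle \nabla \psi_\epsilon,\nabla u_i\rangle d\mu_g\right|\le \sup|\varphi|\cdot\|\Delta u_i\|_{L^p(B_1(x))}\|\nabla \psi_\epsilon\|_{L^{\frac{p}{p-1}}(B_1(x))}\to 0,
\end{align*}
and the term involving $\varphi\langle \nabla \psi_\epsilon,\nabla \sqrt{|\nabla u_i|^2+s}\rangle$ hidden in the left hand side is handled in the same way, using $|\nabla \sqrt{|\nabla u_i|^2+s}|\le |\nabla^2u_i|\in L^p$.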
Since $|\nabla^2u_i|+|\Delta u_i|\in L^p$ and $|\nabla u_i|\in L^\infty$, letting $\epsilon\to 0$, we get
\begin{align}\label{e:distribution_nablau+1}
\int_{B_1(x)} \langle \nabla \varphi,\nabla \sqrt{|\nabla u_i|^2+s}\rangle d\mu_g \le & \int_{B_1(x)}\left(\frac{\varphi(\Delta u_i)^2}{\sqrt{|\nabla u_i|^2+s}}-\varphi\frac{\Delta u_i}{|\nabla u_i|^2+s} \langle \nabla u_i,\nabla \sqrt{|\nabla u_i|^2+s}\rangle +\frac{\Delta u_i}{\sqrt{|\nabla u_i|^2+s}} \langle \nabla \varphi,\nabla u_i\rangle\right)d\mu_g.
\end{align}
Note that $u_i\to u$ in the $W^{2,q}$-sense for any $q<p$. Since $\varphi\in C^1_0(B_1(x))$, letting $i\to \infty$, we get
\begin{align}\label{e:distribution_nablauc1varphi}
\int_{B_1(x)} \langle \nabla \varphi,\nabla \sqrt{|\nabla u|^2+s}\rangle d\mu_g\le & \int_{B_1(x)}\left(\frac{\varphi(\Delta u)^2}{\sqrt{|\nabla u|^2+s}}-\varphi\frac{\Delta u}{|\nabla u|^2+s} \langle \nabla u,\nabla \sqrt{|\nabla u|^2+s}\rangle +\frac{\Delta u}{\sqrt{|\nabla u|^2+s}} \langle \nabla \varphi,\nabla u\rangle\right)d\mu_g.
\end{align}
Let $\varphi\in W^{1,2}$ be a nonnegative compactly supported function. Assume $\varphi_\alpha\in C_0^1(B_1(x))$ is a sequence of nonnegative functions approximating $\varphi$ in the $W^{1,2}$-sense. Noting that $\Delta u\in L^\infty$, applying \eqref{e:distribution_nablauc1varphi} with each $\varphi_\alpha$ and letting $\alpha\to \infty$, we conclude that
\begin{align} \int_{B_1(x)} \langle \nabla \varphi,\nabla \sqrt{|\nabla u|^2+s}\rangle d\mu_g\le & \int_{B_1(x)}\left(\frac{\varphi(\Delta u)^2}{\sqrt{|\nabla u|^2+s}}-\varphi\frac{\Delta u}{|\nabla u|^2+s} \langle \nabla u,\nabla \sqrt{|\nabla u|^2+s}\rangle +\frac{\Delta u}{\sqrt{|\nabla u|^2+s}} \langle \nabla \varphi,\nabla u\rangle\right)d\mu_g,
\end{align}
holds also for $W^{1,2}$ functions. This completes the proof.
\end{proof}
Now we are ready to prove Proposition \ref{p:gradientdestimate_Deltabound}.
\begin{proof}[Proof of Proposition \ref{p:gradientdestimate_Deltabound} ]
If $n<p\le \infty$, Proposition \ref{p:gradientdestimate_Deltabound} follows immediately from Lemma \ref{l:gradient_plargern}.
If $p=n$, by the Bochner inequality in Lemma \ref{l:weakbochner_boundedlap}, the proposition follows from a standard Moser iteration. For the sake of convenience, we give a proof here. By Lemma \ref{l:weakbochner_boundedlap}, we have for any $s>0$ and nonnegative $\varphi\in W^{1,2}_0(B_1(x))$ that
\begin{align*}
\int_{B_1(x)} \langle \nabla \varphi,\nabla \sqrt{|\nabla u|^2+s}\rangle d\mu_g \le & \int_{B_1(x)}\left(\frac{\varphi(\Delta u)^2}{\sqrt{|\nabla u|^2+s}}-\varphi\frac{\Delta u}{|\nabla u|^2+s} \langle \nabla u,\nabla \sqrt{|\nabla u|^2+s}\rangle +\frac{\Delta u}{\sqrt{|\nabla u|^2+s}} \langle \nabla \varphi,\nabla u\rangle\right)d\mu_g .
\end{align*}
We denote $v= \sqrt{|\nabla u|^2+s}$ and $f=\frac{(\Delta u)^2}{\sqrt{|\nabla u|^2+s}}$.
Then the inequality above becomes
\begin{align}
\int_{B_1(x)} \langle \nabla \varphi,\nabla v\rangle d\mu_g \le \int_{B_1(x)}\left(\varphi f-\varphi \frac{\Delta u}{|\nabla u|^2+s} \langle \nabla u,\nabla v\rangle + v^{-1}\Delta u\langle \nabla \varphi, \nabla u\rangle\right)d\mu_g .
\end{align}
If $\Delta u\equiv 0$, we will let $s\to 0^{+}$ at the end of the following argument. If $\Delta u\not\equiv 0$, we denote $D=\sup_{B_1(x)}|\Delta u|$ and let $s=D^2$. Let $\varphi= \eta^2 v^{2q-1}$, where $q\ge 1$ and $\eta$ is a cut-off function in $C^\infty_0(B_1(x))$. By Lemma \ref{l:aprioriW2q_harmonic}, $\varphi\in W^{1,2}_0(B_1(x))$. Noting that $D\le v$ and $|\nabla u|\le v$, we have by H\"older's inequality that
\begin{align} \label{B3}
&(2q-1)\int_{B_1(x)} v ^{2q-2}\eta^2|\nabla v |^2d\mu_g+2\int_{B_1(x)} v ^{2q-1}\eta\langle\nabla v ,\nabla \eta\rangle d\mu_g\notag\\
\le &\int_{B_1(x)}f\eta^2v ^{2q-1} d\mu_g+D\int_{B_1(x)}\left|\eta^2 v^{2q-3}\langle \nabla u,\nabla v\rangle\right| d\mu_g\notag\\
&+ 2D\int_{B_1(x)}\left|\langle \nabla u,\nabla\eta\rangle\eta v ^{2q-2}\right|d\mu_g+(2q-1)D\int_{B_1(x)}\left|\langle \nabla u,\nabla v \rangle\eta^2v ^{2q-3}\right|d\mu_g\notag\\
\le & \int_{B_1(x)}\eta^2 v^{2q}d\mu_g+2\int_{B_1(x)}|\nabla\eta|\eta v ^{2q} d\mu_g+2q\int_{B_1(x)} |\nabla v |\eta^2v ^{2q-1}d\mu_g.
\end{align}
Denote $w=v ^{q}$. Since $|\nabla u|\le v$, by H\"older's inequality and the Sobolev inequality, we can estimate each term:
\begin{align}\label{B4}
(2q-1)\int_{B_1(x)} v ^{2q-2}\eta^2|\nabla v |^2d\mu_g=\frac{2q-1}{q^2}\int_{B_1(x)}\eta^2|\nabla w|^2d\mu_g,
\end{align}
and
\begin{align}\label{B5}
2\int_{B_1(x)}| v ^{2q-1}\eta\langle\nabla v ,\nabla \eta\rangle| d\mu_g
&\le \frac{1}{4q}\int_{B_1(x)}\eta^2|\nabla w|^2d\mu_g+\frac{4}{q}\int_{B_1(x)}w^2|\nabla\eta|^2d\mu_g,
\end{align}
and
\begin{align} \label{B7}
\int_{B_1(x)}\left| \eta^2v ^{2q }\right|d\mu_g \leq \int_{B_1(x)}\eta^2 w^2d\mu_g,
\end{align}
\begin{align}\label{B6}
2 \int_{B_1(x)} |\nabla \eta|\eta v^{2q} d\mu_g \le \int_{B_1(x)}|\nabla \eta|^2 w^2d\mu_g+ \int_{B_1(x)}\eta^2 w^2d\mu_g,
\end{align}
and
\begin{align} \label{B9}
2q \int_{B_1(x)}|\nabla v|\eta^2 v^{2q-1}d\mu_g
\le \frac{1}{8q} \int_{B_1(x)}|\nabla w|^2 \eta^2d\mu_g+8 q\int_{B_1(x)}\eta^2 w^2d\mu_g.
\end{align}
Applying (\ref{B4})--(\ref{B9}) to inequality (\ref{B3}), we get
\begin{align*}
\int_{B_1(x)}\eta^2|\nabla w|^2d\mu_g\leq \left(100q \right)^2\int_{B_1(x)}w^2(|\nabla \eta|^2+\eta^2)d\mu_g.
\end{align*}
Let $\eta=1$ on $B_r(x)$, $\eta=0$ on $M\backslash B_R(x)$, $\eta\le 1$ on $B_1(x)$ and $|\nabla \eta|\le \frac{C(g)}{R-r}$ on $B_1(x)$.
Then we have
\begin{align}
\int_{B_r(x)}|\nabla w|^2d\mu_g
\le
\left(\frac{100q }{R-r}\right)^2\int_{B_1(x)}w^2d\mu_g.
\end{align}
Denote $\chi=\frac{n}{n-2}>1$, then by Sobolev inequality, we have
\begin{align}
\left(\int_{B_r(x)}w^{2\chi}d\mu_g\right)^{\frac{1}{\chi}}
\le
\left(\frac{100q }{R-r}\right)^2\int_{B_1(x)}w^2d\mu_g.
\end{align}
Thus we have
\begin{align}
\left(\int_{B_r(x)}v^{2q\chi}d\mu_g\right)^{\frac{1}{2q\chi}}
\le
\left(\frac{100q }{R-r}\right)^{\frac{1}{q}}\left(\int_{B_1(x)}v^{2q}d\mu_g\right)^{\frac{1}{2q}}.
\end{align}
Take $q_i=\chi^i$, $r_i=\frac{1}{2}+\frac{1}{2^{i+1}}$, for $i=0,1,2,\ldots$, then we have
\begin{align}
\|v\|_{L^{2q_{i+1}}(B_{r_{i+1}}(x))}
\le \left(100(2\chi)^i \right)^{\frac{1}{\chi^i}}\|v\|_{L^{2q_i }(B_{r_{i }}(x))}.
\end{align}
By iteration we have
\begin{align}
\|v\|_{L^{2q_{j+1}}(B_{r_{j+1}}(x))}
\le (2\chi)^{\sum_{i=0}^j\frac{i}{\chi ^i}}\left( 100 \right)^{\sum_{i=0}^j\frac{1}{\chi^i}}\|v\|_{L^{2 }(B_1(x))}.
\end{align}
Since the series $\sum_{i=0}^\infty\frac{i}{\chi ^i}$ and $\sum_{i=0}^\infty\frac{1}{\chi ^i}$ both converge, we can let $j\to \infty$ and get
\begin{align}
\|v\|_{L^{\infty}(B_{\frac{1}{2}}(x))}
\le (2\chi)^{\sum_{i=0}^\infty\frac{i}{\chi ^i}}\left( 100 \right)^{\sum_{i=0}^\infty\frac{1}{\chi^i}}\|v\|_{L^{2 }(B_1(x))}\le C(n) \|v\|_{L^2(B_1(x))}.
\end{align}
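Here, squaring and recalling that $v^2=|\nabla u|^2+s$ with $s=D^2$ (or $s\to 0^+$ when $\Delta u\equiv 0$), we obtain
\begin{align*}
\sup_{B_{1/2}(x)}|\nabla u|^2\le \sup_{B_{1/2}(x)}v^2\le C(n)^2\int_{B_1(x)}\left(|\nabla u|^2+D^2\right)d\mu_g,
\end{align*}
where the volume of $B_1(x)$ is absorbed into the constant below.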
Thus we have
\begin{align}
\sup_{B_\frac{1}{2}(x)} |\nabla u|^2
\le C(n,g)\left(\int_{B_1(x)}|\nabla u|^2d\mu_g+\sup_{B_1(x)}|\Delta u|^2\right),
\end{align}
where, here and below, $C(n,g)$ denotes a positive constant depending only on $n$, $\|g\|_{L^{\frac{n}{2}}}$ and the $L^2$-Sobolev constant in $B_1(x)$. Thus $|\nabla u|$ is bounded.
Moreover, if $\Delta u=0$, by rescaling we have
\begin{align}\label{e:gradient_byL2gradient}
\sup_{B_{1/2}(x)}|\nabla u|^2\le C(n,g)\fint_{B_{3/4}(x)}|\nabla u|^2(y)d\mu_g(y).
\end{align}
Let $\varphi$ be a cut-off function satisfying $\varphi\equiv 1$ in $B_{3/4}(x)$, ${\rm supp}\,\varphi\subset B_1(x)$ and $|\nabla \varphi|\le C(n,g)$. Multiplying $\Delta u=0$ by $u\varphi^2$ and integrating, we have
\begin{align}
0=\int_{B_1(x)}\varphi^2 u\Delta u d\mu_g(y)= -\int_{B_1(x)}\varphi^2 |\nabla u|^2d\mu_g(y)-\int_{B_1(x)}2\varphi u\langle \nabla \varphi,\nabla u\rangle d\mu_g(y).
\end{align}
Thus by Cauchy inequality, we get
\begin{align}
\int_{B_1(x)}\varphi^2 |\nabla u|^2d\mu_g(y)\le 4\int_{B_1(x)}|\nabla \varphi|^2u^2d\mu_g(y)\le C(n,g)\int_{B_1(x)}u^2d\mu_g(y).
\end{align}
Combining with \eqref{e:gradient_byL2gradient}, we have
\begin{align}
\sup_{B_{1/2}(x)}|\nabla u|^2\le C(n,g)\fint_{B_1(x)}|\nabla u|^2d\mu_g.
\end{align}
This finishes the proof of Proposition \ref{p:gradientdestimate_Deltabound}.
\end{proof}
\begin{proposition}[Heat kernel estimates]\label{p:heatkernel}
Assume the hypotheses of Theorem \ref{t:RCDnonnegative}. Then the heat kernel $\rho_t(x,y)$ of $(M^n,g)$ satisfies, for $0<t\le 1$:
\begin{itemize}
\item[(1)] $\rho_t(x,y)=\rho_t(y,x)>0$ and $(\partial_t-\Delta_x)\rho_t(x,y)=0$ for all $t>0, x,y\in M$.
\item[(2)] $\lim_{t\to 0}\rho_t(x,y)=\delta_x(y)$.
\item[(3)] $\rho_t(x,y) \le Ct^{-n/2}e^{-d^2(x,y)/C}$ for all $x,y\in M$.
\item[(4)] $|\nabla_x\rho_t(x,y)|\le Ct^{-(n+1)/2}e^{-d^2(x,y)/C}$ for all $x,y\in M$.
\end{itemize}
where the constant $C$ depends on $(M^n,g)$.
\end{proposition}
\begin{proof}
(1) and (2) are basic properties of the heat kernel. To see (3), noting that the metric is $C^0$ and asymptotically flat, the upper bound follows directly from Theorem 0.2 of \cite{Sturm}. Noting the gradient estimates for harmonic functions in Proposition \ref{p:gradientdestimate_Deltabound}, (4) is a consequence of Theorem 1.2 of \cite{CJKS}. This completes the proof. Alternatively, one can also argue as in the gradient estimates for harmonic functions to obtain the heat kernel gradient estimate.
\end{proof}
\vskip 3mm
\begin{lemma}[$W^{1,2}$-approximation]\label{l:w12approximateheatkernel}
Assume the hypotheses of Theorem \ref{t:RCDnonnegative}, and let $u\in D(\Delta)$ with $\Delta u\in W^{1,2}$. Define $u_t(x)=\int_M\rho_t(x,y)u(y)d\mu_g(y)$. Then
\begin{enumerate}
\item $u_t\to u$ in $W^{1,2}$-sense on any compact subset as $t\to 0$.
\item $\Delta u_t\to \Delta u$ in $L^2$-sense on any compact subset as $t\to 0$.
\item For any $t>0$, $|\nabla u_t|,|\Delta u_t|, |\nabla \Delta u_t|$ are bounded.
\item For any $t>0$, $u_t\in W^{2,p}$ when $n\le p<\infty$, and $u_t\in W^{2,q}$ for any $q<\infty$ when $p=\infty$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us first show that $u_t\to u$ in the $L^2$-sense. Since $u$ can be approximated by continuous functions, we may assume for simplicity that $u$ is continuous. Then for any compact measurable subset $\Omega\subset M$ we have
\begin{align}
\int_{\Omega} |u_t(x)-u(x)|^2d\mu_g(x)&= \int_{\Omega}\left(\int_M u(y)\rho_t(x,y)d\mu_g(y)-u(x)\right)^2d\mu_g(x)\\
&\le \int_{\Omega}\int_M |u(y)-u(x)|^2\rho_t(x,y)d\mu_g(y)d\mu_g(x).
\end{align}
By the heat kernel upper bound in Proposition \ref{p:heatkernel} and the continuity of $u$, we conclude that $\lim_{t\to 0}\int_{\Omega} |u_t(x)-u(x)|^2d\mu_g(x)=0$. Next, noting that $u_t(x)=\int_Mu(y)\rho_t(x,y)d\mu_g(y)$, we have
\begin{align}
\Delta u_t(x)=\int_M u(y)\Delta_x\rho_t(x,y)d\mu_g(y)=\int_M u(y)\partial_t\rho_t(x,y)d\mu_g(y) =\int_Mu(y)\Delta_y\rho_t(x,y)d\mu_g(y)=\int_M\Delta u(y)\rho_t(x,y)d\mu_g(y).
\end{align}
Therefore, the same argument as above shows that $\Delta u_t\to \Delta u$ in the $L^2$-sense on any compact measurable subset, which proves (2). Let us now show $u_t\to u$ in the $W^{1,2}$-sense. Actually, for any compactly supported cut-off function $\varphi$ we can compute
\begin{align}
\int_M \varphi^2(x) |\nabla u_t-\nabla u|^2(x)d\mu_g(x)\le& \int_M |\Delta u_t-\Delta u|\cdot |u_t-u|\varphi^2(x)d\mu_g(x)+\int_M 2|\nabla \varphi|(x)|\nabla u_t-\nabla u|(x) |u_t-u|(x)\varphi(x)d\mu_g(x)\\
\le& \frac{1}{2}\int_M \varphi^2(x) |\nabla u_t-\nabla u|^2(x)d\mu_g(x)+2\int_M |\nabla \varphi|^2 |u_t-u|^2(x)d\mu_g(x)\\
&+\left(\int_M|\Delta u_t-\Delta u|^2\varphi^2(x)d\mu_g(x)\right)^{1/2}\left(\int_M|u_t- u|^2\varphi^2(x)d\mu_g(x)\right)^{1/2}.
\end{align}
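The first inequality above comes from integration by parts: since $\varphi$ has compact support,
\begin{align*}
\int_M \varphi^2 |\nabla (u_t-u)|^2d\mu_g=-\int_M \varphi^2(u_t-u)\Delta(u_t- u)d\mu_g-2\int_M \varphi(u_t-u)\langle \nabla \varphi,\nabla (u_t-u)\rangle d\mu_g,
\end{align*}
while the second follows from Young's and the Cauchy-Schwarz inequalities; the term $\frac{1}{2}\int_M \varphi^2 |\nabla u_t-\nabla u|^2d\mu_g$ is then absorbed into the left hand side.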
Letting $t\to 0$, we get $\lim_{t\to 0}\int_M \varphi^2(x) |\nabla u_t-\nabla u|^2(x)d\mu_g(x)=0$. Hence we complete the proofs of (1) and (2).
(3) follows directly from the heat kernel gradient estimate in Proposition \ref{p:heatkernel}. By the gradient bound in (3) and Remark \ref{r:W2qestimate}, (4) holds. This finishes the whole proof.
\end{proof}
\vskip 3mm
\begin{proposition}[Weak Bochner inequality]\label{p:weakbochnerinequaliyt}
Assume the hypotheses of Theorem \ref{t:RCDnonnegative}. Then for any $u\in D(\Delta)$ such that $\Delta u\in W^{1,2}$, and any nonnegative bounded test function $\varphi\in D(\Delta)$ with $|\Delta \varphi|\in L^\infty$, the following Bochner inequality holds:
\begin{align}\label{e:weakbochnoerlowerbound}
\frac{1}{2}\int_M \Delta \varphi |\nabla u|^2 d\mu_g-\int_M \varphi \langle \nabla \Delta u,\nabla u\rangle d\mu_g\ge \frac{1}{n}\int_M\varphi (\Delta u)^2d\mu_g.
\end{align}
\end{proposition}
\begin{proof}
Let $\varphi$ be a nonnegative bounded test function with $\varphi\in D(\Delta)$ and $|\Delta \varphi|\in L^\infty$. By Proposition \ref{p:gradientdestimate_Deltabound}, $|\nabla \varphi|$ is bounded. For any given $R>0$, assume $\eta_R$ is a smooth cut-off function satisfying $\eta_R\equiv 1$ on $B_R(\Sigma)$ and $\eta_R\equiv 0$ outside $B_{2R}(\Sigma)$. Let $\varphi_R=\varphi \eta_R$. Since the metric $g$ is smooth away from $\Sigma$ and $|\nabla \eta_R|$ vanishes on $B_R(\Sigma)$, one can check directly that $\varphi_R\in W^{1,2}$ and $\Delta \varphi_R$ is bounded. Let us first prove \eqref{e:weakbochnoerlowerbound} for $\varphi_R$. Letting $R\to \infty$ then yields the desired formula, since $\Delta \varphi_R\to \Delta \varphi$ and $\varphi_R\to \varphi$ in the $L^\infty$-sense on any compact subset.
Let $u_t(x)=\int_Mu(y)\rho_t(x,y)dy$. By Lemma \ref{l:w12approximateheatkernel} we have $u_t\in W^{2,q}_{loc}$ for any $q<p$. For any fixed $t>0$, assume $h_{i,t}$ is smooth and converges in the $W^{2,q}$-sense to $u_t$ as $i\to \infty$. In particular, $|\nabla h_{i,t}|\in L^\infty$ and $|\nabla^2h_{i,t}|\in L^p$ for any $n\le p\le \infty$. Here we approximate $u_t$ by $h_{i,t}$ again only because $|\nabla^2u_t|$ may fail to be in $L^p$ when $p=\infty$. For $n\le p<\infty$, one can simply use $u_t$ in the following argument instead of $h_{i,t}$.
By Bochner formula, we have on $M\setminus \Sigma$ that
\begin{align}\label{e:bochner_ut}
\frac{1}{2}\Delta |\nabla h_{i,t}|^2&=|\nabla^2h_{i,t}|^2+\langle \nabla \Delta h_{i,t},\nabla h_{i,t}\rangle +{\rm Ric}(\nabla h_{i,t},\nabla h_{i,t})\notag\\
&=|\nabla^2h_{i,t}|^2+\langle \nabla \Delta h_{i,t},\nabla h_{i,t}\rangle\notag\\
&\ge \frac{1}{n}(\Delta h_{i,t})^2+\langle \nabla \Delta h_{i,t},\nabla h_{i,t}\rangle.
\end{align}
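The last step uses the pointwise trace inequality for the Hessian; we record it here as a clarifying step:
\begin{equation*}
|\nabla^2 h_{i,t}|^2\ge \frac{1}{n}\big({\rm tr}\,\nabla^2 h_{i,t}\big)^2=\frac{1}{n}(\Delta h_{i,t})^2,
\end{equation*}
which follows from the Cauchy--Schwarz inequality $({\rm tr}\,S)^2\le n|S|^2$ applied to the symmetric matrix $S=\nabla^2 h_{i,t}$.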
For any $\epsilon>0$, assume $\psi_\epsilon$ is a cut-off function of $\Sigma$ from Lemma \ref{l:cut-off} satisfying $\psi_\epsilon\equiv 1$ on $M\setminus B_\epsilon(\Sigma)$ and $\psi_\epsilon$ vanishes in a neighborhood of $\Sigma$, and $\lim_{\epsilon\to 0}\int_M|\nabla \psi_\epsilon|^qd\mu_g=0$ with $q=\frac{p}{p-1}$. Multiplying $\varphi_R\psi_\epsilon$ to both sides of \eqref{e:bochner_ut} and integrating by parts, we get
\begin{align}\label{e:bochnerutvarphiRepsilon}
-\frac{1}{2}\int_M\langle \nabla (\varphi_R\psi_\epsilon),\nabla |\nabla h_{i,t}|^2\rangle d\mu_g &\ge \frac{1}{n}\int_M(\varphi_R\psi_\epsilon)(\Delta h_{i,t})^2 d\mu_g+\int_M \left(-(\Delta h_{i,t})^2(\varphi_R\psi_\epsilon)-\Delta h_{i,t}\langle \nabla h_{i,t},\nabla (\varphi_R\psi_\epsilon)\rangle \right) d\mu_g\\
&=:I+II.
\end{align}
Let us consider the left hand side.
\begin{align}
\int_M\langle \nabla (\varphi_R\psi_\epsilon),\nabla |\nabla h_{i,t}|^2\rangle d\mu_g&=\int_M \psi_\epsilon \langle\nabla \varphi_R,\nabla |\nabla h_{i,t}|^2\rangle d\mu_g +\int_M\varphi_R\langle \nabla \psi_\epsilon,\nabla |\nabla h_{i,t}|^2\rangle d\mu_g\\
&=-\int_M\psi_\epsilon \Delta \varphi_R |\nabla h_{i,t}|^2 d\mu_g-\int_M|\nabla h_{i,t}|^2\langle \nabla \psi_\epsilon,\nabla \varphi_R\rangle d\mu_g+\int_M\varphi_R\langle \nabla \psi_\epsilon,\nabla |\nabla h_{i,t}|^2\rangle d\mu_g.
\end{align}
Noting that $h_{i,t}$ is smooth and $|\nabla h_{i,t}|, |\nabla \varphi_R|$ are bounded and $|\nabla^2h_{i,t}|\in L^p$, letting $\epsilon\to 0$, the last two terms converge to zero. Thus we get
\begin{align}
\lim_{\epsilon\to 0}\int_M\langle \nabla (\varphi_R\psi_\epsilon),\nabla |\nabla h_{i,t}|^2\rangle d\mu_g=-\int_M \Delta \varphi_R |\nabla h_{i,t}|^2d\mu_g.
\end{align}
In \eqref{e:bochnerutvarphiRepsilon}, when $\epsilon\to 0$, the term $I$ converges to $\frac{1}{n}\int_M\varphi_R(\Delta h_{i,t})^2$. Estimating similarly as above, letting $\epsilon\to 0$, the term $II$ would converge to $\int_M \left(-(\Delta h_{i,t})^2\varphi_R-\Delta h_{i,t}\langle \nabla h_{i,t},\nabla \varphi_R\rangle \right) d\mu_g$. Therefore, we arrive at
\begin{align}
\frac{1}{2}\int_M \Delta \varphi_R |\nabla h_{i,t}|^2 d\mu_g\ge \frac{1}{n}\int_M\varphi_R(\Delta h_{i,t})^2 d\mu_g+\int_M \left(-(\Delta h_{i,t})^2\varphi_R-\Delta h_{i,t}\langle \nabla h_{i,t},\nabla \varphi_R\rangle \right)d\mu_g.
\end{align}
Since $h_{i,t}\to u_t$ in $W^{2,q}$-sense for some $5/2\le q<n$, letting $i\to \infty$, we get
\begin{align}
\frac{1}{2}\int_M \Delta \varphi_R |\nabla u_{t}|^2 d\mu_g\ge \frac{1}{n}\int_M\varphi_R(\Delta u_{t})^2d\mu_g+\int_M \left(-(\Delta u_{t})^2\varphi_R-\Delta u_{t}\langle \nabla u_{t},\nabla \varphi_R\rangle \right)d\mu_g.
\end{align}
Letting $t\to 0$, by the approximating Lemma \ref{l:w12approximateheatkernel} we get
\begin{align}
\frac{1}{2}\int_M \Delta \varphi_R |\nabla u|^2 d\mu_g&\ge \frac{1}{n}\int_M\varphi_R(\Delta u)^2 d\mu_g+\int_M \left(-(\Delta u)^2\varphi_R -\Delta u\langle \nabla u,\nabla \varphi_R\rangle \right) d\mu_g\\
&=\frac{1}{n}\int_M\varphi_R(\Delta u)^2 d\mu_g+\int_M \varphi_R\langle \nabla \Delta u,\nabla u\rangle d\mu_g,
\end{align}
where the last equality follows from the definition of the Dirichlet Laplacian. Letting $R\to \infty$, we finish the proof.
\end{proof}
Now we are ready to prove Theorem \ref{t:RCDnonnegative}.
\begin{proof}[Proof of Theorem \ref{t:RCDnonnegative} ]
Noting the Bochner inequality proved in Proposition \ref{p:weakbochnerinequaliyt} and Theorem 6 of \cite{EKS15} (see also \cite{AMS}), $(M^n,g)$ with the Lebesgue measure is an RCD space with nonnegative Ricci curvature; see also \cite{BKMR}.
\end{proof}
Now we can finish the proof of Theorem \ref{thm1.2}.
\begin{proof}[Proof of rigidity part of Theorem \ref{thm1.2}]
Now the rigidity part of Theorem \ref{thm1.2} follows immediately from Theorem \ref{t:RCDnonnegative} and the volume stability theorem of RCD spaces (\cite{LoVi,Stru06}, see also Theorem 1.6 of \cite{DeGi}). Indeed, by Lemma \ref{lm6.6} and Theorem \ref{t:RCDnonnegative}, the manifold $(M^n,g)$ has nonnegative Ricci curvature in the RCD sense.
Take any point $x\in M$. Noting that the manifold is asymptotically flat, we have
\begin{align}
\lim_{R\to \infty}\frac{{\rm Vol}(B_R(x))}{\omega_n R^n}=1.
\end{align}
By the volume comparison for RCD spaces with nonnegative Ricci curvature, for every $R>0$ we have
\begin{align}
{\rm Vol}(B_R(x))=\omega_n R^n.
\end{align}
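Explicitly, this equality rests on the Bishop--Gromov monotonicity, which is available on RCD spaces with nonnegative Ricci curvature: the ratio ${\rm Vol}(B_r(x))/(\omega_n r^n)$ is non-increasing in $r$, and for $x$ away from the singular set it tends to $1$ as $r\to 0^+$, so for every $R>0$
\begin{align*}
1=\lim_{r\to \infty}\frac{{\rm Vol}(B_r(x))}{\omega_n r^n}\le \frac{{\rm Vol}(B_R(x))}{\omega_n R^n}\le \lim_{r\to 0^+}\frac{{\rm Vol}(B_r(x))}{\omega_n r^n}=1,
\end{align*}
and the equality extends to all $x$ by continuity of the volume.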
By the volume rigidity Theorem 1.6 of \cite{DeGi} (or Corollary 1.7 of \cite{DeGi}), we get that $B_R(x)$ is isometric to $B_R(0^n)\subset \mathbb{R}^n$ for every $R>0$. This implies that $M$ is isometric to $\mathbb{R}^n$.
\end{proof}
{
"timestamp": "2020-12-29T02:22:41",
"yymm": "2012",
"arxiv_id": "2012.14041",
"language": "en",
"url": "https://arxiv.org/abs/2012.14041"
}
\section{Introduction}
This report consists of a summary of our recent progress on the relationship between area laws and OPE blocks. The area law has been a recurring topic in physics. Its prototype dates back to black hole physics in general relativity. The unusual property that the thermal entropy of a black hole is proportional to the area of its event horizon \cite{Bekenstein:1973ur, Hawking:1974sw} has stimulated various modern ideas in theoretical physics, including the famous holographic principle.
The OPE block \cite{Czech:2016xec,deBoer:2016pqk}, on the other hand, is a relatively new topic in conformal field theory, though it was already noticed at the early stages of conformal field theory \cite{Ferrara:1971vh,Ferrara:1972cq}. The operator product expansion of two primary operators is equivalent to a summation of OPE blocks weighted by the corresponding three point function coefficients. An OPE block is a smeared operator generated by a so-called (quasi-)primary operator.
The modular Hamiltonian, minus the logarithm of the reduced density matrix \cite{Haag:1992}, plays a central role in the context of geometric entanglement entropy \cite{Bombelli:1986rw,Srednicki:1993im,Callan:1994py,Araki:1976zv}. Entanglement entropy is the von Neumann entropy of the reduced density matrix of a subregion of spacetime. An intriguing fact about entanglement entropy is that it obeys an area law at leading order, though one should introduce a cutoff to regulate the divergent behaviour. Its connection to gravity was established by the work of Ryu and Takayanagi \cite{Ryu:2006bv}, in which they proposed that the entanglement entropy of a CFT is equal to the area of a minimal surface in the bulk AdS spacetime.
The modular Hamiltonian is a special OPE block, generated by the energy-momentum tensor for a ball region. This leads to the conjecture that OPE blocks may be related to an area law in the same way as the modular Hamiltonian. Indeed, in a series of papers \cite{Long:2019fay,Long:2019pcv,Long:2020njs,Long:2020zeq}, we have shown that the quantity which satisfies an area law is the type-$(m)$ connected correlation function (CCF). More explicitly, the leading term of the type-$(m)$ CCF is proportional to the area of the boundary of the ball. Among the subleading terms, we find a logarithmic divergence with degree $q$. The degree $q$ is a natural number which is no larger than 2 in general dimensions. The coefficient $p_q$ of the logarithmic term with degree $q$ is cutoff independent. We establish a relationship between $p_q$ and the type-$(m-1,1)$ CCF of OPE blocks for two balls which are far away from each other. The coefficient $p_q$ obeys a cyclic identity which is independent of the order of the operators.
This paper is organised as follows. In section 2, we introduce some basic concepts and conventions used in this paper. Section 3 is devoted to the study of the new area law related to OPE blocks. Various generalizations are given in section 4. We conclude in section 5 with a number of general open problems that deserve, in our opinion, more work.
\section{Setup}
In this section, we introduce some basic concepts and conventions used in this paper.
\subsection{Area law}
In any continuum quantum field theory (QFT), physical degrees of freedom exist at each point $(t,x^i), i=1,\cdots,d-1$ of the spacetime $M$. At each time slice $t=t_0$, the data on the Cauchy surface $\Sigma$ determine the evolution of the fields. One can divide the surface $\Sigma$ into a spacelike subregion $A$ and its complement $\bar{A}$, $\Sigma=A\cup \bar{A}$. The boundary $\partial A$ is a codimension 2 surface whose area is $\mathcal{A}$. The causal development of $A$ is denoted by $\mathcal{D}(A)$. The physical data on $A$ can only determine the evolution of the fields in $\mathcal{D}(A)$. The causal development $\mathcal{D}(A)$ is an independent subsystem of the original spacetime $M$. Operators in this subsystem are collected to form an algebra $\bm{a}(A)$. Assume the QFT in the spacetime $M$ is described by a density matrix $\rho$; then by integrating out the degrees of freedom in the complement $\bar{A}$, one obtains a reduced density matrix $\rho_A$
\begin{equation}
\rho_A=\text{tr}_{\bar{A}}\rho.
\end{equation}
The reduced density matrix $\rho_A$ is a special operator in $\bm{a}(A)$ since it describes the subsystem $\mathcal{D}(A)$ effectively.
A general quantity $\mathcal{Q}(A)$ in $\bm{a}(A)$ is said to obey area law if its leading term is proportional to the area of the boundary $\partial A$,
\begin{equation}
\mathcal{Q}(A)\propto \mathcal{A}+\cdots.
\end{equation} One typical example is the black hole entropy in Einstein gravity.
The black hole entropy is proportional to the area of its event horizon,
\begin{equation}
S_{bh}=\frac{\mathcal{A}}{4G}
\end{equation} where $G$ is the Newton constant. At the loop level, the black hole entropy receives logarithmic corrections \cite{Solodukhin:1994st,Solodukhin:1994yz,Kaul:2000kf,Carlip:2000nv,Govindarajan:2001ee,Sen:2012dw}. Usually, the logarithmic correction takes the form
$C\log\mathcal{A}$ where the constant $C$ may encode useful information of the black hole.
Sometimes the area law is divergent; one typical example is the geometric entanglement entropy
\begin{equation}
S_A=-\text{tr}_A\rho_A\log\rho_A.
\end{equation} In this case, one should introduce a cutoff $\epsilon>0$,
\begin{equation}
S_A=\gamma \frac{\mathcal{A}}{\epsilon^{d-2}}+\cdots.
\end{equation} In the subleading terms, there may be a logarithmic term whose coefficient is independent of the cutoff,
\begin{equation}
S_A=\gamma \frac{R^{d-2}}{\epsilon^{d-2}}+\cdots+p\log\frac{R}{\epsilon}+\cdots
\end{equation} where the parameter $R$ is the characteristic length of the region $A$.
In this report, we will present a quantity $\mathcal{Q}(A)$ which has a slightly different logarithmic behaviour
\begin{equation}
\mathcal{Q}(A)=\gamma\frac{R^{d-2}}{\epsilon^{d-2}}+\cdots+p_q\log^q \frac{R}{\epsilon}+\cdots.
\end{equation} The maximum power $q$ of the logarithmic terms is a natural number; we will call it the degree of the quantity $\mathcal{Q}(A)$. The coefficient $p_q$ is cutoff independent and encodes useful information about the theory. In the special case that the subregion $A$ is a ball, $R$ can be chosen as its radius. The subregion $A$ and its causal development $\mathcal{D}(A)$ are in one-to-one correspondence, so we will not distinguish them in the following.
In two dimensions, there are no polynomial terms in $\frac{R}{\epsilon}$; the modified ``area law'' is
\begin{equation}
\mathcal{Q}(A)=p_q \log^q\frac{R}{\epsilon}+\cdots.
\end{equation}
\subsection{OPE block}
In any $d$-dimensional CFT, operators are classified into (quasi-)primary operators $\mathcal{O}$ and their descendants $\partial_{\mu}\partial_\nu \cdots \mathcal{O}$. A general primary operator is characterized by two quantum numbers, the conformal weight $\Delta$ and the $so(d-1)$ spin $J_{ij}$ with magnitude $J$. Under a global conformal transformation $x\to x'$, a spin-0 primary operator transforms as
\begin{equation}
\mathcal{O}(x)\to \left|\frac{\partial x'}{\partial x}\right|^{-\Delta/d}\mathcal{O}(x),
\end{equation}
where $|\partial x'/ \partial x|$ is the Jacobian of the conformal transformation of the coordinates and $\Delta$ is the conformal weight of the primary operator. The operator product expansion (OPE) of two separated primary scalar operators $\mathcal{O}_i(x_1)\mathcal{O}_j(x_2)$ expands their product in a local, orthogonal and complete basis around a suitable point
\begin{equation}
\mathcal{O}_i(x_1)\mathcal{O}_j(x_2)=\sum_{k}C_{ijk}|x_{12}|^{\Delta_k-\Delta_i-\Delta_j}(\mathcal{O}_k(x_2)+\cdots),\label{ope}
\end{equation}
where $\cdots$ stands for the descendants of the primary operator $\mathcal{O}_k$. Its form is fixed by global conformal symmetry, so it contains only kinematic information about the CFT. The summation runs over all possible primary operators of the CFT. Here we expand the product around the point $x_2$. The distance between any two points $x_i,x_j$ is written as $|x_{ij}|$. The constant $C_{ijk}$ is called the OPE coefficient, and it is related to the three point function of primary operators
\begin{equation}
\langle \mathcal{O}_i(x_1)\mathcal{O}_j(x_2)\mathcal{O}_k(x_3)\rangle=\frac{C_{ijk}}{|x_{12}|^{\Delta_{12,3}}|x_{23}|^{\Delta_{23,1}}|x_{13}|^{\Delta_{13,2}}},\quad \Delta_{ij,k}=\Delta_i+\Delta_j-\Delta_k.
\end{equation} The OPE coefficients are the only dynamical parameters of the CFT. The constants $\Delta_i, \Delta_j,\Delta_k$ are the conformal weights of the corresponding primary operators. By collecting all kinematic terms in the summation, we can rewrite the OPE \eqref{ope} as
\begin{equation}
\mathcal{O}_i(x_1)\mathcal{O}_j(x_2)=|x_{12}|^{-\Delta_i-\Delta_j}\sum_k C_{ijk}Q^{ij}_k(x_1,x_2).
\end{equation}
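As a consistency check of the exponents in \eqref{ope}, one can insert the OPE into the exact three point function and compare powers in the limit $x_1\to x_2$; a small \texttt{sympy} sketch (the symbol names are ours):

```python
import sympy as sp

di, dj, dk = sp.symbols('Delta_i Delta_j Delta_k', positive=True)
x12, x23 = sp.symbols('x12 x23', positive=True)

# exact three-point function, with |x13| -> |x23| in the limit x1 -> x2
exact = x12**(-(di + dj - dk)) * x23**(-(dj + dk - di)) * x23**(-(di + dk - dj))
# OPE side: |x12|^{Delta_k-Delta_i-Delta_j} times <O_k(x2) O_k(x3)> = |x23|^{-2 Delta_k}
ope = x12**(dk - di - dj) * x23**(-2*dk)

assert sp.simplify(exact / ope) == 1
```

The $|x_{12}|$ powers match by construction, and the $|x_{23}|$ exponents combine to $-2\Delta_k$, as required by the two point function of $\mathcal{O}_k$.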
The objects $Q^{ij}_k(x_1,x_2)$ are called OPE blocks \cite{Ferrara:1971vh,Ferrara:1972cq,Czech:2016xec}. They are non-local operators in the CFT and depend on the positions $x_1$ and $x_2$ of the external operators. The upper indices $i$ and $j$ indicate that an OPE block also depends on the quantum numbers of the external operators $\mathcal{O}_i$ and $\mathcal{O}_j$. It is easy to see that the OPE block has dimension zero. Under a global conformal transformation $x\to x'$, an OPE block $Q^{ij}_k(x_1,x_2)$ transforms as
\begin{equation}
Q^{ij}_k(x_1,x_2)\to f(x_1',x_2')Q^{ij}_k(x_1',x_2').
\end{equation}
The explicit form of $f(x_1',x_2')$ is not important in this work. When the two external operators are identical, we have $f(x_1',x_2')=1$ and the OPE block is invariant under global conformal transformations. One can also show that the OPE block is independent of the external operator in this special case. For this reason, we relabel such an OPE block as
\begin{equation}
Q_A[\mathcal{O}_k]=Q_k^{ii}(x_1,x_2).\label{opeii}
\end{equation}
The subscript $A$ denotes the region determined by the two points $x_1$ and $x_2$ at which the two external operators are inserted. The operator in the square bracket reflects the fact that the OPE block is generated by a primary operator $\mathcal{O}_k$. We omit the label $i$ since the OPE block is insensitive to the external operators in this case. We will classify the primary operators $\mathcal{O}_k$ into conserved currents $\mathcal{J}$ and non-conserved operators $\mathcal{O}$. A general symmetric traceless primary operator obeys the following unitarity bound \cite{Minwalla:1997ka}
\begin{eqnarray}
\left\{\begin{array}{l}
\Delta\ge J+d-2,\quad J\ge 1,\nonumber\\
\Delta\ge \frac{d-2}{2},\quad J=0.\end{array}
\right.
\end{eqnarray}
A conserved current $\mathcal{J}$ with spin $J (J\ge 1)$ will satisfy $\Delta=J+d-2$. All other primary operators are non-conserved operators.
Correspondingly, the OPE block \eqref{opeii} generated by a conserved current $\mathcal{J}$ will be called a type-J OPE block. On the other hand, the OPE block \eqref{opeii} generated by a non-conserved operator $\mathcal{O}$ will be called a type-O OPE block.
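A standard example illustrating this classification: the energy-momentum tensor has spin $J=2$ and saturates the unitarity bound,
\begin{equation*}
\mathcal{J}_{\mu\nu}=T_{\mu\nu}:\qquad \Delta=d=J+d-2,\qquad \partial^{\mu}T_{\mu\nu}=0,
\end{equation*}
so it is a conserved current in this terminology; a conserved spin-1 current similarly has $\Delta=d-1$.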
When the two operators are time-like separated, the region $A$ is a causal diamond, and the two operators sit at the past and future tips of the diamond $A$. We can use a conformal transformation to fix
\begin{equation}
x_1=(1, \vec{x}_0),\quad x_2=(-1,\vec{x}_0), \label{x1x2}
\end{equation}
then the causal diamond $A$ intersects the $t=0$ slice in a unit ball, which we will also denote by $A$
\begin{equation}
A=\{(0,\vec{x})|(\vec{x}-\vec{x}_0)^2\le 1\}.\label{sigma}
\end{equation}
The center of the ball is $\vec{x}_0$. The boundary of the ball $A$ is a unit sphere $\partial A$. In the context of geometric entanglement entropy, the surface $\partial A$ is an entanglement surface which separates the ball $A$ from its complement. The leading term of the entanglement entropy is proportional to the area of the surface $\partial A$ in general higher dimensions ($d>2$). In two dimensions, the entanglement entropy is logarithmically divergent with logarithmic degree $q=1$. There is a conformal Killing vector $K$ which preserves the diamond $A$,
\begin{equation}
K^{\mu}=\frac{1}{2}\left(1-(\vec{x}-\vec{x}_0)^2-t^2,\,-2t (\vec{x}-\vec{x}_0)\right).
\end{equation}
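The nullity of this vector on the light-like boundary of the diamond can be verified with a short \texttt{sympy} sketch (our check; we place the center at the origin, so the spatial part is $-t\vec{x}$, and use signature $(-,+,\cdots,+)$):

```python
import sympy as sp

t, r = sp.symbols('t r', real=True)  # r = |x - x_0|, radial distance from the center
K0 = (1 - r**2 - t**2) / 2           # time component of K
Kr = -t * r                          # radial component of the spatial part -t*(x - x_0)
norm2 = -K0**2 + Kr**2               # Minkowski norm squared, signature (-,+,...,+)

# future and past light-like boundaries of the unit diamond: t = 1 - r and t = r - 1
assert sp.simplify(norm2.subs(t, 1 - r)) == 0
assert sp.simplify(norm2.subs(t, r - 1)) == 0
```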
The conformal Killing vector $K$ is null on the boundary of the diamond $A$. It generates the modular flow of the diamond $A$. The type-O OPE block corresponding to the point pair \eqref{x1x2}, or equivalently to the unit ball $A$ \eqref{sigma}, is \cite{deBoer:2016pqk}
\begin{equation}
Q_A[\mathcal{O}_{\mu_1\cdots\mu_J}]=c_{\mathcal{O}_{\mu_1\cdots\mu_J}}\int_{\mathcal{D}(A)} d^dx K^{\mu_1}\cdots K^{\mu_J}|K|^{\Delta-d-J} \mathcal{O}_{\mu_1\cdots \mu_J},\label{typeO}
\end{equation}
where the primary operator $\mathcal{O}_{\mu_1\cdots\mu_J}$ is non-conserved
\begin{equation}
\partial^{\mu_1}\mathcal{O}_{\mu_1\cdots\mu_J}\not=0.
\end{equation}
It has dimension $\Delta$ and spin $J$. When the operator is a conserved current
\begin{equation}
\partial^{\mu_1}\mathcal{J}_{\mu_1\cdots\mu_J}=0,\label{cons}
\end{equation}
the corresponding type-J OPE block is
\begin{equation}
Q_A[\mathcal{J}_{\mu_1\cdots\mu_J}]=c_{\mathcal{J}_{\mu_1\cdots\mu_J}}\int_{A}d^{d-1}\vec{x} (K^0)^{J-1}\mathcal{J}_{0\cdots0}. \label{typeJ}
\end{equation}
It can be obtained from \eqref{typeO} by using the conservation law \eqref{cons} and reducing the integral to a lower, $(d-1)$-dimensional one; the coefficient $c_{\mathcal{J}_{\mu_1\cdots\mu_J}}$ is redefined at the same time. In \eqref{typeO} and \eqref{typeJ}, the coefficients $c_{\mathcal{O}_{\mu_1\cdots\mu_J}}$ and $c_{\mathcal{J}_{\mu_1\cdots\mu_J}}$ are free parameters; we set them to 1.
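For instance, for a conserved spin-1 current the kernel $(K^0)^{J-1}$ is trivial, and \eqref{typeJ} reduces (with our normalization) to the charge contained in the ball,
\begin{equation*}
Q_A[\mathcal{J}_{\mu}]=\int_{A}d^{d-1}\vec{x}\,\mathcal{J}_{0},
\end{equation*}
while for the energy-momentum tensor ($J=2$) the kernel is $K^0$, reproducing the modular Hamiltonian below up to normalization.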
\subsection{Modular Hamiltonian and area law}
A very special type-J OPE block is the modular Hamiltonian \cite{Haag:1992,Casini:2011kv} of the ball $A$,
\begin{equation}
H_A=2\pi \int_{A}d^{d-1}\vec{x} K^0 T_{00}=2\pi \int_{A}d^{d-1}\vec{x} \frac{1-(\vec{x}-\vec{x}_0)^2}{2} T_{00}(0,\vec{x}).
\end{equation}
The modular Hamiltonian is minus the logarithm of the reduced density matrix $\rho_A$,
\begin{equation}
H_A=-\log \rho_A.\label{mod}
\end{equation}
It plays a central role in the context of
entanglement entropy,
\begin{equation}
S_A=-\text{tr}_{A}\rho_A \log\rho_A=\text{tr}_{A} e^{-H_A}H_A.
\end{equation}
More generally, R\'enyi entanglement entropy
\begin{equation}
S_A^{(n)}=\frac{1}{1-n}\log\text{tr}_{A}\rho_A^n
\end{equation}
has been shown to satisfy an area law in general,
\begin{equation}
S_A^{(n)}=\gamma \frac{\mathcal{A}}{\epsilon^{d-2}}+\cdots,
\end{equation}
where $\mathcal{A}$ is the area of the entanglement surface $\partial A$ and $\epsilon$ is a UV cutoff. The constant $\gamma$ is cutoff dependent. The subleading terms $\cdots$ contain a logarithmic term with degree $q=1$ in even dimensions
\begin{equation}
S_A^{(n)}=\gamma \frac{\mathcal{A}}{\epsilon^{d-2}}+\cdots+p_1(n)\log\frac{R}{\epsilon}+\cdots,\label{nrenyi}
\end{equation}
where we have inserted back the radius $R=1$. The area $\mathcal{A}$ is related to the radius $R$ through the power law
\begin{equation}
\mathcal{A}\sim R^{d-2}.
\end{equation}
The coefficient $p_1(n)$ encodes useful information about the CFT. The relation between the modular Hamiltonian and the area law motivates the conjecture that OPE blocks may be related to area laws in a suitable way. We will give the framework to discuss this problem in the following subsection.
\subsection{Deformed reduced density matrix and connected correlation function}
Given a primary operator $\mathcal{O}$ in a ball $A$, one can always define a corresponding OPE block $Q_A[\mathcal{O}]$. We formally construct an exponential operator \cite{Long:2019pcv}
\begin{equation}
\rho_A=e^{-\mu Q_A}\label{dred}
\end{equation} which is still localized in the subregion $A$. The constant $\mu$ is a free parameter. Operators of the form \eqref{dred} are called deformed reduced density matrices. Note that we use the same symbol $\rho_A$ to label the deformed reduced density matrix. Recall that the modular Hamiltonian is a special OPE block; if one replaces the OPE block in \eqref{dred} by the modular Hamiltonian and sets $\mu=1$, the deformed reduced density matrix becomes exactly the reduced density matrix. We can also relax the definition: $Q_A$ in \eqref{dred} could be a linear superposition of several OPE blocks.
Note that our definition of the deformed reduced density matrix is a direct extension of the generalized reduced density matrix in the context of the so-called charged R\'enyi entropy \cite{Belin:2013uta}. In that work, $Q_A$ is a charge generated by a $U(1)$ current. The corresponding charged R\'enyi entropy is holographically dual to the thermal entropy of a charged black hole with a hyperbolic horizon. In our definition, however, $Q_A$ is a general OPE block or a linear superposition of OPE blocks.
As a naive generalization of R\'enyi entanglement entropy, we construct the logarithm of the vacuum expectation value of the deformed reduced density matrix,
\begin{equation}
T_A(\mu)=\log \langle \rho_A\rangle=\log\langle e^{-\mu Q_A}\rangle.
\end{equation}
When $Q_A$ is modular Hamiltonian, the above quantity is related to the R\'enyi entropy for the vacuum state.
However, a direct computation of $T_A(\mu)$ is hard in general. A more severe problem is that an OPE block is in general unbounded from below, so this definition is not valid for general OPE blocks. To circumvent this problem, we observe that $T_A(\mu)$ can be expanded for small $\mu$,
\begin{equation}
T_A(\mu)=\sum_{m=1}^\infty \frac{(-\mu)^m}{m!}\langle Q_A^m\rangle_c.
\end{equation}
The Taylor expansion coefficient
\begin{equation}
\langle Q_A^m\rangle_c=(-1)^m \frac{\partial^m}{\partial\mu^m}T_A(\mu)|_{\mu\to0}
\end{equation} is called the type-$(m)$ connected correlation function (CCF) of the OPE block $Q_A$. For each definite $m$, one can always calculate the corresponding CCF without knowing $T_A(\mu)$ in closed form. The first few CCFs are
\begin{eqnarray}
\langle Q_A^2\rangle_c&=&\langle Q_A^2\rangle-\langle Q_A\rangle^2,\nonumber\\
\langle Q_A^3\rangle_c&=&\langle Q_A^3\rangle-3\langle Q_A^2\rangle\langle Q_A\rangle+2\langle Q_A\rangle^3.
\end{eqnarray} Working with CCFs, there is no issue with the unboundedness of the OPE block. As an application of the concept of CCF, we choose the OPE block to be the modular Hamiltonian; then it is easy to show that the CCF of the modular Hamiltonian $H_A$ satisfies an area law with logarithmic degree $q=1$ in even dimensions,
\begin{equation}
\langle H_A^m\rangle_c=\tilde{\gamma}\frac{\mathcal{A}}{\epsilon^{d-2}}+\cdots+\tilde{p}_1^{(m)} \log\frac{R}{\epsilon}+\cdots,\quad m\ge 1.\label{HAm}
\end{equation}
The coefficient $\tilde{p}_1^{(m)}$ is determined from $p_1(n)$ by
\begin{equation}
\tilde{p}_1^{(m)}=(-1)^m\partial_n^{m}(1-n)p_1(n) |_{n\to 1}.
\end{equation}
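The low-order CCF formulas above are the classical moment--cumulant relations; they can be verified symbolically (a \texttt{sympy} sketch, with $m_k$ standing for the moments $\langle Q_A^k\rangle$):

```python
import sympy as sp

mu = sp.symbols('mu')
m1, m2, m3 = sp.symbols('m1 m2 m3')  # moments <Q_A>, <Q_A^2>, <Q_A^3>

# <exp(-mu Q_A)> expanded to third order in mu
gen = 1 - mu*m1 + mu**2/2*m2 - mu**3/6*m3
T = sp.log(gen)

# cumulants: <Q_A^m>_c = (-1)^m d^m T/d mu^m at mu = 0
c2 = sp.diff(T, mu, 2).subs(mu, 0)
c3 = -sp.diff(T, mu, 3).subs(mu, 0)

assert sp.simplify(c2 - (m2 - m1**2)) == 0
assert sp.simplify(c3 - (m3 - 3*m2*m1 + 2*m1**3)) == 0
```

The derivatives at $\mu=0$ depend only on the coefficients kept in the truncation, so the check is exact.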
There can be multiple spacelike-separated balls $A_1,A_2,\cdots$, and each region $A_i$ has an associated OPE block $Q_{A_i}$.
We insert $m_i$ OPE blocks into region $A_i$; then we can define the corresponding type-Y CCF
\begin{equation}
\langle Q_{A_1}^{m_1}Q_{A_2}^{m_2}\cdots \rangle_c
\end{equation}
where the Young diagram $Y$ is
\begin{equation}
Y=(m_1,m_2,\cdots),\quad m_1\ge m_2\ge\cdots\ge1.
\end{equation} The generator of all type-Y CCFs is
\begin{equation}
T_{\cup A_i}(\mu_1,\mu_2,\cdots)=\log\frac{\langle e^{-\sum_i \mu_i Q_{A_i}}\rangle}{\prod_i\langle e^{-\mu_i Q_{A_i}}\rangle}.
\end{equation}
When there are only two balls $A$ and $B$, the generator is
\begin{equation}
T_{A\cup B}(\mu_1,\mu_2)=\log \frac{\langle e^{-\mu_1Q_{A}-\mu_2 Q_{B}}\rangle}{\langle e^{-\mu_1Q_{A}}\rangle\langle e^{-\mu_2 Q_{B}}\rangle}=\sum_{m_1\ge 1,m_2\ge 1}\frac{(-1)^{m_1+m_2}\mu_1^{m_1}\mu_2^{m_2}}{m_1!m_2!}\langle Q_{A}^{m_1}Q_{B}^{m_2}\rangle_c.
\end{equation}
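The leading mixed term of this double expansion can likewise be checked symbolically (a \texttt{sympy} sketch, with $q_A$, $q_B$, $q_{AB}$ standing for $\langle Q_A\rangle$, $\langle Q_B\rangle$, $\langle Q_AQ_B\rangle$):

```python
import sympy as sp

u1, u2 = sp.symbols('mu1 mu2')
qA, qB, qAB = sp.symbols('qA qB qAB')  # <Q_A>, <Q_B>, <Q_A Q_B>

# numerator and denominator of the generator, to first order in each mu
joint = 1 - u1*qA - u2*qB + u1*u2*qAB
single = (1 - u1*qA) * (1 - u2*qB)
T = sp.log(joint / single)

# coefficient of mu1*mu2: (-1)^{1+1}/(1!1!) <Q_A Q_B>_c
mixed = sp.diff(T, u1, u2).subs({u1: 0, u2: 0})
assert sp.simplify(mixed - (qAB - qA*qB)) == 0
```

This recovers $\langle Q_AQ_B\rangle_c=\langle Q_AQ_B\rangle-\langle Q_A\rangle\langle Q_B\rangle$, the type-$(1,1)$ CCF.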
We parameterize $A$ and $B$ as
\begin{equation}
A=\{(0,\vec{x})|(\vec{x}-\vec{x}_0)^2\le 1\},\quad B=\{(0,\vec{x})|\vec{x}^2\le R'^2\}.
\end{equation} There is only one cross ratio
\begin{equation}
\xi=\frac{4R'}{x_0^2-(1-R')^2}.
\end{equation} When the two regions $A$ and $B$ are spacelike separated, $|x_0|>1+R'$, the cross ratio lies between 0 and 1,
\begin{equation}
0<\xi<1.
\end{equation} In some cases, it is more convenient to use an equivalent cross ratio
\begin{equation}
\eta=\frac{\xi}{1-\xi}=\frac{4R'}{x_0^2-(1+R')^2}.
\end{equation} For spacelike-separated regions $A$ and $B$, the range of the cross ratio $\eta$ is
\begin{equation}
0<\eta<\infty.
\end{equation} Since the OPE block $Q_A[\mathcal{O}]$ is invariant under conformal transformations, any type-$(m_1,m_2)$ CCF should be a function of the cross ratio $\xi$ or $\eta$. In fact, the OPE block is an eigenvector of the conformal Casimir,
\begin{equation}
[L^2,Q_A[\mathcal{O}]]=C_{\Delta,J}Q_A[\mathcal{O}]
\end{equation}
where $L^2$ is the Casimir operator of the global conformal group. The eigenvalue $C_{\Delta,J}$ is
\begin{equation}
C_{\Delta,J}=-\Delta(\Delta-d)-J(J+d-2).
\end{equation} Therefore, any type-$(m-1,1)$ CCF should be a conformal block
\begin{equation}
\langle Q_{A}[\mathcal{O}_1]\cdots Q_{A}[\mathcal{O}_{m-1}]Q_{B}[\mathcal{O}_m]\rangle_c=D^{(d)}[\mathcal{O}_1,\cdots,\mathcal{O}_m]G^{(d)}_{\Delta_m,J_m}(\xi).\label{cb}
\end{equation} The subscripts $\Delta_m,J_m$ are the conformal weight and spin of the primary operator $\mathcal{O}_m$. The superscript $(d)$ labels the dimension of spacetime. The conformal block can be constructed explicitly in even dimensions \cite{Dolan:2000ut,Dolan:2003hv}. In this paper, we just need the diagonal limit of the conformal block \cite{Hogervorst:2013kva}. Any type-$(m_1,m_2)$ CCF with $m_1\ge m_2\ge2$ is not a conformal block.
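The algebraic relation $\eta=\xi/(1-\xi)$ between the two cross ratios introduced above can be verified directly (a \texttt{sympy} sketch; \texttt{Rp} stands for $R'$):

```python
import sympy as sp

x0, Rp = sp.symbols('x0 Rp', positive=True)  # Rp stands for R'
xi = 4*Rp / (x0**2 - (1 - Rp)**2)
eta = 4*Rp / (x0**2 - (1 + Rp)**2)

# (1 - R')^2 + 4R' = (1 + R')^2, so xi/(1 - xi) collapses to eta
assert sp.simplify(eta - xi/(1 - xi)) == 0
```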
\section{Area law}
We conjecture that the type-$(m)$ CCF of OPE blocks obeys the following area law
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_m]\rangle_c=\gamma \frac{R^{d-2}}{\epsilon^{d-2}}+\cdots +p_q \log^q\frac{R}{\epsilon}+\cdots.\label{arealaw}
\end{equation} The leading term is proportional to the area of the boundary $\partial A$. We inserted the radius $R=1$ into the formula to balance the dimensions. The small positive constant $\epsilon$ is the UV cutoff, which is roughly the distance from the regulator surface to the boundary $\partial A$. The constant $\gamma$ depends on the choice of the cutoff and the method of regularization, so we will not be interested in its explicit value. The $\cdots$ terms are subleading and cutoff dependent, so we omit their forms. The degree $q$ characterizes the maximal power of the logarithmic terms. The coefficient $p_q$ is invariant under a rescaling of the cutoff, so it encodes detailed universal information about the theory. When all the OPE blocks are equal to the modular Hamiltonian, the degree is $q=1$ in even dimensions according to \eqref{HAm}. However, as we will see, $q$ is not necessarily equal to 1 in general.
To distinguish different type-$(m)$ CCFs in different dimensions, we write the area law \eqref{arealaw} more explicitly as
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_m]\rangle_c=\gamma[\mathcal{O}_1,\cdots,\mathcal{O}_m]\frac{R^{d-2}}{\epsilon^{d-2}}+\cdots+p^{(d)}_{q}[\mathcal{O}_1,\cdots,\mathcal{O}_m]\log^{q}\frac{R}{\epsilon}+\cdots.\label{al2}
\end{equation}
\subsection{Continuation}
The two formulas \eqref{cb} and \eqref{al2} are actually related to each other through an analytic continuation. We use the example of the two-dimensional modular Hamiltonian to illustrate this relation. For any CFT$_2$, the modular Hamiltonian can be decomposed into holomorphic and anti-holomorphic parts; we focus on the holomorphic part
\begin{equation}
H_A=-\int_{-1}^1 dz \frac{1-z^2}{2}T(z+x_0)+c.
\end{equation} The constant $c$ can be fixed by the normalization condition
\begin{equation}
\text{tr}_A\rho_A=\text{tr}_A e^{-H_A}=1.
\end{equation} Its value does not affect any type-$(m_1,m_2)$ CCF with $\sum_{i}m_i\ge 2$. We also use the convention $T(z)=-2\pi T_{zz}$, where $z=t+x$ is the holomorphic coordinate. The radius of the interval $A$ is $1$, and we have shifted the variable $z$ so that the dependence on the center $x_0$ resides in the stress tensor. The modular Hamiltonian of region $B$ can be obtained by setting $x_0=0$ and restoring the radius $R'$. The type-$(m-1,1)$ CCF of the modular Hamiltonian is
\begin{equation}
\langle H_A^{m-1}H_B\rangle_c=D^{(2)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}]G^{(2)}_{2}(\eta).\label{modu}
\end{equation} The two dimensional conformal block for a chiral operator can be labeled by the conformal weight $h$ of the operator
\begin{equation}
G_h^{(2)}(\eta)=(-\eta)^h {}_2F_1(h,h,2h,-\eta).\label{gh2}
\end{equation} We can move the interval $A$ to $B$ such that they coincide. In this limit, any type-$(m-1,1)$ CCF should approach a type-$(m)$ CCF. This is equivalent to setting $\eta\to-1$. We can set $x_0\to0$ and then take the limit $R'\to 1$,
\begin{equation}
x_0\to 0,\quad R'=1-\epsilon,\quad \epsilon\to 0.
\end{equation} The cross ratio then behaves as $\xi\to-\infty$, or equivalently $\eta\to-1$:
\begin{equation}
\xi=-\frac{4(1-\epsilon)}{\epsilon^2}\approx -\frac{4}{\epsilon^2},\quad \eta=-\frac{4(1-\epsilon)}{(2-\epsilon)^2}\approx-1+\frac{\epsilon^2}{4}.
\end{equation} On the right hand side of \eqref{modu}, we find a logarithmically divergent term in this limit
\begin{equation}
G^{(2)}_2(\eta)=12\log\frac{2}{\epsilon}+\cdots=12\log\frac{R}{\epsilon}+\cdots
\end{equation} The left hand side of \eqref{modu} approaches a type-$(m)$ CCF, therefore
\begin{equation}
\langle H_A^m\rangle_c=12 D^{(2)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}]\log\frac{R}{\epsilon}+\cdots.\label{al2dimension}
\end{equation} We read out the cutoff independent coefficient
\begin{equation}
p_1^{(2)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}]=12 D^{(2)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}].\label{moduuvir}
\end{equation} The relation \eqref{moduuvir} is a typical UV/IR relation for the modular Hamiltonian. The left hand side is the universal coefficient when $B$ and $A$ coincide (UV). On the right hand side, the $D$ coefficient characterizes the leading-order behaviour of the CCF when $B$ and $A$ are far away from each other (IR). They provide equivalent information about the CFT, since the constant $12$ is completely fixed by conformal symmetry. The continuation of the conformal block can be generalized to higher dimensions. For example, in four dimensions, the conformal block associated with the stress tensor becomes divergent as $A$ approaches $B$,
\begin{equation}
G^{(4)}_{4,2}\approx \tilde{\gamma}\frac{R^2}{\epsilon^2}+\cdots-120\log\frac{R}{\epsilon}+\cdots.
\end{equation} The leading term is exactly proportional to the area of the boundary, and a logarithmically divergent term also appears among the subleading terms. We can read out the type-$(m)$ CCF of the modular Hamiltonian in four dimensions
\begin{equation}
\langle H_A^m\rangle_c=\gamma \frac{R^2}{\epsilon^2}+\cdots+p_1^{(4)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}]\log\frac{R}{\epsilon}+\cdots\label{al4}
\end{equation} with
\begin{equation}
p_1^{(4)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}]=-120 D^{(4)}[T_{\mu_1\nu_1},\cdots,T_{\mu_m\nu_m}].
\end{equation} Note that we obtain the area law and the logarithmic behaviour of the type-$(m)$ CCF of the modular Hamiltonian without using any knowledge of R\'enyi entanglement entropy. The method of analytic continuation can be applied to general dimensions and OPE blocks. A conformal block $G_{\Delta,J}^{(d)}(\xi)$ obeys the area law in the limit $\xi\to-\infty$ in even dimensions. It has degree $q=1$ only for $\Delta=J+d-2$,
\begin{equation}
G^{(d)}_{\Delta,J}(\xi)=\tilde{\gamma}\frac{R^{d-2}}{\epsilon^{d-2}}+\cdots +E^{(d)}[\Delta,J]\log\frac{R}{\epsilon}+\cdots,\quad \xi\to-\infty.
\end{equation} This means that type-$(m)$ CCFs of type-J OPE blocks may always obey the area law with degree $q=1$; the cutoff-independent coefficient is
\begin{equation}
p_q^{(d)}[\mathcal{O}_1,\cdots,\mathcal{O}_m]=E^{(d)}[\mathcal{O}_m]\times D^{(d)}[\mathcal{O}_1,\cdots,\mathcal{O}_m].\label{qpdi}
\end{equation} We have replaced the quantum numbers in the $E$ function by the corresponding primary operator. For non-conserved operators, the conformal block $G_{\Delta,J}^{(d)}$ also obeys the area law in the limit $\xi\to-\infty$ in even dimensions, though with degree $q=2$
\begin{equation}
G^{(d)}_{\Delta,J}(\xi)=\tilde{\gamma}\frac{R^{d-2}}{\epsilon^{d-2}}+\cdots+E^{(d)}[\Delta,J]\log^2\frac{R}{\epsilon}+\cdots,\quad \xi\to-\infty.
\end{equation} Therefore, type-$(m)$ CCFs of type-O OPE blocks obey the area law with degree $q=2$. We can obtain similar UV/IR relations as \eqref{qpdi}.
In odd dimensions, the story is similar: the degree $q$ is $0$ for type-$(m)$ CCFs of type-J OPE blocks and $1$ for those of type-O OPE blocks.
\subsection{Kinematic information}
The function $E^{(d)}[\mathcal{O}]$ is completely fixed by conformal symmetry. It can be obtained by reading out the coefficient of the logarithmic term with degree $q$. For each fixed pair of quantum numbers $\Delta$ and $J$, there is a unique number $E^{(d)}[\mathcal{O}]$. For any type-J OPE block in two dimensions, the primary operator $\mathcal{O}$ has dimension $\Delta=J=h$. The conformal block \eqref{gh2} has degree $q=1$ in the limit $\eta\to-1$. The function $E^{(2)}[\mathcal{O}]$ is
\begin{equation}
E^{(2)}[\mathcal{O}]=\frac{2 \Gamma (2 h)}{\Gamma (h)^2},\quad \Delta=J=h. \label{twodimensionope}
\end{equation}
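As a numerical cross-check of \eqref{twodimensionope} (our own verification sketch, not part of the original computation), one can evaluate the series of the chiral block \eqref{gh2} near $\eta=-1$, with $\eta=-1+\epsilon^2/4$, and extract the coefficient of $\log(R/\epsilon)$ by differencing two cutoffs; for the stress tensor, $h=2$, this reproduces the constant $12$ of \eqref{al2dimension}.

```python
import math

def hyp2f1(a, b, c, z, max_terms=400000):
    """Taylor series of the Gauss hypergeometric 2F1 (valid for |z| < 1)."""
    total, term = 1.0, 1.0
    for k in range(max_terms):
        term *= (a + k) * (b + k) / ((c + k) * (1.0 + k)) * z
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

def block_2d(h, eta):
    """Chiral block G_h^{(2)}(eta) = (-eta)^h 2F1(h, h, 2h, -eta)."""
    return (-eta) ** h * hyp2f1(h, h, 2 * h, -eta)

def E2(h):
    """E^{(2)}[O] = 2 Gamma(2h) / Gamma(h)^2 for a type-J OPE block."""
    return 2 * math.gamma(2 * h) / math.gamma(h) ** 2

# With eta = -1 + eps^2/4, the difference of the block at two cutoffs divided
# by log(eps1/eps2) should approach E2(h) as the cutoffs shrink.
h = 2
e1, e2 = 2 * math.sqrt(1e-3), 2 * math.sqrt(1e-4)
num = (block_2d(h, -1 + e2**2 / 4) - block_2d(h, -1 + e1**2 / 4)) / math.log(e1 / e2)
```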
For type-O OPE block, the primary operator $\mathcal{O}$ has dimension $\Delta=h+\bar{h}$ and spin $J=h-\bar{h}$. The conformal block has degree $q=2$ in the limit $\eta\to-1$. The function $E^{(2)}[\mathcal{O}]$ is
\begin{eqnarray}
E^{(2)}[\mathcal{O}]=\left\{\begin{array}{cc}\frac{2^{4h}\Gamma(h+\frac{1}{2})^2}{\pi \Gamma(h)^2}&\quad J=0,\ h>0\\
-\frac{4^{2 h-1} \Gamma \left(h-\frac{1}{2}\right) \Gamma \left(h+\frac{1}{2}\right)}{\pi \Gamma (h-1) \Gamma (h)}&\quad J=1,\ h>1\\
\frac{4^{2 h-3} (h-2) (h-1) (2 h-3) (2 h-1) \Gamma \left(h-\frac{3}{2}\right)^2}{\pi \Gamma (h)^2}&\quad J=2,\ h>2\\
\cdots&\end{array}\right.
\end{eqnarray}
In four dimensions, we also find
\begin{equation}
E^{(4)}[\mathcal{O}]=\left\{\begin{array}{cc}12&\quad \Delta=3,\ J=1\\
-120&\quad\Delta=4,\ J=2\\
840&\quad\Delta=5,\ J=3\\
\cdots&\end{array}\right.
\end{equation} for conserved currents and
\begin{eqnarray}
E^{(4)}[\mathcal{O}]=\left\{\begin{array}{cc}
-\frac{2^{2\Delta-1}\Gamma(\frac{\Delta-1}{2})\Gamma(\frac{\Delta+1}{2})}{\pi \Gamma(\frac{\Delta-2}{2})^2}& \quad \Delta>1, \ J=0,\\\vspace{4pt}
\frac{2^{2\Delta-1}\Gamma(\frac{\Delta}{2})\Gamma(\frac{\Delta+2}{2})}{\pi\Gamma(\frac{\Delta-3}{2})\Gamma(\frac{\Delta+1}{2})}&\quad \Delta>3,\ J=1,\\\vspace{4pt}
-\frac{4^{\Delta-1}(\Delta-2)\Gamma(\frac{\Delta-3}{2})\Gamma(\frac{\Delta+3}{2})}{\pi\Gamma(\frac{\Delta-4}{2})\Gamma(\frac{\Delta+2}{2})}&\quad \Delta>4,\ J=2,\vspace{4pt} \\
\cdots
\end{array}\right.
\end{eqnarray} for non-conserved operators. In three dimensions, we find \begin{eqnarray}
E^{(3)}[\mathcal{O}]=\left\{\begin{array}{cc}
-\frac{2^{2\Delta-1}(\Delta-1)\Gamma(\Delta-\frac{1}{2})}{\sqrt{\pi}\Gamma(\Delta-1)}&\quad\Delta>\frac{1}{2}, \ J=0.\\
\vspace{4pt}
\frac{2^{\Delta+1}\Delta\Gamma(\Delta-\frac{1}{2})}{\Gamma(\frac{\Delta-2}{2})\Gamma(\frac{\Delta+1}{2})}&\quad\Delta>2,\ J=1,\\\vspace{4pt}
-\frac{2^{2\Delta-1}(\Delta^2-1)\Gamma(\Delta-\frac{1}{2})}{\sqrt{\pi}(\Delta-2)^2\Delta\Gamma(\Delta-3)}&\quad \Delta>3,\ J=2,\vspace{4pt}\\
\cdots
\end{array}\right.
\end{eqnarray} for non-conserved operators. Note that for conserved currents in odd dimensions, the function $E^{(3)}[\mathcal{O}]$ may depend on the explicit choice of the cutoff. For example, a transformation $\epsilon\to \epsilon(1+a\epsilon)$ may shift its value. This is because the degree is $0$, so there is no logarithmic divergence at all.
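The scheme dependence in the degree-$0$ case can be made concrete with a toy expansion (arbitrary made-up constants, for illustration only): with no logarithm available, the constant term shifts under $\epsilon\to\epsilon(1+a\epsilon)$.

```python
# Toy degree-0 expansion, as for conserved currents in odd d:
# F(eps) = g/eps + p0 + O(eps), with arbitrary made-up constants.
g, p0, a = 2.0, 0.7, 3.0

def F(eps):
    return g / eps + p0

eps = 1e-7
const_before = F(eps) - g / eps                  # exactly p0
const_after = F(eps * (1 + a * eps)) - g / eps   # p0 - g*a + O(eps)
# The would-be universal constant jumps by -g*a under the reparametrization,
# precisely because no log(R/eps) term is present to carry invariant data.
```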
\subsection{UV/IR relation}
The UV/IR relation \eqref{qpdi} relates a type-$(m)$ CCF to a type-$(m-1,1)$ CCF.
This relation may simplify computations in many cases. To see this point, let us compute the following type-$(2)$ CCF in two dimensions
\begin{eqnarray}
\langle Q_A[\mathcal{O}]^2\rangle_c&=&\int_{-1}^1 dz_1 \int_{-1}^1 dz_2 \frac{(1-z_1^2)^{h-1}(1-z_2^2)^{h-1}}{(z_1-z_2)^{2h}}\nonumber\\&=&\frac{(-1)^{-h}\sqrt{\pi}\Gamma(h)}{\Gamma(h+\frac{1}{2})}\int_{-1}^1 dz_1 \frac{1}{1-z_1^2}\nonumber\\&=&\frac{(-1)^{-h}\sqrt{\pi}\Gamma(h)}{\Gamma(h+\frac{1}{2})}\log\frac{2}{\epsilon}.\label{QAOOdirect}
\end{eqnarray}
This is a double integral with poles at $z_1=z_2$. We regularize the integral by ignoring these poles at the second step. At the last step, we insert a UV cutoff to regularize the integral. However, using the UV/IR relation, one just needs to fix the coefficient $D$, which is related to the large-distance behaviour of the type-$(1,1)$ CCF,
\begin{equation}
\langle Q_A[\mathcal{O}]Q_B[\mathcal{O}]\rangle_c=\int_{-1}^1 dz_1\int_{-1}^1 dz_2\frac{(1-z_1^2)^{h-1}(1-z_2^2)^{h-1}}{(z_1-z_2+x_0)^{2h}}.
\end{equation} In the large distance limit, $x_0\to \infty$, the integral becomes simpler
\begin{eqnarray}
\langle Q_A[\mathcal{O}]Q_B[\mathcal{O}]\rangle_c&\approx& \int_{-1}^1 dz_1 \int_{-1}^1 dz_2 \frac{(1-z_1^2)^{h-1}(1-z_2^2)^{h-1}}{x_0^{2h}}\nonumber\\&=&4^{-h}(\frac{\sqrt{\pi } \Gamma (h)}{\Gamma \left(h+\frac{1}{2}\right)})^2\eta^{h}.
\end{eqnarray} We have used the relation $\eta\approx\frac{4}{x_0^2}$ in the large distance limit. Then we can read out
\begin{equation}
D^{(2)}[\mathcal{O},\mathcal{O}]=(-1)^{-h}4^{-h}(\frac{\sqrt{\pi } \Gamma (h)}{\Gamma \left(h+\frac{1}{2}\right)})^2.
\end{equation} Combining the UV/IR relation with \eqref{twodimensionope}, we find
\begin{equation}
p^{(2)}_1[\mathcal{O},\mathcal{O}]=E^{(2)}[\mathcal{O}]\times D^{(2)}[\mathcal{O},\mathcal{O}]=\frac{(-1)^{-h}\sqrt{\pi}\Gamma(h)}{\Gamma(h+\frac{1}{2})}.
\end{equation} The result is exactly the same as \eqref{QAOOdirect}. Using the UV/IR relation, we obtain the type-$(3)$ CCF for type-J OPE blocks in two dimensions; the cutoff-independent coefficient is
\begin{equation}
p^{(2)}_1[\mathcal{O}_1,\mathcal{O}_2,\mathcal{O}_3]=\frac{C_{123}\pi^{3/2}(-1)^{\frac{h_1+h_2+h_3}{2}}\Gamma(h_1)\Gamma(h_2)\Gamma(h_3)\kappa}{\Gamma(\frac{1+h_1+h_2-h_3}{2})\Gamma(\frac{1+h_1+h_3-h_2}{2})\Gamma(\frac{1+h_2+h_3-h_1}{2})\Gamma(\frac{h_1+h_2+h_3}{2})},\label{p1ooo}
\end{equation} where the constant $\kappa=\frac{1}{2}[1+(-1)^{h_1+h_2+h_3}]$. We notice that the result is totally symmetric under the exchange of any two conformal weights.
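The agreement of \eqref{QAOOdirect} with $E^{(2)}\times D^{(2)}$ is an instance of the Legendre duplication formula. Dropping the common phase $(-1)^{-h}$, it can be confirmed numerically for generic $h$, together with the elementary integral behind $D^{(2)}$ (a verification sketch, not part of the derivation):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def E2(h):       # E^{(2)}[O] = 2 Gamma(2h)/Gamma(h)^2
    return 2 * math.gamma(2 * h) / math.gamma(h) ** 2

def D2_mag(h):   # |D^{(2)}[O,O]| = 4^{-h} (sqrt(pi) Gamma(h)/Gamma(h+1/2))^2
    return 4.0 ** (-h) * (SQRT_PI * math.gamma(h) / math.gamma(h + 0.5)) ** 2

def p_direct(h):  # |p^{(2)}_1[O,O]| read off from the regularized double integral
    return SQRT_PI * math.gamma(h) / math.gamma(h + 0.5)

# p = E x D holds identically in h, by the Legendre duplication formula:
max_err = max(abs(E2(x) * D2_mag(x) - p_direct(x)) for x in (1.0, 1.5, 2.0, 3.0, 4.5))

# Midpoint-rule check of the one-dimensional integral int_{-1}^{1}(1-z^2)^{h-1} dz
# that underlies D^{(2)}; for h = 2 it equals sqrt(pi) Gamma(2)/Gamma(5/2) = 4/3.
h, n = 2.0, 100000
dz = 2.0 / n
integral = sum((1 - (-1 + (i + 0.5) * dz) ** 2) ** (h - 1) for i in range(n)) * dz
```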
Since there are different ways to uplift a type-$(m)$ CCF to a type-$(m-1,1)$ CCF, the cutoff-independent coefficients should be identical, as they characterize the same CCF after taking the limit $A\to B$. For $m=3$, this is a cyclic identity
\begin{equation}
p_q^{(d)}[\mathcal{O}_1,\mathcal{O}_2,\mathcal{O}_3]=p_q^{(d)}[\mathcal{O}_2,\mathcal{O}_3,\mathcal{O}_1]=p_q^{(d)}[\mathcal{O}_3,\mathcal{O}_1,\mathcal{O}_2].
\end{equation}
The UV/IR relation and the cyclic identity have been checked for type-$(m)$ CCFs ($m=2,3$) in four dimensions. We list the cutoff-independent coefficients below \cite{Long:2020zeq}.
\begin{itemize}
\item Type-(2). The normalization constants are set to 1.
\begin{itemize}
\item Spin 1-1 conserved currents.
\begin{eqnarray}
p_1^{(4)}[\mathcal{J}_\mu,\mathcal{J}_\nu]=-\frac{\pi^2}{3}.
\end{eqnarray}
\item Spin 2-2 conserved currents.
\begin{eqnarray}
p_1^{(4)}[T_{\mu\nu},T_{\rho\sigma}]=-\frac{\pi^2}{40}.
\end{eqnarray}
\item Spin 0-0 non-conserved operators.
\begin{eqnarray}
p_2^{(4)}[\mathcal{O},\mathcal{O}]=-\frac{4\pi^2(\Delta-1)\Gamma(\Delta-2)^2\Gamma(\frac{\Delta}{2})^4}{\Gamma(\Delta)^2\Gamma(\Delta-1)^2}.
\end{eqnarray}
\item Spin 1-1 non-conserved operators.
\begin{eqnarray}
p_2^{(4)}[\mathcal{O}_\mu,\mathcal{O}_\nu]=-\frac{4^{1-\Delta}\pi^3\Delta\Gamma(\frac{\Delta-3}{2})\Gamma(\frac{\Delta+1}{2})}{\Gamma(\frac{\Delta}{2}+1)^2},\quad \Delta>3.
\end{eqnarray}
\item Spin 2-2 non-conserved operators.
\begin{eqnarray}
p_2^{(4)}[\mathcal{O}_{\mu\nu},\mathcal{O}_{\rho\sigma}]=-\frac{3\pi^2(\Delta-2)\Delta^2\Gamma(\frac{\Delta}{2}-2)^2\Gamma(\frac{\Delta}{2}-1)^2}{64\Gamma(\Delta-4)\Gamma(\Delta+2)},\quad \Delta>4.
\end{eqnarray}
\end{itemize}
\item Type-$(3)$.
\begin{itemize}
\item Spin 1-1-2 conserved currents. The three-point function of the zero components is fixed by conformal symmetry
\begin{equation}
\langle T_{00}(x_1)\mathcal{J}_0(x_2)\mathcal{J}_0(x_3)\rangle_c=\frac{C_{T\mathcal{J}\mathcal{J}}}{x_{12}^4x_{13}^2x_{23}^2}.
\end{equation} Then the coefficient
\begin{eqnarray}
p_1^{(4)}[\mathcal{J}_{\mu},\mathcal{J}_\nu,T_{\rho\sigma}]=-\frac{\pi^3}{2}C_{T\mathcal{J}\mathcal{J}}.
\end{eqnarray}
\item Spin 2-2-2 conserved currents. The three-point function of the zero components is fixed by conformal symmetry
\begin{equation}
\langle T_{00}(x_1)T_{00}(x_2)T_{00}(x_3)\rangle_c=\frac{C_{TTT}}{x_{12}^4x_{13}^4x_{23}^4}.
\end{equation} Then the coefficient
\begin{equation}
p_1^{(4)}[T_{\mu\nu},T_{\rho\sigma},T_{\alpha\beta}]=\frac{\pi^3}{12}C_{TTT}.
\end{equation}
\item Spin 0-0-0 non-conserved operators.
\begin{eqnarray}
\hspace{-10pt}p_2^{(4)}[\mathcal{O}_1,\mathcal{O}_2,\mathcal{O}_3]&=&-2^{4-\Delta_1-\Delta_2-\Delta_3}\pi^3C_{123}\int_{\mathbb{D}^2}d\zeta d\bar{\zeta}(\zeta+\bar{\zeta})^2\int_{\mathbb{D}^2}d\zeta'd\bar{\zeta}'(\zeta'+\bar{\zeta}')^2\nonumber\\&&\times (1-\zeta^2)^{\frac{\Delta_1-4}{2}}(1-\bar{\zeta}^2)^{\frac{\Delta_1-4}{2}}(1-\zeta'^2)^{\frac{\Delta_2-4}{2}}(1-\bar{\zeta}'^2)^{\frac{\Delta_2-4}{2}}\int_0^\pi d\theta\frac{\sin\theta}{(a+b\cos\theta)^{\frac{\Delta_{12,3}}{2}}},\nonumber\\\label{cyc}
\end{eqnarray}
\end{itemize}
\end{itemize}
Though the expression \eqref{cyc} is superficially not symmetric under the exchange of any two conformal weights, we checked explicitly that it satisfies the cyclic identity for integer conformal weights.
For $m=4$, the UV/IR relation and the cyclic identity are much harder to check. We considered the type-$(4)$ CCF for the massless free scalar theory \cite{Long:2019fay,Long:2019pcv}. In this theory, one can construct an infinite tower of conserved currents with even spin \cite{Bakas:1990ry}. The four point functions can be calculated explicitly. Therefore we can find type-$(3,1)$ and type-$(4)$ CCFs and read out the corresponding coefficients. For example, for spin-2-2-2-4 conserved currents \cite{Long:2019pcv},
\begin{equation}
D[2,2,2,4]=\frac{3}{70}D[2,2,4,2].
\end{equation} Both of them lead to the same cutoff-independent coefficient
\begin{equation}
p_1^{(2)}[2,2,2,4]=\frac{2\Gamma(8)}{\Gamma(4)^2}D[2,2,2,4]=\frac{2\Gamma(4)}{\Gamma(2)^2}D[2,2,4,2]=p^{(2)}_1[2,2,4,2].\label{p122224}
\end{equation} The cyclic identity is obeyed.
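The numerical content of \eqref{p122224} is elementary to confirm (normalizations dropped, as in the text): the two uplifts differ by which current is moved to region $B$, and the $E^{(2)}$ factor compensates the ratio of the $D$ coefficients.

```python
import math

def E2(h):
    """Two-dimensional E function, E^{(2)}[h] = 2 Gamma(2h)/Gamma(h)^2."""
    return 2 * math.gamma(2 * h) / math.gamma(h) ** 2

# D[2,2,2,4] = (3/70) D[2,2,4,2], so equality of the two cutoff-independent
# coefficients requires E2(4) * 3/70 == E2(2); indeed 280 * 3/70 = 12.
via_spin4 = E2(4) * 3 / 70   # uplift with the spin-4 current in region B
via_spin2 = E2(2)            # uplift with a spin-2 current in region B
```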
\subsection{Discussion}
The UV/IR relation should be slightly modified when the CCF contains both type-J and type-O OPE blocks. One simple example is the following type-$(3)$ CCF
\begin{equation}
\langle Q_A[\mathcal{J}]Q_A[\mathcal{O}]Q_A[\tilde{\mathcal{O}}]\rangle_c
\end{equation} where $Q_A[\mathcal{J}]$ is a type-J OPE block while $Q_A[\mathcal{O}]$ and $Q_A[\tilde{\mathcal{O}}]$ are type-O OPE blocks. This CCF is related to the following two type-$(2,1)$ CCFs
\begin{eqnarray}
\langle Q_A[\tilde{\mathcal{O}}] Q_A[\mathcal{J}]Q_B[\mathcal{O}]\rangle_c&=&D^{(d)}[\tilde{\mathcal{O}},\mathcal{J},\mathcal{O}]G^{(d)}_{\Delta,J}(\xi),\label{dis1}\\
\langle Q_A[\mathcal{O}]Q_A[\tilde{\mathcal{O}}] Q_B[\mathcal{J}]\rangle_c&=&D^{(d)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]G^{(d)}_{\Delta',J'}(\xi).\label{dis2}
\end{eqnarray} We choose $d=4$. Taking the limit $A\to B$ in \eqref{dis1}, we find a type-$(3)$ CCF with degree $q=2$; the UV/IR relation reads
\begin{equation}
p_2^{(4)}[\tilde{\mathcal{O}},\mathcal{J},\mathcal{O}]=E^{(4)}[\mathcal{O}]\times D^{(4)}[\tilde{\mathcal{O}},\mathcal{J},\mathcal{O}]\label{p241}
\end{equation} We can also take the limit $A\to B$ in \eqref{dis2}; then we find a type-$(3)$ CCF with degree $q=1$, and the UV/IR relation reads
\begin{equation}
p_1^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]=E^{(4)}[\mathcal{J}]\times D^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}].\label{p242}
\end{equation} The equations \eqref{p241} and \eqref{p242} are superficially not identical, since the subscripts $q$ are not equal to each other. However, an explicit calculation for spin 2-0-0 and spin 2-2-0 in four dimensions \cite{Long:2020zeq} shows that the coefficient $D^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]$ is actually logarithmically divergent,
\begin{equation}
D^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]=D^{(4)}_{\text{log}}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]\log\frac{R}{\epsilon}+\cdots.
\end{equation} The terms in $\cdots$ are finite and depend on the cutoff scale. Due to this logarithmic divergence of the coefficient $D^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]$, the degree of the type-$(3)$ CCF obtained from \eqref{dis2} increases by 1, and the modified UV/IR relation becomes
\begin{equation}
p_2^{(4)}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}]=E^{(4)}[\mathcal{J}]\times D^{(4)}_{\text{log}}[\mathcal{O},\tilde{\mathcal{O}},\mathcal{J}].\label{p343}
\end{equation} We checked explicitly that the two constants \eqref{p241} and \eqref{p343} are equal to each other. The cyclic identity is still satisfied after accounting for the logarithmic divergence of the $D$ function.
\section{Generalizations}
The area law and logarithmic behaviour in the subleading terms can be extended in different directions. In this section, we mention several extensions.
\begin{itemize}
\item UV/IR relation. In general, one can uplift any type-$(m)$ CCF to a type-$(p,m-p)$ CCF
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_m]\rangle_c \stackrel{\text{uplift}}{\longrightarrow} \langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_{p}]Q_B[\mathcal{O}_{p+1}]\cdots Q_B[\mathcal{O}_m]\rangle_c,\quad 1\le p\le m-1.
\end{equation} When $p$ is neither $1$ nor $m-1$, the type-$(p,m-p)$ CCF is not a conformal block. It is still a function of the cross ratio $\xi$, and therefore it should reproduce the type-$(m)$ CCF after taking the limit $A\to B$,
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_m]\rangle_c=\lim_{\xi\to-\infty} \langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_{p}]Q_B[\mathcal{O}_{p+1}]\cdots Q_B[\mathcal{O}_m]\rangle_c.\label{uvir}
\end{equation} Obviously, this also defines a UV/IR relation between $p_q^{(d)}$ and several coefficients in the type-$(p,m-p)$ CCF. Since the right hand side is not proportional to any conformal block, it is not easy to write out an explicit formula. Nevertheless, one may still check the relation \eqref{uvir} case by case. One example is the type-$(2,2)$ CCF of the modular Hamiltonian in CFT$_2$. By making use of the universal features of the CCF of the stress tensor, one can fix the generator of type-$(m_1,m_2)$ CCFs \cite{Long:2019pcv}
\begin{equation}
T_{A\cup B}(\mu_1,\mu_2)=-\frac{c}{2}\text{tr}\log[\bm{1}-\left(\begin{array}{cc}\mathcal{A}&\mathcal{C}\\\mathcal{D}&\mathcal{B}\end{array}\right)],\label{ta1a2mu1mu2}
\end{equation} where the matrices $\mathcal{A},\mathcal{B},\mathcal{C}$ and $\mathcal{D}$ are
\begin{eqnarray}
\hspace{-50pt}\mathcal{A}_{xx'}\hspace{-10pt}&=&\hspace{-10pt}\frac{\eta^2}{4}\hspace{-5pt}\int_0^\infty \hspace{-10pt}dy \frac{\sqrt{xx'}y\sinh\pi \mu_1x\ \sinh\pi \mu_2 y}{\sinh \pi x'\ \sinh \pi y\ \sinh\pi(1+\mu_1)x\ \sinh\pi(1+\mu_2)y}(\frac{x_{13}}{x_{23}})^{i(x-x')}\mathcal{F}(x,x',y),\\
\hspace{-50pt}\mathcal{B}_{xx'}\hspace{-10pt}&=&\hspace{-10pt}\frac{\eta^2}{4}\hspace{-5pt}\int_0^\infty \hspace{-10pt}dy \frac{\sqrt{xx'}y\sinh\pi \mu_1 x\ \sinh\pi \mu_2 y}{\sinh \pi x'\ \sinh \pi y\ \sinh\pi(1+\mu_1)x\ \sinh\pi(1+\mu_2)y}(\frac{x_{13}}{x_{23}})^{-i(x-x')}\mathcal{F}(x',x,y),\\
\hspace{-50pt}\mathcal{C}_{xx'}\hspace{-10pt}&=&\hspace{-10pt}\frac{\eta^2}{4}\hspace{-5pt}\int_0^\infty\hspace{-10pt} dy \frac{\sqrt{xx'}y\sinh\pi \mu_1 x\ \sinh\pi \mu_2 y}{\sinh \pi x'\ \sinh \pi y\ \sinh\pi(1+\mu_1)x\ \sinh\pi(1+\mu_2)y}(\frac{x_{13}}{x_{23}})^{i(x+x')}\mathcal{F}(x,-x',y),\\
\hspace{-50pt}\mathcal{D}_{xx'}\hspace{-10pt}&=&\hspace{-10pt}\frac{\eta^2}{4}\hspace{-5pt}\int_0^\infty\hspace{-10pt} dy \frac{\sqrt{xx'}y\sinh\pi \mu_1x\ \sinh\pi \mu_2 y}{\sinh \pi x'\ \sinh \pi y\ \sinh\pi(1+\mu_1)x\ \sinh\pi(1+\mu_2)y}(\frac{x_{13}}{x_{23}})^{-i(x+x')}\mathcal{F}(-x,x',y).
\end{eqnarray} with
\begin{eqnarray}
\mathcal{F}(x,x',y)&=&{}_2F_1(1+ix,1-iy,2,-\eta)\ {}_2F_1(1-ix',1+iy,2,-\eta)\nonumber\\&&+{}_2F_1(1+ix,1+iy,2,-\eta)\ {}_2F_1(1-ix',1-iy,2,-\eta).
\end{eqnarray}
$\mathcal{F}$ and its complex conjugate obey
\begin{equation}
\mathcal{F}^*(x,x',y)=\mathcal{F}(x',x,y),\quad \mathcal{F}^*(-x,-x',y)=\mathcal{F}(x,x',y),
\end{equation}
so
\begin{equation}
\mathcal{A}=\mathcal{B}^*,\quad \mathcal{C}=\mathcal{D}^*.
\end{equation} We read out the first few CCFs
\begin{eqnarray}
\langle H_{A}^m\rangle_c&=&\frac{cm!}{12} \log\frac{2}{\epsilon},\nonumber\\ \langle H_{A}^{m-1}H_{B}\rangle_c&=&\frac{cm!}{144} \ G_2^{(2)}(\eta).\nonumber\\
\langle H_A^2H_B^2\rangle_c&=&c\{\frac{1+\eta}{\eta^2}[4\text{Li}_3(1+\eta)-2\log(1+\eta)\text{Li}_2(1+\eta)+\frac{2\log(1+\eta)}{3}\text{Li}_2(-\eta)\nonumber\\&&+\frac{1+\eta}{3}\log^2(1+\eta)-\frac{\pi^2}{3}\log(1+\eta)-4\zeta(3)]+\frac{2+\eta}{3\eta}[2\text{Li}_2(-\eta)+3\log(1+\eta)]-\frac{4}{3}\},\nonumber\\\label{tab22exa}
\end{eqnarray}
where the polylogarithm $\text{Li}_n(z)$ is
\begin{equation}
\text{Li}_n(z)=\sum_{k=1}^\infty\frac{z^k}{k^n}.
\end{equation}
The relation \eqref{uvir} can be checked for $p=2,m=4$. The right hand side is
\begin{equation}
\lim_{\eta\to-1}\langle H_A^2H_B^2\rangle_c=2c\log\frac{2}{\epsilon}+\cdots.
\end{equation} The cutoff-independent coefficient $2c$ matches the one in $\langle H_A^4\rangle_c$.
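The limit just quoted can be reproduced numerically from \eqref{tab22exa} with straightforward series implementations of the polylogarithms (a verification sketch, not part of the original computation; tolerances are loose because of the $O(\epsilon^2\log\epsilon)$ corrections).

```python
import math

ZETA3 = sum(1.0 / k**3 for k in range(1, 200001))   # zeta(3), tail < 2e-11

def li(n, z, max_terms=300000):
    """Polylogarithm Li_n(z) by its defining series (used here for 0 < z < 1)."""
    total, zk = 0.0, 1.0
    for k in range(1, max_terms):
        zk *= z
        term = zk / k**n
        total += term
        if abs(term) < 1e-16:
            break
    return total

def li2(z):
    """Li_2 via Euler's reflection formula when the argument is close to 1."""
    if z > 0.5:
        return math.pi**2 / 6 - math.log(z) * math.log(1 - z) - li(2, 1 - z)
    return li(2, z)

def ccf22(eta):
    """The closed-form <H_A^2 H_B^2>_c / c quoted in \\eqref{tab22exa}."""
    u, L = 1 + eta, math.log(1 + eta)
    b1 = (4 * li(3, u) - 2 * L * li2(u) + (2 * L / 3) * li2(-eta)
          + (u / 3) * L**2 - (math.pi**2 / 3) * L - 4 * ZETA3)
    b2 = 2 * li2(-eta) + 3 * L
    return (u / eta**2) * b1 + ((2 + eta) / (3 * eta)) * b2 - 4.0 / 3

# With eta = -1 + delta and delta = eps^2/4, the expression should approach
# -log(delta) + const = 2 log(2/eps) + const; differencing isolates the slope.
d1, d2 = 1e-4, 1e-6
coef = (ccf22(-1 + d2) - ccf22(-1 + d1)) / math.log(d1 / d2)   # -> 1
const = ccf22(-1 + d2) + math.log(d2)                          # -> -pi^2/9 - 4/3
```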
\item New power law. In the previous discussion, we focused on the case where $B$ and $A$ coincide. However, there are other situations in which the CCFs are divergent. One can consider the limit in which $A$ just touches the edge of $B$,
\begin{equation}
R'=1,\quad x_0=2+\epsilon,\quad \epsilon\to 0.
\end{equation} The cross ratio $\xi$ does not approach $-\infty$ but $1$
\begin{equation}
\xi=\frac{4}{(2+\epsilon)^2}=1-\epsilon+\cdots.
\end{equation} We can define a new CCF, which is also divergent, from the type-$(m-1,1)$ CCF
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_{m-1}]\odot Q_B[\mathcal{O}_m]\rangle_c=\lim_{\xi\to1} \langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_{m-1}]Q_B[\mathcal{O}_m]\rangle_c
\end{equation}
The continuation of the conformal block tells us that the new CCF obeys a new power law
\begin{equation}
\langle Q_A[\mathcal{O}_1]\cdots Q_A[\mathcal{O}_{m-1}]\odot Q_B[\mathcal{O}_m]\rangle_c=\bar{\gamma} (\frac{R}{\epsilon})^{\frac{d-2}{2}}+\cdots+\bar{p}_q^{(d)}\log^q\frac{R}{\epsilon}+\cdots.\label{newpower}
\end{equation}
The leading term is proportional to
\begin{equation}
\mathcal{L}=R^{\frac{d-2}{2}}=\sqrt{\mathcal{A}}
\end{equation} which is the characteristic length of the region $A$ in four dimensions. In two dimensions, the leading term is a logarithmic term with power $q$. In this case, there is a new UV/IR relation between $\bar{p}_q$ and the $D$ coefficient, which we write schematically as
\begin{equation}
\bar{p}_q=\bar{E}\times D.
\end{equation} The function $\bar{E}^{(d)}[\mathcal{O}]$ is proportional to $E^{(d)}[\mathcal{O}]$. The proportionality constant is shown below.
\begin{itemize}
\item $d$ is even.
\begin{itemize}
\item For conserved current $\mathcal{O}$ with conformal weight $\Delta=J+d-2$,
\begin{equation}
\bar{E}^{(d)}[\mathcal{O}]=\frac{(-1)^J}{2}E^{(d)}[\mathcal{O}].
\end{equation}
\item For non-conserved current $\mathcal{O}$ with conformal weight $\Delta$ and spin $J$,
\begin{equation}
\bar{E}^{(d)}[\mathcal{O}]=\frac{(-1)^J}{4}E^{(d)}[\mathcal{O}].
\end{equation}
\end{itemize}
We checked the relation for $d=2,4$ and spin $J\le 2$.
\item $d$ is odd. \begin{itemize}\item For non-conserved current $\mathcal{O}$ with conformal weight $\Delta$ and spin $J$,
\begin{equation}
\bar{E}^{(d)}[\mathcal{O}]=\frac{(-1)^J}{2}E^{(d)}[\mathcal{O}].
\end{equation}
\item For conserved current $\mathcal{O}$, there is no logarithmic divergent term in the CCF.
\end{itemize}
We checked the relation for $d=3$ and spin $J\le 2$.
\end{itemize}
Since the $D$ function is the same, we find a relation between the two cutoff-independent coefficients $p$ and $\bar{p}$,
\begin{equation}
\frac{p}{E}=\frac{\bar{p}}{\bar{E}}.
\end{equation}
\end{itemize}
\section{Summary and outlook}
In this report, we have introduced the area law \eqref{arealaw} for type-$(m)$ CCFs of OPE blocks. It is a generalization of the area law of entanglement entropy. We list several open problems for future work.
\begin{itemize}
\item Higher $m\ge 4$. In most of this work, we restricted to the region $m\le 3$. This is because the structure of the $m$-point correlation function of primary operators in a CFT is fixed by conformal symmetry only up to $m=3$. For $m\ge 4$, it is harder to extract the cutoff-independent coefficients.
\item UV/IR relation. The UV/IR relation
\begin{equation}
p=E\times D
\end{equation} has been checked for several examples. A rigorous proof is still lacking.
\item Cyclic identity. The cyclic identity of $p$ reflects the fact that $p$ is independent of the way the type-$(m)$ CCF is regularized. However, we feel that it is impossible to check this identity by a direct computation.
\item New power law. We generalized the type-$(m_1,m_2)$ CCF to the case where $A$ and $B$ just touch each other. The corresponding CCF is divergent with a new power law \eqref{newpower}. The corresponding new UV/IR relation
\begin{equation}
\bar{p}=\bar{E}\times D
\end{equation} also needs to be better understood.
\item Deformed reduced density matrix. This exponential operator is similar to the ``Wilson loop'' in gauge theories \cite{Maldacena:1998im,Rey:1998ik}, despite the fact that the OPE block is in general not bounded from below. When the OPE block has a lower bound, the logarithm of the vacuum expectation value of the deformed reduced density matrix
\begin{equation}
\log\langle e^{-\mu Q_A}\rangle
\end{equation} should also obey an area law with logarithmic divergence. There may be a gravitational dual for this quantity, as in \cite{Jafferis:2014lza,Jafferis:2015del}. The similarity between the area law in this program and that of black hole entropy suggests that the classical part contributes to the area term while quantum effects lead to logarithmic corrections.
\item Multiple integrals. According to the method of continuation of the conformal block, the area law of type-$(m)$ CCFs is protected by conformal invariance. However, the method of continuation itself cannot guarantee that it always leads to the correct result. One has to develop other methods to deal with the multiple integrals. In two dimensions, one should generalize the Selberg integrals \cite{Selberg:1944,For} to include more parameters \cite{Long:2020njs}.
\end{itemize}
\section*{Acknowledgements}
This work was supported by NSFC Grant No. 12005069.
|
{
"timestamp": "2020-12-29T02:26:33",
"yymm": "2012",
"arxiv_id": "2012.14141",
"language": "en",
"url": "https://arxiv.org/abs/2012.14141"
}
|
\section{Introduction}
\label{sec:intro}
For a finitely generated group $\Gamma$ and a suitable Lie group $G$, a primary object of study in higher Teichm\"uller theory \cite{wienhard2018invitation} is the \underline{$G$-character variety}
\begin{equation*}
\mathscr{R}_{G, \Gamma} = \left\{ \rho : \Gamma \longrightarrow G \right\} /\!\!/ \hspace{2.5pt} G
\end{equation*}
consisting of group homomorphisms from the group $\Gamma$ to the Lie group $G$, considered up to conjugation. Here, the double bar indicates that the quotient is taken in the algebro-geometric sense of geometric invariant theory \cite{Mumford94}.
We are interested in studying the character variety $\mathscr{R}_{\mathrm{SL}_3, S} := \mathscr{R}_{\mathrm{SL}_3, \pi_1(S)}$ in the case where the group $\Gamma = \pi_1(S)$ is the fundamental group of a finite-type punctured surface $S$ with negative Euler characteristic, and where the Lie group $G=\mathrm{SL}_3$ is the special linear group.
Sikora \cite{SikoraTrans01} associated to any $\mathrm{SL}_3$-web $W$ in the surface $S$ (Figure \ref{fig:coordinates-example-alt}) a trace regular function $\mathrm{Tr}_W \in \mathscr{O}(\mathscr{R}_{\mathrm{SL}_3, S})$ on the $\mathrm{SL}_3$-character variety. A theorem of Sikora-Westbury \cite{SikoraAlgGeomTop07} implies that the preferred subset $\mathscr{W}_{3,S}$ of reduced $\mathrm{SL}_3$-webs indexes, by taking trace functions, a linear basis for the algebra $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_3, S})$ of regular functions on the $\mathrm{SL}_3$-character~variety.
In a companion paper \cite{DouglasArxiv20}, we constructed explicit nonnegative integer coordinates for this $\mathrm{SL}_3$-web basis $\mathscr{W}_{3, S}$. In particular, we identified $\mathscr{W}_{3, S}$ with the set of solutions in $\mathbb{Z}_{\geq 0}^N$ of finitely many Knutson-Tao inequalities \cite{KnutsonJAmerMathsoc99} and modulo-3 congruence conditions. These coordinates depend on a choice of an ideal triangulation $\mathscr{T}$ of the punctured surface~$S$.
In the present article, we prove that these web coordinates satisfy a surprising naturality property with respect to this choice of ideal triangulation $\mathscr{T}$. Specifically, if another ideal triangulation $\mathscr{T}^\prime$ is chosen, then the induced coordinate change map takes the form of a tropicalized $\mathcal{A}$-coordinate cluster transformation \cite{FominJAmerMathSoc02, FockIHES06}.
\begin{figure}[htb]
\centering
\includegraphics[width=.585\textwidth]{coordinates-example-newalt}
\caption{Positive tropical integer $\mathcal{A}$-coordinates for a reduced $\mathrm{SL}_3$-web on the once punctured torus, with respect to an ideal triangulation $\mathscr{T}$.}
\label{fig:coordinates-example-alt}
\end{figure}
\subsection{Global aspects}
$ $
More precisely, let $\widehat{S}$ be a \underline{marked surface}, namely a compact oriented surface together with a finite subset $M \subset \partial \widehat{S}$ of preferred points, called marked points, lying on some of the boundary components of $\widehat{S}$. By a puncture we mean a boundary component of $\widehat{S}$ containing no marked points, which is thought of as shrunk down to a point. We say the surface $\widehat{S} = S$ is non-marked if $M = \emptyset$. We always assume that $\widehat{S}$ admits an \underline{ideal triangulation} $\mathscr{T}$, namely a triangulation whose vertex set is equal to the set of punctures and marked points. See~\S \ref{ssec:markedsurfacesidealtriangulations}.
\subsubsection{Fock-Goncharov duality}
$ $
Fock-Goncharov \cite{FockIHES06} introduced a pair of mutually dual \underline{moduli spaces} $\mathcal{X}_{\operatorname{PGL}_n,\widehat{S}}$ and $ \mathcal{A}_{\operatorname{SL}_n,\widehat{S}}$ (as well as for more general Lie groups). In the case $\widehat{S} = S$ of non-marked surfaces, the spaces $\mathcal{X}_{\operatorname{PGL}_n,S}$ and $\mathcal{A}_{\operatorname{SL}_n,S}$ are variations of the $\mathrm{PGL}_n$- and $\mathrm{SL}_n$-character varieties; for $n=2$, they generalize the enhanced Teichm\"uller space \cite{Fock07} and the decorated Teichm\"uller space \cite{penner1987decorated}, respectively. \underline{Fock-Goncharov duality} is a canonical mapping
\begin{equation*}
\mathbb{I} : \mathcal{A}_{\mathrm{SL}_n, S}(\mathbb{Z}^t)
\longrightarrow \mathscr{O}(\mathcal{X}_{\mathrm{PGL}_n, S}),
\end{equation*}
from the discrete set $\mathcal{A}_{\mathrm{SL}_n, S}(\mathbb{Z}^t)$ of \underline{tropical integer points} of the moduli space $\mathcal{A}_{\operatorname{SL}_n,S}$ to the algebra $\mathscr{O}(\mathcal{X}_{\mathrm{PGL}_n, S})$ of regular functions on the moduli space $\mathcal{X}_{\mathrm{PGL}_n, S}$, satisfying desirable properties; for instance, the image of $\mathbb{I}$ should form a linear basis for the algebra of functions $\mathscr{O}(\mathcal{X}_{\mathrm{PGL}_n, S})$. In the case $n=2$, Fock-Goncharov gave a concrete topological construction of the duality by identifying the tropical integer points with laminations on the surface.
There are various ways to formulate Fock-Goncharov duality. A closely related version is
\begin{equation*}
\mathbb{I} : \mathcal{A}_{\mathrm{PGL}_n, S}(\mathbb{Z}^t)
\longrightarrow \mathscr{O}(\mathcal{X}_{\mathrm{SL}_n, S})
\end{equation*}
(compare \cite[Theorem 12.3 and the following Remark]{FockIHES06} for $n=2$). There are also formulations of duality in the setting of marked surfaces $\widehat{S}$, where the moduli spaces $\mathcal{X}_{\mathrm{PGL}_n, \widehat{S}}$ and $\mathcal{X}_{\mathrm{SL}_n, \widehat{S}}$ are replaced \cite{GoncharovInvent15, GoncharovArxiv19} by slightly more general constructions $\mathcal{P}_{\mathrm{PGL}_n, \widehat{S}}$ and~$\mathcal{P}_{\mathrm{SL}_n, \widehat{S}}$.
Investigating Fock-Goncharov duality has led to many exciting developments. By employing powerful conceptual methods (scattering diagrams, broken lines, Donaldson-Thomas transformations),
works such as \cite{GrossJAmerMathSoc18, GoncharovArxiv19} have established general formulations of duality. On the other hand, explicit higher rank constructions, in the spirit of Fock-Goncharov's topological approach in the case $n=2$, are not as well understood.
Following \cite{GoncharovInvent15} (see also \cite[Proposition 12.2]{FockIHES06}), we focus on the \underline{positive} points $ \mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}^+(\mathbb{Z}^t) \subset \mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}(\mathbb{Z}^t)$, defined with respect to the tropicalized \underline{Goncharov-Shen potential} $P^t : \mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}(\mathbb{Z}^t) \to \mathbb{Z}$ by $\mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}^+(\mathbb{Z}^t) = (P^t)^{-1}(\mathbb{Z}_{\geq 0})$. These positive tropical integer points play an important role in a variation of the previously mentioned duality,
\begin{equation*}
\tag{$\ast$}
\label{eq:duality-conjecture}
\mathbb{I} : \mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}^+(\mathbb{Z}^t)
\longrightarrow \mathscr{O}(\mathscr{R}_{\mathrm{SL}_n, \widehat{S}})
\end{equation*}
(see \cite[Conjecture 10.11 and Theorem 10.12, as well as Theorems 10.14, 10.15 for $G=\mathrm{PGL}_2$]{GoncharovInvent15}). Here, the space $\mathscr{R}_{\mathrm{SL}_n, \widehat{S}}$, introduced in \cite[\S 10.2]{GoncharovInvent15} (they denote it by $\mathrm{Loc}_{\mathrm{SL}_n, \widehat{S}}$), is a generalized (twisted) version of the $\mathrm{SL}_n$-character variety $\mathscr{R}_{\mathrm{SL}_n, S}$ valid for marked surfaces~$\widehat{S}$.
Because $\mathrm{PGL}_n$ is not simply connected, the moduli space $\mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}$ does not have a cluster structure; however, it does admit a positive structure. The tropical spaces $\mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}(\mathbb{Z}^t)$ and $\mathcal{A}_{\mathrm{PGL}_n, \widehat{S}}^+(\mathbb{Z}^t)$ are thus defined; moreover, they can be seen as subsets of the real tropical space $\mathcal{A}_{\mathrm{SL}_n, \widehat{S}}(\mathbb{R}^t)$, thereby inheriting a tropical cluster structure. Our goal is to construct, in the case $n=3$, a concrete topological model for the space $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}^+(\mathbb{Z}^t)$ of positive tropical integer points, which also exhibits this tropical cluster structure.
\textcolor{red}{See \S \ref{sec:preliminary-for-tropical-points} for a brief overview of the underlying Fock-Goncharov-Shen theory.}
\subsubsection{Topological indexing of linear bases}
\label{sssec:topological-indexing-of-linear-bases}
$ $
One of our guiding principles is that tropical integer points should correspond to topological objects generalizing laminations \cite{Thurston97} on surfaces in the case $n=2$. Such so-called \underline{higher laminations} \textcolor{red}{can be studied from many points of view, blending ideas from geometry, topology, and physics; see, for instance,} \cite{FontaineCompositio13, XieArxiv13, GoncharovInvent15, LeGeomTop16}. \textcolor{red}{In the present article, we focus attention on one of the topological approaches to studying higher laminations, via} \underline{webs} \cite{KuperbergCommMathPhys96, SikoraTrans01, CautisMathAnn14}; see also \cite{GaiottoAnnHenriPoincare13}. \textcolor{red}{Webs} are certain $n$-valent graphs-with-boundary embedded in the surface $\widehat{S}$ (considered up to equivalence in $\widehat{S} - M$). Webs also appear naturally in the context of quantizations of character varieties via \underline{skein modules and algebras} \cite{Turaev89, WittenCommMathPhys89, PrzytyckiBullPolishAcad91, SikoraAlgGeomTop05}.
We begin by reviewing the case $n=2$. For a marked surface $\widehat{S}$, define the set $\mathscr{L}_{2, \widehat{S}}$ of \underline{\textcolor{red}{(bounded)} $2$-laminations} on $\widehat{S}$ as follows: an element $\ell \in \mathscr{L}_{2, \widehat{S}}$ is a finite collection of mutually non-intersecting simple loops and arcs on $\widehat{S}$ such that (i) no loop is contractible; and (ii) arcs end only on boundary components of $\widehat{S}$ containing marked points, and no arc can be contracted into a boundary interval containing no marked points.
In the case where the surface $\widehat{S} = S$ is non-marked, a 2-lamination $\ell \in \mathscr{L}_{2, S}$ corresponds to a \underline{trace function} $\mathrm{Tr}_\ell \in \mathscr{O}(\mathscr{R}_{\mathrm{SL}_2, S})$, namely the regular function on the character variety $\mathscr{R}_{\mathrm{SL}_2, S}$ defined by sending $\rho: \pi_1(S) \to \mathrm{SL}_2$ to the product $\prod_\gamma \mathrm{Tr}(\rho(\gamma))$
of the traces along the components $\gamma$ of $\ell$. It is well-known \cite{BullockCommentMathHelv97, PrzytyckiTopology00} that the trace functions $\mathrm{Tr}_\ell$, varying over the $2$-laminations $\ell \in \mathscr{L}_{2, S}$, form a linear basis for the algebra $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_2, S})$ of regular functions on the $\mathrm{SL}_2$-character variety.
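For instance (unwinding the definition), if $\ell$ consists of a single simple loop $\gamma$, then
\begin{equation*}
\mathrm{Tr}_\ell(\rho) = \mathrm{Tr}(\rho(\gamma)),
\end{equation*}
the classical trace function of $\gamma$, while the empty lamination $\ell = \emptyset$ corresponds to the constant function $1$ (the empty product).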
At the opposite topological extreme, consider the case where the surface $\widehat{S} = \widehat{D}$ is a disk with $k$ marked points $m_i$ on its boundary, cyclically ordered. For each $i$, assign a positive integer $n_i$ to the $i$-th boundary interval located between the marked points $m_i$ and $m_{i+1}$. This determines a subset $\mathscr{L}_{2, \widehat{D}}(n_1, \dots, n_k) \subset \mathscr{L}_{2, \widehat{D}}$ consisting of the $2$-laminations $\ell$ having geometric intersection number equal to $n_i$ on the $i$-th boundary interval. It follows from the Clebsch-Gordan theorem (see, for instance, \cite[\S\S 2.2, 2.3]{KuperbergCommMathPhys96}) that the subset $\mathscr{L}_{2, \widehat{D}}(n_1, \dots, n_k)$ of $2$-laminations indexes a linear basis for the space of $\mathrm{SL}_2$-invariant tensors $(V_{n_1} \otimes \cdots \otimes V_{n_k})^{\mathrm{SL}_2}$, where $V_{n_i}$ is the unique $n_i$-dimensional irreducible representation of $\mathrm{SL}_2$.
For a general marked surface $\widehat{S}$, Goncharov-Shen's moduli space $\mathscr{R}_{\mathrm{SL}_2, \widehat{S}}$ simultaneously generalizes both (a twisted version of) the character variety $\mathscr{R}_{\mathrm{SL}_2, S}$ for non-marked surfaces $\widehat{S} = S$, as well as the spaces of invariant tensors $(V_{n_1} \otimes V_{n_2} \otimes \cdots \otimes V_{n_k})^{\mathrm{SL}_2}$ for marked disks $\widehat{S} = \widehat{D}$. By \cite[Theorem 10.14]{GoncharovInvent15},
the set of $2$-laminations $\mathscr{L}_{2, \widehat{S}}$ canonically indexes a linear basis for the algebra of functions $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_2, \widehat{S}})$ on the generalized character variety for the marked surface $\widehat{S}$, closely related to the linear bases in the specialized cases $\widehat{S} = S$ and $\widehat{S} = \widehat{D}$.
We now turn to the case $n=3$. In the setting of the disk $\widehat{S} = \widehat{D}$ with $k$ marked points on its boundary, the integers $n_i$ are replaced with highest weights $\lambda_i$ of irreducible $\mathrm{SL}_3$-representations $V_{\lambda_i}$, and the object of interest is the space $(V_{\lambda_1} \otimes V_{\lambda_2} \otimes \cdots \otimes V_{\lambda_k})^{\mathrm{SL}_3}$ of $\mathrm{SL}_3$-invariant tensors. Kuperberg \cite{KuperbergCommMathPhys96} proved that the set $\mathscr{W}_{3, \widehat{D}}(\lambda_1, \dots, \lambda_k)$ of non-convex non-elliptic $3$-webs $W$ on $\widehat{D}$, matching certain fixed topological boundary conditions corresponding to the weights $\lambda_i$, indexes a linear basis for the invariant space $(V_{\lambda_1} \otimes V_{\lambda_2} \otimes \cdots \otimes V_{\lambda_k})^{\mathrm{SL}_3}$ (so can be thought of as the $\mathrm{SL}_3$-analogue of the subset $\mathscr{L}_{2, \widehat{D}}(n_1, \dots, n_k) \subset \mathscr{L}_{2, \widehat{D}}$).
On the other hand, for non-marked surfaces $\widehat{S} = S$, Sikora \cite{SikoraTrans01} defined, for any $3$-web $W$ on $S$, a trace function $\mathrm{Tr}_W$ on the character variety $\mathscr{R}_{\mathrm{SL}_3, S}$, generalizing the trace functions $\mathrm{Tr}_\ell$ for $2$-laminations $\ell \in \mathscr{L}_{2, S}$ (Sikora also defined $\mathrm{Tr}_W \in \mathscr{O}(\mathscr{R}_{\mathrm{SL}_n, S})$ for any $n$-web $W$). A theorem of Sikora-Westbury \cite{SikoraAlgGeomTop07} implies that the subset $\mathscr{W}_{3, S}$ of non-elliptic $3$-webs $W$ indexes, by taking trace functions $\mathrm{Tr}_W$, a linear basis for the algebra of regular functions $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_3, S})$ on the $\mathrm{SL}_3$-character variety.
For a general marked surface $\widehat{S}$, Frohman-Sikora's work \cite{FrohmanMathZ2022}, motivated by Kuperberg \cite{KuperbergCommMathPhys96}, suggests that a good definition for the \underline{\textcolor{red}{(bounded)} $3$-laminations} is the set $\mathscr{W}_{3, \widehat{S}}$ of reduced $3$-webs $W$ on $\widehat{S}$, which in particular are allowed to have boundary; see \S \ref{section:wa}. Indeed, \textcolor{red}{by} \cite[Proposition 4]{FrohmanMathZ2022}, this set $\mathscr{W}_{3, \widehat{S}}$ forms a linear basis for the reduced $\mathrm{SL}_3$-skein algebra. Just as skein algebras quantize character varieties for non-marked surfaces $S$, we suspect that Frohman-Sikora's reduced $\mathrm{SL}_3$-skein algebra is a quantization of Goncharov-Shen's generalized $\mathrm{SL}_3$-character variety $\mathscr{R}_{\mathrm{SL}_3, \widehat{S}}$. \textcolor{red}{In particular, we} suspect that the set $\mathscr{W}_{3, \widehat{S}}$ indexes a canonical linear basis for the algebra of regular functions $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_3, \widehat{S}})$, generalizing the case $n=2$ \cite[Theorem 10.14]{GoncharovInvent15}; \textcolor{red}{see \cite[Conjecture~23]{FrohmanMathZ2022}. }
\subsubsection{Tropical coordinates for higher laminations}
$ $
By a \underline{positive integer cone} we mean a subset of $\mathbb{Z}_{\geq 0}^k$ \textcolor{red}{closed under addition and containing zero. }
As in \cite{FockIHES06, Fock07}, in the case $n=2$, given a choice of ideal triangulation $\mathscr{T}$, with $N_2$ edges, of the marked surface $\widehat{S}$,
one assigns $N_2$ nonnegative integer coordinates to a given $2$-lamination $\ell \in \mathscr{L}_{2, \widehat{S}}$ by taking the geometric intersection numbers of $\ell$ with the edges of the ideal triangulation $\mathscr{T}$. This assignment determines an injective coordinate mapping
\begin{equation*}
\Phi^{(2)}_\mathscr{T} : \mathscr{L}_{2, \widehat{S}} \longhookrightarrow \mathbb{Z}_{\geq 0}^{N_2}
\end{equation*}
on the set of $2$-laminations $\mathscr{L}_{2, \widehat{S}}$. Moreover, the image of $\Phi_\mathscr{T}^{(2)}$ is a positive integer cone in $\mathbb{Z}_{\geq 0}^{N_2}$, which is characterized as the set of solutions of finitely many inequalities and parity conditions of the form
\begin{equation*}
a+b-c \geq 0 \quad\quad
\text{ and }
\quad\quad
a + b - c \in 2\mathbb{Z}
\quad\quad
\left( a, b, c \in \mathbb{Z}_{\geq 0} \right).
\end{equation*}
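For instance (a standard computation, included for concreteness), consider a single ideal triangle of $\mathscr{T}$ with edge coordinates $a, b, c \in \mathbb{Z}_{\geq 0}$. The number of corner arcs of $\ell$ cutting off the corner opposite the edge labeled $a$ is
\begin{equation*}
\frac{b+c-a}{2},
\end{equation*}
which is a nonnegative integer precisely when the corresponding inequality and parity condition hold. For example, the edge coordinates $(a, b, c) = (2, 1, 1)$ correspond to one corner arc at each of the two corners adjacent to the edge labeled $a$.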
Moreover, these integer coordinates are \underline{natural} with respect to the choice of $\mathscr{T}$, in the sense that if a different ideal triangulation $\mathscr{T}^\prime$ is chosen, then the induced coordinate transformation is the $\mathrm{SL}_2$ tropical $\mathcal{A}$-coordinate cluster transformation \cite[Figure 8]{Fock07}. These natural coordinates provide an identification $\mathscr{L}_{2, \widehat{S}} \cong \mathcal{A}_{\mathrm{PGL}_2, \widehat{S}}^+(\mathbb{Z}^t)$ as in \cite[Theorem 10.15]{GoncharovInvent15}. Taken together, \cite[Theorems 10.14, 10.15]{GoncharovInvent15} \textcolor{red}{constitute} a compelling topological version of the duality \eqref{eq:duality-conjecture} in the case $n=2$; see \cite[the two paragraphs after Theorem 10.15]{GoncharovInvent15}.
Our main result generalizes these natural coordinates to the setting $n=3$.
More precisely, given an ideal triangulation $\mathscr{T}$ of a marked surface $\widehat{S}$, let $N_3$ be twice the number of edges (including boundary edges) of $\mathscr{T}$ plus the number of triangles of $\mathscr{T}$. Recall the set $\mathscr{W}_{3, \widehat{S}}$ of (equivalence classes of) reduced $3$-webs on $\widehat{S}$, discussed above.
\begin{theorem}
\label{thm:first-theorem-intro}
Given an ideal triangulation $\mathscr{T}$ of \textcolor{red}{the marked surface} $\widehat{S}$, there is an injection
\begin{equation*}
\Phi_\mathscr{T} : \mathscr{W}_{3, \widehat{S}}
\longhookrightarrow
\mathbb{Z}_{\geq 0}^{N_3}
\end{equation*}
satisfying the property that the image of $\Phi_\mathscr{T}$ is a positive integer cone in $\mathbb{Z}_{\geq 0}^{N_3}$ which is characterized as the set of solutions of finitely many Knutson-Tao rhombus inequalities \cite{KnutsonJAmerMathsoc99} and modulo $3$ congruence conditions of the form
\begin{equation*}
a+b-c-d \geq 0 \quad\quad
\text{ and }
\quad\quad
a + b - c - d \in 3\mathbb{Z}
\quad\quad
\left( a, b, c, d \in \mathbb{Z}_{\geq 0} \right).
\end{equation*}
Moreover, these coordinates are natural \textcolor{red}{with respect to the action of the mapping class group of the surface $\widehat{S}$.} \textcolor{red}{More precisely,} if a different ideal triangulation $\mathscr{T}^\prime$ is chosen, then the coordinate change map relating $\Phi_\mathscr{T}$ and $\Phi_{\mathscr{T}^\prime}$ is given by the $\mathrm{SL}_3$ tropical $\mathcal{A}$-coordinate cluster transformation of \textcolor{red}{\cite{FominJAmerMathSoc02, FockIHES06}, expressed locally as in Equations \eqref{eq:boundarycoords}-\eqref{equation:mu4}; see Figure~{\upshape\ref{figure:flip}}.}
\end{theorem}
\textcolor{red}{See Theorems \ref{thm:main-theorem}, \ref{thm:second-main-theorem}, and \ref{cor:second-main-theorem}. The construction of $\Phi_\mathscr{T}$ (Theorem \ref{thm:main-theorem}) was done in~\cite{DouglasArxiv20}.}
\textcolor{red}{This} construction was motivated by earlier work of Xie \cite{XieArxiv13} \textcolor{red}{and Goncharov-Shen~\cite{GoncharovInvent15}}.
\textcolor{red}{In particular, Goncharov-Shen used the Knutson-Tao rhombus inequalities associated to an ideal triangulation $\mathscr{T}$ of $\widehat{S}$ to index the set of positive $\mathcal{A}$ tropical integer points, which they showed parametrizes a linear basis for the algebra of regular functions $\mathscr{O}(\mathscr{R}_{\mathrm{SL}_3, \widehat{S}})$; see \cite[\S 3.1 and Theorem 10.12 (stated for more general Lie groups)]{GoncharovInvent15}. Their parametrization is not mapping class group equivariant; see the remark in \cite[page 614]{GoncharovInvent15} immediately after the aforementioned theorem. In \cite{GoncharovArxiv19} they construct equivariant bases using the abstract machinery of \cite{GrossJAmerMathSoc18}.
Theorem \ref{thm:first-theorem-intro} provides a concrete model indexing the set $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}^+(\mathbb{Z}^t)$ of positive tropical integer points, also based on the Knutson-Tao inequalities, which in addition is equivariant with respect to the action of the mapping class group.}
\textcolor{red}{This} natural \textcolor{red}{indexing} $\mathscr{W}_{3, \widehat{S}} \cong \mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}^+(\mathbb{Z}^t)$ \textcolor{red}{provided by} Theorem \ref{thm:first-theorem-intro} generalizes the $n=2$ case \cite[Theorem~10.15]{GoncharovInvent15}.
\textcolor{red}{We think of the web coordinates of Theorem \ref{thm:first-theorem-intro} as \underline{positive tropical integer $\mathcal{A}$-coordinates}.}
We call the positive integer cone $\Phi_\mathscr{T}(\mathscr{W}_{3, \widehat{S}}) \subset \mathbb{Z}_{\geq 0}^{N_3}$ the $\mathrm{SL}_3$ \underline{Knutson-Tao-Goncharov-Shen cone} with respect to the ideal triangulation $\mathscr{T}$ of $\widehat{S}$.
\textcolor{red}{
In \cite{XieArxiv13}, these tropical web coordinates were constructed for some simple examples, such as the eight triangle webs shown in Figure \ref{figure:triangle} below. They also appeared implicitly in \cite[Theorem 8.22]{SunGeomFunctAnal20}, in the geometric context of eruption flows on the $\mathrm{PGL}_n(\mathbb{R})$-Hitchin component ($n=3$). Xie \cite{XieArxiv13} checked the mapping class group equivariance, in the above sense, of these coordinates on a handful of examples.}
Frohman-Sikora \cite{FrohmanMathZ2022} independently constructed nonnegative integer coordinates for the set $\mathscr{W}_{3, \widehat{S}}$ of reduced 3-webs. Their coordinates are related to, but different from, \textcolor{red}{the coordinates of Theorem \ref{thm:first-theorem-intro}}.
As an application, Kim \cite{KimArxiv20} constructed an explicit $\mathrm{SL}_3$-version of Fock-Goncharov duality using the tropical web coordinates of Theorem \ref{thm:first-theorem-intro}, in the setting of non-marked surfaces $\widehat{S}=S$.
We expect that Kim's approach, together with the $\mathrm{SL}_3$-quantum trace map \cite{Douglas1, KimArxiv20}, will lead to an explicit $\mathrm{SL}_3$-version of quantum Fock-Goncharov duality \cite{FockENS09}; see \cite{AllegrettiAdvMath17} for the $n=2$ case.
\textcolor{red}{As another application, Ishibashi-Kano \cite{Ishibashi22} generalized the coordinates of Theorem \ref{thm:first-theorem-intro} to an $\mathrm{SL}_3$-version of shearing coordinates for (unbounded) 3-laminations. }
\textcolor{red}{To end this subsection, we briefly} recall from \cite{DouglasArxiv20} the construction of the coordinate map $\Phi_\mathscr{T}$ from Theorem \ref{thm:first-theorem-intro}; \textcolor{red}{see \S \ref{section:wa}.} Given the ideal triangulation $\mathscr{T}$, form the \underline{split ideal triangulation} $\widehat{\mathscr{T}}$ by replacing each edge $E$ of $\mathscr{T}$ with two parallel edges $E^\prime$ and $E^{\prime\prime}$; in other words, fatten each edge $E$ into a bigon. One then puts a given reduced $3$-web $W \in \mathscr{W}_{3, \widehat{S}}$ into \underline{good position} with respect to the split ideal triangulation $\widehat{\mathscr{T}}$. The result is that most of the complexity of the $3$-web $W$ is pushed into the bigons (Figure \ref{figure:bigon}), whereas over each triangle there is only a single (possibly empty) honeycomb together with finitely many arcs lying on the corners (Figure \ref{figure:localtr}).
Once the 3-web $W$ is in good position, its coordinates $\Phi_\mathscr{T}(W) \in \mathbb{Z}_{\geq 0}^{N_3}$ are readily computed. For an example in the once punctured torus, see Figure \ref{fig:coordinates-example-alt}.
\subsection{Local aspects}
$ $
The first new contribution of the present work is a proof of the naturality statement appearing in Theorem \ref{thm:first-theorem-intro}; \textcolor{red}{see \S \ref{section:fe}.} This is a completely local statement, since any two ideal triangulations $\mathscr{T}$ and $\mathscr{T}^\prime$ are related by a sequence of \underline{diagonal flips} inside ideal squares. It therefore suffices to check the desired tropical coordinate change formulas for a single square:
\begin{equation}
\label{eq:boundarycoords}
x_i = x_i^\prime \quad \left( i=1,2,\dots,8 \right),
\end{equation}
\begin{equation}
\label{equation:mu1}
\max\{x_2+y_3, y_1+x_3\}-y_2=z_2,
\end{equation}
\begin{equation}
\label{equation:mu2}
\max\{y_1+x_6, x_7+y_3\}-y_4=z_4,
\end{equation}
\begin{equation}
\label{equation:mu3}
\max\{x_1^\prime+z_4, x_8^\prime+z_2\}-y_1=z_1,
\end{equation}
\begin{equation}
\label{equation:mu4}
\max\{z_2+x_5^\prime, z_4+x_4^\prime\}-y_3=z_3.
\end{equation}
See Figure \ref{figure:flip} for the notation.
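As a purely arithmetic illustration of Equations \eqref{eq:boundarycoords}-\eqref{equation:mu4} (with sample values chosen only to demonstrate the formulas, not claimed to arise from a particular web), take $x_2 = x_4 = x_7 = 1$, all other $x_i = 0$, and $y_1 = y_2 = y_3 = y_4 = 0$. Then Equation \eqref{eq:boundarycoords} gives $x_i^\prime = x_i$, and
\begin{align*}
z_2 &= \max\{x_2 + y_3,\, y_1 + x_3\} - y_2 = \max\{1, 0\} = 1, \\
z_4 &= \max\{y_1 + x_6,\, x_7 + y_3\} - y_4 = \max\{0, 1\} = 1, \\
z_1 &= \max\{x_1^\prime + z_4,\, x_8^\prime + z_2\} - y_1 = \max\{1, 1\} = 1, \\
z_3 &= \max\{z_2 + x_5^\prime,\, z_4 + x_4^\prime\} - y_3 = \max\{1, 2\} = 2.
\end{align*}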
\begin{figure}[htb]
\includegraphics[scale=0.665]{flip-alt}
\caption{Local $\mathrm{SL}_3$ tropical $\mathcal{A}$-coordinate cluster transformation, corresponding to a diagonal flip $\mathscr{T} \to \mathscr{T}^\prime$ in the square. See Equations \eqref{eq:boundarycoords}-\eqref{equation:mu4}.
}
\label{figure:flip}
\end{figure}
Given a $3$-web $W \in \mathscr{W}_{3, \widehat{S}}$ in good position with respect to $\mathscr{T}$, the restriction $W|_\Box$ of $W$ to a triangulated ideal square $(\Box, \mathscr{T}|_\Box) \subset (\widehat{S}, \mathscr{T})$ falls into one of \underline{42 families} $\mathscr{W}^k_{\mathscr{T}|_\Box} \subset \mathscr{W}_{3, \Box}$ for $k=1, 2, \dots, 42$. \textcolor{red}{Depending on which family $\mathscr{W}^k_{\mathscr{T}|_\Box}$ the restricted web $W|_\Box$ belongs to, there is an explicit topological description of how $W|_\Box$ rearranges itself into good position after the flip; see \S \ref{ssec:examplessquare}.} These 42 local families of $3$-webs in the square have a geometric interpretation, leading to our second \textcolor{red}{main} result.
Let $\widehat{S} = \Box$ be a disk with four marked points, namely an ideal square, and let $\mathscr{T}$ be a choice of diagonal of $\Box$. Theorem \ref{thm:first-theorem-intro} says that the set $\mathscr{W}_{3, \Box}$ of reduced $3$-webs in $\Box$ embeds via $\Phi_\mathscr{T}$ as a positive integer cone inside $\mathbb{Z}_{\geq 0}^{12}$. This cone possesses a finite subset of irreducible elements spanning it over $\mathbb{Z}_{\geq 0}$, called its \underline{Hilbert basis} \cite{hilbert1890ueber, schrijver1981total}; \textcolor{red}{see \S \ref{section:Hsq}.}
\begin{theorem}
\label{thm:second-theorem-intro}
The Knutson-Tao-Goncharov-Shen cone $\Phi_\mathscr{T}(\mathscr{W}_{3, \Box}) \subset \mathbb{Z}_{\geq 0}^{12}$ associated to the triangulated ideal square $(\Box, \mathscr{T})$ has a Hilbert basis consisting of $22$ elements, corresponding via $\Phi_\mathscr{T}$ to $22$ reduced $3$-webs $W_\mathscr{T}^i \in \mathscr{W}_{3, \Box}$ for $i=1, 2, \dots, 22$.
Moreover, the positive integer cone
\begin{equation*}
\Phi_\mathscr{T}\left(\mathscr{W}_{3, \Box}\right) = \bigcup_{k=1}^{42} \mathscr{C}_\mathscr{T}^k
\quad \subset \mathbb{Z}_{\geq 0}^{12}
\end{equation*}
can be decomposed into \underline{$42$ sectors} $\mathscr{C}_\mathscr{T}^k$: (I) each sector is generated over $\mathbb{Z}_{\geq 0}$ by $12$ of the $22$ Hilbert basis elements, and (II) adjacent sectors are separated by a codimension $1$ wall. These sectors $\mathscr{C}_\mathscr{T}^k$ are in one-to-one correspondence with the $42$ families $\mathscr{W}^k_{\mathscr{T}} \subset \mathscr{W}_{3, \Box}$ of $3$-webs in the square, discussed above.
Lastly, each family $\mathscr{W}^k_{\mathscr{T}} \subset \mathscr{W}_{3, \Box}$ contains $12$ distinguished $3$-webs $W_\mathscr{T}^{i(k, j)} \in \{ W_\mathscr{T}^i \}_{i=1,2,\dots,22}$ for $j=1, 2, \dots, 12$, corresponding to the $12$ Hilbert basis elements generating the sector $\mathscr{C}_\mathscr{T}^k$. We refer to the set $\{ W_\mathscr{T}^{i(k, j)} \}_{j=1,2,\dots,12}$ of these $12$ distinguished $3$-webs as the \underline{topological type} of the sector $\mathscr{C}_\mathscr{T}^k$. Then, two sectors $\mathscr{C}_\mathscr{T}^k$ and $\mathscr{C}_\mathscr{T}^{k^\prime}$ are adjacent if and only if their topological types differ by exactly one distinguished $3$-web; see Figure {\upshape\ref{figure:wallscross}}.
\end{theorem}
\begin{figure}[htb]
\includegraphics[width=.81\textwidth]{wallscross-newalt}
\caption{Sectors and walls in the Knutson-Tao-Goncharov-Shen (KTGS) cone $\Phi_\mathscr{T}(\mathscr{W}_{3, \Box}) \subset \mathbb{Z}_{\geq 0}^{12}$ for a triangulated ideal square $(\Box, \mathscr{T})$. \textcolor{red}{More precisely, displayed is a corresponding sector decomposition $\left\{ D_i \right\}_{i=1,2,\dots,42}$ of (a projection to $\mathbb{R}^4$ of a real version of) an isomorphic cone in $\mathbb{Z}_+^8 \times \mathbb{Z}^4$, obtained from the KTGS cone via a transformation defined using the 4 tropical integer $\mathcal{X}$-coordinates. The sectors $D_i$ are grouped depending on which orthant of $\mathbb{R}^4$ they belong to. These sectors are the vertices of a 4-valent graph, where two sectors are connected by an edge if and only if they share a wall; equivalently, their topological types differ by a single web. See Theorem \ref{thm:second-theorem-intro}. }}
\label{figure:wallscross}
\end{figure}
See Theorems \ref{theorem:basis} and \ref{theorem:decomp}.
\textcolor{red}{
We like to think of Theorem \ref{thm:second-theorem-intro} as expressing a \underline{topological wall-crossing phenomenon}. Investigating whether it \textcolor{red}{could be} related to other wall-crossing phenomena appearing in cluster geometry \cite{kontsevich2008stability} \textcolor{red}{might be} an interesting problem for future research.}
For a related appearance of Hilbert bases, in the $n=2$ setting, see \cite{AbdielAlgGeomTop17}.
The proof of Theorem \ref{thm:second-theorem-intro} is geometric in nature and might be of independent interest. \textcolor{red}{Recall from \cite{FockIHES06}} that there are two dual sets of coordinates for the two dual moduli spaces of interest, namely the $\mathcal{A}$-coordinates and the $\mathcal{X}$-coordinates, as well as their tropical \textcolor{red}{counterparts}. For a triangulated ideal square $(\Box, \mathscr{T})$, via the mapping $\Phi_\mathscr{T}$ each $3$-web $W \in \mathscr{W}_{3, \Box}$ is assigned 12 positive tropical integer $\mathcal{A}$-coordinates $\Phi_\mathscr{T}(W) \in \mathbb{Z}_{\geq 0}^{12}$. We show that $W$ is also assigned four internal \underline{tropical integer $\mathcal{X}$-coordinates} valued in $\mathbb{Z}$, two associated to the unique internal edge of $\mathscr{T}$ and one to each triangle of $\mathscr{T}$; see Figure \ref{fig:tropical-X-coords} below. We find that the decomposition of the $\mathcal{A}$-cone $\Phi_\mathscr{T}\left(\mathscr{W}_{3, \Box}\right) \subset \mathbb{Z}^{12}_{\geq 0}$ into 42 sectors is mirrored by a corresponding decomposition of the $\mathcal{X}$-lattice $\mathbb{Z}^4$ into 42 sectors; see Figure \ref{figure:wallscross}. We think of this as a manifestation of Fock-Goncharov's tropicalized \textcolor{red}{canonical} map:
\begin{equation*}
p^t : \mathscr{W}_{3, \Box} \cong \Phi_\mathscr{T}(\mathscr{W}_{3, \Box}) \cong \textcolor{red}{\mathcal{A}^+_{\mathrm{PGL}_3, \Box}(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \overset{\mathrm{canonical}}{\longrightarrow} \mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)_\mathscr{T}.}
\end{equation*}
The image of the map $p^t$ is $\textcolor{red}{\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{Z}^t)_\mathscr{T}} \cong \mathbb{Z}^4$, and $p^t$ maps sectors of the positive integer cone $\Phi_\mathscr{T}(\mathscr{W}_{3, \Box}) \cong \textcolor{red}{\mathcal{A}^+_{\mathrm{PGL}_3, \Box}(\mathbb{Z}^t)_\mathscr{T}}$ to sectors of the integer lattice $\textcolor{red}{\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{Z}^t)_\mathscr{T}} \cong \mathbb{Z}^4$. \textcolor{red}{See \S \ref{section:ssq}.}
\section*{Acknowledgements}
We are profoundly grateful to Dylan Allegretti, Francis Bonahon, Charlie Frohman, Sasha Goncharov, Linhui Shen, Daping Weng, Tommaso Cremaschi, and Subhadip Dey for many very helpful conversations and for generously offering their time as this project developed.
Much of this work was completed during very enjoyable visits to Tsinghua University in Beijing, supported by a GEAR graduate internship grant, and the University of Southern California in Los Angeles. We would like to take this opportunity to extend our enormous gratitude to these institutions for their warm hospitality (and tasty dinners!).
\section{Background on Fock-Goncharov-Shen theory and tropical points}
\label{sec:preliminary-for-tropical-points}
In this section, we recall some theoretical preliminaries in order to discuss the set $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)$ of positive tropical integer $\mathrm{PGL}_3$-points, including the Fock-Goncharov $\mathcal{A}$-moduli space and the Goncharov-Shen potential.
\subsection{Marked surfaces, ideal triangulations, and rhombi}
\label{ssec:markedsurfacesidealtriangulations}
$ $
A \textit{marked surface} $\widehat{S}$ is a pair $(S,m_b)$ where $S$ is a compact oriented finite-type surface with at least one boundary component, and $m_b \subset \partial S$ is a finite set of \emph{marked points} on $\partial S$. Let $m_p \subset \left\{ \text{components of } \partial S \right\}$ be the set of \emph{punctures}, defined as the subset of boundary components without marked points; as is common in the literature, for the remainder of the article we identify such unmarked boundary components in $m_p$ with the (actual) punctures obtained by removing them and shrinking the resulting hole down to a point.
We assume the Euler characteristic condition $\chi(S) < d/2$, where $d$ is the number of components of $\partial S - m_b$ limiting to a marked point. (For example, $d=3$ for a once punctured disk with three marked points on its boundary.) This topological condition is equivalent to the existence of an \emph{ideal triangulation} $\mathscr{T}$ of $\widehat{S}$, namely a triangulation whose set of vertices is equal to $m_b \cup m_p$; the vertices of $\mathscr{T}$ are called \textit{ideal vertices}.
For simplicity, we always assume that $\mathscr{T}$ does not contain any \textit{self-folded triangles}. That is, we assume each triangle of $\mathscr{T}$ has three distinct sides. (Our results should generalize, essentially without change, to allow for self-folded triangles.)
Given an ideal triangulation $\mathscr{T}$ of $\widehat{S}$, we define the \emph{ideal $3$-triangulation} $\mathscr{T}_3$ of $\mathscr{T}$ to be the triangulation of $\widehat{S}$ obtained by subdividing each ideal triangle $\Delta$ of $\mathscr{T}$ into $9$ triangles; see Figure \ref{figure:acoor} below. The 3-triangulation $\mathscr{T}_3$ has as many ideal vertices as $\mathscr{T}$, and has $N$ \textit{non-ideal vertices}, where $N$ is defined in Notation \ref{not:+} below.
A \textit{pointed ideal triangle} is a triangle $\Delta$ in an ideal triangulation $\mathscr{T}$ together with a preferred ideal vertex; $\Delta$ is called a \textit{pointed ideal 3-triangle} when subdivided as part of the associated 3-triangulation $\mathscr{T}_3$.
Given a pointed ideal 3-triangle, we may talk about the three associated \textit{rhombi}; see Figure \ref{figure:acoor1} below. In the figure, the red rhombus is called the \textit{corner rhombus}, and the yellow and green rhombi are called the \textit{interior rhombi}. Each rhombus has two \textit{acute vertices} and two \textit{obtuse vertices}. Note that exactly one of these eight vertices, the \textit{corner vertex}, is an ideal vertex of $\mathscr{T}_3$; specifically, the top (acute) vertex of the corner rhombus. (We will see below that the other vertices correspond to Fock-Goncharov $\mathcal{A}$-coordinates.)
\begin{notation}
\label{not:+}
$ $
\begin{enumerate}
\item
The natural number $N$ is defined as twice the total number of edges (including boundary edges) of $\mathscr{T}$ plus the number of triangles of $\mathscr{T}$. (For instance, if $\mathscr{T}$ consists of a single ideal triangle, then $N = 2 \cdot 3 + 1 = 7$. Note that $N$ is what we called $N_3$ in \S \ref{sec:intro}.)
\item
It will be convenient to denote the nonnegative real numbers by $\mathbb{R}_+ := \mathbb{R}_{\geq 0}$ and the nonnegative integers by $\mathbb{Z}_+ := \mathbb{Z}_{\geq 0}$. Similarly, put $\mathbb{R}_- := \mathbb{R}_{\leq 0}$ and $\mathbb{Z}_- := \mathbb{Z}_{\leq 0}$.
\end{enumerate}
\end{notation}
\subsection{\texorpdfstring{$\mathrm{SL}_3$}{SL3}-decorated local systems}
\label{ssec:SL3-decorated-local-systems}
$ $
Although not strictly required for the main theorems of the article, the material of this section and the following one \S \ref{PGL3-decorated-local-systems} is intended to emphasize the key concepts guiding the rest of the paper.
Let $E$ be a 3-dimensional vector space.
\subsubsection{$\mathcal{A}$-coordinates}
\label{sssec:Acoordinates}
$ $
\begin{defn}[Decorated flags]
\label{defn:flag}
A \emph{flag} $F$ in $E$ is a maximal filtration of vector subspaces of $E$,
\begin{equation*}
\{0\}=F^{(0)} \subset F^{(1)} \subset F^{(2)} \subset F^{(3)}=E,\quad \dim F^{(i)}=i,
\end{equation*}
denoted by $(F^{(1)},F^{(2)})$.
A \emph{decorated flag} $(F,\varphi)$ is a pair consisting of a flag $F$ and a collection $\varphi$ of $2$ nonzero vectors
\begin{equation*}
\varphi=\left(
\check{f}_i \in \left( F^{(i)}/F^{(i-1)} \right) - \left\{ 0 + F^{(i-1)} \right\}; \quad i=1,2
\right).
\end{equation*}
The collection of decorated flags is denoted by $\mathcal{A}$.
\end{defn}
We can think of $\mathcal{A}$ as a homogeneous set as follows. Let $\mathrm{GL}(E)$ be the general linear group, and let $\mathrm{SL}(E) \subset \mathrm{GL}(E)$ be the special linear group consisting of transformations with determinant 1. The group $\mathrm{SL}(E)$ acts transitively on $\mathcal{A}$ by left translation. Then $\mathcal{A}$ can be identified with the quotient set $\mathrm{SL}(E)/U$ for any maximal unipotent subgroup $U$ of $\mathrm{SL}(E)$ (that is, a subgroup that, in some basis $\beta$ of $E$, consists of the upper triangular matrices with $1$'s on the main diagonal, namely the unipotent matrices).
We fix a volume form $\Omega \in \Lambda^3(E^*)$ on $E$. Then a decorated flag $(F, \varphi) \in \mathcal{A}$ determines a unique element $\check{f}_3 \in F^{(3)}/F^{(2)} = E/F^{(2)}$ with the property that $\Omega(f_1 \wedge f_2 \wedge f_3)=1$ for all $f_i \in F^{(i)}$, $i=1,2,3$, such that $f_i + F^{(i-1)} = \check{f}_i \in F^{(i)}/F^{(i-1)}$. A \emph{basis for $(F,\varphi)$ with respect to the volume form $\Omega$} is a choice of such a triple $(f_1, f_2, f_3)$, which is in particular a basis for the vector space $E$.
\begin{defn}[{$\mathcal{A}$-moduli space $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$ \cite[\S 2]{FockIHES06}}]
\label{def:Amodulispace}
Fix a base point $x_0$ in $\widehat{S}$, which henceforth will be suppressed in the notation. Let $\alpha_i$ be an oriented peripheral closed curve around a puncture $p_i \in m_p$. A \emph{decorated $\operatorname{SL}_3$-local system} on $\widehat{S}$ is a pair $(\rho,\xi)$ consisting of
\begin{itemize}
\item
a surface group representation $\rho\in\mathrm{Hom}(\pi_1(\widehat{S}),\operatorname{SL}_3)$ with unipotent monodromy along each peripheral curve $\alpha_i$; and,
\item
a \textit{flag map} $\xi:m_b\cup m_p \rightarrow \mathcal{A}$, such that each peripheral monodromy $\rho(\alpha_i)$ fixes the decorated flag $\xi(p_i)\in\mathcal{A}$. Note that the decorated flags assigned to the marked points $m_b$ can be chosen arbitrarily.
\end{itemize}
Two decorated $\operatorname{SL}_3$-local systems $(\rho_1,\xi_1),(\rho_2,\xi_2)$ are equivalent if and only if there exists some $g\in\operatorname{SL}_3$ such that $\rho_2=g\rho_1 g^{-1}$ and $\xi_2=g\xi_1$. We denote the moduli space of equivalence classes of decorated $\operatorname{SL}_3$-local systems on $\widehat{S}$ by $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$.
\end{defn}
\begin{remark}
More precisely, the flag map $\xi$ should be defined equivariantly at the level of the universal cover $\widetilde{S}$, and satisfy certain genericity conditions. These technicalities will be suppressed throughout our discussion.
\end{remark}
\begin{notation}
Let $V_{\mathscr{T}}$ (resp. $V_{\mathscr{T}_3}$) be the set of vertices of $\mathscr{T}$ (resp. $\mathscr{T}_3$). Note that $V_{\mathscr{T}}=m_b \cup m_p \subset V_{\mathscr{T}_3}$.
\begin{align*}
I_3:=
\left\{
\begin{array}{c|c}
V\in V_{\mathscr{T}_3} - V_{\mathscr{T}}
&
V\text{ lies on an edge of } \mathscr{T}\end{array}
\right\}\text{ and }
J_3:= V_{\mathscr{T}_3} - \left( V_{\mathscr{T}} \cup I_3\right).
\end{align*}
We adopt the following vertex labelling conventions:
\begin{itemize}
\item
We denote a vertex $V\in I_3$ on an oriented ideal edge $(a,b)$ of $\mathscr{T}$ by $v^{i,3-i}_{a,b}=v^{3-i,i}_{b,a}$, where $i=1$ or $2$ is the least number of edges of $\mathscr{T}_3$ from $V$ to $b$ (see Figure \ref{figure:acoor}).
\item
We denote a vertex $V\in I_3 \cup J_3$ on a triangle $(a,b,c)$ oriented counterclockwise by $v^{i,j,k}_{a,b,c}$, where the three nonnegative integers $i,j,k$ summing to $3$ are the least number of edges of $\mathscr{T}_3$ from $V$ to $\overline{bc}$, from $V$ to $\overline{ac}$, and from $V$ to $\overline{ab}$, respectively, where $\overline{bc}$, $\overline{ac}$, $\overline{ab}$ denote the unoriented edges of $\mathscr{T}$ (see Figure \ref{figure:acoor}).
\end{itemize}
\end{notation}
\begin{defn}[{$\mathcal{A}$-coordinates \cite[\S 9]{FockIHES06}}]
\label{defn:FGA}
Fix an ideal triangulation $\mathscr{T}$ of $\widehat{S}$ and its $3$-triangulation $\mathscr{T}_3$.
For a decorated local system $(\rho,\xi)\in\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$, consider a vertex $V \in I_3 \cup J_3$ contained in a counterclockwise oriented ideal triangle $\Delta=(a,b,c)$, as in Figure \ref{figure:acoor1}.
In the sense described above, choose bases, with respect to the fixed volume form $\Omega$,
\begin{align*}
(a_1,a_2,a_3),\; (b_1,b_2,b_3),\; (c_1,c_2,c_3) \quad \in E^3
\end{align*}
for the (assumed generic) decorated flags $\xi(a)$, $\xi(b)$, $\xi(c) \in \mathcal{A}$.
The \emph{Fock-Goncharov $\mathcal{A}$-coordinate} at $V=v_{a,b,c}^{i,j,k} \in I_3 \cup J_3$ is
\begin{equation} \label{eq:A-coordinate}
A_{V}(\rho,\xi):=A_{V}:=A_{a,b,c}^{i,j,k}:=\Omega\left(a^i \wedge b^j \wedge c^k\right) \neq 0,
\end{equation}
where
$a^1=a_1$, $a^2=a_1 \wedge a_2$, and similarly for $b$ and $c$. Note that $A_V$ is independent of the choice of bases: any two bases for the same decorated flag with respect to $\Omega$ differ by a unipotent, flag-adapted change of basis, which fixes each of the wedges $a^i$, $b^j$, $c^k$.
\end{defn}
\begin{figure}[htb]
\includegraphics[scale=0.25]{acoor-alt}
\caption{$3$-triangulation.}
\label{figure:acoor}
\end{figure}
We also put $A_{a,b,c}^{3,0,0} := \Omega(a^3) = \Omega(a_1 \wedge a_2 \wedge a_3) = 1$, which follows from the definition of the basis $(a_1, a_2, a_3)$ of $\xi(a)$ with respect to $\Omega$. Similarly $A_{a,b,c}^{0,3,0}=A_{a,b,c}^{0,0,3}=1$. See also \S \ref{PGL3-decorated-local-systems} below.
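As a concrete numerical illustration of Definition \ref{defn:FGA} (an aside, not part of the formalism above), when $E=\mathbb{R}^3$ and $\Omega$ is the standard volume form, the $\mathcal{A}$-coordinate \eqref{eq:A-coordinate} is the determinant of the $3\times 3$ matrix whose columns are the first $i$, $j$, $k$ basis vectors at $a$, $b$, $c$, respectively. A minimal Python sketch, with illustrative (hypothetical) bases:

```python
import numpy as np

def A_coord(basis_a, basis_b, basis_c, i, j, k):
    """Fock-Goncharov A-coordinate A^{i,j,k}_{a,b,c} = Omega(a^i ^ b^j ^ c^k),
    computed (for the standard volume form Omega on R^3) as the determinant
    of the 3x3 matrix whose columns are the first i vectors of the basis at a,
    the first j at b, and the first k at c; here i + j + k = 3."""
    assert i + j + k == 3
    cols = list(basis_a[:i]) + list(basis_b[:j]) + list(basis_c[:k])
    return float(np.linalg.det(np.column_stack(cols)))

# Illustrative flag bases (hypothetical data), each normalized to have
# Omega-volume 1, so that A^{3,0,0} = A^{0,3,0} = A^{0,0,3} = 1:
a = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])]
b = [np.array([0., 0., 1.]), np.array([0., 1., 0.]), np.array([-1., 0., 0.])]
c = [np.array([1., 1., 1.]), np.array([1., 2., 4.]), np.array([0.5, 1.5, 4.5])]

print(A_coord(a, b, c, 3, 0, 0))  # equals 1 by the normalization
print(A_coord(a, b, c, 1, 1, 1))  # det[a_1 | b_1 | c_1], generically nonzero
```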
Given the quiver defined with respect to the orientation of the surface, as in Figure \ref{figure:acoor}, let
\begin{equation*}
\varepsilon_{VW} = \#\{\text{arrows from } V \text{ to } W\} - \#\{\text{arrows from } W \text{ to } V\}.
\end{equation*}
The \emph{Fock--Goncharov $\mathcal{X}$-coordinate} at $V$ (defined for any $V \in I_3 \cup J_3$, $V \notin \partial \widehat{S}$) is
\begin{equation*}
X_{V}(\rho,\xi):=X_{V}:= \prod_{W \in I_3 \cup J_3} A_{W}^{\varepsilon_{VW}} \neq 0.
\end{equation*}
\begin{remark}
\label{rem:clusterstructure}
By \cite[\S 10]{FockIHES06}, the moduli space $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$ has a cluster algebraic structure \cite{FominJAmerMathSoc02} described by quivers on the surface, such as that in Figure \ref{figure:acoor}. In particular, each $\mathcal{A}$-coordinate transition map between different triangulations $\mathscr{T}$ and $\mathscr{T}^\prime$ is positive rational.
\end{remark}
\subsubsection{Goncharov-Shen potential}
\label{sssec:GSpotential}
$ $
\begin{defn}[Goncharov--Shen potential]
\label{defn:FGGS}
Let
\begin{equation*}D:=\left\{(2,1,0),(1,2,0),(1,1,1)\right\}.\end{equation*}
Suppose the pointed ideal triangle (\S \ref{ssec:markedsurfacesidealtriangulations}) $\Delta=(a,b,c)$, with preferred ideal vertex $a$, is counterclockwise oriented, as in Figure \ref{figure:acoor1}. For $(i,j,k)\in D$, the monomials
\begin{equation}
\label{equation:alpha}
\alpha_{a;b,c}^{i,j,k} :=
\frac{A_{a,b,c}^{i-1,j,k+1}\cdot A_{a,b,c}^{i+1,j-1,k}}{A_{a,b,c}^{i,j,k} \cdot A_{a,b,c}^{i,j-1,k+1}}\quad(\text{recalling } A_{a,b,c}^{3,0,0}=1),
\end{equation}
introduced in \cite[Lemma 3.1]{GoncharovInvent15}, correspond to the three rhombi in Figure \ref{figure:acoor1}.
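Explicitly, substituting the three elements of $D$ into \eqref{equation:alpha} and using $A_{a,b,c}^{3,0,0}=1$, the rhombus monomials are
\begin{align*}
\alpha_{a;b,c}^{2,1,0} = \frac{A_{a,b,c}^{1,1,1}}{A_{a,b,c}^{2,1,0} \cdot A_{a,b,c}^{2,0,1}},
\qquad
\alpha_{a;b,c}^{1,2,0} = \frac{A_{a,b,c}^{0,2,1} \cdot A_{a,b,c}^{2,1,0}}{A_{a,b,c}^{1,2,0} \cdot A_{a,b,c}^{1,1,1}},
\qquad
\alpha_{a;b,c}^{1,1,1} = \frac{A_{a,b,c}^{0,1,2} \cdot A_{a,b,c}^{2,0,1}}{A_{a,b,c}^{1,1,1} \cdot A_{a,b,c}^{1,0,2}}.
\end{align*}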
Define
\begin{align*}
P(\Delta):=P(a;b,c):=\alpha_{a;b,c}^{2,1,0}+\alpha_{a;b,c}^{1,2,0}+\alpha_{a;b,c}^{1,1,1}.
\end{align*}
Let $\Theta$ be the collection of counterclockwise oriented pointed ideal triangles of $\mathscr{T}$. The \textit{Goncharov--Shen potential} is
\begin{equation*}
P=\sum_{\Delta\in \Theta} P(\Delta).
\end{equation*}
Thus, for a given ideal triangulation $\mathscr{T}$, the Goncharov-Shen potential is a positive Laurent polynomial in the $\mathcal{A}$-coordinates for $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}$.
\end{defn}
\begin{figure}[htb]
\includegraphics[scale=0.3]{acoor1-alt}
\caption{Red rhombus for $\alpha_{a;b,c}^{2,1,0}$, yellow rhombus for $\alpha_{a;b,c}^{1,2,0}$, and green rhombus for $\alpha_{a;b,c}^{1,1,1}$ in a pointed ideal triangle.}
\label{figure:acoor1}
\end{figure}
\begin{remark}
The Goncharov-Shen potential is mapping class group equivariant; in particular, it defines a positive rational function on the moduli space $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}$. In \cite{GoncharovInvent15} and \cite{GrossJAmerMathSoc18}, the GS potentials are interpreted as mirror Landau-Ginzburg potentials. In \cite[\S 4]{HuangArxiv19}, they are interpreted as generalized horocycle lengths.
\end{remark}
\subsubsection{Tropical points}
\label{sssec:tropicalAcoordinates}
$ $
The moduli space $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}$ is a positive space (Remark \ref{rem:clusterstructure}), so its points $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{P})$ are defined over any semifield $\mathbb{P}$. To each ideal triangulation $\mathscr{T}$ there is associated a \textit{$\mathscr{T}$-chart} $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{P})_\mathscr{T}$, determined by the $\mathcal{A}$-coordinates (\S \ref{sssec:Acoordinates}), for the moduli space $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{P})$ over $\mathbb{P}$. In this paper, we will always be working in a $\mathscr{T}$-chart (see Definition \ref{def:integer-points} below).
The \textit{higher decorated Teichm\"uller space} $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}_{>0})$ is the set of positive real points of $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$. By Remark \ref{rem:clusterstructure}, the cluster transformation between two triangulations $\mathscr{T}$ and $\mathscr{T}^\prime$ sends positive coordinates to positive coordinates. Hence, all the $\mathcal{A}$-coordinates of $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}_{>0})$ for any ideal triangulation are positive.
The \textit{tropical semifield} $\mathbb{R}^t = (\mathbb{R}, \otimes, \oplus)$ is defined by $x \otimes y = x+y$ and $x \oplus y = \max\{x,y\}$. The isomorphism $x\mapsto -x$ sends $(\mathbb{R}^t, +, \max)$ to $(\mathbb{R}^t, +, \min)$. In this section, we use $\mathbb{P}=\mathbb{R}^t = (\mathbb{R}^t, +, \min)$. The tropical semifields $\mathbb{Z}^t$ and $\frac{1}{3} \mathbb{Z}^t$ are defined in the same way.
For the tropical semifield $\mathbb{P}=\mathbb{R}^t$, the $\mathscr{T}$-chart $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{R}^t)_\mathscr{T}$ is identified with $\mathbb{R}^N$. Here, $N$ is the number of $\mathcal{A}$-coordinates, recalling Notation \ref{not:+}. Similarly, for the tropical semifields $\mathbb{P} = \mathbb{Z}^t, \frac{1}{3} \mathbb{Z}^t$ the $\mathscr{T}$-charts $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{Z}^t)_\mathscr{T}$ and $\mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\frac{1}{3} \mathbb{Z}^t)_\mathscr{T}$ are identified with the lattices $\mathbb{Z}^N$ and $(\frac{1}{3} \mathbb{Z})^N$, respectively.
The tropical $\mathcal{A}$-coordinates are denoted $A_V^t$ for $V \in I_3 \cup J_3$; compare Definition \ref{defn:FGA}.
The \textit{tropicalization} $f^t$ of a positive Laurent polynomial $f$ is
\begin{equation*}f^t(x_1,\dots, x_k)= \lim_{C\rightarrow -\infty} \frac{\log f(e^{C x_1}, \dots, e^{C x_k})}{C},\end{equation*}
which, in the $(+,\min)$ convention used here, replaces multiplication by addition and addition by minimum.
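As a numerical aside (not part of the formalism), one can check the limit for $f(x_1,x_2)=x_1+x_2$: letting $C\to+\infty$ yields $\max(x_1,x_2)$, while $C\to-\infty$ yields $\min(x_1,x_2)$, matching the $(+,\min)$ convention of this section. A sketch in Python:

```python
import math

def tropicalize(f, xs, C):
    """Finite-C approximation of  log f(e^{C x_1}, ..., e^{C x_k}) / C."""
    return math.log(f(*[math.exp(C * x) for x in xs])) / C

f = lambda x, y: x + y  # the positive Laurent polynomial f(x1, x2) = x1 + x2

# Large positive C recovers max(2, 5) = 5; large negative C recovers
# min(2, 5) = 2, the (+, min) convention used in this section.
print(tropicalize(f, (2.0, 5.0), C=50.0))
print(tropicalize(f, (2.0, 5.0), C=-50.0))
```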
Tropicalizing the Goncharov-Shen potential $P$ (\S \ref{sssec:GSpotential}), we have, by \eqref{equation:alpha},
\begin{equation}
\label{equation:alphat}
\left(\alpha_{a;b,c}^{i,j,k}\right)^t=
\left(A_{a,b,c}^{i-1,j,k+1}\right)^t+ \left(A_{a,b,c}^{i+1,j-1,k}\right)^t - \left(A_{a,b,c}^{i,j,k}\right)^t-\left( A_{a,b,c}^{i,j-1,k+1}\right)^t,
\end{equation}
and
\begin{equation*}P^t=\min\left\{\left(\alpha_{a;b,c}^{i,j,k}\right)^t\right\}_{\text{any } \alpha_{a;b,c}^{i,j,k} \text{ of } P}.\end{equation*}
That is, the minimum is taken over all rhombi in all pointed ideal triangles of $\mathscr{T}$.
Note that, since $A_{a,b,c}^{3,0,0}=1$ is constant (by the discussion after \eqref{eq:A-coordinate}), we have $(A_{a,b,c}^{3,0,0})^t = 0$.
\begin{defn}
\label{definition:lam}
The space of \textit{positive real tropical points} is
\begin{equation*}\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+(\mathbb{R}^t):=\left\{x\in \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}^t) \;\bigg|\; P^t(x)\geq 0 \right\}.\end{equation*}
Alternatively, in $\mathscr{T}$-charts,
\begin{equation*}\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+(\mathbb{R}^t)_\mathscr{T}:=\left\{x\in \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}^t)_\mathscr{T} \cong \mathbb{R}^N \;\bigg|\; \left(\alpha_{a;b,c}^{i,j,k}\right)^t(x) \geq 0\text{ for all } \alpha_{a;b,c}^{i,j,k} \text{ of $P^t$} \right\}.\end{equation*}
The spaces $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+(\mathbb{Z}^t)$ and $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+\left(\frac{1}{3}\mathbb{Z}^t\right)$ are defined in the same way.
\end{defn}
\subsection{\texorpdfstring{$\mathrm{PGL}_3$}{PGL3}-decorated local systems}
\label{PGL3-decorated-local-systems}
$ $
We are interested in a closely related moduli space $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}$ defined similarly to $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$. As we will see, the $\mathcal{A}$-coordinates do not make sense for $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}$. Nevertheless, the Goncharov-Shen potential $P$ is still well-defined. Following \cite{GoncharovInvent15} (see the paragraph immediately following their Theorem 10.15), we can say that $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}$ is a positive space, so we can talk about its points $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{P})$ over a semifield $\mathbb{P}$. In particular, we can talk about its positive tropical integer points $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)$, which is our main object of study.
To define the space $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}$ it suffices to specify the set of \textit{$\mathrm{PGL}_3$-decorated flags}, denoted by $\mathcal{A}_{\mathrm{PGL}_3}$ (compare Definition \ref{defn:flag}); the rest of the definition then mimics that of $\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}$. Let $\mathcal{A}^\prime_{\mathrm{PGL}_3}$ be the set of quadruples $(F, \check{f}_1, \check{f}_2, \check{f}_3)$ where $F$ is a complete flag in $E$ and $\check{f}_i$ is a nonzero element of $F^{(i)}/F^{(i-1)}$ for $i=1,2,3$. Then $\mathrm{GL}_3$ acts transitively on $\mathcal{A}^\prime_{\mathrm{PGL}_3}$ in the obvious way. If we view $\mathbb{C}^*$ as the nonzero scalar matrices in $\mathrm{GL}_3$, then $\mathbb{C}^*$ acts on $\mathcal{A}^\prime_{\mathrm{PGL}_3}$ as well. We put $\mathcal{A}_{\mathrm{PGL}_3} := \mathcal{A}^\prime_{\mathrm{PGL}_3} / \mathbb{C}^*$. The action of $\mathrm{GL}_3$ descends to a transitive action of $\mathrm{PGL}_3$ on $\mathcal{A}_{\mathrm{PGL}_3}$. If $U$ is any maximal unipotent subgroup of $\mathrm{SL}_3$ (see \S \ref{sssec:Acoordinates}), let $U$ also denote its quotient in $\mathrm{PGL}_3$. We can then view $\mathcal{A}_{\mathrm{PGL}_3} \cong \mathrm{PGL}_3/U$ as a homogeneous~set.
We denote elements of the resulting moduli space $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}$ by $(\rho, \xi + \mathbb{C}^*)$ where $\rho : \pi_1(\widehat{S}) \to \mathrm{PGL}_3$ and $\xi + \mathbb{C}^* : m_b \cup m_p \to \mathcal{A}_{\mathrm{PGL}_3}$ (compare Definition \ref{def:Amodulispace}).
By a \textit{basis} of a $\mathrm{PGL}_3$-decorated flag $(F, \check{f}_1, \check{f}_2, \check{f}_3) + \mathbb{C}^*$ we mean a linear basis $(g_1, g_2, g_3)$ of $E$ adapted to the flag $F$ and representing this projective class; that is, such that there exists some nonzero scalar $\lambda$ so that $\lambda g_i + F^{(i-1)} = \check{f}_i \in F^{(i)} / F^{(i-1)}$ for all $i=1,2,3$.
The Goncharov-Shen potential $P$ is defined on the moduli space $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}$ as follows (compare \S \ref{sssec:GSpotential}). Make an arbitrary choice of a volume form $\Omega$ on $E$, as well as a basis $(a_1, a_2, a_3)$ of the $\mathrm{PGL}_3$-decorated flag $(\xi+\mathbb{C}^*)(a) \in \mathcal{A}_{\mathrm{PGL}_3}$ for each marked point or puncture $a \in m_b \cup m_p$. With respect to these choices, define numbers $A_{a,b,c}^{i,j,k}$ by \eqref{eq:A-coordinate}, and define the rhombus numbers $\alpha_{a;b,c}^{i,j,k}$ by \eqref{equation:alpha}; note that now $A_{a,b,c}^{3,0,0}$ need not be equal to $1$. While the numbers $A_{a,b,c}^{i,j,k}$ depend on the choices we have made, one checks that the rhombus numbers $\alpha_{a;b,c}^{i,j,k}$ do not. Thus, the potential $P$ is well-defined on $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}$.
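To make the invariance claim explicit: if the chosen basis at $a$ is rescaled by $(a_1,a_2,a_3)\mapsto(\lambda a_1,\lambda a_2,\lambda a_3)$ for some $\lambda \in \mathbb{C}^*$ (similarly at $b$, $c$ with scalars $\mu$, $\nu$), and the volume form by $\Omega \mapsto t\,\Omega$, then $A_{a,b,c}^{i,j,k} \mapsto t\,\lambda^{i}\mu^{j}\nu^{k}\,A_{a,b,c}^{i,j,k}$, and in \eqref{equation:alpha} all of these factors cancel:
\begin{equation*}
\alpha_{a;b,c}^{i,j,k} \mapsto
\frac{\left(t\,\lambda^{i-1}\mu^{j}\nu^{k+1}\right)\left(t\,\lambda^{i+1}\mu^{j-1}\nu^{k}\right)}
{\left(t\,\lambda^{i}\mu^{j}\nu^{k}\right)\left(t\,\lambda^{i}\mu^{j-1}\nu^{k+1}\right)}\,
\alpha_{a;b,c}^{i,j,k}
= \alpha_{a;b,c}^{i,j,k}.
\end{equation*}
(Unipotent, flag-adapted changes of basis fix each $A_{a,b,c}^{i,j,k}$, as in the $\operatorname{SL}_3$ setting.)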
We now turn to the tropical integer $\mathrm{PGL}_3$-points defined with respect to the Goncharov-Shen potential $P$. Note $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t) \subset \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left(\frac{1}{3} \mathbb{Z}^t\right)$. More precisely:
\begin{defn}
\label{def:integer-points}
Let $\mathscr{T}$ be an ideal triangulation of $\widehat{S}$. In $\mathscr{T}$-charts (\S \ref{sssec:tropicalAcoordinates}), we have the following notions.
The set of \textit{lamination-tropical points} is
\begin{equation*} \mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} :=\left\{ x\in \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left(\frac{1}{3} \mathbb{Z}^t\right)_\mathscr{T}
\cong \left(\frac{1}{3} \mathbb{Z}\right)^N
\;\Bigg|\; \left(\alpha_{a;b,c}^{i,j,k}\right)^t(x) \in \mathbb{Z} \text{ for all } \alpha_{a;b,c}^{i,j,k} \text{ of $P^t$}\right\}.\end{equation*}
Note this satisfies
\begin{equation*}\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left(\frac{1}{3}\mathbb{Z}^t\right)_\mathscr{T}.\end{equation*}
Similarly, the set of \textit{web-tropical points} is
\begin{equation*} \mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)_\mathscr{T} :=\left\{x\in \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left(\frac{1}{3} \mathbb{Z}^t\right)_\mathscr{T}
\cong \left(\frac{1}{3} \mathbb{Z}\right)^N
\;\Bigg|\; \left(\alpha_{a;b,c}^{i,j,k}\right)^t(x) \in \mathbb{Z}_{+} \text{ for all } \alpha_{a;b,c}^{i,j,k} \text{ of $P^t$}\right\}.\end{equation*}
Note, by Definition \ref{definition:lam},
this satisfies
\begin{equation*}\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+\left(\frac{1}{3}\mathbb{Z}^t\right)_\mathscr{T}. \end{equation*}
\end{defn}
It follows that we have the identities of tropical real points: $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}(\mathbb{R}^t) = \mathcal{A}_{\mathrm{SL}_3, \widehat{S}}(\mathbb{R}^t)$ and $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{R}^t)=\mathcal{A}_{\operatorname{SL}_3,\widehat{S}}^+(\mathbb{R}^t)$.
From now on, we always view $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t) \subset \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}^t)$ and $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t) \subset \mathcal{A}^+_{\operatorname{SL}_3,\widehat{S}}(\mathbb{R}^t)$.
\begin{remark}
\label{rem:realsolutionsareinteger}
$ $
\begin{enumerate}
\item \label{item:annoyingminusisgn} It turns out (see \cite[Remark 44]{DouglasArxiv20}) that, in the expressions for $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)$ and $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)$ in Definition \ref{def:integer-points}, we just as well could have assumed $x\in \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left( \mathbb{R}^t\right) \cong \mathbb{R}^N$. That is, all real solutions are, in fact, one third integer solutions. Moreover, in the case of $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}^+(\mathbb{Z}^t)$, these one third integer solutions are nonpositive (this is because the rhombus numbers \eqref{equation:alphat} are opposite in sign to those appearing in \cite{DouglasArxiv20}).
\item \label{rem2:realsolutionsareinteger} The set $\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)$ of lamination-tropical points was called the set of \textit{balanced points} in \cite{KimArxiv20}.
\item By \cite[\S 3]{GoncharovInvent15}, when $\widehat{S}$ is a disk with three marked points on its boundary, the positive integer tropical points
are canonically identified with the Knutson--Tao hive cone \cite{KnutsonJAmerMathsoc99}.
\item See also the Conceptual Remarks \ref{rem:conceptual-remarks}, \ref{rem:equivariant}, \ref{rem:sec6conceptremark}, and \ref{rem:last-conceptual-remark}.
\end{enumerate}
\end{remark}
\section{Tropical points and webs}
\label{section:wa}
We now introduce the main object of study, the Knutson-Tao-Goncharov-Shen cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ associated to an ideal triangulation $\mathscr{T}$ of a marked surface $\widehat{S}$, and we summarize the work of \cite{DouglasArxiv20} relating tropical points to topological objects called webs.
\subsection{The Knutson-Tao-Goncharov-Shen cone and reduced webs}
\label{ssec:reducedwebsandtheKTGScone}
$ $
We recall only the topological and notational definitions of \S \ref{ssec:markedsurfacesidealtriangulations}. Let $\widehat{S}$ be a marked~surface.
\subsubsection{KTGS cone}
\label{sssec:KTGS-cone}
$ $
Given a pointed ideal triangle $\Delta$ in an ideal triangulation $\mathscr{T}$ of $\widehat{S}$ (\S \ref{ssec:markedsurfacesidealtriangulations}), assume nonnegative integers (see also Remark \ref{rem:realsolutionsareinteger}\eqref{item:annoyingminusisgn}) $a,b,c,d \in \mathbb{Z}_+$ (resp. $a, b, c \in \mathbb{Z}_+$) are assigned to some interior (resp. corner) rhombus, where the numbers $a,b$ are assigned to the two obtuse vertices, and the numbers $c,d$ are assigned to the two acute vertices. To such an assigned rhombus, we associate a \textit{Knutson-Tao rhombus inequality} $a+b-c-d \geq 0$ and a \textit{modulo 3 congruence condition} $(a+b-c-d)/3 \in \mathbb{Z}$. Here, we set $d=0$ if the rhombus is a corner rhombus, in which case $d$ corresponds to the corner vertex.
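For concreteness (a minimal sketch, not code from \cite{DouglasArxiv20}), the two conditions attached to a single rhombus can be encoded as follows:

```python
def rhombus_conditions(a, b, c, d=0):
    """Check the Knutson-Tao rhombus inequality a + b - c - d >= 0 and the
    modulo 3 congruence condition (a + b - c - d)/3 in Z for one rhombus.
    Here a, b sit at the two obtuse vertices and c, d at the two acute
    vertices; set d = 0 for a corner rhombus."""
    s = a + b - c - d
    return s >= 0 and s % 3 == 0

# A point of Z^N lies in the KTGS cone defined below exactly when every
# rhombus of every pointed ideal triangle of the triangulation passes this check.
print(rhombus_conditions(4, 2, 3))     # corner rhombus: 4 + 2 - 3 - 0 = 3
print(rhombus_conditions(1, 1, 2, 3))  # interior rhombus: 1 + 1 - 2 - 3 = -3
```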
\begin{defn}
\label{def:positive-integer-cone}
A \textit{positive integer cone} $\mathscr{C}$ is a submonoid of $\mathbb{Z}_+^k$ (Notation \ref{not:+}) for some $k$. That is, $\mathscr{C}$ is closed under addition and contains the zero vector.
\end{defn}
Recall the definition (Notation \ref{not:+}) of the natural number $N$. This is the same as the number of non-ideal vertices of the 3-triangulation $\mathscr{T}_3$. We order these $N$ non-ideal vertices arbitrarily in the following definition (compare \S \ref{ssec:naturality-for-any-marked-surface}), so that to each such non-ideal vertex of $\mathscr{T}_3$ we associate a coordinate of $\mathbb{Z}^N$. In this way, a point of $\mathbb{Z}^N$ assigns to each rhombus in a pointed ideal triangle $\Delta$ four numbers $a,b,c,d \in \mathbb{Z}$ as above.
\begin{defn}
\label{def:KTGS-cone}
Given an ideal triangulation $\mathscr{T}$ of $\widehat{S}$, let the \textit{Knutson-Tao-Goncharov-Shen cone} $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}^N$, or just the \textit{KTGS cone} for short, be the submonoid defined by the property that its points satisfy all of the Knutson-Tao rhombus inequalities and modulo 3 congruence conditions, varying over all rhombi of all pointed ideal triangles $\Delta$ of $\mathscr{T}$.
\end{defn}
\begin{prop}
\label{prop:KTGS-is-positive-integer-cone}
The KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N \subset \mathbb{Z}^N$ is a positive integer cone.
\end{prop}
\begin{proof}
This is by \cite[Corollary 46 and Definition 49]{DouglasArxiv20}; see also Remark \ref{rem:realsolutionsareinteger}(\ref{item:annoyingminusisgn}).
\end{proof}
\begin{conceptremark}
\label{rem:conceptual-remarks}
We think of the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$, defined above, as the isomorphic coordinate chart
\begin{equation*}
\mathscr{C}_\mathscr{T} \cong -3 \mathcal{A}^+_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T}
\end{equation*}
where, in the theoretical language of \S \ref{PGL3-decorated-local-systems},
$\mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} \subset (\frac{1}{3} \mathbb{Z})^N \cong \mathcal{A}_{\operatorname{SL}_3,\widehat{S}}\left(\frac{1}{3}\mathbb{Z}^t\right)_\mathscr{T}$
and $\mathcal{A}^+_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} \subset \mathcal{A}_{\operatorname{PGL}_3,\widehat{S}}(\mathbb{Z}^t)_\mathscr{T} \cap (-\frac{1}{3} \mathbb{Z}_+)^N$ as in Remark \ref{rem:realsolutionsareinteger}(\ref{item:annoyingminusisgn}).
\end{conceptremark}
\subsubsection{Reduced webs}
\label{sssec:reduced-webs}
$ $
A \textit{web (possibly with boundary)} $W$ in $\widehat{S}$ \cite[\S 8.1]{DouglasArxiv20} is an oriented trivalent graph embedded in $\widehat{S}$ such that:
\begin{itemize}
\item the boundary $\partial W = W \cap (\partial \widehat{S} - m_b)$ of the web lies on the boundary of the surface (minus the marked points) and may be nonempty, in which case its boundary points are required to be monovalent vertices;
\item the three edges of $W$ at an internal vertex are either all oriented in or all oriented~out;
\item we allow $W$ to have components homeomorphic to the circle, called \textit{loops}, which do not contain any vertices;
\item we allow $W$ to have components homeomorphic to the closed interval, called \textit{arcs}, which have exactly two vertices on $\partial \widehat{S} - m_b$ and do not have any internal vertices.
\end{itemize}
Webs are considered up to \textit{parallel equivalence}, meaning related either by an ambient isotopy of $\widehat{S} - m_b$ or a homotopy in $\widehat{S} - m_b$ exchanging two ``parallel'' loop (resp. arc) components of $W$ bounding an embedded annulus (resp. rectangle, two of whose sides are contained in $\partial \widehat{S} - m_b$, as in Figure \ref{fig:boundary-parallel-move}).
\begin{figure}[htb]
\includegraphics[scale=.85]{boundary-parallel-move}
\caption{Boundary parallel move in the ideal square.}
\label{fig:boundary-parallel-move}
\end{figure}
A \textit{face} of a web $W$ \cite[\S 8.1]{DouglasArxiv20} is a contractible component of the complement $W^c \subset \widehat{S}$. \textit{Internal} (resp. \textit{external}) faces are those not intersecting (resp. intersecting) the boundary $\partial \widehat{S} - m_b$. A face with $n$ sides (counted with multiplicity, and including sides on the boundary $\partial \widehat{S} - m_b$) is called an \textit{$n$-face}. Internal faces always have an even number of sides. An \textit{external H-4-face} is an external 4-face limiting to a single component of $W$ (there is only one type of external 2- or 3-face).
A web $W$ is \textit{reduced} if each internal face has at least six sides, and there are no external 2-, 3-, or H-4-faces. (Reduced webs were called ``rung-less essential webs'' in \cite[\S 8.2]{DouglasArxiv20}; see also \cite{FrohmanMathZ2022}.) Denote by $\mathscr{W}_{\widehat{S}}$ the set of reduced webs up to parallel equivalence. (Note that $\mathscr{W}_{\widehat{S}}$ is what we called $\mathscr{W}_{3, \widehat{S}}$ in \S \ref{sec:intro}.)
\subsection{Web tropical coordinate map}
\label{ssec:tropicalwebcoordinatemap}
$ $
In \cite[\S 8.2]{DouglasArxiv20}, for any marked surface $\widehat{S}$ and for each ideal triangulation $\mathscr{T}$ of $\widehat{S}$, we defined a bijection of sets
\begin{equation*}
\Phi_{\mathscr{T}}:\mathscr{W}_{\widehat{S}}\overset{\sim}{\longrightarrow} \mathscr{C}_\mathscr{T}
\end{equation*}
from the set $\mathscr{W}_{\widehat{S}}$ of parallel equivalence classes of reduced webs to the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$, called the \textit{web tropical coordinate map}. We now recall the definition of this map.
\subsubsection{Split ideal triangulations, good positions, and web schematics}
\label{sssec:splitidealtriangulationsandgoodposition}
$ $
The \textit{split ideal triangulation} associated to $\mathscr{T}$, which by abuse of notation we also denote by $\mathscr{T}$, is defined by splitting each ideal edge of $\mathscr{T}$ (including boundary edges) into two disjoint ideal edges. In particular, the surface $\widehat{S}$ is cut into ideal triangles and \textit{bigons}, as shown in Figure \ref{figure:tr}. Note that although bigons do not admit ideal triangulations (indeed, a bigon has $\chi = 1$ and $d = 2$, so it fails the hypothesis $\chi < d/2$ of \S \ref{ssec:markedsurfacesidealtriangulations}), we can still consider them as marked surfaces, where all the definitions for webs make sense.
\begin{figure}[htb]
\includegraphics[scale=0.4]{tr}
\caption{Split ideal triangulation.}
\label{figure:tr}
\end{figure}
As proved in \cite{FrohmanMathZ2022} and \cite[\S 8.2]{DouglasArxiv20}, by isotopy we can put any reduced web $W \in \mathscr{W}_{\widehat{S}}$ into \textit{good position} with respect to the split ideal triangulation $\mathscr{T}$, meaning (see below for more details):
\begin{itemize}
\item the restriction of the web $W$ to any bigon of $\mathscr{T}$ is a \textit{ladder web} (see Figure \ref{figure:bigon}(1));
\item the restriction of the web $W$ to any triangle of $\mathscr{T}$ is a \textit{honeycomb web}, namely an oriented honeycomb together with oriented corner arcs (see Figure \ref{figure:localtr}(1)).
\end{itemize}
\begin{figure}[htb]
\includegraphics[scale=0.55]{bigon}
\caption{Ladder web in a bigon.}
\label{figure:bigon}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.45]{localtr-alt}
\caption{Honeycomb web in a triangle: $x=3$, $y=2$, and $z=t=u=v=w=1$. Here the honeycomb is oriented outward (there may also be inward oriented honeycombs). }
\label{figure:localtr}
\end{figure}
More precisely, the triangle condition (called ``rung-less essential'' in \cite[\S 3.4]{DouglasArxiv20}) is equivalent to saying that the restriction of $W$ to the triangle is reduced. The bigon condition (called ``essential'' in \cite[\S 3.4]{DouglasArxiv20}) is equivalent to asking that (1) all internal faces have at least six sides; and (2) for each edge $E$ of the bigon, and for every compact embedded arc $\alpha$ in the bigon such that $\partial \alpha = \alpha \cap E$ and such that $\alpha$ intersects $W$ transversely, the number of intersection points of $W$ with $\overline{E}$ does not exceed the number of intersection points of $W$ with $\alpha$; here, $\overline{E} \subset E$ is the segment of $E$ between the two endpoints of $\alpha$. Note this is a weaker condition than $W|_\textnormal{bigon}$ being reduced: although it does not allow external 2- or 3-faces, it does allow external H-4-faces (also called ``rungs'' of the ladder web).
In particular, $W$ has minimal geometric intersection with the split ideal triangulation $\mathscr{T}$.
Note that for a web $W$ in good position: there are two types of honeycombs in triangles, \textit{out-} and \textit{in-honeycombs} (see Figure \ref{figure:localtr}); there may or may not be a honeycomb in a given triangle; and, no conditions on the orientations of the corner arcs in a triangle are assumed.
\begin{remark}
For an earlier appearance of these honeycomb webs in ideal triangles, see \cite[pp. 140-141]{KuperbergCommMathPhys96}.
\end{remark}
In Figure \ref{figure:bigon}(2) we show the \textit{bigon schematic diagram} for a ladder web in a bigon, where each ``H'' is replaced by a crossing.
In Figure \ref{figure:localtr}(2) we show the \textit{triangle schematic diagram} for a honeycomb web in a triangle. Here, the honeycomb component is completely determined by two pieces of information: its orientation (either all in or all out) and a nonnegative integer $x \in \mathbb{Z}_+$. Note that the schematic for corner arcs is not a ``faithful'' diagrammatic representation, in general, because it forgets the ordering of the oriented arcs on each corner; see Remark \ref{rem:cornerarcschematics}. However, as we will see, this schematic is sufficient to recover the web tropical coordinates.
\begin{remark}
\label{rem:cornerarcschematics}
As one last note about the schematic for corner arcs, if the surface $\widehat{S}$ is an \textit{ideal polygon} (a disk with marked points on its boundary), then the schematic is indeed faithful at the level of parallel equivalence classes of reduced webs. This is because, in this setting, permuting corner arcs preserves the equivalence class of the web; see \S \ref{ssec:reducedwebsandtheKTGScone}. See Figure \ref{fig:boundary-parallel-move}, showing a boundary parallel move in the ideal square.
\end{remark}
\begin{defn}
Given the split ideal triangulation as in Figure \ref{figure:adm}, suppose we are given two oriented arcs intersecting in the bigon along the ideal edge between the two triangles. The intersection is called a
\begin{enumerate}
\item \textit{non-admissible crossing} if the arcs go toward a common ideal triangle (Figure \ref{figure:adm}(1));
\item \textit{admissible crossing} if the arcs go toward different ideal triangles (Figure \ref{figure:adm}(2)).
\end{enumerate}
\end{defn}
\begin{figure}[htb]
\includegraphics[scale=0.5]{adm-alt}
\caption{(1) Non-admissible crossing. (2) Admissible crossing.}
\label{figure:adm}
\end{figure}
The following fact holds essentially by definition.
\begin{observation}
\label{lemma:bigon}
For any reduced web $W$ in good position with respect to the split ideal triangulation $\mathscr{T}$, the schematic diagram (Figure {\upshape \ref{figure:bigon}(2)}) of any ladder web obtained by restricting $W$ to a bigon has only admissible crossings.
\end{observation}
\subsubsection{Definition of the web tropical coordinates}
\label{sssec:definitionofthetropicalwebcoordinates}
$ $
\begin{figure}[htb]
\includegraphics[scale=0.45]{triangle-alt}
\caption{\textcolor{purple}{Tropical web coordinates} for the eight ``irreducible'' reduced webs in the triangle. (The coordinates for the other four arcs $R_b$, $L_b$, $R_c$, $L_c$ are obtained by triangular symmetry.)}
\label{figure:triangle}
\end{figure}
Another way to think of an ideal triangle $\Delta$ is as an ideal polygon (Remark \ref{rem:cornerarcschematics}) with three marked points $(a,b,c)$ on its boundary, labeled counterclockwise, say.
Let a reduced web $W$ be in good position with respect to a split ideal triangulation $\mathscr{T}$ of $\widehat{S}$. We start by defining the web tropical coordinates $\Phi(W|_\Delta) \in \mathscr{C}_\Delta$ ``locally'' for each restriction $W|_\Delta$ of $W$ to an ideal triangle $\Delta$ of $\mathscr{T}$, as in Figure \ref{figure:localtr}(1).
First, the images in $\mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ under $\Phi$ of the eight ``irreducible'' (see \S \ref{section:Hsq} below) local reduced webs $R_a,L_a,R_b,L_b,R_c,L_c, T_{in}, T_{out}$ displayed in Figure \ref{figure:triangle} are defined as in that figure. One checks directly that these images satisfy the Knutson-Tao rhombus inequalities and the modulo 3 congruence conditions (\S \ref{ssec:reducedwebsandtheKTGScone}).
Then, the image under $\Phi$ of the restriction $W|_\Delta$ is defined as follows. Let $T \in \left\{ T_{in}, T_{out} \right\}$ be the oriented honeycomb appearing in $W|_\Delta$. Let the nonnegative integers $(x,w,v,u,t,y,z) \in \mathbb{Z}_{+}^7$ be defined by the schematic for $W|_\Delta$, as in Figure \ref{figure:localtr}(2). Put
\begin{equation*}
\Phi(W|_\Delta) := x \Phi(T) + v \Phi(L_a) + w \Phi(R_a) + t \Phi(L_b) + u \Phi(R_b) + z \Phi(L_c) + y \Phi(R_c) \quad \in \mathscr{C}_\Delta \subset \mathbb{Z}_+^7.
\end{equation*}
Lastly, the web tropical coordinates $\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ for $W$ are defined by ``gluing together'' the local coordinates $\Phi(W|_\Delta)$ for the triangles $\Delta$ across the edges of $\mathscr{T}$. Note that the pair of coordinates of $\Phi(W|_\Delta)$ along an edge $E$ at the bigon interface between two triangles $\Delta$ and $\Delta^\prime$ matches the corresponding pair of coordinates of $\Phi(W|_{\Delta^\prime})$ along the other bigon edge $E^\prime$, since these coordinates depend only on the number of oriented in- and out-strands crossing the bigon at either boundary edge $E$ or $E^\prime$. Thus, this gluing procedure is well-defined. In particular, in this way coordinates are assigned to the un-split ideal triangulation $\mathscr{T}$; this is why, in practice, one can go back and forth between the split and un-split triangulation.
See Figure \ref{fig:coordinates-example} for an example where $\widehat{S}$ is the once punctured torus. \textcolor{purple}{As another example, the face coordinate (namely, the coordinate that is 3 for $T_{in}$ and $T_{out}$) for the honeycomb web $W$ shown in Figure \ref{figure:localtr}(1) is $3\cdot3+4\cdot1+3\cdot2=19$.} There are plenty of examples of computing web coordinates throughout the paper; for instance, see \S \ref{ssec:examplessquare}.
In \cite[\S 8.2]{DouglasArxiv20} we showed $\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T}$ is independent of the choice of good position of $W$ with respect to the split ideal triangulation $\mathscr{T}$. Moreover, we proved the result mentioned at the beginning of this subsection:
\begin{theorem}[{\cite[Theorem 80]{DouglasArxiv20}}]
\label{thm:main-theorem}
For each ideal triangulation $\mathscr{T}$ of the marked surface $\widehat{S}$, the web tropical coordinate map
\begin{equation*}
\Phi_{\mathscr{T}}:\mathscr{W}_{\widehat{S}}\overset{\sim}{\longrightarrow}\mathscr{C}_{\mathscr{T}},
\end{equation*}
from the set $\mathscr{W}_{\widehat{S}}$ of parallel equivalence classes of reduced webs in $\widehat{S}$ to the Knutson-Tao-Goncharov-Shen cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$, is a bijection of sets.
\end{theorem}
We will need the following fact, which is immediate from the definitions.
\begin{observation}
\label{lemma:ww}
For any disjoint reduced webs $W,W'\in \mathscr{W}_{\widehat{S}}$, we have $W \cup W^\prime \in \mathscr{W}_{\widehat{S}}$ and
\begin{equation*}\Phi_{\mathscr{T}}(W\cup W')=\Phi_{\mathscr{T}}(W)+\Phi_{\mathscr{T}}(W') \quad \in \mathscr{C}_\mathscr{T}.\end{equation*}
\end{observation}
\begin{figure}[htb]
\centering
\includegraphics[width=.85\textwidth]{coordinates-example}
\caption{Gluing construction for the tropical coordinates for a reduced web in the once punctured torus.}
\label{fig:coordinates-example}
\end{figure}
\section{Naturality of the web tropical coordinates}
\label{section:fe}
In \S \ref{section:wa}, we recalled the construction \cite{DouglasArxiv20} of the web tropical coordinate map $\Phi_\mathscr{T} : \mathscr{W}_{\widehat{S}} \to \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$, depending on a choice of ideal triangulation $\mathscr{T}$ of the marked surface $\widehat{S}$. By Theorem \ref{thm:main-theorem}, $\Phi_\mathscr{T}$ is a bijection.
In this section, we show that these coordinates are \textit{natural} with respect to changing the triangulation $\mathscr{T} \to \mathscr{T}^\prime$. That is, the induced coordinate change map $\mathscr{C}_\mathscr{T} \to \mathscr{C}_{\mathscr{T}^\prime}$ is a tropical $\mathcal{A}$-coordinate cluster transformation, in the language of Fock-Goncharov \cite{FockIHES06}.
\begin{remark}
\label{rem:regarding-flip-proof}
See \cite{Shen1} for another proof of the main result of this section, Theorem \ref{thm:second-main-theorem}.
\end{remark}
\subsection{Precise statement of naturality for the square}
\label{ssec:precisestatementofnaturality}
$ $
Recall that an \textit{ideal square} $\Box$ is a disk with four marked points on its boundary. (See also Remark \ref{rem:cornerarcschematics}.) An ideal triangulation of $\Box$ is a choice of diagonal of the square; there are two such triangulations, related by a \textit{diagonal flip}.
\begin{defn}
\label{def:mutationmap}
Let $\mathscr{T}$ and $\mathscr{T}^\prime$ be the two ideal triangulations of the square $\Box$. The \textit{tropical $\mathcal{A}$-coordinate cluster transformation for the square} is the piecewise-linear function
\begin{equation*}
\mu_{\mathscr{T}^\prime, \mathscr{T}} : \mathbb{Z}^{12} \longrightarrow \mathbb{Z}^{12}
\end{equation*}
defined by
\begin{equation*}
\mu_{\mathscr{T}^\prime, \mathscr{T}}(x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, y_1, y_2, y_3, y_4)
=
(x^\prime_1, x^\prime_2, x^\prime_3, x^\prime_4, x^\prime_5, x^\prime_6, x^\prime_7, x^\prime_8, z_1, z_2, z_3, z_4),
\end{equation*}
where the right hand side of the equation is given by Equations \eqref{eq:boundarycoords}, \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4}. See also Figure \ref{figure:flip}. (Here, we think of the domain of $\mu_{\mathscr{T}^\prime, \mathscr{T}}$ as associated to $\mathscr{T}$, and the codomain to $\mathscr{T}^\prime$.)
\end{defn}
Note that Equations \eqref{equation:mu3}, \eqref{equation:mu4} use Equations \eqref{eq:boundarycoords}, \eqref{equation:mu1}, \eqref{equation:mu2}.
The main result of this paper is:
\begin{theorem}
\label{thm:second-main-theorem}
Let $\mathscr{T}$ and $\mathscr{T}^\prime$ be the two ideal triangulations of the square $\Box$, and let $\Phi_\mathscr{T} : \mathscr{W}_\Box \to \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ and $\Phi_{\mathscr{T}^\prime} : \mathscr{W}_\Box \to \mathscr{C}_{\mathscr{T}^\prime} \subset \mathbb{Z}_+^{12}$ be the associated web tropical coordinate maps.~Then,
\begin{equation*}
\mu_{\mathscr{T}^\prime, \mathscr{T}}(c) =\Phi_{\mathscr{T}^\prime} \circ \Phi_\mathscr{T}^{-1}(c) \quad \in \mathscr{C}_{\mathscr{T}^\prime} \quad\quad \left( c \in \mathscr{C}_\mathscr{T} \right).
\end{equation*}
\end{theorem}
\begin{remark}
Note it is not even clear, a priori, from the definitions that $\mu_{\mathscr{T}^\prime, \mathscr{T}}(c) \geq 0$ for $c \in \mathscr{C}_\mathscr{T}$.
\end{remark}
\subsection{Proof of \textcolor{purple}{Theorem \ref{thm:second-main-theorem}}}
\label{ssec:proofofnaturality}
$ $
By definition of the tropical coordinates, and of good position of a reduced web $W$ in $\mathscr{W}_\Box$ with respect to the triangulations $\mathscr{T}$ and $\mathscr{T}^\prime$, we immediately get:
\begin{observation}
\label{obs:eq1forallwebs}
\textcolor{purple}{For all $W \in \mathscr{W}_\Box$, } the images $c=\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T}$ and $\Phi_{\mathscr{T}^\prime} \circ \Phi_\mathscr{T}^{-1}(c) \in \mathscr{C}_{\mathscr{T}^\prime}$ satisfy Equation \eqref{eq:boundarycoords}.
\end{observation}
\begin{defn}
\label{def:corner-arcs}
Let the marked points of the square $\Box$ be labeled $a$, $b$, $c$, $d$ as in Figure \ref{figure:square1} \textcolor{purple}{(part 1)} below. Also as in the figure, define the 8 oriented \textit{corner arcs} $L_a, R_a, L_b, R_b, L_c, R_c, L_d, R_d$ in $\mathscr{W}_\Box$. Their 12 coordinates are provided in the figure as well.
\end{defn}
One checks by direct computation that:
\begin{observation}
\label{obs:thmforarcs}
The images $c=\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T}$, for $W=L_a, R_a, L_b, R_b, L_c, R_c, L_d, R_d$ any of the $8$ corner arcs, satisfy Theorem {\upshape\ref{thm:second-main-theorem}}.
\end{observation}
\begin{defn}
\label{def:cornerless-webs-in-the-square}
A given reduced web $W$ in $\mathscr{W}_\Box$ is the disjoint union of (i) all its corner arc components, together called the \textit{corner part} and denoted $W_r$; and (ii) their complement $W_c := W - W_r$, which we call the \textit{cornerless part} of the web $W$.
A reduced web $W$ is \textit{cornerless} if $W=W_c$. That is, $W$ has no corner arcs.
Let $\mathscr{R} \subset \mathscr{W}_\Box$ be the set of \textit{corner webs}, \textcolor{purple}{that is, webs} whose cornerless parts are empty: $W = W_r$. That is, an element of $\mathscr{R}$ is a disjoint union of corner arcs.
\end{defn}
\begin{lem}
\label{lemma:add}
For any disjoint reduced webs $W \in \mathscr{R}$ and $W'\in \mathscr{W}_\Box$, we have $W \cup W^\prime \in \mathscr{W}_\Box$ and
\begin{equation*}\mu_{\mathscr{T}^\prime, \mathscr{T}}(\Phi_\mathscr{T}(W))+\mu_{\mathscr{T}^\prime, \mathscr{T}}(\Phi_\mathscr{T}(W'))=\mu_{\mathscr{T}^\prime, \mathscr{T}}(\Phi_\mathscr{T}(W\cup W')) \quad \in \mathbb{Z}^{12} .\end{equation*}
\end{lem}
\begin{proof}
By Observation \ref{lemma:ww}, we get
\begin{equation*}\Phi_\mathscr{T}(W)+\Phi_\mathscr{T}(W')=\Phi_\mathscr{T}(W\cup W') \quad \in \mathscr{C}_\mathscr{T}.\end{equation*}
For any corner arc in Figure \ref{figure:square1}(1)-(8), and hence (again by Observation \ref{lemma:ww}) for any $W\in \mathscr{R}$, the left hand sides of Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} are always of the form $\max\{u,u\}-v$. Since
\begin{equation*}(\max\{u,u\}-v)+ (\max\{x,y\}-z)=\max\{u+x,u+y\}-(v+z) \quad \in \mathbb{Z},\end{equation*}
we obtain the desired equality.
\end{proof}
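The elementary max-plus identity used at the end of this proof can be sanity-checked numerically. The following Python snippet is our own verification aid (the function name is ours, not from any library); it is a sketch assuming nothing beyond integer arithmetic.

```python
import random

def tropical_sum_identity(u, v, x, y, z):
    # Checks (max{u,u} - v) + (max{x,y} - z) == max{u+x, u+y} - (v+z).
    lhs = (max(u, u) - v) + (max(x, y) - z)
    rhs = max(u + x, u + y) - (v + z)
    return lhs == rhs

# The identity holds for all integers, since max{u,u} = u and
# max{u+x, u+y} = u + max{x, y}.
assert all(
    tropical_sum_identity(*(random.randint(-50, 50) for _ in range(5)))
    for _ in range(1000)
)
```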
\begin{proof}[Proof of Theorem {\upshape \ref{thm:second-main-theorem}}]
$ $
\textcolor{purple}{Recall by Theorem \ref{thm:main-theorem} that any $c \in \mathscr{C}_\mathscr{T}$ is of the form $c=\Phi_\mathscr{T}(W)$ for some $W \in \mathscr{W}_\Box$.}
For any reduced web $W\in \mathscr{W}_\Box$, suppose that its coordinates via $\Phi_\mathscr{T}$ are \textcolor{purple}{labeled} as in the left hand side of Figure \ref{figure:flip}, and via $\Phi_{\mathscr{T}'}$ as in the right hand side of Figure \ref{figure:flip}. By Observation \ref{obs:eq1forallwebs}, Equation \eqref{eq:boundarycoords} is satisfied for any web $W$ in $\mathscr{W}_\Box$. \textcolor{purple}{In addition, } by Observation \ref{obs:thmforarcs}, the Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} are satisfied for any web $W$ in $\mathscr{R}$, that is, for \textcolor{purple}{$W$} consisting only of corner arcs. By Lemma \ref{lemma:add} together with \textcolor{purple}{another application of} Observation \ref{lemma:ww} \textcolor{purple}{to $\mathscr{T}^\prime$}, we have thus reduced the problem to establishing Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} for any cornerless web $W=W_c$.
The \textcolor{purple}{main} difficulty is that, for \textcolor{purple}{a given} cornerless web $W=W_c$ in good position with respect to the ideal triangulation $\mathscr{T}$, after flipping the diagonal $\mathscr{T} \to \mathscr{T}^\prime$ it is \textcolor{purple}{not obvious} how $W$ rearranges itself back into good position with respect to the new \textcolor{purple}{triangulation} $\mathscr{T}'$. \textcolor{purple}{(See, however, \S \ref{ssec:examplessquare} for examples of this rearranging into good position after the flip.)}
\textcolor{purple}{We} circumvent this difficulty \textcolor{purple}{by solving the} problem ``uniformly'', that is, without knowing how the new good position looks after the flip. The hypothesis that the web $W\textcolor{purple}{=W_c}$ does not have any corner arcs \textcolor{purple}{will be important here.}
To start, observe that it suffices to establish just Equation \eqref{equation:mu1}. Indeed, Equations \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} then immediately follow by 90 degree rotational symmetry. (Solve for $y_1$ and $y_3$, respectively, in the last two equations.)
With this goal in mind, we argue
\begin{equation*}
\tag{$\ast$}
\label{eq:main-lemma}
z_2 = x^\prime_2 + x^\prime_3 = x_2 + x_3
= \max(x_2 + y_3, x_3 + y_1) - y_2
\quad \in \mathbb{Z}_+.
\end{equation*}
Throughout, consider Figure \ref{fig:flip-proof}, recalling the notion of a web schematic; see \S \ref{sssec:splitidealtriangulationsandgoodposition} and Remark \ref{rem:cornerarcschematics}.
The second equation of \eqref{eq:main-lemma} has already been justified, by Observation \ref{obs:eq1forallwebs}.
\begin{figure}[htb]
\includegraphics[scale=.7]{flip-proof}
\caption{Schematic for the cornerless web $W=W_c$ in the square, before and after the flip. The variables $a, b, c, d, x, y, z, w, n, m$ are known, and can be read off from the good position of $W$ with respect to $\mathscr{T}$. The primed variables $a^\prime, b^\prime, \dots, m^\prime$ are not \textcolor{purple}{assumed to be} known. Because $W$ has no corner arcs, there are no arcs at the top and bottom vertices before the flip, nor at the left and right vertices after the flip; \textcolor{purple}{it follows by Observation \ref{lemma:bigon}} that we cannot have $a$ and $b$ (or $c$ and $d$) \textcolor{purple}{simultaneously} nonzero. To be concrete, we have shown the case where the honeycombs labeled $n$ and $m$ are out-honeycombs; we will justify the other cases as well. Note that the orientations of the $n^\prime$ and $m^\prime$ honeycombs are not \textcolor{purple}{assumed to be} known (and do not follow from the orientations of the $n$ and $m$ honeycombs).
}
\label{fig:flip-proof}
\end{figure}
Let us justify the first equation of \eqref{eq:main-lemma}. There are two cases, namely when $m^\prime$ represents an out- or an in-honeycomb.
When $m^\prime$ is ``out'', we compute:
\begin{equation*}
x_2^\prime = b^\prime + 2 z^\prime + m^\prime, \quad\quad
x_3^\prime = c^\prime + 2 y^\prime + 2 m^\prime, \quad\quad
z_2 = b^\prime + c^\prime + 2 y^\prime + 2 z^\prime + 3 m^\prime.
\end{equation*}
When $m^\prime$ is ``in'', we compute:
\begin{equation*}
x_2^\prime = b^\prime + 2 z^\prime + 2m^\prime, \quad\quad
x_3^\prime = c^\prime + 2 y^\prime + m^\prime, \quad\quad
z_2 = b^\prime + c^\prime + 2 y^\prime + 2 z^\prime + 3 m^\prime.
\end{equation*}
In both cases, the desired formula $z_2 = x^\prime_2 + x^\prime_3$ holds.
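The two cases can also be checked mechanically. The short Python sketch below is our own verification aid (variable names mirror the primed quantities above); it confirms that $x_2^\prime + x_3^\prime$ reduces to the same expression for $z_2$ in both orientations of the $m^\prime$ honeycomb.

```python
def z2_from_primed(bp, cp, yp, zp, mp, m_out):
    # x2', x3' in the two orientation cases of the m' honeycomb
    if m_out:   # m' is an out-honeycomb
        x2p, x3p = bp + 2*zp + mp, cp + 2*yp + 2*mp
    else:       # m' is an in-honeycomb
        x2p, x3p = bp + 2*zp + 2*mp, cp + 2*yp + mp
    return x2p + x3p

# In both cases the sum equals b' + c' + 2y' + 2z' + 3m'.
for m_out in (True, False):
    bp, cp, yp, zp, mp = 1, 2, 3, 4, 5
    assert z2_from_primed(bp, cp, yp, zp, mp, m_out) == bp + cp + 2*yp + 2*zp + 3*mp
```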
The justification of the third equation of \eqref{eq:main-lemma} is more involved.
We begin with a topological \textcolor{purple}{consequence}.
\begin{claim}
\label{claim:firstmainclaiminproof}
Let $W = W_c$ be a cornerless reduced web in the square. Up to $180$ degree rotational symmetry of the square, there are three cases:
\begin{enumerate}
\item
When the $n$ and $m$ honeycombs are both ``out'': Then,
\begin{equation*}
a + n + x = b + y \quad\quad \& \quad\quad w+d = z + m +c.
\end{equation*}
Moreover, if $y \geq n+x$, then $b=0$; and, if $y \leq n+x$, then $a=0$.
(Note this is the case displayed in the left hand side of Figure {\upshape\ref{fig:flip-proof}}.)
\item When the $n$ honeycomb is ``out'', and the $m$ honeycomb is ``in'': Then,
\begin{equation*}
a + n + x = b + y + m \quad\quad \& \quad\quad w+d = z +c.
\end{equation*}
Moreover, if $y+m \geq n+x$, then $b=0$; and, if $y+m \leq n+x$, then $a=0$.
\item When the $n$ and $m$ honeycombs are both ``in'': Then,
\begin{equation*}
a + x = b + y + m \quad\quad \& \quad\quad w+d + n = z +c.
\end{equation*}
Moreover, if $y +m \geq x$, then $b=0$; and, if $y+m \leq x$, then $a=0$.
\end{enumerate}
\end{claim}
\begin{figure}[htb]
\includegraphics[scale=.6]{flip-proof-lemma}
\caption{Proof of Claim \ref{claim:firstmainclaiminproof}. The web is assumed not to have any corner arcs. Shown is the case when the $n$ and $m$ honeycombs are both ``out''.}
\label{fig:flip-proof-lemma}
\end{figure}
The key topological property used to prove all three statements of the claim is the following: The number of ``out'' strands (resp. ``in'' strands) along one boundary edge of the bigon, \textcolor{purple}{as} displayed on the left hand side of Figure \ref{fig:flip-proof}, is equal to the number of ``in'' strands (resp. ``out'' strands) along the other boundary edge of the bigon.
We prove the first statement, (1), of the claim; the proofs of the second and third statements are similar. So assume the $n$ and $m$ honeycombs are both ``out''.
By the above topological property, we have the desired two identities \textcolor{purple}{of the statement}.
For the second part of the statement: When $y \geq n+x$, if $b$ were nonzero, then $a$ would have to be nonzero as well, since $b+y=a+n+x$. A strand counted by $b$ would then attach to a strand counted by $a$, forming a corner arc; see the schematic shown in the left hand side of Figure \ref{fig:flip-proof-lemma} (see also the caption of Figure \ref{fig:flip-proof}). But this contradicts the hypothesis that $W$ has no corner arcs. Similarly, $a=0$ when $y \leq n+x$; see the right hand side of Figure \ref{fig:flip-proof-lemma}. This establishes the claim.
\begin{claim}
\label{claim:secondmainclaiminproof}
Let $W = W_c$ be a cornerless reduced web in the square. Up to $180$ degree rotational symmetry of the square, there are three cases:
\begin{enumerate}
\item When the $n$ and $m$ honeycombs are both ``out'': Then, $x_2 + y_3 \geq x_3 + y_1$ if and only if $y \geq n+x$; conversely, $x_2 + y_3 \leq x_3 + y_1$ if and only if $y \leq n+x$.
(Note this is the case displayed in the left hand side of Figure {\upshape\ref{fig:flip-proof}}.)
\item When the $n$ honeycomb is ``out'', and the $m$ honeycomb is ``in'': Then, $x_2 + y_3 \geq x_3 + y_1$ if and only if $y+m \geq n+x$; conversely, $x_2 + y_3 \leq x_3 + y_1$ if and only if $y+m \leq n+x$.
\item When the $n$ and $m$ honeycombs are both ``in'': Then, $x_2 + y_3 \geq x_3 + y_1$ if and only if $y+m \geq x$; conversely, $x_2 + y_3 \leq x_3 + y_1$ if and only if $y+m \leq x$.
\end{enumerate}
\end{claim}
We prove the first statement; the proofs of the second and third statements are similar. So assume the $n$ and $m$ honeycombs are both ``out''. By Figure \ref{fig:flip-proof}, we compute:
\begin{gather*}
x_2 = w + 2a +n, \quad\quad
y_3 = b + c + 2y +2z + 3m, \\
x_3 = z + 2b + 2m, \quad\quad
y_1 = a + d + 2x + 2w + 3n.
\end{gather*}
Thus,
\begin{gather*}
(x_2 +y_3) - (x_3 + y_1) = -w +a -2n - b + c + 2y + z + m -d -2x \geq 0
\\ \Longleftrightarrow \quad a + c + 2y + z + m \geq w + 2n + b + d +2x.
\end{gather*}
By applying the two identities of part (1) of Claim \ref{claim:firstmainclaiminproof}, the above inequality is equivalent to
$
3y \geq 3n + 3x
$
as desired. Conversely, by reversing the direction of the inequality throughout the argument, we have $x_2 + y_3 \leq x_3 + y_1$ if and only if $y \leq n + x$. This establishes the claim.
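In fact, substituting the two identities of Claim \ref{claim:firstmainclaiminproof}(1) into the computation yields the sharper equality $(x_2+y_3)-(x_3+y_1) = 3(y-n-x)$, from which both directions of the claim are immediate. A small Python check (our own sketch, with coordinates named as in Figure \ref{fig:flip-proof}) confirms this:

```python
def difference_case1(b, y, n, x, w, d, z, m):
    # Impose the identities a + n + x = b + y and w + d = z + m + c
    a = b + y - n - x
    c = w + d - z - m
    x2, y3 = w + 2*a + n, b + c + 2*y + 2*z + 3*m
    x3, y1 = z + 2*b + 2*m, a + d + 2*x + 2*w + 3*n
    return (x2 + y3) - (x3 + y1)

# The difference is exactly 3(y - n - x), independent of the other variables.
for vals in [(0, 5, 1, 1, 3, 3, 1, 1), (3, 1, 2, 2, 5, 2, 1, 1)]:
    b, y, n, x = vals[0], vals[1], vals[2], vals[3]
    assert difference_case1(*vals) == 3 * (y - n - x)
```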
We are now prepared to justify the third equation of \eqref{eq:main-lemma}, which we recall is
\begin{equation*}
\label{eq:maincalculation}
\tag{$\ast\ast$}
x_2 + x_3 = \max(x_2 + y_3, x_3 + y_1) - y_2.
\end{equation*}
First, let us assume the $n$ and $m$ honeycombs are both ``out'', as in the left hand side of Figure \ref{fig:flip-proof}.
The values of $x_2, y_3, x_3, y_1$ were computed above, and we gather
\begin{align*}
x_2 + x_3 &= w + 2a + n + z + 2b + 2m, \\
x_2 + y_3 &= w + 2a + n + b + c + 2y + 2z + 3m, \\
x_3 + y_1 &= z + 2b + 2m + a + d + 2x + 2w + 3n.
\end{align*}
By Figure \ref{fig:flip-proof}, there are two ways to express $y_2$:
\begin{equation*}
y_2 = w + d + 2a + 2n + 2x = z + m + c + 2b + 2y.
\end{equation*}
There are two cases to establish \eqref{eq:maincalculation}. In the case $x_2 + y_3 \geq x_3 + y_1$, we compute, using the second form of $y_2$ above:
\begin{gather*}
\max(x_2 + y_3, x_3 + y_1) - y_2 = (x_2 + y_3) - y_2 \\
= w + 2a + n - b + z + 2m \overset{?}{=} x_2 + x_3 \quad
\Longleftrightarrow \quad b \overset{?}{=} 0.
\end{gather*}
For this case, by part (1) of Claim \ref{claim:secondmainclaiminproof}, we have $y \geq n+x$. Thus, $b=0$ by part (1) of Claim \ref{claim:firstmainclaiminproof}, as desired.
In the case $x_2 + y_3 \leq x_3 + y_1$, we compute, using the first form of $y_2$ above:
\begin{gather*}
\max(x_2 + y_3, x_3 + y_1) - y_2 = (x_3 + y_1) - y_2 \\
= z + 2b + 2m - a + w + n
\overset{?}{=} x_2 + x_3 \quad
\Longleftrightarrow \quad a \overset{?}{=} 0.
\end{gather*}
For this case, by part (1) of Claim \ref{claim:secondmainclaiminproof}, we have $y \leq n+x$. Thus, $a=0$ by part (1) of Claim \ref{claim:firstmainclaiminproof}, as desired.
This establishes \eqref{eq:maincalculation} when both the honeycombs are ``out''. When the $n$ honeycomb is ``out'', and the $m$ honeycomb is ``in''; or, when the $n$ and $m$ honeycombs are both ``in'': By essentially the same calculation, one computes again that, in the case $x_2 +y_3 \geq x_3 + y_1$, then \eqref{eq:maincalculation} is equivalent to $b=0$, and in the case $x_2 + y_3 \leq x_3 + y_1$, then \eqref{eq:maincalculation} is equivalent to $a=0$. These are justified by parts (2) and (3), respectively, of Claims \ref{claim:secondmainclaiminproof} and \ref{claim:firstmainclaiminproof}.
This completes the proof of the main result.
\end{proof}
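In the case where both honeycombs are ``out'', the case analysis of the proof can be replayed numerically. The sketch below is our own sanity check: it builds data consistent with Claims \ref{claim:firstmainclaiminproof}(1) and \ref{claim:secondmainclaiminproof}(1) (cornerlessness forces $b=0$ or $a=0$) and confirms Equation \eqref{eq:maincalculation} directly.

```python
def check_star_star(y, n, x, w, d, z, m):
    # Cornerlessness (Claim 1): b = 0 when y >= n + x, and a = 0 otherwise.
    if y >= n + x:
        b, a = 0, y - n - x
    else:
        a, b = 0, n + x - y
    c = w + d - z - m
    if c < 0:
        return True  # sampled data not geometrically realizable; nothing to check
    x2, y3 = w + 2*a + n, b + c + 2*y + 2*z + 3*m
    x3, y1 = z + 2*b + 2*m, a + d + 2*x + 2*w + 3*n
    y2 = w + d + 2*a + 2*n + 2*x
    return x2 + x3 == max(x2 + y3, x3 + y1) - y2

# One example in each regime of Claim 2(1):
assert check_star_star(5, 1, 1, 3, 3, 1, 1)   # y >= n + x, so b = 0
assert check_star_star(1, 2, 2, 5, 2, 1, 1)   # y <= n + x, so a = 0
```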
\begin{remark}
\label{rem:proof-remark}
We emphasize that the above proof depended crucially on two topological properties (both used in the proof of Claim \ref{claim:firstmainclaiminproof}): (1) the \textit{bigon property} about ``in'' and ``out'' strands; and, (2) the \textit{cornerless property} saying that $a$ and $b$ cannot be simultaneously nonzero.
The trick was to use Lemma \ref{lemma:add} to separate the corner arc case from the cornerless case, and then to use the fact that it is obvious that the boundary coordinates do not change after the flip. It is somewhat surprising that there is this relationship \textcolor{purple}{\eqref{eq:main-lemma}} between the internal and boundary coordinates; we do not know if \textcolor{purple}{such a relationship occurs in higher rank}. \textcolor{purple}{In \S \ref{section:ssq}, we give another application of this ``separating corner and cornerless webs'' strategy.}
\end{remark}
\subsection{Naturality for a marked surface}
\label{ssec:naturality-for-any-marked-surface}
$ $
Let $\widehat{S}$ be a marked surface, let $\mathscr{T}$ be an ideal triangulation, and let $N$ denote the number of global tropical coordinates; see \S\ref{ssec:markedsurfacesidealtriangulations}. In \S \ref{section:wa}, we introduced the web tropical coordinate map $\Phi_\mathscr{T} : \mathscr{W}_{\widehat{S}} \to \mathscr{C}_\mathscr{T}$, where we implicitly assumed an inclusion $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ of the KTGS cone of $\mathscr{T}$ (any permutation of the coordinates of $\mathbb{Z}_+^N$ determines a different inclusion). This choice played no role there, since we were only considering a single triangulation. We now consider multiple triangulations, where it becomes necessary to keep track of \textcolor{purple}{such choices}.
\subsubsection{Dotted triangulations}
\label{sec:base-triangulations}
$ $
More precisely, let $\mathscr{T} = \mathscr{T}_0$ be an initial choice of ideal triangulation, which we mark with $N$ dots in the usual way (as for instance in Figure \ref{figure:acoor}). Such a dotted initial triangulation $\mathscr{T} = \mathscr{T}_0$ is called a \textit{base (dotted) triangulation}. Fix a labeling of the dots of $\mathscr{T}$ from $1, 2, \dots, N$; we say that the base triangulation $\mathscr{T} = \mathscr{T}_0$ is \textit{labeled}. This determines a bijection $\left\{ 1, 2, \dots, N \right\} \to \left\{ \textnormal{dots} \right\}$, hence also an inclusion $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ (since a point in $\mathscr{C}_\mathscr{T}$ is, most precisely, a function $\left\{ \textnormal{dots} \right\} \to \mathbb{Z}_+$; see Figure \ref{figure:triangle} for instance).
\begin{example*}[\textbf{part 1}]
See the first diagram 0 in Figure \ref{fig:sl3-pentagon-relation}.
\end{example*}
\begin{figure}[htb]
\includegraphics[scale=.6]{sl3-pentagon-relation}
\caption{Pentagon Relation for $\mathrm{SL}_3$, namely the sequence of five \textcolor{purple}{diagonal} flips from $\mathscr{T}_0$ to $\mathscr{T}_5$. It takes 35 flips to come back to the original dotted triangulation. That is, as dotted triangulations, $\mathscr{T}_0=\mathscr{T}_{35}$ and $\mathscr{T}_0 \neq \mathscr{T}_i$ for $i=1,2,\dots,34$. The ten boundary coordinates (shown only in the first picture) are fixed, or ``frozen'', by these flip mutations but still enter into the computations for the interior coordinates. See Remark \ref{rem:seven-identities} for a computation.}
\label{fig:sl3-pentagon-relation}
\end{figure}
We now imagine forgetting $\mathscr{T}=\mathscr{T}_0$, but leaving the dots associated to $\mathscr{T}$ where they were on the surface. Another triangulation $\mathscr{T}^\prime$ is \textit{dotted (with respect to the dotting of $\mathscr{T}$)} if there are two dots (from $\mathscr{T}$) on each edge of $\mathscr{T}^\prime$ and one dot (from $\mathscr{T}$) in each face of $\mathscr{T}^\prime$. Note that, using the labeling of the dots \textcolor{purple}{fixed along with the base triangulation $\mathscr{T}=\mathscr{T}_0$}, each dotted triangulation $\mathscr{T}^\prime$ determines an inclusion $\mathscr{C}_{\mathscr{T}^\prime} \subset \mathbb{Z}_+^N$ as above \textcolor{purple}{for $\mathscr{T}=\mathscr{T}_0$}.
We refer to $\mathscr{T}^\prime$ without a dotting as the underlying \textit{topological triangulation}.
\begin{example*}[\textbf{part 2}]
The triangulations $\mathscr{T}_1$, $\mathscr{T}_2$, $\mathscr{T}_5$, $\mathscr{T}_{35}$ shown in the last four diagrams 1, 2, 5, 35 in Figure \ref{fig:sl3-pentagon-relation} are dotted with respect to the dotted triangulation $\mathscr{T}_0$ in the first diagram 0. Note that, as triangulations, $\mathscr{T}_0 = \mathscr{T}_5 = \mathscr{T}_{35}$, but, as dotted triangulations, $\mathscr{T}_0 = \mathscr{T}_{35} \neq \mathscr{T}_5$.
\end{example*}
\subsubsection{Tropical $\mathcal{A}$-coordinate cluster transformation \textcolor{purple}{for a marked surface}}
\label{def:tropical-A-coordinate-cluster-transformation}
$ $
We pause our discussion of the \textcolor{purple}{topology and} geometry of webs and cones, and turn to a purely algebraic result of Fomin-Zelevinsky and Fock-Goncharov (see Theorem \ref{prop:mutation-naturality} below).
There is a procedure to start from a base triangulation $\mathscr{T}=\mathscr{T}_0$ and generate new dotted triangulations in a controlled way, via diagonal flips. Indeed, if $\mathscr{T}_i$ is dotted, and if $(\Box, \mathscr{T}_i |_\Box)$ is a triangulated square in $\mathscr{T}_i$, the diagonal flip at $\Box$ induces a new dotted triangulation $\mathscr{T}_{i+1}$. (For simplicity, we always assume that $\mathscr{T}_{i+1}$ is not self-folded; see \S \ref{ssec:markedsurfacesidealtriangulations}.) Note that the induced embedding of $\mathscr{T}_{i+1}$ in $\widehat{S}$ is well-defined up to isotopy in $\widehat{S} - \left\{ \textnormal{dots} \right\}$, since the triangulated square $(\Box, \mathscr{T}_i |_\Box)$ comes with a canonical foliation including the diagonal as a leaf. \textcolor{purple}{Observe that} the sequence $\mathscr{T}=\mathscr{T}_0 \to \mathscr{T}_1 \to \mathscr{T}_2 \to \cdots$ of dotted triangulations so-obtained depends on the chosen sequence of diagonal flips, so is not unique.
\begin{example*}[\textbf{part 3}]
See for instance the passage from the diagrams 0 to 1 or from the diagrams 1 to 2 in Figure \ref{fig:sl3-pentagon-relation}.
\end{example*}
Let $\mathscr{T}=\mathscr{T}_0$ be labeled as well, and let $\mathscr{T}_i$ and $\mathscr{T}_{i+1}$ be as above. \textcolor{purple}{Then} their dottings induce a \textit{flip mutation function}
\begin{equation*}
\mu_{\mathscr{T}_{i+1}, \mathscr{T}_i} : \mathbb{Z}^N \longrightarrow \mathbb{Z}^N
\end{equation*}
defined in the square $\Box$ as in Definition \ref{def:mutationmap} (that is, by the formulas \textcolor{purple}{\eqref{eq:boundarycoords}, \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4}} of Figure \ref{figure:flip}) and outside the square as the identity. (Recall that the domain is associated to $\mathscr{T}_i$ and the codomain to $\mathscr{T}_{i+1}$.) In particular, the formulas are the same irrespective of whether the boundary of the square has any self-gluings. If
\begin{equation*}
\label{eq:flipseq1}
\tag{$\dagger$}
\mathscr{T}=\mathscr{T}_0 \to \mathscr{T}_1 \to \mathscr{T}_2 \to \cdots \to \mathscr{T}_{m-1} \to \mathscr{T}_m=\mathscr{T}^\prime
\end{equation*} is a sequence of flips as above, ending at a dotted triangulation $\mathscr{T}_m=\mathscr{T}^\prime$, define the associated \textit{mutation function (for this sequence of flips)}
\begin{equation*}
\mu_{\mathscr{T}_m, \mathscr{T}_0} : \mathbb{Z}^N \longrightarrow \mathbb{Z}^N
\end{equation*}
as the function composition
\begin{equation*}
\mu_{\mathscr{T}_m, \mathscr{T}_0} := \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}} \circ \cdots \circ \mu_{\mathscr{T}_2, \mathscr{T}_1} \circ \mu_{\mathscr{T}_1, \mathscr{T}_0}.
\end{equation*}
Here, we have used the standard convention that $(g \circ f)(x):=g(f(x))$.
Now, let in addition
\begin{equation*}
\label{eq:flipseq2}
\tag{$\dagger\dagger$}
\mathscr{T} = \mathscr{T}_0 \to \widetilde{\mathscr{T}}_1 \to \widetilde{\mathscr{T}}_2 \to \cdots \to \widetilde{\mathscr{T}}_{\widetilde{m}-1} \to \widetilde{\mathscr{T}}_{\widetilde{m}} = \mathscr{T}^\prime
\end{equation*}
be another sequence of diagonal flips starting at the same base triangulation $\mathscr{T}=\mathscr{T}_0$, and ending at the same topological triangulation $\mathscr{T}^\prime$, but \textcolor{purple}{where} the dottings of $\mathscr{T}_m = \mathscr{T}^\prime$ and $\widetilde{\mathscr{T}}_{\widetilde{m}} = \mathscr{T}^\prime$ are possibly different. Define the associated permutation linear map
\begin{equation*}
\sigma_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_m} : \mathbb{Z}^N \longrightarrow \mathbb{Z}^N
\end{equation*}
as follows: For $i\in\left\{1,2,\dots, N\right\}$, if the $i$-labeled dot is on an edge (resp. face) of $\mathscr{T}_m\textcolor{purple}{=\mathscr{T}^\prime}$, and if the corresponding dot on the corresponding edge (resp. face) of $\widetilde{\mathscr{T}}_{\widetilde{m}}\textcolor{purple}{=\mathscr{T}^\prime}$ is labeled $\sigma(i)$, then $\sigma_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_m}$ maps the $i$-th standard basis vector \textcolor{purple}{$e_i$} of $\mathbb{Z}^N$ to the $\sigma(i)$-th standard basis vector \textcolor{purple}{$e_{\sigma(i)}$} of $\mathbb{Z}^N$.
\begin{example*}[\textbf{part 4}]
We could take $\mathscr{T}_i = \widetilde{\mathscr{T}}_i$ in Figure \ref{fig:sl3-pentagon-relation} with $m=5$ and $\widetilde{m}=35$. Then,
\begin{equation*}
\sigma(1)=3, \quad \sigma(2)=1, \quad \sigma(3)=4, \quad \sigma(4)=6, \quad \sigma(5)=2, \quad \sigma(6)=7, \quad \sigma(7)=5
\end{equation*}
and $\sigma(i)=i$ is the identity on all of the boundary coordinates ($i=8,9,\dots,17$).
\end{example*}
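The permutation linear map of part 4 can be made concrete in a few lines of code. The following is a minimal sketch (the function name \texttt{permute} and the dictionary encoding of $\sigma$ are ours), transcribing the values of $\sigma$ listed above:

```python
# Transcription of the permutation sigma from Example (part 4):
# the map sends the i-th standard basis vector e_i to e_{sigma(i)}.
N = 17
sigma = {1: 3, 2: 1, 3: 4, 4: 6, 5: 2, 6: 7, 7: 5}
# identity on the boundary coordinates i = 8, ..., 17
sigma.update({i: i for i in range(8, N + 1)})

def permute(v):
    """Apply the linear map e_i -> e_{sigma(i)} to a vector v in Z^N.

    Concretely, the sigma(i)-th entry of the output equals the i-th
    entry of the input (1-indexed, as in the text)."""
    w = [0] * N
    for i in range(1, N + 1):
        w[sigma[i] - 1] = v[i - 1]
    return w

# sigma is a bijection on {1, ..., N}, hence a permutation map on Z^N
assert sorted(sigma.values()) == list(range(1, N + 1))
```

For instance, \texttt{permute} sends $e_1 \mapsto e_3$ and $e_2 \mapsto e_1$, matching $\sigma(1)=3$ and $\sigma(2)=1$ above.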
\begin{theorem}[{\cite{FominJAmerMathSoc02, FockIHES06}}]
\label{prop:mutation-naturality}
Let $\mathscr{T}=\mathscr{T}_0$ be a labeled base dotted triangulation. Given two flip sequences \eqref{eq:flipseq1} and \eqref{eq:flipseq2} of dotted triangulations ending at the same topological triangulation $\mathscr{T}^\prime$, the following diagram commutes:
\begin{equation*}
\begin{tikzcd}
\mathbb{Z}^N \arrow{r}{
\mu_{\mathscr{T}_m, \mathscr{T}_0}
} \arrow[swap]{dr}{
\mu_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_0}
} & \mathbb{Z}^N \arrow{d}{
\sigma_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_m}
} \\
& \mathbb{Z}^N
\end{tikzcd}
\end{equation*}
\end{theorem}
An \textcolor{purple}{immediate} consequence of this theorem is:
\begin{observation}
\label{obs:interesting-consequence}
Using the notation of Theorem {\upshape\ref{prop:mutation-naturality}}, \textcolor{purple}{\eqref{eq:flipseq1} and \eqref{eq:flipseq2}}: For $\mathscr{T}^\prime=\mathscr{T}$ and $m=0$, we get $\mu_{\mathscr{T}_m, \mathscr{T}_0}=\mu_{\mathscr{T}_0, \mathscr{T}_0}=\mathrm{identity}$, hence
\begin{equation*}
\mu_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_0}=\sigma_{\widetilde{\mathscr{T}}_{\widetilde{m}}, \mathscr{T}_0}
\quad : \mathbb{Z}^N \longrightarrow \mathbb{Z}^N
\end{equation*}
is a permutation map.
\end{observation}
\begin{example*}[\textbf{part 5}]
To get a feel for the content of Theorem \ref{prop:mutation-naturality}, let us consider (similar to part 4 of this example) $\mathscr{T}_i = \widetilde{\mathscr{T}}_i$ in Figure \ref{fig:sl3-pentagon-relation} with $m=0$ and $\widetilde{m}=5$, where $\sigma_{\mathscr{T}_5, \mathscr{T}_0}$ is defined by
\begin{equation*}
\sigma(1)=2, \quad \sigma(2)=5, \quad \sigma(3)=1, \quad \sigma(4)=3, \quad \sigma(5)=7, \quad \sigma(6)=4, \quad \sigma(7)=6
\end{equation*}
and $\sigma(i)=i$ for $i=8,9,\dots,17$.
Then Observation \ref{obs:interesting-consequence} says $\sigma_{\mathscr{T}_5, \mathscr{T}_0}^{-1} \circ \mu_{\mathscr{T}_5, \mathscr{T}_0}$ is the identity map $\mathbb{Z}^{17} \to \mathbb{Z}^{17}$. This is the so-called \textit{Pentagon Relation} for $\mathrm{SL}_3$. By construction this is obvious for the 8-th through the 17-th \textcolor{purple}{coordinates}, because these are the ``frozen'' coordinates on the boundary of the pentagon. So \textcolor{purple}{the heart of the} statement is \textcolor{purple}{the validity of} the seven identities:
\begin{equation*}
f_i(x_1, x_2, \dots, x_{17})=x_i
\quad\quad \left( i=1, 2, \dots, 7; \quad x_j \in \mathbb{Z}; \quad j=1,2,\dots,17 \right)
\end{equation*}
where $f_i : \mathbb{Z}^{17} \to \mathbb{Z}$ is defined as the $i$-th component of $\sigma_{\mathscr{T}_5, \mathscr{T}_0}^{-1} \circ \mu_{\mathscr{T}_5, \mathscr{T}_0}$. In particular, $f_i$ is a complicated piecewise-linear function built out of the operations $+$, $-$, and $\mathrm{max}$.
\end{example*}
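The permutation $\sigma_{\mathscr{T}_5, \mathscr{T}_0}$ of part 5 can also be inspected computationally. A minimal sketch (the helper \texttt{cycles} is ours, and the values of $\sigma$ are transcribed from the display above) computes its cycle decomposition:

```python
def cycles(sigma):
    """Cycle decomposition of a permutation given as a dict i -> sigma(i).

    Returns the nontrivial cycles, each starting at its smallest element."""
    seen, result = set(), []
    for start in sorted(sigma):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = sigma[i]
        if len(cycle) > 1:
            result.append(tuple(cycle))
    return result

# sigma from Example (part 5); the identity on i = 8, ..., 17 is omitted
sigma = {1: 2, 2: 5, 3: 1, 4: 3, 5: 7, 6: 4, 7: 6}
assert cycles(sigma) == [(1, 2, 5, 7, 6, 4, 3)]
```

So the seven non-frozen coordinates form a single 7-cycle, and in particular $\sigma$ has order 7.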
\begin{remark}
\label{rem:seven-identities}
\textcolor{purple}{Of the seven identities $f_i(x_1, x_2, \dots, x_{17})=x_i$ discussed in Example (part 5), the most nontrivial is the case $i=5$. } {\color{red}Appendix \ref{app:first-appendix} at the end of this article contains a Mathematica notebook which provides the explicit expression for $f_5(x_1, x_2, \dots, x_{17})$. }
\end{remark}
To finish this digression, a well-known fact \textcolor{purple}{\cite{penner1987decorated}} says that any two triangulations $\mathscr{T}$ and $\mathscr{T}^\prime$ are related by a sequence of diagonal flips \eqref{eq:flipseq1}. By Theorem \ref{prop:mutation-naturality}, we thus immediately obtain:
\begin{cor}
\label{cor:cluster-transformation}
Let $\mathscr{T}=\mathscr{T}_0$ be a labeled base dotted triangulation. For any topological triangulation $\mathscr{T}^\prime$, there is a function
\begin{equation*}
\mu_{\mathscr{T}^\prime, \textcolor{purple}{\mathscr{T}_0}} : \mathbb{Z}^N \longrightarrow \mathbb{Z}^N,
\end{equation*}
defined only up to permutation of the coordinates of the codomain $\mathbb{Z}^N$, satisfying the property that if $\mathscr{T}_m=\mathscr{T}^\prime$ is related to $\mathscr{T}\textcolor{purple}{=\mathscr{T}_0}$ by a sequence of diagonal flips \eqref{eq:flipseq1}, then
\begin{equation*}
\mu_{\mathscr{T}^\prime, \textcolor{purple}{\mathscr{T}_0}} = \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}} \circ \cdots \circ \mu_{\mathscr{T}_2, \mathscr{T}_1} \circ \mu_{\mathscr{T}_1, \mathscr{T}_0}
\end{equation*}
where the $\mu_{\mathscr{T}_{i+1}, \mathscr{T}_i} : \mathbb{Z}^N \to \mathbb{Z}^N$ are the corresponding (well-defined) flip mutation functions. \qed
\end{cor}
\begin{defn}
\label{def:cluster-transformation-for-surface}
The \textcolor{purple}{(pseudo-)}function $\mu_{\mathscr{T}^\prime, \textcolor{purple}{\mathscr{T}_0}} : \mathbb{Z}^N \to \mathbb{Z}^N$ from Corollary \ref{cor:cluster-transformation} is called the \textit{tropical $\mathcal{A}$-coordinate cluster transformation} \textcolor{purple}{for} the marked surface $\widehat{S}$ associated to the \textcolor{purple}{labeled base dotted triangulation $\mathscr{T}=\mathscr{T}_0$ and the topological triangulation $\mathscr{T}^\prime$. Below, we will drop the subscript and just write $\mu_{\mathscr{T}^\prime, \mathscr{T}}$ for this function.}
\end{defn}
\begin{remark}
\label{rem:FG-remark-Q}
$ $
\begin{enumerate}
\item \label{rem1:FG-remark-Q} Throughout this sub-subsection, there has been nothing special about the integers $\mathbb{Z}$ compared to, say, the rational numbers $\mathbb{Q}$. In particular, Theorem \ref{prop:mutation-naturality} makes sense and is true with $\mathbb{Z}$ replaced by $\mathbb{Q}$.
\item The Pentagon Relation for $\mathrm{SL}_3$ (Figure \ref{fig:sl3-pentagon-relation}), equivalent to the seven identities $f_i(x_1, x_2, \dots, x_{17})=x_i$ for $i=1,2,\dots,7$ discussed in Example (part 5) above, is the main relation required to establish Theorem \ref{prop:mutation-naturality}.
\end{enumerate}
\end{remark}
\subsubsection{Naturality of the web tropical coordinate map}
\label{sssec:naturality-of-the-web-tropical-coordinate-map}
$ $
We are now ready to generalize Theorem \ref{thm:second-main-theorem} to any marked surface $\widehat{S}$.
Let $\mathscr{T}=\mathscr{T}_0$ be a labeled base dotted triangulation, and let $\mathscr{T}^\prime$ be any topological triangulation; see \S \ref{sec:base-triangulations}. Associated to this topological data is the tropical $\mathcal{A}$-coordinate cluster transformation $\mu_{\mathscr{T}^\prime, \mathscr{T}} : \mathbb{Z}^N \to \mathbb{Z}^N$, which is only defined up to permutation of the coordinates of the codomain $\mathbb{Z}^N$; see Definition \ref{def:cluster-transformation-for-surface}.
Lastly, recall from \S \ref{sec:base-triangulations} that the labels for the dots of $\mathscr{T}=\mathscr{T}_0$ determine an inclusion $\mathscr{C}_{\mathscr{T}} \subset \mathbb{Z}_+^N$ of its KTGS cone. Since $\mathscr{T}^\prime$ is not assumed to be dotted, \textcolor{purple}{the inclusion} $\mathscr{C}_{\mathscr{T}^\prime} \subset \mathbb{Z}_+^N$ \textcolor{purple}{of} its KTGS cone is only defined up to permutation of the coordinates of $\mathbb{Z}_+^N$.
\begin{theorem}
\label{cor:second-main-theorem}
Let $\mathscr{T}$ and $\mathscr{T}^\prime$ be triangulations, and let $\mu_{\mathscr{T}^\prime, \mathscr{T}} : \mathbb{Z}^N \to \mathbb{Z}^N$ be the corresponding tropical $\mathcal{A}$-coordinate cluster transformation, as just explained. For the associated web tropical coordinate maps $\Phi_\mathscr{T} : \mathscr{W}_{\widehat{S}} \to \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{N}$ and $\Phi_{\mathscr{T}^\prime} : \mathscr{W}_{\widehat{S}} \to \mathscr{C}_{\mathscr{T}^\prime} \subset \mathbb{Z}_+^{N}$, we have
\begin{equation*}
\mu_{\mathscr{T}^\prime, \mathscr{T}}(c) =\Phi_{\mathscr{T}^\prime} \circ \Phi_\mathscr{T}^{-1}(c) \quad \in \mathscr{C}_{\mathscr{T}^\prime} \quad\quad \left( c \in \mathscr{C}_\mathscr{T} \right),
\end{equation*}
where this equality is only defined up to permutation of the coordinates of $\mathscr{C}_{\mathscr{T}^\prime} \subset \mathbb{Z}_+^N$.
\end{theorem}
\begin{proof}
This is essentially an immediate consequence of Theorem \ref{thm:second-main-theorem}.
Indeed, consider a flip sequence \eqref{eq:flipseq1} of dotted triangulations $\mathscr{T}_i$, namely such that $\mathscr{T}_{i+1}$ is related to $\mathscr{T}_i$ by a single diagonal flip. Recall (\S \ref{sec:base-triangulations}) that the dotting of $\mathscr{T}_i$ induces an inclusion $\mathscr{C}_{\mathscr{T}_i} \subset \mathbb{Z}_+^N$ of its KTGS cone, and recall (\S \ref{def:tropical-A-coordinate-cluster-transformation}) that $\mu_{\mathscr{T}_{i+1}, \mathscr{T}_i} : \mathbb{Z}^N \to \mathbb{Z}^N$ is the corresponding (well-defined) flip mutation function.
By Theorem \ref{thm:second-main-theorem}, for each $i=0,1,\dots,m-1$ we have
\begin{equation*}
\label{eq:cor-of-diagonal-result}
\tag{$\$$}
\mu_{\mathscr{T}_{i+1}, \mathscr{T}_i}(c_i) =\Phi_{\mathscr{T}_{i+1}} \circ \Phi_{\mathscr{T}_i}^{-1}(c_i) \quad \in \mathscr{C}_{\mathscr{T}_{i+1}}
\quad\quad \left( c_i \in \mathscr{C}_{\mathscr{T}_i} \right).
\end{equation*}
By iterating \eqref{eq:cor-of-diagonal-result}, for any $c_0=c \in \mathscr{C}_\mathscr{T}=\mathscr{C}_{\mathscr{T}_0}$ we obtain
\begin{align*}
\mu_{\mathscr{T}_m, \mathscr{T}_0}(c)
&= \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}} \circ \cdots
\circ \mu_{\mathscr{T}_3, \mathscr{T}_2}
\circ \mu_{\mathscr{T}_2, \mathscr{T}_1} \circ \mu_{\mathscr{T}_1, \mathscr{T}_0}(c_0)
\\&= \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}} \circ \cdots
\circ \mu_{\mathscr{T}_3, \mathscr{T}_2}
\circ \mu_{\mathscr{T}_2, \mathscr{T}_1}\left( \Phi_{\mathscr{T}_{1}} \circ \Phi_{\mathscr{T}_0}^{-1}(c_0) \right)
\\&= \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}} \circ \cdots
\circ \mu_{\mathscr{T}_3, \mathscr{T}_2}
\left( \Phi_{\mathscr{T}_{2}} \circ \Phi_{\mathscr{T}_1}^{-1}
\left( \Phi_{\mathscr{T}_{1}} \circ \Phi_{\mathscr{T}_0}^{-1}(c_0) \right)
\right)
\\&= \mu_{\mathscr{T}_m, \mathscr{T}_{m-1}}
\left( \Phi_{\mathscr{T}_{m-1}} \circ \Phi_{\mathscr{T}_0}^{-1}(c_0) \right)
\\&= \Phi_{\mathscr{T}_m} \circ \Phi_{\mathscr{T}_0}^{-1}(c)
\quad \in \mathscr{C}_{\mathscr{T}_m}.
\end{align*}
The result follows from the defining property of the function $\mu_{\mathscr{T}^\prime, \mathscr{T}}$ (Corollary~\ref{cor:cluster-transformation} \textcolor{purple}{and Definition~\ref{def:cluster-transformation-for-surface}}).
\end{proof}
\begin{conceptremark}
\label{rem:equivariant}
Another way to express Theorem \ref{cor:second-main-theorem}, common in the literature, is to say that the web tropical coordinates, \textcolor{purple}{determined by the maps} $\left\{ \Phi_\mathscr{T} \right\}_\mathscr{T}$, are equivariant with respect to the extended mapping class group of the marked surface $\widehat{S}$. Equivalently, they form natural coordinates for the positive tropical integer $\mathrm{PGL}_3$-points $\mathcal{A}^+_{\mathrm{PGL}_3, \widehat{S}}(\mathbb{Z}^t)$, where a point in $\mathcal{A}^+_{\mathrm{PGL}_3, \widehat{S}}(\mathbb{Z}^t)$ is thought of concretely as a reduced web $W$ in~$\mathscr{W}_{\widehat{S}}$.
\end{conceptremark}
\begin{application} \label{app:first-application} Generalizing Fock-Goncharov's (bounded) $\mathrm{SL}_2$-laminations \cite[\S 12]{FockIHES06}, Kim \cite{KimArxiv20} considers the space $\widetilde{\mathscr{W}}_{\widehat{S}}$ of \textit{(bounded) $\mathrm{SL}_3$-laminations} (he denotes this space by $\mathcal{A}_L(S, \mathbb{Z})$), which extends the space $\mathscr{W}_{\widehat{S}}$ of reduced webs by allowing for negative integer weights around the peripheral loops and arcs. He also extends the web tropical coordinate map $\Phi_\mathscr{T} : \mathscr{W}_{\widehat{S}} \to \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ of Theorem \ref{thm:main-theorem} to an injective map $\widetilde{\Phi}_\mathscr{T} : \widetilde{\mathscr{W}}_{\widehat{S}} \to \mathbb{Z}^N$, and characterizes the image as an integer lattice defined by certain \textit{balancedness} conditions; it turns out that these conditions are equivalent to the modulo 3 congruence conditions of Definition \ref{def:KTGS-cone}. That is, whereas the reduced webs $\mathscr{W}_{\widehat{S}}$ correspond to solutions of both the modulo 3 congruence conditions and the Knutson-Tao inequalities, the $\mathrm{SL}_3$-laminations $\widetilde{\mathscr{W}}_{\widehat{S}} \supset \mathscr{W}_{\widehat{S}}$ correspond to solutions of \textcolor{purple}{only} the modulo 3 congruence conditions.
By \cite[Proposition 3.35]{KimArxiv20}, which generalizes Theorem \ref{cor:second-main-theorem}, the lamination tropical coordinates $\{ \widetilde{\Phi}_\mathscr{T} \}_\mathscr{T}$ are also natural, thereby \textcolor{purple}{constituting} an explicit model for the tropical integer $\mathrm{PGL}_3$-points $\mathcal{A}_{\mathrm{PGL}_3, \widehat{S}}(\mathbb{Z}^t)$; compare Remark \ref{rem:equivariant} and see also Remark \ref{rem:realsolutionsareinteger}\textcolor{purple}{\eqref{rem2:realsolutionsareinteger}}.
Kim's proof of \cite[Proposition 3.35]{KimArxiv20} uses Theorem \ref{cor:second-main-theorem}. One way to think about upgrading the naturality statement from webs to laminations is in terms of
the proof strategy of Theorem \ref{thm:second-main-theorem}; see \S \ref{ssec:proofofnaturality}, in particular Remark \ref{rem:proof-remark}. Indeed, since Lemma \ref{lemma:add} works as well for corner arcs with integer coefficients, the proof of Theorem \ref{thm:second-main-theorem} works more generally for the laminations~$\widetilde{\mathscr{W}}_{\widehat{S}}$.
\end{application}
\begin{application}
The same strategy used in the proof of Theorem \ref{cor:second-main-theorem} provides an alternative, geometric topological proof of Theorem \ref{prop:mutation-naturality} (see also Remark \ref{rem:FG-remark-Q}\textcolor{purple}{\eqref{rem1:FG-remark-Q}}), but only valid on the restricted \textcolor{purple}{cone} domain $\mathscr{C}_{\mathscr{T}_0} \subset \mathbb{Z}_+^N \subset \mathbb{Q}^N$ (or, by applying Kim's result for $\textcolor{purple}{\mathrm{SL}_3}$-laminations $\widetilde{\mathscr{W}}_{\widehat{S}}$ from Application \textcolor{purple}{\ref{app:first-application}}, on the restricted lattice domain $\widetilde{\Phi}_{\mathscr{T}}(\widetilde{W}_{\widehat{S}}) \subset \mathbb{Z}^N \subset \mathbb{Q}^N$).
\end{application}
\section{Web families and flip examples in the square}
\label{ssec:examplessquare}
In \S \ref{section:fe}, we proved the naturality of the web tropical coordinates without having to see what the new good position of a web in the square looks like after flipping the diagonal, which is topologically nontrivial. In this section, we give some examples of seeing the good position after the flip. This gives us another way to check the \textcolor{purple}{formulas} of Theorem \ref{thm:second-main-theorem}; see also Remark \ref{rem:regarding-flip-proof} at the beginning of \S \ref{section:fe}. These topological developments \textcolor{purple}{(in particular, Proposition \ref{lem:42families})} will also be applied in \textcolor{purple}{\S \ref{section:ssq}} in order to study the structure of the Knutson-Tao-Goncharov-Shen cone of the triangulated square.
\subsection{Web families}
\label{ssec:42-reduced-web-families-in-the-square}
$ $
Recall the notion of a web schematic; see \S \ref{sssec:splitidealtriangulationsandgoodposition} and Remark \ref{rem:cornerarcschematics}. Recall also Definitions \ref{def:corner-arcs} and \ref{def:cornerless-webs-in-the-square}, \textcolor{purple}{for the notions of corner webs $W=W_r \in \mathscr{R}$ and cornerless webs $W=W_c$.}
\begin{prop}
\label{lem:42families}
We can write the reduced webs in the square as a union
\begin{equation*}
\mathscr{W}_\Box = \bigcup_{i=1}^{42} \mathscr{W}_i
\end{equation*}
of $42$ families $\mathscr{W}_i \subset \mathscr{W}_\Box$ of reduced webs, where by definition $W \in \mathscr{W}_i$ if its cornerless part $W_c$ can be represented by the \textnormal{$i$-th cornerless schematic}, $9$ of which are shown in Figure {\upshape\ref{figure:9cases}} below; in fact, up to rotation, reflection, and orientation-reversing symmetry \textcolor{purple}{(see the caption of Figure {\upshape\ref{figure:9cases}})}, every family $\mathscr{W}_i$ falls into one of these $9$ cases.
\end{prop}
\begin{figure}[htb]
\includegraphics[scale=.6]{9cases-alt}
\caption{\textbf{Families (1)-(9).} Schematics for cornerless webs $\textcolor{purple}{W=}W_c$. There are $9$ reduced web families up to rotation, reflection, and orientation-reversing symmetry. (Note that orientation-reversing symmetry means simultaneously reversing the orientations of all components of the web.)}
\label{figure:9cases}
\end{figure}
\begin{proof}
This is a direct combinatorial count, done by hand. We note that the number of possibilities is restricted by the topology of web good positions; see Observation \ref{lemma:bigon}.
\end{proof}
\begin{notation}
\label{not:web-families}
Completely arbitrarily, the index $i_j$ for the family $\mathscr{W}_{i_j}$ whose cornerless schematic is labeled $(j)$ in Figure \ref{figure:9cases} $(j=1,2,\dots,9)$ is
\begin{equation*}
i_1 = 29, \quad i_2=30, \quad i_3 =42, \quad i_4=17, \quad i_5=5, \quad i_6=6, \quad i_7=2, \quad i_8=1, \quad i_9=33.
\end{equation*}
See also Remark \ref{rem:symmetry-groupings} just below.
As we will see in \textcolor{purple}{\S \ref{section:ssq}}, the family $\mathscr{W}_i$ corresponds to the $i$-th sector shown in Figure \ref{figure:wallscross}.
\end{notation}
\begin{remark}
\label{rem:symmetry-groupings}
If we define an equivalence relation on the 42 families $\mathscr{W}_i$ by rotation, reflection, and orientation-reversing symmetry, then (using Notation \ref{not:web-families} just above):
\begin{enumerate}
\item The symmetry class of $\mathscr{W}_{i_1}$ has four members: $\mathscr{W}_{i_1}=\mathscr{W}_{29}$; $\mathscr{W}_{21},\mathscr{W}_{24},\mathscr{W}_{32}$.
\item The symmetry class of $\mathscr{W}_{i_2}$ has four members: $\mathscr{W}_{i_2}=\mathscr{W}_{30}$; $\mathscr{W}_{23},\mathscr{W}_{22},\mathscr{W}_{31}$.
\item The symmetry class of $\mathscr{W}_{i_3}$ has eight members: $\mathscr{W}_{i_3}=\mathscr{W}_{42}$; $\mathscr{W}_{36},\mathscr{W}_{37},\mathscr{W}_{38},\mathscr{W}_{39},\mathscr{W}_{40},\mathscr{W}_{41},\mathscr{W}_{35}$.
\item The symmetry class of $\mathscr{W}_{i_4}$ has eight members: $\mathscr{W}_{i_4}=\mathscr{W}_{17}$; $\mathscr{W}_{18},\mathscr{W}_{19},\mathscr{W}_{20},\mathscr{W}_{25},\mathscr{W}_{26},\mathscr{W}_{27},\mathscr{W}_{28}$.
\item The symmetry class of $\mathscr{W}_{i_5}$ has four members: $\mathscr{W}_{i_5}=\mathscr{W}_{5}$; $\mathscr{W}_{8},\mathscr{W}_{13},\mathscr{W}_{16}$.
\item The symmetry class of $\mathscr{W}_{i_6}$ has four members: $\mathscr{W}_{i_6}=\mathscr{W}_{6}$; $\mathscr{W}_{7},\mathscr{W}_{14},\mathscr{W}_{15}$.
\item The symmetry class of $\mathscr{W}_{i_7}$ has four members: $\mathscr{W}_{i_7}=\mathscr{W}_{2}$; $\mathscr{W}_{3},\mathscr{W}_{10},\mathscr{W}_{11}$.
\item The symmetry class of $\mathscr{W}_{i_8}$ has four members: $\mathscr{W}_{i_8}=\mathscr{W}_{1}$; $\mathscr{W}_{4},\mathscr{W}_{9},\mathscr{W}_{12}$.
\item The symmetry class of $\mathscr{W}_{i_9}$ has two members: $\mathscr{W}_{i_9}=\mathscr{W}_{33}$; $\mathscr{W}_{34}$.
\end{enumerate}
\end{remark}
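As a quick bookkeeping check, the nine symmetry classes listed in Remark \ref{rem:symmetry-groupings} do partition the index set $\{1, 2, \dots, 42\}$. The following minimal sketch transcribes the class memberships from the remark:

```python
# Class memberships transcribed from Remark (symmetry groupings),
# one set per symmetry class (1)-(9):
classes = [
    {29, 21, 24, 32},                  # class of W_{i_1}
    {30, 23, 22, 31},                  # class of W_{i_2}
    {42, 36, 37, 38, 39, 40, 41, 35},  # class of W_{i_3}
    {17, 18, 19, 20, 25, 26, 27, 28},  # class of W_{i_4}
    {5, 8, 13, 16},                    # class of W_{i_5}
    {6, 7, 14, 15},                    # class of W_{i_6}
    {2, 3, 10, 11},                    # class of W_{i_7}
    {1, 4, 9, 12},                     # class of W_{i_8}
    {33, 34},                          # class of W_{i_9}
]
# The classes are pairwise disjoint and exhaust the indices 1, ..., 42:
union = set().union(*classes)
assert sum(len(c) for c in classes) == 42 == len(union)
assert union == set(range(1, 43))
```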
We emphasize that each schematic in Figure \ref{figure:9cases} represents a subset $\mathscr{W}_i \subset \mathscr{W}_\Box$ of reduced webs in the square. \textcolor{purple}{Note that} these subsets are not disjoint. \textcolor{purple}{Indeed}, each intersection $\mathscr{W}_i \cap \mathscr{W}_j$ is at least ``8-dimensional'', in an appropriate sense (see \S \ref{section:ssq}), since the set of corner webs $\mathscr{R}$ is contained in each family $\mathscr{W}_i$. This intersection can contain more than just the corner webs. For instance, the intersection $\mathscr{W}_{29} \cap \mathscr{W}_{30}$, corresponding to schematics (1) and (2) in Figure \ref{figure:9cases}, is ``11-dimensional'' (\textcolor{purple}{thus, in Figure \ref{figure:wallscross}, sectors 29 and 30} are separated by a wall); the last, twelfth, dimension comes from the source or sink labeled with the weight $x \in \mathbb{Z}_+$ \textcolor{purple}{in schematics (1) and (2)}. As another example, $\mathscr{W}_{29} \cap \mathscr{W}_{33}$, \textcolor{purple}{corresponding to schematics (1) and (9) in Figure \ref{figure:9cases}}, is ``10-dimensional'' (\textcolor{purple}{thus, in Figure \ref{figure:wallscross}, sectors 29 and 33 are not separated by a wall).} In fact, each family $\mathscr{W}_i$ is ``12-dimensional'' (\textcolor{purple}{intuitively, this is} because the square has 12 Fock-Goncharov coordinates): 8 dimensions come from the corner part $W_r$, and 4 dimensions come from the cornerless part $W_c$. Correspondingly, each cornerless schematic in Figure \ref{figure:9cases} has four weights $x,y,z,t \in \mathbb{Z}_+$.
We remind (Remark \ref{rem:cornerarcschematics}) that, in schematics (1) and (2) in Figure \ref{figure:9cases}, we could have reversed the orientations of the two arc components, without changing the class of the web in $\mathscr{W}_\Box$. On the other hand, the orientation of the weight $x$ component distinguishes schematic (1) from (2); \textcolor{purple}{note} the caption of Figure \ref{figure:9cases}. Also by Remark \ref{rem:cornerarcschematics}, the $t$ and $z$ strands in schematic (3), \textcolor{purple}{for example}, do not cross in the upper triangle, \textcolor{purple}{for otherwise} the web would have an external H-4-face \textcolor{purple}{(\S \ref{sssec:splitidealtriangulationsandgoodposition})} on the boundary.
\subsection{Flip examples}
\label{ssec:flip-examples}
$ $
The 9 symmetry classes of web families (see Figure \ref{figure:9cases} and Remark \ref{rem:symmetry-groupings}) fall into roughly three types. Let us study the flip a bit more intensively for one example of each type.
For the remainder of this section, let $W\textcolor{purple}{=W_c} \in \mathscr{W}_\Box$ be a cornerless web in the square and belonging to the family $\mathscr{W}_{i_j} \subset \mathscr{W}_\Box$, where the value of $i_j$ depends on which of the 9 cases we are considering ($j=1,2,\dots,9$); see Notation \ref{not:web-families}.
Recall that $\mathscr{T}$ (resp. $\mathscr{T}^\prime$) is the triangulation shown in the left hand side (resp. right hand side) of Figure \ref{figure:flip}. Consider the web tropical coordinate maps $\Phi_\mathscr{T} : \mathscr{W}_\Box \to \mathscr{C}_\mathscr{T}$ and $\Phi_{\mathscr{T}^\prime} : \mathscr{W}_\Box \to \mathscr{C}_{\mathscr{T}^\prime}$ (see \S \ref{section:wa}). Denote $c = \Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T} \textcolor{purple}{\subset \mathbb{Z}_+^{12}}$ by
\begin{equation*}
c=(c_j)_{j=1,2,\dots,12} = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, y_1, y_2, y_3, y_4),
\end{equation*}
and $c^\prime = \Phi_{\mathscr{T}^\prime} \circ \Phi_\mathscr{T}^{-1}(c) \in \mathscr{C}_{\mathscr{T}^\prime} \textcolor{purple}{\subset \mathbb{Z}_+^{12}}$ by
\begin{equation*}
c^\prime=(c^\prime_j)_{j=1,2,\dots,12}=(x^\prime_1, x^\prime_2, x^\prime_3, x^\prime_4, x^\prime_5, x^\prime_6, x^\prime_7, x^\prime_8, z_1, z_2, z_3, z_4);
\end{equation*}
compare Definition \ref{def:mutationmap} and Figure \ref{figure:flip}. \textcolor{purple}{We know right away that $x_i=x_i^\prime$ for $i=1,2,\dots,8$.}
Our goal is to \textcolor{purple}{check} that Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} are satisfied, by presenting the explicit good position of the family $\mathscr{W}_{i_j}$ after the flip, allowing us to \textcolor{purple}{calculate the coordinates directly}. We do this in the three example cases $j=1, 3, 7$.
Recall in particular $x,y,z,t \in \mathbb{Z}_+$ in Figure \ref{figure:9cases}.
\textbf{Family (1).} The simplest cases are given by schematics (1) and (2) of Figure \ref{figure:9cases}. We verify case (1) here. The other case is similar. We compute the $c_j$'s and $c_j^\prime$'s via Figure \ref{figure:triangle} below; \textcolor{purple}{see also \S \ref{sssec:definitionofthetropicalwebcoordinates}}.
\begin{figure}[htb]
\includegraphics[scale=0.8]{flip1-alt}
\caption{Family (1).}
\label{figure:flip1}
\end{figure}
Notice that in this case it is obvious that the web on the right hand side of Figure \ref{figure:flip1} is in good position with respect to the flipped triangulation.
Left hand side of Figure \ref{figure:flip1}, coordinates $c_j \quad (j=1,2,\dots,12)$:
\begin{align*}
(1)& &2x + y + 2z + 2t&&
(2)& &x+2y+z+t
\\(3)& &2x&&
(4)& &x
\\(5)& &2x+2y+z+2t&&
(6)& &x+y+2z+t
\\(7)& &2t&&
(8)& &t
\\(9)& &2x+y+2z+3t&&
(10)& &x+2y+z+2t
\\(11)& &3x+2y+z+2t&&
(12)& &2x+y+2z+t.
\end{align*}
Right hand side of Figure \ref{figure:flip1}, coordinates $c^\prime_j \quad (j=9,\dots,12)$:
\begin{align*}
(9^\prime)& &x+y+2z+2t&&
(10^\prime)& &3x+2y+z+t
\\(11^\prime)& &2x+2y+z+t&&
(12^\prime)& &x+y+2z+3t.
\end{align*}
The following computations verify Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} in this case.
\begin{enumerate}[label=Eq. (\arabic*):]\setcounter{enumi}{1}
\item $\max\{(x+2y+z+t)+(3x+2y+z+2t),(2x+y+2z+3t)+2x\}-(x+2y+z+2t)=\max\{4x+4y+2z+3t, 4x+y+2z+3t\}-(x+2y+z+2t)=3x+2y+z+t$.
\item $\max\{(2x+y+2z+3t)+(x+y+2z+t),2t+(3x+2y+z+2t)\}-(2x+y+2z+t)
=\max\{3x+2y+4z+4t, 3x+2y+z+4t\}-(2x+y+2z+t)
=x+y+2z+3t$.
\item $\max\{(2x+y+2z+2t)+(x+y+2z+3t),t+(3x+2y+z+t)\}-(2x+y+2z+3t)
=\max\{3x+2y+4z+5t, 3x+2y+z+2t\}-(2x+y+2z+3t)
=x+y+2z+2t$.
\item $\max\{(3x+2y+z+t)+(2x+2y+z+2t),(x+y+2z+3t)+x\}-(3x+2y+z+2t)
=\max\{5x+4y+2z+3t, 2x+y+2z+3t\}-(3x+2y+z+2t)
=2x+2y+z+t$.
\end{enumerate}
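In the spirit of the Mathematica verification mentioned in Remark \ref{rem:seven-identities}, the four computations above can also be spot-checked numerically. The following minimal sketch (the function name \texttt{check\_family\_1} is ours) transcribes the coordinate expressions listed for both sides of Figure \ref{figure:flip1} and tests the four max-plus identities on a grid of nonnegative weights:

```python
from itertools import product

def check_family_1(x, y, z, t):
    """Check the four max-plus identities for Family (1)."""
    # Unprimed coordinates c_j (left hand side of the flip) and primed
    # coordinates cp_j (right hand side), transcribed from the lists above.
    c = {1: 2*x + y + 2*z + 2*t,  2: x + 2*y + z + t,
         3: 2*x,                  4: x,
         5: 2*x + 2*y + z + 2*t,  6: x + y + 2*z + t,
         7: 2*t,                  8: t,
         9: 2*x + y + 2*z + 3*t,  10: x + 2*y + z + 2*t,
         11: 3*x + 2*y + z + 2*t, 12: 2*x + y + 2*z + t}
    cp = {9: x + y + 2*z + 2*t,   10: 3*x + 2*y + z + t,
          11: 2*x + 2*y + z + t,  12: x + y + 2*z + 3*t}
    # The four identities computed in the enumeration above:
    assert max(c[2] + c[11], c[9] + c[3]) - c[10] == cp[10]
    assert max(c[9] + c[6], c[7] + c[11]) - c[12] == cp[12]
    assert max(c[1] + cp[12], c[8] + cp[10]) - c[9] == cp[9]
    assert max(cp[10] + c[5], cp[12] + c[4]) - c[11] == cp[11]
    return True

# spot-check on a small grid of nonnegative integer weights
assert all(check_family_1(x, y, z, t)
           for x, y, z, t in product(range(4), repeat=4))
```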
\textbf{Family (3).} The next simplest cases are given by schematics (3), (4), (5), (6), and (9) of Figure \ref{figure:9cases}. We verify case (3) here. The other four cases are similar. We compute the $c_j$'s and $c_j^\prime$'s via Figure \ref{figure:triangle} below; \textcolor{purple}{see also \S \ref{sssec:definitionofthetropicalwebcoordinates}}.
\begin{figure}[htb]
\includegraphics[scale=0.8]{flip3-alt}
\caption{Family (3).}
\label{figure:flip3}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.7]{flip3l}
\caption{(1) Bigon schematic after the flip. (2) An example of the web in good position after the flip: $x=y=z=t=1$.}
\label{figure:flip3l}
\end{figure}
Unlike in the previous example, it is less obvious that the schematic appearing on the right hand side of Figure \ref{figure:flip3} faithfully displays how the good position looks after the flip. That this is indeed the case is a bit subtle topologically; however, it can be verified by an explicit procedure that draws the desired flipped bigon on top of the starting web, as represented by the left hand side of Figure \ref{figure:flip3}. We demonstrate this bigon drawing procedure in Figure \ref{fig:case3flip} in Appendix \ref{app:second-appendix} at the end of this article.
The schematic diagram of the web in good position restricted to the flipped bigon \textcolor{purple}{in} the right hand side of Figure \ref{figure:flip3} is shown in Figure \ref{figure:flip3l}(1). It is an enjoyable exercise to check that this \textcolor{purple}{bigon} schematic agrees with the web \textcolor{purple}{example} schematically shown in Figure \ref{fig:case3flip}.
Another guiding example, showing the web in good position after the flip (without using schematics), is provided in Figure \ref{figure:flip3l}(2).
Left hand side of Figure \ref{figure:flip3}, coordinates $c_j \quad (j=1,2,\dots,12)$:
\begin{align*}
(1)& &x + 2y&&
(2)& &2x + y
\\(3)& &x + y + z + t&&
(4)& &2x+2y+2z+2t
\\(5)& &x+y+z&&
(6)& &2x+2y+2z
\\(7)& &2y+z+2t&&
(8)& &y+2z+t
\\(9)& &x + 3y + 2z + t&&
(10)& &2x+2y+2z+t
\\(11)& &3x + 3y+ 3z+2t&&
(12)& &x+y+z+2t.
\end{align*}
Right hand side of Figure \ref{figure:flip3}, coordinates $c^\prime_j \quad (j=9,\dots,12)$:
\begin{align*}
(9^\prime)& &2x + 3y +z + t&&
(10^\prime)& &3x+2y+z+t
\\(11^\prime)& &x+3y+2z+2t&&
(12^\prime)& &2x+4y+3z+2t.
\end{align*}
The following computations verify Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} in this case.
\begin{enumerate}[label=Eq. (\arabic*):]\setcounter{enumi}{1}
\item $\max\{(2x+y)+(3x+3y+3z+2t), (x+3y+2z+t)+(x+y+z+t)\}-(2x+2y+2z+t)
=\mathrm{max}\left\{ 5x+4y+3z+2t, 2x+4y+3z+2t \right\}-(2x+2y+2z+t)
=3x+2y+z+t$.
\item $\max\{ (x+3y+2z+t)+(2x+2y+2z), (2y+z+2t)+(3x+3y+3z+2t) \}-(x+y+z+2t)
=\mathrm{max}\left\{ 3x + 5y + 4z + t, 3x+5y+4z+4t \right\}-(x+y+z+2t)
=2x+4y+3z+2t$.
\item $\max\{(x+2y)+(2x+4y+3z+2t), (y+2z+t)+(3x+2y+z+t)\}-(x+3y+2z+t)
=\max\left\{ 3x+6y+3z+2t, 3x+3y+3z+2t \right\}-(x+3y+2z+t)
= 2x+3y+z+t$.
\item $\max\{(3x+2y+z+t)+(x+y+z), (2x+4y+3z+2t)+(2x+2y+2z+2t)\}-(3x+3y+3z+2t)
=\max\left\{ 4x+3y+2z+t, 4x+6y+5z+4t \right\}-(3x+3y+3z+2t)
= x+3y+2z+2t$.
\end{enumerate}
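As an illustrative aside (not part of the argument), the four displayed computations can be re-checked mechanically. The following Python sketch encodes the coordinate formulas $c_j$ and $c^\prime_j$ listed above and verifies, on a grid of nonnegative integer values of $x, y, z, t$, the four max-plus identities exactly as they are paired in the displayed computations.

```python
# Numerical sanity check (illustrative only) of the four displayed
# computations for Family (3), using the coordinate formulas listed above.
from itertools import product

def check_family_3():
    for x, y, z, t in product(range(4), repeat=4):
        # Coordinates c_1, ..., c_12 (left hand side of the figure).
        c = {1: x+2*y, 2: 2*x+y, 3: x+y+z+t, 4: 2*x+2*y+2*z+2*t,
             5: x+y+z, 6: 2*x+2*y+2*z, 7: 2*y+z+2*t, 8: y+2*z+t,
             9: x+3*y+2*z+t, 10: 2*x+2*y+2*z+t,
             11: 3*x+3*y+3*z+2*t, 12: x+y+z+2*t}
        # Coordinates c'_9, ..., c'_12 (right hand side).
        cp = {9: 2*x+3*y+z+t, 10: 3*x+2*y+z+t,
              11: x+3*y+2*z+2*t, 12: 2*x+4*y+3*z+2*t}
        # The four displayed computations, in order.
        assert cp[10] == max(c[2]+c[11], c[9]+c[3]) - c[10]
        assert cp[12] == max(c[9]+c[6], c[7]+c[11]) - c[12]
        assert cp[9]  == max(c[1]+cp[12], c[8]+cp[10]) - c[9]
        assert cp[11] == max(cp[10]+c[5], cp[12]+c[4]) - c[11]
    return True
```

All four identities hold on the whole grid, since in each case the first (or second) argument of the max dominates coefficientwise for nonnegative $x, y, z, t$.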
\textbf{Family (7).} The last group of cases are given by schematics (7) and (8) of Figure \ref{figure:9cases}. We verify case (7) here. The other case is similar. We compute the $c_j$'s and $c_j^\prime$'s via Figure \ref{figure:triangle} below; \textcolor{purple}{see also \S \ref{sssec:definitionofthetropicalwebcoordinates}}.
\begin{figure}[htb]
\includegraphics[scale=0.8]{flip7-alt}
\caption{Family (7), shown when $z \geq t$.}
\label{figure:flip7}
\end{figure}
\begin{figure}[htb]
\includegraphics[scale=0.7]{flip7l}
\caption{(1) Bigon schematic after the flip. (2) An example of the web in good position after the flip: $x=y=t=1$ and $z=2$.}
\label{figure:flip7l}
\end{figure}
Note that, unlike for the previous two examples, in this case there are two possibilities: $z \geq t$ and $z \leq t$. In Figure \ref{figure:flip7}, we display the case when $z \geq t$ (the $z \leq t$ case is similar).
As for the previous example, it is not immediately obvious that the schematic appearing on the right hand side of Figure \ref{figure:flip7} \textcolor{purple}{displays} the correct good position. We again verify this by explicitly drawing the flipped bigon, as shown in Figure \ref{fig:case7flip} in Appendix \ref{app:second-appendix} at the end of this article.
The schematic diagram of the web in good position restricted to the flipped bigon \textcolor{purple}{in} the right hand side of Figure \ref{figure:flip7} is shown in Figure \ref{figure:flip7l}(1). It is an enjoyable exercise to check that this \textcolor{purple}{bigon} schematic agrees with the web \textcolor{purple}{example} schematically shown in Figure \ref{fig:case7flip}.
Another guiding example, showing the web in good position after the flip (without using schematics), is provided in Figure \ref{figure:flip7l}(2).
\textcolor{purple}{We demonstrate the calculation when} $z \geq t$ (the case $z \leq t$ is similar).
Left hand side of Figure \ref{figure:flip7}, coordinates $c_j \quad (j=1,2,\dots,12)$:
\begin{align*}
(1)& &2x + t&&
(2)& &x + 2t
\\(3)& &2y+z&&
(4)& &y+2z
\\(5)& &2x + 2y + 2t&&
(6)& &x + y + t
\\(7)& &2x + 2y + 2z&&
(8)& &x+y+z
\\(9)& &3x + y + z + t&&
(10)& &2x + y + z + 2t
\\(11)& &2x+3y+2z+2t&&
(12)& &x + 2y + 2z + t.
\end{align*}
Right hand side of Figure \ref{figure:flip7}, coordinates $c^\prime_j \quad (j=9,\dots,12)$:
\begin{align*}
(9^\prime)& &2x + 2y + z + t&&
(10^\prime)& &x + 2y + z + 2t
\\(11^\prime)& &x + y + 2z - t&&
(12^\prime)& &3x + 3y + 2z + t.
\end{align*}
The following computations verify Equations \eqref{equation:mu1}, \eqref{equation:mu2}, \eqref{equation:mu3}, \eqref{equation:mu4} in this case. Note that the last equation uses the assumption $z \geq t$.
\begin{enumerate}[label=Eq. (\arabic*):]\setcounter{enumi}{1}
\item $\max\{ (x+2t)+(2x+3y+2z+2t), (3x+y+z+t)+(2y+z)
\}-(2x+y+z+2t)
=\max\left\{ 3x+3y+2z+4t, 3x+3y+2z+t
\right\}-(2x+y+z+2t)
=x+2y+z+2t$.
\item $\max\{(3x+y+z+t)+(x+y+t), (2x+2y+2z)+(2x+3y+2z+2t)
\}-(x+2y+2z+t)
=\max\left\{ 4x+2y+z+2t, 4x+5y+4z+2t
\right\}-(x+2y+2z+t)
=3x+3y+2z+t$.
\item $\max\{(2x+t)+(3x+3y+2z+t), (x+y+z)+(x+2y+z+2t)\}-(3x+y+z+t)
=\max\left\{ 5x+3y+2z+2t, 2x+3y+2z+2t
\right\}-(3x+y+z+t)
=2x+2y+z+t$.
\item $\max\{ (x+2y+z+2t)+(2x+2y+2t), (3x+3y+2z+t)+(y+2z)
\}-(2x+3y+2z+2t)
=\max\left\{ 3x+4y+z+4t, 3x+4y+4z+t
\right\}-(2x+3y+2z+2t)
=(3x+4y+4z+t)-(2x+3y+2z+2t)
=x+y+2z-t$.
\end{enumerate}
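As with the previous family, the four displayed computations can be re-checked mechanically; the following illustrative Python sketch does so on a grid of nonnegative integers restricted to $z \geq t$, the case treated in the text (the last identity is the one that uses this assumption).

```python
# Numerical sanity check (illustrative only) of the four displayed
# computations for Family (7), using the coordinate formulas listed above.
from itertools import product

def check_family_7():
    for x, y, z, t in product(range(4), repeat=4):
        if z < t:
            continue  # the text treats z >= t; the case z <= t is similar
        # Coordinates c_1, ..., c_12 (left hand side of the figure).
        c = {1: 2*x+t, 2: x+2*t, 3: 2*y+z, 4: y+2*z,
             5: 2*x+2*y+2*t, 6: x+y+t, 7: 2*x+2*y+2*z, 8: x+y+z,
             9: 3*x+y+z+t, 10: 2*x+y+z+2*t,
             11: 2*x+3*y+2*z+2*t, 12: x+2*y+2*z+t}
        # Coordinates c'_9, ..., c'_12 (right hand side, valid for z >= t).
        cp = {9: 2*x+2*y+z+t, 10: x+2*y+z+2*t,
              11: x+y+2*z-t, 12: 3*x+3*y+2*z+t}
        # The four displayed computations, in order.
        assert cp[10] == max(c[2]+c[11], c[9]+c[3]) - c[10]
        assert cp[12] == max(c[9]+c[6], c[7]+c[11]) - c[12]
        assert cp[9]  == max(c[1]+cp[12], c[8]+cp[10]) - c[9]
        assert cp[11] == max(cp[10]+c[5], cp[12]+c[4]) - c[11]
    return True
```

Note that the last assertion would fail without the restriction $z \geq t$, matching the remark that the last equation uses this assumption.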
\section{KTGS cone for the square: Hilbert basis}
\label{section:Hsq}
In the remaining two sections, we study the structure of the Knutson-Tao-Goncharov-Shen cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ associated to an ideal triangulation $\mathscr{T}$ of a marked surface $\widehat{S}$ (Definition \ref{def:KTGS-cone} \textcolor{purple}{ and Proposition \ref{prop:KTGS-is-positive-integer-cone}}) \textcolor{purple}{when} $\widehat{S}=\Box$ is an ideal square. In this case, an ideal triangulation $\mathscr{T}$ is simply a choice of a diagonal of $\Box$.
\subsection{Positive integer cones and Hilbert bases}
\label{ssec:hilbert-bases}
$ $
Recall from \S \ref{ssec:markedsurfacesidealtriangulations} that $\mathbb{Z}_+$ denotes the set of nonnegative integers. Recall also from \S \ref{ssec:reducedwebsandtheKTGScone}:
\begin{defn}
\label{def:submonoid}
A subset $\mathscr{M} \subset \mathbb{Z}^k$ (or $\subset \mathbb{R}^k$) is a \textit{submonoid} if $\mathscr{M}$ is closed under addition and contains 0.
\end{defn}
\begin{defn}
Let $\mathscr{M} \subset \mathbb{Z}^k$ \textcolor{purple}{(or $\subset \mathbb{R}^k$)} be a submonoid. An element $x \in \mathscr{M}$ is \textit{irreducible} if \textcolor{purple}{$x$ is nonzero}, and \textcolor{purple}{$x$ cannot be written as the sum of two nonzero elements of $\mathscr{M}$}.
We denote by $\mathscr{H} \subset \mathscr{M}$ the set of irreducible elements of $\mathscr{M}$.
A subset $\mathscr{D} \subset \mathscr{M}$ is:
\begin{itemize}
\item \textit{$\mathbb{Z}_+$-spanning} if every $x \in \mathscr{M}$ is of the form $x=\lambda_1 x_1 + \lambda_2 x_2 + \cdots + \lambda_m x_m$ for some $x_i \in \mathscr{D}$ and $\lambda_i \in \mathbb{Z}_+$, in which case we write $x \in \mathrm{span}_{\mathbb{Z}_+}(\mathscr{D})$;
\item a \textit{minimum} $\mathbb{Z}_+$-spanning set if, \textcolor{purple}{in addition}, for every $\mathbb{Z}_+$-spanning set $\mathscr{D}^\prime \subset \mathscr{M}$ we have $\mathscr{D} \subset \mathscr{D}^\prime$. \end{itemize}
\end{defn}
Note that a minimum $\mathbb{Z}_+$-spanning set is unique if it exists.
Recall (Definition \ref{def:positive-integer-cone}) that a positive integer cone $\mathscr{C} \subset \mathbb{Z}_+^k$ is a submonoid of $\mathbb{Z}_+^k$.
\begin{prop}
\label{prop:irreducible-elements}
The subset $\mathscr{H} \subset \mathscr{C} \subset \mathbb{Z}_+^k$ of irreducible elements of a positive integer cone $\mathscr{C} \subset \mathbb{Z}_+^k $ is the unique minimum $\mathbb{Z}_+$-spanning subset of $\mathscr{C}$.
\end{prop}
\begin{proof}
If $x$ is irreducible and is in the $\mathbb{Z}_+$-span of $x^\prime_1, x^\prime_2, \dots, x^\prime_m$ for $x_i^\prime \in \mathscr{C}$, then $x=x^\prime_{i_0}$ for some $i_0$ by the irreducibility property. Thus $\mathscr{H}$ is contained in any $\mathbb{Z}_+$-spanning set $\mathscr{D}$.
It remains to show that every element $x$ of $\mathscr{C}$ is in the $\mathbb{Z}_+$-span of $\mathscr{H}$. We argue by induction on the sum $\Sigma(x) \in \mathbb{Z}_+$ of the coordinates of $x \in \mathbb{Z}_+^k$; that is, on the quantity $\Sigma(x):=\sum_{i=1}^k n_i$ where $x=(n_1, n_2, \dots, n_k) \in \mathscr{C} \subset \mathbb{Z}_+^k$. The claim holds in the base case $x=0$, where $\Sigma(x)=0$, since $0$ is the empty $\mathbb{Z}_+$-linear combination. So assume that $x$ is nonzero, and that $x^\prime$ is in the $\mathbb{Z}_+$-span of $\mathscr{H}$ whenever $\Sigma(x^\prime)<\Sigma(x)$. If $x$ is irreducible, we are done. Otherwise, $x=y+z$ with both $y, z \in \mathscr{C} \subset \mathbb{Z}_+^k$ nonzero. So $\Sigma(y) < \Sigma(x)$ and $\Sigma(z) < \Sigma(x)$. Thus, $y$ and $z$ are in the $\mathbb{Z}_+$-span of $\mathscr{H}$ by the induction hypothesis, so $x$ is as well.
\end{proof}
\begin{remark}
\textcolor{purple}{Note that the $\mathbb{Z}_+$-spanning property of $\mathscr{H}$ in Proposition \ref{prop:irreducible-elements} is not true if we had only assumed that $\mathscr{C}=\mathscr{M}$ is a submonoid of $\mathbb{Z}^k$.} For example, the submonoid $\mathscr{M}=\mathbb{Z}^k$ has no irreducible elements.
In the following sections, we will take $\mathscr{C} = \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$ to be the KTGS \textcolor{purple}{positive integer} cone associated to an ideal triangulation $\mathscr{T}$ of a marked surface $\widehat{S}$. It seems to be a significant property that the KTGS cone $\mathscr{C}_\mathscr{T}$ comes naturally as a subset of a single orthant of $\mathbb{Z}^N$; see Proposition \ref{prop:KTGS-is-positive-integer-cone} and Remarks \ref{rem:realsolutionsareinteger}\eqref{item:annoyingminusisgn}, \ref{rem:conceptual-remarks}.
\end{remark}
\begin{defn}
\label{def:hilbert-basis}
Let $\mathscr{H} \subset \mathscr{C} \subset \mathbb{Z}_+^k$ be as in Proposition \ref{prop:irreducible-elements}. If $\mathscr{H}$ is a finite set, then it is called the \textit{Hilbert basis} of the positive integer cone $\mathscr{C} \subset \mathbb{Z}_+^k$.
\end{defn}
\begin{example}
\label{ex:hilbbasis}
$ $
\begin{enumerate}
\item \label{ex1:hilbbasis}
The standard basis of $\mathbb{Z}^k$ is the Hilbert basis $\mathscr{H}$ of the positive integer cone $\mathscr{C}=\mathbb{Z}_+^k$.
\item \label{ex2:hilbbasis}
By Proposition \ref{prop:irreducible-elements}, the set $\mathscr{H} = \left\{ (1,0), (1,1), (0, 2) \right\}$ is the Hilbert basis of the positive integer cone $\mathscr{C}=\mathbb{Z}_+ (1, 0)+\mathbb{Z}_+ (1,1) + \mathbb{Z}_+ (0,2) \subset \mathbb{Z}_+^2$.
\item \label{ex3:hilbbasis}
Similarly, again by Proposition \ref{prop:irreducible-elements}, the set $\mathscr{H} = \left\{ (0,1,0), (1,1,0), (1,0,1), (0,1,1) \right\}$ is the Hilbert basis of the positive integer cone $\mathscr{C}=\mathrm{span}_{\mathbb{Z}_+}(\mathscr{H}) \subset \mathbb{Z}_+^3$.
\end{enumerate}
\end{example}
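As an illustrative aside, the Hilbert basis in Example \ref{ex:hilbbasis}\eqref{ex2:hilbbasis} can be recovered by brute force: enumerate the cone inside a finite box and test irreducibility directly from the definition. A minimal Python sketch (the box bound is an implementation convenience, chosen large enough for the answer to stabilise):

```python
# Brute-force computation (illustrative only) of the irreducible elements of
# the positive integer cone C = Z_+(1,0) + Z_+(1,1) + Z_+(0,2) of Example (2),
# restricted to a finite box [0, bound]^2.
from itertools import product

def cone_elements(bound):
    """All elements a(1,0) + b(1,1) + c(0,2) lying inside the box."""
    elems = set()
    for a, b, c in product(range(bound + 1), repeat=3):
        u, v = a + b, b + 2 * c
        if u <= bound and v <= bound:
            elems.add((u, v))
    return elems

def irreducibles(bound):
    """Nonzero cone elements that are not a sum of two nonzero cone elements.

    Any summand of x is componentwise <= x, hence stays inside the box, so
    the box truncation does not affect the irreducibility test."""
    nonzero = cone_elements(bound) - {(0, 0)}
    return {x for x in nonzero
            if not any((x[0] - y[0], x[1] - y[1]) in nonzero
                       for y in nonzero
                       if y[0] <= x[0] and y[1] <= x[1] and y != x)}
```

Running `irreducibles(6)` returns exactly $\{(1,0), (1,1), (0,2)\}$, in agreement with the example.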
\begin{remark}
\label{rem:non-standard-terminology-hilbert-basis}
Hilbert bases \cite{hilbert1890ueber, schrijver1981total} are important objects in linear algebra and linear programming, and are defined for more general cones than what we have discussed here. Some of our terminology might be non-standard, adapted for the purposes of this \textcolor{purple}{paper}.
\end{remark}
\subsection{Hilbert basis of the KTGS cone for the triangle and the square}
\label{ssec:hilbert-basis-of-the-ktgs-cone-for-the-square}
$ $
\subsubsection{Hilbert basis for the triangle}
\label{sssec:hilbert-basis-for-the-triangle}
$ $
\textcolor{purple}{We begin by recalling from} \cite[\S 5]{DouglasArxiv20} the case of a single \textcolor{purple}{ideal} triangle $\widehat{S}=\mathscr{T}=\Delta$. Let $\mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ be the corresponding KTGS \textcolor{purple}{positive integer} cone.
Recall the eight ``irreducible'' webs $L_a, R_a, L_b, R_b, L_c, R_c, T_{in}, T_{out}$ in $\mathscr{W}_\Delta$ defined in \S \ref{sssec:definitionofthetropicalwebcoordinates}. For each such web $W^H$, its 7 tropical coordinates $\Phi_\Delta(W^H) \in \mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ are provided in Figure \ref{figure:triangle}.
\begin{prop}
\label{prop:hilb-base-triangle}
The $8$-element \textcolor{purple}{subset}
\begin{equation*}
\mathscr{H}_\Delta=\left\{ \Phi_\Delta(W^H);
\quad W^H=L_a, R_a, L_b, R_b, L_c, R_c, T_{in}, T_{out} \right\}
\textcolor{purple}{\quad \subset \mathscr{C}_\Delta}
\end{equation*}
is the Hilbert basis of the KTGS cone $\mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ for the triangle.
\end{prop}
\begin{proof}
This is a consequence of \cite[Proposition 45]{DouglasArxiv20} and its proof.
Indeed, we need to show that $\mathscr{H}_\Delta$ is the set of irreducible elements. \textcolor{purple}{To start,} any such $\Phi_\Delta(W^H)$ is nonzero. By the last sentence of \cite[Proposition 45]{DouglasArxiv20}, we have that $\mathscr{H}_\Delta$ is a $\mathbb{Z}_+$-spanning set for $\mathscr{C}_\Delta$. \textcolor{purple}{One checks} by hand that no single element of $\mathscr{H}_\Delta$ can be written as a $\mathbb{Z}_+$-linear combination of other elements of $\mathscr{H}_\Delta$. (This last property is particularly clear when viewed in the isomorphic cone $\mathscr{C} \subset \mathbb{Z}_+^6 \times \mathbb{Z} \subset \mathbb{Z}^7$, \textcolor{purple}{namely} the image of $\mathscr{C}_\Delta$ under a certain linear isomorphism $\mathbb{R}^7 \to \mathbb{R}^7$ \textcolor{purple}{of geometric origin}; for details, see the proof of \cite[Proposition 45]{DouglasArxiv20}. In \S \ref{section:ssq}, we generalize this linear isomorphism to the square case.)
The result follows by Proposition \ref{prop:irreducible-elements}.
\end{proof}
\begin{remark}
\label{rem:word-of-caution}
As a word of caution, \cite[Proposition 45]{DouglasArxiv20} does not imply that an element of $\mathscr{C}_\Delta$ has a unique decomposition \textcolor{purple}{as a sum of} Hilbert basis elements. Indeed, in $\mathscr{C}_\Delta$, \textcolor{purple}{we have the relation $\Phi_\Delta(T_{in})+\Phi_\Delta(T_{out})=\Phi_\Delta(L_a)+\Phi_\Delta(L_b)+\Phi_\Delta(L_c)$. See also \S \ref{ssec:relations-in-the-ktgs-cone-for-the-square} below.}
\textcolor{purple}{
It is also not true that if $\Phi_\Delta(W^\prime) \leq \Phi_\Delta(W) \in \mathscr{C}_\Delta \subset \mathbb{Z}_+^7$, in the sense that the inequality holds for each coordinate, then $W^\prime$ is topologically ``contained in'' $W$. Indeed, in the above example, we have $\Phi_\Delta(T_{in})$ or $\Phi_\Delta(T_{out}) \leq \Phi_\Delta(L_a)+\Phi_\Delta(L_b)+\Phi_\Delta(L_c)$ in $\mathscr{C}_\Delta$. An even simpler example is $\Phi_\Delta(L_a) \leq \Phi_\Delta(R_b)+\Phi_\Delta(R_c)$.}
\end{remark}
\subsubsection{Hilbert basis for the square}
\label{sssec:hilbert-basis-for-the-square-statement}
$ $
We turn to the square $\Box$, which for the rest of this section \textcolor{purple}{is} equipped with an \textcolor{purple}{ideal} triangulation $\mathscr{T}$, \textcolor{purple}{namely a choice of diagonal of $\Box$}.
Recall the \textcolor{purple}{8} oriented corner arcs in the square $\Box$ (Definition \ref{def:corner-arcs}); these are the \textcolor{purple}{``irreducible''} webs (1)-(8) \textcolor{purple}{in} $\mathscr{W}_\Box$ depicted in Figure \ref{figure:square1} \textcolor{purple}{(part 1)}. The triangulation $\mathscr{T}$ determines \textcolor{purple}{14} more \textcolor{purple}{``irreducible''} webs \textcolor{purple}{in $\Box$}, namely \textcolor{purple}{the webs} (9)-(22) in $\mathscr{W}_\Box$ depicted in Figure \ref{figure:square1} \textcolor{purple}{(part 2)}. The bracket notation used in Figure \ref{figure:square1} \textcolor{purple}{(part 2)} is explained in the caption of the figure. \textcolor{purple}{In sum,} let us denote these 22 ``irreducible'' webs by $W^H_i \textcolor{purple}{\in \mathscr{W}_\Box}$ for $i=1,2,\dots,22$.
Let $\Phi_\mathscr{T} : \mathscr{W}_\Box \to \mathscr{C}_\mathscr{T}$ be the associated web tropical coordinate map. For each web $W_i^H$, its 12 tropical coordinates $\Phi_\mathscr{T}(W_i^H) \in \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ are also provided in Figure \ref{figure:square1} \textcolor{purple}{(part 2)}.
\begin{theorem}
\label{theorem:basis}
For the webs $\left\{ W_i^H \right\}_{i=1,2,\dots,22}$ in $\mathscr{W}_\Box$ \textcolor{purple}{displayed} in Figure {\upshape\ref{figure:square1}}, the \textcolor{purple}{subset}
\begin{equation*}
\mathscr{H}_{(\Box, \mathscr{T})}=\left\{ \Phi_\mathscr{T}(W_i^H) ;
\quad i=1,2,\dots,22 \right\}
\quad \subset \mathscr{C}_\mathscr{T}
\end{equation*}
is the Hilbert basis of the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ for the triangulated square $(\Box, \mathscr{T})$.
\end{theorem}
\begin{figure}[htb]
\includegraphics[width=.75\textwidth]{square1-alt-cuttop}
\caption{(Part 1 of 2; see below.) The first 8 elements of the 22 element Hilbert basis for the KTGS cone $\mathscr{C}_\mathscr{T}$ of the triangulated square $(\Box, \mathscr{T})$, pictured via the corresponding \textcolor{purple}{``irreducible''} reduced webs $\left\{ W^H_i \right\}_{i=1,2,\dots,22}$. }
\end{figure}
\begin{figure}[htb]
\ContinuedFloat
\includegraphics[width=.75\textwidth]{square1-alt-cutbot}
\caption{(Part 2 of 2; see above.) The last 14 elements of the 22 element Hilbert basis for the KTGS cone $\mathscr{C}_\mathscr{T}$ of the triangulated square $(\Box, \mathscr{T})$, pictured via the corresponding \textcolor{purple}{``irreducible''} reduced webs $\left\{ W^H_i \right\}_{i=1,2,\dots,22}$. The square bracket is a purely notational device for webs (9)-(22); the first entry of $[\cdot, \cdot]$ corresponds to the top triangle, and the second \textcolor{purple}{entry} to the bottom \textcolor{purple}{triangle}.}
\label{figure:square1}
\end{figure}
\begin{remark}
\label{rem:hilbert-basis-depends-on-triangulation}
Note that if the other triangulation $\mathscr{T}^\prime$ of $\Box$ had been chosen, then only the webs $W^H_1, \dots, W^H_8$ and $W^H_{19}, \dots, W^H_{22}$ would appear among the 22 ``irreducible'' webs $W^{\prime H}_i$ corresponding to $\mathscr{T}^\prime$. In \textcolor{purple}{other words}, the set of webs corresponding to the Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})}$ of $\mathscr{C}_\mathscr{T}$ depends on which triangulation $\mathscr{T}$ of the square is chosen.
\end{remark}
We will need a little \textcolor{purple}{bit of} preparation before proving the theorem.
Let $\Delta$ and $\Delta^\prime$ be the two triangles appearing in the split triangulation $\mathscr{T}$ of $\Box$ (\S \ref{sssec:splitidealtriangulationsandgoodposition}). Say, $\Delta$ is the top triangle on the left hand side of Figure \ref{figure:flip}, and $\Delta^\prime$ is the bottom triangle. In particular, neither $\Delta$ nor $\Delta^\prime$ includes the intermediate bigon. If $W$ is a reduced web in $\Box$ in good position with respect to \textcolor{purple}{the split ideal triangulation} $\mathscr{T}$, then the restrictions $W|_\Delta$ and $W|_{\Delta^\prime}$ are in good position in their respective triangles (by definition of good position \textcolor{purple}{of $W$ with respect to $\mathscr{T}$}). At the level of coordinates, this induces two projections $\pi_\Delta : \mathscr{C}_\mathscr{T} \to \mathscr{C}_\Delta$ and $\pi_{\Delta^\prime} : \mathscr{C}_{\mathscr{T}} \to \mathscr{C}_{\Delta^\prime}$ defined by $\pi_\Delta(\Phi_\mathscr{T}(W))=\Phi_\Delta(W|_\Delta)$ and $\pi_{\Delta^\prime}(\Phi_\mathscr{T}(W))=\Phi_{\Delta^\prime}(W|_{\Delta^\prime})$. \textcolor{purple}{Compare Figure \ref{fig:coordinates-example}.}
\begin{lem}
\label{proposition:h}
For a reduced web $W$ in $\mathscr{W}_\Box$, suppose its image $\Phi_{\mathscr{T}}(W)$ is an irreducible element of $\mathscr{C}_\mathscr{T}$. Then, the projections $\pi_\Delta(\Phi_{\mathscr{T}}(W))$ and $\pi_{\Delta^\prime}(\Phi_{\mathscr{T}}(W))$ are, respectively, in the Hilbert bases $\mathscr{H}_\Delta$ and $\mathscr{H}_{\Delta^\prime}$ of the cones $\mathscr{C}_\Delta$ and $\mathscr{C}_{\Delta^\prime}$.
Consequently, the set of irreducible elements of $\mathscr{C}_\mathscr{T}$ is finite (thus forming a Hilbert basis) and is a subset of $\mathscr{H}_{(\Box, \mathscr{T})}$, \textcolor{purple}{as defined in Theorem {\upshape\ref{theorem:basis}}}.
\end{lem}
\begin{proof}
Assuming the first statement, the second statement immediately follows by Definition \ref{def:hilbert-basis}, Proposition \ref{prop:hilb-base-triangle}, and the construction of the 22 element set $\mathscr{H}_{(\Box, \mathscr{T})} \subset \mathscr{C}_\mathscr{T}$.
To establish the first statement, \textcolor{purple}{assume $W$ is in good position with respect to $\mathscr{T}$.} It suffices to show that if $\pi_\Delta(\Phi_{\mathscr{T}}(W)) = \Phi_\Delta(W|_\Delta)\in \mathscr{C}_\Delta$ is reducible, then $\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T}$ is reducible. So assume that there are nonempty reduced webs $A_1$ and $A_2$ in $\mathscr{W}_\Delta$ such that $\Phi_\Delta(W|_\Delta)=\Phi_\Delta(A_1)+\Phi_\Delta(A_2)$ in $\mathscr{C}_\Delta$. (At this point, \textcolor{purple}{one should be mindful of Remark} \ref{rem:word-of-caution}.)
\textcolor{purple}{We} explicitly construct nonempty reduced webs $W_1$ and $W_2$ in $\mathscr{W}_\Box$ such that
\begin{equation*}
\label{eq:contradiction-hilbert-basis}
\tag{$\ast$}
\Phi_\mathscr{T}(W)=\Phi_\mathscr{T}(W_1)+\Phi_\mathscr{T}(W_2) \quad \in \mathscr{C}_\mathscr{T}.
\end{equation*}
Let $E$ (resp. $E^\prime$) denote the bigon edge intersecting $\Delta$ (resp. $\Delta^\prime$). Let $n$ and $m$ (resp. $n_i$ and $m_i$ for $i=1,2$) be, respectively, the number of out- and in-strand-ends of $W|_\Delta$ (resp. $A_i$) on $E$; similarly, let $n^\prime$ and $m^\prime$ be, respectively, the number of out- and in-strand-ends of $W|_{\Delta^\prime}$ on $E^\prime$. Note $n^\prime = m$ and $m^\prime = n$.
By \cite[Definition 35, property (2)]{DouglasArxiv20}, \textcolor{purple}{which says that the two edge coordinates on $E$ uniquely determine the number of out- and in-strand-ends on $E$} (\textcolor{purple}{this} is a simple linear algebra calculation), we must have $n=n_1+n_2$ and $m=m_1+m_2$. \textcolor{purple}{Hence} $n^\prime=m_1+m_2$ and $m^\prime=n_1+n_2$.
\textcolor{purple}{Now, for each $i=1,2$}, arbitrarily choose $m_i$ out-strand-ends and $n_i$ in-strand-ends of $W|_{\Delta^\prime}$ on $E^\prime$, \textcolor{purple}{which we} call \textit{$i$-strand-ends} of $W|_{\Delta^\prime}$.
Let us say that a component $C^\prime$ of $W|_{\Delta^\prime}$ is \textit{$A_i$-connecting} if \textcolor{purple}{at least} one of its strand-ends on $E^\prime$ is an $i$-strand-end; note that (1) a corner arc $C^\prime$ is $A_i$-connecting for at most one $i$ (possibly none, when $C^\prime$ is on the corner opposite $E^\prime$), and (2) a honeycomb $C^\prime$ is $A_i$-connecting for at least one $i$, and may be both $A_1$- and $A_2$-connecting.
Let $h^\prime \textcolor{purple}{\in \mathbb{Z}_+}$ \textcolor{purple}{be the size of} the honeycomb $H^\prime$ of $W|_{\Delta^\prime}$, and let $h^{\prime (i)}$ be the number of $i$-strand-ends of $H^\prime$; note that $h^\prime=h^{\prime (1)}+h^{\prime (2)}$. \textcolor{purple}{For each $i=1,2$,} define $A^\prime_i$ to be the reduced web in $\mathscr{W}_{\Delta^\prime}$ consisting of the $A_i$-connecting corner arc components $C^\prime$ of $W|_{\Delta^\prime}$ together with a honeycomb of size $h^{\prime (i)}$ oriented as $H^\prime$ (and we can include the non-$A_i$-connecting components $C^\prime$ \textcolor{purple}{into} $A^\prime_1$, say); note in particular that $\Phi_{\Delta^\prime}(W|_{\Delta^\prime})=\Phi_{\Delta^\prime}(A^\prime_1)+\Phi_{\Delta^\prime}(A^\prime_2) \textcolor{purple}{\in \mathscr{C}_{\Delta^\prime}}$.
Lastly, \textcolor{purple}{for each $i=1,2$,} define $W_i$ in $\mathscr{W}_\Box$ to be the unique nonempty reduced web in the square obtained from the triangle webs $A_i \textcolor{purple}{\in \mathscr{W}_\Delta}$ and $A^\prime_i \textcolor{purple}{\in \mathscr{W}_{\Delta^\prime}}$ by gluing across the bigon in the usual way \textcolor{purple}{(as in Figure \ref{figure:bigon}).} (Technically, it is the class of $W_i$ in $\mathscr{W}_\Box$ that is unique, and $W_i$ is determined up to corner arc permutations). By construction, \eqref{eq:contradiction-hilbert-basis} holds.
\end{proof}
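The ``simple linear algebra calculation'' invoked in the proof can be made explicit. Assuming the convention (inferred here from the coordinate tables, e.g.\ coordinates (1) and (2) of Family (7), $2x+t$ and $x+2t$, for $n=x$ out- and $m=t$ in-strand-ends, and not quoted verbatim from \cite{DouglasArxiv20}) that an edge carrying $n$ out- and $m$ in-strand-ends has edge coordinate pair $(a,b)=(2n+m,\,n+2m)$, the counts are recovered as $n=(2a-b)/3$ and $m=(2b-a)/3$. A minimal Python sketch under this assumption:

```python
# Hypothetical sketch: recover the strand-end counts (n, m) on an edge from
# its two edge coordinates (a, b), assuming the convention a = 2n + m,
# b = n + 2m (an assumption inferred from the coordinate tables above).

def strand_ends(a, b):
    """Invert (a, b) = (2n + m, n + 2m); raise ValueError if impossible."""
    n, r1 = divmod(2 * a - b, 3)  # 2a - b = 3n
    m, r2 = divmod(2 * b - a, 3)  # 2b - a = 3m
    if r1 or r2 or n < 0 or m < 0:
        raise ValueError("(a, b) is not a valid edge coordinate pair")
    return n, m
```

Since $(n,m)$ is a linear function of $(a,b)$ and edge coordinates are additive under disjoint union, this also gives the additivity $n=n_1+n_2$, $m=m_1+m_2$ used in the proof.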
\begin{proof}[Proof of Theorem {\upshape \ref{theorem:basis}}]
$ $
By Lemma \ref{proposition:h}, it remains to show that each element of $\mathscr{H}_{(\Box, \mathscr{T})}$ is irreducible in $\mathscr{C}_\mathscr{T}$. This property can be checked by hand. (\textcolor{purple}{The irreducibility} becomes \textcolor{purple}{clearer} in light of the linear map $\theta_\mathscr{T} : \mathbb{R}^{12} \to \mathbb{R}^{18}$ of \S \ref{sssec:a-linear-isomorphism} below, where the image $\theta_\mathscr{T}(\mathscr{H}_{(\Box, \mathscr{T})}) \subset \mathbb{Z}_+^{18}$ is written explicitly).
\end{proof}
\subsubsection{Two linear isomorphisms: first isomorphism $\theta_\mathscr{T}$ by rhombus numbers}
\label{sssec:a-linear-isomorphism}
$ $
Recall (Definition \ref{def:KTGS-cone}) that the KTGS cone $\mathscr{C}_\mathscr{T}$ \textcolor{purple}{for any triangulated marked surface $(\widehat{S}, \mathscr{T})$} is defined as the points in $\mathbb{Z}^N$ satisfying two conditions per rhombus, where there are three rhombi per pointed ideal triangle \textcolor{purple}{$\Delta$} of $\mathscr{T}$ (\S \ref{ssec:markedsurfacesidealtriangulations}). Both conditions involve the quantity $a+b-c-d$ associated to the rhombus; the first being that $3\beta := a+b-c-d \geq 0$, and the second that $\beta = (a+b-c-d)/3$ is an integer. (Recall $d=0$ if the rhombus is a corner rhombus; \S \ref{ssec:markedsurfacesidealtriangulations}.)
Let $\left\{ \beta_i \right\}_i$ denote these \textit{rhombus numbers}, varying over all the rhombi of $\mathscr{T}$.
It will be convenient in the remainder of the paper to talk about real vector spaces $\mathbb{R}^N$, which we think of as containing $\mathbb{Z}^N$, and in particular the KTGS cone $\mathscr{C}_\mathscr{T}$, as subsets.
\textcolor{purple}{In this sub-subsection, for the triangulated ideal square $(\Box, \mathscr{T})$} we define a linear isomorphism $\theta_\mathscr{T}$ of real 12-dimensional vector spaces, \textcolor{purple}{which is} used in the proof of Theorem \ref{theorem:basis} and in \textcolor{purple}{\S \ref{section:ssq}}. \textcolor{purple}{Here,} 12 is the number of tropical coordinates for the square. \textcolor{purple}{Note that the triangulated square} has 18 rhombi $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$, as displayed in Figure \ref{fig:tropical-X-coords} \textcolor{purple}{below in \S \ref{section:ssq}}.
\begin{defn}
Let $(\Box, \mathscr{T})$ be the triangulated square, \textcolor{purple}{whose coordinates are labeled as in} the left hand side of Figure \ref{figure:flip}. Define a linear map
\begin{equation*}
\theta_\mathscr{T} : \mathbb{R}^{12} \longrightarrow \mathbb{R}^{18}
\end{equation*}
by the formula
\begin{equation*}
\theta_\mathscr{T}(x_1, x_2, \dots, x_8, y_1, \dots, y_4) = (\beta_1, \beta_2, \beta_3, \beta_4, \beta_5, \beta_6, \beta_7, \beta_8, \beta_9, \beta_{10}, \beta_{11}, \beta_{12}, \beta_{13}, \beta_{14}, \beta_{15}, \beta_{16}, \beta_{17}, \beta_{18}),
\end{equation*}
where the $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$ are the 18 rhombus numbers \textcolor{purple}{defined above}.
\end{defn}
For example, the images under $\theta_\mathscr{T}$ of the 22-element Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})} \subset \mathscr{C}_\mathscr{T}$ of Theorem \ref{theorem:basis} are calculated from Figures \ref{figure:square1} and \ref{fig:tropical-X-coords} to be:
\begin{align*}
(1)&&\quad \theta_\mathscr{T}(\Phi_\mathscr{T}([R_a]))&=(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)\\
(2)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_a]))&=(0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) \\
(3)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_b]))&=(0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0) \\
(4)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_b]))&=(0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0) \\
(5)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_c]))&=(0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0) \\
(6)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_c]))&=(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1) \\
(7)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_d]))&=(0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0) \\
(8)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_d]))&=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0) \\
(9)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{out}, R_b]))&=(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0) \\
(10)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{out}, L_c]))&=(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1) \\
(11)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{in}, L_b]))&=(0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0) \\
(12)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{in}, R_c]))&=(0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0) \\
(13)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_b, T_{out}]))&=(0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1) \\
(14)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_c, T_{in}]))&=(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0) \\
(15)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_b, T_{in}]))&=(0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0) \\
(16)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_c, T_{out}]))&=(0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1) \\
(17)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{out}, T_{in}]))&=(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0) \\
(18)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{in},T_{out}]))&=(0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1) \\
(19)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_b,R_c]))&=(0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0) \\
(20)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_b,L_c]))&=(0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1) \\
(21)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([R_c,L_b]))&=(0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0) \\
(22)&&\quad\theta_\mathscr{T}(\Phi_\mathscr{T}([L_c, R_b]))&=(0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0).
\end{align*}
When there is no confusion, we also let $\beta_i$ denote the general $i$-th coordinate of $\mathbb{R}^{18}$.
Consider the subspace $V_\mathscr{T} \subset \mathbb{R}^{18}$ defined by
\begin{gather*}
\label{eq:tropical-x-coordinates}
\tag{$\dagger$}
V_\mathscr{T} = \{ (\beta_i)_i \in \mathbb{R}^{18}; \quad
X_1 := \beta_3 - \beta_2 = \beta_6 - \beta_5 = \beta_9 - \beta_8, \quad
X_2 := \beta_4 - \beta_{13} = \beta_{17} - \beta_9, \quad
\\ \notag X_3 := \beta_{12} - \beta_{11} = \beta_{15}-\beta_{14} = \beta_{18} - \beta_{17}, \quad
X_4 := \beta_{16} - \beta_7 = \beta_5 - \beta_{15} \}.
\end{gather*}
\textcolor{purple}{See \S \ref{sssec:second-linear-isomorphism} for a discussion of the geometric meaning of the subspace $V_\mathscr{T}$ and the quantities~$X_i$.}
\begin{prop}
\label{obs:theta-is-injective}
The linear map $\theta_\mathscr{T}:\mathbb{R}^{12} \to \mathbb{R}^{18}$ is an isomorphism \textcolor{purple}{of $\mathbb{R}^{12}$} onto $V_\mathscr{T}$. That is, $\theta_\mathscr{T}$ is injective, and the image of $\theta_\mathscr{T}$ is equal to $V_\mathscr{T}$. In particular, $V_\mathscr{T}$ is $12$-dimensional.
\end{prop}
\begin{proof}
\textcolor{purple}{By} elementary linear algebra, the above 22 images $\left\{ \theta_\mathscr{T}(\Phi_\mathscr{T}(W_i^H))\right\}_{i=1,2,\dots,22} \subset \mathbb{Z}_+^{18}$ span a 12-dimensional subspace of $\mathbb{R}^{18}$. So $\theta_\mathscr{T}$ is injective.
That $\theta_\mathscr{T}(\mathbb{R}^{12}) \subset V_\mathscr{T}$ follows from the definition of the rhombus numbers $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$; compare Figure \ref{fig:tropical-X-coords}.
That $V_\mathscr{T} \textcolor{purple}{\subset \mathbb{R}^{18}}$ is 12-dimensional follows from \textcolor{purple}{a} computation \textcolor{purple}{showing} that the linear map $f: \mathbb{R}^{18} \to \mathbb{R}^{6}$ defined by
\begin{align*}
f(\beta_1, \beta_2, \dots, \beta_{18}) =
(&\beta_3 - \beta_2+\beta_5-\beta_6, \quad \beta_6 - \beta_5 + \beta_8 - \beta_9, \quad \beta_4 - \beta_{13} + \beta_9-\beta_{17},
\\ &\beta_{12} - \beta_{11}+\beta_{14}-\beta_{15}, \quad \beta_{15} - \beta_{14}+\beta_{17}-\beta_{18}, \quad \beta_{16} - \beta_7 + \beta_{15} - \beta_5)
\end{align*}
has rank 6 (note $V_\mathscr{T}$ is the kernel of $f$).
\end{proof}
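As an illustrative aside (not part of the proof), both linear-algebra claims can be re-checked mechanically from the data displayed above: each of the 22 vectors $\theta_\mathscr{T}(\Phi_\mathscr{T}(W_i^H))$ lies in the kernel of $f$ (hence in $V_\mathscr{T}$), and together they span a 12-dimensional subspace of $\mathbb{R}^{18}$. A Python sketch with exact rational arithmetic:

```python
# Illustrative verification: the 22 vectors theta_T(Phi_T(W_i^H)) listed
# above lie in ker(f) = V_T, and they span a 12-dimensional subspace.
from fractions import Fraction

H = [
    (1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),   # (1)
    (0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),   # (2)
    (0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0),   # (3)
    (0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,0,0,0),   # (4)
    (0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0),   # (5)
    (0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1),   # (6)
    (0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0),   # (7)
    (0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0),   # (8)
    (0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,0),   # (9)
    (0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,1,1),   # (10)
    (0,1,0,0,1,0,0,1,0,0,0,0,0,1,1,0,0,0),   # (11)
    (0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0),   # (12)
    (0,0,0,0,1,1,0,0,0,0,0,1,0,0,1,0,0,1),   # (13)
    (0,0,0,0,0,0,0,1,1,0,1,0,0,1,0,0,1,0),   # (14)
    (0,0,0,1,0,0,0,0,0,0,1,0,0,1,0,0,1,0),   # (15)
    (0,0,0,0,0,0,1,0,0,0,0,1,0,0,1,0,0,1),   # (16)
    (0,0,1,0,0,1,0,0,1,0,1,0,0,1,0,0,1,0),   # (17)
    (0,1,0,0,1,0,0,1,0,0,0,1,0,0,1,0,0,1),   # (18)
    (0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,0,0),   # (19)
    (0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1),   # (20)
    (0,0,0,0,0,0,1,0,0,0,0,0,0,1,1,0,0,0),   # (21)
    (0,0,0,0,0,0,0,1,1,0,0,0,1,0,0,0,0,0),   # (22)
]

def f(v):
    """The linear map f : R^18 -> R^6 from the proof (betas are 1-indexed)."""
    b = lambda i: v[i - 1]
    return [b(3)-b(2)+b(5)-b(6), b(6)-b(5)+b(8)-b(9), b(4)-b(13)+b(9)-b(17),
            b(12)-b(11)+b(14)-b(15), b(15)-b(14)+b(17)-b(18),
            b(16)-b(7)+b(15)-b(5)]

def rank(rows):
    """Rank over Q via Gaussian elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col]:
                t = M[i][col] / M[r][col]
                M[i] = [a - t * c for a, c in zip(M[i], M[r])]
        r += 1
    return r
```

Here `all(f(v) == [0]*6 for v in H)` holds and `rank(H)` returns 12, matching the proposition.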
\begin{conceptremark}
\label{rem:sec6conceptremark}
Recall (\S \ref{PGL3-decorated-local-systems}) $\mathcal{A}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t) = \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)$. Recall also from Remark \ref{rem:conceptual-remarks} that we view the positive integer cone $\mathscr{C}_\mathscr{T} \cong -3 \mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{Z}^t)_\mathscr{T}$ as a $\mathscr{T}$-chart for the positive tropical integer points $\mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{Z}^t) \subset \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)$.
We think of $\mathbb{R}^{12} \cong \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} $ as the coordinate chart of $\mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)$ associated to the ideal triangulation $\mathscr{T}$, with one tropical $\mathcal{A}$-coordinate $-3(A_{a;b,c}^{i,j,k})^t$ per dot of $\mathscr{T}$ (\S \ref{ssec:SL3-decorated-local-systems}).
We view the rhombus numbers $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$ as the tropicalizations $(\alpha_{a;b,c}^{i,j,k})^t$ of the rhombus functions $\alpha_{a;b,c}^{i,j,k}$ on the moduli space $\mathcal{A}_{\mathrm{PGL}_3, \Box}$ (\S \ref{PGL3-decorated-local-systems}).
By Proposition \ref{obs:theta-is-injective}, we can also think of the rhombus numbers $\left\{ \beta_i \right\}_{i} \in V_\mathscr{T} \subset \mathbb{R}^{18}$ as coordinates for $\mathcal{A}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)$ via the isomorphism $\theta_\mathscr{T}$, that is:
\begin{equation*}
\mathcal{A}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \cong V_\mathscr{T} \overset{\theta_\mathscr{T}}{\cong} \mathbb{R}^{12} \cong \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)_\mathscr{T}.
\end{equation*}
\end{conceptremark}
\subsection{Tropical skein relations in the KTGS cone for the square}
\label{ssec:relations-in-the-ktgs-cone-for-the-square}
$ $
We end this section with a noteworthy observation, which will \textcolor{purple}{not be needed later.}
We saw in Remark \ref{rem:word-of-caution} that there are interesting relations even in the KTGS cone $\mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ for the triangle. In fact, this is the only relation (in the sense analogous to Proposition \ref{prop:relations-in-the-square} below). The \textcolor{purple}{intuitive} reason there is only 1 relation for \textcolor{purple}{the} triangle is that \textcolor{purple}{the Hilbert basis for $\mathscr{C}_\Delta$ has 8 elements, whereas there are } only 7 \textcolor{purple}{Fock-Goncharov} coordinates.
We now describe all of the relations in the KTGS cone \textcolor{purple}{$\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$} for the square. \textcolor{purple}{Intuitively,} there are 10 relations because the Hilbert basis \textcolor{purple}{for $\mathscr{C}_\mathscr{T}$} has 22 elements, whereas there are only 12 \textcolor{purple}{Fock-Goncharov} coordinates.
\begin{prop}
\label{prop:relations-in-the-square}
The following $10$ linear relations are independent and complete among the $22$ elements of the Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})} \subset \mathscr{C}_\mathscr{T}$ for the KTGS cone for the square:
\begin{align*}
(1)&&\Phi_\mathscr{T}([T_{in},L_b])+ \Phi_\mathscr{T}([T_{out},L_c])&=\Phi_\mathscr{T}([L_a])+\Phi_\mathscr{T}([L_b])+\Phi_\mathscr{T}([L_c])
\\(2)&&\Phi_\mathscr{T}([T_{out},L_c])+ \Phi_\mathscr{T}([T_{in},R_c])&=\Phi_\mathscr{T}([L_a])+\Phi_\mathscr{T}([L_c])+\Phi_\mathscr{T}([L_b,R_c])
\\(3)&&\Phi_\mathscr{T}([L_c, T_{in}])+ \Phi_\mathscr{T}([L_b, T_{out}])&=\Phi_\mathscr{T}([L_b])+\Phi_\mathscr{T}([L_c])+\Phi_\mathscr{T}([L_d])
\\(4)&&\Phi_\mathscr{T}([L_b,T_{out}])+ \Phi_\mathscr{T}([R_b, T_{in}])+\Phi_\mathscr{T}([T_{out},R_b])
&=\Phi_\mathscr{T}([L_b])+\Phi_\mathscr{T}([R_b])+\Phi_\mathscr{T}([L_d])+\Phi_\mathscr{T}([T_{out},L_c])
\\(5)&&\Phi_\mathscr{T}([R_c,T_{out}])+ \Phi_\mathscr{T}([L_b, R_c])&=\Phi_\mathscr{T}([L_b,T_{out}])+\Phi_\mathscr{T}([R_c])
\\(6)&&\Phi_\mathscr{T}([T_{out},T_{in}])+ \Phi_\mathscr{T}([L_b, T_{out}])&=\Phi_\mathscr{T}([L_b])+\Phi_\mathscr{T}([L_d])+\Phi_\mathscr{T}([T_{out}, L_c])
\\(7)&&\Phi_\mathscr{T}([T_{in},T_{out}])+ \Phi_\mathscr{T}([T_{out},L_c])&=\Phi_\mathscr{T}([L_a])+\Phi_\mathscr{T}([L_c])+\Phi_\mathscr{T}([L_b, T_{out}])
\\(8)&&\Phi_\mathscr{T}([T_{out},R_b])+ \Phi_\mathscr{T}([R_b,L_c])&=\Phi_\mathscr{T}([R_b])+\Phi_\mathscr{T}([T_{out}, L_c])
\\(9)&&\Phi_\mathscr{T}([L_b,R_c])+ \Phi_\mathscr{T}([R_c,L_b])&=\Phi_\mathscr{T}([L_b])+\Phi_\mathscr{T}([R_c])
\\(10)&&\Phi_\mathscr{T}([T_{out},L_c])+ \Phi_\mathscr{T}([L_c,R_b])&=\Phi_\mathscr{T}([T_{out},R_b])+\Phi_\mathscr{T}([L_c]).
\end{align*}
\end{prop}
\begin{proof}
More precisely, what is meant by the statement of the proposition is the following. Let $f : \mathbb{R}^{22} \to \mathbb{R}^{12}$ be the linear map
\begin{equation*}
f(\lambda_1, \lambda_2, \dots, \lambda_{22})
= \sum_{i=1}^{22} \lambda_i \Phi_\mathscr{T}(W_i^H)
\quad \in \mathbb{R}^{12},
\end{equation*}
where the webs $W_i^H$ are as in Theorem \ref{theorem:basis}. As in the proof of Proposition \ref{obs:theta-is-injective}, each of the 10 relations above determines an element $r_j$ of $\mathbb{R}^{22}$. Let $V \subset \mathbb{R}^{22}$ be the kernel of $f$. The claim is that the elements $\left\{ r_j \right\}_{j=1,2,\dots,10}$ form a basis of $V$; in particular, $V$ is 10-dimensional.
By Figure \ref{figure:square1}, one checks that the 10 relations are satisfied, so $r_j \in V$. By elementary linear algebra, the 10 elements $\left\{ r_j \right\}_j \subset V$ are linearly independent. \textcolor{purple}{It remains to show} that the linear map $f:\mathbb{R}^{22}\to\mathbb{R}^{12}$ has rank 12, \textcolor{purple}{for which it suffices to show that the 22 elements $\left\{ \Phi_\mathscr{T}(W_i^H) \right\}_i \subset \mathbb{Z}_+^{12} \subset \mathbb{R}^{12}$ span $\mathbb{R}^{12}$.} \textcolor{purple}{This is true because, by the proof of Proposition \ref{obs:theta-is-injective}, the images of these 22 elements under the linear map $\theta_\mathscr{T} : \mathbb{R}^{12} \to \mathbb{R}^{18}$ span the 12 dimensional subspace $V_\mathscr{T} \subset \mathbb{R}^{18}$.}
\end{proof}
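The linear independence of the 10 relations can likewise be checked mechanically. The sketch below (a hypothetical Python/NumPy verification, not part of the paper) encodes each relation as a vector over the web symbols appearing in it; these symbols form a subset of the 22 Hilbert basis coordinates, so independence over this subset implies independence in $\mathbb{R}^{22}$:

```python
# Hypothetical verification sketch (not part of the paper): the 10
# relation vectors r_j of the proposition are linearly independent.
import numpy as np

relations = [  # (left-hand side webs, right-hand side webs)
    (["Tin,Lb", "Tout,Lc"], ["La", "Lb", "Lc"]),
    (["Tout,Lc", "Tin,Rc"], ["La", "Lc", "Lb,Rc"]),
    (["Lc,Tin", "Lb,Tout"], ["Lb", "Lc", "Ld"]),
    (["Lb,Tout", "Rb,Tin", "Tout,Rb"], ["Lb", "Rb", "Ld", "Tout,Lc"]),
    (["Rc,Tout", "Lb,Rc"], ["Lb,Tout", "Rc"]),
    (["Tout,Tin", "Lb,Tout"], ["Lb", "Ld", "Tout,Lc"]),
    (["Tin,Tout", "Tout,Lc"], ["La", "Lc", "Lb,Tout"]),
    (["Tout,Rb", "Rb,Lc"], ["Rb", "Tout,Lc"]),
    (["Lb,Rc", "Rc,Lb"], ["Lb", "Rc"]),
    (["Tout,Lc", "Lc,Rb"], ["Tout,Rb", "Lc"]),
]
symbols = sorted({w for lhs, rhs in relations for w in lhs + rhs})
index = {w: i for i, w in enumerate(symbols)}
R = np.zeros((len(relations), len(symbols)), dtype=int)
for j, (lhs, rhs) in enumerate(relations):
    for w in lhs:
        R[j, index[w]] += 1  # LHS with coefficient +1
    for w in rhs:
        R[j, index[w]] -= 1  # RHS with coefficient -1

print(np.linalg.matrix_rank(R))  # -> 10
```

The rank is full for the same structural reason as before: each relation contains a web (such as $[T_{in}, L_b]$ in relation (1)) appearing in no other relation.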
\begin{remark}
The relations of Proposition \ref{prop:relations-in-the-square} can be viewed as \textit{tropical $\mathrm{SL}_3$ skein relations}. \textcolor{purple}{Indeed, they can be ``predicted'' as the result of resolving the overlapping webs in the square (corresponding to a given relation in the cone) by one of the two Kuperberg $\mathrm{SL}_3$ skein relations \cite[\S 4, $q=1$]{KuperbergCommMathPhys96} (one resolution per crossing in the picture).} See also \cite{XieArxiv13}.
\end{remark}
\section{KTGS cone for the square: sector decomposition}
\label{section:ssq}
In \S \ref{section:Hsq}, we saw that the Knutson-Tao-Goncharov-Shen cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ for the triangulated square $(\Box, \mathscr{T})$ has a Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})} \subset \mathscr{C}_\mathscr{T}$ consisting of 22 elements (Figure \ref{figure:square1}). There are many linear dependence relations in $\mathbb{R}^{12}$ among these Hilbert basis elements; see Proposition \ref{prop:relations-in-the-square}. In this last section, we study certain linearly independent subsets of the Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})}$ that have topological interpretations in terms of webs.
More specifically, we show that each of the 42 web families $\mathscr{W}_i \subset \mathscr{W}_\Box$ (Proposition \ref{ssec:42-reduced-web-families-in-the-square} and Figure \ref{figure:9cases}) corresponds to a 12-dimensional subcone $\mathscr{C}_\mathscr{T}^i \subset \mathscr{C}_\mathscr{T}$ (called a sector) generated by 12 Hilbert basis elements. Moreover, every point in the KTGS cone $\mathscr{C}_\mathscr{T}$ lies in such a sector $\mathscr{C}_\mathscr{T}^i$. These sectors have a geometric description in terms of tropical integer $\mathcal{X}$-coordinates (Figure \ref{fig:tropical-X-coords}) for reduced webs $W \in \mathscr{W}_\Box$, which are functions of the corresponding positive tropical integer $\mathcal{A}$-coordinates (Figure \ref{fig:coordinates-example}); we already encountered some of these ideas in~\S \ref{sssec:a-linear-isomorphism}.
In summary, this analysis gives us a deeper understanding of the combinatorial, geometric, and topological properties of the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ for the square; see Figure \ref{figure:wallscross}.
\subsection{Cones over the real numbers}
\label{ssec:sector-decompositions-of-cones}
$ $
Recall that $\mathbb{R}_+$ (resp. $\mathbb{R}_-$) denotes the set of nonnegative (resp. nonpositive) real numbers.
\begin{remark}
Some of our terminology might be non-standard, adapted for the purposes of this paper; compare Remark \ref{rem:non-standard-terminology-hilbert-basis}.
\end{remark}
\subsubsection{Cones and sector decompositions}
\label{sssec:cones-and-sector-decompositions}
$ $
\begin{defn} $ $
\label{def:cone-defs}
\begin{itemize}
\item A \textit{(real) cone} $C \subset \mathbb{R}^k$ is a subset of $\mathbb{R}^k$ such that
\begin{equation*}
C = \left\{ \sum_{i=1}^m \lambda_i c_i; \quad \lambda_i \in \mathbb{R}_+ \right\}
\end{equation*}
for some finite set $\left\{ c_i \right\}_{i=1,2,\dots,m} \subset \mathbb{R}^k$, called a \textit{generating set} of $C$.
We also write $C=\mathrm{span}_{\mathbb{R}_+}(c_1, c_2, \dots, c_m)$ for the above equation.
\item The minimum number of elements of a generating set is called the \textit{rank} of the cone~$C$.
{\color{red}
A generating set $\left\{ c_i \right\}_i$ of minimum size is called a \textit{basis} of $C$.}
\item The subspace $\widetilde{C} \subset \mathbb{R}^k$ defined by
\begin{equation*}
\widetilde{C} = \left\{ \sum_{i} \widetilde{\lambda}_i c_i; \quad \widetilde{\lambda}_i \in \mathbb{R} \right\}
\end{equation*}
is independent of the choice of generating set $\left\{ c_i \right\}_i$, and its dimension is called the \textit{dimension} of the cone $C$. Note $ \mathrm{dim}(C) \leq \mathrm{rank}(C) < \infty$.
\item A cone $C$ is a \textit{sector} if $\mathrm{dim}(C)=\mathrm{rank}(C)$.
\item A cone $C \subset \mathbb{R}^k$ is \textit{full} if $\mathrm{dim}(C)=k$.
\end{itemize}
\end{defn}
\begin{example}
\label{ex:cone-examples}
$ $
\begin{enumerate}
\item {\color{red}\label{ex1:cone-examples} $C=\mathbb{R}$ is a full cone with basis $\left\{ 1, -1 \right\}$. Its rank is 2, and its dimension is 1, so it is not a sector.}
\item \label{ex2:cone-examples} {\color{red}$C=\mathbb{R}^2$ is a full cone with basis $\left\{ e_1, e_2, -e_1-e_2 \right\}$. Its rank is 3, and its dimension is 2, so it is not a sector. Here, $e_i$ is the $i$-th standard basis element.}
{\color{red}The subcone $C^\prime = \mathbb{R} e_1 \subset C$ is not full and is not a sector.}
\item \label{ex3:cone-examples} {\color{red}One checks that the full cone $C=\mathbb{R}^k$ has rank $k+1$, by taking as a basis the vertices of the $k$-dimensional simplex centered at the origin.}
\item \label{ex4:cone-examples} {\color{red}For $j=1,2,\dots,k$, the cone $C^{(j)} = \sum_{i=1}^j \mathbb{R}_+ e_i \subset \mathbb{R}^k$ is a sector of rank $j$. It is full only for $j=k$, namely when $C^{(j)}=C^{(k)} = \mathbb{R}_+^k$ is the positive orthant.}
\end{enumerate}
\end{example}
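Example (3) can also be verified numerically. The sketch below (Python/NumPy, an illustration rather than part of the text) uses the $k+1$ generators $e_1, \dots, e_k, -(e_1 + \cdots + e_k)$, which agree with the simplex vertices of Example (3) up to a linear change of coordinates, and checks that they positively span $\mathbb{R}^k$ for $k=3$:

```python
# Illustration (not in the text): the k+1 generators e_1, ..., e_k,
# -(e_1 + ... + e_k) positively span R^k.
import numpy as np

k = 3
gens = np.vstack([np.eye(k), -np.ones((1, k))])  # rows are the generators

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=k)
    # A real solution of lam @ gens = v, using that gens[:k] is the identity:
    lam = np.concatenate([v, [0.0]])
    # The generators sum to 0, so shifting lam by a constant vector keeps
    # lam @ gens = v; shift by -min(lam) to land in the nonnegative orthant.
    lam = lam - lam.min()
    assert np.allclose(lam @ gens, v) and (lam >= 0).all()
print("every sampled vector is a nonnegative combination")
```

The key design point is that the generators sum to zero, so nonnegativity of the coefficients can always be arranged by a uniform shift.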
\begin{remark}
{\color{red}Note that if a cone $C$ has rank $r$ and if $\left\{ c_i \right\}_i$ is a generating set for $C$, there does not necessarily exist a subset $\left\{ c_{i_j} \right\}_{j=1,2,\dots,r}$ that is a basis. For example, take $C=\mathbb{R}^2$ with generating set $\left\{ e_1, -e_1, e_2, -e_2 \right\}$.}
\end{remark}
\begin{defn}
\label{def:sector-decomp}$ $
\begin{itemize}
\item A \textit{sector decomposition} of a full cone $C \subset \mathbb{R}^k$ is a finite collection $\left\{ C_i \right\}_{i=1,2,\dots,p}$ of subcones $C_i \subset C$ satisfying:
\begin{itemize}
\item each $C_i$ is a full sector;
\item $C = C_1 \cup C_2 \cup \cdots \cup C_p$ is the union of the sectors $C_i$;
\item for all distinct $i,j \in \left\{ 1,2,\dots,p \right\}$, the intersection $C_i \cap C_j \subset \mathbb{R}^k$ has empty interior (namely, this intersection does not contain an open subset of $\mathbb{R}^k$).
\end{itemize}
\item If $C$ and $C^\prime$ are two full cones in $\mathbb{R}^k$, then the intersection $C \cap C^\prime$ is a \textit{wall} if $C \cap C^\prime$ is a cone of dimension $k-1$.
\end{itemize}
\end{defn}
\begin{example} $ $
\label{ex:second-cone-example}
\begin{enumerate}
\item
{\color{red}In Example \ref{ex:cone-examples}\eqref{ex1:cone-examples}, putting $C_1 = \mathbb{R}_+$ and $C_2 = \mathbb{R}_-$ yields a sector decomposition of the cone $C=\mathbb{R}$. The intersection $C_1 \cap C_2 =\left\{ 0 \right\}$ is a wall.}
\item \label{ex2prime:second-cone-example}
{\color{red}In Example \ref{ex:cone-examples}\eqref{ex2:cone-examples}, putting $C_1=\mathbb{R}_+ e_1 + \mathbb{R}_+ e_2$, $C_2=\mathbb{R}_+ e_2 + \mathbb{R}_+ (-e_1 - e_2)$, and $C_3=\mathbb{R}_+ (-e_1 - e_2) + \mathbb{R}_+ e_1$ yields a sector decomposition of $C=\mathbb{R}^2$. Each sector faces two walls, and each pairwise-distinct intersection $C_i \cap C_j$ is a wall.}
\item \label{ex2:second-cone-example} {\color{red}Again in Example \ref{ex:cone-examples}\eqref{ex2:cone-examples}, alternatively putting $C_1=\mathrm{span}_{\mathbb{R}_+}(e_1, e_2)$, $C_2 = \mathrm{span}_{\mathbb{R}_+}(e_2, -e_1)$, $C_3=\mathrm{span}_{\mathbb{R}_+}(-e_1, -e_2)$, and $C_4=\mathrm{span}_{\mathbb{R}_+}(-e_2, e_1)$ yields a sector decomposition of $C=\mathbb{R}^2$. Each sector faces two walls, the intersections $C_i \cap C_{i+1}$ are walls, but the intersections $C_1 \cap C_3 = C_2 \cap C_4 = \left\{ 0 \right\}$ are not walls.}
\item {\color{red}In Example \ref{ex:cone-examples}\eqref{ex4:cone-examples}, putting $C_1=C^{(k)} = \mathbb{R}_+^k$ yields a sector decomposition with a single sector.}
\end{enumerate}
\end{example}
\begin{observation}
\label{lem:stupid-lemma-about-walls}
If $\left\{ C_i \right\}_{i=1,2,\dots,p}$ is a sector decomposition of a full cone $C \subset \mathbb{R}^k$, and if $W=C_i \cap C_\ell$ is a wall, then there is no other pair of sectors giving this wall: $W=C_{i^\prime}\cap C_{\ell^\prime}$ if and only if $\left\{ i, \ell \right\} = \left\{ i^\prime, \ell^\prime \right\}$.
\end{observation}
This essentially follows since walls have codimension 1 in $\mathbb{R}^k$. {\color{red}We give a proof here for completeness.}
\begin{proof}[Proof of Observation \ref{lem:stupid-lemma-about-walls}]
{\color{red}Let the $k-1$ dimensional subcone $W \textcolor{purple}{\subset C}$ be generated by $\left\{ w_j \right\}_{j=1,2,\dots,m}$. Let $v_i$ in the $k$ dimensional cone $C_i$ be linearly independent from $\left\{ w_j \right\}_j$.}
{\color{red}Let $w$ be a fixed point in the interior of $W$, namely $w=\sum_{j=1}^m \lambda_j w_j$ for some $\lambda_j > 0$. Then there is $\epsilon_i > 0$ such that
\begin{equation*}
\label{eq:stupid-wall-lemma}
\tag{$\$$}
\left\{ w+\lambda^{(i)} v_i + \sum_{j=1}^m \widetilde{\lambda}_j w_j; \quad 0 \leq \lambda^{(i)} < \epsilon_i \quad \text{and} \quad \lambda_j-\epsilon_i < \widetilde{\lambda}_j < \lambda_j+\epsilon_i \right\} \subset C_i.
\end{equation*}
Let $v^\perp_i \in \widetilde{W}^\perp - \left\{ 0 \right\} \subset \mathbb{R}^k$ be the nonzero component of $v_i$ perpendicular to the subspace $\widetilde{W}$ (with respect to the standard inner product, \textcolor{purple}{say}); note $v_i^\perp$ is not necessarily in $C_i$. By shrinking $\epsilon_i$, we can arrange that \eqref{eq:stupid-wall-lemma} holds with $v_i$ replaced by $v_i^\perp$; denote the resulting subset \eqref{eq:stupid-wall-lemma} by $\overline{U}_i \subset C_i$. Similarly, define a subset $\overline{U}_\ell \subset C_\ell$ \textcolor{purple}{depending on some $v_\ell^\perp \neq 0 \in \widetilde{W}^\perp$ and} $\epsilon_\ell > 0$. By further shrinking, let us arrange that $\epsilon_i = \epsilon_\ell$.}
{\color{red}So far, we have not used the assumption that $k$ is the dimension of the ambient space $\mathbb{R}^k$. We now use this assumption, to note that $\widetilde{W}^\perp \subset \mathbb{R}^k$ is 1 dimensional, and that $\overline{U}_i$ and $\overline{U}_\ell$ have nonempty interiors in $\mathbb{R}^k$.}
{\color{red}Without loss of generality, assume $|v_i^\perp| \leq |v_\ell^\perp|$. If $v_\ell^\perp$ pointed in the same direction as $v_i^\perp$, then by construction $C_i \cap C_\ell \supset \overline{U}_i \cap \overline{U}_\ell = \overline{U}_i$ would have nonempty interior, violating the hypothesis that $C_i$ and $C_\ell$ are part of a sector decomposition of the full cone $C \subset \mathbb{R}^k$.
Thus, $v_\ell^\perp$ points in the opposite direction as $v_i^\perp$.}
{\color{red}\textcolor{purple}{
Now, assume $C_{i^\prime} \cap C_{\ell^\prime} = W$ as well;
define $\overline{U}_{i^\prime} \subset C_{i^\prime}$ and $\overline{U}_{\ell^\prime} \subset C_{\ell^\prime}$ as above. After possibly swapping indices, we have that $v_i^\perp$ and $v_{i^\prime}^\perp$ (resp. $v_\ell^\perp$ and $v_{\ell^\prime}^\perp$) point in the same direction. Arguing as above, it follows that $\overline{U}_i \cap \overline{U}_{i^\prime} \subset C_i \cap C_{i^\prime}$ and $\overline{U}_\ell \cap \overline{U}_{\ell^\prime}\subset C_\ell \cap C_{\ell^\prime}$ have nonempty interiors. Therefore, $i=i^\prime$ and $\ell=\ell^\prime$.}}
\end{proof}
\begin{remark}
{\color{red}Observation \ref{lem:stupid-lemma-about-walls} is false for higher codimension intersections. For instance, in Example \ref{ex:second-cone-example}\eqref{ex2:second-cone-example}, we have $C_1 \cap C_3 = C_2 \cap C_4 = \left\{ 0 \right\}$.}
\end{remark}
\subsubsection{Some technical statements about cones of the form $C \subset \mathbb{R}_+^k \times \mathbb{R}^n$}
\label{ssec:technical-statements}
$ $
\textcolor{purple}{This sub-subsection can be skipped until \S \ref{sssec:proof-of-main-theorem-2}.}
\begin{lem}
\label{lem:rank-lemma}
Let $C \subset \mathbb{R}_+^k \times \mathbb{R}^n$ be a cone satisfying the following properties:
\begin{itemize}
\item $e_i \in C$ for $i=1,2,\dots,k$, where $e_i$ is the $i$-th standard basis element of $ \mathbb{R}^k \times \mathbb{R}^n$;
\item $\pi_n(C) = \mathbb{R}^n$, where $\pi_n : \mathbb{R}^k \times \mathbb{R}^n \to \mathbb{R}^n$ is the natural projection.
\end{itemize}
Then, $\mathrm{dim}(C)=k+n$. Namely, $C$ is full.
\end{lem}
\begin{example*}[\textbf{part 1}]
The rank of such a cone $C$ as in Lemma \ref{lem:rank-lemma} can equal the dimension or be strictly greater. For example, $C=\mathbb{R}_+ \times \mathbb{R}$ has rank 3, whereas $C^\prime=\mathrm{span}_{\mathbb{R}_+}(\left\{ (1;1), (0;-1) \right\}) \subset \mathbb{R}_+ \times \mathbb{R}$ has rank 2.
\end{example*}
\begin{proof}[Proof of Lemma \ref{lem:rank-lemma}]
Note $\mathbb{R}_+^k \times \left\{ 0 \right\} \subset C$, so $\mathbb{R}^k \times \left\{ 0 \right\} \subset \widetilde{C}$; see Definition \ref{def:cone-defs}. The hypothesis $\pi_n(C)=\mathbb{R}^n$ thus implies $\left\{ 0 \right\} \times \mathbb{R}^n \subset \widetilde{C}$. It follows that $\widetilde{C}=\mathbb{R}^k \times \mathbb{R}^n$.
\end{proof}
\begin{lem}
\label{lem:second-cone-lemma}
Consider a full cone $C \subset \mathbb{R}_+^k \times \mathbb{R}^n$ as in Lemma {\upshape\ref{lem:rank-lemma}}. Let $\left\{ x_j \right\}_{j=1,2,\dots,m}$ be a finite subset of $C$ with $m \geq n$, and let $\left\{ J_i \right\}_{i=1,2,\dots,p}$ for some $p$ be a collection of index sets $J_i=\left\{j^{(i)}_1, j^{(i)}_2, \dots, j^{(i)}_{n} \right\} \subset \left\{ 1, 2, \dots, m \right\}$ of constant size $n$.
Assume in addition:
\begin{itemize}
\item $C=\cup_{i=1}^{p} C_i$ is the union of the subcones
\begin{equation*}
C_i
= \mathrm{span}_{\mathbb{R}_+}\left(
\left\{ e_1, e_2, \dots, e_k \right\}
\cup \left\{ x_j; \quad j \in J_i \right\}\right)
\quad \subset C \subset \mathbb{R}_+^k \times \mathbb{R}^n
\quad\quad \left( i=1,2,\dots,p \right);
\end{equation*}
\item the subcones
\begin{equation*}
D_i = \mathrm{span}_{\mathbb{R}_+}\left(\left\{ \pi_n(x_j); \quad j \in J_i \right\}\right)\quad\subset \mathbb{R}^n\quad\quad\left( i=1,2,\dots,p \right)
\end{equation*}
are full sectors forming a sector decomposition $\left\{ D_i \right\}_{i=1,2,\dots,p}$ of $\mathbb{R}^n$ (Definition {\upshape\ref{def:sector-decomp}}).
\end{itemize}
Then, the subcones $C_i \subset C$ are full sectors forming a sector decomposition $\left\{ C_i \right\}_{i=1,2,\dots,p}$ of~$C$.
Also, the sector $C_i$ projects via $\pi_n$ to the sector $D_i$. Moreover, $\pi_n(C_i \cap C_\ell)=D_i \cap D_{\ell}$ for all pairwise distinct $i, \ell$.
\end{lem}
\begin{example*}[\textbf{part 2}]
For $C \subset \mathbb{R}_+ \times \mathbb{R}$ as in part 1, put $x_1=(0;1)$ and $x_2=(0;-1)$. For $C^\prime \subset \mathbb{R}_+ \times \mathbb{R}$ as in part 1, put $x^\prime_1=(1;1)$ and $x^\prime_2=(0;-1)$. In both cases, put $J_1=J^\prime_1=\left\{ 1 \right\}$ and $J_2=J^\prime_2=\left\{ 2 \right\}$.
Then, in both cases, $D_1=D_1^\prime=\mathbb{R}_+$ and $D_2=D_2^\prime=\mathbb{R}_-$. For $C$, we have $C_1=\mathbb{R}_+ \times \mathbb{R}_+$ and $C_2=\mathbb{R}_+ \times \mathbb{R}_-$. For $C^\prime$, we have $C^\prime_1=\mathrm{span}_{\mathbb{R}_+}(\left\{(1;0), (1;1)\right\})$ and $C^\prime_2=\mathbb{R}_+ \times \mathbb{R}_-$.
Lastly, $C_1 \cap C_2=C_1^\prime \cap C_2^\prime=\mathbb{R}_+ \times \left\{ 0 \right\}$, which projects by $\pi_1$ to $\left\{0\right\} = D_1 \cap D_2=D_1^\prime \cap D_2^\prime$.
\end{example*}
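A quick numerical sanity check of part 2 (a Python sketch; the helper \texttt{in\_cone} is ours, not notation from the paper) confirms on sample points that the common wall of $C^\prime_1$ and $C^\prime_2$ is $\mathbb{R}_+ \times \left\{ 0 \right\}$:

```python
# Sketch (not part of the paper): membership test for the 2-generator
# cones of part 2.
import numpy as np

def in_cone(g1, g2, p, tol=1e-9):
    # p lies in span_{R_+}(g1, g2) iff the (unique, since g1 and g2 are
    # linearly independent here) solution of lam1*g1 + lam2*g2 = p
    # has nonnegative coordinates.
    lam = np.linalg.solve(np.column_stack([g1, g2]), p)
    return (lam >= -tol).all()

C1p = (np.array([1.0, 0.0]), np.array([1.0, 1.0]))   # generators of C'_1
C2p = (np.array([1.0, 0.0]), np.array([0.0, -1.0]))  # generators of C'_2

# Points on the nonnegative x-axis lie in both cones:
assert in_cone(*C1p, np.array([2.0, 0.0])) and in_cone(*C2p, np.array([2.0, 0.0]))
# Points off the x-axis lie in at most one of the two cones:
assert not in_cone(*C1p, np.array([1.0, -0.5]))  # below the axis: only in C'_2
assert not in_cone(*C2p, np.array([1.0, 0.5]))   # above the axis: only in C'_1
print("wall is R_+ x {0}")
```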
\begin{proof}[Proof of Lemma \ref{lem:second-cone-lemma}]
By construction $\pi_n(C_i)=D_i$, so $\pi_n(C_i \cap C_\ell) \subset D_i \cap D_\ell$. The reverse inclusion requires a short argument. Consider a general element
\begin{equation*}
y=\pi_n \left( \sum_{j \in J_i} \lambda_j x_j \right)
= \pi_n \left( \sum_{j \in J_\ell} \lambda^\prime_j x_j \right)
\quad \in D_i \cap D_\ell \subset \mathbb{R}^n
\quad\quad \left( \lambda_j, \lambda^\prime_j \in \mathbb{R}_+ \right).
\end{equation*}
For $k^\ast = 1,2,\dots, k$, the $k^\ast$-coordinate of $v_i := \sum_{j \in J_i} \lambda_j x_j \in C_i$ is either $\leq$ or $\geq$ the $k^\ast$-coordinate of $v_\ell := \sum_{j \in J_\ell} \lambda^\prime_j x_j \in C_\ell$; without loss of generality, assume it is $\leq$. Putting $\alpha \geq 0$ to be the difference of these $k^\ast$-coordinates, we have that $v^\ast_i := v_i + \alpha e_{k^\ast} \in C_i$ has the same $k^\ast$-coordinate as $v_\ell$ and still satisfies $\pi_n(v^\ast_i)=y=\pi_n(v_\ell) \in D_i \cap D_\ell$. We then put $v^\ast_i$ to be the new $v_i$, and reiterate this procedure for each $k^\ast$, updating $v_i$ or $v_\ell$ at each step. The end result is that $v_i = v_\ell \in C_i \cap C_\ell$ and $\pi_n(v_i)=y=\pi_n(v_\ell)$ as desired.
We move on to establishing that $\left\{ C_i \right\}_i$ is a sector decomposition of $C$.
To see that $C_i \subset \mathbb{R}^k \times \mathbb{R}^n$ is a full cone, use the hypothesis that $D_i$ is full in $\mathbb{R}^n$, and proceed as in the proof of Lemma \ref{lem:rank-lemma}. By definition, $C_i$ has a generating set consisting of $k+n$ elements. Thus, $k+n=\mathrm{dim}(C_i)\leq\mathrm{rank}(C_i)\leq k+n$, so $C_i$ is a sector.
That $C=\cup_i C_i$ is by hypothesis.
It remains to show that $C_i \cap C_\ell$ has empty interior for all pairwise-distinct $i, \ell$. Suppose otherwise, so that there is a nonempty open subset $U \subset \mathbb{R}^k \times \mathbb{R}^n$ such that $U \subset C_i \cap C_\ell$. Since projections are open maps, $\pi_n(U)$ is a nonempty open subset of $\mathbb{R}^n$ contained in $D_i \cap D_\ell$, contradicting that $\left\{ D_i \right\}_i$ is a sector decomposition.
\end{proof}
\begin{lem}
\label{lem:third-cone-lemma}
Let the full cone $C \subset \mathbb{R}_+^k \times \mathbb{R}^n$, the sector decomposition $\left\{ D_i \right\}_{i=1,2,\dots,p}$ of $\mathbb{R}^n$, and the sector decomposition $\left\{ C_i \right\}_{i=1,2,\dots,p}$ of $C$ be as in Lemma {\upshape\ref{lem:second-cone-lemma}}.
Assume in addition:
\begin{itemize}
\item for each $i, \ell$ such that $D_i \cap D_\ell$ is a wall in $\mathbb{R}^n$ (Definition {\upshape\ref{def:sector-decomp}}), we have more specifically that the intersection $J_i \cap J_\ell \subset \left\{ 1, 2, \dots, m \right\}$ of index sets has $n-1$ elements, and
\begin{equation*}
D_i \cap D_\ell = \mathrm{span}_{\mathbb{R}_+}
\left(
\left\{ \pi_n(x_j); \quad j \in J_i \cap J_\ell \right\}
\right)
\quad \subset \mathbb{R}^n.
\end{equation*}
\end{itemize}
Then, for any $i, \ell$, we have that $C_i \cap C_\ell \subset \mathbb{R}_+^k \times \mathbb{R}^n$ is a wall in $C$ if and only if $D_i \cap D_\ell \subset \mathbb{R}^n$ is a wall in $\mathbb{R}^n$. In particular, if this is the case for a given $i, \ell$, then
\begin{equation*}
\label{eq:technical-equation-intersection}
\tag{$\%$}
C_i \cap C_\ell = \mathrm{span}_{\mathbb{R}_+}\left( \left\{ e_1, e_2, \dots, e_k \right\} \cup \left\{ x_j; \quad j \in J_i \cap J_\ell \right\} \right) \quad \subset \mathbb{R}_+^k \times \mathbb{R}^n.
\end{equation*}
This furnishes a one-to-one correspondence $\left\{ \text{walls of } \left\{ D_i \right\}_i \text{ in } \mathbb{R}^n \right\} \leftrightarrow \left\{ \text{walls of } \left\{ C_i \right\}_i \text{ in } C \right\}$.
\end{lem}
\begin{example*}[\textbf{part 3}]
For $C, C^\prime \subset \mathbb{R}_+ \times \mathbb{R}$ as in part 2, the sole wall in $\mathbb{R}$ is $\left\{ 0 \right\}$, corresponding to the sole wall $\mathbb{R}_+ \times \left\{ 0 \right\}$ in $C, C^\prime$.
As a non-example, where the hypothesis and part of the conclusion, \eqref{eq:technical-equation-intersection}, fail: In $\mathbb{R}_+ \times \mathbb{R}^2$ let $x_{1}=(0;0,1)$, $x_{2}=(1;1,0)$, $x_{3}= (0;0,-1)$, $x_{4}=(2;1,0)$, $x_5=(0;-1,0)$. Put $J_1=\left\{1,2\right\}$, $J_2=\left\{3,4\right\}$, $J_3=\left\{1,5\right\}$, $J_4=\left\{3,5\right\}$. Put $C=\cup_{i=1}^4 C_i$. Then $D_1 \cap D_2 = \mathbb{R}_+ \times \left\{ 0 \right\} \subset \mathbb{R}^2$ is a wall, but $J_1 \cap J_2 = \emptyset$ and $C_1 \cap C_2 = \mathrm{span}_{\mathbb{R}_+}(\left\{ e_1, x_4 \right\} \supsetneq \mathrm{span}_{\mathbb{R}_+}(\left\{ e_1 \right\})$. Note that $x_2 \in \pi_2^{-1}(D_1 \cap D_2) - (C_1 \cap C_2)$. On the other hand, $D_3 \cap D_4 = \mathbb{R}_- \times \left\{ 0 \right\}$, $J_3 \cap J_4 = \left\{ 5 \right\}$, and $C_3 \cap C_4 = \mathbb{R}_+ \times \mathbb{R}_{-} \times \left\{ 0 \right\} = \mathrm{span}_{\mathbb{R}_+}(\left\{ e_1, x_5 \right\})$. Moreover, $\pi_2^{-1}(D_3 \cap D_4)=C_3 \cap C_4$.
\end{example*}
\begin{proof}[Proof of Lemma \ref{lem:third-cone-lemma}]
Assume first that $C_i \cap C_\ell$ is a wall in $C$. By Lemma \ref{lem:second-cone-lemma}, $\pi_n(C_i \cap C_\ell)=D_i \cap D_\ell$. Thus, since $C_i \cap C_\ell$ is a cone, we have that $D_i \cap D_\ell$ is a cone. It follows that the subspace $\widetilde{D}_{i\ell} := \widetilde{D_i \cap D_\ell} \subset \mathbb{R}^n$ is equal to the projection under $\pi_n$ of the subspace $\widetilde{C}_{i\ell} := \widetilde{C_i \cap C_\ell} \subset \mathbb{R}^k \times \mathbb{R}^n$ (Definition \ref{def:cone-defs}), the latter being $k+n-1$ dimensional by assumption. By definition of $C_i$ and $C_\ell$, we have $\mathbb{R}^k \times \left\{ 0 \right\} \subset \widetilde{C}_{i\ell}$. Thus $\widetilde{C}_{i\ell} = \mathbb{R}^k \times \pi_n(\widetilde{C}_{i\ell})=\mathbb{R}^k \times \widetilde{D}_{i\ell}$. Hence $\widetilde{D}_{i\ell}$ is $n-1$ dimensional, as desired.
Conversely, assume $D_i \cap D_\ell$ is a wall in $\mathbb{R}^n$. Let $j_i$ (resp. $j_\ell$) be the unique index in $J_i - (J_i \cap J_\ell)$ (resp. $J_\ell - (J_i \cap J_\ell)$). Since $D_i$ (resp. $D_\ell$) has dimension $n$, the set of vectors $\left\{ \pi_n(x_j); j \in J_i \right\}$ (resp. $\left\{ \pi_n(x_j); j \in J_\ell \right\}$) in $\mathbb{R}^n$ is linearly independent. Since $D_i \cap D_\ell \subset \textcolor{purple}{\text{(in fact, equals) }} \mathrm{span}_{\mathbb{R}_+}(\left\{\pi_n(x_j); j \in J_i \cap J_\ell\right\})$ by hypothesis, it follows that any expression in $\mathbb{R}^n$ of the form
\begin{equation*}
\lambda_{j_i} \pi_n(x_{j_i}) + \sum_{j \in J_i \cap J_\ell} \lambda_j \pi_n(x_j) \quad \in D_i \cap D_\ell
\quad\quad \text{ or } \quad\quad
\lambda^\prime_{j_\ell} \pi_n(x_{j_\ell}) + \sum_{j \in J_i \cap J_\ell} \lambda^\prime_j \pi_n(x_j) \quad \in D_i \cap D_\ell
\end{equation*}
forces $\lambda_{j_i}=0$ or $\lambda^\prime_{j_\ell}=0$, respectively.
Consequently, since $\pi_n(C_i \cap C_\ell) \subset D_i \cap D_\ell$ (in fact, this is an equality by Lemma \ref{lem:second-cone-lemma}), any expression in $C \subset \mathbb{R}_+^k \times \mathbb{R}^n$ of the form
\begin{equation*}
\sum_{r=1}^k \eta_r e_r + \lambda_{j_i} x_{j_i} + \sum_{j \in J_i \cap J_\ell} \lambda_j x_j
=
\sum_{r=1}^k \eta^\prime_r e_r + \lambda^\prime_{j_\ell} x_{j_\ell} + \sum_{j \in J_i \cap J_\ell} \lambda^\prime_j x_j
\quad \in C_i \cap C_\ell
\quad\quad
\left(
\eta_r, \lambda_{j_i}, \lambda_j, \eta^\prime_r, \lambda^\prime_{j_\ell}, \lambda^\prime_j \in \mathbb{R}_+
\right)
\end{equation*}
implies, by applying $\pi_n$, that $\lambda_{j_i}=\lambda^\prime_{j_\ell}=0$. We gather that the $\subset$ inclusion of \eqref{eq:technical-equation-intersection} is true. Since the reverse inclusion holds by the definitions of $C_i$ and $C_\ell$, we have that \eqref{eq:technical-equation-intersection} is true. In particular, $C_i \cap C_\ell$ is a cone.
To finish, we show $C_i \cap C_\ell$ is $k+n-1$ dimensional. By the same argument as in the first paragraph of this proof, we have $\widetilde{C}_{i \ell}=\mathbb{R}^k \times \widetilde{D}_{i \ell}$. Since $\widetilde{D}_{i \ell}$ is $n-1$ dimensional by hypothesis, the claim is true.
It remains to construct the desired bijection $f: \left\{ \text{walls of } \left\{ D_i \right\}_i \text{ in } \mathbb{R}^n \right\} \to \left\{ \text{walls of } \left\{ C_i \right\}_i \text{ in } C \right\}$. Indeed, let $W=D_i \cap D_\ell$ be a wall in $\mathbb{R}^n$. Put $f(W)$ to be the wall $C_i \cap C_\ell$ in $C$. By Observation \ref{lem:stupid-lemma-about-walls}, the choice of $\left\{ i, \ell \right\}$ such that $W=D_i \cap D_\ell$ is unique, so $f$ is well-defined. Conversely, if $W^\prime \subset \mathbb{R}_+^k \times \mathbb{R}^n$ is a wall of $\left\{ C_i \right\}_i$ in $C$, put $g(W^\prime)=\pi_n(W^\prime)\subset \mathbb{R}^n$. Since $W^\prime = C_i \cap C_\ell$ for some $i, \ell$ by definition, we have that $g(W^\prime)=\pi_n(C_i \cap C_\ell)=D_i \cap D_\ell$ is a wall in $\mathbb{R}^n$. Thus, we have defined a function $g : \left\{ \text{walls of } \left\{ C_i \right\}_i \text{ in } C \right\} \to \left\{ \text{walls of } \left\{ D_i \right\}_i \text{ in } \mathbb{R}^n \right\}$ (note we did not need to use Observation \ref{lem:stupid-lemma-about-walls} for this direction). It is immediate from the construction that $f$ and $g$ are inverses of each other.
\end{proof}
\subsubsection{Cone completions}
\label{sssec:cone-completions}
$ $
\textcolor{purple}{Unlike \S \ref{ssec:technical-statements}, this sub-subsection will be used in \S \ref{ssec:sector-decomposition-of-the-KTGS-cone-for-the-triangle-and-the-square}.}
\begin{defn}
\label{def:completion}
Let $\mathscr{M} \subset \mathbb{R}^k$ be a submonoid (Definition \ref{def:submonoid}) having a finite $\mathbb{Z}_+$-spanning set $\left\{ c_i \right\}_{i=1,2,\dots,m}$ (we say $\mathscr{M}$ is \textit{finitely generated}). Then, its \textit{completion} $\overline{\mathscr{M}} \subset \mathbb{R}^k$ is the corresponding real cone with the same generating set $\left\{ c_i \right\}_i$ (\textcolor{purple}{this} is independent of the choice of generating set).
\end{defn}
By Proposition \ref{prop:irreducible-elements}, we immediately have:
\begin{observation}
\label{obs:rank-hilbert-basis-estimate}
Let $\mathscr{C} \subset \mathbb{Z}_+^k$ be a positive integer cone \textcolor{purple}{(Definition {\upshape\ref{def:positive-integer-cone}})} admitting a Hilbert basis \textcolor{purple}{$\mathscr{H} \subset \mathscr{C}$} (Definition {\upshape\ref{def:hilbert-basis}}). Then, the rank of its completion $\overline{\mathscr{C}} \subset \mathbb{R}^k$ is less than or equal to the \textcolor{purple}{number of elements of the Hilbert basis $\mathscr{H}$}.
\end{observation}
\begin{example} $ $
\label{ex:hib-basis-cone-examples}
\begin{enumerate}
\item In Example \ref{ex:hilbbasis}\eqref{ex1:hilbbasis}, we see $\overline{\mathscr{C}}=\mathbb{R}_+^k$ and $\mathrm{dim}(\overline{\mathscr{C}})=\mathrm{rank}(\overline{\mathscr{C}})=k=|\mathscr{H}|$.
\item In Example \ref{ex:hilbbasis}\eqref{ex2:hilbbasis}, we see that $\overline{\mathscr{C}}=\mathbb{R}_+^2$ and $\mathrm{dim}(\overline{\mathscr{C}})=\mathrm{rank}(\overline{\mathscr{C}})=2<3=|\mathscr{H}|$.
\item In Example \ref{ex:hilbbasis}\eqref{ex3:hilbbasis}, we have that $\overline{\mathscr{C}}=\mathrm{span}_{\mathbb{R}_+}(\mathscr{H}) \subset \mathbb{R}_+^3$ and one checks that $\mathrm{dim}(\overline{\mathscr{C}})=3<4=\mathrm{rank}(\overline{\mathscr{C}})=|\mathscr{H}|$, as no point of $\mathscr{H}$ lies in the $\mathbb{R}_+$-span of the other three points.
\end{enumerate}
\end{example}
\begin{lem}
\label{lem:actual-geometry}
Let $\mathscr{M} \subset \mathbb{R}^k$ be a finitely generated monoid. Assume there are finitely generated submonoids $\mathscr{M}_1, \mathscr{M}_2, \dots, \mathscr{M}_p \subset \mathscr{M}$ such that $\mathscr{M}=\cup_{i=1}^p \mathscr{M}_i$. Then, $\overline{\mathscr{M}}=\cup_{i=1}^p \overline{\mathscr{M}}_i$.
\end{lem}
\begin{proof}
The inclusion $\supset$ is by definition. Let $\left\{ c_j \right\}_{j=1,2,\dots,m}$ be a finite generating set for $\mathscr{M}$. Put $C = \sum_{j=1}^m |c_j|$, where $|c_j|$ is the Euclidean length in $\mathbb{R}^k$. Then for each $x \in \overline{\mathscr{M}}$, there exists some $y \in \mathscr{M}$ such that $|x-y| \leq C$. Indeed, writing $x=\sum_{j=1}^m \lambda_j c_j$, put $y=\sum_{j=1}^m n_j c_j \textcolor{purple}{\in \mathscr{M}}$ where $n_j \geq 0$ is the largest integer less than or equal to $\lambda_j$. We have that
\begin{equation*}
|x-y|=\left| \sum_{j=1}^m (\lambda_j - n_j) c_j \right|
\leq \sum_{j=1}^m (\lambda_j - n_j) |c_j| \leq C.
\end{equation*}
We now argue by contradiction. Put $A=\cup_{i=1}^p \overline{\mathscr{M}}_i$ and suppose there is $x \in \overline{\mathscr{M}} - A$. Note $x \neq 0$. Let $\pi : \mathbb{R}^k - \left\{ 0 \right\} \to S^{k-1}$ be the natural projection onto the unit sphere. It suffices to show there exists an open ball $B \subset \mathbb{R}^k$ such that $\pi(x) \in B \cap S^{k-1}$ and such that $\pi(A\textcolor{purple}{-\left\{0\right\}})$ does not intersect $B \cap S^{k-1}$. Indeed, since $\mathscr{M} \subset A$ by hypothesis, this \textcolor{purple}{would} contradict that there is some point of $\mathscr{M}$ in $\pi^{-1}(B \cap S^{k-1}) \textcolor{purple}{\subset \mathbb{R}^k}$ at distance at most $C$ from a point, sufficiently far from the origin, in the ray $\mathbb{R}_{>0} x \subset \overline{\mathscr{M}} \cap \pi^{-1}(B \cap S^{k-1})$.
Suppose such a $B$ does not exist. Then, there is a sequence $(a_n)$ in $A\textcolor{purple}{-\left\{0\right\}}$ such that $\mathrm{lim}_{n \to \infty} \pi(a_n)=\pi(x)$ in $S^{k-1}$. Since the subcones $\overline{\mathscr{M}}_i$ are finitely generated, they are closed subsets of $\mathbb{R}^k$; thus, $A$ is closed as well. It follows that the intersection
\begin{equation*}
A^c = A \cap \left\{ z \in \mathbb{R}^k; \quad \frac{1}{2} \leq |z| \leq \frac{3}{2} \right\}
\end{equation*}
is compact; thus, $\pi(A^c) \subset S^{k-1}$ is compact as well. In particular, $\pi(A^c)$ is closed in $S^{k-1}$.
Now, since the ray through any point of $A$ intersects $A^c$, we have $\pi(a_n) \in \pi(A^c)$ for all $n$. It follows by the closedness of $\pi(A^c)$ that $\pi(x) \in \pi(A^c)$; in particular, the ray through $x$ intersects $A=\cup_{i=1}^p \overline{\mathscr{M}}_i$. But this ray cannot intersect any subcone $\overline{\mathscr{M}}_i$, as $\overline{\mathscr{M}}_i$ is closed under scaling by $\mathbb{R}_+$ \textcolor{purple}{and} $x \notin A$ by assumption. This is a contradiction.
\end{proof}
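The approximation step in the first paragraph of this proof can be illustrated with a toy computation. The sketch below (in Python) uses a hypothetical pair of generators in $\mathbb{R}^2$, chosen for illustration and not taken from the text, and checks the bound $|x-y| \leq C$ for one choice of coefficients $\lambda_j$.

```python
import math

# Toy illustration of the approximation step in the proof of the lemma.
# The generators below are an arbitrary (hypothetical) choice, not from the paper.
generators = [(2.0, 1.0), (1.0, 3.0)]
C = sum(math.hypot(*c) for c in generators)   # C = sum of Euclidean lengths |c_j|

lambdas = [2.7, 1.4]                          # x = sum_j lambda_j c_j, in the completion
ns = [math.floor(l) for l in lambdas]         # y = sum_j n_j c_j, a point of the monoid

x = [sum(l * c[k] for l, c in zip(lambdas, generators)) for k in range(2)]
y = [sum(n * c[k] for n, c in zip(ns, generators)) for k in range(2)]

# Since 0 <= lambda_j - n_j < 1, the distance |x - y| is at most C.
assert math.hypot(x[0] - y[0], x[1] - y[1]) <= C
print("distance from x to the monoid point y is at most C")
```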
\subsection{Sector decomposition of the KTGS cone for the triangle and the square}
\label{ssec:sector-decomposition-of-the-KTGS-cone-for-the-triangle-and-the-square}
$ $
Recall the notion of a sector decomposition $\left\{ C_i \right\}_i$ of a full cone $C \subset \mathbb{R}^k$, and of a wall between two full cones; see Definition \ref{def:sector-decomp}.
\begin{defn}
\label{def:completed-ktgs-cone}
Let $\widehat{S}$ be a marked surface, and \textcolor{purple}{let} $\mathscr{T}$ \textcolor{purple}{be} an ideal triangulation of $\widehat{S}$. The \textit{completed Knutson-Tao-Goncharov-Shen cone} $C_\mathscr{T}$ is the completion $C_\mathscr{T}= \overline{\mathscr{C}}_\mathscr{T} \subset \mathbb{R}_+^N$ of the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^N$; see Definitions \ref{def:completion}, \ref{def:KTGS-cone} and Proposition \ref{prop:KTGS-is-positive-integer-cone}.
\end{defn}
\subsubsection{Sector decomposition for the triangle}
\label{sssec:sector-decomposition-for-the-triangle}
$ $
Let $S=\Delta$ be the ideal triangle. We will use the notation of \S \ref{sssec:hilbert-basis-for-the-triangle}.
\begin{prop}
\label{prop:sector-decomp-triangle}
The completed KTGS cone $C_\Delta \subset \mathbb{R}_+^7$ is $7$-dimensional. Putting
\begin{align*}
\mathscr{C}_\Delta^{out}&=\mathrm{span}_{\mathbb{Z}_+}\left( \left\{ \Phi_\Delta(W^H); \quad
W^H = L_a, R_a, L_b, R_b, L_c, R_c, T_{out} \right\} \right)
&&\subset \mathscr{C}_\Delta&&\subset \mathbb{Z}_+^7,
\\ \mathscr{C}_\Delta^{in}&=\mathrm{span}_{\mathbb{Z}_+}\left( \left\{ \Phi_\Delta(W^H); \quad
W^H = L_a, R_a, L_b, R_b, L_c, R_c, T_{in} \right\} \right)
&&\subset \mathscr{C}_\Delta&&\subset \mathbb{Z}_+^7,
\\ \textcolor{purple}{C_\Delta^{out}} &\textcolor{purple}{= \overline{\mathscr{C}}_\Delta^{out},
\quad
C_\Delta^{in} = \overline{\mathscr{C}}_\Delta^{in}
\quad\quad \subset C_\Delta \quad \subset \mathbb{R}_+^7,}
\end{align*}
yields a sector decomposition $\left\{ C_\Delta^{out}, C_\Delta^{in} \right\}$ of $C_\Delta$. Moreover, $C_\Delta^{out} \cap C_\Delta^{in}$ is a $6$-dimensional wall, generated by the cone points $\Phi_\Delta(W^H)$ corresponding to the $6$ corner arcs \textcolor{purple}{in $\mathscr{W}_\Delta$}.
\end{prop}
\begin{proof}
This is a consequence of \cite[Proposition 45]{DouglasArxiv20} and its proof.
Indeed, by \cite[Proposition 45]{DouglasArxiv20} we have that $\mathscr{C}_\Delta = \mathscr{C}_\Delta^{out} \cup \mathscr{C}_\Delta^{in}$. \textcolor{purple}{Thus, $C_\Delta = C_\Delta^{out} \cup C_\Delta^{in}$ by Lemma \ref{lem:actual-geometry}.} \textcolor{purple}{Also, again by \cite[Proposition 45]{DouglasArxiv20} (or elementary linear algebra)}, the cones $C_\Delta$, $C_\Delta^{out}$, $C_\Delta^{in} \subset \mathbb{R}_+^7$ are full; in particular, $C_\Delta^{out}$ and $C_\Delta^{in}$ are sectors.
As \textcolor{purple}{mentioned in passing during} the proof of Proposition \ref{prop:hilb-base-triangle}, by the proof of \cite[Proposition 45]{DouglasArxiv20} there is a linear isomorphism $f : \mathbb{R}^7 \to \mathbb{R}^7$ satisfying:
\begin{itemize}
\item $f(C_\Delta) \subset \mathbb{R}_+^6 \times \mathbb{R}$;
\item $f$ sends the corner arcs points $\Phi_\Delta(W^H)$ for $W^H=L_a, R_a, L_b, R_b, L_c, R_c$ to the \textcolor{purple}{first} six standard basis elements $e_i$ of $\mathbb{R}^6 \times \mathbb{R}$ for $i=1,2,\dots,6$;
\item $f(\Phi_\Delta(T_{in}))=(0,0,0,0,0,0;1)$ and $f(\Phi_\Delta(T_{out}))=(0,1,0,1,0,1;-1)$.
\end{itemize}
It follows that $f(C_\Delta^{out}) \cap f(C_\Delta^{in}) = \mathbb{R}_+^6 \times \left\{ 0 \right\}$ has empty interior; moreover, it is a 6-dimensional wall. As these properties are preserved by isomorphisms of $\mathbb{R}^7$, we conclude \textcolor{purple}{the~result.}
\end{proof}
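As a computational sanity check (not part of the proof), the dimension counts above can be verified directly from the images under $f$ displayed in the bullet list; a minimal Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def rank(vectors):
    # Exact rank via Gaussian elimination over the rationals.
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    r, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
        if r == len(rows):
            break
    return r

# Images under f of the sector generators, as listed in the proof:
corner_arcs = [[1 if j == i else 0 for j in range(7)] for i in range(6)]  # e_1, ..., e_6
t_in  = [0, 0, 0, 0, 0, 0, 1]    # f(Phi_Delta(T_in))
t_out = [0, 1, 0, 1, 0, 1, -1]   # f(Phi_Delta(T_out))

assert rank(corner_arcs + [t_out]) == 7   # C_Delta^out is a full cone in R^7
assert rank(corner_arcs + [t_in]) == 7    # C_Delta^in is a full cone in R^7
assert rank(corner_arcs) == 6             # the wall is 6-dimensional
print("sector dimension counts verified")
```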
\begin{cor}
\label{cor:rank-for-triangle}
The rank \textcolor{purple}{(Definition {\upshape\ref{def:cone-defs}})} of the completed KTGS cone $C_\Delta \subset \mathbb{R}_+^7$ is $8$.
\end{cor}
\begin{proof}
By Proposition \ref{prop:hilb-base-triangle}, the Hilbert basis $\mathscr{H}_\Delta$ of the positive integer cone $\mathscr{C}_\Delta \subset \mathbb{Z}_+^7$ has 8 elements. It follows by Observation \ref{obs:rank-hilbert-basis-estimate} that $\mathrm{rank}(C_\Delta) \leq 8$.
By Proposition \ref{prop:sector-decomp-triangle}, $\mathrm{rank}(C_\Delta) \geq 7$. We show there is no generating set with 7 elements.
We again work in the isomorphic cone $C := f(C_\Delta) \subset \mathbb{R}_+^6 \times \mathbb{R}$ from the proof of Proposition \ref{prop:sector-decomp-triangle}. \textcolor{purple}{Suppose} $\left\{ c_i \right\}_{i=1,2,\dots,7}$ \textcolor{purple}{is} a generating set of $C$. \textcolor{purple}{Let $\pi_6 : \mathbb{R}^6 \times \mathbb{R} \to \mathbb{R}^6$ be the natural projection.} Since \textcolor{purple}{the standard basis element} $e_i$ is in $C$ for $i=1,2,\dots,6$, and since $\pi_6(C) \subset \mathbb{R}_+^6$ (in fact, equality holds), \textcolor{purple}{after possibly changing indices} we can assume, for $i=1,2,\dots,6$, that $c_i = (e^{(6)}_i; \alpha_i)$ where $e^{(6)}_i$ is the $i$-th standard basis element of $\mathbb{R}^6$ and $\alpha_i \in \mathbb{R}$.
Since, by above, $C$ has a generating set where \textcolor{purple}{just the one generator} $(0,1,0,1,0,1;-1)$ has a negative last coordinate, it follows that any point of $C$ with a negative last coordinate has positive second, fourth, and sixth coordinates. Thus, $\alpha_i \geq 0$ for $i=1,2,\dots,6$.
The \textcolor{purple}{remaining} generator $c_7$ must therefore have a negative last coordinate, hence positive second, fourth, and sixth coordinates. In any $\mathbb{R}_+$-combination of $\left\{ c_i \right\}_{i=1,2,\dots,7}$ equal to $(0,0,0,0,0,0;1)$, the second coordinate forces the coefficient of $c_7$ to vanish, and then the first six coordinates force the coefficients of $c_1, c_2, \dots, c_6$ to vanish, leaving last coordinate $0 \neq 1$. We conclude $(0,0,0,0,0,0;1) \in C$ cannot be generated with $\left\{ c_i \right\}_{i=1,2,\dots,7}$.
\end{proof}
\subsubsection{Sector decomposition for the square}
\label{sssec:sector-decomposition-for-the-square}
$ $
Let $S=\Box$ be the ideal square, equipped with an ideal triangulation $\mathscr{T}$, namely a choice of diagonal. We will use the notation of \S \ref{sssec:hilbert-basis-for-the-square-statement}. In particular, recall the 22 Hilbert basis webs $W^H_j$ in $\mathscr{W}_\Box$ ($j=1,2,\dots,22$); \textcolor{purple}{see} Figure \ref{figure:square1}.
We define 42 subcones $C_\mathscr{T}^i \subset C_\mathscr{T}$ ($i=1,2,\dots,42$) of the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ as follows. First, define 42 \textcolor{purple}{web} subsets $\mathscr{Q}_i \subset \mathscr{W}_\Box$ ($i=1,2,\dots,42$), each \textcolor{purple}{containing} four \textcolor{purple}{Hilbert basis webs $W^H$}, by:
\begin{align*}
\mathscr{Q}_1&=\{[T_{in}, L_b],[R_c, T_{out}],[R_c, L_b],[R_b, L_c]\}&
\mathscr{Q}_2&=\{[T_{out}, L_c],[R_c, T_{out}],[R_c, L_b],[R_b, L_c]\}
\\ \mathscr{Q}_3&=\{[T_{in}, L_b],[R_b, L_c],[R_b, T_{in}],[R_c, L_b]\}&
\mathscr{Q}_4&=\{[T_{out}, L_c],[R_b, L_c],[R_b, T_{in}],[R_c, L_b]\}
\\ \mathscr{Q}_5&=\{[T_{in}, T_{out}],[T_{in}, L_b],[R_c, T_{out}],[R_b, L_c]\}&
\mathscr{Q}_6&=\{[L_b, T_{out}],[T_{in}, T_{out}],[T_{in}, R_c],[R_b, L_c]\}
\\ \mathscr{Q}_7&=\{[T_{in}, L_b],[L_c, R_b],[T_{in}, T_{out}],[R_c, T_{out}]\}&
\mathscr{Q}_8&=\{[T_{in}, T_{out}],[L_c,R_b],[L_b, T_{out}],[T_{in}, R_c]\}
\\ \mathscr{Q}_9&=\{[T_{out}, R_b],[L_c,R_b],[L_c, T_{in}],[L_b, R_c]\}&
\mathscr{Q}_{10}&=\{[T_{in}, R_c],[L_c,R_b],[L_c, T_{in}],[L_b, R_c]\}
\\ \mathscr{Q}_{11}&=\{[T_{out}, R_b],[L_c,R_b],[L_b, T_{out}],[L_b, R_c]\}&
\mathscr{Q}_{12}&=\{[T_{in}, R_c],[L_c,R_b],[L_b, T_{out}],[L_b, R_c]\}
\\ \mathscr{Q}_{13}&=\{[T_{out},T_{in}],[T_{out}, R_b],[L_c, T_{in}],[L_b, R_c]\}&
\mathscr{Q}_{14}&=\{[T_{out},L_c],[R_b, T_{in}],[T_{out},T_{in}],[L_b, R_c]\}
\\ \mathscr{Q}_{15}&=\{[T_{out},T_{in}],[T_{out}, R_b],[L_c, T_{in}],[R_c, L_b]\}&
\mathscr{Q}_{16}&=\{[T_{out},L_c],[R_b, T_{in}],[T_{out},T_{in}],[R_c, L_b]\}
\\ \mathscr{Q}_{17}&=\{[T_{out}, R_b],[T_{out},L_c],[R_c, L_b],[R_c,T_{out}]\}&
\mathscr{Q}_{18}&=\{[T_{in},L_b],[R_b, T_{in}],[L_c,T_{in}],[R_c, L_b]\}
\\ \mathscr{Q}_{19}&=\{[T_{in},L_b],[L_c,R_b],[L_c,T_{in}],[T_{in}, R_c]\}&
\mathscr{Q}_{20}&=\{[T_{out}, R_b],[L_c,R_b],[L_b, T_{out}],[R_c,T_{out}]\}
\\ \mathscr{Q}_{21}&=\{[T_{out}, R_b],[L_c,R_b],[R_c,T_{out}],[R_c,L_b]\}&
\mathscr{Q}_{22}&=\{[T_{out}, R_b],[L_c,R_b],[L_c,T_{in}],[R_c,L_b]\}
\\ \mathscr{Q}_{23}&=\{[T_{in}, L_b],[L_c,R_b],[R_c,T_{out}],[R_c,L_b]\}&
\mathscr{Q}_{24}&=\{[T_{in}, L_b],[L_c,R_b],[L_c,T_{in}],[R_c,L_b]\}
\\ \mathscr{Q}_{25}&=\{[T_{out},L_c],[T_{out}, R_b],[L_b, T_{out}],[L_b, R_c]\}&
\mathscr{Q}_{26}&=\{[T_{in}, R_c],[R_b,T_{in}],[L_c, T_{in}],[L_b, R_c]\}
\\ \mathscr{Q}_{27}&=\{[T_{in}, L_b],[R_b, L_c],[R_b, T_{in}],[T_{in}, R_c]\}&
\mathscr{Q}_{28}&=\{[T_{out},L_c],[R_b, L_c],[L_b, T_{out}],[R_c, T_{out}]\}
\\ \mathscr{Q}_{29}&=\{[L_b, T_{out}],[R_b, L_c],[L_b,R_c],[T_{out},L_c]\}&
\mathscr{Q}_{30}&=\{[R_b, T_{in}],[R_b, L_c],[L_b,R_c],[T_{out},L_c]\}
\\ \mathscr{Q}_{31}&=\{[T_{in},R_c],[R_b, L_c],[L_b, T_{out}],[L_b,R_c]\}&
\mathscr{Q}_{32}&=\{[T_{in},R_c],[R_b, L_c],[R_b, T_{in}],[L_b,R_c]\}
\\ \mathscr{Q}_{33}&=\{[T_{out}, R_b],[T_{out},L_c],[L_b, T_{out}],[R_c, T_{out}]\}&
\mathscr{Q}_{34}&=\{[T_{in},L_b],[R_b, T_{in}],[L_c,T_{in}],[T_{in}, R_c]\}
\\ \mathscr{Q}_{35}&=\{[T_{in},T_{out}],[T_{in}, L_b],[T_{in}, R_c],[R_b, L_c]\}&
\mathscr{Q}_{36}&=\{[T_{in},T_{out}],[R_b, L_c],[L_b, T_{out}],[R_c, T_{out}]\}
\\ \mathscr{Q}_{37}&=\{[T_{in}, L_b],[L_c,R_b],[T_{in}, T_{out}],[T_{in}, R_c]\}&
\mathscr{Q}_{38}&=\{[T_{in}, T_{out}],[L_c,R_b],[L_b, T_{out}],[R_c,T_{out}]\}
\\ \mathscr{Q}_{39}&=\{[T_{out},L_c],[T_{out}, R_b],[T_{out}, T_{in}],[L_b, R_c]\}&
\mathscr{Q}_{40}&=\{[T_{out}, T_{in}],[R_b,T_{in}], [L_c, T_{in}],[L_b, R_c]\}
\\ \mathscr{Q}_{41}&=\{[T_{out},L_c],[T_{out}, R_b],[T_{out}, T_{in}],[R_c,L_b]\}&
\mathscr{Q}_{42}&=\{[R_b,T_{in}],[T_{out}, T_{in}], [L_c, T_{in}],[R_c,L_b]\}.
\end{align*}
The $i$-th web subset $\mathscr{Q}_i \subset \mathscr{W}_i \textcolor{purple}{\subset \mathscr{W}_\Box}$ is moreover a subset of the $i$-th web family $\mathscr{W}_i$ (\S \ref{ssec:42-reduced-web-families-in-the-square}). More precisely, each of the four \textcolor{purple}{Hilbert basis} webs $W^H \in \mathscr{Q}_i$ is \textcolor{purple}{determined} by the schematic picture for the web family $\mathscr{W}_i$ (as in Figure \ref{figure:9cases}) \textcolor{purple}{by setting} all but one of the variables $x,y,z,t$ \textcolor{purple}{to} 0 and the remaining \textcolor{purple}{variable to} 1. Recall that the \textcolor{purple}{9 specific web} families denoted $(j)$ in \textcolor{purple}{Figure \ref{figure:9cases} are} the families $\mathscr{W}_{i_j}$ as explained in Notation \ref{not:web-families}.
\begin{defn}
\label{def:topological type}
For $i=1,2,\dots,42$, let $\mathscr{Q}_i \subset \mathscr{W}_i \subset \mathscr{W}_\Box$ be the set of four webs defined just above, and recall that $W_j^H$ for $j=1,2,\dots,8$ are the 8 corner arcs \textcolor{purple}{in the square}.
Define the \textit{$i$-th completed KTGS subcone} $C_\mathscr{T}^i \subset C_\mathscr{T} \subset \mathbb{R}_+^{12}$ as the completion
\begin{equation*}
C_\mathscr{T}^i = \overline{\mathscr{C}}_\mathscr{T}^i
\end{equation*}
of the \textit{$i$-th KTGS submonoid} $\mathscr{C}_\mathscr{T}^i \subset \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$, defined by
\begin{equation*}
\mathscr{C}_\mathscr{T}^i = \mathrm{span}_{\mathbb{Z}_+}\left(\left\{ \Phi_\mathscr{T}(W_1^H), \Phi_\mathscr{T}(W_2^H), \dots, \Phi_\mathscr{T}(W_8^H) \right\} \cup \left\{ \Phi_\mathscr{T}(W^H); \quad W^H \in \mathscr{Q}_i \right\} \right),
\end{equation*}
where $\Phi_\mathscr{T}(W^H) \in \mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ is the point in the KTGS \textcolor{purple}{positive integer} cone $\mathscr{C}_\mathscr{T}$ assigned to $W^H$ by the web tropical coordinate map $\Phi_\mathscr{T} : \mathscr{W}_\Box \to \mathscr{C}_\mathscr{T}$.
The set $\mathscr{Q}_i$ of four webs is called the \textit{topological type} of the completed KTGS subcone $C_\mathscr{T}^i$.
\end{defn}
By construction of the web tropical coordinate map $\Phi_\mathscr{T}$, we can immediately say:
\begin{observation}
\label{obs:families-and-their-cones}
For $i=1,2,\dots,42$, we have $\Phi_\mathscr{T}(\mathscr{W}_i)=\mathscr{C}_\mathscr{T}^i \subset \mathbb{Z}_+^{12}$.
\end{observation}
The main result of this section is:
\begin{theorem}
\label{theorem:decomp}
Consider the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ for the triangulated square $(\Box, \mathscr{T})$; see Definition {\upshape\ref{def:completed-ktgs-cone}}. Then:
\begin{itemize}
\item $C_\mathscr{T}$ is $12$-dimensional. Namely, $C_\mathscr{T}$ is \textcolor{purple}{a} full \textcolor{purple}{cone}; \textcolor{purple}{see Definition {\upshape\ref{def:cone-defs}}.}
\item The \textcolor{purple}{completed KTGS} subcones $C_\mathscr{T}^i \subset C_\mathscr{T}$ are full sectors forming a sector decomposition $\left\{ C^i_\mathscr{T} \right\}_{i=1,2,\dots,42}$ of $C_\mathscr{T}$; see Definition {\upshape\ref{def:sector-decomp}}.
\item The intersection $C_\mathscr{T}^i \cap C_\mathscr{T}^\ell$ is a wall if and only if $\mathscr{Q}_i \cap \mathscr{Q}_\ell$ has $3$ elements; that is, if and only if the topological types of $C_\mathscr{T}^i$ and $C_\mathscr{T}^\ell$ differ by a single web. In this case,
\begin{equation*}
\label{eq:theorem-equation}
\tag{$\#$}
C_\mathscr{T}^i \cap C_\mathscr{T}^\ell = \mathrm{span}_{\mathbb{R}_+}\left(\left\{ \Phi_\mathscr{T}(W_1^H), \Phi_\mathscr{T}(W_2^H), \dots, \Phi_\mathscr{T}(W_8^H) \right\} \cup \left\{ \Phi_\mathscr{T}(W^H); \quad W^H \in \mathscr{Q}_i \cap \mathscr{Q}_\ell \right\} \right) \quad \subset \mathbb{R}_+^{12}.
\end{equation*}
Moreover, for each web $W \in \mathscr{Q}_i$, there exists a unique index $i^\ast(i,W) \in \left\{ 1, 2, \dots, 42 \right\}$ such that
\begin{equation*}
\mathscr{Q}_i \cap \mathscr{Q}_{i^\ast(i,W)} = \mathscr{Q}_i - \left\{ W \right\};
\end{equation*}
that is, such that there is a web $W^\ast(i,W) \in \mathscr{Q}_{i^\ast(i,W)}$ satisfying the property that the topological type \textcolor{purple}{$\mathscr{Q}_{i^\ast(i,W)}$} of $C_\mathscr{T}^{i^\ast(i,W)}$ is obtained from the topological type \textcolor{purple}{$\mathscr{Q}_i$} of $C_\mathscr{T}^i$ by \textcolor{purple}{swapping} $W$ with $W^\ast(i,W)$. (One might think of such a \textcolor{purple}{swap} as a \textnormal{web~mutation}.)
In particular, each sector $C^i_\mathscr{T}$ has $4$ walls. See Figure {\upshape\ref{figure:wallscross}}.
\end{itemize}
\end{theorem}
\begin{example}
\label{ex:example-of-web-mutation}
As an example of the second \textcolor{purple}{paragraph} of the third item of the theorem, consider $i=i_1=29$, corresponding to family (1) in Figure \ref{figure:9cases}. If $W = [L_b, T_{out}] \in \mathscr{Q}_{29}$, then $i^\ast(29,W)=i_2=30$ corresponding to family (2) in Figure \ref{figure:9cases}, and $W^\ast(29,W) = [R_b, T_{in}] \in \mathscr{Q}_{30}$.
\textcolor{purple}{
One similarly checks that $i^\ast(29, [L_b, R_c])=28$ and $W^\ast(29, [L_b, R_c])=[R_c, T_{out}] \in \mathscr{Q}_{28}$; that $i^\ast(29, [R_b, L_c])=25$ and $W^\ast(29, [R_b, L_c])=[T_{out}, R_b] \in \mathscr{Q}_{25}$; and, that $i^\ast(29, [T_{out}, L_c])=31$ and $W^\ast(29, [T_{out}, L_c])=[T_{in}, R_c] \in \mathscr{Q}_{31}$. }
\textcolor{purple}{Note that Figure \ref{figure:wallscross} provides some, but not all, of this topological information; the full information is contained in the definition of the subsets $\mathscr{Q}_i$ above. }
\end{example}
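The web mutations listed in this example can be checked mechanically from the subsets $\mathscr{Q}_i$ defined above. The Python sketch below transcribes only the five relevant topological types, with each web encoded as an ordered pair of labels (so that, e.g., $[L_b, T_{out}]$ and $[T_{out}, L_b]$ are distinct); it is a sanity check, not part of the argument.

```python
# Topological types transcribed from the definition of the subsets Q_i above.
Q = {
    25: {("T_out","L_c"), ("T_out","R_b"), ("L_b","T_out"), ("L_b","R_c")},
    28: {("T_out","L_c"), ("R_b","L_c"),  ("L_b","T_out"), ("R_c","T_out")},
    29: {("L_b","T_out"), ("R_b","L_c"),  ("L_b","R_c"),   ("T_out","L_c")},
    30: {("R_b","T_in"),  ("R_b","L_c"),  ("L_b","R_c"),   ("T_out","L_c")},
    31: {("T_in","R_c"),  ("R_b","L_c"),  ("L_b","T_out"), ("L_b","R_c")},
}

def mutation(i, W, neighbours):
    # Return (i*, W*) with Q_{i*} = (Q_i - {W}) + {W*}, searching the given indices.
    for j in neighbours:
        if j != i and Q[i] & Q[j] == Q[i] - {W}:
            (W_star,) = Q[j] - Q[i]
            return j, W_star
    return None

idx = list(Q)
assert mutation(29, ("L_b","T_out"), idx) == (30, ("R_b","T_in"))
assert mutation(29, ("L_b","R_c"), idx)   == (28, ("R_c","T_out"))
assert mutation(29, ("R_b","L_c"), idx)   == (25, ("T_out","R_b"))
assert mutation(29, ("T_out","L_c"), idx) == (31, ("T_in","R_c"))
print("web mutations at i = 29 verified")
```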
\begin{question}
By Theorem \ref{theorem:basis}, the Hilbert basis $\mathscr{H}_{(\Box, \mathscr{T})}$ of the positive integer cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ has 22 elements. It follows by Observation \ref{obs:rank-hilbert-basis-estimate} that $\mathrm{rank}(C_\mathscr{T}) \leq 22$.
By Theorem \ref{theorem:decomp}, $\mathrm{rank}(C_\mathscr{T}) \geq 12$. We ask: What is \textcolor{purple}{the rank of $C_\mathscr{T}$}?
Compare Example \ref{ex:hib-basis-cone-examples} and Corollary \ref{cor:rank-for-triangle} as well as Example (part 1) in \S \ref{ssec:technical-statements}.
\end{question}
We will need \textcolor{purple}{to make }some preparations before proving the theorem. In particular, we will \textcolor{purple}{make use of} a generalization of the linear isomorphism used in the proof of Proposition~\ref{prop:sector-decomp-triangle}.
\subsection{Proof of Theorem \ref{theorem:decomp}}
\label{ssec:proof-of-decomp-theorem}
$ $
\subsubsection{Two linear isomorphisms: second isomorphism $\phi_\mathscr{T}$ by tropical $\mathcal{X}$-coordinates}
\label{sssec:second-linear-isomorphism}
$ $
In \S \ref{sssec:a-linear-isomorphism}, to each ideal triangulation $\mathscr{T}$ of the square $\Box$ we constructed a linear isomorphism $\theta_\mathscr{T} : \mathbb{R}^{12} \to V_\mathscr{T} \subset \mathbb{R}^{18}$. This map sends 12 \textcolor{purple}{real} numbers $A_1, A_2, \dots, A_{12}$, \textcolor{purple}{called the} \textit{(real) tropical $\mathcal{A}$-coordinates}, to their 18 rhombus numbers $\beta_1, \beta_2, \dots, \beta_{18}$. There are 6 relations (\textcolor{purple}{see} \eqref{eq:tropical-x-coordinates} \textcolor{purple}{in \S \ref{sssec:a-linear-isomorphism}}) defining the 12-dimensional subspace $V_\mathscr{T} \subset \mathbb{R}^{18}$, \textcolor{purple}{which determine} four \textcolor{purple}{real numbers} $X_1, X_2, X_3, X_4$, called the \textit{(real) tropical $\mathcal{X}$-coordinates}: namely, four numbers assigned to any 18-tuple of rhombus numbers in $V_\mathscr{T}$. See Figure \ref{fig:tropical-X-coords}. See also \cite{XieArxiv13}.
\begin{remark}
The tropical $\mathcal{X}$-coordinates originate in Fock-Goncharov theory as tropicalized double and triple ratios \cite{FockIHES06, FockAdvMath07}, and can be thought of in the following geometric way. For $X_1$, say, consider the hexagon in the top triangle in \textcolor{purple}{the top left square of} Figure \ref{fig:tropical-X-coords}. There are six \textcolor{purple}{tropical $\mathcal{A}$-coordinates} assigned to the vertices of this hexagon. Then $X_1$ is the signed sum of these coordinates, as indicated in the figure. Similarly for $X_2, X_3, X_4$.
\end{remark}
\begin{figure}[htb]
\includegraphics[scale=.545]{tropical-X-coords}
\caption{
\textcolor{purple}{Shown are the 18 rhombus numbers $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$ for the square, and the associated 4 tropical integer $\mathcal{X}$-coordinates $\left\{ X_i \right\}_{i=1,2,3,4}$. The latter can be computed either as differences of rhombus numbers, or as alternating sums of the 12 positive tropical integer $\mathcal{A}$-coordinates $\left\{ A_i \right\}_{i=1,2,\dots,12}$ around the polygons displayed on the left. The rhombi colored green are those involved in the first 8 coordinates of the isomorphism $\phi_\mathscr{T}: V_\mathscr{T} \to \mathbb{R}^{12}$.}}
\label{fig:tropical-X-coords}
\end{figure}
\begin{defn}
Let $V_\mathscr{T} \subset \mathbb{R}^{18}$ be the 12-dimensional subspace just discussed. Define a linear map
\begin{equation*}
\phi_\mathscr{T} : V_\mathscr{T} \longrightarrow \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}
\end{equation*}
by
\begin{equation*}
\phi_\mathscr{T}(\beta_1, \beta_2, \beta_3, \dots, \beta_{18})=(\beta_1, \beta_2, \beta_4, \beta_5, \beta_7, \beta_8, \beta_{10}, \beta_{11}; \quad X_1, X_2, X_3, X_4).
\end{equation*}
\end{defn}
See Figure \ref{fig:tropical-X-coords}, where the eight rhombi appearing in the first eight coordinates of the image of $\phi_\mathscr{T}$ are colored green.
For example, the images under $\phi_\mathscr{T}$ of $\theta_\mathscr{T}$ applied to the 22 Hilbert basis elements, $\theta_\mathscr{T}(\Phi_\mathscr{T}(W_j^H)) \in V_\mathscr{T}$, \textcolor{purple}{can be computed from} Figure \ref{figure:square1} \textcolor{purple}{or from the computations in \S \ref{sssec:a-linear-isomorphism} to be}:
\begin{align*}
(1)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([R_a])))&=(1,0,0,0,0,0,0,0;\quad0,0,0,0)
\\(2)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([L_a])))&=(0,1,0,0,0,0,0,0;\quad0,0,0,0)
\\(3)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([R_b])))&=(0,0,1,0,0,0,0,0;\quad0,0,0,0)
\\(4)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_b])))&=(0,0,0,1,0,0,0,0;\quad0,0,0,0)
\\(5)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_c])))&=(0,0,0,0,1,0,0,0;\quad0,0,0,0)
\\(6)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_c])))&=(0,0,0,0,0,1,0,0;\quad0,0,0,0)
\\(7)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_d])))&=(0,0,0,0,0,0,1,0;\quad0,0,0,0)
\\(8)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_d])))&=(0,0,0,0,0,0,0,1;\quad0,0,0,0)
\\(9)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{out}, R_b])))&=(0,0,0,0,0,0,0,0; \quad1,-1,0,0)
\\(10)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{out}, L_c])))&=(0,0,0,0,0,0,0,0; \quad1,0,0,0)
\\(11)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}([T_{in}, L_b])))&=(0,1,0,1,0,1,0,0; \quad-1,0,0,0)
\\(12)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [T_{in}, R_c])))&=(0,1,0,1,0,1,0,0; \quad-1,0,0,1)
\\(13)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_b, T_{out}])))&=(0,0,0,1,0,0,0,0; \quad0,0,1,0)
\\(14)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_c, T_{in}])))&=(0,0,0,0,0,1,0,1; \quad0,0,-1,0)
\\(15)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_b, T_{in}])))&=(0,0,1,0,0,0,0,1; \quad0,1,-1,0)
\\(16)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_c, T_{out}])))&=(0,0,0,0,1,0,0,0; \quad0,0,1,-1)
\\(17)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [T_{out}, T_{in}])))&=(0,0,0,0,0,0,0,1; \quad1,0,-1,0)
\\(18)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [T_{in}, T_{out}])))&=(0,1,0,1,0,1,0,0; \quad-1,0,1,0)
\\(19)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_b, R_c])))&=(0,0,0,1,0,0,0,0; \quad0,0,0,1)
\\(20)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_b, L_c])))&=(0,0,1,0,0,0,0,0; \quad0,1,0,0)
\\(21)&&\phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [R_c, L_b])))&=(0,0,0,0,1,0,0,0; \quad0,0,0,-1)
\\(22)&& \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}( [L_c, R_b])))&=(0,0,0,0,0,1,0,0; \quad0,-1,0,0).
\end{align*}
\begin{prop}
\label{prop:second-isomorphism}
The linear map $\phi_\mathscr{T} : V_\mathscr{T} \to \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$ is an isomorphism. Consequently, letting $\theta_\mathscr{T} : \mathbb{R}^{12} \to V_\mathscr{T}$ be the isomorphism from {\upshape\S\ref{sssec:a-linear-isomorphism}}, we have that the composition
\begin{equation*}
\phi_\mathscr{T} \circ \theta_\mathscr{T} : \mathbb{R}^{12} \overset{\sim}{\longrightarrow} \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}
\end{equation*}
is a linear isomorphism.
\end{prop}
\begin{proof}
Since $V_\mathscr{T}$ is 12-dimensional (Proposition \ref{obs:theta-is-injective}), it suffices to show that the image of $\phi_\mathscr{T}$ spans $\textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$. Indeed, one checks by elementary linear algebra that the above 22 images $\left\{ \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}(W_j^H)))\right\}_{j=1,2,\dots,22} \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ span $\textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$.
\end{proof}
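The spanning claim left to the reader can be confirmed by a short exact-arithmetic computation; the Python sketch below transcribes the 22 image vectors listed above and checks that they have rank 12.

```python
from fractions import Fraction

def rank(vectors):
    # Exact rank via Gaussian elimination over the rationals.
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    r, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
        if r == len(rows):
            break
    return r

def e(i, n=12):
    # i-th standard basis vector of R^12 (1-indexed), items (1)-(8) above.
    return [1 if j == i - 1 else 0 for j in range(n)]

images = [e(i) for i in range(1, 9)] + [
    [0,0,0,0,0,0,0,0,  1,-1, 0, 0],   # (9)  [T_out, R_b]
    [0,0,0,0,0,0,0,0,  1, 0, 0, 0],   # (10) [T_out, L_c]
    [0,1,0,1,0,1,0,0, -1, 0, 0, 0],   # (11) [T_in, L_b]
    [0,1,0,1,0,1,0,0, -1, 0, 0, 1],   # (12) [T_in, R_c]
    [0,0,0,1,0,0,0,0,  0, 0, 1, 0],   # (13) [L_b, T_out]
    [0,0,0,0,0,1,0,1,  0, 0,-1, 0],   # (14) [L_c, T_in]
    [0,0,1,0,0,0,0,1,  0, 1,-1, 0],   # (15) [R_b, T_in]
    [0,0,0,0,1,0,0,0,  0, 0, 1,-1],   # (16) [R_c, T_out]
    [0,0,0,0,0,0,0,1,  1, 0,-1, 0],   # (17) [T_out, T_in]
    [0,1,0,1,0,1,0,0, -1, 0, 1, 0],   # (18) [T_in, T_out]
    [0,0,0,1,0,0,0,0,  0, 0, 0, 1],   # (19) [L_b, R_c]
    [0,0,1,0,0,0,0,0,  0, 1, 0, 0],   # (20) [R_b, L_c]
    [0,0,0,0,1,0,0,0,  0, 0, 0,-1],   # (21) [R_c, L_b]
    [0,0,0,0,0,1,0,0,  0,-1, 0, 0],   # (22) [L_c, R_b]
]

assert len(images) == 22 and rank(images) == 12
print("the 22 Hilbert basis images span R^8 x R^4")
```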
\begin{defn}
\label{def:isomorphic-cone}
The \textcolor{purple}{linear} isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T} : \mathbb{R}^{12} \to \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$ of Proposition \ref{prop:second-isomorphism} maps
the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ to \textcolor{purple}{the} \textit{isomorphic cone}
\begin{equation*}
C:=\phi_\mathscr{T}(\theta_\mathscr{T}(C_\mathscr{T}))
\quad \subset \mathbb{R}_+^8 \times \mathbb{R}^4.
\end{equation*}
\end{defn}
Note that $C$ indeed lies in $\mathbb{R}_+^8 \times \mathbb{R}^4$ because $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ is the completion of the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$, which by definition has all nonnegative (integer) rhombus numbers $\left\{ \beta_i \right\}_{i=1,2,\dots,18}$.
Observe, in particular, that the 8 corner arcs $W^H_1, W^H_2, \dots, W^H_8$ of Figure \ref{figure:square1} \textcolor{purple}{(part 1)} correspond via $\phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T}$ to the first 8 standard basis elements $e_i$ of $\mathbb{R}^8 \times \mathbb{R}^4$.
\subsubsection{Sector decomposition of $\mathbb{R}^4$ via the isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T}$}
\label{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}
$ $
Let $C \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ be the cone defined in Definition \ref{def:isomorphic-cone}, which is isomorphic, via the isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T}: \mathbb{R}^{12} \to \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$, to the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ for the triangulated square $(\Box, \mathscr{T})$.
\begin{notation}
\label{not:relating-to-cone-lemmas}
Put $k=8$, $n=4$, $m=14$, and $p=42$ (compare \S \ref{ssec:technical-statements}).
\end{notation}
For $j=1,2,\dots,m$, define cone points $x_j \in C$ by
\begin{equation*}
x_j = \phi_\mathscr{T}(\theta_\mathscr{T}(\Phi_\mathscr{T}(W^H_{k+j}))) \quad \in C,
\end{equation*}
where $\Phi_\mathscr{T}(W^H_{j^\prime})$ is the $j^\prime$-th Hilbert basis element for the triangulated square; see Figure \ref{figure:square1}. Note that the points $\left\{x_j\right\}_{j=1,2,\dots,m} \textcolor{purple}{\subset C}$ are displayed explicitly in \S \ref{sssec:second-linear-isomorphism}.
For $i=1,2,\dots,p$, define index sets $J_i \subset \left\{ 1,2,\dots,m \right\}$ of constant size $n$ as follows. Given $i$, consider the topological type $\mathscr{Q}_i \subset \mathscr{W}_i \subset \mathscr{W}_\Box$ of the \textcolor{purple}{completed} KTGS subcone $C_\mathscr{T}^i \subset \mathbb{R}_+^{12}$ (Definition \ref{def:topological type}). By definition of the topological type $\mathscr{Q}_i$, there are four Hilbert basis webs $W^{H}_{k+j^{(i)}_1}, W^{H}_{k+j^{(i)}_2}, W^{H}_{k+j^{(i)}_3}, W^{H}_{k+j^{(i)}_4}$, \textcolor{purple}{with} indices $j_r^{(i)} \in \left\{ 1,2,\dots,m \right\}$, such that $\mathscr{Q}_i = \left\{ W^{H}_{k+j^{(i)}_r} \right\}_{r=1,2,3,4}$.
\begin{equation*}
J_i = \left\{ j^{(i)}_1, j^{(i)}_2, j^{(i)}_3, j^{(i)}_4
\right\} \quad \subset \left\{ 1,2,\dots, m \right\}.
\end{equation*}
\begin{defn}
\label{def:sectors-in-Rn}
\textcolor{purple}{Recalling Notation \ref{not:relating-to-cone-lemmas}:} for each $i=1,2,\dots,42$, define subcones $D_i \subset \mathbb{R}^4$ by (compare \S \ref{ssec:technical-statements})
\begin{equation*}
D_i = \mathrm{span}_{\mathbb{R}_+}\left(\left\{ \pi_4(x_j); \quad j \in J_i \right\} \right) \quad \subset \mathbb{R}^4.
\end{equation*}
Here, $\pi_4 : \mathbb{R}^8 \times \mathbb{R}^4 \to \mathbb{R}^4$ is the natural projection.
\textcolor{purple}{Just} as for the subcones $C_\mathscr{T}^i \subset C_\mathscr{T}$, we call $\mathscr{Q}_i$ the \textit{topological type} of \textcolor{purple}{the subcone} $D_i \textcolor{purple}{\subset \mathbb{R}^4}$.
\end{defn}
\begin{remark}
\label{rem:14-vectors-distinct}
\textcolor{purple}{Note, by the calculations of \S \ref{sssec:second-linear-isomorphism}, that the 14 vectors $\left\{ \pi_4(x_j) \right\}_{j=1,2,\dots,14} \subset \mathbb{R}^4$ are distinct.}
\end{remark}
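This distinctness is immediate to check mechanically; a minimal Python sketch, transcribing the $\mathcal{X}$-coordinate parts of items (9)--(22) listed in \S \ref{sssec:second-linear-isomorphism}:

```python
# X-coordinate parts pi_4(x_j) of the 14 non-corner-arc Hilbert basis images,
# transcribed from items (9)-(22) of the list in the previous sub-subsection.
x_parts = [
    (1,-1,0,0), (1,0,0,0),  (-1,0,0,0), (-1,0,0,1), (0,0,1,0),
    (0,0,-1,0), (0,1,-1,0), (0,0,1,-1), (1,0,-1,0), (-1,0,1,0),
    (0,0,0,1),  (0,1,0,0),  (0,0,0,-1), (0,-1,0,0),
]
assert len(x_parts) == 14 and len(set(x_parts)) == 14
print("all 14 projected vectors are distinct")
```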
\begin{prop}
\label{prop:subcones-sect-decomp-r4}
The subcones $D_i \subset \mathbb{R}^4$ are full sectors forming a sector decomposition $\left\{ D_i \right\}_{i=1,2,\dots,42}$ of $\mathbb{R}^4$ (Definition {\upshape\ref{def:sector-decomp}}).
\end{prop}
\begin{proof}
Let us begin by giving some examples of how to describe the subcones $D_i$. Specifically, we will describe those 9 subcones $D_{i_j} \textcolor{purple}{\subset \mathbb{R}^4} $ corresponding to the topological types $\mathscr{Q}_{i_j} \subset \mathscr{W}_{i_j} \textcolor{purple}{\subset \mathscr{W}_\Box}$, which in particular are subsets of the web families $\mathscr{W}_{i_j}$ labeled $(j)$ in Figure \ref{figure:9cases}; see Notation \ref{not:web-families} and Remark \ref{rem:symmetry-groupings}.
\begin{remark}
\label{rem:remark-about-rows}
\textcolor{purple}{In each of the 9 examples below,} note that the ordering of the rows of the matrix does not affect the description of \textcolor{purple}{the subcone} $D_i \subset \mathbb{R}^4$.
\end{remark}
\textbf{$i_1=29$.} For $\mathscr{Q}_{29}$, let us write the four vectors $\pi_4 \left(\phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_{\mathscr{T}}(\mathscr{Q}_{29})\right) \subset \mathbb{R}^4$ in rows to form a $(4\times 4 )$ matrix $M_{29}=\left(\begin{smallmatrix}
0&0&1&0 \\
0&1&0&0 \\
0&0&0&1 \\
1&0&0&0
\end{smallmatrix} \right)$. Then for real numbers $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{29} =\begin{pmatrix}
t&y&x&z
\end{pmatrix}.
\end{equation*}
Thus $D_{29}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\}$.
\textbf{i2=30.} Similarly, for $\mathscr{Q}_{30}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{30})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{30}=\left(\begin{smallmatrix}
0&1&-1&0\\
0&1&0&0\\
0&0&0&1\\
1&0&0&0
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{30} =\begin{pmatrix}
t&x+y&-x&z
\end{pmatrix}.
\end{equation*}
Thus $D_{30}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_2+X_3\geq 0\}$.
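As an illustrative aside (not part of the proof), these row-vector computations can be spot-checked mechanically. In the following Python sketch, the matrices $M_{29}$ and $M_{30}$ are transcribed from the displays above; the sketch verifies the products and that the image under $M_{30}$ satisfies the defining inequalities of $D_{30}$:

```python
def row_times_matrix(v, M):
    """Return the row vector v * M (v a 4-list, M a 4x4 list of rows)."""
    return [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]

# Matrices transcribed from the i1 and i2 examples above.
M29 = [[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
M30 = [[0, 1, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]]

x, y, z, t = 2, 3, 5, 7  # arbitrary nonnegative coefficients
v = [x, y, z, t]

assert row_times_matrix(v, M29) == [t, y, x, z]       # (t, y, x, z)
assert row_times_matrix(v, M30) == [t, x + y, -x, z]  # (t, x+y, -x, z)

# The image under M_30 lands in D_30: signs (+,+,-,+) with X_2 + X_3 >= 0.
X1, X2, X3, X4 = row_times_matrix(v, M30)
assert X1 >= 0 and X2 >= 0 and X3 <= 0 and X4 >= 0 and X2 + X3 >= 0
```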
\textbf{i3=42.} For $\mathscr{Q}_{42}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{42})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{42}=\left(\begin{smallmatrix}
0&1&-1&0 \\
1&0&-1&0\\
0&0&-1&0\\
0&0&0&-1
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{42} =\begin{pmatrix}
y & x & -x -y-z & -t
\end{pmatrix}.
\end{equation*}
Thus $D_{42}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_1 + X_2 + X_3 \leq 0\}$.
\textbf{i4=17.} For $\mathscr{Q}_{17}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{17})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{17}=\left(\begin{smallmatrix}
1&-1&0&0\\
1&0&0&0\\
0&0&0&-1\\
0&0&1&-1
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{17} =\begin{pmatrix}
x + y & -x & t & -z-t
\end{pmatrix}.
\end{equation*}
Thus $D_{17}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1 + X_2 \geq 0 \quad \& \quad X_3 + X_4 \leq 0\}$.
\textbf{i5=5.} For $\mathscr{Q}_{5}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{5})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{5}=\left(\begin{smallmatrix}
-1&0&1&0\\
-1&0&0&0\\
0&0&1&-1\\
0&1&0&0
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{5} =\begin{pmatrix}
-x-y & t & x+z & -z
\end{pmatrix}.
\end{equation*}
Thus $D_{5}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; -X_1 \geq X_3+X_4 \geq 0\}$.
\textbf{i6=6.} For $\mathscr{Q}_{6}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{6})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{6}=\left(\begin{smallmatrix}
0&0&1&0\\
-1&0&1&0\\
-1&0&0&1\\
0&1&0&0
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{6} =\begin{pmatrix}
-y-z & t & x+y & z
\end{pmatrix}.
\end{equation*}
Thus $D_{6}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; -X_3 \leq X_1 + X_4 \leq 0\}$.
\textbf{i7=2.} For $\mathscr{Q}_{2}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{2})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{2}=\left(\begin{smallmatrix}
1&0&0&0\\
0&0&1&-1\\
0&0&0&-1\\
0&1&0&0
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{2} =\begin{pmatrix}
x & t & y & -y - z
\end{pmatrix}.
\end{equation*}
Thus $D_{2}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3 + X_4 \leq 0\}$.
\textbf{i8=1.} For $\mathscr{Q}_1$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_1)\right)$ in rows, we get a $(4\times 4 )$ matrix $M_1=\left(\begin{smallmatrix}
-1&0&0&0\\
0&0&1&-1\\
0&0&0&-1\\
0&1&0&0
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_1 =\begin{pmatrix}
-x & t & y & -y-z
\end{pmatrix}.
\end{equation*}
Thus $D_1=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3+X_4\leq 0\}$.
\textbf{i9=33.} For $\mathscr{Q}_{33}$, writing the four vectors $\pi_4 \left(\phi_\mathscr{T}\circ \theta_\mathscr{T}\circ \Phi_{\mathscr{T}}(\mathscr{Q}_{33})\right)$ in rows, we get a $(4\times 4 )$ matrix $M_{33}=\left(\begin{smallmatrix}
1&-1&0&0\\
1&0&0&0\\
0&0&1&0\\
0&0&1&-1
\end{smallmatrix} \right)$. Then for $x,y,z,t \geq 0$, we get
\begin{equation*}
\begin{pmatrix} x & y & z & t \end{pmatrix}
M_{33} =\begin{pmatrix}
x+y & -x & z+t&-t
\end{pmatrix}.
\end{equation*}
Thus $D_{33}=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1 + X_2 \geq 0 \quad \& \quad X_3 + X_4 \geq 0 \}$.
In the same way as the 9 examples just demonstrated, we compute \textcolor{purple}{directly} by hand the subcones $D_i \textcolor{purple}{\subset \mathbb{R}^4} $ for $i=1,2,\dots,42$ as follows:
\begin{align*}
D_1&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3+X_4\leq 0\}
\\ D_2&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3+X_4\leq 0\}
\\ D_3&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_2+X_3\geq 0\}
\\ D_4&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_2+X_3\geq 0\}
\\ D_5&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; -X_1\geq X_3+X_4\geq 0\}
\\ D_6&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; -X_3 \leq X_1+X_4\leq 0\}
\\ D_7&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; -X_1 \geq X_3+X_4\geq 0\}
\\ D_8&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\;-X_3 \leq X_1+X_4\leq 0\}
\\ D_9&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_2\leq 0\}
\\ D_{10}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\geq 0\}
\\ D_{11}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_2\leq 0\}
\\ D_{12}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_4\geq 0\}
\\ D_{13}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; -X_3\geq X_1+X_2\geq 0\}
\\ D_{14}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; -X_1\leq X_2+X_3\leq 0\}
\\ D_{15}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; -X_3\geq X_1+X_2\geq 0\}
\\ D_{16}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; -X_1\leq X_2+X_3\leq 0\}
\\ D_{17}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_2\geq 0 \quad \& \quad X_3+X_4\leq 0\}
\\ D_{18}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_2+X_3\leq 0\}
\\ D_{19}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\leq 0\}
\\ D_{20}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_2\leq 0 \quad \& \quad X_3+X_4\geq 0\}
\\ D_{21}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_2\leq 0 \quad \& \quad X_3+X_4\leq 0\}
\\ D_{22}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_1+X_2\leq 0\}
\\ D_{23}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3+X_4\leq 0\}
\\ D_{24}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\}
\\ D_{25}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_2\geq 0\}
\\ D_{26}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\geq 0 \quad \& \quad X_2+X_3\leq 0\}
\\ D_{27}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\leq 0 \quad \& \quad X_2+X_3\geq 0\}
\\ D_{28}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_3+X_4\geq 0\}
\\ D_{29}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\}
\\ D_{30}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_2+X_3\geq 0\}
\\ D_{31}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_4\geq 0\}
\\ D_{32}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\geq 0 \quad \& \quad X_2+X_3\geq 0\}
\\ D_{33}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_2\geq 0 \quad \& \quad X_3+X_4\geq 0\}
\\ D_{34}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_4\leq 0 \quad \& \quad X_2+X_3\leq 0\}
\\ D_{35}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_3+X_4\leq 0\}
\\ D_{36}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_3+X_4\geq 0\}
\\ D_{37}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+}\;|\; X_1+X_3+X_4\leq 0\}
\\ D_{38}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-}\;|\; X_1+X_3+X_4\geq 0\}
\\ D_{39}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_2+X_3\geq 0\}
\\ D_{40}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+}\;|\; X_1+X_2+X_3\leq 0\}
\\ D_{41}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_1+X_2+X_3\geq 0\}
\\ D_{42}&=\{(X_1, X_2,X_3,X_4)\in \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-}\;|\; X_1+X_2+X_3\leq 0\}.
\end{align*}
In particular, each subcone $D_i \textcolor{purple}{\subset \mathbb{R}^4} $ has dimension 4, since its explicit description via inequalities shows that it has nonempty interior \textcolor{purple}{in $\mathbb{R}^4$}. Equivalently, one can check \textcolor{purple}{directly by hand} that the corresponding $4 \times 4$ matrix, such as in the 9 examples above, has rank 4.
Since, by Definition \textcolor{purple}{\ref{def:sectors-in-Rn}}, the \textcolor{purple}{subcones} $D_i \textcolor{purple}{\subset \mathbb{R}^4} $ are generated by 4 elements, we gather that each $D_i$ is a full subsector of $\mathbb{R}^4$.
\textcolor{purple}{One checks directly by hand} that the 16 orthants $\mathbb{R}_\pm \times \mathbb{R}_\pm \times \mathbb{R}_\pm \times \mathbb{R}_\pm \subset \mathbb{R}^4$ decompose~as:
\begin{align*}
(1)&&\mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-} &= D_{24}
\\(2)&& \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+} &= D_{29}
\\(3)&& \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-} &= D_{2}\cup D_{28}
\\(4)&& \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-} &= D_{3}\cup D_{18}
\\(5)&& \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+} &= D_{6}\cup D_{31}\cup D_{35}
\\(6)&& \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-} &= D_{7}\cup D_{23}\cup D_{38}
\\(7)&& \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+} &= D_{10}\cup D_{19}
\\(8)&& \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+} &= D_{11}\cup D_{25}
\\(9)&& \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+} &= D_{14}\cup D_{30}\cup D_{40}
\\(10)&& \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{-} &= D_{15}\cup D_{22}\cup D_{41}
\\(11)&& \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{-} &= D_{1}\cup D_{5}\cup D_{36}
\\(12)&& \mathbb{R}_{+}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{-} &= D_{4}\cup D_{16}\cup D_{42}
\\(13)&& \mathbb{R}_{-}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{+} &= D_{8}\cup D_{12}\cup D_{37}
\\(14)&& \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{-} \times \mathbb{R}_{+} &= D_{9}\cup D_{13}\cup D_{39}
\\(15)&& \mathbb{R}_{+}\times \mathbb{R}_{-} \times \mathbb{R}_{+} \times \mathbb{R}_{-} &= D_{17}\cup D_{20}\cup D_{21}\cup D_{33}
\\(16)&& \mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{-} \times \mathbb{R}_{+} &= D_{26}\cup D_{27}\cup D_{32} \cup D_{34}.
\end{align*}
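As an illustrative aside (not part of the proof), orthant decompositions such as case (5) can be spot-checked on an integer grid. The sketch below encodes the defining inequalities of $D_6$, $D_{31}$, and $D_{35}$ from the list above and checks that they cover the orthant $\mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}$:

```python
from itertools import product

def in_D6(X):   # (-,+,+,+) with -X_3 <= X_1 + X_4 <= 0
    X1, X2, X3, X4 = X
    return -X3 <= X1 + X4 <= 0

def in_D31(X):  # (-,+,+,+) with X_1 + X_4 >= 0
    X1, X2, X3, X4 = X
    return X1 + X4 >= 0

def in_D35(X):  # (-,+,+,+) with X_1 + X_3 + X_4 <= 0
    X1, X2, X3, X4 = X
    return X1 + X3 + X4 <= 0

# Grid points of the orthant R_- x R_+ x R_+ x R_+.
grid = product(range(-3, 1), range(4), range(4), range(4))
assert all(in_D6(X) or in_D31(X) or in_D35(X) for X in grid)
```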
It follows that $\mathbb{R}^4 = \textcolor{purple}{\cup_{i=1}^{42}} D_i$. It remains to show that $D_i \cap D_\ell$ has empty interior for all distinct $i, \ell$. Since this is true if $D_i$ and $D_\ell$ lie in different orthants, we \textcolor{purple}{only} need to check \textcolor{purple}{those} pairs $D_i$, $D_\ell$ lying in the same orthant.
This is \textcolor{purple}{done directly by hand}; \textcolor{purple}{however, the cases fall into only four types.} First, for orthants 1, 2: There is nothing to check. Second, for orthants 3, 4, 7, 8: In 3, say, the inequalities $X_3 + X_4 \leq 0$ and $X_3 + X_4 \geq 0$ have codimension 1 intersection even when defined on all of $\mathbb{R}^4$, so they do as well when restricted to the orthant. The other cases are handled in the same way. Third, for orthants 15, 16: This is similar to the second type. Lastly, for orthants 5, 6, 9, 10, 11, 12, 13, 14: In 5, say, the sectors $D_{6}$ and $D_{31}$ similarly have codimension 1 intersection, as do the sectors $D_6$ and $D_{35}$. This is also true for $D_{31} \textcolor{purple}{\left( X_1+X_4\geq 0 \right)}$ and $D_{35} \textcolor{purple}{\left( X_1+X_3+X_4\leq 0 \right)}$ so long as $D_{31} \cap D_{35}$ implies $X_3=0$, which it does: indeed, it implies
$X_3 \leq -X_1 - X_4 \leq 0$, \textcolor{purple}{while} the restriction to the orthant $\mathbb{R}_{-}\times \mathbb{R}_{+} \times \mathbb{R}_{+} \times \mathbb{R}_{+}$ gives $X_3 \geq 0$. The other cases are handled in the same way.
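As an illustrative aside, the claim that $D_{31} \cap D_{35}$ forces $X_3 = 0$ can be spot-checked on a small integer grid (encoding the two sectors' defining inequalities from the list above):

```python
from itertools import product

# Grid points of the orthant R_- x R_+ x R_+ x R_+ lying in D_31 and D_35.
pts = [X for X in product(range(-3, 1), range(4), range(4), range(4))
       if X[0] + X[3] >= 0             # D_31: X_1 + X_4 >= 0
       and X[0] + X[2] + X[3] <= 0]    # D_35: X_1 + X_3 + X_4 <= 0

assert pts                            # the intersection is nonempty,
assert all(X[2] == 0 for X in pts)    # and every such point has X_3 = 0
```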
\end{proof}
We now analyze the walls \textcolor{purple}{(Definition \ref{def:sector-decomp})} in the sector decomposition $\left\{ D_i \right\}_{i=1,2,\dots,42}$ of $\mathbb{R}^4$. Recall (Definition \ref{def:sectors-in-Rn}) that $\mathscr{Q}_i$ is called the topological type of the sector $D_i$.
\begin{prop}
\label{prop:walls-for-Dis}
The third item of Theorem {\upshape\ref{theorem:decomp}} holds word-for-word, except with $C_\mathscr{T}^i$ replaced by $D_i$, and replacing \eqref{eq:theorem-equation} by \eqref{eq:anotherequationaboutR4} below.
In particular, $D_i \cap D_\ell$ is a wall if and only if the intersection $J_i \cap J_\ell$ of their corresponding index sets has $3$ elements (see the beginning of this sub-subsection, {\upshape\S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}}). \textcolor{purple}{In this case,}
\begin{equation*}
\label{eq:anotherequationaboutR4}
\tag{$*$}
D_i \cap D_\ell = \mathrm{span}_{\mathbb{R}_+}
\left(
\left\{ \pi_4(x_j); \quad j \in J_i \cap J_\ell \right\}
\right)
\quad \subset \mathbb{R}^4.
\end{equation*}
\end{prop}
\begin{proof}
Any wall $D_i \cap D_\ell$ must, \textcolor{purple}{by definition}, be $3$ dimensional. \textcolor{purple}{This restricts which indices $\left\{ i, \ell \right\}$ can yield walls.} \textcolor{purple}{Through a} direct \textcolor{purple}{by-hand check}, using the \textcolor{purple}{explicit} description \textcolor{purple}{by inequalities} of the sector decomposition \textcolor{purple}{$\left\{ D_i \right\}_i $ as in the proof of Proposition \ref{prop:subcones-sect-decomp-r4}}, \textcolor{purple}{one verifies} that a necessary condition for $D_i \cap D_\ell$ to be a wall is for $D_i$ and $D_\ell$ to be connected by an edge in the graph $\mathscr{G}$ depicted in Figure \ref{figure:wallscross}. \textcolor{purple}{We} show this is also a sufficient condition.
More precisely, \textcolor{purple}{the goal is to show}, for any two sectors $D_i$ and $D_\ell$ connected by an edge in $\mathscr{G}$, that $J_i \cap J_\ell$ has 3 elements and \eqref{eq:anotherequationaboutR4} holds. In particular, $D_i \cap D_\ell$ is a cone of dimension~3. Note the inclusion $\supset$ in \eqref{eq:anotherequationaboutR4} holds \textcolor{purple}{automatically}; see Definition \ref{def:sectors-in-Rn}.
We checked this \textcolor{purple}{directly} by hand. There are two types of calculations, \textcolor{purple}{depending on whether} the sectors are in different orthants or the same orthant.
As an example where the sectors are in different orthants: We demonstrate this for $D_{29}$ and $D_{30}$, which were computed in detail in the proof of Proposition \ref{prop:subcones-sect-decomp-r4}. There, one sees that three rows of the corresponding $4 \times 4$ matrix $M_{29}$ appear as rows in the matrix $M_{30}$ (recall \textcolor{purple}{also} Remark \ref{rem:remark-about-rows}). This means that $J_i \cap J_\ell$ has 3 elements; \textcolor{purple}{see Remark \ref{rem:14-vectors-distinct}}. \textcolor{purple}{Note that} the row in $M_{29}$ \textcolor{purple}{that is} not in $M_{30}$ corresponds to the variable $x$, and the row in $M_{30}$ \textcolor{purple}{that is} not in $M_{29}$ corresponds to the variable $x^\prime$. The inclusion $\subset$ in \eqref{eq:anotherequationaboutR4} is true since
\begin{equation*}
(t, y, x, z) = (t^\prime, x^\prime+y^\prime, -x^\prime, z^\prime) \quad \in \mathbb{R}^4 \quad\quad \left( x,y,z,t,x^\prime,y^\prime,z^\prime,t^\prime \geq 0 \right)
\end{equation*}
implies $x=x^\prime=0$. The other different-orthant cases are similar.
As an example where the sectors are in the same orthant: We demonstrate this for $D_5$ and $D_1$, which were also computed in detail in the proof of Proposition \ref{prop:subcones-sect-decomp-r4}. There, one sees that three rows of the corresponding $4 \times 4$ matrix $M_{5}$ appear as rows in the matrix $M_{1}$ (recall \textcolor{purple}{also} Remark \ref{rem:remark-about-rows}). This means that $J_i \cap J_\ell$ has 3 elements; \textcolor{purple}{see Remark \ref{rem:14-vectors-distinct}}. The row in $M_{5}$ not in $M_{1}$ corresponds to the variable $x$, and the row in $M_{1}$ not in $M_{5}$ corresponds to the variable $z^\prime$. The inclusion $\subset$ in \eqref{eq:anotherequationaboutR4} is true since
\begin{equation*}
(-x-y, t, x+z, -z) = (-x^\prime, t^\prime, y^\prime, -y^\prime-z^\prime)\quad \in \mathbb{R}^4 \quad\quad \left( x,y,z,t,x^\prime,y^\prime,z^\prime,t^\prime \geq 0 \right)
\end{equation*}
implies, by adding the third and fourth \textcolor{purple}{entries}, that $x=-z^\prime$ hence $x=z^\prime=0$. The other same-orthant cases are similar.
We gather $D_i \cap D_\ell$ is a wall if and only if $D_i$ and $D_\ell$ are connected by an edge of the graph $\mathscr{G}$, \textcolor{purple}{in which case $J_i \cap J_\ell$ has 3 elements.} In particular, since $\mathscr{G}$ is 4-valent, the last paragraph of the third item of Theorem \ref{theorem:decomp} holds (again, with $D_i$ in place of $C_\mathscr{T}^i$).
\textcolor{purple}{To finish justifying the second paragraph of Proposition \ref{prop:walls-for-Dis} (equivalently, the first paragraph of the third item of Theorem \ref{theorem:decomp}, appropriately substituted), we need to show that if $J_i \cap J_\ell$ has 3 elements (equivalently, $\mathscr{Q}_i \cap \mathscr{Q}_\ell$ has 3 elements), then $D_i \cap D_\ell$ is a wall.}
\textcolor{purple}{So far, we have} exhibited, for each $i$, 4 topological types $\mathscr{Q}_\ell$ such that $\mathscr{Q}_i \cap \mathscr{Q}_\ell$ has 3 elements, \textcolor{purple}{all corresponding to walls $D_i \cap D_\ell$}. \textcolor{purple}{We thus need to show} there are no more indices $\ell$ such that $\mathscr{Q}_i \cap \mathscr{Q}_\ell$ has 3 elements. \textcolor{purple}{For this,} it suffices to establish the second paragraph of the third item of Theorem \ref{theorem:decomp}, \textcolor{purple}{which} is a purely topological \textcolor{purple}{statement about webs in good position on the triangulated square}; compare Observation \ref{lemma:bigon} and Proposition \ref{lem:42families}. We checked this directly by hand; compare Example \ref{ex:example-of-web-mutation}.
\end{proof}
\subsubsection{Bringing everything together}
\label{sssec:proof-of-main-theorem-2}
$ $
We are now prepared to prove the main result of this section.
\begin{proof}[Proof of Theorem {\upshape \ref{theorem:decomp}}]
Let $C \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ be the cone isomorphic to the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ via the linear isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T} : \mathbb{R}^{12} \to \textcolor{purple}{\mathbb{R}^{8} \times \mathbb{R}^4}$; see Definition \ref{def:isomorphic-cone}.
Recall also Notation \ref{not:relating-to-cone-lemmas} from the discussion at the beginning of \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}, which should help \textcolor{purple}{with} comparing the general lemmas of \S \ref{ssec:technical-statements} \textcolor{purple}{to} the current application.
By the explicit calculation of the Hilbert basis elements $\phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T}(W_i^H) \in C$ for $i=1,2,\dots,22$ in \S \ref{sssec:second-linear-isomorphism}, together with Proposition \ref{prop:subcones-sect-decomp-r4}, we see that $C \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ satisfies the hypotheses of Lemma \ref{lem:rank-lemma}. \textcolor{purple}{Indeed, $D_i \subset \pi_4(C)$ by definition, for each $i$.} Therefore, $C$ is 12 dimensional, so the isomorphic \textcolor{purple}{completed KTGS} cone $C_\mathscr{T}$ is \textcolor{purple}{12 dimensional} as well. \textcolor{purple}{This establishes the first item of Theorem \ref{theorem:decomp}.}
\textcolor{purple}{Let us} prove that $C_\mathscr{T} = \cup_{i=1}^{42} C_\mathscr{T}^i \subset \mathbb{R}_+^{12}$; \textcolor{purple}{see Definition \ref{def:topological type}}. This follows by Theorem \ref{thm:main-theorem}, Proposition \ref{lem:42families}, and Lemma \ref{lem:actual-geometry}. Indeed, by Theorem \ref{thm:main-theorem}, every point in the KTGS positive integer cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ is equal to $\Phi_\mathscr{T}(W)$ for some reduced web $W \in \mathscr{W}_\Box$ in the square. By Proposition \ref{lem:42families}, the web $W$ is an element of one of the 42 web families: $W \in \mathscr{W}_i \subset \mathscr{W}_\Box$. By \textcolor{purple}{Observation \ref{obs:families-and-their-cones}}, we have that $\Phi_\mathscr{T}(W) \in \mathscr{C}_\mathscr{T}^i$. We gather that $\mathscr{C}_\mathscr{T} = \cup_{i=1}^{42} \mathscr{C}_\mathscr{T}^i \subset \mathbb{Z}_+^{12}$. \textcolor{purple}{Note} that \textcolor{purple}{the monoid} $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ is finitely generated, since it admits a Hilbert basis (Theorem \ref{theorem:basis}). The submonoids $\mathscr{C}_\mathscr{T}^i \subset \mathscr{C}_\mathscr{T} $ are also finitely generated, by Definition \ref{def:topological type}. We conclude by Lemma \ref{lem:actual-geometry} that \textcolor{purple}{we have the equality} $C_\mathscr{T} = \cup_{i=1}^{42} C_\mathscr{T}^i \subset \mathbb{R}_+^{12}$ \textcolor{purple}{of completions}, as desired.
\textcolor{purple}{We return to the isomorphic cone $C \subset \mathbb{R}_+^8 \times \mathbb{R}^4$.} Let $\left\{ x_j \right\}_{j=1,2,\dots,14}$ and $\left\{ J_i \right\}_{i=1,2,\dots,42}$ be defined as in the beginning of \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}. For $i=1,2,\dots,42$, let the subcones $C_i \subset C$ be defined as in the \textcolor{purple}{statement} of Lemma \ref{lem:second-cone-lemma}. Equivalently, the subcone $C_i = \phi_\mathscr{T} \circ \theta_\mathscr{T}(C_\mathscr{T}^i) \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ is the isomorphic counterpart to the subcone $C_\mathscr{T}^i \subset C_\mathscr{T} \subset \mathbb{R}_+^{12}$. It follows by the previous paragraph that $C = \cup_{i=1}^{42} C_i \subset \mathbb{R}_+^8 \times \mathbb{R}^4$ in the isomorphic cone $C$. By this, together with another application of Proposition \ref{prop:subcones-sect-decomp-r4}, we see that the hypotheses of Lemma \ref{lem:second-cone-lemma} are satisfied. Therefore, \textcolor{purple}{by Lemma \ref{lem:second-cone-lemma}}, we obtain the second item of Theorem \ref{theorem:decomp}, except with $C_\mathscr{T}$ and $C_\mathscr{T}^i$ replaced by $C$ and $C_i$, respectively. Since this property is preserved by linear isomorphisms, we conclude the second item of Theorem \ref{theorem:decomp} as stated.
Lastly, by Proposition \ref{prop:walls-for-Dis}, in particular \eqref{eq:anotherequationaboutR4}, the hypothesis of Lemma \ref{lem:third-cone-lemma} is satisfied. Therefore, \textcolor{purple}{by Lemma \ref{lem:third-cone-lemma}}, the set $\left\{ \text{walls of } \left\{ C_i \right\}_i \text{ in } C \right\}$ is in one-to-one correspondence with the set $\left\{ \text{walls of } \left\{ D_i \right\}_i \text{ in } \mathbb{R}^4 \right\}$ \textcolor{purple}{in the obvious way by the projection $\pi_4$}. Moreover, a given wall $C_i \cap C_\ell$ can be computed by \eqref{eq:technical-equation-intersection} in Lemma \ref{lem:third-cone-lemma}. We conclude by the \textcolor{purple}{remainder} of Proposition \ref{prop:walls-for-Dis} that the first and third paragraphs of the third item of Theorem \ref{theorem:decomp} are valid, except with $C_\mathscr{T}$, $C_\mathscr{T}^i$, and \eqref{eq:theorem-equation} replaced by $C$, $C_i$, and \eqref{eq:technical-equation-intersection}, respectively. Since the inverse of the linear isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T}$ preserves these properties, and maps \eqref{eq:technical-equation-intersection} to \eqref{eq:theorem-equation} (see the beginning of \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}), we conclude the first and third paragraphs of the third item of Theorem \ref{theorem:decomp} as stated. The second paragraph is a purely topological statement about \textcolor{purple}{webs in} the square, and was already established during the proof of Proposition \ref{prop:walls-for-Dis}.
\end{proof}
The following consequence is immediate from the proof of Theorem \ref{theorem:decomp}.
\begin{cor}
\label{cor:canonical-map-surjection}
The function
\begin{equation*}
\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} : C_\mathscr{T} \twoheadrightarrow \mathbb{R}^4
\end{equation*}
is a surjection from the completed KTGS cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ (in fact, from a ``$4$ dimensional'' \textcolor{purple}{proper} subset of $C_\mathscr{T}$)
onto $\mathbb{R}^4$. \qed
\end{cor}
Recall the notion of a cornerless web $W=W_c$ in the square; see Definition \ref{def:cornerless-webs-in-the-square}. Let $\mathscr{W}_\Box^c \subset \mathscr{W}_\Box$ denote the set of cornerless webs up to equivalence. Note for each $i=1,2,\dots,42$ that $\mathscr{Q}_i \subset \mathscr{W}_i \cap \mathscr{W}_\Box^c$.
Consider also the function $\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T} : \mathscr{W}_\Box \to \mathbb{Z}^4 \subset \mathbb{R}^4$ defined on $\mathscr{W}_\Box$. See for example the nine computations at the beginning of the proof of Proposition \ref{prop:subcones-sect-decomp-r4}.
Another consequence of the proof of Theorem \ref{theorem:decomp} is:
\begin{cor}
\label{cor:restricted-function}
The restricted function
\begin{equation*}
\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T} : \mathscr{W}_\Box^c \twoheadrightarrow \mathbb{Z}^4,
\end{equation*}
restricted to the cornerless webs $\mathscr{W}_\Box^c \subset \mathscr{W}_\Box$, is a surjection onto the integer lattice $\mathbb{Z}^4 \subset \mathbb{R}^4$.
In particular, the function $\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T}$ from Corollary~{\upshape\ref{cor:canonical-map-surjection}} maps (a ``$4$ dimensional'' \textcolor{purple}{proper} subset of) the KTGS cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ surjectively onto~$\mathbb{Z}^4$.
\end{cor}
\begin{proof}
We know that $\mathbb{R}^4=\cup_{i=1}^{42} D_i$ and $D_i = \pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T}(C_\mathscr{T}^i)$ where $C_\mathscr{T}^i \supset \mathscr{C}_\mathscr{T}^i$ (Definition \ref{def:topological type}). We also know that the cone points $\Phi_\mathscr{T}(W^H_j)$ in $\mathscr{C}_\mathscr{T}^i$ for $j=1,2,\dots,8$, corresponding to the 8 corner arcs in the square, are sent by $\phi_\mathscr{T} \circ \theta_\mathscr{T}$ to the first 8 standard basis elements $e_j$ of $\mathbb{R}_+^8 \times \mathbb{R}^4$. We gather $\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T}$ is still a surjection onto $D_i$ when restricted to the subset
\begin{equation*}
C_\mathscr{T}^{\prime i} := \mathrm{span}_{\mathbb{R}_+}
\left( \left\{ \Phi_\mathscr{T}(W^H); \quad W^H \in \mathscr{Q}_i
\right\} \right)
\quad \subset C_\mathscr{T}^i
\quad \subset \mathbb{R}_+^{12}.
\end{equation*}
Note also that (similar to Observation \ref{obs:families-and-their-cones})
\begin{equation*}
\Phi_\mathscr{T}(\mathscr{W}_i \cap \mathscr{W}_\Box^c)
=
\mathrm{span}_{\mathbb{Z}_+}
\left( \left\{ \Phi_\mathscr{T}(W^H); \quad W^H \in \mathscr{Q}_i
\right\} \right)
\quad
\subset C_\mathscr{T}^{\prime i}.
\end{equation*}
It thus suffices to show: for any $c \in C_\mathscr{T}^{\prime i}$, if $\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T}(c) \in \mathbb{Z}^4 \cap D_i$, then $c \in \Phi_\mathscr{T}(\mathscr{W}_i \cap \mathscr{W}_\Box^c)$.
Once again, we work in the isomorphic cone $C_i=\phi_\mathscr{T} \circ \theta_\mathscr{T}(C_\mathscr{T}^i) \subset \mathbb{R}_+^8 \times \mathbb{R}^4$, which projects to $D_i$ by $\pi_4$. Put (see the beginning of \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square})
\begin{equation*}
C_i^\prime := \phi_\mathscr{T} \circ \theta_\mathscr{T}(C_\mathscr{T}^{\prime i})=\mathrm{span}_{\mathbb{R}_+}\left( \left\{ x_j; \quad j \in J_i \right\} \right)
\quad \subset C_i \quad \subset \mathbb{R}_+^8 \times \mathbb{R}^4
\end{equation*}
and note also that
\begin{equation*}
\phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T}(\mathscr{W}_i \cap \mathscr{W}_\Box^c) =\mathrm{span}_{\mathbb{Z}_+}\left( \left\{ x_j; \quad j \in J_i \right\} \right)
\quad \subset C_i^\prime.
\end{equation*}
The above property is then equivalent to showing: for any $c \in C_i^{\prime}$ such that $\pi_4(c) \in \mathbb{Z}^4 \cap D_i$, we have $c \in \phi_\mathscr{T} \circ \theta_\mathscr{T} \circ \Phi_\mathscr{T}(\mathscr{W}_i \cap \mathscr{W}_\Box^c)$; that is, if such a $c \in C_i^{\prime}$ is written $c=x x_{1^{(i)}} + y x_{2^{(i)}} + z x_{3^{(i)}} + t x_{4^{(i)}}$ for $x,y,z,t \geq 0$ (see the beginning of \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}), we want to show $x, y, z, t \in \mathbb{Z}$.
This is accomplished through a direct by-hand check, taking advantage of the explicit description of the sectors $D_i$ provided in \S \ref{sssec:sector-decomposition-of-r4-via-the-ktgs-cone-for-the-square}. As before, although there are 42 cases, these fall into only five types, each represented among the 9 examples demonstrated in the proof of Proposition \ref{prop:subcones-sect-decomp-r4}: Type 1 corresponds to $i_1$; Type 2 corresponds to $i_2, i_7, i_8$; Type 3 corresponds to $i_3$; Type 4 corresponds to $i_4, i_9$; and Type 5 corresponds to $i_5, i_6$. We will only demonstrate the most nontrivial case, Type 5 (for $i_5$, say); the other cases are similar.
So consider $i_5=5$, and assume $\pi_4(c) =x \pi_4(x_{1^{(5)}}) + y \pi_4(x_{2^{(5)}}) + z \pi_4(x_{3^{(5)}}) + t \pi_4(x_{4^{(5)}}) \in D_i$ is, in addition, in $\mathbb{Z}^4$ for some $x,y,z,t \geq 0$. Note the vector $\pi_4(x_{j^{(5)}})$ is the $j$-th row of the matrix displayed in the $i_5$ example in the proof of Proposition \ref{prop:subcones-sect-decomp-r4}. From this example, we gather that $\pi_4(c)=(-x-y, t, x+z, -z) \in \mathbb{Z}^4$. So $t, z \in \mathbb{Z}$; implying by $x+z \in \mathbb{Z}$ that $x \in \mathbb{Z}$; implying by $-x-y \in \mathbb{Z}$ that $y \in \mathbb{Z}$, as desired.
\end{proof}
\begin{remark}
We believe that the restricted function from Corollary \ref{cor:restricted-function} is also an injection. We suspect that there may be a proof of this result via a conjectural generalization of \eqref{eq:theorem-equation} to higher codimension intersections.
\end{remark}
\begin{conceptremark}
\label{rem:last-conceptual-remark}
Recall Remark \ref{rem:sec6conceptremark}. Recall also (\S \ref{PGL3-decorated-local-systems}) $\mathcal{A}^+_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t) = \mathcal{A}^+_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)$.
We view the real cone $C_\mathscr{T} \subset \mathbb{R}_+^{12}$ as the isomorphic $\mathscr{T}$-chart $C_\mathscr{T} \cong -\mathcal{A}_{\mathrm{SL}_3, \Box}^+(\mathbb{R}^t)_\mathscr{T}
\left( \cong -\mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{R}^t)_\mathscr{T} \right)$.
On the other hand, via the isomorphism $\theta_\mathscr{T} : \mathbb{R}^{12} \to V_\mathscr{T} \subset \mathbb{R}^{18}$ we view the real cone $\theta_\mathscr{T}(C_\mathscr{T}) \subset V_\mathscr{T}$ as the isomorphic $\mathscr{T}$-chart $\theta_\mathscr{T}(C_\mathscr{T}) \cong -\mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{R}^t)_\mathscr{T}
\left( \cong -\mathcal{A}_{\mathrm{SL}_3, \Box}^+(\mathbb{R}^t)_\mathscr{T} \right)$.
Recall, in addition to the 12-dimensional $\mathcal{A}$-moduli spaces $\mathcal{A}_{\mathrm{SL}_3, \Box}$ and $\mathcal{A}_{\mathrm{PGL}_3, \Box}$ (\S \ref{sec:preliminary-for-tropical-points}), Fock-Goncharov \cite{FockIHES06} and Goncharov-Shen \cite{GoncharovInvent15, GoncharovArxiv19} defined the $\mathcal{X}$- and $\mathcal{P}$-moduli spaces $\mathcal{X}_{\mathrm{PGL}_3, \Box}$ and $\mathcal{P}_{\mathrm{PGL}_3, \Box}$, which are 4- and 12-dimensional, respectively. In addition, there are canonical maps $p : \mathcal{A}_{\mathrm{SL}_3, \Box} \to \mathcal{X}_{\mathrm{PGL}_3, \Box}$ and $\overline{p}: \mathcal{A}_{\mathrm{SL}_3, \Box} \to \mathcal{P}_{\mathrm{PGL}_3, \Box}$ (the overline notation $\overline{p}$ may be nonstandard). The tropical points $\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)$ and $\mathcal{P}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)$ of these spaces are also defined, inducing tropicalizations $p^t : \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t) \to \mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)$ and $\overline{p}^t: \mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t) \to \mathcal{P}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)$ of the canonical maps.
In terms of $\mathscr{T}$-charts, we view $\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \cong \mathbb{R}^4$ and $\mathcal{P}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \cong \mathbb{R}^8 \times \mathbb{R}^4$. We think of the projection $\pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} : \mathbb{R}^{12} \to \mathbb{R}^8 \times \mathbb{R}^4 \to \mathbb{R}^4$ as the canonical map $p^t$ written in coordinates. The isomorphism $\phi_\mathscr{T} \circ \theta_\mathscr{T} : \mathbb{R}^{12} \to \mathbb{R}^8 \times \mathbb{R}^4$ could be viewed as a coordinate version of the canonical map $\overline{p}^t$.
We can interpret Corollary \ref{cor:canonical-map-surjection} as saying that, when expressed in coordinates, the canonical map $p^t \left( \approx \pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} \right)$ also projects the subset $-\mathcal{A}_{\mathrm{SL}_3, \Box}^+(\mathbb{R}^t)_\mathscr{T} \subset -\mathcal{A}_{\mathrm{SL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \left(\approx C_\mathscr{T} \subset \mathbb{R}^{12} \right) $ of positive points onto $\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{R}^t)_\mathscr{T} \left( \approx \mathbb{R}^4 \right)$.
In addition, since the positive integer cone $\mathscr{C}_\mathscr{T} \subset \mathbb{Z}_+^{12}$ is in bijection with the set of reduced webs $\mathscr{W}_\Box$ via the web tropical coordinate map $\Phi_\mathscr{T}$, and since $\mathscr{C}_\mathscr{T} \cong -3 \mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{Z}^t)_\mathscr{T}$ by Remark \ref{rem:conceptual-remarks}: we can interpret Corollary \ref{cor:restricted-function} as saying that, in coordinates, the canonical map $p^t \left( \approx \pi_4 \circ \phi_\mathscr{T} \circ \theta_\mathscr{T} \right)$ projects (a proper subset of) $-3 \mathcal{A}_{\mathrm{PGL}_3, \Box}^+(\mathbb{Z}^t)_\mathscr{T} \left( \approx \mathscr{C}_\mathscr{T} \right)$ onto $\mathcal{X}_{\mathrm{PGL}_3, \Box}(\mathbb{Z}^t)_\mathscr{T} \left( \approx \mathbb{Z}^4 \right)$.
\end{conceptremark}
\bibliographystyle{alpha}
\section{Introduction}
Fractional parts of real-number sequences incrementally fill the unit
interval in a way that depends strongly on the sequence under study. The
distributions of the values, and of the gaps between them, in the
limit of an infinite sequence have been studied from different
perspectives. Considerable effort has been devoted to the
distribution of gap widths for linear arithmetic progressions and more
general nonlinear sequences
\cite{threegap,gapproblems,filtri,distinct,sqrts,logs}. While these
studies cover the statistical distribution of the gap widths
regardless of their position on the unit interval, the dependence of
the gap width on the position is a separate topic, which is the subject
of this article.
Consider the sequence of fractional parts of square roots of consecutive
integers,
\begin{equation}
a_n=\{\sqrt{n}\},\quad n\in \mathbb{N},
\end{equation}
where the operator $\{\cdot\}$ denotes the fractional part.
As more and more terms of the sequence are taken, the unit interval is filled
asymptotically uniformly, yet every term of the
sequence is either irrational or $0$. After $N$ terms, the average gap width is
obviously $1/N$, and the filling is uniform
enough that the maximum gap also converges to zero (no holes are
left). However, special behaviour is expected around the rational points of the
unit interval: the gaps around them shrink more slowly than $N^{-1}$.
In this article, we derive the asymptotic behaviour of the rational
gaps for the sequence of square roots and for its generalizations to higher-order
roots and to subsequences. We show that the gap widths scale with a function of
the rational argument that can be derived explicitly and is related to Thomae's function.
We also investigate a geometric interpretation of the result for the sequence
of square roots.
\section{The gap function}
\subsection{Notation}
The modulo operator has its conventional meaning of denoting
equivalence classes. We will use it to represent periodically
extended sets, which allows for a more condensed notation:
\begin{equation}
\{a+bn \mid n \in\mathbb{N}\}=\{a \mod b\}.
\end{equation}
If two periods are in effect, the set can be reparameterized to cover a single period:
\begin{equation}
\{a + b n \mod c \mid n \in \mathbb{N}\}=\{a \mod \gcd(b,c)\}.
\label{eq:absorb}
\end{equation}
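Eq.~\ref{eq:absorb} is easy to verify numerically. The following minimal Python sketch (the helper names are ours, introduced only for illustration) compares both sides as subsets of $\mathbb{Z}_c$:

```python
from math import gcd

def residues(a, b, c):
    """All values of a + b*n (mod c) for n in N, as a subset of Z_c."""
    # n over one period of length c suffices, since the residues repeat
    return {(a + b * n) % c for n in range(c)}

def residue_class(a, g, c):
    """The congruence class {a mod g}, listed within Z_c."""
    return {r for r in range(c) if r % g == a % g}

# Eq. (absorb): the double-period set collapses to a single period gcd(b, c)
assert residues(5, 6, 9) == residue_class(5, gcd(6, 9), 9)
```

The assertion checks that the residues of $5+6n$ modulo $9$ are exactly the class $\{5 \mod 3\}$, as Eq.~\ref{eq:absorb} predicts.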
Element-wise multiplication of the set with a scalar multiplies both
the expression and the period (modulus),
\begin{equation}
c\{ a\mod b \}=\{ac\mod bc \},
\end{equation}
so every common factor can be carried to the front.
Define the gap operator on the set $A$, centered at $x$:
\begin{equation}
\gap_x(A)=\inf \{ y \mid y\in A \wedge y> x \}-\sup \{ y \mid y\in A \wedge y<x \}.
\label{eq:gapdef}
\end{equation}
The center $x$ is assumed zero if omitted. In this homogeneous case, the following useful property holds for element-wise multiplication of a set with a scalar,
\begin{equation}
\gap(tA)=|t|\gap(A).
\label{eq:gapfactor}
\end{equation}
\subsection{Definition}
Define the set of fractional parts of the square roots of the first $N$ positive integers:
\begin{equation}
f(N) = \{ \sqrt{n} \mod 1 \mid n \in \mathbb{N} \wedge n\leq N \wedge \sqrt{n} \notin \mathbb{N} \}
\end{equation}
This is a finite set over the open unit interval
$(0,1)$ with cardinality $\#f(N)=N-\lfloor \sqrt{N}\rfloor$,
periodically extended to include all numbers with the same fractional parts.
The main focus of this work is the following function that maps a real number to the width of
the gap in $f(N)$ around it:
\begin{equation}
g(x,N) = \gap_x(f(N)).
\end{equation}
Because the set $f(N)$ contains no rational numbers, the gap widths around rational numbers behave in an interesting way. Let us define the gap function as the limit
\begin{equation}
G(x)=\lim_{N\to\infty}2\sqrt{N}g(x,N).
\label{eq:Glim}
\end{equation}
For irrational values of $x$, the limit is expected to converge to $0$, as the average gap width falls as $N^{-1}$, which is faster than $N^{-1/2}$. The approximants to $G(x)$ at irrational $x$ thus fall off as $2/\sqrt{N}$. These ``background'' gaps follow an unusual distribution. Rather than having an exponential tail, as one would expect for a random distribution on a unit interval, the distribution has a $t^{-3}$ power-law asymptotic behaviour \cite{sqrts}.
In the following sections, we derive the closed form expression for the function $G(x)$ and its generalizations.
\subsection{Derivation}
Consider first the gaps on the open interval $(0,1)$, ignoring for now the singular case of the gap around the integer congruence class (the gap $G(0)$).
Between consecutive perfect squares, $\{\sqrt{n}\}$ runs across the unit interval; up to the upper limit $N$, there are about $\sqrt{N}$ such passes. We parameterize the running integer $n$ with two integer parameters,
\begin{align}
n=k^2+m;&\quad m\in [1,2k] \\
\label{eq:mdef}
\sqrt{n}=k+x;&\quad x\in (0,1)
\end{align}
where $k$ is the integer part of the square root, $x$ is the fractional part, and $m$ is the ``remainder''. Together, this yields a quadratic equation for $x$,
\begin{equation}
x^2+2kx-m=0.
\label{eq:quadratic}
\end{equation}
Let $y=\frac{p}{q}$ be a target \emph{rational} number at which we want to evaluate the gap function ($p$ and $q>1$ are coprime). We must find the two closest irrational numbers that satisfy Eq.~\ref{eq:quadratic} and bracket the value $y$. For large enough $N$, the interval around $y$ is densely populated and the distance to the closest square-root fractional part $x$ is small. Using $y$ as the initial value, one iteration of Newton's method on Eq.~\ref{eq:quadratic} approximates this distance and becomes exact in the limit $N\to\infty$:
\begin{equation}
\Delta(p/q,k,m)=x-y\approx\frac{-y^2-2ky+m}{2y+2k}=\frac{1}{2(k+y)}\left(-\frac{p^2+2kpq}{q^2}+m\right).
\label{eq:newtstep}
\end{equation}
Among all the values this expression can assume for $n=k^2+m\leq N$, the smallest positive and the largest negative values of $\Delta$ occur for consecutive $m$ and bracket a gap around zero, which approximates the gap $g(p/q,N)$. For each $k$, a large enough $m$ exists that makes the expression in the parentheses positive. Because $(p^2+2kpq)/q^2<2k+1$, such an $m$ satisfies $m<2k+1$. We can therefore safely drop the upper bound on $m$ and allow it to be any positive integer without affecting the gap width.
\begin{equation}
g(p/q,N)\approx \gap(\Delta(p/q,k,m)\mid m,k\in \mathbb{N})
\end{equation}
The parenthesized part of Eq.~\ref{eq:newtstep} yields the same value for an infinite number of pairs $(k,m)$. The Diophantine equation $-p^2-2kpq+mq^2 = \operatorname{const.} $ has periodic solutions: a pair $(k,m)$ gives the same value as $(k+tq/\gcd(2p,q) ,m+2tp/\gcd(2p,q) )$ for any integer multiplier $t$.
For any given value of the parenthesized expression in Eq.~\ref{eq:newtstep}, the pairs $(k,m)$ that minimize the gap are those which minimize the prefactor $\frac{1}{2(k+y)}$. The lowest prefactors are reached with the highest $t$ allowed by the upper bound $k^2+m\leq N$. This means that the pairs $(k,m)$ that actually bound the gap around $0$ have $k\in (\sqrt N-q/\gcd(2p,q),\sqrt N)$. In the limit $N\to \infty$, we have $q\ll \sqrt N$ and the prefactors tend to
\begin{equation}
\frac{1}{2(k+y)}\asymp\frac{1}{2\sqrt N}.
\end{equation}
Note that the derivative in the denominator of the Newton step only sets the prefactor, which depends only on $N$, while the
numerator (the parenthesized part of Eq.~\ref{eq:newtstep}) does not depend on $N$ at all, so the limit in Eq.~\ref{eq:Glim} can be taken explicitly:
\begin{equation}
G\left(\tfrac{p}{q}\right)= \gap\left\{\frac{p^2+2kpq}{q^2}-m\middle| m,k \in \mathbb{N} \right\},
\end{equation}
where we utilized Eq.~\ref{eq:gapfactor} to carry the constant factor out and omit the negation. Furthermore, $m$ can be absorbed into the modulus by virtue of Eq.~\ref{eq:absorb}, and in the next step, so can $k$,
\begin{align}
G\left(\tfrac{p}{q}\right)=& \gap\left\{\frac{p^2+2kpq}{q^2} \mod 1\middle| k \in \mathbb{N} \right\}=\\
=& \frac{1}{q^2}\gap\left\{p^2+2kpq \mod q^2\middle| k \in \mathbb{N} \right\}=\\
=& \frac{1}{q^2}\gap\left\{p^2\mod \gcd(2pq,q^2) \right\}=\\
=& \frac{1}{q^2}\gap\left\{p^2\mod q\gcd(2,q) \right\}.
\end{align}
We used the fact that $\gcd(p,q)=1$ to simplify the modulus. As the set under observation is simply an equally spaced periodic lattice, all the gaps are equal to the period (modulus), and the offset $p^2$ plays no role in the result. Finally, we can write down the solution,
\begin{equation}
G\left(\tfrac{p}{q}\right) = \frac{\gcd(2,q)}{q}=\frac{d}{q},
\end{equation}
where we set $d=\gcd(2,q)\in \{1,2\}$. This expression can be written more explicitly as
\begin{equation}
G(x)=
\begin{cases}
2 & x=0\\
\frac{2}{q} & x = \frac{p}{q}, q = 0 \mod 2\\
\frac{1}{q} & x = \frac{p}{q}, q = 1 \mod 2\\
0 & x\notin \mathbb{Q}
\end{cases},
\label{eq:gx}
\end{equation}
and is depicted in Fig.~\ref{fig:g2} along with one of its approximants for a finite set generated by radicals up to $N=20000$.
A converging sequence of numerical approximants of this function is shown in Figure \ref{fig:fig_gaps}.
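The limit in Eq.~\ref{eq:Glim} can also be probed directly. A minimal Python sketch (the function name is ours) scans the fractional parts of $\sqrt{n}$ up to $N$ and rescales the gap around a chosen rational point:

```python
import math

def scaled_gap(p, q, N):
    """Approximate 2*sqrt(N)*g(p/q, N) by direct enumeration of f(N)."""
    x = p / q
    below, above = 0.0, 1.0
    for n in range(1, N + 1):
        if math.isqrt(n) ** 2 == n:      # perfect squares are excluded from f(N)
            continue
        f = math.sqrt(n) % 1.0           # fractional part of sqrt(n)
        if below < f < x:
            below = f
        elif x < f < above:
            above = f
    return 2.0 * math.sqrt(N) * (above - below)
```

At moderate $N$ (say $10^4$--$10^5$) the returned values already sit within a few percent of the limits $G(1/3)=\tfrac13$ and $G(1/2)=1$, consistent with the $O(q/\sqrt{N})$ relative error of the bracketing prefactors.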
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{The function $G(x)$ and one of its numerical approximants computed at $N=20000$. The leftmost and rightmost data points represent the right and left half-gaps, $G(0^+)$ and $G(1^-)$.}
\label{fig:g2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig2.png}
\caption{A series of numerical approximants to the gap function $G(x)$. Horizontal guidelines mark rational levels at $\frac{1}{q}$ where the limit values are supposed to lie, and a few diagonal lines with slopes $1$ and $2$ are used to highlight the collinearity of the values. Observe how the points converge to their limiting values, while the average background level is $2/\sqrt{N}$.}
\label{fig:fig_gaps}
\end{figure}
The gap function $G(x)$ is actually a familiar sight: it is a rescaled version of Thomae's function, $G(x)=T(2x)$ on the interval $(0,1)$, with $T$ extended periodically (see \cite{thomae,nechaev}, for instance).
\subsection{The gap around integers}
\label{sec:whole}
The gap $G(0)$ is composed of the right half-gap $G(0^+)$ and the left half-gap $G(0^-)=G(1^-)$; since we excluded the square roots of perfect squares from our set, the full gap is the sum of both half-gaps. The right half-gap is bounded from above by the square roots of numbers just above perfect squares: $m=1$ and $p=0$, for which Eq.~\ref{eq:newtstep} reduces to $\frac{1}{2(k+y)}$, leading to $G(0^+)=1$. The left half-gap is obtained for $m=2k$ and $p=q$ ($y=1$), giving $G(0^-)=1$; together, $G(0)=2$.
\clearpage
\section{The generalized gap function}
\label{sec:general}
\subsection{Definition}
While the original gap function considered the square roots of all integers, we can generalize it to include only the multiples of a chosen integer parameter $a$, falling back to the original definition when $a=1$:
\begin{equation}
f^{(a)}(N) = \{ \sqrt{an}\mod 1 \mid n \in \mathbb{N} \wedge an\leq N \wedge \sqrt{an} \notin \mathbb{N} \}
\end{equation}
\begin{equation}
g^{(a)}(x,N) = \gap_x(f^{(a)}(N))
\end{equation}
\begin{equation}
G^{(a)}(x)=\lim_{N\to\infty}2\sqrt{N}g^{(a)}(x,N)
\end{equation}
As the generalized set is only diluted (some terms are omitted), the gaps can
either stay the same or become wider.
\subsection{Derivation}
In this section, we derive the function $G^{(a)}(x)$ using a procedure
similar to the one used for the ordinary square-root gaps.
A generalization to $\sqrt{an}$ does not affect the procedure up to
the Newton step performed in Eq.~\ref{eq:newtstep}, but it does
restrict the selection of $m$ differently for each $k$, making the two
parameters coupled. To proceed with the reduction, we must find two integer
parameters that can be varied independently. The generalization of
Eq.~\ref{eq:mdef} is $m=an-k^2$, and substituting this relation, we
obtain a parameterization with $(k,n)$, which is indeed a pair of
independent positive integers. The generalized gap function is now
\begin{align}
G^{(a)}\left(\tfrac{p}{q}\right)=& \gap\left\{\frac{p^2+2kpq+k^2q^2}{q^2}-an\middle | k,n\in\mathbb{N}\right\}=\\
=& \frac{1}{q^2}\gap\left\{p^2+2kpq+k^2q^2 \mod aq^2\middle | k\in\mathbb{N}\right\}.
\end{align}
This may look similar to the previous case, but the quadratic term
$k^2$ makes the values in the set non-equally spaced and the offset
$p^2$ cannot be neglected.
The presence of $a$ and $q$ factors in the modulus suggests
decomposition $k=Ba+C$ with $C\in \mathbb{Z}_a$ and $B\in
\mathbb{Z}_q$.
\begin{equation}
G^{(a)}\left(\tfrac{p}{q}\right)= \frac{1}{q^2}\gap\left\{p^2+2Bapq+2Cpq+q^2C^2 \mod aq^2 \middle | B\in \mathbb{Z}_q,C\in\mathbb{Z}_a\right\}.
\end{equation}
The parameter $B$ only appears linearly and rescales the modulus
according to Eq.~\ref{eq:absorb}. The new modulus is
$\gcd(2apq,aq^2)=aqd$.
\begin{align}
G^{(a)}\left(\tfrac{p}{q}\right)=& \frac{1}{q^2}\gap\left\{p^2+2Cpq+q^2C^2 \mod aqd \middle | C\in\mathbb{Z}_a\right\}\\
=& \frac{1}{q^2}\gap\left\{p^2+qd((2/d)Cp +(q/d)C^2) \mod aqd \middle | C\in\mathbb{Z}_a\right\}.
\end{align}
The first term can be forcibly split,
\begin{equation}
p^2=qd\lfloor p^2/qd \rfloor+\underbrace{p^2\mod qd}_{\delta},
\end{equation}
where $\mod$ here refers to the remainder after division, not the entire congruence class.
\begin{equation}
G^{(a)}\left(\tfrac{p}{q}\right)= \frac{1}{q^2}\gap\left\{\delta+qd( \lfloor p^2/qd \rfloor +(2/d)Cp +(q/d)C^2) \mod aqd\middle | C\in\mathbb{Z}_a\right\}
\end{equation}
Because $\delta$ is smaller than the stride $qd$ of the second term, it lies in the same gap as zero and can be omitted. The common factor of the stride and the modulus can then be carried outside.
\begin{equation}
G^{(a)}\left(\tfrac{p}{q}\right)= \frac{d}{q}\gap\left\{\epsilon+\lfloor p^2/qd\rfloor+(2/d)Cp +(q/d)C^2 \mod a\middle | C\in\mathbb{Z}_a\right\}
\label{eq:finalform}
\end{equation}
Note that in this form the set could contain zero, which would make the gap definition ambiguous.
To remedy this, we keep an infinitesimal displacement $\epsilon$ to specify that the value zero counts as an upper boundary of the gap. The expression is straightforward to compute numerically, as it only involves finding the largest and the smallest value of a quadratic polynomial over $\mathbb{Z}_a$. The prefactor equals the expression for $a=1$, and the new $\gap$ factor tells us how much wider each gap becomes because of the skipped terms. In the special case $q/d = 0 \mod a$, we are back to the linear case, and the gap keeps the same width.
Examples for the first few values of $a$ are shown in Figs.~\ref{fig:g2a2},\ \ref{fig:g2a3},\ \ref{fig:g2a5}.
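Eq.~\ref{eq:finalform} translates directly into code. In the sketch below (our own helper), the $\epsilon$ convention is implemented by taking the gap of the $a$-periodic set as $\min + (a - \max)$ over representatives in $[0,a)$, so that a residue of zero bounds the gap from above:

```python
from math import gcd

def G_a(p, q, a):
    """Generalized gap function G^(a)(p/q) via Eq. (finalform)."""
    assert gcd(p, q) == 1 and 0 < p < q
    d = gcd(2, q)
    base = (p * p) // (q * d)            # floor(p^2 / qd)
    vals = {(base + (2 // d) * C * p + (q // d) * C * C) % a for C in range(a)}
    # epsilon convention: a residue of zero counts as an upper gap boundary,
    # so the gap of the a-periodic set is min + (a - max)
    return (d / q) * (min(vals) + a - max(vals))

assert G_a(1, 3, 1) == 1 / 3             # a = 1 falls back to G(x)
assert G_a(1, 2, 2) == 2.0               # = 4/q for q = 2 mod 4, cf. Eq. (case2)
```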
The form $\sqrt{an}$ can be further generalized to $\sqrt{an+b}$ without much difficulty. The only difference is to substitute $p^2$ with $p^2-bq^2$ in Eq.~\ref{eq:finalform}. This leads to different gap factors for each $b$.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig3.pdf}
\caption{The function $G^{(2)}(x)$ and one of its approximants. Because for $a=2$, the quadratic residue term is suppressed, the pattern is still regular and recognizable, and divides the rational numbers into 3 cases based on the denominator modulo $4$ (Eq.~\ref{eq:case2}). Guidelines with slope $4$ are added to follow the wider gaps caused by the missing values.}
\label{fig:g2a2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig4.pdf}
\caption{The function $G^{(3)}(x)$ and one of its approximants. This case has a more complex structure governed by the gaps in quadratic residues.}
\label{fig:g2a3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{fig5.pdf}
\caption{A higher-order example, $G^{(5)}(x)$. Observe how the spike heights are following a seemingly less organized sequence. The value $G^{(5)}(1/2)=6$ is clipped.}
\label{fig:g2a5}
\end{figure}
\subsection{The gap around integers}
The gap $G^{(a)}(0)$ is also affected by the condition $an=k^2+m$, which can be restated as $m=-k^2\mod a$. We can write it in the gap form,
\begin{equation}
G^{(a)}(0)=\gap\left\{-k^2 \mod a\middle| k\in \mathbb{Z}_a \wedge k\neq 0 \right\}.
\end{equation}
The value $m=-1\mod a$ is always attained (a negative quadratic residue, with $k=1\mod a$), but $m=1\mod a$ need not be. We can therefore simplify the gap to
\begin{equation}
G^{(a)}(0)=\left(1+\inf\left\{-k^2 \mod a\middle| k\in \mathbb{Z}_a \wedge k\neq 0 \right\}\right).
\end{equation}
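A direct Python transcription of this formula, valid for $a>1$ (the function name is ours):

```python
def G_a_zero(a):
    """G^(a)(0) = 1 + min{(-k*k) mod a : 0 < k < a}, for a > 1."""
    # the "+1" accounts for -1^2 = a-1 mod a, which is always attained
    return 1 + min((-k * k) % a for k in range(1, a))

assert G_a_zero(2) == 2
assert G_a_zero(3) == 3
```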
\subsection{The example for $a=2$}
If we set $a=2$, the gap function can still be reduced to an explicit form, because $C^2=C \mod 2$ and the quadratic term disappears. Equation \ref{eq:finalform} reduces to
\begin{equation}
G^{(2)}\left(\tfrac{p}{q}\right)= \frac{d}{q}\gcd((2/d)p+(q/d),2)
\end{equation}
or, case-by-case,
\begin{equation}
G^{(2)}\left(\tfrac{p}{q}\right)=\begin{cases}
\frac{4}{q} & q=2 \mod 4 \\
\frac{2}{q} & q=0 \mod 4 \\
\frac{1}{q} & q=1 \mod 2.
\end{cases}
\label{eq:case2}
\end{equation}
This function and its approximants are shown in Fig.~\ref{fig:g2a2}.
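The agreement between the reduced form above and the case-by-case expression of Eq.~\ref{eq:case2} can be checked exhaustively over small denominators (a sketch with our own helper names):

```python
from math import gcd

def G2_closed(p, q):
    """G^(2)(p/q) = (d/q) * gcd((2/d)p + q/d, 2), the reduced form."""
    d = gcd(2, q)
    return (d / q) * gcd((2 // d) * p + q // d, 2)

def G2_cases(p, q):
    """Case-by-case form of Eq. (case2)."""
    if q % 4 == 2:
        return 4 / q
    if q % 4 == 0:
        return 2 / q
    return 1 / q                        # q odd

# both forms agree for every reduced fraction p/q with small q
for q in range(2, 60):
    for p in range(1, q):
        if gcd(p, q) == 1:
            assert abs(G2_closed(p, q) - G2_cases(p, q)) < 1e-12
```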
\subsection{Geometric interpretation of the gap function}
The authors of Ref.~\cite{sqrts} study the probability distribution of
gap widths by observing a square lattice and how random translates of
the lattice overlap a given triangle. We take up the same idea,
but instead of studying probabilities across all configurations, we
retain the positional information and study projections of the lattice
onto a line.
The defining equation for the elements $x$ of the sequence of fractional
parts of square roots (Eq.~\ref{eq:quadratic}) can be seen as a function
in the $(k,m)$ plane,
\begin{equation}
m(k)=x^2+2kx,
\label{eq:mgeom}
\end{equation}
which describes a family of rays tangent to the caustic $m=-k^2$,
parameterized by $x$, with the point of tangency at $(-x,-x^2)$
(see Figure 2 in \cite{sqrts} and Fig.~\ref{fig:lattice}).
Consider Euclid's orchard: a lattice of integer points $(k,m)$ in two dimensions.
The gap $g(x,N)$ measures the width of the interval where no $x$
solves Eq.~\ref{eq:mgeom} for integer pairs $(k,m)$ in the range
$k<\sqrt{N}$. Set a vertical screen (a line) at $k_{\text{max}}=\sqrt{N}$.
The distance swept by the ray across the screen between the shadows of two
lattice points simplifies to
\begin{equation}
x_2^2+2k_{\text{max}}x_2-x_1^2-2k_{\text{max}}x_1=2(x_2-x_1)(k_{\text{max}}+x)=2(k_{\text{max}}+x)g(x,N)\to G(x),
\end{equation}
\end{equation}
where the mean value $x=(x_1+x_2)/2$ was used to represent each
interval. Notice that the expression is exactly the parenthesized part
of Eq.~\ref{eq:newtstep} and in the limit becomes the gap function.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fig6a.pdf}
\includegraphics[width=0.45\textwidth]{fig6b.pdf}
\caption{Euclid's orchard representing the $(k,m)$ lattice, and the rays that project the lattice onto the screen at the right. The ray pencils
around rational slopes $2x$ pass unobstructed between the lattice points, their widths equal in the limit to
the gap function. {\sl Left:} The family of rays that corresponds to the fractional parts of square roots
forms a parabolic caustic with the point of tangency at $(-x,-x^2)$. {\sl Right:}
Asymptotically, any point with irrational coordinates yields the same light pattern on the screen. Thus, the gap function is the illumination pattern of a single point source projected through Euclid's orchard.
}
\label{fig:lattice}
\end{figure}
With our specific choice of the ray family, the gap function directly
measures the length of contiguously illuminated parts of the
screen. Recall, however, that the $x^2$ term in Eq.~\ref{eq:mgeom}
corresponds to the $p^2$ term in Eq.~\ref{eq:newtstep}, the value of
which does not affect the gap. The only exception is the singular case
$p=0 \mod q$ when a term of the sequence hits a rational point instead
of leaving a gap around it. In our case, the perfect squares which hit
the value $x=0$ were omitted from the sequence and the gaps around
the integers were treated separately in section \ref{sec:whole}.
Instead of $x^2$, the $m$-intercept $c(x)$ of the rays can be almost
any function of $x$,
\begin{equation}
m(k)=c(x)+2kx,
\end{equation}
and still produce the same illumination pattern in the limit. The
only condition is to avoid the singular case $c(p/q) = 0 \mod 1$.
Such cases again correspond to rational values in the
sequence. Geometrically, these are the rays that pass through an
infinite set of lattice points, casting a single point shadow instead
of leaving an illuminated gap around the point.
Perhaps the simplest candidates for the $m$-intercept are linear,
$c(x)=c_1+c_2 x$, forming a pencil of rays passing through a single
focus instead of a caustic. If $c_1$ and $c_2$ are in a rational
proportion, we get some rational terms in the sequence. In the most
singular case $c(x)=0$, the entire sequence is rational and instead of
gaps, the point shadows form an image of the rational set on the
screen. Recall that perspective projection of an orchard of unit height trees
produces a Thomae's function, but in perceived heights, not in the lengths of
illuminated gaps.
For all other choices of $c_{1,2}$, the projection of the lattice onto
the screen leaves gaps that converge toward the gap function we
defined in the beginning, though the continued fraction expansion
determines how quickly the limit converges for the closest rational
approximations of the ratio $c_1/c_2$. The invariance to the
translation of rays is related to the ergodicity that enabled
calculation of the gap distribution in Ref.~\cite{sqrts}.
In short, the gap function tells us the lengths of the portions of a
screen illuminated by a point source located at irrational
coordinates, shining through a square lattice -- Euclid's
orchard. The generalized gap function corresponds to the pattern
obtained by removing some of the lattice points so that only the
multiples of a certain integer cast shadows.
\clearpage
\section{The higher order root gaps}
\subsection{Definition and derivation}
Another straightforward generalization is the sequence $\{\sqrt[\alpha]{n}\}$. We can proceed as before, by defining
\begin{equation}
n=k^\alpha+m;\quad m\in [1,(k+1)^\alpha-k^\alpha-1]
\label{eq:mdefalpha}
\end{equation}
and $\sqrt[\alpha]{n}=k+x$. Instead of a quadratic equation, we obtain a polynomial
\begin{equation}
(x+k)^\alpha-k^\alpha-m=0.
\label{eq:ndratic}
\end{equation}
We must check whether the Newton-step procedure is still valid for the higher-order polynomial we are dealing with now. It turns out that for $x\in (0,1)$ the method converges, as the polynomial above increases monotonically from negative to positive values on this domain and has a single simple positive root. The Newton step yields
\begin{equation}
\Delta(p/q,k,m)=\frac{1}{\alpha(k^{\alpha-1}+\mathcal{O}(k^{\alpha-2}))}
\left(-\frac{\sum_{i=0}^{\alpha-1} {\alpha \choose i} p^{\alpha-i}(kq)^{i}}{q^{\alpha}}+m\right).
\end{equation}
The prefactor shows the power law scaling that has to be adjusted for the $\alpha$-th order gap function to exist in the limit:
\begin{align}
G_\alpha (p/q)=&\lim_{N\to\infty}\alpha N^{(\alpha-1)/\alpha} \gap(\Delta(p/q,k,m) \mid m,k\in\mathbb{N})=\\
=&\frac{1}{q^\alpha}\gap\left\{\sum_{i=0}^{\alpha-1} {\alpha \choose i} p^{\alpha-i}(kq)^{i} \mod q^\alpha \middle | k\in\mathbb{N} \right\}.
\end{align}
With growing $\alpha$, the prefactor scaling approaches the scaling of the average gap, which goes as $1/N$. This means that it takes a larger $N$ to suppress the background than for the square-root gap function, and the limit converges very slowly for $\alpha>2$.
We learned from the $G^{(a)}$ generalization that it is convenient to express $k$ in base $q$:
\begin{equation}
k=\sum_{j=0}^{\alpha-2} C_j q^{j}
\end{equation}
where $C_j\in \mathbb{Z}_q$. The term $C_{\alpha-1} q^{\alpha-1}$ is not needed because $q C_{\alpha-1}q^{\alpha-1} = 0\mod q^{\alpha}$. Let us also split off the first two terms of the sum over $i$:
\begin{equation}
G_\alpha (p/q)=\frac{1}{q^\alpha}\gap\left\{p^\alpha+\alpha p^{\alpha-1}q\sum_{j=0}^{\alpha-2}C_jq^j+ \sum_{i=2}^{\alpha-1} {\alpha \choose i} p^{\alpha-i}q^{i}\left[\sum_{j=0}^{\alpha-2} C_j q^{j}\right]^{i} \mod q^\alpha \middle | C_j\in\mathbb{Z}_q \right\}.
\label{eq:Galpha_unreduced}
\end{equation}
In the last term, the prefactor $q^{i}$ with $i\geq 2$ ensures that the $C_{\alpha-2}$ contribution to the sum is a multiple of $q^\alpha$. The second term, which is linear in the $C_j$, is thus the only surviving term that contains $C_{\alpha-2}$. The parameter $C_{\alpha-2}$ can be absorbed, rescaling the modulus to
\begin{equation}
\gcd(\alpha p^{\alpha-1} q^{\alpha -1},q^\alpha)=\gcd(\alpha,q)q^{\alpha-1}=d_1 q^{\alpha-1}.
\end{equation}
The reduced expression becomes
\begin{equation}
G_\alpha (p/q)=\frac{1}{q^\alpha}\gap\left\{p^\alpha+\alpha p^{\alpha-1}q\sum_{j=0}^{\alpha-3}C_jq^j+ \sum_{i=2}^{\alpha-2} {\alpha \choose i} p^{\alpha-i}q^{i}\left[\sum_{j=0}^{\alpha-3} C_j q^{j}\right]^{i} \mod d_1q^{\alpha-1} \middle | C_j\in\mathbb{Z}_q \right\}.
\end{equation}
This step removed the highest $C_j$ coefficient and reduced the powers of $q$ by one. The next-largest coefficient now appears only in the linear term, so the coefficient absorption can be performed recursively until the nonlinear part vanishes completely.
This procedure eventually yields a linear expression in the last remaining coefficient $C_1$. Because each next modulus divides the previous, all the steps of the recursion can be condensed into looking up the modulus induced by the term $\alpha p^{\alpha-1} C_1 q$ in the unreduced expression Eq.~\ref{eq:Galpha_unreduced}:
\begin{equation}
\gcd(\alpha p^{\alpha-1} q, q^\alpha)=q\gcd(\alpha,q^{\alpha-1})=qd.
\end{equation}
The constant term $p^\alpha$ can again be skipped and the higher-order gap function written out as
\begin{equation}
G_\alpha(p/q)=\frac{\gcd(\alpha,q^{\alpha-1})}{q^{\alpha-1}}=\frac{d}{q^{\alpha-1}}.
\end{equation}
If $\alpha$ is prime, then $d=\gcd(\alpha,q)$ and can be either $1$ or $\alpha$.
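The closed form can be cross-checked against the defining modular gap by brute force for small $\alpha$ and $q$. In the sketch below (helper names ours), the residues are enumerated over one period of $k$, and the gap around zero uses the fact that the set never contains $0$ for $q>1$:

```python
from math import comb, gcd

def G_alpha_direct(p, q, alpha):
    """Gap of the modular set defining G_alpha(p/q), by direct enumeration."""
    mod = q ** alpha
    res = {sum(comb(alpha, i) * p ** (alpha - i) * (k * q) ** i
               for i in range(alpha)) % mod
           for k in range(q ** (alpha - 1))}   # residues repeat with this period
    # gap around 0 of the mod-periodic set; 0 is never in the set for q > 1
    return (min(res) + mod - max(res)) / mod

def G_alpha_closed(q, alpha):
    return gcd(alpha, q ** (alpha - 1)) / q ** (alpha - 1)

assert G_alpha_direct(1, 2, 3) == G_alpha_closed(2, 3) == 0.25
```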
\subsection{Additional generalizations}
The higher-order gap functions can also be generalized to a
diluted set of integers, $\sqrt[\alpha]{an}$. The procedure is
similar to the square-root case: the solution involves finding a gap
in a polynomial expression of order $\alpha$ modulo $a$.
Another possible generalization is to introduce rational exponents,
i.e. $\sqrt[\alpha]{n^\beta}$. In this case, the resulting modular
expression is nonlinear in both independent parameters, which makes
reduction less trivial.
\subsection{Convergence rate}
The analytical result for the gap function is difficult to demonstrate numerically even for $\alpha=3$. First of all, the $1/N$ average gap size means that the background noise in the numerical approximation scales as $3N^{(3-1)/3}(1/N)=3N^{-1/3}$. Assuming these background gaps are distributed exponentially \cite{sqrts}, we estimate the minimal $N$ for which only a couple of outliers remain above a level $\epsilon$:
\begin{equation}
Ne^{-\epsilon N^{1/3}/3}\sim 1.
\end{equation}
As the gap function for $\alpha=3$ has smaller values than the square-root gap function, $\epsilon\sim 1/3$ is barely enough to keep the tallest spikes ($q=3$) level with the farthest outliers. This means that it is impossible to see any kind of signal below $N\sim 2\cdot 10^6$, and only around $N\sim 4\cdot 10^8$ does the noise fall to the level of the third tallest spike ($q=6$), allowing the first spikes to grow out of it. This is drastically worse than the $\alpha=2$ case, where $N\sim 1000$ was already enough to plot a decent graph (Fig.~\ref{fig:fig_gaps}).
For $\alpha = 4$, values of order $N\sim 10^{14}$ would be needed to resolve the first hints of the tallest spikes, and $N\sim 10^{18}$ to get anything useful, which makes naive numerical demonstration impractical.
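These thresholds can be reproduced by solving the outlier condition numerically. A sketch (assuming exponentially distributed background gaps with mean scaled size $3N^{-1/3}$, so that the expected number of outliers above a level $\epsilon$ is $N\exp(-\epsilon N^{1/3}/3)$):

```python
import math

def min_N(eps, lo=1e4, hi=1e15):
    """Largest N with N * exp(-eps * N**(1/3) / 3) = 1, found by bisection:
    above this N the expected number of background outliers exceeding
    the level eps drops below one."""
    f = lambda N: math.log(N) - eps * N ** (1 / 3) / 3
    assert f(lo) > 0 and f(hi) < 0
    for _ in range(200):
        mid = math.sqrt(lo * hi)          # bisect in log space
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

# level of the tallest alpha=3 spikes (q=3): G_3 = 1/3
print(f"N ~ {min_N(1/3):.1e}")   # ~ 2.3e6
# level of the third tallest spike (q=6): G_3 = 3/36 = 1/12
print(f"N ~ {min_N(1/12):.1e}")  # ~ 3.6e8
```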
\section{Conclusion}
With the exception of integer solutions, expressions involving roots
of integers cannot assume rational values. An immediate consequence is
that rational points on the unit interval have a special role as a
repulsor for the fractional parts of the sequence. In this report, we
worked out how wide the gaps are around each rational number in the
form of an exact analytical expression. The relative gap widths, which
are, incidentally, also in rational proportions, tell us how well a
fraction can be approximated by an irrational number (a root), not
unlike how the continued fraction expansion determines how well an
\emph{irrational number} can be approximated by a fraction. In a
sense, fractions with a lower denominator tend to be ``less
irrational'', creating a wider gap in the sequence. The methodology
used here can be extended to a wide variety of sequences, although the
last step may not have a closed-form expression.
\section{Introduction}
\label{sec:intro}
About 10\% of active galactic nuclei (AGNs) in the local Universe release large amounts of energy in the form of jets \citep[e.g.,][]{Netzer2015, Blandford2019, Hada2019}. AGN jets are often observed to move at relativistic speeds, with apparent speeds up to tens of times the speed of light \citep[e.g.,][]{Jorstad2017, Lister2018}, produce rapid time variability at multiple wavelengths \citep[e.g.,][]{PT2014, PT2017}, actively interact with the interstellar and intergalactic medium, and affect the evolution of galaxies and clusters \citep[e.g.,][]{Fabian2012, HC2020}. They are believed to be launched by the accretion of matter and strong magnetic fields in the vicinity of black holes \citep[e.g.,][]{BZ1977, BP1982, Contopoulos1995, NQ2005, McKinney2006, Tchekhovskoy2011, Sadowski2013, Pu2015}.
How AGN jets are collimated and accelerated to relativistic speeds has been a longstanding problem. Many theoretical studies and special/general relativistic magnetohydrodynamic (S/GRMHD) simulations have shown that AGN jets are gradually accelerated by converting electromagnetic energy into kinetic energy \citep[e.g.,][]{Li1992, BL1994, VK2004, Komissarov2007, Komissarov2009, Tchekhovskoy2008, Tchekhovskoy2009, Lyubarsky2009}. The MHD jet acceleration occurs more efficiently when the jet and its associated poloidal magnetic fields are being systematically collimated through, e.g., the magnetic nozzle effect \citep[e.g.,][]{Camenzind1987, Li1992, BN2006, Komissarov2009, Tchekhovskoy2009, Vlahakis2015}. Jet collimation is thought to be governed by the pressure of an external confining medium \citep{Eichler1993, BL1994, Komissarov2007, Komissarov2009, Lyubarsky2009}, which is presumably non-relativistic gas outflows launched from the accretion disk \citep[e.g.,][]{Sadowski2013, Yuan2015, Nakamura2018}. Therefore, jet acceleration and collimation are believed to occur simultaneously, under the influence of the black hole's gravity, forming a jet acceleration and collimation zone (ACZ) at a distance $\lesssim10^4$--$10^6$ gravitational radii \citep[$R_g$,][]{VK2004, Marscher2008, Meier2012}.
Recent very long baseline interferometry (VLBI) observations have indeed found the existence of the ACZs in nearby radio galaxies and blazars. \cite{AN2012} showed that the jet in M87 is gradually collimated in a semi-parabolic shape inside the Bondi radius, while it freely expands conically outside. The parabolic jet collimation profile appears to be present all the way down to near the jet base \citep{Junor1999, Hada2013, NA2013, Hada2016, Mertens2016, Kim2018, Nakamura2018, Walker2018}. \cite{Asada2014} showed that the M87 jet accelerates to relativistic speeds inside the Bondi radius, followed by gradual deceleration in the outer regions \citep{Biretta1995, Biretta1999, Meyer2013}. Recent VLBI monitoring observations of the M87 jet have suggested that the jet becomes relativistic already at distances $\lesssim1,000\ R_g$ and the velocity field may be stratified \citep{Mertens2016, Park2019a}. Also, \cite{Park2019b} have revealed that the magnitude of Faraday rotation measures in the M87 jet systematically decreases with increasing distance from the black hole. They applied a simple analytical model of hot accretion flows \citep[e.g.,][]{YN2014} and found that the inferred pressure profile of an external confining medium is flat enough to collimate the jet \citep[e.g.,][]{Komissarov2009}. These observations are consistent with the theoretical picture of AGN jet collimation and acceleration.
The detailed view of jet acceleration and collimation processes provided by the extensive observations of M87 has triggered many VLBI observations aiming at finding the ACZs in other radio-loud AGNs. These observations have shown that a jet structural transition is common \citep[e.g.,][]{Tseng2016, Akiyama2018, Hada2018, Nakahara2018, Nakahara2020, Kovalev2020}, while there is a diversity in the jet geometries before and after the transitions \citep[e.g.,][]{Nakahara2020} and in the transition locations \citep[e.g.,][]{Hada2018, Nakahara2018, Nakahara2020}, and there are a few cases with no clear indication of a jet structural transition \citep{Giovannini2018, Nakahara2019}. On the other hand, deriving jet acceleration profiles in the collimation zones has been challenging. This is mainly because the brightness distributions of the jets in nearby radio galaxies are usually smooth, unlike those of distant blazars, which consist of several distinct knots, and because a robust jet kinematic analysis is possible only when dense, high-resolution, high-cadence VLBI monitoring data are available \citep[see][for a related discussion]{Park2019a}. Jet collimation and acceleration from non-relativistic to relativistic speeds occurring in the same region have, to our knowledge, been observed only in M87, Cygnus A \citep{Boccardi2016b}, and the $\gamma$-ray emitting narrow-line Seyfert 1 galaxy 1H 0323+342 \citep{Hada2018}.
The nearby giant elliptical galaxy NGC 315, at a redshift of 0.01648 \citep{Trager2000} which corresponds to a scale of 0.348 kpc arcsec$^{-1}$ for our adopted cosmology ($H_0 = 67.4\ {\rm km\ s^{-1}\ Mpc^{-1}}$, $\Omega_m = 0.315$, \citealt{Planck2020}), is a good laboratory for studying jet collimation and acceleration. It hosts a Fanaroff-Riley Class I \citep[FR I,][]{FR1974} radio source whose two-sided jets extend out to several hundred kpc from the galaxy \citep[e.g.,][see also Figure~\ref{fig:images}]{Bridle1976, Laing2006}. Its large black hole mass of $1.6\times10^9\ M_{\odot}$, estimated from the stellar velocity dispersion of $\sigma_v\approx350 {\rm\ km\ s^{-1}}$ \citep{Faber1989, Ene2020} and the latest $M_{\rm BH}$-$\sigma_v$ relation \citep{Sexton2019}, makes it easier to resolve the putative jet collimation and acceleration region of this source.
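The quoted scale follows directly from the adopted cosmology; a short numerical sketch (illustrative only; flat $\Lambda$CDM with the parameters above, with the comoving-distance integral evaluated by a midpoint rule):

```python
import math

# Adopted flat LambdaCDM cosmology (Planck 2020) and NGC 315 parameters
H0, Om, z = 67.4, 0.315, 0.01648       # km/s/Mpc, matter density, redshift
c = 299792.458                          # km/s

def comoving_distance_mpc(z, steps=10000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), E = sqrt(Om(1+z)^3 + (1-Om))."""
    E = lambda zp: math.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    dz = z / steps
    integral = sum(dz / E((i + 0.5) * dz) for i in range(steps))  # midpoint rule
    return c / H0 * integral

D_A = comoving_distance_mpc(z) / (1 + z)       # angular-diameter distance, Mpc
arcsec = math.pi / (180 * 3600)                # radians per arcsec
scale_kpc = D_A * 1e3 * arcsec                 # kpc per arcsec
print(f"D_A = {D_A:.1f} Mpc, scale = {scale_kpc:.3f} kpc/arcsec")  # ~0.348

# angular size of one gravitational radius for M_BH = 1.6e9 Msun
Rg_m = 1.6e9 * 1.477e3                         # GM/c^2 for 1.6e9 Msun, metres
theta_Rg_uas = Rg_m / (D_A * 3.086e22) / arcsec * 1e6
print(f"1 R_g subtends ~{theta_Rg_uas:.2f} microarcsec")
```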
The jets in NGC 315 have been extensively explored on kpc-scales. \cite{Canvin2005} applied an analytical model, originally developed by \cite{LB2002}, which assumes that apparent asymmetries between an approaching and a receding jet in total intensity and linear polarization are caused by relativistic aberration, to deep, high-resolution Very Large Array (VLA) observations at 5 GHz. They constrained the jet viewing angle of NGC 315 to be $\theta\approx38^\circ$ and found that the jets decelerate from $\beta\approx0.9$ to $\beta\approx0.4$ between 8 and 18 kpc from the nucleus, where $\beta$ is the intrinsic jet speed in units of the speed of light. They also found that the jet velocity field is stratified in such a way that the jet edge has a slower speed than the jet on-axis. \cite{LB2014} extended this study by using an improved model and more data\footnote{They also modelled nine other nearby FR I radio galaxies.} and obtained similar results with an updated jet viewing angle of $\theta=49.8^{\circ+0.5}_{-0.2}$. These studies showed that the jets have a region with high synchrotron emissivity (so-called ``brightness flaring''), followed by a rapid expansion of the jets with the jet opening angles increasing with distance (``geometrical flaring''). \cite{Laing2006} investigated the spatial distribution of the spectral index of the jets and found a flatter spectrum at the edge than on-axis in the jet deceleration and downstream regions. This transverse spectral index structure might be associated with electron acceleration due to velocity shear developed by the interaction between the jets and the surrounding medium. \cite{Worrall2003, Worrall2007} detected the approaching jet at X-rays out to $\approx10$ and $\approx30$ arcsec from the nucleus, respectively, which also suggests the presence of distributed particle acceleration in the jet.
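The intensity asymmetries exploited by these models can be illustrated with the textbook beaming formula for a smooth, intrinsically symmetric jet (a simplified sketch, not the actual model of \citealt{LB2002}; the spectral index $\alpha=-0.5$ is an assumed typical value):

```python
import math

def sidedness_ratio(beta, theta_deg, alpha=-0.5):
    """Jet-to-counterjet intensity ratio for a smooth, intrinsically symmetric
    jet: R = [(1 + beta*cos(theta)) / (1 - beta*cos(theta))]^(2 - alpha),
    with spectral index alpha defined by S_nu ~ nu^alpha."""
    mu = beta * math.cos(math.radians(theta_deg))
    return ((1 + mu) / (1 - mu)) ** (2 - alpha)

# fast inner jet vs. decelerated outer jet at theta ~ 38 deg (Canvin et al. values)
print(f"beta=0.9: R ~ {sidedness_ratio(0.9, 38):.0f}")   # strongly one-sided
print(f"beta=0.4: R ~ {sidedness_ratio(0.4, 38):.1f}")   # mildly asymmetric
```

The sharp drop of the sidedness ratio with decreasing $\beta$ is what lets such models trade intensity asymmetries for intrinsic speeds.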
On pc-scales, \cite{Venturi1993} observed only the north-west jet from multifrequency global-VLBI observations, from which they derived an upper limit on the jet viewing angle of $\theta\lesssim58^\circ$. They also found flat and steep spectral indices for the core and the extended jet, respectively. \cite{Giovannini1994} combined the result of \cite{Venturi1993} with other indirect constraints on the jet speed and viewing angle based on, e.g., the correlation between the core and the total radio power in radio galaxies, and constrained the viewing angle to be in the range of $30^\circ<\theta<41^\circ$. \cite{Cotton1999} performed multi-epoch observations of NGC 315 with the Very Long Baseline Array (VLBA) at 5 and 8 GHz with an average interval between observations of about one year. They observed outward-moving features in the jet, as well as the receding jet, from which they suggested systematic jet acceleration on pc-scales.
The relatively large jet viewing angle of NGC 315, $\theta\approx50^\circ$ constrained on kpc-scales and $\theta\gtrsim30^\circ$ on pc-scales in previous studies, means that its jets would be less affected by relativistic effects as compared with M87. Thus, investigating the NGC 315 jets helps us to have a unified view of AGN jet collimation and acceleration processes. In this paper, we report the result from our multifrequency VLBA observation of NGC 315, as well as archival high-resolution VLBI data and VLA data analysis, which constrains the jet collimation and acceleration profiles over a wide range of jet distances. We also present the result from our supplementary dense VLBI monitoring observations, which shows a complex jet kinematic structure and constrains the jet viewing angle on pc-scales.
The paper is organized as follows. We describe the observations, archival data we used, and data reduction in Section~\ref{sec:data}. We present the results of our analysis of jet structure, opacity, collimation, and acceleration in Section~\ref{sec:results}. The results are discussed in Section~\ref{sec:discussion}. We summarize our findings and conclude in Section~\ref{sec:summary}.
\section{Observations and data reduction} \label{sec:data}
\subsection{Multifrequency VLBA observation}
\label{sec:vlba}
We observed NGC 315 with the VLBA on 2020 Jan 05 at frequencies of 1.548, 2.284, 4.980, 8.416, 15.256, 22.220, and 43.120 GHz. All ten VLBA stations successfully participated in the observation. The data were recorded in both right and left-hand circular polarizations with two-bit quantization in eight baseband channels (also often called intermediate frequencies; IFs), using the polyphase filterbank (PFB) observing system, at a recording rate of 2 Gbps, yielding a total bandwidth of 256 MHz for each polarization. The total observation time is 12 hours. The on-source time on our target is $\approx40$ minutes at 1.5--8.4 GHz, and $\approx50, 80, 100$ minutes at 15.3, 22.2, and 43.1 GHz, respectively. The weather conditions were good and no major technical issues occurred at any station during the observation. We summarize the basic properties of our data in Table~\ref{tab:data}.
We performed a standard data reduction with the NRAO's Astronomical Image Processing System (AIPS; \citealt{Greisen2003}, see, e.g., Section C in the AIPS cookbook\footnote{\url{http://www.aips.nrao.edu/CookHTML/CookBook.html}}). We updated the Earth Orientation Parameters using the more accurate parameters, taken from NASA CDDIS, that became available after the data correlation. We corrected the dispersive delays caused by the ionosphere by using GPS models of the ionospheric electron content with the procedure VLBATECR in AIPS. The sampler voltage offsets were corrected by using the autocorrelation spectra. We removed the instrumental delay residuals and phase offsets between IFs by performing a ``manual phase-cal'' using a scan on bright calibrators such as 0133+476, 3C 84, and BL Lac. The bandpass shapes of the cross-power spectra were calibrated for each IF by using calibrators. We corrected a possible offset of the autocorrelation amplitudes from unity caused by the bandpass calibration. A priori amplitude calibration was done by using the antenna gain curves and system temperatures with an atmospheric opacity correction. The antenna parallactic angles were corrected. We performed global fringe fitting \citep{SC1983} for each IF using a solution interval of ten seconds\footnote{This solution interval is shorter than the typical coherence time expected at the frequencies of our interest. The main purpose of using relatively short solution intervals was to capture possible rapid phase variations caused by atmospheric fluctuations on short timescales, especially at low source elevations. We repeated the data reduction using longer solution intervals (one minute at $\lesssim8.4$ GHz and 30 seconds at $\gtrsim15.3$ GHz) and found that our results are robust against the solution intervals.}, using a point-source model. 
We found that the fringe detection rates at high ($\gtrsim15$ GHz) frequencies are relatively low with this parameter setup for the antennas on very long baselines, namely the HN (Hancock), MK (Mauna Kea), and SC (Saint Croix) stations. We combined all the IFs and increased the solution interval to 20 seconds in those cases, achieving high ($\gtrsim95$\%) detection rates for all stations and frequencies except at 43 GHz (see Section~\ref{sec:hsa}). The remaining instrumental delay between polarizations at the reference antenna was corrected by using a scan on bright calibrators. The data were averaged over the channels within each IF and in time over ten seconds. We performed CLEAN and phase/amplitude self-calibration iteratively with the Caltech Difmap package \citep{Shepherd1997} and produced source images.
\begin{deluxetable*}{cccccccc}
\tablecaption{Multifrequency Observations and Data of NGC 315\label{tab:data}}
\tablewidth{0pt}
\tablehead{
Proj. Code & Obs. Date & \colhead{Freq.} & \colhead{Beam Size} & \colhead{$I_{\rm p}$} & \colhead{$I_{\rm rms}$} & \colhead{$\theta_{\rm c, modelfit}$} & \colhead{$\theta_{\rm c, JMFIT}$} \\
&& \colhead{(GHz)} & \colhead{(mas $\times$ mas, degree)} & \colhead{(Jy/B)} & \colhead{(mJy/B)} & (mas) & (mas) \\
&&&(a)&(b)&(c)&(d)&(e)
}
\startdata
\hline
\multicolumn{8}{c}{VLBA}\\
\hline
\multirow{ 6}{*}{BP243} & \multirow{ 6}{*}{2020 Jan 05} & 1.548 & $9.78\times6.22$, -4.27 & 0.250 & 0.048 & $0.753\pm0.022|_{\rm stat}\pm0.086|_{\rm sys}$ & $0.624\pm0.035|_{\rm stat}\pm0.086|_{\rm sys}$ \\
&& 2.284 & $6.56\times4.35$, -9.15 & 0.277 & 0.136 & $0.645\pm0.040|_{\rm stat}\pm0.031|_{\rm sys}$ & $0.571\pm0.043|_{\rm stat}\pm0.031|_{\rm sys}$ \\
&& 4.980 & $2.85\times1.97$, -5.12 & 0.378 & 0.037 & $0.280\pm0.002|_{\rm stat}\pm0.099|_{\rm sys}$ & $0.140\pm0.007|_{\rm stat}\pm0.099|_{\rm sys}$ \\
&& 8.416 & $1.75\times1.17$, -11.58 & 0.389 & 0.053 & $0.131\pm0.004|_{\rm stat}\pm0.010|_{\rm sys}$ & $0.145\pm0.004|_{\rm stat}\pm0.010|_{\rm sys}$ \\
&& 15.256 & $0.97\times0.68$, -3.40 & 0.285 & 0.068 & $0.104\pm0.004|_{\rm stat}\pm0.000|_{\rm sys}$ & $0.108\pm0.002|_{\rm stat}\pm0.000|_{\rm sys}$ \\
&& 22.220 & $0.77\times0.53$, -5.66 & 0.264 & 0.073 & $0.095\pm0.003|_{\rm stat}\pm0.000|_{\rm sys}$ & $0.097\pm0.002|_{\rm stat}\pm0.000|_{\rm sys}$ \\
\hline
\multicolumn{8}{c}{HSA}\\
\hline
BG170B$^1$ & 2008 Feb 03 & 43.212 & $0.36\times0.17$, -13.46 & 0.205 & 0.142 & $0.018\pm0.003|_{\rm stat}\pm0.017|_{\rm sys}$ & $0.042\pm0.002|_{\rm stat}\pm0.017|_{\rm sys}$ \\
\hline
\multicolumn{8}{c}{VLA}\\
\hline
AL538 & 2001 Mar 10 & 1.365 & $6021\times5425$, 89.63 & 0.429 & 0.054 & $-$ & $-$ \\
AC476 & 1996 Nov 02 & 4.860 & $516\times460$, 84.56 & 0.706 & 0.019 & $-$ & $-$ \\
\enddata
\tablecomments{(a) Major axis, minor axis, and position angle of the synthesized beam under the natural weighting of the data. (b) Map peak intensity in units of Jy per beam. (c) Image rms-noise measured in the off-source regions in units of mJy per beam. $I_{\rm rms}$ is relatively high at 2.3 GHz due to (i) the signals in some IFs being blocked by the filters installed in many stations and (ii) severe radio frequency interference in this band. (d) FWHM of core \texttt{modelfit} elliptical Gaussian component along the perpendicular direction to the jet axis. (e) FWHM of core JMFIT elliptical Gaussian component along the perpendicular direction to the jet axis. Statistical and systematic errors in the fitted core sizes in (d) and (e) are noted. $^1$Participating stations: the VLBA (except Mauna Kea station), the Effelsberg 100m station, the Green Bank Telescope, and the phased-up VLA.}
\end{deluxetable*}
\subsection{Archival HSA data at 43 GHz}
\label{sec:hsa}
We found that our VLBA image at 43 GHz is mostly dominated by the compact core, which is not well-suited for investigating the jet collimation profile, the jet-to-counterjet brightness ratio, and so on. The fringe detection rate for these data was at most $\lesssim50, 30, 10\%$ for the HN, MK, and SC stations, respectively, even though we tried to combine all IFs and increase the solution interval to a few minutes for the global fringe fitting procedure. This is presumably because the source is fainter, the data are more sensitive to the weather conditions, the extended source structure is more easily resolved out, and the data suffer more from antenna pointing offsets than at other frequencies. We instead used archival High Sensitivity Array (HSA) data observed at 43.212 GHz on 2008 Feb 03 to investigate the jets at a short distance from the black hole. The Effelsberg 100m station, the Green Bank Telescope, and the phased-up VLA\footnote{We found that the phased-up VLA baselines have poorer sensitivities than expected. We suspect that the phasing efficiency was not very high for this observation for some reason. The Effelsberg 100m telescope and the Green Bank Telescope baselines show high sensitivities, as expected.} participated in the observation, which enabled us to achieve a higher sensitivity than our VLBA-only 43 GHz observation. We found that the image rms-noise of the HSA 43 GHz data is lower than that of the VLBA 43 GHz data by more than a factor of four (see Figure~\ref{fig:map_q2} for a comparison of the maps). Therefore, we decided to use these data for our analysis by assuming that there was no significant change in the jet collimation and acceleration properties between the observations (over $\approx12$ years). We calibrated and imaged these archival data as described in Section~\ref{sec:vlba}.
\subsection{Archival VLA data at 1.4 and 4.9 GHz}
We analyzed two historical Very Large Array (VLA) data sets available in the VLA archive. One observation was performed on 2001 Mar 10 at 1.365 GHz in the B-configuration and the other on 1996 Nov 02 at 4.860 GHz in the A-configuration. These data were analyzed and presented in previous studies of NGC 315 \citep{Canvin2005, Laing2006, Worrall2007, LB2014}. The data calibration was done in a standard manner with AIPS. We performed imaging and self-calibration in Difmap.
\subsection{Notes on linear polarization of VLBA/HSA data}
We corrected the instrumental polarization of the VLBA and HSA data by using GPCAL, which is a new pipeline designed for achieving a high calibration accuracy \citep{Park2020}. GPCAL allows one (i) to use multiple calibrators simultaneously and (ii) to take into account the linear polarization structures of calibrators through so-called ``instrumental polarization self-calibration'', without being limited by the conventional assumption that the linear polarization structures are proportional to the total intensity structures. We applied GPCAL to two calibrators, 3C 84 and BL Lac, for the VLBA data and a single calibrator, 0133+476, for the HSA 43 GHz data, by implementing ten iterations of instrumental polarization self-calibration. However, we could not find any significant linear polarization in the NGC 315 jets at any frequency. This suggests two possibilities: (i) the NGC 315 jets are strongly depolarized on pc-scales by complex jet magnetic field structures or a turbulent external Faraday rotating screen \citep[e.g.,][]{Sokoloff1998}, or (ii) the NGC 315 jets have very high rotation measures and the polarization signals are cancelled out when we average the data over frequency channels \citep[e.g.,][]{Bower2017}. The latter possibility requires calibration of the frequency-dependent instrumental polarization, which is beyond the scope of the present study. We plan to explore it in our future studies.
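The second possibility can be quantified with the standard bandwidth-depolarization factor for in-band frequency averaging (an illustrative estimate, not a measurement; the 32 MHz IF width follows from our observing setup, and the trial rotation measures are arbitrary):

```python
import math

def bandwidth_depol(rm, nu_hz, bw_hz):
    """Fractional polarization surviving after vector-averaging a channel of
    width bw_hz at frequency nu_hz, for Faraday rotation measure rm (rad/m^2):
    p/p0 = |sinc(rm * d(lambda^2))|, with d(lambda^2) ~ 2 c^2 bw / nu^3."""
    c = 299792458.0
    dlam2 = 2 * c ** 2 * bw_hz / nu_hz ** 3
    x = rm * dlam2
    return abs(math.sin(x) / x) if x else 1.0

# e.g. averaging a full 32 MHz IF at 5 GHz for a few trial rotation measures
for rm in (1e3, 1e4, 1e5):
    print(f"RM = {rm:.0e} rad/m^2 -> p/p0 = {bandwidth_depol(rm, 5e9, 32e6):.3f}")
```

Only rotation measures of order $10^5{\rm\ rad\ m^{-2}}$ or more would wash out most of the signal within a single IF at 5 GHz in this toy estimate.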
\section{Analysis and Results}
\label{sec:results}
We present the images of NGC 315 in Figure~\ref{fig:images}. The rapidly expanding jet region (the ``geometrical flaring'' region, \citealt{Canvin2005, LB2014}), followed by the re-collimation of the jet, is seen in both the approaching (the jet in the north-west direction) and the receding (in the south-east direction) jets in the VLA 1.4 GHz image (we will hereafter use the terminology of the jet and counterjet instead of the approaching and receding jets, respectively). The conically expanding jet and the tenuous counterjet are present in the VLA 4.9 GHz image. These are consistent with the previous studies using the same data\footnote{We note that \citet{Laing2006} showed the jets extending up to larger distances compared to our images thanks to the high sensitivity achieved by combining multiple data sets. Our VLA images are enough to constrain the jet collimation profile on kpc-scales, which is the main purpose of re-analyzing the VLA data in the present study.} \cite[e.g.,][]{Laing2006}. The overall jet morphology on pc-scales is similar to that on kpc-scales. The images are dominated by the jet, but the counterjet could be imaged at all VLBA/HSA frequencies. Notably, the 43.2 GHz image shows an indication of limb-brightening in the outer part of the jet, which will be discussed later (Section~\ref{sec:limb_brightening}).
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figures/ngc315_images.pdf}
\caption{Images of the jets of NGC 315 reconstructed from archival VLA data at 1.365 and 4.860 GHz, our VLBA observations at 1.5--22.2 GHz, and archival HSA data at 43.212 GHz. The physical scales corresponding to the white horizontal sticks are noted. The ticks on the axes are in units of arcsec for the VLA images and of mas for the VLBA/HSA images. The white dashed rectangles show the scales of the next maps in the order from left to right for the top and bottom rows and from right to left for the middle row. CLEAN models are restored with the naturally-weighted synthesized beams to produce the images except for the HSA 43 GHz image for which we restored with a circular beam with the minor-axis of the synthesized beam. This is for illustrating an indication of limb-brightening observed only at that frequency. The restoring beam FWHMs are indicated with the white cross in the lower right of each map. On kpc-scales, prominent two-sided jets with a morphology of rapid expansion near the central core and recollimation in the outer region are observed. On pc-scales, the south-east jet is much fainter and less extended than the north-west jet but it is detected at all frequencies. \label{fig:images}}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width = 0.49\textwidth]{figures/spix_example.pdf}
\caption{Color maps of the distributions of spectral index $\alpha$ (defined as $I_\nu \propto \nu^\alpha$) between 1.5 and 5.0 GHz overlaid on contours of the total intensity at 1.5 GHz before (top) and after (bottom) alignment of the images (Section~\ref{sec:coreshift}). The maps are rotated clockwise by $40^\circ$ and the jet axis is aligned with the horizontal axis. The low-$\alpha$ region upstream of the core (on the counterjet side) and the oscillatory pattern along the jet in the upper map disappear after the image alignment. \label{fig:spix_example}}
\end{figure}
\subsection{Core-shift}
\label{sec:coreshift}
AGN jets are synchrotron emitters and are subject to synchrotron self-absorption. Significant absorption usually occurs near the jet base, where the electron density and the magnetic field strength are expected to be high. Thus, the observed base of the jet, the so-called ``core'', is separated from the physical jet base. Since synchrotron self-absorption depends on frequency \citep[e.g.,][]{RL1979}, the observed core position is also expected to change with frequency. This is known as the ``core-shift'' effect \citep[e.g.,][]{Konigl1981, Lobanov1998, Hirotani2005} and has been observed in the jets of many blazars \citep[e.g.,][]{Lobanov1998, OG2009, Pushkarev2012, Fromm2013, Hada2018} and radio galaxies \citep[e.g.,][]{Sudou2000, Hada2011, Hada2013, Haga2015}. An accurate measurement of the core-shift at multiple frequencies allows us to infer the location of the central engine \citep{Hada2011}, which is crucial for obtaining the proper ``jet distance''.
We define the position of the ``apparent'' core as the brightest pixel in the core region in the image. The apparent core may be separated from the physical core, i.e., the $\tau=1$ surface, where $\tau$ is the synchrotron optical depth. This is because of the finite beam size, which can introduce an additional shift of the apparent core position toward the extended jet side due to the blending of the physical core and the extended jet \citep[e.g.,][]{Hada2011}. The core-shift effect refers to a shift of the physical core toward the jet base at higher frequencies, which occurs due to the physical properties of the source. The separation of the apparent core from the physical core is just due to the limited angular resolution. We will derive the physical core-shift effect below, which is important to constrain the location of the jet base. We constrain the position of the apparent core in Appendix~\ref{appendix:core_identification}, which is necessary to properly convert the apparent jet distance, i.e., the distance of a pixel in the image from the apparent core, to the physical jet distance, i.e., the distance of a pixel from the inferred jet base.
We investigated the core-shift effect in the jets by performing two-dimensional cross-correlation of the optically thin jet emission in the VLBA images at different frequencies \citep{CG2008}. This method is suitable for our analysis because prominent, extended jet structures could be obtained at all frequencies (Figure~\ref{fig:images}). For each frequency pair, we used the same image size and pixel size. A pixel size of $1/20$ of the minor axis of the synthesized beam at the higher frequency was used \citep{Pushkarev2012, Fromm2013}. We restored the CLEAN models of each frequency pair with the synthesized beam of the low frequency image. We aligned the restored images so that the apparent cores are located at the map origin. We rotated the images clockwise by $40^\circ$ (the jet axis is aligned with the horizontal axis of the map with this rotation) and used the regions separated from the apparent cores by more than the major axis of the convolving beam along the x-axis to avoid the optically thick core in the calculation. We did not use the counterjet because it is much weaker and its length differs more between frequencies than that of the jet (Figure~\ref{fig:images}). We computed the normalized cross-correlation coefficient between the images as
\begin{equation}
\rho_{xy} = \frac{\sum_{i=1}^n\sum_{j=1}^n(I_{\nu_1,ij} - \overline{I_{\nu_1}})(I_{\nu_2,ij} - \overline{I_{\nu_2}})}{\sqrt{\sum_{i=1}^n\sum_{j=1}^n(I_{\nu_1,ij} - \overline{I_{\nu_1}})^2\sum_{i=1}^n\sum_{j=1}^n(I_{\nu_2,ij} - \overline{I_{\nu_2}})^2}},
\end{equation}
where $n$ is the number of pixels in each direction, $I_{\nu_1,ij}$ and $I_{\nu_2,ij}$ are the intensities for the maps at frequencies $\nu_1$ and $\nu_2$ at $i$-th and $j$-th pixels along the x and y directions in the rotated maps, respectively, and $\overline{I_{\nu_1}}$ and $\overline{I_{\nu_2}}$ are the mean values of the intensities over the region analyzed. We shifted one of the images along the x and y axes by up to 80--160 pixels and searched for the amount of shift that gives us the maximum correlation coefficients. The maximum coefficients are larger than 0.97 in all the considered frequency pairs. We derived the uncertainties in the core-shifts by investigating the change in a radial spectral index profile in the optically thin jet region when introducing an additional shift to one of the pair images along the jet direction (Appendix~\ref{appendix:2dcc_error}).
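The shift search can be sketched as follows (a toy example on a synthetic Gaussian ``jet'' with integer-pixel shifts only, not the actual analysis code):

```python
import numpy as np

def norm_xcorr(a, b):
    """Normalized cross-correlation coefficient rho_xy of two equal-size maps."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def best_shift(img1, img2, max_shift=20):
    """Shift img2 by integer pixels and return the (dx, dy) maximizing rho_xy,
    as in the 2D cross-correlation core-shift method."""
    best = (-1.0, (0, 0))
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            rho = norm_xcorr(img1, np.roll(img2, (dy, dx), axis=(0, 1)))
            best = max(best, (rho, (dx, dy)))
    return best  # (max coefficient, shift in pixels)

# toy example: an elongated Gaussian "jet" displaced by (dy, dx) = (5, 3) pixels
y, x = np.mgrid[-40:40, -40:40]
jet = np.exp(-((x / 12.0) ** 2 + (y / 4.0) ** 2))
shifted = np.roll(jet, (5, 3), axis=(0, 1))
rho, (dx, dy) = best_shift(jet, shifted)
print(f"rho_max = {rho:.3f} at shift (dx, dy) = ({dx}, {dy})")  # recovers (-3, -5)
```

The actual analysis searches a much finer pixel grid (1/20 of the beam minor axis) over up to 80--160 pixels per axis.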
We did not include the results for the frequency pairs that show very different lengths of the jet; the maximum correlation coefficients are lower than 0.9 in those cases and are not considered robust. Also, the jet structure at 43 GHz appears to be quite different from those at other frequencies (Figure~\ref{fig:contours_shift}). This could be due to the long-term evolution of the jet brightness distribution over $\approx12$ years (Table~\ref{tab:data}), which prevents us from deriving a reliable core-shift at 43 GHz\footnote{We tried to obtain the core-shift between 22 and 43 GHz despite the apparent difference in the jet brightness distributions and obtained the maximum cross-correlation coefficient at shifts of the higher frequency map of zero and 0.08 mas along the jet longitudinal and transverse directions, respectively. This results in an unphysical spectral index map and is very different from the trend we consistently see for all the other frequency pairs.}. Thus, we decided to obtain the 43 GHz core position by extrapolating from the core positions constrained at lower frequencies. The core-shifts for different frequency pairs are summarized in Table~\ref{tab:coreshift}.
\begin{deluxetable}{cccc}
\tablecaption{Core-shifts between each pair of frequencies derived from the VLBA data \label{tab:coreshift}}
\tablewidth{0pt}
\tablehead{
\colhead{$\nu_1$} & \colhead{$\nu_2$} & \colhead{$\Delta r$} & \colhead{Angle} \\
(GHz) & (GHz) & (mas) & ($^\circ$)
}
\startdata
1.548 & 2.284 & 1.090 (0.11) $\pm$ 0.663 & $-50.0$ \\
1.548 & 4.980 & 3.367 (0.34) $\pm$ 0.991 & $-51.7$ \\
1.548 & 8.416 & 4.120 (0.42) $\pm$ 0.870 & $-48.4$ \\
2.284 & 4.980 & 2.081 (0.32) $\pm$ 0.843 & $-52.7$ \\
2.284 & 8.416 & 2.960 (0.45) $\pm$ 0.581 & $-47.8$ \\
2.284 & 15.256 & 3.067 (0.47) $\pm$ 0.714 & $-46.2$ \\
4.980 & 8.416 & 0.296 (0.10) $\pm$ 0.349 & $-38.7$ \\
4.980 & 15.256 & 0.477 (0.17) $\pm$ 0.340 & $-45.9$ \\
4.980 & 22.220 & 0.523 (0.18) $\pm$ 0.390 & $-44.3$ \\
8.416 & 15.256 & 0.238 (0.14) $\pm$ 0.171 & $-50.0$ \\
8.416 & 22.220 & 0.313 (0.18) $\pm$ 0.247 & $-45.2$ \\
15.256 & 22.220 & 0.104 (0.11) $\pm$ 0.079 & $-50.0$ \\
\enddata
\tablecomments{Magnitude and position angle of core-shift measured between each pair of frequencies, derived by performing two-dimensional cross-correlation of the optically thin jet emission in the VLBA images. The values in the parentheses are the ratios of the core-shift magnitudes to the major axes of the restoring beams.}
\end{deluxetable}
We illustrate the impact of image alignment on the spectral index distribution of the jets in Figure~\ref{fig:spix_example}. Before the alignment, the spectral index map between 1.5 and 5.0 GHz shows an optically thin region upstream of the core (on the counterjet side) and an oscillatory pattern in the downstream jet. These features disappear after the alignment, consistent with previous core-shift results on various sources (e.g., \citealt{CG2008}).
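A pixel-wise spectral index map between two aligned maps follows directly from the definition $I_\nu \propto \nu^\alpha$. The sketch below uses two tiny illustrative arrays rather than real images:

```python
import numpy as np

def spectral_index_map(I1, I2, nu1, nu2):
    """Pixel-wise spectral index alpha with I_nu ~ nu**alpha, computed
    only where both aligned maps are positive."""
    alpha = np.full(I1.shape, np.nan)
    ok = (I1 > 0) & (I2 > 0)
    alpha[ok] = np.log(I2[ok] / I1[ok]) / np.log(nu2 / nu1)
    return alpha

# A flat-spectrum pixel (alpha = 0) and an optically thin pixel (alpha = -0.7)
I1 = np.array([[1.0, 1.0]])
I2 = np.array([[1.0, (5.0 / 1.5) ** -0.7]])
alpha = spectral_index_map(I1, I2, 1.5, 5.0)
```

A misalignment of the maps directly distorts this ratio pixel by pixel, which is why the artifacts described above appear before the core-shift correction.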
\begin{figure}[t!]
\centering
\includegraphics[width = 0.49\textwidth]{figures/coreshift.pdf}
\caption{Core position as a function of frequency. The physical core positions relative to the 22 GHz core are derived from a least-squares fit to the observed core-shifts between many frequency pairs (Table~\ref{tab:coreshift}). The black solid line shows the best-fit power-law function to the data points and the best-fit parameters of the function are noted on the top right. \label{fig:coreshift}}
\end{figure}
\begin{deluxetable*}{cccccc}
\tablecaption{Best-fit core-positions \label{tab:fitted_coreshift}}
\tablewidth{0pt}
\tablehead{
& \colhead{1.548 GHz} & \colhead{2.284 GHz} & \colhead{4.980 GHz} & \colhead{8.416 GHz} & \colhead{15.256 GHz}
}
\startdata
$\Delta r_{\rm core}$ & $4.222\pm0.514$ & $3.117\pm0.381$ & $0.610\pm0.211$ & $0.315\pm0.141$ & $0.100\pm0.076$
\enddata
\tablecomments{Best-fit physical core positions relative to the 22 GHz core in units of mas.}
\end{deluxetable*}
Our core-shift estimates are derived after aligning the images at different frequencies, convolved with the same beam, based on the apparent core positions. Thus, we expect that the separations of the apparent cores from the physical cores would be similar for each frequency pair, and that the core-shift estimates represent the relative distances between the physical cores at different frequencies. We constrain the physical core position at each frequency with respect to the core at the reference frequency (assumed to be 22 GHz here) by using the core-shifts for different frequency pairs. We parametrized the relative position between the physical core at each frequency and the 22 GHz core and performed a least-squares fit, such that the best-fit core positions reproduce the observed core-shifts for all frequency pairs well. We present the best-fit core positions and their uncertainties in Table~\ref{tab:fitted_coreshift}.
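Once the 22 GHz position is fixed to zero, such a fit is linear in the core positions. The sketch below reproduces it with the core-shift magnitudes from Table~\ref{tab:coreshift}; the exact weighting of the published fit is not specified here, so the recovered positions only approximate Table~\ref{tab:fitted_coreshift}.

```python
import numpy as np

# Frequencies (GHz); 22.220 GHz is the reference, fixed at position 0.
freqs = [1.548, 2.284, 4.980, 8.416, 15.256, 22.220]
idx = {f: i for i, f in enumerate(freqs)}

# (nu1, nu2, core-shift magnitude, 1-sigma error) in mas, along the jet,
# taken from the core-shift table.
pairs = [
    (1.548, 2.284, 1.090, 0.663), (1.548, 4.980, 3.367, 0.991),
    (1.548, 8.416, 4.120, 0.870), (2.284, 4.980, 2.081, 0.843),
    (2.284, 8.416, 2.960, 0.581), (2.284, 15.256, 3.067, 0.714),
    (4.980, 8.416, 0.296, 0.349), (4.980, 15.256, 0.477, 0.340),
    (4.980, 22.220, 0.523, 0.390), (8.416, 15.256, 0.238, 0.171),
    (8.416, 22.220, 0.313, 0.247), (15.256, 22.220, 0.104, 0.079),
]

# Weighted linear least squares: shift(nu1, nu2) = p[nu1] - p[nu2],
# with the 22.220 GHz column dropped (its position is fixed to zero).
nfree = len(freqs) - 1
A = np.zeros((len(pairs), nfree))
b = np.zeros(len(pairs))
for k, (f1, f2, dr, sig) in enumerate(pairs):
    w = 1.0 / sig
    if idx[f1] < nfree:
        A[k, idx[f1]] = w
    if idx[f2] < nfree:
        A[k, idx[f2]] = -w
    b[k] = w * dr

p, *_ = np.linalg.lstsq(A, b, rcond=None)  # positions relative to 22 GHz
```

The recovered positions decrease monotonically with frequency and are close to, though not identical with, the tabulated best-fit values.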
In Figure~\ref{fig:coreshift}, we present the physical core positions relative to the 22 GHz core as a function of frequency. The core position offsets systematically decrease with increasing frequency. We fit a power-law function of $\Delta r_{\rm core} = a\nu^b + c$, where $\Delta r_{\rm core}$ is the core position offset and $\nu$ the frequency, and find the best-fit values of $a=8.60\pm1.35$, $b=-1.39\pm0.20$, and $c=-0.11\pm0.06$. This result allows us to infer the location of the jet base (by taking $\nu \rightarrow \infty$), which is at $\approx0.11$ mas upstream of the 22 GHz physical core.
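A sketch of such a power-law fit, using the best-fit core positions from Table~\ref{tab:fitted_coreshift}; the 22 GHz reference point and its nominal error are added here purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Best-fit core positions relative to the 22 GHz core (mas), from the text;
# the 22 GHz point and its error are illustrative assumptions.
nu = np.array([1.548, 2.284, 4.980, 8.416, 15.256, 22.220])
dr = np.array([4.222, 3.117, 0.610, 0.315, 0.100, 0.0])
err = np.array([0.514, 0.381, 0.211, 0.141, 0.076, 0.05])

def core_shift(nu, a, b, c):
    # Delta r_core = a * nu**b + c; the jet base (nu -> infinity) sits at c.
    return a * nu**b + c

popt, pcov = curve_fit(core_shift, nu, dr, sigma=err, p0=(8.0, -1.4, -0.1))
a, b, c = popt
```

The recovered parameters land near the quoted values ($a\approx8.6$, $b\approx-1.4$, $c\approx-0.1$), with $c$ giving the asymptotic jet base offset upstream of the 22 GHz core.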
We note that the observed core-shift is very large, reaching $\approx4.2$ mas at 1.5 GHz. A similarly large core-shift was observed in the nearby radio galaxy NGC 4261 \citep{Haga2015}. We illustrate the large core-shifts in NGC 315 in Figure~\ref{fig:contours_shift}. The images are registered with respect to the inferred jet base position by using the constraints on the apparent core positions (the physical core-shift plus the additional shift of the apparent core from the physical core, see Appendix~\ref{appendix:core_identification}). The jet emission shows several knotty (or re-brightened) regions, notably at $\approx3.5$, 5, 25, and 44 mas, and their positions are well aligned at different frequencies. In contrast, the apparent core positions change significantly with frequency. We also present the spectral index distributions between adjacent frequencies in colors on top of the contours. Interestingly, the spectral indices reach their maximum near the location of the inferred jet base, up to $\alpha\approx2.5$ for the lowest frequency pair, as expected for a synchrotron self-absorbed spectrum at low frequencies \citep{RL1979}. This observation is consistent with the expectation that AGN jet emission near the jet base is strongly absorbed by the synchrotron self-absorption process \citep[e.g.,][]{Konigl1981, Lobanov1998, Hirotani2005}.
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0\textwidth]{figures/contours_shift.pdf}
\caption{Contours show total intensity distributions at different frequencies convolved with the beam at the lowest frequency in each panel, shown on the bottom left. Colors show spectral index distributions between adjacent frequencies. For example, the color on top of the 1.548 GHz contour shows $\alpha$ between 1.548 and 2.284 GHz, convolved with the 1.548 GHz beam. All the maps are rotated clockwise by $40^\circ$ and the jet axis is aligned with the horizontal axis. The contours are shifted along the negative x-axis using the constraints on the apparent core positions with respect to the inferred jet base. Thus, the black dashed vertical line at $x=0$ corresponds to the inferred jet base position. Interestingly, $\alpha$ reaches its maximum near the inferred jet base location for all the frequency pairs, which suggests the presence of severe synchrotron self-absorption near the jet base, as expected for AGN jets \citep[e.g.,][]{Konigl1981}. The black solid vertical line marked as $z_b$ shows the location of the jet collimation break (Figure~\ref{fig:jet_radius}). There are a few re-brightened regions that appear as knots at $\approx-3.5, -5, -25, -44$ mas. The positions of these regions are consistent between the maps after the image registration, except at 43 GHz, which was observed $\approx12$ years earlier than the maps at other frequencies. This implies that there might be a long-term evolution of the jet brightness distribution. \label{fig:contours_shift}}
\end{figure*}
\subsection{Jet collimation profile}
\label{sec:collimation}
We derive the jet radius\footnote{The jet radius is assumed to be half the jet width.} as a function of jet distance as follows. We restored the CLEAN model at each frequency with a circular beam having the size of the major axis of the synthesized beam, which allows the effect of the restoring beam to be removed straightforwardly. We rotated the images clockwise by $40^\circ$, which aligns the jet ridge well with the x-axis of the maps (Figure~\ref{fig:contours_shift}). We obtained a transverse intensity profile (along the y-axis of the rotated maps) at each jet distance and fitted a Gaussian function to the profile. We subtracted the restoring beam full width at half maximum (FWHM) from the measured jet FWHM in quadrature to derive the intrinsic jet width. We regarded the jet width as robust only when (i) the amplitude of the fitted Gaussian function exceeds 15 times the off-source image rms-noise and (ii) the measured FWHM is larger than the restoring beam FWHM. We obtained the jet widths at distances separated from the apparent cores by more than the major axis beam sizes to avoid a potential complication originating from the convolution of the brightest core emission; we constrain the jet widths in the core regions by employing model fitting in the visibility and image domains (see below).
Although we derive the intrinsic jet width at each distance bin (with a size of the image pixel size), many of those measurements are not independent due to the finite beam size. We therefore binned the jet widths in distance with a bin size of half the major axis beam size. We took the median value of the widths in each bin as a representative jet width and assumed 1/10 of the major axis beam size as the uncertainty of the width\footnote{We derive the jet width only when the peak intensity of the transverse Gaussian profile exceeds 15 times the image rms-noise. Therefore, our assumed errors are larger than the nominal, approximated errors in the sizes of model components fitted to VLBI data, given by $\sigma_d=d/{\rm SNR}$, where $d$ is the component size and SNR the signal-to-noise ratio \citep[e.g.,][]{Fomalont1999, Lee2008}.}.
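The width measurement at a single distance bin reduces to a Gaussian fit plus a quadrature beam correction. A minimal sketch on a synthetic, noiseless transverse slice:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, y0, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amp * np.exp(-0.5 * ((y - y0) / sigma) ** 2)

def jet_width(y, profile, beam_fwhm, rms, snr_cut=15.0):
    """Fit a Gaussian to a transverse intensity slice and deconvolve the
    restoring beam in quadrature; return None when the cuts fail."""
    p0 = (profile.max(), y[np.argmax(profile)], beam_fwhm)
    (amp, y0, fwhm), _ = curve_fit(gaussian, y, profile, p0=p0)
    fwhm = abs(fwhm)
    # Keep only robust widths: amplitude > 15 x rms and width > beam FWHM.
    if amp < snr_cut * rms or fwhm <= beam_fwhm:
        return None
    return np.sqrt(fwhm**2 - beam_fwhm**2)

# Synthetic slice: intrinsic FWHM 2.0 mas convolved with a 1.0 mas beam
# gives a measured FWHM of sqrt(2**2 + 1**2) ~ 2.236 mas.
y = np.linspace(-10, 10, 201)
measured = gaussian(y, 1.0, 0.0, np.sqrt(2.0**2 + 1.0**2))
w = jet_width(y, measured, beam_fwhm=1.0, rms=0.01)
```

The quadrature subtraction recovers the intrinsic 2.0 mas width; in the real analysis the cuts (i) and (ii) discard slices where this deconvolution is not reliable.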
The fact that we have a good constraint on the core-shift of the jet indicates that we could also constrain the jet collimation profile on scales much smaller than the formal resolution limit. This can be achieved by assuming that the size of the core corresponds to the jet width and using the distance of the apparent core from the jet base (Section~\ref{sec:coreshift} and Appendix~\ref{appendix:core_identification}). This approach was taken in previous studies of jet collimation in M87 \citep{NA2013, Hada2013} and it turned out that the jet radii derived from the core sizes are indeed consistent with those from the transverse jet intensity profile analysis \citep{Hada2013, Nakamura2018}.
There are two widely used methods to extract the core size from VLBI data: one is to fit a two-dimensional elliptical Gaussian function in the core region of the image and the other is to fit a Gaussian ``component'' to the visibilities directly. We take both approaches in this study. We used the task JMFIT in AIPS for the former approach, which was used for a similar analysis in previous studies (e.g., \citealt{Hada2013, Hada2018}). JMFIT provides the intrinsic major and minor axes of the fitted Gaussians after subtracting the convolving beam sizes\footnote{We used the images convolved with the circular Gaussian beams, similar to the transverse intensity profile analysis.}. We used \texttt{modelfit} in Difmap for the latter approach and fitted a single elliptical Gaussian component. We found that the offsets of the fitted positions from the apparent core positions are small but not negligible at some frequencies for both the JMFIT and \texttt{modelfit} measurements; these offsets were taken into account when calculating the distance of the core components from the jet base. We calculated the FWHMs along the direction perpendicular to the jet axis and regarded them as the jet widths in the core regions. Both programs provide the statistical uncertainties of the fitted parameters. However, we found that the two measurements at some frequencies deviate from each other by more than the formal $1\sigma$ uncertainties, indicating that there might be systematic uncertainties that could not be captured by the fitting process. In those cases, we estimated systematic uncertainties which are assumed to be the same for both measurements at each frequency and which make the discrepancy between the measurements equal to the total $1\sigma$ uncertainty. The core widths from the two methods and their uncertainties are presented in Table~\ref{tab:data}.
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.85\textwidth]{figures/jet_width.pdf}
\caption{Jet radius as a function of de-projected jet distance from the black hole in units of $R_g$. Filled circles and filled triangles are jet radii obtained from the VLBA/HSA and VLA data, respectively, by analyzing transverse jet intensity profiles. Magenta diamonds and cyan squares are jet radii at the cores constrained by \texttt{modelfit} in Difmap on the visibility domain and JMFIT in AIPS on the image domain, respectively. The best-fit broken power-law function is shown with the black solid line. The jet morphology is semi-parabolic (with a slope of $0.58\pm0.05$ in the logarithmic space) and conical/hyperbolic (with a slope of $1.16\pm0.01$) inside and outside of the transition distance of $z_b=(1.1\pm0.2)\times10^5\ R_g$ indicated by the vertical dashed line. The locations of the sphere of influence of the black hole gravity ($R_{\rm SOI}$) and the Bondi radius ($R_{B}$) are indicated by the green and orange solid ticks on the upper y-axis, respectively. The colored ticks on the left y-axis show the $\rm FWHM/2$ of the synthesized beams along the direction transverse to the jet axis. They are comparable to the maximum observed jet radii except at 43 GHz, which may explain why the limb-brightened feature is observed only at that frequency. \label{fig:jet_radius}}
\end{figure*}
In Figure~\ref{fig:jet_radius}, we present the jet radius as a function of de-projected jet distance from the black hole. The jet distance obtained in our analysis is measured with respect to the apparent core. We add the distance between the apparent core and the inferred jet base position to derive the physical jet distance from the black hole (Section~\ref{sec:coreshift} and Appendix~\ref{appendix:core_identification}). We used the black hole mass of $M_{\rm BH} = 1.6\times10^9 M_\odot$ (Section~\ref{sec:intro}) to obtain distances in units of $R_g$ and the viewing angle of $\theta = 49.8^\circ$ (Section~\ref{sec:velocity}) for de-projection. We find that the jet radii from multiple frequencies at similar jet distances are consistent with each other within errors, which implies that we did not underestimate the jet width uncertainties. The jet radii measured from the HSA 43 GHz data are consistent with those from the VLBA data at other frequencies, suggesting that the jet collimation profile is not significantly affected by the possible long-term evolution of the jet brightness distribution (Figure~\ref{fig:contours_shift}). The geometrical flaring of the jet and the jet recollimation are observed at distances of $\approx10^8$--$10^9\ R_g$, as already shown in previous studies \citep{Canvin2005, LB2014}. We will investigate the origin of the recollimation in more detail in a forthcoming paper. Before the jet reaches the flaring region, it appears to expand following the same power-law function of jet shape, i.e., the same slope in the logarithmic space, and this trend continues down to mas-scales. However, the slope becomes flatter at a distance of $\approx10^5\ R_g$.
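The conversion from an angular jet distance to a de-projected distance in units of $R_g$ uses only the black hole mass, the viewing angle, and the source distance. In the sketch below, the distance value is an illustrative assumption (it is not quoted in this section):

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_sun = 1.989e30       # kg
pc = 3.086e16          # m

M_bh = 1.6e9 * M_sun           # black hole mass from the text
theta = np.radians(49.8)       # adopted viewing angle from the text
D = 70.0e6 * pc                # ASSUMED source distance (~70 Mpc), illustrative

R_g = G * M_bh / c**2          # gravitational radius in metres

def mas_to_Rg(z_mas):
    """De-project an angular jet distance (mas) and express it in R_g."""
    z_rad = z_mas * np.pi / (180.0 * 3600.0 * 1000.0)  # mas -> radians
    z_m = z_rad * D                                    # small-angle approx.
    return z_m / np.sin(theta) / R_g                   # de-project, rescale
```

With these assumed numbers, 1 mas of projected distance corresponds to several thousand $R_g$ de-projected, so the mas-scale maps indeed probe the $10^4$--$10^5\ R_g$ range where the collimation break is found.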
We fitted a broken power-law function to the observed jet radii. The function is given in the form of
\begin{eqnarray}
R = az + b &, & \quad z<z_b \nonumber \\
R = cz + (a - c)z_b + b &, & \quad z\geq z_b
\end{eqnarray}
in the logarithmic space, where $R$ and $z$ denote the jet radius and distance, respectively. This form guarantees that the two functions are connected at the break location at $z=z_b$. We did not use the VLA 1.4 GHz data for fitting due to the complex evolution of the jet geometry with distance. We considered the uncertainties in both the jet distance (from the core-shift uncertainties) and the jet radius in the fitting; the jet distance uncertainties are important only for the measurements from the core size analysis. We tested fitting with a single power-law function and obtained the reduced chi-square of $\chi^2_{\rm red}\approx9.9$, which is significantly larger than $\chi^2_{\rm red}\approx5.9$ obtained from the broken power-law fitting\footnote{We note that the large $\chi^2_{\rm red}$ is mostly contributed by the VLA 5 GHz data. We obtain $\chi^2_{\rm red}\approx0.95$ when we exclude the VLA data for calculation of $\chi^2_{\rm red}$. This is because the jet is well resolved on kpc-scales and shows oscillations in jet radius with respect to the global broken power-law function (Figure~\ref{fig:jet_radius}). These oscillations have amplitudes much larger than the uncertainties in the jet radii and may be associated with local over-expansions and over-contractions of the jet.}. We found that the data are described well by a semi-parabolic shape (with a power-law index of $0.58\pm0.05$) and a conical/hyperbolic shape (with a power-law index of $1.16\pm0.01$) inside and outside the transition distance at $z_b = (1.1\pm0.2)\times10^5\ R_g$, respectively. Therefore, we confirm that the ``jet collimation break'' exists in NGC 315 (see Section~\ref{dis:collimation} for more discussion).
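The broken power-law form above can be fitted directly in log space. The sketch below recovers the parameters from synthetic data generated with the best-fit slopes and break location quoted in the text; the propagation of distance uncertainties used in the actual fit is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(logz, a, b, c, logzb):
    """Continuous broken power law in log space: slope a below the break
    at log z_b and slope c above it (same form as in the text)."""
    return np.where(logz < logzb,
                    a * logz + b,
                    c * logz + (a - c) * logzb + b)

# Synthetic jet radii: slope 0.58 inside and 1.16 outside a break at
# z_b = 1.1e5 R_g, plus small Gaussian scatter.
rng = np.random.default_rng(0)
logz = np.linspace(2, 9, 80)
logr = broken_power_law(logz, 0.58, -1.0, 1.16, np.log10(1.1e5))
logr += rng.normal(0, 0.02, logz.size)

popt, _ = curve_fit(broken_power_law, logz, logr, p0=(0.5, -1.0, 1.2, 5.0))
```

The continuity constraint is built into the functional form, so the fit has only four free parameters rather than five for two independent power laws.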
\subsection{Limb-brightening}
\label{sec:limb_brightening}
The jet structure appears to consist of a single ridge on pc-scales (Figure~\ref{fig:images}) and we found that a single Gaussian function can describe the transverse intensity profiles well in most cases. However, we found an indication of resolved jet transverse structure at 43 GHz. This is because the 43 GHz data include the Effelsberg 100m telescope, which provides very long baselines in the direction nearly perpendicular to the jet axis (the minor axis beam size is 0.17 mas). In the left panel of Figure~\ref{fig:doubleridge}, we show the 43 GHz image convolved with a circular beam with a size of the minor axis of the synthesized beam. The jet transverse intensity profiles could not be described well by a single Gaussian function at distances larger than $\approx0.8$ mas from the apparent core. We fitted double Gaussian functions in this distance range and obtained the edge-to-edge jet widths (the distances between the outer edges of the FWHMs of the two ridges). In the right panel of Figure~\ref{fig:doubleridge}, we compare them with the jet widths derived from single Gaussian fits to the 43 GHz image convolved with a circular beam with the size of the major axis of the synthesized beam (as done for the VLBA images at $\lesssim22$ GHz). The jet radii from the two estimates are in good agreement with each other and with the radii at 22 GHz, which suggests that the observed limb-brightening in this region may be robust.
\begin{figure*}[t!]
\centering
\includegraphics[trim=0mm -5mm 0mm 0mm, width = 0.6\textwidth]{figures/contour_doubleridge.pdf}
\includegraphics[width = 0.39\textwidth]{figures/doubleridge_width.pdf}
\caption{Left: contour of total intensity observed with the HSA at 43 GHz, convolved with a circular beam with the size of the minor axis of the synthesized beam shown on the bottom left. The image is rotated clockwise by $40^\circ$. The blue lines show the positions of the peaks of the Gaussian functions fitted to the transverse intensity profiles. At distances $\gtrsim0.8$ mas, double Gaussian functions are needed to fit the data and there are two ridge lines in that region. Right: comparison of jet radii obtained from single Gaussian fitting (magenta squares) and double Gaussian fitting (blue stars) for the transverse intensity profile analysis of the HSA 43 GHz data. The jet radii obtained from single Gaussian fitting at 22 GHz are shown with the green diamonds for reference. For single (double) Gaussian fitting, the map convolved with the major (minor) axis circular beam is used. For the double Gaussian fitting, the edge-to-edge widths divided by two are measured (see text). Grey circles show the data points from the other frequencies and methods presented in Figure~\ref{fig:jet_radius}. \label{fig:doubleridge}}
\end{figure*}
We show half of the beam FWHM sizes along the transverse jet direction for each VLBA/HSA dataset on the left y-axis of Figure~\ref{fig:jet_radius}. We find that the maximum jet radii are comparable to the corresponding angular resolutions at all frequencies except 43 GHz. Therefore, there is a possibility that the jet of NGC 315 is intrinsically limb-brightened on pc-scales, similar to M87, but this feature could not be resolved in the previous and our VLBA observations. We will test this possibility with future observations with the HSA at multiple frequencies.
\subsection{Jet velocity field}
\label{sec:velocity}
As both the jet and counterjet were detected at all VLBA/HSA frequencies, we could infer the jet velocity field on pc-scales by assuming that they are intrinsically identical and that the jet is brighter than the counterjet due to relativistic aberration. This approach was adopted to derive the jet velocity field, viewing angle, and other quantities of NGC 315 on kpc-scales \citep{Canvin2005, LB2014}; these studies could break the degeneracy between the velocity and viewing angle by using both the total intensity and linear polarization data. The jet velocity field was also derived on pc-scales by combining the jet-to-counterjet brightness ratios with the jet kinematic results from multi-epoch monitoring observations \citep{Cotton1999}.
The jet-to-counterjet intensity ratio is related to $\beta$ and the viewing angle ($\theta$) via
\begin{equation}
\label{eq:ratio}
R\equiv\frac{I_{\rm jet}}{I_{\rm cjet}} = \left(\frac{1+\beta\cos\theta}{1-\beta\cos\theta}\right)^{2-\alpha},
\end{equation}
\noindent where $I_{\rm jet}$ and $I_{\rm cjet}$ are the intensities of the jet and counterjet at the same jet distance, respectively, and $\alpha$ is the spectral index defined as $I_\nu \propto \nu^\alpha$. We define the origin of the jets as $\approx0.11$ mas upstream of the 22 GHz physical core position from our core-shift result (Section~\ref{sec:coreshift}), from which we calculate the distance for the jet and counterjet. We obtained $R$ for the distances separated from the apparent cores by more than the major axes of the synthesized beams to avoid possible contamination from the convolution of the bright core emission.
For each $R$ derived at each frequency, we need a corresponding spectral index $\alpha$ to derive $\beta$. We obtained $\alpha$ by using an adjacent (higher) frequency map after the image registration. For example, for each $R$ measured at 1.5 GHz, we derived a corresponding $\alpha$ from the 1.5 and 2.3 GHz maps. This approach would minimize the distortions in $\alpha$ from different synthesized beams at different frequencies.
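Given $\theta$, Equation~\ref{eq:ratio} can be inverted analytically for $\beta$: writing $q = R^{1/(2-\alpha)}$, one has $\beta\cos\theta = (q-1)/(q+1)$. A short sketch with a round-trip check:

```python
import numpy as np

def beta_from_ratio(R, alpha, theta_deg):
    """Invert R = ((1 + beta cos(theta)) / (1 - beta cos(theta)))**(2 - alpha)
    for the intrinsic speed beta, given the viewing angle theta."""
    q = R ** (1.0 / (2.0 - alpha))
    return (q - 1.0) / ((q + 1.0) * np.cos(np.radians(theta_deg)))

# Round-trip check with theta = 49.8 deg (the adopted viewing angle) and
# an illustrative alpha and beta.
theta, alpha, beta_true = 49.8, -0.5, 0.6
cos_t = np.cos(np.radians(theta))
R = ((1 + beta_true * cos_t) / (1 - beta_true * cos_t)) ** (2 - alpha)
beta = beta_from_ratio(R, alpha, theta)
```

Note that the inversion yields $\beta\cos\theta$ directly, which is why a measured $R$ alone cannot separate $\beta$ from $\theta$ without additional information.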
Two approaches are available to disentangle $\beta$ and $\theta$ (Equation~\ref{eq:ratio}). One is to obtain the apparent jet speed from VLBI monitoring observations, which is also a function of $\beta$ and $\theta$. Combining the apparent speed with the measured $R$ and $\alpha$ at a similar distance, we can solve for $\beta$ and $\theta$. The other approach is to use the jet viewing angle constrained from modelling of the kpc-scale jets \citep{LB2014}, assuming that the viewing angles on pc and kpc scales are the same.
It is known that jet kinematic analysis for radio galaxies is generally much more difficult than for blazars. The main cause is presumably that the jets of radio galaxies have smooth brightness distributions over distance, which makes it difficult to identify the same jet regions (or ``brightness patterns'') in different epochs. This issue was addressed in our previous study of the jet kinematics of M87 \citep{Park2019a}. We argued that a reliable jet kinematic analysis is possible only when high-resolution and high-cadence monitoring data with similar uv-coverages in the sampled epochs\footnote{Significantly different uv-coverages in different epochs can result in artificial jet motions. See also \cite{Walker2018} for a related discussion.} are available. The M87 jet also has re-brightened regions\footnote{These are regions of local brightness enhancement. The jets of nearby radio galaxies usually show gradually decreasing intensity with increasing distance except in the re-brightened regions.} at several locations, which make the jet appear stationary when observed at low resolution and low cadence.
A similar issue could exist for the jet kinematics of NGC 315. Using VLBA monitoring data spanning nearly 20 years at 15 GHz with an average interval of more than one year, \cite{Lister2019} found apparent speeds of $\beta_{\rm app}\lesssim0.05c$, much slower than those of \cite{Cotton1999} at a similar distance range. The significant difference between the jet kinematic results of different studies could originate from the low cadence ($\gtrsim1$ year) of the previous observations and the re-brightened regions possibly existing in the jet of NGC 315. Motivated by these controversial results, we have performed dense monitoring observations with the KVN and VERA array (KaVA, \citealt{Niinuma2014, Oh2015, Wajima2016, Asada2017, Cho2017, Hada2017, An2018, Lee2019, Zhao2019}). We present the details of our observations, data reduction, kinematic analysis, and results in Appendix~\ref{appendix}.
In summary, we found nearly stationary motions, with observed speeds consistent with zero within 1--2$\sigma$, at distances less than $\approx5$ mas from the core, while a fast outward motion of $\beta_{\rm app} = (1.85\pm0.44)c$ is observed at a distance of $\approx8$ mas. The jet viewing angle inferred by combining the observed apparent speed with the jet-to-counterjet brightness ratio at a similar distance is $\theta = 52.8\pm8.0^\circ$, in good agreement with the kpc-scale viewing angle constraint \citep{LB2014}. We found that the stationary motions at $\lesssim5$ mas may be associated with the re-brightened regions, which demonstrates the difficulty of obtaining a robust velocity field from a jet kinematic analysis for our source, similar to M87.
Therefore, we conclude that obtaining the jet velocity field from the jet-to-counterjet intensity ratio, by using the kpc-scale jet viewing angle of $\theta=49.8^\circ$ \citep{LB2014}, is more robust. Assuming the same viewing angle on pc and kpc scales is reasonable because the jet morphology appears to be very straight until the jet reaches a few tens to hundreds of kpc \citep[][see also Figure~\ref{fig:images}]{Laing2006} and the pc-scale viewing angle constrained from our jet kinematic analysis is indeed consistent with the kpc-scale one, although it has a substantially larger uncertainty.
A careful analysis of the uncertainties in $R$ and $\alpha$ is necessary for a robust estimation of $\beta$. We found that the largest uncertainties in $R$ and $\alpha$ originate from the uncertainties in the core-shift. We adopted a Monte-Carlo approach to take these uncertainties into account. We drew 1,000 random core-shifts along the jet direction for each frequency from normal distributions with means and standard deviations determined from the best-fit apparent core positions and their $1\sigma$ uncertainties (Table~\ref{tab:fitted_coreshift} and Table~\ref{tab:app_coreshift}). We obtained the corresponding 1,000 realizations of $R$ and $\alpha$ as functions of projected jet distance at each frequency. The intensities $I_{\rm jet}$ and $I_{\rm cjet}$, which comprise $R$, are obtained by fitting a single Gaussian function to the transverse jet intensity profile at each distance (Section~\ref{sec:collimation}). We obtained a representative $\alpha$ for each jet distance from the intensity-weighted average of the spectral indices along the transverse jet direction. We binned $R$ and $\alpha$ from the 1,000 realizations in distance with a bin size of half the major axis of the synthesized beam and obtained a representative value and $1\sigma$ uncertainty for each bin from the median and standard deviation of the data points within the bin.
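The Monte-Carlo propagation can be sketched as follows. The intensity profiles and the core-shift scatter below are hypothetical stand-ins for the measured transverse-fit intensities and the fitted core-shift uncertainties:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D intensity profiles of the jet and counterjet vs distance
# (mas), standing in for the transverse Gaussian-fit intensities.
x = np.linspace(0.5, 20, 200)
jet = 10.0 * x**-0.8
cjet = 4.0 * x**-0.8

core_shift_sigma = 0.3   # illustrative 1-sigma core-shift uncertainty (mas)
n_mc = 1000
ratios = np.empty((n_mc, x.size))
for i in range(n_mc):
    # Shift the distance scale of the counterjet profile by a random
    # core-shift draw and re-evaluate the ratio at the same distances.
    dx = rng.normal(0.0, core_shift_sigma)
    ratios[i] = jet / np.interp(x, x + dx, cjet)

R_med = np.median(ratios, axis=0)   # representative ratio per distance
R_err = np.std(ratios, axis=0)      # 1-sigma spread from the MC draws
```

Binning the realizations in distance (as in the actual analysis) then yields a representative value and its $1\sigma$ uncertainty per bin; the same machinery applies to $\alpha$.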
\begin{figure}[t!]
\centering
\includegraphics[width = 0.49\textwidth]{figures/bp243_properties_binned.pdf}
\caption{Jet-to-counterjet intensity ratio (top), spectral index (middle), and jet speed in units of the speed of light (bottom) as functions of apparent (projected) distance from the black hole. \label{fig:properties_binned}}
\end{figure}
We present $R$, $\alpha$, and $\beta$ as functions of apparent (projected) distance from the black hole in Figure~\ref{fig:properties_binned}. The brightness ratios gradually increase at distances from $\approx0.4$ to $\approx10$ mas and then decrease at larger distances. The spectral indices are nearly constant over distance. Thus, the derived jet speeds follow the same trend as the jet brightness ratios: gradual acceleration and deceleration inside and outside the distance of $\approx10$ mas, respectively. Our conclusion of jet acceleration and deceleration relies on $\beta$ at the shortest and longest distance bins, where data from only a single frequency are available. We note that the 1.5 GHz data at the longest distance bin have an SNR of $\approx 7$ for the counterjet intensity, while the other data points have SNRs larger than 10, indicating that these measurements are robust. The 43 GHz data also have large enough SNRs ($\gtrsim10$), but the large difference in observing epochs between 43 GHz and the other frequencies could cause additional uncertainty. We address this issue in Appendix~\ref{appendix:beta} using the VLBA 43 GHz data, which were observed nearly simultaneously with the VLBA data at other frequencies but not used for our main analysis due to the limited data quality (Section~\ref{sec:data}). We conclude that $\beta$ at 43 GHz in Figure~\ref{fig:properties_binned} is not underestimated, implying that the NGC 315 jets do accelerate on this scale.
We note that we could not find a significant difference in the spectral indices between the jet and counterjet at the same jet distance in the regions where $R$ and $\alpha$ were measured, i.e., separated from the apparent core by more than one beam size. Therefore, we assume that the effects of possible free-free absorption by ionized material near the central engine on the velocity field derivation are insignificant and that the observed core-shifts are dominated by synchrotron self-absorption. Strong indications of significant free-free absorption are the presence of an ``emission gap'' between the jet and counterjet at low frequencies (e.g., \citealt{Walker2000, Kameno2001, Baczko2019}) and the cores of the jet and counterjet shifting towards each other with increasing frequency (e.g., \citealt{Haga2015}). We could find neither of these indications for NGC 315 (Figure~\ref{fig:contours_shift}).
\section{Discussion}
\label{sec:discussion}
\subsection{Jet Collimation}
\label{dis:collimation}
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.75\textwidth]{figures/jet_width_comparison.pdf}
\caption{Jet radius as a function of de-projected jet distance from the black hole in units of $R_g$ for M87 \citep[blue, ][]{AN2012, Doeleman2012, Hada2013, NA2013, Akiyama2015, Nakamura2018}, NGC 6251 \citep[green, ][]{Tseng2016, Nakamura2018}, and NGC 315 (magenta). The dotted vertical lines show the locations of the jet collimation breaks. The dark (light) grey area shows the outermost poloidal field line anchored to the equator of the event horizon obtained from the FFE solution for $\kappa=0.75$ ($\kappa=1$) for the black hole spin range of 0.5--0.99, which corresponds to $z\propto R^{1.6}$ ($z\propto R^2$) asymptotically. The filled black region on the bottom left denotes the event horizon, and the shaded region shows the ergosphere for the spin parameter a = 0.99 (see \citealt{Nakamura2018} for more details). \label{fig:jet_width_comparison}}
\end{figure*}
In Figure~\ref{fig:jet_width_comparison}, we compare the jet radius of NGC 315 as a function of distance with the nearby FR I radio galaxies M87 \citep{AN2012, Doeleman2012, NA2013, Hada2013, Hada2016, Akiyama2015, Nakamura2018} and NGC 6251 \citep{Tseng2016}. We also show the steady axisymmetric force-free electrodynamic (FFE) solution \citep{Narayan2007, Tchekhovskoy2008} for the outermost poloidal field line anchored to the black hole event horizon on the equatorial plane with $\kappa=0.75$\footnote{$\kappa$ is related to the radial power-law index in the poloidal flux function of the approximate FFE solution, which describes the asymptotic shape of the field line (see \citealt{Narayan2007, Tchekhovskoy2008, Nakamura2018} for more details).}, which describes the observed parabolic jet collimation profiles of M87 and NGC 6251 well \citep{Nakamura2018}. The jet collimation profiles of NGC 315, NGC 6251, and M87 in the parabolic regions (before the jet collimation breaks) are remarkably similar, although the locations of the breaks are different.
Based on GRMHD simulations, \cite{Nakamura2018} showed that AGN jets may be collimated on scales of $\lesssim100\ R_g$ by the pressure of winds, which are non-relativistic, moderately magnetized gas outflows launched from the accretion flows \citep{Sadowski2013, Yuan2015}. The boundary shape between the jet and wind is consistent with the observed jet collimation profile of M87 \citep{AN2012, NA2013, Hada2013} and with the FFE solution for the outermost poloidal field line anchored to the event horizon on the equatorial plane. This result was confirmed on scales down to $\approx10^5 \ R_g$ by recent GRMHD simulations \citep{Chatterjee2019}. Also, from Faraday rotation observations of the jet collimation region in M87, \cite{Park2019b} showed that the inferred pressure profile of the external confining medium is flat enough to collimate the jet \citep{Komissarov2009}. Given that the parabolic jet shape of NGC 315 is consistent with that of the M87 jet (Figure~\ref{fig:jet_width_comparison}), a similar scenario can be applied to the NGC 315 jet.
However, there are two major differences in the jet radius profiles between NGC 315 and M87. One is the location of the jet collimation break and the other is the existence of a recollimation feature. The jet collimation break in M87 was suggested to occur near the Bondi radius \citep[$R_B$\xspace,][]{AN2012, NA2013, Nakamura2018}, within which the dynamics of the gas is thought to be governed by the gravity of the black hole. We infer the Bondi radius from the temperature of $\approx0.44$ keV of the X-ray emitting gas in the core of NGC 315 (within 1 arcsec, \citealt{Worrall2007}), which gives $R_B \approx 1.5\times10^6\ R_g$\footnote{1 arcsec corresponds to a physical scale of $\approx4.6\times10^6\ R_g$, and the estimate of $R_B$\xspace is based on the assumption that the temperature is the same between $R_B$\xspace and 1 arcsec. A flat temperature profile is generally expected in the cool cores of elliptical galaxies \citep[e.g.,][]{Hudson2010, Werner2019} including NGC 315 \citep{Sun2009}, but the temperature may not be exactly the same over the considered distance range \citep[e.g.,][]{Gaspari2013}, which is a source of uncertainty in the $R_B$\xspace estimate. See \cite{Tseng2016} for a related discussion.}. We can also infer the sphere of influence of the black hole gravity via $R_{\rm SOI} \equiv GM/\sigma_v^2$ \citep{Peebles1972}, where $\sigma_v$ is the stellar velocity dispersion of the host bulge measured near the black hole. \cite{Ene2020} obtained $\sigma_v\approx350\ {\rm km\ s^{-1}}$ at a radius of $\approx0.1$ arcsec in NGC 315, which gives $R_{\rm SOI} \approx 7.3\times10^5\ R_g$. These estimates suggest that the location of the jet collimation break in NGC 315, $z_b \approx (1.1\pm0.2)\times10^5\ R_g$, is an order of magnitude smaller than $R_B$\xspace and $R_{\rm SOI}$\xspace, unlike M87\footnote{We note that $R_B$\xspace and $R_{\rm SOI}$\xspace scale with the black hole mass, while $z_b$ does not. 
This means that our conclusion that $z_b$ is much smaller than $R_B$\xspace and $R_{\rm SOI}$\xspace may depend on the accuracy of the black hole mass we used, which is based on the $M_{\rm BH}$-$\sigma_v$ relation. Black hole mass estimates based on this relation are believed to be uncertain typically by a factor of two \citep[e.g.,][]{KH2013}. Since it is thus unlikely that the black hole mass is overestimated by an order of magnitude, our conclusion should be robust against the mass uncertainty.}.
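Both characteristic radii quoted above can be reproduced with a few lines of arithmetic. The sketch below (Python; the mean molecular weight $\mu=0.6$, the adiabatic index $\gamma=5/3$, and the convention $R_B = 2GM/c_s^2$ are our assumptions, not stated in the text) expresses both radii in units of $R_g = GM/c^2$, so the black hole mass cancels:

```python
# Characteristic radii in units of R_g = G M / c^2 (the black hole mass cancels).
# Assumptions (ours, not from the text): mean molecular weight mu = 0.6,
# adiabatic index gamma = 5/3, and the convention R_B = 2 G M / c_s^2.

C_KM_S = 299_792.458     # speed of light [km/s]
M_P_C2_KEV = 938_272.0   # proton rest energy [keV]

def bondi_radius_rg(kT_keV, mu=0.6, gamma=5.0 / 3.0):
    """R_B / R_g for an ideal gas with sound speed c_s^2 = gamma kT / (mu m_p)."""
    cs2_over_c2 = gamma * kT_keV / (mu * M_P_C2_KEV)
    return 2.0 / cs2_over_c2

def soi_radius_rg(sigma_v_km_s):
    """R_SOI / R_g = (c / sigma_v)^2, from R_SOI = G M / sigma_v^2."""
    return (C_KM_S / sigma_v_km_s) ** 2

print(f"R_B   ~ {bondi_radius_rg(0.44):.2e} R_g")  # ~1.5e6 R_g for kT = 0.44 keV
print(f"R_SOI ~ {soi_radius_rg(350.0):.2e} R_g")   # ~7.3e5 R_g for sigma_v = 350 km/s
```

Both numbers land within a few percent of the values quoted in the text, so the collimation break at $z_b \approx 1.1\times10^5\ R_g$ is indeed an order of magnitude inside both radii.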
Also, the M87 jet has a dip in the jet width near the jet geometry transition region, at the location of a jet feature known as HST-1 \citep{AN2012, NA2013, Nakamura2018}. This feature is associated with the strong multiwavelength flare in 2005, which extended from X-rays to TeV $\gamma$-rays (e.g., \citealt{Aharonian2006, Cheung2007, Harris2009}), shows both superluminal and quasi-stationary knots \citep[e.g.,][]{Biretta1999, Cheung2007, Giroletti2012, Nakamura2010, NM2014}, and shows enhanced linearly polarized emission and Faraday rotation measure compared with the neighboring inner and outer jets \citep{Chen2011, Park2019b}. HST-1 has been suggested to be a recollimation shock \citep{Stawarz2006, BL2009, LG2017}, and the inferred jet pressure there is orders of magnitude higher than that of the surrounding medium \citep[e.g.,][]{AN2012}. Thus, \cite{AN2012} suggested that the M87 jet can expand conically into the interstellar medium, which has a nearly flat pressure profile with distance \citep[e.g.,][]{Russell2015, Russell2018}, because of the high jet internal pressure caused by the recollimation shock. However, neither a recollimation feature nor linear polarization is detected at the jet collimation break point in NGC 315.
Therefore, different mechanisms are needed to explain the geometrical transition of NGC 315. If the jet is collimated by the winds from hot accretion flows as suggested above, the collimation break location being an order of magnitude smaller than $R_B$\xspace and $R_{\rm SOI}$\xspace may indicate that the winds may not extend out to the Bondi radius. It is yet unclear how far the winds can reach from the black hole \citep[e.g.,][]{Yuan2015}; this may depend on the global geometry of the hot accretion flows \citep{Chatterjee2019} and on the gravitational potential of the nuclear star clusters \citep{Bu2016a, Bu2016b}. There is growing evidence that AGN jet collimation breaks do not necessarily occur at the Bondi radius. The break locations appear to be smaller than the Bondi radii (or $R_{\rm SOI}$\xspace) in NGC 6251, NGC 4261, and NGC 1052 \citep{Tseng2016, Nakahara2018, Nakahara2020}, while the break possibly lies much farther out in Cygnus A\footnote{No transition from a parabolic to conical geometry was observed at distances $\lesssim10^9\ R_g$.} \citep{Nakahara2019}.
\begin{figure}[t!]
\centering
\includegraphics[width = 0.47\textwidth]{figures/Worrall2007.pdf}
\caption{Pressure of the X-ray emitting gas in NGC 315 as a function of distance in units of $R_g$, read off from Figure 13 of \cite{Worrall2007}. The black dot is the pressure inferred from the thermal component in the X-ray spectral fit to the core, while the black solid curve is obtained from the best-fit $\beta$ model, which describes the observed X-ray surface brightness as being proportional to $[1 + (\theta/\theta_{\rm CX})^2]^{0.5-3\beta}$, where $\theta$ is the angular radius and $\theta_{\rm CX}$ is the angular core radius \citep[e.g.,][]{CF1978, BW1993}. This model allows one to infer the radial pressure profile of the X-ray emitting gas \citep{BW1993}, which is $P \propto z^{-1.5}$ for NGC 315 \citep{Worrall2007}. \label{fig:worrall}}
\end{figure}
How the jet in NGC 315 expands in a conical/hyperbolic shape after the collimation break also seems puzzling. Two scenarios have been broadly considered to explain the observed conical/hyperbolic expansions. One is that the pressure of an external confining medium decreases with distance and the asymptotic jet shape becomes conical \citep[e.g.,][]{BL1994, Zakamska2008, Komissarov2009, Lyubarsky2009, Vlahakis2015}. \cite{LB2014} indeed showed that the jets of many nearby radio galaxies, including NGC 315, expand rapidly in conical/hyperbolic shapes on kpc-scales. These expansions occur in regions of steeply falling external pressure. However, the jet geometry transition of NGC 315 occurs at a shorter distance, where the pressure gradient is expected to be much flatter. We present the radial pressure profile of the X-ray emitting hot gas in NGC 315 estimated from Chandra observations \citep{Worrall2007} in Figure~\ref{fig:worrall}. The pressure decreases with distance as $P_{\rm ext}\propto z^{-1.5}$ in the regions where the jet expands conically at $z\gtrsim10^7\ R_g$ (Figure~\ref{fig:jet_radius}). However, the pressure profile becomes flatter in the inner region at $z \lesssim 10^7\ R_g$, similar to M87 \citep[e.g.,][]{Russell2015, Russell2018} and NGC 6251 \citep{Evans2005}. This indicates that the NGC 315 jet maintains its conical/hyperbolic shape over a large range of distances ($z \approx10^5$--$10^8\ R_g$) regardless of the change in the external medium's pressure profile within the same region.
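The link between the $\beta$-model fit and the quoted $P \propto z^{-1.5}$ outer slope can be checked symbolically. A sketch (assuming an isothermal gas, so $P \propto n \propto [1+(r/r_c)^2]^{-3\beta/2}$; the isothermal assumption is ours):

```python
import sympy as sp

r, r_c, beta = sp.symbols("r r_c beta", positive=True)

# Isothermal beta-model pressure: P ~ n ~ [1 + (r/r_c)^2]^(-3*beta/2)
P = (1 + (r / r_c) ** 2) ** (-3 * beta / 2)

# Logarithmic slope d ln P / d ln r = r P'/P
slope = sp.simplify(r * sp.diff(P, r) / P)

# Asymptotically the slope tends to -3*beta, so the observed outer slope of
# -1.5 corresponds to beta = 1/2 ...
assert sp.limit(slope, r, sp.oo) == -3 * beta
# ... while the profile flattens (slope -> 0) toward the center, as in Figure 13
# of Worrall et al. (2007).
print(sp.limit(slope.subs(beta, sp.Rational(1, 2)), r, 0))  # 0
```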
Another explanation is that the jet becomes overpressured due to a recollimation shock occurring near the jet geometry transition point and can then freely expand into the interstellar medium, which has a nearly flat pressure profile. This was applied to the M87 jet \citep{AN2012}, but the absence of a recollimation feature or of significantly enhanced linearly polarized emission at the jet collimation break site in NGC 315 makes it difficult to apply this scenario. A similar case was seen for NGC 6251 \citep{Tseng2016}, and the authors suggested that in-situ energy dissipation, converting jet bulk kinetic energy into internal energy, may take place. The increased internal energy may be responsible for the free jet expansion into an external medium with a flat pressure profile. We found that the bulk jet speeds systematically decrease with distance right after the jet collimation break (Section~\ref{dis:acceleration}), which makes this explanation plausible.
\subsection{Jet Acceleration}
\label{dis:acceleration}
In Figure~\ref{fig:fourvel}, we present $\Gamma\beta$, where $\Gamma \equiv 1 / \sqrt{1 - \beta^2}$ is the bulk Lorentz factor, derived from the jet-to-counterjet brightness ratio analysis (Section~\ref{sec:velocity}), as a function of de-projected distance from the black hole. We obtained $\beta>1$ at the innermost distance bin at 1.5 GHz (Figure~\ref{fig:properties_binned}), which is unphysical and due to the measurement uncertainty, so we cannot convert this data point into $\Gamma$. We instead plot a lower limit by taking the $\beta-3\sigma$ value for this data point, where $\sigma$ is the uncertainty in $\beta$. We also excluded data points with uncertainties larger than 90\% of their values. We fitted a simple power law in $\Gamma$ to the data points at distances smaller than the jet collimation break position ($z_b$) and obtained $\Gamma\propto z^{0.30\pm0.04}$. We overplot $\Gamma\beta$ for M87 from previous studies compiled in \cite{Park2019a}, together with their best-fit power-law function $\Gamma\propto z^{0.16\pm0.01}$ in the jet acceleration zone.
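The conversion from the measured speed $\beta$ to the proper velocity $\Gamma\beta$ plotted in Figure~\ref{fig:fourvel}, including the $3\sigma$ lower-limit treatment for the unphysical $\beta>1$ point, amounts to the following (a sketch; the function and variable names are ours):

```python
import math

def proper_velocity(beta, sigma=None):
    """Return Gamma*beta; for an unphysical beta >= 1, fall back to a
    lower limit computed from beta - 3*sigma (as done for the 1.5 GHz point)."""
    if beta >= 1.0:
        if sigma is None:
            raise ValueError("beta >= 1 needs an uncertainty for the 3-sigma limit")
        beta = beta - 3.0 * sigma   # lower limit on beta
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * beta

print(proper_velocity(0.6))               # Gamma = 1.25 -> Gamma*beta = 0.75
print(proper_velocity(1.05, sigma=0.05))  # treated as a lower limit at beta = 0.90
```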
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.75\textwidth]{figures/bp243_fourvel.pdf}
\caption{$\Gamma\beta$ as a function of de-projected jet distance from the black hole in units of $R_g$, constrained by the jet-to-counterjet brightness ratio analysis (Section~\ref{sec:velocity}). The lower limit of the data point having $\beta\ge1$ is indicated. The grey small circles are the results for M87 obtained in previous studies \citep{Biretta1995, Biretta1999, Cheung2007, Ly2007, Giroletti2012, Meyer2013, Asada2014, Hada2015, Hada2016, Hada2017, Mertens2016, Kim2018, Walker2018, Park2019a}, which were compiled in \cite{Park2019a}. The dashed vertical lines show the locations of the jet collimation breaks. The magenta dashed line is the best-fit power-law function to the NGC 315 data points before the jet collimation break, which is $\Gamma\propto z^{0.30\pm0.04}$. The best-fit jet acceleration profile obtained for M87 by \cite{Park2019a} is presented with the grey dashed line. The jet velocity field of NGC 315 constrained on kpc-scales by \cite{LB2014} for the jet on-axis and edge are shown by the salmon solid and dashed lines, respectively. \label{fig:fourvel}}
\end{figure*}
The MHD jet acceleration model predicts that jets can be efficiently accelerated to relativistic speeds when the jets are collimated. More specifically, efficient jet acceleration occurs when the inner poloidal magnetic field lines close to the jet axis are collimated more than the outer field lines, which is called the "differential bunching/collimation" of the poloidal field lines \citep[also known as the "magnetic nozzle effect", e.g.,][]{Li1992, BL1994, Vlahakis2004, Vlahakis2015, Komissarov2009, Tchekhovskoy2009}. This model is characterized by an acceleration zone spanning a large distance range, which is thought to coincide with the jet collimation zone \citep[e.g.,][]{VK2004, Lyubarsky2009}. It is remarkable that the gradual jet acceleration of NGC 315 continues exactly as long as the jet maintains a parabolic shape. The jet speeds start to decrease right after the jet collimation break. The same trend is observed in M87. These findings indicate that a gradual jet acceleration through Poynting-flux conversion takes place in these sources.
The observed jet acceleration profile of $\Gamma \propto z^{0.30}$ is much flatter than the efficient "linear acceleration" of $\Gamma \propto R \propto z^{0.58}$, expected for the initial acceleration region of parabolic outflows in the models of highly magnetized jets \citep{Tchekhovskoy2008, Tchekhovskoy2009, Komissarov2009, Lyubarsky2009}. The "slow acceleration" was also observed in M87 in previous studies \citep{Mertens2016, Park2019a}. This result indicates that jet acceleration is not simply determined by the jet collimation profile but may be determined by the interplay between (i) the degree of jet magnetization near the jet base, (ii) the differential collimation of poloidal magnetic field lines, which can be different in different sources even if they show the same jet geometries\footnote{The observed jet collimation profile might reflect only parts of the field lines (also called streamlines), while the jet acceleration efficiency is associated with the behaviors of multiple streamlines.}, and (iii) the interaction between the jet and the ambient medium. In fact, the maximum Lorentz factor achieved for NGC 315 is at most $\Gamma\approx3$. If we assume that the jet reaches equipartition between Poynting and matter energy flux at the end of the jet acceleration zone ($\sigma_m\approx1$, where $\sigma_m$ is the Poynting flux per unit matter energy flux), then the total energy flux per unit rest-mass energy flux is $\mu = \Gamma(1+\sigma_m) \approx 6$ \citep[e.g.,][]{TT2013}. If this is the case, the NGC 315 jet may not be highly magnetized at its base. Also, the observed rapid deceleration right after the collimation break suggests that there could be an active interaction of the jets with the surrounding medium on pc-scales. The interaction can result in gas entrainment from surrounding material and substantial deceleration of the jet, as suggested by observations of the kpc-scale jets of many radio galaxies \citep{LB2014}. 
Also, substantial jet deceleration due to entrainment of surrounding winds in the jet acceleration zone was shown in recent GRMHD simulations \citep{Chatterjee2019}.
In order for the differential collimation of poloidal field lines, and thus efficient jet acceleration, to occur, the field lines must be able to communicate with other regions of the jet. In other words, they must be causally connected with regions near the jet axis \citep[e.g.,][]{Tchekhovskoy2009, Komissarov2009, Clausen-Brown2013}. This condition is satisfied if the jet half-opening angle $\theta_j$ is smaller than the Mach-cone half-opening angle $\theta_M$, which can be translated into the condition $\Gamma\theta_j\lesssim \sqrt{\sigma_m}$ \citep{Komissarov2009}. We found that $\theta_j$ gradually decreases with distance in the parabolic jet region down to $\approx1^\circ$ and that the maximum Lorentz factor is $\Gamma\approx3$, which indicates that the condition is satisfied when assuming $\sigma_m\gtrsim1$ in the jet acceleration zone.
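The two order-of-magnitude estimates in this and the preceding paragraph are easy to verify numerically. A sketch (the equipartition value $\sigma_m = 1$ is the assumption made in the text):

```python
import math

gamma_max = 3.0   # maximum Lorentz factor reached by NGC 315
sigma_m = 1.0     # assumed Poynting-to-matter energy flux ratio at the end of acceleration

# Total energy flux per unit rest-mass energy flux: mu = Gamma * (1 + sigma_m)
mu = gamma_max * (1.0 + sigma_m)
print(mu)         # 6.0 -> the jet may not be highly magnetized at its base

# Causality condition for differential collimation: Gamma * theta_j <~ sqrt(sigma_m)
theta_j = math.radians(1.0)   # half-opening angle ~1 deg in the parabolic region
print(gamma_max * theta_j)    # ~0.05, comfortably below sqrt(sigma_m) = 1
assert gamma_max * theta_j < math.sqrt(sigma_m)
```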
We note that the lowest jet speed after the jet deceleration is $\beta\approx0.4$. However, the jet speed inferred from the kpc-scale observations at distances $\lesssim10^8\ R_g$ is $\beta\approx0.85$ \citep{LB2014}, suggesting that there must be an additional jet acceleration zone in the conically expanding jet region. We note that \cite{LB2014} used a model that assumes a constant speed at short jet distances, which correspond to $10^6$--$10^8\ R_g$ for NGC 315. A more accurate jet velocity field in this region, which we plan to investigate with future observations, would help to identify where the additional jet acceleration zone is located and how it is related to the jet geometry.
\subsection{Jet Velocity Stratification}
\label{dis:stratification}
It is commonly accepted that AGN jets are stratified in velocity. A faster-spine and slower-sheath jet structure, i.e., the inner layers closer to the jet axis (spine) being faster than the outer layers (sheath), has been considered in many previous studies. For example, \cite{Clausen-Brown2013} showed that a model with velocity shear can explain the observed apparent sizes of many blazars better than a model without shear (i.e., a constant speed across jet layers). \cite{LB2014} showed that the jets of ten nearby radio galaxies indeed have faster inner layers and slower outer layers on kpc-scales. This velocity structure has also been used for modelling the high-energy emission up to TeV energies observed in blazars and radio galaxies \citep[e.g.,][]{Ghisellini2005, TG2008, TG2014, Marscher2010, MacDonald2015, Park2019c}. Also, \cite{Nakahara2018} invoked the spine-sheath scenario to explain the observed jet radii of NGC 4261 \citep{Nakahara2018} and Cygnus A \citep{Boccardi2016b, Nakahara2019} being systematically larger than those of M87 and NGC 6251 (see Figure 8 in \citealt{Nakahara2018}).
On the other hand, a relativistic jet launched by a rotating black hole and accelerated by the Poynting flux conversion is expected to have a slower-spine and faster-sheath structure \citep[e.g.,][]{Komissarov2007, Komissarov2009, Tchekhovskoy2008, Tchekhovskoy2009, Penna2013, Nakamura2018, PT2020}. If the jet is viewed at a small angle, then the brightness of the outer boundary layers can be much more enhanced than the inner layers due to the relativistic Doppler boosting effect. \cite{Nakamura2018} suggested that this velocity structure can naturally produce the observed limb-brightening of the M87 jet in the collimation zone.
\begin{figure}[t!]
\centering
\includegraphics[width = 0.47\textwidth]{figures/doppler.pdf}
\caption{Flux enhancement factor by the Doppler boosting effect, $\delta^{2-\alpha}$, as a function of $\Gamma\beta$ for NGC 315 (magenta) and M87 (blue). $\alpha = -0.5$ is assumed for both sources. The vertical dashed lines show the positions of the maximum enhancement factors. \label{fig:doppler}}
\end{figure}
Interestingly, we found an indication of limb-brightening in our HSA image of NGC 315 at 43 GHz (Figure~\ref{fig:doubleridge}). However, the jet viewing angle of NGC 315 is quite large, and one cannot expect much flux enhancement from the Doppler boosting effect for this source. Figure~\ref{fig:doppler} shows the expected flux enhancement factor $\delta^{2-\alpha}$, where $\delta\equiv1/[\Gamma(1-\beta\cos\theta)]$ is the Doppler factor, for M87 \citep[$\theta=17^\circ$,][]{Mertens2016, Walker2018} and NGC 315 \citep[$\theta=50^\circ$,][see also Appendix~\ref{appendix}]{LB2014}. We assumed $\alpha = -0.5$, which is a good approximation for both sources \citep[see Figure~\ref{fig:properties_binned} and][]{Hada2016}.
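The enhancement curves in Figure~\ref{fig:doppler} follow directly from these definitions. A minimal numerical sketch (the grid and names are ours) reproduces the maximum enhancement factors for the two viewing angles:

```python
import math

def boost(gamma_beta, theta_deg, alpha=-0.5):
    """Flux enhancement delta**(2 - alpha) for proper velocity Gamma*beta
    at viewing angle theta, with delta = 1 / [Gamma (1 - beta cos(theta))]."""
    gamma = math.sqrt(1.0 + gamma_beta ** 2)
    beta = gamma_beta / gamma
    delta = 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))
    return delta ** (2.0 - alpha)

grid = [0.01 * i for i in range(1, 500)]           # Gamma*beta from 0.01 to ~5
max_ngc315 = max(boost(gb, 50.0) for gb in grid)   # theta = 50 deg
max_m87 = max(boost(gb, 17.0) for gb in grid)      # theta = 17 deg
print(f"NGC 315: {max_ngc315:.2f}")                # < 2: Doppler boosting is weak
print(f"M87:     {max_m87:.1f}")                   # ~20: boosting matters for M87
```

The maximum occurs at $\beta = \cos\theta$, where $\delta = 1/\sin\theta$; for $\theta = 50^\circ$ this gives $\delta^{2.5} \approx 1.9$, consistent with the "factor of two" quoted in the text.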
The expected maximum flux enhancement for NGC 315 is less than a factor of two. For fast speeds of $\Gamma \beta \gtrsim 1$, the flux can even be suppressed because the beaming cone moves away from our line of sight. Therefore, the observed limb-brightening of the jet in NGC 315 cannot be attributed to Doppler-boosted emission of a fast-moving jet sheath. We consider two possible scenarios. One is that the jet spine is much faster than the jet sheath on pc-scales, similar to the jet lateral velocity structure on kpc-scales \citep{LB2014}, so that the spine emission is significantly de-boosted. Limb-brightening has also been observed on pc-scales in the nearby radio galaxy Cygnus A, known to have a large viewing angle of $\theta \approx 75^\circ$, and a faster-spine and slower-sheath structure was employed to explain the observed jet kinematics in this source \citep{Boccardi2016a, Boccardi2016b}. However, it is questionable how the jet spine could attain very fast speeds at such a short distance, which appears difficult to achieve in MHD jet acceleration models. We compare the radii of the jet and counterjet, which can provide some hints about jet velocity stratification but is not conclusive with our data, in Appendix~\ref{appendix:radii}.
The other scenario is that the jet sheath is intrinsically brighter (has a higher emissivity) than the spine. The consistency between the observed jet collimation profile and the FFE solution for the outermost poloidal field line anchored to the equator of the event horizon (Figure~\ref{fig:jet_width_comparison}) implies that the jet limbs may follow the boundary against the ambient medium, which is presumably non-relativistic winds \citep[e.g.,][]{Sadowski2013}. The velocity shear near the boundary layers can result in efficient particle acceleration \citep[e.g.,][]{Ostrowski1998, SO2002, Kataoka2006}. Also, recent GRMHD simulations showed that pinch instabilities can develop near the jet-wind boundary, which can produce radiating superluminal knots \citep{Nakamura2018} and efficient particle acceleration through magnetic reconnection \citep{Chatterjee2019}. We note that confirming the existence of limb-brightening at other distance ranges with future high-resolution, high-sensitivity observations, combined with the jet velocity field, can be critical to distinguishing these scenarios. Probing the innermost jet region with millimeter VLBI arrays, where the jet speeds are presumably low and the Doppler effect is weaker, would be especially valuable.
\section{Conclusions}
\label{sec:summary}
We have studied the collimation and acceleration in the jets of the nearby FR I radio galaxy NGC 315 with our multifrequency VLBA observations and archival HSA and VLA data. We also have performed complementary monitoring observations with KaVA to study the jet kinematics. Our work leads us to the following principal conclusions:
\begin{enumerate}
\item We measured the frequency-dependent position of the core from the 2D cross-correlation analysis, which follows a power-law relation $r_{\rm core}(\nu) \propto \nu^{-1.39 \pm 0.20}$. The core-shift measurements allow us to infer the origin of the jets by extrapolating the power-law relation to $\nu \rightarrow \infty$, which is $\approx0.11$ mas upstream of the 22 GHz core. This is crucial for accurately measuring the "jet distance" for both the jet and counterjet and thus for deriving accurate jet collimation and acceleration profiles.
\item We found that the jet geometry transitions from a semi-parabolic shape ($R\propto z^{0.58\pm0.05}$) into a conical/hyperbolic shape ($R \propto z^{1.16\pm0.01}$) at a distance of $z_b=(1.1\pm0.2)\times10^5\ R_g$. The jet collimation profile in the parabolic region is consistent with the profiles of the FR I radio galaxies M87 and NGC 6251 and with the FFE solution for the outermost poloidal magnetic field line anchored to the event horizon on the equatorial plane. We conclude that the jet may be collimated by the pressure of winds, non-relativistic gas outflows launched from hot accretion flows. The jet collimation break occurs at a distance an order of magnitude smaller than the Bondi radius and the radius of the black hole's sphere of influence, which indicates that the winds may not extend out to the Bondi radius. Also, neither a recollimation feature nor significant linear polarization was detected at the jet geometry transition point. This implies that mechanisms other than a recollimation shock are needed to increase the jet internal pressure so that the jet can expand conically through the surrounding hot gas, which has a flat pressure profile.
\item We derived the jet velocity field at distances of $\approx3,000$--$300,000\ R_g$ based on the assumption that the observed asymmetry in brightness between the jet and counterjet is due to relativistic aberration. We found that the jet gradually accelerates up to a bulk Lorentz factor of $\Gamma \sim 3$ with an acceleration profile of $\Gamma \propto z^{0.30\pm0.04}$ in the same region as the jet collimation zone. The jet decelerates right after the jet collimation break, similar to M87. We conclude that the jet is accelerated to relativistic speeds by converting the electromagnetic energy of the flow into kinetic energy through the magnetic nozzle effect.
\item We found an indication of limb-brightening in the jet only in the HSA 43 GHz image. Only at this frequency is the angular resolution of our VLBI data significantly smaller than the maximum observed jet radius, leaving open the possibility that the jet is intrinsically limb-brightened at other distances on pc-scales as well. As the NGC 315 jets have a relatively large viewing angle of $\approx50^\circ$, the flux enhancement expected from the Doppler boosting effect is at most a factor of about two. This implies that either (i) the jet spine is much faster than the jet sheath on pc-scales, even though this velocity structure is difficult to reproduce with MHD jet acceleration models, or (ii) the jet sheath has a much higher emissivity than the spine due to the interaction with the surrounding medium.
\item Our monitoring observations with KaVA have shown that the jet structure at distances $\lesssim5$ mas from the apparent core appears very stationary over eight months. We argue that this does not mean that the jet flow is actually stationary; rather, re-brightened regions in the jet, similar to those in M87, can make the jet appear stationary. We detected an outward motion with an apparent jet speed of $\beta_{\rm app} = 1.85\pm0.44$ at a distance of $\approx8$ mas. Combining this speed with the observed jet-to-counterjet brightness ratio at a similar distance, we derive a jet viewing angle of $\theta\approx52.8\pm8.0^\circ$, in good agreement with the viewing angle constrained on kpc-scales.
\end{enumerate}
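The viewing-angle estimate in the last item can be checked with the standard beaming relations $\beta_{\rm app} = \beta\sin\theta/(1-\beta\cos\theta)$ and $R = [(1+\beta\cos\theta)/(1-\beta\cos\theta)]^{2-\alpha}$. A sketch (the intrinsic speed $\beta$ and the brightness ratio $R$ below are implied values derived from the quoted $\beta_{\rm app}$ and $\theta$, not numbers quoted in the paper; $\alpha=-0.5$ is assumed as elsewhere in the text):

```python
import math

beta_app = 1.85             # apparent speed at ~8 mas (in units of c)
theta = math.radians(52.8)  # derived viewing angle
alpha = -0.5                # assumed spectral index for the brightness ratio

# Invert beta_app = beta sin(theta) / (1 - beta cos(theta)) for the intrinsic speed
beta = beta_app / (math.sin(theta) + beta_app * math.cos(theta))
print(f"beta = {beta:.3f}")  # implied intrinsic speed (not quoted in the paper)

# Consistency: the inverted speed recovers the quoted apparent speed ...
assert abs(beta * math.sin(theta) / (1.0 - beta * math.cos(theta)) - beta_app) < 1e-9

# ... and implies a jet-to-counterjet brightness ratio R
R = ((1.0 + beta * math.cos(theta)) / (1.0 - beta * math.cos(theta))) ** (2.0 - alpha)
print(f"R = {R:.0f}")        # the ratio that, with beta_app, yields theta = 52.8 deg
```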
We finally remark that NGC 315 is the third radio-loud AGN (after M87 and 1H 0323+342) for which firm evidence of jet acceleration and collimation occurring simultaneously over a large range of jet distances has been found, as the MHD jet acceleration model predicts. The indication of limb-brightening in the jet also provides hints about velocity stratification and non-thermal particle acceleration mechanisms in AGN jets. We plan to search other AGNs for jet acceleration, collimation, and limb-brightening to obtain a more complete picture in the near future.
\vspace{-0.1cm}
\acknowledgments
We thank the anonymous ApJ referee for detailed comments that improved the manuscript. J.P. thanks Hung-Yi Pu for useful discussions. J.P. acknowledges financial support from the Korean National Research Foundation (NRF) via Global PhD Fellowship Grant 2014H1A2A1018695 and support through the EACOA Fellowship awarded by the East Asia Core Observatories Association, which consists of the Academia Sinica Institute of Astronomy and Astrophysics, the National Astronomical Observatory of Japan, the Center for Astronomical Mega-Science, Chinese Academy of Sciences, and the Korea Astronomy and Space Science Institute. This work is supported by the Ministry of Science and Technology of Taiwan grant MOST 109-2112-M-001-025 (K.A). The VLBA is an instrument of the National Radio Astronomy Observatory. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc. This work is based in part on observations made with KaVA, which is operated by the Korea Astronomy and Space Science Institute and the National Astronomical Observatory of Japan.
\facilities{VLBA (NRAO), HSA (NRAO), VLA (NRAO), KaVA (KASI/NAOJ)}
\software{AIPS \citep{Greisen2003}, Difmap \citep{Shepherd1997}, GPCAL \citep{Park2020}, Numpy \citep{Numpy2011}, Scipy \citep{Scipy2020}, Pandas \citep{Pandas2010}, Astropy \citep{Astropy2013, Astropy2018}, Matplotlib \citep{Matplotlib2007}}
\section{\textbf{Introduction}}
Solvable irrelevant deformations have attracted much interest in recent years due to their novel features, which provide analytical control over the ultraviolet (UV) physics despite the difficulties arising from strong coupling or nonlocality. The $T\bar{T}$-deformation is the most studied example of such deformations\footnote{For a pedagogical review see \cite{LectureJiang}, and for recent progress see the workshop \cite{TTbar2020}.} \cite{Smirnov:2016lqw,Cavaglia:2016oda}. It was originally defined in two-dimensional relativistic quantum field theories, where $T\bar{T}$ is a well-defined composite operator that can be written as the determinant of the energy-momentum tensor. Many interesting physical quantities, such as the energy spectrum, the partition function, the entanglement entropy and the S-matrix, have been computed exactly in the deformed theories; see \cite{LectureJiang} and the references therein. These exact results reveal rich and unexpected structures of solvable irrelevant deformations that we have not yet fully understood. For example \cite{Hage1,Hage2,Hage3}, the deformed density of states of a deformed field theory shows the Hagedorn behavior that typically occurs in string theory. On the other hand, the $T\bar{T}$-deformation of holographic conformal field theories opens a new window on the AdS/CFT correspondence \cite{verlinde,Hage1,kutasov2,dubovsky1,dubovsky2,bihdtott,ttandcf,cutoffads,commentsontt,Chen:2018eqk,Hartman:2018tkw,Caputa:2019pam,Murdia:2019fax,Jeong:2019ylz,Guica:2019nzm,Chen:2019mis,Grieninger:2019zts,Lewkowycz:2019xse,Apolo:2019zai,Hirano:2020nwq, Jafari:2019qns, He:2020hhm,Kruthoff:2020hsi}.
\par
Recently, similar integrable deformations have been considered in other types of models, including integrable lattice models \cite{Lattice1,Lattice2} and non-relativistic integrable field theories \cite{NonRe1,NonRe2,BoseJ}. Interestingly, the $T\bar{T}$(-like) deformation can be interpreted as a special type of the algebra-preserving deformations studied in the context of integrable spin chains \cite{Long1}. The standard integrability technique for solving these deformed models is first to compute the deformed S-matrix in the infinite-volume limit, where the deformation simply multiplies the S-matrix by a Castillejo-Dalitz-Dyson-like factor \cite{Smirnov:2016lqw,CDD,CDD2}, and then to substitute the deformed S-matrix into the Bethe equations to solve the theory in finite volume.
Using this integrability technique, the deformed one-dimensional Bose gas, known as the Lieb-Liniger model, was carefully studied in \cite{BoseJ}, where the author showed that the deformed one-dimensional Bose gas shares many qualitative features with the $T\bar{T}$-deformed relativistic quantum field theories. For example, the spectrum can also be obtained from a flow equation, and the density of states exhibits the Hagedorn behavior in the thermodynamic limit.
In this work we use another method, quantum perturbation theory, to study the spectrum of deformed non-relativistic field theories that are not necessarily integrable. The model we focus on is the non-relativistic Schr\"{o}dinger model, which can be viewed as the free limit of the Lieb-Liniger model. We find that for models with a degenerate Legendre transformation, such as the non-relativistic Schr\"{o}dinger or Lieb-Liniger model, the Lagrangian and Hamiltonian definitions of the $T\bar{T}$-deformation may not be equivalent. In particular, within the Hamiltonian definition of the $T\bar{T}$-deformation, the flow equation of the Hamiltonian alone is not enough to define the deformation; one also needs to specify how the constraints change under the flow. At first order in the deformation, the Legendre transformation is degenerate, so one can use the Dirac-Bergmann algorithm to quantize the system and compute the spectrum perturbatively. We find that the perturbative expansion of the deformed spectrum is superficially divergent, as expected for models under an irrelevant deformation. After imposing a Dirichlet regularization and dropping the transcendental terms, which are not supposed to appear in the $T\bar{T}$-deformation, the deformed spectrum matches the results derived from the flow equation. We also show that once the second-order deformation is included, the theory either does not admit a perturbative description or suffers from ambiguities.
The paper is organized as follows: in Sect. \uppercase\expandafter{\romannumeral2} we derive the closed form of the $T\bar{T}$-deformed Lagrangian of the non-relativistic complex scalar theory; in Sect. \uppercase\expandafter{\romannumeral3} we compute the deformed energy spectrum perturbatively and manage to find an exact form by using the Brillouin-Wigner perturbation theory; in Sect. \uppercase\expandafter{\romannumeral4} we end with conclusions; some technical details are collected in four appendices.
\bigskip
{\noindent \bf{Note added:}} When we were finishing this project and had obtained the main results in this paper, two interesting papers \cite{Italy,JiangNew}
appeared on arXiv on the same day. The same closed form of the $T\bar{T}$-deformed Lagrangian of non-relativistic models has also been derived there. In \cite{Italy}, the Lagrangian is derived from the dynamical change of coordinates, and in \cite{JiangNew} it is derived using a geometric method. Our method in this work is similar to the one used in \cite{Close}, and is somewhat complementary to the ones in\footnote{We would like to mention that the closed form of the $T\bar{T}$-deformed Lagrangian of non-relativistic models was also found by S. Frolov et al. from the light-cone gauge interpretation \cite{LG}.} \cite{Italy,JiangNew}.
\section{Non-relativistic $T\bar{T}$-deformed Lagrangian}
\noindent
In this section, we consider the $T\bar{T}$-deformation of a two-dimensional (2D) non-relativistic field theory whose Lagrangian satisfies\footnote{We will work in the convention that
\be
g_{\mu\nu}=\mbox{diag}(1,-1),\hspace{3ex}
x^{\mu}=(x^0,x^1)=(t,x).\nonumber\\
\ee
}
\be\label{ttbareq}
\frac{\partial \mathcal{L}^{(\lambda)}}{\partial \lambda}=\operatorname{det}\left(T_{\mu \nu}^{(\lambda)}\right),
\ee
where $T_{\mu\nu }$ is the energy momentum tensor of this theory. Our goal is to solve this flow equation \eqref{ttbareq} for a non-relativistic complex scalar theory with a potential $V(|\phi|)$.
\subsection{One real scalar case}
\noindent
To get some insight into \eqref{ttbareq} we warm up with a toy model involving only a free real scalar, whose Lagrangian density reads
\be
\mathcal{L}_0=\phi\partial_0 \phi+(\partial_1\phi)^2.
\ee
To solve \eqref{ttbareq} we expand the Lagrangian with respect to $\lambda$:
\be\label{ExpandL}
\mathcal{L}=\sum_{n=0}^{\infty}\lambda^{n}\mathcal{L}_n.
\ee
Substituting this expansion into the flow equation \eqref{ttbareq} one can read off $\mathcal{L}_n$ term by term, and the first several orders of the Lagrangian are of the forms
\be
&\mathcal{L}_0=X+Y,\\
&\mathcal{L}_1=Y(X+Y),\\
&\mathcal{L}_2=Y(X^2+ 3XY+2Y^2),\\
&\mathcal{L}_3=Y(X^3+6X^2Y+10XY^2+5Y^3),\\
&\mathcal{L}_4=Y(X^4+10X^3Y+30X^2Y^2+35XY^3+14Y^4),\\
\ee
where we have defined the convenient variables:
\be
X\equiv \phi\partial_0 \phi, \hspace{3ex}Y\equiv (\partial_1\phi)^2.\ee
We observe that these terms can be cast into a general form as
\be
\mathcal{L}_n
&= Y \sum_{k=0}^{n} \frac{1}{k} X^{n-k} Y^{k} C_{n-1}^{k-1}C_{n+k}^{n-1}\\
&=X^n Y {}_2F_1\(-n,n+1,2,-\frac{Y}{X}\),\hspace{3ex}n=1,2,3,....
\ee
Performing the summation \eqref{ExpandL} directly gives the closed form of the Lagrangian
\be
\mathcal{L}
&=X+\sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \lambda^n Y \frac{1}{k} X^{n-k} Y^{k} C_{n-1}^{k-1}C_{n+k}^{n-1}\\
&=-\frac{1}{2\lambda}\(-1-\lambda X+\sqrt{(1-\lambda X)^2-4 \lambda Y}\),
\ee
which indeed solves the flow equation \eqref{ttbareq}.
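Since this closed form underlies everything that follows, it is worth a quick symbolic spot-check (ours, not part of the original derivation). In the variables $X,Y$ the flow equation reduces, as the characteristics below make explicit, to $\partial_\lambda\mathcal{L}=-\mathcal{L}^2+2Y\mathcal{L}\,\partial_Y\mathcal{L}+X\mathcal{L}\,\partial_X\mathcal{L}$. A sympy sketch:

```python
import sympy as sp

lam, X, Y = sp.symbols('lambda X Y')

# closed-form deformed Lagrangian of the free real scalar
L = -(-1 - lam*X + sp.sqrt((1 - lam*X)**2 - 4*lam*Y))/(2*lam)

# lambda-expansion must reproduce L0 = X + Y, L1 = Y(X+Y),
# L2 = Y(X^2 + 3XY + 2Y^2)
expansion = sp.expand(sp.series(L, lam, 0, 3).removeO())
assert expansion.coeff(lam, 0) == sp.expand(X + Y)
assert expansion.coeff(lam, 1) == sp.expand(Y*(X + Y))
assert expansion.coeff(lam, 2) == sp.expand(Y*(X**2 + 3*X*Y + 2*Y**2))

# flow equation in the variables X, Y (X of degree 1 in d_t phi,
# Y of degree 2 in d_x phi):
#   dL/dlam = -L^2 + 2*Y*L*dL/dY + X*L*dL/dX
flow = sp.diff(L, lam) + L**2 - 2*Y*L*sp.diff(L, Y) - X*L*sp.diff(L, X)
for vals in ({X: 0.3, Y: 0.2, lam: 0.1}, {X: 1.1, Y: 0.4, lam: 0.05}):
    assert abs(sp.N(flow.subs(vals))) < 1e-12
```

The numeric evaluation of the flow equation at sample points avoids relying on sympy's ability to simplify the radical identity symbolically.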
Next we add a potential $V$, which is a function of $\phi$ only and does not contain derivatives of $\phi$. As before we expand the Lagrangian with respect to $\lambda$. The first few terms in the deformed Lagrangian are
\be
\mathcal{L}_0=&-V+X+Y\\
\mathcal{L}_1=&-V^2+VX+Y(X+Y)\\
\mathcal{L}_2=&-V^3+V^2X-VY^2+Y(X^2+ 3XY+2Y^2)\\
\mathcal{L}_3=&-V^4+V^3X-VY^2(3X+4Y)\\&+Y(X^3+6X^2Y+10XY^2+5Y^3)\\
\mathcal{L}_4=&-V^5+V^4X+2V^2Y^3-VY^2(6X^2+20XY+15Y^2)\\&+Y(X^4+10X^3Y+30X^2Y^2+35XY^3+14Y^4),\\
\ee
which can be rewritten in a general form as
\be \label{Gen2}
\mathcal{L}_n
&=-V^{n+1}+V^n X+\sum_{j=0}^{n} (-V)^{j} Y^{j+1}\sum_{k=0}^{n-2j}K(n,j,k),\\
K(n,j,k)&= X^{n-2j-k} Y^{k}C_{n+k}^{n-j-1}C_{n-j-1}^{k-1+j}C_{k-1-j}^{j}\frac{1}{k}.
\ee
Substituting \eqref{Gen2} into \eqref{ExpandL} and performing the summation we end up with the closed form of the deformed Lagrangian
\be\label{1rsL}
\mathcal{L}
=&-\frac{V}{1-\lambda V}\\&-\frac{1}{2\lambda(1-\lambda V)}\[-1-\lambda X+\sqrt{(1-\lambda X)^2-4 \lambda(1-\lambda V) Y}\].
\ee
Alternatively the deformed Lagrangian can also be obtained by using the method of characteristics with the initial condition
\be
\mathcal{L}_0=\phi \alpha +\beta^2-V,\quad \mbox{with}\hspace{2ex}\alpha=\partial_0 \phi,\ \beta=\partial_1 \phi.
\ee
In terms of these new variables $\alpha$ and $\beta$, the flow equation \eqref{ttbareq} can be rewritten as
\be
\frac{\partial \mathcal{L}}{\partial \lambda}=-\mathcal{L}^2+\beta\mathcal{L}\frac{\partial \mathcal{L}}{\partial \beta}+\alpha\mathcal{L}\frac{\partial \mathcal{L}}{\partial \alpha},
\ee
which is equivalent to a set of equations:
\begin{equation}\label{chara1}
\left\{
\begin{aligned}
&\frac{d \lambda}{d s}=1\\
&\frac{d \alpha}{d s}=-\alpha \mathcal{L}\\
&\frac{d \beta}{d s}=-\beta \mathcal{L}\\
&\frac{d \mathcal{L}}{d s}=-\mathcal{L}^2\\
&\mathcal{L}(s=0)\\&=\phi \alpha(0)+\beta(0)^2-V
\end{aligned}
\right.
\Rightarrow\hspace{2ex}
\left\{
\begin{aligned}
&\lambda=s\\
&\alpha=\frac{C_3}{-s+C_2}\\
&\beta=\frac{C_4}{-s+C_2}\\
&\mathcal{L}=\frac{1}{s-C_2}\\
&-\frac{1}{C_2}=\phi \frac{C_3}{C_2}+(\frac{C_4}{C_2})^2-V.
\end{aligned}
\right.
\end{equation}
Eliminating the constants $C_i$ in \eqref{chara1}, we get
\be
\mathcal{L}
&=-\frac{V}{1-\lambda V}\\&-\frac{1}{2\lambda(1-\lambda V)}\[-1-\lambda X+\sqrt{(1-\lambda X)^2-4 \lambda(1-\lambda V) Y}\],
\ee
which coincides with \eqref{1rsL}.
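The same spot-check (ours) works for the closed form \eqref{1rsL}, with $V$ treated as a constant parameter since it contains no derivatives:

```python
import sympy as sp

lam, X, Y, V = sp.symbols('lambda X Y V')

# closed-form deformed Lagrangian with potential, eq. (1rsL)
L = (-V/(1 - lam*V)
     - (-1 - lam*X + sp.sqrt((1 - lam*X)**2 - 4*lam*(1 - lam*V)*Y))
     / (2*lam*(1 - lam*V)))

# lambda-expansion must reproduce L0 = -V + X + Y and
# L1 = -V^2 + V*X + Y(X + Y)
expansion = sp.expand(sp.series(L, lam, 0, 2).removeO())
assert expansion.coeff(lam, 0) == sp.expand(-V + X + Y)
assert expansion.coeff(lam, 1) == sp.expand(-V**2 + V*X + Y*(X + Y))

# same characteristic flow equation as in the free case,
# V entering only as a parameter
flow = sp.diff(L, lam) + L**2 - 2*Y*L*sp.diff(L, Y) - X*L*sp.diff(L, X)
for vals in ({X: 0.3, Y: 0.2, V: 0.1, lam: 0.05},
             {X: 0.7, Y: 0.1, V: 0.4, lam: 0.02}):
    assert abs(sp.N(flow.subs(vals))) < 1e-12
```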
\subsection{One complex scalar case}
\noindent
Now let us turn to the complex scalar case.
The Lagrangian density of the free complex scalar theory is
\be
\mathcal{L}_0=\frac{i}{2}(\phi^\ast\partial_0 \phi-\phi\partial_0 \phi^\ast)-\partial_1\phi\partial_1\phi^\ast.
\ee
Define $\vec{\alpha}\equiv\(\partial_0 \phi,\partial_0 \phi^\ast\)$ and $\vec{\beta}\equiv\(\partial_1 \phi,\partial_1 \phi^\ast\)$.
The $T\bar{T}$-flow equation is given by
\be\label{chara2}
\frac{\partial \mathcal{L}}{\partial \lambda}=&-\mathcal{L}^2+\mathcal{L}\vec{\beta}\cdot\frac{\partial \mathcal{L}}{\partial \vec{\beta}}+\mathcal{L}\vec{\alpha}\cdot\frac{\partial \mathcal{L}}{\partial\vec{\alpha}}\\&-\(\frac{\partial \mathcal{L}}{\partial \vec{\alpha}}\cdot\vec{\alpha}\)\(\frac{\partial \mathcal{L}}{\partial \vec{\beta}}\cdot\vec{\beta}\)+\(\frac{\partial \mathcal{L}}{\partial \vec{\alpha}}\cdot\vec{\beta}\)\(\frac{\partial \mathcal{L}}{\partial \vec{\beta}}\cdot\vec{\alpha}\).
\ee
Without the last two terms this flow equation \eqref{chara2} can be solved by using the method of characteristics with the initial condition
\be
\mathcal{L}(s=0)=\frac{i}{2}\(\phi^\ast(\vec{\alpha})_1-\phi(\vec{\alpha})_2\)-(\vec{\alpha})_1(\vec{\beta})_2
\ee
in exactly the same way as described in the previous section.
The solution is
\be\label{ttbarL0}
\mathcal{L}
&=-\frac{1}{2\lambda}\[-1-\lambda X+\sqrt{(1-\lambda X)^2-4 \lambda Y}\],
\ee
where $X=\frac{i}{2}(\phi^\ast\partial_0 \phi-\phi\partial_0 \phi^\ast)$ and $Y=-\partial_1\phi\partial_1\phi^\ast$.
To include the modifications from the last two terms in \eqref{chara2}, we make an ansatz for the exact Lagrangian and expand it with respect to $\lambda$. By matching the expanded terms\footnote{The explicit forms of the Lagrangians $\mathcal{L}_i$ are quite involved, and we list them in Appendix A. } to the first few orders we can fix our ansatz completely, and in the end we check our result against the flow equation. The final result we get is
\be
\mathcal{L}\label{Lag}
&=-\frac{1}{2\lambda}\[-1-\lambda X+\sqrt{(1-\lambda X)^2-4 \lambda Y+8i \lambda^2 A-4\lambda^3 B}\],
\ee
where
\be
&A=(\phi^\ast \partial_1 \phi+\phi \partial_1 \phi^\ast)(\partial_1 \phi\partial_0 \phi^\ast-\partial_0 \phi\partial_1 \phi^\ast),\\
&B=\phi \phi^\ast(\partial_1 \phi\partial_0 \phi^\ast-\partial_0 \phi\partial_1 \phi^\ast)^2.\\
\ee
In a similar way, we can find the deformed Lagrangian in the case that the complex scalar has a potential $V$,
\be\label{ttbarL}
&\mathcal{L}
=-\frac{V}{1-\lambda V}-\frac{1}{2\lambda(1-\lambda V)}(-1-\lambda X+\\&\sqrt{(1-\lambda X)^2-4 \lambda (1-\lambda V)Y+8i \lambda^2(1-\lambda V)A-4\lambda^3(1-\lambda V)B}).
\ee
The detailed derivation is straightforward but too tedious to be presented here.
\section{Deformed energy from quantum perturbative theory}
\noindent
In this section, we will compute the spectrum of the $T\bar{T}$-deformed non-relativistic free boson (an effective Schr\"{o}dinger model). The model we are considering is the free complex scalar defined on a compact region $x\in [0,R]$, whose Lagrangian density and Hamiltonian are respectively
\be
&\mathcal{L}=\frac{i}{2}\left(\phi^{\ast} \partial_{t} \phi-\partial_{t} \phi^{\ast} \phi\right)-\partial_{x} \phi^{\ast} \partial_{x} \phi,\label{Lag0}\\
&H=\int \mathrm{d} x \,\partial_{x} \phi^{\ast}(x) \partial_{x} \phi(x).
\ee
This simple free theory can be thought of as the free boson limit of the Lieb-Liniger model, whose $T\bar{T}$-deformation has been well studied in \cite{BoseJ}.
The momentum operator is
\begin{equation}P=-\frac{i}{2} \int \mathrm{d} x\left[\phi^{\ast}(x) \partial_{x} \phi(x)-\partial_{x} \phi^{\ast}(x) \phi(x)\right].\end{equation}
The complex scalar field satisfies the equal-time commutation relations,
\begin{equation}\begin{split}\label{commutation}
&[\phi(x, t), \phi(y, t)]=0, \quad\left[\phi^{\ast}(x, t), \phi^{\ast}(y, t)\right]=0,\\& \left[\phi(x, t), \phi^{\ast}(y, t)\right]=\delta(x-y).
\end{split}\end{equation}
The Hilbert space is spanned by $N$-particle states,
\begin{equation}|\vec{x}\rangle=\phi^{\ast}\left(x_{1}\right) \ldots \phi^{\ast}\left(x_{N}\right)|0\rangle.\end{equation}
Here, $\vec{x}=(x_1,x_2,...,x_N)$ and $|0\rangle$ is the vacuum of Fock space, which is annihilated by $\phi(x)$. The $N$-particle eigenfunction of this free model is simply given by the plane waves
\be&\langle \vec{x}|\vec{u}\rangle=\psi_{N}(\vec{u} \mid \vec{x})=\frac{1}{\sqrt{N !}} \sum_{\sigma \in \mathrm{S}_{N}} \exp \left[i \sum_{j=1}^{N} x_{j} u_{\sigma_{j}}\right],\\& \left|\vec{u}_{N}\right\rangle=\frac{1}{\sqrt{N !}} \int \mathrm{d}^{N} x \psi_{N}(\vec{u} \mid \vec{x})|\vec{x}\rangle,\label{Tran}\ee
with the eigenvalues
\begin{equation}E_{N}(\vec{u})=\sum_{j=1}^{N} u_{j}^{2}, \quad P_{N}(\vec{u})=\sum_{j=1}^{N} u_{j},\end{equation}
and
$$
\langle\vec{x}|\vec{y}\rangle=\sum_{\sigma \in \mathrm{S}_{N}} \prod_{i=1}^{N}\delta(x_i-y_{\sigma_i}).
$$
\subsection{Lagrangian vs. Hamiltonian definition}
The $T\bar{T}$-deformation is formally defined as a flow of the Lagrangian density of the theory:
\be
\partial_\lambda \mathcal{L}=-\epsilon^{ab}\epsilon^{cd}T_{ac}^{(\lambda)}T_{bd}^{(\lambda)}\equiv-\mathcal{O}^{(\lambda)}_{T\bar{T}}
\ee
where $T_{ab}^{(\lambda)}$ is the stress tensor of the theory with the flow parameter $\lambda$. It is generally believed that, at least at the classical level, the deformation can be equivalently defined as a flow of the Hamiltonian density of the theory:
\be \label{Hflow}
\partial_\lambda \mathcal{H}=\mathcal{O}^{(\lambda)}_{T\bar{T}}.
\ee
The classical equivalence has been examined at the leading order in \cite{Kruthoff:2020hsi} and more generally in \cite{Jorjadze:2020ili}. However, the equivalence is built on an implicit assumption that the Lagrangian is non-singular or, equivalently, that the Legendre transformation is invertible, so that the $T\bar{T}$-deformation and the Legendre transformation commute. The subtlety of a singular Lagrangian is that it leads to a constrained Hamiltonian. In this section we want to show that in the Hamiltonian formalism the flow equation \eqref{Hflow} itself is not enough to define the $T\bar{T}$-deformation.
To illustrate our point, let us consider the simple model \eqref{Lag0} with the Lagrangian
\be
\mathcal{L}=&\mathcal{L}_0+\lambda \mathcal{L}_1,
\ee
where the leading-order deformation of the Lagrangian is
\be
\mathcal{L}_1=\frac{i}{2}\(\phi \partial_t\phi (\partial_x \phi^\ast )^2-\phi^\ast \partial_t \phi^\ast (\partial_x\phi)^2\)+\(\partial_x\phi^\ast \partial_x\phi\)^2.
\ee
The energy density is now:
\be
T_{00}=&\frac{\partial \mathcal{L}}{\partial(\partial_t \phi)}\partial_t \phi+\frac{\partial \mathcal{L}}{\partial(\partial_t \phi^\ast)}\partial_t \phi^\ast-\mathcal{L}\\
=&\partial_x\phi\partial_x\phi^\ast-\lambda\(\partial_x\phi^\ast \partial_x\phi\)^2.\label{T00}
\ee
This energy density is supposed to be identified with Hamiltonian density $\mathcal{H}$ after rewriting $\partial_t\phi,~\partial_t\phi^\ast$ in terms of $\phi,~\phi^\ast$ and the canonical momenta $\Pi, \Pi^\ast$, which are given by
\be \label{Momen1}
&\Pi=\frac{\partial \mathcal{L}}{\partial (\partial_t \phi)}=\frac{i}{2}\phi^\ast+\frac{i\lambda}{2}\phi (\partial_x \phi^\ast)^2,\\& \Pi^\ast=\frac{\partial \mathcal{L}}{\partial (\partial_t \phi^\ast)}=-\frac{i}{2}\phi-\frac{i\lambda}{2}\phi^\ast(\partial_x\phi)^2.
\ee
The problem is that the Lagrangian density $\mathcal{L}$ is singular so that the rewriting can not be performed.
Since the energy density \eqref{T00} does not depend on $\partial_t\phi,~\partial_t\phi^\ast$ explicitly, one is tempted to claim
\be \label{HT00}
\mathcal{H}=T_{00}=\partial_x\phi\partial_x\phi^\ast-\lambda\(\partial_x\phi^\ast \partial_x\phi\)^2.
\ee
However, \eqref{HT00} is clearly not consistent with the definition \eqref{Hflow}, even at the leading order:
\be \label{HTt}
&\mathcal{H}=\mathcal{H}_0+\lambda \mathcal{O}^{(\lambda)}_{T\bar{T}}\\&=\partial_x\phi \partial_x\phi^\ast-\lambda\(\partial_x\phi^\ast \partial_x\phi \)^2-\frac{i\lambda}{2} \(\phi \partial_t\phi (\partial_x \phi^\ast )^2-\phi^\ast \partial_t \phi^\ast (\partial_x\phi)^2\).
\ee
Strictly speaking, this is not an expression of the Hamiltonian density, because it contains the time-derivative terms $\partial_t\phi,~\partial_t\phi^\ast$ which come from $T_{ab}$. Using the equations of motion
\be
i\partial_t \phi+\partial_x^2\phi=i\partial_t \phi^\ast-\partial_x^2\phi^\ast=0+\mathcal{O}(\lambda)
\ee
one can rewrite \eqref{HTt} as
\be \label{HT001}
&\mathcal{H}=\partial_x\phi \partial_x\phi^\ast-\lambda\(\partial_x\phi^\ast \partial_x\phi \)^2\\&+\frac{\lambda}{2}\((\partial_x\phi^\ast)^2\partial_x^2\phi \phi+(\partial_x\phi)^2\partial_x^2\phi^\ast\phi^\ast\).
\ee
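The rewriting just performed can be spot-checked symbolically (our check), treating the field and its derivatives as independent symbols and imposing the free equations of motion $\partial_t\phi=i\partial_x^2\phi$, $\partial_t\phi^\ast=-i\partial_x^2\phi^\ast$:

```python
import sympy as sp

lam = sp.symbols('lambda')
p, ps = sp.symbols('phi phis')        # phi, phi^*
pt, pst = sp.symbols('pt pst')        # d_t phi, d_t phi^*
px, psx = sp.symbols('px psx')        # d_x phi, d_x phi^*
pxx, psxx = sp.symbols('pxx psxx')    # d_x^2 phi, d_x^2 phi^*

# eq. (HTt): H0 + lambda*O_TTbar, still containing time derivatives
H_t = (px*psx - lam*(px*psx)**2
       - sp.I*lam/2*(p*pt*psx**2 - ps*pst*px**2))

# impose the free equations of motion to leading order
H_sub = H_t.subs({pt: sp.I*pxx, pst: -sp.I*psxx})

# eq. (HT001)
H_target = (px*psx - lam*(px*psx)**2
            + lam/2*(psx**2*pxx*p + px**2*psxx*ps))
assert sp.expand(H_sub - H_target) == 0
```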
It is obvious that \eqref{HT00} and \eqref{HT001} do not match. This mismatch is problematic when one tries to compute the spectrum of the theory perturbatively. The problem does not appear in a relativistic theory, for example in the theory of a relativistic free boson. In the relativistic case, the canonical momentum depends on the flow parameter as well, so that the deformed Hamiltonian densities obtained in the two ways agree at the classical level, as we show in Appendix \ref{Diff}. A more severe problem with the definition \eqref{Hflow} is that it is not enough to define the deformation. For example, if one wants to obtain the second-order correction of \eqref{HT001}, one needs to first introduce the Lagrangian density
\be \label{Lan}
\mathcal{L}=\partial_t\phi\, \Pi+\partial_t\phi^\ast\, \Pi^\ast-\mathcal{H},
\ee
with the primary constraints
\be \label{Constr}
\chi_1=\Pi-\frac{i}{2}\phi^\ast=0,\quad \chi_2=\Pi^\ast+\frac{i}{2}\phi=0,
\ee
then derive the first-order canonical stress tensor through
\be
T^a_b=\frac{\partial \mathcal{L}}{\partial(\partial_a \phi)}\partial_a \phi+\frac{\partial \mathcal{L}}{\partial(\partial_a \phi^\ast)}\partial_a \phi^\ast-\delta^a_b \mathcal{L},
\ee
and in the end substitute it into \eqref{Hflow}. Therefore besides the flow equation \eqref{Hflow} one also needs to know how the constraints $\chi_1^{(\lambda)}$ and $\chi_2^{(\lambda)}$ change under the flow.\par
Assuming that we use \eqref{Hflow} together with $\chi_1^{(\lambda)}$ and $\chi_2^{(\lambda)}$ to define the $T\bar{T}$-deformation of the non-relativistic free boson gas, we can use the Hellmann-Feynman theorem and factorization formula to derive the flow equation of the spectrum
\be\label{floweqn}
\frac{d \langle n| H |n\rangle}{d\lambda}=\langle n|T_{00}|n\rangle \langle n|T_{11}|n\rangle -\langle n|T_{01}|n\rangle \langle n|T_{10}|n\rangle .
\ee
But because of the mismatch between \eqref{HT00} and \eqref{HT001}, we cannot identify $\langle n|T_{00}|n\rangle$ with $E_n/R$ and $\langle n| H |n\rangle$ with $E_n$ at the same time if we use \eqref{HT001} as the Hamiltonian density, where $R$ is the size of the system.
\par
It is not hard to check that the two constraints \eqref{Constr} are second class, so one may wonder whether one can first solve them to reduce the dimension of the phase space and then perform the $T\bar{T}$-deformation. The reduced Hamiltonian density is given in \cite{Gergely:2003zy}:
\be \label{reduce}
\mathcal{H}_r=-\frac{i}{2}\partial_x \tilde{\phi} \partial_x \tilde{\Pi},\quad \tilde{\phi}=\frac{\phi}{2}+i\Pi^\ast,\quad \tilde{\Pi}=\Pi+i\frac{\phi^\ast}{2}.
\ee
However, this does not solve the problem: we still cannot transform the Hamiltonian density into a Lagrangian density which depends only on $\tilde{\phi}$ and $\partial_t \tilde{\phi}$, so the $T\bar{T}$-deformation cannot be defined. A caveat of our analysis is that there may exist other ways to describe the reduced Hamiltonian system such that the velocity $\partial_t \phi$ could be solved in terms of the momentum.\par
The singular Lagrangian density can be cured by including the higher-order deformations. In particular, the exact Lagrangian is not singular, so in principle we can use the Legendre transformation to obtain a Hamiltonian density without any constraints. But it turns out that the resulting Hamiltonian has a singularity at $\lambda=0$, indicating that it cannot be expanded as a Taylor series in the flow parameter $\lambda$. Requiring the vanishing of these singularities introduces the constraints. Since there are different ways to introduce the constraints, there will be ambiguities in the constrained Hamiltonian. In short, the mismatch between \eqref{HT00} and \eqref{HT001} is not just a problem at first order in the deformation; rather, it is a problem of the definition of the $T\bar{T}$-deformation in a system with a singular Lagrangian density or, equivalently, in a constrained Hamiltonian system.
We conclude this section by stressing that the above argument is purely classical, without considering the problem of Feynman path integral and the ambiguities in the operator ordering.
\subsection{Dirac-Bergmann Algorithm}
The standard technique for solving a constrained Hamiltonian system is the Dirac-Bergmann algorithm (DBA).
In this section, we will focus on the first-order deformation and study how the deformation affects the commutation relations using DBA.
First let us demonstrate the algorithm in the free theory with the action \eqref{Lag0}.
The conjugate momenta of $\phi$ and $\phi^\ast$ are
\be
\Pi=\frac{i}{2}\phi^\ast,\quad \Pi^\ast=-\frac{i}{2}\phi,
\ee
satisfying the standard equal-time commutation relation
\be
[\phi(x),\Pi(y)]=i\delta(x-y),\quad [\phi^\ast(x),\Pi^\ast(y)]=i\delta(x-y).
\ee
The Hamiltonian system has the two constraints \eqref{Constr}, satisfying
\be
[\chi_1,\chi_2]\equiv C_{1_x,2_y}=\delta(x-y).
\ee
These two constraints are second class, so we can introduce the Dirac bracket, which is defined by
\be
&[\phi(x),\phi^\ast(y)]_D\\
&=[\phi(x),\phi^\ast(y)]-\int dx' dy'[\phi(x),\chi_a(x')]C_{a_x',b_y'}^{-1}[\chi_b(y'),\phi^\ast(y)]\\
&=0-\int dx' dy'\(i\delta(x-x')(-\delta(x'-y'))(-i\delta(y'-y))\)\\
&=\delta(x-y)
\ee
as expected, recalling that the inverse of the matrix $C^{-1}_{a,b}$ is defined as
\be \label{InC}
\int dx' C_{a_x,b_x'}C^{-1}_{b_x',c_y}=\delta_{ac}\delta(x-y).
\ee
Now we consider the first-order deformed theory with the action
\be
&\mathcal{L}=\mathcal{L}_0+\lambda \mathcal{L}_1,\\
&\mathcal{L}_1=(\partial_x\phi^\ast \partial_x\phi)^2+\frac{1}{2}\(i\phi \partial_t\phi (\partial_x\phi^\ast)^2-i\phi^\ast\partial_t\phi^\ast(\partial_x\phi)^2\).\label{FirstLag}
\ee
Correspondingly the two constraints become
\be
&\chi_1=\Pi-\frac{i}{2}\phi^\ast-\frac{i\lambda}{2}\phi (\partial_x\phi^\ast)^2,\\& \chi_2=\Pi^\ast+\frac{i}{2}\phi+\frac{i\lambda}{2}(\phi^\ast (\partial_x\phi)^2).
\ee
From them we can compute
\be
[\chi_1(x),\chi_2(y)]&\equiv C_{1_x,2_y}=-C_{2_y,1_x}\\
&=\delta(x-y)+\lambda G(x,y),
\ee
where
\be
G(x,y)=\partial_y\delta(x-y)\phi^\ast(y)\partial_y\phi(y)+\partial_x\delta(x-y)\partial_x\phi^\ast(x)\phi(x).
\ee
The inverse of the matrix $C_{1_x,2_y}$, up to first order in $\lambda$, is simply
\be
C_{1_x,2_y}^{-1}=-\delta(x-y)+\lambda G(y,x).
\ee
Then, at first order in $\lambda$, the non-vanishing commutation relation between $\phi$ and $\phi^\ast$ is
\be
[\phi(x),\phi^\ast(y)]=0-C_{1_x,2_y}^{-1}=\delta(x-y)-\lambda G(y,x).
\ee
This implies that $\phi$ and $\phi^\ast$ are no longer canonical variables, so $\phi^\ast$ is not the creation operator generating the Fock space. This reflects the fact that the $T\bar{T}$-deformation is equivalent to a dynamical coordinate transformation.
At the leading order we can introduce the canonical variables
\be\label{NewF}
\varphi=\phi+\frac{\lambda }{2} \phi^\ast (\partial_x \phi)^2,\quad \varphi^\ast=\phi^\ast+\frac{\lambda }{2} (\partial_x \phi^\ast)^2 \phi ,\ee
and conversely
\be
\phi=\varphi-\frac{\lambda}{2} \varphi^\ast(\partial_x\varphi)^2,\quad \phi^\ast=\varphi^\ast-\frac{\lambda}{2}\varphi(\partial_x\varphi^\ast)^2,
\ee
such that
\be
\langle x_1 x_2|x_3 x_4\rangle=\langle \varphi_1 \varphi_2 \varphi_3^\ast \varphi_4^\ast \rangle=\delta(x_{13})\delta(x_{24})+\delta(x_{14})\delta(x_{23}).
\ee
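The inversion above holds to first order in $\lambda$, which can be spot-checked with sympy (our check): substituting $\phi[\varphi,\varphi^\ast]$ back into \eqref{NewF} must return $\varphi$ up to $\mathcal{O}(\lambda^2)$ terms.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
v = sp.Function('varphi')(x)
vs = sp.Function('varphis')(x)

# inverse relations: phi, phi^* in terms of the canonical fields
phi = v - lam/2*vs*sp.diff(v, x)**2
phis = vs - lam/2*v*sp.diff(vs, x)**2

# substitute back into varphi = phi + (lambda/2) phi^* (d_x phi)^2
varphi_check = phi + lam/2*phis*sp.diff(phi, x)**2

# the O(lambda) term must cancel
first_order = sp.expand(varphi_check - v).diff(lam).subs(lam, 0)
assert sp.simplify(first_order) == 0
```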
The relations \eqref{NewF} do not constitute a canonical transformation, and they are \textit{not} the state flow which was introduced in \cite{Kruthoff:2020hsi}. In terms of the new canonical variables, the Hamiltonian density \eqref{HT00} up to first order reads
\be \label{H1b}
T_{00}=\mathcal{H}\equiv H_0+ H_1+\mathcal{O}(\lambda^2)
\ee
with
\be
H_0=&\partial_x\varphi\partial_x\varphi^\ast\\
H_1=&-2\lambda (\partial_x\varphi \partial_x\varphi^\ast)^2\\&-\lambda\(\varphi\partial_x\varphi \partial_x\varphi^\ast \partial_x^2\varphi^\ast+\varphi^\ast \partial_x\varphi^\ast\partial_x\varphi\partial_x^2\varphi \).
\ee
As a consistency check we also work out the first-order deformation of the number operator and momentum operator
\be
N&=-\frac{i}{2} \( \frac{\partial \mathcal{L}}{\partial \partial_t \phi}\phi- \frac{\partial \mathcal{L}}{\partial \partial_t \phi^\ast}\phi^\ast\)\\&=\phi^\ast \phi+\frac{\lambda}{2}\({\phi^\ast}^2(\partial_x\phi)^2+(\partial_x\phi^\ast)^2\phi^2 \)\\
&=\varphi^\ast \varphi+\mathcal{O}(\lambda^2),
\ee
\be
P&=\frac{\partial \mathcal{L}}{\partial \partial_t \phi}\partial_x\phi+\frac{\partial \mathcal{L}}{\partial \partial_t \phi^\ast}\partial_x\phi^\ast\\&=\frac{i}{2}(\partial_x\phi \phi^\ast-\partial_x \phi^\ast\phi)(1-\lambda \, \partial_x\phi\partial_x\phi^\ast),\quad\\
&=\frac{i}{2}(\partial_x\varphi \varphi^\ast-\partial_x \varphi^\ast\varphi)+\text{total derivatives}+\mathcal{O}(\lambda^2)
\ee
which are indeed undeformed as expected. We expect that the spectrum of \eqref{H1b} can be derived by performing the second quantization and applying the standard quantum perturbation theory.
\subsection{Two-particle sector without momentum}
The explicit form of the Hamiltonian \eqref{H1b} implies that there is only two-to-two scattering, so without loss of generality we can focus on the two-particle sector. When the total momentum is zero, the $T\bar{T}$-deformed spectrum satisfies a very simple equation \cite{BoseJ}
\be
E_N(R,\lambda)=E_N(R+\lambda E_N,0).
\ee
To illustrate the procedure of our calculation, we start from the two-particle sector with zero momentum, $u_1=-u_2=u>0$. In this case, a general two-particle eigenfunction is
\be\label{eiginfunction0}
\psi=\frac{\sqrt{2}}{R} \cos\(u(x_1-x_2)\),
\ee
with
\be\label{Betheroot}
u_I=\frac{2\pi}{R}I,\quad I\in\mathbf{Z}^+.
\ee
First we compute the matrix elements of the normal-ordered Hamiltonian \eqref{H1b} of the field theory in position space
\be
\langle \vec{y}|H_0+ H_1 |\vec{x}\rangle.
\ee
For example the undeformed Hamiltonian density $\mathcal{H}_0$ is evaluated to be
\be
\langle \vec{y}|\mathcal{H}_0|\vec{x}\rangle&=\langle0|\varphi(y_1)\varphi(y_2) \partial_x\varphi^\ast \partial_x\varphi \varphi^\ast(x_1)\varphi^\ast(x_2)|0\rangle\\
&=\partial_x\delta(x-x_1)\partial_x\delta(x-y_2)\delta(y_1-x_2)\\&+\partial_x\delta(x-x_2)\partial_x\delta(x-y_2)\delta(y_1-x_1)\\
&+\partial_x\delta(x-x_1)\partial_x\delta(x-y_1)\delta(y_2-x_2)\\&+\partial_x\delta(x-x_2)\partial_x\delta(x-y_1)\delta(y_2-x_1).\\
\ee
Then we use the wavefunction \eqref{Tran} to obtain the Hamiltonian
\be\label{cal1}
H_{0II}
&\equiv \langle u_I|H_0 |u_I\rangle=\frac{1}{2}\int d\vec{y\,}d\vec{x}\,dx \,\psi_I^\ast (\vec{y})\psi_I(\vec{x})\langle \vec{y}|H_0|\vec{x}\rangle\\
&=\int d\vec{x}\,\psi^\ast_I(\vec{x})(-\partial^2_{x_1}-\partial^2_{x_2})\psi_I(\vec{x})\\
&=2 u_I^2\equiv \gamma I^2,
\ee
where $\gamma=\frac{8\pi^2}{R^2}$.
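The diagonal element can be spot-checked by doing the integral explicitly (our check; we set $R=1$ and $I=1$, for which $\gamma I^2=8\pi^2$):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
R = 1                          # system size (set to 1 for the check)
u = 2*sp.pi/R                  # Bethe root u_I with I = 1

psi = sp.sqrt(2)/R*sp.cos(u*(x1 - x2))

# <psi| -d_{x1}^2 - d_{x2}^2 |psi> on [0, R]^2
Hpsi = -sp.diff(psi, x1, 2) - sp.diff(psi, x2, 2)
E0 = sp.integrate(psi*Hpsi, (x1, 0, R), (x2, 0, R))

# gamma I^2 = 8 pi^2 / R^2 at I = 1, R = 1
assert sp.simplify(E0 - 8*sp.pi**2) == 0
```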
Similarly we find
\be \label{H1pmn}
H_{1IJ}=-\frac{\lambda}{2} \left(\frac{\sqrt{2}}{R}\right)^2\left(8 u_I^2 u_J^2 R\right)=-2\lambda \frac{\gamma^2 I^2 J^2}{R}.
\ee
The result $H_{1II}=-\frac{2\lambda}{R} H_{0II}^2$ coincides with the subleading term in the $\lambda$ expansion of the \textit{exact} spectrum:
\be \label{Exact}
&E_N(R,\lambda)=E_N^{(0)}-\frac{2\lambda}{R}{E_N^{(0)}}^2+\frac{7\lambda^2}{R^2}{E_N^{(0)}}^3-\frac{30\lambda^3}{R^3}{E_N^{(0)}}^4\\&+\frac{143 \lambda^4}{R^4}{E_N^{(0)}}^5-\frac{728\lambda^5}{R^5}{E_N^{(0)}}^6\dots
\ee
which is solved from the flow equation in \cite{BoseJ}.
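These coefficients can be generated directly from the zero-momentum relation $E_N(R,\lambda)=E_N(R+\lambda E_N,0)$ quoted earlier, using $E_N^{(0)}(R)=c/R^2$ and iterating the implicit relation as a power series (a sympy sketch of ours):

```python
import sympy as sp

lam, R, c = sp.symbols('lambda R c', positive=True)
order = 6

# solve E = c/(R + lam*E)^2 iteratively as a power series in lam;
# each iteration is exact to one more order in lam
E = c/R**2
for _ in range(order):
    E = sp.series(c/(R + lam*E)**2, lam, 0, order).removeO()
E = sp.expand(E)

# read off the coefficients of lam^n * E0^(n+1) / R^n with E0 = c/R^2
E0 = c/R**2
coeffs = [sp.cancel(E.coeff(lam, n)*R**n/E0**(n + 1)) for n in range(order)]
assert coeffs == [1, -2, 7, -30, 143, -728]
```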
\subsection*{Rayleigh-Schr\"{o}dinger perturbation}
The higher-order perturbative results can be obtained by using the Rayleigh-Schr\"{o}dinger perturbation:
\be \label{RSE}
E^{(2)}_I=&I^4(-2\lambda)^2 \frac{\gamma^3}{R^2} g_I(1),\\
E^{(3)}_I=&I^4(-2\lambda)^3 \frac{\gamma^4}{R^3}\(g_I(1)^2-g_I(2)\)\\
E^{(4)}_I=&I^4(-2\lambda)^4 \frac{\gamma^5}{R^4}\(g_I(1)^3-3g_I(1)g_I(2)+g_I(3)\)\\
E^{(5)}_I=&I^4(-2\lambda)^5 \frac{\gamma^6}{R^5}\(g_I(1)^4-6g_I(1)^2g_I(2)\right.\\&\left.+2g_I(2)^2+4g_I(1)g_I(3)-g_I(4)\)\\
&....\nonumber
\ee
where for convenience we have introduced the function
\be
g_I(a)\equiv \sum_{J\neq I}^{\infty}\frac{I^{4(a-1)}J^4}{(I^2-J^2)^a}.
\ee
The functions $g_I(1)$ and $g_I(2)$ are divergent, and proper regularization has to be adopted. The first few terms of $g_I(a)$ are given by
\be
&g_I(1)=\frac{7I^2}{4},\quad g_I(2)=-\frac{11I^4}{16}+\frac{\pi^2I^6}{12},\\& g_I(3)=-\frac{I^6}{32}-\frac{5\pi^2I^8}{48},\\
&g_I(4)=-\frac{3I^8}{256}+\frac{\pi^2I^{10}}{96}+\frac{\pi^4I^{12}}{720},....
\ee
where we have used the Dirichlet regularization to regularize $g_I(1)$ and $g_I(2)$. For example, to compute $g_I(1)$ we first separate it into a divergent piece and a convergent piece,
\be
g_I(1)=\sum_{J\neq I}^{\infty}\([-I^2-J^2]+\frac{I^4}{I^2-J^2}\)
\ee
then regularize the divergent summation with the Dirichlet regularization.
Substituting the regularized functions into \eqref{RSE}, we obtain the higher-order quantum corrections to the spectrum:
\be\label{TwoSpectrum}
&E_I^{(2)}=I^6\lambda^2\frac{\gamma^3}{R^2}7,\quad E_I^{(3)}=I^{8}(-\lambda)^3 \frac{\gamma^4}{R^3}\(30-\frac{2\pi^2I^2}{3}\),\\
&E^{(4)}_I=I^{10}(-\lambda)^4 \frac{\gamma^5}{R^4}\({143}-\frac{26\pi^2I^2}{3}\),\\
&E^{(5)}_I=I^{12}(-\lambda)^5 \frac{\gamma^6}{R^5}\(728-\frac{2\pi^4I^4-400\pi^2I^2}{5}\),....
\ee
Surprisingly, these expressions match \eqref{Exact} exactly once all the transcendental terms are discarded. It was shown in \cite{Zamolodchikov:1991vx,Caselle:2013dra} that the appearance of transcendental terms signals contributions from irrelevant operators other than $T\bar{T}$, so they should be discarded. Following this rule, we can use the Brillouin-Wigner perturbation theory to obtain the exact spectrum, which coincides with the one solved from the flow equation, as we show below.
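The claimed match of the non-transcendental parts can be verified mechanically (our check): build $E_I^{(2)},\ldots,E_I^{(5)}$ from the regularized $g_I(a)$, strip the overall factor $\lambda^n\gamma^{n+1}I^{2n+2}/R^n$, and set $\pi\to0$ in the remainder coming from the $g_I$'s:

```python
import sympy as sp

lam, R, I = sp.symbols('lambda R I', positive=True)
gamma = 8*sp.pi**2/R**2

# regularized sums g_I(a) as listed above
g1 = 7*I**2/4
g2 = -11*I**4/16 + sp.pi**2*I**6/12
g3 = -I**6/32 - 5*sp.pi**2*I**8/48
g4 = -3*I**8/256 + sp.pi**2*I**10/96 + sp.pi**4*I**12/720

# Rayleigh-Schroedinger corrections, eq. (RSE)
E = {2: I**4*(-2*lam)**2*gamma**3/R**2*g1,
     3: I**4*(-2*lam)**3*gamma**4/R**3*(g1**2 - g2),
     4: I**4*(-2*lam)**4*gamma**5/R**4*(g1**3 - 3*g1*g2 + g3),
     5: I**4*(-2*lam)**5*gamma**6/R**5*(g1**4 - 6*g1**2*g2 + 2*g2**2
                                        + 4*g1*g3 - g4)}

# non-transcendental coefficients must match the exact spectrum
rational = []
for n in (2, 3, 4, 5):
    r = sp.cancel(E[n]*R**n/(lam**n*gamma**(n + 1)*I**(2*n + 2)))
    rational.append(sp.expand(r).subs(sp.pi, 0))
assert rational == [7, -30, 143, -728]
```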
\subsection*{Exact form: Brillouin-Wigner perturbation}
\noindent
A technical introduction to the Brillouin-Wigner perturbation can be found in Appendix C. The exact energy satisfies
\be
E_k=E_k^{(0)}+\sum_{i=1}^{\infty}\frac{\prod_{j=0}^{i-1}H_{1m_j m_{j+1}}}{\prod_{j=1}^{i-1}(E_{k}-E^{(0)}_{m_j})},\quad m_0=m_i=k.
\ee
In our case, $E_n^{(0)}=\gamma n^2,\ \ H_{1n m}=-2\lambda \frac{\gamma^2}{R}n^2 m^2$ so that
\be
E_k=\gamma k^2+\gamma k^4 \sum_{i=1}^{\infty} \(-\frac{2\gamma \lambda}{R}\)^i g(k,b_k)^{i-1}
\ee
where $b_k=E_k / \gamma$ and
\be
g(k,b_k)&=\sum_{m\neq k,m=1}^{\infty}\frac{m^4}{b_k-m^2}=\frac{k^4}{k^2-b_k}+\frac{1}{2}b_k^{3/2}\pi \cot(\pi\sqrt{b_k}).
\ee
The equation of the energy can be recast into the following form
\be
&b_k=k^2-\frac{\beta k^4}{1+\beta \(\frac{k^4}{k^2-b_k}+\frac{1}{2}b_k^{3/2}\pi \cot(\pi\sqrt{b_k})\)}\\
\ee
where $\beta=\frac{\gamma \lambda}{R}$. As in the last subsection, if we discard the $\pi$-dependent terms this result will match the exact result found in \cite{BoseJ}.
Notice that we must do this carefully. Naively, it seems necessary to throw away the term $\pi \cot(\pi\sqrt{b_k})$ completely. However, because $b_k=k^2+O(\lambda)$, we can expand the function and find
\be
&\pi \cot(\pi\sqrt{b_k})=\pi \cot(\pi(\sqrt{b_k}-k))\\&=\frac{1}{\sqrt{b_k}-k}+(\mbox{$\pi$-dependent terms}).
\ee
Then the equation of the energy becomes
\be
\frac{b_k}{k^2}=1-\frac{\beta k^2}{1+\beta k^2 \(\frac{1}{1-b_k/k^2}+\frac{1}{2}\frac{(b_k/k^2)^{3/2}}{\sqrt{b_k/k^2}-1}\)}.\\
\ee
This is an algebraic equation, and is equivalent to
\be\label{burger}
\tilde{\beta}_k\tilde{b}_k^{3/2}+\sqrt{\tilde{b}_k}-1=0
\ee
where
\be
\tilde{\beta}_k=\beta k^2/2=\frac{\gamma \lambda k^2}{2R},\hspace{3ex}\tilde{b}_k=\frac{E_k}{\gamma k^2}.
\ee
Multiplying \eqref{burger} by $\tilde{\beta}_k\tilde{b}_k^{3/2}+\sqrt{\tilde{b}_k}+1$, we get
\be
\tilde{\beta}_k^2\tilde{b}_k^{3}+2\tilde{\beta}_k\tilde{b}_k^{2}+\tilde{b}_k-1=0,
\ee
which is the Burgers' equation in \cite{BoseJ}. A solution of \eqref{burger} is simply
\be
\tilde{b}_k=\frac{2}{3 \tilde{\beta}_k }\[\cosh \left(\frac{2}{3}{\rm arcsinh}\left(\frac{3 \sqrt{3 \tilde{\beta}_k }}{2}\right)\right)-1\].
\ee
Expanding the above relation, we can reproduce the perturbative results in the last subsection.
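As a numerical spot-check of ours, the cosh expression above is an exact root of the cubic $\tilde{\beta}_k^2\tilde{b}_k^{3}+2\tilde{\beta}_k\tilde{b}_k^{2}+\tilde{b}_k-1=0$; evaluating the residual at a sample value of $\tilde{\beta}_k$ to high precision:

```python
import sympy as sp

bt = sp.Rational(1, 7)    # sample value of beta_tilde_k
b = 2/(3*bt)*(sp.cosh(sp.Rational(2, 3)
                      *sp.asinh(3*sp.sqrt(3*bt)/2)) - 1)

# residual of  bt^2 b^3 + 2 bt b^2 + b - 1 = 0
residual = bt**2*b**3 + 2*bt*b**2 + b - 1
assert abs(sp.N(residual, 50)) < 1e-40
```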
\subsection{Two-particle sector with momentum}
In this subsection, we consider the two-particle sector with momentum. The general two-particle eigenfunction is
\be \label{wave2}
\psi=\frac{1}{\sqrt{2}R}\(e^{i u_1 x_1+i u_2 x_2}+e^{i u_1 x_2+i u_2 x_1}\),
\ee
with zero-order eigenvalue
\be
E^{(0)}_{u_1,u_2}=u_1^2+u_2^2.
\ee
When $u_1\neq \pm u_2$, the eigenvalues have a four-fold degeneracy $E^{(0)}_{u_1,u_2}=E^{(0)}_{-u_1,u_2}=E^{(0)}_{u_1,-u_2}=E^{(0)}_{-u_1,-u_2}$. When $u_1=u_2$, the degeneracy is two-fold: $E^{(0)}_{u_1,u_2}=E^{(0)}_{-u_1,-u_2}$. The degenerate spectrum is an obstacle to applying the flow equation \eqref{floweqn} to derive the deformed spectrum, because the factorization formula fails. However, since the total momentum is conserved, one can focus on a sector with fixed total momentum $u=u_1+u_2$. Within this sector the spectrum is no longer degenerate, and the flow equation still works. The flow equation can be solved by a formal series expansion in $\lambda$. The first few terms of the deformed spectrum are given by\footnote{There is a typo in equation $(122)$ of \cite{BoseJ}: $E_N^{(1)}=\frac{1}{R}(-2M_2^2+2M_1M_3)$.}
\be
&\mathcal{E}_{u_1,u_2}^{(1)}=\frac{\lambda}{R}2u_1u_2(u_1-u_2)^2,\\
& \mathcal{E}_{u_1,u_2}^{(2)}=-\frac{\lambda^2}{R^2}2u_1u_2(u_1-u_2)^2(u_1^2-5u_1u_2+u_2^2),\\
&\mathcal{E}_{u_1,u_2}^{(3)}=\frac{\lambda^3}{R^3}2u_1u_2(u_1-u_2)^2(u_1^2-10u_1u_2+u_2^2)\\&~~~~~~~~~~~~(u_1^2-3u_1u_2+u_2^2),\\
&\dots
\ee
Below we will try to compute the deformed spectrum within quantum perturbation theory.
In the basis of the wavefunction \eqref{wave2}, we find
\begin{eqnarray}
\lefteqn{\langle u_1,u_2|H_1|u_3,u_4\rangle}\notag\\
&=&-\frac{\lambda}{R}\delta(u_1+u_2-u_3-u_4)\notag\\
&&\times\(8u_1u_2u_3u_4-(u_1+u_2)^2(u_1u_2+u_3u_4)\),
\end{eqnarray}
where the Dirac delta function is due to the momentum conservation.
In particular we have
\be \label{TwoMoH1mn}
E_{u_1,u_2}^{(1)}= \langle u_1,u_2|H_1|u_1,u_2\rangle=\frac{\lambda}{R}2u_1u_2(u_1-u_2)^2=\mathcal{E}_{u_1,u_2}^{(1)}.
\ee
The second-order correction is given by a divergent summation\footnote{Without loss of generality we have assumed $u_1\geq u_2$.}
\begin{widetext}
\be \label{TwoMoH2}
E_{u_1,u_2}^{(2)}&=\sum_{u_3,u_4} \frac{|\langle u_1, u_2|H_1|u_3,u_4\rangle|^2}{u_1^2+u_2^2-u_3^2-u_4^2}\\
&=\frac{\lambda^2}{R^2}\sum_{u_3 \neq u_1,u_3=-\infty}^{\infty}\frac{\(-8u_1u_2(u_1+u_2-u_3)u_3+(u_1+u_2)^2(u_1u_2+(u_1+u_2-u_3)u_3)\)^2}{2(u_1-u_3)(u_3-u_2)}.
\ee
\end{widetext}
As before, to regularize it, we first separate out the divergent piece by a Taylor expansion in $1/u_3$
\be
\eqref{TwoMoH2}=&\sum_{u_3=-\infty}^{\infty} [D_0(u_1,u_2)+D_1(u_1,u_2)u_3+D_2(u_1,u_2)u_3^2] \\
&-(D_0(u_1,u_2)+D_1(u_1,u_2)u_1+D_2(u_1,u_2)u_1^2)\\&+\sum_{u_3 \neq u_1,u_3=-\infty}^{\infty} C(u_1,u_2,u_3),\ee
with
\be
&C(u_1,u_2,u_3)=\frac{2u_1^2(u_1-u_2)^4 u_2^2}{(u_1-u_3)(u_3-u_2)}.
\ee
Then using the Dirichlet regularization
\be
\sum_{u_3=-\infty}^{\infty} 1=\sum_{u_3=-\infty}^{\infty} u_3=\sum_{u_3=-\infty}^{\infty} u_3^2=0,
\ee
we find the final result of $E_{u_1,u_2}^{(2)}$
\be
\mathcal{E}_{u_1,u_2}^{(2)}=&\frac{\lambda^2}{R^2}\left(0-\(2u_1(u_1-u_2)^2u_2(u_1^2-6u_1u_2+u_2^2)\)\right.\\&\left.-2u_1^2(u_1-u_2)^2u_2^2\right).
\ee
Similarly, we can read off the third-order correction
\be
E_{u_1,u_2}^{(3)}=\mathcal{E}_{u_1,u_2}^{(3)}-\pi^2\frac{2\lambda^3}{3R^3}u_1^3u_2^3(u_1-u_2)^4,
\ee
which reduces to \eqref{TwoSpectrum} by taking $u_2=-u_1$. The calculation of higher-order corrections is much more involved, but the features are very similar, so we will not present the analysis here.\par
\subsection{Three-particle sector}
To analyze the spectrum of the three-particle sector one has to include the second-order deformation. However, as we discussed before, after including the second-order deformation the theory either does not admit a perturbative description or has ambiguities. The details of this analysis can be found in Appendix \ref{second}. In this subsection we only compute the spectrum up to the first-order deformation.
The eigenfunction of the free three-particle theory with $I=(I_1, I_2, I_3)$ is given by
\be
\psi_I(x_1,x_2,x_3)=\frac{1}{\sqrt{3!R^3}}\sum_{\sigma\in S}e^{i\sum_{j=1}^3 x_j u_{I_{\sigma_j}}}.
\ee
As before one can find the undeformed Hamiltonian in the position space
\be
&\langle x_1,x_2,x_3|H_0|y_1,y_2,y_3\rangle\\&=\sum_{i,j\in S}\int \,d x \partial_x \delta(x-x_{i_1})\partial_x \delta(x-y_{j_1})\delta(x_{i_2}-y_{j_2})\delta(x_{i_3}-y_{j_3})
\ee
and in the momentum space
\be
\langle u_I|H_0|u_J\rangle=\(\frac{2\pi}{R}\)^2\sum_{i,j\in S} I_{i_1} J_{j_1}\delta^{I_{i_1}}_{J_{j_1}}\delta^{I_{i_2}}_{J_{j_2}}\delta^{I_{i_3}}_{J_{j_3}},
\ee
where $S$ denotes the permutation group of the three particles $(1,2,3)$, and $u_{I_1}=\frac{2\pi}{R}I_1$. Therefore the zeroth-order energy is given by\footnote{Here we assume that $I_1> I_2 >I_3$.}
\be
E_I^{(0)}=\langle u_I|H_0|u_I\rangle=\(\frac{2\pi}{R}\)^2(I_1^2+I_2^2+I_3^2).
\ee
Similarly, we can get the first-order result,
\be
&\langle x_1,x_2,x_3|H_1|y_1,y_2,y_3\rangle=\\&\sum_{i,j\in S}\int \,d x \delta(x-x_{i_1}) \delta(x-x_{i_2}) \delta(x-y_{j_1}) \delta(x-y_{j_2})\delta(x_{i_3}-y_{j_3})\\
&~~~\(-2\partial_{x_{i_1}}\partial_{x_{i_2}}\partial_{y_{j_1}}\partial_{y_{j_2}}-\partial_{x_{i_1}}\partial_{x_{i_2}}^2\partial_{y_{j_1}}-\partial_{x_{i_1}}\partial_{y_{j_1}}\partial_{y_{j_2}}^2\)
\ee
and
\be
&\langle u_I|H_1|u_J\rangle\\
&=\frac{1}{R}\(\frac{2\pi}{R}\)^4\sum_{i,j\in S}\delta^{I_{i_3}}_{J_{j_3}}\(-2 I_{i_1}I_{i_2}J_{j_1}J_{j_2}+I_{i_1}I_{i_2}^2 J_{j_1}+I_{i_1}J_{j_1}J_{j_2}^2\)
\ee
The first order correction of the energy is given by
\be
E^{(1)}_I=&\lambda \langle u_I|H_1|u_I\rangle \\=&\lambda \frac{1}{R}\(\frac{2\pi}{R}\)^4\left(2I_{1}I_{2}(I_{1}-I_{2})^2\right.\\&\left.+2I_{2}I_{3}(I_{2}-I_{3})^2+2I_{3}I_{1}(I_{3}-I_{1})^2\right)
\ee
It is the same as the one in \cite{BoseJ}.\par
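As a small numerical sanity check of the pairwise structure of $E^{(1)}_I$, entirely ours and with the overall factor $\lambda\frac{1}{R}\(\frac{2\pi}{R}\)^4$ stripped off, the mode-number part is invariant under relabelling the particles and reduces to the two-particle combination when one mode number vanishes:

```python
from itertools import permutations

def e1_bracket(I1, I2, I3):
    # mode-number part of E^(1)_I; the physical energy carries an extra
    # factor (lambda / R) * (2*pi / R)**4
    return 2 * (I1 * I2 * (I1 - I2)**2
                + I2 * I3 * (I2 - I3)**2
                + I3 * I1 * (I3 - I1)**2)

# symmetric under any relabelling of the three particles
assert len({e1_bracket(*p) for p in permutations((3, 1, -2))}) == 1

# with I3 = 0 only the first pair contributes, reproducing the
# two-particle combination 2 * I1 * I2 * (I1 - I2)**2
assert e1_bracket(4, -1, 0) == 2 * 4 * (-1) * (4 - (-1))**2
```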
The second-order correction of the energy should come from $\langle u_I|H_1|u_J\rangle$ and $\langle u_I|H_2|u_I\rangle$
\be
E_{I}^{(2)}=\lambda^2 \sum_{J\neq I}\frac{|\langle u_I|H_1|u_J\rangle|^2}{E_{I}^{(0)}-E_{J}^{(0)}}+\lambda^2 \langle u_I|H_2|u_I\rangle.
\ee
The first part of $E_{I}^{(2)}$ is
\begin{widetext}
\be
&\sum_{J\neq I}\frac{|\langle u_I|H_1|u_J\rangle|^2}{E_{I}^{(0)}-E_{J}^{(0)}}\\&
=\frac{1}{R^2}\(\frac{2\pi}{R}\)^6\sum_{J\neq I}\frac{1}{I_1^2+I_2^2+I_3^2-J_1^2-J_2^2-J_3^2}\sum_{i_3,j_3}\delta^{I_{i_3}}_{J_{j_3}}\(-8 I_{i_1}I_{i_2}J_{j_1}J_{j_2}+(I_{i_1}I_{i_2}+J_{j_1}J_{j_2})(I_{i_1}+I_{i_2}) (J_{j_1}+J_{j_2})\)^2\nonumber
\ee
\end{widetext}
Notice that there is no term like $\delta_{J_1}^{I_2}\delta_{J_2}^{I_3}$ due to $I_1+I_2+I_3=J_1+J_2+J_3$ and $J \neq I$. After some manipulations, we obtain
\begin{widetext}
\begin{eqnarray}
\lefteqn{\sum_{J\neq I}\frac{|\langle u_I|H_1|u_J\rangle|^2}{E_{I}^{(0)}-E_{J}^{(0)}}}\\
&=&\frac{1}{R^2}\(\frac{2\pi}{R}\)^6\left(-2I_1I_2(I_1-I_2)^2(I_1^2-5 I_1 I_2+I_2^2)-2I_1I_3(I_1-I_3)^2(I_1^2-5 I_1 I_3+I_3^2)-2I_3I_2(I_3-I_2)^2(I_3^2-5 I_3 I_2+I_2^2)\right)\nonumber
\end{eqnarray}
\end{widetext}
Matching $E_{I}^{(2)}$ with the result in \cite{BoseJ} implies that
\be
\langle u_I|H_2|u_I\rangle=&\frac{1}{R^2}\(\frac{2\pi}{R}\)^6 2 I_1 I_2 I_3 (5 I_1^3-6 I_1^2 I_2-6 I_1 I_2^2\\&+5 I_2^3-6 I_1^2 I_3+21 I_1 I_2 I_3-6 I_2^2 I_3\\&-6 I_1 I_3^2-6 I_2 I_3^2+5 I_3^3).
\ee
\section{Conclusion}
\noindent
In this work we have derived a closed form \eqref{ttbarL} of the Lagrangian of the $T\bar{T}$-deformed non-relativistic model by using perturbative computations and the method of characteristics. In addition, we computed the deformed spectrum by using two different perturbation theories. In particular, using Brillouin-Wigner perturbation theory, we managed to find the exact form of the deformed energy spectrum, which matches the one obtained in \cite{BoseJ} after an appropriate regularization.
One subtle issue in our study concerns the regularization. The perturbative expansion of the deformed energy is a summation of infinite series, which is superficially divergent. We applied the zeta-function regularization and found that the results match the ones in \cite{BoseJ} provided we discard the $\pi$-dependent terms. Similar $\pi$-dependent terms, which were referred to as terms with nonzero transcendentality, also appeared in studies of other models \cite{Zamolodchikov:1991vx, Caselle:2013dra}. They could come from perturbations of higher order in $T\bar{T}$, say $T^2\bar{T}^2$. It would certainly be interesting to understand why they appear in our perturbative analysis.
Another interesting finding is that the $T\bar{T}$-deformation of the Lagrangian and the one of the Hamiltonian are not equivalent if the theory has a degenerate Legendre transformation. The deformation may remove the degeneracy, but the resulting deformed theory would not admit a perturbative description. More importantly, we want to stress that to define the $T\bar{T}$-deformation in the Hamiltonian formalism one has to also specify how the constraints are deformed.
\section*{Acknowledgements}\noindent
We would like to thank Reiko Liu, Shi-Lin Zhu, Florian Loebbert and Sergey Frolov for valuable discussions and comments.
The work is in part supported by NSFC Grants No. 11325522, No. 11735001, No. 11947301 and No. 12047502. J. T. is also supported by the UCAS program of special research associate and by the internal funds of the KITS.
\chapter{Background}
\label{ch:background}
\renewcommand{\chapterpath}{includes/background}
The purpose of this chapter is two-fold: First, it provides the fundamental background on the aspects of machine learning most relevant to this thesis, such as the theory of generalisation and regularisation. Second, it is intended to serve as a review of the relevant and related scientific literature.
The introduction to the core aspects of machine learning in this chapter is deliberately non-exhaustive, as it is intended to provide only a sufficient background to enable the understanding of the rest of the thesis to a wider audience. The interested reader may follow the references to scientific literature provided throughout the chapter.
\section{Machine learning fundamentals}
Humans perceive the world and make sense of it in a great variety of ways: we see light as visual information, hear sounds and create music, use language to produce written text and speech and organise much of what we know into collections of numbers, to name a few. As diverse as light, sound, language and numbers are, they can all be conceptualised as \textit{data}. Thinking of what we perceive and know about the world as information that can be organised into data is a powerful formal conceptualisation that allows us to better understand and analyse the input and output we perceive and generate.
Machine learning is the discipline that studies how to automatically discover patterns from data \citep{murphy2012machinelearning}. Much as humans and other animals are able to make sense of the world out of what they perceive, machine learning aims at providing the methods to make sense out of formally defined data points, that is learning from data \citep{abu2012learningfromdata}. Machine learning has its roots in mathematics and statistical inference, but has developed as a distinct field after the development and spread of computers and computer science, which have provided the means to store and process data efficiently and automatically.
\subsection{Elements of machine learning}
\label{sec:background-elements_ml}
The fundamental component of machine learning is the data and, as motivated above, the field itself arises from the ability to map any observation of the world into data points. Formally, one can define one data point as an $M$-dimensional vector $\mathbf{x} = (x_{1}, \ldots, x_{M})$. Then, a set of $N$ observations $\{\mathbf{x}_{i}\}_{i}^{N} \in \mathcal{X}$ is said to be the \textit{input} data set. Most of the machine learning literature \citep{alpaydin2009machinelearning, abu2012learningfromdata, murphy2012machinelearning} makes a broad distinction amongst machine learning methods depending on the \textit{output} or \textit{target} data. A non-exhaustive taxonomy of the main types of machine learning methods as commonly found in the literature is the following:
\begin{itemize}
\item \textbf{Supervised learning}: in a supervised learning setting, every observation $\mathbf{x}_{i}$ is paired with an output variable $y_{i}$, also referred to as \textit{ground truth}, and thus the data set is considered $\mathcal{D} = \{(\mathbf{x}_{i}, y_{i})\}_{i}^{N}$. The goal of supervised learning is discovering the relationship between the input data $\mathbf{x}_{i}$ and the target variables $y_{i}$. Depending on the nature of $y_{i}$, the learning problem can be \textit{classification} or \textit{regression}:
\begin{itemize}
\item Classification: $y_{i} \in \{1, \ldots, C\}$ is a discrete or categorical value. The possible values of $y$ are also referred to as \textit{classes} or \textit{labels}.
\item Regression: $y_{i} \in \mathbb{R}$ is a continuous variable.
\end{itemize}
\item \textbf{Unsupervised learning}: in an unsupervised learning setting, there is no explicit access to target variables. Therefore the data set is simply $\mathcal{D} = \{\mathbf{x}_{i}\}_{i}^{N}$, and the goal is to discover patterns in the input data. A prototypical example of unsupervised learning is clustering.
\item \textbf{Reinforcement learning} is a broad class of machine learning methods, initially inspired by behavioural psychology and the concept of trial-and-error learning. Instead of a mapping between input and output variables, in reinforcement learning typically there is access to a \textit{reward} signal that might not be available for every input data point. The goal is to learn a policy that maximises the expected rewards by seeking a balance between exploration---the acquisition of new knowledge---and exploitation---the use of that knowledge to improve performance.
\end{itemize}
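To make the supervised classification setting concrete, the following toy sketch, which is purely illustrative and uses made-up data rather than anything from this thesis, fits a nearest-centroid classifier on labelled pairs $(\mathbf{x}_{i}, y_{i})$ and predicts the class of unseen inputs:

```python
# Minimal illustration of supervised classification: learn from labelled
# pairs (x_i, y_i), then predict labels for unseen inputs. The data are
# made up for the example.
data = [((1.0, 1.2), 0), ((0.8, 1.0), 0), ((3.0, 3.1), 1), ((3.2, 2.9), 1)]

def centroid(points):
    # component-wise mean of a list of equal-length tuples
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(len(points[0])))

# one centroid per class, estimated from the training pairs
centroids = {c: centroid([x for x, y in data if y == c]) for c in {y for _, y in data}}

def predict(x):
    # assign x to the class whose centroid is closest in squared distance
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist2(x, centroids[c]))

assert predict((0.9, 1.1)) == 0   # close to the class-0 examples
assert predict((3.1, 3.0)) == 1   # close to the class-1 examples
```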
While this distinction is useful, the boundaries are sometimes blurred in practice. For example, some problems often labelled as unsupervised learning could be considered particular cases of supervised learning, as we have discussed in the Introduction (Section~\ref{sec:intro-rethinking_supervised}). Most of the problems that we will address in this thesis can be defined within a supervised framework and, more specifically, classification. For these reasons, in the remainder of this section we will focus on classification, unless specified otherwise.
As introduced above, the goal of a (supervised) machine learning method is to discover the relationship between the input data and the target variables. This assumes that such relationship is determined by an underlying, unknown function $f \colon \mathcal{X} \mapsto \mathcal{Y}$. Since $f$ is a latent function, the task is to find a function $g \in \mathcal{H} \colon \mathcal{X} \mapsto \mathcal{Y}$ from a set of candidate functions $h \in \mathcal{H}$---the hypothesis set---that approximates $f$ according to a certain error or loss measure $L(h, f)$. In order to find $g$, the \textit{learning algorithm} $\mathcal{A}$ uses the available \textit{training} data $\mathcal{D} = \{(\mathbf{x}_{i}, y_{i})\}_{i}^{N} = (\mathbf{x}_{1}, y_{1}), \ldots, (\mathbf{x}_{N}, y_{N})$ to solve an optimisation problem by adjusting a set of learnable parameters $\boldsymbol{\theta}$. Hence, the task is to determine from the set of functions $h(\mathbf{x}; \boldsymbol{\theta})$, the one which best approximates the data $\mathcal{D}$.
Nonetheless, if the relationships found apply only to the training data $\mathcal{D}$, then the process could not be considered \textit{learning}, but at best memorisation. Crucially, the ultimate objective of machine learning is to learn relationships and make correct predictions beyond the observed data. This is called \textit{generalisation}. This notion of learning is only feasible in a probabilistic way. The probabilistic view introduces the important assumption that the relationship between the targets $y$ and the input $\mathbf{x}$ is not deterministic, but probabilistic and there exists an unknown, underlying joint probability distribution $P_{X,Y}(\mathbf{x}, y)$ on $\mathcal{X} \times \mathcal{Y}$---thus also a marginal input distribution $P_{X}(\mathbf{x})$ and a conditional output distribution $P_{Y|X}(y|\mathbf{x})$, in Bayesian terms. Furthermore, it is also generally assumed that the observed available data points were sampled independently from $P_{X,Y}$. A summary schematic of the main elements of supervised learning is shown in Figure~\ref{fig:background-elements_learning}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/elements_learning.pdf}
\end{center}
\caption{Schematic of the main elements of (supervised) learning. Adapted from \citet{abu2012learningfromdata}.}
\label{fig:background-elements_learning}
\end{figure}
The feasibility of learning from a mathematical and probabilistic perspective is studied by the field of statistical learning theory \citep{vapnik1995learningtheory, bousquet2003learningtheory, vonluxburg2011learningtheory}. In the next section, we introduce the important concept of generalisation in machine learning and some notions from learning theory that are relevant to this thesis.
\subsection{Theory of generalisation}
\label{sec:background-generalisation}
Generalisation is one of the most important concepts in machine learning and statistical learning theory. It refers to the idea that the ultimate goal in the learning process is not to minimise the error computed on the available training data in and of itself, but to perform well on unseen data, that is to generalise. This raises the reasonable first question of whether generalisation is possible at all. The probabilistic perspective not only provides a positive answer to the question but also the tools to analyse the generalisation guarantees of learning algorithms.
\subsubsection{Empirical risk minimisation}
In order to elaborate this idea, let us first introduce some important concepts that we will use in this section and throughout the thesis. To formally describe the problem at hand, we recall that we consider data that consist of the inputs $\mathbf{x}_{i} \in \mathcal{X}$ and the outputs or targets $y_{i} \in \mathcal{Y} = \{-1, 1\}$, that is binary classification, for simplicity of the exposition. Furthermore, we assume that the pairs $(\mathbf{x}_{i}, y_{i}) \in \mathcal{X} \times \mathcal{Y}$ are independently and identically sampled according to an unknown probability distribution $P_{X,Y}$. So as to measure the discrepancy between the target variables $y$ and the outcome of the hypotheses $h(x)$,\footnote{The hypotheses depend on both $\mathbf{x}$ and $\boldsymbol{\theta}$, that is $h(\mathbf{x}; \boldsymbol{\theta})$, but we will in general abuse notation and write simply $h(x)$, or even just $h$, for better readability.} we assume that we are given a real-valued \textit{loss function} $L(y, h(x))$. Since we are considering binary classification, the loss function will be the classification error: $L(y, h(x)) = \mathbbm{1}_{h(x) \neq y}$. Then, the \textit{risk} associated with a hypothesis $h$ is given by the expectation of the loss function, defined by the following \textit{risk functional}:
\begin{equation}
R(h) = \mathbb{E} \left[ L(y, h(x)) \right] = \int L(y, h(x))dP_{X,Y}(x, y)
\end{equation}
Ultimately, the goal of a learning algorithm is to find the optimal hypothesis $h^{*} \in \mathcal{H}$ that minimises the risk $R(h)$:
\begin{equation}
h^{*} = \argmin_{h \in \mathcal{H}} R(h)
\end{equation}
However, because the joint probability distribution $P_{X,Y}$ is unknown, it is not possible to exactly calculate $R(h)$. In practice, the risk functional is replaced by the computation of the \textit{empirical risk} on the set of $N$ available data points:
\begin{equation}
R_{N}(h) = \frac{1}{N} \sum_{i=1}^{N} L(y_{i}, h(x_{i}))
\end{equation}
and the learning algorithm chooses the hypothesis $g$ by minimising the empirical risk:
\begin{equation}
g = \hat{h} = \argmin_{h \in \mathcal{H}} R_{N}(h)
\end{equation}
This method is known as the \textit{Empirical Risk Minimisation} inductive principle (ERM) \citep{vapnik1982erm, vapnik1992erm}. The ERM principle was thoroughly studied during the 1960s--1990s by Vladimir Vapnik and Alexey Chervonenkis, as well as other scientists, and it is summarised in the book \textit{The nature of statistical learning theory} \citep{vapnik1995learningtheory}. Empirical risk minimisation is a general principle and many classical estimation methods, such as least squares regression and maximum likelihood estimation, can be formulated as realisations of the ERM principle.
Some important aspects of the theory explained in the book and studied in a number of publications are the necessary and sufficient conditions for the consistency of algorithms based on the ERM, that is, under what conditions minimising the empirical risk converges to minimising the actual risk as the number of examples tends to infinity \citep{vapnik1991necessary}, as well as the rate of convergence of learning processes based on the ERM. Here, we summarise the most important aspects and concepts of the learning theory based on the ERM.
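The ERM principle can be sketched in a few lines of code; in this illustration, with made-up one-dimensional data and a deliberately small finite hypothesis set of threshold classifiers, the learning algorithm simply returns the hypothesis with the lowest empirical $0$--$1$ risk:

```python
# Sketch of empirical risk minimisation with the 0-1 loss over a small,
# finite hypothesis set of threshold classifiers h_t(x) = sign(x - t).
# The data are illustrative, not from the thesis.
data = [(-2.0, -1), (-0.5, -1), (0.7, 1), (1.8, 1), (2.5, 1)]
thresholds = [-1.0, 0.0, 1.0, 2.0]   # the finite hypothesis set H

def h(t, x):
    # threshold classifier with parameter t
    return 1 if x > t else -1

def empirical_risk(t):
    # fraction of training points misclassified by h_t (0-1 loss)
    return sum(1 for x, y in data if h(t, x) != y) / len(data)

g = min(thresholds, key=empirical_risk)   # ERM: argmin of R_N over H
assert g == 0.0
assert empirical_risk(g) == 0.0           # the data are separable at t = 0
```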
\subsubsection{Uniform bounds}
Having outlined the notion of empirical risk minimisation and its components, we can define generalisation as the ability of a learning algorithm to achieve small risk $R(h)$. We can now return to the question of the feasibility of generalisation or the consistency of a learning process. An important result from probability theory that serves as a starting point to study the feasibility of learning is Hoeffding's inequality \cite{hoeffding1963hoeffding}. It states that for any $\varepsilon > 0$ and a set of $N$ i.i.d. random variables $Z_{1} \ldots Z_{N}$, such that $\mathbb{P} \left[a \leq Z_{i} \leq b \right] = 1 \; \forall i$, then:
\begin{equation}
\label{eq:background-hoeffdings}
\mathbb{P} \left[ \left| \frac{1}{N} \sum_{i=1}^{N}Z_{i} - \mathbb{E}\left[Z\right] \right| > \varepsilon \right] \leq 2 \exp \left( -\frac{2\varepsilon^{2}N}{(b-a)^{2}} \right)
\end{equation}
Hoeffding's inequality can be regarded as a quantitative version of the law of large numbers\footnote{The law of large numbers is a fundamental theorem from probability theory. It states that the sample average converges in probability towards the expected value as the sample size increases.} for the case when the variables are bounded. Equation~\ref{eq:background-hoeffdings} can be developed into a more useful form for our purposes, relating the risk of a hypothesis, $R(h)$, and the empirical risk, $R_{N}(h)$. For any $\delta > 0$, at least with probability $1 - \delta$:
\begin{equation}
\label{eq:background-hoeffdings_risk_single}
R(h) \leq R_{N}(h) + \sqrt{\frac{1}{2N}\log\frac{2}{\delta}}
\end{equation}
The interpretation of Equation~\ref{eq:background-hoeffdings_risk_single} is that the risk of hypothesis $h$ is bounded by the empirical risk plus a term that depends on the number of samples used to obtain the empirical risk and on the confidence $\delta$. On the one hand, this is good news as it establishes that the empirical risk is indicative of the true risk when the number of samples is large. This confirms the feasibility of learning.
Nonetheless, on the other hand, the bound in Equation~\ref{eq:background-hoeffdings_risk_single} is highly limited and of little use in practice. Essentially, it only holds for a hypothesis $h$ that is fixed before observing the samples. However, the function which will be chosen by the learning algorithm is unknown before training it on the data set. Indeed, there may well exist a function for which the empirical risk is not indicative of the true risk at all. In order to derive tighter, more useful bounds we need to take into account all the possible hypotheses $\mathcal{H}$ that the learning algorithm may choose. One of the best studied approaches is to consider \textit{uniform bounds}.
The idea is to find an upper bound of the \textit{supremum} of $R(h) - R_{N}(h)$, as that will clearly provide an upper bound on $R(g)$:
\begin{equation}
R(g) - R_{N}(g) \leq \sup_{h \in \mathcal{H}} \left[ R(h) - R_{N}(h) \right]
\end{equation}
The simplest way is to consider the disjunction of all events $\abs{R(h_{m}) - R_{N}(h_{m})} > \varepsilon$ for a finite set of hypotheses $\mathcal{H}$ of size $M$ and $m = 1, \ldots, M$. Then, by applying the union bound\footnote{$\mathbb{P}(\bigcup_{i}A_{i}) \leq \sum_{i}\mathbb{P}(A_{i})$} to the application of Hoeffding's inequality (see Equation~\ref{eq:background-hoeffdings}) to the union of the difference of the risk and the empirical risk for all hypotheses, we get that
\begin{equation}
\label{eq:background-uniform_bounds}
\begin{split}
\mathbb{P} \left[ \left| R(h) - R_{N}(h) \right| > \varepsilon \right]& \leq \sum_{m=1}^{M} \mathbb{P} \left[ \abs{R(h_{m}) - R_{N}(h_{m})} > \varepsilon \right]\\
& \leq 2M \exp(-2\varepsilon^{2}N)
\end{split}
\end{equation}
and hence, equivalent to Equation~\ref{eq:background-hoeffdings_risk_single}, for $\delta > 0$, with probability at least $1 - \delta$:
\begin{equation}
\label{eq:background-hoeffdings_risk_all}
R(g) \leq R_{N}(g) + \sqrt{\frac{1}{2N}\log\frac{2M}{\delta}}
\end{equation}
This is finally a practical error bound that guarantees that the risk of the final hypothesis $g$ found via empirical risk minimisation will be bounded by the empirical risk measured on the training set, as long as the hypothesis set is finite, that is $\abs{\mathcal{H}} = M$. A subtler implication of Equation~\ref{eq:background-uniform_bounds}, besides the fact that it leads to Equation~\ref{eq:background-hoeffdings_risk_all}, is that $R(g) \geq R_{N}(g) - \varepsilon$ also holds. Hence, the ERM principle ensures that there are not much better hypotheses than $g$ in the set $\mathcal{H}$.
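The width of the bound in Equation~\ref{eq:background-hoeffdings_risk_all} can be evaluated directly; the short sketch below, our own illustration, shows that it tightens as the sample grows and loosens only logarithmically with the size $M$ of the hypothesis set:

```python
import math

def bound_width(N, M, delta):
    # sqrt(log(2M / delta) / (2N)): the gap between R(g) and R_N(g)
    # guaranteed with probability at least 1 - delta for |H| = M
    return math.sqrt(math.log(2 * M / delta) / (2 * N))

# more data tightens the bound
assert bound_width(10_000, 100, 0.05) < bound_width(1_000, 100, 0.05)
# a larger (finite) hypothesis set loosens it, but only logarithmically:
# multiplying M by 100 does not even double the width here
assert bound_width(1_000, 10_000, 0.05) > bound_width(1_000, 100, 0.05)
assert bound_width(1_000, 10_000, 0.05) < 2 * bound_width(1_000, 100, 0.05)
```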
While the generalisation bound based on uniform deviations is a key result in statistical learning theory and it is theoretically relevant, it is only valid for learning algorithms that operate with finite hypothesis sets. For many machine learning algorithms, the hypothesis sets are infinitely large and additional theoretical results are necessary to describe their generalisation guarantees. Below we present some of the most important results.
\subsubsection{Vapnik-Chervonenkis theory}
When the hypothesis set $\mathcal{H}$ is infinite and thus the right hand side of Equation~\ref{eq:background-hoeffdings_risk_all} is unbounded, a classical approach is to use the notion of \textit{growth function}, also known as \textit{shatter coefficient} or \textit{shattering number}. The growth function $m_{\mathcal{H}}(N)$ was introduced by \citet{vapnik1971vc} and it denotes the maximal \textit{effective} size of $\mathcal{H}$ on a set of $N$ examples, that is the maximum number of ways in which $N$ data points can be classified by the function class. Formally:
\begin{equation}
\label{eq:background-growth_function}
m_{\mathcal{H}}(N) = \sup_{x_1, \ldots, x_N \in \mathcal{X}}\left| \left\{ (h(x_1), \ldots, h(x_N)) : h \in \mathcal{H} \right\} \right|
\end{equation}
For the case of binary classification that we are considering, the growth function $m_{\mathcal{H}}(N) \leq 2^N$. If the hypothesis set is capable of generating all possible dichotomies (binary labellings) of $x_1, \ldots, x_N$, then $\mathcal{H}$ is said to \textit{shatter} the data set. If no data set of size $k$ can be shattered by $\mathcal{H}$, then $k$ is said to be a \textit{break point} for $\mathcal{H}$.
One important concept in statistical learning theory is the Vapnik-Chervonenkis dimension, known as \textit{VC dimension} for short. The VC dimension of a hypothesis set $\mathcal{H}$, denoted by $d_{VC}(\mathcal{H})$ or simply $d_{VC}$, is the largest $N$ such that $m_{\mathcal{H}}(N) = 2^N$, that is the largest data set size that the hypothesis set can shatter. Hence, if $d_{VC}$ is the VC dimension of $\mathcal{H}$, then $k = d_{VC} + 1$ is a break point for $\mathcal{H}$.
It can be shown that if a hypothesis class $\mathcal{H}$ has finite VC dimension $d_{VC}$, then the growth function can be upper bounded by a polynomial:
\begin{equation}
\label{eq:background-sauer_lemma}
m_{\mathcal{H}}(N) \leq \sum_{i=0}^{d_{VC}}{\binom{N}{i}}
\end{equation}
This allows us to bound the risk of a hypothesis in terms of the empirical risk and the growth function, what is known as the \textit{VC generalisation bound}. For $\delta > 0$, with probability at least $1 - \delta$:
\begin{equation}
\label{eq:background-vc_bound}
R(g) \leq R_{N}(g) + \sqrt{\frac{8}{N}\log\left(\frac{4m_{\mathcal{H}}(2N)}{\delta}\right)}
\end{equation}
The VC generalisation bound is a key result in statistical learning theory as it establishes the feasibility of learning with infinite hypothesis sets: with enough data, all hypotheses in an infinite $\mathcal{H}$ with finite VC dimension will generalise from the empirical risk. The bound holds for all hypothesis sets, learning algorithms, input spaces, probability distributions and binary targets \citep{abu2012learningfromdata}. Such generality comes at the expense of being quite a loose bound to be used in practice.
One interpretation of the generalisation bound in Equation~\ref{eq:background-vc_bound} that we will use in this thesis is that the right hand side consists of two terms: the empirical risk and a term that is usually interpreted as a penalty for model complexity:
\begin{equation}
\label{eq:background-vc_model_complexity}
\Omega(N, \delta, \mathcal{H}) = \sqrt{\frac{8}{N}\log\left(\frac{4m_{\mathcal{H}}(2N)}{\delta}\right)} \leq \sqrt{\frac{8}{N}\log\left(\frac{4(2N)^{d_{VC}}}{\delta}\right)}
\end{equation}
$\Omega$ depends on the number of examples $N$, the confidence parameter $\delta$ and the hypothesis class $\mathcal{H}$. The bound gets tighter (better) as the number of examples increases, as the confidence constant $\delta$ increases and as the complexity of the hypothesis set decreases (lower $d_{VC}$). This form of the bound on the risk, $R(g) \leq R_{N}(g) + \Omega(N, \delta, \mathcal{H})$, is found in most methods to estimate the theoretical generalisation guarantees of learning algorithms.
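Both the polynomial bound on the growth function (Equation~\ref{eq:background-sauer_lemma}) and the complexity penalty $\Omega$ are straightforward to evaluate; the sketch below, our own illustration, checks that the bound equals $2^N$ up to $N = d_{VC}$ and that $\Omega$ behaves as described:

```python
import math

def growth_bound(N, d_vc):
    # Sauer's lemma: m_H(N) <= sum_{i=0}^{d_vc} C(N, i)
    return sum(math.comb(N, i) for i in range(d_vc + 1))

def omega(N, delta, d_vc):
    # model-complexity penalty using the polynomial bound on m_H(2N)
    return math.sqrt((8.0 / N) * math.log(4.0 * growth_bound(2 * N, d_vc) / delta))

# up to N = d_vc all dichotomies can be realised, so the bound is 2**N
assert all(growth_bound(N, 3) == 2**N for N in range(1, 4))
# beyond the VC dimension the growth is polynomial, not exponential
assert growth_bound(20, 3) < 2**20
# the penalty tightens with more data and loosens with higher d_vc
assert omega(10_000, 0.05, 3) < omega(1_000, 0.05, 3)
assert omega(1_000, 0.05, 10) > omega(1_000, 0.05, 3)
```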
\subsubsection{Rademacher complexity}
\label{sec:background-rademacher}
Given these limitations of the Vapnik-Chervonenkis theory and, in particular, of the VC dimension, other measures of complexity have been developed \citep{bartlett2002complexity}. One relatively recent, popular example is the \textit{Rademacher complexity}, which allows one to define generalisation bounds that are not restricted to binary classification and hold for any class of real-valued functions.
Let $\sigma_1, \ldots, \sigma_N$ be a set of independent random variables such that $P(\sigma_i = 1) = P(\sigma_i = -1) = \frac{1}{2}$. These are known as \textit{Rademacher variables}, hence the name of the complexity measure. As before, we consider a sample of $N$ independent data points $x_1, \ldots, x_N$ defined on $\mathcal{X}$. Now, instead of being restricted to binary classification, we let $\mathcal{F}$ be the class of real-valued functions $f \colon \mathcal{X} \mapsto \mathbb{R}$. Then, the \textit{empirical Rademacher complexity} of $\mathcal{F}$ with respect to the sample of size $N$ is defined as:
\begin{equation}
\label{eq:background-empirical_rademacher_complexity}
\hat{\mathcal{R}}_{N}(\mathcal{F}) = \mathbb{E}_{\sigma} \left[ \underset{f \in \mathcal{F}}{\mathrm{sup}} \left| \frac{1}{N} \sum_{i=1}^{N} \sigma_{i}f(x_{i}) \right| \right]
\end{equation}
where $\mathbb{E}_{\sigma}$ denotes the expectation with respect to the Rademacher variables. The \textit{Rademacher complexity}, also found in the literature as \textit{Rademacher average}, is defined as the expectation of the empirical Rademacher complexity over all data sets of size $N$ on $\mathcal{X}$:
\begin{equation}
\label{eq:background-rademacher_complexity}
\mathcal{R}_{N}(\mathcal{F}) = \mathbb{E} \left[ \hat{\mathcal{R}}_{N}(\mathcal{F}) \right]
\end{equation}
The interpretation of the Rademacher average as a complexity measure is intuitive: It is a measure of the ability of the function class $\mathcal{F}$ to fit random noise, introduced by the Rademacher variables $\sigma_i$. For a very large and complex $\mathcal{F}$, there will be a function $f$ that can fit the noise, making $\hat{\mathcal{R}}_{N}(\mathcal{F})$ larger.
For the case of binary classification that we have considered so far, in which $\mathcal{H} \subseteq \{h \colon \mathcal{X} \mapsto \mathcal{Y} = \{-1, 1\}\}$, it can be easily shown that $\hat{\mathcal{R}}_{N}(\mathcal{H}) = \frac{1}{2}\hat{\mathcal{R}}_{N}(\mathcal{F})$, using the fact that $\sigma_i$ and $\sigma_i Y_i$ have the same distribution. Finally, we can use the Rademacher complexity to bound the risk of the final hypothesis. For $\delta > 0$, with probability at least $1 - \delta$:
\begin{equation}
\label{eq:background-rademacher_risk}
R(g) \leq R_{N}(g) + \hat{\mathcal{R}}_{N}(\mathcal{H}) + \sqrt{\frac{2\log\frac{2}{\delta}}{N}}
\end{equation}
\subsection{Regularisation}
\label{sec:background-regularisation}
In Section \ref{sec:background-generalisation}, we have seen that the goal of a machine learning algorithm is to find a hypothesis $h(\mathbf{x})$ that, given some data $\mathcal{D} = \{(\mathbf{x}_{i}, y_{i})\}_{i=1}^{N}$, minimises the risk functional $R(h)$, which in turn depends on a loss function $L(y, h(\mathbf{x}))$ chosen as a criterion for the optimisation problem. We have also seen that, since it is not possible to exactly calculate the risk, in practice we optimise the empirical risk $R_{N}(h)$, which is calculated on the available training data:
\begin{equation}
\label{eq:background-argmin_empirical_risk}
g = \argmin_{h \in \mathcal{H}} R_{N}(h)
\end{equation}
This method is known as empirical risk minimisation (ERM), and in Section~\ref{sec:background-generalisation} we have summarised the theory that describes the convergence of the empirical risk to the actual risk of the model. Nonetheless, despite the importance of this theory to confirm the feasibility of learning, the bounds on the generalisation error are not always applicable in practice: they are quite loose, depend on hypothesis sets with finite VC dimension and, in general, the plain ERM principle is intended to deal with large sample sizes. In practice, learning algorithms rely on extensions of ERM, such as the principle of structural risk minimisation (SRM) \citep{vapnik1974srm}. A particular case of SRM is regularisation, a widely used technique in machine learning and one of its cornerstones \citep{poggio1990regularisation, girosi1995regularization}. In this section we review the fundamentals of regularisation and present some of its most common forms, which are relevant for this thesis.
The concept of regularisation of learning algorithms is closely related to the mathematical problem of approximating a function from sparse data, that is, finding $f \in \mathcal{F}$ such that $Af = F$. \citet{hadamard1902illposed} demonstrated that under some general circumstances this is an ill-posed problem. That is, an arbitrarily small deviation $\varepsilon$ of $F$ ($F_{\varepsilon}$ instead of $F$, where $\norm{F - F_{\varepsilon}} < \varepsilon$) can cause large deviations in the solution of the equation. Formally, minimising the functional
\begin{equation}
\label{eq:background-ill_posed_functional}
\rho(f) = \norm{Af - F_{\varepsilon}}^2
\end{equation}
is not guaranteed to provide a good approximation even if $\varepsilon$ tends to zero. This closely resembles the learning problem that we have described above, where the task is to find the function $g$ that best approximates the data $\mathcal{D}$, using the empirical risk, as summarised in Equation~\ref{eq:background-argmin_empirical_risk} and detailed in Section~\ref{sec:background-generalisation}. As a matter of fact, finding $g$ in the presence of noise is also ill-posed, as there is an infinite number of solutions. In order to find a suitable solution with access to only limited data, it is necessary to constrain the hypothesis space $\mathcal{H}$ with some a priori information, for instance assuming that the function is smooth. This is the idea of the regularisation principles discovered in the 1960s \citep{phillips1962regularisation, tikhonov1963regularisation, ivanov1976regularisation}. In particular, they found that if instead of minimising the functional $\rho(f)$ of Equation~\ref{eq:background-ill_posed_functional}, one minimises the so-called regularised functional
\begin{equation}
\label{eq:background-reg_functional}
\rho^*(f) = \norm{Af - F_{\varepsilon}}^2 + \lambda(\varepsilon)\Omega(f)
\end{equation}
where $\Omega(f)$ is the regularisation functional, and $\lambda(\varepsilon)$ is a constant that depends on the level of noise, then the sequence of solutions converges as $\varepsilon \rightarrow 0$. In our particular case of learning from data, the principles of regularisation translate into adding a similar regularisation term to the objective function. The similarity between the two problems is most obvious if we consider the mean squared error loss, instead of binary classification. In this case, the optimisation problem becomes the following:
\begin{equation}
\label{eq:background-reg_objective}
\begin{split}
g &= \argmin_{h \in \mathcal{H}}R_{reg}(h) = \argmin_{h \in \mathcal{H}}\left[ R_{N}(h) + \lambda\Omega(h) \right]\\
&= \argmin_{h \in \mathcal{H}}\left[ \sum_{i=1}^{N}(h(\mathbf{x}_{i}) - y_{i})^2 + \lambda\Omega(h) \right]
\end{split}
\end{equation}
$\Omega(h)$ is the regularisation functional or regulariser, which incorporates prior information or desired properties of the model. In general, the regulariser is chosen to encourage smooth functions. The constant $\lambda$ is the regularisation parameter, which controls the strength of the regularisation.
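For illustration, with a linear hypothesis $h(\mathbf{x}) = \boldsymbol{\theta}^{T}\mathbf{x}$ and $\Omega(h) = \norm{\boldsymbol{\theta}}_2^2$, the optimisation problem of Equation~\ref{eq:background-reg_objective} becomes ridge regression and admits a closed-form solution. The following Python sketch (illustrative only; function names are ours) shows how the regularisation parameter shrinks the solution:

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, y, lam):
    """Minimise sum_i (theta . x_i - y_i)^2 + lam * ||theta||^2 for a linear
    hypothesis, via the closed-form solution (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Noisy data generated from a known linear model.
theta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(100, 3))
y = X @ theta_true + 0.1 * rng.normal(size=100)

unreg = ridge_fit(X, y, lam=0.0)   # plain empirical risk minimisation
reg = ridge_fit(X, y, lam=10.0)    # regularised objective
# The penalty constrains the hypothesis: a stronger lambda yields a
# solution of smaller norm, trading data fit for smoothness.
assert np.linalg.norm(reg) < np.linalg.norm(unreg)
```

The trade-off discussed below is visible directly: as $\lambda$ grows, the empirical risk of the solution increases while its norm, a proxy for complexity, decreases.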
The concepts of generalisation and particularly regularisation are closely related to the widely used concept of \textit{overfitting}, that is the tendency of a learning algorithm to excessively fit the training data points, to the detriment of its generalisation. Broadly speaking, a complex hypothesis function is more likely to \textit{overfit} the training data than a simpler function. The idea of function smoothness introduced by the regularisation term can be seen as a way to counteract overfitting, in favour of better generalisation. This establishes a trade-off where ideally the learning algorithm should strike the right balance between fitting the data, that is minimising the empirical risk, and finding a smooth enough function that generalises well. This trade-off can be controlled by the value of the regularisation parameter, which is often determined through \textit{cross-validation} \citep{stone1974crossval, allen1974crossval}.
The choice of $\Omega(h)$ leads to different forms of regularisation and there is a very large body of literature on this topic. Popular choices are constraints on the norm of the parameters, which we will discuss in Section~\ref{sec:background-weight_decay} or constraints on the curvature of $h$. In modern machine learning, the concept of regularisation is very broad and regularisation is considered to be any mechanism that prevents overfitting, hence improving generalisation. In Chapter~\ref{ch:reg} we compare different forms of regularisation and discuss the distinction between implicit and explicit regularisation---key in Chapter~\ref{ch:daugreg}---and other regularisation taxonomies.
In the remainder of this section we introduce two specific regularisation techniques, weight decay and dropout, which are arguably the two most common forms of regularisation in modern neural networks.
\subsubsection{Weight decay}
\label{sec:background-weight_decay}
Weight decay is the common name used to refer to \textit{$L^2$--norm regularisation}\footnote{Note, however, that in part of the machine learning literature, the term weight decay refers to a form of regularisation in which the $L^2$--norm penalty is added directly to the update rule of gradient descent. This results in a conceptually equivalent form of regularisation, but with a slight numerical difference. See \citet{babenko2018wdvsl2} for more details.}, which is in turn a particular case of $L^p$--norm regularisation. In this section we will first review $L^p$--norm regularisation as a direct realisation of the type of regularisation described above, and then present the specific aspects of weight decay.
In the previous section we have seen that the concept of regularisation, derived from the mathematical tools for solving ill-posed problems, results in a modification of the objective function (see Equation~\ref{eq:background-reg_objective}). We will denote the regularised objective by $\hat{J}$:
\begin{equation}
\label{eq:background-reg_objective_explicit}
\hat{J}(\boldsymbol{\theta}; \mathbf{x}, y) = J(\boldsymbol{\theta}; \mathbf{x}, y) + \lambda\Omega(\boldsymbol{\theta})
\end{equation}
$L^p$--norm regularisation refers to the family of techniques which apply a penalty on the norm of the parameters:
\begin{equation}
\label{eq:background-l_p_norm}
\Omega(\boldsymbol{\theta}) = \phi(\norm{\boldsymbol{\theta}}_p) = \phi\left(\left(\sum_{i=1}^{d}\abs{\theta_i}^p\right)^{\frac{1}{p}}\right)
\end{equation}
where $\phi(\cdot)$ is an optional function applied on the norm, for example the squared function. The most commonly used $L^p$--norm penalties are $L^1$ and $L^2$ regularisation. $L^2$ regularisation is probably one of the most widely used regularisation techniques in machine learning. In deep learning, it is commonly referred to as \textit{weight decay}, but it is also known as Tikhonov regularisation \citep{tikhonov1963regularisation} or ridge regression when applied to linear regression. In this thesis, we will analyse weight decay in neural networks and compare it to other regularisation techniques in Chapter~\ref{ch:daugreg}, and we use it to regularise a logistic regression algorithm in Chapter~\ref{ch:globsal}.
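As a small numerical illustration of Equation~\ref{eq:background-l_p_norm} (the code and names are ours, for illustration only):

```python
import numpy as np

def lp_penalty(theta, p, phi=lambda n: n):
    """Omega(theta) = phi(||theta||_p): the L^p norm of the parameters,
    optionally passed through a function phi (identity by default)."""
    return phi(np.sum(np.abs(theta) ** p) ** (1.0 / p))

theta = np.array([3.0, -4.0])
assert np.isclose(lp_penalty(theta, p=1), 7.0)   # L1 norm: |3| + |-4|
assert np.isclose(lp_penalty(theta, p=2), 5.0)   # L2 norm: sqrt(9 + 16)
# Weight decay uses p=2 with phi(n) = n^2 / 2, i.e. (1/2)||theta||_2^2.
assert np.isclose(lp_penalty(theta, p=2, phi=lambda n: n**2 / 2), 12.5)
```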
The specific regularisation term typically used for weight decay is $\Omega(\boldsymbol{\theta}) = \frac{1}{2}\norm{\boldsymbol{\theta}}_2^2$, because it allows the objective function to be implemented and expressed in a convenient and efficient way, using the dot product between the vector of parameters and its transpose:
\begin{equation}
\label{eq:background-weight_decay_objective}
\hat{J}(\boldsymbol{\theta}; \mathbf{x}, y) = J(\boldsymbol{\theta}; \mathbf{x}, y) + \frac{\lambda}{2}\boldsymbol{\theta}^T\boldsymbol{\theta}
\end{equation}
Weight decay has been long used, at least since the 1980s \citep{hinton1987wd}, and widely studied, both empirically \citep{zhang2018wd} and theoretically \citep{krogh1992wd, neyshabur2015regularization}, especially in the context of neural networks. Intuitively, the mechanism provided by weight decay is to restrict the norm of the trainable parameters, by decreasing the weight vector at every iteration of a model trained with gradient descent, in the directions that do not contribute much to reducing the objective function. Relevant to this dissertation is the result by \citet{bishop1995tikhonov}, which showed that optimising a squared error loss with weight decay is equivalent to training with random noise in the inputs. In Chapter~\ref{ch:daugreg}, we will use this result to derive some theoretical insights into the comparison of weight decay and data augmentation.
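The decay mechanism can be made explicit by writing out a single gradient descent step on the objective of Equation~\ref{eq:background-weight_decay_objective}: the penalty contributes $\lambda\boldsymbol{\theta}$ to the gradient, so each update multiplicatively shrinks the weights before applying the data gradient. A minimal Python sketch (illustrative only):

```python
import numpy as np

def sgd_step_l2(theta, grad_J, lr, lam):
    """One gradient descent step on J(theta) + (lam/2) * ||theta||^2:
        theta <- theta - lr * (grad_J + lam * theta)
              = (1 - lr * lam) * theta - lr * grad_J
    i.e. the weights 'decay' by the factor (1 - lr * lam) at every step."""
    return (1.0 - lr * lam) * theta - lr * grad_J

theta = np.array([1.0, -2.0])
updated = sgd_step_l2(theta, grad_J=np.zeros(2), lr=0.1, lam=0.5)
# With a zero data gradient the weights simply shrink by (1 - 0.1 * 0.5).
assert np.allclose(updated, 0.95 * theta)
```

This makes the intuition above concrete: directions of parameter space in which the data gradient is small are dominated by the decay factor and driven towards zero.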
\subsubsection{Dropout}
\label{sec:background-dropout}
Dropout is a regularisation technique first described under this name in \citep{hinton2012dropout, srivastava2014dropout}, although it is closely related to \textit{dilution} \citep{hertz1991dilution}. It is very widely used in modern neural networks due to its simplicity and effectiveness. In practice, dropout is implemented, and hence can be described, as a method that omits every unit---parameter, feature detector, etc.---of a model with probability $p$, at every iteration of the optimisation process (training). At inference (test) time, the whole set of units is considered. While dropout can be applied to a broad class of models, it is most often used to train deep neural networks and for simplicity we will also consider neural networks in this section.
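A minimal Python sketch of the mechanism (illustrative only; it uses the \textit{inverted} formulation common in modern implementations, which rescales the kept units at training time instead of scaling the weights at inference as in the original papers):

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout(activations, p, train=True, rng=rng):
    """Inverted dropout: at training time each unit is zeroed with
    probability p and the survivors are rescaled by 1/(1-p), so that the
    expected activation matches inference time, when all units are kept."""
    if not train:
        return activations
    mask = rng.random(activations.shape) >= p  # keep with probability 1-p
    return activations * mask / (1.0 - p)

h = np.ones(10_000)
out = dropout(h, p=0.5)
# About half the units are zeroed, yet the mean activation is preserved
# in expectation thanks to the 1/(1-p) rescaling.
assert abs(out.mean() - 1.0) < 0.05
assert dropout(h, p=0.5, train=False) is h  # inference: all units kept
```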
Dropout is often described as a practical approximation of training an ensemble of models through bootstrap aggregation, commonly known as \textit{bagging} \citep{breiman1994bagging}, in which $M$ models are trained on $M$ subsets of a data set of size $N$ uniformly sampled with replacement (bootstrap sample). At inference time, the outputs of the $M$ models on each data point are averaged (for regression) or combined through majority voting (for classification). Bagging is a widely used technique, known to reduce the variance and overfitting of learning algorithms. However, it is computationally expensive as it requires training multiple models. Dropout efficiently approximates a form of bagging with an exponentially large number of sub-networks (models). Since neural networks are typically trained with mini-batch iterative methods (such as stochastic gradient descent), the parameters of the model are updated by computing the loss of a sub-network on a sub-sample of the data set. An important difference between standard bagging and dropout is that while in bagging the models are independent, with dropout the models share a subset of the parameters from the parent neural network.
\citet{srivastava2014dropout} connected dropout training with a theory by \citet{livnat2010sex} about the superiority of sexual over asexual reproduction in nature. According to this theory, a criterion for natural selection would be enhancing the robust combination of different genes for better adaptation to changes, as opposed to the optimisation of the individual fitness through a slight mutation of one parent's genes. Sexual reproduction would favour this criterion by preventing co-adaptations of the available genes in one individual. With dropout, the units of a network are forced to learn useful combinations with other subsets of random units, hence preventing co-adaptation and increasing robustness.
Dropout has greatly impacted the deep learning community\footnote{The two original papers have been cited almost 25,000 times at the time of writing, according to Google Scholar.}. It is widely used for training neural networks in both research and applications, and it has been deeply studied both empirically and theoretically \citep{gal2016dropout}, sometimes uncovering contradictory and surprising properties. While it is beyond our scope to review the vast literature on dropout training, we can mention some relevant findings. Since shortly after it was proposed, dropout has been analysed as an adaptive form of regularisation \citep{wager2013dropout}. \citet{baldi2013dropout} found that the dynamics of gradient descent with dropout training approximate that of a regularised error function, while \citet{helmbold2017dropout} showed that in deeper networks, the behaviour of dropout differs significantly from standard regularisation. More recently, \citet{mou2018dropout} derived generalisation bounds based on the Rademacher complexity for deep neural networks trained with dropout. Finally, an interesting and relevant finding for this thesis is that dropout applied to the intermediate units of a neural network has been shown to be equivalent to training with noise in the input \citep{bouthillier2015dropoutasdaug}.
\chapterbibliography
}
\chapter[Data augmentation and object representation in the brain]{Data augmentation\\and object representation in the brain}
\label{ch:daugit}
\renewcommand{\chapterpath}{includes/daug-it}
\begin{contributors}
Johannes Mehrer performed the representational similarity analysis. Nikolaus Kriegeskorte, Peter K{\"o}nig and Tim C. Kietzmann reviewed and edited the manuscript submitted to CCN.
\end{contributors}
\begin{outreach}
\item \textit{Deep neural networks trained with heavier data augmentation learn features closer to representations in hIT.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Johannes Mehrer, Nikolaus Kriegeskorte, Peter K{\"o}nig, Tim C. Kietzmann. Cognitive Computational Neuroscience (CCN), 2018.
\end{outreach}
One of the central goals of computational neuroscience is to develop better models of the human brain. The re-emergence of deep artificial neural networks, which now excel at many artificial intelligence tasks by automatically learning hierarchical representations \citep{girshick2014dlhierarchy}, has also had a positive impact on computational neuroscience. For instance, the features learnt by models trained for image object classification have been found to correlate better with the representations in the human inferior temporal cortex (hIT) than traditional hand-crafted features or shallow models \citep{khaligh2014annbrains, yamins2014annsbrains, gucclu2015annbrains}. Further, convolutional neural networks are currently the most accurate models for multiple regions across the primate visual cortex \citep{kietzmann2019dnncompneuro, yamins2016computneuro}. However, while the similarity between artificial and biological neural networks is promising, a crucial question remains: what makes neural networks learn representations that more closely mirror activations in the brain?
Delving into this question is one of the goals of this thesis because of its potential implications on our understanding of the brain and learning systems in general. Previous work has revealed that networks performing better in classification tasks correlate more strongly with neural representations in high level areas \citep{yamins2014annsbrains}. However, the network architecture seems to play a crucial role \citep{storrs2017ccn} and \citet{mehrer2017ccn} showed that training with more ecologically relevant image categories yields more similar representations. Inspired by the apparent importance of the training data, and the properties of data augmentation discussed in Chapter~\ref{ch:daugreg}, we here explore the influence of data augmentation on the representational similarity between artificial neural networks and the human inferior temporal cortex.
As we have discussed in the Introduction (Chapter~\ref{ch:intro}), the transformations included in (perceptually plausible) data augmentation schemes are inspired by the properties of visual perception. We perform translations, rotations, scaling and changes in the illumination of images (see Section~\ref{sec:daugreg-methods_data}) because these transformations are part of the variance we observe in the visual real-world. Transformations of this kind within certain ranges do not change the perceived object class and even identity. In Chapter~\ref{ch:daugreg} we have seen that applying these transformations to the training images of a neural network model is highly beneficial for generalisation. In this chapter we test the hypothesis that training with heavier data augmentation may encourage learning representations more closely aligned with those in the inferior temporal cortex.
\section{Methods}
\label{sec:daugit-methods}
This section presents the experimental setup to analyse the role of data augmentation on the similarity between artificial neural networks and neural representations in hIT. We describe the network architectures, the augmentation schemes and the methodology employed to compare both systems.
\subsection{Network architectures}
To increase the generality of our results, we analysed two distinct, well-known convolutional neural networks, which reach high-performance on image object-classification: the all convolutional network, All-CNN \citep{springenberg2014allcnn} and the wide residual network, WRN \citep{zagoruyko2016wrn}. We used the same architectures for the experiments in Chapter~\ref{ch:daugreg} and they are described in detail in Section~\ref{sec:daugreg-methods_archs}, so we here only provide a brief overview of the most important properties:
\begin{itemize}
\item \textbf{All-CNN} consists solely of convolutional layers---12 in total---each followed by batch normalisation and a ReLU activation. It has a total of 9.4 million parameters.
\item \textbf{WRN} is a modification of ResNet \citep{he2016resnet} that achieves better performance with fewer layers, but more units per layer. We chose the WRN-28-10 version of the original paper, which has 28 layers and about 36.5 million parameters.
\end{itemize}
Following the conclusions from Chapter~\ref{ch:daugreg}, we did not train the models with either weight decay or dropout, but we kept the rest of the hyperparameters as in the original papers.
\subsection{Data augmentation}
The goal of this work was to study the impact of data augmentation on the similarity of the representations with the activations in the inferior temporal cortex. For that purpose, we considered two data augmentation schemes: \textit{light} and \textit{heavier} augmentations, as described in Section~\ref{sec:daugreg-methods_data}. Below we summarise the transformations included in each scheme:
\begin{itemize}
\item The \textbf{light} augmentation scheme has been widely used in the literature, for instance \citep{springenberg2014allcnn}. It performs only random horizontal flips and horizontal and vertical translations of at most 10\% of the image size. Additionally, we performed random crops of $128\times128$ pixels.
\item The \textbf{heavier} scheme performs a larger range of random affine transformations such as scaling, rotations and shear mapping, as well as contrast and brightness adjustment and random crops.
\end{itemize}
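For illustration only, the light scheme can be sketched as follows in Python (this is not the actual training pipeline; in particular, edge handling via wrap-around is a simplification of how real pipelines pad or crop):

```python
import numpy as np

rng = np.random.default_rng(3)

def light_augment(img, max_shift=0.1, rng=rng):
    """Sketch of the 'light' scheme: a random horizontal flip and random
    horizontal/vertical translations of at most max_shift of the image
    size (the 128x128 crop step is omitted). img is an (H, W, C) array."""
    if rng.random() < 0.5:
        img = img[:, ::-1]  # random horizontal flip
    h, w = img.shape[:2]
    dy = rng.integers(-int(max_shift * h), int(max_shift * h) + 1)
    dx = rng.integers(-int(max_shift * w), int(max_shift * w) + 1)
    # Translate by rolling; wrap-around at the edges is a simplification.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

img = rng.random((150, 200, 3))
aug = light_augment(img)
# The transformation is label-preserving: same size, same object class.
assert aug.shape == img.shape
```

The heavier scheme would extend this with random affine transformations (scaling, rotation, shear) and contrast and brightness adjustments, each sampled from a wider range.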
We used these schemes to augment the highly benchmarked ImageNet ILSVRC 2012 data set \citep{russakovsky2015imagenet}. We used ImageNet instead of CIFAR-10---for instance---because its higher resolution images more closely match the stimulus statistics of the human visual system. We resized the images into $150\times200$ pixels. Examples of the light and heavier augmentations on ImageNet photos are shown in Figure~\ref{fig:daugit-daugimagenet}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 0.8 \linewidth]{\imgpath/imagenet_daug.png}
\end{center}
\caption{Illustration of the transformations performed by the light and heavier augmentation schemes on two example images. Note that the five transformations of the images in this figure have been produced by setting extreme values of the parameters, so as to highlight the characteristics of the schemes and the differences between them.}
\label{fig:daugit-daugimagenet}
\end{figure}
The performance of All-CNN and WRN trained with light and heavier augmentation is shown in Figure~\ref{fig:daugit-performance}. Note that training with light augmentation provides better results, especially on All-CNN. As pointed out in Chapter~\ref{ch:daugreg}, this is likely explained, first, by the fact that the heavier augmentation scheme was not designed to optimise classification performance, but rather as an arbitrary, larger set of plausible transformations; and second, by the limited capacity of the models---especially All-CNN---which may prevent them from exploiting the aggressive transformations of the already large ImageNet data set. Nonetheless, the objective of this study was to analyse the learnt representations given a reasonably accurate performance. Ideally, we would also analyse the representations of a model trained with no augmentation. However, the performance without data augmentation is significantly worse and this would likely impact the representations \citep{yamins2014annsbrains}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 0.7 \linewidth]{\imgpath/models_performance.png}
\end{center}
\caption{Test performance of All-CNN and WRN trained with light and heavier data augmentation.}
\label{fig:daugit-performance}
\end{figure}
\subsection{Representational similarity analysis}
In order to compare the representations learnt by the neural networks and the activations measured in the inferior temporal cortex, we made use of representational similarity analysis (RSA) \citep{kriegeskorte2008rsa, nili2014rsatoolbox}. The main advantage of RSA is that it allows direct comparisons across different model systems without having to explicitly align the different measurement types. This is accomplished by constructing representational dissimilarity matrices (RDMs) to express the pairwise similarity between stimuli, instead of directly comparing the representations of single stimuli. Across a set of input images, RDMs characterise the internal representations of a given system by storing all pairwise distances. The resulting matrix therefore expresses the representational geometry in the learnt activation space. By relying on distances, RDMs remain unchanged if the space over which they are computed is rotated.
To characterise the representations in hIT, functional magnetic resonance imaging (fMRI) was used to measure BOLD responses while 15 participants were presented with 92 images of isolated objects. The images originate from a wide variety of categories and levels of abstraction. On the broadest level, they can be separated into animate and inanimate. Inanimate objects can either be natural or artificial, whereas animate objects are divided into human stimuli---heads and body parts---and animals---full body and heads only. This fMRI data set has been used in multiple studies and the details of the data acquisition can be found in \citep{kriegeskorte2008manandmonkey}. In Figure~\ref{fig:daugit-rdms} we show the RDM of the brain data, and the RDMs of the WRN model for illustration. As in \citep{kriegeskorte2008manandmonkey}, in order to better visualise the differences across the RDM, the colour code represents the percentiles of the actual RDMs.
To compare artificial neural networks and hIT representations, the network activation profiles for the 92 images were extracted. In particular, we computed the activations at the outputs of the 12 ReLU layers of All-CNN and at the outputs of the residual blocks of WRN. We then computed the RDM of these activations using the Pearson correlation, as well as the RDM of the fMRI responses in hIT. To obtain a more compact representation of the CNN models, we combined the RDMs of all layers into a single RDM as a linear combination of the individual layer RDMs with respect to the hIT RDM using non-negative least squares and a cross-validation procedure, which avoids overfitting the image set.
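The core of this pipeline can be sketched in Python as follows (illustrative only, with random stand-in data in place of the fMRI responses and network activations; note that SciPy computes Kendall's $\tau_b$, which coincides with $\tau_A$ in the absence of ties):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import kendalltau

rng = np.random.default_rng(4)

def rdm(activations):
    """Representational dissimilarity matrix in condensed form: pairwise
    correlation distance (1 - Pearson r) between the activation patterns
    elicited by each stimulus (one row per stimulus)."""
    return pdist(activations, metric="correlation")

# Stand-in activation patterns for 92 stimuli in two systems: a proxy for
# hIT voxel responses, and a 'model' whose geometry is related to it.
hit = rng.normal(size=(92, 300))
model = hit + 0.5 * rng.normal(size=(92, 300))

# Compare the representational geometries by rank-correlating the RDMs.
tau, _ = kendalltau(rdm(hit), rdm(model))
assert tau > 0.1  # related systems yield positively correlated RDMs
```

The actual analysis additionally combines the per-layer RDMs into one via non-negative least squares, and estimates standard errors across the 15 subjects, as described above.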
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/rdms.png}
\end{center}
\caption{RDMs of the systems compared. Left: adapted from \citep{kriegeskorte2008manandmonkey}, the RDM of the human inferior temporal cortex. Centre and right, the RDMs of the WRN model, trained with light and heavier augmentation, respectively. As in \citep{kriegeskorte2008manandmonkey}, the colour code in the matrices shown in the figure represents the percentile of the dissimilarity.}
\label{fig:daugit-rdms}
\end{figure}
Finally, we characterise the similarity between the artificial neural networks and hIT by computing the Kendall's rank correlation coefficient $\tau_{A}$ between the RDM of the hIT representations and the RDM of the convolutional models. Standard errors were obtained from the similarity estimates of the 15 human subjects.
\section{Results and discussion}
\label{sec:daugit-results}
We show the results of the representational similarity analysis comparing the representations learnt by the neural networks and the fMRI data in Figure~\ref{fig:kendall}. As a main conclusion, we found that the correlation with the hIT representations is significantly higher for the models trained with heavier data augmentation. This is indicated not only by the Kendall correlation, but also by visual inspection: the RDM of the model trained with heavier augmentations appears more similar to the RDM of the human IT. For example, face images---both human and non-human---form clearer similarity clusters in the model trained with heavier data augmentation, which is a well-studied property of the primate visual cortex.
In the case of the wide residual network (WRN) the difference between the two levels of augmentation is considerably larger, while in the All-CNN models, although statistically significant ($p<0.05$), the difference is smaller. However, recall that the classification performance of the models trained with heavier augmentations is worse, especially in the case of All-CNN (Figure~\ref{fig:daugit-performance}). Therefore, it seems that even though the more aggressive transformations do not improve the classification performance, they do increase the similarity with the inferior temporal cortex.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = 0.8 \linewidth]{\imgpath/kendall.png}
\end{center}
\caption{Comparison of the Kendall's $\tau_{A}$ coefficient of the hIT RDM and the RDM of the networks trained with light and heavier data augmentation. Both on All-CNN and WRN, the correlation of the model trained with heavier transformations is significantly higher than the light counterpart. The grey shaded area indicates the maximum possible correlation of a model given the noise in the measured data.}
\label{fig:kendall}
\end{figure}
It is also interesting that the similarity with hIT is higher for All-CNN, again despite the lower performance, as opposed to previous work that indicated a correlation between performance and similarity with the visual cortex \citep{yamins2014annsbrains}. The overall lower correlation of WRN adds more evidence to the conclusion of \citet{storrs2017ccn}, who showed that residual networks exhibit a particularly low correlation with hIT compared to other architectures.
Given the exploratory nature of this study, it is not yet clear what exact mechanisms lead to the better match between representational geometries in higher level visual cortex and networks trained with heavier data augmentation. One hypothesis is that the larger variety during training may be more biologically plausible than training with constant images or very light transformations. Humans develop robust object representations based on highly variable input, while freely exploring the world. Sources of variation include different orientations, lighting conditions, backgrounds and occlusion. Eye-movements, including drifts and \textit{microsaccades}, may further contribute to the variability in the sensory input to which the recognition has to be robust. As we will further discuss in Chapter~\ref{ch:invariance}, this robustness is reflected in invariant activations towards identity-preserving transformations in the higher visual cortex.
Our experiments addressed the question as to which factors drive computational models to learn features closer to the brain representations. Given the superiority in visual robustness of the human brain, these insights may have implications for artificial vision systems based on deep neural networks, and for ANNs as a model system for visual processing in the brain. Finding that heavier data transformations lead to more IT-like representations further supports the notion that the input distribution plays a crucial role during the learning of representations in both the brain and artificial networks.
\section{Conclusions}
In this chapter, we have explored how far light and heavier augmentation of the training set can affect the internal representations of deep neural networks and their alignment with human IT. To compare the neural and model system, we used representational similarity analysis, which allows for straightforward comparisons across different modalities---in this case, fMRI BOLD signal and neural network activations. RSA revealed that the neural networks trained with heavier transformations learn representations more similar to those observed in higher visual cortex.
Future work should analyse a larger range of network architectures and data sets to gain better insights into the mechanisms driving the internal representations. It will also be interesting to study the different components of data augmentation in order to understand which particular transformations play a bigger role in better explaining hIT.
\chapterbibliography
}
\chapter[Data augmentation instead of explicit regularisation]{Data augmentation\\instead of explicit regularisation}
\label{ch:daugreg}
\renewcommand{\chapterpath}{includes/daug-reg}
\begin{outreach}
\item \textit{Data augmentation instead of explicit regularization.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Peter K{\"o}nig. arXiv preprint arXiv:1806.03852, 2018.
\item \textit{Do deep nets really need weight decay and dropout?.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Peter K{\"o}nig. arXiv preprint arXiv:1802.07042, 2018.
\item \textit{Further advantages of data augmentation on convolutional neural networks.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Peter K{\"o}nig. International Conference on Artificial Neural Networks (ICANN, Best Paper Award), 2018.
\end{outreach}
Data augmentation in machine learning refers to the techniques that synthetically create new examples from a data set by applying possibly stochastic transformations on the existing examples. In the image domain, these transformations can be, for instance, slight translations or rotations, which preserve the perceptual appearance of the original images but significantly alter the actual pixel values. Despite being an old technique \citep{abumostafa1990hints, simard1992daug} and ubiquitous in the deep learning literature and practice, data augmentation has often been regarded as a sort of \textit{cheating}, \textit{lower class} technique\footnote{As a result, the machine learning scientific community has largely ignored data augmentation as a subject of study until recently. By way of illustration, the textbook Deep Learning \citep{goodfellow2016dlbook} dedicates one and a half pages to data augmentation, of which one third is devoted to its caveats. Only in the last few years has data augmentation started to receive increasing attention, likely due to the success of some data augmentation techniques, such as \textit{cutout} \citep{devries2017cutout} and \textit{mixup} \citep{zhang2017mixup}, and especially the popularisation by Google of \textit{automatic} data augmentation \citep{cubuk2018autoaugment}, previously proposed by various university groups \citep{hauberg2016learningdaug, antoniou2017dagan, ratner2017learningdaug, lemley2017smartdaug}. We first submitted the results presented in this chapter in 2017 \citep{hergar2018daugregopenreview} and other authors have also presented surveys on data augmentation techniques \citep{perez2017dauganalysis, shorten2019daugsurvey}. 
Promisingly, data augmentation has very recently started to be analysed from a theoretical point of view as well \citep{rajput2019daug, chen2019invariance, lyle2020daug}}, which should not be used in order to assess the actual strength of a new proposal \citep{goodfellow2013maxout, graham2014fracmaxpool, larsson2016fractalnet, goodfellow2016dlbook}. A common criticism is that data augmentation usually requires domain or expert knowledge and cannot be easily generalised across data domains \citep{devries2017daugfeatspace}.
Explicit regularisation methods such as weight decay \citep{hanson1989wd} and dropout \citep{srivastava2014dropout} are also nearly ubiquitous. In contrast to data augmentation, however, they are considered intrinsic parts of the learning algorithm and have thus remained unquestioned. In Chapter~\ref{ch:reg}, we introduced the differences between explicit and implicit regularisation and raised the question of whether explicit regularisation is necessary in deep learning. In the Introduction (Chapter~\ref{ch:intro}), in turn, we discussed the view of data augmentation as a powerful inductive bias from visual perception. Building upon these insights, in this chapter we analyse the role of data augmentation in neural networks trained for image object categorisation and the need for weight decay and dropout when data augmentation is used. We first derive some theoretical insights from statistical learning theory and then present the results of a large empirical study in which we contrast the contributions of each technique.
\section{Theoretical insights}
\label{sec:daugreg-theoretical_insights}
As we have reviewed in Section~\ref{sec:background-generalisation}, the generalisation of a model class $\mathcal{H}$ can be analysed through complexity measures such as the VC-dimension or, more generally, the Rademacher complexity $\mathcal{R}_{N}(\mathcal{H}) = \mathbb{E} \left[ \hat{\mathcal{R}}_{N}(\mathcal{H}) \right]$, where:
\begin{equation}
\label{eq:daugreg-rademacher}
\hat{\mathcal{R}}_{N}(\mathcal{H}) = \mathbb{E}_{\sigma} \left[ \underset{h \in \mathcal{H}}{\mathrm{sup}} \left| \frac{1}{N} \sum_{i=1}^{N} \sigma_{i}h(x_{i}) \right| \right]
\end{equation}
is the empirical Rademacher complexity, defined with respect to a specific set of $N$ data samples. Then, in the case of binary classification and the class of linear separators, the generalisation error of a hypothesis, $\hat{\epsilon}_{N}(h)$, can be bounded using the Rademacher complexity:
\begin{equation}
\label{eq:daugreg-genbound}
\hat{\epsilon}_{N}(h) \leq \mathcal{R}_{N}(\mathcal{H}) + \mathcal{O} \left( \sqrt{\frac{\ln \sfrac{1}{\delta}}{N}} \right)
\end{equation}
with probability $1 - \delta$. Tighter bounds for some model classes, such as fully connected neural networks, can be obtained \citep{bartlett2002rademacher}, but it is not trivial to formally analyse the influence on generalisation of specific architectures or techniques. Nonetheless, we can use these theoretical insights to discuss the differences between explicit regularisation---specifically weight decay and dropout---and data augmentation.
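To make the definition concrete, the empirical Rademacher complexity of Equation~\ref{eq:daugreg-rademacher} can be approximated by Monte Carlo sampling of the Rademacher variables $\sigma_i$ for a small, finite hypothesis class. The following sketch is purely illustrative---the function name and the toy class of random sign functions are not part of our experimental setup:

```python
import numpy as np

def empirical_rademacher(H, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite hypothesis class on a fixed sample of N points.

    H: array of shape (n_hypotheses, N) with entries h(x_i) in {-1, +1}.
    Returns an estimate of E_sigma[ sup_h | (1/N) sum_i sigma_i h(x_i) | ].
    """
    rng = np.random.default_rng(seed)
    _, N = H.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=N)  # Rademacher variables
        corr = np.abs(H @ sigma) / N             # |1/N sum_i sigma_i h(x_i)| per h
        total += corr.max()                      # supremum over the class
    return total / n_draws

# Toy example: 10 random sign functions evaluated on N = 50 points.
H = np.random.default_rng(1).choice([-1.0, 1.0], size=(10, 50))
print(empirical_rademacher(H, n_draws=500))
```

Note that enlarging the class (more rows in the toy matrix) can only increase the estimate, mirroring the intuition that constraining the hypothesis class reduces $\mathcal{R}_{N}(\mathcal{H})$.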
A straightforward yet very relevant conclusion from the analysis of any generalisation bound is the strong dependence on the number of training examples $N$. Increasing $N$ drastically improves the generalisation guarantees, as reflected by the second term in the right hand side of Equation~\ref{eq:daugreg-genbound} and by the dependence of the Rademacher complexity (Equation~\ref{eq:daugreg-rademacher}) on the sample size too. Data augmentation exploits prior knowledge about the data domain and aspects of visual perception---in the case of image object recognition---to create new examples and its impact on generalisation is related to an increment in $N$, as stochastic data augmentation can generate virtually infinite different samples. Admittedly, the augmented samples are not independent and identically distributed and thus, the effective increment of samples does not strictly correspond to the increment in $N$. This is why formally analysing the impact of data augmentation on generalisation is complex. Recent studies have made progress in this direction by analysing the effect of simple data transformations on generalisation from a theoretical point of view \citep{chen2019invariance, rajput2019daug}.
Explicit regularisation methods aim, in contrast, at improving the generalisation error by constraining the hypothesis class $\mathcal{H}$ to reduce its complexity $\mathcal{R}_{N}(\mathcal{H})$ and, in turn, the generalisation error $\hat{\epsilon}_{N}(h)$. Crucially, while data augmentation exploits domain knowledge, most explicit regularisation methods only \textit{naively} constrain the hypothesis class, by simply reducing the representational capacity, as we have discussed in the previous chapter. For instance, weight decay constrains the learnable models $\mathcal{H}$ by setting a penalty on the weights norm. Interestingly, \citet{bartlett2017boundsnn} showed that weight decay has little impact on the generalisation bounds and confidence margins. Dropout has been extensively used and studied as a regularisation method for neural networks \citep{wager2013dropout}, but the exact way in which it impacts generalisation is still an open question. In fact, it has been stated that the effect of dropout on neural networks is ``somewhat mysterious'', complicated and its penalty highly non-convex \citep{helmbold2017dropout}. Recently, \citet{mou2018dropout} have established new generalisation bounds on the variance induced by a specific type of dropout on feedforward networks.
An interesting observation is that dropout can be analysed as a random form of data augmentation without domain knowledge \citep{bouthillier2015dropoutasdaug}. This implies that any generalisation bound derived for dropout can be regarded as a pessimistic bound for domain-specific, standard data augmentation. A similar argument applies to weight decay, which, as first shown by \citet{bishop1995tikhonov}, is equivalent to training with noisy examples if the noise amplitude is small and the objective is the sum-of-squares error function. Therefore, some forms of explicit regularisation are at least approximately equivalent to adding random noise to the training examples, which is the simplest form of data augmentation\footnote{Note that the opposite view---domain-specific data augmentation as explicit regularisation---does not apply. In Section~\ref{sec:reg-taxonomies} we discuss the taxonomies of regularisation, including the difference between data augmentation and data-dependent regularisation.}. Thus, it is reasonable to argue that more sophisticated data augmentation can overshadow the benefits provided by explicit regularisation.
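The equivalence shown by \citet{bishop1995tikhonov} can be verified numerically in the linear case, where it is exact: the sum-of-squares error averaged over Gaussian input noise equals the clean error plus a weight-decay-like penalty $\sigma^2 \lVert w \rVert^2$. A minimal sketch, with illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.1, size=N)  # targets
w_hat = rng.normal(size=d)     # an arbitrary linear hypothesis
sigma = 0.3                    # input-noise amplitude

def sse(w, X_, y_):
    """Mean sum-of-squares error of the linear model w on (X_, y_)."""
    return np.mean((X_ @ w - y_) ** 2)

# Loss with noisy inputs, averaged over many noise draws.
noisy = np.mean([sse(w_hat, X + rng.normal(scale=sigma, size=X.shape), y)
                 for _ in range(5000)])

# Bishop (1995): expected noisy loss = clean loss + sigma^2 * ||w||^2.
penalised = sse(w_hat, X, y) + sigma ** 2 * np.sum(w_hat ** 2)
print(noisy, penalised)  # the two values nearly coincide
```

The cross term vanishes in expectation, which is why the noise-averaged loss decomposes into the clean loss plus the quadratic penalty.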
In general, we argue that the reason why explicit regularisation may not be necessary is that neural networks are already implicitly regularised by many elements---stochastic gradient descent (SGD), convolutional layers, normalisation and data augmentation, to name a few---that provide a more successful inductive bias \citep{neyshabur2014implicitreg}. For instance, it has been shown that linear models optimised with SGD converge to solutions with small norm, without any explicit regularisation \citep{zhang2016understandingdl}. Furthermore, as discussed in Section~\ref{sec:reg-discussion}, if overparameterised neural networks are able to generalise well, the need for constraining their capacity is questionable. In the rest of the chapter we present an empirical study to contrast data augmentation and explicit regularisation---weight decay and dropout.
\section{Methods}
\label{sec:daugreg-methods}
This section describes the main aspects of the experimental setup for systematically analysing the role of data augmentation in deep neural networks compared to weight decay and dropout.
\subsection{Data}
\label{sec:daugreg-methods_data}
We performed the experiments on the highly benchmarked data sets ImageNet \citep{russakovsky2015imagenet} ILSVRC 2012, CIFAR-10 and CIFAR-100 \citep{krizhevsky2009cifar}. We resized the 1.3 M images from ImageNet into $150\times200$ pixels, as a compromise between keeping a high resolution and speeding up the training. Both on ImageNet and on CIFAR, the pixel values were mapped into the range $[0, 1]$.
So as to analyse the role of data augmentation, we trained every model with two different augmentation schemes as well as with no data augmentation at all. The two augmentation schemes are the following:
\subsubsection{\textit{Light} augmentation}
This scheme is common in the literature, for example \citep{goodfellow2013maxout, springenberg2014allcnn}, and performs only horizontal flips and horizontal and vertical translations of 10\% of the image size.
\subsubsection{\textit{Heavier} augmentation}
This scheme performs a larger range of affine transformations, such as scaling, rotations and shear mappings, as well as contrast and brightness adjustment. On ImageNet we additionally performed random crops of $128\times128$ pixels. The choice of the allowed transformations is arbitrary and the only criterion was that the objects should in general still be recognisable. We deliberately avoided designing a particularly successful scheme. The details of the transformations are presented below, the ranges of the parameters are specified in Table~\ref{tab:daugreg-heavier_params} and some visual examples are shown in Figure~\ref{fig:daugreg-cifar10_daug}.
\begin{itemize}
\item Affine transformations:
\vspace{5pt} \\
$
\begin{bmatrix}
x^\prime \\
y^\prime \\
1
\end{bmatrix}
=
\begin{bmatrix}
f_h z_x \cos(\theta) & -z_y \sin(\theta + \phi) & t_x \\
z_x \sin(\theta) & z_y \cos(\theta + \phi) & t_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
1
\end{bmatrix}
$
\item Contrast adjustment: $x^\prime = \gamma (x - \overline{x}) + \overline{x}$
\item Brightness adjustment: $x^\prime = x + \delta$
\end{itemize}
\begin{table}[ht]
\begin{center}
\begin{tabular}{cll}
\textbf{Parameter} & \textbf{Description} & \textbf{Range} \\
\hline \\
$f_h$ & Horiz. flip & $1 - 2 B(0.5)$ \\
$t_x$ & Horiz. translation & $\mathcal{U}(-0.1, 0.1)$ \\
$t_y$ & Vert. translation & $\mathcal{U}(-0.1, 0.1)$ \\
$z_x$ & Horiz. scale & $\mathcal{U}(0.85, 1.15)$ \\
$z_y$ & Vert. scale & $\mathcal{U}(0.85, 1.15)$ \\
$\theta$ & Rotation angle & $\mathcal{U}(-22.5^\circ, 22.5^\circ)$ \\
$\phi$ & Shear angle & $\mathcal{U}(-0.15, 0.15)$ \\
$\gamma$ & Contrast & $\mathcal{U}(0.5, 1.5)$ \\
$\delta$ & Brightness & $\mathcal{U}(-0.25, 0.25)$
\end{tabular}
\end{center}
\caption{Description and range of possible values of the parameters used for the heavier augmentation scheme. $B(p)$ denotes a Bernoulli distribution and $\mathcal{U}(a, b)$ a uniform distribution.}
\label{tab:daugreg-heavier_params}
\end{table}
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/cifar10_daug}
\end{center}
\caption{Illustration of the most extreme transformations performed by the data augmentation schemes on ten images---one per class---from CIFAR-10.}
\label{fig:daugreg-cifar10_daug}
\end{figure}
\subsection{Network Architectures}
\label{sec:daugreg-methods_archs}
We trained three distinct, popular architectures that have achieved successful results in visual object recognition: the all convolutional network, All-CNN \citep{springenberg2014allcnn}; the wide residual network, WRN \citep{zagoruyko2016wrn}; and the densely connected network, DenseNet \citep{huang2017densenet}. Importantly, we kept the training hyperparameters---learning rate, training epochs, batch size, optimiser, etc.---as in the original papers. Table~\ref{tab:architectures} summarises the main features of each network and below we specify further details.
\begin{table}[ht]
\begin{center}
\begin{tabular}{rccc}
& \textbf{All-CNN} & \textbf{WRN} & \textbf{DenseNet}\\
Ref. in original paper & \textit{All-CNN-C} & \textit{WRN-28-10} & \textit{DenseNet-BC}\\
Main feature & Only conv. layers & Residual connections & Dense connectivity\\
Number of layers & 16 / 12 & 28 & 101\\
Number of parameters & 9.4 / 1.3 M & 36.5 M & 0.8 M\\
Training hours & 35--45 / 2.5 & 100--145 / 14--15 & 24--27\\
CO2e emissions\footnotemark (kg) & 4.17--5.36 / 0.29 & 11.91--17.27 / 1.66--1.78 & 2.86--3.21\\
\end{tabular}
\end{center}
\caption{Key aspects of the network architectures. Cells with two values correspond to ImageNet / CIFAR.}
\label{tab:architectures}
\end{table}
\footnotetext{The carbon emissions were computed using the online calculator at \href{http://www.green-algorithms.org/}{green-algorithms.org} \cite{lannelongue2020carbonemissions}. The whole set of experiments in this chapter emitted an estimated total of 390.45 kg CO2e. The details about how the carbon emissions were calculated and about the impact of this study on global warming were made available as supplementary material of the main publication of this chapter.}
\subsubsection{All Convolutional Network}
All-CNN consists exclusively of convolutional layers with ReLU activation \citep{glorot2011relu}, it is relatively shallow and has few parameters. For ImageNet, the network has 16 layers and 9.4 million parameters; for CIFAR, it has 12 layers and about 1.3 million parameters. In our experiments to compare the adaptability of data augmentation and explicit regularisation to changes in the architecture (Section~\ref{sec:daugreg-depth}), we also tested a \textit{shallower} version, with 9 layers and 374,000 parameters, and a \textit{deeper} version, with 15 layers and 2.4 million parameters. The four architectures can be described as in Table~\ref{tab:allcnn}, where $K$\textbf{C}$D$($S$) is a $D \times D$ convolutional layer with $K$ channels and stride $S$, followed by batch normalisation and a ReLU non-linearity. \textit{N.Cl.} is the number of classes and Gl.Avg. refers to global average pooling.
\begin{table}[ht]
\begin{center}
\begin{tabular}{l|l}
\multirow{1}{*}{ImageNet} & 96\textbf{C}11(2)--96\textbf{C}1(1)--96\textbf{C}3(2)--256\textbf{C}5(1) \\
& --256\textbf{C}1(1)--256\textbf{C}3(2)--384\textbf{C}3(1) \\
& --384\textbf{C}1(1)--384\textbf{C}3(2)--1024\textbf{C}3(1) \\
& --1024\textbf{C}1(1)--\textit{N.Cl}.C1(1) \\
& --Gl.Avg.--Softmax \\ [3pt]
\multirow{1}{*}{CIFAR} & ~2$\times$96\textbf{C}3(1)--96\textbf{C}3(2)--2$\times$192\textbf{C}3(1) \\
& --192\textbf{C}3(2)--192\textbf{C}3(1)--192\textbf{C}1(1) \\
& --\textit{N.Cl}.C1(1)--Gl.Avg.--Softmax \\ [3pt]
\multirow{1}{*}{Shallower} & ~2$\times$96\textbf{C}3(1)--96\textbf{C}3(2)--192\textbf{C}3(1) \\
& --192\textbf{C}1(1)--\textit{N.Cl}.C1(1)--Gl.Avg.--Softmax \\ [3pt]
\multirow{1}{*}{Deeper} & ~2$\times$96\textbf{C}3(1)--96\textbf{C}3(2)--2$\times$192\textbf{C}3(1) \\
& --192\textbf{C}3(2)--2$\times$192\textbf{C}3(1)--192\textbf{C}3(2) \\
& --192\textbf{C}3(1)--192\textbf{C}1(1) \\
& --\textit{N.Cl}.C1(1)--Gl.Avg.--Softmax \\
\end{tabular}
\end{center}
\caption{Specification of the All-CNN architectures.}
\label{tab:allcnn}
\end{table}
The CIFAR network is identical to the All-CNN-C architecture in the original paper, except for the introduction of the batch normalisation layers \citep{ioffe2015batchnorm}, which we included because they generally improve performance, but had not been proposed at the time of publication of All-CNN \citep{springenberg2014allcnn}. The ImageNet version also includes batch normalisation layers and a stride of 2 instead of 4 in the first layer to compensate for the reduced input size.
Importantly, we kept the same training parameters as in the original paper in the cases where they were reported. Specifically, the All-CNN networks were trained using stochastic gradient descent, with fixed Nesterov momentum 0.9, learning rate of 0.01 and decay factor of 0.1. The batch size for the experiments on ImageNet was 64 and we trained for 25 epochs, decaying the learning rate at epochs 10 and 20. On CIFAR, the batch size was 128, we trained for 350 epochs and decayed the learning rate at epochs 200, 250 and 300. The kernel parameters were initialised according to the Xavier uniform initialisation \citep{glorot2010glorot}.
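The CIFAR schedule described above amounts to a simple step-decay function; in practice such a schedule would be handed to the training loop, for instance via a Keras \texttt{LearningRateScheduler} callback. The function below is an illustrative sketch:

```python
def step_decay(epoch, base_lr=0.01, factor=0.1, milestones=(200, 250, 300)):
    """Learning rate at a given epoch for the All-CNN CIFAR schedule:
    start at 0.01 and multiply by 0.1 at epochs 200, 250 and 300."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

# 0.01 before epoch 200, 0.001 from epoch 200, down to 1e-5 from epoch 300.
print(step_decay(100), step_decay(220), step_decay(340))
```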
\subsubsection{Wide Residual Network}
WRN is a modification of ResNet \citep{he2016resnet} that achieves better performance with fewer layers, but more units per layer. Here, we chose for our experiments the WRN-28-10 version (28 layers and about 36.5 M parameters), which was reported to achieve the best results on CIFAR. It has the following architecture:
\begin{center}
\centering
16\textbf{C}3(1)--4$\times$160\textbf{R}--4$\times$320\textbf{R}--4$\times$640\textbf{R}--BN--ReLU--Avg.(8)--FC--Softmax
\end{center}
where $K$\textbf{R} is a residual block with residual function BN--ReLU--$K$\textbf{C}3(1)--BN--ReLU--$K$\textbf{C}3(1). BN is batch normalisation, Avg.(8) is spatial average pooling of size 8 and FC is a fully connected layer. On ImageNet, the stride of the first convolution is 2. The stride of the first convolution within the residual blocks is 1, except in the first block of the series of 4, where it was set to 2 in order to subsample the feature maps.
Similarly, we kept the training parameters of the original paper: we trained with SGD, with fixed Nesterov momentum 0.9 and learning rate of 0.1. On ImageNet, the learning rate was decayed by 0.2 at epochs 8 and 15 and we trained for a total of 20 epochs with batch size 32. On CIFAR, we trained with a batch size of 128 during 200 epochs and decayed the learning rate at epochs 60, 120 and 160. The kernel parameters were initialised according to the He normal initialisation \citep{he2015he}.
\subsubsection{DenseNet}
The main characteristic of DenseNet \citep{huang2017densenet} is that the architecture is arranged into blocks whose layers are connected to all the layers below, forming a dense graph of connections, which permits training very deep architectures with fewer parameters than, for instance, ResNet. Here, we used a network with bottleneck compression rate $\theta = 0.5$ (DenseNet-BC), growth rate $k = 12$ and 16 layers in each of the three blocks. The model has nearly 0.8 million parameters. The specific architecture can be described as follows:
\begin{center}
\centering
2$\times k$\textbf{C}3(1)--DB(16)--TB--DB(16)--TB--DB(16)--BN--Gl.Avg.--FC--Softmax
\end{center}
where DB($c$) is a dense block, that is, a concatenation of $c$ convolutional blocks. Each convolutional block is a set of layers whose output is concatenated with the input to form the input of the next convolutional block. A convolutional block with bottleneck structure has the following layers:
\begin{center}
\centering
BN--ReLU--4$\times k$\textbf{C}1(1)--BN--ReLU--$k$\textbf{C}3(1)--Concat.
\end{center}
TB is a transition block, which downsamples the size of the feature maps, formed by the following layers:
\begin{center}
\centering
BN--ReLU--$k$\textbf{C}1(1)--Avg.(2).
\end{center}
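To make the dense connectivity concrete, one can track the number of channels through the architecture: the initial convolution outputs $2k$ channels, each dense block adds $k$ channels per convolutional block, and each transition block compresses the count by $\theta = 0.5$. A small bookkeeping sketch (illustrative only, not the training code):

```python
def densenet_bc_channels(k=12, blocks=(16, 16, 16), theta=0.5):
    """Channel counts through DenseNet-BC: after the initial convolution,
    after each dense block and after each transition block."""
    channels = 2 * k                   # initial convolution: 2k channels
    trace = [channels]
    for i, n_conv in enumerate(blocks):
        channels += n_conv * k         # dense block: +k channels per conv block
        trace.append(channels)
        if i < len(blocks) - 1:        # transition block after all but the last
            channels = int(theta * channels)
            trace.append(channels)
    return trace

print(densenet_bc_channels())  # [24, 216, 108, 300, 150, 342]
```

The concatenations, rather than additions, are what keep the parameter count low despite the depth: each layer only produces $k$ new feature maps.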
As with All-CNN and WRN, we kept the training hyperparameters of the original paper. On the CIFAR data sets, we trained with SGD, with fixed Nesterov momentum 0.9 and learning rate of 0.1, decayed by 0.1 at epochs 150 and 200, training for a total of 300 epochs. The batch size was 64 and the parameters were initialised with He initialisation.
\subsection{Train and Test}
Every architecture was trained on each data set both with explicit regularisation---weight decay and dropout as specified in the original papers---and without. Furthermore, we trained each model with the three data augmentation schemes: no augmentation, light and heavier. Figure~\ref{fig:experimental_setup} shows a summary of this experimental setup. The performance of the models was computed on the held-out test sets. As in previous works \citep{krizhevsky2012alexnet, simonyan2014}, we averaged the softmax posteriors over 10 random \textit{light} augmentations, since this yields slightly better results. Then we computed the classification accuracy for the models trained on CIFAR and the top-5 accuracy for the ImageNet models.
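The averaging of softmax posteriors over random light augmentations can be sketched as follows, with toy stand-ins for the trained model and the augmentation (all names are illustrative):

```python
import numpy as np

def tta_predict(predict_fn, augment_fn, x, n_aug=10, rng=None):
    """Average the softmax posteriors over n_aug random augmentations
    of the input batch x, then take the argmax as the prediction."""
    if rng is None:
        rng = np.random.default_rng(0)
    probs = np.mean([predict_fn(augment_fn(x, rng)) for _ in range(n_aug)],
                    axis=0)
    return probs.argmax(axis=-1), probs

def fake_predict(x):
    """Toy model: a two-class softmax over a scalar image statistic."""
    s = x.sum(axis=(1, 2, 3))
    logits = np.stack([s, -s], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fake_light_aug(x, rng):
    """Toy light augmentation: a random horizontal flip."""
    return x[:, :, ::-1] if rng.random() < 0.5 else x

x = np.random.default_rng(1).normal(size=(4, 8, 8, 3))  # NHWC batch
labels, probs = tta_predict(fake_predict, fake_light_aug, x)
print(labels.shape, probs.shape)  # (4,) (4, 2)
```

Averaging softmax outputs keeps each row a valid probability distribution, so the argmax over the mean posterior is a well-defined prediction.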
All the experiments were performed on Keras \citep{chollet2015keras} on top of TensorFlow \citep{tensorflow2015}, with a single GPU NVIDIA GeForce GTX 1080 Ti.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/experimental_setup.png}
\end{center}
\caption{Visual summary of the experimental setup. The figure represents the factors of variation in our experiments: data sets, architectures, amount of training data, data augmentation scheme and inclusion of explicit regularisation. Comparisons within a factor of variation are most relevant for the factors on the right, such as the performance of the models trained with and without explicit regularisation.}
\label{fig:experimental_setup}
\end{figure}
\subsection{Carbon footprint of the computational experiments}
\label{sec:daugreg-carbon_footprint}
Training artificial neural networks effectively on large, non-trivial data sets consumes a considerable amount of energy \citep{strubell2019energydl}. Crucially, the amount of compute used by the largest models has been increasing exponentially during the last decade \citep{amodei2018energyai}. Therefore, the contribution of deep learning research to global warming and climate change should not be neglected \citep{schwartz2019greenai, lacoste2019carbonemissions, lannelongue2020carbonemissions}. As the experimental design of this chapter required training multiple neural network models, we wanted to be both aware and transparent about the environmental impact of our experimental study. In this section, we report the estimated carbon emissions associated with training our models for this chapter and the details of how they were computed.
In Table~\ref{tab:architectures} of Section~\ref{sec:daugreg-methods_archs} we report the estimated carbon emissions associated with training each architecture on each data set, given the specific characteristics of our computing hardware. These estimations rely on the online calculator available at \href{http://www.green-algorithms.org/}{green-algorithms.org}, developed by \citet{lannelongue2020carbonemissions}. In order to estimate the carbon emissions of each model, we took into consideration the following information: all the models were trained in a local desktop computer, located in Germany, with a single graphic processing unit (GPU), model GTX 1080 Ti, with 11 GB of memory. We assumed full usage of the 11 GB of memory and of the processing core for all models---a conservative estimation.
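To make the arithmetic behind such estimates transparent, a simplified calculation multiplies the run time by the power draw of the hardware, a power usage effectiveness (PUE) factor and the carbon intensity of the local electricity grid. The constants below are illustrative assumptions, not the exact values used by the green-algorithms.org calculator:

```python
def co2e_estimate(hours, gpu_power_w=250.0, extra_power_w=150.0,
                  pue=1.67, carbon_intensity_g_per_kwh=340.0):
    """Rough energy and carbon estimate for a single training run.

    All constants are assumptions: the GPU draw, the CPU/memory overhead,
    the power usage effectiveness and the grid carbon intensity
    (g CO2e per kWh) all depend on the specific hardware and location.
    Returns (energy in kWh, emissions in kg CO2e).
    """
    kwh = hours * (gpu_power_w + extra_power_w) * pue / 1000.0
    return kwh, kwh * carbon_intensity_g_per_kwh / 1000.0

kwh, kg = co2e_estimate(hours=100)
print(round(kwh, 2), round(kg, 2))
```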
\begin{table}[htbp]
\caption{Summary of the estimated carbon emissions associated with training the models for our experimental setup.}
\begin{center}
\begin{tabular}{ccccccc}
Network & Data set & Depth & \% Data & N. models & Total h. & Total CO2e \\ \hline
\multirow{8}{*}{All-CNN} & \multirow{5}{*}{\makecell{CIFAR\\2.5 h\\0.29 CO2e}} & \multirow{3}{*}{\makecell{original}} & 100~\% & 36 & 90 & 10.73 \\
& & & 50~\% & 24 & 30 & 3.58 \\
& & & 10~\% & 24 & 6 & 0.72 \\ \cline{3-7}
& & shallower & 100~\% & 12 & 25.2 & 3.0 \\ \cline{3-7}
& & deeper & 100~\% & 12 & 36 & 4.29 \\ \cline{2-7}
& \multirow{3}{*}{\makecell{ImageNet\\35--45 h\\4.17--5.36 CO2e}} & \multirow{3}{*}{\makecell{original}} & 100~\% & 6 & 270 & 32.18 \\
& & & 50~\% & 6 & 135 & 16.09 \\
& & & 10~\% & 6 & 27 & 3.22 \\ \hline
\multirow{6}{*}{WRN} & \multirow{3}{*}{\makecell{CIFAR\\14--15 h\\1.66--1.78 CO2e}} & \multirow{3}{*}{\makecell{original}} & 100~\% & 36 & 540 & 64.35 \\
& & & 50~\% & 12 & 90 & 10.73 \\
& & & 10~\% & 12 & 18 & 2.15 \\ \cline{2-7}
& \multirow{3}{*}{\makecell{ImageNet\\100--145 h\\11.91--17.27 CO2e}} & \multirow{3}{*}{\makecell{original}} & 100~\% & 6 & 870 & 103.68 \\
& & & 50~\% & 6 & 435 & 51.84 \\
& & & 10~\% & 6 & 87 & 10.37 \\ \hline
\multirow{3}{*}{DenseNet} & \multirow{3}{*}{\makecell{CIFAR\\24--27 h\\2.86--3.21 CO2e}} & \multirow{3}{*}{\makecell{original}} & 100~\% & 12 & 324 & 38.61 \\
& & & 50~\% & 6 & 81 & 9.65 \\
& & & 10~\% & 6 & 16.2 & 1.93 \\ \hline
\vspace{-5pt}\\
\multicolumn{4}{c}{\textbf{Total}} & \textbf{228} & \textbf{3080.4} & \textbf{367.12}
\end{tabular}
\end{center}
\label{tab:carbon_emissions}
\end{table}
As reported in Table~\ref{tab:carbon_emissions}, the complete set of experiments reported in this chapter needed a total of 3,276 GPU hours (136.5 days), which correspond to actual real time, since we had access to a single GPU. With our hardware, this corresponds, according to \cite{lannelongue2020carbonemissions}, to an estimate of 832.53 kWh or 390.45 kg of carbon dioxide equivalent (CO2e). Carbon dioxide equivalent represents the amount of CO2 that would have the same global warming impact as a mixture of gases. By way of comparison, 390.45 kg CO2e corresponds to 34.25 tree-years---the time taken by a mature tree to absorb that amount of CO2---, 69~\% of a flight from New York City to San Francisco, or 2,231 km in a passenger car.
\section{Results}
\label{sec:daugreg-results}
Here we present the results of the empirical study. In the first set of experiments (Section~\ref{sec:daugreg-orig}) we trained the architectures as in the original papers with the full data sets. A relevant characteristic of explicit regularisation methods is that they typically require the specification of hyperparameters. These are usually fine-tuned by the authors of research papers to achieve higher performance, as demanded by the dynamics of the scientific publication environment in the machine learning community. However, the sensitivity of the results to these hyperparameters is often not made available. In order to gain insight into the role of explicit regularisation and data augmentation in more realistic cases, where the hyperparameters have not been highly optimised, we varied the amount of training data (Section~\ref{sec:daugreg-less_data}) and the depth of the architectures (Section~\ref{sec:daugreg-depth}), while keeping all other hyperparameters untouched.
The objective of the study is to contrast the performance gained by training the models with both explicit regularisation and data augmentation, which is the common practice in the literature \citep{tan2019efficientnet, huang2017densenet, zagoruyko2016wrn, springenberg2014allcnn}, against training with only data augmentation. Hence, the presentation of the results in the figures aims at facilitating this comparison. In the performance plots, we represent the relative performance gain of each model with respect to the relevant baseline, which we specify in each section. We plot the results in pairs: the square blue dots on the top, blue-shaded area correspond to the models trained with only data augmentation, and the round orange dots on the bottom, orange-shaded area to the models trained with both data augmentation and explicit regularisation. Additionally, the results of training with different levels of data augmentation are represented with dots in three lightness and saturation shades, and we connect with dotted lines the models trained with the same level of augmentation.
In order to assess the statistical significance of the differences between models trained with and without explicit regularisation, we carried out percentile bootstrap analyses \citep{efron1992bootstrap}, that is, simulations based on sampling with replacement. We followed the guidelines by \citet{rousselet2019bootstrap}. In all cases, the values of the distribution correspond to the difference between the performance---with respect to the baseline---of the models trained without explicit regularisation minus the performance of the models trained with explicit regularisation---the pairs of dots connected by a dotted line. We then examined the distribution of this difference across the bootstrap samples and compared it with the null hypothesis of no difference ($H_0 = 0$). For each experiment we drew all possible bootstrap samples with replacement, or a maximum of one million.
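The percentile bootstrap just described can be sketched as follows; the toy paired differences at the end are illustrative and not our actual results:

```python
import numpy as np

def percentile_bootstrap(diffs, n_boot=100_000, alpha=0.05, seed=0):
    """Percentile bootstrap on paired differences: resample with
    replacement, take the mean of each resample, and report the
    (1 - alpha) confidence interval and a two-sided p-value against
    the null hypothesis of no difference (H0 = 0)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    means = rng.choice(diffs, size=(n_boot, len(diffs)),
                       replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    p = min(1.0, 2 * min((means <= 0).mean(), (means >= 0).mean()))
    return means.mean(), (lo, hi), p

# Toy example: eight paired performance differences.
diffs = [0.6, 1.2, -0.3, 0.9, 0.4, 1.5, 0.2, -0.1]
mean, (lo, hi), p = percentile_bootstrap(diffs, n_boot=20_000)
print(round(mean, 2), (round(lo, 2), round(hi, 2)), round(p, 3))
```

Here the full set of bootstrap samples is approximated by a fixed number of resamples rather than being enumerated exhaustively.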
\subsection{Original architectures}
\label{sec:daugreg-orig}
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/orig.png}
\end{center}
\caption{Relative improvement of adding data augmentation and explicit regularisation to the baseline models, $(accuracy - baseline)/accuracy \times 100$. The baseline accuracy is shown on the left. The results suggest that data augmentation alone (in blue) can achieve even better performance than the models trained with both weight decay and dropout (in orange).}
\label{fig:daugreg-orig}
\end{figure}
First, we contrast the regularisation effect of data augmentation and of weight decay and dropout on the original networks trained with the complete data sets, and show the results in Figure~\ref{fig:daugreg-orig}. As a baseline, we consider the ``bare bone'' models, that is, the models trained with neither explicit regularisation nor data augmentation. We report the accuracy of the baseline on the left axis of the plot in Figure~\ref{fig:daugreg-orig}. To assess the relevant comparisons, we show the relative improvement in test performance achieved by adding each technique or combination of techniques to the baseline model. Table~\ref{tab:daugreg-orig_nets} shows the mean and standard deviation of each combination on each architecture and data set, and Figure~\ref{fig:daugreg-bootstrap_orig} the results of the bootstrap analysis, which considers the differences of all pairs---square blue dots minus round orange dots, connected with dotted lines\footnote{The relative performance of WRN on ImageNet trained with weight decay and dropout with respect to the baseline is negative (-6.22~\%) and is neither depicted in Figure~\ref{fig:daugreg-orig} nor taken into consideration to compute the average improvements in Table~\ref{tab:daugreg-orig_nets} and the bootstrap analysis in Figure~\ref{fig:daugreg-bootstrap_orig}.}.
\begin{figure}[ht]
\centering
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/bootstrap_orig.png}
\end{center}
\caption{Bootstrap analysis to assess the difference in performance gain provided by training without and with weight decay and dropout, on the original architectures and using the full data sets. On the left of the figure we plot the bootstrap values---differences---with the mean and median as a solid and dashed line, respectively. The main figure shows the distribution of the mean of the bootstrap samples, the standard error of the sample mean, the 95~\% confidence intervals and the $P$ value with respect to the null hypothesis ($H_0=0$).}
\label{fig:daugreg-bootstrap_orig}
\end{figure}
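The bootstrap analysis used throughout this chapter can be sketched as follows. This is an illustrative implementation only, not the exact analysis script used for the experiments, and the `diffs` values are hypothetical per-case accuracy differences (squared blue dots minus round orange dots):

```python
import numpy as np

def bootstrap_mean(diffs, n_boot=100_000, seed=0):
    """Bootstrap the mean of paired accuracy differences.

    diffs: per-case differences (e.g. augmentation-only minus
    augmentation + weight decay and dropout).
    Returns the bootstrap means, a 95 % confidence interval and a
    one-sided P value against the null hypothesis H0: mean <= 0.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    # Resample the cases with replacement and average each resample.
    idx = rng.integers(0, len(diffs), size=(n_boot, len(diffs)))
    boot_means = diffs[idx].mean(axis=1)
    ci = np.percentile(boot_means, [2.5, 97.5])
    p_value = np.mean(boot_means <= 0.0)
    return boot_means, ci, p_value

# Hypothetical differences for 16 (architecture, data set) cases:
diffs = [1.2, -0.3, 0.8, 0.5, 2.1, -0.6, 0.9, 0.4,
         1.5, -0.2, 0.7, 1.1, 0.3, -0.1, 0.6, 0.8]
means, ci, p = bootstrap_mean(diffs)
```

The confidence interval and $P$ value reported in the figures are computed from the distribution of `boot_means` in this manner.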
\begin{table}[hb]
\begin{center}
\begin{tabular}{rcc}
& No explicit reg. & Weight decay + dropout \\
None & \textit{baseline} & 3.02 (1.65) \\
Light & 8.46 (3.80) & 7.88 (2.60) \\
Heavier & 8.68 (4.69) & 7.92 (4.03)
\end{tabular}
\end{center}
\caption{Average accuracy improvement over the baseline model of each combination of data augmentation level and presence of weight decay and dropout.}
\label{tab:daugreg-orig_nets}
\end{table}
The first conclusion from Figures~\ref{fig:daugreg-orig} and \ref{fig:daugreg-bootstrap_orig}, as well as Table~\ref{tab:daugreg-orig_nets}, is that training with data augmentation alone (blue dots on the top, blue-shaded areas) is better than training with both augmentation and explicit regularisation (in orange). This holds in more than half of the cases (9/16), and the bootstrap analysis reveals that the difference is positive with 95~\% confidence and $P$ value $=0.022$. On average, adding data augmentation to the baseline model improved the accuracy by 8.57~\%, whereas adding both augmentation and explicit regularisation improved it by 7.90~\%.
At first glance, one may think that this is not remarkable, since the differences are small and data augmentation alone is not better in 100~\% of the cases. However, this result is surprising for the following reason: the studied architectures achieved state-of-the-art results at the moment of their publication, and the models all included light augmentation, weight decay and dropout, whose parameters were presumably finely tuned to optimise the accuracy. The replication of these results corresponds to the mid-orange dots in Figure~\ref{fig:daugreg-orig}. Here, we have shown that simply removing weight decay and dropout---while keeping all other hyperparameters intact, see Section~\ref{sec:daugreg-methods_archs}---improves the \textit{then state-of-the-art} accuracy in 4 of the 8 studied cases. Why did the authors not train without explicit regularisation and obtain better results?
Second, it can also be observed that the regularisation effect of weight decay and dropout, an average improvement of 3.02~\% with respect to the baseline, is much smaller than that of data augmentation: simply applying light augmentation increased the accuracy by 8.46~\% on average. Although the heavier augmentation scheme was deliberately not designed to optimise the performance, on both CIFAR-10 and CIFAR-100 it improved the test performance with respect to the light augmentation scheme. This was not the case on ImageNet, probably due to the higher complexity of the data set.
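The exact light and heavier schemes used in the experiments are specified in Section~\ref{sec:daugreg-methods_archs}. As an illustration only, a light scheme of the kind standardly applied to CIFAR (pad-and-crop plus horizontal flip), together with a hypothetical heavier variant adding photometric changes, can be sketched as:

```python
import numpy as np

def light_augment(img, pad=4, rng=None):
    """Light augmentation in the spirit of the standard CIFAR scheme:
    zero-pad by `pad` pixels, take a random crop of the original size,
    and flip horizontally with probability 0.5.
    img: array of shape (H, W, C).
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    out = padded[top:top + h, left:left + w]
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    return out

def heavier_augment(img, rng=None):
    """A hypothetical heavier scheme: light augmentation plus random
    contrast and brightness changes. Purely illustrative; the actual
    heavier scheme of the experiments differs."""
    if rng is None:
        rng = np.random.default_rng()
    out = light_augment(img, rng=rng).astype(float)
    out = out * rng.uniform(0.8, 1.2) + rng.uniform(-10, 10)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Both functions return an image of the same shape as the input, so the augmented examples can be fed to the network in place of the originals.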
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/val_acc_all.png}
\end{center}
\caption{Dynamics of the validation accuracy during the training of All-CNN, WRN and DenseNet on CIFAR-10 with heavier data augmentation, contrasting the models trained with explicit regularisation (orange lines) and the models trained with only data augmentation (in blue). The regularised models rely heavily on the learning rate decay to obtain the final boost in performance, while the models trained without explicit regularisation quickly approach their final performance.}
\label{fig:daugreg-dynamics}
\end{figure}
Further, it can be observed that the results are in general more consistent for the models trained without explicit regularisation. Finally, an additional advantage of training without explicit regularisation is that the learning dynamics (Figure~\ref{fig:daugreg-dynamics}) are much faster and more predictive of the final performance. Typically, regularisers such as weight decay and dropout effectively prevent the model from fitting the training data during the first epochs and rely heavily on the learning rate decay to obtain the boost that yields the final performance. On the contrary, models trained with only data augmentation reach very high validation performance after a few epochs. This effect is particularly acute on DenseNet, which applies heavier weight decay.
In sum, it seems that the performance gain provided by weight decay and dropout can be matched, and often surpassed, by data augmentation alone. Besides, the models trained without explicit regularisation presented additional advantages, which we will further discuss in Section~\ref{sec:daugreg-discussion}.
\subsection{When the available training data changes}
\label{sec:daugreg-less_data}
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[keepaspectratio=true, width=\columnwidth]{\imgpath/reduced_data_50.png}
\caption{50~\% of the available training data}
\label{fig:daugreg-less_data_50}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\includegraphics[keepaspectratio=true, width=\columnwidth]{\imgpath/reduced_data_10.png}
\caption{10~\% of the available training data}
\label{fig:daugreg-less_data_10}
\end{subfigure}
\caption{Fraction of the baseline performance when the amount of available training data is reduced, $\mathrm{accuracy}/\mathrm{baseline} \times 100$. The models trained with explicit regularisation present a significant drop in performance compared to the models trained with only data augmentation. The differences become larger as the amount of training data decreases.}
\label{fig:daugreg-less_data}
\end{figure}
We argue that one of the main drawbacks of explicit regularisation techniques is their poor adaptability to changes in the conditions under which the hyperparameters were tuned. To test this hypothesis and contrast it with the adaptability of data augmentation, we extended the analysis by training the same networks with fewer examples. All models were trained on the same random subset of data and evaluated on the same test set as in the previous experiments. In order to better visualise how well each technique resists the reduction of training data, in Figure~\ref{fig:daugreg-less_data} we show the fraction of the baseline accuracy achieved by each model when trained with 50~\% and 10~\% of the available data. In this case, the baseline is thus each corresponding model trained with the complete data set. Table~\ref{tab:daugreg-less_data} summarises the mean and standard deviation of each combination, and Figure~\ref{fig:daugreg-bootstrap_less_data} shows the results of the bootstrap analysis.
\begin{table}[htb]
\begin{center}
\begin{tabular}{rcc}
& \multicolumn{2}{c}{50~\% of the training data} \\
\cline{2-3}
& No explicit reg. & Weight decay + dropout \\
None & 88.11 (6.27) & 83.20 (9.83) \\
Light & 91.47 (4.31) & 88.27 (7.39) \\
Heavier & 91.82 (4.63) & 89.28 (6.63) \\
\vspace{-7pt}\\
& \multicolumn{2}{c}{10~\% of the training data} \\
\cline{2-3}
& No explicit reg. & Weight decay + dropout \\
None & 58.72 (14.93) & 58.75 (16.92) \\
Light & 67.55 (14.27) & 60.89 (18.39) \\
Heavier & 68.69 (13.61) & 61.43 (15.90)
\end{tabular}
\end{center}
\caption{Average fraction of the original accuracy retained by each combination of data augmentation level and presence of weight decay and dropout.}
\label{tab:daugreg-less_data}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width = \textwidth]{\imgpath/bootstrap_50.png}
\caption{50~\% of the available training data}
\label{fig:daugreg-bootstrap_means_50}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\includegraphics[width = \textwidth]{\imgpath/bootstrap_10.png}
\caption{10~\% of the available training data}
\label{fig:daugreg-bootstrap_means_10}
\end{subfigure}
\caption{Bootstrap analysis analogous to the one detailed in Section~\ref{sec:daugreg-orig} and Figure~\ref{fig:daugreg-bootstrap_orig}, to assess the statistical significance of the performance difference of models trained with only 50 and 10~\% of the data.}
\label{fig:daugreg-bootstrap_less_data}
\end{figure}
One of the main conclusions of this set of experiments is that, if no data augmentation is applied, explicit regularisation by itself hardly resists the reduction of training data. On average, with 50~\% of the available data, these models achieve only 83.20~\% of the original accuracy (Table~\ref{tab:daugreg-less_data}), which, remarkably, is even worse than that of the models trained without any explicit regularisation (88.11~\%). With 10~\% of the data, the average fraction is the same (58.75 and 58.72~\%, respectively). This implies that training with explicit regularisation can even be detrimental to performance.
When combined with data augmentation, the models trained with explicit regularisation (orange dots) also perform worse (88.78 and 61.16~\% with 50 and 10~\% of the data, respectively) than the models without explicit regularisation (blue dots, 91.64 and 68.12~\% on average). Note that the difference becomes larger as the amount of available data decreases. Even more decisive are the results of the bootstrap analysis (Figure~\ref{fig:daugreg-bootstrap_less_data}): the mean difference in the fraction of performance achieved by the models trained without and with explicit regularisation is 2.78 and 6.96, with 50 and 10~\% of the training data, respectively; the confidence intervals are well above the null hypothesis and the $P$ values are exactly 0.
Importantly, it seems that the combination of explicit regularisation and data augmentation is only slightly better than training without data augmentation at all. Two reasons may explain this: first, the original regularisation hyperparameters seem to adapt poorly to the new conditions; they were specifically tuned for the original setup and would require re-tuning to obtain comparable results. Second, since explicit regularisation reduces the representational capacity, it might prevent the models from taking advantage of the augmented data.
In contrast, the models trained with data augmentation but without explicit regularisation adapt more naturally to the reduced availability of data. With 50~\% of the data, these models achieve about 91.5~\% of the accuracy obtained with the complete data sets. With only 10~\% of the data, they achieve nearly 70~\% of the baseline performance, on average. This highlights the suitability of data augmentation to serve, to a great extent, as true, useful data \citep{vinyals2016oneshot}.
\subsection{When the architecture changes}
\label{sec:daugreg-depth}
\begin{figure}[tb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/diff_depth.png}
\end{center}
\caption{Fraction of the original performance when the depth of the All-CNN architecture is increased or reduced by 3 layers. For the explicitly regularised models, the change of architecture implies a dramatic drop in performance, while the models trained without explicit regularisation present only slight variations with respect to the original architecture.}
\label{fig:daugreg-depth}
\end{figure}
Finally, in the same spirit, we tested the adaptability of data augmentation and explicit regularisation to changes in the depth of the All-CNN architecture, by training shallower (9 layers) and deeper (15 layers) versions of the architecture. We show the fraction of the performance with respect to the original architecture in Figure~\ref{fig:daugreg-depth} and the bootstrap analysis in Figure~\ref{fig:daugreg-bootstrap_depth}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \textwidth]{\imgpath/bootstrap_depth.png}
\end{center}
\caption{Bootstrap analysis analogous to the one detailed in Section~\ref{sec:daugreg-orig} and Figure~\ref{fig:daugreg-bootstrap_orig}, to analyse the statistical significance of the performance difference of All-CNN trained with 3 more and 3 fewer layers.}
\label{fig:daugreg-bootstrap_depth}
\end{figure}
A noticeable result from these experiments is that all the models trained with weight decay and dropout (round orange dots) suffered a dramatic drop in performance when the architecture changed, regardless of whether it was made deeper or shallower and of the amount of data augmentation. As a matter of fact, the models trained without explicit regularisation performed on average 11.23~\% ($SD = 3.06$) better. As in the case of reduced training data, this may be explained by the poor adaptability of the regularisation hyperparameters, which strongly depend on the architecture.
This contrasts sharply with the performance of the models trained without explicit regularisation (top, squared blue dots). With a deeper architecture, these models achieve slightly better performance, effectively exploiting the increased capacity. With a shallower architecture, they achieve only slightly worse performance\footnote{Note that the shallower models trained with neither explicit regularisation nor data augmentation achieve even better accuracy than their counterparts with the original architecture, probably due to the reduction of overfitting provided by the reduced capacity.}. Thus, these models seem to adapt more naturally to the new architecture, and data augmentation becomes beneficial.
It is worth commenting on the particular case of the CIFAR-100 benchmark, where the difference between the models with and without explicit regularisation is, in general, even more pronounced. It is common practice in object recognition papers to tune the hyperparameters for CIFAR-10 and then test the performance on CIFAR-100 with the same values. These hyperparameters are therefore typically less suitable for CIFAR-100. We believe this is the reason why the benefits of data augmentation seem even more pronounced on CIFAR-100 in our experiments.
In sum, these results highlight another crucial advantage of data augmentation: the effectiveness of its hyperparameters, that is, the types of image transformations, depends mostly on the type of data, rather than on the particular architecture or the amount of available training data, unlike the hyperparameters of explicit regularisation. Therefore, removing explicit regularisation and training with data augmentation increases the flexibility of the models.
\section{Discussion}
\label{sec:daugreg-discussion}
In this section, we summarise our findings and discuss their relevance. In particular, we challenge the need for weight decay and dropout to train artificial neural networks, and propose to rethink data augmentation as a \textit{first-class} technique instead of a \textit{cheating} method.
As an empirical analysis, one caveat of our work is the inherently limited number of experiments, despite our having trained over 200 models. In order to increase the generality of our conclusions, we chose three significantly distinct network architectures and three data sets. Importantly, we also took a conservative approach in our experimentation: all the hyperparameters were kept as in the original models, which included both weight decay and dropout, as well as light augmentation. This setup is clearly suboptimal for models trained without explicit regularisation. Besides, the heavier data augmentation scheme was deliberately not optimised to improve the performance, as it was not within the scope of this work to propose a specific data augmentation technique. We leave for future work the exploration of data augmentation schemes that can be exploited more successfully by any deep model. Finally, in order to strengthen the conclusions of the empirical analysis, we have also discussed some theoretical insights in Section~\ref{sec:daugreg-theoretical_insights}, concluding that the generalisation gain provided by weight decay can be seen as a lower bound of what can be achieved by domain-specific data augmentation. We also hope that this work inspires researchers in other application domains, such as natural language processing, to further contrast data augmentation and explicit regularisation.
\subsection{Do deep nets really need weight decay and dropout?}
\label{sec:daug_vs_reg-wd_drop}
In Section~\ref{sec:daugreg-results} we have presented the results of a systematic empirical analysis of the role of weight decay, dropout and data augmentation in deep convolutional neural networks for object recognition. Our results have shown that explicit regularisation is not only unnecessary \citep{zhang2016understandingdl}, but also that its performance gain can be achieved by data augmentation alone: in most cases, training with data augmentation only was better than training with both data augmentation and explicit regularisation. In the few cases where it was not, the difference was very small. Moreover, unlike data augmentation, models trained with weight decay and dropout exhibited poor adaptability to changes in the architecture and the amount of training data. Why do researchers and practitioners keep training their neural networks with weight decay and dropout? Do deep nets really need weight decay and dropout?
The relevance of these findings lies in the fact that weight decay and dropout are almost ubiquitous in convolutional neural networks \citep{huang2017densenet, zagoruyko2016wrn, springenberg2014allcnn}, including recent, state-of-the-art models \citep{tan2019efficientnet}. It has certainly been shown in multiple research papers that weight decay and dropout can boost the performance of neural networks, and here we do not challenge their usefulness, but rather the convenience of using them, given the associated cost and risk and the available alternatives.
First, not only do weight decay and dropout add extra computation during training, but they also typically require training the models several times with different hyperparameters: the coefficient of the penalty for weight decay; for dropout, the location of the dropout mask and the proportion of units to drop. These hyperparameters are arguably very sensitive to changes in other elements of the learning process. Here we have studied changes in the amount of training data (Section~\ref{sec:daugreg-less_data}) and in the depth of the architecture (Section~\ref{sec:daugreg-depth}). Consider, for instance, our results in Section~\ref{sec:daugreg-depth}: All-CNN trained on CIFAR-10 with weight decay, dropout and light augmentation reaches about 92~\% accuracy. If we were in the development process and were unsure about what architecture to use, we could simply try our network with three more layers, and we would obtain about 85~\% accuracy. We could also try an architecture with three fewer layers and obtain about 82~\% accuracy. This may lead us to conclude that the first architecture has the right number of layers, because adding or removing layers drastically reduces the performance; or perhaps that adding or removing layers creates some negative interaction between the layer sizes, or any of the many other hypotheses we could think of.
Consider now what happens if we train without weight decay and dropout: All-CNN trained on CIFAR-10 with light augmentation---but without weight decay and dropout---obtains about 93.3~\% accuracy. This is slightly better than the explicitly regularised model, but we will set that aside for now. If we train this model with three more layers, we obtain 93.4~\% accuracy, that is, the same or slightly better---as opposed to the drop of 7 points we saw before. If we train with three fewer layers, we obtain about 90~\% accuracy, a drop of 3 points---as opposed to a drop of 10 points. In this case, we would not conclude that adding or removing layers creates negative interactions. Note that the only difference between these two cases is that the first models are trained with weight decay and dropout. Therefore, it may be reasonable to include explicit regularisation only in the final version of a model, in order to potentially obtain a slight boost in performance prior to publication or production---provided the hyperparameters are adequately fine-tuned---but keeping weight decay and dropout as an intrinsic part of our models can certainly lead us astray.
Finally, we can draw some connections between the results of this chapter and the insights from the previous chapter. In Chapter~\ref{ch:reg} we discussed that the role of explicit regularisation techniques, such as weight decay and dropout, is to reduce the representational capacity of the models. According to statistical learning theory, this can reduce overfitting and in turn improve generalisation. However, artificial neural networks usually have orders of magnitude more parameters than training examples, and they still generalise well. While this phenomenon is not yet well understood, one working hypothesis is that overparameterisation does not cause harmful overfitting, but rather a smooth fit that can be suitable for accurate interpolation \citep{belkin2019biasvariance, hasson2020directfit}. If overparameterisation is not a problem for artificial neural networks, is it then necessary to constrain their representational capacity through explicit regularisation? Is it reasonable to train very large models, which require a lot of memory and computation---and negatively impact the environment---and at the same time constrain their capacity?
We also hypothesise that one reason why artificial neural networks generalise well in many tasks is that the models include many sources of implicit regularisation or, in other words, inductive biases. For example, it is known that stochastic gradient descent naturally converges to solutions with small norm \citep{zhang2016understandingdl, neyshabur2014implicitreg}, batch normalisation also contributes to better generalisation, and convolutional layers are particularly efficient at processing image data---and not only images---to name a few examples. In our case, we argue that data augmentation has the potential to encode very powerful inductive biases that improve generalisation. We conclude that, in the presence of many other sources of implicit regularisation and more effective inductive biases, weight decay and dropout may not be necessary to train large deep artificial neural networks\footnote{Previous work has suggested interesting connections between weight decay and other types of regularisation and improved adversarial robustness \citep{galloway2018wdadversarial, jakubovitz2018regadversarial}. An interesting avenue for future work is studying whether these effects are also provided by data augmentation.}.
\subsection{Rethinking Data Augmentation}
\label{sec:daugreg-rethink_daug}
Data augmentation is often regarded by authors of machine learning papers as \textit{cheating}, with the suggestion that it should not be used when testing the potential of newly proposed methods \citep{goodfellow2013maxout, graham2014fracmaxpool, larsson2016fractalnet}. In contrast, weight decay and dropout are considered intrinsic elements of the algorithms \citep{tan2019efficientnet}. In view of our results, we propose to rethink data augmentation and switch its role with that of explicit regularisation: good models should effectively exploit data augmentation, and explicit regularisation should only be applied, if at all, once all other elements are fixed. This approach improves performance and saves computational resources.
In this regard, it is worth highlighting some advantages of data augmentation. Not only does it not reduce the representational capacity, unlike explicit regularisation, but, since the transformations reflect plausible variations of the real objects, it also increases the robustness of the model \citep{novak2018sensitivity, rusak2020robustness}. Interestingly, in Chapter~\ref{ch:daugit} we will also show that models trained with heavier data augmentation learn representations more aligned with the inferior temporal (IT) cortex, highlighting its connection with visual perception and biological vision. Deep nets are especially well suited for data augmentation because they do not rely on pre-computed features. Moreover, unlike explicit regularisation, it can be performed on the CPU, in parallel with the gradient updates. Finally, from Sections~\ref{sec:daugreg-less_data} and~\ref{sec:daugreg-depth} we concluded that data augmentation naturally adapts to architectures of different depth and to different amounts of available training data, without the need for specific fine-tuning of its hyperparameters.
A commonly cited disadvantage of data augmentation is that it depends on expert knowledge and it cannot be applied to all domains \citep{devries2017daugfeatspace}. However, we argue instead that expert and domain knowledge should not be disregarded but exploited. Expert and domain knowledge are, in fact, useful inductive biases. A remarkable advantage of data augmentation is that a single augmentation scheme can be designed for a broad family of data---for example, natural images, using our knowledge about visual perception---and effectively applied to a broad set of tasks---object recognition, segmentation, localisation, etc. We hope that these insights encourage more research on data augmentation and, in general, highlight the importance of using the available data more effectively. In the following chapters, we explore additional properties of models trained with data augmentation (Chapter~\ref{ch:daugit}) and how it can be used as part of the objective function to learn representations more aligned with the properties of the primate visual cortex (Chapter~\ref{ch:invariance}).
\chapterbibliography
}
\chapter{General discussion}
\label{ch:discussion}
\renewcommand{\chapterpath}{includes/discussion}
In this dissertation I have presented a series of experimental studies and discussions revolving around machine learning for image understanding, visual perception and visual neuroscience. An overarching objective of this work was to explore and exploit the connections between these fields, combining the tools and techniques common to each discipline.
A central subject of the dissertation has been data augmentation. Data augmentation has been ubiquitously used to train machine learning models on image tasks since the early 1990s, but it has received little scientific attention. In the first part of the thesis, we tried to bring data augmentation to the fore and study its role as implicit regularisation of machine learning algorithms and its potential to incorporate inductive biases from visual perception and biological vision. While on the surface data augmentation is just a method to synthetically increase the number of examples in a data set, we have here analysed it as a technique that encodes effective priors from perception: The image transformations typically included in data augmentation techniques---rotations, translations, scaling, changes in illumination, etc.---coincide with those that are plausible in the real world as we perceive it. Likely not by coincidence, the visual cortex of our brains represents objects under these transformations in a largely robust way.
From a machine learning point of view, data augmentation can be seen as a form of regularisation, in that it helps improve generalisation. Nonetheless, we discussed an important distinction between the type of regularisation provided by data augmentation---implicit regularisation---and explicit regularisation techniques (Chapter~\ref{ch:reg}). The terms explicit and implicit regularisation have appeared frequently in the deep learning literature, but, to the best of our knowledge, no formal definition had been provided. Hence, the terms have been used in an inconsistent and subjective manner. We here provided formal definitions of the two concepts based on their effect on the representational capacity of the model they are applied to, alongside several examples of each category for illustration. Importantly, we argued that data augmentation does not reduce the representational capacity and therefore constitutes implicit, not explicit, regularisation. We hope our definitions find consensus in the machine learning community and foster more rigorous discussions about regularisation.
In Chapter~\ref{ch:daugreg}, we delved into the distinction between data augmentation and explicit regularisation. We departed from the hypothesis that data augmentation improves generalisation by increasing the number of training examples through transformations that resemble those found in the real world, while explicit regularisation \textit{simply} relies on the inductive bias that simpler models should generalise better. Although this inductive bias is at the root of the feasibility of learning from data and has proven effective in countless applications, the prior knowledge encoded by data augmentation seems intuitively more effective. Accordingly, we challenged the need for explicit regularisation techniques such as weight decay and dropout to train deep neural networks, provided data augmentation is also employed. If large networks with orders of magnitude more learnable parameters than training examples are able to generalise well, is it necessary to constrain their representational capacity? We first derived some theoretical insights from the literature that suggest that weight decay and dropout can be seen as \textit{naive} data augmentation, that is, augmentation without domain knowledge. We then confirmed through an empirical evaluation that models trained with data augmentation alone outperform the combination of explicit regularisation and data augmentation typically used in practice.
Although the experimental setup of our empirical study included several network architectures and data sets, with results from over 300 trained models, extended experimentation would of course be desirable. All the experimental results from training neural networks presented in this thesis were obtained with one---occasionally two---graphics processing units (GPUs) available. It would be highly beneficial if researchers without such computational limitations extended this analysis to confirm or reject our conclusions, and we therefore made the code available alongside the publications. Another desirable extension of this part of the dissertation would be to compare data augmentation and explicit regularisation on other data modalities beyond natural images, such as speech, text or medical images.
Since one of the motivations for analysing image data augmentation was its connection with visual perception and biological vision, we hypothesised that larger variation in the image transformations seen by a neural network may induce better representational similarity with the inferior temporal cortex. This is the region of the visual cortex where it is possible to decode object classes from the measured activations and where invariance to transformations has been repeatedly observed. In Chapter~\ref{ch:daugit}, we used representational similarity analysis to compare the features learnt by artificial neural networks with the activations measured in the inferior temporal cortex through fMRI. As hypothesised, we found that models trained with heavier transformations exhibit higher similarity with the visual cortex. This study was the result of a short collaboration in which we tested the idea with a limited experimental setup. Therefore, it would also be desirable to find more evidence for our conclusion in future work, as well as to delve into what specific transformations drive invariance in the higher visual cortex.
The last chapter of the block on data augmentation made the connection with visual perception and biological vision more explicit. We departed from the idea that simply applying transformations to the input images and optimising a neural network for classification may not be enough to learn robust features as in the higher visual cortex. We first observed that useful information is lost in the way data augmentation is commonly applied: every time an image is transformed according to a data augmentation scheme, it is fed into the network to compute the classification loss just like any other new image. However, the transformed image is not just one more image, but a perceptually plausible transformation of another image in the set. With the standard classification objectives, this potentially valuable information is simply lost. Could it not be used as an inductive bias?
In order to further exploit the potential inductive bias of data augmentation, in Chapter~\ref{ch:invariance} we proposed \textit{data augmentation invariance}, a simple learning objective inspired by the increasing invariance to identity-preserving transformations observed in the ventral visual stream. Data augmentation invariance combines several novel aspects. First, we perform data augmentation within the training batches; that is, we construct the mini-batches by including $M$ transformations of each image. In this way, the model has access to multiple transformations of an example at once---instead of separated by many iterations---which potentially reduces the variance of the gradients. Second, we proposed a contrastive loss term that encourages similar representations of images that are transformations of each other, which has been suggested to be a key property of the inferior temporal cortex. Third, we define the data augmentation invariance objective in a layer-wise fashion; that is, the representational invariance is optimised at multiple layers of the network. However, we distribute the weights of the loss terms of each layer exponentially along the hierarchy, aiming to loosely mimic the increasing invariance along the visual cortex. We trained several architectures with data augmentation invariance and the models effectively and efficiently learnt robust representations, without detriment to the classification performance. In contrast, the representations of models trained with the standard categorical cross-entropy loss did not become more invariant to transformations than at the pixel space, in spite of being exposed to data augmentation during training.
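The layer-wise objective described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the exact implementation used in the chapter: the function name, the distance-to-centroid formulation of the contrastive term and the exponential base are assumptions made for the sake of the example.

```python
import numpy as np

def invariance_loss(layer_reps, base=2.0):
    """Layer-wise data augmentation invariance term (illustrative sketch).

    layer_reps: list with one array per layer, each of shape
                (n_images, M, dim), holding the representations of the
                M transformations of each image in the mini-batch.
    """
    n_layers = len(layer_reps)
    # Exponentially increasing weights along the hierarchy, normalised,
    # so that deeper layers are encouraged to be more invariant
    alphas = base ** np.arange(n_layers)
    alphas = alphas / alphas.sum()
    total = 0.0
    for alpha, reps in zip(alphas, layer_reps):
        # Penalise the spread of an image's transformed representations
        # around their mean: zero only if the layer is perfectly invariant
        centroids = reps.mean(axis=1, keepdims=True)     # (n_images, 1, dim)
        spread = ((reps - centroids) ** 2).sum(axis=2)   # (n_images, M)
        total += alpha * spread.mean()
    return total
```

In training, such a term would be added to the classification loss; here it only illustrates the exponential layer weighting and the within-batch grouping of the $M$ transformations.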
Although our results were remarkably consistent across architectures and data sets, future work should find more evidence for the benefits of data augmentation invariance. Furthermore, we are interested in exploring other potential benefits of training with this objective. In particular, we would like to test the representational similarity of the learnt features with the inferior temporal cortex, which inspired this approach. Moreover, it would be interesting to study whether encouraging invariance to some transformations---rotations, translations, illumination changes---induces invariance to other transformations, such as the occlusions of cutout augmentation.
In the second part of the dissertation, we moved the focus from data augmentation and artificial neural networks to visual attention and salience, using tools of cognitive science and neuroscience, such as eye tracking and neuroimaging. In Chapter~\ref{ch:globsal}, we proposed and analysed the concept of \textit{global visual salience}. While a large body of scientific literature has studied visual attention and the salience properties of images, it has mostly focused on analysing which parts and features of an image drive eye movements and are more likely to attract fixations. Here, we studied the likelihood of natural images as a whole to attract the initial fixation of a human observer, when presented in competition with other images. For this purpose, we carried out an eye tracking experiment in which we showed participants pairs of images side by side. We trained a simple machine learning algorithm with the behavioural data from the experiment and found that it is possible to predict the direction of the first saccade---left or right---given a pair of images from the data set. This implies that some images have a higher \textit{global visual salience} than others. Specifically, faces and images with social content are most likely to be fixated first. Importantly, we also found that global salience is largely independent of the local salience properties of the images.
We believe our experimental data can be further used to study aspects of human visual attention to competing stimuli, since we mostly focused on the direction of the first fixation upon stimulus presentation. Therefore, we open-sourced the data and the code of our analyses. In particular, it would be interesting to study the reaction times and the engagement with the stimuli over the duration of the trials, to determine, for instance, whether differences exist depending on the nature of the two images. Another interesting direction would be to study the visual properties of the images in more depth and find out whether it is possible to predict the global visual salience of novel images. Further, we hypothesised that global salience could be used as a tool or metric to better understand the visual attention behaviour of humans with conditions such as autism spectrum disorder.
Finally, in Chapter~\ref{ch:imageid}, we analysed the relationship between the local salience maps of natural images and the brain activations in the early visual cortex. In particular, we followed up on a previous study that demonstrated the possibility of identifying natural images from brain activity using the low-parametric population receptive field (pRF) model and contrast information from the images. In our work, we extended that study by analysing the discriminability of salience maps. We compared contrast maps and salience maps computed with two distinct image salience models, one based on low-level features and the other based on high-level features learnt by a deep neural network. We found that salience, especially when based on low-level features, is significantly more predictive of brain activity than contrast. This suggests that the activations in the early visual cortex contain information about various properties of the images, likely driven by feedback connectivity from higher areas. Moreover, the results in this chapter provided additional evidence for the possibility of studying properties of the visual cortex through predictive models based on simple tools such as the pRF model.
Before concluding this dissertation, I would like to briefly discuss some ethical considerations and the societal and environmental impact of the work presented here. First, although the computational resources used for this work were small compared to much of the deep learning literature, some of the results of this thesis required training multiple neural network models, especially for Chapter~\ref{ch:daugreg}. Training these models certainly impacted the environment negatively through the emission of carbon dioxide, as reported in the chapter. In order to disseminate my work and engage with other scientists, I travelled by plane to attend several conferences, which also had a negative impact on climate change. I strongly advocate minimising the impact of scientific activity on the environment. One way of contributing to reducing this impact is through data sharing. Hence, we have made available much of the data collected for this work, which will also hopefully contribute to more open science. Currently, deep neural networks are remarkably energy-inefficient compared to brains. Incorporating better inductive biases, as we have discussed in this thesis, may contribute to more efficient machine learning algorithms. Second, while I do not envision a direct negative use of the work presented here, I believe that, as work that aims to advance our technology, it has the potential of being misused or of negatively impacting our society. As Professor Ruha Benjamin puts it, ``technology can exclude without being explicitly designed for it''. I hope this is not the case for my work, and I explicitly disapprove of the use of the results, conclusions, data and code related to this work for applications that incite racism, sexism or unequal treatment of marginalised groups.
In sum, in this dissertation we have presented the results of various projects connecting different fields, such as machine learning, cognitive science and computational neuroscience. While science clearly needs the depth of very narrow studies, we have here tried to show the supplementary value of an interdisciplinary approach to science. In particular, I believe that understanding the nature of learning systems---both algorithms and brains---requires the collaboration of scientists of multiple disciplines, as many other researchers have argued before me. Learning algorithms will become more effective and efficient by incorporating insights from the brain; and we will deepen our understanding of the brain by using the tools of improved machine learning.
}
\chapter[Global visual salience of competing stimuli]{Global visual salience\\of competing stimuli}
\label{ch:globsal}
\renewcommand{\chapterpath}{includes/global-salience}
\begin{contributors}
Ricardo Ramos Gameiro designed the first prototype of the experiments, collected the eye tracking data and contributed to the original draft of the manuscript. Alessandro Grillini contributed to the comparisons between global and local salience and to the corresponding part of the original draft. Peter K{\"o}nig contributed to the conceptualisation of the project and supervised the work. Ricardo, Alessandro and Peter reviewed and edited the manuscript submitted to the Journal of Vision.
\end{contributors}
\begin{outreach}
\item \textit{Global visual salience of competing stimuli.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Ricardo Ramos Gameiro, Alessandro Grillini, Peter K{\"o}nig. PsyArXiv preprint PsyArXiv:z7qp5 \& Journal of Vision (accepted), 2019.
\end{outreach}
Visual attention is a highly complex mechanism that facilitates our understanding and navigation of the world around us, by enabling the coherent processing of the large amount of information that enters our eyes. Therefore, a fundamental component of vision and hence cognition is the guidance of eye movements \citep{liversedge2000eyemovements, geisler2011eyemovements, konig2016eyemovements}. We constantly have to decide where to look next and which regions of interest to explore, in order to process and interpret relevant information of a scene. As a consequence, the investigation of eye movement behaviour has become a major field in many research areas \citep{kowler2011eyemovements, kaspar2013visualattention}.
In this regard, a number of studies have shown that visual behaviour is controlled by three major mechanisms: bottom-up, top-down, and spatial biases \citep{desimone1995visualattention, egeth1997visualattention, kastner2000visualattention, corbetta2002topdownbottomup, connor2004buttomuptopdown, kollmorgen2010topdownbottomup}. Bottom-up factors describe features of the observed image, which attract eye fixations, involving primary contrasts, such as colour, luminance, brightness, and saturation \citep{itti1998salience, reinagel1999bottomup, baddeley2006bottomup}. Hence, bottom-up factors are typically based on the sensory input. In contrast, top-down factors comprise internal states of the observer \citep{connor2004buttomuptopdown, kaspar2013visualattention}. That is, eye movement behaviour is also guided by specific characteristics, such as personal motivation, specific search tasks, and emotions \citep{wadlinger2006topdown, einhauser2008topdown, rauthmann2012topdown, kaspar2012topdown}. Finally, spatial properties of the image, such as the image size, and motor constraints of the visual system in the brain may affect eye movement behaviour \citep{ramosgameiro2017explorationexploitation, ossandon2014spatialbiases}. As a result, spatial properties and motor constraints then lead to specific bias effects, such as the central bias in natural static images \citep{tatler2007centralbias}. Thus, investigating visual behaviour necessarily implies an examination of bottom-up and top-down factors as well as spatial biases.
Based on these three mechanisms---bottom-up, top-down and spatial biases---guiding visual behaviour, \citet{koch1987salience} first revealed a method to highlight salient points in static image scenes. Whereas this model was purely conceptual, \citet{niebur1996salience} later developed an actual implementation of salience maps. This was the first prominent proposal of topographically organised feature maps that guide visual attention. Salience maps are topographic representations of an image scene, revealing where people will most likely look while observing the respective scene \citep{itti2001salience}. That is, salience maps can be interpreted as a prediction of the distribution of eye movements on images. Usually, salience maps include only bottom-up image features, predicting eye fixations on image regions with primary contrasts in colour, saturation, luminance or brightness, among others \citep{itti1998salience}. However, in their first implementation, \citet{niebur1996salience} also tried to include top-down factors to build salience maps and thus predict where in an image scene people will most likely look. Current state-of-the-art computational salience models are artificial neural networks pre-trained on large data sets for visual object recognition and subsequently tuned to predict fixations, as is the case of Deep Gaze II \citep{kuemmerer2016deepgaze}. Such models no longer rely only on bottom-up features, but also incorporate higher-level features learnt on object recognition tasks. Still, despite their better performance on salience benchmarks, models based on deep nets seem to fail at predicting the salience driven by low-level features \citep{kuemmerer2017icfdeepgaze}.
Salience maps provide a highly accurate and robust method to predict human eye movement behaviour on static images, by relying on local features to determine which parts of an image are most salient \citep{niebur1996salience, itti2001salience, kowler2011eyemovements}. However, these methods do not provide any information about the salience of the image as a whole, which may depend on both local properties and the overall semantic and contextual information of the image. Such global salience is of great relevance when an observer is faced with two or more independent visual stimuli in one context, that is, when several stimuli compete with each other with regard to their individual semantic content, despite being in the same overall context. Such cases appear frequently in real life, for instance when two billboards hang next to each other in a mall, when several windows are open on a computer screen, or on the monitors of an intensive care unit, to name a few examples. Thus, by placing two or more independent image contexts side by side, as described in the previous examples, classical salience maps may well predict eye movement behaviour within each of the individual images as a closed system, but they will most likely fail to predict visual behaviour across the whole scene involving all images. Specifically, they will fail at answering the question: which stimulus is most likely to attract the observer's visual attention?
\section{Hypotheses and contributions}
\label{sec:globsal-contributions}
In this chapter, we present a study in which we postulate several hypotheses. Our primary hypothesis (H1) is that it is possible to measure and calculate the global salience of natural images. That is, the likelihood of a visual stimulus to attract the first fixation of a human observer, when it is presented in competition alongside another stimulus, can be systematically modelled. In the experiment presented here, participants were confronted with stimuli containing two individual natural images---one on the left and one on the right side of the screen---at the same time. The set of images used to build our stimuli consisted of urban, indoor and nature scenes, close-ups of human faces and scenes with people in a social context. During the observation of the image pairs, we recorded the participants' eye movements. Specifically, to characterise the global salience we were interested in the direction---left or right---of the initial saccade the participant made after the stimulus onset. For further analysis, we also collected all binary saccade decisions on all the image pairs presented to the participants. We used the behavioural data collected from the participants to train a logistic regression model that successfully predicts the location of the first fixation for a given pair of images. This allowed us to use the coefficients of the model to characterise the likelihood of each image to attract the first fixation, relative to the other images in the set. In general, images that were fixated more often are ranked higher than other images. Hence, we computed a unique \textit{attraction score} for each image that we denote \textit{global salience}, which depends on the individual contextual information of the image as a whole.
We also analysed the local salience properties of the individual images and compared them to the global salience. We claim that the global salience cannot be explained by feature-driven salience maps. Formally, we hypothesise that (H2): Natural images have a specific global salience, independent of their local salience properties, that characterises their likelihood to attract the first fixation of human observers when presented alongside another competing stimulus. A larger global salience leads to a higher attraction of initial eye movements.
In order to properly calculate the global salience, we accounted for general effects of visual behaviour in stimuli with two paired images. Previous studies have shown that humans tend to exhibit a left bias when scanning visual stimuli. \citet{barton2006leftbias} showed that subjects looking at faces fixated longer on the eye on their left side, even if the faces were inverted, and the effect was later confirmed and extended to dogs and monkeys \citep{guo2009leftbias}. For an extensive review of spatial biases see the work by \citet{ossandon2014spatialbiases}, where the authors presented evidence of a marked initial left bias in right-handers, but not in left-handers, regardless of their habitual reading direction. In sum, there is a large body of evidence of lateral asymmetry in viewing behaviour, although the specific sources are yet to be fully confirmed. With respect to our study, we hypothesise that (H3): Presenting images in horizontal pairs leads to a general spatial bias in favour of the image on the left side.
In addition to the general left bias, in half of the trials of the experimental sessions, one of the images had already been seen by the participant in a previous trial, while the other was new. The participants also had to indicate which of the images was new or old. Thus, we also addressed the question of whether familiarity with one of the images or the task have any effect on visual behaviour and thus on the global salience of the images. Do images that show the task-relevant scene attract more initial saccades? Likewise, are novel images more likely to attract the first fixation? These questions shed some light on central-peripheral interaction in visual processing. \citet{guo2007topdown}, for instance, showed that during face processing humans indeed rely on top-down information in scanning images. However, \citet{acik2010bottomuptopdown} proposed that young adults usually rely on bottom-up rather than top-down information during visual search. In this regard, we thus hypothesise that (H4): Task-relevance and familiarity of images will not lead to a higher probability of being fixated first. In order to account for any spatial bias effects that could influence the global salience model, we added coefficients to the logistic regression algorithm that could potentially capture any lateral, familiarity and task effects. This not only makes the model more accurate, but also allows us to analyse the influence of these effects. Furthermore, the location of the images in the experiments was randomised across trials and participants.
Finally, in order to better understand the properties of the global salience of competing stimuli, we also analysed the exploration time of each image. In this regard, we hypothesise the following (H5): Images with larger global salience will be explored longer than images with low global salience.
\section{Methods: experimental setup}
\label{sec:globsal-methods}
The present study was conducted in the Neurobiopsychology lab at the Institute of Cognitive Science of the University of Osnabr\"uck, Germany. The experimental methods were approved by the Ethical Committee of the University of Osnabr\"uck, Germany, and performed in accordance with the guidelines of the German Psychological Society. All participants gave written consent to participate in this study.
\subsection{Participants}
\label{sec:globsal-methods_participants}
Forty-nine healthy participants (33 females, mean age = 22.39 years, \textit{SD} = 3.63) with normal or corrected-to-normal vision took part in this study. All participants were instructed to freely observe the stimuli on the screen. In part of the measurements, they had to indicate after the trial the old or new image of a pair as further described below.
\subsection{Apparatuses}
\label{sec:globsal-methods_apparatuses}
We presented the stimuli on a 32'' widescreen Samsung monitor (Apple, California, USA) with a native resolution of 3840 $\times$ 2160 pixels. For eye movement recordings, we used a stationary EyeLink 1000 eye tracker (SR Research Ltd.), providing binocular recordings with one head camera and two eye cameras at a sampling rate of 500 Hz.
Participants were seated in a darkened room at a distance of 80 cm from the monitor, resulting in 80.4 pixels per visual degree at the centre of the monitor. We did not stabilise the participant's head with a headrest, but verbally instructed the participants not to make head movements during the experiment. This facilitated comfortable conditions for the participants. However, the eye tracker constantly recorded four edge markers on the screen with the head camera, in order to correct for small head movements. This guaranteed stable gaze recordings based on eye movements, independent of residual involuntary head movements.
The eye tracker measured binocular eye movements. For calibration of the eye tracking camera, each participant had to fixate on 13 black circles (size 0.5\textdegree) that appeared consecutively at different screen locations. The calibration was validated afterwards by calculating the drift error for each point. The calibration was repeated until the system reached an average accuracy of \textless~0.5\textdegree\ for both eyes of the participant.
\subsection{Stimuli}
\label{sec:globsal-methods_stimuli}
The image set consisted of 200 images, of which 197 were natural photographs and 3 were randomly generated pink noise images. Altogether, the stimulus set was divided into 6 categories, according to the image content: human faces, urban scenes, natural landscapes, indoor scenes, social activities and pink noise. All the photographs were obtained from either the internal image database of the Neurobiopsychology laboratory at the University of Osnabr\"uck, Germany, or the NimStim database. Each image was scaled to a resolution of 1800 $\times$ 1440 pixels. Some examples are shown in Figure~\ref{fig:globsal-stimuli_samples}.
Each trial consisted of one stimulus with a resolution of 3840 $\times$ 2160 pixels, matching the full-size screen resolution of the display monitor (32'' diagonal; 47.8\textdegree\ $\times$ 26.9\textdegree). Within each presented stimulus, two images were randomly paired; that is, one image was shown on the left side of the screen and the other on the right. Between both images, each stimulus contained a central gap of 240 pixels, as illustrated in Figure~\ref{fig:globsal-stimuli_pair}. The background area of the stimuli was set to middle grey.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/stimuli_samples.png}
\caption{Four images from the set}
\label{fig:globsal-stimuli_samples}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.48 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/stimuli_pair.png}
\caption{The stimulus at one trial}
\label{fig:globsal-stimuli_pair}
\end{subfigure}
\\
\begin{subfigure}{0.43 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/trial_notask.png}
\caption{Procedure in blocks 1 and 4, where no task was given}
\label{fig:globsal-trial_notask}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.53 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/trial_task.png}
\caption{Procedure in blocks 2 and 3, where participants had to perform a simple task after the stimulus offset}
\label{fig:globsal-trial_task}
\end{subfigure}
\caption{Experimental setup}
\label{fig:globsal-experiment}
\end{figure}
\subsection{Procedure}
\label{sec:globsal-methods_procedure}
The experiment consisted of 200 trials divided into four blocks, at the beginning of which the eye-tracking system was re-calibrated. The blocks were designed such that each had a different combination of task and image novelty:
\begin{itemize}
\item \textbf{Block 1} consisted of 25 trials formed by 50 distinct, novel images (new/new). This block was task-free, that is participants were guided to freely observe the stimuli (Figure~\ref{fig:globsal-trial_notask}).
\item \textbf{Block 2} consisted of 75 trials, each formed by one new image and one previously seen image (new/old or old/new). In this block, the participants were guided to freely observe the stimuli and, additionally, they were asked to indicate the \textit{new} image of the pair after the stimulus offset (Figure~\ref{fig:globsal-trial_task}).
\item \textbf{Block 3} consisted of 75 trials, each formed by one new image and one previously seen image (new/old or old/new). In this block, the participants were asked to indicate the \textit{old} image of the pair.
\item \textbf{Block 4} consisted of 25 trials formed by 50 previously seen images (old/old). Like block 1, this block was also task-free.
\end{itemize}
The decision in blocks 2 and 3 was indicated by pressing either the left---task-relevant image on the left side---or the right---task-relevant image on the right side---arrow key on a computer keyboard.
The image pairs were formed by randomly sampling from the set of 200 images, with some constraints set in order to satisfy the characteristics of each block and to balance the number of times each image was seen by the participant. The sampling process was as follows: in block 1, 50 images were randomly sampled to form the 25 pairs. In blocks 2 and 3, in order to construct the new/old and old/new pairs, the new image was randomly sampled from the set of remaining unseen images, and the old image was randomly sampled from the set of previously seen images, with two additional constraints: it must have been shown exactly once before, and not in the previous 5 trials. Finally, in block 4, a set of exactly 50 images that had been shown only once remained; these were used to randomly sample the remaining 25 trials. In all blocks, after sampling the two images, the left/right configuration was randomly chosen with probability 0.5.
The sampling process was different for each participant, that is they saw different sets of pairs from the 40,000 different pairs and in different order. This aimed at reducing the predictability of the process while satisfying the experimental constraints. Overall, we collected data from 9,800 pairs, some of which might have been repeated across participants. However, note that each participant saw each image exactly twice, therefore the frequency of presentation of the images was balanced across the whole experiment. As we will see in the following section, the amount of data was enough to fit the computational model.
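The constraints on the old image can be made explicit with a short Python function. This is an illustrative sketch only: the function name and the data layout (a list of left/right pairs) are hypothetical, not taken from the experiment code.

```python
import random
from collections import Counter

def sample_old_image(trials, window=5, rng=random):
    """Sample the 'old' image for a new/old trial (illustrative sketch).

    trials: list of (left, right) image pairs already shown to the
    participant. Per the experimental constraints, the old image must
    have been shown exactly once before and must not appear in the
    previous `window` trials.
    """
    counts = Counter(image for pair in trials for image in pair)
    recent = {image for pair in trials[-window:] for image in pair}
    candidates = [image for image, count in counts.items()
                  if count == 1 and image not in recent]
    return rng.choice(candidates)
```

Running this selection independently for each participant yields the per-participant pair sequences described above.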
In all cases, the presentation time for each stimulus was 3 seconds and it was always preceded by a blank, grey screen with a white, central fixation dot. The stimulus was displayed only after the participant fixated the central dot.
The majority of our analyses focused on the first fixation. As a pre-processing stage, we discarded the fixations that 1) were due to anticipatory saccades, 2) were shorter than 50 ms or longer than $\mu_{dur} + 2 \sigma_{dur}$ ms, where $\mu_{dur} = 198$ ms and $\sigma_{dur} = 90$ ms are the mean and standard deviation of all fixation durations, respectively, or 3) were located on neither of the two images. The discarded fixations amounted to less than 4~\% of the total.
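The duration and location criteria of this filter can be expressed compactly. The mean and standard deviation below are those reported in the text, while the function name and array-based interface are assumptions for illustration; the anticipatory-saccade criterion is omitted for brevity.

```python
import numpy as np

def filter_fixations(durations, on_image, mu=198.0, sigma=90.0):
    """Return a boolean mask of fixations to keep (illustrative sketch).

    durations: fixation durations in ms
    on_image:  True where the fixation landed on one of the two images
    Keeps fixations between 50 ms and mu + 2*sigma ms (378 ms with the
    statistics reported above) that fell on an image.
    """
    upper = mu + 2.0 * sigma
    return (durations >= 50.0) & (durations <= upper) & on_image
```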
\section{Methods: computation of global salience}
\label{sec:globsal}
In order to characterise the global salience of competing stimuli, we trained a logistic regression model with the behavioural data from the eye-tracking experiments. Provided that the model can accurately predict the location of the first fixation---left or right---the coefficients for each image will represent the likelihood of the image to attract the first fixation and this, in turn, can then be interpreted as the global image salience. The intuition is that images that are more often the target of the first fixation after the stimulus onset have a higher global salience, and vice versa.
\subsection{Logistic regression for pairwise estimates}
\label{sec:globsal-logreg_plain}
Typically, logistic regression is used in binary classification problems, as in this case, where the initial fixation after stimulus onset can land either on the left ($y = -1$) or on the right ($y = 1$) image. The classifier estimates the probability $h_{w}(\mathbf{x})$ of the binary event by applying a logistic function to the linear hypothesis $\mathbf{w}^{T}\mathbf{x}$:
\begin{equation}
\label{eq:logreg}
h_{w}(\mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w}^{T}\mathbf{x}}} = \frac{e^{\mathbf{w}^{T}\mathbf{x}}}{1 + e^{\mathbf{w}^{T}\mathbf{x}}}
\end{equation}
where $\mathbf{x}$ is a vector that represents the independent or explanatory variables (features) and $\mathbf{w}$ the coefficients to be learned. Thus, the likelihood of the binary outcome given the data is the following:
\[
P(y|\mathbf{x}) =
\begin{cases}
h_{w}(\mathbf{x}) &\quad\text{if } y = 1\\
1 - h_{w}(\mathbf{x}) &\quad\text{if } y = -1\\
\end{cases}
= \frac{e^{y\mathbf{w}^{T}\mathbf{x}}}{1 + e^{y\mathbf{w}^{T}\mathbf{x}}}
\]
The coefficients are then optimised by minimising the negative log-likelihood $-\log P(y|\mathbf{x})$ through gradient descent. Typically, a regularisation penalty on the coefficients is added, controlled by the parameter $C$---the inverse of the regularisation strength. In our case, we applied $L_2$ regularisation and therefore the algorithm solves the following optimisation problem, given a set of $N$ training data points (trials):
\begin{equation}
\label{eq:objective}
\min_{\mathbf{w}} \frac{1}{2}\mathbf{w}^{T}\mathbf{w} + C\sum_{i=1}^{N}\log\left(1 + e^{-y_{i}\mathbf{w}^{T}\mathbf{x}_{i}}\right)
\end{equation}
The optimisation problem was solved through the \textit{LIBLINEAR} algorithm \citep{fan2008liblinear}, available in the \texttt{scikit-learn} Python toolbox.
In our particular case, for every trial $i$---stimulus pair seen by a participant---each feature $x_{ij}$ corresponded to one image $j$ and only two images were shown at each trial. Therefore, we were interested in modelling the probability that one image $u$ receives the first fixation when presented next to another image $v$; hence $p(u > v)$. This simplifies the standard logistic regression model to a special case for pairwise probability estimates, known as the Bradley-Terry-Luce model \citep{bradley1952pairwisecomp, luce2005pairwisecomp}, where the probability $h_{w}$ is the following:
\begin{equation}
\label{eq:btl}
h_{w}(u,v) = p(u > v) = \frac{e^{w_{u}}}{e^{w_{u}} + e^{w_{v}}} = \frac{e^{w_{u}-w_{v}}}{1 + e^{w_{u}-w_{v}}}
\end{equation}
where $w_{u}$ and $w_{v}$ are the coefficients of images $u$ and $v$. This is a special case of the function in Equation~\ref{eq:logreg}, where all the elements in the feature vector $\mathbf{x}$ are zero except for the two paired features $x_{u}$ and $x_{v}$, which are set to 1 and -1, respectively. Note that in the Bradley-Terry-Luce model the coefficients still refer to the whole set of features and are therefore described by an $M$-dimensional vector $\mathbf{w} = \{w_{1}, w_{2}, ..., w_{M}\}$, where in our case $M=200$, the total number of images in the set. After training the model, each learnt coefficient $w_{j}$ is related to the average likelihood of image $j$ receiving the first fixation when presented next to other images from the set. As stated above, we interpret these coefficients $\mathbf{w}$ as a measure of global image salience.
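Equation~\ref{eq:btl} reduces to a logistic function of the coefficient difference, which can be written in a couple of lines (the function name is ours, chosen for illustration):

```python
import math

def p_first_fixation(w_u, w_v):
    """Bradley-Terry-Luce probability h_w(u, v) that image u attracts
    the first fixation when paired with image v."""
    return 1.0 / (1.0 + math.exp(-(w_u - w_v)))
```

Equal coefficients yield chance level (0.5), and the probabilities of the two images in a pair sum to one, as required of a pairwise model.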
In order to estimate the coefficients $\mathbf{w}$, the logistic regression model was trained on the data set arranged into a design matrix $X$ of the following form:
\begin{equation}
\label{eq:matrix_x}
X =
\begin{bmatrix}
x_{1}^{(1)} & x_{2}^{(1)} & \dots & x_{M}^{(1)} \\
x_{1}^{(2)} & x_{2}^{(2)} & \dots & x_{M}^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{(N)} & x_{2}^{(N)} & \dots & x_{M}^{(N)} \\
\end{bmatrix}
\end{equation}
where each row represents one measured data point: one trial where one participant was presented a pair of images $u$ and $v$---the total number of trials was in our case $N = 49~\text{participants} \times 200~\text{trials per participant} = 9800$---and where the columns represent the values of the different features (images) that were tested ($M = 200$). According to Equation~\ref{eq:btl}, if image $u$ is presented on the right and image $v$ is presented on the left at trial $i$, then $x_{u}^{(i)}=1$, $x_{v}^{(i)}=-1$ and $x_{j}^{(i)}=0, \forall~j \neq u,v$. Finally, the outcome of each trial is given as a vector $\mathbf{y}$:
\[
\mathbf{y} =
\begin{bmatrix}
y^{(1)} \\
y^{(2)} \\
\vdots \\
y^{(N)} \\
\end{bmatrix}
\]
such that $y^{(i)} = 1$ if the right image was fixated first, and $y^{(i)} = -1$ if the left image was fixated first at trial $i$.
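For concreteness, the construction of $X$ and $\mathbf{y}$ can be sketched as below (a minimal NumPy version, using 0-based indices and hypothetical trial tuples of our own naming):

```python
import numpy as np

def btl_design(trials, M):
    """Build the Bradley-Terry-Luce design matrix of Equation (eq:matrix_x).

    trials: iterable of (u, v, y) where u is the 0-based index of the image
            shown on the right, v the index of the image on the left, and
            y = +1 if the right image was fixated first, -1 otherwise.
    """
    N = len(trials)
    X = np.zeros((N, M))
    y = np.empty(N)
    for i, (u, v, outcome) in enumerate(trials):
        X[i, u] = 1.0    # right image
        X[i, v] = -1.0   # left image
        y[i] = outcome
    return X, y

# Toy example: M = 4 images, three trials.
X, y = btl_design([(0, 1, 1), (2, 3, -1), (1, 2, 1)], M=4)
```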
\subsection{Task, familiarity and lateral bias}
\label{sec:globsal-biases}
Beyond modelling the likelihood of each image receiving the first fixation, we were also interested in the contribution of other aspects of the experiment, namely the effect of having to perform a small task when observing the pair of images and the effect of familiarity with one of the two images. More specifically, we were interested in answering the following questions: Do light task demands, such as having to determine which image is new or old, influence the direction of the first saccade? And are unseen stimuli more likely to receive the initial saccade than previously observed stimuli when presented together, or vice versa?
We addressed these questions by adding new features to the model that capture these characteristics of the experimental setup. These features were assigned coefficients that, after training, would indicate the magnitude of the contribution of each effect. In particular, we added the following feature columns to every row $i$ of the design matrix:
\begin{itemize}
\item $t^{(i)}$: 1 if the target of the task (select new/old image) was on the right at trial $i$, -1 if on the left, 0 if no task.
\item $f^{(i)}$: 1 if at trial $i$, the image on the right had been already shown at a previous trial (familiar), while the image on the left was still unseen; -1 if the familiar image was on the left; 0 if both images were new or familiar.
\end{itemize}
These new features not only enabled new elements for the analysis, but also added representational power to the model, which could thus potentially learn better coefficients to describe the global salience of each image. Along these lines, we added one more set of features to capture an important aspect of visual exploration: the lateral bias. Although a single intercept term in the argument of the logistic function ($\mathbf{w}^{T}\mathbf{x} + b$) would capture most of the lateral bias, since the outcome $\mathbf{y}$ describes exactly the lateral direction, left or right, of the first saccade, we instead added subject-specific features to model the fact that the trials were generated by different subjects, each with an individual lateral bias. This was done by adding $K = 49$ (the number of participants) features $s_{k}^{(i)}$, with value 1 if trial $i$ was performed by subject $k$ and 0 otherwise. Altogether, the final design matrix $X^{\prime}$ extends the design matrix $X$ defined in Equation~\ref{eq:matrix_x} as follows:
\begin{equation}
\label{eq:matrix_x_prime}
X^{\prime} =
\begin{bmatrix}[ccc|c|c|ccc]
x_{1}^{(1)} & \dots & x_{M}^{(1)} & t^{(1)} & f^{(1)} & s_{1}^{(1)} & \dots & s_{K}^{(1)} \\
x_{1}^{(2)} & \dots & x_{M}^{(2)} & t^{(2)} & f^{(2)} & s_{1}^{(2)} & \dots & s_{K}^{(2)} \\
\vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
x_{1}^{(N)} & \dots & x_{M}^{(N)} & t^{(N)} & f^{(N)} & s_{1}^{(N)} & \dots & s_{K}^{(N)} \\
\end{bmatrix}
\end{equation}
Note that the leftmost block of $X^{\prime}$ is identical to $X$ (defined in Equation~\ref{eq:matrix_x}). While the shape of $X$ is $9800 \times 200$, $X^{\prime}$ is a $9800 \times 251$ matrix, since $200 + 1 + 1 + 49 = 251$.
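A possible construction of $X^{\prime}$ from $X$ is sketched below, with hypothetical variable names of our own:

```python
import numpy as np

def extend_design(X, task, fam, subject, K):
    """Append the task, familiarity and per-subject bias columns to X,
    yielding X' as in Equation (eq:matrix_x_prime).

    task, fam: per-trial values in {-1, 0, 1} as defined in the text.
    subject:   0-based subject index of each trial; K: number of subjects.
    """
    N = X.shape[0]
    S = np.zeros((N, K))
    S[np.arange(N), subject] = 1.0  # one-hot subject indicators s_k
    return np.hstack([X, np.asarray(task, float)[:, None],
                      np.asarray(fam, float)[:, None], S])

# Toy example: 3 trials, M = 4 images, K = 2 subjects.
X = np.zeros((3, 4))
Xp = extend_design(X, task=[1, 0, -1], fam=[0, -1, 1],
                   subject=[0, 1, 1], K=2)
```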
\subsection{Validation and evaluation of the model}
\label{sec:globsal-model_eval}
In order to ensure the successful training of the model, we carried out a 5-fold cross-validation of the regularisation parameter $C$ of the model, described in Equation~\ref{eq:objective}. That is, we split our data set into 5 different folds of 39 subjects for training and 10 for validation---7,800 and 2,000 trials, respectively---and evaluated the performance with 10 different values of $C$, according to the following search space:
\[C = 10^{p} \quad\text{with}~p = -3 + \frac{2}{3}(n-1) \quad\text{and}~n=1, ..., 10 \]
The value that provided the best average performance across the folds was selected.
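This search can be sketched with \texttt{scikit-learn}, grouping the folds by participant so that no subject appears in both training and validation; the data below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, GroupKFold

# The search space above: ten values of C spaced logarithmically
# between 10^-3 and 10^3.
C_grid = np.logspace(-3, 3, 10)

# Synthetic stand-in: 10 "participants" with 20 trials each, so that
# the folds split subjects rather than individual trials.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)
groups = np.repeat(np.arange(10), 20)

search = GridSearchCV(
    LogisticRegression(penalty="l2", solver="liblinear",
                       fit_intercept=False),
    param_grid={"C": C_grid},
    cv=GroupKFold(n_splits=5),
)
search.fit(X, y, groups=groups)
best_C = search.best_params_["C"]  # value with the best mean fold accuracy
```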
In order to reliably assess the model performance while making the most of the data set, we embedded the cross-validated model into a \textit{leave-2-participants-out} cross-evaluation. That is, we constructed 25 different folds of data, each with the trials of 47 participants for training and of 2 participants for evaluation. We report here the average performance across the 25 test and train partitions together with the standard deviation (within brackets). In particular, in Table \ref{tab:performance} we include the area under the curve (AUC), the Tjur\footnote{While there is no consensus about the best metric for the evaluation of logistic regression, the coefficient of discrimination $R^{2}$ proposed by \citet{tjur2009r2} has been widely adopted recently, as it is more intuitive than other definitions of coefficients of determination and still asymptotically related to them.} coefficient of discrimination $R^{2}$ and the accuracy. For the sake of an easier interpretation, we include the theoretical baseline values of the AUC and $R^{2}$, and the empirical baseline accuracy on our test partitions.
\begin{table}[ht]
\centering
\begin{tabular}{rccc}
\multicolumn{1}{l}{} & AUC & Tjur $R^{2}$ & Accuracy \\
\hline \\
Test & 0.8884 (0.0180) & 0.4287 (0.0460) & 81.36 \% (0.32) \\
Train & 0.8865 (0.0040) & 0.4240 (0.0214) & 81.99 \% (1.52) \\
Random baseline & 0.5 & 0.0 & 60.70 \% (2.32)
\end{tabular}
\caption{Test, train and baseline performance of the logistic regression model. Values within brackets indicate the standard deviation across the folds.}
\label{tab:performance}
\end{table}
The results in Table~\ref{tab:performance} show that the logistic regression model successfully learned the behavioural patterns from the experimental data and hence accurately predicted the direction of the first saccade, with very little overfitting, since train and test performance were very similar and had low variance. This implies that the learned coefficients can be meaningfully used for further analysis, as presented in Section~\ref{sec:results}.
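The metrics reported in Table~\ref{tab:performance} can be computed as sketched below (synthetic data; the Tjur coefficient is the mean predicted probability of the positive class among positives minus the same mean among negatives):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

def tjur_r2(y_true, p_pos):
    """Tjur's coefficient of discrimination: difference between the mean
    predicted positive-class probability among positive and negative
    examples."""
    y_true, p_pos = np.asarray(y_true), np.asarray(p_pos)
    return p_pos[y_true == 1].mean() - p_pos[y_true == -1].mean()

# Synthetic stand-in for one evaluation fold.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = np.where(X[:, 0] + 0.5 * rng.normal(size=300) > 0, 1, -1)
clf = LogisticRegression(solver="liblinear").fit(X, y)
p = clf.predict_proba(X)[:, list(clf.classes_).index(1)]

auc = roc_auc_score(y, p)
r2 = tjur_r2(y, p)
acc = accuracy_score(y, clf.predict(X))
```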
\section{Methods: salience maps of competing stimuli}
\label{sec:locsal}
In order to test whether the global salience is independent of the lower-level salience properties of the stimuli (H2), we also computed salience maps both of each individual image and of each full stimulus shown at each trial, that is, the pair of images on a grey background, as shown in Figure~\ref{fig:globsal-stimuli_pair}. For the computation of the salience maps we used the Graph-Based Visual Salience algorithm (GBVS) \citep{harel2007gbvs}, a computational salience model that makes use of well-defined low-level features.
Moreover, we also analysed the connection between global salience and a less restricted salience model, Deep Gaze II \citep{kuemmerer2016deepgaze}, whose features include higher level cues, since it is a deep neural network model pre-trained for large scale, image object recognition tasks, with additional features optimised for salience prediction.
In order to compare the salience maps with the behavioural data from the observation of competing stimuli, as well as with our derived global salience, we performed the following tests:
\subsection{Predictivity of salience maps for the first fixation}
\label{sec:locsal-firstfix}
In this case, our aim was to evaluate the performance of salience maps in predicting the landing location of the first fixation when two competing stimuli are presented. To do so, we computed the Kullback-Leibler Divergence between the first fixation distribution $F_{j}(b)$ and the salience distribution $S_{j}(b)$ for every image $j$ in the set of 200 images:
\begin{equation}
D_{KL}(F_{j}||S_{j}) = \sum_{b=1}^{B} F_{j}(b) \log(\frac{F_{j}(b)}{S_{j}(b)+\epsilon}+\epsilon)
\label{eq:kld}
\end{equation}
where $\epsilon$ is a small constant to ensure numerical stability and $b$ indexes the $B$ bins of $1 \times 1$ degrees of visual field angle.
The first fixation distribution, $F_{j}(b)$, is the probability density distribution of all the first fixations made by all observers on each image $j$. To compute $F_{j}(b)$, we divided every image into bins of one square degree of visual field angle and counted the number of first fixations made by all participants on each bin to obtain a histogram. The histogram was then smoothed using a Gaussian kernel with a size of one degree of visual field angle and normalised so that it became a probability distribution. The salience distribution, $S_{j}(b)$, is the likewise smoothed and normalised salience map---computed with GBVS or Deep Gaze II---of each individual image $j$.
Hence, according to the definition in Equation~\ref{eq:kld}, a low $D_{KL}(F_{j}||S_{j})$ would imply a good match between the location of the first fixations and the salience map of image $j$.
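The computation of $D_{KL}(F_{j}||S_{j})$, including the smoothing and normalisation of the fixation histogram, can be sketched as follows (a minimal version with hypothetical inputs; the smoothing width is expressed in bins here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kld_first_fixations(fix_counts, salience, sigma=1.0, eps=1e-8):
    """Kullback-Leibler divergence of Equation (eq:kld) between the
    smoothed, normalised first-fixation histogram F and a salience map S
    defined over the same grid of bins."""
    F = gaussian_filter(fix_counts.astype(float), sigma)
    F /= F.sum()
    S = salience.astype(float)
    S /= S.sum()
    return float(np.sum(F * np.log(F / (S + eps) + eps)))

# Toy check on an 8x8 grid of bins: a salience map matching the fixation
# distribution yields a lower divergence than a uniform one.
h = np.zeros((8, 8))
h[3, 4], h[2, 2] = 5.0, 1.0
d_match = kld_first_fixations(h, gaussian_filter(h, 1.0))
d_uniform = kld_first_fixations(h, np.ones((8, 8)))
```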
\subsection{Comparison between global and local salience}
\label{sec:locsal-glob_vs_loc}
In order to compare the local salience maps and the global salience scores learned by the computational model presented in Section~\ref{sec:globsal}, we analysed the GBVS and Deep Gaze salience maps of both the individual images and the whole stimuli, in relation to the global salience scores.
\subsubsection{Individual images}
First, we compared the Kullback-Leibler Divergence between the first fixation distribution and the salience maps of the individual images, as computed in Equation~\ref{eq:kld}, with the global salience scores, that is, the coefficients learned by the optimisation defined in Equation~\ref{eq:objective}. The aim was to analyse whether, for instance, images whose local salience properties indeed drove the location of the first fixation also have a higher global salience score, and vice versa.
\subsubsection{Trials}
Second, we looked at the properties of the salience map of the final stimulus seen by the participants at each trial, that is the paired competing images with a grey background (see Figure~\ref{fig:globsal-stimuli_pair}). As a metric of the contribution of each image to the salience map, for each trial $i$ we computed the relative salience mass $M$ of each image, left and right:
\begin{equation}
\label{tab:salience_mass}
M_{i}^{L} = \int_{x \in X_{L}} S_{i}(x)\,dx \qquad M_{i}^{R} = \int_{x \in X_{R}} S_{i}(x)\,dx
\end{equation}
where $S_{i}(x)$ is the normalised salience map of the whole stimulus presented at trial $i$ and $X_{L}$ and $X_{R}$ are the stimulus locations corresponding to the left and right images, respectively. A significant positive correlation between $\Delta_{M}^{(i)} = M_{i}^{L} - M_{i}^{R}$ and the difference between the global salience scores of the images on the left and right, $\Delta_{GS}^{(i)}= w_{L}^{(i)} - w_{R}^{(i)}$, would indicate that the local salience properties can partly explain the direction of the first fixation.
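These quantities can be sketched as below, with hypothetical names and a synthetic correlation check in place of the real trial data:

```python
import numpy as np
from scipy.stats import pearsonr

def salience_mass_split(S, left_cols, right_cols):
    """Relative salience mass of the left and right image regions of a
    trial's normalised salience map S (regions given as column slices)."""
    S = S / S.sum()
    return S[:, left_cols].sum(), S[:, right_cols].sum()

# Toy trial map with more salience mass on the left half.
S = np.ones((10, 20))
S[:, :10] *= 3.0
mL, mR = salience_mass_split(S, slice(0, 10), slice(10, 20))
delta_M = mL - mR  # Delta_M > 0: the left image carries more mass

# Correlation between Delta_M and Delta_GS across trials
# (synthetic, weakly related values purely for illustration):
rng = np.random.default_rng(0)
dM = rng.normal(size=50)
dGS = 0.1 * dM + rng.normal(size=50)
r, p = pearsonr(dM, dGS)
```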
\section{Results}
\label{sec:results}
In this section, we present the main results of our analyses and discuss the validity of the hypotheses presented in Section~\ref{sec:globsal-contributions}. Each of the sub-sections focuses on one of the five hypotheses, in the natural order. All the scatter plots that show the relationship between two variables include the value of the Pearson correlation, as well as the line fit by a linear regression model, with 95 \% confidence intervals estimated using bootstrap with 1,000 resamples.
\subsection{Global visual salience}
\label{sec:results-global_salience}
In our first hypothesis (H1), we stated that images can be ranked according to a specific global salience that leads to the attraction of initial eye fixations. In order to quantify the global salience of individual images, we have presented in Section~\ref{sec:globsal} a computational model that successfully predicts the direction of the first fixation from the behavioural data, as validated by the results in Section~\ref{sec:globsal-model_eval}, and thus we can analyse the coefficients of the model as indicators of the global salience of each image in the data set.
Importantly, the fact that the first fixation direction of the participants when exploring such competitive stimuli can be predicted by a computational model means that their behaviour was not random, but followed certain patterns. In order to shed some light on the nature of these patterns, in Figure~\ref{fig:globsal-stimuli_sorted} we show the complete set of stimuli ranked according to the global salience score learned by our model, and in Figure~\ref{fig:globsal-global_salience_categories} the value of the global salience score of each image, highlighting the differences between the image categories.
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\includegraphics[width = \linewidth]{\imgpath/stimuli_sorted_facesblurred.png}
\caption{Experimental stimuli, ranked according to the learned global salience. The stimulus with the highest global salience score is on the top-left corner and the rest are sorted with the x-axis changing fastest (row-major order). Faces have been blurred to preserve the identity.}
\label{fig:globsal-stimuli_sorted}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/global_salience.png}
\caption{Global salience score of each stimulus and image categories.}
\label{fig:globsal-global_salience_categories}
\end{subfigure}
\caption{Global salience scores of the experimental stimuli}
\label{fig:globsal-global_salience}
\end{figure}
Figure~\ref{fig:globsal-global_salience} shows that there exists a clear, general tendency to first fixate on the images that contain either close-up faces or scenes with humans, even though the first fixations occurred, on average, as early as 242 ms ($\sigma_{SD} = 66$ ms) after the stimulus onset. These two categories, faces and humans, were assigned the highest global salience scores. Urban, indoor and natural landscapes obtained significantly lower scores, with no large differences among the three categories. Finally, the three pink-noise images were assigned very low scores, which serves as a sanity check of our proposed method.
\subsection{Global vs. local salience}
\label{sec:results-local_salience}
A reasonable question in view of the results presented in Figure~\ref{fig:globsal-global_salience} is whether the global salience scores---and the ranking of the stimuli that arises from them---constitute a unique measure of the initial visual behaviour when facing competing stimuli, or whether this behaviour, and thus our proposed global salience, can be explained by the low-level properties of the respective stimuli.
In our second hypothesis (H2) we stated, instead, that the global salience is independent from the low-level local salience properties. So as to test this, we performed several tests, described in Section~\ref{sec:locsal}.
In Figure~\ref{fig:globsal-kld_distr_gbvs} we plot the distribution of the Kullback-Leibler Divergence between the first fixation maps and the GBVS local salience maps of the individual images (see Equation~\ref{eq:kld}). The mean of the distribution is significantly non-zero (two-tailed t-test p-value $< .001$, $\mu_{KLD} = 1.44$, $\sigma_{KLD} = 0.33$), which means that there is a significant loss of information when using a local salience map to predict the landing locations of the first fixations on a given image \citep{riche2013saliency}. In order to illustrate the mismatch, in Figure~\ref{fig:globsal-kld_distr_gbvs} we display three example images with the overlaid salience maps and the location of all the first fixations that landed on them. When the KLD value is minimal (a), the salience map can approximate the fixations, although this happened rarely. Already at KLD values around the mean, the performance of a salience map in predicting the landing location of fixations is rather mediocre (b), and it deteriorates further as the KLD increases (c).
\begin{figure}[ht]
\centering
\begin{subfigure}{0.75 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/kld_distr_gbvs.png}
\label{fig:globsal-kld_distr_gbvs_sub}
\end{subfigure}
\\
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_gbvs_min_image.png}
\caption{$D_{KL}(F||S) = 0.57$}
\label{fig:globsal-kld_gbvs_min_image}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_gbvs_mean_image.png}
\caption{$D_{KL}(F||S) = 1.44$}
\label{fig:globsal-kld_gbvs_mean_image}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_gbvs_max_image.png}
\caption{$D_{KL}(F||S) = 3.19$}
\label{fig:globsal-kld_gbvs_max_image}
\end{subfigure}
\caption{Top row: distribution of the Kullback-Leibler Divergence between the first fixations map and the GBVS local salience maps. Bottom row: images with the minimum, closest to the mean and maximum KLD, with their overlaid salience map and the location of the first fixations.}
\label{fig:globsal-kld_distr_gbvs}
\end{figure}
Perhaps not surprisingly, in view of the poor match between the salience maps and the first fixation maps, Figure~\ref{fig:globsal-kld_vs_globsal_gbvs} shows that the Kullback-Leibler Divergence between them does not correlate with the global salience scores. This means that the images which attract the first fixations towards salient regions (low KLD) do not tend to have high global salience scores, nor vice versa.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.6 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/legend_categories.png}
\end{subfigure}
\\
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/kld_vs_globsal_gbvs.png}
\caption{GBVS}
\label{fig:globsal-kld_vs_globsal_gbvs}
\end{subfigure}
\hspace{0.01 \linewidth}
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/kld_vs_globsal_deepgaze.png}
\caption{Deep Gaze}
\label{fig:globsal-kld_vs_globsal_deepgaze}
\end{subfigure}
\caption{Comparison between the global salience scores and the KLD between the first fixation distribution and the salience maps from the computational models}
\label{fig:globsal-kld_vs_globsal}
\end{figure}
Finally, we analyse in Figure~\ref{fig:globsal-mass_gbvs_vs_globsal} whether the direction of the first fixation when looking at competing stimuli, as modelled by our proposed global salience scores, can be explained by the difference in the low-level salience properties of the competing stimuli, as measured by the GBVS salience mass of each image (see Section~\ref{sec:locsal-glob_vs_loc}). Also in this case we found no significant correlation.
The noisy images included in the stimulus set serve once more as a validation of the expected results. When one of the images (left or right) was pink noise, the difference in GBVS salience mass was either very high or very low, as was the difference in global salience scores. In this case, both metrics do correlate, but as shown by the central scatter plot of Figure~\ref{fig:globsal-mass_gbvs_vs_globsal}, the feature-driven (GBVS) salience mass cannot explain the global salience scores learned by the model.
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/mass_gbvs_vs_globsal.png}
\caption{Correlation between the GBVS image salience mass and the global salience scores.}
\label{fig:globsal-mass_gbvs_vs_globsal}
\end{figure}
In order to better understand what drives the direction of the first fixation when faced with competing stimuli, we also compared our proposed global salience with properties of Deep Gaze II salience maps. As presented in Section~\ref{sec:locsal}, unlike GBVS, Deep Gaze does make use of higher-level information of the images to predict the salience maps, since it is a neural network pre-trained on image object recognition tasks. This allows it to model salience driven by faces or objects \citep{kuemmerer2017icfdeepgaze}, which makes it an interesting model against which to compare our global salience, since we saw in Section~\ref{sec:results-global_salience} that images containing faces and humans tend to get a higher global salience score.
In general, we observe that, unlike GBVS, measures derived from Deep Gaze salience maps exhibit a non-zero, yet moderate correlation with our proposed global salience. For instance, Figure~\ref{fig:globsal-kld_vs_globsal_deepgaze} shows a slight negative correlation between the global salience scores and the KLD between first fixation distributions and Deep Gaze salience maps. However, looking at the distribution of the Kullback-Leibler divergence in Figure~\ref{fig:globsal-kld_distr_deepgaze}, we see that the salience maps are still far from matching the location of the first fixations on the images. Finally, we also observed (see Figure~\ref{fig:globsal-mass_deepgaze_vs_globsal}) a non-zero correlation between the difference of global salience scores between the left and the right image, and the difference in salience mass computed with Deep Gaze.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.75 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/kld_distr_deepgaze.png}
\label{fig:globsal-kld_distr_deepgaze_sub}
\end{subfigure}
\\
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_deepgaze_min_image.png}
\caption{$D_{KL}(F||S) = 0.57$}
\label{fig:globsal-kld_deepgaze_min_image}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_deepgaze_mean_image.png}
\caption{$D_{KL}(F||S) = 1.44$}
\label{fig:globsal-kld_deepgaze_mean_image}
\end{subfigure}
\hspace{0.02 \linewidth}
\begin{subfigure}{0.3 \linewidth}
\centering
\includegraphics[width = \linewidth]{\imgpath/kld_deepgaze_max_image.png}
\caption{$D_{KL}(F||S) = 3.19$}
\label{fig:globsal-kld_deepgaze_max_image}
\end{subfigure}
\caption{Top row: distribution of the Kullback-Leibler Divergence between the first fixations map and the Deep Gaze local salience maps. Bottom row: images with the minimum, closest to the mean and maximum KLD, with their overlaid salience map and the location of the first fixations.}
\label{fig:globsal-kld_distr_deepgaze}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/mass_deepgaze_vs_globsal.png}
\caption{Correlation between the Deep Gaze image salience mass and the global salience scores.}
\label{fig:globsal-mass_deepgaze_vs_globsal}
\end{figure}
Taken together, we can conclude that our proposed computational model provided a robust method to rank images according to a unique global image salience that is independent of the low-level local salience properties of the stimuli, and we observed a non-zero, yet moderate correlation with a computational salience model that incorporates higher-level cues.
\subsection{Lateral bias}
\label{sec:results-spatialbias}
Our third hypothesis (H3) stated that a general spatial bias leads to a higher likelihood to first fixate on the left rather than the right image. We thus calculated the number of first saccades that landed on the left and the right image for each block separately (Figure~\ref{fig:globsal-first_saccade_blocks}). A 4 $\times$ 2 (block: 1, 2, 3, 4 $\times$ image side: left, right) repeated measures ANOVA (Greenhouse-Geisser corrected) revealed a general spatial bias of the initial saccade towards the left image, as indicated by a significant main effect of image side [\textit{F}(1, 48) = 30.833; \textit{p} \textless\ .001; \textit{$\eta_p^2$} = .391]. No further effects were found [all \textit{F} $\leq$ 2.594; all \textit{p} $\geq$ .074, all \textit{$\eta_p^2$} $\leq$ .051], showing that the left bias was present in all blocks to a similar extent. Thus, we can conclude that the participants generally directed their initial saccades more often at left-sided than right-sided images.
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/first_saccade_blocks.png}
\caption{Percentage of first saccades that targeted the left (red) and right (blue) images, at each block of the experimental session. Error bars depict the standard deviation of the mean. Note that considerably more first fixations landed on the left image, highlighting the lateral bias.}
\label{fig:globsal-first_saccade_blocks}
\end{figure}
Nonetheless, the error bars in Figure~\ref{fig:globsal-first_saccade_blocks} suggest a high variability of the lateral bias across subjects. In order to investigate this, we calculated the number of first saccades on the right image for each participant separately. Moreover, since our model included an individual bias term for each participant, as described in Section~\ref{sec:globsal}, we can also look at the magnitude of the coefficients learned by the model. In Figure~\ref{fig:globsal-subject_bias} we plot, for each participant, the percentage of first saccades towards the right image and their corresponding lateral bias term learned by the computational model. Both metrics are highly correlated---further highlighting the validity of the model---and reveal a high variability in the lateral bias across participants. Overall, 63 \% of all first fixations landed on the left image.
\begin{figure}[ht]
\centering
\includegraphics[width = 0.6 \linewidth]{\imgpath/subject_bias.png}
\caption{Lateral bias of each participant, as measured by the percentage of first fixations onto the right image and the lateral bias terms learned by our computational model. Both metrics are highly correlated and reveal the average left bias, but with high variability across participants.}
\label{fig:globsal-subject_bias}
\end{figure}
\subsection{Task and familiarity}
\label{sec:results-taskandfam}
Next, we investigated the effect of the familiarity with one of the images and of the task of selecting the already seen or unseen image, which the participants had to perform in blocks 2 and 3 of the experiment, respectively. In particular, we were interested in finding out whether there is a tendency to direct the initial saccade towards the task-relevant images or towards the new images, for instance. In our fourth hypothesis (H4) we stated that the task and familiarity should have little or no influence on the initial saccade. For that purpose, we first performed a 2 $\times$ 2 (task: select new, select old $\times$ fixated image: new image, old image) repeated measures ANOVA (Greenhouse-Geisser corrected). The results revealed no significant effects [all \textit{F} $\leq$ 1.936; all \textit{p} $\geq$ .170, all \textit{$\eta_p^2$} $\leq$ .039] (Figure~\ref{fig:globsal-first_saccade_tasks}). Thus, the provided tasks did not bias the initial saccade towards one of the two presented images. Nevertheless, we found that participants correctly identified 91.43\% of the new images in block 2 and 91.16\% of the old images in block 3. Hence, the task performance was highly above chance (50\%) and the participants were accurate in identifying the new and old images, respectively.
Also in this case, the same conclusion can be extracted from the coefficients learned by the model to capture the task and familiarity effects, which are -0.04 and -0.10, respectively, that is, very small and only slightly higher for the familiarity.
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/first_saccade_tasks.png}
\caption{Percentage of first saccades that targeted the new (purple) and old (green) images, at blocks 2 and 3, where participants had the task of indicating the new and old image, respectively. Error bars depict the standard deviation of the mean. No significant bias can be appreciated in this case.}
\label{fig:globsal-first_saccade_tasks}
\end{figure}
Taken together, spatial properties influenced the initial saccade in favour of fixating left-sided images first. Although task performance was very high, neither the task nor the familiarity with one of the images had an influence on the direction of the first fixation after stimulus onset. These results fully support our third and fourth hypotheses.
\subsection{Total exploration of images}
\label{sec:results-totalexplorationofimages}
In our fifth hypothesis (H5), we stated that images with higher global image salience lead to a longer exploration time than images with lower global salience. We thus calculated the relative dwell time on each image, left and right, for each trial. As an initial step, similarly to the analysis of the initial saccade, we analysed the potential effect of the spatial image location as well as the task and familiarity relevance on the exploration time.
With respect to the spatial image location, a 4 $\times$ 2 (block: 1, 2, 3, 4 $\times$ image side: left, right) repeated measures ANOVA (Greenhouse-Geisser corrected) revealed a significant main effect of block [\textit{F}(2.368, 113.668) = 12.066; \textit{p} \textless\ .001; \textit{$\eta_p^2$} = .201] but no further effects [all \textit{F} $\leq$ 2.232; all \textit{p} $\geq$ .109, all \textit{$\eta_p^2$} $\leq$ .044]. Thus, the total time of exploration did not depend on the spatial location of the images, as also shown in Figure \ref{fig:globsal-dwell_blocks}.
With respect to the task relevance---recall: block 2, select new image; block 3, select old image---we calculated a 2 $\times$ 2 (task: select new, select old $\times$ fixated image: new image, old image) repeated measures ANOVA (Greenhouse-Geisser corrected). The results revealed significant main effects of task [\textit{F}(1, 48) = 4.298; \textit{p} \textless\ .050; \textit{$\eta_p^2$} = .082] and fixated image [\textit{F}(1, 48) = 64.524; \textit{p} \textless\ .001; \textit{$\eta_p^2$} = .573], as well as an interaction between task and fixated image [\textit{F}(1, 48) = 36.728; \textit{p} \textless\ .001; \textit{$\eta_p^2$} = .433]. As shown in Figure \ref{fig:globsal-dwell_fam_task}, participants tended, in general, to spend more time exploring new rather than previously seen images. Furthermore, this effect was noticeably larger in block 2, where the task was to select the new image, than in block 3 (select old image).
In sum, the spatial location of images did not affect the total time of exploration. Instead, the task and familiarity had a considerable impact on the exploration time, revealing that new images were explored for longer than previously seen ones.
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/dwell_blocks.png}
\caption{Exploration as measured by the relative dwell time on the left (red) and right (blue) images, at each block of the experimental session. Error bars depict the standard deviation of the mean.}
\label{fig:globsal-dwell_blocks}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width = \linewidth]{\imgpath/dwell_fam_task.png}
\caption{Exploration as measured by the relative dwell time on the new (purple) and old (green) images, at blocks 2 and 3, where participants had the task of indicating the new and old image, respectively. Error bars depict the standard deviation of the mean.}
\label{fig:globsal-dwell_fam_task}
\end{figure}
For our main analysis regarding the interaction between exploration time and global image salience, we then contrasted the global salience score learned for each image with its respective dwell time averaged over all trials and subjects. The results revealed a significant positive correlation, indicating that images with larger global image salience led to a more intense exploration (Figure \ref{fig:globsal-dwell_scores}). Thus, global image salience describes not only a measure of which image attracts initial eye movements, but is also connected to longer exploration time, suggesting that global salience may describe the relative engagement of images.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.6 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/legend_categories.png}
\end{subfigure}
\\
\begin{subfigure}{0.6 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/dwell_scores.png}
\end{subfigure}
\caption{Total dwell time against the global salience score of each image, grouped by image category (see legend).}
\label{fig:globsal-dwell_scores}
\end{figure}
Taken together, our results suggest that task and familiarity---but not the spatial location of the images---influenced the exploration time, with higher dwell times on unseen images in combination with the task of selecting the new image. Note, however, that regarding the effects of task our findings are restricted to the specific task assigned in our experiments, that is, selecting which image is new or old. The effect of task on visual attention is an active field in visual perception research, and the results of multiple contributions should be considered together to draw robust conclusions. Finally, we also found that images with higher global salience led to correspondingly longer exploration times. These results fully support our fifth hypothesis.
\section{Discussion}
\label{sec:discussion}
We have presented a computational model trained on the saccadic behaviour of participants freely looking at pairs of competing stimuli, which is able to learn a robust score for each image, related to its likelihood of attracting the first fixation. This fully supports our first hypothesis and we refer to this property of natural images as the global visual salience.
The computational model consists of a logistic regression classifier, trained with the behavioural data of 49 participants who were presented 200 pairs of images. In order to reliably assess the performance of the model, we carried out a careful 25-fold cross-validation, with disjoint sets of participants for training, validation and testing. Given a pair of images from the set of 200, the model predicted the direction of the first saccade with 82 \% accuracy and an area under the ROC curve of 0.88.
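As an illustration of this evaluation scheme, participant-disjoint folds can be implemented with scikit-learn's GroupKFold. This is only a minimal sketch on synthetic data: the feature encoding, the array dimensions and the number of folds are placeholders for the example, not the actual experimental pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)

# Hypothetical data: one row per trial, one column per feature (e.g.
# indicators of the left/right image identities); the binary target
# encodes the direction of the first saccade (1 = left image).
n_trials, n_features, n_participants = 980, 200, 49
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
groups = rng.integers(0, n_participants, size=n_trials)  # participant id of each trial

# GroupKFold guarantees that the participants in the test fold never
# appear in the training fold (here 5 folds instead of 25 for brevity).
accs, aucs = [], []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    accs.append(clf.score(X[test_idx], y[test_idx]))
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"accuracy: {np.mean(accs):.2f}, AUC: {np.mean(aucs):.2f}")
```

With random features as above, accuracy and AUC hover around chance; with real behavioural features the same loop yields the cross-validated scores reported in the text.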
Throughout this chapter, we have analysed the general lateral bias towards the left image (H2), as well as other possible influences such as the familiarity with one of the images and the effect of a simple task (H3). Moreover, we have analysed the relationship of our proposed global salience with the local salience properties of the individual images (H4). Finally, we have also studied the total exploration time of each image in the eye-tracking experiment and compared it to the global salience, which is based upon the first fixation (H5).
Regarding the lateral bias, we found that participants tended to look more frequently towards the image on the left. Such left bias is typical in visual behaviour and has been found in many previous studies \citep{barton2006leftbias, guo2009leftbias, calenwalshe2014assymetricfixation, ossandon2014spatialbiases}. However, most of these studies presented only single images per stimulus. In this regard, it has been argued that cultural factors of the Western population who mostly take part in the research experiments may lead to a semantic processing of natural visual stimuli similar to the reading direction, that is from left to right \citep{spalek2005leftbias, zaeinab2016leftbias}.
In our study, about 63 \% of the first fixations landed on the left image. However, we also observed a high variability across participants, successfully captured by our computational model. In contrast, we showed that the task given in certain trials did not influence initial saccade behaviour: participants distributed the targets of their saccades equally across the presented images, regardless of familiarity and task relevance. Consequently, the spatial location of an image affected saccade behaviour, whereas task and familiarity had no influence.
Importantly, we found that global salience, that is, the likelihood of an image attracting the first fixation when presented next to another competing image, is independent of the low-level local salience properties of the respective images. The location of the first fixations made by the participants in the study did not correlate with the GBVS salience maps of the images, and the saccadic choice---left or right---was not explained by the difference in GBVS salience mass either. Hence, our results provide new insights into the visual perception of natural images, showing that the global salience of an image is rather determined by the semantics of its content. For instance, images involving socially relevant content such as humans or faces led to higher global salience than images containing purely indoor, urban or natural scenes.
To gain further insight regarding this aspect, we computed the salience maps using Deep Gaze II \citep{kuemmerer2016deepgaze}, a computational salience model that is not limited to low-level features, but also makes use of high-level cues, obtained by pre-training the model with image object recognition tasks. We repeated the same analyses as with the GBVS model and we found that metrics derived from Deep Gaze salience maps did have a non-zero, yet moderate correlation with our proposed global salience. This, together with previous evidence about the importance of low- and high-level features in detecting fixations \citep{kuemmerer2017icfdeepgaze}, matches our finding that global salience cannot be explained by low-level properties of the images. However, the relatively low correlation further suggests that the initial preference for one of the images does not depend only on properties of the individual salience maps.
According to previous research, initial eye movements in young adults are based on bottom-up image features, whereas socially relevant content is fixated later in time \citep{acik2010bottomuptopdown}. Interestingly, as described above, we found that this was not the case when two images were shown at the same time. Considering the very short latency between stimulus onset and the observer's saccade towards one of the two images, it is remarkable that participants were able to pre-scan both images in their peripheral visual field before initiating the first saccade. Thus, in contrast to classical salience maps, we argue that the global salience of an image is strongly related to its semantic and socially relevant content.
In order to further investigate the effects of the global image salience, we also evaluated the total time of image exploration, that is, the dwell time. We found that, unlike for the initial saccade, the spatial location of the images did not affect the time participants explored the individual images of each pair. However, task and familiarity did have an effect: in the task where participants had to select the new image, new images were explored longer than previously seen ones, whereas the task asking to select the old image led to an almost equal exploration time on new and familiar images. Therefore, we conclude that participants in general tended to explore new images for a slightly longer time. Most importantly, we saw that, independently of spatial location, task and familiarity, images with higher global salience were explored for longer. Thus, images with larger global salience did not only attract initial eye movements after stimulus onset, but also led to longer exploration times. These results support our assumption that the global salience score of an image can also be interpreted as a measure of its general attraction, in comparison to other images.
In this regard, note that although we considered the location of the first fixation as the target variable to model the global salience scores and carry out the subsequent analyses, the same computational model and procedures can be used to model alternative aspects of the behavioural responses. For instance, the model could be trained to fit the dwell time---which we have found to be positively correlated with the global salience based on the first fixations---, the engagement---time until fixating away---or the number of saccades.
In spite of the high performance of our computational model and its potential to assign reliable global salience scores to natural images, an important limitation is that the model and thus the scores are dependent on the image set that we used. Whereas local salience maps rely on image features, our proposed global salience model relies on the differences between the stimuli and the behavioural differences that they elicit on the participants. We observed significant differences between image categories, for example humans versus indoor scenes, but this is only one initial step and future work should investigate what other factors influence the global image salience. For example, it would be interesting to train a deep neural network with a possibly larger set of images and the global salience scores learned by our model as labels, similarly to how Deep Gaze was trained to predict fixation locations. This could shed more light on what features make an image more globally salient.
Another related and interesting avenue for future work is investigating the global salience in homogeneous data sets, that is, with images of similar content. Our work has shown that large differences exist between images with clearly different content, for instance containing humans or not. However, we did not observe significantly different global salience between natural and urban scenes (see Figure~\ref{fig:globsal-global_salience_categories}), although significant differences do exist between specific images. An interesting question is: \textit{what} makes one image more likely to attract the first fixation, when presented alongside a semantically similar image? We think an answer to this question can be sought by combining an experimental setup similar to the one presented in this work with additional data, and by making use of advanced feature analysis, such as deep artificial neural networks, as mentioned above.
For instance, small changes in the context information of single images might already have a dramatic influence on reaction times in decision tasks \citep{kietzmann2015topdown}. In addition, the global salience scores were derived from the eye movement behaviour of a specific group of participants. With a different choice of participants, e.g. of different culture, age, personal interests or emotional state, our model could have revealed different results \citep{balcetis2006topdown, dowiasch2015aging}. Again, further studies might apply the model to a wider range of participants, in order to validate the global salience scores and thus the attraction of the images.
In contrast, differences in the global salience between participant groups could be a great advantage in certain research fields. In medical applications, for instance, researchers could identify specific conditions, such as autism spectrum disorder (ASD). In such a case, our method could generate a model of the global visual salience of both a control group and individuals with a certain condition, and then be used for diagnosis. Another use case of our model would be marketing research, where the attraction of different images could be compared based on intuitive visual behaviour. Thus, depending on the research question, the global image salience might provide new insight into the prediction and analysis of visual behaviour.
\section{Conclusion}
\label{sec:conclusion}
Previous research has investigated the local salience properties of single images, which has helped understand visual behaviour. However, assigning a single, global salience score to an image as a whole had been neglected. Here, we trained a logistic regression model to learn a unique, global salience score for each tested image. We showed that images can indeed be ranked according to their global salience, providing a new method to predict eye movement behaviour across images with distinct semantic content. These results could be useful in a variety of fields, such as medicine or marketing.
\chapterbibliography
}
\chapter[Image identification from brain activity]{Image identification\\from brain activity}
\label{ch:imageid}
\renewcommand{\chapterpath}{includes/image-id}
\begin{contributors}
Wietske Zuiderbaan, Ben M. Harvey and Serge O. Dumoulin are the authors of the article upon which this work builds. Wietske and Serge supervised the work during my internship at their lab. Akhil Edadan and Peter K{\"o}nig participated in the discussions.
\end{contributors}
\begin{outreach}
\item \textit{Saliency and the population receptive field model to identify images from brain activity.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Wietske Zuiderbaan, Akhil Edadan, Serge O. Dumoulin, Peter K{\"o}nig. Annual Meeting of the Visual Sciences Society (VSS, poster presentation), 2019.
\end{outreach}
The goal of computational neuroscience is to understand the mechanisms and principles by which the brain yields behaviour and cognition. One of the approaches to deepen our understanding of the brain is to develop models that can make predictions about brain activity. These methods, sometimes referred to as \textit{brain}/\textit{mind reading}, not only have some practical applications, but they also provide insights about the underlying neural processes that enable the prediction \citep{tong2012mindreading}.
One particular example for the study of the visual system is the identification of presented images from brain imaging recordings in the visual cortex, such as fMRI. While a successful approach to this challenge is to use statistical models that can learn activation patterns from a set of recordings \citep{kay2008imageid}, an alternative approach is to rely on biologically inspired encoding models. A recent proposal of this type showed that it is possible to identify the presented stimulus from a set of natural images by comparing the measured activity in areas V1, V2 and V3 with a predicted response profile encoded by the low-parametric population receptive field (pRF) model and contrast information from the images \citep{zuiderbaan2017imageidentification}. In the work presented in this chapter, we follow up on this methodology by studying the predictive power of salience information from the images---instead of contrast---combined with the pRF model, and extend the analysis to higher visual areas: V1, V2, V3, hV4, LO12 and V3AB\footnote{This work was carried out during a two-month internship at the Spinoza Centre for Neuroimaging in Amsterdam, in early 2018, supervised by Dr. Wietske Zuiderbaan and Professor Serge Dumoulin. Besides the potential scientific interest of the work, which was presented as a poster at the 19th Annual Meeting of the Vision Sciences Society (VSS), the internship was part of the training programme of my PhD fellowship, a Marie Skłodowska-Curie Innovative Training Network. A requirement of the programme is to carry out interdisciplinary internships at laboratories that are part of the partnership. As an additional contribution of my internship, I developed interactive visualisation tools in Python to better interpret the kind of data used in this project, which became part of the laboratory's repository.}.
\section{Methods}
\label{sec:imageid-methods}
The goal of this study was to assess whether the salience information of images is predictive of the brain activations in the visual cortex elicited by viewing natural images. As a follow-up of the work by \citet{zuiderbaan2017imageidentification}, a significant part of the methodology is borrowed from the original publication and, in particular, the neuroimaging data is the same. In this section, we will outline the most relevant aspects of the methodology that are not original to this work and refer to the original publication for further details, so as to focus on the aspects that are novel here.
\subsection{Visual stimuli}
As image data set we used a subset of 45 images from the Berkeley Segmentation Dataset and Benchmark \citep{martin2001berkeley}, which had already been used in some of the experiments by \citet{zuiderbaan2017imageidentification}. The images were processed such that they had a resolution of $538\times538$ pixels, and a circular fading mask was applied. The set of 45 images as they were used in the experiments is shown in Figure~\ref{fig:imageid-berkely}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/berkeley.png}
\end{center}
\caption{The set of 45 natural images used in the experiments.}
\label{fig:imageid-berkely}
\end{figure}
\subsection{Functional imaging}
\label{sec:imageid-functional_imaging}
The brain activations in the visual cortex were acquired in a neuroimaging study with functional magnetic resonance imaging (fMRI), carried out originally for the work presented by \citet{zuiderbaan2017imageidentification}. Therefore, the details can be found in that first article and here we will summarise the most relevant information: two participants (one female; ages 28--38 years) took part in the fMRI study, where their brain responses to the natural images were collected. Each image was shown 18 times for a duration of 300 ms, spanning a radius of $5.5^{\circ}$ of visual angle from the participant's point of view. The neural response at each cortical location (voxel) was obtained through standard GLM analysis \citep{friston1995fmrianalysis}: each voxel was summarised by the t-value of the fit between the predicted time series---the stimulus presentation convolved with the hemodynamic response function---and the measured BOLD signal.
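To make this GLM step concrete, the sketch below simulates a single voxel and recovers the t-value of the stimulus regressor. The HRF shape, TR, onset times and noise level are arbitrary choices for the illustration, not the values used in the actual study.

```python
import numpy as np
from scipy.stats import gamma

TR, n_vols = 1.5, 120
t = np.arange(0, 30, TR)
# Canonical double-gamma HRF (illustrative parameters), normalised to unit sum.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
hrf /= hrf.sum()

# Predicted time series: stimulus onsets convolved with the HRF.
stim = np.zeros(n_vols)
stim[[10, 50, 90]] = 1.0
regressor = np.convolve(stim, hrf)[:n_vols]

# Synthetic BOLD signal: scaled regressor plus Gaussian noise.
rng = np.random.default_rng(0)
bold = 2.0 * regressor + rng.normal(scale=0.1, size=n_vols)

# GLM: least-squares fit and the t-value of the stimulus regressor.
X = np.column_stack([regressor, np.ones(n_vols)])
beta, res, *_ = np.linalg.lstsq(X, bold, rcond=None)
dof = n_vols - X.shape[1]
sigma2 = res[0] / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
t_value = beta[0] / se
```

The t-value obtained this way is the per-voxel summary statistic referred to in the text.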
In addition to the voxel responses to each stimulus, the optimal parameters of the population receptive field (pRF) model were estimated for each participant using the standard approach described by \citet{dumoulin2008prf}: For each voxel, the model estimates the position of the receptive field $(x, y)$ and the size $\sigma$ (standard deviation) using a Gaussian kernel. The parameters of the pRF were estimated using conventional contrast-defined moving-bar apertures with natural image content from images of the same data set but distinct from the set of 45 images used in the image identification experiments.
\citet{zuiderbaan2017imageidentification} analysed areas V1, V2 and V3 of the visual cortex as regions of interest. Here, we extended the analysis to higher visual areas: the lateral occipital complex (LO-1 and LO-2: LO12), the ventral occipital area (hV4) and the dorsal occipital areas (V3A and V3B: V3AB) \citep{wandell2007visualfield}. See Figure~\ref{fig:imageid-visual_field_maps} for an illustration of the regions of interest in the human visual cortex.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/visual_field_maps.png}
\end{center}
\caption{Adapted from \citet{wandell2007visualfield} with permission from the authors: Visual field maps in the human visual cortex. In our analyses, V3A and V3B were considered a single region of interest (V3AB), and so were LO-1 and LO-2 (LO12).}
\label{fig:imageid-visual_field_maps}
\end{figure}
\subsection{Salience and contrast maps}
One of the goals of this work was to contrast the correlation of salience maps with the activations in the visual cortex against the predictive ability of contrast maps---calculated as the root mean square contrast---which was assessed in the original article by \citet{zuiderbaan2017imageidentification}. The choice of contrast is motivated by the evidence that the early visual cortex responds strongly to differences in contrast \citep{boynton1999contrast, olman2004contrast}. However, the visual cortex certainly responds to other properties of the stimuli as well, hence the interest in studying salience.
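For reference, a local root-mean-square contrast map can be computed as the standard deviation of luminance within a sliding window. This is a generic sketch: the window size and the absence of any further normalisation are our own assumptions for the example, not necessarily the exact definition used in the original article.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rms_contrast_map(image, size=15):
    """Local RMS contrast: standard deviation of the luminance values
    within a sliding window of the given size, at every pixel."""
    mean = uniform_filter(image, size)
    mean_sq = uniform_filter(image ** 2, size)
    # Clip tiny negative values caused by floating-point error.
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

# A flat image has zero contrast everywhere; an image with a vertical
# edge has positive contrast around the edge only.
flat = np.full((64, 64), 0.5)
edge = np.zeros((64, 64))
edge[:, 32:] = 1.0
```

The resulting map plays the same role as the salience maps in the analyses: a per-pixel feature weighted by each voxel's pRF.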
In order to analyse the correlation of salience with the activations in the visual cortex, we computed the salience maps of each image using two distinct salience models. The first model is DeepGaze II, proposed by \citet{kuemmerer2016deepgaze}, the then state-of-the-art salience model according to various metrics\footnote{Note that in Chapter~\ref{ch:globsal} we also used DeepGaze to analyse whether our proposed global salience was related to or independent of the local salience properties of images.}. DeepGaze II---for better readability, in what follows we will simply refer to it as DeepGaze---uses the features extracted by a deep neural network, VGG \citep{simonyan2014vgg}, trained on image object categorisation tasks, as inputs to a 4-layer readout neural network optimised for salience prediction. The second model is ICF, which stands for Intensity Contrast Features, proposed by \citet{kuemmerer2017icfdeepgaze}. ICF is trained with the same readout network as DeepGaze, but instead of the VGG features, the input to the network consists of 5 intensity and 5 contrast feature maps.
Our choice of salience models, out of the many models proposed in the vast literature on image salience, was motivated by various reasons. First of all, we chose DeepGaze because it was the state-of-the-art model according to various image salience evaluation metrics \citep{kuemmerer2016deepgaze}, and a model that accurately predicts the salient regions of images is desirable for studying how salience information is encoded in the visual cortex. Nonetheless, as we have discussed in Chapter~\ref{ch:globsal}, visual attention is a complex brain mechanism and salience maps capture different aspects of it. For instance, we have discussed how visual attention can be guided by bottom-up factors as well as by top-down factors and higher-level features. This motivated the inclusion of a model that is limited to lower-level---intensity and contrast---features, in this case ICF. The fact that DeepGaze and ICF share part of their architecture facilitates the comparison.
Interestingly, \citet{kuemmerer2017icfdeepgaze} analysed and compared the properties of DeepGaze and ICF, and found that while DeepGaze generally outperformed ICF in terms of the evaluation metrics used to assess salience models, ICF was more accurate on a large number of images. In particular, DeepGaze succeeds at predicting salient regions associated with higher-level factors such as objects and faces, while ICF was superior in cases where fixations are driven by, for instance, high contrast. By way of illustration, Figure~\ref{fig:imageid-sample_maps} shows the DeepGaze and ICF salience maps of three images from the data set, as well as their contrast maps, which were used in the original study.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = 0.8 \linewidth]{\imgpath/sample_maps.png}
\end{center}
\caption{Contrast, DeepGaze II and ICF maps of three images of the data set.}
\label{fig:imageid-sample_maps}
\end{figure}
\subsection{Brain response predictions}
To assess the correspondence between the salience and contrast maps and the brain activations, we first calculated a prediction response profile of every image as the summed overlap of the map with the pRF weighting function of each cortical location, normalised by the total volume of the pRF:
\begin{equation}
\label{eq:imageid-pred}
p = \frac{\sum_{i=1}^{N}w_i S_i}{\sum_{j=1}^{M}w_j}
\end{equation}
where $S_i$ denotes the value of the feature map $S$---either DeepGaze, ICF or contrast---normalised as a probability distribution, at pixel $i$; $N$ is the number of pixels within the window of the pRF and $M$ is the total number of pixels in the stimulus area. $w_i$ is the pRF weighting function, whose parameters were obtained as described in Section~\ref{sec:imageid-functional_imaging} and, in more detail, by \citet{zuiderbaan2017imageidentification}:
\begin{equation}
\label{eq:imageid-prf}
w_i = \exp\left(-\frac{(x_i-x_c)^2+(y_i-y_c)^2}{2\sigma^2}\right)
\end{equation}
where $x_i$ and $y_i$ are the coordinates of pixel $i$; $x_c$ and $y_c$ are the centre of the pRF in the visual field; and $\sigma$ is the size of the Gaussian kernel of the pRF. Note that \citet{zuiderbaan2017imageidentification} used this procedure to compute the predictions for synthetic images, while for natural images a different method was used, specific to the way the contrast information was obtained. Here, we use this method because it generalises the prediction $p$ in Equation~\ref{eq:imageid-pred} to any feature map $S$---DeepGaze, ICF or contrast.
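The two equations above can be implemented directly. The sketch below sums over the full pixel grid rather than a truncated pRF window, a simplifying assumption (the Gaussian weights effectively vanish far from the centre), and the toy map sizes and pRF parameters are arbitrary.

```python
import numpy as np

def prf_weights(shape, x_c, y_c, sigma):
    """Gaussian pRF weighting function w_i over a pixel grid."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - x_c) ** 2 + (ys - y_c) ** 2) / (2 * sigma ** 2))

def predicted_response(feature_map, x_c, y_c, sigma):
    """Prediction p: summed overlap of the probability-normalised feature
    map with the pRF, normalised by the total volume of the pRF."""
    S = feature_map / feature_map.sum()  # normalise the map as a distribution
    w = prf_weights(feature_map.shape, x_c, y_c, sigma)
    return (w * S).sum() / w.sum()

# Toy example: a salience map whose mass sits under the pRF yields a
# larger predicted response than a uniform map.
salient = np.zeros((64, 64))
salient[30:34, 30:34] = 1.0
uniform = np.ones((64, 64))
p_salient = predicted_response(salient, x_c=32, y_c=32, sigma=4)
p_uniform = predicted_response(uniform, x_c=32, y_c=32, sigma=4)
```

For a uniform map the prediction reduces to $1/M$ regardless of the pRF parameters, which is a convenient sanity check.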
\subsection{Evaluation metrics}
Let us formally express the measurements and predictions taking all variables into account---feature maps, areas, images, voxels---in order to describe the evaluation metrics we used in our analysis. First note that, as in the original study by \citet{zuiderbaan2017imageidentification}, we only considered for the analysis the voxels with a positive t-value, pRF eccentricity values within $0.5$--$4.5^{\circ}$ and pRF variance explained larger than 55 \%. For every image $k$ and every visual area $A$ studied---V1, V2, V3, hV4, LO12 and V3AB---we have a \textit{measured} response profile
\[
\mathbf{m}_{k}^{A} = m_{k, 1}^{A}, \ldots, m_{k, D_A}^{A}
\]
where $D_A$ is the number of (\textit{valid}) voxels in area $A$. Correspondingly, we have the \textit{predicted} response profiles for every image $k$, every visual area $A$ and by every feature map $S$, that is DeepGaze, ICF and contrast:
\[
\mathbf{p}_{k}^{A, S} = p_{k, 1}^{A, S}, \ldots, p_{k, D_A}^{A, S}
\]
where every $p_{k, v}^{A, S}$ is computed as in Equation~\ref{eq:imageid-pred}. As a similarity metric we compute the Pearson correlation between measured and predicted response profiles. We will denote by $r_{k, l}^{A, S}$---or simply $r_{k, l}$, abusing notation---the correlation between the measured response to image $k$, $\mathbf{m}_{k}^{A}$, and the predicted profile of image $l$, $\mathbf{p}_{l}^{A, S}$:
\begin{equation}
\label{eq:imageid-correlation}
r_{k, l}^{A, S} = corr(\mathbf{m}_{k}^{A}, \mathbf{p}_{l}^{A, S})
\end{equation}
In order to assess the image identification accuracy, we considered an identification correct if $r_{k, k} > r_{k, l}~\forall~l \neq k$. Additionally, as a more informative metric, we combined the correlation values into a confidence score that represents how hard it is to distinguish the actual presented image $k$ from the other candidate images in the data set, based on the correlation values:
\begin{equation}
\label{eq:imageid-confidence}
c_k^{A, S} = r_{k, k}^{A, S} - \frac{1}{K}\sum_{l=1}^{K}r_{k, l}^{A, S}
\end{equation}
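The identification rule and the confidence score just defined are straightforward to compute from the measured and predicted profiles. The following sketch uses synthetic profiles (the dimensions are made up for the example) and resolves the identification rule via the row-wise argmax of the correlation matrix:

```python
import numpy as np

def identification_scores(measured, predicted):
    """measured, predicted: (K, D) arrays of response profiles for K images
    over D voxels. Returns the K x K correlation matrix r[k, l], the
    identification accuracy (image k is identified when r[k, k] exceeds
    every r[k, l], l != k) and the per-image confidence
    c_k = r[k, k] - mean_l r[k, l]."""
    K = measured.shape[0]
    # Block of corrcoef correlating measured rows with predicted rows.
    r = np.corrcoef(measured, predicted)[:K, K:]
    correct = r.argmax(axis=1) == np.arange(K)
    confidence = np.diag(r) - r.mean(axis=1)
    return r, correct.mean(), confidence

rng = np.random.default_rng(0)
true_profiles = rng.normal(size=(45, 300))          # 45 images, 300 voxels
measured = true_profiles + 0.5 * rng.normal(size=true_profiles.shape)
r, accuracy, confidence = identification_scores(measured, true_profiles)
```

With moderate noise, as here, the diagonal correlations dominate and the identification accuracy is near perfect; with real fMRI data the same computation yields the accuracies and confidence distributions reported in the results.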
Finally, in order to have a compact measure to compare the predictivity of the different feature maps on every visual area, we also performed a representational similarity analysis\footnote{In Chapter~\ref{ch:daugit}, we also used representational similarity analysis to compare the features learnt by artificial neural networks and the representations measured in the inferior temporal cortex.} (RSA) \citep{kriegeskorte2008rsa}. To perform RSA, we constructed representational dissimilarity matrices (RDM) for both the measured responses and the predicted profiles, $M_{k, l}^{A}$ and $P_{k, l}^{A, S}$ respectively, where each entry $(k, l)$ is 1 minus the Pearson correlation between the profiles for images $k$ and $l$. In this case, the correlation is not computed between measured and predicted profiles, but between two measured responses for $M_{k, l}$ and two predicted profiles for $P_{k, l}$. Thus, the RDMs are symmetric and the values in the diagonal are zero. As a summary metric to compare $M$ and $P$ we computed the Kendall correlation.
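This RSA comparison can be sketched as below; the profile dimensions are invented for the example, and we compare the upper triangles of the two RDMs with Kendall's tau:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import kendalltau

def rdm(profiles):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between every pair of response profiles (rows)."""
    return squareform(pdist(profiles, metric="correlation"))

def rsa_score(measured, predicted):
    """Kendall correlation between the upper triangles of the two RDMs."""
    M, P = rdm(measured), rdm(predicted)
    iu = np.triu_indices_from(M, k=1)
    tau, _ = kendalltau(M[iu], P[iu])
    return tau

rng = np.random.default_rng(0)
measured = rng.normal(size=(45, 300))                 # 45 images, 300 voxels
predicted = measured + 0.5 * rng.normal(size=measured.shape)
tau = rsa_score(measured, predicted)
```

Comparing only the upper triangles avoids inflating the statistic with the symmetric entries and the zero diagonal.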
\section{Results and discussion}
\label{sec:imageid-results}
We first analyse the distribution of the confidence of the predictions for each model---feature map---and visual area, shown in Figure~\ref{fig:imageid-boxplot_confidence}. Although the results are complex and not highly consistent, several interesting conclusions can be drawn. First, we observe that the identification ability of all models is best in V1 and decreases in higher visual areas. This was also observed by \citet{zuiderbaan2017imageidentification}, who analysed V1, V2 and V3, and we confirm it here, although we had hypothesised that salience maps might be more discriminative in higher visual areas. Unfortunately, it is not possible to conclude from our results that contrast and salience are less predictive of the activations in higher visual areas, because this result could also be explained by the increased pRF sizes \citep{smith2001rfsizes}. In what follows, our analysis will focus on the earlier areas---V1, V2, V3 and, to a lesser extent, hV4---where the identification performance is better and the differences between models larger.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/boxplot_confidence.png}
\end{center}
\caption{Distribution of the confidence values (see Equation~\ref{eq:imageid-confidence}) for every feature map and visual area. Each black dot corresponds to the confidence of the prediction for one image $k$, from the data of one of the experimental participants. The numbers over each box represent the identification accuracy (as a percentage).}
\label{fig:imageid-boxplot_confidence}
\end{figure}
Second, and most relevant, both salience models, DeepGaze and ICF, seem more predictive of the measured activations in the visual cortex than the contrast maps of the images. A significant difference can be observed both in the median of the distribution of the confidence values and in the identification accuracy, shown over each box. For instance, the identification accuracy on V1 using DeepGaze and ICF maps is 34.4 \% and 46.7 \% respectively, while it is below 20 \% using contrast\footnote{In the first publication by \citet{zuiderbaan2017imageidentification} the prediction model for natural images using contrast was computed differently and the evaluation procedures are not the same, hence the results differ slightly from the ones presented here. Nonetheless, in our study we analysed both methods and the results are qualitatively very similar. We here present the results using contrast maps, since the method is identical to the one used for the salience maps.}.
Although image identification from brain activity has been shown before \citep{kay2008imageid}, the identification performance of the methods presented here is remarkable given the simplicity of the model. The main differences between this pRF-based method and the model presented by \citet{kay2008imageid} were reviewed by \citet{zuiderbaan2017imageidentification}, but most importantly, note that here we estimated only 3 pRF parameters for each voxel---the location and size of the receptive field---obtained from the fMRI recordings of a standard pRF session: responses to conventional moving-bar apertures during about half an hour of scanning. The 3 pRF parameters are then combined with the contrast or salience maps of the images to predict the activity of each voxel. In contrast, \citet{kay2008imageid} recorded the activations to 1,750 natural images (about 5 hours of scanning time) to fit a Gabor-Wavelet-Pyramid model with 2,730 parameters per voxel. In summary, image identification with the pRF model requires a short scanning time and does not require training a model with brain activations towards natural images; thus, it can easily be applied to any other image.
The fact that identifying natural images from brain activity using salience maps is more accurate than using contrast information tells us that the image salience \textit{under} the receptive field of each voxel is more discriminative than the contrast. This may come as a surprise, since the early visual cortex is well known to respond strongly to differences in contrast \citep{boynton1999contrast, olman2004contrast}. Salience maps certainly also contain contrast information---in our analysis, especially the maps obtained with the ICF model---but not exclusively. Therefore, our results can be interpreted as additional evidence that the activations in the early visual cortex are shaped by multiple factors, likely including top-down influences \citep{treue2003salience}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/kendall.png}
\end{center}
\caption{Results of the representational similarity analysis (RSA): Kendall correlation between the representational dissimilarity matrices of the measured responses and the predicted profiles. The error bars correspond to the variation across experimental participants.}
\label{fig:imageid-kendall}
\end{figure}
Another interesting conclusion derives from the comparison between the two salience models analysed. Recall that DeepGaze computes the salience map by first extracting features through a deep neural network trained on image object recognition tasks. Therefore, these features likely contain high-level information such as face and object detectors, and the model is particularly accurate at spotting salience driven by such factors, as reflected in Figure~\ref{fig:imageid-sample_maps}. In contrast, ICF does not extract high-level features but is restricted to intensity and contrast features. Here, we found that image identification is more accurate using ICF than DeepGaze salience maps. While this is observable in Figure~\ref{fig:imageid-boxplot_confidence}, we additionally performed a representational similarity analysis (RSA) to derive a compact metric of the ability of each model to discriminate the brain activations of each image. We present these results in Figure~\ref{fig:imageid-kendall}, which confirms the conclusion that ICF is more discriminative than DeepGaze in the earlier visual areas.
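The RSA computation can be sketched as follows: build a representational dissimilarity matrix (RDM) from the measured responses and from each model's predicted profiles, then correlate the upper triangles of the two RDMs with Kendall's tau. This is a generic illustration assuming SciPy; the actual analysis may use a different dissimilarity measure or Kendall variant.

```python
import numpy as np
from scipy.stats import kendalltau

def rdm(profiles):
    """Representational dissimilarity matrix: pairwise dissimilarity
    (1 - Pearson r) between response profiles (one row per image)."""
    return 1.0 - np.corrcoef(profiles)

def rsa_kendall(rdm_a, rdm_b):
    """Kendall correlation between the upper triangles of two RDMs
    (the diagonal and the redundant lower triangle are excluded)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    tau, _ = kendalltau(rdm_a[iu], rdm_b[iu])
    return tau
```

A model whose predicted-profile RDM orders the image pairs exactly like the measured RDM reaches a Kendall correlation of 1.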
One more way to visualise the superiority of ICF at predicting brain activations in the early visual cortex is to directly analyse the correlation matrices. In Figure~\ref{fig:imageid-corr}, we plot the correlation matrices of each model on V1, where each element in the matrix encodes the correlation between the measured responses to one image (rows) and the predicted profile of another image (columns), as in Equation~\ref{eq:imageid-correlation}. Note that a correct identification occurs when the element on the diagonal---the correlation between the measured and predicted profiles of the same image---has the highest value in its row. Visually, it becomes apparent that the diagonal of the ICF model has higher values and is better discriminated from the rest of the matrix than in the DeepGaze model, and even more so than in the contrast model. In view of these results, we conclude that image salience is more predictive of the brain activity in the early visual cortex than contrast information, and hypothesise that ICF may be more accurate than DeepGaze because the salience of the latter is driven by high-level information that may not correlate with the activity in the early areas of the visual cortex.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/corr_matrices_v1.png}
\end{center}
\caption{Correlation matrices of the three models for the predictions in visual area V1. Each entry $(i, j)$ of the matrices represents $r_{i, j}^{V1, S} = corr(\mathbf{m}_{i}^{V1}, \mathbf{p}_{j}^{V1, S})$.}
\label{fig:imageid-corr}
\end{figure}
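The identification rule described above (a correct identification requires the diagonal entry to be the maximum of its row) reduces to a simple argmax over each row of the correlation matrix. A minimal sketch, with hypothetical naming:

```python
import numpy as np

def identification_accuracy(corr):
    """Fraction of images correctly identified: image i is identified
    correctly when corr[i, i] (measured responses of image i vs. its
    own predicted profile) is the maximum of row i."""
    hits = corr.argmax(axis=1) == np.arange(corr.shape[0])
    return float(hits.mean())
```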
The data we obtained from the predictions allows for multiple levels of analysis, since there are several factors at play: three feature maps, six visual areas, two subjects, etc. During the two-month internship in which this project was carried out, I developed multiple interactive visualisations using Bokeh\footnote{Bokeh is an interactive visualisation library for Python: \href{https://bokeh.org/}{www.bokeh.org}} and the interactive functionality of Jupyter notebooks. Interactive visualisation provides insights that are hard to access otherwise and is useful to guide the more systematic, conventional statistical analyses. In particular, it can be used to analyse the results at different levels and to look into specific details or data points. Some examples are shown in Figure~\ref{fig:imageid-interactive_visualisation}.
\begin{figure}[htb]
\centering
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/interactive_vis_participants_all.png}
\label{fig:imageid-participants_all}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/interactive_vis_participants_specific.png}
\label{fig:imageid-participants_specific}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/participants.png}
\label{fig:imageid-participants_fit}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\includegraphics[width = \linewidth]{\imgpath/interactive_vis_icf-vs-dg.png}
\label{fig:imageid-icf_vs_dg}
\end{subfigure}
\caption{Examples of interactive scatter plots of the Pearson confidence (Equation~\ref{eq:imageid-confidence}) to contrast different experimental variables. The interactive plot allows the user to hover over the data points to visualise the image that they represent, and to highlight all data points of one image by clicking on them. Bottom row: DeepGaze vs. ICF on areas V1, V2 and V3. Top row: comparison of the confidence values of the predictions from the data of the two experimental participants. From left to right: data from the three models, represented with different shapes for each visual area and different colours for each image; highlight of one image, after clicking on one point; linear fit for each visual area separately.}
\label{fig:imageid-interactive_visualisation}
\end{figure}
Besides the main conclusions discussed above, some other observations are the following: In general, we found a positive linear correlation in the prediction confidence between the predictions on the data from both participants in areas V1, V2, V3 and hV4 ($r = 0.76, 0.80, 0.78, 0.71$ respectively) and less so in LO12 and V3AB ($r = 0.42, 0.17$), where the predictions were also less confident. We also found a correlation between the predictions across visual areas, that is, images that were confidently predicted in V1 were also confidently predicted in V2 ($r = 0.73$) and in V3 ($r = 0.56$), and the correlation decreases further in higher visual areas, as expected.
The prediction confidence was also positively correlated between DeepGaze and ICF ($r = 0.58, 0.62, 0.50, 0.58$ in areas V1, V2, V3 and hV4), although with exceptions: several images were confidently predicted by ICF but not by DeepGaze, and vice versa. The interactive visualisation was useful to identify and study these cases. Not surprisingly, the correlation was much lower between the salience models and the contrast model. For instance, in V1, the correlations between DeepGaze and contrast and between ICF and contrast were $r = 0.25$ and $0.35$, respectively.
\section{Conclusion}
\label{sec:imageid-conclusion}
In this chapter we have presented the results of a project carried out during a two-month internship at the Spinoza Centre for Neuroimaging in Amsterdam, in which we extended the work presented by \citet{zuiderbaan2017imageidentification}. In particular, we analysed the discriminability of salience maps to identify images from brain activity in the visual cortex, using a low-parametric model of the receptive field of the measured cortical locations, the population receptive field (pRF) model \citep{dumoulin2008prf}.
We contrasted the identification performance of two distinct salience models---DeepGaze and ICF---and the contrast-based model originally presented by \citet{zuiderbaan2017imageidentification}, and found that the salience information within the receptive fields of the voxels is more predictive of the brain activity than the contrast information. Furthermore, we observed that ICF, whose salience maps are computed using low-level features, is more predictive than DeepGaze, which encodes high-level information such as salience driven by faces and objects.
Overall, this analysis demonstrates that this simple method of prediction of brain activity using the pRF model can be extended to other types of image information beyond contrast, enabling further analysis. Future work may use this kind of analysis to further understand the computations in the early visual cortex and extend this methodology to other cortical areas.
\chapterbibliography
}
\chapter{Introduction}
\label{ch:intro}
\renewcommand{\chapterpath}{includes/intro}
Visual information plays a remarkably prominent role in how humans and many other animals perceive the world. By way of illustration, about 27 \% of the cerebral cortex of humans and 52 \% of the macaque monkey's is devoted to vision \citep{vanessen2003visualcortex}. This involves billions of neurons and connections which give rise to very complex and sophisticated mechanisms, such as visual attention and object recognition. Understanding the way visual information is processed in the brain and how it affects our behaviour is a very active area of research in cognitive science and neuroscience. In turn, developing artificial algorithms that mimic some aspects of biological vision by learning patterns from collections of image data is another active and currently fruitful area of research in machine learning and computer vision. While all these disciplines are rooted in significantly different origins and make use of distinct tools, there are grounds to defend that the interdisciplinary study of these areas of research can highly benefit each other \citep{bengio2015dlandneuroscience, marblestone2016dlandneuroscience, hassabis2017aiandneuroscience, bowers2017pdp, richards2019dlandneuroscience, kietzmann2019dnncompneuro, lindsay2020dlandneuroscience, saxe2020dlandneuroscience}.
In this thesis, we study several aspects of vision and image understanding, such as visual object recognition and visual attention, using the methods and techniques of various disciplines. In particular, we approach machine visual object recognition through deep artificial neural networks, compare some of their properties with the visual cortex, and draw inspiration from visual perception and biological vision to improve the artificial models; we employ eye tracking to analyse the global salience of images when humans look at competing stimuli; and we study the effectiveness of visual salience to identify images from fMRI data. Overall, this dissertation aims at integrating knowledge and methodologies of various disciplines to further advance our understanding of biological and artificial vision.
In summary, the specific contributions of this thesis are the following: we shed light on the heavily understudied role of data augmentation---transformations of the input data---in training artificial neural networks for image object recognition, and show that it is more effective than the most popular regularisation techniques. Additionally, we found that models trained with heavier image transformations may learn internal representations more aligned with the activations measured in the visual cortex of the brain. Further, we demonstrate how data augmentation can be used to incorporate inductive biases---useful priors---from visual perception and biological vision, in particular the invariance to identity-preserving transformations. Separately, we propose the concept of image global salience, a property of natural images that reflects the likelihood of a stimulus to attract the gaze of a human observer when presented in competition with other stimuli. Finally, we studied the correlation of image salience with the measured activations in the visual cortex.
The interdisciplinary nature of this thesis is rooted in the breadth of my\footnote{Except in the cases where I refer explicitly to me---the author of this thesis---as an individual or intend to express my personal opinion, I will use, in general, the plural form of the first person in order to acknowledge the contribution of some collaborators to certain parts of the work and to keep consistency throughout the whole thesis.} personal interests as well as in the opportunities that the characteristics of my PhD programme offered me. Having a background in computer science and electrical engineering, already my Bachelor's thesis \citep{hernandez2014bscthesis} addressed the problem of predicting subjective perception from audiovisual content, work continued in my Master's thesis, where I used psychophysical measurements \citep{hernandez2017mscthesis}. My journey towards cognitive science went on when I was admitted as a PhD candidate in the Institute of Cognitive Science of the University of Osnabrück, with Professor Peter König---although in practice I lived and worked in Berlin. The PhD programme also gave me the opportunity to get in touch, for the first time, with neuroscience, as I became a fellow of the Marie Skłodowska-Curie Innovative Training Network ``Training the next generation of European visual neuroscientists'' (\href{http://nextgenvis.eu}{NextGenVis}\footnote{\href{http://nextgenvis.eu}{www.nextgenvis.eu}}). In spite of the challenges of being an outlier in the cohort of PhD students as a machine learning scientist, the breadth of my interests certainly expanded. As a mandatory aspect of the Marie Skłodowska-Curie grant, I had to carry out two two-month internships at laboratories of the partnership or related institutions. This gave me the opportunity to work at the Spinoza Centre for Neuroimaging in Amsterdam with Dr. 
Serge Dumoulin, where I worked on the project that is described in Chapter~\ref{ch:imageid}; and at the Cognition and Brain Sciences Unit of the University of Cambridge with Dr. Tim Kietzmann, where I started the work that yielded Chapter~\ref{ch:invariance}. I am very grateful to Serge, Tim and Peter for these opportunities.
The breadth of one's interests and work is certainly at odds with depth. However, there is probably not a single sweet spot in the trade-off between breadth and depth valid for all scientists. I understand science as a collaborative ecosystem that is most effective if scientists are distributed across the whole spectrum of the breadth-depth dichotomy. This thesis---and my work so far---is an attempt to explore the interdisciplinary approach to science, in particular to the study of learning systems---artificial and biological---specialised in visual information.
\section{Learning to see}
\label{sec:intro-learning_to_see}
Vision has evolved as one of the most advanced perceptual mechanisms in humans and many other animals, and is the main source of information for most of us to navigate and understand the world around us. Among the multiple high-level perceptual abilities achieved by our visual system, one of the most sophisticated and best studied is visual object recognition: under an enormous variety of conditions, humans can easily distinguish objects of different categories as well as identify individual objects within the same class \citep{logothetis1996objectrecognition}. Being such an important aspect of human perception, automatically finding and identifying objects in digital images has likewise been an active area of research in computer vision and a technological challenge for several decades.
The task of image object recognition in computer vision can be defined, in its simplest case, as the categorisation of an image into one class from a set of pre-defined object classes, according to the main object present in the image, by extracting and processing the visual information, that is, the pixel values. Object recognition is closely related to other tasks such as image object detection \citep{zhao2019objectdetection} and content-based image retrieval \citep{latif2019imageretrieval, zhou2017imageretrieval}. The generalisation of this broader set of tasks is known as \textit{image understanding}. Ultimately, the goal of artificial image understanding is to extract rich and complex information from a digital image, as close as possible to biological vision and visual perception.
For several decades, object recognition and the related tasks of image understanding were predominantly approached by extracting handcrafted features from the images and then training machine learning classifiers on them. As a result, much of the computer vision literature was dominated by the proposal and refinement of image processing techniques, such as edge detectors, line and curve detectors like the Hough transform \citep{duda1972hough}, scale-invariant feature descriptors such as SIFT \citep{lowe2004sift} and HOG \citep{dalal2005hog}, or Haar-like features \citep{lienhart2002haar}, among many others. These features are usually combined using bag-of-visual-words techniques \citep{sivic2003bog} to train classifiers such as support vector machines (SVM) \citep{cortes1995svm}, or with more sophisticated approaches such as Fisher kernels \citep{perronnin2007fisher}. Today, these techniques are informally known as \textit{traditional} machine learning or \textit{traditional} computer vision.
What is currently considered \textit{modern} machine learning are \textit{deep artificial neural networks} (ANNs). A major breakthrough in image object recognition occurred in 2012, when \citet{krizhevsky2012alexnet} presented results of a neural network algorithm that nearly halved the previous error measure on the large-scale benchmark data set ImageNet \citep{russakovsky2015imagenet}. This attracted an increasing amount of attention towards this kind of algorithm, leading to a rapid development of the subfield of machine learning known as \textit{deep learning}.
ANNs are, however, not exactly \textit{modern}. The history of neural networks in the scientific literature can be traced back at least to the 1940s, when the first theories of biological learning were proposed \citep{mcculloch1943biologicallearning, hebb1949biologicallearning}. Biological learning and biological neural networks inspired the development of artificial models such as the perceptron \citep{rosenblatt1958perceptron} and later the Neocognitron \citep{fukushima1982neocognitron}---a precursor of current convolutional neural networks---directly inspired by the findings of \citet{hubelwiesel1959} about the receptive fields of neurons in the cat's visual cortex. ANNs gained some popularity in the 1980s and 1990s with the development and application of the \textit{backpropagation} principle \citep{rumelhart1986backprop} and other successful proposals, such as long short-term memory (LSTM) recurrent neural networks \citep{hochreiter1997lstm}. Nonetheless, the progress and adoption of neural networks until 2012 was slow and minor, overshadowed by other algorithms such as the SVM.
One of the main differences---and arguably, advantages---of deep learning with respect to traditional machine learning algorithms is that deep networks are able to extract the relevant features for a certain task, such as image object recognition, more directly from the data. This makes it possible to apply similar learning principles, techniques and architectures to a more general set of data modalities and tasks. It can be regarded as a step towards finding a general learning principle, a hypothesis suggested in biological learning upon the observation of the highly regular structure found in the neocortex across different brain areas and species \citep{douglas1989neocortex, harris2015neocortex}. This \textit{philosophy} of letting an algorithm learn the relevant features from the available data---\textit{representation learning}---can be regarded as the opposite end of the spectrum from \textit{symbolic artificial intelligence} (AI) \citep{haugeland1989ai}, whose approach was to programme an agent to perform certain tasks by directly manipulating symbols and manually implementing rules to simulate some behaviour. The traditional machine learning and computer vision algorithms used for many years would lie somewhere in between symbolic AI and the philosophy of deep learning.
Despite the significant practical and conceptual step forward originated by the recent progress and adoption of deep learning, the claims or beliefs that ANNs are or will be able to automatically learn anything from data without human intervention are overstated. Naturally, artificial neural networks are also subject to the \textit{no free lunch theorem}: no learning algorithm is better than any other at classifying unobserved data points, when averaged over all possible data distributions \citep{wolpert1996nofreelunch}. In other words, in order to learn anything about the world it is necessary to introduce some \textit{priors} or \textit{inductive biases} about the world.
In this light, although it is often claimed that deep learning is a shift away from the traditional approach of hand-engineering features to ``automatically'' learning them with ``a general-purpose learning procedure'' \citep{lecun2015deeplearningnature}, it can also be seen as a shift of inductive biases. The priors used by deep learning are certainly \textit{more} general-purpose than those used in traditional machine learning, but neural networks do not remove the need for inductive biases---and they never will. While a few years ago we handcrafted visual descriptors that could discriminate object classes in a set of images, we now handcraft types of layers, architectures, parameter initialisation strategies, regularisers and optimisation methods, to name a few elements of deep learning\footnote{Even approaches such as neural architecture search \citep{zoph2016nas}, which aim at further automating machine learning, not only use a tremendous amount of computational resources---with a proportional environmental impact---but their efficacy has been questioned, as reviewed by \citet{gencoglu2019hark}, among others.}. The current success of deep learning is the result of a collective effort in exploring the combinations of these elements that work best for different tasks, presented in a large body of scientific literature.
We argue that the new landscape opened by deep learning is undoubtedly a significant step towards a more natural and general way of processing data, especially because it allows training learning algorithms almost end-to-end from nearly naturalistic sensory signals, such as digital images or sound \citep{saxe2020dlandneuroscience}. However, we should not neglect the search for better inductive biases, that is, incorporating prior information about the tasks we want to solve, since without it learning would be much less efficient or not possible at all. We know this from statistical learning theory, but also from the innate genetic inductive biases provided by evolution in nature, which seem to predispose organisms to quickly learn and adapt to their specific environment \citep{zador2019purelearning}. Hence, studying biological learning systems seems like a natural approach to draw inspiration for improving artificial learning algorithms \citep{hassabis2017aiandneuroscience, nayebi2017bioconstraints, lindsay2018bioconstraints, malhotra2020bioconstraints}.
A key pair of ingredients in the development and success of deep learning was the increase in available computational power, alongside the publication of large data sets. The comparably much smaller data sets and reduced computational resources of several decades ago have been argued to be one of the reasons why researchers could not prove the capabilities of ANNs until recently. Compared to other machine learning models, neural networks excel at learning from large data sets, but in some cases require substantial and specialised computing power, such as graphical processing units (GPUs). In computer vision specifically, the efforts in hand-labelling large data sets over the last decades set the grounds for deep learning to unleash its potential. Some of these data sets, which we have used in the experiments presented in this thesis, are MNIST, 70,000 small greyscale images of digits \citep{lecun1998mnist}; CIFAR-10 and CIFAR-100, 60,000 small colour images of objects from 10 and 100 classes, respectively \citep{krizhevsky2009cifar}; and ImageNet (ILSVRC 2012), 1.3 million high-resolution natural images of 1,000 object classes \citep{russakovsky2015imagenet}.
With the availability of these benchmark data sets to train and compare different algorithms, most of the research effort in recent years has been devoted to improving different aspects of the neural network architectures and the training process: parameter initialisation strategies \citep{glorot2010glorot, he2015he}, activation functions \citep{glorot2011relu}, normalisation layers \citep{ioffe2015batchnorm}, stochastic optimisation methods \citep{duchi2011adagrad, kingma2014adam}, network architectures \citep{springenberg2014allcnn, simonyan2014vgg, he2016resnet, huang2017densenet}, and so on. The collaborative effort of many researchers has been essential to develop these and many other methods that have advanced the performance and our understanding of deep neural networks. Meanwhile, beyond the creation and publication of data sets, little attention has been paid to the data itself, which is considered fixed and given, since the goal is generally to improve the state-of-the-art results on the common benchmark data sets. However, as we have noted, the availability of data was key to the success of deep learning in image object recognition.
The need for larger amounts of labelled examples to train neural networks led some researchers to create data sets, and also led many to think of ways to easily create new examples. A straightforward way to extend a data set, especially in the image domain, is through \textit{data augmentation}. Data augmentation, broadly defined, consists of synthetically expanding a data set by applying transformations to the available examples that preserve the ground truth labels. Data augmentation has been used in image object recognition at least since the 1990s \citep{abumostafa1990hints, simard1992daug}; has been identified as a critical component of many successful models, such as AlexNet \citep{krizhevsky2012alexnet}; and is ubiquitous in both deep learning research and applications. Nonetheless, despite its popularity, data augmentation has surprisingly remained a largely understudied component of machine learning: many in the field use it, know that it helps generalisation and intuitively understand why, but little is known about its interaction with other techniques or whether its actual potential goes beyond simply increasing the number of training examples.
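As a minimal illustration of the idea, label-preserving image transformations are cheap to implement; the sketch below applies a random horizontal flip and a random shifted crop with zero padding, two of the most common augmentations. It is a generic example, not the specific scheme used in any of the experiments in this thesis.

```python
import numpy as np

def augment(image, rng):
    """Apply simple, label-preserving transformations to an image
    (H x W x C array): a random horizontal flip and a random crop
    from a zero-padded version of the image."""
    if rng.random() < 0.5:
        image = image[:, ::-1]          # horizontal flip
    pad = 4
    h, w = image.shape[:2]
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)  # random shift in [0, 2 * pad]
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]
```

Each call produces a slightly different, perceptually plausible variant of the same example, so the effective size of the training set grows without collecting new labels.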
A significant part of this thesis aims at filling this gap and shedding light on the role of data augmentation for visual object recognition with deep neural networks. We argue that data augmentation has been heavily understudied because it has been looked down on by the machine learning community, regarded almost as a \textit{cheating} technique, rather than as a method that deserves analysis and attention. In this dissertation, we contend that data augmentation is interesting beyond simply providing new examples. We analyse data augmentation as regularisation, but draw significant differences with respect to classical regularisation. We also analyse the potential of data augmentation for training models that learn robust visual representations, taking inspiration from properties of the visual cortex. Overall, we here aim to explore the connections between data augmentation, visual perception and biological vision.
\section{Why data augmentation?}
Having a background in image processing and having carried out research projects with traditional, handcrafted visual descriptors \citep{hernandez2017mscthesis}, when I first started training neural networks with image data at the beginning of my PhD, it was the most natural approach for me to apply transformations to the images in my data sets to get some more data points \textit{for free}. As I was learning about deep learning, I became increasingly interested in the generalisation properties of neural networks and in regularisation methods. In my toy experiments, I noticed that while the most popular regularisation methods, that is, weight decay and dropout, did improve the test performance of the models, the true boost in generalisation seemed to be provided by the image transformations that I had coded, that is, data augmentation. This made sense to me: we know from statistical learning theory that generalisation can improve by finding the right complexity of the model, that is, by accurately tuning the regularisation; but generalisation should always improve with more training examples (see Section~\ref{sec:background-generalisation}).
I became even more intrigued by these observations and got additional insights after reading ``Understanding deep learning requires rethinking generalization'', by \citet{zhang2016understandingdl}. This article, among other results, included the performance of a few models trained with and without weight decay, dropout and data augmentation. The superiority of data augmentation was also apparent from carefully analysing the results in the tables, but this fact is nearly ignored in the paper, since all three methods---weight decay, dropout and data augmentation---are considered the same type of regularisation (see Chapter~\ref{ch:daugreg}). I had the chance to chat with Dr. Chiyuan Zhang at his poster presentation at the International Conference on Learning Representations in 2017. While he was arguing that the generalisation bounds based on the Rademacher complexity (see Section~\ref{sec:background-rademacher}) may not explain the generalisation performance of deep neural networks, I asked what the $n$---the number of training examples---would be in the formula if they trained with data augmentation. Of course there is no straightforward answer to that question, so that got me thinking.
Since data augmentation was so remarkably effective in improving the test performance of neural networks, I assumed there would be literature on the topic explaining how exactly data augmentation affects generalisation, with empirical comparisons against other regularisation methods. Unfortunately, I could not find much, other than corroborating that everybody seemed to use data augmentation in their papers to push the performance of their models towards the state of the art \citep{krizhevsky2012alexnet, springenberg2014allcnn}. I could think of three types of reasons for this lack of literature on data augmentation: One, ``it is \textit{obvious} that data augmentation is beneficial''. Two, ``it is \textit{too complicated} to study''. Three, ``it is \textit{not interesting}''. The first reason was not very satisfying, scientifically. The second one was actually a reason for me to try to shed some light on it. While it is probably a mix of all three, I think the third point was the actual reason why data augmentation was disregarded: As mentioned in Section~\ref{sec:intro-learning_to_see}, the philosophy of the re-emerging deep learning field was to let a model learn ``good features \dots automatically using a general-purpose learning procedure'' \citep{lecun2015deeplearningnature}. Handcrafting some image transformations to augment a data set seemed closer to the \textit{old-fashioned} traditional machine learning methods and against the deep learning philosophy. As a matter of fact, data augmentation seemed to be considered \textit{cheating}, and many papers would include results of their new method or architecture \textit{without} data augmentation to show that it actually worked \citep{goodfellow2013maxout, graham2014fracmaxpool, larsson2016fractalnet}---these ablation studies were, however, not carried out with other techniques, such as weight decay or dropout.
The fact that data augmentation increases the number of training examples---even though it breaks the assumption of independent and identically distributed sampling---is only its most obvious advantage: the tip of the iceberg. In what follows, we outline the interpretation of data augmentation as a remarkably useful inductive bias directly connected to visual perception, and make explicit some connections with properties of the visual cortex.
In the application of deep learning we have considered so far---image understanding and, in particular, object recognition---it is easy to get stuck in the specific goal of the task at hand: obtaining high classification performance on the test set of a benchmark data set. However, it is always useful to take a step back to see the forest for the trees. The objective, at least for the research community, is not to incrementally improve the state-of-the-art performance on benchmark data sets, but to truly develop good models of image understanding. Furthermore, when we talk about object recognition, we generally do not mean object recognition for any arbitrary visual system or modality of the many in nature; we mean \textit{human} object recognition, that is, recognising the object classes that are relevant for humans, from photos or videos that resemble how we perceive the world, that is, natural images\footnote{Some particular subfields of computer vision are devoted to non-human vision, for example multispectral imagery \citep{audebert2019multispectral}.}.
Since we are interested in human object recognition, we argue that we should always keep an eye on what we know from visual perception and, ideally, biological vision too. The transformations most commonly applied in data augmentation schemes are \textit{perceptually plausible}: the resulting image preserves the properties of the perceived visual world as well as, in the case of object recognition, the object class\footnote{Other types of transformations, in which perceptual plausibility is not preserved, have been successfully used in computer vision. One popular example is \textit{mixup} \citep{zhang2017mixup}, which computes the weighted average of the pixels of two images---and of their labels. Another example is data augmentation in feature space \citep{devries2017daugfeatspace}. However, in this thesis we are interested in exploring connections between machine learning and visual perception, and thus we will consider only perceptually plausible image transformations.}. Especially in the image domain, it is straightforward to identify a large number of perceptually plausible transformations: equivariant transformations, some colour adjustments, blurring, etc. (some examples are shown in Figure~\ref{fig:intro-daug_imagenet}), and it has been well known for decades how to apply them to digital images \citep{gonzalez2018imageprocessing}. Hence, the combination of tools for performing the transformations and expert domain knowledge---visual perception---provides us with a large-capacity generator of new examples and an approximate \textit{oracle} of the target function, for a relevant subset of the input space.
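As an illustration of how readily such transformations can be implemented, the following sketch applies a horizontal flip, a random crop and a brightness adjustment to a raster image using only NumPy; the function names and parameter values are ours, chosen for illustration rather than taken from any particular augmentation library:

```python
import numpy as np

def horizontal_flip(image):
    """Mirror the image along its width axis (class-preserving for most objects)."""
    return image[:, ::-1]

def random_crop(image, size, rng):
    """Crop a random square window, simulating a change of framing or viewpoint."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size]

def adjust_brightness(image, factor):
    """Scale pixel intensities, simulating a change of illumination."""
    return np.clip(image * factor, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # a dummy 32x32 RGB image with values in [0, 1]
augmented = adjust_brightness(random_crop(horizontal_flip(image), 28, rng), 1.2)
print(augmented.shape)  # (28, 28, 3)
```

In practice, such transformations are composed at random at every training iteration, so that the network rarely sees exactly the same input twice.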
Access to an oracle of the target function serves as a remarkably effective inductive bias, which is at the very essence of machine learning. Data augmentation exploits this oracle, constructed from our knowledge about perception, to densely populate the relevant regions of the high-dimensional input space, that is, the different views of the objects that are perceptually plausible. \citet{richards2019dlandneuroscience} argue that in order to train neural networks ``the three components specified by design are the objective functions, the learning rules and the architectures''. Throughout this dissertation we will argue that incorporating inductive biases through the data, possibly in combination with the objective function (Chapter~\ref{ch:invariance}), is another effective way to learn better, more robust representations.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/imagenet15.png}
\end{center}
\caption{Some examples of perceptually plausible image transformations applied to the same image from ImageNet.}
\label{fig:intro-daug_imagenet}
\end{figure}
\section{Rethinking supervised learning}
\label{sec:intro-rethinking_supervised}
The re-emergence of deep learning during the last decade, due to the noteworthy achievements of artificial neural networks, gave rise to a sort of philosophy that anything could be automatically learnt from data without human intervention, in contrast to the previous approaches, which indeed required a higher degree of manual design. However, this promise is misleading, since it misses the fact that the success of deep networks has been partly due to the human effort of manually labelling thousands of images and other data modalities \citep{russakovsky2015imagenet, cao2018vggface2}. As a matter of fact, the need for large amounts of data has been at the core of some strong criticisms of deep learning \citep{marcus2018critiquedl}. We argue instead that neither is deep learning some sort of exceptional solution that learns without inductive biases, nor is it a hopeless model class because it requires large data sets. On the contrary, we contend that the competitive advantage of artificial neural networks is that they are deep universal function approximators \citep{hornik1991functionapproximation} that have been proven to \textit{be able to} learn from large collections of data. While other models are known to scale poorly as the amount of training data increases, ANNs excel at fitting the training data and interpolating to unseen examples \citep{belkin2019biasvariance, hasson2020directfit}. This is a feature, not a bug. We will make faster progress if we exploit the advantages of deep learning, while being aware of its limitations.
Data augmentation, as we have seen, is an effective way of exploiting this property: it generates examples in the regions of the input space where the model should learn how to interpolate. A common argument for why this should not be necessary---why neural networks should be able to generalise from few examples---is that humans and other animals learn, to categorise objects for example, with very little supervision \citep{vinyals2016oneshot, marcus2018critiquedl, morgenstern2019oneshot, zador2019purelearning}. In this section, we elaborate on why we think this may be a misconception that can lead us astray, and how we can benefit from rethinking the notions of supervised and unsupervised learning. We will do so by looking into learning theory, visual perception and biological vision, and how these ideas relate to data augmentation and the contributions of this dissertation.
\subsection{Supervised biological learning}
The comparison between artificial intelligence---specifically artificial neural networks---and biological learning systems is intrinsic to the field, since one long-term goal of artificial intelligence is to mirror the capabilities of human intelligence. However, these capabilities are sometimes overestimated. One example is the argument that intelligence in nature evolves without supervision. In what follows, we discuss three aspects of biological learning to argue against this view, in order to gain insights that better inform our progress in machine learning: first, how generalisation requires exposure to relevant \textit{training data}; second, the role of evolution and brain development; and third, the variety of supervised signals that the brain may use.
First, in the argument that machine learning models should generalise from a few examples, there seems to be a promise, or aspiration, that with better future methods it will be possible for a neural network---for instance---to perform robust visual object categorisation among many object classes after being trained on one or a few examples per class. While we agree that a challenge for the near future is to learn \textit{more efficiently} from fewer examples, we should also remind ourselves that no machine learning algorithm can robustly learn anything that cannot be inferred from the data it has been trained on. While this may seem to go against the ultimate goal of deep learning and be a reason to look for radically different approaches, we should bear in mind that learning in nature is not too different.
Even though the human visual system is remarkably robust, its capabilities are optimised for the tasks it needs to perform and largely shaped by experience, that is, the \textit{training data}---and years of evolution \citep{hasson2020directfit}, as we will discuss below. For instance, from the literature on human visual perception, we know that object recognition is viewpoint dependent \citep{tarr1998viewpointdependence}. A well-studied property of human vision is that our face recognition ability is severely impaired if faces are presented upside down \citep{yin1969invertedfaces, valentine1988invertedfaces}. Setting aside the specific complexity of face processing in the brain, a compelling explanation for this impairment is that we are simply not used to seeing and recognising inverted faces. More generally, human perception of objects and our recognition ability are greatly affected when we see objects from unfamiliar viewpoints \citep{edelman1992viewpointdependence, tarr1998viewpointdependence, bulthoff2006viewpointdependence, milivojevic2012viewpointdependence}. Furthermore, although better than the \textit{one-shot} or \textit{few-shot} generalisation of current ANNs, humans also have a limited ability to recognise novel classes \citep{morgenstern2019oneshot}. Interestingly, experiments with a class of novel objects, known as Greebles, showed that with sufficient experience and training, humans can acquire expertise in recognising new objects from different viewpoints, even making use of an area of the brain---the fusiform face area---that typically responds strongly to face stimuli \citep{gauthier1999greebles}. This provides evidence that \textit{generalisation} to multiple viewpoints develops only after exposure to similar conditions. In this regard, data augmentation seems like a straightforward way to provide a certain degree of input variability.
Second, the commonplace comparison of artificial neural networks with the brain often misses a fundamental component of biology, recently brought to the fore by \citet{zador2019purelearning} and \citet{hasson2020directfit}, although considered since the early days of artificial intelligence \citep{turing1948intelligentmachinery}: the role that millions of years of evolution have played in developing the nervous systems of organisms in nature, including the human brain. For example, some similarities have been observed between the representational geometry of the internal features learnt by ANNs trained for visual object recognition and that of the neural activations measured in the visual cortex of the brain \citep{yamins2014annsbrains, khaligh2014annbrains, gucclu2015annbrains}. These articles were followed up by numerous studies that postulate ANNs as models of the visual cortex. However, what exactly drives better similarity with the brain remains an open question and this line of research presents several challenges.
While artificial neural networks are typically trained from scratch, from a random initialisation, neural activations are often measured in the adult brain. A reasonable approach would be to at least consider insights from developmental neuroscience and the infant brain \citep{harwerth1986criticalperiods, atkinson2002developmental, gelman2011childcategorization}. Moreover, as mentioned above, an interesting avenue is to also take into account the role of evolution. A large part of the brain connectivity is encoded genetically, and some properties of the visual cortex are known to be innate, that is, present without prior exposure to visual stimuli \citep{zador2019purelearning}. Since neural networks are expected to learn some of these properties through extensive training on image data sets from scratch\footnote{Some interesting and promising areas in machine learning research deviate from this standard approach. For example, transfer learning and domain adaptation study the potential of features learnt on one task to be reused in different, related tasks \citep{zhuang2019transferlearning}, and continual learning studies the ways in which machine learning models can indefinitely sustain the acquisition of new knowledge without detriment to previously learnt tasks \citep{aljundi2019continuallearning, mundt2019continuallearning}. These approaches are inspired by biological learning or share interesting properties with it.}, part of the artificial learning process may have more to do with evolution than with the visual learning capabilities of an adult brain. This may be another reason to temper the expectation that neural networks should be able to learn from a few examples without \textit{hard-wiring} some of the innate properties of the brain \citep{lindsey2019bioconstraints, malhotra2020bioconstraints} or, alternatively, simulating part of the evolutionary process.
Third, another common argument has it that children, and other animals in general, learn robust object recognition without supervision. First of all, we should recall again the role of evolution, which can be interpreted as providing a pre-trained model, optimised through millions of years of data with natural selection as a supervised signal \citep{zador2019purelearning}. Second, we will argue against the very claim that children learn in a fully unsupervised fashion. Obviously, the kind of supervision that humans make use of is not identical to that of machine learning algorithms: we do not see a class label on top of every object we look at. However, we receive supervision from multiple sources. Even though not for every visual stimulus, children do frequently receive information about the object classes they see---parents point at objects and name them, for instance. Furthermore, humans usually follow a guided, hierarchical learning process: children do not directly learn to tell apart breeds of dogs, but rather start with umbrella terms and then progressively learn down the class hierarchy \citep{bornstein2010hierarchychildren}. \citet{hasson2020directfit} mention other examples of supervision from \textit{social cues}, that is, from other humans, such as learning to recognise individual faces, produce grammatical sentences, read and write; as well as from embodiment and action, such as learning to balance the body while walking or grasping objects. In all these actions, we can identify a supervised signal that surely influences learning in the brain \citep{shapiro2019embodied}.
Moreover, besides this kind of explicit supervision, the brain surely makes use of more subtle, implicit supervised signals, such as temporal stability \citep{becker1999temporalstability, wyss2003temporalstability}: The light that enters the retina is not a random signal from a sequence of rapidly changing arbitrary photos, but a highly coherent and regular flow of slowly changing stimuli, especially at the higher, semantic level \citep{kording2004complexcells}. At the very least, this is how we perceive it, and if such a smooth perception is a consequence rather than a cause, then it should be, again, a by-product of a long process of evolution. In the light of these insights from evolutionary theory and the explicit and implicit supervision that drives biological learning, we argue that we should temper the claims and aspirations that artificial neural networks should learn without supervision and from very few examples. Instead, we may benefit from rethinking the concept of supervision, embracing it, and trying to incorporate the forms of supervision present in nature into machine learning algorithms.
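To make the idea of temporal stability as an implicit supervised signal concrete, the following sketch, in the spirit of slowness-based objectives such as slow feature analysis, scores a sequence of representations by how slowly they change over time; the function and the toy features are illustrative assumptions, not a model of the actual neural computation:

```python
import numpy as np

def slowness_penalty(representations):
    """Mean squared frame-to-frame change of a (T, d) sequence of representations.

    A small value means the representation varies slowly over time. Natural
    visual input is temporally coherent, so favouring slow features is one way
    to turn temporal stability into an implicit supervised signal.
    """
    diffs = np.diff(representations, axis=0)  # (T-1, d) consecutive differences
    return float(np.mean(diffs ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)[:, None]
slow_features = np.sin(2 * np.pi * t)          # smoothly varying signal over time
fast_features = rng.standard_normal((100, 1))  # temporally incoherent noise
print(slowness_penalty(slow_features) < slowness_penalty(fast_features))  # True
```

A representation optimised to keep this penalty small across natural image sequences would tend to discard rapidly varying nuisances and preserve the slowly changing, semantic content of the stimuli.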
\subsection{Supervised machine learning}
If we open a machine learning textbook \citep{murphy2012machinelearning, abu2012learningfromdata, goodfellow2016dlbook}, we will most surely find a taxonomy of learning algorithms with a clear distinction between \textit{supervised} and \textit{unsupervised} learning. If we take a look at the deep learning literature of the past years, we will also find abundant work on some variants \textit{in between}: semi-supervised learning, self-supervised learning, etc. However, while this taxonomy can be useful, the boundaries are certainly not clear. As a matter of fact, strictly speaking, unsupervised learning is an illusion. If we recall the \textit{no free lunch} theorem \citep{wolpert1996nofreelunch}, averaged over all possible distributions, all classification algorithms are equivalent. Therefore, we need to constrain the distributions or, in other words, introduce prior knowledge---that is, \textit{supervision}. Recently, \citet{locatello2018disentanglement} obtained a similar result for the case of unsupervised learning of disentangled representations: without inductive biases on both the models and the data sets, unsupervised disentanglement learning is impossible. These results are purely theoretical and their assumptions do not hold in real-world applications, precisely because in practice we introduce multiple inductive biases, even when we do so-called unsupervised learning.
In a strict sense, even the classical, \textit{purely} unsupervised methods, such as independent component analysis or nearest neighbour methods, make use of priors, such as independence or distance, respectively. Without an inductive bias, learning is not possible. Consider the data points in Figure~\ref{fig:unsupervised} (middle). With no prior information, all categorisations of the points are possible and equally valid. Depending on the inductive bias used, one model may find the configuration on the left, on the right, or any other. Which one is better depends on the task.
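The point illustrated in Figure~\ref{fig:unsupervised} can be reproduced in a few lines: the same set of points admits two equally valid binary categorisations depending on the prior. In the following toy construction of ours, points sampled on two concentric circles are split either by distance to the origin or by horizontal position:

```python
import numpy as np

rng = np.random.default_rng(0)
# The same set of points, sampled on two concentric circles.
angles = rng.uniform(0.0, 2.0 * np.pi, 200)
radii = rng.choice([1.0, 2.0], 200)
points = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

# Prior 1: categories are defined by distance to the origin (concentric split).
concentric = np.linalg.norm(points, axis=1) > 1.5

# Prior 2: categories are defined by horizontal position (lateral split).
lateral = points[:, 0] > 0.0

# Both partitions are consistent with the data; without prior knowledge about
# the task, neither is more correct than the other.
print(int(concentric.sum()), int(lateral.sum()))
```

Only an inductive bias, here the choice of a distance-based or a coordinate-based criterion, selects one partition over the other.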
\begin{figure}[htb]
\centering
\begin{subfigure}{0.32 \linewidth}
\includegraphics[keepaspectratio=true, width=\linewidth]{\imgpath/circles_lateral.png}
\caption{One possible categorisation}
\label{fig:lateral}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[keepaspectratio=true, width=\linewidth]{\imgpath/circles_all-black.png}
\caption{A set of data points}
\label{fig:allblack}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[keepaspectratio=true, width=\linewidth]{\imgpath/circles_concentric.png}
\caption{One possible categorisation}
\label{fig:concentric}
\end{subfigure}
\caption{An illustration of the need for inductive biases. Without any prior knowledge about the objective, any clustering of the data points is valid and can potentially be realised by a learning algorithm.}
\label{fig:unsupervised}
\end{figure}
Although this is not news, the terminology used in the machine learning literature seems to neglect these nuances and suggests that the field suffers from \textit{catastrophic forgetting}\footnote{I am borrowing the expression from Prof. Irina Rish, who used it at a panel discussion of the UNIQUE Student Symposium 2020.}. Particularly in deep learning and computer vision, the term \textit{supervised} learning is often used to actually refer to \textit{classification}, that is, models trained on examples labelled according to object classes, for instance. In turn, \textit{unsupervised} learning is used for any model that does not use the labels, regardless of what other kinds of supervision may be used. Further, the term \textit{semi-supervised} learning refers to models that are trained with a fraction of the labels. While this terminology may be useful in some cases, it is not well defined and misses the fact that supervision can come in multiple flavours, not only as classification labels.
We have seen examples of the different forms of supervision used by humans and other animals. What forms of supervision are common in machine learning? The relatively recent explosion of deep learning has brought about the development of several libraries for automatic differentiation\footnote{Automatic differentiation is also known as algorithmic differentiation and differentiable programming, among other names. It is the set of techniques that allows one to calculate the derivatives of numeric functions algorithmically. The development of automatic differentiation libraries has played a key role in the progress of deep learning in the past 10--15 years. Examples are Theano \citep{theano2016}, TensorFlow \citep{tensorflow2015} and PyTorch \citep{pytorch2019}.} \citep{baydin2017automaticdifferentiation}, which in turn have enabled the proposal of multiple loss functions with other types of supervision that can effectively be optimised by artificial neural networks through backpropagation and stochastic gradient descent. However, these approaches are often termed in the literature \textit{semi-supervised}, \textit{self-supervised} or even \textit{unsupervised} learning. The term \textit{self-supervised} learning has gained popularity recently \citep{jing2020selfsupervised}. On the one hand, this term acknowledges the fact that supervision is used---as opposed to unsupervised learning. On the other hand, it draws a hard line between classification and the other forms of supervised learning. The most likely reason for this separation is that labels are more costly to obtain than the tasks of self-supervised learning are to define and implement, rather than any formal, conceptual distinction. From a theoretical point of view, both conventional classification models and the recent wave of self-supervised objectives can be formalised within the category of supervised learning.
Importantly, the bulk of statistical learning theory (see a brief review in Section~\ref{sec:background-generalisation}) has been developed for binary classification loss functions, then extended to multiclass classification and, in part, to regression loss functions such as the mean squared error. Critically, mapping many results from statistical learning theory onto the various objective functions used in self-supervised learning is far from trivial. Given the success of these kinds of supervised objectives and their connection with perception, the study of these methods from a theoretical point of view might be a fruitful direction for future work.
\section{Data augmentation, regularisation and inductive biases}
After discussing the conflict in the terminology and acknowledging that purely unsupervised learning is an illusion, we can return to the role of inductive biases from visual perception and biological vision in defining useful forms of supervision for training artificial neural networks and, in particular, the role of data augmentation.
If we consider the particular, well-studied case of classification, to which the problem of visual object categorisation belongs, we can recall again the well-known \textit{no free lunch} theorem, which establishes that learning is not possible without any prior. The field of statistical learning theory, whose fundamental results we review in Section~\ref{sec:background-generalisation}, studies the conditions that make learning from data possible. One of its key results is that the space of possible solutions where an algorithm can search, the hypothesis set, has to have limited capacity---for instance, a finite VC dimension \citep{vapnik1971vc}. Otherwise, the problem of inferring a function from a finite set of data is ill-posed. The classical way to ensure the problem is well posed is through regularisation \citep{phillips1962regularisation, tikhonov1963regularisation, ivanov1976regularisation}. Essentially, regularisation imposes a constraint on the hypothesis set---in the classical sense, a smoothness constraint---so that the inferred function cannot vary too rapidly around the training data points (see Section~\ref{sec:background-regularisation} for more details). Therefore, we can think of regularisation as an inductive bias.
Being such an essential ingredient of the learning problem, regularisation has been widely studied and multiple forms of regularisation have been proposed. Two ubiquitously used regularisation techniques in deep learning are weight decay and dropout (reviewed in Section~\ref{sec:background-regularisation}). The former is a classical constraint on the norm of the learnable weights. The latter is a procedure that randomly turns off a subset of neurons during training. Both techniques have been shown to improve the generalisation of ANNs on test data, hence their wide adoption. Thinking of regularisation techniques as inductive biases allows us to analyse the type of prior knowledge they incorporate, as well as to widen the notion of regularisation to include other methods, such as data augmentation.
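In their simplest form, both techniques amount to a few lines of code. The following sketch shows weight decay as the gradient of an L2 penalty on the weights and dropout as a random mask on the activations; it is a minimal illustration of the two ideas, not the exact formulation used by any specific deep learning library:

```python
import numpy as np

def weight_decay_gradient(grad, weights, decay=1e-4):
    """Add the gradient of the L2 penalty (decay / 2) * ||w||^2 to the loss
    gradient, pulling every weight towards zero at each update."""
    return grad + decay * weights

def dropout(activations, rate, rng, training=True):
    """Randomly zero a fraction `rate` of the units during training, rescaling
    the survivors so that the expected activation is unchanged at test time."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
grad = np.zeros_like(w)
decayed = weight_decay_gradient(grad, w, decay=0.1)  # equals 0.1 * w here

a = np.ones(1000)
dropped = dropout(a, 0.5, rng)
print(dropped.mean())  # close to 1.0: the expectation is preserved
```

Note that both techniques constrain the model itself, whereas data augmentation, as discussed above, acts through the data.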
One of the contributions of this thesis is the comparison of weight decay, dropout and data augmentation. In Chapter~\ref{ch:daugreg}, we show that neural networks trained on image object recognition tasks \textit{without} weight decay and dropout achieve better performance than their regularised counterparts, provided data augmentation is used during training. However, the common practice is to include all three---weight decay, dropout and data augmentation---among other methods. If we think of the inductive biases each technique introduces, data augmentation seems intuitively more advantageous: Weight decay assumes that models with smaller weights should generalise better. Dropout assumes that neural networks that are forced to perform well with a subset of their neurons should perform better when all the neurons are used. Data augmentation makes use of an approximation of the oracle function, derived from prior knowledge about visual perception, that generates examples in regions of the input space where the model should learn a mapping. Although the inductive biases introduced by weight decay and dropout are clearly beneficial, the contribution of data augmentation seems more powerful. Nonetheless, these results were received with scepticism by the machine learning community.
In order to shed more light on the debate, we also draw theoretical insights from statistical learning theory that support the empirical findings and provide a grounded distinction between explicit regularisation methods, to which weight decay and dropout belong, and the implicit regularisation effect provided by data augmentation and other methods (Chapters~\ref{ch:reg}~and~\ref{ch:daugreg}). Explicit regularisation methods operate by directly constraining the hypothesis set of the model, that is, its representational capacity. However, many other methods that cannot be considered explicit regularisation provide an implicit regularisation effect, in that they improve generalisation. Drawing a connection with the previously discussed ideas about supervision and inductive biases, we conclude that while explicit regularisation may indeed improve generalisation, it generally involves the optimisation of sensitive hyperparameters, and comparable or even better returns can be obtained by exploring ways of incorporating more meaningful inductive biases from visual perception and biological vision.
\section{Invariance}
In search of better inductive biases, we subscribe to the increasing trend in the machine learning community of exploring ways of training artificial neural networks beyond classification\footnote{Note we use the term \textit{classification}, not \textit{supervised learning}, after the discussion in Section~\ref{sec:intro-rethinking_supervised}.}. A pillar of this thesis is the attempt to integrate different disciplines. Therefore, in order to look for ways of improving visual object recognition models, we searched for inspiration in the mechanisms that the brain has evolved in the visual cortex to solve object recognition.
The visual cortex is the part of the brain that processes visual information. One of its fundamental properties is the hierarchical organisation: while the primary areas of the visual cortex, which first process the information from the retina, respond to low-level properties such as the location in the visual field and the orientation of small parts of the stimuli \citep{hubelwiesel1962}, the inferior temporal (IT) cortex, later in the processing pipeline, responds to higher-level properties such as the object category of the stimulus \citep{gross1972it}. This organisation of the biological networks in the brain greatly inspired the development of artificial neural networks.
Related to the hierarchical organisation of the visual cortex, \citet{desimone1984invariantitmacaque} found that some neurons in the inferior temporal cortex of the macaque monkey responded consistently to the presentation of the same faces, regardless of their size and position. In contrast, neurons in earlier areas of the visual cortex are very sensitive to small changes in low-level properties of the visual stimuli, such as the orientation of edges. This invariance property was found to generalise to other objects besides faces \citep{booth1998invariantitmacaque} in the late 1990s and in the 2000s was observed in the human brain too \citep{quiroga2005invariantithuman}. Importantly, the invariance to identity-preserving transformations has been proposed to be an essential ingredient of robust visual object recognition in the brain \citep{dicarlo2007untangling, tacchetti2018invariance}. Hence, a reasonable question is whether artificial neural networks trained for visual object recognition are also invariant to such transformations.
The question of invariance in artificial neural networks has been addressed almost since their inception and studied from multiple perspectives. A large body of work has aimed at encoding different types of transformations into the networks (see a short review by \citet{cohen2016groupequivcnns}), such as translation or rotation invariance. One contribution of this thesis is the study of the invariance of ANNs towards identity-preserving transformations using data augmentation. The use of data augmentation is convenient: Not coincidentally, the kind of stimulus transformations tested by computational neuroscientists to study the invariance to identity-preserving transformations in the inferior temporal cortex and the image transformations typically included in data augmentation schemes are the same, or very similar. These are the transformations that we encounter naturally in the visual world, as we perceive it along the temporal dimension \citep{kording2004complexcells, einhauser2005viewpointinvariance, taylor2011temporalstability}, which perceptually preserve the object identity. For this reason we have named them \textit{perceptually plausible} transformations.
First, we measured the invariance to identity-preserving transformations of models trained on image object recognition data sets (Chapter~\ref{ch:invariance}). Intuitively and also taking the insights from the properties of the IT cortex, two images that represent different views of the same object should produce similar activations at the higher layers of a neural network. However, we found that the similarity is not better than at the pixel space, even though the models do classify the images correctly and were exposed to such transformations during training. This finding contradicts in part the general intuition that neural networks learn hierarchical representations, ranging from specific to more abstract, object-related features. We hypothesised that this is a sign of a lack of perceptual inductive bias. Given the large representational capacity of neural networks, training them with the sole objective of classifying a data set of images into the right classes does not seem enough to learn perceptually invariant representations, despite the theoretical results showing that invariance to \textit{nuisances} should emerge naturally \citep{achille2018emergence}. In general, there exist multiple possible solutions for the classification problem within the hypothesis space spanned by modern deep neural networks and the models do not seem to naturally converge to solutions well aligned with some crucial aspects of visual perception and biological vision \citep{sinz2019dlvsbrain, geirhos2020shortcutlearning, dujmovic2020adversarial}. This led us to propose \textit{data augmentation invariance}, an objective function that encourages robust representations, inspired by the invariance observed in the visual cortex.
In Chapter~\ref{ch:invariance}, we discuss the details of data augmentation invariance. We trained artificial neural networks on image object recognition data sets by jointly optimising the categorisation objective and a new data augmentation invariance objective. The latter is a layer-wise objective that encourages that the representations of transformations of the same image---generated through perceptually plausible data augmentation---cluster together. We attempted to simulate the increasing invariance along the visual cortex hierarchy by distributing the invariance loss exponentially along the neural network layers. Models trained with data augmentation invariance effectively learnt increasingly invariant representations without detriment to the classification accuracy, which even improved in some cases.
In view of these results we argue that replacing or complementing the standard classification objectives with perceptually and biologically inspired objectives, such as data augmentation invariance, is a promising avenue to both improve computer vision algorithms and obtain better models of natural vision. Such objectives are likely not biologically plausible, in the sense that the brain does not optimise an equivalent objective \citep{pozzi2018biologicallyplausible}. However, since properties like invariance to identity-preserving transformations are at least a by-product of either evolution or early brain development (or both), it is reasonable and potentially fruitful to optimise artificial neural networks with objectives that simulate key properties of the brain.
\section{Visual salience}
So far our discussion around visual perception, biological vision and machine image understanding algorithms has revolved mainly around object recognition. However, animal vision encompasses a broader range of capabilities that allow us to navigate and understand the world. As part of the interdisciplinary pursuit of this work, we here studied some aspects of another central component of vision: visual attention.
Visual attention is a complex brain mechanism that enables us to coherently process the sheer amount of light that enters our eyes. At any given time, even though the retina receives stimulation from the whole visual field, only a small fraction of the information is processed in detail \citep{desimone1995visualattention}. Specifically, the visual system preferentially processes the information located in the centre of the visual field---the \textit{fovea}---as the central part of the retina has a larger density of photoreceptors than the surroundings---the \textit{visual periphery} \citep{wassle1990fovea, azzopardi1993fovea}. For instance, a recent study has shown that many people fail to notice when up to 95\% of the visual field is presented without colour \citep{cohen2020colour}. Which particular area of the available information in front of us is processed in most detail at a time is mediated by eye movements, and what exactly drives eye movements is a complex, widely studied question that remains largely open.
For example, it is known that eye movements can be driven both by low-level properties of the stimuli---\textit{bottom-up}---and by cognitive processes derived from, for instance, a task or desire---\textit{top-down} \citep{vonstein2000topdown, munoz2004bottomup, connor2004buttomuptopdown, betz2010topdown, kollmorgen2010topdownbottomup, schutt2019bottomuptopdown}. An interesting subject of study is the relationship between object recognition and visual attention. There is strong evidence for the role of object recognition in the direction of eye movements \citep{zhaoping2007topdown}, but visual attention has also been suggested to predict object perceptual awareness \citep{holm2008attentionprecedes, kietzmann2011attentionprecedes}.
From a behavioural perspective, the majority of the research that studies visual attention makes use of eye tracking devices, which map the gaze of an observer at any given time to a location on a stimulus. Another active area of research, at the intersection between vision science and computer vision, is the modelling of visual salience, first proposed by \citet{itti1998salience}. Salience models shift the focus to the stimulus side and aim to answer the question ``what parts of a stimulus are most salient to a human observer?''. Adhering to the definitions by \citet{kuemmerer2018salience}, a salience model predicts the probability that a pixel of a given image is fixated, which can be expressed through salience maps that represent the distribution of salience for specific tasks. For this thesis we made use of both eye tracking and salience maps to study some aspects of human vision.
In one project, presented in detail in Chapter~\ref{ch:globsal}, we were interested in studying the global salience of competing stimuli, that is, stimuli presented side by side. The bulk of the research on computational models of visual salience addresses the question of what parts of a stimulus are more likely to attract the gaze of observers. In this case, we aimed at quantifying the salience of images as a whole, to seek answers to the following questions: Are some types of images more likely to attract the gaze of observers? If so, is this global salience related to the local salience properties of the images? Do other factors, such as familiarity with one of the images, play a role in the gaze direction of observers? In order to answer these questions, we conducted eye tracking experiments in which we recorded the gaze direction of participants who were shown pairs of images side by side. We then modelled the behavioural data with a machine learning algorithm and computed the local salience properties of the images with representative salience models from the literature. As a main finding, we concluded that natural images intrinsically have a global salience that varies widely across different types of images and is independent of the local salience properties.
In another study, we combined computational models of visual salience with brain measurements of functional magnetic resonance imaging (fMRI) to analyse properties of the human visual cortex. We followed up the work by \citet{zuiderbaan2017imageidentification}, where the authors showed that it is possible to identify which natural image was shown to a participant in the scanner from fMRI recordings. The predictions were made by comparing the brain activations elicited by each image on areas V1, V2 and V3, and a combination of a contrast map of the images with the receptive field properties of the cortical areas, obtained through the population receptive field (pRF) model \citep{dumoulin2008prf}. Here, we studied whether brain activity was better predicted by salience maps than by contrast maps, and extended the analysis to a broader range of visual cortical areas: V1, V2, V3, hV4, LO12 and V3AB \citep{wandell2007visualfield}. We studied two distinct models of visual salience, ICF and DeepGaze \citep{kuemmerer2017icfdeepgaze}, and concluded that salience is more predictive of brain activations than contrast, especially the salience model based on intensity and contrast information only (ICF), rather than on high-level features, suggesting that salience information is still present in the neural activations of the visual cortex.
\section{Overview of contributions and outline}
\label{sec:intro-contributions}
The overarching objective of this thesis is to explore and exploit the connections between deep artificial neural networks and the visual cognitive and neural sciences. We believe that all three fields can benefit from mutual collaboration and synergies.
In order to facilitate the understanding of the thesis to a broader audience and set the grounds for the discussion throughout the dissertation, Chapter~\ref{ch:background} provides an introduction to the fundamentals of both machine learning and visual object recognition in the brain, as well as other relevant concepts. This chapter serves also as a review of related scientific literature.
Then, a first block, from Chapter~\ref{ch:reg} to Chapter~\ref{ch:invariance}, has data augmentation as its central theme, starting from the rather machine learning-centred Chapter~\ref{ch:reg} and gradually incorporating aspects of visual perception and biological vision in the subsequent chapters.
Specifically, in Chapter~\ref{ch:reg}, we discuss the concepts of explicit and implicit regularisation and provide definitions of these terms that have been widely but ambiguously used in the literature. Part of this chapter is based on the publication \citep{hergar2018daugreg} and we here provide a longer discussion about the taxonomy of regularisation and examples of explicit and implicit regularisation, arguing in particular that data augmentation is not explicit regularisation, as considered before in the literature.
Chapter~\ref{ch:daugreg} focuses on the comparison between explicit regularisation techniques---weight decay and dropout---and data augmentation; much of its content has been published in several articles \citep{hergar2018daugadvantages, hergar2018daugreg, hergar2018wddropout}. We present the results of a systematic empirical evaluation, alongside some insights from statistical learning theory, to conclude that data augmentation alone can provide the same generalisation gain as when combined with explicit regularisation, and is remarkably more flexible. In view of these results, we discuss the need for weight decay and dropout in deep learning and propose to rethink the status of data augmentation.
In Chapter~\ref{ch:daugit}, we compare the representations learnt by neural networks trained with data augmentation and the activations in the inferior temporal cortex of the human brain \citep{hergar2018daugit}. We found that models trained with heavier transformations learn features more aligned with the representations in the higher visual cortex.
Following up on the connection between data augmentation and biological vision, in Chapter~\ref{ch:invariance} we study one of the fundamental properties of the visual cortex: the increasing invariance along the ventral stream to identity-preserving transformations of visual objects. Using data augmentation as a framework to generate such transformations, we first show that standard artificial neural networks optimised for object categorisation are hardly robust in terms of representational similarity. Then, we propose \textit{data augmentation invariance} as a simple, yet effective and efficient way of learning robust features, while preserving the categorisation performance \citep{hergar2019dauginv}.
The last two chapters of the dissertation can be seen as a separate block, in which data augmentation and artificial neural networks are not the main subject, although machine learning is still used as a tool. Chapter~\ref{ch:globsal} is closer to the field of cognitive science, as we study some aspects of visual behaviour through an eye-tracking experiment \citep{hergar2019globsal}. In particular, we propose \textit{global visual salience} as a metric of the likelihood of competing natural images to attract the gaze of an observer. Chapter~\ref{ch:imageid} is closer to neuroimaging, as we compare models of image identification from brain data to study properties of the early visual cortex. Part of the results of this study were presented as a poster at the Annual Meeting of the Vision Sciences Society in 2019 \citep{hergar2019imageid}, and we here extend the analysis.
We conclude the dissertation with a general discussion in Chapter~\ref{ch:discussion}, where we provide an overview of the main results, outline the connections between the different parts, discuss future lines of work and the broader potential impact of this work.
\chapterbibliography
}
\chapter{Data augmentation invariance}
\label{ch:invariance}
\renewcommand{\chapterpath}{includes/invariance}
\begin{contributors}
Tim C. Kietzmann supervised the work during my internship at his lab. Tim and Peter K{\"o}nig reviewed and edited the manuscripts submitted to conferences.
\end{contributors}
\begin{outreach}
\item \textit{Learning robust visual representations using data augmentation invariance.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Peter K{\"o}nig, Tim C. Kietzmann. arXiv preprint arXiv:1906.04547 \& Cognitive Computational Neuroscience (CCN), 2019 \& Workshop on Bridging AI and Cognitive Science, International Conference on Learning Representations (ICLR), 2019.
\item \textit{Learning representational invariance instead of categorization.} \textbf{Alex Hern{\'a}ndez-Garc{\'i}a}, Peter K{\"o}nig. Workshop on pre-registration in computer vision, International Conference on Computer Vision (ICCV), 2019.
\end{outreach}
Deep artificial neural networks (ANNs) have borrowed much inspiration from neuroscience and are, at the same time, the current best model class for predicting neural responses across the visual system in the brain \citep{kietzmann2019dnncompneuro, kubilius2018cornet}. Yet, despite consensus about the benefits of a closer integration of deep learning and neuroscience \citep{bengio2015dlandneuroscience, marblestone2016dlandneuroscience, richards2019dlandneuroscience}, important differences remain \citep{sinz2019dlvsbrain, geirhos2020shortcutlearning, dujmovic2020adversarial}.
Here, we investigate a representational property that is well established in the neuroscience literature on the primate visual system: the increasing robustness of neural responses to identity-preserving image transformations. While early areas of the ventral stream (V1-V2) are strongly affected by variation in object size, position, viewpoint and other factors, later levels of processing are increasingly robust to such changes, as measured first in single neurons of the inferior temporal (IT) cortex of macaques \citep{booth1998invariantitmacaque} and later in humans \citep{quiroga2005invariantithuman, isik2013dynamics}. The cascaded achievement of invariance to such identity-preserving transformations has been proposed as a key mechanism for robust object recognition \citep{dicarlo2007untangling, tacchetti2018invariance}.
Learning such invariant representations has been a desired objective since the early days of artificial neural networks \citep{simard1992daug}. Accordingly, a myriad of techniques have been proposed to attempt to achieve tolerance to different types of transformations (\citet{cohen2016groupequivcnns} briefly reviewed some of these efforts). Interestingly, recent theoretical work \citep{achille2018emergence} has shown that invariance to ``nuisance factors'' should naturally emerge from the optimisation process of deep models that minimise the information of the representations about the inputs, while retaining the minimum information about the target, as proposed by \citet{tishby2015infobottleneck} in the information bottleneck principle.
Nevertheless, artificial neural networks are still not robust to identity-preserving transformations, including simple image translations \citep{zhang2019convolutions}. A remarkable and extreme example is that of adversarial attacks \citep{szegedy2013adversarial}, in which small changes, imperceptible to the human brain, can alter the classification output of the network. Extending this line of research, we used data augmentation as a framework to generate the transformations to which vision models should be invariant, according to visual perception and biological vision. As a first contribution, we propose a simple metric, the \textit{data augmentation invariance score}, to measure the invariance of neural networks to identity-preserving transformations.
Second, inspired by the increasing invariance observed along the primate ventral stream of the visual cortex, we here propose \textit{data augmentation invariance}, a simple, yet effective and efficient mechanism to improve the robustness of the representations: we include an additional contrastive term in the objective function that encourages the similarity between augmented examples within each batch. We will argue that this objective encodes a useful inductive bias that exploits prior knowledge from visual perception and biological vision.
Finally, we explore the possibility of fully replacing the categorisation objective that is commonly used to train neural networks for classification, the categorical cross-entropy, by objective functions purely based on invariance learning.
\section{Invariance score}
\label{sec:invariance-eval}
To measure the invariance of the learnt features under the influence of identity-preserving image transformations we compare the activations of a given image with the activations of an augmented version of the same image.
Consider the activations of an input image $x$ at layer $l$ of a neural network, which can be described by a function $f^{(l)}(x) \in \mathbb{R}^{D^{(l)}}$. We can define the distance between the activations of two input images $x_{i}$ and $x_{j}$ by their mean squared difference:
\begin{equation}
\label{eq:invariance-mse}
d^{(l)}(x_{i}, x_{j}) = \frac{1}{D^{(l)}}\sum_{k=1}^{D^{(l)}}(f_{k}^{(l)}(x_{i}) - f_{k}^{(l)}(x_{j}))^2
\end{equation}
Following this, we are interested in the mean squared distance between $f^{(l)}(x_i)$ and a randomly sampled transformation of $x_i$, that is $d^{(l)}(x_{i}, G(x_{i}))$. $G(x)$ refers to the stochastic function that transforms the input images according to a pre-defined, parameterised data augmentation scheme.
In order to assess the similarity between the activations of an image $x_i$ and its augmented versions $G(x_{i})$, we normalise this distance by the average distance across the (test) set. We define the \textit{data augmentation invariance score} $S_{i}^{(l)}$ of image $x_i$ towards the transformation $G(x)$ at layer $l$ of a model, with respect to a data set of size $N$, as follows:
\begin{equation}
\label{eq:invariance-invariance}
S_{i}^{(l)} = 1 - \frac{d^{(l)}(x_{i}, G(x_{i}))}{\frac{1}{N}\sum_{j=1}^{N}d^{(l)}(x_{i}, x_{j})}
\end{equation}
Note that the invariance $S_{i}^{(l)}$ takes the maximum value of 1 if the activations of $x_{i}$ and its transformed version $G(x_{i})$ are identical, and the value of 0 if the distance between transformed examples, $d^{(l)}(x_{i}, G(x_{i}))$ (numerator), is equal to the average distance in the set (denominator).
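To make the computation concrete, the following is a minimal NumPy sketch of Equations~\ref{eq:invariance-mse} and \ref{eq:invariance-invariance}. This is not the original implementation; it assumes the layer activations have been flattened into vectors, one row per image:

```python
import numpy as np

def sq_distance(a, b):
    """Mean squared difference between two activation vectors, d(x_i, x_j)."""
    return np.mean((a - b) ** 2)

def invariance_score(acts, aug_acts, i):
    """Data augmentation invariance score S_i at one layer.

    acts:     (N, D) activations of the N original images
    aug_acts: (N, D) activations of one augmented version of each image
    """
    numerator = sq_distance(acts[i], aug_acts[i])
    # Average distance of x_i to every image in the set (including itself),
    # as in the denominator of the score definition
    denominator = np.mean([sq_distance(acts[i], acts[j])
                           for j in range(len(acts))])
    return 1.0 - numerator / denominator
```

For identical activations the score is exactly 1; a score near 0 means the augmented example is, on average, as distant from the original as any other image in the set.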
\subsection{Learning objective}
\label{sec:invariance-daug_invariance}
Most ANNs trained for object categorisation are optimised through mini-batch stochastic gradient descent (SGD), that is, the weights are updated iteratively by computing the loss of a batch $\mathcal{B}$ of examples, instead of the whole data set at once. The models are typically trained for a number of \textit{epochs}, $E$, where an epoch is a whole pass through the entire training data set of size $N$. That is, the weights are updated $K=\frac{N}{|\mathcal{B}|}$ times each epoch.
Data augmentation introduces variability into the process by performing a different, stochastic transformation of the data every time an example is fed into the network. However, with standard data augmentation, the model has no information about the \textit{identity} of the images. In other words, different augmented examples, seen at different epochs, separated by $\frac{N}{|\mathcal{B}|}$ iterations on average, correspond to the same seed data point. This information is potentially valuable and useful to learn better representations. For example, in biological vision, the high temporal correlation of the stimuli that reach the visual cortex may play a crucial role in the creation of robust connections \citep{becker1999temporalstability, kording2004complexcells, wyss2006temporalstability}. However, this is generally not exploited in supervised settings. In semi-supervised learning, where the focus is on learning from fewer labelled examples, data augmentation has been used as a source of variability together with dropout or random pooling, among others \citep{laine2016ssl}.
In order to make use of this information and improve the robustness, we first propose \textit{in-batch} data augmentation by constructing the batches with $M$ transformations of each example---\citet{hoffer2019batchaugmentation} recently discussed a similar idea. Additionally, we propose a new objective function that accounts for the invariance of the feature maps across multiple image samples. Considering the difference between the activations at layer $l$ of two images, $d^{(l)}(x_{i}, x_{j})$, defined in Equation~\ref{eq:invariance-mse}, we define the \textit{data augmentation invariance} loss at layer $l$ for a given batch $\mathcal{B}$ as follows:
\begin{equation}
\label{eq:invariance-data_aug_inv}
\mathcal{L}_{inv}^{(l)} = \frac{\sum_{k}\frac{1}{|\mathcal{S}_{k}|^2}\sum_{x_i, x_j \in \mathcal{S}_{k}}d^{(l)}(x_{i}, x_{j})}{\frac{1}{|\mathcal{B}|^2}\sum_{x_i, x_j \in \mathcal{B}}d^{(l)}(x_{i}, x_{j})}
\end{equation}
where $\mathcal{S}_{k}$ is the set of samples in the batch $\mathcal{B}$ that are augmented versions of the same seed sample $x_k$. This loss term intuitively represents the average difference of the activations between the sample pairs that correspond to the same source image, relative to the average difference of all pairs. A convenient property of this definition is that $\mathcal{L}_{inv}$ does not depend on either the batch size or the number of in-batch augmentations $M=|\mathcal{S}_{k}|$. Furthermore, it can be efficiently implemented using matrix operations. Our data augmentation invariance can be seen as a contrastive loss \citep{hadsell2006contrastive}, since the aim is to bring closer the representations of related examples---transformations of the same source image---and push apart the representations from other examples.
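As an illustration of how $\mathcal{L}_{inv}^{(l)}$ can be computed with matrix operations, consider the following NumPy sketch. It is a hypothetical reconstruction, not the actual implementation; `seed_ids` identifies the sets $\mathcal{S}_{k}$, that is, which batch examples share the same seed image:

```python
import numpy as np

def pairwise_sq_dists(acts):
    """(B, B) matrix of mean squared differences between all pairs of rows."""
    sq = np.sum(acts ** 2, axis=1)
    d = sq[:, None] + sq[None, :] - 2.0 * acts @ acts.T
    return np.maximum(d, 0.0) / acts.shape[1]  # clip tiny negative round-off

def invariance_loss(acts, seed_ids):
    """Within-seed mean distance divided by the batch-wide mean distance.

    acts:     (B, D) activations of one batch at a given layer
    seed_ids: (B,) index of the seed image each example was augmented from
    """
    d = pairwise_sq_dists(acts)
    numerator = 0.0
    for k in np.unique(seed_ids):
        mask = seed_ids == k                        # members of S_k
        numerator += d[np.ix_(mask, mask)].sum() / mask.sum() ** 2
    denominator = d.sum() / d.shape[0] ** 2
    return numerator / denominator
```

When all augmented versions of each seed collapse to the same representation, the loss is 0; a value of 1 means within-seed pairs are no closer than arbitrary pairs in the batch.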
In order to simultaneously achieve certain representational invariance at $L$ layers of the network and high object recognition performance at the network output, we define the total loss as follows:
\begin{equation}
\mathcal{L} = (1 - \alpha)\mathcal{L}_{obj} + \sum_{l=1}^{L}\alpha^{(l)}\mathcal{L}_{inv}^{(l)}
\end{equation}
where $\alpha = \sum_{l=1}^{L}\alpha^{(l)}$ and $\mathcal{L}_{obj}$ is the loss associated with the object recognition objective, typically the cross-entropy between the object labels and the output of a softmax layer. For all the experiments presented in this chapter, we set $\alpha=0.1$ and distributed the coefficients across the layers according to an exponential law, such that $\alpha^{(l=L)}= 10\alpha^{(l=1)}$. This aims at simulating the behaviour observed along the ventral visual stream, where higher regions are more invariant than the early visual cortex\footnote{It is beyond the scope of this study to analyse the sensitivity of the hyperparameters $\alpha^{(l)}$, but we did not observe a significant impact on the classification performance when using other distributions.}.
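One way to realise this coefficient scheme is a geometric progression whose ratio is chosen so that $\alpha^{(L)} = 10\alpha^{(1)}$ and the coefficients sum to $\alpha$. The sketch below is our assumed reconstruction of such a scheme, not necessarily the exact distribution used in the experiments:

```python
import numpy as np

def invariance_coeffs(num_layers, alpha=0.1, ratio=10.0):
    """Coefficients alpha^(l) increasing geometrically with depth, such that
    they sum to `alpha` and alpha^(L) = ratio * alpha^(1). Requires L >= 2."""
    growth = ratio ** (1.0 / (num_layers - 1))
    raw = growth ** np.arange(num_layers)
    return alpha * raw / raw.sum()

def total_loss(obj_loss, inv_losses, alpha=0.1):
    """(1 - alpha) * L_obj + sum over layers of alpha^(l) * L_inv^(l)."""
    coeffs = invariance_coeffs(len(inv_losses), alpha)
    return (1.0 - alpha) * obj_loss + float(np.dot(coeffs, inv_losses))
```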
\subsection{Architectures and data sets}
\label{sec:invariance-arch-and-data}
As test beds for our hypotheses and proposal we trained three neural network architectures: the all convolutional network All-CNN-C \citep{springenberg2014allcnn}; the wide residual network WRN-28-10 \citep{zagoruyko2016wrn}; and DenseNet-BC \citep{huang2017densenet}. All three architectures have been widely used in the deep learning literature, including our experiments in Chapter~\ref{ch:daugreg}, where the details can be consulted. They provide generality to the results, as they have distinctive architectural characteristics: only convolutional layers, residual blocks and dense connectivity, respectively.
We trained the three architectures on CIFAR-10 \citep{krizhevsky2009cifar}, a widely used benchmark data set for object recognition. Additionally, in order to test our proposal on higher resolution images and a larger number of classes, we also trained All-CNN and WRN on the \textit{tiny} ImageNet data set, a subset of ImageNet \citep{russakovsky2015imagenet} with 100,000 $64\times64$ training images that belong to 200 classes. All models were trained using the \textit{heavier} data augmentation scheme defined in Section~\ref{sec:daugreg-methods_data}, which consists of affine transformations, contrast adjustment and brightness adjustment.
For the models trained on CIFAR-10, the training hyperparameters---learning rate, decay, number of epochs, etc.---were set as in the original papers, except that, following the conclusions from Chapter~\ref{ch:daugreg}, we did not use explicit regularisation---weight decay and dropout---since comparable performance is obtained without them if data augmentation is used. For all three architectures, we performed $M=|\mathcal{S}_{k}|=8$ augmentations per image in the batch, while keeping the real batch size as in the original papers.
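The in-batch augmentation itself can be sketched as follows. This is a hypothetical illustration, where `augment` stands in for the stochastic data augmentation scheme; since the real batch size is kept fixed, the number of seed images per batch would be the batch size divided by $M$:

```python
import numpy as np

def in_batch_augment(seed_images, augment, m=8):
    """Build a batch containing m augmented copies of each seed image.

    seed_images: array of shape (num_seeds, ...) of raw images
    augment:     stochastic transformation function (stand-in for the
                 actual data augmentation scheme)
    Returns the augmented batch and the seed index of every example,
    which identifies the sets S_k used by the invariance loss.
    """
    batch, seed_ids = [], []
    for k, image in enumerate(seed_images):
        for _ in range(m):
            batch.append(augment(image))
            seed_ids.append(k)
    return np.stack(batch), np.array(seed_ids)
```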
On tiny ImageNet, All-CNN included three additional layers and was trained for 150 epochs, with batch size 128 and initial learning rate 0.01 decayed by 0.1 at epochs 100 and 125. We included $M=8$ augmentations per image in the batches. WRN was trained for 50 epochs, with batch size 32 and initial learning rate 0.01 decayed by 0.2 at epochs 30 and 40. To train WRN with data augmentation invariance, we performed $M=4$ augmentations per image. In all cases, the models were trained with stochastic gradient descent with Nesterov momentum 0.9.
Note that the hyperparameters were fine-tuned in the original papers by training only with the standard categorical cross-entropy and with standard epoch-wise data augmentation. Therefore, they were likely suboptimal for our proposed data augmentation invariance. However, our aim was not to achieve the best possible classification performance, but rather to demonstrate the suitability of data augmentation invariance and to analyse the learnt representations.
The invariance loss defined in Equation~\ref{eq:invariance-data_aug_inv} was computed after the ReLU activation of each convolutional layer for All-CNN, at the output of each residual block for WRN, and after the first convolution and the output of each dense block for DenseNet.
\section{Results}
\label{sec:invariance-results}
The first contribution of this chapter is to empirically test to what extent convolutional neural networks trained for object categorisation produce invariant representations under the influence of identity-preserving transformations of the input images. Figures~\ref{fi:invariance-invariance_allcnn}--\ref{fi:invariance-invariance_densenet} show the invariance scores, as defined in Equation~\ref{eq:invariance-invariance}, across the network layers. Since we do not compute the invariance score at every single layer of the architectures, the numbering of the layers simply indicates increasing depth in the hierarchy (see Section~\ref{sec:invariance-arch-and-data} for the details). The red boxes correspond to the baseline models and the blue boxes to the models trained with our data augmentation invariance objective.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/invariance_allcnn.png}
\end{center}
\caption{All-CNN: distribution of the invariance score at each layer of the baseline model and the model trained with data augmentation invariance (higher is better).}
\label{fi:invariance-invariance_allcnn}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/invariance_wrn.png}
\end{center}
\caption{WRN: distribution of the invariance score at each layer of the baseline model and the model trained with data augmentation invariance (higher is better).}
\label{fi:invariance-invariance_wrn}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/invariance_densenet.png}
\end{center}
\caption{DenseNet: distribution of the invariance score at each layer of the baseline model and the model trained with data augmentation invariance (higher is better).}
\label{fi:invariance-invariance_densenet}
\end{figure}
The distributions of the invariance score shown in the figures were computed using the test partitions of the data. For each image, we performed five random transformations using as parameters the values at exactly half of the range used for training (see Section~\ref{sec:daugreg-methods_data}). Despite the presence of data augmentation during training, which implies that the networks \textit{see} transformed versions of the images and could potentially learn augmentation-invariant features, the invariance of the representations of the baseline models (red boxes) does not increase substantially beyond that observed in the pixel space (left-most boxes). To illustrate this, see the images in Figure~\ref{fig:invariance-sample_images}, whose representations are all equally distant to the reference image, despite the perceptual similarity of the transformed images.
\begin{figure}[htb]
\begin{center}
\includegraphics[width = 0.8 \linewidth]{\imgpath/sample_images.png}
\end{center}
\caption{The top layer representations of the six images on the right learnt by All-CNN are equally (dis)similar to the reference image (left), even though the images at the bottom row are transformations of it.}
\label{fig:invariance-sample_images}
\end{figure}
As a solution, we have proposed a simple, label-free modification of the loss function to encourage the learning of data augmentation-invariant features. The blue boxes in Figures~\ref{fi:invariance-invariance_allcnn}--\ref{fi:invariance-invariance_densenet} show that our invariance mechanism pushed network representations to become increasingly more robust with network depth\footnote{Both All-CNN and WRN seem to achieve the representational invariance more easily on CIFAR-10 than on Tiny ImageNet. This may indicate that the complexity of the data set not only makes object categorisation more challenging, but also hinders the learning of invariant features.}. As discussed above, this is a well-studied property of the visual ventral stream in the primate brain.
In order to better understand the effect of the data augmentation invariance objective, we analysed the learning dynamics of the invariance loss at each layer of All-CNN trained on CIFAR-10. In Figure~\ref{fig:dynamics}, we see that in the baseline model, the invariance loss keeps increasing over the course of training. In contrast, in the models trained with data augmentation invariance, the loss drops for all but the last layer. Perhaps unexpectedly, the invariance loss increases during the first epochs and only then starts to decrease. While further investigation is required, these two phases may correspond to the fitting and compression-diffusion phases proposed in the framework of the information bottleneck principle by \citet{shwartz2017bottleneck}. According to the authors, during the first epochs of optimisation with SGD, the model increases the information about the labels (fitting) and during the rest of training, the model reduces the information on the input (compression). However, note that \citet{saxe2019informationbottleneck} have argued that this occurs only in some cases that depend on the non-linearities.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/loss_dynamics.png}
\end{center}
\caption{Dynamics of the data augmentation invariance loss $\mathcal{L}_{inv}^{(l)}$ during training (lower is better). The axis of abscissas (epochs) is scaled quadratically to better appreciate the dynamics at the first epochs. The same random initialisation was used for both models.}
\label{fig:dynamics}
\end{figure}
In terms of efficiency, adding terms to the objective function implies a computational overhead. However, since the pairwise distances can be efficiently computed at each batch through matrix operations, the training time was only increased by about 10\% on average.
Finally, as reported in Table~\ref{tab:accuracy}, the improved invariance comes at little or no cost in the categorisation performance, as the networks trained with data augmentation invariance achieved similar classification performance to the baseline model and in some cases it clearly improved it. This is remarkable as the hyperparameters used in all cases were optimised to maximise the performance in the original models, trained without data augmentation invariance. Therefore, it is reasonable to expect an improvement in the classification performance if, for instance, the batch size or the learning rate schedule are better tuned for this new learning objective. Learning increasingly invariant features could lead to more robust categorisation, as exemplified by the increased test performance observed for the All-CNN models---despite no hyperparameter tuning.
\begin{table}[tb]
\begin{center}
\begin{tabular}{rccccc}
& \multicolumn{3}{c}{CIFAR-10} & \multicolumn{2}{c}{Tiny ImageNet (acc. | top5)} \\
\cline{2-6}
& All-CNN & WRN & DenseNet & All-CNN & WRN \\
Baseline & 91.48 & 94.58 & 94.88 & 51.09 | 73.48 & 61.49 | 82.99 \\
Data aug. invariance & 92.47 & 94.86 & 93.98 & 52.57 | 76.53 & 61.23 | 83.23
\end{tabular}
\end{center}
\caption{Classification accuracy on the test set of the baseline models and the models trained with data augmentation invariance.}
\label{tab:accuracy}
\end{table}
\section[Learning representational invariance \textit{instead} of categorisation]{Learning representational invariance \textit{instead}\\of categorisation}
\label{sec:invariance-instead_categorisation}
In view of the effectiveness of the data augmentation invariance learning objective, it is reasonable to wonder whether such an objective could fully replace the standard categorisation objective commonly used in so-called\footnote{See the discussion about supervised learning in the Introduction (Section~\ref{sec:intro-rethinking_supervised})} supervised learning. There are multiple reasons why exploring alternatives to classification objectives is attractive. In the Introduction of this thesis we discussed the benefits of incorporating inductive biases from visual perception and biological vision in the form of objective functions, and the results presented above in this chapter have demonstrated the usefulness of data augmentation for promoting invariant representations.
A remarkable example of the mismatch between ANNs and primate visual perception is the well-known vulnerability of the former to adversarial perturbations \citep{szegedy2013adversarial, dujmovic2020adversarial}. Recent work by \citet{ilyas2019advfeatures} has suggested that adversarial vulnerability might be caused by highly discriminative features present in the data yet incomprehensible to humans. Along the same lines, it has recently been shown \citep{wang2019highfreq} that ANNs learn high-frequency components of images, imperceptible to humans but useful for categorisation. A related idea was suggested earlier by \citet{jo2017surfaceregularities}. Notably, this is only one example of the differences between current artificial and biological visual object perception \citep{sinz2019dlvsbrain, geirhos2020shortcutlearning, malhotra2020bioconstraints}. We hypothesise that some of these undesired properties might be caused by the optimisation of the specific task of classification.
In this section we present the results of an exploratory, preliminary study in which we aim to replace the standard categorical cross-entropy objective with a combination of data augmentation invariance and a similarly defined \textit{class-wise invariance}, which uses the labels of the images.
\subsection{Class-wise invariance}
\label{sec:invariance-classwise}
Class-wise invariant representation learning \citep{belharbi2017classinvariance} was introduced as a regularisation term that encourages similarity in the representations of objects from the same class. The authors showed that class-wise invariance helps improve generalisation, especially when few examples are available. Related ideas have been proposed in the metric learning and clustering literature.
Class-wise invariance is interesting because, instead of simply optimising the prediction of the object labels, it sets the learning objective on what the intermediate features should be like. However, used on its own it would possibly be subject to some of the same undesirable properties as purely supervised methods. We hypothesise that, combined, data augmentation and class-wise invariance alone---without a categorisation objective---may learn robust, discriminative features.
We define the class-wise invariance loss at layer $l$ of a neural network $\mathcal{L}_{C}^{(l)}$ as a parallel to the data augmentation invariance loss (Equation~\ref{eq:invariance-data_aug_inv}):
\begin{equation}
\label{eq:invariance-class_wise_inv}
\mathcal{L}_{C}^{(l)} = \frac{\sum_{r}\frac{1}{|\mathcal{T}_{r}|^2}\sum_{x_i, x_j \in \mathcal{T}_{r}}d^{(l)}(x_{i}, x_{j})}{\frac{1}{|\mathcal{B}|^2}\sum_{x_i, x_j \in \mathcal{B}}d^{(l)}(x_{i}, x_{j})}
\end{equation}
where $\mathcal{T}_{r}$ are the subsets from $\mathcal{B}$ formed by images of the same object class $r$. Let us denote by $\mathcal{L}_{DA}^{(l)}$ the data augmentation invariance loss (Equation~\ref{eq:invariance-data_aug_inv}). We propose to optimise, through stochastic gradient descent, the following overall objective:
\begin{equation}
\label{eq:invariance-daug_class_objective}
\mathcal{L} = \sum_{l=1}^{L}\alpha^{(l)}\mathcal{L}_{DA}^{(l)} + \sum_{l=1}^{L}\beta^{(l)}\mathcal{L}_{C}^{(l)}
\end{equation}
where $\alpha^{(l)}$ and $\beta^{(l)}$ are scalars that control the degree of similarity between the features of augmented samples and of objects of the same category, respectively, at each layer $l$ of the architecture. In summary, by jointly optimising $\mathcal{L}_{DA}^{(l)}$ and $\mathcal{L}_{C}^{(l)}$, we expect the model to learn robust features---as encouraged by the data augmentation invariance---while still allowing for categorisation---driven by the class-wise invariance.
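A single-layer sketch of this combined objective may help fix ideas. Here both terms use the normalised within-subset distance of Equation~\ref{eq:invariance-class_wise_inv} (the aggregation details of $\mathcal{L}_{DA}^{(l)}$ are assumed analogous, and the weights $\alpha$, $\beta$ are arbitrary illustrative values):

```python
import numpy as np

def mean_pairwise_d2(z):
    """(1/|S|^2) * sum of squared distances over all ordered pairs of rows in z."""
    sq = (z ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * z @ z.T, 0.0)
    return d2.mean()

def invariance_loss(z, groups):
    """Within-group mean squared distance, summed over groups and normalised by
    the mean squared distance over the whole batch. `groups` can hold seed-image
    indices (data augmentation invariance) or class labels (class-wise invariance)."""
    batch = mean_pairwise_d2(z)
    return sum(mean_pairwise_d2(z[groups == g]) for g in np.unique(groups)) / batch

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 16))
z = np.repeat(base, 2, axis=0)                # two identical "augmented" copies per seed
seed_ids = np.repeat(np.arange(4), 2)         # which seed image each row comes from
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # two object classes

alpha, beta = 0.1, 0.1                        # illustrative per-layer weights
total = alpha * invariance_loss(z, seed_ids) + beta * invariance_loss(z, labels)
```

With identical augmented copies the data augmentation term vanishes, while the class-wise term remains positive as long as images within a class differ, which matches the intended behaviour of the two terms.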
\subsection{Results}
\label{sec:invariance-classwise_results}
In order to test this idea, we trained All-CNN on CIFAR-10 with the objective defined in Equation~\ref{eq:invariance-daug_class_objective}. We found that optimising this objective function as is, with the same hyperparameters as the models trained with standard categorical cross-entropy, did \textit{not} enable multi-class classification. One hypothesis is that the learnt representations collapse along a single dimension, since the variance explained by the first principal component gets close to 100~\% and, therefore, the data points separate into only two clusters. We have not yet found the exact cause of this undesired behaviour, but it may be due to an inability to escape local minima.
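The collapse hypothesis can be checked directly by measuring the fraction of variance captured by the first principal component of the representations; a minimal sketch (the data here are synthetic stand-ins for the learnt representations):

```python
import numpy as np

def first_pc_explained_variance(z):
    """Fraction of total variance along the first principal component of z."""
    zc = z - z.mean(axis=0)                    # centre the representations
    # Singular values relate to PCA eigenvalues: var_i = s_i^2 / (n - 1),
    # so the (n - 1) factor cancels in the ratio.
    s = np.linalg.svd(zc, compute_uv=False)
    return (s[0] ** 2) / (s ** 2).sum()

rng = np.random.default_rng(0)
healthy = rng.standard_normal((256, 64))                     # roughly isotropic features
collapsed = np.outer(rng.standard_normal(256), np.ones(64))  # rank-1: collapsed features
collapsed += 1e-3 * rng.standard_normal((256, 64))           # tiny noise for realism
```

A value near 100~\% for the first component, as hypothesised above, would indicate that the representations have effectively collapsed onto a line.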
Despite the fact that the 10-class problem could not be optimised, in order to gain insights into the effect of the method we analysed the representations learnt by optimising invariance instead of categorisation. In Figure~\ref{fig:invariance-mse_matrices}, we plot the representational dissimilarity matrices of the invariance model, alongside the baseline model trained with categorical cross-entropy and an additional model that jointly optimised both objectives. The latter model did achieve test performance on CIFAR-10 comparable to the baseline model.
Interestingly, we found that the models optimised with invariance objectives learnt representations that naturally form meaningful clusters separating animal and vehicle classes, that is, animate and inanimate objects. This categorisation has been repeatedly reported to be distinctively represented in the inferior temporal cortex of the primate brain \citep{kriegeskorte2008manandmonkey, bao2020itmaps}. This connection is still speculative and in future work we will further explore the representations learnt through these invariance objectives.
\begin{figure}[ht]
\begin{center}
\includegraphics[width = \linewidth]{\imgpath/mse_matrices.png}
\end{center}
\caption{Representational dissimilarity matrices of All-CNN trained on CIFAR-10 with (left) standard categorical cross-entropy, (middle) the invariance loss defined in Equation~\ref{eq:invariance-daug_class_objective} and (right) the invariance loss plus a categorical cross-entropy term. In the models trained with invariance losses, meaningful hierarchical clusters (animals vs. vehicles) emerge.}
\label{fig:invariance-mse_matrices}
\end{figure}
In an additional effort to understand the learnt representations without the limitation of the low accuracy on 10 classes, we created a subset of CIFAR-10 containing only the images of cars and dogs, that is, a binary classification problem. This data set could be classified with high accuracy. In Figure~\ref{fig:invariance-2d_representations}, we see that in the model trained with the invariance objective most examples of the two classes are separated by a larger margin than in the baseline model, where the clusters of the two classes lie closer to each other.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.4 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/2drepr_noinv.png}
\caption{Baseline model}
\end{subfigure}
\begin{subfigure}{0.4 \linewidth}
\includegraphics[width = \linewidth]{\imgpath/2drepr_inv.png}
\caption{Invariance model}
\end{subfigure}
\caption{Representations of the test \textit{CIFAR-2} (cars and dogs) examples along the two principal components of the top layer representations of All-CNN.}
\label{fig:invariance-2d_representations}
\end{figure}
Finally, we evaluated the adversarial robustness of the models. We had hypothesised that one of the reasons for adversarial vulnerability is that models optimised for categorisation only are highly unconstrained, with no specification of what the features should be like. Therefore, the model focuses on learning any features that allow for linearly separable classes at the output of the network \citep{malhotra2020bioconstraints}. This has been shown to be prone to adversarial vulnerability, that is, small perturbations largely affect the classification. In contrast, our invariance objective can be seen as the opposite strategy: it specifies what the features should be like---increasingly invariant to identity-preserving transformations and relatively invariant within object classes---and expects good classification as a by-product. This could reduce the sensitivity to adversarial perturbations, and this is what our results in Table~\ref{tab:invariance-adversarial_robustness} suggest: the model trained with the invariance objectives is remarkably more robust than the baseline model, without sacrificing performance.
\begin{table}[htb]
\begin{center}
\begin{tabular}{rcc}
& Baseline (only cat.) & Only invariance \\
\hline
Clean examples & 98.9~\% & 98.7~\%\\
Attack: PGD $\varepsilon=0.03$ & 7.6~\% & 62.0~\%\\
Attack: FGSM $\varepsilon=0.03$ & 31.5~\% & 88.5~\%\\
\end{tabular}
\end{center}
\caption{Adversarial robustness of the baseline (only categorisation) and invariance models trained on \textit{CIFAR-2}. Without any adversarial training, the model trained with the invariance objective only is highly robust to adversarial perturbations, in contrast to the standard, categorisation model.}
\label{tab:invariance-adversarial_robustness}
\end{table}
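For reference, FGSM perturbs each input by $\varepsilon$ times the sign of the gradient of the loss with respect to the input. A minimal sketch on a toy logistic model (the weights and input are illustrative assumptions, not the All-CNN setup used in the experiments):

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Fast Gradient Sign Method: perturb x by eps along the gradient sign."""
    return x + eps * np.sign(grad_x)

def input_gradient(w, x, y):
    """Gradient of the binary cross-entropy w.r.t. the input of a logistic
    model p = sigmoid(w . x); for label y, d(loss)/dx = (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

w = np.array([1.0, -2.0, 0.5])     # toy model weights (illustrative)
x = np.array([0.3, -0.1, 0.2])     # an input correctly classified as y = 1
x_adv = fgsm(x, input_gradient(w, x, y=1.0), eps=0.03)
```

Every coordinate moves by at most $\varepsilon$, yet the margin for the true class shrinks, which is precisely the sensitivity that the accuracies under attack in the table quantify.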
\section{Discussion}
\label{sec:invariance-discussion}
In this chapter, we have first proposed an invariance score that assesses the robustness of the features learnt by a neural network to the identity-preserving transformations typical of common data augmentation schemes (see Equation~\ref{eq:invariance-invariance}). Intuitively, the more similar the representations of transformations of the same image, relative to those of other images in a data set, the higher the data augmentation invariance score. The score is meaningful in that the transformations performed by perceptually plausible data augmentation schemes---the only kind we consider here---are motivated by human visual perception and largely coincide with the transformations to which the higher visual cortex is invariant.
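The intuition behind the score can be sketched as follows; this illustrative version (the exact normalisation in the equation referenced above may differ) compares within-pair distances to distances across a reference set:

```python
import numpy as np

def augmentation_invariance_score(z_pairs, z_ref):
    """Illustrative invariance score: 1 minus the mean squared distance between
    representations of an image and its transformed version, normalised by the
    mean squared distance between different images in a reference set.

    z_pairs : (N, 2, D) pairs (original, transformed) of representations
    z_ref   : (M, D)    representations of reference images
    """
    within = np.mean(((z_pairs[:, 0] - z_pairs[:, 1]) ** 2).sum(axis=1))
    sq = (z_ref ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * z_ref @ z_ref.T, 0.0)
    across = d2[np.triu_indices(len(z_ref), k=1)].mean()  # distinct pairs only
    return 1.0 - within / across

rng = np.random.default_rng(0)
ref = rng.standard_normal((32, 16))
perfect = np.stack([ref, ref], axis=1)                       # transformed copies identical
noisy = np.stack([ref, ref + rng.standard_normal((32, 16))], axis=1)
```

Perfectly invariant features score 1, and the score decreases as representations of transformed versions drift apart relative to the typical distance between different images.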
Using this score, we have analysed the representations learnt by three popular neural network architectures---All-CNN, WRN and DenseNet---trained on image object recognition tasks---CIFAR-10 and Tiny ImageNet. The analysis revealed that their features are less invariant than commonly assumed, despite sufficient exposure to matching image transformations during training. In most cases, the representational invariance did not even increase with respect to the original pixel space. This property is fundamentally different to the primate ventral visual stream, where neural populations have been found to be increasingly robust to changes in view or lighting conditions of the same object \citep{dicarlo2007untangling}.
Taking inspiration from this property of the visual cortex, we have proposed a label-free objective to encourage learning more robust features, using data augmentation as the framework to perform identity-preserving transformations on the input data. We constructed mini-batches with $M$ augmented versions of each image and modified the loss function to maximise the similarity between the activations of the same seed images, as compared to other images in the batch. Aiming to approximate the observations in the biological visual system, higher layers were set to achieve exponentially more invariance than the early layers. An interesting avenue for future work will be to investigate whether this increased robustness also allows for better modelling of neural data.
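The batch construction described above---$M$ augmented versions of each seed image---can be sketched as follows; the \texttt{augment} function here is a stand-in for whatever identity-preserving transformations the augmentation scheme applies:

```python
import numpy as np

def augment(img, rng):
    """Stand-in augmentation: random horizontal flip plus a small brightness shift."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                              # horizontal flip
    return np.clip(img + rng.uniform(-0.05, 0.05), 0.0, 1.0)

def build_invariance_batch(images, m, rng):
    """Return (batch, seed_ids): m augmented copies of each seed image,
    with seed_ids recording which seed each example came from."""
    batch, seed_ids = [], []
    for k, img in enumerate(images):
        for _ in range(m):
            batch.append(augment(img, rng))
            seed_ids.append(k)
    return np.stack(batch), np.array(seed_ids)

rng = np.random.default_rng(0)
seeds = rng.random((8, 32, 32))                         # 8 greyscale seed images in [0, 1]
batch, seed_ids = build_invariance_batch(seeds, m=4, rng=rng)
```

The seed indices are what the invariance loss needs in order to pull together the activations of examples originating from the same image.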
Data augmentation invariance effectively produced more robust representations, unlike standard models optimised only for object categorisation, at little or no cost in classification performance and with an affordable, slight increase (10~\%) in training time. Ideally, object recognition models should be reasonably invariant to all the transformations of the objects to which human perception is also invariant. Data augmentation is just an approximation to analyse and encourage invariance to a set of transformations that can be applied on still, 2D images. Future work should analyse the invariance of models trained with video \citep{taylor2010spatiotemporal} and even 3D data \citep{achlioptas20183d}.
These results provide additional empirical evidence that deep supervised models optimised only according to the standard categorisation objective---the categorical cross-entropy between the true object labels and the model predictions---learn discriminative but non-robust features. This is likely due to their large capacity to learn discriminative features in too unconstrained a setting \citep{geirhos2020shortcutlearning, malhotra2020bioconstraints}, which has been recently suggested to be at the source of adversarial vulnerability \citep{ilyas2019advfeatures}.
In order to explore whether getting rid of the categorisation objective can produce features more aligned with our intuitions from visual perception, less vulnerable to adversarial perturbations and potentially more similar to the brain representations, we proposed to replace the categorical cross-entropy with purely invariance objectives. In particular, we combined layer-wise data augmentation invariance with class-wise invariance, which encourages similarity of the features from images of the same class. Although our preliminary experiments did not succeed at achieving competitive categorisation performance, we made interesting observations in our analysis.
First, training with invariance objectives yields representations that are clustered hierarchically, while the dissimilarity matrix of the standard model is fairly homogeneous. Remarkably, the main clusters formed through invariance learning correspond to the animate and inanimate classes, a separation consistently observed in the primate visual cortex \cite{kriegeskorte2008manandmonkey, bao2020itmaps}. Furthermore, we trained a model on a binary classification task using invariance objectives only and found that the adversarial vulnerability is very low, in contrast to categorisation models, which exhibit very high sensitivity to adversarial perturbations. While these conclusions are still speculative, they set a promising path for future research on alternative objective functions that encode inductive biases from visual perception and biological vision.
\chapterbibliography
}
\chapter{Explicit and implicit regularisation}
\label{ch:reg}
\renewcommand{\chapterpath}{includes/regularisation}
\begin{outreach}
\item \textit{Data augmentation instead of explicit regularization.} Alex Hern{\'a}ndez-Garc{\'i}a, Peter K{\"o}nig. arXiv preprint arXiv:1806.03852, 2018.
\end{outreach}
One of the central issues in machine learning research and application is finding ways of improving generalisation. Regularisation, broadly defined as any modification applied to a learning algorithm that helps the model generalise better, plays therefore a key role in machine learning\footnote{In Chapter~\ref{ch:background} we review the fundamentals of machine learning and in particular, Section~\ref{sec:background-regularisation} reviews the essential aspects of regularisation to understand this and the upcoming chapters.}. In the case of deep learning, where neural networks tend to have several orders of magnitude more parameters than training examples, statistical learning theory (Section~\ref{sec:background-generalisation}) indicates that regularisation becomes even more crucial. Accordingly, a myriad of techniques have been proposed as regularisers (Section~\ref{sec:background-regularisation}): weight decay \citep{hanson1989wd} and other $L^p$ penalties on the learnable parameters; dropout---random dropping of units during training---\citep{srivastava2014dropout} and stochastic depth---random dropping of whole layers---\citep{huang2016stochasticdepth}, to name a few.
Moreover, whereas in simpler machine learning algorithms the regularisers can be easily identified as explicit terms in the objective function, in modern deep neural networks the sources of regularisation are not only explicit but implicit \citep{neyshabur2014implicitreg}. In this regard, many techniques have been studied for their regularisation effect, despite not being explicitly intended as such. Examples are convolutional layers \citep{lecun1990conv}, batch normalisation \citep{ioffe2015batchnorm} and data augmentation. In sum, there are multiple elements in deep learning that contribute to reducing overfitting and thus improve generalisation.
It is common practice in both the scientific literature and application to incorporate several of these regularisation techniques in the training procedure of neural networks. For instance, weight decay, dropout and data augmentation have been used jointly in multiple well-known architectures \citep{tan2019efficientnet, huang2017densenet, zagoruyko2016wrn, springenberg2014allcnn}. It is therefore implicitly assumed that each technique is necessary and contributes additively to improving generalisation. However, the interplay between regularisation techniques is yet to be well understood and might be an important piece for the puzzle of why and when deep networks generalise.
In Chapter~\ref{ch:daugreg}, we will focus on contrasting some specific forms of regularisation, namely weight decay, dropout and data augmentation. This chapter serves as a preamble to the following one. Here, we will provide definitions of two terms that have been widely but ambiguously used in the machine learning literature: explicit and implicit regularisation. We contend that these terms are useful to understand and explain the role of regularisation in artificial neural networks, as reflected by their use in the literature. Therefore, it is important to settle the precise meaning of the terms and provide examples. Besides being helpful to interpret the results in Chapter~\ref{ch:daugreg}, we also hope that this is a useful contribution to the machine learning community.
\section{Why do we need definitions?}
\label{sec:reg-why}
While several regularisation taxonomies have been proposed (see Section~\ref{sec:reg-taxonomies}), to the best of our knowledge there are no formal definitions of explicit and implicit regularisation in the machine learning literature. Nonetheless, the terms have been widely used \citep{neyshabur2014implicitreg, zhang2016understandingdl, wilson2017neurips, mesnil2011transferlearning, poggio2017theory3, martin2018selfregularisation, achille2018emergence}. This could suggest that the concepts are ingrained in the field and well understood by the community. However, analysing the use of the terms explicit and implicit regularisation in the literature and in discussions with practitioners reveals a high degree of ambiguity. In this section we review some examples and motivate the need for formal definitions.
The PhD thesis by \citet{neyshabur2017thesis} is devoted to the study of implicit regularisation in deep learning. For instance, Neyshabur shows that common optimisation methods for deep learning, such as stochastic gradient descent (SGD), introduce an inductive bias that leads to better generalisation. That is, SGD \textit{implicitly} regularises the learning process. However, the notion and definition of implicit regularisation is only implied in Neyshabur's PhD thesis and related works.
Some may argue that the definitions are not necessary. By looking at one single piece of work, even without a formal definition, the meaning may be inferred from the use. However, when one considers a larger body of work by multiple authors, differences and even contradictions emerge. In the work by Neyshabur and colleagues \citep{neyshabur2017thesis, neyshabur2014implicitreg}, it can be interpreted that implicit regularisation refers to the generalisation improvement provided by techniques such as stochastic gradient descent (SGD) that are not \textit{typically} considered as regularisation. By extension, explicit regularisation would refer to those other techniques: ``we are not including any explicit regularisation, neither as an explicit penalty term nor by modifying optimisation through, e.g., drop-outs, weight decay, or with one-pass stochastic methods'' \citep{neyshabur2017thesis}. In \citet{poggio2017theory3}, it can be interpreted that implicit regularisation refers to techniques that lead to minimisation of the parameter norm without explicitly optimising for it. By extension, explicit regularisation would refer to classical penalties on the parameter norm, such as weight decay. It is therefore unclear whether other methods such as dropout should be considered explicit or implicit regularisation according to \citet{poggio2017theory3}.
\citet{zhang2016understandingdl} raised the thought-provoking idea that ``explicit regularisation may improve generalisation performance, but is neither necessary nor by itself sufficient for controlling generalisation error.'' The authors came to this conclusion from the observation that turning off the ``explicit regularisers'' of a model does not prevent the model from generalising reasonably well. In their experiments, the explicit regularisation techniques they turned off were, specifically, weight decay, dropout and data augmentation. In this case, it seems that \citet{zhang2016understandingdl} made a distinction based on the mere intention of the practitioner. Under that logic, because data augmentation has to be designed and applied \textit{explicitly}, it would be explicit regularisation.
These examples illustrate that the terms explicit and implicit regularisation have been used subjectively and inconsistently in the literature. In order to help avoid ambiguity, settle the concepts and facilitate the discussion, in the next section we propose definitions and provide examples to illustrate each category. Further, we will argue that data augmentation is not explicit regularisation and introduce some key differences with respect to explicit regularisation, which will set the grounds for Chapter~\ref{ch:daugreg}.
\section{Definitions and examples}
\label{sec:reg-definitions}
We propose the following definitions of explicit and implicit regularisation:
\begin{itemize}
\item \textbf{Explicit regularisation techniques} are those techniques which reduce the \textit{representational} capacity of the model class they are applied on. That is, given a model class $\mathcal{H}_0$, for instance a neural network architecture, the introduction of explicit regularisation will span a new hypothesis set $\mathcal{H}_1$, which is a \textit{proper subset} of the original set, that is $\mathcal{H}_1 \subsetneq \mathcal{H}_0$.
\item \textbf{Implicit regularisation} is the reduction of the generalisation error or overfitting provided by means other than explicit regularisation techniques. Elements that provide implicit regularisation do not reduce the \textit{representational} capacity, but may affect the \textit{effective} capacity of the model: the \textit{achievable} set of hypotheses given the model, the optimisation algorithm, hyperparameters, etc.
\end{itemize}
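To make the distinction concrete, an explicit regulariser appears as a term in the objective that shrinks the hypothesis set, as in ridge regression; a minimal sketch (with synthetic data and an illustrative penalty strength):

```python
import numpy as np

def fit_linear(X, y, weight_decay=0.0):
    """Least squares with an optional L2 penalty (ridge regression).

    weight_decay > 0 is *explicit* regularisation: it changes the objective,
    shrinking the norm of the solutions the model can effectively express."""
    d = X.shape[1]
    # Closed-form solution of min ||Xw - y||^2 + weight_decay * ||w||^2
    return np.linalg.solve(X.T @ X + weight_decay * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
w_plain = fit_linear(X, y)                       # unregularised solution
w_ridge = fit_linear(X, y, weight_decay=10.0)    # explicitly regularised solution
```

By contrast, an implicit effect---SGD noise, batch normalisation, early stopping---would leave this objective untouched and arise only from how the optimisation is carried out.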
Note that we define explicit and implicit regularisation by using the concepts of \textit{representational} and \textit{effective} capacity. Although these terms are also used ambiguously by some practitioners, definitions of these concepts can be found in the literature. For instance, the textbook Deep Learning \citep{goodfellow2016dlbook} clearly defines the representational capacity as ``the family of functions the learning algorithm can choose from'' and explains that the effective capacity ``may be less than the representational capacity'' because the learning algorithm does not always find the ``best function'' due to ``limitations, such as the imperfection of the optimisation algorithm''. Thinking of these \textit{limitations} as implicit regularisation denotes that this can be beneficial. In any case, we here adopt these definitions of representational and effective capacity.
One of the most common explicit regularisation techniques in machine learning is $L^p$-norm regularisation \citep{tikhonov1963regularisation}, of which weight decay is a particular case, widely used in deep learning. Weight decay sets a penalty on the $L^2$ norm of the model's learnable parameters, thus constraining the representational capacity of the model. Dropout \citep{srivastava2014dropout} is another common example of explicit regularisation, where the hypothesis set is reduced by stochastically deactivating a number of neurons during training. Similar to dropout, stochastic depth \citep{huang2016stochasticdepth}, which drops whole layers instead of neurons, is also an explicit regularisation technique.
Regarding implicit regularisation, note first that the above definition does not refer to \textit{techniques}---as in the definition of explicit regularisation---but to a regularisation \textit{effect}, since it can be provided by multiple elements of a different nature. For instance, stochastic gradient descent (SGD) is known to have an implicit regularisation effect---a reduction of the generalisation error---without constraining the representational capacity \citep{zhang2017sgd}. Batch normalisation does not reduce the capacity either, but it improves generalisation by smoothing the optimisation landscape \citep{santurkar2018bn}. Of quite a different nature, but still implicit, is the regularisation effect provided by early stopping \citep{yao2007earlystopping}, which reduces not the representational but the effective capacity.
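Early stopping illustrates the point well: it leaves the objective and the hypothesis set untouched and simply halts optimisation when the validation error stops improving. A schematic training loop (the model update and validation curve below are placeholders):

```python
def train_with_early_stopping(step_fn, val_error_fn, max_epochs=100, patience=5):
    """Run step_fn once per epoch; stop when val_error_fn has not improved
    for `patience` consecutive epochs. The hypothesis set is unchanged ---
    only the *effective* capacity is limited, because solutions that would
    be reached by training longer are never visited."""
    best, bad_epochs, epochs_run = float("inf"), 0, 0
    for _ in range(max_epochs):
        step_fn()                      # one epoch of optimisation (placeholder)
        epochs_run += 1
        err = val_error_fn()
        if err < best:
            best, bad_epochs = err, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best, epochs_run

# Simulated validation curve: improves for four epochs, then degrades (overfitting)
errors = iter([1.0, 0.8, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59])
best, epochs_run = train_with_early_stopping(lambda: None, lambda: next(errors), patience=3)
```

Nothing in the loop forbids any particular hypothesis; the regularisation arises purely from halting the trajectory early, which is exactly the effective-capacity argument above.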
In these examples and all other cases of implicit regularisation, we can think of the effect on the capacity in the following way: we start by defining our model class, for instance a neural network, which spans a set of functions $\mathcal{H}_0$ (see Section~\ref{sec:background-elements_ml}). If we decide to train with explicit regularisation, for instance weight decay or dropout, then the model will have access to a smaller set of functions $\mathcal{H}_1 \subsetneq \mathcal{H}_0$, that is the representational capacity. On the contrary, if we decide to train with SGD, batch normalisation or early stopping, the set of functions spanned by the model stays identical. Due to the dynamics and limitations imposed by these techniques, some functions may never be found, but theoretically they could be. In other words, the effective capacity may be smaller but not the representational capacity.
Central to this thesis is data augmentation, a technique that provides an implicit regularisation effect. As we have discussed, \citet{zhang2016understandingdl} considered data augmentation an explicit regularisation technique and analysed it as belonging to the same category as weight decay and dropout. However, data augmentation does not reduce the representational capacity of the models and hence, according to our definitions, cannot be considered explicit regularisation. This is relevant for understanding the differences between weight decay, dropout and data augmentation that we will present in Chapter~\ref{ch:daugreg}, especially in the context of artificial neural networks.
\section{On the taxonomy of regularisation}
\label{sec:reg-taxonomies}
As in most disciplines, many taxonomies of regularisation techniques for machine learning have been proposed. Since regularisation is a key ingredient of machine learning theory and practice, machine learning textbooks include reviews of regularisation methods. In the case of deep learning, besides the classical regularisation methods used in \textit{traditional} machine learning, multiple new regularisation techniques have been proposed in recent years, and many techniques have been analysed for their implicit regularisation effect. In this section, we briefly review some taxonomies of regularisation proposed in the literature and discuss their similarity with our definitions.
In their textbook, \citet{goodfellow2016dlbook} review some of the most common regularisation techniques used to train deep neural networks, but do not discuss the concepts of explicit and implicit regularisation. More recently, \citet{kukavcka2017regularization} provided an extensive review of regularisation methods for deep learning. Although they mention the implicit regularisation effect of techniques such as SGD, no further discussion of the concepts is provided. Nonetheless, they define the category \textit{regularisation via optimisation}, which is somewhat related to implicit regularisation. However, regularisation via optimisation is more specific than our definition; hence, methods such as data augmentation would not fall into that category.
Recently, \citet{guo2018mixup} provided a distinction between \textit{data-independent} and \textit{data-dependent} regularisation. They define data-independent regularisation as those techniques that impose certain constraints on the hypothesis set, thus constraining the optimisation problem. Examples are weight decay and dropout. We believe this is closely related to our definition of explicit regularisation. On the other hand, they define data-dependent regularisation as those techniques that make assumptions about the hypothesis set with respect to the training data, as is the case of data augmentation. While we acknowledge the usefulness of such a taxonomy, we argue that the division between data-independent and data-dependent regularisation leaves some ambiguity about other techniques, such as batch normalisation, which imposes neither an explicit constraint on the representational capacity nor one on the training data.
By contrast, our distinction between explicit and implicit regularisation aims to be complete, since implicit regularisation refers to any regularisation effect that does not come from explicit---or data-independent---techniques.
\section{Discussion}
\label{sec:reg-discussion}
The main contribution of this chapter has been the proposal of definitions of explicit and implicit regularisation. These terms have been widely used in the machine learning literature without being formally defined, giving rise to subjective and ambiguous use. By defining these important concepts we have set the grounds for the discussion in the rest of this thesis, especially in Chapter~\ref{ch:daugreg}, and we also hope to help settle the concepts and reduce the ambiguity in the literature.
Besides this contribution, it is interesting to draw some connections between the concept of implicit regularisation, the discussion about inductive biases in the Introduction (Chapter~\ref{ch:intro}) and data augmentation. According to our definition above, implicit regularisation is the improvement in generalisation provided by elements that are not explicit regularisation techniques. This is a broad definition that includes many possible sources of implicit regularisation. A concept that underlies many of them is that of inductive bias. The inductive bias encoded in explicit regularisation techniques is simply that smaller models generalise better, which is reminiscent of Occam's razor. While this is a powerful inductive bias, we have discussed that many other sources of inductive bias are possible and are worth exploring.
In this regard, a clear distinction between explicit and implicit regularisation may help in the analysis, especially in the case of deep learning. A striking difference between neural networks and other machine learning algorithms is that deep networks easily scale to an (almost) arbitrarily large number of parameters and still generalise well on held-out test data. Only recently are we starting to understand this phenomenon, which seemed to be at odds with the results of statistical learning theory \citep{belkin2019biasvariance}.
First, the concept of implicit regularisation may help explain the generalisation of deep neural networks. Second, the fact that very large neural networks can generalise well directly casts doubt on the need for explicit regularisation \citep{zhang2016understandingdl}, that is, on the need to constrain the representational capacity. However, most artificial neural networks are still trained with explicit regularisation methods such as weight decay and dropout. In the next chapter, we follow up on this idea and directly address the question of whether explicit regularisation is necessary in deep learning, provided enough implicit regularisation is present, specifically data augmentation.
\chapterbibliography
}
\section{Introduction}
The main goal of our paper is to construct a family of Berezin-Toeplitz quantizations based on appropriate eigenspaces of the Bochner Laplacian, under a certain condition on the Riemannian metric on the symplectic manifold (slightly more general than the almost-K\"ahler one). More precisely, let $(X,\mathbf B)$ be a closed symplectic manifold of dimension $2n$. Assume that there exists a Hermitian line bundle $(L,h^L)$ on $X$ with a Hermitian connection $\nabla^L$ such that
\begin{equation}\label{e:def-omega}
\mathbf B=iR^L,
\end{equation}
where $R^L$ is the curvature of the connection $\nabla^L $ defined as $R^L=(\nabla^L)^2$.
Let $g$ be a Riemannian metric on $X$ and $(E,h^E)$ be a Hermitian vector bundle of rank $r$ on $X$ with a Hermitian connection $\nabla^E$. For any $p\in \field{N}$, let $L^p:=L^{\otimes p}$ be the $p$th tensor power of $L$ and let
\[
\nabla^{L^p\otimes E}: {C}^\infty(X,L^p\otimes E)\to
{C}^\infty(X, T^*X \otimes L^p\otimes E)
\]
be the Hermitian connection on $L^p\otimes E$ induced by $\nabla^{L}$ and $\nabla^E$. Consider the induced Bochner Laplacian $\Delta^{L^p\otimes E}$ acting on $C^\infty(X,L^p\otimes E)$ by
\begin{equation}\label{e:def-Bochner}
\Delta^{L^p\otimes E}=\big(\nabla^{L^p\otimes E}\big)^{\!*}\,
\nabla^{L^p\otimes E},
\end{equation}
where $\big(\nabla^{L^p\otimes E}\big)^{\!*}: {C}^\infty(X,T^*X\otimes L^p\otimes E)\to
{C}^\infty(X,L^p\otimes E)$ is the formal adjoint of $\nabla^{L^p\otimes E}$.
For an arbitrary $x\in X$, one can introduce a second order differential operator acting on $C^\infty(T_{x}X, E_{x})$ (the model operator), which is obtained from the Bochner Laplacian $\Delta^{L^p\otimes E}$ by freezing coefficients at $x$ (see \eqref{e:DeltaL0p} below and \cite{Kor20} for more details). It is the Bochner Laplacian on a constant curvature Hermitian line bundle over the Euclidean space $T_{x}X$. It can also be considered as the magnetic Laplacian with constant magnetic field. We consider the skew-adjoint operator $B_x : T_xX\to T_xX$ such that
\[
\mathbf B_x(u,v)=g(B_xu,v), \quad u,v\in T_xX.
\]
Its eigenvalues have the form $\pm i a_j(x), j=1,\ldots,n,$ with $a_j(x)>0$.
The spectrum of the model operator consists of eigenvalues of the form
$\sum_{j=1}^n(2k_j+1)a_j(x)$ with $(k_1,\ldots,k_n)\in\field{Z}_+^n$. Each eigenvalue has
infinite multiplicity and is called a Landau level.
We assume that the functions $a_j$ can be chosen to be constants:
\begin{equation}\label{e:aj-constant}
a_j(x)\equiv a_j, \quad x\in X, \quad j=1,\ldots,n.
\end{equation}
This is a condition on the Riemannian metric $g$, which can be satisfied for any symplectic manifold $X$. In this case, the spectrum of the model operator is independent of $x$ and coincides with
the countable discrete set
\begin{equation}\label{e:def-Sigmax}
\Sigma:=\left\{\Lambda_{\mathbf k}:=\sum_{j=1}^n(2k_j+1) a_j\,:\, \mathbf k=(k_1,\cdots,k_n)\in\field{Z}_+^n\right\}.
\end{equation}
If $J=\frac{1}{2\pi}B$ is an almost-complex structure (the almost K\"ahler case), then $a_j=2\pi$, $j=1,\ldots,n$, and
\begin{equation}\label{e:Kaehler}
\Sigma=\left\{2\pi (2k+n)\,:\, k\in\field{Z}_+\right\}.
\end{equation}
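For intuition, the discrete set $\Sigma$ can be enumerated numerically; the following short Python sketch (my own illustration, not part of the paper) checks that, for $a_j=2\pi$ and $n=2$, the bottom of $\Sigma$ reproduces \eqref{e:Kaehler}:

```python
import math
from itertools import product

def landau_levels(a, kmax=5):
    """Distinct values of sum_j (2*k_j + 1)*a_j over k in {0,...,kmax}^n."""
    return sorted({sum((2 * k + 1) * aj for k, aj in zip(ks, a))
                   for ks in product(range(kmax + 1), repeat=len(a))})

# almost-Kaehler normalisation: a_j = 2*pi for all j, here n = 2,
# so the levels should be 2*pi*(2k + n) = 2*pi*2, 2*pi*4, ...
levels = landau_levels([2 * math.pi, 2 * math.pi], kmax=3)
print([round(lev / (2 * math.pi)) for lev in levels[:4]])  # [2, 4, 6, 8]
```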
As shown in \cite{Kor20} (see also \cite{FT}), for any $K>0$, there exists $c>0$ such that for any $p\in \field{N}$ the spectrum of $\Delta^{L^p\otimes E}$ in the interval $[0,K]$ is contained in the $cp^{3/4}$-neighborhood of $p\Sigma$. In other words, the spectrum of $\Delta^{L^p\otimes E}$ asymptotically splits into clusters around $p\Sigma$ of size ${\mathcal O}(p^{3/4})$.
Next, we fix one of these clusters associated with $\Lambda\in \Sigma$ and develop the Toeplitz operator calculus associated with the eigenspace of the Bochner Laplacian corresponding to eigenvalues from this cluster. Consider an interval $I=(\alpha,\beta)$ such that $(\alpha,\beta)\cap \Sigma=\{\Lambda\}$. By the above-mentioned fact, there exist $\mu_0>0$ and $p_0\in \field{N}$ such that for any $p>p_0$
\[
\sigma(\Delta^{L^p\otimes E})\subset (-\infty, p(\Lambda -\mu_0)) \cup (p\alpha,p\beta) \cup (p(\Lambda+\mu_0), \infty).
\]
The spectral projection of the operator $\Delta^{L^p\otimes E}$ associated with $(p\alpha,p\beta)$ is independent of the choice of $I$ and will be denoted by $P_{p,\Lambda}$.
For $f\in C^\infty(X,\operatorname{End}(E))$, we define the associated Toeplitz operator to be the sequence of bounded linear operators
\[
T_{f,p}=P_{p,\Lambda}fP_{p,\Lambda}: L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E), \quad p\in \field{N}.
\]
\begin{thm}\label{t:comm}
Let $f,g\in C^\infty(X,\operatorname{End}(E))$. Then, for the product of the Toeplitz operators $\{T_{f,p}\}$ and $\{T_{g,p}\}$, we have
\begin{equation}\label{e:TfpTgp-prod}
T_{f,p}T_{g,p}=T_{fg,p}+\mathcal O(p^{-1}).
\end{equation}
Moreover, if $f,g\in C^\infty(X)$, then, for the commutator of the operators $\{T_{f,p}\}$ and $\{T_{g,p}\}$, we have
\begin{equation}\label{e:TfpTgp-comm}
[T_{f,p}, T_{g,p}]=i p^{-1}T_{\{f,g\},p}+ \mathcal O(p^{-1/2}),
\end{equation}
where $\{f,g\}$ is the Poisson bracket on the symplectic manifold $(X,\mathbf B)$.
\end{thm}
Thus, the Toeplitz operators provide a Berezin-Toeplitz quantization for the compact symplectic manifold $(X,\mathbf B)$. The limit $p\to +\infty$ for Toeplitz operators can be thought of as a semiclassical limit, with semiclassical parameter $\hbar=\frac{1}{p}\to 0$. Theorem~\ref{t:comm} shows that this quantization has the correct semiclassical limit.
In the case when the set $\mathcal K_\Lambda:=\{\mathbf k\in \field{Z}^n_+ : \Lambda_{\mathbf k}=\Lambda\}$ consists of a single element, we construct the algebra of Toeplitz operators associated with $\Lambda$.
\begin{defn}\label{d:Toeplitz0}
A Toeplitz operator is a sequence $\{T_p\}=\{T_p\}_{p\in \mathbb N}$ of bounded linear operators $T_p : L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)$, satisfying the following conditions.
\begin{description}
\item[(i)] For any $p\in \mathbb N$, we have
\[
T_p=P_{p,\Lambda}T_pP_{p,\Lambda}.
\]
\item[(ii)] There exists a sequence $g_l\in C^\infty(X,\operatorname{End}(E))$ such that
\[
T_p=P_{p,\Lambda}\left(\sum_{l=0}^\infty p^{-l}g_l\right)P_{p,\Lambda}+\mathcal O(p^{-\infty}),
\]
i.e., for any natural number $k$, there exists $C_k>0$ such that
\[
\left\|T_p-P_{p,\Lambda}\left(\sum_{l=0}^k p^{-l}g_l\right)P_{p,\Lambda}\right\|\leq C_kp^{-k-1}.
\]
\end{description}
\end{defn}
\begin{thm}\label{t:algebra}
Assume that $\mathcal K_\Lambda$ consists of a single element. Then, for any $f,g\in C^\infty(X,\operatorname{End}(E))$, the product of the Toeplitz operators $\{T_{f,p}\}$ and $\{T_{g,p}\}$ is a Toeplitz operator in the sense of Definition \ref{d:Toeplitz0}. More precisely, it admits the asymptotic expansion
\begin{equation}\label{e:TfTg-exp}
T_{f,p}T_{g,p}=\sum_{r=0}^\infty p^{-r}T_{C_r(f,g),p}+\mathcal O(p^{-\infty}),
\end{equation}
with some $C_r(f,g)\in C^\infty(X,\operatorname{End}(E))$, where the $C_r$ are bidifferential operators. In particular, $C_0(f,g)=fg$ and, for $f,g\in C^\infty(X)$, we have
\begin{equation}\label{e:C1fg-gf}
C_1(f,g)-C_1(g,f)=i\{f,g\}.
\end{equation}
\end{thm}
The idea to use Toeplitz operators for the quantization of K\"ahler manifolds was suggested by Berezin in \cite{Berezin}. We refer the reader to \cite{Ali,Englis,ma:ICMtalk,Schlich10} for some recent surveys on Berezin-Toeplitz and geometric quantization. For a general compact K\"ahler manifold, the Berezin-Toeplitz quantization was constructed by Bordemann-Meinrenken-Schli\-chen\-maier \cite{BMS94}, using the theory of Toeplitz structures of Boutet de Monvel and Guillemin \cite{BG}. In this case, the quantum space is the space of holomorphic sections of tensor powers of the prequantum line bundle over the K\"ahler manifold. For an arbitrary symplectic manifold, Guillemin and Vergne suggested using the kernel of the spin$^c$ Dirac operator as a quantum space. The corresponding Berezin-Toeplitz quantization was developed by Ma-Marinescu \cite{ma-ma:book,ma-ma08}. It is based on the asymptotic expansion of the Bergman kernel outside the diagonal obtained by Dai-Liu-Ma \cite{dai-liu-ma}. Another candidate for the quantum space was suggested by Guillemin-Uribe \cite{Gu-Uribe}: the space of eigensections of the renormalized Bochner Laplacian corresponding to eigenvalues localized near the origin. In this case, the Berezin-Toeplitz quantization was recently constructed in \cite{ioos-lu-ma-ma,Kor18}, based on the work of Ma-Marinescu: the Bergman kernel expansion from \cite{ma-ma08a} and the Toeplitz calculus developed in \cite{ma-ma08} for the spin$^c$ Dirac operator and the K\"ahler case (also with an auxiliary bundle). We note also that Charles \cite{charles16} recently proposed another approach to the quantization of symplectic manifolds, and Hsiao-Marinescu \cite{HM} constructed a Berezin-Toeplitz quantization for eigensections with small eigenvalues in the case of complex manifolds.
In our paper, we follow the approach to the Toeplitz operator calculus developed in \cite{ma-ma:book,ma-ma08,ioos-lu-ma-ma,Kor18}. The asymptotic expansions of the kernels of spectral projections that we need in our case are proved in \cite{Kor20}. When $\Lambda=\Lambda_0$ is the lowest Landau level, our results reduce to those obtained in \cite{ioos-lu-ma-ma,Kor18}, which hold for any Riemannian metric $g$ (not necessarily satisfying the condition \eqref{e:aj-constant}). We mention that, in two simultaneous papers \cite{charles20a,charles20b}, Charles studies the same subject, using the methods of \cite{charles16}.
There are several papers devoted to Toeplitz operators acting on spectral subspaces of the Landau Hamiltonian in $\field{R}^{2n}$ (see, for instance, \cite{BPR04,FP06,MR03,PRV13,RW02,RT08,RT09} and references therein). For constant magnetic fields, such operators are related to the Toeplitz operators acting on Bargmann-Fock type spaces of polyanalytic functions (see, for instance, \cite{AF14,AG10,EZ17,G10,keller-luef,roz-vas19,V00} and references therein). In particular, in \cite{keller-luef}, quantization schemes defined by polyanalytic Toeplitz operators are discussed.
The paper is organized as follows. In Section~\ref{s:algebraA}, we introduce an algebra $\mathfrak A$ of integral operators on $X$ defined in terms of conditions on their smooth Schwartz kernels. In Section~\ref{s:Toeplitz-in-A}, we show that Toeplitz operators in the sense of Definition \ref{d:Toeplitz0} belong to $\mathfrak A$. In Section \ref{s:Thm1}, using the results of the previous sections, we prove the first part of Theorem \ref{t:comm} and reduce the proof of its second part to a similar statement in the Euclidean case. The proof of this statement is given in Section~\ref{s:model}. In Section~\ref{s:charact}, we prove that the set of Toeplitz operators coincides with the algebra $\mathfrak A$ in the case when $\mathcal K_\Lambda$ consists of a single element, which gives a characterization of Toeplitz operators in terms of their Schwartz kernels in the form introduced in \cite[Theorem 4.9]{ma-ma08}. Using Theorems~\ref{t:characterization} and \ref{t:comm}, we easily complete the proof of Theorem \ref{t:algebra}.
This work was started as a joint project with L. Charles, but later we decided to work on our approaches separately. I would like to thank Laurent for his collaboration.
\section{Algebra of integral operators}\label{s:algebraA}
In this section, we introduce an algebra $\mathfrak A$ of integral operators on $X$ defined in terms of conditions on their smooth Schwartz kernels. Our motivation comes from the description of Toeplitz operators in terms of their Schwartz kernels introduced in \cite{ma-ma:book,ma-ma08}. Later, we will show that Toeplitz operators in the sense of Definition \ref{d:Toeplitz0} belong to this algebra, and, if $\mathcal K_\Lambda$ consists of a single element, the set of Toeplitz operators coincides with $\mathfrak A$.
We introduce normal coordinates near an arbitrary point $x_0\in X$.
We denote by $B^{X}(x_0,r)$ and $B^{T_{x_0}X}(0,r)$ the open balls in $X$ and $T_{x_0}X$ with center $x_0$ and radius $r$, respectively. Let $r_X>0$ be the injectivity radius of $X$. We identify $B^{T_{x_0}X}(0,r_X)$ with $B^{X}(x_0,r_X)$ via the exponential map $\exp^X_{x_0}: T_{x_0}X \to X$. Furthermore, we choose trivializations of the bundles $L$ and $E$ over $B^{X}(x_0,r_X)$, identifying their fibers $L_Z$ and $E_Z$ at $Z\in B^{T_{x_0}X}(0,r_X)\cong B^{X}(x_0,r_X)$ with the spaces $L_{x_0}$ and $E_{x_0}$ by parallel transport with respect to the connections $\nabla^L$ and $\nabla^E$ along the curve $\gamma_Z : [0,1]\ni u \to \exp^X_{x_0}(uZ)$. Denote by $\nabla^{L^p\otimes E}$ and $h^{L^p\otimes E}$ the connection and the Hermitian metric on the trivial bundle with fiber $(L^p\otimes E)_{x_0}$ induced by these trivializations.
We choose an orthonormal basis $\{e_j : j=1,\ldots,2n\}$ in $T_{x_0}X$ such that
\begin{equation}\label{e:obase}
B_{x_0}e_{2k-1}=a_k e_{2k}, \quad B_{x_0}e_{2k}=-a_ke_{2k-1},\quad k=1,\ldots,n.
\end{equation}
Thus, we have
\begin{equation}\label{e:Bx0}
\mathbf B_{x_0}=\sum_{k=1}^{n}a_kdZ_{2k-1}\wedge dZ_{2k}.
\end{equation}
We introduce a coordinate chart $\gamma_{x_0} : B(0,c)\subset \field{R}^{2n}\to X$ defined on the ball $B(0,c):=\{Z\in \field{R}^{2n} : |Z|<c\}$ with some $c\in (0,r_X)$, which is given by the restriction of the exponential map $\exp_{x_0}^X$ composed with the linear isomorphism $\mathbb R^{2n}\to T_{x_0}X$ determined by the basis $\{e_j\}$.
Let $dv_{TX}$ denote the Riemannian volume form of the Euclidean space $(T_{x_0}X, g_{x_0})$. We define a smooth function $\kappa$ on $B^{T_{x_0}X}(0,r_X)\cong B^{X}(x_0,r_X)$ by the equation
\[
dv_{X}(Z)=\kappa(Z)dv_{TX}(Z), \quad Z\in B^{T_{x_0}X}(0,r_X).
\]
Let $\{\Xi_p\}$ be a sequence of linear operators $\Xi_p : L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)$ with smooth kernel $\Xi_p(x,x^\prime)$ with respect to $dv_X$. Consider the fiberwise product $TX\times_X TX=\{(Z,Z^\prime)\in T_{x_0}X\times T_{x_0}X : x_0\in X\}$. Let $\pi : TX\times_X TX\to X$ be the natural projection given by $\pi(Z,Z^\prime)=x_0$. The kernel $\Xi_p(x,x^\prime)$ induces a smooth section $\Xi_{p,x_0}(Z,Z^\prime)$ of the vector bundle $\pi^*(\operatorname{End}(E))$ on $TX\times_X TX$ defined for all $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$ with $|Z|, |Z^\prime|<r_X$.
Denote by $\mathcal P$ the Bergman kernel in $\mathbb R^{2n}$ given by
\begin{equation}
\label{e:Bergman}
\mathcal P(Z,Z^\prime)=\frac{1}{(2\pi)^n}\prod_{j=1}^na_j \exp\left(-\frac 14\sum_{k=1}^na_k(|z_k|^2+|z_k^\prime|^2- 2z_k\bar z_k^\prime) \right) .
\end{equation}
We will use the same notation for the corresponding scalar function $\mathcal P(Z,Z^\prime)=\mathcal P(Z,Z^\prime)\operatorname{Id}_{E_{x_0}}$ on $T_{x_0}X\times T_{x_0}X$ with values in $\operatorname{End}(E_{x_0})$.
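Since $\mathcal P$ is the Schwartz kernel of an orthogonal projection (onto the lowest Landau level of the model operator), it is idempotent under composition of integral kernels. The following numerical sketch (my own illustration, for $n=1$ and a hypothetical value $a=2$, not part of the paper) verifies this reproducing property on a quadrature grid:

```python
import numpy as np

# For n = 1 and a > 0, the kernel of eq. (e:Bergman) is
#   P(z, z') = (a/(2 pi)) exp(-(a/4)(|z|^2 + |z'|^2 - 2 z conj(z'))),
# and, being a projection kernel, satisfies  int P(z,w) P(w,z') dw = P(z,z').
a = 2.0

def P(z, zp):
    return (a / (2 * np.pi)) * np.exp(
        -(a / 4) * (abs(z) ** 2 + abs(zp) ** 2 - 2 * z * np.conj(zp)))

# Riemann-sum quadrature for w = u + i*v over [-L, L]^2
L, N = 7.0, 141
u = np.linspace(-L, L, N)
h = u[1] - u[0]
U, V = np.meshgrid(u, u)
W = U + 1j * V

z, zp = 0.3 + 0.2j, -0.5 + 0.1j
lhs = np.sum(P(z, W) * P(W, zp)) * h ** 2
print(abs(lhs - P(z, zp)) < 1e-8)  # True
```

The Gaussian decay of the integrand makes the naive Riemann sum accurate far beyond the tolerance used here.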
\begin{defn}[\cite{ma-ma:book,ma-ma08}]
We say that
\begin{equation}
\label{e:full-off}
p^{-n}\Xi_{p,x_0}(Z,Z^\prime)\cong \sum_{r=0}^k(Q_{r,x_0}\mathcal P)(\sqrt{p}Z,\sqrt{p}Z^\prime)p^{-\frac{r}{2}}+\mathcal O(p^{-\frac{k+1}{2}})
\end{equation}
with some $Q_{r,x_0}\in \operatorname{End}(E_{x_0})[Z,Z^\prime]$, $0\leq r\leq k$, depending smoothly on $x_0\in X$, if there exist $\varepsilon^\prime\in (0,r_X]$ and $C_0>0$ with the following property:
for any $l\in \mathbb N$, there exist $C>0$ and $M>0$ such that for any $x_0\in X$, $p\geq 1$ and $Z,Z^\prime\in T_{x_0}X$, $|Z|, |Z^\prime|<\varepsilon^\prime$, we have
\begin{multline*}
\Bigg|p^{-n}\Xi_{p,x_0}(Z,Z^\prime)\kappa^{\frac 12}(Z)\kappa^{\frac 12}(Z^\prime) -\sum_{r=0}^k(Q_{r,x_0}\mathcal P)(\sqrt{p} Z, \sqrt{p}Z^\prime)p^{-\frac{r}{2}}\Bigg|_{\mathcal C^{l}(X)}\\
\leq Cp^{-\frac{k+1}{2}}(1+\sqrt{p}|Z|+\sqrt{p}|Z^\prime|)^M\exp(-\sqrt{C_0p}|Z-Z^\prime|)+\mathcal O(p^{-\infty}).
\end{multline*}
\end{defn}
Here $\mathcal C^{l}(X)$ denotes the $\mathcal C^{l}$-norm with respect to the parameter $x_0\in X$. We say that $G_p=\mathcal O(p^{-\infty})$ if for any $l, l_1\in \mathbb N$, there exists $C_{l,l_1}>0$ such that the $\mathcal C^{l_1}$-norm of $G_p$ is bounded from above by $C_{l,l_1}p^{-l}$.
The expansion \eqref{e:full-off} will be called the full off-diagonal expansion for the kernel of $\Xi_p$.
\begin{defn}\label{d:Toeplitz}
We introduce the class $\mathfrak A$, which consists of sequences of linear operators $\{T_p: L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)\}$, satisfying the following conditions:
\begin{description}
\item[(i)] For any $p\in \mathbb N$, we have
\[
T_p=P_{p,\Lambda}T_pP_{p,\Lambda}.
\]
\item[(ii)] For any $\varepsilon_0>0$ and $l\in \mathbb N$, there exists $C>0$ such that
\[
|T_{p}(x,x^\prime)|\leq Cp^{-l}
\]
for any $p\in \mathbb N$ and $(x,x^\prime)\in X\times X$ with $d(x,x^\prime)>\varepsilon_0$. (Here $d(x,x^\prime)$ is the geodesic distance.)
\item[(iii)]
The kernel of $T_p$ admits the full off-diagonal expansion
\[
p^{-n}T_{p,x_0}(Z,Z^\prime)\cong \sum_{r=0}^kK_{r,x_0}(\sqrt{p}Z,\sqrt{p}Z^\prime)p^{-\frac{r}{2}}+\mathcal O(p^{-\frac{k+1}{2}})
\]
for any $k\in \mathbb N$, $x_0\in X$, $Z,Z^\prime\in T_{x_0}X$, $|Z|, |Z^\prime|<\varepsilon^\prime$ with some $\varepsilon^\prime\in (0,r_X/4)$,
with
\[
K_{r,x_0}(Z,Z^\prime)=(\mathcal Q_{r,x_0}\mathcal P)(Z,Z^\prime),
\]
where $\mathcal Q_{r,x_0}\in \operatorname{End}(E_{x_0})[Z,Z^\prime]$ is a family of polynomials, depending smoothly on $x_0$, of the same parity as $r$.
\end{description}
\end{defn}
One can easily check the following properties of $\mathfrak A$.
\begin{prop}\label{p:boundedA}
The set $\mathfrak A$ is an involutive algebra. For any $\{T_p\}\in \mathfrak A$, the operator $T_p$ is bounded in $L^2(X,L^p\otimes E)$, with norm uniformly bounded in $p$.
\end{prop}
For any $F, G\in C^\infty(T_{x_0}X\times T_{x_0}X, \operatorname{End}(E_{x_0}))$ exponentially decreasing away from the diagonal, we denote by $F\ast G\in C^\infty(T_{x_0}X\times T_{x_0}X, \operatorname{End}(E_{x_0}))$ the smooth kernel of the composition of the corresponding integral operators in $L^2(T_{x_0}X, E_{x_0})$:
\[
(F\ast G)(Z,Z^\prime)=\int_{T_{x_0}X} F(Z,Z^{\prime\prime})G(Z^{\prime\prime} ,Z^\prime)dZ^{\prime\prime}.
\]
\begin{prop}\label{p:algebraA}
For any $\{T^\prime_p\}, \{T^{\prime\prime}_p\}\in \mathfrak A$, the coefficients $K_{r,x_0}(Z,Z^\prime)$ in the full off-diagonal expansion for the kernel of the composition $\{T^\prime_p\circ T^{\prime\prime}_p\}$ are related with the analogous coefficients $K^\prime_{r,x_0}(Z,Z^\prime)$ and $K^{\prime\prime}_{r,x_0}(Z,Z^\prime)$ for $\{T^\prime_p\}$ and $\{T^{\prime\prime}_p\}$, respectively, by
\begin{equation}\label{e:comp-K}
K_{r,x_0}=\sum_{r_1+r_2=r}K^\prime_{r_1,x_0}\ast K^{\prime\prime}_{r_2,x_0}.
\end{equation}
\end{prop}
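Relation \eqref{e:comp-K} is simply a Cauchy product in the expansion order $r$. As a schematic sketch (my own illustration, with the kernel composition $\ast$ abstracted as a callable and checked on scalar stand-ins):

```python
def compose_coeffs(Kp, Kpp, star):
    """K_r = sum over r1 + r2 = r of star(Kp[r1], Kpp[r2]) (eq. e:comp-K)."""
    R = min(len(Kp), len(Kpp))
    return [sum(star(Kp[r1], Kpp[r - r1]) for r1 in range(r + 1))
            for r in range(R)]

# scalar toy check: composing expansions 1 + x and 1 + 2x gives 1 + 3x + ...
print(compose_coeffs([1, 1], [1, 2], lambda a, b: a * b))  # [1, 3]
```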
\section{Description of the kernels of Toeplitz operators}\label{s:Toeplitz-in-A}
In this section, we show that any Toeplitz operator $\{T_{p}\}$ in the sense of Definition \ref{d:Toeplitz0} belongs to the algebra $\mathfrak A$ introduced in the previous section. It is easy to see that it suffices to do this for the operator $\{T_{f,p}\}$ determined by $f\in C^\infty(X,\operatorname{End}(E))$.
Let $P_{p,\Lambda}(x,x^\prime)$, $x,x^\prime\in X$, be the smooth kernel of $P_{p,\Lambda}$ with respect to the Riemannian volume form $dv_X$. The Schwartz kernel of $T_{f,p}$ is given by
\begin{equation}\label{e:Tfp}
T_{f,p}(x,x^\prime)=\int_X P_{p,\Lambda}(x,y)f(y)P_{p,\Lambda}(y,x^{\prime})dv_X(y).
\end{equation}
\begin{lem}
For any $\varepsilon>0$ and $l,m\in \mathbb N$, there exists $C>0$ such that for any $p\geq 1$ and $(x,x^\prime)\in X\times X$ with $d(x,x^\prime)>\varepsilon$ we have
\[
|T_{f,p}(x,x^\prime)|_{C^m}\leq Cp^{-l}.
\]
\end{lem}
Here $|T_{f,p}(x, x^\prime)|_{C^m}$ denotes the pointwise $C^m$-seminorm of the section $T_{f,p}$ at a point $(x, x^\prime)\in X\times X$, which is the sum of the norms induced by $h^L, h^E$ and $g$ of the derivatives up to order $m$ of $T_{f,p}$ with respect to the connection $\nabla^{L^p\otimes E}$ and the Levi-Civita connection $\nabla^{TX}$, evaluated at $(x, x^\prime)$.
\begin{proof}
The proof follows from \eqref{e:Tfp} and the off-diagonal exponential estimate for $P_{p,\Lambda}(x,x^\prime)$ \cite{Kor20} as in \cite[Lemma 4.2]{ma-ma08a}.
\end{proof}
By \cite{Kor20}, for any $k\in \mathbb N$, the kernel of $P_{p,\Lambda}$ admits the full off-diagonal expansion
\begin{equation}\label{e:PL-exp}
p^{-n}P_{p,\Lambda,x_0}(Z,Z^\prime)\cong
\sum_{r=0}^kF_{r,x_0}(\sqrt{p} Z, \sqrt{p}Z^\prime)p^{-\frac{r}{2}}+\mathcal O(p^{-\frac{k+1}{2}}).
\end{equation}
Here, for any $r\geq 0$, the coefficient $F_{r,x_0}\in C^\infty(T_{x_0}X\times T_{x_0}X, \operatorname{End}(E_{x_0}))$ has the form
\begin{equation}\label{e:Fr}
F_{r,x_0}(Z,Z^\prime)=J_{r,x_0}(Z,Z^\prime)\mathcal P(Z,Z^\prime),
\end{equation}
where $J_{r,x_0}(Z,Z^\prime)$ is a polynomial in $Z, Z^\prime$, depending smoothly on $x_0$,
of the same parity as $r$ and $\operatorname{deg} J_{r,x_0}\leq \kappa(\Lambda)+3r$, where $\kappa(\Lambda)=\max \{|\mathbf k| : \Lambda_{\mathbf k}=\Lambda \}$.
Recall the description of the leading coefficient $F_{0,x_0}(Z,Z^\prime)$. We introduce the connection on the trivial line bundle on $T_{x_0}X\cong \field{R}^{2n}$, with the connection one-form $\alpha$ given by
\begin{equation}\label{e:Aflat}
\alpha=\sum_{k=1}^{n}\frac{1}{2}a_k(Z_{2k-1}\, dZ_{2k}-Z_{2k}\, dZ_{2k-1}).
\end{equation}
Its curvature is constant: $d\alpha=\mathbf B_{x_0}$.
Let $\mathcal H^{(0)}$ be the associated Bochner Laplacian on $C^\infty(\field{R}^{2n})$:
\begin{equation}\label{e:DeltaL0p}
\mathcal H^{(0)}=(d-i\alpha)^*(d-i\alpha).
\end{equation}
The spectrum of $\mathcal H^{(0)}$ coincides with $\Sigma$ and consists of eigenvalues of infinite multiplicity. Considered as an operator on $C^\infty(T_{x_0}X,E_{x_0})\cong C^\infty(\field{R}^{2n},E_{x_0})$, the operator $\mathcal H^{(0)}\otimes \operatorname{id}_{E_{x_0}}$ is exactly the model operator at $x_0$ mentioned in the Introduction. So the assumption \eqref{e:aj-constant} guarantees that, in suitable coordinates, the model operator is independent of $x_0$. Let $\mathcal P_{\Lambda}$ be the orthogonal projection onto the eigenspace of $\mathcal H^{(0)}$ with eigenvalue $\Lambda$ (see Section \ref{s:model} for more information on $\mathcal P_{\Lambda}$).
The leading coefficient in \eqref{e:PL-exp} is given by
\begin{equation}\label{e:F0}
F_{0,x_0}(Z,Z^\prime)=\mathcal P_{\Lambda}(Z,Z^\prime).
\end{equation}
As in \cite[Lemma 4.7]{ma-ma08a}, using an explicit formula for $F_{1,x_0}$ given in \cite{Kor20}, one can show that, for any $Z, Z^\prime\in T_{x_0}X$, $F_{1,x_0}(Z,Z^\prime)$ is a scalar operator in $E_{x_0}$, and, therefore, commutes with any operator in $E_{x_0}$.
Since $P_{p,\Lambda}$ is a projection, we have
\begin{gather}
\mathcal P_{\Lambda}\ast F_{1,x_0}+F_{1,x_0} \ast \mathcal P_{\Lambda}=F_{1,x_0},\label{e:F1} \\
\mathcal P_{\Lambda}\ast F_{2,x_0}+F_{1,x_0}\ast F_{1,x_0}+ F_{2,x_0} \ast \mathcal P_{\Lambda}=F_{2,x_0}. \label{e:F2}
\end{gather}
\begin{lem}
For any $f\in C^\infty(X, \operatorname{End}(E))$, the operator $\{T_{f,p}\}$ belongs to $\mathfrak A$. The coefficients $K_{r,x_0}(f)\in C^\infty(T_{x_0}X\times T_{x_0}X, \operatorname{End}(E_{x_0}))$ of the full off-diagonal expansion for the kernel of $T_{f,p}$ are given by
\begin{equation}\label{e:Krf}
K_{r,x_0}(f)=\sum_{r_1+r_2+k=r}F_{r_1,x_0}\ast \frac{1}{k!}(d^kf_{x_0})_0\cdot F_{r_2,x_0},
\end{equation}
where
\[
(d^kf_{x_0})_0(Z)=\sum_{|\alpha|=k}\frac{k!}{\alpha!}\,\frac{\partial^\alpha f_{x_0}}{\partial Z^\alpha}(0)\,Z^\alpha
\]
and we denote
\[
(d^kf_{x_0})_0\cdot F_{r_2,x_0}(Z,Z^\prime)=(d^kf_{x_0})_0(Z)\, F_{r_2,x_0}(Z,Z^\prime).
\]
In particular, we have
\begin{align}
K_{0,x_0}(f)=&f(x_0)\mathcal P_{\Lambda},\label{e:K0f}\\
K_{1,x_0}(f)=& f(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda},\label{e:K1f}
\end{align}
and, for $f\in C^\infty(X)$,
\begin{multline}
K_{2,x_0}(f)= f(x_0)F_{2,x_0}+F_{1,x_0}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda} \\ + \mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot F_{1,x_0}+ \mathcal P_{\Lambda}\ast \frac 12 (d^2f_{x_0})_0\cdot \mathcal P_{\Lambda}. \label{e:K2f}
\end{multline}
\end{lem}
\begin{proof}
The proof goes along the same lines as the proof of \cite[Lemma 4.6]{ma-ma08a}, so we just highlight the main points.
The fact that $\{T_{f,p}\}$ belongs to $\mathfrak A$ follows easily from \eqref{e:Tfp}, \eqref{e:PL-exp} and Proposition \ref{p:algebraA}.
By \eqref{e:Tfp} and \eqref{e:PL-exp}, we also get
\begin{multline*}
p^{-n}T_{f,p,x_0}(Z,Z^\prime)\cong \sum_{r=0}^\infty p^{-\frac{r}{2}} \sum_{r_1+r_2=r} \int F_{r_1,x_0}(\sqrt{p} Z, \sqrt{p}W)f_{x_0}(W)\times \\ \times F_{r_2,x_0}(\sqrt{p} W, \sqrt{p}Z^\prime)dW.
\end{multline*}
Now we write the Taylor expansion for $f_{x_0}$ at $0$:
\[
f_{x_0}(W) =\sum_{\alpha\in \field{Z}_+^{2n}} \frac{\partial^\alpha f_{x_0}}{\partial W^\alpha}(0)\frac{W^\alpha}{\alpha!}.
\]
We infer that
\begin{multline*}
p^{-n}T_{f,p,x_0}(Z,Z^\prime)\cong \sum_{\alpha\in \field{Z}_+^{2n}} \sum_{r=0}^\infty p^{-\frac{r}{2}} \sum_{r_1+r_2+|\alpha|=r} \int F_{r_1,x_0}(\sqrt{p} Z, \sqrt{p}W)\times \\ \times \frac{\partial^\alpha f_{x_0}}{\partial W^\alpha}(0)\frac{(\sqrt{p} W)^\alpha}{\alpha!} F_{r_2,x_0}(\sqrt{p} W, \sqrt{p}Z^\prime)dW,
\end{multline*}
which proves \eqref{e:Krf}.
Using \eqref{e:Krf}, \eqref{e:F1}, \eqref{e:F2} and the fact that $F_{1,x_0}(Z,Z^\prime)$ commutes with any operator in $E_{x_0}$, one can easily derive \eqref{e:K0f}, \eqref{e:K1f} and \eqref{e:K2f}.
\end{proof}
\section{The composition theorem}\label{s:Thm1}
Now we will use the results of the previous sections to prove Theorem \ref{t:comm}. In this section, we will prove the first part of the theorem and reduce the proof of the second part to the proof of a similar statement in the Euclidean case, which will be given in the next section.
Let $f,g\in C^\infty(X, \operatorname{End}(E))$. By Proposition \ref{p:algebraA}, the operator $T_{f,p}T_{g,p}$ belongs to $\mathfrak A$, and, by \eqref{e:comp-K}, the coefficients $K_{r,x_0}(f,g)\in C^\infty(T_{x_0}X\times T_{x_0}X, \operatorname{End}(E_{x_0}))$ of the full off-diagonal expansion for the kernel of $T_{f,p}T_{g,p}$ are given by
\begin{equation}\label{e:Krf-g}
K_{r,x_0}(f,g)=\sum_{r_1+r_2=r}K_{r_1,x_0}(f)\ast K_{r_2,x_0}(g).
\end{equation}
To prove \eqref{e:TfpTgp-prod}, it is sufficient to show that
\[
K_{r,x_0}(f,g)=K_{r,x_0}(fg), \quad r=0,1.
\]
By \eqref{e:Krf-g}, \eqref{e:K0f} and the fact that $\mathcal P_{\Lambda}$ commutes with $g(x_0)$, we get
\begin{align*}
K_{0,x_0}(f,g)=& K_{0,x_0}(f)\ast K_{0,x_0}(g)\\ =& f(x_0)\mathcal P_{\Lambda}\ast g(x_0)\mathcal P_{\Lambda}=f(x_0)g(x_0)\mathcal P_{\Lambda}=K_{0,x_0}(fg).
\end{align*}
Next, by \eqref{e:Krf-g}, \eqref{e:K0f}, \eqref{e:K1f}, \eqref{e:F1}, we have
\begin{align*}
K_{1,x_0}(f,g)=& K_{1,x_0}(f)\ast K_{0,x_0}(g)+K_{0,x_0}(f)\ast K_{1,x_0}(g)\\
=& (f(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda}) \ast g(x_0)\mathcal P_{\Lambda}\\ &+f(x_0)\mathcal P_{\Lambda}\ast (g(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}) \\
=& f(x_0)g(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (df_{x_0})_0 g(x_0)\cdot \mathcal P_{\Lambda}\\ &+\mathcal P_{\Lambda}\ast f(x_0)(dg_{x_0})_0\cdot \mathcal P_{\Lambda}\\
=& f(x_0)g(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (d(fg)_{x_0})_0 \cdot \mathcal P_{\Lambda}\\ = & K_{1,x_0}(fg),
\end{align*}
since $\mathcal P_{\Lambda}$ and $F_{1,x_0}$ commute with $g(x_0)$ and $f(x_0)$, and since $h\cdot \mathcal P_{\Lambda} \ast \mathcal P_{\Lambda}=h\cdot \mathcal P_{\Lambda}$ for any $h$.
This proves \eqref{e:TfpTgp-prod}.
Now suppose that $f,g\in C^\infty(X)$. Then we have
\begin{equation}\label{e:K01fg-gf}
K_{r,x_0}(f,g)-K_{r,x_0}(g,f)=0, \quad r=0,1.
\end{equation}
Let us compute $K_{2,x_0}(f,g)$. By \eqref{e:Krf-g}, \eqref{e:K0f}, \eqref{e:K1f}, \eqref{e:K2f}, we have
\begin{align*}
K_{2,x_0}(f,g)
=& \Big(f(x_0)F_{2,x_0}+F_{1,x_0}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda} + \mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot F_{1,x_0} \\ & + \mathcal P_{\Lambda}\ast \frac 12 (d^2f_{x_0})_0\cdot \mathcal P_{\Lambda}\Big)\ast g(x_0)\mathcal P_{\Lambda}\\
& +(f(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda})\ast (g(x_0)F_{1,x_0}+\mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda})\\ & + f(x_0)\mathcal P_{\Lambda} \ast \Big(g(x_0)F_{2,x_0}+F_{1,x_0}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}\\ & + \mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot F_{1,x_0}+ \mathcal P_{\Lambda}\ast \frac 12 (d^2g_{x_0})_0\cdot \mathcal P_{\Lambda}\Big).
\end{align*}
Using \eqref{e:F2}, we collect the terms with $f(x_0)g(x_0)$. Since $\mathcal P_{\Lambda}$ and $F_{1,x_0}$ commute with $g(x_0)$ and $f(x_0)$, and $h\cdot \mathcal P_{\Lambda} \ast \mathcal P_{\Lambda}=h\cdot \mathcal P_{\Lambda}$ for any $h$, we get
\begin{align*}
K_{2,x_0}(f,g)=& f(x_0)g(x_0)F_{2,x_0}\\ & +F_{1,x_0}\ast (df_{x_0})_0g(x_0)\cdot \mathcal P_{\Lambda} + \mathcal P_{\Lambda}\ast (df_{x_0})_0g(x_0)\cdot F_{1,x_0}\ast \mathcal P_{\Lambda} \\ & + \mathcal P_{\Lambda}\ast \frac 12 (d^2f_{x_0})_0g(x_0) \cdot \mathcal P_{\Lambda}+F_{1,x_0}\ast f(x_0)(dg_{x_0})_0\cdot \mathcal P_{\Lambda}\\ & +\mathcal P_{\Lambda}\ast (df_{x_0})_0g(x_0)\cdot \mathcal P_{\Lambda}\ast F_{1,x_0}+\mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}\\ & + \mathcal P_{\Lambda} \ast f(x_0) (dg_{x_0})_0\cdot F_{1,x_0}
+ \mathcal P_{\Lambda}\ast \frac 12 f(x_0)(d^2g_{x_0})_0\cdot \mathcal P_{\Lambda}\\ =& f(x_0)g(x_0)F_{2,x_0}+F_{1,x_0}\ast (d(fg)_{x_0})_0\cdot \mathcal P_{\Lambda}+ \mathcal P_{\Lambda}\ast (d(fg)_{x_0})_0\cdot F_{1,x_0} \\ & + \mathcal P_{\Lambda}\ast \frac 12 (d^2f_{x_0})_0g(x_0) \cdot \mathcal P_{\Lambda} +\mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}\\ & + \mathcal P_{\Lambda}\ast \frac 12 f(x_0)(d^2g_{x_0})_0\cdot \mathcal P_{\Lambda}.
\end{align*}
Finally, we see that
\begin{multline}\label{e:K2fg-gf}
K_{2,x_0}(f,g)-K_{2,x_0}(g,f)\\= \mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}-\mathcal P_{\Lambda}\ast (dg_{x_0})_0\cdot \mathcal P_{\Lambda}\ast (df_{x_0})_0\cdot \mathcal P_{\Lambda}.
\end{multline}
Thus, we have reduced the proof of \eqref{e:TfpTgp-comm} to the linear model.
We consider the linear space $\mathbb R^{2n}$ equipped with the symplectic form $\omega_a=\sum_{k=1}^{n}a_kdZ_{2k-1}\wedge dZ_{2k}$ (cf. \eqref{e:Bx0}).
\begin{prop}\label{p:linear-comm}
For linear functions $F$ and $G$ on $\mathbb R^{2n}$, we have
\[
[\mathcal P_{\Lambda} F \mathcal P_{\Lambda}, \mathcal P_{\Lambda} G \mathcal P_{\Lambda}]=\{F,G\}_a \mathcal P_{\Lambda},
\]
where $\{F,G\}_a$ is the Poisson bracket on the symplectic manifold $(\mathbb R^{2n},\omega_a)$.
\end{prop}
The proof of Proposition \ref{p:linear-comm} will be given in the next section. Now we demonstrate how it allows us to complete the proof of \eqref{e:TfpTgp-comm}.
Observe that, for linear functions $F$ and $G$, the Poisson bracket $\{F,G\}_a$ is a constant function on $\field{R}^{2n}$, and, by \eqref{e:Bx0}, it is easy to see that, for any $f,g\in C^\infty(X)$,
\[
\{f,g\}(x_0)=\{(df_{x_0})_0,(dg_{x_0})_0\}_a.
\]
Therefore, the identity \eqref{e:TfpTgp-comm} follows immediately from \eqref{e:K01fg-gf}, \eqref{e:K2fg-gf} and Propositions \ref{p:linear-comm} and \ref{p:boundedA}.
\section{The model case}\label{s:model}
In this section, we prove Proposition \ref{p:linear-comm}, thus completing the proof of Theorem \ref{t:comm}. First, we need to recall some information on the spectral theory of the model operator $\mathcal H^{(0)}$ \cite{ma-ma08a}.
We will use the complex coordinates $z\in\mathbb C^{n}\cong \mathbb R^{2n}$, $z_j=Z_{2j-1}+iZ_{2j}, j=1,\ldots,n,$ in the linear space $\mathbb R^{2n}$. Put
\[
\frac{\partial}{\partial z_j}=\frac{1}{2}\left(\frac{\partial}{\partial Z_{2j-1}}-i\frac{\partial}{\partial Z_{2j}}\right), \quad \frac{\partial}{\partial \bar{z}_j}=\frac{1}{2}\left(\frac{\partial}{\partial Z_{2j-1}}+i\frac{\partial}{\partial Z_{2j}}\right).
\]
Define first order differential operators $b_j,b^{+}_j, j=1,\ldots,n,$ on $C^\infty(\field{R}^{2n},E_{x_0})$ by the formulas
\[
b_j= -2{\frac{\partial}{\partial z_j}}+\frac{1}{2}a_j\bar{z}_j,\quad
b^{+}_j=2{\frac{\partial}{\partial\bar{z}_j}}+\frac{1}{2}a_j z_j, \quad j=1,\ldots,n.
\]
Then $b^{+}_j$ is the formal adjoint of $b_j$ on $L^2(\field{R}^{2n},E_{x_0})$, and
\[
\mathcal H^{(0)}=\sum_{j=1}^n b_j b^{+}_j+\Lambda_0.
\]
We have the commutation relations
\begin{equation}
[b_i,b^{+}_j]=b_i b^{+}_j-b^{+}_j b_i =-2a_i \delta_{i\,j},\quad
[b_i,b_j]=[b^{+}_i,b^{+}_j]=0\, ,\label{e:com1}
\end{equation}
and, for any polynomial $g(z,\bar{z})$ in $z$ and $\bar{z}$,
\begin{equation}
[g(z,\bar{z}),b_j]= 2 \tfrac{\partial}{\partial z_j}g(z,\bar{z}),
\quad [g(z,\bar{z}),b_j^+]
= - 2\tfrac{\partial}{\partial \bar{z}_j}g(z,\bar{z})\,. \label{com-bg}
\end{equation}
By \cite[(1.98)]{ma-ma08}, we have
\begin{equation}\label{bP}
(b^{+}_j\mathcal P)(Z,Z^\prime)=0, \quad (b_j\mathcal P)(Z,Z^\prime)=a_j(\bar z_j-\bar z^\prime_j)\mathcal P(Z,Z^\prime).
\end{equation}
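As a quick sanity check (not part of the argument), the relations \eqref{e:com1}, \eqref{com-bg} and \eqref{bP} can be verified symbolically for $n=1$ in SymPy, treating $z$ and $\bar z$ as independent Wirtinger symbols and taking for $\mathcal P$ the $n=1$ kernel $\frac{a}{2\pi}\exp\left(-\frac a4(|z|^2+|z^\prime|^2-2z\bar z^\prime)\right)$ (i.e. \eqref{e:PLambda-k} below with ${\mathbf k}=0$); the choice of test functions here is arbitrary.

```python
import sympy as sp

# Sanity check of (e:com1), (com-bg) and (bP) for n = 1.
# z, zb are independent "Wirtinger" symbols for z_j, \bar z_j;
# zp, zpb play the roles of z'_j, \bar z'_j; a > 0 is the coefficient a_j.
z, zb, zp, zpb = sp.symbols('z zb zp zpb')
a = sp.symbols('a', positive=True)

b  = lambda f: -2*sp.diff(f, z)  + sp.Rational(1, 2)*a*zb*f   # b_j
bp = lambda f:  2*sp.diff(f, zb) + sp.Rational(1, 2)*a*z*f    # b_j^+

# (e:com1): [b, b^+] = -2a, tested on a generic monomial
f = z**3 * zb**2
assert sp.simplify(b(bp(f)) - bp(b(f)) + 2*a*f) == 0

# (com-bg): [g, b] = 2 dg/dz and [g, b^+] = -2 dg/d(zb) for a polynomial g
g = z**2*zb + 3*zb
assert sp.simplify(g*b(f) - b(g*f) - 2*sp.diff(g, z)*f) == 0
assert sp.simplify(g*bp(f) - bp(g*f) + 2*sp.diff(g, zb)*f) == 0

# (bP): b^+ P = 0 and b P = a (zb - zpb) P for the n = 1 model kernel
P = (a/(2*sp.pi))*sp.exp(-a/4*(z*zb + zp*zpb - 2*z*zpb))
assert sp.simplify(bp(P)) == 0
assert sp.simplify(b(P) - a*(zb - zpb)*P) == 0
print("relations (e:com1), (com-bg), (bP) verified for n = 1")
```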
We also introduce first order differential operators $\bar b_j, \bar b^{+}_j, j=1,\ldots,n,$ on $C^\infty(\field{R}^{2n},E_{x_0})$ by the formulas
\[
\bar b_j= -2{\tfrac{\partial}{\partial \bar z_j}}+\tfrac{1}{2}a_j{z}_j,\quad
\bar b^{+}_j=2{\tfrac{\partial}{\partial {z}_j}}+\tfrac{1}{2}a_j \bar z_j, \quad j=1,\ldots,n.
\]
They commute with the operators $b_j,b^{+}_j, j=1,\ldots,n,$ and satisfy the same commutation relations \eqref{e:com1} as $b_j,b^{+}_j$.
We have
\begin{equation}\label{bar-bP}
(\bar b^{+}_j\mathcal P)(Z,Z^\prime)=a_j\bar z^\prime_j\mathcal P(Z,Z^\prime), \quad (\bar b_j\mathcal P)(Z,Z^\prime)=a_jz_j\mathcal P(Z,Z^\prime).
\end{equation}
Recall that any function $\Phi\in L^2(\field{R}^{2n},E_{x_0})$ of the form
\begin{equation}\label{e:eigenspace}
\Phi=b^{\mathbf k}\left(f(z)\exp\left({-\frac{1}{4}\sum_{j=1}^n a_j|z_j|^2}\right)\right),
\end{equation}
where $f$ is an analytic function in $\field{C}^n\cong \field{R}^{2n}$ and $\mathbf k\in\field{Z}_+^n$, is an eigenfunction of the operator $\mathcal H^{(0)}$ with the eigenvalue $\Lambda_{\mathbf k}=\sum_{j=1}^n(2k_j+1) a_j$.
In particular, the eigenspace of $\mathcal H^{(0)}$ associated with an eigenvalue $\Lambda$ consists of functions $\Phi$ given by \eqref{e:eigenspace} with $\mathbf k\in\mathcal K_\Lambda$.
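The eigenvalue statement can likewise be checked symbolically for $n=1$ (so $\Lambda_0=a$ and $\Lambda_k=(2k+1)a$), for small $k$ and monomial analytic factors $f(z)=z^m$; this is only an illustrative verification under these specific choices.

```python
import sympy as sp

# Check that Phi = b^k ( z^m exp(-a|z|^2/4) ) satisfies
# H^(0) Phi = (2k+1) a Phi for n = 1, for small k and m.
z, zb = sp.symbols('z zb')          # Wirtinger symbols for z, \bar z
a = sp.symbols('a', positive=True)

b  = lambda f: -2*sp.diff(f, z)  + sp.Rational(1, 2)*a*zb*f
bp = lambda f:  2*sp.diff(f, zb) + sp.Rational(1, 2)*a*z*f
H0 = lambda f: b(bp(f)) + a*f       # H^(0) = b b^+ + Lambda_0, Lambda_0 = a

for m in range(3):                   # analytic factor f(z) = z^m
    Phi = z**m * sp.exp(-a*z*zb/4)   # |z|^2 = z*zb in Wirtinger symbols
    for k in range(3):
        assert sp.simplify(H0(Phi) - (2*k + 1)*a*Phi) == 0
        Phi = b(Phi)                 # pass to the next eigenvalue
print("eigenvalue check passed")
```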
In the case when $E_{x_0}=\field{C}$, an orthonormal basis of the eigenspace of $\mathcal H^{(0)}$ associated with the lowest eigenvalue $\Lambda_0$ is formed by the functions
\begin{equation}\label{e:varphi-beta}
\varphi_\beta(Z)=\left(\frac{a ^\beta}{(2\pi)^n 2 ^{|\beta|} \beta!}
\prod_{i=1}^n a_i\right)^{1/2}z^\beta
\exp\Big (-\frac{1}{4} \sum_{j=1}^n a_j |z_j|^2\Big )\,,\quad \beta\in\field{Z}_+^n.
\end{equation}
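Orthonormality of the family \eqref{e:varphi-beta} can also be confirmed numerically for $n=1$; the normalization $a=1$, the grid size and the tolerance below are ad hoc choices for this check, and the truncation and Riemann-sum errors are far below the tolerance.

```python
import numpy as np
from math import factorial, pi

# Numerical orthonormality check of (e:varphi-beta) for n = 1, a = 1,
# via a Riemann sum over a truncated grid in R^2.
a = 1.0
L, h = 8.0, 0.02
x = np.arange(-L, L, h)
X, Y = np.meshgrid(x, x)
Z = X + 1j*Y                         # z = Z_1 + i Z_2

def phi(beta):
    # varphi_beta(Z) = (a^beta/((2 pi) 2^beta beta!) * a)^{1/2} z^beta e^{-a|z|^2/4}
    c = np.sqrt(a**beta / ((2*pi) * 2**beta * factorial(beta)) * a)
    return c * Z**beta * np.exp(-a*np.abs(Z)**2/4)

for b1 in range(4):
    for b2 in range(4):
        ip = np.sum(phi(b1) * np.conj(phi(b2))) * h * h
        target = 1.0 if b1 == b2 else 0.0
        assert abs(ip - target) < 1e-6, (b1, b2, ip)
print("varphi_beta are orthonormal (checked for beta <= 3)")
```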
An orthonormal basis of the eigenspace associated with the eigenvalue $\Lambda$ is given by (see, for instance, \cite{BPR04})
\begin{equation}\label{e:varphi-kbeta}
\varphi_{\mathbf k,\beta}=\frac{1}{(2^{|\mathbf k|}a^{\mathbf k}\mathbf k!)^{1/2}}b^{\mathbf k}\varphi_\beta\,,\quad \beta\in\field{Z}_+^n, \mathbf k\in\mathcal K_\Lambda.
\end{equation}
Thus, the spectral projection $\mathcal P_{\Lambda}$ in $L^2(\field{R}^{2n},E_{x_0})$ is given by
\[
\mathcal P_{\Lambda}=\sum_{\mathbf k\in \mathcal K_\Lambda} \mathcal P_{\Lambda_{\mathbf k}},
\]
where $\mathcal P_{\Lambda_{\mathbf k}}$ is the smoothing operator with the kernel
\begin{equation}\label{e:PLambda-kernel}
\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)=\sum_{\beta\in\field{Z}_+^n}\varphi_{\mathbf k,\beta}(Z) \overline{\varphi_{\mathbf k,\beta}(Z^\prime)}=\frac{1}{2^{|\mathbf k|}a^{\mathbf k}\mathbf k!}b^{\mathbf k}_z\bar{b}^{\mathbf k}_{z^\prime}\mathcal P(Z,Z^\prime).
\end{equation}
(see also \eqref{e:PLambda-k} below for an explicit formula).
We can also write the following formula for the operator $\mathcal P_{\Lambda_{\mathbf k}}$ itself:
\begin{equation}\label{e:PLambda}
\mathcal P_{\Lambda_{\mathbf k}}=\frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}b^{\mathbf k}\mathcal P(b^+)^{\mathbf k}.
\end{equation}
Observe that
\[
b^+_j \mathcal P_{\Lambda_{\mathbf k}}=\mathcal P_{\Lambda_{{\mathbf k}-e_j}}b^+_j , \quad b_j\mathcal P_{\Lambda_{\mathbf k}}= \mathcal P_{\Lambda_{{\mathbf k}+e_j}}b_j, \quad j=1,\ldots,n,
\]
where $(e_1,\ldots, e_n)$ is the standard basis in $\field{Z}^n$.
Indeed, using \eqref{e:PLambda} and \eqref{e:com1}, we get
\[
b^+_j \mathcal P_{\Lambda_{\mathbf k}}=\frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}[b^+_j,b^{\mathbf k}] \mathcal P (b^+)^{\mathbf k}=\frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}2a_jk_j b^{{\mathbf k}-e_j} \mathcal P (b^+)^{\mathbf k}= \mathcal P_{\Lambda_{{\mathbf k}-e_j}}b^+_j .
\]
The second identity follows by taking adjoints.
Next, we show that, for a linear function $F(z,\bar z)=\sum_{j=1}^nF_{z_j}z_j+F_{\bar z_j}\bar z_j$, we have
\begin{multline}\label{e:commFP}
[F,\mathcal P_{\Lambda_{\mathbf k}}]\\ = \sum_{j=1}^n \frac{1}{a_j} \left[ F_{z_j}b^+_j \left(\mathcal P_{\Lambda_{\mathbf k}} - \mathcal P_{\Lambda_{{\mathbf k}+e_j}}\right)+ F_{\bar z_j} b_j \left(\mathcal P_{\Lambda_{\mathbf k}}- \mathcal P_{\Lambda_{{\mathbf k}-e_j}} \right)\right].
\end{multline}
Observe that
\begin{equation}\label{e:commzP}
[z_j,\mathcal P_{\Lambda_{\mathbf k}}] = \frac{1}{a_j}\left(\mathcal P_{\Lambda_{{\mathbf k}-e_j}} - \mathcal P_{\Lambda_{\mathbf k}}\right) b^+_j= \frac{1}{a_j} b^+_j \left(\mathcal P_{\Lambda_{\mathbf k}} - \mathcal P_{\Lambda_{{\mathbf k}+e_j}}\right).
\end{equation}
Indeed, using \eqref{e:PLambda}, \eqref{com-bg} and \eqref{bP}, we get
\begin{align*}
[z_j,\mathcal P_{\Lambda_{\mathbf k}}]= & \frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}\left([z_j,b^{\mathbf k}] \mathcal P (b^+)^{\mathbf k}+b^{\mathbf k} [z_j,\mathcal P] (b^+)^{\mathbf k}\right)\\ = & \frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}\left(2k_jb^{{\mathbf k}-e_j} \mathcal P (b^+)^{\mathbf k}-\frac{1}{a_j}b^{\mathbf k} \mathcal P (b^+)^{{\mathbf k}+e_j}\right)\\ = & \frac{1}{a_j}\left(\mathcal P_{\Lambda_{{\mathbf k}-e_j}} - \mathcal P_{\Lambda_{\mathbf k}}\right) b^+_j.
\end{align*}
Taking adjoints, we infer that
\begin{equation}\label{e:commzP2}
[\bar z_j,\mathcal P_{\Lambda_{\mathbf k}}]= \frac{1}{a_j} b_j \left(\mathcal P_{\Lambda_{\mathbf k}}- \mathcal P_{\Lambda_{{\mathbf k}-e_j}} \right)= \frac{1}{a_j} \left(\mathcal P_{\Lambda_{{\mathbf k}+e_j}}- \mathcal P_{\Lambda_{\mathbf k}} \right)b_j.
\end{equation}
From \eqref{e:commzP} and \eqref{e:commzP2}, we get \eqref{e:commFP}.
Using \eqref{e:commFP}, we compute
\begin{multline}\label{e:PFP}
\mathcal P_{\Lambda_{{\mathbf k}_1}} F \mathcal P_{\Lambda_{{\mathbf k}_2}}=\delta_{{\mathbf k}_1,{\mathbf k}_2} F \mathcal P_{\Lambda_{{\mathbf k}_2}}+[\mathcal P_{\Lambda_{{\mathbf k}_1}}, F] \mathcal P_{\Lambda_{{\mathbf k}_2}}
= \delta_{{\mathbf k}_1,{\mathbf k}_2} F \mathcal P_{\Lambda_{{\mathbf k}_2}}\\ +\sum_{\ell=1}^n \frac{1}{a_\ell}\left[ \left(\delta_{{\mathbf k}_1-e_\ell,{\mathbf k}_2}- \delta_{{\mathbf k}_1,{\mathbf k}_2} \right)F_{\bar z_\ell} b_\ell -\left(\delta_{{\mathbf k}_1,{\mathbf k}_2} - \delta_{{\mathbf k}_1+e_\ell,{\mathbf k}_2}\right) F_{z_\ell} b^+_\ell \right]\mathcal P_{\Lambda_{{\mathbf k}_2}} .
\end{multline}
Now we are ready to complete the proof of Proposition~\ref{p:linear-comm}. For linear functions $F$ and $G$, using \eqref{e:PFP}, we get
\begin{multline*}
\mathcal P_{\Lambda_{{\mathbf k}_1}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{{\mathbf k}_2}} = \delta_{{\mathbf k}_1,{\mathbf k}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{{\mathbf k}_2}} \\ +\sum_{\ell=1}^n\frac{1}{a_\ell}\left[ \left(\delta_{{\mathbf k}_1-e_\ell,{\mathbf k}}- \delta_{{\mathbf k}_1,{\mathbf k}} \right)F_{\bar z_\ell} b_\ell -\left(\delta_{{\mathbf k}_1,{\mathbf k}} - \delta_{{\mathbf k}_1+e_\ell,{\mathbf k}}\right) F_{z_\ell} b^+_\ell \right]\mathcal P_{\Lambda_{\mathbf k}} G\mathcal P_{\Lambda_{{\mathbf k}_2}}.
\end{multline*}
Next, we transpose $\mathcal P_{\Lambda_{\mathbf k}}$ and $G$ and apply \eqref{e:commFP}:
\begin{multline*}
\mathcal P_{\Lambda_{{\mathbf k}_1}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{{\mathbf k}_2}} = \delta_{{\mathbf k}_1,{\mathbf k}}\delta_{{\mathbf k},{\mathbf k}_2} FG \mathcal P_{\Lambda_{{\mathbf k}_2}} \\
\begin{aligned}
& +\sum_{\ell=1}^n \frac{1}{a_\ell} \delta_{{\mathbf k},{\mathbf k}_2}\left[ \left(\delta_{{\mathbf k}_1-e_\ell,{\mathbf k}}- \delta_{{\mathbf k}_1,{\mathbf k}} \right) F_{\bar z_\ell}b_\ell -\left(\delta_{{\mathbf k}_1,{\mathbf k}} - \delta_{{\mathbf k}_1+e_\ell,{\mathbf k}}\right) F_{z_\ell} b^+_\ell \right] G \mathcal P_{\Lambda_{{\mathbf k}_2}} \\ & + \sum_{j=1}^n \frac{1}{a_j} \delta_{{\mathbf k}_1,{\mathbf k}} F \left[\left(\delta_{{\mathbf k}+e_j,{\mathbf k}_2}-\delta_{{\mathbf k},{\mathbf k}_2}\right) G_{z_j}b^+_j + \left(\delta_{{\mathbf k}-e_j,{\mathbf k}_2}-\delta_{{\mathbf k},{\mathbf k}_2}\right) G_{\bar z_j} b_j \right] \mathcal P_{\Lambda_{{\mathbf k}_2}} \\ & +\sum_{j,\ell=1}^n \frac{1}{a_ja_\ell}\left[ \left(\delta_{{\mathbf k}_1-e_\ell,{\mathbf k}}- \delta_{{\mathbf k}_1,{\mathbf k}} \right)F_{\bar z_\ell}b_\ell -\left(\delta_{{\mathbf k}_1,{\mathbf k}} - \delta_{{\mathbf k}_1+e_\ell,{\mathbf k}}\right) F_{z_\ell} b^+_\ell\right]\times \\ & \times \left[\left(\delta_{{\mathbf k}+e_j,{\mathbf k}_2}-\delta_{{\mathbf k},{\mathbf k}_2}\right) G_{z_j}b^+_j + \left(\delta_{{\mathbf k}-e_j,{\mathbf k}_2}-\delta_{{\mathbf k},{\mathbf k}_2}\right) G_{\bar z_j} b_j \right] \mathcal P_{\Lambda_{{\mathbf k}_2}}.
\end{aligned}
\end{multline*}
In the case when ${\mathbf k}_1,{\mathbf k},{\mathbf k}_2\in \mathcal K_\Lambda$, we necessarily have ${\mathbf k}_1\pm e_\ell\not\in \mathcal K_\Lambda$, ${\mathbf k}\pm e_j\not\in \mathcal K_\Lambda$. Thus, from the last formula, we conclude that $\mathcal P_{\Lambda_{{\mathbf k}_1}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{{\mathbf k}_2}} =0$, unless ${\mathbf k}_1={\mathbf k}_2={\mathbf k}$, and in the latter case, we have
\begin{multline*}
\mathcal P_{\Lambda_{\mathbf k}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{\mathbf k}} = FG \mathcal P_{\Lambda_{\mathbf k}} \\
+\sum_{\ell=1}^n \frac{1}{a_\ell}\left[-F_{\bar z_\ell}b_\ell -F_{z_\ell} b^+_\ell \right] G \mathcal P_{\Lambda_{\mathbf k}} + \sum_{j=1}^n \frac{1}{a_j} F \left[-G_{z_j}b^+_j-G_{\bar z_j} b_j \right] \mathcal P_{\Lambda_{\mathbf k}} \\ +\sum_{j,\ell=1}^n \frac{1}{a_ja_\ell}\left[ - F_{\bar z_\ell}b_\ell - F_{z_\ell} b^+_\ell\right] \left[- G_{z_j}b^+_j - G_{\bar z_j} b_j \right] \mathcal P_{\Lambda_{\mathbf k}}.
\end{multline*}
We see that
\begin{multline*}
\mathcal P_{\Lambda_{\mathbf k}} F \mathcal P_{\Lambda_{\mathbf k}}G\mathcal P_{\Lambda_{\mathbf k}}-\mathcal P_{\Lambda_{\mathbf k}} G \mathcal P_{\Lambda_{\mathbf k}}F\mathcal P_{\Lambda_{\mathbf k}} =\sum_{j=1}^n \frac{1}{a^2_j}(F_{z_j} G_{\bar z_j}-F_{\bar z_j}G_{z_j})(b^+_j b_j - b_j b^+_j) \mathcal P_{\Lambda_{\mathbf k}}\\ =\sum_{j=1}^n \frac{2}{a_j}(F_{z_j} G_{\bar z_j}-F_{\bar z_j}G_{z_j}) \mathcal P_{\Lambda_{\mathbf k}}=\{F,G\}_a\mathcal P_{\Lambda_{\mathbf k}}.
\end{multline*}
This completes the proof of Proposition \ref{p:linear-comm}.
\section{Characterization of Toeplitz operators}\label{s:charact}
In the case when $\mathcal K_\Lambda$ consists of a single element, we prove that the set of Toeplitz operators coincides with the algebra $\mathfrak A$, which gives a characterization of Toeplitz operators in terms of their Schwartz kernels. This type of characterization was introduced in \cite[Theorem 4.9]{ma-ma08}.
Using this result and Theorem~\ref{t:comm}, we easily complete the proof of Theorem \ref{t:algebra}.
\begin{thm}\label{t:characterization}
Assume that $\mathcal K_\Lambda$ consists of a single element.
A sequence of bounded linear operators $\{T_p: L^2(X,L^p\otimes E)\to L^2(X,L^p\otimes E)\}$ is a Toeplitz operator in the sense of Definition~\ref{d:Toeplitz0} if and only if it belongs to $\mathfrak A$.
\end{thm}
The fact that any Toeplitz operator in the sense of Definition~\ref{d:Toeplitz0} belongs to $\mathfrak A$ is proved in Section~\ref{s:Toeplitz-in-A} and holds without any assumption on $\Lambda$.
Thus, we assume that $\{T_p\}$ belongs to $\mathfrak A$ and prove that it is a Toeplitz operator in the sense of Definition~\ref{d:Toeplitz0}. The proof is divided into several steps; at the beginning, we do not assume that $\mathcal K_\Lambda$ consists of a single element.
The following is an analog of \cite[Lemma 4.12]{ma-ma08a}. Recall that $K_{0,x_0}(Z,Z^\prime)$ denotes the leading coefficient in the full off-diagonal expansion for the kernel of $T_p$.
\begin{prop}\label{p:K_0zz}
The coefficient $K_{0,x_0}(Z,Z^\prime)$ has the form
\[
K_{0,x_0}(Z,Z^\prime)=\sum_{{\mathbf k},{\mathbf k}^\prime\in{\mathcal{K}}_\Lambda}b^{\mathbf k}_z{\bar b}^{{\mathbf k}^\prime}_{z^\prime}[Q_{{\mathbf k}{\mathbf k}^\prime,x_0}\mathcal P]
\]
with some polynomials $Q_{{\mathbf k}{\mathbf k}^\prime,x_0}(z,\bar z^\prime)$
for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$.
\end{prop}
\begin{proof}
By Definition~\ref{d:Toeplitz} (i) and \eqref{e:comp-K}, we get
\begin{equation}\label{e:K=RKP}
K_{0,x_0}=\mathcal P_{\Lambda}\circ K_{0,x_0} \circ \mathcal P_{\Lambda}.
\end{equation}
By \eqref{e:K=RKP} and \eqref{e:PLambda}, it follows that
\begin{equation}\label{e:K}
K_{0,x_0}=\mathcal P_{\Lambda}\circ K_{0,x_0} =\sum_{{\mathbf k}\in{\mathcal{K}}_\Lambda}\frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}b^{\mathbf k}\circ \mathcal P\circ (b^+)^{\mathbf k}\circ K_{0,x_0}.
\end{equation}
By \cite[(2.12)]{ma-ma08a}, there exists $F_{\mathbf k}\in \field{C}[z, Z^\prime]$ such that
\[
(\mathcal P\circ (b^+)^{\mathbf k}\circ K_{0,x_0})(Z, Z^\prime)=F_{\mathbf k}\cdot {\mathcal{P}}(Z, Z^\prime).
\]
Plugging this in \eqref{e:K}, we get
\begin{equation}\label{e:K2}
K_{0,x_0}(Z, Z^\prime)=\sum_{{\mathbf k}\in{\mathcal{K}}_\Lambda}\frac{1}{2^{|{\mathbf k}|}a^{\mathbf k}{\mathbf k}!}b^{\mathbf k}_zF_{\mathbf k}(z, Z^\prime){\mathcal{P}}(Z, Z^\prime).
\end{equation}
Similarly, using \eqref{e:K=RKP}, \eqref{e:PLambda} and \eqref{e:K2}, we can write
\begin{multline}\label{e:K3}
K_{0,x_0}=K_{0,x_0}\circ \mathcal P_{\Lambda} =\sum_{{\mathbf k}^\prime\in{\mathcal{K}}_\Lambda}\frac{1}{2^{|{\mathbf k}^\prime|}a^{{\mathbf k}^\prime}{\mathbf k}^\prime!} K_{0,x_0}\circ b^{{\mathbf k}^\prime}\circ \mathcal P\circ (b^+)^{{\mathbf k}^\prime}\\
=\sum_{{\mathbf k},{\mathbf k}^\prime\in{\mathcal{K}}_\Lambda}\frac{1}{2^{|{\mathbf k}|+|{\mathbf k}^\prime|}a^{{\mathbf k}+{\mathbf k}^\prime}{\mathbf k}!{\mathbf k}^\prime!} b^{\mathbf k}_zF_{\mathbf k}{\mathcal{P}}\circ b^{{\mathbf k}^\prime}\circ \mathcal P\circ (b^+)^{{\mathbf k}^\prime}.
\end{multline}
Now we proceed as follows:
\begin{multline}\label{e:K4}
(F_{\mathbf k}{\mathcal{P}}\circ b^{{\mathbf k}^\prime})(Z, Z^\prime)=(\bar{b}^+_z)^{{\mathbf k}^\prime} (F_{\mathbf k}{\mathcal{P}})(Z, Z^\prime)\\ =\sum_{{\mathbf l}\leq {\mathbf k}^\prime}{{\mathbf k}^\prime \choose {\mathbf l}}2^{|{\mathbf l}|}\frac{\partial^{|{\mathbf l}|}}{\partial z^{\mathbf l}}F_{\mathbf k}(z, Z^\prime)(\bar{b}^+_z)^{{\mathbf k}^\prime-{\mathbf l}}{\mathcal{P}}(Z, Z^\prime)\\ =\sum_{{\mathbf l}\leq {\mathbf k}^\prime}{{\mathbf k}^\prime \choose {\mathbf l}}2^{|{\mathbf l}|}\frac{\partial^{|{\mathbf l}|}}{\partial z^{\mathbf l}}F_{\mathbf k}(z, Z^\prime)(a\bar{z}^\prime)^{{\mathbf k}^\prime-{\mathbf l}}{\mathcal{P}}(Z, Z^\prime)=F_{{\mathbf k}{\mathbf k}^\prime}{\mathcal{P}}(Z, Z^\prime)
\end{multline}
with some $F_{{\mathbf k}{\mathbf k}^\prime}\in \field{C}[z, Z^\prime]$.
By \cite[Proof of Lemma 2.2]{ma-ma08a}, there exists $Q_{{\mathbf k}{\mathbf k}^\prime}\in \field{C}[z, \bar z^\prime]$ such that
\begin{equation}\label{e:K5}
(F_{{\mathbf k}{\mathbf k}^\prime}{\mathcal{P}}\circ \mathcal P)(Z, Z^\prime)=2^{|{\mathbf k}|+|{\mathbf k}^\prime|}a^{{\mathbf k}+{\mathbf k}^\prime}{\mathbf k}!{\mathbf k}^\prime! Q_{{\mathbf k}{\mathbf k}^\prime}\mathcal P(Z, Z^\prime).
\end{equation}
Combining \eqref{e:K3}, \eqref{e:K4} and \eqref{e:K5}, we complete the proof.
\end{proof}
The following is an analog of an upper estimate for the Wick symbol.
\begin{prop}\label{p:K0-Wick}
We have
\[
|K_{0,x_0}(Z,Z)|\leq \limsup_{p\to \infty} \|T_p\|\left(1+\mathcal O(p^{-\frac{1}{2}})\right)
\]
for any $x_0\in X$ and $Z\in T_{x_0}X$.
\end{prop}
\begin{proof}
By Definition \ref{d:Toeplitz0} (i), we get
\begin{multline}\label{e:TpLTpL}
T_p(x,x^\prime)=(P_{p,\Lambda}T_pP_{p,\Lambda})(x,x^\prime)\\ =\int P_{p,\Lambda}(x,y) T_p(y,y^\prime) P_{p,\Lambda}(y^\prime,x^\prime) dv_X(y)\,dv_X(y^\prime).
\end{multline}
For any $x,y\in X$, $P_{p,\Lambda}(y,x)$ is a linear map from $(L^p\otimes E)_x$ to $(L^p\otimes E)_y$.
For $x\in X$ and $v\in (L^p\otimes E)_x$, introduce $S^p_{x,v}\in C^\infty(X, L^p\otimes E)$ by
\[
S^p_{x,v}(y)=P_{p,\Lambda}(y,x)v, \quad y\in X.
\]
Observe that
\begin{align*}
\langle S^p_{x,v}, S^p_{x^\prime,v^\prime}\rangle =& \int_X \langle P_{p,\Lambda}(y,x)v, P_{p,\Lambda}(y,x^\prime)v^\prime\rangle dv_X(y) \\
=& \int_X \langle v, P_{p,\Lambda}(x,y) P_{p,\Lambda}(y,x^\prime)v^\prime\rangle dv_X(y) \\ =&\langle v, P_{p,\Lambda}(x,x^\prime)v^\prime\rangle.
\end{align*}
In particular, we have
\[
\langle S^p_{x,v}, S^p_{x,v}\rangle =\langle v, P_{p,\Lambda}(x,x)v\rangle= p^n|v|^2\left(\mathcal P_{\Lambda}(0,0)+\mathcal O(p^{-\frac{1}{2}})\right),
\]
and
\[
\|S^p_{x,v}\|=p^{n/2}|v| \left(\left(\mathcal P_{\Lambda}(0,0)\right)^{1/2}+\mathcal O(p^{-\frac{1}{2}})\right).
\]
By \eqref{e:TpLTpL}, we infer that for $x, x^\prime\in X$, $v\in (L^p\otimes E)_x$ and $v^\prime\in (L^p\otimes E)_{x^\prime}$,
\[
\langle v, T_p(x,x^\prime)v^\prime\rangle =\int_X \langle S^p_{x,v}(y), T_p(y,y^\prime)S^p_{x^\prime,v^\prime}(y^\prime)\rangle dv_X(y)\,dv_X(y^\prime)=\langle S^p_{x,v}, T_p S^p_{x^\prime,v^\prime}\rangle.
\]
It follows that
\begin{equation}
p^{-n}\left|T_p(x,x^\prime)\right| \leq \|T_p\|\left(\mathcal P_{\Lambda}(0,0)+\mathcal O(p^{-\frac{1}{2}})\right).
\end{equation}
Fix $x_0\in X$ and write $x=\exp_{x_0}^X(Z)$ and $x^\prime=\exp_{x_0}^X(Z^\prime)$. Then, by (iii), we get
\[
p^{-n}T_p(x,x^\prime)=p^{-n}T_{p,x_0}(Z,Z^\prime)\cong K_{0,x_0}(\sqrt{p}Z,\sqrt{p}Z^\prime)+\mathcal O(p^{-\frac{1}{2}})
\]
for $|Z|, |Z^\prime|<\varepsilon$. It follows that, for $|Z|, |Z^\prime|<\varepsilon\sqrt{p}$,
\begin{align*}
K_{0,x_0}(Z,Z^\prime) = & p^{-n}T_{p,x_0}\left(\tfrac{1}{\sqrt{p}}Z,\tfrac{1}{\sqrt{p}}Z^\prime\right)+\mathcal O(p^{-\frac{1}{2}})\\
=& p^{-n}T_{p}\left(\exp_{x_0}^X\left(\tfrac{1}{\sqrt{p}}Z\right),\exp_{x_0}^X\left(\tfrac{1}{\sqrt{p}}Z^\prime\right)\right)+\mathcal O(p^{-\frac{1}{2}})\\
\leq & \|T_p\|\left(1+\mathcal O(p^{-\frac{1}{2}})\right),
\end{align*}
which completes the proof.
\end{proof}
From now on, we will assume that $\mathcal K_\Lambda$ consists of a single element.
The following is an analog of \cite[Proposition 4.11]{ma-ma08a}. We give a different proof, which is shorter than the one in \cite{ma-ma08a} and is based on Proposition \ref{p:K0-Wick}.
\begin{prop}\label{p:Q00}
Assume that $\mathcal K_\Lambda$ consists of a single element ${\mathbf k}\in \field{Z}^n_+$.
Then
\[
K_{0,x_0}(Z,Z^\prime)=Q_{x_0}\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)
\]
for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$ with some $Q_{x_0}\in \operatorname{End}(E_{x_0})$.
\end{prop}
\begin{proof}
By Proposition \ref{p:K_0zz}, using the Leibniz rule, we get
\begin{multline}\label{e:K0-Qkk}
K_{0,x_0}(Z,Z^\prime)\\ =\sum_{{\mathbf l}\leq {\mathbf k}}\sum_{{\mathbf l}^\prime\leq {\mathbf k}}{{\mathbf k} \choose {\mathbf l}}{{\mathbf k} \choose {\mathbf l}^\prime} \frac{\partial^{2{\mathbf k}-{\mathbf l}-{\mathbf l}^\prime}}{\partial z^{{\mathbf k}-{\mathbf l}} \partial z^{\prime {\mathbf k}-{\mathbf l}^\prime}} Q_{{\mathbf k}{\mathbf k},x_0}(z,\bar z^\prime)b^{\mathbf l}_z{\bar b}^{{\mathbf l}^\prime}_{z^\prime}\mathcal P(Z,Z^\prime).
\end{multline}
To compute $b^{\mathbf l}_z{\bar b}^{{\mathbf l}^\prime}_{z^\prime}\mathcal P$, we write $\mathcal P$ as the product
\[
\mathcal P(Z,Z^\prime)=\prod_{j=1}^n \mathcal P_j(Z_j,Z_j^\prime)
\]
and treat each factor separately. Then we get for $l_j\geq l^\prime_j$,
\begin{multline*}
b^{l_j}_{j,z_j}{\bar b}^{l^\prime_j}_{j,z^\prime_j}\mathcal P_j(Z_j,Z_j^\prime)\\
=2^{l^\prime_j}a_j^{l_j}l^\prime_j! (\bar z_j-\bar z^\prime_j)^{l_j-l^\prime_j} L^{(l_j-l^\prime_j)}_{l^\prime_j}\left(\frac{a_j|z_j-z^\prime_j|^2}{2}\right)\mathcal P_j(Z_j,Z_j^\prime),
\end{multline*}
where $L^{(m)}_k$, $k,m\in\field{Z}_+$, is the generalized Laguerre polynomial:
\[
L^{(m)}_{k}(x)=\frac{x^{-m}e^x}{k!}\frac{d^k}{dx^k}(e^{-x}x^{m+k})=\sum_{j=0}^{k}\binom{k+m}{k-j}\frac{(-x)^j}{j!}, \quad x\geq 0.
\]
The formula for $l_j\leq l^\prime_j$ is obtained by considering the adjoints.
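As a side check (under no assumption of the paper, purely symbolic), the Rodrigues-type and sum representations of $L^{(m)}_k$ above agree with each other and with SymPy's built-in generalized Laguerre polynomial for small $k$ and $m$:

```python
import sympy as sp

x = sp.symbols('x')

def laguerre_rodrigues(k, m):
    # L^(m)_k(x) = x^{-m} e^x / k! * d^k/dx^k ( e^{-x} x^{m+k} )
    return sp.simplify(x**(-m) * sp.exp(x) / sp.factorial(k)
                       * sp.diff(sp.exp(-x) * x**(m + k), x, k))

def laguerre_sum(k, m):
    # L^(m)_k(x) = sum_{j=0}^{k} binom(k+m, k-j) (-x)^j / j!
    return sum(sp.binomial(k + m, k - j) * (-x)**j / sp.factorial(j)
               for j in range(k + 1))

for k in range(4):
    for m in range(4):
        r = laguerre_rodrigues(k, m)
        assert sp.simplify(r - laguerre_sum(k, m)) == 0
        assert sp.simplify(r - sp.assoc_laguerre(k, m, x)) == 0
print("Rodrigues and sum formulas agree (k, m <= 3)")
```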
In particular, for $l_j=l_j^\prime$, we have
\begin{equation}\label{e:llP0}
b^{l_j}_{j,z_j}{\bar b}^{l_j}_{j,z^\prime_j}\mathcal P_j(Z_j,Z_j^\prime) =2^{l_j}a_j^{l_j}l_j!L_{l_j}\left(\frac{a_j|z_j-z^\prime_j|^2}{2}\right) \mathcal P_j(Z_j,Z_j^\prime),
\end{equation}
where $L_k=L^{(0)}_k$, $k\in\field{Z}_+$, is the Laguerre polynomial,
and
\begin{equation}\label{e:llP}
b^{l_j}_{j,z_j}{\bar b}^{l_j}_{j,z^\prime_j}\mathcal P_j(Z_j,Z_j)=2^{l_j}a_j^{l_j}l_j! \mathcal P_j(0,0).
\end{equation}
For $l_j\neq l^\prime_j$, we get
\begin{equation}\label{e:llP1}
b^{l_j}_{j,z_j}{\bar b}^{l^\prime_j}_{j,z^\prime_j}\mathcal P_j(Z_j,Z_j)=0.
\end{equation}
By \eqref{e:llP0} and \eqref{e:PLambda-kernel}, we derive an explicit formula for $\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)$:
\begin{multline}\label{e:PLambda-k}
\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)=\frac{1}{(2\pi)^n}\prod_{j=1}^na_j L_{k_j}\left(\frac{a_j|z_j-z^\prime_j|^2}{2}\right)\times \\ \times \exp\left(-\frac 14\sum_{k=1}^na_k(|z_k|^2+|z_k^\prime|^2- 2z_k\bar z_k^\prime) \right).
\end{multline}
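The agreement of \eqref{e:PLambda-kernel} with the explicit formula \eqref{e:PLambda-k} can be checked symbolically for $n=1$ and small $k$; this sketch again treats $z,\bar z, z^\prime,\bar z^\prime$ as independent Wirtinger symbols and is only a numerical-symbolic confirmation, not a proof.

```python
import sympy as sp

# Check (e:PLambda-kernel) against (e:PLambda-k) for n = 1:
# b_z^k \bar b_{z'}^k P / (2^k a^k k!) = L_k(a|z-z'|^2/2) P.
z, zb, zp, zpb = sp.symbols('z zb zp zpb')   # Wirtinger symbols for Z, Z'
a = sp.symbols('a', positive=True)

b_z   = lambda f: -2*sp.diff(f, z)   + sp.Rational(1, 2)*a*zb*f   # b in Z
bb_zp = lambda f: -2*sp.diff(f, zpb) + sp.Rational(1, 2)*a*zp*f   # \bar b in Z'

P = (a/(2*sp.pi))*sp.exp(-a/4*(z*zb + zp*zpb - 2*z*zpb))

for k in range(3):
    lhs = P
    for _ in range(k):                # b_z and \bar b_{z'} commute
        lhs = bb_zp(b_z(lhs))
    lhs = lhs / (2**k * a**k * sp.factorial(k))
    # |z - z'|^2 = (z - zp)(zb - zpb) in Wirtinger symbols
    rhs = sp.assoc_laguerre(k, 0, a*(z - zp)*(zb - zpb)/2) * P
    assert sp.simplify(lhs - rhs) == 0
print("explicit kernel formula verified for k = 0, 1, 2")
```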
Taking into account \eqref{e:llP} and \eqref{e:llP1}, the equality \eqref{e:K0-Qkk} for $Z=Z^\prime$ takes the form
\begin{equation*}
K_{0,x_0}(Z,Z)=\sum_{{\mathbf l}\leq {\mathbf k}}2^{|{\mathbf l}|}a^{\mathbf l}{\mathbf l}! {{\mathbf k} \choose {\mathbf l}}^2 \frac{\partial^{2{\mathbf k}-2{\mathbf l}}}{\partial z^{{\mathbf k}-{\mathbf l}} \partial z^{\prime {\mathbf k}-{\mathbf l}}} Q_{{\mathbf k}{\mathbf k},x_0}(z,\bar z)\mathcal P(0,0).
\end{equation*}
By Proposition \ref{p:K0-Wick}, we get
\begin{equation*}
\sum_{{\mathbf l}\leq {\mathbf k}}2^{|{\mathbf l}|}a^{\mathbf l}{\mathbf l}! {{\mathbf k} \choose {\mathbf l}}^2 \frac{\partial^{2{\mathbf k}-2{\mathbf l}}}{\partial z^{{\mathbf k}-{\mathbf l}} \partial z^{\prime {\mathbf k}-{\mathbf l}}} Q_{{\mathbf k}{\mathbf k},x_0}(z,\bar z)=const.
\end{equation*}
Comparing the top degree coefficients on both sides of the last identity, one can easily see that $Q_{{\mathbf k}{\mathbf k},x_0}(z,\bar z)=Q_{x_0}=const$. Since $Q_{{\mathbf k}{\mathbf k},x_0}$ is a polynomial in $z$ and $\bar z^\prime$, this implies $Q_{{\mathbf k}{\mathbf k},x_0}(z,\bar z^\prime)=Q_{x_0}$.
\end{proof}
From now on, we will closely follow the arguments of the proof of Theorem 4.9 in \cite{ma-ma08a}. So we will be brief.
Define a section $g_0\in C^\infty(X,\operatorname{End}(E))$, setting
\[
g_0(x_0)= Q_{x_0},
\]
where $Q_{x_0}\in \operatorname{End}(E_{x_0})$ is given by Proposition \ref{p:Q00}.
The following is an analog of \cite[Proposition 4.17]{ma-ma08a}. Its proof is based on Proposition~\ref{p:Q00} and therefore a little bit shorter than the proof of \cite[Proposition 4.17]{ma-ma08a}.
\begin{prop}\label{p:Tp-Tg0p}
We have $p^{-n}(T_p-T_{g_0,p})(Z, Z^\prime)\cong \mathcal O(p^{-1})$.
\end{prop}
\begin{proof}
Consider the sequence of operators
\[
R_p=p^{1/2}(T_p-T_{g_0,p}), \quad p\in \field{N}.
\]
It is easy to see that it is in $\mathfrak A$. Moreover, computing the full off-diagonal expansions for the kernels of $T_p$ and $T_{g_0,p}$, we get
\[
p^{-n}R_{p,x_0}(Z, Z^\prime)\cong (K_{1,x_0}-K_{1,x_0}(g_0))(\sqrt{p}Z,\sqrt{p}Z^\prime)+\mathcal O(p^{-1/2}).
\]
We apply Proposition~\ref{p:Q00} to $\{R_p\}$ and infer that
\[
(K_{1,x_0}-K_{1,x_0}(g_0))(Z,Z^\prime)=S_{x_0}\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)
\]
for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$ with some $S_{x_0}\in \operatorname{End}(E_{x_0})$.
Since $K_{1,x_0}-K_{1,x_0}(g_0)=(\mathcal Q_{1,x_0}-\mathcal Q_{1,x_0}(g_0))\mathcal P$, where $\mathcal Q_{1,x_0}$ and $\mathcal Q_{1,x_0}(g_0)$ are odd, we conclude that
\[
(K_{1,x_0}-K_{1,x_0}(g_0))(Z,Z^\prime)=0
\]
for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$, which completes the proof.
\end{proof}
By Proposition~\ref{p:Tp-Tg0p}, we have $T_p=P_{p,\Lambda}g_0P_{p,\Lambda}+\mathcal O(p^{-1})$. Consider the operator $p(T_p-P_{p,\Lambda}g_0P_{p,\Lambda})$. It is easy to see that it belongs to $\mathfrak A$.
By Proposition \ref{p:Q00}, the leading coefficient $K^\prime_{0,x_0}(Z,Z^\prime)$ in the full off-diagonal expansion for the kernel of $p(T_p-P_{p,\Lambda}g_0P_{p,\Lambda})$ has the form
\[
K^\prime_{0,x_0}(Z,Z^\prime)=Q^\prime_{x_0}\mathcal P_{\Lambda_{\mathbf k}}(Z,Z^\prime)
\]
for any $x_0\in X$ and $Z,Z^\prime\in T_{x_0}X$ with some $Q^\prime_{x_0}\in \operatorname{End}(E_{x_0})$. Setting
\[
g_1(x_0)= Q^\prime_{x_0},
\]
we get a section $g_1\in C^\infty(X,\operatorname{End}(E))$, and, as in the proof of Proposition \ref{p:Tp-Tg0p}, one shows that $p(T_p-P_{p,\Lambda}g_0P_{p,\Lambda})=P_{p,\Lambda}g_1P_{p,\Lambda}+\mathcal O(p^{-1})$, which implies $T_p=P_{p,\Lambda}(g_0+p^{-1}g_1)P_{p,\Lambda}+\mathcal O(p^{-2})$. So we can proceed by induction to complete the proof of Theorem \ref{t:characterization}.
Using Theorems~\ref{t:characterization} and \ref{t:comm}, one can easily complete the proof of Theorem \ref{t:algebra}.
{
"timestamp": "2020-12-29T02:28:02",
"yymm": "2012",
"arxiv_id": "2012.14198",
"language": "en",
"url": "https://arxiv.org/abs/2012.14198"
}
\section{Introduction}
While the central conjectures in complexity theory such as $\ensuremath{{\sf P}}\xspace\ne\ensuremath{{\sf NP}}\xspace$ have the form of impossibility results, we hope that a better understanding of the impossibility phenomena will also shed light on the question of constructing new useful algorithms. A successful formalization of such hopes can be found in cryptography, where the impossibility results in the form of average-case lower bounds are turned into cryptographic primitives. In the present paper we are interested in turning complexity lower bounds into efficient learning algorithms.
\medskip
Results of this form can be traced back to cryptography as well. The
`{\em pseudorandomness from unpredictability}' paradigm was used by Blum, Furst, Kearns and Lipton~\cite{BFKL} to show that efficient distinguishers breaking pseudorandom generators
imply efficient learning of p-size circuits on average. The distinguishers from \cite{BFKL} can be interpreted as constructive circuit lower bounds distinguishing partial truth-tables of easy Boolean functions from partial truth-tables of hard functions, cf. Section \ref{s:leargen}. The existing methods for proving circuit lower bounds have also been applied in constructions of new learning algorithms for restricted circuit classes, e.g. Linial, Mansour and Nisan \cite{LMN} used \ensuremath{{\sf AC}^0}\xspace lower bounds to get learning algorithms for \ensuremath{{\sf AC}^0}\xspace. More recently, in a landmark work, Carmosino, Impagliazzo, Kabanets and Kolokolova \cite{CIKK} gave a generic construction of learning algorithms from natural proofs of circuit lower bounds. Oliveira and Santhanam \cite{OS} extended their result to a dichotomy between the non-existence of non-uniform pseudorandom function families and the existence of efficient learning of small circuits. These results also led Oliveira and Santhanam \cite{OS} to the discovery of a surprising learning speedup. For example, learning p-size circuits over the uniform distribution with membership queries by circuits of weakly subexponential size $2^n/n^{\omega(1)}$ implies that for each constant $k$ and $\epsilon>0$, circuits of size $n^k$ can be learned over the uniform distribution with membership queries by circuits of strongly subexponential size $2^{n^{\epsilon}}$.
\subsection{Our contribution}
In the present paper we revisit these connections. We start by considering a simple {\em instance-specific} model of learning in which proving a single circuit lower bound implies a reliable prediction of the value of a target function on a single input. The model underlies the construction of learning algorithms from \cite{BFKL, CIKK} and differs from the standard PAC learning model mainly in that it does not ask learners to construct a circuit which computes the target function on a big fraction of inputs, cf. Section \ref{s:i-s}.
\medskip
\noindent {\bf Learning from witnessing lower bounds.} Our main result is a construction of efficient PAC learning of p-size circuits from a constructive circuit lower bound for an arbitrary Boolean function $H$. More precisely, we obtain subexponential-size circuits learning p-size circuits over the uniform distribution with membership queries. The assumption of a constructive circuit lower bound we need is defined as the existence of $2^{O(n)}$-size `witnessing' circuits $W$ which, given oracle access to a p-size circuit $D$ with $n$ inputs, find a not-yet-queried input on which $D$ fails to compute $H$. The circuits $W$ are allowed to fail on a $1/poly(n)$ fraction of circuits $D$.
Moreover, even if the circuits $W$ succeed on a circuit $D$, they are allowed to output an incorrect answer $\log n$ times (receiving a correction in each round) before generating the right answer, cf. Theorem \ref{t:main}. The implication can also be interpreted as a construction of PAC learning algorithms from frequent interactive instance-specific\footnote{We use the adjective `instance-specific' only informally in this paper. The instance-specific model discussed earlier actually differs slightly from the concept in Theorem \ref{t:main}.} learning: if we are given an algorithm which is able to predict the value of a large fraction of p-size circuits (after a small number of queries and $\le \log n$ mistakes) even on a single input, this already implies learnability of p-size circuits on almost all inputs. The opposite implication, producing efficient witnessing of lower bounds from learning algorithms, holds as well, which yields a new characterisation of PAC learning of small circuits, cf. Lemma~\ref{l:mainconverse}.
\medskip
\noindent {\bf {\em Relation to proof complexity, natural proofs and witnessing theorems.}}
The notion of interactive witnessing of circuit lower bounds from Theorem \ref{t:main} is motivated by witnessing theorems from bounded arithmetic. One of the most prominent theories of bounded arithmetic is Cook's theory \ensuremath{{\sf PV_1}}\xspace, which formalizes p-time reasoning. Theories of bounded arithmetic satisfy many so called witnessing theorems, which allow us to show, for example, that if we can prove a p-size circuit lower bound for a function $H\in\ensuremath{{\sf NP}}\xspace$ in \ensuremath{{\sf PV_1}}\xspace then there exists a witnessing analogous to the one from Theorem~\ref{t:main} except that the witnessing circuits $W$ have white-box access to $D$ (i.e. access to a full description of $D$), see Section \ref{s:witnessing} for a more detailed comparison. The witnessing from Theorem \ref{t:main} is also closely related to algorithms finding hard instances of \ensuremath{{\sf NP}}\xspace problems by Gutfreund, Shaltiel, Ta-Shma \cite{GST} and Atserias \cite{Abb}. The main difference is that the algorithms from~\cite{GST} have white-box access to the algorithm whose error they search for. While Atserias \cite{Abb} made \cite{GST} work with the black-box (oracle) access, his algorithm achieves much smaller probability of success than the one required in Theorem \ref{t:main}, cf. Section \ref{s:witnessing}.
The proof of Theorem \ref{t:main} is an adaptation of a method of exploiting Nisan-Wigderson generators introduced by Kraj\'{i}\v{c}ek \cite{Knw} in order to give a model-theoretic evidence for Razborov's conjecture in proof complexity. Razborov's conjecture \cite{Rkdnf} states a conditional hardness of deriving tautologies expressing the existence of an element outside of the range of a suitable NW-generator in strong proof systems. Kraj\'i\v{c}ek's result significantly strengthens a similar but much simpler proof of the validity of Razborov's conjecture for proof systems with feasible interpolation \cite{Pnw}. The method has also been used to show a conditional hardness of generating hard tautologies \cite{Kht}, a conditional unprovability of p-size circuit lower bounds for \ensuremath{{\sf SAT}}\xspace in theories of bounded arithmetic below Cook's theory \ensuremath{{\sf PV_1}}\xspace \cite{Pclba}, and an unconditional unprovability of strong nondeterministic lower bounds in Je\v{r}\'abek's theory of approximate counting \ensuremath{{\sf APC_1}}\xspace \cite{PSapc}. We take advantage of its unique way of exploiting the NW-generator: it gives us a reconstruction algorithm which, after breaking the NW-generator in a particular interactive fashion, allows us to approximately compute the function on which the generator is based. There are, however, technical issues with adapting this method in our context: e.g., unlike in bounded arithmetic, our witnessing circuits can fail with a significant probability. Our main contribution is in finding the right notions which allow the arguments to go through (in both directions).
A competing notion of constructive circuit lower bounds has been developed in the influential theory of natural proofs of Razborov and Rudich \cite{RR}, which explains why many of the existing lower bound methods cannot yield separations such as $\ensuremath{{\sf P}}\xspace\ne \ensuremath{{\sf NP}}\xspace$. Natural proofs are known to be equivalent to the existence of efficient learning algorithms, cf. \cite{CIKK}. For example, \ensuremath{{\sf P/poly}}\xspace-natural proofs useful against \ensuremath{{\sf P/poly}}\xspace\footnote{\ensuremath{{\sf P/poly}}\xspace-natural proofs useful against \ensuremath{{\sf P/poly}}\xspace are defined as $2^{O(n)}$-size circuits with $2^n$ inputs accepting a $1/2^{O(n)}$-fraction of inputs and rejecting all inputs which represent truth-tables of Boolean functions on $n$ inputs computable by p-size circuits, cf. Definition \ref{d:natpr}.} are equivalent to subexponential-size circuits learning p-size circuits over the uniform distribution with membership queries. Furthermore, natural proofs have been used to derive unprovability results in proof complexity as well, specifically the unprovability of circuit lower bounds in proof systems with the feasible interpolation property, cf. \cite{Rba,Kdual}. Despite similar applications and motivations for defining these concepts, the relation between natural proofs and the witnessing method has not been clear. In fact, a priori, the `static' definition of natural proofs appears to be quite orthogonal to the witnessing from Theorem \ref{t:main}. Theorem \ref{t:main} thus not only extends the scope of the natural proofs barrier by providing another equivalent characterisation which incorporates interactivity, but also helps to clarify its relation to the witnessing method.
\medskip
\noindent {\bf Learning speedup.} Our second
contribution is a simple proof of a generalized learning speedup
of Oliveira and Santhanam \cite{OS}.
Specifically, we show that for each superpolynomial function $s$, if for each constant $k$, circuits of size $n^k$ are learnable by circuits of size $s$ over the uniform distribution with random examples, then for each constant $k$ and $\epsilon>0$, circuits of size $n^k$ are learnable over the uniform distribution with membership queries by circuits of size $O(s^{\epsilon})$, cf. Theorem \ref{t:speedup}.
We obtain the speedup by a more direct exploitation of a slightly modified NW-generator. In comparison to the proof from \cite{OS}, this sidesteps the need to construct natural proofs and to invoke the construction of Carmosino et al. \cite{CIKK}. A disadvantage of the method is that we need to assume learning with random examples instead of membership queries. Nevertheless, we present one more alternative proof of the learning speedup based on (a simple case of) Theorem~\ref{t:main}, which allows us to start with membership queries, cf. Theorem \ref{t:aspeed}. We emphasize, however, that behind all proofs of the learning speedup is essentially the same general idea of reconstructing, in one way or another, the base function of some form of the NW-generator.
\medskip
\noindent {\bf {\em Relation to hardness magnification and locality.}} The generalized learning speedup can be interpreted as a nonlocalizable hardness magnification theorem reducing a complexity lower bound to a seemingly weaker one. In general, hardness magnification refers to an approach to strong complexity lower bounds developed in a series of recent papers, cf. Section \ref{s:speedup}. Unfortunately, while the approach avoids (in certain cases provably \cite{CHOPRS}) the natural proofs barrier, it suffers from a `{\em locality barrier}': magnification theorems typically yield unconditional upper bounds for specific problems if the computational model in question is allowed to use oracles with small fan-in, but the existing lower bounds actually work even in the presence of local oracles. In fact, a better understanding of nonlocalizable lower bounds is essential for further progress on strong complexity lower bounds in general, see Section \ref{s:speedup} for more details. A promising aspect of the learning speedup (Theorem \ref{t:speedup}) is that it avoids the locality barrier, cf. Section \ref{s:speedup}.
\bigskip
\noindent {\bf Learning from breaking cryptographic pseudorandom generators.} In Section~\ref{s:leargen} we survey known constructions of learning algorithms from distinguishers breaking pseudorandom generators (PRGs) or natural proofs.
While several such constructions are known, the question of extracting efficient learning of p-size circuits from the non-existence of cryptographic PRGs remains open. A positive answer to this question would establish an interesting win-win situation: either safe cryptography or efficient learning is possible. In the already mentioned approach, Oliveira and Santhanam \cite{OS} showed that efficient learning of p-size circuits with membership queries follows from the non-existence of nonuniform pseudorandom function families. By a straightforward adaptation of the proof method behind their result we show that efficient learning of p-size circuits {\em with random examples} follows from the non-existence of succinct nonuniform pseudorandom function families, cf. Theorem \ref{t:towardscore}. Finally, we point out that the desired construction of learning algorithms from the non-existence of cryptographic PRGs is closely related to a question of Rudich about turning demibits to superbits, cf. Section \ref{s:rudich}.
\section{Preliminaries}
$[n]$ denotes $\{1,\dots,n\}$. $\ensuremath{{\sf Circuit}}\xspace[s]$ denotes fan-in two Boolean circuits of size at most $s$. The size of a circuit is the number of gates. A function $f:\{0,1\}^n\mapsto \{0,1\}$ is $\gamma$-approximated by a circuit $C$, if $\Pr_x[C(x)=f(x)]\ge\gamma$.
\begin{definition}[Natural property \cite{RR}]\label{d:natpr} Let $m=2^n$ and $s,d:\mathbb{N} \mapsto \mathbb{N}$. A sequence of circuits $\{C_{m}\}^{\infty}_{m=1}$ is a $\ensuremath{{\sf Circuit}}\xspace[s(m)]$-natural property useful against $\ensuremath{{\sf Circuit}}\xspace[d(n)]$ if
\begin{itemize}
\item[1.] {\em Constructivity.} $C_{m}$ has $m$ inputs and size $s(m)$,
\item[2.] {\em Largeness.} $\Pr_x[C_{m}(x)=1]\ge 1/m^{O(1)}$,
\item[3.] {\em Usefulness.} For each sufficiently big $m$, $C_{m}(x)=1$ implies that $x$ is a truth-table of a function on $n$ inputs which is not computable by circuits of size $d(n)$.
\end{itemize}
\end{definition}
\begin{definition}[Pseudorandom generator]\label{d:prg}
A function $g:\{0,1\}^n\mapsto\{0,1\}^{n+1}$ computable by p-size circuits is a pseudorandom generator safe against circuits of size $s(n)$, if for each circuit $D$ of size $s(n)$, $$\left|\Pr_{y\in\{0,1\}^{n+1}}[D(y)=1]-\Pr_{x\in\{0,1\}^n}[D(g(x))=1]\right|<\frac{1}{s(n)}.$$
\end{definition}
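For illustration (a toy sketch, not part of the formal development), the distinguishing advantage in Definition \ref{d:prg} can be computed exactly for tiny parameters by enumerating all seeds and all strings; the generator $g$ and distinguisher $D$ below are hypothetical stand-ins.

```python
import itertools

def advantage(g, D, n):
    """Exact distinguishing advantage |Pr_y[D(y)=1] - Pr_x[D(g(x))=1]|,
    enumerating all y in {0,1}^(n+1) and all seeds x in {0,1}^n."""
    ys = list(itertools.product([0, 1], repeat=n + 1))
    xs = list(itertools.product([0, 1], repeat=n))
    p_random = sum(D(y) for y in ys) / len(ys)
    p_pseudo = sum(D(g(x)) for x in xs) / len(xs)
    return abs(p_random - p_pseudo)

# Toy generator (hypothetical): append the parity of the seed.
g = lambda x: x + (sum(x) % 2,)
# Distinguisher checking that the last bit equals the parity of the rest;
# it accepts every output of g but only half of all random strings.
D = lambda y: int(y[-1] == sum(y[:-1]) % 2)
```

Here the parity-checking distinguisher breaks the toy generator completely, achieving advantage $1/2$, far above the $1/s(n)$ threshold of the definition.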
\begin{definition}[PAC learning]\label{d:lear}
A circuit class $\mathcal{C}$ is learnable over the uniform distribution by a circuit class $\mathcal{D}$ up to error $\epsilon$ with confidence $\delta$, if there are randomized oracle circuits $L^f$ from $\mathcal{D}$ such that for every Boolean function $f:\{0,1\}^n\mapsto\{0,1\}$ computable by a circuit from $\mathcal{C}$, when given oracle access to $f$, input $1^n$ and internal randomness $w \in \{0,1\}^*$, $L^f$ outputs the description of a circuit satisfying
\begin{equation*}
\Pr_w [ L^f(1^n, w) \text{ } (1-\epsilon) \text{-approximates } f ] \geq \delta.
\end{equation*}
\noindent $L^f$ uses non-adaptive membership queries if the set of queries which $L^f$ makes to the oracle does not depend on the answers to previous queries. $L^f$ uses random examples if the set of queries which $L^f$ makes to the oracle is chosen uniformly at random.
\end{definition}
In this paper, PAC learning always refers to learning over the uniform distribution.
\bigskip
\noindent {\bf Boosting confidence and reducing error.} The confidence of the learner can be efficiently boosted in a standard way. Suppose an $s$-size circuit $L^f$ learns $f$ up to error $\epsilon$ with confidence $\delta$. We can then run $L^f$ $k$ times, test the output of $L^f$ from every run with $m$ new random queries and output the most accurate one. By Hoeffding's inequality, $m$ random queries fail to estimate the error $\epsilon$ of an output of $L^f$ up to $\gamma$ with probability at most $2/e^{2\gamma^2m}$.
Therefore the resulting circuit of size $poly(s,m,k)$ learns $f$ up to error $\epsilon+\gamma$ with confidence at least $1-2k/e^{2\gamma^2m}-(1-\delta)^k\ge 1-2k/e^{2\gamma^2m}-e^{-k\delta}$. If we are trying to learn small circuits, we can even achieve confidence 1 by fixing the internal randomness of the learner nonuniformly, without losing much on the running time or the error of the output. It is also possible to reduce the error up to which $L^f$ learns $f$ without a significant blowup in the running time and confidence. If we want to learn $f$ with a better error, we first learn an amplified version of $f$, $Amp(f)$. Employing direct product theorems and the Goldreich-Levin reconstruction algorithm, Carmosino et al. \cite[Lemma 3.5]{CIKK} showed that for each $0<\epsilon,\gamma<1$ it is possible to map a Boolean function $f$ with $n$ inputs to a Boolean function $Amp(f)$ with $poly(n,1/\epsilon,\log(1/\gamma))$ inputs so that $Amp(f)\in \ensuremath{{\sf P/poly}}\xspace^f$ and there is a probabilistic $poly(|C|,n,1/\epsilon,1/\gamma)$-time machine which, given a circuit $C$ $(1/2+\gamma)$-approximating $Amp(f)$ and oracle access to $f$, outputs with high probability a circuit $(1-\epsilon)$-approximating $f$. We thus typically ignore the optimisation of the confidence and error parameters in the rest of the paper.
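The repetition-and-testing step of the boosting argument can be sketched as follows; the toy learner, target function and parameters are hypothetical, and each candidate hypothesis is tested with $m$ fresh random queries exactly as above.

```python
import random

def boost_confidence(learner, f, n, k, m):
    """Run a low-confidence learner k times; estimate the error of each
    output hypothesis with m fresh random membership queries to f; return
    the empirically most accurate hypothesis."""
    best_h, best_err = None, 1.0
    for _ in range(k):
        h = learner()  # one run of the randomized learner
        xs = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(m)]
        err = sum(h(x) != f(x) for x in xs) / m
        if err < best_err:
            best_h, best_err = h, err
    return best_h

# Toy target: parity on n = 4 bits; a "learner" succeeding with prob. 1/3.
f = lambda x: sum(x) % 2
def learner():
    if random.random() < 1/3:
        return f               # correct hypothesis, error 0
    return lambda x: 0         # trivial hypothesis, error about 1/2
```

With $k$ repetitions the failure probability drops to roughly $(1-\delta)^k$ plus the Hoeffding estimation error, as in the bound above.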
\section{Instance-specific learning}\label{s:i-s}
The most direct way of turning circuit lower bounds into a certain type of learning can be described as follows.\footnote{The simple observation from box A appeared in \cite[Section 4.5]{MP} and \cite{Pmu15}.
I am not aware of a more systematic treatment of this concept. There are related models of learning, such as the `knows what it knows' model of Li-Littman-Walsh \cite{LLW} and `reliable learning' of Rivest-Sloan \cite{RS}, which prohibit incorrect predictions in various ways. These models, however, follow the formalization of PAC learning in that the goal of the learner is to learn the target concept by accessing it. In box A we do not assume that the target concept $f$ is determined on all inputs or prior to the given samples.}
\bigskip
\setlength\fboxrule{1pt}
\fbox{\parbox{411pt}{
\vspace{6pt}
\hspace{1pt} {\bf A. Prediction from lower bound.} Suppose we are given bits $f(y_1),\dots,f(y_k)$ for $n$-bit strings $y_1,\dots,y_k$ defining a partial Boolean function $f$. We want to predict the value of $f$ on a new input $y_{k+1}\in\{0,1\}^n$. A priori $f(y_{k+1})$ is not defined but we will interpret the minimal-size circuit $C^f$ coinciding with $f$ on $y_1,\dots,y_k$ as `the right' prediction of $f(y_{k+1})$. That is, we want to find $C^f(y_{k+1})$. Here, we assume that the minimal circuit $C^f$ determines the value $f(y_{k+1})$. Otherwise, there are two circuits $C^1, C^2$ of minimal size such that $C^1(y_{k+1})\ne C^2(y_{k+1})$, and therefore any prediction is equally good. Say that the size of the minimal circuit $C^f$ is $s$. Then the task to predict the value $C^f(y_{k+1})$ can be formulated as the task to prove an $s$-size circuit lower bound of the form $$\forall\text{ circuit }C\text{ of size }s,\ \bigvee_{i=1,\dots,k} C(y_i)\ne f(y_i)\vee C(y_{k+1})\ne\epsilon$$ for $\epsilon=0$ or $\epsilon=1$.
\vspace{3pt}}}
\bigskip
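The prediction rule of box A can be illustrated by brute force over a small hypothesis class; the class below (constant functions and single variables, with ad hoc `sizes') is a hypothetical stand-in for circuits of size at most $s$, not an actual circuit enumeration.

```python
import itertools

def predict(samples, x_new, hypotheses):
    """Box-A prediction: among the minimal-'size' hypotheses consistent
    with the samples, output their common value on x_new, or None when the
    minimal consistent hypotheses disagree (any prediction is equally good).
    `hypotheses` is a list of (size, function) pairs standing in for
    circuits ordered by circuit size."""
    consistent = [(s, h) for s, h in hypotheses
                  if all(h(x) == b for x, b in samples)]
    if not consistent:
        return None
    s_min = min(s for s, _ in consistent)
    vals = {h(x_new) for s, h in consistent if s == s_min}
    return vals.pop() if len(vals) == 1 else None

# Hypothetical class: constants (size 1) and single variables (size 2).
hyps = [(1, lambda x: 0), (1, lambda x: 1)]
hyps += [(2, (lambda i: lambda x: x[i])(i)) for i in range(3)]

samples = [((0, 1, 0), 1), ((1, 0, 1), 0)]  # consistent only with x[1]
```

On these samples the unique minimal consistent hypothesis is the projection to the second coordinate, so the prediction on a new input is determined; with no samples at all, both constants are minimal and consistent, and no prediction is made.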
An interesting aspect of the prediction method described in box A is that by proving even a single circuit lower bound we can learn something about the function $f$ (if we know the value $s$). More precisely, we predict $C^f$ on a single input but do not necessarily gain knowledge of the values of $C^f$ on other inputs. This `instance-specific' learning should be contrasted with PAC learning, Definition \ref{d:lear}, where one is required to generate a circuit predicting the target function $f$ on most inputs. This, however, does not mean that it is easier to learn in the sense of box A: in Definition \ref{d:lear} we do not need to recognize when the prediction errs, while the prediction from box A is zero-error in the sense that it guarantees to output the right value of $C^f(y_{k+1})$.\footnote{{\bf Provability vs truth.} The definition of `the right' prediction in terms of minimal circuits used in box A can be interpreted as an implicit (alternative) definition of truth. Consider, for example, that strings $y_j$ encode statements in set theory ZFC and the value $f(y_j)$ is 1 if and only if the statement encoded by $y_j$ is provable in ZFC. It would be interesting to find out whether the minimal circuit coinciding with a sufficiently rich list of such samples $(y_j,f(y_j))$ determines a truth value of the Continuum Hypothesis or of the consistency of ZFC, statements which are independent of ZFC. Unfortunately, in general, such questions seem to be out of reach of contemporary mathematics.}
\medskip
\noindent {\bf Determining minimal circuit size.} A drawback of the observation in box A is that it requires knowledge of the size $s$ of the minimal circuit $C^f$, which might be hard for the learner to determine. The size $s$ could be determined by deciding $t$-size circuit lower bounds for $t\in [s]$. Perhaps a more practical way of addressing the issue is to take a sufficiently big approximate value $s'$ of $s$, choose a random $t\in [s']$ and prove $t$-size lower bounds (as in box A with $t$ instead of $s$). If $s'\le n^{O(1)}$, the probability that we have the right $t$ is $1/n^{O(1)}$. Then, by solving polynomially many $t$-size lower bounds (in order to predict $C^f(y)$ on polynomially many $y$'s), we can approximate the accuracy of our predictions. If the accuracy is not high, we can repeat the process with a new random $t\in [s']$.
The advantage of this method is that it does not rely on deciding correctly whether some particular $t$-size circuit lower bounds hold -- we are actually allowed to err on some fraction of the lower bounds. However, its predictions are no longer zero-error. A closely related argument is formalized in Section \ref{s:leargen}.
\bigskip
\noindent {\bf Proof complexity.} The prediction method from box A
relies on proof complexity of circuit lower bounds, cf. \cite{Kpc}.\footnote{Notably, Razborov \cite{Rkdnf} established that weak proof systems such as Resolution operating with $k$-DNFs for small $k$ do not have polynomial-size proofs of any superpolynomial circuit lower bound whatsoever, and he conjectured that this holds, under a hardness assumption, even for stronger systems such as Frege. The issue is, however, delicate because proof systems like Extended Frege are already capable of formalizing a lot of complexity theory, see e.g. \cite{MP}, and it is perfectly plausible that if a circuit lower bound is provable at all, then it is efficiently provable in Extended Frege.} It would be interesting to find out if proving circuit lower bounds in standard proof systems suffices to construct learning circuits:
\begin{question}[Learning interpolation]\label{q:fip}
Is there a p-time function which given an Extended Frege proof of a formula $\bigvee_{y\in A} C(y)\ne f(y)\vee C(x)\ne\epsilon$, for $\epsilon=0$ or $\epsilon=1$, with free variables representing $s$-size circuits $C$ with $n$ inputs, a fixed set $A$ of $n$-bit inputs of a sufficiently big size $|A|=poly(s,n)$, a fixed $n$-bit string $x\notin A$ and values of $f\in\ensuremath{{\sf Circuit}}\xspace[s]$ on $A$, outputs a circuit $(1/2+1/n)$-approximating $f$?
\end{question}
\subsection{Learning from witnessing lower bounds}\label{s:witnessing}
We now give a construction of PAC learning algorithms from an interactive witnessing of circuit lower bounds. As discussed in the introduction, the implication can also be interpreted as a construction of PAC learning algorithms from frequent interactive instance-specific learning.
\medskip
\begin{theorem}[Learning from interactive witnessing of lower bounds]\label{t:main}
Let $d\ge 2; k,K \ge 1$ and $H$ be a Boolean function with $n$ inputs.
Assume there are $2^{Kn}$-size circuits $W_1^1,\dots,W_{\log n}^b$ with $b=2^{Kn}$ such that for each distribution $\mathcal{R}$ on $n^{10dk}$-size circuits with $n$ inputs
there exists $j\in [b]$ such that circuits $W_1^j,\dots,W_{\log n}^j$ witness errors of $n^{10dk}$-size circuits attempting to compute $H$ in the following way.
\begin{itemize} \item[] Given oracle access to a random $n^{10dk}$-size circuit $D(x)$ with $n$ inputs, with probability at least $1-3/n^3$ over $\mathcal{R}$, the following interactive protocol succeeds: After querying values of
circuit $D$, $W_1^j$ outputs a not-yet-queried $x_1\in\{0,1\}^n$ s.t. $D(x_1)\ne H(x_1)$, or $W_2^j$ receives a correction in the form of bits $D(x_1), H(x_1)$ s.t. $D(x_1)=H(x_1)$. Having $D(x_1), H(x_1)$ and the samples queried by $W^j_1$, $W_2^j$ makes further queries to $D$ and generates the second not-yet-queried candidate $x_2\in\{0,1\}^n$ for the claim $D(x_2)\ne H(x_2)$. If $D(x_2)=H(x_2)$, $W_3^j$ receives a correction and the protocol continues in this way until some $W_t^j$, for $t\le \log n$, with access to all previous corrections and samples, finds the right $x_t$ which has not been queried by $W^j_1,\dots,W^j_t$ and witnesses $D(x_t)\ne H(x_t)$.
\end{itemize}
Then, circuits of size $n^{dk}$ with $n^d$ inputs can be learned by circuits of size $2^{K'n}$ over the uniform distribution with non-adaptive membership queries, confidence $1/2^{K'n^{2}}$, up to error $1/2-1/2^{K'n^{2}}$, where $K'$ is a constant depending only on $K$.
\end{theorem}
Note that the witnessing circuits from Theorem \ref{t:main} can work for an arbitrary function $H$ and, for the circuits $D$ on which the witnessing succeeds, the number of queries in each round is implicitly bounded by $<2^n$ (since after querying $D$ on all inputs it would be impossible to output a not-yet-queried input).
\proof The proof follows the main construction from \cite{Pclba, Knw} in the context of learning. The main technical complication is caused by the fact that the witnessing circuits $W^1_1,\dots,W^b_{\log n}$ are allowed to fail on a significant fraction of inputs.
\medskip
In order to derive the conclusion of the theorem it suffices to assume that the witnessing circuits work for distributions $\mathcal{R}$ induced by specific Nisan-Wigderson generators.
Consider a Nisan-Wigderson generator based on a circuit $C$ which we aim to learn. Specifically, for $d\ge 2$ and $n^{2d}\le m\le 2n^{2d}$, let $A=\{a_{i,j}\}^{i\in [2^n]}_{j\in [m]}$ be a $2^n\times m$ 0-1 matrix with $n^{d}$ ones per row and $J_i(A):=\{j\in [m]; a_{i,j}=1\}$. Then define an NW-generator $NW_{C}:\{0,1\}^{m}\mapsto\{0,1\}^{2^n}$ as $$(NW_{C}(w))_i=C(w|J_i(A))$$ where $w|J_i(A)$ are $w_j$'s such that $j\in J_i(A)$.
For any $d\geq 2$, Nisan and Wigderson \cite{NW} constructed a $2^n\times m$ 0-1 matrix $A$ with $n^{d}$ ones per row and $n^{2d}\le m\le 2n^{2d}$ which is also an $(n,n^{d})$-design meaning that for each $i\neq j$, $|J_i(A)\cap J_j(A)|\leq n$ and $|J_i(A)|=n^d$. Moreover, there are $n^{9d}$-size circuits which given $i\in \{0,1\}^n$ and $w\in\{0,1\}^{m}$ output $w|J_i(A)$, cf. \cite{CIKK}. Therefore, if $C$ has $n^{d}$ inputs and size $n^{dk}$, then for each $w\in \{0,1\}^{m}$, $(NW_{C}(w))_x$ is a function on $n$ inputs $x$ computable by circuits of size $n^{10dk}$. We want to learn $C$ by a circuit of size $2^{O(n)}$.
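For concreteness, the generator $NW_C$ can be evaluated coordinate by coordinate as in the following toy sketch; the base function $C$ and the design are tiny hypothetical examples (pairwise intersections of the sets $J_i(A)$ of size at most 1), not the actual construction from \cite{NW}.

```python
def nw_bit(C, w, J_i):
    """Output bit i of NW_C(w): apply the base function C to the
    restriction of the seed w to the coordinate set J_i(A)."""
    return C([w[j] for j in sorted(J_i)])

# Toy base function and a small combinatorial design (hypothetical):
C = lambda u: (u[0] & u[1]) ^ u[2]                      # "circuit" on 3 inputs
design = [{0, 1, 2}, {0, 3, 4}, {1, 3, 5}, {2, 4, 5}]   # pairwise overlap <= 1
w = [1, 0, 1, 0, 0, 1]                                  # seed of length m = 6

output = [nw_bit(C, w, J) for J in design]
```

Each output bit reads only $|J_i(A)|$ seed coordinates, and the small pairwise overlaps are exactly what makes the hybrid/reconstruction argument in the proof below possible: fixing the seed outside one set $J_{X}(A)$ leaves at most $n$ free bits in every other row.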
Let $\mathcal{R}$ be the distribution on $n^{10dk}$-size circuits defined so that a random circuit over $\mathcal{R}$ is $(NW_C(w))_x$ for $w\in\{0,1\}^m$ chosen uniformly at random.
\medskip
By the assumption of the theorem, we have $2^{Kn}$-size circuits $W_1^1,\dots, W_{\log n}^b$, with $b=2^{Kn}$, such that for some $j\in [b]$, for a $1-3/n^3$ fraction of all $w\in\{0,1\}^{m}$, circuits $W_1^j,\dots, W_{\log n}^j$ find an error of the $n^{10dk}$-size circuit $(NW_C(w))_x$ attempting to compute $H$. We will use them to break, in a certain sense, the generator $NW_C$ and to reconstruct the circuit $C$.
For each $w$ define a trace $tr(C,w)=x_1,\dots, x_t$ as the sequence of $t\le \log n$ strings generated by $W_1^j,\dots,W_t^j$ on $(NW_C(w))_x$ such that $W_t^j$ is the first circuit which succeeds in witnessing the error, i.e. $H(x_t)\ne (NW_C(w))_{x_t}$. If circuits $W^j_1,\dots,W^j_{\log n}$ do not find an error, $x_t=x_{\log n}$. The trace is defined w.r.t. a fixed `helpful' oracle $Y$ providing corrections in the form of bits $(NW_C(w))_x, H(x)$.
\medskip
For $u\in \{0,1\}^{n^{d}}$ and $v\in\{0,1\}^{m-n^{d}}$ define $r_x(u,v)\in \{0,1\}^{m}$ by putting bits of $u$ into positions $J_x(A)$ and filling the remaining bits by $v$ (in the natural order). We say that $w\in \{0,1\}^{m}$ is {\em good} if the trace $tr(C,w)$ ends with a string witnessing an error of circuit $(NW_C(w))_x$ and {\em bad} otherwise. Similarly, given $v\in \{0,1\}^{m-n^d}$ and $x'\in \{0,1\}^n$, we say that $u\in\{0,1\}^{n^d}$ is good if $r_{x'}(u,v)$ is.
The core claim of the proof is the existence of a frequent trace on which circuit $W^j_1,\dots,W^j_{\log n}$ succeed in witnessing the error with significant advantage.
\begin{claim}\label{claim}
There is a trace $Tr=X_1,\dots, X_t$, $t\leq \log n$, such that for a fraction $s\ge 1/(6^{2n(t-1)}2^{2n}n)$ of all $a\in \{0,1\}^{m-n^{d}}$ the following holds: for a fraction $s'\ge s$ of all $u\in\{0,1\}^{n^d}$, $tr(C,r_{X_t}(u,a))$ starts with $Tr$, and at least $(2/3-6^t/n^3-2/n)s'2^{n^d}$ of the $u$'s are good and satisfy $tr(C,r_{X_t}(u,a))=Tr$.
\end{claim}
The trace $Tr$ is constructed inductively: in step $i$ we want to find $X_1,\dots, X_{i-1}$ such that for $\ge 1/6^{2n(i-1)}$ of all $w$'s $tr(C,w)$ strictly extends $X_1,\dots,X_{i-1}$ and the fraction of good $w$'s for which this happens is $\ge 1-6^i/2n^3$. For $i=1$ this holds by the assumption. Assume we have such $X_1,\dots, X_{i-1}$. We want to extend them to $X_1,\dots,X_i$. Since there are at most $2^n$ strings $X_j$, there is $X_i$ such that for $s''\ge 1/(2^{2n}6^{2n(i-1)})$ $w$'s $tr(C,w)$ starts with $X_1,\dots,X_i$ and $\le 6^i/n^3$ of these $w$'s are bad. Otherwise, the fraction of good $w$'s for which $tr(C,w)$ strictly extends $X_1,\dots,X_{i-1}$ would be $\le 1/2^n+1-6^i/n^3<1-6^i/2n^3$ if $2n^3\le 2^n$. Now, either for $\ge (2/3)s''$ of $w$'s $tr(C,w)$ stops at $X_i$ (hence, for $\le (1/3)s''$ $w$'s the trace continues and for $\le 6^is''/n^3$ bad $w$'s $tr(C,w)$ starts with $X_1,\dots,X_i$) or for $\ge (1/3)s''$ $w$'s the trace strictly extends $X_1,\dots,X_i$. In the latter case, for $\le 6^is''/n^3$ bad $w$'s $tr(C,w)$ starts with $X_1,\dots,X_i$, which means that the fraction of bad $w$'s such that $tr(C,w)$ strictly extends $X_1,\dots,X_i$ is $\le 3\cdot 6^i/n^3$.
Since for all $w$, the length of $tr(C,w)$ is bounded by $\log n$, the process of extending $X_1,\dots, X_{i-1}$ has to stop at some step $1\le i\le \log n$. That is, there is $Tr=X_1,\dots, X_t, t\le \log n$ such that for $\ge (2/3)s$ of $w$'s $tr(C,w)=Tr$, for $\le (1/3)s$ of $w$'s $tr(C,w)$ strictly extends $Tr$ and $\le 6^ts/n^3$ of $w$'s such that $tr(C,w)$ is consistent with $Tr$ are bad, where $s\ge 1/(6^{2n(t-1)}2^{2n})$.
The number of good $w$'s such that $tr(C,w)=Tr$ is at least $(2/3-6^t/n^3)s2^m$. Therefore, $\ge s/n$ $a$'s can be completed by $s'\ge s/n$ $u$'s to a string $w=r_{X_t}(u,a)$ such that $tr(C,w)$ starts with $Tr$ and at least $(2/3-6^t/n^3-2/n)s'2^{n^d}$ $u$'s are good and satisfy $tr(C,r_{X_t}(u,a))=Tr$. This proves the claim.
\medskip
For $X\in \{0,1\}^n$ and $a'\in\{0,1\}^{m-n^d}$ let $r_{X}(\cdot,a')$ be the bits of $a'$ in the positions of $[m]\backslash J_{X}(A)$.
Since $A$ is an $(n,n^{d})$-design, for any row $x\neq X$ at most $n$ bits of $r_{X}(\cdot,a')|J_x(A)$ are not set.
For
$x\ne X$, let $Y_{x,C}^{X,a'}$ be the set of all corrections provided by $Y$ on $x, C$ and $r_{X}(u,a')|J_x(A)$ for all $u\in\{0,1\}^{n^{d}}$. This includes queries to $C$ on inputs $r_{X}(u,a')|J_x(A)$. The size of each set $Y_{x,C}^{X,a'}$ is $2^{O(n)}$.
\smallskip
We are ready to describe a circuit $D'$ that approximates $C$. First, choose uniformly at random $a'\in\{0,1\}^{m-n^d}$, a trace $X^1,\dots,X^t$ with $t\le \log n$, a bit $maj\in \{0,1\}$, and $j'\in [b]$. Query $C$ so that all queries to $C$ from the sets $Y_{x,C}^{X^t,a'}$, for $x\ne X^t$, are obtained. In order to get access to all corrections from $Y_{X^1,C}^{X^t,a'},\dots, Y_{X^{t-1},C}^{X^t,a'}$ we provide also the full truth-table of $H$ as nonuniform advice of $D'$. The truth table of $H$ is a single piece of nonuniform advice of the learner which works for every $C$. Then $D'$ computes as follows.
For each $u\in\{0,1\}^{n^{d}}$ produce $r_{X^t}(u,a')$.
Next, use $W_1^{j'}$ to produce $x^1$. If a query of $W_1^{j'}$ cannot be answered by $Y_{x,C}^{X^t,a'}$ with $x\ne X^t$ or $x^1\ne X^1$, output $maj$. Otherwise, use the advice from $Y_{X^1,C}^{X^t,a'}$ to find out if $H(X^1)=NW_{C}(r_{X^t}(u,a'))_{X^1}$. If the equality does not hold, output $maj$. Otherwise, use $W_2^{j'}$ to generate $x^2$ and continue in the same manner until $W_t^{j'}$ produces $x^t$. If a query of $W_t^{j'}$ cannot be answered by $Y_{x,C}^{X^t,a'}$ with $x\ne X^t$ or $x^t\ne X^t$, output $maj$. Otherwise, output 0 iff $H(X^t)=1$. The resulting circuit $D'$ has $n^{d}$ inputs and size $2^{O(n)}$, if $m\le 2^n$ (which holds w.l.o.g.).
\medskip
By Claim \ref{claim}, with probability at least $1/(6^{2n\log n}2^{O(n\log n)})$ the learner guessed $j'=j$, trace $Tr$ and assignment $a$ such that for at least $(2/3-6^t/n^3-2/n)s'$ of all $u\in\{0,1\}^{n^{d}}$, $D'$ will successfully predict $C(u)$. Moreover, for at most $(1/3+6^t/n^3+2/n)s'$ of all $u$'s, the trace extends $Tr$ or starts with $Tr$ but does not end with a string witnessing an error. Since with probability $1/2$ the correct value on at least half of all remaining $u$'s is $maj$, $\Pr_u[D'(u)=C(u)]\geq 1/2+(1/6-6^t/n^3-2/n)s$.
\qed
\bigskip
The assumption from Theorem \ref{t:main} is justified by the following lemma which establishes the converse.
\begin{lemma}[Witnessing from learning]\label{l:mainconverse}
Let $k\ge 1$; $\epsilon<1$; $2^n/2n\ge 2^{\epsilon n}\ge n^k$ and $H$ be a Boolean function with $n$ inputs hard to $(1-1/n)$-approximate by circuits of size $2^{\epsilon n}$.
Assume $\ensuremath{{\sf Circuit}}\xspace[n^k]$ can be learned by $\ensuremath{{\sf Circuit}}\xspace[2^{\epsilon n}]$ over the uniform distribution with confidence $1$ up to error $\epsilon'$.
Then, there are $2^{O(n)}$-size circuits $W^1,\dots,W^b$ with $b=2^n/2n$ such that for each distribution $\mathcal{R}$ on $n^{k}$-size circuits with $n$ inputs
there exists $j\in [b]$ such that given an oracle access to a random $n^{k}$-size circuit $D(x)$ with $n$ inputs, with probability at least $1-2\epsilon' n$ over $\mathcal{R}$, after $\le 2^{\epsilon n}$ queries to
circuit $D$, $W^j$ outputs a not-yet-queried $x\in\{0,1\}^n$ s.t. $D(x)\ne H(x)$.
\end{lemma}
\proof By the assumption, there exists a $2^{\epsilon n}$-size circuit $W$ which, for each $n^k$-size circuit $D$, given oracle access to $D$, outputs a circuit $C$ $(1-\epsilon')$-approximating $D$. Since $H$ is hard to $(1-1/n)$-approximate by circuits of size $2^{\epsilon n}\le 2^n/2n$, there are at least $2^n/2n$ inputs which have not been queried by $W$ and on which $C$ fails to compute $H$. Therefore, a random input which has not been queried by $W$ and on which $C$ fails to compute $H$ witnesses $D(x)\ne H(x)$ with probability $\ge 1-2\epsilon' n$. Let $W^1,\dots,W^b$, $b=2^n/2n$, be circuits such that $W^i$ simulates $W$ and outputs the $i$-th input on which $C$ fails to compute $H$, ignoring inputs which have been queried by $W$. The size of each $W^i$ is $2^{O(n)}$ because it uses the whole truth table of $H$ as nonuniform advice. Let $\mathcal{R}$ be an arbitrary distribution on circuits of size $n^k$. Since for each $D$ at least a $1-2\epsilon' n$ fraction of the $W^i$'s succeed, there is a $W^j$ which succeeds on a random $D$ with probability $\ge 1-2\epsilon' n$ over $\mathcal{R}$.
\qed
\bigskip
Note that Theorem \ref{t:main} together with Lemma \ref{l:mainconverse} imply that for suitable $H$ it is possible to collapse the number of rounds in the interactive witnessing from Theorem \ref{t:main} at the expense of witnessing errors of slightly smaller circuits (and a small increase in the running time of the witnessing).
\def\backup{
\begin{theorem}[Instance-specific learning from interactive witnessing of lower bounds]\label{t:main}
Let $d\ge 2, k\ge 1$ and $H$ be an \ensuremath{{\sf NP}}\xspace machine. Assume there are $poly(n)$-size circuits $W_1,\dots,W_{l}$ for $l=O(n)$ which witness errors of $n^{5dk}$-size circuits attempting to compute $H$ in the following way: After $poly(n)$ queries to an $n^{5dk}$-size circuit $D$, $W_1$ outputs $x_1\in\{0,1\}^n$ such that $D(x_1)\ne H(x_1)$ or $W_2$ receives a counterexample to its claim in the form of $D(x_1)$ and $y_1\in\{0,1\}^n$ s.t. $D(x_1)=1$ iff $H$ accepts $x_1$ with nondeterministic bits $y_1$. Having $D(x_1)$ and $y_1$, $W_2$ generates the second candidate $x_2\in\{0,1\}^n$ for the claim that $D(x_2)\ne H(x_2)$. If $D(x_2)=H(x_2)$, $W_3$ receives a counterexample and the protocol continues in this way until some $W_t$, for $t\le l$, with access to all previous counterexamples, finds the right $x_t$ witnessing $D(x_t)\ne H(x_t)$.
Then, $\ensuremath{{\sf Circuit}}\xspace[n^{k}]$ can be learned by $\ensuremath{{\sf Circuit}}\xspace[2^{O(n^{1/d})}]$ with confidence 1 and error $1/2-1/2^{O(n^{2/d})}$.
\end{theorem}
\proof The proof adapts the main construction from \cite{Pclba, Knw} to the context of learning. We describe $2^{O(n)}$-size circuits learning circuits of size $n^{dk}$ with $n^{d}$ inputs, for $d\ge 2$.
\smallskip
Consider a Nisan-Wigderson generator based on a circuit $C$ which we aim to learn. Specifically, for $d\ge 2$ and $n^{2d}\le m\le 2n^{2d}$, let $A=\{a_{i,j}\}^{i\in [2^n]}_{j\in [m]}$ be a $2^n\times m$ 0-1 matrix with $n^{d}$ ones per row and $J_i(A):=\{j\in [m]; a_{i,j}=1\}$.
Then define an NW-generator $NW_{C}:\{0,1\}^{m}\mapsto\{0,1\}^{2^n}$ as $$(NW_{C}(w))_i=C(w|J_i(A))$$ where $w|J_i(A)$ are $w_j$'s such that $j\in J_i(A)$.
\smallskip
For any $d\geq 2$, Nisan and Wigderson \cite{NW} constructed a $2^n\times m$ 0-1 matrix $A$ with $n^{d}$ ones per row and $n^{2d}\le m\le 2n^{2d}$ which is also an $(n,n^{d})$-design, meaning that for each $i\neq j$, $|J_i(A)\cap J_j(A)|\leq n$. Moreover, there are $n^{4d}$-size circuits which given $i\in \{0,1\}^n$ and $w\in\{0,1\}^{m}$ output $w|J_i(A)$, cf. \cite{CIKK}. Therefore, if $C$ has $n^{d}$ inputs and size $n^{dk}$, then for each $w\in \{0,1\}^{m}$, $(NW_{C}(w))_x$ is a function on $n$ inputs $x$ computable by circuits of size $n^{5dk}$.
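For intuition only, the design and the generator $NW_C$ can be instantiated at toy sizes. The sketch below is our own illustration, not the NW construction from \cite{NW}: it uses the standard polynomial-based design over $GF(q)$ with linear polynomials (so pairwise intersections are at most $1$), and a stand-in parity circuit in place of $C$.

```python
import random

# Toy NW design over GF(q): one row per linear polynomial p(a) = s*a + t,
# with J_{(s,t)} = {(a, p(a)) : a in GF(q)} viewed as positions in [q^2].
q = 5

def J(s, t):
    """Set of positions where row (s, t) of the design matrix has a 1."""
    return {a * q + ((s * a + t) % q) for a in range(q)}

rows = [(s, t) for s in range(q) for t in range(q)]

# Design property: two distinct linear polynomials agree on at most 1 point,
# so any two distinct rows share at most 1 position.
for r1 in rows:
    for r2 in rows:
        if r1 != r2:
            assert len(J(*r1) & J(*r2)) <= 1

def C(x):
    """Stand-in for the circuit we base the generator on: parity."""
    return sum(x) % 2

def NW_bit(w, row):
    """One output bit of NW_C: C applied to the seed w restricted to J_row."""
    return C([w[j] for j in sorted(J(*row))])

random.seed(1)
w = [random.randrange(2) for _ in range(q * q)]   # seed of length m = q^2 = 25
output = [NW_bit(w, r) for r in rows]             # 25 output bits of NW_C(w)
print(output)
```

Each output bit depends on only $q$ seed positions, and distinct bits share at most one position, which is the property the proof exploits when fixing all but one block of the seed.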
\medskip
By the assumption of the theorem, there are $l=O(n)$ circuits $W_1,\dots, W_l$ of $poly(n)$-size which for each $w\in\{0,1\}^{m}$ find an error of the $n^{5dk}$-size circuit $(NW_C(w))_x$ attempting to compute $H$. Define a trace $tr(w)=x_1,\dots, x_t$ as the sequence of $t\le l$ strings generated by $W_1,\dots,W_t$ such that $W_t$ is the first circuit which succeeds in witnessing the error, i.e. $H(x_t)\ne (NW_C(w))_{x_t}$. The trace is defined w.r.t. a fixed `helpful' oracle $Y$ providing counterexamples in the form of bits $(NW_C(w))_x$ and strings $y$ such that $H$ accepts $x$ with nondeterministic bits $y$, if $H$ accepts at all. The advice provided by $Y$ depends only on $x$ and $w|J_x(A)$.
\medskip
For $u\in \{0,1\}^{n^{d}}$ and $v\in\{0,1\}^{m-n^{d}}$ define $r_x(u,v)\in \{0,1\}^{m}$ by putting bits of $u$ into positions $J_x(A)$ and filling the remaining bits by $v$ (in the natural order).
\begin{claim} There is a trace $Tr=X_1,\dots, X_t$, $t\leq l$, and $a\in \{0,1\}^{m-n^{d}}$ such that $Tr=tr(r_{X_t}(u,a))$ for at least $2/(3(2^n))^t$ of all $u$'s.
\end{claim}
$Tr$ and $a$ are constructed inductively: Since there are at most $2^n$ strings $X_j$, there is $X_1$ such that for at least $1/2^n$ of all $w$'s $tr(w)$ begins with $X_1$. Either there is $a\in \{0,1\}^{m-n^{d}}$ such that $tr(r_{X_1}(u,a))=X_1$ for at least $2/(3(2^n))$ of all $u$'s or there is $X_2$ such that for at least $1/(3(2^{2n}))$ of all $w$'s $tr(w)$ begins with $X_1,X_2$. Assume that for at least $1/(3^{i-1}(2^{in}))$ of $w$'s $tr(w)$ begins with $X_1,\dots, X_i$. Either there is $a\in \{0,1\}^{m-n^{d}}$ such that $tr(r_{X_{i}}(u,a))=X_1,\dots,X_i$ for at least $2/(3(2^n))^i$ of all $u$'s or there is $X_{i+1}$ such that for at least $1/(3^i(2^{(i+1)n}))$ $w$'s $tr(w)$ begins with $X_1,\dots,X_{i+1}$. This proves the claim.
\medskip
Fix now $Tr, a$ from the claim and let $r_{X_t}(\cdot,a)$ be the bits of $a$ in the positions of $[m]\backslash J_{X_t}(A)$.
\smallskip
Let $u\in \{0,1\}^{n^{d}}$. Since $A$ is an $(n,n^{d})$-design, for any row $x\neq X_t$ at most $n$ bits of $r_{X_t}(\cdot,a)|J_x(A)$ are not set.
Let $Y_x, x\neq X_t$ be the set of all counterexamples provided by $Y$ on $x$ and $r_{X_t}(u,a)|J_x(A)$ for all $u\in\{0,1\}^{n^{d}}$. This includes queries to $C$ on inputs $r_{X_t}(u,a)|J_x(A)$. The size of each set $Y_x$ is $2^{O(n)}$.
\smallskip
We are ready to describe a circuit $D'$ that approximates $C$ using as advice $X_1,\dots,X_t$, $r_{X_t}(\cdot,a)$ and sets $Y_{X_1},\dots,Y_{X_{t-1}}$.
For each $u\in\{0,1\}^{n^{d}}$ produce $r_{X_t}(u,a)$.
Then use $W_1$ to produce $x^1$. If $x^1\ne X_1$, output a random bit. Otherwise, use the advice from $Y_{X_1}$ to find out if $H(X_1)=NW_{C}(r_{X_t}(u,a))_{X_1}$. If the equality does not hold, output a random bit. Otherwise, use $W_2$ to generate $x^2$ and continue in the same manner until $W_t$ produces $x^t$. If $x^t\ne X_t$, output a random bit. Otherwise, output 0 iff $H(X_t)=1$. The resulting circuit $D'$ has $n^{d}$ inputs and size $2^{O(n)}$.
\medskip
By the choice of $Tr$, for at least $2/(3(2^n))^t$ of all $u\in\{0,1\}^{n^{d}}$, $D'$ will successfully predict $C(u)$.
Moreover, at most $1/(3(2^n))^t$ of all traces extend $X_1,\dots,X_t$. Since a random bit is the correct value on at least half of all $u$'s for which $tr(r_{X_t}(u,a))$ is not $Tr$ and does not extend $Tr$, $\Pr_u[D'(u)=C(u)]\geq 1/2+1/(3^{t}2^{nt+1})$.
\qed
}
\def\lfromisl{
A direct consequence of Theorem \ref{t:main} is that if we are given a learning algorithm which is able to predict a value of a significant fraction of p-size circuits w.r.t. a suitable distribution after polynomially many queries even on a single input, this already implies learnability of p-size circuits on almost all inputs. Moreover, if the prediction is done in a reliable way as described in box A, i.e. the prediction is zero-error, the learner is zero-error on a significant fraction of inputs with a significant confidence.
\begin{corollary}[Learning from frequent instance-specific learning]\label{c:main}
Let $d\ge 2, k\ge 1$ and $H$ be a Boolean function on $n$ inputs. Assume there are $2^{O(n)}$-size circuits $W_1^1,\dots,W_{l}^b$ for $l=O(\log n), b=2^{O(n)}$ witnessing errors of $n^{10kd}$-size circuits $(NW_C(w))_x$ as in Theorem \ref{t:main}. Further, assume that each circuit from $W_1^1,\dots,W_l^b$ can recognize, before receiving corrections, whether its output is going to work.\textcolor{red}{This is like having no interactions.}
Then, there are $2^{O(n)}$-size circuits which make $poly(n)$ queries to the $n^{dk}$-size circuit $C$ with $n^d$ inputs and predict the value of $C$ with zero-error on a $1/2^{O(n^{5})}$ fraction of inputs with confidence $1/2^{O(n^{5})}$.
\end{corollary}}
\medskip
\noindent {\bf Learning from witnessing lower bounds with white-box access.} Theorem \ref{t:main} holds also under the stronger assumption that circuits $W^1_1,\dots, W^b_{\log n}$ witness errors of $n^{10dk}$-size nondeterministic circuits $D$ with $n$ inputs (and $\le n^{10dk}$ nondeterministic bits), where $D$ computes a function in $\ensuremath{{\sf Circuit}}\xspace[n^{10dk}]$, i.e. $D$ is a nondeterministic circuit computing a function in \ensuremath{{\sf P/poly}}\xspace. Then it makes sense to allow $W^1_1,\dots, W^b_{\log n}$ to access a full description of a given nondeterministic circuit $D$. The conclusion of the resulting theorem remains valid with the only difference that the learning algorithm is given a full description of an $n^{dk}$-size nondeterministic circuit with $n^d$ inputs representing the target function (which is computable by an $n^{dk}$-size deterministic circuit with $n^d$ inputs).
\medskip
\noindent {\bf Comparison to witnessing in bounded arithmetic.} The existence of witnessing analogous to the one from Theorem \ref{t:main} follows from the provability of circuit lower bounds in bounded arithmetic.
If $H:\{0,1\}^n\rightarrow \{0,1\}$ is an \ensuremath{{\sf NP}}\xspace function and $n_0, k$ are constants, we can write down a $\forall\Sigma^b_2$ formula $\ensuremath{{\sf LB}}\xspace(H,n^k)$ stating that $H$ is hard for circuits of size $n^k$: $$\forall n,\ n>n_0\ \forall\ \text{circuit}\ D\ \text{of size}\ \leq n^k\ \exists y,\ |y|=n,\ D(y)\neq H(y),$$ where $D(y)\neq H(y)$ is a $\Sigma^b_2$ formula stating that a circuit $D$ on input $y$ outputs the opposite value of $H(y)$. Here, $\Sigma^b_2$ is a class of formulas in the language of Cook's theory \ensuremath{{\sf PV_1}}\xspace which define precisely the predicates from $\Sigma^p_2$ level of the polynomial hierarchy, cf.~\cite{Kpc}.
By the KPT theorem \cite{KPT}, if \ensuremath{{\sf PV_1}}\xspace proves $\ensuremath{{\sf LB}}\xspace(H,n^k)$ then there are finitely many $poly(n)$-time functions $W_1,\dots, W_l$ which witness the existential quantifiers of $\ensuremath{{\sf LB}}\xspace(H,n^k)$ (including the existential quantifier from the subformula $D(y)\ne H(y)$) in the same interactive way as in Theorem \ref{t:main} except that the corrections include strings standing for the innermost universal quantifier of $\ensuremath{{\sf LB}}\xspace(H,n^k)$ (which allow one to verify in p-time that $D(y)\ne H(y)$ has not been witnessed by the most recent candidates). Moreover, $W_1,\dots, W_l$ have access to the full description of a given circuit $D$ and do not make queries to $D$ but directly generate potential errors, cf. \cite{Pclba}.
It is possible to change the formula $\ensuremath{{\sf LB}}\xspace(H,n^k)$ by introducing a parameter $m$ satisfying $2^n=|m|$ so that the witnessing from the \ensuremath{{\sf PV_1}}\xspace-provability of the new formula is given by circuits $W_1,\dots, W_l$ of size $2^{O(n)}$. In such case, $H$ is allowed to be in $\mathsf{NE}$. We could allow $H$ to be even an arbitrary Boolean function if we formulated the lower bound in QBF proof systems instead of bounded arithmetic.
A crucial difference between the black-box witnessing from Theorem \ref{t:main} and white-box witnessing in bounded arithmetic is that, under standard hardness assumptions, the white-box witnessing of p-size circuit lower bounds for functions $H$ such as \ensuremath{{\sf SAT}}\xspace exists, cf. \cite{MP}.
\medskip
\noindent {\bf Comparison to other witnessing theorems.} Lipton and Young \cite{LY} showed that for each Boolean function $H$ hard for circuits of size $O(n^{k+1})$ there is a multiset of inputs $A$ of size $O(n^k)$, the so called anticheckers, such that each $n^k$-size circuit fails to compute $H$ on $\ge 1/3$ of inputs from $A$.
Therefore, for each distribution $\mathcal{R}$ on $n^k$-size circuits, some input from the set of anticheckers will witness an error of a random $n^k$-size circuit $D$ (without a single query to $D$) with probability $\ge 1/3$ over $\mathcal{R}$. Using $t$ rounds, the probability of witnessing an error can be increased to $1-1/(3/2)^t$. This can be done with $\le n^{O(kt)}$ witnessing circuits $W^i_j$. More precisely, we can let $W^i_1,\dots,W^i_t$ be the $i$-th possible $t$-tuple of inputs from the set of anticheckers, for $i<n^{O(kt)}$. Theorem \ref{t:main} shows that it is not possible to increase this probability further to $1-3/n^3$ using $\log n$ rounds unless p-size circuits can be learned efficiently.
Gutfreund, Shaltiel and Ta-Shma \cite{GST} showed that if \ensuremath{{\sf P}}\xspace $\ne$ \ensuremath{{\sf NP}}\xspace there is a p-time algorithm which, given a description of an $n^k$-time machine $D$, generates a set of $\le 3$ formulas such that $D$ fails to solve \ensuremath{{\sf SAT}}\xspace on one of them. Atserias \cite{Abb} extended this by showing that if $\ensuremath{{\sf NP}}\xspace\not\subseteq \mathsf{BPP}$ there is a probabilistic p-time algorithm which, given an oracle access to an $n^k$-time machine $D$, outputs with probability $\ge 1/8$ a set of formulas such that $D$ fails to solve \ensuremath{{\sf SAT}}\xspace on one of them. These algorithms differ from the witnessing in Theorem \ref{t:main} in several ways: they find errors of uniform algorithms, are allowed to generate errors of different lengths, generate errors with a significantly smaller probability than the probability required in Theorem \ref{t:main}, and the set of formulas generated by the algorithm of Atserias includes formulas on which the algorithm queried $D$.
\section{Learning from breaking pseudorandom generators}\label{s:leargen}
Circuit lower bounds can be used to construct PAC learning algorithms also if we assume that they break pseudorandom generators. The construction goes back to a
relation between predictability and pseudorandomness which can be interpreted in terms of learning algorithms, as shown by Blum, Furst, Kearns and Lipton \cite{BFKL} and later extended by several other works. In this section we survey some of these connections, derive a construction of learning algorithms from the non-existence of succinct nonuniform pseudorandom function families and show how these connections relate to a question of Rudich about turning demibits into superbits.
\bigskip
We start by recalling the construction from \cite{BFKL}, which underlies all results in this section.
\medskip
For an $n^c$-size circuit $C$ with $n$ inputs define a generator $$G_C:\{0,1\}^{mn}\mapsto\{0,1\}^{mn+m}$$ which maps $m$ $n$-bit strings $x_1,\dots,x_m$ to $x_1,C(x_1),\dots,x_m,C(x_m)$.
\begin{lemma}[from \cite{BFKL}]\label{l:gen}
There is a randomized p-time function $L$ such that for every $n^c$-size circuit $C$,
if an $s$-size circuit $D$ satisfies $$\Pr[D(x)=1]-\Pr[D(G_C(x))=1]\ge 1/s,$$ then the circuit $C$ is learnable by $L(D)$ over the uniform distribution with random examples, confidence $1/(2m^2s)$, up to error $1/2-1/(2ms)$.
\end{lemma}
\proof Given $D$, $L(D)$ chooses a random $i\in [m]$, random bits $r_{i},\dots,r_m$, random $n$-bit strings $x_1,\dots,x_m$ except $x_i$ and queries the bits $C(x_1),\dots, C(x_{i-1})$. For $x_i\in\{0,1\}^n$, let $p_i:=D(x_1,C(x_1),\dots,x_{i-1},C(x_{i-1}),x_i,r_i,\dots,x_m,r_m)$. Then $L(D)$ on $x_i$ predicts the value $C(x_i)$ by outputting $\neg r_i$ if $p_i=1$ and $r_i$ otherwise. By the triangle inequality, a random $i\in [m]$ satisfies $$\Pr[p_{i}=1]-\Pr[p_{i+1}=1]\ge 1/(ms)$$ with probability $1/m$. Since the probability over $r_i,\dots,r_m,x_1,\dots,x_m$ that $L(D)$ predicts $C(x_i)$ correctly is $$\frac{1}{2}\Pr[p_i=1\mid r_i\ne C(x_i)]+\frac{1}{2}(1-\Pr[p_i=1\mid r_i= C(x_i)]),$$ and $\Pr[p_i=1]=\frac{1}{2} \Pr[p_i=1\mid r_i=C(x_i)]+\frac{1}{2}\Pr[p_i=1\mid r_i\ne C(x_i)],$ it follows that $$\Pr_{x_i}[L(D)(x_{i})=C(x_i)]\ge 1/2+1/(2ms)$$ with probability $1/(2m^2s)$ over the internal randomness of $L(D)$. \qed
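The hybrid argument behind $L(D)$ can be simulated at toy sizes. The sketch below is our own illustration, not part of the proof: $C$ is parity, and $D$ is the obvious distinguisher that outputs $1$ iff some pair $(x,b)$ in its input is inconsistent with $C$ (so $\Pr[D(x)=1]$ is close to $1$ on random inputs and $0$ on outputs of $G_C$, as the lemma requires).

```python
import random

n, m = 5, 4            # toy parameters: block length n, number of blocks m

def C(x):
    """Stand-in for the unknown circuit to be learned: parity of the bits."""
    return sum(x) % 2

def D(pairs):
    """Toy distinguisher: 1 iff some pair (x, b) is inconsistent with C."""
    return int(any(b != C(x) for x, b in pairs))

def learner_predict(x_star):
    """One prediction of C(x_star), following the construction of L(D)."""
    i = random.randrange(m)                                  # hybrid position
    xs = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
    rs = [random.randrange(2) for _ in range(m)]
    pairs = [(xs[j], C(xs[j])) for j in range(i)]            # queried examples
    pairs.append((x_star, rs[i]))                            # challenge block
    pairs += [(xs[j], rs[j]) for j in range(i + 1, m)]       # random padding
    p_i = D(pairs)
    return 1 - rs[i] if p_i == 1 else rs[i]                  # predict C(x_star)

random.seed(0)
trials = 20000
correct = 0
for _ in range(trials):
    x = [random.randrange(2) for _ in range(n)]
    correct += learner_predict(x) == C(x)
accuracy = correct / trials
print(accuracy)        # empirically well above 1/2
```

With this particular $D$ the advantage concentrates on the last hybrid position, where the predictor is always correct; averaged over a random $i$ the empirical accuracy is noticeably above $1/2$, matching the lemma's conclusion.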
\medskip
\bigskip
The proof of Lemma \ref{l:gen} implies that learning on average follows from breaking pseudorandom generators. Specifically, let $R$ be a p-size circuit which given $r$ bits outputs an $n^c$-size circuit $C$ and consider a generator $G:\{0,1\}^{mn+r}\mapsto \{0,1\}^{mn+m}$ which applies $R$ on its first $r$ input bits in order to output a circuit $C$ and then computes as a generator $G_C$ on the remaining $mn$ inputs. Breaking $G$ implies that we can break $G_C$ with significant probability over $C$ drawn from the distribution induced by $R$. Consequently, breaking $G$ means that we can learn a large fraction of $n^c$-size circuits w.r.t. $R$. Can we improve this average-case learning into a worst-case learning which works for all $n^c$-size circuits? Since efficient learning algorithms for p-size circuits yield natural properties useful against p-size circuits, which by \cite{RR} break pseudorandom generators, a positive answer would present an important dichotomy: cryptographic pseudorandom generators do not exist if and only if there are efficient learning algorithms for small circuits (with suitable parameters). This possibility has been explored by Oliveira-Santhanam \cite{OS} and Santhanam \cite{S19}, cf. Section \ref{s:prfs}.
\begin{question}[Dichotomy]\label{q:dichotomy} Assume that for each $\epsilon<1$ there is no pseudorandom generator $g:\{0,1\}^n\mapsto \{0,1\}^{n+1}$ computable in \ensuremath{{\sf P/poly}}\xspace and safe against circuits of size $2^{n^{\epsilon}}$ for infinitely many $n$. Does it follow that p-size circuits are learnable by circuits of size $2^{O(n^\delta)}$, for some $\delta<1$, with confidence $1/n$, up to error $1/2-1/2^{O(n^\delta)}$?
\end{question}
\subsection{Worst-case learning from strong lower bound methods}
The proof of Lemma \ref{l:gen} shows also that we can construct a worst-case learning algorithm assuming that given an oracle access to a pseudorandom generator we can efficiently produce its distinguisher. In particular, a single method breaking all pseudorandom generators would suffice.
\begin{definition} The circuit size problem $\ensuremath{{\sf GCSP}}\xspace[s,k]$ is the problem of deciding whether for a given list of $k$ samples $(y_i,b_i)$, $y_i\in\{0,1\}^n, b_i\in\{0,1\}$, there exists a circuit $C$ of size $s$ computing the partial function defined by samples $(y_i,b_i)$, i.e. $C(y_i)=b_i$ for the given $k$ samples $(y_i,b_i)$. The parameterized minimum circuit size problem $\ensuremath{{\sf MCSP}}\xspace[s]$ stands for $\ensuremath{{\sf GCSP}}\xspace[s,2^n]$ where the list of $2^n$ samples defines the whole truth-table of a Boolean function.
\end{definition}
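At toy sizes, \ensuremath{{\sf GCSP}}\xspace can of course be decided by brute force. The following sketch is only illustrative and deviates from the definition in two ways we flag explicitly: it measures formula size rather than circuit size (subformulas are not shared), and it treats input variables as free. It enumerates all truth tables reachable with at most $s$ AND/OR/NOT gates and checks consistency with the samples.

```python
n = 3                              # toy input length; truth tables = 2^3-bit ints
MASK = (1 << (1 << n)) - 1

# Truth table of each input variable, as a bitmask over all 2^n inputs.
VARS = []
for v in range(n):
    tt = 0
    for x in range(1 << n):
        if (x >> v) & 1:
            tt |= 1 << x
    VARS.append(tt)

def tables_with_gates(s):
    """Truth tables computable by an AND/OR/NOT formula with at most s gates."""
    cost = {t: 0 for t in VARS}    # variables are free
    changed = True
    while changed:                 # relax until minimal formula sizes stabilize
        changed = False
        items = list(cost.items())
        for a, ca in items:
            t = (~a) & MASK        # NOT gate
            if ca + 1 <= s and cost.get(t, s + 1) > ca + 1:
                cost[t] = ca + 1
                changed = True
            for b, cb in items:    # AND / OR gates
                c_new = ca + cb + 1
                if c_new <= s:
                    for t in (a & b, a | b):
                        if cost.get(t, s + 1) > c_new:
                            cost[t] = c_new
                            changed = True
    return set(cost)

def gcsp(samples, s):
    """GCSP[s, k]: is some size-<=s formula consistent with all k samples?"""
    reachable = tables_with_gates(s)
    return any(all(((t >> x) & 1) == b for x, b in samples) for t in reachable)

# Samples drawn from 3-bit parity: no 1-gate formula fits, a larger one does.
samples = [(0b000, 0), (0b001, 1), (0b011, 0), (0b111, 1)]
print(gcsp(samples, 1), gcsp(samples, 14))
```

The exponential blow-up of this enumeration in $n$ and $s$ is exactly what the assumption $\ensuremath{{\sf GCSP}}\xspace[n^c,n^d]\in\ensuremath{{\sf P/poly}}\xspace$ below rules out.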
If we were extraordinarily good at proving circuit lower bounds, we could solve \ensuremath{{\sf GCSP}}\xspace efficiently. Note that $\ensuremath{{\sf MCSP}}\xspace[n^{O(1)}]\in\ensuremath{{\sf P/poly}}\xspace$ is a stronger assumption than the existence of a \ensuremath{{\sf P/poly}}\xspace-natural property useful against \ensuremath{{\sf P/poly}}\xspace, which breaks pseudorandom generators.
The following theorem appeared (in different terminology) in Vadhan \cite{LR}, see also~\cite{ILO}.
\begin{theorem}[Learning from succinct natural proofs]\label{t:gcsplear}
Assume $\ensuremath{{\sf GCSP}}\xspace[n^c,n^{d}]\in\ensuremath{{\sf P/poly}}\xspace$ for constants $d>c+1$. Then, $\ensuremath{{\sf Circuit}}\xspace[n^c]$ is learnable by \ensuremath{{\sf P/poly}}\xspace over the uniform distribution with random examples, confidence $1/poly(n)$, up to error $1/2-1/poly(n)$.
\end{theorem}
\proof
As the number of partial Boolean functions on a given set of $m$ inputs is $2^m$ and the number of $n^c$-size circuits is bounded by $2^{n^{c+1}}$,
$\ensuremath{{\sf GCSP}}\xspace[n^c,n^{d}]\in\ensuremath{{\sf P/poly}}\xspace$ implies that for $m=n^{d}$ there are p-size circuits $D$ such that for each $n^c$-size circuit $C$, $$\Pr[D(x)=1]-\Pr[D(G_C(x))=1]\ge 1/2.$$ Now, it suffices to apply Lemma \ref{l:gen}. \qed
\def\susanna{
\bigskip
An instance-specific version of Theorem \ref{t:gcsplear} in which the prediction of $C(x)$ on a single input $x$ can be obtained from proving a single circuit lower bound holds by definition if we want to predict the value of $C(x)$ only if it is determined by the queried/given samples as in box A.\footnote{The proof of Theorem \ref{t:gcsplear} does not give us an instance-specific learning - it exploits the fact that suitable circuit lower bounds yield more often a correct prediction than an incorrect one.} We conclude the section by pointing out that this could be strengthened to an instance-specific construction of a learning algorithm which predicts $C(x)$ on many inputs assuming an error concentration hypothesis.
Suppose the size of circuit $C$ is $s=n^k$. A simple counting argument shows that if we choose $y_1,\dots, y_{n^{k_1}}$ randomly for a sufficiently big $k_1$, then with high probability each circuit of size $n^{k}$ which coincides with $C$ on $y_1,\dots, y_{n^{k_1}}$ differs from $C$ only on a $\le \frac{1}{n}$-fraction of inputs. However, it is possible that each circuit of size $n^{k}$ errs on a different set of inputs and that their collective set of errors is too big: if the function computed by $C$ can actually be computed by a circuit of size $s-O(n)$, then for each $x$, we can construct a circuit $C'$ such that $C'(x)\ne C(x)$ but still $C'(y_i)=C(y_i)$ for $i=1,\dots,n^{k_1}$. Is this the case also if the size $s$ is guaranteed to be the size of a minimal circuit computing the partial function $y_1,C(y_1),\dots,y_{n^{k_1}},C(y_{n^{k_1}})$?
\begin{question}[Error concentration]\label{q:ach} Let $s:\mathbb{N}\mapsto\mathbb{N}$ be a function. Is it true that for every Boolean function $f$ with (minimal) circuit complexity $s$, for each $n$, there is $m=poly(s,n)$ such that randomly chosen $n$-bit strings $y_1,\dots,y_m$ force the set of $x\in\{0,1\}^n$ satisfying $C(x)\ne f(x)$ for some circuit $C$ of size $s$ which coincides with $f$ on $y_1,\dots,y_m$ to be small with high probability? More precisely, does the following hold? $$\Pr_{y_1,\dots,y_m}[|\{x\in\{0,1\}^n \mid \exists\ s\text{-size }C\text{ s.t. }\bigwedge_{i=1}^{m} C(y_i)=f(y_i)\wedge C(x)\ne f(x)\}|\le \frac{2^n}{n}]\ge 1-\frac{1}{n}$$
\end{question}
Since the size $s$ of the circuit $C$ can be provided as a nonuniform advice, a positive answer to Question \ref{q:ach} would mean that for every Boolean function $f$ whose minimal circuit has size $s$, for many inputs $x$, we can predict $f(x)$ by choosing random strings $y_1,\dots,y_m$ and proving a single lower bound as in box A.
\textcolor{red}{To be erased: instance-specific part. Question 3 resolved by a simple counter-example of Erfan and Susanna. Unclear what would be the right formulation. Not so important anyway.}}
\subsection{Worst-case learning from natural proofs}\label{s:natcikk}
In Theorem \ref{t:gcsplear}, we can learn $f\in\ensuremath{{\sf Circuit}}\xspace[n^c]$ even if the algorithm for \ensuremath{{\sf GCSP}}\xspace works just for a significant fraction of partial truth-tables $(y_1,b_1),\dots,(y_{n^d},b_{n^d})$ with zero-error on easy partial truth-tables.
Carmosino, Impagliazzo, Kabanets and Kolokolova \cite{CIKK} proved that the assumption of Theorem \ref{t:gcsplear} can be weakened to the existence of a standard natural property. The price for this is that the resulting learning uses membership queries instead of random examples.
The crucial idea is similar to the proof of Theorem \ref{t:main}: apply the natural property (as an algorithm for suitable \ensuremath{{\sf GCSP}}\xspace) on a Nisan-Wigderson generator $NW_f$ based on the function $f$, which we want to learn.
\begin{theorem}[Learning from natural proofs \cite{CIKK}]\label{t:cikk}
Let $R$ be a $\ensuremath{{\sf P/poly}}\xspace$-natural property useful against $\ensuremath{{\sf Circuit}}\xspace[n^d]$ for some $d \geq 1$. Then, for each $\gamma\in (0,1)$, $\ensuremath{{\sf Circuit}}\xspace[n^k]$ is learnable by $\ensuremath{{\sf Circuit}}\xspace[2^{O(n^{\gamma})}]$ over the uniform distribution with non-adaptive membership queries, confidence 1, up to error $\frac{1}{n^k}$, where $k = \frac{d\gamma}{a}$ and $a$ is an absolute constant.
\end{theorem}
\subsection{Learning from breaking pseudorandom function families}\label{s:prfs}
Oliveira and Santhanam \cite{OS} showed that the assumption of the existence of natural proofs from Theorem \ref{t:cikk} can be further weakened to the existence of a distinguisher breaking non-uniform pseudorandom function families. Their result follows from a combination of Theorem \ref{t:cikk} and the Min-Max Theorem. Using their strategy but combining the Min-Max Theorem with Theorem \ref{t:gcsplear}, learning algorithms with random examples can be obtained from distinguishers breaking succinct non-uniform pseudorandom function families.
\bigskip
A {\em two-player zero-sum game} is specified by an $r\times c$ matrix $M$ and is played as follows. MIN, the row player, chooses a probability distribution $p$ over the rows. MAX, the column player, chooses a probability distribution $q$ over the columns. A row $i$ and a column $j$ are drawn randomly from $p$ and $q$, and MIN pays $M_{i,j}$ to MAX. MIN plays to minimize the expected payment, MAX plays to maximize it. The rows and columns are called the {\em pure strategies} available to MIN and MAX, respectively, while the possible choices of $p$ and $q$ are called {\em mixed strategies}. The Min-Max theorem states that playing first and revealing one's mixed strategy is not a disadvantage:
$$min_{p} max_j \sum_i p(i)M_{i,j}=max_q min_i \sum_j q(j) M_{i,j}.$$ Note that the second player need not play a mixed strategy - once the first player's strategy is fixed, the expected payoff is optimized for the second player by playing some pure strategy. The expected payoff when both players play optimally is called the {\em value} of the game. We denote it $v(M)$.
A mixed strategy is {\em $k$-uniform} if it chooses uniformly from a multiset of $k$ pure strategies. Let $M_{min}=min_{i,j} M_{i,j}$ and $M_{max}=max_{i,j} M_{i,j}$. Newman \cite{Nm}, Alth\"ofer \cite{Alt} and Lipton-Young \cite{LY} showed that each player has a near-optimal $k$-uniform strategy for $k$ proportional to the logarithm of the number of pure strategies available to the opponent.
\begin{theorem}[\cite{Nm, Alt, LY}]\label{thm:minmax} For each $\epsilon>0$ and $k\ge \ln(c)/2\epsilon^2$, $$min_{p\in P_k} max_j \sum_{i} p(i) M_{i,j}\le v(M)+\epsilon(M_{max}-M_{min}),$$ where $P_k$ denotes the $k$-uniform strategies for MIN. The symmetric result holds for MAX.
\end{theorem}
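Theorem \ref{thm:minmax} can be checked numerically on a small game. The sketch below is an illustrative example of ours on matching pennies: it estimates $v(M)$ by a grid search over the row player's mixes (a shortcut that is specific to two-row games) and then verifies that some $k$-uniform strategy with $k\ge \ln(c)/2\epsilon^2$ is within $\epsilon$ of the value.

```python
import math

# Matching pennies: the row player MIN pays M[i][j] to the column player MAX.
M = [[1, -1],
     [-1, 1]]
rows, cols = len(M), len(M[0])

def max_payoff(p):
    """MAX's best column response against the row player's mixed strategy p."""
    return max(sum(p[i] * M[i][j] for i in range(rows)) for j in range(cols))

# v(M) for a two-row game via a fine grid over the row player's mix.
v = min(max_payoff((t / 10000, 1 - t / 10000)) for t in range(10001))

eps = 0.1
k = math.ceil(math.log(cols) / (2 * eps ** 2))   # k >= ln(c) / (2 eps^2)

# Best k-uniform strategy for MIN: put a of the k multiset slots on row 0.
best = min(max_payoff((a / k, 1 - a / k)) for a in range(k + 1))

print(v, k, best)     # best is within eps of the value of the game
```

Here $v(M)=0$, and since $M_{max}-M_{min}=2$ the theorem only promises $v(M)+2\epsilon$; the $k$-uniform strategy found above does considerably better on this particular game.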
\begin{definition}[Succinct non-uniform PRF] An $(m,m')$-succinct non-uniform pseudorandom function family from circuit class $\mathcal{C}$ safe against circuits of size $s$ is a set $S$ of partial truth-tables $\langle (x_1,b_1),\dots, (x_m,b_m)\rangle$ where each $x_i$ is an $n$-bit string and $b_i\in\{0,1\}$ such that each partial truth-table from $S$ is computable by one of $m'$ circuits from $\mathcal{C}$ and for every circuit $D$ of size $s$, $$\Pr_{x}[D(x)=1]-\Pr_{x\in S}[D(x)=1]<1/s$$ where the first probability is taken over $x\in\{0,1\}^{m(n+1)}$ chosen uniformly at random and the second probability over partial truth-tables chosen uniformly at random from $S$.
\end{definition}
\begin{theorem}[Learning or succinct non-uniform PRF]\label{t:towardscore}
Let $c\ge 1$ and $s>n,m\ge 1$. There is an $(m,8s^4)$-succinct non-uniform PRF in $\ensuremath{{\sf Circuit}}\xspace[n^c]$ safe against $\ensuremath{{\sf Circuit}}\xspace[s]$ or there are circuits of size $poly(s)$ learning $\ensuremath{{\sf Circuit}}\xspace[n^c]$ over the uniform distribution with random examples, confidence $1/poly(s)$, up to error $1/2-1/poly(s)$.
\end{theorem}
\proof
Consider a two-player zero-sum game specified by a matrix $M$ with rows indexed by $n^c$-size circuits with $n$ inputs and columns indexed by $s$-size circuits with $m(n+1)$ inputs. Define the entry $M_{C,D}$ of $M$ corresponding to a row circuit $C$ and a column circuit $D$ as $$M_{C,D}:=|\Pr_x[D(x)=1]-\Pr_x[D(G_C(x))=1]|$$ for the generator $G_C$ from the proof of Lemma \ref{l:gen}. Hence $M_{max}-M_{min}\le 1$.
If $v(M)\ge 1/4s$, then by Theorem \ref{thm:minmax} (with $\epsilon=1/8s$), there exists a multiset of $k\le 32n^{c+1}s^2$ $s$-size circuits $D^1,\dots, D^{k}$ such that for every $n^c$-size circuit $C$, a random $D$ from $D^1,\dots, D^{k}$ satisfies $$\text{E}[|\Pr[D(x)=1]-\Pr[D(G_C(x))=1]|]\ge 1/8s.$$
By Lemma \ref{l:gen}, for every $n^c$-size circuit $C$, one of the circuits $D^1,\dots, D^k$ (or their negations) can be used to learn $C$ with confidence $1/poly(s)$, up to error $1/2-1/poly(s)$.
A $poly(s)$-size circuit using a random $D^i$ from $D^1,\dots, D^k$ or its negation thus learns $\ensuremath{{\sf Circuit}}\xspace[n^c]$ with random examples, confidence $1/poly(s)$, up to error $1/2-1/poly(s)$.
\medskip
If $v(M)<1/4s$, then by Theorem \ref{thm:minmax} (with $\epsilon=1/4s$), there exists a multiset of $k\le 8s^4$ $n^c$-size circuits $C^1,\dots, C^k$ such that for every $s$-size circuit $D$, a random $C$ from $C^1,\dots, C^k$ satisfies $$\text{E}[|\Pr[D(x)=1]-\Pr[D(G_C(x))=1]|]\le 1/2s.$$ Since $\text{E}[|\Pr[D(x)=1]-\Pr[D(G_C(x))=1]|]\ge|\Pr[D(x)=1]-\text{E}[\Pr[D(G_C(x))=1]]|$ a generator $$G:\{0,1\}^{mn+\lceil\log k\rceil}\mapsto \{0,1\}^{mn+m}$$ which takes as input a string of length $mn+\lceil\log k\rceil$ encoding (an index of) a circuit $C$ from $C^1,\dots, C^k$ together with $m$ $n$-bit strings $x_1,\dots,x_m$ and outputs $x_1,C(x_1),\dots, x_m, C(x_m)$ is safe against circuits of size $s$. The range of $G$ defines an $(m,8s^4)$-succinct non-uniform PRF in $\ensuremath{{\sf Circuit}}\xspace[n^c]$ safe against $\ensuremath{{\sf Circuit}}\xspace[s]$.
\qed
\bigskip
Note that the existence of a generator $G$ from the proof of Theorem \ref{t:towardscore} follows directly from a counting argument if we do not require that $G$ defines a PRF of small complexity: a random set of $poly(s,n)$ strings (yielding a non-uniform pseudorandom generator mapping $\{0,1\}^{O(\log s)}$ to $\{0,1\}^n$) fools circuits of size $s$.
\def\unimin{
\bigskip
\noindent {\bf Lowering the bar to complexity-theoretic PRGs.} Let $M^{n,s}$ be a sequence of $2^n\times 2^s$ matrices specifying two-player zero-sum games. We say that a $poly(s)$-time algorithm $A$ is a sampling algorithm of a mixed strategy $q$ of the column player if $\forall j\in [2^s]$, $\Pr_{x\in\{0,1\}^{n}}[A(x)=j]=q(j)$.
\begin{hypothesis}[Uniform Min-Max]\label{h:uniminmax} Let $M^{n,s}$ be a sequence of $2^n\times 2^s$ matrices specifying two-player zero-sum games. Assume there is a p-time algorithm which given a sampling algorithm of a mixed strategy $q$ of the column player outputs a pure strategy $i$ of the row player such that $\sum_j q(j)M_{i,j}\le v(M)$. Then there is a p-time algorithm which given $1^s$ outputs a sampling algorithm of a mixed strategy $q'$ of the column player such that $min_i \sum_j q'(j)M_{i,j}\ge v(M)-1/2^s$.
\end{hypothesis}
A uniform version of the Min-Max theorem has been proved by Vadhan-Zheng \cite{VZ}. However, their algorithm does not seem to generate sampling algorithms of mixed strategies - just an implicit description of such strategies.
\begin{theorem}[Non-refutation of Learning or PRGs assuming uniform Min-Max]\label{t:almostcore}
Assume the uniform Min-Max hypothesis. Then, there is a p-time computable pseudorandom generator safe against $\ensuremath{{\sf Circuit}}\xspace[s]$, where $s\ge poly(n)$, or for every constant $c$, there is no p-time algorithm which given a circuit $D$ of size $poly(s,n)$ outputs a circuit $C$ of size $n^c$ such that $D$ does not learn $C$ with random examples, confidence $1/n$, up to error $1/2-1/poly(s,n)$.
\end{theorem}
\proof Proceed as in the proof of Theorem \ref{t:towardscore} but distinguish the following two cases. If for infinitely many $n$, $v(M)\ge 1/4s(n)$, then for infinitely many $n$, there are circuits of size $poly(s,n)$ learning circuits of size $n^c$ with random examples, confidence $1/n$, up to error $1/2-1/poly(s,n)$. If for all sufficiently big $n$, $v(M)<1/4s(n)$, then apply the uniform Min-Max hypothesis to conclude that for all sufficiently big $n$, for every circuit $D$ of size $s(n)$, the generator $$G:\{0,1\}^{mn+poly(s)}\mapsto \{0,1\}^{mn+m}$$ which takes as input a string of length $mn+poly(s)$ encoding an input of the sampling algorithm from the uniform Min-Max hypothesis together with $m$ $n$-bit strings $x_1,\dots,x_m$ and outputs $x_1,C(x_1),\dots, x_m, C(x_m)$ is safe against circuits of size $s$. \qed
\bigskip
Since the existence of an efficient algorithm generating strategies for the second player follows from the provability of a suitable statement in intuitionistic \ensuremath{{\sf S^1_2}}\xspace, we can instead assume that the non-existence of an efficient learner is also feasibly provable.
\begin{corollary}[Consistency of Learning or PRGs assuming uniform Min-Max]\label{c:conscore}
Assume uniform Min-Max hypothesis. Then, there is a p-time computable pseudorandom generator safe against $\ensuremath{{\sf Circuit}}\xspace[s]$, where $s\ge poly(n)$, or for every constant $c$, it is consistent with intuitionistic $\ensuremath{{\sf S^1_2}}\xspace$ that there are circuits of size $poly(s,n)$ learning $\ensuremath{{\sf Circuit}}\xspace[n^c]$ with random examples, confidence $1/n$, up to error $1/2-1/poly(s,n)$.
\end{corollary}
\proof[Proof sketch.] Proceed as in the proof of Theorem \ref{t:towardscore} but in the case $v(M)<1/4s$ apply witnessing theorem. \qed
}
\subsection{Superbits vs demibits}\label{s:rudich}
Rudich \cite{Rs} proposed a conjecture about the existence of superbits, a version of pseudorandom generators safe against nondeterministic circuits, and showed that it rules out the existence of \ensuremath{{\sf NP}}\xspace-natural properties against \ensuremath{{\sf P/poly}}\xspace. He then asked whether the existence of superbits follows from the seemingly weaker assumption of the existence of so-called demibits. We note that an affirmative answer to his question would resolve Question \ref{q:dichotomy} in the nondeterministic setting.
\begin{definition}[Superbit] A function $g:\{0,1\}^n\mapsto\{0,1\}^{n+1}$ computable by p-size circuits is a superbit if there is $\epsilon<1$ such that for infinitely many input lengths $n$, for all nondeterministic circuits $C$ of size $|C|\le 2^{n^{\epsilon}}$, $$\Pr_{x\in\{0,1\}^{n+1}}[C(x)=1]-\Pr_{x\in\{0,1\}^n}[C(g(x))=1]<1/|C|.$$
\end{definition}
\begin{definition}[Demibit] A function $g:\{0,1\}^n\mapsto\{0,1\}^{n+1}$ computable by p-size circuits is a demibit if there is $\epsilon<1$ such that for infinitely many input lengths $n$, no nondeterministic circuit $C$ of size $|C|\le 2^{n^{\epsilon}}$ satisfies $$\Pr_{x\in\{0,1\}^{n+1}}[C(x)=1]\ge 1/|C|\ \ \ \text{ and }\ \ \ \Pr_{x\in\{0,1\}^n}[C(g(x))=1]=0.$$
\end{definition}
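To clarify the relation between the two notions (a routine observation, included here for completeness): every superbit is, in particular, a demibit. Indeed, if a nondeterministic circuit $C$ of size $|C|\le 2^{n^{\epsilon}}$ satisfied $\Pr_{x\in\{0,1\}^{n+1}}[C(x)=1]\ge 1/|C|$ and $\Pr_{x\in\{0,1\}^n}[C(g(x))=1]=0$, then $$\Pr_{x\in\{0,1\}^{n+1}}[C(x)=1]-\Pr_{x\in\{0,1\}^n}[C(g(x))=1]\ge 1/|C|,$$ violating the superbit condition on the same input lengths. Rudich's question asks for the converse implication.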
\begin{proposition}[Question \ref{q:dichotomy} vs Rudich's problem] Assume the existence of demibits implies the existence of superbits. Then, either superbits exist or for each $c\ge 1$ and each $\epsilon<1$, $\ensuremath{{\sf Circuit}}\xspace[n^c]$ is learnable by $\ensuremath{{\sf Circuit}}\xspace[2^{O(n^{\epsilon})}]$ over the uniform distribution with random examples, confidence $1/2^{O(n^{\epsilon})}$, up to error $1/2-1/2^{O(n^{\epsilon})}$, where the learner is allowed to generate a nondeterministic or co-nondeterministic circuit approximating the target function.
\end{proposition}
\proof Assume superbits do not exist and their non-existence implies the non-existence of demibits. Consider a generator $G:\{0,1\}^{mn+n^{c+1}}\mapsto\{0,1\}^{mn+m}$, with $m=n^{c+1}+1$, which interprets the first $n^{c+1}$ bits of its input as a description of an $n^c$-size circuit $C$ and then computes on the remaining $mn$ input bits as the generator $G_C$ from Lemma \ref{l:gen}. Since $G$ is not a demibit, for each $\epsilon<1$ there are nondeterministic circuits $D$ of size $2^{(mn+m-1)^{\epsilon}}$ such that for each $n^c$-size circuit $C$, $$\Pr[D(x)=1]-\Pr[D(G_C(x))=1]\ge 1/|D|.$$ By the proof of Lemma \ref{l:gen}, this means that $n^c$-size circuits are learnable by circuits of size $poly(|D|)$ with confidence $1/poly(|D|)$, up to error $1/2-1/poly(|D|)$, except that the learner might generate a nondeterministic (if $r_i=0$) or co-nondeterministic (if $r_i=1$) circuit approximating the target function. \qed
\section{Learning speedup}\label{s:speedup}
A striking consequence of the relation between natural proofs and learning algorithms is a learning speedup of Oliveira and Santhanam \cite{OS}.
Suppose \ensuremath{{\sf P/poly}}\xspace is learnable by circuits of weakly subexponential size $2^n/n^{\omega(1)}$. The learning circuits can be used to accept truth-tables of all functions in \ensuremath{{\sf P/poly}}\xspace while their size guarantees that many hard functions are going to be rejected. This implies the existence of a \ensuremath{{\sf P/poly}}\xspace-natural property useful against \ensuremath{{\sf P/poly}}\xspace, which, by Theorem \ref{t:cikk}, gives us circuits of strongly subexponential size $2^{n^\gamma}$, $\gamma<1$, learning \ensuremath{{\sf P/poly}}\xspace.
The argument of Oliveira and Santhanam can be generalized to a speedup of learners of arbitrary size $s$.
Here, we show how to derive such a generalized version more directly without constructing natural proofs and invoking Theorem \ref{t:cikk}. This is possible thanks to a more direct exploitation of a slightly modified NW-generator. A drawback of the approach is that we need to assume learning with random examples instead of membership queries.
\begin{theorem}[Generalized speedup]\label{t:speedup} Let $d,k\ge 1$ and $n\le s(n)\le 2^{n}/n$. Assume $\ensuremath{{\sf Circuit}}\xspace[n^{10dk}]$ is learnable by $\ensuremath{{\sf Circuit}}\xspace[s(n)]$ over the uniform distribution with random examples, confidence $1$, up to error $1/2-5/n$. Then circuits of size $m^k$ with $m=n^d$ inputs are learnable by circuits of size $n^{dK}(s(n))^3$ over the uniform distribution with non-adaptive membership queries, confidence $1/n^3$, up to error $1/2-1/n$. Here, $K$ is an absolute constant.
\end{theorem}
Theorem \ref{t:speedup} implies, for example, that if p-size circuits are learnable with random examples by circuits of quasipolynomial size $n^{O(\log n)}$, then p-size circuits are learnable with membership queries by circuits of size $O(n^{\epsilon \log n})$, for each $\epsilon>0$. The speedup is achieved w.r.t. the input length of target functions at the expense of their circuit complexity.
\proof
Let $A$ be a $2^{b}\times u$ 0-1 matrix forming a $(b,n^{d})$-design with $|J_i(A)|=n^{d}$ for $n^{2d}\le u\le 2n^{2d}$, a constant $d$ and parameter $b$ such that $ns\le 2^{b}\le 2ns$. The design is constructed in the usual way by evaluating polynomials of degree $\le b$ on $n^{d}$ points of a field with $n^d\le p\le 2n^d$ elements.
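The polynomial construction of the design can be made concrete. The following toy sketch (with small hypothetical parameters, not the sizes used in the proof, and using all $p$ evaluation points for simplicity) builds one set per polynomial of degree $\le b$ over a prime field and checks the defining property that distinct sets intersect in at most $b$ positions:

```python
from itertools import product

def nw_design(p, b):
    """Toy NW design: universe of size p*p, one set per polynomial of
    degree <= b over GF(p).  The set of polynomial q is its graph
    {(a, q(a)) : a in GF(p)}, with the pair (a, v) encoded as a*p + v."""
    sets = []
    for coeffs in product(range(p), repeat=b + 1):
        q = lambda a, c=coeffs: sum(ci * pow(a, i, p) for i, ci in enumerate(c)) % p
        sets.append(frozenset(a * p + q(a) for a in range(p)))
    return sets

p, b = 5, 2
S = nw_design(p, b)
assert len(S) == p ** (b + 1)        # one set per coefficient vector
assert all(len(s) == p for s in S)   # each set has exactly p elements
# distinct degree-<=b polynomials agree on at most b field elements,
# hence distinct sets intersect in at most b positions
assert all(len(S[i] & S[j]) <= b
           for i in range(len(S)) for j in range(i + 1, len(S)))
```

In the proof only $n^d$ of the evaluation points are used and the parameters are chosen so that $ns\le 2^b\le 2ns$; the code above merely illustrates the small-intersection property on which the argument relies.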
In particular, there are $n^{9d}$-size circuits which given $i\in\{0,1\}^{b}$ and $w\in\{0,1\}^u$ output $w|J_i(A)$. Define $NW_f$-generator mapping strings $w$ of length $u$ to strings of length $2^{n}$ as $$(NW_f(w))_{x_1,\dots,x_n}=f(w|J_{x_1,\dots,x_{b}}(A)).$$ Then for each $m$-input function $f\in\ensuremath{{\sf Circuit}}\xspace[m^k]$ and $w\in\{0,1\}^{u}$, $(NW_f(w))_x$ is computable as a function of $x\in\{0,1\}^n$ by a circuit of size $n^{10dk}$.
By the assumption of the theorem every such circuit $(NW_f(w))_x$ is learnable by a circuit $L$ of size $s$ with confidence $\delta=1$, up to error $1/2-\epsilon$. Consequently, there is a circuit $D^f$ of size $O(s^3)$ such that \begin{equation}\label{e:speed1}\Pr_{w,x,y^1,\dots,y^t}[D^f(x_1,\dots,x_n,w,y^1,\dots,y^t)=f(w|J_{x_1,\dots,x_{b}}(A))]\ge (1/2+\epsilon)\delta\end{equation} where $D^f$ queries values $f(w|J_{y^j}(A))$ for $t\le s$ random strings $y^j\in\{0,1\}^{b}$, $j=1,\dots, t$. The size of $D^f$ takes into account the need to simulate the circuit described by $L$. Now, random $y^1,\dots,y^t$ satisfy \begin{equation}\label{e:speed3}\Pr_{w,x}[D^f(x_1,\dots,x_n,w,y^1,\dots,y^t)=f(w|J_{x_1,\dots,x_{b}}(A))]\ge 1/2+\epsilon-1/n\end{equation} with probability at least $1/n$. Otherwise, the probability in (\ref{e:speed1}) would be $<1/n+(1/2+\epsilon-1/n)$. Similarly, given $y^1,\dots,y^t$ such that (\ref{e:speed3}) holds, a random $x\in\{0,1\}^n$
satisfies \begin{equation}\label{e:speed2}\Pr_{w}[D^f(x_1,\dots,x_n,w,y^1,\dots,y^t)=f(w|J_{x_1,\dots,x_{b}}(A))]\ge 1/2+\epsilon-3/n\end{equation} with probability at least $2/n$.
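The averaging steps here are routine; to make the first one explicit, write $p(y^1,\dots,y^t)=\Pr_{w,x}[D^f(x_1,\dots,x_n,w,y^1,\dots,y^t)=f(w|J_{x_1,\dots,x_{b}}(A))]$ and suppose that (\ref{e:speed3}) held with probability less than $1/n$ over the choice of $y^1,\dots,y^t$. Since $p\le 1$, the probability in (\ref{e:speed1}) equals $\mathrm{E}_{y^1,\dots,y^t}[p]<\frac{1}{n}\cdot 1+\left(\frac{1}{2}+\epsilon-\frac{1}{n}\right)=\frac{1}{2}+\epsilon$, contradicting (\ref{e:speed1}) with $\delta=1$. The remaining averaging steps are analogous.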
Moreover, since every $y^j$ specifies $2^{n-b}$ values of $(NW_f(w))_x$, given $y^1,\dots, y^t$, a random $x\in\{0,1\}^n$ equals some $y^j$ on the first $b$ bits with probability $\le t/2^b\le 1/n$. Applying the same averaging one more time, for $y^1,\dots,y^t$ and $x$ which differs on the first $b$ bits from each $y^j$ and satisfies (\ref{e:speed2}), randomly fixed $u-n^d$ bits of $w$ on the positions of $[u]\backslash J_{x}(A)$ preserve the probability (\ref{e:speed2}) up to an additional error $1/n$ with probability at least $1/n$.
For each $y^1,\dots,y^t$, each $x$ which differs on the first $b$ bits from every $y^j$ and for each fixation of $u-n^d$ bits of $w$ on the positions of $[u]\backslash J_{x}(A)$, $(b,n^d)$-design guarantees that the number of all queries $f(w|J_{y^j}(A))$, $j=1,\dots,t$, of $D^f$ for all possible $w$ with the $u-n^d$ fixed bits is $\le t2^{b}$. We can thus learn a circuit $D'$ approximating $f\in \ensuremath{{\sf Circuit}}\xspace[m^k]$ with $m=n^d$ inputs with advantage $1/2+\epsilon-4/n$ in the following way. Choose random $y^1,\dots,y^t$, $x$, random $u-n^d$ bits of $w$ corresponding to $[u]\backslash J_{x}(A)$ and query $\le t2^b$ values $f(w|J_{y^j}(A))$ for all possible $w$ with the $u-n^d$ fixed bits. Then the circuit $D'$, given $n^d$ bits of $w$ corresponding to $J_x(A)$, generates $w$ and computes as $D^f$ with the provided queries $f(w|J_{y^j}(A))$. Since $w$ can be constructed from given $n^d$ bits, $x$ and the $u-n^d$ fixed bits of $w$ by a circuit of size $n^{O(d)}$, each $w|J_{y^j}(A)$ can be constructed from $w$ and $y^j$ by a circuit of size $n^{9d}$ and for each query to $f$ the right value can be selected by a circuit of size $O(n^dt2^b)$, the size of $D'$ is $O(s^3+tn^{9d}+n^dt^22^b+n^{O(d)})\le n^{O(d)}s^3$. $D'$ can be described by $n^{dK}s^3$ bits, for an absolute constant $K$, and constructed by a circuit of the same size which just substitutes $y^j, x$ and $u-n^d$ bits of $w$ in the otherwise fixed description of $D'$.
Since random $y^1,\dots,y^t$ satisfy (\ref{e:speed3}) with probability at least $1/n$, a random $x$ differs on the first $b$ bits from each $y^1,\dots,y^t$ and satisfies (\ref{e:speed2}) with probability at least $1/n$ while the randomly fixed $u-n^d$ bits of $w$ have the desired property with probability at least $1/n$ as well, the confidence of the learning algorithm is at least $1/n^3$. \qed
\bigskip
We give one more proof of the learning speedup which also addresses the issue of membership queries.
\begin{theorem}[Alternative speedup]\label{t:aspeed}
Let $d\ge 2$, $k\ge 1$, and $\epsilon<1$. Assume $\ensuremath{{\sf Circuit}}\xspace[n^{10dk}]$ is learnable by $\ensuremath{{\sf Circuit}}\xspace[2^{\epsilon n}]$ over the uniform distribution (possibly with membership queries) with confidence $1$, up to error $1/n^5$. Then, circuits of size $n^{dk}$ with $n^d$ inputs are learnable by circuits of size $2^{Kn}$ over the uniform distribution with confidence $1/2^{Kn}$, up to error $1/2-1/2^{Kn}$, where $K$ is an absolute constant.
\end{theorem}
\proof By a counting argument there exists $H$ which is not $(1-1/n)$-approximable by circuits of size $2^{\epsilon n}$. Here, $n$ is w.l.o.g. sufficiently large. By Lemma \ref{l:mainconverse}, learnability of $\ensuremath{{\sf Circuit}}\xspace[n^{10dk}]$ by $\ensuremath{{\sf Circuit}}\xspace[2^{\epsilon n}]$ up to error $1/n^5$ implies the existence of circuits of size $2^{O(n)}$ witnessing errors of circuits of size $n^{10dk}$ with probability $\ge 1-2/n^4$. The conclusion thus follows by applying Theorem \ref{t:main}. The improved confidence and approximation parameters are a consequence of the fact that our witnessing circuits succeed in the first round, i.e. $t=1$. \qed
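The counting argument invoked at the beginning of the proof is standard; we sketch it with unoptimized constants. A circuit of size $2^{\epsilon n}$ on $n$ inputs can be described by $O(n\cdot 2^{\epsilon n})\le 2^{\epsilon' n}$ bits, for any fixed $\epsilon<\epsilon'<1$ and sufficiently large $n$, so there are at most $2^{2^{\epsilon' n}}$ such circuits. A fixed circuit $(1-1/n)$-approximates at most $\sum_{i\le 2^n/n}{2^n\choose i}\le 2^{h(1/n)2^n}$ Boolean functions, where $h$ denotes the binary entropy function. As $h(1/n)\to 0$ and $\epsilon'<1$, circuits of size $2^{\epsilon n}$ jointly $(1-1/n)$-approximate at most $2^{2^{\epsilon' n}+h(1/n)2^n}<2^{2^n}$ of the $2^{2^n}$ functions on $n$ bits, so a function $H$ as required exists.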
\bigskip
\noindent {\bf Proof-search speedup.} The core trick behind Theorem \ref{t:speedup} can be formulated in the context of
proof complexity. Assume that an $n^{10dk}$-size lower bound is provable in a proof system $P$ by a proof of size $s(n)$. Then, a substitutional instance of the same $P$-proof of size $s(n)$ proves an $m^k$-size lower bound for circuits with $m=n^d$ inputs, on inputs given by the NW-generator from the proof of Theorem \ref{t:speedup}. Here, the base function of the NW-generator is not specified but represented by free variables encoding a circuit of size $m^k$.
\def\prfspeed{The observation can be formulated also more generally as a form of a proof-search speedup. In fact, in order to do so, we do not need the modified NW-generator from Theorem \ref{t:speedup}, the standard one will suffice.
\smallskip
Denote by $\ensuremath{{\sf lb}}\xspace_Y(f,t)$ a $poly(t,n)$-size propositional formula expressing a $t(n)$-size circuit lower bound for a Boolean function $f$ restricted to inputs
$Y\subseteq\{0,1\}^n$.
The formula $\ensuremath{{\sf lb}}\xspace_Y(f,t)$ has $poly(t)$ variables representing circuits of size $t$ and contains also auxiliary variables needed to encode computations of $t$-size circuits on inputs from $Y$. The auxiliary variables are not essential in the sense that their true value can be determined in p-time from $Y$ and an assignment encoding a $t$-size circuit.
If $Y=\{0,1\}^n$, we denote the resulting formula $\ensuremath{{\sf lb}}\xspace_Y(f,t)$ by $\ensuremath{{\sf tt}}\xspace(f,t)$.
Further, we say that a proof system $P$ admits substitutions if for each $s$-size $P$-proof of a formula $\phi$ and a partial assignment $a$ of variables of $\phi$, we can find a $P$-proof of $\phi(a)$ in time $O(s)$.
\begin{theorem}[Proof-search speedup]\label{t:autsp} Let $d,k\ge 1$. Assume that a proof system $P$ admits substitutions and there are $2^{O(n)}$-size circuits which given a tautology $\ensuremath{{\sf tt}}\xspace(f,n^{5dk})$ find its $P$-proof. Then there are $2^{O(n)}$-size circuits which find a $P$-proof of each tautology $\ensuremath{{\sf lb}}\xspace_Y(h,m^k)$, where $h$ is a Boolean function with $m=n^d$ inputs which is hard for circuits of size $m^{Kdk}$, for a sufficiently big absolute constant $K$, $Y$ is a set of $n^d$-bit strings of the form $w|J_i(A)$ for a $2^n\times u$ matrix $A$ being the $(n,n^d)$-design from the proof of Theorem \ref{t:main} and $w\in\{0,1\}^u$.
\end{theorem}
Theorem \ref{t:autsp} shows that if we can find proofs of p-size lower bounds for Boolean functions with $n$ inputs by circuits of size $2^{O(n)}$, then we can find proofs of p-size lower bounds for Boolean functions with $n$ inputs satisfying the conditions of Theorem \ref{t:autsp} by circuits of size $2^{O(n^\epsilon)}$, for each $\epsilon>0$.
\proof Given a tautology $\ensuremath{{\sf lb}}\xspace_Y(h,m^k)$ we consider a formula $\ensuremath{{\sf tt}}\xspace(f,n^{5dk})$ such that $\ensuremath{{\sf lb}}\xspace_Y(h,m^k)$ can be obtained from $\ensuremath{{\sf tt}}\xspace(f,n^{5dk})$ by a partial assignment of its variables. Such a formula with a function $f$ exists and is specified by the assumption of the theorem.
The formula $\ensuremath{{\sf tt}}\xspace(f,n^{5dk})$ is a tautology as well. Otherwise, there would be a circuit $C$ of size $n^{5dk}$ computing $f$ which could be used to compute $h$: The set $Y$ consists of strings $w|J_i(A)$. By the construction of the design $A$, each $w|J_i(A)$ specifies $n^d$ points of a polynomial of degree $n$. Given $w$, for each $w|J_i(A)$ we can determine a polynomial $p$ of degree $n$ which generates $w|J_i(A)$. In fact, there are $poly(n)$-size circuits which given $w$ and $w|J_i(A)$ output such a polynomial, represented as a sum of monomials. Composing these circuits with $C$ yields a circuit of size $m^{Kdk}$ computing $h$. This specifies the constant $K$ and contradicts the hardness of $h$.
Therefore, by the assumption of the theorem we can find a $P$-proof of $\ensuremath{{\sf tt}}\xspace(f,n^{5dk})$ using circuits of size $2^{O(n)}$. As $P$ admits substitutions and there are circuits constructing the design $A$ from the proof of Theorem \ref{t:main}, we obtain also a $2^{O(n)}$-size $P$-proof of $\ensuremath{{\sf lb}}\xspace_Y(h,m^k)$. \qed
}
\bigskip
\noindent {\bf Nonlocalizable hardness magnification.} Theorem \ref{t:speedup} and the original speedup of Oliveira and Santhanam can be interpreted as hardness magnification theorems. Hardness magnification is an approach to strong complexity lower bounds by reducing them to seemingly much weaker lower bounds developed in a series of recent papers \cite{HM, MP, OPS, MMW, CMMW, CT, CJWs, CHOPRS, CJWt, Msca, CHMY}, see \cite{CHOPRS} for a more comprehensive survey. For example, it turns out that in order to prove that functions computable in nondeterministic quasipolynomial-time are hard for \ensuremath{{\sf NC}^1}\xspace it suffices to show that a parameterized version of the minimum circuit size problem \ensuremath{{\sf MCSP}}\xspace is hard for $\ensuremath{{\sf AC}^0}\xspace[2]$. However, \cite{CHOPRS} identified a {\em locality barrier} which explains why direct adaptations of many existing lower bounds do not yield strong complexity lower bounds via hardness magnification. Essentially, the reason is that the existing lower bounds for explicit Boolean functions work often even for models which are allowed to use arbitrary oracles with $n^{o(1)}$-small fan-in. This is easy to see in the case of $\ensuremath{{\sf AC}^0}\xspace[2]$ lower bounds: oracles of small fan-in can be simulated by polynomials of low degree. On the other hand, hardness magnification theorems typically yield (unconditional) upper bounds in the form of weak computational models extended with local oracles computing specific problems such as the abovementioned version of \ensuremath{{\sf MCSP}}\xspace. 
In fact, even irrespective of hardness magnification it is important to develop lower bound methods which do not localize: proving the nonexistence of subexponential-size learning algorithms for \ensuremath{{\sf P/poly}}\xspace would imply the nonexistence of \ensuremath{{\sf P/poly}}\xspace natural properties against \ensuremath{{\sf P/poly}}\xspace but it is not hard to see that natural properties against \ensuremath{{\sf P/poly}}\xspace are computable by p-size circuits with local oracles. Overcoming the locality barrier is thus essential for proving strong complexity lower bounds in general.\footnote{Some known circuit lower bounds above the magnification threshold are provably nonlocalizable but they do not fit to the framework of the so called Hardness Magnification frontier \cite{CHOPRS}, one reason being that they do not work for explicit and natural problems, cf. \cite{CHOPRS, CJWt}. For example, a nonlocalizable lower bound from \cite{CHOPRS} works for a function in $\mathsf{E}$ which is artificial in the sense that it is designed to avoid localization, not for a problem of independent interest such as \ensuremath{{\sf MCSP}}\xspace. Oliveira \cite{Okol} showed that near superlinear-size lower bounds for a version of \ensuremath{{\sf MCSP}}\xspace defined w.r.t. a notion of randomized Kolmogorov complexity imply strong circuit lower bounds while the same problem is provably hard for probabilistic p-time. The lower bound of Oliveira works, however, only against uniform models of computation. Moreover, the magnification theorem concludes at best a `weak' lower bound of the form quasipolynomial-time $\mathsf{QP}$ being hard for \ensuremath{{\sf P/poly}}\xspace. Similarly, an approach of Chen, Jin and Williams \cite{CJWt} via derandomizations and uniform obstructions appears to avoid the locality barrier but yields at best lower bounds of the form $\mathsf{QP}\not\subseteq\ensuremath{{\sf P/poly}}\xspace$.
Theorem \ref{t:speedup}, read contrapositively, is a magnification of $O(n^{\epsilon \log n})$-size lower bounds for learning p-size circuits to $n^{O(\log n)}$-size lower bounds. This differs from previous hardness magnification theorems
by avoiding localization: the size of the learner plays a crucial role in the reduction and therefore cannot be simply replaced by an arbitrary oracle. The same trick is behind non-blackbox worst-case to average-case reductions within \ensuremath{{\sf NP}}\xspace of Hirahara \cite{Hb}. To the best of my knowledge, the only other hardness magnification theorems with this property
appeared in \cite{CHOPRS} and \cite{Hmeta}.\footnote{There are two more results which could be potentially classified as nonlocalizable hardness magnifications. A theorem of Buresh-Oppenheim and Santhanam \cite[Theorem 1]{JOS} is based on an exploitation of Nisan-Wigderson generators similar to that of \cite{CHOPRS} but it seems less practical in its current form, as it magnifies only lower bounds for nondeterministic circuits. The other result of Tal \cite{Tmag} shows that an average-case hardness for formulas of size $s$ can be magnified to the worst-case hardness for slightly bigger formulas. A problem is that \cite{Tmag} magnifies at best to an $s^2$-size lower bound. Moreover, if we wanted to strengthen it further by connecting it with another magnification theorem, it is not clear how to preserve the nonlocalizability: the weak lower bound obtained via \cite{Tmag} would likely localize.} \cite[Theorem 1]{CHOPRS}, like Hirahara \cite{Hb} and the speedup of Oliveira-Santhanam, is based on the result of Carmosino, Impagliazzo, Kabanets and Kolokolova \cite{CIKK}. However, the hardness magnification from \cite{CHOPRS} is still captured by the locality barrier: it asks for a lower bound for a version of \ensuremath{{\sf MCSP}}\xspace whose localized version does not hold (as witnessed by other hardness magnification theorems). Theorem \ref{t:speedup} does not seem to localize in this sense either: it asks for an $n^{\epsilon \log n}$-size lower bound on learning algorithms while there seems to be no reason to expect that p-size circuits are learnable by circuits of size $O(n^{\log n})$ extended with oracles of fan-in $n^{o(1)}$. (Such a localization would mean that p-size circuits are learnable in subexponential size.)
The magnification theorems of Hirahara \cite{Hmeta} face similar complications.\footnote{Hirahara \cite[Theorem 11 and 13]{Hmeta} proves two types of magnification theorems. The first type essentially adapts the result from \cite{CHOPRS} in the context of weaker computational models. The second type extends it by introducing metacomputational circuit lower bound problems MCLPs and showing that weak lower bounds for MCLPs can be magnified as well. MCLPs are not solvable by any algorithm whatsoever unless standard hardness assumptions break. This implies that there is no unconditional upper bound for MCLPs and the locality barrier does not apply. Unfortunately, we do not have any interesting lower bound for MCLPs either. The corresponding magnification theorems thus do not establish a Hardness Magnification frontier~\cite{CHOPRS}. Nevertheless, as suggested in \cite{Hmeta}, developing such methods might be a way to strong lower bounds.}
Unfortunately, Theorem \ref{t:speedup} does not reduce p-size lower bounds to, say, subquadratic lower bounds: It magnifies $n^{O(d)}s^3$-size lower bounds for learning functions with $m=n^d$ inputs (and circuit complexity $m^{k}$) to an $s$-size lower bound for learning functions with $n$ inputs (and circuit complexity $n^{10dk}$). That is, a polynomial speedup w.r.t. the input length of target functions is traded for a polynomial decrease of the circuit size of target functions. Ideally, we would like to magnify, say, an $n^{1.9}$-size formula lower bound for learning circuits of size $n^{1.1}$ with $n$ inputs to $n^{O(1)}$-size formula lower bounds for learning circuits of size $n^{2.1}$ with $n$ inputs. If the existing methods for proving the required formula lower bounds were applicable to prove subquadratic formula lower bounds for learning algorithms (note that such lower bounds are allowed to localize and naturalize), such a strengthening of Theorem \ref{t:speedup} would lead to explicit \ensuremath{{\sf NC}^1}\xspace lower bounds.
\section{Concluding remarks and open problems}\label{s:concluding}
The methods for deriving learning algorithms from circuit lower bounds presented in this paper might be improvable in many ways.
\medskip
\noindent {\bf Safe cryptography or efficient learning.} Perhaps the most appealing question asks for bridging cryptography and learning theory. Showing that efficient learning follows from breaking pseudorandom generators, i.e. answering positively Question \ref{q:dichotomy}, would establish a remarkable win-win situation. As discussed in Section \ref{s:rudich} the question is closely related to a problem of Rudich about turning demibits to superbits.
\medskip
\noindent{\bf Instance-specific learning vs PAC learning.} Circuit lower bounds correspond to a simple instance-specific learning model described in Section \ref{s:i-s}. Can we improve our understanding of the model and its relation to PAC learning? In particular, can we determine how much we can learn from a single circuit lower bound? A possible formalization of the problem is given by Question \ref{q:fip}.
\medskip
\noindent {\bf Connections to proof complexity.} The present paper brings several methods from proof complexity to learning theory. It seems likely that these connections can be strengthened. A particularly relevant part of proof complexity is the theory of proof complexity generators, cf. \cite{Kfor}. An interesting conjecture in the area due to Razborov \cite{Rkdnf} implies a conditional hardness of circuit lower bounds in strong proof systems. In other words, Razborov's conjecture asks for turning short proofs of circuit lower bounds into upper bounds breaking standard hardness assumptions.
Notably, strengthening Theorem \ref{t:main} by allowing white-box access in the witnessing of lower bounds would lead to a conditional unprovability of p-size lower bounds for $\ensuremath{{\sf SAT}}\xspace$ in Cook's theory \ensuremath{{\sf PV_1}}\xspace. A complication is that under standard hardness assumptions such a witnessing exists. That is, in order to obtain the conditional unprovability, one might need to exploit the \ensuremath{{\sf PV_1}}\xspace-provability in a deeper way. Nevertheless, this suggests a simplified version of Question \ref{q:dichotomy}: Can we prove a disjunction stating the \ensuremath{{\sf PV_1}}\xspace-consistency of the existence of strong pseudorandom generators or the \ensuremath{{\sf PV_1}}\xspace-consistency of efficient learning? Since, by witnessing theorems in \ensuremath{{\sf PV_1}}\xspace, both the \ensuremath{{\sf PV_1}}\xspace-provability of the non-existence of pseudorandom generators and the \ensuremath{{\sf PV_1}}\xspace-provability of the impossibility of efficient learning imply uniform efficient algorithms witnessing these facts, it could be possible to combine them with a version of uniform Min-Max \cite{VZ} to get a contradiction.
\medskip
\noindent {\bf Nonlocalizable hardness magnification near the existing lower bounds.} Can we push forward the program of hardness magnification by strengthening the magnification from Theorem \ref{t:speedup} to a setting in which strong circuit lower bounds follow from lower bounds near the already existing ones? The importance of the question stems from the necessity of developing nonlocalizable magnification theorems or nonlocalizable constructive lower bound methods as discussed in Section \ref{s:speedup}.
\medskip
\noindent {\bf SAT solving circuit lower bounds.} It would be interesting to investigate practical consequences of the provability of circuit lower bounds. Circuit lower bounds for explicitly given Boolean functions are \ensuremath{{\sf coNP}}\xspace statements which means that they are encodable into propositional tautologies resp. SAT instances. Could SAT solvers be successful in proving interesting instances of circuit lower bounds for some fixed input lengths? If so, this could provide an experimental verification of central results and conjectures from complexity theory such as $\ensuremath{{\sf P}}\xspace\ne\ensuremath{{\sf NP}}\xspace$ up to some finite domain. As discussed in the present paper, efficient algorithms proving circuit lower bounds can be also transformed into learning algorithms, which provides a separate motivation for this line of research.
In particular, SAT solving of circuit lower bounds could lead to an interesting comparison with the research on neural networks. The task of training a neural network is to design a circuit $C$ of size $s$, typically with a specific architecture, coinciding with some training input samples $(y_i,f(y_i))$, and apply it to predict the value $f(y)$ on a new input $y$. As discussed in Section \ref{s:i-s}, this problem can be addressed by proving a circuit lower bound. Since proving a circuit lower bound can give us a reliable instance-specific prediction, one could try to use SAT solvers to verify outcomes of neural networks. More generally, one could try to simulate neural networks by SAT solving circuit lower bounds. A potential advantage of SAT solvers is that they do not need to construct a circuit coinciding with the training data: it is enough to prove its properties (lower bounds). On the other hand, SAT solvers need to prove a universal statement which might turn out to be even harder.
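To make the instance-specific prediction scenario concrete, the following minimal brute-force sketch (a toy stand-in for a SAT solver, using a NAND gate basis and tiny parameters chosen purely for illustration) decides whether any circuit with $s$ gates is consistent with given training samples together with a candidate prediction on a query point; if no such circuit exists for prediction $\epsilon$, then every size-$s$ circuit fitting the data predicts $1-\epsilon$.

```python
from itertools import product

def evaluate(wiring, x):
    """Evaluate a NAND circuit: wiring[i] holds the indices of the two
    wires feeding gate i; wires 0..n-1 are inputs, the last gate is the output."""
    wires = list(x)
    for a, b in wiring:
        wires.append(1 - (wires[a] & wires[b]))  # NAND
    return wires[-1]

def exists_circuit(samples, n, s):
    """Brute force over all wirings: is some circuit of s NAND gates
    on n inputs consistent with every (input, output) sample?"""
    choices = [list(product(range(n + i), repeat=2)) for i in range(s)]
    return any(all(evaluate(w, x) == y for x, y in samples)
               for w in product(*choices))

# three training samples of an unknown Boolean function, query point y
train = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1)]
y = (1, 1)
# the lower bound holds for prediction 0 at size 3 (XOR needs four NAND
# gates) but not for prediction 1 (OR takes three), so a size-3 fit forces 1
assert not exists_circuit(train + [(y, 0)], 2, 3)
assert exists_circuit(train + [(y, 1)], 2, 3)
```

An actual experiment would of course replace the exhaustive search by a propositional encoding of the same statement, with the description of the circuit as the free variables.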
\def\practicalpart{
\section{SAT solving circuit lower bounds}
Circuit lower bounds for explicitly given Boolean functions are \ensuremath{{\sf coNP}}\xspace statements which makes them easily encodable into propositional tautologies resp. SAT instances. Could SAT solvers be successful in proving interesting instances of circuit lower bounds for some fixed input lengths? If so, this could provide an experimental verification of central results and conjectures from complexity theory such as $\ensuremath{{\sf P}}\xspace\ne\ensuremath{{\sf NP}}\xspace$ up to some finite domain. Efficient algorithms proving circuit lower bounds can be also transformed into learning algorithms, which provides a separate motivation for this line of research.
\subsection{SAT solvers versus neural networks}
\noindent The task of training a neural network is to design a circuit $C$ of size $s$, typically with a specific architecture, coinciding with some training input samples $(y_i,f(y_i))$, and apply it to predict the value $f(y)$ on a new input $y$. As discussed in Section \ref{seclear}, this problem can be addressed by proving a circuit lower bound of the form: $$\forall\text{ circuit }C\text{ of size }s,\ \bigvee_{i} C(y_i)\ne f(y_i)\vee C(y)\ne\epsilon$$ for $\epsilon\in\{0,1\}$. If the lower bound holds, then either there is no neural network of size $s$ for given inputs $(y_i,f(y_i))$ or the right prediction on $y$ is $1-\epsilon$.
\smallskip
\noindent {\bf $\circ$ verifying neural networks with SAT solvers.} Proving a circuit lower bound can thus guarantee that no neural network predicts $\epsilon$ as the outcome.
\smallskip
\noindent {\bf $\circ$ simulating neural networks with SAT solvers.} Could SAT solving circuit lower bounds be easier than learning neural networks? A potential advantage of SAT solvers is that they do not need to construct a circuit coinciding with the training data: it is enough to prove its properties (lower bounds). On the other hand, SAT solvers need to prove a universal statement which might turn out to be even harder.
\subsection{Problem statement}
The problem is to decide the existence of circuits with size $s$ for $n$-input 1-output Boolean functions. The input specifies $n, s$, and the (partial) Boolean function. The tool returns SAT/UNSAT which means there is/isn't a circuit of size $s$ that can implement the $n$-input 1-output (partial) Boolean function.
More precisely, the problem is given to SAT solver as a SAT formula $\mathsf{lb}_S(f,s)$ of the form $$\bigvee_{a\in S} f(a)\ne C(a)$$ where $S$ is a set of (not necessarily all) $n$-bit strings, $f(a)$ is a value of Boolean function $f$ on input $a$ and $C(a)$ is the 1-output of a circuit $C$ with $s$ gates on input $a$. The circuit $C$ is represented by free variables. The formula $\mathsf{lb}_S(f,s)$ is satisfiable if and only if there exists an $n$-input 1-output circuit $C$ with $s$ gates computing Boolean function $f$ on inputs from the set $S$. Note that the specification of the Boolean function $f$ is hardwired into the SAT formula.
\def\practical{
\subsection{Work plan}
Step 1. Verify finitistic versions of known circuit lower bounds, e.g. $\ensuremath{{\sf PARITY}}\xspace\notin\ensuremath{{\sf AC}^0}\xspace$. The hope is that this could eventually lead to a verification of conjectures such as $\ensuremath{{\sf SAT}}\xspace\notin\ensuremath{{\sf P/poly}}\xspace$ or the nonexistence of efficient learning algorithms. It is known \cite{Rkdnf} that Resolution-based SAT solvers cannot prove circuit lower bounds feasibly. Therefore, a new algorithmic approach will be needed. One option is to incorporate a potential speedup provided by the very hardness of circuit lower bounds for Resolution: instead of proving for all, it suffices to prove on the range \cite{}.
\bigskip
\noindent Step 2. Solve MNIST by SAT solving circuit lower bounds.
\bigskip
\noindent Step 3. Try to break pseudorandom generators of the form $G_C$ from Section \ref{s:leargen}. If the generator survives, it is safe against the attack. Otherwise, we get the desired lower bound.
\bigskip
\noindent {\bf Initial challenge.} Prove that DNFs (i.e. depth 2 circuits with OR gates on top and AND gates at the bottom) with 20 inputs and 40 gates cannot compute \ensuremath{{\sf PARITY}}\xspace on set $S$ consisting of 2000 randomly chosen 20-bit strings.
\subsection{Encoding}
Naive encoding of circuit lower bounds vs Razborov's encoding.
\subsection{Training samples}
Unsatisfiable instances: take random Boolean functions $f$ and random set of anticheckers. With high probability the resulting lower bound is going to be a tautology.
\smallskip
EF=R* width 3+E (but n literals > $n^3$ clauses > $n^6$ step to find the proof)}
}
\subsection*{Acknowledgements}
I would like to thank Rahul Santhanam for many inspiring discussions which, in particular, motivated me to prove Theorem \ref{t:main}.
I am indebted to Susanna de Rezende and Erfan Khaniki for many illuminating discussions during the development of the project. I would also like to thank V. Kanade for helpful comments on the existing learning models and L. Chen, V. Kabanets, J. Kraj\'{i}\v{c}ek and I.C. Oliveira for helpful comments on the draft of the paper. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 890220.
\medskip
\noindent\fbox{\includegraphics[width=48pt,height=31pt]{flag.jpg}}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{./introduction} \caption{A unified detector is expected to detect all categories (e.g., both the classes in {\color{red} COCO} and {\color{green} WIDER FACE}) while only separately labeled training sets are provided (``faces" are not labeled in COCO; simultaneously, instances of the 80 COCO classes are not annotated in FACE). Typically, two simple ways can achieve this goal: relabeling or plain joint training. However, relabeling is extremely costly (which needs to browse each image to discover instances belonging to the missing categories) and plain joint training leads to severe biases and a poor detector (which fails to detect ``face" in COCO evaluation set and ``person" in FACE evaluation set). Best seen in color.}
\label{fig:introduction}
\end{figure}
Object detection is a fundamental computer vision problem in various application areas, such as self-driving~\cite{WEI2020107195}, medicine~\cite{wang2020focalmix} and security~\cite{AKCAY2022108245}. Owing to the development of deep neural networks, increasingly powerful detectors have been proposed~\cite{ren2015faster,lin2017focal,QIN2020107404}.
A standard object detector is typically trained on a pre-prepared, fully annotated training dataset with fixed categories. However, in many real-world applications, we cannot know all possible categories of interest beforehand, so it is often desired to add new object classes progressively. For instance, suppose we have a detector in use, trained on an original dataset (e.g., COCO~\cite{lin2014microsoft} with 80 classes). Now, it is required to simultaneously detect a new class ``human face" (abbr.: face), while only some new training data labeled with ``face'' is additionally provided (e.g., WIDER FACE~\cite{yang2016wider}). As shown in Fig.~\ref{fig:introduction}, many faces in the original training data are not labeled; similarly, instances belonging to the original categories (e.g., ``person") are not annotated in the new dataset.
As new categories are incrementally added, training data is collected and labeled specifically for them. {\color{highlight}Commonly, the models are expected to be efficient, namely smaller, faster and better~\cite{redmon2016you,yang2020distilling,yang2020factorizable,jing2020dynamic,jing2021amalgamating}}. An important question is how to train a unified detector that can manage all categories based on the limited datasets, because a unified detector introduces merely negligible extra computation and memory overheads with respect to the number of categories of interest. As shown in Fig.~\ref{fig:introduction}, there are two straightforward ways to obtain a unified detector. One is relabeling, which transforms the problem into a standard object detection task. However, it is prohibitively expensive to relabel all missing instances in the original and new datasets. For massive amounts of data and ever-increasing categories of interest, browsing images from all datasets to discover instances belonging to the missing categories is impractical. The other is plain joint training. However, in one dataset (e.g., COCO), samples related to unlabeled instances (e.g., ``face") are regarded as negative (background) samples, resulting in conflicts and false gradients during training. Due to overfitting these false negative samples, it is difficult for a detector trained in this plain way to discover categories not labeled in its corresponding training data. For example, {\color{highlight} as shown in Fig.~\ref{fig:introduction}}, the model trained on the combined training set of COCO and WIDER FACE cannot detect ``face" in the COCO evaluation set or ``person" in the WIDER FACE evaluation set.
Can we obtain a unified detector that detects all categories well while avoiding manual relabeling and alleviating conflicts during training? We propose a general solution for this purpose in this work. (i) A conflict-free loss is carefully designed to avoid two possible conflicts during joint learning (details in Sec.~\ref{sec:cf_loss}): (a) Samples assigned as negative in one dataset may actually be positive instances belonging to classes from the other datasets. (b) Samples assigned as positive may be more suitably labeled as categories of the other datasets. Experimental results demonstrate that the conflict-free loss alone can lead to an acceptable unified detector. (ii) To further improve performance, we propose a retraining phase (details in Sec.~\ref{sec:gt_mining} and Sec.~\ref{sec:retraining}). (a) We attempt to mine high-quality unlabeled ground-truth with the trained detector. Monte Carlo Dropout is employed to obtain localization confidence, which is combined with the classification confidence to unearth pseudo annotations with more accurate bounding boxes. (b) Regarding the retraining strategies with the mined pseudo annotations, we realize that although a lot of unlabeled instances are mined, many latent ground-truth instances remain hidden. Therefore, the conflict-free idea cannot be dropped in the retraining stage. However, simply copying the training strategy from the first phase results in insufficient negative information, as only positive samples are added by pseudo annotations during retraining. Consequently, we propose an overlap-based negative-sample weighting strategy to utilize negative samples modestly in the retraining process. In the end, we achieve a strong and unified detector.
{\color{highlight} Object detection, semi-supervised object detection, open-set object detection, and class incremental object detection are all related to the study of category-extended object detector. A significant amount of research has been conducted in these areas. Sec.~\ref{sec:related_work} provides an abridged overview of the relevant background.}
To sum up, the main contributions of this paper are listed as follows.
\textbf{(i)} We present a general solution for training the unified detector with only the original datasets and incrementally labeled new datasets, which is urgently required in realistic applications.
\textbf{(ii)} A conflict-free loss is proposed to avoid possible label ambiguity. Localization confidence is designed for mining more accurate pseudo annotations. An overlap-weighted loss is employed to deal with uncertain negative samples for retraining a stronger detector with pseudo annotations.
\textbf{(iii)} The entire pipeline does not introduce any additional manual labeling or modification of network structures and does not alter inference speed.
\textbf{(iv)} Extensive experiments are conducted to demonstrate the effectiveness of our proposed solution as compared with state-of-the-art approaches.
\section{Related Work}
\label{sec:related_work}
{\color{highlight} \paragraph{Object Detection} Object detection has made significant progress in the past decade. Detectors (R-CNN serials~\cite{ren2015faster,girshick2014rich,girshick2015fast}, RetinaNet~\cite{lin2017focal}, FCOS~\cite{tian2019fcos}, etc.) mainly focus on efficiency or performance enhancement based on sufficient and complete training data. Recently, object detection in more complex and realistic scenarios has also attracted much attention~\cite{wang2018geometry,yang2021training,gao2022discrepant}}.
\paragraph{Semi-Supervised Object Detection} SSOD aims to learn detectors based on a few labeled images and large amounts of unlabeled images. CSD~\cite{jeong2019consistency} and FocalMix~\cite{wang2020focalmix} utilize consistency regularization to make full use of the unlabeled data. Self-training and strong data augmentations are employed in STAC~\cite{sohn2020simple} to enhance the detector for SSOD. Although the issue of incomplete training data is also involved in this work, the main difference is that no extra images are introduced during training except the available partially-labeled datasets in our problem.
\paragraph{Open-Set Object Detection} OSOD~\cite{hall2020probabilistic,miller2018dropout} refers to the situation where objects from a new domain that are not seen in the training data may be encountered in the test data, which brings about more false positive predictions than in the closed-set setting. In this work, there likely exists a non-negligible domain gap between the newly annotated training set and the original training set. Methods for OSOD inspire us to mine higher-quality pseudo annotations with fewer false positive predictions based on uncertainty estimation.
{\color{highlight}\paragraph{Class Incremental Object Detection} In recent years, class incremental learning has attracted much attention in diverse tasks, such as classification~\cite{rebuffi2017icarl,wu2019large,zhao2020maintaining,zhao2022energy}, segmentation~\cite{michieli2021knowledge} and also object detection. CIOD aims at developing a lifelong-learning detection system. In~\cite{shmelkov2017incremental,hao2019end,perez2020incremental,kj2021incremental}, efforts are made to learn detectors incrementally based on only new training images with new classes, i.e., the original training data is not available when new classes arrive. Specifically, knowledge distillation is employed to prevent the forgetting phenomenon in~\cite{shmelkov2017incremental}, and the method in~\cite{hao2019end} further extends it to an end-to-end training manner. A meta-learned gradient preconditioning is proposed in~\cite{kj2021incremental} to not only minimize forgetting but also maximize knowledge transfer. In CIOD, the primary challenge is catastrophic forgetting~\cite{mccloskey1989catastrophic,lao2021focl} of old classes due to a lack of original datasets. In contrast, the original data is still available in this work, and we focus on a more immediate and realistic situation. Although it relaxes the constraints, we show that it is still a challenging problem (as shown in Fig.~\ref{fig:introduction}) which is frequently encountered in real-world applications and must be addressed urgently.}
\paragraph{Multi-Dataset Object Detection} MDOD tries to train a single detector on multiple datasets, which is the same purpose as in this work. In~\cite{zhao2020object} and~\cite{rame2018omnia}, a pseudo labeling approach is exploited. Dataset-aware focal loss is proposed in~\cite{yao2020cross} for the multi-dataset training. Compared to these methods, we present a more powerful pipeline to deal with this problem, which concentrates on three main questions: how to avoid label conflicts effectively and comprehensively (answered in Sec.~\ref{sec:cf_loss}), how to mine more accurate pseudo annotations (answered in Sec.~\ref{sec:gt_mining}), and how to make better use of pseudo labels (answered in Sec.~\ref{sec:retraining}).
\section{Category-extended Object Detector}
In this section, we introduce our solution for training category-extended detectors. We analyze the possible label conflicts hidden in plain joint training and propose a general loss formula (conflict-free loss) for this problem (Sec.~\ref{sec:cf_loss}). It attempts to take full advantage of exact information and avoid ambiguous information, leading to an acceptable detector in one training round. Then, to further improve the performance, we design the retraining phase. It consists of two components: (a) Unlabeled ground-truth mining with classification and localization confidence (CLC) (Sec.~\ref{sec:gt_mining}). The CLC-based unlabeled ground-truth mining method helps to obtain more accurate pseudo annotations. (b) Retraining with overlap-weighted negative samples (Sec.~\ref{sec:retraining}). The overlap-weighted negative samples in retraining help to make full use of the pseudo annotations and obtain a better detector.
\subsection{Notations}
For brevity of presentation and without loss of generality, we assume that there are original dataset $\mathcal{D}_o$ (denoted by categories $\mathcal{C}_o$, images $\mathcal{I}_o$ and ground-truth annotations $\mathcal{G}_o$) and newly-added dataset $\mathcal{D}_n$ (denoted by categories $\mathcal{C}_n$, images $\mathcal{I}_n$ and ground-truth annotations $\mathcal{G}_n$) with different label spaces\footnote{This technique is also applicable to multiple datasets.} ($\mathcal{C}_o$ and $\mathcal{C}_n$ do not include the special category ``background"). We aim to train a unified object detector on $\mathcal{D}_o$ and $\mathcal{D}_n$. The overall loss function for training can be formulated as a weighted sum of the classification loss ($\mathcal{L}_{cls}$) and the localization loss ($\mathcal{L}_{loc}$):
\begin{equation}
\mathcal{L}(\{\mathbf{p}_i\}, \{\mathbf{t}_i\};
\{\mathbf{p}^*_i\}, \{\mathbf{t}^*_i\}) =
\mathcal{L}_{cls}(\{\mathbf{p}_i\}, \{\mathbf{p}^*_i\})
+
\mathcal{L}_{loc}(\{\mathbf{t}_i\}, \{\mathbf{t}^*_i\}),
\label{eq:overall_loss}
\end{equation}
where $i$ is the index of an anchor, $\mathbf{t}_i$ is a $4$-dimensional vector representing the parameterized coordinates of the predicted bounding box, and $\mathbf{t}_i^*$ is that of the ground-truth box associated with a positive anchor. $\mathbf{p}_i^*$ and $\mathbf{p}_i$ are the ground-truth label and the predicted probability vector of anchor $i$, respectively. The localization loss $\mathcal{L}_{loc}$ is usually the Smooth $L_1$ loss~\cite{girshick2015fast}. The classification loss $\mathcal{L}_{cls}$ can be a binary cross-entropy (BCE) based loss. It views the classification task as a series of independent binary classification tasks, which is formulated as
\begin{equation}
\mathcal{L}_{cls} \coloneqq \mathcal{L}_{bce}(\{ \mathbf{p}_i \}, \{ \mathbf{p}^*_i \}) =
\frac{1}{N} \sum_{i,c}
l(p^c_i, {p^*_i}^c)
,
\label{eq:bce_loss}
\end{equation}
where $
l(p^c_i, {p^*_i}^c) = -[{p^*_i}^c \log(p^c_i) +
(1 - {p^*_i}^c) \log(1 - p^c_i)]
$. Under the BCE-based loss, $\mathbf{p}_i^*$ is a $|\mathcal{C}_o \cup \mathcal{C}_n|$-dimensional vector. If anchor $i$ matches with a ground-truth box in $\mathcal{G}_o$ or $\mathcal{G}_n$ of category $c$, $\mathbf{p}_i^*$ is denoted as a one-hot vector with only ${p_i^*}^c=1$. If it does not match any box in $\mathcal{G}_o \cup \mathcal{G}_n$, $\mathbf{p}_i^*$ will be set to $\mathbf{0}$.
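For concreteness, the construction of $\mathbf{p}^*_i$ and the loss in Eq.~\eqref{eq:bce_loss} can be sketched in NumPy as follows. The shapes and values are a toy illustration, not the paper's implementation.

```python
import numpy as np

def bce_loss(probs, targets):
    """Eq. (2): mean binary cross-entropy over all (anchor, class) pairs."""
    eps = 1e-12  # numerical guard for log(0)
    terms = -(targets * np.log(probs + eps)
              + (1.0 - targets) * np.log(1.0 - probs + eps))
    return terms.mean()

# Hypothetical toy setup: 3 anchors, |C_o ∪ C_n| = 4 classes.
# Anchor 0 matches a ground-truth box of class 2; anchors 1-2 are
# background, so their target vectors p*_i stay all-zero.
targets = np.zeros((3, 4))
targets[0, 2] = 1.0

probs = np.full((3, 4), 0.1)   # predicted per-class probabilities p_i
probs[0, 2] = 0.9
print(float(bce_loss(probs, targets)))
```

Because each class is treated as an independent binary task, a background anchor simply contributes the negative-label term for every class; the conflict-free loss below modifies exactly this behavior.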
As an example shown in Fig.~\ref{fig:introduction}, the detector trained with plain joint training on the combined dataset of COCO (which does not label ``face") and WIDER FACE (which does not label ``person") cannot detect ``face" in the COCO evaluation set or ``person" in the WIDER FACE evaluation set. The detector is biased due to the datasets and training method: in one dataset (e.g., COCO), samples related to unlabeled instances (e.g., ``face") are regarded as negative (background) samples, resulting in conflicts and false gradients during training. To relieve this problem, in the next subsection, we propose the conflict-free loss, which attempts to make full use of the correct information and avoid using ambiguous information.
\subsection{Conflict-Free Loss}
\label{sec:cf_loss}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{./cf_loss}
\caption{Illustration of conflict-free loss. Two possible conflicts are relieved, details in Sec.~\ref{sec:cf_loss}. {\color{red} Red} for original classes and {\color{green} green} for new classes. Best seen in color.}
\label{fig:cf_loss}
\end{figure}
In Eq.~\eqref{eq:overall_loss}, the localization loss is activated only for positive anchors ($i \in \text{Pos}$) and disabled otherwise ($i \in \text{Neg}$), so few conflicts can hide in it, and we keep the localization loss unchanged. However, the plain BCE-based loss will be affected by false signals during joint training on multiple datasets. Thus, the classification loss should be carefully designed. There are two possible conflicts in the plain BCE-based loss. (i) Anchors assigned as negative samples in one dataset may be positive samples belonging to unlabeled instances with classes from the other datasets. (ii) The assigned positive samples may belong to categories of other datasets. Commonly, in one dataset, an anchor is assigned as positive for one specific category when its Intersection over Union (IoU) with the ground-truth is greater than a threshold (say, 0.5 typically). However, such a loose restriction may ignore the possibility that the anchor is more suitable to be labeled as another category that is only annotated in other datasets.
Consequently, we propose the Conflict-Free (CF) loss, which is formulated as
\begin{equation}
\mathcal{L}_{cls} \coloneqq \mathcal{L}_{cf}(\{ \mathbf{p}_i \}, \{ \mathbf{p}^*_i \}) =
\frac{1}{N} \sum_{i,c} \omega(i, c) \cdot
l(p^c_i, {p^*_i}^c)
,
\label{eq:cf_loss}
\end{equation}
where $\omega(i, c)$ is given by Alg.~\ref{algo:omega_training}, $\star$ represents $o$ or $n$, $f(i)$ denotes the maximum IoU of anchor $i$ with the ground-truth, and $\tau_{s}$ is a strict threshold (0.9, conservatively). An illustration of Eq.~\eqref{eq:cf_loss} and Alg.~\ref{algo:omega_training} is shown in Fig.~\ref{fig:cf_loss}. In the conflict-free loss, the two possible conflict origins are removed. First, a negative sample does not contribute to the loss of classes that are not labeled in its dataset $(i \in \mathcal{I}_{\star} \ \& \ c \notin \mathcal{C}_{\star} \ \& \ i \in \text{Neg})$. Second, the conflict-free loss does not allow positive anchors with insufficiently large overlaps with the ground-truth ($\text{IoU} < \tau_{s}$) to provide negative information to classes from the other datasets $(i \in \mathcal{I}_{\star} \ \& \ c \notin \mathcal{C}_{\star} \ \& \ i \in \text{Pos} \ \& \ f(i) < \tau_{s})$. Note that the name ``conflict-free" is used to represent a general loss formula for avoiding conflicts, which can be used directly or integrated with any other BCE-based loss (e.g., Focal loss~\cite{lin2017focal}) by replacing $l$.
\begin{algorithm}[tp]
\centering
\caption{Obtaining $\{\omega (i,c)\}$ for Eq.~\eqref{eq:cf_loss}.}
\begin{algorithmic}[1]
\For{ anchor $i \in 1, \cdots, N $; class $c \in 1, \cdots, |\mathcal{C}_o \cup \mathcal{C}_n|$}
\If{$i \in \mathcal{I}_{\star} \ \& \ c \notin \mathcal{C}_{\star}$}
\If{$i \in \text{Neg} \ | \ (i \in \text{Pos} \ \& \ f(i) < \tau_{s})$}
\State $\omega(i,c)=0$
\Else
\State $\omega(i,c)=1$
\EndIf
\Else
\State $\omega(i,c)=1$
\EndIf
\EndFor
\State \textbf{Output: $\{\omega (i,c)\}$}
\end{algorithmic}
\label{algo:omega_training}
\end{algorithm}
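The weight assignment of Alg.~\ref{algo:omega_training} can be sketched as follows. This is a minimal illustration for a hypothetical two-dataset setup, with datasets tagged {\tt 'o'} and {\tt 'n'} and all function and argument names invented for the example.

```python
import numpy as np

def cf_weights(anchor_dataset, anchor_state, anchor_max_iou,
               class_dataset, tau_s=0.9):
    """
    Sketch of Alg. 1: per-(anchor, class) weights for the conflict-free loss.
    anchor_dataset[i] -- dataset ('o' or 'n') anchor i's image comes from
    anchor_state[i]   -- 'pos' or 'neg' assignment of anchor i
    anchor_max_iou[i] -- f(i), max IoU of anchor i with any ground-truth box
    class_dataset[c]  -- dataset in which class c is labeled
    """
    N, C = len(anchor_dataset), len(class_dataset)
    omega = np.ones((N, C))
    for i in range(N):
        for c in range(C):
            # c is not in C_* of the anchor's own dataset
            if class_dataset[c] != anchor_dataset[i]:
                if anchor_state[i] == 'neg' or anchor_max_iou[i] < tau_s:
                    omega[i, c] = 0.0  # ambiguous: drop from the loss
    return omega

# Toy example: 3 anchors from dataset 'o'; classes labeled in [o, o, n].
omega = cf_weights(anchor_dataset=['o', 'o', 'o'],
                   anchor_state=['neg', 'pos', 'pos'],
                   anchor_max_iou=[0.0, 0.6, 0.95],
                   class_dataset=['o', 'o', 'n'])
print(omega)
```

In the toy example only the positive anchor with IoU $0.95 \geq \tau_s$ keeps weight 1 for the cross-dataset class; the negative anchor and the loosely matched positive anchor are masked out for that class.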
\begin{figure*}[t]
\centering
\includegraphics[width=0.65\textwidth]{./retraining}
\caption{Illustration of the retraining method. (top) Unlabeled ground-truth mining with both classification and localization confidence. (bottom) Retraining on the supplemented datasets with overlap-based weighting schedule for negative samples. The remaining legend is the same as in Fig.~\ref{fig:cf_loss}. Best seen in color.}
\label{fig:retraining}
\end{figure*}
It is demonstrated in the later experiments that an acceptable unified detector can be learned with the conflict-free loss in one training round. However, many unlabeled objects remain underutilized. Inspired by self-training, a retraining phase is employed to further improve performance, which consists of two components: unlabeled ground-truth mining with classification and localization confidence (Sec.~\ref{sec:gt_mining}) and retraining with overlap-weighted negative samples (Sec.~\ref{sec:retraining}).
\subsection{Unlabeled Ground-Truth Mining}
\label{sec:gt_mining}
Unlike classification tasks, detectors need to predict the localization of objects, so accurate localization annotations usually lead to better performance. Previous studies~\cite{jiang2018acquisition,he2019bounding} show that the classification score cannot be interpreted as localization confidence: a predicted bounding box with high classification confidence may be localized inaccurately. If inaccurately predicted results are adopted as pseudo annotations, they will prevent detectors from achieving better performance. In pursuit of high-quality pseudo annotations, we propose the Classification and Localization Confidence (CLC) based unlabeled ground-truth mining method.
Monte Carlo Dropout-based Bayesian neural networks have shown great potential in measuring model uncertainty~\cite{gal2016dropout,kendall2017uncertainties,miller2018dropout,miller2019evaluating}. Inspired by this concept, during the inference phase, we perform $T$ forward passes of the base detector learned in Sec.~\ref{sec:cf_loss} with activated dropout layers, as depicted in Fig.~\ref{fig:retraining}. Notice that since we introduce the dropout operation only on the last layer of the classification and localization heads while all the previous layers are identical and their computations are shared, the extra inference time of the $T$ forward passes is negligible. Then, for each image, we perform clustering on the bounding boxes of each class, resulting in a set of clusters. Each cluster $\mathcal{O}_i$ is made up of several bounding boxes with their respective classification scores, defined as
\begin{equation}
\begin{aligned}
\mathcal{O}_i &= \{(\mathbf{t}_{ij}, \mathbf{p}_{ij}) \ | \ j=1,2,\cdots,|\mathcal{O}_i| \leq T \}, \\
\ \text{s.t.} &\ \text{IoU}(\mathbf{t}_{ij_1}, \mathbf{t}_{ij_2}) \geq \tau_{nms}, \forall \ \mathbf{t}_{ij_1}, \mathbf{t}_{ij_2} \in \mathcal{O}_i,\\
&\argmax_{r \in \mathcal{C}_o \cup \mathcal{C}_n} \ p^r_{ij_1} = \argmax_{r \in \mathcal{C}_o \cup \mathcal{C}_n} \ p^r_{ij_2}, \forall \ \mathbf{p}_{ij_1}, \mathbf{p}_{ij_2} \in \mathcal{O}_i,
\end{aligned}
\end{equation}
where the threshold for clustering is the same as the NMS IoU threshold $\tau_{nms}$ (0.5 typically). Then the cluster $\mathcal{O}_i$ is represented by a triplet $(\overline{\mathbf{t}}_i, \overline{p}_i, \overline{e}_i)$, where the integrated bounding box $\overline{\mathbf{t}}_i$ and classification confidence $\overline{p}_i$ can be calculated by
\begin{equation}
\overline{\mathbf{t}}_i = \frac{1}{|\mathcal{O}_i|} \sum_{j} \mathbf{t}_{ij}, \
\overline{p}_i = \frac{1}{|\mathcal{O}_i|} \sum_{j} \max(p^1_{ij}, \cdots, p^{|\mathcal{C}_o \cup \mathcal{C}_n|}_{ij}).
\end{equation}
More importantly, the localization confidence is given by
\begin{equation}
\overline{e}_i =
\frac{ 1 + \mathbbm{1}_{|\mathcal{O}_i| \geq \frac{T}{2}} }{2|\mathcal{O}_i|^2}
\sum_{j_1, j_2} \text{IoU}(\mathbf{t}_{ij_1}, \mathbf{t}_{ij_2}).
\label{eq:loc_conf}
\end{equation}
Then, we obtain the detection confidence $\overline{d}_i = \overline{p}_i \times \overline{e}_i$, which considers both classification and localization confidence and can better reflect the quality of predictions. Finally, we accept an integrated bounding box as pseudo ground-truth for class $c$ only if (i) class $c$ is not labeled in the dataset originally, and (ii) $\overline{d}_i$ is larger than a pre-defined threshold $\eta$. Generally speaking, only predictions with sufficiently high detection confidence are selected as pseudo annotations for the unlabeled classes. Fig.~\ref{fig:retraining} (top) illustrates the mining method.
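The computation of the cluster triplet $(\overline{\mathbf{t}}_i, \overline{p}_i, \overline{e}_i)$ and of $\overline{d}_i$ can be sketched as follows (NumPy; boxes are $[x_1, y_1, x_2, y_2]$ and all helper names are hypothetical):

```python
import numpy as np

def iou(a, b):
    # IoU of two [x1, y1, x2, y2] boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter)

def clc_confidence(boxes, scores, T):
    """
    CLC triplet for one cluster O_i: boxes are the |O_i| <= T detections
    of one object collected over T dropout-enabled passes; scores are
    their max class probabilities.
    """
    boxes = np.asarray(boxes, dtype=float)
    t_bar = boxes.mean(axis=0)                 # integrated bounding box
    p_bar = float(np.mean(scores))             # classification confidence
    m = len(boxes)
    pairwise = sum(iou(boxes[j1], boxes[j2])   # Eq. (5), incl. j1 == j2 terms
                   for j1 in range(m) for j2 in range(m))
    e_bar = (1 + (m >= T / 2)) / (2 * m * m) * pairwise
    return t_bar, p_bar, e_bar, p_bar * e_bar  # d_bar = p_bar * e_bar

# T identical boxes from a perfectly stable detector: e_bar = 1,
# so d_bar reduces to the classification confidence alone.
t_bar, p_bar, e_bar, d_bar = clc_confidence([[0, 0, 10, 10]] * 20,
                                            [0.8] * 20, T=20)
print(e_bar, d_bar)
```

Scattered boxes lower the pairwise IoU average, and a cluster that survives fewer than $T/2$ passes additionally loses the factor of 2, so $\overline{e}_i$ penalizes both spatial jitter and instability across passes.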
\subsection{Retraining with Pseudo Annotations}
\label{sec:retraining}
After acquiring the pseudo annotations, we try to train a more powerful unified detector with the ground-truth and pseudo annotations together. So how can we make better use of the pseudo annotations? The most straightforward way is to view the combined dataset supplemented with pseudo annotations as fully labeled~\cite{sohn2020simple}. The detector can then be trained in the normal way with common classification losses (Eq.~\eqref{eq:bce_loss}). However, a considerable number of instances may still be difficult to mine, so there is a risk of introducing inaccurate supervisory information. Alternatively, one can retrain the detector with the conflict-free loss again (Eq.~\eqref{eq:cf_loss} and Alg.~\ref{algo:omega_training}). Compared to the first phase, negative samples still do not contribute to the loss of classes from different datasets, and only positive samples are supplemented by the pseudo annotations, so the imbalance between positive and negative samples may be intensified. A solution to this imbalance problem is to introduce some ``safe" negative samples~\cite{zhao2020object}, which can contribute to the loss of classes from different datasets while ``unsafe" negative samples are still discarded.
\begin{algorithm}[t]
\centering
\caption{Obtaining $\{\omega (i,c)\}$ for Eq.~\eqref{eq:cf_loss} during retraining.}
\begin{algorithmic}[1]
\For{ anchor $i \in 1, \cdots, N $; class $c \in 1, \cdots, |\mathcal{C}_o \cup \mathcal{C}_n|$}
\If{$i \in \mathcal{I}_{\star} \ \& \ c \notin \mathcal{C}_{\star} \ \& \ i \in \text{Neg}$}
\State $\omega(i,c)= Gom(f(i))$
\Else
\State $\omega(i,c)=1$
\EndIf
\EndFor
\State \textbf{Output: $\{\omega (i,c)\}$}
\end{algorithmic}
\label{algo:omega_retraining}
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=0.25\textwidth]{./gom_func}
\caption{Overlap-based weighting schedule ($\tilde{a}=1$, $\tilde{b}=10,000$ and $\tilde{c}=25$). The negative samples' contribution to the loss of classes from different datasets is assigned with high weight only when they have a large overlap with annotated boxes.}
\label{fig:gom_func}
\end{figure}
Instead, we introduce an overlap-based weighting schedule~\cite{wu2018soft} here to re-weight the negative samples' contribution to the loss of classes from different datasets. The main hypothesis is that negative samples which have relatively large overlaps (like 0.3 IoU, 0.4 IoU) with the existing ground-truth (or mined pseudo annotations) are probably not related to any unlabeled instances. On the contrary, negative samples with small overlaps have a higher risk of being unlabeled instances, i.e., false background. Based on this hypothesis, the Gompertz function
$$
Gom(x) = \tilde{a} e^{-\tilde{b} e^{-\tilde{c}x}}
$$
is employed to describe the relationship between overlap and the weight of the loss, where $\tilde{a}$ is the asymptote, $\tilde{b}$ sets the displacement along the $x$-axis, and $\tilde{c}$ sets the growth rate. We plot this function in Fig.~\ref{fig:gom_func} with $\tilde{a}=1$, $\tilde{b}=10,000$ and $\tilde{c}=25$. The negative samples' contribution to the loss of classes from different datasets is assigned a high weight only when they have a large overlap with annotated boxes. Other negative samples are still given very low weights.
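The weighting behaviour under these default parameters can be sketched in plain Python (the function name is illustrative):

```python
import math

def gom(x, a=1.0, b=10_000.0, c=25.0):
    # Gompertz weight for a negative sample whose max IoU with
    # annotated boxes is x (default parameters from the text)
    return a * math.exp(-b * math.exp(-c * x))

# Small overlap -> near-zero weight (likely a false background);
# large overlap -> weight approaches the asymptote a (a "safe" negative).
for x in (0.0, 0.2, 0.4, 0.6):
    print(f"IoU {x:.1f}: weight {gom(x):.4f}")
```

With these parameters the weight is essentially zero up to roughly 0.3 IoU and rises steeply toward 1 thereafter, matching the shape plotted in Fig.~\ref{fig:gom_func}.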
Our overlap-weighted retraining method can be formulated as training with Eq.~\eqref{eq:cf_loss} and $\{\omega (i,c)\}$ calculated from Alg.~\ref{algo:omega_retraining}. Considering that many unlabeled instances have been mined and that the main concern in retraining is balancing the extra positive samples brought by pseudo annotations, we no longer consider conflicts in positive samples in the retraining phase. In later experiments, we find that this method takes fuller advantage of the mined pseudo annotations.
\section{Experiments}
\subsection{Experimental Settings}
\label{sec:exp_set}
\paragraph{Datasets} We conduct experiments on several widely used datasets with different settings. MS COCO~\cite{lin2014microsoft} (2017) consists of 118,287 training images (COCO-train), 5,000 validation images (COCO-val) and 40,670 test images with 80 categories. PASCAL VOC~\cite{everingham2010pascal} (2007) has 9,963 images with 20 categories, 50\% for training/validation (VOC-train) and 50\% for testing (VOC-test). WIDER FACE~\cite{yang2016wider} is comprised of 32,203 images with one category ``face'', training 40\% (FACE-train), validation 10\% (FACE-val) and test 50\%. To evaluate the unified models, we also adopt the re-labeled evaluation sets \cite{zhao2020object}: VOC-subtest (a subset of VOC-test) contains 500 images with annotations for the 80 categories in COCO; FACE-subval (a subset of FACE-val) consists of 500 images with annotations for the ``face'' in FACE and the ``person'' in COCO; COCO-subval (a subset of COCO-val) has 500 images with annotations for the 80 categories in COCO and the ``face'' in FACE.
\begin{table}[t]
\centering
\caption{The statistics of the training sets used in our experiments.}
\resizebox{\textwidth}{!}{
\begin{tabular}{llllll}
\toprule
setup & training set & \multicolumn{1}{l}{\#categories} & \multicolumn{1}{l}{\#images} & \multicolumn{1}{l}{\#annotations} & \multicolumn{1}{l}{\#missing annotations} \\
\midrule
A & COCO75-train & 75 & 100,543 & 601,410 & 129,089 \\
& COCO5-train & 5 & 17,744 & 22,976 & 106,526 \\
B & COCO79-train & 79 & 100,543 & 506,916 & 222,696 \\
& COCO1-train & 1 & 17,744 & 39,769 & 90,620 \\
C & COCO60-train & 60 & 118,287 & 367,189 & 492,812 \\
& VOC-train & 20 & 5,011 & 15,662 & unknown \\
D & COCO-train & 80 & 118,287 & 860,001 & unknown \\
& FACE-train & 1 & 12,880 & 159,424 & unknown \\
E & COCO60-train & 60 & 118,287 & 367,189 & unknown \\
& VOC-train & 20 & 5,011 & 15,662 & unknown \\
& FACE-train & 1 & 12,880 & 159,424 & unknown \\
\bottomrule
\end{tabular}%
}
\label{tab:data_stat}%
\end{table}%
\paragraph{Setups} To analyze the effect of our pipeline, we design five experimental setups. The statistics of the training sets used in these setups are summarized in Tab.~\ref{tab:data_stat}.
\textbf{Setup A}: COCO75-train and COCO5-train. We split COCO-train into two sets. One set is named COCO75-train, in which only annotations of 75 categories are retained, while annotations related to the other 5 categories are removed; another set, named COCO5-train, contains only annotations of the 5 categories. COCO75-train and COCO5-train are regarded as original dataset $\mathcal{D}_o$ and new dataset $\mathcal{D}_n$, respectively. In COCO75-train, there exist considerable unlabeled objects of the remaining 5 categories, similar in COCO5-train. We report the performance of the 75 classes and the 5 classes on COCO-val. Since there are more missing annotations belonging to the new 5 classes in the combined training set, we should pay more attention to the results of these 5 classes.
\textbf{Setup B}: COCO79-train and COCO1-train. We perform another split on COCO-train: the original dataset COCO79-train contains 79 classes, and the new dataset COCO1-train has only one category. We report the performance of the 79 classes and the one class on COCO-val. Similarly, the results of the new class should be paid more attention.
\textbf{Setup C}: COCO60-train and VOC-train. In these experiments, we regard VOC-train as the new dataset and COCO60-train as the original dataset, which is generated by removing the annotations of the 20 VOC categories from COCO-train. We report the performance of all 80 classes on VOC-subtest and COCO-subval, and of the 20 VOC classes on COCO-val.
\textbf{Setup D}: COCO-train and FACE-train. We also train unified detectors on COCO-train (as original dataset) and FACE-train (as new dataset). The ``face'' is not annotated in COCO-train and the 80 COCO classes (``person'' mainly) are also not labeled in FACE-train. The performance of ``face" and ``person" on COCO-subval and FACE-subval is reported, in which the results of ``face" on COCO-subval and ``person" on FACE-subval should be paid more attention.
\textbf{Setup E}, which involves three training sets labeled with different classes: COCO60-train, VOC-train, and FACE-train. The whole label space has 81 classes. We report the performance of all 81 classes on VOC-subtest and COCO-subval, and of ``person" on FACE-subval.
\paragraph{Metrics} We use the standard metrics for object detection: AP, $\text{AP}_{50}$, and $\text{AP}_{75}$.
\paragraph{Implementation Details}
For fair comparisons, all experiments are conducted on RetinaNet~\cite{lin2017focal} with ImageNet-pretrained ResNet-50~\cite{he2016deep} as the backbone. The models are implemented with PyTorch~\cite{paszke2019pytorch} and MMDetection~\cite{chen2019mmdetection}. In all experiments, input images are resized to $1333\times 800$. The models are trained using SGD over 8 GPUs with 2 images per GPU, 0.9 momentum, and 0.0001 weight decay. We train 12 epochs in total with an initial learning rate of 0.02, and decrease the learning rate by 0.1 at epochs 8 and 11. For the conflict-free loss, the stricter threshold $\tau_{s}$ is set to 0.9 conservatively, as described in Sec.~\ref{sec:cf_loss}, in all experimental settings. For the CC-based unlabeled ground-truth mining method, the confidence threshold $\eta$ is set to 0.5 in all experimental settings for best overall performance. For the CLC-based mining method, the number of forward passes $T$ is set to 20 for all experimental settings; the confidence threshold $\eta$ is set to 0.525 in the experiment on COCO75-train and COCO5-train, and 0.5 in other experimental settings for best overall performance. For the retraining strategy ``safe negatives", the lower confidence threshold $\eta'$ is set to 0.1 to obtain high-recall pseudo annotations in all experimental settings. For the retraining strategy ``overlap-weighted", we set $\tilde{a}=1$, $\tilde{b}=10,000$ and $\tilde{c}=25$ as shown in Sec.~\ref{sec:retraining} in all experimental settings. There is almost no change in hyper-parameters across experimental settings, which shows that our pipeline is robust.
\subsection{Overall Performance}
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics[height=0.16\textwidth]{./c_b1}}
\subfloat[]{\includegraphics[height=0.16\textwidth]{./c_b2}}
\subfloat[]{\includegraphics[height=0.16\textwidth]{./c_b3}}
\subfloat[]{\includegraphics[height=0.16\textwidth]{./c_b4}}
\caption{Overall performance. Subfigures (a), (b), (c) and (d) show the results of setups A, B, C and D, respectively. The performance is evaluated on setup-specific sets and categories; see Sec.~\ref{sec:exp_set} for details. Best viewed in color.}
\label{fig:compare_baselines}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[height=0.2\textheight]{./vis1.pdf}
\caption{Detection results of plain joint training (top) and our method (bottom) on COCO-val (the COCO evaluation set, first two columns) and FACE-val (the WIDER-FACE evaluation set, last two columns). Red bounding boxes indicate objects belonging to the 80 COCO categories, while green ones correspond to the category ``face'' in WIDER FACE. Best viewed in color.}
\label{fig:vis1}
\end{figure}
We first show the performance of unified detectors trained with the proposed solution and with plain joint training on setups A--D. As shown in Fig.~\ref{fig:compare_baselines}, the detector obtained by plain joint training performs poorly, especially for categories that are not labeled in the corresponding datasets. In setup C, due to label ambiguity and the domain gap, the detector trained in the plain way is heavily biased, reaching only 4.1\% AP$_{50}$ for the 20 VOC categories on COCO-val. In setup D, the detector trained in the plain way reaches only 3.4\% and 6.7\% AP$_{50}$ for ``face" on COCO-subval and ``person" on FACE-subval, respectively. The proposed method achieves 42.6\%, 53.9\% and 58.5\% on these three measurements, i.e., improvements of 38.5\%, 50.5\% and 51.8\%, respectively. For setups A and B, we also provide results based on fully labeled datasets, which can be seen as upper bounds. These results demonstrate that our method reduces the gap to fully labeled datasets.
Fig.~\ref{fig:vis1} illustrates some visualization results of the detectors trained on COCO-train and FACE-train. Due to the conflicts during plain joint training and the domain gap between the two sets, the detector trained in the vanilla way can only detect ``person'' in COCO-val and ``face'' in FACE-val (i.e., the categories labeled in the corresponding training data). Assisted by our approach, the detector can detect all categories on both evaluation sets. Both qualitative and quantitative results demonstrate the effectiveness of our method.
\subsection{Comparison to Other Methods}
In this section, we compare our conflict-free loss, overlap-weighted retraining method and CLC-based unlabeled ground-truth mining method with corresponding counterparts in Sec.~\ref{sec:compare_1}, Sec.~\ref{sec:compare_2} and Sec.~\ref{sec:compare_3}, respectively.
\begin{table}[t]
\centering
\caption{Comparison of the conflict-free loss with other methods on COCO75-train ($\mathcal{D}_o$) and COCO5-train ($\mathcal{D}_n$). The results of the 75 original classes and the 5 new classes on COCO-val are reported. `\textbf{*}' indicates the categories that we should pay more attention to. Best results for these categories are in \textbf{bold}.}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{llllllll}
\toprule
Data & Method & \multicolumn{3}{l}{75 original classes} & \multicolumn{3}{l}{5 new classes~\textbf{*}} \\
\cmidrule(lr){3-5} \cmidrule(lr){6-8}
& & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\midrule
$\mathcal{D}_o$ & plain~\cite{lin2017focal} & 36.3 & 55.4 & 38.6 & N/A & N/A & N/A \\
$\mathcal{D}_n$ & plain~\cite{lin2017focal} & N/A & N/A & N/A & 20.2 & 36.6 & 19.4 \\
$\mathcal{D}_o$+$\mathcal{D}_n$ & plain~\cite{lin2017focal} & 35.8 & 54.9 & 38.1 & 20.6 & 36.3 & 20.3 \\
& dataset-aware~\cite{yao2020cross} & 36.4 & 55.3 & \textbf{38.8} & 22.7 & 39.4 & 22.5 \\
& conflict-free (ours) & \textbf{36.5} & \textbf{55.5} & 38.7 & \textbf{23.5} & \textbf{40.4} & \textbf{23.4} \\
{\color{highlight} fully labeled} & {\color{highlight}plain~\cite{lin2017focal}} & {\color{highlight}37.1} & {\color{highlight}56.3} & {\color{highlight}39.6} & {\color{highlight}30.6} & {\color{highlight}49.6} & {\color{highlight}31.5} \\
\bottomrule
\end{tabular}%
}
\label{tab:compare_loss}%
\end{table}%
\subsubsection{Comparison of the conflict-free loss with other methods} \label{sec:compare_1}
We compare the proposed conflict-free loss with other methods under setup A. The dataset-aware loss in~\cite{yao2020cross} is a special case of conflict-free loss without considering the second conflict origin discussed in Sec.~\ref{sec:cf_loss}. We also provide the results of multiple detectors trained on separate datasets.
As shown in Tab.~\ref{tab:compare_loss}, the detectors trained separately on COCO75-train and COCO5-train achieve 36.3\% AP and 20.2\% AP, respectively. For the detector trained on the combined training set in the vanilla way, the performance drops to 35.8\% AP for the 75 classes, mainly caused by the conflicts during joint training, and improves to 20.6\% AP for the 5 classes, which suggests that more training data may lead to better feature representations. The detector trained with the dataset-aware loss~\cite{yao2020cross} outperforms the plain method. Furthermore, our conflict-free loss achieves better results (23.5\% AP for the 5 classes), mainly because it avoids the two possible conflicts simultaneously, as discussed in Sec.~\ref{sec:cf_loss}. It is worth noting that the unified detector trained with the conflict-free loss outperforms the separate detectors, especially for the new classes (23.5\% vs. 20.2\% AP), owing to the abundant training samples provided by the other dataset. This demonstrates that the conflict-free loss yields, in a single training round, a unified detector with acceptable results that outperforms other unified detectors and is no worse than separate detectors.
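To make the shared idea behind the dataset-aware and conflict-free losses concrete, the following sketch masks out negative supervision for classes that are not annotated in a sample's source dataset. This is a simplification of the actual loss in Sec.~3.2; the function and argument names are ours, for illustration only.

```python
import numpy as np

def masked_bce_loss(logits, targets, labeled_mask):
    """Binary cross-entropy where negative supervision is applied only to
    classes annotated in the sample's source dataset.

    logits, targets: (num_anchors, num_classes) arrays
    labeled_mask:    (num_classes,) bool array, True for annotated classes
    """
    probs = 1.0 / (1.0 + np.exp(-logits))
    per_entry = -(targets * np.log(probs + 1e-12)
                  + (1.0 - targets) * np.log(1.0 - probs + 1e-12))
    # Positives are always supervised; negatives only for labeled classes,
    # so a prediction on an unlabeled class is never punished as background.
    keep = (targets > 0.5) | labeled_mask[None, :]
    return float((per_entry * keep).sum() / max(keep.sum(), 1))
```

With `labeled_mask` set per source dataset, a confident prediction for a class that happens to be unlabeled in that dataset contributes no background loss, which is exactly the conflict that plain joint training suffers from.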
\begin{table}[t]
\centering
\caption{Comparison of the unlabeled ground-truth mining method and the retraining method with other approaches under setup A ($\mathcal{D}_o$: COCO75-train, $\mathcal{D}_n$: COCO5-train). $\mathcal{\hat{G}}_{CC}$ and $\mathcal{\hat{G}}_{CLC}$ represent the pseudo annotations mined by the CC-based and CLC-based methods, respectively. We report the results of the 75 original classes and the 5 new classes on COCO-val.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{llllllll}
\toprule
Data & Method & \multicolumn{3}{l}{75 original classes} & \multicolumn{3}{l}{5 new classes~\textbf{*}} \\
\cmidrule(lr){3-5} \cmidrule(lr){6-8}
& & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\midrule
$\mathcal{D}_o$+$\mathcal{D}_n$ & plain~\cite{lin2017focal} & 35.8 & 54.9 & 38.1 & 20.6 & 36.3 & 20.3 \\
& conflict-free (ours) & 36.5 & 55.5 & 38.7 & 23.5 & 40.4 & 23.4 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CC}$ & as fully labeled~\cite{sohn2020simple} & 36.6 & 55.4 & 38.9 & 24.0 & 39.9 & 24.6 \\
& conflict-free (ours) & 36.3 & 55.2 & 38.5 & 24.2 & 40.8 & 24.6 \\
& safe negatives~\cite{zhao2020object} & 36.6 & \textbf{55.7} & 38.7 & 24.5 & 41.1 & 25.0 \\
& overlap-weighted (ours) & 36.6 & 55.4 & \textbf{39.0} & 24.7 & 41.7 & \textbf{25.1} \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CLC}$ & overlap-weighted (ours) & \textbf{36.7} & 55.6 & \textbf{39.0} & \textbf{24.9} & \textbf{42.0} & \textbf{25.1} \\
{\color{highlight} fully labeled} & {\color{highlight}plain~\cite{lin2017focal}} & {\color{highlight}37.1} & {\color{highlight}56.3} & {\color{highlight}39.6} & {\color{highlight}30.6} & {\color{highlight}49.6} & {\color{highlight}31.5} \\
\bottomrule
\end{tabular}%
}
\label{tab:coco75+coco5}%
\end{table}%
\begin{table}[t]
\centering
\caption{Comparison of the unlabeled ground-truth mining method and the retraining method with other approaches under setup B ($\mathcal{D}_o$: COCO79-train, $\mathcal{D}_n$: COCO1-train). We report the results of the 79 original classes and the 1 new class on COCO-val.}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{llllllll}
\toprule
Data & Method & \multicolumn{3}{l}{79 original classes} & \multicolumn{3}{c}{1 new class~\textbf{*}} \\
\cmidrule(lr){3-5} \cmidrule(lr){6-8}
& & AP & AP$_{50}$ & AP$_{75}$ & AP & AP$_{50}$ & AP$_{75}$ \\
\midrule
$\mathcal{D}_o$+$\mathcal{D}_n$ & plain~\cite{lin2017focal} & 35.4 & 54.4 & 37.7 & 37.9 & 69.2 & 36.5 \\
& conflict-free (ours) & \textbf{36.5} & \textbf{55.5} & \textbf{39.0} & 41.6 & 72.4 & 41.4 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CC}$ & as fully labeled~\cite{sohn2020simple} & 36.2 & 55.1 & 38.4 & 42.9 & 71.1 & 44.4 \\
& conflict-free (ours) & 36.1 & 54.8 & 38.3 & 44.4 & 74.4 & 45.6 \\
& safe negatives~\cite{zhao2020object} & 36.1 & 55.0 & 38.2 & 43.7 & 72.9 & 45.0 \\
& overlap-weighted (ours) & 36.0 & 54.8 & 38.1 & 44.6 & 74.2 & 46.0 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CLC}$ & overlap-weighted (ours) & 36.2 & 55.0 & 38.5 & \textbf{44.7} & \textbf{74.6} & \textbf{46.3} \\
{\color{highlight} fully labeled} & {\color{highlight}plain~\cite{lin2017focal}} & {\color{highlight}36.5} & {\color{highlight}55.5} & {\color{highlight}38.9} & {\color{highlight}51.2} & {\color{highlight}79.6} & {\color{highlight}54.2} \\
\bottomrule
\end{tabular}%
}
\label{tab:coco79+coco1}
\end{table}%
\subsubsection{Comparison of the overlap-weighted retraining method with other strategies}
\label{sec:compare_2}
We describe the other retraining strategies as follows. (i) As described in Sec.~\ref{sec:retraining}, we can view the combined dataset supplemented with pseudo annotations \textbf{as fully labeled}~\cite{sohn2020simple}, and train detectors with the common classification loss. (ii) We can retrain the detector with the \textbf{conflict-free} loss again; compared to the first phase, only positive samples are supplemented by the pseudo annotations. (iii) We can introduce \textbf{safe negatives}~\cite{zhao2020object}. In addition to the threshold $\eta$ used to mine high-quality pseudo annotations, a lower threshold $\eta'$ is introduced to obtain high-recall pseudo annotations. Under the combination of ground-truth and high-quality pseudo annotations, some negative samples are actually unsafe: they match boxes from the high-recall pseudo annotations. Anchors that match neither ground-truth nor high-recall pseudo annotations are considered safe negative samples. Safe negative samples contribute to the loss of classes from different datasets, while unsafe negative samples are still discarded.
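Strategy (iii) can be sketched as a simple anchor-labelling rule. This is an illustrative simplification: the real pipeline uses RetinaNet's IoU-based assigner, and the function names here are ours.

```python
def pair_iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(x2 - x1, 0.0) * max(y2 - y1, 0.0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def label_anchors(anchors, gt_and_hq, high_recall, pos_thr=0.5):
    """gt_and_hq: ground-truth plus high-quality pseudo boxes (threshold eta);
    high_recall: high-recall pseudo boxes (lower threshold eta').
    Anchors matching neither set are safe negatives; anchors matching only a
    high-recall pseudo box are unsafe negatives and are discarded."""
    labels = []
    for a in anchors:
        if any(pair_iou(a, g) >= pos_thr for g in gt_and_hq):
            labels.append("positive")
        elif any(pair_iou(a, g) >= pos_thr for g in high_recall):
            labels.append("unsafe_negative")
        else:
            labels.append("safe_negative")
    return labels
```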
For a fair comparison, we evaluate our overlap-weighted retraining method against these strategies using the same pseudo annotations (obtained by training detectors with the conflict-free loss and mining pseudo annotations with classification confidence, as described in the next paragraph). We provide the results of setups A, B, C and D in Tab.~\ref{tab:coco75+coco5},~\ref{tab:coco79+coco1},~\ref{tab:coco60+voc20} and~\ref{tab:coco80+face1}, respectively ($\mathcal{D}_o$ + $\mathcal{D}_n$ + $\mathcal{\hat{G}}_{CC}$). In setup A, retraining with ``as fully labeled" improves over the results of one training round (0.5\% AP gain for the 5 new classes), which demonstrates the effectiveness of retraining. The detector retrained with the conflict-free loss achieves better performance for the 5 classes. Relying on ``safe negatives" during retraining, the detector reaches 24.5\% AP for the 5 classes. Finally, our retraining method achieves the best results (24.7\% AP for the 5 classes). Similar results are observed in setups B, C and D. These results imply that many ground-truth objects in the training data remain unlabeled, so the idea of avoiding conflicts cannot be ignored in the retraining process. However, if we used the same method as in the first training round, only positive samples would be added, leaving insufficient negative information during retraining. It is therefore important to employ suitable negative samples to achieve a balance between positives and negatives. Our experiments show empirically that using overlap-weighted negative samples is the better choice for this purpose.
\begin{table}[t]
\centering
\caption{Comparison of the unlabeled ground-truth mining method and the retraining method with other approaches under setup C ($\mathcal{D}_o$: COCO60-train, $\mathcal{D}_n$: VOC-train). We report the results of all 80 classes on VOC-subtest and COCO-subval, and the 20 VOC classes on COCO-val.
}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{llllllll}
\toprule
Data & Method & \multicolumn{2}{l}{VOC-subtest} & \multicolumn{2}{l}{COCO-subval} & \multicolumn{2}{l}{COCO-val} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
& & \multicolumn{2}{l}{all 80 classes} & \multicolumn{2}{l}{all 80 classes} & \multicolumn{2}{l}{20 new classes} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
& & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ \\
\midrule
$\mathcal{D}_o$+$\mathcal{D}_n$ & plain~\cite{lin2017focal} & 43.0 & 28.0 & 42.6 & 27.2 & 4.1 & 2.7 \\
& conflict-free (ours) & 46.8 & 30.1 & 50.3 & 30.8 & 32.3 & 18.6 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CC}$ & as fully labeled~\cite{sohn2020simple} & 48.0 & 31.5 & 48.7 & 29.9 & 27.5 & 18.6 \\
& conflict-free (ours) & \textbf{49.4} & 31.7 & 49.9 & 30.8 & 34.1 & 20.4 \\
& safe negatives~\cite{zhao2020object} & 49.0 & \textbf{33.3} & 49.9 & 31.5 & 34.3 & 21.7 \\
& overlap-weighted (ours) & 47.5 & 31.4 & 51.6 & 31.6 & \textbf{42.7} & 24.9 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CLC}$ & overlap-weighted (ours) & 49.0 & 33.0 & \textbf{52.2} & \textbf{31.7} & 42.6 & \textbf{25.4} \\
\bottomrule
\end{tabular}%
}
\label{tab:coco60+voc20}%
\end{table}%
\begin{table}[t]
\centering
\caption{Comparison of the unlabeled ground-truth mining method and the retraining method with other approaches under setup D ($\mathcal{D}_o$: COCO-train, $\mathcal{D}_n$: FACE-train). We report the results for the categories ``person'' and ``face''. `\textbf{*}' indicates the categories that we should pay more attention to.}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{llllllllll}
\toprule
Data & Method & \multicolumn{4}{l}{COCO-subval} & \multicolumn{4}{l}{FACE-subval} \\
\cmidrule(lr){3-6} \cmidrule(lr){7-10}
& & \multicolumn{2}{l}{person} & \multicolumn{2}{l}{face~\textbf{*}} & \multicolumn{2}{l}{person~\textbf{*}} & \multicolumn{2}{l}{face} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10}
& & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ \\
\midrule
$\mathcal{D}_o$+$\mathcal{D}_n$ & plain~\cite{lin2017focal} & 71.1 & 40.8 & 3.4 & 0.9 & 6.7 & 3.7 & 54.3 & 28.0 \\
& conflict-free (ours) & 71.9 & \textbf{43.2} & 49.9 & 20.9 & 55.6 & 33.9 & \textbf{53.8} & 28.8 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CC}$ & as fully labeled~\cite{sohn2020simple} & \textbf{72.7} & 42.6 & 49.0 & 22.6 & 51.5 & 31.8 & \textbf{53.8} & 28.9 \\
& conflict-free (ours) & 71.2 & 42.1 & 53.9 & 22.8 & 56.4 & \textbf{34.7} & 53.7 & 28.8 \\
& safe negatives~\cite{zhao2020object} & 72.4 & 42.3 & 50.7 & 22.4 & 52.9 & 31.7 & 54.2 & \textbf{29.0} \\
& overlap-weighted (ours) & 71.4 & 42.8 & \textbf{54.0} & 22.5 & 58.3 & \textbf{34.7} & 53.1 & 28.7 \\
$\mathcal{D}_o$+$\mathcal{D}_n$+$\mathcal{\hat{G}}_{CLC}$ & overlap-weighted (ours) & 72.3 & 43.1 & 53.9 & \textbf{23.7} & \textbf{58.5} & \textbf{34.7} & 53.2 & \textbf{29.0} \\
\bottomrule
\end{tabular}%
}
\label{tab:coco80+face1}%
\end{table}%
\begin{figure}[t]
\centering
\includegraphics[height=0.12\textheight]{./tpfp_hist}
\includegraphics[height=0.12\textheight]{./tpfp_curve}
\caption{Distribution of localization confidence across true positive and false positive predictions (left). The curve of true positive versus false positive predictions (right). Best viewed in color.}
\label{fig:tpfp}
\end{figure}
\subsubsection{Comparison of the CLC-based unlabeled ground-truth mining method with other methods}
\label{sec:compare_3}
We compare the CLC-based mining method with an approach that relies only on classification confidence (CC-based), similar to some work on classification tasks~\cite{berthelot2019mixmatch,huang2021behavior,lai2019improving}. Its generating process also relies on a score-threshold strategy, but exploits only the classification score to represent the detector's confidence in the predicted results: predictions with high classification confidence are selected as pseudo annotations to complete the ground-truth for the unlabeled classes.
We investigate the performance of the CC-based and CLC-based mining methods. We expect that more accurate bounding boxes can be mined by our CLC-based method. To check the localization quality of bounding boxes, we count the true positive and false positive predictions under a high overlap criterion ($\text{IoU} \geq 0.75$) among the predicted results whose classification confidence is larger than 0.5. The distribution of localization confidence (Eq.~\eqref{eq:loc_conf}) across true positive and false positive predictions is shown in Fig.~\ref{fig:tpfp} (left). We find that the overwhelming majority of bounding boxes with low localization confidence are false positive predictions, which shows that localization confidence reflects the quality of bounding boxes to some extent. Besides, as shown in Fig.~\ref{fig:tpfp} (right), the number of false positive predictions can be suppressed with the collaboration of localization confidence. The results in Tab.~\ref{tab:coco75+coco5},~\ref{tab:coco79+coco1},~\ref{tab:coco60+voc20} and~\ref{tab:coco80+face1} ($\mathcal{D}_o$ + $\mathcal{D}_n$ + $\mathcal{\hat{G}}_{CLC}$) also demonstrate that the pseudo annotations mined by the CLC-based method bring an improvement over the CC-based method.
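A minimal sketch of the CLC-based gate is given below. It is illustrative only: localization confidence is approximated here as the mean IoU between each of the $T$ MC-Dropout box samples and their mean box, standing in for the paper's Eq.~\eqref{eq:loc_conf}, and the combination rule with the classification score is likewise a simplification with names of our choosing.

```python
def pair_iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(x2 - x1, 0.0) * max(y2 - y1, 0.0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def clc_gate(cls_score, sampled_boxes, eta=0.5):
    """Keep a prediction as a pseudo annotation only if both the
    classification score and the localization confidence clear eta.
    sampled_boxes: T boxes for the same object from T stochastic passes."""
    n = len(sampled_boxes)
    mean_box = [sum(b[i] for b in sampled_boxes) / n for i in range(4)]
    # Agreement among MC-Dropout samples serves as localization confidence.
    loc_conf = sum(pair_iou(mean_box, b) for b in sampled_boxes) / n
    return cls_score >= eta and loc_conf >= eta, loc_conf
```

Tightly clustered samples (the detector is consistent about where the box is) pass the gate; scattered samples are rejected even when the classification score is high.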
\subsection{Extending to multiple training sets}
The above analysis and evaluations are conducted on two datasets; however, our method is fully capable of handling multiple datasets. To verify this, we perform setup E, which involves three training sets labeled with different classes: COCO60-train ($\mathcal{D}_o$), VOC-train ($\mathcal{D}_{n_1}$), and FACE-train ($\mathcal{D}_{n_2}$). The results are listed in Tab.~\ref{tab:setup_e} and again demonstrate the effectiveness of our solution.
\begin{table}[tp]
\centering
\caption{Results on setup E. AP$_{50}$ (\%) / AP$_{75}$ (\%) are reported for the 81 classes on VOC-subtest, the 81 classes on COCO-subval, and the ``person" category on FACE-subval, respectively.}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{llllllll}
\toprule
Data & Method & \multicolumn{2}{l}{VOC-subtest} & \multicolumn{2}{l}{COCO-subval} & \multicolumn{2}{l}{FACE-subval} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
& & \multicolumn{2}{l}{all 81 classes} & \multicolumn{2}{l}{ all 81 classes} & \multicolumn{2}{l}{person} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
& & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$ & AP$_{50}$ & AP$_{75}$\\
\midrule
$\mathcal{D}_o$+$\mathcal{D}_{n_1}$+$\mathcal{D}_{n_2}$ & plain~\cite{lin2017focal} & 40.0 & 27.0 & 42.1 & 25.7 & 0.2 & 0.0 \\
& conflict-free (ours) & 46.5 & 30.1 & 49.1 & 29.7 & 41.9 & 13.5 \\
$\mathcal{D}_o$+$\mathcal{D}_{n_1}$+$\mathcal{D}_{n_2}$+$\mathcal{\hat{G}}_{CLC}$ & as fully labeled~\cite{sohn2020simple} & 47.4 & \textbf{32.0} & 48.2 & 30.0 & 39.3 & 15.7 \\
& overlap-weighted (ours) & \textbf{49.6} & 31.4 & \textbf{51.2} & \textbf{30.4} & \textbf{48.5} & \textbf{18.5} \\
\bottomrule
\end{tabular}%
}
\label{tab:setup_e}%
\end{table}%
\subsection{BCE-based vs. CE-based}
Similar to the BCE loss, the cross-entropy (CE) loss is also widely used in object detection. With the CE loss, the label $\mathbf{p}^*$ and prediction $\mathbf{p}$ have $|\mathcal{C}_o \cup \mathcal{C}_n|+1$ categories (one extra for ``background"), and the prediction $\mathbf{p}$ is activated by softmax instead of sigmoid. There is usually no obvious performance difference between the two losses in standard object detection~\cite{redmon2018yolov3}. However, with the CE loss all categories affect each other (due to the softmax). We therefore argue that the BCE-based loss is the better choice for the task handled in this work, as it allows conflicts to be avoided conveniently.
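The coupling argument can be checked numerically with a plain Python sketch:

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# CE / softmax: raising one class's logit necessarily lowers the
# probability of every other class, including unlabeled ones.
p_before = softmax([0.0, 0.0, 0.0])
p_after = softmax([4.0, 0.0, 0.0])

# BCE / sigmoid: each class probability depends only on its own logit,
# so supervision for any subset of classes can be masked independently.
q_before = sigmoid(0.0)
q_after = sigmoid(0.0)   # unchanged, whatever the other classes' logits are
```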
\begin{table}[t]
\centering
\caption{{\color{highlight}Performance with pseudo annotations mined by our unified detector of Sec.~\ref{sec:cf_loss} ($\mathcal{\hat{G}}_{CC}$, $\mathcal{\hat{G}}_{CLC}$) and by separate detectors ($\mathcal{\hat{G}}_{CC,SDs}$, $\mathcal{\hat{G}}_{CLC,SDs}$) under setup C.}}
\resizebox{1.\textwidth}{!}{
\begin{tabular}{lllllll}
\toprule
\multicolumn{1}{l}{\color{highlight}{Method}} & \color{highlight}{$\mathcal{\hat{G}}$} & \multicolumn{2}{l}{\color{highlight}{VOC-subtest}} & \color{highlight}{} & \multicolumn{2}{l}{\color{highlight}{COCO-subval}} \\
\cmidrule{3-4}\cmidrule{6-7}\color{highlight}{} & & \color{highlight}{AP$_{50}$} & \color{highlight}{AP$_{75}$} & \color{highlight}{} & \color{highlight}{AP$_{50}$} & \color{highlight}{AP$_{75}$} \\
\midrule
\multicolumn{1}{l}{\color{highlight}{as fully labeled~\cite{sohn2020simple}}} & \color{highlight}{$\mathcal{\hat{G}}_{CC,SDs}$} & \color{highlight}{44.3 } & \color{highlight}{28.3 } & \color{highlight}{} & \color{highlight}{48.0 } & \color{highlight}{27.2 } \\
\color{highlight}{} & \color{highlight}{$\mathcal{\hat{G}}_{CC}$} & \color{highlight}{48.0 (+3.7) } & \color{highlight}{31.5 (+3.2)} & \color{highlight}{} & \color{highlight}{48.7 (+0.7)} & \color{highlight}{29.9 (+2.7)} \\
\multicolumn{1}{l}{\color{highlight}{conflict-free}} & \color{highlight}{$\mathcal{\hat{G}}_{CC,SDs}$} & \color{highlight}{46.2 } & \color{highlight}{26.8 } & \color{highlight}{} & \color{highlight}{50.1 } & \color{highlight}{29.9 } \\
\color{highlight}{} & \color{highlight}{$\mathcal{\hat{G}}_{CC}$} & \color{highlight}{49.4 (+3.2)} & \color{highlight}{31.7 (+4.9)} & \color{highlight}{} & \color{highlight}{49.9 (-0.2)} & \color{highlight}{30.8 (+0.9)} \\
\multicolumn{1}{l}{\color{highlight}{safe negatives~\cite{zhao2020object}}} & \color{highlight}{$\mathcal{\hat{G}}_{CC,SDs}$} & \color{highlight}{45.3 } & \color{highlight}{28.2 } & \color{highlight}{} & \color{highlight}{49.7 } & \color{highlight}{28.2 } \\
\color{highlight}{} & \color{highlight}{$\mathcal{\hat{G}}_{CC}$} & \color{highlight}{49.0 (+3.7)} & \color{highlight}{33.3 (+5.1)} & \color{highlight}{} & \color{highlight}{49.9 (+0.2)} & \color{highlight}{31.5 (+3.3)} \\
\multicolumn{1}{l}{\color{highlight}{overlap-weighted}} & \color{highlight}{$\mathcal{\hat{G}}_{CC,SDs}$} & \color{highlight}{46.3 } & \color{highlight}{28.7 } & \color{highlight}{} & \color{highlight}{50.0 } & \color{highlight}{29.4 } \\
\color{highlight}{} & \color{highlight}{$\mathcal{\hat{G}}_{CC}$} & \color{highlight}{47.5 (+1.2)} & \color{highlight}{31.4 (+2.7)} & \color{highlight}{} & \color{highlight}{51.6 (+1.6)} & \color{highlight}{31.6 (+2.2)} \\
\multicolumn{1}{l}{\color{highlight}{overlap-weighted}} & \color{highlight}{$\mathcal{\hat{G}}_{CLC,SDs}$} & \color{highlight}{49.2 } & \color{highlight}{30.8 } & \color{highlight}{} & \color{highlight}{50.9 } & \color{highlight}{29.9 } \\
\color{highlight}{} & \color{highlight}{$\mathcal{\hat{G}}_{CLC}$} & \color{highlight}{49.0 (-0.2)} & \color{highlight}{33.0 (+2.2)} & \color{highlight}{} & \color{highlight}{52.2 (+1.3)} & \color{highlight}{31.7 (+1.8)} \\
\bottomrule
\end{tabular}%
}
\label{tab:sds}%
\end{table}%
{\color{highlight}
\subsection{Separate detectors for unlabeled ground-truth mining}
\label{sec:SDs}
We have shown that the conflict-free loss can lead to an acceptable detector in one training round, and that the retraining phase can significantly improve performance. If a simple one-round training procedure is preferred, the conflict-free loss is recommended. If one adopts two-round training, what about using multiple detectors trained on separate datasets for unlabeled ground-truth mining?
We compare the performance obtained with pseudo annotations mined by the unified detector and by the separate detectors under setup C. The results in Tab.~\ref{tab:sds} demonstrate that the pseudo annotations mined by the unified model are superior to those mined by multiple separate detectors. Considering that training multiple separate detectors is also more cumbersome, we recommend the unified detector as the initial detector in the two-round training method.
}
\subsection{More visualization results}
Fig.~\ref{fig:vis2} illustrates more visualization results of the detectors under setup C, showing that our method yields powerful and robust detectors with limited data.
\section{Conclusion}
In this paper, we aim to train a category-extended object detector with limited data. To achieve this goal, a powerful pipeline is proposed. The conflict-free loss is designed to eliminate possible conflicts during joint learning. To further improve performance, a retraining phase is employed: we adopt a Monte Carlo Dropout-based method to estimate localization confidence and unearth more accurate pseudo annotations. We explore several strategies for retraining with the pseudo annotations, and empirically show that employing overlap-weighted negative samples during retraining leads to better performance. Experimental results in multiple settings demonstrate that the proposed method yields a feasible category-extended object detector and outperforms other approaches.
\begin{figure}[tp]
\centering
\caption{Detection results on COCO-val and VOC-test. Red bounding boxes indicate objects of the 60 categories in COCO60-train, while green ones correspond to the 20 VOC categories.}
\includegraphics[height=0.6\textheight]{./vis2.pdf}
\label{fig:vis2}%
\end{figure}%
{\color{highlight} Despite the promising results, there is still a significant gap between models trained on incrementally labeled datasets and models trained on fully labeled datasets. Exploring unsupervised domain adaptation algorithms would benefit unlabeled ground-truth mining by mitigating the effects of the domain gap, and designing class-distribution-aware retraining methods would help alleviate the class imbalance problem, which may be exacerbated by pseudo annotations. Beyond detection, incrementally labeled datasets also arise in multi-label classification, semantic / instance segmentation, and other modalities, which are worth exploring further.}
\section*{Acknowledgements}
This work is supported in part by the National Natural Science Foundation of China (62171248, 61972219), the R\&D Program of Shenzhen (JCYJ2018050815\\2204044, JCYJ20190813174403598, SGDX20190918101201696), the PCNL KEY project (PCL2021A07), and the Overseas Research Cooperation Fund of Tsinghua Shenzhen International Graduate School (HW2021013).
\begin{small}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
Formalising strategic reasoning has become an increasingly rich and attractive direction of active research and applications of multi-agent modal logics over the past few decades. Early logical systems capturing agents' abilities were developed with philosophical motivations and applications in mind, including Brown's modal logic of ability \cite{brown:88a} and Belnap and Perloff's STIT logics~\cite{belnap:88a}. In the late 1990s and early 2000s, two seminal works in the area appeared independently: Pauly's Coalition Logic \Logicname{CL}\xspace, introduced in \cite{Pauly01phd,Pauly02modal}, and Alur, Henzinger and Kupferman's Alternating-time Temporal Logic \Logicname{ATL}\xspace, introduced (in its final version) in \cite{AHK-02}; cf.\ also \cite{TLCSbook}.
The logic \Logicname{CL}\xspace was introduced with the explicit intention to formalise reasoning about one-step (local) strategic abilities of coalitions of agents to guarantee the achievement of designated objectives in the immediate outcome of their collective action, regardless of the respective actions of the remaining agents.
The logic \Logicname{ATL}\xspace, on the other hand, was introduced as a logical formalism for formal specification and verification of open (interacting with the environment) computer systems, where the agents represent concurrently executed processes. However, it was gradually adopted in the research on logics for multi-agent systems as one of the most standard and popular logical systems for reasoning about long-term strategic abilities of agents and coalitions in concurrent multi-player games. The logic \Logicname{ATL}\xspace can be described as an extension of \Logicname{CL}\xspace with the long-term temporal operators $\mathsf{G}$ and $\mathsf{U}$ from the branching-time temporal logic \Logicname{CTL}\xspace, which can be regarded as a single-agent fragment of \Logicname{ATL}\xspace.
More precisely, both \Logicname{CL}\xspace and \Logicname{ATL}\xspace feature special modal operators\footnote{We use here the notation from \cite{AHK-02}, which was more widely adopted.} $\coal{C}{}$, indexed with groups (coalitions) of agents $C$, such that for any formula $\phi$, regarded as expressing the coalitional objective of $C$, the formula $\coal{C}{\phi}$ intuitively says that the coalition $C$ has a collective strategy $\sigma_{C}$ that guarantees the satisfaction of $\phi$ in every outcome (state for \Logicname{CL}\xspace, respectively, play for \Logicname{ATL}\xspace) that can occur when the agents in $C$ execute their strategies in $\sigma_{C}$, regardless of the choices (strategic or not) of actions of the agents not in $C$.
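For orientation, this truth condition can be sketched schematically as follows, where $\mathcal{M}$ is a model, $q$ a state, and $\mathit{out}(q,\sigma_{C})$ denotes the set of outcomes (states for \Logicname{CL}\xspace, plays for \Logicname{ATL}\xspace) that can occur from $q$ when the agents in $C$ follow $\sigma_{C}$ (the notation $\mathit{out}$ is used here only for illustration):
\[
\mathcal{M}, q \models \coal{C}{\phi} \quad \text{iff} \quad \exists \sigma_{C}\ \forall \lambda \in \mathit{out}(q, \sigma_{C}) :\ \mathcal{M}, \lambda \models \phi .
\]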
Thus, both \Logicname{CL}\xspace and \Logicname{ATL}\xspace capture reasoning about \emph{absolute powers} of agents and coalitions to act in pursuit of their goals and succeed unconditionally against \emph{any} possible behaviour of their opponents, which are thus regarded as adversaries (in the context of \Logicname{CL}\xspace) or as randomly behaving environment (in the context of \Logicname{ATL}\xspace). This is a somewhat extreme perspective, as strategic interactions of rational agents in the real world usually involve a complex interplay of \emph{cooperation} and \emph{competition}, both driven by the individual and collective objectives of all agents, be they proponents or opponents of the objective in focus. To capture these adequately, expressively richer formal logical languages are needed. In the recent precursor \cite{GorankoEnqvist18} of the present work we proposed two such extensions of \Logicname{CL}\xspace with additional coalitional operators, respectively implementing the following two ideas relating cooperation and competition in social context:
\begin{description}
\item[] \emph{Social friendliness:} agents can achieve private goals while leaving room for cooperation with the others and with the rest of the society.
\item[] \emph{Group protection:} agents can cooperate within the society while simultaneously protecting their private goals.
\end{description}
The second extension mentioned above, called \emph{Group Protecting Coalition Logic} (GPCL) is the starting point of the present work, which introduces and studies its extension in \Logicname{ATL}\xspace-like style, called Temporal Logic of Coalitional Goal Assignments (\Logicname{TLCGA}\xspace). The logic \Logicname{TLCGA}\xspace features one, very expressive, coalitional strategic operator, viz. the \emph{coalitional goal assignment} operator of the type
$\brak{\gamma}$, where $\gamma$ is a mapping assigning to each coalition (i.e., each subset of the set of all agents $\Agt$) its coalitional \emph{goal}, formalised by a path formula of the language of \Logicname{TLCGA}\xspace, i.e. a formula prefixed with a temporal operator $\mathord \mathsf{X}\, , \mathord \mathsf{G}\, $, or $\, \mathsf{U} \, $, representing the temporalised objective for the respective coalition.
Then, the formula $\brak{\gamma}$ intuitively says that there is a strategy profile
$\ensuremath{\Sigma\xspace}$ for the grand coalition $\Agt$ such that for each coalition $C$, the restriction $\ensuremath{\Sigma\xspace} \vert_C$ of $\ensuremath{\Sigma\xspace}$ to $C$ is a collective strategy of $C$ that enforces the satisfaction of its objective $\gamma(C)$ in all outcome plays enabled by $\ensuremath{\Sigma\xspace} \vert_C$. The intuition is that each agent participates in the grand coalition with its individual strategy so that, while contributing to the achievement of the common goal, each coalition also guarantees the protection of its coalitional interest against any possible deviation of all other agents.
The logic \Logicname{TLCGA}\xspace naturally extends \Logicname{ATL}\xspace (in particular, \Logicname{CL}\xspace) and exhibits richer expressiveness, both purely technically and in terms of potential applications (cf. Section \ref{sec:applications}). Notably, it turns out (cf. Section \ref{subsec:CGAsemantics-variations}) that the semantics of \Logicname{TLCGA}\xspace is more sophisticated than the semantics of \Logicname{ATL}\xspace and most of its extensions studied so far, in the sense that it is essential that the class of (memory-based) strategies underlying the semantics of \Logicname{TLCGA}\xspace includes all \emph{play-based strategies} (taking into account the full history, consisting not only of the sequence of visited states, but also including the action profiles causing the transitions between them) rather than only the \emph{path-based strategies} (based on the state histories only), which has been the customary choice both for the logics in the \Logicname{ATL}\xspace/\Logicname{ATL^*}\xspace family and for the family of Strategy Logics (see further) studied so far. More precisely, the two semantics differ for \Logicname{TLCGA}\xspace, both in terms of truth at a state in a model and in terms of validity (respectively, satisfiability), as shown in Section \ref{subsec:CGAsemantics-variations}, where we also argue that the semantics using play-based strategies is the more natural and faithful one for the intended meaning of the operator $\brak{\cdot}$. We also analyse there how the two semantics relate technically, and show how model checking for the semantics using play-based strategies can be reduced to model checking for the semantics using path-based strategies.
Besides $\Logicname{TLCGA}\xspace$, we also introduce and study, though in less detail, two extensions of this language: $\Logicname{TLCGA}\xspace^+$, in which conjunctions of path formulas are allowed, and a fixpoint language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ in the style of the modal $\mu$-calculus. The first of these two extensions arises naturally in some applications with a connection to game theory, described shortly. The full fixpoint language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is of theoretical interest in its own right, as a very expressive, yet quite well-behaved logic for multi-player games. Furthermore, since the logics $\Logicname{TLCGA}\xspace$ and $\Logicname{TLCGA}\xspace^+$ embed as fragments of $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, the latter can be used as a tool to study these logics and obtain technical results about them. Here, it is used to prove decidability and the finite model property of both $\Logicname{TLCGA}\xspace$ and $\Logicname{TLCGA}\xspace^+$, as well as to establish upper complexity bounds for their satisfiability problems.
As we demonstrate with examples in Section \ref{sec:applications}, the logic \Logicname{TLCGA}\xspace enables the expression of various natural and important nuanced patterns of multi-player strategic interaction.
In particular, the logic \Logicname{TLCGA}\xspace captures a concept that we call ``\emph{co-equilibrium}'', which we define and promote here as a new, alternative solution concept on the border of non-cooperative and cooperative game theory \cite{OR2}. We argue (in Section \ref{subsec:co-equilibria}) that it is more natural and applicable than the standard notion of Nash equilibrium in the context of concurrent multi-player games with individual qualitative objectives. The existence of a co-equilibrium can be expressed quite simply in \Logicname{TLCGA}\xspace using the operator $\brak{\gamma}$.
We also show (in Section \ref{subsec:stable-outcomes}) how other naturally defined notions of individually and coalitionally stable outcomes can be formalised in $\Logicname{TLCGA}\xspace^+$.
Further motivation for the present work comes from \emph{cooperative game theory}
\cite{OR2},
\cite{BranzeiDimitrovTijs},
\cite{DBLP:series/synthesis/2011Chalkiadakis}. One natural link is the apparent relationship between the concurrent game models with coalitional goal assignments studied here and some classes of cooperative games, such as the
so-called \emph{simple games} \cite{Ramamurthy90}, \cite{vanDeemen97}, where the characteristic function defining the game assigns a payoff of 0 or 1 to each coalition. An important class of such games are the \emph{voting games} \cite{DBLP:series/synthesis/2011Chalkiadakis}. We only mention these links with cooperative game theory here; exploring them is left to further work.
In this paper we only briefly illustrate (in Section \ref{subsec:applications2GT}) the expressiveness of \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$ by showing how natural qualitative analogues of the key notions of \emph{stable strategy profiles} and \emph{core}, defined and studied for a type of cooperative games based on concurrent game models in \cite{DBLP:conf/atal/GutierrezKW19}, can be expressed in $\Logicname{TLCGA}\xspace^+$.
\paragraph{Main contributions.}
Besides the introduction of the logic \Logicname{TLCGA}\xspace and its extensions $\Logicname{TLCGA}\xspace^+$ and $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, the main technical contributions of this paper are:
\begin{itemize}
\item Fixpoint characterizations of the main types of long-term goal assignments and translation of $\Logicname{TLCGA}\xspace$ and $\Logicname{TLCGA}\xspace^+$ into $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$.
\item Bisimulation invariance and the Hennessy-Milner property for the logic \Logicname{TLCGA}\xspace and for the full $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ with respect to the GPCL-bisimulation introduced in \cite{GorankoEnqvist18}.
\item A sound and complete axiomatic system for \Logicname{TLCGA}\xspace.
\item The finite model property and decidability (with a triple exponential bound) for the extended logic $\Logicname{TLCGA}\xspace^+$ (hence also for \Logicname{TLCGA}\xspace).
\item A double exponential bound on the satisfiability problem for the fixpoint language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$.
\item Complexity bounds for the model checking problems and the satisfiability problems for \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$.
\end{itemize}
\paragraph{Related work and results.}
In addition to the links and references mentioned so far, the present work bears both conceptual and technical connections with several previously studied logics for strategic reasoning and multi-player games, including: the logic \Logicname{ATL}\xspace with irrevocable strategies \cite{AgotnesTARK2007,Jamroga08commitment-tr}, \Logicname{ATL}\xspace with strategy contexts \cite{BrihayeLLM09}, coalitional logics of cooperation and propositional control \cite{HoekW05, TroquardHW09}, cooperative concurrent games \cite{DBLP:conf/atal/GutierrezKW19}, and especially with the family of Strategy Logics, originally introduced in \cite{DBLP:journals/iandc/ChatterjeeHP10} and further extended and studied in \cite{fsttcs/MogaveroMV10},
\cite{tocl/MogaveroMPV14},
\cite{corr/MogaveroMPV16},
\cite{aamas/AminofMMR16},
\cite{DBLP:conf/aaai/AcarBM19},
\cite{DBLP:journals/mst/GardyBM20}, etc.
Indeed, the operator $\brak{\gamma}$ in $\Logicname{TLCGA}\xspace^+$
with the `path-based semantics' mentioned earlier (cf. Section \ref{subsec:CGAsemantics-variations}) can be translated to the Conjunctive Goals fragment\footnote{We thank an anonymous reviewer for suggesting to consider the translation of \Logicname{TLCGA}\xspace into that fragment.} SL[CG] of Strategy Logic \cite{DBLP:conf/lics/MogaveroMS13}, \cite{DBLP:journals/mst/GardyBM20}, in a way similar to the standard translation of modal logics to first-order logic.
%
As shown in Section \ref{subsec:ST2SL}, that translation can be used for applying model checking algorithms for SL[CG] to $\Logicname{TLCGA}\xspace^+$ (in particular, \Logicname{TLCGA}\xspace) and for obtaining a \textrm{2ExpTime} complexity upper bound for the model checking problem for $\Logicname{TLCGA}\xspace^+$, for both versions of its semantics.
%
Likewise, that translation can be used for applying existing algorithms and obtaining a \textrm{PSpace} complexity upper bound for the satisfiability problem in the flat fragment of $\Logicname{TLCGA}\xspace^+$ with the path-based semantics,
by reduction to the satisfiability problem in the flat fragment FSL[CG] of SL[CG], shown in \cite{DBLP:conf/aaai/AcarBM19} to be decidable and \textrm{PSpace}-complete.
On the other hand, a direct argument in Section \ref{sec:FMP}, using the embedding of \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$ into the $\mu$-calculus extension $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$,
shows that the satisfiability problem for the whole of $\Logicname{TLCGA}\xspace^+$ (in particular, \Logicname{TLCGA}\xspace), with its standard play-based semantics, is in \textrm{2ExpTime}.
%
Notwithstanding these important technical benefits, we note that translating \Logicname{TLCGA}\xspace to (a fragment of) Strategy Logic could generally result in an unwanted surplus of expressiveness, or even amount to technical overkill, which we have both conceptual and computational reasons to avoid whenever possible.
On the conceptual side, translating \Logicname{TLCGA}\xspace to Strategy Logic would lose the elegant succinctness and clear focus of the operator $\brak{\gamma}$ as the main high-level logical construct of the language, and would replace it with its low-level description in Strategy Logic\footnote{Admittedly, this is a subjective argument and mostly a matter of taste, which we only state but do not try to impose here.}.
On the technical side, such a translation would map a syntactically simple propositional language into a considerably more expressive and syntactically heavier, essentially second-order language, explicitly involving quantification over strategies (which are functions from finite sequences of states to actions). These are essentially the same arguments as those favouring modal logic over first-order logic, but amplified by the technical complexity of quantifying over functions rather than individuals.
Thus, we eventually adopt and advocate the pragmatic approach of adhering as much as possible to the propositional logic framework of \Logicname{TLCGA}\xspace for the purposes of expressing and reasoning about properties of strategic interactions of the type mentioned above, while resorting to using translation to fragments of Strategy Logic only when necessary or practically expedient, e.g. for the purpose of using already developed tools for model or satisfiability checking in these fragments of Strategy Logic.
Our work is also essentially connected with \emph{coalgebraic modal logic} \cite{moss1999coalgebraic,pattinson2003coalgebraic,cirstea2007modular}, which is an abstract framework for modal logics of state-based evolving systems. Together with the fixpoint characterization of \Logicname{TLCGA}\xspace, this makes \Logicname{TLCGA}\xspace in essence a fragment of a \emph{coalgebraic fixpoint logic}
\cite{venema2006automata,fontaine2010automata,cirstea2011exptime}.
This connection is used to establish decidability and the finite model property for our logic. Beyond that, however, our presentation is mostly self-contained, and does not require familiarity with coalgebra.
Still, we want to emphasize that the connection with coalgebraic logic, and coalgebraic fixpoint logics in particular, is implicitly present throughout the paper. In particular the notion of \emph{one-step completeness}, and the idea of lifting one-step completeness to completeness for the full language, is at the heart of our completeness proof. The notion of one-step completeness is inherently coalgebraic and has been studied in depth by a number of authors \cite{schroder2009pspace,schroder2008expressivity,pattinson2003coalgebraic}.
Furthermore, the fact that our translation into fixpoint logic requires only a single recursion variable means that \Logicname{TLCGA}\xspace is a fragment of a \emph{flat} fixpoint logic \cite{SantocanaleV10,SchroderV10,enqvist2018flat}, and completeness of flat fixpoint logics can be obtained by simpler techniques than the full $\mu$-calculus. There are two main reasons why our completeness proof is not explicitly formulated in coalgebraic terms: first, we note that \Logicname{TLCGA}\xspace is not a flat $\mu$-calculus per se, but rather embeds into such a logic via a fairly intricate translation. So the results in \cite{SchroderV10} do not apply directly here, as far as we can see. Second, and more importantly, we want the proof to be as self-contained and accessible without prior knowledge in coalgebraic $\mu$-calculus as possible. It is possible that one could ``transfer'' completeness of flat coalgebraic $\mu$-calculi to obtain completeness for \Logicname{TLCGA}\xspace via our translation, but we believe a direct completeness proof is more transparent and provides better understanding of \Logicname{TLCGA}\xspace and its semantics.
Lastly, we also note the relationship of the present work with the logic for local conditional strategic reasoning CSR introduced in \cite{LORIVII-GorankoJu}.
Furthermore, we point out the direct applicability of the logic \Logicname{TLCGA}\xspace for adequate alternative formalisation of the ideas of \emph{rational synthesis} \cite{tacas/FismanKL10} and \emph{rational verification} \cite{aaai/WooldridgeGHMPT16}.
These connections and possible applications are left to future work.
\paragraph{Structure of the paper.}
After some preliminaries in Section \ref{sec:prelim} on concurrent game models, plays and strategies in them, in Section \ref{sec:TLCGA} we introduce and study the formal syntax and semantics of the logic \Logicname{TLCGA}\xspace, and illustrate its expressiveness with some examples.
In Section \ref{sec:fixpoints} we obtain fixpoint characterizations of the long-term goal assignments expressed in a suitable $\mu$-calculus extension of \Logicname{TLCGA}\xspace. We then discuss the connection with coalgebraic modal logic.
In Section \ref{subsec:Bisimulations} we introduce the relevant notion of bisimulation for \Logicname{TLCGA}\xspace and prove bisimulation invariance and the Hennessy-Milner property for it.
In Section \ref{sec:axiomatization} we provide an axiomatic system for \Logicname{TLCGA}\xspace for which we prove soundness and completeness. In Section \ref{sec:FMP} we show decidability of \Logicname{TLCGA}\xspace via finite model property. We then end with brief concluding remarks in Section \ref{sec:concluding}.
\section{Preliminaries and background}
\label{sec:prelim}
\subsection{Concurrent game models, plays, strategies}
\label{subsec:CGM}
We fix a finite set of \textbf{players/agents} $\Agt = \{\ensuremath{\mathsf{a}\xspace}_1,...,\ensuremath{\mathsf{a}\xspace}_n\}$
and a set of \textbf{atomic propositions} $\Prop$.
Subsets of $\Agt$ will also be called \textbf{coalitions}.
Given a set $W$, we denote by $W^*$ the set of finite words over $W$, by $W^+$ the set of non-empty words from $W^*$, and by $W^\omega$ the set of infinite words over $W$.
\begin{definition}
Let $\ensuremath{\mathsf{O}\xspace}$ be any non-empty set. A \textbf{(strategic) game form over the set of outcomes $\ensuremath{\mathsf{O}\xspace}$} is a tuple
\[\mathcal{G} =
(\Act,\act,\ensuremath{\mathsf{O}\xspace},\ensuremath{\mathsf{out}\xspace})\]
where
\begin{itemize}
\item $\Act$ is a non-empty set of \textbf{actions},
\item $\act: \Agt \to \mathcal{P}^{+}(\Act)$ is a mapping assigning to each $\ensuremath{\mathsf{a}\xspace} \in \Agt$ a non-empty set $\act_\ensuremath{\mathsf{a}\xspace}$ of \textbf{actions available to the player $\ensuremath{\mathsf{a}\xspace}$},
\item $\ensuremath{\mathsf{out}\xspace}: \Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt}\act_\ensuremath{\mathsf{a}\xspace} \to \ensuremath{\mathsf{O}\xspace}$ is a map assigning to every \textbf{action profile}
$\ensuremath{\zeta\xspace} \in \Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \act_\ensuremath{\mathsf{a}\xspace}$ a unique \textbf{outcome} in $\ensuremath{\mathsf{O}\xspace}$.
\end{itemize}
\end{definition}
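To illustrate the definition, here is a simple, purely illustrative game form in the style of matching pennies, assuming a two-agent set $\Agt = \{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\}$ (the action names $H,T$ and the outcomes $o_1,o_2$ are hypothetical):

```latex
% A two-player game form over the set of outcomes O = {o_1, o_2},
% where both players choose between the actions H and T:
\[
\mathcal{G} = (\{H,T\},\ \act,\ \{o_1,o_2\},\ \ensuremath{\mathsf{out}\xspace}),
\qquad
\act_{\ensuremath{\mathsf{a}\xspace}_1} = \act_{\ensuremath{\mathsf{a}\xspace}_2} = \{H,T\},
\]
% The outcome depends only on whether the chosen actions agree:
\[
\ensuremath{\mathsf{out}\xspace}(x,y) =
\begin{cases}
o_1 & \text{if } x = y,\\
o_2 & \text{otherwise.}
\end{cases}
\]
```

Each of the four action profiles is thus mapped to a unique outcome, as the definition requires.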
\begin{definition}
A \textbf{concurrent game model} is a tuple
\[\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},V)\]
where
\begin{itemize}
\item $\ensuremath{\mathsf{S}\xspace}$ is a non-empty set of \textbf{states},
\item $\Act$ is a
non-empty set of \textbf{actions},
\item $\mathfrak{g}$ is a \textbf{game map}, assigning to each state
$w \in \ensuremath{\mathsf{S}\xspace}$ a strategic game form
$\mathfrak{g}(w) = (\Act,\act_w,\ensuremath{\mathsf{S}\xspace},\ensuremath{\mathsf{out}\xspace}_w)$ over the set of outcomes $\ensuremath{\mathsf{S}\xspace}$,
\item $V: \Prop \to \powerset{\ensuremath{\mathsf{S}\xspace}}$ is a \textbf{valuation} of the atomic propositions in $\ensuremath{\mathsf{S}\xspace}$.
\end{itemize}
For every concurrent game model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},V)$ we define the following.
\begin{itemize}
\item
For each $\ensuremath{\mathsf{a}\xspace} \in \Agt$ and $w \in \ensuremath{\mathsf{S}\xspace}$, the set
$\act_w(\ensuremath{\mathsf{a}\xspace})$ consists of the \textbf{locally available actions} for $\ensuremath{\mathsf{a}\xspace}$ in $w$.
It will also be denoted by $\act(\ensuremath{\mathsf{a}\xspace},w)$.
We also define the set $\act_\ensuremath{\mathsf{a}\xspace} := \bigcup_{w\in\ensuremath{\mathsf{S}\xspace}} \act_w(\ensuremath{\mathsf{a}\xspace})$ of \textbf{globally available actions} for $\ensuremath{\mathsf{a}\xspace}$.
\item
An \textbf{action profile} is a tuple of actions
$\ensuremath{\zeta\xspace} \in \Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \act_\ensuremath{\mathsf{a}\xspace}$.
%
A \textbf{locally available action profile at state $w$} is any tuple of locally available actions
$\ensuremath{\zeta\xspace} \in \Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \act_w(\ensuremath{\mathsf{a}\xspace})$.
The set of these action profiles will be denoted by $\ensuremath{\mathsf{ActProf}\xspace}_w$.
\item $\ensuremath{\mathsf{out}\xspace}_\ensuremath{\mathcal{M}}$
is the \textbf{global outcome function} assigning to every state $w$ and a local action profile $\ensuremath{\zeta\xspace}$ at $w$
a unique \textbf{outcome} $\ensuremath{\mathsf{out}\xspace}_\ensuremath{\mathcal{M}}(w,\ensuremath{\zeta\xspace}) := \ensuremath{\mathsf{out}\xspace}_w(\ensuremath{\zeta\xspace})$.
When $\ensuremath{\mathcal{M}}$ is fixed by the context, it will be omitted from the subscript.
\item
Given a coalition $C \subseteq \Agt$, a \textbf{joint action} for $C$ in $\ensuremath{\mathcal{M}}$ is a tuple of individual actions $\ensuremath{\zeta\xspace}_{C} \in \prod_{\ensuremath{\mathsf{a}\xspace} \in C} \act_\ensuremath{\mathsf{a}\xspace}$.
In particular, for any action profile
$\ensuremath{\zeta\xspace} \in \Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \act_\ensuremath{\mathsf{a}\xspace}$, $\ensuremath{\zeta\xspace} \vert_C$ is the joint action obtained by restricting $\ensuremath{\zeta\xspace}$ to $C$.
\item
For any $w\in \ensuremath{\mathsf{S}\xspace}$, $C \subseteq \Agt$, and joint action $\ensuremath{\zeta\xspace}_{C}$ that is available at $w$, we define:
%
\[
\ensuremath{\mathsf{Out}\xspace}[w,\ensuremath{\zeta\xspace}_{C}] = \left\{u \in \ensuremath{\mathsf{S}\xspace} \mid \exists \ensuremath{\zeta\xspace} \in
\prod_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \act_{w}(\ensuremath{\mathsf{a}\xspace}):
\; \ensuremath{\zeta\xspace} \vert_C = \ensuremath{\zeta\xspace}_{C} \mbox{ and } \ensuremath{\mathsf{out}\xspace}(w,\ensuremath{\zeta\xspace}) = u \right\}.
\]
\end{itemize}
A \textbf{partial play}, or a \textbf{history}
in $\ensuremath{\mathcal{M}}$ is either an element of $\ensuremath{\mathsf{S}\xspace}$ or a finite word of the form:
$$w_0 \ensuremath{\zeta\xspace}_0 w_1 ... w_{n - 1} \ensuremath{\zeta\xspace}_{n - 1} w_n$$
where $w_0,...,w_n \in \ensuremath{\mathsf{S}\xspace}$ and for each $i < n$, $\ensuremath{\zeta\xspace}_i$ is a locally available action profile in $\Pi_{\ensuremath{\mathsf{a}\xspace} \in \Agt}\act(\ensuremath{\mathsf{a}\xspace},w_i)$.
The last state in a history $h$ will be denoted by $\ensuremath{\mathit{l}}(h)$.
The set of histories in $\ensuremath{\mathcal{M}}$ is denoted by $\ensuremath{\mathsf{Hist}\xspace}(\ensuremath{\mathcal{M}})$.
A \textbf{(memory-based) strategy for player $\ensuremath{\mathsf{a}\xspace}$} is a map $\ensuremath{\sigma\xspace}_\ensuremath{\mathsf{a}\xspace}$ assigning to each history $h = w_0 \ensuremath{\zeta\xspace}_0... \ensuremath{\zeta\xspace}_{n-1}w_n$ in $\ensuremath{\mathsf{Hist}\xspace}(\ensuremath{\mathcal{M}})$ an action $\ensuremath{\sigma\xspace}_\ensuremath{\mathsf{a}\xspace}(h)$ from $\act(\ensuremath{\mathsf{a}\xspace},w_n)$.
Note that strategies are defined here in terms of \emph{histories}, i.e.\ \emph{partial plays}, not just sequences of states, as is customary for \Logicname{ATL^*}\xspace and in particular \Logicname{ATL}\xspace \cite{AHK-02}; cf.\ also
\cite{TLCSbook} or \cite{BGJ15}. This distinction will turn out to be essential for the semantics of the logic introduced here.
A strategy $\ensuremath{\sigma\xspace}_\ensuremath{\mathsf{a}\xspace}$ is \textbf{memoryless}, or \textbf{positional}, if it assigns actions only based on the current (last) state, i.e. $\ensuremath{\sigma\xspace}_\ensuremath{\mathsf{a}\xspace}(h) = \ensuremath{\sigma\xspace}_\ensuremath{\mathsf{a}\xspace}(h')$ whenever $\ensuremath{\mathit{l}}(h) = \ensuremath{\mathit{l}}(h')$.
Given a coalition $C \subseteq \Agt$, a \textbf{joint strategy} for $C$ in the model
$\ensuremath{\mathcal{M}}$ is a tuple $\ensuremath{\Sigma\xspace}_{C}$ of individual strategies, one for each player in $C$.
A \textbf{(global) strategy profile} $\ensuremath{\Sigma\xspace}$ is a joint strategy for the grand coalition $\Agt$, i.e. an assignment of a strategy to each player.
We denote the set of all strategy profiles in the model $\ensuremath{\mathcal{M}}$ by $\ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}$, and the set of all joint strategies for a coalition $C$ in $\ensuremath{\mathcal{M}}$ by $\ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}(C)$. Thus, $\ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}} = \ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}(\Agt)$.
Given a strategy profile $\ensuremath{\Sigma\xspace}$, the \textbf{play} induced by $\ensuremath{\Sigma\xspace}$ at $w \in \ensuremath{\mathsf{S}\xspace}$ is the unique infinite word
\[\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace}) = w_0 \ensuremath{\zeta\xspace}_0 w_1 \ensuremath{\zeta\xspace}_1 w_2 \ensuremath{\zeta\xspace}_2...\]
such that $w_0 = w$ and, for each $n < \omega$,
\[
\ensuremath{\zeta\xspace}_{n} = \ensuremath{\Sigma\xspace}(w_0 \ensuremath{\zeta\xspace}_0 ... \ensuremath{\zeta\xspace}_{n-1}w_{n})
\quad \mbox{and} \quad
w_{n + 1} = \ensuremath{\mathsf{out}\xspace}(w_n,\ensuremath{\zeta\xspace}_n).
\]
The infinite word $w_0w_1w_2...$ obtained by simply forgetting the moves of the players in this infinite play is called the \textbf{computation path} induced by $\ensuremath{\Sigma\xspace}$ at $w$, and is denoted $\pth(\ensuremath{\Sigma\xspace},w)$.
More generally, given a coalition $C \subseteq \Agt$, a state $w \in \ensuremath{\mathsf{S}\xspace}$
and a joint strategy $\ensuremath{\Sigma\xspace}_{C}$ for $C$ we define the \textbf{set of outcome plays induced by the joint strategy $\ensuremath{\Sigma\xspace}_{C}$ at $w$} to be the set of plays
\[\ensuremath{\mathsf{Plays}\xspace}(w,\ensuremath{\Sigma\xspace}_{C}) = \big\{\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace}) \mid \ensuremath{\Sigma\xspace} \in \ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}
\mbox{ such that } \ensuremath{\Sigma\xspace}(\ensuremath{\mathsf{a}\xspace}) = \ensuremath{\Sigma\xspace}_{C}(\ensuremath{\mathsf{a}\xspace}) \mbox{ for all } \ensuremath{\mathsf{a}\xspace} \in C \big\}
\]
Given a strategy profile $\ensuremath{\Sigma\xspace}$ we also denote
$\ensuremath{\mathsf{Plays}\xspace}(w,\ensuremath{\Sigma\xspace},C) := \ensuremath{\mathsf{Plays}\xspace}(w,\ensuremath{\Sigma\xspace}\vert_{C})$.
We will likewise use the notation
$\paths(w,\ensuremath{\Sigma\xspace},C)$ for the set of computation paths obtained from the plays in
$\ensuremath{\mathsf{Plays}\xspace}(w,\ensuremath{\Sigma\xspace},C)$.
Since these sets only depend on the strategies assigned to the players in $C$, we shall freely use the notation $\ensuremath{\mathsf{Plays}\xspace}(w,\ensuremath{\Sigma\xspace},C)$ and $\paths(w,\ensuremath{\Sigma\xspace},C)$ even when $\ensuremath{\Sigma\xspace}$ is defined only for the members of $C$ and not for the other players.
\end{definition}
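As a small worked example (ours, for illustration only) of the notions of play and computation path, consider a model with $\ensuremath{\mathsf{S}\xspace} = \{s,t\}$, $\Act = \{0,1\}$, two agents $\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2$ with both actions available at both states, and $\ensuremath{\mathsf{out}\xspace}_w(\ensuremath{\zeta\xspace}) = t$ if the two chosen actions agree, $\ensuremath{\mathsf{out}\xspace}_w(\ensuremath{\zeta\xspace}) = w$ otherwise.

```latex
% For the positional strategy profile Sigma assigning to every history
% the action 0 for agent a_1 and the action 1 for agent a_2, the chosen
% actions never agree, so the play starting at s loops on s forever:
\[
\ensuremath{\mathsf{play}\xspace}(s,\ensuremath{\Sigma\xspace}) = s\,(0,1)\,s\,(0,1)\,s \cdots,
\qquad
\pth(\ensuremath{\Sigma\xspace},s) = s\,s\,s \cdots
\]
% Restricting Sigma to the coalition {a_1} leaves a_2's action free,
% so Plays(s, Sigma|_{{a_1}}) also contains plays that reach t
% (namely those in which a_2 happens to play 0 at some point).
```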
The strategies in our semantics will be memory-based: moves of players in a strategy may depend on previous moves of other players, and players have perfect information and recall about previous moves.
Indeed, as we will show in Section \ref{subsec:CGAsemantics},
just as for \Logicname{ATL^+}\xspace and \Logicname{ATL^*}\xspace, but unlike \Logicname{ATL}\xspace, the restriction to positional strategies generates a different semantics for the logic \Logicname{TLCGA}\xspace which we introduce here.
\section{The temporal logic of coalitional goal assignments \Logicname{TLCGA}\xspace}
\label{sec:TLCGA}
\subsection{Goal assignments, language and syntax of \Logicname{TLCGA}\xspace}
\label{subsec:CGAsyntax}
Given a fixed finite set of players
$\Agt$ and a set $G$ of objects, called `goals', a \textbf{(coalitional) goal assignment for $\Agt$ in $G$} is a mapping $\gamma: \mathcal{P}(\Agt) \to G$.
We now define the set $\ensuremath{\mathsf{StateFor}\xspace}$ of
\textbf{state formulae} and the set $\ensuremath{\mathsf{PathFor}\xspace}$ of
\textbf{path formulae} of \Logicname{TLCGA}\xspace by mutual induction, using the following BNF:
\medskip
$\ensuremath{\mathsf{StateFor}\xspace}: \ \ \ \ \ \varphi := p \mid \top
\mid \neg \varphi
\mid (\varphi \wedge \varphi)
\mid (\varphi \lor \varphi)
\mid
\brak{\gamma}$
\smallskip
$\ensuremath{\mathsf{PathFor}\xspace}: \ \ \ \ \ \theta := \mathsf{X} \varphi \mid \varphi \mathsf{U} \varphi \mid \mathsf{G} \varphi$
\smallskip
where $p \in \Prop$ and $\gamma: \mathcal{P}(\Agt) \to \ensuremath{\mathsf{PathFor}\xspace}$ is a goal assignment for $\Agt$ in
$\ensuremath{\mathsf{PathFor}\xspace}$. The other propositional connectives $\bot$, $\to$ and $\ifff$, as well as the temporal operator $\atlf$, are defined as usual. We write: $\mathsf{XFor}$ for the set of path formulas
of the form $\mathsf{X} \varphi$; $\mathsf{UFor}$ for the set of path formulas
of the form $\varphi \mathsf{U} \psi$;
$\mathsf{GFor}$ for the path formulas
of the form $\mathsf{G} \varphi$; and $\mathsf{UGFor}$ for $\mathsf{UFor} \cup \mathsf{GFor}$.
We denote the language by $\ensuremath{\mathcal{L}^\cga\xspace}$, and its nexttime fragment (where
$\ensuremath{\mathsf{PathFor}\xspace}$ is restricted to $\mathsf{XFor}$) by $\ensuremath{\mathcal{L}^\xcga\xspace}$.
The latter is essentially (with some minor notational changes) the language of the logic \Logicname{GPCL}\xspace introduced in \cite{GorankoEnqvist18}.
Intuitively, the path formulae can be regarded as temporal goals. The goal $\mathsf{X} \top$ is called a \textbf{trivial goal} and all other goals in $\ensuremath{\mathsf{PathFor}\xspace}$ are \textbf{non-trivial goals}. The family of coalitions $\ensuremath{\mathcal{F}\xspace}$ to which the goal assignment $\gamma$ assigns non-trivial goals is called the \textbf{support of $\gamma$}, denoted $\ensuremath{\mathsf{Support}\xspace}(\gamma)$, and $\gamma$ is said to be
\textbf{supported by $\ensuremath{\mathcal{F}\xspace}$}.
Sometimes we will write a goal assignment $\gamma$ explicitly, like
\[C_1\gass \theta_1,...,C_n\gass \theta_n,\]
meaning that
$\ensuremath{\mathsf{Support}\xspace}(\gamma) = \{C_1,...,C_n\}$ and $\gamma(C_i) = \theta_i$, for $i=1,...,n$.
More notation:
\begin{itemize}
\item $\gamma^\top$ is the \textbf{trivial goal assignment}, mapping each coalition to $\mathsf{X} \top$.
\item
The goal assignment $\gamma[C \gass \theta]$ is like $\gamma$, but mapping $C$ to $\theta$.
\item
The goal assignment $\gamma \setminus C$ defined as $\gamma[C \gass \mathsf{X} \top]$ is like $\gamma$, but excluding $C$ from its support, by replacing its goal with
$\mathsf{X} \top$.
\item The goal assignment $\gamma\vert_C$ is defined by mapping each $C' \subseteq C$ to $\gamma(C')$ and mapping all coalitions not contained in $C$ to $\mathsf{X}\top$.
\end{itemize}
As a convention, if $\gamma$ is
the trivial goal assignment $\gamma^\top$ (the unique goal assignment with empty support), we will identify the formula $\brak{\gamma}$ with $\top$.
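For a concrete, purely illustrative instance of this notation, take $\Agt = \{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\}$ and atomic propositions $p,q$:

```latex
% A goal assignment supported by two coalitions: a_1 alone must
% maintain p forever, while the grand coalition pursues p until q.
\[
\gamma \ = \ \{\ensuremath{\mathsf{a}\xspace}_1\} \gass \mathsf{G}\, p,\ \{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\} \gass p\, \mathsf{U}\, q
\]
% Equivalently, in terms of the operations above:
\[
\gamma \ = \ \gamma^{\top}[\{\ensuremath{\mathsf{a}\xspace}_1\} \gass \mathsf{G}\, p][\{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\} \gass p\, \mathsf{U}\, q],
\qquad
\ensuremath{\mathsf{Support}\xspace}(\gamma) = \big\{\{\ensuremath{\mathsf{a}\xspace}_1\},\{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\}\big\}.
\]
```

All remaining coalitions, including $\emptyset$ and $\{\ensuremath{\mathsf{a}\xspace}_2\}$, are assigned the trivial goal $\mathsf{X}\top$.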
We will also consider a slight extension of the logic $\Logicname{TLCGA}\xspace$ which allows forming conjunctions of path formulas. This extension, which we call $\Logicname{TLCGA}\xspace^+$, has the same definition of state formulae, whereas the path formulae of $\Logicname{TLCGA}\xspace^+$ are defined as follows:
\smallskip
$\ensuremath{\mathsf{PathFor}\xspace}: \ \ \ \ \ \theta := \mathsf{X} \varphi \mid \varphi \mathsf{U} \varphi \mid \mathsf{G} \varphi \mid \theta \wedge \theta$
\smallskip
In this paper we will focus mainly on \Logicname{TLCGA}\xspace, but most of the technical results extend likewise to $\Logicname{TLCGA}\xspace^+$, and we will note that explicitly for some of them.
\subsection{Semantics of \Logicname{TLCGA}\xspace}
\label{subsec:CGAsemantics}
The semantics of \Logicname{TLCGA}\xspace is defined in terms of truth of state formulae at a state, respectively truth of path formulae on (the path generated by) a play,
in a concurrent game model
$\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},V)$.
The truth clauses are as in classical logic for the Boolean connectives and as in LTL for the temporal operators.
The only new clause, for $\cgoal{\gamma}$, is as follows,
where $s \in \ensuremath{\mathsf{S}\xspace}$:
\smallskip
\begin{quotation}
$\ensuremath{\mathcal{M}}, s \models \cgoal{\gamma}$ \ iff \ there exists a strategy profile
$\ensuremath{\Sigma\xspace} \in \ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}$ such that, \\
for each $C \subseteq \Agt$, it holds that
$\ensuremath{\mathcal{M}}, \path \models \gamma(C)$ for every $\path \in \paths(s,\ensuremath{\Sigma\xspace},C)$.
\end{quotation}
\smallskip
For any state formula $\varphi \in \ensuremath{\mathsf{StateFor}\xspace}$ we define \textbf{the extension of $\varphi$ in $\ensuremath{\mathcal{M}}$} to be the set of states in $\ensuremath{\mathcal{M}}$ where $\varphi$ is true:
$\tset{\varphi}_{\ensuremath{\mathcal{M}}} = \{ s \in \ensuremath{\mathsf{S}\xspace} \mid \ensuremath{\mathcal{M}}, s \models \varphi \}$.
Likewise, we define the extension of any path formula $\theta \in \ensuremath{\mathsf{PathFor}\xspace}$
to be the set of paths in $\ensuremath{\mathcal{M}}$ where $\theta$ is true:
$\tset{\theta}^{p}_{\ensuremath{\mathcal{M}}} = \{ \path \in \ensuremath{\mathsf{S}\xspace}^{\omega} \mid \ensuremath{\mathcal{M}}, \path \models \theta \}$.
The truth clause for $\cgoal{\gamma}$ can now be re-stated in terms of formula extensions as follows:
\[
\tset{\cgoal{\gamma}}_{\ensuremath{\mathcal{M}}} =
\big\{ s \in S \mid \exists\, \ensuremath{\Sigma\xspace} \in \ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}:
\paths(s,\ensuremath{\Sigma\xspace},C) \subseteq \tset{\gamma(C)}^{p}_{\ensuremath{\mathcal{M}}}
\ \mbox{for each} \ C \subseteq \Agt \big\}.
\]
A strategy profile $\ensuremath{\Sigma\xspace}$ is said to \textbf{witness the goal assignment $\gamma$} at a state $s$ of a model $\ensuremath{\mathcal{M}}$, denoted by $\ensuremath{\Sigma\xspace},s \Vvdash \gamma$, if, for every coalition $C$ in the support of $\gamma$ and every path $\pi \in \paths(s,\ensuremath{\Sigma\xspace},C)$ in $\ensuremath{\mathcal{M}}$ we have $\ensuremath{\mathcal{M}}, \pi \ensuremath{\Vdash} \gamma(C)$.
We then also say that \textbf{$\ensuremath{\Sigma\xspace}$ witnesses the formula $\brak{\gamma}$} at the state $s$ in $\ensuremath{\mathcal{M}}$.
Thus, $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \brak{\gamma}$ iff ${\gamma}$ is witnessed by some strategy profile at $s$ in $\ensuremath{\mathcal{M}}$.
A \Logicname{TLCGA}\xspace formula $\phi$ is \textbf{valid}, denoted
$\ensuremath{\Vdash} \phi$, if $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \phi$ for every concurrent game model $\ensuremath{\mathcal{M}}$ and every state $s$ in it; respectively, $\phi$ is \textbf{satisfiable} if $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \phi$ for some concurrent game model $\ensuremath{\mathcal{M}}$ and some state $s$ in it. Likewise, (local) logical consequence and logical (semantic) equivalence in \Logicname{TLCGA}\xspace are defined as expected.
\smallskip
We note that the formula $\cgoal{C \gass \atlx\phi, \, \Agt \gass \atlx \psi}$ is semantically equivalent to the strategic operator $\coop{C}(\phi; \psi)$ in the logic \Logicname{SFCL}\xspace defined in \cite{GorankoEnqvist18}. Therefore, the corresponding fragment \Logicname{SFCL_{1}}\xspace of \Logicname{SFCL}\xspace embeds into \Logicname{TLCGA}\xspace. Note also that the strategic operator $\coop{C}$ from Coalition logic \Logicname{CL}\xspace is definable as a special case:
$\coop{C}\phi := \coop{C}(\phi; \top) \equiv \cgoal{C\gass\atlx\phi}$.
Furthermore, the strategic operator $\coal{C}$ from \Logicname{ATL}\xspace is also a special case:
$\coal{C}{\theta} \equiv \cgoal{C\gass \theta}$.
Thus, the logic \Logicname{ATL}\xspace is embedded as a simple fragment of \Logicname{TLCGA}\xspace.
Also, we note that a natural (downward) monotonicity condition on goal assignments can be imposed, viz. that $\models \gamma(C) \to \gamma(C')$ for all coalitions $C$, $C'$ such that $C' \subseteq C$. This condition can be imposed semantically, up to equivalence, by replacing each $\gamma(C)$ with $\bigwedge_{C' \subseteq C} \gamma(C')$, though the resulting goals can no longer be expressed in \Logicname{TLCGA}\xspace, but only in $\Logicname{TLCGA}\xspace^+$. A special case of monotone goal assignments worth noting are the `individualistic' goal assignments, where $\gamma(C) \equiv \bigwedge_{\ensuremath{\mathsf{a}\xspace} \in C} \gamma(\ensuremath{\mathsf{a}\xspace})$ for every coalition $C$.
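To make the monotone closure above concrete, here is its instance for two agents, obtained by directly unfolding the conjunction $\bigwedge_{C' \subseteq C} \gamma(C')$:
for $\Agt = \{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace}\}$, the goal of the grand coalition is replaced as
\[
\gamma(\{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace}\}) \ \mapsto \ \gamma(\emptyset) \wedge \gamma(\{\ensuremath{\mathsf{a}\xspace}\}) \wedge \gamma(\{\ensuremath{\mathsf{b}\xspace}\}) \wedge \gamma(\{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace}\}),
\]
and likewise $\gamma(\{\ensuremath{\mathsf{a}\xspace}\}) \mapsto \gamma(\emptyset) \wedge \gamma(\{\ensuremath{\mathsf{a}\xspace}\})$, etc. Such conjunctions of path formulae are available in $\Logicname{TLCGA}\xspace^+$ but, in general, not in \Logicname{TLCGA}\xspace.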
\subsection{Variations of the semantics of \Logicname{TLCGA}\xspace}
\label{subsec:CGAsemantics-variations}
\subsubsection{Memory-based and the memoryless semantics}
Let us introduce, ad hoc, the variation $\cgoalpos{\cdot}$ of $\cgoal{\cdot}$, with semantics restricted to positional strategies, i.e.:
$\ensuremath{\mathcal{M}}, s \models \cgoalpos{\gamma}$ \ iff \ there exists a \emph{positional} strategy profile
$\ensuremath{\Sigma\xspace} \in \ensuremath{\mathsf{StratProf}\xspace}_\ensuremath{\mathcal{M}}$ such that, for each $C \subseteq \Agt$, it holds that
$\ensuremath{\mathcal{M}}, \path \models \gamma(C)$ for every $\path \in \paths(s,\ensuremath{\Sigma\xspace},C)$.
\begin{proposition}[No positional determinacy of \Logicname{TLCGA}\xspace]
\label{prop:NoPositionalDeterminacy}
Let $\Agt = \{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace}\}$.
There exist a concurrent game model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$, a state $s \in \ensuremath{\mathsf{S}\xspace}$, and a coalitional goal assignment $\gamma$, such that
$\ensuremath{\mathcal{M}}, s \models \cgoal{\gamma}$, but
$\ensuremath{\mathcal{M}}, s \not\models \cgoalpos{\gamma}$.
Consequently, the memory-based and the memoryless semantics of $\cgoal{\cdot}$
are not equivalent.
\end{proposition}
\begin{proof}
Consider the model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$ in Figure \ref{exampleA} and a goal assignment $\gamma$ such that
$\gamma(\{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace} \}) = p \, \mathsf{U} \, q$ and
$\gamma(\{\ensuremath{\mathsf{a}\xspace} \}) = \top \, \mathsf{U} \, \neg(p \lor q)$.
Then, $\ensuremath{\mathcal{M}}, s \models \cgoal{\gamma}$, witnessed by any strategy profile $\ensuremath{\Sigma\xspace}$ such that
$\ensuremath{\Sigma\xspace}_{\ensuremath{\mathsf{a}\xspace}} (s) = a_1$ and $\ensuremath{\Sigma\xspace}_{\ensuremath{\mathsf{a}\xspace}} (s s_1 s) = a_2$.
However, there is no positional strategy profile witnessing the truth of
$\cgoalpos{\gamma}$ at $s$ because any positional strategy for $\ensuremath{\mathsf{a}\xspace}$ would have to assign a unique action to any history ending at $s$, hence not enabling both the satisfaction of $p \, \mathsf{U} \, q$ and of $\top \, \mathsf{U} \, \neg(p \lor q)$ there.
\begin{figure}
\begin{center}
\medskip
\begin{tikzpicture}[->,>=stealth', shorten >=1pt, node distance=30mm, thick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[state] (0) {$s \atop \{p \}$};
\node[state] (1) [below left of=0] {$s_1 \atop \{ q\}$};
\node[state] (2) [below right of=0] {$s_2 \atop \{ \}$};
\path
(0) edge [bend right] node {$(a_1, b)$\ \ \ \ \ \ \ \ \ \ \ \ \ ~} (1)
(0) edge [bend left] node {$~ \ \ \ \ \ \ \ \ \ \ \ \ \ (a_2, b)$} (2)
(1) edge [right] node {$(a, b)$} (0)
(2) edge [left] node {$(a, b)$} (0);
\end{tikzpicture}
\end{center}
\caption{Example showing that memory is needed}
\label{exampleA}
\end{figure}
\end{proof}
Hereafter, we will only work with the memory-based semantics.
\subsubsection{Semantics with path-based strategies vs play-based strategies}
Furthermore, note that (memory-based) strategies are defined here in terms of \emph{plays}, not just paths, as is customary for \Logicname{ATL^*}\xspace and in particular \Logicname{ATL}\xspace \cite{AHK-02} (cf.\ also \cite{BGJ15} or \cite{TLCSbook}), as well as for Strategy Logic \cite{DBLP:journals/iandc/ChatterjeeHP10}, \cite{fsttcs/MogaveroMV10}, \cite{tocl/MogaveroMPV14}. Indeed, the two types of strategies affect the semantics essentially, as shown by the following example.
\begin{example}[Path-based strategies vs play-based strategies]
\label{exampleB}
Consider the model $\ensuremath{\mathcal{M}}$ below, with 3 players: $\{1,2,3\}$, where the triples of actions correspond to the order $(1,2,3)$ and $*$ denotes any (or, a single) action.
\begin{center}
\medskip
\begin{tikzpicture}[->,>=stealth', shorten >=1pt, node distance=30mm, thick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[state] (0) {$s \atop \{p, q\}$};
\node[state] (01) [below left of=0] {$s_1 \atop \{p, q\}$};
\node[state] (02) [below right of=0] {$s_2 \atop \{p,q \}$};
\node[state] (1) [below left of=02] {$s_{31} \atop \{ p\}$};
\node[state] (2) [below right of=02] {$s_{32} \atop \{ q\}$};
\path
(0) edge [left] node {$(a_1,a_2,a_3)$} (01)
(0) edge [right] node {$(a_1, b_2, a_3), (a_1, a_2,b_3), (a_1, b_2, b_3)$} (02)
(01) edge [loop left] node {$(*,*,*)$} (01)
(1) edge [loop right] node {$(*,*,*)$} (1)
(2) edge [loop right] node {$(*,*,*)$} (2)
(02) edge [left] node {$(a_p,*,*)$} (1)
(02) edge [right] node {$(a_q, *,*)$} (2);
\end{tikzpicture}
\end{center}
\label{fig:exampleB}
\medskip
Consider the goal assignment $\gamma$, such that $\gamma(\{1,2 \}) = \mathord \mathsf{G}\, p$ and
$\gamma(\{1,3 \}) = \mathord \mathsf{G}\, q$.
The following hold:
\begin{enumerate}
\item $\ensuremath{\mathcal{M}}, s \models \cgoal{\gamma}$ in terms of the semantics with play-based strategies adopted here.
Indeed, the strategy profile $\ensuremath{\Sigma\xspace}$ prescribing the following action profiles:
$(a_1,a_2,a_3)$ on the play $s$;
$(a_p,*,*)$ on the play $s (a_1,a_2,b_3) s_2$;
$(a_q,*,*)$ on the plays $s (a_1,b_2,a_3) s_2$ and $s (a_1,b_2,b_3) s_2$;
and $(*,*,*)$ on any play ending at $s_1$, $s_{31}$, and $s_{32}$,
would ensure the truth of $\cgoal{\gamma}$ at $s$.
\item $\ensuremath{\mathcal{M}}, s \not\models \cgoal{\gamma}$ in terms of the semantics with path-based strategies.
This is because player $1$ has no strategy for which both:
\begin{enumerate}
\item the coalition $\{1,2\}$ ensures satisfaction of the goal $\mathord \mathsf{G}\, p$ by transition from $s_2$ to $s_{31}$, if $3$ acts $b_3$ at $s$ and the game goes to $s_2$, and
\item the coalition $\{1,3\}$ ensures satisfaction of the goal $\mathord \mathsf{G}\, q$ by transition from $s_2$ to $s_{32}$, if $2$ acts $b_2$ at $s$ and the game goes to $s_2$.
\end{enumerate}
\end{enumerate}
\end{example}
Thus, two different semantics for \Logicname{TLCGA}\xspace emerge. Let us call them respectively \textbf{play-based semantics}, hereafter indicated by $ \models_{\mathsf{play}}$, and
\textbf{path-based semantics}, hereafter indicated by $ \models_{\mathsf{path}}$.
Now, a natural question arises: which of the two is the better, or more correct, semantics?
We argue that, as the example above indicates, it is the play-based one adopted here, because only play-based strategies can detect agents' deviations from the adopted strategy profile of the grand coalition, and can thus ensure that the execution of these strategies by the non-deviating agents will still guarantee the fulfilment of their individual and collective goals.
\subsubsection{Relating the path-based and the play-based semantics}
Still, the two semantics can be reconciled in the example above by splitting the node $s_2$ into 3 copies, each being the successor node of $s$ for exactly one action profile, as in Figure \ref{fig:exampleB2}. (In fact, for this example, 2 copies suffice: it is enough to split the outcomes of $(a_1, b_2, a_3)$ and
$(a_1, a_2,b_3)$.) That way, different action profiles applied at any given state lead to different successor nodes, so different plays correspond to different paths; hence the two semantics coincide in the resulting model.
\begin{center}
\medskip
\begin{tikzpicture}[->,>=stealth', shorten >=1pt, node distance=26mm, thick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[state] (0) {$s \atop \{p, q\}$};
\node[state] (021) [below left of=0] {$s_{21} \atop \{p, q\}$};
\node[state] (01) [ left of=021] {$s_1 \atop \{p, q\}$};
\node[state] (022) [right of=021] {$s_{22} \atop \{p,q \}$};
\node[state] (023) [ right of=022] {$s_{23} \atop \{p,q \}$};
\node[state] (1) [below left of=021] {$s_{31} \atop \{ p\}$};
\node[state] (2) [below right of=023] {$s_{32} \atop \{ q\}$};
\path
(0) edge [left] node {$_{(a_1,a_2,a_3)}$} (01)
(0) edge [right] node {$_{(a_1, b_2, a_3)}$} (021)
(0) edge [right] node {$_{(a_1, a_2,b_3)}$} (022)
(0) edge [right] node {$_{(a_1, b_2, b_3)}$} (023)
(01) edge [loop left] node {$_{(*,*,*)}$} (01)
(1) edge [loop left] node {$_{(*,*,*)}$} (1)
(2) edge [loop right] node {$_{(*,*,*)}$} (2)
(021) edge [left] node {$_{(a_p,*,*)}$} (1)
(021) edge [right] node {$_{(a_q, *,*)}$} (2)
(022) edge [left] node {$_{(a_p,*,*)}$} (1)
(022) edge [right] node {$_{(a_q, *,*)}$} (2)
(023) edge [left] node {$_{(a_p,*,*)}$} (1)
(023) edge [right] node {$_{(a_q, *,*)}$} (2);
\end{tikzpicture}
\end{center}
\label{fig:exampleB2}
This construction generalises to arbitrary concurrent game models, producing concurrent game models $\ensuremath{\mathcal{M}}$ with the following property: for every state $s \in \ensuremath{\mathcal{M}}$ and action profiles $\ensuremath{\zeta\xspace}_1,\ensuremath{\zeta\xspace}_2$ available at $s$, if $\ensuremath{\zeta\xspace}_1 \neq \ensuremath{\zeta\xspace}_2$ then $\ensuremath{\mathsf{out}\xspace}_\ensuremath{\mathcal{M}}(s,\ensuremath{\zeta\xspace}_1) \neq \ensuremath{\mathsf{out}\xspace}_\ensuremath{\mathcal{M}}(s,\ensuremath{\zeta\xspace}_2)$. We call such models \textbf{injective}.
Every non-injective concurrent game model $\ensuremath{\mathcal{M}}$ can be transformed into an injective one $\iota(\ensuremath{\mathcal{M}})$ by generalising the construction in the example above. That can be done in various ways, but a most economical one is to multiply every state $w$ into as many copies as the maximal number of transition arrows incoming to $w$ from any single state $u$ and labelled with different action profiles applied at $u$. The labels of these copies, as well as all action profiles available at each of them and their outcomes, are copied from those at $w$. Thereafter, all multiply-labelled transitions from any state $s$ to $w$ are re-directed to the different copies of $w$. We leave out the routine technical details of this construction.
We will call this construction \textbf{state-copying and outcome-splitting}, abbreviated \textbf{SCOS}.
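As a concrete illustration, the SCOS construction just described can be sketched on a finite game structure represented simply as a map from (state, action profile) pairs to successor states. This is only an illustrative sketch: the dictionary representation, the function name \texttt{scos}, and the copy-indexing scheme are our own choices, not part of the construction above.

```python
from collections import defaultdict

def scos(out):
    """Sketch of state-copying and outcome-splitting (SCOS).

    `out` maps (state, action_profile) pairs to successor states and may be
    non-injective. The result maps ((state, copy), action_profile) pairs to
    (state, copy) pairs and is injective: at every state, distinct action
    profiles lead to distinct successors.
    """
    # For each target w, count how many distinct profiles at each source u
    # lead to w; the maximum over all sources is the number of copies of w.
    incoming = defaultdict(lambda: defaultdict(int))
    for (u, _zeta), w in out.items():
        incoming[w][u] += 1
    states = {u for (u, _zeta) in out} | set(out.values())
    copies = {w: max(incoming[w].values(), default=1) for w in states}
    # Redirect multiply-labelled transitions from each source to distinct
    # copies of the target; every copy of a source inherits its transitions.
    new_out = {}
    next_copy = defaultdict(int)  # (u, w) -> next copy index of w to use
    for (u, zeta), w in sorted(out.items(), key=repr):
        i = next_copy[(u, w)]
        next_copy[(u, w)] += 1
        for j in range(copies[u]):
            new_out[((u, j), zeta)] = (w, i)
    return new_out
```

For instance, applied to a state with two distinct action profiles leading to the same successor, the sketch produces two copies of that successor, one per profile, exactly as in the example with $s_2$ above.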
Note that plays and paths in any injective model are in a trivial 1-1 correspondence, hence play-based and path-based strategies in injective models coincide.
It is then straightforward to show the following, by induction on the formulae in $\Logicname{TLCGA}\xspace^+$.
\begin{proposition}
\label{prop:injective}
In every injective concurrent game model $\ensuremath{\mathcal{M}}$, $\models_{\mathsf{play}}$ \ and \ $ \models_{\mathsf{path}}$ coincide, i.e. for every state $s \in \ensuremath{\mathcal{M}}$ and every state formula
$\varphi \in \Logicname{TLCGA}\xspace^+$: \ $\ensuremath{\mathcal{M}}, s \models_{\mathsf{path}} \varphi$ iff
$\ensuremath{\mathcal{M}}, s \models_{\mathsf{play}} \varphi$.
\end{proposition}
We further note that the resulting model from applying the SCOS construction is \Logicname{TLCGA}\xspace-bisimilar to the original one, in terms of the notion of \Logicname{TLCGA}\xspace-bisimulation defined in Section \ref{subsec:Bisimulations}.
Consequently, due to the bisimulation invariance theorem \ref{thm:bisimulation invariance+}
established there, the SCOS construction preserves, inter alia, truth of all $\Logicname{TLCGA}\xspace^+$ formulae with respect to the play-based semantics, as follows.
\begin{proposition}
\label{prop:SCOS}
Given any concurrent game model $\ensuremath{\mathcal{M}}$, state $s \in \ensuremath{\mathcal{M}}$, and a copy $s^i$ of $s$ in the injective model $\iota(\ensuremath{\mathcal{M}})$ obtained by applying SCOS to $\ensuremath{\mathcal{M}}$, for every state formula $\varphi \in \Logicname{TLCGA}\xspace^+$: \ $\ensuremath{\mathcal{M}}, s \models_{\mathsf{play}} \varphi$ \ iff \
$\iota(\ensuremath{\mathcal{M}}), s^i \models_{\mathsf{play}} \varphi$.
\end{proposition}
Combining the two observations above, we obtain the following.
\begin{corollary}
\label{cor:MCreduction}
The model checking problem (MC) for formulae of $\Logicname{TLCGA}\xspace^+$ with the play-based semantics is reducible to
the model checking problem for $\Logicname{TLCGA}\xspace^+$ with the path-based semantics, at the cost of at most quadratic blow-up of the size of the model.
\end{corollary}
Observe also that in any concurrent game model $\ensuremath{\mathcal{M}}$,
every path-based strategy is also a play-based strategy (prescribing the same action on any two plays generating the same path). Therefore, for every formula $\cgoal{\gamma}\in \Logicname{TLCGA}\xspace^+$, every concurrent game model $\ensuremath{\mathcal{M}}$, and every state $s \in \ensuremath{\mathcal{M}}$: \
if $\ensuremath{\mathcal{M}}, s \models_{\mathsf{path}} \cgoal{\gamma}$ then
$\ensuremath{\mathcal{M}}, s \models_{\mathsf{play}} \cgoal{\gamma}$.
\medskip
We now show that the two semantics also differ with respect to the respective validities (hence, also
with respect to the satisfiable formulae) of \Logicname{TLCGA}\xspace. For that we will use the idea of Example \ref{exampleB}. Consider the goal assignment $\gamma$ defined there, as well as the goal assignment $\gamma'$ with support $\{\{1,2\}, \{1,3\}, \{1,2,3\}\}$, where: \\
$\gamma'(\{1,2 \}) = \mathord \mathsf{X}\, \cgoal{\{1\} \gass \mathord \mathsf{G}\, p}$,
$\gamma'(\{1,3 \}) = \mathord \mathsf{X}\, \cgoal{\{1\} \gass \mathord \mathsf{G}\, q}$, and
$\gamma'(\{1,2,3 \}) = \mathord \mathsf{G}\, (p \land q)$.
Then the following hold:
\begin{enumerate}
\item $\models_{\mathsf{play}} \cgoal{\gamma'} \to \cgoal{\gamma}$.
Indeed, in any given model, a strategy profile satisfying $\cgoal{\gamma}$ can be produced from a strategy profile satisfying $\cgoal{\gamma'}$ by amending the strategy for player $1$ in the latter profile with the strategies for $1$ claimed to exist in the respective successor outcomes of the joint strategies for $\{1,2 \}$ and for $\{1,3 \}$.
%
These amendments are done as follows.
Consider any successor state $w$ of the current state $s$, obtained as the outcome from applying at $s$ an action profile $\ensuremath{\zeta\xspace}$ obtained, for instance, by extending the joint action of $\{1,2 \}$ prescribed at $s$ by the strategy profile $\ensuremath{\Sigma\xspace}'$ witnessing the truth of $\cgoal{\gamma'}$ at $s$ with any action of 3 different from the one prescribed by its strategy in $\ensuremath{\Sigma\xspace}'$. Then, the strategy for $1$ is re-defined on any play of the type $\pi s \ensuremath{\zeta\xspace} w \pi'$ to prescribe the action which a strategy for $1$ that is claimed by $\gamma'(\{1,2 \})$ to exist at $w$ prescribes at the play $w \pi'$ in order to ensure the truth of $\mathord \mathsf{G}\, p$ on any resulting play. The strategy for $1$ on all plays passing through $\ensuremath{\zeta\xspace}' w'$ when $\ensuremath{\zeta\xspace}'$ comes from a joint action of $\{1,3 \}$ at $s$ is re-defined likewise. Lastly, when all 3 players are following the strategy profile $\ensuremath{\Sigma\xspace}'$ witnessing the truth of $\cgoal{\gamma'}$ at $s$, in the resulting play both $\mathord \mathsf{G}\, p$ and $\mathord \mathsf{G}\, q$ hold, so no strategy amendment is needed.
%
Since the strategies that we consider are play-based, the two cases of strategy amendments are independent from each other, as they apply to disjoint sets of plays, and are therefore unproblematic to combine (which is not the case for path-based strategies, as Example \ref{exampleB} shows).
It is now straightforward to show that the strategy profile $\ensuremath{\Sigma\xspace}$, resulting from replacement of the strategy for 1 in $\ensuremath{\Sigma\xspace}'$ with the amended strategy described above, witnesses the truth of $\cgoal{\gamma}$.
\item $\not\models_{\mathsf{path}} \cgoal{\gamma'} \to \cgoal{\gamma}$.
Indeed, that formula fails at $s$ in the model $\ensuremath{\mathcal{M}}$ of Example \ref{exampleB}: it is straightforward to show that $\ensuremath{\mathcal{M}}, s \models_{\mathsf{path}} \cgoal{\gamma'}$, while $\ensuremath{\mathcal{M}}, s \not\models_{\mathsf{path}} \cgoal{\gamma}$ was shown there.
\end{enumerate}
\subsection{Standard translation of the path-based semantics of $\Logicname{TLCGA}\xspace^+$ into
fragments of Strategy Logic and the complexity of model checking \Logicname{TLCGA}\xspace.
}
\label{subsec:ST2SL}
We refer here to a standard version SL of Strategy Logic as defined e.g. in \cite{fsttcs/MogaveroMV10},
\cite{tocl/MogaveroMPV14}, involving variables ranging over strategies that can be associated with any agents within the formulae by means of strategy assignments, and quantification over such variables.
Here we mostly follow the notation for strategy assignments from \cite{DBLP:conf/lics/MogaveroMS13} and \cite{DBLP:conf/aaai/AcarBM19}, which slightly differs from the one in \cite{DBLP:journals/mst/GardyBM20}.
First, we show that the logic $\Logicname{TLCGA}\xspace^+$ \emph{with the path-based semantics} can be translated to SL
in a way very similar to the standard translation of modal logic to first-order logic. The key clause is the translation of the operator $\brak{\gamma}$, defined as follows\footnote{This form of the translation was suggested by an anonymous reviewer.} (using the notation from
\cite{DBLP:conf/aaai/AcarBM19}),
for
$\Agt = \{\ensuremath{\mathsf{a}\xspace}_1,...,\ensuremath{\mathsf{a}\xspace}_n\}$:
%
\[\textbf{tr}(\brak{\gamma}) =
\exists x_1 ... \exists x_n \forall y_1 ... \forall y_n
\bigwedge_{C \subseteq \Agt}
\flat_{C} \gamma(C)
\]
where the binding prefix $\flat_{C}$ assigns to each agent $\ensuremath{\mathsf{a}\xspace}_i$ in $\Agt$ the value of the strategy variable
$x_i$ if $\ensuremath{\mathsf{a}\xspace}_i \in C$, and the value of the strategy variable $y_i$, otherwise.
To shorten the formula up to equivalence, the big conjunction above can be restricted to range only over those coalitions $C \subseteq \Agt$ for which $\gamma(C) \neq \atlx \top$.
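To illustrate the shape of the translation, here is its instance (a sketch, with $\theta_1, \theta_2$ as placeholder temporal goals of our own choosing) for $\Agt = \{\ensuremath{\mathsf{a}\xspace}_1, \ensuremath{\mathsf{a}\xspace}_2\}$ and a goal assignment with support $\{\{\ensuremath{\mathsf{a}\xspace}_1\}, \Agt\}$, after restricting the conjunction as just mentioned:
\[
\textbf{tr}(\brak{\{\ensuremath{\mathsf{a}\xspace}_1\} \gass \theta_1, \ \Agt \gass \theta_2}) =
\exists x_1 \exists x_2 \forall y_1 \forall y_2 \,
\big( \flat_{\{\ensuremath{\mathsf{a}\xspace}_1\}}\, \theta_1 \ \wedge \ \flat_{\Agt}\, \theta_2 \big),
\]
where $\flat_{\{\ensuremath{\mathsf{a}\xspace}_1\}}$ binds $\ensuremath{\mathsf{a}\xspace}_1$ to $x_1$ and $\ensuremath{\mathsf{a}\xspace}_2$ to $y_2$, while $\flat_{\Agt}$ binds each $\ensuremath{\mathsf{a}\xspace}_i$ to $x_i$.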
Since the translation above places only conjunctions of temporal goals in the scope of all quantifiers over strategies, $\Logicname{TLCGA}\xspace^+$ translates into the Conjunctive-Goal fragment SL[CG]. Using this, we obtain the following:
\begin{proposition}
\label{prop:MC complexity}
For each of the path-based semantics and the play-based semantics of $\Logicname{TLCGA}\xspace^+$, the respective model checking problem is \textrm{PTIME-complete} in the size of the model and in \textrm{2ExpTime} in the size of the formula.
\end{proposition}
\begin{proof}
For MC in the path-based semantics, the claim follows from the translation $\textbf{tr}$ above and the respective results in \cite[Theorem IV.2]{DBLP:conf/lics/MogaveroMS13} and in \cite{DBLP:journals/mst/GardyBM20}, for the upper bounds. The lower bound in terms of the size of the model follows from the \textrm{PTIME-complete} complexity of MC for ATL \cite{AHK-02}, which is embedded into \Logicname{TLCGA}\xspace. The precise complexities of MC for \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$ in the size of the formula are currently still open.
For the MC in the play-based semantics the complexity bounds are obtained from those above, by using the reduction to MC in the path-based semantics provided by Corollary \ref{cor:MCreduction}.
\end{proof}
Note further that, when $\brak{\gamma}$ is in the \textbf{flat fragment $\Logicname{FTLCGA}\xspace^+$ of $\Logicname{TLCGA}\xspace^+$}, i.e. when all goals $\gamma(C)$ are conjunctions of purely temporal goals (not containing nested $\gamma$ operators), the translation formula above is in the flat sub-fragment FSL[CG] of SL[CG] \cite{DBLP:conf/aaai/AcarBM19}. Consequently, that translation can also be used for applying algorithms and obtaining a \textrm{PSpace} complexity bound for the satisfiability problem in $\Logicname{FTLCGA}\xspace^+$ with the path-based semantics, by reduction to the satisfiability problem in FSL[CG], shown in \cite{DBLP:conf/aaai/AcarBM19} to be decidable and \textrm{PSpace-complete} (notably, lower than the \textrm{2ExpTime} complexity of model checking for FSL[CG], also shown in \cite{DBLP:conf/aaai/AcarBM19}).
On the other hand, as we show in Section \ref{sec:FMP}, using an embedding of $\Logicname{TLCGA}\xspace^+$ into a $\mu$-calculus extension $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ of the nexttime fragment $\ensuremath{\mathcal{L}^\xcga\xspace}$ of \Logicname{TLCGA}\xspace, defined in Section \ref{subsec:mu-TLCGA},
the satisfiability problem in $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is in \textrm{2ExpTime}; hence the satisfiability problem for the whole of $\Logicname{TLCGA}\xspace^+$, with its standard
play-based semantics, is in \textrm{3ExpTime}.
\section{Some applications of \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$}
\label{sec:applications}
\subsection{Examples of expressing and reasoning about group objectives with \Logicname{TLCGA}\xspace}
\label{subsec:expressing}
\subsubsection{Example 1: Password protected data sharing}
\label{example1}
This example is adapted from \cite{GorankoEnqvist18}, where it was adapted from \cite{parikh1985logic}. Consider the following scenario involving two players, Alice (denoted $A$) and Bob (denoted $B$). Each of them owns a server storing some data, the access to which is protected by a password. Alice and Bob want to exchange passwords, but neither of them is sure whether to trust the other. So the common goal of the two players is to cooperate and exchange passwords, but each player also has the private goal not to give away their own password in case the other player turns out to be untrustworthy and does not provide theirs. When and how can the two players cooperate to exchange passwords? The answer depends on the kind of actions that Alice and Bob can perform while attempting to achieve their common objective; here, however, we are mainly interested in formalising the problem in \Logicname{TLCGA}\xspace.
Let us first try to express the common objective by a \Logicname{TLCGA}\xspace formula. For that, we write $H_A$ for ``Alice has access to the data on Bob's server'' and $H_B$ for ``Bob has access to the data on Alice's server''. Then an obvious candidate for a formula expressing the common goal is the goal assignment formula
\[\cgoal{\{A,B\} \gass \mathord \mathsf{F}\, (H_A \wedge H_B)} \]
stating that Alice and Bob have a joint strategy to eventually reach their common objective.
However, it is easy to see that this is not good enough. Indeed, while the commonly desired eventual outcome is $H_A \wedge H_B$, the worst possible outcome for Alice is $\neg H_A \wedge H_B$, whereas the worst possible outcome for Bob is $H_A \wedge \neg H_B$, and each of them would like to prevent their worst possible outcome from happening while trying to achieve the common goal. Thus, the common goal can be formulated better as ``\textit{eventually reach a state where both players can access each other's data and until then no player should be able to unilaterally access the other's data}'', expressed by the following goal assignment formula:
\[ \cgoal{ \{A,B\}\gass (H_A \ifff H_B) \mathsf{U} (H_A \wedge H_B ) } \]
The formula above is adequate if both players follow a strategy profile that realises that goal, but it does not yet express the stronger requirement that, even if one of them deviates from that strategy profile, the other should still be able to protect her/his interests while following her/his strategy. For that, we need to enrich the goal assignment above with individual goals:
\[ \cgoal{ \{A,B\}\gass (H_A \ifff H_B) \, \mathsf{U} \, (H_A \wedge H_B ); \
A \gass \mathord \mathsf{G}\, (H_B \rightarrow H_A ); \ B \gass \mathord \mathsf{G}\, (H_A \rightarrow H_B) } \]
Note that the common goal can now be simplified back to the original one, producing a formula equivalent to the one above:
\[ \cgoal{ \{A,B\} \gass \mathord \mathsf{F}\, (H_A \wedge H_B ); \
A \gass \mathord \mathsf{G}\, (H_B \rightarrow H_A ); \ B \gass \mathord \mathsf{G}\, (H_A \rightarrow H_B) } \]
\subsubsection{Example 2: Sheep and wolves: a fragile alliance}
\label{example2}
This example is a remake with a twist of a well known children's puzzle.
A group of 3 wolves and 3 sheep is on one side of a river and they all want to cross it by boat. There is only one boat, which can take 2 animals at a time, but there is no boatman, so one animal has to take the boat back every time, until all have crossed the river. The main problem, of course, is that if the wolves ever outnumber the sheep on either side of the river, or on the boat, then the sheep in minority will be promptly eaten up by the wolves. The question is whether -- and if so, how -- all animals can cross the river without any sheep being eaten.
(Spoiler alert: the answer will be gradually revealed further, so the reader may wish to pause here and think on the puzzle before reading further.)
Let us formalise the problem in \Logicname{TLCGA}\xspace. First, some notation. Let $\mathsf{Sheep}$ denote the set of all sheep, $\mathsf{Wolves}$ denote the set of all wolves,
$\mathbf{c}$ denote the proposition ``\textit{all animals have crossed the river}'' and
$\mathbf{e}$ denote the proposition ``\textit{a sheep gets eaten}''. Then the problem seems to be expressed succinctly as the question whether the following formula is true:
\[
\brak{\mathsf{Sheep} \cup \mathsf{Wolves}\gass (\neg \mathbf{e}) \, \mathsf{U} \, \mathbf{c}}
\]
As in the previous example, this formula is too weak to express the important subtlety that, even if such a strategy exists, nothing guarantees that the wolves will not decide to deviate from it and have a gourmet feast on a sheep before (or after) crossing the river. Thus, we need to add an extra goal for all sheep, protecting their interest to stay alive:
\[
\brak{\mathsf{Sheep} \cup \mathsf{Wolves}\gass (\neg \mathbf{e}) \, \mathsf{U} \, \mathbf{c};
\ \mathsf{Sheep}\gass \mathsf{G} \neg \mathbf{e}}
\]
Now, the common goal can clearly be simplified, while preserving the formula up to equivalence:
\[
\brak{\mathsf{Sheep} \cup \mathsf{Wolves}\gass \mathord \mathsf{F}\, \mathbf{c}; \ \mathsf{Sheep}\gass\mathsf{G} \neg \mathbf{e}}
\]
Let us now try to model it as a concurrent game model. We can assume that the river crossing happens instantaneously, so each state of the game is described uniquely (up to re-shuffling of the sheep and of the wolves, which can be considered identical) by the numbers of sheep and wolves on each side of the river, plus the position of the boat (on one or the other side of the river). At each river crossing round, each of the animals has two possible actions: `stay' or `go on the boat and cross the river'. The respective transitions are then readily defined, by ensuring that only legitimate transitions can occur, so e.g., if more than two animals decide to jump on the boat at the same time, the state does not change (the transition is a loop). The states satisfying $\mathbf{e}$ are precisely those where there are more wolves than sheep on any one side of the river, whereas only one state satisfies $\mathbf{c}$, where all animals have crossed the river.
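To make the model construction just described concrete, the following sketch builds exactly this state space and checks only the cooperative part of the goal, i.e. whether a joint plan achieving $(\neg \mathbf{e}) \, \mathsf{U} \, \mathbf{c}$ exists at all (the classic puzzle); it does not model deviations by the wolves. The encoding of states and the function names are our own illustrative choices.

```python
from collections import deque

def eaten(sheep, wolves):
    # Sheep in a group are eaten iff wolves outnumber them there,
    # provided at least one sheep is present.
    return 0 < sheep < wolves

def crossing_possible(n_sheep=3, n_wolves=3, boat_cap=2):
    # A state is (sheep on left bank, wolves on left bank, boat side),
    # with boat side 0 = left, 1 = right. BFS over safe states only.
    start, goal = (n_sheep, n_wolves, 0), (0, 0, 1)
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return True  # all animals crossed, no sheep ever eaten
        s, w, b = state
        for ds in range(boat_cap + 1):
            for dw in range(boat_cap + 1 - ds):
                if ds + dw == 0:
                    continue  # someone must row the boat
                # The boarding animals leave the boat's side and cross.
                ns, nw = (s - ds, w - dw) if b == 0 else (s + ds, w + dw)
                if not (0 <= ns <= n_sheep and 0 <= nw <= n_wolves):
                    continue
                # No sheep may be eaten on either bank or on the boat.
                if eaten(ns, nw) or eaten(n_sheep - ns, n_wolves - nw) \
                        or eaten(ds, dw):
                    continue
                nxt = (ns, nw, 1 - b)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False
```

Running \texttt{crossing\_possible()} confirms that a cooperative plan exists for 3 sheep and 3 wolves; as is well known for this classic puzzle, no such plan exists for 4 sheep and 4 wolves with a 2-seat boat.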
And now, the question: is there a strategy profile satisfying the goal assignment above? The answer, perhaps surprisingly, depends on the specific design of the `river crossing game'. If it presumes that all animals act simultaneously, then it is easy to see that any joint strategy realising the common goal can be abused by the wolves deviating from it and eating some of the strategy-abiding sheep. For example, consider the joint strategy resulting in the play shown below:
\[
\begin{array}{l | c c c | r}
S\; S\;S \; W\;W\;W & B & & & \\
S\; S \; W\;W & & & B & S\; W \\
S\; S \; S\; W\;W & B & & & W \\
S\; S \; S & & & B & W\; W \; W \\
S\; S\; S\; W & B & & & W\;W \\
S\;W & & & B & S \; S \; W\; W \\
S\; S \; W \; W & B & & & S\; W \\
W\;W & & & B & S\; S \; S\; W \\
W\; W\;W & B & & & S\; S \; S \\
W & & & B & S\; S \; S \; W\; W \\
W\;W & B & & & S\; S \;S\; W \\
& & & B & S\; S\;S \; W\;W\;W
\end{array}
\]
At the very first round of this play, a sheep and a wolf cross the river together. If the wolf deviates from this action and stays instead, then two sheep are left to fend against three wolves on one side of the river. We leave it to the reader to convince themselves that any joint strategy that achieves the common goal must encounter a similar situation.
So, the answer to our question in this case is `No'. However, to level the playing field, the game can be modified so that at every state \emph{first all wolves choose how to act and then all sheep choose how to act}, i.e. formally, every round gets split into two sub-rounds with intermediate states (thus, making it a partly turn-based game). The effect of this change is that now a strategy profile satisfying the goal assignment above could be designed in such a way that the joint strategy of the sheep could involve a suitable joint counter-action to any possible deviation of the wolves that would jeopardise a sheep. Indeed, the joint strategy shown previously can now easily be modified to make it sheep-friendly.
\subsection{Some applications of \Logicname{TLCGA}\xspace and $\Logicname{TLCGA}\xspace^+$
to non-cooperative and cooperative game theory}
\label{subsec:applications2GT}
\subsubsection{Expressing Nash equilibria}
\label{subsec:equilibria}
The fundamental game-theoretic concept of Nash equilibrium can be applied in the concurrent games that we consider, where, given a goal assignment $\gamma$, the payoff from each play for every player is binary: 1, if that player's goal defined by $\gamma$ is satisfied on that play (i.e., the player is a `winner' in the play), and 0 otherwise (i.e., the player is a `loser' in the play).
However, this notion makes little sense in such a qualitative setting, because every strategy profile where no `loser' can deviate unilaterally to satisfy her objective is a weak Nash equilibrium. That gives no individually rational reasons for the losers to adhere to that strategy profile, because a deviation cannot be penalised any further by making their payoff even worse than it already is. Thus, we are rather sceptical about the use of Nash equilibria in such games with qualitative objectives as those considered here, and we argue further for an alternative solution concept in games with such objectives.
Accordingly, even though the language of \Logicname{TLCGA}\xspace is not designed with the explicit purpose of expressing equilibria by means of formulae, that can be done in terms of characterising the equilibria profiles as those witnessing suitable goal assignments defined by means of the original goal assignment $\gamma$. In this way, \Logicname{TLCGA}\xspace also enables expressing \emph{existence of equilibria} by means of formulae.
First, we add some terminology and notation. Let us fix a concurrent game model $\ensuremath{\mathcal{M}}$ with initial state $w$.
Then, any pair $(\ensuremath{\Sigma\xspace},\gamma)$ of a strategy profile $\ensuremath{\Sigma\xspace}$ and a goal assignment $\gamma$ determines a partition of the family of possible coalitions of agents $\powerset{\Agt}$ into two disjoint subsets: the set $\ensuremath{\mathsf{W}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ of \textbf{winning coalitions}, whose goals assigned by $\gamma$ are satisfied in the play $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ in $\ensuremath{\mathcal{M}}$ starting from $w$ and induced by $\ensuremath{\Sigma\xspace}$, and the set $\ensuremath{\mathsf{L}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ of \textbf{losing coalitions},
whose goals assigned by $\gamma$ are not satisfied in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$.
%
In particular, the pair $(\ensuremath{\Sigma\xspace},\gamma)$ determines a partition of $\Agt$ into two disjoint subsets, respectively the set $\ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ of (individual) winners and the set $\ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ of (individual) losers from $\ensuremath{\Sigma\xspace}$ with respect to $\gamma$.
We now illustrate the idea of expressing equilibria in \Logicname{TLCGA}\xspace in the case of a nexttime goal assignment $\alpha$ with an example for 2 players, $A$ and $B$, with respective individual nexttime goals $\alpha_A$ and $\alpha_B$. First, we can express an equilibrium satisfying any fixed combination of individual goals. In the case when both goals are satisfied by the equilibrium profile, no deviations can possibly improve any player's payoff, so
$\ensuremath{\Sigma\xspace}$ is such equilibrium profile iff $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment
$\{A,B\} \gass \mathord \mathsf{X}\, (\alpha_A \wedge \alpha_B)$, hence existence of such equilibrium profile is simply expressed by the formula
\[\brak{\{A,B\} \gass \mathord \mathsf{X}\, (\alpha_A \wedge \alpha_B)} \]
A more interesting case is when the equilibrium profile $\ensuremath{\Sigma\xspace}$ satisfies only one goal, say $\alpha_A$.
This is the case precisely when $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment
$\{A,B\} \gass \mathord \mathsf{X}\, (\alpha_A \wedge \neg \alpha_B); \ A \gass \mathord \mathsf{X}\, \neg \alpha_B$. The goal $\mathord \mathsf{X}\, \neg \alpha_B$ of $A$ encodes the claim that if $A$ follows the equilibrium strategy then $B$ cannot deviate to satisfy its goal, thus $B$ has no unilateral beneficial deviation from the equilibrium strategy profile. Thus, the following formula expresses existence of such equilibrium:
\[\brak{\{A,B\} \gass \mathord \mathsf{X}\, (\alpha_A \wedge \neg \alpha_B); \ A \gass \mathord \mathsf{X}\, \neg \alpha_B} \]
Likewise, the following formula expresses existence of an equilibrium not satisfying anyone's goal:
\[\brak{\{A,B\} \gass \mathord \mathsf{X}\, (\neg \alpha_A \wedge \neg \alpha_B); \ {A} \gass \mathord \mathsf{X}\, \neg \alpha_B; \ {B} \gass \mathord \mathsf{X}\, \neg \alpha_A} \]
Thus, the language of \Logicname{TLCGA}\xspace allows for expressing more refined descriptions of equilibria and
the disjunction of all such formulae expresses the existence of any equilibrium with respect to $\alpha$.
In the more general case of any set of agents $\Agt$ and pair $(\ensuremath{\Sigma\xspace},\alpha)$ of a strategy profile $\ensuremath{\Sigma\xspace}$ and a nexttime goal assignment $\alpha$, the strategy profile $\ensuremath{\Sigma\xspace}$ is a Nash equilibrium with respect to $\alpha$ iff $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment $\alpha^{0}_{\ensuremath{\Sigma\xspace}}$ defined as follows:
$\alpha^{0}_{\ensuremath{\Sigma\xspace}}(\Agt) = \mathord \mathsf{X}\, \bigwedge_{\ensuremath{\mathsf{a}\xspace} \in \ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\alpha)} \alpha(\ensuremath{\mathsf{a}\xspace})$, \
$\alpha^{0}_{\ensuremath{\Sigma\xspace}}(\Agt \setminus \{\ensuremath{\mathsf{a}\xspace}\}) = \lnot \alpha(\ensuremath{\mathsf{a}\xspace})$ for each
$\ensuremath{\mathsf{a}\xspace} \in \ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\alpha)$, \ and $\alpha^{0}_{\ensuremath{\Sigma\xspace}}(C) = \top$ in all other cases.
Then, the disjunction of all such formulae over all subsets $\ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\alpha)$ of
$\Agt$ expresses the existence of any equilibrium, though the size of that formula grows exponentially in the number of agents.
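To make the construction above concrete, the goal assignment $\alpha^{0}_{\ensuremath{\Sigma\xspace}}$ for a given set of winners can be generated mechanically. A small Python sketch, purely illustrative: formulas are plain strings, goal assignments are dictionaries keyed by frozensets (both conventions are ours), and the losers' blocking goals are wrapped in $\mathsf{X}$ following the two-player illustration:

```python
# Build the goal assignment characterising a Nash equilibrium with a given
# winner set. Formulas are plain strings and coalitions are frozensets --
# an illustrative encoding of ours, not part of the formal development.
def nash_goal_assignment(agents, winners, goals):
    losers = set(agents) - set(winners)
    win_conj = " & ".join(goals[a] for a in sorted(winners)) or "True"
    # The grand coalition must realise the conjunction of the winners' goals.
    gamma0 = {frozenset(agents): "X(" + win_conj + ")"}
    # For each loser a, the remaining agents block a's goal, so a has no
    # unilateral beneficial deviation.
    for a in losers:
        gamma0[frozenset(set(agents) - {a})] = "X(~" + goals[a] + ")"
    return gamma0

profile = nash_goal_assignment({"A", "B"}, {"A"}, {"A": "alpha_A", "B": "alpha_B"})
# Grand coalition realises alpha_A; coalition {A} blocks alpha_B.
```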
Lastly, for the most general case of any goal assignment $\gamma$, the idea is the same, but using $\Logicname{TLCGA}\xspace^+$
(which allows conjunctions of path formulae of $\Logicname{TLCGA}\xspace$ as goals) in order to define the goal
$\gamma^{0}_{\ensuremath{\Sigma\xspace}}(\Agt)$. Note that in the case when all individual goals are of the type $\mathord \mathsf{G}\, \psi$, the language of \Logicname{TLCGA}\xspace again suffices, because $\mathord \mathsf{G}\, $ distributes over conjunctions.
Thus, computing the sets $\ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ and answering the question of whether the game has any
Nash equilibria can be done by reduction to model checking in $\Logicname{TLCGA}\xspace^+$; in the special case when all goals are nexttime formulae, or all are $\mathord \mathsf{G}\, $-type formulae, model checking in \Logicname{TLCGA}\xspace suffices.
\subsubsection{Co-equilibria}
\label{subsec:co-equilibria}
Here we define and promote a new, alternative solution concept, that naturally arises in our framework, viz.\ that of a `\emph{co-equilibrium}', which is also one of the main motivations for the introduction of the operator $\brak{\cdot}$. Recall that an equilibrium strategy profile means that no player can deviate individually to improve their performance, and that concept makes very good sense when players pursue \emph{quantitative} individual objectives which are usually achieved to some degree, leaving room both for possible optimisation and for punishment by the other players when deviating, hence can serve as an effective deterrent from deviation. As we argued above, it does not make very good sense when the individual objectives are \emph{qualitative}, i.e. \emph{win} or \emph{lose}, as losing is the worst possible outcome for the player, hence there can be no deterrent from deviation from a strategy profile where that player is losing anyway.
Furthermore, players usually participate simultaneously in several coalitions with mutually consistent, yet different objectives. Assuming that they are first of all individually rational and only then collectively rational, players try to adjust their strategic behaviour so as to serve the collective objectives as much as possible while first protecting and ensuring their individual interests. In particular, all players usually have one common, societal objective -- say to keep the entire system live and safe -- so they enter into a global `social contract' over that common objective, but only on condition that pursuing it would not compromise the achievement of their individual objectives.
These aspects of strategic interaction of individually rational agents serve as our motivation to define, in the context of collective and individual qualitative objectives, the notion of \emph{co-equilibrium}, somewhat dual to that of an equilibrium: a strategy profile that not only ensures satisfaction of the collective objective (the `social contract') if all players follow it, but moreover also guarantees to every player who adheres to it that even if all other players deviate, that would not affect the satisfaction of his/her individual objective\footnote{The notion of co-equilibrium, when applied to possibly quantitative objectives, is essentially equivalent to the special case of `$t$-immune strategy profile', introduced in \cite{DBLP:conf/podc/AbrahamDGH06}, when $t = n-1$, where $n$ is the number of players.}.
Thus, a co-equilibrium is a strongly stable solution concept that, we argue, makes better sense than a Nash equilibrium in games with \emph{qualitative} individual objectives and existence of a co-equilibrium is an important criterion for stability of a society of strategically interacting individually rational agents.
Formally, a strategy profile $\ensuremath{\Sigma\xspace}$ is a \textbf{co-equilibrium} with respect to a goal assignment $\gamma$ iff $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment $\gamma^{*}$ which is the restriction of $\gamma$ with support consisting of the grand coalition $\Agt$ and all singleton sets of agents. Respectively, existence of a co-equilibrium with respect to $\gamma$ can be expressed in \Logicname{TLCGA}\xspace simply as $\brak{\gamma^{*}}$.
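The restriction $\gamma^{*}$ is straightforward to compute. A Python sketch, again with our own illustrative dictionary-of-frozensets encoding of goal assignments:

```python
# Restriction of a goal assignment to the grand coalition and the singletons,
# as in the definition of co-equilibrium. Goal assignments are dictionaries
# keyed by frozensets (an illustrative encoding of ours).
def co_equilibrium_assignment(agents, gamma):
    keep = {frozenset(agents)} | {frozenset({a}) for a in agents}
    return {C: phi for C, phi in gamma.items() if C in keep}

agents = {"a", "b", "c"}
gamma = {frozenset(agents): "G live",          # the `social contract'
         frozenset({"a"}): "F goal_a",         # an individual objective
         frozenset({"a", "b"}): "F shared"}    # dropped by the restriction
gamma_star = co_equilibrium_assignment(agents, gamma)
```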
\subsubsection{Expressing other stable outcomes}
\label{subsec:stable-outcomes}
The notion of co-equilibrium is one of a family of stable strategy profiles that can be defined by varying
the notion of stability with respect to non-existence of various \emph{beneficial deviations} of players or coalitions.
In the case of co-equilibrium, no player or coalition is interested in deviating, simply because they are all satisfied by the co-equilibrium strategy profile, so there are no beneficial deviations. This, of course, is the ideal case, which is often not possible in reality, so we will look at some natural relaxations of the notion of stable outcome.
Let us fix a concurrent game model $\ensuremath{\mathcal{M}}$ with initial state $w$, a coalitional goal assignment $\gamma$, and a strategy profile $\ensuremath{\Sigma\xspace}$. In Section \ref{subsec:equilibria}, we defined winning players and coalitions in the outcome play $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ with respect to $\gamma$, the sets
$\ensuremath{\mathsf{W}\xspace}(\ensuremath{\Sigma\xspace},\gamma), \ensuremath{\mathsf{L}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$ and respectively
$\ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma), \ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$. In addition, we say that a coalition $C$ is
\textbf{individually winning in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ with respect to $\gamma$} if
$C \subseteq \ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$;
respectively,
$C$ is \textbf{individually losing in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ with respect to $\gamma$} if
$C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$.
Now, with reference to the fixed coalitional goal assignment $\gamma$, we say that a strategy profile
$\ensuremath{\Sigma\xspace}$ is:
\begin{enumerate}
\item \textbf{individually stable at $w$}, if no losing player in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ can deviate from
$\ensuremath{\Sigma\xspace}$ to become a winning player in the resulting strategy profile $\ensuremath{\Sigma\xspace}'$.
This is precisely equivalent to the standard notion of Nash equilibrium.
\item \textbf{strongly individually stable at $w$}, if no group of losing players in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ can collectively deviate from $\ensuremath{\Sigma\xspace}$ to become a group of winning players in the resulting strategy profile $\ensuremath{\Sigma\xspace}'$.
This corresponds to Aumann's notion of \emph{strong equilibrium} \cite{aumann59} and is similarly expressible in \Logicname{TLCGA}\xspace (for nexttime goal assignments) or $\Logicname{TLCGA}\xspace^+$ (for any goal assignments):
$\ensuremath{\Sigma\xspace}$ is strongly individually stable with respect to $\gamma$ iff $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment $\gamma^{s}_{\ensuremath{\Sigma\xspace}}$ defined as follows:
$\gamma^{s}_{\ensuremath{\Sigma\xspace}}(\Agt) = \bigwedge_{\ensuremath{\mathsf{a}\xspace} \in \ensuremath{\mathsf{W_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)} \gamma(\ensuremath{\mathsf{a}\xspace})$, \\
$\gamma^{s}_{\ensuremath{\Sigma\xspace}}(C) = \lnot \bigwedge_{\ensuremath{\mathsf{a}\xspace} \in \overline{C}} \gamma(\ensuremath{\mathsf{a}\xspace})$
for each $C$ such that
$\overline{C} \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$, \\
and $\gamma^{s}_{\ensuremath{\Sigma\xspace}}(C) = \top$ in all other cases.
\item \textbf{coalitionally stable at $w$}, if no losing coalition in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ can collectively deviate from $\ensuremath{\Sigma\xspace}$ to become a winning coalition in the resulting strategy profile $\ensuremath{\Sigma\xspace}'$.
This is similarly expressible in \Logicname{TLCGA}\xspace (for nexttime goal assignments) or $\Logicname{TLCGA}\xspace^+$ (for any goal assignments):
$\ensuremath{\Sigma\xspace}$ is coalitionally stable with respect to $\gamma$ iff $\ensuremath{\Sigma\xspace}$ witnesses the goal assignment $\gamma_{\ensuremath{\Sigma\xspace}}$ defined as follows:
$\gamma_{\ensuremath{\Sigma\xspace}}(\Agt) = \bigwedge_{C \in \ensuremath{\mathsf{W}\xspace}(\ensuremath{\Sigma\xspace},\gamma)} \gamma(C)$, \\
$\gamma_{\ensuremath{\Sigma\xspace}}(C) =
\lnot \gamma(\overline{C})$ for each $C$ such that
$\overline{C} \in \ensuremath{\mathsf{L}\xspace}(\ensuremath{\Sigma\xspace},\gamma)$, \\
and $\gamma_{\ensuremath{\Sigma\xspace}}(C) = \top$ in all other cases.
\end{enumerate}
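The construction of $\gamma_{\ensuremath{\Sigma\xspace}}$ in item 3 can likewise be replayed mechanically. A Python sketch, purely illustrative (formulas as strings, coalitions as frozensets; both conventions ours):

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

# gamma_Sigma from item 3: the grand coalition realises the goals of all
# winning coalitions; every C whose complement is losing blocks that goal.
# Formulas are strings, coalitions frozensets (illustrative encoding, ours).
def coalitional_stability_assignment(agents, gamma, winning, losing):
    agents = frozenset(agents)
    out = {agents:
           " & ".join(gamma[C] for C in sorted(winning, key=sorted)) or "True"}
    for C in subsets(agents):
        comp = agents - C
        if comp in losing:
            out[C] = "~(" + gamma[comp] + ")"
    return out

gs = coalitional_stability_assignment(
    {"A", "B"},
    {frozenset({"A"}): "gA", frozenset({"B"}): "gB",
     frozenset({"A", "B"}): "gAB"},
    winning={frozenset({"A"})}, losing={frozenset({"B"})})
```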
\subsubsection{Cooperative games and \Logicname{TLCGA}\xspace}
\label{subsec:CoopGT}
The technical ideas outlined above can also be applied to express in \Logicname{TLCGA}\xspace (or $\Logicname{TLCGA}\xspace^+$) some key notions of the theory of \emph{cooperative games (with transferable utility)} \cite{Ramamurthy90},
\cite{BranzeiDimitrovTijs}, \cite{DBLP:series/synthesis/2011Chalkiadakis}, including \emph{beneficial deviations}, \emph{stable outcomes}, and the \emph{core} of a cooperative game. We leave the exploration of these to future work, and here we only outline one example showing how that can be done for the kind of \emph{cooperative concurrent games} studied in \cite{DBLP:conf/atal/GutierrezKW19}.
These games are played in concurrent game structures, where each player has a goal expressed by a set of plays starting from some fixed initial state, regarded as the ``winning plays'' for that player. In particular, cooperative concurrent games with goals expressible in the linear time logic \Logicname{LTL}\xspace (cf., e.g., \cite{TLCSbook}) are studied in \cite{DBLP:conf/atal/GutierrezKW19}, for which it is shown that a suitably defined notion of a core of such a game can be logically characterised using the logic \Logicname{ATL^*}\xspace
and the computational complexities of certain decision problems associated with that core have been established.
Let us fix a state $w$ in a concurrent game model $\ensuremath{\mathcal{M}}$ and consider the cooperative concurrent game $G = G(\ensuremath{\mathcal{M}},w)$ generated at $w$ in $\ensuremath{\mathcal{M}}$ and a strategy profile $\ensuremath{\Sigma\xspace}$ in $G$.
We note that the terminology of \cite{DBLP:conf/atal/GutierrezKW19} differs somewhat from ours, as they call a winning (resp. losing) coalition\footnote{Only individual, but no coalitional goals are considered in \cite{DBLP:conf/atal/GutierrezKW19}.} what we call in Section \ref{subsec:stable-outcomes} an \emph{individually winning} (resp. \emph{individually losing}) coalition. For consistency, we will adhere to our terminology.
To make the parameter $w$ referring to the current initial state explicit, we will denote the set of all losing players in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$ with respect to $\gamma$ by $\ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$.
Thus,
$C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$, i.e. the coalition $C$ is individually losing in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$,
iff $\ensuremath{\Sigma\xspace}$ witnesses the $\Logicname{TLCGA}\xspace^+$ goal assignment $\Agt \gass \bigwedge_{\ensuremath{\mathsf{i}\xspace} \in C} \lnot \gamma(\ensuremath{\mathsf{i}\xspace})$. In the case when all $\gamma(\ensuremath{\mathsf{i}\xspace})$ are nexttime formulae $\mathord \mathsf{X}\, \psi(\ensuremath{\mathsf{i}\xspace})$, that goal assignment can be replaced by the $\Logicname{TLCGA}\xspace$ goal assignment $\Agt \gass \mathord \mathsf{X}\, \bigwedge_{\ensuremath{\mathsf{i}\xspace} \in C} \lnot \psi(\ensuremath{\mathsf{i}\xspace})$.
Now, if $C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$,
then an \emph{individually beneficial deviation}\footnote{Note that this notion, defined in \cite{DBLP:conf/atal/GutierrezKW19} as ``beneficial deviation'', does not actually depend on $\ensuremath{\Sigma\xspace}$ in any other way but that $C$ is losing in $\ensuremath{\mathsf{play}\xspace}(w,\ensuremath{\Sigma\xspace})$.} for $C$ is any joint strategy $\ensuremath{\Sigma\xspace} _C$ of $C$ that guarantees that $C$ is individually winning on any outcome play from $w$ induced by
$\ensuremath{\Sigma\xspace} _C$.
The \textbf{core of the game $G$}, denoted $\mathit{core}(G)$, is defined as the set of \textbf{stable strategy profiles}
in $G$, viz. those that admit no beneficial deviations (by any losing coalition).
%
Thus, $C$ has a beneficial deviation at the state $w$ iff $\brak{\gamma_{C}}$ is true at $w$, where $\gamma_{C}$ is the goal assignment $C \gass \bigwedge_{\ensuremath{\mathsf{i}\xspace} \in C} \gamma(\ensuremath{\mathsf{i}\xspace})$.
Therefore, existence of a coalition that has a beneficial deviation with respect to $\gamma$ (and $\ensuremath{\Sigma\xspace}$) can be expressed as the disjunction of all formulae $\brak{\gamma_{C}}$, over all individually losing coalitions $C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$.
Then, $\ensuremath{\Sigma\xspace}$ is in $\mathit{core}(G)$ iff it does not satisfy that formula at $w$:
\[ \ensuremath{\Sigma\xspace} \in \mathit{core}(G) \qquad \mbox{ iff } \qquad
\ensuremath{\mathcal{M}}, w \models \bigwedge_{C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)} \lnot \brak{\gamma_{C}}\]
Thus, like in the case of Nash equilibria and the other stable outcomes defined in Section \ref{subsec:stable-outcomes}, computing the sets $\ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$ and answering the question of whether the core of such a game is non-empty can be done by reduction to model checking in $\Logicname{TLCGA}\xspace^+$; in the special case when all goals are nexttime formulae, or all are $\mathord \mathsf{G}\, $-type formulae, model checking in \Logicname{TLCGA}\xspace suffices.
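The membership test in the displayed formula can be sketched as follows: enumerate the individually losing coalitions $C \subseteq \ensuremath{\mathsf{L_{0}}\xspace}(w,\ensuremath{\Sigma\xspace},\gamma)$ and query a model checker for $\brak{\gamma_{C}}$. In the Python sketch below, the oracle is a stand-in for a real \Logicname{TLCGA}\xspace model checker; the whole encoding is ours and purely illustrative:

```python
# Core membership: Sigma is in core(G) iff no individually losing coalition C
# (a nonempty subset of L0) has a beneficial deviation <<gamma_C>>. The
# oracle stands in for a real model checker (illustrative encoding, ours).
from itertools import chain, combinations

def nonempty_subsets(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1)))

def in_core(losers, has_beneficial_deviation):
    return not any(has_beneficial_deviation(C) for C in nonempty_subsets(losers))

# Toy oracle: only the coalition {a, b} jointly has a beneficial deviation.
oracle = lambda C: C == frozenset({"a", "b"})
```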
\section{Fixpoint characterizations of temporal formulae in \Logicname{TLCGA}\xspace}
\label{sec:fixpoints}
In this section we will show how to embed the logic $\Logicname{TLCGA}\xspace$ into a suitable fixpoint logic. In fact, the embedding can be extended likewise to the logic $\Logicname{TLCGA}\xspace^+$, and we sketch along the way how this is done. The results in this section will be stated and proved for \Logicname{TLCGA}\xspace, but their extensions to $\Logicname{TLCGA}\xspace^+$ are quite routine.
\subsection{Types of goal assignments}
\label{subsec:TypesAssignments}
\begin{definition}
A goal assignment $\gamma$ supported by a family of coalitions $\ensuremath{\mathcal{F}\xspace}$
will be called \textbf{long-term temporal}
if $\gamma$ maps every coalition in $\ensuremath{\mathcal{F}\xspace}$ either to a $\mathsf{U}$-formula or a $\mathsf{G}$-formula, that is, if $\gamma[\ensuremath{\mathcal{F}\xspace}] \subseteq \mathsf{UGFor}$, where
$\gamma[\ensuremath{\mathcal{F}\xspace}] = \{ \gamma(C) \mid C \in \ensuremath{\mathcal{F}\xspace} \}$.
A goal assignment is called \textbf{local}, or \textbf{nexttime}, if $\gamma$ maps every coalition in $\ensuremath{\mathcal{F}\xspace}$ to an $\mathsf{X}$-formula, i.e., $\gamma[\ensuremath{\mathcal{F}\xspace}] \subseteq \mathsf{XFor}$.
A formula $\phi$ is said to be in \textbf{normal form} if, for every subformula of the form $\brak{\gamma}$, the goal assignment $\gamma$ is either a nexttime or a long-term temporal goal assignment.
\end{definition}
To extend this definition to $\Logicname{TLCGA}\xspace^+$, we say that a goal assignment $\gamma$ in the extended language is long-term temporal if for each coalition $C$, each conjunct of $\gamma(C)$ is either a $\mathsf{U}$-formula or a $\mathsf{G}$-formula. Clearly, this reduces to the previous definition for the special case of goal assignments in $\Logicname{TLCGA}\xspace$.
\begin{definition}
Let $\gamma$ be a long-term temporal goal assignment supported by the family $\ensuremath{\mathcal{F}\xspace}$. We say that $\gamma$ is:
\begin{itemize}
\item of \textbf{type $\mathsf{U}$} if $\gamma$ maps at least one element of $\ensuremath{\mathcal{F}\xspace}$ to a $\mathsf{U}$-formula,
\item of \textbf{type $\mathsf{G}$} if $\gamma$ maps every element of $\ensuremath{\mathcal{F}\xspace}$ to a $\mathsf{G}$-formula.
\end{itemize}
We denote the sets of goal assignments of type $\mathsf{U}$ and type $\mathsf{G}$ respectively by $\ensuremath{\mathsf{Type}\until}\xspace$ and $\ensuremath{\mathsf{Type}\always}\xspace$.
\end{definition}
Again, this can be extended to $\Logicname{TLCGA}\xspace^+$: we say that a long-term temporal goal assignment $\gamma$ is of type $\mathsf{U}$ if there is some coalition $C$ in the support of $\gamma$ such that at least one conjunct of $\gamma(C)$ is a $\mathsf{U}$-formula. Otherwise, $\gamma$ is of type $\mathsf{G}$.
\subsection{The fixpoint property of goal assignments}
\begin{definition}
Given a family of coalitions $\ensuremath{\mathcal{F}\xspace}$ and a goal assignment
$\gamma$ supported by $\ensuremath{\mathcal{F}\xspace}$, we write $\gamma\vert_{\lfor}$ for the restriction of $\gamma$ to the family $\ensuremath{\mathcal{F}\xspace}\vert_{\lfor} = \{C \in \ensuremath{\mathcal{F}\xspace} \mid \gamma(C) \in \mathsf{UGFor}\}$. Similarly we write $\gamma\vert_{\xfor}$ for the restriction of $\gamma$ to the family $\ensuremath{\mathcal{F}\xspace}\vert_{\xfor} \subseteq \ensuremath{\mathcal{F}\xspace}$ defined as $\{C \in \ensuremath{\mathcal{F}\xspace} \mid \gamma(C) \in \mathsf{XFor}\}$.
\end{definition}
To extend this notion to $\Logicname{TLCGA}\xspace^+$, we define $\gamma\vert_{\lfor}$ ($\gamma\vert_{\xfor}$) by setting, for each coalition $C$, $\gamma\vert_{\lfor}(C)$ to be the conjunction of all formulas $\alpha \mathsf{U} \beta$ or $\mathsf{G} \chi$ that appear as conjuncts of $\gamma(C)$, provided that $\gamma(C)$ has at least one such conjunct, and $\gamma\vert_{\lfor}(C) = \atlx \top$ otherwise. The goal assignment $\gamma\vert_{\xfor}$ is defined similarly.
\begin{definition}
Given a family of coalitions $\ensuremath{\mathcal{F}\xspace}$ and a goal assignment
$\gamma$ supported by $\ensuremath{\mathcal{F}\xspace}$, the
\textbf{nexttime-extension of $\gamma$} is the goal assignment
$\diffof{\gamma}$
defined as follows. First, we define
$\sup{\diffof{\gamma}} :=
\big\{\bigcup \ensuremath{\mathcal{F}\xspace}' \mid \emptyset \neq \ensuremath{\mathcal{F}\xspace}' \subseteq \ensuremath{\mathcal{F}\xspace}\big\}$.
Then, for each $C \in \sup{\diffof{\gamma}}$ we define
\[
\diffof{\gamma}(C) := \atlx \Big(\bigwedge \big\{\varphi \mid \mbox{there exists } C' \in \ensuremath{\mathcal{F}\xspace}, C' \subseteq C \mbox{ such that } \gamma(C') = \atlx\varphi \big\}
\wedge \brak{(\gamma\vert_C)\vert_{\lfor}} \Big),
\]
where as a convention we remove from this formula any conjuncts that reduce to $\top$, which can appear as the result of a conjunction of the empty set (the left conjunct reduces to $\top$) or as $\brak{\gamma}$ where $\gamma$ is the empty goal assignment (the right conjunct reduces to $\top$). For all coalitions that are not in $\sup{\diffof{\gamma}}$, $\diffof{\gamma}$ assigns the trivial goal. Given any formula $\phi$, we will sometimes abbreviate the formula $\diffof{\gamma}[\bigcup \ensuremath{\mathcal{F}\xspace} \gass \mathsf{X} \phi]$ by $\gammaof{\phi}$.
\end{definition}
The definition above may look a bit opaque, but what it does is quite simple. Intuitively, the goal assignment $\diffof{\gamma}$ describes the conditions that each coalition must ensure to hold for the \emph{next state}, with respect to a given strategy profile $\ensuremath{\Sigma\xspace}$, in order for that strategy profile to fulfill the goal assignment $\gamma$ at the \emph{current state}. To make things more concrete, let us consider two examples.
\begin{example}
If $\brak{\gamma}$ is $\brak{C \gass \varphi \mathsf{U} \psi}$, then $\brak{\diffof{\gamma}} = \brak{C \gass \atlx\brak{C \gass \varphi \mathsf{U} \psi}} = \brak{C \gass \atlx\brak{\gamma}}$.
So in this special case the nexttime-extension simply pushes the eventuality $\varphi \mathsf{U} \psi$ one step into the future, so to speak. Similarly, if $\brak{\gamma}$ is $\brak{C \gass \mathsf{G} \varphi}$, then $\brak{\diffof{\gamma}} = \brak{C \gass \atlx\brak{C \gass \mathsf{G} \varphi}} = \brak{C \gass \atlx\brak{\gamma}}$.
\end{example}
\begin{example} Consider the example of a goal assignment $\gamma$ supported by $\ensuremath{\mathcal{F}\xspace} = \{\{a,b\},\{c\},\{b,c\}\}$ and defined by the assignment:
$$\{a,b\}\gass p\mathsf{U} q, \;\; \{c\} \gass \mathsf{G} r, \;\; \{b,c\}\gass\atlx s$$
The support of $\diffof{\gamma}$ will be $\ensuremath{\mathcal{F}\xspace} \cup \{\{a,b,c\}\}$. The action of $\diffof{\gamma}$ is shown below:
\begin{itemize}
\item $\{a,b\} \gass \mathsf{X} \brak{\{a,b\} \gass p\mathsf{U} q}$
\item $\{c\} \gass \mathsf{X} \brak{\{c\} \gass \mathsf{G} r}$
\item $\{b,c\} \gass \mathsf{X} (s \wedge \brak{\{c\} \gass \mathsf{G} r})$
\item $\{a,b,c\} \gass \mathsf{X} (s \wedge \brak{\{a,b\} \gass p\mathsf{U} q, \{c\} \gass \mathsf{G} r})$
\end{itemize}
\end{example}
The procedure for computing $\diffof{\gamma}$ goes, informally, as follows: for each $\subseteq$-downset\footnote{I.e. a set $\mathcal{D}$ such that for all $Z \in \mathcal{D}$ and all $Z' \in \ensuremath{\mathcal{F}\xspace}$, if $Z' \subseteq Z$ then $Z' \in \mathcal{D}$.} $\mathcal{D}$ of coalitions from $\ensuremath{\mathcal{F}\xspace}$, we collect all the formulas $\varphi$ for which some coalition in $\mathcal{D}$ is mapped to the goal $\atlx\varphi$ into a conjunction, add a conjunct collecting all the long-term goals for coalitions in $\mathcal{D}$ into a single goal assignment, and finally put the resulting conjunction in the scope of an $\atlx$-operator and assign this goal to the union of $\mathcal{D}$.
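The support computation underlying the preceding example can be replayed mechanically: it consists of all unions of nonempty subfamilies of $\ensuremath{\mathcal{F}\xspace}$. A Python sketch, with our own frozenset encoding, purely for illustration:

```python
# Support of the nexttime-extension: all unions of nonempty subfamilies of F.
from itertools import chain, combinations

def nexttime_support(F):
    F = list(F)
    subfamilies = chain.from_iterable(
        combinations(F, r) for r in range(1, len(F) + 1))
    return {frozenset().union(*fam) for fam in subfamilies}

# The family from the example: {a,b}, {c}, {b,c}.
F = [frozenset({"a", "b"}), frozenset({"c"}), frozenset({"b", "c"})]
support = nexttime_support(F)
# As computed in the example, the support gains exactly one new coalition,
# namely {a, b, c}.
```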
We are now ready to define one of the key concepts of the paper.
\begin{definition}
Let $\gamma$ be a goal assignment, supported by $\ensuremath{\mathcal{F}\xspace} $. Then we define the following formula:
\[\unf{\gamma} :=
\bigvee \mathsf{Finish}(\gamma) \vee \bigg(\bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge \brak{\diffof{\gamma}}\bigg),
\]
where:
\begin{itemize}
\item $\mathsf{Finish}(\gamma) := \big\{\beta \wedge \brak{\gamma \setminus C} \mid \gamma(C) = \alpha \mathsf{U} \beta\big\}$
\item $\mathsf{UHolds}(\gamma) := \big\{\alpha \mid
\gamma(C) = \alpha \mathsf{U} \beta,
\mbox{ for some } C,\beta \big\}$
\item $\mathsf{GHolds}(\gamma) := \big\{\chi \mid \gamma(C) = \mathsf{G} \chi, \mbox{ for some } C \big\}$
\end{itemize}
As before, by convention we remove from this formula all conjuncts that reduce to $\top$ and all disjuncts that reduce to $\bot$.
So, for example, if $\mathsf{Finish}(\gamma) = \emptyset$, and hence also $\mathsf{UHolds}(\gamma) = \emptyset$, then the formula $\unf{\gamma}$ reduces to:
$$\bigwedge \mathsf{GHolds}(\gamma) \wedge \brak{\diffof{\gamma}}.$$
We call $\unf{\gamma}$ \textbf{the unfolding formula} of $\gamma$.
\end{definition}
The formula $\unf{\gamma}$ may require some additional explanation. We shall see that $\unf{\gamma}$ is in fact equivalent to $\brak{\gamma}$, and the formula can be seen as an analysis of the different possibilities for \emph{how} a given strategy profile may fulfil the goal assignment $\gamma$. The first disjunction $\mathsf{Finish}(\gamma)$ describes those cases where one of the coalitions $C$ has an eventuality $\alpha \mathsf{U} \beta$ as its goal formula; if the formula $\beta$ happens to be true at the current state then the coalition $C$ trivially attains its goal regardless of its actions. So, for a strategy profile to fulfil the goal assignment $\gamma$, it suffices that it fulfils the goals of all coalitions besides $C$, i.e. the goal assignment $\gamma \setminus C$. In the remaining case, the conditions that a strategy profile must satisfy in order to fulfil the goal assignment $\gamma$ are divided into two parts: ``local'' conditions that must be true at the \emph{current state}, which are outside the agents' control, and those conditions that each coalition must ensure for the \emph{next} state. The latter conditions are described by the formula $\brak{\diffof{\gamma}}$, as explained earlier. The local conditions are derived from the interpretation of temporal (path) formulas: if the goal of coalition $C$ is an eventuality $\alpha \mathsf{U} \beta$, and this goal is not trivially fulfilled because $\beta$ happens to be true, then the goal can only be attained if $\alpha$ is true at the current state. Similarly, a goal $\mathsf{G} \chi$ can only be fulfilled if $\chi$ is currently true. These conditions are captured by the conjuncts $\mathsf{UHolds}(\gamma)$ and $\mathsf{GHolds}(\gamma)$, respectively.
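The three syntactic components of the unfolding can be extracted mechanically from a goal assignment. A Python sketch with a tagged-tuple encoding of our own devising (formulas as strings), purely for illustration:

```python
# The three components of the unfolding formula, read off from a goal
# assignment whose goals are tagged tuples ("U", alpha, beta), ("G", chi)
# or ("X", phi) -- an illustrative encoding of ours, formulas as strings.
def unfolding_parts(gamma):
    finish, uholds, gholds = [], [], []
    for C, goal in sorted(gamma.items(), key=lambda kv: sorted(kv[0])):
        if goal[0] == "U":
            alpha, beta = goal[1], goal[2]
            # beta paired with the remaining support, representing the
            # disjunct beta & <<gamma \ C>>.
            finish.append((beta, frozenset(gamma) - {C}))
            uholds.append(alpha)
        elif goal[0] == "G":
            gholds.append(goal[1])
    return finish, uholds, gholds

parts = unfolding_parts({frozenset({"a"}): ("U", "p", "q"),
                         frozenset({"b"}): ("G", "r")})
```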
\begin{definition}
Let $\gamma$ be a long-term temporal goal assignment. Then we define the \textbf{induction formula for $\gamma$ on $\phi$} as follows
\[
\indf{\gamma}{\phi} :=
\bigvee \mathsf{Finish}(\gamma) \vee \Big(\bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge
\brak{\gammaof{\phi}}\Big),
\]
after removing redundant conjuncts and disjuncts, as before. So, this formula is like
$\unf{\gamma}$, except that the largest coalition in the support of $\diffof{\gamma}$ will be mapped to $\mathsf{X}\phi$.
\end{definition}
\begin{proposition}
\label{prop:unfold}
For every long-term temporal goal assignment $\gamma$ we have:
$$\unf{\gamma} = \indf{\gamma}{\brak{\gamma}}.$$
\end{proposition}
\begin{proof}
If $\gamma$ is long-term temporal and supported by $\ensuremath{\mathcal{F}\xspace}$ then we get:
\[(\gamma\vert_{\lfor})\vert_{\bigcup \ensuremath{\mathcal{F}\xspace}} = \gamma.\]
Thus, since there are no nexttime formulas to consider, $\diffof{\gamma}$ will map $\bigcup \ensuremath{\mathcal{F}\xspace}$ to $\mathsf{X}\brak{\gamma}$, hence
$\diffof{\gamma}[\bigcup \ensuremath{\mathcal{F}\xspace} \gass \mathsf{X}\brak{\gamma}] = \diffof{\gamma}$.
\end{proof}
It is not hard to see that, if $\gamma$ is a nexttime goal assignment, then $\unf{\gamma} \equiv \brak{\gamma}$. For example, suppose $\gamma$ is supported by $\{\{a\},\{b\}\}$ and maps $\{a\}$ to $\mathsf{X} p$ and $\{b\}$ to
$\mathsf{X} q$. Then $\unf{\gamma}$ is equal to $\brak{\diffof{\gamma}}$, which is the following formula:
$$\brak{\{a\} \gass \mathsf{X} p, \{b\} \gass \mathsf{X} q, \{a,b\} \gass \mathsf{X} (p \wedge q)}$$
which is clearly equivalent to
$\brak{\gamma} = \brak{\{a\} \gass \mathsf{X} p, \{b\} \gass \mathsf{X} q}$.
In fact, the equivalence always holds; this is by design, and will play a key role for our axiomatization.
\begin{theorem}[Fixpoint property]
\label{p:fixpoint-property}
For any goal assignment $\gamma$:
$$ \brak{\gamma}\equiv \unf{\gamma},$$
and hence for any long-term temporal goal assignment $\gamma$:
$$\brak{\gamma} \equiv \indf{\gamma}{\brak{\gamma}}.$$
\end{theorem}
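To illustrate the fixpoint property on a long-term goal, consider the singleton goal assignment $\gamma$ mapping $\{a\}$ to $\mathsf{G} p$. Here $\mathsf{Finish}(\gamma)$ and $\mathsf{UHolds}(\gamma)$ are empty, $\mathsf{GHolds}(\gamma) = \{p\}$, and $\diffof{\gamma}$ maps $\{a\}$ to $\mathsf{X}\brak{\gamma}$, so the theorem instantiates to
\[
\brak{\{a\} \gass \mathsf{G} p} \equiv p \wedge \brak{\{a\} \gass \mathsf{X} \brak{\{a\} \gass \mathsf{G} p}},
\]
the expected one-step unfolding of the invariance goal.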
\append{
\begin{proof}
We prove each implication separately.
\paragraph{Left to right:} suppose that $\ensuremath{\mathcal{M}},s \ensuremath{\Vdash} \brak{\gamma}$, where $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$, and let $\ensuremath{\Sigma\xspace}$ be some profile witnessing $\gamma$ at $s$. Assuming that $\ensuremath{\mathcal{M}}, s \ensuremath{\nVdash} \bigvee \mathsf{Finish}(\gamma)$, we show that:
$$\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge \brak{\diffof{\gamma}}.$$
We treat these conjuncts separately. First, note that if $\mathsf{UHolds}(\gamma) = \emptyset$ or $\mathsf{GHolds}(\gamma) = \emptyset$ then these conjunctions reduce to $\top$ and hence are trivially satisfied.
Suppose $\alpha \in \mathsf{UHolds}(\gamma)$. Then there is some coalition $C$ and some $\beta$ for which
$\gamma(C) = \alpha \mathsf{U} \beta$. The set $\paths(s,\ensuremath{\Sigma\xspace},{C})$ is always non-empty, so consider an arbitrary member $\path$ and recall that its first element is $s$. Since $\ensuremath{\Sigma\xspace}, s \Vvdash \gamma$ it follows that $\path \models \alpha \mathsf{U} \beta$. Since we assumed that $\ensuremath{\mathcal{M}}, s \ensuremath{\nVdash} \bigvee \mathsf{Finish}(\gamma)$, we cannot have $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \beta$ since this would give $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \beta \wedge \brak{\gamma}$ which entails $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \beta \wedge \brak{\gamma\vert_{C}}$, and $\beta \wedge \brak{\gamma\vert_{C}}$ is a member of $\mathsf{Finish}(\gamma)$. Hence we have $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \alpha$, as required. The proof that each conjunct from $\mathsf{GHolds}(\gamma)$ is satisfied is similar (but simpler).
We now show that $\ensuremath{\Sigma\xspace}, s \Vvdash \diffof{\gamma}$. Pick an arbitrary coalition $C$ in the support of $\diffof{\gamma}$ and an arbitrary path $\path \in \paths(s,\ensuremath{\Sigma\xspace},C)$. We need to show that $\path \models \diffof{\gamma}(C)$. Suppose that $\path$ is the path generated by a play in $\ensuremath{\mathsf{Plays}\xspace}(s,\ensuremath{\Sigma\xspace},C)$ of the form:
$$w_0 \ensuremath{\zeta\xspace}_0 w_1 \ensuremath{\zeta\xspace}_{1} w_2 ...$$
where $w_0 = s$. We need to show that:
\begin{enumerate}
\item For each $C' \subseteq C$ in the support of $\gamma$, if $\gamma(C') = \mathsf{X} \psi$ then $\ensuremath{\mathcal{M}},w_1 \ensuremath{\Vdash} \psi$,
\item $\ensuremath{\mathcal{M}}, w_1 \ensuremath{\Vdash} \brak{(\gamma\vert_{\lfor})\vert_{C}}$.
\end{enumerate}
The first item is straightforward, so we focus on item (2). We need to come up with a strategy profile $\ensuremath{\Sigma\xspace}'$ that witnesses $(\gamma\vert_{\lfor})\vert_{C}$ at $w_1$. We define $\ensuremath{\Sigma\xspace}'$ as follows.
Given a history of the form:
$$v_0 \ensuremath{\zeta\xspace}'_0 v_1... v_{n -1 }\ensuremath{\zeta\xspace}'_{n - 1} v_n$$
where $v_0 = w_1$, and a player $\ensuremath{\mathsf{a}\xspace} \in C$, we set:
$$\ensuremath{\Sigma\xspace}'(\ensuremath{\mathsf{a}\xspace}, v_0 \ensuremath{\zeta\xspace}'_0 v_1... v_{n -1 }\ensuremath{\zeta\xspace}'_{n - 1} v_n) := \ensuremath{\Sigma\xspace}(\ensuremath{\mathsf{a}\xspace}, w_0 \ensuremath{\zeta\xspace}_0 v_0 \ensuremath{\zeta\xspace}'_0 v_1... v_{n -1 }\ensuremath{\zeta\xspace}'_{n - 1} v_n)$$
Now let $C' \subseteq C$ be a coalition for which $\gamma(C') = \alpha \mathsf{U} \beta$, and let $\path' \in \paths(w_1,\ensuremath{\Sigma\xspace}',C')$. Then $s\path' \in \paths(s,\ensuremath{\Sigma\xspace},C')$ by construction of $\ensuremath{\Sigma\xspace}'$, hence $s\path' \models \alpha \mathsf{U} \beta$.
Since $\ensuremath{\mathcal{M}},s\ensuremath{\nVdash} \beta$,
we get $\path' \models \alpha \mathsf{U} \beta$ as well. Similarly we can show that if $C' \subseteq C$ is a coalition for which $\gamma(C') = \mathsf{G} \alpha$, and $\path' \in \paths(w_1,\ensuremath{\Sigma\xspace}',C')$, then $\path' \models \mathsf{G} \alpha$. This shows that $\ensuremath{\Sigma\xspace}', w_1 \Vvdash (\gamma\vert_{\lfor})\vert_{C}$, as required.
\paragraph{Right to left:}
Suppose $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \unf{\gamma}$. We show that $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \brak{\gamma}$.
There are two cases to consider: either some formula in $\mathsf{Finish}(\gamma)$ holds at $s$, or:
$$
\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge \brak{\diffof{\gamma}}.
$$
In the first case there is some $C$ in the support $\ensuremath{\mathcal{F}\xspace}$ of $\gamma$ for which $\gamma(C) = \alpha \mathsf{U} \beta$ and:
$$ \ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \beta \wedge \brak{\gamma \setminus C}.$$
But then $\alpha \mathsf{U} \beta$ holds at any path beginning from $s$, and it follows that $\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \brak{(\gamma \setminus C)[C \gass \alpha \mathsf{U} \beta]}$. This formula is the same as $\brak{\gamma}$, so we are done.
Consider now the second, more challenging case, and fix a strategy profile $\ensuremath{\Sigma\xspace}$ for $\Agt$ witnessing $\diffof{\gamma}$ at $s$. Given any locally available action profile $\ensuremath{\zeta\xspace} \in \ensuremath{\mathsf{ActProf}\xspace}_s$,
we define the set
$\mathsf{fol}(\ensuremath{\zeta\xspace})$ of \textbf{followers of $\ensuremath{\Sigma\xspace}$ relative to $\ensuremath{\zeta\xspace}$}:
\[
\mathsf{fol}(\ensuremath{\zeta\xspace}) := \bigcup \big\{C \in \ensuremath{\mathcal{F}\xspace} \mid \ensuremath{\zeta\xspace}(\ensuremath{\mathsf{a}\xspace}) = \ensuremath{\Sigma\xspace}(\ensuremath{\mathsf{a}\xspace},s)
\mbox{ for all } \ensuremath{\mathsf{a}\xspace} \in C
\big\}.
\]
Note that $\mathsf{fol}(\ensuremath{\zeta\xspace})$ belongs to the support of $\diffof{\gamma}$.
By unfolding the definition of $\brak{\diffof{\gamma}}$, we see that the following conditions hold for each locally available action profile $\ensuremath{\zeta\xspace}\in \ensuremath{\mathsf{ActProf}\xspace}_s$
and any $C \in \ensuremath{\mathcal{F}\xspace}$ such that $C \subseteq \mathsf{fol}(\ensuremath{\zeta\xspace})$:
\begin{enumerate}
\item $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace},s) \in \tset{\brak{(\gamma\vert_{\lfor})\vert_{C}}}_{\ensuremath{\mathcal{M}}}$.
\item If $\gamma(C) = \mathsf{X} \varphi$ then
$\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace},s) \in \tset{\varphi}_{\ensuremath{\mathcal{M}}}$.
\end{enumerate}
This motivates the following definition: for each locally available action profile
$\ensuremath{\zeta\xspace}\in \ensuremath{\mathsf{ActProf}\xspace}_s$
we pick a strategy profile $\underline{\ensuremath{\zeta\xspace}}$ defined
for all players in $\bigcup \ensuremath{\mathcal{F}\xspace}$, such that for all $C \subseteq \mathsf{fol}(\ensuremath{\zeta\xspace})$:
\[
\quad \underline{\ensuremath{\zeta\xspace}},\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace},s) \Vvdash (\gamma\vert_{\lfor})\vert_{C}.
\]
Now, we will build a strategy profile $\Omega$ using the strategy profile $\ensuremath{\Sigma\xspace}$ and all strategy profiles $\underline{\ensuremath{\zeta\xspace}}$ for each possible locally available action profile $\ensuremath{\zeta\xspace}$ at $s$:
Given a player $\ensuremath{\mathsf{a}\xspace} \in \bigcup \ensuremath{\mathcal{F}\xspace}$ and $w \in \ensuremath{\mathsf{S}\xspace}$, let $\Omega(\ensuremath{\mathsf{a}\xspace},w) = \ensuremath{\Sigma\xspace}(\ensuremath{\mathsf{a}\xspace},w)$ if $w = s$, and some arbitrary available move otherwise. For a history of the form $w_0\ensuremath{\zeta\xspace}_0...\ensuremath{\zeta\xspace}_n w_{n+1}$, if $w_0 \neq s$ then we can again define the move of player $\ensuremath{\mathsf{a}\xspace}$ arbitrarily.
Otherwise, we set
\[
\Omega(\ensuremath{\mathsf{a}\xspace},w_0\ensuremath{\zeta\xspace}_0 w_1....\ensuremath{\zeta\xspace}_n w_{n+1}) :=
\underline{\ensuremath{\zeta\xspace}_0}(\ensuremath{\mathsf{a}\xspace}, w_1 \ensuremath{\zeta\xspace}_1 ... \ensuremath{\zeta\xspace}_n w_{n + 1}).
\]
We will show that the strategy profile $\Omega$ witnesses $\brak{\gamma}$ at $s$. Let $C \in \ensuremath{\mathcal{F}\xspace}$, and let $\path \in \paths(s,\Omega,C)$. We need to show that $\path \models \gamma(C)$.
The case where $\gamma(C) = \mathsf{X} \varphi$ is immediate, by definition of $\diffof{\gamma}$ and of $\Omega$ at $s$.
We focus on the case where $\gamma(C)$ is of the form $\alpha \mathsf{U} \beta$. We assume that $\path$ is the path generated by a play in $\ensuremath{\mathsf{Plays}\xspace}(s,\Omega,C)$ of the form:
$$w_0 \ensuremath{\zeta\xspace}_0 w_1 \ensuremath{\zeta\xspace}_1 w_2 ...$$
so that $\path$ equals $w_0w_1w_2...$, $w_0 = s$ and $w_1 = \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}_0,s)$. But then $C \subseteq \mathsf{fol}(\ensuremath{\zeta\xspace}_0)$, and by construction of the strategy profile $\Omega$, the play $w_1 \ensuremath{\zeta\xspace}_1 w_2 \ensuremath{\zeta\xspace}_2 w_3...$ belongs to $\ensuremath{\mathsf{Plays}\xspace}(w_1,\underline{\ensuremath{\zeta\xspace}_0},C)$. So the path $w_1w_2w_3...$ satisfies $\alpha \mathsf{U} \beta$ since, by definition, $\underline{\ensuremath{\zeta\xspace}_0}, w_1 \Vvdash (\gamma\vert_{\lfor})\vert_{C}$. Since $\alpha \in \mathsf{UHolds}(\gamma)$, we have $\ensuremath{\mathcal{M}},s \ensuremath{\Vdash} \alpha$. Since $s = w_0$, it follows that $\path \models \alpha \mathsf{U} \beta$, as required.
Lastly, the case where $\gamma(C)$ is of the form $ \mathsf{G} \chi$ is analogous.
\end{proof}
}
Finally, we show how to extend these definitions to the logic $\Logicname{TLCGA}\xspace^+$. In this setting, goal assignments are of a more general kind as they map coalitions to conjunctions of $\atlx$-formulas, $\mathsf{U}$-formulas and/or $\mathsf{G}$-formulas, so we have to account for this slight complication. Given a goal assignment $\gamma$, and a coalition $C$ in its support, we can think of $\gamma(C)$ as the set of its conjuncts of the form $\atlx \alpha$, $\alpha \mathsf{U} \beta$ or $\mathsf{G} \chi$. So we abuse notation slightly and write $\theta \in \gamma(C)$ to say that $\theta$ has one of these three forms, and is a conjunct of $\gamma(C)$. With this notation in place, we extend the definition of the
nexttime-extension $\diffof{\gamma}$ of a goal assignment $\gamma$ as follows. The support of $\diffof{\gamma}$ is defined as before, and for each $C \in \sup{\diffof{\gamma}}$ we define
\[
\diffof{\gamma}(C) := \atlx \Big(\bigwedge \big\{\varphi \mid \mbox{there exists } C' \in \ensuremath{\mathcal{F}\xspace}, C' \subseteq C \mbox{ such that } \atlx\varphi \in \gamma(C') \big\}
\wedge \brak{(\gamma\vert_C)\vert_{\lfor}} \Big).
\]
As before we abbreviate the formula $\diffof{\gamma}[\bigcup \ensuremath{\mathcal{F}\xspace} \gass \mathsf{X} \phi]$ by $\gammaof{\phi}$.
Next we define the unfolding of a goal assignment:
Let $\gamma$ be a goal assignment, supported by $\ensuremath{\mathcal{F}\xspace}$. Given a coalition $C$ and a conjunct $\theta$ of $\gamma(C)$, we write $\gamma\setminus(C\gass\theta)$ for the goal assignment that is like $\gamma$, except:
$$(\gamma\setminus(C\gass \theta))(C) = \bigwedge \{\theta' \mid \theta' \in \gamma(C) \wedge \theta' \neq \theta\}$$
We now define:
\[\unf{\gamma} :=
\bigvee \mathsf{Finish}(\gamma) \vee \bigg(\bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge \brak{\diffof{\gamma}}\bigg),
\]
where:
\begin{itemize}
\item $\mathsf{Finish}(\gamma) : = \big\{\beta \wedge \brak{\gamma \setminus (C \gass \alpha \mathsf{U} \beta)} \mid
\alpha \mathsf{U} \beta \in \gamma(C),
\mbox{ for some } C \big\}$
\item $\mathsf{UHolds}(\gamma) := \big\{\alpha \mid
\alpha \mathsf{U} \beta \in \gamma(C),
\mbox{ for some } C,\beta \big\}$
\item $\mathsf{GHolds}(\gamma) : = \big\{\chi \mid
\mathsf{G} \chi \in \gamma(C),
\mbox{ for some } C \big\}$
\end{itemize}
The induction formula $\indf{\gamma}{\phi}$ is defined as before, by
\[
\indf{\gamma}{\phi} :=
\bigvee \mathsf{Finish}(\gamma) \vee \Big(\bigwedge \mathsf{UHolds}(\gamma) \wedge \bigwedge \mathsf{GHolds}(\gamma) \wedge
\brak{\gammaof{\phi}}\Big),
\]
where $\gammaof{\phi}$ is $\diffof{\gamma}[\bigcup \ensuremath{\mathcal{F}\xspace} \gass \mathsf{X} \phi]$. With these extended definitions, the proofs of Proposition \ref{prop:unfold} and Theorem \ref{p:fixpoint-property} go through as before.
\subsection{A $\mu$-calculus of goal assignments}
\label{subsec:mu-TLCGA}
The $\mu$-calculus extension of the language $\ensuremath{\mathcal{L}^\cga\xspace}$ of \Logicname{TLCGA}\xspace will be denoted by
$\ensuremath{\mathcal{L}^\cga\xspace}_\mu$, and the $\mu$-calculus extension of the nexttime fragment
$\ensuremath{\mathcal{L}^\xcga\xspace}$ -- by $\ensuremath{\mathcal{L}^\xcga\xspace}_{\mu}$.
Formally the language $\ensuremath{\mathcal{L}^\cga\xspace}_\mu$ is given by the following grammar:
$$\ensuremath{\mathsf{StateFor}\xspace}: \ \ \ \ \ \varphi := p \mid \top
\mid \neg \varphi
\mid (\varphi \wedge \varphi)
\mid (\varphi \lor \varphi)
\mid
\brak{\gamma} \mid \mu x.\varphi
$$
$$\ensuremath{\mathsf{PathFor}\xspace}: \ \ \ \ \ \theta := \mathsf{X} \varphi \mid \varphi \mathsf{U} \varphi \mid \mathsf{G} \varphi$$
Here, in $\mu x. \varphi$ the formula $\varphi$ is subject to the usual constraint that every occurrence of the variable $x$ in $\varphi$ is positive, in the sense that it is under the scope of an even number (possibly zero) of negations. We usually denote bound variables by $x,y,z,\ldots$ rather than $p,q,r,\ldots$, but formally we do not introduce a separate supply of fixpoint variables: in the formula $\mu x.\varphi$ the variable $x$ is simply a propositional variable.
We define the greatest fixpoint operator as usual:
\[\nu x.\varphi := \lnot \mu x. \lnot \varphi [\lnot x / x],\]
where $\varphi [\lnot x / x]$ is the result of uniform substitution of $\lnot x$ for $x$ in
$\varphi$.
A model for $\ensuremath{\mathcal{L}^\cga\xspace}_\mu$ is just like a model for $\ensuremath{\mathcal{L}^\cga\xspace}$, viz.\ a tuple
$\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$,
but now the valuation $V$ assigns values to the variable(s) $z$ used in formulae
$\mu z.\psi$. Now, for each $Z \subseteq \ensuremath{\mathsf{S}\xspace}$, we define the amended valuation $V^Z := V[z \mapsto Z]$, which is like $V$ except that it maps $z$ to $Z$.
We will denote
$\ensuremath{\mathcal{M}}^Z := (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V^Z)$ and for any formula $\psi (z)$ in which the variable $z$ may occur free, we will write
$\ensuremath{\mathcal{M}}, s \ensuremath{\Vdash} \psi (Z)$ to state that $\ensuremath{\mathcal{M}}^Z, s \ensuremath{\Vdash} \psi (z)$.
The semantics of $\ensuremath{\mathcal{L}^\cga\xspace}_\mu$ extends that of $\ensuremath{\mathcal{L}^\cga\xspace}$ with the additional clause that the extension $\tset{\mu z. \varphi(z)}$ of a least fixpoint-formula in a model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$ is given by:
\[
\tset{\mu z. \varphi(z)} : = \bigcap \big\{Z \subseteq \ensuremath{\mathsf{S}\xspace} \mid \tset{\varphi(Z)} \subseteq Z \big\},
\]
where, as expected:
$$\tset{\varphi(Z)} := \big\{w \in \ensuremath{\mathsf{S}\xspace} \mid \ensuremath{\mathcal{M}}^Z,w \ensuremath{\Vdash} \varphi(z) \big\}.$$
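On a finite model, this least fixpoint can equivalently be computed by Kleene iteration: apply the monotone operator induced by $\varphi(z)$ to the empty set repeatedly until the result stabilizes. The following minimal Python sketch illustrates the idea; the model, the operator \texttt{step}, and all names are purely illustrative, with \texttt{step} playing the role of $Z \mapsto \tset{\varphi(Z)}$ for a simple reachability body.

```python
def lfp(step):
    """Least fixpoint of a monotone set operator, computed by Kleene
    iteration from the empty set (valid on a finite powerset lattice)."""
    Z = frozenset()
    while True:
        Z_next = frozenset(step(Z))
        if Z_next == Z:
            return Z
        Z = Z_next

# Toy instance: mu z.(p or <>z), i.e. "p is reachable", on a 4-state chain.
states = {0, 1, 2, 3}
succ = {0: {1}, 1: {2}, 2: {3}, 3: set()}
p_states = {3}

def step(Z):
    # one application of the operator induced by the body: p or <>z
    return p_states | {s for s in states if succ[s] & Z}
```

On the chain above, iteration yields $\{3\}$, $\{2,3\}$, $\{1,2,3\}$, $\{0,1,2,3\}$ and then stabilizes, matching the intersection of all prefixed points.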
Given a model
$\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$,
let $f_\gamma$ be the monotone map on $\mathcal{P}(\ensuremath{\mathsf{S}\xspace})$ induced by
$\indf{\gamma}{z}$ in the usual way, i.e., for each $Z \subseteq \ensuremath{\mathsf{S}\xspace}$, $f_\gamma(Z)$ is the set of states satisfying $\indf{\gamma}{z}$ with respect to the amended valuation $V^Z := V[z \mapsto Z]$.
The proofs of the following two propositions are in the appendix.
\begin{proposition}[Fixpoint characterization of $\ensuremath{\mathsf{Type}\until}\xspace$ temporal goal assignments]
\label{prop:typeone}
Suppose that $\gamma$ is a long-term temporal goal assignment in $\ensuremath{\mathsf{Type}\until}\xspace$, and let $z$ be a fresh variable not occurring in $\brak{\gamma}$.
Then $\brak{\gamma} \equiv \mu z. \indf{\gamma}{z}$.
\end{proposition}
\begin{proposition}[Fixpoint characterization of $\ensuremath{\mathsf{Type}\always}\xspace$ temporal goal assignments]
\label{prop:typetwo}
Suppose that $\gamma$ is a long-term temporal goal assignment in $\ensuremath{\mathsf{Type}\always}\xspace$.
Then $\brak{\gamma} \equiv \nu z. \indf{\gamma}{z}$.
\end{proposition}
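For instance, let $\gamma$ map $\{a\}$ to $\alpha \mathsf{U} \beta$. Then $\mathsf{Finish}(\gamma)$ contributes the disjunct $\beta$ (the conjunct $\brak{\gamma \setminus \{a\}}$ being trivial), $\mathsf{UHolds}(\gamma) = \{\alpha\}$, and $\gammaof{z}$ maps $\{a\}$ to $\mathsf{X} z$, so Proposition \ref{prop:typeone} yields
\[
\brak{\{a\} \gass \alpha \mathsf{U} \beta} \equiv \mu z.\big(\beta \vee (\alpha \wedge \brak{\{a\} \gass \mathsf{X} z})\big),
\]
the familiar $\mu$-calculus rendering of the until goal.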
Recall that (memory-based) strategies are defined here in terms of \emph{plays}, not just paths
as for \Logicname{ATL^*}\xspace in \cite{AHK-02} (cf.\ also \cite{BGJ15} or \cite{TLCSbook}), and we showed in Example \ref{exampleB} that the choice between the two versions essentially affects the semantics. In fact, the model in that example also shows that $\nu z. \indf{\gamma}{z} \to \brak{\gamma}$ is not valid in the semantics with path-based strategies.
Using the fixpoint characterizations of long-term temporal goal assignments we have established here, we can define an explicit translation $t : \ensuremath{\mathcal{L}^\cga\xspace} \to \ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, preserving the semantics of each $\ensuremath{\mathcal{L}^\cga\xspace}$-formula. To make this precise, the translation has to be defined by induction on a certain wellfounded order $\prec$ over $\ensuremath{\mathcal{L}^\cga\xspace}$-formulas, defined as the smallest transitive relation closed under the following rules:
\begin{itemize}
\item If $\varphi$ is a proper subformula of $\psi$ then $\varphi \prec \psi$.
\item If $\gamma$ is a goal assignment which is neither nexttime nor long-term temporal, i.e. the support of $\gamma$ contains both some coalitions mapped to nexttime formulas and some coalitions mapped to long-term temporal goals, then $\unf{\gamma} \prec \brak{\gamma}$.
\item If $\gamma$ is a $\ensuremath{\mathsf{Type}\until}\xspace$ or $\ensuremath{\mathsf{Type}\always}\xspace$-goal assignment, and $z$ is a propositional variable not occurring in $\brak{\gamma}$, then $\indf{\gamma}{z} \prec \brak{\gamma}$.
\end{itemize}
To see that this order is wellfounded, consider for example the clause where $\gamma$ is in $\ensuremath{\mathsf{Type}\until}\xspace$. Then the formula $\indf{\gamma}{z}$ can be viewed as being built up using boolean connectives from proper subformulas of $\brak{\gamma}$ and formulas of the form $\brak{\gamma'}$, where $\gamma'$ is either $\gammaof{z}$ or $\gamma\setminus C$ for some coalition $C$ in the support of $\gamma$. Note that the goal assignment $\gamma \setminus C$ has a smaller support than $\gamma$. Furthermore, $\gammaof{z}$ is a nexttime goal assignment: the coalition $\bigcup \mathcal{F}$ is mapped to $\atlx z$ (where $\mathcal{F}$ is the support of $\gamma$), and each coalition $C \neq \bigcup \mathcal{F}$ in the support of $\gammaof{z}$ is mapped to $\atlx \brak{\gamma\vert_C}$. But each such goal assignment $\gamma \vert_C$ has smaller support than $\gamma$, since if $C$ is a proper subset of $\bigcup \mathcal{F}$ then there is some $C' \in \mathcal{F}$ which is not contained in $C$, so in $\gamma\vert_C$ the coalition $C'$ is assigned the trivial goal $\atlx \top$.
With the wellfounded order $\prec$ in place, the translation can be defined by $t(\brak{\gamma}) = \mu z. t(\indf{\gamma}{z})$ if $\gamma$ is a $\ensuremath{\mathsf{Type}\until}\xspace$ goal assignment, $t(\brak{\gamma}) = \nu z. t(\indf{\gamma}{z})$ if $\gamma$ is a $\ensuremath{\mathsf{Type}\always}\xspace$ goal assignment, and $t(\brak{\gamma}) = t(\unf{\gamma})$ if $\gamma$ contains both $\mathsf{U}$-formulas and $\mathsf{G}$-formulas as goals. The induction steps for booleans and nexttime goal assignments are handled in the standard manner, simply letting the translation commute with these connectives.
As noted at the beginning of the section, it is a relatively routine task to extend our translation to cover the richer language $\mathcal{L}^{\Logicname{TLCGA}\xspace^+}$ underlying the logic $\Logicname{TLCGA}\xspace^+$. Again, the translation is by induction on a wellfounded ordering over formulas, defined as above but with the extended definitions of $\unf{\gamma}$ and $\indf{\gamma}{z}$. The proofs of Proposition \ref{prop:typeone} and \ref{prop:typetwo} go through essentially as before.
\subsection{A note on coalgebras}
One reason why the translation of $\ensuremath{\mathcal{L}^\cga\xspace}$ into $\ensuremath{\mathcal{L}^\xcga\xspace}$ we have presented is useful is because it establishes a connection with \emph{coalgebraic modal logics}. We note that the nexttime fragment $\ensuremath{\mathcal{L}^\xcga\xspace}$ of $\ensuremath{\mathcal{L}^\cga\xspace}$ is, in fact, an instance of the general framework of coalgebraic modal logics, making the language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ a \emph{coalgebraic fixpoint logic}, as studied in \cite{venema2006automata,cirstea2011exptime,fontaine2010automata}. This connection helps to clarify the place of the logic \Logicname{TLCGA}\xspace in the landscape of modal fixpoint logics for various kinds of state-based evolving systems. The theory of \emph{universal coalgebra} appears in computer science as a generic framework for modelling a wide range of state-based evolving systems in a uniform manner \cite{rutten2000universal}. It is formulated using the language of basic category theory (functors and natural transformations), see \cite{maclane1971categories} for a standard reference. The key idea of universal coalgebra is to pack all the information about the type of transitions that a system can make (deterministic, non-deterministic, probabilistic etc.) into a functor on the category of sets, which then can be considered as a variable parameter featuring in abstract definitions and results. A coalgebra for a functor $\mathsf{F}$ is then just a set $X$ together with a map $f : X \to \mathsf{F} X$, which intuitively represents the evolution of the state-based system. Concurrent game models are in fact coalgebras for a certain functor (together with a valuation of propositional variables), as has been observed several times, e.g. \cite{schroder2009pspace}. 
Furthermore, it can be checked that the semantics of modalities $\brak{\gamma}$ in which each goal formula is of the form $\atlx \theta$ for some $\theta$ can be phrased in terms of \emph{predicate liftings} for this functor, which is the currently most usual way of interpreting modalities in coalgebras. We omit the details here.
The coalgebraic representation of the logic $\ensuremath{\mathcal{L}^\xcga\xspace}_{\mu}$, together with the translation from \Logicname{TLCGA}\xspace into $\ensuremath{\mathcal{L}^\xcga\xspace}_{\mu}$, gives us access to a wealth of known general results on coalgebraic fixpoint logics. For example, we shall use it to derive decidability and the finite model property for the logic $\Logicname{TLCGA}\xspace$, as well as a complexity bound on the satisfiability problem for the language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$. However, we emphasize that some caution is required here. Universal coalgebra and coalgebraic logic are valuable frameworks, unifying a large class of systems and associated logics, just like universal algebra provides a common language that puts many different algebraic structures under one roof. But, of course, universal algebra does not tell us everything we want to know about specific classes of algebras, like groups or Heyting algebras. Generic results are helpful, but a detailed study of concrete special cases is usually required. The same is true here. In particular, our main technical result in this paper on completeness for an axiomatization of \Logicname{TLCGA}\xspace makes heavy use of ideas from the literature on coalgebraic logic; notably, the notion of ``one-step completeness'' is essential \cite{schroder2009pspace,cirstea2011exptime}. But the proof of one-step completeness is not a trivial consequence of generic results from coalgebra: it requires a careful study of the semantics of nexttime goal assignments.
Once we have one-step completeness in place, the next step will be to handle least and greatest fixpoints. We have shown that \Logicname{TLCGA}\xspace can be translated into
$\ensuremath{\mathcal{L}^\xcga\xspace}_{\mu}$ using a single recursion variable, in effect embedding \Logicname{TLCGA}\xspace as a fragment of a ``flat coalgebraic fixpoint logic'' in the sense of Schr\"{o}der and Venema \cite{schroder2010flat}, who prove a generic completeness result for such logics. It is possible that we could obtain completeness for our axiom system by `transferring' Schr\"{o}der's and Venema's completeness result via our translation, but we have chosen to present a direct proof instead since we believe this will be more instructive. That said, we do consider our approach as being very much coalgebraic in spirit, and the connection with coalgebraic fixpoint logic is conceptually important. In particular, the idea that one-step completeness of a (multi-)modal logic can be ``lifted'' to give a complete axiomatization of a fixpoint extension \cite{GorDrim06,schroder2010flat,cirstea2011exptime,enqvist2019completeness} is at the heart of our proof.
\section{Bisimulations and bisimulation invariance for \Logicname{TLCGA}\xspace}
\label{subsec:Bisimulations}
As noted earlier, the logic \Logicname{GPCL}\xspace introduced in \cite{GorankoEnqvist18} is essentially the nexttime fragment $\ensuremath{\mathcal{L}^\xcga\xspace}$ of \Logicname{TLCGA}\xspace. Therefore, the notion of \Logicname{GPCL}\xspace-bisimulation (ibid.) also applies to \Logicname{TLCGA}\xspace. For the reader's convenience, we introduce it again here, now called
\textbf{\Logicname{TLCGA}\xspace-bisimulation}, and extend the bisimulation invariance result from \cite{GorankoEnqvist18} to the full logic \Logicname{TLCGA}\xspace.
This notion of bisimulation corresponds to the play-based semantics, which is the one of importance for us. A similar notion can be defined for the path-based semantics, and bisimulation invariance of \Logicname{TLCGA}\xspace formulae with respect to it can be proved likewise; we omit the details.
We only define \Logicname{TLCGA}\xspace-bisimulation within a single concurrent game model, and generalise to bisimulations between game models via disjoint unions.
\begin{definition}[\Logicname{TLCGA}\xspace-bisimulation]
\label{def:GPCLbisimulation}
Let
\[\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)\]
be a game model for the set of agents $\Agt$. A binary relation $\beta \subseteq \ensuremath{\mathsf{S}\xspace}^{2}$ is a \textbf{\Logicname{TLCGA}\xspace-bisimulation in \ensuremath{\mathcal{M}}}
if it satisfies the following conditions for every pair of states $(s_1, s_2)$
such that $s_1 \beta s_2$:
\begin{description}
\itemsep = 2pt
\item[Atom equivalence:] For every $p \in \Prop$: $s_1 \in V(p)$ iff
$s_2 \in V(p)$.
\item[Forth:] For any action profile $\ensuremath{\zeta\xspace}^{1}$ of $\Agt$ at $s_1$ there is
an action profile $\ensuremath{\zeta\xspace}^2$ of $\Agt$ at $s_2$ such that:
\begin{description}
\itemsep = 1pt
\item[LocalBack:] For every coalition $C$ and every $u_{2} \in \ensuremath{\mathsf{Out}\xspace}[s_{2},\ensuremath{\zeta\xspace}^{2}\vert_C]$, there is some $u_1 \in \ensuremath{\mathsf{Out}\xspace}[s_1,\ensuremath{\zeta\xspace}^1\vert_C]$ such that $u_1 \beta u_2$.
\end{description}
\item[Back:] For any joint action $\ensuremath{\zeta\xspace}^{2}$ of $\Agt$ at $s_2$ there is
a joint action $\ensuremath{\zeta\xspace}^1$ of $\Agt$ at $s_1$ such that:
\begin{description}
\itemsep = 1pt
\item[LocalForth:] For every coalition $C$ and every $u_{1} \in \ensuremath{\mathsf{Out}\xspace}[s_{1},\ensuremath{\zeta\xspace}^{1}\vert_C]$, there is some $u_2 \in \ensuremath{\mathsf{Out}\xspace}[s_2,\ensuremath{\zeta\xspace}^2\vert_C]$ such that $u_1 \beta u_2$.
\end{description}
\end{description}
States $s_1, s_2 \in \ensuremath{\mathcal{M}}$ are \textbf{\Logicname{TLCGA}\xspace-bisimulation equivalent}, or just \textbf{\Logicname{TLCGA}\xspace-bisimilar},
if there is a bisimulation $\beta$ in $\ensuremath{\mathcal{M}}$ such that $s_1 \beta s_2$.
\end{definition}
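To make the quantifier structure of the conditions above concrete, here is a small Python sketch (illustrative only; the model encoding and all names are our own) that computes the largest \Logicname{TLCGA}\xspace-bisimulation on a finite concurrent game model, starting from atom equivalence and repeatedly deleting pairs that violate the \textbf{Forth} or \textbf{Back} conditions.

```python
from itertools import combinations

def coalitions(n_agents):
    # all coalitions: subsets of {0, ..., n_agents-1}, as tuples of indices
    return [c for r in range(n_agents + 1)
            for c in combinations(range(n_agents), r)]

def outcomes(model, s, zeta, C):
    # Out[s, zeta|C]: outcomes of all profiles at s agreeing with zeta on C
    return {model['out'][(s, z)] for z in model['profiles'][s]
            if all(z[a] == zeta[a] for a in C)}

def local_back(model, beta, s1, s2, z1, z2, n):
    # every C-outcome of z2 at s2 is beta-matched by a C-outcome of z1 at s1
    return all(any((u1, u2) in beta for u1 in outcomes(model, s1, z1, C))
               for C in coalitions(n)
               for u2 in outcomes(model, s2, z2, C))

def local_forth(model, beta, s1, s2, z1, z2, n):
    # every C-outcome of z1 at s1 is beta-matched by a C-outcome of z2 at s2
    return all(any((u1, u2) in beta for u2 in outcomes(model, s2, z2, C))
               for C in coalitions(n)
               for u1 in outcomes(model, s1, z1, C))

def largest_bisimulation(model, n_agents):
    # start from atom equivalence, delete pairs violating Forth or Back
    S, V = model['states'], model['labels']
    beta = {(s1, s2) for s1 in S for s2 in S if V[s1] == V[s2]}
    changed = True
    while changed:
        changed = False
        for (s1, s2) in sorted(beta):
            forth = all(any(local_back(model, beta, s1, s2, z1, z2, n_agents)
                            for z2 in model['profiles'][s2])
                        for z1 in model['profiles'][s1])
            back = all(any(local_forth(model, beta, s1, s2, z1, z2, n_agents)
                           for z1 in model['profiles'][s1])
                       for z2 in model['profiles'][s2])
            if not (forth and back):
                beta.discard((s1, s2))
                changed = True
    return beta
```

This naive refinement enumerates all coalitions and is therefore exponential in the number of agents; it is meant only to make the nested quantifiers of the definition explicit, not to be efficient.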
The proof of the following theorem is relatively routine, and can be found in the appendix.
\begin{theorem}[\Logicname{TLCGA}\xspace-bisimulation invariance]
\label{thm:bisimulation invariance}
Let $\beta$ be a \Logicname{TLCGA}\xspace-bisimulation in a game model $\ensuremath{\mathcal{M}}$. Then for every \Logicname{TLCGA}\xspace-formula $\varphi$ and every pair $s_1, s_2 \in \ensuremath{\mathcal{M}}$ such that $s_1 \beta s_2$:
\[
\ensuremath{\mathcal{M}}, s_{1} \models \varphi \ \mbox{iff} \ \ensuremath{\mathcal{M}}, s_{2} \models \varphi
\]
\end{theorem}
In fact, the proof of Theorem \ref{thm:bisimulation invariance} essentially amounts to a proof of bisimulation invariance for the whole language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, and therefore also for the fragment $\Logicname{TLCGA}\xspace^+$.
\begin{theorem}
\label{thm:bisimulation invariance+}
Every formula of $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is bisimulation invariant.
\end{theorem}
Furthermore, we also have the following result from \cite{GorankoEnqvist18}, showing that \Logicname{TLCGA}\xspace already suffices to capture bisimulation invariance in finite models.
\begin{proposition}[Hennessy-Milner property, cf
\cite{GorankoEnqvist18}, Proposition 4.6]
Let $\beta$ be a \Logicname{TLCGA}\xspace-bisimulation in a finite game model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$.
Then for any pair $s_1, s_2 \in \ensuremath{\mathsf{S}\xspace}$, $s_1 \beta s_2$ holds iff $s_1$ and $s_2$ satisfy the same $\Logicname{TLCGA}\xspace$-formulae.
\end{proposition}
\begin{proof}
For the non-trivial direction we will use only formulae from the fragment
$\ensuremath{\mathcal{L}^\xcga\xspace}$.
Since $\ensuremath{\mathcal{M}}$ is finite, we can define, by a standard construction, a `characteristic formula' $\ensuremath{\mathsf{char}}\xspace(s)$ for each state $s$ in $\ensuremath{\mathcal{M}}$, such that $s_1,s_2$ are $\ensuremath{\mathcal{L}^\xcga\xspace}$-equivalent if and only if $\ensuremath{\mathsf{char}}\xspace(s_1) = \ensuremath{\mathsf{char}}\xspace(s_2)$, and that $\ensuremath{\mathsf{char}}\xspace(s_1) \land \ensuremath{\mathsf{char}}\xspace(s_2) \equiv \bot$ whenever $s_1,s_2$ are not $\ensuremath{\mathcal{L}^\xcga\xspace}$-equivalent.
For a set of states $Z$, let $\ensuremath{\mathsf{char}}\xspace[Z] = \bigvee \{\ensuremath{\mathsf{char}}\xspace(v) \mid v \in Z\}$. Our goal is to show that the relation of \Logicname{TLCGA}\xspace-equivalence is itself a \Logicname{TLCGA}\xspace-bisimulation, and the key observation is that each state $s $ satisfies the $\ensuremath{\mathcal{L}^\xcga\xspace}$-formula:
\begin{eqnarray*} \bigwedge_{\ensuremath{\zeta\xspace} \in \ensuremath{\mathsf{ActProf}\xspace}_s} \cgoal{C_1 \gass \atlx \ensuremath{\mathsf{char}}\xspace[\ensuremath{\mathsf{Out}\xspace}[s,\ensuremath{\zeta\xspace}\vert_{C_1}]],...,C_k \gass \atlx \ensuremath{\mathsf{char}}\xspace[\ensuremath{\mathsf{Out}\xspace}[s,\ensuremath{\zeta\xspace}\vert_{C_k}]] }
\end{eqnarray*}
where we list the set $\mathcal{P} (\Agt)$ of all possible coalitions as $C_1,...,C_k$.
\end{proof}
\section{Axiomatization and one-step completeness of \Logicname{TLCGA}\xspace}
\label{sec:axiomatization}
In this section we focus exclusively on \Logicname{TLCGA}\xspace, leaving for future work the extension of the axiomatic system presented here to $\Logicname{TLCGA}\xspace^+$, as well as the respective axiomatizations for the path-based semantics (which may turn out to be more problematic).
\subsection{Axiomatic system for \Logicname{TLCGA}\xspace}
\label{subsec:Axiomatization}
\begin{definition}
Let $\ensuremath{\mathcal{F}\xspace}$ be a set of coalitions.
A \textbf{voting profile} for $\ensuremath{\mathcal{F}\xspace}$ is a mapping $f$
assigning to each $\ensuremath{\mathsf{a}\xspace}_i \in \Agt$ a goal assignment $f(\ensuremath{\mathsf{a}\xspace}_i)$.
If $f(\ensuremath{\mathsf{a}\xspace}_i)(C)$ is a nexttime formula for each $i$ and $C \in \ensuremath{\mathcal{F}\xspace}$, we say that $f$ is a \textbf{one-step voting profile} for $\ensuremath{\mathcal{F}\xspace}$.
\end{definition}
The notion of merging a voting profile, defined next, will be used in several proofs below, together with some derivable formulae involving it, listed later in this section.
\begin{definition}
Let $f$ be a voting profile. We define the goal assignment $\mathsf{merge}(f)$ as follows:
\begin{itemize}
\itemsep = 2pt
\item
$\mathsf{merge}(f) (C) := \theta$,
if
{$C \neq \emptyset$ and $f(\ensuremath{\mathsf{a}\xspace}_i)(C)= \theta$} for each $\ensuremath{\mathsf{a}\xspace}_i\in C$;
\item $\mathsf{merge}(f)(C) := \mathsf{X} \top$,
{if $C = \emptyset$ or} the above holds for no $ \theta$.
\end{itemize}
\end{definition}
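For a concrete illustration (with hypothetical atomic goals $p,q,r$): let $\Agt = \{\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\mathsf{a}\xspace}_2\}$ and let $f$ be a voting profile with $f(\ensuremath{\mathsf{a}\xspace}_1)(\Agt) = f(\ensuremath{\mathsf{a}\xspace}_2)(\Agt) = \mathsf{X} p$, $f(\ensuremath{\mathsf{a}\xspace}_1)(\{\ensuremath{\mathsf{a}\xspace}_1\}) = \mathsf{X} q$, and $f(\ensuremath{\mathsf{a}\xspace}_2)(\{\ensuremath{\mathsf{a}\xspace}_2\}) = \mathsf{X} r$. Then
\[
\mathsf{merge}(f)(\Agt) = \mathsf{X} p, \quad
\mathsf{merge}(f)(\{\ensuremath{\mathsf{a}\xspace}_1\}) = \mathsf{X} q, \quad
\mathsf{merge}(f)(\{\ensuremath{\mathsf{a}\xspace}_2\}) = \mathsf{X} r, \quad
\mathsf{merge}(f)(\emptyset) = \mathsf{X} \top,
\]
since all members of $\Agt$ agree on the goal for $\Agt$, the single member of each singleton coalition trivially agrees with itself, and the second clause of the definition applies to $\emptyset$.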
Our axioms are as follows (recall notation on goal assignments from Section \ref{subsec:CGAsyntax}).
\subsubsection{I. General axiom schemes for goal assignments}
\begin{description}
\itemsep = 2pt
\item[(Triv)] $\brak{\gamma^\top}$ \ \ (Recall that $\gamma^\top$ is the trivial goal assignment, mapping each coalition to $\mathsf{X} \top$)
\item[(Safe)] $\neg \brak{\Agt \gass \mathsf{X} \bot}$
\item[(Merge)] $\brak{C_1 \gass \theta_1} \wedge ... \wedge \brak{C_n \gass \theta_n} \to \brak{C_1 \gass \theta_1,...,C_n \gass \theta_n}$, \ where $C_1,...,C_n$ are pairwise disjoint.
This axiom generalises the Superadditivity axiom of Coalition Logic. The idea is simple: if the coalitions $C_1,...,C_n$ are pairwise disjoint, then they can join their collective strategies for their respective coalitional goals into one strategy profile that ensures achievement of all these collective goals.
\item[(GrandCoalition)]
$\brak{\gamma} \rightarrow (\brak{\gamma[\Agt \gass \mathsf{X}(\varphi \wedge \psi)]} \vee \brak{\gamma[\Agt \gass \mathsf{X} (\varphi \wedge \neg\psi)]} )$, \
where $\gamma(\Agt) = \mathsf{X} \varphi$.
Any strategy profile generates a unique successor state, on which any state formula $\psi$ is either true or its negation is true, so either $\psi$ or $\lnot \psi$ can be added to the nexttime goal of the grand coalition $\Agt$.
\item[(Case)] $\brak{\gamma} \rightarrow (\brak{\gamma[C \gass \mathsf{X} (\varphi \wedge \psi)]} \vee \brak{\gamma\vert_C[\Agt \gass \mathsf{X} \neg\psi]})$, where $\gamma(C) = \mathsf{X} \varphi$.
For any coalition $C$, state formula $\psi$, and a strategy profile $\ensuremath{\Sigma\xspace}$, either its projection
$\ensuremath{\Sigma\xspace}_C$ to $C$ ensures the truth of $\psi$ in all successor states enabled by $\ensuremath{\Sigma\xspace}_C$ -- in which case $\psi$ can be added to the nexttime goal of $C$ enforced by $\ensuremath{\Sigma\xspace}$ -- or else $\lnot \psi$ is true in some of these successor states, in which case it can be added to $\gamma\vert_C$ as the nexttime goal of the grand coalition $\Agt$ enforced by $\ensuremath{\Sigma\xspace}$.
\item[(Con)] $\cgoal{\gamma} \to \cgoal{\gamma[C \gass \mathsf{X} (\varphi \wedge \psi)]}$ where $\gamma(C) = \mathsf{X} \varphi$ and $\gamma(C') = \mathsf{X} \psi$ for some $C' \subseteq C$.
Given any coalition $C$ and sub-coalition $C'$, the nexttime goal of $C'$ can be added to the nexttime goal of $C$, in the sense that if there is a strategy profile $\ensuremath{\Sigma\xspace}$ ensuring that $C$ and $C'$ can force their respective nexttime goals $\mathsf{X} \varphi$ and $\mathsf{X} \psi$, then $\ensuremath{\Sigma\xspace}$ also ensures that $C$ can force the conjunction of these goals.
\end{description}
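For a concrete instance of (Merge), take the disjoint singleton coalitions $\{\ensuremath{\mathsf{a}\xspace}_1\}$ and $\{\ensuremath{\mathsf{a}\xspace}_2\}$ with respective nexttime goals $\mathsf{X} p$ and $\mathsf{X} q$ (for hypothetical atoms $p,q$); the axiom then reads
\[
\brak{\{\ensuremath{\mathsf{a}\xspace}_1\} \gass \mathsf{X} p} \wedge \brak{\{\ensuremath{\mathsf{a}\xspace}_2\} \gass \mathsf{X} q}
\to \brak{\{\ensuremath{\mathsf{a}\xspace}_1\} \gass \mathsf{X} p, \{\ensuremath{\mathsf{a}\xspace}_2\} \gass \mathsf{X} q},
\]
saying that the two agents can combine their individual strategies into one strategy profile ensuring both goals simultaneously.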
\subsubsection{II. General rules of inference:}
~
\textbf{Modus Ponens}
and
\textbf{Goal Monotonicity (G-Mon)}:
\[\frac{\phi \to \psi}{\cgoal{\gamma[C\! \gass \atlx \phi]} \to \cgoal{\gamma[C\! \gass \atlx \psi]}}\]
The meaning of the rule is clear: if $\phi$ implies $\psi$, then any coalition $C$ that can ensure the nexttime goal $\phi$ within the context of some strategic goal assignment can also ensure $\psi$ within the same context.
\subsubsection{III. Axioms and rules for the long-term goal assignments}
\label{sec:TemporalAxioms}
~
The axioms and rules for the goal assignments of Types 1 and 2, involving long-term temporal operators, are given in Figure \ref{fig:rules}. They are adapted from the respective axioms and rules for least and greatest fixed points in the modal mu-calculus.
In the axiom \textsf{Fix}, $\gamma$ is any goal assignment. In the rule \textsf{R-Ind} it is a long-term temporal assignment of type $\mathsf{U}$, and in \textsf{R-CoInd} it is a long-term temporal assignment of type $\mathsf{G}$.
\begin{figure}[htb]
\fbox{
\begin{minipage}[t]{.9\textwidth}
\begin{prooftree}
\AxiomC{\textsf{Fix:} \; $\unf{\gamma} \leftrightarrow \brak{\gamma}$ }
\end{prooftree}
\begin{prooftree}
\AxiomC{$\indf{\gamma}{\phi} \to \phi$ }
\LeftLabel{\textsf{R-Ind:}}
\RightLabel{\;\;($\gamma \in \ensuremath{\mathsf{Type}\until}\xspace$)}
\UnaryInfC{$\brak{\gamma} \rightarrow \phi $}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\phi \to \indf{\gamma}{\phi} $ }
\LeftLabel{\textsf{R-CoInd:}}
\RightLabel{\;\;($\gamma \in \ensuremath{\mathsf{Type}\always}\xspace$)}
\UnaryInfC{$\phi \to \brak{\gamma} $}
\end{prooftree}
\end{minipage}
}
\caption{Fixpoint axiom and induction rules}
\label{fig:rules}
\end{figure}
We denote the axiomatic system above by $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$ and will denote derivability in it by $\textsf{Ax}_{\Logicname{TLCGA}\xspace} \vdash$, but will often write just $\vdash$.
Here are some important validities that are derivable in $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$, some of which will be used in the proofs that follow:
\begin{description}
\itemsep = 2pt
\item[\textsf{Ind}] $ \brak{\gamma} \ifff \indf{\gamma}{\brak{\gamma}}$ \ for every long-term temporal goal assignment $\gamma$.
(Immediately from (Fix), due to Proposition \ref{prop:unfold}).
\item[(Weakening)] \
\label{GPCL-G0}
$\cgoal{\gamma} \to
\cgoal{C\gass \gamma(C)}$, \
for any $C \subseteq \Agt$. \ \
(Using (Triv) and (G-Mon).)
\item[$\Agt$-Maximality] \
\label{GPCL-G1}
$\cgoal{\emptyset \gass\atlx\phi}
\lor
\cgoal{\Agt \gass\atlx\neg\phi}$. \ (Using (Triv) and (Case).)
\item[(Superadditivity)] \
\label{GPCL-G2}
$\cgoal{C_1\gass\atlx\phi_1} \land \cgoal{C_2\gass\atlx\phi_2}
\to \cgoal{C_1 \cup C_2\, \gass \atlx(\phi_1 \land \phi_2); \
C_1 \gass \atlx\phi_1; \ C_2 \gass \atlx\phi_2 }$, \
if $C_1 \cap C_2 = \emptyset$.
This subsumes the Superadditivity axiom for Coalition Logic CL. It is derivable from (Merge) by applying (Con) twice to add $\atlx(\phi_1 \land \phi_2)$ as a goal for $C_1 \cup C_2$.
\item[(Merge')] $\bigwedge_{a_i \in \Agt} \cgoal{f(\ensuremath{\mathsf{a}\xspace}_i)} \rightarrow \cgoal{\mathsf{merge}(f)}$, where $f$ is any
voting profile.
This is an essentially equivalent formulation of (Merge). Indeed, (Merge) is a particular case of (Merge'), whereas (Merge') is derivable from (Merge) by first using (Weakening) to detach each $\brak{C_j \gass \theta_j}$ from every $f(\ensuremath{\mathsf{a}\xspace}_i)$, for $\ensuremath{\mathsf{a}\xspace}_i \in C_j$,
{if $C_j \neq \emptyset$ and} $f(\ensuremath{\mathsf{a}\xspace}_i)(C_j) = \theta_j$ for all such $\ensuremath{\mathsf{a}\xspace}_i$.
\item[\textsf{Fix}($\mathsf{G}$)]
\label{PostFP(G)}
\
$ \brak{C \gass \mathsf{G} \chi} \to
\chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}}$. \\
This is a special case of $\mathsf{Fix}$.
\item[\textsf{CoInd}($\mathsf{G}$)]
\label{CoInd(G)}
If
$\vdash \phi \to \chi \land \brak{C \gass \atlx \phi}$
then $\vdash \phi \to \brak{C \gass \mathsf{G} \chi}$. \\
This is a special case of the rule $\mathsf{CoInd}$.
In particular,
by using \textsf{Fix}($\mathsf{G}$) and applying \textsf{G-Mon} we obtain
$\vdash (\chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}}) \to \chi \land \brak{C \gass \atlx (\chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}})}$.
Now, by applying \textsf{CoInd}($\mathsf{G}$)
for $\phi = \chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}}$, we derive
%
\[
\vdash \chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}} \to
\brak{C \gass \mathsf{G} \chi}.
\]
%
Thus, we have derived the fixpoint equivalence for $\mathsf{G}$:
\item[\textsf{FP}($\mathsf{G}$)]
\label{FP(G)}
$ \brak{C \gass \mathsf{G} \chi} \ifff
\chi \land \brak{C \gass \atlx \brak{C \gass \mathsf{G} \chi}}$.
\item[\textsf{PreFP($\mathsf{U}$)}]
\label{PreFP(U)}
$ \beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}) \to
\brak{C \gass \alpha \mathsf{U} \beta}$. \\
This is a special case of $\mathsf{Fix}$.
\item[\textsf{Ind}($\mathsf{U}$)]
\label{Ind(U)}
If
$\vdash \beta \lor (\alpha \land \brak{C \gass \atlx \phi}) \to \phi$
then $\vdash \brak{C \gass \alpha \mathsf{U} \beta} \to \phi$. \\
This is a special case of the rule $\mathsf{Ind}$.
In particular,
by applying the rule \textsf{G-Mon} to \textsf{PreFP($\mathsf{U}$)}
we derive
\[\brak{C \gass \atlx (\beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}))} \to \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}.\]
Then, by simple propositional inference we derive
$(\beta \lor (\alpha \land \brak{C \gass \atlx (\beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}))})) \to
(\beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}))$.
Now, by applying \textsf{Ind}($\mathsf{U}$) for
$\phi = \beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}})$,
we derive
\[\vdash \brak{C \gass \alpha \mathsf{U} \beta} \to
\beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}}).
\]
Thus, we have derived the fixpoint equivalence for $\mathsf{U}$:
\item[FP($\mathsf{U}$)]
\label{FP(U)}
$\brak{C \gass \alpha \mathsf{U} \beta} \ifff
\beta \lor (\alpha \land \brak{C \gass \atlx \brak{C \gass \alpha \mathsf{U} \beta}})$
\end{description}
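As a simple instance of FP($\mathsf{U}$), taking $\alpha = \top$ (and simplifying $\top \land \varphi$ to $\varphi$) yields the familiar unfolding of the `eventually' goal:
\[
\brak{C \gass \top \mathsf{U} \beta} \ifff
\beta \lor \brak{C \gass \atlx \brak{C \gass \top \mathsf{U} \beta}}.
\]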
\begin{proposition}[Soundness of $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$]
\label{prop:soundness} The axiomatic system $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$ is sound: every derivable formula in $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$ is valid.
\end{proposition}
\begin{proof}
We show that every axiom is valid and all rules of inference preserve validity.
Checking validity of the general axiom schemes is fairly routine. Most of these, as well as the preservation of validity by the general rules II, follow from the soundness of the logic \Logicname{GPCL}\xspace in \cite{GorankoEnqvist18}.
The validity of the axiom scheme
$\mathsf{Fix}$
follows from
Theorem
\ref{p:fixpoint-property}.
The preservation of validity by the special rule $\mathsf{R-Ind}$ can be shown as follows. Suppose $\indf{\gamma}{\phi} \to \phi$ is valid.
Take any concurrent game model $\ensuremath{\mathcal{M}}$.
Then $\ensuremath{\mathcal{M}} \ensuremath{\Vdash} \indf{\gamma}{\phi} \to \phi$, hence
$\tset{\phi}_{\ensuremath{\mathcal{M}}}$ is a pre-fixed point of the set operator induced by the formula
$\indf{\gamma}{z}$ in $\ensuremath{\mathcal{M}}$. By Proposition \ref{prop:typeone},
$\brak{\gamma}$ is semantically equivalent to the least fixed point $\mu z. \indf{\gamma}{z}$, which is also the least pre-fixed point of $\indf{\gamma}{z}$.
Therefore, $\ensuremath{\mathcal{M}} \ensuremath{\Vdash} \brak{\gamma} \rightarrow \phi$.
Thus, $\brak{\gamma} \rightarrow \phi$ is valid.
The preservation of validity by the special rule $\mathsf{R-CoInd}$ is proved analogously, using Proposition \ref{prop:typetwo} and the fact that the
greatest fixed point $\nu z. \indf{\gamma}{z}$, is also its greatest post-fixed point.
\end{proof}
Recall (cf.\ Section \ref{subsec:TypesAssignments}) that a formula $\phi \in \ensuremath{\mathcal{L}^\cga\xspace}$ is in \textbf{normal form} if, for every subformula of the form $\brak{\gamma}$, the goal assignment $\gamma$ is either a nexttime or a long-term temporal goal assignment.
\begin{proposition}
\label{prop:NF}
For every formula $\varphi$ there is a formula $\psi$ which is in normal form, and such that $\textsf{Ax}_{\Logicname{TLCGA}\xspace} \vdash \varphi \leftrightarrow \psi$.
\end{proposition}
\begin{proof}
By induction on the structure of formulas, using the axiom \textsf{Fix} for the crucial steps. By design, the unfolding $\unf{\gamma}$ of any goal assignment $\gamma$ is a nexttime goal assignment, and all new goal assignments appearing in the scope of nexttime operators in the codomain of $\unf{\gamma}$ will be long-term temporal. So, all mixing of nexttime and long-term temporal path formulas in $\brak{\unf{\gamma}}$ will appear in proper subformulas of $\brak{\gamma}$, where the inductive hypothesis is applied.
\end{proof}
By the soundness, the proposition above implies the following corollary.
\begin{corollary}
\label{cor:NF}
For every formula $\varphi$ there is a semantically equivalent formula $\psi$ which is in normal form.
\end{corollary}
\subsection{Formula types, components, and extended FL-closure of \Logicname{TLCGA}\xspace formulae}
\label{subsec:components-and-closure}
We use some generic notions and terminology from the literature on tableaux-based satisfiability decision methods (see e.g. \cite[Ch.13]{TLCSbook}).
Formulae of \Logicname{TLCGA}\xspace in normal form can be classified as:
\begin{itemize}
\itemsep = 2pt
\item \textbf{literals}: $\top$, $\neg \top$, $p, \neg p$, where $p \in \Prop$;
\item \textbf{conjunctive formulae}, of the type $(\phi \land \psi)$ and $\lnot (\phi \lor \psi)$;
\item \textbf{disjunctive formulae}, of the type $(\phi \lor \psi)$ and $\lnot (\phi \land \psi)$;
\item \textbf{successor formulae}: $\brak{\gamma}$ and $\lnot \brak{\gamma}$, where $\gamma$ is a local (nexttime) goal assignment;
\item \textbf{long-term temporal formulae}, of the type $\brak{\gamma}$ and $\lnot \brak{\gamma}$, where $\gamma$ is a long-term goal assignment.
\end{itemize}
The formulae in the last four classes have respective \textbf{components}, given in Figure \ref{alphaTable}.
Clearly, every conjunctive (respectively, disjunctive) formula in the table is equivalent to the conjunction (respectively, disjunction) of its components.
\begin{figure}[h]
\caption{Types of formulae and their components}
\label{alphaTable}
\centering
\small
{\label{fig:edge-a}
$
\begin{array}{cc}
\begin{array}{l|l}
\mbox{Conjunctive formula} & \mbox{Components} \\
\hline
\neg \neg \varphi & \varphi \\
\varphi \land \psi & \varphi, \; \psi \\
\neg(\varphi \lor \psi) & \neg \varphi, \; \neg \psi \\
\end{array}
&
\hspace{5mm}
\begin{array}{l|l}
\mbox{Disjunctive formula} & \mbox{Components} \\
\hline
\varphi \lor \psi & \varphi, \; \psi \\
\neg(\varphi \land \psi) & \neg \varphi, \; \neg \psi \\
~ & ~ \\
\end{array}
\end{array}
$
}
\\
\vspace{3mm}
{\label{fig:edge-c}
$
\begin{array}{l|l}
\mbox{Local goal formulae} & \mbox{Components} \\
\hline
\brak{\gamma} \;\; \mbox{(positive)} \;\; &
\{\psi \mid \gamma(C) = \atlx \psi, \ C \subseteq \Agt \}
\\
\neg \brak{\gamma} \;\; \mbox{(negative)} \;\; &
\{\neg \psi \mid \gamma(C) = \atlx \psi, \ C \subseteq \Agt \}
\\
\end{array}
$
}
\\
\vspace{5mm}
{
$
\begin{array}{l|l}
\mbox{Temporal goal formulae} & \mbox{Components} \\
\hline
\brak{\gamma} \;\; \mbox{(positive)} \;\; & \indf{\gamma}{\brak{\gamma}} \\
\neg \brak{\gamma} \;\; \mbox{(negative)} \;\; & \neg \indf{\gamma}{\brak{\gamma}} \\
\end{array}
$
}
\end{figure}
Given a formula $\varphi$, we define $\overline{\varphi} := \psi$ if $\varphi$ is of the form $\neg \psi$, and $\overline{\varphi} := \neg \varphi$ otherwise.
\begin{definition}
\label{def: extended (Fischer-Ladner) closure}
The \textbf{extended (Fischer-Ladner) closure} of a \Logicname{TLCGA}\xspace formula in normal form
$\varphi$ is the least set of formulae $\ensuremath{\mathsf{ecl}}(\varphi)$ such that:
\begin{enumerate}
\item
$\varphi \in \ensuremath{\mathsf{ecl}}(\varphi)$,
\item
$\ensuremath{\mathsf{ecl}}(\varphi)$ is closed under taking all components of formulae in $\ensuremath{\mathsf{ecl}}(\varphi)$.
\item
$\overline{\psi} \in \ensuremath{\mathsf{ecl}}(\varphi)$ whenever $\psi \in \ensuremath{\mathsf{ecl}}(\varphi)$. \
\end{enumerate}
\smallskip
For any set of formulae $\Phi$ we define
$\ensuremath{\mathsf{ecl}}(\Phi) := \bigcup \{\ensuremath{\mathsf{ecl}}(\varphi) \; \mid \; \varphi \in \Phi\} $.
A set $\Phi$ of \Logicname{TLCGA}\xspace formulae in normal form is said to be \textbf{(Fischer-Ladner) closed} iff $\ensuremath{\mathsf{ecl}}(\Phi) = \Phi$.
\end{definition}
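For example, for $\varphi = p \land q$ (with atomic $p,q$) the extended closure is
\[
\ensuremath{\mathsf{ecl}}(p \land q) = \{p \land q, \; \neg (p \land q), \; p, \; \neg p, \; q, \; \neg q\}:
\]
the components of the conjunctive formula $p \land q$ are $p$ and $q$, the components of the disjunctive formula $\neg(p \land q)$ are $\neg p$ and $\neg q$, and the resulting set is closed under the operation $\overline{(\cdot)}$.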
The proof of the following proposition can be found in the appendix:
\begin{proposition}
\label{prop:closure}
The extended closure of any finite set $\Phi$ of \Logicname{TLCGA}\xspace formulae in normal form is finite.
\end{proposition}
\subsection{One-step completeness}
\label{subsec:One-stepCompleteness}
Hereafter derivability/provability and consistency refer to the axiomatic system
$\textsf{Ax}_{\Logicname{TLCGA}\xspace}$. Given a set of formulae $\Phi$, the maximal consistent subsets of $\Phi$ are defined as usual.
\begin{definition}
Given a
closed set of formulae $\Phi$, a \textbf{$\Phi$-atom} is a maximal consistent subset of $\Phi$. We denote by $\mathsf{At}(\Phi)$ the set of all $\Phi$-atoms.
\end{definition}
\begin{definition}
Let $\Phi$ be a finite set of formulae. A \textbf{nexttime assignment} over $\Phi$ is a formula of the shape $$\brak{C_1 \gass \mathsf{X}\varphi_1,..., C_k \gass \mathsf{X} \varphi_k}$$
where each formula $\varphi_i$ belongs to $\Phi$. A
\textbf{modal one-step theory} over $\Phi$ is a finite set of formulae $\Gamma$, such that every formula in $\Gamma$ is either a nexttime assignment over $\Phi$ or the negation of such a formula.
\end{definition}
\begin{definition}
Let $\Phi$ be a finite set of formulae. A \textbf{consistent game form for $\Phi$} is a game form
$(\Act,\act,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$
over the set of outcomes $\mathcal{P} (\Phi)$ such that, for each action profile $\ensuremath{\zeta\xspace}$, $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$ is a consistent set of formulae. A \textbf{maximal consistent game form for $\Phi$} is a game form
$(\Act,\act,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$
over outcomes $\mathcal{P} (\Phi)$ such that, for each action profile $\ensuremath{\zeta\xspace}$, $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$ is a maximal consistent subset of $\Phi$.
\end{definition}
Note that, if $\Phi$ is a
closed set of formulae, then a consistent game form for $\Phi$ is maximal if and only if for every action profile $\ensuremath{\zeta\xspace}$ the set $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$ is a $\Phi$-atom.
Given a strategic game form $G = (\Act,\act,\ensuremath{\mathsf{O}\xspace},\ensuremath{\mathsf{out}\xspace})$, a coalition $C$ and action profiles $\ensuremath{\zeta\xspace}', \ensuremath{\zeta\xspace}$, we write $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$ to state that $\ensuremath{\zeta\xspace}' \vert_C = \ensuremath{\zeta\xspace} \vert_C$.
\begin{theorem}[One-step completeness]
\label{t:one-step-comp}
Let $\Gamma$ be a consistent modal one-step theory over a finite set of formulas $\Phi$ and assume that $\Phi$ contains all components of $\Gamma$, also contains $\overline{\psi}$ whenever $\psi \in \Phi$, and is closed under conjunctions (up to provable equivalence).
Then there exists a maximal consistent game form
$\ensuremath{\mathcal{M}}(\Gamma) = (\Act,\act,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$
for $\Phi$
such that, for every goal assignment $\gamma$:
\begin{enumerate}
\item If $\brak{\gamma} \in \Gamma$, then there is a profile $\ensuremath{\zeta\xspace} \in \Pi_{a \in \Agt}\act_a$ such that for all $C$ in the support of $\gamma$ with $\gamma(C) = \mathsf{X} \phi$ and all $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$, we have $\phi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$.
\item If $\neg \brak{\gamma} \in \Gamma$, then for every profile $\ensuremath{\zeta\xspace} \in \Pi_{a \in \Agt}\act_a$ there is some $C$ in the support of $\gamma$, and some $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$, for which we have
$\overline{\phi} \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$ where $\gamma(C) = \mathsf{X} \phi$.
\end{enumerate}
Furthermore, the size of $\Act$ is at most exponential in $\vert \Phi \vert$.
\end{theorem}
\begin{proof}
We may assume without loss of generality that, for every nexttime goal assignment $\gamma$ over $\Phi$, the set $\Gamma$ contains either $\brak{\gamma}$ or $\neg \brak{\gamma}$, since otherwise we can extend $\Gamma$, using Lindenbaum's lemma, to a consistent (and still finite) set satisfying this assumption.
We consider nexttime goal assignments over the set of conjunctions of subsets of $\Phi$.
We say that such a goal assignment $\gamma$ is \textbf{deterministic} if $\gamma(\Agt)$ is (provably equivalent to) the conjunction of a maximal consistent subset of $\Phi$.
We then say that a
goal assignment $\gamma'$ is a \textbf{strengthening} of $\gamma$ if, for all $C$ with $\gamma(C) = \atlx\phi$ and $\gamma'(C) = \atlx\phi'$, the formula $\phi' \to \phi$ is provable. Note that every formula $\brak{\gamma}$ provably implies the disjunction of all formulas $\brak{\gamma'}$ where $\gamma'$ is a deterministic strengthening of $\gamma$ over $\Phi$.
This follows by repeated applications of the axiom (GrandCoalition).
If $\gamma$ is deterministic then we let \textbf{$\mathsf{next}(\gamma)$} denote the maximal consistent set $\Psi$ for which $\gamma(\Agt) = \mathsf{X} ( \bigwedge \Psi )$. Note that for any deterministic strengthening $\gamma'$ of a goal assignment $\gamma$ over $\Phi$ and any $C$ for which $\gamma(C) = \atlx \varphi$, $\mathsf{next}(\gamma')$ must contain $\varphi$ as a conjunct. This is a consequence of the axioms (Con) and (Safe) and the assumption that $\mathsf{next}(\gamma')$ is maximal consistent.
For technical convenience, in this proof we fix the enumeration of $\Agt$ to be
$\ensuremath{\mathsf{a}\xspace}_0,...,\ensuremath{\mathsf{a}\xspace}_{K-1}$,
where $K = \vert \Agt \vert$.
Given an agent $\ensuremath{\mathsf{a}\xspace} \in \Agt$, we define $\gacta$
to be the set of all triples
$(\gamma,f, k)$ such that $\gamma$ is a goal assignment with $\brak{\gamma} \in \Gamma$, $0 \leq k < K$, and $f$ is a function mapping each goal assignment
$\gamma' : \mathcal{P}(\Agt) \to \Phi$ to one of its deterministic strengthenings. To count the actions: their number is the number of goal assignments $\gamma$ with $\brak{\gamma} \in \Gamma$, times the number of functions $f$ mapping goal assignments over $\Phi$ to deterministic strengthenings, times $K$. The number of goal assignments over $\Phi$ is $\vert \Phi \vert^{2^K}$, and more generally the number of goal assignments over conjunctions of subsets of $\Phi$ is $(2^{\vert \Phi \vert})^{2^K} = 2^{2^K \cdot \vert \Phi \vert}$. So the number of functions $f$ that can appear in an action is:
$$(2^{2^K \cdot \vert \Phi \vert})^{\vert \Phi \vert^{2^K}} = 2^{2^K \cdot \vert \Phi \vert^{2^K + 1}}$$
Since the number of agents is fixed, $2^K$ is a constant and the term $2^K \cdot \vert \Phi \vert^{2^K + 1}$ is a polynomial in $\vert \Phi \vert$. Hence the number of actions is exponential in $\vert \Phi \vert$.
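For a quick sanity check of the computation above: with $K = 2$ agents and $\vert \Phi \vert = 3$, the number of choice functions is $(2^{4 \cdot 3})^{3^4} = 2^{12 \cdot 81} = 2^{972} = 2^{4 \cdot 3^5}$, matching the closed form $2^{2^K \cdot \vert \Phi \vert^{2^K+1}}$.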
{Note that
$\gacta \neq \emptyset$ for all $\ensuremath{\mathsf{a}\xspace} \in \Agt$: by the axiom (Triv) there is at least one goal assignment $\gamma$ with $\brak{\gamma} \in \Gamma$, since $\brak{\gamma}$ is equivalent to the disjunction of its deterministic strengthenings, so one of these must also be in $\Gamma$. Note also that the goal assigned to $\Agt$ by any $\brak{\gamma} \in \Gamma$ must be consistent by the axiom (Safe).}
%
We set $\gAct = \bigcup_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \gacta$.
Given an action profile
$\ensuremath{\zeta\xspace}$ and $\ensuremath{\mathsf{a}\xspace} \in \Agt$, if $\ensuremath{\zeta\xspace}_\ensuremath{\mathsf{a}\xspace} = (\gamma,f,k)$ we write
$\mathsf{vote}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace}) = \gamma$, $\mathsf{bet}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace}) = k$ and
$\mathsf{choice}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace},\gamma') = f(\gamma')$ for every goal assignment
$\gamma'$ with $\brak{\gamma'} \in \Gamma$. We write
$\mathsf{vote}(\ensuremath{\zeta\xspace})$ for the voting profile mapping each
$\ensuremath{\mathsf{a}\xspace} \in \Agt$ to $\mathsf{vote}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace})$. We define the \emph{voting winner} $\mathsf{win}(\ensuremath{\zeta\xspace})$ to be player $\ensuremath{\mathsf{a}\xspace}_i$ where $i$ is determined as follows:
\[
i := \left( \sum_{\ensuremath{\mathsf{a}\xspace} \in \Agt} \mathsf{bet}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace}) \right) \; \text{mod} \; K
\]
Finally, we define the outcome of a given action profile $\ensuremath{\zeta\xspace}$ as follows:
\[
\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}) := \mathsf{next}\left( \mathsf{choice}(\mathsf{win}(\ensuremath{\zeta\xspace}), \ensuremath{\zeta\xspace}, \mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}))) \right)
\]
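To illustrate the betting mechanism with hypothetical values: if $K = 3$ and the bets in $\ensuremath{\zeta\xspace}$ are $\mathsf{bet}(\ensuremath{\mathsf{a}\xspace}_0,\ensuremath{\zeta\xspace}) = 2$, $\mathsf{bet}(\ensuremath{\mathsf{a}\xspace}_1,\ensuremath{\zeta\xspace}) = 1$, and $\mathsf{bet}(\ensuremath{\mathsf{a}\xspace}_2,\ensuremath{\zeta\xspace}) = 0$, then $i = (2+1+0) \bmod 3 = 0$, so $\mathsf{win}(\ensuremath{\zeta\xspace}) = \ensuremath{\mathsf{a}\xspace}_0$. Note that an agent who knows the bets of all the others can always choose her own bet so as to make herself the winner; this feature is used in the proof of Claim 1 below.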
We will show that the game form $\ensuremath{\mathcal{M}}(\Gamma) = (\gAct,\gact,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$ we have constructed satisfies the criteria listed in the statement of the theorem. First, we shall prove a rather technical auxiliary claim. Before going through its proof, the reader may want to skip ahead to see how the claim is used in the main argument.
\begin{customclaim}{1}
Let $\varphi$ be any formula in $\Phi$, $C$ any coalition, and let $\ensuremath{\zeta\xspace}$ be an action profile in $\ensuremath{\mathcal{M}}(\Gamma)$ such that for every action profile $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$, we have $\varphi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$. Let $\gamma$ be any deterministic strengthening of $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}))$ such that $\brak{\gamma} \in \Gamma$ and $\gamma(\Agt) = \mathsf{X} (\bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}))$, and let $\gamma(C) = \mathsf{X} \psi$. Then there exists a deterministic strengthening $\gamma'$ of $\gamma$ such that $\brak{\gamma'} \in \Gamma$, and:
$$\gamma'(C) = \mathsf{X}(\psi \wedge \varphi).$$
\end{customclaim}
\append{
\paragraph{Proof of Claim 1:}
We need to distinguish two cases, for $C = \Agt$ and $C \neq \Agt$. We begin with the easier case where $C = \Agt$. In this case there is only one action profile $\ensuremath{\zeta\xspace}' \sim_{C} \ensuremath{\zeta\xspace}$, namely $\ensuremath{\zeta\xspace}$ itself. Our assumption thus gives $\varphi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$. Further, we have $\gamma(C) = \gamma(\Agt) = \mathsf{X} (\bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}))$ by assumption. But, since $\psi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$ the formula $\bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$ is equal (up to provable equivalence) to $\psi \wedge \bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$, which is provably equivalent to $\psi \wedge \varphi$ (because $\gamma'$ is a deterministic strengthening). Hence we can set $\gamma' = \gamma$.
The case where $C \neq \Agt$ is more involved. We assumed that $\brak{\gamma} \in \Gamma$. By (Case), we have:
\[
\brak{\gamma[C \gass \atlx (\psi \wedge \varphi)]} \vee \brak{\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]} \in \Gamma
\]
We first show that $\brak{\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]} \notin \Gamma$. Suppose the contrary, that $\brak{\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]} \in \Gamma$. Let $\gamma^*$ be an arbitrary deterministic strengthening of $\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]$ such that $\brak{\gamma^*} \in \Gamma$. Such a strengthening must exist, since every formula of the form $\brak{\delta}$ provably implies the disjunction of all its deterministic strengthenings over $\Phi$.
Now we define a new action profile $\ensuremath{\zeta\xspace}'$ as follows. First, pick an {arbitrary}
$\ensuremath{\mathsf{c}\xspace} \notin C$, which exists since $C \neq \Agt$. For each $\ensuremath{\mathsf{a}\xspace} \in C$ set $\ensuremath{\zeta\xspace}'_\ensuremath{\mathsf{a}\xspace} = \ensuremath{\zeta\xspace}_\ensuremath{\mathsf{a}\xspace}$.
For each $\ensuremath{\mathsf{a}\xspace} \notin C$ and $\ensuremath{\mathsf{a}\xspace} \neq \ensuremath{\mathsf{c}\xspace}$, set $\ensuremath{\zeta\xspace}'_\ensuremath{\mathsf{a}\xspace} = (\gamma^\top,f,0)$ where $f$ is arbitrary (recall that $\gamma^\top$ is the trivial goal assignment with empty support). For $\ensuremath{\mathsf{c}\xspace}$, set $\ensuremath{\zeta\xspace}'_\ensuremath{\mathsf{c}\xspace} = (\gamma^\top,f,h)$ where the choice function $f$ chooses $\gamma^*$ whenever possible, and the bet $h$ is chosen so that the index of the player $\ensuremath{\mathsf{c}\xspace}$ is equal to $\sum_{\ensuremath{\mathsf{a}\xspace} \in C} \mathsf{bet}(\ensuremath{\mathsf{a}\xspace},\ensuremath{\zeta\xspace}) + h \ (\text{mod } K)$. This guarantees that $\ensuremath{\mathsf{c}\xspace}$ will be the voting winner in $\ensuremath{\zeta\xspace}'$. Clearly $\ensuremath{\zeta\xspace}' \sim_{C} \ensuremath{\zeta\xspace}$. Furthermore, since $\gamma^*$ is a strengthening of $\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]$, and $\gamma$ is a strengthening of $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}))$ by assumption, it follows that $\gamma^*$ is a deterministic strengthening of $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))$. This is because the only coalitions not mapped to $\atlx\top$ by $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))$ are the ones contained in $C$, and for any such coalition $D$ we have $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))(D) = \mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}))(D)$. So, we get:
\[
\begin{aligned}
\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}') & = \mathsf{next}\left( \mathsf{choice}(\mathsf{win}(\ensuremath{\zeta\xspace}'), \ensuremath{\zeta\xspace}', \mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))) \right) \\
& = \mathsf{next}\left( \mathsf{choice}(\ensuremath{\mathsf{c}\xspace}, \ensuremath{\zeta\xspace}', \mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))) \right) \\
& = \mathsf{next}(\gamma^*).
\end{aligned}
\]
But $\neg \varphi \in \mathsf{next}(\gamma^*)$ since $\gamma^*$ is a strengthening of $\gamma \vert_{C}[\Agt \gass \atlx\neg \varphi]$. By consistency of $\mathsf{next}(\gamma^*)$, we have thus found an action profile $\ensuremath{\zeta\xspace}'$ such that $\ensuremath{\zeta\xspace} \sim_{C} \ensuremath{\zeta\xspace}'$ and $\varphi \notin \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$. This is a contradiction with our assumption on the action profile $\ensuremath{\zeta\xspace}$.
Thus, we have proved $\brak{\gamma \vert_{C}[\Agt \gass \atlx\varphi]} \notin \Gamma$, as desired.
It follows that $\brak{\gamma[C \gass \atlx (\psi \wedge \varphi)]} \in \Gamma$. We then define:
\[\gamma' := \gamma[C \gass \atlx (\psi \wedge \varphi)].\]
Thus, we have shown that $\brak{\gamma'} \in \Gamma$; moreover, $\gamma'$ is clearly a strengthening of $\gamma$, and it is deterministic since $\gamma'(\Agt) = \gamma(\Agt)$. This concludes the proof of the claim.
\qed
}
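The arithmetic behind the choice of the bet $h$ in the claim above can be sketched as follows, assuming (as in the construction) that the voting winner is the player whose index equals the sum of all bets modulo $K$, and that the players outside $C$ bet $0$:

```python
# Toy check of the modular bet used in the claim: with K players
# indexed 0..K-1, the voting winner is the player whose index equals
# the sum of all bets modulo K.  Since the players outside C bet 0,
# the player c can force itself to win by betting
# h = (index_c - sum_of_C_bets) mod K.

def winning_bet(index_c, bets_of_C, K):
    """Bet h that makes the player with index index_c the voting winner."""
    return (index_c - sum(bets_of_C)) % K

K = 5
bets_of_C = [3, 1, 4]   # bets of the members of C in the profile zeta
index_c = 2             # index of the distinguished player c
h = winning_bet(index_c, bets_of_C, K)
winner = (sum(bets_of_C) + h) % K   # all other players bet 0
assert winner == index_c
```

Since $h$ ranges over all residues modulo $K$, such a bet always exists, which is all the claim needs.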
\medskip
We now prove that the properties (1) and (2) listed in the theorem hold for the game form
$\ensuremath{\mathcal{M}}(\Gamma) = (\gAct,\gact,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$.
\paragraph{Item (1):} Suppose $\brak{\gamma} \in \Gamma$. Let $\ensuremath{\zeta\xspace}$ be defined by letting all players vote for $(\gamma, f, 0)$ where $f$ is an
{arbitrary, fixed choice function}.
Let $C$ be in the support of $\gamma$, where $\gamma(C) = \mathsf{X} \varphi$ and let $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$.
Since all players in $C$ vote for $\gamma$ in $\ensuremath{\zeta\xspace}'$, and the outcome
$\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$ is $\mathsf{next}(\gamma')$ for some deterministic strengthening $\gamma'$ of
$\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}'))$, it follows that $\varphi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$.
\paragraph{Item (2):} Suppose $\neg \brak{\gamma} \in \Gamma$, and let $\ensuremath{\zeta\xspace}$ be an arbitrary action profile. We want to show that there is some coalition $C$ and some action profile $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$ such that
$\overline{\varphi}
\in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$, where $\gamma(C) = \mathsf{X} \varphi$.
We will prove this by reductio ad absurdum. Suppose that for every coalition $C$ with $\gamma(C) = \mathsf{X} \varphi$ and every action profile $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$, we have
$\overline{\varphi}\notin \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$. This means that $\varphi \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$ since both $\overline{\varphi}$
and $\varphi$ are in the closure of $\brak{\gamma}$
and $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$ is maximal consistent.
Let us list all coalitions in the support of $\gamma$ as $C_1,...,C_m$. Let $\gamma_0$ denote the goal assignment $\mathsf{choice}(\mathsf{win}(\ensuremath{\zeta\xspace}), \ensuremath{\zeta\xspace}, \mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace})))$. Then $\gamma_0(\Agt) = \mathsf{X} (\bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}))$. Furthermore $\brak{\gamma_0} \in \Gamma$ by definition of $\mathsf{choice}$, and $\gamma_0$ is a deterministic strengthening of $\mathsf{merge}(\mathsf{vote}(\ensuremath{\zeta\xspace}))$. For each $i \in \{1,...,m\}$ let $\psi_i$ be the formula such that $\gamma(C_i) = \atlx \psi_i$, and let $\psi^0_i$ be the formula such that $\gamma_0(C_i) = \atlx \psi^0_i$. We define, for each $i \in \{0,...,m\}$, a deterministic goal assignment $\gamma_i$ such that:
\begin{itemize}
\item $\brak{\gamma_i} \in \Gamma$,
\item $\gamma_i$ is a deterministic strengthening of $\gamma_j$ for all $j < i$,
\item $\gamma_i(C_j) = \mathsf{X} (\psi^0_j \wedge \psi_j)$ for all $j$ with $1 \leq j \leq i$, and $\gamma_i(\Agt) = \mathsf{X} (\bigwedge \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}))$.
\end{itemize}
The goal assignment $\gamma_0$ has already been defined, and we can extend the definition inductively to all $i$ by repeatedly applying Claim 1. Note that the induction hypothesis has been tailored so that Claim 1 applies at each inductive step.
Now, consider the goal assignment $\gamma_m$. We have $\brak{\gamma_m} \in \Gamma$. But by definition $\gamma_m$ is a strengthening of $\gamma$, hence $\brak{\gamma} \in \Gamma$, by Goal Monotonicity.
Since we assumed that $\neg \brak{\gamma} \in \Gamma$,
we have reached a contradiction with the consistency of $\Gamma$.
This concludes the proof of item (2) and thus the proof of the theorem.
\end{proof}
\section{Completeness of \Logicname{TLCGA}\xspace}
\label{sec:completeness}
\subsection{Networks}
\label{sec:Networks}
Throughout the rest of this section, we fix a finite,
closed set $\Phi$ of \Logicname{TLCGA}\xspace-formulae in normal form.
To prove completeness, we will show how to construct a model for each (consistent) atom in $\mathsf{At}(\Phi)$, using the technique of \emph{networks}. The idea behind this technique is to construct a series of approximations to the satisfying model. At each finite stage of the construction, the current approximating network will generally have a number of \emph{defects}, each of which represents some particular reason why the network cannot yet be regarded as a satisfying model.
For example, consider a formula of the form $\brak{A \gass \alpha \mathsf{U} \beta, B\gass \mathsf{G} \chi}$. If this formula belongs to the label associated with some node $u$ in a network, then there must exist some strategy profile $\ensuremath{\Sigma\xspace}$ such that, when restricted to a joint strategy for the coalition $A$, it ensures that every play generated by it eventually leads to a state where $\beta$ is in the label, and all states that are visited meanwhile have $\alpha$ in their labels; likewise, when $\ensuremath{\Sigma\xspace}$ is restricted to a joint strategy for the coalition $B$, it ensures that on every play generated by it all states that are visited have $\chi$ in their labels.
For the respective conditions for $\alpha$ and $\chi$ it suffices that they are satisfied \emph{locally}, at every step of the construction of the network.
Thus, there are two types of conditions to be satisfied by the network: \emph{local conditions}, which can be ensured on-the-fly, i.e. the defects arising from their violation can be fixed step-by-step in the process of the construction and updates of the network; and \emph{eventualities}, which require special care. In the example above, if it is not already the case that every play generated by the joint strategy for $A$ obtained from $\ensuremath{\Sigma\xspace}$ eventually leads to a state where $\beta$ is in the label, that creates a defect which needs to be eventually fixed. To do so we first show that we can ``push'' the defect towards leaves in the network, in the sense that it suffices to fix the defect at each leaf in order to make sure each occurrence of the defect in the current network disappears. Next, we fix each defect associated with a leaf. To do that, we prove that for every atom that contains the formula $\brak{A \gass \alpha \mathsf{U} \beta, B\gass \mathsf{G} \chi}$, we can find a network whose root is labelled by that atom, and in which the occurrence of the formula at the root is not a defect (though it may appear as a defect elsewhere in the network). This network can then be ``plugged in'' at appropriate leaves in another network in order to fix a defect there. Once a specific occurrence of a defect is fixed it never reappears, but of course each round of the construction may introduce new defects, like new heads of a hydra appearing where one was previously cut off. These new defects are then taken care of in the \emph{next} round, and so on. By taking a limit of the approximating series we obtain a \emph{perfect} network, i.e. a network with no defects. A ``truth lemma'' for perfect networks ensures that we can view the network obtained in the limit as a model in which each formula attached to the root is true.
\begin{definition}
A \textbf{$\Phi$-network} is a triple $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ such that:
\begin{itemize}
\item $T$ is a rooted, finitely branching directed tree,
\item $L : T \to \mathsf{At}(\Phi)$ is a map that assigns to each node of $T$ an atom from $\mathsf{At}(\Phi)$.
\item $\mathcal{G}$ is a map that assigns to each non-leaf node $u$ of $T$ a game form
$\mathcal{G}(u) = (\Act^u,\act^u,T,\ensuremath{\mathsf{out}\xspace}^u)$, where $\ensuremath{\mathsf{out}\xspace}^u$ is subject to the constraint that its codomain is the set of children of $u$ in $T$.
\end{itemize}
\end{definition}
\begin{definition}
A network $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ is said to be a \textbf{sub-network} of $\ensuremath{\mathcal{N}}' = (T',L',\mathcal{G}')$, written $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$, if:
\begin{itemize}
\item $T$ is a subgraph of $T'$ and the root of $T$ is also the root of $T'$,
\item If $u$ is any non-leaf node in $T$ then it has the same children in $T$ as in $T'$ and, furthermore, $\mathcal{G}(u) = \mathcal{G}'(u)$,
\item $L = L'\vert_T$.
\end{itemize}
\end{definition}
\begin{definition}
Given a $\Phi$-network $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$, a \textbf{marking} of $\ensuremath{\mathcal{N}}$ is a map $\ensuremath{\mathbf{m}}$ from $T$ to the powerset of $\Phi$ such that $\ensuremath{\mathbf{m}}(v) \subseteq L(v)$ for all $v \in T$. (In particular, note that $L$ itself is a marking of $\ensuremath{\mathcal{N}}$.) Given a marking $\ensuremath{\mathbf{m}}$ of $\ensuremath{\mathcal{N}}$, a nexttime goal assignment $\gamma$ such that $\brak{\gamma} \in \Phi$, and a non-leaf node $u \in T$ with $\mathcal{G}(u) = (\Act,\act,T,\ensuremath{\mathsf{out}\xspace})$, we say that the marking $\ensuremath{\mathbf{m}}$ \textbf{verifies the goal assignment $\gamma$ at $u$} if there is a strategy profile $\ensuremath{\Sigma\xspace} \in \Pi_{a \in \Agt} \Act$ such that, for every $C$ in the support of $\gamma$ such that $\gamma(C) = \mathsf{X} \psi$ and for every strategy profile $\ensuremath{\Sigma\xspace}'$ with $\ensuremath{\Sigma\xspace}' \sim_C \ensuremath{\Sigma\xspace}$, we have $\psi \in \ensuremath{\mathbf{m}}(\ensuremath{\mathsf{out}\xspace}(\ensuremath{\Sigma\xspace}',u))$. We say that $\ensuremath{\mathbf{m}}$ \textbf{refutes the goal assignment $\gamma$ at $u$} if for every strategy profile $\ensuremath{\Sigma\xspace} \in \Pi_{a \in \Agt} \Act$ there is some $C$ in the support of $\gamma$ with $\gamma(C) = \mathsf{X} \psi$ and some strategy profile $\ensuremath{\Sigma\xspace}'$ with $\ensuremath{\Sigma\xspace}' \sim_C \ensuremath{\Sigma\xspace}$ such that $\overline{\psi} \in \ensuremath{\mathbf{m}}(\ensuremath{\mathsf{out}\xspace}(\ensuremath{\Sigma\xspace}',u))$.
\end{definition}
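The quantifier pattern in this definition amounts to a finite check. The sketch below implements ``$\ensuremath{\mathbf{m}}$ verifies $\gamma$ at $u$'' literally on a toy instance; the agents, actions, outcome map and marking are all toy assumptions, and one-step strategies are identified with actions:

```python
from itertools import product

agents = ("a", "b")
actions = ("0", "1")
# Outcome function of a toy game form at node u: agent "a" alone
# decides which child node is reached.
out = {("0", "0"): "v1", ("0", "1"): "v1",
       ("1", "0"): "v2", ("1", "1"): "v2"}
m = {"v1": {"p"}, "v2": {"q"}}   # a marking: child node -> set of formulas

def agrees_on(s, t, C):
    """Profiles s and t are C-equivalent: they agree on every agent in C."""
    return all(s[agents.index(a)] == t[agents.index(a)] for a in C)

def verifies(gamma):
    """m verifies gamma at u: there is a profile s such that, for every
    goal (C, psi) in gamma and every profile t ~_C s, psi is in m(out(t))."""
    profiles = list(product(actions, repeat=len(agents)))
    return any(
        all(psi in m[out[t]]
            for (C, psi) in gamma
            for t in profiles if agrees_on(s, t, C))
        for s in profiles)

# Coalition {"a"} can guarantee "p" (by playing "0"), but {"b"} cannot.
assert verifies([(("a",), "p")])
assert not verifies([(("b",), "p")])
```

Refutation is checked dually, by swapping the two quantifiers and testing for $\overline{\psi}$ instead of $\psi$.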
\begin{definition}
A $\Phi$-network $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ is said to be \textbf{one-step coherent} if, for every non-leaf node $u \in T$ such that $\mathcal{G}(u) = (\Act,\act,T,\ensuremath{\mathsf{out}\xspace})$, the marking $L$ verifies every nexttime goal assignment $\gamma$ such that $\brak{\gamma} \in L(u)$ and refutes every nexttime goal assignment $\gamma$ such that $\neg \brak{\gamma} \in L(u)$.
\end{definition}
\subsection{Eventualities and defects}
\label{sec:EventualitiesAndDefects}
\begin{definition}
A \textbf{$\ensuremath{\mathsf{Type}\until}\xspace$-eventuality} is a formula $\brak{\gamma}$ where $\gamma \in \ensuremath{\mathsf{Type}\until}\xspace$.
A \textbf{$\ensuremath{\mathsf{Type}\always}\xspace$-eventuality} is a formula of the form $\neg \brak{\gamma}$ where $\gamma \in \ensuremath{\mathsf{Type}\always}\xspace$.
\end{definition}
\begin{definition}
Let $\brak{\gamma}$ be a $\ensuremath{\mathsf{Type}\until}\xspace$-eventuality, where $\gamma$ is a goal assignment for the family $\ensuremath{\mathcal{F}\xspace} = \{C_1,...,C_n,D_1,...,D_m\}$ or $\ensuremath{\mathcal{F}\xspace} = \{C_1,...,C_n\}$ defined by:
\[
\gamma(C_1) = \alpha_1 \mathsf{U} \beta_1, ...,\ \gamma(C_n) = \alpha_n \mathsf{U} \beta_n
\]
and (if $m > 0$)
\[
\gamma(D_1) = \mathsf{G} \chi_1, ...,\ \gamma(D_m) = \mathsf{G} \chi_m.
\]
Let $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ be a one-step coherent network.
Given a node $u \in T$, we say that \textbf{$\brak{\gamma}$ is partially fulfilled in $0$ steps at $u$ in $\ensuremath{\mathcal{N}}$} if there is some $i \in \{1,...,n\}$ such that $\beta_i \wedge \brak{\gamma \setminus C_i} \in L(u)$. For any natural number $k \geq 0$, we say that \textbf{$\brak{\gamma}$ is partially fulfilled in $k + 1$ steps at $u$} if it is either partially fulfilled in $0$ steps, or $u$ is a non-leaf node and the following conditions hold:
\begin{itemize}
\item $\alpha_i \in L(u)$ for all $i \in \{1,...,n\}$,
\item $\chi_j \in L(u)$ for all $j \in \{1,...,m\}$,
\item there is a marking $\ensuremath{\mathbf{m}}$ that verifies $\brak{\diffof{\gamma}}$ at $u$ and is such that for all $v \in T$ such that $v$ is a child of $u$, $\brak{\gamma}$ is partially fulfilled in $k$ steps at $v$ whenever $\brak{\gamma} \in \ensuremath{\mathbf{m}}(v)$.
\end{itemize}
Lastly, we say that \textbf{$\brak{\gamma}$ is partially fulfilled at $u$} if it is partially fulfilled in $k$ steps at $u$ for some $k\geq 0$.
\end{definition}
\begin{definition}
Let $\neg\brak{\gamma}$ be an eventuality in $\ensuremath{\mathsf{Type}\always}\xspace$, where $\gamma$ is the goal assignment for the family $\ensuremath{\mathcal{F}\xspace} = \{D_1,...,D_m\}$ defined by:
\[
\gamma(D_1) = \mathsf{G} \chi_1,..., \gamma(D_m) = \mathsf{G} \chi_m.
\]
Let $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ be a one-step coherent network. Given a node $u \in T$, we say that \textbf{$\neg \brak{\gamma}$ is partially fulfilled in $0$ steps at $u$ in $\ensuremath{\mathcal{N}}$} if there is some $i \in \{1,...,m\}$ such that $\overline{\chi_i} \in L(u)$.
For any natural number $k \geq 0$, we say that \textbf{$\neg\brak{\gamma}$ is partially fulfilled in $k + 1$ steps at $u$} if it is either partially fulfilled in $0$ steps, or $u$ is a non-leaf node and there exists a marking $\ensuremath{\mathbf{m}}$ that refutes $\brak{\diffof{\gamma}}$ at $u$, and such that for all $v \in T$
{such that $v$ is a child of $u$}, $\neg \brak{\gamma}$ is partially fulfilled in $k$ steps at $v$ whenever $\neg \brak{\gamma} \in \ensuremath{\mathbf{m}}(v)$.
Lastly, we say that \textbf{$\neg\brak{\gamma}$ is partially fulfilled at $u$} if it is partially fulfilled in $k$ steps at $u$ for some $k\geq 0$.
\end{definition}
\begin{definition}
A \textbf{defect} of a network $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ is a pair $(u,\varphi)$ such that $u \in T$ and $\varphi \in L(u)$ is an eventuality which is not partially fulfilled at $u$.
\end{definition}
\begin{proposition}
\label{p:stayfinished}
Let $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$ and let $(u,\varphi)$ be a defect of $\ensuremath{\mathcal{N}}'$. If $u$ belongs to $\ensuremath{\mathcal{N}}$, then $(u,\varphi)$ is a defect of $\ensuremath{\mathcal{N}}$, as well.
\end{proposition}
\begin{proof}
A trivial induction on $k$ shows that, if an eventuality of either of the two types is partially fulfilled in $k$ steps at $u$ in $\ensuremath{\mathcal{N}}$, then it is partially fulfilled in $k$ steps at the same node in $\ensuremath{\mathcal{N}}'$ as well.
\end{proof}
\begin{definition}
A network is said to be \textbf{perfect} if it is one-step coherent, has no leaves, and no defects.
\end{definition}
\begin{definition}
Given a perfect network $\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ we define a game model $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{N}}) = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},V)$
as follows. We take $\ensuremath{\mathsf{S}\xspace}$ to be the set of all nodes in $T$, and $\mathfrak{g} = \mathcal{G}$.
Finally, we set $V(p) = \{v \in T \mid p \in L(v)\}$. We call $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{N}})$ the \textbf{induced model of the network} $\ensuremath{\mathcal{N}}$.
\end{definition}
{The following proposition will relate truth sets $\tset{\varphi}_{\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{N}})}$ of formulas in the induced model of a network to the set of nodes $v$ with $\varphi \in L(v)$. To clearly distinguish the latter from the former, we introduce the following notation:
$$\lset{\varphi}{\ensuremath{\mathcal{N}}} := \{v \in T \mid \varphi \in L(v)\}$$}
{For the proof of the following proposition, see the appendix.}
\begin{proposition}
\label{prop:network-sat}
Every $\Phi$-atom that is the label of some node in a perfect $\Phi$-network
$\ensuremath{\mathcal{N}} = (T,L,\mathcal{G})$ is true at the respective state of the model
$\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{N}})$ induced by that network.
\end{proposition}
\subsection{Constructing a perfect network}
\label{sec:PerfectNetwork}
\subsubsection{Step 1: extending leaves in coherent networks}
\begin{proposition}
\label{p:remove-leaves}
Let $\ensuremath{\mathcal{N}}$ be any finite, one-step coherent network, and let $u$ be a leaf in $\ensuremath{\mathcal{N}}$. Then there exists a finite and one-step coherent network $\ensuremath{\mathcal{N}}'$ such that $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$ and such that $u$ is not a leaf in $\ensuremath{\mathcal{N}}'$.
\end{proposition}
\begin{proof}
Let $\Gamma = \{\brak{\gamma} \mid \brak{\gamma} \in L(u)\} \cup \{\neg\brak{\gamma} \mid \neg \brak{\gamma} \in L(u)\}$. This is a consistent modal one-step theory, so let $(\Act,\act,\mathcal{P} (\Phi),\ensuremath{\mathsf{out}\xspace})$ be the maximal consistent game form $\ensuremath{\mathcal{M}}(\Gamma)$ provided by Theorem \ref{t:one-step-comp}. We construct the network $\ensuremath{\mathcal{N}}' = (T',L',\mathcal{G}')$ as follows. For each atom $\Psi$ in the image of the function $\ensuremath{\mathsf{out}\xspace}$ (which is always non-empty), add a new successor $v_\Psi$ to $u$, and set $L'(v_\Psi) = \Psi$. Let $S$ denote the set of successors of $u$ added in this manner. We construct a new game form
$\mathcal{G}'(u) = (\Act,\act,S,\ensuremath{\mathsf{out}\xspace}')$ by setting, for each action profile $\ensuremath{\zeta\xspace}$, $\ensuremath{\mathsf{out}\xspace}'(\ensuremath{\zeta\xspace}) := v_\Psi$ where $\Psi = \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace})$.
This completes the definition of $\ensuremath{\mathcal{N}}'$. It is clear that $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$, and the conditions given in Theorem \ref{t:one-step-comp} directly entail (by design) that the network $\ensuremath{\mathcal{N}}'$ is one-step coherent. Since at least one successor was added to $u$, this node is no longer a leaf in $\ensuremath{\mathcal{N}}'$.
\end{proof}
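The grouping of action profiles performed in this proof can be sketched on a toy instance; the atoms and profiles below are placeholders:

```python
# Toy sketch of Step 1: every atom in the image of the outcome
# function `out` of the one-step game form becomes one new child of
# the leaf u, and out' sends each action profile to the child
# labelled by its atom.
out = {("0", "0"): "Psi1", ("0", "1"): "Psi2",
       ("1", "0"): "Psi1", ("1", "1"): "Psi1"}

children = {atom: "v_" + atom for atom in out.values()}   # the set S
out_new = {zeta: children[out[zeta]] for zeta in out}     # out'

assert set(children.values()) == {"v_Psi1", "v_Psi2"}
assert out_new[("0", "1")] == "v_Psi2"
```

Since `out'` factors through `out`, one-step coherence of the new node is inherited directly from the conditions of Theorem \ref{t:one-step-comp}.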
\subsubsection{Step 2: pushing defects towards leaves}
\begin{proposition}
\label{p:push}
Let $\mathcal{N}$ be a finite, one-step coherent network and let $(u,\varphi)$ be a defect of $\ensuremath{\mathcal{N}}$. Then there exists a
set $\{v_1,...,v_k\}$ of leaves in $\ensuremath{\mathcal{N}}$ such that:
\begin{itemize}
\item For each $i \in \{1,...,k\}$, the pair $(v_i,\varphi)$ is a defect of $\ensuremath{\mathcal{N}}$, and
\item For any one-step coherent network $\ensuremath{\mathcal{N}}'$ such that $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$, if $(u,\varphi)$ is still a defect in $\ensuremath{\mathcal{N}}'$ then there is some $i \in \{1,...,k\}$ such that $(v_i,\varphi)$ is a defect of $\ensuremath{\mathcal{N}}'$.
\end{itemize}
\end{proposition}
\begin{proof}
We focus on the case of a type $\mathsf{U}$ eventuality; the case of type $\mathsf{G}$ eventualities is very similar. Let $\brak{\gamma}$ be a type $\mathsf{U}$ eventuality. We say that a defect $(u,\brak{\gamma})$ \textbf{one-step generates} a defect $(v,\brak{\gamma})$ if $v$ is one of the children of $u$, and $(v,\brak{\gamma})$ is a defect of $\ensuremath{\mathcal{N}}$. We then say that a defect $(u,\brak{\gamma})$ \textbf{generates} a defect $(v,\brak{\gamma})$ if $(v,\brak{\gamma})$ is a successor of $(u,\brak{\gamma})$ with respect to the transitive closure of the one-step generation relation.
Now suppose that $(u,\brak{\gamma})$ is a defect of $\ensuremath{\mathcal{N}}$. We claim that the set of leaves $l$ such that $(l,\brak{\gamma})$ is a defect generated by the defect $(u,\brak{\gamma})$ satisfies the conditions of the proposition. The first condition holds by definition. For the second condition, consider any one-step coherent network $\ensuremath{\mathcal{N}}'$ such that $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$. We show that for every defect $(w,\brak{\gamma})$ of $\ensuremath{\mathcal{N}}$, if $(w,\brak{\gamma})$ is still a defect of $\ensuremath{\mathcal{N}}'$, then the same holds for some defect $(v,\brak{\gamma})$ of $\ensuremath{\mathcal{N}}$ that is one-step generated by $(w,\brak{\gamma})$. By repeatedly applying this claim, starting with the defect $(u,\brak{\gamma})$, we eventually reach a leaf $l$ such that $(l,\brak{\gamma})$ is a defect generated by the defect $(u,\brak{\gamma})$, and is still a defect in $\ensuremath{\mathcal{N}}'$.
So, let $(w,\brak{\gamma})$ be a non-leaf defect of $\ensuremath{\mathcal{N}}$, such that $(w,\brak{\gamma})$ is still a defect of $\ensuremath{\mathcal{N}}'$. Suppose, for a contradiction, that for all the children $v$ of $w$, $(v,\brak{\gamma})$ is \emph{not} a defect of $\ensuremath{\mathcal{N}}'$. This means that for all children $v$ of $w$ in $\ensuremath{\mathcal{N}}$, and hence for all children of $w$ in $\ensuremath{\mathcal{N}}'$ since $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$,
there is some $k_v$ for which the eventuality $\brak{\gamma}$ is
partially fulfilled in $k_v$
steps at $v$ in $\ensuremath{\mathcal{N}}$. Let $K$ be the maximum of these numbers $k_v$, which exists since the set of successors of $w$ is finite. By one-step coherence of the network $\ensuremath{\mathcal{N}}'$ it follows that $\brak{\gamma}$ is
partially fulfilled in $K+1$ steps at $w$ in $\ensuremath{\mathcal{N}}'$, witnessed by the labelling function $L$ of $\ensuremath{\mathcal{N}}$ regarded as a marking of $\ensuremath{\mathcal{N}}'$. Thus, we have reached a contradiction, which completes the proof.
\end{proof}
\subsubsection{Step 3: removing defects}
We now show how to remove defects from a network:
\begin{proposition}
\label{p:remove-defects}
Let $(u,\varphi)$ be a defect of some finite, one-step coherent network $\ensuremath{\mathcal{N}}$. Then there exists a finite, one-step coherent network $\ensuremath{\mathcal{N}}'$ such that $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$, and such that $(u,\varphi)$ is not a defect of $\ensuremath{\mathcal{N}}'$.
\end{proposition}
\begin{proof}
By Proposition \ref{p:push}, we may assume w.l.o.g. that the defect $(u,\varphi)$ is such that $u$ is a leaf: if we can show how to remove the defect $\varphi$ at a single leaf, then, clearly, we can repeat the procedure to remove $\varphi$ at each leaf in the set $\{v_1,...,v_k\}$. {(Note that our procedure for removing a defect at a single leaf $v$ given below will not affect any other leaves, i.e. each leaf in the original network besides $v$ will still be a leaf in the new network.)}
Combined with Proposition \ref{p:push} this proves the result.
So, suppose that $(u,\varphi)$ is a defect and $u$ is a leaf. It is sufficient to show that there is a finite, one-step coherent network $\ensuremath{\mathcal{N}}''$ in which the root has the same label as $u$ in $\ensuremath{\mathcal{N}}$, and in which the eventuality $\varphi$ is partially fulfilled. We can then simply identify the root of the network $\ensuremath{\mathcal{N}}''$ with the leaf $u$ in $\ensuremath{\mathcal{N}}$ to form a finite, one-step coherent network $\ensuremath{\mathcal{N}}'$ such that $\ensuremath{\mathcal{N}}'' \sqsubseteq \ensuremath{\mathcal{N}}'$ and $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$. By Proposition \ref{p:stayfinished}, the eventuality $\varphi$ is partially fulfilled at $u$ in $\ensuremath{\mathcal{N}}'$.
Consider the $\Phi$-atoms $\Psi$ such that $\varphi \in \Psi$ and there exists a finite, one-step coherent network in which the root is labelled by $\Psi$ and the eventuality $\varphi$ is {partially} fulfilled.
Let $\delta$ be the disjunction of all conjunctions of the form $\bigwedge \Psi$ for all
such $\Phi$-atoms $\Psi$.
(This is well-defined since the set of all such conjunctions is finite, as long as we disallow conjunctions with redundant multiple occurrences of the same conjunct.) The result then follows from the following claim, which is proved in the appendix.
\begin{innercustomclaim}
$\vdash \varphi \to \delta$.
\end{innercustomclaim}
\end{proof}
\subsubsection{Final step: putting everything together}
The following proposition is proved in the appendix.
\begin{proposition}
\label{prop:network-extension}
Let $\ensuremath{\mathcal{N}}$ be a finite, one-step coherent $\Phi$-network. Then there exists a finite, one-step coherent $\Phi$-network $\ensuremath{\mathcal{N}}'$ such that:
\begin{enumerate}
\item $\ensuremath{\mathcal{N}} \sqsubseteq \ensuremath{\mathcal{N}}'$,
\item no leaf of $\ensuremath{\mathcal{N}}$ is a leaf of $\ensuremath{\mathcal{N}}'$,
\item no defect of $\ensuremath{\mathcal{N}}$ is a defect of $\ensuremath{\mathcal{N}}'$.
\end{enumerate}
\end{proposition}
\begin{proposition}
\label{prop:network-limit}
Every $\Phi$-atom is the label of the root of some perfect $\Phi$-network.
\end{proposition}
\begin{proof}
Take any $\Phi$-atom $\Psi$. We construct an infinite chain of finite one-step coherent $\Phi$-networks $\ensuremath{\mathcal{N}}_0 \sqsubseteq \ensuremath{\mathcal{N}}_1 \sqsubseteq...$ inductively as follows.
We start with $\ensuremath{\mathcal{N}}_0 = (T_0,L_0,\mathcal{G}_0)$ where $T_0 = \{u_0 \}$ is a singleton,
$L_0(u_0) = \Psi$ and $\mathcal{G}_0$ is empty (there are no non-leaf nodes).
This is trivially one-step coherent.
Suppose we have constructed the finite one-step coherent $\Phi$-networks
$\ensuremath{\mathcal{N}}_0 \sqsubseteq \ensuremath{\mathcal{N}}_1 \sqsubseteq \dots \sqsubseteq \ensuremath{\mathcal{N}}_n$.
Then we apply Proposition \ref{prop:network-extension} to construct a finite, one-step coherent network $\ensuremath{\mathcal{N}}_{n+1}$ such that $\ensuremath{\mathcal{N}}_n \sqsubseteq \ensuremath{\mathcal{N}}_{n+1}$, no leaves in $\ensuremath{\mathcal{N}}_n$ are still leaves in $\ensuremath{\mathcal{N}}_{n+1}$, and no defects in $\ensuremath{\mathcal{N}}_n$ are still defects in $\ensuremath{\mathcal{N}}_{n+1}$.
\smallskip
Finally, we construct the network $\ensuremath{\mathcal{N}}$ as the union of the chain $\ensuremath{\mathcal{N}}_0 \sqsubseteq \ensuremath{\mathcal{N}}_1 \sqsubseteq \dots$. Clearly, it is still one-step coherent and has no leaves and no defects, i.e. it is perfect.
\end{proof}
We can now state and prove the completeness theorem.
\begin{theorem}[Completeness of $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$]
\label{thm:completeness}
Let $\Gamma$ be a finite $\textsf{Ax}_{\Logicname{TLCGA}\xspace}$-consistent set of \Logicname{TLCGA}\xspace-formulae.
Then $\Gamma$ is satisfied in some concurrent game model.
\end{theorem}
\begin{proof}
Let $\Phi$ be the extended Fischer-Ladner closure of $\Gamma$ and let
$\Gamma^*$ be a $\Phi$-atom containing $\Gamma$ (which exists, by a standard version of Lindenbaum's lemma).
By Proposition \ref{prop:network-limit}, $\Gamma^*$ is the label of the root of some perfect $\Phi$-network. Then, by Proposition \ref{prop:network-sat},
$\Gamma^*$ is true at the respective state of the model $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{N}})$ induced by that network.
\end{proof}
\section{Finite model property and decidability }
\label{sec:FMP}
In this section we show the finite model property and decidability of our logic \Logicname{TLCGA}\xspace. Since we have a truth-preserving and effective translation of the language $\ensuremath{\mathcal{L}^\cga\xspace}$ into the fixpoint logic $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, it suffices to prove the finite model property and decidability for the latter. Here, we will avail ourselves of some abstract results from the literature on coalgebraic modal fixpoint logic. In particular, a general bounded-size model property for coalgebraic $\mu$-calculi was proved in \cite{fontaine2010automata}, and since $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is an instance of the coalgebraic $\mu$-calculus, we are almost done. There is one subtlety that we need to deal with, concerning the notion of ``finiteness'' of a model. There are two distinct notions of ``finite model'' that we may consider:
\begin{definition}
Let $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$ be a concurrent game model. We say that $\ensuremath{\mathcal{M}}$ is \emph{state-finite} if $\ensuremath{\mathsf{S}\xspace}$ is a finite set. We say that $\ensuremath{\mathcal{M}}$ is \emph{action-finite} if $\Act$ is a finite set. We say that $\ensuremath{\mathcal{M}}$ is \emph{finite} if it is both state-finite and action-finite.
\end{definition}
We get the following ``state-finite model property'', as a direct corollary of the general finite model theorem from \cite{fontaine2010automata}:
\begin{theorem}
Any satisfiable formula of $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is satisfiable in a state-finite model.
\end{theorem}
However, what we want is a proper finite model property. We can obtain this with a little bit of extra work. First, we obtain the following ``action-finite model property'':
\begin{lemma}
\label{l:afmp}
Any satisfiable formula of $\ensuremath{\mathcal{L}^\cga\xspace}$ is satisfiable in an action-finite model.
\end{lemma}
\begin{proof}
Since the size of the set of actions in the game forms constructed in the proof of the one-step completeness theorem (Theorem \ref{t:one-step-comp}) has an exponential upper bound depending on the size of the given finite consistent modal one-step theory, our construction of a model for a consistent \ensuremath{\mathcal{L}^\cga\xspace}-formula can easily be seen to provide an action-finite model.
\end{proof}
\begin{theorem}[Finite model property]
Any satisfiable formula of $\ensuremath{\mathcal{L}^\cga\xspace}$ is satisfiable in a finite model.
\end{theorem}
\begin{proof}
Let $\varphi$ be a satisfiable formula of $\ensuremath{\mathcal{L}^\cga\xspace}$. By soundness of our proof system $\varphi$ is consistent, and hence, by Lemma \ref{l:afmp}, it is satisfiable in a model $\ensuremath{\mathcal{M}} = (\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace},V)$ where the set $\Act$ is finite. Hence the corresponding equivalent formula $\varphi'$ in $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is satisfiable in this model too.
But the frame $(\ensuremath{\mathsf{S}\xspace},\Act,\mathfrak{g},\ensuremath{\mathsf{out}\xspace})$ can be equivalently represented as a coalgebra $f : \ensuremath{\mathsf{S}\xspace} \to \mathsf{G}^{\Act} \ensuremath{\mathsf{S}\xspace}$ for a functor $\mathsf{G}^{\Act}$ in which the set of actions $\Act$ is explicitly encoded, so the triple $(\ensuremath{\mathsf{S}\xspace},f,V)$ is a $\mathsf{G}^{\Act}$-model in which $\varphi'$ is satisfiable. By the finite model property theorem
proved in \cite{fontaine2010automata}, $\varphi'$ is satisfiable in a $\mathsf{G}^{\Act}$-model $(\ensuremath{\mathsf{S}\xspace}',f',V')$ for which $\ensuremath{\mathsf{S}\xspace}'$ is finite. This model can equivalently be represented as a concurrent game model $(\ensuremath{\mathsf{S}\xspace}',\Act,\mathfrak{g}',\ensuremath{\mathsf{out}\xspace}')$ in which $\varphi'$ is satisfiable, hence also $\varphi$. Since $\ensuremath{\mathsf{S}\xspace}'$ is finite and $\Act$ is finite, this is a finite model for $\varphi$.
\end{proof}
Together with our completeness result for \Logicname{TLCGA}\xspace, this implies:
\begin{theorem}
\label{t:cga-is-decidable}
The satisfiability problem for $\Logicname{TLCGA}\xspace$ is decidable.
\end{theorem}
Note that there is no need for an explicit bound on the size of a satisfying finite model for this result; completeness for \Logicname{TLCGA}\xspace ensures that we can computably enumerate the valid formulas of $\ensuremath{\mathcal{L}^\cga\xspace}$, and the finite model property means that we can computably enumerate the non-valid formulas as well. Hence, the set of valid formulas is a computable set.
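The decision procedure implicit in this argument interleaves two computable enumerations: one of the valid formulas (supplied by completeness) and one of the non-valid formulas (supplied by the finite model property, via enumeration of finite models). The following Python sketch illustrates only this interleaving pattern; the toy enumerations standing in for the two formula enumerations are ours, not part of the construction.

```python
from itertools import count

def decide(x, enum_yes, enum_no):
    """Decide membership of x given two computable enumerations
    (presented as functions returning ever-growing initial segments)
    that together cover every object exactly once: keep extending
    both until x shows up in one of them."""
    for n in count(1):
        if x in enum_yes(n):
            return True
        if x in enum_no(n):
            return False

# toy stand-ins for "valid" and "non-valid" formulas: evens vs. odds
evens = lambda n: [2 * k for k in range(n)]
odds = lambda n: [2 * k + 1 for k in range(n)]
print(decide(10, evens, odds), decide(7, evens, odds))
```

Termination is guaranteed precisely because every object appears in exactly one of the two enumerations, which is what completeness and the finite model property jointly provide for $\ensuremath{\mathcal{L}^\cga\xspace}$.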
Since the logic $\Logicname{TLCGA}\xspace^+$ also embeds into $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, we get a further corollary:
\begin{theorem}
The logic $\Logicname{TLCGA}\xspace^+$ has the finite model property.
\end{theorem}
The proof of Theorem \ref{t:cga-is-decidable} took a detour via the finite model property for the language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$. We shall now look more closely at the latter language and obtain an upper bound on the complexity of satisfiability for $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, under the assumption that the set $\Agt$ of agents is fixed. Decidability of $\Logicname{TLCGA}\xspace^+$ will then follow as a corollary. To obtain these results, we will require a closer analysis of the one-step satisfiability problem.
\begin{definition}
Let $V$ be a fixed set of proposition variables.
The set of \textbf{positive one-step formulas} over $V$ is generated by the following grammar:
$$\brak{\gamma_0} \mid \neg \brak{\gamma_1} \mid \varphi \wedge \varphi \mid \varphi \vee \varphi$$
where $\gamma_0$ is a goal assignment such that for each coalition $C$ in the support of $\gamma_0$
we have $\gamma_0(C) = \atlx p$ for some $p \in V$, called a \textbf{positive goal assignment}, and $\gamma_1$ is a goal assignment such that for each coalition $C$ in the support of $\gamma_1$
we have $\gamma_1(C) = \atlx \neg p$ for some $p \in V$, called a \textbf{negative goal assignment}.
A \textbf{positive one-step sequent} over $V$ is a finite set $\Gamma$ of formulas, each of which is either of the form $\brak{\gamma_0}$ where $\gamma_0$ is a positive goal assignment, or of the form $\neg \brak{\gamma_1}$ where $\gamma_1$ is a negative goal assignment.
\end{definition}
Let $\Gamma$ be a positive one-step sequent over $V$. A \textbf{redistribution} of $\Gamma$ is a tuple $(C_1,\gamma_1,\ldots,C_n,\gamma_n)$ where $C_1,\ldots,C_n$ are pairwise disjoint coalitions and each $\gamma_i$ is a positive goal assignment with $\brak{\gamma_i} \in \Gamma$.
The intuition behind this notion is as follows: suppose that $\gamma_1,...,\gamma_n$ are positive goal assignments with $\brak{\gamma_i} \in \Gamma$ for each $i \in \{1,...,n\}$. Then according to the sequent $\Gamma$, there are action profiles $\ensuremath{\zeta\xspace}_1,...,\ensuremath{\zeta\xspace}_n$ such that for each $\ensuremath{\zeta\xspace}_i$, every coalition $C$ ensures its goal formula $\gamma_i(C)$ with respect to the action profile $\ensuremath{\zeta\xspace}_i$. But then, if $C_1,...,C_n$ are any pairwise disjoint coalitions, the coalition $C_1 \cup ... \cup C_n$ has a joint action in which each coalition $C_i$ ensures its goal formula $\gamma_i(C_i)$, by playing in accordance with the action profile $\ensuremath{\zeta\xspace}_i$. In other words, the action profiles $\ensuremath{\zeta\xspace}_1,...,\ensuremath{\zeta\xspace}_n$ can be combined into a joint action for $C_1 \cup ... \cup C_n$ by restricting each action profile $\ensuremath{\zeta\xspace}_i$ to the coalition $C_i$, and then ``gluing together'' the resulting joint actions into a joint action for $C_1 \cup ... \cup C_n$. The set of redistributions of $\Gamma$ can be viewed as a set of notations for the different ways in which action profiles corresponding to the positive goal assignments occurring in $\Gamma$ can be combined in this manner.
Note that a redistribution $(C_1,\gamma_1,\ldots,C_n,\gamma_n)$ of $\Gamma$ can be uniquely represented by a map:
$$f : \mathcal{P} \Agt \to \Gamma \cup \{*\}$$
defined by setting $f(C_i) = \brak{\gamma_i}$ and $f(B) = *$ for $B \notin \{C_1,...,C_n\}$. Hence,
there are at most $(\vert \Gamma \vert + 1)^{k}$ redistributions of $\Gamma$, where $k = 2^{\vert \Agt \vert}$. Recall that the set $\Agt$ is fixed,
so $2^{\vert \Agt \vert}$ is a constant, giving a polynomial bound on the number of redistributions.
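To make the counting concrete, the map representation above can be enumerated directly for a small instance. The following Python sketch (the encoding of coalitions as frozensets and of goal assignments as opaque labels is ours, chosen for illustration) lists all redistributions for $\vert\Agt\vert = 2$ and a two-element set of positive goal assignments, and compares the count against the bound $(\vert \Gamma \vert + 1)^{2^{\vert \Agt \vert}}$.

```python
from itertools import product

def redistributions(agents, gammas):
    """Enumerate redistributions as maps f : P(Agt) -> Gamma + {*}
    whose 'active' coalitions (those not mapped to *) are pairwise
    disjoint; each goal assignment is represented only by a label."""
    n = len(agents)
    coalitions = [frozenset(a for i, a in enumerate(agents) if mask >> i & 1)
                  for mask in range(1 << n)]
    star = object()
    for choice in product(list(gammas) + [star], repeat=len(coalitions)):
        active = [C for C, g in zip(coalitions, choice) if g is not star]
        # keep only choices whose active coalitions are pairwise disjoint
        if all(C.isdisjoint(D) for i, C in enumerate(active) for D in active[i + 1:]):
            yield {C: g for C, g in zip(coalitions, choice) if g is not star}

agents = ('a', 'b')
gammas = ['<g1>', '<g2>']                         # two positive goal assignments
reds = list(redistributions(agents, gammas))
bound = (len(gammas) + 1) ** (2 ** len(agents))   # (|Gamma|+1)^{2^{|Agt|}}
print(len(reds), bound)
```

For a fixed $\Agt$ the exponent $2^{\vert\Agt\vert}$ is a constant, so both the bound and the enumeration are polynomial in $\vert\Gamma\vert$.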
A goal assignment $\gamma$ with $\brak{\gamma} \in \Gamma$ or $\neg \brak{\gamma} \in \Gamma$ will be called a \textbf{relevant goal assignment} for $\Gamma$. Given a redistribution $R = (C_1,\gamma_1,...,C_n,\gamma_n)$, we let $F(R)$ denote the set:
$$\{p \mid \exists i, B \subseteq C_i:\;\gamma_i(B) = \atlx p\}.$$
Intuitively, $F(R)$ is the set of goals forced by coalitions that have formed in the action profile represented by the redistribution $R$.
Given a triple $(R,\gamma',C')$ consisting of a redistribution $R = (C_1,\gamma_1,...,C_n,\gamma_n)$, a negative relevant goal assignment $\gamma'$, and a coalition $C'$ such that $\gamma'(C') = \atlx \neg q$, we denote by $F(R,\gamma',C')$ the set:
$$\{p \mid \exists i, B \subseteq C_i \cap C': \; \gamma_i(B) = \atlx p\} \cup \{q\}.$$
Intuitively, the set $F(R,\gamma',C')$ is the minimal set of variables that have to be in the outcome of a strategy profile $\ensuremath{\zeta\xspace}$, in which each coalition $C_i \in \{C_1,...,C_n\}$ acts towards its goal with respect to the goal assignment $\gamma_i$, and at the same time the players in $\Agt\setminus C'$ act to block the goal of $C'$ according to the goal assignment $\gamma'$.
\begin{definition}
Let $V$ be a given set of proposition variables. A \textbf{satisfiability constraint} over $V$ is a subset of $\mathcal{P}(V)$. Given a positive one-step sequent $\Gamma$ over $V$ and a satisfiability constraint $\mathcal{S}$ over $V$, we say that $\Gamma$ is $\mathcal{S}$\textbf{-satisfiable} if there exists a game form $\mathcal{G} =
(\Act,\act,\mathcal{P}(V),\ensuremath{\mathsf{out}\xspace})$ such that:
\begin{itemize}
\item If $\brak{\gamma} \in \Gamma$ then there exists an action profile $\ensuremath{\zeta\xspace}$ such that, for every coalition $C$, if $\gamma(C) = \atlx p$ then $p \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$ for every $\ensuremath{\zeta\xspace}'\sim_C \ensuremath{\zeta\xspace}$.
\item If $\neg \brak{\gamma} \in \Gamma$ then for every action profile $\ensuremath{\zeta\xspace}$, there is some coalition $C$ and some $\ensuremath{\zeta\xspace}' \sim_C \ensuremath{\zeta\xspace}$ such that $p \in \ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}')$, where $\gamma(C) = \atlx \neg p$.
\item For every action profile $\ensuremath{\zeta\xspace}$, there is some $V' \in \mathcal{S}$ with $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}) \subseteq V'$.
\item For every $V' \in \mathcal{S}$ there is some action profile $\ensuremath{\zeta\xspace}$ with $\ensuremath{\mathsf{out}\xspace}(\ensuremath{\zeta\xspace}) \subseteq V'$.
\end{itemize}
Given a game form $\mathcal{G}$ satisfying these conditions, we say that $\mathcal{G}$ $\mathcal{S}$\textbf{-satisfies} $\Gamma$.
\end{definition}
The following proposition reduces the question of whether a positive one-step sequent $\Gamma$ is $\mathcal{S}$-satisfiable to checking of a few relatively simple combinatorial conditions on the set $\mathcal{S}$. The proof is in the appendix.
\begin{proposition}
\label{p:c-sat}
Given a set $V$ of variables, a satisfiability constraint $\mathcal{S}$ over $V$ and a positive one-step sequent $\Gamma$ over $V$, $\Gamma$ is $\mathcal{S}$-satisfiable if and only if:
\begin{enumerate}
\item For each redistribution $R$ of $\Gamma$, there is $Z \in \mathcal{S}$ with $F(R) \subseteq Z$.
\item For each redistribution $R$ of $\Gamma$ and each negative relevant goal assignment $\gamma'$, there is some $C' \subseteq \Agt$ such that:
\begin{itemize}
\item either $C' = \Agt$ and $q \in Z$ for every $Z \in \mathcal{S}$, where $\gamma'(\Agt) = \atlx\neg q$, or
\item $C' \neq \Agt$ and there is some $Z \in \mathcal{S}$ such that $F(R,\gamma',C') \subseteq Z$.
\end{itemize}
\end{enumerate}
\end{proposition}
To illustrate this proposition, let $\Agt = \{\ensuremath{\mathsf{a}\xspace},\ensuremath{\mathsf{b}\xspace}\}$, $V = \{p,q,r\}$ and let: $$\Gamma = \{\brak{\{\ensuremath{\mathsf{a}\xspace}\} \gass \atlx p},\brak{\{\ensuremath{\mathsf{b}\xspace}\} \gass \atlx q }, \neg \brak{\{\ensuremath{\mathsf{b}\xspace}\}\gass \atlx \neg r}\}$$
Then $\Gamma$ is $\mathcal{S}$-satisfiable for $\mathcal{S} = \{\{p,q\},\{q,r\}\}$. However, $\Gamma$ is not $\mathcal{S}$-satisfiable for $\{\{p,q\},\{p,r\}\}$. Consider the redistribution $R$ in which $\ensuremath{\mathsf{a}\xspace}$ ``votes'' for $\brak{\{\ensuremath{\mathsf{a}\xspace}\} \gass \atlx p}$ and $\ensuremath{\mathsf{b}\xspace}$ votes for $\brak{\{\ensuremath{\mathsf{b}\xspace}\} \gass \atlx q }$. Then $\ensuremath{\mathsf{a}\xspace}$ forces $p$ on their own, and $\ensuremath{\mathsf{b}\xspace}$ forces $q$ on their own, so there should be a possible outcome containing both $p$ and $q$ -- which there is. But $\ensuremath{\mathsf{b}\xspace}$ should not be able to force $\neg r$ on their own since $\neg \brak{\{\ensuremath{\mathsf{b}\xspace}\}\gass \atlx \neg r} \in \Gamma$, so there should be some outcome containing $q$, which $\ensuremath{\mathsf{b}\xspace}$ is forcing on their own, but also $r$. Formally, this is captured by $F(R,\gamma,\{\ensuremath{\mathsf{b}\xspace}\}) = \{q,r\}$, where $\brak{\gamma} = \brak{\{\ensuremath{\mathsf{b}\xspace} \}\gass \atlx \neg r}$.
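The sets $F(R)$ and $F(R,\gamma',C')$ for this example can be computed mechanically. A small Python sketch, with an ad hoc encoding of goal assignments as dictionaries from coalitions to forced variables (the encoding is ours, chosen for illustration), reproduces the containment checks of the example:

```python
# Agt = {a, b}, V = {p, q, r}; goal assignments encoded as
# dictionaries from coalitions (frozensets) to the forced variable.
gamma_a = {frozenset('a'): 'p'}      # [{a} -> Xp]
gamma_b = {frozenset('b'): 'q'}      # [{b} -> Xq]
gamma_neg = {frozenset('b'): 'r'}    # gamma'({b}) = X(not r)

# the redistribution in which a "votes" for gamma_a and b for gamma_b
R = [(frozenset('a'), gamma_a), (frozenset('b'), gamma_b)]

def F(R):
    """F(R) = {p | exists i and B subseteq C_i with gamma_i(B) = Xp}."""
    return {p for C, g in R for B, p in g.items() if B <= C}

def F_neg(R, gamma_neg, C_neg):
    """F(R, gamma', C') = {p | exists i, B subseteq (C_i intersect C')
    with gamma_i(B) = Xp} union {q}, where gamma'(C') = X(not q)."""
    return {p for C, g in R for B, p in g.items() if B <= C & C_neg} | {gamma_neg[C_neg]}

fr = F(R)
fn = F_neg(R, gamma_neg, frozenset('b'))
S_good = [{'p', 'q'}, {'q', 'r'}]
S_bad = [{'p', 'q'}, {'p', 'r'}]
ok_good = any(fr <= Z for Z in S_good) and any(fn <= Z for Z in S_good)
ok_bad = any(fr <= Z for Z in S_bad) and any(fn <= Z for Z in S_bad)
print(fr, fn, ok_good, ok_bad)
```

Here $F(R) = \{p,q\}$ is covered by $\{p,q\}$ in both constraints, but $F(R,\gamma,\{\ensuremath{\mathsf{b}\xspace}\}) = \{q,r\}$ is covered only in the first one, matching the example.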
\begin{proposition}
\label{p:polytime}
Let $V$ be a fixed set of proposition variables and $\mathcal{S}$ a fixed satisfiability constraint. Then the problem of checking whether a given positive one-step sequent $\Gamma$ is $\mathcal{S}$-satisfiable is solvable in polynomial time.
\end{proposition}
\begin{proof}
This is an easy consequence of Proposition \ref{p:c-sat}: we only have to check the conditions (1) and (2) on $\mathcal{S}$ for each redistribution of $\Gamma$ and each negative relevant goal assignment. Recalling that there are at most polynomially many redistributions of $\Gamma$, the result follows.
\end{proof}
We extend the notion of $\mathcal{S}$-satisfiability to arbitrary positive one-step formulas over $V$ in the obvious manner.
Let $V$ be a set of proposition variables and $\mathcal{S} \subseteq \mathcal{P}(V)$ a satisfiability constraint. The \textbf{one-step satisfiability problem} for $V,\mathcal{S}$ is: given a positive one-step formula $\varphi$ over $V$, decide whether $\varphi$ is $\mathcal{S}$-satisfiable.
\begin{proposition}
\label{p:exptime}
The one-step satisfiability problem for given $V,\mathcal{S}$ can be solved in nondeterministic polynomial time (measured in the length of a given formula $\varphi$).
\end{proposition}
\begin{proof}
We can check $\mathcal{S}$-satisfiability of $\varphi$ as follows: we keep in memory a list of formulas $L$, beginning with the singleton list $[\varphi]$. If the leftmost formula in the list $L$ that is not ``atomic'', i.e.\ not of the form $\brak{\gamma}$ or $\neg \brak{\gamma}$ for some $\gamma$, has $\vee$ as its main connective, then we guess one of the disjuncts non-deterministically and replace the disjunction by the guessed disjunct. If the main connective of the leftmost non-atomic formula is $\wedge$, then we remove that formula and instead add each conjunct to the list. Clearly this reduction can only go on for a number of steps bounded by the length of the formula $\varphi$, and once each formula in the list is atomic, we apply Proposition \ref{p:polytime} to check whether the one-step sequent read off from the list is $\mathcal{S}$-satisfiable.
\end{proof}
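The nondeterministic guessing in this proof can be simulated deterministically by collecting all atomic sequents reachable from $\varphi$; each branch of the recursion below corresponds to one run of the machine. A minimal Python sketch, with an ad hoc tuple encoding of positive one-step formulas (the encoding is ours):

```python
def atomic_sequents(phi):
    """All sets of atomic one-step formulas reachable from phi by the
    reduction in the proof: each 'or' is resolved by a guess (here both
    branches are explored), each 'and' is split into its conjuncts.
    Atoms are strings; compound formulas are ('and', l, r) / ('or', l, r)."""
    results = set()

    def step(L):
        for i, f in enumerate(L):
            if isinstance(f, tuple):
                op, left, right = f
                if op == 'or':
                    step(L[:i] + [left] + L[i + 1:])   # guess left disjunct
                    step(L[:i] + [right] + L[i + 1:])  # guess right disjunct
                else:                                  # 'and': split
                    step(L[:i] + [left, right] + L[i + 1:])
                return
        results.add(frozenset(L))                      # all formulas atomic

    step([phi])
    return results

phi = ('and', ('or', '<g1>', '<g2>'), '~<g3>')
print(atomic_sequents(phi))
```

The formula is $\mathcal{S}$-satisfiable if and only if at least one of the resulting atomic sequents is, which is exactly the acceptance condition of the nondeterministic procedure.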
It follows from a general result in \cite{fontaine2010automata} that, if the one-step satisfiability problem for fixed $V,\mathcal{S}$ is solvable in exponential time, then the satisfiability problem for the full $\mu$-calculus language is in double exponential time. From Proposition \ref{p:exptime} we thus get:
\begin{theorem}
\label{t:mu-dexptime}
The satisfiability problem for $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ is in $\textrm{2ExpTime}$.
\end{theorem}
As a corollary, we get:
\begin{theorem}
The logic $\Logicname{TLCGA}\xspace^+$ (hence, also \Logicname{TLCGA}\xspace) is decidable in $\textrm{3ExpTime}$.
\end{theorem}
\begin{proof}
The translation from $\Logicname{TLCGA}\xspace^+$ to $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ increases the size of a formula by at most a single exponential, so the result follows from Theorem \ref{t:mu-dexptime}.
\end{proof}
We do not know whether the $\textrm{2ExpTime}$ bound given by Theorem \ref{t:mu-dexptime} is tight. Many variants of the $\mu$-calculus share the same $\textrm{ExpTime}$-complete complexity of the satisfiability problem as the standard modal $\mu$-calculus, as demonstrated in \cite{cirstea2011exptime}. However, the conditions under which the general $\textrm{ExpTime}$-bound of \cite{cirstea2011exptime} holds are not trivial to verify; in fact we believe they may fail for the language $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$. Furthermore, even if the satisfiability problem for $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ happens to be in $\textrm{ExpTime}$, this bound does not immediately transfer to the language $\ensuremath{\mathcal{L}^\cga\xspace}$, since our translation of $\ensuremath{\mathcal{L}^\cga\xspace}$ into $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ can produce an exponential blowup in the formula size.
In summary, the $\textrm{2ExpTime}$-bound for satisfiability in $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$ stated above is the best we can currently claim for sure about the complexity of that satisfiability problem. The problem of determining the precise complexity of satisfiability, both for $\ensuremath{\mathcal{L}^\cga\xspace}$ and for $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, may very well turn out to be a difficult one. We leave this as a question for future research.
\section{Concluding remarks}
\label{sec:concluding}
The present work falls in the line of research employing formal logical methods for modelling, expressing, and reasoning about strategic interactions in multi-agent systems, and in particular multi-player games, initiated with the introduction of logics such as \Logicname{CL}\xspace, \Logicname{ATL}\xspace, and Strategy Logic. The coalitional goal assignment operator $\brak{\cdot}$ introduced here covers as special cases the modal operators for strategic abilities featuring in \Logicname{CL}\xspace and \Logicname{ATL}\xspace. Furthermore, whereas these strategic operators assume purely adversarial behaviour of the opposing agents and coalitions, and express unconditional ability of the proponent coalition to achieve its objective against any such behaviour, the operator $\brak{\cdot}$ expresses a natural combination of cooperative and non-cooperative interactions which is more common and realistic in `real-life' multi-agent scenarios. Whereas this operator is already quite expressive, it formalises but one general and important pattern of such interaction. Other such patterns, formalising variants of \emph{conditional} strategic reasoning, have been proposed and studied in \cite{LORIVII-GorankoJu}, and certainly more are currently being identified and studied. The patterns of strategic interaction formalised by $\brak{\cdot}$ also enable the expression of new versions of socially relevant solution concepts, such as co-equilibrium, discussed in Section \ref{sec:applications}, opening new perspectives towards formal analysis of multi-agent strategic interaction.
On the more technical aspects of our work, we note that, while the most important logical questions about \Logicname{TLCGA}\xspace have been resolved here, there is still substantial technical follow-up work to do, including complete axiomatizations of important variations and extensions, such as $\Logicname{TLCGA}\xspace^+$ and possibly the full $\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, as well as development of practically efficient tableau-based algorithmic methods for deciding satisfiability and for solving the relevant model checking problems. In particular, identifying the exact complexities of these problems, for which reasonable upper bounds have been established in Section \ref{sec:FMP}, is yet to be completed.
For solving these problems and exploring further research directions, we hope that the interaction with fragments of Strategy logics, initiated in Section \ref{subsec:ST2SL}, will prove fruitful both ways. In particular, the distinction between path-based and play-based semantics established in Section \ref{subsec:CGAsemantics-variations} suggests that the latter type of semantics is worth exploring in the context of Strategy logics, too.
Lastly, we note that in this paper we have mainly explored the theoretical foundations and the purely logical and computational aspects of the logics \Logicname{TLCGA}\xspace, $\Logicname{TLCGA}\xspace^+$, and
$\ensuremath{\mathcal{L}^\xcga\xspace}_\mu$, whereas their potential for applications to game theory and, more generally, to formal modelling and reasoning about multi-agent strategic interaction has only been indicated in Section \ref{sec:applications}, but is yet to be unfolded in future work.
\subsection{Proof of Theorems}
We denote by $ \DD^{\ast,j}_{\lambda_1, \lambda_2} $ a part of $ \DD^{\ast}_{\lambda_1, \lambda_2} $
(respectively $ \DD^{\#,j}_{\lambda_1, \lambda_2} $ a part of $ \DD^{\#}_{\lambda_1, \lambda_2} $)
(see \eqref{Le4-1}) corresponding to the case number $j \in \{1,...,5 \}$:
\begin{multline} \label{Le7-0}
\DD^{\ast,j}_{\lambda_1, \lambda_2} = \sum_{ (\br_1,\br_2) \in \UU_{ \lambda_1, \lambda_2 }} \;\;
\sum_{\substack{ \fm_{i,k_{i,j}} \in I_{p_i^{ r_{i,k_{i,j}} -r_{i,k_{i,j-1}} }} \\ 0< |m_j| \leq n^{10}, \; |\fm_{i,j}| \leq n^{10}}} \;
\frac{1}{\bar{m}_1 \bar{m}_{2}}
\; \prod_{i,j=1}^{2} \frac{1}{\bar{\fm}_{i, j}} \delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} ) \Delta(\mathrm{case} \; j ),\\
\DD^{\#,j}_{\lambda_1, \lambda_2} = \sum_{ (\br_1,\br_2) \in \UU_{ \lambda_1, \lambda_2 }} \;\;
\sum_{ 0< |m_j| \leq n^{10}, \; |\fm_{i,j}| \leq n^{10}} \;
\frac{1}{\bar{m}_1 \bar{m}_{2}}
\; \prod_{i,j=1}^{2} \frac{1}{\bar{\fm}_{i, j}} \delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} ) \Delta(\mathrm{case} \; j ).
\end{multline}
We will prove that $ \DD^{\ast,j}_{\lambda_1, \lambda_2} \ll n^2 $ (see \eqref{Le4-30}).
We consider the cases $j \in [1,3]$ in Lemma 6 and the cases $j \in [4,5]$ in Lemma 7.
The proofs of Lemma~6 and Lemma~7 are similar; to make the article more readable, we do not combine them. \\ \\
{\bf Lemma 6.} {\it The estimate \eqref{Le4-30} is true for $ \lambda_{1} = \lambda_{2}=0$.}\\ \\
{\bf Proof.} We will consider \eqref{Le4-30} for $n \geq n_0$, with $ n_0 = [2^ {4C_1+20}]p_1^4 p^4_2$, where $C_1$ is defined in the Corollary.
From \eqref{Beg-26} and \eqref{Beg-40}, we have for $n \geq n_0$, $ i,j \in \{1,2\}$, that
\begin{multline} \label{Le5-4}
\hat{r}_j= \max(r_{1,j}, r_{2,j})> V, \quad \quad r^{+}_i - r^{-}_i > \VV \;\;\for \;\;\lambda_{i}=1, \quad \quad r^{+}_i - r^{-}_i \leq \VV \;\;\for \; \; \lambda_{i}=0, \\
r^{+}_i =\max( r_{i,1}, r_{i,2} ), \quad r^{-}_i =\min( r_{i,1}, r_{i,2} ), \quad V =p_1^4 p_2^4 [\log_2^4 n] \geq 4(C_1+10) \log_2^2 n, \\
V/(4 \log_2(p_1p_2)) > \VV = [\log_2^3 n]\geq 4C_1 \log_2^2 n.
\end{multline}
{\it Case} 1: $ \lambda_{1} = \lambda_{2}=0$ and $r^{+}_{i_0} \leq V$ for some $i_0 \in \{1,2\}$. Using \eqref{Le7-0}, we derive
\begin{multline} \label{Le4-98}
\DD^{\#,1}_{0, 0} = \sum_{ (\br_1,\br_2) \in \UU_{ 0, 0 }} \;\;
\sum_{ 0< |m_j| \leq n^{10}, \; |\fm_{i,j}| \leq n^{10},\; i,j=1,2} \;
\frac{1}{\bar{m}_1 \bar{m}_{2}}
\; \prod_{i,j=1}^{2} \frac{1}{\bar{\fm}_{i, j}} \delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} ) \Delta(r^{+}_{i_0} \leq V )\\
\ll
\sum_{ (\br_1,\br_2) \in \UU_{ 0, 0 }}\; \log^6 n \;
\Delta(r^{+}_{i_0} \leq V ) \leq
\sum_{\substack{ r^{+}_i - r^{-}_i \leq \VV, r^{+}_{i_0} \leq V\\
r_{i,j} \in [1,n], \; i,j=1,2}} \log^6 n
\ll nV \VV^2 \log^6 n \ll n^2.
\end{multline}
In the following, we consider the case $\min (r^{+}_1,r^{+}_2) > V $.
Let
\begin{equation} \label{Le5-8}
g_{i} := \sum_{j=1}^2 \fm_{i,j} p_{i}^{r^{+}_{i} - r_{i,j} }, \quad
i^{'} \equiv i+1 \mod 2, \;\; i^{'} \in \{ 1,2 \} , \;\; i=1,2.
\end{equation}
By \eqref{Beg-3}, \eqref{Le3-18} and \eqref{Le7-0}, we obtain that
\begin{multline} \label{Le5-2}
\tilde{m}_{i} =\sum_{j=1}^{2} (\fm_{i,j} + m_{j} M_{i, \br_{j}})p_i^{ r^{+}_i - r_{i,j} }
\mod p_i^{r^{+}_i}, \quad M_{i,\br_j} p_{i^{'}}^{r_{i^{'},j} } \equiv 1 \mod p_i^{r_{i,j}},\\
p_{i^{'}}^{r^{+}_{i^{'}}} \tilde{m}_{i} = g_{i} p_{i^{'}}^{r^{+}_{i^{'}}} + \sum_{j=1}^2 m_{j} M_{i,\br_j} p_{i^{'}}^{r^{+}_{i^{'}}} p_{i}^{r^{+}_{i} - r_{i,j} }
, \quad
M_{i,\br_j} p_{i^{'}}^{ r^{+}_{i^{'}} } \equiv p_{i^{'}}^{ r^{+}_{i^{'}}- r_{i^{'},j} } \mod p_i^{r_{i,j}},\\
M_{i,\br_j} p_{i^{'}}^{ r^{+}_{i^{'}} } p_i^{r^{+}_i -r_{i,j}} \equiv
p_{i^{'}}^{ r^{+}_{i^{'}} - r_{i^{'},j} } p_i^{r^{+}_i -r_{i,j}} \mod p_i^{r^{+}_i}.
\end{multline}
Hence
\begin{equation} \label{Le5-10}
g_{i} p_{i^{'}}^{r^{+}_{i^{'}}} + \sum_{j=1}^2 m_{j} p_{i}^{r^{+}_{i} - r_{i,j} } p_{i^{'}}^{r^{+}_{i^{'}} - r_{i^{'},j} } \equiv 0 \mod p_i^{r^{+}_i}, \quad i=1,2.
\end{equation}
{\it Case} 2: $ \lambda_{1} = \lambda_{2}=0$, $\quad g_1=g_2=0$ and $\min (r^{+}_1,r^{+}_2) > V $.
From \eqref{Le5-10}, we have
\begin{equation} \label{Le5-14}
\sum_{j=1}^2 m_{j} p_{i}^{r^{+}_{i} - r_{i,j} } p_{i^{'}}^{r^{+}_{i^{'}} - r_{i^{'},j} } \equiv 0 \mod p_i^{r^{+}_i} .
\end{equation}
Bearing in mind \eqref{Le5-4}, and that $0 < |m_j| \leq n^{10}$ ($j=1,2)$,
$ \lambda_{1} = \lambda_{2}=0$,
$ V/(4\log_2(p_1p_2)) $ $\geq \VV = [\log_2^3 n] \geq r^{+}_i - r^{-}_i$,
$r^{+}_i >V$ $(i=1,2)$, we get
\begin{equation} \label{Le5-30}
\log_2 \big(\big| \sum_{j=1}^2 m_{j} p_{i}^{r^{+}_{i} - r_{i,j} } p_{i^{'}}^{r^{+}_{i^{'}} - r_{i^{'},j} } \big|\big) \leq 20 \log_2 n + \VV \log_2 (p_1 p_2) \leq V -\VV <r^{+}_i -\VV.
\end{equation}
Hence the congruence \eqref{Le5-14} is an equality:
\begin{equation} \nonumber
\sum_{j=1}^2 m_{j} p_{i}^{r^{+}_{i} - r_{i,j} } p_{i^{'}}^{r^{+}_{i^{'}} - r_{i^{'},j} } =0, \quad i=1,2.
\end{equation}
Let $\mu_3 \geq 1$ be the greatest common divisor of $ m_1$ and $m_2$, and let $ m_1^{*}=m_1/\mu_3$, $ m_2^{*}=m_2/\mu_3$, $ ( m_1^{*}, m_2^{*})=1 $. Then
\begin{equation} \nonumber
\sum_{j=1}^2 m_j^{*} p_{i}^{r^{+}_{i} - r_{i,j} } p_{i^{'}}^{r^{+}_{i^{'}} - r_{i^{'},j} } =0.
\end{equation}
It is easy to verify that
$ |m_1^{*} m_2^{*}| = p_1^{r^{+}_{1}- r^{-}_1} p_2^{r^{+}_{2} - r^{-}_2} $ and $ |m_j^{*}| \in \{1, p_1^{r^{+}_{1}- r^{-}_1},
p_2^{r^{+}_{2} - r^{-}_2}, p_1^{r^{+}_{1}- r^{-}_1} p_2^{r^{+}_{2} - r^{-}_2} \}$, $j=1,2$.
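This shape of the coprime solution can be checked on a concrete instance. The following Python sketch, with small illustrative parameters chosen by us (not taken from the proof), computes the coprime solution of $m_1^{*} c_1 + m_2^{*} c_2 = 0$ for coefficients $c_j = p_1^{r^{+}_{1}-r_{1,j}} p_2^{r^{+}_{2}-r_{2,j}}$ and verifies $|m_1^{*} m_2^{*}| = p_1^{r^{+}_{1}-r^{-}_1} p_2^{r^{+}_{2}-r^{-}_2}$:

```python
from math import gcd

# illustrative parameters (not from the proof): p1 = 2, p2 = 3,
# (r_{1,1}, r_{1,2}) = (1, 3) and (r_{2,1}, r_{2,2}) = (2, 1)
p1, p2 = 2, 3
r1, r2 = (1, 3), (2, 1)
r1p, r1m = max(r1), min(r1)
r2p, r2m = max(r2), min(r2)

# coefficients c_j = p1^{r1+ - r_{1,j}} * p2^{r2+ - r_{2,j}} of m_j^*
c = [p1 ** (r1p - r1[j]) * p2 ** (r2p - r2[j]) for j in range(2)]

# the coprime solution of c[0]*m1 + c[1]*m2 = 0
g = gcd(c[0], c[1])
m1, m2 = c[1] // g, -(c[0] // g)
print(c, (m1, m2), abs(m1 * m2), p1 ** (r1p - r1m) * p2 ** (r2p - r2m))
```

Here $c = (4,3)$, the coprime solution is $(m_1^{*},m_2^{*}) = (3,-4)$, and $|m_1^{*} m_2^{*}| = 12 = 2^{2}\cdot 3^{1}$, as the claim predicts.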
Let $\mu_i \geq 1$ be the greatest common divisor of $ \fm_{i,1}$ and $\fm_{i,2}$.
Taking into account that $ g_1=g_2=0$, we have from \eqref{Le5-8} that
$ \fm_{i,1} = \fm_{i,2} =0 $ or
$ |\fm_{i,1}\fm_{i,2}| =\mu_i^2 p_i^{r^{+}_{i}- r^{-}_i} $ and $ |\fm_{i,j}| /\mu_i
\in\{1, p_i^{r^{+}_{i}- r^{-}_i}\} $ , $i,j=1,2$.
Substituting the above relations into \eqref{Le7-0}, we get
\begin{equation} \nonumber
\DD^{\#,2}_{0, 0} \leq
\sum_{\substack{ 0< |m_j| \leq n^{10}, \; |\fm_{i,j}| \leq n^{10} \\ i,j=1,2}} \;
\sum_{ \substack{ (\br_1,\br_2) \in \UU_{ 0, 0 } \\ \min (r^{+}_1,r^{+}_2) > V }}\;
\frac{1}{\bar{m}_1 \bar{m}_{2}}
\; \prod_{i,j=1}^{2} \frac{\Delta(g_i=0)}{\bar{\fm}_{i, j}} \delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} )
\end{equation}
\begin{equation} \label{Le4-90}
\ll
\sum_{\substack{ \mu_i \geq 1 \\ i=1,2,3}} \;
\sum_{\substack{ r_{i,j} \in [1,n] \\ i,j=1,2}} \;
\frac{1}{\mu_1^2 \mu_2^2 \mu_3^2 }
\; p_1^{-r^{+}_{1}+ r^{-}_1} p_2^{-r^{+}_{2}+ r^{-}_2} \ll \sum_{ \substack{ r_{i,j} \in [1,n] \\ i,j=1,2}}
p_1^{-r^{+}_{1}+ r^{-}_1} p_2^{-r^{+}_{2}+ r^{-}_2}
\ll n^2.
\end{equation}\\
{\it Case} 3: $ \lambda_{1} = \lambda_{2}=0$, $\quad |g_1| + |g_2| >0$ and $\min (r^{+}_1,r^{+}_2) > V $.
Let $g_1 \neq 0$. The proof for the case $g_2 \neq 0$ is similar.
By \eqref{Le5-10}, we get
\begin{equation} \label{Le6-50}
g_{1} p_2^{r^{+}_{2}} + \xi \equiv 0 \mod p_1^{r^{+}_{1}},
\quad \with \quad \xi=\sum_{j=1}^2 m_{j} p_{1}^{r^{+}_{1} - r_{1,j} } p_{2}^{r^{+}_{2} - r_{2,j} }, \quad r^{+}_1 > V.
\end{equation}
Bearing in mind that $ \lambda_{1} = \lambda_{2}=0$, we get
from \eqref{Le5-4}
that $ \rho_i := r^{+}_{i} -r^{-}_{i}
\leq \VV$ for $i=1,2$.
We fix $m_{1}, m_{2}, \fm_{i,j}$ and $ \rho_i $, with $i,j=1,2$.
Let $\beta_1 =\ord_{p_1}(g_{1})$ and let $g_{1}=g_{1}^{'} p_{1}^{\beta_1}$, $ (p_{1},g_{1}^{'} ) =1$.
Similarly to \eqref{Le5-30}, we obtain from \eqref{Le5-8} and \eqref{Le6-50} that
\begin{equation} \nonumber
\beta_1 \leq
\log_2 \big(\big|\sum_{j=1}^2 \fm_{1,j} p_{1}^{r^{+}_{1} - r_{1,j} }\big|\big)
\leq 20 \log_2 n + \VV \log_2 (p_1 p_2) \leq V -\VV < r^{+}_{1}-\VV .
\end{equation}
Let $\beta_2 =\ord_{p_1} (\xi)$.
From \eqref{Le6-50}, we have that $\beta_2 =\beta_1$ and
\begin{equation} \label{Le6-70}
p_2^{r^{+}_{2}} \equiv - \xi^{'} /g_1^{'} \mod p_{1}^{\VV}, \quad \with \quad \xi^{'} = \xi /p_1^{\beta_2}.
\end{equation}
Suppose that \eqref{Le6-70} has two solutions $r^{+'}_{2}$ and $r^{+''}_{2}$. Then
\begin{equation} \nonumber
p_2^{z} \equiv 1 \mod p_1^{\VV}, \quad \with \quad z= r^{+'}_{2} -r^{+''}_{2}
\in [-n,n].
\end{equation}
Hence $\ord_{p_1}( p_2^{z} - 1 ) \geq \VV = [\log_2^3 n] > C_1 \log_2^2 n$.
By the Corollary, we get that the above congruence has only one solution, namely $z=0$.
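The effect used here -- that $\ord_{p_1}( p_2^{z} - 1 )$ cannot be as large as $\VV$ for $0 < |z| \leq n$ -- can be observed numerically: by the lifting-the-exponent lemma the valuation grows only like $\log |z|$. A small Python sketch with illustrative primes (our choice, not fixed by the text) computes the valuation over a range of exponents:

```python
def ord_p(p, m):
    """p-adic valuation of a nonzero integer m."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

# illustrative primes (our choice): p1 = 3, p2 = 2;
# only even z give 3 | 2^z - 1, so we range over even exponents
p1, p2 = 3, 2
vals = [ord_p(p1, p2 ** z - 1) for z in range(2, 200, 2)]
print(max(vals))
```

The maximal valuation over this whole range is tiny compared with a threshold like $\VV = [\log_2^3 n]$, which is why such a large valuation forces $z = 0$.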
Taking into account that $\lambda_1 =\lambda_2 =0$, we get from \eqref{Le5-4}
that $ 0 \leq r^{+}_{i} -r^{-}_{i} \leq \VV$, $i=1,2$.
Therefore, the number of $(\br_1, \br_2)$ satisfying \eqref{Le6-70} is less than
\begin{equation} \nonumber
\#\{ (r^{+}_{1},\rho_1,\rho_2)\; |\; 1 \leq r^{+}_{1} \leq n, 0 \leq \rho_1,\rho_2 \leq \VV \}=n(\VV+1)^2.
\end{equation}
Using \eqref{Le7-0}, we derive
\begin{equation} \label{Le4-94}
\DD^{\#,3}_{0, 0}\ll
\sum_{\substack{ 0< |m_j| \leq n^{10}, \; |\fm_{i,j}| \leq n^{10} \\ i,j=1,2}} \;
\frac{n\VV^2}{\bar{m}_1 \bar{m}_{2}}
\; \prod_{i,j=1}^{2} \frac{1}{\bar{\fm}_{i, j}} \ll n \VV^2 \log^6 n \ll n^2.
\end{equation}
From \eqref{Le4-1}, \eqref{Le7-0}, \eqref{Le4-98}, \eqref{Le4-90} and \eqref{Le4-94}, we obtain $\DD^{\ast}_{0, 0} \leq \DD^{\#}_{0, 0}
\leq \DD^{\#,1}_{0,0} + \DD^{\#,2}_{0,0} + \DD^{\#,3}_{0,0} \ll n^2$.
Hence Lemma 6 is proved. \qed \\ \\
{\bf Lemma 7.} {\it The estimate \eqref{Le4-30} is true for $\max_i \; \lambda_{i}=1$.} \\ \\
{\bf Proof.}
Let us consider the case $ \lambda_{1}=1$. The proof for the case $ \lambda_{2}=1$ is similar.
We will consider $ \DD^{\ast,j}_{\lambda_1, \lambda_2}$.
By \eqref{Beg-26}, \eqref{Beg-40}, \eqref{Le7-0} and \eqref{Le5-4}, we get that
\begin{multline}\label{Le6-1}
\hat{r}_j=\max(r_{1,j}, r_{2,j})> V, \quad
r_{i,k_{i,2}} = r^{+}_i =\max_j r_{i,j}, \;\; r_{i,k_{i,1}} = r^{-}_i =\min_j r_{i,j},\\
r^{+}_1 - r^{-}_1 > \VV = [\log_2^3 n], \quad \ad \quad
\fm_{i,k_{i,j}} \in I_{p_i^{ r_{i,k_{i,j}} -r_{i,k_{i,j-1}} }}, \quad i,j =1,2.
\end{multline}
Using \eqref{Le5-2} and \eqref{Le6-1}, we obtain for $i \in \{1,2 \}$ that
\begin{equation} \label{Le6-2}
\tilde{m}_{i} \equiv \fm_{i,k_{i,2}} + m_{k_{i,2}} M_{i, \br_{k_{i,2}}}+ (\fm_{i,k_{i,1}} + m_{k_{i,1}} M_{i, \br_{k_{i,1}}})
p_i^{ r^{+}_i - r_{i,k_{i,1}} } \equiv
0 \mod p_i^{r^{+}_i},
\end{equation}
\begin{equation} \nonumber
\fm_{i,k_{i,2}} + m_{k_{i,2}} M_{i, \br_{k_{i,2}}}
\equiv
0 \mod p_i^{ r^{+}_i - r^{-}_i }, \quad M_{i, \br_{k_{i,2}}} p_{{i^{'}}}^{r_{{i^{'},k_{{i,2}} }}} \equiv 1 \mod p_i^{ r^{+}_i }, \; r^{+}_i= r_{i,k_{i,2}}.
\end{equation}
Hence
\begin{equation} \label{Le6-3}
p_{{i^{'}}}^{r_{{i^{'},k_{{i,2}}} }} \fm_{i,k_{i,2}} + m_{k_{i,2}} +p_{{i^{'}}}^{r_{{i^{'},k_{{i,2}}} }}(\fm_{i,k_{i,1}} + m_{k_{i,1}} M_{i, \br_{k_{i,1}}})
p_i^{ r^{+}_i - r_{i,k_{i,1}} } \equiv
0 \mod p_i^{r^{+}_i}.
\end{equation}
From \eqref{Le6-1} and \eqref{Le6-2}, we have $r^{+}_1 - r^{-}_1 > \VV$ and
\begin{equation} \label{Le6-4}
p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}} + m_{k_{1,2}}
\equiv
0 \mod p_1^{\VV }.
\end{equation}
Let $\beta_3 =\ord_{p_1}(m_{k_{1,2}})$ and let $m_{k_{1,2}}^{*} =m_{k_{1,2}} p_{1}^{-\beta_3}$, $ (p_{1},m_{k_{1,2}}^{*} ) =1$. Bearing in mind that $ 0 <\max(|m_j |, |\fm_{i,j}|) \leq n^{10}$,
we get $\beta_3 \leq \VV/2$. Let $\fm_{1,k_{1,2}}^{*}=\fm_{1,k_{1,2}} p_{1}^{-\beta_3}$.
Then $ p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}}^{*} / m_{k_{1,2}}^{*} + 1
\equiv
0 \mod p_1^{ [\VV/2 ] } $ and
\begin{equation} \nonumber
\ord_{p_1}( p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}}^{*} / m_{k_{1,2}}^{*} + 1 ) \geq [\VV
/2] = [ [\log_2^3 n]/2] \geq C_1 \log_2^2 n , \quad n \geq n_0.
\end{equation}
Applying the Corollary, we get that the above congruence is an equality:
\begin{equation} \label{Le6-10}
p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}}^{*} =- m_{k_{1,2}}^{*} \quad \ad \quad
p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}} =- m_{k_{1,2}}, \quad \beta_3 =\ord_{p_1}(m_{k_{1,2}}).
\end{equation}
Taking into account that $\max(|m_j |, |\fm_{i,j}|) \leq n^{10}$, we get
\begin{equation} \label{Le6-11}
r_{2,k_{1,2}} \leq
\log_{p_2} (\max(|m_1|, |m_2|)) \leq 10 \log_2 n \leq \VV .
\end{equation}
{\it Case} 4 : $\quad \lambda_{1} = \lambda_{2} = 1$.
Repeating \eqref{Le6-1} - \eqref{Le6-10} for $\lambda_2 =1$, we get
\begin{equation} \label{Le6-12}
p_1^{r_{1,k_{2,2}}} \fm_{2,k_{2,2}}^{*} =- m_{k_{2,2}}^{*} \quad \ad \quad
p_1^{r_{1,k_{2,2}}} \fm_{2,k_{2,2}} =- m_{k_{2,2}},
\end{equation}
with $\beta_4 =\ord_{p_2}(m_{k_{2,2}})$, $m_{k_{2,2}}^{*}=m_{k_{2,2}} p_{2}^{-\beta_4}$, $ (p_{2},m_{k_{2,2}}^{*} ) =1$, $\fm_{2,k_{2,2}}^{*} = \fm_{2,k_{2,2}} p_{2}^{-\beta_4}$.\\
Now substituting \eqref{Le6-10} and \eqref{Le6-12} into \eqref{Le6-3}, we obtain
\begin{equation} \label{Le6-13}
p_{{i^{'}}}^{r_{{i^{'},k_{{i,2}}} }}(\fm_{i,k_{i,1}} + m_{k_{i,1}} M_{i, \br_{k_{i,1}}})
p_i^{ r^{+}_i - r_{i,k_{i,1}} } \equiv
0 \mod p_i^{r^{+}_i}, \quad i=1,2.
\end{equation}
Hence
\begin{equation} \label{Le6-14}
\fm_{1,k_{1,1}} + m_{k_{1,1}} M_{1, \br_{k_{1,1}}}
\equiv
0 \mod p_1^{r_{1,k_{1,1}}}, \; \;
\fm_{2,k_{2,1}} + m_{k_{2,1}} M_{2, \br_{k_{2,1}}}
\equiv
0 \mod p_2^{r_{2,k_{2,1}}}.
\end{equation}
According to \eqref{Le3-17}, \eqref{Le6-1} and \eqref{Le6-10}, we get that $ \fm_{i,k_{i,j}} \in
I_{ p_i^{r_{i,k_{i,j}}-r_{i,k_{i,j-1}}}} $ and $r_{i,k_{i,1}} =r^{-}_{i}$, $k_{i,0}=r_{i,0} =0$.
Therefore
$ \fm_{i,k_{i,1}} \in
I_{ p_i^{r_{i,k_{i,1}}}}$, $i=1,2$.
We fix $ m_{1}, m_2, \br_1,\br_2$.
Bearing in mind that $ I_M=[-[(M-1)/2],[M/2]] \cap \ZZ$ is a complete set of residues $\mod M$
(see \eqref{Beg-1a}), we obtain that
there exists only one solution $ \fm_{i,k_{i,1}} \in
I_{ p_i^{r_{i,k_{i,1}}}}$
of \eqref{Le6-14}, with $i \in \{1,2\}$. By \eqref{Le6-10} and \eqref{Le6-12}, we have
$ p_2^{r_{2,k_{1,2}}} \fm_{1,k_{1,2}}^{*} =- m_{k_{1,2}}^{*}$, $ \;\; p_1^{r_{1,k_{2,2}}} \fm_{2,k_{2,2}}^{*} =- m_{k_{2,2}}^{*}$.
Substituting the above relations into \eqref{Le7-0}, we obtain
\begin{multline} \label{Le6-35}
\DD^{\ast,4}_{1, 1} \ll \sum_{\substack{k_{i,j}=1,2 \\i,j=1,2}} \;
\sum_{ (\br_1,\br_2) \in \UU_{ 1, 1 }} \; \sum_{\substack{ \beta_i \geq 0 \\ i=3,4}} \;
\sum_{\substack{ 0< |\fm_{i,k_{i,2}}^{*}| \leq n^{10} \\ i=1,2}}
\;\prod_{i=1}^{2}\delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} )
\frac{ p_i^{-\beta_{i+2} -r_{i,k_{i^{'},2}}}}{ |\fm^{*}_{i,k_{i,2}}|^2 } \\
\ll
\sum_{\substack{r^{+}_i, r^{-}_i \in [1,n]\\ i=1,2}}
\frac{1}{ p_1^{r^{-}_1} p_2^{r^{-}_2}} \ll n^{2}.
\end{multline}
{\it Case} 5 : $\quad \lambda_{1} = 1$, $\lambda_{2}=0$.
By \eqref{Le5-4} and \eqref{Le6-11}, we have
\begin{equation} \nonumber
r^{+}_{1} -r^{-}_{1} > \VV, \;\; r^{+}_{2} -r^{-}_{2} \leq \VV,
\;\; r_{2,k_{1,2}} \leq \VV \;\; \ad \;\;
\hat{r}_j = \max(r_{1,j}, r_{2,j}) > V>10 \VV, \; \;j=1,2.
\end{equation}
Hence $r^{+}_{2}= \max(r_{2,1},r_{2,2}) \leq 2 \VV $.
Therefore $ r_{1,j}=\hat{r}_j> V $ ($j=1,2$) and $ r^{-}_{1} = \min(r_{1,1},r_{1,2}) \geq V $.
By \eqref{Le5-2}, we get
$ M_{1,\br_j} p_2^{r_{2,j}} \equiv 1 \mod p_1^{ r_{1,j} }$ and
$ M_{1, \br_{k_{1,1}}} p_2^{r_{2,k_{1,1}}} \equiv 1 \mod p_1^{ r^{-}_1 }$, $r^{-}_1 = r_{1, k_{1,1}}$.
In view of \eqref{Le6-13},
we obtain $\fm_{1,k_{1,1}} + m_{k_{1,1}} M_{1, \br_{k_{1,1}}} \equiv 0 \mod p_1^{ r^{-}_1 }$
and
\begin{equation} \nonumber \label{Le6-83}
p_2^{r_{2,k_{1,1}}} \fm_{1,k_{1,1}} + m_{k_{1,1}} \equiv
0 \mod p_1^{ r^{-}_1 }, \; \ord_{p_1}( p_2^{r_{2,k_{1,1}}} \fm_{1,k_{1,1}} + m_{k_{1,1}} ) \geq r^{-}_1 \geq V.
\end{equation}
Let $\beta_0 =\ord_{p_1}(m_{k_{1,1}})$, $m_{k_{1,1}}^{*}=m_{k_{1,1}} p_{1}^{-\beta_0}$, $ (p_{1},m_{k_{1,1}}^{*} ) =1$ and $\fm_{1,k_{1,1}}^{*} = \fm_{1,k_{1,1}} p_{1}^{-\beta_0}$.
We have $\beta_0 \leq
\log_{p_2} (\max(|m_1|, |m_2|)) \leq 10 \log_2 n \leq \VV/2$, and
$r^{-}_1 -\VV/2 \geq V-\VV \geq C_1 \log_2^2 n $ (see \eqref{Le5-4}).
Applying the Corollary, we get that the above congruence is an equality:
\begin{equation} \label{Le6-75}
p_2^{r_{2,k_{1,1}}} \fm_{1,k_{1,1}}^{*} =- m_{k_{1,1}}^{*} \quad \ad \quad
p_2^{r_{2,k_{1,1}}} \fm_{1,k_{1,1}} =- m_{k_{1,1}}, \quad \beta_0 =\ord_{p_1}(m_{k_{1,1}}).
\end{equation}
Now from \eqref{Le6-2}, we obtain, for $i=2$, that
\begin{equation} \label{Le6-51}
\fm_{2,k_{2,2}} + \fm_{2,k_{2,1}} p_2^{ r^{+}_2 - r_{2,k_{2,1}} } \equiv
A_0 \mod p_2^{r^{+}_2},
\end{equation}
with $ A_0 = -m_{k_{2,2}} M_{2, \br_{k_{2,2}}} - m_{k_{2,1}} M_{2, \br_{k_{2,1}}}
p_2^{ r^{+}_2 - r_{2,k_{2,1}} } $.
We fix $ m_{1}, m_2, \br_1,\br_2$.
Bearing in mind that $ \fm_{2,k_{2,1}} \in
I_{ p_2^{r_{2, k_{2,1}}}}$ and $ \fm_{2,k_{2,2}} \in
I_{ p_2^{r^{+}_2 -r_{2,k_{2,1}}}} $ (see \eqref{Beg-1a} and \eqref{Le6-1}), we obtain that
there exists only one solution $(\fm_{2,1}, \fm_{2,2}) $
of \eqref{Le6-51}.
Now substituting \eqref{Le6-10} and \eqref{Le6-75} into \eqref{Le7-0}, we obtain
\begin{multline} \label{Le6-39}
\DD^{\ast,5}_{1, 0}= \sum_{\substack{k_{i,j}=1,2 \\i,j=1,2}} \;\;
\sum_{ (\br_1,\br_2) \in \UU_{ 1, 0 }} \;\; \sum_{\substack{ \beta_{3i-3} \geq 0\\ i=1,2}}
\;\; \sum_{ 0< |\fm_{1,k_{1,i}}^{*}| \leq n^{10} , \; i=1,2}
\;\; \prod_{i=1}^{2}\delta_{p_i^{r^{+}_i}}( \tilde{m}_{i} )
\frac{ p_1^{-\beta_{3i-3}} p_2^{ -r_{2,i}}}{ |\fm^{*}_{1,k_{1,i}}|^2 } \\ \ll
\sum_{ r_{i,j} \in [1,n],\; i,j=1,2}
\frac{1}{ p_2^{r_{2,1}+ r_{2,2}}} \ll n^{2}.
\end{multline}
From \eqref{Le4-1}, \eqref{Le7-0}, \eqref{Le6-35} and \eqref{Le6-39},
we obtain $\DD^{\ast}_{1, 1}=
\DD^{\ast,4}_{1, 1} \ll n^2$ and
$\DD^{\ast}_{1, 0} =\DD^{\ast,5}_{1, 0}
\ll n^2$.
The case $\lambda_{1} = 0$, $\lambda_{2}=1$ is similar to the case $\lambda_{1} = 1$, $\lambda_{2}=0$.
Therefore $\DD^{\ast}_{0, 1} \ll n^2$ and $\DD^{\ast}_{1, 1} + \DD^{\ast}_{1, 0} + \DD^{\ast}_{0, 1} \ll n^2$.
Hence Lemma 7 is proved. \qed
By Lemma 6, Lemma 7 and \eqref{Le4-30},
the Theorem is proved. \qed \\
{\bf Remark 1.} The constant $C$ in the Theorem depends only on Yu's constant $C_1$ (see [Bu, p. 19]).\\
{\bf Remark 2.} In [Le], we proved that Halton's sequence is of $L_q$-low discrepancy for all $s \geq 2$ and $q>0$. In [Le], we also proved the Central Limit Theorem and moment convergence for
Hammersley's point set
\begin{equation} \label{CLT1}
\frac{ D(\bar{\bx}, \cH_{s+1,N} ) }{ \left\| D(\bar{\bx},\cH_{s+1,N}) \right\|_{2}}
\stackrel{w}{\rightarrow} \cN(0,1), \qquad
\frac{ D_{s+1,q}( \cH_{s+1,N} )}{ D_{s+1,2}( \cH_{s+1,N} )}
\stackrel{N \rightarrow \infty}{\longrightarrow} \kappa_q^{1/q}, \; s \geq 3,
\end{equation}
with $\kappa_q= \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty} |u|^q e^{-u^2/2} d u$. In particular, we get that the lower bound \eqref{In7} is optimal for all $q>0$ for Hammersley's point set. The proof of these results is the same as the proof of the Theorem, but much more technical.
\\
{\bf Remark 3.} The Halton sequence
is of $L_q$-low discrepancy for $s=1$ only after some sort of symmetrisation [KrPi]. It is very difficult to understand why Halton's sequence is of $L_q$-low discrepancy for $s\geq 2$ without any symmetrisation. The first idea is that the main tool (Theorem A) may be applied only for $s\geq 2$. But this explanation cannot be complete, because in [Le] we see the same problem in the transition from $s = 2$ to $s \geq 3$. Namely, no symmetrisation is needed for the validity of the Central Limit Theorem for
Hammersley's point set for $s \geq 3$ (see \eqref{CLT1}), but a symmetrisation is needed for the case $s = 2$. Thus it seems plausible that an auto-symmetrisation effect grows with the dimension.
\section{Introduction}
In this work, we propose to study Gutman's graph energy \cite{Gut} from the viewpoint of cooperative game theory. For this, we define a cooperative game based on the graph energy and study its main properties and solutions.
Cooperative game theory is a branch of game theory which models scenarios where different players may gain a larger utility by cooperating. One main problem is to decide how to divide the total utility so that every player is satisfied and willing to cooperate. There are various solutions to this problem, depending on the properties one accepts as fair. The book \cite{Peters} presents an introduction to the topic.
On the other hand, recall that for a graph $G$, the energy of $G$ is given by $$\mathcal{E}(G)=\sum^n_{i=1}|\lambda_i|$$
where $\lambda_1,\dots,\lambda_n$ denote the eigenvalues of the adjacency matrix of $G$, counted with multiplicity. The monograph \cite{LSG} presents a nice review of the theory.
The idea is that for a given graph $G$ with vertex set $V$, we define a game on $V$ by considering the energies of the induced subgraphs of $G$. More precisely, for each subset $S$ of $V$, let $H=H(S)$ be the subgraph of $G$ \emph{induced} by $S$. The value of $S$ is given by the energy $\mathcal{E}(H)$ of the subgraph $H$. We prove that this is a \emph{superadditive} game.
We are also interested in a particular payoff: the energy of a vertex. The energy of a vertex, as defined in \cite{AJ}, is a way to split the total energy of the graph among the different vertices. In principle, it is natural to try to give an interpretation in terms of centrality, similar to the centrality measures studied in networks, such as degree, PageRank, betweenness, closeness, etc. However, from the observation of examples, we believe that one is led to the intuition that the vertex energy is closer to a payoff in a cooperative game. Thus, for the game described above, we consider the payment given by the vertex energy and prove that it satisfies the axioms of symmetry, null player and efficiency. More importantly, we show that this payment is in the core, and thus provide evidence for the above intuition (see Section 2.3 for definitions). This is the main result of the paper.
We also consider generalizations where, instead of the energy, we use a new family of ``energies'' based on the $p$-Schatten norms. Similar results are proved for these generalizations, yielding new inequalities.
We hope that this paper motivates the study of graphs and their properties from the viewpoint of game theory.
Apart from this introduction, the paper contains five sections. In Section 2 we present the necessary preliminaries: 2.1 and 2.2 are devoted to graphs and the energy of graphs and vertices, 2.3 to matrix inequalities, and in 2.4 we describe basic concepts of cooperative game theory. In Section 3 we prove the main results of the paper. We introduce a cooperative game based on the energy of a graph in 3.1; in 3.2 we consider the vertex energy from this viewpoint and give further considerations. In Section 4 we generalize these ideas to $p$-Schatten norms by introducing the $p$-Schatten energy. We start with basic properties of this energy in 4.1, then introduce the $p$-energy of a vertex in 4.2, and finally we specialize to the case $p=2$ in 4.3, proving that this case coincides with games studied before. In Section 5 we present some examples. We finish with a small final section on conclusions and further questions.
\section{Preliminaries}
\subsection{Graphs and their adjacency matrices}
We will consider simple undirected finite graphs which we will call simply graphs.
A graph $G$ consists of a pair $(V,E):=(V(G),E(G))$ where $|V|<\infty $ and $E\subset V\times V$ is such that $(v,v)\notin E$ for all $v\in V$, and $(v,w)\in E$ implies $(w,v) \in E$, for all $v,w\in V$.
A subgraph $H$ of $G$ is a graph such that $V(H)\subset V(G)$ and $E(H)\subset E(G)$. Given $S\subset V(G)$, the subgraph induced by $S$ is the subgraph $I(S)=(S,F)$ with vertex set $S$ and edge set $F=\{(v,w)\in E(G)\mid v\in S, w \in S\}.$ We call a subgraph $H$ of $G$ induced if there is some $S$ such that $H=I(S)$.
For any graph $G=(V,E)$, with vertex set $V=\{v_1,\dots,v_n\}$, the adjacency matrix of $G$, that we denote by $A=A(G)$, is the matrix with entries
$A_{ij}=1$ if $(v_i,v_j)\in E$ and $A_{ij}=0$ otherwise.
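As a small computational illustration (our own helper, not part of the paper), the adjacency matrix can be assembled from an edge list; the symmetry condition $(v,w)\in E \Rightarrow (w,v)\in E$ becomes the symmetry of $A$:

```python
def adjacency_matrix(n, edges):
    """0/1 adjacency matrix of a simple undirected graph on vertices 0..n-1."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        A[j][i] = 1  # undirected: (v, w) in E implies (w, v) in E
    return A

# the path on three vertices, used as an example later in the paper
P3 = adjacency_matrix(3, [(0, 1), (1, 2)])
```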
\subsection{ Graph energy and vertex energy}
We denote by $M_n:=M_n(\mathbb{C})$, the set of matrices of size $n\times n$ over the complex numbers and by $Tr:M_n\to \mathbb{C}$, the trace on $M_n$, which is the map $A \mapsto A_{11}+\cdots+ A_{nn}$.
The energy of a graph $G$, denoted by $\mathcal{E}(G)$, is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix $A = A(G)$, i.e.,
\begin{equation*}
\mathcal{E}(G) = \sum_{i=1}^{n}|\lambda_{i}|.
\end{equation*}
In other words, the energy of a graph is the trace norm of its adjacency matrix. More precisely, for a matrix $M$, we define
the absolute value of $M$ by $|M|:=(MM^*)^{1/2}$ where $M^*$ denotes conjugate transpose of $M$. The energy of $G$ is then given by
\begin{equation*}
\mathcal{E}(G)= Tr(|A(G)|)=\displaystyle\sum_{i=1}^{n}|A(G)|_{ii}.
\end{equation*}
The next important definition is given in \cite{AJ} and studied in \cite{AFJ}.
\begin{defi}\label{Energyv}
For a graph G and a vertex $v_i\in G$, \emph{the energy of the vertex} $v_i$, which we denote by $\mathcal{E}_G(v_i)$, is given by
\begin{equation}
\mathcal{E}_G(v_i)=|A(G)|_{ii}, \quad \text{for } i=1,\dots,n,
\end{equation}
where $A(G)$ is the adjacency matrix of $G$.
\end{defi}
In this way the energy of a graph is given by the sum of the individual energies of the vertices of $G$,
\begin{equation*}
\mathcal{E}(G)=\mathcal{E}_G(v_1)+\cdots+\mathcal{E}_G(v_n),
\end{equation*}
and thus the energy of a vertex is a refinement of the energy of a graph.
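As a numerical sketch (our own code, not from the paper; `jacobi_eig` is a standard textbook Jacobi eigensolver), the energy and the vertex energies of a small graph can be computed from the spectral decomposition, using $|A|_{ii}=\sum_k|\lambda_k|\,u_{ik}^2$:

```python
import math

def jacobi_eig(A, max_rot=200, tol=1e-12):
    """Eigenvalues and eigenvectors of a real symmetric matrix via Jacobi rotations."""
    n = len(A)
    a = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        # locate the largest off-diagonal entry
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    big, p, q = abs(a[i][j]), i, j
        if big < tol:
            break
        theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # rotate rows p, q
            apk, aqk = a[p][k], a[q][k]
            a[p][k], a[q][k] = c * apk - s * aqk, s * apk + c * aqk
        for k in range(n):  # rotate columns p, q and accumulate eigenvectors
            akp, akq = a[k][p], a[k][q]
            a[k][p], a[k][q] = c * akp - s * akq, s * akp + c * akq
            vkp, vkq = V[k][p], V[k][q]
            V[k][p], V[k][q] = c * vkp - s * vkq, s * vkp + c * vkq
    return [a[i][i] for i in range(n)], V  # columns of V are eigenvectors

def energy(A):
    """E(G): sum of the absolute values of the adjacency eigenvalues."""
    vals, _ = jacobi_eig(A)
    return sum(abs(l) for l in vals)

def vertex_energies(A):
    """E_G(v_i) = |A|_ii = sum_k |lambda_k| * u_ik^2."""
    vals, V = jacobi_eig(A)
    n = len(A)
    return [sum(abs(vals[k]) * V[i][k] ** 2 for k in range(n)) for i in range(n)]

# the path 1-2-3: energy 2*sqrt(2), vertex energies (1/sqrt(2), sqrt(2), 1/sqrt(2))
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

On $P_3$ this reproduces $\mathcal E(P_3)=2\sqrt2$ and the refinement property $\sum_i\mathcal E_G(v_i)=\mathcal E(G)$.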
The idea behind the definition comes from non-commutative probability. Indeed, the linear functional $\phi_i:M_n\to\mathbb{C}$ defined by $\phi_i(A)=A_{ii}$ is a vector state; in particular, it is positive and $\phi_i(I)=1$, where $I$ is the identity matrix, see \cite{HoraObata} for details.
H\"older inequality extends to the non-commutative case as follows: Let $(p,q)$ be a pair such that $1/p+1/q=1$, then
\begin{equation}
\label{Hold}\phi_i(XY)\leq \phi_i(|X|^p)^{1/p}\phi_i(|Y|^q)^{1/q}, \text{ for all } X,Y\in M_n.
\end{equation}
\subsection{Jensen's trace inequality}
Trace inequalities refer to inequalities for the trace of matrices or operators and functions of them. In this paper we will use a specific trace inequality which is the generalization of Jensen's inequality for convex functions. Before introducing it, we need to bring some basic notation and definitions.
\begin{defi}
\begin{enumerate}
\item A matrix $M\in M_n$ is called self-adjoint if $M=M^*$.
\item An orthogonal projection $P\in M_n$ is a self-adjoint matrix such that $P=P^2.$
\end{enumerate}
\end{defi}
For a self-adjoint matrix $M$, and a real function $f$ we may define $f(M)$ as follows: consider the diagonalization $M=UDU^{-1}$ where $U$ is unitary and $D$ is diagonal, then
$$f(M) := U \begin{bmatrix}
f(d_1) & \dots & 0 \\
\vdots & \ddots & \vdots \\
0 & \dots & f(d_n)
\end{bmatrix} U^{-1} $$ where $d_1, \dots, d_n$ denote the diagonal entries of $D$.
The main tool to prove the desired properties for the energy will be Jensen's trace inequality, which we state as a proposition for further reference.
\begin{prop}[Jensen's trace inequality] \label{JTI}
Let $f$ be a continuous convex function and let $X_1,\dots, X_i\in M_n$ be self-adjoint. Then the following inequality holds:
$$\operatorname{Tr}\Bigl(f\Bigl(\sum_{k=1}^iA_k^*X_kA_k\Bigr)\Bigr)\leq \operatorname{Tr}\Bigl(\sum_{k=1}^i A_k^*f(X_k)A_k\Bigr),$$
where $A_1,\dots, A_i\in M_n$ is any collection of matrices such that $\sum_{k=1}^iA_k^*A_k=I.$
\end{prop}
\subsection{Cooperative Game Theory}
In this section, we consider cooperative games with transferable utility, or TU-games. A TU-game is a pair $(N,w)$, where $N=\{1,\dots,n\}$ with $n\in\mathbb N$ and $w:2^N\rightarrow\mathbb R$ is such that $w(\emptyset)=0$. $N$ is called the set of players and the function $w$ is called the characteristic function. Each subset of $N$ is called a coalition. For each coalition $S\subset N$, $w(S)$ is called the value of $S$.
\\
{\bf Remark:}
Usually the characteristic function is denoted by $v$. In this article we use $w$ to keep the exposition clear, since we also work with graphs, in which $v_i$ denotes the $i$-th vertex.
\begin{defi}
A game $(N,w)$ is called {\it superadditive} if $w(S\cup T)\geq w(S)+w(T)$ whenever $S\cap T=\emptyset$.
\end{defi}
\begin{defi}
A game $(N,w)$ is called {\it convex} if $w(S\cup T)+w(S\cap T) \geq w(S)+w(T)$ for every $S,T\subset N$.
\end{defi}
A {\it payoff distribution} or {\it payoff vector} is an element $x=(x_1,\dots,x_n)$ of $\mathbb{R}^n$, and $x_i$ is called the payoff of player $i$.
For a payoff distribution $x$ and a coalition $S$, we use the notation $x(S)=\sum_{i\in S} x_i$.
\begin{defi} If $x$ is a payoff vector, we will say that
\begin{enumerate}
\item $x$ is {\it individually rational} if $x_i\geq w(\{i\})$ for all $i\in N$.
\item $x$ is {\it group rational} if $x(S)\geq w(S)$ for all coalitions $S\subset N$.
\item $x$ is {\it efficient} if $x(N)=w(N).$
\item $x$ is an {\it imputation} if it is individually rational and efficient. The set of imputations of $(N,w)$ is denoted by $I(w)$.
\end{enumerate}
\end{defi}
\begin{defi}
The {\it core} of a game $(N,w)$, denoted by $C(w)$, is the set of imputations that are group rational. That is,
$$C(w)=\{x| x(S)\geq w(S) \text{ for each } S\subset N \text{ and } x(N)=w(N) \}$$
\end{defi}
An important concept in cooperative game theory is the Shapley value. Given the set of all games with $n$ players, which we denote by $\mathcal G^{\mathcal N}$, a \emph{value} is a map $\phi:\mathcal G^{\mathcal N}\to \mathbb{R}^n$, i.e., a function that assigns a payoff distribution to each game with $n$ players.
Some desirable properties that values may satisfy are the following:
\begin{enumerate}
\item {\bf Efficiency}: $\sum_{i=1}^n\phi_i(w)=w(N)$ for all $w\in \mathcal G^{\mathcal N}$, where $\phi_i(w)$ denotes the $i$-th entry of $\phi(w)$.
\item {\bf Symmetry}: $\phi_i(w)=\phi_j(w)$ for all $w\in \mathcal{G}^{\mathcal N}$ and all symmetric players $i$ and $j$
in $w$, where $i$ and $j$ are symmetric in $(N,w)$ if $w(S\cup\{i\})=w(S\cup\{j\})$ for every coalition $S\subset N\backslash\{i,j\}$.
\item {\bf Additivity}: For two games $w_1,w_2\in \mathcal G^{\mathcal N}$, $\phi(w_1+w_2)=\phi(w_1)+\phi(w_2).$
\item {\bf Null player}: $\phi_i(w)=0$ for all $w\in \mathcal{G}^{\mathcal N}$ and all null players $i$ in $w$,
where a player $i$ is called null if $w(S\cup\{i\})-w(S)=0$ for every coalition $S$.
\end{enumerate}
The Shapley value is the only value that satisfies the axioms of efficiency, symmetry, additivity and null player. It is well known that the Shapley value is given by
\[\phi_i(w)=\sum_{S\subset N\backslash\{i\}}\frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(w(S\cup\{i\})-w(S)\bigr).\]
In general, it is not immediate to decide whether the Shapley value is in the core. One commonly used criterion is convexity: for any convex game, the Shapley value is always in the core. We refer the reader to \cite{Peters} for details.
\section{The energy game on graphs}\label{EGG}
In this section we define the energy game on graphs and address natural questions about it. Namely, we prove that for each graph the energy game is superadditive, although it is not always convex. We also prove some important properties of its solutions; in particular, we show that the core is non-empty. Moreover, the energy of a vertex, as defined in Definition \ref{Energyv}, is a payoff distribution which we prove to be in the core. These are the main results of the article.
The energy of a vertex, thought of as a value restricted to the energy game on graphs, also satisfies the axioms of null player, symmetry and efficiency. However, it is easy to see that the Shapley value does not always coincide with the energy of a vertex.
\subsection{The energy game}
We now define the object of our interest, namely, the energy game on graphs.
\begin{defi}
Let $G=(V,E)$ be a simple undirected graph. For each subset $S\subset V$ let $w(S):=\mathcal{E}(I(S))$, where $I(S)$ is the graph induced by $S$. We call $(V,w)$ the \emph{energy game on $G$}.
\end{defi}
The following proposition justifies the interest in the definition above.
\begin{prop} \label{Super} For any graph $G=(V,E)$, the game $(V,w)$ is superadditive.
\end{prop}
\begin{proof}
The proposition is equivalent to proving that if the adjacency matrix of a graph $G$ is partitioned as
$$A(G)=\left(\begin{array}{cc}
X&Y\\
Z&W
\end{array}\right),$$
with $X$ and $W$ square matrices of dimension $k$ and $n-k$, respectively, then $Tr(|X|)+Tr(|W|)\leq Tr(|A(G)|)$. This may be proved using Lemma 4.17 of \cite{LSG} (see also Theorem 3 in \cite{Tho}). We will prove it using Jensen's trace inequality, Proposition \ref{JTI}, applied to the function $f=Abs$. Indeed, taking $i=2$, $X_1=A(G)$, $X_2=A(G)$, $A_1=P$ and $A_2=I-P$, where $P$ is the projection onto the first $k$ entries, we get that
\begin{equation}\label{jen2} \operatorname{Tr}\Bigl(|PA(G)P +(I-P)A(G)(I-P)| \Bigr) \leq \operatorname{Tr}\Bigl(P| A(G)|P+(I-P)| A(G)|(I-P)\Bigr).\end{equation}
Since we have that $$PA(G)P=\left(\begin{array}{cc}
X&0\\
0&0
\end{array}\right), (I-P)A(G)(I-P)=\left(\begin{array}{cc}
0&0\\
0&W
\end{array}\right),$$
the left-hand side of \eqref{jen2} equals $Tr(|X|)+Tr(|W|).$ On the other hand, for any matrix $M$, the matrices $M$ and $PMP+(I-P)M(I-P)$ have the same diagonal entries; thus $Tr(PMP+(I-P)M(I-P))=Tr(M)$, and the right-hand side of \eqref{jen2} is $Tr(|A(G)|)$.
\end{proof}
\begin{rem}
At first glance, one may think that superadditivity contradicts the triangle inequality for the 1-Schatten norm, see Section 4. The triangle inequality, also known as Ky Fan's theorem,
says that for any matrices $A$ and $B$,
$$Tr(|A+B|)\leq Tr(|A|)+Tr(|B|).$$
This may seem to contradict the inequality $E(S\cup T)\geq E(S)+E(T)$, but looking closer at the proof of Proposition \ref{Super} one sees that the triangle inequality yields $E(S\cup T)\leq E(S)+E(T) +E(U)$,
where $U$ is the bipartite subgraph of $G$ which contains the edges of $G$ between the subgraphs induced by the sets $S$ and $T$.
\end{rem}
Let us mention that the energy game is not always convex as the following example shows.
\begin{exa}
Consider $G=(V,E)$, with $V=\{1,2,3\}$ and $E=\{\{1,2\},\{2,3\}\}$. Let $S=\{1,2\}$ and $T=\{2,3\}$. Then $w(S)=w(T)=2$, $w(S\cup T)=2\sqrt{2}$ and $w(S\cap T)=0$, hence
$w(S)+w(T)>w(S\cup T)+w(S\cap T)$. This proves that this game is not convex.
\end{exa}
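A quick numerical check of this example (values taken from the text: a single edge has eigenvalues $\pm1$, the $3$-path has eigenvalues $\pm\sqrt2,0$, and a single vertex has energy $0$):

```python
import math

# coalition values of the energy game on the path 1-2-3, as in the example
w_S = 2.0                   # E(I({1,2})): a single edge has eigenvalues +1, -1
w_T = 2.0                   # E(I({2,3}))
w_union = 2 * math.sqrt(2)  # E(P3): eigenvalues sqrt(2), 0, -sqrt(2)
w_inter = 0.0               # E(I({2})): an isolated vertex has energy 0

convex = w_S + w_T <= w_union + w_inter   # convexity would require this
```

Here `convex` is `False`, confirming that the energy game on this graph is not convex.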
\subsection{The vertex energy as a payoff}
In \cite{AJ}, the energy of a vertex is introduced and considered. For a graph $G$, with adjacency matrix $A(G)$, and a vertex $v_i$, recall that the energy of the vertex $v_i$ is given by
$$\mathcal{E}_G(v_i)=|A(G)|_{ii}.$$
The vector $e=(\mathcal{E}_G(v_1),\dots, \mathcal{E}_G(v_n))$ is an imputation since the sum of the energy of the vertices is exactly the total energy of the graph.
Now we prove the main result of the paper, which states that the vertex energy is a payoff which is in the core.
This will be a direct consequence of the following theorem.
\begin{thm}\label{MTeorem} Let $H$ be an induced subgraph of $G$, then
$$\sum_{v\in V(H)} \mathcal E_G(v)\geq\mathcal E(H),$$
where $\mathcal E_G(v)$ is the vertex energy of $v$ in $G$ and $\mathcal E(H)$ is the energy of $H$.
\end{thm}
\begin{proof}
Without loss of generality, suppose that the vertex set of $H$ is $\{1,\dots,k\}$ for some $k$.
Let $A$ be the adjacency matrix of $G$, let $P$ be the projection onto the first $k$ entries, and $Q=I-P$. Now, Jensen's trace inequality, Proposition \ref{JTI}, applied to the function $f=Abs$, the absolute value, with the parameters $i=2$, $X_1=A$, $X_2=0$, $A_1=P$ and $A_2=Q$, gives
\begin{equation}\label{mainequation}
Tr(|PAP|)\leq Tr(P|A|P).
\end{equation}
Since $PAP$ is the adjacency matrix of the induced subgraph $H$ (padded with zeros), the left-hand side of \eqref{mainequation} coincides with the energy of $H$, $\mathcal E(H)$. On the other hand, by definition the diagonal of the matrix $|A|$ consists of the values $\mathcal E_G(v_i)$, $i=1,\dots,|V|$, and thus the diagonal of $P|A|P$ contains the values $(P|A|P)_{ii}=\mathcal E_G(v_i)$ for $i=1,\dots,k$ and $(P|A|P)_{ii}=0$ for $i=k+1,\dots,n$, which means that $Tr(P|A|P)=\sum_{v\in V(H)} \mathcal E_G(v)$.
\end{proof}
We are now in position to prove the main theorem.
\begin{thm}\label{Core}
Let $G=(V,E)$ be a simple undirected graph. Then the payoff vector given by
$e=(\mathcal{E}_G(v_1),\dots, \mathcal{E}_G(v_n))$ is in the core of the game
$(V,w_{\mathcal{E}})$.
\end{thm}
\begin{proof} We need to prove that for each $S\subset V$, $e(S)\geq w_{\mathcal E}(S)$, that is,
$$\mathcal E(I(S))\leq \sum_{i\in S} \mathcal E_G(v_i),$$ where $\mathcal E_G(v_i)$ is the vertex energy of $v_i$ in $G$ and $\mathcal E(I(S))$ is the energy of the graph induced by $S$. The result now follows from Theorem \ref{MTeorem}.
\end{proof}
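As a brute-force sanity check of Theorem \ref{Core} on a small instance (a sketch; the coalition values and vertex energies of the $3$-path are the ones computed in the worked example later in this section, and the helper `in_core` is ours):

```python
import math
from itertools import chain, combinations

# vertex energies of the path 1-2-3 and the coalition values of its energy game
e = {1: 1 / math.sqrt(2), 2: math.sqrt(2), 3: 1 / math.sqrt(2)}
w = {frozenset(): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0,
     frozenset({3}): 0.0, frozenset({1, 3}): 0.0,
     frozenset({1, 2}): 2.0, frozenset({2, 3}): 2.0,
     frozenset({1, 2, 3}): 2 * math.sqrt(2)}

def in_core(e, w, players):
    """Check group rationality on every coalition and efficiency on the grand one."""
    subsets = chain.from_iterable(combinations(players, r)
                                  for r in range(len(players) + 1))
    group_ok = all(sum(e[i] for i in S) >= w[frozenset(S)] - 1e-12
                   for S in subsets)
    efficient = abs(sum(e.values()) - w[frozenset(players)]) < 1e-12
    return group_ok and efficient
```

With these values, `in_core(e, w, [1, 2, 3])` returns `True`, as the theorem predicts.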
{\bf Remark:} Theorem \ref{MTeorem} gives an inequality useful for bounding the energy of a graph. For example, for two connected vertices $v$ and $w$, the theorem implies that $\mathcal E(v)+\mathcal E(w)\geq 2$; this has been proved independently in \cite{Arizmendi-Arizmendi}. Another example is given by Theorem 4.20 in \cite{LSG}, for which we give an alternative proof.
\begin{thm}
If $F$ is an edge cut of a simple graph, then $\mathcal{E}(G-F)\leq \mathcal{E}(G)$.
\end{thm}
\begin{proof}
Since $F$ is an edge cut of $G$, $G-F=H\cup K$, where $H$ and $K$ are two complementary induced subgraphs. We can split the vertex set of $G$ as $V_H\cup V_K$, where $V_H$ is the set of vertices of $H$ and $V_K$ is the set of vertices of $K$.
By Theorem \ref{MTeorem}, $$\mathcal{E}(H)\leq \sum_{v\in V_H} \mathcal E(v), \text{ and} $$
$$\mathcal{E}(K)\leq \sum_{v\in V_K} \mathcal E(v).$$
Hence,
$$\mathcal{E}(G-F)=\mathcal{E}(H)+\mathcal{E}(K)\leq \sum_{v\in V_H} \mathcal E(v)+ \sum_{v\in V_K} \mathcal E(v)=\sum_{v\in V(G)} \mathcal E(v)=\mathcal{E}(G).$$
\end{proof}
If one considers the energy game on graphs with $n$ vertices, then the energy of a vertex can be thought of as a value for these games. In this sense, one can ask whether this value coincides with the Shapley value restricted to energy games on graphs. The next theorem goes in this direction.
\begin{thm}\label{syefnull}
The energy of a vertex satisfies the axioms of symmetry, efficiency and null player.
\end{thm}
\begin{proof}
The efficiency axiom follows by definition.
We will prove that the only null players are isolated vertices. Indeed, suppose that $v_i$ is not isolated. Then $\deg(v_i)>0$ and by \cite[Theorem 3.3]{AFJ}
$$\mathcal{E}(v_i)\geq \deg(v_i)/\Delta>0,$$
where $\Delta$ denotes the maximum degree of the graph.
On the other hand, by Theorem \ref{MTeorem}, for $H=G\setminus\{v_i\}$ we have
$$\sum_{v\in V\setminus\{v_i\}} \mathcal E_G(v)\geq\mathcal E(G\setminus\{v_i\})=w(V\setminus\{v_i\}),$$
and then $$w(V)=\mathcal E_G(v_i)+\sum_{v\in V\setminus\{v_i\}}\mathcal{E}_G(v) >w(V\setminus\{v_i\}),$$ proving that $v_i$ is not a null player.
Finally, if $v_i$ is isolated, then $\mathcal{E}(v_i)=0$, as desired.
Now, let us characterize symmetric players. For a vertex $v$ in $V$ consider $N(v)$, the set of neighbours of $v$.
We claim that $v_i$ and $v_j$ are symmetric if and only if $N(v_i)=N(v_j)$. First, if $N(v_i)=N(v_j)$, then for any $S\subset V\setminus\{v_i,v_j\}$ the subgraphs induced by $S\cup\{v_i\}$ and $S\cup\{v_j\}$ are isomorphic. Thus $v_i$ and $v_j$ are symmetric.
Now, let $v_i$ and $v_j$ be symmetric players and consider, for each $w\in N(v_i)$, the subgraph induced by $\{w,v_i\}$, which has energy $2$. Since $v_i$ and $v_j$ are symmetric, the subgraph induced by $\{w,v_j\}$ must also have energy $2$. This, of course, means that $w$ is connected to $v_j$ in $G$. Since this works for all $w\in N(v_i)$, we get $N(v_i)\subset N(v_j)$. Interchanging the roles of $v_i$ and $v_j$, we see that $N(v_j)\subset N(v_i)$ and thus $N(v_i)=N(v_j)$.
\end{proof}
Even though the energy of a vertex satisfies the above axioms of symmetry, efficiency and null player, it does not coincide with the Shapley value, as the following example shows.
\begin{exa}
Consider $G=(V,E)$, with $V=\{1,2,3\}$ and $E=\{\{1,2\},\{2,3\}\}$, i.e. $G$ is the path with $3$ vertices. A direct calculation shows that $\mathcal{E}(v_1)=\mathcal{E}(v_3)=\frac{1}{\sqrt{2}}$ and
$\mathcal{E}(v_2)=\sqrt{2}$. On the other hand, the values of each coalition are given by
$$w(\emptyset)=w(\{1\})=w(\{2\})=w(\{3\})=w(\{1,3\})=0,$$
$$w(\{1,2\})=w(\{2,3\})=2,$$
$$w(\{1,2,3\})=2\sqrt{2}.$$
Hence, if we denote by
$\varphi(v_i)$ the Shapley value of the $i$-th vertex, then a direct calculation shows that
$\varphi(v_1)=\varphi(v_3)=\frac{2\sqrt{2}-1}{3}$ and
$\varphi(v_2)=\frac{2+2\sqrt{2}}{3}$.
\end{exa}
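The Shapley values claimed in this example can be checked by averaging marginal contributions over all orderings of the players, an equivalent form of the formula above (the helper `shapley` is ours):

```python
import math
from itertools import permutations

def shapley(players, w):
    """Shapley value as the average marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = w(coalition)
            coalition = coalition | {p}
            phi[p] += w(coalition) - before
    return {p: v / math.factorial(len(players)) for p, v in phi.items()}

def w(S):
    """Coalition values of the energy game on the 3-path, from the example."""
    if S in ({1, 2}, {2, 3}):
        return 2.0
    if S == {1, 2, 3}:
        return 2 * math.sqrt(2)
    return 0.0

phi = shapley([1, 2, 3], w)
```

This reproduces $\varphi(v_1)=\varphi(v_3)=(2\sqrt2-1)/3$ and $\varphi(v_2)=(2+2\sqrt2)/3$, which differ from the vertex energies $1/\sqrt2$ and $\sqrt2$.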
However, we conjecture that the Shapley value is in the core.
\begin{conj}
For the energy game, the Shapley value is in the core.
\end{conj}
Let us end with the following proposition, which says that the marginal contribution of a vertex to a coalition is at least its vertex energy within the coalition; this is a step in that direction.
\begin{prop}
Let $S\subset V$ and $i\in S$. The marginal contribution of $i$ satisfies $$ w(S)-w(S\setminus \{i\})\geq \mathcal{E}_S(i).$$
\end{prop}
\begin{proof}
This follows from the following inequalities:
\begin{eqnarray}
\mathcal{E}(S)&=&\sum_{j\in S}\mathcal{E}_S(j)=\sum_{j\in S\setminus\{i\} }\mathcal{E}_S(j)+\mathcal{E}_S(i)
\\&\geq&
\mathcal{E}(S\setminus \{i\})+\mathcal{E}_S(i),
\end{eqnarray}
where we used Theorem \ref{MTeorem} in the inequality.
\end{proof}
\section{Schatten energy game}
Recalling that the energy of a graph is the 1-Schatten norm (or nuclear norm) of its adjacency matrix, in this section we consider an energy inspired by the $p$-Schatten norms.
Explicitly, instead of using $|A|$ one can use $|A|^p$. In this way, the $p$-energy of a graph is defined as $\mathcal E_p(G):= Tr(|A(G)|^p)$, where $A(G)$ denotes the adjacency matrix of $G$. Since $(A(G)^2)_{ij}$ counts the walks of length two from the vertex $v_i$ to the vertex $v_j$, one easily gets that $\mathcal E_2(G)= 2m$, where $m$ is the number of edges of the graph.
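For instance, using the known spectra of $K_3$ (eigenvalues $2,-1,-1$) and $P_3$ (eigenvalues $\sqrt2,0,-\sqrt2$), the identity $\mathcal E_2(G)=2m$ can be checked directly (a small sketch with our own helper):

```python
import math

def E_p(eigenvalues, p):
    """p-energy from a list of adjacency eigenvalues: Tr(|A|^p)."""
    return sum(abs(l) ** p for l in eigenvalues)

K3 = [2, -1, -1]                       # triangle: m = 3 edges
P3 = [math.sqrt(2), 0, -math.sqrt(2)]  # 3-path:   m = 2 edges
```

Here `E_p(K3, 2)` is $6=2\cdot3$ and `E_p(P3, 2)` is $4=2\cdot2$, up to rounding.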
Here, we should point out that using the $p$-Schatten norm $(Tr(|A(G)|^p))^{1/p}$ directly would result in a game which is not superadditive.
The $p$-energy game on a graph $G=(V,E)$ is defined in an analogous way: for each subset $S\subset V$ let $w_p(S):=\mathcal{E}_p(I(S))$, where $I(S)$ is the graph induced by $S$. Moreover, for $p\geq1$ we have the following.
\begin{prop}
Let $p\geq 1$. The $p$-energy game on graphs is superadditive.
\end{prop}
\begin{proof}
The proof is the same as the one of Proposition \ref{Super} by using Jensen's trace inequality for $n=2$, $X_1=A(G)$, $X_2=A(G)$, $A_1=P$ and $A_2=I-P$, where $P$ is the projection to the first $k$ entries. In this case, one should consider the convex function $f(\cdot)=|\cdot|^p$.
\end{proof}
\subsection{Some properties of the Schatten energy}
Apart from the above properties described in relation to game theory, by being the $p$-th power of the Schatten norm we automatically have the following properties.
\begin{prop}[Monotonicity of Schatten norms] \label{mon schatten}
For $1\leq p<q<\infty$ and any graph $G$, we have that
\begin{equation} \label{monotonocity}
\mathcal{E}_p(G)^{1/p}\geq \mathcal{E}_q(G)^{1/q}. \end{equation}
\end{prop}
A direct corollary is the following, which will be improved below for bipartite graphs.
\begin{cor}
Let $G=(V,E)$ be a graph with $|V|=n$ and $|E|=m$. Then\\
i) If $1\leq p\leq 2$, then $(2m)^{p/2}\leq \mathcal{E}_p(G)$.\\
ii) If $p>2$, then $(2m)^{p/2} \geq\mathcal{E}_p(G)$.
\end{cor}
\begin{proof}
Since $\mathcal E_2(G)=2m$, by \eqref{monotonocity} we have
$$\mathcal{E}_p(G)^{1/p} \geq\mathcal{E}_2(G)^{1/2}=(2m)^{1/2}$$
for $1\leq p\leq 2$, and
$$\mathcal{E}_p(G)^{1/p} \leq\mathcal{E}_2(G)^{1/2}=(2m)^{1/2}$$
for $p>2$.
\end{proof}
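A numerical check of the corollary on the $3$-path ($m=2$; the singular values of its adjacency matrix are $\sqrt2,\sqrt2,0$), a sketch with our own helper:

```python
import math

def E_p_P3(p):
    """p-energy of the 3-path: two singular values equal to sqrt(2)."""
    return 2 * math.sqrt(2) ** p

m = 2
assert (2 * m) ** (1.5 / 2) <= E_p_P3(1.5)   # part i),  p = 1.5
assert (2 * m) ** (3.0 / 2) >= E_p_P3(3.0)   # part ii), p = 3
```

At $p=2$ both sides meet, since $\mathcal E_2(P_3)=2m=4$.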
On the other hand, normalizing by the factors $n^{-1/p}$ and $n^{-1/q}$ reverses the inequality \eqref{monotonocity}.
\begin{prop} \label{monotonocity2}
For $1\leq p<q<\infty$ and any graph $G$ with $n$ vertices, we have that
\begin{equation}\label{Hol} (\mathcal{E}_p(G)/n)^{1/p}\leq (\mathcal{E}_q(G)/n)^{1/q}. \end{equation}
\end{prop}
In particular, taking again $q=2$ we obtain a generalization of McClelland's inequality \cite{Mc}, and taking $p=2$ we obtain a lower bound in terms of the number of edges $m$.
\begin{prop}
Let $G=(V,E)$ be a graph with $|V|=n$ and $|E|=m$. Then\\
i) If $1\leq p\leq 2$, then $ \mathcal{E}_p(G)\leq n^{1-p/2} (2m)^{p/2}$.\\
ii) If $p>2$, then $ \mathcal{E}_p(G)\geq n^{1-p/2} (2m)^{p/2}$.
\end{prop}
\begin{proof}
Note that $\mathcal{E}_2(G)=2m$. Both inequalities follow then from (\ref{Hol}).
\end{proof}
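The two bounds can be checked numerically on $K_3$ ($n=3$, $m=3$, eigenvalues $2,-1,-1$), a small sketch with our own helper:

```python
def E_p_K3(p):
    """p-energy of K3: the absolute eigenvalues are 2, 1, 1."""
    return 2 ** p + 2 * 1 ** p

n, m = 3, 3
assert E_p_K3(1.0) <= n ** (1 - 1.0 / 2) * (2 * m) ** (1.0 / 2)   # part i)
assert E_p_K3(3.0) >= n ** (1 - 3.0 / 2) * (2 * m) ** (3.0 / 2)   # part ii)
```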
It is well known that among trees with $n$ vertices, the star $S_n$ and the path $P_n$ have minimal and maximal energy, respectively.
For the Schatten energy there is a different behavior between $1\leq p\leq2$ and $p>2$. This is stated in Li, Shi and Gutman \cite{LSG} as a private communication from S. Wagner, and is asked again in Nikiforov \cite{Nik3}.
\begin{conj} [\cite{Nik3}]
Let $T$ be a tree with $n$ vertices. Then\\
i) If $1\leq p\leq 2$, then $\mathcal{E}_p (S_n)\leq \mathcal{E}_p(T) \leq\mathcal{E}_p(P_n)$.\\
ii) If $p>2$, then $\mathcal{E}_p (P_n) \leq \mathcal{E}_p(T) \leq\mathcal{E}_p(S_n)$.
\end{conj}
We are able to prove the inequalities involving the star $S_n$; in fact, they follow from a statement valid for all bipartite graphs.
\begin{prop}
Let $G=(V,E)$ be a bipartite graph with $|V|=n$ and $|E|=m$. Then\\
i) If $1\leq p\leq 2$, then $2m^{p/2}\leq \mathcal{E}_p(G)$.\\
ii) If $p>2$, then $2m^{p/2} \geq\mathcal{E}_p(G)$.\\
In particular, for any tree $T$, $\mathcal{E}_p(S_n) \leq\mathcal{E}_p(T)$ if $1\leq p\leq 2$, and $\mathcal{E}_p(S_n) \geq\mathcal{E}_p(T)$ if $p>2$.
\end{prop}
\begin{proof}
Since $G$ is bipartite, the eigenvalues $\lambda_1\geq \lambda_2\geq\cdots \geq\lambda_n$
satisfy $\lambda_i=-\lambda_{n+1-i}$; in particular, $\lambda_1,\dots,\lambda_k\geq 0$ for $k=\lfloor n/2\rfloor$, and if $n$ is odd the middle eigenvalue vanishes.
Since $\mathcal{E}_2(G)=2m=\lambda_1^2+\cdots+\lambda_n^2$, we get
$$\lambda_1^2+\cdots+\lambda_k^2=m$$
and
$$\mathcal{E}_p(G)=
2(\lambda_1^p+\cdots+\lambda_k^p).$$
Now, by the monotonicity of $p$-norms we see that
$$ \sqrt{m}=(\lambda_1^2+\cdots+\lambda_k^2)^{1/2}\leq (\lambda_1^p+\cdots+\lambda_k^p)^{1/p}$$
if $1\leq p\leq2$, and
$$ \sqrt{m}=(\lambda_1^2+\cdots+\lambda_k^2)^{1/2}\geq (\lambda_1^p+\cdots+\lambda_k^p)^{1/p}$$
if $p\geq2.$
Finally, if $G$ is a tree then $m=n-1$. Recalling that the nonzero eigenvalues of the star graph are $-\sqrt{n-1}$ and $\sqrt{n-1}$, we see that its $p$-energy is given by $2(n-1)^{p/2}=2m^{p/2}$.
\end{proof}
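A check of the proposition on the bipartite path $P_4$ ($m=3$), using the classical fact that $P_n$ has eigenvalues $2\cos(k\pi/(n+1))$, $k=1,\dots,n$ (a numerical sketch, with our own helper):

```python
import math

lams = [2 * math.cos(k * math.pi / 5) for k in range(1, 5)]  # spectrum of P4

def E_p_P4(p):
    return sum(abs(l) ** p for l in lams)

m = 3
assert abs(E_p_P4(2) - 2 * m) < 1e-9       # E_2 = 2m
assert 2 * m ** (1.5 / 2) <= E_p_P4(1.5)   # part i),  p = 1.5
assert 2 * m ** (3.0 / 2) >= E_p_P4(3.0)   # part ii), p = 3
```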
\begin{rem}
Inequality ii) does not extend to non-bipartite graphs: it is not true in general that $\mathcal{E}_p(G)\leq 2m^{p/2}$ for $p>2$. For $n\geq3$, the complete graph on $n$ vertices is an easy counterexample. In this case the eigenvalues are $-1$ with multiplicity $n-1$ and $n-1$ with multiplicity $1$, so the Schatten energy is given by $\mathcal{E}_p(K_n)=(n-1)^p+(n-1)$. On the other hand, $m=n(n-1)/2$. It can be checked by elementary calculus that $(n-1)^p+(n-1)-2(n(n-1)/2)^{p/2}>0$ for $p>4$ and $n\geq3$, and thus $\mathcal{E}_p(K_n)>2m^{p/2}$.
\end{rem}
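The final inequality of the remark can be checked numerically from the explicit spectrum of $K_n$; a small sketch (our own illustration), which also exhibits equality at the boundary case $p=4$, $n=3$:

```python
def complete_p_energy(n, p):
    # Spectrum of K_n: eigenvalue n-1 (once) and -1 (multiplicity n-1).
    return (n - 1) ** p + (n - 1)

for n in range(3, 12):
    m = n * (n - 1) / 2
    for p in [4.5, 5, 6, 8]:
        assert complete_p_energy(n, p) > 2 * m ** (p / 2)

# at p = 4 and n = 3 the two sides coincide: 2^4 + 2 = 18 = 2 * 3^2
assert complete_p_energy(3, 4) == 2 * (3 * 2 / 2) ** 2
```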
\subsection{The $p$-energy of a vertex}
Let us fix a graph $G$ for the sake of notation. One can also define the $p$-energy of a vertex as $\mathcal E_p(v_i):=|A(G)|^p_{ii}$. A direct calculation shows that $\mathcal{E}_2(v_i)=d_i$. One can prove that it lies in the core for $p\geq1$:
\begin{thm}\label{Th1}
Let $p\geq1$. The $p$-energy of a vertex is in the core.
\end{thm}
\begin{proof}
The proof is analogous to the proof of Theorem \ref{Core}, where the case $p=1$ is considered. The only modification needed is to consider the convex function $f=|\cdot|^p$ instead of $f=|\cdot|$.
\end{proof}
\begin{prop}The $p$-energy of a vertex is an imputation that satisfies the axioms of null player, efficiency and symmetry.
\end{prop}
\begin{proof}
It is clear from the definition that it is an imputation. The other properties are proved along the same lines as Theorem \ref{syefnull}, with obvious modifications. We leave the proof to the reader.
\end{proof}
We do not go into further detail on the graph-theoretical properties of the $p$-energy of a vertex, but let us state a basic bound.
\begin{thm}
Let $0<r<s$. Then $$\mathcal E_r(v_i)\leq\mathcal{E}_s(v_i)^{r/s}.$$
In particular, for $p<2$,
$$\mathcal E_p(v_i)\leq d_i^{p/2},$$
where $d_i$ denotes the degree of $v_i.$
\end{thm}
\begin{proof}
We use the noncommutative H\"older inequality (\ref{Hold}) with $X=|A|^r$ and $Y=1$, $p=s/r$ and $q=s/(s-r)$. For $p=2$, we notice that $\mathcal E_2(v_i)=d_i.$
\end{proof}
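For a concrete check, consider $P_3$ with edges $v_1\sim v_2\sim v_3$, whose spectral decomposition is known explicitly. The sketch below (our own illustration, not part of the paper) computes $\mathcal E_p(v_i)=(|A|^p)_{ii}$ from the hardcoded eigenpairs and verifies both $\mathcal E_2(v_i)=d_i$ and the bound $\mathcal E_p(v_i)\leq d_i^{p/2}$ for $p<2$.

```python
import math

# Orthonormal eigenpairs of the adjacency matrix of P_3 (v1 - v2 - v3).
eigs = [
    (math.sqrt(2),  [0.5,  math.sqrt(0.5), 0.5]),
    (-math.sqrt(2), [0.5, -math.sqrt(0.5), 0.5]),
    (0.0,           [math.sqrt(0.5), 0.0, -math.sqrt(0.5)]),
]
deg = [1, 2, 1]

def vertex_p_energy(i, p):
    # E_p(v_i) = (|A|^p)_{ii} = sum_j |lambda_j|^p * u_j(i)^2
    return sum(abs(lam) ** p * u[i] ** 2 for lam, u in eigs)

for i in range(3):
    assert abs(vertex_p_energy(i, 2) - deg[i]) < 1e-9       # E_2(v_i) = d_i
    for p in [1, 1.25, 1.5, 1.9]:
        assert vertex_p_energy(i, p) <= deg[i] ** (p / 2) + 1e-9
```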
Finally, we show that as in the case of vertex energy, the $p$-energy of a bipartite graph is divided evenly between its two parts.
\begin{prop}\label{bipartite}
Let $G$ be a bipartite graph with parts $V$ and $W$. Then
\begin{equation}
\sum_{v \in V} \mathcal{E}_{p} (v) = \sum_{w \in W} \mathcal{E}_{p} (w).
\end{equation}
\end{prop}
\begin{proof}
Being bipartite corresponds to the adjacency matrix having the form, with $r=|V|$ and $s=|W|$,
\begin{equation}
A =
\begin{pmatrix}
{\bf 0}_{r,r} & B \\
B^T & {\bf 0}_{s,s}
\end{pmatrix},
\end{equation}
then the matrix $M=AA^T$ is of the form
\begin{equation}
M =
\begin{pmatrix}
BB^T & 0\\
0 & B^T B
\end{pmatrix},
\end{equation}
and, since $|A|=\sqrt{M}$, the $p$-th power of the absolute value of $A$ is given by
\begin{equation}
|A|^p =
\begin{pmatrix}
(\sqrt{BB^T})^p & 0\\
0 &(\sqrt{B^T B})^p
\end{pmatrix}.
\end{equation}
Now, we have the equalities
$$Tr(BB^T)^{p/2}=Tr\sqrt{BB^T}^p= \sum_{v\in V} |A|^p_{vv} =\sum_{v\in V} \mathcal{E}_p(v),$$
and
$$Tr(B^TB)^{p/2}=Tr\sqrt{B^TB}^p=\sum_{w\in W} |A|^p_{ww} =\sum_{w\in W} \mathcal{E}_p(w).$$
Since $BB^T$ and $B^TB$ have the same nonzero eigenvalues, $Tr(BB^T)^{p/2}=Tr(B^TB)^{p/2}$, and we get the result.
\end{proof}
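Proposition \ref{bipartite} can be illustrated on the star $S_n$: both nonzero eigenvalues $\pm\sqrt{n-1}$ place squared eigenvector weight $1/2$ at the centre and $1/(2(n-1))$ at each leaf, so each side of the equality equals $(n-1)^{p/2}$. A small numerical sketch (our own illustration):

```python
import math

def star_vertex_energies(n, p):
    # Star S_n: eigenvalues +/- sqrt(n-1); squared eigenvector entries are
    # 1/2 at the centre and 1/(2(n-1)) at each leaf, for both eigenvalues.
    lam_p = (n - 1) ** (p / 2)           # |lambda|^p for each nonzero eigenvalue
    centre = 2 * lam_p * 0.5             # E_p(v_1)
    leaf = 2 * lam_p / (2 * (n - 1))     # E_p(v_i), i >= 2
    return centre, leaf

for n in [3, 4, 5, 10]:
    for p in [1, 1.5, 2, 3]:
        centre, leaf = star_vertex_energies(n, p)
        assert abs(centre - (n - 1) * leaf) < 1e-9   # the two parts balance
```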
\subsection{The case $p=2$}
For $p=2$ the $p$-energy equals twice the number of edges of the graph. In this case the $p$-energy game is a particular case of what are called \emph{induced subgraph games}, with all weights identically 1; see \cite{ref1,ref2} for details.
It is well known that induced subgraph games with non-negative weights are convex; as a corollary, the core is nonempty.
Moreover, in this case the $p$-energy of a vertex coincides with its degree, which by Theorem \ref{Th1} is in the core. Finally, we have the next proposition.
\begin{prop}
The Shapley value of the $2$-energy game on graphs is exactly the degree of the vertex, which also coincides with the $2$-energy of a vertex.
\end{prop}
\begin{proof}
Let $\mathcal V(i)$ be the set of neighbours of the vertex $i$. Recall that the Shapley value is given by
$$\phi_i(v)=\sum_{S\subset N\backslash\{i\}}\frac{|S|!(n-|S|-1)!\,(w(S\cup\{i\})-w(S))}{n!}.$$
Since $w(S\cup\{i\})$ is twice the number of edges of the graph induced by $S\cup\{i\}$ and $w(S)$ is twice the number of edges of the graph induced by $S$, the marginal contribution of $i$, $w(S\cup\{i\})-w(S)$, equals twice the number of neighbours of $i$ inside $S$, i.e.~$2|S\cap \mathcal V(i)|$, which yields
\begin{eqnarray*}
\phi_i(v)&=&\sum_{S\subset N\backslash\{i\}}\frac{|S|!(n-|S|-1)!2|S\cap \mathcal V(i)|}{n!}\\
&=&2\sum_{S\subset N\backslash\{i\}}\frac{|S|!(n-|S|-1)!|S\cap \mathcal V(i)|}{n!}\\
&=&2\sum_{S\subset N\backslash\{i\}}\left(\sum_{j\in S\cap \mathcal V(i)}\frac{|S|!(n-|S|-1)!}{n!}\right)
\end{eqnarray*}
Now, since the only terms that contribute to the sum are the ones coming from neighbours of $i$, we can rewrite this sum as
\begin{eqnarray*}
\phi_i(v)&=&2\sum_{j\in \mathcal{V}(i)}\left(\sum_{S\subset N\backslash\{i\},j\in S}\frac{|S|!(n-|S|-1)!}{n!}\right).
\end{eqnarray*}
Finally, the number of coalitions of size $k$ containing the vertex $j$ and not containing the vertex $i$ is given by $\binom{n-2}{k-1}$ so that
\begin{eqnarray*}
\phi_i(v)&=&2\sum_{j\in \mathcal{V}(i)}\left(\sum_{k=1}^{n-1}\binom{n-2}{k-1}\frac{k!(n-k-1)!}{n!}\right)\\
&=&2\sum_{j\in \mathcal{V}(i)}\left(\sum_{k=1}^{n-1}\frac{k}{n(n-1)}\right)\\&=&2\sum_{j\in \mathcal{V}(i)}\frac{1}{2}=\deg(i).
\end{eqnarray*}
\end{proof}
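The proposition can be confirmed by brute force on small graphs. The sketch below (our own illustration; the $4$-vertex graph is an arbitrary example) computes the Shapley value of the induced subgraph game $w(S)=2\,|E(S)|$ directly from its definition and compares it with the degrees.

```python
from itertools import combinations
from math import factorial

def shapley(n, edges, w):
    # Shapley value of the game w on players 0..n-1, straight from the formula.
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (w(set(S) | {i}, edges) - w(set(S), edges))
    return phi

def w2(S, edges):
    # 2-energy of the induced subgraph = twice its number of edges
    return 2 * sum(1 for (a, b) in edges if a in S and b in S)

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]   # a small graph on 4 vertices
deg = [2, 2, 3, 1]
phi = shapley(4, edges, w2)
assert all(abs(phi[i] - deg[i]) < 1e-9 for i in range(4))
```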
\section{Examples}
Let us finish with two simple examples of well known graphs.
\begin{exa}[Path]
Consider a path of size $n$. That is the graph with vertex set $V=\{1,2,\dots, n\}$ and edge set $E=\{i\sim i+1|i=1\dots,n\}$.
\begin{center}
\begin{figure}[ht]
\includegraphics[width=10cm]{caminoconenergias.png}
\caption{Vertex energies on the path of size 6. The intensity of the color is proportional to this quantity.}
\end{figure}
\end{center}
According to Corollary 4.10 in \cite{ALR} the vertex energies satisfy the following relation.
$$\mathcal{E}(v_1)<\mathcal{E}(v_3)< \cdots <\mathcal{E}(v_{2k+1})< \mathcal{E}(v_{2k+2})< \mathcal{E}(v_{2k})< \cdots <\mathcal{E}(v_{4}) < \mathcal{E}(v_2)$$
for all $k<[n/4].$
So the minimum is attained at the vertices $v_1$ and $v_n$, and the maximum at $v_2$ and $v_{n-1}$.
A possible game-theoretical interpretation for this behavior is the following. Since the extreme points ($v_1$ and $v_n$) can only cooperate with their unique neighbour, they have little negotiation margin and thus accept a payment which is much lower than the rest. So even though the total gain of players $v_1$ and $v_2$ is larger than 2, player $v_1$ accepts a payoff which is smaller than 1.
\end{exa}
\begin{exa}
The star with $n$ vertices is an example of a bipartite graph with parts $\{v_1\}$ and $\{v_2,\dots,v_{n}\}$.
\begin{figure}[h]
\includegraphics[width=12cm]{stars.png}
\caption{Vertex energies on star of size 4 to 6. The intensity of the color is proportional to this quantity.}
\end{figure}
The total energy is $2\sqrt{n-1}$ and is divided equally between the two parts. So $\mathcal{E}_G(v_1)=\sqrt{n-1}$ and $\mathcal{E}_G(v_i)=1/\sqrt{n-1}$ for $i\geq 2$. Moreover, one can think here that each edge represents a collaboration whose payoff is split evenly between its two vertices.
\end{exa}
\begin{exa}
Consider the graph $S_3=P_3$, with vertex set $\{v_1,v_2,v_3\}$ and edges $v_1\sim v_2$ and $v_2\sim v_3$. The core is the set of payoffs $\phi:V\to\mathbb{R}$ that satisfy the conditions
\begin{eqnarray*}
\phi(v_1)&\geq& 0, \quad \phi(v_2)\geq 0, \quad \phi(v_3)\geq 0, \\
\phi(v_1)+\phi(v_2)&\geq& 2, \quad \phi(v_2)+\phi(v_3)\geq 2, \quad \phi(v_1)+\phi(v_3)\geq 0, \\
\phi(v_1)+\phi(v_2)+\phi(v_3)&=& 2\sqrt{2}.\\
\end{eqnarray*}
The vertex energy is given by $\mathcal{E}(v_1)=1/\sqrt{2}$, $\mathcal{E}(v_2)=\sqrt{2}$, $\mathcal{E}(v_3)=1/\sqrt{2}$, while the Shapley value is given by $Sh(v_1)=Sh(v_3)=0.6095$ and $Sh(v_2)=1.6095$, which is inside the core. Figure \ref{coreS3} illustrates the different values.
\begin{center}
\begin{figure}[h]
\definecolor{uuuuuu}{rgb}{0,0,0}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm,scale=3]
\fill[line width=2pt,fill=black,fill opacity=0.1] (1.4142135623730954,2.4494897427831783) -- (1,1.7320508075688772) -- (1.4142135623730954,1.014611872354576) -- (1.8284271247461905,1.7320508075688774) -- cycle;
\draw [line width=2pt] (0,0)-- (1.4142135623730954,2.4494897427831783);
\draw [line width=2pt] (1.4142135623730954,2.4494897427831783)-- (2.8284271247461907,0);
\draw [line width=2pt] (2.8284271247461907,0)-- (0,0);
\draw [line width=1pt] (1.8284271247461905,1.7320508075688774)-- (1.4142135623730954,1.014611872354576);
\draw [line width=1pt] (1,1.7320508075688772)-- (1.4142135623730954,1.014611872354576);
\draw [line width=1pt,dash pattern=on 1pt off 1pt] (1.4142135623730954,2.4494897427831783)-- (1.4142135623730954,1.014611872354576);
\draw (2.8,0.2) node[anchor=north west] {$v_3$};
\draw (1.45,2.5448385846905963) node[anchor=north west] {$v_2$};
\draw (1.43,1.56) node[anchor=north west] {$Sh$};
\draw (1.43,1.38) node[anchor=north west] {$\mathcal{E}$};
\draw (1.43,1.10) node[anchor=north west] {$C$};
\draw (1.90,1.85) node[anchor=north west] {$B$};
\draw (.80,1.85) node[anchor=north west] {$D$};
\draw (1.25,2.63) node[anchor=north west] {$A$};
\draw (-0.2,0.2) node[anchor=north west] {$v_1$};
\draw [line width=2pt] (1.4142135623730954,2.4494897427831783)-- (1,1.7320508075688772);
\draw [line width=2pt] (1,1.7320508075688772)-- (1.4142135623730954,1.014611872354576);
\draw [line width=2pt] (1.4142135623730954,1.014611872354576)-- (1.8284271247461905,1.7320508075688774);
\draw [line width=2pt] (1.8284271247461905,1.7320508075688774)-- (1.4142135623730954,2.4494897427831783);
\begin{scriptsize}
\draw [fill=uuuuuu] (0,0) circle (1pt);
\draw [fill=uuuuuu] (1,1.7320508075688772) circle (1pt);
\draw [fill=uuuuuu] (1.4142135623730954,2.4494897427831783) circle (1pt);
\draw [fill=uuuuuu] (1.4142135623730954,1.014611872354576) circle (1pt);
\draw [fill=uuuuuu] (1.8284271247461905,1.7320508075688774) circle (1pt);
\draw [fill=uuuuuu] (2.8284271247461907,0) circle (1pt);
\draw [fill=uuuuuu] (1.41,1.3939356435643566) circle (1pt);
\draw [fill=uuuuuu] (1.41,1.225) circle (1pt);
\end{scriptsize}
\end{tikzpicture}
\caption{The core in gray, delimited by the quadrilateral $ABCD$ with $A=(0,2.828,0)$, $B=(0.828,2,0)$, $C=(0.828,1.172,0.828)$, $D=(0,2,0.828)$; the Shapley value $Sh=(0.6095,1.6095,0.6095)$; and the energy of a vertex $\mathcal{E}= (0.707,1.414,0.707)$.}
\label{coreS3}
\end{figure}
\end{center}
\end{exa}
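The Shapley value quoted in the last example can be recomputed by brute force from the subgraph energies of $P_3$. The sketch below (our own illustration) hardcodes $w(S)$ as the graph energy of the induced subgraph: a single edge has eigenvalues $\pm1$ (energy $2$), the pair $\{v_1,v_3\}$ induces no edge (energy $0$), and the whole graph has energy $2\sqrt2$.

```python
import math
from itertools import combinations
from math import factorial

# w(S) = energy of the subgraph of P_3 (v1 - v2 - v3) induced by S.
w = {
    frozenset(): 0.0,
    frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
    frozenset({1, 2}): 2.0, frozenset({2, 3}): 2.0, frozenset({1, 3}): 0.0,
    frozenset({1, 2, 3}): 2 * math.sqrt(2),
}

def shapley(i, players=(1, 2, 3)):
    n = len(players)
    total = 0.0
    others = [j for j in players if j != i]
    for k in range(n):
        for S in combinations(others, k):
            S = frozenset(S)
            coef = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += coef * (w[S | {i}] - w[S])
    return total

print(round(shapley(1), 4), round(shapley(2), 4))  # prints 0.6095 1.6095
```

The exact values are $Sh(v_1)=Sh(v_3)=(2\sqrt2-1)/3$ and $Sh(v_2)=(2+2\sqrt2)/3$, which sum to $2\sqrt2$ as efficiency requires.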
\section{Conclusions and further questions}
In this paper we have shown how to define a game by borrowing the concept of energy from chemical graph theory, and we have generalized it to Schatten norms. The main contribution of this paper is a successful interpretation of the vertex energy of a graph as a payoff, thus explaining how the energy is split among the vertices, which otherwise may seem counter-intuitive. As a consequence of proving desirable properties from the viewpoint of game theory, we obtained new inequalities which may be of interest from the graph-theoretical viewpoint. This opens a new line of study, which we expect to develop in the future.
Regarding the Schatten energy, the case $p=2$ is special since in that case the $p$-energy and the Shapley value coincide, and thus the solution may be extended to any game. In general, the $p$-energy makes sense for all matrices but it is not clear how to define it for an arbitrary game; it would be useful to give another set of axioms which characterize the $p$-energy and are suitable for any game. Moreover, as we mentioned above, the Shapley value and the $p$-energy of a vertex share many properties as solutions of the energy game. Finding inequalities that allow one to compare them may be of interest.
Finally, as the reader may have noticed, the main technical tool used in this paper is Jensen's trace inequality. This inequality is, more generally, valid for any convex function, giving rise to other games on graphs which satisfy superadditivity and have at least one solution in the core analogous to the $p$-energy. One natural question is which set of games come as an outcome of this construction and how large this set is as a subset of the games on $n$ players.
\subsection*{Acknowledgement} The first author would like to thank Carlo Danon for the work done during the summer of 2019 which was crucial for starting this paper. We would like to thank Jorge Fern\'andez for some comments and for letting us use the Figures 1 and 2 in this paper. The second author received support from CONACYT grant CB-2017-2018-A1-S-9764.
\section*{Introduction}
The present paper studies a class of groups that are constructed via a recent framework due to Jones \cite{Jones17-Thompson}.
This framework was discovered by accident in the land of quantum field theory as we will now explain, see the following survey for more details \cite{Brothier-19-survey}.
Conformal field theories (in short, CFT) \`a la Haag-Kastler provide subfactors and, conversely, \textit{certain} subfactors provide CFTs, but the reconstruction is on a case-by-case basis and so far the most intriguing subfactors (with exotic representation theory uncaptured by groups and quantum groups) are not known to provide a CFT \cite{Longo-Rehren95,Evans_Kawahigashi_92_sf_cft,Jones-Morrison-Synder14,Bischoff17,Xu18-CFT}.
By using the planar algebra of a subfactor, Jones created a lattice model approximating the desired CFT.
This did not converge to a classical CFT but rather defined a discontinuous physical model particularly relevant at quantum phase transition where Richard Thompson's group $T$ plays the role of the spatial conformal group \cite{Jones16-Thompson,Jones18-Hamiltonian,Osborne-Stiegemann19, Stiegemann19-thesis,Brot-Stottmeister-M19,Brot-Stottmeister-Phys}.
By extracting the mathematical essence of this construction Jones found a wonderful machine for constructing actions of certain groups (e.g.~Thompson's groups and braid groups) called \textit{Jones' actions}.
Recall that Richard Thompson's group $F$ is the group of piecewise linear homeomorphisms of the unit interval having finitely many breakpoints all at dyadic rationals and having slopes powers of 2.
There are two other groups: Thompson's group $T$ containing $F$ and translations by dyadic rationals acting by homeomorphisms on the unit circle, and Thompson's group $V$ containing $T$ and discontinuous exchanges of subintervals of $[0,1]$ that act by homeomorphisms on Cantor space, see \cite{Cannon-Floyd-Parry96} for details.
We will be focusing on Thompson's group $V$ in this paper but this study stays relevant for the smaller Thompson's groups $F$ and $T$.
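For readers who like to experiment, the standard generator $x_0$ of $F$ can be written down explicitly as a piecewise linear map with breakpoints $1/2$ and $3/4$ and slopes $1/2,1,2$. The following sketch is our own illustration (the paper works abstractly with trees, not with this map):

```python
from fractions import Fraction as Fr

def x0(t):
    # The standard generator of Thompson's group F, as a PL map of [0,1]
    # with dyadic breakpoints 1/2 and 3/4 and slopes 1/2, 1, 2.
    if t <= Fr(1, 2):
        return t / 2
    if t <= Fr(3, 4):
        return t - Fr(1, 4)
    return 2 * t - 1

# The breakpoints and their images are dyadic rationals
for b in [Fr(1, 2), Fr(3, 4)]:
    img = x0(b)
    assert img.denominator & (img.denominator - 1) == 0  # power-of-2 denominator

assert x0(Fr(0)) == 0 and x0(Fr(1)) == 1  # fixes the endpoints
```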
Thompson's groups $F,T$ and $V$ are countable discrete groups that are notoriously difficult to understand.
However, they admit simple descriptions in terms of fraction groups of categories, which is the description used by Jones' technology.
It has been known for a long time that a category having nice cancellation properties (a category with a left calculus of fractions) provides a groupoid by formally inverting its morphisms and in particular groups by fixing a common source to all morphisms: such groups are called \textit{fraction groups} or \textit{groups of fractions} \cite{GabrielZisman67}.
In particular, the fraction group of the category of binary forests (specialised at the object $1$) is (isomorphic to) Thompson's group $F$: elements of $F$ are described by (an equivalence class of) a pair of trees having the same number of leaves.
The larger groups $T$ and $V$ have similar descriptions but with trees having their leaves decorated by natural numbers corresponding to permutations, see \cite{Brown87, Cannon-Floyd-Parry96}.
Jones made the crucial observation that, if we consider any functor starting from such a category (e.g.~the category of binary forests), then we obtain an action of the fraction group (e.g.~of $F$, $T$ or $V$).
This led to a powerful and practical framework for constructing group actions and put in evidence unsuspected connections between different fields of research.
We present some of those connections.
\subsection*{Application of Jones' technology}
$\bullet$ Functors from the category of binary forests into the category of Conway tangles provide ways to construct knots and links via Thompson's group $F$.
Jones proved that they can all be constructed in that way, concluding that ``Thompson's group $F$ is as good as the braid group for producing knots and links.''
He discovered the so-called Jones subgroup $\Vec F\subset F$ or oriented Thompson's group which has remarkable properties such as being isomorphic to the ternary Brown-Thompson's group $F_3$ as proved by Golan and Sapir \cite{Jones17-Thompson, Golan-Sapir17}.
Aiello proved the beautiful result that any \textit{oriented} link can be produced using $\Vec F$ and the construction of Jones \cite{Aiello20}.
This led to a profound connection between Thompson's group $F$ and knot theory, providing new invariants for knots and linking for the second time, after the Jones polynomials, subfactor theory with knot theory \cite{Jones_polynome_vna,Jones19-thomp-knot}.
$\bullet$ Using functors ending in the category of Hilbert spaces we obtain unitary representations and have access to matrix coefficients that are explicitly computable via an algorithm depending on the functor chosen \cite{Jones17-Thompson,Jones19Irred,ABC19}.
Jones and the author used this approach for constructing new families of unitary representations and matrix coefficients for Thompson's groups $F,T,$ and $V$ \cite{Brot-Jones18-2,Brot-Jones18-1}.
In particular, a pair $(a,b)$ of two bounded operators acting on a Hilbert space $H$ and satisfying the \textit{Pythagorean relation} $a^*a+b^*b=\id_H$ provides a unitary representation of the largest Thompson's group $V$.
This produces many new explicit examples of positive definite maps and a deep connection between Thompson's groups $F,T,$ and $V$ and the Cuntz algebra complementing previous works of Nekrashevych \cite{Nekrashevych04}.
This also led to new effortless proofs establishing analytic properties of Thompson's groups: no intermediate group between the derived group of $F$ and $V$ is a Kazhdan group, and $T$ has the Haagerup property.
Although these are not new results, this demonstrates how powerful Jones' technology is.
Recall that Reznikoff proved that Thompson's group $T$ is not a Kazhdan group, which was also a consequence of the combined work of Ghys-Sergiescu and Navas \cite{Reznikoff01,Ghys-Sergiescu87,Navas02}.
Moreover, Farley proved the stronger result that $V$ has the Haagerup property, implying that all of its subgroups have this property and in particular are not Kazhdan groups \cite{Farley03-H}.
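To make the Pythagorean relation concrete: any pair of real matrices $a=\cos(\theta)R$, $b=\sin(\theta)\,\mathrm{id}$ with $R$ orthogonal satisfies $a^*a+b^*b=\mathrm{id}_H$, and hence, by the result quoted above, yields a unitary representation of $V$. The check below is our own toy illustration, not the construction of \cite{Brot-Jones18-1,Brot-Jones18-2}:

```python
import math

def mul(X, Y):
    # plain matrix product
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adj(X):
    # adjoint = transpose for real matrices
    return [list(row) for row in zip(*X)]

theta = 0.7
R = [[0.0, -1.0], [1.0, 0.0]]                  # a rotation, so R* R = id
a = [[math.cos(theta) * x for x in row] for row in R]
b = [[math.sin(theta) if i == j else 0.0 for j in range(2)] for i in range(2)]

aa, bb = mul(adj(a), a), mul(adj(b), b)
S = [[aa[i][j] + bb[i][j] for j in range(2)] for i in range(2)]
for i in range(2):
    for j in range(2):
        # a*a + b*b = id_H: the Pythagorean relation holds
        assert abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```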
$\bullet$ A functor $\Phi: \mathcal C\to \Gr$ ending in the category of groups gives an action $G_\mathcal C\curvearrowright K$ of the fraction group $G_\mathcal C$ associated to the source category $\mathcal C$ on a limit group $K$.
The author made the key observation that the semidirect product $K\rtimes G_\mathcal C$ is again a fraction group and provided an explicit description of the category inducing this group \cite{Brothier19WP}.
Jones' framework can then be reapplied to this semidirect product $K\rtimes G_\mathcal C$ for constructing unitary representations and computing matrix coefficients.
With this method the author provided a new large class of semidirect products with the Haagerup property.
In particular, all wreath products $\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$, with $\Gamma$ any discrete group having the Haagerup property and $V\curvearrowright \mathbf{Q}_2$ the classical action of Thompson's group $V$ on the dyadic rationals, have the Haagerup property \cite{Brothier19WP}.
This provided, using a result of Cornulier, the first examples of finitely presented wreath products having the Haagerup property for a nontrivial reason (i.e.~the group acting is nonamenable and the base space is infinite) \cite{Cornulier06}.
This class of wreath products/fraction groups is contained in the class of groups that we are studying in this article.
$\bullet$ Groups constructed as above naturally appear in certain field theories.
Stottmeister and the author constructed physical models in the line of Jones' work but also using previous constructions appearing in loop quantum gravity.
In those physical models, keeping the notation from above, the physical space is approximated by $\mathbf{Q}_2$ with local symmetries being the group $\Gamma$ and Thompson's group $T$ playing the role of spatial symmetries acting by local scale transformations and rotations on $\mathbf{Q}_2.$
Together they generate a group of the form $K\rtimes T$ admitting a fraction group description where $K \subset \prod_{\mathbf{Q}_2}\Gamma$ plays the role of a discrete loop group \cite{Brot-Stottmeister-M19,Brot-Stottmeister-Phys}.
\subsection*{Construction of groups}
We now present a specific class of fraction groups to which Jones' technology can be efficiently applied.
Groups studied in this article and its sequel belong to this class.
Consider a functor $\Phi:\mathcal F\to \Gr$ from the category of forests to the category of groups.
Jones' technology provides a Jones' action $F\curvearrowright K$ of Thompson's group $F$ on a group $K$.
Consider the semidirect product $G_0:=K\rtimes F$.
The author made the trivial but key observation that $G_0$ is itself a fraction group.
Therefore, we can reapply Jones' technology to $G_0$ as explained in the third bullet point above.
As an initial study we consider the simplest class of functors $\Phi$: when $\Phi:\mathcal F\to\Gr$ is \textit{covariant} and \textit{monoidal}.
By a universal property of $\mathcal F$ the class of covariant monoidal functors $\Phi:\mathcal F\to\Gr$ is in bijection with all triples $(\Gamma,\alpha_0,\alpha_1)$ where $\Gamma$ is a group and $\alpha_0,\alpha_1\in\End(\Gamma)$ are endomorphisms of $\Gamma$.
The monoidal assumption implies that we can extend the Jones action $F\curvearrowright K$ to the largest Thompson's group $V$.
Therefore, we obtain a larger group $G:=K\rtimes V$.
The group $G$ is still a fraction group with remarkable diagrammatic descriptions.
We are studying in this article and its sequel the class of such groups $G=K\rtimes V$.
More precisely, we are studying in the first article the class of groups $G$ obtained from triples $(\Gamma,\alpha_0,\varepsilon_\Gamma)$ where $\varepsilon_\Gamma:\Gamma\to \Gamma, g\mapsto e_\Gamma$ is the trivial endomorphism sending all elements of $\Gamma$ to the neutral element $e_\Gamma.$
These groups can be described as restricted permutational wreath products $\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ constructed from the action $V\curvearrowright \mathbf{Q}_2$ and are the ones appearing in the third bullet point.
More precisely, if $\Gamma$ has the Haagerup property as a discrete group and $\alpha_0$ is injective, then $G$ has the Haagerup property \cite{Brothier19WP}.
In the second paper we will be considering fraction groups obtained from triples of the form $(\Gamma,\alpha_0,\alpha_1)$ with $\alpha_0,\alpha_1$ automorphisms producing a rather different class of groups: the group $K$ is a \textit{discrete loop group} $L\Gamma$, see \cite{Brothier20-2}.
The groups obtained in the second article are the ones appearing in the physical models studied in \cite{Brot-Stottmeister-M19,Brot-Stottmeister-Phys}.
\subsection*{Motivations}
The motivation of this work is multifold.
As remarked in the last bullet point, the groups $K\rtimes T$ constructed from certain covariant monoidal functors $\Phi:\mathcal F\to\Gr$ appear in physical models. It is then important to obtain information about these groups, as they describe symmetries. Moreover, knowing whether the groups $G$ are pairwise isomorphic becomes an important question when comparing the various models.
Perhaps, the most natural motivation is to understand what kind of groups can be obtained from functors $\Phi:\mathcal F\to \Gr.$
Can we simply describe these groups? Are they often isomorphic to each other? What kind of properties do they satisfy? What are their internal symmetries? Do they share some of the exceptional behavior among groups like Thompson's groups?
All of these questions are important.
Indeed, to each group in this class we can efficiently apply Jones' technology and construct explicit unitary representations, for which we can often compute matrix coefficients.
This can lead to establishing, for instance, that large classes of groups have the Haagerup property or are not Kazhdan groups, as explained above. One then wants to know to which groups these results apply, justifying the initial questions of this paragraph.
Explicit descriptions of these groups can also provide new information in group theory on the permanence of certain properties. This was the case for the Haagerup property and wreath products, as mentioned above.
Another motivation is to construct Thompson-like groups and to provide new tools and approach to other pre-existing works. Conversely, we are motivated by adapting and using other known techniques and results to our class of groups.
There are a number of generalisations of Thompson's groups $F,T$, and $V$.
Cloning systems of Witzel-Zaremsky (using Brin-Zappa-Szep's products) and Tanushevski's construction produce two large classes of Thompson-like groups \cite{Witzel-Zaremsky18,Zaremsky18-clone,Tanushevski16,Tanushevski17}.
The author discovered that these two classes of groups have large intersections with the class of groups obtained from Jones' technology and \textit{covariant} functors $\Phi:\mathcal F\to\Gr$.
This is a wonderful coincidence, as the technologies and questions studied are very different. Hence, the formalisms, results, and techniques developed will be profitable to both communities.
We have extensively compared these technologies and how they connect to each other in a long appendix in the sequel of this article.
Hence, we will be brief here but invite the interested reader to consult \cite[Appendix]{Brothier20-2}.
Interestingly and fortunately, results proved by each research group are rather disjoint.
In particular, all the results presented in this present article, its sequel and the one concerning the Haagerup property are new \cite{Brothier20-2,Brothier19WP}.
Therefore, they provide new results in cloning systems and in Tanushevski's framework.
Even the description of these groups as wreath products or semidirect products of discrete loop groups with Thompson's groups $F, T,$ and $V$ seems to be new, which should clearly help in understanding these groups.
Conversely, cloning system technology connects our work with a conjecture of Lehnert (later modified via a theorem of Bleak-Matucci-Neunh\'offer) concerning co-context-free groups \cite{Lehnert08-thesis, BleakMatucciNeunhoffer16,BZFGHM18}.
Tanushevski and Witzel-Zaremsky proved that a large class of our fraction groups have exceptional finiteness properties.
Moreover, Tanushevski provides extensive descriptions of the normal subgroups of the groups $K\rtimes F$ appearing in this article.
Finally, using cloning systems Ishida proves that many of our fraction groups can be (left- or bi-) ordered \cite{Ishida17}.
\subsection*{Other generalisations of Thompson's groups $F,T$, and $V$.}
There are a number of other generalisations of Thompson's groups.
We present here a few of them and compare them with the class of groups constructed using Jones' technology.
This explains where our fraction groups sit compared with other classical constructions of Thompson-like groups.
Elements of Thompson's group $F$ can be diagrammatically described by pairs of (rooted, finite, ordered) binary trees with the same number of leaves.
If we add cyclic permutations or any permutations of the leaves we obtain Thompson's groups $T$ and $V$, respectively \cite{Cannon-Floyd-Parry96}.
Note that an element of $V$ can be represented by two trees $t,s$ so that each leaf of $t$ is linked to a unique leaf of $s$ by a curve. Linking leaves of $t$ to $s$ corresponds to a permutation.
By replacing the (possibly intersecting) curves by braids we obtain the braided Thompson's group independently discovered by Brin and Dehornoy \cite{Brin07-BraidedThompson,Dehornoy06}.
Now, instead of having binary trees, we may consider $n$-ary trees obtaining new groups $F_n,T_n,V_n$ for $n\geq 2.$
Moreover, we may replace trees by forests with a fixed number $r$ of roots.
This produces the so-called Higman-Thompson's groups $V_{n,r}$ and Brown-Thompson's groups $F_{n,r},T_{n,r}$ \cite{Higman74,Brown87}.
A pair of trees can be associated to a pair of partitions of the unit interval and a transformation of $[0,1]$.
It is then possible to define $d$-dimensional analogues of Thompson's group $V$ by considering partitions of $[0,1]^d$ and obtaining the Brin-Thompson's groups $dV$ for $d\geq 1$ \cite{Brin04-nV}.
The $n$-ary versions $F_n,T_n,V_n$ for $n\geq 2$ of Thompson's groups $F,T,$ and $V$ are acting on the boundary of the infinite $n$-ary tree $\mathcal T_n$ which is the Cantor space $\{0,\cdots,n-1\}^\mathbf{N}$ equal to all infinite words in the alphabet $\{0,\cdots,n-1\}.$
The action consists in changing a finite prefix of an infinite word.
One can enlarge these groups by also acting on the remaining infinite tail of words. This can be done by fixing a \textit{self-similar} subgroup $H$ of the automorphism group of the infinite $n$-ary tree $\mathcal T_n$.
We then obtain a so-called Rover-Nekrashevych's group $V_n(H)$ which is a subgroup of the almost automorphism group of $\mathcal T_n$ \cite{Rover99, Nekrashevych05}.
Hughes introduced the notion of a finite similarity structure on a compact ultrametric space and the associated locally finitely determined groups of local similarities, also known as finite similarity structure (FSS) groups \cite{Hughes09}.
This generalises the key example of $V_n$ acting on $\{0,\cdots,n-1\}^\mathbf{N}$ and the groups of Rover-Nekrashevych when the self-similar group $H$ is finite.
There are other important classes of Thompson-like groups that have remarkable diagrammatic descriptions but no longer using trees or forests: for instance the \textit{diagram groups} of Guba-Sapir and their extension as \textit{picture groups} by Farley \cite{Guba-Sapir97,Farley05}.
We now compare with the class $\mathcal{CMF}$ of groups $G=K\rtimes V$ constructed using Jones' technology via covariant monoidal functors $\Phi:\mathcal F\to\Gr$ or, equivalently, triples $(\Gamma,\alpha_0,\alpha_1)$ with $\Gamma$ a group and $\alpha_0,\alpha_1\in\End(\Gamma).$
It is the class of groups considered in this article and its sequel.
Elements of $G$ can be represented by a pair of binary trees, with permutations of their leaves and moreover decoration of the leaves by elements of $\Gamma.$
One can describe the elements of the $T$ and $F$ versions of $G$ by considering exclusively cyclic permutations or no permutations at all, respectively.
We obtain that for these groups $n=2$, $r=1$, $d=1$ in the notation above: we are considering pairs of \textit{binary} forests with $r=1$ root (i.e.~trees) and in dimension $d=1$.
We don't have braids either.
Moreover, the group $\Gamma$ has no interaction with the tree structure of $\mathcal T_2$ nor with Cantor space $\{0,1\}^\mathbf{N}$.
Hence, as far as the author knows, there is no large intersection between the class $\mathcal{CMF}$ and the classes of groups presented above, except for the groups constructed by Witzel-Zaremsky and Tanushevski as explained earlier.
Moreover, we do not see any similarity between the construction producing the class $\mathcal{CMF}$ and the other constructions.
Here is an argument tending to show that the classes of groups considered are rather different.
Farley (resp.~Hughes) proved that all picture groups (resp.~all FSS groups) admit a proper isometric action on a CAT(0) cubical complex, implying the Haagerup property \cite{Farley05,Hughes09}.
Our class of groups contains all restricted permutational wreath products $\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ over all groups $\Gamma$, deduced from the action $V\curvearrowright \mathbf{Q}_2$ of $V$ on the dyadic rationals of the unit interval. In particular, if $\Gamma$ does not have the Haagerup property, then this wreath product is neither an FSS group nor a picture group.
\subsection*{Presentation of the main results.}
Consider a triple $(\Gamma,\alpha_0,\varepsilon_\Gamma)$ where $\varepsilon_\Gamma$ is the trivial endomorphism. Since the second endomorphism is trivial we may write $\alpha=\alpha_0$ and consider the pair $(\Gamma,\alpha)$ to express the data of the triple $(\Gamma,\alpha,\varepsilon_\Gamma).$
To this pair $(\Gamma,\alpha)$ is associated a unique covariant monoidal functor $\Phi:\mathcal F\to\Gr$ satisfying $\Phi(n)=\Gamma^n$ for all $n\geq 1$ and $\Phi(Y):\Gamma\to \Gamma^2, g\mapsto (\alpha(g),e_\Gamma).$
This provides a limit group $K=\varinjlim_{t\in\mathfrak T} \Gamma_t$ and a Jones' action $\pi_\Phi:V\curvearrowright K$.
We are considering the semidirect product or fraction group $G=K\rtimes V$.
A general argument on a larger class of fraction groups proves that $K\subset G$ is a characteristic subgroup.
Using an inductive limit trick we show that $\alpha$ can always be assumed to be an automorphism, see Section \ref{sec:End}.
Starting with $\alpha$ an automorphism, we obtain that $G=K\rtimes V$ is isomorphic to a restricted permutational twisted wreath product $\Gamma\wr_{\mathbf{Q}_2}V=\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$.
This wreath product is obtained from the classical action $V\curvearrowright \mathbf{Q}_2$ of $V$ on the dyadic rationals $\mathbf{Q}_2$ of the unit interval.
The twist is described using slopes of elements of $V$ and the automorphism $\alpha$, see Section \ref{sec:description}.
This provides a second powerful description (after the tree-like diagrammatic description) of these fraction groups.
Using the wreath product description and the fact that $\oplus_{\mathbf{Q}_2}\Gamma\subset \Gamma\wr_{\mathbf{Q}_2}V$ is characteristic we prove the surprising fact that all isomorphisms $\theta:G\to \tilde G$ between such fraction groups $G=\Gamma\wr_{\mathbf{Q}_2}V$ and $\tilde G=\tilde\Gamma\wr_{\mathbf{Q}_2}V$ are \textit{spatial} in the following sense.\\
{\bf Definition.}
An isomorphism $\theta:G\to\tilde G$ is called \textit{spatial} if the following hold.
There exist an isomorphism $\kappa:K\to \tilde K$, a map $c:V\to \tilde K$ and a homeomorphism $\varphi$ of Cantor space restricting to a bijection of $\mathbf{Q}_2$ such that
$$\theta(av) = \kappa(a)\cdot c_v \cdot \ad_\varphi(v) \text{ for all } a\in K, v\in V$$
where $\ad_\varphi(v):=\varphi v \varphi^{-1}, \ v\in V$ and moreover
$$\supp(\kappa(a)) = \varphi(\supp(a)) \text{ for all } a\in K$$
where $\supp(a)$ denotes the support of $a\in K$.
The fact that all isomorphisms are spatial implies the following classification theorem:
\begin{theol}[Theorem \ref{th:isomGalpha}]\label{theol:A}
Consider two groups with automorphisms $(\Gamma,\alpha\in\Aut(\Gamma))$ and $(\tilde\Gamma,\tilde\alpha\in\Aut(\tilde\Gamma))$ and their associated fraction groups $G:= K\rtimes V$ and $\tilde G:=\tilde K\rtimes V$.
The groups $G$ and $\tilde G$ are isomorphic if and only if there exists $\beta\in\Isom(\Gamma,\tilde\Gamma)$ and $h\in\tilde\Gamma$ such that $\tilde\alpha = \ad(h)\circ\beta \alpha\beta^{-1}.$
\end{theol}
In particular, we obtain that two nonisomorphic groups $\Gamma,\tilde\Gamma$ provide two nonisomorphic fraction groups, regardless of the choices of automorphisms $\alpha$ and $\tilde\alpha$.
Note that this theorem provides a classification of a particular class of twisted permutational restricted wreath products $\Gamma\wr_{\mathbf{Q}_2}V$.
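For illustration, here is an immediate consequence of the criterion in the abelian case (an elementary example). If $\Gamma=\tilde\Gamma$ is abelian, then inner automorphisms are trivial, and if moreover $\Aut(\Gamma)$ is abelian the condition of the theorem reduces to
$$\tilde\alpha=\beta\alpha\beta^{-1}=\alpha.$$
Taking $\Gamma=\mathbf{Z}$ (so that $\Aut(\mathbf{Z})\simeq\mathbf{Z}/2\mathbf{Z}$), the pairs $(\mathbf{Z},\id)$ and $(\mathbf{Z},-\id)$ therefore provide nonisomorphic fraction groups.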
{\bf Comparison between general results on wreath products and our results.}
The above result is surprising as it cannot be expected for arbitrary classes of wreath products.
It is known that the standard restricted wreath product $\Gamma\wr\Lambda:=\oplus_\Lambda\Gamma\rtimes \Lambda$ remembers the group $\Gamma$ (if $\Gamma\wr\Lambda\simeq \tilde\Gamma\wr \tilde\Lambda$, then $\Gamma\simeq\tilde\Gamma$) and moreover the subgroup $\oplus_\Lambda\Gamma\subset \Gamma\wr\Lambda$ is characteristic (except for very few cases), but isomorphisms between two such groups are not necessarily spatial in the above sense \cite{Neumann64}.
In Remark \ref{rem:AWP}, for each nontrivial group $\Gamma$, we provide a simple construction of a non-spatial automorphism of the standard restricted wreath product $G=\Gamma^2\wr\mathbf{Z}$.
There exist some partial results regarding isomorphisms between particular permutational and twisted permutational wreath products extending Neumann's work \cite{Bodnarchuk94}.
However, results similar to ours concerning properties of isomorphisms of non-standard wreath products typically hold only for finite groups and under assumptions on $\Gamma$ such as being indecomposable (i.e.~not the direct sum of two nontrivial subgroups), even when the action $\Lambda\curvearrowright X$ is transitive, see \cite{Gross88}.
It is remarkable that our results require no assumptions on the group $\Gamma$.
In Remark \ref{rem:AWP}, we provide examples of permutational twisted wreath products $\Gamma\wr_X B$ and $\tilde\Gamma\wr_{\tilde X} B$ that are isomorphic with $B\curvearrowright X, B\curvearrowright \tilde X$ transitive but such that $\Gamma$ and $\tilde\Gamma$ are not isomorphic.
The fact that isomorphisms behave well with the wreath product structure gives us hope to understand in detail all isomorphisms between such fraction groups.
We indeed succeeded in decomposing any automorphism of a fixed wreath product in our class into the product of four \textit{elementary} ones.
Moreover, we could describe the structure of the automorphism group via an explicit semidirect product.
We restricted this study to \textit{untwisted} wreath products $G=K\rtimes V \simeq \oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ corresponding to pairs $(\Gamma,\id_\Gamma)$ where $\id_\Gamma$ is the identity automorphism of $\Gamma.$
The fraction group $G$ is then isomorphic to a permutational wreath product with a trivial twist justifying the terminology.
Before stating this main result, we introduce some notation:
$Z\Gamma$ is the centre of $\Gamma$; $N(G)/Z\Gamma$ is the group of maps $f:\mathbf{Q}_2\to\Gamma$ normalising $G$, modded out by the constant maps valued in the centre of $\Gamma$; and $\Stab_N(\Q_2)$ is the group of homeomorphisms of Cantor space normalising $V$ and stabilising a copy of $\mathbf{Q}_2$ inside Cantor space.
\begin{theol}[Theorem \ref{theo:isomB}]\label{theol:B}
Let $\Gamma$ be a group and $G:=K\rtimes V\simeq \oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ be the fraction group associated to the pair $(\Gamma,\id_\Gamma)$.
There exists a surjective morphism from the semidirect product
$$\left( Z\Gamma\times N(G)/Z\Gamma\right) \rtimes \left(\Stab_N(\Q_2)\times \Aut(\Gamma)\right)$$
onto the automorphism group of $G$ whose kernel is the set $\{(\bar g,\ad(g)^{-1}):\ g\in\Gamma\}$ where $\bar g\in N(G)/Z\Gamma$ is the equivalence class of the constant map equal to $g$ and $\ad(g)\in\Aut(\Gamma)$ is the inner automorphism of $\Gamma$ associated to $g$.
\end{theol}
There exist general results regarding the structure of the automorphism group of a wreath product \cite{Houghton63,Hassanabi78}. However, since in general automorphisms are not spatial, those results are much coarser than ours, showing again how special the class of wreath products arising from Jones' technology is. Results similar to ours are known to hold when additional assumptions are made on $\Gamma$ such as being finite, nonabelian and indecomposable, see \cite{Gross88}.
We refer the reader to Section \ref{sec:semidirect} for the definition of the action
$$ \left(\Stab_N(\Q_2)\times \Aut(\Gamma)\right) \curvearrowright \left( Z\Gamma\times N(G)/Z\Gamma\right)$$
involved in the semidirect product of Theorem \ref{theol:B}.
The actions of $N(G),\Stab_N(\Q_2)$ and $\Aut(\Gamma)$ on $G$ are not surprising: the first acts by conjugation, the second acts by shifting indices on $K=\oplus_{\mathbf{Q}_2}\Gamma$ and by conjugation on $V$ and the third acts diagonally on $K=\oplus_{\mathbf{Q}_2}\Gamma$ leaving $V$ invariant.
However, the action of $Z\Gamma$ on $G$ was unexpected. It is built using the slopes of elements of $V$ and the dyadic valuation.
More precisely, the map
$$\ell:V\to \prod_{\mathbf{Q}_2}\mathbf{Z},\ \ell_v(x)=\log_2(v'(v^{-1}x)),\ v\in V, x\in\mathbf{Q}_2$$ satisfies the cocycle identity $\ell_{vw} = \ell_v + \ell_w^v,\ v,w\in V,$ implying that given any $\zeta\in Z\Gamma$ the formula
$$av\mapsto a\cdot \zeta^{\ell_v}\cdot v,\ a\in \prod_{\mathbf{Q}_2}\Gamma, v\in V$$
defines an automorphism of the \textit{unrestricted} wreath product $\prod_{\mathbf{Q}_2}\Gamma\rtimes V.$
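The cocycle identity for $\ell$ is a routine consequence of the chain rule for slopes. With the convention $f^v(x):=f(v^{-1}x)$ and using $(vw)'(y)=v'(wy)\cdot w'(y)$ at the point $y=(vw)^{-1}x=w^{-1}v^{-1}x$, we get for all $v,w\in V$ and $x\in\mathbf{Q}_2$:
$$\ell_{vw}(x)=\log_2\big(v'(v^{-1}x)\big)+\log_2\big(w'(w^{-1}v^{-1}x)\big)=\ell_v(x)+\ell_w^v(x).$$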
However, $\ell_v,v\in V$ is not finitely supported in general, implying that the above automorphism does not restrict to an automorphism of the \textit{restricted} wreath product.
By perturbing $\ell_v$, we can define a finitely supported map by considering $p_v=\ell_v + \nu-\nu^v$ where $\nu:\mathbf{Q}_2\to\mathbf{Z}$ is the dyadic valuation.
This latter map satisfies the above cocycle identity, is finitely supported, and is nontrivial when $v$ is.
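Indeed, the cocycle identity for $p$ is a formal consequence of the one for $\ell$: using $(\nu^w)^v=\nu^{vw}$ (with the convention $f^v(x)=f(v^{-1}x)$), one computes
$$p_v+p_w^v=(\ell_v+\nu-\nu^v)+(\ell_w^v+\nu^v-\nu^{vw})=\ell_{vw}+\nu-\nu^{vw}=p_{vw}$$
for all $v,w\in V.$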
Therefore, the formula
$$E_\zeta:av\mapsto a\cdot \zeta^{p_v}\cdot v,\ a\in \oplus_{\mathbf{Q}_2}\Gamma, v\in V$$
defines an automorphism of $G$ for any $\zeta\in Z\Gamma$ and in fact a faithful action $E:Z\Gamma\curvearrowright G.$
{\bf Rubin's faithful classes.}
As pointed out by an anonymous referee of this article, the fact that all isomorphisms are spatial together with the thin description of automorphisms of each fraction group $G$ suggests that we are in a situation similar to those where Rubin's technology applies \cite{Rubin89}.
A class $C$ of pairs $(X,G)$, where $X$ is a topological space and $G$ is a group acting by homeomorphisms on $X$, is called \textit{faithful} if, given any two pairs $(X,G),(\tilde X,\tilde G)$ in $C$ and any isomorphism $\theta:G\to \tilde G$, there exists a homeomorphism $f:X\to \tilde X$ such that $\theta$ is implemented by conjugation by $f$.
Rubin provided a number of examples of faithful classes.
In particular, the class consisting of the single pair $(\mathfrak C,V)$, where $V$ is the largest Thompson's group and $\mathfrak C$ is Cantor space, is faithful; this essential fact is used in this article.
It would be very interesting to know if for our class (or a certain subclass) of fraction groups we can find for each group $G$ a suitable topological space $X_G$ on which $G$ acts by homeomorphisms so that the class of pairs $(X_G,G)$ is faithful in Rubin's sense.
\subsection*{Further comments on the results and proofs.}
Consider the hypothesis of Theorem \ref{theol:A} and an isomorphism $\theta:G\to\tilde G.$
We start by proving that $\theta(K)=\tilde K$ using that $K\subset G$ is the unique maximal normal subgroup of $G$ satisfying a certain decomposability property.
This implies the following decomposition
$$\theta(av) = \kappa(a)\cdot c_v \cdot \phi(v), \ a\in K,v\in V$$
with $\kappa:K\to\tilde K$ an isomorphism, $c:V\to \tilde K, v\mapsto c_v$ a map satisfying a cocycle condition and $\phi$ an automorphism of $V$.
Using a reconstruction theorem of Rubin we obtain the crucial fact: $\phi(v) = \varphi v\varphi^{-1}, v\in V$ for a (unique) homeomorphism $\varphi$ of Cantor space \cite{Rubin96}.
We then prove the surprising and unexpected fact that $\supp(\kappa(a)) = \varphi(\supp(a)), a\in K$ where $\supp(a):=\{x\in\mathbf{Q}_2:\ a(x)\neq e\}$ is the support of $a$.
Note that this forces $\varphi$ to stabilise the dyadic rationals inside Cantor space, i.e.~$\varphi\in\Stab_N(\Q_2).$
Moreover, this implies that $\kappa:\oplus_{\mathbf{Q}_2}\Gamma\to \oplus_{\mathbf{Q}_2}\tilde\Gamma$ is a direct product of isomorphisms: $\kappa=\prod_{x\in\mathbf{Q}_2}\kappa_x \in \prod_{x\in\mathbf{Q}_2}\Isom(\Gamma,\tilde\Gamma)$.
To prove the relation between $\alpha$ and $\tilde\alpha$ we use the formula
$$(vav^{-1})(x) = \alpha^n(a(v^{-1}x)), \ a\in \oplus_{\mathbf{Q}_2}\Gamma, v\in V, x\in \mathbf{Q}_2$$ where $2^n$ is the slope of $v$ at the point $v^{-1}x$.
We prove and use the fact that the slope of $\varphi v \varphi^{-1}$ at $\varphi(x)$ is equal to the slope of $v$ at $x$ when $x\in\mathbf{Q}_2,vx=x, v\in V$ and $\varphi\in\Stab_N(\Q_2).$
Note that this equality of slopes no longer holds in general if we do not assume that $\varphi$ stabilises the dyadic rationals $\mathbf{Q}_2$, see Remark \ref{rem:Shayo}.
We will see in the second article that the situation is more complex when one has to work with \textit{all} automorphisms of $V$ and not only those coming from $\Stab_N(\Q_2).$
We now consider the automorphism group of $G=K\rtimes V$ that we study in the \textit{untwisted} case.
Hence, $G$ is a permutational wreath product of the form $\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ for some group $\Gamma.$
Using the fact that $K\subset G$ is a characteristic subgroup and the result concerning the support mentioned earlier, we obtain that any $\theta\in\Aut(G)$ can be decomposed as
$$\theta(av)=\kappa(a)\cdot c_v\cdot \ad_\varphi(v), a\in K,v\in V$$ such that $\kappa(a)(\varphi(x)) = \kappa_x(a(x)), x\in \mathbf{Q}_2$ for some $\kappa_x\in\Aut(\Gamma)$ and a fixed $\varphi\in\Stab_N(\Q_2).$
Up to a composition with $av\mapsto a^\varphi\cdot \ad_\varphi(v)$ we can assume that $\varphi$ is trivial.
Moreover, up to a composition with elements of the normaliser $N(G)$ and automorphisms of $\Aut(\Gamma)$ acting on the wreath product we can assume that $\kappa$ is the identity.
There remains an automorphism of the form $av\mapsto a\cdot c_v\cdot v$, which forces $c_v(x)$ to be in the centre of $\Gamma$ for all $v\in V, x\in\mathbf{Q}_2.$
We obtain a map $v\in V\mapsto c_v\in \oplus_{\mathbf{Q}_2}Z\Gamma$ satisfying the cocycle identity $c_{vw}=c_v \cdot c_w^v, v,w\in V$.
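This identity simply records the multiplicativity of the automorphism: writing $c_w^v:=vc_wv^{-1}$ and $\theta$ for the automorphism $av\mapsto a\cdot c_v\cdot v$, we have
$$c_{vw}\cdot vw=\theta(vw)=\theta(v)\theta(w)=c_v v\cdot c_w w=c_v\,(vc_wv^{-1})\,vw=c_v\cdot c_w^v\cdot vw,$$
whence $c_{vw}=c_v\cdot c_w^v$ for all $v,w\in V.$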
To finish the proof of the theorem we classify all such maps, but valued in the product (not the direct sum) of copies of $Z\Gamma$, that is:
$$\{d:V\to \prod_{\mathbf{Q}_2}Z\Gamma:\ d_{vw}=d_v\cdot d_w^v, \ \forall v,w\in V\}.$$
These maps form an abelian group for the pointwise product and moreover any cocycle $d$ can be decomposed as a product of two cocycles as follows: $$d_v(x)=\zeta^{\log_2(v'(v^{-1}x))} \cdot f(x) f(v^{-1}x)^{-1},\ v\in V, x\in\mathbf{Q}_2$$ for a pair $(\zeta,f)\in Z\Gamma\times \prod_{\mathbf{Q}_2}Z\Gamma$ that is unique up to multiplying $f$ by a constant map.
Hence, this latter group of cocycles is isomorphic to $Z\Gamma\times \prod_{\mathbf{Q}_2}Z\Gamma/Z\Gamma.$
By considering $p_v:=\log_2(v'(v^{-1}\cdot))+\nu-\nu^v, v\in V$ rather than $\log_2(v'(v^{-1}\cdot))$ we decompose the automorphism $av\mapsto a\cdot c_v\cdot v$ into a product of $\ad(f)$ with $f\in N(G)\cap \prod_{\mathbf{Q}_2}Z\Gamma$ and an automorphism: $$E_\zeta:av\mapsto a\cdot \zeta^{p_v}\cdot v, a\in K, v\in V$$
with $\zeta\in Z\Gamma$.
This achieves the proof that every automorphism of $G$ is the product of four kinds of elementary automorphisms as previously described.
We obtain that $\Aut(G)$ is generated by the copies of the groups $Z\Gamma, N(G), \Stab_N(\Q_2)$ and $\Aut(\Gamma)$.
To avoid confusion we write here $D(Z\Gamma)\subset N(G)$ for the subgroup of constant maps from $\mathbf{Q}_2$ to $Z\Gamma.$
It is rather easy to see that $Z\Gamma,N(G)/D(Z\Gamma), \Stab_N(\Q_2),\Aut(\Gamma)$ sit faithfully inside $\Aut(G).$
We want to understand how those subgroups interact with each other.
A straightforward check shows that $Z\Gamma$ commutes with $N(G)$ and $\Stab_N(\Q_2)$ commutes with $\Aut(\Gamma).$
Both groups $\Stab_N(\Q_2)$ and $\Aut(\Gamma)$ normalise $N(G)$ and act in the expected way:
$$[(\varphi,\beta)\cdot f](x):= \beta(f(\varphi^{-1}x)),$$
where
$ \varphi\in\Stab_N(\Q_2), \beta\in\Aut(\Gamma), f\in N(G), x\in \mathbf{Q}_2.$
These actions clearly descend to actions on the quotient group $N(G)/D(Z\Gamma).$
The group $\Aut(\Gamma)$ normalises $Z\Gamma$ acting as $\beta\cdot \zeta:=\beta(\zeta), \beta\in\Aut(\Gamma),\zeta\in Z\Gamma.$
However, $\Stab_N(\Q_2)$ does not normalise $Z\Gamma$ but normalises the product of groups $Z\Gamma\times N(G)/D(Z\Gamma).$
The action is more complicated than expected but can be written down using slopes of elements of $V$.
For clarity of the presentation we choose to first define a semidirect product
$$\left( Z\Gamma\times N(G)/D(Z\Gamma) \right) \rtimes \left(\Stab_N(\Q_2)\times \Aut(\Gamma)\right)$$
without referring to $G$, thus proving that the complicated formula describing the action of $\Stab_N(\Q_2)\times\Aut(\Gamma)$ on $Z\Gamma\times N(G)/D(Z\Gamma)$ is indeed well-defined.
We end the proof of Theorem \ref{theol:B} by defining the group morphism from the semidirect product $\left( Z\Gamma\times N(G)/D(Z\Gamma) \right) \rtimes \left(\Stab_N(\Q_2)\times \Aut(\Gamma)\right)$ onto $\Aut(G)$ and by computing its kernel which is an easy task.
\subsection*{Comparison between this present first article and the second one.}
In this first article, we consider the class of fraction groups $G(\Gamma,\alpha_0,\alpha_1)$ built from a triple $(\Gamma,\alpha_0,\alpha_1)$ where $\Gamma$ is a group and $\alpha_0,\alpha_1$ are endomorphisms, one of them being trivial, say $\alpha_1.$
We start by reducing to the case where $\alpha_0$ is an automorphism using a direct limit process, i.e.~there exists a group $\Lambda$ and an automorphism $\beta_0$ such that $G(\Gamma,\alpha_0,\varepsilon_\Gamma)\simeq G(\Lambda,\beta_0,\varepsilon_\Lambda).$
We prove a rigidity phenomenon for maps between two such fraction groups, showing that all isomorphisms are spatial.
This leads to Theorem \ref{theol:A}: a complete classification of the class of fraction groups, and Theorem \ref{theol:B}: a thin description of the automorphism group of an untwisted fraction group (i.e.~a fraction group obtained from the identity automorphism $\alpha_0=\id_\Gamma$ and the trivial endomorphism $\alpha_1=\varepsilon_\Gamma$).
The proof of the rigidity phenomenon is rather easy and so is the proof of Theorem \ref{theol:A}. The main technical difficulty is Theorem \ref{theol:B}; in particular, finding all automorphisms and understanding the structure of the automorphism group.
In the second article we follow a scheme similar to that of the first one.
We consider all triples $(\Gamma,\alpha_0,\alpha_1)$ and their associated fraction groups $G(\Gamma,\alpha_0,\alpha_1)$ where $\alpha_0,\alpha_1$ are any endomorphisms.
Contrary to the first article, there exist some groups $G(\Gamma,\alpha_0,\alpha_1)$ not isomorphic to any $G(\Lambda,\beta_0,\beta_1)$ with $\beta_0,\beta_1$ automorphisms of a group $\Lambda.$
After describing the general structure of $G(\Gamma,\alpha_0,\alpha_1)$ for any triple we quickly specialise to $\alpha_0,\alpha_1$ being automorphisms in order to pursue a deeper study.
We prove a similar rigidity phenomenon showing that all isomorphisms between two such fraction groups are spatial (up to multiplying the isomorphism by a centrally valued morphism).
This is the most difficult proof of the second article. From it we obtain a partial classification of the class of fraction groups and in particular show that the fraction group obtained from $(\Gamma,\alpha_0,\alpha_1)$ remembers $\Gamma$.
We do not know how to obtain a complete classification.
We then prove that the two classes of groups considered in the first and second articles are disjoint: if $\Gamma$ is nontrivial and $\alpha_0,\alpha_1$ are automorphisms, then $G(\Gamma,\alpha_0,\alpha_1)$ is never isomorphic to $G(\Lambda,\beta_0,\varepsilon_\Lambda)$ for a group $\Lambda.$
This is proved by computing certain centralizer subgroups.
We further prove that there are no nice embeddings between a group of the first class and a group of the second class.
We describe the automorphism group of an untwisted fraction group (i.e.~when $\alpha_0=\alpha_1=\id_\Gamma$) by decomposing each automorphism into a product of elementary ones.
It is slightly easier to find these elementary automorphisms, but the proof showing that their products exhaust all automorphisms is still quite subtle.
\section*{Acknowledgement}
We warmly thank the anonymous referees for all the constructive and judicious suggestions they made to us.
\section{Preliminaries}\label{sec:preliminary}
In this section we start by briefly presenting the general framework of Jones' actions introduced in \cite{Jones16-Thompson}.
The general philosophy is that a nice category provides a group (a fraction group) and a functor starting from this category provides an action of this group (a Jones' action).
The construction of such fraction groups comes from ideas of Ore which appear in the work of Mal'tsev and were adapted to categories by Gabriel and Zisman \cite{Maltsev53,GabrielZisman67}.
The construction of actions from functors is the novel part of Jones' technology.
It can, however, be interpreted as a special case of a Kan extension.
We refer the reader to the short appendix of \cite{Brothier19WP} and to the survey \cite{Brothier-19-survey} for details and a discussion.
We specialise our study to covariant monoidal functors from the category of binary forests to the category of groups which provides actions of Thompson's group $V$ on a group.
We consider the semidirect product which again has a natural structure of fraction group that we describe.
All of this has been extensively explained in \cite[Section 2]{Brothier19WP}, to which we refer the reader for additional details.
We end this preliminary section by recalling and proving elementary facts on Thompson's group $V$ that we consider as a subgroup of the homeomorphism group of Cantor space. Moreover, we recall properties of its automorphism group.
\subsection{Jones general framework}\label{sec:general-fram}
Consider a small category $\mathcal C$ with a chosen object $e\in\ob(\mathcal C)$ and assume that $\mathcal C$ admits a calculus of left fractions in $e$.
We then consider the set of pairs $(f,g)$ of morphisms of $\mathcal C$ having domain $e$ and common codomain.
We mod out this set of pairs by the equivalence relation generated by $(f,g)\sim (p\circ f, p\circ g)$ for any composable morphism $p$ and write $\dfrac{f}{g}$ for the equivalence class of $(f,g)$ calling it a fraction.
This quotient set admits a group structure given by the multiplication:
$$\dfrac{f}{g}\cdot \dfrac{f'}{g'} := \dfrac{p\circ f}{p'\circ g'} \text{ for any choice of $p,p'$ satisfying } p\circ g = p'\circ f'.$$
This forms a group $G_{\mathcal C,e}=G_\mathcal C$ that we call the fraction group of $(\mathcal C,e)$ or simply the fraction group of $\mathcal C$ if the context is clear.
Note that we are following here the original convention of Jones' construction. Hence, the fraction $\dfrac{f}{g}$ must be understood as the composition $f^{-1}\circ g$ where $f^{-1}$ is a formal inverse of $f$.
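For instance, one checks directly from the multiplication formula (choosing $p=p'$ to be the identity) that
$$\dfrac{f}{g}\cdot\dfrac{g}{f}=\dfrac{f}{f}=\dfrac{1_e}{1_e},$$
the last equality holding since $(f,f)=(f\circ 1_e,f\circ 1_e)\sim(1_e,1_e)$ where $1_e$ is the identity morphism of $e$. Hence $\dfrac{1_e}{1_e}$ is the neutral element and $\dfrac{g}{f}$ is the inverse of $\dfrac{f}{g}$, in accordance with the interpretation of $\dfrac{f}{g}$ as $f^{-1}\circ g$.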
Jones made the fundamental observation that given a functor $\Phi:\mathcal C\to\mathcal D$ one can construct a group action $\pi_\Phi: G_\mathcal C\curvearrowright X_\Phi$ called the \textit{Jones' action}.
The construction is again based on considering fractions.
To ease the presentation assume that morphism spaces of $\mathcal D$ are sets and $\Phi$ is covariant.
Consider the set of all pairs $(g,d)$ where $g$ is a morphism from $e$ to $a$ and $d$ is a morphism from $\Phi(e)$ to $\Phi(a)$ where $a$ runs over all objects of $\mathcal C$.
Define the equivalence relation generated by $(g,d)\sim (f\circ g, \Phi(f)\circ d)$ on this set of pairs and write $\dfrac{g}{d}$ for the equivalence class of $(g,d)$ that we call a fraction.
We obtain a set of fractions $X_\Phi$ and the Jones action $\pi_\Phi:G_\mathcal C\curvearrowright X_\Phi$ defined by the formula:
$$\dfrac{f}{g}\cdot \dfrac{f'}{d} := \dfrac{p\circ f}{\Phi(p')\circ d} \text{ for any choice of $p,p'$ satisfying } p\circ g = p'\circ f'.$$
If the morphism spaces of $\mathcal D$ are not sets, then we can adapt the construction by considering $\Hom_\mathcal D(\Phi(e), \Phi(a))$ rather than $\Phi(a).$
A remarkable feature of this construction is that it does not require any assumptions on $\mathcal D$ nor $\Phi.$ Hence, any functor starting from $\mathcal C$ will provide a Jones' action.
We will describe in detail a collection of Jones' actions in Section \ref{sec:construction-FG} and in Proposition \ref{prop:twistWP}.
If $\Phi$ is a covariant functor and $\mathcal D$ is the category of Hilbert spaces with isometries for morphisms, then $X_\Phi$ is a pre-Hilbert space and the Jones action $\pi_\Phi$ can be extended into a unitary representation of $G_\mathcal C$ on the completion of $X_\Phi.$
This provides a wonderful machine for constructing unitary representations and matrix coefficients, see \cite{Jones19Irred,Brot-Jones18-2,Brot-Jones18-1,Brothier19WP,ABC19}.
If $\Phi:\mathcal C\to\Gr$ is a covariant functor where $\Gr$ is the category of groups, then $X_\Phi$ is a group and the Jones action $\pi_\Phi: G_\mathcal C\curvearrowright X_\Phi$ is an action by automorphisms on this group.
We can then consider the semidirect product $X_\Phi\rtimes G_\mathcal C$ obtaining a group from the functor $\Phi$ (and the choice of a fixed object $e\in\mathcal C$).
The author made the observation that $X_\Phi\rtimes G_\mathcal C$ has a very natural description in terms of fraction group, see \cite{Brothier19WP}.
There is a category $\mathcal C_\Phi$ with the same objects as $\mathcal C$ but with typically more morphisms. The fraction group of $(\mathcal C_\Phi,e)$ is then isomorphic to $X_\Phi\rtimes G_\mathcal C$.
We will describe and study those semidirect products when the initial category $\mathcal C$ is a certain category of forests and $\Phi$ is covariant and monoidal.
\subsection{The case of forests and groups}
\subsubsection{The category of forests}\label{sec:forest}
{\bf Trees and forests.}
An ordered rooted binary tree $t$ is a tree-graph with one root $*$ and finitely many vertices.
We imagine it as a graph drawn in the plane with the root on the bottom and leaves on top. Every vertex $v$ of $t$ that is not a leaf has two descendants $v_l,v_r$ that are vertices placed at the top left and top right of $v$, respectively.
We say that the edge from $v$ to $v_l$ (resp. from $v$ to $v_r$) is a left edge (resp. a right edge).
A forest is a union of finitely many ordered rooted binary trees where the roots (and hence the trees) are ordered from left to right.
From now on we will call these objects trees and forests.
{\bf Category of forests.}
We form a (small) category $\mathcal F$ whose set of objects is $\mathbf{N}:=\{0,1,2,3,\cdots\}$ and whose morphism space $\mathcal F(n,m)$ from $n$ to $m$ is the set of all forests having $n$ roots and $m$ leaves for $1\leq n\leq m.$
By convention, $\mathcal F(0,0)$ has one morphism corresponding to an empty diagram and $\mathcal F(0,n)$ is empty for all $n\geq 1.$
The composition is done by stacking forests $f,g$ vertically on top of each other, aligning the leaves of the bottom forest $g$ with the roots of the top forest $f$, obtaining $f\circ g.$
We may write $fg$ rather than $f\circ g$ when composing forests.
We equip this category with a monoidal structure $\otimes$ such that
$n\otimes m = n+m$ for objects $n,m$ and $f\otimes g$ is the horizontal concatenation of the two forests $f$ and $g$ where $f$ is on the left and $g$ on the right.
Note that the single element of $\mathcal F(0,0)$ is the tensor unit.
We think of $\mathcal F$ in the naive sense: it is a set equipped with two binary operations: a partially defined composition which corresponds to \textit{vertical} concatenations and a tensor product which corresponds to \textit{horizontal} concatenations.
{\bf Presentation.}
Denote by $I$ and $Y$ the tree with one leaf and the tree with two leaves, respectively.
The category $\mathcal F$ has the remarkable property that every morphism is the composition of tensor products of $I$ and $Y$.
Indeed, write $f_{j,n}= I^{\otimes j-1}\otimes Y\otimes I^{\otimes n-j}$ for the forest with $n$ roots, $n+1$ leaves such that the $j$-th tree is $Y$ and all the others are the trivial tree $I$ for $1\leq j\leq n.$
It is easy to see that every forest is the result of finite compositions of such $f_{j,n}$.
In fact, the category $\mathcal F$ admits the presentation having $\{f_{j,n}:\ 1\leq j\leq n\}$ for set of generators and set of relations:
$$f_{q,n+1}\circ f_{j,n} = f_{j,n+1}\circ f_{q-1,n} \text{ for all } 1\leq j <q \leq n+1.$$
{\bf Partial order.}
Let $\mathfrak T$ be the set of all trees, which is the set of all morphisms of $\mathcal F$ with source $1.$
Given a tree $t$ we equip its set of vertices with the usual tree distance $d$: $d(v,w)$ is the number of edges in the unique geodesic path between $v$ and $w$.
We equip $\mathfrak T$ with the following partial order:
$$t\leq s \text{ if and only if } s=f\circ t \text{ for some forest } f.$$
For each $n\geq 1$ we write $t_n$ for the tree with $2^n$ leaves all at distance $n$ from the root.
Observe that $(t_n:\ n\geq 1)$ is a cofinal sequence in $(\mathfrak T,\leq)$ meaning that for all $t\in \mathfrak T$ there exists $n\geq 1$ satisfying $t\leq t_n.$
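As a concrete instance of the partial order, note that $t_1=Y$ and
$$t_{n+1}=Y^{\otimes 2^n}\circ t_n \text{ for all } n\geq 1,$$
since stacking a caret on each of the $2^n$ leaves of $t_n$ yields the complete tree $t_{n+1}$; in particular $t_n\leq t_{n+1}$ for all $n.$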
We consider $t_\infty$ the \textit{infinite} rooted binary tree.
For convenience, we will often identify elements of $\mathfrak T$ with finite rooted sub-trees of $t_\infty.$
\subsubsection{Cantor space}\label{sec:cantor}
{\bf Cantor space and the unit interval.}
We write $\mathfrak C$ for Cantor space that we define as being the set of all infinite sequences in $0$ and $1$, that is $\mathfrak C:=\{0,1\}^\mathbf{N}$, equipped with the product topology.
We consider the map $$S:\mathfrak C\to [0,1],\ x=(x_n)_{n\in\mathbf{N}}\mapsto \sum_{n\in\mathbf{N}}\dfrac{x_n}{2^{n+1}}$$
which is surjective.
Recall that a dyadic rational is a number of the form $\dfrac{a}{2^b}$ with $a,b\in\mathbf{Z}$; the set of dyadic rationals is equal to the ring $\mathbf{Z}[1/2].$
Any element of $[0,1]$ that is not a dyadic rational has a unique pre-image.
However, each dyadic rational $r\in(0,1)$ admits exactly two pre-images: $x,y\in\mathfrak C$ satisfying that there exists $N\geq 0$ such that $x_n=y_n$ if $n\leq N-1$, $x_N=1, y_N=0$ and $x_n=0,y_n=1$ for all $n\geq N+1$.
For example, $$\dfrac{5}{8} = \dfrac{1}{2} + \dfrac{1}{8} = S(101000\cdots) = S(100111\cdots) = \dfrac{1}{2} + \sum_{k=4}^\infty \dfrac{1}{2^k}.$$
In particular, $S$ realises a bijection from the set of finitely supported sequences $$\{x\in\mathfrak C:\ \exists N\geq 1,\ x_n=0,\ \forall n\geq N\}$$ onto the set of dyadic rationals contained in $[0,1)$.
\begin{notation}We write $\mathbf{Q}_2$ for the set $\mathbf{Z}[1/2]\cap [0,1)$ that we identify with $\{ S(x):\ x\in\mathfrak C,\ \exists N\geq 1,\ x_n=0,\ \forall n\geq N\}$ and with the set of finitely supported sequences $\{0,1\}^{(\mathbf{N})} $ in $\mathfrak C$.\end{notation}
We write $\leq$ for the lexicographic order of $\mathfrak C$ and remark that $S(x)\leq S(y)$ if and only if $x\leq y$ for all $x,y\in\mathfrak C.$
{\bf Standard dyadic intervals.}
The topology of $\mathfrak C$ is generated by the clopen sets (sets that are both open and closed) of the form
$$I:=\{m_I\cdot x:\ x\in \{0,1\}^\mathbf{N}\}$$
where $m_I\in\{0,1\}^{(\mathbf{N})} $ is a finite sequence of $0$ and $1$ that we call a word and where the symbol $\cdot$ is the concatenation.
We say that $I$ is a \textit{clopen interval} of $\mathfrak C$.
Observe that $S(I)=[\dfrac{a}{2^b},\dfrac{a+1}{2^b}]$ for certain $a,b\in\mathbf{N}$.
Moreover, if $|m_I|$ is the word length of $m_I$, then the Lebesgue measure of $S(I)$ is equal to $2^{-|m_I|}$ for any such $I$, i.e.~$|m_I|=b$ in the previous notation.
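For example, take the word $m_I=10$: then $I$ consists of all sequences of $\mathfrak C$ starting with $1,0$, so that
$$S(I)=\Big[\dfrac{1}{2},\dfrac{3}{4}\Big]=\Big[\dfrac{a}{2^b},\dfrac{a+1}{2^b}\Big] \text{ with } a=b=2,$$
and the Lebesgue measure of $S(I)$ is $2^{-2}=2^{-|m_I|}.$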
For technical reasons we will consider the half-open interval $\dot S(I):=[\dfrac{a}{2^b},\dfrac{a+1}{2^b})$ and call it a \textit{standard dyadic interval} (in short \textit{sdi}).
By abuse of terminology we may call $I$ a sdi rather than a clopen interval and may identify it with $\dot S(I)$ or with $\dot S(I) \cap \mathbf{Q}_2.$
{\bf Standard dyadic partitions.}
Consider a finite collection of clopen intervals $I_1,\cdots,I_n$ that are mutually disjoint and whose union is equal to $\mathfrak C$.
Up to reordering them we obtain that
$$S(I_1)=[0,a_1], S(I_2) = [a_1,a_2],\cdots, S(I_n)=[a_{n-1},1]$$ with
$$0<a_1<a_2<\cdots<a_{n-1}<1.$$
We say that the corresponding family of half-open intervals $[0,a_1), [a_1,a_2),\cdots, [a_{n-1},1)$ is a \textit{standard dyadic partition} of $[0,1)$ (in short \textit{sdp}).
The family is \textit{ordered} if $\sup(I_k)\leq \inf(I_{k+1})$ for all $1\leq k\leq n-1.$
If the context is clear we may suppress the word ordered.
The set of sdp admits a partial order: if $Q,P$ are sdp, then $Q\leq P$ if every sdi of $Q$ is a union of sdi of $P$.
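For instance, the sdp $([0,\tfrac{1}{2}),[\tfrac{1}{2},1))$ is smaller than both
$$Q:=([0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},1)) \text{ and } P:=([0,\tfrac{1}{2}),[\tfrac{1}{2},\tfrac{3}{4}),[\tfrac{3}{4},1)),$$
while $Q$ and $P$ are incomparable and admit the common refinement $([0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},\tfrac{3}{4}),[\tfrac{3}{4},1))$; in particular, the partial order is directed.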
{\bf Interpretation using trees.}
We now present how trees are useful for studying and representing Cantor space as a topological space.
Consider the rooted binary infinite tree $t_\infty$ with root $*$.
{\bf From paths to sequences.}
Given an edge $e$ of $t_\infty$ we set $E(e)=0$ if $e$ is a left edge and $E(e)=1$ if $e$ is a right edge where $E$ stands for evaluation.
If $p$ is a path going from bottom to top (and hence necessarily a geodesic), then it is the concatenation of some edges $p=e_1\cdot e_2\cdot e_3\cdots$.
We extend $E$ on those paths as $E(p) = E(e_1)\cdot E(e_2)\cdot E(e_3)\cdots.$
If $p$ is finite, then $E(p)$ is a word in $0,1$ and if $p$ is infinite, then $E(p)\in\{0,1\}^\mathbf{N}.$
It is easy to see that $E$ realises a bijection from the set of infinite geodesic paths of $t_\infty$ with source $*$ onto Cantor space.
{\bf From vertices to sdi.}
Given a vertex $\nu$ of $t_\infty$ there exists a unique geodesic path $p_\nu$ going from $*$ to $\nu$.
We can then consider $E(p_\nu)\in\{0,1\}^{(\mathbf{N})} $ and the following subset of Cantor space:
$$I_\nu:=\{ E(p_\nu)\cdot x:\ x\in\{0,1\}^{\mathbf{N}} \}$$
that is, the set of all elements of $\mathfrak C$ admitting $E(p_\nu)$ as a prefix.
We obtain bijections between the vertices of $t_\infty$, the finite sequences of $0,1$ and the sdi (i.e.~clopen intervals) of $\mathfrak C$.
{\bf From trees to sdp.}
Consider now a tree $t\in\mathfrak T$, that is, a finite rooted tree that we identify with a rooted subtree of $t_\infty.$
To each leaf $\ell$ of $t$ corresponds a vertex of $t_\infty$ and thus a sdi $I^t_\ell$ of $\mathfrak C$.
We obtain that $(I^t_\ell:\ \ell\in\Leaf(t))$ is a sdp of $\mathfrak C$ and in fact
$$t\in\mathfrak T\mapsto(I^t_\ell:\ \ell\in\Leaf(t))=:P_t$$
realises a bijection between the trees and the sdp of $\mathfrak C.$
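For instance, the tree $Y$ with two leaves gives $P_Y=([0,\tfrac{1}{2}),[\tfrac{1}{2},1))$ while the tree $t=(Y\otimes I)\circ Y$ gives $P_t=([0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},1))$: grafting the caret $Y$ on the first leaf of $Y$ splits the first sdi of $P_Y$ into its two halves.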
{\bf Refinement and partial order.}
Consider now $t\in\mathfrak T$ and $f\in \Hom(\mathcal F)$ a forest composable with $t$.
Observe that the sdp $P_{f\circ t}$ associated to $f\circ t$ is a refinement of the sdp $P_t$: that is $P_t\leq P_{f\circ t}.$
Hence, $t\mapsto P_t$ defined above is an order-preserving bijection.
\subsubsection{Thompson's groups}
{\bf Thompson's group $V$.}
Consider two sdp $(I_1,\cdots,I_n)$ and $(J_1,\cdots,J_n)$ of $\mathfrak C$ with the same number of sdi.
There exists a unique homeomorphism $v$ of $\mathfrak C$ satisfying that
$$v(m_{I_k}\cdot x) = m_{J_k}\cdot x$$
for all $1\leq k\leq n$ and $x\in\{0,1\}^{\mathbf{N}} $ where $m_{I_k}$ is the unique word satisfying that
$$I_k=\{ m_{I_k}\cdot x:\ x\in\{0,1\}^{\mathbf{N}} \}.$$
We write $V$ for the collection of all such maps $v$ which forms a group called Thompson's group $V$.
The element $v$ defines a bijection of $[0,1)$ as follows: consider the unique map realising an increasing affine bijection from $\dot S(I_k)$ to $\dot S(J_k)$ (recall that $\dot S(I_k)$ is equal to $S(I_k)$ minus its last point and $S:\mathfrak C\to [0,1]$ is the classical surjection).
This provides a piecewise linear bijection of $[0,1)$ having finitely many discontinuity points, all at dyadic rationals, and having slopes powers of 2.
Conversely, every such bijection of $[0,1)$ comes from an element of $V$.
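For instance, consider the sdp $(I_1,I_2)$ and $(J_1,J_2)$ with words $m_{I_1}=0, m_{I_2}=1$ and $m_{J_1}=1, m_{J_2}=0$. The associated element $v\in V$ satisfies $v(0\cdot x)=1\cdot x$ and $v(1\cdot x)=0\cdot x$; on $[0,1)$ it is the rotation $y\mapsto y+\tfrac{1}{2} \mod 1$, with slope $1$ everywhere and a single discontinuity point at $\tfrac{1}{2}$. It belongs to the subgroup $T$ defined below but not to $F$.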
{\bf Thompson's groups $F$ and $T$.}
Thompson's group $V$ contains two subgroups $F\subset T$: the subgroup $T$ (resp.~$F$) is the set of all transformations of $V$ which restrict to a homeomorphism of $\mathbf{R}/\mathbf{Z}$ (resp.~of $[0,1)$). Equivalently, it is the set of all transformations $v$ as above sending an \textit{ordered} sdp $(I_1,\cdots,I_n)$ to another one $(J_1,\cdots,J_n)$ in such a way that there exists $0\leq d\leq n-1$ satisfying $v(I_k)=J_{k+d}$ (resp.~$v(I_k)=J_k$) for all $1\leq k\leq n$, where the index $k+d$ is considered modulo $n.$
{\bf Thompson's groups as fraction groups.}
The three Thompson's groups can be realised as fraction groups.
Thompson's group $F$ is isomorphic to the fraction group of $(\mathcal F,1)$.
This comes from the fact that an element of $F$ is totally described by two ordered sdp with the same number of sdi.
The two sdp correspond to a pair of trees with the same number of leaves.
To obtain Thompson's group $T$ one has to index the leaves of the trees in a cyclic way; we thus consider the category of affine forests $\mathcal{AF}$ with objects the nonzero natural numbers and morphism spaces $\mathcal{AF}(n,m)=\mathcal F(n,m)\times \mathbf{Z}/m\mathbf{Z}$ for $n\leq m.$
We obtain that $T$ is isomorphic to the fraction group of $(\mathcal{AF},1)$.
To obtain the larger Thompson's group $V$ one allows any indexing of the leaves of the trees, which corresponds to adding an arbitrary permutation to the data of the trees. This gives the category $\mathcal{SF}$ with the same set of objects as before but with morphism spaces $\mathcal{SF}(n,m)=\mathcal F(n,m)\times S_m$ for $n\leq m$, where $S_m$ is the group of permutations of $\{1,\cdots,m\}.$
Thompson's group $V$ is isomorphic to the fraction group of $(\mathcal{SF},1)$ and so $v\in V$ is equal to a fraction $\dfrac{\tau\circ t}{\sigma\circ s}$
with $t,s$ trees and $\tau,\sigma$ permutations playing the role of indexation of the leaves of $t$ and $s$, respectively.
Note that this fraction is equal to $\dfrac{\sigma^{-1}\tau \circ t}{s}=\dfrac{t}{\tau^{-1}\sigma\circ s}$ so we can always represent elements of $V$ with a fraction having at most one nontrivial permutation.
\subsubsection{Constructions of fraction groups}\label{sec:construction-FG}
The plan of this section is as follows.
First, we restrict to a class of functors $\Phi:\mathcal F\to\Gr$ that is in bijection with the triples $(\Gamma,\alpha_0,\alpha_1)$ where $\Gamma$ is a group and $\alpha_0,\alpha_1\in\End(\Gamma).$
We fix one functor $\Phi$ and its associated triple $(\Gamma,\alpha_0,\alpha_1).$
Second, we construct a limit group $\varinjlim_{t\in\mathfrak T}\Gamma_t$ obtained by taking a limit on all trees $t\in\mathfrak T$.
This limit group is the set $X_\Phi$ that we have described earlier, having in this particular case the extra property of being a group.
Third, we provide three descriptions of the Jones action of Thompson's group $V$ on the limit group $\varinjlim_{t\in\mathfrak T}\Gamma_t$. This is the Jones action $\pi_\Phi:V\curvearrowright X_\Phi$ defined earlier.
Fourth, we describe the semidirect product $X_\Phi\rtimes V= \varinjlim_{t\in\mathfrak T}\Gamma_t\rtimes V$ as a fraction group.
To do this, we define a category $\mathcal C_\Phi$ and establish that $X_\Phi\rtimes V$ is isomorphic to the fraction group of $(\mathcal C_\Phi,1).$
{\bf Description of covariant monoidal functors.}
Consider $\Gr$ the category of groups that we equip with the classical monoidal structure: the direct sum $\oplus.$
Consider a (covariant) monoidal functor $\Phi:\mathcal F\to \Gr$.
If $\Gamma=\Phi(1)$, then $\Phi(n)=\Gamma^n$, the $n$-fold direct sum of $\Gamma$, for all $n\geq 1$ since $\Phi$ is monoidal.
Consider $R:=\Phi(Y)$ (where $Y$ stands for the tree with two leaves) which is a group morphism from $\Gamma$ to $\Gamma\oplus\Gamma$ and thus of the form
$$R(g)=(\alpha_0(g),\alpha_1(g)),\ g\in \Gamma$$
for some endomorphisms $\alpha_0,\alpha_1\in \End(\Gamma)$.
Note that each morphism of $\mathcal F$ is a composition of some $f_{j,n}=I^{\otimes j-1}\otimes Y\otimes I^{\otimes n-j}$ and that
$$\Phi(f_{j,n}) = \id_{\Gamma^{j-1}} \oplus R \oplus \id_{\Gamma^{n-j}}, \ n\geq 1, 1\leq j\leq n$$
since $\Phi$ is monoidal.
Therefore, $\Phi$ is completely described by $\Gamma$ and the (ordered) pair $(\alpha_0,\alpha_1).$
Conversely, any choice of group $\Gamma$ and pair of endomorphisms $(\alpha_0,\alpha_1)\in\End(\Gamma)^2$ defines a covariant monoidal functor from $\mathcal F$ to $\Gr.$
This latter fact comes from the presentation of the category $\mathcal F$ given in Section \ref{sec:forest}.
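For instance, the choice $\Gamma=\mathbf{Z}$ with $\alpha_0=\alpha_1=\id_{\mathbf{Z}}$ gives the diagonal morphism $R(g)=(g,g)$; the associated functor sends a forest $f\in\mathcal F(n,m)$ to the morphism $\mathbf{Z}^n\to\mathbf{Z}^m$ repeating the $j$-th coordinate once for each leaf of the $j$-th tree of $f$.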
{\bf Construction of the limit group $X_\Phi$.}
Assume we have chosen $\Gamma,\alpha_0,\alpha_1$ and consider $R$ as above.
Let $\Phi$ be the associated monoidal functor.
For each tree $t\in\mathfrak T$ we consider
$$\Gamma_t=\{(g,t):\ g\in \Gamma^n\}$$
a copy of the group $\Gamma^n$ where $n=|\Leaf(t)|$ is the number of leaves of $t$.
We may identify $\Gamma_t$ with the group $\{g:\Leaf(t)\to \Gamma\}$ of all maps from $\Leaf(t)$ to $\Gamma$.
Given a forest $f$ with $n$ roots we define
$$\iota_{ft,t}:\Gamma_t\to \Gamma_{ft} \text{ such that } \iota_{ft,t}(g_1,\cdots,g_n):= \Phi(f)(g_1,\cdots,g_n).$$
For example, if $t=Y$ and $f=I\otimes Y$, then $$\iota_{ft,t}(g_1,g_2) = (g_1, R(g_2)) = (g_1, \alpha_0(g_2), \alpha_1(g_2)).$$
This provides a directed system of groups $$(\Gamma_t, \ \iota_{s,t}:\ s,t\in\mathfrak T, s\geq t)$$
indexed by the set of trees $\mathfrak T$.
Let $\varinjlim_{t\in\mathfrak T} \Gamma_t$ be the directed limit which is still a group and which is in bijection with the quotient space:
$$\{ (g,t):\ t\in \mathfrak T,\ g\in \Gamma^{ |\Leaf(t)|} \}/\sim$$
where $$(g,t)\sim (g',t') \text{ if and only if $\exists f,f'$ satisfying } ft=f't' \text{ and } \Phi(f)(g)=\Phi(f')(g').$$
This is the group $X_\Phi$ mentioned earlier in Section \ref{sec:general-fram}.
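For instance, if $t$ is a tree with two leaves, then
$$((g_1,g_2),t)\sim \big((\alpha_0(g_1),\alpha_1(g_1),g_2),\ (Y\otimes I)\circ t\big),$$
as witnessed by the forests $f=Y\otimes I$ and $f'=I\otimes I\otimes I$.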
{\bf The Jones action of Thompson's group $V$ on $X_\Phi$.}
{\bf First description.}
We now describe the Jones action $\pi_\Phi: V\curvearrowright \varinjlim_{t\in\mathfrak T} \Gamma_t$.
If $v\in F$, then $v$ is described by a pair of trees $(t,s)$ and thus by a fraction $\dfrac{t}{s}.$
Consider $g\in \varinjlim_{t\in\mathfrak T}\Gamma_t$.
Up to refining both $t$ and $s$ (by considering $(ft,fs)$ rather than $(t,s)$) we can assume that $g$ admits a representative $(h,s)\in \Gamma_s$ and thus using the fraction notation we have $g=\dfrac{s}{h}.$
The Jones action is then $$\pi_\Phi\left(\dfrac{t}{s}\right)\left(\dfrac{s}{h}\right) = \dfrac{t}{h}.$$
Hence, $\pi_\Phi\left(\dfrac{t}{s}\right)$ restricted to $\Gamma_s$ consists of the tautological isomorphism
$(h,s)\mapsto (h,t)$
from $\Gamma_s$ to $\Gamma_t.$
{\bf Second description.}
Here is another way to interpret this action.
Consider $g\in \varinjlim_{t\in\mathfrak T} \Gamma_t$ and a representative of $g$ in $\Gamma_s$ for a tree $s$ that is large enough.
We may identify $g$ with a map $$\Leaf(s)\to \Gamma, \ell\mapsto h_\ell.$$
We interpret $g$ as the tree $s$ where each of its leaves $\ell$ is decorated by $h_\ell$.
Consider $v\in V$ that we write as a fraction $\dfrac{\sigma\circ t}{s}$ where $t$ is a tree and $\sigma$ is a permutation.
Now $\sigma$ can be interpreted as a bijection $b:\Leaf(t)\to\Leaf(s)$ since $t$ and $s$ must have the same number of leaves.
We obtain that $\pi_\Phi(v)\left(\dfrac{s}{h}\right)$ is the tree $t$ in which each leaf $\lambda$ is decorated by the group element $h_{b(\lambda)}.$
Note that if $v\in F$, then $\sigma$ is trivial.
This implies that we do not change the order of the components of $h$ (when reading from left to right the group elements placed on top of the leaves of the tree $s$ or $t$).
If $v\in T$ we may change the order of the components cyclically, and if $v\in V$ we may change it by an arbitrary permutation.
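For instance, let $v=\dfrac{\sigma\circ Y}{Y}$ with $\sigma=(21)$ and let $g$ be represented by the tree $Y$ whose leaves are decorated by $(h_1,h_2)$. Then $\pi_\Phi(v)(g)$ is represented by the tree $Y$ whose leaves are decorated by $(h_2,h_1)$, while for the trivial permutation we would recover the same decorated tree.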
{\bf Third description.}
Here is a spatial interpretation of Jones' action.
Consider $v\in V$ and $g\in \varinjlim_{t\in\mathfrak T}\Gamma_t.$
There exist two sdp $I:=(I_1,\cdots,I_n)$ and $J:=(J_1,\cdots,J_n)$ of Cantor space such that $v(I_k)=J_k, 1\leq k\leq n$ and $v$ is adapted to the first sdp, i.e.~the restriction of $v$ to $I_k$ is affine.
There exist a tree $s$ and a representative of $g$ in $\Gamma_s.$
Let $L:=(L_1,\cdots,L_m)$ be the sdp associated to $s$.
The element $g$ is then the same data as the pair $(g_L,L)$ where $g_L:\mathfrak C\to\Gamma$ is a map constant on each $L_k$ taking a certain value $g_k$ for $1\leq k\leq m.$
Let us assume that the partition $I$ is finer than $L$.
For simplicity, suppose for instance that $I=(L_1^0,L_1^1, L_2,\cdots,L_m)$ where $L_1^0,L_1^1$ are the first and second half of $L_1$, respectively.
We now represent $g$ as the pair $(g_I,I)$ with the map $g_I:\mathfrak C\to\Gamma$ taking the values
$$g_I(x) =\begin{cases}
\alpha_0(g_1) \text{ if } x\in I_1=L_1^0\\
\alpha_1(g_1) \text{ if } x\in I_2=L_1^1\\
g_{k-1} \text{ if } x\in I_k=L_{k-1} \text{ with } 3\leq k\leq n
\end{cases}.$$
The element $\pi_v(g)$ is represented by the pair $(g_J,J)$ where:
$$g_J(x) =\begin{cases}
\alpha_0(g_1) \text{ if } x\in J_1\\
\alpha_1(g_1) \text{ if } x\in J_2\\
g_{k-1} \text{ if } x\in J_k \text{ with } 3\leq k\leq n
\end{cases}.$$
{\bf Description of the semidirect product $X_\Phi\rtimes V$ as a fraction group.}
We will now explain how we can interpret the semidirect product $\varinjlim_{t\in\mathfrak T} \Gamma_t\rtimes V$ as a fraction group.
We start by defining a category that admits a left calculus of fractions.
Consider the category $\mathcal C_\Phi$ whose set of objects is $\mathbf{N}$ and whose morphism spaces $\mathcal C_\Phi(n,m)$ are equal to $\mathcal F(n,m)\times S_m\times \Gamma^m$.
A morphism can be diagrammatically represented as a forest with, on top of it, a permutation that we represent as a diagram with $n$ segments (if the forest has $n$ leaves) going from $(k,a)$ to $(\sigma(k),a+1)$ for $1\leq k\leq n$, where the coordinates are taken in $\mathbf{R}^2$ and where $a$ stands for the altitude of the leaves in the diagram. On top of the permutation we place some elements of $\Gamma$. Note that if $\Gamma$ were trivial, then we would simply obtain a diagrammatic representation of the category giving the fraction group $V$.
In pictures, the morphism $(Y,(21), g_1,g_2)\in \mathcal C_\Phi(1,2)$ is represented by the diagram:
\newcommand{\Ysigmag}{
\begin{tikzpicture}[baseline = .4cm]
\draw (0,0)--(0,.5);
\draw (0,.5)--(-.75,1);
\draw (0,.5)--(.75,1);
\draw (-.75,1)--(.75,1.5);
\draw (.75,1)--(-.75,1.5);
\node at (-.75,1.75) {$g_1$};
\node at (.75,1.75) {$g_2$};
\end{tikzpicture}
}
$$\Ysigmag \ .$$
We interpret $(Y,(21), g_1,g_2)$ as the composition of the three morphisms $(g_1,g_2)$, $(21)$ and $Y$, and thus identify $\Gamma^m$ and $S_m$ with subsets of the endomorphism space $\mathcal C_\Phi(m,m)$, and $\mathcal F(n,m)$ with a subset of the morphism space $\mathcal C_\Phi(n,m)$ for $n,m\geq 1.$
The composition of morphisms is explained by the following diagrams where we freely use the identifications just mentioned:
\newcommand{\hforest}{
\begin{tikzpicture}[baseline = .4cm]
\draw (0,0)--(0,1);
\draw (1,0)--(1,2/3);
\draw (1,2/3)--(2/3,1);
\draw (1,2/3)--(4/3,1);
\node at (0,1.2) {$h_1$};
\node at (2/3,1.2) {$h_2$};
\node at (4/3,1.2) {$h_3$};
\end{tikzpicture}
}
\newcommand{\compoone}{\begin{tikzpicture}[baseline = .4cm]
\draw (0,0)--(0,.5);
\draw (0,.5)--(-.75,1);
\draw (0,.5)--(.75,1);
\draw (-.75,1)--(.75,1.5);
\draw (.75,1)--(-.75,1.5);
\node at (-.75,1.75) {$g_1$};
\node at (.75,1.75) {$g_2$};
\draw (-.75,2)--(-.75,3);
\draw (.75,2)--(.75,2+2/3);
\draw (.75,2/3+2)--(2/3-.25,3);
\draw (.75,2/3+2)--(4/3-.25,3);
\node at (-.75,3.2) {$h_1$};
\node at (2/3-.25,3.2) {$h_2$};
\node at (4/3-.25,3.2) {$h_3$};
\end{tikzpicture}}
\newcommand{\compotwo}{\begin{tikzpicture}[baseline = .4cm]
\draw (0,0)--(0,.5);
\draw (0,.5)--(-.75,1);
\draw (0,.5)--(.75,1);
\draw (-.75,1)--(.75,1.5);
\draw (.75,1)--(-.75,1.5);
\node at (-.75,1.75) {$h_1g_1$};
\draw (.75,1.5)--(.75,2);
\draw (.75,2)--(0,2.5);
\draw (.75,2)--(1.5,2.5);
\node at (-.25,2.75) {$h_2\alpha_0(g_2)$};
\node at (1.75,2.75) {$h_3\alpha_1(g_2)$};
\end{tikzpicture}}
\newcommand{\compothree}{\begin{tikzpicture}[baseline = .4cm]
\draw (0,0)--(0,.25);
\draw (0,.25)--(-2,1.25);
\draw (0,.25)--(2, 1.25);
\draw (-1,.75)--(0,1.25);
\draw (-2,1.25)--(0,2);
\draw (0,1.25)--(2,2);
\draw (2,1.25)--(-2,2);
\node at (-2,2.2) {$h_1g_1$};
\node at (0,2.2) {$h_2\alpha_0(g_2)$};
\node at (2,2.2) {$h_3\alpha_1(g_2)$};
\end{tikzpicture}}
$$\hforest \circ \Ysigmag = \compoone = \compotwo$$
which is equal to
$$
\compothree$$
The fraction group of this category (at the object $1$) is the set of (equivalence classes of) pairs of decorated trees as defined above.
Note that the definition of the composition of the morphisms of $\mathcal C_\Phi$ provides a number of skein-type relations such as the one above (which consists in pulling a string around a vertex, like the Yang--Baxter relation in a fusion category). We thus have a number of such skein relations that are automatically satisfied.
We are going to show that the fraction group of $(\mathcal C_\Phi,1)$ is isomorphic to the semidirect product $X_\Phi\rtimes V= \varinjlim_{t\in\mathfrak T}\Gamma_t\rtimes V$ by constructing an explicit isomorphism.
Consider an element $\gamma$ of the fraction group of $(\mathcal C_\Phi,1).$
It is the equivalence class of a pair of triples $( (t, \tau, g) , (s,\sigma,h))$ where $t,s$ are trees with $n$ leaves, $\tau,\sigma$ are permutations in $S_n$ and $g=(g_1,\cdots, g_n), h=(h_1,\cdots, h_n)$ are $n$-tuples of elements of $\Gamma$, for a certain $n\geq 1.$
We may write it as a fraction $\dfrac{g \circ \tau\circ t}{ h \circ \sigma \circ s}$.
Note that $g$ and $\tau$ are automorphisms of $\mathcal C_\Phi$ and thus we can multiply the numerator and denominator of the fraction by $(g\circ \tau)^{-1}.$
We obtain that $\gamma=\dfrac{t}{\tau^{-1}\circ g^{-1}h\circ \sigma\circ s} = \dfrac{t}{h'\circ \sigma'\circ s}$ where $h'$ is obtained by permuting the entries of $g^{-1}h$ via the permutation $\tau^{-1}$ and $\sigma'=\tau^{-1}\sigma.$
Consider the map
$$\gamma= \dfrac{t}{h'\circ \sigma'\circ s} \mapsto \left( (h',t) , \dfrac{t}{\sigma' \circ s} \right) \in \Gamma_t\times V.$$
This formula defines an isomorphism from the fraction group $G_{\mathcal C_\Phi}$ onto the semidirect product $\left(\varinjlim_{t\in\mathfrak T}\Gamma_t\right)\rtimes V$.
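For instance, the fraction $\dfrac{t}{h\circ t}$ (with trivial permutations and $s=t$) is sent to the pair $((h,t),e)$, recovering the copy of $\varinjlim_{t\in\mathfrak T}\Gamma_t$ inside the semidirect product, while a fraction $\dfrac{t}{\sigma\circ s}$ with trivial decorations is sent to $\left(e,\dfrac{t}{\sigma\circ s}\right)$, recovering the copy of $V$.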
\subsection{Thompson's group: automorphisms and slopes}
\subsubsection{Slopes}
Consider $v\in V$ and note that for any $x\in [0,1)$ there exists a sdi $I$ containing $x$ to which $v$ is adapted (i.e.~$v$ is affine on this interval) with slope $2^n$ for a certain $n\in\mathbf{Z}.$
We write $v'(x)=2^n$ for the slope of $v$ at the point $x$.
Observe that $v(I)$ is also a sdi and we can find two words $m_I$ and $m_{v(I)}$ in 0,1 satisfying that
$$I=\{ m_I\cdot x:\ x\in\{0,1\}^{\mathbf{N}} \} \text{ and } v(I)=\{m_{v(I)}\cdot x:\ x\in\{0,1\}^{\mathbf{N}} \}$$
where we view $I$ and $v(I)$ inside Cantor space $\mathfrak C$.
Recall that $|m_I|$ is the number of letters in the word $m_I$, i.e.~the word length of the word $m_I$.
One can see that $n= |m_I|-|m_{v(I)}|$. In this way we can alternatively define the slope of $v\in V$ at any element of $\mathfrak C.$
We obtain that the slope map $v':\mathfrak C\to \{2^n:\ n\in\mathbf{Z}\}, x\mapsto v'(x)$ is continuous; equivalently, there exists a sdp $(I_1,\cdots,I_n)$ such that $v'$ is constant on each $I_k, 1\leq k\leq n.$
In order to have slightly lighter notations we may remove parenthesis and write $vx$ and $vI$ rather than $v(x)$ and $v(I)$ for $v\in V, x\in\mathbf{Q}_2$ and $I$ any subset of $\mathbf{Q}_2$ or $\mathfrak C$.
It is easy to see that the slope of elements of $V$ satisfies the chain rule:
$$(vw)'(x) = v'(wx)\cdot w'(x) \text{ for all } v,w\in V, x\in [0,1) \text{ or in } \mathfrak C.$$
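For instance, let $v\in F$ send the ordered sdp $([0,\tfrac{1}{4}),[\tfrac{1}{4},\tfrac{1}{2}),[\tfrac{1}{2},1))$ to $([0,\tfrac{1}{2}),[\tfrac{1}{2},\tfrac{3}{4}),[\tfrac{3}{4},1))$, so that $v'$ takes the values $2,1,\tfrac{1}{2}$ on the three intervals. The chain rule gives $(v\circ v)'(0)=v'(v(0))\cdot v'(0)=2\cdot 2=4$ and, for $x\in[\tfrac{3}{4},1)$, $(v\circ v)'(x)=v'(vx)\cdot v'(x)=\tfrac{1}{2}\cdot\tfrac{1}{2}=\tfrac{1}{4}$ since $vx\in[\tfrac{7}{8},1)$.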
The chain rule will often be used. We will consider the map
$$\ell:V\to \prod_{\mathbf{Q}_2}\mathbf{Z},\ v\mapsto \ell_v$$
such that
$$\ell_v(x):= \log_2(v'(v^{-1}x)) \text{ for all } v\in V, x\in\mathbf{Q}_2$$
where $\log_2$ is the logarithm in base $2$.
Note that the chain rule implies the cocycle identity $\ell_{vw} = \ell_v + \ell_w^v$ where $\ell_w^v(x):= \ell_w(v^{-1}x)$ for $x\in \mathbf{Q}_2$ and $v,w\in V.$
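Indeed, applying the chain rule at the point $w^{-1}v^{-1}x$ for $x\in\mathbf{Q}_2$ we obtain
$$\ell_{vw}(x)=\log_2\big((vw)'(w^{-1}v^{-1}x)\big)=\log_2\big(v'(v^{-1}x)\big)+\log_2\big(w'(w^{-1}v^{-1}x)\big)=\ell_v(x)+\ell_w(v^{-1}x).$$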
Consider $x\in\mathfrak C$ and the stabiliser subgroup
$$V_x:=\{v\in V:\ vx=x\}.$$
Denote by $V_x'$ the derived subgroup of $V_x$ (the subgroup generated by the commutators).
We provide a description of $V_x'$ and $V_x/V_x'$ using slopes. This is rather standard. A full proof can be found in \cite{Bleak-Lanoue10}. We provide a short proof for the convenience of the reader.
Recall that the group of germs of $V$ at a point $x\in\mathfrak C$ is the group $V_x$ quotiented by the subgroup $W_x$ of elements of $V_x$ that act like the identity on a neighborhood of $x$.
\begin{lemma}\label{lem:slopeone}
Fix $x\in \mathbf{Q}_2$ and consider $V_x$ and its derived subgroup $V_x'$.
We have that $V_x'$ is the subgroup of $v\in V$ such that $v$ acts like the identity on a neighborhood of $x$ inside $\mathfrak C$ and equivalently $$V_x'=\{v\in V:\ vx=x \text{ and } v'(x)=1\}.$$
The abelianisation $V_x/V_x'$ of $V_x$ is thus equal to the group of germs of $V$ at the point $x$.
Moreover, the morphism $V_x\to \mathbf{Z}, v\mapsto \ell_v(x)$ factorises into an isomorphism
$V_x/V_x'\to\mathbf{Z}.$
\end{lemma}
\begin{proof}
Fix $x\in \mathbf{Q}_2$ and consider the map
$$L:V_x\to \mathbf{Z}, v\mapsto \ell_v(x):=\log_2(v'(x)).$$
This is a group morphism.
Note that since $\mathbf{Z}$ is abelian the kernel of $L$ contains the derived subgroup $V_x'$, i.e.~$V_x'\subset \ker(L).$
For the reverse inclusion observe that
$\ker(L)=\cup_I \Fix_V(I)$ where $I$ runs over all sdi containing $x$ and $\Fix_V(I)$ is the subgroup of $v\in V$ satisfying $vy=y$ for all $y\in I$.
It is easy to see that $\Fix_V(I)$ is isomorphic to $V$ if $I$ is a proper sdi of $\mathfrak C$ and thus is simple and nonabelian since $V$ is.
Therefore, $\Fix_V(I)'=\Fix_V(I)$ implying that $\ker(L)'=\ker(L)$ and thus $\ker(L)=\ker(L)'\subset V_x'$.
We deduce that $\ker(L)=V_x'$.
To show that $L$ is surjective it is sufficient to observe that for any $x\in \mathbf{Q}_2$ there exists a sdi $I$ starting at $x$ and $v\in V$ adapted to $I$ so that $v(I)$ is the first half of $I$.
Hence, $vx=x$ and $v'(x)=1/2$. In particular, $L(v)=-1$ which implies that $L$ is surjective.
This finishes the proof of the lemma.
\end{proof}
\subsubsection{Automorphism group of $V$}
Let $\Aut(V)$ be the automorphism group of $V$.
Recall that $V$ is defined as a subgroup of the homeomorphism group $\Homeo(\mathfrak C)$ of Cantor space $\mathfrak C$.
Define $$N_{H(\fC)}(V):=\{\varphi\in\Homeo(\mathfrak C):\ \varphi V\varphi^{-1} = V\}$$
the normaliser subgroup of $V$ inside $\Homeo(\mathfrak C)$.
We have a map:
$$\ad:N_{H(\fC)}(V)\to \Aut(V), \varphi\mapsto \ad_\varphi \text{ defined as } \ad_\varphi(v):=\varphi v\varphi^{-1},$$
for all $\varphi\in N_{H(\fC)}(V), v\in V.$
A classical argument using Rubin's theorem implies that $\ad$ is an isomorphism \cite{Rubin96}, see \cite[Section 3]{BCMNO19} for details and a proof of the faithfulness of $\ad$.
We will study elementary properties of elements of $N_{H(\fC)}(V)$.
\begin{lemma}
If $\varphi\in N_{H(\fC)}(V)$ and $I$ is a sdi, then $\varphi(I)$ is a finite union of sdi.
\end{lemma}
\begin{proof}
Write $\mathcal I$ for the set of finite unions of sdi.
To each $v\in V$ we associate $\Fix(v)$ the set of $x\in\mathfrak C$ so that $vx=x.$
It is easy to see that $v\in V\mapsto \Fix(v)$ is a surjection from $V$ onto $\mathcal I$.
Moreover, if $\varphi\in N_{H(\fC)}(V)$, then $\Fix(\ad_\varphi(v)) = \Fix(\varphi v\varphi^{-1}) = \varphi(\Fix(v)).$
This implies that if $I\in\mathcal I$, then $\varphi(I)\in\mathcal I$ proving the lemma.
\end{proof}
In this paper we will be working with a subgroup of $N_{H(\fC)}(V)$.
Let $\Stab_N(\Q_2)$ be the set of $\varphi\in N_{H(\fC)}(V)$ stabilising $\mathbf{Q}_2$ inside $\mathfrak C$, that is:
$$\Stab_N(\Q_2):=\{\varphi\in\Homeo(\mathfrak C):\ \varphi V\varphi^{-1} = V \text{ and } \varphi(\mathbf{Q}_2)=\mathbf{Q}_2\}.$$
Recall that $\mathbf{Q}_2\subset\mathfrak C$ is the set of sequences $x=(x_n)_{n\in\mathbf{N}}\in\{0,1\}^{\mathbf{N}} $ satisfying that there exists $N\geq 1$ such that $x_n=0$ for all $n\geq N$.
\begin{remark}
In general, an element of $N_{H(\fC)}(V)$ does not stabilise $\mathbf{Q}_2.$
Consider for instance $x=(x_n)_{n\in\mathbf{N}}\mapsto (\overline{x_n})_{n\in\mathbf{N}}$ where $\overline 0=1, \overline 1=0.$
It is an element of $N_{H(\fC)}(V)$ sending the set of sequences eventually equal to $0$ (that is, $\mathbf{Q}_2$) onto the set of sequences eventually equal to $1$ (which is the other copy of the dyadic rationals inside Cantor space).
There exist more exotic elements of $N_{H(\fC)}(V)$ which do not stabilise the union of the copies of the dyadic rationals inside Cantor space, see Remark \ref{rem:Shayo}.
\end{remark}
We have the following useful fact for elements of $\Stab_N(\Q_2)$.
\begin{proposition}\label{prop:slope}
If $x\in \mathbf{Q}_2, v\in V$ satisfying $vx=x$ and $\varphi\in\Stab_N(\Q_2)$, then
$$(\varphi v\varphi^{-1})'(\varphi(x)) = v'(x).$$
\end{proposition}
\begin{proof}
Consider $x\in \mathbf{Q}_2, v\in V_x$ and $\varphi\in\Stab_N(\Q_2).$
We start by showing that if $v'(x)<1$, then $(\varphi v\varphi^{-1})'(\varphi(x))<1.$
Let $I,J$ be sdi containing $x$ and $\varphi(x)$ to which $v$ and $\varphi v\varphi^{-1}$ are adapted, respectively.
Note that since $x\in\mathbf{Q}_2$ the sdi $I$ is necessarily of the form $[x,x+a)$ and the restriction of $v$ to this interval acts in the following way: $x+b\mapsto x + v'(x)b.$
We have a similar description of $J$ and the restriction of $\varphi v\varphi^{-1}$ to $J$ since $\varphi(x)\in\mathbf{Q}_2,\varphi v\varphi^{-1}\in V$ and $(\varphi v\varphi^{-1})(\varphi(x)) = \varphi(x).$
Up to reducing $J$ we can assume that $\varphi(I)$ contains $J$.
Assume that $v'(x)<1$ and observe that this condition is equivalent to having $\lim_{n\to\infty} v^n(y)=x$ for all $y\in I$.
Consider $z\in J$ that we can write as $\varphi(y)$ for some $y\in I$.
By continuity of $\varphi$ we obtain that
$$\lim_{n\to\infty} (\varphi v\varphi^{-1})^n (\varphi(y)) = \varphi(\lim_{n\to\infty}v^n(y))= \varphi(x)$$ implying that $(\varphi v\varphi^{-1})'(\varphi(x))<1.$
We finish the proof using the group of germs $V_x/V_x'$.
Since $x\in \mathbf{Q}_2$ we have by Lemma \ref{lem:slopeone} that $L_x:V_x\to \mathbf{Z}, v\mapsto \log_2(v'(x))$ factorises into an isomorphism $\overline L_x:V_x/V_x'\to \mathbf{Z}.$
Since $\varphi\in \Stab_N(\Q_2)$ we have that $\varphi(x)\in\mathbf{Q}_2$ and thus an isomorphism $\overline L_{\varphi(x)}:V_{\varphi(x)}/V_{\varphi(x)}'\to\mathbf{Z}$.
Consider the automorphism $\ad_\varphi$ of $V$ and observe that $\ad_\varphi(V_x)=V_{\varphi(x)}$.
Hence, $\ad_\varphi$ factorises into an isomorphism $\overline\ad_\varphi:V_x/V_x'\to V_{\varphi(x)}/V_{\varphi(x)}'$.
We obtain an automorphism
$$f:=\overline L_{\varphi(x)} \circ \overline\ad_\varphi \circ {\overline L_x}^{-1}\in\Aut(\mathbf{Z}).$$
Therefore, $f=\pm \id_\mathbf{Z}$.
We proved that if $v'(x)<1$, then $(\varphi v\varphi^{-1})'(\varphi(x))<1$.
Therefore, if $n$ is a negative integer, then so is $f(n)$.
This implies that $f$ is the identity automorphism and thus $v'(x)=(\varphi v\varphi^{-1})'(\varphi(x))$.
\end{proof}
\begin{remark}\label{rem:Shayo}
Recall that Cantor space contains two copies of the dyadic rationals that we denote by $\mathbf{Q}_2$ and $\mathbf{Q}_2^1$.
The last proposition works for $\varphi\in\Stab_N(\Q_2)$ but we can relax the assumptions and only require that $\varphi$ stabilises $\mathbf{Q}_2\cup \mathbf{Q}_2^1$.
However, by adapting the above proof we obtain that, if $vx=x$, $v'(x)\neq 1$, $x\in\mathbf{Q}_2$ and $\varphi(\mathbf{Q}_2)\neq \mathbf{Q}_2,\mathbf{Q}_2^1$, then $(\varphi v\varphi^{-1})'(\varphi(x))\neq v'(x)$; see \cite[Proposition 1.2]{Brothier20-2} for a precise statement and a proof.
Feyisayo Olukoya constructed such an exotic example of $\varphi\in N_{H(\fC)}(V)$ satisfying that $\varphi(\mathbf{Q}_2)$ does not intersect $\mathbf{Q}_2\cup \mathbf{Q}_2^1$.
This example was communicated to us by Collin Bleak and we warmly thank both of them for sharing it.
The construction of this exotic homeomorphism uses transducers that were developed in \cite{GNS00} and further used to describe and classify automorphisms of $V$ in \cite{BCMNO19}.
Note that if $vx\neq x$, then the conclusion of the last proposition fails for trivial reasons.
Indeed, consider the element $\varphi \in V$ which permutes cyclically the intervals $[0,1/2]$, $[1/2,3/4]$ and $[3/4,1]$ and $v\in V$ that permutes $[0,1/2]$ and $[1/2,1]$.
Note that $v'(x)=1$ for all $x\in\mathbf{Q}_2$ but $\varphi v\varphi^{-1}([0,1/2]) = \varphi v [3/4,1] = \varphi [1/4,1/2] = [5/8,6/8]$.
Hence, $\varphi v\varphi^{-1}$ has slope $1/4$ on the interval $[0,1/2]$ while $v$ has slope $1$ everywhere.
\end{remark}
\section{General properties of fraction groups constructed from forests}\label{sec:characteristic}
In this section we consider \textit{any} monoidal covariant functor $\Phi$ from the category of forests $\mathcal F$ to the category of groups $\Gr$, before restricting to a smaller class of functors in the coming sections.
As we have seen in the previous section $\Phi$ is totally described by the choice of a group $\Gamma$ and a morphism
$$\Gamma\to\Gamma\oplus\Gamma, \ g\mapsto (\alpha_0(g),\alpha_1(g))$$
with $\alpha_0,\alpha_1 \in \End(\Gamma)$.
To emphasize the choice of the pair $(\alpha_0,\alpha_1)$ we write $\Phi_\alpha$ for $\Phi$.
Denote by $K_\alpha := \varinjlim_{t\in\mathfrak T} \Gamma_t$ the associated limit group, $\pi_\alpha: V\curvearrowright K_\alpha$ the Jones action and $G_\alpha:= K_\alpha\rtimes V$ the associated semidirect product.
Finally, write $\mathcal C_\alpha$ for the larger category (with objects the natural numbers) of forests with leaves permuted and decorated by elements of $\Gamma$, so that $G_\alpha$ is the fraction group of the category $\mathcal C_\alpha$ at the object $1.$
The aim of this section is to prove the following:
\begin{theorem}\label{theo:KinG}
The normal subgroup $K_\alpha\lhd G_\alpha$ is a characteristic subgroup of $G_\alpha$, i.e.~any automorphism of $G_\alpha$ restricts to an automorphism of $K_\alpha.$
Using obvious notations, consider an isomorphism $\theta:G_\alpha\to G_\beta$ between two fraction groups. We have that $\theta(K_\alpha)=K_\beta.$
\end{theorem}
To prove the theorem we are going to show that $K_\alpha$ is the unique maximal normal subgroup of $G_\alpha$ satisfying a certain decomposability property.
\begin{definition}
Consider a normal subgroup $N\lhd G$.
If $X\subset N$ is a subset, then we write $\mathcal{N}_N(X)$ for the smallest normal subgroup of $N$ containing $X$ and call it the normaliser of $X$ inside $N$.
A normal subgroup $N\lhd G$ satisfies the \textit{decomposability property} if:
\begin{enumerate}
\item $N$ can be decomposed as a direct sum of two groups $N=A\oplus B$;
\item $N=\mathcal{N}_G(A) = \mathcal{N}_G(B)$.
\end{enumerate}
\end{definition}
Denote by $t_n, n\geq 1$ the tree with $2^n$ leaves all at distance $n$ from the root.
For example, $t_1=Y$ and $t_2=(Y\otimes Y)\circ Y.$
Consider the permutations $\sigma=(21), \tau=(2134)$ (here we write $(a_1\cdots a_n)$ for the permutation $i\mapsto a_i$) and the elements $v_\sigma := \dfrac{\sigma t_1}{t_1}, v_\tau:=\dfrac{\tau t_2}{t_2}$.
Note that in the first case we permute the two leaves of $t_1$, while in the second case we permute the first two leaves of $t_2$ and leave the other two invariant.
The permutations $\sigma,\tau$ induce permutations $\sigma_n,\tau_n$ on $2^n$ elements for $n\geq 2$, given by
$$\sigma_n(i) = i+2^{n-1} \mod 2^n$$
and
$$\tau_n(i) =
\begin{cases}
i + 2^{n-2} \text{ if } 1\leq i\leq 2^{n-2} \\
i - 2^{n-2}\text{ if } 2^{n-2}+1 \leq i \leq 2^{n-1} \\
i \text{ if } 2^{n-1}+1 \leq i \leq 2^{n} \\
\end{cases}.
$$
More generally, for any permutation $\kappa\in S_{2^k},k\geq 1$ on $2^k$ elements and $n\geq k$ we can define a $2^{n-k}$-cable version $\kappa_n\in S_{2^n}$ of $\kappa$, that is:
$$\kappa_n(i + (j-1) 2^{n-k}) = i + (\kappa(j)-1) 2^{n-k} \text{ for } 1\leq i\leq 2^{n-k} \text{ and } 1\leq j\leq 2^{k}.$$
Identify the permutation $\kappa_n$ with the automorphism of the group $\Gamma_{t_n}\simeq \Gamma^{\oplus 2^n}$ that permutes the coordinates.
\begin{definition}
Given any permutation $\kappa\in S_{2^k},k\geq 1$ and $n\geq k$ we consider the set
$$X_{\kappa,n} = \{ g\kappa_n(g^{-1}) : \ g\in \Gamma_{t_n}, \ \supp(g)\cap \kappa_n(\supp(g))=\emptyset \}$$
where $\supp(g)$ denotes the support of $g$, i.e.~the set of $1\leq i\leq 2^n$ for which the $i$-th component of $g$ is nontrivial.
\end{definition}
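To illustrate this definition, take $\kappa=\sigma=(21)\in S_2$ and $n=1$, so that $\Gamma_{t_1}\simeq\Gamma\oplus\Gamma$ and $\sigma_1$ exchanges the two coordinates. A nontrivial element $g$ with $\supp(g)\cap\sigma(\supp(g))=\emptyset$ is supported on a single coordinate, say $g=(h,e)$ with $h\in\Gamma$, and then $g\sigma_1(g^{-1})=(h,e)\cdot(e,h^{-1})=(h,h^{-1}).$ Hence $X_{\sigma,1}=\{(h,h^{-1}):\ h\in\Gamma\}.$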
\begin{proposition}\label{prop:isom}
Consider a chain of nontrivial normal subgroups $L\lhd K\lhd G_\alpha$ and assume that $L$ is not contained inside $K_\alpha$.
The following assertions are true.
\begin{enumerate}
\item For any permutation $\kappa\in S_{2^k}$ there exists $n_{\kappa,K}\geq 2$ such that if $n\geq n_{\kappa,K}$, then $X_{\kappa,n}$ is contained inside $K.$
\item Consider the following set $Y_n,n\geq 3$ of elements $y=(g, g^{-1} , e , e , g^{-1} , g , e , e )\in G_{t_n}$ for $g\in G_{ t_{n-3}}$ and where we identify $G_{t_n}$ with $G_{t_{n-3}}^8.$
There exists $n_{L,K}\geq 2$ such that if $n\geq n_{L,K}$, then $Y_n$ is contained inside $L.$
\item If $\tilde K\lhd G_\alpha$ is a proper normal subgroup with the decomposability property, then $\tilde K$ is contained inside $K_\alpha.$
\item The subgroup $K_\alpha\lhd G_\alpha$ is the unique maximal normal subgroup of $G_\alpha$ with the decomposability property.
In particular, it is a characteristic subgroup.
\end{enumerate}
\end{proposition}
\begin{proof}
Proof of (1).
Since $K$ is nontrivial and not contained inside $K_\alpha$ there exist $a\in K_\alpha$ and a nontrivial element $v\in V$ such that $av\in K$.
Since $V$ is simple this implies that for any $w\in V$ there exists $b\in K_\alpha$ such that $bw\in \mathcal{N}_{G_\alpha}(av)$ and thus $bw$ is in $K$ since it is a normal subgroup.
Consider a permutation $\kappa$ on $2^k$ elements with $k\geq 1$ and the associated element $v_\kappa\in V.$
For $n$ large enough there exists $a\in \Gamma_{t_n}$ such that $av_\kappa$ belongs to $K.$
Given $x\in G_{t_n}$ we consider the element $xav_\kappa x^{-1}= xa\kappa_n (x)^{-1} v_\kappa$.
We obtain $\tilde a = xa\kappa_n (x)^{-1}$ with coordinate $\tilde a_i = x_i a_i x_{\kappa(i)}^{-1}.$
Choose $x$ such that $x_i=a_i^{-1}$ for all $i$ in a set $A$ satisfying $\kappa(A)\cap A=\emptyset$, and put $x_i=e$ outside of $A$.
We then obtain that $\tilde a_i=e$ for any $i\in A.$
Consider now $y\in G_{t_n}$ supported on such a set $A$ and observe that the commutator $[y,\tilde a v_\kappa] = [y,xav_\kappa x^{-1}]$ is equal to $y\kappa_n(y)^{-1}$, which lies in $K$ since $K$ is a normal subgroup; elements of this form exhaust $X_{\kappa,n}$.
Proof of (2).
A similar argument as above tells us that for any $v\in V$ there exists $a\in K_\alpha$ such that $av\in L.$
In particular, if $v=v_\sigma$ for the permutation $\sigma = (21)\in S_2$, then there exists a large enough $n\geq 2$ such that $av_\sigma\in L$ and $a\in \Gamma_{t_n}.$
By (1), we can choose a large enough $n$ such that $X_{\tau,n},X_{\kappa,n}\subset K$ where $\tau= (2134)\in S_4$ and $\kappa=(12346578)\in S_8.$
Write $a=(a_1,a_2,a_3,a_4)$ for an element of the group $G_{t_n}$ that we identify with the group $G_{t_{n-2}}^4$.
Consider $x:=(a_1^{-1},a_1,e,e)$, which is an element of $X_{\tau,n}$ and thus is in $K$.
Since $L$ is a normal subgroup of $K$ we have that $x av_\sigma x^{-1}$ is in $L$, that is, $\tilde a v_\sigma\in L$ with $\tilde a= xa\sigma_n(x)^{-1}.$
Note that $\tilde a$ is of the form $(e,\tilde a_2,\tilde a_3,\tilde a_4).$
Hence, we can assume that the first coordinate of $a\in G_{t_{n-2}}^4$ is trivial.
Now identify $G_{t_n}$ with $G_{t_{n-3}}^8$ and fix $g\in G_{t_{n-3}}$.
Define the element $x:= ( e , e , e , e , g^{-1} , g , e , e )$ and observe that it is in $X_{\kappa,n}$ and thus in $K$ by hypothesis on $n$.
Consider the commutator $[x,av_\sigma] = x a \sigma_n(x)^{-1} a^{-1}$ and observe that it is equal to
$$( g , g^{-1} , e ,e , g^{-1}, g , e, e ).$$
This finishes the proof of (2).
Proof of (3).
Consider $\tilde K\lhd G_\alpha$ such that $\tilde K=A\oplus B$ and $\mathcal{N}_{G_\alpha}(A)=\mathcal{N}_{G_\alpha}(B)=\tilde K.$
Assume that $\tilde K$ is not contained inside $K_\alpha$.
If $A$ or $B$ is contained inside $K_\alpha$, then so is its normaliser implying that $\tilde K\subset K_\alpha$, a contradiction.
Therefore, both $A$ and $B$ are not contained inside $K_\alpha.$
Since $A$ commutes with $B$ we obtain that $A$ is a normal subgroup of $\tilde K$ and likewise for $B$.
By (2), applied to $A\lhd \tilde K\lhd G_\alpha$ and $B\lhd \tilde K\lhd G_\alpha$, there exists a large enough $n$ such that $Y_n\subset A$ and $Y_n\subset B$, a contradiction since $A\cap B=\{e\}.$
Proof of (4).
Consider $A_n\subset G_{t_n}$ (resp. $B_n$) the set of elements with support contained in the first (resp. the last) $2^{n-1}$ coordinates.
Note that $G_{t_n}=A_n\oplus B_n$ and that $\Phi_\alpha(f_n^{n+1})(A_n)\subset A_{n+1}, \Phi_\alpha(f_n^{n+1})(B_n)\subset B_{n+1}$ where $f_{n}^{n+1}$ is the forest with $2^n$ roots each of whose trees is $Y$.
This implies that the set of fractions $A:=\{\dfrac{a}{t_n}: \ n\geq 1, a\in A_n\}$ forms a subgroup of $K_\alpha$; defining $B$ similarly from the $B_n$, we obtain $K_\alpha=A\oplus B$.
Note that $v_\sigma A v_\sigma^{-1} =B$ where $\sigma=(21)\in S_2$, implying that the normaliser of $A$ inside $G_\alpha$ contains $K_\alpha$ and likewise for $B$.
Since $K_\alpha\lhd G_\alpha$ is a normal subgroup we obtain that $\mathcal{N}_{G_\alpha}(A)=K_\alpha = \mathcal{N}_{G_\alpha}(B).$
We obtain that $K_\alpha\lhd G_\alpha$ has the decomposability property.
By (3), any proper normal subgroup with this latter property is contained inside $K_\alpha$, making it maximal.
The rest of the proposition is obvious.
\end{proof}
Theorem \ref{theo:KinG} follows from the last point of the proposition.
\section{Description and classification of fraction groups}\label{sec:classification}
In this section we restrict the class of functors $\Phi$ considered.
We now assume that $\Phi:\mathcal F\to \Gr$ is a covariant monoidal functor built from the data of a group $\Gamma$ and a morphism of the following form:
$$\Phi(Y):\Gamma\to \Gamma\oplus \Gamma, \ g\mapsto (\alpha_0(g),e_\Gamma).$$
We write $\alpha$ rather than $\alpha_0$; it can be any endomorphism of $\Gamma.$
Recall that $e=e_\Gamma$ is the neutral element of the group $\Gamma$ (we may remove the subscript $\Gamma$ if the context is clear).
The associated functor, limit group, Jones' action, fraction group and category are denoted as in the previous section by $\Phi_\alpha, K_\alpha, \pi_\alpha, G_\alpha, \mathcal C_\alpha$, respectively.
We are going to fully classify the class of all such fraction groups $G_\alpha$ up to isomorphism.
\subsection{From an endomorphism to an automorphism}\label{sec:End}
In this subsection we reduce the study to $\alpha$ an automorphism.
The idea is to build a group $\lim\Gamma$ from $\Gamma$ and to extend $\alpha$ to an automorphism $\lim\alpha\in\Aut(\lim\Gamma)$.
We will then show that the induced fraction groups $G_\alpha$ and $G_{\lim\alpha}$ are isomorphic, justifying this reduction of study.
\begin{definition}
Consider a group $\Gamma$ and any endomorphism $\alpha\in\End(\Gamma)$.
We define the directed system of groups $(\Gamma_n,n\geq 0)$ with the family of group morphisms $(\iota_n^{m}:\Gamma_n\to\Gamma_m, m\geq n)$ where
$$\Gamma_n:=\{(g,n):\ g\in\Gamma\}$$ is a copy of $\Gamma$ and
$$\iota_n^{n+p}(g,n) := (\alpha^p(g),n+p) , \ n,p\geq 0, g\in\Gamma.$$
We write $\lim\Gamma:=\varinjlim_\alpha \Gamma_n$ for the inductive limit of this directed system.
Denote by $\sim$ the equivalence relation generated by $(g,n)\sim (\alpha(g),n+1),g\in\Gamma,n\geq 0$ and write $[g,n]$ for the equivalence class of $(g,n)$ which is by definition an element of $\lim\Gamma.$
We extend the endomorphism $\alpha$ of $\Gamma$ into an endomorphism $\lim\alpha$ of $\lim\Gamma$ as follows:
$$(\lim\alpha)[g,n] := [\alpha(g),n] \text{ for all } n\geq 0, g\in \Gamma.$$
We obtain two groups of fractions $G_\alpha=K_\alpha\rtimes V$ and $G_{\lim\alpha}=K_{\lim\alpha}\rtimes V$.
\end{definition}
\begin{lemma}
The map $\lim\alpha$ is a group automorphism of $\lim\Gamma.$
\end{lemma}
\begin{proof}
Write $\beta_n:\Gamma_n\to \Gamma_{n}$ for all $n\geq 0$ defined as $\beta_n(g,n) = (\alpha(g),n).$
Consider $n,p\geq 0, g\in \Gamma$ and note that
\begin{align*}
\iota^{n+p}_n\circ \beta_n(g,n) & = \iota_n^{n+p}(\alpha(g),n) = (\alpha^{p+1}(g),n+p)\\
& = \beta_{n+p}(\alpha^p(g), n+p) = \beta_{n+p}\circ \iota_{n}^{n+p}(g,n).
\end{align*}
Therefore, the family $(\beta_n:\ n\geq 0)$ defines a map $\lim\alpha$ from $\lim\Gamma$ to itself.
Moreover, $\lim\alpha$ is a group morphism because each $\beta_n,n\geq 0$ is.
Assume that $\lim\alpha[g,n]=e$ where $e$ is the neutral element of $\lim\Gamma.$
Note that $[h,j]=e$ if and only if there exists $p$ large enough satisfying $\alpha^p(h)=e$ for $h\in\Gamma,j\geq 0.$
Since $\lim\alpha[g,n]=[\alpha(g),n]$ we obtain that $\alpha^p(g)=e$ for $p$ large enough and thus $[g,n]=e$ implying that $\lim\alpha$ is injective.
Consider $[g,n]\in \lim\Gamma$ with $n\geq 0$ that is equal to $[\alpha(g),n+1] = (\lim\alpha)[g,n+1]$ and thus belongs to the range of $\lim\alpha.$
This implies that $\lim\alpha$ is surjective and all together $\lim\alpha$ is an automorphism of the group $\lim\Gamma.$
\end{proof}
\begin{proposition}\label{prop:isomaut}
Consider the morphism $\theta_0:\Gamma\to\lim\Gamma, g\mapsto [g,0]$.
This induces the morphism
$$\theta_t:\Gamma_t\to(\lim\Gamma)_t, g=(g_\ell)_{\ell\in\Leaf(t)} \mapsto (\theta_0(g_\ell))_{\ell\in\Leaf(t)}$$
for any tree $t\in\mathfrak T.$
These maps are compatible with the two directed structures inducing a group isomorphism $$\theta:\varinjlim_{t\in\mathfrak T} \Gamma_t\to \varinjlim_{t\in\mathfrak T}(\lim\Gamma)_t.$$
This isomorphism is $V$-equivariant for the two Jones' actions and extends uniquely into a group isomorphism between the fraction groups $G_\alpha$ and $G_{\lim\alpha}.$
\end{proposition}
\begin{proof}
Let $\Phi:\mathcal F\to\Gr$ and $\lim\Phi:\mathcal F\to\Gr$ be the monoidal functors induced by $\alpha$ and $\lim\alpha$, respectively.
To prove that we have a directed system of maps $\theta_t$ it is sufficient to check that:
$$\theta_{ft}\circ \Phi(f) = (\lim\Phi)(f)\circ \theta_{t} \text{ for all } t\in\mathfrak T, f\in\Hom(\mathcal F).$$
This latter equality is a consequence of the following: $\lim\alpha\circ \theta_0 = \theta_0\circ \alpha.$
Therefore, $\theta$ is well-defined and is a group morphism as a limit of group morphisms.
Consider $(g,t)\in \varinjlim_{t\in\mathfrak T} \Gamma_t$ such that $\theta_t(g,t)=e.$
We have that $g=(g_\ell)_{\ell\in\Leaf(t)}$ with $g_\ell\in\Gamma.$
If $\theta_t(g,t)=e$, then $\theta_0(g_\ell)=e$ for any $\ell\in \Leaf(t)$.
That is, there exists a power $N_\ell\geq 1$ such that $\alpha^{N_\ell}(g_\ell)=e$ by definition of the limit group $\lim\Gamma.$
Put $N:=\max(N_\ell : \ell\in\Leaf(t))$ and consider a forest $f$ that is composable with $t$ and satisfying that each of its leaves is at distance $N$ from the root in its connected component.
We get that every component of $\Phi(f)(g)$ is either $e$ or of the form $\alpha^{N}(g_\ell)$, which equals $e$ since $N\geq N_\ell$.
Since $(g,t)\sim (\Phi(f)(g),ft) =(e,ft)$ inside the limit group $\varinjlim_{t\in\mathfrak T}\Gamma_t$ we obtain that $(g,t)$ is trivial and thus $\theta$ is injective.
Consider $g\in \varinjlim_{t\in\mathfrak T} (\lim\Gamma)_t$.
We can assume, up to identifying classes with representatives, that $g=(g,t)\in (\lim\Gamma)_t$ for a certain tree $t$ and thus can be written as $g=(g_\ell)_{\ell\in\Leaf(t)}$ with $g_\ell\in\lim\Gamma.$
Since $\lim\Gamma$ is the direct limit of the family of groups $(\Gamma_n:\ n\geq 0)$ and $\Leaf(t)$ is finite we can assume that there exists a large enough $n\geq 0$ such that $g_\ell=[x_\ell,n]$ with $x_\ell\in\Gamma$ for all $\ell\in\Leaf(t).$
Consider a forest $f$ composable with $t$ for which each of its leaves is at distance $n$ from the root in its connected component.
We obtain that $(g,t)\sim ( (\lim\Phi)(f)(g),ft)$ and note that every component of $(\lim\Phi)(f)(g)$ is either trivial or of the form $[\alpha^n(x_\ell),n]\in\lim\Gamma$.
Observe that $[\alpha^n(x_\ell),n] = [x_\ell,0]$ inside $\lim\Gamma$ and thus belongs to the range of $\theta_0$.
We obtain that $g$ is in the range of $\theta_{ft}$ implying that $\theta$ is onto.
\end{proof}
Here is a simple and explicit example which describes well the construction above. For pedagogical reasons we have chosen an example where the endomorphism $\alpha$ is injective.
\begin{example}
Consider the group $\Gamma:=\mathbf{Z}$, a natural number $q\geq 2$ and the non-surjective endomorphism
$$\alpha:\mathbf{Z}\to\mathbf{Z}, z\mapsto qz.$$
We are going to construct $(\lim \Gamma, \lim\alpha)$ together with an explicit isomorphism $j:\lim\Gamma\to \mathbf{Z}[1/q]$ conjugating $\lim\alpha$ into the automorphism $x\in\mathbf{Z}[1/q]\mapsto qx\in\mathbf{Z}[1/q].$
To do this we define a family of groups $\Gamma_n$ and injective morphisms $j_n:\Gamma_n\to \mathbf{Z}[1/q]$ and take the limit in $n$.
Note that $j$ and $j_n$ have no counterpart in the proof above; we use them in order to exhibit a particularly nice description of the pair $(\lim\Gamma,\lim\alpha)$.
For each $n\geq 0$ define $\Gamma_n:=\{(z,n):\ z\in \mathbf{Z}\}$ which is a copy of the group $\mathbf{Z}$.
Consider the morphism
$$j_n:\Gamma_n\to \mathbf{Z}[1/q], \ (z,n)\mapsto \dfrac{z}{q^n} \text{ for all } n\geq 0.$$
Observe that $$j_{n+p}\circ \iota_n^{n+p}(z,n) = j_{n+p}( q^pz, n+p) = \dfrac{q^p z}{q^{n+p}} = \dfrac{z}{q^n} = j_n(z,n) \text{ for all } z\in \mathbf{Z}, n,p\geq 0.$$
Therefore, the family of morphisms $(j_n:\ n\geq 0)$ defines a morphism
$$j: \lim\Gamma\to \mathbf{Z}[1/q],\ [z,n]\mapsto \dfrac{z}{q^n}.$$
Since all the $j_n,\ n\geq 0$ are injective so is $j$.
It is not hard to see that $j$ is surjective implying that $j$ is a group isomorphism from $\lim \Gamma$ to $\mathbf{Z}[1/q].$
Now, define the morphisms
$$\beta_n:\Gamma_n\to\Gamma_n,\ (z,n)\mapsto (qz,n), \ n\geq 0.$$
We have that
\begin{align*}
\iota_n^{n+p}\circ \beta_n(z,n) & = \iota_n^{n+p}(qz,n) = ( q^p\cdot qz, n+p) = ( q \cdot q^p z, n+p)\\
& = \beta_{n+p}( q^p z, n+p ) =\beta_{n+p}\circ \iota_{n}^{n+p}(z,n)
\end{align*}
for all $z\in\mathbf{Z},n,p\geq 0.$
Hence, we can define a limit endomorphism
$$\lim\alpha: \lim\Gamma\to\lim\Gamma,\ [z,n]\mapsto [\alpha(z),n]=[qz,n].$$
Moreover, we observe that:
$$(j\circ \lim\alpha) [z,n] = j([qz,n]) = \dfrac{qz}{q^n} = q\cdot \dfrac{z}{q^n} = q\cdot j( [z,n]) \text{ for all } z\in\mathbf{Z},n\geq 0.$$
All together, we deduce that $\lim\Gamma$ is isomorphic to $\mathbf{Z}[1/q]$ and that $\lim\alpha$ is conjugate to the automorphism:
$$\mathbf{Z}[1/q]\to\mathbf{Z}[1/q], \ x\mapsto qx.$$
\end{example}
\subsection{Description of the fraction groups}\label{sec:description}
In this subsection we provide a description of $G_\alpha$ in terms of a twisted permutational restricted wreath product.
\begin{proposition}\label{prop:twistWP}
Consider a group $\Gamma$, an \textit{automorphism} $\alpha\in\Aut(\Gamma)$ and the associated fraction group $G_\alpha=K_\alpha\rtimes V$.
Let $\oplus_{\mathbf{Q}_2}\Gamma$ be the direct sum of $\Gamma$ over the dyadic rationals $\mathbf{Q}_2$ and define the action
$$\beta:V\curvearrowright \oplus_{\mathbf{Q}_2}\Gamma, \
\beta_v(a)(x):= \alpha^{\log_2(v'(v^{-1}x)) } \left( a(v^{-1} x) \right) , \ v\in V, a\in \oplus_{\mathbf{Q}_2}\Gamma, x\in \mathbf{Q}_2.$$
Write $\oplus_{\mathbf{Q}_2}\Gamma\rtimes_\alpha V$ for the associated wreath product.
\begin{itemize}
\item The groups $G_\alpha$ and $\oplus_{\mathbf{Q}_2}\Gamma\rtimes_\alpha V$ are isomorphic.
\item Here is an explicit isomorphism: For $t\in\mathfrak T$ and $\ell\in\Leaf(t)$ we write $N_\ell^t$ and $I_\ell^t:=[r_\ell^t, s_\ell^t)$ for the distance between the root of $t$ and its leaf $\ell$ and for the sdi associated to $\ell$, respectively.
The map $$\theta_t:\Gamma_t\to \oplus_{\mathbf{Q}_2}\Gamma$$
defined as
$$\theta_t(g):\mathbf{Q}_2\to \Gamma,\ \theta_t(g)(x)=\begin{cases}
\alpha^{-N_\ell^t} (g_\ell) \text{ if } x=r_\ell^t, \ \ell\in\Leaf(t) \\
e_\Gamma \text{ otherwise}
\end{cases}$$
for $g=(g_\ell:\ \ell\in \Leaf(t))\in \Gamma_t$ and $x\in\mathbf{Q}_2$ is an injective morphism.
The family $(\theta_t:\ t\in\mathfrak T)$ defines a group isomorphism from $K_\alpha=\varinjlim_{t\in\mathfrak T}\Gamma_t$ to $\oplus_{\mathbf{Q}_2}\Gamma$ which is $V$-equivariant and thus extends uniquely into an isomorphism from $G_\alpha$ to $\oplus_{\mathbf{Q}_2}\Gamma\rtimes_\alpha V.$
\end{itemize}
\end{proposition}
\begin{proof}
This proposition was already observed in \cite[Section 2.3]{Brothier19WP}.
We provide a proof for the convenience of the reader.
Let $(\Gamma_t, \iota_{s,t}:\ s,t\in \mathfrak T, s\geq t)$ be the directed system of groups associated to the functor $\Phi_\alpha$ built from $\alpha.$
Consider a tree $t$ and $\theta_t$ as described above.
It clearly takes values in $\oplus_{\mathbf{Q}_2}\Gamma$ since $|\supp(\theta_t(g))|\leq |\Leaf(t)|$ for all $g\in \Gamma_t.$
It defines a group morphism from $\Gamma_t$ to $\oplus_{\mathbf{Q}_2}\Gamma$ and in fact an isomorphism from $\Gamma_t$ to $\oplus_{L_t}\Gamma$ where $L_t:=\{ r^t_\ell:\ \ell\in \Leaf(t)\}.$
Consider a forest $f$ that is composable with $t$.
To each leaf $\ell$ of $t$ is associated a root $R_\ell$ of $f$ and a geodesic path starting at $R_\ell$, going upward using only left edges, and ending at a leaf $S_\ell$ of $f.$
If $N^f_\ell$ is the length of this path, we obtain that $\iota_{ft,t}(g)$ is supported on $\{S_\ell:\ \ell\in\Leaf(t)\}$ with
$$\iota_{ft,t}(g)(S_\ell)= \alpha^{N_\ell^f}(g_\ell), \ell\in \Leaf(t).$$
We deduce that $\theta_{ft}\circ\iota_{ft,t} = \theta_t$ and thus there exists a unique group morphism $\theta:\varinjlim_{t\in\mathfrak T}\Gamma_t\to \oplus_{\mathbf{Q}_2}\Gamma$ satisfying that $\theta$ restricts to $\theta_t$ on $\Gamma_t$ for all $t\in\mathfrak T.$
It remains to show that $\theta$ is an isomorphism and is $V$-equivariant.
For each $t\in\mathfrak T$ the morphism $\theta$ restricts into an isomorphism from $\Gamma_t$ onto $\oplus_{L_t}\Gamma$.
In particular, $\theta$ is injective.
Moreover, since any dyadic rational of $\mathbf{Q}_2$ is the first point of a sdi we have that $\cup_{t\in\mathfrak T} L_t=\mathbf{Q}_2$ and thus $\cup_{t\in\mathfrak T} \oplus_{L_t}\Gamma = \oplus_{\mathbf{Q}_2}\Gamma$ implying that $\theta$ is surjective.
We have proven that $\theta$ is an isomorphism.
Let us show that $\theta$ is $V$-equivariant.
Consider $g\in K_\alpha$ and $v\in V$.
We can assume that $g=(g_\ell)_{\ell\in\Leaf(t)}$ is in $\Gamma_t$ for some tree $t\in\mathfrak T$.
Moreover, taking $t$ large enough we can assume that $v$ is adapted to the sdp $(I_\ell^t:\ \ell\in\Leaf(t))$ of $t$, sending this sdp to the sdp $(I_\ell^s:\ \ell\in\Leaf(s))$ of $s$ (here we implicitly identify the leaves of $t$ and $s$).
Consider the Jones action $\pi:V\curvearrowright K_\alpha$ and note that $\pi_v(g)$ has representative $(g_\ell)_{\ell\in\Leaf(s)}$, now regarded as an element of $\Gamma_s$ inside $K_\alpha.$
We obtain that $\theta(\pi_v(g))$ is supported in $\{r_\ell^s:\ell\in\Leaf(s)\}$ satisfying
$$\theta(\pi_v(g))(v r_\ell^t) = \theta(\pi_v(g))(r_\ell^s) = \alpha^{-N_\ell^s}(g_\ell) = \alpha^{N_\ell^t-N_\ell^s}(\theta(g)(r_\ell^t)).$$
We conclude by observing that the slope of $v$ restricted to $I_\ell^t$ is equal to $\dfrac{|I_\ell^s|}{|I_\ell^t|} = 2^{N_\ell^t-N_\ell^s}$ implying that $\beta_v(\theta(g))(vr_\ell^t)=\theta(\pi_v(g))(vr_\ell^t)$ for all $\ell$ and thus $\beta_v(\theta(g))=\theta(\pi_v(g)).$
\end{proof}
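Let us record a special case of the action $\beta$ that follows directly from its definition. If $g_y\in\oplus_{\mathbf{Q}_2}\Gamma$ denotes the map supported at the single point $y\in\mathbf{Q}_2$ with value $g\in\Gamma$, then
$$\beta_v(g_y)=[\alpha^{\log_2(v'(y))}(g)]_{vy}, \ v\in V,$$
i.e.~$v$ moves the support from $y$ to $vy$ and twists the value by the power of $\alpha$ determined by the slope of $v$ at $y$.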
\begin{remark}
The isomorphism defined in the previous proposition is very convenient to work with, although it may not be the most obvious one.
Perhaps, the most natural way to consider $G_\alpha$ as a wreath product is to replace $N_\ell^t$ by the number $M_\ell^t$ of \textit{left} edges between the root and the leaf $\ell$ rather than the number of all edges.
This provides an isomorphism from $K_\alpha$ to $\oplus_{\mathbf{Q}_2}\Gamma$.
However, to make it $V$-equivariant we then consider the action
$$\gamma_v(a)(x) = \alpha^{M_x^v}(a(v^{-1}x))$$
rather than $\beta_v$ where $M_x^v:=M_{v^{-1}I}-M_{I}$ for the choice of a sdi $I$ adapted to $v^{-1}$ and containing $x$, and where $M_I$ is equal to the number of \textit{left} edges to go from $[0,1)$ to $I$ inside the infinite rooted binary tree $t_\infty$ as described in Section \ref{sec:forest}.
Note that this formula does not depend on the choice of $I$.
\end{remark}
\subsection{Thin classification of fraction groups}
In this section we determine exactly when two fraction groups are isomorphic.
We consider some groups $\Gamma,\tilde\Gamma$, automorphisms $\alpha\in\Aut(\Gamma),\tilde\alpha\in \Aut(\tilde\Gamma)$ and the associated fraction groups $G_\alpha=K_\alpha\rtimes V, G_{\tilde\alpha}=K_{\tilde\alpha}\rtimes V$ that we often write $G=K\rtimes V$ and $\tilde G=\tilde K\rtimes V$, respectively.
Moreover, we identify the fraction groups with the corresponding wreath products described in the previous subsection.
We start by constructing some elementary isomorphisms.
\begin{lemma}\label{lem:isomone}
Consider two isomorphic groups $\Gamma,\tilde\Gamma$ and $\beta\in\Isom(\Gamma,\tilde\Gamma), \alpha\in\Aut(\Gamma).$
Then the fraction groups $G=K\rtimes V$ and $\tilde G = \tilde K \rtimes V$ constructed from $\alpha$ and $\tilde\alpha:= \beta \alpha \beta^{-1}$ are isomorphic.
\end{lemma}
\begin{proof}
Consider the isomorphism $\kappa$ from $\prod_{\mathbf{Q}_2}\Gamma$ to $\prod_{\mathbf{Q}_2}\tilde \Gamma$ defined as $\kappa(a)(x) = \beta(a(x))$ for all $a\in \prod_{\mathbf{Q}_2}\Gamma$ and $x\in\mathbf{Q}_2.$
Observe that $\supp(\kappa(a)) = \supp(a)$ for any map $a$, implying that $\kappa$ sends finitely supported maps to finitely supported ones and thus $\kappa$ restricts to an isomorphism from $K$ to $\tilde K$.
Let us check that $\kappa$ is $V$-equivariant. Denote by $\pi:V\curvearrowright K$ and $\tilde\pi:V\curvearrowright \tilde K$ the Jones actions.
We have that
\begin{align*}
\kappa(\pi_v(a))(vx) & = \beta(\pi_v(a)(vx)) = \beta(\alpha^{\log_2(v'(x))}(a(x)))\\
& = (\beta\alpha \beta^{-1})^{\log_2(v'(x))}(\beta(a(x))) \\
& = {\tilde \alpha}^{\log_2(v'(x))}(\beta(a(x)))\\
& = \tilde\pi_v(\kappa(a))(vx), \ a\in K, v\in V, x\in \mathbf{Q}_2.
\end{align*}
Therefore, the isomorphism $\kappa:K\to\tilde K$ extends to an isomorphism $\theta:G\to\tilde G$ such that $\theta(av)=\kappa(a) v$ for any $a\in K, v\in V.$
\end{proof}
The construction of the following isomorphism is less trivial than the last one and uses the dyadic valuation.
\begin{notation}\label{not:valuation}
Let $\nu:\mathbf{Q}\to \mathbf{Z}$ be the dyadic valuation such that $\nu(0)=0$ and $\nu(\prod_{p: \text{ prime }} p^{n_p}) = n_2$ for any finitely supported map $p\mapsto n_p\in\mathbf{Z}$.
\end{notation}
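For example, $\nu(12)=\nu(2^2\cdot 3)=2$, $\nu(5)=0$ and $\nu(3/8)=\nu(2^{-3}\cdot 3)=-3.$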
\begin{lemma}\label{lem:isomtwo}
Consider a group $\Gamma$, an automorphism $\alpha\in\Aut(\Gamma)$ and the associated fraction group that we denote here by $G=K\rtimes_\alpha V$.
Given $k\in \Gamma$ we consider the automorphism $\tilde\alpha:=\ad(k)\circ\alpha\in\Aut(\Gamma)$ and the associated fraction group that we denote by $\tilde G=K\rtimes_{\tilde\alpha}V$.
We have that $G\simeq \tilde G.$
\end{lemma}
\begin{proof}
The idea of the proof is to construct two isomorphisms $\ad(f)$ and $av\mapsto a\cdot c_v\cdot v$ of the \textit{unrestricted} wreath products.
Then we show that the combined isomorphism $av\mapsto \ad(f)(a\cdot c_v\cdot v)$ restricts into an isomorphism between the \textit{restricted} wreath products.
We start by constructing a family $(k_n,n\in\mathbf{Z})$ of elements of $\Gamma$ satisfying $\tilde\alpha^n=\ad(k_n)\alpha^n$ for all $n\in\mathbf{Z}.$
Such a family is defined by induction as follows:
$$\begin{cases} k_0=e\\
k_{n+1} = k_n\alpha^n(k) \text{ for } n\geq 0\\
k_{-(m+1)} = k_{-m} \alpha^{-(m+1)}(k^{-1}) \text{ for } m\geq 0\end{cases}.$$
We will be using the following equation:
\begin{equation}\label{eq:k}
k_n \alpha^n(k_m) = k_{n+m} \text{ for any } n,m\in\mathbf{Z}.\end{equation}
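For instance, $k_1=k_0\alpha^0(k)=k$ and $k_2=k_1\alpha(k)=k\alpha(k)$, so that $k_1\alpha(k_1)=k\alpha(k)=k_2$, as predicted by Equation \eqref{eq:k} with $n=m=1.$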
Consider the dyadic valuation $\nu:\mathbf{Q}\to\mathbf{Z}$ defined in Notation \ref{not:valuation}.
{\bf Claim:} For any $v\in V$ there exists a finite subset $F_v\subset \mathbf{Q}_2$ such that
$$\log_2(v'(x)) = \nu(vx) - \nu(x) \text{ for any } x\in \mathbf{Q}_2\setminus F_v.$$
Fix $v\in V$ and note that there exists a sdp $P$ (finite by definition) such that $v$ restricted to each sdi of $P$ is the composition of an affine map with slope a power of $2$ and a translation by a dyadic rational.
Moreover, the property of the claim is closed under composition and taking inverses.
Therefore, it is sufficient to check this property individually for the functions $f:x\mapsto 2 x$ and $\tau:x\mapsto x+\dfrac{1}{2^m}$ with $m\geq 1$, both with domain $[0,1]$.
The map $x\mapsto \nu(2x)-\nu(x)$ is constant equal to $1$ on $(0,1]$ and $x\mapsto \nu(x+1/2^m) - \nu(x)$ has its support contained in $\{n/2^m: 0\leq n\leq 2^m\}$ if $x$ is restricted to $[0,1]$.
In both cases we obtain the equality $\log_2(f'(x)) = \nu(fx)-\nu(x)$ and $\log_2(\tau'(x))=\nu(\tau x)-\nu(x)$ for all but finitely many $x\in \mathbf{Q}_2.$
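As a concrete illustration, take $\tau:x\mapsto x+\dfrac14$ (the case $m=2$), so that $\log_2(\tau'(x))=0$ for all $x$. For $x=1/8$ we indeed get $\nu(3/8)-\nu(1/8)=-3-(-3)=0$, whereas for $x=1/4$ we get $\nu(1/2)-\nu(1/4)=-1-(-2)=1\neq 0$, so $x=1/4$ is one of the finitely many exceptional points.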
Consider $\overline K:=\prod_{\mathbf{Q}_2} \Gamma$ the group of all maps from $\mathbf{Q}_2$ to $\Gamma.$
Using the description of $G,\tilde G$ as wreath products we can easily extend the Jones actions $\pi:V\curvearrowright K$ and $\tilde\pi:V\curvearrowright K$ induced by $\alpha$ and $\tilde\alpha$ respectively into actions of $V$ on $\overline K$.
We continue to denote by $\pi,\tilde\pi:V\curvearrowright \overline K$ these extensions and write $\overline K\rtimes_\alpha V$ and $\overline K\rtimes_{\tilde\alpha} V$ for the corresponding semidirect products.
{\bf Claim:} We have an isomorphism $$\theta:\overline K\rtimes_\alpha V\to \overline K\rtimes_{\tilde\alpha} V$$ defined as
$$\theta(av) = \ad(f) (a\cdot c_v \cdot v) = f\cdot a \cdot c_v \cdot v \cdot f^{-1}, \ a\in \overline K , v\in V$$
where we define
$$f(x):=k_{\nu(x)}, \ c_v(vx) :=(k_{\log_2(v'(x))})^{-1} \text{ for all } x\in \mathbf{Q}_2, v\in V.$$
Since $\ad(f)$ is an automorphism of $\overline K\rtimes_{\tilde\alpha} V$ it is sufficient to show that
$$\rho:av\mapsto a\cdot c_v \cdot v$$ defines an isomorphism from $\overline K\rtimes_\alpha V$ to $\overline K\rtimes_{\tilde\alpha} V.$
The map $\rho$ is multiplicative if and only if
$$\pi_v(b)c_{vw} = c_v \tilde\pi_v(b c_w) \text{ for all } b\in \overline K, v,w\in V.$$
Fix $x\in\mathbf{Q}_2$ and write $n:=\log_2(v'(wx))$ and $m:=\log_2(w'(x)).$
We have that
$$[\pi_v(b)c_{vw}] (vwx) = \alpha^n(b(wx)) \cdot k_{\log_2((vw)'(x))}^{-1} = \alpha^n(b(wx)) \cdot k_{n+m}^{-1}$$
using the chain rule $(vw)'(x) = v'(wx)\cdot w'(x)$ which is valid for elements of $V$.
Now,
\begin{align*}
[c_v \tilde\pi_v(b c_w)](vwx) & = k_n^{-1} \cdot {\tilde\alpha}^n( b(wx) c_w(wx)) = k_n^{-1}\ad(k_n)\alpha^n\left( b(wx) \cdot k_m^{-1}\right)\\
& = \alpha^n(b(wx)) \cdot \alpha^n(k_m^{-1}) k_n^{-1}\\
& = \alpha^n(b(wx))\cdot k_{m+n}^{-1}
\end{align*}
by Equation \eqref{eq:k}.
This proves that $\rho$ is multiplicative.
It is then a group morphism.
A similar proof shows that the formula $av\mapsto a\cdot c_v^{-1}\cdot v$ defines a group morphism from $\overline K\rtimes_{\tilde\alpha} V$ to $\overline K\rtimes_\alpha V$ which is an inverse of $\rho$ implying that $\rho$ is an isomorphism.
{\bf Claim:} The isomorphism $\theta$ restricts to an isomorphism from $K\rtimes_\alpha V$ onto $K\rtimes_{\tilde\alpha} V.$
Observe that
$$\theta(av) = (faf^{-1})\cdot (f c_v \tilde\pi_v(f^{-1}))\cdot v, a\in \overline K,v\in V$$ and that $\supp(faf^{-1})\subset \supp(a).$
Therefore, it is sufficient to check that $f c_v \tilde\pi_v(f^{-1})$ is finitely supported for any $v\in V$.
Fix $v\in V,x\in \mathbf{Q}_2\setminus F_v$ and write $n:=\log_2(v'(x)).$
We have that:
\begin{align*}
(f c_v \tilde\pi_v(f^{-1}))(vx) & = f(vx) c_v(vx) k_n \alpha^n( f(x)^{-1}) k_n^{-1}\\
& = k_{\nu(vx)} \alpha^n(k_{\nu(x)}^{-1}) k_n^{-1}\\
& = k_{\nu(vx)} [k_n\alpha^n(k_{\nu(x)}) ]^{-1} \\
& = k_{\nu(vx)} (k_{n+\nu(x)})^{-1}\\
& = e,
\end{align*}
by definition of $F_v$ and Equation \eqref{eq:k}.
Therefore, the support of $(f c_v \tilde\pi_v(f^{-1}))$ is contained in the finite set $F_v.$
This implies that $\theta$ maps $K\rtimes_\alpha V$ inside $K\rtimes_{\tilde\alpha} V$.
A similar proof shows that $av\mapsto \ad(f^{-1})( a [f^v c_v^{-1} (f^v)^{-1}] v)$ defines a group morphism from $K\rtimes_{\tilde\alpha}V$ to $K\rtimes_\alpha V$ which is an inverse to $\theta$ implying that $\theta$ restricts into an isomorphism from $G$ to $\tilde G$.
\end{proof}
We now prove that an isomorphism $\kappa:K\to \tilde K$ that is spatial can be decomposed as a product of isomorphisms. Recall that $K=\oplus_{\mathbf{Q}_2}\Gamma$ and $\tilde K=\oplus_{\mathbf{Q}_2}\tilde\Gamma$ where $\Gamma,\tilde\Gamma$ are groups.
\begin{lemma}\label{lem:decomposition}
Let $\kappa:K\to \tilde K$ be an isomorphism satisfying that $\supp(\kappa(a))=\varphi(\supp(a))$ for a certain $\varphi \in\Stab_N(\Q_2)$.
Then there exists a unique family of isomorphisms $(\kappa_x:\Gamma\to \tilde\Gamma,\ x\in \mathbf{Q}_2)$ satisfying that
$$\kappa(a)(\varphi(x)) = \kappa_x(a(x)) \text{ for all } a\in K , x\in\mathbf{Q}_2.$$
\end{lemma}
\begin{proof}
Consider $g\in \Gamma,x\in \mathbf{Q}_2$ and $g_x\in K$ the element supported in $\{x\}$ taking the value $g$ at $x$.
We have that $\kappa(g_x)$ has support equal to $\{\varphi(x)\}$ if $g\neq e$ and thus there exists $\kappa_x(g)\in \tilde\Gamma$ satisfying that $\kappa(g_x) = [\kappa_x(g)]_{\varphi(x)}$, the map supported in the singleton $\{\varphi(x)\}$ taking the value $\kappa_x(g)$ at $\varphi(x).$
Using that $\kappa$ is a group morphism we obtain that
\begin{align*}
[\kappa_x(gh)]_{\varphi(x)} & =\kappa((gh)_x) = \kappa(g_x \cdot h_x) = \kappa(g_x)\cdot \kappa(h_x)\\
& = [\kappa_x(g)]_{\varphi(x)}\cdot [\kappa_x(h)]_{\varphi(x)} = [\kappa_x(g)\cdot\kappa_x(h)]_{\varphi(x)}
\end{align*}
for $g,h\in \Gamma$ implying that $\kappa_x$ is a group morphism.
Therefore, $(\kappa_x,\ x\in \mathbf{Q}_2)$ is a family of group morphisms implying that
$$\prod_{x\in\mathbf{Q}_2}\kappa_x: K\to \tilde K, a\mapsto (\varphi(x)\mapsto \kappa_x(a(x)))$$ is a group morphism.
This latter morphism coincides with $\kappa$ on $\{g_x:\ g\in\Gamma,x\in\mathbf{Q}_2\}$; since this set generates $K$, we obtain that $\kappa=\prod_{x\in\mathbf{Q}_2}\kappa_x$.
It is straightforward to check that $a\mapsto (\varphi(x)\mapsto \kappa_x(a(x)))$ is an isomorphism if and only if each $\kappa_x$ is an isomorphism, which finishes the proof.
\end{proof}
Before proving the main theorem of this section we prove the following surprising rigidity fact: any isomorphism between fraction groups of the class considered is \textit{spatial} in the sense described below.
\begin{proposition}\label{prop:supportone}
Consider two groups $\Gamma,\tilde\Gamma$ with $\Gamma$ nontrivial and the associated fraction groups $G:=K\rtimes V, \tilde G:=\tilde K\rtimes V$ where $K=\oplus_{\mathbf{Q}_2}\Gamma, \tilde K=\oplus_{\mathbf{Q}_2}\tilde\Gamma$.
Assume we have an isomorphism $\theta:G\to \tilde G.$
Then $\theta$ restricts to an isomorphism $\kappa: K\to \tilde K$.
Moreover, there exists a unique homeomorphism $\varphi$ of Cantor space, normalising $V$ and stabilising $\mathbf{Q}_2$ (i.e.~$\varphi\in\Stab_N(\Q_2)$) satisfying
$$\supp(\kappa(a)) =\varphi(\supp(a)) \text{ for all } a\in K.$$
In particular, there exists a unique family of isomorphisms $(\kappa_x:\Gamma\to \tilde\Gamma,\ x\in \mathbf{Q}_2)$ satisfying that
$$\kappa(a)(\varphi(x)) = \kappa_x(a(x)) \text{ for all } a\in K, x\in \mathbf{Q}_2.$$
\end{proposition}
\begin{proof}
Consider $\theta:G\to\tilde G$ as above.
Theorem \ref{theo:KinG} implies that $\theta$ restricts to an isomorphism $\kappa$ from $K$ to $\tilde K$.
Therefore,
$$\theta(av)=\kappa(a) \cdot c_v \cdot \phi_v,a\in K,v\in V$$ where
$$\kappa\in\Isom(K,\tilde K),\phi\in\Aut(V) \text{ and } c:V\to K, v\mapsto c_v.$$
By Rubin's theorem we have that $\phi=\ad_\varphi$ for a unique $\varphi\in N_{H(\fC)}(V).$
Define
$$W_z:=\{ v\in V:\ vz = z \text{ and } v'(z)=1\}$$
for any $z\in\mathfrak C$.
Note that $v\in W_z$ if and only if there exists a sdi $I$ containing $z$ on which $v$ acts like the identity.
This implies that $\ad_\varphi:v\mapsto \varphi v \varphi^{-1}$ restricts to an isomorphism from $W_z$ to $W_{\varphi(z)}$, i.e.~$\phi(W_z)=W_{\varphi(z)}.$
Since $\Gamma$ is nontrivial there exists $g\in\Gamma, g\neq e.$
For $x\in\mathbf{Q}_2$ we write $g_x\in K$ for the map supported at $\{x\}$ taking the value $g$.
Observe that if $v\in W_x$, then
\begin{align*}
\theta(vg_xv^{-1}) & = \theta( [\alpha^{\log_2(v'(x))}(g)]_{vx} ) =\kappa(g_x)\\
& = \theta(v) \theta(g_x) \theta(v)^{-1}\\
& = c_v\cdot \phi_v \kappa(g_x) \phi_v^{-1} \cdot c_v^{-1}\\
& = \ad(c_v) \tilde\pi_{\phi_v}(\kappa(g_x)).
\end{align*}
We deduce that $\supp(\kappa(g_x)) = \phi_v(\supp(\kappa(g_x)))$.
Therefore, the group $\phi(W_x)=W_{\varphi(x)}$ is a subgroup of the stabiliser subgroup
$$\Stab_V(\supp(\kappa(g_x))):=\{ w\in V:\ w\cdot \supp(\kappa(g_x)) = \supp(\kappa(g_x))\}.$$
This implies that $\supp(\kappa(g_x))$ is equal to the singleton $\{\varphi(x)\}.$
Indeed, assume that there exists $s\in \supp(\kappa(g_x)),s\neq\varphi(x)$.
Since $\supp(\kappa(g_x))$ is a finite subset of $\mathbf{Q}_2$ we can find a sdi $I$ such that $\varphi(x)\notin I$ and $I\cap \supp(\kappa(g_x))= \{s\}.$
Let $I_0$ and $I_1$ be the first and second half of $I$ and consider $v\in V$ permuting $I_0$ with $I_1$ and fixing all other elements of $\mathfrak C$.
In particular, $v\in W_{\varphi(x)}$ and thus $v$ stabilises $\supp(\kappa(g_x)).$
However, by definition of $v$ we have that $vs\notin \supp(\kappa(g_x))$, a contradiction.
Since $\kappa$ is injective and $g\neq e$ we have that $\supp(\kappa(g_x))$ has at least one point and this point must be $\varphi(x).$
In particular, $\varphi$ stabilises $\mathbf{Q}_2.$
By observing that any $a\in K$ is a finite product of some $g_x$ as above we obtain that $\supp(\kappa(a))=\varphi(\supp(a)).$
The second statement of the proposition is given by Lemma \ref{lem:decomposition}.
\end{proof}
We are now able to prove the main theorem of this section which shows that the only isomorphic pairs of fraction groups come from the last two lemmata.
\begin{theorem}\label{th:isomGalpha}
Consider two groups with automorphisms $(\Gamma,\alpha\in\Aut(\Gamma))$ and $(\tilde\Gamma,\tilde\alpha\in\Aut(\tilde\Gamma))$ and their associated fraction groups $G:= K\rtimes V$ and $\tilde G:=\tilde K\rtimes V$, respectively.
The groups $G$ and $\tilde G$ are isomorphic if and only if there exist $\beta\in\Isom(\Gamma,\tilde\Gamma)$ and $h\in\tilde\Gamma$ such that $\tilde\alpha = \ad(h)\circ\beta \alpha\beta^{-1}.$
\end{theorem}
\begin{proof}
The statement is trivially true if both $\Gamma$ and $\tilde\Gamma$ are trivial. We now assume that $\Gamma$ is nontrivial.
Consider an isomorphism $\theta:G\to \tilde G$ with $G,\tilde G$ as above.
By Proposition \ref{prop:isom}.4 we have that $\theta(K)=\tilde K$.
Therefore, we have:
$$\theta(av)=\kappa(a) \cdot c_v \cdot \phi_v \text{ for all } a\in K, v\in V$$
for some $\kappa\in\Isom(K,\tilde K)$, $\phi=\ad_\varphi\in\Aut(V)$, $c:V\to\tilde K$ where $\varphi\in N_{H(\fC)}(V)$.
Moreover, Proposition \ref{prop:supportone} implies that $\varphi$ stabilises $\mathbf{Q}_2$ and that $\kappa$ can be written as a product of isomorphisms: there exists a family $(\kappa_x:\ x\in \mathbf{Q}_2)$ such that
$$\kappa(a)(\varphi(x)) = \kappa_x(a(x)) \text{ for all } x\in\mathbf{Q}_2, a\in K.$$
In particular, $\Gamma$ is isomorphic to $\tilde\Gamma$ (via any of the $\kappa_x,x\in\mathbf{Q}_2$).
Let us show now that $\tilde\alpha=\ad(h)\circ \beta \alpha \beta^{-1}$ for suitable $\beta\in\Isom(\Gamma,\tilde\Gamma),h\in\tilde\Gamma.$
Fix $x\in\mathbf{Q}_2$ and $v\in V$ such that $vx=x$ and $v'(x)=2.$
Note that $\phi_v(\varphi(x))=\varphi(x)$ and $\phi_v'(\varphi(x))=2$ by virtue of Proposition \ref{prop:slope}.
If $g\in\Gamma$ and $g_x$ is the map supported in $\{x\}$ taking the value $g$ we obtain that
$$\theta(vg_xv^{-1}) = \kappa( [\alpha(g)]_x )= c_v \cdot \phi_v\cdot \kappa(g_x) \cdot \phi_v^{-1} \cdot c_v^{-1}= \ad(c_v)( \tilde\pi_{\phi_v}(\kappa(g_x))).$$
Note that $\kappa(g_x)$ is supported in $\{\varphi(x)\}$ and thus $\tilde\pi_{\phi_v}(\kappa(g_x))$ is also supported in $\{\varphi(x)\}$ since $\phi_v(\varphi(x))=\varphi(x).$
Moreover, $\phi_v'(\varphi(x))=2$ implying that $\tilde\pi_{\phi_v}(a)(\varphi(x)) = \tilde\alpha(a(\varphi(x)))$ for all $a\in \tilde K.$
If we evaluate the equality above at $\varphi(x)$ we obtain:
$$\kappa_x(\alpha(g)) = \ad(c_v(\varphi(x)))( \tilde\alpha(\kappa_x(g))).$$
Since $g$ was arbitrary we deduce the equality:
$$\kappa_x \circ \alpha = \ad(c_v(\varphi(x))) \circ \tilde\alpha \circ\kappa_x.$$
In particular,
$$\tilde\alpha= \ad(h)\circ \beta \alpha \beta^{-1}$$
where $\beta=\kappa_x$ and $h=c_v(\varphi(x))^{-1}.$
The converse is given by Lemmata \ref{lem:isomone} and \ref{lem:isomtwo}.
\end{proof}
\begin{remark}\label{rem:AWP}
Note that it is possible to follow Neumann's original proof for restricted wreath products in order to obtain that $\Gamma$ is isomorphic to $\tilde\Gamma$ \cite{Neumann64}. However, the proof would be rather indirect and would provide a less precise statement regarding the relation between the automorphisms $\alpha$ and $\tilde\alpha$.
Our proof takes advantage of the highly transitive action of $V\curvearrowright \mathbf{Q}_2$.
The fact that all isomorphisms between two groups in our class are spatial is very surprising and does not hold in general even for restricted standard wreath products.
Moreover, one cannot expect to generalise Neumann's theorem for the class of all restricted permutational and twisted wreath products constructed from transitive actions.
We illustrate these remarks with specific examples.
We start by constructing non-spatial automorphisms.
Consider a nontrivial group $\Gamma.$
Let $G:=\Gamma^2\wr \mathbf{Z}=\oplus_{n\in\mathbf{Z}}\Gamma^2\rtimes \mathbf{Z}$ be the standard restricted wreath product of $\Gamma^2=\Gamma\oplus\Gamma$ with $\mathbf{Z}$.
An element of $\oplus_{n\in\mathbf{Z}}\Gamma^2$ is a finitely supported map $f:\mathbf{Z}\to \Gamma^2, n\mapsto (f(n)_0, f(n)_1)$.
Now we re-index them as maps from $\frac{1}{2}\mathbf{Z}$ to $\Gamma$ using the isomorphism:
$$j: \oplus_{n\in\mathbf{Z}}\Gamma^2\to \oplus_{n\in \frac{1}{2}\mathbf{Z}} \Gamma$$
defined as
$$j(f)(n) = f(n)_0 \text{ and } j(f)(n+1/2) = f(n)_1 \text{ for all } n\in \mathbf{Z}, f\in\oplus_{n\in\mathbf{Z}}\Gamma^2.$$
Consider the automorphism $\kappa\in \Aut(\oplus_{n\in \frac{1}{2}\mathbf{Z}} \Gamma)$ which shifts indices by $1/2.$
Observe that the conjugated automorphism $\kappa':= j^{-1}\circ \kappa\circ j$ of $\oplus_\mathbf{Z} \Gamma^2$ is $\mathbf{Z}$-equivariant and thus extends to an automorphism $\theta$ of $\Gamma^2\wr\mathbf{Z}$.
However, it is not spatial.
Indeed, fix $g\in\Gamma$ nontrivial, $n\in\mathbf{Z}$ and define $f\in\oplus_{n\in\mathbf{Z}}\Gamma^2$ supported at $n\in\mathbf{Z}$ so that $f(n)=(g,g).$
We have that $\kappa'(f)$ has support $\{n,n+1\}$ since $\kappa'(f)(n)=(e_\Gamma,g)$ and $\kappa'(f)(n+1)=(g,e_\Gamma).$
If $\theta$ were spatial, then the supports of $f$ and $\kappa'(f)$ would have the same cardinality, which is not the case here.
We now present isomorphic twisted permutational restricted wreath products that are constructed from non-isomorphic groups $\Gamma$ and $\tilde\Gamma$.
Let $\Gamma$ be any nontrivial group and consider $B:=B_0\oplus B_1$ the direct sum of two nontrivial groups.
Consider the standard restricted wreath product $G:=\Gamma\wr B=K\rtimes B$.
Define now $\tilde\Gamma:= \Gamma^{B_1}$, $\tilde K:= \oplus_{B_0} (\Gamma^{B_1})=\oplus_{B_0}\tilde\Gamma$ and the following action of $B$:
$$B\curvearrowright \tilde K ,\ [(b_0,b_1)\cdot f](a_0):= \beta_{b_1}(f(b_0^{-1}a_0)), (b_0,b_1)\in B, a_0\in B_0, f\in\tilde K$$
such that
$$\beta:B_1\curvearrowright \Gamma^{B_1}$$
is the left-shift action.
The semidirect product $\tilde G:=\tilde K\rtimes B$ is a twisted permutational restricted wreath product.
Note that the map
$$\kappa:K\to \tilde K,\ \kappa(f)(b_0)(b_1) = f(b_0,b_1),\ f\in K, b_0\in B_0, b_1\in B_1$$
is a $B$-equivariant isomorphism inducing an isomorphism from $G$ to $\tilde G$.
However, $\Gamma$ is not isomorphic to $\tilde\Gamma=\Gamma^{B_1}$ in general, even though the two actions $B\curvearrowright B$ and $B\curvearrowright B_0$ are both transitive.
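For instance, taking $\Gamma:=\mathbf{Z}$, $B_0:=\mathbf{Z}$ and $B_1$ a nontrivial finite group in the construction above yields isomorphic wreath products built from the pair of groups
$$\Gamma=\mathbf{Z} \text{ and } \tilde\Gamma=\Gamma^{B_1}\simeq \mathbf{Z}^{|B_1|},$$
which are not isomorphic.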
\end{remark}
Considering endomorphisms rather than automorphisms we obtain the following classification result.
\begin{corollary}
Consider some groups $\Gamma,\tilde\Gamma$ and endomorphisms $\alpha\in\End(\Gamma),\tilde\alpha\in\End(\tilde\Gamma).$
Let $G=K\rtimes V$ and $\tilde G = \tilde K\rtimes V$ be the associated fraction groups.
Let $\lim\Gamma$ be the directed limit of groups constructed in Section \ref{sec:End} with automorphisms $\lim\alpha$ and similarly consider $(\lim{\tilde \Gamma},\lim{\tilde \alpha}).$
The groups $G$ and $\tilde G$ are isomorphic if and only if there exist an isomorphism $\beta\in\Isom(\lim\Gamma,\lim{\tilde\Gamma})$ and $h\in\lim{\tilde\Gamma}$ such that
$$\lim{\tilde\alpha} = \ad(h)\circ \beta\circ \lim{\alpha}\circ \beta^{-1}.$$
\end{corollary}
Note that the isomorphism $\beta$ above can be arbitrary.
This suggests that there are no simple ways to express the connections between $(\Gamma,\alpha)$ and $(\tilde\Gamma,\tilde\alpha)$ without passing through the limits $(\lim\Gamma,\lim\alpha)$ and $(\lim\tilde\Gamma,\lim\tilde\alpha).$
\section{Description of the automorphism group of a fraction group}\label{sec:AutG}
Throughout this section we consider fraction groups that are isomorphic to \textit{untwisted} restricted permutational wreath products.
These are the fraction groups obtained from the choice of a group $\Gamma$ and the morphism $\Gamma\to\Gamma\oplus\Gamma, g\mapsto (g,e_\Gamma)$, that is when the endomorphism $\alpha$ is the identity automorphism $\id_\Gamma$.
We consider a fixed nontrivial group $\Gamma$, the trivial case being uninteresting for our study.
Put $K:=\oplus_{\mathbf{Q}_2}\Gamma$, the group of finitely supported maps from $\mathbf{Q}_2$ to $\Gamma$, and write $G:=K\rtimes V$ for the restricted permutational wreath product associated to the action $V\curvearrowright \mathbf{Q}_2$; this is the fraction group we want to study.
The aim of this section is to provide a clear description of the automorphism group $\Aut(G)$.
\subsection{The four groups acting on $G$}
We start by fixing some notation and defining certain groups.
Recall that $\Homeo(\mathfrak C)$ is the group of homeomorphisms of Cantor space $\mathfrak C$ and that Thompson's group $V$ is identified with a subgroup of it.
Let
$$N_{H(\fC)}(V):=\{\varphi\in\Homeo(\mathfrak C):\ \varphi V \varphi^{-1}=V\}$$ be the normaliser subgroup of $V$ inside $\Homeo(\mathfrak C)$ and put $\Stab_N(\Q_2)$ the stabiliser subgroup of $N_{H(\fC)}(V)$ for the subset $\mathbf{Q}_2\subset \mathfrak C.$
Therefore,
$$\Stab_N(\Q_2):=\{ \varphi\in \Homeo(\mathfrak C):\ \varphi V\varphi^{-1} =V \text{ and } \varphi(\mathbf{Q}_2)=\mathbf{Q}_2\}.$$
As explained earlier $N_{H(\fC)}(V)\to \Aut(V), \varphi\mapsto \ad_\varphi$ realises an isomorphism.
The fact that this morphism is surjective is a consequence of Rubin's theorem and the injectivity follows from an easy argument, see \cite{Rubin96} and \cite[Section 3]{BCMNO19} for details.
Moreover, note that $\Stab_N(\Q_2)$ is a proper subgroup of $N_{H(\fC)}(V)\simeq \Aut(V)$.
Indeed, if $\sigma$ is the nontrivial permutation of $\{0,1\}$, then applying $\sigma$ to each letter of a binary sequence induces a homeomorphism of $\mathfrak C$. This element is in $N_{H(\fC)}(V)$ but does not stabilise $\mathbf{Q}_2$.
We refer the reader to \cite{BCMNO19} for a detailed description of $\Aut(V)$.
Consider the group
$$\overline K:=\prod_{\mathbf{Q}_2}\Gamma$$ of all maps from $\mathbf{Q}_2$ to $\Gamma$ and the unrestricted permutational wreath product $$\overline G:=\prod_{\mathbf{Q}_2}\Gamma\rtimes V.$$
Identify $G:=\oplus_{\mathbf{Q}_2}\Gamma\rtimes V$ with the corresponding subgroup of $\overline G$ and consider the elements of $\overline K$ inside $\overline G$ which normalise $G$.
We write
$$N_{\overline K}(G):=\{f\in \prod_{\mathbf{Q}_2}\Gamma:\ f G f^{-1} =G\}$$
for the normaliser subgroup.
As usual $\Aut(\Gamma)$ is the automorphism group of $\Gamma$ and $Z\Gamma$ the centre of $\Gamma.$
We are going to show that $\Aut(G)$ is generated by some copy of the following groups:
$$\Stab_N(\Q_2), \Aut(\Gamma), N_{\overline K}(G) \text{ and } Z\Gamma.$$
They will all act faithfully on $G$ except $N_{\overline K}(G)$, which we will mod out by the normal subgroup of constant maps from $\mathbf{Q}_2$ to $Z\Gamma$, denoted simply by $Z\Gamma$.
\subsubsection{Action of the automorphism group of the input group}
The whole automorphism group $\Aut(\Gamma)$ acts on $G$ in the expected diagonal way:
$$\beta\cdot av := \overline \beta(a) v, \ \beta\in \Aut(\Gamma), a\in K, v\in V$$
where $$\overline\beta(a):\mathbf{Q}_2\to \Gamma, x\mapsto \beta(a(x)).$$
It is easy to check that this formula defines a faithful action by automorphisms $\Aut(\Gamma)\curvearrowright G$.
\subsubsection{Action of certain automorphisms of Thompson's group $V$}
The stabiliser $\Stab_N(\Q_2)$ acts spatially on $G$ as follows:
$$ \varphi \cdot (av) := a^\varphi \cdot \ad_\varphi(v), \ \varphi \in \Stab_N(\Q_2), a\in \oplus_{\mathbf{Q}_2}\Gamma, v\in V$$
where $$a^\varphi:\mathbf{Q}_2\to \Gamma, x\mapsto a(\varphi^{-1} x) \text{ and } \ad_\varphi(v)= \varphi v \varphi^{-1}.$$
This is well-defined since $\supp(a^\varphi)=\varphi(\supp(a))$ and thus if $a$ is finitely supported so is $a^\varphi.$
This action is clearly faithful since the action of $N_{H(\fC)}(V)$ on $V$ is known to be faithful, see \cite[Section 3]{BCMNO19}.
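Let us record the key identity behind the fact that this formula defines an action by automorphisms:
$$(b^v)^\varphi = (b^\varphi)^{\ad_\varphi(v)} \text{ for all } b\in K, v\in V, \varphi\in\Stab_N(\Q_2),$$
both sides being the map $x\mapsto b(v^{-1}\varphi^{-1}x).$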
\subsubsection{Adjoint action of $\Gamma$-valued functions}
Given $f\in \overline K$ we can act on $\overline G$ by conjugation:
$$f\cdot (av) = \ad(f)(av) = favf^{-1} = (fa(f^v)^{-1})\cdot v, \ f\in \overline K, a\in \overline K, v\in V.$$
This restricts to an action by automorphisms:
$$\ad: N_{\overline K}(G)\to \Aut(G).$$
Let us compute the kernel of $\ad$.
Assume that $\ad(f)=\id_G$ for a certain $f\in N_{\overline K}(G).$
We then obtain that
$$v = \ad(f)(v) = f(f^v)^{-1} v$$ implying that $f=f^v$ for all $v\in V$.
Since $V\curvearrowright \mathbf{Q}_2$ is transitive we obtain that $f$ is constant.
Consider $a\in K, x\in \mathbf{Q}_2$ and observe that
$$a(x)=\ad(f)(a)(x) = f(x)a(x)f^{-1}(x)$$
implying that $f(x)$ is central in $\Gamma$; thus $f$ is a constant map valued in $Z\Gamma.$
Conversely, if $\zeta\in Z\Gamma$ and $f(x)=\zeta$ for all $x\in \mathbf{Q}_2,$ we have that
$$\ad(f)(av) = fa(f^v)^{-1} v = \zeta \zeta^{-1} a v = av$$ for all $a\in K, v\in V$.
We have proven that the kernel of the action $\ad:N_{\overline K}(G)\curvearrowright G$ is equal to the group of all constant maps from $\mathbf{Q}_2$ to $Z\Gamma$ that we simply denote by $Z\Gamma$.
Note that the group $N_{\overline K}(G)$ is in general strictly larger than $K$, as shown in the following remark. It is clear that constant maps are in the normaliser, but there are also some less trivial ones.
\begin{remark}
We provide an example of an element of the normaliser subgroup that is not in $K$ nor constant.
Assume $\Gamma=\mathbf{Z}_2.$
Consider the following set: $$X:=\{ \dfrac{4k+1}{2^n} :\ n\in \mathbf{Z}, k\in\mathbf{Z} \}$$ and write $Y:=X\cap [0,1)$.
We put $f:=\chi_Y$ the characteristic function of $Y$ interpreted as an element of $\overline K=\prod_{\mathbf{Q}_2} \mathbf{Z}_2.$
Observe that $X=2X$ and that the symmetric difference $(X+\dfrac{1}{2})\Delta X$ is locally finite in the sense that its intersection with any interval $(x,y), x,y\in \mathbf{R}$ is finite.
This implies that for any $v\in V$ we have that $v\cdot \chi_Y = \chi_Y \mod \oplus_{\mathbf{Q}_2}\mathbf{Z}_2.$
Hence, $$\ad(f)(av) = fa(f^v)^{-1} v = \chi_{Y\Delta v Y} \cdot a v, \ a \in K, v\in V,$$
which is an element of $K\rtimes V$ since $\chi_{Y\Delta v Y}$ is finitely supported. However, $f=\chi_Y$ is neither in $K$, since $Y$ is an infinite set, nor constant.
\end{remark}
\subsubsection{Exotic automorphisms: actions of the centre of the input group}
There is an unexpected action of $Z\Gamma$ on $G$ that we now describe.
We construct these exotic automorphisms in a similar way to Lemma \ref{lem:isomtwo}, using the dyadic valuation and the slopes of elements of $V$.
\begin{proposition}\label{prop:zetacocycle}
For any $v\in V$ we define the map $p_v:=\log_2(v')^v - \nu +\nu^v$ that is
$$p_v(x) = \log_2(v'(v^{-1}x)) - \nu(x) + \nu(v^{-1}x) ,\ x\in\mathbf{Q}_2.$$
For any $\zeta\in Z\Gamma$ we put
$$c(\zeta): V\to \prod_{\mathbf{Q}_2}Z\Gamma, \ c(\zeta)_v= \zeta^{p_v}, \ c(\zeta)_v(x)= \zeta^{p_v(x)}, v\in V, x\in\mathbf{Q}_2.$$
The map $c(\zeta)$ is valued in $\oplus_{\mathbf{Q}_2}Z\Gamma$ and satisfies the following cocycle identity:
$$c(\zeta)_{vw} = c(\zeta)_v \cdot c(\zeta)_w^v \text{ for all } v,w\in V.$$
This defines an injective group morphism:
$$E:Z\Gamma\to \Aut(G),\ E_\zeta(av) := a \cdot c(\zeta)_v \cdot v \text{ for all } a \in K, v\in V, \zeta\in Z\Gamma.$$
\end{proposition}
\begin{proof}
The first claim of Lemma \ref{lem:isomtwo} implies that $p_v:\mathbf{Q}_2\to\mathbf{Z}$ is finitely supported and so $c(\zeta)_v$ is for all $v\in V,\zeta\in Z\Gamma.$
Let us show that $E$ defines a group morphism.
Consider $a,b\in K, v,w\in V, \zeta\in Z\Gamma$.
We have that
\begin{align*}
E_\zeta(av) \cdot E_\zeta(bw) & = a \cdot \zeta^{p_v} \cdot v \cdot b \cdot \zeta^{p_w} \cdot w = a\cdot \zeta^{p_v}\cdot b^v \cdot \zeta^{p_w^v} \cdot vw = \zeta^{p_v + p_w^v} \cdot ab^v \cdot vw \\
E_\zeta(av\cdot bw) & = E_\zeta( a b^v \cdot vw) = ab^v\cdot \zeta^{p_{vw}}\cdot vw.
\end{align*}
Observe now that
\begin{align*}
[p_v + p_w^v](x) & = \log_2(v'(v^{-1}x)) - \nu(x) + \nu(v^{-1}x) + \log_2(w'(w^{-1}v^{-1}x)) - \nu(v^{-1}x) + \nu(w^{-1}v^{-1}x)\\
& = \log_2(v'(v^{-1}x)) + \log_2(w'(w^{-1}v^{-1}x)) - \nu(x) + \nu(w^{-1}v^{-1}x)\\
p_{vw}(x) & = \log_2((vw)'((vw)^{-1}x)) - \nu(x) + \nu((vw)^{-1}x) \\
& = \log_2(v'(v^{-1}x)) + \log_2(w'(w^{-1} v^{-1}x)) - \nu(x) + \nu(w^{-1} v^{-1} x),
\end{align*}
for $x\in \mathbf{Q}_2.$
We have proved that $$p_v+p_w^v = p_{vw}.$$
All together this implies that for any $\zeta\in Z\Gamma$ the map $E_\zeta$ is an endomorphism of the group $G$.
It is easy to see that $E_\zeta$ is bijective with inverse map $E_{\zeta^{-1}}$ implying that $E_\zeta$ is an automorphism of $G$ for $\zeta\in Z\Gamma.$
It is rather obvious that $E_\zeta\circ E_\eta = E_{\zeta\eta}$ for $\zeta,\eta\in Z\Gamma$ implying that $E$ defines a group morphism from $Z\Gamma$ to $\Aut(G).$
To finish the proof it is sufficient to prove that $E$ is injective.
Assume that $E_\zeta=\id_G$ for a certain $\zeta\in Z\Gamma.$
This is equivalent to having
$$\zeta^{p_v(x)}=e \text{ for all } v\in V, x\in\mathbf{Q}_2.$$
Consider $v\in V$ such that $v0=0$ and $v'(0)=2.$
We obtain that $\zeta^{p_v(0)}= \zeta$ and thus $\zeta=e$.
Therefore, the kernel of $E$ is trivial.
\end{proof}
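For a concrete instance, take $\Gamma:=\mathbf{Z}$ written additively, so that $Z\Gamma=\mathbf{Z}$ and $K=\oplus_{\mathbf{Q}_2}\mathbf{Z}$. The exotic automorphism associated to $\zeta\in\mathbf{Z}$ is then
$$E_\zeta(av) = (a+\zeta\, p_v)\cdot v,\ a\in K, v\in V,$$
so $E$ embeds $\mathbf{Z}$ inside $\Aut(G)$ by perturbing the $K$-part by integer multiples of the finitely supported maps $p_v$.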
We now fix some notation concerning the actions of these four groups on $G$ and summarise the observations above in the following proposition.
\begin{proposition}
Consider the direct product $\Stab_N(\Q_2)\times \Aut(\Gamma)$ and define the map
$$A: \Stab_N(\Q_2)\times \Aut(\Gamma)\to \Aut(G), \ A_{\varphi,\beta}(av):= \overline \beta(a)^\varphi \ad_\varphi(v)$$
for $\varphi\in \Stab_N(\Q_2), \beta\in\Aut(\Gamma), a\in K, v\in V$ such that
$$\overline\beta(a)(x):=\beta(a(x)) \text{ and } a^\varphi(x):=a(\varphi^{-1}(x)), x\in\mathbf{Q}_2.$$
The map $A$ is an injective group morphism.
Define the map $$\ad:N_{\overline K}(G)\to \Aut(G), \ad(f)(av):= favf^{-1} = (faf^{-1}) f(f^v)^{-1} v$$
for $f\in N_{\overline K}(G), a\in K, v\in V.$
The map $\ad$ is a group morphism and its kernel is $Z\Gamma\lhd N_{\overline K}(G).$
We continue to write $$\ad:N_{\overline K}(G)/Z\Gamma\to \Aut(G)$$ for
the factorised injective group morphism.
For any $v\in V$ we put $$p_v:\mathbf{Q}_2\to \mathbf{Z}, \ p_v(x):= \log_2(v'(v^{-1}x)) -\nu(x) + \nu(v^{-1}x), \ x\in\mathbf{Q}_2$$
where $\nu$ is the dyadic valuation and define the map
$$E:Z\Gamma\to \Aut(G),\ E_\zeta(av) = a \cdot \zeta^{p_v} \cdot v, \ a\in K , v\in V, \zeta\in Z\Gamma$$
which is an injective group morphism.
\end{proposition}
We have nothing to prove, except that the actions of $\Stab_N(\Q_2)$ and $\Aut(\Gamma)$ on $G$ mutually commute, which is an easy computation.
\subsection{Semidirect product}\label{sec:semidirect}
We will later prove that any automorphism of $G$ can be decomposed uniquely as a product of four elements of the groups
$$\Stab_N(\Q_2), \Aut(\Gamma),N_{\overline K}(G)/Z\Gamma \text{ and } Z\Gamma.$$
In this section we describe how these four groups sit together inside $\Aut(G)$.
They come in two direct products: $\Stab_N(\Q_2)\times \Aut(\Gamma)$ and $Z\Gamma\times N_{\overline K}(G)/Z\Gamma$ with the first direct product acting on the second in a semidirect product fashion.
Before defining the action we will need a technical lemma.
\begin{lemma}\label{lem:formula}
Consider $x\in \mathbf{Q}_2, \phi,\varphi\in\Stab_N(\Q_2)$ and $v,w\in V$.
We have the following formulae:
\begin{enumerate}
\item If $vx=wx$, then $$\log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}x)) - \log_2(v'(x)) = \log_2((\varphi^{-1} w \varphi)'(\varphi^{-1}x)) - \log_2(w'(x));$$
\item Given $x\in\mathbf{Q}_2$ and $v\in V$ satisfying $v0=x$ we put $$\gamma_\varphi(x):= \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}0)) - \log_2(v'(0)).$$
This formula does not depend on the choice of $v$ and defines a map $\gamma_\varphi:\mathbf{Q}_2\to \mathbf{Z}$;\\
\item For any $x\in \mathbf{Q}_2$ and $v\in V$ we have
$$\gamma_\varphi(vx)-\gamma_\varphi(x) = \log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}x)) - \log_2(v'(x));$$\\
\item We have the equality $\gamma_{\phi\varphi} = \gamma_\phi + \gamma_\varphi^\phi - \gamma_\varphi(\phi^{-1}(0))$.
\end{enumerate}
\end{lemma}
Note that here the symbol $0$ denotes the usual zero of the dyadic rationals $\mathbf{Q}_2.$
\begin{proof}
Consider $x\in \mathbf{Q}_2, \phi,\varphi\in\Stab_N(\Q_2)$ and $v,w\in V$.
Proof of (1).
Assume that $vx=wx$ and put $u:=v^{-1}w$.
Since $ux=x$ we can apply Proposition \ref{prop:slope} obtaining that
$$(\varphi^{-1} u \varphi)'(\varphi^{-1}x) = u'(x).$$
Observe that
\begin{align*}
\dfrac{(\varphi^{-1} vu \varphi)'(\varphi^{-1}x)}{(vu)'(x)} & = \dfrac{([\varphi^{-1} v \varphi] \circ [\varphi^{-1} u \varphi])'(\varphi^{-1}x)}{v'(ux)\cdot u'(x)} \\
& = \dfrac{[\varphi^{-1} v \varphi]'(\varphi^{-1}x) \cdot [\varphi^{-1} u \varphi]'(\varphi^{-1}x)}{v'(x)\cdot u'(x)} \\
& = \dfrac{[\varphi^{-1} v \varphi]'(\varphi^{-1}x)}{v'(x)}. \\
\end{align*}
Taking the logarithm we obtain (1).
Proof of (2).
We need to show that if $v0=w0$, then
$$\log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}0)) - \log_2(v'(0)) = \log_2((\varphi^{-1} w \varphi)'(\varphi^{-1}0)) - \log_2(w'(0)).$$
This is a direct consequence of (1) applied to $x=0.$
Proof of (3).
Consider $x\in \mathbf{Q}_2$ and $w\in V$ such that $w0=x$, and let $v\in V$.
Observe that
\begin{align*}
\gamma_\varphi(vx)-\gamma_\varphi(x) = & \gamma_\varphi(vw0) - \gamma_\varphi(w0)\\
= & \log_2((\varphi^{-1} vw\varphi)'(\varphi^{-1}0)) - \log_2((vw)'(0))\\
& - \log_2((\varphi^{-1} w\varphi)'(\varphi^{-1}0)) + \log_2(w'(0))\\
= & \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}w0)) + \log_2((\varphi^{-1} w\varphi)'(\varphi^{-1}0))- \log_2((vw)'(0))\\
& - \log_2((\varphi^{-1} w\varphi)'(\varphi^{-1}0)) + \log_2(w'(0))\\
= & \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}w0)) - \log_2(v'(w0)) - \log_2(w'(0)) + \log_2(w'(0))\\
= & \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}w0)) - \log_2(v'(w0))\\
= & \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}x)) - \log_2(v'(x)).\\
\end{align*}
Proof of (4).
Consider $y\in \mathbf{Q}_2$ and $v\in V$ such that $v0=y.$
Observe that
\begin{align*}
\gamma_{\phi\varphi}(y) = & \gamma_{\phi\varphi}(v0) = \log_2(((\phi\varphi)^{-1} v (\phi\varphi))'(\varphi^{-1}\phi^{-1}0)) - \log_2(v'(0))\\
= & \log_2( [ \varphi^{-1} ( \phi^{-1} v \phi ) \varphi ]'( \varphi^{-1} [ \phi^{-1} 0] ) ) - \log_2( ( \phi^{-1} v \phi)'(\phi^{-1} 0) ) \\
& + \log_2( ( \phi^{-1} v \phi)'(\phi^{-1} 0) ) - \log_2(v'(0))\\
= & \log_2( [ \varphi^{-1} ( \phi^{-1} v \phi ) \varphi ]'( \varphi^{-1} [ \phi^{-1} 0 ] ) ) - \log_2( ( \phi^{-1} v \phi)'(\phi^{-1} 0) ) + \gamma_\phi(v0)\\
= & \gamma_\varphi( (\phi^{-1} v \phi) (\phi^{-1} 0) ) - \gamma_\varphi(\phi^{-1} 0) + \gamma_\phi(v0) \text{ using (3) }\\
= & \gamma_\varphi( (\phi^{-1} v 0) ) - \gamma_\varphi(\phi^{-1} 0) + \gamma_\phi(v0)\\
= & [\gamma_\varphi^\phi + \gamma_\phi](y) - \gamma_\varphi(\phi^{-1}0).
\end{align*}
\end{proof}
Let $\nu:\mathbf{Q}\to \mathbf{Z}$ be the dyadic valuation, see Notation \ref{not:valuation}.
For any $\varphi\in\Stab_N(\mathbf{Q}_2)$ define the map $\mu_\varphi:\mathbf{Q}_2\to \mathbf{Z}$ as follows:
\begin{equation}\label{def:muvarphi}\mu_\varphi = \gamma_\varphi -\nu^\varphi + \nu.\end{equation}
Note that if $x\in \mathbf{Q}_2$ and $v\in V$ satisfies $v0=x$, then
$$\mu_\varphi(x) = \log_2((\varphi^{-1} v\varphi)'(\varphi^{-1}0)) - \log_2(v'(0)) -\nu(\varphi^{-1} v0) + \nu(v0)$$
and this formula is independent of the choice of $v$ by the previous lemma.
We can now define an action of $\Stab_N(\Q_2)\times \Aut(\Gamma)$ on $Z\Gamma\times N_{\overline K}(G)/Z\Gamma$ which we will prove to be the action describing the semidirect product decomposition of $\Aut(G).$
\begin{proposition}\label{prop:semidirect}
We have an action by automorphisms:
$$\sigma_0: \Stab_N(\Q_2)\times \Aut(\Gamma) \to \Aut(Z\Gamma\times \overline K/Z\Gamma)$$
described by the formula:
$$\sigma_0(\varphi,\beta)(\zeta, f):= (\beta(\zeta),\overline\beta(f)^\varphi \cdot \beta(\zeta)^{\mu_\varphi})$$
for all
$$(\zeta,f)\in Z\Gamma\times \overline K / Z\Gamma \text{ and } (\varphi,\beta)\in \Stab_N(\Q_2)\times \Aut(\Gamma).$$
The map $\sigma_0$ induces an injective group morphism
$$\sigma: \Stab_N(\Q_2)\times \Aut(\Gamma) \to \Aut(Z\Gamma\times N_{\overline K }(G)/Z\Gamma).$$
\end{proposition}
\begin{remark}
The notation is slightly misleading.
Given $\beta\in\Aut(\Gamma), \varphi\in\Stab_N(\Q_2), f\in N_{\overline K}(G), \zeta\in Z\Gamma, x\in \mathbf{Q}_2$ we have that $$\overline\beta(f)^\varphi(x):= \beta( f(\varphi^{-1}x))$$ where the superscript $\varphi$ means that we precompose $f$ with the function $\varphi^{-1}$ while
$$\beta(\zeta)^{\mu_\varphi}(x):= \beta( \zeta)^{\mu_\varphi(x)}$$
where the superscript $\mu_\varphi$ stands for $\beta(\zeta)$ raised pointwise to the power $\mu_\varphi.$
\end{remark}
\begin{proof}
Consider $(\varphi,\beta)\in \Stab_N(\Q_2)\times \Aut(\Gamma)$.
It is clear that the formula
$$\sigma_0(\varphi,\beta):(\zeta,f)\mapsto (\beta(\zeta), \overline\beta(f)^\varphi\cdot \beta(\zeta)^{\mu_\varphi})$$
defines a map from $Z\Gamma\times \overline K$ to itself since any automorphism of $\Gamma$ maps its centre to itself.
Moreover, this is clearly a group endomorphism of $Z\Gamma\times \overline K.$
Consider the quotient map $q:Z\Gamma\times \overline K\to Z\Gamma\times \overline K/Z\Gamma$.
If $(e,f)\in \ker(q)$ (that is, $f$ is a constant map valued in $Z\Gamma$), then $\sigma_0(\varphi,\beta)(e,f)=(e, \overline\beta(f)^\varphi)=(e,\overline\beta(f))\in \ker(q)$ and thus $\sigma_0(\varphi,\beta)$ factorises into an endomorphism written $\sigma(\varphi,\beta)$ of $Z\Gamma\times \overline K/Z\Gamma.$
Let us check that $\sigma$ is multiplicative.
Observe that if $\varphi,\varphi_0\in \Stab_N(\Q_2), \beta,\beta_0\in\Aut(\Gamma), \zeta\in Z\Gamma, f\in \overline K$, then:
\begin{align*}
\sigma(\varphi,\beta)\circ \sigma(\varphi_0,\beta_0) (\zeta,f) & = \sigma(\varphi,\beta) ( \beta_0(\zeta) , {\overline\beta_0(f)}^{\varphi_0} \cdot \beta_0(\zeta)^{\mu_{\varphi_0}})\\
& = (\beta\beta_0(\zeta) , \overline \beta( {\overline\beta_0(f)}^{\varphi_0} \beta_0(\zeta)^{\mu_{\varphi_0}})^\varphi \cdot \beta( \beta_0(\zeta) )^{\mu_\varphi}) \\
& = (\beta\beta_0(\zeta) , \overline{(\beta\beta_0)}(f)^{\varphi\varphi_0} \cdot (\beta\beta_0)(\zeta)^{\mu_{\varphi_0}^\varphi} \cdot (\beta\beta_0)(\zeta)^{\mu_\varphi})\\
& = (\beta\beta_0(\zeta) , \overline{(\beta\beta_0)}(f)^{\varphi\varphi_0} \cdot (\beta\beta_0)(\zeta)^{\mu_{\varphi_0}^\varphi+\mu_\varphi}).
\end{align*}
Now it is sufficient to check:
$$\zeta^{\mu_{\varphi_0}^\varphi+\mu_\varphi} = \zeta^{\mu_{\varphi\varphi_0}} \mod Z\Gamma \text{ for all } \zeta\in Z\Gamma.$$
This follows once we know that
$$\mu_{\varphi\varphi_0} = \mu_{\varphi_0}^\varphi + \mu_\varphi \mod \mathbf{Z}.$$
Lemma \ref{lem:formula} Formula (4) implies that $$\mu_{\varphi\varphi_0} = \mu_{\varphi_0}^\varphi + \mu_\varphi - \gamma_{\varphi_0}(\varphi^{-1}(0)),$$ giving us the desired equality modulo the constant maps.
Observe now that $\sigma(e,e)$ is the identity implying that $\sigma$ is a group morphism from $\Stab_N(\Q_2)\times \Aut(\Gamma)$ to $\Aut(Z\Gamma\times \overline K/Z\Gamma).$
We now show that $\sigma$ acts on the subgroup $Z\Gamma\times N_{\overline K}(G)/Z\Gamma$.
Consider $\zeta\in Z\Gamma$ and $\varphi\in \Stab_N(\Q_2).$
We want to show that $\zeta^{\mu_\varphi}$ normalises $G$.
Consider $av\in G$ and observe that
$$\ad(\zeta^{\mu_\varphi})(av) = \zeta^{\mu_\varphi - \mu_\varphi^v} av.$$
To conclude it is then sufficient to show that the support of $\mu_\varphi-\mu_\varphi^v$ is finite for all $v\in V.$
Observe that for $x\in \mathbf{Q}_2$ we have:
\begin{align*}
[\mu_\varphi-\mu_\varphi^v](vx) = & \log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}x)) - \log_2(v'(x))\\
& -\nu(\varphi^{-1} vx) + \nu(vx) + \nu(\varphi^{-1}x) - \nu(x) \text{ by Lemma \ref{lem:formula} } \\
= & [ \log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}x)) - \nu(\varphi^{-1}vx) + \nu(\varphi^{-1} x) ]\\
& - [ \log_2(v'(x)) - \nu(vx) + \nu(x)]\\
= & p_{\varphi^{-1}v\varphi}(\varphi^{-1}vx) - p_v(vx).
\end{align*}
Since $p_{\varphi^{-1}v\varphi}$ and $p_v$ are finitely supported so is the map $\mu_\varphi-\mu_\varphi^v$ implying that $\zeta^{\mu_\varphi}\in N_{\overline K}(G).$
It is now easy to deduce that
$$\sigma_0(\varphi,\beta)( Z\Gamma\times N_{\overline K}(G)) = Z\Gamma\times N_{\overline K}(G) \text{ for all } (\varphi,\beta)\in\Stab_N(\Q_2)\times \Aut(\Gamma).$$
This implies that $\sigma$ provides an action by automorphisms on $Z\Gamma\times N_{\overline K}(G)/Z\Gamma.$
It remains to prove that this latter action is faithful.
Assume that $\sigma(\varphi,\beta)$ is the identity automorphism.
Since $\Gamma$ is nontrivial by assumption there exists $g\in \Gamma, g\neq e$.
Write $g_x\in K$ for the map with support $\{x\}$ taking the value $g$.
Note that $\sigma(\varphi,\beta)(e,g_x) = (e, \beta(g)_{\varphi(x)})$ implying that
$$g_x = \beta(g)_{\varphi(x)} \mod Z\Gamma \text{ for all } x\in\mathbf{Q}_2, g\in \Gamma.$$
This implies that $\varphi(x)=x$ for all $x\in \mathbf{Q}_2$ and $\beta(g)=g$ for all $g\in \Gamma$ concluding the proof.
\end{proof}
\subsection{Decomposition of automorphisms of $G$}
We start by classifying all automorphisms of $G$ of the following form:
$$av\mapsto a \cdot c_v \cdot v , \ a\in K, v\in V$$
where $c:V\to K, v\mapsto c_v.$
We call such maps $c$ cocycles, after the following identity that they must satisfy in order to define an automorphism:
$$c_{vw} = c_v \cdot c_w^v ,\ v,w\in V.$$
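Indeed, if the formula $\Psi(av):=a\cdot c_v\cdot v$ defines an endomorphism of $G$, then, using that $vbv^{-1}=b^v$ for $b\in K$ and $v\in V$, we must have
$$c_{vw}\cdot vw = \Psi(vw) = \Psi(v)\cdot\Psi(w) = c_v\, v\, c_w\, w = c_v\, c_w^v\cdot vw \text{ for all } v,w\in V,$$
which is exactly the cocycle identity.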
Write $$\Coc(V\curvearrowright \prod_{\mathbf{Q}_2}\Gamma):=\{ c:V\to \prod_{\mathbf{Q}_2}\Gamma:\ c_{vw}=c_v \cdot c_w^v \text{ for all } v,w\in V\}.$$
We will be particularly interested in the subsets $\Coc(V\curvearrowright \prod_{\mathbf{Q}_2}Z\Gamma)$ and $\Coc(V\curvearrowright K)$.
\begin{proposition}\label{prop:cocycle Abelian}
Let $\Lambda$ be an abelian group and consider the set of cocycles
$$\Coc=\Coc(V\curvearrowright \prod_{\mathbf{Q}_2}\Lambda):=\{ c:V\to \prod_{\mathbf{Q}_2}\Lambda:\ c_{vw}=c_v \cdot c_w^v,\ \forall v,w\in V\}.$$
\begin{enumerate}
\item Equipped with the product
$$(c\cdot d)_v(x):=c_v(x) d_v(x) , \ c,d\in \Coc, v\in V, x\in\mathbf{Q}_2$$
the set $\Coc$ is an abelian group.
\item Given any $\zeta\in \Lambda$ the formula
$$s(\zeta)_v(x):= \zeta^{\log_2(v'(v^{-1}x))} , \ v\in V, x\in \mathbf{Q}_2$$
defines a cocycle that we call the slope cocycle associated to $\zeta$;
\item
For any $c\in \Coc$ there exists $\zeta\in\Lambda$ and $f\in \prod_{\mathbf{Q}_2}\Lambda$ satisfying
$$c_v= s(\zeta)_v \cdot f (f^v)^{-1} , \ v\in V.$$
The pair $(\zeta,f)$ is unique up to multiplying $f$ by a constant map.
\item The assignment $c\mapsto (\zeta,f)$ realises a group isomorphism from $\Coc$ onto $\Lambda\times \left(\prod_{\mathbf{Q}_2}\Lambda\right)/\Lambda.$
\end{enumerate}
\end{proposition}
\begin{proof}
Consider $c,d\in \Coc$ and $v,w\in V.$
Then
$$(cd)_{vw} = c_{vw}\cdot d_{vw} = c_v\cdot c_w^v \cdot d_v \cdot d_w^v = (c_v d_v) (c_w^v d_w^v) = (cd)_v\cdot (cd)_w^v.$$
Therefore, $cd$ is a cocycle.
The cocycle $c$ satisfying $c_v(x)=e$ for all $v\in V, x\in\mathbf{Q}_2$ is neutral for the multiplication.
Fix $d\in \Coc$ and define $b_v(x):= d_v(x)^{-1}, v\in V, x\in\mathbf{Q}_2$.
Observe that
\begin{align*}
b_{vw}(vwx) & = d_{vw}(vwx)^{-1} = \left(d_v(vwx) d_w(wx)\right)^{-1} = d_w(wx)^{-1} d_v(vwx)^{-1} \\
& = d_v(vwx)^{-1} d_w(wx)^{-1} = b_v(vwx) b_w(wx) = (b_v\cdot b_w^v)(vwx),
\end{align*}
for all $v,w\in V, x\in \mathbf{Q}_2$.
Therefore, $b$ is in $\Coc$ and it is easy to see that $b$ is an inverse of $d$ inside $\Coc$.
Therefore, $\Coc$ is a group which is clearly abelian.
The fact that $(vw)'(x) = v'(wx) \cdot w'(x)$ for $v,w\in V, x\in\mathbf{Q}_2$ implies that $s(\zeta)$ is a cocycle for $\zeta\in \Lambda.$
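Explicitly, for $v,w\in V$ and $x\in\mathbf{Q}_2$,
\begin{align*}
(s(\zeta)_v\cdot s(\zeta)_w^v)(x) & = \zeta^{\log_2(v'(v^{-1}x))}\cdot \zeta^{\log_2(w'(w^{-1}v^{-1}x))} = \zeta^{\log_2\left(v'(v^{-1}x)\cdot w'(w^{-1}v^{-1}x)\right)}\\
& = \zeta^{\log_2\left((vw)'((vw)^{-1}x)\right)} = s(\zeta)_{vw}(x),
\end{align*}
where the chain rule is applied at the point $y=w^{-1}v^{-1}x$ (so that $wy=v^{-1}x$) and the exponents add because $\Lambda$ is abelian.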
Fix $c\in \Coc$. We are going to decompose $c$ such that
$$c_v=s(\zeta)_v\cdot f(f^v)^{-1}, \ v\in V$$
for some $\zeta\in \Lambda, f\in\prod_{\mathbf{Q}_2}\Lambda.$
Fix $x\in \mathbf{Q}_2$ and write $V_x:=\{v\in V:\ vx=x\}$ with derived group $V_x'.$
Consider the map $$P_x: V_x\to \Lambda, \ v\mapsto c_v(x)$$
and note that $P_x$ is a group morphism valued in an abelian group and thus factorises into a group morphism
$$\overline P_x: V_x/V_x'\to \Lambda.$$
Since $x\in\mathbf{Q}_2$ we can find $v\in V_x$ such that $v'(x)=2.$
(This would not be the case for general $x$ in Cantor space, see \cite{Bleak-Lanoue10} or \cite{Brothier20-2} for details.)
This implies that the map $$\ell_x:V_x\to \mathbf{Z},\ v\mapsto \log_2(v'(x))$$ is surjective.
By Lemma \ref{lem:slopeone} we have that $V_x'=\ker(\ell_x)$ and thus $\ell_x$ factorises into an isomorphism
$$\overline \ell_x: V_x/V_x'\to \mathbf{Z}.$$
Write $1_\mathbf{Z}\in \mathbf{Z}$ for the positive generator of $\mathbf{Z}$ and consider $\zeta_x:= \overline P_x\circ \overline\ell_x^{-1}(1_\mathbf{Z})$ which is in $\Lambda$.
Observe that
$$c_v(x) = \zeta_x^{\log_2(v'(x))} \text{ for all } v\in V_x.$$
Let us show that $\zeta_x$ does not depend on $x\in\mathbf{Q}_2.$
Consider $x,y\in \mathbf{Q}_2$.
There exists $w\in V$ such that $wx=y$ since $V$ acts transitively on $\mathbf{Q}_2.$
The adjoint map $$\ad_w:V\to V, v\mapsto wvw^{-1}$$ restricts into an isomorphism from $V_x$ onto $V_{y}.$
Fix such a $w$ and take $v\in V_x.$
Observe that
\begin{align*}
c_{wvw^{-1}}(y) & = c_{wvw^{-1}}(wx) = c_{wv}(wx) c_{w^{-1}}(x) = c_w(wx) c_v(x) c_{w^{-1}}(x)\\
& = c_v(x) [ c_w(wx) c_{w^{-1}}(x) ] = c_v(x) [c_w \cdot c_{w^{-1}}^w](wx)\\
& = c_v(x) c_e(wx) =c_v(x).
\end{align*}
We obtain that $\zeta_y^{\log_2( \ad_w(v)'(y))} = \zeta_x^{\log_2(v'(x))}$ for all $v\in V_x.$
Now observe that $\ad_w(v)'(y) = v'(x)$ if $y=wx, v\in V_x$ by the chain rule applied to elements of Thompson's group $V$.
Choosing $v\in V_x$ with slope 2 at $x$ we obtain that $\zeta_x=\zeta_y$.
We have proven that there exists a unique $\zeta\in \Lambda$ such that
$$c_v(x) = \zeta^{\log_2(v'(x))} \text{ for all } x\in\mathbf{Q}_2, v\in V_x.$$
Put $$s(\zeta)_v(x):= \zeta^{\log_2(v'(v^{-1}x))}, v\in V, x\in \mathbf{Q}_2.$$
We write $c= s(\zeta) \cdot d$ where $d:=c \cdot s(\zeta)^{-1}\in \Coc.$
We are going to show that there exists $f:\mathbf{Q}_2\to\Lambda$ satisfying
$$d_v = f(f^v)^{-1} \text{ for all } v.$$
Consider $x\in\mathbf{Q}_2$ and $v\in V_x$.
We have that $c_v(x) = s(\zeta)_v(x)$ and thus $d_v(x)=e.$
Given $x,y\in \mathbf{Q}_2$ we put $V_{y,x}:=\{v\in V:\ vx=y\}$.
If $v,w\in V_{y,x}$, then $w^{-1}v\in V_x$ implying that $d_{w^{-1}v}(x)=e.$
We obtain that
$$d_v(y) = d_{w\cdot w^{-1}v} (y) = d_w(y) \cdot d_{w^{-1}v}^w(y) = d_w(y) \cdot d_{w^{-1}v}(x) = d_w(y).$$
Therefore, $u\in V_{y,x}\mapsto d_u(y)\in \Lambda$ is constant equal to a certain $g_{y,x}\in\Lambda.$
The cocycle formula and the fact that $V\curvearrowright \mathbf{Q}_2$ is transitive imply that $g_{z,x}=g_{z,y} \cdot g_{y,x}$ for all $x,y,z\in \mathbf{Q}_2$.
Since $d_v(x)=e$ for all $v\in V_x$, we obtain that $g_{x,x}=e$ and thus $g_{y,x}^{-1} = g_{x,y}$ for all $x,y\in\mathbf{Q}_2.$
Fix a point of $\mathbf{Q}_2$ say $0$ and consider the map $f:\mathbf{Q}_2\to \Lambda, f(x):=g_{x,0}.$
Observe that $f(y)f(x)^{-1} = g_{y,0} g_{x,0}^{-1} = g_{y,0} g_{0,x} = g_{y,x}$ for all $x,y\in \mathbf{Q}_2$ implying that
$$[f(f^v)^{-1}](x) = f(x) f(v^{-1}x)^{-1} = g_{x,v^{-1}x} = d_v(x) \text{ for all } v\in V, x\in \mathbf{Q}_2$$
since $v\in V_{x,v^{-1}x}.$
We have proven that $c_v = s(\zeta)_v\cdot f(f^v)^{-1}$ for all $v\in V.$
Assume that $c_v = s(\xi)_v\cdot h(h^v)^{-1}$ for some $\xi\in\Lambda$ and $h:\mathbf{Q}_2\to\Lambda.$
For any $v\in V,x\in\mathbf{Q}_2$ we have
$$\zeta^{\log_2(v'(x))} f(vx)f(x)^{-1} = \xi^{\log_2(v'(x))} h(vx)h(x)^{-1}.$$
If we consider $x=0$ and $v\in V$ that dilates $[0,1/4]$ into $[0,1/2]$ we obtain that $\zeta=\xi.$
Given $x=0$ and $\tau$ the translation $z\mapsto z+y$ with $y\in\mathbf{Q}_2$ we obtain that $f(y)=h(y)h(0)^{-1}.$
Therefore, $f = h \cdot a$ where $a:\mathbf{Q}_2\to \Lambda$ is the constant map with value $h(0)^{-1}.$
Let us show that the assignment
$c\mapsto (\zeta, f)$
is an isomorphism from $\Coc$ onto $\Lambda\times \left(\prod_{\mathbf{Q}_2}\Lambda\right)/\Lambda.$
Note that $\zeta\in \Lambda\mapsto s(\zeta)$ and $f\in\prod_{\mathbf{Q}_2}\Lambda\mapsto (v\mapsto f(f^v)^{-1})$ are group morphisms since $\Lambda$ is abelian.
The first one is injective and the second has the constant functions as kernel, since $V\curvearrowright\mathbf{Q}_2$ is transitive.
Therefore,
$$\Lambda\times \left(\prod_{\mathbf{Q}_2}\Lambda\right)/\Lambda\to \Coc, \ (\zeta,f)\mapsto \left(v\mapsto s(\zeta)_v\cdot f(f^v)^{-1}\right)$$
is an injective group morphism and we have proven that it is surjective.
This finishes the proof.
\end{proof}
Define the semidirect product induced by the action $\sigma$ of the previous section, that is:
$$\sigma(\varphi,\beta) (\zeta,f) := ( \beta(\zeta) , (\beta(\zeta))^{\mu_\varphi} \cdot (\overline\beta(f))^\varphi),$$
where $\varphi\in\Stab_N(\Q_2), \beta\in\Aut(\Gamma), \zeta\in Z\Gamma, f\in N_{\overline K}(G)/Z\Gamma$ and such that $$\mu_\varphi(v0) = \log_2((\varphi^{-1} v \varphi)'(\varphi^{-1}0)) - \log_2(v'(0)) -\nu(\varphi^{-1}v0) + \nu(v0) , \ v\in V.$$
Recall that the formula of $\mu_\varphi(v0)$ only depends on $\varphi$ and $v0$ (but not on $v$).
We put $$Q:=\left( Z\Gamma\times N_{\overline K}(G) /Z\Gamma \right) \rtimes \left(\Stab_N(\Q_2)\times \Aut(\Gamma) \right).$$
\begin{theorem}\label{theo:isomB}
The map $$\Xi: Q\to \Aut(G),\ (\zeta,f,\varphi,\beta)\mapsto E_\zeta \ad(f) A_{\varphi,\beta}$$
is a surjective group morphism with kernel the normal subgroup
$$M:= \{ (e, \overline g, e, \ad(g^{-1})): \ g\in \Gamma\}$$
where $\overline g:\mathbf{Q}_2\to \Gamma$ is the constant map equal to $g$ everywhere.
\end{theorem}
\begin{proof}
Here is the plan of the proof.
First, we show that $\Xi$ is a morphism. The delicate point resides in checking that the commutation relations between $Z\Gamma$ and $\Stab_N(\Q_2)$ are preserved by $\Xi$.
Second, we compute the kernel of $\Xi$. This is done by testing certain elements of $G$ and by using the transitivity of $V\curvearrowright \mathbf{Q}_2$.
Third, we prove that $\Xi$ is surjective. This is the most difficult part of the proof. We start by considering an automorphism $\theta$ that we decompose as:
$$\theta(av)=\kappa(a)\cdot c_v\cdot \ad_\varphi(v), \ a\in K, v\in V$$ for some $\kappa\in\Aut(K), c:V\to K, \varphi\in N_{H(\fC)}(V).$
We multiply $\theta$ by elements in the range of $\Xi$ until we obtain the trivial automorphism.
Using this method we successively ``remove'' $\varphi$, $\kappa$ and $c_v$ using essentially that $\theta$ is spatial and the classification of cocycles valued in an abelian group.
Let us prove that $\Xi$ is a group morphism.
We have already checked that the maps $(\varphi,\beta)\mapsto A_{\varphi,\beta}$, $f\mapsto \ad(f)$ and $\zeta\mapsto E_\zeta$ are injective group morphisms.
We are reduced to verifying that $E_\zeta$ commutes with $\ad(f)$ and that
$$A_{\varphi,\beta}\circ E_\zeta \circ \ad(f) \circ A_{\varphi,\beta}^{-1} = E_{\beta(\zeta)}\circ \ad(\overline\beta(f)^\varphi) \circ \ad(\beta(\zeta)^{\mu_\varphi})$$ for all $\zeta\in Z\Gamma, f\in N_{\overline K}(G), \varphi\in\Stab_N(\Q_2), \beta\in\Aut(\Gamma).$
The first statement is proved by observing that, given $\zeta\in Z\Gamma$, the element $\zeta^{p_v}$ commutes with any $f\in \overline K$ for all $v\in V$, implying that $E_\zeta$ and $\ad(f)$ commute.
Choose such a quadruple $(\zeta,f,\varphi,\beta)$ and an element $av\in G$ with $a\in K, v\in V.$
Observe that
\begin{align*}
A_{\varphi,\beta}\circ E_\zeta \circ \ad(f) \circ A_{\varphi,\beta}^{-1} (av) & = A_{\varphi,\beta}\circ E_\zeta \circ \ad(f) (\overline{\beta^{-1}}(a)\circ \varphi \cdot (\varphi^{-1} v \varphi))\\
& = A_{\varphi,\beta}\circ E_\zeta (f\cdot \overline{\beta^{-1}}(a)\circ \varphi \cdot (\varphi^{-1} v \varphi)\cdot f^{-1})\\
& = A_{\varphi,\beta} (f\cdot \overline{\beta^{-1}}(a)\circ \varphi \cdot \zeta^{p_{\varphi^{-1} v \varphi}}\cdot (\varphi^{-1} v \varphi) \cdot f^{-1})\\
& = \overline{\beta}(f)^\varphi \cdot a \cdot \beta(\zeta)^{p^\varphi_{\varphi^{-1} v \varphi}}\cdot v \cdot ( \overline\beta( f)^\varphi)^{-1}\\
& = \ad(\overline{\beta}(f)^\varphi) ( a \cdot \beta(\zeta)^{p^\varphi_{\varphi^{-1} v \varphi}}\cdot v).
\end{align*}
On the other hand:
\begin{align*}
\Xi(\sigma(\varphi,\beta)(\zeta,f))(av) & = \Xi( \beta(\zeta) , \beta(\zeta)^{\mu_\varphi} \cdot \overline\beta(f)^\varphi , e , e) (av)\\
& = \ad(\overline\beta(f)^\varphi) ( a \cdot \beta(\zeta)^{p_v + \mu_\varphi - \mu_\varphi^v} \cdot v).
\end{align*}
By Proposition \ref{prop:semidirect} we have the equality
$$p^\varphi_{\varphi^{-1} v \varphi} = p_v + \mu_\varphi - \mu_\varphi^v$$
implying that
$$A_{\varphi,\beta}\circ E_\zeta \circ \ad(f) \circ A_{\varphi,\beta}^{-1} =\Xi(\sigma(\varphi,\beta)(\zeta,f)).$$
Therefore, $\Xi$ is a group morphism.
Let us show that the kernel of $\Xi$ is the normal subgroup $M$ described in the theorem.
Consider $(\zeta,f,\varphi,\beta)\in Q$ and assume that $\theta:=\Xi(\zeta,f,\varphi,\beta)$ is the trivial automorphism.
Consider $h\in\Gamma, h\neq e$ (such an $h$ exists since $\Gamma$ is nontrivial) and let $h_x\in K$ be the map supported in $\{x\}$ taking the value $h.$
Observe that
\begin{align*}
\theta(h_x) & = E_\zeta\circ\ad(f)\circ A_{\varphi,\beta}(h_x) = E_\zeta \circ \ad(f) (\beta(h)_{\varphi(x)})\\
& = E_\zeta [f(\varphi(x)) \beta(h) f(\varphi(x))^{-1}]_{\varphi(x)}\\
& = [f(\varphi(x)) \beta(h) f(\varphi(x))^{-1}]_{\varphi(x)}.
\end{align*}
In particular, $\theta(h_x)$ has support $\{\varphi(x)\}$ but since $\theta(h_x)=h_x$ this latter support is equal to the support of $h_x$ which is $\{x\}$ implying that $\varphi=\id.$
We obtain that
\begin{equation}\label{h} h = f(x) \beta(h) f(x)^{-1} \text{ for all } h\in \Gamma, x\in\mathbf{Q}_2.\end{equation}
Consider $v\in V$ and observe that
\begin{equation}\label{v}
v = \theta(v) = \zeta^{p_v}\cdot f \cdot v \cdot f^{-1} = \zeta^{p_v}\cdot f (f^v)^{-1} \cdot v.\end{equation}
Therefore,
$$f(x) \zeta^{p_v(x)} = f(v^{-1}x) \text{ for all } v\in V, x\in \mathbf{Q}_2.$$
Fix $x\in\mathbf{Q}_2$ and choose $v\in V$ such that $v(x)=x$ and $v'(x)=2.$
We obtain that $$f(x) \zeta = f(x)$$ and thus $\zeta=e$.
Equation \eqref{v} becomes
$$f=f^v \text{ for all } v\in V.$$
Since $V\curvearrowright \mathbf{Q}_2$ is transitive we deduce that $f:\mathbf{Q}_2\to \Gamma$ is constant.
There exists $g\in \Gamma$ such that $f(x)=g$ for all $x\in \mathbf{Q}_2$ and thus $f=\overline g.$
Using \eqref{h} we obtain that $g \beta(h) g^{-1} = h$ for all $h\in\Gamma$ implying that $\beta=\ad(g^{-1}).$
Conversely, it is easy to see that $\Xi(e,\overline g, e,e) = \Xi(e,e,e,\ad(g))$ implying that $\ker(\Xi)=M.$
It remains to show that $\Xi$ is surjective.
Fix an automorphism $\theta\in \Aut(G).$
By Theorem \ref{theo:KinG}, we have that $\theta(K) = K.$
Moreover, Proposition \ref{prop:supportone} implies that there exists $\kappa\in \Aut(K), \varphi\in\Stab_N(\Q_2)$ and $c:V\to K$ such that
$$\theta(av) = \kappa(a)\cdot c_v \cdot \ad_\varphi(v) \text{ for all } a\in K, v\in V.$$
Up to a composition with $A_{\varphi,e}=\Xi(e,e,\varphi,e)$ we can assume that $\varphi$ is the identity transformation and thus $$\theta(av) = \kappa(a)\cdot c_v \cdot v, \ a\in K, v\in V.$$
By Proposition \ref{prop:supportone} we have that $\supp(\kappa(a)) = \supp(a)$ for all $a\in K$ since $\varphi$ is trivial.
Moreover, $\kappa$ can be decomposed as a product of automorphisms:
$$\kappa = \prod_{x\in\mathbf{Q}_2} \kappa_x\in\prod_{\mathbf{Q}_2}\Aut(\Gamma)$$
such that $$\kappa(a)(x) = \kappa_x(a(x)) \text{ for all } a \in K, x\in\mathbf{Q}_2.$$
Now given $v\in V$ we have that
$$\kappa(vav^{-1})(vx) = [c_v \cdot v \cdot \kappa(a)\cdot v^{-1}\cdot c_v^{-1}](vx) = \ad(c_v(vx))[\kappa(a)(x)]= \ad(c_v(vx))[\kappa_x(a(x))].$$
This is equal to $\kappa_{vx}(a(x)).$
Therefore,
\begin{equation}\label{eq:fund}
\kappa_{vx} = \ad(c_v(vx))\circ \kappa_x \text{ for all } x\in\mathbf{Q}_2, v\in V.
\end{equation}
Fix $x=0$ in $\mathbf{Q}_2$ and write $\beta:=\kappa_{0}$.
Up to a composition with $A_{e,\beta}=\Xi(e,e,e,\beta)$ we can now assume that $\kappa_0=\id$.
This implies that $\kappa_{v0}=\ad(c_v(v0))$ for all $v\in V$.
Therefore, since $V\curvearrowright\mathbf{Q}_2$ is transitive we deduce that $\kappa_x$ is an inner automorphism of $\Gamma$ for each $x\in\mathbf{Q}_2.$
We now take care of the cocycle part $c$.
For each $x\in \mathbf{Q}_2$ choose $v\in V$ such that $v0=x$ and put $h(x):=c_v(v0).$
Consider the map $h:\mathbf{Q}_2\to \Gamma, x\mapsto h(x)$ and observe that $\kappa(a) = h a h^{-1}$ for all $a\in \oplus_{\mathbf{Q}_2}\Gamma.$
Equation \eqref{eq:fund} implies that:
$$h(vx) a(vx) h(vx)^{-1} = c_v(vx) h(x) a(x) h(x)^{-1} c_v(vx)^{-1} \text{ for all } a\in K, x\in \mathbf{Q}_2, v\in V.$$
Hence, $\ad(h) = \ad(c_v) \circ \ad(h^v)$ implying that
$$c_v = h(h^v)^{-1} \mod \prod_{\mathbf{Q}_2}Z\Gamma \text{ for all } v\in V.$$
Therefore, $c_v = d_v \cdot h(h^v)^{-1} $ where $d_v\in \prod_{\mathbf{Q}_2} Z\Gamma$ for all $v\in V.$
Moreover, $d:v\mapsto d_v$ must be a cocycle that is valued in $\prod_{\mathbf{Q}_2}Z\Gamma$.
Since $Z\Gamma$ is abelian, Proposition \ref{prop:cocycle Abelian} implies that there exists a pair $(\zeta,f_0)$ with $\zeta\in Z\Gamma, f_0\in \prod_{\mathbf{Q}_2} Z\Gamma$ satisfying
$$d_v = s(\zeta)_v\cdot f_0(f_0^v)^{-1} \text{ for all } v\in V,$$
where $s(\zeta)_v(x)=\zeta^{\log_2(v'(v^{-1}x))},v\in V,x\in\mathbf{Q}_2$ is the slope cocycle defined in Proposition \ref{prop:cocycle Abelian}.
Therefore, $$c_v = \zeta^{p_v} \cdot f(f^v)^{-1}, \ v\in V$$
where $p_v=\log_2(v')^v - \nu + \nu^v$ is the map defined in Proposition \ref{prop:zetacocycle} and where $f:= h\cdot f_0\cdot \zeta^\nu.$
Up to a composition with $$E_\zeta:av\mapsto a \cdot \zeta^{p_v}\cdot v$$ we can now assume that $\theta$ is of the form
$$\theta(av) = \ad(f)(av) = faf^{-1} f(f^v)^{-1} v, a\in \oplus_{\mathbf{Q}_2}\Gamma, v\in V$$
where $f\in \overline K$ and necessarily $f\in N_{\overline K}(G).$
We have proven that $\theta$ is a product of automorphisms of the form $\ad(f), E_\zeta$ and $A_{\varphi,\beta}$ with $f\in N_{\overline K}(G), \zeta\in Z\Gamma, \varphi\in N_{H(\fC)}(V), \beta\in\Aut(\Gamma)$, implying that the range of $\Xi$ generates $\Aut(G)$ and thus that $\Xi$ is surjective.
\end{proof}
\section{Introduction}
Understanding the connection between accretion and outflow during pre-main sequence (PMS) evolution is a major endeavor in stellar astrophysics and is required to understand the formation of planetary systems. The high energy spectrum (UV to X-ray) produced through accretion plays a key role in the photoevaporation of the gas in young planetary disks (see Clarke, 2011 and Alexander et al. 2014 for recent reviews), setting the transition between the accreting Class~II sources and the weak line T Tauri (WTTS) phase. There are plenty of studies addressing this process, both from the theoretical and the observational point of view (e.g. Hollenbach et al. 1994, Font et al. 2004, Alexander et al. 2004b, Gorti \& Hollenbach 2009, Ercolano et al. 2009). There is, however, a need for observations of close binary systems, where tidal forces are very significant in the transport of angular momentum and hence in regulating the accretion flow. An additional source of interest relates to the formation and evolution of exoplanetary systems hosting the so-called hot Jupiters; close binaries with low mass ratio\footnote{The mass ratio, $q$, is defined as $q = M_2/M_1$, where $M_1$ is the mass of the primary and $M_2$ the mass of the secondary.}, such as UZ~Tau, provide important clues on giant planet formation and migration.
Most of the PMS-spectroscopic binaries (PMS-SBs) are low mass systems with primary spectral type G0 or later. In general, the known PMS-SBs are rather evolved objects containing very active young stars (WTTSs) surrounded by thin disks, often debris disks (see Melo et al. 2001 and G\'omez de Castro \& Marcos-Arenal 2015 for a recent compilation). There are only a handful of PMS-SBs that belong to Class II, namely V4046~Sgr, AK~Sco, DQ~Tau and UZ~Tau~E with orbital periods shorter than 20 days, GW Ori with period 241.9 days and CS~Cha with period longer than 2482~days (Guenther et al. 2007). AK Sco stands out in this compilation because the system is composed of two equal mass F5 stars in a highly eccentric orbit ($e=0.47$) that get as close as 11 stellar radii at periastron passage (Alencar et al. 2003). In PMS close binary systems, accretion disks can either take up or release angular momentum, and the details of the evolution depend on the mass ratio between the two stars and on the orbit eccentricity (Artymowicz \& Lubow, 1994; Bate \& Bonnell, 1997; Hanawa et al. 2010; de Val-Borro et al. 2011; Shi et al. 2012). In particular, AK Sco's highly eccentric orbit favors the formation of spiral waves within the inner disk; the variable gravitational potential produced by the binary acts as a gravitational piston that drags material efficiently from the inner border of the disk at apastron to release it onto the stars preferentially at periastron (see Figure~1, from G\'omez de Castro et al. 2013, hereafter Paper I). Observational confirmation of this prediction has been provided recently by a Hubble monitoring campaign with the Cosmic Origins Spectrograph (COS) (G\'omez de Castro et al. 2016, hereafter Paper II); coincident with periapsis, a marked 10\% decrease in the H$_2$ emission from the disk
was detected, caused by an infalling filament of gas that absorbs the stellar Ly$\alpha$ photons and shades some of the H$_2$ molecules on the disk surface. The light curves also showed an associated enhancement of the main accretion tracers, namely the C IV, Si IV, N V, C III and Si III resonance UV multiplets (Paper II).
AK Sco was also monitored for two more cycles with the Space Telescope Imaging Spectrograph (STIS). All together, the data acquired during these three consecutive cycles provide a unique dataset to spectroscopically characterize accretion and evaluate the inter-cycle variability of the system. The aim of this work is to present the results of this analysis.
The article is organized as follows. A comprehensive summary of the Hubble observing campaign is provided in Section 2. AK Sco was monitored during the first two cycles with the Space Telescope Imaging Spectrograph (STIS), providing low dispersion, high sensitivity spectra in the full 1140~\AA\ - 3184~\AA\ spectral range; the results from this STIS-based monitoring are described in Section 3. During the last cycle, AK Sco was monitored with COS in the 1159~\AA\ - 1762~\AA\ range with much better spectral resolution (19,000) to study the kinematics of the line emitting plasma and properly resolve the H$_2$ emission features. The main results from this cycle are gathered in Paper II; however, the detailed description of the kinematics of the radiating plasma is included in Section 4. In Section 5, the data from all three cycles are analyzed together and compared with the predictions from numerical simulations of the dynamical evolution of the binary. STIS data also show evidence of an extended diffuse envelope around the system radiating in Ly$\alpha$, as will be shown in Section 6. All observations were obtained in photon counting mode, which enabled studying the light curves in detail, especially during the long COS observations. This analysis is included in Section 7 and shows no evidence for short time scale fluctuations ($\tau \leq 800$~s) either in the C IV light curve or in the overall far UV spectrum. To conclude, a short summary with the main results is given in Section 8.
\section{Observing program and spectral overview}
The observations were obtained in July-August 2014 with the Space Telescope Imaging Spectrograph (STIS) and the Cosmic Origins Spectrograph (COS); the log of observations is in Table~2. As shown in Fig.~2, AK~Sco was tracked during periastron passage in three consecutive periods. During the first two periods, it was observed in low dispersion with STIS and the gratings G140L and G230L, to get full coverage of the 1140~\AA\ -- 3184~\AA\ range with high sensitivity and spatial
information. In the last period only the 1159~\AA\ -- 1762.490~\AA\ range was observed, but with higher spectral resolution (19,000), using the gratings G130M and G160M in COS. Observations were carried out in photon counting mode to preserve as much temporal information as possible and enable the time series analysis of the results.
The average spectrum of AK~Sco during the observing run is displayed in Fig.~3. Emission lines formed at a broad range of plasma temperatures are detected, as is usually observed in the ultraviolet spectrum of the T Tauri stars (TTSs) (G\'omez de Castro, 2009): emission from H$_2$ molecular bands, from the resonance transitions of highly ionized species (N~V, C~IV, Si~IV), intermediate ionization species (C~III, Si~III, O~III) and singly ionized or neutral plasma (C~I, C~II, O~I, O~II, Mg~II, Si~II, Fe~II). Contributions from the stellar atmosphere, the outflow and the accretion flow are expected in all of them. The unique feature of Fig.~3 is the unprecedented SNR of the UV continuum obtained after co-adding all exposures. In Fig.~4, the reddening corrected average spectrum ($A_V = 0.5, R=4.3$, see Table 1) is compared with that of the nearby F-type main sequence stars, HD~139664 and HD~22879 (see Appendix A for details on the UV spectrum of F stars). HD~139664 (F5) is known to have a debris disk, but one thin enough to make only a tiny contribution to the infrared IRAS flux (see Fig.~8 in Schneider et al. 2014). Some of the main photospheric features of an F5 star are readily recognized in the AK~Sco UV spectrum, as well as the significant excess below 2000~\AA . The overall UV spectral energy distribution is closer to that of a later spectral type (F9) star than to an F5.
\section{ Results from the STIS monitoring}
\subsection{UV Continuum variability}
The UV continuum of F stars in the $1640-3100$~\AA\ range is very sensitive to the spectral type, as shown in detail in Appendix A. This trend is quantified in Fig.~5, where the integrated fluxes in the bands F1 ($1640-2400$~\AA ), F2 ($2400-2775$~\AA ) and F3 ($2830-3100$~\AA ) have been computed for the F stars in Table~A1 and used to build the ratios $R1 = F1/F2$ and $R2 = F2/F3$ represented in the figure. Spectral types of main sequence stars are indicated; they define a clear trend parallel to the extinction arrow, and extinction is negligible for most of them (see Table~A1). Each band traces a main component: F1 is the most significantly affected by extinction (it also includes the UV bump), F2 is dominated by the Fe~II multiplets and F3 by the Balmer continuum and the stellar photosphere. The 55~\AA\ separation between the F2 and F3 bands has been set to avoid the very strong Mg~II feature. The extinction arrow has been drawn for the average ISM extinction law (Fitzpatrick \& Massa, 2007), as well as for a modified extinction law with $R=4.3$ that, according to Manset et al. (2005), is representative of the circumstellar environment of AK~Sco. Notice that the extinction arrow runs roughly parallel to the spectral types, i.e., there is degeneracy between spectral type and extinction.
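The band-ratio construction above can be sketched in a few lines. This is a hypothetical illustration, not the authors' pipeline: the band edges are taken from the text, while the half-open band convention, the rectangle-rule integration, and all function names are our assumptions.

```python
import numpy as np

# Band edges in Angstrom, as quoted in the text (half-open [lo, hi)
# intervals are our convention for this sketch).
BANDS = {"F1": (1640.0, 2400.0), "F2": (2400.0, 2775.0), "F3": (2830.0, 3100.0)}

def band_flux(wave, flux, lo, hi):
    """Integrate the flux over [lo, hi) with a simple rectangle rule."""
    mask = (wave >= lo) & (wave < hi)
    dlam = np.median(np.diff(wave))  # assumes a near-uniform wavelength grid
    return float(np.sum(flux[mask]) * dlam)

def band_ratios(wave, flux):
    """Return (R1, R2) = (F1/F2, F2/F3) for a spectrum."""
    f = {k: band_flux(wave, flux, lo, hi) for k, (lo, hi) in BANDS.items()}
    return f["F1"] / f["F2"], f["F2"] / f["F3"]

# Example: for a flat spectrum the ratios reduce to band-width ratios.
wave = np.arange(1140.0, 3185.0, 1.0)
flux = np.ones_like(wave)
r1, r2 = band_ratios(wave, flux)
```

For a flat spectrum the ratios are simply the ratios of the band widths; a real spectrum would of course be slit-extracted and reddening-corrected first.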
We have computed $R1$ and $R2$ for all STIS observations of AK~Sco (cycles 1 and 2) and over-plotted them on the figure. AK Sco is located closer to late F stars than to early F stars in the diagram, though it is classified as an F5 star from its optical spectrum. AK Sco's extinction is small, A$_V = 0.5$ (see Table 1) or E(B-V)$\simeq 0.12$, and thus extinction alone cannot account for the observed SED. Hence, according to its UV spectrum, AK Sco should be classified as an F8-F9 star instead of the F5 type assigned on the basis of its optical spectrum. Moreover, the excess contribution from the accretion flow pushes AK~Sco away from the main sequence band. The variations found during our observing campaign go basically perpendicular to the extinction arrow and are most likely caused by variations in the accretion rate. The integrated UV flux in the $1640 - 3100$~\AA\ spectral range increased by 17\% from cycle~1 to cycle~2.
During cycle~1, the excursion of AK~Sco in the diagram has a non-negligible component parallel to the extinction arrow of $E(B-V) \simeq 0.01$, corresponding to a variation in $A_V$ of $-0.043$ ($A_V = R \times E(B-V)$); this variation is towards decreasing extinction, i.e., an increasing blueing of the spectrum in the middle of the cycle's observations.
\subsection{Spectral lines variability}
In Figure 6, the spectral variability during cycles~1 and 2 is represented. The variability has been computed as the standard deviation of the mean of the spectra for each cycle\footnote{Let us denote as $S_i$ each spectrum and N the number of spectra obtained in the cycle. The mean spectrum is computed as
$<S> = (\sum _{i=1}^{i=N} S_i)/N$ and the standard deviation of the mean as $ \sigma = ((\sum _{i=1}^{i=N} (S_i^2-<S>^2))/(N(N-1)))^{1/2}$.}. As displayed, the variability is concentrated in the cores of the emission lines, but there are differences between the two cycles, the main ones being:
\begin{itemize}
\item The variability of the most prominent UV lines is a factor of 2 higher during cycle 1 than during cycle 2.
\item During cycle 1, the UV continuum (2630 \AA\ - 3150 \AA\ ) varies by less than 0.8\%; during cycle 2 this variation rises to 2.2\%.
\item Not all spectral lines follow the same behaviour. For instance, N~V variability during cycle 1 is significantly higher than during cycle 2.
The same occurs for O III] and Mg~II.
\end{itemize}
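The estimator in the footnote can be sketched as follows. This is an illustrative implementation on synthetic data (names and array shapes are our assumptions); it exploits the identity $\sum_i (S_i^2 - \langle S\rangle^2) = \sum_i (S_i - \langle S\rangle)^2$, so the footnote's formula agrees with the textbook standard error of the mean.

```python
import numpy as np

def mean_and_sigma(spectra):
    """spectra: (N, n_pix) array of N spectra.
    Returns (<S>, sigma) per pixel, with sigma the standard
    deviation of the mean as in the footnote's formula."""
    s = np.asarray(spectra, dtype=float)
    n = s.shape[0]
    mean = s.mean(axis=0)
    sigma = np.sqrt(np.sum(s**2 - mean**2, axis=0) / (n * (n - 1)))
    return mean, sigma

# Cross-check on synthetic spectra against the standard error of the mean.
rng = np.random.default_rng(0)
spectra = rng.normal(loc=1.0, scale=0.1, size=(8, 64))
mean, sigma = mean_and_sigma(spectra)
sem = spectra.std(axis=0, ddof=1) / np.sqrt(spectra.shape[0])
```

The two estimates `sigma` and `sem` coincide up to floating-point error.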
The light curves of the main tracers are plotted in Figure~7. Cycle~1 flux variations are reminiscent of those observed in the high dispersion spectra of cycle 3 (see Paper II); at phase $\sim 1$, at periapsis, there is a sudden increase in the flux that is observed in N V, Si IV and C IV, but also in singly ionized species such as C II or O I. During cycle 2, variations are milder and within the error bars for most tracers. Clearly, the observed trends change from one cycle to the next. It is noteworthy that the semiforbidden transitions do not follow the same trends as the permitted lines and their variability is not correlated.
\subsection{Flux-flux relations and accretion shock diagnostics}
Radiation from an accretion shock has a markedly different spectral energy distribution than radiation from a cool star atmosphere.
In cool stars, magnetic energy is transported from the stellar surface onto the atmosphere where it is dissipated into three
main regions: the warm (T$\sim 10^4$ K) chromosphere, the hot (T$\geq 10^6$ K) corona and the transition region (TR) between them.
The TR is a very thin layer (some $10^4$ km thickness in the Sun) where the temperature increases by two orders of magnitude and it can only
be observed at UV wavelengths. There are well characterized correlations between the flux radiated in the various spectral tracers of these regions; chromospheric (neutral and singly ionized species), TR (C III, Si III, SI IV, C IV, N V, He II...) and coronal spectral tracers (highly ionized species, O VI, X-ray flux...). These so-called flux-flux relations are used to model energy transport in cool stars and call for a universal mechanism operating in them (Ayres et al. 1995, Mihalas 1978).
Flux-flux relations have also been studied in TTSs (Hu\'elamo et al. 1998, Johns-Krull et al. 2000, Yang et al. 2012, G\'omez de Castro \& Marcos-Arenal 2012, hereafter GdCMA). When compared with their main sequence analogues, it becomes evident that there is excess radiation from low
ionization species (C II, Mg II, O I) with respect to the highly ionized ones (C IV, Si IV, He II). This indicates that radiation is released by a
different mechanism than in cool stars. The most successful models propose that the excess gravitational energy of the accreting
matter is released into heating at accretion shocks where the temperature reaches 0.3-1 MK, i.e., coronal-like temperatures, driving
a photoionization cascade that results in the observed scalings (Calvet \& Gullbring 1998, G\'omez de Castro \& Lamzin 1999).
AK Sco monitoring provides a unique chance to evaluate how these scalings behave during an accretion event and hence to test the theoretical models. Of the many possible flux-flux relations we focus on four: C IV versus O I, C IV versus C II, C IV versus He II and Si III] versus C III]. To study AK Sco's behaviour in the context of the TTSs' properties, all observations of TTSs obtained with HST and the STIS G140L and G230L gratings have been downloaded from the Hubble archive and the fluxes
of the O I, C II, C IV, He II, Si III]$_{1892}$ and C III]$_{1908}$ lines have been determined.
The C IV versus O I flux-flux relation is optimal to evaluate the relative abundance between highly ionized and neutral species in the TTSs' environment; it shows the largest deviation from the trend observed in main sequence stars (see GdCMA). Figure 8 shows that AK Sco falls on the TTSs trend and that, from cycle 1 to cycle 2, it moves along the trend, with the largest fluxes observed in cycle 2. Both cycles are neatly separated in the plot. Moreover, the intra-cycle variations are significantly smaller than the inter-cycle ones.
Note that the AK Sco fluxes are the sum of the contributions of the two components of the system. Evaluating the contribution from
each component is not trivial since, according to numerical simulations, the accretion flow may be preferentially channeled onto either of them (Paper I);
splitting the flux in equal parts between the two components is equivalent to shifting the location of AK Sco in the diagram by 0.3 dex.
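The 0.3 dex figure is simply the logarithm of the factor of two involved in halving the total flux:
\[
\Delta\log_{10} F \;=\; \log_{10} F_{\rm tot} - \log_{10}\!\left(\tfrac{1}{2}\,F_{\rm tot}\right) \;=\; \log_{10} 2 \;\simeq\; 0.30~{\rm dex},
\]
and, since it applies equally to both axes, the point is displaced parallel to the line of unit slope in the flux-flux diagram.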
Cycles 1 and 2 are markedly different. During cycle 2, there are no significant variations in the line fluxes; however, this is not the case in cycle 1: the O I flux
increases by 20\% from the beginning to the end of the cycle while the C IV flux varies by $\pm 7$\% in the same time lapse. The overall flux increase could be
accounted for by a decrease in the extinction, but the marked decrease of the C IV flux at the beginning of the cycle cannot be explained by an extinction effect.
The trends observed in the C IV-O I flux-flux diagram are reproduced in the C IV-C II and the C IV-He II flux-flux diagrams (see Figures 9 and 10, respectively). The AK Sco observations lie on the regression line of the TTSs in the C IV-C II diagram and slightly above it in the C IV-He II diagram, similarly to what is observed in the C IV-O I diagram.
The ratio Si III]/C III] is density sensitive and is often used to measure the electron density of warm plasma in the atmospheres of late-type stars and TTSs (Brown et al. 1984, G\'omez de Castro \& Verdugo 2001, 2003). In AK Sco, Si III]/C III]$\simeq 2$, similar to what is observed in other TTSs (see Figure 11),
and indicates that the electron density of the emitting plasma is in the range $10^9$~cm$^{-3}$ to $10^{11}$~cm$^{-3}$ depending on the
precise plasma temperature (see Figure 4, in G\'omez de Castro \& Verdugo, 2001).
The Si III]/C III] ratio does not vary significantly from cycle to cycle ($2.2 \pm 0.6$ in cycle 1 and $2.0 \pm 0.3$ in cycle 2) even though the line fluxes do vary.
The first exposure in each cycle has very good SNR and can be used as a pivot to measure variations during the cycle. Again, a markedly different behavior is observed between cycle 1 and cycle 2. During cycle 2, no significant variations are observed (the 1-$\sigma$ and 2-$\sigma$ confidence ellipses are displayed in the plot). However, in cycle 1 there is a significant variation between the first exposure (phase 0.9939) and that at phase 0.9993, when the line ratio increases by 37\% and thus so does the electron density of the plasma.
\section{Results from COS monitoring}
The mean profiles of the main transitions are displayed in Figure~12; the profiles are very broad and centered at rest
(the radial velocity of the system is 1.3 km~s$^{-1}$). The two spectroscopic components are not resolved in spite of their high
relative velocity at periastron passage, which ranges from 170 to 190 km~s$^{-1}$ during the monitored phase interval (Alencar et al. 2003).
The highest asymmetry is observed in the He II transition; as shown in Figure~13, after periastron passage, the line becomes significantly
redshifted (95 km~s$^{-1}$). This redshift cannot be caused by mass infall; though He II is a very sensitive tracer of accretion,
He II observations of TTSs show that it is only slightly redshifted, if at all (G\'omez de Castro 2013). However, 95 km~s$^{-1}$ is the
expected radial velocity of one of the two components of the system at periapsis; thus, the observed shift
of the He II flux enhancement indicates that at periastron accretion is preferentially driven into one of the two components.
A similar trend is observed in the rest of the accretion tracers though blurred by the large broadening of the profiles.
To visualize it better, we have computed the {\it excess} profiles obtained by subtracting the first observation (phase=0.9920 for G130M and phase=0.9930 for G160M) from the rest. In Figure 14, the excess profile is represented as a hyper-surface in phase and wavelength for the main lines.
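As an illustration, this excess-profile construction amounts to subtracting the reference spectrum row-wise; the array shapes and values below are synthetic placeholders, not the COS data.

```python
import numpy as np

# Synthetic stand-in for the monitoring: rows are observations ordered by
# orbital phase, columns are wavelength bins (values are arbitrary).
rng = np.random.default_rng(0)
spectra = 1.0 + rng.random((6, 512))   # 6 phases x 512 wavelength bins

# The "excess" profile of each observation is its difference with respect
# to the first (reference) observation of the sequence.
excess = spectra - spectra[0]

# By construction, the reference observation has zero excess everywhere.
assert np.allclose(excess[0], 0.0)
```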
The increase of the red-wards shifted component flux from phase 1.0034 on is clearly noticeable in the Si~IV, C~IV, N~V and Si~III lines.
At the same time, the blue-wards shifted component becomes dimmer. This behavior is observed in all hot gas lines, regardless of the COS grating setting or detector segment from which the data were extracted. It is also in marked contrast to the COS observations of H$_{2}$. The uncertainty of the COS wavelength solution is $\approx$~15 km s$^{-1}$ (Holland et al. 2014, {\it COS Instrument Handbook}), therefore we consider the systematic redshifts of the emission lines to be a real effect.
The profiles of the {\it C IV excess} are also plotted in Figure 15. They show an increasing absorption in the blue edge of the line that is caused by a progressive decrease of the flux in the blue-wards shifted part of the profile compared with the beginning of the periastron passage. One may naively think that this effect is caused by a decrease in the C~IV flux (hence the accretion rate) from the companion ($V_{rad} \in [-95,-85]$~km~s$^{-1}$); however, the sharp edge at 286 km~s$^{-1}$ precludes this interpretation.
There is also an H$_2$ emission line on the blue wing of C IV 1548 (see Herczeg et al. 2002, France et al. 2014); however, variations in the
H$_2$ could not cause the observed edge since the line is very narrow and the H$_2$ variability is decoupled from that of the atomic species (see Paper II).
The light curve of the excess is displayed in Figure 16 for the main spectral lines. The fastest rise is observed in He II; the excess rises by a factor of 4 in one hour. The C IV light curve rises significantly more slowly ({\it excess} growth rate of 2.3 per hour). The maximum {\it excess} is observed from phase 1.01 on (3.26 hours after periapsis).
\section{Accretion and cycle-to-cycle variations}
The AK Sco binary system has a highly eccentric orbit; as a result, when the system approaches periastron, the outer boundaries of the circumstellar disks (and the accretion streams passing by) get close enough to each other to effectively lose angular momentum, leading to an increase of the accretion rate. The predictions from numerical simulations
for some sample cycles are shown in Figure 17; most often, the accretion flow is not evenly distributed between the two components of the system. In fact, there is
a pronounced asymmetry in some cycles, see, e.g., cycle 12 in Figure 17. Inter-cycle variations occur not only in the total accretion rate but also in the
details of the temporal distribution of the infall, which shows up in the light curves of the relevant spectral tracers.
The Hubble observations confirm these predictions:
\begin{itemize}
\item During cycle 1, there is a blueing of the near-UV continuum and an increase of the line flux at phases 0.9982-0.9993 by $\sim 10$\%. The increase is pronounced
in O I, C II, and Si IV and is less prominent in saturated (Mg II, C IV) or weak (N V, He II) transitions. This evolution cannot be interpreted as an increase of the
fraction of the stellar surface affected by the accretion shock; rather, the variations observed in the flux-flux diagrams call for a variation in the accretion rate
and the electron density at the shock front. It is noteworthy that the Si III]/C III] ratio rises by 37\% from phase 0.992 to phase 0.9993 ([1] and [3] in Figure 11), corresponding to an increase of the electron density by a factor of $\sim 10$ for a fiducial temperature of 50,000~K. These observations are consistent with the theoretical prediction of enhanced mass infall (accretion rate) at periastron.
\item During cycle 2, the flux is higher but there are no significant variations during the monitored time lapse.
\item During cycle 3, the behavior observed in cycle 1 seems to be reproduced. Moreover, the He II and C IV profiles show evidence of the accretion rate enhancement being channeled preferentially onto one of the two components of the system.
\end{itemize}
The UV radiation studied in this work is mainly produced at accretion shocks or very close to the stellar surface.
Numerical calculations of the structure of accretion shocks in TTSs indicate that the C~III], O~III] and Si~III] lines should have comparable intensities and that their ratios can be reliably used to derive the density, accretion infall velocity and hence, accretion rate on these stars (G\'omez de Castro \& Lamzin 1999). Though C~III] and Si~III] transitions could also be excited at the base of the jets (G\'omez de Castro \& Verdugo 2001), the profiles of these lines in AK~Sco clearly indicate that any contamination by a possible jet is negligible (G\'omez de Castro 2009), see also Sect.~6. The location of AK Sco in the Si III]/O III] versus Si III]/C III] diagram
is displayed in Figure 18. There are no significant variations during the HST monitoring and, in all cases, the observations indicate that the emission is produced by low temperature (mild shock) and low density (low accretion rate) plasma. The inferred electron density is $3-4 \times 10^{10}$~cm$^{-3}$, in good agreement with the predictions
from generic collisional plasma diagnostics.
The Si III]/O III] ratio varies from $3.2 \pm 0.4$ in the first observation of cycle 1 (the one with the highest SNR) to $2.6\pm 0.3$
in the first observation of cycle 2; though this variation is marginal, it is suggestive of a change in the electron temperature from one cycle to another. The Si III]/O III] ratio is temperature sensitive and, in the accretion shock scenario, this variation is associated with a small change in the shock velocity, from 200~km~s$^{-1}$ to 215~km~s$^{-1}$.
Material in the shock front is heated by the release of the gravitational energy of the infalling material at the impact point; roughly, $T \simeq \mu v_{\rm shock}^2/(3k_B)$. The higher the shock velocity, the higher the electron temperature in the Si III] and C III] line formation region. Small variations in the final shock speed are to be expected since the angular momentum
of the material in the innermost orbit might suffer slight variations due to the dynamics of the system (see Paper I).
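As a rough numerical check of this estimate, the sketch below evaluates $T \simeq \mu v_{\rm shock}^2/(3k_B)$ for the two shock velocities quoted above; the mean particle mass $\mu = 0.6\,m_{\rm H}$ is an assumed value for a fully ionized solar-composition plasma, not taken from the paper.

```python
# Post-shock temperature estimate T ~ mu * v_shock^2 / (3 k_B).
# mu = 0.6 m_H is an illustrative assumption (ionized solar plasma).
K_B = 1.380649e-23   # Boltzmann constant [J/K]
M_H = 1.6726e-27     # hydrogen atom mass [kg]

def shock_temperature(v_shock_kms, mu=0.6):
    v = v_shock_kms * 1e3          # km/s -> m/s
    return mu * M_H * v**2 / (3 * K_B)

for v in (200.0, 215.0):
    print(f"v_shock = {v:.0f} km/s  ->  T ~ {shock_temperature(v) / 1e6:.2f} MK")
# v = 200 km/s gives ~1 MK, consistent with the coronal-like shock
# temperatures (0.3-1 MK) quoted in the introduction.
```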
\section{Extended Ly~$\alpha$ emission}
Ly~$\alpha$, Mg~II and semiforbidden line radiation (C~II], C~III], Si~III]) has been detected from jets and Herbig-Haro objects in TTSs (Coffey et al. 2004, L\'opez-Mart\'inez \& G\'omez
de Castro 2015). We used the STIS 52\arcsec~$\times$~0.2\arcsec\ slit to enable the possibility of a serendipitous detection of extended outflows from the AK Sco system. The STIS long-slit was oriented at position angles of 28.5\arcdeg\ and 36.6\arcdeg\ for visits 01 and 02, respectively. The objective here was to build up signal to look for the spectral signature of extended emission, and consequently all of the STIS G140L and G230L observations were coadded to maximize the chance of detection. The two-dimensional spectra were aligned by fitting a Gaussian profile to the cross-dispersion profile of bright emission regions, \ion{C}{4} and the NUV continuum for G140L and G230L, respectively. The centroids of all of the individual exposures were then shifted to the spatial centroid of the initial exposure and the data were stacked. Centroid shifts were $\leq$ 2 pixels in all cases. Spatial profiles of several spectral regions of interest were then extracted from the coadded G140L and G230L spectral images.
Figure~19 shows the flux-normalized spatial profiles from the G140L ($top$) and G230L ($bottom$) observations. We created individual extractions of \ion{H}{1} Ly$\alpha$, \ion{C}{2}, \ion{Si}{4}, the 1427~--~1520~\AA\ region comprising H$_{2}$ emission and FUV continuum, \ion{C}{4}, and \ion{O}{3}] from the G140L spectra. \ion{Mg}{2}, an adjacent continuum region to \ion{Mg}{2}, and two continuum regions dominated by Balmer continuum and stellar photosphere (2000~--~2620~\AA\ and 2900~--~3040~\AA) were extracted from the combined G230L observations. The \ion{Mg}{2} profile is indistinguishable from the adjacent continua. The FUV emission lines (except possibly Ly$\alpha$) are also unresolved, with lines at shorter wavelengths displaying progressively broader profiles at flux levels $\lesssim$~2~\% of the peak. These are likely the result of the extended short-wavelength point spread function of the $HST$ optical telescope assembly, owing primarily to mid-frequency errors related to the final quality of the telescope polish. The Ly$\alpha$ profile displays both a 1-pixel offset from the main FUV peak and evidence for spatial extension. We caution the reader that these profiles are extracted across the bright geocoronal Ly$\alpha$ emission line~--~the dotted line profile in Figure 19, $top$, shows the Ly$\alpha$ profile prior to airglow subtraction and the solid circles are the profile following the subtraction of a constant geocoronal airglow component, the average of 100 pixels centered approximately 4.3\arcsec\ in the ``+$y$'' direction on the STIS FUV MAMA. Taking the nominal 103 pc distance (van Leeuwen 2007) and 68\arcdeg\ inclination angle (Alencar et al. 2003) for AK Sco, $>$ 90~\% of the FUV line flux is located within $\pm$~9.3 AU of the center of mass of the system, with $>$~90~\% of the NUV flux originating within $\pm$~7.4 AU from the center of mass.
Our conclusion is that there is tentative evidence that AK Sco displays extended Ly$\alpha$ emission (subject to large uncertainties in the airglow subtraction), but that all of the other emissions originate within $\approx$~$\pm$~10 AU of the center of mass.
\section{Light curve analysis: search for low frequency modes}
In a previous work (Paper I), we reported the detection of an Ultra Low Frequency (ULF)
oscillation in the UV light-curve from AK Sco using the Optical Monitor (OM)
instrument on-board the XMM-Newton. The ULF oscillation was excited close to the periastron passage,
lasting only for few oscillations at the rise of the UV light curve. It had a period of
790$^{+200}_{-150}$~s and was detected with filter UVM2 (spectral range 2000 \AA\ - 2700 \AA).
This bulk radiation may be produced in several distinct regions of the TTS environment
(chromosphere, accretion shock, photoionized gas in the inner disk...), making the identification
of the source difficult. The spectral lines detected with COS cover a broad range of densities and temperatures;
hence, the analysis of their light curves has the potential to enable the identification of the source.
For this analysis, the COS event-list datasets have been used instead of the 1-D extracted spectra (.x1d files).
The light-curve generation reads segments A and B from the FUV datafiles,
following the steps taken by the COS light-curve package of Ely et al. (2015).
These light-curves were then processed to compute Lomb-Scargle periodograms
(Lomb 1976, Scargle 1982), using the standard astropy package.
This method performs well with unevenly spaced data, as in our case.
The Lomb-Scargle method essentially fits a sinusoidal model to the data at each frequency, with a
larger power assigned to a given frequency reflecting a better fit.
The selected normalization uses the residuals of the data around the constant reference model,
so that the resulting power $P$ is a dimensionless quantity between $0$ and $1$.
When using a Lomb-Scargle Periodogram and deciding whether a signal contains a periodic
component, an important consideration is the significance of the peaks.
In a Bayesian view, the Lomb-Scargle periodogram is the optimal statistic for
detecting a stationary sinusoidal signal in the presence of Gaussian noise.
Hence, this significance is usually expressed in terms of a false alarm probability (fap),
which encodes the probability of measuring a peak of a given height (or higher)
conditioned on the assumption that the data consists of Gaussian noise with no periodic component.
A fap level of, say, $1\%$ means that,
under the assumption that there is no periodic signal in the data,
a peak of that height or higher will be observed only $1\%$ of the time.
The COS monitoring of the periastron passage lasted about $9$~hrs.
Figure~\ref{fig_jcv_periodogram} shows the periodograms corresponding
to the light-curves (see Paper II).
The upper panel corresponds to the G130M data, and the bottom panel to
the G160M data.
The periodograms are built up to a maximum period of $2.0$~hrs.
The peaks around the rightmost vertical line correspond to the detection
of the HST orbital period of around $1.6$~hrs. The grey area shows
the ULF interval reported by G\'omez de Castro (2013).
Each periodogram panel also shows the horizontal lines corresponding to
the peak height required to attain a false alarm probability (fap) of $10\%$, $5\%$ and $1\%$.
In addition to the peaks linked to the HST orbital period, we observe other high peaks.
Lomb-Scargle periodograms detect all periodicities present in the data, including the periods
corresponding to the observational sampling and its $1/n$ multiples. That is, they also reflect
combinations of the duration of the exposures and the separations between them.
In order to distinguish any real ULF periodicity from those aliased multiples, we have
overlaid in the figure the periodograms resulting from fixing the real signal to an arbitrary
value of $1$. In this way, the pattern of the resulting peaks will exclusively correspond
to the observational sampling.
The region where the ULF is suspected to be present is between $0.15$ and $0.4$hrs.
Here, all the peaks are grouped and mimic those found
at larger periods, which indicates that they are aliases of the main frequencies.
Unfortunately, the typical G160M exposure duration is around $0.22$~hrs,
and these peaks likely come from this sampling duration, which is close to the searched ULF period.
The results with G130M data are very similar. Fixing again the counts to an arbitrary unit value,
we get the same pattern of peaks resulting from the observational sampling.
Finally, when computing the periodograms corresponding to the light-curves
extracted at selected wavelengths, such as C IV and Si IV (hot species) and C II
(a warm species), the results are again comparable. Any periodicity within the
ULF region may still exist, but with a very high fap, indistinguishable from noise. Outside the
ULF region, one may search for other periodicities; but, in a similar manner,
all the peaks with low fap are linked to the observing windows, and a clean-up of the sampling frequencies
does not improve the results.
As an alternative, we can compute the periodograms using just individual exposures. The signal will be weaker
and the significance of any peak lower, but the main sampling multiples will not be present.
This analysis is seen in Figure~\ref{fig_jcv_periodogram_individual}. The upper panel shows the results coming from the
G130M single exposures. We do not detect any periodicity in the ULF area.
The G130M exposure labeled as `isq' presents a peak around a period of $0.17$hrs, but the corresponding
fap is very high, around $83\%$, indicating that this could be produced by noise.
The G160M analysis also presents peaks with a very high fap. However, as the G160M exposures
were taken in consecutive pairs, this analysis uses longer duration light-curves.
The results can be seen in the bottom panel of Figure~\ref{fig_jcv_periodogram_individual}.
The pair labeled `jeq-jgq' presents a peak around $0.25$~hrs, with a fap of $24\%$. One may think this peak could be
an alias of the double exposure duration. But, notably, such a peak is not present in any of the remaining
curves. Hence, we may have a (faint) indication that the ULF was present at least during one of the exposures.
\section{Summary and Conclusions}
The observations provided by the dedicated Hubble monitoring of AK Sco have enabled, for the first time, tracking the variability of a pre-main sequence
binary with a degree of detail similar to that of the UV observations of interacting binaries. Much of the observed behavior was already predicted by numerical simulations (Paper I),
but this campaign has provided the much-needed experimental evidence of the erratic cycle-to-cycle variations. In addition, the radiative output from accretion has been
accurately measured, providing fundamental data for accretion shock calculations.
Two other PMS interacting binaries had previously been monitored with Hubble, namely UZ Tau~E and DQ Tau (Ardila et al. 2015), but the phase coverage had significantly poorer
temporal resolution; this is a fundamental issue since, as shown in this work, the dynamics of these systems is very complex. Moreover, the gravitational piston effect of the passage by the periapsis is less significant in them, making the precise timing of the accretion events more difficult. DQ~Tau, the system most similar to AK Sco, is composed of two
equal-mass stars of 0.5~M$_{\odot}$, in an orbit less eccentric than AK Sco's and with a larger semi-major axis (Mathieu et al. 1997). The mass ratio of the components in UZ~Tau~E is 0.289 (Prato et al. 2003); thus mass infall is preferentially channelled onto the primary and the effect is less significant.
AK Sco was monitored for two consecutive cycles in low dispersion (high sensitivity) with STIS and a third one with COS providing good time and kinematical
resolution at the cost of a lower SNR in the flux.
STIS data analysis revealed the enhancement of the accretion rate during the first periastron passage, supported by:
\begin{itemize}
\item Blueing of the 1640-3100~\AA\ NUV continuum between phases 0.9982 and 0.9993, in the middle of the cycle.
\item Sudden increase in the flux of important accretion tracers, such as the N V, Si IV and C IV lines, and also in neutral and singly ionized species such as O I and C II.
\item Variations in the Si III]/C III] flux-flux diagrams, revealing variations in the electron density by an order of magnitude during the periastron passage.
\end{itemize}
This behavior is reproduced as well in the third cycle, in agreement with the previous analysis shown in G\'omez de Castro et al. (2016). Moreover, the high resolution of COS makes it possible to recognize an increase in the red-wards shifted component of important accretion tracers (He II and C IV), while the blue-wards shifted component of these lines decreases, pointing out that accretion is preferentially driven into one of the two components of the system.
These intra-cycle variations are in concordance with the results given in Paper I through the XMM-Newton monitoring of the AK Sco system, where the enhancement of the UV and X-ray flux in the binary was suggested to be produced by an accretion outburst.\\
Cycle-to-cycle variations have been measured as well; the most remarkable feature is the notable increase in the total UV radiation of the system from cycle 1 to cycle 2. Moreover, between the first two cycles, the temperature-sensitive Si III]/O III] ratio reveals a marginal variation of the plasma temperature, which could translate into a small change in the shock velocity, from 200 km/s to 215 km/s.\\
Despite the measured enhancement of the UV radiation in cycle 2, the absence of significant variations in the flux of the spectral lines and in the flux-flux relations during this cycle reveals no hints of an accretion rate enhancement, in contrast with the other two cycles.
Besides the accretion processes in the AK Sco system, the presence of extended emission due to the presence of jets has been addressed in this study. Spatial cross-dispersion profiles of the STIS data allowed us to identify hints of a diffuse envelope around the AK Sco binary radiating in Ly$\alpha$. \\
Finally, inspired by the detection of ultra-low frequency oscillations in AK Sco (Paper I), we analyzed the presence of these low-frequency modes in the light-curve of the binary, however, the analysis of the UV light curve from COS data only shows a minor indication that the ULF may be present in the region where the C~IV line is produced. This result requires further confirmation with a dedicated campaign monitoring AK Sco only with COS/G160M.
\acknowledgments
This work has been partly funded by the Ministerio de Economia y Competitividad of Spain through grant
ESP-2017-87813-R.
The data presented here were obtained through $HST$ Guest Observing program 13372.
{\it Facilities:} \facility{HST (COS)}, \facility{HST (STIS)}.
\section{Introduction}
\IEEEPARstart{T}{he} human retina is composed of various nerve cells. Among these, retinal ganglion cells (RGCs) are responsible for transmitting visual information to the brain. The axons of these ganglion cells collectively form the retinal nerve fiber layer (RNFL), their cell bodies are enclosed in the ganglion cell layer (GCL), and the inner plexiform layer (IPL) embodies their dendrites. The composition of these layers is commonly termed the ganglion cell complex (GCC). Glaucoma (a progressive optic neuropathy) severely degrades these RGCs, resulting in a thinning of the RNFL, ganglion cell with inner plexiform layer (GC-IPL), and GCC profiles, as shown in Figure \ref{fig:fig1}. This RGC atrophy can cause permanent visual impairments and even blindness if left untreated \cite{weinreb2014JAMA}.
Clinically, glaucoma can be identified through fundus and optical coherence tomography (OCT) based examinations. OCT imaging is generally preferred by clinicians over other modalities due to its objective assessment of early and advanced-stage glaucoma, where early glaucoma refers to the condition in which the RGCs start to degenerate due to increased intraocular pressure \cite{Burgoyne2005ONH}. However, the progression of RGC dysfunction leads to the advanced glaucomatous stage, where total cupping of the optic nerve and severe visual impairments can be observed.
\begin{figure}[t]
\includegraphics[scale=0.1755]{JPGs/J11.jpg}
\caption{\small Optic nerve head (ONH) OCT scan depicting (A) healthy and (B) glaucomatous pathology. Inner Limiting Membrane (ILM), RNFL, GC-IPL, GCC, and Retinal Pigment Epithelium (RPE) are highlighted along with the choroidal and optic disc region. The thinning of RNFL, GC-IPL, and the GCC regions can be observed in the glaucomatous scan (B) as compared to the healthy one (A). Both scans are taken from publicly available Armed Forces Institute of Ophthalmology (AFIO) dataset \cite{Raja2020DIB}. }
\centering
\label{fig:fig1}
\end{figure}
\section{Related Work}\label{sec:RelatedWork}
\noindent Many researchers have worked on diagnosing glaucoma from retinal OCT images. These studies either emphasize the clinical significance of retinal OCT examination for analyzing glaucomatous severity, or they propose OCT-based autonomous systems for analyzing glaucomatous pathologies.
\subsection{Clinical Studies}
\noindent Development in retinal imaging modalities (especially OCT) is making rapid strides in providing the objective visualization of early, mild, and severe ocular complications \cite{Hassan2015IST}, particularly for glaucoma \cite{majoor2019TVST}, the second leading cause of irreversible blindness worldwide.
Moreover, the detection and monitoring of glaucoma by measuring the rate of RNFL thickness loss has been highlighted in many recent state-of-the-art studies \cite{grewal2012Glaucoma, Gracitelli2015Ophthalmol}. Leung et al. \cite{leung2012Ophthalmology} demonstrated the importance of RNFL thickness (measured through OCT and visual field tests) in determining the retinal pathological variations within different glaucomatous stages.
Ojima et al. \cite{ojima2007Ophthalmol} highlighted the importance of RNFL thickness and macular volume, and concluded that RNFL thickness has higher diagnostic power than complete macular volume for detecting glaucoma.
Furthermore, Medeiros et al. \cite{Medeiros2012IOVS} evaluated RGC loss using standard automated perimetry (SAP) and spectral-domain OCT (SD-OCT) examinations.
They observed that the early pathological degeneration of RGC results in the drastic thinning of RNFL as compared to the RGC changes in the late glaucomatous stages.
Likewise, El-Naby et al. \cite{El-Naby2014Egyptian} extracted the RNFL thickness from SD-OCT scans and compared it with the visual field (VF) sensitivity to observe their correlation in screening primary open-angle glaucoma. They concluded that the mean RNFL thickness obtained through SD-OCT imagery is a very good indicator for screening glaucomatous subjects and also for monitoring the progression and severity of the disease.
\subsection{Automated Glaucomatous Analysis}
\noindent Initial methods developed for glaucomatous screening analyze cup-to-disc ratios from macula-centered and disc-centered fundus images \cite{Khalil2020Wiley, Sun2018Localizing, Chen2015Glaucoma, Cheng2013Superpixel, Fu2018DiscAware}. However, observing the degeneration of RGCs through optic nerve head (ONH) SD-OCT scans can provide a superior and objective indication of early glaucoma, resulting in the timely prevention of non-recoverable visual impairments. Furthermore, due to the unprecedented clinical significance of retinal OCT examination \cite{majoor2019TVST, grewal2012Glaucoma, Gracitelli2015Ophthalmol}, many researchers have developed autonomous systems to objectively screen glaucoma (especially in early-stage) using retinal OCT scans \cite{Khalil2018Access}.
Moreover, Almobarak et al. \cite{Almobarak2014IOVS} manually segmented ONH structures from SD-OCT scans to analyze pathological variations in healthy and glaucomatous eyes. Kromer et al. \cite{Kromer2017Ophthalmology} extracted eight retinal boundaries from 40 SD-OCT scans of healthy subjects using curve regularization. In \cite{Duan2018Access}, a generative model was presented to segment retinal layers from OCT images using a group-wise curve alignment. Niu et al. \cite{Niua2014CBM} proposed an automated method to segment the six retinal layers using correlation smoothness constraint and dual gradients. Apart from this, several methods have been proposed to quantify retinal layer thickness from SD-OCT scans depicting normal \cite{Kafieh2015Hindawi, Bagci2007LSSAW} and abnormal retinal pathologies \cite{Abdellatif2019Hindawi, Hassan2016AO, Hassan2019CBM, Hassan2018Access, Chiu2015BOE}. Ometto et al. \cite{Ometto2019Trans} presented ReLayer, an automated framework to segment and estimate the thickness of ILM, inner/outer segment, and RPE from OCT scans to monitor retinal abnormalities.
Gao et al. \cite{Gao2015PLOSONE} extracted retinal layers through graph decomposition from Topcon SD-OCT scans and evaluated the mean macular thickness of RNFL regions. Afterward, they compared their obtained results with the thickness measurements from Topcon’s built-in layer extraction framework. Likewise, Mayer et al. \cite{Mayer2010BOE} proposed an automated framework for extracting the retinal layers and computing the RNFL thickness by minimizing the energy obtained through scan gradients, local and regional smoothing filters. They validated their framework on a dataset containing both normal and glaucomatous affected OCT scans, and achieved a mean RNFL thickness of 94.1$\pm$11.7$\mu m$ and 65.3$\pm$15.7$\mu m$, respectively for normal and glaucomatous pathologies. In addition to this, many researchers have proposed computer-aided diagnostic systems to diagnose glaucomatous pathologies from fundus \cite{Khalil2017IET, Sun2018Localizing, Chen2015Glaucoma, Cheng2013Superpixel}, OCT \cite{Khalil2018Access} and fused fundus and OCT imagery \cite{Khalil2020Wiley}.
More recently, deep learning has been applied to analyze glaucomatous pathologies through a segmentation-free retinal layer extraction framework \cite{Mariottoni2020ScientificReports}. Zang et al. \cite{Zang2019BOE} used a convolutional neural network (CNN) and graph search to delineate the retinal boundaries and the optic disc region from ONH SD-OCT scans of normal and glaucomatous subjects, achieving an overall dice coefficient of 0.91$\pm$0.04. Maetschke et al. \cite{Maetschke2019PLOSONE} highlighted the significance of RNFL and GC-IPL profiles for diagnosing and monitoring glaucoma progression and used a 3D CNN model to classify healthy and glaucomatous ONH SD-OCT scans. They outperformed conventional machine learning approaches by achieving an area under the curve ($AUC$) score of 0.94. Furthermore, Devalla et al. \cite{Sripad2018BOE} proposed a dilated-residual U-Net architecture (DRUNET) for the extraction of six ONH tissues from SD-OCT scans to aid experts in analyzing glaucomatous progression. DRUNET achieved an overall dice coefficient of 0.91$\pm$0.05 when assessed against manual tissue segmentation done by an expert observer. In addition to this, a joint retinal segmentation and classification pipeline was proposed in \cite{Wang2019BOE} to analyze healthy and glaucomatous pathologies from 1,004 locally acquired circular OCT scans, and also a severe diabetic macular edema (DME) pathology from 110 selected macular OCT scans of the Duke dataset \cite{Chiu2015BOE}.
\begin{figure*}[htb]
\includegraphics[width=1\linewidth]{Figures/Picture10.png}
\caption{\small The block diagram of the proposed framework. First of all, the input scan is preprocessed to remove the background noise and vendor annotations. Afterward, the processed scan is passed to the hybrid convolutional network (RAG-Net\textsubscript{v2}) for the simultaneous extraction of RNFL, GC-IPL, and GCC regions, and its classification against glaucoma. The screened glaucomatous scan is further graded by SVM based on the RGC atrophy observed through RNFL, GC-IPL, and GCC thickness profiles. }
\centering
\label{fig:fig2}
\end{figure*}
\noindent Pathological degeneration of RGCs observed in RNFL, GC-IPL, and GCC thickness profiles can objectively monitor the progression of glaucomatous severity. However, manual extraction of these regions is a subjective and time-consuming task. Several automated layer extraction methods have been proposed in the literature to address this shortcoming \cite{Bagci2007LSSAW, Gao2015PLOSONE, Mayer2010BOE, Wang2019BOE, Sripad2018BOE}. However, to the best of our knowledge, there is no clinically validated framework that utilizes the degraded RGC profiles to diagnose and grade glaucomatous progression using ONH SD-OCT scans. Moreover, as ONH SD-OCT scans are considered more significant for detecting glaucoma progression \cite{Almobarak2014IOVS}, validating an automated framework on a publicly available standardized ONH SD-OCT dataset adds significant value to the body of knowledge.
\noindent In this paper, we present a fully automated diagnosis and grading of glaucoma from ONH SD-OCT images by analyzing pathological variations of RNFL, GC-IPL, and GCC regions. The proposed framework is unique as it employs a hybrid convolutional network for the RGC-aware diagnosis and grading of glaucoma, and it has been clinically validated with four expert clinicians. The main features of this paper are:
\begin{itemize}[leftmargin=*]
\item A novel strategy for the classification and grading of glaucomatous progression by analyzing RNFL, GC-IPL, and GCC regions from ONH SD-OCT scans.
\item A significantly improved and lightweight hybrid retinal analysis and grading network (RAG-Net\textsubscript{v2}) for the simultaneous pixel-level segmentation of retinal regions and scan-level classification of glaucomatous pathologies.
\item Rigorous clinical validation of the proposed framework with four expert ophthalmologists to screen, track, and grade glaucomatous progression from high-quality ONH SD-OCT scans. The complete dataset and the annotations from the expert observers are publicly available at \url{https://data.mendeley.com/datasets/2rnnz5nz74/2}.
\end{itemize}
\noindent The rest of the paper is organized as follows. Section \ref{sec:method} describes the proposed method. Section \ref{sec:expsetup} showcases the experimental setup. Section \ref{sec:results} presents the results, followed by detailed discussion and concluding remarks about the proposed framework in Section \ref{sec:discussion}.
\section{Proposed Method}\label{sec:method}
\noindent We present a novel framework that gives an RGC-aware diagnosis of glaucoma using ONH SD-OCT scans. Furthermore, it measures the severity of glaucomatous progression by analyzing the RNFL, GC-IPL, and GCC thickness profiles. The block diagram of the proposed framework is shown in Figure \ref{fig:fig2}. First of all, the input scan is preprocessed to retain the retinal area. Afterward, the preprocessed scan (containing only the retina and the ONH) is passed to the hybrid convolutional network that extracts the RNFL, GC-IPL, and GCC regions, and screens the scan against glaucoma. The thickness profiles of these extracted regions are computed and their mean values are passed as a feature vector to the supervised support vector machines (SVM) for grading the screened glaucomatous scan as either early suspect or a severe case. The detailed description of each block is presented below:
\subsection{Preprocessing}
\noindent The purpose of the preprocessing is to remove the background artifacts and noisy content to obtain accurate extraction of RNFL, GC-IPL, and GCC regions. The preprocessing is performed through structure tensor \cite{st2}, which highlights the predominant orientations of the image gradients within the specified neighborhood of a pixel. For each pixel of the input image, we get a symmetric $2 \times 2$ matrix $S$ defined by the outer products of image gradients:
\begin{equation}
\small
S=\begin{bmatrix}
\varphi * (\nabla X . \nabla X) & \varphi * (\nabla X . \nabla Y) \\
\varphi * (\nabla Y . \nabla X) & \varphi * (\nabla Y . \nabla Y)
\end{bmatrix}
\label{eq:Eq1}
\end{equation}
where the image gradients $\nabla X$ and $\nabla Y$ are oriented at $0^\circ$ and $90^\circ$, respectively, and $\varphi$ denotes the parametric smoothing filter (typically a Gaussian). Because of the symmetry, three of the four matrix elements are unique. Computing $S$ for each pixel thus yields three unique tensor components, from which we select the one with the maximum coherency according to its norm. Afterward, the selected tensor is transformed into an 8-bit grayscale image. Then, the ILM and choroidal boundaries are extracted from it by detecting the first and last foreground-background transitions in each column of the scan.
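As a concrete illustration, the per-pixel structure tensor of Eq. \ref{eq:Eq1} can be sketched with NumPy alone. This is a minimal sketch, not the authors' implementation: the Gaussian width \texttt{sigma} and the 8-bit rescaling convention are our assumptions, since the paper does not fix them.

```python
import numpy as np

def structure_tensor(img, sigma=1.5):
    """Structure tensor of Eq. (1): Gaussian-smoothed outer products of the
    image gradients. Returns the three unique components of the symmetric
    2x2 matrix S (sigma parametrises the smoothing filter phi; assumed)."""
    gy, gx = np.gradient(img.astype(float))   # gradients at 90 and 0 degrees
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()                              # 1-D Gaussian kernel

    def smooth(a):                            # separable Gaussian filtering
        a = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, a)

    return smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)

def most_coherent_component(img):
    """Pick the tensor component with the largest (Frobenius) norm and
    rescale it to an 8-bit grayscale image, as done before boundary
    extraction; the rescaling to [0, 255] is an assumed convention."""
    comp = max(structure_tensor(img), key=np.linalg.norm)
    comp = comp - comp.min()
    if comp.max() > 0:
        comp = comp / comp.max()
    return (255 * comp).astype(np.uint8)
```

On a synthetic scan with a bright horizontal band, the vertical-gradient component $\varphi * (\nabla Y \cdot \nabla Y)$ dominates, matching the intuition that retinal layer boundaries produce strong vertical transitions.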
To avoid outliers, we constrain the distance between consecutive pixels in the ILM and choroidal boundaries to be below a threshold $\tau$ determined empirically.
Apart from this, the missing values in each layer are estimated through linear interpolation and smoothed through median filtering. Then, a retinal mask is generated and multiplied with the original scan to isolate the retinal and ONH regions, as shown in Figure \ref{fig:fig3}. The complete pseudo-code to extract the retina and ONH region is presented in Algorithm \ref{algo}.
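The gap-filling and smoothing of a boundary can be sketched as below; the median-filter width \texttt{k} is an assumed value, as the paper does not report the filter size.

```python
import numpy as np

def fill_and_smooth(boundary, k=5):
    """Fill missing boundary positions (NaNs) by linear interpolation and
    smooth the layer with a 1-D median filter of width k (k is assumed)."""
    b = np.asarray(boundary, float)
    idx = np.arange(b.size)
    known = ~np.isnan(b)
    b = np.interp(idx, idx[known], b[known])   # linear interpolation of gaps
    pad = k // 2
    padded = np.pad(b, pad, mode="edge")       # edge padding keeps length
    return np.array([np.median(padded[i:i + k]) for i in range(b.size)])
```

Besides restoring missing columns, the median filter also suppresses isolated outlier points that survive the distance constraint.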
\subsection{Hybrid Convolution Framework}
\noindent We propose a hybrid convolutional network (HCN)
for extracting the retinal regions and for classifying the candidate scan as normal or glaucomatous. We use an HCN rather than a conventional classification model to obtain an RGC-aware diagnosis of glaucoma. The HCN model proposed here is an improved version of the Retinal Analysis and Grading Network (RAG-Net) \cite{Hassan2020JBHI}. RAG-Net and its improved version are described next.
\begin{algorithm}
\SetAlgoLined
\DontPrintSemicolon
\textbf{Input: } OCT Image $I$
\textbf{Output: } Preprocessed Image $I_{ONH}$
ILM $\gets$ $\phi$
Choroid $\gets$ $\phi$
$\tau$ $\gets$ 20
$v_1$ $\gets$ $\phi$
$v_2$ $\gets$ $\phi$
$S$ $\gets$ ComputeStructureTensor(\textit{I})
$S_u$ $\gets$ GetUniqueTensors($S$)
$S_u^t$ $\gets$ GetCoherentTensor($S_u$)
$I_u^t$ $\gets$ ConvertTensorToImage($S_u^t$)
[nRow,nCol] $\gets$ GetSize($I_u^t$)
\For {$c$ $\gets$ 1 to nCol}{
$p_1$ $\gets$ FindFirstTransitionInColumn($I_u^t$(:,$c$))
$p_2$ $\gets$ FindLastTransitionInColumn($I_u^t$(:,$c$))
\eIf {$c$ is 1}{
ILM($c$) $\gets$ $p_1$
Choroid($c$) $\gets$ $p_2$
$v_1$ $\gets$ $c$; $v_2$ $\gets$ $c$}{
\textit{isP1Valid} $\gets$ CheckDistance($p_1$,ILM($v_1$), $\tau$)
\textit{isP2Valid} $\gets$ CheckDistance($p_2$,Choroid($v_2$), $\tau$)
\If{isP1Valid}{
$v_1$ $\gets$ $c$
ILM($v_1$) $\gets$ $p_1$
}
\If{isP2Valid}{
$v_2$ $\gets$ $c$
Choroid($v_2$) $\gets$ $p_2$
}
}
}
ILM $\gets$ InterpolateGapsAndSmoothLayer(ILM)
Choroid $\gets$ InterpolateGapsAndSmoothLayer(Choroid)
mask $\gets$ GenerateMask($I_u^t$,ILM, Choroid)
$I_{ONH}$ $\gets$ \textit{I} * mask
\caption{Retina and ONH Extraction}
\label{algo}
\end{algorithm}
\subsubsection{Retinal Analysis and Grading Network}
RAG-Net is a hybrid convolutional network specifically designed to extract retinal lesions and abnormalities from macular OCT scans and to give lesion-aware grading of retinal diseases by performing simultaneous pixel-level segmentation and scan-level classification. Architecturally, it contains 112 convolution layers, 111 batch normalization layers, 102 ReLUs, 6 pooling layers, 5 lambda layers, 2 softmax layers, and a fully connected layer \cite{Hassan2020JBHI}. Furthermore, it contains around 62.3M parameters, of which around 0.1M are non-learnable. RAG-Net has shown the capacity to generalize across multiple scanner specifications for retinal lesion extraction and lesion-aware grading of retinopathy, and has also been applied to multi-modal imagery for retinal lesion extraction, achieving superior performance among its competitors \cite{Hassan2020}. However, the original RAG-Net has been found limited in discriminating similarly textured objects, such as retinal boundaries, when their transitional variations are small. This is because RAG-Net possesses kernels with small receptive fields (fields of view) that do not accurately retain the contextual information of small and similarly textured regions. Although the feature pyramid module within the RAG-Net architecture overcomes this limitation to some extent, the overall performance still plateaus, as will be shown in the results section (Section \ref{sec:results}). Moreover, the source code of RAG-Net and its complete documentation are available at \url{http://biomisa.org/index.php/downloads/} \cite{Hassan2020JBHI}.
\begin{figure}[htb]
\includegraphics[width=1\linewidth]{Figures/Picture3.png}
\caption{\small Preprocessing stage (A) original ONH SD-OCT scan, (B) tensor with maximum coherency, (C) grayscale tensor from which the retinal and choroidal layer points are iteratively picked, (D) extracted ILM and choroidal layers, (E) retinal extraction mask, (F) extracted retina and ONH. }
\centering
\label{fig:fig3}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=1\linewidth]{Figures/Picture4.png}
\caption{\small Illustration of dilated convolution with $3 \times 3$ kernel and (A) dilation rate $r = 1$, (B) $r = 2$, and (C) $r = 3$. }
\centering
\label{fig:fig4}
\end{figure}
\begin{table}[htb]
\centering
\caption{RAG-Net\textsubscript{v2} hyper-parameters}
\begin{tabular}{lcc}
\toprule
\textbf{Layers} & \textbf{Number of Layers} & \textbf{Parameters} \\ \hline
Convolution & 16 & 4,847,369\\
Pooling & 4 Average, 10 Max & 0\\
Batch Normalization & 15 & 17,920\\
Activation & 13 ReLU, 2 Softmax & 0\\
Lambda & 5 & 0 \\
Input & 2 & 0 \\
Zero-Padding & 10 & 0 \\
Concatenation & 1 & 0 \\
Reshape & 1 & 0 \\
Fully Connected and Flatten & 2 & 716,810 \\
Classification & 1 & 22 \\ \hline
Learnable Parameters & 5,573,161 & \\
Non-learnable Parameters & 8,960 & \\
Total Parameters & 5,582,121 & \\
\bottomrule
\end{tabular}
\label{tab:tab1}
\end{table}
\subsubsection{Modified Retinal Analysis and Grading Network}
We propose a modified version of RAG-Net, dubbed RAG-Net\textsubscript{v2}, in which the context-aware unit is built upon atrous convolutions (also known as dilated convolutions). The atrous convolutions are arranged in a residual fashion, which greatly enlarges the kernels' receptive fields to perform broader, context-aware filtering while maintaining the same spatial resolution \cite{Wang2018KDD}. The 2D atrous convolution is expressed as:
\begin{equation}
g(x,y)=\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} k(i,j)\,f(x-r \cdot i,\,y-r \cdot j)
\label{eq:eq2}
\end{equation}
where $f$ denotes the input function (typically a feature map from the previous layer), $k$ represents the $N_1 \times N_2$ kernel, $r$ is the dilation rate, and $g$ denotes the convolution output (a feature map produced in the current layer). Note that we use a common dilation rate $r$ in both image dimensions to ensure a consistent receptive field along each of them. When $r = 1$, the atrous convolution reduces to a standard linear convolution, as shown in Figure \ref{fig:fig4} (A); when $r > 1$, the receptive field is enlarged so the kernel captures a wider contextual area of the input function and produces more distinctive feature maps. However, increasing $r$ also introduces gridding artifacts \cite{Wang2018KDD} due to the large gaps between convolving pixels in the input function, causing a cascading effect in consecutive convolution layers that may significantly decrease network performance. The gridding artifacts are illustrated in Figure \ref{fig:fig5} (top row) for stacked convolution layers.
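A minimal NumPy sketch of the operation in Eq. \ref{eq:eq2} is given below (written in cross-correlation form, i.e. up to a kernel flip, with assumed zero padding that keeps the output the same size as the input); it only illustrates how the dilation rate $r$ spaces the kernel taps, not how the framework implements it.

```python
import numpy as np

def atrous_conv2d(f, k, r):
    """Naive 2-D atrous convolution: the N1 x N2 kernel taps sample the
    input on a grid spaced by the dilation rate r, enlarging the
    receptive field without adding parameters ('same' zero padding)."""
    (N1, N2), (H, W) = k.shape, f.shape
    py, px = r * (N1 - 1) // 2, r * (N2 - 1) // 2
    fp = np.pad(f.astype(float), ((py, py), (px, px)))
    g = np.zeros((H, W))
    for i in range(N1):
        for j in range(N2):
            # each kernel tap reads a shifted view of the input, r apart
            g += k[i, j] * fp[i * r:i * r + H, j * r:j * r + W]
    return g
```

With $r = 1$ this reproduces a standard $3 \times 3$ convolution; with $r = 2$ the same nine taps cover a $5 \times 5$ neighbourhood.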
\begin{figure}[htb]
\includegraphics[width=1\linewidth]{Figures/Picture5.png}
\caption{\small A block containing 5 consecutive atrous convolution layers with a fixed dilation rate ($r=3$) in the top row and variable dilation rates ($r = k$ in the $k^{th}$ layer, $k = 1, 2, \ldots, 5$) in the bottom row. A single pixel in layer 5 (highlighted in light blue) is computed from the green pixels in layer 4. Similarly, the green pixels in layer 4 are computed from the brown pixels in layer 3, the brown pixels from the red pixels in layer 2, and the red pixels from the dark blue pixels in layer 1. The receptive field of a $3 \times 3$ kernel is greatly enhanced in both rows compared to standard linear convolutions. However, the fixed dilation rate (top row) introduces gridding artifacts, i.e., an output pixel in layer 5 is computed from totally disjoint input pixels of layer 4, and so on. Also, as the layers are cascaded, the effects of gridding artifacts can be catastrophic, e.g., observe in the top row how a pixel in layer 5 relates to the pixels in layer 1. Employing variable dilation rates effectively diminishes these gridding artifacts while preserving the increased field of view, as shown in the bottom row.}
\centering
\label{fig:fig5}
\end{figure}
To compensate for this, we perform atrous convolution with variable dilation factors \cite{Wang2018WACV} in the RAG-Net\textsubscript{v2} architecture. For a block of $n$ consecutive convolution layers within the network having the dilation rate $r$, where $n > r$, the dilation factors in the proposed framework are generated through $round(r-\frac{n}{2}+i)$, where $i$ varies from $0$ to $n-1$. For example, for a block containing 5 cascaded convolution layers with dilation rate $r = 3$, the dilation factors will be [1, 2, 3, 4, 5], meaning that the first convolution layer within the block performs standard convolution (as $r = 1$), the second layer has $r = 2$, and so on, as shown in Figure \ref{fig:fig5} (bottom row).
Similarly, for $n = 3$, $r = 3$, the dilation factors will be [2, 3, 4].
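The dilation-rate schedule $round(r-\frac{n}{2}+i)$ can be reproduced in a few lines; note that half-up rounding must be made explicit in Python, whose built-in \texttt{round()} rounds halves to even (\texttt{round(0.5) == 0}).

```python
import math

def dilation_rates(n, r):
    """Variable dilation factors round(r - n/2 + i) for i = 0..n-1,
    using explicit half-up rounding so that e.g. 0.5 rounds to 1."""
    return [int(math.floor(r - n / 2 + i + 0.5)) for i in range(n)]
```

This reproduces both worked examples from the text: $(n, r) = (5, 3)$ yields $[1, 2, 3, 4, 5]$ and $(n, r) = (3, 3)$ yields $[2, 3, 4]$.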
The second major benefit of RAG-Net\textsubscript{v2} is that it is extremely lightweight, containing 91.04\% fewer parameters than the original RAG-Net architecture (which has 62,352,188 parameters in total) while achieving better segmentation and classification performance. The detailed architectural description and hyper-parameters of RAG-Net\textsubscript{v2} are reported in Table \ref{tab:tab1}, which shows that it contains 5,573,161 learnable and 8,960 non-learnable parameters. Moreover, rather than being trained from scratch, RAG-Net\textsubscript{v2} is fine-tuned on RAG-Net weights (already adjusted in \cite{Hassan2020JBHI} for lesion-aware retinal image analysis) to achieve faster convergence.
\subsection{Estimation of Retinal Profiles}
\noindent The severity of glaucoma can be effectively graded by analyzing RGC atrophy. RGCs primarily exist within the GCC region, which consists of the RNFL, the ganglion cell layer (GCL), and the inner plexiform layer (IPL) \cite{Maetschke2019PLOSONE}. To objectively evaluate glaucoma progression, the proposed system computes the RNFL, GC-IPL, and GCC thickness profiles. The RNFL thickness is computed as the absolute difference between the ILM and the GCL, the GC-IPL thickness as the absolute difference between the GCL and the IPL, and the GCC thickness as the absolute difference between the ILM and the IPL. Afterward, the mean RNFL, GC-IPL, and GCC thickness values are computed from the extracted thickness profiles and passed as a feature vector to the supervised SVM model for grading the glaucomatous progression. These thickness values are chosen as features for the SVM because they reflect the pathological degeneration of RGCs, which can be used for grading glaucoma (predicted through the RAG-Net\textsubscript{v2} classification unit) as early or advanced (more severe).
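The feature computation above can be sketched as follows, assuming each boundary is given as an array of per-column pixel positions (the function name is ours; thickness here is in pixels rather than micrometers):

```python
import numpy as np

def thickness_features(ilm, gcl, ipl):
    """Mean RNFL, GC-IPL, and GCC thickness computed from per-column
    boundary positions; this 3-D vector is the input to the SVM grader."""
    ilm, gcl, ipl = (np.asarray(a, float) for a in (ilm, gcl, ipl))
    rnfl = np.abs(ilm - gcl)       # RNFL thickness:   |ILM - GCL|
    gc_ipl = np.abs(gcl - ipl)     # GC-IPL thickness: |GCL - IPL|
    gcc = np.abs(ilm - ipl)        # GCC thickness:    |ILM - IPL|
    return np.array([rnfl.mean(), gc_ipl.mean(), gcc.mean()])
```

Note that the GCC thickness is, by construction, the sum of the RNFL and GC-IPL thicknesses when the boundaries are ordered ILM, GCL, IPL from top to bottom.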
\subsection{Modified Dice-Entropy Loss Function}
\noindent In order to effectively extract the retinal regions and use them for the RGC-aware classification of glaucomatous scans, RAG-Net\textsubscript{v2} jointly optimizes the dice-entropy loss, a linear combination of the dice and cross-entropy losses, as expressed below:
\begin{eqnarray}
L_{de} & = & \alpha_1 L_d + \alpha_2 L_e
\label{eq:eq3} \\
L_d & = & \frac{1}{N} \sum_{i=1}^N \left(1 - \frac{2 \sum_{j=1}^C t_{i,j} p_{i,j}} {\sum_{j=1}^C t_{i,j}^2 + \sum_{j=1}^C p_{i,j}^2}\right)
\label{eq:eq4} \\
L_e & = & -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C} t_{i,j} \log(p_{i,j})
\label{eq:eq5}
\end{eqnarray}
\noindent where $L_d$ denotes the dice loss, $L_e$ represents the multi-category cross-entropy loss, $t_{i,j}$ represents the true label of the $i^{th}$ sample for the $j^{th}$ class, $p_{i,j}$ denotes the predicted probability of the $i^{th}$ sample belonging to the $j^{th}$ class, $\alpha_{1,2}$ represent the loss weights, $N$ represents the total number of samples in a batch, and $C$ represents the total number of classes. The dice-entropy loss drives RAG-Net\textsubscript{v2} to accurately segment the RNFL and GC-IPL pixels from the other retinal pixels and, at the same time, enables RAG-Net\textsubscript{v2} to robustly classify the ONH SD-OCT scans as healthy or glaucomatous.
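Equations \ref{eq:eq3}--\ref{eq:eq5} can be sketched directly in NumPy. The weights $\alpha_1 = \alpha_2 = 0.5$ and the numerical-stability constant \texttt{eps} are illustrative assumptions; the paper does not report the values used in training.

```python
import numpy as np

def dice_entropy_loss(t, p, a1=0.5, a2=0.5, eps=1e-7):
    """Dice-entropy loss of Eqs. (3)-(5) for one-hot targets t and
    predicted probabilities p, both of shape (N, C)."""
    t, p = np.asarray(t, float), np.asarray(p, float)
    L_d = np.mean(1.0 - 2.0 * (t * p).sum(1)
                  / ((t ** 2).sum(1) + (p ** 2).sum(1)))   # Eq. (4)
    L_e = -np.mean((t * np.log(p + eps)).sum(1))           # Eq. (5)
    return a1 * L_d + a2 * L_e                             # Eq. (3)
```

For a perfect one-hot prediction both terms vanish, while a uniform prediction over $C = 3$ classes incurs a dice loss of 0.5 per sample plus a cross-entropy of $-\log(1/3)$.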
\section{Experimental Setup} \label{sec:expsetup}
\noindent This section presents a detailed description of the dataset and the training protocol. It also presents the evaluation metrics which have been used to validate the performance of the proposed framework.
\subsection{AFIO Dataset}
\noindent The Armed Forces Institute of Ophthalmology (AFIO) dataset, first introduced in \cite{Raja2020DIB}, is a publicly available repository containing high-resolution ONH SD-OCT scans of healthy and glaucomatous subjects. The dataset was acquired at the AFIO Hospital, Rawalpindi, Pakistan. To the best of our knowledge, it is the only dataset that contains OD-centered fundus and ONH-centered SD-OCT scans for each subject along with detailed cup-to-disc markings and annotations from four expert ophthalmologists. The scans within the AFIO dataset were acquired over four years using a Topcon 3D OCT-1000 scanner. Furthermore, all the scans have been thoroughly graded by four expert ophthalmologists in a blind manner (i.e., each grader did not know the grading done by his/her colleagues). All four clinicians are very senior, having 20 to 25 years of professional experience in clinical ophthalmology. The detailed specifications of the AFIO dataset are presented in Table \ref{tab:tab2}.
\begin{table}[htb]
\centering
\caption{AFIO Dataset Specifications}
\begin{tabular}{ll}
\toprule
Acquisition Machine & Topcon 3D OCT 1000 \\\hline
Scan Reference & Optic Nerve Head (ONH) Centered \\
Examination & Dilated Pupil with Ø4.0\,mm (45$^\circ$) Diameter\\
Images & 196 ONH SD-OCT Images\\
Scan Type & B-scan\\
Resolution & $951 \times 456$\\
Subjects & 101\\
Categories & Healthy: 50\\
& Glaucoma: 146\\
\bottomrule
\end{tabular}
\label{tab:tab2}
\end{table}
\subsection{Training Details}
\noindent RAG-Net\textsubscript{v2} in the proposed framework is implemented using Keras APIs on the Anaconda Python 3.7.4 platform\footnote{The source code is available at \url{https://github.com/taimurhassan/rag-net-v2}.}. The training is conducted for 40 epochs, where each epoch lasts 512 iterations, on a machine with an Intel Core i7-9750H@2.6 GHz processor and 32 GB RAM with a single NVIDIA RTX 2080 Max-Q GPU having cuDNN v7.5 and CUDA Toolkit 10.1.243. Optimization during training is performed through ADADELTA \cite{Zeiler2012ADADELTA} with a default learning rate of one and a decay factor of 0.95. Moreover, 70\% of the dataset is used for training and the remaining 30\% for testing, as per the dataset standard \cite{Raja2020DIB}. To compensate for the low number of training scans within the AFIO dataset, we fine-tuned the weights of the original RAG-Net architecture (obtained after training on more than 0.1 million macular OCT scans \cite{Hassan2020JBHI}) and also performed augmentation of the training scans. The data augmentation was performed as follows: first, all the scans were horizontally flipped; then they were rotated by angles between -5 and 5 degrees; finally, zero-mean white Gaussian noise with 0.01 variance was added. The augmentation procedure resulted in a total of 6,028 training scans to fulfill the training requirements.
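The augmentation pipeline can be sketched with NumPy alone. This is a simplified, dependency-free illustration: the nearest-neighbour rotation and the fixed random seed are our choices, as the paper does not specify the rotation interpolation or the random-number handling.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (a stand-in
    for a library rotation routine)."""
    th = np.deg2rad(angle_deg)
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    # inverse mapping: for each output pixel, find its source pixel
    sy = np.rint(cy + (ys - cy) * np.cos(th) - (xs - cx) * np.sin(th)).astype(int)
    sx = np.rint(cx + (ys - cy) * np.sin(th) + (xs - cx) * np.cos(th)).astype(int)
    valid = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def augment(scan, angle_deg=3.0, noise_var=0.01, seed=0):
    """Augmentation described in the text: horizontal flip, rotation in
    [-5, 5] degrees, and zero-mean Gaussian noise with 0.01 variance."""
    rng = np.random.default_rng(seed)
    flipped = scan[:, ::-1]                      # horizontal flip
    rotated = rotate_nn(flipped, angle_deg)      # small rotation
    return rotated + rng.normal(0.0, np.sqrt(noise_var), scan.shape)
```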
\subsection{Evaluation Metrics}
\noindent The proposed framework has been evaluated using a number of metrics described below:
\subsubsection{Confusion Matrix}
The performance of the proposed framework for accurately classifying and grading glaucomatous subjects is measured through the confusion matrix and its associated metrics such as accuracy ($ A_{C} = \frac{T_P + T_N}{T_P + T_N + F_P + F_N}$), recall ($T_{PR} = \frac{T_P}{T_P + F_N}$), specificity ($T_{NR} = \frac{T_N}{T_N + F_P}$), false positive rate ($F_{PR} = \frac{F_P}{T_N + F_P}$), precision ($P_{PV} = \frac{T_P}{T_P + F_P}$), and F\textsubscript{1} score ($F_1 = \frac{2 * P_{PV} * T_{PR}}{P_{PV} + T_{PR}}$), where $T_P$ denotes the true positives, $T_N$ the true negatives, $F_P$ the false positives, and $F_N$ the false negatives. To measure classification performance, $T_P$, $F_P$, $T_N$, and $F_N$ are calculated scan-wise.
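The metrics above follow directly from the four scan-wise counts; a minimal sketch (function and key names are ours):

```python
def classification_metrics(tp, tn, fp, fn):
    """Confusion-matrix metrics computed from scan-wise counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy A_C
    tpr = tp / (tp + fn)                    # recall / sensitivity T_PR
    tnr = tn / (tn + fp)                    # specificity T_NR
    fpr = fp / (tn + fp)                    # false positive rate F_PR
    ppv = tp / (tp + fp)                    # precision P_PV
    f1 = 2 * ppv * tpr / (ppv + tpr)        # F_1 score
    return {"A_C": acc, "T_PR": tpr, "T_NR": tnr,
            "F_PR": fpr, "P_PV": ppv, "F_1": f1}
```

For example, 40 true positives, 45 true negatives, 5 false positives, and 10 false negatives give an accuracy of 0.85 and a recall of 0.8.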
\subsubsection{Receiver Operating Characteristics (ROC) Curve}
ROC curve indicates the capacity of the proposed framework to correctly classify and grade healthy and glaucomatous pathologies at various classification thresholds. Moreover, the performance through ROC curves is quantitatively measured through $AUC$ scores.
\subsubsection{Dice Coefficient ($D_C$)}
The dice coefficient ($D_C$) measures how well the proposed framework segments the RNFL, GC-IPL, and GCC regions as compared to their ground truths, and it is computed as $D_C = \frac{2\,T_P}{2\,T_P + F_N + F_P}$. Here $T_P$, $F_P$, and $F_N$ are calculated pixel-wise, where $T_P$ indicates the correct extraction of positives (RNFL, GC-IPL, and GCC regions), $F_P$ indicates misclassified background pixels, and $F_N$ denotes those positive pixels that have been missed by the proposed framework. Afterward, the mean dice coefficient ($\mu_{DC}$) is computed by averaging the $D_C$ scores scan-wise across the whole dataset.
\subsubsection{Mask Precision}
To further validate the performance of the proposed framework for extracting RNFL, GC-IPL, and GCC regions, we used the mask precision ($m_p$) metric. Unlike the dice coefficient, $m_p$ measures both the capacity of the proposed framework to accurately recognize the RNFL, GC-IPL, and GCC regions and its capacity to extract their corresponding masks. First, the dice coefficient $D_C$ of each extracted region is computed in each image using its ground truth. If $D_C \geq 0.5$, then the $m_p$ (for each region) is computed pixel-wise through $m_{p} = \frac{T_P}{T_P + F_P}$. However, if the dice coefficient is below 0.5, then the whole region is considered as $F_P$, resulting in an $m_p$ score of 0. Moreover, the mean mask precision is computed by averaging $m_p$ over the regions, i.e., $\mu_{mp}=\frac{1}{c}\sum_{k=1}^{c} m_p(k)$, where $c$ denotes the number of classes (regions).
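The thresholded metric can be sketched as follows for a single region, given binary predicted and ground-truth masks (the function name is ours):

```python
import numpy as np

def mask_precision(pred, gt):
    """Mask precision m_p: pixel-wise precision of a region mask when its
    dice coefficient D_C >= 0.5, otherwise 0 (region counted as F_P)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = int(np.sum(pred & gt))     # correctly extracted region pixels
    fp = int(np.sum(pred & ~gt))    # misclassified background pixels
    fn = int(np.sum(~pred & gt))    # missed region pixels
    denom = 2 * tp + fp + fn
    dc = 2 * tp / denom if denom else 0.0
    return tp / (tp + fp) if dc >= 0.5 and (tp + fp) else 0.0
```

The hard 0.5 threshold means a region that is detected but badly localized contributes nothing, so $\mu_{mp}$ penalizes recognition failures more sharply than $\mu_{DC}$ does.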
\subsubsection{Clinical Validation}
Apart from using performance metrics, we clinically validated the glaucomatous screening and grading performance of the proposed framework with four expert ophthalmologists using the standardized Pearson correlation coefficient ($r_c$) and its statistical significance measured through the $p$-value.
\section{Results} \label{sec:results}
\noindent The proposed framework has been thoroughly evaluated on the AFIO dataset for the RGC-aware diagnosis and grading of glaucoma. First, we present an ablative analysis to evaluate different segmentation models for the extraction of RNFL, GC-IPL, and GCC regions. Afterward, we present a detailed comparison of the proposed framework with state-of-the-art solutions for extracting the RGC regions as well as screening glaucomatous subjects. Apart from this, we also present the clinical validation of our RAG-Net\textsubscript{v2} driven grading system with four expert ophthalmologists.
\subsection{Ablation Study}
\noindent The ablative aspect of this research involves choosing the segmentation framework that can most accurately extract the retinal regions, such as the RNFL and GC-IPL, in order to compute the GCC profiles and grade glaucomatous subjects accordingly. For this purpose, we compared the performance of the proposed RAG-Net\textsubscript{v2} segmentation unit with the popular state-of-the-art PSPNet \cite{PSPNet}, SegNet \cite{SegNet}, UNet \cite{unet}, and FCN-(8, 32) \cite{fcn}, as well as our original RAG-Net architecture \cite{Hassan2020JBHI}. The extraction performance is shown in Table \ref{tab:tab3}, where we can observe that RAG-Net\textsubscript{v2} achieved the overall best $\mu_{DC}$ score of 0.8697, leading the second-best FCN-8 \cite{fcn} by 2.78\%. Moreover, the performance of PSPNet \cite{PSPNet} and FCN-8 \cite{fcn} is very close, as FCN-8 \cite{fcn} leads PSPNet \cite{PSPNet} by only 0.035\%. Similarly, FCN-8 \cite{fcn} leads RAG-Net \cite{Hassan2020JBHI} by 7.12\%. Looking at the performance of each model for extracting individual regions, the best score for extracting the RNFL is achieved by FCN-8 \cite{fcn}, though it leads PSPNet \cite{PSPNet} by only 0.011\% and RAG-Net\textsubscript{v2} by 0.651\%. For extracting the GC-IPL, the best performance is achieved by RAG-Net\textsubscript{v2}, leading the second-best FCN-8 \cite{fcn} by 6.28\%. The best performance for extracting GCC regions is also achieved by RAG-Net\textsubscript{v2}, with a $\mu_{DC}$ score of 0.8698. In terms of $\mu_{mp}$, the overall best performance is again achieved by RAG-Net\textsubscript{v2}, as shown in Table \ref{tab:tab4}, leading the second-best FCN-8 by 1.78\%. It leads the second-best FCN-8 by 3.19\% and 3.38\% for extracting the GC-IPL and GCC regions, respectively. However, for RNFL extraction, it lags behind FCN-8 and PSPNet by 1.05\% and 0.76\%, respectively.
Overall, however, RAG-Net\textsubscript{v2} outperforms FCN-8 and PSPNet by a larger margin in extracting the RNFL, GC-IPL, and GCC regions. Figure \ref{fig:fig6} showcases a qualitative comparison of all the hybrid and conventional segmentation models. Here, scans (A), (J), and (S) depict glaucomatous pathologies, whereas scan (AB) depicts a healthy eye. Both RAG-Net\textsubscript{v2} and FCN-8 produced good performance (compared with the ground truths) in extracting the RNFL and GC-IPL regions.
However, RAG-Net\textsubscript{v2} has the upper hand in extracting the GC-IPL boundaries from both healthy and glaucomatous scans (highlighted in green). Furthermore, the original RAG-Net \cite{Hassan2020JBHI} was found limited in accurately extracting the RGC regions, especially in scans (D), (M), (V), and (AE). This limitation arises because RAG-Net \cite{Hassan2020JBHI} cannot adequately differentiate similar textural patterns (often exhibited by retinal layers and boundaries). Because RAG-Net\textsubscript{v2} achieves the overall best performance for extracting RGC regions (as evident from Table \ref{tab:tab3}, Table \ref{tab:tab4}, and Figure \ref{fig:fig6}), besides having the capacity to classify and grade glaucomatous pathologies, we chose it in the proposed framework for further analysis.
\subsection{Extraction of RNFL, GC-IPL and GCC Regions}
\noindent To the best of our knowledge, all the existing methods (except \cite{Maetschke2019PLOSONE}) proposed for the extraction of RNFL, GC-IPL, and GCC regions have been validated either on local in-house datasets or on publicly available datasets that contain only macular OCT scans. Moreover, the framework proposed in \cite{Maetschke2019PLOSONE} is designed for screening normal and glaucomatous pathologies without paying attention to RGC atrophy. The comparison of RAG-Net\textsubscript{v2} with the state-of-the-art literature \cite{Sripad2018BOE, Wang2019BOE} for the extraction of RNFL, GC-IPL, and GCC regions is therefore indirect, as the experimental protocols and datasets differ (see Table \ref{tab:tab9}). Nonetheless, this indirect comparison highlights the capacity of RAG-Net\textsubscript{v2} to extract the retinal regions while simultaneously diagnosing and grading glaucomatous pathologies, as compared to competitive methods. As reported in Table \ref{tab:tab9}, our proposed framework lags behind DRUNET \cite{Sripad2018BOE} by 5.49\% in terms of $\mu_{DC}$. However, their dataset is not public, and its size is almost half that of the AFIO dataset. Furthermore, DRUNET is only a conventional segmentation model and, unlike RAG-Net\textsubscript{v2}, does not possess glaucomatous screening and grading capabilities. The joint segmentation and classification pipeline, termed bi-decision \cite{Wang2019BOE}, achieved a mean dice coefficient of 0.72 for extracting the retinal regions.
However, for glaucoma screening, bi-decision \cite{Wang2019BOE} is only validated on a local dataset (although the authors also verified bi-decision on 110 selected DME-affected scans from the publicly available Duke dataset \cite{Chiu2015BOE}). Here, we want to highlight that we have already validated the original RAG-Net \cite{Hassan2020JBHI} on 43,613 macular OCT scans from five publicly available datasets (including the Duke dataset \cite{Chiu2015BOE}); our emphasis here is on extending our hybrid framework for the RGC-aware diagnosis and grading of glaucoma using high-quality ONH SD-OCT scans from a publicly available dataset, and on enabling reproducible clinical trials.
\begin{table}[htb]
\centering
\caption{Comparison of the proposed framework with DRUNET \cite{Sripad2018BOE} and Bi-decision \cite{Wang2019BOE}. The abbreviations are DS: Dataset Size, DPA: Dataset Publicly Available, EP: Experimentation Protocol, GS: Glaucomatous Screening, GG: Glaucomatous Grading, TR: Training, TE: Testing, CV: Cross-Validation.}
\begin{tabular}{cccc}
\toprule
& Proposed & DRUNET \cite{Sripad2018BOE} & Bi-decision \cite{Wang2019BOE}\\\hline
DS & 196\textsuperscript{\#} (ONH) & 100\textsuperscript{\#} (ONH) & 1,114* \\
DPA & Yes & No & No \\
EP & TR: 137, TE: 59 & TR: 40, TE: 60 & 3-Fold CV\\
GS & Yes & No & Yes \\
GG & Yes & No & No \\
$\mu_{DC}$ & 0.86 & 0.91 & 0.72 \\
\bottomrule
\end{tabular}
\\ \justify * 1,004 are circular OCT scans from the local dataset, and 110 selected macular scans are taken from Duke dataset \cite{Chiu2015BOE} representing DME pathologies.
\\ \textsuperscript{\#} This count represents the dataset size excluding the augmented scans.
\label{tab:tab9}
\end{table}
\begin{table}[htb]
\centering
\caption{Comparison of segmentation models in terms of $\mu_{DC}$ for extracting the RNFL, GC-IPL, and GCC regions.}
\begin{tabular}{ccccc}
\toprule
Framework &RNFL &GC-IPL &GCC &Mean\\ \hline
RAG-Net\textsubscript{v2} & 0.8692 & \textbf{0.8703} & \textbf{0.8698} & \textbf{0.8697} \\
RAG-Net \cite{Hassan2020JBHI} & 0.8192 & 0.7508 & 0.7860 & 0.7853\\
PSPNet \cite{PSPNet} & 0.8748 & 0.8151 & 0.8457 & 0.8452 \\
SegNet \cite{SegNet} & 0.8111 & 0.6945 & 0.7555 & 0.7537 \\
UNet \cite{unet} & 0.8216 & 0.8253 & 0.8234 & 0.8234 \\
FCN-32 \cite{fcn} & 0.8638 & 0.7470 & 0.8083 & 0.8064\\
FCN-8 \cite{fcn} & \textbf{0.8749} & 0.8156 & 0.8460 & 0.8455\\
\bottomrule
\end{tabular}
\label{tab:tab3}
\end{table}
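The $\mu_{DC}$ scores in Table \ref{tab:tab3} follow from the standard Dice computation over binary segmentation masks, averaged across scans. The following is a minimal NumPy sketch with toy masks rather than the actual segmentations:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def mean_dice(preds, truths):
    """Mean Dice score (mu_DC) over a set of scans."""
    return float(np.mean([dice_coefficient(p, t) for p, t in zip(preds, truths)]))

# Toy 4x4 masks (not actual segmentations)
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
print(round(dice_coefficient(a, b), 3))  # 2*4 / (8+4) -> 0.667
```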
\begin{table}[htb]
\centering
\caption{Comparison of segmentation models in terms of $\mu_{mp}$ for extracting the RNFL, GC-IPL, and GCC regions.}
\begin{tabular}{ccccc}
\toprule
Framework &RNFL &GC-IPL &GCC &Mean\\ \hline
RAG-Net\textsubscript{v2} &0.8410 & \textbf{0.8108} & \textbf{0.7915} & \textbf{0.8144} \\
RAG-Net \cite{Hassan2020JBHI} & 0.7661 & 0.6701 & 0.6016 & 0.6792\\
PSPNet \cite{PSPNet} & 0.8475 & 0.7685 & 0.7229 & 0.7796 \\
SegNet \cite{SegNet} & 0.7402 & 0.6978 & 0.6630 & 0.7003 \\
UNet \cite{unet} & 0.8082 & 0.7796 & 0.7291 & 0.7723 \\
FCN-8 \cite{fcn} & \textbf{0.8500} & 0.7849 & 0.7647 & 0.7999\\
FCN-32 \cite{fcn} & 0.7623 & 0.7080 & 0.6867 & 0.7190\\
\bottomrule
\end{tabular}
\label{tab:tab4}
\end{table}
\begin{figure*}[t]
\includegraphics[width=1\linewidth]{Figures/Picture6.png}
\caption{\small Performance comparison of deep segmentation models for the extraction of RNFL (shown in red color) and GC-IPL regions (shown in green color). Left to right, column-wise: Original scans, ground truths, the performance of RAG-Net\textsubscript{v2}, RAG-Net, PSPNet, SegNet, UNet, FCN-8, and FCN-32.}
\centering
\label{fig:fig6}
\end{figure*}
\subsection{Classification of Glaucomatous Scans}
\noindent The thinning of the RNFL, GC-IPL, and GCC profiles highlights the degeneration of RGCs, which directly reflects glaucomatous progression. We have therefore utilized the encoder end of RAG-Net\textsubscript{v2} to perform RGC-aware classification of healthy and glaucomatous pathologies. After training RAG-Net\textsubscript{v2} to extract the RGC regions, the trained weights are used for screening healthy and glaucomatous pathologies through the RAG-Net\textsubscript{v2} classification unit.
The performance of the RAG-Net\textsubscript{v2} classification unit for screening glaucoma is measured through standard metrics such as $A_C$, $T_{PR}$, $T_{NR}$, $P_{PV}$, $AUC$, and the $F_1$ score.
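For reference, these screening metrics follow directly from the confusion-matrix counts. A minimal sketch (the counts below are hypothetical illustrations, not the paper's confusion matrix):

```python
import numpy as np

def screening_metrics(tp, tn, fp, fn):
    """Standard screening metrics from confusion-matrix counts."""
    a_c  = (tp + tn) / (tp + tn + fp + fn)   # accuracy (A_C)
    t_pr = tp / (tp + fn)                    # sensitivity (T_PR)
    t_nr = tn / (tn + fp)                    # specificity (T_NR)
    p_pv = tp / (tp + fp)                    # precision (P_PV)
    f_1  = 2 * p_pv * t_pr / (p_pv + t_pr)   # F1 score
    return {"A_C": a_c, "T_PR": t_pr, "T_NR": t_nr, "P_PV": p_pv, "F_1": f_1}

# Hypothetical counts for illustration
m = screening_metrics(tp=34, tn=22, fp=2, fn=1)
print({k: round(v, 4) for k, v in m.items()})
```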
As reported in Table \ref{tab:tab5}, the proposed framework achieves 0.958\% better accuracy than \cite{Khalil2018Access} and \cite{Khalil2020Wiley}, and 12.5\% better accuracy than \cite{Khalil2017IET}. The comparison with \cite{Khalil2017IET} is indirect as its authors only used fundus imagery for the classification of glaucomatous pathologies\footnote{We also evaluated the RAG-Net\textsubscript{v2} classification unit for screening glaucoma using fundus images. Please see the supplementary material for more details on these additional experiments.}.
However, the comparison with \cite{Khalil2018Access} and \cite{Khalil2020Wiley} is fair and direct as both frameworks were tested on the AFIO dataset using the same experimental protocols. We can further observe the capacity of the proposed framework for screening glaucomatous pathologies through the $T_{PR}$ ratings in Table \ref{tab:tab5}, where RAG-Net\textsubscript{v2} achieves 1.73\% better performance than \cite{Khalil2018Access} and leads \cite{Khalil2020Wiley} by 4.41\%. This performance gain is achieved because RAG-Net\textsubscript{v2} pays attention to the pathological variations of RGCs related to the progression of glaucoma.
The classification performance of RAG-Net\textsubscript{v2} is also evaluated through the ROC curve shown in Figure \ref{fig:fig7}, and compared with \cite{Maetschke2019PLOSONE} in terms of $AUC$ in Table \ref{tab:tab5}, where we can see that RAG-Net\textsubscript{v2} leads \cite{Maetschke2019PLOSONE} by 4.77\%. However, this comparison is also indirect as the two frameworks are tested on different datasets.
\begin{table}[htb]
\centering
\caption{Comparison of classification performance of RAG-Net\textsubscript{v2} with state-of-the-art solutions. Bold indicates the best scores while the second-best scores are underlined.}
\begin{tabular}{ccccccc}
\toprule
Metric & Proposed & \cite{Khalil2018Access} & \cite{Khalil2020Wiley} & \cite{Khalil2017IET} & \cite{Maetschke2019PLOSONE} & \cite{Wang2019BOE}\\ \hline
$A_C$ & \textbf{0.9491} & \underline{0.9400} & \underline{0.9400} & 0.8300 & - & 0.8140 \\
$T_{PR}$ & \textbf{0.9714} & \underline{0.9545} & 0.9285 & 0.8846 & - & - \\
$T_{NR}$ & 0.9166 & \underline{0.9285} & \textbf{0.9545} & 0.7708 & - & - \\
$F_{PR}$ & 0.0834 & \underline{0.0714} & \textbf{0.0455} & 0.2292 & - & - \\
$P_{PV}$ & \underline{0.9444} & \textbf{0.9629} & \textbf{0.9629} & 0.8604 & - & - \\
$F_1$ & \textbf{0.9577} & \underline{0.9453} & \underline{0.9453} & 0.8723 & - & - \\
$AUC$ & \textbf{0.9871} & - & - & - & \underline{0.9400} & 0.8640 \\
\bottomrule
\end{tabular}
\label{tab:tab5}
\end{table}
\begin{figure}[htb]
\centering
\includegraphics[width=0.922\linewidth]{Figures/roc2.png}
\caption{\small ROC curve highlighting the performance of RAG-Net\textsubscript{v2} for classifying and grading glaucoma. }
\label{fig:fig7}
\end{figure}
\subsection{Profile Extraction}
\noindent The novel aspect of the proposed framework is that it can grade glaucomatous progression by analyzing pathological variations of RGCs through the RNFL, GC-IPL, and GCC regions represented within the ONH SD-OCT scans. Table \ref{tab:tab6} reports the mean RNFL, GC-IPL, and GCC thickness ranges for early and advanced stage glaucomatous pathologies.
We can observe that RAG-Net\textsubscript{v2} achieved a mean RNFL thickness of 93.50$\mu m$ for early glaucomatous suspects and 69.46$\mu m$ for the advanced glaucomatous stage. These RNFL thickness ranges were obtained from scans of the publicly available AFIO dataset and confirmed by expert clinicians. They deviate from the ranges defined by \cite{El-Naby2014Egyptian} by 2.67\% and 3.25\% for early and advanced stage glaucoma, respectively. We can also observe that the GC-IPL and GCC profiles provide a unique distinction between glaucoma severity levels and can contribute positively towards grading it.
In Figure \ref{fig:fig8}, we report some qualitative examples highlighting the segmentation performance of RAG-Net\textsubscript{v2}.
The scans in this figure (except the original ones in the first column) are intentionally converted to grayscale so that the extracted regions can be visualized.
Scans (A), (E), and (I) represent normal, early glaucomatous suspect, and advanced-stage glaucoma cases, respectively, from which RAG-Net\textsubscript{v2} has accurately extracted the RNFL, GC-IPL, and GCC profiles. For example, we can see RNFL thinning in scans (F) and (J) as compared to (B), and how precisely it is picked up by RAG-Net\textsubscript{v2}. Also, the yellow color in the scans of Figure \ref{fig:fig8} (except the first column) indicates the overlap of the extracted retinal regions with the ground truth, whereas other colors indicate incorrectly segmented regions (these regions are very small; please zoom in to see them).
\begin{table}[htb]
\centering
\caption{Mean RNFL, GC-IPL, and GCC thickness profiles extracted by the proposed framework for early and advanced glaucomatous pathologies.}
\begin{tabular}{ccc}
\toprule
Mean Thickness & Early & Advanced\\\hline
RNFL & 93.50$\pm$9.84 & 69.46$\pm$5.17\\
GC-IPL & 62.23$\pm$5.67 & 33.96$\pm$7.53\\
GCC & 155.73$\pm$13.10 & 103.42$\pm$10.27\\
RNFL \cite{El-Naby2014Egyptian} & 91.00$\pm$7.28 & 67.20$\pm$7.06\\
\bottomrule
\end{tabular}
\label{tab:tab6}
\end{table}
\begin{figure}[htb]
\includegraphics[width=1\linewidth]{JPGs/J9.jpg}
\caption{\small Examples of normal, early and advanced stage glaucomatous scans from which the RNFL, GC-IPL, and GCC regions are extracted. The scans are intentionally made grayscale to highlight the extraction results as compared to the ground truths. The yellow color indicates complete overlap with the ground truths while other colors indicate incorrect extractions. Please zoom-in to best see the results.}
\centering
\label{fig:fig8}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=0.95\linewidth]{Figures/Picture9A.png}
\caption{\small Distinctive RNFL, GC-IPL, and GCC thickness profiles to discriminate early and advanced stages glaucomatous pathologies.}
\label{fig:fig9}
\end{figure}
\noindent In Figure \ref{fig:fig9} we report plots of the class distribution for the early and advanced glaucomatous classes in the feature space defined by the RNFL, GC-IPL, and GCC profiles. We can observe that the thinning of these profiles clearly distinguishes glaucoma progression. We also notice that the training samples in Figure \ref{fig:fig9} are highly clustered compared to the testing samples; this is because the training samples were generated through data augmentation and are thus highly correlated with the actual samples. Overall, these profiles show clear discrimination and great potential for use with an SVM classifier for grading glaucoma progression.
\subsection{RGC Aware Grading of Glaucoma}
\noindent Using the RNFL, GC-IPL, and GCC profiles as distinctive features, the proposed system provides RGC-aware grading of glaucoma through the SVM classification model. The classification and grading performance of the proposed framework are shown through confusion matrices in Figure \ref{fig:fig10} (A) and Figure \ref{fig:fig10} (B), respectively. Moreover, Figure \ref{fig:fig10} (C) reports the grading performance using only the mean RNFL thickness threshold (see Table \ref{tab:tab6}). The classification between normal and glaucomatous scans is performed by the RAG-Net\textsubscript{v2} classification unit by directly passing the preprocessed ONH SD-OCT scans, whereas the RGC-aware grading of glaucoma as early suspect or advanced stage (reported in Figure \ref{fig:fig10}-B) is performed by an SVM based on the RNFL, GC-IPL, and GCC profiles. In Figure \ref{fig:fig10} (B), we can see that the proposed framework achieved an accuracy of 0.9117 for grading early and advanced stage glaucoma cases, which is 16.123\% superior to the grading approach based on analyzing only the RNFL thickness.
Also, out of the 34 test samples in Figure \ref{fig:fig10} (B), three are falsely graded, i.e., two early cases are graded as severe and one severe case is graded as having early glaucoma symptoms. However, these three misclassifications have limited impact on the overall performance of the proposed system because all three have been correctly classified as having glaucoma. In Figure \ref{fig:fig10} (A), we notice one misclassification (out of 35): a glaucomatous scan predicted as normal by our system. This is a challenging boundary case showing very early glaucomatous symptoms, i.e., a cup-to-disc ratio of 0.325. All four ophthalmologists considered this an early suspect (please see the ophthalmologists' recommendations in the patient record sheet within the dataset for more detail; the misclassified case is named '149155\_20150914\_095855\_Color\_R\_001.jpg').
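The SVM grading stage described above can be sketched as follows. The class samples here are synthetic draws loosely based on the thickness statistics in Table \ref{tab:tab6} (not the paper's data), and the \texttt{SVC} hyperparameters are illustrative defaults rather than the tuned model:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic thickness profiles (RNFL, GC-IPL, GCC, in um), loosely based on
# the mean/std values of Table 6; illustrative only, not the paper's data.
def sample(n, means, stds):
    return rng.normal(means, stds, size=(n, 3))

early    = sample(60, [93.5, 62.2, 155.7], [9.8, 5.7, 13.1])
advanced = sample(60, [69.5, 34.0, 103.4], [5.2, 7.5, 10.3])

X = np.vstack([early, advanced])
y = np.array([0] * 60 + [1] * 60)          # 0: early, 1: advanced

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
probe = np.array([[70.0, 35.0, 104.0]])    # thin profiles: advanced-like
print(clf.predict(probe))                   # should land in the advanced class
```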
\subsection{Clinical Trials}
\noindent We have also performed a series of clinical trials in which we cross-validated the predictions of the proposed framework against the recommendations of expert clinicians. Table \ref{tab:tab7} reports samples of the clinical trials along with the recommendations of the four expert ophthalmologists (publicly available in the dataset package \cite{Raja2020DIB}). Furthermore, we report in Table \ref{tab:tab10} the Pearson correlation analysis, along with its statistical significance, showcasing the clinical validation of the proposed framework against each clinician. $r_c$ ranges from -1 to +1, where -1 indicates a strong negative association between the two entities, +1 a strong positive association, and 0 that the two entities are unrelated. Furthermore, a $p$-value $<$ 0.05 indicates that the obtained $r_c$ score is statistically significant. From Table \ref{tab:tab10}, we can observe that although the recommendations contradict each other (as they are marked by each clinician based on his/her own experience), the proposed framework achieves the highest correlation with Clinician-4 ($r_c = 0.9236$, with a $p$-value of 4.40 $\times$ 10\textsuperscript{-58}). Moreover, the minimum $r_c$ score is achieved with the grading of Clinician-3 ($r_c = 0.6863$), but it is still quite significant. In the $8^{th}$ row of Table \ref{tab:tab7}, we have an exception where all clinicians marked the scan as having early glaucomatous symptoms while the proposed framework grades it as depicting advanced-stage pathology. This indicates that the proposed framework is tuned to prioritize advanced-stage subjects that need immediate attention and treatment to prevent vision loss and blindness. We also report in Table \ref{tab:tab7} the fundus scan for each case to help readers (and other clinicians) cross-verify the predictions made by the proposed framework by correlating the cup-to-disc ratios.
Through these fundus scans, we can also notice that the ONH SD-OCT scans graded as having early or advanced stage glaucoma typically contain a cup-to-disc ratio of 0.6 or above, which is normally considered indicative of glaucoma \cite{Illinois}.
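As an illustration, encoding the ten sample grades of Table \ref{tab:tab7} as H$=0$, EG$=1$, AG$=2$, the correlation between the proposed framework and Clinician-4 on this excerpt can be computed with \texttt{scipy.stats.pearsonr} (the values in Table \ref{tab:tab10} are computed over the full test set, so they differ from this ten-sample excerpt):

```python
from scipy.stats import pearsonr

# Grades from Table 7, encoded as H=0, EG=1, AG=2
framework  = [2, 2, 1, 1, 1, 0, 2, 2, 0, 0]   # PF column
clinician4 = [2, 2, 1, 1, 1, 0, 2, 1, 0, 0]   # C4 column

r_c, p_value = pearsonr(framework, clinician4)
print(round(r_c, 4), p_value < 0.05)  # r_c ~ 0.9325 on this excerpt
```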
\begin{figure}[t]
\includegraphics[width=1\linewidth]{Figures/Picture7A.png}
\caption{\small Confusion matrix representing (A) classification of healthy and glaucomatous ONH SD-OCT scans, (B) grading of early suspects, and advanced-stage glaucoma, (C) glaucomatous grading using mean RNFL thickness threshold. }
\centering
\label{fig:fig10}
\end{figure}
\begin{table}[htb]
\centering
\caption{Clinical validation of the proposed framework for glaucomatous screening with respect to the recommendation from four expert ophthalmologists. C1: Clinician-1, C2: Clinician-2, C3: Clinician-3, C4: Clinician-4, PF: Proposed Framework, AG: Advanced Glaucoma, EG: Early Glaucoma, H: Healthy. Fundus scans are provided with each ONH scan for reader cross-verification.}
\begin{tabular}{>{\centering\arraybackslash} m{2.9cm} >{\centering\arraybackslash} m{1.5cm} >{\centering\arraybackslash} m{.25cm} >{\centering\arraybackslash} m{.25cm} >{\centering\arraybackslash} m{.25cm} >{\centering\arraybackslash} m{.25cm} >{\centering\arraybackslash} m{.25cm}}
\toprule
\vspace{0.25cm}ONH Scan\vspace{0.25cm} & \vspace{0.25cm}Fundus\vspace{0.25cm} & \vspace{0.25cm}C1\vspace{0.25cm} & \vspace{0.25cm}C2\vspace{0.25cm} & \vspace{0.25cm}C3\vspace{0.25cm} & \vspace{0.25cm}C4\vspace{0.25cm} & \vspace{0.25cm}PF\vspace{0.25cm}\\\hline
\vspace{0.15cm}\includegraphics[scale=0.0875]{TableVII/1o.jpg} & \vspace{0.15cm}\includegraphics[scale=0.04375]{TableVII/1f.jpg} & \vspace{0.15cm}AG & \vspace{0.15cm}H & \vspace{0.15cm}EG & \vspace{0.15cm}AG & \vspace{0.15cm}AG\\
\includegraphics[scale=0.0875]{TableVII/2o.jpg} & \includegraphics[scale=0.04375]{TableVII/2f.jpg} & AG & H & EG & AG & AG\\
\includegraphics[scale=0.0875]{TableVII/3o.jpg} & \includegraphics[scale=0.04375]{TableVII/3f.jpg} & EG & EG & EG & EG & EG\\
\includegraphics[scale=0.0875]{TableVII/4o.jpg} & \includegraphics[scale=0.04375]{TableVII/4f.jpg} & EG & EG & EG & EG & EG\\
\includegraphics[scale=0.0875]{TableVII/5o.jpg} & \includegraphics[scale=0.04375]{TableVII/5f.jpg} & EG & EG & H & EG & EG\\
\includegraphics[scale=0.0875]{TableVII/6o.jpg} & \includegraphics[scale=0.04375]{TableVII/6f.jpg} & H & H & H & H & H\\
\includegraphics[scale=0.0875]{TableVII/7o.jpg} & \includegraphics[scale=0.04375]{TableVII/7f.jpg} & AG & AG & AG & AG & AG\\
\includegraphics[scale=0.0875]{TableVII/8o.jpg} & \includegraphics[scale=0.04375]{TableVII/8f.jpg} & EG & EG & EG & EG & AG\\
\includegraphics[scale=0.0875]{TableVII/9o.jpg} & \includegraphics[scale=0.04375]{TableVII/9f.jpg} & H & H & EG & H & H\\
\includegraphics[scale=0.0875]{TableVII/10o.jpg} & \includegraphics[scale=0.04375]{TableVII/10f.jpg} & H & H & H & H & H\\
\bottomrule
\end{tabular}
\label{tab:tab7}
\end{table}
\begin{table}[htb]
\centering
\caption{Clinical validation of the proposed framework through the Pearson correlation coefficient ($r_c$) and its statistical significance ($p$-value).}
\begin{tabular}{ccccc}
\toprule
Metric & Clinician-1 & Clinician-2 & Clinician-3 &Clinician-4 \\ \hline
$r_c$ & 0.7260 & 0.8416 & 0.6863 & 0.9236 \\
$p$-value & 1.04 $\times$ 10\textsuperscript{-23} & 6.10 $\times$ 10\textsuperscript{-38} & 2.10 $\times$ 10\textsuperscript{-20} & 4.40 $\times$ 10\textsuperscript{-58}\\
\bottomrule
\end{tabular}
\label{tab:tab10}
\end{table}
\section{Discussion and Conclusion} \label{sec:discussion}
\noindent In this work, we proposed a fully automated system for the classification and grading of glaucoma from ONH SD-OCT scans. Unlike other frameworks that rely on cup-to-disc ratios for screening glaucoma, the proposed system analyzes the pathological changes related to the degeneration of RGCs through the RNFL, GC-IPL, and GCC thickness profiles. We propose a hybrid convolutional network (RAG-Net\textsubscript{v2}) for the extraction of these profiles, coupled with an SVM classifier for the classification and grading of healthy and glaucomatous pathologies. The experiments evidenced the superiority of our framework in screening early and advanced glaucoma cases over state-of-the-art solutions relying on cup-to-disc ratios, as reflected by an $F_1$ score of 0.9577. The preeminence of our system emanates from the newly proposed architectural variants in RAG-Net\textsubscript{v2}, which integrate contextual-aware modules, built on residual atrous convolutions, along with the feature pyramid block. These variants boosted the capacity of RAG-Net\textsubscript{v2} for discriminating the retinal regions, as reflected by the $\mu_{DC}$ score of 0.8697, outperforming popular deep segmentation models. Apart from this, RAG-Net\textsubscript{v2} significantly reduces the total number of trainable and non-trainable parameters by 91.04\% as compared to the original RAG-Net architecture. This improvement also stems from the contextual-aware modules replacing the standard convolutional blocks with atrous convolutions, making RAG-Net\textsubscript{v2} a lightweight architecture for screening and monitoring glaucoma progression.
The introduction of contextual-aware modules in RAG-Net\textsubscript{v2} addresses the limitations of the original RAG-Net architecture which, while outperforming state-of-the-art frameworks for lesion extraction, showed limitations in differentiating similar textural patterns in retinal layers and boundaries, as shown in Figure \ref{fig:fig6}. However, the implications of introducing contextual-aware modules need to be thoroughly tested for lesion extraction from macular pathologies if we want to extend the modified RAG-Net architecture to the analysis of lesion-aware maculopathy. This investigation will be part of our future work.
\section*{Acknowledgment}
\noindent{This work is supported by a research fund from Khalifa University: Ref: CIRA-2019-047. We are also thankful to the four expert ophthalmologists for providing the clinical validations of the retinal scans within the AFIO dataset.}
\small
\section{Introduction}
\label{sec:intro}
\input{sections/introduction2.tex}
\input{sections/it_measures2.tex}
\label{sec:it}
\section{Multivariate Gaussianization}
\label{sec:gauss_trans}
\input{sections/gaussian_transformations2.tex}
\section{RBIG-based Estimators}
In the following subsections we propose RBIG-based estimators for different information-theoretic quantities: $T$, $H$, $D_{KL}$, and $I$. In every case, we follow this structure:
\begin{enumerate}
\item We derive the RBIG-based estimator.
\item We validate its performance by comparing with alternative estimators on synthetic data drawn from PDFs for which the analytical result is known.
\item We give examples of applicability in practical problems with real-world data where the PDF is unknown.
\end{enumerate}
\subsection{Total Correlation, \texorpdfstring{$T({\mathbf x})$}{T(x)}}
\label{sec:TC}
\input{sections/total_correlation2.tex}
\subsection{Entropy, \texorpdfstring{$H({\mathbf x})$}{H(x)}}
\label{sec:entropy}
\input{sections/entropy2.tex}
\subsection{Kullback-Leibler Divergence, \texorpdfstring{${\textrm{KL}}({\mathbf x}|{\mathbf y})$ }{KL(x|y)} }
\label{sec:kld}
\input{sections/kld2.tex}
\subsection{Mutual Information, \texorpdfstring{$I({\mathbf x},{\mathbf y})$}{I(x,y)}}
\label{sec:MI}
\input{sections/mutual_information2.tex}
\section{Concluding Remarks}
\input{sections/conclusions2.tex}
\label{sec:conclusions}
\bibliographystyle{IEEEtran}
\subsection{Total correlation}
\subsubsection{Gaussian}
For a Gaussian distribution, ${\mathbf x} \sim \mathcal{N}(\mu,\Sigma)$, the analytic value of the total correlation is:
\begin{eqnarray*}
T({\mathbf x}) &=& \sum_{i=1}^{D_x} H(x_i) - H({\mathbf x}) \\
&=& \sum_{i=1}^{D_x} \frac{1}{2} \log(2 e \pi \sigma_i^2) - \frac{1}{2} \log(|2 e \pi \Sigma|) \\
&=& \sum_{i=1}^{D_x} \log (\sigma_i) - \frac{1}{2}\log(|\Sigma|). \\
\end{eqnarray*}
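This closed form is easy to verify numerically; a quick NumPy check with an arbitrary positive-definite covariance (ours for illustration), cross-checked against the entropy-based definition:

```python
import numpy as np

# T(x) = sum_i log(sigma_i) - 0.5 * log|Sigma|  (in nats)
Sigma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 0.5]])
sigmas = np.sqrt(np.diag(Sigma))

T = float(np.sum(np.log(sigmas)) - 0.5 * np.log(np.linalg.det(Sigma)))

# Cross-check via the entropy definition T = sum_i H(x_i) - H(x)
D = Sigma.shape[0]
H_marg  = 0.5 * np.log(2 * np.e * np.pi * sigmas**2).sum()
H_joint = 0.5 * np.log((2 * np.e * np.pi) ** D * np.linalg.det(Sigma))
print(round(T, 4), np.isclose(H_marg - H_joint, T))
```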
\subsubsection{Linearly transformed uniform distribution}
The total correlation of a linearly transformed uniform distribution can be computed using the variation of total correlation under transformations defined in Eq.~\eqref{deltaT}.
By starting from a multidimensional distribution whose marginals are independent, i.e., $T({\mathbf x}) = 0$, we can easily compute the total correlation of a rotated and scaled version, ${\mathbf y} = \bf{M} {\mathbf x}$:
\begin{equation*}
T({\mathbf y}) = \sum_{i=1}^{D_y} H(y_i) - \sum_{i=1}^{D_x} H(x_i) - \frac{1}{2}\log(|\bf{M}^\top\bf{M}|).
\label{TC_rot}
\end{equation*}
Since we know the applied linear transformation, we can compute $\log(|\bf{M}^\top\bf{M}|)$ exactly. The marginal entropies of the transformed data are computed empirically from a dataset of $5 \cdot 10^5$ samples.
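A sketch of this reference computation, using a histogram-based marginal entropy estimator (the estimator choice and the matrix $\bf{M}$ below are ours for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def marginal_entropy(samples, bins=200):
    """Histogram estimate of 1-D differential entropy (nats)."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    w = np.diff(edges)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz]) * w[nz]))

# Independent uniforms on [0,1): T(x) = 0 and H(x_i) = 0
X = rng.random((500_000, 2))
M = np.array([[1.5, 0.8],
              [0.2, 1.1]])
Y = X @ M.T

T_y = (sum(marginal_entropy(Y[:, i]) for i in range(2))
       - sum(marginal_entropy(X[:, i]) for i in range(2))
       - 0.5 * np.log(np.linalg.det(M.T @ M)))
print(round(T_y, 3))  # strictly positive: mixing creates dependence
```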
\subsubsection{Multivariate t-Student}
In \cite{Guerrero-Cusumano1998}, the total correlation shared by the components of a multivariate t-Student distribution of dimension $D$, with $\nu$ degrees of freedom and scale matrix (or shape matrix) ${\mathbf A}$, was given as:
\begin{eqnarray*}
T(D,\nu,{\mathbf A}) &=& -\frac{1}{2} \log(\det({\mathbf A})) \\
&+& \log\left[ \frac{\Gamma(\frac{D}{2})[\beta(\frac{\nu+1}{2},\frac{1}{2})]^D}{(\pi)^{\frac{D}{2}}\beta(\frac{\nu+D}{2},\frac{D}{2})} \right] \\
&+& \frac{D(\nu+1)}{2} \left[ \psi \left(\frac{\nu+1}{2} \right) - \psi \left(\frac{\nu}{2} \right)\right]\\
&-& \frac{\nu+D}{2} \left[ \psi \left(\frac{\nu+D}{2} \right) - \psi \left(\frac{\nu}{2} \right)\right].
\end{eqnarray*}
\subsection{Entropy}
\subsubsection{Gaussian}
The entropy of a Gaussian distribution, ${\mathbf x} \sim \mathcal{N}(\mu,\Sigma)$, can be computed in nats as:
\begin{eqnarray*}
H({\mathbf x}) &=& \frac{D_x}{2} + \frac{D_x}{2}\log(2 \pi) + \frac{1}{2} \log(|\Sigma|).
\end{eqnarray*}
\subsubsection{Linearly transformed uniform distribution}
As for the total correlation, we use a linearly transformed version of a multidimensional uniform distribution whose marginals are independent, i.e., $T({\mathbf x}) = 0$. Therefore, using Eqs.~\eqref{eq:TC} and \eqref{deltaT}, we can compute a reference based only on marginal entropy estimations as follows:
\begin{eqnarray*}
H({\mathbf y}) &=& \sum_{i=1}^{D_y} H(y_i) - T({\mathbf y}) \\
&=& \sum_{i=1}^{D_x} H(x_i) + {\frac{1}{2}\log(|\bf{M}^\top \bf{M}|)}.
\end{eqnarray*}
This reference is exact when the $H(x_i)$ are analytic. In our case, $\bf{M}$ is square.
\subsubsection{Multivariate Student}
In \cite{GUERREROCUSUMANO1996}, an analytical expression for the entropy of a multivariate Student distribution was given:
\begin{eqnarray*}
H(D,\nu,{\mathbf A}) &=& \frac{1}{2} \log(|{\mathbf A}|) + \log\left[ \frac{(\nu\pi)^{\frac{D}{2}}}{\Gamma(\frac{D}{2})} \beta \left(\frac{D}{2},\frac{\nu}{2}\right) \right] \\
&+& \frac{\nu+D}{2} \left[ \psi\left(\frac{\nu+D}{2}\right) - \psi\left(\frac{\nu}{2}\right)\right].
\end{eqnarray*}
\subsection{Kullback-Leibler Divergence}
\subsubsection{Gaussian}
$D_{\textrm{KL}}$ between two Gaussians ${\mathbf x}_1 \sim {\mathcal N}(\mu_1\mathbf{1},\Sigma_1)$ and ${\mathbf x}_2 \sim {\mathcal N}(\mu_2\mathbf{1},\Sigma_2)$ can be calculated as
\begin{equation*}
{{D}_{\textrm{KL}}({\mathbf x}_1|{\mathbf x}_2) = \\
\frac{1}{2} \left( \text{tr}(\Sigma_2^{-1}\Sigma_1) + P - D + \log\frac{|\Sigma_2|}{|\Sigma_1|} \right),}
\label{eq:KLD_gauss}
\end{equation*}
where $\text{tr}(\cdot)$ is the trace function, $P = (\mu_1\mathbf{1} - \mu_2\mathbf{1})^{\top} \Sigma_2^{-1} (\mu_1\mathbf{1} - \mu_2\mathbf{1})$, and $\mathbf{1}$ is a vector of ones.
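A direct NumPy sketch of the Gaussian $D_{\textrm{KL}}$ closed form (in nats), using the standard mean-difference term, with two sanity checks:

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    """Closed-form D_KL( N(mu1,S1) || N(mu2,S2) ) in nats."""
    D = len(mu1)
    S2_inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(S2_inv @ S1) + diff @ S2_inv @ diff - D
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

I2 = np.eye(2)
# Identical distributions -> 0; unit mean shift with identity covariances -> 0.5
print(kl_gauss(np.zeros(2), I2, np.zeros(2), I2))
print(kl_gauss(np.array([1.0, 0.0]), I2, np.zeros(2), I2))
```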
\subsubsection{Multivariate Student}
The $D_{\textrm{KL}}$ between two multivariate Student-t distributions (${\mathbf x}_1 \sim f_1(\mu_1=0,\Sigma_1=\mathbf{I},\nu_1)$ and ${\mathbf x}_2 \sim f_2(\mu_2=0,\Sigma_2=\mathbf{I},\nu_2)$) can be calculated as (\cite{Villa2018} eq. 8):
\begin{eqnarray*}
{D}_{\textrm{KL}}({\mathbf x}_1|{\mathbf x}_2) = \log\bigg(\frac{{K}_1}{{K}_2}\bigg) - E_1 + E_2,
\label{eq:KLD_student}
\end{eqnarray*}
with
\begin{eqnarray*}
K_1 = \dfrac{\Gamma(\dfrac{\nu_1+D}{2})}{\Gamma(\dfrac{\nu_1}{2}) \sqrt{(\pi\nu_1)^D}}
\end{eqnarray*}
and
\begin{eqnarray*}
E_1 &=& \frac{\nu_1 + D}{2} \mathbb{E}_{D,\nu_1} \left[ \log\bigg(1+ \frac{{\mathbf X}_1^\top {\mathbf X}_1}{\nu_1} \bigg)\right] \\
&=& \dfrac{\nu_1 + D}{2} \left( \Psi\bigg(\frac{\nu_1+D}{2}\bigg) - \Psi\bigg(\dfrac{\nu_1}{2}\bigg) \right),
\end{eqnarray*}
where $K_2$ and $E_2$ are the analogous quantities for ${\mathbf x}_2$ (see \cite{Villa2018}).
\subsubsection{Estimation using RBIG}
We build upon the straightforward estimation of the total correlation $T$ with RBIG in Eq.~\ref{Estim_T}, and the relation given in Eq.~\ref{eq:TC}, to define the RBIG-based estimator for $H$:
\begin{equation}
\Tilde{H}({\mathbf x}) = \sum_{i=1}^{D_x} \Tilde{H}(x_i) - \Tilde{T}({\mathbf x}).
\end{equation}
Note that $\Tilde{T}$ only involves marginal entropy estimations, and the only extra computations required for $\Tilde{H}({\mathbf x})$ are also marginal, namely the $\Tilde{H}(x_i)$.
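The decomposition $\Tilde{H}({\mathbf x}) = \sum_i \Tilde{H}(x_i) - \Tilde{T}({\mathbf x})$ can be illustrated as follows. Since RBIG itself is not reproduced here, we substitute the analytic Gaussian value for $\Tilde{T}$ and estimate the marginal entropies with a simple histogram estimator (both stand-ins are our choices for illustration):

```python
import numpy as np

def marginal_entropy(samples, bins=200):
    """Histogram estimate of 1-D differential entropy (nats)."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    w = np.diff(edges)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz]) * w[nz]))

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.7],
                  [0.7, 1.0]])
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)

# Stand-in for the RBIG estimate of T: the analytic Gaussian value
T_hat = -0.5 * np.log(np.linalg.det(Sigma))

H_hat  = sum(marginal_entropy(X[:, i]) for i in range(2)) - T_hat
H_true = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(Sigma))
print(round(H_hat, 3), round(H_true, 3))  # estimates should nearly agree
```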
\subsubsection{Validation on known PDFs}
Here we validate the performance of the proposed estimator on datasets with the same configuration as described in Section \ref{sec:val_TC}.
See the details of the synthetic datasets in Appendix \ref{app:formulas}.
Table \ref{tab:Entropy} summarizes the bias (percentage of error) for a fixed number of samples.
Exhaustive results (bias and variance as a function of the kind of distribution, dimension and number of samples) are given in the supplementary material.
Results show again that RBIG estimation obtains good performance for all considered distributions.
The relative error
is under $5\%$ in most of the cases and below $20\%$ in all cases.
In the Gaussian case, RBIG outperforms the other methods and obtains accuracy similar to the estimator that assumes a Gaussian distribution.
For the other distributions, RBIG is the best in most cases, and close to the best when it is not.
Again, the performance of RBIG is clearly better in high-dimensional settings.
\begin{table}[b!]
\begin{centering}
\caption {Relative mean absolute errors in percentage for entropy estimation on known PDFs. Best value in dark gray, second best value in bright gray.}\label{tab:Entropy}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& & $D_x$ & \textbf{RBIG} & \textbf{kNN} & \textbf{KDP} & \textbf{expF} & \textbf{vME} & \textbf{Ens} \\
\hline
\parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Gaussian}}} & & 3 & \cellcolor[HTML]{656565}0.90 & 1.70 & 112.30 & \cellcolor[HTML]{C0C0C0}1.10 & 8.80 & 12.00 \\
& & 10 & \cellcolor[HTML]{C0C0C0}1.20 & 27.90 & 179.80 & \cellcolor[HTML]{656565}0.10 & 34.70 & 40.30 \\
& & 50 & \cellcolor[HTML]{C0C0C0}0.90 & 32.20 & 107.40 & \cellcolor[HTML]{656565}0.10 & 108.40 & 38.10 \\
& & 100 & \cellcolor[HTML]{C0C0C0}0.70 & 30.70 & 89.60 & \cellcolor[HTML]{656565}0.10 & 94.20 & 34.60 \\
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Rotated}}} & & 3 & \cellcolor[HTML]{C0C0C0}4.00 & 6.20 & 171.40 & 36.80 & \cellcolor[HTML]{656565}3.70 & 30.10 \\
& & 10 & \cellcolor[HTML]{656565}12.20 & 38.50 & 241.80 & 17.90 & \cellcolor[HTML]{C0C0C0}31.80 & 53.90 \\
& & 50 & \cellcolor[HTML]{656565}6.80 & 44.60 & 136.60 & \cellcolor[HTML]{C0C0C0}13.50 & 87.90 & 51.60 \\
& & 100 & \cellcolor[HTML]{656565}5.50 & 42.50 & 110.70 & \cellcolor[HTML]{C0C0C0}11.50 & 94.30 & 47.20 \\
\hline
\parbox[t]{2mm}{\multirow{14}{*}{\rotatebox[origin=c]{90}{Student}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=3$}}} & 3 & \cellcolor[HTML]{656565}0.75 & \cellcolor[HTML]{C0C0C0}0.76 & 34.93 & 11.90 & 3.13 & 1.99 \\
& & 10 & 2.82 & \cellcolor[HTML]{656565}1.44 & 137.19 & 15.55 & 53.19 & \cellcolor[HTML]{C0C0C0}1.77 \\
& & 50 & \cellcolor[HTML]{C0C0C0}6.03 & \cellcolor[HTML]{656565}3.47 & 195.30 & 22.11 & 175.94 & 7.09 \\
& & 100 & \cellcolor[HTML]{656565}6.61 & \cellcolor[HTML]{C0C0C0}8.57 & 228.62 & 24.44 & 166.09 & 13.72 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=5$}}} & 3 & \cellcolor[HTML]{656565}0.51 & \cellcolor[HTML]{C0C0C0}0.70 & 24.75 & 3.42 & 1.38 & 1.94 \\
& & 10 & \cellcolor[HTML]{656565}1.12 & 1.25 & 96.73 & 5.52 & 59.29 & \cellcolor[HTML]{C0C0C0}1.21 \\
& & 50 & \cellcolor[HTML]{656565}2.82 & \cellcolor[HTML]{C0C0C0}4.84 & 146.63 & 9.59 & 202.48 & 8.89 \\
& & 100 & \cellcolor[HTML]{656565}3.11 & \cellcolor[HTML]{C0C0C0}10.67 & 184.02 & 11.36 & 195.17 & 16.24 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=20$}}} & 3 & \cellcolor[HTML]{656565}0.54 & \cellcolor[HTML]{C0C0C0}0.71 & 19.19 & 0.76 & 1.32 & 1.56 \\
& & 10 & \cellcolor[HTML]{C0C0C0}0.48 & 0.84 & 69.83 & 1.56 & 46.62 & \cellcolor[HTML]{656565}0.37 \\
& & 50 & \cellcolor[HTML]{656565}0.95 & 6.66 & 107.74 & \cellcolor[HTML]{C0C0C0}3.31 & 219.86 & 11.13 \\
& & 100 & \cellcolor[HTML]{656565}0.68 & 13.45 & 138.98 & \cellcolor[HTML]{C0C0C0}4.20 & 214.41 & 19.35 \\
\hline
\end{tabular}
\end{centering}
\end{table}
\subsubsection{Experiments on real-world data: geosciences}
Entropy computed with RBIG has already been applied in geoscience and remote sensing for Earth observation. For instance, in \cite{Laparra2015}, entropy was used to analyze the optimal configuration of spectral and spatial resolution for a hyperspectral image sensor. In \cite{Johnson18EGU}, entropy was used to analyze the effect of different spatio-temporal configurations for studying global essential climate variables related to the water cycle, such as evaporation, precipitation, and soil moisture.
Here we show a novel example where we analyze the spatial patterns of air temperature globally.
This example illustrates the kind of interesting regularities that may be discovered by RBIG-based entropy measures.
We use data from the Earth System Data Lab (ESDL)\footnote{\url{https://www.earthsystemdatalab.net/}}~\cite{Mahecha19esdc}. These data consist of 3-dimensional cubes (two spatial dimensions and one temporal dimension) for different variables. In our case, we select the air temperature at 2 m (ERA-Interim) for the years 2001-2008 and the whole globe. For each temporal acquisition, we have a temperature product with $1440 \times 720$ spatial samples (on a 0.25 degree grid resolution) sampled weekly (46 images per year).
Figure \ref{fig:Real_entropy} shows the results of computing the entropy for patches of $5 \times 5$ spatial regions for each temporal acquisition. Results for the mean temperature of the whole globe are given for reference. It can be seen that the seasonal cycle is clear for temperature (left) and entropy (middle). However, while temperature displays one cycle per year as expected, the spatial entropy displays two cycles per year.
The figure on the right shows the relation between the mean temperature and the spatial entropy, which follows a cyclic pattern.
This means that the variability in the spatial patterns of the temperature behaves differently from the mean temperature.
The spatial patterns are less variable during winter and summer, and more variable in spring and autumn.
This kind of analysis can help to understand the spatial behavior of terrestrial ecosystems, for example, and could help in characterizing their variability and dynamics.
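The patch-based analysis above can be sketched in a few lines. The snippet below is a minimal illustration on a synthetic cube: it treats each non-overlapping $5\times5$ block as a 25-dimensional sample and uses a plug-in Gaussian entropy as a stand-in for the RBIG estimator (the function names and the Gaussian surrogate are assumptions for illustration, not the actual pipeline):

```python
import numpy as np

def gaussian_entropy(samples):
    """Plug-in entropy under a Gaussian assumption, H = 0.5*log((2*pi*e)^D |Sigma|).
    Stand-in for the RBIG entropy estimator used in the paper."""
    d = samples.shape[1]
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(d)  # regularize
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def spatial_entropy_series(cube, patch=5):
    """cube: (T, H, W) array of temperature maps. Returns one entropy value per
    time step, treating each non-overlapping patch x patch block as a sample."""
    t, h, w = cube.shape
    h, w = h - h % patch, w - w % patch   # crop to a multiple of the patch size
    out = []
    for frame in cube[:, :h, :w]:
        blocks = frame.reshape(h // patch, patch, w // patch, patch)
        samples = blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
        out.append(gaussian_entropy(samples))
    return np.array(out)
```

Applying this per time step to the real temperature cube yields an entropy series like the one in Fig.~\ref{fig:Real_entropy} (center), up to the choice of entropy estimator.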
\begin{figure*}
\centering
\begin{tabular}{m{55mm}m{55mm}m{55mm}}
\includegraphics[width=\linewidth]{Figures/experiments/FIGS_entropy/RBIG4IT_H_mean_evolution_spat_res_5.png} &
\includegraphics[width=\linewidth]{Figures/experiments/FIGS_entropy/RBIG4IT_H_entropy_evolution_spat_res_5.png} &
\includegraphics[width=\linewidth]{Figures/experiments/FIGS_entropy/RBIG4IT_H_entropy_vs_temperature_2_spat_5.png}
\end{tabular}
\caption{Regularities in geoscience data from RBIG estimations of entropy. Left: mean global temperature at each time step. Center: Spatial entropy at each time step (see text for details). Right: Evolution of the relation between temperature and spatial entropy for 2001-2008. The mean for all the studied years is given in the thicker line which colors correspond to the distance between the Earth and the sun for this period of time (blue closer, yellow farther away).}\label{fig:Real_entropy}
\end{figure*}
\subsection{Gaussianization is useful to estimate information}
If Gaussianization mappings are differentiable, as is usually the case~\cite{Friedman74,Friedman84,Chen2001,Lyu2009,Laparra2011b,Johnson19,Inouye2018DeepDD,Kingma2014VAES,Rezende2015NF,BalleLS15}, they may be useful to estimate ITMs for two reasons:
(1)~these mappings can be used first to estimate the PDFs via the standard transformation of probability under smooth mappings~\cite{Stark95}, $p({\mathbf x}) = |\nabla G_x({\mathbf x})| \, \mathcal{N}({\mathbf x}',\mathbf{0},\mathbf{I})$, and afterwards to estimate the corresponding measure from the PDFs; and
(2)~they can be used to estimate the total correlation from data samples, and then the remaining information-theoretic measures can be related to $T$.
Gaussianization is appropriate to compute $T({\mathbf x})$ because the components of its output, ${\mathbf x}' \sim \mathcal{N}({\mathbf x}',\mathbf{0},\mathbf{I})$, are statistically independent, and hence $T({\mathbf x}')=0$. Therefore, the variation of total correlation under smooth mappings~\cite{Lyu2009}, which for transforms that do not preserve dimension (rectangular Jacobian $\nabla G$) generalizes to,
\begin{eqnarray}
\Delta T({\mathbf x},{\mathbf x}') \!\! &=& \!\! T({\mathbf x}) - T({\mathbf x}') \nonumber \\
\!\! &=& \!\! \sum_{i=1}^{D_x} H(x_i) - \sum_{i=1}^{D_{x'}} H(x'_i) \label{deltaT} \\ \!\! & & \!\! + \frac{1}{2}\mathbb{E}_x \Big( \log |\nabla G_x({\mathbf x})^\top \nabla G_x({\mathbf x})| \Big) \nonumber
\end{eqnarray}
in the case of Gaussianization, where $D_{x'}=D_{x}$ and $T({\mathbf x}')=0$, leads to:
\begin{equation}
T({\mathbf x}) = \sum_{i=1}^{D_x} H(x_i) - \frac{D_x}{2} \log(2\pi e) + \mathbb{E}_x \Big( \log |\nabla G_x({\mathbf x})| \Big) \label{eq:T_definition}
\end{equation}
These two strategies (either estimating the PDF, or estimating $T$) illustrate the theoretical and practical interest of differentiable Gaussianization transforms. However, direct application of these ideas to an arbitrary Gaussianization may not be straightforward. In the first strategy, errors in the estimated PDF may accumulate when integrating along the domain, and multidimensional integration is not trivial. In the second strategy, Eq.~\eqref{deltaT} relies on averaging the Jacobian matrix $\nabla G_x({\mathbf x})$ over all the samples, a multivariate computation that is time consuming and prone to estimation errors.
Therefore, while all differentiable Gaussianizations are in principle applicable for information estimation, only certain transforms can overcome the above issues.
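The two quantities in Eq.~\eqref{eq:T_definition} can be cross-checked numerically in the one setting where everything is closed-form: a correlated Gaussian, for which the linear whitening map $G({\mathbf x})=\Sigma^{-1/2}{\mathbf x}$ is an exact Gaussianization with constant Jacobian. The sketch below is for illustration only (the whitening transform is an assumption, not RBIG):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3-D Gaussian with covariance Sigma
A = rng.normal(size=(3, 3))
Sigma = A @ A.T

# Ground truth: T = sum_i H(x_i) - H(x) = 0.5*(sum_i log Sigma_ii - log|Sigma|)
sign, logdet = np.linalg.slogdet(Sigma)
T_true = 0.5 * (np.sum(np.log(np.diag(Sigma))) - logdet)

# Eq. strategy: G(x) = Sigma^{-1/2} x has constant Jacobian, so
# E[log|grad G|] = -0.5*log|Sigma| and
# T = sum_i H(x_i) - (D/2)*log(2*pi*e) + E[log|grad G|]
D = 3
H_marginals = 0.5 * np.log(2 * np.pi * np.e * np.diag(Sigma))
T_eq = H_marginals.sum() - 0.5 * D * np.log(2 * np.pi * np.e) - 0.5 * logdet

assert np.isclose(T_true, T_eq)
```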
\subsection{Previous differentiable Gaussianization transforms}
The idea of designing a method to estimate multivariate densities through iterative marginal operations, in order to avoid the curse of dimensionality, was originally proposed in \cite{Friedman84}. This work was based on the idea of projection pursuit~\cite{Friedman74}, which is also one of the seminal works for what would later be known as Independent Component Analysis (ICA).
A step forward was taken in \cite{Chen2001}, where the projection-pursuit-like Gaussianization was based on an iterative cascade of two operations: a multivariate \emph{linear ICA} and a set of univariate (marginal) Gaussianizations based on mixtures of Gaussians. The advantage was the use of \emph{fast linear ICA} techniques developed in the late 90's~\cite{Hyvarinen01}. However, the problem with the above methods inspired by projection pursuit is their focus on a multivariate (time-consuming) problem: in each iteration, the ICA transform looks for the most non-Gaussian direction of the data.
More recently, transforms to latent spaces with Gaussian PDF have been proposed in the context of deep-learning. Examples include Variational AutoEncoders \cite{Kingma2014VAES}, Generative Adversarial Networks \cite{Goodfellow2014GANS} and Invertible Flows \cite{Rezende2015NF}. Each family tackles the underlying PDF estimation problem from a slightly different {\em algorithmic} perspective, but they share many {\em conceptual} properties.
They consist of two main components: the first is a function that maps samples from a latent space ${\mathbf x}'$ to the observed space ${\mathbf x}$, and the second is a function that maps data from the observed space ${\mathbf x}$ to some latent space ${\mathbf x}'$. The latter component is called a {\em density destructor}~\cite{Inouye2018DeepDD} because its aim is to remove the structure of the input data distribution.
These density destructors had not been used to estimate information-theoretic quantities, nor had they been connected to previous projection-pursuit transforms.
However, this gap has been filled~\cite{Johnson19} by linking the deep-learning family and the projection-pursuit family with special emphasis on information theory through a specific algorithm that may be interpreted in both ways: the Rotation-Based Iterative Gaussianization (RBIG)~\cite{Laparra2011b}.
In the context of estimating information-theoretic quantities, the particular Gaussianization we propose to use, RBIG, has two major advantages over other techniques:
\begin{enumerate}
\item The cascade of nonlinear+linear transforms was shown to converge to a Gaussian even if the linear transforms are random rotations. This alleviates the requirement for linear ICA in classical projection-pursuit techniques, and the requirement to learn the typically large number of parameters of the linear transforms in the deep-learning family.
\item RBIG has special properties regarding total correlation that allow one to follow the strategy suggested by Eq.~\eqref{deltaT} using only marginal operations, thus overcoming the requirement to compute and average the Jacobian.
\end{enumerate}
The first of these properties was proved in the original paper~\cite{Laparra2011b}, and the second, more relevant for information theory, is analyzed in detail below in Section~\ref{sec:key}.
\subsection{The Rotation-Based Iterative Gaussianization}
\label{sec:RBIG}
The Gaussianization strategy that we follow, RBIG, was introduced in \cite{Laparra2011b} and is illustrated in Fig.~\ref{rbig}. It is based on the sequential (layer-wise) application of two operations: nonlinear marginal Gaussianizations, $\Psi$, and linear rotations, ${\mathbf R}$:
\begin{equation}
{\mathbf x}^{(n+1)} = {\mathbf R}^{(n)} \Psi^{(n)}({\mathbf x}^{(n)})
\label{rbig_iter}
\end{equation}
where $n$ refers to the step in the sequence and convergence takes $N$ steps, $n=1,\ldots,N$.
The original data (the data at step 0) is ${\mathbf x}^{(0)} = {\mathbf x}$, and the output of the transform is ${\mathbf x}^{(N)} = {\mathbf x}'$.
The marginal Gaussianization consists of a set of dimension-wise operations: marginal equalization using the cumulative distribution function, followed by a marginal transform from uniform to Gaussian using the inverse cumulative distribution function of a standard univariate Gaussian.
Of course, all the components of the marginally Gaussianized variable follow a standard univariate Gaussian, ${\Psi}_i^{(n)}(x_i^{(n)}) \sim \mathcal{N}(0,1) \,\,\, \forall i = 1,\ldots,D_x$, and they can be collectively grouped as $\Psi^{(n)}({\mathbf x}^{(n)}) = [\Psi_1^{(n)}(x_1^{(n)}),\ldots,\Psi_{D_x}^{(n)}(x_{D_x}^{(n)})]$. However, note that at some intermediate point in the sequence the joint PDF is not a multivariate Gaussian (yet). See for instance the evolution in Fig.~\ref{rbig}, which was computed with RBIG.
The second operation is a rotation, ${\mathbf R}$, which may be sophisticated for quicker convergence (e.g. PCA or ICA matrices), but convergence is also guaranteed even for random orthogonal matrices~\cite{Laparra2011b}. The number of layers can be easily decided using a convergence criterion~\cite{Laparra2011b}.
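A minimal sketch of the iteration in Eq.~\eqref{rbig_iter}, assuming a rank-based empirical CDF for the marginal Gaussianization and random orthogonal matrices for the rotations; the published method additionally supports PCA/ICA rotations and a data-driven convergence criterion:

```python
import numpy as np
from scipy import stats

def marginal_gaussianization(X):
    """Dimension-wise Psi: rank-based CDF equalization followed by the
    inverse CDF of a standard Gaussian."""
    n = X.shape[0]
    U = stats.rankdata(X, axis=0) / (n + 1)   # empirical CDF values in (0, 1)
    return stats.norm.ppf(U)

def rbig(X, n_layers=30, seed=0):
    """Minimal RBIG sketch: alternate marginal Gaussianization and a random
    orthogonal rotation. Returns the transformed samples x^(N)."""
    rng = np.random.default_rng(seed)
    for _ in range(n_layers):
        X = marginal_gaussianization(X)               # Psi^(n)
        Q, _ = np.linalg.qr(rng.normal(size=(X.shape[1],) * 2))
        X = X @ Q.T                                   # rotation R^(n)
    return X
```

For Gaussian-copula data a few tens of layers are enough for the output to be close to $\mathcal{N}(\mathbf{0},\mathbf{I})$; in general the number of layers should be chosen with the convergence criterion of~\cite{Laparra2011b}.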
\section{The Special Role of Total Correlation in RBIG}
\label{sec:key}
RBIG is particularly convenient for proposing ITM estimators because
it breaks the multivariate estimation of total correlation into multiple univariate entropy estimations.
This ability to break up the multivariate problem emerges naturally from the way the RBIG network is constructed,
and can be deduced from the following property.
\noindent{\bf Lemma.}
The change of total correlation in each RBIG layer reduces to a sum of univariate entropies:
\begin{eqnarray}
\Delta T^{(n)} &=& T\Big({\mathbf x}^{(n)}\Big) - T\Big({\mathbf x}^{(n+1)}\Big) \nonumber \\[0.2cm]
&=& \frac{D_x}{2} \log(2\pi e) - \sum_{i=1}^{D_x} H\Big(x^{(n+1)}_i\Big)
\label{lemma}
\end{eqnarray}
\noindent{\bf Proof.} The difference in total correlation before and after the application of a layer is:
\begin{equation}
\Delta T^{(n)} = T\Big({\mathbf x}^{(n)}\Big) - T\Big( {\mathbf R}^{(n)} \Psi^{(n)}({\mathbf x}^{(n)}) \Big), \nonumber
\end{equation}
since total correlation is invariant under marginal invertible transforms~\cite{Studeny98} and $\Psi^{(n)}$ is a marginal invertible transform we have:
\begin{equation}
\Delta T^{(n)} = T\Big(\Psi^{(n)}({\mathbf x}^{(n)})\Big) - T\Big( {\mathbf R}^{(n)} \Psi^{(n)}({\mathbf x}^{(n)}) \Big) \nonumber
\end{equation}
And then, applying Eq.~\eqref{deltaT}, we have:
\begin{eqnarray}
\hspace{-0.5cm} \Delta T^{(n)} = \sum_{i=1}^{D_x} H\Big( \Psi_i^{(n)}(x_i^{(n)}) \Big) \!\!\!\!\!&-&\!\!\!\!\! \sum_{i=1}^{D_x} H\Big( \big( {\mathbf R}^{(n)} \Psi^{(n)}({\mathbf x}^{(n)}) \big)_i \Big) \nonumber \\
\!\!\!\!\!&+&\!\!\!\!\! \mathbb{E}\Big( \log(|{\mathbf R}^{(n)}|) \Big)
\nonumber
\end{eqnarray}
where the first term is just $D_x$ times the known entropy of the marginal $\mathcal{N}(0,1)$, the second term is the sum of marginal entropies of the transformed variable, ${\mathbf x}^{(n+1)}$, and the last term vanishes because ${\mathbf R}^{(n)}$ is restricted to be a rotation and has unit-determinant. This leads to Eq.~\eqref{lemma}.
The univariate nature of the computation of the variations of $T$ in RBIG layers
makes this specific Gaussianization better suited to obtain ITMs from $T$
than other Gaussianization transforms: in RBIG one is not forced to
compute the matrices $\nabla G_x({\mathbf x})$ and average them over the multidimensional
space as in Eq.~\eqref{eq:T_definition}.
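The Lemma translates directly into an estimator of $T({\mathbf x})$: accumulate the per-layer differences of Eq.~\eqref{lemma}, each of which requires only univariate entropy estimates. The sketch below uses SciPy's spacing-based univariate entropy estimator as a stand-in for the corrected histogram estimators of the original work:

```python
import numpy as np
from scipy import stats

def rbig_total_correlation(X, n_layers=25, seed=0):
    """Estimate T(x) by accumulating the per-layer differences
    Delta T^(n) = (D/2)*log(2*pi*e) - sum_i H(x_i^(n+1)),
    where each H is a univariate entropy estimate."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = 0.0
    for _ in range(n_layers):
        # marginal Gaussianization (rank-based empirical CDF + Gaussian ppf)
        X = stats.norm.ppf(stats.rankdata(X, axis=0) / (n + 1))
        # random rotation
        Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
        X = X @ Q.T
        # only univariate entropy estimates are needed (the Lemma)
        H = sum(stats.differential_entropy(X[:, i]) for i in range(d))
        T += 0.5 * d * np.log(2 * np.pi * np.e) - H
    return T

# Bivariate Gaussian with correlation rho; theoretical T = -0.5*log(1-rho**2)
rho = 0.8
X = np.random.default_rng(1).multivariate_normal(
    [0, 0], [[1, rho], [rho, 1]], size=20000)
T_hat = rbig_total_correlation(X)   # should be close to 0.51 nats
```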
\section{Background}
\subsection{Information Theory Measures}
A comprehensive review of the information theoretic measures discussed in this work (namely total correlation, entropy, Kullback-Leibler divergence, and mutual information) is available elsewhere.
For instance, \cite{Cover05} is an excellent reference for differential entropy, $H$, mutual information, $I$, and Kullback-Leibler Divergence, $D_{\textrm{KL}}$; and~\cite{Watanabe1960} is an appropriate source for total correlation, $T$, also known as multi-information~\cite{Studeny98}. However, let us recall here the relation between $H$, $I$, and $T$, through the Venn Diagram in Fig.~\ref{fig:Fig_1}, which allows us to introduce the basic identities required throughout the work.
This diagram represents the uncertainty and shared information in two multivariate random variables, ${\mathbf x} = [x_1,\ldots,x_{D_x}] \in \mathbb{R}^{D_x}$, and ${\mathbf y} = [y_1,\ldots,y_{D_y}] \in \mathbb{R}^{D_y}$.
In particular, we consider two-dimensional variables, $D_x = D_y = 2$, just for visualization purposes, but the interpretation of the diagram is general: the dimensions could be larger and need not be the same for ${\mathbf x}$ and ${\mathbf y}$. In this diagram each set represents the information provided by each univariate component.
\begin{figure}[t!]
\centering
\includegraphics[width=7.5cm]{Figures/fig_1.JPG}
\caption{Conceptual scheme of information theoretic measures. ${\mathbf x} = [x_1,x_2]$ and ${\mathbf y}=[y_1,y_2]$ are two-dimensional random variables. Areas represent amounts of information, and intersections represent shared information among the corresponding variables and dimensions. Examples of entropy, total correlation and mutual information are given.}
\label{fig:Fig_1}
\end{figure}
The \emph{univariate differential entropy} ($H(x_i)$ or $H(y_i)$ in the diagram) is the expected value of the information provided by the $i$-th dimension of the variable:
\begin{equation}
H(x_i) = \mathbb{E}_{x_i} [-\log(p_{x_i}(x_i))],
\label{eq:entropy}
\end{equation}
where we have used the definition of information given by Shannon~\cite{Shannon1948}. In our diagram, the entropy corresponds to the area of the set. This definition can be extended to multidimensional variables. In this case, the \emph{joint differential entropy}, $H({\mathbf x}) = H([x_1,\ldots,x_{D_x}])$, is given by the union of the univariate sets.
The \emph{total correlation} describes the information shared by \emph{several} univariate components, and hence it can be defined either within a vector, i.e. $T({\mathbf x})$ is the intersection of the sets corresponding to the $x_i$; or for the concatenation of an arbitrary number of multivariate variables, i.e. $T([{\mathbf x},{\mathbf y}])$ is the intersection of all the components in $[{\mathbf x},{\mathbf y}] \in \mathbb{R}^{D_x+D_y}$. This intersection accounts for the redundancy between the dimensions of the considered set of variables. Therefore, $T$ can be computed as the sum of the entropies of the univariate sets minus the entropy of their union:
\begin{equation}
T({\mathbf x}) = \sum_{i=1}^{D_x} H(x_i) - H({\mathbf x}).
\label{eq:TC}
\end{equation}
The \emph{mutual information} also describes the shared information between random variables, but only between \emph{two} of them, which can be univariate or multivariate and are not restricted to have the same dimension, as in $I(x_1,x_2)$ or $I({\mathbf x},{\mathbf y})$.
The Venn diagram illustration is useful to show that, given the shared information nature of $I$, it is related to the differential entropies:
\begin{equation}
I({\mathbf x},{\mathbf y}) = H({\mathbf x}) + H({\mathbf y}) - H([{\mathbf x},{\mathbf y}]).
\label{H_I}
\end{equation}
Note that, depending on the group of variables or dimensions considered as input some measures may be equivalent. For instance, in our two-dimensional example $T({\mathbf x}) = T([x_1,x_2]) = I(x_1,x_2)$.
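Eq.~\eqref{H_I} can be checked in closed form for jointly Gaussian variables, where $H({\mathbf x}) = \frac{1}{2}\log\big((2\pi e)^{D_x} |\Sigma|\big)$. A small numerical sketch (the covariance values below are illustrative):

```python
import numpy as np

def gaussian_entropy(cov):
    """Closed-form differential entropy of a zero-mean Gaussian (nats)."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

# Joint covariance of [x, y] with x and y each two-dimensional
C = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.3],
              [0.5, 0.0, 1.0, 0.0],
              [0.0, 0.3, 0.0, 1.0]])
Cx, Cy = C[:2, :2], C[2:, 2:]

# I(x, y) = H(x) + H(y) - H([x, y])
I = gaussian_entropy(Cx) + gaussian_entropy(Cy) - gaussian_entropy(C)

# For this block structure the closed form is -0.5*(log(1-0.5^2) + log(1-0.3^2))
assert np.isclose(I, -0.5 * (np.log(1 - 0.25) + np.log(1 - 0.09)))
```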
The \emph{Kullback-Leibler divergence} $D_{\textrm{KL}}$ cannot be easily included in the Venn diagram of Fig.~\ref{fig:Fig_1} because its definition depends not only on the entropy but also on the cross-entropy: $D_{\textrm{KL}}({\mathbf x}|{\mathbf y}) = \mathbb{E}_{{\mathbf x}} [-\log(p_{{\mathbf y}}({\mathbf x}))] - H({\mathbf x})$. The $D_{\textrm{KL}}$ measures a divergence (not a distance) between two PDFs~\cite{Cover05}.
It is worth noting that all four measures ($H$, $T$, $I$, and $D_{\textrm{KL}}$) are defined in the same units.
\subsection{Conventional Estimators}
\label{sec:intro_methods}
While plenty of methods focus on the estimation of $H$, $T$, $I$ and $D_{\textrm{KL}}$ for one- or two-dimensional variables, few methods deal with higher-dimensional variables. Our comparison is based on the broad family of ITM estimators included in~\cite{szabo14information}, which can be used in the general multivariate case, and we used their default parameterizations in all the experiments.
We selected a number of conventional estimators that compute the Shannon entropy of a continuous random variable and, by extension, can estimate the mutual information as in Eq.~\eqref{eq:MI_as_H}.
The simplest ITM estimators assume that the data distributions belong to the exponential family; in our case we used the Gaussian distribution. With this assumption, the aforementioned ITMs are straightforward to calculate. We refer to these estimates as the \emph{exponential family} (\textbf{expF}).
The Gaussian family has attractive mathematical properties.
In particular, the Sharma-Mittal framework yields an analytical expression for the entropy of a Gaussian distribution~\cite{Nielsen_2011}, which includes the derivative of the log-normalizer of the distribution as a corrective term, yielding better estimates.
The experiments below show that the Gaussian assumption performs well on simple distributions, but degrades for distributions characterized by their tails, such as the Student or the Pareto distribution. This is a key factor in many scientific fields where actual data are not Gaussian.
The next family of popular methods are those using binning strategies~\cite{KumarPothapakula2019}.
These include algorithms like ridge histogram estimation, the smooth kernel density estimator (KDE),
and the adaptive $k$-nearest neighbors (\textbf{kNN}) estimator~\cite{Kozachenko1987ASE,Goria05}. These methods are non-parametric, which makes them more flexible and able to fit more complex distributions. However, they are sensitive to the parameters chosen to define the neighborhoods, there is no intuitive way to select appropriate values, and their parameterization is data-dependent.
We select the most robust method, the kNN estimator, as representative of this family in our experiments. kNN is adaptive in nature because the neighborhood structure is derived from a distance matrix.
In all the experiments we choose the default value for $k$. Standard kNN algorithms are known to have scaling problems in high dimensions. Therefore, we also include the scaled version, which uses partition trees~\cite{Stowell09}
and is referred to here as \textbf{KDP}.
We also utilize the batch method, which estimates the entropy in batches and then takes the mean of the estimates over the ensemble (\textbf{Ens}). This has been shown to be effective in applied settings such as image registration~\cite{Kybic2004}.
Finally, we include a more recent method that estimates these measures via von Mises expansions (\textbf{vME})~\cite{NIPS2015_5911}, using a functional Taylor expansion that computes the second-order entropy. This non-parametric approach has been shown to converge faster than the traditional kNN family.
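As a concrete illustration of the kNN family, the classical Kozachenko-Leonenko estimator can be sketched in a few lines (this is a generic textbook version, not the exact implementation of the toolbox used in our comparison):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(X, k=3):
    """Kozachenko-Leonenko kNN entropy estimator (nats):
    H ~ digamma(n) - digamma(k) + log(c_d) + (d/n) * sum_i log(eps_i),
    where eps_i is the distance to the k-th neighbor and c_d is the
    volume of the d-dimensional unit ball."""
    n, d = X.shape
    tree = cKDTree(X)
    # query returns the point itself first, so ask for k+1 neighbors
    eps, _ = tree.query(X, k=k + 1)
    eps = eps[:, -1]
    log_ball = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1)
    return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(eps))
```

For a 2-D standard Gaussian the estimate should approach the true entropy $\log(2\pi e) \approx 2.84$ nats as the sample size grows.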
\subsubsection{Estimation using RBIG}
The proposed estimator is based on the property that $D_{\textrm{KL}}$ is invariant under invertible and differentiable mappings~\cite{Qiao10}, i.e. $D_{\textrm{KL}}({\mathbf x}|{\mathbf y}) = D_{\textrm{KL}}(f({\mathbf x})|f({\mathbf y}))$
if the function $f(\cdot)$ is invertible and differentiable.
As a result, we propose transforming both datasets, ${\mathbf x}$ and ${\mathbf y}$, to a domain where we know that one of them is Gaussian,
and then computing how non-Gaussian the other dataset is. Figure~\ref{fig:KLD_ilustrative} illustrates this two-step idea.
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=8cm]{Figures/Fig_Dkl.pdf} \end{tabular}
\caption{Illustrative example of computing $D_{\textrm{KL}}({\mathbf x}|{\mathbf y})$ using RBIG. First, both datasets are transformed using the transformation that Gaussianizes the dataset ${\mathbf x}$, i.e. ${\mathbf x}' = G_x({\mathbf x}) \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ and ${\mathbf y}' = G_x({\mathbf y})$. Second, we compute how far the new dataset ${\mathbf y}'$ is from being Gaussian. This second step requires computing the total correlation of ${\mathbf y}'$, which we do using Eq.~\eqref{eq:TC_from_RBIG}; this in turn requires learning the transformation that Gaussianizes ${\mathbf y}'$, i.e. $G_{y'}$.}
\label{fig:KLD_ilustrative}
\end{figure}
In the above context, we use the concept of \emph{non-Gaussianity}~\cite{Cardoso03}.
Non-Gaussianity is defined as the divergence of a particular distribution with respect to a Gaussian distribution
with the same mean and covariance, $J({\mathbf x}) = D_{\textrm{KL}}(p({\mathbf x})|\mathcal{N}({\mathbf x},\boldsymbol{\mu},\Sigma))$.
Here we define the \emph{standard non-Gaussianity}, $J_s({\mathbf x})$, as the divergence between $p({\mathbf x})$ and a standard Gaussian, $J_s({\mathbf x}) = D_{\textrm{KL}}(p({\mathbf x})|\mathcal{N}({\mathbf x},\mathbf{0},\mathbf{I}))$. Now, by using the Pythagorean property of the
divergence~\cite{Cardoso03}, we can write the standard non-Gaussianity estimate as:
\begin{equation}
J_s({\mathbf x}) = D_{\textrm{KL}}\Bigg( p({\mathbf x})\Bigg| \prod_{i=1}^{D_x} p(x_i) \Bigg) + D_{\textrm{KL}}\Bigg(\prod_{i=1}^{D_x} p(x_i)\Bigg| \prod_{i=1}^{D_x} \mathcal{N}(0,1) \Bigg), \nonumber
\end{equation}
and, since the first term is the definition of total correlation~\cite{Studeny98}, and the second term is a divergence between factorized PDFs (which reduces to a sum of marginal divergences), the standard non-Gaussianity can be readily written as:
\begin{equation}
J_s({\mathbf x}) = T({\mathbf x}) + \sum_{i=1}^{D_x} D_{\textrm{KL}}(p_{x_i}(x_i)|\mathcal{N}(0,1)).
\label{eq:nongaussianity}
\end{equation}
This leads us to propose the following $D_{\textrm{KL}}$ estimator.
\noindent{\bf Proposition:} Given the RBIG transform $G_x(\cdot)$ learned from samples of ${\mathbf x}$, the RBIG-based estimator of the divergence $D_{\textrm{KL}}({\mathbf y}|{\mathbf x})$ can be computed as:
\begin{eqnarray}
\hspace{-1cm} \Tilde{D}_{\textrm{KL}}({\mathbf y}|{\mathbf x}) &=& J_s(G_x({\mathbf y}))
\label{estim_kld_yx}
\end{eqnarray}
where, following Eq.~\eqref{eq:nongaussianity}, $J_s(\cdot)$ depends on an RBIG-based estimation of $T$ and a set of marginal divergences.
\noindent{\bf Proof:} Consider the transformation $G_x$ obtained by training RBIG on the data ${\mathbf x}$. Invoking the invariance of $D_{\textrm{KL}}({\mathbf y}|{\mathbf x})$ under invertible and differentiable transforms~\cite{Qiao10}, and considering that RBIG transforms are invertible and differentiable~\cite{Laparra2011},
we can write:
\begin{eqnarray}
D_{\textrm{KL}}({\mathbf y}|{\mathbf x}) &=& D_{\textrm{KL}}(G_x({\mathbf y})|G_x({\mathbf x})). \label{KLD_J}
\end{eqnarray}
Taking into account that applying the transformation $G_x$ to ${\mathbf x}$ yields a dataset that follows a standard Gaussian, i.e. $G_x({\mathbf x}) \sim \mathcal{N}(\mathbf{0},\mathbf{I})$, then:
\begin{eqnarray}
D_{\textrm{KL}}({\mathbf y}|{\mathbf x}) &=& D_{\textrm{KL}}(G_x({\mathbf y})|\mathcal{N}(\mathbf{0},\mathbf{I})) = J_s(G_x({\mathbf y})).\nonumber
\end{eqnarray}
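For jointly Gaussian data with standard marginals, both sides of Eq.~\eqref{eq:nongaussianity} are available in closed form, which gives a quick numerical sanity check of the identity (the bivariate example below is illustrative):

```python
import numpy as np

def kl_gauss_to_standard(C):
    """Closed-form D_KL(N(0,C) || N(0,I)) = 0.5*(tr(C) - d - log|C|)."""
    d = C.shape[0]
    sign, logdet = np.linalg.slogdet(C)
    return 0.5 * (np.trace(C) - d - logdet)

# Gaussian with standard marginals (C_ii = 1): the marginal divergences in
# Eq. (nongaussianity) vanish and the identity reduces to J_s = T.
rho = 0.7
C = np.array([[1.0, rho], [rho, 1.0]])
T = -0.5 * np.linalg.slogdet(C)[1]          # total correlation of this Gaussian

assert np.isclose(kl_gauss_to_standard(C), T)   # J_s = T + 0, as the identity states
```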
\subsubsection{Validation on known PDFs}
\label{sec:kld_validation}
Here we evaluate the $D_{\textrm{KL}}$ between two datasets ${\mathbf x}$ and ${\mathbf y}$. Four combinations of datasets are analyzed:
\begin{itemize}
\item {\em Gaussians with different means.} Both datasets are drawn from Gaussian distributions with the same diagonal covariance ($\Sigma_1 = \Sigma_2 = \mathbf{I}$) but different means ($\mu_1=\mathbf{0}$, while $\mu_2$ takes the values $\mu_2=[0.2,0.4,0.6]$).
\item {\em Gaussians with different covariance matrices.} The data are also generated from Gaussians, in this case with the same mean (i.e. $\mu_1 = \mu_2 = \mathbf{0}$) but different covariances (i.e. $\Sigma_1 \neq \Sigma_2$). We parametrized $\Sigma_2 = \sigma_2 \mathbf{Q} + \mathbf{I}$ and tested three values $\sigma_2=[0.5,0.75,0.9]$, where $\mathbf{Q}$ is a covariance matrix generated randomly in each trial with zeros on the diagonal.
\item {\em Gaussian vs Student.} One dataset is generated from a Gaussian distribution and the other from a multivariate Student distribution. We keep the parameters of the Gaussian distribution fixed while modifying the $\nu$ parameter of the multivariate Student; in practice the Gaussian is approximated by a multivariate Student with $\nu_1=100$. In particular, $\Sigma_1 = \Sigma_2 = \mathbf{I}$, $\mu_1=\mu_2=\mathbf{0}$, and $\nu_2 = [2,4,7]$.
\item {\em Student vs Student.} Data are generated from two multivariate Student distributions. We keep the parameters of one distribution fixed while modifying the $\nu$ parameter of the other. In particular, $\Sigma_1 = \Sigma_2 = \mathbf{I}$, $\mu_1=\mu_2=\mathbf{0}$, $\nu_1=8$, and $\nu_2 = [2,4,7]$.
\end{itemize}
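The analytic ground truth for the Gaussian configurations follows from the closed-form divergence between two multivariate Gaussians; a short sketch for the first configuration (the dimension and mean shift below are chosen for illustration):

```python
import numpy as np

def kl_two_gaussians(mu1, S1, mu2, S2):
    """Closed-form D_KL(N(mu1,S1) || N(mu2,S2)) =
    0.5*(tr(S2^-1 S1) + (mu2-mu1)^T S2^-1 (mu2-mu1) - d + log(|S2|/|S1|))."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    dm = mu2 - mu1
    term = np.trace(S2inv @ S1) + dm @ S2inv @ dm - d
    term += np.linalg.slogdet(S2)[1] - np.linalg.slogdet(S1)[1]
    return 0.5 * term

# First configuration: identity covariances and a mean shift in every dimension
d, shift = 10, 0.2
mu1, mu2 = np.zeros(d), np.full(d, shift)
kl = kl_two_gaussians(mu1, np.eye(d), mu2, np.eye(d))

assert np.isclose(kl, 0.5 * d * shift**2)   # reduces to 0.5*||mu2 - mu1||^2
```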
Details of how to compute analytically the $D_{\textrm{KL}}$ value for each configuration are given in Appendix~\ref{app:formulas}.
Table~\ref{tab:KLD} summarizes the results for a representative sample size.
In this case, divergence is not directly available from the {\bf KDP} and {\bf Ens} estimators, so they are not included in the comparison.
Bias/variance results for different number of samples are given in the supplementary material.
Results show again that RBIG estimation achieves good performance.
RBIG is second best only in the all-Gaussian cases where, not surprisingly, it is outperformed by \emph{expF}, the method that assumes Gaussian models.
When different distributions are involved, RBIG estimations are clearly better.
Actually, in the non-Gaussian cases, while all other methods may yield huge errors (relative errors larger than $1000\%$ in particular sample/dimension configurations), the RBIG bias never diverges.
\begin{table}[t!]
\caption {Relative mean absolute errors in percentage for $D_{\textrm{KL}}$ estimation on known PDFs. Best value in dark gray, second best value in bright gray.}\label{tab:KLD}
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& & \textbf{dim} & \textbf{RBIG} & \textbf{kNN} & \textbf{expF} & \textbf{vME} \\
\hline
\parbox[t]{3mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{Gaussian, different means}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu_2=0.2$}}} & 3 & \cellcolor[HTML]{C0C0C0}13.49 & 16.90 & \cellcolor[HTML]{656565}10.28 & 92.28 \\
& & 10 & \cellcolor[HTML]{C0C0C0}19.50 & 22.47 & \cellcolor[HTML]{656565}3.27 & >1000 \\
& & 50 & \cellcolor[HTML]{C0C0C0}32.04 & 40.85 & \cellcolor[HTML]{656565}13.34 & >1000 \\
& & 100 & 47.40 & \cellcolor[HTML]{C0C0C0}41.24 & \cellcolor[HTML]{656565}28.14 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu_2=0.4$}}} & 3 & \cellcolor[HTML]{656565}3.55 & 7.98 & \cellcolor[HTML]{C0C0C0}5.57 & 24.22 \\
& & 10 & \cellcolor[HTML]{C0C0C0}5.25 & 22.93 & \cellcolor[HTML]{656565}2.02 & 604.17 \\
& & 50 & \cellcolor[HTML]{C0C0C0}8.67 & 40.91 & \cellcolor[HTML]{656565}3.40 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}4.83 & 43.49 & \cellcolor[HTML]{C0C0C0}8.70 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\mu_2=0.6$}}} & 3 & \cellcolor[HTML]{656565}3.75 & 6.39 & \cellcolor[HTML]{C0C0C0}3.89 & 12.23 \\
& & 10 & \cellcolor[HTML]{C0C0C0}2.81 & 24.72 & \cellcolor[HTML]{656565}1.85 & 213.72 \\
& & 50 & \cellcolor[HTML]{C0C0C0}13.83 & 43.11 & \cellcolor[HTML]{656565}1.83 & 897.65 \\
& & 100 & \cellcolor[HTML]{C0C0C0}42.42 & 46.00 & \cellcolor[HTML]{656565}5.11 & 686.96 \\
\hline
\parbox[t]{3mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{Gaussians, different covs.}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\sigma_2=0.5$}}} & 3 & \cellcolor[HTML]{C0C0C0}24.93 & 27.30 & \cellcolor[HTML]{656565}4.89 & 63.90 \\
& & 10 & \cellcolor[HTML]{C0C0C0}18.80 & 103.65 & \cellcolor[HTML]{656565}2.64 & >1000 \\
& & 50 & \cellcolor[HTML]{C0C0C0}23.62 & 173.62 & \cellcolor[HTML]{656565}8.42 & >1000 \\
& & 100 & \cellcolor[HTML]{C0C0C0}32.56 & 200.33 & \cellcolor[HTML]{656565}17.59 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\sigma_2=0.75$}}} & 3 & \cellcolor[HTML]{C0C0C0}21.04 & 24.77 & \cellcolor[HTML]{656565}3.72 & 36.64 \\
& & 10 & \cellcolor[HTML]{C0C0C0}10.44 & 96.85 & \cellcolor[HTML]{656565}1.86 & 605.00 \\
& & 50 & \cellcolor[HTML]{C0C0C0}10.07 & 159.16 & \cellcolor[HTML]{656565}5.70 & >1000 \\
& & 100 & \cellcolor[HTML]{C0C0C0}13.66 & 179.67 & \cellcolor[HTML]{656565}11.40 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\sigma_2=0.9$}}} & 3 & \cellcolor[HTML]{C0C0C0}17.12 & 25.95 & \cellcolor[HTML]{656565}3.40 & 26.15 \\
& & 10 & \cellcolor[HTML]{C0C0C0}6.77 & 94.42 & \cellcolor[HTML]{656565}1.60 & 448.87 \\
& & 50 & \cellcolor[HTML]{656565}3.40 & 152.46 & \cellcolor[HTML]{C0C0C0}4.81 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}5.96 & 170.28 & \cellcolor[HTML]{C0C0C0}9.43 & >1000 \iffalse inf \fi \\
\hline
\parbox[t]{3mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{Gaussian vs. Student}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=2$}}} & 3 & \cellcolor[HTML]{656565}5.08 & 29.58 & 793.53 & \cellcolor[HTML]{C0C0C0}5.78 \\
& & 10 & \cellcolor[HTML]{656565}32.72 & \cellcolor[HTML]{C0C0C0}83.51 & 1278.91 & 596.63 \\
& & 50 & \cellcolor[HTML]{656565}59.37 & \cellcolor[HTML]{C0C0C0}468.33 & 2783.43 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}42.11 & \cellcolor[HTML]{C0C0C0}1024.30 & 4330.18 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=4$}}} & 3 & \cellcolor[HTML]{656565}17.09 & 95.02 & 148.08 & \cellcolor[HTML]{C0C0C0}22.52 \\
& & 10 & \cellcolor[HTML]{656565}42.84 & \cellcolor[HTML]{C0C0C0}157.63 & 219.26 & 963.37 \\
& & 50 & \cellcolor[HTML]{656565}60.53 & 584.46 & \cellcolor[HTML]{C0C0C0}547.48 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}41.71 & 1214.61 & \cellcolor[HTML]{C0C0C0}962.45 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=7$}}} & 3 & \cellcolor[HTML]{656565}8.34 & 271.61 & \cellcolor[HTML]{C0C0C0}35.78 & 59.69 \\
& & 10 & \cellcolor[HTML]{656565}38.78 & 307.82 & \cellcolor[HTML]{C0C0C0}49.77 & >1000 \\
& & 50 & \cellcolor[HTML]{656565}48.80 & 713.36 & \cellcolor[HTML]{C0C0C0}145.15 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}26.01 & 1399.34 & \cellcolor[HTML]{C0C0C0}278.93 & >1000 \\
\hline
\parbox[t]{3mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{Student vs. Student}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=2$}}} & 3 & \cellcolor[HTML]{656565}9.08 & \cellcolor[HTML]{C0C0C0}13.87 & 3442.45 & >1000 \\
& & 10 & \cellcolor[HTML]{656565}20.57 & \cellcolor[HTML]{C0C0C0}57.60 & 7462.58 & 346.61 \\
& & 50 & \cellcolor[HTML]{656565}85.14 & \cellcolor[HTML]{C0C0C0}405.47 & 19991.36 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}242.80 & \cellcolor[HTML]{C0C0C0}939.24 & 35064.60 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=4$}}} & 3 & \cellcolor[HTML]{656565}9.51 & \cellcolor[HTML]{C0C0C0}47.03 & 1502.19 & 48.89 \\
& & 10 & \cellcolor[HTML]{656565}36.33 & \cellcolor[HTML]{C0C0C0}139.12 & 2561.86 & >1000 \\
& & 50 & \cellcolor[HTML]{656565}37.29 & \cellcolor[HTML]{C0C0C0}656.95 & 7997.12 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}60.52 & \cellcolor[HTML]{C0C0C0}1441.18 & 13033.03 & >1000 \\
\cline{2-7}
&\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=7$}}} & 3 & \cellcolor[HTML]{656565}13.13 & \cellcolor[HTML]{C0C0C0}126.41 & 589.47 & 128.84 \\
& & 10 & \cellcolor[HTML]{656565}23.13 & \cellcolor[HTML]{C0C0C0}301.97 & 1070.70 & >1000 \\
& & 50 & \cellcolor[HTML]{656565}28.34 & \cellcolor[HTML]{C0C0C0}976.95 & 3689.57 & >1000 \\
& & 100 & \cellcolor[HTML]{656565}145.88 & \cellcolor[HTML]{C0C0C0}2046.95 & 6370.43 & >1000 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Experiments on real-world data: computer vision}
Here we show how RBIG-based estimations of $D_{\textrm{KL}}$ may be useful to analyze image databases widely used in Computer Vision: MNIST~\cite{Lecun98} and CIFAR-10~\cite{Krizhevsky09}.
Divergence between classes may describe the difficulty/simplicity of class separation in image recognition problems, and can help in designing automatic classification schemes.
The MNIST digit dataset contains 60000 training and 10000 test images of ten handwritten digits (0 to 9) of size $28\times28$ pixels. The gray-scale levels were converted to the range $[\epsilon, 1-\epsilon], \epsilon\sim \mathcal N(0, 0.1)$.
The CIFAR-10 natural image dataset contains 50000 training and 10000 test RGB images of size $3\times32\times32$ pixels. The color levels were converted to the range $[\epsilon, 1-\epsilon], \epsilon\sim \mathcal N(0, 0.1)$.
We apply our RBIG algorithm to estimate the $D_{\textrm{KL}}$ between all pairs of classes in both directions ($D_{\textrm{KL}}({\mathbf x}|{\mathbf y})$ and $D_{\textrm{KL}}({\mathbf y}|{\mathbf x})$) for both the MNIST and the CIFAR-10 datasets.
Figure \ref{fig:KLD_mnist_cifar} shows heatmaps for the MNIST and CIFAR-10 datasets. It can be seen that in both cases, digits and objects, the PDFs of similar classes have lower $D_{\textrm{KL}}$. For instance, in the MNIST case the digits 7 and 9 are close in both directions (from 7 to 9 and from 9 to 7; note that $D_{\textrm{KL}}$ is not symmetric). In the case of the objects, the closest classes are \emph{cat} and \emph{dog}, or \emph{horse} and \emph{deer}.
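As an illustration of this pairwise, direction-dependent computation, the toy snippet below estimates $D_{\textrm{KL}}$ between two sample sets with a simple histogram plug-in estimator; the histogram estimator and the synthetic ``classes'' stand in for the RBIG estimator and the image data (assumptions made purely for illustration):

```python
import math
import random

def kl_histogram(xs, ys, bins=30, lo=0.0, hi=1.0):
    """Histogram plug-in estimate of D_KL(P||Q) from samples on [lo, hi]."""
    width = (hi - lo) / bins

    def hist(samples):
        counts = [0] * bins
        for v in samples:
            idx = min(bins - 1, max(0, int((v - lo) / width)))
            counts[idx] += 1
        # Laplace smoothing keeps the logarithm finite in empty bins.
        total = len(samples) + bins
        return [(c + 1) / total for c in counts]

    p, q = hist(xs), hist(ys)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
class_x = [random.betavariate(2, 5) for _ in range(20000)]  # one "class"
class_y = [random.random() for _ in range(20000)]           # another "class"
# D_KL is not symmetric: the two directions generally differ.
print(kl_histogram(class_x, class_y), kl_histogram(class_y, class_x))
```

In the real experiment each ``class'' is the set of images with a given label, and the estimator is RBIG rather than a univariate histogram.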
\begin{figure}
\centering
\begin{tabular}{c}
\includegraphics[width=8cm, trim=42.5mm 0 0 0]{Figures/experiments/FIGS_KLD/mnist.png} \\
\includegraphics[width=9cm, trim=50mm 0 0 0]{Figures/experiments/FIGS_KLD/cifar.png} \\
\end{tabular}
\caption{Analysis of computer vision datasets using RBIG estimations of divergence.
Heatmap showing the $D_{\textrm{KL}}$ results between classes for (top) MNIST and (bottom) CIFAR-10 datasets.}
\label{fig:KLD_mnist_cifar}
\end{figure}
\subsubsection{Computation using RBIG}
The proposed estimate is based on a well-known property of mutual information: it is invariant under reparametrization of the space of each variable by smooth and uniquely invertible transformations \cite{Kraskov2004}.
Our proposal is to transform the two datasets, ${\mathbf x}$ and ${\mathbf y}$, to a domain where both follow a standard Gaussian. This removes all the total correlation contained in each dataset.
\noindent{\bf Proposition:} The total correlation remaining between the two Gaussianized datasets equals the mutual information between the original datasets:
\begin{equation}
\Tilde{I}({\mathbf x}, {\mathbf y}) = \Tilde{T}([{\mathbf x}',{\mathbf y}']),
\label{I_rbig}
\end{equation}
where ${\mathbf x}'=G_x({\mathbf x})$ and ${\mathbf y}'=G_y({\mathbf y})$. Therefore, the computation of $\Tilde{I}({\mathbf x}, {\mathbf y})$ is reduced to one measurement. Moreover, according to Eq.~\ref{Estim_T}, it reduces to a set of univariate operations.
\noindent{\bf Proof:} First we apply RBIG to both variables, ${\mathbf x}$ and ${\mathbf y}$. The resulting variables, ${\mathbf x}' = G_x({\mathbf x})$ and ${\mathbf y}' = G_y({\mathbf y})$, follow Gaussian distributions with zero mean and identity covariance. Their joint entropies can therefore be expressed as the sum of the marginal entropies, i.e. their total correlation is zero: $ H({\mathbf x}') = \sum_{i=1}^{D_x} H(x'_i)$ and $ H({\mathbf y}') = \sum_{i=1}^{D_y} H(y'_i)$. Therefore, using the definition of $I$ in terms of entropies, we have:
\begin{eqnarray*}
I({\mathbf x}',{\mathbf y}') &=& H({\mathbf x}') + H({\mathbf y}') - H([{\mathbf x}',{\mathbf y}']) \\
&=& \sum_{i=1}^{D_x} H(x'_i) + \sum_{i=1}^{D_y} H(y'_i) - H([{\mathbf x}',{\mathbf y}'])
\end{eqnarray*}
Let us consider the variable obtained by stacking ${\mathbf x}'$ and ${\mathbf y}'$, $\mathbf{z} = [{\mathbf x}', {\mathbf y}']$. The previous equation can then be expressed as:
\begin{eqnarray*}
\Tilde{I}({\mathbf x}',{\mathbf y}') &=& \sum_{i=1}^{D_x + D_y} \Tilde{H}(z_i) - \Tilde{H}(\mathbf{z}) = \Tilde{T}(\mathbf{z})
\end{eqnarray*}
On the other hand, note that $I$ is invariant under reparametrization of the space of each variable by smooth and uniquely invertible transformations \cite{Kraskov2004}. If we compute $G_x({\mathbf x})$ and $G_y({\mathbf y})$ using RBIG, this requirement on the transformations is satisfied. Taking this into account:
\begin{eqnarray*}
\Tilde{I}({\mathbf x}, {\mathbf y}) &=& \Tilde{I}({\mathbf x}', {\mathbf y}') \\
&=& \Tilde{T}(\mathbf{z}) \\ &=& \Tilde{T}([{\mathbf x}',{\mathbf y}']).
\end{eqnarray*}
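As a concrete illustration of the marginal building block of $G_x$ and $G_y$, the pure-Python sketch below Gaussianizes one dimension via an empirical-CDF rank transform followed by the inverse standard normal CDF (a minimal sketch of a single marginal step only; the full RBIG iteration also interleaves rotations):

```python
import random
from statistics import NormalDist, mean, stdev

def gaussianize(samples):
    """Map samples to approximately N(0, 1) via rank -> Phi^{-1}."""
    n = len(samples)
    order = sorted(range(n), key=lambda i: samples[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        u = (rank + 0.5) / n            # empirical CDF value in (0, 1)
        out[i] = NormalDist().inv_cdf(u)
    return out

random.seed(1)
x = [random.expovariate(1.0) for _ in range(5000)]   # heavily skewed input
xg = gaussianize(x)
print(round(mean(xg), 3), round(stdev(xg), 3))       # close to 0 and 1
```

The transform is monotone (hence uniquely invertible on the sample range), which is exactly the property the invariance argument above relies on.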
\subsubsection{Validation on known PDFs}
Here we consider two different situations:
\begin{itemize}
\item{\em Gaussian vs Gaussian with different covariance.} We generate data from Gaussian distributions with the same mean but different covariance, following the same procedure described in sec.~\ref{sec:kld_validation}.
\item{\em Student vs Student.} We compare data generated from two multivariate Student distributions. The process is similar to the one described in sec.~\ref{sec:kld_validation}, but here we use the same value of $\nu$ for both distributions, i.e. $\nu = \nu_1 = \nu_2$, and consider different values $\nu \in \{3,5,20\}$. The difference between the two distributions comes from the difference in their shape matrices.
The shape matrices differ in each trial, but the value of the mutual information is kept in a controlled regime: the diagonal is forced to have a value of $10$ and the off-diagonal coefficients are generated by sampling from a uniform distribution ${\mathcal U}(0,1)$.
\end{itemize}
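For the Gaussian case, the analytic ground truth follows from the Gaussian entropies: for jointly Gaussian $({\mathbf x},{\mathbf y})$ with joint covariance $\Sigma$ and marginal covariances $\Sigma_x,\Sigma_y$, $I({\mathbf x};{\mathbf y}) = \frac{1}{2}\ln\frac{\det\Sigma_x \det\Sigma_y}{\det\Sigma}$. A small self-contained sketch (pure Python, suitable for small matrices only; the example covariance is illustrative):

```python
import math

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def gaussian_mi(sigma, dx):
    """I(x; y) for joint covariance `sigma`, x = first dx coordinates."""
    sx = [row[:dx] for row in sigma[:dx]]
    sy = [row[dx:] for row in sigma[dx:]]
    return 0.5 * math.log(det(sx) * det(sy) / det(sigma))

# 1-D x and y with correlation rho: I = -0.5 * ln(1 - rho^2)
rho = 0.8
sigma = [[1.0, rho], [rho, 1.0]]
print(gaussian_mi(sigma, 1))
```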
Details on how to compute the $I$ value analytically for each configuration are given in Appendix \ref{app:formulas}. Table \ref{tab:MI} summarizes the results for a given number of samples; an exhaustive comparison can be found in the supplementary material. While the computation of mutual information in multiple dimensions is a particularly challenging problem, RBIG achieves reasonably low errors. As in the previous experiments, the estimation performed using RBIG clearly outperforms the other methods.
\begin{table}[t!]
\begin{center}
\caption{Relative mean absolute errors in percentage for mutual information estimation on known PDFs. Best value in dark gray, second best value in light gray.}\label{tab:MI}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& & $D_x$ & \textbf{RBIG} & \textbf{kNN} & \textbf{KDP} & \textbf{expF} & \textbf{vME} & \textbf{Ens} \\
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Gaussians}}} & & 3 & \cellcolor[HTML]{C0C0C0}10.60 & 26.00 & 149.10 & \cellcolor[HTML]{656565}9.20 & 13.20 & 48.50 \\
& & 10 & \cellcolor[HTML]{656565}9.60 & 76.30 & 102.60 & \cellcolor[HTML]{C0C0C0}23.70 & 311.00 & 91.00 \\
& & 50 & \cellcolor[HTML]{656565}6.80 & 104.70 & 100.70 & \cellcolor[HTML]{C0C0C0}39.50 & 68.00 & 105.50 \\
& & 100 & \cellcolor[HTML]{656565}11.70 & 107.20 & >1000 \iffalse inf \fi & \cellcolor[HTML]{C0C0C0}42.60 & 77.40 & 106.10 \\
\hline
\parbox[t]{2mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{Student vs. Student}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=3$}}} & 3 & \cellcolor[HTML]{656565}35.72 & 95.32 & >1000 \iffalse 2273.91 \fi & \cellcolor[HTML]{C0C0C0}63.73 & >1000 \iffalse 1214.05 \fi & 86.58 \\
& & 10 & 22.26 & \cellcolor[HTML]{656565}2.66 & 118.51 & \cellcolor[HTML]{C0C0C0}18.14 & >1000 \iffalse 1876.73 \fi & 66.77 \\
& & 50 & \cellcolor[HTML]{656565}1.51 & 88.38 & 104.50 & \cellcolor[HTML]{C0C0C0}36.10 & 810.02 & 105.83 \\
& & 100 & \cellcolor[HTML]{656565}15.34 & 98.66 & >1000 \iffalse inf \fi & \cellcolor[HTML]{C0C0C0}65.71 & 789.55 & 105.34 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=5$}}} & 3 & \cellcolor[HTML]{656565}18.51 & 118.04 & >1000 \iffalse 2694.28 \fi & \cellcolor[HTML]{C0C0C0}56.49 & >1000 \iffalse 1321.06 \fi & 96.41 \\
& & 10 & \cellcolor[HTML]{656565}3.07 & 24.83 & 113.89 & \cellcolor[HTML]{C0C0C0}9.39 & >1000 \iffalse 7024.12 \fi & 101.26 \\
& & 50 & \cellcolor[HTML]{656565}10.91 & 102.89 & 105.08 & \cellcolor[HTML]{C0C0C0}25.17 & 849.12 & 117.30 \\
& & 100 & \cellcolor[HTML]{656565}24.43 & 105.41 & 101.10 & \cellcolor[HTML]{C0C0C0}42.57 & 805.44 & 110.58 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=20$}}} & 3 & 73.63 & 194.16 & >1000 \iffalse 5062.99 \fi & \cellcolor[HTML]{656565}14.63 & >1000 \iffalse 2692.12 \fi & \cellcolor[HTML]{C0C0C0}15.36 \\
& & 10 & \cellcolor[HTML]{C0C0C0}40.02 & 108.82 & 110.68 & \cellcolor[HTML]{656565}29.69 & >1000 \iffalse inf \fi & 208.20 \\
& & 50 & \cellcolor[HTML]{656565}29.98 & 149.53 & 102.93 & \cellcolor[HTML]{C0C0C0}36.30 & 946.93 & 154.88 \\
& & 100 & \cellcolor[HTML]{656565}37.21 & 128.27 & 101.44 & \cellcolor[HTML]{C0C0C0}43.77 & 844.41 & 127.67 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Experiments on real-world data: learning in neural networks}
RBIG estimates of mutual information have been used to analyze the transmission of information in artificial and biological neural networks.
For instance, in \cite{Malmgren17igarss} RBIG estimates of $I$ between different features and the outputs of the network were used to compare different feature extraction methods before using a convolutional neural network (CNN). In \cite{IMM2018} RBIG estimates of $I$ were used to analyze overfitting in CNN predictions. In \cite{Malo19} RBIG was used to measure the visual information available from different neural layers in psychophysically tuned networks.
Here we present a novel analysis of a learning system
in the context of the information bottleneck \cite{Tishby2015INFOBOTTLE}
in artificial neural networks (ANNs).
We illustrate the use of $I$ via RBIG to monitor the information shared between each layer of an ANN and the actual outputs while the network is learning to solve a regression problem.
The regression problem considered here consists of predicting atmosphere temperatures from radiance measurements.
The input samples were acquired by the infrared atmospheric sounding interferometer (IASI) sensor on the MetOp satellite.
The data originally has 8461 spectral bands, which have been reduced to 50 dimensions using PCA, i.e. ${\mathbf x} \in \mathbb{R}^{50}$.
The desired output is the temperature at three different pressure levels in the atmosphere, ${\mathbf y} \in \mathbb{R}^{3}$: $10^3$ hPa (land surface), $10$ hPa ($\sim 30$ km) and $10^{-2}$ hPa ($\sim 80$ km).
Labeled data were obtained using the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis model (see \cite{Sobrino19} for a more exhaustive data description). The number of samples used is 100000: 80000 for training and 20000 for testing the network.
In this experiment we illustrate how $I$ can be used not only as a metric to monitor the output of the network (as regular metrics do), but also to monitor the evolution of the intermediate layers during the training process. The experimental setup and results are illustrated in Fig.~\ref{fig:Real_MI_ANN}.
The considered ANN has three dense layers of 20, 10, and 3 neurons, respectively (Fig. \ref{fig:Real_MI_ANN}a).
The first two layers use a sigmoid activation function; the third layer has no activation function, since we do not want to restrict the output domain.
The ANN has been trained for 100 epochs to minimize the Mean Absolute Error (MAE) with respect to the actual temperatures (Fig. \ref{fig:Real_MI_ANN}b).
Interestingly, Fig. \ref{fig:Real_MI_ANN}c) shows how the $I$ between the predicted and the actual outputs evolves consistently with the MAE
for both training and test data.
Fig. \ref{fig:Real_MI_ANN}d) shows the evolution of the $I$ between the output of each layer and the actual output, for the training data.
A number of consequences can be derived. First, RBIG-based estimations of $I$ can be used as a regular metric to monitor the predictions.
More interestingly, $I$ can be used to monitor the particular evolution of each layer (not only the last one), where conventional metrics
(such as MAE or MSE) are not applicable. In the example we can see how the last layer (L3) learns very fast at the beginning; at some point the learning rate drops, and it keeps learning slowly. On the other hand, the first layer (L1) starts to learn slowly, keeps learning at a moderate rate for a longer time, and at some point saturates and seems to stop learning. The behavior of the intermediate layer (L2) is in between L1 and L3.
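The per-layer monitoring described above can be sketched schematically. In the snippet below a simple Gaussian correlation-based proxy, $I \approx -\frac{1}{2}\ln(1-\rho^2)$, stands in for the RBIG estimator, and a decreasing noise schedule stands in for the actual training of a layer; both are assumptions made purely for illustration:

```python
import math
import random

def mi_gaussian_proxy(u, v):
    """Gaussian-approximation MI in nats: -0.5 * ln(1 - rho^2)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    rho = suv / (su * sv)
    return -0.5 * math.log(max(1e-12, 1.0 - rho * rho))

random.seed(2)
target = [random.gauss(0.0, 1.0) for _ in range(2000)]
# A stand-in "layer output" that tracks the target more closely as
# training progresses (decreasing noise plays the role of learning):
mis = []
for noise in [2.0, 1.0, 0.5, 0.1]:
    layer_out = [t + random.gauss(0.0, noise) for t in target]
    mis.append(mi_gaussian_proxy(layer_out, target))
print([round(v, 3) for v in mis])  # grows as the layer aligns with the target
```

Unlike this univariate proxy, the RBIG estimate handles the full multivariate layer activations, which is what makes the per-layer curves in Fig.~\ref{fig:Real_MI_ANN}d) possible.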
\begin{figure*}
\centering
\begin{tabular}{cc}
a) & b) \\
\includegraphics[width=6.9cm]{Figures/experiments/FIGS_MI/FIG_MI_Toy_1.png} &
\includegraphics[width=6.9cm]{Figures/experiments/FIGS_MI/FIG_3d_FINAL_MAE.png} \\
c) & d) \\
\includegraphics[width=6.9cm]{Figures/experiments/FIGS_MI/FIG_3d_FINAL_MI_tr_vs_ts.png} &
\includegraphics[width=6.9cm]{Figures/experiments/FIGS_MI/FIG_3d_FINAL_MI_L1_vs_L2_vs_L3_norm.png}
\end{tabular}
\caption{Learning in artificial neural networks from RBIG estimations of mutual information: evolution of $I$ during the training of an ANN. a) Configuration of the considered neural network. b) Error evolution. c) Evolution of $I$ between the predicted output and the actual data. d) Evolution of $I$ per dimension between the output of each layer and the actual data.}
\label{fig:Real_MI_ANN}
\end{figure*}
\subsubsection{Estimation using RBIG}
When applying RBIG to particular data, ${\mathbf x}$, the output
has zero total correlation, i.e. $T({\mathbf x}^{(N)}) = 0$. Therefore, by applying Eq.~\eqref{lemma}, the total correlation of the original data is the sum of the $T$ variation at each layer:
\begin{eqnarray}
\Tilde{T}({\mathbf x}) &=& \sum_{n=0}^{N-1} \Delta \Tilde{T}^{(n)} \label{Estim_T} \\
&=& \frac{(N-1) D_x}{2} \log(2\pi e) - \sum_{n=1}^{N} \sum_{i=1}^{D_x} \Tilde{H}(x_i^{(n)}). \nonumber
\label{eq:TC_from_RBIG}
\end{eqnarray}
Note that the estimation of this (challenging) multivariate magnitude reduces to a set of (easy-to-compute) univariate marginal entropy estimations (sec. \ref{sec:key}). Intuitively, each RBIG layer removes part of the total correlation present in the original data, and as many layers as necessary are introduced until all of it is removed.
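The univariate marginal entropy estimations that Eq.~\ref{Estim_T} reduces to can be sketched with a simple histogram plug-in estimator (the binning and sample sizes below are illustrative assumptions, not the choices made in the paper):

```python
import math
import random

def marginal_entropy(samples, bins=50):
    """Histogram plug-in estimate of the differential entropy (nats)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in samples:
        counts[min(bins - 1, int((v - lo) / width))] += 1
    n = len(samples)
    # H ~= -sum p ln p + ln(bin width), skipping empty bins
    return -sum(c / n * math.log(c / n) for c in counts if c) + math.log(width)

random.seed(3)
u = [random.random() for _ in range(100000)]
g = [random.gauss(0.0, 1.0) for _ in range(100000)]
print(marginal_entropy(u))  # uniform on [0,1]: true value 0
print(marginal_entropy(g))  # N(0,1): true value 0.5*ln(2*pi*e), about 1.419
```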
\subsubsection{Validation on known PDFs}
\label{sec:val_TC}
We validate the performance of the proposed method on three different types of multivariate distributions:
\begin{itemize}
\item {\em Gaussian distribution.} We consider data drawn from Gaussian distributions with zero mean and random covariance matrices. Each coefficient of the covariance matrix is generated randomly from a uniform ${\mathcal U}(0,1)$ distribution in each trial, and the covariance matrix is enforced to be symmetric.
\item {\em Linearly transformed uniform distribution.} Data is generated from a multidimensional uniform distribution and multiplied by a random square matrix whose entries are drawn from a uniform ${\mathcal U}(0,1)$ distribution. The transformation matrix is different in each trial.
\item {\em Multivariate Student distribution.} Different values of the degrees of freedom $\nu$, which controls the weight of the tails, are tested. The coefficients of the symmetric scale matrix (or shape matrix) are generated by sampling from a uniform distribution. The diagonal is enforced to have a fixed value ($10$ in our case) in order to keep the total correlation in a controlled regime.
\end{itemize}
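For the Gaussian case the analytic ground truth is $T = \frac{1}{2}\ln\left(\prod_i \Sigma_{ii}/\det\Sigma\right)$, which follows directly from the Gaussian marginal and joint entropies. A small sketch for the three-dimensional case (pure Python; the example covariance is an illustrative assumption):

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gaussian_tc(sigma):
    """Analytic total correlation (nats) of a 3-D Gaussian."""
    prod_marg = sigma[0][0] * sigma[1][1] * sigma[2][2]
    return 0.5 * math.log(prod_marg / det3(sigma))

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
corr = [[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]]
print(gaussian_tc(identity))  # independent coordinates: T = 0
print(gaussian_tc(corr))      # positive for a correlated Gaussian
```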
Results for different numbers of samples and dimensions, together with a comparison against the other estimators, are given in Fig. \ref{fig:TC_gauss}. Results are given as a percentage of absolute error with respect to the analytic value of $T$. In Appendix \ref{app:formulas} we give details on how to compute these values analytically for each distribution. Table \ref{tab:TC} shows a summary of the relative mean absolute errors when using $10^4$ samples.
The results show that, while some methods behave well only for specific distributions, RBIG is consistent and obtains good performance for all of them.
Only when the distribution is Gaussian is RBIG outperformed by \emph{expF}. This was expected, since \emph{expF} assumes a Gaussian distribution. Being a non-parametric model, RBIG will generally provide more sensible estimates in real scenarios, where the data-generating mechanism is typically unknown. Importantly, note that the benefits of RBIG become more visible as the data dimensionality increases. This effect is also visible in the measures of the following sections.
\begin{table}[h!]
\begin{center}
\hspace{-0cm}
\caption{Relative mean absolute errors in percentage for total correlation estimation on known PDFs. Best value in dark gray, second best value in light gray.}\label{tab:TC}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
& & $D_x$ & \textbf{RBIG} & \textbf{kNN} & \textbf{KDP} & \textbf{expF} & \textbf{vME} & \textbf{Ens} \\
\hline
\parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Gaussian}}} & - & 3 & \cellcolor[HTML]{C0C0C0}0.87 & 0.94 & 76.65 & \cellcolor[HTML]{656565}0.63 & 4.27 & 4.03 \\
& & 10 & \cellcolor[HTML]{C0C0C0}0.97 & 23.48 & >100 \iffalse 157.92 \fi & \cellcolor[HTML]{656565}0.27 & 31.72 & 34.83 \\
& & 50 & \cellcolor[HTML]{C0C0C0}1.45 & 45.77 & >100 \iffalse 154.52 \fi & \cellcolor[HTML]{656565}0.52 & >100 \iffalse 155.87 \fi & 54.74 \\
& & 100 & \cellcolor[HTML]{C0C0C0}1.55 & 52.78 & >100 \iffalse 154.55 \fi & \cellcolor[HTML]{656565}0.41 & >100 \iffalse 161.34 \fi & 59.94 \\
\hline
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Rotated}}} & - & 3 & \cellcolor[HTML]{656565}1.70 & \cellcolor[HTML]{C0C0C0}1.80 & 82.90 & 16.80 & 1.90 & 9.40 \\
& & 10 & \cellcolor[HTML]{656565}8.30 & 27.20 & >100 \iffalse 176.10 \fi & \cellcolor[HTML]{C0C0C0}11.00 & 24.20 & 38.70 \\
& & 50 & \cellcolor[HTML]{656565}7.70 & 51.10 & >100 \iffalse 157.60 \fi & \cellcolor[HTML]{C0C0C0}15.10 & >100 \iffalse 101.40 \fi & 59.40 \\
& & 100 & \cellcolor[HTML]{656565}7.50 & 57.80 & >100 \iffalse 151.10 \fi & \cellcolor[HTML]{C0C0C0}15.50 & >100 \iffalse 128.40 \fi & 64.50 \\
\hline
\parbox[t]{2mm}{\multirow{14}{*}{\rotatebox[origin=c]{90}{Student}}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=3$}}} & 3 & \cellcolor[HTML]{656565}7.01 & \cellcolor[HTML]{C0C0C0}13.55 & >100 \iffalse 1612.14 \fi & 94.03 & >100 \iffalse 125.13 \fi & 66.59 \\
& & 10 & 32.93 & \cellcolor[HTML]{C0C0C0}16.73 & >100 \iffalse 1721.38 \fi & 67.32 & >100 \iffalse 660.66 \fi & \cellcolor[HTML]{656565}15.27 \\
& & 50 & \cellcolor[HTML]{C0C0C0}18.18 & \cellcolor[HTML]{656565}12.02 & >100 \iffalse 612.71 \fi & 29.44 & >100 \iffalse 547.71 \fi & 24.65 \\
& & 100 & \cellcolor[HTML]{656565}12.71 & \cellcolor[HTML]{C0C0C0}17.41 & >100 \iffalse 450.29 \fi & 21.12 & >100 \iffalse 324.58 \fi & 28.63 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=5$}}} & 3 & \cellcolor[HTML]{656565}26.61 & \cellcolor[HTML]{C0C0C0}52.76 & >100 \iffalse 2413.52 \fi & 89.74 & 81.85 & 133.12 \\
& & 10 & 23.94 & \cellcolor[HTML]{C0C0C0}19.74 & >100 \iffalse 1762.12 \fi & 49.60 & >100 \iffalse 1064.84 \fi & \cellcolor[HTML]{656565}12.31 \\
& & 50 & \cellcolor[HTML]{656565}10.10 & \cellcolor[HTML]{C0C0C0}16.87 & >100 \iffalse 504.90 \fi & 20.29 & >100 \iffalse 693.07 \fi & 32.14 \\
& & 100 & \cellcolor[HTML]{656565}7.10 & 22.53 & >100 \iffalse 379.10 \fi & \cellcolor[HTML]{C0C0C0}15.39 & >100 \iffalse 398.88 \fi & 34.96 \\
\cline{2-9}
& \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{$\nu=20$}}} & 3 & \cellcolor[HTML]{C0C0C0}88.27 & >100 \iffalse 256.00 \fi & >100 \iffalse 10656.34 \fi & \cellcolor[HTML]{656565}48.56 & >100 \iffalse 517.23 \fi & >100 \iffalse 639.53 \fi \\
& & 10 & \cellcolor[HTML]{656565}3.05 & 11.86 & >100 \iffalse 1788.96 \fi & \cellcolor[HTML]{C0C0C0}10.51 & >100 \iffalse 1421.39 \fi & 19.93 \\
& & 50 & \cellcolor[HTML]{656565}3.07 & 33.17 & >100 \iffalse 367.66 \fi & \cellcolor[HTML]{C0C0C0}4.54 & >100 \iffalse 876.64 \fi & 52.62 \\
& & 100 & \cellcolor[HTML]{656565}1.31 & 35.56 & >100 \iffalse 271.56 \fi & \cellcolor[HTML]{C0C0C0}3.43 & >100 \iffalse 483.73 \fi & 49.46 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\begin{tabular}{m{0.8mm}m{41mm}m{41mm}m{41mm}m{41mm}}
& $D = 3$ & $D = 10$ & $D = 50$ & $D = 100$ \\
\parbox[t]{2mm}{\multirow{1}{*}{\rotatebox[origin=c]{90}{Gaussian}}} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_gaus_experiment_dim_3.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_gaus_experiment_dim_10.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_gaus_experiment_dim_50.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_gaus_experiment_dim_100.png} \\
\hline
\parbox[t]{2mm}{\multirow{1}{*}{\rotatebox[origin=c]{90}{Uniform}}} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_lin_experiment_dim_3.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_lin_experiment_dim_10.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_lin_experiment_dim_50.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_lin_experiment_dim_100.png}\\
\hline
\parbox[t]{2mm}{\multirow{18}{*}{\rotatebox[origin=c]{90}{Student}}} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_1_experiment_dim_3.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_1_experiment_dim_10.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_1_experiment_dim_50.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_1_experiment_dim_100.png} \\
& \includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_2_experiment_dim_3.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_2_experiment_dim_10.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_2_experiment_dim_50.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_2_experiment_dim_100.png} \\
& \includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_3_experiment_dim_3.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_3_experiment_dim_10.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_3_experiment_dim_50.png} &
\includegraphics[width=0.95\linewidth]{Figures/experiments/FIGS_TC/tc_tstu_3_experiment_dim_100.png}
\end{tabular}
\begin{tabular}{m{150mm}}
\includegraphics[width=0.95\linewidth, trim={0 100mm 0 0}, clip]{Figures/experiments/FIGS_TC/legend.png}\\
\end{tabular}
\caption{Total correlation estimation results in relative mean absolute error. Results are given for different distributions: Gaussian, uniform, and Student PDFs ($\nu = 3,5,20$, one per row). Each column corresponds to an experiment with a particular number of dimensions $D$. Mean and standard deviation are given over five trials.}
\label{fig:TC_gauss}
\end{figure*}
\subsubsection{Experiments on real-world data: visual neuroscience}
Here we use RBIG estimations of $T$ to measure the reduction of redundancy along the neural pathway in the visual brain.
The Efficient Coding Hypothesis argues that the retina-cortex pathway evolved to get an optimal representation of natural images~\cite{Barlow61,Barlow01}. Emergence of biological-like mechanisms from redundancy reduction optimization is the classical way to confirm this hypothesis~\cite{Olshausen96,Schwartz01,Malo06,Laparra15}.
However, an alternative confirmation relies on evaluating the statistical performance of biologically plausible networks that have not been optimized for information-theoretic goals~\cite{Malo10,GomezVilla19}.
If biological networks substantially reduce signal redundancy, this suggests that the hypothesis is correct.
In this example we compute the redundancy of real image data~\cite{VanHateren98} injected through physiologically sensible networks~\cite{Carandini12} tuned to fit visual psychophysics as in~\cite{Watson02,Malo10,LaparraJOSA17,GomezVilla19}.
RBIG is used to compute the redundancy (the total correlation shared by the elements of the image signal) at the input and at the output of the biological network. A substantial $\Delta T$ means that the biological network encodes the considered signal efficiently.
This example is technically interesting because
the RBIG estimate of $\Delta T$ can be compared with a reliable reference of $\Delta T$ computed via Eq.~\eqref{deltaT}.
In this visual neuroscience problem, the reference (or ground truth for comparison) can be obtained from the
analytic Jacobian of the transform applied by the biological network, i.e. $\nabla G(\vect{x})$ in Eq.~\eqref{deltaT}, which is known~\cite{Martinez18}.
In our experiment, $1.2 \cdot 10^6$ three-pixel image samples were considered across the space of luminance and contrast.
The responses of the biological model to these images were computed, and the reduction of redundancy for stimuli from each region (or bin) of the image space was estimated using (1) RBIG and (2) the reference computed from the analytic Jacobian.
The resulting $\Delta T$ is compared to the PDF of natural images in the luminance/contrast space in Fig.~\ref{fig:Fig_vision}.
These results show that the efficiency of biological vision, $\Delta T$, is larger in more populated regions of the image space, in line with the Efficient Coding Hypothesis. It is important to stress that this match between the biological system and natural images holds regardless of the image database and regardless of minor differences in the biological model. Note that these results (obtained from the radiometrically calibrated VanHateren database~\cite{VanHateren98} and a narrow interaction kernel in the Divisive Normalization\footnote{Image data and code for the vision model and analysis are available at \texttt{http://isp.uv.es/code/redundancy\_natural\_images.zip}}) are consistent with similar results in~\cite{GomezVilla19}, which used hyperspectral scenes~\cite{Foster16}, and in~\cite{Malo19}, which used the colorimetrically calibrated IPL database~\cite{Laparra12}. In these cases, different kernels in the Wilson-Cowan and
Divisive Normalization vision models were used.
The similarity of all these results indicates that the connection between neural efficiency and image statistics is quite robust. From the technical point of view, the RBIG estimate obtained from real image data closely follows the theoretical reference. Note that the deviations from the reference are small in general, and slightly larger only in the region where the number of samples is small (the high-luminance / high-contrast region).
\begin{figure*}[t!]
\centering
\hspace{0cm}\includegraphics[width=17cm]{Figures/experiments/FIGS_TC/result_neuro_vis.png}
\caption{Efficient Coding Hypothesis in Visual Neuroscience from RBIG estimations of total correlation: redundancy reduction in the human visual system. Left panel shows the PDF of natural images (VanHateren database~\cite{VanHateren98}) at the luminance/contrast plane. Surfaces at the top-right show the redundancy reduction $\Delta T$ along the considered biological network (see~\cite{Carandini12,Martinez18,GomezVilla19} for background on these networks) at different points of the image space. Estimations are done with RBIG (center) and with the theoretical reference computed with the analytical Jacobian of the network~\cite{Martinez18} via Eq.~\eqref{deltaT} (right).
This represents the efficiency of the visual brain in transmitting information about natural images.
The surfaces at the bottom display the uncertainty of the RBIG and reference estimates computed from 10 realizations.}
\label{fig:Fig_vision}
\end{figure*}
\section{Introduction}
There is now considerable evidence that bosonic matter coupled to Chern-Simons (CS) gauge theories and fermionic matter coupled to (roughly speaking) `level-rank dual' CS gauge theories are dual to each other in three spacetime dimensions
\cite{Giombi:2011kc, Aharony:2011jz, Maldacena:2011jn, Maldacena:2012sf,
Chang:2012kt, Jain:2012qi, Aharony:2012nh, Yokoyama:2012fa,
GurAri:2012is, Aharony:2012ns, Jain:2013py, Takimi:2013zca,
Jain:2013gza, Frishman:2013dvg, Yokoyama:2013pxa, Bardeen:2014paa, Jain:2014nza,
Bardeen:2014qua, Gurucharan:2014cva, Dandekar:2014era,
Frishman:2014cma, Moshe:2014bja, Aharony:2015pla, Inbasekar:2015tsa,
Bedhotiya:2015uga, Gur-Ari:2015pca, Minwalla:2015sca,
Radicevic:2015yla, Geracie:2015drf, Aharony:2015mjs,
Yokoyama:2016sbx, Gur-Ari:2016xff, Karch:2016sxi, Murugan:2016zal,
Seiberg:2016gmd, Giombi:2016ejx, Hsin:2016blu, Radicevic:2016wqn,
Karch:2016aux, Giombi:2016zwa, Wadia:2016zpd, Aharony:2016jvv,
Giombi:2017rhm, Benini:2017dus, Sezgin:2017jgm, Nosaka:2017ohr,
Komargodski:2017keh, Giombi:2017txg, Gaiotto:2017tne,
Jensen:2017dso, Jensen:2017xbs, Gomis:2017ixy, Inbasekar:2017ieo,
Inbasekar:2017sqp, Cordova:2017vab, Charan:2017jyc, Benini:2017aed,
Aitken:2017nfd, Jensen:2017bjo, Chattopadhyay:2018wkp,
Turiaci:2018nua, Choudhury:2018iwf, Karch:2018mer, Aharony:2018npf,
Yacoby:2018yvy, Aitken:2018cvh, Aharony:2018pjn, Dey:2018ykx, Skvortsov:2018uru,
Chattopadhyay:2019lpr, Dey:2019ihe, Halder:2019foo, Aharony:2019mbc,
Li:2019twz, Jain:2019fja, Inbasekar:2019wdw, Inbasekar:2019azv,
Jensen:2019mga, Kalloor:2019xjb, Ghosh:2019sqf, Inbasekar:2020hla, Jain:2020rmw, Minwalla:2020ysu, Jain:2020puw, amiyata1}\footnote{There are two pairs of conjecturally dual theories among non-supersymmetric matter Chern-Simons theories. One pair consists of the regular fermion and the critical boson, together called the quasi-fermionic theories; the other consists of the regular boson and the critical fermion, together called the quasi-bosonic theories. For details see, e.g., \cite{Maldacena:2011jn, Maldacena:2012sf, Aharony:2012ns, Jain:2013py, Jain:2014nza, Minwalla:2015sca, Choudhury:2018iwf, Dey:2018ykx, Aharony:2018pjn, Minwalla:2020ysu}.}.
More specifically, $SU(N_B)$ CS gauge fields coupled to bosonic matter are level-rank dual to $U(N_F)$ CS gauge fields coupled to fermionic matter (in the strict large $N$ limit, in which we will be working, the difference between $SU(N)$ and $U(N)$ effectively disappears)\footnote{In the dimensional regularization scheme, the Chern-Simons levels are renormalized. In the large $N$ limit, the renormalized levels and ranks of the two gauge groups are related as $\kappa_F=-\kappa_B$ and $N_F=|\kappa_B|-N_B$ (see, e.g., \cite{Jain:2014nza, Minwalla:2015sca, Choudhury:2018iwf, Aharony:2018pjn, Dey:2018ykx, Minwalla:2020ysu} for details). For the precise form of the conjectured duality see appendix A of \cite{Minwalla:2020ysu}.}. This is an example of a strong-weak coupling duality. One of the exciting features of these theories is that, in the 't Hooft large $N$ limit, exact computations of observables can be performed on both sides of the dual pair, so the duality can be explicitly checked in this limit.
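Combining the relations quoted in the footnote, the duality map on the 't Hooft couplings can be written explicitly (a short consistency check using only the large $N$ relations above):
\begin{equation*}
\lambda_F=\frac{N_F}{\kappa_F}=\frac{|\kappa_B|-N_B}{-\kappa_B}=\lambda_B-\mathrm{sgn}(\lambda_B),
\qquad \lambda_B=\frac{N_B}{\kappa_B},
\end{equation*}
so that $|\lambda_F|=1-|\lambda_B|$: a weakly coupled theory on one side maps to a strongly coupled theory on the other.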
There have previously been numerous computations and checks of this duality for both massless and massive matter theories at zero temperature. Computations have also been performed at finite temperature: the thermal free energies of the fermionic and bosonic theories have been shown to map to each other under duality \cite{Aharony:2012ns, Jain:2013py, Choudhury:2018iwf, Dey:2018ykx, Minwalla:2020ysu}. These theories also admit an infinite set of higher-spin (spin $\geq 1$) single trace current operators \cite{Giombi:2011kc, Maldacena:2011jn, Maldacena:2012sf, Frishman:2013dvg}. These theories have a global $U(1)$ symmetry; the spin-one operator is identified as the corresponding $U(1)$ conserved current, and the spin-two operator is interpreted as the stress tensor of the corresponding theory. Apart from these, the theories also contain a single trace, gauge invariant scalar `current' operator, which we refer to as the spin-zero operator. There have been several previous computations of various correlation functions of these operators in different limiting cases. For example, the correlation functions of single trace operators of various spins were computed for massless fundamental matter (both fermions and bosons) coupled to Chern-Simons theory at zero temperature (see, e.g., \cite{Aharony:2012nh, GurAri:2012is, Yacoby:2018yvy, Kalloor:2019xjb}). In \cite{Geracie:2015drf, Gur-Ari:2016xff}, the two-point function of the spin-one current was computed in the massive fermionic matter theory coupled to Chern-Simons gauge fields at zero temperature. At finite temperature, the effects of the holonomy of the gauge field (essentially its zero-mode along the thermal circle) become important and have to be taken into account as well \cite{Aharony:2012ns, Jain:2013py}.
In the large-$N$ limit, the effect of holonomy can be described by a continuous distribution function. A few computations of current two-point functions in the massless theories at non-zero temperature, with specific holonomy distributions, exist in the literature. For example, in \cite{Gur-Ari:2016xff}, the authors computed the two-point function of the $U(1)$ conserved current in the massless fermionic theory at finite temperature in the context of studying the Hall conductivity. Their analysis, however, considered only a particular holonomy distribution of the gauge field, namely the universal table-top distribution, and the final results were left in integral form. More recently, in \cite{Ghosh:2019sqf}, two-point functions of several current operators were computed in the massless fermionic and massless bosonic theories at non-zero temperature, again for the particular (table-top) holonomy distribution appropriate for the infinite volume limit. In this work, we generalize the computations of various current correlators to massive matter at finite temperature with an arbitrary holonomy distribution, in Chern-Simons theories coupled to fundamental matter \footnote{We have been able to solve the resulting Schwinger-Dyson equations explicitly by `effectively performing' the loop momentum integrals. From this point of view, this work can also be thought of as the solution of an interesting `mathematical problem'.}. We consider the massive fermionic and bosonic matter theories separately at non-zero temperature. Each comes in two types: the regular matter theories and the critical matter theories. The regular matter theory on one side of the duality is conjectured to be dual to the critical matter theory on the other side, and vice versa.
It was discussed in \cite{Aharony:2012nh, GurAri:2012is}, and also pointed out in \cite{Ghosh:2019sqf}, that the two-point correlators of the spin $s$ operators, for $s\geq 1$, are essentially the same in the regular and critical matter theories, the only difference entering implicitly through the exact masses of the fundamental fermionic/bosonic excitations. The two-point function of the spin-zero, single trace scalar operator, on the other hand, differs between the regular and critical theories; it was discussed in \cite{Aharony:2012nh, GurAri:2012is} that the two are related in a particular way, at least in the massless theory. To be specific, we consider here the massive regular fermionic matter theory coupled to $U(N_F)$ Chern-Simons gauge fields. For convenience of the computation, we also consider the massive regular bosonic matter theory coupled to $SU(N_B)$ Chern-Simons gauge fields. We choose to work in the dimensional regularization scheme, in which the Chern-Simons level is given by the renormalized parameter $\kappa$. We work in the 't Hooft large-$N$ limit, i.e., we take $N,\kappa\to \infty$ keeping $\lambda=\frac{N}{\kappa}$ fixed, with the modulus of the 't Hooft coupling $\lambda$ less than unity. One of our goals in this paper is to check the conjectured bosonization duality. To do so, we take the critical limit of the regular bosonic theory \cite{Aharony:2012nh, GurAri:2012is, Jain:2014nza} and check the duality.
For the computation of the bosonic correlators, we take a slightly different route than \cite{Ghosh:2019sqf}. Following \cite{Aharony:2012nh}, in the massive bosonic theory at finite temperature, we first compute the offshell thermal four-point function of the fundamental scalars; we do so by generalizing the results of \cite{Aharony:2012nh, Jain:2014nza} to include finite temperature effects. The offshell four-point function of scalars is then used to compute the various current correlators. In the massive case, even at zero temperature, the bosonic matter theories have two phases: the unHiggsed phase of scalars and the Higgsed phase of $W$ and $Z$ bosons (for details on the Higgsed phases of bosonic matter coupled to Chern-Simons theories, see, e.g., \cite{Choudhury:2018iwf, Dey:2018ykx}). In the critical boson theory, the bare mass parameter $m_B^{\text{cri}}$ can take either sign. The unHiggsed phase of the bosonic scalar corresponds to the case $m_B^{\text{cri}} > 0$, which under bose-fermi duality gets mapped to the regular fermionic matter theory with $\text{sgn}(m_F\lambda_F)=+1$ \footnote{At finite temperature, this condition is replaced by the more general condition $\text{sgn}(h_F\lambda_F)=+1$, where $\text{sgn}(h_F)$ is defined around \eqref{masseqnforcF}. The regular fermionic theory at nonzero temperature with $\text{sgn}(h_F\lambda_F)=+1$ is dual to the unHiggsed phase of the critical boson theory. The other case, $\text{sgn}(h_F\lambda_F)=-1$, is dual to the Higgsed phase of the critical boson theory. At zero temperature, $\text{sgn}(h_F)$ is replaced by $\text{sgn}(m_F)$. For details see, e.g., \cite{Choudhury:2018iwf}, where $\text{sgn}(h_F)$ is labelled by a different symbol, $\text{sgn}(X_F)$.}, where $m_F$ is the bare mass parameter of the regular fermionic theory and $\lambda_F=\frac{N_F}{\kappa_F}$ is its 't Hooft coupling.
We have explicitly checked the bose-fermi duality between the current correlators in these phases. The fermionic results, however, are valid for both possible signs of $m_F$. The other case, $\text{sgn}(m_F\lambda_F)=-1$, in the fermionic theory is dual to the Higgsed phase of the bosonic theory (which corresponds to $m_B^{\text{cri}} < 0$). We use the fermionic current correlators to predict the corresponding results in the Higgsed phase of the bosonic matter theory, and leave the explicit check of duality between the current correlators by exact computation in this phase for future work.
The organization of this paper is as follows. In section \ref{holonomy}, we briefly review the effect of holonomy in Chern-Simons matter theories at finite temperature. In section \ref{fermth}, we study the fermionic matter coupled Chern-Simons theory and present the computation of the thermal two-point functions of the spin-zero and spin-one current operators. In section \ref{bosonth}, we study the bosonic matter coupled Chern-Simons theory, present the results for the offshell four-point function of the fundamental scalars at finite temperature (the details of which are presented in appendix \ref{4ptscalar}), and again compute the thermal two-point functions of the spin-zero and spin-one current operators. In section \ref{anlzresult}, we analyze the main results of this paper in various special limits and show that they agree with the existing results in the limiting cases. In section \ref{dualitycheck}, we explicitly check that the results obtained in this paper are consistent with the conjectured bose-fermi duality. In section \ref{discussion}, we draw conclusions, and discuss the outlook and interesting future directions.
\section{A note on holonomy and related conventions}\label{holonomy}
In a Euclidean thermal field theory at temperature $T$ in three dimensions, the $x^3$ direction is compactified on a circle $\mathbf{S}^1$ of circumference $\beta=T^{-1}$ \footnote{In other words, the identification $x^3 \sim x^3+\beta$ is used; here, $x^3$ is the Euclidean time coordinate.}.
The position space integrals that we encounter here are given by
\begin{equation}
\int {d}^3x \ f(x) = \int d^2\vec{x} \int_{0}^{\beta} dx^3 \ f(\vec{x},x^3) \ .
\end{equation}
In Chern-Simons matter theory at finite temperature, the holonomy, i.e., the zero mode of the gauge field along the thermal circle, becomes important and crucially affects the physical observables \cite{Aharony:2012ns, Jain:2013py}. The holonomy of the gauge field in a $U(N)$ gauge theory is completely specified by its eigenvalues $e^{\i \alpha_j}$, where $j=1,\dots, N$ and $\alpha_j\in (-\pi,\pi]$ \cite{Choudhury:2018iwf, Minwalla:2020ysu, Aharony:2012ns}. In the large-$N$ limit, the location of the eigenvalues on the unit circle can be specified by a continuous distribution function $\rho(\alpha)$ defined by
\begin{equation}
\rho(\alpha)=\lim_{N\to \infty} \frac{1}{N}\sum_{j=1}^{N}\delta(\alpha-\alpha_j) \ .
\end{equation}
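As a quick numerical illustration of this definition (not part of the derivation), one can check that a large but finite set of evenly spread eigenvalue phases reproduces a normalized density. The sketch below uses the uniform table-top profile, $\rho(\alpha)=\frac{1}{2\pi|\lambda|}$ for $|\alpha|\leq \pi|\lambda|$, which is the distribution referred to in the introduction (the explicit form is quoted from the literature, e.g., \cite{Aharony:2012ns, Gur-Ari:2016xff}); the value of $\lambda$ is purely illustrative.

```python
import math

lam = 0.5          # illustrative 't Hooft coupling; table-top support is |alpha| <= pi*lam
N = 200_000        # number of eigenvalues, a finite-N stand-in for the large-N limit

# N eigenvalue phases spread evenly over the table-top support
alphas = [-math.pi*lam + 2*math.pi*lam*(j + 0.5)/N for j in range(N)]

# bin the delta functions in rho(alpha) = (1/N) sum_j delta(alpha - alpha_j)
nbins = 100
width = 2*math.pi/nbins
rho = [0.0]*nbins
for a in alphas:
    rho[int((a + math.pi)/width)] += 1.0/(N*width)

norm = sum(r*width for r in rho)   # should be 1 by construction
```

Bins inside the support converge to the constant value $\frac{1}{2\pi|\lambda|}$, and the total integral is unity, as required by the normalization condition below.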
It follows that the holonomy distribution function is normalized, i.e., $\int_{-\pi}^{\pi} \rho(\alpha) ~d\alpha=1$. We take the holonomy distribution $\rho(\alpha)$ to be arbitrary throughout the paper, unless otherwise mentioned \footnote{Throughout, we assume the holonomy distribution $\rho(\alpha)$ to be an even function of $\alpha$, i.e., $\rho(-\alpha)=\rho(\alpha)$.}. The correlators that we compute in this paper, by summing Feynman diagrams and performing loop integrals, are in momentum space. Technically, the effect of a non-trivial holonomy is to shift the loop momenta in the $k_3$ direction for a generic loop momentum $k\equiv (k_1,k_2,k_3)$ \cite{Aharony:2012ns, Gur-Ari:2016xff, Choudhury:2018iwf, Ghosh:2019sqf}. Let us briefly explain this point. As the $x^3$-direction is compactified in the thermal theory, the conjugate momentum along the third direction is quantized. We will not, however, explicitly write the quantized version of $k_3$ every time, but will implicitly assume that this is the case. The momentum space integrals are given by
\begin{equation}\label{momint}
\int \frac{\mathcal{D}^3k}{(2\pi)^3} f(k)=\int \frac{d^2\vec{k}}{(2\pi)^2} \int \frac{\mathcal{D}k_3}{2\pi} f(\vec{k},k_3) \ ,
\end{equation}
where, $d^2\vec{k}$ is the usual integration measure $dk_1dk_2$ for the spatial momenta, and the momentum integration measure $\mc{D}k_3$ is defined by
\begin{equation}\label{mom3int}
\int \frac{\mathcal{D}k_3}{2\pi} f(\vec{k},k_3) = \frac{1}{\beta} \int_{-\pi}^{\pi} \rho(\alpha) d\alpha \ \sum_{k_3} f\Big( \vec{k},k_3+\frac{\alpha}{\beta} \Big) \ .
\end{equation}
Here, as mentioned earlier, the effect of holonomy is to shift the $k_3$ momenta by $\beta^{-1}\alpha$. The exact meaning of the `integral over $k_3$', which is really a sum \cite{Choudhury:2018iwf}, is slightly different for bosons and fermions, depending on the periodic/antiperiodic boundary conditions on the bosonic and fermionic fields along $\mathbf{S}^1$; the precise meaning is explained below in \eqref{bosholonomy} and \eqref{fermholonomy}. To see the precise form of the quantization condition on the momentum $k_3$, we consider the Fourier transform of a function $f(x)\equiv f(\vec{x},x^3 )$, which is defined by \cite{Choudhury:2018iwf}
\begin{equation}\label{ft}
f(\vec{x},x^3 ) = \int \frac{\mathcal{D}^3 k}{(2\pi)^3} \ e^{ \i \vec{k}\cdot \vec{x}+\i k_3x^3} \ f(\vec{k},k_3) \ .
\end{equation}
Depending upon the periodicity/antiperiodicity of the boundary conditions on the fields along the thermal circle, there are two possible cases.
\subsubsection*{Bosonic Theory :}
Bosonic fields are periodic along the thermal circle, i.e., $\phi(\vec{x},x^3+\beta )=\phi(\vec{x},x^3 ) $.
This implies $e^{ \i k_3\beta} =e^{ 2n_k \pi \i }$, where $n_k \in \mathbb{Z}$; i.e., $k_3$ is quantized with the quantization condition $k_3=\frac{2n_k \pi }{\beta}$.
It follows from \eqref{momint} and \eqref{mom3int} that the momentum integral with quantized $k_3$ in the bosonic theory for an arbitrary holonomy distribution $\rho_B(\alpha)$ is given by
\begin{equation}\label{bosholonomy}
\int \frac{\mathcal{D}_B^3 k}{(2\pi)^3} f(\vec{k},k_3)= \int \frac{d^2 \vec{k}}{(2\pi)^2} \int \frac{\mathcal{D}_B k_3}{2\pi} f(\vec{k},k_3) =\frac{1}{\beta} \int \frac{d^2 \vec{k}}{(2\pi)^2} \int_{-\pi}^{\pi} \rho_{B}(\alpha) ~d\alpha \sum_{n_k\in \mathbb{Z}} f\bigg(\vec{k},\frac{2n_k\pi + \alpha}{\beta} \bigg) \ .
\end{equation}
For later use, we here define the following function $\chi_B(z)$ of a real variable $z$
\begin{equation}\label{chiB}
\chi_B(z) =\frac{1}{2} \int_{-\pi}^{\pi} \rho_{B}(\alpha) ~d\alpha ~\bigg[\coth\bigg(\frac{\beta z+\i \alpha}{2} \bigg)+\coth\bigg(\frac{\beta z-\i \alpha}{2} \bigg)\bigg] \ .
\end{equation}
Following \cite{Choudhury:2018iwf}, we also define another function $\xi_B(z)$ as $\xi_B(z)=\int^{z}\chi_B(w)\ dw$, an (indefinite) integral over $\chi_B(z)$. The explicit form of the function $\xi_B(z)$ is given by \eqref{xiB} in the appendix \ref{appdef}.
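The structure of \eqref{chiB} can be checked directly against the measure \eqref{bosholonomy}: for a fixed holonomy angle $\alpha$, the shifted bosonic Matsubara sum of $2z/(k_3^2+z^2)$ equals $\frac{1}{2}\big[\coth\big(\frac{\beta z+\i\alpha}{2}\big)+\coth\big(\frac{\beta z-\i\alpha}{2}\big)\big]$, i.e., exactly the bracket appearing in the integrand of \eqref{chiB}. A minimal numerical sketch (the sum is truncated, so the tolerance reflects the $O(1/N)$ truncation error):

```python
import cmath, math

def shifted_boson_sum(z, alpha, beta=1.0, N=200_000):
    # (1/beta) * sum_n 2z/(k3^2 + z^2) with bosonic k3 = (2*pi*n + alpha)/beta,
    # truncated at |n| <= N
    s = 0.0
    for n in range(-N, N + 1):
        k3 = (2*math.pi*n + alpha)/beta
        s += 2*z/(k3**2 + z**2)
    return s/beta

def coth_bracket(z, alpha, beta=1.0):
    # (1/2)[coth((beta*z + i*alpha)/2) + coth((beta*z - i*alpha)/2)]; the result is real
    w = (beta*z + 1j*alpha)/2
    return (0.5*(1/cmath.tanh(w) + 1/cmath.tanh(w.conjugate()))).real
```

For $\alpha=0$ this reduces to the familiar identity $\frac{1}{\beta}\sum_n \frac{2z}{(2\pi n/\beta)^2+z^2}=\coth(\beta z/2)$.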
\subsubsection*{Fermionic Theory :}
Fermionic fields, on the other hand, satisfy an antiperiodic boundary condition along the thermal circle, i.e., $\psi(\vec{x},x^3+\beta )=- \psi(\vec{x},x^3 ) $, which
implies $e^{ \i k_3\beta} =e^{ \i (2n_k+1) \pi }$, where $n_k \in \mathbb{Z}$. This means $k_3$ in the fermionic theory is quantized with the quantization condition $k_3=\frac{(2n_k+1) \pi }{\beta}$.
So, the momentum integral with quantized $k_3$ in the fermionic theory for an arbitrary holonomy distribution $\rho_F(\alpha)$ is given by
\begin{equation}\label{fermholonomy}
\int \frac{\mathcal{D}_F^3 k}{(2\pi)^3} f(\vec{k},k_3)=\int \frac{d^2 \vec{k}}{(2\pi)^2} \int \frac{\mathcal{D}_F k_3}{2\pi} f(\vec{k},k_3) =\frac{1}{\beta} \int \frac{d^2 \vec{k}}{(2\pi)^2} \int_{-\pi}^{\pi} \rho_{F}(\alpha) ~d\alpha \sum_{n_{k}\in \mathbb{Z}} f\bigg(\vec{k},\frac{(2n_k+1)\pi +\alpha}{\beta} \bigg) \ .
\end{equation}
As in the bosonic theory above, we define here the following function $\chi_F(z)$
\begin{equation}\label{chiF}
\chi_F(z) =\frac{1}{2} \int_{-\pi}^{\pi} \rho_{F}(\alpha) ~d\alpha ~\bigg[\tanh\bigg(\frac{\beta z+\i \alpha}{2} \bigg)+\tanh\bigg(\frac{\beta z-\i \alpha}{2} \bigg)\bigg] ,
\end{equation}
which will be used later in the paper. As in the case of bosons, we define another function $\xi_F(z)$ as an integral over $\chi_F(z)$, i.e., as $\xi_F(z)=\int^{z}\chi_F(w)\ dw$. The explicit form of $\xi_F(z)$ is given by \eqref{xiFz}. For other conventions and useful definitions see Appendix \ref{appdef}.
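As in the bosonic case, \eqref{chiF} can be checked against the fermionic measure \eqref{fermholonomy}: for a fixed holonomy angle $\alpha$, the shifted fermionic Matsubara sum of $2z/(k_3^2+z^2)$ equals $\frac{1}{2}\big[\tanh\big(\frac{\beta z+\i\alpha}{2}\big)+\tanh\big(\frac{\beta z-\i\alpha}{2}\big)\big]$, the bracket in \eqref{chiF}. A minimal numerical sketch:

```python
import cmath, math

def shifted_fermion_sum(z, alpha, beta=1.0, N=200_000):
    # (1/beta) * sum_n 2z/(k3^2 + z^2) with fermionic k3 = ((2n+1)*pi + alpha)/beta,
    # truncated at |n| <= N
    s = 0.0
    for n in range(-N, N + 1):
        k3 = ((2*n + 1)*math.pi + alpha)/beta
        s += 2*z/(k3**2 + z**2)
    return s/beta

def tanh_bracket(z, alpha, beta=1.0):
    # (1/2)[tanh((beta*z + i*alpha)/2) + tanh((beta*z - i*alpha)/2)]; the result is real
    w = (beta*z + 1j*alpha)/2
    return (0.5*(cmath.tanh(w) + cmath.tanh(w.conjugate()))).real
```

Note that shifting $\alpha\to\alpha+\pi$ in the bosonic bracket turns $\coth$ into $\tanh$, which is why the two cases differ only by the half-integer offset of the Matsubara frequencies.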
\section{Thermal correlators in the fermionic theory} \label{fermth}
\subsection{Brief review of the theory}
As discussed in the introduction, in this paper we study the massive, regular fermionic matter theory with a finite bare mass $m_F$, coupled to $U(N_F)$ Chern-Simons gauge fields at finite temperature in the large-$N$ limit. In the dimensional regularization scheme, the Euclidean action of this theory is given by
\begin{align}\label{rflag}
\mc{S}_{F}[A,\psi] = \frac{\i \kappa_F}{4\pi} \int \text{d}^3 x\ \epsilon^{\mu\nu\rho}\,\text{tr}\left( A_\mu \partial_\nu A_\rho - \frac{2\i}{3} A_\mu A_\nu A_\rho\right)\ +\int \text{d}^3 x \left(\bar\psi \gamma^\mu D_\mu \psi + m_F \bar\psi \psi\right) \ ,
\end{align}
where, covariant derivatives for fundamental and antifundamental fields are defined by, $D_{\mu} {\psi}=\partial_\mu {\psi}-\i A_{\mu} \psi$ and $D_{\mu} \bar{\psi}=\partial_\mu \bar{\psi}+\i \bar{\psi}A_{\mu}$, respectively \footnote{A note on the notations and conventions: we don't explicitly write the color indices, but the color contractions are easily understood from the context. $\psi$ and $\bar{\psi}$ can be thought of as $N_F$ component column and row vectors, respectively and $A_{\mu}$ can be thought of as $N_F\times N_F$ matrices in the color space. We don't show explicitly the spinor indices of $\psi$ and $\bar{\psi}$ which are two-component spinors in three dimensions. Also, we don't explicitly write the adjoint index of the gauge fields $A_{\mu}\equiv A_{\mu}^a T^a$, and color trace over the gauge group generators $T^a$. One can easily restore these indices and explicitly use the normalizations $\text{Tr}(T^aT^b)=C(N_F)\delta^{ab}$. Conventionally, $C(N_F)=\frac{1}{2}$. The convention for the color contraction that is used here, is such that $\bar{\psi}\psi=\bar{\psi}_{i}\psi^{i}$; but $\psi\bar{\psi}$ does not necessarily have the contracted color indices. Also, $\bar{\psi}M\psi=\bar{\psi}_{i}M^{i}_{\ j} \psi^{j}$. The same convention is used for the bosonic theories where $N_F$ is replaced by $N_B$.}. The gauge field $A_\mu$ is in the adjoint representation of $U(N_F)$. The gamma matrices are chosen to be the ordinary $2\times 2$ Pauli matrices $\gamma^\mu = \sigma^\mu $, $\mu=1,2,3$.
Following the literature, we work in the gauge $A_{-}=0$. With this choice of gauge, the action \eqref{rflag} in momentum space takes the following form \footnote{Normalization for the Levi-Civita tensor used in this paper is such that $\epsilon^{123}=\epsilon_{123}=1 $ and $\epsilon^{+-3}=-\epsilon_{+-3}=-\i $.}
\begin{equation}\label{fermactinfourspace}
\begin{split}
\mathcal{S}_{F}[A,\psi]
= & - \frac{\kappa_F \epsilon^{\mu -\nu} }{4\pi }\int \frac{\mc{D}_F^3k}{(2\pi)^3} \ A_{\mu }(-k) k_{-}A_{\nu }(k) \ + \ \int \frac{\mc{D}_F^3k}{(2\pi)^3} \ \bar{\psi}(-k)(\i \gamma^\mu k_\mu +m_F) \psi(k) \\
&-\i \int \frac{\mc{D}_F^3p}{(2\pi)^3} \frac{\mc{D}_F^3k}{(2\pi)^3}~\bar{\psi}(p)\gamma^{\mu} A_{\mu}(-p-k ) \psi(k) \ .
\end{split}
\end{equation}
In the momentum space, some of the Feynman rules for this theory are given below.
\subsubsection*{Propagator for the gauge field :}
The propagator for the gauge field is defined as
\begin{equation}
\langle A_{\mu}(p')A_{\nu}(p)\rangle = G^F_{\nu\mu}(p)~(2\pi)^3~\delta^{(3)}(p+p') \ ,
\end{equation}
where,
\begin{equation}\label{gaugebospropforFermion}
G^F_{\nu \mu}(p)=\frac{2\pi \epsilon_{\nu -\mu} }{\kappa_F \ p_{-}} \ .
\end{equation}
{As discussed in \cite{Aharony:2012ns}, the gauge field propagator is independent of $p_3$, so it is not affected by the holonomy.}
\subsubsection*{Exact propagator for the fermionic field :}
The exact fermionic propagator, in the strict large-$N$ limit, to all orders in the 't Hooft coupling parameter $\lambda_F$ \footnote{As discussed in the introduction, the 't Hooft coupling parameter for the fermionic theory is defined by $\lambda_F=\frac{N_F}{\kappa_F}$; similarly, for the bosonic theory, $\lambda_B=\frac{N_B}{\kappa_B}$. The range of the couplings is $0< |\lambda|<1$, with $\lambda=0$ being the weak coupling limit and $\lambda=1$ the strong coupling limit of the corresponding theory.}, is given by
\begin{equation}
\langle \psi(p)\bar{\psi}(p')\rangle =\ S_F(p) \ (2\pi)^3~\delta^{(3)}(p+p') \ ,
\end{equation}
where \footnote{Alternatively, the exact propagator can be written as
\begin{equation}\label{exactpsiprop}
S_F(p) = \frac{\tilde{\Sigma}_{\mathbb{1} }(p) \mathbb{1} -\i \gamma^{\mu} \big(p_\mu +\Sigma_{\mu} \big)}{p^2+c_F^2} \equiv S^F_{\mu}(p)\gamma^\mu +S^F_{\mathbb{1}}(p)\mathbb{1} \ .
\end{equation}
},
\begin{equation}
S_F(p) = \frac{1}{\i \gamma^\mu p_\mu +m_F\mathbb{1}+ \Sigma_F(p)} \ .
\end{equation}
Here, $\Sigma_F(p)$ is the fermionic self energy. It is useful to expand $\Sigma_F(p)$ in the basis $\{\gamma^\mu, \mathbb{1}\}$ as $\Sigma_F=\i \gamma^{\mu}\Sigma_\mu +\Sigma_{\mathbb{1}}\mathbb{1} $, where $\mathbb{1}$ is the $2\times 2$ identity matrix. At non-zero temperature, the self energy has already been computed in the literature in the lightcone gauge (see, e.g., \cite{Giombi:2011kc,Aharony:2012ns, Jain:2013py} for details), and is listed here for later use \footnote{
The fermionic self energy is obtained by summing the $1PI$ graphs (see, e.g., Figure 5 in \cite{Aharony:2012ns}) and is given by \cite{Giombi:2011kc, Aharony:2012ns, Jain:2013py}
\begin{equation}
\Sigma_F(p)= - N_F \int \frac{\mc{D}_F^3\ell}{(2\pi)^3} \ \mathcal{V}^{\mu}(\l,p) S_F(\l) \ \mathcal{V}^{\nu}(p,\l) G^F_{\nu\mu}(\l-p) \ .
\end{equation}}
\begin{equation}\label{fermionselfenrgy}
\begin{split}
\tilde{\Sigma}_{\mathbb{1}}(p_s) =m_F +\lambda_{F}\xi_F(a(p_s)) \ , \ \Sigma_{+}(\vec{p} ) =\frac{p_{+}}{p_s^2}\big[c_F^2-\tilde{\Sigma}^2_{\mathbb{1}}(p_s) \big] \ , \ \Sigma_{-}(p) \ = \ \Sigma_{3}(p)=0 \ ,
\end{split}
\end{equation}
where, $\tilde{\Sigma}_{\mathbb{1}}=m_F +\Sigma_{\mathbb{1}}$, $a(p_s)=+\sqrt{p_s^2+c_F^2} \ $ and $p^2=2p_{+}p_{-}+p_3^2=p_s^2+p_3^2$. The function $\xi_F(z)$ is defined in \eqref{xiFz}. $c_F$ is the thermal mass of the fermion which is determined by the gap equation
\begin{equation}\label{masseqnforcF}
c_F =\text{sgn}(h_F) (m_F + \lambda_F \xi_F(c_F) ) \ ,
\end{equation}
where $\text{sgn}(h_F)= \text{sgn}(m_F+\lambda_F \xi_{F}(c_F))$ (see, e.g., \cite{Choudhury:2018iwf}). We use the convention in which $c_F$ (and also $c_B$ in the case of bosons) is always positive. There are two possibilities, depending upon the sign of $h_F\lambda_F$ \cite{Choudhury:2018iwf} \footnote{The choice $\text{sgn}(h_F\lambda_F)=+1$ corresponds, under duality, to the unHiggsed phase of the critical bosonic scalar theory, while the choice $\text{sgn}(h_F\lambda_F)=-1$ corresponds to the Higgsed phase of the critical boson theory.}. At zero temperature, $\text{sgn}(h_F)$ is replaced by $\text{sgn}(m_F)$.
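For orientation, the gap equation \eqref{masseqnforcF} can be solved by simple fixed-point iteration once $\xi_F$ is specified. The sketch below is only illustrative: it assumes the trivial holonomy $\rho_F(\alpha)=\delta(\alpha)$, for which $\chi_F(z)=\tanh(\beta z/2)$, and fixes the integration constant in $\xi_F$ by the choice $\xi_F(z)=\frac{2}{\beta}\ln\big(2\cosh\frac{\beta z}{2}\big)$, so that $\xi_F(z)\to z$ as $\beta\to\infty$. Under these assumptions, at very low temperature and $\text{sgn}(h_F)=+1$ the iteration reproduces $c_F=m_F/(1-\lambda_F)$, the $T\to 0$ fixed point of the equation above.

```python
import math

def xi_F(z, beta):
    # assumed trivial-holonomy form: xi_F(z) = (2/beta)*log(2*cosh(beta*z/2)),
    # rewritten as z + (2/beta)*log(1 + exp(-beta*z)) to avoid overflow (valid for z > 0)
    return z + (2.0/beta)*math.log1p(math.exp(-beta*z))

def thermal_mass(m_F, lam_F, beta, sgn_h=+1, iters=400):
    # fixed-point iteration of c = sgn(h_F)*(m_F + lam_F*xi_F(c));
    # converges geometrically since the map's derivative is lam_F*chi_F(c), |.| < 1
    c = abs(m_F) + 0.1
    for _ in range(iters):
        c = sgn_h*(m_F + lam_F*xi_F(c, beta))
    return c
```

At finite temperature the same iteration returns a self-consistent $c_F$, i.e., a value satisfying the gap equation to machine precision under the stated assumptions.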
Diagrammatically, the gauge field and the exact fermionic propagators are
\begin{displaymath}
\begin{tikzpicture}
\begin{feynman}
\vertex (a) {\( \mu \)} ;
\vertex [right=2.5cm of a] (b) {\( \nu \)};
\diagram* {
(a) -- [boson, momentum={[arrow shorten=0.35]\(p\)}] (b),
};
\draw (4.0,0) node {{$= \ G_{\nu\mu}^{F}(p)$}};
\end{feynman}
\end{tikzpicture} \\
\hspace{2cm}
\begin{tikzpicture}
\begin{feynman}
\coordinate (c) at (2.4,0) ;
\vertex (a) ;
\vertex [right=1.2cm of a] (b) ;
\diagram* {
(a) -- [fermion] (b),
};
\filldraw[black] (b) circle (3pt) ;
\draw (3.5,0) node {{$= \ S_{F}(p)$}};
\draw (b) -- (c) ;
\draw (a)+(0.6, 0.3) node {{$p$}};
\end{feynman}
\end{tikzpicture}
\end{displaymath}
The vertex factor associated with the interaction term $\bar\psi(p) A_{\mu}(-p-k) \psi(k)$ is given by
\begin{equation}\label{vertexpsibarApsi}
\mathcal{V}^\mu(k,p)=\i \gamma^\mu \ .
\end{equation}
We now compute various correlators of gauge invariant, single trace operators in this theory.
\subsection{Case I: Spin zero}
The gauge invariant, single trace, spin $0$ operator in the regular fermionic theory is given by $J_0^{F}(x)=\bar{\psi}(x)\psi(x)$, which in the momentum space takes the form
\begin{equation}\label{J0(-q)}
\begin{split}
J_0^{F}(-q)
& =\int \frac{\mathcal{D}_F^3 k}{(2\pi)^3}~\bar{\psi}(-(k+q))\psi(k) \ .
\end{split}
\end{equation}
In order to compute the $\langle J_0^{F}J_0^{F} \rangle $ two-point correlator, we need the exact $J_0^{F}$ insertion vertex, which we obtain by solving the corresponding Schwinger-Dyson equation shown in Fig.\ref{sdexactvertx} in the subsection below.
\subsubsection{Schwinger-Dyson equation for the exact vertex}
Diagrammatically, the Schwinger-Dyson equation for the exact $J_0^{F}$ vertex is shown in Fig.\ref{sdexactvertx}. The exact $J_0^{F}$ insertion vertex is defined by
\begin{equation}
\big\langle J_0^{F}(-q)\psi(k)\bar{\psi}(p) \big\rangle =V_{0}^F(k,q)~(2\pi)^3\delta^{(3)}(p+k+q) \ .
\end{equation}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.7, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0);
\coordinate (D) at (2,2) ;
\coordinate (Dp) at (2,-2) ;
\coordinate (L) at (-1.1,0) ;
\diagram* {
(Dp) -- [fermion] (A),
};
\draw (A)+(2.5,0) node {{$= $}};
\node (c) [draw,circle,cross,minimum width=0.15 cm](b) at (A){};
\draw (A) -- (D) ;
\draw [-] (A) -- (Dp) node [midway, below ] {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\end{feynman}
\end{tikzpicture}
\raisebox{7pt} {
\begin{tikzpicture}[scale=0.7, cross/.style={path picture={ \draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east); }}]
\begin{feynman}
\coordinate (A) at (0,0);
\coordinate (D) at (2,1.7) ;
\coordinate (Dp) at (2,-1.7) ;
\coordinate (L) at (-1.1,0) ;
\diagram* {
(Dp) -- [fermion] (A),
};
\draw (A)+(2.5,0) node {{$+$}};
\draw (A) node {$\times$} ;
\draw (A) -- (D) ;
\draw [-] (A) -- (Dp) node [midway, below ] {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\end{feynman}
\end{tikzpicture} }
\raisebox{-6pt}{
\begin{tikzpicture}[scale=0.7, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0);
\coordinate (B) at (0.8,0.8) ;
\coordinate (C) at (1.5,1.5) ;
\coordinate (D) at (2,2) ;
\coordinate (Bp) at (0.8,-0.8) ;
\coordinate (Cp) at (1.5,-1.5) ;
\coordinate (Dp) at (2,-2) ;
\coordinate (L) at (-1.1,0) ;
\diagram* {
(C) -- [boson, momentum={[arrow shorten=0.35]\(\ell -k \)}] (Cp), };
\diagram* {
(Dp) -- [fermion] (Cp), };
\end{feynman}
\draw (A) -- (B) ;
\filldraw[black] (B) circle (3pt) ;
\draw (B) -- (C) ;
\draw (C) -- (D) ;
\draw (A) -- (Bp) ;
\filldraw[black] (Bp) circle (3pt) ;
\draw (Bp) -- (Cp) ;
\draw (Cp) -- (Dp) ;
\node [draw,circle,cross,minimum width=0.15 cm](b) at (A){};
\draw [-] (Cp) -- (Dp) node [midway, below ] {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\end{tikzpicture} }
\end{center}
\caption{Schwinger-Dyson equation for the exact vertex. The circled cross denotes an insertion of the exact vertex, a filled circle denotes the exact fermion propagator, and the bare cross denotes the `tree level' insertion.}
\label{sdexactvertx}
\end{figure}
In the planar limit, the Schwinger-Dyson equation for $V_{0}^F(k,q)$ is given by
\begin{equation}\label{SDforV0F}
V_{0}^F(k,q)=\widetilde{V}_{0}^F(k,q)+\ N_F \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \big[ \mathcal{V}^\mu(\l+q,k+q) S_F(\l+q) V_{0}^F(\l,q) S_F(\l) \mathcal{V}^\nu(k,\ell) \big] G^F_{\nu\mu}(\l-k) \ ,
\end{equation}
where the factor of $N_F$ appearing in front of the second term comes from the color factors, and $\widetilde{V}_{0}^F$ is the tree-level insertion vertex corresponding to $J_{0}^F$. From the definition of the $J_0^{F}$ operator in \eqref{J0(-q)}, it follows that $\widetilde{V}_{0}^F(k,q)=1$. We work in the `lightcone kinematics' $q_{\pm}=0$, in which case the calculations simplify considerably \footnote{The difficulty with the general case $q_{\pm}\neq 0$ lies in the fact that the corresponding integrals over the spatial components $\ell_1$ and $\ell_2$ of the loop momentum $\ell$ are difficult to perform exactly.}. Using \eqref{vertexpsibarApsi} and \eqref{gaugebospropforFermion}, \eqref{SDforV0F} simplifies to
\begin{equation}\label{SDforV0Fsimplified}
V_{0}^F(k,q)=1- 2\pi \i \lambda_{F} \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \ \Big[ \gamma^{[3|} S_F(\l+q) V_{0}^F(\l,q) S_F(\l) \gamma^{|+]} \Big] \ \frac{1}{(\l-k)_{-}} \ .
\end{equation}
Here and in the rest of the paper, we use the notation
\begin{equation}\label{gammaas}
\gamma^{[\rho|}A \gamma^{|\nu]} = \gamma^\rho A \gamma^\nu - \gamma^\nu A \gamma^\rho = 2\i \epsilon^{\rho \mu \nu} (A_{\mu}\mathbb{1}-A_{\mathbb{1}}\gamma_{\mu} ) \ ,
\end{equation}
for any $2\times 2$ matrix $A= A_{\mu}\gamma^\mu + A_{\mathbb{1}}\mathbb{1}$.
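The identity \eqref{gammaas} can be verified directly. The check below is purely illustrative: it works in Cartesian indices $\mu=1,2,3$ with $\gamma^\mu=\sigma^\mu$ and $\epsilon^{123}=1$, for an arbitrary complex matrix $A=A_\mu\gamma^\mu+A_{\mathbb{1}}\mathbb{1}$.

```python
import itertools

I2 = ((1, 0), (0, 1))
PAULI = {1: ((0, 1), (1, 0)), 2: ((0, -1j), (1j, 0)), 3: ((1, 0), (0, -1))}

def mul(a, b):
    # 2x2 matrix product
    return tuple(tuple(sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def lin(terms):
    # linear combination sum_i c_i * M_i over (c, M) pairs
    return tuple(tuple(sum(c*m[i][j] for c, m in terms) for j in range(2))
                 for i in range(2))

def eps(i, j, k):
    # totally antisymmetric symbol with eps(1,2,3) = +1
    return (i - j)*(j - k)*(k - i)//2

# arbitrary complex components of A = A_mu gamma^mu + A_id * identity
A_mu = {1: 0.3 + 0.1j, 2: -1.2 + 0.7j, 3: 0.5 - 0.4j}
A_id = -0.8 + 0.2j
A = lin([(A_id, I2)] + [(A_mu[m], PAULI[m]) for m in (1, 2, 3)])

def check_gamma_identity():
    # gamma^[rho| A gamma^|nu] = 2i eps^{rho mu nu} (A_mu * 1 - A_id * gamma_mu)
    for rho, nu in itertools.product((1, 2, 3), repeat=2):
        lhs = lin([(1, mul(mul(PAULI[rho], A), PAULI[nu])),
                   (-1, mul(mul(PAULI[nu], A), PAULI[rho]))])
        rhs = lin([(2j*eps(rho, m, nu)*A_mu[m], I2) for m in (1, 2, 3)]
                  + [(-2j*eps(rho, m, nu)*A_id, PAULI[m]) for m in (1, 2, 3)])
        if any(abs(lhs[i][j] - rhs[i][j]) > 1e-12 for i in range(2) for j in range(2)):
            return False
    return True
```

The same algebra, written in the lightcone components used in the text, only changes the explicit values of the epsilon symbol (e.g., $\epsilon^{+-3}=-\i$).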
As $V_{0}^F(k,q)$ is a $2\times 2$ matrix in the spinor space, it can be expanded in the complete basis $\{ \gamma^\mu, \mathbb{1} \}$ of $2\times 2$ matrices as $V_{0}^F(k,q)=V_{0,\mu}(k,q)\gamma^\mu +V_{0, \mathbb{1}}\mathbb{1} $. Comparing it with the RHS of \eqref{SDforV0Fsimplified}, we get $V_{0,-}(k,q)=V_{0,3}(k,q)=0$ and the following set of two non-trivial coupled integral equations
\begin{equation}\label{componenteqnsforV0}
\begin{split}
V_{0,+}(k,q) & = - 4\pi \i \lambda_{F} \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \ \big[S_F(\l+q) V_{0}^F(\l,q) S_F(\l)\big]_{\mathbb{1}} \ \frac{1}{(\l-k)_{-}} \ , \\
V_{0,\mathbb{1} }(k,q) & = 1+ 4\pi \i \lambda_{F} \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \ \big[S_F(\l+q) V_{0}^F(\l,q) S_F(\l)\big]_{-} \ \frac{1}{(\l-k)_{-}} \ ,
\end{split}
\end{equation}
where we have used the notation $[ABC]_{a}$ to denote the $a$-th component of the matrix product $ABC$ of three matrices $A$, $B$ and $C$, with $a\in \{\mu,\mathbb{1}\}$.
The RHS of the above equation \eqref{componenteqnsforV0} does not depend upon $k_3$. In the `lightcone kinematics' $q_{\pm} =0 $, the only non-zero component of $q$ is $q_3$. So, we use the $SO(2)$ rotational symmetry in the $1$-$2$ plane to write down
\begin{equation}\label{definefgforV0}
V_{0,\mathbb{1} }(\vec{\l} ,q_3)\equiv f(\l_s,q_3) \ , \ \ \ \ \
V_{0,+}(\vec{\l} ,q_3)\equiv \frac{\l_{+}}{\ell_s^2} g(\l_s,q_3) \ .
\end{equation}
The extra factor of $\ell_s^2$ in the denominator of the second expression above is kept for later convenience. Using \eqref{definefgforV0} and the explicit components of the exact fermion propagator \eqref{exactpsiprop}, the equations \eqref{componenteqnsforV0} can be simplified to give
\begin{equation}\label{eqnforfinV0}
f(k_s,q_3) = 1-4\pi \i \lambda_F \int\frac{\mathcal{D}_F^3\l }{(2\pi)^3} \frac{[-q_3+2\i \tilde{\Sigma}_{\mathbb{1}}(\l_s)]f(\l_s,q_3)+ g(\l_s,q_3) } {(\l_3^2+a^2(\l_s))((\l_3+q_3)^2+a^2(\l_s))} \frac{\l_{-} }{(\l-k)_{-}} \ ,
\end{equation}
and
\begin{equation}\label{eqnforginV0}
\frac{k_{+}}{k_s^2} g(k_s,q_3) = 4\pi \i \lambda_F \int\frac{\mathcal{D}_F^3\l }{(2\pi)^3} \frac{[\l_3(\l_3+q_3)+a^2(\l_s)-2 \tilde{\Sigma}^2_{\mathbb{1}}(\l_s)]f(\l_s,q_3)+\frac{1}{2}[q_3+2\i \tilde{\Sigma}_{\mathbb{1}}(\l_s) ] g(\l_s,q_3) } {(\l_3^2+a^2(\l_s))((\l_3+q_3)^2+a^2(\l_s)) } \frac{1}{(\l-k)_{-}} \ .
\end{equation}
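When these loop integrals are evaluated, the angular part of the $\vec{\l}$ integration converts the kernels $1/(\l-k)_{-}$ into step functions of the radial momentum; parametrizing $\l_{-}\propto \l_s e^{-\i\theta_\l}$ (a convention assumed here, whose overall normalization drops out of the ratio), one has $\int_0^{2\pi}\frac{d\theta_\l}{2\pi}\,\frac{\l_-}{(\l-k)_-}=\Theta(\l_s-k_s)$. A minimal numerical check of this step-function behaviour:

```python
import cmath, math

def angular_average(ls, ks, phi=0.7, M=4096):
    # midpoint-rule estimate of (1/2pi) * \int_0^{2pi} dtheta  l_- / (l_- - k_-),
    # with l_- = ls*exp(-i*theta)/sqrt(2) and k_- = ks*exp(-i*phi)/sqrt(2)
    total = 0j
    km = ks*cmath.exp(-1j*phi)/math.sqrt(2)
    for j in range(M):
        theta = 2*math.pi*(j + 0.5)/M
        lm = ls*cmath.exp(-1j*theta)/math.sqrt(2)
        total += lm/(lm - km)
    return total/M
```

Expanding $1/(\l_--k_-)$ as a geometric series in $k_-/\l_-$ (for $\l_s>k_s$) or $\l_-/k_-$ (for $\l_s<k_s$) shows why only the $\l_s>k_s$ case survives the angular average, which is the origin of the integration limits in the radial equations derived below.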
Due to the $SO(2)$ rotational symmetry in the $\ell_1$-$\ell_2$ plane, it is useful to write the integration measure as
\begin{equation}\label{DF3ell}
\mathcal{D}_F^3\l \equiv \ell_s d\ell_s d\theta_{\ell} \mathcal{D}_F \l_3 \ ,
\end{equation}
where $\ell_s$ is the radial momentum in the $\ell_{+}$-$\ell_{-}$ plane (or equivalently, the $\ell_{1}$-$\ell_{2}$ plane), given by $\ell_s^2=2\ell_{+}\ell_{-}=\ell_1^2+\ell_2^2$, and $\theta_\ell$ is the angular variable in the lightcone plane. The measure $\mathcal{D}_F \l_3$ is as given in \eqref{fermholonomy}. One can perform the angular integration using the result \eqref{angint}, and the integral over the momentum component $\l_3$ using \eqref{GF2} and \eqref{GF3}; by doing so, we find
\begin{equation}\label{eqnforfinV0simpl}
f(k_s,q_3) = 1-2 \i \lambda_F \int_{k_s}^{\infty} \frac{\l_s d\l_s}{a(\l_s)} F_F(a(\l_s),q_3)\big[[-q_3+2\i \tilde{\Sigma}_{\mathbb{1}}(\l_s)]f(\l_s,q_3)+ g(\l_s,q_3) \big] \ ,
\end{equation}
and
\begin{equation}\label{eqnforginV0simpl}
g(k_s,q_3) = -2 \i \lambda_F \int_{0}^{k_s} \frac{\l_s d\l_s}{a(\l_s)} \ F_F(a(\l_s),q_3) \ \big[ 4[a^2(\l_s)- \tilde{\Sigma}^2_{\mathbb{1}}(\l_s)]f(\l_s,q_3)+ [q_3+2\i \tilde{\Sigma}_{\mathbb{1}}(\l_s) ] g(\l_s,q_3) \big] \ ,
\end{equation}
where the function $F_F(z,q_3)$ is defined by $F_F(z,q_3)=\frac{\chi_F(z)}{q_3^2+4z^2}$, and $\chi_F(z)$ is given by \eqref{chiF}. In order to solve the two coupled integral equations \eqref{eqnforfinV0simpl} and \eqref{eqnforginV0simpl}, we first introduce the change of variables $a(\l_s) =+\sqrt{\l_s^2+c_F^2} =w$ and $a(k_s) =+\sqrt{k_s^2+c_F^2} =z$. In terms of these reduced variables (since the functional dependence changes, we redefine $f(k_s,q_3)\equiv \tilde{f}(z,q_3)$, ${g}(k_s,q_3)\equiv \tilde{{g} }(z,q_3)$ and $\tilde{\Sigma}_{\mathbb{1}}(\l_s)\equiv h_F(w)$), the equations become
\begin{equation}\label{eqnforfinV0reduced}
\tilde{f}(z,q_3) = 1-2 \i \lambda_F \int_{z}^{\infty} dw \ F_F(w,q_3)\big[[-q_3+2\i h_F(w)]\tilde{f}(w,q_3)+\tilde{{g}}(w,q_3) \big] \ ,
\end{equation}
and
\begin{equation}\label{eqnforginV0reduced}
\tilde{{g}}(z,q_3) = -2 \i \lambda_F \int_{c_F}^{z} dw \ F_F(w,q_3) \ \big[ 4[w^2- h_F^2(w)]\tilde{f}(w,q_3) + [q_3+2\i h_F(w) ] \tilde{{g}}(w,q_3) \big] \ .
\end{equation}
The two coupled integral equations \eqref{eqnforfinV0reduced} and \eqref{eqnforginV0reduced} can be decoupled and solved easily by first converting them to a set of two differential equations, obtained by taking derivatives w.r.t. the free parameter $z$. The corresponding differential equations take the following form
\begin{equation}\label{eqnforfinV0diff}
\partial_{z} \tilde{f}(z,q_3) = 2 \i \lambda_F \big[[-q_3+2\i h_F(z)] \tilde{f}(z,q_3)+\tilde{g}(z,q_3) \big] F_F(z,q_3) \ ,
\end{equation}
and
\begin{equation}\label{eqnforginV0diff}
\partial_{z} \tilde{{g}}(z,q_3) = -2 \i \lambda_F \big[ 4[z^2-h_F^2(z) ]\tilde{f}(z,q_3)+ [q_3+2\i h_F(z) ] \tilde{{g}}(z,q_3) \big] \ F_F(z,q_3) \ .
\end{equation}
There is a particular combination of the two equations \eqref{eqnforfinV0diff} and \eqref{eqnforginV0diff} which can be written as a total derivative with respect to the variable $z$. This is easily done by multiplying \eqref{eqnforfinV0diff} by $(q_3+2\i h_F(z))$ and then adding the result to \eqref{eqnforginV0diff}. From the definitions $h_F(z)=m_F+\lambda_F\xi_F(z)$ and $\xi_F(z)=\int^{z}\chi_F(w)dw$, it follows that $h_F'(z)=\lambda_F \xi_F'(z)=\lambda_F \chi_F(z)$. Constructing this particular combination, we find that it can indeed be written as a total derivative, as given below
\begin{equation}\label{eqnfor(f+g)inV0diffsimpl}
\begin{split}
\partial_{z} \big[(q_3+2\i h_F(z)) \tilde{f}(z,q_3) \ + \ \tilde{{g}}(z,q_3)\big] =0 \ .
\end{split}
\end{equation}
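Explicitly, the $\tilde{g}$ terms cancel between the two contributions, while the remaining terms combine, using $h_F'(z)=\lambda_F \chi_F(z)$ and $F_F(z,q_3)=\frac{\chi_F(z)}{q_3^2+4z^2}$, as
\begin{equation*}
\begin{split}
(q_3+2\i h_F)\partial_{z} \tilde{f} + 2\i h_F' \tilde{f} + \partial_{z} \tilde{g}
& = 2 \i \lambda_F F_F \big[(q_3+2\i h_F)(-q_3+2\i h_F)-4(z^2-h_F^2)\big]\tilde{f} + 2\i \lambda_F \chi_F \tilde{f} \\
& = 2 \i \lambda_F \big[\chi_F -(q_3^2+4z^2) F_F \big]\tilde{f} = 0 \ .
\end{split}
\end{equation*}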
The general solution of \eqref{eqnfor(f+g)inV0diffsimpl} can be written as
\begin{equation}\label{solnfor(f+g)inV0diff}
\begin{split}
(q_3+2\i h_F(z)) \tilde{f}(z,q_3) \ + \ \tilde{{g}}(z,q_3)=\eta(q_3) \ ,
\end{split}
\end{equation}
where, $\eta(q_3)$ is an unknown function of $q_3$ to be determined from the boundary conditions. Evaluating \eqref{eqnforfinV0reduced} at $z=\Lambda\to\infty$ and \eqref{eqnforginV0reduced} at $z=c_F$, where the respective integration ranges shrink to zero, we see that the solutions of the differential equations \eqref{eqnforfinV0diff} and \eqref{eqnforginV0diff} must satisfy the following boundary conditions
\begin{equation}\label{boundcondforfandginV0}
\tilde{f}(z=\Lambda,q_3) = 1, \ \ \ \ \ \tilde{{g}}(z=c_F,q_3) = 0, \ \ \ \text{where}, \ \ \Lambda\rightarrow \infty \ .
\end{equation}
Here, and also later in this paper, we introduce a UV cutoff $\Lambda$ on the radial momentum integral in the intermediate steps of the calculation, to keep track of the divergent terms in the integrals, which are eventually dropped by regularization; finally, $\Lambda$ is taken to infinity. Using \eqref{solnfor(f+g)inV0diff}, we substitute for $\tilde{{g}}(z,q_3)$ in \eqref{eqnforfinV0diff} and get
\begin{equation}\label{eqnforfinV0diffgreplaced}
\begin{split}
\partial_{z} \tilde{f}(z,q_3)
& = 2 \i \lambda_F \big[\eta(q_3)- 2q_3 \tilde{f}(z,q_3) \
\big] F_F(z,q_3) \ .
\end{split}
\end{equation}
To solve the above equation \eqref{eqnforfinV0diffgreplaced}, we choose an ansatz of the form
\begin{equation}\label{ansatzf0}
\tilde{f}(z,q_3) = A\bigg(1+B \exp\big[4\i \lambda_F q_3 \int_{z}^{\Lambda} dw \ F_F(w,q_3)\big]\bigg) \ .
\end{equation}
Using the definition, $H_F(z,q_3)=\exp\big[4\i \lambda_F q_3 \int^{z} dw \ F_F(w,q_3)\big]$, \eqref{ansatzf0} can be written in the following form
\begin{equation}\label{ansatzfoa}
\tilde{f}(z,q_3) =A\bigg(1+B \frac{H_F(\Lambda,q_3)}{H_F(z,q_3)} \bigg) \ ,
\end{equation}
which is more useful in the intermediate steps of the calculation.
There are three unknowns, $A$, $B$ and $\eta(q_3)$, which need to be fixed to completely determine $\tilde{f}$ and $\tilde{g}$. Using the boundary condition \eqref{boundcondforfandginV0} on $\tilde{f}$, we fix $A$ in terms of $B$; $\eta(q_3)$ is then fixed in terms of $B$ by substituting the ansatz \eqref{ansatzf0} in \eqref{eqnforfinV0diffgreplaced}. Finally, we use the boundary condition on $\tilde{g}$ to determine $B$ from equation \eqref{solnfor(f+g)inV0diff}. The final result for the exact $J_0^F$ insertion vertex \eqref{definefgforV0} is given by
\begin{equation}\label{finalsolnforfandgwithB'}
\begin{split}
\tilde{f}(z,q_3) &=\frac{1+ B \frac{H_F(\Lambda,q_3)}{H_F(z,q_3)} }{1+B} \ , \ \ \
\tilde{g}(z,q_3) = \frac{2q_3}{1+B} - \big(q_3+2\i (m_F+\lambda_F \xi_F(z))\big)\tilde{f}(z,q_3) \ ,
\end{split}
\end{equation}
where, $B$ is given by
\begin{equation}
\begin{split}
B =\frac{\big(q_3-2\i h_F(c_F)\big)}{\big(q_3+2\i h_F(c_F)\big)}\frac{H_F(c_F,q_3)}{H_F(\Lambda,q_3)} \ .
\end{split}
\end{equation}
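For completeness, we record how the three constants are fixed. The boundary condition $\tilde{f}(\Lambda,q_3)=1$ applied to \eqref{ansatzf0} gives $A(1+B)=1$. Substituting \eqref{ansatzf0} in \eqref{eqnforfinV0diffgreplaced}, the terms containing the exponential match automatically, while the remaining terms require
\begin{equation*}
\eta(q_3) = 2q_3 A = \frac{2q_3}{1+B} \ .
\end{equation*}
Finally, imposing $\tilde{g}(c_F,q_3)=0$ in \eqref{solnfor(f+g)inV0diff}, i.e. $\big(q_3+2\i h_F(c_F)\big)\tilde{f}(c_F,q_3)=\eta(q_3)$, gives
\begin{equation*}
B \, \frac{H_F(\Lambda,q_3)}{H_F(c_F,q_3)}\big(q_3+2\i h_F(c_F)\big) = q_3-2\i h_F(c_F) \ ,
\end{equation*}
which reproduces the expression for $B$ above.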
\subsubsection{Two-point function}
The two-point function is obtained by evaluating the Feynman diagram shown in Fig.\ref{jj2pt}. To compute the two-point function $\langle J_0^{F}(q') J_0^{F}(q) \rangle $, only a single insertion of the exact vertex $J_0^{F}(q')$ is required to account for all the perturbative Feynman diagrams without any overcounting.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.7, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0) ;
\coordinate (B) at (6,0) ;
\coordinate (L) at (-1.1,0) ;
\draw (A) node {$\otimes $} ;
\draw (B) node {$\times$} ;
\filldraw (3,1.5) circle (3pt) ;
\filldraw (3,-1.5) circle (3pt) ;
\end{feynman}
\draw (0,0) .. controls (2,2) and (4,2) .. (6,0);
\draw (0,0) .. controls (2,-2) and (4,-2) .. (6,0);
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\draw (A)+(0.8,0) node {$\ J^{(s)}$};
\draw (5,0) node {$\ J^{(s)}$};
\end{tikzpicture}
\caption{Diagram contributing to two-point function $\langle J^{(s)}(-q)J^{(s)}(q)\rangle$ for spin zero and spin one. Filled circle denotes the exact fermion propagator.}
\label{jj2pt}
\end{center}
\end{figure}
For definiteness, we choose the insertion on the left in Fig.\ref{jj2pt} as the exact vertex. The insertion on the right side of Fig.\ref{jj2pt} is the `free' insertion vertex \footnote{We use the terms `free', `tree level' and `bare' insertion vertex interchangeably for the insertion on the RHS of the schematic diagrams Fig.\ref{jj2pt} or Fig.\ref{jj2ptBs1} of two-point correlators. What this means precisely is that it includes only the diagrams required (which may contain loops) to avoid any overcounting \cite{Aharony:2012nh, GurAri:2012is, Ghosh:2019sqf}. The diagrams required for the `bare' insertion vertex can be read off from the definition of the corresponding current operator.}, which we define as follows:
\begin{equation}
\langle J_0^{F}(-q)\psi(k)\bar{\psi}(p)\rangle =U_{0}^F(k,q)~(2\pi)^3\delta^{(3)}(p+k+q) \ .
\end{equation}
From the definition of $J_0^{F}(q)$ operator in \eqref{J0(-q)}, it follows that $U_{0}^F(k,q) =1$.
We define the two-point correlation function of $J_0^{F}$ operator as
\begin{equation}\label{J0(-q)J0(q)}
\langle J_0^{F}(q')J_0^{F}(q)\rangle =\mc{G}_0^F(q) \ (2\pi)^3 \delta^{(3)}(q'+q) \ .
\end{equation}
From Fig.\ref{jj2pt}, the expression for $\mc{G}_0^F$ is given by
\begin{equation}\label{integralforG0F(q)}
\mc{G}_0^F(q) =-N_F\int \frac{\mathcal{D}_F^3k}{(2\pi)^3} \ \text{Tr}_F \bigg[U_0^F(k+q,-q) \ S_F(k+q) \ V_0^F(k,q) \ S_F(k)\bigg] \ ,
\end{equation}
where, the extra factor of $(-1)$ in \eqref{integralforG0F(q)} arises from the fermion loop. Here, $\text{Tr}_F$ denotes the trace in spinor space. Inserting the expression for the exact fermion propagator \eqref{exactpsiprop}, the exact insertion vertex $V_0^F$ computed in \eqref{finalsolnforfandgwithB'} and the expression for $U_0^F$, performing the gamma-matrix algebra and carrying out the integral over the loop momentum $k$, we find the final result for $\langle J_0^{F}J_0^{F}\rangle$ to be
\begin{equation}\label{G0F(q)finalB'replaced}
\begin{split}
\mc{G}_0^F(q)
& = \lim_{\Lambda \rightarrow \infty} \bigg[
\frac{\i N_Fq_3 }{4\pi \lambda_{F} }\frac{ \frac{H_F(\Lambda,q_3)}{H_F(c_F,q_3)}e^{-2\i \text{sgn}(h_F)\tan^{-1}\frac{q_3}{2c_F}}+ 1}{\frac{H_F(\Lambda,q_3)}{H_F(c_F,q_3)}e^{-2\i \text{sgn}(h_F)\tan^{-1}\frac{q_3}{2c_F}} - 1} + \frac{N_F h_F(\Lambda) }{2\pi \lambda_{F} } \ \bigg] \ ,
\end{split}
\end{equation}
where, $H_F(z,q_3)=\exp\big[4\i \lambda_F q_3 \int^{z} dw \ F_F(w,q_3)\big]$, $F_F(z,q_3)=\frac{\chi_F(z)}{q_3^2+4z^2}$ and $h_F(z)=m_F+\lambda_F \xi_F(z)$, and $\text{sgn}(h_F)$ is defined in the gap equation \eqref{masseqnforcF}. From the expression of $\xi_F(w)$ in \eqref{xiFz}, it is clear that $\xi_F(\Lambda)$ diverges linearly as $\Lambda\to\infty$. However, we regularize \cite{GurAri:2012is, Aharony:2012ns} the answer by subtracting the divergent term $\xi_F(\infty)$ and then take the $\Lambda\to \infty$ limit \footnote{One can use dimensional regularization for the radial momentum integrals, as discussed in \cite{Choudhury:2018iwf}, to remove the term $\xi_F(\infty)$. Alternatively, it can be removed by adding a mass counterterm for the background source of $J^{F}_0$, as discussed in \cite{GurAri:2012is}.}. Thus, the renormalized two-point function $\langle J_0^{F}J_0^{F}\rangle$ is
\begin{equation}\label{G0F(q)thefinal}
\begin{split}
\mc{G}_0^F(q)
=
\frac{\i N_Fq_3 }{4\pi \lambda_{F} }\frac{ \frac{H_F(\infty,q_3)}{H_F(c_F,q_3)}e^{-2\i \text{sgn}(h_F)\tan^{-1}\frac{q_3}{2c_F}}+ 1}{\frac{H_F(\infty,q_3)}{H_F(c_F,q_3)}e^{-2\i \text{sgn}(h_F)\tan^{-1}\frac{q_3}{2c_F}} - 1} + \frac{N_F m_F}{2\pi \lambda_{F} } \ .
\end{split}
\end{equation}
\eqref{G0F(q)thefinal} is valid at finite temperature and non-zero mass. In various limiting cases, this agrees with the existing results \cite{GurAri:2012is,Ghosh:2019sqf, amiyata1}.
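As a simple consistency check, \eqref{G0F(q)thefinal} is even in $q_3$, as it must be since $\mc{G}_0^F(q)=\mc{G}_0^F(-q)$ follows from \eqref{J0(-q)J0(q)}. Since $F_F(w,q_3)$ is even in $q_3$, under $q_3\to -q_3$ the combination $X(q_3)\equiv \frac{H_F(\infty,q_3)}{H_F(c_F,q_3)}e^{-2\i \text{sgn}(h_F)\tan^{-1}\frac{q_3}{2c_F}}$ maps to $1/X(q_3)$, so that
\begin{equation*}
\frac{\i N_F q_3 }{4\pi \lambda_{F} }\frac{X+1}{X-1} \ \longrightarrow \ -\frac{\i N_F q_3 }{4\pi \lambda_{F} }\frac{X^{-1}+1}{X^{-1}-1} = \frac{\i N_F q_3 }{4\pi \lambda_{F} }\frac{X+1}{X-1} \ ,
\end{equation*}
while the constant term $\frac{N_F m_F}{2\pi \lambda_{F}}$ is manifestly invariant.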
\subsection{Case II: Spin one}
In this section we generalize the computations of two-point correlators of the spin one operator, performed previously in \cite{GurAri:2012is, Gur-Ari:2016xff, Ghosh:2019sqf, amiyata1}, to the general case of the massive regular fermionic theory at finite temperature with an arbitrary holonomy distribution. The regular fermionic theory \eqref{rflag} that we study in this paper has a gauge invariant, conserved $U(1)$ current, given by the single trace operator $J^{F}_\mu (x)=\i \ \bar{\psi}(x)\gamma_\mu \psi(x)$.
In momentum space, this operator is given by
\begin{equation}\label{J1Fmu(-q)}
\begin{split}
J^{F}_{\mu}(-q)
& =\i \int \frac{\mathcal{D}_F^3 k}{(2\pi)^3}~\bar{\psi}(-(k+q)) \gamma_\mu \psi(k) \ . \\
\end{split}
\end{equation}
Our goal in this section is to compute the two-point correlator $\langle J^{F}_\mu J^{F}_\nu \rangle $, for which, as in the case of $J_0^F$, we need the exact $J^{F}_\mu$ vertex.
\subsubsection{Schwinger-Dyson equation for the exact insertion vertex}
The exact $J^{F}_\mu$ insertion vertex is computed by solving the Schwinger-Dyson equation given schematically in Fig.\ref{sdexactvertx}. The exact $J^{F}_\mu $ vertex is defined as
\begin{equation}
\langle J^{F}_\mu (-q)\psi(k)\bar{\psi}(p)\rangle =V_{(\mu) }^{F}(k,q)~(2\pi)^3\delta^{(3)}(p+k+q) \ .
\end{equation}
The corresponding Schwinger-Dyson equation for $V_{(\mu)}^{F}(k,q)$ is given by
\begin{equation}\label{SDforV1Fmu}
V_{(\mu)}^{F}(k,q)=\widetilde{V}_{(\mu)}^{F}(k,q)+\ N_F \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \big[ \mathcal{V}^\nu(\l+q,k+q) S_F(\l+q) V_{(\mu)}^{F}(\l,q) S_F(\l) \mathcal{V}^\rho(k,\l) \big] G^F_{\rho \nu}(\l-k) \ ,
\end{equation}
where, $\widetilde{V}_{(\mu)}^{F}(k,q)$ denotes the tree-level insertion of $ J^{F}_\mu $. From \eqref{J1Fmu(-q)}, it follows that $\widetilde{V}_{(\mu)}^{F}(k,q)=\i \gamma_\mu $. Using \eqref{vertexpsibarApsi} and \eqref{gaugebospropforFermion}, the above equation \eqref{SDforV1Fmu} can be simplified to
\begin{equation}\label{SDforV1Fmusimplified}
V_{(\mu)}^{F} (k,q)=\i\gamma_\mu - 2\pi \i \lambda_{F} \int \frac{\mathcal{D}_F^3 \l }{(2\pi)^3} \big[ \gamma^{[3|} S_F(\l+q) V_{(\mu)}^{F}(\l,q) S_F(\l) \gamma^{|+]} \big] \frac{1}{(\l-k)_{-}} \ .
\end{equation}
The only non-zero component of $\langle J_{\mu}^F(-q)J_{\nu}^F(q) \rangle $ in the $x^1$-$x^2$ plane, in the `lightcone kinematics' $q_{\pm}=0$, is the $(\mu\nu)\equiv (-+)$ (or equivalently $(+-)$) component \footnote{This follows from the $SO(2)$ rotational symmetry in the $1$-$2$ plane.}. The other components vanish \footnote{As the $U(1)$ current is classically conserved, it follows from the Ward identity that, in momentum space, $q^{\mu}\langle J_{\mu}(-q)J_{\nu}(q)\rangle$ should vanish up to (at most) a contact term \cite{Ghosh:2019sqf}. With the external momentum choice $q_{\pm}=0$, it then follows that the component $\langle J_{3}J_{3}\rangle$ should vanish up to a contact term. We have also explicitly checked (though not presented here, for the sake of brevity) in the case of the fermionic theory (with the $U(1)$ current given by \eqref{J1Fmu(-q)}) that this is indeed the case, i.e., the $\langle J_{-}J_{-}\rangle $, $\langle J_{+}J_{+}\rangle$ and $\langle J_{\pm}J_{3}\rangle$ components vanish for $q_{\pm}=0$. We have also explicitly checked that $\langle J_{3}J_{3}\rangle$ vanishes exactly for this momentum choice.}. So, from now on, we consider only $\langle J_{-}J_{+}\rangle$ (the same argument applies to the bosonic case as well). As in the $J_0^F$ case, only a single exact vertex is required for the computation of two-point functions. To compute $\langle J_{-}^F(-q)J_{+}^F(q) \rangle $, we use the exact $J_{-}^F$ insertion vertex and the `tree level' insertion vertex for $J_{+}^F$; so, we need to compute the exact $J_{-}^F$ insertion vertex. It is clear from \eqref{SDforV1Fmusimplified} that $V_{(\mu)}^{F} (k,q)$ is independent of $k_3$. Moreover, in the lightcone kinematics $q_{\pm}=0$, the only non-zero component of the external momentum $q$ is $q_3$; so the momentum dependence of $V_{(\mu)}^{F} (k,q)$ reduces to $V_{(\mu)}^{F} (\vec{k},q_3)$. As in the $J_0^F$ case, $V_{(-)}^{F}(k,q)$ can be expanded as
\begin{equation}\label{Vmfgdef}
V_{(-)}^{F}(\vec{k},q_3)=V_{(-),\nu}^{F}(\vec{k},q_3) \gamma^\nu +V_{(-), \mathbb{1}}^{F}(\vec{k},q_3) \mathbb{1} = g_{m}(k_s,q_3) \gamma^+ + k_{-} f_{m}(k_s,q_3) \mathbb{1} \ .
\end{equation}
To solve for the exact $V_{(-)}^{F}$, we plug \eqref{Vmfgdef} into the minus-component equation of \eqref{SDforV1Fmusimplified} and obtain a set of two coupled integral equations involving $f_m$ and $g_m$. As in the case of $J_0^F$, these equations simplify considerably once we utilize the $SO(2)$ rotational symmetry in the lightcone plane and decompose the momentum integration measure as in \eqref{DF3ell}. We perform the angular integration using \eqref{angint} and the integral over the momentum component $\l_3$ using \eqref{GF2} and \eqref{GF3}. Performing the change of variables $a(\l_s) =+\sqrt{\l_s^2+c_F^2} =w$ and $a(k_s) =+\sqrt{k_s^2+c_F^2} =z$, and relabelling $f_m(k_s,q_3)\equiv \tilde{f}_m(z,q_3)$, $g_m(k_s,q_3)\equiv \tilde{g}_m(z,q_3)$ and $\tilde{\Sigma}_{\mathbb{1}}(\l_s)\equiv h_F(w)$, the equations for $\tilde{f}_m$ and $\tilde{g}_{m}$ take the following simpler form
\begin{equation}\label{eqnforfinV1Freduced}
\tilde{f}_m(z,q_3) = 2 \i \lambda_F \int_{z}^{\infty} dw \ F_F(w,q_3)\bigg[\big(q_3-2\i h_F(w)\big) \tilde{f}_m(w,q_3) \ - \ 2\tilde{g}_m(w,q_3) \bigg] \ ,
\end{equation}
and
\begin{equation}\label{eqnforginV1Freduced}
\tilde{g}_m(z,q_3) =\i \ +\ 2 \i \lambda_F \int_{z}^{\infty} dw \ F_F(w,q_3) \ \bigg[ 2\big(w^2- h_F^2(w)\big) \tilde{f}_m(w,q_3) + \big(q_3+2\i h_F(w) \big) \tilde{g}_m(w,q_3) \bigg] \ .
\end{equation}
To solve the above two equations \eqref{eqnforfinV1Freduced} and \eqref{eqnforginV1Freduced}, it is best to convert them into a set of differential equations. The boundary conditions that follow from \eqref{eqnforfinV1Freduced} and \eqref{eqnforginV1Freduced}, by evaluating them at $z=\Lambda\to\infty$ where the integration ranges shrink to zero, are \footnote{The boundary conditions for the exact vertex $V_{(-)}^F$ are different from the boundary conditions for the exact $V_{0}^F$ vertex.}
\begin{equation}\label{boundcondforfandginV1F}
\tilde{f}_m(z=\Lambda,q_3) = 0, \ \ \ \ \ \tilde{g}_m(z=\Lambda ,q_3) = \i , \ \ \ \text{where}, \ \ \Lambda\rightarrow \infty \ .
\end{equation}
Solving the equations \eqref{eqnforfinV1Freduced} and \eqref{eqnforginV1Freduced}, by first converting them into differential equations with the boundary conditions \eqref{boundcondforfandginV1F}, we find the final solution for the exact $J_{-}^F$ vertex in terms of $f_m$ and $g_m$ as
\begin{equation}\label{finalsolnforfandginV1F}
\begin{split}
\tilde{f}_m(z,q_3) &=\frac{\i}{q_3}\bigg(1- \exp\big[4\i \lambda_F q_3 \int_{z}^{\Lambda} dw \ F_F(w,q_3)\big]\bigg) \ , \\
\tilde{g}_m(z,q_3) & = \i - \frac{1}{2}\big(q_3+2\i h_F(z)\big)\tilde{f}_m(z,q_3) \ .
\end{split}
\end{equation}
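One can verify \eqref{finalsolnforfandginV1F} directly: at $z=\Lambda$ the exponential equals one, so $\tilde{f}_m(\Lambda,q_3)=0$ and $\tilde{g}_m(\Lambda,q_3)=\i$, reproducing \eqref{boundcondforfandginV1F}. Moreover, using $\partial_z H_F(z,q_3)=4\i \lambda_F q_3 F_F(z,q_3) H_F(z,q_3)$, one finds
\begin{equation*}
\partial_z \tilde{f}_m(z,q_3) = -4 \lambda_F F_F(z,q_3) \frac{H_F(\Lambda,q_3)}{H_F(z,q_3)} = -2\i \lambda_F F_F(z,q_3)\big[\big(q_3-2\i h_F(z)\big)\tilde{f}_m(z,q_3) - 2\tilde{g}_m(z,q_3)\big] \ ,
\end{equation*}
which is precisely the differential form of \eqref{eqnforfinV1Freduced}.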
\subsubsection{Two-point function}\label{2ptJ0F}
In this subsection, we compute $\langle J_{\mu}^{F}(-q)J_{\nu}^{F}(q)\rangle $. We define the two-point correlator of spin one operator $J_{\mu}^{F}$ as
\begin{equation}\label{J1Fmu(-q)J1Fnu(q)}
\langle J_{\mu}^{F}(q')J_{\nu}^{F}(q)\rangle =\mc{G}_{\mu\nu}^{F}(q) \ (2\pi)^3 \delta^{(3)}(q'+q) \ .
\end{equation}
The corresponding Feynman diagram is given in Fig.\ref{jj2pt}. Similar to the $J_0^F$ case, the insertion on the left side of the diagram is chosen to be the exact vertex, and the insertion on the right side is chosen to be the `tree-level' insertion, to avoid any overcounting of diagrams. We define the `tree-level' $J^{F}_{\nu}$ insertion vertex as follows
\begin{equation}
\langle J_{\nu}^{F}(-q)\psi(k)\bar{\psi}(p)\rangle =U_{(\nu)}^{F}(k,q)~(2\pi)^3\delta^{(3)}(p+k+q) \ .
\end{equation}
From the definition of $J_{\mu}^{F}$ current in \eqref{J1Fmu(-q)}, it follows that $U_{(\nu)}^{F}(k,q) =\i \gamma_\nu $.
From Fig.\ref{jj2pt}, we see that the two-point function of the spin one current is given by
\begin{equation}\label{integralforG1F(q)}
\mc{G}_{\mu\nu}^F(q) =-N_F\int \frac{\mathcal{D}_F^3k}{(2\pi)^3} \ \text{Tr}_F \bigg[ \ S_F(k+q) \ V_{(\mu)}^{F}(k,q) \ S_F(k) \ U_{(\nu)}^{F}(k+q,-q)\ \bigg] \ ,
\end{equation}
where, again, the extra factor of $(-1)$ in \eqref{integralforG1F(q)} arises from the fermion loop. Using the expression for $U_{(\nu)}^F$ and performing the gamma matrix algebra \footnote{We use the fact that for a general matrix $M=M_\mu \gamma^\mu +M_{\mathbb{1}}\mathbb{1} $, $\text{Tr}_F(M\gamma_\nu )= M_{\mu} \text{Tr}_F(\gamma^\mu \gamma_\nu )=2M_{\nu } $.}, \eqref{integralforG1F(q)} reduces to
\begin{equation}\label{integralforG1F(q)simpl2}
\mc{G}_{\mu\nu}^{F}(q) =-2\i N_F\int \frac{\mathcal{D}_F^3k}{(2\pi)^3} \ \big[ S_F(k+q) \ V_{(\mu)}^{F}(k,q) \ S_F(k)\big]_{\nu} \ .
\end{equation}
Below we consider the $(\mu\nu)\equiv (-+)$ component of \eqref{integralforG1F(q)simpl2}. Using the expressions for $S_F$ and $V_{(-)}^F$ in \eqref{integralforG1F(q)simpl2} and performing the momentum integral, we find
\begin{equation}\label{G1F(q)final4}
\begin{split}
\mc{G}_{-+}^{F}(q)
&= \lim_{\Lambda \rightarrow \infty} \bigg[\frac{\i N_F q_3 }{16\pi \lambda_F} \bigg(1+ \frac{2\i h_F(c_F) }{q_3 }\bigg)^2\bigg[\frac{H_F(\Lambda,q_3)}{H_F(c_F,q_3)} -1 \bigg] -\frac{N_F\xi_F(c_F)}{8 \pi}+ \frac{N_F\xi_F(\Lambda)}{8 \pi}\bigg] \ .
\end{split}
\end{equation}
As in the case of $J_0^F$, \eqref{G1F(q)final4} is linearly divergent, due to the appearance of $\xi_F(\Lambda)$ as $\Lambda\to\infty$. Regularizing the above answer by discarding the term $\xi_F(\infty)$ \footnote{This divergent term can be removed by dimensional regularization, as discussed in \cite{Choudhury:2018iwf}. Alternatively, one can subtract it by adding a mass counterterm for the source field corresponding to $J_{\mu}^F$ \cite{GurAri:2012is}.}, we report below the final result for the renormalized current two-point correlator $\mc{G}_{-+}^F$
\begin{equation}\label{G1F(q)final}
\begin{split}
\mc{G}_{-+}^{F}(q)
&= \frac{\i N_F q_3 }{16\pi \lambda_F} \bigg(1+ \frac{2\i h_F(c_F) }{q_3 }\bigg)^2\bigg[\frac{H_F(\infty,q_3)}{H_F(c_F,q_3)} -1 \bigg] -\frac{N_F\xi_F(c_F)}{8 \pi} \ .
\end{split}
\end{equation}
Alternatively, one could have considered the exact $J_{+}^F$ vertex together with the `tree level' $J_{-}^F$ insertion vertex, performed the above exercise, and obtained the same result \footnote{We have also explicitly checked this (not presented here).}. This follows from the fact that $\mc{G}_{-+}^F(q)= \mc{G}_{+-}^F(-q)$.
\section{Thermal correlators in the bosonic theory}\label{bosonth}
\subsection{Brief review of the theory}
In this section, we study the mass deformed regular bosonic matter theory coupled to $SU(N_B)$ Chern-Simons gauge fields in the large $N_B$ limit, at finite temperature. The Euclidean action for this theory is given by
\begin{equation}\label{RBlag}
\begin{split}
\mc{S}_{\text{B}}[A, \phi] & =\frac{\i \kappa_B}{4\pi} \int \text{d}^3 x\ \epsilon^{\mu\nu\rho}\,\text{tr}\left( A_\mu \partial_\nu A_\rho - \frac{2\i}{3} A_\mu A_\nu A_\rho\right)\ \\
& + \int d^3 x ~\bigg((D_\mu \bar{\phi})(D^\mu \phi)+m_B^2 \bar{\phi}\phi+\frac{b_4}{2N_B}(\bar{\phi}\phi)^2+\frac{b_6}{6N_B^2}(\bar{\phi}\phi)^3\bigg) \ .
\end{split}
\end{equation}
One of our goals is to check the duality between the fermionic and the bosonic theories. In the previous section, we studied the regular fermionic matter theory, which is dual to the critical boson theory. In the next section, we will study the critical boson theory by taking the critical limit of the regular boson theory defined by the action \eqref{RBlag}. We work in the lightcone gauge $A_{-}=0$.
The Feynman rules in this theory include the following:
\subsubsection*{Propagator for the gauge field}
The gauge boson propagator in the lightcone gauge $A_{-}=0$, in Euclidean space, is given by
\begin{equation}
\langle A_\mu (p') A_\nu(p)\rangle = G_{\nu\mu}^B(p) ~(2\pi)^{3}\delta^{(3)}(p+p') \ ,
\end{equation}
where,
\begin{equation}\label{gaugebosonprop}
G^B_{\nu\mu} (p) =\frac{2\pi\epsilon_{\nu-\mu}}{\kappa_B p_{-}} \ .
\end{equation}
As also mentioned in the fermionic case, the gauge field propagator \eqref{gaugebosonprop} is independent of $p_3$, and so is not affected by the holonomy \cite{Aharony:2012ns}. We label the gauge fields by the same symbol $A_\mu$ in both the fermionic theory and the bosonic theory studied here; whether we are studying the fermionic or the bosonic theory should be obvious from the context. We also denote the gauge field propagator by the same Feynman diagram as shown around equation \eqref{gaugebospropforFermion}, but it is now equal to $G_{\nu\mu}^B(p)$.
\subsubsection*{Propagator for the scalar field}
The exact propagator for the scalar fields in the Euclidean space is given by
\begin{equation}\label{exactphipropdef}
\langle \phi(p)\bar{\phi}(p')\rangle = S_B(p) ~(2\pi)^{3}\delta^{(3)}(p+p') \ ,
\end{equation}
where,
\begin{equation}\label{exactphiprop}
S_{B}(p) =\frac{1}{p^2+c_B^2} \ .
\end{equation}
Here, $c_B$ is the thermal mass of the scalar field, which is related to the bare mass-squared $m_B^2$ by $c_B^2=m_B^2+\Sigma_{B}$, where $\Sigma_B$ is the bosonic self energy \footnote{As discussed in great detail in the literature, one can compute the self energy either by summing the Feynman diagrams or by integrating out the matter fields using the Hubbard-Stratonovich trick.}. The final result for the gap equation for the thermal mass is given by \footnote{As already mentioned, we use the convention that $c_B$ is always positive. In the zero temperature limit, $\xi_B(c_B)$ reduces to $c_B$, so in this limit \eqref{thermalcB} reduces to $$c_B^2=m_B^2-\frac{b_4}{4\pi}c_B +\left(\frac{\lambda_{B}^2}{4}+\frac{b_6}{32\pi^2}\right)c_B^2 \ . $$}
\begin{equation}\label{thermalcB}
c_B^2=m_B^2-\frac{b_4}{4\pi}\xi_B(c_B) +\left(\frac{\lambda_{B}^2}{4}+\frac{b_6}{32\pi^2}\right)\xi_B^2(c_B) \ ,
\end{equation}
where, the function $\xi_B(x)$ is defined in \eqref{xiB}. Solving equation \eqref{thermalcB}, one finds the thermal mass $c_B$ of the regular boson theory. In Feynman diagrams, the exact scalar propagator is denoted by
\begin{displaymath}
\begin{tikzpicture}
\begin{feynman}
\coordinate (c) at (2.4,0) ;
\vertex (a) ;
\vertex [right=1.2cm of a] (b) ;
\filldraw[black] (b) circle (3pt) ;
\draw (3.5,0) node {{$= \ S_{B}(p)$}};
\draw (a) -- (c) ;
\draw (a)+(0.6, 0.3) node {{$p$}};
\end{feynman}
\end{tikzpicture}
\end{displaymath}
In the lightcone gauge $A_{-}=0$, the vertex factor corresponding to the term $\bar\phi A_3 A^3 \phi $ is $\mathcal{V}_{\bar{\phi}A^2\phi}=-1$. In the momentum space form of the action \eqref{RBlag}, the vertex corresponding to the interaction term $\bar{\phi}(p)A_{\mu}(-(p+k)) \phi(k)$ is given by $\mathcal{V}_{\bar{\phi}A\phi}^\mu(k,p)=(k-p)^{\mu}$, with momentum conservation imposed explicitly at the vertex.
\begin{comment}
\begin{displaymath}\label{phibarphiAvertex}
\begin{tikzpicture}
\begin{feynman}
\vertex (b) ;
\vertex [left=2cm of b] (a) {\(\phi\)};
\vertex [right=2cm of b] (c) {\(\bar{\phi}\)};
\vertex [above=2cm of b] (d) {\( A_\mu\)};
\diagram* {
(a) -- [fermion,momentum={[arrow shorten=0.35]\(k\)}] (b),
(b) -- [fermion,rmomentum={[arrow shorten=0.35]\(p\)}] (c),
(b) -- [boson,rmomentum={[arrow shorten=0.35]\( \l \)}] (d),
};
\end{feynman}
\end{tikzpicture}
=(k-p)^{\mu}
\end{displaymath}
\end{comment}
\subsection{Thermal four-point function of fundamental scalars}
In the zero-temperature theory, the connected scalar four-point function was computed previously in the literature (see, e.g., \cite{Aharony:2012nh, Jain:2014nza}). In \cite{Aharony:2012nh}, the authors computed the connected scalar four-point function in the massless regular boson theory, while for the massive regular boson theory given by the action \eqref{RBlag}, the off-shell connected scalar four-point function was computed at zero temperature in \cite{Jain:2014nza}. In this section, we generalize these computations to finite temperature for the massive theory. We closely follow the procedure of \cite{Jain:2014nza} \footnote{For details of their method, see section 3.1 and appendix D of \cite{Jain:2014nza}.}. The Schwinger-Dyson equations that we need to solve are the same as in \cite{Jain:2014nza} (for the relevant Schwinger-Dyson equations see \eqref{eqn1} and \eqref{eqn2} \footnote{Or, see e.g. equation 4.6 of \cite{Jain:2014nza}.}, and for the diagrams see Fig.\ref{4ptscalarsd}, \ref{oneloop} and \ref{4ptvertex} \footnote{Or, see figure 5 and figure 4 of \cite{Jain:2014nza}.}); here, we generalize them to finite temperature by taking into account the effect of the holonomy. In Appendix \ref{4ptscalar}, we present the details of the computation and discuss the modifications of the methods of \cite{Jain:2014nza} that are required by the holonomy. In this subsection we present the finite temperature results for the off-shell four-point function of fundamental scalars.
Following \cite{Aharony:2012nh, Jain:2014nza}, we define the exact, connected off-shell scalar four-point function by
\begin{equation}\label{4ptdef}
\langle \phi^i(p+q)\bar{\phi}_{j}(-(k+q)) \phi^m(k) \bar{\phi}_{n}(-p) \rangle = \mathcal{A}^{im}_{jn}(p,k,q) \ (2\pi)^3 \delta^{(3)}(0) \ .
\end{equation}
Here, we have explicitly shown the color indices of the scalar fields. Without loss of generality, we choose the color contraction to be $\delta^{i}_{n} \delta^{m}_{j}$ (terms with other possible color contractions are related to this one by permutations of the momenta) \cite{Aharony:2012nh, Jain:2014nza}; so, we consider \footnote{$\mathcal{A}(p,k,q)$ here is the same as the quantity ${V}(p,k,q)$ in \cite{Jain:2014nza}, but now at finite temperature. We use a different symbol to avoid a notational clash with the rest of the paper.}
\begin{equation}
\mathcal{A}^{im}_{jn}(p,k,q)=\mathcal{A}(p,k,q) \ \delta^{i}_{n} \delta^{m}_{j} \ .
\end{equation}
Following \cite{Jain:2014nza} and including the finite temperature effects as discussed in Appendix \ref{4ptscalar}, we compute $\mathcal{A}(p,k,q)$, with the overall external momentum chosen such that $q_{\pm}=0$, by solving the Schwinger-Dyson equation given in Fig.\ref{4ptscalarsd}. The final result for $\mathcal{A}(p,k,q)$ is (see \eqref{exactscalar4ptapp}, or equivalently \eqref{4ptfnexp})
\begin{equation}\label{exactscalar4pt}
\begin{split}
N_B\mathcal{A}(\vec{p},\vec{k},q_3)
& = \frac{H_B(a(p_s), q_3)}{H_B(a(k_s),q_3)} \bigg\{(4\pi \i \lambda_{B} q_{3})\frac{(p+k)_{-}}{(p-k)_{-}} + j( q_3) \bigg\} \ ,
\end{split}
\end{equation}
where, $H_B(z,q_3)$ is given by \eqref{Hbfunction} and $a(p_s)=+\sqrt{p_s^2+c_B^2}$. The function $j(q_3)$ is given by
\begin{equation}\label{jq3}
\frac{j(q_3)}{ 4\pi \i \lambda_{B}q_3}= \frac{4\pi \i \lambda_{B} q_3(H_B(c_B,q_3)-H_B(\infty,q_3)) +\tilde{b}_4 (H_B(c_B,q_3)+H_B(\infty,q_3)) }{4\pi \i \lambda_{B} q_3(H_B(c_B,q_3)+H_B(\infty,q_3)) +\tilde{b}_4 (H_B(c_B,q_3)-H_B(\infty,q_3)) } \ ,
\end{equation}
where, $\tilde{b}_4$ is given by \eqref{b4tl}. An alternative and simplified form of $j(q_3)$ is given by \eqref{jq3simp}. In the zero temperature limit, this matches with the existing results given in \cite{Jain:2014nza}. We use the result \eqref{exactscalar4pt} to compute the correlation functions of gauge invariant, single trace operators of different spin.
\subsection{Case I: Spin zero}
In the regular boson theory that we study here, there is a gauge invariant, single trace, spin 0 operator given by $J^B_{0}(x)=\bar{\phi}(x)\phi(x)$. In momentum space, this takes the following form
\begin{equation}\label{spin0current}
J^B_{0}(-q) =\int \frac{\mathcal{D}_B^3 k}{(2\pi)^3} \ \bar{\phi}(-(k+q)) \ \phi(k) \ .
\end{equation}
\subsubsection{Exact insertion vertex}
To compute the correlators involving $J^B_{0}$, we need the exact $J^B_{0}$ insertion vertex which is defined as
\begin{equation}\label{J0Bexactvertex}
\langle J^B_{0}(-q) \phi(k)\bar{\phi}(p) \rangle =V^B_{0}(k,q) \ (2\pi)^3 \delta^{(3)} (p+k+q) \ .
\end{equation}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.8, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0);
\coordinate (D) at (2.2,1.2) ;
\coordinate (Dp) at (2.2,-1.2) ;
\coordinate (L) at (-1.1,0) ;
\diagram* {
(Dp) -- [fermion] (A),
};
\draw (A)+(2.5,0) node {{$= $}};
\node (c) [draw,circle,cross,minimum width=0.15 cm](b) at (A){};
\draw (A) -- (D) ;
\draw [-] (A) -- (Dp) node [midway, below ] {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\end{feynman}
\end{tikzpicture}
\raisebox{1.5pt} {
\begin{tikzpicture}[scale=0.8, cross/.style={path picture={ \draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east); }}]
\begin{feynman}
\coordinate (A) at (0,0);
\coordinate (D) at (2,1.2) ;
\coordinate (Dp) at (2,-1.2) ;
\coordinate (L) at (-1.1,0) ;
\diagram* {
(Dp) -- [fermion] (A),
};
\draw (A)+(2.5,0) node {{$+$}};
\draw (A) node {$\bigtimes$} ;
\draw (A) -- (D) ;
\draw [-] (A) -- (Dp) node [midway, below ] {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\end{feynman}
\end{tikzpicture} }
\raisebox{-20pt}{
\begin{tikzpicture}[scale=0.7, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\def\m{2.0cm};
\def\s{0.6};
\coordinate (A) at (0,0);
\coordinate (B) at (1,\s) ;
\coordinate (C) at (2,1.2) ;
\coordinate (D) at (3.5, 3.5*\s) ;
\coordinate (Bp) at (1,-0.6) ;
\coordinate (Cp) at (2,-1.2) ;
\coordinate (Dp) at (3.5,-3.5*\s) ;
\coordinate (L) at (-1.1,0) ;
\diagram* { (Dp) -- [fermion] (Cp), };
\diagram* { (Cp) -- [fermion] (Bp), };
\end{feynman}
\draw (A) -- (B) ;
\filldraw[black] (B) circle (3pt) ;
\draw (B) -- (C) ;
\draw (C) -- (D) ;
\draw (A) -- (Bp) ;
\filldraw[black] (Bp) circle (3pt) ;
\draw (Bp) -- (Cp) ;
\draw (Cp) -- (Dp) ;
\draw (A) node {$\bigtimes$} ;
\draw (Bp)+(0.3,-0.7) node {$p$};
\draw (Cp)+(0.7,-0.9) node {$k$};
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\fill[gray] (2.2,0) ellipse [x radius=4.5mm, y radius=1.8cm, rotate=0];
\end{tikzpicture} }
\end{center}
\caption{Exact vertices $\langle J_{(s)}^B\phi \bar{\phi} \rangle$. The circled cross denotes an insertion of the exact vertex, the bare cross denotes the insertion vertex in the `free' theory, the filled circle denotes the exact scalar propagator, and the elliptic blob denotes the exact scalar four-point function.}
\label{bosexactvertx}
\end{figure}
From \eqref{spin0current}, we see that the $J_0^B$ insertion in the `free' theory is $V^B_{0,\text{free}}(k,q)=1$. We work in the kinematic configuration $q_{\pm }=0$. The exact $J_0^B$ insertion vertex is shown in Fig.\ref{bosexactvertx}. In mathematical form, it is given by
\begin{equation}\label{bootstrapV0}
V^B_0(k,q) =1+N_B \int \frac{\mathcal{D}_B^3p}{(2\pi)^3} \ \frac{ \mathcal{A}(\vec{p}, \vec{k},q_3)}{(p_3^2+a^2(p_s))((p_3+q_3)^2+a^2(p_s))} \ ,
\end{equation}
where $a(p_s)=+\sqrt{p_s^2+c_B^2} \ $. The factor $N_B$ in the second term on the RHS of \eqref{bootstrapV0} comes from the color trace in the loop. It is useful to split the momentum integration measure as $\mathcal{D}_B^3p=p_sdp_s d\theta_p \mathcal{D}_Bp_3$, where $p_s$ is the radial momentum in the lightcone plane and $\theta_p$ is the angular direction in that plane. $\mc{D}_Bp_3$ is the integration measure for the momentum component $p_3$, taking into account the effect of holonomy as given in \eqref{bosholonomy}.
Doing the integration over the momenta $p_3$, we get
\begin{equation}\label{}
V^B_0(k,q) =1+N_B \int \frac{d^2\vec{p}}{(2\pi)^2} \ \frac{ \chi_B(a(p_s))\mathcal{A}(\vec{p}, \vec{k},q_3)}{a(p_s)(q_3^2+4a^2(p_s))} \ .
\end{equation}
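In the zero temperature, trivial holonomy limit ($\mathcal{D}_Bp_3\to dp_3$, $\chi_B\to 1$), the $p_3$ integral used in this step reduces to the elementary result $\int\frac{dp_3}{2\pi}\,[(p_3^2+a^2)((p_3+q_3)^2+a^2)]^{-1}=\frac{1}{a(q_3^2+4a^2)}$, which can be verified numerically; the sketch below is our own illustrative code (function names are not from the paper):

```python
import numpy as np
from scipy.integrate import quad

def p3_integral(a, q3):
    """Numerically evaluate int dp3/(2pi) 1/((p3^2+a^2)((p3+q3)^2+a^2)),
    i.e. the zero-temperature (chi_B -> 1, D_B p3 -> dp3) limit of the p3 integral."""
    val, _ = quad(lambda p3: 1.0/(2*np.pi*(p3**2 + a**2)*((p3 + q3)**2 + a**2)),
                  -np.inf, np.inf)
    return val

a, q3 = 1.3, 0.7
closed = 1.0/(a*(q3**2 + 4*a**2))  # chi_B -> 1 limit of the closed form
print(abs(p3_integral(a, q3) - closed) < 1e-8)  # True
```

The finite-temperature version simply dresses this closed form with the factor $\chi_B(a)$.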
Using the definition \eqref{Hbfunction} and the explicit expression of $\mathcal{A}(\vec{p}, \vec{k},q_3)$ given by \eqref{exactscalar4pt} and simplifying, we get,
\begin{equation}\label{}
\begin{split}
V^B_0(k,q) =1+\bigg[\frac{1}{2 H(a(k_s),q_3)} & \int_0^{\infty} dp_s \ p_s( \partial_{p_s} H(a(p_s),q_3)) \\
& \int_0^{2\pi} \frac{d\theta_p}{2\pi} \bigg\{\frac{(p+k)_{-}}{(p-k)_{-}} +\frac{ j( q_3)}{4\pi \i \lambda_{B} q_{3}} \bigg\} \bigg] \ .
\end{split}
\end{equation}
Performing the angular integral, we get
\begin{equation}\label{}
\begin{split}
V^B_0(k,q) =1+\bigg[\frac{1}{2 H(a(k_s),q_3)} & \int_0^{\infty} dp_s \ p_s( \partial_{p_s} H(a(p_s),q_3)) \\
& \bigg\{2\Theta(p_s-k_s)-1+\frac{ j( q_3)}{4\pi \i \lambda_{B} q_{3}} \bigg\} \bigg] \ .
\end{split}
\end{equation}
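The angular integral used in this step can be checked numerically. Assuming the lightcone convention $p_{-}=\tfrac{p_s}{\sqrt{2}}\,e^{-\i\theta_p}$ (the precise phase convention does not matter; the answer depends only on whether the pole sits inside the contour), one indeed recovers $2\Theta(p_s-k_s)-1$:

```python
import numpy as np

def angular_average(p_s, k_s, n=4096):
    """(1/2pi) int_0^{2pi} dtheta (p+k)_-/(p-k)_- with k placed at theta_k = 0.
    Assumes p_- = (p_s/sqrt(2)) e^{-i theta}; by the residue theorem the result
    depends only on whether the pole at p_- = k_- lies inside the circle,
    i.e. on sign(p_s - k_s)."""
    th = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    p_m = (p_s/np.sqrt(2.0))*np.exp(-1j*th)
    k_m = k_s/np.sqrt(2.0)
    return np.mean((p_m + k_m)/(p_m - k_m))

print(np.round(angular_average(2.0, 1.0).real, 6))  # 1.0  (p_s > k_s)
print(np.round(angular_average(0.5, 1.0).real, 6))  # -1.0 (p_s < k_s)
```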
Finally, performing the radial integral and simplifying further, we find
\begin{equation}\label{V0Bfin}
\begin{split}
V^B_0(k,q) =\frac{ \frac{1}{2}\big(\frac{ j( q_3)}{4\pi \i \lambda_{B} q_{3}} +1\big)H(\infty,q_3)- \frac{1}{2}\big(\frac{ j( q_3)}{4\pi \i \lambda_{B} q_{3}} -1\big)H(c_B,q_3) }{H(a(k_s),q_3)} \ .
\end{split}
\end{equation}
For later convenience, we denote the numerator on the RHS of \eqref{V0Bfin} by $\tilde{V}(q_3)$, in terms of which \eqref{V0Bfin} takes the form
\begin{equation}\label{V0Bfina}
\begin{split}
V^B_0(k,q) =\frac{ \tilde{V}(q_3) }{H(a(k_s),q_3)} \ .
\end{split}
\end{equation}
\subsubsection{Two-point function}
As in the case of fermions described in subsubsection \ref{2ptJ0F}, the two-point function for the spin zero operator $J_0^B$ is computed by evaluating the Feynman diagram shown in Fig.\ref{jj2ptB}. To compute the two-point function $\langle J_0^{B}(q') J_0^{B}(q) \rangle $, a single exact insertion vertex $J_0^{B}(q')$ is required to account for all the perturbative Feynman diagrams without any overcounting.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.85, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0) ;
\coordinate (B) at (6,0) ;
\coordinate (L) at (-1.1,0) ;
\draw (A) node {$\otimes $} ;
\draw (B) node {$\times$} ;
\filldraw (3,1.05) circle (3pt) ;
\filldraw (3,-1.05) circle (3pt) ;
\end{feynman}
\draw (0,0) .. controls (2,1.4) and (4,1.4) .. (6,0);
\draw (0,0) .. controls (2,-1.4) and (4,-1.4) .. (6,0);
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\draw (A)+(0.8,0) node {$\ J_0$};
\draw (5,0) node {$\ J_0$};
\end{tikzpicture}
\caption{Diagram contributing to two-point function $\langle J_0(-q)J_0(q)\rangle$. The filled circle denotes the exact scalar propagator.}
\label{jj2ptB}
\end{center}
\end{figure}
We choose the insertion on the left in Fig.\ref{jj2ptB} as the exact vertex. The insertion on the right side of Fig.\ref{jj2ptB} is the `free' insertion vertex, which we define as follows:
\begin{equation}
\langle J_0^{B}(-q)\phi(k)\bar{\phi}(p)\rangle =U_{0}^B(k,q)~(2\pi)^3\delta^{(3)}(p+k+q) \ .
\end{equation}
From the definition of $J_0^{B}(q)$ operator in \eqref{J0(-q)}, it follows that $U_{0}^B(k,q) =1$.
We define the two-point correlation function of $J_0^{B}$ operator as
\begin{equation}\label{J0B2pt}
\langle J_0^{B}(q')J_0^{B}(q)\rangle =\mc{G}_0^B(q) \ (2\pi)^3 \delta^{(3)}(q'+q) \ .
\end{equation}
We now have all the building blocks to compute the $\langle J_0^B J_0^B \rangle$ correlator. As discussed before, we compute the diagram shown in Fig.\ref{jj2ptB}, which translates into the equation
\begin{equation}\label{spin02pt}
\mc{G}^B_{0}(q)= N_B\int \frac{\mathcal{D}_B^3k}{(2\pi)^3} \ \frac{ V^B_{0}(k,q)U^B_{0}(k+q,-q)}{(k^2+c_B^2)((k+q)^2+c_B^2)} \ .
\end{equation}
Working in the case $q_{\pm} = 0$, and inserting the expressions for $V_0^B$ and $U_0^B$, we find
\begin{equation}\label{G0Bint}
\mc{G}^B_{0}(q)= N_B \tilde{V}(q_3) \int \frac{\mathcal{D}_B^3k}{(2\pi)^3} \ \frac{ 1 }{(k_3^2+a^2(k_s))((k_3+q_3)^2+a^2(k_s))H(a(k_s),q_3)} \ .
\end{equation}
The crucial fact is that the momentum integral in \eqref{G0Bint} can be carried out analytically. As before, we separate it into an integral over the radial momentum $k_s=\sqrt{k_1^2+k_2^2}=\sqrt{2k_{+}k_{-}} \ $ and an integral over an angular variable $\theta_k$. It is convenient to perform the $k_3$ integral first, which can be carried out using the integration result \eqref{FBfunction}. The angular integral in \eqref{G0Bint} contributes unity. Once these two integrals are performed, the integration over the radial momentum $k_s$ can also be carried out: changing the integration variable from $k_s$ to $a(k_s)=+\sqrt{k_s^2+c_B^2} \ $, the integrand of the radial integral can be written as a total derivative w.r.t. the variable $a(k_s)$. Thus the remaining integral can be performed completely, and the final result (inserting back the explicit form of $\tilde{V}(q_3)$) for the $\langle J_0^BJ_0^B\rangle $ two-point function in the regular boson theory at finite temperature is
\begin{equation}\label{spin02ptfinal}
\mc{G}^B_{0}(q)= - \frac{N_B}{4\pi \i \lambda_{B} q_3\Big(\frac{H_B(c_B,q_3)+H_B(\infty,q_3)}{ H_B(c_B,q_3)-H_B(\infty,q_3)}\Big)+\tilde{b}_4} \ .
\end{equation}
\subsection{Case II: Spin one}
The regular boson theory that we study here has a global $U(1)$ symmetry, and the corresponding conserved spin 1 current is
\begin{equation}\label{Jmu(x)}
\begin{split}
J^{B}_\mu(x) & = \i \Big[(D_\mu \bar{\phi}) \phi -\bar{\phi}(D_\mu \phi)\Big] \ . \\
\end{split}
\end{equation}
In momentum space, $J_\mu^B$ can be written as
\begin{equation}\label{J1Bmu(-q)final}
\begin{split}
J^{B}_\mu(-q) &
= \int_{k} (2k+q)_{\mu} \ \bar{\phi}(-(k+q)) \ \phi(k) \ - 2 \int_{p,k} \ \bar{\phi}(p) \ A_\mu(-(p+k+q)) \ \phi(k) \ .
\end{split}
\end{equation}
\subsubsection{Exact insertion vertex}
In this subsection we compute the exact $J_{\mu}^{B}$ insertion vertex which is one of the building blocks for computing the corresponding correlators. We define the exact $J_\mu^B$ insertion vertex by
\begin{equation}\label{J1Bmu(-q)def}
\langle J_{\mu}^{B}(-q) \phi(k) \bar{\phi}(p) \rangle =V_{(\mu)}^{B}(k,q) \ (2\pi)^3 \ \delta^{(3)}(p+k+q) \ .
\end{equation}
The exact insertion vertex is shown diagrammatically in Fig.\ref{bosexactvertx} by a circled cross. The insertion denoted by a bare cross is understood to be the insertion vertex in the `free' theory. Explicitly, $V_{(\mu)}^{B}(k,q)$ is given by
\begin{equation}\label{bootstrapV1Bmu}
V_{(\mu)}^{B}(k,q) =V_{(\mu),\text{free} }^{B}(k,q) +N_B \int \frac{\mathcal{D}_B^3p}{(2\pi)^3} \ \bigg[S_B(p+k) V_{(\mu),\text{free} }^{B}(p,q) S_B(p)\mathcal{A}(p,k,q) \bigg] \ ,
\end{equation}
where $S_B(k)$ is the exact scalar propagator given by \eqref{exactphiprop} and $\mathcal{A}(p,k,q)$ is the thermal scalar $4$-point function which, in the `lightcone kinematics' ($q_{\pm} =0 $), is given by \eqref{exactscalar4pt}. Substituting the exact scalar propagator, we write this in a more convenient form as
\begin{equation}\label{bootstrapV1Bmuexplicit}
V_{(\mu) }^{B}(k,q) =V_{(\mu),\text{free} }^{B}(k,q) +N_B \int \frac{\mathcal{D}_B^3p}{(2\pi)^3} \ \frac{ V_{(\mu),\text{free} }^{B}(p,q) \mathcal{A}(p,k,q)}{(p^2+c_B^2)((p+q)^2+c_B^2)} \ .
\end{equation}
To perform the integral in \eqref{bootstrapV1Bmuexplicit}, we need $V_{(\mu),\text{free} }^{B}(k,q)$, which can be read off from the definition of the $J_\mu^B$ current given in \eqref{J1Bmu(-q)final}.
In this paper the explicit computations are performed in the lightcone gauge $A_{-}=0$ and with the external momenta $q_{\pm}=0$. For the moment, we are interested in the computation of the exact $J_{-}^B$ vertex. It follows from \eqref{J1Bmu(-q)final} that $V_{(-),\text{free} }^{B}(k,q) = 2k_{{-}} $. From \eqref{bootstrapV1Bmuexplicit}, it then follows that
\begin{equation}\label{V1Bminusintegral}
V_{(-) }^{B}(k,q) =2k_{-} +N_B \int \frac{\mathcal{D}_B^3p}{(2\pi)^3} \ \frac{ (2p_{-} ) \mathcal{A}(\vec{p},\vec{k},q_3)}{(p^2+c_B^2)((p+q)^2+c_B^2)} \ .
\end{equation}
To perform the momentum space integral, we follow the same procedure as in the case of $V_0^B$. We perform the integral over $p_3$ first and then carry out the angular integral by inserting the explicit form of $\mathcal{A}(\vec{p},\vec{k},q_3)$, reducing the expression to a one-dimensional integral over the radial momentum $p_s$. Interestingly, the radial momentum integral can also be carried out analytically by writing the corresponding integrand as a total derivative w.r.t. the reduced integration variable $a(p_s)=+\sqrt{p_s^2+c_B^2} \ $. Performing the integral, we find the final result for the exact $J_{-}^B$ insertion vertex:
\begin{equation}\label{V1Bminusfinal}
V_{(-) }^{B}(k,q) =2k_{-} \ \frac{H_B(\infty,q_3)}{H_B(a(k_s),q_3)} \ .
\end{equation}
\subsubsection{Two-point function}
In this subsubsection, we compute the $\langle J_{-}J_{+} \rangle $ two-point correlator. The corresponding Feynman diagram is shown in Fig.\ref{jj2ptBs1}.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.85, cross/.style={path picture={
\draw[black]
(path picture bounding box.south east) -- (path picture bounding box.north west) (path picture bounding box.south west) -- (path picture bounding box.north east);
}}]
\begin{feynman}
\coordinate (A) at (0,0) ;
\coordinate (B) at (6,0) ;
\coordinate (L) at (-1.1,0) ;
\draw (A) node {$\otimes $} ;
\node [draw,cross,minimum width=0.01 cm] at (B){};
\filldraw (3,1.05) circle (3pt) ;
\filldraw (3,-1.05) circle (3pt) ;
\end{feynman}
\draw (0.10,0.10) .. controls (2,1.4) and (4,1.4) .. (5.85,0.15);
\draw (0.10,-0.10) .. controls (2,-1.4) and (4,-1.4) .. (5.85,-0.15);
\draw[-latex] (L)--+(0.7,0);
\draw (L)+(0.3, 0.3) node {$q$};
\draw (A)+(0.8,0) node {$\ J^{(s)}$};
\draw (5,0) node {$\ J^{(s)}$};
\end{tikzpicture}
\caption{Schematic diagram contributing to two-point function $\langle J^{(s)}(-q)J^{(s)}(q)\rangle $ for spin $s>0$. The filled circles denote the exact scalar propagator.}
\label{jj2ptBs1}
\end{center}
\end{figure}
As mentioned before, we choose the insertion at the left of the diagram to be the exact vertex, which in the present case is the exact $J_{-}^B$ vertex. It remains to compute the contribution from the insertion on the right side. To keep the distinction clear, we define the insertion on the right by
\begin{equation}\label{J1Bnu(-q)def}
\langle J_{\nu}^{B}(-q) \phi(k) \bar{\phi}(p) \rangle =U_{(\nu)}^{B}(k,q) \ (2\pi)^3 \ \delta^{(3)}(p+k+q) \ .
\end{equation}
To compute the two-point function $\langle J_{-}^BJ_{+}^B \rangle$, we need the `free' $J_{+}^B$ insertion vertex. From the definition \eqref{J1Bmu(-q)final}, it is clear that the `bare' insertion $ U_{(+)}^{B}(k,q) $ receives two contributions: one without the gauge field (the first term) and one involving the gauge field (the second term); diagrammatically, these are shown in Fig.\ref{UpB}.
\begin{figure}[h!]
\begin{center}
\raisebox{1pt}{
\begin{tikzpicture}[scale=0.55]
\begin{feynman}
\coordinate (a) at (-4,0) ;
\coordinate (b) at (0,0) ;
\coordinate (bp) at (2.5,0) ;
\coordinate (c) at (4,0) ;
\end{feynman}
\draw (a) -- (b) ;
\draw (b) -- (c) ;
\draw[-Latex] (a) -- ++(2,0) ;
\draw[-Latex] (0,-1.5) -- ++(0,1) ;
\draw (0.5, -1.2) node {$q$} ;
\draw (-2, 0.5) node {$k$} ;
\draw (b) node {$\times$} ;
\end{tikzpicture} }
\raisebox{1pt}{
\begin{tikzpicture}[scale=0.55]
\begin{feynman}
\coordinate (a) at (-4,0) ;
\coordinate (b) at (0,0) ;
\coordinate (bp) at (2.5,0) ;
\coordinate (c) at (4,0) ;
\diagram* {
(b) -- [boson, bend left] (bp)
};
\end{feynman}
\draw (a) -- (b) ;
\draw (b) -- (c) ;
\filldraw (b)+(1.2,0) circle (5pt) ;
\draw[-Latex] (a) -- ++(2,0) ;
\draw[-Latex] (0,-1.5) -- ++(0,1) ;
\draw (0.5, -1.2) node {$q$} ;
\draw (-2, 0.5) node {$k$} ;
\draw (b) node {$\times$} ;
\end{tikzpicture} }
\raisebox{1pt}{
\begin{tikzpicture}[scale=0.55]
\begin{feynman}
\coordinate (a) at (-4,0) ;
\coordinate (b) at (0,0) ;
\coordinate (bp) at (2.5,0) ;
\coordinate (c) at (4,0) ;
\diagram* {
(b) -- [boson, bend left] (bp)
};
\end{feynman}
\draw (a) -- (b) ;
\draw (b) -- (c) ;
\filldraw (b)+(1.2,0) circle (5pt) ;
\draw[-Latex] (a) -- ++(2,0) ;
\draw[-Latex] (2.5,-1.5) -- ++(0,1) ;
\draw (3, -1.2) node {$q$} ;
\draw (-2, 0.5) node {$k$} ;
\draw (bp) node {$\times$} ;
\end{tikzpicture} }
\end{center}
\caption{Diagrams with nonzero contributions to the `bare' vertex $ U_{(+)}^{B}$, which is the boxed cross on the RHS in Fig.\ref{jj2ptBs1}. The filled circle denotes the exact scalar propagator. The loop momentum in the last two diagrams is $\ell$.}
\label{UpB}
\end{figure}
Summing the diagrams in Fig.\ref{UpB}, the total nonzero contribution to the `bare' insertion vertex $U_{(+)}^{B}$ is given by
\begin{equation}\label{U1B(+)(k,q) }
\begin{split}
U_{(+)}^{B}(k,q)
& = 2k_{+} - \ 4\pi \lambda_B \epsilon_{+-\rho} q^{\rho} \int \frac{\mathcal{D}_B^3 \l }{(2\pi)^3 } \frac{S_B(\l)}{(\l-k)_{-}} \ . \\
\end{split}
\end{equation}
Here $S_B(\ell)$ is the exact scalar propagator given by \eqref{exactphiprop}. Performing the angular integral, then the integral over $\l_3$, and finally the radial integral over $\l_s$ (also using the fact that $\epsilon_{+-\rho}=\i \delta_{\rho 3}$), we get
\begin{equation}\label{U1B(+)(k,q)final}
\begin{split}
U_{(+)}^{B}(k,q)
& = 2k_{+} + \ \frac{\i \lambda_B q_3 }{k_{-}} \bigg[\xi_B(a(k_s))-\xi_B(c_B)\bigg] \ ,
\end{split}
\end{equation}
where the function $\xi_B(z)$ is given by \eqref{xiB}.
We are now in a position to compute the two-point function of the spin-one current operator.
We define the two-point function $\langle J_{\mu}^{B}(-q) J_{\nu}^{B}(q)\rangle $ as
\begin{equation}\label{G1Bmunudefn}
\langle J_{\mu}^{B}(q') J_{\nu}^{B}(q)\rangle = \mc{G}^{B}_{\mu\nu} (q) \ (2\pi)^{3} \delta^{(3) }(q'+q) \ .
\end{equation}
Diagrammatically, this is shown in Fig.\ref{jj2ptBs1}. Explicitly, it is given by
\begin{equation}\label{G1Bmunu(q)}
\mc{G}^{B}_{\mu\nu}(q)= N_B\int \frac{\mathcal{D}_B^3k}{(2\pi)^3} \ \bigg[S_B(k+q) V^{B}_{(\mu)}(k,q)S_B(k)U^{B}_{(\nu)}(k+q,-q) \bigg] \ ,
\end{equation}
where $S_{B}(k)$ is the exact scalar field propagator given by \eqref{exactphiprop}. Explicitly, we write this as
\begin{equation}\label{G1Bmunu(q)explicit}
\mc{G}^{B}_{\mu\nu}(q)= N_B\int \frac{\mathcal{D}_B^3k}{(2\pi)^3} \ \frac{ V^{B}_{(\mu)}(k,q)\ U^{B}_{(\nu)}(k+q,-q) }{(k^2+c_B^2)((k+q)^2+c_B^2)} \ .
\end{equation}
As we are working with external momenta $q_{\pm}=0$ and in the $A_{-}=0$ gauge, the non-trivial component in this case is the $(\mu\nu)= (-+)$ component. Substituting the expressions \eqref{V1Bminusfinal} and \eqref{U1B(+)(k,q)final}, we find
\begin{equation}\label{G1B-+(q)simpl}
\mc{G}^{B}_{-+}(q)= N_B\int \frac{\mathcal{D}_B^3k}{(2\pi)^3} \ (2k_{-})\frac{H_B(\infty,q_3) }{H_B(a(k_s),q_3)} \ \frac{ 2k_{+} -\frac{\i \lambda_B q_3}{k_{-} } \big[\xi_B(a(k_s))-\xi_B(c_B) \big]}{(k_3^2+a^2(k_s))((k_3+q_3)^2+a^2(k_s) )} \ .
\end{equation}
The interesting fact is that the momentum integral in \eqref{G1B-+(q)simpl} can be carried out completely analytically. We first perform the integration over the momentum component $k_3$ by using the result \eqref{GB2}. For the integrations in the lightcone plane, we use polar coordinates to write the integration effectively as an integration over a radial momentum $k_s$ and an integration over an angular variable $\theta_k$. The angular integral in this case is trivial and contributes unity. Doing these, the integration in \eqref{G1B-+(q)simpl} is reduced to an integration over the single variable $k_s$. The $k_s$ integral can be performed analytically: the integrand can be written as a total derivative w.r.t. the reduced variable $a(k_s)=+\sqrt{k_s^2+c_B^2}\equiv z$, so carrying out the radial integral, the final result for $\langle J_{-}J_{+}\rangle $ is given by
\begin{equation}\label{G1B-+(q)finalanswer}
\begin{split}
\mc{G}^{B}_{-+}(q)
& = \lim_{\Lambda\to \infty} \bigg[ \frac{\i N_Bq_3 }{16\pi \lambda_B } \ \bigg(1+\frac{4c_B^2}{q_3^2}\bigg) \bigg[ \frac{H_B(\Lambda,q_3)}{H_B(c_B, q_3)} -1\bigg] - \frac{N_B \xi_B(c_B)}{ 4 \pi} +\frac{N_B \xi_B(\Lambda)}{ 4 \pi} \bigg]
\end{split}
\end{equation}
where $\Lambda$ is the UV cut-off on the radial momentum. The term $\xi_B(\Lambda\to \infty)$ is purely divergent. We regularize the answer by dropping this linearly divergent piece\footnote{This can be removed by using dimensional regularization in the integration over $k_s$ \cite{Choudhury:2018iwf}. It can also be done by turning on a mass counterterm for the background source which couples to the current $J_\mu$ \cite{Aharony:2012nh}.}. The renormalized two-point function $\langle J_{-}^BJ_{+}^B\rangle $, up to the momentum conserving delta function, is then given by
\begin{equation}\label{G1B-+(q)fin}
\begin{split}
\mc{G}^{B}_{-+}(q)
& = \frac{\i N_Bq_3 }{16\pi \lambda_B } \ \bigg(1+\frac{4c_B^2}{q_3^2}\bigg) \bigg[ \frac{H_B(\infty,q_3)}{H_B(c_B, q_3)} -1\bigg] - \frac{N_B \xi_B(c_B)}{ 4 \pi} \ .
\end{split}
\end{equation}
As discussed in \cite{Gur-Ari:2016xff} (see e.g. around equation 80 of \cite{Gur-Ari:2016xff}), the two-point correlator of the $U(1)$ current \eqref{Jmu(x)} is $\langle J_{\mu}^BJ_{\nu}^B\rangle-2\delta_{\mu\nu}\langle\bar{\phi}\phi\rangle$. We report the relevant result by taking the contribution of $\langle\bar{\phi}\phi\rangle$ into account in subsubsection \ref{gis1b}.
\section{Analysis of results}\label{anlzresult}
In this section, we analyze the results obtained in this paper and study various limiting cases.
\subsection{Fermionic result}
As discussed in detail above, in this paper we have studied massive regular fermions coupled to Chern-Simons theory at finite temperature with an arbitrary holonomy distribution. Below we summarize the results and analyze their various relevant structures.
\subsubsection{Two-point correlator of spin 0 operator : }
The final result for $\langle J_0^F(-q)J_0^F(q) \rangle$ is given in \eqref{G0F(q)thefinal} and is rewritten below
\begin{equation}\label{G0Fan}
\begin{split}
\mc{G}_0^F(q)
& =
\frac{N_F|q_3|}{4\pi \lambda_{F} } \ \cot \bigg[ 2\lambda_F|q_3| \int_{c_F}^{\infty} \frac{dw \ \chi_F(w)}{q_3^2+4w^2} -\text{sgn}(h_F)\tan^{-1}\frac{|q_3|}{2c_F} \bigg] + \frac{N_F m_F}{2\pi \lambda_{F} } \ ,
\end{split}
\end{equation}
where $\chi_F(w)$ is given by \eqref{chiF} and $\text{sgn}(h_F)$ is defined around \eqref{masseqnforcF} (at zero temperature, $\text{sgn}(h_F)$ effectively reduces to $\text{sgn}(m_F)$). This result is valid for the general massive theory at finite temperature. Interestingly, \eqref{G0Fan} is even in $q_3$. From the corresponding CFT results \cite{GurAri:2012is}\footnote{By the corresponding CFT, we mean here the CFT from which one obtains the massive theory studied in this paper by deforming it with relevant deformations.}, we know that the two-point function of the spin zero scalar `current' operator is even in the general momentum $q$. The spin zero operator is even under parity in the corresponding CFT \cite{GurAri:2012is}, and it is unlikely that finite temperature effects break that structure. Assuming this to be true, we `covariantize' the result \eqref{G0Fan} by replacing $|q_3|$ with the corresponding `$SO(3)$ rotation-invariant' generalization, i.e., with $|q|=+\sqrt{q_1^2+q_2^2+q_3^2}$\footnote{Strictly speaking, the external momentum component $q_3$ is discrete at finite temperature. However, we formally treat it as a continuous variable for the purpose of `covariantization'. Also, the true rotation symmetry is actually $SO(2)$ in the spatial plane. We nevertheless formally write the `covariantized' form by replacing $|q_3|$ with $+\sqrt{q_1^2+q_2^2+q_3^2}$, which reduces to $|q_3|$ in the choice $q_{\pm}=0$. The `covariantized' results are indeed the correct results obtained in this paper in the case $q_{\pm}=0$, in which case $|q|$ should be thought of as $|q_3|$. It would be nice to have a direct independent computation without setting $q_{\pm}$ to zero to check that this is indeed the case. \label{fncovs0f}}. The corresponding `covariant' generalization is given below:
\begin{equation}\label{G0Fcov}
\begin{split}
\mc{G}_0^F(q)
& =
\frac{N_F|q|}{4\pi \lambda_{F} } \ \cot \bigg[ 2\lambda_F|q| \int_{c_F}^{\infty} \frac{dw \ \chi_F(w)}{q^2+4w^2} -\text{sgn}(h_F)\tan^{-1}\frac{|q|}{2c_F} \bigg] + \frac{N_F m_F}{2\pi \lambda_{F} } \ .
\end{split}
\end{equation}
We now study various limiting cases of this result and check that they match the existing results.
\subsubsection*{ Zero temperature and zero mass}
We start with the simplest possible limiting case, i.e., the case when both the temperature and the fermion bare mass parameter $m_F$ are zero. In this case, the effect of holonomy vanishes, and it is clear from \eqref{chiF} that in the zero temperature limit $\chi_F(w)=1$. From the gap equation \eqref{masseqnforcF} we see that $c_F$ also vanishes; that is, there is no self energy correction to the pole mass $c_F$ in the corresponding CFT at zero temperature \cite{Giombi:2011kc}. As the bare mass $m_F$ can have either sign, we choose to take the massless limit from the side in which $\text{sgn}(m_F)=\text{sgn}(\lambda_F)$, which is known to be dual to the unhiggsed phase of the bosonic theory. It follows from \eqref{G0Fan}, or its `covariantized' form \eqref{G0Fcov}, that the two-point function of the scalar operator $J_0^F$ in this limiting case takes the form
\begin{equation}\label{G0FmF0T0}
\begin{split}
\mc{G}_0^F(q)
& = -\frac{N_F |q| }{4\pi \lambda_{F} } \ \tan\bigg(\frac{\pi \lambda_F}{2} \bigg) \ .
\end{split}
\end{equation}
As expected, this result matches the result in \cite{GurAri:2012is}, which supports the correctness of the computation performed in this paper.
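This limit can also be verified numerically from \eqref{G0Fcov} itself, setting $\chi_F=1$, $m_F=0$, $\text{sgn}(h_F)=\text{sgn}(\lambda_F)$ and using a small regulator for $c_F\to 0^{+}$; the sketch below is purely illustrative (the regulator value is a numerical device, not part of the result):

```python
import numpy as np
from scipy.integrate import quad

def G0F(q, lam, cF, NF=1.0, mF=0.0):
    """Eq. (G0Fcov) with chi_F = 1 (zero temperature); sgn(h_F) = sgn(lam) assumed."""
    A, _ = quad(lambda w: 1.0/(q**2 + 4*w**2), cF, np.inf)
    arg = 2*lam*abs(q)*A - np.sign(lam)*np.arctan(abs(q)/(2*cF))
    return NF*abs(q)/(4*np.pi*lam)/np.tan(arg) + NF*mF/(2*np.pi*lam)

q, lam = 1.7, 0.3
limit = -q/(4*np.pi*lam)*np.tan(np.pi*lam/2)  # eq. (G0FmF0T0)
print(abs(G0F(q, lam, cF=1e-8) - limit) < 1e-6)  # True
```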
\subsubsection*{Zero temperature and nonzero mass}
In this case, as before, the effect of holonomy vanishes, implying $\chi_F(w)=1$. The integral in the argument of the cotangent in \eqref{G0Fcov} can then be performed easily, $\int_{c_F}^{\infty} \frac{dw}{q^2+4w^2}=\frac{1}{2|q|}\tan^{-1}\frac{|q|}{2c_F}$, and the final result is given by
\begin{equation}\label{G0FmFn0T0}
\begin{split}
\mc{G}_0^F(q)
& =
\frac{N_F|q|}{4\pi \lambda_{F} } \ \cot \bigg[ (\lambda_F-\text{sgn}(m_F)) \tan^{-1}\frac{|q|}{2c_F} \bigg] + \frac{N_F m_F}{2\pi \lambda_{F} } \ .
\end{split}
\end{equation}
\subsubsection*{Nonzero temperature and zero mass}
This is the case in which the effect of holonomy becomes non-trivial. The existing result \cite{Ghosh:2019sqf} covers only the case in which the holonomy takes the universal table-top distribution form, i.e., $\rho_F(\alpha)=\frac{\Theta(\pi|\lambda_F|-|\alpha|)}{2\pi|\lambda_F|}$. Performing the integral over the holonomy with this particular distribution, we get
\begin{equation}\label{chiFtabletop}
\chi_F(w) = \frac{\i}{\pi |\lambda_F|} \Big[\log \cosh\Big(\frac{\beta w-\i \pi |\lambda_F|}{2}\Big) - \log \cosh\Big(\frac{\beta w+\i \pi |\lambda_F|}{2}\Big) \Big] \ .
\end{equation}
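As a sanity check on \eqref{chiFtabletop} (assuming the principal branch of the logarithm): $\chi_F(w)$ is real for real $w$, and reduces to the zero temperature value $\chi_F=1$ when $\beta w\gg 1$. A numerical sketch:

```python
import numpy as np

def chi_F(w, beta, lam_abs):
    """Table-top holonomy chi_F, eq. (chiFtabletop); principal log branch assumed."""
    x = beta*w/2.0
    y = np.pi*lam_abs/2.0
    return (1j/(np.pi*lam_abs))*(np.log(np.cosh(x - 1j*y)) - np.log(np.cosh(x + 1j*y)))

v = chi_F(w=1.0, beta=50.0, lam_abs=0.3)
print(abs(v.imag) < 1e-9)        # True: chi_F is real for real w
print(abs(v.real - 1.0) < 1e-6)  # True: beta*w >> 1 recovers chi_F = 1
```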
Similar to the zero temperature case discussed above, when the bare mass of the fermion is zero, the theory is dual to the bosonic scalar theory, for which $\text{sgn}(h_F)=\text{sgn}(\lambda_F)$. The final result for the two-point function of the spin zero operator $J_0^F$ in the thermal CFT, in the `covariantized' form, is given by
\begin{equation}\label{G0FmF0Tn0}
\begin{split}
\mc{G}_0^F(q)
& =
\frac{N_F|q|}{4\pi \lambda_{F} } \ \cot \bigg[ 2\lambda_F|q| \int_{c_F}^{\infty} \frac{dw \ \chi_F(w)}{q^2+4w^2} -\text{sgn}(\lambda_F)\tan^{-1}\frac{|q|}{2c_F} \bigg] \ ,
\end{split}
\end{equation}
where $c_F$ is computed from the gap equation \eqref{masseqnforcF} using the fact that $m_F=0$, together with \eqref{xiFtt}. $\chi_F(w)$ in this case is given by \eqref{chiFtabletop}. For the special choice of external momenta $q=(0,0,q_3)$, the result \eqref{G0FmF0Tn0} agrees with \cite{Ghosh:2019sqf}.
\subsubsection{Two-point correlator of spin 1 current : }
The final result for the spin 1 current two-point function $\langle J_{-}^FJ_{+}^F\rangle$ is given by \eqref{G1F(q)final}. We rewrite this result below using the fact that $h_F(c_F)=\text{sgn}(h_F)c_F$ (which follows from the gap equation \eqref{masseqnforcF}) and the explicit form of $H_F(z,q_3)$. It takes the following form \footnote{This does not have a definite even/odd property under $q_3\to -q_3$. However, it can formally be written as a sum of even and odd parts, i.e., $\mc{G}_{-+}^{F}(q) =\mc{G}_{-+}^{F, \text{even}}(q) +\mc{G}_{-+}^{F,\text{odd}}(q)$, where $\mc{G}_{-+}^{F, \text{even}}(-q)=\mc{G}_{-+}^{F, \text{even}}(q)$ and $\mc{G}_{-+}^{F,\text{odd}}(-q)=-\mc{G}_{-+}^{F,\text{odd}}(q)$. In this footnote, we propose to formally write the possible `covariantized' form of the correlator, obtained by replacing $|q_3|$ with $|q|=+\sqrt{q_1^2+q_2^2+q_3^2}$, as (see also footnote \ref{fncovs0f} for a related clarification)
\begin{equation}\label{GFevencov}
\mc{G}_{\mu \nu}^{F, \text{even}}(q) = \Big[ \frac{N_F}{16\pi \lambda_F}\Big(1-\frac{4c_F^2}{q^2}\Big)\sin\big(4\lambda_F|q|A_F\big) + \frac{N_F\text{sgn}(h_F)c_F}{4\pi\lambda_F|q|} \Big(\cos(4\lambda_F|q|A_F)-1\Big) +\frac{N_F\xi_{F}(c_F)}{4\pi|q|} \Big] \frac{q_\mu q_\nu-\delta_{\mu\nu}q^2}{|q|} \ ,
\end{equation}
and
\begin{equation}\label{GFoddcov}
\mc{G}_{\mu \nu}^{F, \text{odd}}(q) =\frac{N_F}{8\pi \lambda_Fq^2} \Big[ (q^2-4c_F^2)\sin^2\big(2\lambda_F|q|A_F\big) + 2\text{sgn}(h_F)c_F|q| \sin\big(4\lambda_F|q|A_F\big) \Big] \epsilon_{\mu\nu\rho}q^{\rho} \ ,
\end{equation}
where, in the above two expressions, $A_F=\int_{c_F}^{\infty}\frac{dw \ \chi_F(w)}{q^2+4w^2}$. It would be nice to have a direct independent check that this is indeed the correct form of $\mc{G}_{\mu\nu}^F(q)$ without choosing $q_{\pm}=0$.
Similar arguments apply for the correlator of spin one current in the bosonic case as given in \eqref{tlG1Ban}.
The author would like to thank S. Minwalla for asking a question about this. \label{fncovs1f}}
\begin{equation}\label{G1Fan}
\begin{split}
\mc{G}_{-+}^{F}(q)
&= \frac{\i N_F q_3 }{16\pi \lambda_F} \bigg(1+ \frac{2\i \text{sgn}(h_F)c_F }{q_3 }\bigg)^2\bigg[\exp \bigg( 4\i \lambda_F q_3 \int_{c_F}^{\infty} \frac{dw \ \chi_F(w)}{q_3^2+4w^2}\bigg) -1 \bigg] -\frac{N_F\xi_F(c_F)}{4 \pi} \ .
\end{split}
\end{equation}
Below we study limiting cases of this result and verify with the existing results.
\subsubsection*{Zero temperature and zero mass}
As already discussed in the spin zero case, in the zero temperature limit $\chi_F(w)=1$, so the integral appearing in the exponential in \eqref{G1Fan} can be performed exactly. As discussed before, the pole mass $c_F$ also vanishes in this case. Using these facts, together with $m_F=0$, we get
\begin{equation}\label{G1FmF0T0}
\begin{split}
\mc{G}_{-+}^{F}(q)
= \frac{\i N_F q_3}{16 \pi \lambda_F } & \ \bigg[e^{\pi \i \lambda_{F} \text{sgn}(q_3)}-1\bigg] \ .
\end{split}
\end{equation}
This exactly matches the corresponding CFT result at zero temperature reported in \cite{GurAri:2012is} \footnote{As discussed in \cite{GurAri:2012is}, this result can be separated into parity even and parity odd parts. In this case, the covariantized form of the parity even part is given by $\frac{N_F}{16}\frac{\sin(\pi \lambda_F)}{\pi\lambda_F}\frac{q_\mu q_\nu-\delta_{\mu\nu}q^2}{|q|}$, as reported in \cite{GurAri:2012is}. The parity odd part can come from a contact term proportional to $\epsilon_{\mu\nu\rho}q^\rho$ \cite{GurAri:2012is} (see the discussion around equation 35 of \cite{GurAri:2012is}).}.
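The even/odd decomposition mentioned in the footnote above can be illustrated concretely in this limit: the even part of \eqref{G1FmF0T0} under $q_3\to -q_3$ is $-\frac{N_F\sin(\pi\lambda_F)}{16\pi\lambda_F}|q_3|$, consistent with the covariantized parity even form evaluated at $q_{\pm}=0$ (assuming $\delta_{-+}=1$ in this convention). A numerical sketch:

```python
import numpy as np

def G1F_T0_m0(q3, lam, NF=1.0):
    """Eq. (G1FmF0T0): i NF q3 / (16 pi lam) * (exp(i pi lam sgn(q3)) - 1)."""
    return 1j*NF*q3/(16*np.pi*lam)*(np.exp(1j*np.pi*lam*np.sign(q3)) - 1.0)

q3, lam = 1.3, 0.4
even = 0.5*(G1F_T0_m0(q3, lam) + G1F_T0_m0(-q3, lam))
expected = -np.sin(np.pi*lam)/(16*np.pi*lam)*abs(q3)  # parity even part at q_pm = 0
print(abs(even - expected) < 1e-12)  # True
```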
\subsubsection*{Zero temperature and nonzero mass}
In this limiting case, the effect of holonomy again goes away. As mentioned before, in the zero temperature limit $\chi_F(w)$ equals unity, and the integral appearing in the exponential in \eqref{G1Fan} can be performed exactly. Keeping a finite nonzero bare mass $m_F$, the result \eqref{G1Fan} reduces to
\begin{equation}\label{G1F(q)T0mFneq0final}
\begin{split}
\mc{G}_{-+}^{F}(q)
= \frac{\i N_F q_3}{16 \pi \lambda_F } \bigg(1+ \frac{2\i c_F\text{sgn}(m_F) }{q_3} \bigg)^2 & \ \bigg[ e^{2\i \lambda_{F} \tan^{-1}\big(\frac{q_3}{2c_F}\big)}-1\bigg] -\frac{N_Fc_F}{4\pi} \ ,
\end{split}
\end{equation}
where $c_F$ is the fermionic pole mass determined from the gap equation \eqref{masseqnforcF} at zero temperature, which gives $c_F=\frac{m_F}{\text{sgn}(m_F)-\lambda_F}$. This exactly matches the results of \cite{Gur-Ari:2016xff, amiyata1}.
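This reduction can be cross-checked numerically: inserting $\chi_F=1$ and the zero temperature value $\xi_F(c_F)=c_F$ into \eqref{G1Fan} reproduces \eqref{G1F(q)T0mFneq0final}, with $\text{sgn}(h_F)=\text{sgn}(m_F)$ at zero temperature. The sketch below is illustrative only:

```python
import numpy as np
from scipy.integrate import quad

def G1F_general(q3, lam, cF, sgn, NF=1.0):
    """Eq. (G1Fan) with chi_F = 1 and xi_F(c_F) = c_F (zero-temperature values)."""
    A, _ = quad(lambda w: 1.0/(q3**2 + 4*w**2), cF, np.inf)
    pref = 1j*NF*q3/(16*np.pi*lam)*(1 + 2j*sgn*cF/q3)**2
    return pref*(np.exp(4j*lam*q3*A) - 1.0) - NF*cF/(4*np.pi)

def G1F_T0(q3, lam, cF, sgn, NF=1.0):
    """Eq. (G1F(q)T0mFneq0final)."""
    pref = 1j*NF*q3/(16*np.pi*lam)*(1 + 2j*cF*sgn/q3)**2
    return pref*(np.exp(2j*lam*np.arctan(q3/(2*cF))) - 1.0) - NF*cF/(4*np.pi)

q3, lam, cF, sgn = 0.9, 0.35, 0.6, 1.0
print(abs(G1F_general(q3, lam, cF, sgn) - G1F_T0(q3, lam, cF, sgn)) < 1e-8)  # True
```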
\subsubsection*{Nonzero temperature and zero mass}
As discussed before, at finite temperature the effects of holonomy become non-trivial. The result \eqref{G1Fan} is valid for an arbitrary holonomy distribution. In the specific case of the table-top holonomy $\rho_F(\alpha)=\frac{\Theta(\pi|\lambda_F|-|\alpha|)}{2\pi|\lambda_F|}$, as used in \cite{Gur-Ari:2016xff, Ghosh:2019sqf}, the final result is given by \eqref{G1Fan} with $\chi_F(w)$ now given by \eqref{chiFtabletop}. $\xi_F(c_F)$ in this case is given by \eqref{xiFz} with the table-top holonomy distribution, i.e.,
\begin{equation}\label{xiFtt}
\xi_{F}(c_F) = \frac{1}{2\pi|\lambda_F|\beta} \int_{-\pi|\lambda_F|}^{\pi|\lambda_F|} d\alpha \ \Big(\ln 2\cosh\big(\frac{\beta c_F+\i \alpha}{2}\big)+ \ln 2\cosh\big(\frac{\beta c_F-\i \alpha}{2}\big) \Big) \ .
\end{equation}
The holonomy integral above evaluates to dilogarithm functions, as given, e.g., in \cite{Aharony:2012ns, Gur-Ari:2016xff, Ghosh:2019sqf}.
\subsection{Bosonic result}
We have also studied the bosonic theory in this paper: the massive regular bosonic matter theory coupled to $SU(N_B)$ Chern-Simons theory at finite temperature with an arbitrary holonomy distribution. We have computed various correlators, which we summarize below, and check that they agree with the existing results in several limiting cases.
\subsubsection{Two-point correlator of spin 0 operator : }
The final result for the renormalized two-point function of the single trace, spin zero operator $J_0^B$ in the regular boson theory is given by \eqref{spin02ptfinal}, which we rewrite below in order to analyze it further:
\begin{equation}\label{G0Ban}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} q_3 \ \cot \Big(2\lambda_B q_3 \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q_3^2+4w^2} \Big)-\tilde{b}_4} \ ,
\end{equation}
where, $\tilde{b}_4$ is given by \eqref{b4tl} and thermal mass $c_B$ is given by \eqref{thermalcB}. First of all, it is easy to see that this result is even under $q_3\to -q_3$ (we have also seen the same property in the fermionic result in \eqref{G0Fan}). As described in the $J_0^F$ case (see footnote \ref{fncovs0f}), we `covariantize' this result by replacing $|q_3|$ with $|q|=+\sqrt{q_1^2+q_2^2+q_3^2}$. The `covariantized' form of this result is given by
\begin{equation}\label{G0Bcov}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} |q| \ \cot \Big(2\lambda_B |q| \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q^2+4w^2} \Big)-\tilde{b}_4} \ .
\end{equation}
\subsubsection*{Zero temperature and zero mass}
Let us consider the simple case of zero temperature and zero bare mass $m_B=0$. In the zero temperature limit, $\chi_B(w)=1$ and $\xi_{B}(w)=w$. In this case, the pole mass of the scalar is obtained by solving \eqref{thermalcB}. The integral appearing in the argument of the cotangent in \eqref{G0Bcov} can be performed explicitly at zero temperature. The final result in this case is given by
\begin{equation}\label{G0BT0mB0}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} |q| \ \cot \Big(\lambda_B\tan^{-1}\big(\frac{|q|}{2c_B} \big) \Big)-\tilde{b}_4} \ .
\end{equation}
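For completeness, the elementary zero-temperature integral used here (with $\chi_B(w)=1$) is
\begin{equation*}
2\lambda_B |q| \int_{c_B}^{\infty} \frac{dw}{q^2+4w^2} = 2\lambda_B |q| \ \frac{1}{2|q|}\Big(\frac{\pi}{2}-\tan^{-1}\frac{2c_B}{|q|}\Big) = \lambda_B \tan^{-1}\Big(\frac{|q|}{2c_B} \Big) \ ,
\end{equation*}
which gives the argument of the cotangent in \eqref{G0BT0mB0}.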
Solving \eqref{thermalcB} in this case, one finds the pole mass $c_B=0$. It follows that, with nonzero quartic coupling $b_4$, the two-point function of the scalar operator is given by
\begin{equation}\label{G0BT0mB0b4n0}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} |q| \ \cot \big(\frac{\pi \lambda_B}{2} \big)+{b}_4} \ .
\end{equation}
This agrees with the results of \cite{Aharony:2012nh} (see e.g. equation 66 of \cite{Aharony:2012nh}). In the special case when $b_4$ is zero, \eqref{G0BT0mB0b4n0} reduces to
\begin{equation}\label{G0BT0mB0b40}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} |q|} \ \tan \Big(\frac{\pi \lambda_B}{2} \Big) \ .
\end{equation}
This result exactly matches with the one given in \cite{Aharony:2012nh} (see e.g. equation 35 of \cite{Aharony:2012nh}).
\subsubsection*{Zero temperature and nonzero mass}
As discussed above, at zero temperature the effect of holonomy disappears. As the bare mass $m_B$ is nonzero, the pole mass $c_B$ is also nonzero and is obtained by solving the gap equation \eqref{thermalcB} at zero temperature, in which case $\xi_{B}(c_B)=c_B$. The two-point correlator of the $J_0^B$ operator in this case is given by
\begin{equation}\label{G0BT0mBn0}
\mc{G}^B_{0}(q)= \frac{N_B}{4\pi \lambda_{B} |q| \ \cot \Big(\lambda_B\tan^{-1}\big(\frac{|q|}{2c_B} \big) \Big)-\tilde{b}_4} \ .
\end{equation}
\subsubsection*{Nonzero temperature and zero mass}
In this case, the non-trivial effects of holonomy become important. The result that we have given in \eqref{G0Bcov} is valid for any arbitrary holonomy distribution. In the case of the specific table-top holonomy $\rho_B(\alpha)=\frac{\Theta(\pi|\lambda_B|-|\alpha|)}{2\pi|\lambda_B|}$, as discussed in \cite{Ghosh:2019sqf}, the final result is given by \eqref{G0Bcov}, where $\chi_B(w)$ is now given by
\begin{equation}\label{chiBtabletop}
\chi_B(w) = \frac{\i}{\pi |\lambda_B|} \Big[\log \sinh\Big(\frac{\beta w-\i \pi |\lambda_B|}{2}\Big) - \log \sinh\Big(\frac{\beta w+\i \pi |\lambda_B|}{2}\Big) \Big] \ .
\end{equation}
$\xi_B(c_B)$ in this case is given by \eqref{xiB} with the table-top holonomy distribution. This can be written as
\begin{equation}\label{xiBtt}
\xi_{B}(c_B) = \frac{1}{2\pi|\lambda_B|\beta} \int_{-\pi|\lambda_B|}^{\pi|\lambda_B|} d\alpha \ \Big(\ln 2\sinh\big(\frac{\beta c_B+\i \alpha}{2}\big)+ \ln 2\sinh\big(\frac{\beta c_B-\i \alpha}{2}\big) \Big) \ ,
\end{equation}
and the results of the holonomy integral are in terms of the dilogarithm functions as given, e.g., in \cite{Aharony:2012ns, Ghosh:2019sqf}.
\subsubsection{Critical theory limit}
One of our goals in this paper is to check the conjectured bose-fermi duality. In this paper, we have studied the regular fermionic matter coupled to Chern-Simons theory, which is conjectured to be dual to the critical bosons coupled to Chern-Simons theory. There is calculational evidence for this conjectured duality from explicit computations of the thermal free energies on both sides. To check the duality of the two-point functions of gauge invariant operators, i.e., to match with the regular fermionic theory, we need the corresponding results in the critical bosonic theory. The `critical' limit of the theory is defined \cite{Jain:2014nza, Aharony:2012nh, GurAri:2012is} by taking $b_4\rightarrow \infty $ and $m_B^2\to \infty$ with $\frac{4\pi m_B^2}{b_4}=m_B^{\text{cri}}$ and $b_6$ kept fixed. The action for the critical theory can be obtained from the action \eqref{RBlag} of the regular boson theory, by introducing a Hubbard-Stratonovich field $\sigma_B$, and taking the critical limit (we set $b_6=0$) \cite{Jain:2014nza}. The action in this case takes the form
\begin{equation}\label{CBlag}
\begin{split}
\mc{S}_{\text{CB}}[A, \phi, \sigma_B] & =\frac{\i \kappa_B}{4\pi} \int \text{d}^3 x\ \epsilon^{\mu\nu\rho}\,\text{tr}\left( A_\mu \partial_\nu A_\rho - \frac{2\i}{3} A_\mu A_\nu A_\rho\right)\ \\
& + \int d^3 x ~\bigg((D_\mu \bar{\phi})(D^\mu \phi)+\sigma_B \Big( \bar{\phi}\phi+\frac{N_B}{4\pi}m_B^{\text{cri}} \Big) \bigg) \ .
\end{split}
\end{equation}
We restrict our attention here to the case when $m_B^{\text{cri}}>0$. In the critical boson theory, the single trace scalar operator $J_0^B$ is simply the Lagrange multiplier field $\sigma_B$. The first term in the denominator of \eqref{G0Bcov} is finite (the thermal mass $c_B$ is assumed to be finite), whereas the second term is proportional to $b_4$, which grows without bound in the critical limit. We extract a finite result by rescaling the $J_0^B$ operator with $b_4$ \footnote{The rescaling is natural because, in the corresponding CFT, the scaling dimension of the $J_0^B$ operator in the regular bosonic theory is $1$ at leading order in $N_B$, whereas in the critical boson theory the scaling dimension of the corresponding primary operator is $2$.} and define a new operator
\begin{equation}\label{scaledJ0B}
\tilde{J}_0^B=\frac{b_4}{4\pi\lambda_B} \ J_0^B \ .
\end{equation}
Here, we have chosen this particular normalization to exactly match its two-point function with the dual result in the regular fermionic theory. We denote the two-point function of the $\tilde{J}_0^B$ operator by $\mc{\tilde{G}}_0^B$, up to the momentum conserving delta function, so that in the critical boson theory,
\begin{equation}\label{G0B(q)finalb4infty}
\mc{\tilde{G}}^B_{0}(q)= \lim_{b_4\rightarrow \infty} \bigg(\frac{b_4}{4\pi\lambda_B}\bigg)^2 \mc{G}^B_{0}(q) \ .
\end{equation}
Taking the critical limit, by sending $b_4$ and $m_B^2$ to infinity while keeping their ratio fixed, we find the two-point function of the operator $\tilde{J}_0^B$ to be given by (we drop the contact term $\frac{N_Bb_4}{(4\pi\lambda_B)^2}$)
\begin{equation}\label{G0Bcri}
\mc{\tilde{G}}^B_{0}(q)= -\frac{N_B |q|}{4\pi\lambda_B} \ \cot \Big(2\lambda_B |q| \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q^2+4w^2} \Big)
\end{equation}
where, the thermal mass $c_B$ appearing in \eqref{G0Bcri} has to be computed from \eqref{thermalcB} in the critical limit which amounts to solving the gap equation
$\xi_{B}(c_B)=m_{B}^{\text{cri}}$ for a fixed $m_B^{\text{cri}}>0$.
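To see how \eqref{G0Bcri} arises, write $A_B\equiv\int_{c_B}^{\infty}\frac{dw \ \chi_B(w)}{q^2+4w^2}$ and assume that $-\tilde{b}_4\to b_4$ at leading order in the critical limit (an assumption consistent with \eqref{G0BT0mB0b4n0} and with the contact term quoted above); then the expansion of \eqref{G0B(q)finalb4infty} in powers of $1/b_4$ reads
\begin{equation*}
\bigg(\frac{b_4}{4\pi\lambda_B}\bigg)^2 \frac{N_B}{4\pi \lambda_{B} |q| \cot \big(2\lambda_B |q| A_B\big)+b_4} = \frac{N_B b_4}{(4\pi\lambda_B)^2} - \frac{N_B |q|}{4\pi\lambda_B} \ \cot \big(2\lambda_B |q| A_B\big) + \mc{O}(b_4^{-1}) \ ,
\end{equation*}
so dropping the divergent contact term $\frac{N_Bb_4}{(4\pi\lambda_B)^2}$ reproduces \eqref{G0Bcri}.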
\paragraph*{Zero temperature and zero critical mass : }
As already mentioned above, in the zero temperature limit the holonomy becomes trivial, and it is easy to see that $\chi_B(w)=1$ and $\xi_B(w)=w$. Thus, we can explicitly perform the integral appearing in the argument of \eqref{G0Bcri}. Moreover, as the critical mass is zero, $m_B^{\text{cri}}=0$, the pole mass $c_B$ vanishes. In this case, \eqref{G0Bcri} reduces to
\begin{equation}\label{tlG0BT0m0}
\mc{\tilde{G}}^B_{0}(q)= - \frac{N_B |q| }{4\pi\lambda_B} \ \cot\Big( \frac{\pi \lambda_B}{2} \Big)
\end{equation}
This result matches the existing result of \cite{Aharony:2012nh} in this limiting case.
\paragraph*{Zero temperature and nonzero critical mass : }
This is similar to the above case, with the only difference being that the pole mass is now nonzero and is given by $c_B=m_B^{\text{cri}}$. So, from \eqref{G0Bcri} it follows that the final result in this case is
\begin{equation}\label{tildeG0B(q)thefinalsimplifiedfurther}
\mc{\tilde{G}}^B_{0}(q)= - \frac{N_B |q| }{4\pi\lambda_B} \ \cot\Big[ \lambda_B \tan^{-1}\Big(\frac{|q|}{2c_B} \Big)\Big]
\end{equation}
\paragraph*{Nonzero temperature and zero critical mass : } Taking into account the effect of holonomy, in this case, $\chi_B(w)$ is not simply unity but is given by \eqref{chiB}. The result \eqref{G0Bcri} is valid for arbitrary holonomy distribution. In the specific case of table-top holonomy, \eqref{chiB} reduces to \eqref{chiBtabletop}. So, in this case, the result is \eqref{G0Bcri} where, $\chi_{B}$ is now given by \eqref{chiBtabletop} and the thermal mass $c_B$ is determined from the gap equation $\xi_{B}(c_B)=0$, where, the function $\xi_B(c_B)$ is now given by \eqref{xiBtt}.
\subsubsection{Two-point correlator of spin 1 current : }\label{gis1b}
In \eqref{G1B-+(q)fin}, we have given the result for the two-point function $\langle J_{-}^BJ_{+}^B\rangle$ of the $U(1)$ current \eqref{Jmu(x)}. However, as discussed in \cite{Gur-Ari:2016xff} (see e.g. equation 80 of \cite{Gur-Ari:2016xff}), the gauge-invariant correlator of the $U(1)$ current \eqref{Jmu(x)} is given by
\begin{equation}\label{correlatorU1}
\langle J_{\mu}^BJ_{\nu}^B \rangle - 2\delta_{\mu\nu} \langle \bar{\phi}\phi\rangle
\end{equation}
There is an extra $-2\langle \bar{\phi}\phi\rangle $ term which contributes to the correlator of the $U(1)$ current. The contribution corresponding to this part (apart from the overall momentum conserving delta function) is given by
\begin{equation}\label{phibarphicont}
\begin{split}
& -2N_B\int \frac{\mathcal{D}_B^3 \l }{(2\pi)^3} \ \frac{1}{\l^2+c_B^2} = \frac{N_B \xi_B(c_B)}{2\pi}
\end{split}
\end{equation}
where, to obtain the RHS of \eqref{phibarphicont}, we have used the integration result \eqref{gBintegration}. We denote the correlator \eqref{correlatorU1} of the $U(1)$ current (apart from the overall delta function) by $\mc{\tilde{G}}_{\mu\nu}^B$, and so we find that
\begin{equation}\label{tlG1Bdef}
\mc{\tilde{G}}_{\mu\nu}^B(p) = \mc{{G}}_{\mu\nu}^B(p) + \delta_{\mu\nu} \ \frac{N_B \xi_B(c_B)}{2\pi}
\end{equation}
where, $\mc{{G}}_{\mu\nu}^B$ is defined by \eqref{G1Bmunudefn}.
The final result for the correlator $\mc{\tilde{G}}_{\mu\nu}^B$ of the $U(1)$ current (its $(-+)$ component) is \footnote{Following footnotes \ref{fncovs0f} and \ref{fncovs1f}, a possible `covariantized' form can be written formally as $\mc{\tilde{G}}_{\mu\nu}^B(q)=\mc{\tilde{G}}_{\mu\nu}^{B,\text{even}}(q)+\mc{\tilde{G}}_{\mu\nu}^{B,\text{odd}}(q)$, where,
\begin{equation}\label{GBevencov}
\mc{\tilde{G}}_{\mu \nu}^{B, \text{even}}(q) = \Big[ \frac{N_B}{16\pi \lambda_B}\Big(1+\frac{4c_B^2}{q^2}\Big) \sin\big(4\lambda_B|q|A_B\big) -\frac{N_B\xi_{B}(c_B)}{4\pi|q|} \Big] \frac{q_\mu q_\nu-\delta_{\mu\nu}q^2}{|q|} \ ,
\end{equation}
and
\begin{equation}\label{GBoddcov}
\mc{\tilde{G}}_{\mu \nu}^{B, \text{odd}}(q) =\frac{N_B}{8\pi \lambda_B}\Big(1+\frac{4c_B^2}{q^2}\Big)\sin^2\big(2\lambda_B|q|A_B\big) \epsilon_{\mu\nu\rho}q^{\rho} \ ,
\end{equation}
where, in the above two expressions, $A_B=\int_{c_B}^{\infty}\frac{dw \ \chi_B(w)}{q^2+4w^2}$. It would be nice to have a direct independent check if this is indeed the correct form of $\mc{\tilde{G}}_{\mu\nu}^B(q)$.
\label{fncovs1b}}
\begin{equation}\label{tlG1Ban}
\begin{split}
\mc{\tilde{G}}^{B}_{-+}(q)
& = \frac{\i N_Bq_3 }{16\pi \lambda_B } \ \bigg(1+\frac{4c_B^2}{q_3^2}\bigg) \bigg[ \exp \bigg( 4\i \lambda_B q_3 \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q_3^2+4w^2}\bigg) -1\bigg] + \frac{N_B \xi_B(c_B)}{ 4 \pi}
\end{split}
\end{equation}
It is worth mentioning at this point that, unlike the spin zero case, there is no explicit appearance of the quartic and sextic coupling parameters $b_4$ and $b_6$ in the result \eqref{tlG1Ban}, except implicitly through the thermal mass $c_B$ given by \eqref{thermalcB}. So, this result carries over unchanged to the critical boson theory, except that there the gap equation for the thermal mass $c_B$ is given by $\xi_B(c_B)=m_B^{\text{cri}}$. Below, we analyze various limiting cases of the result \eqref{tlG1Ban}.
\subsubsection*{Zero temperature and zero mass}
In the zero temperature and massless limit, $m_B=0$, the pole mass $c_B$ vanishes. So, in this limit, we have
\begin{equation}\label{tlG1BT0m0}
\begin{split}
\mc{\tilde{G}}_{-+}^{B}(q) = \frac{\i N_B q_3}{16 \pi \lambda_B} \bigg[e^{\pi \i \lambda_{B} \text{sgn}(q_3) }-1 \bigg]
\end{split}
\end{equation}
This exactly matches with the zero temperature results at zero bare mass reported in \cite{Aharony:2012nh} (see equation 44 of \cite{Aharony:2012nh}) \footnote{As in the case of fermions, this result can be separated into parity even and parity odd parts as discussed in \cite{Aharony:2012nh}. The covariantized form of the parity even part in this case is given by $\frac{N_B}{16}\frac{\sin(\pi \lambda_B)}{\pi\lambda_B}\frac{q_\mu q_\nu-\delta_{\mu\nu}q^2}{|q|}$.}.
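The exponent in \eqref{tlG1BT0m0} follows from the elementary integral (with $\chi_B(w)=1$ and $c_B=0$)
\begin{equation*}
4\i \lambda_B q_3 \int_{0}^{\infty} \frac{dw}{q_3^2+4w^2} = 4\i \lambda_B q_3 \ \frac{\pi}{4|q_3|} = \pi \i \lambda_B \, \text{sgn}(q_3) \ ,
\end{equation*}
while the prefactor $\big(1+\frac{4c_B^2}{q_3^2}\big)\to 1$ and the term $\frac{N_B\xi_B(c_B)}{4\pi}$ vanishes since $\xi_B(c_B)=c_B=0$ at zero temperature.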
\subsubsection*{Zero temperature and nonzero mass}
At zero temperature, the contribution of holonomy vanishes which implies $\chi_B(w)=1$ and $\xi_B(z)=z$. The integral appearing in \eqref{tlG1Ban} can be performed explicitly and the result is
\begin{equation}\label{G1B-+(q)T0mBneq0final}
\begin{split}
\mc{\tilde{G}}^{B}_{-+}(q)
& =\frac{\i N_Bq_3 }{16\pi \lambda_B } \ \bigg(1+\frac{4c_B^2}{q_3^2}\bigg) \bigg[
\exp \bigg(2\i \lambda_{B} \tan^{-1} \frac{q_3}{2c_B}\bigg)-1\bigg] - \frac{N_B c_B}{ 4 \pi}
\end{split}
\end{equation}
This exactly agrees with the results of \cite{amiyata1}.
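The integral in \eqref{tlG1Ban} evaluates at zero temperature (where $\chi_B(w)=1$) to
\begin{equation*}
4\i \lambda_B q_3 \int_{c_B}^{\infty} \frac{dw}{q_3^2+4w^2} = 4\i \lambda_B q_3 \ \frac{1}{2|q_3|}\tan^{-1}\Big(\frac{|q_3|}{2c_B}\Big) = 2\i \lambda_{B} \tan^{-1} \frac{q_3}{2c_B} \ ,
\end{equation*}
where in the last step we used the oddness of $\tan^{-1}$; this is the exponent appearing in \eqref{G1B-+(q)T0mBneq0final}.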
\subsubsection*{Nonzero temperature and zero mass}
At nonzero temperature the holonomy contribution becomes non-trivial. The result \eqref{tlG1Ban} is valid for an arbitrary holonomy distribution function. In the specific case of the table-top holonomy $\rho_B(\alpha)=\frac{\Theta(\pi|\lambda_B|-|\alpha|)}{2\pi |\lambda_B|}$, the final result for $\tilde{\mc{G}}^{B}_{-+}$ is given by \eqref{tlG1Ban}, where $\chi_B(w)$ is now given by \eqref{chiBtabletop} and the thermal mass $c_B$ is determined from \eqref{thermalcB}, where $\xi_B(c_B)$ is now given by \eqref{xiBtt}.
\section{Check of duality} \label{dualitycheck}
As already mentioned, one of our goals in this paper is to check the bose-fermi duality at the level of correlation functions. In this section, we explicitly check the duality between the results that we have obtained in this paper in the fermionic and in the bosonic theory. We check the dualities between the general results obtained for the massive Chern-Simons matter theories at finite temperature with an arbitrary holonomy distribution function. As mentioned in the introduction, the massive regular fermionic theory is conjecturally dual to the massive critical bosonic theory.
Below we list the parameter map required to show explicitly that the results are dual to each other.
\subsection{Duality map : }\label{dmap}
The large $N$ 't Hooft couplings $\lambda_F=\frac{N_F}{\kappa_F}$, $\lambda_B=\frac{N_B}{\kappa_B}$, the holonomies $\rho(\alpha)$ and the exact thermal masses $c_F/c_B$, under duality, are mapped via
\begin{equation}\label{dualitymap}
\begin{split}
\lambda_F & = \lambda_B-\text{sgn}(\lambda_B) \ , \ \kappa_F=-\kappa_B \ , \ c_F=c_B , \\
& \lambda_F \rho_{F}(\pi -\alpha ) = \lambda_B \rho_{B}(\alpha) - \frac{\text{sgn}{(\lambda_B)}}{2\pi } \ .
\end{split}
\end{equation}
In the present case $c_F$ is the thermal mass of the regular fermion theory, determined by the gap equation \eqref{masseqnforcF}. The gap equation of the critical bosonic scalar theory is $\xi_B(c_B)=m_B^{\text{cri}}$ \footnote{In the critical limit, i.e., in the limit $b_4\to \infty, \ m_B^2 \to \infty $ with $\frac{4\pi m_B^2}{b_4}=m_B^{\text{cri}}$ and $b_6$ fixed, it follows from the gap equation \eqref{thermalcB} that $m_B^{\text{cri}}=\xi_B(c_B)$.}. The map between the UV parameters of the two theories in this case is given by $m_F=-\lambda_Bm_B^{\text{cri}}$. It follows from the definitions of $\chi_B(z)$ and $\chi_F(z)$, as given in \eqref{chiB} and \eqref{chiF}, that \footnote{In the zero temperature limit ($\beta \rightarrow \infty $), this reduces to the familiar expression $\lambda_F = \lambda_B-\text{sgn}(\lambda_B)$.}
\begin{equation}\label{chimap}
\lambda_F \chi_{F}(z) = \lambda_B \chi_{B}(z) - \text{sgn}{(\lambda_B)}
\end{equation}
Using the definition $\xi(z)=\int^{z} \chi(w) dw$, we can rewrite the above equation in terms of the duality map of $\xi(z)$ as below
\begin{equation}\label{ximap}
\lambda_F \xi_{F}(z) = \lambda_B \xi_{B}(z) - \text{sgn}{(\lambda_B)} z
\end{equation}
From the fermionic mass gap equation \eqref{masseqnforcF}, we can write the fermionic mass $m_F$ in terms of the bosonic variables as
\begin{equation}
m_F = \big(1-\eta\big)\text{sgn}(\lambda_B)c_B -\lambda_{B}\xi_B(c_B)
\end{equation}
where, we have used the definition $\eta = \text{sgn}(h_F\lambda_F )$ \footnote{Also we use the fact that $\text{sgn}(\lambda_F)=-\text{sgn}(\lambda_B)$.}. The parameter $\eta$ can have two possible values \cite{Choudhury:2018iwf}. In the case of $\eta=+1$, the dual theory on the bosonic side is the scalar theory in the unhiggsed phase, in which case we get
\begin{equation}
m_F = -\lambda_{B}\xi_B(c_B)
\end{equation}
However, we will not always write the maps between the bare parameters; instead we use the map between the thermal masses, i.e., $c_F=c_B$. As we have considered the bosonic scalar theory \eqref{RBlag} in this paper, we will use the choice $\eta=+1$ to compare with the results in the bosonic theory. On the other hand, we use $\eta=-1$ to predict the corresponding results for the critical boson theory in the Higgsed phase.
\subsection{For spin 0 : }
We dualize the result for the two-point correlator of the spin zero operator $J_0^F$, given by \eqref{G0Fcov}, in terms of the bosonic variables.
Using duality map \eqref{dualitymap} and \eqref{chimap}, we rewrite \eqref{G0Fcov} in terms of bosonic variables as
\begin{equation}\label{dG0Fcov}
\begin{split}
\mc{G}_0^F(q)
& =
-\frac{N_B|q|}{4\pi \lambda_{B} } \ \cot \bigg[ 2\lambda_B|q| \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q^2+4w^2} -(1-\eta)\text{sgn}(\lambda_B)\tan^{-1}\frac{|q|}{2c_B} \bigg] \\ & +\frac{N_B\xi_{B}(c_B)}{2\pi} - (1-\eta) \frac{N_B c_B}{2\pi |\lambda_{B}| } \ .
\end{split}
\end{equation}
Depending upon the two possible values $\eta=\pm 1$, there are two cases. The regular fermionic theory is dual to the critical boson theory in the unHiggsed phase when $\eta=+1$. On the other hand, in the case of $\eta=-1$, the regular fermionic theory is dual to the critical bosonic theory in the Higgsed phase (this is the case when $m_B^{\text{cri}}<0$; for details see \cite{Choudhury:2018iwf}).
\subsubsection{Duality check in the unhiggsed phase of critical bosons: }
As mentioned above, in the unhiggsed phase $\eta =+1$. This implies that the fermionic result \eqref{dG0Fcov} reduces to
\begin{equation}\label{uHG0Fcov}
\begin{split}
\mc{G}_0^{uH}(q)
& =
-\frac{N_B|q|}{4\pi \lambda_{B} } \ \cot \bigg[ 2\lambda_B|q| \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q^2+4w^2} \bigg] +\frac{N_B\xi_{B}(c_B)}{2\pi} \ .
\end{split}
\end{equation}
This matches the result for the two-point correlator of the spin zero operator in the critical boson theory given by \eqref{G0Bcri}, up to contact terms. Thus, we see that under duality, $J_{0}^F$ maps to $J_0^B$ (more precisely to $\tilde{J}_{0}^B$ given in \eqref{scaledJ0B}) \footnote{As mentioned earlier, the single trace spin-zero scalar operator $J_0^B$ in the case of the critical boson theory \eqref{CBlag} is simply the Lagrange multiplier field $\sigma_B$ appearing in the action \eqref{CBlag} \cite{Aharony:2012nh, GurAri:2012is}.}.
\subsubsection{Prediction for the higgsed phase of critical bosons: }
In the higgsed phase, on the other hand, $\eta =-1$. So, the predicted result for the two-point correlator of the single trace, spin zero `scalar' current $J_0^H$ in the higgsed phase of critical bosons is given by \footnote{The single trace, spin-zero scalar operator in the Higgsed phase of critical bosons, which is dual to the corresponding spin zero operator $J_{0}^F=\bar{\psi}\psi$ in the massive regular fermionic theory, is given by $J_0^H=\overline{W}_\mu W^\mu +Z_\mu Z^\mu$ \cite{amiyata1}.}
\begin{equation}\label{HG0Fcov}
\begin{split}
\mc{G}_0^{H}(q)
& =
-\frac{N_B|q|}{4\pi \lambda_{B} } \ \cot \bigg[ 2\lambda_B|q| \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q^2+4w^2} -2\text{sgn}(\lambda_B)\tan^{-1}\frac{|q|}{2c_B} \bigg] \\ & +\frac{N_B\xi_{B}(c_B)}{2\pi} - \frac{N_B c_B}{\pi |\lambda_{B}| } \ .
\end{split}
\end{equation}
It is an interesting exercise to compute this (exactly in large $N$) directly in the higgsed phase of bosons, which we leave for future work.
\subsection{For spin 1 : }
The final result that we have obtained for the spin 1 current correlator in the fermionic theory is given by \eqref{G1Fan}.
Under the duality maps \eqref{dualitymap}, \eqref{chimap} and \eqref{ximap}, $\mc{G}_{-+}^{F}(q)$ can be rewritten as
\begin{equation}\label{dG1Fan}
\begin{split}
\mc{G}_{-+}^{F}(q)
&= - \frac{\i N_B q_3 }{16\pi \lambda_B} \bigg(1-\eta \frac{2\i \text{sgn}(\lambda_B)c_B}{q_3 }\bigg)^2\bigg[ - \bigg(\frac{ q_3+ 2 \i \text{sgn}(\lambda_B)c_B}{q_3- 2 \i \text{sgn}(\lambda_B)c_B}\bigg) \exp \bigg( 4\i \lambda_B q_3 \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q_3^2+4w^2}\bigg) -1 \bigg] \\
& +\frac{N_B\xi_B(c_B)}{4 \pi } -\frac{N_Bc_B}{4 \pi |\lambda_B|} \ .
\end{split}
\end{equation}
Depending upon the two possible values of $\eta$, we consider two phases separately below.
\subsubsection{Duality check in the unhiggsed phase of critical bosons : }
As mentioned before, in the unhiggsed phase $\eta =+1$. It then follows that \eqref{dG1Fan} reduces to the following result in the unhiggsed phase of bosons,
\begin{equation}\label{uHG1Fan}
\begin{split}
\mc{G}_{-+}^{uH}(q)
&= \frac{\i N_B q_3 }{16\pi \lambda_B} \bigg(1+ \frac{4c_B^2}{q_3^2}\bigg)\bigg[ \exp \bigg( 4\i \lambda_B q_3 \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q_3^2+4w^2}\bigg) -1 \bigg] +\frac{N_B\xi_B(c_B)}{4 \pi } +\frac{\i N_B q_3 }{8\pi \lambda_B} \ .
\end{split}
\end{equation}
This exactly matches the bosonic result \eqref{tlG1Ban} up to a contact term $\frac{\i N_B q_3}{8\pi \lambda_B }$. Thus, we see that $J_{\mu}^F$ maps to $J_{\mu}^B$ under duality.
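For the reader's convenience, the algebra behind this match is the following: setting $\eta=+1$ in \eqref{dG1Fan} and writing $s\equiv\text{sgn}(\lambda_B)$, one uses the identity
\begin{equation*}
\Big(1-\frac{2\i s c_B}{q_3}\Big)^2 \ \frac{q_3+2\i s c_B}{q_3-2\i s c_B} = \frac{(q_3-2\i s c_B)(q_3+2\i s c_B)}{q_3^2} = 1+\frac{4c_B^2}{q_3^2} \ ,
\end{equation*}
while, after expanding the square, the cross term $\frac{\i N_B q_3}{16\pi\lambda_B}\Big(-\frac{4\i s c_B}{q_3}\Big)=\frac{N_B c_B}{4\pi|\lambda_B|}$ cancels the mass term $-\frac{N_Bc_B}{4\pi|\lambda_B|}$ in \eqref{dG1Fan}, leaving precisely \eqref{uHG1Fan}.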
\subsubsection{Prediction for the higgsed phase of critical bosons : }
In the higgsed phase ($\eta = -1 $), the predicted result for the two-point correlator of the single trace spin one current is \footnote{A possible `covariant' form of $\langle J_{\mu}^H(q')J_{\nu}^H(q)\rangle=\mc{G}_{\mu\nu}^H\ (2\pi)^3\ \delta^{(3)}(q'+q)$ is obtained from \eqref{GFevencov} and \eqref{GFoddcov} by substituting $\text{sgn}(h_F)=-\text{sgn}(\lambda_F)$ (which is equal to $\text{sgn}(\lambda_B)$) and applying the duality map given in subsection \ref{dmap}. It would be nice to have a direct independent check of whether this is indeed the case.\label{fncovs1h}}
\begin{equation}\label{HG1Fan}
\begin{split}
\mc{G}_{-+}^{H}(q)
&= \frac{\i N_B q_3 }{16\pi \lambda_B} \bigg(1 + \frac{2\i \text{sgn}(\lambda_B)c_B}{q_3 }\bigg)^2\bigg[ \bigg(\frac{ q_3+ 2 \i \text{sgn}(\lambda_B)c_B}{q_3- 2 \i \text{sgn}(\lambda_B)c_B}\bigg) \exp \bigg( 4\i \lambda_B q_3 \int_{c_B}^{\infty} \frac{dw \ \chi_B(w)}{q_3^2+4w^2}\bigg) +1 \bigg] \\
& +\frac{N_B\xi_B(c_B)}{4 \pi } -\frac{N_Bc_B}{4 \pi |\lambda_B|} \ .
\end{split}
\end{equation}
We leave for future work the matching of this prediction with the exact large $N$ computation of $\langle J_{\mu}^{H}J_{\nu}^{H}\rangle $ directly in the Higgsed phase of critical bosons \footnote{In the Higgsed phase of critical bosons in the unitary gauge \cite{Choudhury:2018iwf}, the $U(1)$ current $J_{\mu}^H$ is proportional to the $Z$ bosons, i.e., $J_{\mu}^H\propto Z_{\mu}$ \cite{amiyata1}. So, in this case, $\mc{G}_{\mu\nu}^H\propto \langle Z_{\mu}Z_{\nu}\rangle$.}.
\section{Discussions}\label{discussion}
In this work we have obtained several two-point momentum space correlators of gauge invariant, single trace operators in Chern-Simons theories coupled to massive fundamental matter in the large-$N$ 't Hooft limit, at finite temperature and with an arbitrary distribution for the gauge holonomy. One of the challenging aspects of our work was to solve the corresponding Schwinger-Dyson equations for the correlators, because we chose to work with an arbitrary holonomy distribution and massive matter at finite temperature. However, we have been able to overcome this technical difficulty and have solved these equations analytically by explicitly performing the loop momentum integrals. The results for the two-point correlators are indeed very simple in form. We have seen that, in different limiting cases, the results obtained in this work agree with existing results \cite{Aharony:2012nh, GurAri:2012is, Geracie:2015drf, Gur-Ari:2016xff, Ghosh:2019sqf}. We have also explicitly checked the bose-fermi duality between the results of the regular fermionic theory and the critical bosonic scalar theory. We now discuss the applications of the results obtained and further possible generalizations of the analysis done in this paper.
We computed the two-point correlators of the single trace, spin one conserved current both in the massive fermionic theory and in the massive bosonic scalar theory. We used the duality map to predict the two-point function of the spin one current in the higgsed phase of bosons. The analysis done in this paper is in the Euclidean signature and with the external momentum $q_{\mu}\equiv (0,0,q_3)$. The results in the Lorentzian signature can be obtained by analytic continuation, Wick rotating back to Minkowski space via $q_3\to \i \omega$. In \cite{Geracie:2015drf}, it was highlighted that the two-point correlator of the $U(1)$ current can be used to calculate the conductivity tensor by applying the Kubo formula; their analysis was done at zero temperature in the fermionic theory. The study of conductivity in Chern-Simons fermionic matter theories was continued in \cite{Gur-Ari:2016xff}, which considered the massless regular fermionic matter coupled to Chern-Simons gauge fields at finite temperature with the table-top holonomy distribution; moreover, the results in \cite{Gur-Ari:2016xff} were given as integral expressions. Our results for the two-point correlators of the spin-one current in the massive fermionic theory at finite temperature generalize \cite{Geracie:2015drf, Gur-Ari:2016xff}, and the final result \eqref{G1Fan} takes a much simpler form. Following \cite{Geracie:2015drf, Gur-Ari:2016xff}, it can be used to study the conductivity of the massive regular fermionic theory at finite temperature by applying the Kubo formula.
In section \ref{spins2pt}, we have commented on the computations of correlators of arbitrary spin $s$ currents and we have seen a general structure of the results; following the same procedure used in the case of spin-zero and spin-one operators, one can get, e.g., the stress tensor two-point correlation function which can be used to study the viscosity of these theories \cite{Geracie:2015drf}.
As already discussed above, the massive regular fermionic theory can be dual either to the critical bosonic scalar theory in the unHiggsed phase or to the Higgsed phase of critical bosons, depending upon the sign $\text{sgn}(h_F\lambda_F)$. We have explicitly checked the duality of the regular fermionic theory with the critical bosonic scalar theory in the unHiggsed phase, which is the case $\text{sgn}(h_F\lambda_F)=+1$. In the case $\text{sgn}(h_F\lambda_F)=-1$, we have given predictions for the two-point correlators of the corresponding current operators in the Higgsed phase of critical bosons. One may compute these correlators to all orders in the 't Hooft coupling in the large-$N$ limit directly in the Higgsed phase of critical bosons coupled to Chern-Simons theory \cite{Choudhury:2018iwf, Dey:2018ykx} to explicitly check the duality in the higgsed phase. The technicalities of performing exact computations in the higgsed phase of bosons are rather involved, and we leave this exercise for future work. It is known that the spin-one current of the massive fermionic theory is dual to the $Z$ bosons in the Higgsed phase \cite{amiyata1}. So, the dualized result for $\langle J_{\mu}^FJ_{\nu}^F\rangle $ in the higgsed phase is proportional to the $Z$ boson propagator $\langle Z_{\mu}Z_{\nu} \rangle$. One can try to explicitly compute the exact $Z$ boson propagator directly in the Higgsed phase of the bosonic theory \cite{Choudhury:2018iwf}, at least in the large $N$ limit, and match with the results predicted here.
The pair of theories that we have considered in this paper has a global $U(1)$ symmetry. One can consider the analysis presented here in the presence of a chemical potential $\mu$, by turning on a constant background gauge field $\mc{A}_{\nu}=\i \mu \delta_{\nu 3} $. At least when $|\mu|<c_B$ or $|\mu|<c_F$, the analysis appears to go through in exactly the same way, and the final result in the presence of the chemical potential is obtained by making the substitution $\alpha\to \alpha -\i \beta\mu$ in the integrand (except in the holonomy distribution $\rho(\alpha)$) of the holonomy integrals appearing in the zero chemical potential results of this paper. It would be nice to check explicitly whether the structure of the results remains the same even when the chemical potential is larger than the thermal mass $c_B$ or $c_F$ \cite{Minwalla:2020ysu}. It appears that, even when $|\mu|>c_B$ or $|\mu|>c_F$, the structure of the correlators obtained in this paper will remain the same; the only modifications will be through the functions $\chi_B(w)$ and $\chi_F(w)$, due to the contour deformations of the holonomy integrals away from the unit circle, as prescribed in detail in \cite{Minwalla:2020ysu}. We leave a careful analysis of this as a future exercise.
As discussed before, as part of the analysis in the bosonic scalar theory, following \cite{Jain:2014nza}, we have generalized the computations of four-point functions of fundamental scalars to finite temperature. As in \cite{Jain:2014nza}, this might be useful to study the scattering of fundamental scalars at finite temperature.
In footnotes \ref{fncovs0f}, \ref{fncovs1f}, \ref{fncovs1b} and \ref{fncovs1h}, we have provided a possible `covariantized' form of the two-point correlators of the spin one currents of the fermionic and bosonic theories, valid for arbitrary values of the external momenta $q$. It would be nice to have an independent check of these by direct computations without assuming $q_{\pm}=0$.
One can also extend the computations of two-point current correlators to higher-point correlators of single trace operators. We have already computed in this paper various exact insertion vertices, and one can use these to explicitly compute the higher-point current correlators by generalizing the existing results (e.g., three-point correlators \cite{Aharony:2012nh, GurAri:2012is}, four-point correlators \cite{Yacoby:2018yvy, Kalloor:2019xjb}) to the massive matter theories at finite temperature. Presumably, one may also use the results for the two-point correlators of single trace operators obtained in this paper to compute thermal one-point functions using bootstrap approaches. In this paper we have not paid much attention to the contact terms appearing in the results of the two-point correlators; it would be interesting to understand whether they have any physical significance. It would also be interesting to extend the analysis of this paper beyond the large-$N$ limit, taking into account the contributions from non-planar diagrams. We leave all these problems for future work.
\subsection*{Acknowledgements}
The author would like to thank S. Chakraborty, S. Jain and N. Prabhakar for helpful conversations. The author would like to thank S. Minwalla for valuable discussions and comments.
The author would also like to acknowledge support from the Infosys Endowment for the study of the Quantum Structure of Spacetime.
\section{Introduction}
Quantum computers are believed to be capable of performing several tasks, such as factoring and Hamiltonian simulation, in exponentially shorter computational times than classical computers~\cite{shor1999polynomial, harrow2009quantum}. However, quantum systems generally interact with their environments, which leads to physical errors that may destroy their quantum advantages. Since the physical error rates of quantum computers are still much higher than those of classical computers, it is vital to suppress these errors. As a solution, fault-tolerant quantum computing~(FTQC) using quantum error-correcting codes has been studied~\cite{nielsen2002quantum,lidar2013quantum,fowler2012surface,horsman2012surface,fowler2018low}. Long-term FTQC allows executing conventional quantum algorithms such as Hamiltonian simulation algorithms~\cite{lloyd1996universal}. According to the current state-of-the-art resource estimations~\cite{kivlichan2020improved,babbush2018encoding}, a logical quantum operation count on the order of $10^{10}$ will be needed to observe clear quantum advantages based on computational complexity theory.
Towards the realization of the long-term FTQC, we will experience several intermediate regimes as shown in Fig.\,\ref{fig:regime_overview} because high-level encoding is not allowed due to restrictions of quantum resources such as qubit and magic-state count\textcolor{black}{~\cite{fowler2012surface,fowler2018low}}. Since quantum error correction~(QEC) requires massive classical computation for repetitive error estimations, the available code distance would also be strongly limited in the near future~\cite{holmes2020nisq+,ueno2021qecool,das2021lilliput}.
As quantum technologies mature, computational quantum supremacy\textcolor{black}{~\cite{arute2019quantum}} will be achieved in the logical space. We refer to the intermediate regime from the realization of logical quantum supremacy to the demonstration of long-term applications as the early-FTQC regime. The number of physical qubits will exceed one thousand in this regime, and we anticipate that more than about $10^4$ reliable logical operations on $10^2$ logical qubits will be available. Even at the beginning of the early-FTQC regime, we may observe a quantum speed-up with heuristic quantum algorithms, for example, with the variational quantum eigensolver~\cite{peruzzo2014variational,kandala2017hardware,mcclean2016theory}.
\begin{figure*}[t!]
\centering
\includegraphics[clip,width=2\columnwidth]{fig_overview_pic.pdf}
\caption{A schematic picture representing the transitional period towards the realization of the long-term FTQC. In the figure, the purple line indicates the hardware requirement for performing classically intractable tasks with a realistic time whereas the blue line corresponds to the requirement for demonstrating quantum advantages with conventional long-term quantum algorithms. To estimate these lines, we refer to the quantum supremacy experiments~\cite{arute2019quantum} and the existing state-of-the-art resource estimation~\cite{kivlichan2020improved,babbush2018encoding,note_blue_line}. The early-FTQC regime is defined as a region between these lines. In the main text, we assume that the number of error events during FTQC $N_{\rm e}$ is required to be smaller than $10^{-3}$, which is shown as the dotted black line. Our technique allows for FTQCs with the number of error events in the order of unity $N_{\rm e} \sim 1$, which is shown as a solid black line, to execute applications that originally require a much smaller error event count. For example, at the beginning and the end of the early-FTQC regime, our technique allows simulating applications (white and yellow circles with black rims) with the relaxed hardware requirement (white and yellow circles with red rims).}
\label{fig:regime_overview}
\end{figure*}
In this paper, to realize efficient and high-accuracy quantum computation in the early-FTQC era, we propose a novel framework of FTQC, where QEC and quantum error mitigation~(QEM) are combined on an equal footing.
While QEM has been considered to be an alternative error minimization technique for noisy intermediate-scale quantum~(NISQ) devices due to its low hardware overhead at the expense of the sampling cost, we show that, by integrating probabilistic error cancellation~\cite{temme2017error,endo2018practical} into the FTQC framework, we can mitigate all the dominant types of errors in the logical space. We also note that our scheme can efficiently mitigate Pauli errors by virtually updating the quantum states with a classical memory called the Pauli frame\textcolor{black}{~\cite{fowler2012surface}}.
In the conventional QEM formalism, the sampling cost of QEM increases exponentially with the number of physical error events~\cite{takagi2021fundamental,wang2021can}. Therefore, the sampling overhead of QEM becomes unrealistic in NISQ computing as the number of physical operations increases at a fixed error rate per quantum gate, and the number of error events that QEM can efficiently suppress is limited to the order of unity.
In our framework, the sampling cost of QEM increases exponentially with the number of {\it logical error events} in the encoded space. Note that we can tune the number of logical error events by adjusting several parameters such as the code distance, distillation levels, and precision of approximations for Solovay-Kitaev decomposition. Thus, it is highly likely that we can find regions where the QEM techniques are the most effective, i.e., the number of logical error events is the order of unity. Accordingly, we can relax the hardware requirement with constant sampling overheads. Even after the scalable FTQC is realized, taking QEM into account, we can optimize quantum computation by allocating computation resources at will to perform even more efficient quantum computing.
We need to overcome several fundamental difficulties to apply QEM in the logical space, because the costs and restrictions of logical operations and the dominant sources of errors differ from those in the NISQ formalism. We resolve these difficulties one by one; our solutions to the major problems are as follows.
In FTQC, logical Clifford operations and Pauli measurements can be applied efficiently, while non-Clifford operations are costly because they involve a number of $T$-gate injection, distillation, and teleportation procedures\textcolor{black}{~\cite{fowler2012surface,fowler2018low}}. These logical operations are affected by three types of logical errors: logical errors in each elementary gate operation due to restricted code distances, noise in non-Clifford logical gates deriving from an insufficient number of magic-state distillation rounds, and errors induced in the Solovay-Kitaev decomposition~\cite{kitaev1997quantum,dawson2005solovay}. We call the first two logical errors decoding errors and the last one approximation errors. We will discuss what types of errors are present when implementing logical operations, and provide a hierarchical way to mitigate noisy and costly operations with clean and less costly ones.
To detect and correct physical errors during computation, we store the estimated errors in the Pauli frame instead of physically applying recovery operations\textcolor{black}{~\cite{fowler2012surface}}. This means that actual physical states are almost never in the code space. We will provide concrete procedures for a universal set of logical operations incorporating QEM which are compatible with the Pauli frame. To apply probabilistic error cancellation, we need a good characterization of the noise model to construct QEM operations. We show that decoding errors can be efficiently characterized with gate set tomography~\cite{blume2013robust,greenbaum2015introduction} on the code space. Note that the approximation errors of the Solovay-Kitaev algorithm can be characterized efficiently on classical computers. Finally, while probabilistic error cancellation is a QEM technique to mitigate errors in the algorithms for calculating the expectation values, many FTQC algorithms are sampling algorithms using the phase estimation\textcolor{black}{~\cite{kitaev1995quantum,kivlichan2020improved,babbush2018encoding}}. We show that probabilistic error cancellation is compatible with the phase estimation algorithm. See Appendix.\,\ref{sec:apply_qem_to_bqp} for details.
We perform resource estimation of FTQC under realistic scenarios with and without QEM, and we show that our scheme can dramatically alleviate the required computational overheads in FTQC. We assume that the mean number of logical error events $N_{\rm e}$ is required to reach $N_{\rm e}=10^{-3}$, and that the sampling overhead of QEM is restricted to a reasonable level, i.e., at most $10^2$ times more samples for achieving a given accuracy. We expect that at least $10^4$ logical operations are required to demonstrate classically intractable applications. In this case, the required number of qubits with QEM is reduced to approximately one-fifth of the original qubit count. We also expect that at least $10^{10}$ logical operations are necessary to perform conventional long-term applications. The required number of qubits is reduced to $55\%$ in this regime.
From another perspective, our scheme can be used to increase the number of available logical operations when the available code distance is strongly restricted. The lifetime of current superconducting qubits is up to about $1~{\rm ms}$, and a cycle of error estimation during FTQC must be sufficiently faster than the lifetime, i.e., about $1~{\rm \mu s}$\textcolor{black}{~\cite{holmes2020nisq+,gidney2021factor}}. To cope with this strong restriction, efficient implementations of classical error-decoding architectures have been studied. According to the recent state-of-the-art proposals~\cite{holmes2020nisq+,ueno2021qecool,das2021lilliput}, the available code distance would be limited to about $11$ in the near future, even with simplified decoding algorithms. When the available code distance is limited to $11$, our scheme enables $10^3$ times more logical operations with the same hardware requirement.
Thus, our technique can clearly accelerate the realization of applications in the early- and long-term FTQC regimes. This improvement is illustrated by the red arrows in Fig.\,\ref{fig:regime_overview}. It is also worth noting that, to the best of our knowledge, these are the first examples in which the performance of useful quantum algorithms with clear quantum advantages is enhanced via QEM under realistic conditions, since QEM has so far been investigated mainly for near-term heuristic quantum algorithms relying on numerical optimization.
This paper is organized as follows. In Sec.\,\ref{sec:preliminary}, we review probabilistic error cancellation and the architecture of fault-tolerant quantum computing. In Sec.\,\ref{sec:method}, we describe how to evaluate decoding errors and approximation errors. Then we show our novel FTQC architecture with an analytical argument of the cost of QEM and explain the effect of model estimation errors.
In Sec.\,\ref{sec:result}, we numerically analyze the sampling cost of QEM for decoding errors and approximation errors and demonstrate that we can effectively increase the code distance and the number of $T$-gates via QEM even when there are finite estimation errors.
Finally, we conclude our paper with a discussion in Sec.\,\ref{sec:conclusion}.
\section{Preliminaries}
\label{sec:preliminary}
\subsection{Quantum error mitigation and probabilistic error cancellation}
\label{sec:pec}
Quantum processors are affected by a number of physical noise sources, which should be mitigated to obtain correct results. Here, for simplicity, we will assume that the gate errors are Markovian, i.e., the noise process $\mathcal{N}$ for a gate is totally independent of other gate errors. In this case, we have
\begin{align}
\rho_{\mathrm{out}}= \mathcal{N}_{N_G} \circ \mathcal{U}_{N_G} \circ \mathcal{N}_{N_G-1} \circ \mathcal{U}_{N_G-1} \cdots \mathcal{N}_{1}\circ \mathcal{U}_{1} (\rho_{\mathrm{in}}),
\label{noisy}
\end{align}
where $\rho_{\mathrm{out}}$ and $\rho_{\mathrm{in}}$ are the output and input quantum states, $\mathcal{U}_k$ and $\mathcal{N}_k$ denote the ideal and noisy part of the process of the $k$-th gate, and ${N_G}$ is the number of gates. To ensure correct computations, it is necessary to mitigate the effect of $\mathcal{N}_k,~(k=1,2,..., N_G)$ and obtain
\begin{align}
\rho_{\mathrm{out}}^{\mathrm{ideal}}= \mathcal{U}_{N_G} \circ \mathcal{U}_{N_G-1} \cdots \circ \mathcal{U}_{1} (\rho_{\mathrm{in}}).
\end{align}
Quantum error mitigation (QEM) has been proposed as a method for suppressing errors without encoding, and it is useful especially for NISQ devices with a restricted number of qubits~\cite{temme2017error,li2017efficient,endo2018practical}. Generally speaking, QEM methods recover not the ideal density matrix $\rho^{\mathrm{ideal}}_{\mathrm{out}}$ itself, but rather the ideal expectation value of an observable $\braket{\hat{M}}_{\mathrm{ideal}}=\mathrm{Tr}(\rho^{\mathrm{ideal}}_{\mathrm{out}} \hat{M})$
via classical post-processing. Note that QEM is not a scalable technique, because the number of circuit runs it requires grows exponentially with the number of error events in the quantum circuit~\cite{endo2018practical,temme2017error}.
Now let us explain the concept of probabilistic error cancellation with which we can eliminate a bias from the expectation value of the observables completely given the complete information on the noise model~\cite{temme2017error,endo2018practical}. (Later, we will use this method to suppress errors in FTQC.)
First, we identify the noise map $\mathcal{N}$ via either process or gate set tomography~\cite{blume2013robust,greenbaum2015introduction}, and calculate the inverse $\mathcal{N}^{-1}$. Then, by finding a set of processes $\{\mathcal{B}_i \}$ such that $\mathcal{N}^{-1}=\sum_i \eta_i \mathcal{B}_i$ where $\eta_i \in \mathbb{R}$ and $\sum_i \eta_i=1$, we have
\begin{equation}
\begin{aligned}
\mathcal{U}&= \mathcal{N}^{-1} \mathcal{N} \mathcal{U} \\
&=\sum_i \eta_i \mathcal{B}_i \mathcal{N} \mathcal{U}.
\label{Eq: inverse}
\end{aligned}
\end{equation}
Note that arbitrary operations can be represented as linear combinations of tensor products of single-qubit Clifford operations and Pauli measurements~\cite{endo2018practical}. Here, we can rewrite Eq.\,(\ref{Eq: inverse}) as
\begin{align}
\mathcal{U}=\gamma_Q \sum_i q_i \mathrm{sgn}(\eta_i) \mathcal{B}_i \mathcal{N} \mathcal{U},
\label{Eq:quasimonte}
\end{align}
where $\gamma_Q=\sum_i |\eta_i|$, $q_i=\frac{|\eta_i|}{\gamma_Q}$, $\gamma_Q \geq 1$ and $\mathrm{sgn}(\eta_i)$ is a parity which takes $\pm 1$, corresponding to the operation $\mathcal{B}_i$. We refer to $\gamma_Q$ as the QEM cost because it is related to the sampling overhead.
Now let us suppose that we have measured an observable $\hat{M}$ and obtain
\begin{equation}
\begin{aligned}
\braket{\hat{M}}_{\mathcal{U}}= \gamma_Q \sum_i q_i \braket{\hat{\mu}_i^{\rm eff}}.
\end{aligned}
\end{equation}
Here, $\hat{\mu}_i^{\rm eff} =\mathrm{sgn} (\eta_i) \hat{m}_i$, and $\hat{m}_i$ is a measurement outcome for a process $\mathcal{B}_i \mathcal{N} \mathcal{U}$. We generate the process $\mathcal{B}_i$ with probability $q_i$ and multiply the corresponding parity with the measurement result, which is denoted as $\hat{\mu}^{\rm eff}$. Then, the expectation value of the random variable $\hat{\mu}^{\rm mit}= \gamma_Q \hat{\mu}^{\rm eff}$ approximates the error-free expectation value $\braket{\hat{M}}_{\mathcal{U}}$. Note that since $\mathrm{Var}[\hat{\mu}^{\rm mit}]=\gamma_Q^2 \mathrm{Var}[\hat{\mu}^{\rm eff}]$, and a measurement outcome without QEM, which we denote $\hat{\mu}^{\rm nmit}$, has a similar variance, the variance of the error-mitigated estimator is amplified by approximately $\Gamma_Q=\gamma_Q^2$. Therefore, we need $\Gamma_Q$ times more samples to achieve an accuracy similar to that obtained before applying QEM.
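As a concrete single-qubit illustration of this quasi-probability decomposition, the sketch below inverts a depolarizing channel in the Pauli-transfer-matrix representation and evaluates the cost $\gamma_Q$; the error rate $p=0.05$ is an arbitrary illustrative value, not a figure from the text.

```python
import numpy as np

# Pauli transfer matrices (PTMs) of the four single-qubit Pauli channels
# rho -> P rho P^dagger, in the (I, X, Y, Z) operator basis.
PTM = {
    "I": np.diag([1.0, 1.0, 1.0, 1.0]),
    "X": np.diag([1.0, 1.0, -1.0, -1.0]),
    "Y": np.diag([1.0, -1.0, 1.0, -1.0]),
    "Z": np.diag([1.0, -1.0, -1.0, 1.0]),
}

def depolarizing_ptm(p):
    """PTM of N(rho) = (1-p) rho + p/3 (X rho X + Y rho Y + Z rho Z)."""
    return (1 - p) * PTM["I"] + (p / 3) * (PTM["X"] + PTM["Y"] + PTM["Z"])

def pec_decomposition(p):
    """Quasi-probabilities eta_g with sum_g eta_g PTM[g] = N^{-1}."""
    s = 1.0 / (1.0 - 4 * p / 3)                  # inverse contraction factor
    eta = {"I": (3 * s + 1) / 4}
    eta.update({g: (1 - s) / 4 for g in "XYZ"})  # negative weights for p > 0
    gamma_Q = sum(abs(v) for v in eta.values())  # QEM cost gamma_Q >= 1
    return eta, gamma_Q

p = 0.05
eta, gamma_Q = pec_decomposition(p)
N_inv = sum(eta[g] * PTM[g] for g in "IXYZ")
assert np.allclose(N_inv @ depolarizing_ptm(p), np.eye(4))  # exact cancellation
print(round(gamma_Q, 4))  # -> 1.1071
```

Sampling from $q_g = |\eta_g|/\gamma_Q$ and multiplying outcomes by $\mathrm{sgn}(\eta_g)$, as described above, then reproduces the noiseless expectation value on average.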
In practice, we use probabilistic error cancellation for each gate in quantum circuits. The ideal process for the entire quantum circuit is described as $\prod_{k=1}^{N_G} \mathcal{U}_k$. Denoting $\mathcal{U}_k=\gamma_Q^{(k)} \sum_{i_k} q_{i_k} \mathrm{sgn} (\eta_{i_k}) \mathcal{B}_{i_k} \mathcal{N}_k \mathcal{U}_k$, we have
\begin{equation}
\begin{aligned}
&\prod_{k=1}^{N_G} \mathcal{U}_k = \prod_{k=1}^{N_G}\gamma_Q^{(k)} \sum_{i_1 i_2...i_{N_G}} \prod_{k=1}^{N_G}q_{i_k} \prod_{k=1}^{N_G}\mathrm{sgn} (\eta_{i_k}) \prod_{k=1}^{N_G} \mathcal{B}_{i_k} \mathcal{N}_k \mathcal{U}_k.
\label{Eq:quasiincircuit}
\end{aligned}
\end{equation}
From Eq.\,(\ref{Eq:quasiincircuit}), we can see that, for each gate, a process $\mathcal{B}_{i_k}$ is generated with probability $q_{i_k}$, and the product of parities $\prod_{k=1}^{N_G}\mathrm{sgn} (\eta_{i_k})$ is multiplied with the measurement results to obtain the outcome $\hat{\mu}^{\rm eff}$. This procedure is repeated, and the product of the mean of the outcomes $\braket{\hat{\mu} ^{\rm eff}}$ and $\gamma^{\rm tot}_Q=\prod_{k=1}^{N_G}\gamma_Q^{(k)}$ approximates the correct expectation value. Note that here $\gamma^{\rm tot}_Q$ is the QEM cost for the entire quantum circuit. Let us assume the cost for each gate is uniform and can be approximated as $\gamma_Q^{(k)} = \gamma_Q = 1+ a \varepsilon$, with $a$ and $\varepsilon$ being a positive constant and the effective error rate, respectively.
Now the QEM cost and sampling overhead can be approximated as $\gamma^{\rm tot}_Q \simeq e^{a \varepsilon N_G}= e^{(\gamma_Q-1) N_G}$ and $\Gamma^{\rm tot}_Q = (\gamma^{\rm tot}_Q)^2$, which increase exponentially with the mean number of error events in the quantum circuit $\varepsilon N_G$. Note that in the limit $\varepsilon \rightarrow 0$ with $\varepsilon N_G =O(1)$, since $\varepsilon^k N_G \rightarrow 0~(k \geq 2)$, the QEM cost is exactly given by $\gamma_Q^{\rm tot}= e^{(\gamma_Q-1) N_G}$.
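The accumulation of the cost over a circuit can be made concrete with a short sketch; the per-gate cost and gate count below are illustrative numbers chosen so that the mean number of error events is one, not values taken from the text.

```python
import math

def qem_sampling_overhead(gamma_per_gate, n_gates):
    """Total QEM cost gamma_tot = exp((gamma_Q - 1) * N_G) and the
    sampling overhead Gamma_tot = gamma_tot ** 2."""
    gamma_tot = math.exp((gamma_per_gate - 1.0) * n_gates)
    return gamma_tot, gamma_tot ** 2

# One expected error event in the whole circuit: (gamma_Q - 1) * N_G = 1,
# so gamma_tot ~ e and the sampling overhead ~ e**2 stay constant as long
# as the mean number of error events is fixed.
gamma_tot, overhead = qem_sampling_overhead(1 + 1e-4, 10_000)
```

The point of the sketch is that the overhead depends only on the product $(\gamma_Q - 1) N_G$, so halving the effective error rate allows twice as many gates at the same sampling cost.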
\subsection{Fault-tolerant quantum computing}
\label{sec:ftqc}
\subsubsection{Stabilizer formalism}
In the framework of fault-tolerant quantum computing (FTQC), one prepares a redundant number of physical qubits and performs quantum computing in a code space defined as a subspace of the whole Hilbert space. By repetitively performing quantum error detection and correction, we can protect the logical qubits defined in the code space against physical errors. The state of the logical qubits is manipulated in a fault-tolerant manner with a set of logical operations.
The stabilizer formalism~\cite{gottesman1997stabilizer,nielsen2002quantum} is the most standard way to construct quantum error-correcting codes. Here, supposing that we construct $k$ logical qubits with $n$ physical qubits, a $2^k$-dimensional code space $\mathcal{C}$ is specified with a subgroup of $n$-qubit Pauli operators called the stabilizer group.
Let the $n$-qubit Pauli group be
\begin{equation}
\begin{aligned}
\mathcal{G}_n = \{\pm 1, \pm i\} \times \{I,X,Y,Z\}^{\otimes n},
\end{aligned}
\end{equation}
where $I$ is the identity operator and $X = \left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) , Y = \left( \begin{matrix} 0 & -i \\ i & 0 \end{matrix}\right), Z = \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix}\right)$ are Pauli operators.
The set of Pauli operators $\mathcal{S} \subset \mathcal{G}_n$ is called a stabilizer group if $\mathcal{S}$ is a commutative subgroup, the number of elements in $\mathcal{S}$ is $2^{n-k}$, and $-I \not\in \mathcal{S}$. We denote the $(n-k)$ generator set of a stabilizer group as $\mathcal{G} = (g_1, \cdots, g_{n-k})$. The code space $\mathcal{C}$ is defined as the eigenspace with $+1$ eigenvalues for all the operators in the stabilizer group, i.e., $\mathcal{C} = \{\ket{\psi} | ~\forall s_i \in \mathcal{S}, s_i\ket{\psi} = \ket{\psi}\}$. In the code space, we can introduce a logical basis $\{\ket{0}_L, \ket{1}_L\}^{\otimes k}$ and logical Pauli operators $\{I_L, X_L, Y_L, Z_L\}^{\otimes k}$. The code distance $d$ is defined as the minimum number of physical qubits on which a logical operator other than the logical identity $I_L^{\otimes k}$ acts nontrivially.
During a quantum computation, physical errors that occur in the encoded state are detected by measuring the $(n-k)$ generators, with projectors $P_{s}^{(i)} = \frac{1}{2}(I + (-1)^s g_i)$ for $s \in \{0,1\}$. These measurements are called stabilizer measurements, and their binary outcomes $s$ are called syndrome values. The original state is restored by applying appropriate feedback operations that are estimated from the syndrome values.
These stabilizer measurements are performed repeatedly during a computation. One repetition of the stabilizer measurements is called a code cycle of fault-tolerant quantum computing. If the effective error probability per physical qubit during a cycle is smaller than a certain threshold, we can estimate the Pauli operator that restores the original state with a failure probability that is exponentially small in the code distance $d$. Since the required number of physical qubits $n$ increases polynomially with the code distance $d$ in typical quantum error-correcting codes, we can exponentially suppress the error probability of the logical qubits with a polynomial qubit overhead.
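As a toy illustration of syndrome extraction and correction (the three-qubit bit-flip repetition code, not the surface code analyzed later), note that for purely $X$-type errors the syndrome map of the generators $Z_1Z_2$ and $Z_2Z_3$ is linear over $\mathbb{F}_2$:

```python
import numpy as np

# Three-qubit bit-flip (repetition) code: stabilizer generators Z1 Z2, Z2 Z3.
# Rows of H are the supports of the generators; for X-type error patterns
# e in F_2^3 the syndrome is simply s = H e (mod 2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(error):
    """Binary outcomes of measuring the two stabilizer generators."""
    return tuple(H @ error % 2)

# Minimum-weight decoder: syndrome -> most likely single-qubit X recovery.
DECODE = {(0, 0): np.array([0, 0, 0]),
          (1, 0): np.array([1, 0, 0]),
          (1, 1): np.array([0, 1, 0]),
          (0, 1): np.array([0, 0, 1])}

error = np.array([0, 1, 0])              # an X error on the middle qubit
correction = DECODE[syndrome(error)]
assert np.array_equal((error + correction) % 2, np.zeros(3, dtype=int))
```

Every weight-one error lands in a distinct syndrome class, which is exactly what makes single bit-flips correctable for this distance-3 code.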
\subsubsection{Logical operations}
We must not only correct physical errors but also update the logical quantum state to perform quantum computation. To this end, a universal set of logical operations should be performed in a fault-tolerant manner. According to the Solovay-Kitaev theorem~\cite{kitaev1997quantum,dawson2005solovay}, we can approximate arbitrary one- and two-qubit gates with a finite set of local operations. For example, the Hadamard gate $H = \frac{1}{\sqrt{2}}\left( \begin{matrix}1 & 1 \\ 1 & -1\end{matrix} \right)$, controlled-not (CNOT) gate $\Lambda = \ket{0}\bra{0} \otimes I + \ket{1}\bra{1} \otimes X$, and $T$-gate $T = \exp\left(i \frac{\pi}{8} Z\right)$ form a universal gate set. Several logical operations can be performed by transversally applying the same one- or two-qubit operations to the physical qubits.
Since transversal operations increase the effective physical error rate per qubit during a cycle only by a constant factor, transversal logical operations can be achieved fault-tolerantly.
However, it is known that there is no stabilizer code for which the set of transversal gates is universal~\cite{eastin2009restrictions}. Thus, we need an additional technique to achieve fault-tolerant and universal quantum computing. The most promising solution is to create a quantum state called a magic state and perform non-transversal logical operations with gate teleportation~\cite{fowler2012surface}. For example, $\ket{A}_{\rm L} = T H \ket{0}_{\rm L} = \frac{1}{\sqrt{2}}(e^{i\pi/8} \ket{0}_L + e^{-i\pi/8} \ket{1}_L)$ is a typical magic state and $T$-gate operations can be performed by consuming this state. This magic state encoded in a logical qubit can be constructed with a process called magic-state injection. While the infidelity of a magic state created by magic-state injection is generally larger than the logical error rate, we can create a high-fidelity magic state from several noisy magic states by using another quantum error-correcting code implemented on the logical space, which is called magic-state distillation.
Since the application of $T$-gates requires a longer time than the other operations, the number of $T$-gates is the dominant factor affecting the computation time of FTQC.
Although we can estimate a Pauli operation for recovery from syndrome values, we do not directly apply it immediately after estimation. Instead, we store the Pauli operations that should be applied to the physical qubits for recovery in a classical memory called the Pauli frame~\cite{fowler2012surface,riesebos2017pauli}. The stored operations will be taken into account when the logical measurements are performed; the outcome of a logical measurement is flipped according to the Pauli frame. A schematic figure is shown in Fig.\,\ref{fig:pauliframe}.
\begin{figure}[t!]
\centering
\includegraphics[width=7.5cm]{Pauliframe.png}
\caption{Schematic figure of the Pauli frame. The recovery operations are not physically applied to quantum computers but rather are stored in the Pauli frame and efficiently updated after each Clifford gate operation. The measurement outcomes are flipped depending on the state of the Pauli frame. }
\label{fig:pauliframe}
\end{figure}
In the above construction of logical operations, the whole process, except for magic-state injection, consists only of Clifford operations and Pauli channels in the code space. Since a Pauli operator conjugated by a Clifford operator is also a Pauli operator, we can always track a recovery operator as a Pauli operator during a computation. In addition, when we can apply a logical Pauli operator to a quantum state, we can perform it simply by updating the Pauli frame, since a logical Pauli operator is a transversal physical Pauli operation.
As long as classical computers are reliable, this operation is effectively noiseless.
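The Pauli-frame bookkeeping described above can be sketched for a single logical qubit. The class below is a simplified illustration only (it tracks the frame as two bits, pushes it through Cliffords by conjugation, and ignores global phases); it is not the architecture's actual implementation.

```python
# Minimal single-qubit Pauli-frame sketch: recovery Paulis are stored
# classically as (x, z) bits and updated under Clifford gates instead of
# being physically applied to the hardware.
class PauliFrame:
    def __init__(self):
        self.x, self.z = 0, 0              # frame Pauli = X^x Z^z (up to phase)

    def record_recovery(self, x=0, z=0):   # decoder output: stored, not applied
        self.x ^= x
        self.z ^= z

    def apply_h(self):                     # H X H = Z and H Z H = X
        self.x, self.z = self.z, self.x

    def apply_s(self):                     # S X S^dag = Y ~ XZ (up to phase)
        self.z ^= self.x

    def measure_z(self, raw_outcome):
        """Flip the raw Z-measurement outcome if the frame contains X."""
        return raw_outcome ^ self.x

f = PauliFrame()
f.record_recovery(x=1)      # decoder says an X recovery is pending
f.apply_h()                 # the pending X becomes a pending Z ...
assert f.measure_z(0) == 0  # ... which does not flip a Z measurement
f.apply_h()
assert f.measure_z(0) == 1  # back to X: the outcome is flipped in software
```

The measurement flip at the end is the "outcome of a logical measurement is flipped according to the Pauli frame" rule from the text, carried out entirely in classical memory.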
\section{Quantum error mitigation for fault-tolerant quantum computing}
\label{sec:method}
In this section, we discuss how to integrate QEM into the FTQC architecture. Here, we consider two types of errors in FTQC: decoding errors due to failures in the error estimation and insufficiency of magic-state distillation and approximation errors in the Solovay-Kitaev decomposition.
In Sec.\,\ref{sec:error_in_ftqc}, we explain how these errors in FTQC can be modeled. In Sec.\,\ref{sec:mitigate_error_in_ftqc}, we discuss how these errors can be canceled and evaluate their QEM costs. Probabilistic error cancellation requires the errors to be estimated in advance. In Sec.\,\ref{sec:effect_estimation_error}, we also discuss the effect of estimation errors on probabilistic error cancellation and the characterization efficiency.
\subsection{Errors in fault-tolerant quantum computing}
\label{sec:error_in_ftqc}
\subsubsection{Decoding error}
Here, we describe noise due to the failures of error estimation in elementary logical operations, i.e., stabilizer measurements and magic-state distillation. The first obstacle to applying probabilistic error cancellation to FTQC is how to characterize an effective map of noise due to the failures of error estimation. If we suppose that the physical errors can be modeled as a stochastic physical Pauli map and assume that there are no errors on the ancillary qubits for syndrome measurements, we can define a logical noise map for decoding errors that is Markovian and a logical stochastic Pauli map. Yet, these assumptions do not hold in practice. Nevertheless, here we will assume that we can define an effectively Markovian logical error map for each logical operation and also assume that this noise map is a stochastic logical Pauli map.
It is known that even if noise is unitary, a noise map in a logical space of surface codes can be well-approximated as stochastic Pauli noise when the code distance is sufficiently large~\cite{bravyi2018correcting}. Furthermore, the remaining coherent errors can be canceled by using pulse optimization techniques. Thus, it is reasonable to suppose that the decoding errors due to the failure of error estimations in surface codes are almost stochastic Pauli errors. In addition, we numerically verified that we can regard the decoding errors as Markovian errors even in the presence of measurement errors. See Appendix.\,\ref{sec:decoding_error_approx} for the details.
While we mainly describe and analyze the decoding errors in the surface codes, a similar idea can be applied to the decoding errors due to insufficient magic-state distillation. As for the logical noise map on a prepared magic state due to insufficient magic-state distillation, we can twirl the noise map by logical Clifford operations, and it can be also assumed to be a stochastic Pauli noise.
Under the above assumptions, we can describe a noise map for a $l$-qubit logical operation $\mathcal{N}_{\rm dec}$ as the following stochastic Pauli noise:
\begin{equation}
\begin{aligned}
\mathcal{N}_{\rm dec}(\rho) =& \sum_{g \in \{I_{\rm L},X_{\rm L},Y_{\rm L},Z_{\rm L}\}^{\otimes l}} p_g g \rho g^{\dag},
\label{Eq:logicalerror}
\end{aligned}
\end{equation}
where $p_g \in {\mathbb R} $, $\sum_g p_g = 1$ and $p_g \ge 0$.
The sum of the probabilities of the non-identity logical operations is called the logical error probability $p_{\rm dec}$, i.e., $p_{\rm dec} = \sum_{g \neq I_{\rm L}^{\otimes l}} p_g$.
It is known that when the physical error rate $p$ is smaller than a value called the threshold $p_{\rm th}$, the effective logical error probability decreases exponentially with the code distance $d$. The effective logical error probability per syndrome-measurement cycle of the surface code, $p_{\rm cyc}$, decreases as
\begin{equation}
\begin{aligned}
p_{\rm cyc} \simeq C_1 \left(C_2 \frac{p}{p_{\rm th}} \right)^{(d+1)/2},
\label{Eq:threshold}
\end{aligned}
\end{equation}
where $C_1, C_2$ are constants~\cite{jones2012layered}. While the constant values depend on the details of the error correction schemes, $C_1 \simeq 0.13$ and $C_2 \simeq 0.61$ are expected in a typical construction of surface codes and the noise model~\cite{jones2012layered,fowler2012towards}.
Suppose that a logical operation requires $m$ cycles; then, its logical error probability $p_{\rm dec}$ can be approximated as
\begin{equation}
\begin{aligned}
p_{\rm dec} = 1 - (1-p_{\rm cyc})^m \simeq mp_{\rm cyc}.
\label{Eq:pdec_def}
\end{aligned}
\end{equation}
Note that the number of cycles per logical gate increases at most linearly with the code distance $d$.
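Equations (\ref{Eq:threshold}) and (\ref{Eq:pdec_def}) are straightforward to evaluate; the sketch below uses the constants quoted above ($C_1 \simeq 0.13$, $C_2 \simeq 0.61$), while the threshold value $p_{\rm th} = 1\%$ and the physical error rate are illustrative assumptions, not values stated in the text.

```python
def p_cycle(p, d, p_th=1e-2, C1=0.13, C2=0.61):
    """Logical error probability per code cycle (threshold scaling):
    p_cyc ~= C1 * (C2 * p / p_th) ** ((d + 1) / 2)."""
    return C1 * (C2 * p / p_th) ** ((d + 1) / 2)

def p_decoding(p, d, m):
    """Logical error probability of a logical operation taking m cycles:
    p_dec = 1 - (1 - p_cyc) ** m ~= m * p_cyc for small p_cyc."""
    return 1.0 - (1.0 - p_cycle(p, d)) ** m

# Each distance-2 increase multiplies p_cyc by the same suppression
# factor C2 * p / p_th, i.e. the suppression is exponential in d.
rates = [p_cycle(1e-3, d) for d in (3, 5, 7, 9, 11)]
assert all(a > b for a, b in zip(rates, rates[1:]))
```

For $p$ well below threshold the per-step suppression factor is small, so modest increases in $d$ buy orders of magnitude in $p_{\rm dec}$; this is the knob that, together with QEM, the later sections trade against the sampling cost.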
In order to apply probabilistic error cancellation, we need to know the logical error probabilities $\{p_g\}$ in advance. While we can estimate $\{p_g\}$ by using gate set tomography in the logical space, the estimations are not exact. The effect of estimation errors is discussed in Sec.\,\ref{sec:effect_estimation_error}, while the efficiency of our proposal, including noise characterization, is discussed in Appendix.\,\ref{sec:PECwithgateset}.
\subsubsection{Approximation error}
Since only a limited set of logical operations is available for achieving fault tolerance, we need to decompose an arbitrary unitary gate into a sequence of available gates. Any unitary operator can be decomposed into a product of CNOT gates and single-qubit gates. Thus, we need to approximate single-qubit gates with a given gate set to the desired accuracy. By using the improved Solovay-Kitaev algorithm~\cite{ross2014optimal}, given a universal gate set such as $\{T, H, S\}$ and the single-qubit gate $U$ to be approximated, we can construct an approximate gate $\tilde{U}$ satisfying $\|\tilde{U} - U\| \leq \varepsilon$ for an arbitrary accuracy $\varepsilon$, as a sequence from the given gate set of length $\tilde{O}(\mathrm{log}(\varepsilon^{-1}))$, where $\| \cdot \|$ denotes the operator norm. The error channel of the approximated map is given by
\begin{equation}
\begin{aligned}
\mathcal{N}_{\rm SK}(\rho) = \tilde{U} U^\dag \rho U \tilde{U}^{\dag}.
\label{Eq:approerror}
\end{aligned}
\end{equation}
Since this decomposition involves only single-qubit operations, this error channel can be efficiently and exactly evaluated in advance.
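Because $\mathcal{N}_{\rm SK}$ is a single-qubit unitary channel, it can be computed exactly on a classical computer. The sketch below uses a hypothetical target rotation and a slightly offset rotation as a stand-in for the synthesized gate $\tilde{U}$ (an actual Solovay-Kitaev output would be a long $\{T,H,S\}$ sequence):

```python
import numpy as np

def rz(theta):
    """Single-qubit z-rotation exp(-i theta Z / 2)."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Hypothetical target gate and a stand-in for its synthesized approximation.
U = rz(0.3)
U_tilde = rz(0.3 + 1e-3)                   # off by a small angle 1e-3

eps = np.linalg.norm(U_tilde - U, ord=2)   # operator-norm error ||U~ - U||
V = U_tilde @ U.conj().T                   # N_SK(rho) = V rho V^dagger
assert np.allclose(V @ V.conj().T, np.eye(2))   # the error channel is unitary
```

Since both $U$ and $\tilde{U}$ are known matrices, both the error norm $\varepsilon$ and the channel $\mathcal{N}_{\rm SK}$ itself are available exactly in advance, which is what probabilistic error cancellation of the approximation error relies on.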
\subsection{Quantum error mitigation for fault-tolerant quantum computing}
\label{sec:mitigate_error_in_ftqc}
\subsubsection{Overview of our framework}
Here, we show that decoding errors and approximation errors can be mitigated with probabilistic error cancellation.
When we insert recovery operations for probabilistic error cancellation, it is assumed that the noise level of the recovery operations for QEM is much lower than that of the error-mitigated gates. In NISQ computing, for example, it is reasonable to assume that the error probabilities of two-qubit gates are much larger than those of single-qubit gates and measurements; therefore, the errors of two-qubit gates can be mitigated by using single-qubit recovery operations. However, this is not a reasonable assumption in FTQC, since the operations that are noisy and time-consuming differ from those of the NISQ architecture. More concretely, even Clifford operations involving only one logical qubit suffer from decoding errors.
Here, we show an architecture of FTQC that implements QEM with small overheads. The keys are the two significant properties of FTQC architecture: logical Pauli operations are error-free and instantaneous due to the Pauli frame, and the noise map of the decoding errors can be assumed as stochastic Pauli noise. Thanks to these properties, we can mitigate errors in all the elementary logical operations simply by updating the Pauli frame. This means the error-mitigated Clifford operations and Pauli measurements are available for computation. Because they form a complete basis for mitigating arbitrary errors~\cite{endo2018practical}, we can mitigate approximation errors due to the Solovay-Kitaev decomposition. Since the approximation errors can be exactly known in advance, an unbiased estimator free from approximation error can be obtained, as will be explained in Sec.\,\ref{sec:mitigate_approximation_error}.
To make our QEM procedure work, the accuracy and efficiency of the decoding error estimation are vital. We show that the decoding errors can be estimated with gate set tomography under an appropriate choice of the gauge, considering state-preparation and measurement errors. We also show in Sec.\,\ref{sec:effect_estimation_error} that the cost of gate set tomography is acceptable compared with the main computation of FTQC for estimating expectation values. In this section, we further show a refined gate set tomography suited to our framework that significantly improves the estimation for logical Clifford gates.
\subsubsection{Quantum error mitigation for decoding errors}
\label{sec:mitigate_decoding_error}
The inverse channel of the non-uniform depolarizing channel in Eq.\,(\ref{Eq:logicalerror}) can be expressed as a linear combination of Pauli operations:
\begin{equation}
\begin{aligned}
\mathcal{N}_{\rm dec}^{-1} (\rho) &= \sum_{g \in \{I_{\rm L},X_{\rm L},Y_{\rm L},Z_{\rm L}\}^{\otimes m}} \eta_g g \rho g^{\dag} \\
&= \gamma_{\rm dec} \sum_{g \in \{I_{\rm L},X_{\rm L},Y_{\rm L},Z_{\rm L}\}^{\otimes m}} q_g \mathrm{sgn}(\eta_g) g \rho g^\dag.
\label{Eq:inverse}
\end{aligned}
\end{equation}
Refer to Appendix.\,\ref{sec:coef} for concrete expressions of the coefficients $\eta_g$, $\gamma_{\rm dec}$, and $q_g$. Thus, we can suppress the errors by applying probabilistic error cancellation with only Pauli operators after the decoding processes.
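For a single logical qubit, these coefficients can be computed directly from the logical Pauli error probabilities by inverting the Pauli-transfer eigenvalues of the channel; the sketch below uses hypothetical error rates (not the paper's fitted values) and stands in for the general multi-qubit expressions of the Appendix.

```python
# Sketch of the quasi-probability decomposition of the inverse channel for a
# single logical qubit, N^{-1}(rho) = sum_g eta_g g rho g^dag with
# g in {I, X, Y, Z}. The probabilities below are hypothetical stand-ins for
# the decoder's logical error rates.
def pauli_inverse(pX, pY, pZ):
    p = [1.0 - pX - pY - pZ, pX, pY, pZ]  # probabilities of (I, X, Y, Z)
    # chi[k][g] = +1 if Pauli g commutes with Pauli k, else -1
    chi = [[1, 1, 1, 1],
           [1, 1, -1, -1],
           [1, -1, 1, -1],
           [1, -1, -1, 1]]
    # Pauli-transfer eigenvalues lambda_k = sum_g chi[k][g] p_g of the channel
    lam = [sum(chi[k][g] * p[g] for g in range(4)) for k in range(4)]
    # solve sum_g chi[k][g] eta_g = 1 / lambda_k, using chi^{-1} = chi / 4
    eta = [sum(chi[g][k] / lam[k] for k in range(4)) / 4.0 for g in range(4)]
    gamma = sum(abs(e) for e in eta)   # QEM cost gamma_dec
    q = [abs(e) / gamma for e in eta]  # sampling distribution over (I,X,Y,Z)
    return eta, gamma, q

eta, gamma, q = pauli_inverse(1e-3, 1e-5, 1e-3)
print(gamma)  # ~ 1 + 2 (pX + pY + pZ), matching the first-order cost
```

To first order, $\eta_I \simeq 1 + p_{X_{\rm L}} + p_{Y_{\rm L}} + p_{Z_{\rm L}}$ and $\eta_g \simeq -p_g$ for $g \neq I$, which reproduces $\gamma_{\rm dec} \simeq 1 + 2 p_{\rm dec}$.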
The QEM cost for decoding errors in the entire circuit can be expressed as $\gamma^{\rm tot}_{\rm dec} = \prod_{k=1}^{N_{\rm dec}}\gamma_{\rm dec}^{(k)}$, where $N_{\rm dec}$ is the number of logical gates, and $\gamma_{\rm dec}^{(k)}$ is a QEM cost of the $k$-th operation.
Note that probabilistic error cancellation usually applies the recovery operations of QEM immediately after the noisy gates~\cite{temme2017error,endo2018practical}; however, because we perform only logical Pauli operations as the recovery operations for decoding errors, we can simply update the Pauli frame instead of directly applying them after the noisy gates. Finally, the measurement result is post-processed according to the state of the Pauli frame, the parity corresponding to the applied recovery operations, and the QEM cost. Thus, unlike in probabilistic error cancellation for NISQ devices, the logical noise due to decoding errors can be mitigated without any additional noise from the recovery operations.
A schematic figure is shown in Fig.\,\ref{fig:pauliframe2}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{pauliframe2.png}
\caption{Schematic figure for the Pauli frame incorporating QEM. If a QEM recovery operation is a Pauli operation, it is not directly applied to the quantum computer; rather, the Pauli frame is updated instead. The parity is also updated in accordance with the generated recovery operations of QEM. Here, we denote the parity corresponding to the QEM recovery operation as $p_a$ in the figure. If a QEM recovery operation is not a Pauli operator, it is performed physically. The measurement outcomes are then post-processed depending on the Pauli frame, parity, and QEM cost. }
\label{fig:pauliframe2}
\end{figure}
Note that the information on the QEM cost and the parity is used only when the final measurement result is obtained; the outcome of a destructive logical Pauli measurement is flipped depending only on the state of the Pauli frame. Whether we can mitigate decoding errors of complicated logical operations such as magic-state preparation, gate teleportation, and adaptive Clifford gates by simply updating the Pauli frame is not trivial; therefore, we provide a concrete procedure for actual devices and Pauli frames in Appendix.\,\ref{sec:concrete_logical_operations}.
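A minimal sketch of this bookkeeping for a single logical qubit: sampled Pauli recoveries are folded into the frame, while the sign and cost are accumulated classically and applied only at readout. The interface and the quasi-probabilities are hypothetical, not the paper's implementation.

```python
import random

# Hypothetical single-logical-qubit Pauli frame that absorbs Pauli-type QEM
# recovery operations: instead of applying the sampled recovery Pauli on
# hardware, we update the frame and accumulate the sign and cost of
# probabilistic error cancellation in classical registers.
class PauliFrame:
    def __init__(self):
        self.x = 0          # pending logical X correction (mod 2)
        self.z = 0          # pending logical Z correction (mod 2)
        self.sign = +1      # accumulated sgn(eta_g) of sampled recoveries
        self.cost = 1.0     # accumulated QEM cost prod_k gamma^{(k)}

    def apply_qem_recovery(self, eta):
        """Sample a recovery Pauli g from the quasi-probability {eta_g},
        g in (I, X, Y, Z), and fold it into the frame."""
        gamma = sum(abs(e) for e in eta.values())
        gates, weights = zip(*[(g, abs(e) / gamma) for g, e in eta.items()])
        g = random.choices(gates, weights)[0]
        if g in ("X", "Y"):
            self.x ^= 1
        if g in ("Z", "Y"):
            self.z ^= 1
        self.sign *= 1 if eta[g] >= 0 else -1
        self.cost *= gamma

    def postprocess(self, z_outcome):
        """Flip a destructive logical Z-measurement outcome by the frame,
        then weight it by the accumulated sign and cost."""
        flipped = -z_outcome if self.x else z_outcome  # X or Y flips Z outcome
        return self.cost * self.sign * flipped

random.seed(0)
frame = PauliFrame()
# hypothetical quasi-probabilities, e.g. from the inverse of a Pauli noise map
eta = {"I": 1.002, "X": -1e-3, "Y": -1e-5, "Z": -1e-3}
for _ in range(100):
    frame.apply_qem_recovery(eta)
print(frame.cost, frame.postprocess(+1))
```

Averaging `postprocess` outputs over many such runs implements the quasi-probability estimator with zero extra physical gates for the Pauli recoveries.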
In the case of decoding errors in surface codes, by approximating the QEM cost to the first order of the logical error, we have
\begin{equation}
\begin{aligned}
\gamma_{\rm dec} \simeq 1 + 2p_{\rm dec}.
\label{Eq:costandlogicalerror}
\end{aligned}
\end{equation}
Refer to Appendix.\,\ref{sec:coef} for details. Under the assumption that the logical error rate is the same for all the logical operations and $p_{\rm dec} N_{\rm dec}=O(1)$ with $p_{\rm dec} \rightarrow +0$, the QEM cost $\gamma_{\rm dec}^{\rm tot}$ for the entire quantum circuit can be shown to be exactly equal to $e^{2 p_{\rm dec} N_{\rm dec}}$ on the basis of the argument in Sec.\,\ref{sec:pec}.
Thus, by using Eqs.\,(\ref{Eq:threshold}) and (\ref{Eq:pdec_def}), we obtain
\begin{equation}
\begin{aligned}
\gamma_{\rm dec} - 1 = 2 m C_1 \left(C_2 \frac{p}{p_{\rm th}}\right)^{(d+1)/2},
\end{aligned}
\label{Eq:gammadec}
\end{equation}
which results in the total QEM sampling overhead
\begin{align}
\Gamma^{\rm tot}_{L}=e^{2 (\gamma_{\rm dec}-1) N_{\rm dec}}= \exp \left( 4 m C_1 N_{\rm dec} \left( C_2 \frac{p}{p_{\rm th}}\right)^{(d+1)/2} \right).
\label{Eq:tradeoff}
\end{align}
Notice that Eq.~(\ref{Eq:tradeoff}) clearly shows a trade-off relationship between the sampling overhead and the code distance, i.e., the number of physical qubits.
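The trade-off in Eq.\,(\ref{Eq:tradeoff}) can be made concrete with a short calculation (hypothetical constants $C_1 = C_2 = 1$, $m = d$, and illustrative values of $p$, $p_{\rm th}$, and $N_{\rm dec}$); to avoid floating-point overflow at small $d$, the sketch reports $\ln \Gamma^{\rm tot}_{L}$:

```python
import math

# Log of the total sampling overhead for mitigating decoding errors versus
# the code distance, following Eq. (tradeoff). C1 = C2 = 1 and m = d are
# hypothetical simplifications; p_th = 0.044 matches the paper's numerics.
def log_total_overhead(d, p, p_th, n_dec, c1=1.0, c2=1.0):
    m = d  # cycles per gate grow at most linearly with the distance
    return 4 * m * c1 * n_dec * (c2 * p / p_th) ** ((d + 1) / 2)

p, p_th, n_dec = 0.01, 0.044, 10 ** 4
for d in (5, 7, 9, 11, 13):
    print(d, log_total_overhead(d, p, p_th, n_dec))
# ln(Gamma_tot) shrinks rapidly with d: more physical qubits, fewer samples
```
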
\subsubsection{Quantum error mitigation for approximation errors}
\label{sec:mitigate_approximation_error}
Unlike decoding errors due to the failure of error correction, we cannot describe errors due to the Solovay-Kitaev decomposition as stochastic Pauli errors. Nevertheless, we can still apply probabilistic error cancellation with negligible overheads.
Recalling from Eq.\,(\ref{Eq:approerror}) that $\mathcal{N}_{\rm SK}(\rho) = \tilde{U} U^\dag \rho (\tilde{U} U^\dag)^{\dag}$, we invert this approximation error by
\begin{equation}
\begin{aligned}
\label{Eq:sv_decomp}
\mathcal{N}_{\rm SK}^{-1}&=\sum_i \eta_i \mathcal{B}_i^{(L)} \\
&=\gamma_{\rm SK} \sum_i q_i \mathrm{sgn}(\eta_i) \mathcal{B}_i^{(L)},
\end{aligned}
\end{equation}
where $\{\mathcal{B}_i^{(L)}\}$ denotes recovery operations in the logical space. Note that any map can be represented as a linear combination of Clifford operations and Pauli channels~\cite{endo2018practical}; thus, we do not need $T$-gates to mitigate approximation errors. Recovery operations are randomly chosen and, if they are not Pauli operations, applied immediately after each single-qubit logical operation. In the case of Pauli operations, we can again use the Pauli frame, and no physical operations on the quantum computer are required, in a similar vein to QEM for decoding errors. Since a single-qubit logical unitary operation consists of several repetitions of Clifford gates and $T$-gate teleportation, inserting the recovery operation for probabilistic error cancellation increases the length of the quantum circuit only negligibly.
In the numerical simulations described in the next section, we will verify that the QEM costs can be approximated with the following equation:
\begin{equation}
\begin{aligned}
\gamma_{\rm SK}-1 = \beta_1 e^{-\beta_2 N_T },
\end{aligned}
\label{Eq:SVcost}
\end{equation}
where $\beta_1$ and $\beta_2$ are constants dependent on the quantum gate and $N_T$ is the number of available $T$-gates.
The QEM cost due to approximation errors can also be represented as $\gamma^{\rm tot}_{\rm SK}=\prod_{k=1}^{N_{\rm SK}} \gamma_{\rm SK}^{(k)}$, where $N_{\rm SK}$ is the total number of recovery operations for mitigating approximation errors in the quantum circuit with the cost $\gamma_{\rm SK}^{(k)}$ corresponding to the $k$-th recovery operation.
By assuming that the cost does not depend on gates, we have the following QEM sampling overhead:
\begin{align}
\Gamma_{\rm SK}^{\rm tot} \simeq \exp \left(2 \beta_1 N_{\rm SK} e^{- \beta_2 N _T}\right).
\end{align}
This shows there is a trade-off relationship between the sampling overhead and the number of available $T$-gates.
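Plugging in the fitted constants reported in the numerical section ($\beta_1 \approx 3.9$, $\beta_2 \approx 0.072$), a short calculation illustrates this trade-off for a hypothetical circuit with $N_{\rm SK} = 1000$ decomposed rotations:

```python
import math

# Total sampling overhead Gamma_SK^tot ~ exp(2 beta1 N_SK exp(-beta2 N_T)),
# using the fit gamma_SK - 1 = beta1 exp(-beta2 N_T) from the numerical
# section. N_SK = 1000 is a hypothetical circuit size.
def total_overhead_sk(n_sk, n_t, beta1=3.9, beta2=0.072):
    return math.exp(2 * beta1 * n_sk * math.exp(-beta2 * n_t))

n_sk = 1000
for n_t in (40, 60, 80, 100):
    print(n_t, total_overhead_sk(n_sk, n_t))
# allowing more T-gates per decomposition rapidly suppresses the overhead
```
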
\subsection{Effect of estimation errors of the noise map}
\label{sec:effect_estimation_error}
\subsubsection{Effect of estimation errors on expectation values}
While approximation errors can be exactly determined in advance, decoding errors have to be characterized. Since the logical error probabilities of the decoding errors are small, it is unavoidable that the characterization will contain finite and non-negligible estimation errors. Thus, we need to consider how estimation errors affect QEM and how efficiently the decoding errors can be characterized.
Let us discuss how estimation errors affect the performance of QEM. Given a perfect characterization of the noise model $\mathcal{N}_k$ for the $k$-th gate, we can realize the inverse operation $\mathcal{N}_k^{-1}$ with probabilistic error cancellation to achieve $\mathcal{N}_k^{-1} \mathcal{N}_k= \mathcal{I}$. If we obtain an incorrect estimation for the error process $\mathcal{N}_k' \neq \mathcal{N}_k$, it leads to an estimation error $\Delta \mathcal{N}_k \equiv \mathcal{N}_k^{\prime -1} \mathcal{N}_k \neq \mathcal{I}$.
Now, denoting the ideal process of the $k$-th gate as $\mathcal{U}_k$, the difference between the error-mitigated process and the error-free process for the entire quantum circuit can be bounded in terms of the diamond norm:
\begin{equation}
\begin{aligned}
\bigg\|\prod_{k=1}^{N_G} \Delta \mathcal{N}_k \mathcal{U}_k - \prod_{k=1}^{N_G} \mathcal{U}_k \bigg\|_{\diamond} \leq \Delta \varepsilon N_G,
\end{aligned}
\end{equation}
where we used the subadditivity of the error under composition of channels in the diamond norm, and we denote $\Delta \varepsilon = \mathrm{max}_k \|\Delta \mathcal{N}_k -\mathcal{I} \|_\diamond$. Similarly, the discrepancy between the noisy and the ideal process can be upper-bounded as $\|\prod_{k=1}^{N_G} \mathcal{N}_k \mathcal{U}_k - \prod_{k=1}^{N_G} \mathcal{U}_k \|_{\diamond} \leq \varepsilon N_G$, where $\varepsilon = \mathrm{max}_k \| \mathcal{N}_k -\mathcal{I} \|_\diamond$. Because the deviation of the expectation values of an observable $M$ between two processes $\mathcal{E}_1$ and $\mathcal{E}_2$ with the input state $\rho$ can be bounded as $\delta M = \mathrm{Tr}[M (\mathcal{E}_1(\rho)- \mathcal{E}_2(\rho))] \leq \|M \| \|\mathcal{E}_1-\mathcal{E}_2 \|_\diamond$~\cite{campbell2019random}, where $\|\cdot \|$ is the operator norm, we have $\delta M_{\rm QEM} \leq \|M \| \Delta \varepsilon N_G $ and $\delta M_{\rm noise} \leq \|M \| \varepsilon N_G $. Here, $\delta M_{\rm QEM}$ and $\delta M_{\rm noise}$ are the deviations of the expectation value with and without error mitigation, respectively.
Thus, we can see that QEM is beneficial when we can achieve $r<1$ for
\begin{equation}
\begin{aligned}
\label{def:coef_est_error}
\Delta \varepsilon = r \varepsilon.
\end{aligned}
\end{equation}
Note that this discussion does not include sampling errors; i.e., $\delta M$ is the error of the expectation value given infinitely many samples.
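The scaling $\Delta \varepsilon = r \varepsilon$ can be illustrated with a toy bit-flip channel, for which the residual map $\Delta \mathcal{N} = \mathcal{N}'^{-1} \mathcal{N}$ acts diagonally on the Pauli-transfer eigenvalues; a 10\% misestimation of $p$ leaves roughly 10\% of the original error (a standalone sketch with hypothetical numbers):

```python
# Toy check (bit-flip channel, Pauli-transfer picture) of how a misestimated
# error probability p' propagates into the residual map Delta_N = N'^{-1} N:
# the residual deviation is set by |p - p'|, so r = Delta_eps / eps ~ |p - p'| / p.
def transfer_eigs_bitflip(p):
    # eigenvalues of the transfer matrix of N(rho) = (1-p) rho + p X rho X
    # on the basis (I, X, Y, Z)
    return [1.0, 1.0, 1.0 - 2 * p, 1.0 - 2 * p]

def residual_deviation(p_true, p_est):
    """max_k |lambda_k(N) / lambda_k(N_est) - 1| for Delta_N = N_est^{-1} N."""
    lam_t = transfer_eigs_bitflip(p_true)
    lam_e = transfer_eigs_bitflip(p_est)
    return max(abs(t / e - 1.0) for t, e in zip(lam_t, lam_e))

p_true, p_est = 1.0e-3, 1.1e-3          # 10% misestimation (hypothetical)
eps = residual_deviation(p_true, 0.0)   # deviation of the unmitigated noise
d_eps = residual_deviation(p_true, p_est)
print(d_eps / eps)  # ~ 0.1: QEM still suppresses the error tenfold
```
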
\subsubsection{Efficiency of characterization of decoding errors}
When we use gate set tomography to characterize the noise model for decoding errors, two causes of model estimation errors must be considered: state preparation and measurement (SPAM) errors and the finite statistical error arising from an insufficient number of samples. It has been shown that the effect of SPAM errors can be eliminated in the case of probabilistic error cancellation based on gate set tomography~\cite{endo2018practical}. While the general choice of the gauge is not compatible with the Pauli frame, we can modify the scheme of gate set tomography so that this method is compatible with QEM with the Pauli frame. Refer to Appendix.\,\ref{sec:PECwithgateset} for details.
To achieve an accuracy $r$ given in Eq.\,(\ref{def:coef_est_error}), we need to perform $N_{\rm GST} = O((r\varepsilon)^{-2})$ samplings with gate set tomography~\cite{greenbaum2015introduction,nielsen2020gate}. Here, we show this efficiency is acceptable compared with the main part of FTQC, i.e., the time required for gate set tomography corresponds to $O(r^{-2} n_q N_G)$ runs of the whole quantum logical circuits to obtain expectation values, where $n_q$ is the number of logical qubits.
Let the time for a single run of the logical circuit of FTQC be $\tau$.
The depth of logical quantum circuit is estimated as $O(N_G n_q^{-1})$, and the time per gate can be roughly approximated as $\tau_{\mathrm{gate}}=O(\tau n_q N_G^{-1})$. Then, the time for gate set tomography can be estimated as $\tau_{\rm GST} = O(\tau_{\rm gate} N_{\rm GST}) = O(\tau N_G^{-1} n_q (r \varepsilon)^{-2})$.
In a situation where QEM is useful, we have $\varepsilon N_G = O(1)$~\cite{endo2020hybrid}. Thus, we can conclude that to decrease the logical error rate $p_{\rm dec}$ to $r p_{\rm dec}$ by QEM, we need gate set tomography as a pre-computation that takes $\tau_{\rm GST} = O(\tau N_G n_q r^{-2})$, which is $\tau_{\rm GST}/\tau = O(N_G n_q r^{-2})$ times longer than a single circuit run of FTQC.
The numbers of logical gates $N_G$ and logical qubits $n_q$ are expected to grow polynomially with the problem size, and FTQC circuits will be repeated on the order of $O(r^{-2})$ to make the statistical fluctuation of expectation values smaller than the reduced bias. Accordingly, while the estimation costs of the noise map cause another overhead to FTQC depending on the required accuracy, it is performed with a time that grows polynomially with the problem size and without requiring additional physical qubits.
We remark that when we assume that the noise properties of the quantum devices are uniform, we can perform the sampling for gate set tomography in parallel. If we use all the logical qubits for characterization, the time for gate set tomography is reduced to $\tau_{\rm GST} = O(\tau N_G r^{-2})$. Note that, in the scenario where we can fully parallelize the sampling procedure, i.e., when we have $O((r\varepsilon)^{-2})$ distinct uniform quantum gates, we have $\tau_{\rm GST} = O(\tau N_G^{-1})$.
To further make the characterization of noise more efficient, we propose an improved gate set tomography for decoding errors of the Clifford process that is fast and compatible with the Pauli frame. See Appendix.\,\ref{sec:PECwithgateset} for the details of this scheme. The number of measurements $N_{\rm GST}$ is reduced to $N_{\rm GST} = O(r^{-2} \varepsilon^{-1})$, which makes the costs of pre-computation $O(n_q r^{-2})$. Thus, as long as $r$ is not too small, the time for characterization is expected to be relatively short.
While our efficient gate set tomography cannot be applied to the characterization of $T$-gate preparation, several ways to reduce the cost of estimating $T$-gate errors can be considered. Since the error of the logical $T$-gate depends on the physical $T$-gate, and the injection and distillation process is constructed from circuits containing only a few $T$-gates, there may be an efficient way to numerically estimate the noise of the logical $T$-gate from the characterization of the physical $T$-gate and efficient simulation of quantum circuits dominated by Clifford gates~\cite{bravyi2016improved}. There may also be a way to mitigate $T$-gate errors by temporally expanding the code distance or increasing the distillation depth for the $T$-gate. The cost of gate set tomography might also be reduced by utilizing long-sequence GST~\cite{nielsen2020gate}, i.e., repeating several $T$-gates to amplify a small error rate to a larger value. Ref.\,\cite{piveteau2021error} shows that if decoding errors of logical Clifford gates are negligible, one can reliably twirl the noise of the $T$-gate and perform efficient process tomography on it by repeating $T$-gates. Nevertheless, it is still an open problem whether there exists a more efficient gate set tomography scheme on the logical space with imperfect logical Clifford gates.
\subsubsection{Effective increase in code distance by quantum error mitigation under estimation errors}
QEM can be regarded as effectively increasing the code distance. Suppose that we can effectively achieve an $r$ times smaller logical error rate $p_{\rm eff} = r p_{\rm dec}$ via QEM. Since the logical error rate is roughly approximated in terms of the code distance as $p_{\rm dec}(d)=p(p/p_{\rm th})^{(d-1)/2}$, QEM effectively achieves a larger code distance $d'$ with $p_{\rm eff} = p_{\rm dec}(d')$ without increasing the number of physical qubits. The effective increase in the code distance via QEM can be derived as
\begin{equation}
d'-d=2 \frac{\mathrm{ln}~r}{\mathrm{ln}\big(\frac{p}{p_{\rm th}}\big)}.
\end{equation}
Therefore, by setting $r=(p/p_{\rm th})^{x}$, we can effectively increase the code distance by $2x$. Note that, as discussed in the previous sections, we need $\mathrm{exp}(O (N_{\rm dec} p_{\rm dec}))=\mathrm{exp}(O(1))$ times more repetitions to achieve the same precision as the error-free case.
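A quick numerical consistency check of the effective-distance formula, using the rough scaling $p_{\rm dec}(d)=p(p/p_{\rm th})^{(d-1)/2}$ from above (the values of $p$, $p_{\rm th}$, and $d$ are hypothetical):

```python
import math

# Check of d' - d = 2 ln(r) / ln(p / p_th): with r = (p / p_th)^x the gain
# is exactly 2x, and the mitigated rate matches the larger code distance.
def p_dec(d, p, p_th):
    return p * (p / p_th) ** ((d - 1) / 2)

def distance_gain(r, p, p_th):
    return 2 * math.log(r) / math.log(p / p_th)

p, p_th, d = 0.01, 0.044, 11
x = 1.0                 # choose r = (p / p_th)^x
r = (p / p_th) ** x
gain = distance_gain(r, p, p_th)
print(gain)             # 2 * x = 2 extra units of code distance
# consistency: the mitigated rate equals the rate at the larger distance
assert abs(r * p_dec(d, p, p_th) - p_dec(d + gain, p, p_th)) < 1e-15
```
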
It is worth noting that we can also increase the number of available logical qubits via QEM. For a fixed number of physical qubits, the effective increase in the code distance means that the same logical error rate can be achieved with a smaller actual code distance, i.e., with fewer physical qubits per logical qubit; therefore, we can convert the code distance into the number of logical qubits.
\section{Numerical analysis}
\label{sec:result}
We numerically evaluated how well error mitigation suppresses the qubit overhead in FTQC. (See Appendix.\,\ref{sec:numercal_detail} for the detailed settings and the definitions of the terms used in the numerical analysis.)
\subsection{Quantum error mitigation for decoding errors}
\subsubsection{Cost analysis}
We evaluated the performance of QEM on decoding errors occurring during logical operations, where we assumed FTQC with surface codes and lattice surgery. (See Appendix.\,\ref{sec:surface_code_and_lattice_surgery} for details about surface codes.) For simplicity, we assumed a single-qubit depolarizing noise model for each data and measurement qubit at the beginning of each cycle, which corresponds to a phenomenological noise model~\cite{wang2003confinement,delfosse2017almost}. To determine the failure probability of decoding with faulty syndrome-measurement cycles, we further assumed perfect syndrome measurements in the $0$-th and $d$-th cycles. Then, we checked whether any logical Pauli errors occurred during $d$ cycles. The recovery operations were estimated from the syndrome values by using the minimum-weight perfect matching decoder~\cite{dennis2002topological,edmonds1965paths}. We evaluated the logical error probabilities of Pauli-$X,Y,Z$ and computed the QEM cost for $d$ cycles according to Eq.\,(\ref{QEM_cost_Pauli_1q}). Despite our assumption of perfect syndrome measurements of the $0$-th and $d$-th cycle, we expect that the numerical results are asymptotically equivalent to those without the assumption when $d$ is sufficiently large.
The logical error probabilities of Pauli-$X,Y,Z$ errors on a single logical qubit were calculated for several code distances. The sum of these probabilities is plotted against the physical error rate in Fig.\,\ref{fig:logical_error_lowp}.
\begin{figure}
\centering
\begin{tabular}{c}
\subfigure[]{
\includegraphics[width=7.5cm]{fig_decode_threshold_small.pdf}
\label{fig:logical_error_lowp}
} \\
\subfigure[]{
\includegraphics[width=7.5cm]{fig_decode_threshold_zoom.pdf}
\label{fig:logical_error_zoom}
} \\
\subfigure[]{
\includegraphics[width=7.5cm]{fig_decode_gamma.pdf}
\label{fig:logical_error_gamma}
}
\end{tabular}
\caption{Logical error probability and QEM cost for $d$-cycle syndrome measurements plotted against the physical error rates at several code distances. (a) Logical error probability as a function of physical error rate. (b) The same figure zoomed in around the threshold value. (c) QEM costs for $d$-cycle idling operations. The first-order approximations from Eq.\,(\ref{Eq:costandlogicalerror}) are shown as solid lines.}
\end{figure}
The logical error probability decreases exponentially with the code distance when the physical error probability $p$ is smaller than a threshold value. Fig.\,\ref{fig:logical_error_zoom} plots the logical error probabilities around the threshold value, which is approximately $p_{\rm th} = 0.044$.
We computed the QEM costs for decoding errors $\gamma_{\rm dec}$ corresponding to $d$ cycles and different code distances and compared them with the first-order approximation shown in Eq.\,(\ref{Eq:costandlogicalerror}). The numerical results are plotted in Fig.\,\ref{fig:logical_error_gamma}. In this figure, the solid lines correspond to the approximation of the QEM cost in Eq. (\ref{Eq:costandlogicalerror}).
We can see that the QEM costs decay exponentially with the code distance and show a threshold behavior similar to that of the logical error probabilities. The numerical results coincide well with the first-order approximation when the physical error rate is sufficiently small.
\subsubsection{Performance analysis}
\label{sec:logical_error_performance_analysis}
Next, we examined the performance of QEM on decoding errors in large-scale quantum circuits with a 100-qubit logical random Clifford circuit with 100 layers. We remark that since a linear combination of Clifford operations can represent arbitrary quantum operations, it is sufficient to demonstrate the performance of QEM for Clifford operations~\cite{strikis2020learning}.
In each layer, we simultaneously applied randomly generated single-qubit Clifford gates to all the qubits and then applied 50 CNOT gates, each acting on a pair of randomly chosen qubits.
We can simulate these protocols efficiently by using an efficient algorithm for stabilizer circuits~\cite{gottesman1997stabilizer,aaronson2004improved}.
As an observable, we chose a Pauli operator whose measurement outcome is always unity for the final state vector if there are no physical errors; i.e., the final state is a +1 eigenstate of the chosen observable.
The numerical simulations assumed a non-uniform single-qubit depolarizing logical error in the form of Eq.~(\ref{Eq:logicalerror}) for each layer. The logical error probabilities of depolarizing channels were determined according to the numerical results of the last section. We chose $p=0.01$ and obtained the logical error probabilities by extrapolation. The estimated logical error probabilities are summarized in Table.\,\ref{tab:logical_probs}.
\begin{table}[htb]
\centering
\begin{tabular}{|l|l|l|}
\hline
code distance $d$ & $p_{X_{\rm L}}, p_{Z_{\rm L}}$ & $p_{Y_{\rm L}}$ \\
\hline \hline
$5$ & $1.80 \times 10^{-4}$ & $1.96 \times 10^{-6}$ \\
\hline
$7$ & $1.39 \times 10^{-5}$ & $4.11 \times 10^{-8}$ \\
\hline
$9$ & $1.08 \times 10^{-6}$ & $8.64 \times 10^{-10}$ \\
\hline
$11$ & $8.35 \times 10^{-8}$ & $1.81 \times 10^{-11}$ \\
\hline
\end{tabular}
\caption{Estimated logical error probability for code distances with $p = 0.01$ and $p_{\rm th} \sim 0.044$.}
\label{tab:logical_probs}
\end{table}
Without QEM, the final state converges to a highly mixed state due to physical errors, so the expectation values are expected to decay to zero. By employing QEM, they are restored to unity at the cost of statistical accuracy, which accordingly requires a greater number of experiments.
Note that while the required number of cycles for Clifford operations scales linearly with the code distance, the actual number of cycles and logical error probability per logical gate are dependent on the Clifford operations. In particular, logical CNOT gates with lattice surgery may induce correlated logical Pauli errors on multiple logical qubits. Nevertheless, we used a simplified error model, since we expect this evaluation captures the basic properties of QEM performance.
We numerically performed a series of $10^4$ experiments, each of which computed an expectation value from $10^4$ single-shot measurements.
The results are shown in Fig.\,\ref{fig:demonstrate_clifford}, while data around the ideal expectation value are shown in Fig.\,\ref{fig:demonstrate_clifford_mitigated}.
Fig.\,\ref{fig:demonstrate_clifford_compare} shows the mean value of $10^4$ samples for each logical error probability together with its standard deviation (the error bar).
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{fig_clifford_mitigate.pdf}
\label{fig:demonstrate_clifford}
} \\
\begin{tabular}{cc}
\subfigure[]{
\includegraphics[width=7.5cm]{fig_clifford_mitigate_zoom.pdf}
\label{fig:demonstrate_clifford_mitigated}
} &
\subfigure[]{
\includegraphics[width=7.5cm]{fig_clifford_mitigate_decay.pdf}
\label{fig:demonstrate_clifford_compare}
}
\end{tabular}
\caption{(a) Histogram of expectation values for 100-qubit random Clifford circuits. (b) Histogram of expectation values with QEM for 100-qubit random Clifford circuits. (c) Sample averages and standard deviation of 100-qubit random Clifford circuits are plotted as a function of code distances.}
\end{figure*}
We can see that there was a large bias in the expectation value without QEM but no bias when the QEM technique was employed, although the standard deviation was amplified. The standard deviation of the expectation value for $d=5$ was $14.2$; the corresponding distribution is too broad to be visible in the histogram.
The mean number of Pauli errors in the whole quantum circuit was $3.6$ for $d=5$ and $0.28$ for $d=7$. Thus, as explained in Sec.\,\ref{sec:pec}, QEM is useful when the expected number of Pauli errors in the circuit is less than unity.
These results show that the QEM technique is effective for large-scale quantum computing and it enables us to increase the effective code distance.
\subsection{Quantum error mitigation for approximation errors}
\subsubsection{Cost analysis}
Next, we studied the performance of QEM when the Solovay-Kitaev decomposition is used. Since the actual QEM cost $\gamma_{\rm SK}$ depends on the target unitary operator, we drew sample unitary operators $U$ from the Haar measure $\mu_{\rm H}$. Then, we decomposed each unitary gate in the form $U = R_Z(\theta_1) \sqrt{X} R_Z(\theta_2) \sqrt{X} R_Z(\theta_3)$, where $\sqrt{X} = HSH$ is a Clifford operation. We used the improved Solovay-Kitaev algorithm of Ref.\,\cite{ross2014optimal}, which enables us to approximate an arbitrary Pauli-$Z$ rotation $R_Z(\theta) = \exp(i\frac{\theta}{2} Z)$ with an operator $\tilde{U}$ described as a sequence of Clifford operations and $T$-gates. We set the maximum $T$-gate count for the decomposition of each of the three Pauli-$Z$ rotations to check the trade-off relation between the $T$-gate count and the approximation accuracy.
Fig.\,\ref{fig:solovay_opnorm} shows the histogram of errors evaluated with the operator norm $\| U - \tilde{U}\|$.
\begin{figure}
\centering
\begin{tabular}{c}
\subfigure[]{
\includegraphics[width=7.4cm]{fig_sv_norm.pdf}
\label{fig:solovay_opnorm}
}\\
\subfigure[]{
\includegraphics[width=7.4cm]{fig_sv_gamma.pdf}
\label{fig:solovay_gamma}
}\\
\subfigure[]{
\includegraphics[width=7.4cm]{fig_sv_gamma_drop.pdf}
\label{fig:solovay_gamma_drop}
}
\end{tabular}
\caption{Approximation error and QEM cost of improved Solovay-Kitaev method calculated for Haar-random unitary operations. Each color corresponds to the maximum count of $T$-gates. (a) Histogram of approximation errors due to the Solovay-Kitaev decomposition. (b) Histogram of QEM costs. (c) Histogram of QEM costs for approximation errors as a function of the number of allowed $T$-gates.}
\end{figure}
As expected, the approximation errors decrease exponentially with the number of allowed $T$-gates.
Next, we calculated the QEM cost by using Eq.\,(\ref{Eq:sv_decomp}).
Fig.\,\ref{fig:solovay_gamma} shows the histogram of QEM costs $\gamma_{\rm SK}$, and Fig.\,\ref{fig:solovay_gamma_drop} plots the QEM cost versus the number of allowed $T$-gates.
We can see that $\gamma_{\rm SK}-1$ decreases exponentially with the number of $T$-gates, and its variance also decreases exponentially.
We fitted the QEM cost $\gamma_{\rm SK}$ with Eq.\,(\ref{Eq:SVcost}), and obtained $\beta_1 = 3.9(5)$ and $\beta_2 = 0.072(1)$.
\subsubsection{Performance analysis}
Next, we evaluated the performance of QEM for approximation errors due to the Solovay-Kitaev decomposition in a simulation of a SWAP test circuit with 7 qubits. A SWAP test circuit evaluates the overlap of two input states $\rho$ and $\sigma$ as $\mathrm{Tr}[\rho \sigma]$ by measuring ancilla qubits~\cite{ekert2002direct}.
We set one of the input states to the ideal state and the other to the state affected by approximation errors.
A schematic diagram is shown in Fig.\,\ref{fig:swaptest}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{swaptest.png}
\caption{Schematic figure of simulated 7-qubit SWAP test circuit. We calculated the overlap of randomly generated states and the approximation of the generated state by using the Solovay-Kitaev algorithm. The random circuit was composed of three layers, each of which consisted of random single-qubit rotation gates and CNOT gates acting on two randomly chosen qubits. }
\label{fig:swaptest}
\end{figure}
The ideal state was generated by using random quantum circuits composed of three layers. In each layer, random single-qubit unitary operations were simultaneously applied; then, a CNOT gate acted on two randomly chosen qubits. The same random quantum circuit was applied to the approximate state by applying the Solovay-Kitaev decomposition to each single-qubit rotation. In this case, if there are no approximation errors, we necessarily obtain $+1$ as the measurement outcome since the input states are identical; hence, the expectation value of the Pauli-$Z$ operator of the ancilla qubit is also $+1$. On the other hand, the expectation value becomes smaller than unity when the inner product is reduced by approximation errors. Since approximation errors cannot be treated in the framework of stabilizer simulation, we simulated the quantum circuits by directly updating the state vector after each gate.
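The statement that the ancilla's Pauli-$Z$ expectation equals the squared overlap of pure inputs can be checked with a small statevector sketch (hypothetical 2-qubit registers rather than the 3-qubit registers simulated here):

```python
import math
import random

# Exact statevector SWAP test on two small registers: for pure inputs, the
# ancilla's Z expectation equals |<psi|phi>|^2, which is how approximation
# errors pull the measured overlap below unity. Register size is hypothetical.
random.seed(7)

def random_state(dim):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(x) ** 2 for x in v))
    return [x / norm for x in v]

def overlap_sq(psi, phi):
    ov = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return abs(ov) ** 2

def swap_test_z(psi, phi):
    """<Z> on the ancilla after H, controlled-SWAP, H."""
    n = len(psi)
    r = 1.0 / math.sqrt(2.0)
    # ancilla |0> branch holds psi (x) phi; the |1> branch starts empty
    b0 = [[psi[i] * phi[j] for j in range(n)] for i in range(n)]
    # Hadamard on the ancilla: both branches become b0 / sqrt(2)
    b1 = [[b0[i][j] * r for j in range(n)] for i in range(n)]
    b0 = [row[:] for row in b1]
    # controlled-SWAP: swap the two registers in the ancilla-1 branch
    b1 = [[b1[j][i] for j in range(n)] for i in range(n)]
    # second Hadamard on the ancilla
    out0 = [[(b0[i][j] + b1[i][j]) * r for j in range(n)] for i in range(n)]
    out1 = [[(b0[i][j] - b1[i][j]) * r for j in range(n)] for i in range(n)]
    p0 = sum(abs(x) ** 2 for row in out0 for x in row)
    p1 = sum(abs(x) ** 2 for row in out1 for x in row)
    return p0 - p1

psi, phi = random_state(4), random_state(4)
print(swap_test_z(psi, phi), overlap_sq(psi, phi))  # the two values agree
print(swap_test_z(psi, psi))  # identical inputs give +1
```
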
We numerically performed a series of $10^4$ experiments and computed the expectation values from $10^4$ single-shot measurements. The number of allowed $T$-gates in each Solovay-Kitaev decomposition for single-qubit unitary operations was varied from $24$ to $60$.
Fig.\,\ref{fig:demonstrate_sk} shows the results, while Fig.\,\ref{fig:demonstrate_sk_mitigated} shows the data around the ideal expectation value. Moreover, Fig.\,\ref{fig:demonstrate_sk_decay} shows the mean value of $10^4$ samples for each allowed number of $T$-gates together with its standard deviation (the error bar). Note that the standard deviation with $21$ $T$-gates is $4.65$.
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{fig_SK_mitigate.pdf}
\label{fig:demonstrate_sk}
} \\
\begin{tabular}{cc}
\subfigure[]{
\includegraphics[width=7.5cm]{fig_SK_mitigate_zoom.pdf}
\label{fig:demonstrate_sk_mitigated}
} &
\subfigure[]{
\includegraphics[width=7.5cm]{fig_SK_mitigate_decay.pdf}
\label{fig:demonstrate_sk_decay}
}
\end{tabular}
\caption{Histogram of expectation values of SWAP test with approximation errors. (a) Histogram of expectation values. (b) The same figure zoomed in around the ideal expectation value. (c) Sample averages and standard deviation as a function of the allowed number of $T$-gates.}
\end{figure*}
We can see that our QEM technique successfully removed the bias from the expectation value in the noisy cases. Compared with the QEM cost for decoding errors, we obtained a larger QEM cost in this case. This is consistent with the results reported in Ref.~\cite{endo2018practical}, which indicate that the QEM cost for unitary errors tends to be larger than that for stochastic errors.
This problem may be alleviated by performing several Solovay-Kitaev decompositions with the same accuracy, thereby randomizing the approximation errors and removing the coherent component of the noise.
Note that with a sufficiently large sample size, our QEM technique enables the effective number of $T$-gates to be increased by inserting additional Clifford gates and Pauli channels and conducting repeated sampling, with negligible additional hardware requirements.
\subsection{Quantum error mitigation with estimation errors}
Probabilistic error cancellation assumes that the noise maps to be canceled are known in advance. While we can determine the approximation error of the Solovay-Kitaev decomposition within the numerical precision, it is hard to exactly characterize the noise maps of the decoding errors because GST is affected by finite sampling, as discussed in Sec.\,\ref{sec:effect_estimation_error}. In this section, we numerically evaluated the performance of our framework in the case of finite estimation errors.
For the benchmarks, we chose the same quantum circuit, noise model, and observable as in Sec.\,\ref{sec:logical_error_performance_analysis}. We evaluated the expectation value for a 100-qubit noisy Clifford circuit whose ideal value is unity in the absence of noise. A non-uniform depolarizing channel, in which Pauli-$X,Y,Z$ errors occur with probabilities $(p_x, p_y, p_z)$, was inserted after each gate. We used the probabilities obtained from the simulation of surface codes shown in Table\,\ref{tab:logical_probs}. The difference from the previous simulations is that these probabilities were over- or under-estimated as $((1+r)p_x, (1+r)p_y, (1+r)p_z)$ ($r>-1$) in probabilistic error cancellation. Here, $r=0$ corresponds to an exact characterization and perfect error mitigation, while $r=-1$ corresponds to FTQC without error mitigation. Following the discussion in Sec.\,\ref{sec:effect_estimation_error}, we expect a distribution whose mean value is the same as in the simulation of the mitigated noise model $(|r|p_x, |r|p_y, |r|p_z)$ and whose variance is approximately $\Gamma$, the QEM overhead determined by the estimated noise model $((1+r)p_x, (1+r)p_y, (1+r)p_z)$.
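The effect of mis-characterization can be illustrated with a deterministic toy calculation: a single qubit with an assumed Pauli channel, treated in the Pauli-transfer picture (this is a sketch, not the 100-qubit simulation above; the probabilities and the parameter $r$ below are illustrative). Probabilistic error cancellation built from the exact noise model removes the bias entirely, while building it from $((1+r)p_x, (1+r)p_y, (1+r)p_z)$ leaves a residual bias:

```python
import numpy as np

def transfer_eigenvalues(px, py, pz):
    """Multipliers of (<X>, <Y>, <Z>) under a single-qubit Pauli channel."""
    return np.array([1 - 2 * (py + pz), 1 - 2 * (px + pz), 1 - 2 * (px + py)])

def pec_quasiprobs(px, py, pz):
    """Quasi-probabilities (eta_I, eta_X, eta_Y, eta_Z) of the inverse channel.

    Row 0 enforces trace preservation (sum of eta = 1); rows 1-3 require the
    mixture of Pauli conjugations to invert each transfer eigenvalue
    (conjugating by X, Y, or Z flips the sign of the two other Pauli axes).
    """
    lam = transfer_eigenvalues(px, py, pz)
    S = np.array([[1,  1,  1,  1],
                  [1,  1, -1, -1],   # effect on <X>
                  [1, -1,  1, -1],   # effect on <Y>
                  [1, -1, -1,  1]])  # effect on <Z>
    return np.linalg.solve(S, np.array([1.0, 1 / lam[0], 1 / lam[1], 1 / lam[2]]))

def mitigated_z(true_p, est_p):
    """<Z> on |0> after the true channel followed by PEC built from est_p."""
    lam_z = transfer_eigenvalues(*true_p)[2]
    eta = pec_quasiprobs(*est_p)
    mu_z = eta[0] - eta[1] - eta[2] + eta[3]   # effective inverse multiplier on <Z>
    return lam_z * mu_z

true_p = (1e-3, 1e-3, 2e-3)    # assumed logical Pauli error probabilities
noisy = transfer_eigenvalues(*true_p)[2]  # unmitigated <Z>; ideal value is 1
exact = mitigated_z(true_p, true_p)       # exact characterization: bias removed
r = 0.5                                   # 50% over-estimation of the noise
biased = mitigated_z(true_p, tuple((1 + r) * p for p in true_p))
cost = np.sum(np.abs(pec_quasiprobs(*true_p))) ** 2   # QEM cost per channel
```

In this toy model, `exact` recovers the ideal value $1$, `biased` overshoots it (over-estimation over-corrects), and `cost` exceeds unity, mirroring the sampling-overhead trade-off discussed in the text.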
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{fig_estimation_error.pdf}
\label{fig:demonstrate_estimation_error}
} \\
\begin{tabular}{cc}
\subfigure[]{
\includegraphics[width=7.5cm]{fig_estimation_error_decay.pdf}
\label{fig:demonstrate_estimation_error_decay}
}
\end{tabular}
\caption{Histogram of expectation values with estimation errors. (a) Histogram of expectation values. (b) Sample averages and standard deviation as a function of estimation accuracy.}
\end{figure*}
The histograms in the case of finite estimation errors parametrized by $r$ for $d=7$ and $d=9$ are plotted in Fig.\,\ref{fig:demonstrate_estimation_error}, and the mean values and standard deviations are plotted as a function of $r$ in Fig.\,\ref{fig:demonstrate_estimation_error_decay}.
We can see that the residual bias increases exponentially with $|r|$.
With infinite samples, QEM is beneficial when $|r|$ is sufficiently smaller than unity. Comparing the cases of under-estimation ($r<0$) and over-estimation ($r>0$) with the same absolute value $|r|$, we find that over-estimation has a larger variance than under-estimation, while there is a similar amount of bias in the expectation values. Since QEM prefers under-estimation to over-estimation, we conclude that characterization methods with a weighted penalty may lead to a further improvement in QEM.
\subsection{Practical utility of quantum error mitigation for FTQC}
While we have shown that our method effectively decreases the hardware requirements of FTQC with the examples of random Clifford and SWAP-test circuits, it is also vital to assess the practical utility of QEM in the regime where FTQC is used for useful applications with a quantum advantage. In this section, we discuss the enhancement of computation accuracy in this regime with our protocol.
We estimated how many logical Clifford operations $N_G$ are required in this practical regime, based on existing resource estimations. Note that there are other noise sources, such as imperfect $T$-gate preparation, that can be counted as the overhead of logical operations in distillation processes; we therefore counted these effects as decoding errors. We considered two scenarios for evaluation: an optimistic one and a realistic one. The optimistic scenario is the case of lightweight applications mainly for showing a quantum advantage, i.e., an application with the minimum possible $N_G$ that cannot be simulated with existing classical computers. We referred to the analysis of Refs.\,\cite{arute2019quantum,pednault2019leveraging} to estimate the maximum problem size tractable with existing classical computers. According to them, we expect that quantum circuits with depth 100 and 100 logical qubits are sufficient to go beyond the limitation of classical simulation, and that we can achieve this with $N_G \sim 10^4$.
The other scenario is the ground-state energy estimation of spin models and chemical molecules, since quantum simulation is expected to be one of the most resource-efficient applications whose quantum advantage is well studied. We picked the expected number of gates from the recent state-of-the-art resource estimations in Refs.\,\cite{kivlichan2020improved} and \cite{babbush2018encoding}, which discuss the cost of calculating the ground-state energy. These papers utilize quantum simulation with Trotterization~\cite{suzuki1991general} and qubitization~\cite{low2019hamiltonian} as subroutines, respectively. Trotterization is a method to simulate quantum systems by approximating Hamiltonian dynamics with the Trotter decomposition~\cite{suzuki1991general}, and qubitization~\cite{low2019hamiltonian} is a recently proposed method that constructs the state after time evolution by repeated applications of Grover-like iterations.
According to Tables\,1 and 2 in Ref.\,\cite{kivlichan2020improved} and Table\,IV in Ref.\,\cite{babbush2018encoding}, approximately $10^{8}$ $T$-gates are required to simulate a Hubbard model that is hard to simulate with classical computers.
Note that while these algorithms use phase estimation sampling to obtain the ground energy as binary digits, rather than estimating expectation values directly, we can still apply QEM to them with small overheads, because these problems can be translated into a series of decision problems. See Appendix\,\ref{sec:apply_qem_to_bqp} for a detailed explanation. Considering the overhead of magic-state distillation and related processes, we take $N_G = 10^{10}$ as a pessimistic estimate.
Next, we considered how QEM can reduce the effect of decoding errors of logical operations during FTQC. Without QEM, the logical error probability $p_{\rm L}$ must be much smaller than the inverse of the number of logical operations $N_G$; in other words, the mean number of logical errors must satisfy $N_G p_{\rm L}=O(1)$. Let $N_{\rm e}$ denote the mean number of errors allowed without QEM to satisfy the required accuracy; $N_{\rm e}$ becomes small as the required accuracy becomes strict. On the other hand, with QEM, the bias of expectation values caused by logical errors whose mean number is below unity can be mitigated with a sampling overhead of $e^{O(1)}$, i.e., about $e^4 \approx 55$, according to Eq.\,(\ref{Eq:tradeoff}). Thus, the required logical error rate is relaxed from $p_{\rm L} \sim N_{\rm e} / N_G$ to $p_{\rm L} \sim 1 / N_G$. The effective error rates of elementary logical operations such as logical Clifford gates decrease exponentially with the code distance. When we focus on the advantage of QEM in the logical space, we can estimate the relation between the problem size $N_G$ and the code distance $d$ in terms of the allowed mean number of errors without QEM, $N_{\rm e}$, as shown in Fig.\,\ref{fig:resource_estimation_large_application}.
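The relaxation of the target logical error rate can be turned into code distances with a back-of-the-envelope sketch. The scaling form, the prefactor, and $p/p_{\rm th}$ below are assumed illustrative values, so the resulting distances only indicate the qualitative trend, not the figures reported here:

```python
def logical_error_rate(d, p_ratio=0.1, prefactor=0.1):
    """Assumed scaling p_L ~ C * (p / p_th)^((d + 1) / 2); C is illustrative."""
    return prefactor * p_ratio ** ((d + 1) / 2)

def required_distance(target_pL, d_max=99):
    """Smallest odd code distance whose logical error rate meets the target."""
    for d in range(3, d_max + 1, 2):
        if logical_error_rate(d) <= target_pL:
            return d
    raise ValueError("no distance up to d_max suffices")

N_G = 1e4     # logical gate count (early-FTQC scale)
N_e = 1e-3    # allowed mean number of logical errors without QEM
d_nmit = required_distance(N_e / N_G)   # without QEM: N_G * p_L <= N_e
d_mit = required_distance(1.0 / N_G)    # with QEM: mean error count of O(1)
qubit_ratio = d_mit ** 2 / d_nmit ** 2  # surface-code qubit count scales as d^2
```

Under these assumed constants, `d_mit` is substantially smaller than `d_nmit`, and `qubit_ratio` quantifies the physical-qubit saving in the same way as the ratio $d_{\rm mit}^2/d_{\rm nmit}^2$ discussed below.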
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=7.5cm]{fig_resource_estiamtion_large_application.pdf}
\label{fig:resource_estimation_large_application}
}
\subfigure[]{
\includegraphics[width=7.5cm]{fig_resource_estiamtion_large_application_reduce_qubit.pdf}
\label{fig:resource_estimation_large_application_reduce_qubit}
}
\caption{(a) The relation between the number of gates $N_G$ required for executing an algorithm and the code distance $d$ required for encoding a logical qubit. The relation is plotted for several values of the allowed number of logical errors during FTQC, $N_{\rm e}$. (b) The ratio of the number of required physical qubits per logical qubit with and without QEM.}
\label{fig:resource_estimation_large_application_both}
\end{figure*}
In this figure, the code distance required for reliable accuracy is plotted as a function of the number of logical gates required for executing an algorithm, for several values of $N_{\rm e}$. For the calculation, we used Eq.\,(\ref{Eq:threshold}) with $p/p_{\rm th} = 0.1$. The number of physical qubits per logical qubit scales as $O(d^2)$ in the case of two-dimensional topological codes. We plot how the number of physical qubits is reduced with QEM, $d_{\rm mit}^2 / d_{\rm nmit}^2$, in Fig.\,\ref{fig:resource_estimation_large_application_reduce_qubit}, where $d_{\rm mit}$ and $d_{\rm nmit}$ are the required code distances with and without QEM, respectively.
While the improvement of the code distance $d$ is constant, the impact of resource reduction depends on the expected technologies and the size of the problems of interest.
When we consider applications in the early FTQC era with $N_G \sim 10^4$, according to Fig.\,\ref{fig:resource_estimation_large_application}, the required code distance is reduced from about $9$ to $4$ if $N_{\rm e} = 10^{-3}$, and the required number of qubits is reduced to $21\%$. This advantage becomes larger as the required accuracy $N_{\rm e}$ becomes stricter. Even when we take the costs of distillation and lattice surgery into account, FTQC with $100$ logical qubits at code distance $4$ is estimated to require about $10^4$ physical qubits with QEM, which can be a promising milestone for demonstrating computational supremacy in the logical space. While this advantage becomes comparatively small for promising long-term applications with $N_G \sim 10^{10}$, the code distance is still reduced from $19$ to $14$, which suppresses the number of physical qubits to $55\%$. Thus, we conclude that in both cases, our proposal is expected to drastically alleviate the hardware requirements in practice.
It should be noted that the reduction of the code distance is vital not only for reducing the number of physical qubits but also for relaxing the requirements on error-decoding architectures. To estimate the Pauli errors that occur on physical qubits during FTQC, we need classical peripherals that decode with sufficiently small latency relative to the stabilizer measurement cycles. However, recent results show that realistic implementations can decode only up to about code distance $11$~\cite{holmes2020nisq+,ueno2021qecool,das2021lilliput}. While this value may be improved in the future, the performance of decoding units is clearly another restriction on FTQC. If the tractable code distance is limited to $11$, we can use at most $10^5$ logical gates with $N_{\rm e} = 10^{-3}$, which is just around the limit of classical simulation. On the other hand, the use of QEM increases the number of available logical gates to $10^8$. Thus, our proposal can be key to pushing the performance of FTQC from the classically simulatable region to the quantum supremacy regime.
While we have only discussed the reduction of the code distance, we can similarly reduce the effective required number of $T$-gates. Although this effective resource gain is likewise constant, as in the case of the code distance, it can be used not only to reduce the total number of generated $T$-gates but also to mitigate fluctuations in the generation throughput of magic states during the FTQC protocol. Since magic-state generation succeeds probabilistically, the number of available magic states per unit time fluctuates statistically and cannot be estimated in advance. We expect our method can obviate this kind of difficult run-time scheduling and make long-time execution of FTQC reliable. Furthermore, in the case of distributed FTQC, logical CNOT gates between distant nodes require entanglement distribution and distillation, which also typically succeed probabilistically. Thus, QEM can be used to reduce a wide range of difficulties in FTQC.
\section{Discussion}
\label{sec:conclusion}
We have described a method to effectively decrease errors in FTQC by performing QEM in the logical space. In the case of decoding errors due to insufficient code distances and magic-state distillation, we can perform QEM with small modifications to quantum circuits. In particular, owing to the Pauli frame, we can perform QEM without implementing any physical operations if decoding errors are stochastic Pauli maps, while QEM operations may induce additional errors when implemented physically on general quantum circuits. In regard to the approximation errors due to the Solovay-Kitaev decomposition, we cannot always use the Pauli frame because QEM employs not only Pauli operations but also Clifford operations. Since Clifford operations can be performed efficiently in FTQC, and the number of decoding processes for error correction is much larger than the gate count in Solovay-Kitaev decompositions, this overhead is negligible. We have verified the trade-off between the cost of QEM and the code distance and the number of $T$-gates. Furthermore, we have estimated the sampling cost of gate set tomography for obtaining the decoding noise map to the required accuracy, and clarified that our approach enables quantum computing beyond the achievable code distance. We have numerically compared computation with and without QEM on FTQC at the same sampling number and have shown the advantage of QEM even in the presence of finite estimation errors. We have also estimated the required resources with and without QEM in the early-FTQC era, and have shown that the required number of physical qubits can be reduced to tens of percent.
It should be noted that this is the first result to clearly show that QEM can dramatically improve the computation accuracy of useful applications under realistic assumptions, as demonstrated by the example of quantum simulation on FTQC. This is because the computational advantage of typical NISQ algorithms, such as variational algorithms, is empirically assumed and the required runtime of such algorithms has not been revealed, whereas the accuracy of FTQC algorithms can be estimated reliably depending on the complexity of the problem. Accordingly, the usefulness of quantum error mitigation can be clearly discussed in the FTQC scenario.
Another important aspect of providing the theory and implementation of QEM for FTQC rather than NISQ computing is as follows. While it is known that QEM is most effective when the mean number of errors during computation is of the order of unity~\cite{endo2018practical}, this criterion is not necessarily satisfied in the NISQ regime, depending on the problem size. On the other hand, because we are allowed to tune code distances, magic-state distillation levels, and the number of $T$-gates per Solovay-Kitaev decomposition in FTQC, it is highly likely that we can satisfy this criterion. Thus, the main drawback of QEM, the exponential growth of the sampling overhead, can be circumvented; therefore we can find highly practical regimes where QEM helps enhance the computation accuracy. Accordingly, we conclude that the theory of QEM on FTQC is more versatile than that for NISQ computing.
As mentioned in the main text, promising applications of our method are quantum phase estimation algorithms and Hamiltonian simulation algorithms for investigating quantum many-body dynamics. There are algorithmic errors in the Trotter decomposition~\cite{lloyd1996universal} and recently proposed methods such as Taylor series~\cite{berry2015simulating} and quantum signal processing~\cite{low2019hamiltonian,low2017optimal}. In Refs.~\cite{endo2019mitigating,vazquez2020enhancing}, it is shown that such algorithmic errors can be mitigated by employing extrapolation. Since algorithmic errors can be controlled by changing the simulation accuracy, this technique can also be naturally incorporated in an FTQC scenario. Thus, the dominant errors in FTQC can be compensated via QEM.
The first generation of FTQC may not be sufficiently large for naively solving large and useful problems. While the architecture of distributed quantum computing is the most straightforward approach to increasing the total number of qubits, it requires interconnections between quantum nodes, which induce additional overheads for entanglement distillation. Thus, a sufficient number of distilled entangled pairs may not always be available for distributed FTQC. In this context, techniques developed in the NISQ era~\cite{mitarai2019constructing,mitarai2020overhead} for solving larger problems with small NISQ computers may also be useful in middle-term FTQC. Our work is the first proposal that makes the best of a technique tailored for NISQ devices in the context of FTQC.
Finally, we should discuss the difference between our scheme and a similar work by \textcite{mcclean2020decoding} that combines QEM with quantum error correction. Their method implements quantum error correction for NISQ devices via classical post-processing in the case that experimentalists cannot implement stabilizer measurements because of the limited connectivity and large error rates of NISQ devices. Although their method enables the state to be projected to the code space via quantum subspace expansion~\cite{mcclean2017hybrid}, logical errors cannot be fully eliminated. On the other hand, our scheme assumes that FTQC can be performed but that the number of qubits and $T$-gates cannot be increased indefinitely. The remarkable advantage of our method is that we can fully eliminate the decoding errors and approximation errors by using a greater number of measurements at negligible hardware overhead, given a good characterization of the noise model.
\section*{Note added}
After we uploaded this work to arXiv, three relevant works appeared that share the concept of incorporating quantum error mitigation into fault-tolerant quantum computing to relax the hardware requirements at the cost of sampling overheads~\cite{xiong2020sampling,piveteau2021error,lostaglio2021error}. Ref.\,\cite{xiong2020sampling} shows quantum error mitigation for encoded qubits, but focuses on concatenated codes rather than topological codes. Ref.\,\cite{piveteau2021error} uses quantum error mitigation to implement $T$-gates without magic-state distillation and shows efficient characterization methods for $T$-gate errors under the assumption that logical Clifford operations are perfect. Ref.\,\cite{piveteau2021error} also discusses how to relax the costs of implementing $T$-gates by using the concept of the robustness of magic.
Compared to these works, we emphasize that our framework considers a different scenario in which logical Clifford operations are imperfect, so that FTQC suffers from decoding errors in logical Clifford procedures, logical noise on prepared magic states, and an insufficient magic-state supply. This difference makes our framework versatile in the early FTQC era. While we provided a consistent analysis of gate set tomography with noisy logical Clifford gates, we have refined the treatment of the noise of magic-state preparation in gate set tomography, motivated by Refs.\,\cite{piveteau2021error,lostaglio2021error}.
\section*{Acknowledgement}
This work is supported by PRESTO, JST, Grant No.\,JPMJPR1916; ERATO, JST, Grant No.\,JPMJER1601; CREST, JST, Grant No.\,JPMJCR1771; MEXT Q-LEAP Grant No.\,JPMXS0120319794 and JPMXS0118068682; Moonshot R\&D, JST, Grant No.\,JPMJMS2061. We would like to thank Takanori Sugiyama for a fruitful discussion on gate set tomography. We acknowledge useful discussions with Zhenyu Cai, Xiao Yuan, Rui Asaoka, and Kaoru Yamamoto.
\section*{Acknowledgements}
We would like to acknowledge a grant from ONR N00014-18-1-2826. The authors would also like to thank the anonymous reviewers and members of AI2, UW-NLP, and the H2Lab at the University of Washington for their valuable feedback and comments.
\section{Introduction}
\label{sec:intro}
\begin{quotation}
\noindent ``Some experts are familiar with one field, such as AI or nanotechnology [...] no one is capable of connecting the dots and seeing how breakthroughs in AI might impact nanotechnology, or vice versa.'' {\textit{--Yuval Noah Harari, Homo Deus, 2016}}
\end{quotation}
The effort to mitigate the COVID-19 pandemic is an interdisciplinary endeavor the world has rarely seen~\cite{nytimes}. As one recent example, expertise in virology, physics, epidemiology and engineering enabled a group of 200 scientists to understand and bring attention to the airborne transmissibility of the SARS-CoV-2 virus \cite{indoor}. The diverse and rapidly expanding body of past and present findings related to COVID-19 \cite{wang2020cord} makes it challenging to keep up, hindering scientists' pace in making new discoveries and connections.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{mechanism/teaserNEW.pdf}
\caption{Our COVID-19 Mechanism KB ({\sc Comb}\xspace) is extracted from scientific papers and can be searched for diverse activities, functions and influences (1), retrieving relations from the literature (2).}
\label{fig:teaser}
\end{figure}
Research in natural language processing (NLP) has provided important resources to extract {\it fine-grained} relations from scientific papers in specific areas, such as certain subfields of biomedicine ~\cite{kim2013genia,nye2018corpus} or computer science~\cite{wadden2019entity}. However, these cover only a fraction of all concepts in the literature; in biomedicine alone, there are myriad concepts \cite{salvadores2013bioportal} not covered by NLP resources. For COVID-19 research, the challenge is especially pronounced due to diversity and emerging concepts; even reading just one paper may require background knowledge in multiple biomedical subfields, physics, chemistry, engineering, computer science and the social sciences.
For example, consider a paper studying the indoor dynamics of aerosolized SARS-CoV-2 and the effect of ventilation on transmission by using simulation models, or work on economic impacts of COVID-19 on prices and consumption.
To make progress in consolidating such diverse information, we introduce a unified schema of \emph{mechanisms} as a \emph{unified language} covering activities, functions and influences across the sciences. These can be proteins that block viral binding, algorithms to design drugs, the effect heat has on viruses, or COVID-19 has on food prices (Fig.~\ref{fig:teaser}).
We build on the fact that mechanisms underlie much of the natural language of scientific papers \cite{rohl2012mechanisms}, and construct a unified schema with two coarse-grained mechanism relations:
\begin{itemize}[noitemsep,topsep=.5mm]
\item \textit{Direct Mechanisms}: mechanistic \textit{activities} (e.g., viral binding) or \textit{functions} engendered by natural or artificial entities (e.g., a protein used for binding or algorithm used for diagnosis).
\item \textit{Indirect Mechanisms}: \textit{influences and associations} such as economic effects of COVID-19 or complications associated with medical procedures.
\end{itemize}
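As a concrete illustration of the schema, each extracted mechanism can be thought of as a tuple of free-form text spans with a coarse relation label. The following minimal sketch is purely illustrative (the class and function names are hypothetical, not the released implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mechanism:
    subject: str    # free-form span, e.g. "ACE2 receptor"
    target: str     # free-form span, e.g. "viral entry"
    relation: str   # "direct" (activity/function) or "indirect" (influence)

def search(kb, subject=None, target=None, relation=None):
    """Naive case-insensitive substring search over mechanism tuples."""
    def match(query, field):
        return query is None or query.lower() in field.lower()
    return [m for m in kb
            if match(subject, m.subject)
            and match(target, m.target)
            and (relation is None or m.relation == relation)]

kb = [
    Mechanism("ACE2 receptor", "viral entry", "direct"),
    Mechanism("COVID-19", "food prices", "indirect"),
]
hits = search(kb, target="viral")   # -> the direct viral-entry mechanism
```

A real system would replace the substring match with semantic retrieval over span embeddings, but the coarse two-way relation label is the essence of the schema.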
Our coarse-grained relation schema, over free-form text spans, strikes a balance between the granular information extracted by Closed-IE approaches~\cite{Freitag1998TowardGL,Hoffmann2010Learning5R} and the schema-free breadth of Open IE approaches~\cite{etzioni2008open,stanovsky2018supervised}, which often lead to generic and uninformative relations for scientific applications \cite{Kruiper2020_SORE}.
Furthermore, our schema facilitates the construction of a high-quality KB that synthesizes interdisciplinary knowledge.
We construct precisely this, releasing {\sc Mechanic}\xspace (\textbf{Mech}anisms \textbf{AN}notated in \textbf{C}OVID-19 papers) -- an annotated dataset of 2,400 mechanisms based on our schema. We train a state-of-the-art model to extract this information from scientific papers, and use it to build {\sc Comb}\xspace (\textbf{C}OVID-19 \textbf{O}pen \textbf{M}echanism Knowledge \textbf{B}ase) -- a broad-coverage KB of 1.5M mechanisms in COVID-19 papers. We analyze the characteristics of {\sc Comb}\xspace, showing the distribution of relations across scientific subfields and comparing their quality to other IE approaches.
We demonstrate the utility of {\sc Comb}\xspace in two studies with experts. In the first study, our system achieves high precision and recall in scientific search with structured queries on both diverse viral mechanisms and applications of AI in the literature. In the second study, we evaluate {\sc Comb}\xspace in a usability study with MDs active in treating and researching COVID-19. Our system is rated higher than PubMed search by the clinical experts, in terms of utility and quality.
\noindent \textbf{Our main contributions include:}
\begin{itemize}[topsep=1mm,noitemsep]
\item We introduce a unified schema for {\em mechanisms} that generalizes across many types of activities, functions and influences. We construct and distribute {\sc Mechanic}\xspace, an annotated dataset of papers related to COVID-19, with 2,400 instances of our mechanism relation.
\item Using {\sc Mechanic}\xspace, we train an IE model and apply it to 160K abstracts in COVID-19 literature, constructing {\sc Comb}\xspace, a KB of 1.5M mechanism instances. Manual evaluation of relations sampled from our KB shows them to have 88\% accuracy. We also find a model trained on our data reaches roughly 80\% accuracy on a sample of general biomedical papers from across the PubMed corpus, with no additional training, demonstrating the generalization of our approach.
\item We showcase the utility of {\sc Comb}\xspace in structured search for mechanisms in the literature. In a study with MDs working to combat COVID-19, our system is rated higher than PubMed search in terms of utility and quality.
\end{itemize}
\section{Introduction}
\label{sec:intro}
\begin{quotation}
\noindent ``Some experts are familiar with one field, such as AI or nanotechnology [...] no one is capable of connecting the dots and seeing how breakthroughs in AI might impact nanotechnology, or vice versa.'' {\textit{--Yuval Noah Harari, Homo Deus, 2016}}
\end{quotation}
The global effort to mitigate the COVID-19 pandemic is an interdisciplinary endeavor with an intensity the world has rarely seen~\cite{nytimes}.
It is challenging to keep up with the interdisciplinary and rapidly expanding body of research on both past and present findings related to COVID-19 \cite{wang2020cord}. As a recent example, expertise in diverse science and engineering areas enabled a group of 200 scientists to understand airborne transmissibility of the SARS-CoV-2 virus \cite{indoor}.
To make progress in connecting research areas, this paper aims at building a knowledge base with a simple unified language capturing diverse relationships between scientific concepts.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{mechanism/teaser2.pdf}
\caption{Our knowledge base of \textit{mechanism} relations spans a wide range of activities, functions, and influences extracted from CORD-19, a corpus of papers related to COVID-19.\tnote{hanna's comment}
}
\label{fig:teaser}
\end{figure}
Research in natural language processing has provided resources to understand scientific literature and extract {\it fine-grained} relations in specific research areas such as computer science~\cite{x} or bio-medicine~\cite{x}. In contrast, the interdisciplinary nature of COVID-19 papers requires investigating relationships between scientific concepts not only in the medical domain (e.g., viral binding), but also across different areas (e.g., using image analysis for public health), see Figure~\ref{fig:teaser}. \tom{there's an important point here -- biomedicine is a huge fragmented area, and even within just this "domain", ignoring areas like CS/climatology/etc., we have a lot to contribute by capturing biomed mechanisms that are left uncovered otherwise. }\dan{agree, but not sure how to make the point effectively} Despite this topic diversity, such relations are often described with similar textual phrases across different areas. For example similar language can be used to describe ``a protein used for binding'' or ``image analyses used in public health''.\dan{last sentences seems confused? don't we want to say 'DIFFERENT terminology is used to refer to the same thing? This motivates embedding-based approach. If you really mean to say what is written, one needs to include an example --- i can't think of a phrase that means both binding protein and image analysis}\aida{I think we should say something like this: Similar textual structures are often employed to convey information independent of the domain (e.g. ``a protein \textbf{used in} binding'' or ``image analyses \textbf{used in} public health''). Though the meaning of each phrases(e.g. ``protein'' or ``image analysis'') can assist in determining underlying information where those surface structures are not present.(e.g. ``study of image analyses for face mask detection'')}
In this paper, we define a unified schema to cover {\it scientific mechanisms} as relationships between diverse scientific concepts, and build a knowledge base (KB) of {scientific mechanisms} described in the literature related to COVID-19. Our broad schema consists of two coarse-grained relations between generic entities:
\begin{itemize}
\item \textit{Direct Mechanisms} (mechanistic activities and functions) which include mentions of mechanistic \textit{activities} (e.g., viral binding) or \textit{functions} engendered by natural or artificial entities (e.g., a protein used for binding or image analysis used in public health); and also
\item \textit{Indirect Mechanisms} (influences and associations) which include \textit{influences and associations} such as possible complications associated with a medical procedure.
\end{itemize}
Our unified view of mechanisms strike a balance between fine-grained expressivity achieved by Closed-IE~\cite{Freitag1998TowardGL,Hoffmann2010Learning5R} and breadth achieved by Open IE approaches~\cite{etzioni2008open,stanovsky2018supervised,Zhan2020SpanMF}. Closed-IE approaches rely on fine-grained relation types that do not cover the wide range of relations in the scientific domain. Open-IE approaches focus on general-purpose, schema-free extraction of relations, but many of the relations are generic and uninformative for scientific applications \cite{Kruiper2020_SORE}.
We collect a dataset of mechanisms (denoted {\sc Mechanic}\xspace---\textbf{Mech}anisms \textbf{An}notated in \textbf{C}OVID-19 papers), containing annotated sentences from 250 COVID-19 abstracts with our unified schema. We use our dataset to train an information extraction model.
We apply the learned extractor to CORD-19, a
large corpus of papers about COVID-19~\cite{wang2020cord}, to generate a large
knowledge base of 1.5M mechanism relations ({\sc Comb}\xspace). We additionally provide search algorithms to answer queries over this KB.
We evaluate the quality of {\sc Comb}\xspace in three expert studies. We run search queries on our KB and find that experts judge the returned relations to be highly relevant, with precision of 85--90\% and recall $\geq 40\%$. Importantly, in a study with MDs actively treating COVID-19 patients and conducting research in this area, our system is rated substantially (20\%) higher in terms of search quality and utility than the prominent KB-powered PubMed search engine~\cite{x}. Our main contributions include:
\begin{itemize}[topsep=1mm,noitemsep]
\item We introduce a novel schema for {\em mechanisms} that generalizes across many types of mechanistic activities, functions, influences, and associations. We then construct and distribute {\sc Mechanic}\xspace, an annotated dataset comprising 2,370 mechanisms using this schema.
\item Using {\sc Mechanic}\xspace, we train a state-of-the-art information extraction system and apply it to 160K abstracts in the COVID-19 literature, constructing {\sc Comb}\xspace, a knowledge base of 1.5M mechanisms.\footnote{GitHub repository redacted for anonymous review}
\item We analyze the characteristics and quality of {\sc Comb}\xspace, showing the distribution of relations across scientific subfields and comparing the precision of extracted mechanisms to strong baselines using semantic role labeling.
\item We showcase the utility of {\sc Comb}\xspace in answering interdisciplinary scientific search queries in COVID-19 literature, including examples such as climatic effects on SARS-CoV-2 and exploring applications of AI in the CORD-19 corpus. In a study with MDs active in the fight against COVID-19, our system is rated higher than PubMed search in terms of utility and quality.
\end{itemize}
\section{Related work}
\label{sec:mechie}
\begin{table*}[t]
\setlength{\belowcaptionskip}{-0pt}
{\small
\begin{tabular}{p{0.08\linewidth}p{0.205\linewidth}p{0.21\linewidth}p{0.4\linewidth}}
\toprule
\textbf{Schema} &
\textbf{Entity types} &
\textbf{Relations } &
\textbf{Example} \\
\midrule
\parbox{\linewidth}{SciERC}
&\parbox{\linewidth}{CS methods/tasks \newline (free-form spans)}
&\parbox{\linewidth}{used-for}
&\parbox{\linewidth}{Use \gspan{GNNs} for \gspan{relation extraction.}}\\
\midrule
\parbox{\linewidth}{SemRep}
&\parbox{\linewidth}{Clinical (drugs, diseases, anatomy \dots)}
&\parbox{\linewidth}{causes, affects, treats, \newline inhibits, interacts, used \dots}
&\parbox{\linewidth}{\dots intratympanic \gspan{dexamethasone injections} for patients with intractable \gspan{Meniere's disease.}}\\
\midrule
\parbox{\linewidth}{ChemProt}
&\parbox{\linewidth}{Chemicals, proteins}
&\parbox{\linewidth}{direct/indirect regulator, \\ inhibitor, activator \dots}
&\parbox{\linewidth}{\gspan{Captopril} inhibited \gspan{MMP-9} expressions in right ventricles.}\\
\midrule
\parbox{\linewidth}{DDI}
&\parbox{\linewidth}{Drugs}
&\parbox{\linewidth}{interacts}
&\parbox{\linewidth}{\gspan{Quinolones} may enhance the effect of \gspan{Warfarin.}}\\
\midrule
\parbox{\linewidth}{GENIA}
&\parbox{\linewidth}{Proteins, cellular entities}
&\parbox{\linewidth}{binding, modification, \\ regulation \dots}
&\parbox{\linewidth}{\gspan{BMP-6} induced phosphorylation of \gspan{Smad1/5/8.}}\\
\midrule
\parbox{\linewidth}{PICO}
&\parbox{\linewidth}{Clinical}
&\parbox{\linewidth}{Interventions, outcomes}
&\parbox{\linewidth}{The \gspan{bestatin} group achieved \gspan{longer remission.}}\\
\midrule
\parbox{\linewidth}{\textbf{Ours:} \newline \textbf{{\sc Mechanic}\xspace}}
&\parbox{\linewidth}{Medicine, epidemiology, genetics, molecular bio., CS, math, ecology, \newline economics \dots (free-form)}
&\parbox{\linewidth}{direct (activities, functions) / indirect (influences, \\ associations)}
&\parbox{\linewidth}{$\cdot$ \gspan{RL} can be used to learn \gspan{mitigation policies in epidemiological models.} \newline
$\cdot$ \gspan{Histophilus-somni} causes \gspan{respiratory, reproductive, cardiac and neuronal diseases in cattle.}} \\
\bottomrule
\end{tabular}
}
\caption{Our broad concept of mechanisms covers many relations within existing science-IE schemas. The table shows examples of representative schemas, and the types of entities and relations they capture.}
\label{tab:ieexamples}
\end{table*}
\noindent \textbf{Mechanisms in science} The concept of \emph{mechanisms}, also referred to as \emph{functional relations}, is fundamental across the sciences. For example, mechanisms are described in biomedical ontologies \cite{burek2006top,rohl2012mechanisms,keeling2019philosophy} and in engineering \cite{hirtz2002functional}. Mechanisms can be {natural} (e.g., the mechanism by which amylase in saliva breaks down starch into sugar), {artificial} (electronic devices), {non-physical constructs} (algorithms, economic policies), and very often a blend (a pacemaker regulating the beating of a heart through electricity and AI algorithms).
Although seemingly intuitive, exact definitions of mechanisms are subject to debate in the philosophy of science \cite{rohl2012mechanisms,keeling2019philosophy}. An Oxford dictionary definition of mechanisms refers to \textit{a natural or established process by which something takes place or is brought about}. More intricate definitions discuss ``complex systems producing a behavior'', ``entities and activities productive of regular changes'', ``a structure performing a function in virtue of its parts and operations'', or the distinction between ``correlative property changes'' and ``activity determining how a correlative change is achieved'' \cite{rohl2012mechanisms}.
Abstract definitions can help with generalization across many important types of mechanisms. The schema we propose (Sec.~\ref{sec:schema}) is inspired by such definitions, operationalizing them and making them more concrete, and also simple enough for models and human annotators to identify.
\vspace{.1cm} \noindent \textbf{Information extraction from scientific texts} There is a large body of literature on extracting information from scientific papers, primarily in the biomedical sphere. This information often corresponds to very \textit{specific} types of mechanisms, as shown in Tab.~\ref{tab:ieexamples}. Examples include ChemProt \cite{li2016biocreative} with mechanisms of chemical-protein regulation, drug interactions in the DDI dataset \cite{segura2013semeval}, genetic and cellular activities/functions in GENIA \cite{kim2013genia}, semantic roles of clinical entities \cite{kilicoglu2011constructing}, PICO interventions and outcomes \cite{wallace2016extracting,nye2018corpus}, and computer science methods/tasks in SciERC \cite{luan2018multi}. Such schemas have been used, for example, to extract genomic KBs \cite{poon2014literome} and automate systematic reviews \cite{nye2020understanding}.
Our schema draws on these approaches, but with a much broader reach across concepts seen in COVID-19 papers (Tab. \ref{tab:ieexamples}, Fig. \ref{fig:mechtop}).
An important area in information extraction focuses on \emph{open} concepts, with prominent approaches being Open IE \cite{etzioni2008open} and Semantic Role Labeling (SRL; \citealp{carreras2005srl}), which share similar properties and predictions \cite{stanovsky2018supervised}. While such methods are intended to be domain independent, they perform significantly worse in the scientific domain \cite{groth2018open}. \citet{Kruiper2020_SORE} developed a multi-stage process to post-process Open IE outputs, involving trained models and humans to find a balance between generic and fine-grained clusters of relation arguments and omitting noisy clusters.
In contrast, our unified schema enables annotating a dataset of mechanism relations between free-form spans and training IE models to automatically generalize across diverse relation types.
Our schema is also related broadly to the task of training reading comprehension models on procedural texts describing scientific processes (such as short paragraphs written by crowd workers to explain photosynthesis in simple language; \citealp{mishra2018tracking}). Our representation of scientific texts in terms of a graph of causal relations can potentially help infer processes across science.
\vspace{.1cm} \noindent \textbf{COVID-19 IE} Recent work \cite{verspoor2020proceedings} has focused on extracting information from the CORD-19 corpus \cite{wang2020cord}. PICO concepts are extracted and visualized in an exploratory interface in the COVID-SEE system \cite{verspoor2020covid}.
In \citet{wang2020covid}, genes, diseases, chemicals and organisms are extracted and linked to existing biomedical KBs with information such as gene-disease relations. Additional relations based on the GENIA schema are extracted from the text. To address the novel COVID-19 domain, the schema is enriched with new entity types such as viral proteins and immune responses.
In this paper, we focus on a more general schema that captures diverse concepts appearing in literature related to COVID-19, an emerging domain with novel concepts coming from many fields and subfields. The mechanism KB we construct includes---as a subset---diverse biomolecular and clinical information (such as chemical-disease relations) as part of a general mechanism schema.
\section{Mechanism Relation Schema}
\label{sec:schema}
We present a schema that builds upon and consolidates many of the types of mechanisms discussed in Sec.~\ref{sec:mechie}. Our defined schema has three key properties: (1)~it uses a generalized concept of mechanism relations, capturing specific types of mechanisms in existing schema and extending them broadly; (2)~it includes flexible, generic entities not limited to predefined types, and (3)~it is simple enough for human annotators and models to identify in the natural language of scientific texts. This schema enables forming our KB by identifying a set of mechanism relations in a corpus of scientific documents (Sec.~\ref{sec:KB}).
We formally define each mechanism as a relation $(E_1,E_2,\text{{\tt class}})$ between entities $E_1$ and $E_2$, where each entity $E$ is a text span and the {\tt class} indicates the type of the mechanism relation. Entities all share a single common type and can be either natural (e.g., protein functions, viral mechanistic activities) or artificial (e.g., algorithms, devices), to capture the generality of the concepts in science (see Fig.~\ref{fig:mechtop}). We allow each entity to take part in multiple relations (tuples) within a given text, leading to a ``mechanism graph''. Mechanisms are categorized into two coarse-grained classes:\footnote{We also provide a dataset and extraction model for ternary relations in the form of \emph{(subject, object, predicate)}. We focus on the coarse-grained mechanism schema due to its broader flexibility and coverage. See App.~\ref{appendix:granular} for details.}
\vspace{.1cm}\noindent{\bf Direct mechanisms} include {\it activities} of a mechanistic nature -- actions explicitly performed by an entity, such as descriptions of a virus binding to a cell, and explicit references to a function (e.g., a use of a drug for treatment, or the use of AI for drug design as in Fig.~\ref{fig:teaser}).
\vspace{.1cm}\noindent{\bf Indirect mechanisms} include influences or associations without explicit mechanistic information or mention of a function (such as describing observed effects, without
the process involved). These relations correspond more to ``input-output correlations'' \cite{rohl2012mechanisms}, such as indicating that COVID-19 may lead to economic impacts but not \emph{how} (Fig.~\ref{fig:teaser}), as opposed to direct mechanisms describing ``inner workings'' -- revealing more of the intermediate states that lead from initial conditions (COVID-19) to final states (price inflation) or explicitly describing a function. As an example for the utility of this distinction between direct and indirect relations, consider an MD looking to generate a structured list of all \textit{uses} of a treatment (direct mechanism), but not include side effects or complications (indirect).
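For concreteness, a relation in this schema can be represented by a minimal data structure; the sketch below is illustrative only (the field names are our own, not taken from the released dataset):

```python
from dataclasses import dataclass

# Coarse-grained relation classes from the schema.
DIRECT, INDIRECT = "DIRECT", "INDIRECT"

@dataclass(frozen=True)
class Mechanism:
    """A mechanism relation (E1, E2, class) between free-form text spans."""
    e1: str
    e2: str
    rel_class: str  # DIRECT or INDIRECT

# An entity may appear in several relations, forming a "mechanism graph".
rel = Mechanism("spike protein", "viral binding to host cells", DIRECT)
```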
\begin{figure}[t]
\includegraphics[width=1.0\columnwidth]{mechanism/mech_topic_count.pdf}
\caption{{\sc Mechanic}\xspace covers a diverse set of scientific fields. Histogram of domains in {\sc Mechanic}\xspace (sample of 350 relations); relation entities were manually labeled based on a list of scientific disciplines from Wikipedia.}
\label{fig:mechtop}
\end{figure}
\section{KB Construction}
\label{sec:kbc}
\begin{figure*}[t]
\includegraphics[width=\linewidth]{mechanism/mechflow.pdf}
\caption{\textbf{Overview of our approach}. We collect annotations of mechanisms (textual relations) from the CORD-19 corpus, which are used to train an IE model. We apply the model to over 160K documents in the corpus, extracting over 1.5M relations that are fed into our KB. Entity mention spans are embedded with a language model tuned for semantic similarity, and indexed with FAISS for fast similarity search as part of our search interface.}
\label{fig:flow}
\end{figure*}
We describe our approach (depicted in Fig.~\ref{fig:flow}) for extracting a knowledge base of mechanisms using our unified schema. We first curate {\sc Mechanic}\xspace, an annotated dataset of general mechanisms from a small collection of scientific papers (Sec.~\ref{sec:data}). We then train a model on our annotated data to extract mechanism relations from the entire CORD-19 corpus of scientific papers, and use it to build {\sc Comb}\xspace, a corpus-wide knowledge base of mechanisms (Sec.~\ref{sec:kb_mechanisms}) that supports semantic search for relations (Sec.~\ref{sec:KB}).
\subsection{Collecting Mechanism Annotations}
\label{sec:data}
We construct a dataset of mechanism relations in texts randomly sampled from the CORD-19 corpus \cite{wang2020cord} that includes scientific papers connected to COVID-19. To circumvent annotation challenges in scientific datasets~\cite{luan2018multi} and ensure high-quality annotations, we follow a three-stage process of (1) annotating entities and relations using biomedical experts, (2) unifying span boundaries with an NLP expert, and (3) verifying annotations with a bio-NLP expert. Our annotation process is a relatively low-resource and generalizable approach for a rapid response to the COVID-19 emergency.
In the first stage, five annotators with biomedical and engineering background annotate all mechanism relations as defined in Sec.~\ref{sec:schema} (full annotation guidelines are available in our code repository). Relations are annotated as either direct/indirect. Entities are annotated as the longest span of text that is involved in a relation with another entity, while not including redundant or irrelevant tokens. As in related tasks \cite{luan2018multi}, annotators are guided to resolve doubt on span boundaries by selecting the longest relevant span.
Annotators had a one-hour training session. In the first part of the training session, annotation guidelines were reviewed. The guidelines included simple explanations of direct/indirect mechanisms along with introductory examples (e.g., ``\emph{the virus makes use of spike protein to bind to a cell}'', ``\emph{A virus leads to respiratory infection}''). In the second part, annotators saw examples from papers in the annotation interface (see Fig.~\ref{fig:annotations}, App.~\ref{appendix:data_anno}), and performed a few live training annotations.
We initially observed significant variation between annotators in identifying span boundaries for entity annotations, stemming from inherent subjectivity in such annotation tasks \cite{stanovsky2018supervised,luan2018multi} and from lack of NLP experience by some annotators.
In the second stage, an NLP expert annotator conducted a round of style unification by viewing annotations and adjusting span boundaries to be more cohesive while preserving the original meaning, focusing on boundaries that capture essential but not redundant or generic information (e.g., adjusting the span \textit{substantial \textbf{virus replication} by unknown mechanisms} to include only \textit{virus replication}).
Finally, in the third stage, a bio-NLP expert with experience in annotating scientific papers verified the annotations and corrected them as needed. The expert accepted $81\%$ of the annotations from the second stage without modification, confirming the high quality of the stage-2 data. Relation label mismatches accounted for 5\% of the remaining 19\%. Other sources of disagreement were span mismatches and new relations added by the bio-NLP expert adjudicator.
The resulting dataset ({\sc Mechanic}\xspace: \textbf{Mech}anisms \textbf{An}notated in \textbf{C}OVID-19 papers) contains 2,370 relation instances (1,645 direct, 725 indirect) appearing in 1,000 sentences from 250 abstracts.\footnote{The dataset is similar in size to related scientific IE datasets \cite{luan2018multi}, which share related challenges in collecting expert annotations of complex or ambiguous concepts over difficult texts.} The average span length is 4 tokens, and the average distance between relation arguments is 11.4 tokens.
\subsection{Extracting a KB of Mechanisms}
\label{sec:kb_mechanisms}
Using {\sc Mechanic}\xspace, we train an IE model to extract mechanism relations from sentences in scientific documents. We train DyGIE++~\cite{wadden2019entity}, a state-of-the-art end-to-end IE model which jointly extracts entities and relations (without assuming gold entity spans are given), classifying each relation as one of $\{\text{{\tt DIRECT}},\text{{\tt INDIRECT}}\}$.\footnote{We use DyGIE++ with SciBERT \cite{beltagy-etal-2019-scibert} embeddings fine-tuned on our task, perform a hyperparameter grid search (over learning rate and dropout only), and select the best-performing model on the development set ($7e-4$ and $0.43$, respectively). Full details are in App.~\ref{appendix:hyperparam}.}
To form our corpus-level KB, we apply the trained model to each document in our corpus (all 160K abstracts in CORD-19) to extract mechanism relations, and then integrate the extracted relations. We find that our trained model achieves high precision for high-confidence predictions (precision $\geq 80\%$ within the top-$20$ predicted relations; see the $P@K$ figure in App.~\ref{appendix:IE_eval}). We therefore construct our corpus-level KB by filtering out low-confidence predictions.
To integrate relations and entities across the corpus, we use standard surface-level string normalization (such as removing punctuation, lemmatizing, and lowercasing) and unify and normalize entity mentions using coreference clusters of entities within a document.\footnote{We use a pre-trained DyGIE++ model trained on SciERC to obtain coreference clusters.} Each coreference cluster is assigned a representative entity as the mention with the longest span of text, and all other entities in that cluster are replaced with the representative entity. This is particularly useful for normalizing pronouns such as \textit{it} with the original mention they referred to (e.g., a specific virus or method \textit{it} refers to).
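As a rough illustration of the surface-level normalization and coreference-based unification described above (a simplified sketch; the actual pipeline also lemmatizes, which would require an NLP library such as spaCy and is omitted here):

```python
import string

_PUNCT = str.maketrans("", "", string.punctuation)

def normalize_surface(span: str) -> str:
    """Surface-level normalization: strip punctuation, lowercase,
    and collapse whitespace (lemmatization omitted in this sketch)."""
    return " ".join(span.translate(_PUNCT).lower().split())

def representative_mention(cluster):
    """Pick the longest mention in a coreference cluster as its
    representative, so pronouns like "it" are replaced by the
    original mention they refer to."""
    return max(cluster, key=len)
```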
Our final KB ({\sc Comb}\xspace) consists of 1.5M relations of the form $(E_1,E_2,\text{{\tt DIRECT/INDIRECT}})$, filtered by confidence score ($\geq 90\%$), where entities $E_i$ are standardized free-form spans of text.
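Constructing the final KB then amounts to keeping only high-confidence predictions; a minimal sketch follows (the dict keys are hypothetical, and the model's actual output format may differ):

```python
def filter_by_confidence(predictions, threshold=0.90):
    """Keep only predicted relations whose model confidence meets the
    threshold (0.90 is the cutoff used to build the KB)."""
    return [p for p in predictions if p["confidence"] >= threshold]

preds = [
    {"e1": "ACE2", "e2": "viral entry", "class": "DIRECT", "confidence": 0.97},
    {"e1": "it", "e2": "unknown factor", "class": "INDIRECT", "confidence": 0.41},
]
kept = filter_by_confidence(preds)  # only the high-confidence relation survives
```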
\subsection{Semantic Relation Search} \label{sec:KB} The constructed KB enables applications for retrieving relations across concepts from many disciplines. For example, searching for all documents that include mechanisms to incorporate {\it AI} in studies of {\it heart disease} $(E_1 = \text{AI},E_2 = \text{heart disease},\text{{\tt DIRECT}})$ requires going beyond simply finding documents that mention \textit{AI} and \textit{heart disease}. Here, we describe our approach for searching over the KB by encoding entities and relations, capturing related concepts (such as \emph{cardiac disease} and \emph{heart conditions}), as well as simpler surface matches (\emph{artificial intelligence \textbf{methods}}, \emph{artificial intelligence \textbf{models}}).
Specifically, for a given query $\mathbf{q} \coloneqq (E^q_{1},E^q_{2}, \text{{\tt class}})$, our goal is to find mechanisms $r_i$ in {\sc Comb}\xspace whose entities are free-form texts similar to $E^q_{1},E^q_{2}$ in the query. The {\tt class} is used to filter for the type of relation---for example, when explicitly requiring {\tt DIRECT} mechanisms.
\vspace{0.2cm}\noindent\textbf{Entity encoding} We obtain an encoding function $f:E\mapsto\mathbb{R}^d$ to encode all unique spans (entities) in the KB to a $d$ dimensional vector space. The encoding function is derived by fine-tuning a language model (LM) originally trained on PubMed papers~\cite{Gururangan2020DontSP} on semantic similarity tasks. For fine-tuning, we use sentence pairs in STS \cite{cer2017semeval} and SNLI \cite{bowman2015large} following~\citet{reimers-2019-sentence-bert}, and add biomedical sentence pairs from the BIOSSES dataset \cite{souganciouglu2017biosses}.
\vspace{0.2cm}\noindent\textbf{Relation similarity} Given a query $\mathbf{q}$, we rank the set of all {\sc Comb}\xspace relations with the same {\tt class} as the query. For each candidate relation $r=(E_1,E_2,\tt class)$ in {\sc Comb}\xspace, we compute its similarity to the query relation $\mathbf{q}$ as the minimum similarity between encodings of their corresponding entities: $\min\limits_{j \in \{1,2\}} f(E_{j}) \cdot f(E^q_{j})$. With this definition, a relation $(E_1,E_2)$ with $E_1$ very similar to the first entity of the query $E^q_{1}$ but $E_2$ distant from $E^q_{2}$ will be ranked low. For example, with the query $(E^q_{1}=\text{deep learning},E^q_{2}=\text{drugs})$, the relation $(E_1=\text{microscope},E_2=\text{drugs})$ will be ranked low due to the pair (deep learning, microscope).
For efficient search, we create an index of embeddings corresponding to the 900K unique surface forms in {\sc Comb}\xspace and employ a system designed for fast similarity-based search \cite{JDH17}.
\section{Evaluating {\sc Comb}\xspace}
\label{sec:kbeval}
In this section, we evaluate the constructed KB of mechanisms in terms of correctness and informativeness (Sec.~\ref{sec:kb_eval_corr}) and its utility in searching for mechanisms (Sec.~\ref{sec:kb_eval_util}). Our main goal is to verify that the mechanism relations are of sufficiently high quality to support our large-scale KB and search applications. We further show that our schema is useful compared to other schemas.
\begin{table*}[t]
\begin{subtable}[t]{\textwidth}
\centering
{\small
\begin{tabular}{|p{0.23\linewidth}|p{0.45\linewidth}|}
\hline
\textbf{Relation query} &
\textbf{Example results from KB search interface}\\
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\vspace{.2\baselineskip}\includegraphics[height=.9cm]{screenshotsUI/warm_q.pdf}\vspace{.2\baselineskip}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/warm.pdf}} \end{minipage}}\\
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\vspace{.2\baselineskip}\includegraphics[height=.9cm]{screenshotsUI/bilat_q.pdf}\vspace{.2\baselineskip}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/bilat.pdf}} \end{minipage}}\\
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\vspace{.2\baselineskip}\includegraphics[height=1cm]{screenshotsUI/aero_q.pdf}\vspace{.2\baselineskip}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/aero.pdf}}\end{minipage}}\\
\hline
\end{tabular}
}
\caption{\textbf{Viral mechanism search}. Queries for $(E_1,E_2)$ relations, and example retrieved results.}
\label{tab:searchres_ex}
\end{subtable}
\begin{subtable}[t]{\textwidth}
\centering
\vspace{0.25cm}
{\small
\begin{tabular}{|p{0.23\linewidth}|p{0.45\linewidth}|}
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\vspace{.4\baselineskip}\includegraphics[height=1cm]{screenshotsUI/cnn_q.pdf}\vspace{.4\baselineskip}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/cnn.pdf}} \end{minipage}}\\
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\includegraphics[height=1cm]{screenshotsUI/compv_q.pdf}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/compv.pdf}} \end{minipage}}\\
\hline
\parbox{\linewidth}{\begin{minipage}{\linewidth}{\includegraphics[height=1cm]{screenshotsUI/gnn_q.pdf}} \end{minipage}}
&\parbox{\linewidth}{ \begin{minipage}{\linewidth}{\includegraphics[width=\linewidth]{screenshotsUI/gnn.pdf}\vspace{-0.5\baselineskip}} \end{minipage}}\\
\hline
\end{tabular}
}
\caption{\textbf{AI search}. Queries consist of only $E_1$, to find all applications of AI approaches/areas.}
\label{tab:searchres_ex2}
\end{subtable}
\caption{Example search queries and results for the viral mechanism and AI applications tasks.}
\end{table*}
\subsection{KB Correctness and Informativeness}
\label{sec:kb_eval_corr}
We employ two annotators with biomedical and CS backgrounds to judge the quality of the predicted relations in {\sc Comb}\xspace. In particular, following \citet{groth2018open},
annotators are given a predicted relation together with the sentence from which it was extracted. We collapse all entities/relations into one generic type for this analysis. Annotators are asked to label the predicted relation as correct if (1) it accurately reflects a mechanistic relation mentioned in the sentence (\textit{correctness}), and (2) the extracted entities and relation label are sufficient to convey the meaning of the relation without referring to the source sentence (\textit{informativeness}). We collect human judgements for 300 predicted relations for our approach and baselines, sampled from 150 randomly selected sentences. Agreement is $71\%$ by Cohen's kappa and $73\%$ by the Matthews correlation coefficient.
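For reference, agreement on binary correct/incorrect judgements can be computed with a standard Cohen's kappa (a generic implementation, not the authors' evaluation script):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators over binary labels (0/1):
    observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # each annotator's rate of label 1
    p_e = pa * pb + (1 - pa) * (1 - pb)          # expected chance agreement
    return (p_o - p_e) / (1 - p_e)
```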
\vspace{.1cm} \noindent {\bf Comparing KB quality to other schemas} To showcase the benefit of our approach, we compare the relations extracted using a DyGIE model trained on {\sc Mechanic}\xspace versus a DyGIE model trained on the resources most related to our mechanisms: SemRep \cite{kilicoglu2011constructing}, which captures a wide range of biomedical relations (such as drug-drug interactions), and SciERC \cite{luan2018multi}, which contains relations relevant to computer science (such as ``method-task'' and ``used-for'' relations).\footnote{We use an expert annotator to align external resources to our {\tt direct} or {\tt indirect mechanism} annotations (e.g., USED-FOR is mapped to \textit{direct} mechanism).} In addition, we compare with a Semantic Role Labeling (SRL) method \cite{shi2019simple} that captures broad relations between free-form spans focusing on agents and actions, and a neural OpenIE model \cite{stanovsky2018supervised}.
Fig.~\ref{fig:kb_eval} (left) shows that $88\%$ of relations from {\sc Comb}\xspace are marked as correct by human raters, demonstrating that our approach extracts mechanism relations with better quality than external resources.\footnote{We also experiment with automated evaluation. We split {\sc Mechanic}\xspace into train/dev/test sets (170/30/50 abstracts), and obtain $F1=50.2$ for entity detection, $F1=45.6$ for relation detection and $F1=42.8$ for classification, on par with performance in other similar scientific IE tasks \cite{luan2018multi}. See more details in App.~\ref{appendix:cofie_model}.} These results suggest that our predicted relations are of overall high quality and can be used to build our corpus-level KB and explore its utility.
\vspace{.1cm} \noindent {\bf Examining Generalization} COVID-19 papers are highly diverse both topically and chronologically. We conduct a small-scale preliminary experiment examining whether a model trained on {\sc Mechanic}\xspace can generalize to capture mechanism relations in general biomedical papers, drawn from the much larger corpus of open-access papers on PubMed Central (PMC).\footnote{\href{https://www.ncbi.nlm.nih.gov/pmc/}{https://www.ncbi.nlm.nih.gov/pmc/}} We randomly sample 200 predicted relations from papers across the entire PMC corpus and label them using the same criteria as above. As expected, performance drops, but, encouragingly, it remains reasonably high: after filtering out predictions with confidence lower than 90\% (as in the construction of {\sc Comb}\xspace), 76\% of relations are judged correct. With a confidence threshold of 95\% (which retains 70\% of the samples), the rate of correct predictions is 78\%. In future work, it would be interesting to fine-tune our model on a small set of labeled examples from the general PMC corpus to potentially improve these results.
\subsection{{\sc Comb}\xspace Utility}
\label{sec:kb_eval_util}
We design several search tasks and user studies to evaluate the utility of the constructed KB (Sec.~\ref{sec:searchQuality}) and compare it with the PubMed medical KB and search engine (Sec.~\ref{sec:pubmed}), as judged by medical doctors working on the front lines of COVID-19 treatment and research. All tasks are designed to evaluate our framework's utility in helping researchers and clinicians looking to quickly search for mechanisms or cause-effect relations in the literature and retrieve a list of structured results.
\subsubsection{Search Quality} \label{sec:searchQuality} We form search queries based on a wide range of topics pertaining to (1) SARS-CoV-2 mechanisms (such as modes of transmission, drug effects, climatic influences, molecular-level properties) and (2) applications of AI in this area. Tab.~\ref{tab:searchres_ex} and ~\ref{tab:searchres_ex2} show queries and example relations returned from {\sc Comb}\xspace, along with the context sentences from which they were extracted.
\vspace{.1cm} \noindent {\bf Viral mechanism search} Queries are formed based on statements in recent scientific claim-verification work (\citealp{wadden2020fact}; see full list in App.~\ref{appendix:kb_util}). For example, for the statement \emph{the coronavirus cannot thrive in warmer climates}, we form the query as
$(E_1=\text{Warm climate},E_2=\text{coronavirus})$ (see Tab.~\ref{tab:searchres_ex} row 1).
For statements reflecting an indirect association/influence, we filter for {\tt INDIRECT} relations (Tab.~\ref{tab:searchres_ex} row 2).
For statements that reflect an undirected mechanism relation (e.g., \emph{Lymphopenia is associated with severe COVID-19 disease}), we query for both directions.
\vspace{.1cm} \noindent{\bf AI applications search}
This task is designed to explore the uses of AI in COVID-19 papers (Tab.~\ref{tab:searchres_ex2}). We use queries where the first entity $E_1$ is a leading subfield or method within AI (e.g., \textit{deep reinforcement learning} or \textit{text analysis}), and the second entity $E_2$ is left unspecified. Since all queries relate to \textit{uses} of AI, we filter for {\tt DIRECT} relations. These open-ended queries simulate an exploratory search scenario, and can potentially surface inspirations for new applications of AI against COVID-19 or help users discover where AI is being harnessed.
\vspace{.1cm}\noindent {\bf Evaluation} Expert annotators are instructed to judge whether a relation is relevant to the query and whether the sentence actually expresses the mechanism. These annotations are used as ground-truth labels to compute precision/recall scores of the relations extracted by our algorithm. Since it is not feasible to label every relation, annotators are shown a list of 20 relations for each query, including high- and low-ranked relations returned by our search algorithm.\footnote{Specifically, for each query we retrieve the top-1000 similar relations from {\sc Comb}\xspace, ranked as described in Sec.~\ref{sec:kbc}, select the top and bottom 10 relations ($20$ per query, $200$ per task, $400$ in total), shuffle their order, and present them to annotators together with the original sentence from which each relation was extracted.} In total, we use 5 annotators to obtain 1,700 relevance labels across both tasks. Inter-annotator agreement is high by several metrics, ranging from $0.7$ to $0.8$ depending on the metric and task; see App.~\ref{appendix:kb_util}. Annotators have graduate/PhD-level backgrounds in medicine or biology (for the first task) and in CS or biology (for the second task).
\vspace{.1cm} \noindent {\bf Results} Fig.~\ref{fig:kb_eval} (center) shows our results for both tasks. For biomedical search queries, we observe $90\%$ precision that remains stable for recall values as high as $70\%$. For {\it AI applications} we observe a precision of $85\%$ at a recall of $40\%$ that drops more quickly. This lower precision is likely due to the fact that $E_2$ is unspecified, leading to a wider range of results with more variable quality.
Overall, these results showcase the effectiveness of our approach in searching for mechanisms between diverse concepts in COVID-19 papers.
\subsubsection{Comparing {\sc Comb}\xspace\ with PubMed}
\label{sec:pubmed}
This experiment compares the utility of {\sc Comb}\xspace in structured search for causal relationships of clinical relevance to COVID-19 with PubMed\footnote{\href{https://pubmed.ncbi.nlm.nih.gov/}{https://pubmed.ncbi.nlm.nih.gov/}}---a prominent search engine for biomedical literature that clinicians and researchers frequently use as their go-to tool. PubMed allows users to control structure (e.g., with MeSH terms or pharmacological actions), is supported by a KB of biomedical entities used for automatic query expansion, and has many other functions.
\vspace{.1cm} \noindent {\bf Expert evaluation} We recruit five expert MDs---with a wide range of specialties including gastroenterology, cardiology, pulmonary and critical care---who are active in treating COVID-19 patients and in research. Each expert completed the search tasks, in randomized order, using both PubMed and our {\sc Comb}\xspace UI, which shows the full set of ranked relations, the sentence snippet mentioning each relation, the paper title, and a hyperlink to the abstract. After all search tasks were completed for both systems, experts were given a questionnaire of 21 seven-point Likert-scale questions judging system utility, interface, and search quality. The first 16 questions are taken from the Post Study System Usability Questionnaire (PSSUQ; \citealp{lewis2002psychometric}) widely used in system quality research. The last 5 questions are designed by the authors to evaluate search quality, such as overall result relevance and ranking (for the full question list, see App.~\ref{appendix:kb_util}).
Each question is asked twice, once for PubMed and once for our system, leading to 21$\times$2$\times$5 = 210 responses.
\vspace{.1cm} \noindent {\bf Search queries} We provide experts with seven search queries that were created by an expert medical researcher, relating to causal links (e.g., between COVID-19 and cardiac arrhythmias) and functions (e.g., Ivermectin as a treatment). See full set of queries in App.~\ref{appendix:human_evaluation_guidelines}.
\vspace{.1cm} \noindent {\bf Results} Fig.~\ref{fig:kb_eval} (right) shows the average Likert scores (normalized to [0\%,100\%]) across all questions and users for {\sc Comb}\xspace and PubMed.
The results show that the medical experts strongly prefer {\sc Comb}\xspace to PubMed (overall average of 91\% vs. 74\%, with non-normalized scores of 6.6 vs. 5.2). On average across the 21 questions, the majority of the five experts assigned our interface a higher score than PubMed, at an average rate of 3.5/5. This rate increases further when considering ties---on average 4.75/5 of the experts assigned our system a score equal or higher than PubMed.
Overall, our system significantly outperforms PubMed in this task, with an average gap of roughly 20\% for search and utility-related questions (Wilcoxon signed rank test p-value is significant at $4.77\times10^{-7}$). These results are particularly interesting and indicate the potential of {\sc Comb}\xspace because of the experts' strong familiarity with PubMed and the simple nature of our UI.
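The significance test above is a Wilcoxon signed-rank test over the paired per-question scores. A minimal sketch of the statistic itself (in practice one would use \texttt{scipy.stats.wilcoxon}, which also returns the p-value; the scores below are illustrative, not the study data):

```python
def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank statistic W = min(W+, W-) over the nonzero
    paired differences, with average ranks for tied magnitudes."""
    diffs = sorted((a - b for a, b in zip(x, y) if a != b), key=abs)
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        j = i
        while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
            j += 1
        for k in range(i, j):            # tied |d|: assign the average rank
            ranks[k] = (i + 1 + j) / 2
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical paired Likert scores (illustrative, not the study data):
comb   = [7, 6, 7, 5, 6, 7, 6]
pubmed = [5, 5, 6, 4, 5, 6, 4]
w_stat = wilcoxon_signed_rank(comb, pubmed)  # every difference favors comb, so W = 0
```

A small W relative to the number of pairs indicates that one system is preferred almost uniformly across questions.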
Our system searches and retrieves \textit{relations}---only texts explicitly mentioning relations that match the input query. This often more precisely reflects the query than results returned by PubMed, which do not have the additional layer of structured information in {\sc Comb}\xspace. For example, for the query ($E_1$=cardiac arrhythmias, $E_2$=COVID-19), PubMed returns the following title of one paper: \textit{Guidance for cardiac electrophysiology during the \textbf{COVID-19} pandemic [....] Electrocardiography and \textbf{Arrhythmias} Committee}---$E_1$ and $E_2$ are both mentioned, but not within a mechanism relation.
\section{Conclusion}
\label{sec:conclusion}
We introduced a unified schema for {\em mechanisms} that generalizes across many types of activities, functions and influences. We constructed and distributed {\sc Mechanic}\xspace, a dataset of papers related to COVID-19 annotated with this schema. We trained an IE model and applied it to COVID-19 literature, constructing {\sc Comb}\xspace, a KB of 1.5M mechanisms. We showcased the utility of {\sc Comb}\xspace in structured search for mechanism relations in COVID-19 literature. In a study with MDs active in the fight against the disease, our system is rated higher than PubMed search for both utility and quality. Our unified view of mechanisms can help generalize and scale the study of COVID-19 and related areas. More broadly, we envision a KB of mechanisms that enables the transfer of ideas across the literature \cite{hope2017accelerating}, such as by finding relationships between mechanisms in SARS-CoV-2 and other viruses, and assists in literature-based discovery \cite{Swanson1996UndiscoveredPK} by finding cross-document causal links.
\section*{Ethical considerations}
Our knowledge-base and search system is primarily intended to be used by biomedical researchers working on COVID-19, and researchers from more general areas across science. Models trained and developed on our dataset are likely to serve researchers working on COVID-19 information extraction, and scientific NLP more broadly. We hope our system will be helpful for accelerating the pace of scientific discovery, in the race against COVID-19 and beyond.
Our knowledge-base can include incorrect information to the extent that scientific papers can have wrong information. Our KB includes metadata on the original paper from which the information was extracted, such as journal/venue and URL. Our KB can also miss information included in some papers.
Our data collection process respected intellectual property, using abstracts from CORD-19 \cite{wang2020cord}, an open collection of COVID-19 papers. Our knowledge-base fully attributes all information to the original papers. All annotators were given extensive background on our objectives, and told their annotations would help build and evaluate a knowledge-base and search engine over COVID-19 research. Graduate-student annotators were paid 25 USD per hour. MD experts helped evaluate the tool on a voluntary basis.
\section{Data Annotation}
\label{appendix:data_anno}
\subsection{ Granular Relations}
\label{appendix:granular}
In addition to the two coarse-grained relation classes, we also experimented with \emph{granular} relations where the {\tt class} represents a specific type of mechanism relation explicitly mentioned in the text (we constrain the mention to a single token for simplicity, e.g., \textit{binds, causes, reduces}; see Fig.~\ref{fig:eventfig} for examples of granular relations). While more granular, these relations are also less general, as the natural language of scientific papers describing mechanisms often does not conform to this more rigid structure (e.g., long-range and implicit causal relations). We thus focus most of our work on coarse-grained relations. We release our dataset and a model for extracting granular relations in our code repository, to support future research and applications.
\begin{figure}[h]
\setlength{\belowcaptionskip}{-12pt}
\centering
\includegraphics[width=0.9\linewidth]{mechanism/events_new.pdf}
\caption{Examples of granular relations.}
\label{fig:eventfig}
\end{figure}
\subsection{Annotation Collection}
\label{appendix:anno_collection}
We utilize the Prodigy \cite{Prodigy:2018} annotation platform which provides the ability to select span boundaries and relations with ease. Each annotator undergoes a training session in which we cover the definitions of spans and relations as well as use of the platform. See annotation guidelines in our code repository for more details and examples.
\label{appendix:anno_example}
\begin{table*}[h]
\centering
\setlength{\belowcaptionskip}{-8pt}
\begin{tabular}{|p{7cm}|p{4cm}|p{4cm}|}
\hline
\textbf{Context} & \textbf{Annotator 1} & \textbf{Annotator 2} \\
\hline
Predicted siRNAs should effectively silence the genes of SARS - CoV-2 during siRNA mediated treatment. & (predicted siRNAs, silence the genes of SARS - CoV-2, DIRECT)
& (siRNAs, silence the genes of SARS - CoV-2 during siRNA mediated treatment, DIRECT) \\
\hline
Recent reports show that the inhibition of \textbf{NSP4} expression by small interfering RNAs leads to alteration of the production and distribution of other viral proteins and mRNA synthesis , suggesting that NSP4 also \textbf{affects} \textbf{virus} \textbf{replication} by unknown mechanisms. & (NSP4, affects virus replication, INDIRECT) & (NSP4, virus replication by unknown mechanisms, INDIRECT) \\
\hline
\end{tabular}
\caption{Examples of differences between two annotators. The core meaning of the relation is equivalent across both annotators.}
\label{table:annotation_errors}
\end{table*}
Tab.~\ref{table:annotation_errors} shows examples of differences between annotations, with disagreements in the span boundaries. This reflects the challenging nature of our task with relations between flexible, open entities.
\begin{figure*}
\centering
\setlength{\belowcaptionskip}{-8pt}
\includegraphics[width=0.45\linewidth]{mechanism/Anno_direct.png}
\includegraphics[width=0.48\linewidth]{mechanism/cofie-t-gui.pdf}
\caption{Example of the annotation interface for coarse (left) and granular (right) mechanism relations.}
\label{fig:annotations}
\end{figure*}
\section{IE Evaluations }
\label{appendix:IE_eval}
\subsection{Automated evaluation metrics}
\label{appendix:eval_metrics}
\vspace{.1cm}\noindent{\bf Entity detection}
Given a boolean span matching function $\textrm{m}(s_1, s_2) = \mathbbm{1}(s_1$ matches $s_2)$, a predicted entity mention $\hat{e}$ is correctly \emph{identified} if there exists some gold mention $e^*$ in $\mathcal{D}$ such that $\textrm{m}(\hat{e}, e^*) = 1$ (since there is only one entity type, an entity mention is correctly classified as long as its span is correctly identified).
Following common practice in work on Open IE \cite{stanovsky2018supervised}, we report results using a partial-matching similarity function, in this case based on the widely-used Rouge score: $\textrm{m}_{\textrm{rouge}}(s_1, s_2)$ is true if Rouge-L$(s_1, s_2) > 0.5$ \cite{lin2004rouge}.
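Concretely, the matcher $\textrm{m}_{\textrm{rouge}}$ reduces to a thresholded Rouge-L (longest-common-subsequence) F1 between the two token sequences. A minimal sketch of the criterion (in practice a standard Rouge implementation would be used):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ta in enumerate(a):
        for j, tb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ta == tb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_match(s1, s2, threshold=0.5):
    """Boolean span matcher: true iff the Rouge-L F1 of the
    whitespace-tokenized spans exceeds the threshold."""
    t1, t2 = s1.split(), s2.split()
    lcs = lcs_len(t1, t2)
    if lcs == 0:
        return False
    p, r = lcs / len(t1), lcs / len(t2)
    return 2 * p * r / (p + r) > threshold
```

Partial matching tolerates the span-boundary disagreements illustrated in Tab.~\ref{table:annotation_errors}, where annotators agree on the core entity but differ on modifiers.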
\vspace{.2cm}\noindent{\bf Relation detection / classification} Given a boolean span matching function, a predicted coarse-grained relation $\hat{r} = (\hat{E}_1, \hat{E}_2, \hat{y})$ is correctly \emph{identified} if there exists some gold relation $r^* = (E_1^*, E_2^*, y^*)$ in $\mathcal{D}$ such that $\textrm{m}(\hat{E}_1, E_1^*) = 1$ and $\textrm{m}(\hat{E}_2, E_2^*) = 1$. It is properly \emph{classified} if, in addition, $\hat{y} = y^*$.
Relation identification measures the model's ability to identify mechanisms of any type (direct or indirect), while relation classification aims to discriminate between direct and indirect mechanism mentions in the text.
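The two relation metrics can be computed by scanning predicted relations against the gold set; a schematic sketch (function and variable names are ours):

```python
def score_relations(predicted, gold, match):
    """Count predicted relations that are correctly identified (both entity
    spans match some gold relation) and correctly classified (a matched
    gold relation also carries the same DIRECT/INDIRECT label)."""
    identified = classified = 0
    for e1, e2, label in predicted:
        hits = [g for g in gold if match(e1, g[0]) and match(e2, g[1])]
        if hits:
            identified += 1
            if any(g[2] == label for g in hits):
                classified += 1
    return identified, classified

# Toy example with exact-string matching standing in for the Rouge-L matcher:
exact = lambda a, b: a == b
gold = [("NSP4", "virus replication", "INDIRECT")]
pred = [("NSP4", "virus replication", "INDIRECT"),
        ("NSP4", "virus replication", "DIRECT"),
        ("siRNAs", "gene silencing", "DIRECT")]
```

Here two predictions are identified (their spans match the gold relation) but only one is classified correctly, since the second carries the wrong label.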
\subsection{Baselines}
\label{appendix:baseline}
\paragraph{SemRep} The SemRep dataset \cite{kilicoglu2011constructing} consists of 500 sentences from MEDLINE abstracts annotated for semantic predications. Concepts and relations in this dataset relate to clinical medicine and are drawn from the UMLS biomedical ontology \cite{bodenreider2004unified}, with entities such as drugs and diseases. Some of the relations correspond to mechanisms (such as X TREATS Y or X CAUSES Y); with guidance from domain experts, we map these relations to our mechanism classes and use them to train DYGIE. Other relations are broader, such as PART-OF or IS-A; we do not attempt to capture these categories as they often do not reflect a functional relation.
\paragraph{SciERC} The SciERC dataset \cite{luan2018multi} consists of 500 abstracts from computer science papers annotated with a set of relations, including USED-FOR relations between methods and tasks. We naturally map this relation to our {\tt DIRECT} label, discard the other relation types, and use this dataset to train DYGIE.
\paragraph{SRL} Finally, we also use a recent BERT-based SRL model \cite{shi2019simple}. We select relations of the form (\texttt{Arg0, verb, Arg1}), and evaluate using our partial-match metrics applied to \texttt{Arg0} and \texttt{Arg1} respectively.
\subsection{Hyperparameter Search}
\label{appendix:hyperparam}
We perform hyperparameter search over these sets of parameters:
\begin{itemize}
\item {\bf Dropout} is randomly selected from the interval $[0, 0.5]$.
\item {\bf Learning rate} is randomly selected from the interval $[1e-5, 1e-2]$.
\item {\bf Hidden size} is randomly selected from the interval $[64, 512]$.
\end{itemize}
Hyperparameter search is implemented with the Allentune library \cite{showyourwork}. For each experiment we sample $30$ configurations from the search space, and select the best-performing parameters using the development set.
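Schematically, the search amounts to drawing independent samples from the three ranges above (a Python sketch; the log-uniform choice for the learning rate is our assumption, and names are ours rather than Allentune's API):

```python
import random

SEARCH_SPACE = {"dropout": (0.0, 0.5), "lr_exp": (-5, -2), "hidden": (64, 512)}

def sample_config(rng):
    """Draw one hyperparameter configuration from the search space above."""
    return {
        "dropout": rng.uniform(*SEARCH_SPACE["dropout"]),
        # log-uniform over [1e-5, 1e-2] (our assumption for the sketch)
        "learning_rate": 10 ** rng.uniform(*SEARCH_SPACE["lr_exp"]),
        "hidden_size": rng.randint(*SEARCH_SPACE["hidden"]),
    }

rng = random.Random(42)
configs = [sample_config(rng) for _ in range(30)]  # 30 samples, as in the search
```

Each configuration is then trained and scored on the development set, and the best one is kept.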
\subsection{Best Performing Model over {\sc Mechanic}\xspace{}}
\label{appendix:cofie_model}
We use the DYGIE package~\cite{wadden2019entity} to train models for entity and relation extraction over {\sc Mechanic}\xspace{}, and we utilize SciBERT \cite{beltagy-etal-2019-scibert} embeddings, which we fine-tune with a learning rate of $5e-5$ and weight decay of $0.01$. The training was run for $100$ epochs with the \texttt{slanted\_triangular} \cite{howard2018universal} learning rate scheduler. We used the AdamW \cite{Loshchilov2017FixingWD} optimization algorithm. In our objective function we assign equal weights to the relation and span loss terms. The maximum allowed span length is $12$.
The hyperparameters achieving the best performance on our development set are $0.43$, $7e-4$ and $215$ for dropout, learning rate and hidden size respectively. All other parameters are kept at default values (available in our code repository).
Tab.~\ref{table:IEresultsie} compares the performance of our best model with the baselines introduced in Sec.~\ref{appendix:baseline}. Fig.~\ref{fig:p_at_K} shows Precision@K results, with our model reaching high absolute numbers.
\begin{table}
\setlength{\belowcaptionskip}{-6pt}
\centering
\begin{small}
\begin{tabular}{l|rrr}
\toprule
Model &\multicolumn{1}{c}{\textbf{RC}} & \multicolumn{1}{c}{\textbf{RD}} & \multicolumn{1}{c}{\textbf{ED}} \\
\midrule
OpenIE & - & 15.5 & 25.6 \\
SRL & - & 24.5 & 27.7 \\
\midrule
DYGIE(SemRep) & 6.8 & 8.3 & 32.5 \\
DYGIE(SciERC) & 18.6 & 20.4 & 39.2 \\
\midrule
DYGIE({\sc Mechanic}\xspace) & \textbf{42.8} & \textbf{45.6} & \textbf{50.2} \\
\bottomrule
\end{tabular}
\end{small}
\caption{F1 scores. Relations from SRL and OpenIE do not map directly to {\tt DIRECT MECHANISM} and {\tt INDIRECT MECHANISM} classes, and do not have relation classification scores.}
\label{table:IEresultsie}
\end{table}
\begin{figure}[t]
\includegraphics[width=\linewidth]{mechanism/P_at_K_naacl.png}
\caption{Precision@K of our model compared with pre-trained SciERC and SemRep baselines. P@K for our model is high in absolute numbers.}
\label{fig:p_at_K}
\end{figure}
\subsection{Granular relation prediction}
\label{appendix:granular_preds}
Granular relations are evaluated in the same fashion as coarse-grained relations, with the additional requirement that the predicted predicate token $\hat{p}$ must match the gold $p^*$.
Our evaluation shows that the model trained to predict granular triples achieves an F1 score of $44.0$. When predicting relations without trigger labels (i.e., {\tt (s, o)}), the model achieves an F1 score of $53.4$. These results are not comparable to those for {\sc Mechanic}\xspace{}, which includes more documents and relations that do not directly conform to the {\tt (s, o, p)} schema.
\subsection{Best Performing Model over {\sc Mechanic-G}\xspace{}}
\label{appendix:granular_model}
Here too we use the DYGIE package \cite{wadden2019entity} with SciBERT \cite{beltagy-etal-2019-scibert}. Since the annotation schema of our granular relations is technically equivalent to the event extraction task in \citet{wadden2019entity}, we make use of the event extraction functionality of DYGIE. For fine-tuning the embedding weights of SciBERT we used the same learning rate as for {\sc Mechanic}\xspace{}, and the best hyperparameters found are $0.30$, $1.1e-3$ and $372$ for dropout, learning rate and hidden layer size respectively. All other parameters are kept at default values (available in our code repository).
\section{Human evaluation guidelines}\label{appendix:human_evaluation_guidelines}
\subsection{KB Correctness and Informativeness evaluation guideline}
\label{appendix:KB_eval}
\paragraph{Relation quality evaluations over various domains} For the task involving the exploration of viral mechanisms, we used 10 recent scientific claims taken from \cite{wadden2020fact}. These 10 claims, and the queries constructed for them, are as follows:
\begin{itemize}
\item Remdesevir has exhibited favorable clinical responses when used as a treatment for coronavirus. X = [Remdesevir], Y = [SARS-CoV-2, coronavirus, COVID-19]
\item Lopinavir / ritonavir have exhibited favorable clinical responses when used as a treatment for coronavirus. X = [Lopinavir, Ritonavir], Y = [SARS-CoV-2, coronavirus, COVID-19]
\item Aerosolized SARS-CoV-2 viral particles can travel further than 6 feet. X = [Air, Aerosols, Droplets, Particles, Distance], Y = [SARS-CoV-2 transmission]
\item Chloroquine has shown antiviral efficacy against SARS-CoV-2 in vitro through interference with the ACE2-receptor mediated endocytosis. X = [Chloroquine], Y = [ACE2-receptor, Endocytosis, interference with the ACE2-receptor mediated endocytosis.]
\item Lymphopenia is associated with severe COVID-19 disease. X = [Lymphopenia], Y = [severe COVID-19 disease, COVID-19]
\item Bilateral ground glass opacities are often seen on chest imaging in COVID-19 patients. X = [Bilateral ground glass opacities], Y = [chest imaging in COVID-19 patients]
\item Cardiac injury is common in critical cases of COVID-19. X = [COVID-19], Y = [Cardiac injury]
\item Cats are carriers of SARS-CoV-2. X = [Cats], Y = [SARS-CoV-2]
\item Diabetes is a common comorbidity seen in COVID-19 patients. X = [Diabetes], Y = [COVID-19]
\item The coronavirus cannot thrive in warmer climates. X = [warmer climates], Y = [coronavirus]
\item SARS-CoV-2 binds ACE2 receptor to gain entry into cells. X = [SARS-CoV-2], Y = [binds ACE2 receptor, binds ACE2 receptor to gain entry into cells]
\end{itemize}
For the AI open-ended search task, we used the following approaches/areas as queries (see guidelines and examples in our code repository): \textit{artificial intelligence, machine learning, statistical models, predictive models, Graph Neural Network model, Convolutional Neural Network model, Recurrent Neural Network model, reinforcement learning, image analysis, text analysis, speech analysis}.
For both tasks, we use the following metrics to measure pairwise agreement between annotators (Fig.~\ref{fig:anno_agree}): standard accuracy (proportion of matching rating labels), F1 (taking into account both precision and recall symmetrically), balanced accuracy (with class weights to down-weight the higher proportion of positive ratings), and finally the Matthew Correlation Coefficient (MCC) score, using the corresponding functions in the Scikit-Learn Python library \cite{scikit-learn}.
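For binary relevance labels, all four agreement metrics can be computed from the pairwise confusion counts. A self-contained sketch of what the corresponding scikit-learn functions (\texttt{accuracy\_score}, \texttt{f1\_score}, \texttt{balanced\_accuracy\_score}, \texttt{matthews\_corrcoef}) return when one annotator's labels are treated as the reference:

```python
def pairwise_agreement(a, b):
    """Agreement between two annotators' binary (0/1) labels, treating
    annotator b as the reference."""
    tp = sum(x == 1 and y == 1 for x, y in zip(a, b))
    tn = sum(x == 0 and y == 0 for x, y in zip(a, b))
    fp = sum(x == 1 and y == 0 for x, y in zip(a, b))
    fn = sum(x == 0 and y == 1 for x, y in zip(a, b))
    acc = (tp + tn) / len(a)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    rec_pos = tp / (tp + fn) if tp + fn else 0.0   # recall on positives
    rec_neg = tn / (tn + fp) if tn + fp else 0.0   # recall on negatives
    bal_acc = (rec_pos + rec_neg) / 2              # down-weights the majority class
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "f1": f1,
            "balanced_accuracy": bal_acc, "mcc": mcc}
```

Balanced accuracy and MCC are the informative metrics here, since the raw accuracy is inflated by the high proportion of positive ratings.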
\begin{figure}[t]
\includegraphics[width=\linewidth]{mechanism/AI_BIO_KB_agree.pdf}
\caption{Average pairwise annotator agreement by several metrics. In the AI task human labels were more diverse but with overall high precision / recall.}
\label{fig:anno_agree}
\end{figure}
\paragraph{Comparing KB quality to other schema}
We sampled the relations predicted by our model and the baseline models introduced in App.~\ref{appendix:baseline}. We randomly selected 20 abstracts from the {\sc Mechanic}\xspace{} test set and show at most two predictions (if available) for each sentence within that abstract. In total 300 relations are extracted.
Each relation is shown separately to two bio-NLP expert annotators (with annotators blind to the condition), who label each relation with a 0/1 label (1 if the relation is both \emph{correct} and \emph{informative}).
\subsection{KB Utility}
\label{appendix:kb_util}
MDs are instructed to search with our interface and with PubMed search, with the following 7 topics:
\begin{itemize}
\item Query 1: Cardiac arrhythmias caused by COVID-19
\item Query 2: Hydroxychloroquine and its effect on COVID-19
\item Query 3: Ivermectin and its role in management of COVID-19
\item Query 4: Pulmonary embolism effect on complications related to COVID-19
\item Query 5: Liver disease and COVID-19
\item Query 6: Inflammatory bowel disease and COVID-19
\item Query 7: Antibody therapy and its uses/effects on COVID-19
\end{itemize}
The full list of the post-study evaluation questions given to MDs is shown in Fig.~\ref{fig:kb_survey}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{mechanism/KB_survey.png}
\caption{List of post-study questions given to MDs.}
\label{fig:kb_survey}
\end{figure}
\section{I. INTRODUCTION}\label{intro}
Recently, both experimental and theoretical studies of spallation reactions have grown enormously, ever since the spallation neutron source proved itself a powerful tool in research and applications \cite{source1, source2}. In the social and ecological arena, the spallation reaction lies at the heart of the transmutation of long-lived radiotoxic nuclear waste, whose half-life can be drastically shortened to an acceptable scale of hundreds of years via fast fission induced by neutrons. This process at the same time generates enough energy to supply the electric grid and sustain the facility itself \cite{ads1, ads2, ads3, ads4, ads5, ads6, ads7}. In space missions, preflight assessments of the damaging effects on astronauts and electronic parts must be undertaken with spallation models \cite{space} to ensure the success of a space flight. In astrophysics, spallation cross-sections are a key input to the evaluation of the propagation of cosmic rays both in the atmosphere and in interstellar media \cite{cosmicrays1, cosmicrays2, cosmicrays3}. In materials science, neutrons produced by spallation neutron sources are used to probe the properties of condensed matter \cite{material1, material2}. Other areas include the synthesis of rare isotopes \cite{synthesis1, synthesis2, synthesis3}, cancer therapy \cite{cancer1, cancer2, cancer3}, biology \cite{biology}, cosmography \cite{cosmography}, etc. Because of the diversity of the applications mentioned above, the broad range of reaction conditions including beam energy and target size, and the complicated reaction mechanisms that the spallation reaction entails, great challenges are imposed on the experimental measurement of the yields and kinematics of spallation products (see \cite{latest} for the latest advances), on the calculation of the related observables, and on the theoretical understanding of the underlying mechanisms through which the spallation products emerge.
For a comprehensive review on these subjects, we refer the readers to Refs. \cite{overview1, overview2}.
Typically, the spallation reaction is described by a two-step model, as first suggested by Serber \cite{serber}. In this description, in the first stage the incident light particle, at energies from hundreds of MeV to several GeV, induces a cascade of hadron-hadron collisions within a target much heavier than the incident projectile. This process lasts only tens of fm/c, during which part of the incident energy is emitted in the form of high-energy ejectiles, e.g., pions, nucleons and Light-Charged-Particles with $Z \le 4$ (LCPs), while the remaining part is deposited in the target nucleus, thermalizing it and often exciting it to hundreds of MeV. The first stage is a fast dynamical process, whereas the second stage is a statistical one, several orders of magnitude longer in duration. During the second stage, random fluctuations in the distribution of energy and nuclear density grow locally or globally, leading the compound nucleus to undergo sequential light-particle evaporation or fission, respectively, until all products are fully deexcited. In the light of these considerations, very practical numerical codes based on the idea of an intranuclear cascade plus statistical decay have been developed \cite{incl1} and improved \cite{incl2}, which are capable of reproducing the yields and kinematics of some reaction products to an appreciable accuracy for spallation reactions at incident energies not lower than 200 MeV. However, on theoretical grounds, as pointed out and discussed in Refs.\ \cite{imf2004,imf2007,imf2008}, a reaction mechanism more sophisticated than the simple two-step model is required to account for the production of Intermediate-Mass-Fragments (IMFs), featured in the experimentally revealed triple-humped velocity distributions \cite{triple hump}.
These features were studied by introducing the deexcitation mode of multifragmentation through the Intranuclear-Cascade plus Statistical-Multifragmentation model (INC+SMM) \cite{imf2008}, or through more fundamental models such as the Boltzmann-Langevin-One-Body model (BLOB) \cite{blob} or the Quantum-Molecular-Dynamics model (QMD) \cite{fan zhang}. Moreover, the fermionic properties of nucleons also have an important role to play in the formation of IMFs \cite{imfpsdc }, and the first part of the discussion in this work is related to this area of research.
The central focus of this work, however, is closely tied to applications. In the design of shielding for spallation facilities and astronautical equipment, the energy spectra of LCPs at all angles in spallation reactions are vital information \cite{space}, and when experimental data are not available, theoretical simulations become indispensable. In microscopic transport models for nuclear reactions, one approach to treating the pre-equilibrium emission of LCPs and giving a reliable account of the Double-Differential-Cross-sections (DDXs) at all angles is to modify the transport model by incorporating a coalescence mechanism near the target surface throughout the time evolution. This kinematical treatment was first discussed by Goldberger \cite{Goldberger} and Metropolis \cite{Metropolis} and later modified by Nagle \cite{Nagle} and Mattiello {\it et al.} \cite{Mattiello}. Nowadays this mechanism has been mounted onto INCL \cite{INCL_surf1, INCL_surf2, INCL_surf3, INCL_surf4}, QMD \cite{QMD1_surf, QMD2_surf} and the coalescence exciton model \cite{EXI_surf}, for the refinement of LCP production in these models. Thus, in the second part of this work, the dynamics of pre-equilibrium emission of light clusters are studied and discussed in the light of this idea, and the calculated DDXs are presented for LCPs with $Z$ up to 2 and $A$ up to 4.
This paper is organized as follows: Section II briefly describes the models employed in this work. Section III presents the results of our calculations: subsection 1 is devoted to the reproduction of the total yields of spallation fragments under various conditions, with comparisons to previous studies; in subsections 2 and 3, the DDXs of LCPs and neutrons are presented and discussed. A brief summary follows in Section IV.
\section{II. MODEL DESCRIPTION}
\subsection{1. Transport model}
The quantum molecular dynamics model was first proposed by J. Aichelin {\it et al.} \cite{aichelin1986} as a novel approach, based on the idea of the classical molecular dynamics model, to incorporate the wave-particle duality of microscopic systems. Later on, the fermionic nature of nucleons was taken into account by solving the equations of motion starting from the antisymmetrized wave function of the nucleus as a whole, i.e., the Fermionic-Molecular-Dynamics model (FMD) \cite{fmd1990 } or the Antisymmetrized-Molecular-Dynamics model (AMD) \cite{amd1992}.
In this work, the Lanzhou-Quantum-Molecular-Dynamics (LQMD) code \cite{LQMD1, LQMD2, LQMD3} is employed, in which the motion of the individual nucleons is parameterized as Gaussian wave packets in both coordinate space and momentum space
\begin{eqnarray}
\phi_{i}(\mathbf{r}, t)=&&
\frac{1}{(2\pi \sigma_{r}^{2})^{3/4}} {\rm exp} [-\frac{(\mathbf{r}-\mathbf{r}_{i}(t))^{2}}{4\sigma_{r}^{2}}]
\nonumber\\
&& \cdot {\rm exp}(\frac{i\mathbf{p}_{i}(t)\cdot\mathbf{r}}{\hbar})
\end{eqnarray}
where $\mathbf{r}_{i}(t)$ and $\mathbf{p}_{i}(t)$ are the centers of the wave packets in coordinate space and momentum space, respectively, and the width of the packets depends on the parameter $\sigma_{r}$. These parameters are determined by subjecting the following total wave function of the reaction system to the variational method \cite{variation}
\begin{eqnarray}
\Phi(\mathbf{r},t)=\prod_{i}\phi_{i}(\mathbf{r},\mathbf{r}_{i},\mathbf{p}_{i},t).
\end{eqnarray}
Neglecting the change of the packet width with time and treating it as a constant, the equations of motion of the wave-packet parameters $\mathbf{r}_{i}$ and $\mathbf{p}_{i}$ are formally obtained as
\begin{align}
\dot{\mathbf{r}_{i}}=\frac{\partial H}{\partial \mathbf{p}_{i}}, \quad
\dot{\mathbf{p}_{i}}=-\frac{\partial H}{\partial \mathbf{r}_{i}}
\end{align}
together with the density-functional Hamiltonian
\begin{eqnarray}
H=T+U_{Coul}+\int V_{loc}[\rho(\mathbf{r})] d \mathbf{r} +U_{MDI}
\end{eqnarray}
where $U_{Coul}$ is the Coulomb energy of the whole system and $V_{loc}$ is the nuclear potential energy density which is evaluated through Wigner transformation \cite{wigner}, and takes the form
\begin{eqnarray}
V_{loc}(\rho)=&& \frac{\alpha}{2}\frac{\rho^{2}}{\rho_{0}} +
\frac{\beta}{1+\gamma}\frac{\rho^{1+\gamma}}{\rho_{0}^{\gamma}} + E_{sym}^{loc}(\rho)\rho\delta^{2}
\nonumber \\
&& + \frac{g_{sur}}{2\rho_{0}}(\nabla\rho)^{2} + \frac{g_{sur}^{iso}}{2\rho_{0}}[\nabla(\rho_{n}-\rho_{p})]^{2}
\end{eqnarray}
where
\begin{align}
\rho(\mathbf{r},t) &= \int f(\mathbf{r}, \mathbf{p}, t)d\mathbf{p} \nonumber \\
&=\sum_{i} \frac{1}{(2\pi \sigma_{r}^{2})^{3/2}} {\rm exp} \left[-\frac{(\mathbf{r} - \mathbf{r}_{i}(t))^{2}}{2\sigma_{r}^{2}}\right]
\\
\label{wigner-transform}
f(\mathbf{r},\mathbf{p},t)&=\sum_{i} f_{i}(\mathbf{r},\mathbf{p},t) \nonumber \\
&=\sum_{i} \frac{1}{(\pi \hbar)^{3}} {\rm exp} \left[-\frac{(\mathbf{r} - \mathbf{r}_{i}(t))^{2}}{2\sigma_{r}^{2}} - \frac{(\mathbf{p} - \mathbf{p}_{i}(t))^{2}\cdot 2\sigma_{r}^{2}}{\hbar^{2}} \right],
\end{align}
while $U_{MDI}$ is the momentum dependent interaction (MDI) \cite{mdi} and assumes the form
\begin{eqnarray}
U_{MDI}=&& \frac{1}{2\rho_{0}}\sum_{i,j,j\neq i}\sum_{\tau,\tau'}C_{\tau,\tau'}\delta_{\tau,\tau_{i}}\delta_{\tau',\tau_j}\int\int\int d\textbf{p}d\textbf{p}'d\textbf{r} \nonumber \\
&& \times f_{i}(\textbf{r},\textbf{p},t)\left[\texttt{ln}(\epsilon(\textbf{p}-\textbf{p}')^{2}+1)\right]^{2}f_{j}(\textbf{r},\textbf{p}',t)
\end{eqnarray}
respectively.
The coefficients of each term are the mean-field parameters constrained by reproducing the basic saturation properties and the incompressibility within a sensible range for symmetric nuclear matter. Two sets of mean-field parameters labelled PAR1 and PAR2 are chosen for the calculations, as given in Table \ref{skyrme} along with their incompressibilities. In the MDI term, $C_{\tau, \tau^{'}}=C_{mom}(1+x)$ for $\tau=\tau^{'}$ and $C_{\tau, \tau^{'}}=C_{mom}(1-x)$ for $\tau\neq\tau^{'}$, where the subscripts $\tau$ and $\tau^{'}$ stand for isospin whose values are -1 and 1, and the parameter $x=-0.65$ is the strength of isospin splitting. In the isospin asymmetric terms, $\rho_{n}$, $\rho_{p}$ and $\rho=\rho_{n}+\rho_{p}$ are the neutron, proton and total densities, respectively, $\delta=(\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})$ being the isospin asymmetry. The coefficients in the isospin-dependent and density-gradient-dependent terms $g_{sur}$, $g_{sur}^{iso}$ and $\rho_{0}$ are taken to be 23 MeV fm$^{2}$, -2.7 MeV fm$^{2}$ and 0.16 fm$^{-3}$, respectively.
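The one-body density entering these expressions is simply a sum of normalized Gaussians centered at the wave-packet centers $\mathbf{r}_{i}(t)$. A minimal numerical sketch (the width value is illustrative, not a fitted LQMD parameter):

```python
import math

def density(r, centers, sigma_r=1.41):
    """One-body density rho(r): a sum of normalized 3D Gaussians of width
    sigma_r (fm) centered at the wave-packet centers r_i.
    sigma_r = 1.41 fm is an illustrative value only."""
    norm = (2.0 * math.pi * sigma_r**2) ** 1.5
    total = 0.0
    for ri in centers:
        d2 = sum((a - b) ** 2 for a, b in zip(r, ri))
        total += math.exp(-d2 / (2.0 * sigma_r**2)) / norm
    return total
```

Each Gaussian integrates to one nucleon, so integrating the density over all space recovers the mass number $A$; the local potential energy density $V_{loc}[\rho]$ is then evaluated on this smooth field.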
In addition to the motion under the nucleons' mean field, collisions between nucleons are another key ingredient of the time evolution of the reaction system. In the simulation, when the spatial separation of any two nucleons in their center-of-mass frame is smaller than a value
\begin{eqnarray}
r_{NN} = \sqrt{\sigma_{NN}(\sqrt{s})/\pi},
\end{eqnarray}
a collision between the two nucleons is considered, in which $\sigma_{NN}(\sqrt{s})$ is the total nucleon-nucleon collision cross-section at their invariant mass $\sqrt{s}$. The NN elastic scattering cross-section is parameterized to fit the experimentally available data over a wide energy domain \cite{Fe09}. Finally, to account for Pauli blocking due to the fermionic nature of nucleons, the collision is either executed or blocked by comparing with a random number the blocking probability $b_{ij}=1-(1-\overline{P}_{i})(1-\overline{P}_{j})$ of the two participant nucleons $i$ and $j$ in the final state, where $\overline{P}_{i}$ is given by
\begin{eqnarray}
\overline{P}_{i}= && \frac{32\pi^{2}}{9h^{3}}\sum_{i\neq k, \tau_{i}=\tau_{k}} (\Delta r_{ik})^{2}(3R_{0}-\Delta r_{ik}) \nonumber \\
&& \times(\Delta p_{ik})^{2}(3P_{0}-\Delta p_{ik}).
\end{eqnarray}
Here $\Delta r_{ik}=|\mathbf{r}_{i}-\mathbf{r}_{k}|$ and $\Delta p_{ik}=|\mathbf{p}_{i}-\mathbf{p}_{k}|$ are the relative distances of two nucleons in coordinate and momentum space, respectively. The summation runs over nucleons satisfying the phase-space criterion $\Delta r_{ik}<2R_{0}$ and $\Delta p_{ik}<2P_{0}$, with $R_{0}=3.367$ fm and $P_{0}=112.5$ MeV/c.
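The blocking test above can be sketched as follows. This is a minimal illustration assuming a simple (position, momentum, isospin) tuple per nucleon, with hypothetical function names rather than the actual LQMD routines:

```python
import math
import random

# Constants quoted in the text; data layout and names are illustrative,
# not the LQMD source code.
R0 = 3.367                       # fm
P0 = 112.5                       # MeV/c
H3 = (2 * math.pi * 197.327)**3  # h^3, with hbar*c = 197.327 MeV fm

def occupation(i, nucleons):
    """Occupation P_bar_i built from same-isospin neighbours inside the
    phase-space cell Delta r < 2 R0, Delta p < 2 P0 (equation above).
    Each nucleon is (position, momentum, isospin)."""
    ri, pi_, tau_i = nucleons[i]
    total = 0.0
    for k, (rk, pk, tau_k) in enumerate(nucleons):
        if k == i or tau_k != tau_i:
            continue
        dr = sum((a - b)**2 for a, b in zip(ri, rk))**0.5
        dp = sum((a - b)**2 for a, b in zip(pi_, pk))**0.5
        if dr < 2 * R0 and dp < 2 * P0:
            total += dr**2 * (3*R0 - dr) * dp**2 * (3*P0 - dp)
    return 32 * math.pi**2 / (9 * H3) * total

def is_blocked(i, j, nucleons, rng=random.random):
    """Block the i-j collision with probability b_ij = 1-(1-P_i)(1-P_j)."""
    b_ij = 1.0 - (1.0 - occupation(i, nucleons)) * (1.0 - occupation(j, nucleons))
    return rng() < b_ij
```

Two widely separated nucleons give zero occupation, so their collision is never blocked; occupation grows as final-state phase-space neighbours accumulate.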
\begin{table*}[htp]
\vspace{20pt}
\centering
\caption{Skyrme parameters PAR1 and PAR2 employed in the LQMD model.}
\setlength{\tabcolsep}{1.5em}
\begin{tabular}{cccccccc} \hline
\specialrule{0em}{1pt}{1pt}
\centering
& $\alpha \ ({\rm MeV})$ & $\beta \ ({\rm MeV})$ & $\gamma$ & $C_{mom} \ ({\rm MeV})$ & $\epsilon \ ({\rm c^{2}/MeV^{2}})$ & $m^{*}_{\infty}/m$ & $K_{\infty} \ ({\rm MeV})$ \\
\hline
\specialrule{0em}{1pt}{1pt}
PAR1 & -215.7 & 142.4 & 1.322 & 1.76 & $5 \times 10^{-4}$ & 0.75 & 230 \\
\specialrule{0em}{1pt}{1pt}
PAR2 & -226.5 & 173.7 & 1.309 & 0. &0. & 1. & 230 \\
\hline
\label{skyrme}
\end{tabular}
\end{table*}
\subsection{2. Fragment recognition and statistical decay}
At the end of the dynamical evolution, when all violent changes have settled and the nucleons have re-aggregated and condensed into individual clusters, a Minimum Spanning Tree (MST) procedure is applied to identify these hot remnants before the transition to statistical decay. In the LQMD model, a constituent nucleon incorporates into its intermediate cluster any neighboring nucleon with relative momentum $\Delta p \leq 200$ MeV/c and relative distance $\Delta r \leq 3.5$ fm, provided that the new nucleon also lies close to the surface of the cluster, i.e., within a distance of 3.5 fm plus the r.m.s. radius of the cluster. Two neighboring intermediate clusters can also merge into a bigger cluster if the size of the combined cluster is within a limit taken as the liquid-drop-model radius.
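The MST pair criterion described above can be sketched as a union-find pass over nucleon pairs. This minimal illustration applies only the $\Delta r \leq 3.5$ fm and $\Delta p \leq 200$ MeV/c cuts, omitting the additional surface and liquid-drop-radius checks, and the data layout is hypothetical:

```python
def mst_clusters(nucleons, r_cut=3.5, p_cut=200.0):
    """nucleons: list of (position, momentum) 3-vectors.
    Returns clusters as lists of nucleon indices, linking any pair
    within both the spatial and the momentum cut."""
    n = len(nucleons)
    parent = list(range(n))

    def find(x):
        # path-halving union-find root lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        ri, pi_ = nucleons[i]
        for j in range(i + 1, n):
            rj, pj = nucleons[j]
            dr = sum((a - b)**2 for a, b in zip(ri, rj))**0.5
            dp = sum((a - b)**2 for a, b in zip(pi_, pj))**0.5
            if dr <= r_cut and dp <= p_cut:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Because cluster membership is transitive through chains of linked pairs, a union-find structure gives the same partition as building the full minimum spanning tree and cutting long edges.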
After the hot remnants are reconstructed, the simulation proceeds to the next stage: cooling down by statistical decay, realized with the GEMINI code by R. J. Charity \cite{charity}. In GEMINI, the compound nucleus experiences a sequence of binary divisions, in the form of light-particle evaporation or fission, until it is thoroughly deexcited. For asymmetric divisions, i.e., the emission of light particles with $Z$ up to 4, the Hauser-Feshbach formalism is adopted \cite{hauser}, and the decay width for emitting a light particle $(Z_{1},A_{1})$ of spin $J_{1}$ from a mother nucleus $(Z_{0},A_{0})$ with excitation energy $E^{*}$ and spin $J_{0}$, leaving behind a residue $(Z_{2},A_{2})$ of spin $J_{2}$, is given by
\begin{align}
\Gamma_{J_{2}}(Z_{1},A_{1},&Z_{2},A_{2})= \nonumber \\
\displaystyle{\frac{2J_{1}+1}{2\pi \rho_{0}}} & \sum_{l=|J_{0}-J_{2}|}^{J_{0}+J_{2}} \int_{0}^{E^{*}-B-E_{rot}(J_{2})} T_{l}(\epsilon)\rho_{2}d\epsilon
\end{align}
where $\rho_{0}$ and $\rho_{2}$ are the level densities of the mother and the residual nucleus, respectively, and $T_{l}(\epsilon)$ is the transmission coefficient. $B$ is the binding energy between the light particle and the residue and $E_{rot}(J_{2})$ the rotation plus deformation energy of the latter. For asymmetric fission, Moretto's generalized transition-state formalism \cite{moretto} which determines the fission probability by the phase space density on the ridge line around the saddle point is used. For symmetric fission which is an available option in the code's input, the Bohr-Wheeler formalism \cite{bohr} is used. Fission barrier heights are mainly calculated through the rotating-finite-range model \cite{rfr} and both shell and pairing corrections are also considered.
\subsection{3. Phase space density constraint method}
Nucleons are fermions, so nucleons with the same isospin repel one another to avoid getting too close in phase space. In the QMD framework, this effect was either simply neglected or accounted for by introducing an artificial repulsive phase-space potential \cite{pauli1, pauli2}, until M. Papa proposed \cite{papa2001} that it can be handled by performing unphysical elastic collisions between nucleons that come too close to each other in phase space, thereby keeping the local phase-space occupation at the center of each nucleon $i$ below unity, $\overline{f}_{i} < 1$, with $\overline{f}_{i}$ of the form
\begin{eqnarray}
\overline{f}_{i}=\sum_{j}\delta_{s_{i},s_{j}}\delta_{\tau_{i},\tau_{j}}\int_{h^{3}}f_{j}(\mathbf{r},\mathbf{p},t)d\mathbf{r}d\mathbf{p}
\end{eqnarray}
where $f_{j}(\mathbf{r},\mathbf{p},t)$, as already implied in Eq. \ref{wigner-transform}, is the Wigner transform of nucleon $j$'s wavepacket, and the subscripts $s$ and $\tau$ stand for the spin and isospin of the corresponding nucleons. The method described above is called the Phase-Space-Density-Constraint (PSDC) method. Since the Pauli exclusion effect is repulsive between identical fermions, the expansion and multifragmentation phenomena that the model underestimates in some reactions are expected to be better reproduced with the PSDC included \cite{imfpsdc}.
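A minimal sketch of evaluating $\overline{f}_{i}$ follows. It assumes a cubic $h^{3}$ cell with spatial side $L_{r}=2\sigma_{r}$ (an illustrative choice, since the text does not fix the cell shape) and factorizes each Gaussian Wigner function into per-dimension integrals; the wave-packet width is likewise an illustrative value:

```python
import math

HBAR = 197.327                 # hbar*c in MeV fm
H = 2 * math.pi * HBAR         # h in MeV fm
SIGMA_R = 1.2                  # fm, illustrative wave-packet width
L_R = 2.0 * SIGMA_R            # assumed spatial cell side (fm)
L_P = H / L_R                  # momentum cell side so that (L_R * L_P)^3 = h^3

def _gauss_cell_1d(center, width_sq, half):
    """Integral of exp(-(x-center)^2/width_sq) over [-half, +half],
    normalised by its full-line value sqrt(pi*width_sq)."""
    s = math.sqrt(width_sq)
    return 0.5 * (math.erf((half - center) / s) - math.erf((-half - center) / s))

def occupation_psdc(i, nucleons):
    """f_bar_i: integral of the summed Wigner functions of same-spin,
    same-isospin nucleons over the h^3 cell centred on nucleon i.
    Each nucleon is (position, momentum, spin, isospin)."""
    ri, pi_, s_i, tau_i = nucleons[i]
    f_bar = 0.0
    for rj, pj, s_j, tau_j in nucleons:
        if (s_j, tau_j) != (s_i, tau_i):
            continue
        term = 1.0
        for a, b in zip(ri, rj):    # three spatial dimensions
            term *= _gauss_cell_1d(b - a, 2 * SIGMA_R**2, L_R / 2)
        for a, b in zip(pi_, pj):   # three momentum dimensions
            term *= _gauss_cell_1d(b - a, HBAR**2 / (2 * SIGMA_R**2), L_P / 2)
        f_bar += term
    return f_bar
```

Each Wigner function integrates to unity over all phase space, so every term contributes at most 1 and $\overline{f}_{i}$ directly counts the effective number of identical nucleons sharing nucleon $i$'s cell.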
\subsection{4. Surface coalescence}
For a better description of pre-equilibrium cluster emission, we incorporated the surface coalescence model into the LQMD model following the specifications of Ref. \cite{INCL_surf1}. When an outgoing nucleon passes beyond a radial distance $R_{0} + D_{0}$ from the center of the mother nucleus, recursive construction of LCPs from this leading nucleon is initiated by picking up a first nucleon, then a second, a third, and so on. Here $R_{0} = 1.4A_{targ}^{1/3}$ fm and, for the proton incident energies involved, $D_{0}$ is taken to be 2 fm. In our work, only the construction of $d$, $t$, $^{3}$He and $^{4}$He clusters is considered. The method has already been extended to include the construction of heavier clusters \cite{NewPot}, which, for a preliminary reliability test, is not yet considered in the present work. During the process, an intermediate cluster picks up a nucleon to form a heavier cluster according to the following phase-space condition,
\begin{eqnarray}
R_{Nj} P_{Nj} \le h_{0}, \quad R_{Nj} \ge 1~{\rm fm}
\end{eqnarray}
where $R_{Nj}$ is the spatial distance between the intermediate cluster $N$ and the nucleon $j$ to be picked up, and $P_{Nj}$ is the relative momentum between the two objects. Let $\mathbf{R}_{N}$ and $\mathbf{r}_{j}$ be the positions of the intermediate cluster and the nucleon in coordinate space, $\mathbf{p}_{N}$ and $\mathbf{p}_{j}$ their momenta, and $M_{N}$ and $m_{j}$ their masses. Then,
\begin{gather}
R_{Nj} = \displaystyle{|\mathbf{R}_{N} - \mathbf{r}_{j}|} \nonumber\\
P_{Nj} = \displaystyle{|\frac{m_{j}}{M_{N} + m_{j}}\mathbf{p}_{N} - \frac{M_{N}}{M_{N} + m_{j}}\mathbf{p}_{j}|}.
\end{gather}
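The pick-up test of the two equations above can be sketched as follows, with a hypothetical function name and the Set 1 value of $h_{0}$ from the table below:

```python
H0 = 387.0   # fm MeV/c, phase-space parameter (Set 1 value)

def can_pick_up(R_N, P_N, M_N, r_j, p_j, m_j, h0=H0):
    """Phase-space test R_Nj * P_Nj <= h0 with R_Nj >= 1 fm, deciding
    whether intermediate cluster (R_N, P_N, M_N) absorbs nucleon
    (r_j, p_j, m_j). Positions in fm, momenta in MeV/c, masses in MeV/c^2."""
    R_Nj = sum((a - b)**2 for a, b in zip(R_N, r_j))**0.5
    # relative momentum, i.e. the momentum of either object in the
    # common centre-of-mass frame
    P_Nj = sum((m_j/(M_N + m_j) * a - M_N/(M_N + m_j) * b)**2
               for a, b in zip(P_N, p_j))**0.5
    return R_Nj >= 1.0 and R_Nj * P_Nj <= h0
```

The lower bound $R_{Nj} \ge 1$ fm prevents coalescing two objects that already overlap, while the upper bound on $R_{Nj}P_{Nj}$ enforces phase-space proximity.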
The latter is in fact the momentum of either object in their common center-of-mass frame. As to the choice of the phase-space parameter $h_{0}$, though various refinements are available, for simplicity we adopted the values prescribed by Ref. \cite{INCL_surf1} and Ref. \cite{INCL_surf2}, labelled Set 1 and Set 2 in Table \ref{Surf_coal_para}. Once all possible combinations of $d$, $t$, $^{3}$He and $^{4}$He constructed from the leading nucleon and different picked-up nucleons are listed, an emission test is performed according to the priority $^{4}$He, $^{3}$He (or $t$), $d$: a $^{4}$He candidate is randomly selected among all $^{4}$He possibilities and tested to see whether its total energy under the target mean field allows penetration through the Coulomb plus Woods-Saxon barrier. If the candidate passes the test, it is emitted along its tangential direction and the time evolution of the reaction system is resumed. Otherwise, a lower cluster in the priority list is selected in the same way and tested, and so on. If all tests fail, the penetration test is performed on the leading nucleon to decide whether it is emitted or reflected. The total energy of each emission candidate is calculated according to the following equation,
\begin{eqnarray}
E_{lcp} = \sum_{i=1}^{A_{lcp}} (E_{i} + V_{i}) + B_{lcp}
\end{eqnarray}
where $E_{i}$ and $V_{i}$ are the kinetic and potential energies of the constituent nucleon $i$ under the target mean field, $B_{lcp}$ is the binding energy of the cluster, and $A_{lcp}$ is its mass number. Last but not least, in the procedure described above, all constructed clusters must be sufficiently far from the center of the target nucleus so that they qualify as clusters formed near the target's surface; this minimum distance $R_{l}$ is taken to be
\begin{eqnarray}
R_{l} = CA_{targ}^{1/3}.
\end{eqnarray}
A too small $C$ results in an overly rich production of clusters, and vice versa. A brief discussion of this point is given in the corresponding section.
\begin{table}[h]
\vspace{20pt}
\centering
\caption{Surface coalescence parameters Set 1 and Set 2.}
\setlength{\tabcolsep}{2.5em}
\begin{tabular}{lcc}
\hline
\specialrule{0em}{1pt}{1pt}
Construction & \multicolumn{2}{c}{$h_{0}$ (fm MeV/c)} \\ \cline{2-3}
\specialrule{0em}{1pt}{1pt}
 & Set 1 & Set 2 \\
\hline
\specialrule{0em}{1pt}{1pt}
$p + n \rightarrow d$ & 387 & 336 \\
$d + n \rightarrow t$ & 387 & 315 \\
$d + p \rightarrow ^{3}$He & 387 & 315 \\
$t + p \rightarrow ^{4}$He & 387 & 300 \\
$^{3}$He $+ n \rightarrow ^{4}$He & 387 & 300 \\
\hline
\label{Surf_coal_para}
\end{tabular}
\end{table}
\subsection{5. Simulation settings}
For any reaction system in this calculation, the maximum impact parameter $b_{max}$ is chosen as $b_{max}^{'}+0.3$ fm, where $b_{max}^{'}$ is the smallest impact parameter at which the target no longer suffers nucleon-nucleon collisions with the passing incident proton. The extra 0.3 fm is reserved for Coulomb excitation. Besides the maximum impact parameter, the switching time from the dynamical stage to the statistical stage is another determining factor for a reliable reproduction of the realistic physical circumstances. In INC simulations this quantity is given by an established formula \cite{incl1}, which, however, is not appropriate for QMD simulations, since the latter are capable of describing the subsequent evolution after the system has been fully excited. The criterion we adopted for selecting the switching time is that, after this moment, all observables in question remain relatively stable as time goes on, following the violent fluctuations of the preceding dynamical evolution. Furthermore, during the pre-equilibrium cascade process, nucleons and clusters emitted in the forward direction are generated at an earlier stage, whereas those in the backward direction emerge later. Because of this, the pre-equilibrium time span must be long enough to cover the emissions at all polar angles. Considering all these complications, and to limit the CPU time, the switching times are chosen as 65 fm/c for $p + {}^{56}$Fe and $^{58}$Ni, 85 fm/c for $^{112}$Cd and $^{136}$Xe, and 115 fm/c for $^{181}$Ta, $^{184}$W and $^{208}$Pb.
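A minimal sketch of these settings, using only the numbers quoted in this paragraph (the dictionary layout is illustrative):

```python
# Switching times (fm/c) from the dynamical to the statistical stage,
# as quoted in the text for each target.
SWITCH_TIME = {
    "56Fe": 65, "58Ni": 65,
    "112Cd": 85, "136Xe": 85,
    "181Ta": 115, "184W": 115, "208Pb": 115,
}

def max_impact_parameter(b_no_collision):
    """b_max = b'_max + 0.3 fm; the extra 0.3 fm is reserved for Coulomb
    excitation. b_no_collision is the smallest impact parameter at which
    the incident proton passes without a nucleon-nucleon collision."""
    return b_no_collision + 0.3
```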
\section{III. RESULTS AND DISCUSSION}
\subsection{1. Total yields of spallation fragments}
There has long been a debate on the physical origin of the sources of IMFs in spallation reactions. Fission and multifragmentation have been proposed as source candidates, hinted at by experimental evidence \cite{triple hump}. When the incident energy is low or the impact parameter is large, the amount of incident energy deposited in the target nucleus is small, and the excited nucleus cools down through the normal fission-evaporation deexcitation mode.
\begin{figure*}
\includegraphics[width=16 cm]{p136Xe}
\vspace{-0.3cm}
\caption{The fragment yields as functions of the charge number (left panel) and the neutron number (right panel) in the reaction $p+^{136}$Xe at the incident energy of 1 GeV. The experimental data are taken from Ref. \cite{data136Xe}. }
\label{cnpcs1}
\end{figure*}
\begin{figure}
\includegraphics[width=8 cm]{p56Fe}
\vspace{-0.35cm}
\caption{Calculated total cross-section as a function of mass number for $p+^{56}$Fe at 1 GeV. The experimental data are taken from Ref. \cite{data56Fe}.}
\label{cnpcs2}
\end{figure}
However, when the excitation energy exceeds a certain threshold \cite{threshold}, the hot nucleus may expand and a fast breakup becomes possible. These two deexcitation modes exhibit different kinematics in their velocity spectra. In the velocity spectra of IMFs, contributions from both mechanisms have been disentangled into separate kinematic components \cite{triple hump}, which provides strong evidence of the existence of multifragmentation in proton-induced spallation reactions. This section can partly be viewed as an investigation of IMF production, following some of the previous studies in the literature.
The total cross-section $\sigma$ of the reaction system $p + ^{136}$Xe at 1 GeV, plotted as a function of the charge number Z and the mass number A, is presented in Fig. \ref{cnpcs1}. The experimental data, taken from Ref. \cite{data136Xe}, are plotted as black solid dots. In this part, we tried to employ the PSDC method in the reproduction of the IMF yields, as was previously done in Ref. \cite{fan zhang}, in which the reproduced IMF cross-sections for $5\le Z\le40$ agree with the experimental data to some extent. Our calculation, differing from that of Ref. \cite{fan zhang} in the mean-field parameters (given in the first row of Table \ref{skyrme}) and in the technical details of the code, turned out quite differently. As expected, without PSDC (blue histogram), the IMF yields are seriously underestimated. In comparison, the introduction of the PSDC method only negligibly increases the IMF cross-sections (green histogram versus blue in Fig. \ref{cnpcs1}). This may suggest that the PSDC method does help fill the gap for $5\le Z \le40$, but its efficiency in inducing multifragmentation of the excited target nucleus may depend greatly on the choice of mean-field parameters and the technical details of the code. On the other hand, when the set of mean-field parameters PAR2, which incorporates the momentum dependent interaction, is used instead, the production of target-like fragments is underestimated while the spectra just below them are overestimated. This can be ascribed to the extra fluctuation brought in by the MDI, which causes spurious emission of nucleons in the pre-equilibrium stage.
The investigation is now extended, with the same purposes, to a lighter reaction system, $p + ^{56}$Fe at 1 GeV, as shown in Fig. \ref{cnpcs2}. The experimental results \cite{data56Fe} are reasonably well reproduced, in both trend and magnitude, under either the momentum-dependent or the momentum-independent mean field. This time, however, the IMF yields are overestimated on an overall scale without MDI, at the cost of an underestimation of the target-like tail. Nevertheless, the local trends are the same for both settings. The success of the IMF reproduction under either condition may be ascribed to the relatively more abundant multifragmentation, evaporation or cascade emission allowed by the higher per-nucleon excitation energy gained at the same incident proton energy on a target with about half the mass of the previous $^{136}$Xe case. The differences between the two settings might be understood as follows: in the calculation with MDI, the IMFs formed during the reaction are more unstable owing to the more violent fluctuations, and thus more nucleons but fewer IMFs are emitted relative to the case without MDI. For an analysis of the experimental data and of the INC calculation of IMF production in the same reaction system, see Ref. \cite{imf2008}.
\begin{figure*}
\includegraphics[width=17 cm]{208Pb}
\vspace{-0.33cm}
\caption{Calculated total cross-section plotted versus the charge number (left) and the mass number (right) for $p+^{208}$Pb at 1 GeV. The experimental data are taken from Ref. \cite{data208Pb}. Here the fission delay parameters prescribed in Ref. \cite{theo208Pb_OuLi} are adopted instead of the default ones.}
\label{208Pb}
\end{figure*}
In the preceding discussion a light target, $^{56}$Fe, was considered. Next we turn to the other extreme: high-energy proton-induced spallation at the same energy on a very heavy target, $^{208}$Pb, for which larger divergences between simulations with and without the momentum dependent interaction are expected, as the fission process comes to play a key role. In Fig. \ref{208Pb}, the total cross-section is plotted as a function of both the atomic number Z and the mass number A. The black dots stand for the experimental data, taken from Ref. \cite{data208Pb}; the red line is the calculation with MDI and the blue line the one without. Calculations with and without MDI both reproduce the main trend and features of the experimentally measured spectra, but the case without MDI clearly gives a much better overall fit to the data, whereas the case with MDI peaks too early at the target-like end of the mass-number plot, accompanied by an overestimation of the region between the target-like end and the valley in the middle of the graph. This is due to the spurious emission of nucleons in the presence of MDI, which always tends to induce more fluctuation. As a result, the fission peak is underestimated, since the most target-like residues, which possess lower fission barriers and are thus more fission-prone, end up rarer than in the case without MDI. To close the present discussion, one more comment on the results without MDI: in the statistical-decay stage of our simulation, the fission delay parameters previously prescribed in Ref. \cite{theo208Pb_OuLi} are adopted instead of the GEMINI defaults. In Fig. \ref{208Pb}, our results are roughly the same as those given by Ref. \cite{theo208Pb_OuLi}.
The heights of both the fission peak and the target-like tail of the spectra agree very well with the experimental data, so our results serve as a further confirmation of the fission delay prescription of Ref. \cite{theo208Pb_OuLi}.
\subsection{2. Light cluster kinetic energy spectra}
For the production of high-energy LCPs in high-energy proton-induced spallation reactions, we considered the following three reaction systems, for which experimental data are available: p + $^{58}$Ni, p + $^{181}$Ta and p + $^{197}$Au, all at 1.2 GeV. Since the present work is intended to provide an overall description of the spallation reaction and a test of the predictive power of the LQMD model in this scenario, we did not undertake the vast and arduous task of fitting parameters to the experimental results, as was already done in a recent work \cite{Parafit}. We therefore only make a few comments on the results obtained so far. In Fig. \ref{58Ni} and Fig. \ref{197Au}, the DDXs of light-cluster production at three different angles are presented for the targets $^{58}$Ni and $^{197}$Au bombarded by protons at 1.2 GeV, the values being scaled by a factor of 10$^{-2}$ for each successive angle. Similar results are presented in Fig. \ref{181Ta} for p + $^{181}$Ta at 1.2 GeV.
\begin{figure*}
\includegraphics[width=16 cm]{58Ni}
\vspace{-0.1cm}
\caption{Double differential cross-section of $d$, $t$, $^{3}$He and $^{4}$He as a function of kinetic energy and polar angle for p + $^{58}$Ni at 1.2 GeV calculated with different sets of parameter. The data are taken from Ref. \cite{Parafit}.}
\label{58Ni}
\end{figure*}
\begin{figure*}
\resizebox{1\textwidth}{!}{
\includegraphics{197Au}
}
\vspace{-0.6cm}
\caption{Double differential cross-section of $d$, $t$, $^{3}$He and $^{4}$He as a function of kinetic energy and polar angle for p + $^{197}$Au at 1.2 GeV calculated with different sets of parameter. The data are taken from Ref. \cite{Parafit}.}
\label{197Au}
\end{figure*}
\begin{figure*}
\resizebox{1\textwidth}{!}{
\includegraphics{181Ta}
}
\vspace{-0.55cm}
\caption{Double differential cross-section of $d$, $t$, $^{3}$He and $^{4}$He as a function of kinetic energy and polar angle for p + $^{181}$Ta at 1.2 GeV calculated with $C = 0.86$ fm and phase space parameters Set 1. The data are taken from Ref. \cite{INCL_surf3}.}
\label{181Ta}
\end{figure*}
We see that in most cases the high-energy tails of the DDXs are reproduced very well, except at the very forward angles for $d$ and at large angles for $t$ and $^{3}$He. So it turns out that, with a rather rough set of parameters, a fairly acceptable quality of description can still be obtained for both light and heavy targets bombarded by high-energy protons. However, the region around the potential barrier is sometimes overestimated, and other discrepancies with respect to the experimental data are present. They can arise from the following sources. Firstly, the production of light clusters in the surface coalescence model is regulated by two types of parameters: the distance coefficient $C$, which controls the separation of constructed clusters from the center of the target nucleus, and the phase-space parameter $h_{0}$, which controls the size of the clusters in phase space. In Fig. \ref{58Ni} and Fig. \ref{197Au}, calculations with three different choices of parameters are plotted with lines of different colours and shapes, as indicated by the legends. Increasing the threshold separation of the constructed clusters from the target's center, by increasing the parameter $C$, has an effect similar, to some extent, to substituting the phase-space parameters Set 2 for Set 1, which sets a larger upper bound on the phase-space sizes of the constructed clusters. Both settings bring down the high-energy tails, except for $d$. However, for an agreeable reproduction of the high-energy tails of the larger clusters, say $^{4}$He, $C$ must not be too large; otherwise the high-energy tails of these clusters die out too early.
Secondly, a high-quality description of the emergence of the leading nucleons that initiate the construction of clusters is a prerequisite for a high-quality description of cluster production; meanwhile, a correct time evolution of the phase-space density distribution of the target nucleus is also important. As seen in the next section, our reproduction of the neutron DDXs is quantitatively less satisfactory at backward angles, which can account for the corresponding discrepancies in our cluster DDXs. Thirdly, in our treatment of barrier tunneling, the contribution of the centrifugal potential to the total barrier height is neglected. Outgoing clusters with energies around the barrier sometimes carry away ten to twenty units of angular momentum in $\hbar$; for a $^{4}$He cluster with $l=10$ in a $^{197}$Au target, for instance, that amounts to a contribution of about 6 MeV to the barrier height, which modifies both the height and the shape of the spectra around the barrier. Apart from the interplay among all the above, other potential sources may also account for the problems. A considerably greater effort would be required for a higher-quality reproduction of the cluster DDXs \cite{HighQual}, but a simple surface coalescence model with these roughly selected parameters, implemented in our LQMD model, works rather well in describing the key characteristics of light-cluster emission in spallation reactions.
\begin{figure*}
\resizebox{1\textwidth}{!}{
\includegraphics{neutron}
}
\vspace{-0.6cm}
\caption{Calculated neutron DDXs of $p+^{112}$Cd, $^{184}$W, $^{208}$Pb at 800 MeV. The experimental data are taken from Ref. \cite{DDXs data}.}
\label{neutron}
\end{figure*}
\subsection{3. Neutron double differential cross-sections}
In this section, the model is applied to the reproduction of the DDXs of spallation neutrons, which have been intensely investigated, experimentally and theoretically, on a large number of spallation targets over a vast range of incident energies in the past decades. Accurate neutron DDXs are vital information for the design and various utilizations of spallation neutron sources \cite{application of DDXs }. Whenever experimental data are not available, theoretical calculation tools, such as the moving source model \cite{moving source }, the Intranuclear-Cascade plus Evaporation model (INC+E) \cite{inc+e} or HETC-3STEP \cite{hetc-3step}, play indispensable roles. QMD calculations of the DDXs, as a more sophisticated approach, were studied by G. Peilert {\it et al.} \cite{QMDspallation1}, soon followed by K. Niita {\it et al.} \cite{QMDspallation2,QMDspallation3,QMDspallation4}. A comparison of results among different sets of mean-field parameters has already been made by Li Ou {\it et al.} in Ref. \cite{Li Ou}.
Here, the mean-field parameters PAR1 are employed to simulate 800 MeV proton-induced spallation reactions on $^{112}$Cd, $^{184}$W and $^{208}$Pb targets. As is evident in Fig. \ref{neutron}, the model reproduces the main trend of the experimental spectra; the data are taken from Ref. \cite{DDXs data}. However, in the low-energy domain around $E=20$ MeV, the data are somewhat overestimated with increasing polar angle, which we find originates not from the evaporation stage but simply from the cascade stage, in the form of extra free neutrons. The high-energy tails close to the incident energy drop too soon at \ang{30}, while at larger angles they are rather nicely reproduced. The ambiguity of the results at energies close to the incident energy is simply due to the limited statistics allowed by the available computational resources.
\section{IV. SUMMARY}
Several aspects of high-energy proton-induced spallation reactions have been investigated with the LQMD model, namely the total fragment yields and the DDXs of light clusters and neutrons, for different targets. For the total yields, it is found that the efficiency of the PSDC method in enhancing multifragmentation depends on the choice of mean-field parameters and the technical details of the code. The PSDC method favors multifragmentation and thus contributes to the total yields of IMFs. On the other hand, the inclusion of the MDI is found to distort the yield spectra of spallation on $^{136}$Xe and $^{208}$Pb. Agreement with the experimental data in terms of total fragment yields is obtained with the set of fission delay parameters in the GEMINI code, which again fortifies the validity of this prescription in this scenario. For the description of cluster emission from the statistical decay and pre-equilibrium stages, the GEMINI code and a simple surface coalescence model are employed. Though the parameters adopted in the surface coalescence model are rough, a rather good overall reproduction of the DDXs of light clusters is achieved with our model, and there is good potential for refining the model by polishing the choice of the different parameters and the potential barrier. For the reproduction of the neutron DDXs, three heavy targets, $^{112}$Cd, $^{184}$W and $^{208}$Pb, and an incident energy of 800 MeV are chosen. The model reproduces the experimental results, but discrepancies remain to some extent, motivating further improvement of the model in this respect.
\section{Acknowledgements}
This work is supported by the National Natural Science Foundation of China (Projects No. 11722546 and No. 11675226) and the Talent Program of South China University of Technology.
\section{INTRODUCTION}
\label{sec:intro}
In early 2020 the binary system Eta Carinae ($\eta$~Car) underwent another periastron passage in its ${P = 2023}$ day orbit.
The system includes two very massive stars, the primary and the hotter secondary.
The primary star is a very massive star with mass $M_1=120$--$170 ~\rm{M_{\sun}}$ \citep{Hillieretal2001, DavidsonHumphreys2012, KashiSoker2010, KashiSoker2016}, that may be in its Luminous Blue Variable (LBV) stage (though it differs from classical LBVs in a few aspects, see \citealt{HillierAllen1992}). The secondary star is a hot and evolved star \citep{Verneretal2005, Mehneretal2010} with a mass of $M_2=30$--$80 ~\rm{M_{\sun}}$ \citep{KashiSoker2010, KashiSoker2016}, that may be in its Wolf-Rayet stage \citep[e.g.,][]{Hiraietal2021}.
The system is a colliding wind binary \citep{Damineli1996, Pittardetal1998} with large eccentricity $e \simeq 0.9$ \citep{Davidsonetal2017}, resulting in large differences in the wind collision interface between periastron and apastron.
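As a rough consistency check, not part of the analysis in this paper, Kepler's third law with an assumed total mass of $200~\rm{M_{\sun}}$ (a hypothetical value within the quoted ranges) gives the orbital scale implied by the quoted period and eccentricity:

```python
import math

# Illustrative order-of-magnitude estimate; M_TOT is an assumed value,
# not a result from the paper.
G_MSUN = 1.32712e20       # G * M_sun in m^3 s^-2
AU = 1.495979e11          # m
P = 2023 * 86400.0        # orbital period in s (P = 2023 d)
M_TOT = 200.0             # assumed total mass in M_sun
E = 0.9                   # eccentricity

# semi-major axis from Kepler's third law, converted to AU
a = (G_MSUN * M_TOT * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0) / AU
r_peri, r_apo = a * (1 - E), a * (1 + E)   # periastron and apastron separations
```

With these assumptions the semi-major axis comes out near 18 AU, and the apastron separation exceeds the periastron one by a factor $(1+e)/(1-e) = 19$, which illustrates why the wind collision interface differs so strongly between periastron and apastron.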
The distance to the system has been determined to be in the range $2.3 ~\rm{kpc}$ \citep{Smith2006}
to $2.6 ~\rm{kpc}$ \citep{Davidsonetal2018a}.
The system is famous for its energetic eruptions in the nineteenth century \citep{DavidsonHumphreys2012}, the great eruption (GE; 1837.9--$\sim$1858) and the lesser eruption (LE; 1887.3--1895.3). These eruptions overpowered the Eddington limit and ejected a significant portion of the primary's stellar atmosphere, forming what we know today as the Homunculus nebula \citep{DavidsonHumphreys2012}.
The primary wind has a larger momentum than that of the secondary, with mass loss rate $\dot M_1 \simeq 3$--$10 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$ \citep{DavidsonHumphreys2012,Clementeletal2014,Kashi2017,Kashi2019}.
The weaker secondary wind carves a conical cavity in the dense primary wind with direction and shape that changes with the orbit. This cavity wraps around the primary close to periastron passage, when the orbital velocity increases and becomes comparable to the primary wind velocity \citep{Hamaguchietal2007,Parkinetal2011,KashiSoker2009a,Maduraetal2012}.
Being such a unique system at a relatively close distance, each $\eta$~Car periastron passage is an event of great interest and is monitored from both ground \citep{DuncanWhite2003,vanGenderenSterken2004,Whitelocketal2004,Abrahametal2005b,Daminelietal2008a,Daminelietal2008b,FernandezLajusetal2010,Teodoroetal2012,Teodoroetal2016} and space \citep{PittardCorcoran2002, Martinetal2006, Corcoran2005, Corcoranetal2010, Corcoranetal2017, Hamaguchietal2007, Hamaguchietal2014a, Hamaguchietal2014b, Henleyetal2008, Davidsonetal2015, Mehneretal2015}.
Prior to periastron passage, $\eta$ Car's spectral lines, which are key probes of the dynamics of the two stars and their winds, change their profiles, some dramatically. This behaviour of the lines, together with rapid variations in various bands from IR to X-ray around periastron passages, has earned the name `the spectroscopic event' \citep{DavidsonHumphreys2012}.
The spectroscopic events observed over multiple periastrons have varied considerably, with a trend of becoming shorter and less intense. This may be due either to a change of state in the primary wind \citep{Mehneretal2010,Davidsonetal2018b,Kashietal2016}, or related to the dissipation of the surrounding Homunculus Nebula, at least along our line of sight \citep{Daminelietal2019,Mehneretal2019}.
The spectroscopic events of $\eta$~Car should yield insight into the binary's geometry and, more generally, the physics of stellar wind interactions.
However, there is still disagreement on the orientation.
The inclination of the binary was initially assumed to be the same as the orientation of the Homunculus nebula, $i \simeq 41^\circ$
\citep{Davidsonetal2001}, but the direction of motion was ambiguous.
\cite{Maduraetal2012} and \cite{Teodoroetal2016} deduced the inclination to be $i=130^\circ$--$145^\circ$ and $i=135^\circ$--$153^\circ$, respectively, suggesting a direction of orbital motion opposite to that adopted by \citet{Davidsonetal2001}. This range in inferred inclination is narrow enough not to pose difficulties for interpreting observational data, nor is the direction of binary motion important for such purposes.
Unlike the inclination, the argument of periapsis $\omega$, for which different values have been obtained, has significant implications for understanding the behavior of $\eta$~Car near periastron passages.
A value of $\omega=90^\circ$ implies that the secondary star (which launches the fast wind) is closest to us at periastron and furthest at apastron, while a value of
$\omega=270^\circ$ implies that the primary star (which launches the slow and dense wind) is closest to us at periastron and furthest at apastron.
A number of studies use fitting of spectral line profiles to claim the orientation is $\omega\approx270^\circ$
\citep[e.g.][]{Nielsenetal2007, Henleyetal2008, Daminelietal2008b, Richardsonetal2015}.
\cite{Maduraetal2012} ran Smooth Particle Hydrodynamics (SPH) simulations of the colliding winds and, by matching the simulation results to line spectro-imaging data, determined an orientation within the range $\omega=240^\circ$--$285^\circ$.
\cite{Weigeltetal2016} claimed to have identified a fan-shaped structure in the wind. They combined their claim with the simulations of \cite{Maduraetal2012} to support their preferred orientation of $\omega\approx240$--$270^\circ$. However, in \cite{KashiSoker2018} we showed that the fan probably does not exist, and even if so it does not support the orientation that \cite{Weigeltetal2016} argued for.
Recently, \cite{Grantetal2020} found a slightly higher eccentricity $e=0.91$, and determined $\omega=241^\circ$ by fitting Balmer lines with Gaussian components and then fitting a Keplerian model.
Fitting an orbital orientation from spectral lines depends on assumptions as to where these lines are emitted or where they are absorbed. Attributing the lines to different locations can result in the opposite solution for $\omega$, i.e., $\omega\approx90^\circ$
\citep{KashiSoker2007,KashiSoker2008a,KashiSoker2016, Kashietal2011}.
\cite{KashiSoker2018} showed that the orientation with $\omega=90^\circ$ can explain the missing segment of mass in the torus that was ejected during the GE, as described by \cite{Smithetal2018b}.
\cite{Abrahametal2005a}, \cite{Falcetaetal2005} and \cite{AbrahamFalceta2007} also proposed an orientation with $\omega\approx60$--$90^\circ$, but their model invokes a shell ejection event to explain the X-ray behavior of $\eta$~Car during the spectroscopic event and is hence quite different from the model of \cite{KashiSoker2008a} (discussed below).
X-ray observations of $\eta$~Car have also been used to derive the orientation. \cite{Okazakietal2008} ran SPH simulations of the colliding winds and, based on fits to the X-ray luminosity, claim $\omega=243^\circ$.
\cite{Parkinetal2009} built a model to fit the X-ray light curve and derived $\omega=270^\circ$--$300^\circ$. However,
\cite{KashiSoker2009a} showed that the X-ray light curve is not a strong indicator of the binary orientation.
They suggested that the hydrogen column density to the hot X-ray emitting gas is a better observable for purposes of differentiating between the different proposed orientations, because the hot X-ray emitting gas arises from within the post-shock secondary wind located between the two stars.
\cite{KashiSoker2009a} analyzed the expected X-ray flux from $\eta$~Car considering a specific accretion model.
Their model includes the slow-dense primary and fast secondary winds, an approximation for the colliding wind interface, and analytic expressions for the thickness of the regions that include the post-shock primary and secondary winds.
They also included in their model the rotation angle that the conical shell forms with the line connecting the two stars as the secondary approaches periastron passage and the orbital velocity becomes significant.
They calculated the expected time dependence of the hydrogen column density $N_{\rm{H}}(t)$ from different directions, demonstrating that it can serve as an indicator of the binary orientation.
They found that for orientations in which the primary is closer to the observer at periastron passage ($\omega \simeq 270^\circ$), the primary wind absorbs so much of the colliding wind X-ray flux that no X-rays should be detected.
They could also explain the observed X-ray emission measure with their preferred orientation of $\omega=90^\circ$.
Close to periastron passage the X-ray flux decreases, which makes observations more challenging: the emission from the central source (the shocked secondary wind close to the apex, located between the two stars), which is responsible for the hard X-rays, dims.
This dimming can be explained as due either to lower emission from the wind-collision region and/or a very large increase in $N_{\rm H}$ toward the wind-collision region.
\cite{Hamaguchietal2007} demonstrate the existence of an emission component referred to as a `central constant emission component' (CCE), which can contribute significantly to the X-ray spectrum during periastron. This component lies within $\sim$1 arcsecond of the central binary and arises from a larger (but unresolved) region surrounding the binary system; owing to its larger plasma volume, it does not vary significantly in time. \cite{Hamaguchietal2007} find that this component can be fit with a $\sim 1.1 ~\rm{keV}$ plasma and $N_{\rm H} \sim 5 \times 10^{22} ~\rm{cm}^{-2}$. This component is further discussed in \cite{Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016}.
During periastron, it is unclear to what extent this CCE component dominates the entire X-ray spectrum.
\cite{Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016} analyzed X-ray observations taken by \textit{Chandra}, \textit{XMM-Newton} and \textit{NuSTAR} telescopes, and found that the hydrogen column density to the central binary near periastron may be as large as $N_{\rm H} \approx 10^{24} ~\rm{cm}^{-2}$.
The smooth variations of $N_{\rm H}$ as a function of time before, during and after periastron could suggest that the central binary is not completely obscured and as a result, $N_{\rm H}$ measurements could at least partially probe the central binary system. Such smooth variations were demonstrated in the 2--10 keV range with the Rossi X-ray Timing Explorer (\textit{RXTE}) in \cite{Ishibashietal1999}.
In early 2020, the $\eta$~Car system underwent another periastron passage.
The X-ray light curve was observed by the Neutron Star Interior Composition Explorer (\textit{NICER}) observatory \citep{Corcoranetal2020,Espinoza-Galeasetal2020}.
The X-ray light curve reached a minimum on Feb 14, 2020 \citep{Corcoranetal2020,Espinoza-Galeasetal2020}.
The exact date of periastron is uncertain and may fall within a week or two of that date; we will hence refer to it as the 2020.1 periastron passage.
The light curve shows similar properties to that of previous X-ray minima \citep{Corcoranetal2017},
but with a somewhat earlier exit from the minimum.
The X-ray flux increases in the months preceding the periastron passage, with strong flares superimposed on top of the smooth $\sim 1/r$ increase. These flares are most likely associated with clumps in the wind \citep{MoffatCorcoran2009}, and were also observed prior to the 2020 minimum \citep{Corcoranetal2019}.
After showing a few strong X-ray flares the light curve sharply declines into a non-zero minimum which lasts for several weeks, then slowly recovers to a quiescence value.
\cite{Hamaguchietal2020} also report bright non-thermal X-ray emission in \textit{NuSTAR} observations taken during the recovery from the 2020 X-ray minimum.
In this paper, we model X-ray data for the 2020.1 periastron passage, obtained with the \textit{NICER} X-ray telescope, with the goal of explaining the variation of the X-ray luminosity $L_{\rm X}$ and of the column density $N_{\rm H}$ in the frame of the accretion model for the spectroscopic event.
According to the accretion model the X-ray minimum is not caused by absorption. Rather, the secondary star accretes mass from the primary stellar wind for several weeks near periastron passages \citep{Soker2005, Soker2007, SokerBehar2006, Akashietal2006, KashiSoker2007, KashiSoker2008b, KashiSoker2009c}. This accretion suppresses the secondary star's wind.
Since the source of the hard X-ray emission is the post-shock fast, $v_2 \simeq 3000 ~\rm{km} ~\rm{s}^{-1}$, secondary wind, the X-ray luminosity displays a minimum with a duration of several weeks.
Hydrodynamic simulations show that indeed the secondary star accretes mass near periastron, in a process that suffers instabilities \citep{Akashietal2013, Kashi2017, Kashi2019}.
The accretion model can account for several other observed properties, such as the orbital variations of numerous lines \citep{KashiSoker2007,KashiSoker2008a, KashiSoker2009a,KashiSoker2009b,KashiSoker2016,KashiSoker2018}, the infrared light curve \citep{KashiSoker2008b}, the X-ray light curve and emission measure \citep{SokerBehar2006, Akashietal2006, KashiSoker2009a}, the timing of the peaks in the light curve of the GE and LE \citep{KashiSoker2010}, the very fast velocities of gas ejected during the GE \citep{AkashiKashi2020}, and more.
\cite{Navareteetal2020} observed optical lines during the 2020.1 spectroscopic event and concluded that the circumstellar ejecta is dissipating, but the primary does not change significantly. Similar conclusions were reported by \cite{Mehneretal2019} and \cite{Daminelietal2019}.
On the other hand, long-term observations of optical lines of $\eta$~Car led \cite{Davidsonetal2005} to suggest there is a decrease in the mass-loss rate from the primary star, referred to as a `change of state'.
This effect was obtained in numerical simulations of the recovery of $\eta$~Car from the great eruption \citep{Kashietal2016}.
Further indication for the change came from a comparison of UV line emission at similar orbital phases separated by two orbital revolutions, at positions far from periastron passage \citep{Davidsonetal2018b}.
Here we show that the 2020 X-ray minimum provides further evidence for a recent decrease in the primary's mass loss rate.
In section \ref{sec:theory_NH} we describe the expected general behavior of $N_{\rm H}$ around periastron passage. We then describe the observations (section \ref{sec:obs}) and the X-ray light curve (section \ref{sec:X-Ray-LC}).
We return to the accretion model and instabilities in our analysis of the new observations in section \ref{sec:NH}. We summarize our results in section \ref{sec:summary}.
\section{Theoretical considerations for the X-ray column density}
\label{sec:theory_NH}
We derive the column density for three cases according to the position of the secondary star with respect to the primary star and the observer. These cases are indicated as $N_{\rm H,90}$, $N_{\rm H,c}$, and $N_{\rm H,f}$ in Figure \ref{fig:Schematic}. One case ($N_{\rm H,90}$) is relevant to the two opposite orientations, while each of the two other cases is relevant to a different, specific orientation.
In one orientation ($N_{\rm H,c}$) the secondary is closest to us at periastron ($\omega \simeq 90^\circ$), while in the other ($N_{\rm H,f}$) the secondary star is furthest from us at periastron ($\omega \simeq 270^\circ$). For the inclination of the orbit we adopt $i=144^\circ$, based on results in \cite{Maduraetal2012} and
\cite{Teodoroetal2016}; the adopted value represents the middle of the range determined by \cite{Teodoroetal2016}.
The inclination is defined as the angle from the line of sight to the angular momentum of the binary system (along a line perpendicular to the orbital plane). In Figure \ref{fig:Schematic} we indicate the angle between the line of sight and the orbital plane. According to \cite{Teodoroetal2016}, in the `secondary furthest orientation' at periastron the secondary is not precisely behind the primary, but there are other uncertainties that are larger. One such uncertainty is the location of the X-ray emitting zone (see below). Another is the exact density structure of the primary stellar wind, i.e., its exact mass loss rate, how clumpy it is, and how the interaction zone of the two winds behaves (see \citealt{KashiSoker2008a}). In Figure \ref{fig:Schematic} we indicate with a solid black arrow the integration line for the column density in the `secondary furthest orientation', $N_{\rm H,f}$. This line is in the plane of the diagram. For the `secondary closest orientation' column density $N_{\rm H,c}$, we will obtain a similar expression for the line of integration (solid red arrow at top of diagram), but with different integration boundaries.
\begin{figure}
\centering
\includegraphics[trim=0.8cm 14cm 4.0cm 2.8cm ,clip, scale=0.53]{EtaCarXrayDepth.pdf}
\begin{singlespace}
\caption{Schematic drawing of the geometry of the binary system in the plane that contains the center of the primary star, the secondary at periastron, and the observer. We examine two opposite orientations where during periastron the secondary star is either at the closest ($\omega \simeq 90^\circ$) or furthest point from us ($\omega \simeq 270^\circ$). We draw both alternative orientations on the figure by indicating the two alternative periastron locations of the secondary star. The green arrow indicates the orbital angular momentum axis of the binary system. The dashed horizontal line indicates that the integration is not in the plane of the figure, but at a distance of $D_{90}$ from the plane (see text).
}
\end{singlespace}
\label{fig:Schematic}
\end{figure}
We define the angle $\psi=i-\pi/2 = 54^\circ$ as indicated in Figure \ref{fig:Schematic}.
These integration lines are at a distance of $h=a(1-e)\sin \psi = 1.35 ~\rm{AU}$ from the line of sight to the center of the primary star,
where we take $a=16.64 ~\rm{AU}$ and $e=0.9$ for the semi-major axis and eccentricity, respectively. For a period of 2023 days, this value of the semi-major axis implies a total binary mass of $M_1+M_2= 150 M_\odot$.
Note that we adopted here what is usually referred to as the conventional model for the masses of $\eta$ Car, but as seen from the following equations (which are calibrated for the conventional model), the values of the column density change only slightly for the high-mass model, for which $a=19.73 ~\rm{AU}$ and $M_1+M_2= 250 M_\odot$ (see table 1 in \citealt{Kashi2017}).
In addition to $N_{\rm H,c}$ and $N_{\rm H,f}$, we also calculate the column density from the secondary star to the observer when the primary-secondary line is perpendicular to the line of sight, $N_{\rm H,90}$. This line of integration lies at a distance of $D_{90}=a(1-e^2) = 3.16 ~\rm{AU}$ above or below the plane of Figure \ref{fig:Schematic}.
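The adopted geometry and masses can be verified numerically. A minimal sketch (illustrative, not part of the original analysis) that reproduces $h$, $D_{90}$, and the total binary mass from Kepler's third law, using only the values quoted above:

```python
import math

# Adopted orbital parameters (this section)
a = 16.64                # semi-major axis [AU]
e = 0.9                  # eccentricity
i_deg = 144.0            # inclination [deg]
P_yr = 2023.0 / 365.25   # orbital period [yr]

psi = math.radians(i_deg - 90.0)     # angle between orbital plane and line of sight
h = a * (1.0 - e) * math.sin(psi)    # offset of the periastron integration lines [AU]
D90 = a * (1.0 - e**2)               # separation at the perpendicular phase [AU]
M_tot = a**3 / P_yr**2               # Kepler's third law [M_sun]

print(round(h, 2), round(D90, 2), round(M_tot))  # 1.35 3.16 150
```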
Expressions for the foregoing column densities are obtained as follows. The proton number density of the primary wind is
\begin{equation}
n_{\rm H} = \frac {X \dot M_1}{4 \pi r^2 m_{\rm H} v_1} \equiv \Lambda r^{-2},
\label{eq:nHwind}
\end{equation}
where $X$ is the hydrogen mass fraction, $v_1$ is the velocity of the primary wind, and $\dot M_1$ is the mass loss rate of the primary wind; the second equality defines the parameter
\begin{equation}
\begin{split}
\Lambda &= 6.01 \times10^{23} \left( \frac {X}{0.5} \right)
\left(\frac{\dot M_1}{3 \times 10^{-4} ~\rm{M_{\sun}} ~\rm{yr}^{-1}} \right) \\
& \times \left( \frac {v_1}{500 ~\rm{km} ~\rm{s}^{-1}} \right)^{-1} ~\rm{cm}^{-2} ~\rm{AU},
\end{split}
\label{eq:Lambda}
\end{equation}
where the unusual choice of units facilitates later scaling.
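As a consistency check, $\Lambda$ can be evaluated directly in cgs units and converted to the $\rm{cm^{-2}~AU}$ units used above. A sketch (the constants are standard cgs values; this is an illustration, not code from the paper):

```python
import math

M_sun = 1.989e33   # g
yr    = 3.156e7    # s
AU    = 1.496e13   # cm
m_H   = 1.6726e-24 # g

X    = 0.5                # hydrogen mass fraction
Mdot = 3e-4 * M_sun / yr  # primary mass-loss rate [g/s]
v1   = 500e5              # primary wind velocity [cm/s]

# Lambda = X * Mdot / (4 pi m_H v1); cgs units give cm^-1
Lambda_cgs = X * Mdot / (4.0 * math.pi * m_H * v1)
Lambda_AU  = Lambda_cgs / AU   # convert cm^-1 -> cm^-2 AU

print(f"{Lambda_AU:.2e}")  # ~6.01e+23 cm^-2 AU
```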
In this calculation we assume that the bulk of the observed 2--10 keV X-ray emission from $\eta$~Car arises very close to the secondary star. The emission should in fact be generated at some distance between the stars, which will reduce the difference in distance between the two orientation cases at periastron, but on the other hand will bring the X-ray emitting gas deeper into the dense primary wind. The latter effect is larger, and as a consequence this model may underestimate the difference in column densities between the two orientations.
Given these uncertainties, the calculation we present here is an approximation. Nonetheless, we think it is adequate for our purposes, i.e., interpreting the observed X-ray flux and absorption in terms of the binary orbital parameters.
We can now calculate the column density for the three cases illustrated in Figure \ref{fig:Schematic}. As noted,
the first, $N_{\rm H,c}$, applies when the secondary is closer to the observer at periastron, while the second, $N_{\rm H,f}$, applies when the secondary is behind the primary wind at periastron.
The third value of the column density, $N_{\rm H,90}$, is the same for both orbit orientations in the simple form presented in Figure \ref{fig:Schematic}, and this third case applies when the secondary-primary line is perpendicular to the line of sight. At that time the distance between the secondary and primary is $D_{90} = a (1-e^2) = 3.16 ~\rm{AU} $
for the parameters we adopt here.
We can obtain these column densities as follows. For the perpendicular phase we have
\begin{equation}
\begin{split}
N_{\rm H,90} &= \int^{\infty}_0 n_{\rm H} dl = \int^{\infty}_0 \Lambda \frac{dl}{l^2 + D^2_{90}} = \frac{\Lambda}{D_{90}} \frac {\pi}{2} \\
&= 2.99 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{D_{90}}{3.16 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2}.
\end{split}
\label{eq:NH90}
\end{equation}
Using the geometry as presented in Figure \ref{fig:Schematic} we find that $D_{\rm p}=a(1-e) \cos \psi$, and so $D_{\rm p}/h=\cot \psi=0.73$.
We then obtain
\begin{equation}
\begin{split}
N_{\rm H,c} &= \int^{\infty}_{D_{\rm p}} n_{\rm H} dl = \int^{\infty}_{D_{\rm p}} \Lambda \frac{dl}{l^2 + h^2} \\
& = \frac{\Lambda}{h} \left( \frac {\pi}{2} - \tan^{-1} \frac{D_{\rm p}}{h} \right) = \frac{\Lambda}{h} \psi = \frac{\Lambda}{r_{\rm{p}}} \frac{\psi}{\sin \psi} \\
&= 4.21 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{r_{\rm{p}}}{1.66 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2} ,
\end{split}
\label{eq:NHc}
\end{equation}
where $r_{\rm{p}}=a(1-e) =1.66 ~\rm{AU}$ is the periastron distance.
Similarly,
\begin{equation}
\begin{split}
N_{\rm H,f} &= \int^{\infty}_{-D_{\rm p}} n_{\rm H} dl = \int^{\infty}_{-D_{\rm p}} \Lambda \frac{dl}{l^2 + h^2}
\\& = \frac{\Lambda}{h} \left( \frac {\pi}{2} + \tan^{-1} \frac{D_{\rm p}}{h} \right) = \frac{\Lambda}{h} (\pi - \psi) = \frac{\Lambda}{r_{\rm{p}}} \frac{\pi-\psi}{\sin \psi} \\
& = 9.82 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{r_{\rm{p}}}{1.66 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2} .
\end{split}
\label{eq:NHf}
\end{equation}
These expressions show that if the distance from the primary to the X-ray emitting region is smaller than the distance from the primary to the secondary
(i.e., the X-rays are generated between the two stars),
then the column density is larger than the calibrated values in equations (\ref{eq:NHc}) and (\ref{eq:NHf}).
For later analysis the predicted ratios of these column densities are
\begin{equation}
\frac{N_{\rm H,c}}{N_{\rm H,90}} = 1.41 \left(\frac{1+e}{1.9}\right)\quad {\rm and}
\quad
\frac{N_{\rm H,f}}{N_{\rm H,90}} = 3.29 \left(\frac{1+e}{1.9}\right),
\label{eq:ratios}
\end{equation}
for the `secondary closest' and `secondary furthest' orientations, respectively.
Note that the above ratios do not depend on $\Lambda$, nor even on $a$.
They depend only on $(1+e)$, and as the eccentricity is a well-constrained quantity, taking $e\simeq0.88$--$0.92$ changes these ratios by only 1\%. We note again that this calculation assumes a smooth primary wind that fills the entire volume.
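The three column densities and their ratios can be reproduced numerically from the geometry alone. A sketch (illustrative) using the arctangent forms of the integrals above:

```python
import math

Lam = 6.01e23                # Lambda [cm^-2 AU]
a, e = 16.64, 0.9            # semi-major axis [AU], eccentricity
psi = math.radians(54.0)     # i - 90 deg

r_p = a * (1 - e)            # periastron distance [AU]
h   = r_p * math.sin(psi)    # offset of the periastron integration lines [AU]
D_p = r_p * math.cos(psi)
D90 = a * (1 - e**2)         # separation at the perpendicular phase [AU]

N90 = (Lam / D90) * (math.pi / 2)
Nc  = (Lam / h) * (math.pi / 2 - math.atan(D_p / h))  # secondary closest
Nf  = (Lam / h) * (math.pi / 2 + math.atan(D_p / h))  # secondary furthest

print(f"{N90:.2e} {Nc:.2e} {Nf:.2e}")          # ~2.99e23 4.21e23 9.82e23 cm^-2
print(round(Nc / N90, 2), round(Nf / N90, 2))  # 1.41 3.29
```

The `secondary furthest' column density exceeds the `secondary closest' one by more than a factor of two, which is the observable handle on the orientation.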
\section{Observations}
\label{sec:obs}
Observations of $\eta$ Car were obtained with the Neutron Star Interior Composition Explorer (\textit{NICER}), under \textit{NICER} Guest Observer programs 1110 (PI: K. Gendreau), 2612 (PI: M. Corcoran), and 3651 (PI: D. Espinoza-Galeas). \textit{NICER} is an X-ray telescope attached to the International Space Station. \textit{NICER}'s large effective area, broad band pass, moderate spectral resolution and 30 arcmin$^{2}$ field of view provide the capability of collecting spatially unresolved observations of X-ray emitting regions surrounding $\eta$~Car. The Point Spread Function (PSF) is approximately the entire field of view ($R\simeq3.1$ arcmin).
Archival observations of $\eta$~Car by \textit{NICER} were obtained for observing dates between 2017-07-20 and 2020-07-24. Archival data were reduced using the HEASoft package (v 6.27.2) available from the High Energy Astrophysics Archive Research Center (HEASARC). The \textit{NICER} Heasoft tool \texttt{nicerl2}\footnote{\url{https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/nicerl2.html}}
was used to create cleaned level 2 event files with up-to-date gain calibrations (\textit{NICER} CALDB version 20200722). Source and background spectral files for each observation were extracted using the \textit{NICER} background estimator tool \texttt{nibackgen3c50}\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}} (v5). Due to its flux variability, $\eta$~Car occasionally approaches the detection threshold of \textit{NICER} in a given observation (Figure \ref{fig:xrayspectra}). To prevent adding unnecessary background counts to our source spectra, we removed portions of an observation if the good time intervals indicated an exposure of less than 100 seconds; these very short exposures likely include very few source counts but can have high background levels. A maximum good time interval (GTI) duration of 3000 seconds and a maximum net high background rate of 0.05 counts s$^{-1}$ were chosen as input parameters when generating background spectra for the occasionally faint source $\eta$~Car. Of the $\approx 290$ \textit{NICER} observations with non-zero exposure times in our date range, $\approx 50$ observations that did not meet the above criteria for background selection were discarded.
The \textit{NICER} instrument is unable to produce spatially resolved spectra for the $\eta$~Car region; all X-ray emission in the 30 arcmin$^{2}$ field of view (FOV) surrounding $\eta$~Car is included in these spectra. Thus, as discussed in \cite{Hamaguchietal2007} for the case of \textit{XMM-Newton} observations of $\eta$~Car, several different X-ray emission components associated with $\eta$~Car can be observed in the \textit{NICER} spectra. In particular, the outer ejecta regions and the Homunculus Nebula both contribute significantly. Since we cannot remove these contributions to the \textit{NICER} spectra of $\eta$~Car, we instead focus on relative changes in the X-ray spectrum due to the variability of the $\eta$~Car central binary as the system first approached and then recovered from periastron. We assume that on timescales of days to months, no variability is induced by these external X-ray components in the 2--10~keV range where we perform our spectral fits.
We checked the \textit{NICER} FOV in order to exclude potential contamination from other X-ray sources.
For that we used the catalogue of X-ray sources from the \textit{Chandra} Carina Complex Project \citep{Broosetal2011} to identify other sources that may contribute to the X-ray flux.
The FOV includes $\simeq400$ additional sources, the vast majority of which are young pre-main-sequence stellar sources that have median X-ray energies $<2$~keV in the 0.5--8~keV range. Individually their fluxes are negligible compared to the extremely bright $\eta$~Car in the 2--10~keV and 5--10~keV ranges.
The most notable X-ray point source that falls within the field of view of the \textit{NICER} $\eta$~Car observations is HDE~303308 \citep[a spectroscopic binary O4.5V star;][]{Sotaetal2014}.
The combined 2--8~keV flux of all these $\sim 400$ field X-ray sources is $\simeq1.5\times 10^{-12}~\rm{erg~s^{-1}~cm^{-2}}$, which is lower than the flux in all observations that we include in our analysis of $N_{\rm H}$ presented here.
Moreover, these sources are distributed across the \textit{NICER} field of view and hence their X-ray detection rates are suppressed relative to the on-axis $\eta$~Car source.
Only one observation during the X-ray minimum (on JD$=$2458895.7) has a 2--10~keV flux smaller than $3\times 10^{-12}~\rm{erg~s^{-1}~cm^{-2}}$, and hence might include a contribution from the field X-ray sources.
Fits to the resulting \textit{NICER} spectra were obtained via the Sherpa\footnote{\url{https://cxc.cfa.harvard.edu/sherpa/}} X-ray spectral fitting package (v 4.12).
We constructed a Python script to automate the fitting process for the 242 \textit{NICER} observations.
Each spectrum was background subtracted and grouped to require a signal to noise ratio (SNR) of at least 3 per bin.
Fits were performed for each spectrum between two energy ranges, 2--10 keV and 5--10 keV (see section \ref{sec:NH}), using the Nelder-Mead Simplex optimization method with the $\chi^{2}$ Gehrels statistic.
Our tests showed that grouping the spectra by counts per bin (e.g.,
grouping by 5 or 15 counts per bin) rather than by signal-to-noise ratio
did not consistently improve the fits.
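The SNR-based grouping can be illustrated with a simple greedy pass over the count spectrum. This is a sketch in the pure-Poisson limit, ignoring background subtraction (so $\rm{SNR}=\sqrt{N}$ and $\rm{SNR}\ge3$ requires at least 9 counts per bin); it is not the actual grouping tool used in the reduction:

```python
def group_min_snr(counts, snr=3.0):
    """Greedily merge adjacent channels until each group reaches the
    requested Poisson SNR, i.e. sqrt(summed counts) >= snr."""
    min_counts = snr ** 2
    groups, acc = [], 0
    for c in counts:
        acc += c
        if acc >= min_counts:   # group is full: emit and restart
            groups.append(acc)
            acc = 0
    if acc:                     # fold an underfilled tail into the last group
        if groups:
            groups[-1] += acc
        else:
            groups.append(acc)
    return groups

binned = group_min_snr([2, 1, 7, 4, 9, 0, 3, 3, 5, 1])
print(binned)  # [10, 13, 12] -- every bin has >= 9 counts, total preserved
```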
Following a similar procedure to that discussed in \cite{Hamaguchietal2007}, we fit each spectrum with an absorbed \texttt{VAPEC} thermal equilibrium model \citep[using \texttt{wabs} for the absorption component of the model;][]{MorrisonMcCammon1983} and two Gaussians at the positions of the Fe K$\alpha$ and K$\beta$ lines (6.4 keV and 7.1 keV, respectively). An additional Gaussian component was used in the 2--10 keV fits to model the S {\sc xv} line at $\sim 2.5 ~\rm{keV}$; this component is associated with the Homunculus Nebula \citep{Hamaguchietal2007}. The Fe abundance was left free between 0.0 and 1.0 times solar in the fits, while the other metals were frozen to their solar values \citep{Hillieretal2001}. The ratio of the Fe K$\beta$ flux to the Fe K$\alpha$ flux was set to 11.3\% \citep{Hamaguchietal2007, Yamaguchietal2014}.
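The tied Gaussian line pair can be sketched as follows. This is an illustration of the tying constraint only (with the weaker K$\beta$ amplitude fixed at 11.3\% of K$\alpha$); the \texttt{VAPEC} continuum, absorption, and the actual fitting machinery are omitted:

```python
import math

def gaussian(E, ampl, center, sigma):
    """Unnormalized Gaussian line profile at energy E [keV]."""
    return ampl * math.exp(-0.5 * ((E - center) / sigma) ** 2)

def fe_lines(E, ka_ampl, sigma=0.05):
    """Fe K-alpha (6.4 keV) plus K-beta (7.1 keV), with the K-beta
    amplitude tied to 11.3% of the K-alpha amplitude."""
    return (gaussian(E, ka_ampl, 6.4, sigma)
            + gaussian(E, 0.113 * ka_ampl, 7.1, sigma))

peak_ka = fe_lines(6.4, 1.0)  # ~1.0 (the lines barely overlap)
peak_kb = fe_lines(7.1, 1.0)  # ~0.113
```

Tying the line ratio removes one free parameter per spectrum, which stabilizes the fits when the 5--10 keV counts are low.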
Spectral fitting results, including the observed X-ray fluxes calculated for both the 2--10 keV and 5--10 keV fits and $\chi^2_{\rm red}$ values, are listed in
Tables 1 and 2.
All fit parameters are reported with their 90\% confidence intervals. The fluxes reported for the 2--10~keV fits do not include the S {\sc xv} Gaussian component.
Emission measure (EM) is reported from the fit normalization parameter assuming a distance of $2.6 ~\rm{kpc}$. Large uncertainties are present in the 5--10 keV best-fit parameters, as expected for a source with low flux in this energy range. For both the 2--10 keV and 5--10 keV fits, we do not report values or errors where the best-fit model did not converge on a particular parameter value (i.e., a maximum or minimum boundary was hit).
\renewcommand{\arraystretch}{0.98}
\begin{table*}[]
\begin{center}
\begin{singlespace}
\caption{Best-fit model parameters to the 2--10 keV \textit{NICER} X-ray spectra of $\eta$~Car between July 2017 and July 2020 and their 90\% confidence values.}
\end{singlespace}
\begin{tabular}{c c c c c c c c}
\hline\hline
Date & Obs.\,ID. & $N_\mathrm{H}$ & $kT$ & Fe & log(EM) & Flux $\times10^{-11}$ & $\chi^2_{\rm red}$\\
$[$UT$]$ & & $[10^{22}$ cm$^{-2}]$ & $[$keV$]$ & $[$Z$_{\odot}]$ & $[$cm$^{-3}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & \\
\hline
2017-07-20T02:59:20 & 1110010101 & 2.7$^{+0.9}_{-0.9}$ & 3.6$^{+1.6}_{-0.9}$ & 0.67$^{+0.37}_{-0.33}$ & 57.70$^{+0.12}_{-0.13}$ & 4.2$^{+1.0}_{-1.0}$ &0.67\\
2017-07-21T00:36:20 & 1110010102 & 3.0$^{+0.6}_{-0.7}$ & 3.2$^{+0.9}_{-0.5}$ & 0.76$^{+0.28}_{-0.25}$ & 57.79$^{+0.08}_{-0.10}$ & 4.5$^{+1.4}_{-1.1}$ &0.54\\
2017-07-22T04:23:20 & 1110010103 & 3.3$^{+0.6}_{-0.7}$ & 2.9$^{+0.6}_{-0.4}$ & 0.60$^{+0.28}_{-0.25}$ & 57.84$^{+0.09}_{-0.09}$ & 4.2$^{+0.6}_{-0.6}$ &0.61\\
2017-07-24T10:25:00 & 1110010105 & 2.8$^{+1.6}_{-1.3}$ & 3.4$^{+2.6}_{-1.2}$ & 0.79$^{+0.66}_{-0.50}$ & 57.72$^{+0.22}_{-0.18}$ & 4.0$^{+3.5}_{-2.8}$ &0.62\\
2017-10-05T02:11:20 & 1110010106 & 3.6$^{+0.8}_{-0.7}$ & 2.5$^{+0.6}_{-0.5}$ & 0.54$^{+0.35}_{-0.29}$ & 57.92$^{+0.12}_{-0.11}$ & 4.1$^{+1.6}_{-1.5}$ &0.63\\
2017-11-21T06:42:27 & 1110010108 & 3.0$^{+0.8}_{-0.8}$ & 3.0$^{+0.8}_{-0.6}$ & 0.56$^{+0.32}_{-0.28}$ & 57.79$^{+0.11}_{-0.10}$ & 4.0$^{+1.6}_{-1.5}$ &0.56\\
2017-12-12T06:09:20 & 1110010109 & 2.9$^{+1.7}_{-1.1}$ & 2.7$^{+1.0}_{-0.8}$ & $<$1.49 & 57.81$^{+0.22}_{-0.15}$ & 3.6$^{+1.0}_{-1.1}$ &0.73\\
2017-12-22T00:41:36 & 1110010110 & 2.7$^{+0.2}_{-0.2}$ & 3.5$^{+0.3}_{-0.3}$ & 0.53$^{+0.10}_{-0.09}$ & 57.83$^{+0.03}_{-0.03}$ & 5.4$^{+0.6}_{-0.6}$ &0.78\\
2017-12-30T00:19:52 & 1110010111 & 3.3$^{+0.8}_{-0.7}$ & 3.3$^{+0.9}_{-0.7}$ & 0.62$^{+0.32}_{-0.28}$ & 57.86$^{+0.11}_{-0.10}$ & 5.1$^{+1.3}_{-1.3}$ &0.60\\
2018-01-18T20:51:00 & 1110010114 & 3.2$^{+1.0}_{-1.0}$ & 3.1$^{+1.4}_{-0.7}$ & $<$0.42 & 57.88$^{+0.14}_{-0.15}$ & 4.6$^{+2.0}_{-1.7}$ &0.54\\
\hline
\end{tabular}
\end{center}
The columns in order describe the (1) \textit{NICER} observation date and time in UT, (2) unique observation number associated with the observation, (3) absorbing column density, (4) plasma energy (temperature), (5) Iron abundance relative to Solar, (6) log of the emission measure, (7) observed flux in the 2--10 keV band assuming a distance of 2.6 kpc, and (8) the reduced $\chi^{2}$ value associated with the best fit model.
Table locations with missing values indicate that the spectral parameter or both of its confidence values hit a hard minimum or maximum during fitting and are thus not reliable. Spectral parameters missing a single upper or lower confidence value are reported as lower or upper limits, respectively.
X-ray spectral models and fitting procedure are described in more detail in section \ref{sec:obs}.
The full table is available in machine readable format.
\label{table:210}
\end{table*}
\begin{table*}[]
\begin{center}
\begin{singlespace}
\caption{
Best-fit model parameters to the 5--10 keV \textit{NICER} X-ray spectra of $\eta$~Car between July 2017 and July 2020 and their 90\% confidence values.}
\end{singlespace}
\begin{tabular}{c c c c c c c c c}
\hline\hline
Date & Obs.\,ID. & $N_\mathrm{H}$ & $kT$ & Fe & log(EM) & Flux $\times10^{-11}$ & Int. Flux $\times10^{-11}$ & $\chi^2_{\rm red}$\\
$[$UT$]$ & & $[$$10^{22}$ cm$^{-2}]$& $[$keV$]$ & $[$Z$_{\odot}]$ & $[$cm$^{-3}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & \\
\hline
2017-07-20T02:59:20 & 1110010101 & $<$64 & 1.8$^{+3.3}_{-0.8}$ & 0.69$^{+1.52}_{-0.41}$ & 58.28$^{+1.04}_{-0.77}$ & 1.2$^{+4.1}_{-1.1}$ &2.1$^{+6.7}_{-1.9}$ &0.53\\
2017-07-21T00:36:20 & 1110010102 & --- & 4.9$^{+1.9}_{-2.7}$ & --- & 57.53$^{+0.84}_{-0.11}$ & 1.5$^{+1.2}_{-0.9}$ &2.0$^{+1.3}_{-1.1}$ &0.60\\
2017-07-22T04:23:20 & 1110010103 & 59$^{+38}_{-39}$ & 1.3$^{+0.6}_{-0.4}$ & 0.41$^{+0.48}_{-0.22}$ & 59.29$^{+0.84}_{-0.85}$ & 2.2$^{+7.2}_{-1.8}$ &7.2$^{+19}_{-6.2}$ &0.65\\
2017-07-24T10:25:00 & 1110010105 & --- & 2.3$^{+4.1}_{-1.4}$ & 0.60$^{+2.13}_{-0.45}$ & 58.09$^{+1.91}_{-0.58}$ & 0.9$^{+3.7}_{-0.8}$ &2.4$^{+6.4}_{-2.2}$ &0.67\\
2017-10-05T02:11:20 & 1110010106 & $<$106 & 1.2$^{+1.8}_{-0.4}$ & 0.37$^{+0.77}_{-0.26}$ & 59.32$^{+0.93}_{-1.54}$ & 2.2$^{+10}_{-2.0}$ &7.8$^{+27}_{-6.9}$ &0.63\\
2017-11-21T06:42:27 & 1110010108 & --- & 1.9$^{+3.0}_{-0.8}$ & 0.73$^{+1.10}_{-0.44}$ & 58.06$^{+0.99}_{-0.64}$ & 0.9$^{+3.2}_{-0.8}$ &1.4$^{+4.4}_{-1.2}$ &0.57\\
2017-12-12T06:09:20 & 1110010109 & $<$95 & 1.3$^{+1.0}_{-0.7}$ & --- & 58.46$^{+1.61}_{-0.64}$ & 0.5$^{+2.3}_{-0.5}$ &1.5$^{+4.8}_{-1.3}$ &1.74\\
2017-12-22T00:41:36 & 1110010110 & 29$^{+17}_{-19}$ & 2.2$^{+0.9}_{-0.5}$ & 0.38$^{+0.14}_{-0.10}$ & 58.44$^{+0.39}_{-0.46}$ & 2.0$^{+3.4}_{-1.5}$ &3.6$^{+5.1}_{-2.8}$ &0.58\\
2017-12-30T00:19:52 & 1110010111 & $<$63 & 1.7$^{+1.5}_{-0.6}$ & 0.77$^{+0.87}_{-0.46}$ & 58.34$^{+0.96}_{-0.55}$ & 1.0$^{+2.9}_{-0.9}$ &1.8$^{+3.8}_{-1.4}$ &0.72\\
2018-01-18T20:51:00 & 1110010114 & $<$99 & 1.6$^{+5.5}_{-1.0}$ & $<$0.80 & 58.76$^{+1.45}_{-1.28}$ & 1.5$^{+8.3}_{-1.3}$ &4.7$^{+15}_{-3.9}$ &0.37\\
\hline
\end{tabular}
\end{center}
The columns in order describe the (1) \textit{NICER} observation date and time in UT, (2) unique observation number associated with the observation, (3) absorbing column density, (4) plasma energy (temperature), (5) Iron abundance relative to Solar, (6) log of the emission measure, (7) observed flux in the 5--10 keV band assuming a distance of 2.6 kpc, (8) the intrinsic (i.e., unabsorbed) flux in the 5--10 keV band, and (9) the reduced $\chi^{2}$ value associated with the best fit model.
Table locations with missing values indicate that the spectral parameter or both confidence values hit a hard minimum or maximum when fitting and thus are not reliable. Spectral parameters missing a single upper or lower confidence value are reported as lower or upper limits, respectively.
X-ray spectral models and fitting procedure are described in more detail in section \ref{sec:obs}.
The full table is available in machine readable format.
\label{table:510}
\end{table*}
Although the abundances of the X-ray emitting plasma associated with $\eta$ Car are uncertain (and are likely enriched in He and N; e.g., \citealt{Gulletal2020}), we note that our assumption of solar metal abundances (apart from Fe) is unlikely to affect these fit results. This is mainly because the 5--10~keV region is largely devoid of strong metal emission lines, apart from the 6.4~keV and 7.1~keV Fe lines. Indeed, based on tests performed on a subset of observations, we confirmed that the results for X-ray flux and $N_{\rm H}$ are insensitive to the assumed metallicity.
Figure \ref{fig:xrayspectra} shows selected $\eta$~Car \textit{NICER} spectra for a number of notable epochs.
January 7, 21, 27, and February 3 (2020) are times of strong peaks in the light curve, which are usually referred to as flares.
February 14 and 18 are representative spectra taken during the X-ray minimum.
Figs. \ref{fig:xrayspectra210} and \ref{fig:xrayspectra510} show our fit to the \textit{NICER} 2--10 keV and 5--10 keV energy ranges, respectively.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 1.5cm,clip=true,width=0.99\textwidth]{etacar_NICER_variability_up2date_caldb_SIMPLEX_dtmin100_dtmax3000_hbg05_03_10kev_log_log_v1}
\begin{singlespace}
\caption{
\textit{NICER} X-ray spectra of $\eta$ Car across the 2020 periastron passage grouped to require a minimum SNR$=$6. Observations of the flare are displayed on January 7, 21, 27, and February 3. \textit{NICER} spectra during the X-ray minimum are displayed for February 14 and 18. The extended X-ray emitting regions surrounding $\eta$~Car contribute primarily at energies $\la 2$ keV, where little variability is observed. The 2--10 keV emission is associated with previously shocked gas farther from the central stars. Emission in the 5--10 keV range comes from the apex of the colliding winds located between the two stars but closer to the secondary.}
\end{singlespace}
\label{fig:xrayspectra}
\end{figure*}
\begin{figure*}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-07_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-21_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-27_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-03_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-14_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-18_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\begin{singlespace}
\caption{X-ray spectral fits to the 2--10 keV \textit{NICER} data of $\eta$ Car across the 2020 X-ray minimum. Spectra are grouped to require a minimum SNR$=$3 where dates and colors correspond to the same format as Figure \ref{fig:xrayspectra}. The best-fit models are shown in orange and described in section \ref{sec:obs}.}
\end{singlespace}
\label{fig:xrayspectra210}
\end{figure*}
\begin{figure*}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-07_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-21_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-27_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-03_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-14_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-18_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\begin{singlespace}
\caption{X-ray spectral fits to the 5--10 keV \textit{NICER} data of $\eta$ Car across the 2020 X-ray minimum. Spectra are grouped to require a minimum SNR$=$3 where dates and colors correspond to the same format as Figure \ref{fig:xrayspectra}. The best-fit models are shown in orange and described in section \ref{sec:obs}.}
\end{singlespace}
\label{fig:xrayspectra510}
\end{figure*}
Figure \ref{fig:xray} presents the resulting broad-band (2--10 keV) and hard (5--10 keV) X-ray flux variations (hereafter we will refer to them simply as light curves) during the 2020 spectroscopic X-ray minimum.
We present only observations that yielded reduced $\chi^{2}$ values in the range $0.4\leq\chi^2_{\rm red}\leq2.5$ and for which 90\% confidence intervals on $N_{\rm H}$ were determined, as well as observations with very low flux, $F\leq 10^{-11} ~\rm{erg~s^{-1}~cm^{-2}}$, even when the aforementioned conditions on $\chi^2_{\rm red}$ and $N_{\rm H}$ were not fulfilled.
Adopting a distance of $2.6 ~\rm{kpc}$ \citep{Davidsonetal2018a}, a flux of $10^{-10} ~\rm{erg~s^{-1}~cm^{-2}}$ corresponds to a luminosity of $8.09\times10^{34} ~\rm{erg~s^{-1}}$.
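As an illustrative numerical cross-check (a reader's sketch, not part of our analysis pipeline), the conversion follows directly from $L = 4\pi d^{2} F$:

```python
import math

# Illustrative check of the flux-to-luminosity conversion L = 4*pi*d^2*F;
# not code used in the analysis itself.
KPC_TO_CM = 3.086e21              # centimetres per kiloparsec
d = 2.6 * KPC_TO_CM               # adopted distance to Eta Car [cm]
flux = 1.0e-10                    # observed flux [erg s^-1 cm^-2]

luminosity = 4.0 * math.pi * d**2 * flux
print(f"L = {luminosity:.2e} erg/s")  # reproduces ~8.09e34 erg/s
```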
Figure \ref{fig:xray} also shows the 0.5--10 keV X-ray light curve reported by \cite{Espinoza-Galeasetal2020}, where we have combined the two sets of observations.
In the \cite{Espinoza-Galeasetal2020} light curve, the X-ray minimum appears to be interrupted by a flare, with a peak flux $3.3\times10^{-11} ~\rm{erg~s^{-1}~cm^{-2}}$ and a duration of $\simeq 5 ~\rm{days}$.
However, in our reanalysis of the 2--10 keV \textit{NICER} data, we did not recover a flare during this time period (compare the solid green and dashed magenta curves in Figure~\ref{fig:xray}).
The S/N ratios in the spectra during the X-ray minimum are poor, such that small differences in background subtraction methods may account for the differences between the respective light curves.
The \cite{Espinoza-Galeasetal2020} ATel does not detail its background subtraction method, and thus it is difficult to identify the source of the discrepancy in the flare detection. As described in section \ref{sec:obs}, we use the most up-to-date calibration files as well as the background determination method suggested by the \textit{NICER} team (Remillard et al, in prep\footnote{See \textit{NICER} background estimator tools page: \url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}}).
We note that the apparent flare reported by \cite{Espinoza-Galeasetal2020} cannot originate from the 0.5--2 keV spectral component, since this component arises from $\eta$~Car's surrounding, extended emission and hence does not vary during the X-ray minimum \citep{Hamaguchietal2007}.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{Flux_14a15a.pdf}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{Flux_zoom_14a15a.pdf}
\begin{singlespace}
\caption{
\textit{Upper panel:} The \textit{NICER} 2--10 keV and 5--10 keV (hard X-ray) light curves of $\eta$~Car and 90\% confidence values for both ranges.
\textit{Lower panel:} zoom in on the 2020 X-ray minimum.
We compare our fitted flux to the results of \cite{Espinoza-Galeasetal2020}. Note that error bars were not given for the data in \cite{Espinoza-Galeasetal2020}.
The time axis is set to the beginning of periastron passage; $t=0$ is on Feb 10, 2020 (JD$=2458890$).
We see that the X-ray minimum flare obtained by \cite{Espinoza-Galeasetal2020} on $t=8 ~\rm{days}$ is not present in our results.
The figure also demonstrates that the fit for the 5--10 keV energy range results in better error estimates than for the 2--10 keV energy range.
}
\end{singlespace}
\label{fig:xray}
\end{figure*}
The duration of the X-ray minimum has varied over the last 4 cycles in which it was closely monitored \citep{Corcoranetal2017}.
Figure \ref{fig:xray_cycles} shows a comparison of the last 5 cycles, focused on the X-ray minimum.
As can be seen in Figure \ref{fig:xray}, the duration of the X-ray minimum in the 2020 cycle is not well constrained as the recovery is gradual and has some fluctuations. The duration can be considered to be anything in the range $\simeq25$--$37 ~\rm{days}$.
The hard component has a clearer recovery, and its duration is $23 ~\rm{days}$.
The recovery from the 2020 X-ray minimum occurs at the steepest slope amongst the 5 cycles.
A quantitative criterion that takes the slope of the recovery into account would indicate that the present cycle had the shortest minimum.
If, for example, we consider the duration of the X-ray minimum to be the time during which the flux is $F \leq 5 \times 10^{-11} ~\rm{erg~s^{-1}~cm^{-2}}$, then the 2020 X-ray minimum is the shortest.
Hereafter we will refer to this finding about the 2020 minimum simply as the `fastest recovery'.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{compare_cycles.pdf}
\begin{singlespace}
\caption{
Comparison of the X-ray minimum for the last 5 X-ray minima of $\eta$~Car.
Data for earlier cycles is adopted from \cite{Corcoranetal2017}.
The time scale is fixed at the beginning of the {1997--8} X-ray minimum, and a period of 2023 days is used to fold the data.
The duration of the 2020 X-ray minimum can be considered to be anything in the range $\simeq25$--$37 ~\rm{days}$, depending on the definition.
The recovery from the 2020 X-ray minimum occurs at the steepest slope.
}
\end{singlespace}
\label{fig:xray_cycles}
\end{figure*}
Figure \ref{fig:N_H_210510} shows the hydrogen column density $N_{\rm H}$ from the fitted \textit{NICER} spectra for the broad-band and hard X-ray ranges, while Figure \ref{fig:N_H} shows the hard X-ray flux and $N_{\rm H}$ together.
The duration of X-ray minimum for the hard X-ray component is marked in these two figures.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.95\textwidth]{N_H_14a15a.pdf}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.97\textwidth]{N_H_zoom_14a15a.pdf}
\begin{singlespace}
\caption{\textit{Upper panel:} The derived hydrogen column density $N_{\rm H}$ for the 2--10 keV and 5--10 keV energy ranges, and 90\% confidence values for both ranges.
The time axis begins about two years before periastron passage, when the stars are very far from each other in their $P=2023$ day orbit.
Dashed vertical lines in both panels represent the beginning and end of the X-ray minimum (note the uncertainties in the exit date that we discuss in the text).
\textit{Lower panel:} zoom in on the 2020 X-ray minimum (close to periastron passage).
}
\end{singlespace}
\label{fig:N_H_210510}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.89\textwidth]{N_H_and_F.pdf}
\includegraphics[trim= 1.1cm 0.0cm 0.0cm 0.0cm,clip=true,width=1.00\textwidth]{N_H_and_F_zoom.pdf}
\end{center}
\begin{singlespace}
\caption{\textit{Upper panel:} The derived hydrogen column density $N_{\rm H}$ of the hard X-ray $\eta$~Car from \textit{NICER} observations (left axis, blue line).
The time axis begins about two years before periastron passage, when the stars are very far from each other in their $P=2023$ day orbit.
Dashed vertical lines in both panels represent the beginning and end of the X-ray minimum.
\textit{Lower panel:} zoom in on the 2020 X-ray minimum (close to periastron passage).
The flux from Figure \ref{fig:xray} is also plotted in both panels (right axis, red line).
}
\end{singlespace}
\label{fig:N_H}
\end{figure*}
\section{The X-ray light curve}
\label{sec:X-Ray-LC}
Before we turn to analyze the column density evolution, we discuss the X-ray light curve shortly before, during, and right after the X-ray minimum. We emphasize three properties of the X-ray light curve, and explain them in the framework in which the hard X-ray emission results from the post-shock secondary wind as it collides with the primary wind in the regions between the two stars (e.g., \citealt{PittardCorcoran2002, Akashietal2006}).
\textit{1. A deep X-ray minimum.} Figure \ref{fig:xray} shows that during the X-ray minimum the X-ray emission is very weak. Absorption alone cannot account for this.
Hydrodynamic simulations by our group \citep{Akashietal2013, Kashi2017, Kashi2019} show that near periastron passage the secondary star accretes mass from the primary wind, a process that shuts down the secondary wind for several weeks.
For several weeks the flow of gas in the binary system is that of a secondary star accreting from the primary wind, rather than a flow of colliding winds.
These simulations reveal a complicated flow structure that is highly asymmetrical (apart from mirror-symmetry about the equatorial plane), and more complicated than the Bondi-Hoyle-Lyttleton (BHL; \citealt{HoyleLyttleton1939,BondiHoyle1944}) accretion picture. Nevertheless, we note that the BHL formula reproduces the accretion rate to within a factor of a few \citep{KashiSoker2009b}. The high $N_{\rm H}$ results in a lack of radiative cooling at the surface of the secondary star, further complicating the flow structure \citep{Akashietal2006}.
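For context, the BHL estimate mentioned above follows from the gravitational-capture cross-section of the secondary moving through the primary wind (a textbook expression quoted here only for orientation, not a result derived in this work; $M_{2}$ is the secondary mass, $\rho$ the local primary-wind density, and $v_{\rm rel}$ the relative velocity):

```latex
\begin{equation*}
\dot{M}_{\rm BHL} \simeq \pi R_{\rm acc}^{2} \, \rho \, v_{\rm rel}
= \frac{4 \pi G^{2} M_{2}^{2} \rho}{v_{\rm rel}^{3}},
\qquad
R_{\rm acc} = \frac{2 G M_{2}}{v_{\rm rel}^{2}} .
\end{equation*}
```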
The \textit{NICER} light curve (Figure \ref{fig:xray}) suggests indeed that the secondary wind ceases to exist during the X-ray minimum. This seems to support the numerical results that the primary wind manages to reach the secondary star, and the secondary star accretes mass from the wind of the primary star rather than blowing its wind (e.g., \citealt{Soker2005, Kashi2019}; see section \ref{sec:intro}). According to this scenario, when the secondary star rebuilds its wind the X-ray emission resumes. In section \ref{sec:summary} we further discuss this point in relation to the orientation of the binary system.
During most of the X-ray minimum the X-ray emission is due mainly to post-shock secondary wind that was shocked weeks to months before periastron passage, and now resides at large distances of {$d_{x, {\rm min}} \ga 10 ~\rm{AU}$} from the primary star.
By equation (\ref{eq:NHc}) or equation (\ref{eq:NHf}) this ensures a low column density, as we replace $r_{\rm{p}}$ with $d_{x, {\rm min}}$.
Although the post-shock secondary wind suffers adiabatic cooling, its X-ray tail contributes to the $>5 ~\rm{keV}$ band.
This explains the low values of $N_{\rm H}$ at the X-ray minimum.
\textit{2. An early exit from the X-ray minimum.} From Figure \ref{fig:xray_cycles} we learn that the last cycle had the earliest exit from the X-ray minimum. This continues the trend of the previous two cycles that had earlier exits than the first two cycles for which we have X-ray light curves \citep{Corcoran2005, Corcoranetal2017, Espinoza-Galeasetal2020}. As we mention in section \ref{sec:intro}, this might result from a weaker primary wind that allows the secondary wind to rebuild itself earlier.
\textit{3. Strong pre-minimum fluctuations.} One feature common to all five light curves of the five cycles is that the light curve after exit from the minimum is relatively smooth. On the other hand, the light curves in the weeks to months before the minimum show large fluctuations, known as flares \citep{MoffatCorcoran2009}. This most likely is an outcome of large-amplitude instabilities in the process of the wind collisions, as numerical simulations show \citep{Akashietal2013, Kashi2017, Kashi2019}. This is important also for the minimum itself. The instabilities imply the presence of dense clumps in the primary wind. Such clumps are the first to reach the secondary star as the system approaches periastron, and they seem to weaken the secondary wind. This in turn allows more of the primary wind to reach the secondary star and completely or almost completely turn off the secondary wind for the duration of the minimum.
\section{The column density to the hard X-ray source (5--10 ~\rm{keV})}
\label{sec:NH}
We perform our analysis under the common assumption that in $\eta$~Car most of the hard X-ray emission, $>5 ~\rm{keV}$, comes from the central binary \citep{Hamaguchietal2007}, namely from the wind collision region (specifically, the post-shock secondary wind). This region lies mainly between the two stars and is closer to the secondary star \citep[e.g.,][]{Ishibashietal1999, PittardCorcoran2002, Akashietal2006, Hamaguchietal2007}.
\subsection{$N_{\rm H}$ close to apastron}
\label{sucsec:NHapastron}
\cite{Hamaguchietal2007} deduced from their analysis that away from periastron (near apastron) the column density toward the hard X-ray emission component ($>5 ~\rm{keV}$) is $N_{\rm H,a} \simeq 17 \times 10^{22} ~\rm{cm}^{-2}$ (their figure 13). Subscripts `$a$' and `$p$' refer to values near apastron and near periastron, respectively.
First we note that the X-ray light curve has large fluctuations (`flares' and `troughs'), as in previous cycles (e.g., \citealt{Corcoran2005}). The same goes for $N_{\rm H}$ toward the main hard X-ray source. From the left side of the upper panel of Figure \ref{fig:N_H} we find that away from periastron the values of the column density are in the range of $N_{\rm H,a} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$.
\cite{KashiSoker2008a} calculated (their figure 6) the expected values of the absorbing column density for the two orientations close to apastron.
For $\omega \simeq 270^\circ$ (secondary closer to the observer at apastron) they obtained $N_{\rm H,a}{\rm (Cal270)} \simeq 4 \times 10^{22} ~\rm{cm}^{-2}$, while for $\omega \simeq 90^\circ$ (primary closer to the observer at apastron) they obtained $N_{\rm H,a}{\rm (Cal90)} \simeq 17 \times 10^{22} ~\rm{cm}^{-2}$.
It is clear that the $\omega \simeq 270^\circ$ orientation (secondary close to us near apastron) is not consistent with the high ($N_{\rm H,a} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$) column density derived in this work towards the wind collision region.
Therefore --- although there are large uncertainties and large variations from data point to data point --- overall, the new column density values we find from \textit{NICER} observations of the last cycle near apastron strengthen the claim of \cite{KashiSoker2008a} for the $\omega \simeq 90^\circ$ orientation.
\subsection{$N_{\rm H}$ close to periastron}
\label{subsec:NHperiastron}
To determine the column density toward the post-shock secondary wind, we consider only the hard X-rays, $>5 ~\rm{keV}$, which we attribute (see above) to the post-shock secondary wind, mainly close to the secondary star.
During most of the $>5 ~\rm{keV}$ X-ray minimum, JD=2458889 (10-2-2020) to JD=2458912 (4-3-2020), the values of $N_{\rm H}$ are not much larger than the $N_{\rm H}$ values before and after the X-ray minimum, when the X-ray flux is much larger. Although the uncertainties in the values of $N_{\rm H}$ are on the order of the values themselves, the inferred change in $N_{\rm H}$ is sufficiently small to suggest that the minimum is \textit{not} due to absorption of the X-ray source. Hence, the X-ray source power must substantially diminish during the X-ray minimum.
During the X-ray minimum, the $N_{\rm H}$ values lie in the range $28$--$72 \times 10^{22} ~\rm{cm}^{-2}$, with a median value of $\simeq 57 \times 10^{22} ~\rm{cm}^{-2}$.
Taking the weighted mean over this range we get $N_{\rm H,min}=(53 \pm 13) \times 10^{22} ~\rm{cm}^{-2}$.
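The weighted means quoted in this section are standard inverse-variance averages; the sketch below illustrates the procedure with made-up placeholder values (not the actual per-observation \textit{NICER} fits):

```python
# Inverse-variance weighted mean of column-density measurements.
# The numbers below are illustrative placeholders, not NICER data.
values = [40.0, 55.0, 60.0, 70.0]   # N_H in units of 1e22 cm^-2
sigmas = [15.0, 10.0, 12.0, 20.0]   # symmetrized uncertainties, same units

weights = [1.0 / s**2 for s in sigmas]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
err = (1.0 / sum(weights)) ** 0.5   # uncertainty of the weighted mean
print(f"weighted mean = {mean:.1f} +/- {err:.1f}  (x 1e22 cm^-2)")
```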
We also note that when $\eta$ Car was observed during the 2003.5 X-ray minimum both by \textit{XMM-Newton} and \textit{Chandra} (which has better spatial resolution\footnote{\cite{Henleyetal2008} mention that the \textit{Chandra} observed spectral line profiles during the 2003.5 X-ray minimum can be fitted with synthetic profiles with a model of the emissivity along the colliding winds boundary.}), similar values of $N_{\rm H} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$ were derived from observations with both telescopes \citep{Hamaguchietal2007}.
This is also the same range of values we derive here during the 2020 X-ray minimum.
On JD=2458910 the binary system is about 20 days after periastron. At that time the binary has advanced by about $90^\circ$ in orbital phase relative to periastron, and the column density is comparable to the mean value during periastron.
There are few valid $N_{\rm H}$ observations at the exit from periastron, so we also consider values at later times, during the recovery (exit) from the minimum, JD=2458912 to JD=2458930. We note again that the exit from the minimum in the last cycle is the earliest among the five recorded X-ray cycles, so the hard X-ray flux at exit is sufficiently strong to allow us to determine $N_{\rm H}$ at exit from the minimum. During this exit period the values of the column density have a median of $\simeq 58 \times 10^{22} ~\rm{cm}^{-2}$, and taking the weighted mean over the period gives $N_{\rm H}{\rm (exit)}=(67\pm 11) \times 10^{22} ~\rm{cm}^{-2}$.
Thus we obtain
\begin{equation}
\frac{N_{\rm H,p}}{N_{+90^\circ}} \simeq 0.79 \pm 0.23,
\label{eq:ObservedRatio+}
\end{equation}
where $N_{\rm H,p}$ and $N_{+90^\circ}$ are the means of the best-fit column densities during and after periastron passage.
Prior to the periastron passage the X-ray flux is larger, resulting in better determined values of $N_{\rm H}$.
About 20 days before the beginning of the X-ray minimum, JD $\simeq 2458870$, the binary system is about to enter the X-ray minimum. Taking the range JD=2458865 to JD=2458875 the column density has a median of $\simeq 25 \times 10^{22} ~\rm{cm}^{-2}$, and taking weighted mean over this range gives $N_{\rm H}{\rm (enter)} =(26 \pm 3) \times 10^{22} ~\rm{cm}^{-2}$. Adopting this value for the column density $90^\circ$ before periastron, i.e., $N_{\rm H}{\rm (enter)}=N_{-90}$, we find
\begin{equation}
\frac{N_{\rm H,p}}{N_{-90}} \simeq 2.04 \pm 0.55,
\label{eq:ObservedRatio-}
\end{equation}
where $N_{\rm H,p}$ and $N_{-90^\circ}$ are the means of the best-fit column densities during and before periastron passage.
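The uncertainties on these ratios follow from standard error propagation for a quotient of independent measurements, $\sigma_r = r\sqrt{(\sigma_a/a)^2 + (\sigma_b/b)^2}$; a short illustrative check using the weighted means quoted above:

```python
# Propagate uncertainties onto the N_H ratios quoted in the text, assuming
# independent, symmetric errors (an illustrative check, not pipeline code).
def ratio_with_error(a, sa, b, sb):
    r = a / b
    return r, r * ((sa / a) ** 2 + (sb / b) ** 2) ** 0.5

# Weighted means (x 1e22 cm^-2): N_H,p = 53 +/- 13,
# N_H(exit) = 67 +/- 11, N_H(enter) = 26 +/- 3.
r_plus, e_plus = ratio_with_error(53.0, 13.0, 67.0, 11.0)
r_minus, e_minus = ratio_with_error(53.0, 13.0, 26.0, 3.0)
print(f"N_H,p / N_+90 = {r_plus:.2f} +/- {e_plus:.2f}")    # 0.79 +/- 0.23
print(f"N_H,p / N_-90 = {r_minus:.2f} +/- {e_minus:.2f}")  # 2.04 +/- 0.55
```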
Let us compare the observationally determined ratios in equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-}) with the theoretical expectation of equation (\ref{eq:ratios}). We first note that the simple theoretical setting we use in deriving equation (\ref{eq:ratios}) gives the same column density before and after the X-ray minimum, i.e., ${N_{-90}}={N_{+90}}$. Therefore, if we compare one value of the theoretical expectation with the observational findings, it is the average of the values of equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-}), i.e., a ratio of $N_{\rm H,p}/N_{\rm H, 90} \simeq 1.4$.
For an assumed orientation in which the secondary star is closest to us at periastron ($\omega \simeq 90^\circ$) the theoretical ratio of the column density at periastron to that at $90^\circ$ orbit, which occurs about 20 days before and after periastron, is $N_{\rm H,c} /N_{\rm 90} \simeq 1.4$, while for the orientation where the secondary star is away from us at periastron ($\omega \simeq 270^\circ$) the expected value is $N_{\rm H,f} /N_{\rm 90} \simeq 3.3$ (equation \ref{eq:ratios}).
The observed ratios (equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-})) are hence more consistent with the $\omega=90^\circ$ orientation.
We note that it is possible that the $N_{\rm H}$ we deduce during the faintest portions of the \textit{NICER} X-ray minimum is not that toward the vicinity of the secondary star, as we assume here, but rather toward a different weak source near the center: what \cite{Hamaguchietal2007,Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016} term the central constant emission (CCE) component.
We prefer the foregoing model --- wherein the intrinsic X-ray luminosity has declined (due to accretion onto the secondary) and $N_{\rm H}$ is indeed measured toward the central binary --- in part because of the smooth variation of $N_{\rm H}$ across the X-ray minimum. We would have expected that, if the $N_{\rm H}$ during the X-ray minimum were measuring the absorbing column toward an X-ray source whose extent is much larger than the central binary, then $N_{\rm H}$ would show a steep drop as the flux drops when entering the X-ray minimum, and that these lower values would be sustained until the flux recovers.
This signature is not apparent in these observations,
although we note that the value of $N_{\rm H}$ and its uncertainties are poorly constrained due to the low X-ray flux during minimum.
Similar smooth variation has been observed for the {1997--8} X-ray minimum by \textit{RXTE} \citep{Ishibashietal1999}.
\section{Summary and Discussion}
\label{sec:summary}
Beginning in July 2017, the \textit{NICER} X-ray telescope facility has regularly been observing $\eta$~Car, including daily coverage of the 2020.1 X-ray minimum that coincided with the so-called ``spectroscopic event'' of strong variability in visible and IR line emission which occurs around periastron passages (with the last passage occurring in February 2020).
We processed and analyzed the \textit{NICER} X-ray observations (examples of which are shown in Figure \ref{fig:xrayspectra}), with our analysis focused on the hard (5--10 keV) X-ray light curve and the hydrogen column density to the source of the hard X-rays.
The light curve shows the expected X-ray minimum 5.54 years after the previous minimum in mid-2014. The \textit{NICER} data demonstrate that this most recent periastron passage exhibited the fastest recovery among the five observed X-ray minima (Figure \ref{fig:xray}).
We interpret the fast recovery of this X-ray minimum in the frame of the accretion model of the spectroscopic event \citep{KashiSoker2008a}.
According to the accretion model, near periastron, the stellar wind collision region becomes very close to the secondary star; as a result, the secondary star accretes mass from the dense primary wind. This accretion process suppresses the secondary's wind.
We surmise that the primary wind was relatively weak during this most recent periastron passage, and this weak wind allowed the secondary wind to quickly revive itself, thereby terminating the X-ray minimum at the earliest recorded orbital phase after periastron passage.
By fitting the hard (5--10 keV) region of the X-ray spectra (see examples in Figure \ref{fig:xrayspectra510}), we determined the column density $N_{\rm H}(t)$ to the hard X-ray source as a function of time (Figure \ref{fig:N_H}). The hard X-ray source originates close to the apex of the post-shock secondary wind, where the two winds collide directly, and is therefore located between the two stars, closer to the secondary than the primary star.
Although there are large uncertainties in the values of $N_{\rm H}(t)$ derived from the individual \textit{NICER} spectral fits, we found that we can use the time-averaged values of $N_{\rm H}$ during specific orbital phases (before, during, and just after the X-ray flux minimum) to discriminate between two alternative orientations proposed for the binary system (i.e., secondary in front of vs.\ behind primary during periastron passage).
Specifically, in section \ref{sucsec:NHapastron}, we compared the values of $N_{\rm H}$ that we derived from observations away from periastron, i.e., near apastron (left region of upper panel of Figure \ref{fig:N_H}) with the theoretically expected values that we derived in \cite{KashiSoker2008a}. We found that the column densities away from periastron are too large to be consistent with the binary orientation wherein the secondary star is closer to us at apastron, i.e., $\omega \simeq 270^\circ$, since we require the primary dense wind to supply the column density. These results thereby support the earlier conclusion of \cite{KashiSoker2008a} \citep[which were based on $N_{\rm H}$ values derived from \textit{XMM-Newton} X-ray observations;][]{Hamaguchietal2007} that the secondary star of $\eta$~Car is away from us near apastron, i.e., $\omega \simeq 90^\circ$.
In section \ref{subsec:NHperiastron} we compared the values of $N_{\rm H}$ that we calculated for the two alternative opposite binary orientations (section \ref{sec:theory_NH}) to those that we derived from observations (lower panel of Figure \ref{fig:N_H}).
We took the periastron passage to have occurred just after the beginning of the X-ray minimum. We determined the approximate ratio of the column density near periastron to that about 20 days later, when the primary-secondary position had changed by $90^\circ$ (equation \ref{eq:ObservedRatio+}), and the ratio of periastron column density to that 20 days before periastron (equation \ref{eq:ObservedRatio-}). These ratios of $N_{\rm H}$ variation are $\approx 0.8$ and $\approx 2.0$, respectively.
In the simple theoretical model we have developed (Figure \ref{fig:Schematic}), these two ratios ought to be small if $\omega \simeq 90^\circ$, since during this time the line of sight to the hard X-ray source should not reach very close to the primary, where the densities are very large.
This range of column density ratio variation, 0.8--2.0, can then be compared with the theoretical predictions for the variations expected under the two opposing assumed binary orientations. For the $\omega = 90^\circ$ orientation (upper part of Figure \ref{fig:Schematic}) the predicted ratio of variation is 1.4 --- precisely in the middle of the observed range --- while for the $\omega = 270^\circ$ orientation (lower part of Figure \ref{fig:Schematic}) the predicted ratio of variation is 3.3 (equation \ref{eq:ratios}), i.e., outside the range of observed variation.
Therefore, this comparison of the theoretically expected variation of $N_{\rm H}$ around periastron (that we express as ratios with the value at the X-ray minimum) to those that we derived from observations, also supports the $\omega=90^\circ$ orientation (upper part of Figure \ref{fig:Schematic}).
Thus, the main conclusion we draw from our analysis of the \textit{NICER} X-ray observations of the most recent periastron passage of $\eta$~Car is that the binary orbital orientation is $\omega \approx 90^\circ$, i.e., the secondary star is closer to us than the primary star at periastron.
Our secondary conclusion is that the weakening of the primary stellar wind over the last several cycles allowed an earlier revival of the secondary wind, resulting in the fastest recovery from X-ray minimum yet observed for $\eta$~Car.
The results described in this paper serve to illustrate how the variation of column density toward the hard X-ray source during the orbital motion of the massive binary system $\eta$~Car, $N_{\rm H}(t)$, is potentially a more sensitive diagnostic of the configuration of the binary and the orientation of the binary orbit than is the variation of the X-ray flux $F_{\rm X}(t)$ \citep{KashiSoker2008a}. The drawback of this application of column densities derived from spectral fitting is that --- especially in the (hard X-ray) energy range of interest here ($\sim$5--10 keV) --- $N_{\rm H}(t)$ is subject to larger uncertainties than $F_{\rm X}(t)$. Thus, in the coming decades, higher quality X-ray observations of $\eta$~Car around periastron utilizing planned X-ray missions such as Athena+ \citep{Nandraetal2013} are essential if we are to improve our understanding of this astrophysically important massive binary system.
\vspace{0.5cm}
We thank an anonymous referee for helpful comments. This paper used data from the Neutron star Interior Composition Explorer (\textit{NICER}), obtained from the High Energy Astrophysics Archive Research Center (HEASARC).
We thank R. Remillard for his helpful discussion regarding use of the \textit{NICER} \texttt{nibackgen3c50} tool.
AK acknowledges support from the R\&D Authority, and the chairman of the Department of Physics in Ariel University.
NS was supported by a grant from the Israel Science Foundation (769/20).
\software{Sherpa (v4.12; \citealt{Freemanetal2001,Doeetal2007,Burkeetal2020}), HEAsoft (v6.27.2; HEASARC 2014)}
\software{nicerl2, HEAsoft (v6.27.2; HEASARC 2014), \url{https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/nicerl2.html}}
\software{nibackgen3c50, HEAsoft (v6.27.2; HEASARC 2014), \url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}}
\software{VAPEC (\citealt{MorrisonMcCammon1983}; HEAsoft (v6.27.2; HEASARC 2014)),
\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node135.html\#vapec}}
\vspace{0.5cm}
\section{INTRODUCTION}
\label{sec:intro}
In early 2020 the binary system Eta Carinae ($\eta$~Car) underwent another periastron passage in its ${P = 2023}$ day orbit.
The system includes two very massive stars, the primary and the hotter secondary.
The primary has a mass of $M_1=120$--$170 ~\rm{M_{\sun}}$ \citep{Hillieretal2001, DavidsonHumphreys2012, KashiSoker2010, KashiSoker2016} and may be in its Luminous Blue Variable (LBV) stage (though it differs from classical LBVs in a few aspects; see \citealt{HillierAllen1992}). The secondary is a hot, evolved star \citep{Verneretal2005, Mehneretal2010} with a mass of $M_2=30$--$80 ~\rm{M_{\sun}}$ \citep{KashiSoker2010, KashiSoker2016} that may be in its Wolf-Rayet stage \citep[e.g.,][]{Hiraietal2021}.
The system is a colliding wind binary \citep{Damineli1996, Pittardetal1998} with large eccentricity $e \simeq 0.9$ \citep{Davidsonetal2017}, resulting in large differences in the wind collision interface between periastron and apastron.
The distance to the system has been determined to be in the range $2.3 ~\rm{kpc}$ \citep{Smith2006}
to $2.6 ~\rm{kpc}$ \citep{Davidsonetal2018a}.
The system is famous for its energetic eruptions in the nineteenth century \citep{DavidsonHumphreys2012}: the great eruption (GE; 1837.9--$\sim$1858) and the lesser eruption (LE; 1887.3--1895.3). During these eruptions the star exceeded its Eddington limit and ejected a significant portion of the primary's stellar atmosphere, forming what we know today as the Homunculus nebula \citep{DavidsonHumphreys2012}.
The primary wind has a larger momentum than that of the secondary, with mass loss rate $\dot M_1 \simeq 3$--$10 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$ \citep{DavidsonHumphreys2012,Clementeletal2014,Kashi2017,Kashi2019}.
The weaker secondary wind carves a conical cavity in the dense primary wind with direction and shape that changes with the orbit. This cavity wraps around the primary close to periastron passage, when the orbital velocity increases and becomes comparable to the primary wind velocity \citep{Hamaguchietal2007,Parkinetal2011,KashiSoker2009a,Maduraetal2012}.
Being such a unique system at a relatively close distance, each $\eta$~Car periastron passage is an event of great interest and is monitored from both ground \citep{DuncanWhite2003,vanGenderenSterken2004,Whitelocketal2004,Abrahametal2005b,Daminelietal2008a,Daminelietal2008b,FernandezLajusetal2010,Teodoroetal2012,Teodoroetal2016} and space \citep{PittardCorcoran2002, Martinetal2006, Corcoran2005, Corcoranetal2010, Corcoranetal2017, Hamaguchietal2007, Hamaguchietal2014a, Hamaguchietal2014b, Henleyetal2008, Davidsonetal2015, Mehneretal2015}.
Prior to periastron passage, $\eta$~Car's spectral lines, which are key probes of the dynamics of the two stars and their winds, change their profiles, some dramatically. This behavior of the lines, together with rapid variations in various bands from the IR to X-rays around periastron passages, has earned these episodes the name `the spectroscopic event' \citep{DavidsonHumphreys2012}.
The spectroscopic events observed over multiple periastrons have varied considerably, with a trend of becoming shorter and less intense. This may be due either to a change of state in the primary wind \citep{Mehneretal2010,Davidsonetal2018b,Kashietal2016}, or related to the dissipation of the surrounding Homunculus Nebula, at least along our line of sight \citep{Daminelietal2019,Mehneretal2019}.
The spectroscopic events of $\eta$~Car should yield insight into the binary's geometry and, more generally, the physics of stellar wind interactions.
However, there is still disagreement on the orientation.
The inclination of the binary was initially assumed to be the same as the orientation of the Homunculus nebula, $i \simeq 41^\circ$
\citep{Davidsonetal2001}, but the direction of motion was ambiguous.
\cite{Maduraetal2012} and \cite{Teodoroetal2016} deduced the inclination to be $i=130^\circ$--$145^\circ$ and $i=135^\circ$--$153^\circ$, respectively, suggesting a direction of orbital motion opposite to that adopted by \citet{Davidsonetal2001}. This range of inferred inclinations is narrow enough that it poses no difficulty for interpreting the observations; nor, for that purpose, is the direction of binary motion important.
Unlike the inclination, the argument of periapsis $\omega$, for which different values have been obtained, has significant implications for understanding the behavior of $\eta$~Car near periastron passages.
A value of $\omega=90^\circ$ implies that the secondary star (which launches the fast wind) is closest to us at periastron and furthest at apastron, while a value of
$\omega=270^\circ$ implies that the primary star (which launches the slow and dense wind) is closest to us at periastron and furthest at apastron.
A number of studies fit spectral line profiles to argue that the orientation is $\omega\approx270^\circ$
\citep[e.g.][]{Nielsenetal2007, Henleyetal2008, Daminelietal2008b, Richardsonetal2015}.
\cite{Maduraetal2012} ran Smooth Particle Hydrodynamics (SPH) simulations of the colliding winds and, by matching the simulation results to line spectro-imaging data, determined an orientation within the range $\omega=240^\circ$--$285^\circ$.
\cite{Weigeltetal2016} claimed to have identified a fan-shaped structure in the wind. They combined their claim with the simulations of \cite{Maduraetal2012} to support their preferred orientation of $\omega\approx240$--$270^\circ$. However, in \cite{KashiSoker2018} we showed that the fan probably does not exist, and that even if it does, it does not support the orientation that \cite{Weigeltetal2016} argued for.
Recently, \cite{Grantetal2020} found a slightly higher eccentricity $e=0.91$, and determined $\omega=241^\circ$ by fitting Balmer lines with Gaussian components and then fitting a Keplerian model.
Fitting an orbital orientation from spectral lines depends on assumptions as to where these lines are emitted or where they are absorbed. Attributing the lines to different locations can result in the opposite solution for $\omega$, i.e., $\omega\approx90^\circ$
\citep{KashiSoker2007,KashiSoker2008a,KashiSoker2016, Kashietal2011}.
\cite{KashiSoker2018} showed that the orientation with $\omega=90^\circ$ can explain the absence of a mass segment in the torus that was ejected during the GE, as described by \cite{Smithetal2018b}.
\cite{Abrahametal2005a}, \cite{Falcetaetal2005} and \cite{AbrahamFalceta2007} also proposed an orientation with $\omega\approx60$--$90^\circ$, but their model invokes a shell ejection event to explain the X-ray behavior of $\eta$~Car during the spectroscopic event and is hence quite different from the model of \cite{KashiSoker2008a} (discussed below).
X-ray observations of $\eta$~Car have also been used to derive the orientation. \cite{Okazakietal2008} ran SPH simulations of the colliding winds and, based on fits to the X-ray luminosity, claim $\omega=243^\circ$.
\cite{Parkinetal2009} built a model to fit the X-ray light curve and derived $\omega=270^\circ$--$300^\circ$. However,
\cite{KashiSoker2009a} showed that the X-ray light curve is not a strong indicator of the binary orientation.
They suggested that the hydrogen column density to the hot X-ray emitting gas is a better observable for purposes of differentiating between the different proposed orientations, because the hot X-ray emitting gas arises from within the post-shock secondary wind located between the two stars.
\cite{KashiSoker2009a} analyzed the expected X-ray flux from $\eta$~Car considering a specific accretion model.
Their model includes the slow-dense primary and fast secondary winds, an approximation for the colliding wind interface, and analytic expressions for the thickness of the regions that include the post-shock primary and secondary winds.
They also included in their model the rotation angle that the conical shell forms with the line connecting the two stars as the secondary approaches periastron passage and the orbital velocity becomes significant.
They calculated the expected time dependence of the hydrogen column density $N_{\rm{H}}(t)$ from different directions, demonstrating that it can serve as an indicator of the binary orientation.
They found that for orientations in which the primary is closer to the observer at periastron passage ($\omega \simeq 270^\circ$), the primary wind absorbs so much of the colliding wind X-ray flux that no X-rays should be detected.
They could also explain the observed X-ray emission measure with their preferred orientation of $\omega=90^\circ$.
Close to periastron passage the X-ray flux decreases, which makes observations more challenging. The central binary (the shocked secondary wind close to the apex, located between the two stars) that is responsible for the hard X-ray emission dims.
This dimming can be explained as due either to lower emission from the wind-collision region and/or a very large increase in $N_{\rm H}$ toward the wind-collision region.
\cite{Hamaguchietal2007} demonstrate the existence of an emission component referred to as a `central constant emission component' (CCE), which can significantly contribute to the X-ray spectrum during periastron. This component lies within $\sim$1 arcsecond of the central binary and arises from a larger (but unresolved) region surrounding the binary system and, due to its larger plasma volume, does not significantly vary in time. \cite{Hamaguchietal2007} find that this component can be fit with a $\sim$1.1 keV plasma and an $N_{\rm H} \sim 5 \times 10^{22} ~\rm{cm}^{-2}$. This component is further discussed in \cite{Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016}.
During periastron, it is unclear to what extent this CCE component dominates the entire X-ray spectrum.
\cite{Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016} analyzed X-ray observations taken by \textit{Chandra}, \textit{XMM-Newton} and \textit{NuSTAR} telescopes, and found that the hydrogen column density to the central binary near periastron may be as large as $N_{\rm H} \approx 10^{24} ~\rm{cm}^{-2}$.
The smooth variations of $N_{\rm H}$ as a function of time before, during and after periastron could suggest that the central binary is not completely obscured and as a result, $N_{\rm H}$ measurements could at least partially probe the central binary system. Such smooth variations were demonstrated in the 2--10 keV range with the Rossi X-ray Timing Explorer (\textit{RXTE}) in \cite{Ishibashietal1999}.
The most recent periastron passage of the $\eta$~Car system occurred in early 2020.
The X-ray light curve was monitored by the Neutron Star Interior Composition Explorer (\textit{NICER}) observatory and reached a minimum on Feb 14, 2020 \citep{Corcoranetal2020,Espinoza-Galeasetal2020}.
The exact date of periastron is uncertain and may fall within a week or two of that date; we hence refer to this event as the 2020.1 periastron passage.
The light curve shows similar properties to that of previous X-ray minima \citep{Corcoranetal2017},
but with a somewhat earlier exit from the minimum.
The X-ray flux increases in the months preceding the periastron passage, with strong flares superimposed on the smooth $\sim 1/r$ increase. These flares are most likely associated with clumps in the wind \citep{MoffatCorcoran2009}, and were also observed prior to the 2020 minimum \citep{Corcoranetal2019}.
After showing a few strong X-ray flares the light curve sharply declines into a non-zero minimum which lasts for several weeks, then slowly recovers to a quiescence value.
\cite{Hamaguchietal2020} also report bright non-thermal X-ray emission in \textit{NuSTAR} observations taken during the recovery from the 2020 X-ray minimum.
In this paper, we model X-ray data for the 2020.1 periastron passage, obtained with the \textit{NICER} X-ray telescope, with the goal of explaining the variation of the X-ray luminosity $L_{\rm X}$ and of the column density $N_{\rm H}$ in the frame of the accretion model for the spectroscopic event.
According to the accretion model the X-ray minimum is not caused by absorption. Rather, the secondary star accretes mass from the primary stellar wind for several weeks near periastron passages \citep{Soker2005, Soker2007, SokerBehar2006, Akashietal2006, KashiSoker2007, KashiSoker2008b, KashiSoker2009c}. This accretion suppresses the secondary star's wind.
Since the source of the hard X-ray emission is the post-shock fast, $v_2 \simeq 3000 ~\rm{km} ~\rm{s}^{-1}$, secondary wind, the X-ray luminosity displays a minimum with a duration of several weeks.
Hydrodynamic simulations show that indeed the secondary star accretes mass near periastron, in a process that suffers instabilities \citep{Akashietal2013, Kashi2017, Kashi2019}.
The accretion model can account for several other observed properties, such as the orbital variations of numerous lines \citep{KashiSoker2007,KashiSoker2008a, KashiSoker2009a,KashiSoker2009b,KashiSoker2016,KashiSoker2018}, the infrared light curve \citep{KashiSoker2008b}, the X-ray light curve and emission measure \citep{SokerBehar2006, Akashietal2006, KashiSoker2009a}, the timing of the peaks in the light curve of the GE and LE \citep{KashiSoker2010}, the very fast velocities of gas ejected during the GE \citep{AkashiKashi2020}, and more.
\cite{Navareteetal2020} observed optical lines during the 2020.1 spectroscopic event and concluded that the circumstellar ejecta is dissipating, but the primary does not change significantly. Similar conclusions were reported by \cite{Mehneretal2019} and \cite{Daminelietal2019}.
On the other hand, long-term observations of optical lines of $\eta$~Car led \cite{Davidsonetal2005} to suggest there is a decrease in the mass-loss rate from the primary star, referred to as a `change of state'.
This effect was obtained in numerical simulations of the recovery of $\eta$~Car from the great eruption \citep{Kashietal2016}.
Further indication of the change came from a comparison of UV line emission at similar orbital phases separated by two orbital revolutions, at positions far from periastron passage \citep{Davidsonetal2018b}.
Here we show that the 2020 X-ray minimum provides further evidence for a recent decrease in the primary's mass loss rate.
In section \ref{sec:theory_NH} we describe the expected general behavior of $N_{\rm H}$ around periastron passage. We then describe the observations (section \ref{sec:obs}) and the X-ray light curve (section \ref{sec:X-Ray-LC}).
We return to the accretion model and instabilities in our analysis of the new observations in section \ref{sec:NH}. We summarize our results in section \ref{sec:summary}.
\section{Theoretical considerations for the X-ray column density}
\label{sec:theory_NH}
We derive the column density for three cases according to the position of the secondary star with respect to the primary star and the observer. These cases are indicated as $N_{\rm H,90}$, $N_{\rm H,c}$, and $N_{\rm H,f}$ in Figure \ref{fig:Schematic}. One case ($N_{\rm H,90}$) is relevant to the two opposite orientations, while each of the two other cases is relevant to a different, specific orientation.
In one orientation ($N_{\rm H,c}$) the secondary is closest to us at periastron ($\omega \simeq 90^\circ$), while in the other ($N_{\rm H,f}$) the secondary star is furthest from us at periastron ($\omega \simeq 270^\circ$). For the inclination of the orbit we adopt $i=144^\circ$, based on results in \cite{Maduraetal2012} and
\cite{Teodoroetal2016}; the adopted value represents the middle of the range determined by \cite{Teodoroetal2016}.
The inclination is defined as the angle from the line of sight to the angular momentum of the binary system (along a line perpendicular to the orbital plane). In Figure \ref{fig:Schematic} we indicate the angle between the line of sight and the orbital plane. According to \cite{Teodoroetal2016}, in the `secondary furthest orientation' at periastron the secondary is not precisely behind the primary, but there are other uncertainties that are larger. One such uncertainty is the location of the X-ray emitting zone (see below). Another uncertainty is the exact density structure of the primary stellar wind, i.e., its exact mass loss rate, how clumpy it is, and how the interaction zone of the two winds behaves (see \citealt{KashiSoker2008a}). In Figure \ref{fig:Schematic} we indicate with a solid black arrow the integration line for the column density in the `secondary furthest orientation', $N_{\rm H,f}$. This line is in the plane of the diagram. For the `secondary closest orientation' column density $N_{\rm H,c}$, we will obtain a similar expression for the line of integration (solid red arrow at top of diagram), but with different integration boundaries.
\begin{figure}
\centering
\includegraphics[trim=0.8cm 14cm 4.0cm 2.8cm ,clip, scale=0.53]{EtaCarXrayDepth.pdf}
\begin{singlespace}
\caption{Schematic drawing of the geometry of the binary system in the plane that contains the center of the primary star, the secondary at periastron, and the observer. We examine two opposite orientations where during periastron the secondary star is either at the closest ($\omega \simeq 90^\circ$) or furthest point from us ($\omega \simeq 270^\circ$). We draw both alternative orientations on the figure by indicating the two alternative periastron locations of the secondary star. The green arrow indicates the orbital angular momentum axis of the binary system. The dashed horizontal line indicates that the integration is not in the plane of the figure, but at a distance of $D_{90}$ from the plane (see text).
}
\end{singlespace}
\label{fig:Schematic}
\end{figure}
We define the angle $\psi=i-\pi/2 = 54^\circ$ as indicated in Figure \ref{fig:Schematic}.
These integration lines are at a distance of $h=a(1-e)\sin \psi = 1.35 ~\rm{AU}$ from the line of sight to the center of the primary star,
where we take $a=16.64 ~\rm{AU}$ and $e=0.9$ for the semi-major axis and eccentricity, respectively. For a period of 2023 days, this value of the semi-major axis implies a total binary mass of $M_1+M_2= 150 M_\odot$.
Note that we adopt here what is usually referred to as the conventional model for the masses of $\eta$~Car, but as will be seen from the following equations (which are calibrated for the conventional model), the values of the column density change only slightly for the high-mass model, for which $a=19.73 ~\rm{AU}$ and $M_1+M_2= 250 M_\odot$ (see table 1 in \citealt{Kashi2017}).
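As a consistency check (our own illustration, not part of the paper's analysis), Kepler's third law in solar units, $M_1+M_2 = a^3/P^2$ with $a$ in AU and $P$ in years, recovers the total masses quoted for both mass models:

```python
def total_mass_msun(a_au, period_days):
    """Kepler's third law in solar units: M_tot = a^3 / P^2 (AU, yr, M_sun)."""
    period_yr = period_days / 365.25
    return a_au**3 / period_yr**2

# Conventional model: a = 16.64 AU, P = 2023 d  ->  ~150 M_sun
m_conventional = total_mass_msun(16.64, 2023.0)
# High-mass model:   a = 19.73 AU, P = 2023 d  ->  ~250 M_sun
m_high_mass = total_mass_msun(19.73, 2023.0)
```
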
In addition to $N_{\rm H,c}$ and $N_{\rm H,f}$, we also calculate the column density from the secondary star to the observer when the primary-secondary line is perpendicular to the line of sight, $N_{\rm H,90}$. This line of integration lies at a distance of $D_{90}=a(1-e^2) = 3.16 ~\rm{AU}$ above or below the plane of Figure \ref{fig:Schematic}.
Expressions for the foregoing column densities are obtained as follows. The proton number density of the primary wind is
\begin{equation}
n_{\rm H} = \frac {X \dot M_1}{4 \pi r^2 m_{\rm H} v_1} \equiv \Lambda r^{-2},
\label{eq:nHwind}
\end{equation}
where $X$ is the hydrogen mass fraction, $v_1$ is the velocity of the primary wind, and $\dot M_1$ is the mass loss rate of the primary wind; the second equality defines the parameter
\begin{equation}
\begin{split}
\Lambda &= 6.01 \times10^{23} \left( \frac {X}{0.5} \right)
\left(\frac{\dot M_1}{3 \times 10^{-4} ~\rm{M_{\sun}} ~\rm{yr}^{-1}} \right) \\
& \times \left( \frac {v_1}{500 ~\rm{km} ~\rm{s}^{-1}} \right)^{-1} ~\rm{cm}^{-2} ~\rm{AU},
\end{split}
\label{eq:Lambda}
\end{equation}
where the unusual choice of units facilitates later scaling.
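The calibration of $\Lambda$ can be cross-checked numerically. The sketch below (ours; the CGS constant values are standard to $\sim$0.1\%) evaluates $\Lambda = X\dot M_1/(4\pi m_{\rm H} v_1)$ and converts it to the $\rm{cm^{-2}~AU}$ units used above, reproducing the quoted coefficient for the fiducial scalings:

```python
import math

# Standard physical constants in CGS
M_SUN_G = 1.989e33   # solar mass [g]
YR_S = 3.156e7       # year [s]
M_H_G = 1.6726e-24   # hydrogen-atom mass [g]
AU_CM = 1.496e13     # astronomical unit [cm]

def lambda_coefficient(x, mdot_msun_yr, v1_km_s):
    """Lambda = X * Mdot_1 / (4 pi m_H v_1), returned in cm^-2 AU so that
    N_H = Lambda / D, with D in AU, gives a column density in cm^-2."""
    mdot = mdot_msun_yr * M_SUN_G / YR_S               # [g s^-1]
    v1 = v1_km_s * 1.0e5                               # [cm s^-1]
    lam_cm = x * mdot / (4.0 * math.pi * M_H_G * v1)   # [cm^-1]
    return lam_cm / AU_CM                              # [cm^-2 AU]

# Fiducial scalings: X = 0.5, Mdot_1 = 3e-4 Msun/yr, v_1 = 500 km/s
lam = lambda_coefficient(0.5, 3.0e-4, 500.0)   # ~6.01e23 cm^-2 AU
```
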
In this calculation we assume that the bulk of the observed 2--10 keV X-ray emission from $\eta$~Car arises very close to the secondary star. The emission should in fact be generated at some distance between the stars, which will reduce the difference in distance between the two orientation cases at periastron, but on the other hand will bring the X-ray emitting gas deeper into the dense primary wind. The latter effect is larger, and as a consequence this model may underestimate the difference in column densities between the two orientations.
Given these uncertainties, the calculation we present here is an approximation. Nonetheless, we think it is adequate for our purposes, i.e., interpreting the observed X-ray flux and absorption in terms of the binary orbital parameters.
We can now calculate the column density for the three cases illustrated in Figure \ref{fig:Schematic}. As noted,
the first case, column density $N_{\rm H,c}$, is for the secondary closer to the observer at periastron, while the second, $N_{\rm H,f}$, is for the secondary behind the primary wind at periastron.
The third value of the column density, $N_{\rm H,90}$, is the same for both orbit orientations in the simple form presented in Figure \ref{fig:Schematic}, and this third case applies when the secondary-primary line is perpendicular to the line of sight. At that time the distance between the secondary and primary is $D_{90} = a (1-e^2) = 3.16 ~\rm{AU} $
for the parameters we adopt here.
We can obtain these column densities as follows. For the perpendicular phase we have
\begin{equation}
\begin{split}
N_{\rm H,90} &= \int^{\infty}_0 n_{\rm H} dl = \int^{\infty}_0 \Lambda \frac{dl}{l^2 + D^2_{90}} = \frac{\Lambda}{D_{90}} \frac {\pi}{2} \\
&= 2.99 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{D_{90}}{3.16 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2}.
\end{split}
\label{eq:NH90}
\end{equation}
Using the geometry as presented in Figure \ref{fig:Schematic} we find that $D_{\rm p}=a(1-e) \cos \psi$, and so $D_{\rm p}/h=\cot \psi=0.73$.
We then obtain
\begin{equation}
\begin{split}
N_{\rm H,c} &= \int^{\infty}_{D_{\rm p}} n_{\rm H} dl = \int^{\infty}_{D_{\rm p}} \Lambda \frac{dl}{l^2 + h^2} \\
& = \frac{\Lambda}{h} \left( \frac {\pi}{2} - \tan^{-1} \frac{D_{\rm p}}{h} \right) = \frac{\Lambda}{h} \psi = \frac{\Lambda}{r_{\rm{p}}} \frac{\psi}{\sin \psi} \\
&= 4.21 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{r_{\rm{p}}}{1.66 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2} ,
\end{split}
\label{eq:NHc}
\end{equation}
where $r_{\rm{p}}=a(1-e) =1.66 ~\rm{AU}$ is the periastron distance.
Similarly,
\begin{equation}
\begin{split}
N_{\rm H,f} &= \int^{\infty}_{-D_{\rm p}} n_{\rm H} dl = \int^{\infty}_{-D_{\rm p}} \Lambda \frac{dl}{l^2 + h^2}
\\& = \frac{\Lambda}{h} \left( \frac {\pi}{2} + \tan^{-1} \frac{D_{\rm p}}{h} \right) = \frac{\Lambda}{h} (\pi - \psi) = \frac{\Lambda}{r_{\rm{p}}} \frac{\pi-\psi}{\sin \psi} \\
& = 9.82 \times 10^{23} \left( \frac{\Lambda}{6.01 \times10^{23} ~\rm{cm}^{-2} ~\rm{AU}} \right) \\
&\times \left( \frac{r_{\rm{p}}}{1.66 ~\rm{AU}} \right) ^{-1} ~\rm{cm}^{-2} .
\end{split}
\label{eq:NHf}
\end{equation}
These expressions show that if the distance from the primary to the X-ray emitting region is smaller than the primary--secondary separation
(i.e., the X-rays are generated between the two stars),
then the column density is larger than the calibrated values in equations (\ref{eq:NHc}) and (\ref{eq:NHf}).
For later analysis the predicted ratios of these column densities are
\begin{equation}
\frac{N_{\rm H,c}}{N_{\rm H,90}} = 1.41 \left(\frac{1+e}{1.9}\right)\quad {\rm and}
\quad
\frac{N_{\rm H,f}}{N_{\rm H,90}} = 3.29 \left(\frac{1+e}{1.9}\right),
\label{eq:ratios}
\end{equation}
for the secondary-closest and secondary-furthest orientations, respectively.
Note that the above ratios depend on neither $\Lambda$ nor $a$.
They depend only on $(1+e)$, and since the eccentricity is a well-constrained quantity, taking $e\simeq0.88$--$0.92$ changes these ratios by only about 1\%. We note again that this calculation assumes a smooth primary wind that fills the entire volume.
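The closed-form results above are easy to verify numerically. The following sketch (ours) evaluates the three column densities and the two ratios for the adopted geometry and the fiducial $\Lambda$:

```python
import math

LAM = 6.01e23             # fiducial Lambda [cm^-2 AU] from the calibration above
A_AU, ECC = 16.64, 0.9    # semi-major axis [AU] and eccentricity
PSI = math.radians(54.0)  # psi = i - 90 deg for the adopted i = 144 deg

r_p = A_AU * (1.0 - ECC)       # periastron distance, ~1.66 AU
h = r_p * math.sin(PSI)        # offset of the integration lines, ~1.35 AU
d90 = A_AU * (1.0 - ECC**2)    # primary-secondary separation at quadrature, ~3.16 AU

nh_90 = (LAM / d90) * (math.pi / 2.0)   # quadrature phase,    ~3.0e23 cm^-2
nh_c = (LAM / h) * PSI                  # secondary closest,   ~4.2e23 cm^-2
nh_f = (LAM / h) * (math.pi - PSI)      # secondary furthest,  ~9.8e23 cm^-2

ratio_c = nh_c / nh_90   # ~1.41
ratio_f = nh_f / nh_90   # ~3.29
```

Both ratios scale linearly with $(1+e)$, so varying $e$ over 0.88--0.92 shifts them by only about 1\%, as stated above.
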
\section{Observations}
\label{sec:obs}
Observations of $\eta$ Car were obtained with the Neutron Star Interior Composition Explorer (\textit{NICER}), under \textit{NICER} Guest Observer programs 1110 (PI: K. Gendreau), 2612 (PI: M. Corcoran), and 3651 (PI: D. Espinoza-Galeas). \textit{NICER} is an X-ray telescope attached to the International Space Station. \textit{NICER}'s large effective area, broad band pass, moderate spectral resolution and 30 arcmin$^{2}$ field of view provide the capability of collecting spatially unresolved observations of X-ray emitting regions surrounding $\eta$~Car. The Point Spread Function (PSF) is approximately the entire field of view ($R\simeq3.1$ arcmin).
Archival observations of $\eta$~Car by \textit{NICER} were obtained for observing dates between 2017-07-20 and 2020-07-24. Archival data were reduced using the HEASoft package (v 6.27.2) available from the High Energy Astrophysics Archive Research Center (HEASARC). The \textit{NICER} Heasoft tool \texttt{nicerl2}\footnote{\url{https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/nicerl2.html}}
was used to create cleaned level 2 event files with up-to-date gain calibrations (\textit{NICER} CALDB version 20200722). Source and background spectral files for each observation were extracted using the \textit{NICER} background estimator tool \texttt{nibackgen3c50}\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}} (v5). Due to its flux variability, $\eta$~Car occasionally approaches the detection threshold of \textit{NICER} in a given observation (Figure \ref{fig:xrayspectra}). To prevent adding unnecessary background counts to our source spectra, we removed portions of an observation if the good time intervals indicated an exposure of less than 100 seconds. These very short exposures likely include very few source counts but can have high background levels. A maximum good time interval (GTI) length of 3000 seconds and a maximum net high-background rate of 0.05 counts s$^{-1}$ were chosen as input parameters when generating background spectra for the occasionally faint source $\eta$~Car. Of the $\approx 290$ \textit{NICER} observations with non-zero exposure times in our date range, $\approx 50$ observations did not meet the above criteria for background selection and were discarded.
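The screening just described can be summarized as a small filter. This is an illustrative sketch only; the function and argument names are ours, not \textit{NICER} pipeline parameters:

```python
def keep_segment(exposure_s, gti_s, bkg_rate_cps,
                 min_exposure_s=100.0, max_gti_s=3000.0,
                 max_bkg_rate_cps=0.05):
    """Screening applied to the NICER eta Car data: drop segments with
    less than 100 s of exposure, and cap the GTI length (3000 s) and the
    net high-background rate (0.05 counts/s) used for background generation."""
    return (exposure_s >= min_exposure_s
            and gti_s <= max_gti_s
            and bkg_rate_cps <= max_bkg_rate_cps)
```

A segment failing any criterion is excluded before spectral extraction.
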
The \textit{NICER} instrument is unable to produce spatially resolved spectra for the $\eta$~Car region; all X-ray emission in the 30 arcmin$^{2}$ field of view (FOV) surrounding $\eta$~Car is included in these spectra. Thus, as discussed in \cite{Hamaguchietal2007} for the case of \textit{XMM-Newton} observations of $\eta$~Car, several different X-ray emission components associated with $\eta$~Car can be observed in the \textit{NICER} spectra. In particular, the outer ejecta regions and the Homunculus Nebula both contribute significantly. Since we cannot remove these contributions to the \textit{NICER} spectra of $\eta$~Car, we instead focus on relative changes in the X-ray spectrum due to the variability of the $\eta$~Car central binary as the system first approached and then recovered from periastron. We assume that on timescales of days to months, no variability is induced by these external X-ray components in the 2--10~keV range where we perform our spectral fits.
We checked the \textit{NICER} FOV in order to exclude potential contamination from other X-ray sources.
For that we used the catalogue of X-ray sources from the \textit{Chandra} Carina Complex Project \citep{Broosetal2011} to identify other sources that may contribute to the X-ray flux.
The FOV includes $\simeq400$ additional sources, the vast majority of which are young pre-main-sequence stars with median X-ray energies $<2$~keV in the 0.5--8~keV range. Individually, their fluxes are negligible compared to the extremely bright $\eta$~Car in the 2--10~keV and 5--10~keV ranges.
The most notable X-ray point source that falls within the field of view of the \textit{NICER} $\eta$~Car observations is HDE~303308 \citep[a spectroscopic binary O4.5V star;][]{Sotaetal2014}.
The combined 2--8~keV flux of all these $\sim 400$ field X-ray sources is $\simeq1.5\times 10^{-12}~\rm{erg~s^{-1}~cm^{-2}}$, which is lower than the flux in any of the observations included in our analysis of $N_{\rm H}$ presented here.
Moreover, these sources are distributed across the \textit{NICER} field of view, and hence their X-ray detection rates are suppressed relative to the on-axis $\eta$~Car source.
Only one observation during the X-ray minimum (on JD$=$2458895.7) has a 2--10~keV flux smaller that $3\times 10^{-12}~\rm{erg~s^{-1}~cm^{-2}}$, and hence might include a contribution from the field X-ray sources.
Fits to the resulting \textit{NICER} spectra were obtained with the Sherpa\footnote{\url{https://cxc.cfa.harvard.edu/sherpa/}} X-ray spectral fitting package (v4.12).
We constructed a Python script to automate the fitting process for the 242 \textit{NICER} observations.
Each spectrum was background subtracted and grouped to require a signal-to-noise ratio (SNR) of at least 3 per bin.
Fits were performed for each spectrum in two energy ranges, 2--10 keV and 5--10 keV (see section \ref{sec:NH}), using the Nelder-Mead simplex optimization method with the $\chi^{2}$ Gehrels statistic.
Our tests showed that grouping the spectra by counts per bin (e.g.,
grouping by 5 or 15 counts per bin) rather than by signal-to-noise ratio
did not consistently improve the fits.
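As an illustration only (this is not the actual fitting script, and \texttt{group\_min\_snr} is a hypothetical helper written for this sketch), the SNR grouping criterion described above amounts to greedily merging adjacent channels of a background-subtracted spectrum until each bin reaches the requested signal-to-noise ratio:

```python
def group_min_snr(src_counts, bkg_counts, snr_min=3.0):
    """Greedily merge adjacent channels until each bin's
    background-subtracted SNR, (S - B) / sqrt(S + B), reaches snr_min.
    Returns a list of (net_counts, error) bins; any leftover channels
    form a final under-threshold bin."""
    bins, s_acc, b_acc = [], 0, 0
    for s, b in zip(src_counts, bkg_counts):
        s_acc += s
        b_acc += b
        err = (s_acc + b_acc) ** 0.5
        if err > 0 and (s_acc - b_acc) / err >= snr_min:
            bins.append((s_acc - b_acc, err))
            s_acc = b_acc = 0
    if s_acc or b_acc:  # leftover channels that never reached snr_min
        bins.append((s_acc - b_acc, (s_acc + b_acc) ** 0.5))
    return bins
```

In practice the grouping was performed by the standard analysis tools; the sketch only illustrates the per-bin criterion each grouped channel must satisfy.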
Following a procedure similar to that discussed in \cite{Hamaguchietal2007}, we fit each spectrum with an absorbed \texttt{VAPEC} thermal equilibrium model \citep[using \texttt{wabs} for the absorption component of the model;][]{MorrisonMcCammon1983} and two Gaussians at the positions of the Fe K$\alpha$ and K$\beta$ lines (6.4 keV and 7.1 keV, respectively). An additional Gaussian component was used in the 2--10 keV fits to model the S {\sc xv} line at $\sim 2.5 ~\rm{keV}$; this component is associated with the Homunculus Nebula \citep{Hamaguchietal2007}. The Fe abundance was left free to vary between 0.0 and 1.0 times solar in the fits, while the other metals were frozen to their solar values \citep{Hillieretal2001}. The ratio of the Fe K$\beta$ flux to the Fe K$\alpha$ flux was set to 11.3\% \citep{Hamaguchietal2007, Yamaguchietal2014}.
Spectral fitting results, including the observed X-ray fluxes calculated for both the 2--10 keV and 5--10 keV fits and the $\chi^2_{\rm red}$ values, are listed in Tables \ref{table:210} and \ref{table:510}.
All fit parameters are reported with their 90\% confidence intervals. The fluxes reported for the 2--10~keV fits do not include the S {\sc xv} Gaussian component.
Emission measure (EM) is reported from the fit normalization parameter assuming a distance of $2.6 ~\rm{kpc}$. Large uncertainties are present in the 5--10 keV best-fit parameters, as expected for a source with low flux in this energy range. For both the 2--10 keV and 5--10 keV fits, we do not report values or errors where the best-fit model did not converge on a particular parameter value (i.e., a maximum or minimum boundary was hit).
\renewcommand{\arraystretch}{0.98}
\begin{table*}[]
\begin{center}
\begin{singlespace}
\caption{Best-fit model parameters to the 2--10 keV \textit{NICER} X-ray spectra of $\eta$~Car between July 2017 and July 2020 and their 90\% confidence values.}
\end{singlespace}
\begin{tabular}{c c c c c c c c}
\hline\hline
Date & Obs.\,ID. & $N_\mathrm{H}$ & $kT$ & Fe & log(EM) & Flux $\times10^{-11}$ & $\chi^2_{\rm red}$\\
$[$UT$]$ & & $[10^{22}$ cm$^{-2}]$ & $[$keV$]$ & $[$Z$_{\odot}]$ & $[$cm$^{-3}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & \\
\hline
2017-07-20T02:59:20 & 1110010101 & 2.7$^{+0.9}_{-0.9}$ & 3.6$^{+1.6}_{-0.9}$ & 0.67$^{+0.37}_{-0.33}$ & 57.70$^{+0.12}_{-0.13}$ & 4.2$^{+1.0}_{-1.0}$ &0.67\\
2017-07-21T00:36:20 & 1110010102 & 3.0$^{+0.6}_{-0.7}$ & 3.2$^{+0.9}_{-0.5}$ & 0.76$^{+0.28}_{-0.25}$ & 57.79$^{+0.08}_{-0.10}$ & 4.5$^{+1.4}_{-1.1}$ &0.54\\
2017-07-22T04:23:20 & 1110010103 & 3.3$^{+0.6}_{-0.7}$ & 2.9$^{+0.6}_{-0.4}$ & 0.60$^{+0.28}_{-0.25}$ & 57.84$^{+0.09}_{-0.09}$ & 4.2$^{+0.6}_{-0.6}$ &0.61\\
2017-07-24T10:25:00 & 1110010105 & 2.8$^{+1.6}_{-1.3}$ & 3.4$^{+2.6}_{-1.2}$ & 0.79$^{+0.66}_{-0.50}$ & 57.72$^{+0.22}_{-0.18}$ & 4.0$^{+3.5}_{-2.8}$ &0.62\\
2017-10-05T02:11:20 & 1110010106 & 3.6$^{+0.8}_{-0.7}$ & 2.5$^{+0.6}_{-0.5}$ & 0.54$^{+0.35}_{-0.29}$ & 57.92$^{+0.12}_{-0.11}$ & 4.1$^{+1.6}_{-1.5}$ &0.63\\
2017-11-21T06:42:27 & 1110010108 & 3.0$^{+0.8}_{-0.8}$ & 3.0$^{+0.8}_{-0.6}$ & 0.56$^{+0.32}_{-0.28}$ & 57.79$^{+0.11}_{-0.10}$ & 4.0$^{+1.6}_{-1.5}$ &0.56\\
2017-12-12T06:09:20 & 1110010109 & 2.9$^{+1.7}_{-1.1}$ & 2.7$^{+1.0}_{-0.8}$ & $<$1.49 & 57.81$^{+0.22}_{-0.15}$ & 3.6$^{+1.0}_{-1.1}$ &0.73\\
2017-12-22T00:41:36 & 1110010110 & 2.7$^{+0.2}_{-0.2}$ & 3.5$^{+0.3}_{-0.3}$ & 0.53$^{+0.10}_{-0.09}$ & 57.83$^{+0.03}_{-0.03}$ & 5.4$^{+0.6}_{-0.6}$ &0.78\\
2017-12-30T00:19:52 & 1110010111 & 3.3$^{+0.8}_{-0.7}$ & 3.3$^{+0.9}_{-0.7}$ & 0.62$^{+0.32}_{-0.28}$ & 57.86$^{+0.11}_{-0.10}$ & 5.1$^{+1.3}_{-1.3}$ &0.60\\
2018-01-18T20:51:00 & 1110010114 & 3.2$^{+1.0}_{-1.0}$ & 3.1$^{+1.4}_{-0.7}$ & $<$0.42 & 57.88$^{+0.14}_{-0.15}$ & 4.6$^{+2.0}_{-1.7}$ &0.54\\
\hline
\end{tabular}
\end{center}
The columns in order describe the (1) \textit{NICER} observation date and time in UT, (2) unique observation number associated with the observation, (3) absorbing column density, (4) plasma temperature, (5) iron abundance relative to solar, (6) log of the emission measure, (7) observed flux in the 2--10 keV band assuming a distance of 2.6 kpc, and (8) the reduced $\chi^{2}$ value associated with the best-fit model.
Table locations with missing values indicate the spectral parameter or both confidence values hit a hard minimum or maximum when fitting and thus are not reliable. Spectral parameters missing a single upper or lower confidence value are reported as lower or upper limits, respectively.
X-ray spectral models and fitting procedure are described in more detail in section \ref{sec:obs}.
The full table is available in machine readable format.
\label{table:210}
\end{table*}
\begin{table*}[]
\begin{center}
\begin{singlespace}
\caption{
Best-fit model parameters to the 5--10 keV \textit{NICER} X-ray spectra of $\eta$~Car between July 2017 and July 2020 and their 90\% confidence values.}
\end{singlespace}
\begin{tabular}{c c c c c c c c c}
\hline\hline
Date & Obs.\,ID. & $N_\mathrm{H}$ & $kT$ & Fe & log(EM) & Flux $\times10^{-11}$ & Int. Flux $\times10^{-11}$ & $\chi^2_{\rm red}$\\
$[$UT$]$ & & $[10^{22}$ cm$^{-2}]$ & $[$keV$]$ & $[$Z$_{\odot}]$ & $[$cm$^{-3}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & $[$erg s$^{-1}$ cm$^{-2}]$ & \\
\hline
2017-07-20T02:59:20 & 1110010101 & $<$64 & 1.8$^{+3.3}_{-0.8}$ & 0.69$^{+1.52}_{-0.41}$ & 58.28$^{+1.04}_{-0.77}$ & 1.2$^{+4.1}_{-1.1}$ &2.1$^{+6.7}_{-1.9}$ &0.53\\
2017-07-21T00:36:20 & 1110010102 & --- & 4.9$^{+1.9}_{-2.7}$ & --- & 57.53$^{+0.84}_{-0.11}$ & 1.5$^{+1.2}_{-0.9}$ &2.0$^{+1.3}_{-1.1}$ &0.60\\
2017-07-22T04:23:20 & 1110010103 & 59$^{+38}_{-39}$ & 1.3$^{+0.6}_{-0.4}$ & 0.41$^{+0.48}_{-0.22}$ & 59.29$^{+0.84}_{-0.85}$ & 2.2$^{+7.2}_{-1.8}$ &7.2$^{+19}_{-6.2}$ &0.65\\
2017-07-24T10:25:00 & 1110010105 & --- & 2.3$^{+4.1}_{-1.4}$ & 0.60$^{+2.13}_{-0.45}$ & 58.09$^{+1.91}_{-0.58}$ & 0.9$^{+3.7}_{-0.8}$ &2.4$^{+6.4}_{-2.2}$ &0.67\\
2017-10-05T02:11:20 & 1110010106 & $<$106 & 1.2$^{+1.8}_{-0.4}$ & 0.37$^{+0.77}_{-0.26}$ & 59.32$^{+0.93}_{-1.54}$ & 2.2$^{+10}_{-2.0}$ &7.8$^{+27}_{-6.9}$ &0.63\\
2017-11-21T06:42:27 & 1110010108 & --- & 1.9$^{+3.0}_{-0.8}$ & 0.73$^{+1.10}_{-0.44}$ & 58.06$^{+0.99}_{-0.64}$ & 0.9$^{+3.2}_{-0.8}$ &1.4$^{+4.4}_{-1.2}$ &0.57\\
2017-12-12T06:09:20 & 1110010109 & $<$95 & 1.3$^{+1.0}_{-0.7}$ & --- & 58.46$^{+1.61}_{-0.64}$ & 0.5$^{+2.3}_{-0.5}$ &1.5$^{+4.8}_{-1.3}$ &1.74\\
2017-12-22T00:41:36 & 1110010110 & 29$^{+17}_{-19}$ & 2.2$^{+0.9}_{-0.5}$ & 0.38$^{+0.14}_{-0.10}$ & 58.44$^{+0.39}_{-0.46}$ & 2.0$^{+3.4}_{-1.5}$ &3.6$^{+5.1}_{-2.8}$ &0.58\\
2017-12-30T00:19:52 & 1110010111 & $<$63 & 1.7$^{+1.5}_{-0.6}$ & 0.77$^{+0.87}_{-0.46}$ & 58.34$^{+0.96}_{-0.55}$ & 1.0$^{+2.9}_{-0.9}$ &1.8$^{+3.8}_{-1.4}$ &0.72\\
2018-01-18T20:51:00 & 1110010114 & $<$99 & 1.6$^{+5.5}_{-1.0}$ & $<$0.80 & 58.76$^{+1.45}_{-1.28}$ & 1.5$^{+8.3}_{-1.3}$ &4.7$^{+15}_{-3.9}$ &0.37\\
\hline
\end{tabular}
\end{center}
The columns in order describe the (1) \textit{NICER} observation date and time in UT, (2) unique observation number associated with the observation, (3) absorbing column density, (4) plasma temperature, (5) iron abundance relative to solar, (6) log of the emission measure, (7) observed flux in the 5--10 keV band assuming a distance of 2.6 kpc, (8) the intrinsic (i.e., unabsorbed) flux in the 5--10 keV band, and (9) the reduced $\chi^{2}$ value associated with the best-fit model.
Table locations with missing values indicate the spectral parameter or both confidence values hit a hard minimum or maximum when fitting and thus are not reliable. Spectral parameters missing a single upper or lower confidence value are reported as lower or upper limits, respectively.
X-ray spectral models and fitting procedure are described in more detail in section \ref{sec:obs}.
The full table is available in machine readable format.
\label{table:510}
\end{table*}
Although the abundances of the X-ray emitting plasma associated with $\eta$ Car are uncertain (and are likely enriched in He and N; e.g., \citealt{Gulletal2020}), we note that our assumption of solar metal abundances (apart from Fe) is unlikely to affect these fit results. This is mainly because the 5--10~keV region is largely devoid of strong metal emission lines, apart from the 6.4~keV and 7.1~keV Fe lines. Indeed, based on tests performed on a subset of observations, we confirmed that the results for X-ray flux and $N_{\rm H}$ are insensitive to the assumed metallicity.
Figure \ref{fig:xrayspectra} shows selected $\eta$~Car \textit{NICER} spectra for a number of notable epochs.
January 7, 21, 27, and February 3 (2020) are times of strong peaks in the light curve, usually referred to as flares.
February 14 and 18 are representative spectra taken during the X-ray minimum.
Figures \ref{fig:xrayspectra210} and \ref{fig:xrayspectra510} show our fits to the \textit{NICER} data in the 2--10 keV and 5--10 keV energy ranges, respectively.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 1.5cm,clip=true,width=0.99\textwidth]{etacar_NICER_variability_up2date_caldb_SIMPLEX_dtmin100_dtmax3000_hbg05_03_10kev_log_log_v1}
\begin{singlespace}
\caption{
\textit{NICER} X-ray spectra of $\eta$ Car across the 2020 periastron passage grouped to require a minimum SNR$=$6. Observations of the flare are displayed on January 7, 21, 27, and February 3. \textit{NICER} spectra during the X-ray minimum are displayed for February 14 and 18. The extended X-ray emitting regions surrounding $\eta$~Car contribute primarily at energies $<$ approximately 2 keV where little variability is observed. The 2--10 keV emission is associated with previously shocked gas farther from the central stars. Emission in the 5--10 keV range comes from the apex of the colliding winds located between the two stars but closer to the secondary.}
\end{singlespace}
\label{fig:xrayspectra}
\end{figure*}
\begin{figure*}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-07_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-21_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-27_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-03_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-14_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-18_2_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\begin{singlespace}
\caption{X-ray spectral fits to the 2--10 keV \textit{NICER} data of $\eta$ Car across the 2020 X-ray minimum. Spectra are grouped to require a minimum SNR$=$3 where dates and colors correspond to the same format as Figure \ref{fig:xrayspectra}. The best-fit models are shown in orange and described in section \ref{sec:obs}.}
\end{singlespace}
\label{fig:xrayspectra210}
\end{figure*}
\begin{figure*}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-07_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-21_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-01-27_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-03_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-14_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\includegraphics[trim= 1.3cm 0.0cm 2.0cm 1.5cm,clip=true,width=0.5\textwidth]{etacar_NICER_2020-02-18_5_10kev_kTfree_SXV_gauss_sigma_cor_flux_cor_SIMPLEX_up2dateCALDB_dtmin_100_dtmax3000_hbg_05_color_v8}
\begin{singlespace}
\caption{X-ray spectral fits to the 5--10 keV \textit{NICER} data of $\eta$ Car across the 2020 X-ray minimum. Spectra are grouped to require a minimum SNR$=$3 where dates and colors correspond to the same format as Figure \ref{fig:xrayspectra}. The best-fit models are shown in orange and described in section \ref{sec:obs}.}
\end{singlespace}
\label{fig:xrayspectra510}
\end{figure*}
Figure \ref{fig:xray} presents the resulting broad-band (2--10 keV) and hard (5--10 keV) X-ray flux variations (hereafter we will refer to them simply as light curves) during the 2020 spectroscopic X-ray minimum.
We present only observations that yielded reduced $\chi^{2}$ values in the range $0.4\leq\chi^2_{\rm red}\leq2.5$ and for which 90\% confidence intervals on $N_{\rm H}$ were determined, as well as observations with very low flux, $F\leq 10^{-11} ~\rm{erg~s^{-1}~cm^{-2}}$, even if the aforementioned conditions on $\chi^2_{\rm red}$ and $N_{\rm H}$ were not fulfilled.
Adopting a distance of $2.6 ~\rm{kpc}$ \citep{Davidsonetal2018a}, a flux of $10^{-10} ~\rm{erg~s^{-1}~cm^{-2}}$ corresponds to a luminosity of $8.09\times10^{34} ~\rm{erg~s^{-1}}$.
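This conversion is simply $L = 4\pi d^{2} F$; as a quick sanity check (the cm-per-kpc constant and the helper name \texttt{flux\_to\_luminosity} are assumptions of this sketch):

```python
import math

KPC_IN_CM = 3.0857e21  # 1 kpc in cm (assumed conversion constant)

def flux_to_luminosity(flux, d_kpc=2.6):
    """Convert an observed flux (erg s^-1 cm^-2) to a luminosity
    (erg s^-1) via L = 4 * pi * d^2 * F, for a distance d in kpc."""
    d_cm = d_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm ** 2 * flux

# flux_to_luminosity(1e-10) recovers ~8.09e34 erg/s, as quoted in the text
```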
Figure \ref{fig:xray} also shows the 0.5--10 keV X-ray light curve reported by \cite{Espinoza-Galeasetal2020}, where we have combined the two sets of observations.
In the \cite{Espinoza-Galeasetal2020} light curve, the X-ray minimum appears to be interrupted by a flare, with a peak flux $3.3\times10^{-11} ~\rm{erg~s^{-1}~cm^{-2}}$ and a duration of $\simeq 5 ~\rm{days}$.
However, in our reanalysis of the 2--10 keV \textit{NICER} data, we did not recover a flare during this time period (compare the solid green and dashed magenta curves in Figure~\ref{fig:xray}).
The S/N ratios in the spectra during the X-ray minimum are poor, such that small differences in background-subtraction methods may account for the differences between the respective light curves.
The \cite{Espinoza-Galeasetal2020} ATel does not detail its background-subtraction method, and it is thus difficult to pinpoint the cause of the discrepancy in the flare detection. As described in section \ref{sec:obs}, we use the most up-to-date calibration files as well as a suggested background determination method from the \textit{NICER} team (Remillard et al., in prep\footnote{See the \textit{NICER} background estimator tools page: \url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}}).
We note that the apparent flare reported by \cite{Espinoza-Galeasetal2020} cannot originate from the 0.5--2 keV spectral component, since this component arises from $\eta$~Car's surrounding, extended emission and hence does not vary during the X-ray minimum \citep{Hamaguchietal2007}.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{Flux_14a15a.pdf}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{Flux_zoom_14a15a.pdf}
\begin{singlespace}
\caption{
\textit{Upper panel:} The \textit{NICER} 2--10 keV and 5--10 keV (hard X-ray) light curves of $\eta$~Car, with 90\% confidence values for both ranges.
\textit{Lower panel:} zoom in on the 2020 X-ray minimum.
We compare our fitted flux to the results of \cite{Espinoza-Galeasetal2020}. Note that error bars were not given for the data in \cite{Espinoza-Galeasetal2020}.
The time axis is set to the beginning of periastron passage; $t=0$ is on Feb 10, 2020 (JD$=2458890$).
We see that the X-ray minimum flare reported by \cite{Espinoza-Galeasetal2020} at $t=8 ~\rm{days}$ is not present in our results.
The figure also demonstrates that the fit for the 5--10 keV energy range yields better error estimates than the fit for the 2--10 keV energy range.
}
\end{singlespace}
\label{fig:xray}
\end{figure*}
The duration of the X-ray minimum has varied in the last 4 cycles where it was closely monitored \citep{Corcoranetal2017}.
Figure \ref{fig:xray_cycles} shows a comparison of the last 5 cycles, focused on the X-ray minimum.
As can be seen in Figure \ref{fig:xray}, the duration of the X-ray minimum in the 2020 cycle is not well constrained, as the recovery is gradual and shows some fluctuations; the duration can be taken to be anywhere in the range $\simeq25$--$37 ~\rm{days}$.
The hard component has a clearer recovery, and the duration of its minimum is $23 ~\rm{days}$.
The recovery from the 2020 X-ray minimum occurs at the steepest slope amongst the 5 cycles.
A quantitative criterion that takes the slope of the recovery into account would indicate that the present cycle had the shortest minimum.
If, for example, we consider the duration of the X-ray minimum to be the time where the flux is $F \leq 5 \times 10^{-11} \rm{erg~s^{-1}~cm^{-2}}$ then the 2020 X-ray minimum is the shortest.
Hereafter we will refer to this finding about the 2020 minimum simply as the `fastest recovery'.
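The flux-threshold criterion for the duration of the minimum can be made explicit; in the sketch below, the helper name and the sample light curve are invented for illustration and are not the \textit{NICER} data:

```python
def minimum_duration(times, fluxes, threshold=5e-11):
    """Time (in the units of `times`) between the first and last
    samples whose flux falls below `threshold` (erg s^-1 cm^-2)."""
    below = [t for t, f in zip(times, fluxes) if f < threshold]
    return below[-1] - below[0] if below else 0.0

# hypothetical daily light curve: the flux dips below 5e-11
# between day 10 and day 30, giving a 20-day minimum
fluxes = [1e-10] * 10 + [2e-11] * 21 + [1e-10] * 9
duration = minimum_duration(list(range(40)), fluxes)
```

Applying the same threshold to each folded cycle then allows a like-for-like comparison of the minimum durations.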
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\textwidth]{compare_cycles.pdf}
\begin{singlespace}
\caption{
Comparison of the X-ray minimum for the last 5 X-ray minima of $\eta$~Car.
Data for earlier cycles are adopted from \cite{Corcoranetal2017}.
The time axis is fixed to the beginning of the {1997--8} X-ray minimum, and a period of 2023 days is used to fold the data.
The duration of the 2020 X-ray minimum can be taken to be anywhere in the range $\simeq25$--$37 ~\rm{days}$, depending on the definition.
The recovery from the 2020 X-ray minimum occurs at the steepest slope.
}
\end{singlespace}
\label{fig:xray_cycles}
\end{figure*}
Figure \ref{fig:N_H_210510} shows the hydrogen column density $N_{\rm H}$ from the fitted \textit{NICER} spectra for the broad-band and hard X-ray ranges, while Figure \ref{fig:N_H} shows the hard X-ray flux and $N_{\rm H}$ together.
The duration of X-ray minimum for the hard X-ray component is marked in these two figures.
\begin{figure*}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.95\textwidth]{N_H_14a15a.pdf}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.97\textwidth]{N_H_zoom_14a15a.pdf}
\begin{singlespace}
\caption{\textit{Upper panel:} The derived hydrogen column density $N_{\rm H}$ for the 2--10 keV and 5--10 keV energy ranges, and 90\% confidence values for both ranges.
The time axis begins about two years before periastron passage, when the stars are very far from each other on their $P=2023$ day orbit.
Dashed vertical lines in both panels represent the beginning and end of the X-ray minimum (note the uncertainties in the exit date that we discuss in the text).
\textit{Lower panel:} zoom in on the 2020 X-ray minimum (close to periastron passage).
}
\end{singlespace}
\label{fig:N_H_210510}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.89\textwidth]{N_H_and_F.pdf}
\includegraphics[trim= 1.1cm 0.0cm 0.0cm 0.0cm,clip=true,width=1.00\textwidth]{N_H_and_F_zoom.pdf}
\end{center}
\begin{singlespace}
\caption{\textit{Upper panel:} The derived hydrogen column density $N_{\rm H}$ of the hard X-ray emission of $\eta$~Car from \textit{NICER} observations (left axis, blue line).
The time axis begins about two years before periastron passage, when the stars are very far from each other on their $P=2023$ day orbit.
Dashed vertical lines in both panels represent the beginning and end of the X-ray minimum.
\textit{Lower panel:} zoom in on the 2020 X-ray minimum (close to periastron passage).
The flux from Figure \ref{fig:xray} is also plotted in both panels (right axis, red line).
}
\end{singlespace}
\label{fig:N_H}
\end{figure*}
\section{The X-ray light curve}
\label{sec:X-Ray-LC}
Before we turn to analyse the column density evolution, we briefly discuss the X-ray light curve before, during, and right after the X-ray minimum. We emphasise three properties of the X-ray light curve, and explain them in the framework in which the hard X-rays result from the post-shock secondary wind as it collides with the primary wind in the region between the two stars (e.g., \citealt{PittardCorcoran2002, Akashietal2006}).
\textit{1. A deep X-ray minimum.} Figure \ref{fig:xray} shows that during the X-ray minimum the X-ray emission is very weak. Absorption alone cannot account for this.
Hydrodynamic simulations by our group \citep{Akashietal2013, Kashi2017, Kashi2019} show that near periastron passage the secondary star accretes mass from the primary wind, a process that shuts down the secondary wind for several weeks.
During that time, the flow of gas in the binary system is that of a secondary star accreting from the primary wind, rather than a flow of colliding winds.
These simulations reveal a complicated flow structure that is highly asymmetrical (apart from mirror symmetry about the equatorial plane) and more complicated than the Bondi-Hoyle-Lyttleton (BHL; \citealt{HoyleLyttleton1939,BondiHoyle1944}) accretion picture. Nevertheless, we note that the BHL formula reproduces the accretion rate to within a factor of a few \citep{KashiSoker2009b}. The high $N_{\rm H}$ results in a lack of radiative cooling at the surface of the secondary star, further complicating the flow structure \citep{Akashietal2006}.
The \textit{NICER} light curve (Figure \ref{fig:xray}) indeed suggests that the secondary wind ceases to exist during the X-ray minimum. This seems to support the numerical results that the primary wind manages to reach the secondary star, and that the secondary star accretes mass from the wind of the primary star rather than blowing its own wind (e.g., \citealt{Soker2005, Kashi2019}; see section \ref{sec:intro}). According to this scenario, when the secondary star rebuilds its wind the X-ray emission resumes. In section \ref{sec:summary} we further discuss this point in relation to the orientation of the binary system.
During most of the X-ray minimum the X-ray emission is due mainly to post-shock secondary wind that was shocked weeks to months before periastron passage, and now resides at large distances of {$d_{x, {\rm min}} \ga 10 ~\rm{AU}$} from the primary star.
By equation (\ref{eq:NHc}) or equation (\ref{eq:NHf}) this ensures a low column density, as we replace $r_{\rm{p}}$ with $d_{x, {\rm min}}$.
Although the post-shock secondary wind suffers adiabatic cooling, its X-ray tail contributes to the $>5 ~\rm{keV}$ band.
This explains the low values of $N_{\rm H}$ at the X-ray minimum.
\textit{2. An early exit from the X-ray minimum.} From Figure \ref{fig:xray_cycles} we learn that the last cycle had the earliest exit from the X-ray minimum. This continues the trend of the previous two cycles that had earlier exits than the first two cycles for which we have X-ray light curves \citep{Corcoran2005, Corcoranetal2017, Espinoza-Galeasetal2020}. As we mention in section \ref{sec:intro}, this might result from a weaker primary wind that allows the secondary wind to rebuild itself earlier.
\textit{3. Strong pre-minimum fluctuations.} One feature common to all five light curves of the five cycles is that the light curve after exit from the minimum is relatively smooth. On the other hand, the light curves in the weeks to months before the minimum show large fluctuations, known as flares \citep{MoffatCorcoran2009}. This most likely is an outcome of large-amplitude instabilities in the process of the wind collisions, as numerical simulations show \citep{Akashietal2013, Kashi2017, Kashi2019}. This is important also for the minimum itself. The instabilities imply the presence of dense clumps in the primary wind. Such clumps are the first to reach the secondary star as the system approaches periastron, and they seem to weaken the secondary wind. This in turn allows more of the primary wind to reach the secondary star and completely or almost completely turn off the secondary wind for the duration of the minimum.
\section{The column density to the hard X-ray source (5--10 keV)}
\label{sec:NH}
We perform our analysis under the common assumption that in $\eta$~Car most of the hard X-ray emission, $>5 ~\rm{keV}$, comes from the central binary \citep{Hamaguchietal2007}, namely from the wind collision region (specifically, the post-shock secondary wind). This region lies mainly between the two stars and is closer to the secondary star \citep[e.g.,][]{Ishibashietal1999, PittardCorcoran2002, Akashietal2006, Hamaguchietal2007}.
\subsection{$N_{\rm H}$ close to apastron}
\label{sucsec:NHapastron}
\cite{Hamaguchietal2007} deduced from their analysis that away from periastron (near apastron) the column density toward the hard X-ray emission component ($>5 ~\rm{keV}$) is $N_{\rm H,a} \simeq 17 \times 10^{22} ~\rm{cm}^{-2}$ (their figure 13). Subscripts `$a$' and `$p$' refer to values near apastron and near periastron, respectively.
First we note that the X-ray light curve has large fluctuations (`flares' and `troughs'), as in previous cycles (e.g., \citealt{Corcoran2005}). The same goes for $N_{\rm H}$ toward the main hard X-ray source. From the left side of the upper panel of Figure \ref{fig:N_H} we find that away from periastron the values of the column density are in the range of $N_{\rm H,a} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$.
\cite{KashiSoker2008a} calculated (their figure 6) the expected values of the absorbing column density for the two orientations close to apastron.
For $\omega \simeq 270^\circ$ (secondary closer to the observer at apastron) they obtained $N_{\rm H,a}{\rm (Cal270)} \simeq 4 \times 10^{22} ~\rm{cm}^{-2}$, while for $\omega \simeq 90^\circ$ (primary closer to the observer at apastron) they obtained $N_{\rm H,a}{\rm (Cal90)} \simeq 17 \times 10^{22} ~\rm{cm}^{-2}$.
It is clear that the $\omega \simeq 270^\circ$ orientation (secondary close to us near apastron) is not consistent with the high ($N_{\rm H,a} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$) column density derived in this work towards the wind collision region.
Therefore --- although there are large uncertainties and large variations from data point to data point --- overall, the new column density values we find from \textit{NICER} observations of the last cycle near apastron strengthen the claim of \cite{KashiSoker2008a} for the $\omega \simeq 90^\circ$ orientation.
\subsection{$N_{\rm H}$ close to periastron}
\label{subsec:NHperiastron}
To determine the column density toward the post-shock secondary wind, we consider only the hard X-rays, $>5 ~\rm{keV}$, which we attribute (see above) to the post-shock secondary wind, mainly close to the secondary star.
During most of the $>5 ~\rm{keV}$ X-ray minimum, JD=2458889 (10-2-2020) to JD=2458912 (4-3-2020), the values of $N_{\rm H}$ are not much larger than the $N_{\rm H}$ values before and after the X-ray minimum, when the X-ray flux is much larger. Although the uncertainties in the values of $N_{\rm H}$ are of the order of the values themselves, the inferred change in $N_{\rm H}$ is sufficiently small to suggest that the minimum is \textit{not} due to absorption of the X-ray source. Hence, the power of the X-ray source must diminish substantially during the X-ray minimum.
During the X-ray minimum, the $N_{\rm H}$ values lie in the range $28$--$72 \times 10^{22} ~\rm{cm}^{-2}$, with a median value of $\simeq 57 \times 10^{22} ~\rm{cm}^{-2}$.
Taking the weighted mean over this period, we get $N_{\rm H,min}=(53 \pm 13) \times 10^{22} ~\rm{cm}^{-2}$.
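The weighted means quoted in this section are the usual inverse-variance weighted means; a minimal sketch (the measurement values below are invented for illustration, not the actual fit results, and the quoted uncertainties in the text may also reflect the scatter among individual fits):

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its formal 1-sigma error:
    mean = sum(x_i / s_i^2) / sum(1 / s_i^2), err = sum(1 / s_i^2)**-0.5."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    err = sum(weights) ** -0.5
    return mean, err

# two hypothetical N_H measurements (units of 1e22 cm^-2):
# 50 +- 10 and 60 +- 10 combine to 55 +- ~7.1
m, e = weighted_mean([50.0, 60.0], [10.0, 10.0])
```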
We also note that when $\eta$~Car was observed during the 2003.5 X-ray minimum by both \textit{XMM-Newton} and \textit{Chandra} (which has better spatial resolution\footnote{\cite{Henleyetal2008} mention that the spectral line profiles observed by \textit{Chandra} during the 2003.5 X-ray minimum can be fitted with synthetic profiles from a model of the emissivity along the colliding-winds boundary.}), similar values of $N_{\rm H} \simeq 20$--$60 \times 10^{22} ~\rm{cm}^{-2}$ were derived from the observations of both telescopes \citep{Hamaguchietal2007}.
This is also the same range of values we derive here during the 2020 X-ray minimum.
On JD=2458910 the binary system is about 20 days after periastron. At that time the binary has moved about $90^\circ$ in its orbital motion with respect to periastron, and the column density is about the same as the mean value during periastron.
There are few valid $N_{\rm H}$ observations at the exit from periastron, so we also consider values at later times, during the recovery (exit) from the minimum, JD=2458912 to JD=2458930. We note again that the exit from the minimum in the last cycle is the earliest among the five recorded X-ray cycles, and so the hard X-ray flux at exit is sufficiently strong to allow us to determine $N_{\rm H}$ at the exit from the minimum. During this exit period the values of the column density have a median of $\simeq 58 \times 10^{22} ~\rm{cm}^{-2}$, and taking the weighted mean over the period gives $N_{\rm H}{\rm (exit)}=(67\pm 11) \times 10^{22} ~\rm{cm}^{-2}$.
Thus we obtain
\begin{equation}
\frac{N_{\rm H,p}}{N_{+90}} \simeq 0.79 \pm 0.23,
\label{eq:ObservedRatio+}
\end{equation}
where $N_{\rm H,p}$ and $N_{+90}$ are the means of the best-fit column densities during and after periastron passage, respectively.
Prior to the periastron passage the X-ray flux is larger, resulting in better determined values of $N_{\rm H}$.
About 20 days before the beginning of the X-ray minimum, JD $\simeq 2458870$, the binary system is about to enter the X-ray minimum. Taking the range JD=2458865 to JD=2458875 the column density has a median of $\simeq 25 \times 10^{22} ~\rm{cm}^{-2}$, and taking weighted mean over this range gives $N_{\rm H}{\rm (enter)} =(26 \pm 3) \times 10^{22} ~\rm{cm}^{-2}$. Adopting this value for the column density $90^\circ$ before periastron, i.e., $N_{\rm H}{\rm (enter)}=N_{-90}$, we find
\begin{equation}
\frac{N_{\rm H,p}}{N_{-90}} \simeq 2.04 \pm 0.55,
\label{eq:ObservedRatio-}
\end{equation}
where $N_{\rm H,p}$ and $N_{-90}$ are the means of the best-fit column densities during and before periastron passage, respectively.
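The uncertainties in equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-}) follow from standard first-order error propagation for a quotient of uncorrelated measurements; a minimal check (\texttt{ratio\_with\_error} is a helper written for this sketch, and the common units of $10^{22} ~\rm{cm}^{-2}$ cancel in the ratio):

```python
def ratio_with_error(a, da, b, db):
    """First-order propagation for r = a / b with uncorrelated errors:
    dr = r * sqrt((da/a)^2 + (db/b)^2)."""
    r = a / b
    return r, r * ((da / a) ** 2 + (db / b) ** 2) ** 0.5

# N_H,p / N_H(exit):  53 +- 13 over 67 +- 11  ->  ~0.79 +- 0.23
# N_H,p / N_H(enter): 53 +- 13 over 26 +- 3   ->  ~2.04 +- 0.55
```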
Let us compare the observationally determined ratios in equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-}) with the theoretical expectation of equation (\ref{eq:ratios}). We first note that the simple theoretical setting used in deriving equation (\ref{eq:ratios}) gives the same column density before and after the X-ray minimum, i.e., ${N_{-90}}={N_{+90}}$. Therefore, the single observational value to compare with the theoretical expectation is the average of the ratios in equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-}), i.e., $N_{\rm H,p}/N_{\rm H, 90} \simeq 1.4$ (recall that the corresponding theoretical ratio carries only about a 1\% error).
For an assumed orientation in which the secondary star is closest to us at periastron ($\omega \simeq 90^\circ$), the theoretical ratio of the column density at periastron to that $90^\circ$ away in orbital phase, which occurs about 20 days before and after periastron, is $N_{\rm H,c} /N_{\rm 90} \simeq 1.4$, while for the orientation where the secondary star is farthest from us at periastron ($\omega \simeq 270^\circ$) the expected value is $N_{\rm H,f} /N_{\rm 90} \simeq 3.3$ (equation \ref{eq:ratios}).
The observed ratios (equations (\ref{eq:ObservedRatio+}) and (\ref{eq:ObservedRatio-})) are hence more consistent with the $\omega=90^\circ$ orientation.
We note that it is possible that the $N_{\rm H}$ we deduce during the faintest portions of the \textit{NICER} X-ray minimum does not refer to the line of sight toward the vicinity of the secondary star, as we assume here, but rather toward a different weak source near the center, which \cite{Hamaguchietal2007,Hamaguchietal2014a,Hamaguchietal2014b,Hamaguchietal2016} term the central constant emission (CCE) component.
We prefer the foregoing model --- wherein the intrinsic X-ray luminosity has declined (due to accretion onto the secondary) and $N_{\rm H}$ is indeed measured toward the central binary --- in part because of the smooth variation of $N_{\rm H}$ across the X-ray minimum. We would have expected that if the $N_{\rm H}$ during the X-ray minimum is measuring the absorbing column toward an X-ray source whose extent is much larger than the central binary, then $N_{\rm H}$ would show a steep drop as the flux drops when entering the X-ray minimum, and that these lower values are sustained until the flux recovers.
This signature is not apparent in these observations, although we note that the values of $N_{\rm H}$ are poorly constrained, with large uncertainties, due to the low X-ray flux during the minimum.
Similar smooth variation has been observed for the {1997--8} X-ray minimum by \textit{RXTE} \citep{Ishibashietal1999}.
\section{Summary and Discussion}
\label{sec:summary}
Since July 2017, the \textit{NICER} X-ray telescope facility has regularly observed $\eta$~Car, including daily coverage of the 2020.1 X-ray minimum, which coincided with the so-called ``spectroscopic event'' of strong variability in visible and IR line emission that occurs around periastron passages (the most recent passage occurred in February 2020).
We processed and analyzed the \textit{NICER} X-ray observations (examples of which are shown in Figure \ref{fig:xrayspectra}), with our analysis focused on the hard (5--10 keV) X-ray light curve and the hydrogen column density to the source of the hard X-rays.
The light curve shows the expected X-ray minimum 5.54 years after the previous minimum in mid-2014. The \textit{NICER} data demonstrate that this most recent periastron passage exhibited the fastest recovery among the five observed X-ray minima (Fig.~\ref{fig:xray}).
We interpret the fast recovery of this X-ray minimum in the frame of the accretion model of the spectroscopic event \citep{KashiSoker2008a}.
According to the accretion model, near periastron, the stellar wind collision region becomes very close to the secondary star; as a result, the secondary star accretes mass from the dense primary wind. This accretion process suppresses the secondary's wind.
We surmise that the primary wind was relatively weak during this most recent periastron passage, and this weak wind allowed the secondary wind to quickly revive itself, thereby terminating the X-ray minimum at the earliest recorded orbital phase after periastron passage.
By fitting the hard (5--10 keV) region of the X-ray spectra (see examples in Figure \ref{fig:xrayspectra510}), we determined the column density $N_{\rm H}(t)$ to the hard X-ray source as a function of time (Figure \ref{fig:N_H}). The hard X-ray emission originates close to the apex of the post-shock secondary wind, where the two winds collide directly, and is therefore located between the two stars, closer to the secondary than to the primary star.
Although there are large uncertainties in the values of $N_{\rm H}(t)$ derived from the individual \textit{NICER} spectral fits, we found that we can use the time-averaged values of $N_{\rm H}$ during specific orbital phases (before, during, and just after the X-ray flux minimum) to discriminate between two alternative orientations proposed for the binary system (i.e., secondary in front of vs.\ behind primary during periastron passage).
Specifically, in section \ref{sucsec:NHapastron}, we compared the values of $N_{\rm H}$ that we derived from observations away from periastron, i.e., near apastron (left region of the upper panel of Figure \ref{fig:N_H}), with the theoretically expected values that we derived in \cite{KashiSoker2008a}. We found that the column densities away from periastron are too large to be consistent with the binary orientation wherein the secondary star is closer to us at apastron, i.e., $\omega \simeq 270^\circ$, since the dense primary wind must supply the column density. These results thereby support the earlier conclusion of \cite{KashiSoker2008a} \citep[which was based on $N_{\rm H}$ values derived from \textit{XMM-Newton} X-ray observations;][]{Hamaguchietal2007} that the secondary star of $\eta$~Car is away from us near apastron, i.e., $\omega \simeq 90^\circ$.
In section \ref{subsec:NHperiastron} we compared the values of $N_{\rm H}$ that we calculated for the two alternative opposite binary orientations (section \ref{sec:theory_NH}) to those that we derived from observations (lower panel of Figure \ref{fig:N_H}).
We took the periastron passage to have occurred just after the beginning of the X-ray minimum. We determined the approximate ratio of the column density near periastron to that about 20 days later, when the primary-secondary position had changed by $90^\circ$ (equation \ref{eq:ObservedRatio+}), and the ratio of periastron column density to that 20 days before periastron (equation \ref{eq:ObservedRatio-}). These ratios of $N_{\rm H}$ variation are $\approx 0.8$ and $\approx 2.0$, respectively.
In the simple theoretical model we have developed (Figure \ref{fig:Schematic}), these two ratios ought to be small if $\omega \simeq 90^\circ$, since during this time the line of sight to the hard X-ray source does not pass very close to the primary, where the densities are very large.
This range of column density ratio variation, 0.8--2.0, can then be compared with the theoretical predictions for the variations that should result from the two opposing assumed binary orientations. For the $\omega = 90^\circ$ orientation (upper part of Figure \ref{fig:Schematic}) the predicted ratio of variation is 1.4 --- precisely in the middle of the observed range --- while for the $\omega = 270^\circ$ orientation (lower part of Figure \ref{fig:Schematic}) the predicted ratio is 3.3 (equation \ref{eq:ratios}), i.e., outside the observed range of variation.
Therefore, this comparison of the theoretically expected variation of $N_{\rm H}$ around periastron (that we express as ratios with the value at the X-ray minimum) to those that we derived from observations, also supports the $\omega=90^\circ$ orientation (upper part of Figure \ref{fig:Schematic}).
Thus, the main conclusion we draw from our analysis of the \textit{NICER} X-ray observations of the most recent periastron passage of $\eta$~Car is that the binary orbital orientation is $\omega \approx 90^\circ$, i.e., the secondary star is closer to us than the primary star at periastron.
Our secondary conclusion is that the weakening of the primary stellar wind over the last several cycles allowed the earlier revival of the secondary wind, resulting in the fastest recovery from X-ray minimum yet observed for $\eta$~Car.
The results described in this paper serve to illustrate how the variation of column density toward the hard X-ray source during the orbital motion of the massive binary system $\eta$~Car, $N_{\rm H}(t)$, is potentially a more sensitive diagnostic of the configuration of the binary and the orientation of the binary orbit than is the variation of the X-ray flux $F_{\rm X}(t)$ \citep{KashiSoker2008a}. The drawback of this application of column densities derived from spectral fitting is that --- especially in the (hard X-ray) energy range of interest here ($\sim$5--10 keV) --- $N_{\rm H}(t)$ is subject to larger uncertainties than $F_{\rm X}(t)$. Thus, in the coming decades, higher quality X-ray observations of $\eta$~Car around periastron utilizing planned X-ray missions such as Athena+ \citep{Nandraetal2013} are essential if we are to improve our understanding of this astrophysically important massive binary system.
\vspace{0.5cm}
We thank an anonymous referee for helpful comments. This paper used data from the Neutron star Interior Composition Explorer (\textit{NICER}), obtained from the High Energy Astrophysics Archive Research Center (HEASARC).
We thank R. Remillard for his helpful discussion regarding use of the \textit{NICER} \texttt{nibackgen3c50} tool.
AK acknowledges support from the R\&D Authority, and the chairman of the Department of Physics in Ariel University.
NS was supported by a grant from the Israel Science Foundation (769/20).
\software{Sherpa (v4.12; \citealt{Freemanetal2001,Doeetal2007,Burkeetal2020}), HEAsoft (v6.27.2; HEASARC 2014)}
\software{nicerl2, HEAsoft (v6.27.2; HEASARC 2014), \url{https://heasarc.gsfc.nasa.gov/lheasoft/ftools/headas/nicerl2.html}}
\software{nibackgen3c50, HEAsoft (v6.27.2; HEASARC 2014), \url{https://heasarc.gsfc.nasa.gov/docs/nicer/tools/nicer_bkg_est_tools.html}}
\software{VAPEC (\citealt{MorrisonMcCammon1983}; HEAsoft (v6.27.2; HEASARC 2014)),
\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node135.html\#vapec}}
\vspace{0.5cm}
\section{Acknowledgments}
\vspace{-2pt}
This work was supported in part by
the National Science Foundation (DIBBs 1443054, CAREER IIS-1253549),
and by the IU Office of the Vice Provost for Research,
the IU College of Arts and Sciences,
and the IU Luddy School of Informatics, Computing, and Engineering
through the Emerging Areas of
Research Project ``Learning: Brains, Machines, and Children.''
We acknowledge the use of data from CReSIS with support from
the University of Kansas and Operation IceBridge (NNX16AH54G).
\section{More Experimental Results}
Fig.~\ref{fig:sample_results_two} shows two more examples with different numbers of internal layers. In each example, the first column shows the human-annotated layers with the input data as background. The second column shows the estimated result generated by one of our baselines, \textit{CNN3B}. The results of this baseline roughly agree with the ground truth, but have some clear defects. For example, in the first row, the second orange layer in the \textit{CNN3B} result fails to match the ground truth well. In the second row, the third yellow layer and fourth purple layer show clear mismatches.
The third and fourth columns show the prediction results with and without the layer number oracle. Since our \textit{CNN3B+RNN} model successfully predicts the number of layers in both examples, the outputs with and without the oracle are nearly the same. Both \textit{CNN3B+RNN} results improve on the baseline: in the first row, the second orange layer matches the human annotation in the first column better than \textit{CNN3B} does in the second column, and in the second row, the third yellow layer and fourth purple layer are clearly closer to the ground truth than the \textit{CNN3B} result.
Two failure cases are shown in Fig.~\ref{fig:failure_cases}. In the first failure case, there are 6 internal layers in the ground truth, but \textit{CNN3B+RNN} without the oracle fails to predict the number of layers correctly. Our \textit{CNN3B} only predicts the first layer well and shows a clear mismatch in all the other layers, while our \textit{CNN3B+RNN} result in the fourth column predicts the first 5 layers reasonably well. Our \textit{CNN3B+RNN} with the oracle shows the best results for this case, but there are still many ripples in the result, showing that our model fails to predict it perfectly. In the second failure case, our \textit{CNN3B} model fails to predict almost all the layers, while \textit{CNN3B+RNN} fails both to estimate the number of layers and to match most of the internal layers. However, \textit{CNN3B+RNN} still works well for the first two layers.
There are two causes for these failure cases. First, the input data is noisy and complex, and difficult to annotate even for an experienced human annotator. Not only does this mean that our model must learn to process the complex and noisy input data, but it also means that the ``ground truth'' from human annotators has much noise and inconsistency. This annotation noise not only adds noise during training, but also means that we are measuring error against ground truth that itself contains many errors. Second, we face a significantly imbalanced dataset: for example, only 1.35\% of the images have more than 5 internal ice layers, which may limit our model's ability to learn such cases.
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{figures/good_examples.png}
\end{center}
\caption{Sample results. First column is input data with ground
truth. Second column is one of our baselines, \textit{CNN3B},
with ground truth number of layers. Third column is our
best model, \textit{CNN3B+RNN}, with ground truth number of
layers. Fourth column is our best model, \textit{CNN3B+RNN},
with estimated number of layers.}
\label{fig:sample_results_two}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{figures/bad_samples.png}
\end{center}
\caption{Failure cases. First column is input data with ground truth. Second column is one of our baseline \textit{CNN3B} result with ground truth number of layers. Third column is our best model \textit{CNN3B+RNN} result with ground truth number of layers. Fourth column is our best model \textit{CNN3B+RNN} result with prediction number of layers. Two rows represent two different input data with annotations and prediction results.}
\label{fig:failure_cases}
\end{figure*}
\section{Conclusion}
\vspace{-2pt}
We have considered a generalization of the tiered segmentation problem and applied it to a problem of great societal consequence: automatically understanding the internal layer structure of the polar ice sheets from ground-penetrating radar echograms. We showed that our approach can effectively estimate arbitrary numbers of snow or ice layers from noisy radar images. Experimental results on a challenging, publicly-available dataset demonstrate significant improvements over existing methods.
\section{Experiments}
\vspace{-5pt}
\subsection{Dataset}
We use the annual ice layer dataset collected by the
Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas
and the National Snow and Ice Data
Center at the University of Colorado~\cite{koenig2016annual}.
The data is collected by an ultra-wideband snow radar operated over a frequency range of 2.0 to 6.5 GHz, and consists of 17,529 radar images
with human-labeled annotations that identify the positions of
internal ice layers. Formally, our task is to detect all internal ice
layers $\hat{M}$ in a given single-channel image $I$. Each
element in $\hat{M}$ indicates the
row coordinate (in the range $[1, H]$, where $H$ is image height) of an ice layer
for a given column.
\xhdr{Preprocessing.}
We resize all input images to
$300 \times 256$ by using bi-cubic interpolation. We normalize
the grayscale pixel values by
subtracting the mean and dividing by the standard deviation (both
of which are calculated from the training
data). Following~\cite{xu2018multi}, we also normalize the ground truth
row labels to a coordinate system spanning $[-1, 1]$ in each image.
We also remove input images that have missing data.
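The normalization steps above can be sketched as follows (a minimal illustration with our own function names; resizing to $300 \times 256$ would be done separately with bi-cubic interpolation, e.g., via PIL's \texttt{Image.resize} or \texttt{torch.nn.functional.interpolate}):

```python
import numpy as np

def standardize(x, train_mean, train_std):
    # Zero-mean, unit-variance normalization using statistics
    # computed from the training data only.
    return (np.asarray(x, dtype=np.float32) - train_mean) / train_std

def normalize_rows(rows, H):
    # Map ground-truth row labels from [1, H] to [-1, 1],
    # following the coordinate convention of Xu et al.
    rows = np.asarray(rows, dtype=np.float32)
    return 2.0 * (rows - 1.0) / (H - 1.0) - 1.0
```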
\vspace{-5pt}
\subsection{Implementation Details}
We use PyTorch to implement our model, and perform training and all experiments on a system with an NVIDIA Pascal Titan X graphics card.
We randomly choose 80\% of images for training and 20\% for testing.
The Adam optimizer with default parameters is used to learn the CNN parameters with a batch size of 16. Training is stopped after 30 epochs, starting with a learning rate of $10^{-4}$ and halving it every 10 epochs. The RNN training uses the same optimizer, update rule, and batch size as the CNN's, but the initial learning rate is set to $10^{-3}$.
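The step schedule described above can be written compactly (a sketch, assuming epochs indexed from 0 and a multiplicative decay of 0.5 every 10 epochs):

```python
def learning_rate(epoch, base_lr=1e-4):
    # Start at base_lr and halve every 10 epochs (training runs 30 epochs).
    return base_lr * (0.5 ** (epoch // 10))
```

In PyTorch the same schedule is available as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)`.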
\subsection{Evaluation Metrics}
Prior work has used mean absolute error in pixels between predicted and
ground truth layers~\cite{crandall2012layer,lee2014estimating}, a familiar
evaluation metric in signal processing applications. However, in our
problem of internal ice detection, the number of layers is unknown,
which means the evaluation metric must capture both the accuracy of
estimated layer count and the localization accuracy of the layers.
Prior work on internal layer detection
has typically been evaluated qualitatively~\cite{mitchell2013semi}.
We thus introduce two quantitative, objective evaluation approaches for the
tiered segmentation problem.
Our first evaluation protocol assumes that the correct number of layers is known
via an oracle, and then measures mean absolute error in pixels.
Assuming that the correct number of layers is given is useful both for
isolating the accuracy of layer localization, and for allowing comparison with models that are not able
to estimate layer counts.
To evaluate the accuracy of both the layer count and layer boundaries,
we propose \textit{layer-AP} based on average precision. For each estimated layer, we search through
the ground truth layers to find the closest match according to mean absolute error.
Each ground truth layer is only allowed to match one estimated layer.
Then we define a set of threshold values $t_l$. For each threshold, we count the number of estimated layers which have a
mean absolute error in pixels under the threshold, and call this the number of matches $m_i$ for that threshold.
In particular, the layer average precision is computed as,
\begin{equation}
\textrm{layer-AP} = \frac{1}{l}\sum_{i=1}^{l} \frac{m_{i}}{N+1},
\end{equation}
where $N+1$ is the number of layer boundaries ($N$ is the number of gaps between boundaries), and $l$ indicates the number of
thresholds. In this work, we use 10 different thresholds, $t_l = [1,4,7,10,13,16,19,22,25,27]$, assuming input images of size $300 \times 256$.
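A minimal implementation of this metric might look as follows (our own sketch of the matching procedure described above, not the authors' released code):

```python
import numpy as np

DEFAULT_THRESHOLDS = (1, 4, 7, 10, 13, 16, 19, 22, 25, 27)

def layer_ap(pred_layers, gt_layers, thresholds=DEFAULT_THRESHOLDS):
    # pred_layers, gt_layers: lists of length-W row-coordinate arrays.
    # Greedily match each estimated layer to the closest unmatched
    # ground-truth layer by mean absolute error (MAE).
    errors, unmatched = [], list(range(len(gt_layers)))
    for p in pred_layers:
        if not unmatched:
            break
        maes = [float(np.mean(np.abs(np.asarray(p, float) -
                                     np.asarray(gt_layers[j], float))))
                for j in unmatched]
        k = int(np.argmin(maes))
        errors.append(maes[k])
        unmatched.pop(k)
    # m_i = number of matched layers with MAE under threshold t_i;
    # the denominator N+1 is the number of ground-truth boundaries.
    n_boundaries = len(gt_layers)
    return sum(sum(e < t for e in errors) for t in thresholds) / (
        len(thresholds) * n_boundaries)
```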
\subsection{Baselines}
We are not aware of any existing work that solves
the problem we consider here:
existing fully-automatic approaches
to the tiered segmentation problem
assume the number of layers is no greater than two and is known ahead of time.
We thus develop some baseline models to compare our results
against.
Crandall~\etal~\cite{crandall2012layer} proposed a technique based on graphical models to find layer boundaries, which we call \textbf{Sequential}. However, they assume
exactly two layer boundaries because the running time is exponential in the number of layers. Here, we adapted it to our problem by
using an oracle to determine the number of layers (by looking at ground truth), and
then running sequentially to find each layer one-by-one.
\textbf{Naive CNN} uses the
VGG16~\cite{simonyan2014very} as backbone
which directly predicts a fixed number
of internal layers by producing a label matrix in one-shot.
\textbf{RNN30} models the dependencies in the vertical direction:
given the estimated boundary for a given layer and previous layers, it
predicts the boundary for the subsequent (next-deeper) annual layer.
\textbf{RNN256} is a baseline that uses a recurrent neural network (RNN)
to model sequential dependencies across columns, assuming a fixed number
of layers.
\textbf{CNN2B} is a simpler version of our model that uses
only two branches, one to predict the top
layer (the air and ice boundary), and
one to predict the average gap between layers.
\textbf{CNN3B} is a version of our model with all three branches, but without the RNN refinement.
\textbf{CNN3B+RNN} is our full model described above.
\subsection{Evaluation Results}
\vspace{-5pt}
\xhdr{Quantitative results} are presented in Tables~\ref{tab:mean_results} and~\ref{tab:lap_results} in terms of mean absolute error and layer-AP, respectively.
In each table, we present two sets of results: one in which the number
of layers is known ahead of time by an oracle (\textit{i.e.}, by consulting
the ground truth), and one in which it is predicted automatically.
Note that only the techniques that use \textit{CNN3B} are able to estimate
the number of layers automatically, which is why the other results
are listed as missing in the table. For calculating mean absolute error
when models incorrectly estimate the number of layers,
we pad either the ground truth or the output (whichever has fewer layers) with extra layers consisting of zero vectors to penalize
these incorrect estimations.
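This padding scheme can be sketched as follows (a minimal illustration; the $(K, W)$ array representation is our assumption):

```python
import numpy as np

def padded_mae(pred, gt):
    # pred, gt: (K, W) arrays of per-column row coordinates.
    # Pad whichever set has fewer layers with zero vectors so that
    # an incorrect layer count is penalized in the error.
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    k, W = max(len(pred), len(gt)), pred.shape[1]
    def pad(x):
        extra = np.zeros((k - len(x), W))
        return np.vstack([x, extra]) if len(x) < k else x
    return float(np.mean(np.abs(pad(pred) - pad(gt))))
```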
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{figures/example.pdf}
\end{center}
\vspace{-20pt}
\caption{Sample results, showing (a) ground truth, (b)
\textit{CNN3B} output with ground truth number of layers, (c) \textit{CNN3B+RNN} with ground truth number of
layers, and (d) \textit{CNN3B+RNN} with estimated number of layers.}
\label{fig:sample_results}
\end{figure*}
Comparing with the other models in Tables~\ref{tab:mean_results} and~\ref{tab:lap_results}, we observe that our combination of the CNN3B and RNN models significantly outperforms all baselines in terms of both mean absolute error and layer-AP.
Our two models \textit{CNN3B} and \textit{CNN3B+RNN} have the ability
to estimate the number of internal ice layers, and reach $85.2\%$
accuracy on this layer counting task, which is why their accuracy
decreases only slightly when the number of layers is not provided by the oracle.
Our model \textit{CNN3B+RNN} shows the best results among all methods, even when it must estimate the number of layers while the baselines obtain the count from the oracle.
\begin{table}
\begin{center}
\scalebox{0.88}{\begin{tabular}{lcc}
\toprule
& \multicolumn{2}{c}{Mean Error (in pixels) $\downarrow$} \\ \cmidrule{2-3}
& \multicolumn{1}{l}{\# layers from oracle} & \multicolumn{1}{l}{\# layers estimated} \\ \midrule
Sequential \cite{crandall2012layer} & 88.98 & - \\
Naive CNN & 24.32 & - \\
RNN30 & 21.79 & - \\
RNN256 & 20.20 & - \\
CNN2B & 11.94 & - \\
CNN3B & 7.91 & 9.27 \\
CNN3B+RNN & 6.96 & 8.73 \\ \bottomrule
\end{tabular}}
\end{center}
\vspace{-12pt}
\caption{Evaluation results by measuring the error in terms of the mean absolute column-wise difference compared to ground truth, in pixels.}
\label{tab:mean_results}
\end{table}
\begin{table}
\begin{center}
\scalebox{0.88}{\begin{tabular}{lcc}
\toprule
& \multicolumn{2}{c}{layer-AP $\uparrow$} \\ \cmidrule{2-3}
& \multicolumn{1}{l}{\# layers from oracle} & \multicolumn{1}{l}{\# layers estimated} \\ \midrule
Sequential \cite{crandall2012layer} & 0.059 & - \\
Naive CNN & 0.183 & - \\
RNN30 & 0.218 & - \\
RNN256 & 0.254 & - \\
CNN2B & 0.635 & - \\
CNN3B & 0.843 & 0.822 \\
CNN3B+RNN & 0.882 & 0.853 \\ \bottomrule
\end{tabular}}
\end{center}
\vspace{-12pt}
\caption{Evaluation results by measuring the layer average precision with thresholds compared to ground truth.}
\vspace{-5pt}
\label{tab:lap_results}
\end{table}
\xhdr{Qualitative results} are shown in Fig.~\ref{fig:sample_results}.
The first column shows the human annotated layers, while the second
column shows the result generated by one of our baselines,
\textit{CNN3B}. The results of this baseline roughly agree with the
ground truth, but all layers except the first
show different degrees of inaccurate localization compared with the human
annotations. The third and fourth columns show the
results with and without the layer number oracle. Since our
\textit{CNN3B+RNN} model is highly accurate at estimating the number of layers,
the output with and without the oracle are nearly the same.
We provide additional sample results in
supplementary material.
As shown in the examples, our model only needs the input image
to generate results that are very close to human annotations in most cases.
The improvement between \textit{CNN3B} and \textit{CNN3B+RNN} indicates that the RNN
contributes to our final result even though the \textit{RNN30} and
\textit{RNN256} baselines fail to work well on their own. The results show that
both steps of our model are important to achieve high
performance.
\section{Introduction}
Understanding the impacts of global climate change begins at the north
and south poles: as the earth warms and the vast polar ice breaks
apart and melts, sea levels will rise and absorb more solar energy,
which in turn will cause the Earth to warm even faster~\cite{climate}.
To predict and potentially mitigate these changes,
glaciologists have developed models of how polar ice and snow will
react to changing climates. But these models require detailed
information about the current state of the ice. While we may think of
polar ice sheets as simply vast quantities of frozen water, in reality
they have important structure that influences how they will react to
rising temperatures. For example, deep beneath the ice is
bedrock, which has all the same diverse features as the rest of the
Earth's surface -- mountains, valleys, ridges, etc.~-- that affect how melting ice will behave. The ice
sheets themselves also have structure: snow and ice accumulate in
annual layers year after year, and these layers record important
information about past climatological events that can help predict the
future.
To directly collect data about the structure of ice requires drilling
ice cores, which is a slow, expensive, and extremely laborious
process. Fortunately, ground-penetrating radar systems have been developed that can
fly above the ice and collect data about the material
boundaries deep under the surface. This
process generates radar echograms
(Fig.~\ref{fig:teaser}), where the vertical axis represents the depth
of the return, the horizontal axis corresponds to distance along the
flight path, and the pixel brightness indicates the amount of energy
scattered from the subsurface structure. However, the echograms are
very noisy and typically require laborious manual
annotation~\cite{macgregor}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{figures/teaser.pdf}
\end{center}
\vspace{-15pt}
\caption{Given an echogram from ground-penetrating radar over polar ice,
we automatically estimate the number and position of internal ice layers. Left figure is adapted from \cite{nasayoutube}.}
\label{fig:teaser}
\vspace{-5pt}
\end{figure}
Most automatic techniques for finding layers in these images
only consider the ice-air and
ice-bedrock boundaries, which are the most prominent~\cite{crandall2012layer,lee2014estimating,xu2017automatic,xu2018multi,kamangir2018deep}.
A much more challenging problem is to identify the
``internal'' layers of the ice and snow caused by annual accumulation.
At first glance, solving this problem may seem like a straightforward application of
traditional computer vision techniques.
However, approaches based on edge detection do not work well
because
the layer boundaries are subtle, the noise characteristics vary dramatically, and the
number of visible layers changes across
different regions of ice.
Unlike most segmentation problems in computer vision,
the layers
here do not correspond to
``objects'' with distinctive colors or textures.
Nevertheless, our problem
can be viewed as a generalization of the tiered scene segmentation
problem~\cite{tiered}. Tiered segmentation partitions an image
into a set of regions $\{r_1, r_2, ..., r_n\}$ such that in each image
column, all pixels belonging to $r_i$ are above (have lower row index
than) all pixels corresponding to $r_j$ for $i<j$. Felzenszwalb and
Veksler~\cite{tiered} solved this problem using energy minimization
with dynamic programming, but they assumed no more than three distinct
labels per column because their inference time was exponential in the
number of labels.
In this paper, we revisit tiered labeling using deep learning,
and we consider a more challenging problem in which the number of labels is large and unknown ahead of time.
We propose a novel deep neural network
which performs the tiered segmentation in two stages. We first use
a 2D convolutional network (CNN) to simultaneously solve three problems:
detecting the position of the top layer, roughly estimating the
average layer thickness, and estimating
the number of visible layers.
Propagating the estimated first layer downward using the estimated thickness gives a rough approximation
of the tiered segmentation.
Then we refine the pixel-level boundary positions
using a recurrent neural network (RNN) to account for differences across
different layers.
We evaluate our method on
internal ice layer segmentation on a large-scale, publicly-available
polar echogram dataset.
Experimental results show that our approach significantly outperforms baseline methods, and is especially effective at multi-layer detection.
Beyond polar ice, our technique is
general and can be applied to tiered segmentation
problems in other domains.
\section{Methodology}
\begin{figure*}
\begin{center}
\includegraphics[width=0.90\textwidth]{figures/network.pdf}
\end{center}
\vspace{-15pt}
\caption{Architecture of our model for detecting internal ice
layers in radar echogram images. Through a combination of CNN and RNN
networks, we estimate both the number of layers and the boundaries of each layer.}
\label{fig:model}
\vspace{-5pt}
\end{figure*}
Given a noisy radar echogram $I$, which is a 2D image of size $1\times H \times W$ pixels,
our goal is to localize $N$ internal ice layer boundaries
and exactly one surface boundary between the ice and air. The
output thus should be $N + 1$ boundaries. We need to estimate both the
number of boundaries $N$ (which varies from image to image, although our
implementation assumes $N<30$) and all the boundary locations
based on noisy and ambiguous data.
Our technique encodes the physical constraints of this tiered
segmentation problem. First, since the labeled regions correspond to
physical layers, layer boundaries cannot cross; more precisely, we
partition the image into regions $\{r_1, r_2, ..., r_n\}$ such that in
each image column, all pixels belonging to $r_i$ are above all pixels
corresponding to $r_j$ for $i<j$. Second, we assume that adjacent
boundaries are roughly parallel, which is reasonable since the amount of
snow or ice that falls in any given year is roughly consistent across
local spatial locations. Finally, we assume that the thickness of
different layers is roughly the same at any given spatial location,
which is reasonable since the amount of snow or ice is similar across
different years. These are all rough, weak assumptions, and our model
is able to handle the significant deviations from them that occur in
real radar data.
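The first (non-crossing) constraint can be stated as a simple check on the boundary matrix (a sketch using our own representation of boundaries as per-column row indices, ordered top to bottom):

```python
import numpy as np

def is_tiered(M):
    # M: (K, W) matrix of boundary row indices, ordered top to bottom.
    # Boundaries may touch but must never cross in any column.
    M = np.asarray(M)
    return bool(np.all(np.diff(M, axis=0) >= 0))
```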
We address this problem using two main steps, following the intuition
that a human annotator might use: first do a rough
top-down segmentation of the image to incorporate global constraints
on the layer structure, and then use that rough segmentation to do a
bottom-up refinement of the layer boundaries. More specifically, we
first design a triple-task Convolutional Neural Network (CNN) model to
estimate the number of ice layers $\hat{N}$, the location of the top
layer $\hat{F}$ (encoded as a $W$-d vector indicating the row index
for each column of the image), and the average thickness of all layers
(the average vertical gap between boundaries) in the echogram.
The top boundary is
typically quite prominent since it is between air and ice, and
provides a strong prior on the shape of the much weaker boundaries
below. Second, we design a Recurrent Neural Network (RNN) to estimate
$\hat{GapM}$, an $N \times W$ matrix encoding the thickness (gap
between adjacent boundaries) for each layer at each column, based on
the estimates in the first step.
To generate the final segmentation, we combine $\hat{F}$ and $\hat{GapM}$ according to $\hat{N}$ to generate
output $\hat{M}$, an $(N+1) \times W$ matrix. Each element in
$\hat{M}$ indicates the row index for a given boundary at a given column in the
input $I$.
\subsection{Triple Task CNN (CNN3B)}
Our first step applies a three-branch Convolutional Neural Network
(CNN) to roughly estimate the surface boundary location, the number of
layer boundaries, and the average thickness of each internal layer across the
echogram.
Fig.~\ref{fig:model} shows our CNN architecture, which was
inspired by Xu~\etal~\cite{xu2018multi} but with significant
modifications. Our model takes a 2D image $I$ as input. Then we use
three shared convolutional blocks, each of which is followed by max
pooling operations. The shared convolutional blocks are used to
extract low-level features for the next three branches, because
similar evidence is useful for estimating the first layer, the number
of layers, and the average thickness.
The model then divides into three branches.
The first branch estimates the position of the surface layer, and
uses six convolutional layers for
modeling features specific to the first layer and one fully connected
layer to generate outputs
$\hat{F}=\{ \hat{f_1},\hat{f_2},\cdots,\hat{f_W} \}$. Each element
represents the row coordinate of the first layer within that
column. The ground truth vector $F = \{ f_1,f_2, \cdots,f_W \}$ is generated
from the top boundary of the human-labeled ground truth $M_N =
\{ m_{N,1},m_{N,2}, \cdots,m_{N,W} \}$. The loss function for estimating the first layer uses an L1 Manhattan distance to
encourage the model's output to agree with
the human-labeled ground truth,
\begin{equation}
L_{fl}=\frac{1}{W} \sum_{w=1}^{W}\left|\hat{f}_{w}-f_{w}\right|.
\end{equation}
The second branch predicts the number of ice layer boundaries, and
includes six convolutional layers and three fully connected layers.
We view this as a classification problem, so this
branch produces a vector $v$ which is a probability distribution over a
discrete set of possible numbers of boundaries. In our experiments, we assume $N < 30$, so $v$ is 31-dimensional.
The ground truth
is the number of labeled boundaries $N$ in the human-annotated ground truth of the image. Cross-entropy loss
is used during training,
\begin{equation}
L_{number}=-\log \left(\frac{\exp (v[N])}{\sum_{j} \exp (v[j])}\right).
\end{equation}
The third branch roughly estimates the average thickness (gap) of all the layers in the echogram,
and follows the same general design as the first branch but with a
single scalar output from the final fully connected layer. The loss
is the absolute difference between the output $\hat{\Delta}$
and the ground truth $\Delta$,
\begin{equation}
L_{\Delta}=\left|\hat{\Delta}-\Delta\right|.
\end{equation}
Finally, our CNN loss function combines the three branches,
\begin{equation}
L= L_{fl} + L_{number} + L_{\Delta}.
\end{equation}
We use VGG16~\cite{simonyan2014very}
as the network backbone.
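The three loss terms above can be sketched in plain Python to make the objective concrete (hypothetical helper names; in practice these would be computed on tensors inside the training loop):

```python
import math

def l1_loss(pred, target):
    # L_fl: mean absolute (Manhattan) distance over the W columns.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def cross_entropy(logits, label):
    # L_number: -log softmax(logits)[label], computed stably.
    z = max(logits)
    log_sum = z + math.log(sum(math.exp(v - z) for v in logits))
    return log_sum - logits[label]

def cnn3b_loss(f_hat, f, logits, n, delta_hat, delta):
    # Combined objective L = L_fl + L_number + L_Delta.
    return l1_loss(f_hat, f) + cross_entropy(logits, n) + abs(delta_hat - delta)
```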
\subsection{Multiple Gap RNN}
Having roughly estimated the global structure of the echogram in the last section,
we next use
an RNN
to generate a more accurate gap (thickness) value for each layer in the echogram.
We use Gated Recurrent
Units (GRUs)~\cite{chung2015gated}, which require less computational cost and
are easier to train than LSTMs~\cite{hochreiter1997long}.
As shown in Fig.~\ref{fig:model}, our model has one hidden layer, wherein
each GRU cell takes feature map $Avg_F$
generated before the fully connected layer of our Triple Task CNN
model's third branch and the output of the previous GRU cell as
inputs, and produces $W$ real-valued numbers indicating the predicted
gap between layer boundaries within each column of the data. $Avg_F$ is
projected to the size of the GRU hidden state with a fully connected
layer before GRU takes it as input. During training, the GRU cell
is operated for $N$ iterations, where each iteration $n$ estimates the
gap between layer $n+1$ and layer $n$. In a given iteration $n$,
the GRU cell takes the projected $Avg_F$ as input. The GRU cell
outputs a sequence of hidden states $\{ h_1,h_2, \cdots,h_N\}$ with
iteration $n \in [1,N]$, and each hidden state $h_n$ is followed by a
fully-connected layer to predict gap value $\hat{GapM_n}$. We use L1
Manhattan distance to supervise the model to predict $\hat{GapM}$
according to the human-labeled ground truth $GapM$,
\begin{equation}
L_{GapM}=\frac{1}{N}\frac{1}{W}\sum_{n=1}^{N}\sum_{w=1}^{W}
\left|\hat{GapM_{n,w}}-GapM_{n,w}\right|.
\end{equation}
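The term $L_{GapM}$ is simply a mean absolute error over the $N \times W$ thickness matrix; as a sketch (hypothetical names):

```python
def gap_loss(gap_hat, gap):
    """Mean absolute error over the N x W thickness matrix (L_GapM)."""
    n, w = len(gap), len(gap[0])
    total = sum(abs(gap_hat[i][j] - gap[i][j])
                for i in range(n) for j in range(w))
    return total / (n * w)
```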
\vspace{-15pt}
\subsection{Combination}
We combine our Triple Task CNN and Multiple Gap RNN to predict the
number of internal ice layer boundaries and their positions in the input image
$I$. The RNN uses general features as shown in Fig.~\ref{fig:model} to
initialize the GRU's hidden state and takes an average feature map
$Avg_F$ as input. Based on the first layer output and the number of
layers from the Triple Task CNN, our model generates the first boundary
$M_0$ ($W$-d vector) in our result $\hat{M}$. We then apply the layer gap output
$GapM$ predicted by our multiple Gap RNN according to the first layer result,
\begin{equation}
\begin{array}{c}
M_{i}=M_{i-1}+GapM_{i}, \quad i \in \{1, \ldots, N\},
\end{array}
\end{equation}
where $N$ is the number of layer boundaries.
In addition, $M_{i}$ and $GapM_{i}$ are both
$W$-d vectors. For each image, we compute all $M_i$'s to create
$\hat{M}$, an $(N+1) \times W$ matrix, and compare it with the ground
truth $M$ to evaluate our model.
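In code, this combination step is a per-column cumulative sum of the predicted gaps starting from the surface boundary; a minimal sketch (hypothetical names):

```python
def assemble_boundaries(first, gaps):
    """Combine the CNN's surface estimate (W-d vector `first`) with the
    RNN's N x W gap matrix `gaps` into the (N+1) x W output:
    M[0] = first, M[i] = M[i-1] + gaps[i-1]."""
    M = [list(first)]
    for row in gaps:
        M.append([m + g for m, g in zip(M[-1], row)])
    return M

# Surface at rows [2, 2, 3]; two layers of thickness 3-5 pixels.
M = assemble_boundaries([2, 2, 3], [[3, 4, 3], [5, 5, 4]])
```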
\section{Related Work}
Crandall~\etal~\cite{crandall2012layer} detected two specific
types of layer boundaries (ice-air and ice-bed) in echograms using
discrete energy minimization with a pretrained template model and a
smoothness prior. Lee~\etal~\cite{lee2014estimating} proposed
a more accurate method using Gibbs sampling from a
joint distribution over all candidate layers, while Carrer and
Bruzzone~\cite{carrer2017automatic} further reduced the computational
cost with a divide-and-conquer strategy.
Xu~\etal~\cite{xu2017automatic} extended the work to estimate 3D ice surfaces
using a Markov Random Field (MRF), and
Berger~\etal~\cite{berger2018automated} followed up with better cost functions
that incorporate domain-specific priors.
More recent work has applied deep learning. Kamangir~\etal~\cite{kamangir2018deep}
detected ice boundaries using convolutional
neural networks applied to wavelet features. Xu~\etal~\cite{xu2018multi} proposed a
multi-task spatiotemporal neural network to reconstruct 3D ice
surfaces from sequences of tomographic images. However, all of this
work focuses on detecting a small, known number of layer boundaries
(typically two) and thus is not appropriate for internal layers, because
the number of visible internal layers varies and may be quite large.
Very recent work, contemporaneous to ours, has considered the internal layer
detection problem.
Varshney~\etal~\cite{varshney2020deep}
treat the problem as semantic segmentation, while
Yari~\etal~\cite{yari2020multi} classify pixels into layer boundaries or not,
which is a binary classification problem. Those papers require
post-processing steps either to smooth the inconsistent labels between
layers or to specify the layer indices.
Our problem can be thought of as a more general version of
the tiered segmentation problem~\cite{tiered} proposed by Felzenszwalb
and Veksler, who presented an algorithm based on dynamic programming.
However, their solution required the number of
tiers (labels) to be fixed ahead of time to a small number (3) because
their inference was exact and exponential in the number of labels,
and used hand-crafted features.
In this paper, we propose a new approach to a more general version of the
tiered segmentation problem, in which the number of labels can be large
and unknown. Our technique combines convolutional
and recurrent neural networks for counting and detecting
an arbitrary number of layer boundaries.
\section{Introduction and Background}
Nature does things in an incredible way. Behind the visible phenomena, there are sometimes innumerable invisible causes. Scientists have been observing nature for hundreds of years, trying to understand, explain, adapt to, and reproduce artificial systems based on it. Countless living and non-living agents act in parallel, and sometimes against each other, to shape nature and regulate its harmony. This is considered the dialectic of nature, which resides in the concept of the evolution of the natural world. The evolution of complexity in nature follows a distinctive order, and nature processes information in a distributed, self-organized, and optimal way, without any central control. The whole series of forms (mechanical, physical, chemical, biological, and social) is arranged according to complexity, from lowest to highest. This sequence expresses mutual dependence and relation in terms of structure and history, and the associated activities change with changing circumstances. All of these phenomena, known or partially known so far, are emerging as new areas of study in science and technology. Computer science helps to study nature-based problem-solving techniques and the underlying principles and mechanisms of natural, physical, chemical, and biological organisms, which perform complex tasks appropriately with limited resources and capabilities \cite{Siddique2015}
Science is a bridge between scientists and nature; it has evolved over the centuries by enriching itself with new concepts, methods, and tools, and has developed into well-defined disciplines of scientific activity. Since then, humanity has been trying to understand nature by developing new tools and techniques. The field of nature-inspired computing (NIC) is interdisciplinary, combining computer science with knowledge from different branches of science, mathematics, and engineering, which allows the development of new computational tools such as algorithms, hardware, or software to solve problems. This chapter discusses the limitations of current technology, gives an overview of the existing classification of Nature Inspired Algorithms (NIA), introduces a new End Goal based classification, and provides framework examples to illustrate its use.
\begin{table}[h]
\centering
\begin{tabular}{p{3cm} p{9cm}}
\specialrule{.2em}{.1em}{.1em}
Acronym & Full Name\\
\specialrule{.1em}{.05em}{.05em}
NIA & Nature Inspired Algorithm\\ \hline
\multirow{2}{*}{TRIZ} & Russian : Teoriya Resheniya Izobretatelskikh Zadatch\\
& English : Theory of inventive problem solving\\\hline
AI & Artificial Intelligence\\\hline
ML & Machine Learning\\\hline
DL & Deep Learning\\\hline
FOA & Fruit Fly Optimization\\\hline
FOA-MHW & Fruit Fly Optimisation Algorithm - Multiplicative Holt-Winters\\\hline
BA & Bat Algorithm\\\hline
LSSVM & Least Square Support Vector Regression Model\\\hline
MARS & Multivariate Adaptive Regression Splines\\\hline
DP & Dynamic Programming \\\hline
GA & Genetic Algorithm\\\hline
TSP & Traveling Salesman Problem \\\hline
ACO & Ant Colony Optimization \\
\specialrule{.2em}{.1em}{.1em}
\end{tabular}
\caption{Acronyms used in chapter}
\end{table}
\section{Motivation behind NIA exploration}
In the first two decades of the 21st century, access to large amounts of data (known as "big data"), faster computers, and advanced machine learning techniques were successfully applied to many problems for commercial benefit. AI/ML algorithms and their applications are pervasive today; they solve many specific problems and make life easier. However, data scientists' favourite algorithms, and many other technologies from various engineering branches, have their own limitations, which we discuss in the following subsections.
\subsection{Prevailing Issues with technology}
\subsubsection{Data dependencies}
Data is the essence of any AI/ML algorithm seeking reasonable accuracy, and today's algorithms are highly data dependent. However, issues like the cost of acquiring, processing, maintaining, and storing data in a compliant way often make it difficult to obtain a sufficient amount. The cost of data is one of the biggest investments for any organization that wants to leverage AI/ML. In the absence of data, the accuracy of AI/ML algorithms suffers, rendering them unfit for use \cite{redman_2018}. The key question is: can we develop algorithms and alternatives that are not highly dependent on data, achieving a "less data" approach?
We need to be aware of the limitations of AI and where humans still need to take the lead. Data and algorithms cannot solve all types of problems. For a specific set of problems, the available algorithms fail to perform adequately despite the huge amount of available data \cite{lando_2018}.
For a long time, Facebook believed that problems like the spread of misinformation and hate speech could be algorithmically identified and stopped. But under recent pressure from legislators, the company quickly pledged to replace its algorithms with an army of over 10,000 human reviewers. The medical profession has also recognised that AI cannot be considered a solution for all problems. The IBM Watson for Oncology program was a piece of AI that was meant to help doctors treat cancer. Even though it was developed to deliver the best recommendations, human experts found it difficult to trust the machine. As a result, the AI program was abandoned in most hospitals where it was trialled.
These examples demonstrate that there is no AI solution for everything. Not every problem is best addressed by applying machine intelligence to it. \cite{VYACHESLAV_2018}
\subsubsection{Demand of higher software complexity}
Increasing demands on software also lead to an increase in complexity, which grows at a rapid rate as demand for intuitive and feature-rich solutions grows. Traditional methods of software programming have already proven inadequate to manage this complexity \cite{braude2016software}. The question is: can we develop software with reduced complexity that is still rich in features?
\subsubsection{NP-Hard problems}
There are sets of intractable (NP-hard) problems that are not solvable in practice due to the computational complexity of the known algorithms, for example the Traveling Salesman Problem (TSP), which is discussed in detail in the following sections. The question is: can we solve NP-hard problems with the compute power available today? \cite{vzerovnik2015heuristics} \cite{woeginger2003exact}
\subsubsection{Energy Consumption}
Though DL algorithms are claimed to mimic human brains, they still fall short in terms of efficiency and energy consumption; there is a long way to go to mimic the human brain perfectly.
Despite the availability of data, algorithms, and computational power, some problems cannot be solved because of the sheer amount of energy required, and the cost-benefit analysis produces unfavorable results due to the impact of energy usage on the environment \cite{hao_2019}. The key question is: can we solve problems in an energy-efficient way that matches the human brain's energy-efficiency frontier?
The problems discussed may seem small today; however, they are growing drastically. They indicate that the growth of AI/ML based algorithms and applications is not sustainable from a technical, business, and environmental point of view. Therefore, to tackle them, alternative solutions need to be explored. Scientists believe NIA is the most suitable alternative.
\bigbreak
\subsection{Nature Inspired Algorithm at rescue}
Nature faces varied problems, and it has found the best ways to solve them using constrained optimization over time. The species on earth perform various forms of optimization throughout their lives in their respective environments, and their survival is proof that the evolved optimizations are among the best possible solutions. Nature has its way of transferring a minimum of intelligence from one generation to the next through genes; later on, life forms acquire a higher amount of intelligence through their experience interacting with the environment. In the entire process of acquiring intelligence, the usage of data is minimal, yet the decisions derived from that intelligence are at least good enough. Life forms are capable of managing the complexity of the real world, and their decisions are reached in the most efficient way, as their survival depends on it.
As discussed, today's digitally powered world faces significant complex problems due to temporal and spatial complexity, variability, and constrained environments. The similarity between the problems faced by nature and by the digital world is striking, and hence the proposition is to take over nature's solutions and apply them to digital problems. A solution formalized by studying nature is referred to as a Nature Inspired Algorithm (NIA) \cite{yang2010nature}. NIAs are meta-heuristic algorithms that provide approximate answers. They are designed to optimize numerical benchmark functions and multi-objective functions and to solve NP-hard problems with large numbers of variables and dimensions.
The growth of current technologies is bound to diminish and will pave the way for new technologies to emerge. We believe that NIA is going to be the next disruptive technology to address the problems faced by current technologies and provide answers to key questions asked above.
\section{Novel TRIZ + NIA approach}
\subsection{Traditional Classification}
There exists a classification for NIAs that is solution based: it focuses on the techniques used by the algorithms. According to this traditional classification, algorithms are grouped into the following classes \cite{Siddique2015, nanda2014survey, binitha2012survey, fister2013brief}, as depicted in Figure \ref{fig:tradition_classificationl}.
\begin{itemize}
\item Swarm Intelligence
\item Evolutionary Algorithms
\item Bio-inspired Algorithms
\item Physics-based Algorithm
\item Other Nature Inspired Algorithms\\
\end{itemize}
\begin{figure}
\centering
\centerline{\includegraphics[width=\textwidth]{Traditional_Classification.PNG}}
\caption{Categories of Traditional Classification of NIAs}
\label{fig:tradition_classificationl}
\end{figure}
\subsubsection{Swarm Intelligence}
Swarm Intelligence (SI) is an approach in which many simple agents work in parallel to achieve a task: it is the collective behavior of decentralized, self-organized entities. The same idea was adopted in Artificial Intelligence in its early days; it was first presented by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems \cite{beni1993swarm}.
SI systems typically consist of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow simple rules, and although there is no centralized control structure dictating the behaviour of individual agents, their interactions lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Examples of swarm intelligence in natural systems include ant colonies, bird flocking, hawks hunting, animal herding, bacterial growth, fish schooling, and microbial intelligence.
The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms. 'Swarm prediction' has been used to solve forecasting problems. Approaches similar to those proposed for swarm robotics are considered for genetically modified organisms in synthetic collective intelligence \cite{sole2016synthetic}. The Animal Migration Algorithm, Ant Colony Optimization, Artificial Fish Swarm Optimization, and the Fruit Fly Optimization Algorithm are examples of Swarm Intelligence.
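As a concrete illustration of these ideas, a minimal particle swarm optimizer is sketched below (a generic textbook formulation, not taken from any cited work; all parameter values are illustrative):

```python
import random

def pso(f, dim=1, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle is pulled toward
    its own best position (pbest) and the swarm's best (gbest)."""
    rng = random.Random(0)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimize the 1-D quadratic f(x) = x^2; the swarm converges near 0.
best = pso(lambda x: x[0] ** 2)
```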
\subsubsection{Evolutionary Algorithms}
In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation \cite{vikhar2016evolutionary}, representing a generic population-based meta-heuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. In an optimization problem, candidate solutions play the role of individuals in a population, and a fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place through repeated application of the fitness function over generations.
Evolutionary algorithms often perform well in approximating solutions to all types of problems because they ideally make no assumptions about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of micro-evolutionary processes and planning models based on cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor \cite{cohoon2003evolutionary}; this complexity is due to fitness function evaluation, and fitness approximation is one way to overcome the difficulty. However, seemingly simple EAs can often solve complex problems \cite{cohoon2003evolutionary}, such as the knapsack problem explained below; therefore, there may be no direct link between algorithm complexity and problem complexity. The Bacterial Evolutionary Algorithm and the Genetic Algorithm are well-known evolutionary algorithms.
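The knapsack problem mentioned above makes a convenient toy example; the sketch below is an illustrative genetic algorithm with assumed parameter values, not an implementation from the cited works:

```python
import random

def ga_knapsack(values, weights, capacity, pop_size=30, gens=60, seed=1):
    """Toy genetic algorithm for 0/1 knapsack: bit-string individuals,
    tournament selection, one-point crossover, bit-flip mutation.
    Fitness is total value, or 0 if the weight limit is exceeded."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(ind):
        total_w = sum(wi for wi, b in zip(weights, ind) if b)
        if total_w > capacity:
            return 0
        return sum(vi for vi, b in zip(values, ind) if b)

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(popn, 3), key=fitness)   # tournament selection
            p2 = max(rng.sample(popn, 3), key=fitness)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                       # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        popn = nxt
    best = max(popn, key=fitness)
    return best, fitness(best)

# Three items: values 60/100/120, weights 10/20/30, capacity 50
# (the optimum picks the last two items, value 220).
best, value = ga_knapsack([60, 100, 120], [10, 20, 30], 50)
```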
\subsubsection{Bio-inspired Algorithms}
Bio-inspired computing, short for biologically inspired computing, is a field of study that seeks to solve computer science problems using models of biology. It relates to social behavior and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning, and it is a major subset of natural computation. In simpler words, bio-inspired algorithms imitate certain biological systems of the animal body. The Artificial Immune System and the Dendritic Cell Algorithm are examples of bio-inspired algorithms.
\subsubsection{Physics-based Algorithm}
Physics-inspired algorithms employ basic principles of physics, for example, Newton’s laws of gravitation, laws of motion and Coulomb’s force law of electrical charge. They are all based on deterministic physical principles. Black Hole Algorithm, Artificial Chemical Process Optimization Algorithm, Central force Optimization Algorithm, Gravitational Search Algorithm, etc. fall under this category.
\subsubsection{Other Nature Inspired Algorithms}
The algorithms that do not fit directly into the above classification are put into this category. The Artificial Algae Algorithm, Bat Algorithm, Coral Reef Optimization, Cuckoo Search, Firefly Algorithm, and Flower Pollination Algorithm are popular examples of Other Nature Inspired Algorithms.
\subsection{Limitation of traditional classification}
The drawback of the traditional classification is that it does not help map real-life problems to conceptual problems. For an application, algorithm selection is done by brute force; this classification does not make it easier to select an algorithm. Hence the solution-based approach is not ideal for problem mapping.
The drawbacks of the traditional classification are the following:
\begin{itemize}
\item It is a solution-based approach: the classification focuses on how nature solves an issue, not on what nature actually wants to achieve.
\item It is not helpful for mapping real-life problems to conceptual problems, as it does not factor in nature's problem.
\end{itemize}
\subsection{Combined approach NIA + TRIZ}
Traditional classifications and approaches lack a systematic method and framework for mapping real-world problems to NIAs; they are mainly brute force in nature. Hence a novel approach is proposed to map real-world problems to NIAs systematically using the TRIZ methodology \cite{Palak2019mapping}. TRIZ principles are used along with the new classification approach.
\subsubsection{TRIZ}
TRIZ is the Russian acronym for the "Theory of Inventive Problem Solving" \cite{altshuller1999innovation}. TRIZ presents a systematic approach for understanding and defining challenging problems, and it is most useful in roles such as product development, design engineering, and process management. TRIZ includes a practical methodology, tool sets, a knowledge base, and model-based technology for generating innovative solutions to problems.
It is useful for problem formulation, system analysis, failure analysis, and identifying patterns of system evolution. TRIZ was developed on three primary findings:
\begin{enumerate}
\item Problems and solutions are repeated across industries and sciences.
\item Patterns of technical evolution are also repeated across industries and sciences.
\item Innovations generally use solutions and scientific findings from outside of the focused field.\\
\end{enumerate}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.50]{Triz.png}}
\caption{TRIZ Problem-Solution Approach \cite{nakagawa2005new}}
\label{Figure 4.1}
\end{figure}
The Prism of TRIZ, as depicted in Figure \ref{Figure 4.1}, represents a four-step approach to problem solving. The real-world problem is mapped to a conceptual problem using abstraction. The database of conceptual problems and corresponding solutions (the 40 TRIZ Principles) is then used to find an analogous solution to the conceptual problem. The conceptual solution is then converted into a real-world solution. In simple words, the TRIZ prism suggests drawing on already-solved problems to solve new ones. Here, NIAs are used as nature's solved solutions to its own problems.
\subsubsection{NIA + TRIZ}
The authors envision that the TRIZ prism is the most suitable methodology to map real-world problems to nature's problems and then provide corresponding solutions from NIA. A novel methodology, depicted in Figure \ref{fig:Triz_Nia}, combines TRIZ with the end goal based classification. According to the TRIZ foundation, if problems and solutions are repeated across industries and sciences, then existing pairs of problems and solutions can be reused; here, those pairs are inspired by nature. For this approach to work, however, a new classification based on problems is needed, and for this reason we introduce the novel classification below. The modified four-step TRIZ process is as follows.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.4]{NIA+TRIZ.PNG}}
\caption{NIA+TRIZ Approach\cite{nakagawa2005new}}
\label{fig:Triz_Nia}
\end{figure}
\begin{enumerate}
\item The real-world problem is mapped to conceptual problem using abstraction.
\item The conceptual problem is then mapped to end goal of NIA.
\item From an available database of NIA problem and solution; analogous NIA is derived into conceptual solution.
\item The conceptual solution then is converted to a real-world solution.
\end{enumerate}
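Step 3 amounts to a lookup from an end-goal path into a database of NIA problem-solution pairs. A minimal sketch in Python (the index below is hypothetical and lists only a few entries drawn loosely from Tables \ref{Table(a)} and \ref{Table(b)}):

```python
# Hypothetical end-goal index: (goal, sub-goal, behavior) -> candidate NIAs.
NIA_INDEX = {
    ("Resource Seeking", "Food Seeking", "Herd Behavior"): [
        "Ant Colony Optimization",
        "Fruit Fly Optimization Algorithm",
        "Particle Swarm Optimization",
    ],
    ("Reproduction", "Evolution"): [
        "Genetic Algorithm",
        "Differential Evolution",
    ],
}

def suggest_nia(goal_path):
    """Return candidate algorithms whose end-goal path matches exactly."""
    return NIA_INDEX.get(tuple(goal_path), [])
```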
\subsection{End goal based Classification}
\begin{figure}[ht]\vspace*{4pt}
\centerline{\includegraphics[scale=0.65]{NIA_classification.png}}
\caption{End goal based classification of NIA}
\label{fig:NIA_class}
\end{figure}
End goal based classification mainly focuses on the problems nature has solved. It also considers the goal nature wants to achieve by solving the problem. The classification is four levels deep and varies based on goals and sub-goals, as depicted in Figure \ref{fig:NIA_class}. In total, 75 NIAs are classified using this approach, each appearing at one of the leaf nodes. Figure \ref{fig:NIA_class} shows the classification diagram, and Tables \ref{Table(a)} and \ref{Table(b)} explain the respective levels of classification.
\begin{enumerate}[label=Level\arabic*:,start=1]
\item Biology and non biology based to distinguish living from non living
\item Based on the primary goal
\item Based on the sub goal
\item Based on the behavior
\end{enumerate}
The detailed mapping of leaf nodes for NIAs is given in Table \ref{Table(a)} for biology based algorithms and in Table \ref{Table(b)} for non-biology based ones. For non-biology based algorithms, the classification is based on primary goals only, as sub-goals and behavior have no real meaning there.
\begin{table}[]
\caption{Biology based Algorithms}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Level:2 Primary Goal} & \textbf{Level:3 Sub Goal} & \textbf{Level:4 Behavior} & \textbf{NIA} \\ \hline
\multirow{22}{*}{Resource Seeking} & \multirow{19}{*}{Food Seeking} & \multirow{6}{*}{Hunting} & Antlion Optimizer \\ \cline{4-4}
& & & Bat algorithm \\ \cline{4-4}
& & & Grey wolf optimizer \\ \cline{4-4}
& & & Lion Optimization Algorithm \\ \cline{4-4}
& & & Salp swarm algorithm \\ \cline{4-4}
& & & Whale optimization algorithm \\ \cline{3-4}
& & \multirow{2}{*}{Migration} & Animal Migration Optimization \\ \cline{4-4}
 & & & Artificial Algae Algorithm (AAA) \\ \cline{3-4}
& & \multirow{11}{*}{Herd Behavior} & Ant Colony optimization \\ \cline{4-4}
& & & Artificial Bee Colony Algorithm \\ \cline{4-4}
& & & Artificial Fish swarm optimization \\ \cline{4-4}
 & & & Chicken swarm optimization \\ \cline{4-4}
& & & Dragonfly Algorithm \\ \cline{4-4}
& & & Fruit fly optimization algorithm \\ \cline{4-4}
 & & & Grasshopper optimization \\ \cline{4-4}
 & & & Krill herd algorithm \\ \cline{4-4}
& & & Locust search algorithm \\ \cline{4-4}
& & & Particle swarm optimization algorithm \\ \cline{4-4}
& & & Strawberry algorithm \\ \cline{2-4}
& \multirow{3}{*}{Habitat Seeking} & \multirow{3}{*}{Herd Behavior} & Monarch butterfly optimization \\ \cline{4-4}
& & & Moth flame optimization algorithm \\ \cline{4-4}
& & & Sperm swarm optimization \\ \cline{1-4}
\multirow{6}{*}{Survival} & \multicolumn{2}{l|}{\multirow{3}{*}{Self}} & Artificial Immune system \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Dendritic Cell Algorithm \\ \cline{4-4}
 & \multicolumn{2}{l|}{} & Grasshopper optimization \\ \cline{2-4}
& \multicolumn{2}{l|}{\multirow{2}{*}{Offspring}} & Cuckoo Search \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Emperor penguins colony \\ \cline{2-4}
 & \multicolumn{2}{l|}{Dependent} & Tree physiology optimization \\ \cline{1-4}
\multirow{17}{*}{Reproduction} & \multicolumn{2}{l|}{\multirow{4}{*}{Mating Searching}} & Elephant herd optimization \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Firefly Algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Honey bee mating optimization \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Social spider optimization \\ \cline{2-4}
& \multicolumn{2}{l|}{\multirow{11}{*}{Evolution}} & Artificial Eco System with Species \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Bacterial Evolutionary Algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Bird mating optimizer \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Bull optimization algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Bumble bees mating algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Coral reefs optimization \\ \cline{4-4}
 & \multicolumn{2}{l|}{} & Differential evolution \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Evolutionary Programming \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Evolution Strategies \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Genetic Algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Memetic algorithm \\ \cline{2-4}
 & \multicolumn{2}{l|}{\multirow{2}{*}{Pollination}} & Flower Pollination Algorithm \\ \cline{4-4}
& \multicolumn{2}{l|}{} & Forest optimization algorithm \\ \hline
\end{tabular}
\label{Table(a)}
\end{table}
\begin{table}
\caption{Non-Biology based Algorithms}
\begin{tabular}{|p{0.3\linewidth}|p{0.65\linewidth}|}
\hline
\textbf{Level 2: Primary Goal} & \textbf{NIA} \\ \hline
\multirow{3}{*}{Gravity} & Black Hole Algorithm \\ \cline{2-2}
& Central force optimization \\ \cline{2-2}
& Gravitation search algorithm \\ \hline
\multirow{2}{*}{Entropy Reduction} & Artificial Chemical Process Optimization Algorithm \\ \cline{2-2}
& Intelligent Water drop algorithm \\ \hline
\multirow{3}{*}{Law of Equilibrium} & Harmony search \\ \cline{2-2}
& Water wave optimization \\ \cline{2-2}
 & Wind driven optimization \\ \hline
\end{tabular}
\label{Table(b)}
\end{table}
\section{Examples to support TRIZ+NIA approach}
\subsection{Fruit Fly Optimization Algorithm to predict monthly electricity consumption}
Jiang et al.~\cite{jiang2019monthly} demonstrated the use of the fruit fly optimization algorithm to improve prediction of monthly electricity consumption with a limited amount of training data. The proposed solution uses a hybrid forecasting model named FOA-MHW (Fruit Fly Optimization Algorithm -- Multiplicative Holt-Winters). Holt-Winters is an exponential smoothing algorithm for forecasting time series data, and the parameters of the exponential smoothing are generated using FOA.
The real-world problem in MHW is to find optimal smoothing parameters with a minimum amount of data. This parameter search is mapped to the conceptual problem of food seeking, which falls under resource seeking (Resource seeking -> Food-seeking problem -> Herd Behaviour), as shown in Figure \ref{fig:NIA_class}. For food seeking in a herd, the fruit fly is among the superior species, as it combines acute smell sensing with intelligent communication within the swarm. Hence the corresponding solution, fruit fly optimization, is suitable for the identified problem; for the stated problem, FOA was found most suitable and outperformed traditional algorithms.
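The smell-and-vision search loop of FOA can be sketched in a few lines of Python. This is an illustrative toy, not the FOA-MHW implementation from the cited paper; the quadratic objective stands in for the Holt-Winters parameter-fitting error.

```python
import numpy as np

def foa_minimize(fitness, dim, n_flies=20, iters=100, step=1.0, seed=0):
    """Minimal fruit-fly-optimization sketch (illustrative only; not the
    FOA-MHW implementation from the cited paper)."""
    rng = np.random.default_rng(seed)
    centre = rng.uniform(-1, 1, size=dim)        # swarm location
    best_x, best_f = centre.copy(), fitness(centre)
    for _ in range(iters):
        # smell-based search: each fly takes a random step around the swarm
        flies = centre + rng.uniform(-step, step, size=(n_flies, dim))
        vals = np.array([fitness(f) for f in flies])
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            # vision-based search: the swarm flies to the best-smelling spot
            best_x, best_f = flies[i].copy(), float(vals[i])
            centre = flies[i].copy()
    return best_x, best_f

# toy stand-in for the smoothing-parameter search: minimize a quadratic
x_best, f_best = foa_minimize(lambda v: float(np.sum((v - 0.3) ** 2)), dim=2)
```

In the FOA-MHW setting, `fitness` would instead score a candidate parameter vector by the forecasting error of the resulting Holt-Winters model.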
\begin{figure}[h]
\centerline{\includegraphics[scale=0.3]{Fruit-fly-optimization-algorithm-FOA.png}}
\caption{Diagram for Fruit fly optimization algorithm FOA}
\label{fig:FOA}
\end{figure}
\subsection{Bat Algorithm to model River Dissolved Oxygen Concentration}
Yaseen et al.~\cite{yaseen2018integration} use the Bat Algorithm (BA) to model river dissolved oxygen concentration. Here the NIA is integrated with a Least Squares Support Vector Machine (LSSVM) regression model. Compared with the M5 tree and MARS models, the accuracy of the LSSVM-BA model is found to increase by 20\% and 42\%, respectively, in terms of root-mean-square error.
Studies have reported that the efficiency of LSSVM models depends significantly on the values of the kernel and regularization parameters. These hyper-parameters can be treated as decision variables and should be determined accurately by optimization algorithms for LSSVM models to perform well. In this study, the hyper-parameters of the LSSVM were optimized using BA, which mimics the hunting behaviour of bats. Bat hunting falls under resource seeking, in the food-seeking category (Resource seeking -> Food-seeking problem -> Hunting), as shown in Figure \ref{fig:NIA_class}; the parameter optimization problem can therefore be mapped to a food-seeking problem.
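The echolocation loop of BA can be sketched as follows. This is an illustrative toy, not the LSSVM-BA model from the cited study; the quadratic objective stands in for the LSSVM validation error over the kernel and regularization parameters.

```python
import numpy as np

def bat_minimize(fitness, dim, n_bats=20, iters=100, f_min=0.0, f_max=2.0,
                 loudness=0.5, pulse=0.5, seed=0):
    """Minimal bat-algorithm sketch (illustrative only; not the LSSVM-BA
    model from the cited study)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_bats, dim))        # bat positions
    v = np.zeros((n_bats, dim))                  # bat velocities
    fit = np.array([fitness(b) for b in x])
    best = x[int(np.argmin(fit))].copy()
    best_f = float(np.min(fit))
    for _ in range(iters):
        for k in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()  # echolocation frequency
            v[k] += (x[k] - best) * freq
            cand = x[k] + v[k]
            if rng.random() > pulse:
                # local random walk around the current best solution
                cand = best + 0.01 * rng.normal(size=dim)
            f_new = fitness(cand)
            if f_new <= fit[k] and rng.random() < loudness:
                x[k], fit[k] = cand, f_new
            if f_new <= best_f:
                best, best_f = cand.copy(), float(f_new)
    return best, best_f

# toy stand-in for the LSSVM hyper-parameter search: minimize a quadratic
params, err = bat_minimize(lambda z: float(np.sum((z - 0.5) ** 2)), dim=2)
```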
\begin{figure}[h]
\centerline{\includegraphics[scale=0.50]{Bat-Algorithm.jpg}}
\caption{Diagram for Bat Algorithm}
\label{fig:BA}
\end{figure}
\subsection{Genetic Algorithm to tune the structure and parameters of a neural network}
Frank et al.~\cite{leung2003tuning} discuss tuning the structure and parameters of a neural network using a Genetic Algorithm. Dating from 2003, this is one of the first papers to propose using an NIA to train a network.
Neural networks are proven universal approximators: a three-layer feed-forward neural network can approximate any nonlinear continuous function to arbitrary accuracy. However, a fixed structure may not provide optimal performance within a given training period. A small network may perform poorly owing to its limited information-processing power, while a large network may contain redundant connections and is costly to implement. It would be ideal if the algorithm could suggest the best structure for the neural network, lowering the cost of implementing it in terms of both hardware and processing time.
Here, parameters such as the number of neurons, the number of layers, the dense-layer activation function, and the network optimizer are encoded in a single array, which also represents the output solution; choosing the correct representation of a solution is very important in nature-inspired algorithms. During initialization, random values are assigned to these parameters (a prior can also be used instead of random values). Networks are trained using these parameters, and the difference between predicted and actual values serves as the fitness function. Parameter values are then updated according to the improved Genetic Algorithm. Figure \ref{fig:improved_GA} depicts the pseudo-algorithm along with the procedure.
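The array encoding can be sketched as follows. The search space below is a hypothetical example, not the one used in the cited paper, and the fitness evaluation (training the network and measuring prediction error) is omitted.

```python
import random

# hypothetical search space for the array encoding described above
SPACE = {
    "n_neurons":  [8, 16, 32, 64],
    "n_layers":   [1, 2, 3],
    "activation": ["relu", "tanh", "sigmoid"],
    "optimizer":  ["sgd", "adam", "rmsprop"],
}

def random_genome(rng):
    """One candidate network structure encoded as a flat array (chromosome)."""
    return [rng.choice(options) for options in SPACE.values()]

def mutate(genome, rng, p=0.25):
    """Resample each gene with probability p, as in a standard GA mutation."""
    return [rng.choice(options) if rng.random() < p else gene
            for gene, options in zip(genome, SPACE.values())]

rng = random.Random(0)
parent = random_genome(rng)   # a random structure, e.g. [32, 2, "tanh", "adam"]
child = mutate(parent, rng)
```

In a full GA run, each genome would be decoded into a network, trained, and scored; selection and crossover would then operate on these arrays exactly as on any other chromosome.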
\begin{figure}[h]
\centerline{\includegraphics[scale=0.85]{procedure_of_improved_GA.jpg}}
\caption{Procedure of improved Genetic Algorithm}
\label{fig:improved_GA}
\end{figure}
In terms of abstraction from the real-world problem to nature, the best structure is the one that survives among the rest: self-survival maps to survival of the best structure. That is why the GA, from the Biology based -> Survival -> Self category shown in Figure \ref{fig:NIA_class}, is chosen.
\section{Solution of NP-H using NIA}
\subsection{0-1 Knapsack Problem}
The knapsack problem belongs to the family of combinatorial optimization problems, and the 0-1 knapsack problem is one of its variants. Knapsack problems appear in real-world decision-making processes in a wide variety of fields; traditional applications include finding the least wasteful way to cut raw materials in construction and scoring tests in which the test-takers may choose which questions to answer~\cite{KPwiki}. The knapsack problem has been studied for more than a century, and computer scientists have long been fascinated by it because the decision form of the problem is NP-complete: no known algorithm is both correct and fast (polynomial-time) in all cases.
The knapsack problem is defined as follows: given a set of items $x_i$, each with a weight $w_i$ and a value $v_i$, determine the number of each item to include in a collection so that the total weight is at most a given capacity $W$ and the total value is as large as possible. In the 0-1 knapsack problem, every chosen item must be taken whole; a fraction of an item cannot be selected~\cite{KPwiki}.
The dynamic programming solution for the 0-1 knapsack problem runs in pseudo-polynomial time: $O(nW)$ time and $O(nW)$ space. Another algorithm for 0-1 knapsack, discovered in 1974 and sometimes called ``meet-in-the-middle'' due to parallels with a similarly named algorithm in cryptography, is exponential in the number of different items but may be preferable to the DP algorithm when the capacity $W$ is large compared to the number of items $n$; it takes $O(2^{n/2})$ space and $O(n2^{n/2})$ time. George Dantzig proposed a greedy approximation algorithm for the knapsack problem, but for the bounded problem, where the supply of each kind of item is limited, the algorithm cannot guarantee an optimal solution~\cite{KPwiki}. These results suggest that dynamic programming is the best of the traditional approaches to the knapsack problem.
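The $O(nW)$ dynamic program can be written in a few lines; the following is a generic textbook sketch, independent of any particular implementation used for the comparison.

```python
def knapsack_01(values, weights, W):
    """Classic O(nW) dynamic-programming solution to the 0-1 knapsack problem."""
    n = len(values)
    dp = [0] * (W + 1)  # dp[c] = best value achievable with capacity c
    for i in range(n):
        # iterate capacities downwards so each item is used at most once
        for c in range(W, weights[i] - 1, -1):
            dp[c] = max(dp[c], dp[c - weights[i]] + values[i])
    return dp[W]

# items (value, weight): (60,10), (100,20), (120,30); capacity 50 -> 220
best_value = knapsack_01([60, 100, 120], [10, 20, 30], 50)
```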
\begin{figure}[ht]
\centerline{\includegraphics[scale=0.9]{DPvsGA}}
\caption{Execution time comparison between Dynamic Programming and Genetic Algorithm}
\label{fig:GAvsDP}
\end{figure}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.42]{Flowchart_Knapsack_GA}}
\caption{Flow chart of Genetic Algorithm to solve 0-1 knapsack problem}
\label{fig:GA_KP}
\end{figure}
The knapsack problem is essentially a packing problem. Some operations improve over time: experience makes tasks easier to perform and decisions better, experience comes with the generations, and generations are part of evolution. So we can turn to Biology based algorithms -> Reproduction -> Evolution for the knapsack problem, among which the genetic algorithm is one of the best choices. Figure \ref{fig:GA_KP} shows a flow diagram for using a Genetic Algorithm to solve the 0-1 knapsack problem.
When this flow diagram is implemented, it shows better results than dynamic programming. Figure \ref{fig:GAvsDP} shows the execution-time comparison, where the problem size is the total number of items. The slope of the execution time for GA is smaller than that for DP, so in terms of execution time GA performs much better than DP.
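The GA loop for the 0-1 knapsack can be sketched in code. This is an illustrative toy with hypothetical population and mutation settings, not the benchmarked implementation: each candidate is a bitstring, infeasible candidates get zero fitness, and tournament selection, one-point crossover, and bit-flip mutation drive the search.

```python
import random

def ga_knapsack(values, weights, W, pop=40, gens=60, pmut=0.02, seed=1):
    """Minimal genetic-algorithm sketch for the 0-1 knapsack problem."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(bits):
        w = sum(wi for b, wi in zip(bits, weights) if b)
        v = sum(vi for b, vi in zip(bits, values) if b)
        return v if w <= W else 0  # infeasible solutions get zero fitness

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        def pick():  # binary tournament selection
            a, b = rng.sample(popn, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < pmut)   # bit-flip mutation
                     for bit in child]
            children.append(child)
        popn = children
    return max(fitness(ind) for ind in popn)

ga_value = ga_knapsack([60, 100, 120], [10, 20, 30], 50)
```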
\subsection{Travelling Salesman Problem}
The Travelling Salesman Problem (TSP) is also an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. In the theory of computational complexity, the decision version of the TSP (where, given a length L, the task is to decide whether the graph has a tour of length at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities. The TSP has several applications even in its purest formulation, such as planning and logistics.
The TSP is defined as follows: given a list of cities and the distances between each pair of cities, find the shortest possible route that visits each city and returns to the origin city.
The most direct solution would be a brute-force approach: testing all permutations (ordered combinations) takes running time within a polynomial factor of $O(n!)$, the factorial of the number of cities, so this solution becomes impractical even for only 20 cities. Another approach is branch-and-bound, which can handle TSP instances with 40--60 cities.
Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating ``good solutions'' to the TSP. NIAs are among the best meta-heuristic algorithms, and route finding is a matter of teamwork: simultaneous searching helps to reach the best solution, so herd behaviour can be helpful for this type of problem. The problem of concern has already been solved by nature: ants are excellent solvers of route optimization problems. Hence (Biology-based algorithms -> Resource seeking -> Herd Behaviour -> Ant Colony Optimization) ends our algorithm search.
Ant Colony Optimization Algorithm sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits.
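The pheromone-and-heuristic loop just described can be sketched as follows. This is a minimal illustration with hypothetical parameter values, not Dorigo's original ant system: next-city probabilities combine pheromone and inverse distance, pheromone evaporates each iteration, and the best tour deposits pheromone inversely proportional to its length.

```python
import math
import random

def aco_tsp(coords, ants=20, iters=50, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Minimal ant-colony-optimization sketch for the TSP (illustrative only)."""
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    tau = [[1.0] * n for _ in range(n)]            # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # probability ~ pheromone^alpha * (1/distance)^beta
                ws = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                      for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in ws), 0.0
                for j, w in ws:  # roulette-wheel selection of the next city
                    acc += w
                    if acc >= r:
                        nxt = j
                        break
                else:
                    nxt = ws[-1][0]
                tour.append(nxt)
                unvisited.discard(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        # evaporate, then let the best tour deposit pheromone (1/length)
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for k in range(n):
            a, b = best_tour[k], best_tour[(k + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best_tour, best_len

# four corners of a unit square: the optimal tour has length 4
square_tour, square_len = aco_tsp([(0, 0), (0, 1), (1, 1), (1, 0)])
```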
\section{Conclusion}
The sole purpose of introducing `The End Goal based Classification' is to supply the missing pattern linking problem and solution. Identifying and classifying the problem helps narrow down the range of suitable NIAs. Tasks like parameter tuning, structure selection, and parameter optimization, which otherwise follow a brute-force approach, can be solved with the help of the NIA classification and the mapping to conceptual problems. This approach is expected to benefit researchers and engineers working on computationally intensive and data-starved problems, helping them identify solutions in the most efficient way.
\section*{Appendix}
\section{Algorithms}
\label{sec:algo}
\begin{algorithm}[ht!]
\caption{Symbolic Reward Learner}
\label{alg:learner}
\begin{algorithmic}[1]
\Procedure {Initialize}{}
\State Initialize actor $\pi$ and critic $\mathcal{Q}$ with weights $\theta^\pi$ and $\theta^\mathcal{Q}$, respectively.
\State Initialize target actor $\pi'$ and critic $\mathcal{Q}'$ with weights $\theta^{\pi'}$ and $\theta^{\mathcal{Q}'}$, respectively.
\State Initialize the symbolic tree $\mathcal{ST}$ for reward generation
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Function Evaluate}
\label{alg:episode}
\begin{algorithmic}[1]
\Procedure {Evaluate}{$\pi$, R}
\State $fitness = 0$
\State Reset environment and get initial state $s_0$
\While{env is not done}
\State Select action $a_t = \pi(s_t|\theta^\pi)$
\State Execute action $a_t$ and observe reward $r_t$ and new state $s_{t+1}$
\State Append transition $(s_t, a_t, r_t, s_{t+1})$ to $R$
\State $fitness \leftarrow fitness + r_t$ and $s = s_{t+1}$
\EndWhile
\State Return $fitness$, R
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Function Mutate}
\label{alg:mutate}
\begin{algorithmic}[1]
\Procedure {Mutate}{$\theta^\pi$}
\For{Weight Matrix $\mathcal{M} \in \theta^\pi$}
\For{iteration = 1, $mut_{frac} * |\mathcal{M}|$}
\State Randomly sample indices $i$ and $j$ from $\mathcal{M}'s$ first and second axis, respectively
\If{$r() < supermut_{prob}$}
\State $\mathcal{M}[i,j]$ = $\mathcal{M}[i,j]$ * $\mathcal{N}(0,\,100 * mut_{strength})$
\ElsIf{$r() < reset_{prob}$}
\State $\mathcal{M}[i,j]$ = $\mathcal{N}(0,\,1)$
\Else
\State $\mathcal{M}[i,j]$ = $\mathcal{M}[i,j]$ * $\mathcal{N}(0,\,mut_{strength})$
\EndIf
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
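Algorithm \ref{alg:mutate} can be sketched in NumPy as follows. This is a minimal sketch assuming `weights` is a list of 2-D weight matrices; the parameter defaults follow the hyperparameter tables in~\Cref{sec:hyperparameters}.

```python
import numpy as np

def mutate(weights, mut_frac=0.1, supermut_prob=0.05,
           reset_prob=0.05, mut_strength=0.1, rng=None):
    """Sketch of the Mutate procedure: perturb a random fraction of each
    weight matrix, occasionally super-mutating or re-initializing a weight."""
    rng = rng if rng is not None else np.random.default_rng(0)
    for M in weights:                                 # one 2-D matrix per layer
        n_mut = max(1, int(mut_frac * M.size))
        for _ in range(n_mut):
            i = int(rng.integers(M.shape[0]))
            j = int(rng.integers(M.shape[1]))
            if rng.random() < supermut_prob:          # rare large perturbation
                M[i, j] *= rng.normal(0.0, 100 * mut_strength)
            elif rng.random() < reset_prob:           # re-initialize the weight
                M[i, j] = rng.normal(0.0, 1.0)
            else:                                     # small multiplicative noise
                M[i, j] *= rng.normal(0.0, mut_strength)
    return weights

layer = np.ones((4, 4))
mutated = mutate([layer.copy()])[0]
```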
\section{Symbolic Tree Details}
\label{subsec:operator_list}
The complete list of operators used for symbolic tree generation is shown below.
\begin{minted}{python}
def add(left, right):
    return left + right
def subtract(left, right):
    return left - right
def multiply(left, right):
    return left*right
def cos(left):
    return np.cos(left)
def sin(left):
    return np.sin(left)
def tan(left):
    return np.tan(left)
def max(nums):
    return np.max(nums)
def min(nums):
    return np.min(nums)
def pass_greater(left, right):
    if left > right: return left
    return right
def pass_smaller(left, right):
    if left < right: return left
    return right
def equal_to(left, right):
    return float(left == right)
def gate(left, right, condition):
    if condition <= 0:
        return left
    else:
        return right
def square(left):
    return left*left
def is_negative(left):
    if left < 0: return 1.0
    return 0.0
def div_by_100(left):
    return left/100.0
def div_by_10(left):
    return left/10.0
def protected_div(left, right):
    with np.errstate(divide='ignore', invalid='ignore'):
        x = np.divide(left, right)
        if isinstance(x, np.ndarray):
            x[np.isinf(x)] = 1
            x[np.isnan(x)] = 1
        elif np.isinf(x) or np.isnan(x):
            x = 1
    return x
\end{minted}
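As a quick sanity check, the two least obvious operators can be exercised in isolation; they are redefined below so the snippet is self-contained.

```python
import numpy as np

# self-contained copies of two operators from the list above
def protected_div(left, right):
    # division that falls back to 1 on divide-by-zero or NaN
    with np.errstate(divide='ignore', invalid='ignore'):
        x = np.divide(left, right)
        if isinstance(x, np.ndarray):
            x[np.isinf(x)] = 1
            x[np.isnan(x)] = 1
        elif np.isinf(x) or np.isnan(x):
            x = 1
    return x

def gate(left, right, condition):
    # if-then-else on the sign of the condition
    return left if condition <= 0 else right

safe = protected_div(1.0, 0.0)    # -> 1, instead of raising or returning inf
branch = gate(-5.0, 7.0, 0.0)     # -> -5.0 (condition is non-positive)
```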
\clearpage
\section{Implementation Details}
\label{sec:hyperparameters}
The complete list of hyperparameters used for the LISR experiments is given below. For the football experiments, we used the same hyperparameters as in the discrete control tasks.
\begin{table}[h!]
\centering
\caption{Hyperparameters for LISR for continuous control tasks}
\begin{tabular}{|c||c|}
\hline
Hyperparameter & Value\\
\hline \hline
Population Size $k$ & 50 \\
Target Weight $\tau$ & $1e^{-3}$ \\
Actor Learning Rate & $[1e^{-3}, 1e^{-4}, 3e^{-5}]$ \\
Critic Learning Rate & $[1e^{-3}, 1e^{-4}, 3e^{-5}]$\\
Replay Buffer & $1e^{6}$\\
Batch Size & $[256, 1024]$ \\
Exploration Steps & 5000 \\
Optimizer & Adam \\
Hidden Layer Size & 256 \\
Mutation Probability $mut_{prob}$ & $0.9$ \\
Mutation Fraction $mut_{frac}$ & $0.1$\\
Mutation Strength $mut_{strength}$ & $0.1$ \\
Super Mutation Probability $supermut_{prob}$ & $0.05$ \\
Reset Mutation Probability $resetmut_{prob}$ & $0.05$ \\
Number of elites $e$ & $7\%$ \\
\hline \hline
\end{tabular}
\label{varied}
\vspace{2em}
\end{table}
\begin{table}[h!]
\centering
\caption{Hyperparameters for LISR for discrete control tasks}
\begin{tabular}{|c||c|}
\hline
Hyperparameter & Value\\
\hline \hline
Population Size $k$ & 50 \\
Target Weight $\tau$ & $1e^{-3}$ \\
Actor Learning Rate & $[1e^{-3}, 1e^{-4}]$ \\
Maxmin DQN Heads & 2 \\
Regularization Weight & $1e^{-8}$ \\
Replay Buffer & $1e^{6}$\\
Batch Size & $[64, 256]$ \\
Exploration Steps & 5000 \\
Optimizer & Adam \\
Hidden Layer Size & 256 \\
Mutation Probability $mut_{prob}$ & $0.9$ \\
Mutation Fraction $mut_{frac}$ & $0.1$\\
Mutation Strength $mut_{strength}$ & $0.1$ \\
Super Mutation Probability $supermut_{prob}$ & $0.05$ \\
Reset Mutation Probability $resetmut_{prob}$ & $0.05$ \\
Number of elites $e$ & $7\%$ \\
\hline \hline
\end{tabular}
\label{varied_discrete}
\vspace{2em}
\end{table}
\section{Conclusion}
In this paper, we presented LISR - a framework that combines ideas from symbolic, rule-based machine learning with modern gradient-based learning. We showed that it is possible to discover intrinsic rewards completely from observational data and train an RL policy. LISR outperformed other approaches that rely on neural network based reward estimators.
Our work is an effort to bridge the interpretability gap in Deep RL. While we cannot claim that the discovered reward functions are \textit{interpretable}, they are relatively easy to parse, comprising tens of symbolic operations compared to the thousands of operations common in a neural network estimator. At the very least, this structure lends itself to being more ``human readable'' than black-box solutions. For example, as shown in our example tree, LISR required only 22 operations to compute a reward in PixelCopter, including simple trigonometric transforms on positional variables and one \textit{if-then-else} gating condition. In a scenario where a policy is unstable, it could be feasible to trace the cause of instability to a subset of those operations. This kind of ``limited explainability'' could be important for safety-critical applications such as autonomous driving. Future work will focus on improving the level of interpretability of the discovered functions.
One important drawback of LISR is its lack of sample efficiency compared to established methods like SAC. This is somewhat expected, as LISR operates with the key disadvantage of not having a pre-defined dense reward signal; the primary bottleneck is the search for an optimal reward function. In this work, we implemented EA as the search mechanism. Future work will explore other alternatives like Monte-Carlo Tree Search (MCTS), as well as ways to turn off the search once a reasonably good reward function has been discovered.
\section{Experiments}
\label{sec:experiments}
Our main objective is to demonstrate that LISR can be applied to problems involving continuous and discrete action spaces. To this end, we evaluated LISR on Mujoco~\citep{todorov2012mujoco} for continuous control tasks and on Pygame~\citep{gym-games} and OpenAI-Gym Atari games~\citep{gym} for discrete control tasks. We evaluated LISR's performance against three baselines: policies trained using a standard EA implementation, policies trained using only intrinsic symbolic rewards and {\em Curiosity } where agents learn on a combination of intrinsic rewards and environment-provided dense rewards.
For the continuous control tasks, we used \textbf{Soft Actor-Critic (SAC)}~\citep{haarnoja2017soft} as our PG algorithm as it is the state-of-the-art on a number of benchmarks. SAC is an off-policy actor-critic method based on the maximum entropy RL framework~\citep{ziebart_2010}. The goal of SAC is to learn an optimal policy while behaving as randomly as possible. This behavior encourages efficient exploration and robustness to noise and is achieved by maximizing the policy entropy and the reward.
For the discrete environments we adopt \textbf{Maxmin DQN}~\citep{Lan-2020-ICLR}, which extends DQN~\citep{Mnih-2015-Nature} to address the overestimation bias problem in {Q}-learning by using an ensemble of neural networks to estimate unbiased {Q}-values.
\textbf{Continuous control tasks:} We evaluated on four environments from the Mujoco benchmark - HalfCheetah, Ant, Hopper and Swimmer. We trained each environment with five random seeds for 150 million frames. We fixed the total population size (EA and SR learners) to 50 for all experiments. For LISR experiments, we kept the ratio between EA learners and SR learners equal to 0.5. For the {\em Curiosity } experiments, we integrated the {\em Intrinsic Curiosity Module (ICM)} to work with SAC. We performed a grid search for multiple learning rates for LISR, SR and {\em Curiosity } and report results corresponding to the best performing hyperparameters.
\begin{figure*}[h!]
\centering
\captionsetup[subfigure]{labelformat=empty}
\subfloat[HalfCheetah-v2]{
\includegraphics[width=0.4\textwidth]{images/halfcheetah/half_cheetah_final.pdf}}\hspace{0.1\textwidth}
\subfloat[Ant-v2]{
\includegraphics[width=0.4\textwidth]{images/ant/ant_results_final.pdf}} \\
\subfloat[Hopper-v2]{
\includegraphics[width=0.4\textwidth]{images/hopper/hopper_results_final.pdf}}\hspace{0.1\textwidth}
\subfloat[Swimmer-v2]{
\includegraphics[width=0.4\textwidth]{images/swimmer/swimmer_final_results.pdf}}
\caption{Results on continuous control tasks in Mujoco. LISR outperforms all baselines except on the low-dimensional problem in Swimmer. {\em Curiosity}, with access to explicit as well as implicit rewards, is unable to learn an effective policy on any environment. SR on its own, with access only to intrinsic rewards, is also unable to scale but slightly outperforms {\em Curiosity }. }
\label{fig:lisr_continuous_results}
\end{figure*}
Our results on the continuous control baselines are shown in~\Cref{fig:lisr_continuous_results}. We see that {\em Curiosity } fails to learn an effective policy on all environments - even though it has access to the dense rewards in addition to its own intrinsic rewards. SR on its own is also non-performant - however, it slightly outperforms {\em Curiosity } in 3 out of 4 environments. EA and LISR are both able to find effective policies. Notably, LISR outperforms EA substantially in 3 out of 4 environments. On Swimmer, EA slightly improves on sample efficiency - although both EA and LISR find the optimal solution quickly. This finding is consistent with \citet{khadka2019collaborative} that also showed that EA outperformed reinforcement learning on the relatively low dimensional problem in Swimmer. Since the key difference between EA and LISR is the presence of SR learners, these results demonstrate the incremental importance of the discovered symbolic rewards in solving the tasks.
\textbf{Discrete control tasks:} We evaluated LISR on four different discrete environments: LunarLander and Amidar, two high-dimensional environments from Atari games, and PixelCopter and Catcher, two low-dimensional environments from Pygames. We trained a multi-headed Maxmin DQN as our policy gradient learner and used the MeanVector regularizer~\citep{sheikh2020reducing} to ensure diversity in the Q-values. Similar to the baselines in the continuous control experiments, we evaluated the performance of LISR against the performance of only EA, only SR learners and {\em Curiosity }. We trained PixelCopter, LunarLander and Amidar for 50 million frames and Catcher for 30 million frames and show the results in~\Cref{fig:lisr_discrete_results}. We observe that, similar to the continuous control tasks, LISR outperforms all the baselines except {\em Curiosity } in the Catcher environment, where the performance of LISR and {\em Curiosity } is similar. Notably, in the PixelCopter and Catcher environments, the SR learners alone were able to achieve the maximum performance, relying purely on discovered symbolic rewards. {\em Curiosity } significantly underperforms all baselines in all but the Catcher environment. The complete list of hyperparameters is shown in~\Cref{sec:hyperparameters}.
\begin{figure*}[htp]
\centering
\captionsetup[subfigure]{labelformat=empty}
\subfloat[Catcher]{
\includegraphics[width=0.4\textwidth]{images/catcher/catcher_final_results.pdf}}\hspace{0.1\textwidth}
\subfloat[PixelCopter]{
\includegraphics[width=0.4\textwidth]{images/copter/pixelCopter_final_results.pdf}}\\
\subfloat[Amidar]{
\includegraphics[width=0.4\textwidth]{images/amidar/amidar_final_results.pdf}}\hspace{0.1\textwidth}
\subfloat[LunarLander]{
\includegraphics[width=0.4\textwidth]{images/lunar/lunar_final_results.pdf}}
\caption{Results on discrete control tasks in Atari (top) and Pygames (bottom). LISR outperforms all baselines in all environments. {\em Curiosity }'s performance is overall significantly better on these tasks compared to continuous control. SR on its own, with no access to environment provided dense rewards, is able to completely solve the Atari environments. SR on its own also outperforms {\em Curiosity } on all but the Catcher environment.}
\label{fig:lisr_discrete_results}
\end{figure*}
\textbf{Multiplayer football}: We also applied LISR to Google Research Football~\citep{kurach2020google}, a physics-based, multiplayer 3D environment with discrete action spaces where multiagent teams aim to score goals and maximize their margin of victory. The environment provides a \textit{Scoring} reward based on goals scored and a denser \textit{Checkpoint} reward based on the distance of the ball to the goal.
\begin{figure}[ht]
\captionsetup[subfigure]{labelformat=empty}
\begin{center}
\subfloat[Run to Score]{
\includegraphics[width=0.32\textwidth,trim={2px 2px 2px 35px},clip]{images/football/picture_academy_empty_goal.png}}
\subfloat[Empty Goal]{
\includegraphics[width=0.32\textwidth,trim={2px 2px 2px 35px},clip]{images/football/picture_academy_run_to_score.png}}
\subfloat[3 vs 1 Keeper]{
\includegraphics[width=0.32\textwidth,trim={2px 2px 2px 35px},clip]{images/football/picture_academy_3_vs_1.png}}
\end{center}
\caption{Google Research Football environments used in the experiments}
\label{fig:google_env}
\end{figure}
We test LISR on three environments in the \textit{Football Academy} set of environments, which describe specific game scenarios of varying difficulty. Specifically, we consider the following scenarios.
\begin{itemize}
\item \textit{Run to Score}: Our player starts in the middle of the field with the ball, and needs to score against an empty goal. Five opponent players chase ours from behind.
\item \textit{Empty Goal}: Our player starts in the middle of the field with the ball, and needs to score against an empty goal.
\item \textit{3 vs 1 with Keeper}: Three of our players try to score from the edge of the box, one on each side, and the other at the center. Initially, the player at the center has the ball, and is facing the defender. There is an opponent keeper.
\end{itemize}
In the first two scenarios, we only control one player. In the last scenario, we consider variations where we control only one of our players and two of our players. In all cases, any player that we do not control utilizes the strategy of a built-in \textit{AI bot}. The scenarios are shown in ~\Cref{fig:google_env}.
We benchmark LISR against EA and their published results with IMPALA~\citep{espeholt2018impala} - a popular distributed RL framework that was shown to outperform other popular algorithms like PPO~\citep{schulman2017ppo} and variations of DQN~\citep{horgan2018distributed} on this benchmark. For LISR, we only use the aggregated sum of rewards in an episode as a sparse fitness function. IMPALA, on the other hand, utilizes a standard RL setup that exploits the dense rewards to learn a policy. Our goal was to investigate if LISR can be competitive with IMPALA with no access to the dense rewards.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{images/football/bar.png}
\caption{Experiments on Google Research Football environments. Numbers in parentheses indicate the number of players controlled by LISR}
\label{fig:football_bar2}
\end{figure}
Figure \ref{fig:football_bar2} shows the performance on the four scenarios we evaluated. On the simpler environments involving an empty goal, all three algorithms were able to find performant solutions in less than 5M time steps. For the more difficult scenarios involving 3 players vs 1, IMPALA does outperform LISR. However, LISR is able to find competitive strategies compared to IMPALA in both scenarios. This is significant, as it shows that even in relatively complex, non-stationary multiagent scenarios, LISR is able to discover intrinsic symbolic rewards and remain competitive with well-established algorithms that exploit dense rewards.
\textbf{Discovered Rewards}: A key motivation for designing LISR is the discovery of \textbf{symbolic} reward functions that involve many fewer operations than a typical neural network based reward estimator. In all our experiments, we restricted the depth of the symbolic trees to $3$ operational layers in order to impose this constraint.
\begin{wrapfigure}{r}{0.2\textwidth}
\centering
\vspace{-15pt}
\includegraphics[width=0.2\columnwidth]{images/algo/pixelcopter.png}
\caption{PixelCopter environment}
\label{fig:env_pixelcopter}
\vspace{-15pt}
\end{wrapfigure}
Consider the PixelCopter environment shown in Figure \ref{fig:env_pixelcopter}. It provides $8$ state variables: $s_0$: position; $s_1$: velocity; $s_2$: distance to floor; $s_3$: distance to ceiling; $s_4$: next block's x distance to player; $s_5$: next block's top y location; $s_6$: next block's bottom y location; and $s_7$: agent's action.
\Cref{fig:sr_tree_pixelcopter} shows an example of such a tree at the end of training. For better parsability, we unroll the tree into Python code.
While we cannot claim that this particular reward function is interpretable, it is similar in structure to a classical symbolic rule and appears to rely on trigonometric transformations of positional variables. In this instance, it uses only 22 operations, making it relatively easy to analyze compared to neural network reward estimators. In contrast, {\em Curiosity }'s ICM module, which generates an intrinsic reward, is implemented as three neural networks with $5634$ parameters.
\begin{minipage}{\linewidth}
\noindent\hspace{0.05\linewidth}\begin{minipage}{\linewidth}
\begin{minted}[fontsize=\scriptsize]{python}
def get_intrinsic_reward(s_0, s_1, s_2, s_3, s_4, s_5, s_6, s_7):
    p_1 = tan(cos(s_4)); p_2 = cos(s_3); p_3 = pass_smaller(p_1, p_2)
    x_1 = multiply(-1, abs(subtract(s_7, p_3)))
    q_1 = multiply(-1, abs(subtract(1, s_4)))
    q_2 = max([s_2, 1, s_7, q_1, 0])
    q_3 = max([q_2, s_7, cos(0), multiply(s_0, s_6),
               multiply(s_5, subtract(s_6, 1))])
    y_1 = div_by_10(q_3)
    y_2 = square(s_7)
    y_3 = protected_div(1, div_by_100(s_0))
    x_2 = gate(y_1, y_2, y_3)
    z = equal_to(x_2, x_1)
    reward = add(0, pass_smaller(div_by_10(s_7), z))
    return reward
\end{minted}
\end{minipage}
\captionof{figure}{An example of a discovered symbolic reward on PixelCopter. We unroll the corresponding symbolic tree into Python-like code that can be parsed and debugged. $\{s_i\}$ represent state observations.}
\label{fig:sr_tree_pixelcopter}
\end{minipage}
\vspace{-3pt}
\section{Introduction}
\begin{wrapfigure}{r}{0.6\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{images/algo/LISR_overview.png}
\caption{LISR: agents discover latent rewards as symbolic functions and use it to train using standard Deep RL methods}
\label{fig:lisr_overview}
\end{wrapfigure}
RL algorithms aim to learn a target task by maximizing the rewards provided by the underlying environment. Only in a few limited scenarios are the rewards provided by the environment dense and continuously supplied to the learning agent, e.g. a running score in Atari games~\citep{Mnih-2015-Nature}, or the distance between the robot arm and the object in a picking task~\citep{Lillicrap-2015-ICLR}. In many real world scenarios, these dense extrinsic rewards are sparse or altogether absent.
In these environments, a common approach is to hand-engineer a dense reward and combine it with the sparse objective to construct a surrogate reward. While the additional density leads to faster convergence of a policy, creating a surrogate reward fundamentally changes the underlying Markov Decision Process (MDP) formulation central to many Deep RL solutions. Thus, the learned policy may differ significantly from the optimal policy~\citep{Rajeswaran-2017-NIPS, Ng-1999-ICML}. Moreover, the achieved task performance depends on the heuristics used to construct the dense reward and on the specific function used to mix the sparse and dense rewards.
Recent works~\citep{pathak2017curiosity, zheng2018learning, Du-2019-NeurIPS} have also explored training a neural network to generate dense local rewards automatically from data. While these approaches have sometimes outperformed Deep RL algorithms that rely on hand-designed dense rewards, they have only been tested in a limited number of settings. Further, the resulting reward function estimators are black-box neural networks with several thousand parameters, rendering them intractable to parse. A symbolic reward function lends itself better to applications such as formal verification in AI and to ensuring fairness and removing bias in deployed policies.
In this paper, we present a method that discovers dense rewards in the form of low-dimensional symbolic trees rather than as high-dimensional neural networks. The trees use simple functional operators to map an agent's observations to a scalar reward, which then supervises the policy gradient learning of a neural network policy. We refer to our proposed method as Learned Intrinsic Symbolic Rewards (LISR). The high level concept of LISR is shown in~\Cref{fig:lisr_overview}.
To summarize, our contributions in this paper are:
\begin{itemize}
\item We conceptualize intrinsic reward functions as low-dimensional, learned symbolic trees constructed entirely of arithmetic and logical operators. This makes the discovered reward functions relatively easier to parse compared to neural network based representations.
\item We deploy gradient-free symbolic regression to discover reward functions. To the best of our knowledge, symbolic regression has not previously been used to estimate optimal reward functions for deep RL.
\end{itemize}
\section{LISR: Learning Intrinsic Symbolic Rewards}\label{sec:lisr}
The principal idea behind LISR is to discover symbolic reward functions that then guide the learning of a policy using standard policy gradient methods. A general flow of the algorithm is shown in \Cref{fig:lisr}.
Two populations, comprising EA and SR learners respectively, are initialized. The EA population evolves using standard EA processes based on a fitness function. In the SR population, each SR learner has a corresponding symbolic tree that maps state observations to a scalar reward. The nodes of the tree represent simple mathematical or logical operators sampled from a pre-defined dictionary of operators or \textit{basis functions}. The complete list of basis functions utilized by the symbolic trees is described in~\Cref{subsec:operator_list}. The symbolic trees evolve using crossover and mutation based on a fitness function, leading to the discovery of novel reward functions.~\Cref{fig:sr_evolution} depicts these operations on symbolic trees.
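The tree-level evolutionary operators can be sketched in Python as follows. This is a minimal illustration with a toy basis set and tuple-encoded trees of our own design, not the implementation or operator list used in the paper:

```python
import math
import random

# Toy basis set for illustration; the paper's full operator list is larger.
UNARY = {"sin": math.sin, "cos": math.cos, "abs": abs,
         "square": lambda x: x * x}
BINARY = {"add": lambda x, y: x + y, "subtract": lambda x, y: x - y,
          "multiply": lambda x, y: x * y, "max2": max}

def random_tree(depth, n_inputs):
    """Grow a random symbolic tree encoded as nested tuples (op, *children)."""
    if depth == 0 or random.random() < 0.3:
        return ("input", random.randrange(n_inputs))
    if random.random() < 0.5:
        return (random.choice(sorted(UNARY)), random_tree(depth - 1, n_inputs))
    op = random.choice(sorted(BINARY))
    return (op, random_tree(depth - 1, n_inputs), random_tree(depth - 1, n_inputs))

def evaluate(tree, s):
    """Map a state observation vector s to a scalar reward via the tree."""
    op = tree[0]
    if op == "input":
        return s[tree[1]]
    args = [evaluate(c, s) for c in tree[1:]]
    return {**UNARY, **BINARY}[op](*args)

def mutate(tree, n_inputs, p=0.2):
    """Mutation: replace a random sub-tree with a freshly grown one."""
    if tree[0] == "input" or random.random() < p:
        return random_tree(2, n_inputs)
    return (tree[0],) + tuple(mutate(c, n_inputs, p) for c in tree[1:])

def crossover(a, b):
    """Crossover: graft a sub-tree of parent b into a copy of parent a."""
    if a[0] == "input" or random.random() < 0.3:
        return b
    children = list(a[1:])
    i = random.randrange(len(children))
    children[i] = crossover(children[i], b)
    return (a[0],) + tuple(children)
```

Any tree produced by these operators remains a valid mapping from observations to a scalar, which is what allows the population to be evolved freely.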
Each SR learner uses its reward to update its weights via policy gradient (PG) methods. We adopt Soft Actor-Critic \citep{haarnoja2017soft} for the continuous control tasks and Maxmin DQN \citep{Lan-2020-ICLR} for the discrete control tasks, since they are both state-of-the-art methods in their respective settings. In either case, the reward used to compute policy gradients is always an intrinsic symbolic reward, never an explicit dense reward provided by the environment.
\begin{wrapfigure}{r}{0.6\textwidth}
\centering
\vspace{-12pt}
\includegraphics[width=0.6\textwidth]{images/algo/LISR_framework.png}
\caption{LISR: EA (bottom) and SR (top) learners share a common replay buffer. A set of symbolic trees sample observations from this buffer and map them into scalar rewards. SR learners also sample the same observations and the corresponding reward to train using policy gradients. The champion policy (circled) is selected by ranking all policies, EA and SR, based on a fitness function.}
\label{fig:lisr}
\vspace{-10pt}
\end{wrapfigure}
The fitness function for any policy (SR or EA) is computed as the undiscounted sum of rewards received from the environment, which is given only at the completion of an episode. Thus, any dense reward provided by the environment is seen by any agent (SR or EA) only as a sparse, aggregated fitness function. At the end of each generation, all policies, EA and SR, are combined, ranked and a champion policy is selected.
The \textbf{shared replay buffer} is the principal mechanism enabling sharing of information across the EA and the SR learners in the population. Unlike traditional EA, where the data is discarded after calculating the fitness, LISR pools the experience of all learners (EA and SR) in the shared replay buffer, identically to standard off-policy deep reinforcement learning algorithms. All SR learners are then able to sample experiences from this collective buffer, use them to generate symbolic intrinsic rewards from their respective symbolic trees, and update the policy parameters using gradient descent. This mechanism maximizes the information extracted from each individual experience.
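A minimal sketch of such a buffer, with hypothetical class and function names of our own, is shown below; the key point is that transitions are stored without any reward, and each SR learner relabels sampled minibatches with its own symbolic intrinsic reward:

```python
import random
from collections import deque

class SharedReplayBuffer:
    """Cyclic buffer pooling transitions from all EA and SR learners."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted

    def push(self, s, a, s_next, done):
        # No environment reward is stored: each SR learner relabels
        # transitions with its own symbolic intrinsic reward at sample time.
        self.buffer.append((s, a, s_next, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

def relabel(batch, symbolic_reward):
    """Attach an SR learner's intrinsic reward to a sampled minibatch."""
    return [(s, a, symbolic_reward(s), s_next, done)
            for (s, a, s_next, done) in batch]
```

Relabeling at sample time is what lets many reward functions coexist over one pool of experience.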
This architecture is motivated by CERL \citep{khadka2019collaborative}, where a common replay buffer between evolutionary and policy gradient learners was shown to significantly accelerate learning. In our experiments, we vary the proportion of EA and SR policies in order to distill the incremental importance of each to the final performance. For completeness, the LISR algorithm is shown in~\Cref{alg:lisralgo}.
\begin{figure*}[h]
\centering
\subfloat{
\includegraphics[width=0.33\textwidth]{images/algo/mutation.png}}\hspace{0.12\textwidth}
\subfloat{
\includegraphics[width=0.5\textwidth]{images/algo/crossover_horizontal.png}}
\caption{Evolution of symbolic trees. The colored polygons represent basic mathematical operators. For mutation (left), a random sub-tree is replaced using another random sub-tree (gene). For crossover (right), two parent trees swap sub-trees to form a child tree.
\label{fig:sr_evolution}}
\end{figure*}
\begin{algorithm}[h!]
\caption{LISR Algorithm}
\label{alg:lisralgo}
\begin{algorithmic}[1]
\State Initialize portfolio $\mathcal{P}$ with SR learners$~\rightarrow$ ~(\Cref{alg:learner})
\State Initialize a population of $k$ EA actors $pop_{\pi}$
\State Initialize an empty cyclic replay buffer $\mathcal{R}$
\State Define a random number generator $r()$ $\in$ $[0,1)$
\For{generation = 1, $\infty$}
\For{actor $\pi$ $\in$ $pop_{\pi}$}
\State fitness, $\mathcal{R}$ = Evaluate($\pi$, $\mathcal{R}$)$~\rightarrow$ ~(\Cref{alg:episode} in Appendix)
\EndFor
\State Rank the population based on fitness scores
\State Select the first $e$ actors $\pi$ $\in$ $pop_{\pi}$ as elites
\State Select $(k-e)$ actors $\pi$ from $pop_{\pi}$, to form Set $S$ using tournament selection with replacement
\While{$|S|$ $<$ $(k-e)$}
\State Use single-point crossover between a random $\pi \in e$ and $\pi \in S$ and append to $S$
\EndWhile
\For{Actor $\pi$ $\in$ Set $S$}
\If{$r() < mut_{prob}$}
\State Mutate($\theta^\pi$)$~\rightarrow$ ~(\Cref{alg:mutate} in Appendix)
\EndIf
\EndFor
\For{Learner $L$ $\in$ $\mathcal{P}$ }
\State Sample a random minibatch of T transitions $(s_i, a_i, s_{i+1})$ from $\mathcal{R}$
\State Compute reward $\hat{r}_i=L_\mathcal{{ST}}\left(s_i, a_i, s_{i+1}\right)$
\State Compute $y_i = \hat{r}_i + \gamma \displaystyle \min_{j=1,2} L_{\mathcal{Q}_{j}'}(s_{i+1},\tilde a|\theta^{L_{\mathcal{Q}_{j}'}})$
\State Update $L_\mathcal{Q}$ by minimizing the loss: $\mathcal{L}_i = \frac{1}{T} \sum_{i} (y_i - L_{\mathcal{Q}_{i}}(s_i, a_i|\theta^{L_\mathcal{Q}}) )^2$
\State Update $L_\pi$ using the sampled policy actions
\State Soft update target networks:
\State $L_{\theta^{\pi^\prime}} \Leftarrow \tau L_{\theta^\pi} + (1 - \tau) L_{\theta^{\pi^\prime}}$ and
\State $L_{\theta^{\mathcal{Q}^\prime}} \Leftarrow \tau L_{\theta^\mathcal{Q}} + (1 - \tau)L_{\theta^{\mathcal{Q}^\prime}}$
\EndFor
\For{Learner ${L} \in \mathcal{P}$ }
\State $score$, $\mathcal{R}$ = Evaluate($L_{\pi}$, $\mathcal{R}$)
\EndFor
\State Rank the learners $\mathcal{P}$ based on $scores$
\State Select the first $j$ learners $L\in \mathcal{P}$ as elites
\State Select $(m-j)$ symbolic trees $\mathcal{ST}$ from $\mathcal{P_{ST}}$, to form Set $N$ using tournament selection.
\While{$|N|$ $<$ $(m-j)$}
\State Use single-point crossover between a random $\mathcal{ST} \in j$ and $\mathcal{ST} \in N$ and append to $N$
\State Use mutation between a random $\mathcal{ST} \in j$ and $\mathcal{ST} \in N$ and append to $N$
\EndWhile
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Related Work}
\label{sec:related_work}
The LISR architecture relies on the following key elements:
\begin{itemize}
\item Symbolic Regression on a population of symbolic trees to learn intrinsic rewards
\item Off-policy RL to train neural networks using the discovered rewards
\item Evolutionary algorithms (EA) on a population of neural network policies to search for an optimal policy
\end{itemize}
\textbf{Learning Intrinsic Rewards:}
Some prior works~\citep{Liu-TAMD-2014, kulkarni2016hierarchical, Dilokthanakul-2019-NNLS, zheng2018learning} have used heuristically designed intrinsic rewards in RL settings, leading to interesting formulations such as surprise-based metrics \citep{s2019learning}. In this work, we benchmark against \citet{pathak2017curiosity}, where a {\em Curiosity} metric was successfully used to outperform A3C on relatively complex environments like VizDoom and Super Mario Bros. LISR differs from these works in that the discovered reward functions are low-dimensional symbolic trees instead of high-dimensional neural networks. Further, we are not aware of other works that, like LISR, benchmark a single intrinsic reward approach on both discrete and continuous control tasks, as well as in single- and multi-agent settings.
\textbf{Symbolic Regression in DL} is a well-known search technique in the space of symbolic functions. A few works have applied symbolic regression to estimate activation functions \citep{sahoo2018learning} and value functions \citep{kubalik2019symbolic}, and to directly learn interpretable RL policies in model-based RL \citep{hein2018interpretable}. To the best of our knowledge, symbolic regression has not previously been used to optimize the reward function of an RL algorithm. For simplicity of design, we adopt a classic implementation where a population of symbolic trees undergoes mutation and crossover to generate new trees.
\textbf{Evolutionary Algorithms} (EAs) are a class of gradient-free search algorithms~\citep{fogel_1995,spears1993overview} in which a population of candidate solutions undergoes mutation and crossover to discover novel solutions in every generation. Selection from this population involves a ranking operation based on a fitness function.
Recent works have successfully combined EA and Deep RL to accelerate learning. Evolved Policy Gradients (EPG)~\citep{houthooft2018evolved} utilized EA to evolve a differentiable loss function parameterized as a convolutional neural network. CERL~\citep{khadka2019collaborative} combined policy gradients (PG) and EA to find the champion policy based on a fitness score. Our work takes motivation from both. Like EPG, we also search in the space of loss functions, albeit in the form of low-dimensional symbolic trees. Like CERL, we allow EA and PG learners to share a replay buffer to accelerate exploration. However, unlike LISR, CERL relies on access to an environment-provided dense reward function for the PG learners.
\section{Introduction}
\label{sec:introduction}
The fifth generation (5G) of cellular networks is expected to provide extremely low latency, high reliability, and high throughput services, which appeal to interactive and demanding applications such as haptic communication, automated driving, and others~\cite{andrews2014will}. Along with the freedom of movement given by wireless connectivity, these applications generally require the timely and synchronized delivery of a multitude of sensor data and commands, to guarantee control effectiveness.
For example, haptic communication allows users to interact with remote environments using haptic devices (i.e., haptic sensors and actuators) that exchange kinesthetic and tactile information. In the case of closed-loop bilateral teleoperation systems, kinesthetic data is time-sensitive. Although stability control mechanisms can be employed to compensate for undesirable end-to-end delays that disturb the stability of such systems, this approach may deteriorate the \textit{transparency} of the service, i.e., the feeling of interactivity and, hence, the quality of telepresence \cite{lawrence1993}. A more transparent way to decrease the end-to-end delay consists instead in reducing the sensor data to be transmitted according to a human perception model, at the cost of a less accurate reconstructed signal at the receiver~\cite{Antonakoglou2018}.
Connected vehicles, on the other hand, will exchange data generated by on-board sensors via \gls{v2x} communications, to collaboratively build a richer context awareness and coordinate driving decisions~\cite{higuchi2019value}.
However, disseminating sensors' observations is expected to increase data traffic in vehicular networks by multiple orders of magnitude, thus potentially leading to congestion.
In order to operate effectively, these applications will need to implement the basic functions of the transport layer, i.e., retransmitting lost packets and limiting their send rate to avoid congestion. The \gls{tcp} and \gls{udp} are the two traditional options, each with its own pitfalls.
By using \gls{tcp}, applications can delegate congestion control and retransmission to the transport layer, providing a simple and well-tested interface with standardized behavior. However, most congestion control mechanisms can create significant latency issues, and the requirement for in-order delivery can cause the head of line blocking problem, increasing the delay as received data is buffered waiting for the retransmission of earlier lost packets. In turn, \gls{udp} offers full flexibility to the applications, but leaves the burden of managing congestion and retransmissions to them.
In order to overcome the issues of both protocols, in this work we propose \gls{est}, a framework that combines the recently developed QUIC protocol with a suitable scheduling scheme. QUIC combines the ease of use and retransmission/control mechanisms of \gls{tcp} with \gls{udp}'s flexibility in the data delivery order, implementing different \emph{streams} that can be delivered in parallel and independently of each other and thus avoiding the head of line blocking issue and maintaining low latency. Implementers then have freedom in scheduling the application data on the streams. We hence propose a multi-stream scheduling scheme that, leveraging the QUIC features, biases data transmissions as a function of the \gls{voi}~\cite{giordani2019investigating}. We define the \gls{voi} as a scalar value quantifying the utility of the data for the receiving application. The \gls{voi} takes into consideration the potential correlation in time and across different sensors, as well as the intrinsic value of their measurements. The performance of this proposal is tested in the haptic communication and \gls{v2x} scenarios, and we show that our approach guarantees better utility compared to traditional scheduler implementations.
The rest of the paper is organized as follows. Sec.~\ref{sec:quic} presents the QUIC protocol, the adaptations we introduce to the protocol, and Sec.~\ref{sec:sched} describes our proposed \gls{voi}-based scheduler. Sec.~\ref{sec:applications} presents two reference scenarios in which our system can be used, namely, autonomous driving and haptic communication, while we evaluate the performance of the \gls{voi}-based scheduler in Sec.~\ref{sec:performance_evaluation}. Finally, Sec.~\ref{sec:conclusions} concludes the paper.
\section{Adapting QUIC for Time-Sensitive Multi-Sensory Applications}\label{sec:quic}
The QUIC protocol~\cite{langley2017quic} was designed by Google to solve some of the latency issues that \gls{tcp} typically causes with Internet traffic. QUIC packets are encrypted by default and encapsulated in \gls{udp} datagrams, and the protocol is run directly on applications within the operating system kernel, making it easier to modify and~deploy.
Besides lacking native encryption, \gls{tcp} requires in-order delivery, interpreting transmitted data as a single stream and leaving the task of separating application-level objects to the application itself. In-order delivery can cause issues when multiple objects are transmitted over the same connection, such as for most Web pages. An error on one element of the page can indeed block other objects for a significant time, even though they were already received and could be displayed immediately. QUIC addresses the head of line blocking issue by adopting the same solution as the older \gls{sctp}, i.e., defining separate \emph{streams} of data within the same connection. Each stream is treated by the protocol as a logically separate data flow with in-order reliable delivery, independent of the other streams. Fig.~\ref{fig:hol} shows an example of the way QUIC handles multiple streams: while the loss of the blue packet also blocks the orange and green packets in \gls{tcp}, the logical separation between the streams allows QUIC to deliver the data.
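The effect of per-stream ordering can be illustrated with a toy sketch (our simplification; stream identifiers and sequence numbers are hypothetical, not QUIC wire format):

```python
def in_order_deliverable(sent, lost):
    """Packets an in-order receiver can release, per logical stream.

    sent: list of (stream_id, seq) pairs that were transmitted;
    lost: set of (stream_id, seq) pairs that never arrived.
    Within each stream, delivery stops at the first hole, while other
    streams are unaffected (the QUIC behavior). Serializing all data
    on a single stream reproduces TCP's head of line blocking.
    """
    delivered = []
    by_stream = {}
    for sid, seq in sent:
        by_stream.setdefault(sid, []).append(seq)
    for sid, seqs in by_stream.items():
        for seq in sorted(seqs):
            if (sid, seq) in lost:
                break  # hole: later data on this stream stays buffered
            delivered.append((sid, seq))
    return delivered

# Three objects on three QUIC streams: losing one blocks only its stream.
quic = in_order_deliverable([(0, 0), (1, 0), (2, 0)], lost={(0, 0)})
# The same objects serialized on one TCP-like stream: one loss blocks all.
tcp = in_order_deliverable([(0, 0), (0, 1), (0, 2)], lost={(0, 0)})
```

Here `quic` evaluates to `[(1, 0), (2, 0)]` while `tcp` is empty, mirroring the blue/orange/green example of Fig.~\ref{fig:hol}.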
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{quic_mx.pdf}
\caption{The head of line blocking problem and the stream-based solution.}
\label{fig:hol}
\end{figure}
QUIC was designed for Web traffic consisting of a potentially large number of logically independent objects to be delivered with the lowest possible latency. However, its features can be well-adapted to interactive multi-sensory applications in which sensing data needs to be broadcast, potentially with low delay, to preserve the user's \gls{qoe}. Unlike Web traffic, these applications do not usually require data from all available sensors, as they are built to be redundant. This makes the head of line blocking issue even more pressing, since the undelivered data might not even be necessary for successful operation.
Two examples of such applications are automated driving and haptic teleoperation, which typically use \gls{udp} at the transport layer, handling congestion and retransmissions at the application.
Instead, we propose the \gls{est} scheme as a way of adapting QUIC to the multi-sensory application requirements.
In \gls{est}, each sensory reading can be considered as a separate object. As sensors produce several readings per second, we propose to use not just a different stream for each sensor, but a \emph{different stream for each object}. Whenever all packets sent into a stream are acknowledged, the stream can be reused for a new object.
Conversely, if a stream gets blocked by a packet loss and the data become stale, the sender will transmit a \texttt{RESET\_STREAM} frame to tell the receiver to discard any existing out-of-order data received from that stream and start anew, as suggested in~\cite{shi2019dtp}.
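The reuse-or-reset policy can be sketched as follows. The class and its methods are our own stand-ins for the corresponding QUIC library calls (assigning an object to a stream, processing acknowledgments, emitting \texttt{RESET\_STREAM}):

```python
class StreamPool:
    """Assign each object its own stream; reclaim acked or stale streams."""
    def __init__(self, n_streams):
        self.free = list(range(n_streams))
        self.in_flight = {}  # stream_id -> deadline of the object in flight

    def assign(self, deadline, now):
        """Pick a stream for a new object, resetting stale streams first."""
        for sid, dl in list(self.in_flight.items()):
            if now > dl:
                # Data became stale: a real sender would emit a
                # RESET_STREAM frame here so the receiver discards any
                # buffered out-of-order data and the stream can restart.
                del self.in_flight[sid]
                self.free.append(sid)
        if not self.free:
            return None  # all streams busy: defer or drop the object
        sid = self.free.pop()
        self.in_flight[sid] = deadline
        return sid

    def acked(self, sid):
        """All packets of the object acknowledged: the stream is reusable."""
        if self.in_flight.pop(sid, None) is not None:
            self.free.append(sid)
```

In this sketch a stream is recycled on either outcome, acknowledgment or staleness, so the pool of streams stays bounded even though every object gets its own stream.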
\section{Value of Information-Based Scheduling}\label{sec:sched}
While the use of streams allows QUIC to transmit data from different sensors independently, the capacity of the connection might not be sufficient to deliver the data from all sensors within the required time. In this case, the choice of which sensory readings are transmitted and in which order becomes a central problem. The QUIC standard does not specify a scheduler for streams, leaving the possibility to implement a priority-based mechanism open.
We then define a scheduling algorithm that aims at maximizing the \emph{effective \gls{voi}} at the receiver, while avoiding congestion in the connection. To this end, the algorithm needs to be fed with four types of information, namely: \emph{(i)} the (estimated) available capacity of the connection, \emph{(ii)} the size of the data with respect to the capacity, \emph{(iii)} the intrinsic \gls{voi} of the data, and \emph{(iv)} the correlation between the data generated by different sensors (which impacts the effective \gls{voi} of the transmitted data).
More specifically, we assume that, denoting by $N$ the number of objects generated in a batch by an application, the scheduler is given the following inputs.
\begin{itemize}
\item
The available capacity $C$ along the path. This information can be provided by the \gls{bbr} congestion control algorithm \cite{cardwell2016bbr}, which estimates the bottleneck capacity and minimum \gls{rtt} directly to avoid congestion, and promises a low transmission delay which is crucial for latency-sensitive applications. A possible alternative is the use of other latency-aware mechanisms such as the classic Vegas algorithm, estimating the capacity indirectly from the congestion windows. QUIC natively supports both mechanisms.
\item
The \emph{size vector} $\mathbf{s} \in \mathbb{N}^N$, which contains the sizes of the objects, in bits. Clearly, the amount of data sent over the connection should not exceed its capacity $C$, to avoid congestion.
\item The \emph{weight vector} $\mathbf{v} \in [0; 1]^N$, which contains the \emph{intrinsic value} of the objects, i.e., the \gls{voi} when considering only that source. The intrinsic \gls{voi} is determined based on a number of factors, such as the position of the sensor (e.g., front sensors in an autonomous vehicle are generally more informative than side sensors for driving decisions, or the finger sensors in a haptic application are more informative than wrist sensors), the current state of the process (e.g., the presence of an interesting object in a camera's field of view, or the detection of a sudden gesture in haptic applications). The intrinsic \gls{voi} can also depend on the \emph{time correlation} of the sample process. If the process is slow varying, consecutive readings from the same sensor can be highly correlated and, hence, easily predictable by the receiver. Although the relation between the time since the last update from a sensor and the correlation with the new reading is highly application-dependent, it is often assumed to follow an exponential decrease~\cite{giordani2019investigating}. Some control applications have inbuilt compensation mechanisms for delay, which do not require new measurements until a certain time has passed, so the correlation for these cases can be modeled as a step function. A sigmoid function can then be used to model an imperfect compensation mechanism with a gentler degradation curve.
Given the specificities of the different applications, we assume that the intrinsic \gls{voi} is determined by the application itself, and passed to the scheduling algorithm in the form of the weight vector. In the next section we will provide examples of how these values can be computed in two use-cases.
\item The \emph{cross-sensor correlation matrix} $\mathbf{W}\in [0; 1]^{N \times N}$, which contains the correlation between objects. Indeed, if the application relies on multiple sensors, there is often a significant amount of redundancy in their information. For example, multiple cameras might have partially overlapping \glspl{fov}, or scalar sensors might be measuring correlated quantities. Therefore, the intrinsic \gls{voi} of some data may need to be discounted to account for the cross-sensor correlation, because the effective \gls{voi} of two correlated measurements can be lower than the sum of the \glspl{voi} of the two measurements taken separately.
\end{itemize}
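As an illustration, the three temporal models mentioned above (exponential, step, and sigmoid) could be instantiated as follows; parameter names are of our choosing. The effective value of a fresh reading grows as this correlation decays:

```python
import math

def time_correlation(age, mode="exponential", lam=1.0, deadline=1.0, k=10.0):
    """Correlation between the last delivered reading and a new one.

    age: time since the last update. Three illustrative models:
    - "exponential": generic slowly varying processes;
    - "step": a control loop that perfectly compensates delay up to
      a deadline and needs no update before it;
    - "sigmoid": an imperfect compensation mechanism with a gentler
      degradation around the deadline (steepness k).
    """
    if mode == "exponential":
        return math.exp(-lam * age)
    if mode == "step":
        return 1.0 if age < deadline else 0.0
    if mode == "sigmoid":
        return 1.0 / (1.0 + math.exp(k * (age - deadline)))
    raise ValueError(f"unknown mode: {mode}")
```

All three models are normalized to $[0, 1]$, so they can directly scale the intrinsic value entries of the weight vector.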
Finding the optimal schedule can then be reduced to finding the set of objects that maximizes the \gls{voi} while respecting the delay requirements. If we limit the analysis to couples of objects, i.e., we do not consider the effects of triplets of correlated objects, this is an instance of the \gls{qkp}~\cite{pisinger2007quadratic}, which is NP-hard, but efficient heuristics to solve it have been developed. Fig.~\ref{fig:scheduler} shows the basic structure of the proposed scheduling framework: multiple sensors write data with a given \gls{voi} to a QUIC socket, and the application supplies the cross-sensor correlation matrix $\mathbf{W}$. The available capacity is read from the \gls{bbr} estimate, and the scheduler finds the optimal set of objects that can be delivered before the next sensor update, sending them through the connection as fast as congestion control allows. If the connection is lossy or time-varying, scheduling decisions can be revised by recomputing the solution to the problem based on what was already sent.
To the best of our knowledge, the scheduling of multi-sensory data at the transport layer is a new research topic, which requires the study of the application and sensor features as well as the dynamics of the end-to-end capacity. \gls{est} is a first step in that direction, considering correlated measurements in time and across multiple sensors and using congestion control to estimate the capacity. The scheduling framework is relatively simple, but it can support a wide range of applications, guaranteeing reliability and maximizing the delivered \gls{voi}.
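As an illustration, the per-batch selection can be approximated with a greedy density heuristic for this quadratic-knapsack instance. The simplification below, discounting each candidate by its strongest correlation with already selected objects, is ours, not a full \gls{qkp} solver:

```python
def schedule(sizes, values, corr, capacity):
    """Greedily pick objects maximizing effective VoI per bit.

    sizes[i]: size of object i in bits; values[i]: intrinsic VoI in [0, 1];
    corr[i][j]: cross-sensor correlation in [0, 1]; capacity: bit budget C.
    Returns the indices of the selected objects, in selection order.
    """
    chosen, used = [], 0
    remaining = set(range(len(sizes)))
    while remaining:
        best, best_density = None, 0.0
        for i in remaining:
            if used + sizes[i] > capacity:
                continue  # would exceed the estimated capacity
            # Discount the intrinsic VoI by the strongest correlation
            # with any object already scheduled in this batch.
            discount = max((corr[i][j] for j in chosen), default=0.0)
            density = values[i] * (1.0 - discount) / sizes[i]
            if density > best_density:
                best, best_density = i, density
        if best is None:
            break  # nothing fits or adds value
        chosen.append(best)
        used += sizes[best]
        remaining.remove(best)
    return chosen
```

For example, with two highly correlated high-value objects and one independent cheaper one, the heuristic picks one of the correlated pair and then prefers the independent object, since the second correlated object's effective VoI is heavily discounted.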
\begin{figure}[!t]
\centering
\resizebox{0.99\columnwidth}{!}{
\footnotesize{
\tikzset{
block/.style = {draw, rectangle, minimum height = 1em, minimum width = 1em}}
\begin{tikzpicture}[auto]
\coordinate(empty2) at (5, 0) {};
\node at (4.5,0) [draw, minimum width=4cm, minimum height=4cm,fill={gray!20}] (sched) {};
\node at (4.5,1) [draw, minimum width=3cm, minimum height=0.5cm,fill=white] (capacity) {Residual capacity};
\node at (4.5,-1) [draw, minimum width=3cm, minimum height=1.5cm,fill={blue!20}] (c1) {2};
\node at (4.5,-0.25) [draw, minimum width=3cm, minimum height=0.75cm,fill={red!20}] (c2) {3};
\node at (4.5,0.5) [draw, minimum width=3cm, minimum height=0.75cm,fill={red!20}] (c3) {1};
\node at (4.5,0.5) [draw,dashed, minimum width=5cm, minimum height=5.5cm] (quic) {};
\node at (6,2.75) [draw, minimum width=1cm, minimum height=0.5cm,fill={green!20}] (bbr) {BBR};
\node at (3,4) [draw, minimum width=1cm, minimum height=0.5cm,fill={green!20}] (app) {Appl.};
\node at (0,0) [draw, minimum width=1cm, minimum height=0.5cm,fill={green!20}] (s3) {Sensor 3};
\node at (0,0.75) [draw, minimum width=1cm, minimum height=0.5cm,fill={blue!20}] (s2) {Sensor 2};
\node at (0,-0.75) [draw, minimum width=1cm, minimum height=0.5cm,fill={red!20}] (s4) {Sensor 4};
\node at (0,1.5) [draw, minimum width=1cm, minimum height=0.5cm,fill={green!20}] (s1) {Sensor 1};
\node at (0,-1.5) [draw, minimum width=1cm, minimum height=0.5cm,fill={red!20}] (s5) {Sensor 5};
\node at (4.5,1.75) [minimum width=2cm, minimum height=1cm] (slab) {\textbf{Scheduling}};
\node at (4.5,3) [minimum width=2cm, minimum height=1cm] (qlab) {\textbf{QUIC socket}};
\node at (7.75,0) [draw, minimum width=1cm, minimum height=0.5cm,fill={blue!20}] (d1) {2};
\node at (8.5,0) [draw, minimum width=0.5cm, minimum height=0.5cm,fill={red!20}] (d2) {3};
\node at (9,0) [draw, minimum width=0.5cm, minimum height=0.5cm,fill={red!20}] (d3) {1};
\draw[->] (s1.east) to node[pos=0.4,above] {Data, VoI} (2.5,1.5);
\draw[-] (sched.east) to (d1.west);
\draw[->] (d3.east) to node[midway,above] {Sending} (10.5,0);
\draw[->] (s2.east) to node[pos=0.4,above] {Data, VoI} (2.5,0.75);
\draw[->] (s3.east) to node[pos=0.4,above] {Data, VoI} (sched.west);
\draw[->] (s4.east) to node[pos=0.4,above] {Data, VoI} (2.5,-0.75);
\draw[->] (s5.east) to node[pos=0.4,above] {Data, VoI} (2.5,-1.5);
\draw[->] (bbr.south) to node[midway] {Capacity} (6,2);
\draw[->] (app.south) to node[pos=0.1] {Sensor correlation} (3,2);
\end{tikzpicture}
}
}
\caption{Basic components of the framework and main data exchanges. In the figure, the data from sensors 1 and 5 is discarded, while the data from sensors 2, 3, and 4 is sent in that order.}
\label{fig:scheduler}
\end{figure}
\section{Application Scenarios for QUIC-EST}
\label{sec:applications}
The scheduling problem is a well-studied subject for multi-stream applications~\cite{bai2010uplink} at the medium access layer, but considering temporal and cross-sensor correlation is a relatively new idea, which has never been applied at the transport layer, to the best of our knowledge.
In the following, we give two examples of applications that might benefit from the proposed QUIC-EST modifications and VoI-based scheduler, namely autonomous driving (Sec.~\ref{sub:quic_for_v2x}) and haptic communication (Sec.~\ref{sub:quic_for_ti}).
\subsection{QUIC-EST for Autonomous Driving}
\label{sub:quic_for_v2x}
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\textwidth]{scenarios_2.png}
\caption{Scheduling input parameters for the autonomous driving (left) and haptic communication (right) scenarios.}
\label{fig:scenarios}
\end{figure*}
The autonomous driving scheduling problem can be reformulated as follows.
\smallskip
\emph{Size vector.}
First, we define the size vector, which depends on the type of automotive sensor that is considered, the rate at which observations are captured, and their resolution. For example, while for low-resolution \gls{lidar} sensors the data rate is relatively low, at around 50 kbps, for high-resolution \gls{lidar} sensors, such as the Velodyne HDL-64E, the required data rate is around 30 Mbps.
For camera images, the data rate ranges from around 10 Mbps to 500 Mbps depending on the resolution~\cite{va2016millimeter}, even though JPG compression can reduce the image size by several orders of magnitude.
In this work, we consider $N=5$ sensors: one camera on the vehicle's top left corner ($lft$), one on the top right corner ($rgt$), one on the front side ($f$), one on the rear side ($r$), and one \gls{lidar} on the rooftop of the car ($L$).
The sizes of the sensors' observations are calculated based on the nuScenes dataset~\cite{caesar2020nuscenes}, which includes data from a full autonomous vehicle sensor suite. Assuming a 1-Byte pixel encoding and JPG compression for the camera images, we consider a size of 180 KB for the front/rear cameras, 140 KB for the lateral cameras, and 1300 KB for the \gls{lidar}, as depicted in Fig.~\ref{fig:scenarios} (left).
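These figures already suggest why scheduling is needed. A quick back-of-the-envelope check (assuming, hypothetically, that a full sensor batch is produced at 10 Hz; the actual sensor rates are application-dependent) shows the raw rate required to send everything:

```python
# Object sizes from the nuScenes-based estimates above, in bytes
sizes = {"front_cam": 180_000, "rear_cam": 180_000,
         "left_cam": 140_000, "right_cam": 140_000, "lidar": 1_300_000}

batch_bits = 8 * sum(sizes.values())       # bits per full sensor batch
batch_rate_hz = 10                         # hypothetical batch generation rate
required_bps = batch_bits * batch_rate_hz  # rate needed to send every object
# 1,940,000 B/batch -> 15.52 Mbit/batch -> 155.2 Mbps at 10 Hz
```

Whenever the estimated capacity $C$ falls below this figure, the scheduler must drop or defer some objects, which is exactly the \gls{voi}-maximization problem above.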
\smallskip
\emph{Intrinsic \gls{voi}.}
In our scheduler, data transmissions are prioritized based not only on the intrinsic characteristics of the different sensors, but also on the correlation among consecutive observations.
This information is contained in the weight vector. In particular, we reasonably expect the \gls{lidar} to be more valuable than the automotive cameras, because it can provide a three-dimensional (rather than two-dimensional) representation of the environment and can work efficiently in different weather/time conditions.
Also, we assume that different cameras might have different priorities depending on the characteristics of the environment in which the vehicles move (e.g., for the majority of the time, lateral cameras in the highway scenario likely make background observations with very similar characteristics, while frontal/rear cameras' acquisitions might incorporate more valuable information). Based on this assumption, the weight vector $\mathbf{v}\in[0,1]^N$ can be built empirically, resulting in the values shown in Fig.~\ref{fig:scenarios} (left).
The \gls{voi} should also depend on the temporal obsolescence of the information.
Based on the results in~\cite{giordani2019investigating}, we suggest using an exponential function that depends on the relative age of information, i.e., the time between the generation and reception of the information, and a temporal decay parameter that is proportional to the delay sensitivity of the observation, i.e., the temporal horizon over which that piece of information is considered valuable.
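A minimal sketch of this temporal decay follows; the exact functional form and the parameter names (`tau` for the decay parameter) are illustrative assumptions, since the text only prescribes an exponential decay in the relative age of information:

```python
import math

def temporal_voi(intrinsic_voi, t_generated, t_received, tau):
    """Exponential decay of the VoI with the relative age of information.

    `tau` is the temporal decay parameter, proportional to the delay
    sensitivity of the observation (an illustrative assumption).
    """
    age = t_received - t_generated  # relative age of information
    return intrinsic_voi * math.exp(-age / tau)

# A fresh observation keeps its full value; one delayed by tau seconds
# decays to 1/e (~37%) of it.
print(temporal_voi(1.0, 0.0, 0.0, tau=0.1))  # 1.0
```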
\smallskip
\emph{Cross-sensor correlation.}
As the name suggests, the cross-sensor correlation matrix represents the degree of correlation among the different sensors. For cameras, the correlation depends on the \gls{fov}. We assume that the rear camera is uncorrelated with the other cameras, that there is very little overlap between the lateral cameras, and that there is partial overlap between the lateral and front cameras.
On the other hand, \gls{lidar} sensors operate through 360-degree rotations and their observations can be highly correlated/redundant with those of the cameras.
The correlation matrix is hence structured as displayed in Fig.~\ref{fig:scenarios} (left).
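The qualitative structure of such a matrix can be sketched as follows; the numeric entries are placeholders of ours, and only the pattern of which sensor pairs overlap (and roughly how much) follows the text:

```python
# Illustrative structure of the cross-sensor correlation matrix for the
# sensor order [front, rear, left, right, lidar]. Numeric values are
# placeholders; only the qualitative structure follows the text.
FRONT, REAR, LEFT, RIGHT, LIDAR = range(5)
corr = [
    # front rear  left  right lidar
    [1.0,  0.0,  0.3,  0.3,  0.8],  # front: partial overlap with laterals
    [0.0,  1.0,  0.0,  0.0,  0.8],  # rear: uncorrelated with other cameras
    [0.3,  0.0,  1.0,  0.1,  0.8],  # left: very little overlap with right
    [0.3,  0.0,  0.1,  1.0,  0.8],  # right
    [0.8,  0.8,  0.8,  0.8,  1.0],  # lidar: 360-degree FoV overlaps all
]
# Symmetric with unit diagonal, as expected for a correlation matrix.
assert all(corr[i][j] == corr[j][i] for i in range(5) for j in range(5))
```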
\subsection{QUIC-EST for Haptic Communication}
\label{sub:quic_for_ti}
For the haptic communication scenario, the scheduling problem can be reformulated as follows.
\smallskip
\emph{Size vector.} In this scenario, the size vector should depend on the number of sensors and actuators integrated in the haptic devices. The CyberGrasp~\cite{CyberGrasp2013} device combines a haptic glove, which senses the orientation and movement of the hand using 22 sensors, with an exoskeleton with five kinesthetic actuators that provide force feedback to the user. Taking into account only the 22 movement sensors, each transmitting one floating-point value (i.e., typically 32 bits using the IEEE 754 standard) at the typical 1 kHz sampling rate, for two haptic devices (one for each hand) we have $N = 44$, resulting in a 1.4 Mbps data stream, as represented in Fig.~\ref{fig:scenarios} (right).
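The 1.4 Mbps figure follows directly from the stated parameters:

```python
# Quick check of the 1.4 Mbps figure: 22 sensors per glove, two gloves,
# one 32-bit IEEE 754 float per sensor, sampled at 1 kHz.
n_sensors = 22 * 2        # N = 44
bits_per_sample = 32      # IEEE 754 single precision
sampling_hz = 1000        # 1 kHz

rate_mbps = n_sensors * bits_per_sample * sampling_hz / 1e6
print(n_sensors, rate_mbps)  # 44 1.408  (~1.4 Mbps)
```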
\smallskip
\emph{Intrinsic \gls{voi}.} In order to determine the \gls{voi} of each haptic data sample generated by the haptic device's sensors, we rely on the psychophysical aspects of human perception. More specifically, we can use Weber's law of \gls{jnd}, as in the deadband transmission algorithm in \cite{Hinterseer2005}, which can be applied to position, velocity, and force data. The \gls{voi} is then given by the difference between the last transmitted sample from that sensor and the current value, which can be easily computed by the sending application and given to the scheduler; sensors have the same inherent \gls{voi}, but the actual value of the information depends on how novel it is with respect to the one currently available to the receiver. This definition implicitly includes the time correlation between samples, as the difference between consecutive samples will usually be small, but then grow with time consistently with the age of the data.
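A deadband-style sketch of this idea is given below; the JND normalization and the function name are our illustrative assumptions, not the exact algorithm of the cited deadband scheme:

```python
def deadband_voi(current, last_sent, jnd_fraction, dynamic_range):
    """Deadband-style VoI sketch inspired by Weber's law of JND.

    The value of a new sample is its distance from the last transmitted
    one, normalized by the JND threshold; a result >= 1.0 means the
    change is perceptible and the sample should be prioritized.
    Normalization and names are illustrative assumptions.
    """
    jnd = jnd_fraction * dynamic_range
    return abs(current - last_sent) / jnd

# A sample that drifted by exactly one JND (5% of a unit range) has value ~1.
print(round(deadband_voi(0.55, 0.50, 0.05, 1.0), 6))  # 1.0
```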
\smallskip
\emph{Cross-sensor correlation.} In the haptic communication case, the flexibility of a robotic hand makes the relation between different sensors strongly dependent on their position. If the hand is grasping an object, the correlation between sensors will be different from when it is at rest. Consequently, we cannot give a constant cross-sensor correlation matrix based on the sensors' positions, like we did in the vehicular case; if the application can compute the instantaneous correlation between sensors in real time, the scheduler will use it to improve performance, but if it cannot, it will simply consider the measurements independent.
\section{Performance Evaluation}
\label{sec:performance_evaluation}
In this section, we present a performance evaluation of QUIC-EST, comparing our scheduler with alternative algorithms and showing its benefits in terms of delivered \gls{voi}. We evaluate the system in the two scenarios presented in Sec.~\ref{sec:applications}, which have extremely different features. While realistic, the assumptions about the two scenarios are arbitrary, and the purpose of showing them is to illustrate the methodology from a qualitative perspective, rather than giving a complete quantitative assessment of the scheme. The autonomous vehicle in the first scenario transmits only 10 frames per second but with a maximum rate of 155.2 Mbps; on the other hand, haptic communication has a maximum rate of just 1.4 Mbps, but its sample frequency is 100 times faster, i.e., 1 kHz. Furthermore, while the haptic communication scenario has 44 different sensors that need to be scheduled, the autonomous driving scenario only has 5.
We can consider these scenarios as examples of the two types of traffic patterns supported by 5G: the high-throughput \gls{v2x} traffic, which can tolerate a higher latency due to the lower frame rate, fits the requirements of \gls{embb} traffic, while the low-throughput haptic communication traffic, which requires millisecond latency, is a perfect example of \gls{urllc}.
In both scenarios, we study the average \gls{voi} as a function of the available (constant) capacity.
We compare four different scheduler implementations:
\begin{itemize}
\item \emph{\gls{fifo}.} This is the default QUIC scheduler, which transmits pieces of data in the same order they were received from the application. It limits transmission to the achievable send rate, discarding any objects that would exceed the connection's capacity. We consider this as a baseline, as its behavior is similar to \gls{tcp}, but without head-of-line blocking.
\item \emph{\gls{voi}-based.} This scheduler considers the \gls{voi} as the decision factor for transmitting objects that fit the transmission capacity. It is an instance of the classic knapsack problem, as it does not consider cross-sensor correlation or even temporal correlation between values, but only the intrinsic value of each sensor.
\item \emph{Cross-sensor \gls{voi}.} This scheduler considers cross-sensor correlation, but neglects the temporal correlation. It is equivalent to the optimal scheduler if subsequent measurements from the same sensor are independent, and to the \gls{voi}-based scheduler if the sensors' measurements are independent as well.
\item \emph{\gls{est}.} This scheduler considers the \gls{voi} as well as the time and cross-sensor correlation. It corresponds to solving the \acrlong{qkp}, and gives the best performance if we consider the full application model.
\end{itemize}
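To make the difference between the first two schedulers concrete, the following toy sketch contrasts a FIFO selection with a greedy value-density heuristic; the sizes and values are illustrative, and the greedy heuristic is a simple stand-in for the actual knapsack solver (it ignores time and cross-sensor correlation, like the \gls{voi}-based scheduler):

```python
def fifo_schedule(objects, capacity):
    """FIFO baseline: send objects in arrival order while they fit,
    discarding those that would exceed the remaining capacity."""
    sent, used = [], 0
    for name, size, voi in objects:
        if used + size <= capacity:
            sent.append(name)
            used += size
    return sent

def voi_schedule(objects, capacity):
    """VoI-based sketch: greedy knapsack by value density (VoI per unit
    size). A stand-in for the knapsack formulation, not the exact solver."""
    ordered = sorted(objects, key=lambda o: o[2] / o[1], reverse=True)
    sent, used = [], 0
    for name, size, voi in ordered:
        if used + size <= capacity:
            sent.append(name)
            used += size
    return sent

# Toy objects as (name, size in KB, intrinsic VoI); numbers are illustrative.
objs = [("lidar", 1300, 1.0), ("front", 180, 0.8), ("rear", 180, 0.6),
        ("left", 140, 0.3), ("right", 140, 0.3)]

print(fifo_schedule(objs, 1600))  # ['lidar', 'front']
print(voi_schedule(objs, 1600))   # ['front', 'rear', 'left', 'right']
```

With a 1600 KB budget, FIFO spends most of the capacity on the large lidar frame, while the value-density heuristic delivers four camera frames with a higher total VoI.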
\begin{figure}[!t]
\centering
\includegraphics{v2x-one.tex}
\caption{Comparison between schedulers in the autonomous driving scenario in terms of the normalized \gls{voi} as a function of capacity.}
\label{fig:v2x_one}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics{v2x-two.tex}
\caption{Average update frequency for the different schedulers in the autonomous driving scenario, with $C=100$ Mbps.}
\label{fig:v2x_two}
\end{figure}
The performance for the autonomous driving scenario is shown in Fig.~\ref{fig:v2x_one}: as we consider capacities from 20 to 120 Mbps, the normalized \gls{voi}, defined as the average \gls{voi} divided by the \gls{voi} for a connection with infinite capacity, grows for all schedulers, but cross-sensor correlation is clearly the most important factor. As Fig.~\ref{fig:v2x_two} shows, this stark difference is due to the frequency at which the schedulers pick \gls{lidar} frames, which are large and highly correlated with the data from the cameras, as the \gls{lidar} has a $360$-degree \gls{fov}. If capacity is limited, prioritizing \gls{lidar} frames leaves less room for camera frames: while camera frames have a high joint value, the \gls{lidar} provides highly correlated data with relatively little new information. Schedulers that consider cross-correlation only send a limited number of \gls{lidar} frames, maximizing the joint \gls{voi} and making the best use of the capacity.
For the haptic communication scenario, we consider the \gls{voi} as a logistic function of the difference between the current sample and the last transmitted one. We used realistic haptic traffic model parameters from~\cite{abu2009empirical} and conservatively selected a \acrfull{jnd} value of 5\% of the dynamic range of the sensors. Accordingly, we simulate each sensor as an independent Gauss-Markov process, setting $\sigma=2.15$\% of the dynamic range to fit the empirical model from the paper. The \gls{voi} is then given by a logistic function with center $x_0=1.65\sigma$ and sharpness $k=10$. We chose these values to ensure that all sensors with differences over the \gls{jnd} are prioritized, while sensors that are close to the threshold can be sent if the capacity allows it. In the haptic communication scenario, there is no cross-sensor correlation, and all sensors have the same intrinsic \gls{voi}, so the \gls{fifo}, \gls{voi}-based, and cross-sensor \gls{voi} schedulers are equivalent.
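The logistic VoI with the stated parameters can be sketched as follows; how the sample difference is normalized with respect to the dynamic range is our assumption:

```python
import math

def logistic_voi(diff, sigma=0.0215, x0_factor=1.65, k=10.0):
    """Logistic VoI of a haptic sample with the parameters stated above:
    center x0 = 1.65*sigma, sharpness k = 10, sigma = 2.15% of the
    dynamic range. The normalization of `diff` is an assumption."""
    x0 = x0_factor * sigma
    return 1.0 / (1.0 + math.exp(-k * (diff - x0)))

# At the center of the logistic, the value is exactly 0.5...
print(logistic_voi(1.65 * 0.0215))  # 0.5
# ...and it saturates toward 1 for large differences.
print(round(logistic_voi(1.0), 4))
```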
As Fig.~\ref{fig:ti_perf} shows, the \gls{est} scheme can achieve an almost perfect \gls{qoe} (i.e., most of the sensors are under the \gls{jnd} threshold most of the time) even at less than a third of the capacity needed to send all packets. In this case, the time correlation is critical: all other schedulers, which do not use a \gls{jnd}-based value, have a lower performance. This gain is the minimum for the scheme, as there are no cross-sensor correlations to exploit, and the introduction of more complex models that can take them into account could further decrease the resources that the application would need to request.
\begin{figure}[!t]
\centering
\includegraphics{tactile.tex}
\caption{Comparison between schedulers in the haptic communication scenario in terms of the normalized \gls{qoe} as a function of capacity.}
\label{fig:ti_perf}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have presented \gls{est}, a flexible transport-layer protocol based on the emerging QUIC protocol and meant for multi-sensory applications with time-sensitive data. We showed that this scheme, combined with a new scheduling mechanism that biases transmission decisions based on VoI and temporal/cross-sensor correlation, can be adapted to widely different applications with good results, using autonomous driving and haptic communication as our Future Internet use cases. At the moment, the transport layer is still an unexplored topic for this kind of application, as research has focused on enabling such applications over single wireless links, but it will be a crucial factor as their deployment in real networks begins.
There are several possible avenues of future work which we plan to pursue. First, we will test \gls{est} in a more realistic setting, using data from real applications and full system-level simulations, to give it a more robust evaluation. Another key aspect is reliability, as the distribution of the capacity might play a role when scheduling over time-varying links, such as those of most wireless networks. Finally, the combination of this protocol with network slicing techniques would provide applications with reliability guarantees, which can be fundamental for safety-critical applications such as \gls{v2x} for autonomous~vehicles.
\bibliographystyle{IEEEtran}
\section{Introduction}
Named Entity Recognition (NER, \citealt{florian2006factorizing,florian2010improving}) and
Relation Extraction (RE, \citealt{zhao2005extracting,jiang2007systematic,sun2011semi,plank2013embedding})
are two fundamental tasks in Information Extraction (IE).
Both tasks aim to extract structured information from unstructured texts.
One typical approach is to first identify entity mentions,
and next perform classification between every two mentions to extract relations,
forming a pipeline \cite{zelenko2003kernel,chan2011exploiting}.
An alternative and more recent approach is to perform these two tasks jointly \cite{li2014incremental,miwa2014modeling,miwa2016end},
which mitigates the error propagation issue associated with the pipeline approach and leverages the interaction between tasks,
resulting in improved performance.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{example}
\caption{An example of table filling for NER and RE.}
\label{fig:example}
\end{figure}
Among several joint approaches, one popular idea is to cast NER and RE as a table filling problem \cite{miwa2014modeling,gupta2016table,zhang2017end}.
Typically, a two-dimensional (2D) table is formed where each entry captures the interaction between two individual words within a sentence.
NER is then regarded as a sequence labeling problem where tags are assigned along the diagonal entries of the table.
RE is regarded as the problem of labeling other entries within the table.
Such an approach allows NER and RE to be performed using a single model, enabling the potentially useful interaction between these two tasks.
One example\footnote{
The exact settings for table filling may be different for different papers.
Here we fill the entire table (rather than the lower half of the table), and assign relation tags to cells involving two complete entity spans (rather than part of such spans).
We also preserve the direction of the relations.}
is illustrated in Figure \ref{fig:example}.
Unfortunately, there are limitations with the existing joint methods. First, these methods typically suffer from \emph{feature confusion} as they use a single representation for the two tasks -- NER and RE. As a result, features extracted for one task may coincide or conflict with those for the other, thus confusing the learning model.
Second, these methods \emph{underutilize} the table structure as they usually convert it to a sequence and then use a sequence labeling approach to fill the table. However, crucial structural information (e.g.,
the 4 entries at the bottom-left corner of Figure \ref{fig:example} share the same label) in the 2D table might be lost during such conversions.
In this paper, we present a novel approach to address the above limitations.
Instead of predicting entities and relations with a single representation,
we focus on learning two types of representations, namely \emph{sequence representations} and \emph{table representations}, for NER and RE respectively.
On one hand, the two separate representations can be used to capture task-specific information.
On the other hand, we design a mechanism to allow them to interact with each other, in order to take advantage of the inherent association underlying the NER and RE tasks.
In addition, we employ neural network architectures that can better capture the structural information within the 2D table representation.
As we will see, such structural information (in particular the context of neighboring entries in the table) is essential in achieving better performance.
The recent prevalence of BERT \cite{bert} has led to great performance gains on various NLP tasks.
However, we believe that the previous use of BERT, i.e., employing the contextualized word embeddings, does not fully exploit its potential.
One important observation here is that the pairwise self-attention weights maintained by BERT carry knowledge of \emph{word-word interactions}.
{Our model can effectively use such knowledge, which helps to better learn table representations.}
To the best of our knowledge, this is the first work to use the attention weights of BERT for learning table representations.
We summarize our contributions as follows:
\squishlist
\item
We propose to learn two separate encoders -- a table encoder and a sequence encoder. They interact with each other, and can capture task-specific information for the NER and RE tasks;
\item
We propose to use multidimensional recurrent neural networks to better exploit the structural information of the table representation;
\item
We effectively leverage the word-word interaction information carried in the attention weights from BERT, which further improves the performance.
\squishend
Our proposed method achieves the state-of-the-art performance on four datasets, namely ACE04, ACE05, CoNLL04, and ADE.
We also conduct further experiments to confirm the effectiveness of our proposed approach.
\section{Related Work}
NER and RE can be tackled by using separate models.
By assuming gold entity mentions are given as inputs,
RE can be regarded as a classification task.
Such models include kernel methods \cite{zelenko2003kernel}, RNNs \cite{zhang2015relation},
recursive neural networks \cite{socher2012semantic}, CNNs \cite{zeng2014relation},
and Transformer models \cite{verga2018simultaneously,wang2019extracting}.
Another branch is to detect cross-sentence level relations \cite{peng2017cross,gupta2019neural}, and even document-level relations \cite{yao2019docred,nan-etal-2020-reasoning}.
However, entities are usually not directly available in practice,
so these approaches may require an additional entity recognizer to form a pipeline.
Joint learning has been shown effective since it can alleviate the error propagation issue and benefit from exploiting the interrelation between NER and RE.
Many studies address the joint problem through a cascade approach, i.e., performing NER first followed by RE.
\citet{miwa2016end} use bi-LSTM \cite{graves2013speech} and tree-LSTM \cite{tai2015improved} for the joint task.
\citet{bekoulis2018adversarial,bekoulis2018joint} formulate it as a head selection problem.
\citet{nguyen2019end} apply biaffine attention \cite{dozat2016deep} for RE.
\citet{luan2019general}, \citet{dixit2019span}, and \citet{wadden2019entity} use span representations to predict relations.
\citet{miwa2014modeling} tackle joint NER and RE from a table filling perspective,
where the entry at row $i$ and column $j$ of the table corresponds to the pair of $i$-th and $j$-th word of the input sentence.
The diagonal of the table is filled with the entity tags and the rest with the relation tags indicating possible relations between word pairs.
Similarly, \citet{gupta2016table} employ a bi-RNN structure to label each word pair.
\citet{zhang2017end} propose a global optimization method to fill the table.
\citet{tran2019neural} investigate CNNs on this task.
Recent work \cite{luan2019general,dixit2019span,wadden2019entity,li2019entity,eberts2019span}
usually leverages pre-trained language models such as ELMo \cite{elmo}, BERT \cite{bert}, RoBERTa \cite{roberta}, and ALBERT \cite{albert}.
However, none of them use pre-trained attention weights, which convey rich relational information between words.
We believe it can be useful for learning better table representations for RE.
\section{Problem Formulation}
In this section, we formally formulate the NER and RE tasks.
We regard NER as a sequence labeling problem, where the gold entity tags $\v y^{\text{NER}}$ are in the standard BIO (Begin, Inside, Outside) scheme \cite{sang1999representing,ratinov2009design}.
For the RE task, we mainly follow the work of \citet{miwa2014modeling} to formulate it as a table filling problem.
Formally, given an input sentence $\v x = [x_i]_{1 \le i \le N}$,
we maintain a tag table $\v y^{\text{RE}} = [y^{\text{RE}}_{i,j}]_{1 \le i,j \le N}$.
Suppose there is a relation of type $r$ pointing from mention $x_{i^b},\ldots,x_{i^e}$ to mention $x_{j^b},\ldots,x_{j^e}$;
then we have $y^{\text{RE}}_{i,j} = \overrightarrow{r}$ and $y^{\text{RE}}_{j,i} = \overleftarrow{r}$ for all $i \in [i^b , i^e] \wedge j \in [j^b, j^e]$.
We use $\bot$ for word pairs with no relation.
An example was given earlier in Figure \ref{fig:example}.
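The table-filling formulation above can be sketched in a few lines of code; tag strings like `'->r'` and `'bot'` (standing in for the paper's $\overrightarrow{r}$ and $\bot$ symbols) and the 1-based span convention are our illustrative choices:

```python
def build_re_table(n, relations, no_rel="bot"):
    """Fill the n x n relation tag table as formulated above (sketch).

    `relations` contains tuples (r, (i_b, i_e), (j_b, j_e)) with 1-based
    inclusive word spans: a relation of type r from the first span to the
    second yields tag '->r' at every (i, j) and '<-r' at every (j, i).
    'bot' stands in for the no-relation tag.
    """
    table = [[no_rel] * (n + 1) for _ in range(n + 1)]  # 1-based indexing
    for r, (ib, ie), (jb, je) in relations:
        for i in range(ib, ie + 1):
            for j in range(jb, je + 1):
                table[i][j] = "->" + r
                table[j][i] = "<-" + r
    return table

# Toy sentence of 5 words with one Live_In relation from span (1,2) to (4,5).
t = build_re_table(5, [("Live_In", (1, 2), (4, 5))])
print(t[1][4], t[4][1], t[1][3])  # ->Live_In <-Live_In bot
```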
\section{Model}
We describe the model in this section.
The model consists of two types of interconnected encoders, a table encoder for table representation and a sequence encoder for sequence representation, as shown in Figure \ref{fig:network}.
Collectively, we call them {\em table-sequence encoders}.
Figure \ref{fig:encoder} presents the details of each layer of the two encoders, and how they interact with each other.
In each layer, the table encoder uses the sequence representation to construct the table representation;
and then the sequence encoder uses the table representation to contextualize the sequence representation.
With multiple layers, we incrementally improve the quality of both representations.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{encoder}
\caption{A layer in the table-sequence encoders.}
\label{fig:encoder}
\end{figure}
\subsection{Text Embedder}
For a sentence containing $N$ words $\v x = [x_i]_{1 \le i \le N}$, we define the word embeddings $\v x^w \in \mathbb{R}^{N \times d_1}$, as well as character embeddings $\v x^c \in \mathbb{R}^{N \times d_2}$ computed by an LSTM \cite{lample2016neural}.
We also consider the contextualized word embeddings $\v x^\ell \in \mathbb{R}^{N \times d_3}$, which can be produced from language models such as BERT.
We concatenate those embeddings for each word and use a linear projection to form the initial sequence representation $\v{S}_0 \in \mathbb{R}^{N \times H}$:
\begin{equation}
\v{S}_0= \linear([\v x^{c}; \v x^{w}; \v x^{\ell}])
\end{equation}
where each word is represented as an $H$ dimensional vector.
\subsection{Table Encoder} \label{sec:table}
The table encoder, shown in the left part of Figure \ref{fig:encoder}, is a neural network used to learn a table representation, an $N \times N$ table of vectors, where the vector at row $i$ and column $j$ corresponds to the $i$-th and $j$-th word of the input sentence.
We first construct a non-contextualized table by concatenating every two vectors of the sequence representation followed by a fully-connected layer to halve the hidden size.
Formally, for the $l$-th layer, we have $\v{X}_l \in \mathbb{R}^{N \times N \times H}$, where:
\begin{equation}
X_{l,i,j} = \relu(\linear([S_{l-1,i};S_{l-1,j}])) \label{math:input_mdrnn}
\end{equation}
Next, we use the Multi-Dimensional Recurrent Neural Networks (MD-RNN, \citealt{graves2007multi}) with Gated Recurrent Unit (GRU, \citealt{gru}) to contextualize $\v{X}_l$.
We iteratively compute the hidden states of each cell to form the contextualized table representation $\v{T}_{l}$, where:
\begin{equation}
\resizebox{0.89\hsize}{!}{$T_{l,i,j} = \gru(X_{l,i,j}, T_{l-1,i,j}, T_{l,i-1,j}, T_{l,i,j-1})$} \label{math:mdrnn}
\end{equation}
We provide the multi-dimensional adaptations of GRU in Appendix \ref{sec:mdrnn} to avoid excessive formulas here.
Generally, it exploits the context along \emph{layer}, \emph{row}, and \emph{column} dimensions.
That is, it does not consider only the cells at neighbouring rows and columns,
but also those of the previous layer.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{mdrnn}
\caption{How the hidden states are computed in MD-RNN with 4 directions.
We use ${D}^+$ or ${D}^-$ to indicate the direction that the hidden states flow between cells at the ${D}$ dimension (where $D$ can be \emph{layer}, \emph{row} or \emph{col}).
For brevity, we omit the input and the \emph{layer} dimension for cases (b), (c) and (d), as they are the same as (a).}
\label{fig:mdrnn}
\end{figure}
{The time complexity of the naive implementation (i.e., two for-loops) for each layer is $O(N \times N)$ for a sentence with length $N$.
However, antidiagonal entries\footnote{
We define antidiagonal entries to be entries at position $(i,j)$ such that $i+j=N+1+\Delta$,
where $\Delta \in [-N+1, N-1]$ is the offset to the main antidiagonal entries.
} can be calculated at the same time as they do not depend on each other.
Therefore, we can optimize it through parallelization and reduce the effective time complexity to $O(N)$.}
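The antidiagonal grouping behind this parallelization can be sketched as follows (a toy illustration of the argument, not the actual implementation):

```python
def antidiagonal_groups(n):
    """Group the cells (i, j), 1 <= i, j <= n, by antidiagonal (i + j).

    For the scan direction where a cell depends only on (i-1, j) and
    (i, j-1) within a layer, all cells on the same antidiagonal are
    mutually independent and can be computed in parallel: 2n - 1
    sequential steps instead of n * n.
    """
    groups = []
    for s in range(2, 2 * n + 1):  # s = i + j
        groups.append([(i, s - i)
                       for i in range(max(1, s - n), min(n, s - 1) + 1)])
    return groups

g = antidiagonal_groups(3)
print(len(g))  # 5 sequential steps (= 2n - 1)
print(g[2])    # [(1, 3), (2, 2), (3, 1)] -- the main antidiagonal
```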
The above illustration describes a unidirectional RNN, corresponding to Figure \ref{fig:mdrnn}(a).
Intuitively, we would prefer the network to have access to the surrounding context in all directions.
However, this could not be done by one single RNN.
For the case of 1D sequence modeling, this problem is resolved by introducing bidirectional RNNs.
\citet{graves2007multi} discussed quaddirectional RNNs to access the context from four directions for modeling 2D data.
Therefore, similar to 2D-RNN, we also need to consider RNNs in four directions\footnote{In our scenario, there is an additional layer dimension. However, as the model always traverses from the first layer to the last layer, only one direction shall be considered for the layer dimension.}.
We visualize them in Figure \ref{fig:mdrnn}.
Empirically, we found that the setting considering only cases (a) and (c) in Figure \ref{fig:mdrnn} achieves performance no worse than considering all four cases together.
Therefore, to reduce the amount of computation, we use such a setting as default.
The final table representation is then the concatenation of the hidden states of the two RNNs:
\begin{align}
&\resizebox{0.891\hsize}{!}{$T^{(a)}_{l,i,j} = \gru^{(a)}(X_{l,i,j}, T^{(a)}_{l-1,i,j}, T^{(a)}_{l,i-1,j}, T^{(a)}_{l,i,j-1})$} \\
&\resizebox{0.891\hsize}{!}{$T^{(c)}_{l,i,j} = \gru^{(c)}(X_{l,i,j}, T^{(c)}_{l-1,i,j}, T^{(c)}_{l,i+1,j}, T^{(c)}_{l,i,j+1})$} \\
&T_{l,i,j} = [T^{(a)}_{l,i,j}; T^{(c)}_{l,i,j}]
\end{align}
\subsection{Sequence Encoder}
The sequence encoder is used to learn the sequence representation -- a sequence of vectors,
where the $i$-th vector corresponds to the $i$-th word of the input sentence.
The architecture is similar to Transformer \cite{transformer}, shown in the right portion of Figure \ref{fig:encoder}.
However, we replace the scaled dot-product attention with our proposed {\em table-guided attention}.
Here, we mainly illustrate why and how the table representation can be used to compute attention weights.
First of all, given $\v Q$ (queries), $\v K$ (keys) and $\v V$ (values), a generalized form of attention is defined in Figure \ref{fig:attention}.
For each query, the output is a weighted sum of the values, where the weight assigned to each value is determined by the relevance (given by score function $f$) of the query with all the keys.
\begin{figure}
\includegraphics[width=0.95\linewidth]{attention2}
\caption{The generalized form of attention. The softmax function is used to normalize the weights of values $\v V$ for each query $Q_i$.}
\label{fig:attention}
\end{figure}
For each query $Q_i$ and key $K_j$, \citet{bahdanau2014neural} define $f$ in the form of:
\begin{align}
f(Q_i, K_j) &= U \cdot g(Q_i, K_j) \label{math:score}
\end{align}
where $U$ is a learnable vector and $g$ is the function to map each query-key pair to a vector.
Specifically, they define $g(Q_i, K_j) = \tanh(Q_i W_0 + K_j W_1)$, where $W_0, W_1$ are learnable parameters.
Our attention mechanism is essentially a self-attention mechanism, where the queries, keys and values are exactly the same. In our case, they are essentially sequence representation $S_{l-1}$ of the previous layer (i.e., $\v Q = \v K = \v V = \v S_{l-1}$).
The attention weights (i.e., the output from the function $f$ in Figure \ref{fig:attention}) are essentially constructed from both queries and keys (which are the same in our case).
On the other hand, we notice that the table representation $\v T_l$ is also constructed from $\v S_{l-1}$. So we can consider $\v T_l$ to be a function of queries and keys, such that $T_{l,i,j} = g(S_{l-1,i}, S_{l-1,j}) = g(Q_i, K_j)$.
Substituting this $g$ function back into Equation \ref{math:score}, we obtain the proposed table-guided attention, whose score function is:
\begin{align}
f(Q_i, K_j) &= U \cdot T_{l,i,j}
\end{align}
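A minimal single-head sketch of this mechanism, in pure Python for clarity (shapes and variable names are illustrative; a real implementation would be vectorized):

```python
import math

def table_guided_attention(T, U, V):
    """Toy sketch of table-guided attention: the score for query i and
    key j is the dot product of a learnable vector U with the table cell
    T[i][j]; scores are softmax-normalized over j and used to mix the
    values V. Shapes: T is N x N x d, U is d, V is N x h."""
    n = len(T)
    outputs = []
    for i in range(n):
        scores = [sum(u * t for u, t in zip(U, T[i][j])) for j in range(n)]
        m = max(scores)                       # for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        outputs.append([sum(weights[j] * V[j][k] for j in range(n))
                        for k in range(len(V[0]))])
    return outputs

# Two-word toy example where the table strongly favors key 0 for query 0
# and key 1 for query 1; the outputs then stay close to V[0] and V[1].
T = [[[5.0], [0.0]], [[0.0], [5.0]]]
U = [1.0]
V = [[1.0], [2.0]]
O = table_guided_attention(T, U, V)
print(round(O[0][0], 2), round(O[1][0], 2))
```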
We show the advantages of using this table-guided attention:
(1) we do not have to calculate $g$ function since $\v T_l$ is already obtained from the table encoder;
(2) $\v T_l$ is contextualized along the \emph{row}, \emph{column}, and \emph{layer} dimensions, which corresponds to {queries, keys, and queries and keys in the previous layer, respectively}. Such contextual information allows the network to better capture more difficult word-word dependencies;
(3) it allows the table encoder to participate in the sequence representation learning process, thereby forming the bidirectional interaction between the two encoders.
The table-guided attention can be extended to have multiple heads \cite{transformer}, where each head is an attention with independent parameters.
We concatenate their outputs and use a fully-connected layer to get the final attention outputs.
The remaining parts are similar to Transformer.
For layer $l$, we use position-wise feedforward neural networks (FFNN) after self-attention,
and wrap attention and FFNN with a residual connection \cite{resnet} and layer normalization (\citealt{layernorm}),
to get the output sequence representation:
\begin{align}
\tilde{\v S}_{l} &= \layernorm(\v S_{l-1} + \selfattn(\v S_{l-1})) \\
\v S_{l} &= \layernorm(\tilde{\v S}_{l} + \ffnn(\tilde{\v S}_{l}))
\end{align}
\subsection{Exploit Pre-trained Attention Weights}
In this section, we describe the dashed lines in Figures \ref{fig:network} and \ref{fig:encoder}, which we ignored in the previous discussions.
Essentially, they exploit information in the form of attention weights from a pre-trained language model such as BERT.
We stack the attention weights of all heads and all layers to form $\v{T}^{\ell} \in \mathbb{R}^{N \times N \times (L^{\ell} \times A^{\ell})}$, where $L^{\ell}$ is the number of stacked Transformer layers, and $A^{\ell}$ is the number of heads in each layer.
We leverage $\v{T}^{\ell}$ to form the inputs of MD-RNNs in the table encoder.
Equation \ref{math:input_mdrnn} is now replaced with:
\begin{equation}
\resizebox{0.866\hsize}{!}{$X_{l,i,j} = \relu(\linear([S_{l-1,i};S_{l-1,j};T^{\ell}_{i,j}]))$}
\label{math:input_mdrnn2}
\end{equation}
We keep the rest unchanged.
We believe this simple yet novel use of the attention weights allows us to effectively incorporate the useful word-word interaction information captured by pre-trained models such as BERT into our table-sequence encoders for improved performance.
\section{Training and Evaluation}
We use $\v{S}_{L}$ and $\v{T}_{L}$ to predict the probability distribution of the entity and relation tags:
\begin{align}
P_{\theta}({\v Y}^{\text{NER}}) &= \softmax(\linear(\v {S}_{L})) \\
P_{\theta}({\v Y}^{\text{RE}} ) &= \softmax(\linear(\v {T}_{L}))
\end{align}
where ${\v Y}^{\text{NER}}$ and ${\v Y}^{\text{RE}}$ are random variables of the predicted tags,
and ${P}_{\theta}$ is the estimated probability function with $\theta$ being our model parameters.
For training, both NER and RE adopt the prevalent cross-entropy loss.
Given the input text $\v x$ and its gold tag sequence $\v y^{\text{NER}}$ and tag table $\v y^{\text{RE}}$,
we then calculate the following two losses:
\begin{align}
\mathcal L^{\text{NER}} &= \sum_{i \in [1, N]} -\log{{P}_{\theta}(Y^{\text{NER}}_{i} = y^{\text{NER}}_{i})} \\
\mathcal L^{\text{RE}} &= \hspace{-8px} \sum_{i,j \in [1, N]; i \ne j} \hspace{-8px} -\log{{P}_{\theta}(Y^{\text{RE}}_{i,j} = y^{\text{RE}}_{i,j})}
\end{align}
The goal is to minimize both losses $\mathcal L^{\text{NER}} + \mathcal L^{\text{RE}}$.
During evaluation, the prediction of relations relies on the prediction of entities, so we first predict the entities,
and then look up the relation probability table ${P}_{\theta}({\v Y}^{\text{RE}})$ to see if there exists a valid relation between predicted entities.
Specifically, we predict the entity tag of each word by choosing the class with the highest probability:
\begin{equation}
\argmax_e {P}_{\theta}(Y^{\text{NER}}_{i} = e)
\end{equation}
The whole tag sequence can be transformed into entities with their boundaries and types.
Relations between predicted entities are then mapped to the relation classes with the highest probabilities over the words of those entities.
We also consider the two directed tags for each relation.
Therefore, for two entity spans $(i^{b}, i^{e})$ and $(j^{b}, j^{e})$, their relation is given by:
\begin{align}
\scalemath{0.89}{
\argmax_{\overrightarrow{r}}{
\hspace{-15px}
\sum_{
i \in [i^b , i^e], j \in [j^b , j^e]
}
\hspace{-20px}
{P}_{\theta}(Y^{\text{RE}}_{i,j} = \overrightarrow{r})
+ {P}_{\theta}(Y^{\text{RE}}_{j,i} = \overleftarrow{r} )
}
}
\end{align}
{where the no-relation type $\bot$ has no direction, so if $\overrightarrow{r} = \bot$, we have $\overleftarrow{r} = \bot$ as well.}
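For illustration only, the span-pair decoding rule above can be sketched as follows; the tag strings and the dictionary layout of the probability table are assumptions of this example, and the two spans are assumed not to overlap:

```python
def decode_relation(prob_re, span_a, span_b, directed_tags, no_rel="NONE"):
    """Pick the relation between spans (i_b, i_e) and (j_b, j_e) by summing
    P(Y_{i,j} = fwd) + P(Y_{j,i} = bwd) over all word pairs, then taking the
    argmax; the no-relation tag has no direction, so it is its own reverse."""
    (ib, ie), (jb, je) = span_a, span_b
    pairs = [(i, j) for i in range(ib, ie + 1) for j in range(jb, je + 1)]
    scores = {no_rel: sum(prob_re[i][j][no_rel] + prob_re[j][i][no_rel]
                          for i, j in pairs)}
    for fwd, bwd in directed_tags.items():   # maps each ->r to its <-r
        scores[fwd] = sum(prob_re[i][j][fwd] + prob_re[j][i][bwd]
                          for i, j in pairs)
    return max(scores, key=scores.get)
```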
\section{Experiments}
\subsection{Data}
We evaluate our model on four datasets, namely
ACE04 \cite{ace04}, ACE05 \cite{ace05}, CoNLL04 \cite{conll04} and ADE \cite{ade}.
More details can be found in Appendix \ref{sec:data}.
Following the established line of work, we use the F1 measure to evaluate the performance of NER and RE.
For NER, an entity prediction is correct if and only if its type and boundaries both match with those of a gold entity.\footnote{
Following \citet{li2014incremental,miwa2016end}, we use head spans for entities in ACE, and keep the full mention boundary for the other corpora.}
For RE, a relation prediction is considered correct if its relation type and the boundaries of the two entities match with those in the gold data.
We also report the strict relation F1 (denoted RE{+}), where a relation prediction is considered correct if its relation type as well as the boundaries and types of the two entities all match with those in the gold data.
Relations are asymmetric, so the order of the two entities in a relation matters.
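A minimal sketch of these two matching criteria, assuming relations are represented as (type, head entity, tail entity) triples with each entity as (start, end, entity type):

```python
def relation_correct(pred, gold, strict=False):
    """RE: relation type and both entity boundaries must match.
    RE+ (strict=True): entity types must match as well.
    Order of head/tail matters, since relations are asymmetric."""
    (pr, ph, pt), (gr, gh, gt) = pred, gold
    if pr != gr:
        return False
    same_span = ph[:2] == gh[:2] and pt[:2] == gt[:2]
    if not strict:
        return same_span
    return same_span and ph[2] == gh[2] and pt[2] == gt[2]
```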
\subsection{Model Setup}
We tune hyperparameters based on results on the development set of ACE05 and use the same setting for other datasets.
GloVe vectors \cite{pennington2014glove} are used to initialize word embeddings.
We use ALBERT, a BERT variant, as the default pre-trained language model.
Both the pre-trained word embeddings and the language model are kept fixed without fine-tuning.
In addition, we stack three encoding layers ($L=3$) with independent parameters including the GRU cell in each layer.
For the table encoder, we use two separate MD-RNNs with the directions of ``layer$^+$row$^+$col$^+$'' and ``layer$^+$row$^-$col$^-$'' respectively.
For the sequence encoder, we use eight attention heads to attend to different representation subspaces.
We report the averaged F1 scores of 5 runs for our models.
For each run, we keep the model that achieves the highest averaged entity F1 and relation F1 on the development set, and evaluate and report its score on the test set.
Other hyperparameters can be found in Appendix \ref{sec:training}.
\begin{table}[t!]
\centering
\tabcolsep=3px
\newcommand{${\scriptstyle\triangledown}$}{${\scriptstyle\triangledown}$}
\newcommand{${\scriptstyle\blacktriangle}$}{${\scriptstyle\blacktriangle}$}
\scalebox{0.82}
{
\begin{tabular}{clccc}
\toprule
Data & Model & NER & RE & RE{+} \\ \midrule
\multirow{8}{*}{\rotatebox[origin=c]{90}{ACE04}}
& \citet{li2014incremental} ${\scriptstyle\triangledown}$ & 79.7 & 48.3 & 45.3 \\
& \citet{katiyar2017going} ${\scriptstyle\triangledown}$ & 79.6 & 49.3 & 45.7 \\
& \citet{bekoulis2018joint} ${\scriptstyle\triangledown}$ & 81.2 & - & 47.1 \\
& \citet{bekoulis2018adversarial} ${\scriptstyle\triangledown}$ & 81.6 & - & 47.5 \\
& \citet{miwa2016end} ${\scriptstyle\triangledown}$ & 81.8 & - & 48.4 \\
& \citet{li2019entity} ${\scriptstyle\triangledown}$ & 83.6 & - & 49.4 \\
& \citet{luan2019general} ${\scriptstyle\triangledown}$ & 87.4 & 59.7 & - \\ \cmidrule(l{5pt}r{5pt}){2-5}
& Ours{} ${\scriptstyle\triangledown}$ & \textbf{88.6} & \textbf{63.3} & \textbf{59.6} \\
\midrule
\multirow{10}{*}{\rotatebox[origin=c]{90}{ACE05}}
& \citet{li2014incremental} ${\scriptstyle\triangledown}$ & 80.8 & 52.1 & 49.5 \\
& \citet{miwa2016end} ${\scriptstyle\triangledown}$ & 83.4 & - & 55.6 \\
& \citet{katiyar2017going} ${\scriptstyle\triangledown}$ & 82.6 & 55.9 & 53.6 \\
& \citet{zhang2017end} ${\scriptstyle\triangledown}$ & 83.6 & - & 57.5 \\
& \citet{sun2018extracting} ${\scriptstyle\triangledown}$ & 83.6 & - & 59.6 \\
& \citet{li2019entity} ${\scriptstyle\triangledown}$ & 84.8 & - & 60.2 \\
& \citet{dixit2019span} ${\scriptstyle\triangledown}$ & 86.0 & 62.8 & - \\
& \citet{luan2019general} ${\scriptstyle\triangledown}$ & 88.4 & 63.2 & - \\
& \citet{wadden2019entity} ${\scriptstyle\triangledown}$ & 88.6 & 63.4 & - \\ \cmidrule(l{5pt}r{5pt}){2-5}
& Ours{} ${\scriptstyle\triangledown}$ & \textbf{89.5} & \textbf{67.6} & \textbf{64.3} \\
\midrule
\multirow{11}{*}{
\rotatebox[origin=c]{90}{CoNLL04}
}
& \citet{miwa2014modeling}${\scriptstyle\triangledown}$ & 80.7 & - & 61.0 \\
& \citet{bekoulis2018adversarial}${\scriptstyle\blacktriangle}$ & 83.6 & - & 62.0 \\
& \citet{bekoulis2018joint}${\scriptstyle\blacktriangle}$ & 83.9 & - & 62.0 \\
& \citet{tran2019neural}${\scriptstyle\blacktriangle}$ & 84.2 & - & 62.3 \\
& \citet{nguyen2019end}${\scriptstyle\blacktriangle}$ & 86.2 & - & 64.4 \\
& \citet{zhang2017end}${\scriptstyle\triangledown}$ & 85.6 & - & 67.8 \\
& \citet{li2019entity}${\scriptstyle\triangledown}$ & 87.8 & - & 68.9 \\
& \citet{eberts2019span}${\scriptstyle\triangledown}$ & 88.9 & - & 71.5 \\
& \citet{eberts2019span}${\scriptstyle\blacktriangle}$ & 86.3 & - & 72.9 \\ \cmidrule(l{5pt}r{5pt}){2-5}
& Ours{}${\scriptstyle\triangledown}$ & \textbf{90.1} & \textbf{73.8} & \textbf{73.6} \\
& Ours{}${\scriptstyle\blacktriangle}$ & \textbf{86.9} & \textbf{75.8} & \textbf{75.4} \\
\midrule
\multirow{7}{*}{\rotatebox[origin=c]{90}{ADE}}
& \citet{li2016joint} ${\scriptstyle\blacktriangle}$ & 79.5 & - & 63.4 \\
& \citet{li2017neural} ${\scriptstyle\blacktriangle}$ & 84.6 & - & 71.4 \\
& \citet{bekoulis2018joint} ${\scriptstyle\blacktriangle}$ & 86.4 & - & 74.6 \\
& \citet{bekoulis2018adversarial} ${\scriptstyle\blacktriangle}$ & 86.7 & - & 75.5 \\
& \citet{tran2019neural} ${\scriptstyle\blacktriangle}$ & 87.1 & - & 77.3 \\
& \citet{eberts2019span} ${\scriptstyle\blacktriangle}$ & 89.3 & - & 79.2 \\ \cmidrule(l{5pt}r{5pt}){2-5}
& Ours{} ${\scriptstyle\blacktriangle}$ & \textbf{89.7} & \textbf{80.1} & \textbf{80.1} \\
\bottomrule
\end{tabular}
}
\caption{Main results.
${\scriptstyle\triangledown}$: micro-averaged F1; ${\scriptstyle\blacktriangle}$: macro-averaged F1.
}
\label{tab:main}
\end{table}
\subsection{Comparison with Other Models}
Table \ref{tab:main} presents the comparison of our model with previous methods on four datasets.
Our NER performance improves over the previous best results by 1.2, 0.9, 1.2/0.6 and 0.4 absolute F1 points.
We observe even stronger gains on the RE task: 3.6, 4.2, 2.1/2.5 (RE{+}) and 0.9 (RE{+}) absolute F1 points, respectively.
This indicates the effectiveness of our model for jointly extracting entities and their relations.
Since our reported numbers are averaged over 5 runs, we consider our model to achieve new state-of-the-art results.
\begin{table}[t]
\centering
\scalebox{0.82}
{
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{LM}
& \multicolumn{2}{c}{$+\v x^{\ell}$} & \multicolumn{2}{c}{$+\v{x}^{\ell}$ $+\v{T}^{\ell}$} \\
\cmidrule(l{5pt}r{5pt}){2-3} \cmidrule(l{5pt}r{5pt}){4-5}
& NER & RE & NER & RE \\ \midrule
ELMo & 86.4 & 64.3 & - & - \\
BERT & 87.8 & 64.8 & 88.2 & 67.4 \\
RoBERTa & 88.9 & 66.2 & 89.3 & 67.6 \\
ALBERT & 89.4 & 66.0 & 89.5 & 67.6 \\
\bottomrule
\end{tabular}
}
\caption{Using different pre-trained language models on ACE05.
$+\v x^{\ell}$ uses the contextualized word embeddings;
$+\v T^{\ell}$ uses the attention weights.}
\label{tab:lm}
\end{table}
\subsection{Comparison of Pre-trained Models} \label{sec:pretrained}
In this section, we evaluate our method with different pre-trained language models,
including ELMo, BERT, RoBERTa and ALBERT,
with and without attention weights, to see their individual contribution to the final performance.
Table \ref{tab:lm} shows that, even with the earlier ELMo contextualized embeddings and no attention weights (ELMo $+\v x^{\ell}$), our system is still comparable to the state-of-the-art approach \cite{wadden2019entity},
which was based on BERT and achieved F1 scores of 88.6 and 63.4 for NER and RE respectively.
It is important to note that the model of \citet{wadden2019entity} was trained on the additional coreference annotations from OntoNotes \cite{weischedel2011ontonotes} before fine-tuning on ACE05.
Nevertheless, our system still achieves comparable results, showing the effectiveness of the table-sequence encoding architecture.
The overall results reported in Table \ref{tab:lm} confirm the importance of leveraging the attention weights, which bring improvements for both NER and RE tasks.
This allows the system using vanilla BERT to obtain relation extraction results no worse than those of RoBERTa and ALBERT.
\subsection{Ablation Study}
We design several additional experiments to understand the effectiveness of components in our system.
The experiments are conducted on ACE05.
We also compare different table filling settings, which are included in Appendix \ref{sec:form}.
\subsubsection{Bidirectional Interaction}
We first focus on the understanding of the necessity of modeling the bidirectional interaction between the two encoders.
Results are presented in Table \ref{tab:joint}.
``RE (gold)'' is presented so as to compare with settings that do not predict entities, where the gold entity spans are used in the evaluation.
\begin{table}[t]
\centering
\scalebox{0.82}
{
\begin{tabular}{lccc}
\toprule
Setting & NER & RE &
RE (gold)
\\ \midrule
Default & 89.5 & 67.6 & 70.4 \\
\quad w/o Relation Loss & 89.4 & - & - \\
\quad w/o Table Encoder & 88.4 & - & - \\
\quad w/o Entity Loss & - & - & 69.8 \\
\quad w/o Sequence Encoder & - & - & 69.2 \\
\quad w/o Bi-Interaction & 88.2 & 66.3 & 69.2 \\
NER on diagonal & 89.4 & 67.1 & 70.2 \\
\quad w/o Sequence Encoder & 88.6 & 67.0 & 70.2 \\
\bottomrule
\end{tabular}
}
\caption{
Ablation of the two encoders on ACE05.
Gold entity spans are given in RE (gold).
}
\label{tab:joint}
\end{table}
We first try optimizing the NER and RE objectives separately, corresponding to ``w/o Relation Loss'' and ``w/o Entity Loss''.
Compared with learning with a joint objective, both settings perform slightly worse,
which indicates that learning better representations for one task not only helps that task but also benefits the other.
Next, we investigate the individual sequence and table encoder, corresponding to ``w/o Table Encoder'' and ``w/o Sequence Encoder''.
We also try jointly training the two encoders but cut off the interaction between them, which is ``w/o Bi-Interaction''.
Since no interaction is allowed in the above three settings, the table-guided attention is changed to conventional multi-head scaled dot-product attention,
and the table encoding layer always uses the initial sequence representation $\v S_0$ to enrich the table representation.
The results of these settings are all significantly worse than the default one, which indicates the importance of the bidirectional interaction between sequence and table representation in our table-sequence encoders.
We also experiment with using the main diagonal entries of the table representation to tag entities, with results reported under ``NER on diagonal''.
This setup attempts to address NER and RE in the same encoding space, in line with the original intention of \citet{miwa2014modeling}.
By exploiting the interrelation between NER and RE, it achieves better performance compared with models without such information.
However, it is worse than our default setting.
We ascribe this to a potential incompatibility between the desired encoding spaces for entities and relations.
Finally, although this setting does not directly use the sequence representation, removing the sequence encoder still degrades NER performance, indicating that the sequence encoder helps the table encoder by better capturing the structured information within the sequence.
\subsubsection{Encoding Layers}
\begin{table}[t!]
\centering
\tabcolsep=4px
\scalebox{0.82}
{
\begin{tabular}{lcccccc}
\toprule
\multirow{2}{*}{\# Layers}
& \multicolumn{3}{c}{Shared} & \multicolumn{3}{c}{Non-shared} \\
\cmidrule(l{5pt}r{5pt}){2-4} \cmidrule(l{5pt}r{5pt}){5-7}
& \# params & NER & RE & \# params & NER & RE \\ \midrule
$L=1$ & 2.2M & 89.2 & 66.0 & 1.9M & 89.2 & 66.0 \\
$L=2$ & 2.2M & 89.5 & 67.0 & 3.2M & 89.5 & 67.1 \\
$L=3$ & 2.2M & 89.3 & 67.3 & \underline{4.5M} & \underline{89.5} & \underline{67.6} \\
$L=4$ & 2.2M & 89.7 & 67.6 & 5.7M & 89.6 & 67.7 \\
$L=5$ & 2.2M & 89.6 & 67.6 & 7.0M & 89.6 & 67.7 \\
\bottomrule
\end{tabular}
}
\caption{
The performance on ACE05 with different number of layers.
Pre-trained word embeddings and language models are not counted to the number of parameters.
The underlined ones are from our default setting.
}
\label{tab:stack}
\end{table}
Table \ref{tab:stack} shows the effect of the number of encoding layers, which is also the number of bidirectional interactions involved.
We conduct one set of experiments with shared parameters for the encoding layers and another set with independent parameters.
In general, performance increases as the number of layers $L$ grows.
Since the shared model introduces no additional parameters as $L$ increases,
its gains suggest that the model benefits from the mutual interaction inside the table-sequence encoders.
For the same $L$, the non-shared model employs more parameters than the shared one, enhancing its modeling capacity and leading to better performance.
However, when $L>3$, the non-shared model yields no significant further improvement. We believe that increasing the number of layers brings a risk of over-fitting, which limits the performance of the network. We therefore adopt the non-shared model with $L=3$ as our default setting.
\subsubsection{Settings of MD-RNN}
Table \ref{tab:encoder_new} presents the comparisons of using different dimensions and directions to learn the table representation, based on MD-RNN.
Among those settings,
``Unidirectional'' refers to an MD-RNN with direction ``layer$^+$row$^+$col$^+$'';
``Bidirectional'' uses two MD-RNNs with directions ``layer$^+$row$^+$col$^+$'' and ``layer$^+$row$^-$col$^-$'' respectively;
``Quaddirectional'' uses MD-RNNs in four directions, illustrated in Figure \ref{fig:mdrnn}.
Results improve as more directions are added, showing that richer contextual information is beneficial.
Since the bidirectional model is almost as good as the quaddirectional one, we leave the former as the default setting.
\begin{table}[t]
\centering
\scalebox{0.82}
{
\begin{tabular}{lcc}
\toprule
Setting & NER & RE \\ \midrule
Unidirectional
& 89.6 & 66.9 \\
\underline{Bidirectional}
& \underline{89.5} & \underline{67.6} \\
Quaddirectional
& 89.7 & 67.6 \\
Layer-wise only
& 89.3 & 63.9 \\
Bidirectional w/o column
& 89.5 & 67.2 \\
Bidirectional w/o row
& 89.3 & 67.4 \\
Bidirectional w/o layer
& 89.3 & 66.7 \\
\bottomrule
\end{tabular}
}
\caption{
The effect of the dimensions and directions of MD-RNNs.
Experiments are conducted on ACE05.
The underlined ones are from our default setting.
}
\label{tab:encoder_new}
\end{table}
In addition, we are also curious about the contribution of \emph{layer}, \emph{row}, and \emph{column} dimensions for MD-RNNs.
We remove the \emph{layer}, \emph{row}, and \emph{column} dimensions separately; in every case, the results fall below those of the full model.
``Layer-wise only'' removes both the \emph{row} and \emph{column} dimensions, and is worse than the others as it does not exploit the sentential context.
More experiments with more settings are presented in Appendix \ref{sec:context}.
Specifically, all unidirectional RNNs are consistently worse than others,
while bidirectional RNNs are usually on-par with quaddirectional RNNs.
Besides, we also tried to use CNNs to implement the table encoder.
However, since it is usually difficult for CNNs to learn long-range dependencies, we found the performance was worse than the RNN-based models.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{vis.pdf}
\caption{
Comparison between ground truth and selected heads of ALBERT and table-guided attention.
The sentence is randomly selected from the development set of ACE05.
}
\label{fig:vis}
\end{figure*}
\subsection{Attention Visualization} \label{sec:vis}
We visualize the table-guided attention with bertviz \cite{bertviz}\footnote{\url{https://github.com/jessevig/bertviz}} for a better understanding of how the network works.
We compare it with pre-trained Transformers (ALBERT) and human-defined ground truth, as presented in Figure \ref{fig:vis}.
Our discovery is similar to \citet{clark2019does}.
Most attention heads in the table-guided attention and ALBERT show simple patterns.
As shown in the left part of Figure \ref{fig:vis}, these patterns include attending to the word itself, the next word, the last word, and the punctuation.
The right part of Figure \ref{fig:vis} also shows task-related patterns, i.e., entities and relations.
For a relation, we connect words from the head entity to the tail entity;
For an entity, we connect every two words inside this entity mention.
We can find that our proposed table-guided attention has learned more task-related knowledge compared to ALBERT.
In fact, not only does it capture the entities and their relations that ALBERT failed to capture, but it also has higher confidence.
This indicates that our model has a stronger ability to capture complex patterns beyond simple ones.
\subsection{Probing Intermediate States}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{pcase2_non_shared}
\caption{Probing intermediate states}
\label{fig:main_pcase1}
\end{figure}
Figure \ref{fig:main_pcase1} presents an example picked from the development set of ACE05.
The prediction layer after training (a linear layer) is used as a probe to display the intermediate state of the model,
so we can interpret how the model improves both representations through stacking multiple layers and thus through the bidirectional interaction.
{Such probing is valid since we use skip connection between two adjacent encoding layers, so the encoding spaces of the outputs of different encoding layers are consistent and therefore compatible with the prediction layer.}
In Figure \ref{fig:main_pcase1}, the model made many wrong predictions in the first layer, which were gradually corrected in the next layers.
Therefore, we can see that more layers allow more interaction and thus make the model better at capturing entities or relations, especially difficult ones.
More cases are presented in Appendix \ref{sec:prob}.
\section{MD-RNN} \label{sec:mdrnn}
In this section we present the detailed implementation of MD-RNN with GRU.
Formally, given the input $X_{l,i,j}$, the cell at layer $l$, row $i$, and column $j$ calculates the gates as follows:
\begin{align}
T^{prev}_{l,i,j} &= [T_{l-1,i,j}; T_{l,i-1,j}; T_{l,i,j-1}], \in \mathbb{R}^{3H} \\
r_{l,i,j} &= \sigma([X_{l,i,j}; T^{prev}_{l,i,j}] W^r + b^r), \in \mathbb{R}^{H} \\
z_{l,i,j} &= \sigma([X_{l,i,j}; T^{prev}_{l,i,j}] W^z + b^z), \in \mathbb{R}^H \\
\tilde \lambda_{l,i,j,m} &= [X_{l,i,j}; T^{prev}_{l,i,j}] W^{\lambda}_m + b^{\lambda}_m, \in \mathbb{R}^H \\
\lambda_{l,i,j,0}, &\lambda_{l,i,j,1}, \lambda_{l,i,j,2} = \nonumber \\
& \softmax(\tilde \lambda_{l,i,j,0}, \tilde \lambda_{l,i,j,1}, \tilde \lambda_{l,i,j,2})
\end{align}
And then calculate the hidden states:
\begin{align}
\tilde T_{l,i,j} &= \tanh(X_{l,i,j} {W^x} \nonumber \\
& + r_{l,i,j} \odot (T^{prev}_{l,i,j} {W^p}) + b^h), \in \mathbb{R}^{H} \\
\tilde T^{prev}_{l,i,j} &= \lambda_{l,i,j,0} \odot T_{l-1,i,j} \nonumber \\
&+ \lambda_{l,i,j,1} \odot T_{l,i-1,j} \nonumber \\
&+ \lambda_{l,i,j,2} \odot T_{l,i,j-1}, \in \mathbb{R}^{H} \\
T_{l,i,j} &=
z_{l,i,j} \odot \tilde T_{l,i,j} \nonumber \\
& + (1-z_{l,i,j}) \odot \tilde T^{prev}_{l,i,j}, \in \mathbb{R}^{H}
\end{align}
where $W$ and $b$ are trainable parameters; note that they are shared across different rows and columns, but not necessarily across layers.
Besides, $\odot$ is the element-wise product, and $\sigma$ is the sigmoid function.
As in GRU, $r$ is the \emph{reset} gate controlling whether to forget previous hidden states, and
$z$ is the \emph{update} gate, selecting whether the hidden states are to be updated with new hidden states.
In addition, we employ a \emph{lambda} gate $\lambda$, which is
used to weight the predecessor cells before passing them through the \emph{update} gate.
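To make the recurrence concrete, the following is a minimal pure-Python sketch of a single MD-GRU cell step with hidden size $H=1$, so all vectors collapse to scalars; the weight-dictionary layout is our own simplification of the equations above:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def softmax3(a, b, c):
    m = max(a, b, c)
    ea, eb, ec = math.exp(a - m), math.exp(b - m), math.exp(c - m)
    s = ea + eb + ec
    return ea / s, eb / s, ec / s

def dot(xs, ws):
    return sum(x * w for x, w in zip(xs, ws))

def md_gru_cell(x, t_layer, t_row, t_col, w):
    """One cell update with H = 1: t_layer, t_row, t_col play the roles of
    T_{l-1,i,j}, T_{l,i-1,j} and T_{l,i,j-1} in the equations above."""
    prev = [t_layer, t_row, t_col]                 # T^prev (concatenated)
    inp = [x] + prev                               # [X; T^prev]
    r = sigmoid(dot(inp, w["r"]) + w["br"])        # reset gate
    z = sigmoid(dot(inp, w["z"]) + w["bz"])        # update gate
    lam = softmax3(dot(inp, w["l0"]) + w["bl0"],   # lambda gates
                   dot(inp, w["l1"]) + w["bl1"],
                   dot(inp, w["l2"]) + w["bl2"])
    t_cand = math.tanh(x * w["x"] + r * dot(prev, w["p"]) + w["bh"])
    t_prev = sum(l * p for l, p in zip(lam, prev)) # weighted predecessors
    return z * t_cand + (1.0 - z) * t_prev
```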
There are two slightly different ways to compute the candidate activation $\tilde T_{l,i,j}$, namely
\begin{align}
\tilde T_{l,i,j} &= \tanh(X_{l,i,j} W^x \nonumber \\
&+ r_{l,i,j} \odot (T^{prev}_{l,i,j} W^p) + b^h_l)
\end{align}
and
\begin{align}
\tilde T_{l,i,j} &= \tanh(W^x_l X_{l,i,j} \nonumber \\
&+ (r_{l,i,j} \odot T^{prev}_{l,i,j}) W^p + b^h_l)
\end{align}
In our preliminary experiments, both performed equally well; we choose the former, which saves some computation.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\linewidth]{complex}
\caption{For 2D-RNNs, cells in the same color can be computed in parallel.}
\label{fig:complex}
\end{figure}
The time complexity of the naive implementation (i.e., two for-loops in each layer) is $O(L \times N \times N)$ for a sentence of length $N$ and $L$ encoding layers.
However, antidiagonal entries can be calculated at the same time because their values do not depend on each other, shown in the same color in Figure \ref{fig:complex}.
Therefore, we can optimize it through parallelization and reduce the effective time complexity to $O(L \times N)$.
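This wavefront schedule can be illustrated as follows: grouping cells by $i+j$ yields $2N-1$ waves, and within a layer every cell in a wave depends only on cells from earlier waves (the layer-dimension input comes from the already-computed previous layer):

```python
def antidiagonal_schedule(n):
    """Group the n*n table cells into waves that can run in parallel:
    within a layer, cell (i, j) depends only on (i-1, j) and (i, j-1),
    so all cells with equal i + j are independent (2n - 1 waves total)."""
    waves = [[] for _ in range(2 * n - 1)]
    for i in range(n):
        for j in range(n):
            waves[i + j].append((i, j))
    return waves
```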
\section{Data} \label{sec:data}
\begin{table}[]
\centering
\scalebox{0.82}
{
\begin{tabular}{cccc}
\toprule
& \# sentences & \# entities & \# relations \\%& \multirow{2}{*}{metric} \\
& & (types) & (types) \\%& \\
\midrule
ACE04 & 8.7k & 22.5k (7) & 4.0k (6) \\%& micro F1 \\
ACE05 & 14.5k & 38.3k (7) & 7.1k (6) \\%& micro F1 \\
CoNLL04 & 1.4k & 5.3k (4) & 2.0k (5) \\%& micro/macro F1 \\
ADE & 4.2k & 10.5k (2) & 6.6k (1) \\%& macro F1 \\
\bottomrule
\end{tabular}
}
\caption{Dataset statistics}
\label{tab:data}
\end{table}
Table \ref{tab:data} shows the dataset statistics after pre-processing.
We keep the same pre-processing and evaluation standards used by most previous works.
The ACE04 and ACE05 corpora are collected from a variety of domains, such as newswire and online forums.
We use the same entity and relation types, data splits,
and pre-processing as \citet{li2014incremental} and \citet{miwa2016end}\footnote{
We use the preprocessing script provided by \citet{luan2019general}:
\url{https://github.com/luanyi/DyGIE/tree/master/preprocessing}}.
Specifically, they use head spans for entities rather than full mention boundaries.
The CoNLL04 dataset provides entity and relation labels.
We use the same train-test split as \citet{gupta2016table}\footnote{
\url{https://github.com/pgcool/TF-MTRNN/tree/master/data/CoNLL04}},
and we use the same 20\% train set as development set as \citet{eberts2019span}\footnote{
\url{http://lavis.cs.hs-rm.de/storage/spert/public/datasets/conll04/}}.
Both micro and macro average F1 are used in previous work, so we will specify this while comparing with other systems.
The ADE dataset is constructed from medical reports that describe the
adverse effects arising from drug use. It contains a single relation
type ``Adverse-Effect'' and the two entity types ``Adverse-Effect'' and ``Drug''.
Similar to previous work, we filter out instances containing overlapping entities, which account for only 2.8\% of the total.
Following prior work, we perform 5-fold cross-validation for ACE04 and 10-fold for ADE.
Besides, we use 15\% of the training set as the development set.
We report the average score of 5 runs for every dataset.
For each run, we use the model that achieves the best performance (averaged entity metric score and relation metric score) on the development set, and evaluate and report its score on the test set.
\section{Hyperparameters and Pre-trained Language Models} \label{sec:training}
\begin{table}[]
\centering
\scalebox{0.82}
{
\begin{tabular}{ll}
\toprule
Setting & Value \\
\midrule
batch size & 24 \\
optimizer & Adam \\
learning rate (lr) & 1e-3 \\
warm-up steps & 1000 \\
dropout rate & 0.5 \\
\# layers (L) & 3 \\
\# attention heads (A) & 8 \\
hidden dim (H) & 200 \\
token emb dim & 100 \\
char emb dim & 30 \\
gradient clipping & 5.0 \\
\bottomrule
\end{tabular}
}
\caption{Hyperparameters used in our experiments. }
\label{tab:hyperparameters}
\end{table}
The detailed hyperparameters are presented in Table \ref{tab:hyperparameters}.
For the word embeddings, we use 100-dimensional GloVe word embeddings
trained on 6B tokens\footnote{\url{https://nlp.stanford.edu/projects/glove/}} as initialization.
We disable updating the word embeddings during training.
We set the hidden size to 200, and since we use bidirectional MD-RNNs, the hidden size for each MD-RNN is 100.
We use inverse time learning rate decay: $\hat{lr} = {lr} / (1 + \text{decay\_rate} \times \text{steps} / \text{decay\_steps})$,
with decay rate 0.05 and decay steps 1000.
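The schedule is straightforward to reproduce; for example:

```python
def inverse_time_decay(lr, step, decay_rate=0.05, decay_steps=1000):
    """Inverse time decay: lr / (1 + decay_rate * steps / decay_steps)."""
    return lr / (1.0 + decay_rate * step / decay_steps)
```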
The pre-trained language models we tested are as follows:
\begin{itemize}
\item \textbf{[ELMo]} \cite{elmo}:
Character-based pre-trained language model.
We use the \texttt{large} checkpoint, with embeddings of dimension 3072.
\item \textbf{[BERT]} \cite{bert}:
Pre-trained Transformer.
We use the \texttt{bert-large-uncased} checkpoint,
with embeddings of dimension 1024 and attention weight feature of dimension 384 (24 layers $\times$ 16 heads).
\item \textbf{[RoBERTa]} \cite{roberta}:
Pre-trained Transformer.
We use the \texttt{roberta-large} checkpoint,
with embeddings of dimension 1024 and attention weight feature of dimension 384 (24 layers $\times$ 16 heads).
\item \textbf{[ALBERT]} \cite{albert}:
A lite version of BERT with shared layer parameters.
We use the \texttt{albert-xxlarge-v1} checkpoint,
with embeddings of dimension 4096 and attention weight feature of dimension 768 (12 layers $\times$ 64 heads).
\textbf{We by default use this pre-trained model.}
\end{itemize}
We use the implementation provided by \citet{Wolf2019HuggingFacesTS}\footnote{\url{https://github.com/huggingface/Transformers}}
and \citet{akbik2019flair}\footnote{\url{https://github.com/flairNLP/flair}} to generate contextualized embeddings and attention weights.
Specifically, we generate the contextualized word embedding by averaging all sub-word embeddings in the last four layers;
we generate the attention weight feature (if available) by summing all sub-word attention weights for each word,
which are then concatenated for all layers and all heads.
Both of them are fixed without fine-tuning.
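A sketch of the sub-word pooling used for the contextualized word embedding (attention weights are pooled analogously, by summing per word); the data layout here is an illustrative assumption:

```python
def word_embedding(subword_layers, word_to_subwords, word):
    """Average the vectors of all of a word's sub-words over the last four
    layers. subword_layers[l][p] is the vector of sub-word p at layer l."""
    pieces = word_to_subwords[word]
    vecs = [subword_layers[l][p]
            for l in range(len(subword_layers) - 4, len(subword_layers))
            for p in pieces]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```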
\section{Ways to Leverage the Table Context} \label{sec:context}
\begin{table}[]
\centering
\scalebox{0.82}
{
\begin{tabular}{lcc}
\toprule
Setting & NER & RE \\ \midrule
MD-RNN && \\
\quad layer$^+$\sout{row}\ \ \sout{col}
& 89.3 & 63.9 \\
\quad layer$^+$row$^+$col$^+$
& 89.6 & 66.9 \\
\quad layer$^+$row$^+$col$^-$
& 89.4 & 66.3 \\
\quad layer$^+$row$^-$col$^-$
& 89.6 & 66.9 \\
\quad layer$^+$row$^-$col$^+$
& 89.4 & 66.7 \\
\quad layer$^+$row$^+$\sout{col}\ \ ; layer$^+$row$^-$\sout{col}
& 89.5 & 67.2 \\
\quad layer$^+$\sout{row}\ \ col$^+$; layer$^+$\sout{row}\ \ col$^-$
& 89.3 & 67.4 \\
\quad \sout{layer}\ \ row$^+$col$^+$; \sout{layer}\ \ row$^-$col$^-$
& 89.3 & 66.7 \\
\quad layer$^+$row$^+$col$^+$; layer$^+$row$^-$col$^-$
& 89.5 & 67.6 \\
\quad layer$^+$row$^+$col$^-$; layer$^+$row$^-$col$^+$
& 89.7 & 67.4 \\
\quad\begin{tabular}{@{}l@{}}
layer$^+$row$^+$col$^+$; layer$^+$row$^-$col$^-$; \\[-5pt]
layer$^+$row$^+$col$^-$; layer$^+$row$^-$col$^+$
\end{tabular}
& 89.7 & 67.6 \\
CNN && \\
\quad kernel size $1 \times 1$
& 89.3 & 64.7 \\
\quad kernel size $3 \times 3$
& 89.3 & 66.2 \\
\quad kernel size $5 \times 5$
& 89.3 & 65.8 \\
\bottomrule
\end{tabular}
}
\caption{Comparisons with different methods to learn the table representation.
For MD-RNN, $D^+$, $D^-$ and $\text{\sout{$D$}}$ are indicators representing the direction, in which the hidden state flows forward, backward, or unable to flow at dimension $D$ ($D$ could be layer, row, or col).
When using multiple MD-RNNs, we separate the indicators by ``;''.
}
\label{tab:encoder}
\end{table}
Table \ref{tab:encoder} presents the comparisons of different ways to learn the table representation.
\vspace{4px}
\noindent
\textbf{Importance of context}
Setting ``layer$^+$\sout{row}\ \sout{col}'' does not exploit the table context when learning the table representation; only layer-wise operations are used.
As a result, it performs much worse than the settings that exploit the context, confirming the importance of leveraging contextual information.
\vspace{4px}
\noindent
\textbf{Context along row and column}
Neighbors along both the \emph{row} and \emph{column} dimensions are important.
Settings ``layer$^+$row$^+$\sout{col}\ ; layer$^+$row$^-$\sout{col}'' and ``layer$^+$\sout{row}\ col$^+$; layer$^+$\sout{row}\ col$^-$''
remove the \emph{column} and \emph{row} dimensions respectively; their performance is better than ``layer$^+$\sout{row}\ \sout{col}''
but worse than setting ``layer$^+$row$^+$col$^+$; layer$^+$row$^-$col$^-$''.
\vspace{4px}
\noindent
\textbf{Multiple dimensions}
In setting ``layer$^+$row$^+$col$^+$'', the cell at row $i$ and column $j$ only sees information preceding the $i$-th and $j$-th words,
which causes worse performance than the bidirectional (``layer$^+$row$^+$col$^+$; layer$^+$row$^-$col$^-$'' and ``layer$^+$row$^+$col$^-$; layer$^+$row$^-$col$^+$'') and
quaddirectional (``layer$^+$row$^+$col$^+$; layer$^+$row$^-$col$^-$; layer$^+$row$^+$col$^-$; layer$^+$row$^-$col$^+$'') settings.
However, the quaddirectional model shows no superior performance over the bidirectional ones, so we use the latter by default.
\vspace{4px}
\noindent
\textbf{Layer dimension}
Different from the \emph{row} and \emph{column} dimensions, the \emph{layer} dimension does not carry more sentential context information.
Instead, it carries information from previous layers, so the model can reason about high-level relations based on low-level dependencies captured by earlier layers,
which may help recognize syntactically and semantically complex relations.
Moreover, recurring along the \emph{layer} dimension can also be viewed as a layer-wise shortcut,
serving similarly to highway networks \cite{highway} and residual connections \cite{resnet},
and making it possible for the networks to be very deep.
Removing it (results under ``\sout{layer}\ row$^+$col$^+$; \sout{layer}\ row$^-$col$^-$'') harms performance.
\vspace{4px}
\noindent
\textbf{Other network}
Our model architecture can be adapted to other table encoders.
We try CNN to encode the table representation.
For each layer $l$, given inputs $\v X_l$, we have:
\begin{align}
\v T^0_l &= \relu(\linear([\v X_{l}; \v T_{l-1}])) \\
\v T^1_l &= \relu(\layernorm(\cnn(\v T^0_l))) \\
\v T_l &= \relu(\v T_{l-1} + \layernorm(\cnn(\v T^1_l)))
\end{align}
We also try different kernel sizes for CNN.
However, despite its advantages in training time, its performance is worse than the MD-RNN based ones.
\begin{table}[t!]
\centering
\scalebox{0.82}
{
\begin{tabular}{ccccc}
\toprule
\begin{tabular}{@{}c@{}}
entire\\table?
\end{tabular}
&
\begin{tabular}{@{}c@{}}
entire\\entity?
\end{tabular}
&
\begin{tabular}{@{}c@{}}
directed\\relation tag?
\end{tabular}
& NER & RE \\ \midrule
\ding{55}(\texttt{L}) & \ding{55} & \ding{51} & 89.2 & 65.9 \\
\ding{55}(\texttt{U}) & \ding{55} & \ding{51} & 89.2 & 65.8 \\
\ding{51} & \ding{55} & \ding{55} & 89.4 & 65.1 \\
\ding{51} & \ding{51} & \ding{55} & 89.3 & 65.8 \\
\ding{51} & \ding{55} & \ding{51} & 89.6 & 67.1 \\
\ding{51} & \ding{51} & \ding{51} & 89.5 & 67.6 \\
\bottomrule
\end{tabular}
}
\caption{Comparisons of different table filling formulations.
When not filling the entire table, \texttt{L} only fills the lower-triangular part,
and \texttt{U} fills the upper-triangular part.}
\label{tab:formulation}
\end{table}
\section{Table Filling Formulations} \label{sec:form}
Our table filling formulation does not exactly follow \citet{miwa2014modeling}.
Specifically, we fill the entire table instead of only the lower (or upper) triangular part, and
we assign relation tags to cells where entity spans intersect instead of where last words intersect.
Although the entire table can express directed relations with undirected tags,
we keep the directed relation tags to maintain the ratio of positive instances to negative instances. That is, if $y^{\text{RE}}_{i,j} = \overrightarrow{r}$ then $y^{\text{RE}}_{j,i} = \overleftarrow{r}$, and vice versa.
Table \ref{tab:formulation} ablates our formulation (last row), and compares it with the original one \cite{miwa2014modeling} (first row).
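The directed tagging convention can be illustrated with a small sketch: a forward tag fills the block of cells where the head span meets the tail span, and a backward tag fills the mirrored block. The tag strings and the helper are illustrative assumptions, not the paper's exact label set.

```python
def fill_relation_table(n, relations):
    """Fill an n x n relation tag table.

    relations: list of ((hs, he), (ts, te), r) with inclusive token spans
    for the head and tail entities and a relation label r.
    """
    table = [["O"] * n for _ in range(n)]
    for (hs, he), (ts, te), r in relations:
        for i in range(hs, he + 1):
            for j in range(ts, te + 1):
                table[i][j] = f"fwd:{r}"  # y_{i,j} = ->r
                table[j][i] = f"bwd:{r}"  # y_{j,i} = <-r
    return table
```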
\begin{figure}[t!]
\centering
\captionsetup[subfigure]{position=b}
\subcaptionbox{Correct the prediction at the 2nd layer
\label{fig:pcase0}}
{\includegraphics[height=155px]{pcase0_non_shared}}
\vspace{10px}
\subcaptionbox{Correct the prediction at the 3rd layer
\label{fig:pcase1}}
{\includegraphics[height=155px]{pcase2_non_shared}}
\vspace{10px}
\subcaptionbox{A mistake at the last layer
\label{fig:ncase0}}
{\includegraphics[height=155px]{ncase0_non_shared}}
\caption{Comparisons of predictions by different encoding layers.
We predict relations and entities with the intermediate sequence and table representation,
so that we can figure out how the model improves both representations by stacking multiple encoding layers. }
\label{fig:cases}
\vspace{-4mm}
\end{figure}
\section{Probing Intermediate States} \label{sec:prob}
Figure \ref{fig:cases} presents examples picked from the development set of ACE05.
The prediction layer (a linear layer) after training is used as a probe to display the intermediate state of the model,
so we can interpret how the model improves both representations from stacking multiple layers and thus from the bidirectional interaction.
Such probing is valid since,
for the table encoder, the encoding spaces of different cells are consistent as they are connected through a gate mechanism, including cells in different encoding layers;
for the sequence encoder, we use residual connections, so the encoding spaces of the inputs and outputs are consistent.
Therefore, they are all compatible with the prediction layer.
Empirically, the intermediate layers did give valid predictions, although they are not directly trained for prediction.
In Figure \ref{fig:pcase0}, the model makes a wrong prediction with the representation learned by the first encoding layer,
but the mistake is corrected after the second encoding layer.
This is also the most frequent case, indicating that two encoding layers are already sufficient for most situations.
For some more complicated cases, the model needs three encoding layers to determine the final decision, shown in Figure \ref{fig:pcase1}.
Nevertheless, more layers do not always push the prediction in the correct direction:
Figure \ref{fig:ncase0} shows a negative example, where the model makes a correct prediction at the second encoding layer but ultimately decides not to output one relation, resulting in a false-negative error.
We note, however, that such errors rarely occur; the more common errors are entities or relations that are not properly captured at any encoding layer.
\section{Conclusion}
In this paper, we introduce the novel {\em table-sequence encoders} architecture for the joint extraction of entities and their relations.
It learns two separate encoders rather than one: a sequence encoder and a table encoder, with explicit interactions between the two.
We also introduce a new method to effectively employ the useful information captured by pre-trained language models for such a joint learning task where a table representation is involved.
We achieve state-of-the-art F1 scores for both NER and RE across four standard datasets, confirming the effectiveness of our approach.
{\color{black}
In the future, we would like to investigate how the table representation may be applied to other tasks.
Another direction is to generalize the way in which the table and sequence interact to other types of representations.
}
\section*{Acknowledgements}
We would like to thank the anonymous reviewers for their helpful comments and Lidan Shou for his suggestions and support on this work. This work was done during the first author's remote internship with the StatNLP Group in Singapore University of Technology and Design. This research is supported by Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOE2017-T2-1-156).
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the Ministry of Education, Singapore.
\section{Introduction}
Abusive language in online communities has become a significant societal problem \cite{nobata2016abusive}, and online abusive language detection (ALD) aims to identify any type of insult, vulgarity, or profanity that debases a target or group online. It covers not only offensive language \cite{razavi2010offensive}, cyberbullying \cite{xu2012learning}, and hate speech \cite{djuric2015hate}, but also more nebulous or implicit forms of abuse. Many social media companies and researchers have utilised multiple resources, including machine learning, human reviewers and lexicon-based text analytics, to detect abusive language \cite{waseem2016you,qian2018leveraging}. However, none of them can fully resolve the ALD task because of the difficulties of moderating user content and of classifying ambiguous posts \cite{CadeMetz2019Facebook}. On the technical side, previous ALD models were developed for only a few subtasks (e.g. hate speech, racism, sexism) in a single domain (such as Twitter), and each specialised model does not transfer successfully to general ALD in different online communities.
Our research question is, ``What would be the best generic ALD model that can be used for different types of abusive language detection sub-tasks and in different online communities?"
To address this, we draw on \newcite{waseem2017understanding}, who reviewed the existing online abusive language detection literature and defined a generic abusive language typology that can encompass the targets of a wide range of abusive language subtasks in different types of domain. The typology is categorised along the following two aspects: \textbf{1) Target aspect}: The abuse can be directed towards either a) a specific individual/entity or b) a generalised group. This is an essential sociological distinction, as the latter refers to a whole category of people, like a race or gender, rather than a specific individual or organisation; \textbf{2) Content aspect}: The abusive content can be explicit or implicit. Whether directed or generalised, explicit abuse is unambiguous in its potential to be damaging, while implicit abusive language does not immediately imply abuse (through the use of sarcasm, for example). For example, consider the tweet ``F***. You are sooo sweet like other girls''. It includes all of those aspects: the directed target (``yourself''), the generalised target (``girls''), the explicit content (``F***''), and the implicit content (``You are sooo sweet'').
Inspired by this abusive language typology, we propose a new generic ALD framework, MACAS (\textbf{M}ulti-\textbf{A}spect \textbf{C}ross \textbf{A}ttention \textbf{S}uper Joint for ALD), using aspect models and a cross-attention aspect gate flow. First, we build four different types of abusive language aspect embeddings: directed target, generalised target, explicit content, and implicit content. We also propose to use a heterogeneous graph to analyse the linguistic behaviour of each author and learn word and document embeddings with graph convolutional networks (GCNs). Not every online community (e.g. news forums) allows user-to-user relationships (e.g. follower-following), so we avoid using user-community relationship information. Then, we propose a cross-attention aspect gate flow to obtain mutual enhancement between the two aspects. The gate flow contains two gates, a target gate and a content gate, and fuses their outputs. The target gate draws on the content probability distribution, utilising the semantic information of the whole input sequence along with the target source, while the content gate takes in the target aspect probability distribution as supplementary information for content-based prediction. For evaluation, we test six state-of-the-art ALD models across seven datasets focused on different aspects and collected from different domains. Our proposed model rivals or exceeds those ALD methods on all of the evaluated datasets. The contributions of the paper can be summarised as follows: 1) We perform a rigorous comparison of six state-of-the-art ALD models across seven ALD benchmark datasets, and find those models do not embrace different types of abusive language aspects in different online communities.
2) We propose a generic new ALD algorithm that enables explicit integration of multiple aspects of abusive language, and detection of generic abusive language behaviour in different domains. The proposed model rivals state-of-the-art algorithms on ALD benchmark datasets and performs best overall.
\section{Related Work}
\subsection{ALD Datasets}
\label{section:dataset}
We briefly review the seven ALD benchmark datasets (Table \ref{tab:statistics table}), which were collected from different online community sources and focused on multiple compositions. \textbf{Waseem} \cite{waseem2016hateful} is a Twitter ALD dataset regarding the specific aspects of racist and sexist. The collected tweets were labeled into \textit{Racism}, \textit{Sexism} or \textit{None}.
\textbf{HatEval} \cite{basile2019semeval} is a Twitter-based hate speech detection dataset released in SemEval-2019. It provides a general-level hate speech annotation, \textit{Hateful} or \textit{Non-hateful}, especially against immigrants and women. \textbf{OffEval} \cite{zampieri2019semeval} covers the Twitter-based offensive language detection task in SemEval-2019. It annotates as \textit{Offensive} or \textit{Not-offensive}, and includes insults, threats, and any form of untargeted profanity.
\textbf{Davids} \cite{davidson2017automated} is a Twitter-based ALD dataset, which includes three classes, \textit{Hate}, \textit{Offensive} or \textit{Neither} based on the hate speech lexicon from \textit{Hatebase.org}.
\textbf{Founta} \cite{DjouvasConstantinos2018LSCa} is a large Twitter-based ALD dataset claimed to be annotated with high accuracy based on their proposed incremental and iterative annotation method. It is annotated with four classes, \textit{Hateful}, \textit{Abusive}, \textit{Normal} or \textit{Spam}.
\textbf{FNUC} \cite{gao2017detecting} is a hate speech detection dataset, which was collected from complete Fox News discussion threads, and annotated with the general level categories \textit{Hateful} or \textit{Non-hateful}.
\textbf{StormW} \cite{de2018hate} is a Stormfront-based hate speech detection dataset with general-level labels \textit{Hate} and \textit{NoHate}. Stormfront is a supremacist forum where people promote white nationalism and antisemitism.
\begin{table*}[t]
\centering
\begin{adjustbox}{width=.8\linewidth}
\begin{tabular}{c|c|c|c}
\hline
\textbf{Dataset} & \textbf{Source} & \textbf{Size} & \textbf{Composition} \\ \hline
\textbf{Waseem}\cite{waseem2016hateful} & Twitter & 16.2k & \textit{Racism(11.97\%)}, \textit{Sexism(19.43\%)}, \textit{None(68.60\%)} \\ \hline
\textbf{HatEval}\cite{basile2019semeval} & Twitter & 13k & \textit{Hateful(42.08\%)}, \textit{Non-hateful(57.92\%)} \\ \hline
\textbf{OffEval}\cite{zampieri2019semeval} & Twitter & 13.2k & \textit{Offensive(33.23\%)}, \textit{Not-offensive(66.77\%)} \\ \hline
\textbf{Davids}\cite{davidson2017automated} & Twitter & 24.8k & \textit{Hate(5.77\%)}, \textit{Offensive(77.43\%)}, \textit{Neither(16.80\%)} \\ \hline
\textbf{Founta}\cite{DjouvasConstantinos2018LSCa}& Twitter & 99k & \textit{Abusive(27.15\%)}, \textit{Hateful(4.97\%)}, \textit{Normal(53.85\%)}, \textit{Spam(4.97\%)} \\ \hline
\textbf{FNUC}\cite{gao2017detecting} & Fox News Discussion Threads & 1.5k & \textit{Hateful(28.50\%)}, \textit{Non-hateful(71.50\%)} \\ \hline
\textbf{StormW}\cite{de2018hate} & Stormfront(forum) & 10.7k & \textit{Hate(10.93\%)}, \textit{NoHate(89.07\%)} \\ \hline
\end{tabular}
\end{adjustbox}
\setlength{\belowcaptionskip}{-10pt}
\caption{Comparison and Statistical analysis of seven benchmark datasets evaluated in this paper. The composition column represents different class aspects, and the class distribution in each dataset.}
\label{tab:statistics table}
\end{table*}
\subsection{ALD Approaches}
In the early stages, ALD was commonly addressed via hand-crafted rules and manual feature engineering. The first reported ALD work \cite{spertus1997smokey} utilised a decision tree to detect hostile messages based on heuristic rules. \newcite{yin2009detection} and \newcite{razavi2010offensive} added lexicon-based features together with semantic rules and designed linear SVM and Naïve Bayes classifiers for detecting hostile language. \newcite{djuric2015hate} first applied neural networks to ALD, using the paragraph2vec \cite{le2014distributed} representation. \newcite{nobata2016abusive} introduced a Yahoo! dataset and tested it with neural networks by applying a combination of word, character-based and syntactic features. Recently, deep learning techniques have become popular in ALD. \newcite{badjatiya2017deep} tested FastText/Glove, Convolutional Neural Networks (CNNs), and Long Short-Term Memory networks (LSTMs) for detecting hate speech. \newcite{park2017one} designed a HybridCNN (word-level and character-level) model for abusive tweet detection in both one-step and two-step style. Other works have applied bidirectional Gated Recurrent Unit (Bi-GRU) networks with Latent Topic Clustering (LTC) \cite{lee2018comparative} and a transformer-based framework \cite{bugueno2019learning}. Some works integrated user profiling into their ALD models. \newcite{qian2018leveraging} utilised a bi-LSTM to model the historical behaviour of users to generate inter-user and intra-user representations. \newcite{mishra2018author} applied node2vec \cite{grover2016node2vec} to a constructed community graph of users to derive user embeddings. However, a user profiling-based approach is only possible when the user profiles are public and when the domain provides user-community relation information.
\section{The MACAS ALD Model}
\begin{figure}[t]
\includegraphics[width=1\textwidth]{MACAS.png}
\centering
\setlength{\belowcaptionskip}{-10pt}
\caption{The conceptual architecture of our model \textit{MACAS}}
\label{fig:architecture}
\end{figure}
We propose the \textbf{M}ulti-\textbf{A}spect \textbf{C}ross \textbf{A}ttention \textbf{S}uper Joint model for ALD. It is designed as a generic ALD model that can embrace different types of abusive language aspects in different online communities. As shown in Figure \ref{fig:architecture}, MACAS can be divided into three main phases:
1) \textbf{Multi-aspect feature embedding} [Sec.\ref{MA}]. The Multi-Aspect Embedding Layer represents the understanding of multiple aspects of abusive language for detecting generic abusive language behaviours. We focus on two main aspects, target and content, and each aspect has two sub-aspects. \textbf{1) Target aspect} represents abuse directed towards either a) a specific individual/entity or b) a generalised group (e.g. gender or race). \textbf{2) Content aspect} covers a) explicit or b) implicit abuse. Explicit abuse is unambiguous in its potential to be damaging, while implicit abusive language does not immediately imply abuse (e.g. sarcasm). In addition, if the platform provides users' historical posts, we apply Graph Convolutional Networks (GCNs) to build a word-document graph embedding that represents the linguistic behaviour of users. Not every online community (e.g. news forums) has user-to-user relationships (e.g. follower-following), so we avoid using user-community relationship and community network information.
2) \textbf{Cross-Attention Gate Flow for integrating multi-aspects} [Sec.\ref{CA}] The Cross-Attention gate produces the joint integration of the target aspect and content aspect model and obtains the mutual enhancement between the two aspects. This is for producing well-integrated multi-aspects and improving the performance of generic ALD.
3) \textbf{Final Aggregation of learned ALD embeddings} [Sec.\ref{FF}] We aggregate multi-aspect embeddings and the user's linguistic behaviour embedding across the online post using convolutional neural networks, and produce the ALD using multi-layer-perceptron.
\subsection{Multi-Aspect Embedding Layer \footnote{In this paper, we use only four state-of-the-art natural language processing techniques that represent each abusive language aspect well. However, we expect that more techniques for each aspect embedding would produce better performance.}}
\label{MA}
\subsubsection{Target: Directed Abuse Embedding}
Directed abuse is abuse towards a specific individual or entity \cite{waseem2017understanding}. To model this aspect, a named entity recognition (NER) approach is used. To train the NER model, we apply stacked bi-directional LSTMs, which are one of the state-of-the-art models \cite{chiu2016named}. We extract the vector before the final $Softmax$ layer of the NER model and use it as the Directed Abuse Embedding.
\subsubsection{Target: Generalised Abuse Embedding}
Generalised abuse tends to target people belonging to a small set of categories, primarily gender. The gender debiasing embedding \cite{Kaneko:ACL:2019} is applied. The vocabulary set ($V$) is split into 4 mutually exclusive sets of words, namely, masculine ($V_m$), feminine ($V_f$), neutral ($V_n$) and stereotypical ($V_s$). Each word is represented by a vector calculated by minimising a loss function to satisfy four criteria: 1) protect the feminine information for words in $V_f$; 2) protect the masculine information for words in $V_m$; 3) protect the neutrality for words in $V_n$; 4) remove gender biases for words in $V_s$.
\subsubsection{Content: Explicit Abuse Embedding}
For the explicit abuse, whether the target is directed or generalised, explicit abuse is usually indicated by specific keywords from the homophobic slurs lexicon. We used dict2vec \cite{tissier2017dict2vec}, which aims to learn word embeddings based on natural language dictionaries. In this paper, the model is trained by Cambridge, Collins, Oxford, dictionary.com, and we add an abusive language lexicon\footnote{http://www.rsdb.org}.
This approach first defines strong pairs and weak pairs of words. If both words appear in each other's definition, the word pair is defined as a strong pair. If only one word appears in the other's definition, the word pair is defined as a weak pair. If the words do not appear in each other's definition, they are not related. Each word is represented by a vector. Strongly paired words have more similar vectors than weakly paired words, which in turn have more similar vectors than unrelated words.
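The strong/weak pair definition can be sketched directly. The toy dictionary below is illustrative, not the real Cambridge/Collins/Oxford data.

```python
def pair_type(w1, w2, definitions):
    """Classify a word pair from dictionary definitions (dict2vec-style)."""
    in1 = w2 in definitions.get(w1, set())
    in2 = w1 in definitions.get(w2, set())
    if in1 and in2:
        return "strong"   # each word appears in the other's definition
    if in1 or in2:
        return "weak"     # only one direction holds
    return "unrelated"

# Illustrative toy dictionary entries
defs = {
    "insult": {"offensive", "remark", "abuse"},
    "abuse": {"insult", "cruel", "treatment"},
    "cat": {"small", "furry", "animal"},
}
```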
\subsubsection{Content: Implicit Abuse Embedding}
Implicit abusive language does not immediately imply or denote abuse, similar to sarcasm. Here we use a hybrid of CNN and LSTM-based sarcasm detection models \cite{ghosh2016fracking}. The vector before the final $Softmax$ layer of the sarcasm detection model is the Implicit Abuse Embedding.
\subsubsection{Additional: User Linguistic Behaviour Embedding}
We model the graph by setting each comment in the training set as a document. The vocabulary is the set of all words in the documents. The corpus is the collection of all documents. The nodes of our graph are the union of the documents and the vocabulary. An edge weighted 1 exists between each node and itself. An edge exists between a document and a word if the word is in that document; the edge is weighted with the TF-IDF for the (document, word) pair within the corpus. An edge exists between two words if they have a non-negative point-wise mutual information (PMI) with a sliding window size of 20 within the corpus; the weight for the edge is the PMI for the word pair. The edge weightings are compiled into an adjacency matrix, combined with the graph's degree matrix, and passed into a 2-layer GCN trained to map each document to its user as a label. For datasets without user ids provided, we use the actual classification target as the document node label. From this network, we obtain embeddings for each node, that is, an embedding of each document and each word. The trained word embeddings $G_{e}$ are fed into transformer encoders to get linguistic behaviour outputs.
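As a minimal sketch of this graph construction (in the style of TextGCN), the following computes TF-IDF doc-word edges and non-negative PMI word-word edges. The helper name and tiny tokenised corpus are illustrative; in the full model the resulting adjacency matrix is normalised with the degree matrix and fed to a 2-layer GCN.

```python
import math
from collections import Counter
from itertools import combinations

def build_edges(docs, window=20):
    """docs: list of tokenised documents. Returns {(node_a, node_b): weight}."""
    edges = {}
    # TF-IDF weighted doc-word edges
    df = Counter(w for d in docs for w in set(d))
    for di, d in enumerate(docs):
        tf = Counter(d)
        for w, c in tf.items():
            idf = math.log(len(docs) / df[w])
            edges[(f"doc{di}", w)] = (c / len(d)) * idf
    # PMI weighted word-word edges from sliding windows
    windows = [d[i:i + window] for d in docs
               for i in range(max(1, len(d) - window + 1))]
    single = Counter(w for win in windows for w in set(win))
    pair = Counter(p for win in windows
                   for p in combinations(sorted(set(win)), 2))
    n = len(windows)
    for (a, b), c in pair.items():
        pmi = math.log((c / n) / ((single[a] / n) * (single[b] / n)))
        if pmi >= 0:  # keep non-negative PMI only
            edges[(a, b)] = pmi
    return edges
```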
\subsection{Cross-Attention Gate Flow}
\label{CA}
In the Cross-Attention Gate Flow, we first use a cross transformer encoder to refine our four types of embedding: directed abuse embedding $D$, generalised abuse embedding $G$, explicit abuse embedding $E$ and implicit abuse embedding $I$. Before putting them into the cross transformer encoders, we combine $D$ with $G$ as the target embedding $T_{e}$, and broadcast $I$ to sequence length $N$, then combine it with $E$ as the content embedding $C_{e}$. Normally, for the transformer encoder \cite{vaswani2017attention}, the attention is calculated using key ($K$ of dimension $d_{k}$), query ($Q$), and value ($V$):
\begin{gather}
Attention(Q,K,V) = softmax(\frac{QK^T}{\sqrt{d_k}})V.
\end{gather}
\begin{figure}[t]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{CB.png}
\caption{\textbf{C}AGF at the \textbf{B}eginning}
\label{fig:CB}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=0.45\linewidth]{CBM.png}
\caption{\textbf{C}AGF at the \textbf{B}eginning and the \textbf{M}iddle}
\label{fig:CBM}
\end{subfigure}
\setlength{\belowcaptionskip}{-10pt}
\caption{Variances of Cross-Attention Gate Flow}
\label{fig:cross-transformer}
\end{figure}
\noindent However, to produce the joint integration of the target aspect model and the content aspect model, we apply the cross-transformer to $T_{e}$ and $C_{e}$. As shown in Figure \ref{fig:cross-transformer}, for each transformer encoder we have $K$, $Q$, $V$ for $T_{e}$ and $C_{e}$. The $K$, $V$ of $T_{e}$ and $C_{e}$ are switched: the $K$, $V$ of $T_{e}$ go to the transformer encoder of $C_{e}$, and the $K$, $V$ of $C_{e}$ go to $T_{e}$'s encoder. The attention is then calculated by
\begin{equation}
Attention_{content} = softmax(\frac{Q_cK_t^T}{\sqrt{d_k}})V_t, \quad
Attention_{target} = softmax(\frac{Q_tK_c^T}{\sqrt{d_k}})V_c
\end{equation}
We call this cross transformer \textit{Cross at the Beginning (CB)}. Similar to the original transformer encoder, each encoder contains one or more encoder stacks, which mainly consist of two sub-layers: a multi-head attention layer and a fully connected feed-forward neural network (FNN). A residual connection followed by layer normalization is employed around each of the two sub-layers before feeding to the next sub-layer.
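The swapped key/value scheme can be sketched as single-head attention without projection matrices; dimensions and the function name are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def cross_attention(q_t, k_t, v_t, q_c, k_c, v_c):
    """Single-head CB attention: each stream attends with the other's K, V.

    All tensors: (batch, seq_len, d_k).
    """
    d_k = q_t.size(-1)
    # target stream queries the content stream's keys/values, and vice versa
    att_target = F.softmax(q_t @ k_c.transpose(-2, -1) / math.sqrt(d_k), dim=-1) @ v_c
    att_content = F.softmax(q_c @ k_t.transpose(-2, -1) / math.sqrt(d_k), dim=-1) @ v_t
    return att_target, att_content
```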
Another way to produce the joint integration occurs before the FNN layer. The output of multi-head attention is the input for the FNN layer, and then an Add \& Norm layer is applied. Normally, the output of the transformer encoder is calculated by
\begin{gather}
Output = norm(FNN(O_{MHA})+O_{MHA})
\end{gather}
The input to the FNN can also be switched between content and target, which we call \textit{Cross in the Middle (CM)}; the output of the transformer encoder is then calculated by
\begin{equation}
T_{h} = norm(FNN(C_{MHA})+T_{MHA}), \quad C_{h} = norm(FNN(T_{MHA})+C_{MHA})
\end{equation}
If the cross happens both at the beginning and in the middle, the structure is called \textit{Cross at the Beginning and in the Middle (CBM)}. The comparison of the different cross transformer structures is discussed in Section \ref{ATestIntegration}.
Both of the input embeddings $T_{e}$ and $C_{e}$ are of shape [$N$, $D_{e}$], where $D_{e}$ is the sum of the dimension of the concatenated embedding. The transformer encoder will output $T_{h}$ and $C_{h}$ in the same shape [$N$, $D_{e}$]. The hidden state of encoders $T_{h}$ from $T_{e}$ and $C_{h}$ from $C_{e}$ will be used to compute the initial abusive language probability, which is the major input of our bi-directional aspect gate flow.
On top of the Cross-Attention, we introduce the Bi-directional Aspect Gate Flow, which contains two gates: a content gate and a target gate. Denote the input sequences to our gates from the previous encoder layer as $T_h \in \mathbb{R}^{N\times D_T}$ and $C_h \in \mathbb{R}^{N\times D_C}$, where $N$ is the sequence length and $D_{T}$ and $D_{C}$ equal the dimensions of the target embedding and content embedding, respectively. In the content gate, we first flatten $T_h$ to $T_{hf} \in \mathbb{R}^{1\times (N*D_T)}$. We then pass $T_{hf}$ through a dense layer and apply the $Softmax$ function. The resultant $P_{Th}$ is a $D$-dimensional probability vector, where $D=N_{cls}$ is the number of distinct labels to classify, and $W_C\in \mathbb{R}^{D\times D_C}$ and $b_C\in \mathbb{R}^{1\times D}$ are the weight matrix and bias vector. Then we broadcast $P_{Th}$ over $N$ tokens, yielding $\hat{P_{Th}}\in \mathbb{R}^{N\times D}$. We concatenate $\hat{P_{Th}}$ with the transformer encoder output state $C_{h}$ from the content source, generating the augmented content state $O_C\in \mathbb{R}^{N\times (D+D_C)}$. We then again flatten $O_C$ and pass the output to a dense layer, producing an output matrix $P_C \in \mathbb{R}^{1\times D}$.
The procedure in the target gate is almost the same as in the content gate. Here we flatten the input sequence $C_h$, generating the flattened output $C_{hf} \in \mathbb{R}^{1\times (N*D_C)}$. We then pass the result through a dense layer and apply the $Softmax$ function. The resultant $P_{Ch}$ is also broadcast to $\hat{P_{Ch}}$ and then concatenated with the target encoder output state $T_h$, where $O_T\in \mathbb{R}^{N\times (D+D_T)}$ is the augmented target state as the output matrix. Finally, $O_T$ is also flattened and then passed to a dense layer, which produces the output matrix $P_T \in \mathbb{R}^{1\times D}$.
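A compact sketch of the content gate described above (the target gate mirrors it with the roles of $T_h$ and $C_h$ swapped); module and dimension names are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentGate(nn.Module):
    """Content gate: flatten T_h -> softmax probs P_Th -> broadcast over N
    tokens -> concatenate with C_h -> flatten -> dense -> P_C."""

    def __init__(self, n: int, d_t: int, d_c: int, n_cls: int):
        super().__init__()
        self.n = n
        self.to_prob = nn.Linear(n * d_t, n_cls)        # produces P_Th
        self.out = nn.Linear(n * (n_cls + d_c), n_cls)  # produces P_C

    def forward(self, t_h, c_h):
        # t_h: (B, N, D_T), c_h: (B, N, D_C)
        p_th = F.softmax(self.to_prob(t_h.flatten(1)), dim=-1)   # (B, D)
        p_hat = p_th.unsqueeze(1).expand(-1, self.n, -1)         # broadcast over N
        o_c = torch.cat([p_hat, c_h], dim=-1)                    # augmented state O_C
        return self.out(o_c.flatten(1))                          # (B, D)
```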
\subsection{Final Fusion}
\label{FF}
We propose a hierarchical fusion, which fuses the linguistic behaviour outputs ($P_{G}$) with the content gate output ($P_{C}$) and the target gate output ($P_{T}$) respectively, and uses two CNNs to integrate each fusion to get $C_{C}$ and $C_{T}$. We then concatenate $C_{C}$ and $C_{T}$ and flatten the result to $F_{F}$. Finally, a multi-layer perceptron (MLP) is used for the final prediction:
\begin{equation}
L_1 = ReLU(W_1 \cdot F_{F} + b_1), \quad L_2 = ReLU(W_2 \cdot L_1 + b_2), \quad Z = softmax(W_3 \cdot L_2 + b_3)
\end{equation}
Three layers are stacked. For each layer, $W_i$ and $b_i$ represent the weight matrix and bias vector, and the ReLU activation function is used for the first two layers. For the last layer, a softmax layer is used to get the probability of each class $Z$.
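The three-layer MLP above maps directly to a PyTorch sketch; the layer widths are illustrative.

```python
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    """Three-layer MLP: two ReLU layers followed by a softmax output Z."""

    def __init__(self, d_in: int, d_hidden: int, n_cls: int):
        super().__init__()
        self.l1 = nn.Linear(d_in, d_hidden)
        self.l2 = nn.Linear(d_hidden, d_hidden)
        self.l3 = nn.Linear(d_hidden, n_cls)

    def forward(self, f_f):
        l1 = torch.relu(self.l1(f_f))            # L_1 = ReLU(W_1 F_F + b_1)
        l2 = torch.relu(self.l2(l1))             # L_2 = ReLU(W_2 L_1 + b_2)
        return torch.softmax(self.l3(l2), dim=-1)  # Z = softmax(W_3 L_2 + b_3)
```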
\section{Evaluation Setting}
We conducted experiments on all seven datasets with and without the GCN, as well as with the three different types of cross-transformer variants, which are discussed in Section \ref{ATestIntegration}. The GCN embedding dimension for the linguistic behaviour graph is $D_{LBG}=200$. For the transformer encoder configuration, we used dropout rate = 0.5, encoder number = 2, head number = 3, and hidden dimension = 1296. The models are trained with batch size = 16; the learning rate (lr) and number of epochs differ per dataset: \textbf{Waseem}: lr = 4e-4, epochs = 6, \textbf{HatEval}: lr = 1e-7, epochs = 6, \textbf{OffEval}: lr = 1e-7, epochs = 13, \textbf{Davids}: lr = 4e-4, epochs = 6, \textbf{Founta}: lr = 1e-5, epochs = 8, \textbf{FNUC}: lr = 1e-6, epochs = 13, \textbf{StormW}: lr = 1e-6, epochs = 7. The hyper-parameters are decided by splitting the training set into 90:10 training and validation sets.
The following are the models evaluated in our experiments. \textbf{TF-IDF features and SVM Classifier (TIS)}: TIS \cite{yin2009detection} applies TF-IDF with an SVM classifier to detect abusive language. First, TF-IDF weights of words are generated, and a Support Vector Machine with a radial basis function (RBF) kernel is trained to classify different kinds of abusive language. \textbf{One-Two Steps Hybrid CNN (OTH)}: OTH \cite{park2017one} used a HybridCNN (word-level and character-level) model and applied it to abusive tweet detection. We applied Chars2vec as a character embedding and Glove as a word embedding. Convolutional layers with kernel sizes 256, 128, and 64 are stacked, and the model is trained using learning rate 4e-5 with 10 epochs. \textbf{Multi-Features with RNN (MFR)}: MFR \cite{mehdad2016characters} used a hybrid character-based and word-based Recurrent Neural Network (RNN) model to detect abusive language. After the Chars2vec and Glove embeddings, there is a vanilla stacked RNN. Three RNN layers with hidden dimensions 128, 128, and 64 are stacked, and the model is trained using learning rate 4e-6 with 10 epochs. \textbf{Two-step Word-level LSTM (TWL)}: TWL \cite{badjatiya2017deep} produced LSTM-derived representations with a Gradient Boosted Decision Trees classifier. The model applies an LSTM to Glove embeddings, and the resulting representations are fed into the classifier. Three LSTM layers with hidden dimensions 128, 128, and 64 are stacked, and the model is trained using learning rate 4e-6 with 10 epochs. \textbf{Latent Topic Clustering with Bi-GRU (LTC)}: LTC \cite{lee2018comparative} applies a Bi-GRU with latent topic clustering, which extracts the topic information from the aggregated hidden states of the two directions of the Bi-GRU. Three Bi-GRU layers with hidden dimensions 128, 128, and 64 are stacked, and the model is trained using learning rate 4e-5 with 10 epochs.
\textbf{Character-based Transformer (CBT)}: CBT \cite{bugueno2019learning} uses a transformer-based classifier with Chars2vec embeddings. Transformer encoders with hidden dimension 400, learning rate 4e-6 with 3 epochs are used.
\section{Experiments and Results}
\subsection{Performance Comparison}
In this part, we compare our model with the six baseline models over all seven datasets discussed in Sec \ref{section:dataset}. These baseline models are constructed with various word representations as well as different neural networks or classifiers. Table \ref{evaluation1table} presents the weighted average F1 performance of each baseline model and our model on each dataset. Our model outperforms the baseline models on all seven datasets. Applying multiple aspect embeddings enables our model to process the texts from multi-perspective views, and the Cross-Attention gate flow makes it possible to obtain mutual enhancement between the two different aspects. Although some of the baseline models, such as OTH and MFR, also combine two embedding approaches (Chars2vec and Glove) to get more information, they still only consider the general information of the texts rather than extracting information in a targeted fashion from various aspects. For these reasons our model achieves performance above the baseline models.
As well as comparing our model with the baseline models, we also make some observations from comparing the six baseline models amongst themselves. Firstly, OTH and MFR use the combined embeddings of Chars2vec and Glove, which gives more information, so they achieve relatively better weighted average F1 scores than most other baseline models, which use a single embedding method. Secondly, the results of TWL and LTC indicate that the bi-directional recurrent neural network leads to better performance than the simple forward recurrent neural network; not only the future states but also the past ones affect the prediction results. Thirdly, although we may not consider TF-IDF with SVM to be as strong as Chars2vec or Glove with deep neural networks, the TIS baseline never obtains the worst weighted F1 score on any of the seven datasets, and it even outperforms the other baseline models on \textbf{Waseem} and \textbf{Founta}. For both datasets, there might be particular words that are highly indicative of the class, so TF-IDF can achieve good results on these two datasets.
\begin{table*}[t]
\fontsize{6}{7.2}\selectfont
\centering
\begin{adjustbox}{width=.7\textwidth}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\textbf{Dataset/Algorithm} & \textbf{TIS} & \textbf{OTH} & \textbf{MFR} & \textbf{TWL} & \textbf{LTC} & \textbf{CBT} & \textbf{Ours}\\ \hline
Waseem\cite{waseem2016hateful} & \cellcolor{green!30}83.56 & 79.10 & 62.39 & 73.88 & 79.94 & 79.11 & \cellcolor{green}86.00\\
HatEval\cite{basile2019semeval} & 41.63 & 40.48 & \cellcolor{green!30}53.17 & 52.03 & 53.14 & 49.25 & \cellcolor{green}53.97\\
OffEval\cite{zampieri2019semeval} & 75.37 & 76.84 & 55.59 & 67.15 & \cellcolor{green!30}77.90 & 58.71 & \cellcolor{green}78.80\\
Davids\cite{davidson2017automated} & 88.11 & 88.37 & 79.44 & 83.74 & 87.56 & \cellcolor{green!30}88.94 & \cellcolor{green}90.34\\
Founta\cite{DjouvasConstantinos2018LSCa} & \cellcolor{green!30}79.58 & 78.59 & 73.64 & 75.23 & 79.49 & 72.04 & \cellcolor{green}80.36\\
FNUC\cite{gao2017detecting} & 68.92 & 64.51 & \cellcolor{green!30}70.71 & 65.67 & 69.78 & 67.07 & \cellcolor{green}73.20\\
StormW\cite{de2018hate} & 82.73 & \cellcolor{green!30}85.48 & 82.06 & 81.91 & 83.83 & 82.90 & \cellcolor{green}85.86\\
\hline
\end{tabular}
\end{adjustbox}
\setlength{\belowcaptionskip}{-10pt}
\caption{Overall weighted F1 results from seven ALD models (including \textit{MACAS}) evaluated across all seven benchmark datasets. We highlight the top two models for each dataset, using darker colours for higher performance. For all benchmark datasets, we train models on the train split and report results on the test split.}
\label{evaluation1table}
\end{table*}
\begin{table*}[t]
\fontsize{7}{8.4}\selectfont
\centering
\begin{tabularx}{\textwidth}{c|b|c|c|s}
\hline
\textbf{Dataset} & \textbf{Abusive Example} & \textbf{Actual C.} & \textbf{Correctly C.} & \textbf{Incorrectly C.}\\ \hline
Waseem & Really \textcolor{blue}{\textbf{bitch}} really? Get on the runway and \textcolor{blue}{\textbf{back in the kitchen}}.
& Sexism & TIS, Ours & OTH, MFR, TWL, LTC, CBT
\\ \hline
Waseem & thegeek\_\textcolor{blue}{\textbf{chick}} Please answer... & Sexism & TIS, Ours & OTH, MFR, TWL, LTC, CBT \\ \hline
Waseem & RT randomfox: \#ManSpreading \textcolor{blue}{\textbf{\#FeminismIsAwful \#WomenAgainstFeminism}} & Sexism & TIS, Ours & OTH, MFR, TWL, LTC, CBT \\ \hline
Waseem & RT @asredasmyhair: \textcolor{blue}{\textbf{Fems, for the love of God}}, please stop propagating the wage gap myth. \#FemFreeFriday & Sexism & TIS, Ours & OTH, MFR, TWL, LTC, CBT \\ \hline
FNUC & \textcolor{blue}{\textbf{BLM is a Hate group}} Given Birth by Obama's comments That put a target on the backs of police officers
& Hateful & Ours & TIS, OTH, MFR, TWL, LTC, CBT
\\ \hline
StormW & i am a \textcolor{blue}{\textbf{white nationalist}} of a christian faith but still am a \textcolor{blue}{\textbf{white nationalist}} for racial survival the anti racist christians are the true chirstians enemy & Hate & OTH, Ours & TIS, MFR, TWL, LTC, CBT
\\ \hline
\end{tabularx}
\setlength{\belowcaptionskip}{-10pt}
\caption{Prediction examples from our model \textit{MACAS} compared to the baselines. Actual C.: actual class; Correctly C.: models that classified the example correctly; Incorrectly C.: models that classified it incorrectly.}
\label{evaluation1table2}
\end{table*}
Next, we analyse examples to show how different models perform on different datasets; the results are shown in Table \ref{evaluation1table2}. The first example is from \textbf{Waseem}: “Really bitch really? Get on the runway and back in the kitchen.”, which should be predicted as Sexism. It is quite explicit in that it contains the word “bitch”, which lets TIS predict Sexism easily, since TF-IDF focuses on word occurrence. In addition, “back in the kitchen” is implicit sexism, implying that women should stay in the kitchen. A similar pattern appears in the second instance, “thegeek\_chick Please answer...”, which explicitly mentions the word “chick”. The third and fourth samples contain abusive language or hate speech about feminism. The third explicitly states the words “Feminism” and “Awful”, and both TIS and our model detect the abuse through explicit hate speech aspect identification. Our model, which considers both the explicit and implicit aspects, predicts the sentence as Sexism easily. Another example is from \textbf{FNUC}: “BLM is a Hate group Given Birth by Obama's comments That put a target on the backs of police officers”, which should be labelled Hateful. This comment insults the “Black Lives Matter” movement by calling it a hate group. Normally, describing something as a hate group is not hate speech, but in this case calling BLM a hate group is racism. This is not easy for the baseline models to spot, and only our model predicts it correctly. For the last example, from \textbf{StormW}, “i am a white nationalist of a christian faith but still am a white nationalist for racial survival the anti racist christians are the true chirstians enemy”, the user describes himself as a “white nationalist”, which is a form of hate speech, and OTH predicts this sentence as Hate because the CNN used in OTH can capture phrase-level information, here the phrase “white nationalist”. Our model also predicts this sentence correctly, since it is generally explicit hate speech.
\subsection{Ablation Testing - Cross-attention gate flow}
\label{ATestIntegration}
In this part, we test three different structures of cross-transformer encoders: 1) cross-transformer at the beginning of the transformer encoder \textbf{(CB)}: exchanging the content's and target's K and V at the beginning of the transformer encoders, as in Figure \ref{fig:cross-transformer}; 2) cross-transformer in the middle of the transformer encoder \textbf{(CM)}: exchanging the content's and target's inputs to the feed-forward layer, in the middle of the transformer encoders; and 3) cross-transformer at both places \textbf{(CBM)}: the combination of CB and CM. Owing to the poor performance of CM, Table \ref{evaluation2table} only reports results on the seven datasets for the CB and CBM structures. In addition, to determine whether and how the GCN improves our model, we compare different structures: 1) the model without the GCN; and 2) the model with the GCN using hierarchical fusion, repeated one or three times. We show one and three repetitions because, on all datasets, our model achieves its best performance with one or three repeated fusions when the GCN is used. Two conclusions are drawn from the results of CB and CBM:
\begin{table*}[t]
\fontsize{6}{7.2}\selectfont
\centering
\begin{adjustbox}{width=.7\textwidth}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\textbf{Methods} & \textbf{Waseem} & \textbf{HatEval} & \textbf{OffEval} & \textbf{Davids} & \textbf{Founta} & \textbf{FNUC} & \textbf{StormW}\\ \hline
CB, no G & 82.35 & \cellcolor{green}53.97 & \cellcolor{green}78.80 & \cellcolor{green}90.34 & \cellcolor{green}80.36 & 65.31 & 82.90\\
CB, G, N=1 & \cellcolor{green}86.00 & 51.88 & 75.06 & 87.36 & 76.90 & \cellcolor{green}73.20 & 84.52\\
CB, G, N=3 & 83.76 & 42.71 & 75.03 & 88.24 & 75.42 & 68.39 & \cellcolor{green}85.86\\
CBM, no G & 81.53 & \cellcolor{green!30}53.28 & \cellcolor{green!30}77.37 & \cellcolor{green!30}90.25 & \cellcolor{green!30}80.28 & 65.67 & 84.14\\
CBM, G, N=1 & \cellcolor{green!30}85.22 & 39.91 & 72.60 & 90.12 & 76.06 & \cellcolor{green!30}68.92 & 85.09\\
CBM, G, N=3 & 82.77 & 42.86 & 75.10 & 90.11 & 77.03 & 68.16 & \cellcolor{green!30}85.12\\ \hline
\end{tabular}
\end{adjustbox}
\setlength{\belowcaptionskip}{-10pt}
\caption{Abusive language detection results across seven benchmark datasets for \textit{MACAS} with two cross-attention aspect gate flow mechanisms and graph embedding. We highlight the top two settings for each dataset; the darker the colour, the better the performance. The comparison covers different numbers (\textit{N}) of final fusion layers, N=1 or N=3. (CB: cross-attention at the beginning; CBM: cross-attention at the beginning and the middle; G: the user linguistic behaviour graph embedding)}
\label{evaluation2table}
\end{table*}
Firstly, the best model on each dataset is a CB model, and the second best is the CBM model with the same GCN structure; in most cases, CB outperforms CBM when they share the same GCN structure. CB therefore performs better overall, and we use this structure in our final model. Given that CM is the worst, we conclude that crossing in the middle of the transformer encoder lowers model performance. Exchanging the content's and target's K and V is important because it allows target aspects to query the content aspects and vice versa, whereas exchanging values before the feed-forward layer only yields a different add-and-norm step and does not usefully increase the interaction between content and target aspects.
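The CB-style K/V exchange can be sketched with simplified single-head attention. The dimensions, random weights, and shared projections below are illustrative assumptions for exposition, not the actual MACAS implementation:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention (single head, no mask).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
d = 8                                # hidden size (illustrative)
content = rng.normal(size=(5, d))    # content-aspect token states
target = rng.normal(size=(5, d))     # target-aspect token states

# Hypothetical projection weights (shared across streams for brevity).
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Qc, Kc, Vc = content @ Wq, content @ Wk, content @ Wv
Qt, Kt, Vt = target @ Wq, target @ Wk, target @ Wv

# CB-style exchange: each stream keeps its own queries but attends
# over the OTHER stream's keys and values.
content_out = attention(Qc, Kt, Vt)  # content queries the target aspect
target_out = attention(Qt, Kc, Vc)   # target queries the content aspect
print(content_out.shape, target_out.shape)
```

The sketch makes the asymmetry with CM visible: swapping K and V changes *what* each stream attends to, whereas swapping inputs to the feed-forward layer only changes the residual path after attention has already been computed.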
Secondly, our model performs better with the GCN when the dataset provides user IDs. Not all datasets provide user IDs, and as mentioned in Sec \ref{MA}, the user linguistic behaviour embedding is trained using the user ID as the target; for datasets without user IDs, the actual abuse labels are used as the training target instead. Comparing the results, \textbf{Waseem}, \textbf{StormW}, and \textbf{FNUC}, which provide user IDs, perform better with the GCN, while the other four datasets, which do not provide user IDs, perform better without it. Therefore, for datasets with user IDs, the user linguistic behaviour embedding produced by the GCN improves the performance of our model; for datasets without user IDs, the structure without the GCN is recommended.
\subsection{Ablation Testing - Multi-aspect embedding}
\begin{table*}[t]
\fontsize{6}{7.2}\selectfont
\centering
\begin{adjustbox}{width=.7\textwidth}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\textbf{Combinations} & \textbf{Waseem} & \textbf{HatEval} & \textbf{OffEval} & \textbf{Davids} & \textbf{Founta} & \textbf{FNUC} & \textbf{StormW}\\
\hline
D + E & 80.16 & 49.94 & 75.81 & 89.58 & 80.02 & 66.03 & 82.23\\
D + I & \cellcolor{red!25}61.93 & \cellcolor{red!25}47.04 & \cellcolor{red!25}54.63 & \cellcolor{red!25}68.27 & \cellcolor{red!25}67.88 & 64.51 & \cellcolor{red!25}81.91\\
D + E + I & 80.57 & 47.11 & 69.95 & 87.11 & 79.80 & \cellcolor{red!25}64.03 & \cellcolor{red!25}81.91\\
G + E & 79.67 & 52.78 & 76.95 & 86.92 & 79.39 & 65.56 & \cellcolor{green!30}84.85\\
G + I & 80.10 & 53.63 & 57.38 & 87.52 & 79.35 & 64.24 & \cellcolor{red!25}81.91\\
G + E + I & 79.12 & 48.71 & 72.17 & 89.19 & 79.14 & 65.96 & 82.04\\
D + G + E & 78.63 & 53.60 & 73.78 & 88.51 & 80.23 & \cellcolor{green}\textbf{68.28} & 82.44\\
D + G + I & 79.74 & 52.65 & 75.12 & 89.76 & 79.76 & 65.31 & 83.44\\
D + G + E + I & \cellcolor{green}\textbf{82.35} & \cellcolor{green}\textbf{53.97} & \cellcolor{green}\textbf{78.80} & \cellcolor{green}\textbf{90.34} & \cellcolor{green}\textbf{81.57} & 65.31 & \cellcolor{green}\textbf{83.93}\\ \hline
\end{tabular}
\end{adjustbox}
\setlength{\belowcaptionskip}{-10pt}
\caption{Ablation studies comparing different integrations of multi-aspect embeddings for the generic ALD model. In the proposed model, \textit{MACAS}, we introduce four aspect embeddings: directed abuse (D), generalised abuse (G), explicit abuse (E), and implicit abuse (I). Directed and generalised abuse belong to the target aspect group, while explicit and implicit abuse belong to the content aspect group. The ablation tests combinations of aspect embeddings, each drawing at least one embedding from each higher-level aspect group. The highest performance is highlighted in green and the lowest in red.}
\label{evaluation3table}
\end{table*}
To check how the aspect embeddings contribute to the model, we conduct an ablation test on different combinations of embeddings across all seven datasets, using the CB model without the GCN for prediction. Table \ref{evaluation3table} presents the weighted average F1 scores for nine different combinations of the four aspect embeddings: directed abuse ($D$), generalised abuse ($G$), explicit abuse ($E$), and implicit abuse ($I$). Each combination includes at least one embedding from the target aspect group and one from the content aspect group.
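The constraint that every combination draws at least one embedding from each aspect group yields exactly the nine rows of Table \ref{evaluation3table}. A small sketch, using only the single-letter aspect labels defined above:

```python
from itertools import combinations

def nonempty_subsets(group):
    # All non-empty subsets of a small aspect group.
    return [set(c) for r in range(1, len(group) + 1)
            for c in combinations(group, r)]

target_group = ["D", "G"]    # directed, generalised abuse
content_group = ["E", "I"]   # explicit, implicit abuse

# One non-empty subset from each group: 3 x 3 = 9 combinations.
combos = [t | c
          for t in nonempty_subsets(target_group)
          for c in nonempty_subsets(content_group)]
print(len(combos))  # prints 9
```

This is why pure within-group combinations such as $D+G$ or $E+I$ do not appear in the table: they would leave one aspect group unrepresented.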
For \textbf{Waseem}, the $D+G+E+I$ combination achieves the best performance, with a weighted average F1 score of 82.35, and most other combinations perform only slightly worse. In contrast, $D+I$ gets the worst weighted F1 score of 61.93. The reason $D+I$ is so much worse than the other combinations may lie in two factors: 1) in this dataset, abusive language is generally explicit rather than implicitly aimed at a specific target; and 2) even humans cannot easily distinguish directed abuse expressed implicitly, so it is very difficult for annotators to label such cases correctly. Moreover, the $D+G+E+I$ combination outperforms the others because it takes all aspects into consideration. Similar results occur on the other Twitter datasets \textbf{Davids}, \textbf{HatEval}, \textbf{OffEval}, and \textbf{Founta}, where $D+G+E+I$ achieves the best scores while $D+I$ is much worse. For \textbf{FNUC}, owing to the small size of the dataset and its imbalanced labels, not all combinations predict well; that $D+G+E$ performs best implies the dataset does not contain many implicit abuse samples. For \textbf{StormW}, $D+G+E+I$ performs best, and $G+E$ also performs well, because this dataset is collected from a racism forum where most hate speech is generally abusive in an explicit way. Based on this analysis of the different embedding combinations, we conclude that the most useful embeddings vary across datasets, but combining all four is consistently a strong choice.
Although we select four specific embeddings to represent the four aspects in our model, other kinds of embeddings could be used instead, as long as they represent the corresponding aspects.
\section{Conclusion}
Abusive language detection is an essential but challenging task, and it is almost impossible for a single approach to encompass all the different abusive language tasks across domains. Our evaluation also shows that most state-of-the-art ALD algorithms do not generalise to different types of abusive language problems or datasets. In this paper, we proposed a new generic abusive language model, called MACAS, which applies multi-aspect embeddings to represent generalised characteristics of the domain and introduces a cross-attention gate flow to achieve better performance through mutual enhancement between the target aspect and the content aspect. The results indicate that our framework is effective at capturing abusive language aspects in different domains. Compared to other ALD models, our model works well for general abusive language detection, and we hope that MACAS provides some insight into the future direction of generic abusive language detection.
\bibliographystyle{coling}